\begin{document} \title{A note on the Brown--Erd\H os--S\'os conjecture in groups} \begin{abstract} We show that a dense subset of a sufficiently large group multiplication table contains either a large part of the addition table of the integers modulo some $k$, or the entire multiplication table of a certain large abelian group, as a subgrid. As a consequence, we show that triple systems coming from a finite group contain configurations with $t$ triples spanning $\mathcal{O}(\sqrt{t})$ vertices, which is best possible up to the implied constant. We confirm that for all $t$ we can find a collection of $t$ triples spanning at most $t+3$ vertices, resolving the Brown--Erd\H os--S\'os conjecture in this context. The proof applies well-known arithmetic results, including the multidimensional versions of Szemer\'edi's theorem and the density Hales--Jewett theorem. This result was discovered simultaneously and independently by Nenadov, Sudakov and Tyomkyn~\cite{NST}, and a weaker result avoiding the arithmetic machinery was obtained independently by Wong~\cite{W}. \end{abstract} \section{Introduction} A central open problem in extremal combinatorics is the Brown--Erd\H os--S\'os conjecture~\cite{BES}. We say that a subgraph $H'$ of a hypergraph $H$ is an $(r,s)$-configuration if $|E(H')|=s$ and $|V(H')|\le r$. The Brown--Erd\H os--S\'os conjecture states that, for any fixed positive integer $t\ge 3$, any 3-uniform hypergraph $H$ on $n$ vertices which does not contain a $(t+3,t)$-configuration has at most $o(n^2)$ edges. The number $t+3$ cannot be decreased, since random constructions can achieve $\Omega(n^2)$ edges while avoiding all $(t+2,t)$-configurations~\cite{BES}. The conjecture can be generalised to higher uniformity, but we shall focus on the 3-uniform case in this note. Since its formulation in 1973 there has been a great deal of work on this problem. 
Ruzsa and Szemer\'edi~\cite{RS} resolved the first non-trivial case ($t=3$), but the conjecture remains open for all $t>3$. The strongest result to date is due to S\'ark\"ozy and Selkow~\cite{SS}, who showed that any 3-uniform hypergraph which does not contain a $(t+2+\lfloor \log_2 t\rfloor,t)$-configuration has at most $o(n^2)$ edges. When tackling the Brown--Erd\H os--S\'os conjecture, we may additionally assume that the hypergraph $H$ is linear (as noted in~\cite{S}, for example). It is also clear that we may assume that $H$ is tripartite, since, given a 3-graph $H$, we may obtain a tripartite 3-graph $H'$ by taking three copies of the vertex set of $H$ and placing edges between these parts corresponding to the edges of $H$. Given a linear, tripartite, 3-uniform hypergraph $H$ on $n+n+n$ vertices, we can associate with it a partially labelled $n\times n$ grid by labelling position $(a,b)$ with label $c$ if $(a,b,c)\in E(H)$. Thus the Brown--Erd\H os--S\'os conjecture can be formulated in terms of a quasigroup -- this is noted in~\cite{S} and~\cite{SW}, for example. \begin{conjecture}[Brown--Erd\H os--S\'os]\label{BES} Fix $t\in\mathbb{Z}^+$ and $\epsilon>0$. Then there exists $N=N(t,\epsilon)$ such that for any quasigroup $G$ of order $n>N$ and any subset $A$ of the multiplication table of $G$ of density at least $\epsilon$, we can find a $(t+3,t)$-configuration in $A$; that is to say, a set of $t$ triples in $A$ spanning at most $t+3$ vertices (i.e.\ rows, columns or labels). \end{conjecture} In light of this formulation, it is natural to ask the same question when $G$ is in fact a group, as the additional structure might provide greater local density than can be found in random constructions. \begin{conjecture}[Brown--Erd\H os--S\'os for groups]\label{BESG} Fix $t\in\mathbb{Z}^+$ and $\epsilon>0$. 
Then for any sufficiently large group $G$ and any subset $A$ of the multiplication table of $G$ of density at least $\epsilon$, we can find a $(t+3,t)$-configuration in $A$. \end{conjecture} Since the Brown--Erd\H os--S\'os conjecture is resolved for $t\le 3$, the first interesting case of Conjecture~\ref{BESG} is $t=4$. In 2015, Solymosi~\cite{S} resolved this case, showing that Conjecture~\ref{BESG} holds for $t=4$. Recently, Solymosi and Wong~\cite{SW} showed that much more is true, proving that the Brown--Erd\H os--S\'os threshold of $t+3$ vertices can in fact be surpassed in the group setting. In particular, they prove that dense subsets of sufficiently large group multiplication tables contain sets of $t$ triples spanning asymptotically only $3t/4$ vertices. Since their result concentrates on the case of large $t$, they do not match Conjecture~\ref{BESG} for small $t$, but they prove that it holds for infinitely many $t$. Given that the Brown--Erd\H os--S\'os threshold can be surpassed in the group setting, one may ask what the correct behaviour should be in this case. Since $A$ corresponds to a linear hypergraph, we cannot find sets of $t$ triples in $A$ spanning fewer than $\sqrt{t}$ vertices, but can we approach this lower bound? \begin{question}\label{q1} Let $t$ be a fixed positive integer. What is the smallest number $F(t)$ such that we are guaranteed to find an $(F(t),t)$-configuration in a dense subset of a sufficiently large group multiplication table? \end{question} In this note we answer this question up to a constant factor, and resolve Conjecture~\ref{BESG}. 
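To illustrate the order of magnitude in Question~\ref{q1}: the full addition table of $\mathbb{Z}_n$ already consists of $t=n^2$ triples on only $3n=3\sqrt{t}$ vertices, while linearity forces any $t$ triples to span $\Omega(\sqrt{t})$ vertices. The following short Python sketch (our own sanity check, not part of the argument) verifies both counts for a sample $n$:

```python
from itertools import combinations

n = 10  # sample size, chosen only for illustration
triples = [(a, b, (a + b) % n) for a in range(n) for b in range(n)]
t = len(triples)  # n^2 faces
v = 3 * n         # n row, n column and n label vertices

# The full table achieves t faces on 3*sqrt(t) vertices.
assert t == n * n and v * v == 9 * t

# Linearity: every pair of (tagged) vertices lies in at most one triple,
# so the 3t pairs arising from the triples are distinct and fit in C(v, 2).
vertex_pairs = {frozenset(q)
                for a, b, c in triples
                for q in combinations([('row', a), ('col', b), ('lab', c)], 2)}
assert len(vertex_pairs) == 3 * t <= v * (v - 1) // 2
```

Counting pairs this way gives $3t\le\binom{v}{2}$, i.e.\ $v=\Omega(\sqrt{t})$, for any $t$ triples in any linear triple system.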
By applying machinery from arithmetic combinatorics, including the multidimensional Szemer\'edi theorem and a multidimensional variant of the density Hales--Jewett theorem, we prove that any dense subset of a sufficiently large group multiplication table contains a large subgrid belonging to one of two families: either the subgrid matches part of the multiplication table of a cyclic group, or the subgrid matches the entire multiplication table of $\mathbb{Z}_p^m$ for some small prime $p$ and large $m$. A precise statement appears in Theorem~\ref{thm} following some notation. This reduces Question~\ref{q1} to a discrete optimisation problem, in which we must find configurations with $t$ edges spanning few vertices in each of the two cases resulting from our main theorem. We tackle this optimisation problem in Section~\ref{opt}, showing that $F(t)=\mathcal{O}(\sqrt{t})$ and resolving Conjecture~\ref{BESG} for all $t$. \section{Notation and Statements}\label{stmt} We write $\mathbb{Z}_n$ for the group of integers modulo $n$ under addition and we write $[k]$ for the set $\{0,1,\dots,k-1\}$. We begin with some definitions. \begin{definition} By the \emph{multiplication table} of a group $G=(G,\circ)$ we mean the collection of triples $(a,b,a\circ b)$ for $a,b\in G$. The \emph{vertex set} will be given by three disjoint copies of $G$ called the \emph{row vertices}, \emph{column vertices} and \emph{label vertices}. We shall refer to the triples as the edges or \emph{faces} of the corresponding tripartite 3-uniform hypergraph. Typically, we will represent this as a labelled grid, with entry $(a,b)$ given label $a\circ b$. In the case that $G=(G,+)$ is an abelian group, we will usually call the multiplication table an \emph{addition table}. \end{definition} \begin{definition} By a \emph{subgrid} of a labelled grid, we mean the labelled grid contained in the intersection of some subset of the rows and columns. 
\end{definition} \begin{definition} We say that a labelled grid $A$ is \emph{isomorphic} to another labelled grid $B$ if we can biject the row sets, column sets and label sets of $A$ and $B$ in such a way that the resulting map is a graph isomorphism between the corresponding 3-graphs. \end{definition} Using this notation we reformulate Question~\ref{q1} in a precise way. \begin{question}\label{q} Let $t$ be a fixed positive integer and $\epsilon>0$. Let $F(t)$ be minimal such that, given any subset $A$ of density at least $\epsilon$ of a sufficiently large (in terms of $t$ and $\epsilon$) group multiplication table, we may find an $(F(t),t)$-configuration in $A$. How does $F(t)$ grow with $t$? Is $F(t)\le t+3$ for all $t$? \end{question} In order to answer this question, we prove the following structural result. \begin{theorem}\label{thm} Fix $k,m\in\mathbb{Z}^+$ and $\epsilon>0$. Then there exists $N=N(k,m,\epsilon)$ such that, for any group $G$ of order $n>N$ and any subset $A$ of the multiplication table of $G$ of density at least $\epsilon$, $A$ contains either a subgrid isomorphic to the addition table of $[k]$ as a subset of $\mathbb{Z}_K$ for some $K\ge k$, or a subgrid isomorphic to the addition table of $\mathbb{Z}_p^m$ for some $p<k$ prime. \end{theorem} \begin{remark}\label{rem} This result is `best possible' in terms of finding configurations with many edges spanned by few vertices, since if $A$ is simply taken to be the addition table of $[n/2]$ as a subset of $\mathbb{Z}_n$, say, then any subgrid of $A$ is isomorphic to part of a larger addition table and we cannot improve on the first case of the theorem. Similarly, if $A$ is simply the addition table of $\mathbb{Z}_p^t$ for small $p$ and large $t$ then we cannot improve on the second case. \end{remark} \section{Proof of Theorem~\ref{thm}} We start by introducing the arithmetic machinery that we use later. 
We begin with a multidimensional version of Szemer\'edi's theorem~\cite{multiS}. \begin{theorem}[Multidimensional Szemer\'edi Theorem]\label{mST} Let $k,t\in\mathbb{Z}^+$ and let $\epsilon>0$. Then there exists $N=N(\epsilon, k, t)$ such that for any $n>N$ and any $A\subset \mathbb{Z}_n^t$ of density at least $\epsilon$, we can find $a_1,a_2,\dots,a_t,d\in \mathbb{Z}_n$ such that $$(a_1+i_1d,a_2+i_2d,\dots,a_t+i_td)\in A$$ for all $i_1,\dots,i_t\in \{0,\dots,k-1\}$. In other words, $A$ contains the Cartesian product of $t$ arithmetic progressions of length $k$ with the same common difference. \end{theorem} We shall also need a multidimensional version of the density Hales--Jewett theorem~\cite{dHalesJewett}. We recall the definition of a combinatorial line. \begin{definition} A \emph{combinatorial line} in $\mathbb{Z}_m^n$ is a set $U$ of the form $$U=\{(x_1,\dots,x_n) \,| \, x_i\text{ constant on } I, \, x_j=z_j\text{ for }j\not\in I\}$$ for some non-empty indexing set $I\subset \{1,\dots,n\}$ and some $z\in \mathbb{Z}_m^n$. A \emph{combinatorial subspace of dimension $k$} is a set $U$ of the form $$U=\{(x_1,\dots,x_n) \,| \, x_i\text{ constant on each } I_s, \, x_j=z_j\text{ for }j\not\in \cup_s I_s\}$$ for some collection of $k$ disjoint non-empty indexing sets $I_s\subset \{1,\dots,n\}$, and some $z\in \mathbb{Z}_m^n$. \end{definition} \begin{theorem}[Density Hales--Jewett]\label{DHJ} Fix $m\in\mathbb{Z}^+$ and let $\epsilon>0$. Then there exists $N=N(\epsilon, m)$ such that for any $n>N$ and any $A\subset \mathbb{Z}_m^n$ of density at least $\epsilon$, we can find a combinatorial line inside $A$. \end{theorem} The density Hales--Jewett theorem easily implies its own multidimensional variant -- for a proof, see~\cite{DKT} for example. \begin{corollary}[Multidimensional density Hales--Jewett]\label{DHJ2} Let $m,k$ be fixed positive integers and let $\epsilon>0$. 
There exists $N=N(\epsilon, m, k)$ such that for any $n>N$ and any $A\subset \mathbb{Z}_m^n$ of density at least $\epsilon$, we can find an entire combinatorial subspace of dimension $k$ inside $A$. \end{corollary} We will need a further variant of density Hales--Jewett, which follows easily from Corollary~\ref{DHJ2} by applying the same idea used to extend from Theorem~\ref{DHJ} to Corollary~\ref{DHJ2}. \begin{corollary}\label{mDHJ} Let $k,t$ be fixed positive integers, $p$ a fixed prime, and let $\epsilon>0$. There exists $N=N(\epsilon, p, k, t)$ such that for any $n>N$ and any $A\subset (\mathbb{Z}_p^n)^t$ of density at least $\epsilon$, we can find a subspace $\Gamma$ of dimension $k$ and $a_1,\dots,a_t\in \mathbb{Z}_p^n$ such that $$(a_1+\Gamma)\times (a_2+\Gamma) \times \dots \times (a_t+\Gamma) \subset A.$$ \end{corollary} \begin{proof} We simply identify $(\mathbb{Z}_p^n)^t$ with $(\mathbb{Z}_p^t)^n$ in the obvious way. We can then apply Corollary~\ref{DHJ2} (with alphabet $\mathbb{Z}_p^t$) to find a combinatorial subspace of dimension $k$ inside $A$, which gives us an affine subspace of dimension $k$. The result follows by translating back to $(\mathbb{Z}_p^n)^t$. \end{proof} Lastly, we will need Pyber's theorem~\cite{Pyber}, which provides us with a large abelian subgroup of $G$. \begin{theorem}[Pyber's Theorem]\label{Pyber} There is a universal constant $c>0$ such that any group $G$ of order $n$ contains an abelian subgroup of order at least $e^{c\sqrt{\log n}}$. \end{theorem} We are now ready to prove Theorem~\ref{thm}. \begin{proof}[Proof of Theorem~\ref{thm}] We begin by applying Theorem~\ref{Pyber}, which states that $G$ contains an abelian subgroup $G'$ of order at least $\exp(c\sqrt{\log N})$ for some absolute constant $c>0$. In particular, $N'=|G'|$ tends to infinity with $N$. 
Note that the multiplication table of $G$ can be partitioned into the Cartesian products of left cosets of $G'$ with right cosets of $G'$. Since $A$ has density at least $\epsilon$ in the full multiplication table $G\times G$, we know that there exist $r,s\in G$ such that $A$ has density at least $\epsilon$ in the Cartesian product $rG'\times G's$. The part of the multiplication table corresponding to this Cartesian product is isomorphic to the addition table of $G'$. Let $A'=A\cap(rG'\times G's)$ be the subset of $rG'\times G's$ of density at least $\epsilon$ obtained from $A$. Note that $G'$ is a finite abelian group, and can therefore be written as a direct product of cyclic groups of prime power order. Suppose that $G'$ has a cyclic factor $\mathbb{Z}_T$. Then, as above, we can find a subset $A''$ which has density at least $\epsilon$ in a Cartesian product of two cosets of $\mathbb{Z}_T$ in $G$, and this Cartesian product is isomorphic to the addition table of $\mathbb{Z}_T$. Thus $A''$ corresponds to a subset of the $T\times T$ addition table of density at least $\epsilon$. By Theorem~\ref{mST}, if $T>T(k,\epsilon)$ is sufficiently large then we can find a Cartesian product of two arithmetic progressions $(a,a+d,\dots,a+(k-1)d)$ and $(b,b+d,\dots,b+(k-1)d)$ in $A''$. The labels in this subgrid belong to the set $\{a+b,a+b+d,\dots,a+b+2(k-1)d\}$. Indeed, this subgrid is isomorphic to the addition table $\{0,\dots,k-1\}\times\{0,\dots,k-1\}\subset \mathbb{Z}^2$ and so we are in the first case of the statement of the theorem. So we are done if $G'$ contains a cyclic factor $\mathbb{Z}_T$ with $T>T(k,\epsilon)$. Therefore we may assume that all factors of $G'$ are cyclic groups of bounded (prime power) order. 
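The label count in this step can be checked mechanically: a product of two $k$-term progressions with the same common difference uses at most $2k-1$ labels, so its $k^2$ faces span at most $2k+(2k-1)=4k-1$ vertices. A quick Python sanity check (with illustrative, hypothetical parameter values):

```python
# Illustrative parameters: k-term progressions with common difference d in Z_T.
T, k, d, a, b = 101, 5, 7, 3, 11

rows = [(a + i * d) % T for i in range(k)]
cols = [(b + j * d) % T for j in range(k)]
labels = {(x + y) % T for x in rows for y in cols}

# All labels lie in {a+b, a+b+d, ..., a+b+2(k-1)d}: at most 2k-1 of them.
assert labels <= {(a + b + s * d) % T for s in range(2 * k - 1)}
assert len(labels) <= 2 * k - 1
```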
Since $|G'|$ tends to infinity with $N$, we see that for any positive integer $M$, if $N$ is sufficiently large we may find (by the pigeonhole principle) a cyclic factor $\mathbb{Z}_{p^a}$ which appears to the power $M$. In particular, $G'$ contains $\mathbb{Z}_p^M$ as a subgroup. As above, we note that this means that we may find $A''\subset A$ which has density at least $\epsilon$ in the Cartesian product of two cosets of $\mathbb{Z}_p^M$ inside $G$, and this product is isomorphic to the addition table of $\mathbb{Z}_p^M$. If $M$ is sufficiently large in terms of $m$, then by Corollary~\ref{mDHJ} we can find the complete Cartesian product of $a+\mathbb{Z}_p^m$ and $b+\mathbb{Z}_p^m$ inside $A''$. This complete Cartesian product is isomorphic to the addition table of $\mathbb{Z}_p^m$. If $p\ge k$ then we can find the addition table of $[k]$ as a subset of $\mathbb{Z}_p$ and we are in the first case of the theorem, and otherwise we have $p<k$ and are in the second case. \end{proof} We now see how Theorem~\ref{thm} simplifies Question~\ref{q}. We let $f(t)$ be minimal such that we can find an $(f(t),t)$-configuration in the addition table of $[k]\subset\mathbb{Z}_K$ for any $K\ge k$ sufficiently large compared to $t$. Similarly, for each prime $p$ we let $g_p(t)$ be minimal such that we can find a $(g_p(t),t)$-configuration in the addition table of $\mathbb{Z}_p^m$ for any sufficiently large $m$ (in terms of $t$ and $p$). \begin{corollary}\label{cormain} We have that $F(t)=\max_{p}(f(t),g_p(t))$. \end{corollary} \begin{proof} Clearly $F(t)\ge\max_{p}(f(t),g_p(t))$, by the examples in Remark~\ref{rem}. For the other direction, we apply Theorem~\ref{thm} for choices of $k$ and $m$ sufficiently large in terms of $t$. 
Given a subset $A$ of density at least $\epsilon$ of the multiplication table of some sufficiently large group $G$, we may therefore find a subgrid isomorphic to the entire addition table of $[k]$ as a subset of $\mathbb{Z}_K$ for some $K\ge k$, or a subgrid isomorphic to the entire addition table of $\mathbb{Z}_p^m$ for some $p<k$ prime. If $k$ and $m$ are chosen large enough (in terms of $t$ only), we deduce that $A$ contains either an $(f(t),t)$-configuration or a $(g_p(t),t)$-configuration, and so $F(t)\le \max_{p}(f(t),g_p(t))$. \end{proof} We have thus reduced Question~\ref{q} to the problem of finding $f(t)$ and $g_p(t)$. We will devote the next section to tackling this discrete optimisation problem; providing an exact, closed-form answer for all $t$ is tricky because of certain divisibility considerations. \section{Finding locally dense configurations}\label{opt} In order to keep this note brief, we will not attempt to give the best possible bounds. We will instead show that $F(t)=\mathcal{O}(\sqrt{t})$, and, because of the connection with Conjecture~\ref{BES}, we will separately confirm that $F(t)\le t+3$ for all $t$. For the analysis of the discrete optimisation problem arising from Corollary~\ref{cormain}, it simplifies the calculations to try to maximise the number of faces induced by a fixed number $v$ of vertices rather than minimise the number of vertices spanned by a fixed number $t$ of faces. Thus we let $f'(v)$ be the maximal number of faces that can be spanned by a set of $v$ vertices in the addition table of $[k]\subset\mathbb{Z}_K$ for any $K\ge k$ sufficiently large compared to $v$, and observe that if $f'(v)\ge t$ then $f(t)\le v$. Similarly, we let $g_p'(v)$ be the maximal number of faces that can be spanned by a set of $v$ vertices in the addition table of $\mathbb{Z}_p^m$ for any $m$ sufficiently large in terms of $v$ and $p$, and observe that if $g_p'(v)\ge t$ then $g_p(t)\le v$. 
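The face-count formula used in the proof of Proposition~\ref{f} below can be verified by direct computation. The following Python sketch (an independent check of ours, not part of the proof) counts the faces obtained from the first $r$ rows, the first $r$ columns and the $s$ most frequent labels of the addition table of $[k]$:

```python
def faces(r, s):
    # In the r x r block of the addition table of [k] (k >= r, K large),
    # the label l on the falling diagonal i + j = l occurs r - |l - (r-1)| times.
    freqs = sorted((r - abs(l - (r - 1)) for l in range(2 * r - 1)), reverse=True)
    return sum(freqs[:s])  # take the s most numerous labels

# Check the closed form sr - s(s-1)/4 - (1/2)floor(s/2)
# (multiplied through by 4 to stay in integer arithmetic).
for r in range(1, 40):
    for s in range(1, 2 * r):
        assert 4 * faces(r, s) == 4 * s * r - s * (s - 1) - 2 * (s // 2)

# With r = s = v/3 this gives v^2/12 faces on at most v vertices.
v = 300
assert faces(v // 3, v // 3) == v * v // 12
```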
\begin{proposition}\label{f} We have that $f'(v)\ge (1+o(1))v^2/12$, and therefore $f(t)\le (\sqrt{12}+o(1))\sqrt{t}$. \end{proposition} \begin{proof} We work in the addition table of $[k]\subset\mathbb{Z}_K$ for $K\ge k\ge v$. Given the first $r$ rows and the first $r$ columns, we can optimise the density of our configuration by including the $s$ most numerous labels. The labels in the addition table are constant along falling diagonals. In the worst case, each falling diagonal corresponds to a different label, in which case the most numerous label occurs $r$ times, the next two most numerous labels occur $r-1$ times each, and so on. Therefore, by including the $s$ most numerous labels, we include a total of at least $$r+(r-1)+(r-1)+(r-2)+(r-2)+\dots+(r-\lceil (s-1)/2 \rceil)$$ $$=sr-s(s-1)/4-\frac12\lfloor s/2\rfloor$$ different faces. The total number of vertices is $2r+s$, so we seek to maximise this expression subject to the constraint that $2r+s\le v$. Taking $r=\lfloor v/3\rfloor$ and $s=\lceil v/3\rceil$, and noting that $f'(v)$ is an increasing function of $v$, the proposition follows. \end{proof} \begin{proposition}\label{g} We have that $g_p'(v)\ge (1+o(1))v^2/49$ for all $p$, and therefore $g_p(t)\le (7+o(1))\sqrt{t}$. \end{proposition} \begin{proof} We work in the addition table $T$ of $\mathbb{Z}_p^m$ for $m$ large. If $p\ge v/3$ then the construction in the proof of Proposition~\ref{f} finds a configuration in the addition table of $\mathbb{Z}_p$ with $(1+o(1))v^2/12$ faces and so we are done. Otherwise, let $l$ be minimal such that $3p^{l+1}> v$. For $m$ sufficiently large, $T$ contains a subgrid isomorphic to the addition table of $\mathbb{Z}_p^{l+1}$. We can partition this addition table into the Cartesian products of the cosets of $\mathbb{Z}_p^{l}$. 
These Cartesian products can be arranged into a $p\times p$ grid of blocks ($p^l\times p^l$ subgrids) corresponding to the entries of the addition table of $\mathbb{Z}_p$. We form our configuration by taking a union of these blocks. Let $v=\lambda p^l$, so that $\lambda\in [3,3p)$. The number $B$ of blocks that we can use is precisely the maximum number of faces induced by $\lfloor \lambda \rfloor$ vertices in the addition table of $\mathbb{Z}_p$. The number of vertices in the resulting configuration will be at most $v$, and the number of edges will be $Bp^{2l}=Bv^2/\lambda^2$. Since $p> \lambda/3$ we could use the construction idea from Proposition~\ref{f}. Unfortunately, we cannot assume that $\lambda$ is large (in which case we could take approximately $\lambda^2/12$ blocks and therefore obtain approximately $v^2/12$ faces), and the worst cases for this construction will in fact be decided by the best options for small $\lambda$. In order to minimise the calculation, we will instead simply take an $a\times a$ grid of these blocks, and we shall choose $a$ maximal subject to our constraint on the number of vertices. If we take the bottom-left $a\times a$ grid of these Cartesian products, we obtain a configuration with $ap^l$ rows, $ap^l$ columns and at most $(2a-1)p^l$ labels. The configuration has $a^2p^{2l}$ faces. Taking $a$ maximal so that $4a-1 \le v/p^l=\lambda$, we obtain a configuration $C$ with at most $v$ vertices. By the maximality of $a$ we see that $a=\lfloor \lambda/4+1/4\rfloor$, so in particular $a\ge \max(1,\lambda/4-3/4)$. The number of faces of the configuration $C$ is $a^2p^{2l}$, which is therefore at least $$\max\bigg(\frac{v^2}{\lambda^2}, \frac{(\lambda-3)^2}{16\lambda^2}v^2\bigg),$$ which takes its minimal value of $v^2/49$ when $\lambda=7$. \end{proof} \begin{remark} It is not hard to show that Proposition~\ref{f} is in fact best possible, and $1/12$ is the correct constant in the limit. 
On the other hand, Proposition~\ref{g} does not give the correct constant. As mentioned in the proof, combining the construction from Proposition~\ref{f} with a careful analysis of small $\lambda$ would allow improvements to be made quite easily. We can also make use of leftover vertices (when $\lambda$ is not an integer, a union of blocks uses only $\lfloor\lambda\rfloor p^l<v$ vertices, leaving some unused) to interpolate between the constructions for integer values of $\lambda$. Using these techniques we can improve the constant from $1/49$ to $5/64$. However, the calculations are quite involved and the result would still not be best possible, so we have tried to strike a compromise between giving the best bounds that we can and providing a streamlined argument. \end{remark} Combining Propositions~\ref{f} and~\ref{g} with Corollary~\ref{cormain} gives the following result. \begin{corollary}\label{cor} $F(t)=\mathcal{O}(\sqrt{t})$ (in fact, $F(t)\le (7+o(1))\sqrt{t}$). \end{corollary} Therefore, the Brown--Erd\H os--S\'os threshold of $(t+3,t)$ is far below what can be found given the extra group structure. Nevertheless, we will now confirm that we do indeed prove the Brown--Erd\H os--S\'os conjecture in the context of group multiplication tables, which essentially involves checking that sufficiently dense configurations exist for small values of $t$, as well as for large $t$ as verified by Corollary~\ref{cor}. \begin{proposition}\label{prop} We have that $F(t)\le t+3$ for all $t\ge 3$. \end{proposition} \begin{proof} Although much better bounds than $t+3$ are possible for large $t$, it will be most convenient simply to find $(t+3,t)$-configurations in the addition table of $[k]\subset\mathbb{Z}_K$ for $K\ge k$ large, and also in the addition table of $\mathbb{Z}_p^m$ for $m$ large. The result will then follow by Corollary~\ref{cormain}. 
For the first case, working in the addition table of $[k]\subset\mathbb{Z}_K$, we note that taking the points in positions $(0,0)$, $(0,1)$, and $(1,0)$ gives the configuration $$\begin{matrix} 1&\\0&1\end{matrix}$$ which has $3$ faces spanning $6$ vertices. Next, we can include the point in position $(1,1)$, which introduces one new vertex (a new label, $2$) and one new face. Then the point in position $(2,0)$ introduces one new vertex (a new column) and one new face, and then the point in position $(2,1)$ introduces one new vertex (a new label) and one new face. Continuing, we introduce the points in positions $(i,0)$ and $(i,1)$ for each $i$ until we have $t$ faces. At this point we have a configuration with $t$ faces spanning $t+3$ vertices. In the second case, we are working in the addition table of $\mathbb{Z}_p^m$ for $m$ large. We can use the above argument to find an $(r+3,r)$-configuration for $r$ up to $2p-1$ by taking the bottom two rows, minus the final face, of the addition table of some copy of $\mathbb{Z}_p$. When we add in the final point in position $(p-1,1)$ we re-use the label in position $(0,0)$, so we get an $(r+2,r)$-configuration. We can then start again in a new copy of $\mathbb{Z}_p$ (chosen, as is possible in $\mathbb{Z}_p^m$, so that it re-uses the labels of the previous copy), including the corresponding points one by one in the same order as before. Our first point introduces two new vertices (a new row and a new column) for just one more face, but since we are adding it to an $(r+2,r)$-configuration we get back to an $(r+3,r)$-configuration. Thereafter we add at most one new vertex with every new face. Once we finish the bottom two rows of this copy of $\mathbb{Z}_p$ we can start again in another copy, and we can continue until we have $t$ faces. At that point we will span at most $t+3$ vertices. \end{proof} Conjecture~\ref{BESG} follows immediately from Proposition~\ref{prop}. 
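The vertex bookkeeping in this proof is easy to mechanise. The following Python sketch (an independent check of ours, with the copies of $\mathbb{Z}_p$ modelled so that every copy re-uses the same $p$ labels, as the proof requires) verifies the bound $t+3$ for a range of $t$:

```python
p = 5  # a small prime, chosen only for illustration

def construction(t):
    """First t points of the second-case construction: the bottom two rows
    of successive copies of Z_p, where every copy re-uses the same p labels
    (realisable inside Z_p^m for m large)."""
    pts, c = [], 0
    while len(pts) < t:
        for i in range(p):       # order: (0,0), (0,1), (1,0), (1,1), ...
            for j in (0, 1):
                pts.append(((c, i), (c, j), (i + j) % p))
        c += 1                   # move to a fresh copy of Z_p
    return pts[:t]

def span(pts):
    rows = {r for r, _, _ in pts}
    cols = {c for _, c, _ in pts}
    labs = {l for _, _, l in pts}
    return len(rows) + len(cols) + len(labs)

for t in range(3, 200):
    assert span(construction(t)) <= t + 3
```

The first case (the addition table of $[k]\subset\mathbb{Z}_K$) corresponds to running the same count inside a single copy, with labels $i+j$ taken without reduction.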
\section{Concluding remarks} We have shown that the Brown--Erd\H{o}s--S\'os conjecture is true for hypergraphs with an underlying group structure, and that in fact a much stronger result is possible. We give a bound of $\mathcal{O}(\sqrt{t})$ on the minimum size of a collection of vertices spanned by $t$ edges, which is tight up to the implied constant. Theorem~\ref{thm} provides an explanation for this local density by showing that bounded-size subgrids manifesting an abelian group structure can be found in any dense subset of a group multiplication table. It is natural to wonder whether the ability to find many configurations with density beating the Brown--Erd\H{o}s--S\'os threshold is in some way connected to group-like structure. Are there interesting structural constraints weaker than the group axioms that still provide local density beyond the Brown--Erd\H{o}s--S\'os threshold? Or does the existence of many $(r,s)$-configurations with $r$ sufficiently small in terms of $s$ require an underlying group structure? \end{document}
\begin{document} \title{Extremal Unimodular Lattices in Dimension $36$} \author{ Masaaki Harada\thanks{ Research Center for Pure and Applied Mathematics, Graduate School of Information Sciences, Tohoku University, Sendai 980--8579, Japan. email: [email protected]. This work was carried out at Yamagata University.} } \maketitle \noindent {\bf Dedicated to Professor Vladimir D. Tonchev on His 60th Birthday} \begin{abstract} In this paper, new extremal odd unimodular lattices in dimension $36$ are constructed. Some new odd unimodular lattices in dimension $36$ with long shadows are also constructed. \end{abstract} \section{Introduction} A (Euclidean) lattice $L \subset \mathbb{R}^n$ in dimension $n$ is {\em unimodular} if $L = L^{*}$, where the dual lattice $L^{*}$ of $L$ is defined as $\{ x \in {\mathbb{R}}^n \mid (x,y) \in \mathbb{Z} \text{ for all } y \in L\}$ under the standard inner product $(x,y)$. A unimodular lattice is called {\em even} if the norm $(x,x)$ of every vector $x$ is even. A unimodular lattice which is not even is called {\em odd}. An even unimodular lattice in dimension $n$ exists if and only if $n \equiv 0 \pmod 8$, while an odd unimodular lattice exists for every dimension. Two lattices $L$ and $L'$ are {\em isomorphic}, denoted $L \cong L'$, if there exists an orthogonal matrix $A$ with $L' = L \cdot A$, where $ L \cdot A=\{xA \mid x \in L\}$. The automorphism group $\Aut(L)$ of $L$ is the group of all orthogonal matrices $A$ with $L = L \cdot A$. Rains and Sloane~\cite{RS-bound} showed that the minimum norm $\min(L)$ of a unimodular lattice $L$ in dimension $n$ is bounded by $\min(L) \le 2 \lfloor n/24 \rfloor+2$ unless $n=23$, in which case $\min(L) \le 3$. We say that a unimodular lattice meeting this upper bound is {\em extremal}. The smallest dimension for which there is an odd unimodular lattice with minimum norm (at least) $4$ is $32$ (see~\cite{lattice-database}). 
There are exactly five odd unimodular lattices in dimension $32$ having minimum norm $4$, up to isomorphism~\cite{CS98}. For dimensions $33,34$ and $35$, the minimum norm of an odd unimodular lattice is at most $3$ (see~\cite{lattice-database}). The next dimension for which there is an odd unimodular lattice with minimum norm (at least) $4$ is $36$. Four extremal odd unimodular lattices in dimension $36$ are known, namely, Sp4(4)D8.4 in~\cite{lattice-database}, $G_{36}$ in~\cite[Table~2]{G04}, $N_{36}$ in~\cite[Section~3]{H11} and $A_4(C_{36})$ in~\cite[Section~3]{H12}. Recently, one more lattice has been found, namely, $A_6(C_{36,6}(D_{18}))$ in~\cite[Table~II]{Hodd}. This situation motivates us to increase the number of known non-isomorphic extremal odd unimodular lattices in dimension $36$. The main aim of this paper is to prove the following: \begin{prop}\label{main} There are at least $26$ non-isomorphic extremal odd unimodular lattices in dimension $36$. \end{prop} The above proposition is established by constructing new extremal odd unimodular lattices in dimension $36$ from self-dual $\mathbb{Z}_k$-codes, where $\mathbb{Z}_{k}$ is the ring of integers modulo $k$, using two approaches. One approach is to consider self-dual $\mathbb{Z}_4$-codes. Let $B$ be a binary doubly even code of length $36$ satisfying the following conditions: \begin{align} \label{eq:C1} &\text{the minimum weight of $B$ is at least $16$}, \\ \label{eq:C2} &\text{the minimum weight of its dual code $B^\perp$ is at least $4$.} \end{align} Then a self-dual $\mathbb{Z}_4$-code with residue code $B$ gives an extremal odd unimodular lattice in dimension $36$ by Construction A. We show that a binary doubly even $[36,7]$ code satisfying the conditions (\ref{eq:C1}) and (\ref{eq:C2}) has weight enumerator $1+ 63 y^{16}+ 63 y^{20}+ y^{36}$ (Lemma~\ref{lem:WE}). It was shown in~\cite{PST} that there are four codes having this weight enumerator, up to equivalence. 
We construct ten new extremal odd unimodular lattices in dimension $36$ from self-dual $\mathbb{Z}_4$-codes whose residue codes are doubly even $[36,7]$ codes satisfying the conditions (\ref{eq:C1}) and (\ref{eq:C2}) (Lemma~\ref{lem:N1}). New odd unimodular lattices in dimension $36$ with minimum norm $3$ having shadows of minimum norm $5$ are constructed from some of the new lattices (Proposition~\ref{prop:longS}). These are often called unimodular lattices with long shadows (see~\cite{NV03}). The other approach is to consider self-dual $\mathbb{Z}_k$-codes $(k=5,6,7,9,19)$ which have generator matrices of the special form given in (\ref{eq:GM}). Eleven more new extremal odd unimodular lattices in dimension $36$ are constructed by Construction A (Lemma~\ref{lem:N2}). Finally, we give a short observation on ternary self-dual codes related to extremal odd unimodular lattices in dimension $36$. All computer calculations in this paper were done by {\sc Magma}~\cite{Magma}. \section{Preliminaries} \label{sec:def} \subsection{Unimodular lattices} Let $L$ be an odd unimodular lattice and let $L_0$ denote its even sublattice, that is, the sublattice of vectors of even norm. Then $L_0$ is a sublattice of $L$ of index $2$~\cite{CS98}. The {\em shadow} $S(L)$ of $L$ is defined to be $L_0^* \setminus L$. There are cosets $L_1,L_2,L_3$ of $L_0$ such that $L_0^* = L_0 \cup L_1 \cup L_2 \cup L_3$, where $L = L_0 \cup L_2$ and $S(L) = L_1 \cup L_3$. Shadows of odd unimodular lattices appeared in~\cite{CS98} and also in~\cite[p.~440]{SPLAG}, in order to provide restrictions on the theta series of odd unimodular lattices. Two lattices $L$ and $L'$ are {\em neighbors} if both lattices contain a common sublattice of index $2$. If $L$ is an odd unimodular lattice in dimension divisible by $4$, then there are two unimodular lattices other than $L$ containing $L_0$, namely, $L_0 \cup L_1$ and $L_0 \cup L_3$. 
Throughout this paper, we denote the two unimodular neighbors by \begin{equation}\label{eq:N} Ne_1(L)=L_0 \cup L_1 \text{ and } Ne_2(L)=L_0 \cup L_3. \end{equation} The theta series $\theta_{L}(q)$ of $L$ is the formal power series $ \theta_{L}(q) = \sum_{x \in L} q^{(x,x)}. $ The kissing number of $L$ is the second nonzero coefficient of the theta series of $L$, that is, the number of vectors of minimum norm in $L$. Conway and Sloane~\cite{CS98} gave some characterization of theta series of odd unimodular lattices and their shadows. Using~\cite[(2), (3)]{CS98}, it is easy to determine the possible theta series $\theta_{L_{36}}(q)$ and $\theta_{S(L_{36})}(q)$ of an extremal odd unimodular lattice $L_{36}$ in dimension $36$ and its shadow $S(L_{36})$: \begin{align} \label{eq:T1} \theta_{L_{36}}(q) =& 1 + (42840 + 4096 \alpha)q^4 +(1916928 - 98304 \alpha)q^5 + \cdots, \\ \label{eq:T2} \theta_{S(L_{36})}(q) =& \alpha q + (960 - 60 \alpha) q^3 + (3799296 + 1734 \alpha)q^5 + \cdots, \end{align} respectively, where $\alpha$ is a nonnegative integer. It follows from the coefficients of $q$ and $q^3$ in $\theta_{S(L_{36})}(q)$ that $0 \le \alpha \le 16$. \subsection{Self-dual $\mathbb{Z}_k$-codes and Construction A} Let $\mathbb{Z}_{k}$ be the ring of integers modulo $k$, where $k$ is a positive integer greater than $1$. A {\em $\mathbb{Z}_{k}$-code} $C$ of length $n$ is a $\mathbb{Z}_{k}$-submodule of $\mathbb{Z}_{k}^n$. Two $\mathbb{Z}_k$-codes are {\em equivalent} if one can be obtained from the other by permuting the coordinates and (if necessary) changing the signs of certain coordinates. A code $C$ is {\em self-dual} if $C=C^\perp$, where the dual code $C^\perp$ of $C$ is defined as $\{ x \in \mathbb{Z}_{k}^n \mid x \cdot y = 0$ for all $y \in C\}$, under the standard inner product $x \cdot y$. 
If $C$ is a self-dual $\mathbb{Z}_k$-code of length $n$, then the following lattice \[ A_{k}(C) = \frac{1}{\sqrt{k}} \{(x_1,\ldots,x_n) \in \mathbb{Z}^n \mid (x_1 \bmod k,\ldots,x_n \bmod k)\in C\} \] is a unimodular lattice in dimension $n$. This construction of lattices is called Construction A. \section{From self-dual $\mathbb{Z}_4$-codes}\label{sec:4} From now on, we omit the term ``odd'' for odd unimodular lattices in dimension $36$, since all unimodular lattices in dimension $36$ are odd. In this section, we construct ten new non-isomorphic extremal unimodular lattices in dimension $36$ from self-dual $\mathbb{Z}_4$-codes by Construction A. Five new non-isomorphic unimodular lattices in dimension $36$ with minimum norm $3$ having shadows of minimum norm $5$ are also constructed. \subsection{Extremal unimodular lattices} Every $\mathbb{Z}_4$-code $C$ of length $n$ has two binary codes $C^{(1)}$ and $C^{(2)}$ associated with $C$: \[ C^{(1)}= \{ c \bmod 2 \mid c \in C \} \text{ and } C^{(2)}= \left\{ c \bmod 2 \mid c \in \mathbb{Z}_4^n, 2c\in C \right\}. \] The binary codes $C^{(1)}$ and $C^{(2)}$ are called the {\em residue} and {\em torsion} codes of $C$, respectively. If $C$ is a self-dual $\mathbb{Z}_4$-code, then $ C^{(1)}$ is a binary doubly even code with $C^{(2)} = {C^{(1)}}^{\perp}$~\cite{Z4-CS}. Conversely, starting from a given binary doubly even code $B$, a method for constructing all self-dual $\mathbb{Z}_4$-codes $C$ with $C^{(1)}=B$ was given in~\cite[Section~3]{Z4-PLF}. The {\em Euclidean weight} of a codeword $x=(x_1,\ldots,x_n)$ of $C$ is $m_1(x)+4m_2(x)+m_3(x)$, where $m_{\alpha}(x)$ denotes the number of components $i$ with $x_i=\alpha$ $(\alpha=1,2,3)$. The {\em minimum Euclidean weight} $d_E(C)$ of $C$ is the smallest Euclidean weight among all nonzero codewords of $C$. It is easy to see that $\min\{d(C^{(1)}),4d(C^{(2)})\} \le d_E(C)$, where $d(C^{(i)})$ denotes the minimum weight of $C^{(i)}$ $(i=1,2)$.
In addition, $d_E(C) \le 4d(C^{(2)})$ and $A_4(C)$ has minimum norm $\min\{4,d_E(C)/4\}$ (see e.g.~\cite{H11}). Hence, if there is a binary doubly even code $B$ of length $36$ satisfying the conditions (\ref{eq:C1}) and (\ref{eq:C2}), then an extremal unimodular lattice in dimension $36$ is constructed as $A_4(C)$, through a self-dual $\mathbb{Z}_4$-code $C$ with $C^{(1)}=B$. If there is a binary $[36,k]$ code $B$ satisfying the conditions (\ref{eq:C1}) and (\ref{eq:C2}), then $k=7$ or $8$ (see~\cite{Brouwer-Handbook}). \begin{lem}\label{lem:WE} Let $B$ be a binary doubly even $[36,7]$ code satisfying the conditions {\rm (\ref{eq:C1})} and {\rm (\ref{eq:C2})}. Then the weight enumerator of $B$ is $1+ 63 y^{16}+ 63 y^{20}+ y^{36}$. \end{lem} \begin{proof} The weight enumerator of $B$ is written as: \[ W_{B}(y)= 1 +a y^{16} +b y^{20} +c y^{24} +d y^{28} +e y^{32} +(2^7-1-a-b-c-d-e) y^{36}, \] where $a,b,c,d$ and $e$ are nonnegative integers. By the MacWilliams identity, the weight enumerator of $B^\perp$ is given by: \begin{align*} W_{B^\perp}(y)=& 1 +\frac{1}{16}(- 567 + 5a + 4b + 3c + 2d + e) y \\& +\frac{1}{2}(1260 -10a - 10b - 9c - 7d - 4e) y^2 \\& +\frac{1}{16}(- 112455 + 885a + 900b + 883c + 770d + 497e)y^3 + \cdots. \end{align*} Since $d(B^\perp) \ge 4$, the weight enumerator of $B$ can be written in terms of $a$ and $b$: \begin{align*} W_{B}(y)=& 1 + a y^{16} + b y^{20} + (882 -10a - 4b) y^{24} + (- 1638 + 20a + 6b) y^{28} \\ & + (1197 -15a - 4b) y^{32} + (- 314 + 4a + b)y^{36}. \end{align*} Suppose that $B$ does not contain the all-one vector $\mathbf{1}$. Then $b=314 -4a$. In this case, the coefficients of $y^{24}$ and $y^{28}$ are $- 374 + 6a$ and $246 -4a$; since both must be nonnegative, these yield $a \ge 63$ and $a \le 61$, respectively, a contradiction. Hence, $B$ contains $\mathbf{1}$. Then $b=315-4a$. Since $\mathbf{1} \in B$ and the minimum weight of $B$ is at least $16$, $B$ contains no codeword of weight $32$, so the coefficient $a - 63$ of $y^{32}$ is $0$, that is, $a=63$. The weight enumerator of $B$ is then uniquely determined as $1+ 63 y^{16}+ 63 y^{20}+ y^{36}$.
\end{proof} \begin{rem} A similar approach shows that the weight enumerator of a binary doubly even $[36,8]$ code $B$ satisfying the conditions {\rm (\ref{eq:C1})} and {\rm (\ref{eq:C2})} is uniquely determined as $1 + 153 y^{16} + 72 y^{20} + 30 y^{24}$. \end{rem} It was shown in~\cite{PST} that there are four inequivalent binary $[36,7,16]$ codes containing $\mathbf{1}$. The four codes are doubly even. Hence, there are exactly four binary doubly even $[36,7]$ codes satisfying the conditions {\rm (\ref{eq:C1})} and {\rm (\ref{eq:C2})}, up to equivalence. The four codes are optimal in the sense that they achieve the Gray--Rankin bound, and the codewords of weight $16$ correspond to quasi-symmetric SDP $2$-$(36,16,12)$ designs~\cite{JT}. Let $B_{36,i}$ be the binary doubly even $[36,7,16]$ code corresponding to the quasi-symmetric SDP $2$-$(36,16,12)$ design, which is the residual design of the symmetric SDP $2$-$(64,28,12)$ design $D_i$ in~\cite[Section~5]{PST} $(i=1,2,3,4)$. As described above, all self-dual $\mathbb{Z}_4$-codes $C$ with $C^{(1)}=B_{36,i}$ have $d_E(C) =16$ $(i=1,2,3,4)$. Hence, the lattices $A_4(C)$ are extremal.
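The MacWilliams computation underlying Lemma~\ref{lem:WE} can be spot-checked numerically. The following sketch (ours, not part of the paper) verifies that the transform of $1+63y^{16}+63y^{20}+y^{36}$ has vanishing coefficients at $y$, $y^2$ and $y^3$, as required by the condition (\ref{eq:C2}):

```python
# Sanity check (ours) that W_B(y) = 1 + 63 y^16 + 63 y^20 + y^36 is consistent
# with d(B^perp) >= 4: its MacWilliams transform must vanish at y, y^2, y^3.
from math import comb

n, dim = 36, 7
coeffs = {0: 1, 16: 63, 20: 63, 36: 1}  # nonzero A_w of W_B(y)

def dual_coeff(j):
    """Coefficient of y^j in W_{B^perp}, via the binary MacWilliams identity
    W_{B^perp}(x, y) = 2^{-dim} W_B(x + y, x - y), expanded coefficientwise."""
    total = sum(a * sum((-1) ** i * comb(w, i) * comb(n - w, j - i)
                        for i in range(min(w, j) + 1))
                for w, a in coeffs.items())
    return total // 2 ** dim

assert dual_coeff(0) == 1                              # only the zero codeword
assert [dual_coeff(j) for j in (1, 2, 3)] == [0, 0, 0]  # so d(B^perp) >= 4
print([dual_coeff(j) for j in range(4)])
```

The inner sum is the Krawtchouk polynomial $K_j(w)$; the same routine reproduces the coefficients displayed in the proof when the symbolic $a,\ldots,e$ are specialized.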
\begin{table}[thbp] \caption{Extremal unimodular lattices in dimension $36$} \label{Tab:L} \begin{center} {\small \begin{tabular}{l|c|c|c} \noalign{\hrule height0.8pt} \multicolumn{1}{c|}{Lattices $L$} & $\tau(L)$ & $\{n_1(L),n_2(L)\}$ & $\#\Aut(L)$ \\ \hline Sp4(4)D8.4 in~\cite{lattice-database} &42840& $\{480, 480\}$ & 31334400 \\ $G_{36}$ in~\cite[Table~2]{G04} &42840& $\{144, 816\}$ & 576 \\ $N_{36}$ in~\cite{H11} &42840& $\{0, 960\}$ & 849346560 \\ $A_4(C_{36})$ in~\cite{H12} &51032& $\{0, 840\}$ & 660602880 \\ $A_6(C_{36,6}(D_{18}))$ in~\cite{Hodd} &42840& $\{384, 576\}$ & 288 \\ \hline $A_4(C_{36, 1})$ in Section~\ref{sec:4}& 51032 &$\{ 0, 840\}$& 6291456 \\ $A_4(C_{36, 2})$ in Section~\ref{sec:4}& 42840 &$\{ 0, 960\}$& 6291456 \\ $A_4(C_{36, 3})$ in Section~\ref{sec:4}& 51032 &$\{ 0, 840\}$& 22020096\\ $A_4(C_{36, 4})$ in Section~\ref{sec:4}& 51032 &$\{ 0, 840\}$& 1966080 \\ $A_4(C_{36, 5})$ in Section~\ref{sec:4}& 51032 &$\{ 0, 840\}$& 1572864 \\ $A_4(C_{36, 6})$ in Section~\ref{sec:4}& 42840 &$\{ 0, 960\}$& 2621440 \\ $A_4(C_{36, 7})$ in Section~\ref{sec:4}& 42840 &$\{ 0, 960\}$& 1966080 \\ $A_4(C_{36, 8})$ in Section~\ref{sec:4}& 42840 &$\{ 0, 960\}$& 393216 \\ $A_4(C_{36, 9})$ in Section~\ref{sec:4}& 51032 &$\{ 0, 840\}$& 1376256 \\ $A_4(C_{36,10})$ in Section~\ref{sec:4}& 51032 &$\{ 0, 840\}$& 393216 \\ \hline $A_{5}(D_{36,1})$ in Section~\ref{sec:E}&42840&$\{144, 816\}$ & 144 \\ $A_{5}(D_{36,2})$ in Section~\ref{sec:E}&42840&$\{456, 504\}$ & 72 \\ $A_{6}(D_{36,3})$ in Section~\ref{sec:E}&42840&$\{240, 720\}$ & 288 \\ $A_{6}(D_{36,4})$ in Section~\ref{sec:E}&42840&$\{240, 720\}$ & 576 \\ $A_{7}(D_{36,5})$ in Section~\ref{sec:E}&42840&$\{288, 672\}$ & 288 \\ $A_{7}(D_{36,6})$ in Section~\ref{sec:E}&42840&$\{144, 816\}$ & 72 \\ $A_{7}(D_{36,7})$ in Section~\ref{sec:E}&42840&$\{144, 816\}$ & 288 \\ $A_{9}(D_{36,8})$ in Section~\ref{sec:E}&42840&$\{384, 576\}$ & 144 \\ $A_{19}(D_{36,9})$ in Section~\ref{sec:E}&42840&$\{288, 672\}$ & 144 \\ \hline 
$A_{5}(E_{36,1})$ in Section~\ref{sec:E}&42840& $\{456, 504\}$ & 72 \\ $A_{6}(E_{36,2})$ in Section~\ref{sec:E}&42840& $\{384, 576\}$ &144\\ \noalign{\hrule height0.8pt} \end{tabular} } \end{center} \end{table} Using the method in~\cite[Section~3]{Z4-PLF}, self-dual $\mathbb{Z}_4$-codes $C$ are constructed from $B_{36,i}$. Then ten extremal unimodular lattices $A_4(C_{36,i})$ $(i=1,2,\ldots,10)$ are constructed, where $C_{36,i}^{(1)} = B_{36,2}$ $(i=1,2,3)$, $C_{36,i}^{(1)} = B_{36,3}$ $(i=4,5,6,7)$ and $C_{36,i}^{(1)} = B_{36,4}$ $(i=8,9,10)$. To distinguish between the known lattices and our lattices, we give in Table~\ref{Tab:L} the kissing numbers $\tau(L)$, $\{n_1(L),n_2(L)\}$ and the orders $\#\Aut(L)$ of the automorphism groups, where $n_i(L)$ denotes the number of vectors of norm $3$ in $Ne_i(L)$ defined in (\ref{eq:N}) $(i=1,2)$. These have been calculated by {\sc Magma}. Table~\ref{Tab:L} shows the following: \begin{lem}\label{lem:N1} The five known lattices and the ten extremal unimodular lattices $A_4(C_{36,i})$ $(i=1,2,\ldots,10)$ are non-isomorphic to each other. \end{lem} \begin{rem} In this way, we have found two more extremal unimodular lattices $A_4(C)$, where $C$ are self-dual $\mathbb{Z}_4$-codes with $C^{(1)} = B_{36,1}$. However, we have verified by {\sc Magma} that the two lattices are isomorphic to $N_{36}$ in~\cite{H11} and $A_4(C_{36})$ in~\cite{H12}. \end{rem} \begin{rem}\label{rem} For $L=A_4(C_{36,i})$ $(i=1,2,\ldots,10)$, it follows from $\tau(L)$ and $\{n_1(L),n_2(L)\}$ that one of the two unimodular neighbors $Ne_1(L)$ and $Ne_2(L)$ defined in (\ref{eq:N}) is extremal. We have verified by {\sc Magma} that the extremal one is isomorphic to $A_4(C_{36,i})$. 
\end{rem} For $i=1,2,\ldots,10$, the code $C_{36,i}$ is equivalent to some code $\overline{C_{36,i}}$ with generator matrix of the form: \begin{equation} \label{eq:g-matrix} \left(\begin{array}{ccc} I_{7} & A & B_1+2B_2 \\ O &2I_{22} & 2D \end{array}\right), \end{equation} where $A$, $B_1$, $B_2$, $D$ are $(1,0)$-matrices, $I_n$ denotes the identity matrix of order $n$ and $O$ denotes the $22 \times 7$ zero matrix. We only list in Figure~\ref{Fig} the $7 \times 29$ matrix $ M_{i}= \left(\begin{array}{cc} A & B_1+2B_2 \end{array}\right) $ to save space. Note that $\left(\begin{array}{ccc} O &2I_{22} & 2D \end{array}\right)$ in (\ref{eq:g-matrix}) can be obtained from $\left(\begin{array}{ccc} I_{7} & A & B_1+2B_2 \end{array}\right)$ since $\overline{C_{36,i}}^{(2)} = {\overline{C_{36,i}}^{(1)}}^{\perp}$. A generator matrix of $A_4(C_{36,i})$ is obtained from that of $C_{36,i}$. \begin{figure} \caption{Generator matrices of $\overline{C_{36,i}}$ $(i=1,2,\ldots,10)$} \label{Fig} \end{figure} \subsection{Unimodular lattices with long shadows}\label{sec:L} The possible theta series of a unimodular lattice $L$ in dimension $36$ having minimum norm $3$ and its shadow are as follows: \begin{align*} & 1 + (960 - \alpha)q^3 + (42840 + 4096 \beta)q^4 + \cdots, \\ & \beta q + (\alpha - 60 \beta) q^3 + (3833856 - 36 \alpha + 1734 \beta)q^5 + \cdots, \end{align*} respectively, where $\alpha$ and $\beta$ are integers with $0 \le \beta \le \frac{\alpha}{60} < 16$~\cite{H11}. Then the kissing number of $L$ is at most $960$ and $\min(S(L)) \le 5$. Unimodular lattices $L$ with $\min(L)=3$ and $\min(S(L))=5$ are often called unimodular lattices with long shadows (see~\cite{NV03}). Only one unimodular lattice $L$ in dimension $36$ with $\min(L)=3$ and $\min(S(L))=5$ was known, namely, $A_4(C_{36})$ in~\cite{H11}. Let $L$ be one of $A_4(C_{36,2})$, $A_4(C_{36,6})$, $A_4(C_{36,7})$ and $A_4(C_{36,8})$.
Since $\{n_1(L),n_2(L)\}=\{0,960\}$, one of the two unimodular neighbors $Ne_1(L)$ and $Ne_2(L)$ in (\ref{eq:N}) is extremal and the other is a unimodular lattice $L'$ with minimum norm $3$ having shadow of minimum norm $5$. We denote such lattices $L'$ constructed from $A_4(C_{36,2})$, $A_4(C_{36,6})$, $A_4(C_{36,7})$ and $A_4(C_{36,8})$ by $N_{36,1}$, $N_{36,2}$, $N_{36,3}$ and $N_{36,4}$, respectively. We list in Table~\ref{Tab:LS} the orders $\#\Aut(N_{36,i})$ $(i=1,2,3,4)$ of the automorphism groups, which have been calculated by {\sc Magma}. Table~\ref{Tab:LS} shows the following: \begin{prop}\label{prop:longS} There are at least $5$ non-isomorphic unimodular lattices $L$ in dimension $36$ with $\min(L)=3$ and $\min(S(L))=5$. \end{prop} \begin{table}[th] \caption{$\#\Aut(N_{36,i})$ $(i=1,2,3,4)$} \label{Tab:LS} \begin{center} {\small \begin{tabular}{c|r} \noalign{\hrule height0.8pt} Lattices $L$ & \multicolumn{1}{c}{$\#\Aut(L)$} \\ \hline $A_4(C_{36})$ in~\cite{H11} & 1698693120\\ $N_{36,1}$ & 12582912\\ $N_{36,2}$ & 5242880 \\ $N_{36,3}$ & 3932160 \\ $N_{36,4}$ & 786432 \\ \noalign{\hrule height0.8pt} \end{tabular} } \end{center} \end{table} \section{From self-dual $\mathbb{Z}_k$-codes $(k \ge 5)$}\label{sec:E} In this section, we construct more extremal unimodular lattices in dimension $36$ from self-dual $\mathbb{Z}_k$-codes $(k \ge 5)$. Let $A^T$ denote the transpose of a matrix $A$. An $n \times n$ matrix is {negacirculant} if it has the following form: \[ \left( \begin{array}{ccccc} r_0 &r_1 & \cdots &r_{n-1} \\ -r_{n-1}&r_0 & \cdots &r_{n-2} \\ -r_{n-2}&-r_{n-1}& \cdots &r_{n-3} \\ \vdots & \vdots && \vdots\\ -r_1 &-r_2 & \cdots&r_0 \end{array} \right). 
\] Let $D_{36,i}$ $(i=1,2,\ldots,9)$ and $E_{36,i}$ $(i=1,2)$ be $\mathbb{Z}_k$-codes of length $36$ with generator matrices of the following form: \begin{equation} \label{eq:GM} \left( \begin{array}{ccc@{}c} \quad & {\Large I_{18}} & \quad & \begin{array}{cc} A & B \\ -B^T & A^T \end{array} \end{array} \right), \end{equation} where $k$ are listed in Table~\ref{Tab:Codes}, $A$ and $B$ are $9 \times 9$ negacirculant matrices with first rows $r_A$ and $r_B$ listed in Table~\ref{Tab:Codes}. It is easy to see that these codes are self-dual since $AA^T+BB^T=-I_9$. Thus, $A_k(D_{36,i})$ $(i=1,2,\ldots,9)$ and $A_k(E_{36,i})$ $(i=1,2)$ are unimodular lattices, for $k$ given in Table~\ref{Tab:Codes}. In addition, we have verified by {\sc Magma} that these lattices are extremal. \begin{table}[thb] \caption{Self-dual $\mathbb{Z}_k$-codes of length 36} \label{Tab:Codes} \begin{center} {\small \begin{tabular}{c|c|l|l} \noalign{\hrule height0.8pt} Codes & $k$ & \multicolumn{1}{c|}{$r_A$} &\multicolumn{1}{c}{$r_B$} \\ \hline $D_{36,1}$& 5&$(0, 0, 0, 1, 2, 2, 0, 4, 2)$ & $(0, 0, 0, 0, 4, 3, 3, 0, 1)$\\ $D_{36,2}$& 5&$(0, 0, 0, 1, 3, 0, 2, 0, 4)$ & $(3, 0, 4, 1, 4, 0, 0, 4, 4)$\\ $D_{36,3}$&6 &$(0,1,5,3,2,0,3,5,1)$&$(3,1,0,0,5,1,1,1,1)$ \\ $D_{36,4}$&6 &$(0,1,3,5,1,5,5,4,4)$&$(4,0,3,2,4,5,5,2,4)$ \\ $D_{36,5}$& 7&$(0, 1, 6, 3, 5, 0, 4, 5, 4)$ & $(0, 1, 6, 3, 5, 2, 1, 5, 1)$\\ $D_{36,6}$& 7&$(0, 1, 1, 3, 2, 6, 1, 4, 6)$ & $(0, 1, 4, 0, 5, 3, 6, 2, 0)$\\ $D_{36,7}$& 7&$(0, 0, 0, 1, 5, 5, 5, 1, 1)$ & $(0, 5, 4, 2, 5, 1, 1, 3, 6)$\\ $D_{36,8}$& 9&$(0, 0, 0, 1, 0, 4, 3, 0, 0)$ & $(0, 4, 1, 5, 3, 5, 1, 7, 0)$\\ $D_{36,9}$&19&$(0, 0, 0, 1, 15, 15, 9, 6, 5)$ &$(14, 16, 0, 14, 15, 8, 8, 3, 12)$ \\ \hline $E_{36,1}$&5 &$(0, 1, 0, 2, 1, 3, 2, 2, 0)$&$(2, 0, 1, 0, 1, 1, 2, 3, 1)$\\ $E_{36,2}$&6 &$(0, 1, 5, 3, 4, 4, 1, 1, 0)$&$(4, 0, 1, 3, 4, 2, 3, 0, 1)$\\ \noalign{\hrule height0.8pt} \end{tabular} } \end{center} \end{table} To distinguish between the above eleven lattices and the known $15$ 
lattices, we give in Table~\ref{Tab:L} the invariants $\tau(L)$, $\{n_1(L),n_2(L)\}$ and $\#\Aut(L)$, which have been calculated by {\sc Magma}. For each of the pairs $(A_{5}(E_{36,1}), A_{5}(D_{36,2}))$ and $(A_{6}(E_{36,2}),A_{9}(D_{36,8}))$, the two lattices have identical $ (\tau(L),\{n_1(L),n_2(L)\},\#\Aut(L)) $. However, we have verified by {\sc Magma} that the two lattices in each pair are non-isomorphic. Therefore, we have the following: \begin{lem}\label{lem:N2} The $26$ lattices in Table~\ref{Tab:L} are non-isomorphic to each other. \end{lem} Lemma~\ref{lem:N2} establishes Proposition~\ref{main}. \begin{rem} As in Remark~\ref{rem}, it is known~\cite{H11} that the extremal neighbor is isomorphic to $L$ in the case where $L$ is $N_{36}$ in~\cite{H11}, and we have verified by {\sc Magma} that the extremal neighbor is isomorphic to $L$ in the case where $L$ is $A_4(C_{36})$ in~\cite{H12}. \end{rem} \section{Related ternary self-dual codes}\label{sec:T} In this section, we give a short observation on ternary self-dual codes related to some extremal odd unimodular lattices in dimension $36$. \subsection{Unimodular lattices from ternary self-dual codes} Let $T_{36}$ be a ternary self-dual code of length $36$. The two unimodular neighbors $Ne_1(A_3(T_{36}))$ and $Ne_2(A_3(T_{36}))$ given in (\ref{eq:N}) are described in~\cite{HKO} as $L_S(T_{36})$ and $L_T(T_{36})$. In this section, we use the notation $L_S(T_{36})$ and $L_T(T_{36})$, instead of $Ne_1(A_3(T_{36}))$ and $Ne_2(A_3(T_{36}))$, since the explicit constructions and some properties of $L_S(T_{36})$ and $L_T(T_{36})$ are given in~\cite{HKO}.
By Theorem~6 in~\cite{HKO} (see also Theorem~3.1 in~\cite{G04}), $L_T(T_{36})$ is extremal when $T_{36}$ satisfies the following condition (a), and both $L_S(T_{36})$ and $L_T(T_{36})$ are extremal when $T_{36}$ satisfies the following condition (b): \begin{itemize} \item[(a)] extremal (minimum weight $12$) and admissible (the number of $1$'s in the components of every codeword of weight $36$ is even), \item[(b)] minimum weight $9$ and maximum weight $33$. \end{itemize} For each of (a) and (b), no ternary self-dual code satisfying the condition is currently known. \subsection{Condition (a)} Suppose that $T_{36}$ satisfies the condition (a). Since $T_{36}$ has minimum weight $12$, $A_3(T_{36})$ has minimum norm $3$ and kissing number $72$. By Theorem~6 in~\cite{HKO}, $\min(L_T(T_{36}))=4$ and $\min(L_S(T_{36}))=3$. Hence, since the shadow of $L_T(T_{36})$ contains no vector of norm $1$, by (\ref{eq:T1}) and (\ref{eq:T2}) $L_T(T_{36})$ has theta series $1 + 42840 q^4 +1916928 q^5 + \cdots$. It follows that $\{n_1(L_T(T_{36})),n_2(L_T(T_{36}))\}=\{72,888\}$. By Theorem~1 in~\cite{MPS}, the possible complete weight enumerator $W_C(x,y,z)$ of a ternary extremal self-dual code $C$ of length $36$ containing $\mathbf{1}$ is written as \begin{align*} a_{1} \delta_{36} +a_{2} \alpha_{12}^{3} +a_{3} \alpha_{12}^{2} {\beta_6^2} +a_{4} \alpha_{12} (\beta_6^2)^{2} +a_{5} (\beta_6^2)^{3} +a_{6} \beta_6\gamma_{18} \alpha_{12} +a_{7} \beta_6\gamma_{18} {\beta_6^2}, \end{align*} using some $a_i \in \mathbb{R}$ $(i=1,2,\ldots,7)$, where $\alpha_{12}=a(a^3+8p^3)$, $\beta_6 =a^2-12b$, $\gamma_{18}=a^6-20a^3p^3-8p^6$, $\delta_{36}=p^3(a^3-p^3)^3$ and $a=x^3+y^3+z^3$, $p=3xyz$, $b=x^3y^3+x^3z^3+y^3z^3$. 
From the minimum weight, we have the following: \begin{align*} & a_2 = \frac{3281}{13824} - \frac{a_1}{64}, a_3 = \frac{203}{4608} - \frac{9 a_1}{256}, a_4 = \frac{1763}{13824} + \frac{3 a_1}{128}, \\& a_5 = -\frac{277}{13824} - \frac{a_1}{256}, a_6 = \frac{1133}{1728} + \frac{3 a_1}{64}, a_7 = -\frac{77}{1728} - \frac{a_1}{64}. \end{align*} Since $W_C(x,y,z)$ contains the term $(15180 + 2916a_1) y^{15} z^{21}$, if $C$ is admissible, then \[ a_1=-\frac{15180}{2916}. \] Hence, the complete weight enumerator of a ternary admissible extremal self-dual code containing $\mathbf{1}$ is uniquely determined, which is listed in Figure~\ref{Fig:CWE}. \begin{figure} \caption{Complete weight enumerator} \label{Fig:CWE} \end{figure} \subsection{Condition (b)} Suppose that $T_{36}$ satisfies the condition (b). By the Gleason theorem (see Corollary~5 in~\cite{MPS}), the weight enumerator of $T_{36}$ is uniquely determined as: \begin{align*} & 1 + 888 y^9 + 34848 y^{12} + 1432224 y^{15} + 18377688 y^{18} + 90482256 y^{21} \\& + 162551592 y^{24} + 97883072 y^{27} + 16178688 y^{30} + 479232 y^{33}. \end{align*} By Theorem~6 in~\cite{HKO} (see also Theorem~3.1 in~\cite{G04}), $L_S(T_{36})$ and $L_T(T_{36})$ are extremal. Hence, $\min(A_3(T_{36}))=3$ and $\min(S(A_3(T_{36})))=5$. Note that a unimodular lattice $L$ contains a $3$-frame if and only if $L\cong A_3(C) $ for some ternary self-dual code $C$. Let $L_{36}$ be any of the five lattices given in Table~\ref{Tab:LS}. Let $L_{36}^{(3)}$ be the set $\{\{x,-x\}\mid (x,x)=3, x \in L_{36}\}$. We define the simple undirected graph $\Gamma(L_{36})$, whose set of vertices is the set of $480$ pairs in $L_{36}^{(3)}$ and two vertices $\{x,-x\},\{y,-y\}\in L_{36}^{(3)}$ are adjacent if $(x,y)=0$. It follows that the $3$-frames in $L_{36}$ are precisely the $36$-cliques in the graph $\Gamma(L_{36})$. 
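Incidentally, the weight enumerator of $T_{36}$ displayed above admits a quick sanity check (ours, not the paper's): a ternary self-dual code of length $36$ has exactly $3^{18}$ codewords, and all of its weights are divisible by $3$.

```python
# Sanity checks (ours) on the weight enumerator of T_36 under condition (b):
# a ternary self-dual code of length 36 has 3^18 codewords, and every weight
# of a ternary self-dual code is divisible by 3.
enum = {0: 1, 9: 888, 12: 34848, 15: 1432224, 18: 18377688, 21: 90482256,
        24: 162551592, 27: 97883072, 30: 16178688, 33: 479232}

assert sum(enum.values()) == 3 ** 18     # |T_36| = 3^{36/2}
assert all(w % 3 == 0 for w in enum)     # weights divisible by 3
print(sum(enum.values()))
```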
We have verified by {\sc Magma} that each $\Gamma(L_{36})$ is a regular graph with valency $368$, and that the maximum size of a clique in $\Gamma(L_{36})$ is $12$. Hence, none of these lattices can be obtained from a ternary self-dual code by Construction A. \noindent {\bf Acknowledgments.} The author would like to thank Masaaki Kitazume for bringing the observation in Section~\ref{sec:T} to the author's attention. This work is supported by JSPS KAKENHI Grant Number 23340021. \end{document}
\begin{document} \title{Best and worst policy control in low-prevalence SEIR} \begin{abstract} We consider the low-prevalence linearized SEIR epidemic model for a society that has resolved to keep future infections low in anticipation of a vaccine. The society can vary its amount of potentially-infection-spreading activity over time, within a certain feasible range. Because the activity has social or economic value, the society aims to maximize activity overall subject to infection rate constraints. We find that consistent policies are the {\em worst possible} in terms of activity, while the best policies alternate between high and low activity. In a variant involving multiple subpopulations, we find that the best policies are maximally {\em coordinated} (maintaining similar prevalence among subpopulations) but {\em oscillatory} (having growth rates that vary in time). It turns out that linearized SEIR is mathematically equivalent to an idealized racecar model (with different subpopulations corresponding to different cars) and the amount of fuel used corresponds to the amount of activity. Using this analogy, steady V-shaped formations (in which one subpopulation ``leads the way'' with consistently higher prevalence and activity, while others follow behind with lower prevalence and activity) are especially problematic. These formations are {\em very effective} at minimizing fuel use, hence {\em very ineffective} at boosting activity. In an appendix, we obtain analogous results for alternative notions of activity, which incorporate crowding effects. \end{abstract} \section{Introduction} Non-pharmaceutical interventions (NPIs), including the prohibition or rescheduling of activities, are used to limit the damage caused by pandemics. 
When managing a pandemic, a government or population may decide that the cost of obtaining significant herd immunity through infection is unacceptably high, and that it is therefore necessary to maintain low disease prevalence through NPIs until vaccines are available. One important and controversial question is the following: once disease prevalence is low (say, 50 confirmed cases per million per day), is it better to keep the effective reproductive rate as close to one as possible (aiming for consistency and sustainability) or to ``suppress and resuppress'' (i.e., drive the prevalence down even further, then relax restrictions, then restore restrictions if/when prevalence rates recover, etc.)? In a recent paper, the author and others showed that strict but intermittent measures were better than consistently moderate measures at optimizing certain utility functions within the low-prevalence limit of SEIR/SEIS and its variants \cite{lockdownscount2020}. In this follow-up note, we show that consistent strategies are actually the {\em least effective} when measured by a certain type of {\em activity}. We also work out the {\em most effective} strategies, including in settings where upper and lower bounds on the infection rate are imposed. We then explore settings with multiple subpopulations and find that, in terms of prevalence, it is much better to aim for {\em geographic consistency} and {\em temporal variability} than the other way around. The models in this paper are simplifications that omit important considerations. They are meant to generate hypotheses and inform judgment, not resolve real world questions on their own. Nonetheless, it is interesting that, within these models, some of the {\em intuitively-best-sounding} practices are actually the worst. {\bf Acknowledgement:} We thank Morris Ang, Minjae Park, Joshua Pfeffer, Pu Yu, and the co-authors of \cite{lockdownscount2020} for useful conversations. The author is partially supported by NSF award DMS 1712862.
\section{Methods} \subsection{Setup and motivation: a linearized SEIR optimal control problem} During a pandemic, there may come a time when a state or country resolves to keep its total number of future infections small (perhaps less than $2$ percent of the population) up until a later time $T$ at which a vaccination program will commence. If we assume dynamics are given by a standard SEIR model, this means that $S$ and $R$ can be treated as (essentially) constant for the remaining duration. We choose our time unit so that the mean incubation time is $1$, and the mean infectious time is the constant $\gamma^{-1}$. We then obtain the {\em linearized} ODE\cite{diekmann2010construction} given by \begin{equation} \label{eqn::note} \dot E(t) = -E(t) + \beta(t) I(t), \,\,\,\,\,\,\,\, \,\,\,\,\, \dot I(t) = E(t) - \gamma I(t),\end{equation} where $E(t)$ represents the fraction of the population {\em exposed} (infected but not yet infectious), $I(t)$ represents the fraction that is {\em infectious}, and $\beta(t) \in [\beta_{\mathrm{min}},\beta_{\mathrm{max}}]$ is a control parameter (a measurable function of $t$) describing the rate at which disease transmission occurs at time $t$. We assume that there is some {\em flexible activity} (haircuts, conversations, surgeries, lessons, factory shifts, etc.)\ that has social/economic value but also carries transmission risk. By ``flexible'' we mean that its utility is not dependent on {\em when} it occurs. Our policy tool is deciding how much of this activity to schedule/allow and when to do so. For now, we assume that the disease transmission caused by flexible activity is primarily due to the activity itself (not ancillary crowding effects) so that disease transmission is linear in the amount of activity. (We will discuss alternatives in Appendix~\ref{app:generalutility}.) We interpret $\beta(t)$ as the {\em total} amount of transmission-inducing activity (flexible or otherwise) happening at time $t$. 
We interpret $\beta_{\mathrm{min}}$ as the amount of transmission-inducing activity that occurs when no flexible activity is scheduled, so that $\beta(t) - \beta_{\mathrm{min}}$ is the amount of {\em flexible} activity at time $t$. We interpret $\beta_{\mathrm{max}}-\beta_{\mathrm{min}}$ as the maximal amount of flexible activity that can be scheduled at once (due to limitations on space or on the number of individuals available to be active). Define the {\em total activity} by $A = \int_0^T \beta(t) dt$. Our goal will be to find strategies that (subject to restrictions) maximize $A$. The main assumption underlying this goal is that the social/economic utility derived from flexible activity depends only on $A$, not on how the flexible activity is temporally distributed. We {\em do not} assume that all flexible activity is of equal value. For example, we allow for the possibility that if $A$ were small, only very important activity would be allowed, but if $A$ were larger, more discretionary activity would take place.\footnote{For example, if a maskless conversation contributes twice as much disease transmission risk as a similar masked conversation, then the maskless conversation would count as twice as much ``activity.'' So ``removing a mask during a close conversation'' could be treated as a form of discretionary activity that might only occur in larger $A$ scenarios. If an 8-person party involves more than twice as much transmission as a 4-person party, then it would count as more than twice as much activity.} As long as social/economic utility is an increasing function of $A$ ({\em not necessarily linear}) it is sensible for maximizing $A$ to be an objective. Alternative objective functions (accounting for activity that is {\em not} perfectly flexible, e.g.\ because people pursuing different activities at once might {\em crowd} each other in a way that increases transmission) will be discussed in Appendix~\ref{app:generalutility}. 
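Before turning to the optimization problems, here is a quick numerical sanity check of the dynamics \eqref{eqn::note} (our sketch; the parameter values $\gamma = 1/2$, the initial conditions, and the time horizon are arbitrary illustrative choices): with constant $\beta$, infections eventually decay exactly when $\beta < \gamma$, i.e., when the effective reproduction number $\beta/\gamma$ is below one.

```python
# Sanity check (ours, not from the paper) of the linearized SEIR dynamics:
# with constant beta, I(t) eventually decays iff beta < gamma.
def simulate(beta, gamma=0.5, E0=0.01, I0=0.01, T=20.0, dt=1e-3):
    """Euler-integrate dot E = -E + beta*I, dot I = E - gamma*I; return I(T)."""
    E, I = E0, I0
    for _ in range(int(round(T / dt))):
        E, I = E + dt * (-E + beta * I), I + dt * (E - gamma * I)
    return I

assert simulate(beta=0.3) < 0.01   # beta < gamma: infections die out
assert simulate(beta=0.8) > 0.01   # beta > gamma: infections grow
print(simulate(beta=0.3), simulate(beta=0.8))
```

The threshold is visible in the linear system's dominant eigenvalue, which is positive precisely when $\beta > \gamma$.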
For now, we will focus only on maximizing $A$, not on minimizing total infections. (Effectively, we are assuming that prevalence is low enough that further minimizing infections is not the primary consideration.) But we will consider imposing upper and lower bounds on infection rates. See also \cite{lockdownscount2020} for further references, as well as some discussion of the probability distributions governing incubation and infectious periods, social networks, and other factors beyond the scope of this note. We assume a vaccine will arrive at time $T$. We do not model the vaccination strategy, but we allow that the cost of controlling the disease during its implementation may depend on the terminal values $E(T)$ and $I(T)$. Thus it is of interest to find the optimal $\beta:[0,T] \to [\beta_{\mathrm{min}},\beta_{\mathrm{max}}]$ yielding any particular choice of $E(T)$ and $I(T)$, and to find the amount of activity that approach yields. (Finding the optimal values for $E(T)$ and $I(T)$ would then be a second step, and would depend on the vaccination rollout model used.) \subsection{Problem statements} \begin{prob} \label{prob::activity} Given $E(0)$, $I(0)$, $E(T)$ and $I(T)$, find the $\beta$ that maximizes $A$. \end{prob} To simplify the presentation, we change coordinates to reduce Problem~\ref{prob::activity} to a one-dimensional problem. Define the {\em velocity} of the disease to be $V(t) := E(t)/I(t)$. The term ``velocity'' is motivated by the fact that \begin{equation}\label{eqn::logI} \frac{\partial}{\partial t} \log I(t) = \frac{\dot I(t)}{I(t)} = V(t) - \gamma, \,\,\,\,\,\,\,\,\,\, \log\frac{I(T)}{I(0)} = \int_0^T V(t) dt - \gamma T, \end{equation} so that $V(t)$ is (up to additive constant) the rate at which $\log I(t)$ is changing. 
Computing further we find \begin{align} \label{eqn::Y} \dot V(t) &= \frac{ \dot E(t) I(t) - \dot I(t)E(t)}{I(t)^2} \\ &= \frac{ \bigl(- E(t) + \beta(t)I(t)\bigr) I(t) - \bigl(E(t) - \gamma I(t) \bigr) E(t)}{I(t)^2} \nonumber \\ &= \frac{ -E(t)I(t) + \beta(t) I(t)^2 - E(t)^2 + \gamma I(t)E(t)}{I(t)^2} \nonumber \\ &= \beta(t) -V(t)^2 + (\gamma-1)V(t) \nonumber \\ &= \beta(t) -\phi(V(t)), \nonumber \end{align} where $\phi$ is the quadratic function defined by $\phi(x) := x^2 + (1-\gamma)x$. This also implies $\beta(t) = \dot V(t) + \phi(V(t))$. We then have \begin{equation} \label{eqn::A} A = \int_0^T \phi\bigl(V(t)\bigr)dt + V(T)-V(0) = \int_0^T V(t)^2 dt + \int_0^T (1-\gamma) V(t) dt + V(T) - V(0). \end{equation} The existence and uniqueness of solutions to \eqref{eqn::Y} are immediate from the Carath\'eodory existence theorem and the corresponding uniqueness conditions, provided $\beta$ is a measurable function from $[0,T]$ to $[\beta_{\mathrm{min}},\beta_{\mathrm{max}}]$. See \cite[Theorem 5.3]{hale1980ODE}. If the reader prefers not to consider this much generality, it is fine to focus on the case that $\beta$ is piecewise continuous (or even piecewise constant). \begin{figure} \caption{Trajectories $V(t)$ satisfying the constraint \eqref{eqn::Yconstraint}.} \label{fig::ychart} \end{figure} Now note that \eqref{eqn::logI} and the definition of $V$ imply that fixing the quadruple $\bigl( E(0), I(0), E(T), I(T) \bigr)$ modulo multiplicative constant is equivalent to fixing the triple $\bigl( V(0), V(T), \int_0^T V(t)dt \bigr)$. So we rephrase Problem~\ref{prob::activity} as an equivalent problem of maximizing \eqref{eqn::A} given this triple.
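These identities can be checked numerically. The sketch below (ours; $\gamma = 1/2$, the time horizon, and the step size are arbitrary illustrative choices) Euler-integrates \eqref{eqn::note}, confirms that $\int_0^T \beta(t)\,dt$ agrees with the right-hand side of \eqref{eqn::A}, and confirms that for constant $\beta \equiv b$ the velocity $V(t)$ approaches the positive root of $\phi(x) = b$, the stable fixed point of $\dot V = b - \phi(V)$:

```python
# Numerical check (ours) of two consequences of the computation above:
# (a) A = \int phi(V) dt + V(T) - V(0), and (b) for constant beta = b,
# V(t) = E(t)/I(t) converges to the positive root of phi(x) = b.
import math

gamma = 0.5
phi = lambda x: x * x + (1.0 - gamma) * x

def run(beta_fn, T=10.0, E0=0.01, I0=0.02, dt=1e-4):
    """Euler-integrate dot E = -E + beta*I, dot I = E - gamma*I; return
    (\int beta dt, \int phi(V) dt + V(T) - V(0), V(T))."""
    E, I = E0, I0
    A_direct, A_phi, V0 = 0.0, 0.0, E0 / I0
    for i in range(int(round(T / dt))):
        b = beta_fn(i * dt)
        V = E / I
        A_direct += b * dt
        A_phi += phi(V) * dt
        E, I = E + dt * (-E + b * I), I + dt * (E - gamma * I)
    return A_direct, A_phi + E / I - V0, E / I

# (a) the activity identity, with a piecewise-constant beta
Ad, Aid, _ = run(lambda t: 3.0 if t < 5.0 else 1.0)
assert abs(Ad - Aid) < 0.05          # equal up to discretization error

# (b) constant beta = b: V(t) tends to the positive root of x^2+(1-gamma)x = b
b = 2.0
vstar = ((gamma - 1.0) + math.sqrt((gamma - 1.0) ** 2 + 4.0 * b)) / 2.0
_, _, Vend = run(lambda t: b)
assert abs(Vend - vstar) < 1e-6
print("both checks pass")
```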
Since the latter three RHS terms are determined by the triple, this is equivalent to maximizing the first term $\int_0^T V(t)^2 dt$, hence equivalent to the following: \begin{prob} \label{prob::activity2} Given prescribed values for $V(0)$, $V(T)$, and $\int_0^T V(t) dt$, find a Lipschitz function $V$ that maximizes $\int_0^T V(t)^2dt$ subject to the constraint $\dot V(t) + \phi\bigl(V(t)\bigr) \in [\beta_{\mathrm{min}}, \beta_{\mathrm{max}}]$ for all $t$, or equivalently \begin{equation} \label{eqn::Yconstraint} \beta_{\mathrm{min}} - \phi(V(t)) \leq \dot V(t) \leq \beta_{\mathrm{max}} - \phi(V(t)). \end{equation} Alternative phrasing: let $t$ be a random variable chosen uniformly from $[0,T]$ and choose $V$ to maximize the variance of $V(t)$ given \eqref{eqn::Yconstraint} and a prescribed value for the expectation of $V(t)$, as well as $V(0)$ and $V(T)$. \end{prob} See Figure~\ref{fig::ychart} for an intuitive picture of what the $V(t)$ satisfying \eqref{eqn::Yconstraint} are like. For any $b > 0$ we write $\phi^{-1}(b)$ for the unique positive solution $x$ to $\phi(x) = b$. By the quadratic formula, $\phi^{-1}(b) = \frac{(\gamma-1) + \sqrt{(\gamma-1)^2 + 4 b}}{2}$. It is easily seen from \eqref{eqn::Y} that if we set $\beta(t) = b$ for all $t$, then $V(t)$ converges to $\phi^{-1}(b)$ as $t \to \infty$ (regardless of the initial value $V(0)$). For short, write $v_{\mathrm{min}}=\phi^{-1}(\beta_{\mathrm{min}})$ and $v_{\mathrm{max}} = \phi^{-1}(\beta_{\mathrm{max}})$. Note that if $V(0) \in [v_{\mathrm{min}}, v_{\mathrm{max}}]$ then $V(t) \in [v_{\mathrm{min}}, v_{\mathrm{max}}]$ for all time $t$, regardless of $\beta$. We also consider a constrained version: \begin{prob} \label{prob::activity3} Solve Problem~\ref{prob::activity} with the added constraint that $C_1 \leq \log I(t) \leq C_2$ for all $t \in [0,T]$.
\end{prob} As motivation, note that imposing an upper bound on $\log I(t)$ is a way to ensure that a health care system is not overwhelmed and to limit the daily risk assumed by individual workers. It is also a crude way to ensure that the overall number of infections does not become too large: perhaps $I(t) = e^{C_2}$ is about the level at which the price of infection becomes unacceptable. On the other side, if neighboring states and countries have not eliminated the disease, and are maintaining steady levels of infection, then cases may be reintroduced from those localities at some small but steady rate, which would effectively impose a lower bound on $\log I(t)$.\footnote{The upward drift on $\log I(t)$ caused by this influx, which is more pronounced when $\log I(t)$ is low, might be offset to some degree by contact tracing that is more effective when $\log I(t)$ is low. Other low-prevalence considerations (randomness, possible periods with no disease, large jumps due to superspreaders, etc.) are beyond the scope of this note. An alternative to the rigid lower bound is to add an extra fixed-prevalence subpopulation, in the language of Section~\ref{sec::subpopulations}, that is only weakly connected to the other subpopulations. An alternative to the rigid upper bound is to subtract a multiple of $\int_0^T I(t)dt$ from the objective function, which would heavily penalize larger $\log I(t)$ values but would not matter much for smaller $\log I(t)$ values.} \subsection{Physics analogy} \label{subsec::physics} As an instructive metaphor, interpret $X(t) := \log I(t)+\gamma t$ as the {\em position} of a rocket-powered car along a frictionless street, the derivative $\dot X(t) = V(t)$ as the {\em velocity}, and the second derivative $\dot V(t) = \beta(t) - \phi(V(t))$ as the {\em acceleration}.
Interpret $\beta(t)$ as an {\em internal force} applied via the gas pedal and $-\phi(V(t))$ as an {\em external force} which is a quadratic function of the velocity, accounting for wind resistance and/or gravity.\footnote{If $\gamma=1$, then $-\phi(V(t)) = -V(t)^2$ is the standard {\em quadratic drag} used to model wind resistance. If $\gamma \not =1$ then $\phi(V(t)) = \bigl( V(t) - \frac{\gamma-1}{2}\bigr)^2 - \bigl( \frac{\gamma-1}{2}\bigr)^2$, which corresponds to a prevailing wind of speed $\frac{\gamma-1}{2}$ and a street sloped to yield a velocity-independent gravitational force of $\bigl( \frac{\gamma-1}{2}\bigr)^2$. The metaphor breaks down if $V(t) < \frac{\gamma-1}{2}$ (i.e., if the windspeed is forward but the car is moving slower than the wind) since in this case the force from the wind is in the wrong direction.} Here $\beta_{\mathrm{min}}$ corresponds to the gas pedal not being pressed and $\beta_{\mathrm{max}}$ corresponds to a fully pressed pedal (there are no brakes). We can interpret $A$ as the total amount of fuel used\footnote{Assume fuel weight is small compared to car weight, so overall car weight does not change.} and imagine we are trying to waste as much fuel as possible subject to given boundary conditions. When we impose the constraints $C_1 \leq \log I(t) \leq C_2$ we interpret them as bounding the car between two trucks moving at constant speed. See Figures~\ref{fig::trucks} and~\ref{fig::examplechart}. \begin{figure} \caption{The car, with position $X(t) = \log I(t) + \gamma t$, bounded between two trucks moving at constant speed $\gamma$; this encodes the constraint $C_1 \leq \log I(t) \leq C_2$.} \label{fig::trucks} \end{figure} \begin{figure} \caption{An example trajectory in which $\beta(t)$ alternates between $\beta_{\mathrm{max}}$ and $\beta_{\mathrm{min}}$, timed so that $V(t) = \gamma$ exactly when $\log I(t)$ reaches $C_1$ or $C_2$.} \label{fig::examplechart} \end{figure} \section{Results} \subsection{Activity minimizers and maximizers} The following is immediate from the statement of Problem~\ref{prob::activity2} and the fact that the variance of a constant random variable is zero.
\begin{prop} \label{prop::worstpossible} In the setting of Problem~\ref{prob::activity2}, if one fixes $V(0) = V(T) \in [v_{\mathrm{min}},v_{\mathrm{max}}]$ and sets $\int_0^T V(t)dt = T\cdot V(0)$ then the {\em minimal} activity solution is the constant-velocity solution with $V(t) = V(0)$ for all $t$, which corresponds to $\beta(t) = \phi\bigl( V(0)\bigr)$ for all $t$. In other words, if one aims to maximize $A$, constant velocity strategies are the {\em worst possible}. \end{prop} If $V(0) \not = V(T)$, then it is not possible to make the velocity variance exactly zero; but the worst possible strategy is still the one that makes this variance as small as possible. Glancing at Figure~\ref{fig::ychart}, it is intuitively clear that if one wanted to {\em maximize} the variance of $V(t)$, given its mean and its initial and final values, then one would ideally want $V(t)$ to spend most of its time near $v_{\mathrm{min}}=.5$ or near $v_{\mathrm{max}}=1.5$, with as little time as possible spent transitioning between the two sides. One might guess that if $V(0) \not= V(T)$ the optimal strategy would be this: first move $V(t)$ as quickly as possible toward one side of $[v_{\mathrm{min}},v_{\mathrm{max}}]$ (the one accessible without crossing $V(T)$), then at some point move $V(t)$ as quickly as possible toward the other side, and then at some point move $V(t)$ as quickly as possible toward $V(T)$. This is correct and we formalize this as follows. \begin{prop} \label{prop::bestpossible} In the setting of Problem~\ref{prob::activity2}, if $V(T) > V(0)$, then any {\em optimal solution} has $\beta(t) = \beta_{\mathrm{max}}$ on a single interval, with $\beta(t) = \beta_{\mathrm{min}}$ before and after that. If $V(T) < V(0)$, then the optimal solution has $\beta(t) = \beta_{\mathrm{min}}$ on a single interval, with $\beta(t) = \beta_{\mathrm{max}}$ before and after that. If $V(T) = V(0)$ then there is an optimal solution of each of the two types mentioned above. 
\end{prop} This is proved by showing that if $V$ does {\em not} have the form described then one can modify it in a way that increases $\int V(t)^2dt$ while keeping $\int V(t)dt$ the same. See Appendix~\ref{app::proofs} for details. A similar argument is made for more general objective functions in Appendix~\ref{app:generalutility}. \begin{prop}\label{prop::wallconstrained} In the setting of Problem~\ref{prob::activity3}, with $\gamma \in (v_{\mathrm{min}},v_{\mathrm{max}})$, assume $\log I(0)$ and $\log I(T)$ are fixed values in $\{C_1,C_2 \}$ and $E(0)$ and $E(T)$ are fixed so that $V(0)=V(T)=\gamma$, as in Figure~\ref{fig::examplechart}. Then in any optimal solution, $\beta(t)$ alternates between $\beta_{\mathrm{min}}$ and $\beta_{\mathrm{max}}$ finitely many times, and the $V(t)$ graph (like the one in Figure~\ref{fig::examplechart}) has finitely many excursions away from $\gamma$, which alternately go above $\gamma$ ($\beta_{\mathrm{max}}$ on the way up, $\beta_{\mathrm{min}}$ on the way down) or below $\gamma$ ($\beta_{\mathrm{min}}$ on the way down, $\beta_{\mathrm{max}}$ on the way up). All of these excursions have maximal area (i.e., they correspond to $\log I(t)$ crossing from $C_1$ to $C_2$ or back) except possibly for two smaller-but-equal-area excursions (which together correspond to $\log I(t)$ crossing from one of $\{C_1, C_2 \}$ to an intermediate value and back). For more general boundary conditions, if, under an optimal solution, $\log I(t)$ hits $\{C_1,C_2\}$ at least once, and $t_1$ and $t_2$ are the first and last times this happens, then the restriction of $V$ to $[t_1,t_2]$ behaves as described above, while the restrictions to $[0,t_1]$ or $[t_2,T]$ each have the form described in Proposition~\ref{prop::bestpossible}. If $\log I(t)$ never hits a wall, then $V$ must be of the form described in Proposition~\ref{prop::bestpossible}.
\end{prop} Proposition~\ref{prop::wallconstrained} formalizes the notion, suggested by Figures~\ref{fig::trucks} and~\ref{fig::examplechart}, that when $T$ is large, the optimal long-term strategy is to alternate between maximal forward acceleration and maximal reverse acceleration, timing the accelerations so that the car's velocity reaches $\gamma$ exactly as it reaches each truck. On the other hand, one may have to break the Figure~\ref{fig::examplechart} pattern at a couple of turn-around points to ensure that the boundary conditions are satisfied. The proof is similar to the proof of Proposition~\ref{prop::bestpossible}. One checks that if $V(t)$ does not have the asserted form then it is possible to make modifications to increase $\int V(t)^2dt$ while keeping $\int V(t)dt$ the same {\em and} keeping $\log I(t)$ within bounds. See Appendix~\ref{app::proofs}. \subsection{Multiple subpopulations} \label{sec::subpopulations} Suppose there are several {\em disjoint} subpopulations (factory workers in Town A, factory workers in Town B, students/teachers in Town A, students/teachers in Town B, etc.) each of which has some amount of flexible activity that can be set independently. Suppose further that there is some interaction between members of different groups that does not depend on the level of flexible activity (e.g., because a student and a factory worker live in the same household). To formalize this, for $1 \leq k \leq n$ write \begin{equation} \label{eqn::note2} \dot E_k(t) = -E_k(t) + \beta_k(t) I_k(t) + \sum_{j \neq k} \alpha_{j,k} I_j(t), \,\,\,\,\,\,\,\, \,\,\,\,\, \dot I_k(t) = E_k(t) - \gamma I_k(t),\end{equation} where for each $k$ the process $\beta_k:[0,T] \to [\beta_{\mathrm{min}},\beta_{\mathrm{max}}]$ is a control parameter (a measurable function of $t$). 
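The bookkeeping in \eqref{eqn::note2} can be sanity-checked numerically. The Python sketch below (a toy two-group instance; the couplings, controls, and initial data are illustrative choices, not values from the paper) integrates the coupled system by forward Euler and verifies by finite differences that $V_k := E_k/I_k$ satisfies $\dot V_k = \beta_k - \phi(V_k) + \sum_{j \neq k} \alpha_{j,k} I_j/I_k$, the identity derived in the next display:

```python
import math

# Toy two-group instance of the coupled system (illustrative values only):
#   dE_k/dt = -E_k + beta_k I_k + sum_{j != k} alpha_{j,k} I_j
#   dI_k/dt =  E_k - gamma I_k
# Finite-difference check that V_k = E_k/I_k satisfies
#   dV_k/dt = beta_k - phi(V_k) + sum_{j != k} alpha_{j,k} I_j / I_k.
gamma = 1.0
phi = lambda x: x * x + (1.0 - gamma) * x
alpha = {(0, 1): 0.05, (1, 0): 0.08}       # alpha[(j, k)]: infection from j into k
beta = [lambda t: 1.2, lambda t: 0.4 + 0.2 * math.sin(t)]

T, n = 3.0, 150_000
dt = T / n
E, I = [0.02, 0.005], [0.01, 0.01]
max_err = 0.0
for step in range(n):
    t = step * dt
    b = [beta[k](t) for k in range(2)]
    V_old = [E[k] / I[k] for k in range(2)]
    rhs = [b[k] - phi(V_old[k]) + alpha[(1 - k, k)] * I[1 - k] / I[k]
           for k in range(2)]
    dE = [-E[k] + b[k] * I[k] + alpha[(1 - k, k)] * I[1 - k] for k in range(2)]
    dI = [E[k] - gamma * I[k] for k in range(2)]
    E = [E[k] + dt * dE[k] for k in range(2)]
    I = [I[k] + dt * dI[k] for k in range(2)]
    for k in range(2):
        fd = (E[k] / I[k] - V_old[k]) / dt  # one-step finite difference of V_k
        max_err = max(max_err, abs(fd - rhs[k]))
print(max_err)                              # small discretization error
```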
Letting $V_k(t) := E_k(t)/I_k(t)$, we find \begin{align} \label{eqn::Yk} \dot V_k(t) &= \frac{ \dot E_k(t) I_k(t) - \dot I_k(t)E_k(t)}{I_k(t)^2} \\ &= \frac{ \bigl(- E_k(t) + \beta_k(t)I_k(t) + \sum_{j \neq k} \alpha_{j,k} I_j(t)\bigr) I_k(t) - \bigl(E_k(t) - \gamma I_k(t) \bigr) E_k(t)}{I_k(t)^2} \nonumber \\ &= \frac{ -E_k(t)I_k(t) + \beta_k(t) I_k(t)^2 - E_k(t)^2 + \gamma I_k(t)E_k(t) + \sum_{j \neq k} \alpha_{j,k} I_j(t) I_k(t)}{I_k(t)^2} \nonumber \\ &= \beta_k(t) -V_k(t)^2 + (\gamma-1)V_k(t) + \sum_{j \neq k} \alpha_{j,k} I_j(t)/I_k(t) \nonumber \\ &= \beta_k(t) -\phi(V_k(t)) + \sum_{j \neq k} \alpha_{j,k} e^{X_j(t) - X_k(t)}, \nonumber \end{align} where again $\phi$ is the quadratic function defined by $\phi(x) := x^2 + (1-\gamma)x$. This also implies \begin{equation}\label{eqn::betak} \beta_k(t) = \dot V_k(t) + \phi(V_k(t))- \sum_{j \neq k} \alpha_{j,k} e^{X_j(t) - X_k(t)}.\end{equation} We then have \begin{align} \label{eqn::Ak} A &= \sum_{k=1}^n \Bigl( \int_0^T \Bigl( \phi\bigl(V_k(t)\bigr) - \sum_{j \neq k} \alpha_{j,k} e^{X_j(t) - X_k(t)}\Bigr) dt + V_k(T)-V_k(0) \Bigr) \\ &= \sum_{k=1}^n\Bigl( \int_0^T V_k(t)^2 dt + \int_0^T (1-\gamma) V_k(t) dt + V_k(T) - V_k(0) - \int_0^T \sum_{j \neq k} \alpha_{j,k} e^{X_j(t) - X_k(t)} dt\Bigr) . \nonumber \end{align} Removing the terms that depend only on the given boundary values, the objective becomes \begin{equation} \label{eqn::Ak2} \sum_{k=1}^n \int_0^T V_k(t)^2 dt - \sum_{k=1}^n \sum_{j \neq k} \int_0^T \alpha_{j,k} e^{X_j(t) - X_k(t)} dt, \end{equation} subject to the bounds on $\beta_k(t)$ and whatever initial and final values for the $V_k$ and $X_k$ are assumed. If we assume further that $\alpha_{j,k} = \alpha_{k,j}$ (which might make sense if the subpopulations are of similar size) then we find that $A$ is a constant plus \begin{equation} \label{eqn::Ak3} \sum_{k=1}^n \int_0^T V_k(t)^2 dt - \sum_{k=1}^n \sum_{j \neq k} \int_0^T \alpha_{j,k} \cosh(X_j(t) - X_k(t)) dt.
\end{equation} The first term is a measure of policy oscillation: it is large if the velocities $V_k(t)$ have large swings. The latter term is a measure of policy coordination: it is largest if prevalence does not differ too much from one subpopulation to another. The fact that $A$ is equal to (a constant plus) \eqref{eqn::Ak3} can be summarized in English as follows: {\bf once boundary values are fixed, activity is largest when policies are {\em coordinated} but {\em oscillatory}.} On the other side, one might expect that when $T$ is large, activity-{\em minimizing} policies could involve traveling for long stretches in ``migrating bird'' patterns like the one in Figure~\ref{fig::vcars}, where only one subpopulation (or a small number of them) is substantially active, and others acquire small amounts of infection at a steady rate from the active subpopulations, despite being themselves relatively inactive. \begin{figure} \caption{A ``migrating bird'' pattern: one subpopulation remains substantially active while the others, though relatively inactive, acquire small amounts of infection from it at a steady rate.} \label{fig::vcars} \end{figure} \section{Discussion} There are many factors we have not considered: implementation costs, inoculum size, contact tracing, herd immunity effects (which may be significant in subpopulations even if overall prevalence is low), unpredictable super-spreader events at low prevalence (perhaps quickly boosting cases from 1 per million to 100 per million), regional disease-elimination opportunities, subpopulation differences, etc. But at least within the models presented here, coordinated suppression (followed by relaxation and resuppression as needed) appears superior to temporal consistency and subpopulation variability. As noted in \cite{lockdownscount2020}, the benefits of staggering flexible activity are smaller when there is less flexible activity to stagger---but larger when more realistic incubation/infectious period distributions are incorporated into the model. These findings recall the clich\'e that if everyone perfectly distanced for three weeks the disease would disappear.
The clich\'e does not take into account that some contact cannot be eliminated, some infections last unusually long, etc. But this paper shows that within simple models that do account for these things, the same principle applies: people enjoy more contact overall when their activity is coordinated. It may be {\em especially inefficient} for some subpopulations to tightly close for the long term while others remain open enough to maintain a steady disease prevalence, as in the V-shaped pattern from Figure~\ref{fig::vcars}. We remark that one can imagine this type of pattern arising with no government action at all --- e.g., if individuals voluntarily reduce activity once prevalence nears a threshold, but that threshold differs among subpopulations. It could also arise if subpopulations pull in opposite directions due to differing preferences or needs; perhaps $\beta_{\mathrm{min}}$ and $\beta_{\mathrm{max}}$ differ from group to group, or perhaps some prefer, all things considered, to acquire substantial herd immunity through infection, while others prefer to keep prevalence low. There are many social, political and game theoretic issues we won't discuss. Instead, we conclude by reiterating our main point: within the simple SEIR-based models discussed here, the {\em amount} of activity a society enjoys is higher when the activity is {\em staggered} and {\em coordinated}. \begin{appendix} \section{Proofs} \label{app::proofs} \begin{proof}[Proof of Proposition~\ref{prop::bestpossible}] It is not hard to see that the differentiable functions $V:[0,T] \to (0,\infty)$ that satisfy \eqref{eqn::Yconstraint} are precisely those that (in the language of Figure~\ref{fig::ychart}) never cross a blue curve from above to below and never cross a red curve from below to above. More generally (without assuming differentiability of $V$) one may take this as a formal definition of what it {\em means} to satisfy \eqref{eqn::Yconstraint}.
Such functions are Lipschitz and hence differentiable outside of a set of Lebesgue measure zero by Rademacher's theorem, but $V$ may have points of non-differentiability, since $\beta$ may have discontinuities. Taking this view, it is clear that the set of $V$ that satisfy these constraints---and have the given values of $V(0)$, $V(T)$, and $\int_0^T V(t)dt$ --- is compact w.r.t.\ the $L^\infty$ norm (precompactness follows from Arzel\`a-Ascoli and the constraint is clearly preserved under $L^\infty$ limits). Suppose the given values for $V(0)$, $V(T)$, and $\int_0^T V(t)dt$ are such that there exists at least one $V$ with these values that satisfies \eqref{eqn::Yconstraint}. (The proposition statement holds trivially otherwise.) Then the existence of an {\em optimal} $V(t)$ follows from the continuity of $\int_0^T V(t)^2dt$ w.r.t.\ the $L^\infty$ norm and the above-mentioned compactness. We aim to show that any such $V$ has the form described in the proposition statement. Given any $V$ satisfying \eqref{eqn::Yconstraint}, we define a point $t \in (0,T)$ to be {\em taut} if either $\beta(t) = \beta_{\mathrm{max}}$ a.e.\ in a neighborhood of $t$ or $\beta(t) = \beta_{\mathrm{min}}$ a.e.\ in a neighborhood of $t$. In other words, $t$ is taut if $V(t)$ traces a blue curve or a red curve in a neighborhood of $t$. We say that $t$ is a {\em sharp peak} if $\beta(t) = \beta_{\mathrm{max}}$ a.e.\ in $(t_1,t)$ and $\beta(t) = \beta_{\mathrm{min}}$ a.e.\ in $(t,t_2)$ for some $t_1<t<t_2$. In other words, $V(t)$ traces a red curve to the left of $t$ and a blue curve to the right. Define a {\em sharp valley} analogously. Call $t$ {\em upward-flexible} if it is neither taut nor a sharp peak. If $t$ is upward-flexible then one can modify $V(t)$ (shifting it ``upward'') in any small neighborhood of $t$ in a way that increases $\int_0^T V(t)dt$ by any sufficiently small amount while respecting \eqref{eqn::Yconstraint}. 
This can be done for example by replacing $V$ with the supremum of $V$ and a function that is taut except for a sharp peak that lies just above $V$ near $t$. The analogous statement holds if $t$ is {\em downward-flexible}, i.e., neither taut nor a sharp valley. Call a point {\em doubly flexible} if it is both upward and downward flexible. Note that if $V(s) > V(t)$, and $s$ is upward-flexible and $t$ is downward-flexible, then one can increase $V$ in a neighborhood of $s$ and compensate by decreasing $V$ in a neighborhood of $t$ in a way that increases $\int_0^T V(t)^2dt$ while keeping $\int_0^T V(t)dt$ the same. We conclude from this that if $V$ is optimal, the supremum of $V(s)$ over upward-flexible $s$ is at most the infimum of $V(s)$ over downward flexible $s$. In other words, there exists a $v$ such that all points on the graph of $V$ below height $v$ are either taut or sharp valleys, and all points on the graph of $V$ above height $v$ are either taut or sharp peaks. Similar arguments show that $V(t)$ cannot be locally constant at $v$ unless $v \in \{v_{\mathrm{min}}, v_{\mathrm{max}} \}$ (in which case the height $v$ is crossed only once). If $V$ is optimal and {\em monotone non-increasing} (i.e., in a region where the red and blue curves are downward---this can happen in Figure~\ref{fig::ychart} if $V(0)$ and $V(T)$ are both greater than $v_{\mathrm{max}}$) then the above implies that there can be at most one sharp peak above $v$ (where $\beta(t)$ changes from $\beta_{\mathrm{max}}$ to $\beta_{\mathrm{min}}$) and at most one sharp valley below $v$ (where $\beta(t)$ changes from $\beta_{\mathrm{min}}$ to $\beta_{\mathrm{max}}$) and no doubly flexible points, which implies the proposition statement in this special case (similarly if $V$ is monotone non-decreasing). 
If $V$ is not monotone, it still follows from the above that any excursions away from the horizontal line of height $v$ are either $V$-shaped (blue curve going down, red curve going up) or $\Lambda$-shaped (red curve going up, blue curve coming down). Define an {\em occupation measure} $\nu$ by letting $\nu(S)$ denote the Lebesgue measure of $\{t \in [0,T]: V(t) \in S \}$. Clearly the occupation measure determines $\int_0^T V(t)dt$ and $\int_0^T V(t)^2dt$. So if $V$ is optimal then any $W$ with the same occupation measure, which also satisfies \eqref{eqn::Yconstraint}, must be optimal as well. Now we will argue that if $V$ crosses any given horizontal line (i.e., passes from above to below or vice versa) more than twice then $V$ is not optimal. The idea is explained in Figure~\ref{fig::movingpeak}. Suppose (for sake of getting a contradiction) that there exists an interval $(t_1,t_2) \subset (0,T)$ such that $V(t_1) = V(t_2)$ and $V(t)>V(t_1)$ for $t \in (t_1,t_2)$. Suppose that there also exists another interval $(t_3,t_4) \subset (0,T)$, of positive distance from $(t_1,t_2)$, such that either $V(t_3)=V(t_1)$ or $V(t_4)=V(t_1)$ and $V(t)>V(t_1)$ for $t \in (t_3,t_4)$. Then, as explained in Figure~\ref{fig::movingpeak}, one can ``rearrange'' $V$ to create a $W$ with the same occupation measure but with doubly flexible points of different heights. The first steps in the figure should be self-explanatory, but the final ``flattening then $N$'' step requires explanation. Suppose generally that there is an interval $[a,b]$ such that on this interval $V$ achieves its minimum at $a$ and its maximum at $b$ but $V$ is not monotone on $[a,b]$. We can then construct the unique continuous function $W(t)$ that agrees with $V$ outside of $(a,b)$, that has the same occupation measure as $V$, and that is monotone non-decreasing.
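In discrete form this rearrangement is simply sorting: the sorted sample path spends the same amount of time at each height, so the Riemann sums for $\int V(t)dt$ and $\int V(t)^2dt$ are unchanged, while the result is monotone non-decreasing. A small Python sketch (the sample path is an arbitrary illustration, not part of the proof):

```python
import math

# Discrete analogue of the monotone rearrangement: sorting the sampled values
# of V preserves the occupation measure, hence the Riemann sums of V and V^2.
# The sample path below is an arbitrary non-monotone illustration.
n, T = 10_000, 3.0
dt = T / n
V = [1.0 + 0.4 * math.sin(5 * i * dt) for i in range(n)]   # non-monotone path
W = sorted(V)                                              # same occupation measure

assert abs(sum(V) * dt - sum(W) * dt) < 1e-9               # same int V dt
assert abs(sum(v * v for v in V) - sum(w * w for w in W)) * dt < 1e-9
assert all(W[i] <= W[i + 1] for i in range(n - 1))         # W non-decreasing
print("occupation-measure invariants hold")
```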
In other words, $W$ ``spends the same amount of time at each vertical height'' as $V$ but $W$ is non-decreasing (so it visits the heights strictly in increasing order). It is then clear that $W$ satisfies \eqref{eqn::Yconstraint} and also that $W$ has non-extremal slope at points of different heights, so that $W$ (and hence $V$) is suboptimal, which is the desired contradiction. \begin{figure} \caption{Rearranging $V$ to produce a $W$ with the same occupation measure but with doubly flexible points at different heights.} \label{fig::movingpeak} \end{figure} Now suppose $V(T) > V(0)$. Since $V$ crosses every horizontal line at most twice, $V$ must be monotone non-increasing between time $0$ and the first time $t_1$ at which $V$ reaches its minimum, monotone non-decreasing between $t_1$ and the first time $t_2$ at which $V$ reaches its maximum, and monotone non-increasing between $t_2$ and $T$. (We allow for the degenerate possibility that $0=t_1$ or $t_2 = T$.) Since $V$ has at most one height at which doubly flexible points occur, the above conditions (and the fact that $V$ cannot be locally constant) imply that $V$ cannot have any doubly flexible points, and indeed must have the form required in the proposition statement. A similar argument applies if $V(T) < V(0)$. If $V(T) = V(0)$, the same argument shows that $V$ must have one upward and one downward excursion away from its initial position---but these excursions can be made in either order, so there are two equally optimal solutions in this case. (The one that puts the downward excursion first would yield fewer {\em infections} but these are not included in the definition of $A$.)
\end{proof} \begin{proof}[Proof of Proposition~\ref{prop::wallconstrained}] The arguments in the proof of Proposition~\ref{prop::bestpossible} imply that each excursion of $V$ above $\gamma$ must attain the maximal possible slope (with $\beta(t)=\beta_{\mathrm{max}}$) on an initial interval and the minimal possible slope (with $\beta(t)=\beta_{\mathrm{min}}$) for the rest of the excursion; in other words, in the language of Figure~\ref{fig::ychart}, its graph is a concatenation of a red curve and a blue curve. If this were not the case, we could apply the procedures in the proof of Proposition~\ref{prop::bestpossible} to this interval and produce a $W$ with higher second moment. (Note that if we apply these procedures to an interval on which $V(t)>\gamma$, they do not change the fact that $\log I(t)$ is increasing over the course of that interval and they do not change the amount of increase, so they cannot lead to a violation of the $C_1 \leq \log I(t) \leq C_2$ constraint.) Similarly, each excursion of $V$ below $\gamma$ consists of a maximally downward segment (blue) followed by a maximally upward segment (red). Let us take the same argument a bit further. Suppose $V(t)$ is optimal and suppose that $\log I(t)$ does not hit $C_1$ or $C_2$ during an interval $[s_1, s_2]$. Then we claim that every local maximum (minimum) of $V$ in $(s_1,s_2)$ must be a global maximum (minimum). First observe that the corresponding control must satisfy $\beta(t) \in \{\beta_{\mathrm{min}},\beta_{\mathrm{max}}\}$ almost everywhere within $(s_1,s_2)$, since otherwise (sufficiently small) perturbations like those described in the proof of Proposition~\ref{prop::bestpossible} would increase $\int_0^T V(t)^2 dt$ without violating the $C_1$ and $C_2$ conditions. Next, suppose that a local (but not global) maximum is obtained at $s$.
Then (since $V$ is not locally constant) by choosing arbitrarily small $\epsilon$, we can arrange so that the component of $\{t: V(t) > V(s) - \epsilon \}$ containing $s$ is arbitrarily small but non-empty. We can then ``redistribute'' the local time corresponding to that component elsewhere, as in Figure~\ref{fig::movingpeak}, to produce a $W$ with the same occupation measure as $V$, and if $\epsilon$ is small enough this redistribution will not change the fact that $C_1$ and $C_2$ fail to be hit, but it will also produce a positive mass of places where $\beta(t) \not \in \{\beta_{\mathrm{min}}, \beta_{\mathrm{max}} \}$, which enables an improvement to $\int_0^T V(t)^2 dt$, which is a contradiction. We conclude from this that within $[s_1,s_2]$, the function $V$ must take the form described in Proposition~\ref{prop::bestpossible}. By taking limits, we find that the same is true if either or both of $\log I(s_1)$ and $\log I(s_2)$ lie in $\{C_1, C_2 \}$ but $\log I(s) \not \in \{C_1, C_2 \}$ for $s \in (s_1, s_2)$. If $\log I(s_1) = \log I(s_2) = C_1$ then we must have $V(s_1) = V(s_2) = \gamma$ and in between $s_1$ and $s_2$ (recalling the statement of Proposition~\ref{prop::bestpossible}) $V$ makes one upward and one downward excursion away from $\gamma$, and the two regions between these excursions and the horizontal line at height $\gamma$ have equal area, as in Figure~\ref{fig::examplechart}. This area must be strictly less than $C_2-C_1$ if $C_2$ is never hit in $(s_1,s_2)$. In this case (and the analogous case with the roles of $C_1$ and $C_2$ reversed) we refer to $(s_1, s_2)$ as a ``single wall excursion'' (since the same element of $\{C_1,C_2 \}$ is hit at both endpoints).
Similarly, if $\log I(s_1) = C_1$ and $\log I(s_2) = C_2$ (with $\log I(t)$ avoiding $\{C_1, C_2\}$ for $t \in (s_1,s_2)$) then between $s_1$ and $s_2$ the function $V$ must make a single positive excursion enclosing the maximal possible area above $\gamma$, namely area $C_2 - C_1$, precisely as in Figure~\ref{fig::examplechart}. In this case we refer to $(s_1,s_2)$ as a ``double wall excursion.'' Next, we will argue that if there are {\em two} single wall excursions, then one can make one of the excursions bigger and the other one smaller in a way that does not change $\int_0^T V(t)dt$ or $\int_0^T V(t)^2dt$ but that produces a new function that is visibly suboptimal (a contradiction, since the new function has the same, supposedly maximal, value of $\int_0^T V(t)^2dt$). One of the two single wall excursions (call it ``smaller'') must have size less than or equal to that of the other (larger) one. As we have done before (in Figure~\ref{fig::movingpeak}) we can then ``move mass'' from near the tips of the corresponding upper/lower excursions of $V$ (away from $\gamma$) in the smaller one to the upper/lower excursions of $V$ (away from $\gamma$) in the larger one in a way that produces a new function that is suboptimal, and this yields the contradiction. We conclude that there is at most one single wall excursion, and the rest of the proposition follows. \end{proof} \section{More general utility functions} \label{app:generalutility} \subsection{Setup and motivation} We now generalize our original setup to account for crowding effects. In this setting, we consider two kinds of activity: first, $\mu(t)$ is the amount of {\em useful activity} taking place at time $t$. Informally, think of $\mu(t)$ as encoding the (risk-weighted) number of conversations, restaurant meals, haircuts, etc. Second, $\beta(t)$ is the amount of {\em transmission activity} that drives the ODE \eqref{eqn::note}.
Instead of making these quantities equal as before, we now assume they are related by $\mu(t) = u(\beta(t))$ where $u$ is increasing and continuous but potentially {\em non-linear}. We then denote the {\em total useful activity} by $U =\int_0^T \mu(t)dt = \int_0^T u(\beta(t)) dt$ and assume that our goal is to maximize $U$, instead of maximizing $A = \int_0^T \beta(t)dt$.\footnote{In the car analogy, $\mu(t)$ is fuel use, $\beta(t)$ is force, but the relationship is non-linear. Maximizing fuel use is again the goal.} One way to motivate this is to suppose that some fraction of the disease transmission $\beta(t)$ comes from {\em deliberate close interaction} (i.e., is spread between friends or coworkers choosing to engage in a valuable activity together) and that the rest comes from infectious air lingering in public places (hallways, subway cars, etc.). The former might be linear in $\mu(t)$ (twice as many conversations means twice as many chances for spread) but the latter might be quadratic in $\mu(t)$ (if the density of infectious particles in the air and the number of people inhaling them are both linear in $\mu(t)$). Combining the two effects, we might find $\beta(t) = \psi( \mu(t)):= a_1 \mu(t) + a_2 \mu(t)^2$ where $a_1$ and $a_2$ are positive constants, and taking the positive inverse, \begin{equation}\label{eqn::ub} u(x) = \psi^{-1}(x)= \frac{-a_1 + \sqrt{a_1^2 + 4a_2x}}{2a_2}.\end{equation} As a concrete example, suppose $a_1 = 3/4$ and $a_2=1/4$. If $\beta(t)=\mu(t)=1$ then $25$ percent of the transmission comes from ``lingering air'' (the quadratic term). If $\beta(t)=2.25$ then (after solving for $\mu(t)$) about 38 percent of the transmission comes from lingering air; if $\beta(t)=.25$ it is only about $9$ percent. In this example, ``crowding effects'' play a larger role when activity is higher.\footnote{The assumption that $\mu(t)$ and $\beta(t)$ determine one another via a single function $u$ is a simplification.
In principle, the relationship between $\mu(t)$ and $\beta(t)$ could vary in time. For example, perhaps deliberate interaction is a larger factor for weekend activities and lingering air is a larger factor on weekdays, so that the same $u$ cannot be used for both.} We stress that $\mu(t)$ is a measure of the {\em amount} of useful activity---defined in a way that ensures $\beta(t) = u^{-1}(\mu(t))$. It is {\em not} a measure of the {\em value} of the activity. If a more-valued activity causes the same amount of disease transmission as a less-valued activity, then it will make the same contribution to $\mu(t)$. We allow for the possibility that in scenarios where $U$ is kept small, only very important activity will be allowed, but in scenarios where $U$ is large, more discretionary activity will occur. We only assume that utility is an {\em increasing function} of $U$ so that maximizing $U$ is a reasonable objective. We can generalize Problem~\ref{prob::activity} by replacing $A$ with \begin{equation} \label{eqn::Udef} U = \int_0^T u\Bigl( \beta(t)\Bigr)dt = \int_0^T u \Bigl( \dot V(t) + \phi(V(t)) \Bigr)dt,\end{equation} where $u$ is some fixed twice-differentiable function, and as before we write $\phi(x) = x^2 + (1-\gamma)x$. The setup in Problem~\ref{prob::activity} amounts to taking $u(b) = b$ for $b \in [\beta_{\mathrm{min}},\beta_{\mathrm{max}}]$. Now let us express \eqref{eqn::Udef} a different way. Write \begin{equation}\label{eqn::Gy} G_y(x) := u(x+\phi(y)) - u(\phi(y)) - u'(\phi(y))x. \end{equation} Observe that for all $y$ we have $G_y(0) = G_y'(0)=0$. If $u$ is (strictly) concave then (for fixed $y$) $G_y$ is (strictly) concave, and has a maximum at $0$. To ensure that \eqref{eqn::Gy} makes sense for relevant inputs, it will be convenient for us to extend the definition of $u$ beyond $[\beta_{\mathrm{min}},\beta_{\mathrm{max}}]$ to the full range of $\phi$ (which is all of $[0,\infty)$ when $\gamma=1$) in such a way that $u$ remains concave.
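The percentages in the crowding example above are easy to recompute. The Python sketch below (with $a_1 = 3/4$, $a_2 = 1/4$ as in that example) inverts $\psi$ via \eqref{eqn::ub} and reports the share $a_2\mu(t)^2/\beta(t)$ of transmission attributable to the quadratic ``lingering air'' term:

```python
import math

# Recompute the lingering-air shares for the concrete example a1=3/4, a2=1/4:
# mu = psi^{-1}(beta) with psi(mu) = a1*mu + a2*mu^2, and the quadratic term
# contributes a2*mu^2 / beta of the total transmission.
a1, a2 = 0.75, 0.25
u = lambda x: (-a1 + math.sqrt(a1 * a1 + 4.0 * a2 * x)) / (2.0 * a2)  # psi^{-1}

for beta in (1.0, 2.25, 0.25):
    mu = u(beta)
    share = a2 * mu * mu / beta        # fraction from the quadratic term
    print(f"beta={beta}: lingering-air share = {share:.0%}")
# prints shares of 25%, 38%, and 9%, matching the text
```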
It does not matter exactly how we do this, but one natural approach is to assume $u$ is differentiable everywhere but affine---or strictly concave but nearly affine---outside of $[\beta_{\mathrm{min}},\beta_{\mathrm{max}}]$. Now we can rewrite \eqref{eqn::Gy} with $x = \dot V(t)$ and $y = V(t)$ to obtain \begin{equation} u \Bigl( \dot V(t) \!+ \! \phi(V(t)) \Bigr) \!=\! G_{V(t)}\bigl(\dot V(t)\bigr) + u\bigl(\phi(V(t))\bigr) + u'\bigl(\phi(V(t))\bigr)\dot V(t) \end{equation} and substituting this into \eqref{eqn::Udef} yields \begin{equation} U= \int_0^T u \Bigl( \dot V(t) \!+ \! \phi(V(t)) \Bigr)dt \!=\! \int_0^T \Bigl( G_{V(t)}\bigl(\dot V(t)\bigr) + u\bigl(\phi(V(t))\bigr) + u'\bigl(\phi(V(t))\bigr)\dot V(t) \Bigr)dt .\end{equation} The final RHS term can be written $u'\bigl(\phi(V(t))\bigr)\dot V(t) = r'(V(t)) \dot V(t) = \frac{d}{dt} r(V(t))$ where $r'(x) = u'(\phi(x))$, i.e., $r(a) := \int_0^a u'(\phi(x))dx.$ Thus the final RHS term integrates to a quantity that depends only on $V(T)$ and $V(0)$. If we remove that, the objective becomes \begin{equation}\label{eqn::phi}\int_0^T u\bigl(\phi(V(t))\bigr) dt + \int_0^T G_{V(t)}\bigl(\dot V(t)\bigr) dt.\end{equation} Writing it this way, we have separated the objective into two pieces: the first term ascribes different benefits to different velocities via the function $u \circ \phi$. The second (non-positive) term ascribes costs to non-zero acceleration rates (in a manner that also depends on velocity). We can now formulate the problem in the language of Problem~\ref{prob::activity2} as follows: \begin{prob} \label{prob::activityu} Given $V(0)$, $V(T)$, and $\int_0^T V(t) dt$, find a $V$ that maximizes \eqref{eqn::phi} subject to \eqref{eqn::Yconstraint}. \end{prob} Now suppose that $u(x)$ is concave in $x$ but $u(\phi(x))$ is strictly convex.
The latter holds if $u(x) = x$ but fails if $u$ is ``too concave.'' To illustrate what this means, consider the $\gamma=1$ case.\footnote{In a probabilistic formulation of the SEIR model, setting $\gamma=1$ corresponds to assuming that the incubation time and the infectious time are independent exponential random variables with the same rate $1$. If $f(x) = x e^{-x}$ is the density function for the {\em sum} of these positive random variables then $\int_a^b f(t) \beta(t) dt$ is the expected number of people infected between time $a$ and $b$ by a person infected at time $0$. If $\gamma$ is either very small or very large, then the corresponding $f$ is approximately exponential, and the model is effectively more like an SIR model; taking $\gamma$ close to one ensures that $f$ is more concentrated (a smaller standard deviation relative to its mean). If the true $f$ is {\em actually much more} concentrated than $xe^{-x}$ then the models of this paper are inadequate, and a different approach is needed (such as Erlang SEIR with a higher Erlang parameter, see \cite{lockdownscount2020}). Still, $\gamma=1$ might be the best approximation {\em within} the framework of this paper.} In this case $\phi(x) = x^2$ so $u(\phi(x)) = u(x^2)$ is strictly convex if $u(x) = x^\alpha$ for $\alpha \in (1/2,1]$, but not if $u(x) = x^\alpha$ for $\alpha \leq 1/2$. Note that $u(x^2)$ is also strictly convex if $u$ is as given in \eqref{eqn::ub}, for any positive $a_1$ and $a_2$. Since the mean of $V$ is fixed, the first term of \eqref{eqn::phi} is the worst possible if $V$ is constant (by Jensen's inequality). But the second term penalizes fluctuation (i.e., one pays a price for non-zero derivative) so these two factors work against each other. On the other hand, if $V$ varies slowly, the second term should not matter very much. Proposition~\ref{prop::worstwithu} below is a simple illustration of that point. 
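Both the convexity claims in this paragraph and the percentage figures in the earlier concrete example can be checked numerically. The sketch below (the sample points and tolerances are our own illustrative choices) uses second differences, with the $\psi$-inverse $u$ from \eqref{eqn::ub} at $a_1 = 3/4$, $a_2 = 1/4$:

```python
import math

# Numerical sanity checks: convexity of u(phi(x)) with gamma = 1 (phi(x) = x^2),
# and the "lingering air" shares from the earlier concrete example.
a1, a2 = 0.75, 0.25

def u(t):
    # u = psi^{-1}, the positive inverse of psi(m) = a1*m + a2*m^2
    return (-a1 + math.sqrt(a1**2 + 4 * a2 * t)) / (2 * a2)

def second_diff(f, x, h=1e-4):
    # central second difference approximating f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# u(x^2) = x^(2*alpha) for u(x) = x^alpha is strictly convex iff alpha > 1/2;
# for the psi-inverse u above, u(x^2) is convex even though u itself is concave.
for x in (0.5, 1.0, 2.0):
    assert second_diff(lambda t: (t**2) ** 0.75, x) > 0  # alpha = 3/4: convex
    assert second_diff(lambda t: (t**2) ** 0.4, x) < 0   # alpha = 2/5: concave
    assert second_diff(u, x) < 0                         # u itself concave
    assert second_diff(lambda t: u(t**2), x) > 0         # yet u(x^2) convex

# Re-derive the quadratic-term shares a2*mu^2/beta quoted earlier
share = lambda beta: a2 * u(beta) ** 2 / beta
assert abs(share(1.0) - 0.25) < 1e-9    # 25 percent
assert abs(share(2.25) - 0.382) < 1e-3  # about 38 percent
assert abs(share(0.25) - 0.092) < 1e-3  # about 9 percent
```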
\subsection{Best and worst policies} If $u$ is concave, then a rapidly fluctuating $\beta$ may yield a lower $U$ than a constant $\beta$ with the same mean, so that (in contrast to Proposition~\ref{prop::worstpossible}) constant strategies are not the worst possible. On the other hand, Proposition~\ref{prop::worstwithu} states that constant $V(t)$ is the worst possible (given the corresponding boundary data) among functions that {\em vary slowly} in the sense of having no Fourier modes of short wavelength. \begin{prop}\label{prop::worstwithu} Suppose that $u(x)$ is smooth and concave in $x$ but $u(\phi(x))$ is smooth and strictly convex. Then there exists a $C>0$ (independent of $T$ or the boundary data) such that constant-$V(t)$ strategies are the worst possible (i.e., $U$-minimizing, given constraints from Problem~\ref{prob::activityu}) among all differentiable $V(t)$ whose Fourier series decompositions on the interval $[0,T]$ include no mode with wavelength less than $C$. \end{prop} Note that this proposition holds trivially if $T < C$, and hence provides no information in that case. It does not rule out the possibility that constant strategies are optimal over very short time periods. \begin{proof} Write $v$ for the fixed value of $V(0)=V(T)= T^{-1}\int_0^T V(t)dt$. Let $L$ be the affine function tangent to $u\circ \phi$ at $v$, and write $\tilde u = u \circ \phi - L$. Then we can write the first term of \eqref{eqn::phi} as $\int_0^T L(V(t))dt + \int_0^T \tilde u (V(t))dt$.
Since $\int_0^T L(V(t))dt$ is fixed by the boundary data, we can ignore that term, so the objective becomes \begin{equation}\label{eqn::phi2}\int_0^T \tilde u \bigl(V(t)\bigr) dt + \int_0^T G_{V(t)}\bigl(\dot V(t)\bigr) dt.\end{equation} We can assume that $V(0) = V(T) \in (v_{\mathrm{min}},v_{\mathrm{max}})$ (since otherwise there would only be one or zero possible solutions with the same boundary values for $V$ and with $\int_0^T V(t)dt = T V(0)$, and the proposition statement would be trivially true). Thus, in \eqref{eqn::phi2} we need only evaluate $\tilde u(v)$ for $v \in (v_{\mathrm{min}},v_{\mathrm{max}})$ and $G_{v}(x)$ for $v \in (v_{\mathrm{min}},v_{\mathrm{max}})$ and $x$ in the {\em bounded range} of $\dot V(t)$ values possible when $V(t) \in [v_{\mathrm{min}},v_{\mathrm{max}}]$. Within this range, because of the convexity and smoothness assumptions, there exists a $c_1>0$ such that $\tilde u (x) \geq c_1 (x-v)^2$ and there exists a $c_2$ such that $0 \geq G_v(x) \geq - c_2 x^2$. Thus \begin{equation}\label{eqn::phi3}\int_0^T \tilde u \bigl(V(t)\bigr) dt + \int_0^T G_{V(t)}\bigl(\dot V(t)\bigr) dt \geq c_1 \int_0^T (V(t)-v)^2 dt - c_2 \int_0^T \dot V(t)^2 dt.\end{equation} If we write $V_k(t) = e^{k 2\pi i t /T}$ for the $k$th Fourier mode, then $\int_0^T |\dot V_k(t)|^2dt = (2\pi k/T)^2 \int_0^T |V_k(t)|^2dt$. As long as $(2\pi k/T)^2 \leq c_1/c_2$, \eqref{eqn::phi2} will be nonnegative if we set $V(t) = v+V_k$, and by orthogonality of the Fourier series, the same applies to any linear combination of Fourier modes $V_k$ such that $(2\pi k/T)^2 \leq c_1/c_2$, or equivalently $k/T \leq \sqrt{c_1/c_2}/(2\pi)$, which means that the wavelength $T/k$ satisfies $T/k \geq 2\pi \sqrt{c_2/c_1}$. \end{proof} \begin{prop} \label{prop::bestwithu} In the context of Proposition~\ref{prop::worstwithu}, the best possible ($U$-maximizing) $V(t)$ cannot cross any horizontal line more than twice in $(0,T)$.
If $V(0) < V(T)$ and we write $m:=\inf_{t \in [0,T]}V(t)$ and $M:=\sup_{t \in [0,T]}V(t)$, then $V$ must be monotone non-increasing between time $0$ and the first time it hits $m$, then monotone non-decreasing until the first time it hits $M$, then monotone non-increasing again until time $T$. (Similar statements hold if $V(0) > V(T)$ or $V(0)=V(T)$, cf.\ Proposition~\ref{prop::bestpossible}.) \end{prop} \begin{proof} The argument in Figure~\ref{fig::movingpeak} works exactly the same way in this setting; the only difference is that for the fourth ``flattening'' step, one can use Jensen's inequality to show that utility is {\em strictly larger} for the flattened curve than for the original. The flattening does not change the first term of \eqref{eqn::phi2}, but it makes the second term strictly larger. That is, we claim that if $W$ is a curve produced by flattening $V$ on $[s_1,s_2]$ then \begin{equation}\label{eqn::gvt} \int_{s_1}^{s_2} G_{V(t)} (\dot V(t))dt < \int_{s_1}^{s_2} G_{W(t)} (\dot W(t))dt.\end{equation} To see this, note that the fundamental theorem of calculus and the construction of $W$ imply \begin{equation}\label{eqn::vtwt} \int_{t\in[s_1,s_2]: V(t) \in (a,b)} \dot V(t) dt = \int_{t\in[s_1,s_2]: W(t) \in (a,b)} \dot W(t)dt.\end{equation} If $t$ is sampled uniformly from $[s_1,s_2]$ then there is an $F$ such that $\mathbb E[\dot V(t) | V(t)] = F(V(t))$ and since \eqref{eqn::vtwt} holds for any $(a,b)$ this implies $\mathbb E[\dot W(t) | W(t)] = F(W(t))$.
On the other hand, since $\dot V(t)$ assumes negative values with positive probability and $\dot W(t)$ is conditionally deterministic given $W(t)$, we have a strict inequality on conditional variance: $$\mathbb E\bigl[\text{Var}[\dot V(t) | V(t)]\bigr] > \mathbb E \bigl[\text{Var}[\dot W(t) | W(t)]\bigr].$$ Since there is (on the range of inputs possible here) a negative upper bound on the second derivative of $G_{V(t)}$ we deduce that $$\mathbb E \Bigl[ \mathbb E \bigl[G_{V(t)} (\dot V(t)) | V(t)\bigr] \Bigr] < \mathbb E \Bigl[ \mathbb E\bigl[G_{W(t)} (\dot W(t)) | W(t)\bigr] \Bigr],$$ which implies \eqref{eqn::gvt}. This argument shows that $V$ cannot cross any horizontal line more than twice. The rest of the proposition statement easily follows from this. \end{proof} \begin{figure} \caption{\label{fig::usefulexamplechart}} \end{figure} \subsection{Euler-Lagrange solutions} In principle, one can find an optimal $V$ explicitly using Euler-Lagrange theory. We briefly sketch the idea here. Consider an interval $(s, s+\Delta)$ on which $V$ is known to increase monotonically from $x_1$ to $x_2$ and assume $\int_s^{s+\Delta}V(t)dt = \Lambda$. Once $\Lambda$ and $\Delta$ are fixed, one can fix any constants $a$ and $b$ and aim to maximize \begin{equation}\label{eqn::phiw}\int_s^{s+\Delta} w\bigl( V(t) \bigr) dt + \int_s^{s+\Delta} G_{V(t)}\bigl(\dot V(t)\bigr) dt\end{equation} where $w(v):=u \circ \phi(v) + av + b$. The extra $av+b$ terms do not affect the optimal solution, since the amount that they add to \eqref{eqn::phiw} is determined by $\Lambda$ and $\Delta$. However, one can also let $\Lambda$ and $\Delta$ be {\em variable} parameters and then try to tune $a$ and $b$ so that the optimizer to \eqref{eqn::phiw} obtains the desired values.
During the interval $(s,s+\Delta)$ we interpret $h(x) := \dot V(V^{-1}(x))$ as the ``speed'' at which $x$ is passed through, for $x \in (x_1, x_2)$, so that $1/h(x)$ is the density function for the occupation measure at $x$, and the goal becomes to maximize $\int_{x_1}^{x_2} \frac{G_x( h(x)) + w(x)}{h(x)}dx$. We can then use calculus to find (for each $x$) the $h(x)$ that optimizes the integrand, recalling the constraints on $h(x)$ from \eqref{eqn::Yconstraint}. If the optimizer is unique for each $x$, this determines the function $h$. Once $h$ is known, solving the ODE $\dot V(t) = h(V(t))$ allows us to produce analogs of the red curves in Figure~\ref{fig::ychart} that dictate the way $V$ evolves during its upward trajectories. We can treat decreasing intervals similarly, obtaining analogs of the blue curves in Figure~\ref{fig::ychart}. In general, finding $a$ and $b$ is a tricky optimization problem; however, if we {\em assume} or {\em guess} that the optimal $V$ has an interval on which $V(t)$ is close to $v_{\mathrm{min}}$ (and hence $\dot V(t) \approx 0$) and an interval on which $V(t)$ is close to $v_{\mathrm{max}} $, then we can deduce that $w$ must be close to zero at $v_{\mathrm{min}}$ and $v_{\mathrm{max}}$ (since otherwise one could increase \eqref{eqn::phiw} by either prolonging or condensing these intervals) which determines approximately what $a$ and $b$ must be. (It is not hard to see that---if the mean of $V$ and its endpoints in $(v_{\mathrm{min}}, v_{\mathrm{max}})$ are held fixed---this assumption is correct if $T$ is large enough, but incorrect for smaller $T$.) Once $a$ and $b$ are known---and analogs of the blue and red curves in Figure~\ref{fig::ychart} are drawn---the problem of figuring out where the ``turnarounds'' occur is essentially the same here as in Proposition~\ref{prop::bestpossible}. This approach was used to produce Figure~\ref{fig::usefulexamplechart}.
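The pointwise optimization over the speed $h$ can be sketched numerically. In the following toy computation (the functions $u$, $\phi$ and the value $w(x) = -0.05$ are illustrative assumptions of ours, not the paper's calibrated choices), a grid search maximizes the integrand $(G_x(h)+w(x))/h$ at one level $x$:

```python
import math

# Grid-search sketch of the occupation-measure step: for a fixed level x,
# maximize (G_x(h) + w(x)) / h over the speed h > 0.
a1, a2 = 0.75, 0.25

def u(t):
    return (-a1 + math.sqrt(a1**2 + 4 * a2 * t)) / (2 * a2)

def du(t):
    return 1.0 / math.sqrt(a1**2 + 4 * a2 * t)

def phi(y):
    return y**2  # gamma = 1

def G(y, x):
    # G_y(x) = u(x + phi(y)) - u(phi(y)) - u'(phi(y)) x, as in the text
    return u(x + phi(y)) - u(phi(y)) - du(phi(y)) * x

def best_speed(x, w_x, hs):
    return max(hs, key=lambda h: (G(x, h) + w_x) / h)

grid = [0.01 * i for i in range(1, 200)]
h_star = best_speed(1.0, -0.05, grid)  # w(x) < 0: an interior optimum exists
assert 0.1 < h_star < 1.9              # optimizer is interior, not an endpoint
```

Note that when $w(x) > 0$ the unconstrained ratio blows up as $h \to 0$ (one would linger at that level), which is why the constraints from \eqref{eqn::Yconstraint} matter; for $w(x) < 0$ the quadratic behavior of $G_x$ near $0$ produces an interior maximizer, as the grid search confirms.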
Although we will not give details, we expect the arguments in the proof of Proposition~\ref{prop::wallconstrained} to work in a general $u$ version of Proposition~\ref{prop::bestwithu}, enabling one to show that the long-term optimal $\log I$ oscillates between $C_1$ and $C_2$ in a similar fashion (with the blue and red curves coming from {\em some} choice of $a$ and $b$). If the $a$ and $b$ are different from the ones guessed in producing Figure~\ref{fig::examplechart}, then the shape of the turnarounds might be different as well. \end{appendix} \end{document}
\begin{document} \title[]{Operators of Harmonic Analysis in Variable Exponent Lebesgue Spaces. Two--weight Estimates} \author{Vakhtang Kokilashvili, Alexander Meskhi and Muhammad Sarwar} \keywords{} \maketitle \noindent{\bf Abstract.} In the paper two--weighted norm estimates with general weights for Hardy-type transforms, maximal functions, potentials and Calder\'on--Zygmund singular integrals in variable exponent Lebesgue spaces defined on quasimetric measure spaces $(X, d, \mu)$ are established. In particular, we derive integral--type easily verifiable sufficient conditions governing two--weight inequalities for these operators. If exponents of Lebesgue spaces are constants, then most of the derived conditions are simultaneously necessary and sufficient for appropriate inequalities. Examples of weights governing the boundedness of maximal, potential and singular operators in weighted variable exponent Lebesgue spaces are given. \vskip+0.2cm \noindent{\bf Key words:} Variable exponent Lebesgue spaces, Hardy transforms, fractional and singular integrals, quasimetric measure spaces, spaces of homogeneous type, two-weight inequality. \vskip+0.2cm \noindent{\bf AMS Subject Classification}: 42B20, 42B25, 46E30 \vskip+1cm \section*{Introduction} We study the two-weight problem for Hardy-type, maximal, potential and singular operators in Lebesgue spaces with non-standard growth defined on quasimetric measure spaces. In particular, our aim is to derive easily verifiable sufficient conditions for the boundedness of these operators in weighted $L^{p(\cdot)}(X)$ spaces which enable us to construct examples of appropriate weights effectively. The conditions are simultaneously necessary and sufficient for the corresponding inequalities when the weights are of special type and the exponent $p$ of the space is constant.
We assume that the exponent $p$ satisfies the local log-H\"older continuity condition and, if the diameter of $X$ is infinite, we suppose that $p$ is constant outside some ball. In the framework of variable exponent analysis such a condition first appeared in the paper \cite{Di1}, where the author established the boundedness of the Hardy--Littlewood maximal operator in $L^{p(\cdot)}({\Bbb{R}}^n)$. As far as we know, unfortunately, no analog of the log-H\"older decay condition (at infinity) is known for $p: X\to [1, \infty)$ even in the unweighted case, although such a condition is well known and natural for Euclidean spaces (see \cite{CrFiNe}, \cite{Ne}, \cite{CaCrFi}). The local log-H\"older continuity condition for the exponent $p$ together with the log-H\"older decay condition guarantees the boundedness of operators of harmonic analysis in $L^{p(\cdot)}({\Bbb{R}}^n)$ spaces (see e.g., \cite{CrFiMaPe}). Considerable interest has been attracted to the study of mapping properties of integral operators defined on (quasi-)metric measure spaces. Such spaces, with doubling measure and in all their generality, naturally arise when studying boundary value problems for partial differential equations with variable coefficients, for instance, when the quasi-metric might be induced by a differential operator, or tailored to fit kernels of integral operators. The problem of the boundedness of integral operators naturally arises also in the Lebesgue spaces with non-standard growth. Historically, the boundedness of the Hardy--Littlewood maximal, potential and singular operators in $L^{p(\cdot)}$ spaces defined on (quasi)metric measure spaces was derived in \cite{HaHaLa}, \cite{HaHaPe}, \cite{KoMe3}, \cite{KoMe5}, \cite{KoSa3}-\cite{KoSa7}, \cite{AlSa} (see also references cited therein).
Weighted inequalities for classical operators in $L^{p(\cdot)}_w$ spaces, where $w$ is a power--type weight, were established in the papers \cite{KoSa1}-\cite{KoSa3}, \cite{KoSa5}-\cite{KoSa7}, \cite{KoSaSa2}, \cite{EdMe}, \cite{SaVa}, \cite{SaSaVa}, \cite{DiSa} (see also the survey papers \cite{Sa3}, \cite{Ko}), while the same problems with general weights for Hardy, maximal, potential and singular operators were studied in \cite{EdKoMeJFSA}-\cite{EdKoMe2}, \cite{KoMe4}, \cite{KoSa3}, \cite{KoSa5}, \cite{Kop}, \cite{Di3}, \cite{AsiKoMe}, \cite{MaZe}, \cite{DiHa}. Moreover, in the paper \cite{DiHa} a complete solution of the one--weight problem for maximal functions defined on Euclidean spaces is given in terms of Muckenhoupt--type conditions. Finally, we notice that in the paper \cite{EdKoMe2} modular--type sufficient conditions governing the two--weight inequality for maximal and singular operators were established. It should be emphasized that in the classical Lebesgue spaces the two--weight problem for fractional integrals is already solved (see \cite{KoKr}, \cite{KoMi}), but it is often useful to construct concrete examples of weights from transparent and easily verifiable conditions. This problem for singular integrals still remains open. However, some sufficient conditions governing two--weight estimates for the Calder\'{o}n--Zygmund operators were given in the papers \cite{EdKo}, \cite{CrMaPe} (see also the monographs \cite{EdKoMe}, \cite{Vo} and references cited therein). To derive two--weight estimates for maximal, singular and potential operators we use the appropriate inequalities for Hardy--type transforms on $X$ (which are also derived in this paper). The paper is organized as follows: In Section 1 we give some definitions and auxiliary results regarding quasimetric measure spaces and the variable exponent Lebesgue spaces.
Section 2 is devoted to the sufficient conditions governing two--weight inequalities for Hardy--type transforms defined on quasimetric measure spaces, while in Section 3 we study the two--weight problem for potentials defined on quasimetric measure spaces. In Section 4 we discuss weighted estimates for maximal and singular integrals. Finally we point out that constants (often different constants in the same series of inequalities) will generally be denoted by $c$ or $C$. The symbol $f(x) \approx g(x)$ means that there are positive constants $c_1$ and $c_2$ independent of $x$ such that the inequality $ f(x) \leq c_1 g(x) \leq c_2 f(x)$ holds. Throughout the paper the symbol $p'(x)$ denotes the function $p(x)/ (p(x)-1)$. \section{Preliminaries} Let $X:=(X, d, \mu)$ be a topological space with a complete measure $\mu$ such that the space of compactly supported continuous functions is dense in $L^1(X,\mu)$ and there exists a non-negative real-valued function (quasi-metric) $d$ on $X\times X$ satisfying the conditions: \vskip+0.1cm \noindent(i) $d(x,y)=0$ if and only if $x=y$; \noindent(ii) there exists a constant $a_1> 0$ such that $d(x,y)\leq a_1(d(x,z)+d(z,y))$ for all $x,\,y,\,z\in X$; \noindent(iii) there exists a constant $a_0> 0$ such that $d(x,y) \leq a_0 d(y,x) $ for all $x,\,y\in X$. We assume that the balls $B(x,r):=\{y\in X:d(x,y)<r\}$ are measurable and $0\leq\mu (B(x,r))<\infty$ for all $x \in X$ and $r>0$; for every neighborhood $V$ of $x\in X,$ there exists $r>0,$ such that $B(x,r)\subset V.$ Throughout the paper we also suppose that $\mu \{x\}=0$ and that $$ B(x,R) \setminus B(x, r) \neq \emptyset \eqno{(1)}$$ for all $x\in X$ and all positive $r$ and $R$ with $0< r < R < L$, where $$L:= \hbox{diam}\,(X) = \sup\{ d(x,y): x,y\in X\}. $$ We call the triple $(X, d, \mu)$ a quasimetric measure space.
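As a concrete illustration of conditions (i)--(iii), the following sketch (a toy example of ours, not taken from the paper) verifies numerically that $d(x,y) = |x-y|^2$ on the real line is a quasimetric with $a_0 = 1$ and $a_1 = 2$ even though it fails the ordinary triangle inequality:

```python
import itertools
import random

# d(x, y) = |x - y|^2 is symmetric (so a0 = 1) and satisfies the
# quasi-triangle inequality with a1 = 2, since
# |x - y|^2 <= (|x - z| + |z - y|)^2 <= 2(|x - z|^2 + |z - y|^2).
def d(x, y):
    return abs(x - y) ** 2

random.seed(0)
pts = [random.uniform(-5, 5) for _ in range(50)]
for x, y, z in itertools.product(pts, repeat=3):
    assert d(x, y) <= 2 * (d(x, z) + d(z, y)) + 1e-12

# The ordinary triangle inequality genuinely fails: 4 > 1 + 1.
assert d(0, 2) > d(0, 1) + d(1, 2)
```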
If $\mu$ satisfies the doubling condition $ \mu(B(x, 2r))\leq c \mu (B(x,r))$, where the positive constant $c$ does not depend on $x\in X$ and $r>0$, then $(X, d, \mu)$ is called a space of homogeneous type $(\hbox{SHT})$. For the definition and some properties of an $SHT$ see, e.g., \cite{CoWe}, \cite{StTo}, \cite{FoSt}. A quasimetric measure space, where the doubling condition is not assumed and may fail, is called a non-homogeneous space. Notice that the condition $L<\infty$ implies that $\mu(X)<\infty$ because every ball in $X$ has a finite measure. We say that the measure $\mu$ is upper Ahlfors $Q$-regular if there is a positive constant $c_1$ such that $ \mu B(x,r) \leq c_1 r^Q $ for all $x\in X$ and $r>0$. Further, $\mu$ is lower Ahlfors $q$-regular if there is a positive constant $c_2$ such that $ \mu B(x,r) \geq c_2 r^q $ for all $x\in X$ and $r>0$. It is easy to check that if $L<\infty$, then $\mu$ is lower Ahlfors regular (see also, e.g., \cite{HaHaPe}). For the boundedness of potential operators in weighted Lebesgue spaces with constant exponents on non-homogeneous spaces we refer, for example, to the monograph \cite{EdKoMe1} (Ch. 6) and references cited therein. Let $p$ be a non--negative $\mu$-measurable function on $X$. Suppose that $E$ is a $\mu$-measurable set in $X$ and $a$ is a constant satisfying the condition $1<a<\infty$. Throughout the paper we use the notation: \begin{gather*} p_-(E):= \inf_{E} p; \;\; p_+(E):= \sup_{E} p; \;\; p_-:= p_-(X); \;\; p_+:= p_+(X);\\ \overline{B}(x,r):= \{ y\in X: \; d(x,y)\leq r \},\;\; kB(x,r):= B(x,kr); \;\; B_{xy}:=B(x,d(x,y)); \\ \overline{B}_{xy}:=\overline{B}(x,d(x,y)); \;\; g_{B}:=\frac{1}{\mu({B})}\int\limits_{B}|g(x)| d\mu(x). \end{gather*} \vskip+0.1cm Assume now that $1\leq p_-\leq p_+< \infty$.
The variable exponent Lebesgue space $L^{p(\cdot)}(X)$ (sometimes denoted by $L^{p(x)}(X)$) is the class of all $\mu$-measurable functions $f$ on $X$ for which $ S_p(f):= \int\limits_{X} |f(x)|^{p(x)} d\mu(x) < \infty. $ The norm in $L^{p(\cdot)}(X)$ is defined as follows: $$ \|f\|_{L^{p(\cdot)}(X)} = \inf \{ \lambda>0: S_{p}(f/\lambda) \leq 1 \}. $$ It is known (see e.g. \cite{KoRa}, \cite{Sa1}, \cite{KoSa1}, \cite{HaHaPe}) that the $L^{p(\cdot)}$ space is a Banach space. For other properties of $L^{p(\cdot)}$ spaces we refer to \cite{Sh}, \cite{KoRa}, \cite{Sa1}, \cite{Sa3}, \cite{Ko}, etc. \vskip+0.1cm Now we introduce several definitions: \vskip+0.1cm \begin{definition} Let $(X, d, \mu)$ be a quasimetric measure space and let $N \geq 1$ be a constant. Suppose that $p$ satisfies the condition $0<p_-\leq p_+<\infty$. We say that $p \in {\mathcal{P}}(N,x)$, where $x\in X$, if there are positive constants $b$ and $c$ (which might depend on $x$) such that $$ \mu(B(x, Nr))^{p_-(B(x,r))- p_+(B(x,r))} \leq c \eqno{(2)} $$ holds for all $r$, $0<r\leq b.$ Further, $p \in {\mathcal{P}}(N)$ if there are positive constants $b$ and $c$ such that (2) holds for all $x\in X$ and all $r$ satisfying the condition $0<r\leq b.$ \end{definition} \begin{definition} Let $(X, d, \mu)$ be an $SHT$. Suppose that $0<p_-\leq p_+<\infty$. We say that $p\in LH(X,x)$ ($p$ satisfies the log-H\"older--type condition at a point $x\in X$) if there are positive constants $b$ and $c$ (which might depend on $x$) such that $$ |p(x)-p(y)| \leq \frac{c}{-\ln\big(\mu (B_{xy})\big)} \eqno{(3)}$$ holds for all $y$ satisfying the condition $d(x,y)\leq b.$ Further, $p\in LH(X)$ ($p$ satisfies the log-H\"older--type condition on $X$) if there are positive constants $b$ and $c$ such that $(3)$ holds for all $x,y$ with $d(x,y)\leq b.$ \end{definition} \begin{definition}Let $(X, d, \mu)$ be a quasimetric measure space and let $0<p_-\leq p_+<\infty$.
We say that $p\in \overline{LH}(X,x)$ if there are positive constants $b$ and $c$ (which might depend on $x$) such that $$ |p(x)-p(y)| \leq \frac{c}{-\ln d(x,y)}\eqno{(4)}$$ for all $y$ with $d(x,y)\leq b.$ Further, $p\in \overline{LH}(X)$ if (4) holds for all $x,y$ with $d(x,y)\leq b.$ \end{definition} It is easy to see that if the measure $\mu$ is upper Ahlfors $Q$-regular and $p\in LH(X)$ (resp. $p\in LH(X,x)$), then $p\in \overline{LH}(X)$ (resp. $p\in \overline{LH}(X,x)$). Further, if $\mu$ is lower Ahlfors $q$-regular and $p\in \overline{LH}(X)$ (resp. $p\in \overline{LH}(X,x)$), then $p\in LH(X)$ (resp. $p\in LH(X,x)$). \vskip+0.1cm {\em Remark} 1.1. It can be checked easily that if $(X, d, \mu)$ is an SHT, then $\mu B_{x_0x}\approx \mu B_{xx_0}.$ \vskip+0.1cm {\em Remark} 1.2. Let $(X, d, \mu)$ be an $\hbox{SHT}$ with $L<\infty$. It is known (see, e.g., \cite{HaHaPe}, \cite{KoMe3}) that if $p\in{\overline{LH}}(X)$, then $p\in\mathcal{P}(1)$. Further, if $\mu$ is upper Ahlfors $Q$-regular, then the condition $p\in\mathcal{P}(1)$ implies that $p\in \overline{LH}(X)$. \begin{proposition} If $0<p_-(X)\leq p_+(X)<\infty$ and $p\in LH(X)$ $($resp. $p\in \overline{LH}(X)\;)$, then the functions $c p(\cdot)$ and $1/p(\cdot)$ belong to the class $LH(X)$ $($resp. $\overline{LH}(X)\;)$, where $c$ is a positive constant. Further, if $p\in LH(X,x)$ (resp. $p\in \overline{LH}(X,x)$), then $cp(\cdot)$ and $1/p(\cdot)$ belong to $LH(X,x)$ $($resp. $\overline{LH}(X,x)\;)$. If $1<p_-(X)\leq p_+(X)<\infty$ and $p\in LH(X)$ $($resp. $p\in \overline{LH}(X)\;)$, then $p'\in LH(X)$ $($resp. $p'\in \overline{LH}(X))$. \end{proposition} The proof of this statement follows immediately from the definitions of the classes $LH(X,x)$, $LH(X)$, $\overline{LH}(X,x)$, $\overline{LH}(X)$; therefore we omit the details. \begin{proposition} Let $(X, d, \mu)$ be an $\hbox{SHT}$ and let $p\in {\mathcal{P}}(1)$.
Then $ (\mu B_{xy})^{p(x)} \leq c (\mu B_{yx})^{p(y)} $ for all $x,y\in X$ with $\mu(B(x, d(x,y))) \leq b,$ where $b$ is a small constant, and the constant $c$ does not depend on $x,y\in X$. \end{proposition} \begin{proof} Due to the doubling condition for $\mu$, Remark $1.1$, the condition $p\in {\mathcal{P}}(1)$ and the fact that $x\in B(y, a_1(a_0+1)d(y,x))$ we have the following estimates: $ \mu (B_{xy})^{p(x)} \leq \mu \big( B(y, a_1(a_0+1)d(x,y))\big)^{p(x)} \leq c \mu B(y, a_1(a_0+1)d(x,y))^{p(y)} \leq c (\mu B_{yx})^{p(y)} $, which proves the statement. \end{proof} The proof of the next statement is trivial and follows directly from the definition of the classes ${\mathcal{P}}(N,x)$ and ${\mathcal{P}}(N)$. Details are omitted. \begin{proposition} Let $(X, d, \mu)$ be a quasimetric measure space and let $x_0\in X$. Suppose that $N\geq1$ is a constant. Then the following statements hold: \rm{(i)} If $p\in {\mathcal{P}}(N,x_0)$ (resp. $p\in {\mathcal{P}}(N))$, then there are positive constants $r_0$, $c_1$ and $c_2$ such that for all $0<r\leq r_0$ and all $y\in B(x_0,r)$ (resp. for all $x_0, y$ with $d(x_0,y) < r \leq r_0$) we have that $\mu \big( B(x_0,Nr)\big)^{p(x_0)}\leq c_1 \mu \big( B(x_0,Nr)\big)^{p(y)}\leq c_2 \mu \big( B(x_0,Nr)\big)^{p(x_0)}.$ \rm{(ii)} Let $p\in {\mathcal{P}}(N,x_0)$. Then there are positive constants $r_0$, $c_1$ and $c_2$ (in general, depending on $x_0$) such that for all $r$ ($r\leq r_0$) and all $x,y\in B(x_0,r)$ we have $\mu\big( B(x_0,Nr)\big)^{p(x)}\leq c_1\mu\big( B(x_0,Nr)\big)^{p(y)}\leq c_2\mu\big( B(x_0,Nr)\big)^{p(x)}.$ \rm{(iii)} Let $p\in {\mathcal{P}}(N)$.
Then there are positive constants $r_0$, $c_1$ and $c_2$ such that for all balls $B$ with radius $r$ ($r\leq r_0$) and all $x,y\in B$, we have $\mu(N B)^{p(x)}\leq c_1\mu(N B)^{p(y)}\leq c_2\mu( N B)^{p(x)}.$ \end{proposition} It is known that (see, e.g., \cite{KoRa}, \cite{Sa1}) if $f$ is a measurable function on $X$ and $E$ is a measurable subset of $X$, then the following inequalities hold: \begin{gather*} \|f\|^{p_+(E)}_{L^{p(\cdot)}(E)} \leq S_{p} (f\chi_E) \leq \|f\|^{p_-(E)}_{L^{p(\cdot)}(E)}, \;\; \|f\|_{L^{p(\cdot)}(E)}\leq 1; \\ \|f\|^{p_-(E)}_{L^{p(\cdot)}(E)}\leq S_{p}(f\chi_E)\leq \|f\|^{p_+(E)}_{L^{p(\cdot)}(E)},\;\; \|f\|_{L^{p(\cdot)}(E)}> 1. \end{gather*} H\"older's inequality in variable exponent Lebesgue spaces has the following form: $$\int\limits_{E} fg d\mu \leq \Big(1/p_-(E) + 1/(p')_-(E) \Big) \| f \|_{L^{p(\cdot)}(E)} \| g \|_{L^{p'(\cdot)}(E)}.$$ \vskip+0.1cm \begin{lemma} Let $(X, d, \mu)$ be an $\hbox{SHT}$. \rm{(i)} Let $\beta$ be a measurable function on $X$ such that $\beta_+ < -1$ and let $r$ be a small positive number. Then there exists a positive constant $c$ independent of $r$ and $x$ such that $$ \int\limits_{X \setminus B(x_0,r) } (\mu B_{x_{0}y})^{\beta(x)} d \mu(y) \leq c \frac{\beta(x)+1}{\beta(x)} \mu(B(x_0,r))^{\beta(x)+1}; $$ \rm{(ii)} Suppose that $p$ and $\alpha$ are measurable functions on $X$ satisfying the conditions $1<p_- \leq p_+ <\infty$ and $\alpha_- > 1/p_-$. Then there exists a positive constant $c$ such that for all $x\in X$ the inequality $$ \int\limits_{\overline{B} (x_0 , 2d(x_0 ,x))} \big( \mu B(x,d(x,y))\big) ^{(\alpha(x) -1)p'(x)} d\mu (y) \leq c \big( \mu B(x_0 ,d(x_0 ,x)) \big) ^{(\alpha(x) -1)p'(x) +1} $$ holds. \end{lemma} \begin{proof} Part (i) was proved in \cite{KoMe3} (see also \cite{EdKoMe}, p.372, for constant $\beta$). The proof of Part (ii) was given in \cite{EdKoMe} (Lemma 6.5.2, p. 348) but repeating those arguments we can see that it is also true for variable $\alpha$ and $p$. 
Details are omitted. \end{proof} Let $M$ be the maximal operator on $X$ given by $$ Mf(x):= \sup_{r>0} \frac{1}{\mu(B(x,r))}\int\limits_{B(x,r)}|f(y)| d\mu(y). $$ \begin{definition} Let $(X, d, \mu)$ be a quasimetric measure space. We say that $p\in {\mathcal{M}}(X)$ if the operator $M$ is bounded in $L^{p(\cdot)}(X)$. \end{definition} L. Diening \cite{Di1} proved that if $\Omega$ is a bounded domain in ${\Bbb{R}}^n$, $1< p_-\leq p_+<\infty$ and $p$ satisfies the local log-H\"older continuity condition on $\Omega$ (i.e., $ |p(x)-p(y)|\leq\frac{c}{-\ln (|x-y|)}$ for all $x,y\in \Omega$ with $|x-y|\leq 1/2$), then the Hardy--Littlewood maximal operator defined on $\Omega$ is bounded in $L^{p(\cdot)}(\Omega)$. \vskip+0.1cm Now we prove the following lemma: \begin{lemma} Let $(X, d, \mu)$ be an $SHT$. Suppose that $0<p_-\leq p_+<\infty$. Then $p$ satisfies the condition $p\in {\mathcal{P}}(1)$ $($resp. $p\in {\mathcal{P}}(1,x)$) if and only if $p\in LH(X)$ $($resp. $p\in LH(X,x)\;)$. \end{lemma} \begin{proof} {\em Necessity.} Let $p\in {\mathcal{P}}(1)$ and let $x, y\in X$ with $d(x,y)< c_0$ for some positive constant $c_0$. Observe that $x,y\in B$, where $B:=B(x, 2d(x,y))$. By the doubling condition for $\mu$ we have that $ \big(\mu B_{xy}\big)^{-|p(x)- p(y)|} \leq c \big( \mu B \big)^{-|p(x)- p(y)|} \leq c \big( \mu B \big)^{p_-(B) - p_+(B)} \leq C, $ where $C$ is a positive constant which is greater than $1$. Taking now the logarithm in the last inequality we conclude that $p\in LH(X)$. If $p\in {\mathcal{P}}(1,x)$, then by the same arguments we find that $p\in LH(X,x)$. {\em Sufficiency.} Let $B:=B(x_0,r)$. First observe that if $x,y \in B$, then $\mu B_{xy} \leq c \mu B(x_0,r)$. Consequently, this inequality and the condition $p\in LH(X)$ yield $ |p_-(B)- p_+ (B)| \leq \frac{C}{- \ln \big(c_0 \mu B(x_0, r)\big)}$.
Further, there exists $r_0$ such that $0<r_0<1/2$ and $ c_1 \leq \frac{ \ln \big(\mu (B)\big)}{\ln \big(c_0 \mu(B)\big)} \leq c_2$ for $0< r \leq r_0 $, where $c_1$ and $c_2$ are positive constants. Hence $ \big(\mu(B)\big)^{p_-(B)- p_+(B)} \leq \Big( \mu(B) \Big)^{ \frac{C}{\ln \big( c_0\mu (B)\big)} } = \exp\bigg( \frac{C \ln \big( \mu(B)\big) }{\ln \big(c_0 \mu(B)\big)}\bigg) \leq C$. Let now $p\in LH(X,x)$ and let $B_x:=B(x,r)$ where $r$ is a small number. We have that $ p_+(B_x)- p(x) \leq \frac{c}{- \ln \big(c_0 \mu B(x, r)\big)}$ and $ p(x)-p_-(B_x) \leq \frac{c}{- \ln \big(c_0 \mu B(x, r)\big)}$ for some positive constant $c_0$. Consequently, $ (\mu (B_x) )^{p_-(B_x)- p_+(B_x)} = \big( \mu(B_x) \big)^{p(x)- p_+(B_x)} \big( \mu(B_x) \big)^{p_-(B_x)- p(x)}\leq c \big( \mu(B_x) \big)^{\frac{-2c}{-\ln(c_0\mu (B_x))}}\leq C$. \end{proof} \begin{definition} A measure $\mu$ on $X$ is said to satisfy the reverse doubling condition $( \mu \in \hbox{RDC}(X))$ if there exist constants $A>1$ and $B>1$ such that the inequality $ \mu\big(B(a,Ar)\big)\geq B\mu\big(B(a,r) \big)$ holds for all $a\in X$ and all positive $r$ with $Ar<L$. \end{definition} {\em Remark} 1.3. It is known that if all annuli in $X$ are nonempty (see condition (1)), then $\mu \in \hbox{DC}(X)$ implies that $\mu \in \hbox{RDC}(X)$ (see, e.g., \cite{StTo}).
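The equivalence just proved can be illustrated numerically. In the sketch below, all choices are our own assumptions: $X = (0,1]$ with the Euclidean metric and Lebesgue measure, and the log-H\"older exponent $p(x) = 2 + 1/\ln(e + 1/x)$. The quantity $\mu(B)^{p_-(B)-p_+(B)}$ then stays uniformly bounded over small balls, as condition (2) with $N = 1$ requires:

```python
import math

# Toy model: X = (0, 1], d(x, y) = |x - y|, mu = Lebesgue measure, and a
# log-Hoelder exponent p that is increasing on (0, 1].
def p(x):
    return 2 + 1 / math.log(math.e + 1 / x)

worst = 0.0
for j in range(1, 30):          # ball centers x0 = 2^{-j}
    x0 = 2.0 ** (-j)
    for r in (x0 / 4, x0 / 2):  # small radii keeping the ball inside (0, 1]
        lo, hi = x0 - r, min(x0 + r, 1.0)
        # p is increasing, so p_- and p_+ over the ball sit at its endpoints
        p_minus, p_plus = p(lo), p(hi)
        mu_B = hi - lo
        worst = max(worst, mu_B ** (p_minus - p_plus))

# Each factor exceeds 1 (mu_B < 1 with a negative exponent), yet all stay
# uniformly bounded, consistent with p being in the class P(1).
assert 1.0 < worst < 10.0
```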
\vskip+0.1cm In the sequel we will use the notation: \begin{gather*} I_{1,k} := \begin{cases} B(x_0,A^{k-1}L / a_1)\;\; \hbox{if} \;\; L<\infty \\ B(x_0,A^{k-1}/ a_1)\;\; \hbox{if}\;\; L = \infty \end{cases}, \\ I_{2,k} := \begin{cases} \overline{B}(x_0,A^{k+2}a_1L)\setminus B(x_0, A^{k-1}L/ a_1)\;\; \hbox{if} \;\;L<\infty \\ \overline{B}(x_0,A^{k+2}a_1 )\setminus B(x_0, A^{k-1}/ a_1)\;\; \hbox{if}\;\; L=\infty \end{cases},\\ I_{3,k} := \begin{cases} X\setminus B(x_0,A^{k+2}L a_1)\;\; \hbox{if} \;\; L<\infty \\ X\setminus B(x_0,A^{k+2} a_1)\;\; \hbox{if}\;\; L = \infty \end{cases}, \\ E_k := \begin{cases} \overline{B}(x_0,A^{k+1}L) \setminus B(x_0,A^{k}L) \;\; \hbox{if}\;\; L<\infty \\ \overline{B}(x_0,A^{k+1}) \setminus B(x_0,A^{k}) \;\; \hbox{if}\;\; L=\infty \end{cases}, \end{gather*} where the constant $A$ is defined in the reverse doubling condition and the constant $a_1$ is taken from the triangle inequality for the quasimetric $d$. \begin{lemma} Let $(X, d, \mu)$ be an $\hbox{SHT}$. Suppose that there is a point $x_0\in X$ such that $p\in LH(X,x_0)$. Then there exist positive constants $r_0$ and $C$ $($which might depend on $x_0)$ such that for all $r$, $0<r\leq r_0$, the inequality $$( \mu B_{A})^{p_{-}(B_{A})-p_{+}(B_{A})}\leq C$$ holds, where $B_{A}: = B(x_0, Ar) \setminus B (x_0, r)$, the constant $C$ is independent of $r$, and the constant $A$ is defined in Definition 1.10. \end{lemma} \begin{proof} Let $B:= B(x_0,r)$. First observe that by the doubling and reverse doubling conditions we have that $ \mu B_{A} = \mu B(x_0,Ar)-\mu B(x_0,r) \geq (B-1)\mu B(x_0,r) \geq c \mu (AB)$. Suppose that $0<r<c_{0}$, where $c_0$ is a sufficiently small constant. Then by using Lemma 1.9 we find that $ \big(\mu B_{A}\big)^{p_{-}(B_{A})-p_{+}(B_{A})} \leq c \big( \mu( AB ) \big)^{p_{-}(B_{A})-p_{+}(B_{A})}\leq c\big( \mu( AB)\big)^{p_{-}(AB)-p_{+}(AB)} \leq c.
$ \end{proof} \begin{lemma} Let $(X,d,\mu)$ be an ${\hbox{SHT}}$ and let $ 1<p_{-}(X)\leq p(x)\leq q(x)\leq q_{+}(X)<\infty.$ Suppose that there is a point $x_0\in X$ such that $p,q\in LH(X,x_0)$. Assume that $p(x)\equiv p_{c}\equiv \hbox{const}$, $q(x)\equiv q_{c}\equiv \hbox{const}$ outside some ball $B(x_{0},a)$ if $L=\infty$. Then there exists a positive constant $C$ such that $$ \sum\limits_k\|f\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}\|g \chi_{I_{2,k}}\|_{L^{q'(\cdot)}(X)}\leq C\|f\|_{L^{p(\cdot)}(X)}\|g\|_{L^{q'(\cdot)}(X)} $$ for all $f\in L^{p(\cdot)}(X)$ and $g\in L^{q'(\cdot)}(X).$ \end{lemma} \begin{proof} Suppose that $L =\infty$. To prove the lemma first observe that $ \mu(E_k)\approx \mu B(x_0,A^k )$ and $\mu(I_{2,k})\approx\mu B(x_0,A^{k-1})$. This holds because $\mu$ satisfies the reverse doubling condition and, consequently, \begin{gather*} \mu E_k = \mu \Big(\overline{B}(x_0,A^{k+1})\setminus B(x_0,A^k )\Big) =\mu \overline{B}(x_0,A^{k+1}) - \mu B{(x_0,A^{k})} \\ = \mu \overline{B}(x_0,A A^{k}) - \mu B(x_0,A^{k}) \geq B\mu B(x_0, A^{k}) - \mu B(x_0,A^{k}) = (B-1)\mu B(x_0, A^{k}). \end{gather*} Moreover, using the doubling condition we have $ \mu E_{k}\leq \mu B(x_0, AA^{k}) \leq c \mu B(x_0, A^{k}), $ where $c>1$. Hence, $ \mu E_k\approx \mu B(x_0,A^{k}).$ Further, since we can assume that $a_1\geq 1$, we find that \begin{gather*} \mu I_{2,k} = \mu \Big(\overline{B}(x_{0},A^{k+2}a_1)\setminus B(x_{0},A^{k-1}/a_1)\Big) =\mu \overline{B}(x_{0},A^{k+2}a_1)-\mu B(x_{0},A^{k-1}/a_1) \\ = \mu \overline{B}(x_{0},A A^{k+1} a_1)- \mu B(x_{0},A^{k-1}/a_1) \geq B \mu B(x_{0}, A^{k+1} a_1)-\mu B(x_{0},A^{k-1}/a_1) \\ \geq B^{2}\mu B(x_{0},A^{k}/a_1)-\mu B(x_{0},A^{k-1}/a_1) \geq B^{3}\mu B(x_{0},A^{k-1}/a_1)-\mu B(x_{0},A^{k-1}/a_1) \\ =(B^{3}-1)\mu B(x_0,A^{k-1}/a_1).
\end{gather*} Moreover, using the doubling condition we have $ \mu I_{2,k} \leq \mu\overline{ B}(x_{0},A^{k+2}a_1) \leq c \mu B(x_{0},A^{k+1}a_1) \leq c^{2}\mu B(x_{0},A^{k}/a_1) \leq c^{3}\mu B(x_{0},A^{k-1}/a_1) $. This gives the estimates $ (B^{3}-1)\mu B(x_{0},A^{k-1}/a_1) \leq \mu (I_{2,k}) \leq c^{3}\mu B(x_{0},A^{k-1}/a_1).$ For simplicity assume that $a=1$. Suppose that $m_0$ is an integer such that $\frac{A^{m_0-1}}{a_1}>1$. Let us split the sum as follows: $$ \sum\limits_{i}\|f \chi_{I_{2,i}}\|_{L^{p(\cdot)}(X)}\cdot \|g \chi_{I_{2,i}}\|_{L^{q'(\cdot)}(X)}=\sum\limits_{i\leq m_0}\Big(\cdots \Big)+\sum\limits_{i> m_0}\Big(\cdots\Big)=:J_{1}+J_{2}. $$ Since $p(x)\equiv p_{c}=\hbox{const}$ and $q(x)\equiv q_{c}=\hbox{const}$ outside the ball $B(x_0,1)$, by using H\"{o}lder's inequality and the fact that $p_{c}\leq q_{c}$, we have $$ J_{2} = \sum\limits_{i>m_0}\|f \chi_{I_{2,i}}\|_{L^{p_c}(X)}\cdot \|g \chi_{I_{2,i}}\|_{L^{(q_c)'}(X)} \leq c\|f\|_{L^{p(\cdot)}(X)}\cdot \|g\|_{L^{q'(\cdot)}(X)}. $$ Let us estimate $J_{1}$. Suppose that $ \|f\|_{L^{p(\cdot)}(X)}\leq 1$ and $\|g \|_{L^{q'(\cdot)}(X)} \leq 1$. Also, by Proposition 1.4 we have that $1/q' \in LH(X,x_0)$. Therefore, by Lemma 1.11 and this fact we obtain that $\big(\mu I_{2,k}\big)^{\frac{1}{q_{+}(I_{2,k})}} \approx \|\chi_{I_{2,k}}\|_{L^{q(\cdot)}(X)}\approx \big(\mu I_{2,k}\big)^{\frac{1}{q_{-}(I_{2,k})}}$ and $\big(\mu I_{2,k}\big)^{\frac{1}{q_{+}'(I_{2,k})}} \approx \|\chi_{I_{2,k}}\|_{L^{q'(\cdot)}(X)}\approx \big( \mu I_{2,k}\big)^{\frac{1}{q'_{-}(I_{2,k})}} $, where $k \leq m_0$.
Further, observe that these estimates and H\"{o}lder's inequality yield the following chain of inequalities: \begin{gather*} J_{1} \leq c \sum\limits_{k\leq m_0}\;\; \int\limits_{\overline{B}(x_{0},A^{m_0+1})}\frac{\|f \chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}\cdot \|g \chi_{I_{2,k}}\|_{L^{q'(\cdot)}(X)}}{\|\chi_{I_{2,k}}\|_{L^{q(\cdot)}(X)}\cdot \|\chi_{I_{2,k}}\|_{L^{q'(\cdot)}(X)}}\chi_{E_{k}}(x)d\mu(x) \\ =c \int\limits_{\overline{B}(x_{0},A^{m_0+1})}\sum\limits_{k\leq m_0}\frac{\|f \chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}\cdot \|g \chi_{I_{2,k}}\|_{L^{q'(\cdot)}(X)}}{\|\chi_{I_{2,k}}\|_{L^{q(\cdot)}(X)}\cdot \|\chi_{I_{2,k}}\|_{L^{q'(\cdot)}(X)}}\chi_{E_{k}}(x)d\mu(x) \\ \leq c\Big\|\sum\limits_{k\leq m_0}\frac{\|f \chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}}{\|\chi_{I_{2,k}}\|_{L^{q(\cdot)}(X)}}\chi_{E_{k}}(x) \Big\| _{L^{q(\cdot)}(\overline{B}(x_{0},A^{m_0+1}))} \\ \times\Big\|\sum\limits_{k\leq m_0}\frac{\|g \chi_{I_{2,k}}\|_{L^{q'(\cdot)}(X)}}{\|\chi_{I_{2,k}}\|_{L^{q'(\cdot)}(X)}} \chi_{E_{k}}(x)\Big\|_{L^{q'(\cdot)}(\overline{B}(x_{0},A^{m_0+1}))} =: c S_{1}(f)\cdot S_{2}(g). \end{gather*} Now we claim that $ S_1(f) \leq c I(f) $, where \begin{gather*} I(f):=\Big\|\sum\limits_{k\leq m_0}\frac{\|f\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}}{\|\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}}\chi_{E_{k}}(\cdot)\Big\|_{L^{p(\cdot)}(\overline{B}(x_0,A^{m_0+1}))} \end{gather*} and the positive constant $c$ does not depend on $f$. Indeed, suppose that $I(f)\leq 1$. Then taking into account Lemma 1.11 we have that \begin{gather*} \sum\limits_{k\leq m_0}\frac{1}{\mu (I_{2,k})}\int_{E_{k}}\|f\chi_{I_{2,k}} \|_{L^{p(\cdot)}(X)}^{p(x)}d\mu(x)\\ \leq c\int\limits_{\overline{B}(x_0,A^{m_0+1})}\Big(\sum\limits_{k\leq m_0}\frac{ \|f\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}}{\|\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}}\chi_{E_{k}}(x)\Big)^{p(x)}d\mu(x) \leq c.
\end{gather*} Consequently, since $p(x)\leq q(x)$, $E_k\subseteq I_{2,k}$ and $\|f\|_{L^{p(\cdot)}(X)}\leq 1$, we find that $$ \sum\limits_{k\leq m_0}\!\frac{1}{\mu (I_{2,k})}\!\int\limits_{E_{k}}\!\!\|f\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}^{q(x)}d\mu(x)\leq \sum\limits_{k\leq m_0}\frac{1}{\mu (I_{2,k})}\int\limits_{E_{k}}\!\!\|f\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}^{p(x)}d\mu(x)\!\leq\! c. $$ This implies that $S_1(f)\leq c$. Thus the desired inequality is proved. Further, let us introduce the following function: $${\mathbb P}(y):=\sum\limits_{k\leq m_0}p_+(I_{2,k})\chi_{E_{k}}(y). $$ It is clear that $p(y)\leq{\mathbb P}(y)$ because $E_k \subset I_{2,k}.$ Hence $$ I(f)\leq c \Big\|\sum\limits_{k\leq m_0}\frac{\|f\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}}{\|\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}}\chi_{E_{k}}(\cdot)\Big\|_{L^{{\mathbb{P}}(\cdot)}(\overline{B}(x_0,A^{m_0+1}))}$$ for some positive constant $c$. Then by using this inequality, the definition of the function ${\mathbb {P}}$, the condition $p\in LH(X)$ and the obvious estimate $ \|\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}^{p_{+}(I_{2,k})}\geq c\mu (I_{2,k})$, we find that \begin{gather*} \int\limits_{\overline{B}(x_0,A^{m_0+1})}\!\!\!\!\!\!\bigg(\sum\limits_{k\leq m_0}\frac{\|f\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}}{\|\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}}\chi_{E_{k}}(x)\bigg)^{{\mathbb P}(x)}\!\!\!\!\!\!\!\!d\mu(x)\\ = \int\limits_{{\overline{B}}(x_0,A^{m_0+1})}\!\!\!\!\!\!\bigg(\sum \limits_{k\leq m_0}\frac{\|f\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}^{p_{+}(I_{2,k})}}{\|\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}^{p_{+}(I_{2,k})}}\chi_{E_{k}}(x)\bigg)d\mu(x)\\ \leq c\int\limits_{\overline{B}(x_0,A^{m_0+1})}\bigg(\sum\limits_{k\leq m_0}\frac{\|f\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}^{p_{+}(I_{2,k})}}{\mu (I_{2,k})}\chi_{E_{k}}(x)\bigg)d\mu(x) \leq c\sum\limits_{k\leq m_0}\|f\chi_{I_{2,k}}\|_{L^{p(\cdot)}(X)}^{p_{+}(I_{2,k})}\\ \leq c\sum\limits_{k\leq m_0}\int\limits_{I_{2,k}}|f(x)|^{p(x)}d\mu(x) \leq c \int\limits_{X}|f(x)|^{p(x)}d\mu(x)\leq
c. \end{gather*} Consequently, $ I(f)\leq c \| f\|_{L^{p(\cdot)}(X)}$. Hence, $ S_1(f)\leq c \| f\|_{L^{p(\cdot)}(X)}$. Analogously, taking into account the fact that $q'\in LH(X,x_0)$ and arguing as above, we find that $ S_{2}(g) \leq c \| g\|_{L^{q'(\cdot)}(X)}$. Thus, summarizing these estimates, we conclude that $$ \sum\limits_{i\leq m_0}\|f \chi_{I_{2,i}}\|_{L^{p(\cdot)}(X)} \|g \chi_{I_{2,i}}\|_{L^{q'(\cdot)}(X)}\leq c \|f\|_{L^{p(\cdot)}(X)}\|g\|_{L^{q'(\cdot)}(X)}. $$ \end{proof} The next statement for metric measure spaces was proved in \cite{HaHaPe} (see also \cite{KoMe3}, \cite{KoMe5} for quasimetric measure spaces). \vskip+0.1cm {\bf Theorem A.} {\em Let $(X, d,\mu)$ be an $SHT$ and let $\mu (X) < \infty$. Suppose that $1<p_-\leq p_+<\infty$ and $p\in {\mathcal{P}}(1)$. Then $M$ is bounded in $L^{p(\cdot)}(X)$.} For the following statement we refer to \cite{Kh}: \vskip+0.1cm {\bf Theorem B.} {\em Let $(X, d, \mu)$ be an $SHT$ and let $L=\infty$. Suppose that $1<p_-\leq p_+<\infty$ and $p\in {\mathcal{P}}(1)$. Suppose also that $p=p_c= \hbox{const}$ outside some ball $B:= B(x_0,R)$. Then $M$ is bounded in $L^{p(\cdot)}(X)$.} \section{Hardy-type transforms} In this section we derive two-weight estimates for the operators $$ T_{v,w}f(x) = v(x)\int\limits _{B_{x_0 x}} f(y)w(y)d\mu(y)\;\; \hbox{and} \;\; T'_{v,w}f(x) = v(x)\int\limits _{X\backslash \overline{B}_{x_0 x}} f(y)w(y)d\mu(y). $$ Let $a$ be a positive constant and let $p$ be a measurable function defined on $X$. Let us introduce the notation: $$ p_{0}(x):=p_{-}(\overline{B}_{x_0x}); \;\; \widetilde{p}_0 (x): = \left\{ \begin{array}{ll} p_0 (x) & \hbox{if}\ \ d(x_0, x) \le a; \\ p_c = \hbox{const} & \hbox{if}\ \ d(x_0, x) > a. \end{array} \right. $$ $$ p_{1}(x):=p_{-} \left(\overline{B}(x_0,a) \setminus B_{x_0x} \right); \,\, \widetilde{p}_1 (x) := \left\{ \begin{array}{ll} p_1 (x) & \hbox{if}\ \ d(x_0, x) \le a; \\ p_c = \hbox{const} & \hbox{if} \ \ d(x_0, x) > a. \end{array} \right.
$$ \vskip+0.1cm {\em Remark} 2.1. If we deal with a quasi-metric measure space with $L<\infty$, then we will assume that $a=L$. Obviously, $\widetilde{p}_0 \equiv p_0$ and $\widetilde{p}_1 \equiv p_1$ in this case. {\bf Theorem 2.1.} {\em Let $(X, d, \mu)$ be a quasi-metric measure space. Assume that $p$ and $q$ are measurable functions on $X$ satisfying the condition $1<p_{-} \leq \widetilde{p}_{0}(x)\leq q(x)\leq q_{+}<\infty. $ In the case when $L=\infty$ suppose that $p\equiv p_c\equiv$ const, $q\equiv q_c\equiv$ const outside some ball $\overline{B}(x_0,a)$. If the condition $$ A_{1}:= \sup\limits_{0\leq t\leq L} \int\limits_{t< d(x_{0},x)\leq L}\big(v(x)\big)^{q(x)}\bigg(\int\limits_{d(x_{0},y)\leq t} w^{(\widetilde{p}_{0})'(x)}(y)d\mu(y)\bigg)^{\frac{q(x)}{(\widetilde{p}_0)'(x)}}d\mu(x)<\infty$$ holds, then $T_{v,w}$ is bounded from $L^{p(\cdot)}(X)$ to $ L^{q(\cdot)}(X)$.} \vskip+0.1cm {\em Proof.} Here we use the arguments of the proofs of Theorem 1.1.4 in \cite{EdKoMe} (see p. 7) and of Theorem 2.1 in \cite{EdKoMe1}. First we notice that $p_{-} \leq p_{0}(x) \leq p(x)$ for all $ x \in X $. Let $f \geq 0$ and let $S_{p}(f)\leq 1$. First assume that $L<\infty$. We denote $$ I(s):= \int\limits_{d(x_{0},y)<s} f(y) w(y) d\mu(y) \ \ \hbox{for}\ s \in [0,L]. $$ Suppose that $I(L)<\infty$. Then $I(L) \in (2^{m},2^{m+1}]$ for some $ m \in \mathbb{Z}. $ Let us denote $ s_{j}:= \sup \{s:I(s)\leq 2^{j}\}, \ j \leq m$, and $s_{m+1}:=L. $ Then $\big\{ s_{j}\big\}_{j=-\infty}^{m+1}$ is a non-decreasing sequence. It is easy to check that $I(s_{j})\leq 2^{j},\ I(s)> 2^{j}$ for $s>s_{j}$, and $ 2^{j}\leq \int\limits_{s_{j}\leq d(x_{0},y)\leq s_{j+1}}f(y) w(y) d\mu (y) $. If $\beta:= \lim\limits_{j\rightarrow -\infty}s_{j},$ then $ d(x_{0},x)<L$ if and only if $d(x_{0},x)\in [0,\beta]\cup \bigcup \limits_{j=-\infty}^{m}(s_{j},s_{j+1}]. $ If $I(L)=\infty$, then we take $m=\infty$. Since $ 0 \leq I(\beta) \leq I(s_{j}) \leq 2^{j} $ for every $j$, we have that $ I(\beta) = 0.
$ It is obvious that $ X =\bigcup \limits_{j\leq m}\{x: s_{j}< d(x_{0},x) \leq s_{j+1}\}$. Further, we have that \begin{eqnarray*} S_q(T_{v,w}f) = \int\limits_X (T_{v,w}f(x))^{q(x)} d\mu(x) = \int\limits_X \Bigg( v(x) \!\!\!\!\!\!\! \int\limits_{B(x_0,\ d(x_0,x))} f(y) w(y)d\mu(y) \Bigg)^{q(x)} \!\!\!\! d\mu(x)\\ = \int\limits_X (v(x))^{q(x)} \Bigg( \int\limits_{B(x_0,\ d(x_0,x))} f(y) w(y)d\mu(y) \Bigg)^{q(x)} d\mu(x) \\ \leq \sum\limits_{j=-\infty}^m \int\limits_{s_j < d(x_0,x)\leq s_{j+1}}\!\!\!\!\!\!\!\!\!\!\Big(v(x)\Big)^{q(x)} \!\Bigg(\;\;\;\; \int\limits_{d(x_0,y)< s_{j+1}} \!\!\!\!\!\!\!\!\! f(y) w(y)d\mu(y) \Bigg)^{q(x)} d\mu(x). \end{eqnarray*} Notice that $ I(s_{j+1}) \leq 2^{j+1} \leq 4 \int\limits_{s_{j-1}\leq d(x_{0},y)\leq s_{j}}w(y) f(y) d\mu(y) $. Consequently, by this estimate and H\"older's inequality with respect to the exponent $p_0(x)$ we find that \begin{eqnarray*} S_{q}\big(T_{v,w}f\big) \leq c \sum \limits_{j=-\infty}^{m}\int\limits_{s_{j}<d(x_{0},x)\leq s_{j+1}}\!\!\!\!\!\!\!\!\!\Big(v(x)\Big)^{q(x)}\Bigg( \int\limits_{s_{j-1}\leq d(x_{0},y)\leq s_{j}}\!\!\!\!\!\!\!\!\!f(y)w(y)d \mu (y)\Bigg)^{q(x)}d\mu(x) \\ \leq c\sum\limits_{j=-\infty}^{m}\int\limits_{s_{j}<d(x_{0},x)\leq s_{j+1}}\!\!\!\!\!\!\!\!\!\big(v(x)\big)^{q(x)}J_{j}(x)d\mu(x), \end{eqnarray*} where $$ J_{j}(x):=\bigg(\!\!\!\!\!\!\!\!\!\int\limits_{s_{j-1}\leq d(x_{0},y)\leq s_{j}}\;\;\!\!\!\!\!\!\!\!\!\!\!f(y)^{p_{0}(x)}d\mu(y)\bigg)^{\frac{q(x)}{p_{0}(x)}} \bigg(\!\!\!\!\!\!\!\!\!\int\limits_{s_{j-1}\leq d(x_{0},y)\leq s_{j}}\!\!\!\!\!\!\!\!\!w(y)^{(p_{0})'(x)}d \mu (y)\bigg)^{\frac{q(x)}{(p_{0})'(x)}}. $$ Observe now that $ q(x) \geq p_{0}(x)$.
Hence, this fact and the condition $S_{p}(f)\leq 1 $ imply that \begin{eqnarray*} J_{j}(x) \leq c \bigg(\int\limits_{\{y:s_{j-1}\leq d(x_{0},y)\leq s_{j}\}\cap \{y:f(y)\leq 1\}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!f(y)^{p_{0}(x)}d\mu(y) + \!\!\!\!\!\!\!\!\int\limits_{\{y:s_{j-1}\leq d(x_{0},y)\leq s_{j}\}\cap \{y:f(y)> 1\}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! f(y)^{p(y)}d\mu(y)\bigg)^{\frac{q(x)}{p_{0}(x)}} \\ \times \bigg(\int\limits_{s_{j-1}\leq d(x_{0},y)\leq s_{j}}\!\!\!\!\!\!\!\!\!w(y)^{(p_{0})'(x)}d \mu (y)\bigg)^{\frac{q(x)}{(p_{0})'(x)}} \\ \leq c \bigg( \mu \big( \{y:s_{j-1}\leq d(x_{0},y)\leq s_{j}\}\big) +\int\limits_{\{y:s_{j-1}\leq d(x_{0},y)\leq s_{j}\}\cap \{y:f(y)> 1\}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!f(y)^{p(y)}d\mu(y)\bigg)\\ \times \bigg(\int\limits_{s_{j-1}\leq d(x_{0},y)\leq s_{j}}w(y)^{(p_{0})'(x)}d \mu (y)\bigg)^{\frac{q(x)}{(p_{0})'(x)}}. \end{eqnarray*} It follows now that \begin{eqnarray*} S_q(T_{v,w}f) \leq c \bigg(\sum\limits_{j=-\infty}^{m}\mu \big(\{y:s_{j-1}\leq d(x_{0},y)\leq s_{j}\}\big)\int\limits_{s_{j}<d(x_{0},x)\leq s_{j+1}}v(x)^{q(x)}\\ \times \bigg( \int\limits_{s_{j-1}\leq d(x_{0},y)\leq s_{j}}w(y)^{(p_{0})'(x)}d\mu(y)\bigg)^{\frac{q(x)}{(p_{0})'(x)}} d\mu(x)\\ + \sum\limits_{j=-\infty}^{m}\bigg(\int\limits_{\{y:s_{j-1}\leq d(x_{0},y)\leq s_{j}\}\cap\{y: f(y)>1\}}f(y)^{p(y)}d\mu(y)\bigg) \\ \times \!\!\!\!\! \int\limits_{s_{j}<d(x_{0},x)\leq s_{j+1}}\!\!\!\!\!\!\!\!\! v(x)^{q(x)} \bigg(\!\!\!\! \int\limits_{s_{j-1}\leq d(x_{0},y)\leq s_{j}}w(y)^{(p_{0})'(x)}d\mu(y)\bigg)^{\frac{q(x)}{(p_{0})'(x)}}d\mu(x) \bigg) :=c\big(N_{1}+N_{2}\big). \end{eqnarray*} It is obvious that $$ N_{1} \leq A_{1}\sum\limits_{j=-\infty}^{m+1}\!\!\mu \big( \{y:s_{j-1}\leq d(x_{0},y)\leq s_{j}\}\big) \leq C A_{1} $$ and $$ N_{2} \leq A_{1}\sum\limits_{j=-\infty}^{m+1}\int\limits_{\{y:s_{j-1}\leq d(x_{0},y)\leq s_{j}\}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!f(y)^{p(y)}d\mu(y)\leq c A_{1} \int\limits_{X}\big(f(y)\big)^{p(y)}d\mu(y)= c A_{1}S_{p}(f) \leq c A_{1}.
$$ Finally, $ S_q(T_{v,w}f) \leq c\big(c A_{1}+A_{1}\big)<\infty $. Thus $T_{v,w}$ is bounded if $A_{1}< \infty$. Let us now suppose that $L=\infty$. We have \begin{eqnarray*} T_{v,w}f(x) = \chi_{B(x_0,a)}(x) v(x)\int\limits_{B_{x_0x}} f(y)w(y)d\mu(y) \\ +\chi_{X \backslash B(x_0,a)}(x)v(x) \int\limits_{B_{x_0x}}f(y)w(y)d\mu(y) =: T_{v,w}^{(1)}f(x)+T_{v,w}^{(2)}f(x). \end{eqnarray*} By using the already proved result for $L<\infty$ and the fact that ${\hbox{diam}}\; \big(B(x_0,a)\big)<\infty$ we find that $ \|T_{v,w}^{(1)}f\|_{L^{q(\cdot)}\big(B(x_0,a)\big)}\leq c\|f\|_{L^{p(\cdot)}\big(B(x_0,a)\big)} \leq c$ because $$ A_{1}^{(a)}:= \sup\limits_{0\leq t \leq a}\int\limits_{t< d(x_{0},x)\leq a}\!\!\!\!\!\!\big(v(x)\big)^{q(x)}\bigg(\int\limits_{d(x_{0},y)\leq t} w^{(p_{0})'(x)}(y)d\mu(y)\bigg)^{\frac{q(x)}{(p_{0})'(x)}}\!\!\!\!\!d\mu(x) \leq A_1 <\infty. $$ Further, observe that \begin{eqnarray*} T_{v,w}^{(2)}f(x)\!\! =\!\! \chi_{X\backslash B(x_{0},a)}(x)v(x) \!\!\!\int\limits_{B_{x_0x}}\!\!\!f(y)w(y)d\mu(y) = \chi_{X\backslash B(x_{0},a)} (x)v(x) \!\!\!\!\!\!\!\! \int\limits_{d(x_{0},y)\leq a} \!\!\!\!\! f(y)w(y)d\mu(y)\\ +\chi_{X\backslash B(x_{0},a)}(x)v(x) \!\!\!\!\!\! \int\limits_{a\leq d(x_{0},y)\leq d(x_{0},x)} \!\!\!\!\!\! f(y)w(y)d\mu(y) =: T^{(2,1)}_{v,w} f(x)+T^{(2,2)}_{v,w} f(x). \end{eqnarray*} It is easy to see (see also Theorem 1.1.3 or 1.1.4 of \cite{EdKoMe}) that the condition $$\overline A_{1}^{(a)}:=\sup\limits_{t\geq a} \bigg(\int\limits_{d(x_0,x)\geq t} \!\!\!\! \big(v(x)\big)^{q_{c}}d\mu(x)\bigg)^{\frac{1}{q_{c}}}\bigg(\!\!\!\int\limits_{a\leq d(x_0,y)\leq t}\!\!\!\!\!\!\!\! w(y)^{(p_{c})'}d\mu(y)\bigg)^{\frac{1}{(p_{c})'}} <\infty $$ guarantees the boundedness of the operator $$T^{(2,2)}_{v,w} f(x)= v(x)\!\!\!\!\!\!\! \int\limits_{a\leq d(x_0,y)< d(x_0,x)}\!\!\!\! f(y)w(y)d\mu(y)$$ from $L^{p_{c}}\big(X\backslash B(x_0,a)\big)$ to $L^{q_{c}}\big(X\backslash B(x_0,a)\big).$ Thus $T^{(2,2)}_{v,w}$ is bounded.
It remains to prove that $T^{(2,1)}_{v,w}$ is bounded. We have $$ \| T^{(2,1)}_{v,w}f \|_{L^{q(\cdot)}(X)} = \Bigg(\int\limits_{\big(B(x_0,a)\big)^{c}} v(x)^{q_{c}} d\mu(x)\Bigg)^{\frac{1}{q_{c}}} \Bigg(\int\limits_{\overline B(x_0,a)}f(y)w(y)d\mu(y)\Bigg)$$ $$ \leq \Bigg(\int\limits_{\big(B(x_0,a)\big)^{c}} v(x)^{q_{c}} d\mu(x)\Bigg)^{\frac{1}{q_{c}}} \|f\|_{L^{p(\cdot)}\big(\overline B(x_0,a)\big)}\|w\|_{L^{p'(\cdot)}\big(\overline B(x_0,a)\big)}. $$ Observe now that the condition $A_1<\infty$ guarantees that the integral $$ \int\limits_{\big(B(x_0,a)\big)^{c}} v(x)^{q_{c}} d\mu(x)$$ is finite. Moreover, $N:= \|w\|_{L^{p'(\cdot)}\big( \overline B(x_0,a)\big)}<\infty$. Indeed, we have that $$N \leq \left\{ \begin{array}{ll} \bigg(\int\limits_{\overline B(x_0,a)}w(y)^{p'(y)}d\mu(y)\bigg)^{\frac{1}{\big(p_- (\overline B(x_0,a))\big)'}} & \hbox{if} \ \|w\|_{L^{p'(\cdot)}(\overline{B}(x_0,a))}\leq 1, \\ \bigg(\int\limits_{\overline B(x_0,a)}w(y)^{p'(y)}d\mu(y)\bigg)^{\frac{1}{\big(p_+ (\overline B(x_0,a))\big)'}} & \hbox{if}\ \|w\|_{L^{p'(\cdot)}(\overline{B}(x_0,a))}> 1. \end{array} \right. $$ Further, $$ \int\limits_{\overline B(x_0,a)}\!\!\! w(y)^{p'(y)}d\mu(y) = \!\!\!\!\!\!\int\limits_{\overline B(x_0,a)\cap{\{w\leq 1\}}}\!\!\!\!\!\!w(y)^{p'(y)}d\mu(y)+ \!\!\!\!\!\!\!\!\!\int\limits_{\overline B(x_0,a)\cap{\{w> 1\}}}\!\!\!\!\!\!w(y)^{p'(y)}d\mu(y) := I_{1}+I_{2}. $$ For $I_{1}$, we have that $ I_{1} \leq \mu\big( \overline B(x_0,a)\big)<\infty$. Since $L=\infty$ and condition (1) holds, there exists a point $y_0\in X$ such that $a< d(x_0,y_0)<2a$. Consequently, $\overline{B}(x_0,a) \subset \overline{B}(x_0, d(x_0,y_0))$ and $ p(y) \geq p_{-} \big({ \overline B(x_0,d(x_0,y_0))}\big) =p_{0}(y_0) $ for $y\in \overline{B}(x_0, a)$. Consequently, the condition $A_1<\infty$ yields $I_2 \leq \int\limits_{\overline B(x_0,a) }w(y)^{(p_0)'(y_0)}d\mu(y) <\infty $. Finally we have that $ \| T^{(2,1)}_{v,w}f \|_{L^{q(\cdot)}(X)} \leq C$.
Hence, $T_{v,w}$ is bounded from $L^{p(\cdot)}(X)$ to $L^{q(\cdot)}(X)$. $\Box$ The proof of the following statement is similar to that of Theorem 2.1; therefore we omit it (see also the proofs of Theorem 1.1.3 in \cite{EdKoMe} and Theorems 2.6 and 2.7 in \cite{EdKoMe1} for similar arguments). \vskip+0.1cm {\bf Theorem 2.2.} {\em Let $(X, d, \mu)$ be a quasi-metric measure space. Assume that $p$ and $q$ are measurable functions on $X$ satisfying the condition $1<p_{-}\leq \widetilde{p}_1(x)\leq q(x)\leq q_{+}<\infty. $ If $L= \infty$, then we assume that $p\equiv p_c\equiv$ const, $q\equiv q_c\equiv$ const outside some ball $B(x_0,a)$. If $$ B_{1}:= \sup\limits_{0\leq t \leq L} \int\limits_{d(x_{0},x)\leq t}\big(v(x)\big)^{q(x)}\bigg(\int\limits_{t\leq d(x_{0},y)\leq L} w^{(\widetilde{p}_{1})'(x)}(y)d\mu(y)\bigg)^{\frac{q(x)}{(\widetilde{p}_1)'(x)}}d\mu(x)<\infty,$$ then $T'_{v,w}$ is bounded from $L^{p(\cdot)}(X)$ to $L^{q(\cdot)}(X)$.} \vskip+0.1cm {\em Remark} 2.2. If $p\equiv$ const, then the condition $A_1<\infty$ in Theorem 2.1 (resp. $B_1<\infty$ in Theorem 2.2) is also necessary for the boundedness of $T_{v,w}$ (resp. $T'_{v,w}$) from $L^{p(\cdot)}(X)$ to $L^{q(\cdot)}(X)$. See \cite{EdKoMe}, pp. 4--5, for the details. \section{Potentials} In this section we discuss two-weight estimates for the potential operators $T_{\alpha(\cdot)}$ and $I_{\alpha(\cdot)}$ on quasi-metric measure spaces, where $0<\alpha_- \leq \alpha_+ <1$. If $\alpha\equiv \;{\hbox{const}}$, then we denote $T_{\alpha(\cdot)}$ and $I_{\alpha(\cdot)}$ by $T_{\alpha}$ and $I_{\alpha}$, respectively. The boundedness of Riesz potential operators in $L^{p(\cdot)}(\Omega)$ spaces, where $\Omega$ is a domain in ${\Bbb{R}}^n$, was established in \cite{Di2}, \cite{Sa2}, \cite{CrFiMaPe}, \cite{CaCrFi}. For the following statement we refer to \cite{KoSa5}: \vskip+0.1cm {\bf Theorem C.} {\em Let $(X, d,\mu)$ be an $\hbox{SHT}$. Suppose that $1<p_-\leq p_+<\infty$ and $p\in {\mathcal{P}}(1)$.
Assume that if $L=\infty$, then $p\equiv \hbox{const}$ outside some ball. Let $\alpha$ be a constant satisfying the condition $0< \alpha < 1/p_+$. We set $q(x)= \frac{p(x)}{1-\alpha p(x)}$. Then $T_{\alpha}$ is bounded from $L^{p(\cdot)}(X)$ to $L^{q(\cdot)}(X)$.} \vskip+0.1cm {\bf Theorem D \cite{KoMe5}.} {\em Let $(X, d, \mu)$ be a non-homogeneous space with $L<\infty$ and let $N$ be a constant defined by $N=a_1(1+2 a_0)$, where the constants $a_0$ and $a_1$ are taken from the definition of the quasi-metric $d$. Suppose that $1<p_-<p_+<\infty$, $p, \alpha \in {\mathcal{P}}(N)$ and that $\mu$ is upper Ahlfors $1$-regular. We define $q(x)= \frac{p(x)}{1-\alpha(x)p(x)}$, where $0< \alpha_-\leq \alpha_+< 1/p_-$. Then $I_{\alpha(\cdot)}$ is bounded from $L^{p(\cdot)}(X)$ to $L^{q(\cdot)}(X)$.} \vskip+0.1cm For the statements and their proofs of this section we keep the notation of the previous sections and, in addition, introduce the new notation: \begin{eqnarray*} v^{(1)}_{\alpha}(x):=v(x)(\mu B_{x_{0} x})^{\alpha-1},\;\; w^{(1)}_{\alpha}(x):=w^{-1}(x);\ v^{(2)}_{\alpha}(x):=v(x);\\ w^{(2)}_{\alpha}(x):=w^{-1}(x)(\mu B_{x_{0} x})^{\alpha-1};\\ F_x:= \begin{cases} \{ y\in X: \frac{d(x_0,x) L}{A^2a_1} \leq d(x_0, y) \leq A^2 L a_1 d(x_0,x)\} \;\; \hbox{if} \;\; L<\infty \\ \{ y\in X: \frac{d(x_0,x)}{A^2a_1} \leq d(x_0, y) \leq A^2 a_1 d(x_0,x)\} \;\; \hbox{if}\;\; L= \infty, \end{cases} \end{eqnarray*} where $A$ and $a_1$ are constants defined in Definition 1.10 and the triangle inequality for $d$ respectively. We begin this section with the following general-type statement: \vskip+0.1cm {\bf Theorem 3.1.} {\em Let $(X, d, \mu)$ be an $\hbox{SHT}$ without atoms. Suppose that $1<p_-\leq p_+<\infty$ and $\alpha$ is a constant satisfying the condition $0<\alpha<1/p_+$. Let $p \in {\mathcal{P}}(1)$. We set $q(x)=\frac{p(x)}{1-\alpha p(x)}$. Further, if $L=\infty$, then we assume that $p\equiv p_{c}\equiv \hbox{const}$ outside some ball $B(x_{0},a)$.
Then the inequality $$ \| v (T_{\alpha}f) \|_{L^{q(\cdot)}(X)} \leq c \| wf\|_{L^{p(\cdot)}(X)} \eqno{(5)}$$ holds if the following three conditions are satisfied: $(a)\;\;\; T_{v^{(1)}_{\alpha},w^{(1)}_{\alpha}} $ is bounded from $L^{p(\cdot)}(X)$ to $L^{q(\cdot)}(X)$; $(b)\;\;\; T_{v^{(2)}_{\alpha},w^{(2)}_{\alpha}}$ is bounded from $L^{p(\cdot)}(X)$ to $L^{q(\cdot)}(X)$; $(c)\;$ there is a positive constant $b$ such that one of the following inequalities holds: $1) \; v_+(F_x) \leq b w(x)$ for $\mu$-a.e. $x\in X$; $2)\; v(x)\leq b w_-(F_x) $ for $\mu$-a.e. $x\in X. $} \vskip+0.1cm {\em Proof.} For simplicity suppose that $L< \infty$. The proof for the case $L=\infty$ is similar to that of the previous case. Recall that the sets $I_{i,k}$, $i=1,2,3$, and $E_k$ are defined in Section 1. Let $f\geq 0$ and let $\|g\|_{L^{q'(\cdot)}(X)} \leq 1$. We have \begin{eqnarray*} \int\limits_{X} (T_{\alpha}f)(x) g(x) v(x) d\mu(x) = \sum_{k= -\infty}^0 \int\limits_{E_k} (T_{\alpha}f)(x) g(x) v(x) d\mu(x) \\ \leq \sum_{k=-\infty}^{0} \int\limits_{E_k} (T_{\alpha}f_{1,k})(x) g(x) v(x) d\mu(x)+ \sum_{k=-\infty}^0 \int\limits_{E_k} (T_{\alpha}f_{2,k})(x) g(x) v(x) d\mu(x) \\ + \sum_{k=-\infty}^0 \int\limits_{E_k} (T_{\alpha}f_{3,k}) (x) g(x) v(x) d\mu(x):= S_1+ S_2 +S_3, \end{eqnarray*} where $ f_{1,k}= f\cdot \chi_{I_{1,k}}$, $f_{2,k}= f\cdot \chi_{I_{2,k}}$, $f_{3,k}= f\cdot \chi_{I_{3,k}}. $ Observe that if $x\in E_k$ and $y\in I_{1,k}$, then $d (x_0, y) \leq d(x_0,x)/Aa_1$. Consequently, the triangle inequality for $d$ yields $d(x_0,x)\leq A' a_1a_0 d(x,y)$, where $A'=A/(A-1)$. Hence, by using Remark 1.1 we find that $ \mu(B_{x_0x})\leq c \mu( B_{xy})$. Applying now condition (a) we have that $$ S_1 \leq c \bigg\| \big(\mu B_{x_0 x}\big)^{\alpha-1} v(x) \int\limits_{B_{x_0 x}} f(y) d\mu(y) \bigg\|_{L^{q(\cdot)}(X)} \| g \|_{L^{q'(\cdot)}(X)} \leq c \| wf \|_{L^{p(\cdot)}(X)}.
$$ Further, observe that if $x\in E_k$ and $y\in I_{3,k}$, then $ \mu \big(B_{x_0y}\big) \leq c \mu\big( B_{xy}\big)$. By condition (b) we find that $ S_3 \leq c \| wf \|_{L^{p(\cdot)}(X)} $. Now we estimate $S_2$. Suppose that $v_+(F_x) \leq b w(x)$. Theorem C and Lemma 1.12 yield \begin{eqnarray*} S_2 \leq \sum_{k} \| \big(T_{\alpha} f_{2,k}\big)(\cdot) \chi_{E_k}(\cdot) v(\cdot) \|_{L^{q(\cdot)}(X)} \|g \chi_{E_k}(\cdot)\|_{L^{q'(\cdot)}(X)} \\ \leq \sum_{k} \Big( v_+( E_k) \Big)\| (T_{\alpha} f_{2,k})(\cdot) \|_{L^{q(\cdot)}(X)} \| g(\cdot) \chi_{E_k}(\cdot) \|_{L^{q'(\cdot)}(X)} \\ \leq c \sum_{k} \Big( v_+(E_k) \Big) \| f_{2,k} \|_{L^{p(\cdot)}(X)} \| g(\cdot) \chi_{E_k}(\cdot) \|_{L^{q'(\cdot)}(X)} \\ \leq c \sum_{k} \| f_{2,k}(\cdot) w(\cdot) \chi_{I_{2,k}}(\cdot) \|_{L^{p(\cdot)}(X)} \| g(\cdot) \chi_{E_k}(\cdot) \|_{L^{q'(\cdot)}(X)} \\ \leq c \| f(\cdot) w(\cdot) \|_{L^{p(\cdot)}(X)} \| g(\cdot) \|_{L^{q'(\cdot)}(X)} \leq c \| f(\cdot) w(\cdot) \|_{L^{p(\cdot)}(X)}. \end{eqnarray*} The estimate of $S_2$ for the case when $v(x) \leq b w_-(F_{x})$ is similar to that of the previous one. Details are omitted. $\Box$ \vskip+0.1cm Theorems 3.1, 2.1 and 2.2 imply the following statement: \vskip+0.1cm {\bf Theorem 3.2.} {\em Let $(X, d, \mu)$ be an $\hbox{SHT}$. Suppose that $1<p_-\leq p_+<\infty$ and $\alpha$ is a constant satisfying the condition $0<\alpha<1/p_+$. Let $p \in {\mathcal{P}}(1)$. We set $q(x)=\frac{p(x)}{1-\alpha p(x)}$. If $L=\infty$, then we suppose that $p\equiv p_{c}\equiv \hbox{const}$ outside some ball $B(x_{0},a)$. Then inequality $(5)$ holds if the following three conditions are satisfied: $$(i)\;\;\; P_1\!:=\!\! \sup\limits_{0<t\leq L}\!\!\!\!\! \int\limits_{t< d(x_{0},x)\leq L}\!\!\!\!\! \bigg(\frac{v(x)}{\big(\mu (B_{x_0 x})\big)^{1-\alpha}}\bigg)^{q(x)} \!\bigg(\!\!\!\!\!\int\limits_{d(x_{0},y)\leq t} \!\!\!\!\!\!\!w^{-({\widetilde{p}}_{0})'(x)}(y)d\mu(y)\bigg)^{\frac{q(x)}{({\widetilde{p}}_{0})'(x)}}\!\!\! d\mu(x)\!
<\!\infty;$$ $$ (ii)\;\;\; P_2\!:=\!\!\!\! \sup\limits_{0<t\leq L}\!\!\!\!\!\int\limits_{d(x_{0},x)\leq t}\!\!\!\!\!\!\!\big(v(x)\big)^{q(x)} \bigg(\!\!\!\!\!\!\!\int\limits_{t< d(x_{0},y)\leq L} \!\!\!\!\!\!\!\!\!\!\! \Big( w(y)\big(\mu B_{x_0y}\big)^{1-\alpha} \Big)^{-({\widetilde{p}}_{1})'(x)}\!\! d\mu(y) \bigg)^{\frac{q(x)}{({\widetilde{p}}_1)'(x)}}\!\!\!d\mu(x)\! <\!\infty, $$ $(iii)\;\;\;\; $ condition $(c)$ of Theorem $3.1$ holds.} \vskip+0.1cm {\em Remark} 3.1. If $p= p_c\equiv $ const on $X$, then the conditions $P_i<\infty$, $i=1,2$, are necessary for (5). Necessity of the condition $P_1<\infty$ follows by taking the test function $f= w^{-(p_c)'} \chi_{B(x_0,t)}$ in (5) and observing that $\mu B_{xy}\leq c\mu B_{x_0x}$ for those $x$ and $y$ which satisfy the conditions $d(x_0,x)\geq t$ and $d(x_0,y)\leq t$ (see also \cite{EdKoMe}, Theorem 6.6.1, p. 418 for similar arguments), while necessity of the condition $P_2<\infty$ can be derived by choosing the test function $f(x)= w^{-(p_c)'}(x) \chi_{X\setminus B(x_0,t)}(x)\big( \mu B_{x_0x}\big)^{(\alpha-1)((p_c)'-1)}$ and taking into account the estimate $\mu B_{xy}\leq \mu B_{x_0y}$ for $d(x_0,x)\leq t$ and $d(x_0,y)\geq t$. \vskip+0.1cm The next statement follows in the same manner as the previous one. In this case Theorem D is used instead of Theorem C. The proof is omitted. \vskip+0.1cm {\bf Theorem 3.3.} {\em Let $(X, d, \mu)$ be a non-homogeneous space with $L<\infty$. Let $N$ be a constant defined by $N=a_1(1+2 a_0)$. Suppose that $1<p_-\leq p_+<\infty$, $p, \alpha \in {\mathcal{P}}(N)$ and that $\mu$ is upper Ahlfors $1$-regular. We define $q(x)= \frac{p(x)}{1-\alpha(x)p(x)}$, where $0< \alpha_-\leq \alpha_+< 1/p_+$. Then the inequality $$ \| v(\cdot) (I_{\alpha(\cdot)}f)(\cdot) \|_{L^{q(\cdot)}(X)} \leq c \| w(\cdot) f(\cdot)\|_{L^{p(\cdot)}(X)} \eqno{(6)}$$ holds if $$(i)\;\;\; \sup\limits_{0\leq t\leq L}\!\!\!\!\!\!
\int\limits_{t< d(x_{0},x)\leq L} \!\!\!\!\!\!\!\bigg(\frac{v(x)}{\big(d(x_0,x)\big)^{1-\alpha(x)}}\bigg)^{q(x)} \bigg(\! \int\limits_{\overline{B}(x_0,t)} \!\!\!\!w^{-(p_{0})'(x)}(y)d\mu(y)\bigg)^{\frac{q(x)}{(p_{0})'(x)}}\!\!\!\! d\mu(x)\!<\!\infty;$$ $$ (ii)\;\;\; \sup\limits_{0\leq t \leq L}\!\!\!\int\limits_{\overline{B}(x_0,t)}\!\!\!\!\!\!\big(v(x)\big)^{q(x)}\bigg(\!\!\!\int\limits_{t< d(x_{0},y)\leq L} \!\!\!\!\!\!\! \big(w(y) d(x_0,y) ^{1-\alpha(y)}\big)^{-(p_{1})'(x)}d\mu(y)\bigg)^{\frac{q(x)}{(p_1)'(x)}}\!\!\!\! d\mu(x)\!<\!\infty, $$ $(iii)\;\;\; $ condition $(c)$ of Theorem $3.1$ is satisfied.} \vskip+0.1cm {\em Remark} 3.2. It is easy to check that if $p$ and $\alpha$ are constants, then conditions (i) and (ii) in Theorem 3.3 are also necessary for (6). This follows easily by choosing appropriate test functions in $(6)$ (see also Remark 3.1). \vskip+0.1cm {\bf Theorem 3.4.} {\em Let $(X, d, \mu)$ be an $\hbox{SHT}$ without atoms. Let $1<p_-\leq p_+<\infty$ and let $\alpha$ be a constant satisfying the condition $0<\alpha <1/p_+$. We set $q(x)=\frac{p(x)}{1-\alpha p(x)}$. Assume that $p$ has a minimum at $x_0$ and that $p \in \hbox{LH}(X)$. Suppose also that if $L= \infty$, then $p$ is constant outside some ball $B(x_0,a)$. Let $v$ and $w$ be positive increasing functions on $(0,2L)$. Then the inequality $$ \| v(d(x_0,\cdot)) (T_{\alpha} f) (\cdot) \|_{L^{q(\cdot)}(X)} \leq c \| w(d(x_0,\cdot))f(\cdot)\|_{L^{p(\cdot)}(X)} \eqno{(7)}$$ holds if $$ I_1\!:= \!\!\!\sup_{0 < t \leq L} \!\!\!I_1(t) \!\!:= \!\!\!\sup\limits_{0< t \leq L}\!\!\!\!\int\limits_{t< d(x_{0},x)\leq L}\!\!\!\!\bigg( \frac{v(d(x_0,x))}{\big(\mu (B_{x_0 x} )\big)^{1-\alpha}}\bigg)^{q(x)}$$ $$ \times \bigg(\!\!\!\!\!\!\!\!\int\limits_{d(x_{0},y)\leq t} \!\!\!\!\!\!\!\!\! w^{-({\widetilde{p}}_{0})'(x)}(d(x_0,y))d\mu(y)\!\! \bigg)^{\frac{q(x)}{({\widetilde{p}}_{0})'(x)}}\!\!d\mu(x) < \infty $$ for $ L=\infty;$ $$ J_1\!\!:=\!\!\! \sup\limits_{0<t\leq L}\!\!\!\!\!\!
\int\limits_{t< d(x_{0},x)\leq L}\!\!\!\!\!\!\!\bigg(\frac{v(d(x_0,x))}{\big(\mu (B_{x_0 x} )\big)^{1-\alpha}}\bigg)^{q(x)}\!\!\! \bigg(\!\!\!\!\!\!\!\int\limits_{d(x_{0},y)\leq t} \!\!\!\!\!\!\!\!\! w^{-p'(x_0)}(d(x_0,y))d\mu(y)\bigg)^{\frac{q(x)}{p'(x_0)}}\!\!\!d\mu(x)\!<\!\infty $$ for $ L<\infty.$} \vskip+0.1cm {\em Proof.} Let $L=\infty$. Observe that by Lemma 1.9 the condition $p\in \hbox{LH}(X)$ implies $p\in {\mathcal{P}}(1)$. We will show that the condition $I_1<\infty$ implies the inequality $ \frac{v(A^2a_1t)}{w(t)} \leq C $ for all $t>0$, where $A$ and $a_1$ are the constants from Definition 1.10 and the triangle inequality for $d$, respectively. Indeed, let us assume that $t\leq b_1$, where $b_1$ is a small positive constant. Then, taking into account the monotonicity of $v$ and $w$, and the facts that ${\widetilde{p}}_0(x) = p_0(x)$ (for small $d(x_0,x)$) and $\mu \in \hbox{RDC}(X)$, we have $$ I_1(t) \! \geq \!\!\!\!\!\!\! \int\limits_{A^2a_1 t\leq d(x_{0},x)< A^3 a_1 t}\!\!\!\!\!\!\!\!\! \bigg( \frac{v(A^2 a_1 t)}{w(t)}\bigg)^{q(x)}\!\!\!\!\! \big(\mu B(x_0, t)\big)^{(\alpha -1/p_0(x))q(x)} d\mu(x) $$ $$ \geq \bigg( \frac{v(A^2 a_1 t)}{w(t)}\bigg)^{q_-} \!\!\!\!\!\! \int\limits_{A^2 a_1 t\leq d(x_{0},x)<A^3 a_1 t} \!\!\!\!\!\!\! \big(\mu B(x_0, t)\big)^{(\alpha- 1/p_0(x))q(x)}d\mu(x) \geq c \bigg( \frac{v(A^2 a_1 t)}{w(t)} \bigg)^{q_-}. $$ Hence, $ \overline{c}:= \overline{\lim\limits_{t\to 0}} \frac{v(A^2 a_1 t)}{w(t)} <\infty$. Further, if $t>b_2$, where $b_2$ is a large number, then, since $p$ and $q$ are constant for $d(x_0, x)>t$, we have \begin{eqnarray*} I_1(t) \geq \bigg( \int\limits_{A^2 a_1 t\leq d(x_{0},x)<A^3 a_1 t} v(d(x_0,x))^{q_c} \big(\mu B(x_0, t)\big)^{(\alpha-1)q_c}d\mu(x) \bigg)\\ \times \bigg( \int\limits_{B(x_0,t)} w^{-(p_c)'}(x) d\mu(x) \bigg)^{ q_c/ (p_c)'} \\ \geq C \bigg( \frac{v(A^2 a_1 t)}{w(t)}\bigg)^{q_c} \!\!\!\! \int\limits_{A^2 a_1 t\leq d(x_{0},x)<A^3 a_1 t} \!\!\!\! \big(\mu B(x_0, t)\big)^{(\alpha- 1/p_c)q_c}d\mu(x) \geq c \bigg( \frac{v(A^2 a_1t)}{w(t)}\bigg)^{q_c}. \end{eqnarray*} In the last inequality we used the fact that $\mu$ satisfies the reverse doubling condition. Now we show that the condition $I_1 <\infty$ implies \begin{eqnarray*} \sup_{t>0} I_2(t) := \sup\limits_{t>0}\int \limits_{d(x_{0},x)\leq t} (v(d(x_0,x)))^{q(x)} \bigg(\int\limits_{d(x_{0},y)> t}\! \! \! w^{-({\widetilde{p}}_{1})'(x)} (d(x_0,y))\\ \times \big( \mu(B_{x_0y}) \big)^{(\alpha-1)(\widetilde{p}_1)'(x)}d\mu(y)\bigg)^{\frac{q(x)}{({\widetilde{p}}_{1})'(x)}}d\mu(x)<\infty. \end{eqnarray*} Due to the monotonicity of the functions $v$ and $w$, the condition $p\in LH(X)$, Proposition 1.4, Lemma 1.7, Lemma 1.9, and the assumption that $p$ has a minimum at $x_0$, we find that, for all $t>0$, \begin{eqnarray*} I_2 (t) \leq \int\limits_{d(x_{0},x)\leq t}\Big(\frac{v(t)}{w(t)}\Big)^{q(x)} \Big(\mu \big( B(x_0, t)\big)\Big)^{(\alpha-1/p(x_0))q(x)} d\mu(x)\\ \leq c \int\limits_{d(x_{0},x)\leq t}\Big(\frac{v(t)}{w(t)}\Big)^{q(x)} \Big(\mu \big( B(x_0, t) \big)\Big)^{\big(\alpha-1/p(x_0)\big)q(x_0)} d\mu(x) \\ \leq c \bigg(\int\limits_{d(x_{0},x)\leq t}\Big(\frac{v(A^2 a_1 t)}{w(t)}\Big)^{q(x)} d\mu(x)\bigg) \Big(\mu \big( B(x_0, t)\big)\Big)^{-1} \leq C. \end{eqnarray*} Now Theorem 3.2 completes the proof. $\Box$ \vskip+0.1cm {\bf Theorem 3.5.} {\em Let $(X,d,\mu)$ be an $SHT$ with $L<\infty$. Suppose that $p$, $q$ and $\alpha$ are measurable functions on $X$ satisfying the conditions $1<p_-\leq p(x)\leq q(x)\leq q_+<\infty$ and $1/p_-<\alpha_-\leq \alpha_+<1$. Assume that there is a point $x_0$ such that $\mu\{ x_0 \}=0$ and $p,q, \alpha \in \hbox{LH}(X, x_0)$.
Suppose also that $w$ is a positive increasing function on $(0,2L)$. Then the inequality $$ \| \big(T_{\alpha(\cdot)} f\big) v\|_{L^{q(\cdot)}(X)} \leq c \| w (d(x_0,\cdot)) f(\cdot) \|_{L^{p(\cdot)}(X)} $$ holds if the following two conditions are satisfied: \begin{eqnarray*} \widetilde{I}_1:= \sup\limits_{0< t\leq L}\int\limits_{t\leq d(x_{0},x)\leq L}\bigg(\frac{v(x)}{\big(\mu B_{x_0x}\big)^{1-\alpha(x)}}\bigg)^{q(x)}\\ \times \Big(\int\limits_{d(x_{0},y)\leq t} w^{-(p_{0})'(x)}(d(x_0,y))d\mu(y)\Big)^{\frac{q(x)}{(p_{0})'(x)}}d\mu(x)<\infty;\\ \widetilde{I}_2:=\sup\limits_{0< t\leq L}\!\!\! \int\limits_{d(x_{0},x)\leq t}\!\!\! \big(v(x)\big)^{q(x)}\bigg(\int\limits_{t\leq d(x_{0},y)\leq L} \!\!\! \Big( w(d(x_0,y)) \\ \times \big( \mu B_{x_0y}\big)^{1-\alpha(x)} \Big)^{-(p_{1})'(x)}d\mu(y) \bigg)^{\frac{q(x)}{(p_1)'(x)}}d\mu(x)<\infty. \end{eqnarray*}} \vskip+0.1cm {\em Proof.} For simplicity assume that $L=1$. First observe that by Lemma 1.9 we have $p,q,\alpha\in {\mathcal{P}}(1)$. Suppose that $f\geq 0$ and $S_{p} \big( w(d(x_0,\cdot)) f(\cdot) \big) \leq 1$. We will show that $S_{q} \big( v (T_{\alpha(\cdot)} f)\big) \leq C$. We have $$S_{q} \big( v T_{\alpha(\cdot)} f \big) \leq C_q \bigg[ \int\limits_{X} \bigg( v(x) \int\limits_{ d(x_0,y)\leq d(x_0,x)/(2a_1) } f(y) \big( \mu B_{xy} \big)^{ \alpha(x)-1 } d\mu(y) \bigg)^{q(x)} d \mu(x) $$ $$ + \int_X \bigg(v(x) \int\limits_{d(x_0,x)/(2a_1) \leq d(x_0,y) \leq 2a_1 d(x_0,x)} f(y) \big( \mu B_{xy} \big)^{\alpha(x)-1}d\mu(y) \bigg)^{q(x)} d\mu(x) $$ $$ + \int\limits_X \bigg(v(x) \int\limits_{d(x_0,y) \geq 2a_1 d(x_0,x)} f(y) \big( \mu B_{xy} \big)^{\alpha(x)-1}d\mu(y) \bigg)^{q(x)} d\mu(x) \bigg] := C_q [I_1 +I_2+I_3]. $$ First observe that by virtue of the doubling condition for $\mu$, Remark 1.1 and a simple calculation we find that $ \mu \big( B_{x_0x}\big) \leq c\, \mu \big(B_{xy}\big)$.
Taking into account this estimate and Theorem 2.1 we have that $$ I_1 \leq c \int_X \bigg( \frac{v(x)}{ \big( \mu B_{x_0x} \big)^{1-\alpha(x)}} \int\limits_{d(x_0,y)< d(x_0,x)} f(y) d\mu(y) \bigg)^{q(x)} d\mu(x) \leq C. $$ Further, it is easy to see that if $d(x_0,y)\geq 2a_1 d(x_0,x)$, then the triangle inequality for $d$ and the doubling condition for $\mu$ yield that $\mu B_{x_0y} \leq c \mu B_{xy}.$ Hence due to Proposition 1.5 we see that $ \big( \mu B_{x_0y}\big)^{\alpha(x)-1} \geq c \big( \mu B_{xy}\big)^{\alpha(y)-1}$ for such $x$ and $y$. Therefore, Theorem 2.2 implies that $ I_3 \leq C.$ It remains to estimate $I_2$. Let us denote: $$ E^{(1)}(x) \!:=\! \overline{B}_{x_0 x}\setminus B\big(x_0, d(x_0,x)/(2a_1)\big); \;\; E^{(2)}(x):= \overline{B}\big(x_0, 2a_1 d(x_0,x)\big) \setminus B_{x_0 x}.$$ Then we have that $$ I_2 \leq C \bigg[ \int\limits_{X} \Big[v(x) \int\limits_{E^{(1)}(x)} f(y) \big( \mu B_{x y}\big)^{\alpha(x)-1} d\mu (y) \Big]^{q(x)} d\mu(x) $$ $$ + \int\limits_{X} \Big[v(x) \int\limits_{E^{(2)}(x)} f(y) \big( \mu B_{x y}\big)^{\alpha(x)-1} d\mu (y) \Big]^{q(x)} d\mu(x)\bigg] := c[ I_{21}+I_{22}]. $$ Using H\"older's inequality for the classical Lebesgue spaces we find that $$I_{21} \leq \int\limits_X v^{q(x)} (x) \bigg( \int\limits_{E^{(1)}(x)} w^{p_0(x)} (d(x_0,y))(f(y))^{p_0(x)} d\mu(y) \bigg)^{q(x)/p_0(x)}$$ $$ \times \bigg( \int\limits_{E^{(1)}(x)} w^{-(p_0)'(x)}(d(x_0,y))\big( \mu B_{xy}\big)^{(\alpha(x)-1)(p_0)'(x)} d\mu(y) \bigg)^{q(x)/(p_0)'(x)} d\mu(x). $$ Denote the first inner integral by $J^{(1)}$ and the second one by $J^{(2)}$. 
By using the fact that $p_0(x) \leq p(y)$, where $y\in E^{(1)}(x)$, we see that $J^{(1)} \leq \mu( B_{x_0 x} )+ \int\limits_{E^{(1)}(x)} (f(y))^{p(y)} \big( w(d(x_0,y))\big)^{p(y)} d\mu(y) $, while by applying Lemma 1.7, for $J^{(2)}$, we have that \begin{eqnarray*}J^{(2)} \leq c w^{-(p_0)'(x)} \Big( \frac{d(x_0, x)}{2a_1}\Big) \int\limits_{E^{(1)}(x)} \Big( \mu B_{x y} \Big)^{(\alpha(x)-1)(p_0)'(x)} d\mu(y) \\ \leq c w^{-(p_0)'(x)}\Big(\frac{d(x_0,x)}{2a_1}\Big) \Big( \mu B_{x_0 x} \Big)^{(\alpha(x)-1)(p_0)'(x)+1}. \end{eqnarray*} Summarizing these estimates for $J^{(1)}$ and $J^{(2)}$ we conclude that \begin{eqnarray*} I_{21} \leq \int\limits_X v^{q(x)} (x) \big( \mu B_{x_0x}\big)^{q(x)\alpha(x)} w^{-q(x)}\Big(\frac{d(x_0,x)}{2a_1}\Big) d\mu(x) + \int\limits_X v^{q(x)} (x) \\ \times \bigg( \int\limits_{E^{(1)}(x)} w^{p(y)} (d(x_0,y))(f(y))^{p(y)} d\mu(y) \bigg)^{q(x)/p_0(x)} \big( \mu B_{x_0x}\big)^{q(x)(\alpha(x)-1/p_0(x))} \\ \times w^{-q(x)}\Big(\frac{d(x_0,x)}{2a_1}\Big)d\mu(x) =: I_{21}^{(1)}+ I_{21}^{(2)}. \end{eqnarray*} By applying monotonicity of $w$, the reverse doubling property for $\mu$ with the constants $A$ and $B$ (see Remark 1.3), and the condition $\widetilde{I}_1<\infty$ we have that $$I_{21}^{(1)} \!\!\leq c \!\!\sum_{k=-\infty}^{0} \int\limits_{ \overline{B} (x_0, A^k) \setminus B(x_0, A^{k-1})}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!v(x)^{q(x)} \bigg( \!\!\!\!\!\!\! \int\limits_{B \big( x_0, \frac{A^{k-1}}{2a_1}\big)}\!\!\!\!\!\!\!\!\!\! w^{-(p_0)'(x)} (d(x_0,y)) d\mu(y) \bigg)^{\frac{q(x)}{(p_0)'(x)}} $$ $$ \times \big( \mu B_{x_0,x}\big)^{\frac{q(x)}{p_0(x)}+(\alpha(x)-1)q(x)}d\mu(x) \leq c \sum_{k=-\infty}^0 \!\!\! \Big( \mu \overline{B} (x_0, A^k)\Big)^{q_-/p_+} $$ $$ \times \int\limits_{\overline{B}(x_0, A^k) \setminus B(x_0, A^{k-1})} \!\!\!\!\!\!\!\!\! v(x)^{q(x)} \bigg(\int\limits_{B \big( x_0, A^{k}\big)} \!\!\!\!\!\!\!\! 
w^{-(p_0)'(x)} (d(x_0,y)) d\mu(y) \bigg)^{\frac{q(x)}{(p_0)'(x)}} $$ $$ \times \big( \mu B_{x_0,x}\big)^{q(x)(\alpha(x)-1)}d\mu(x) \leq c \sum_{k=-\infty}^0 \Big( \mu \big( \bar{B}(x_0, A^k) \setminus B(x_0, A^{k-1}) \big) \Big)^{q_-/p_+} $$ $$ \leq c \sum_{k=-\infty}^0 \int\limits_{\bar{B}(x_0, A^k) \setminus B(x_0, A^{k-1})} \!\!\!\!\!\!\!\! \big( \mu B_{x_0,x}\big)^{q_-/p_+-1} d\mu(x) $$ $$ \leq c \int_{X} \big( \mu B_{x_0,x}\big)^{q_-/p_+-1} d\mu(x) <\infty. $$ Due to the facts that $q(x) \geq p_0(x)$, $S_p\big(w(d(x_0,\cdot))f(\cdot)\big)\leq 1$, $\widetilde{I}_1<\infty$ and $w$ is increasing, for $I_{21}^{(2)}$, we find that \begin{eqnarray*} I_{21}^{(2)} \leq c \sum_{k=-\infty}^0 \bigg( \int\limits_{\bar{B}(x_0, A^{k+1}a_1) \setminus B(x_0, A^{k-2})} w^{p(y)}(d(x_0,y))(f(y))^{p(y)} d\mu(y) \bigg) \\ \times\bigg( \int\limits_{\bar{B}(x_0, A^k) \setminus B(x_0, A^{k-1})}v^{q(x)}(x) \bigg( \int\limits_{B(x_0,A^{k-1})} w^{-(p_0)'(x)} (d(x_0,y)) d\mu(y) \bigg)^{\frac{q(x)}{(p_0)'(x)}} \\ \times\big(\mu B_{x_0,x}\big)^{(\alpha(x)-1)q(x)}d\mu(x) \bigg) \leq c\, S_p\big(f(\cdot)w(d(x_0,\cdot))\big) \leq c. \end{eqnarray*} The estimate for $I_{22}$ follows analogously; in this case we use the condition $\widetilde{I}_2<\infty$ and the fact that $p_1(x) \leq p(y)$ when $d(x_0,x) \leq d(x_0,y) < 2a_1 d(x_0,x)$. The details are omitted. The theorem is proved. $\Box$ \vskip+0.1cm Taking into account the proof of Theorem 3.5 we can easily derive the following statement, the proof of which is omitted: \vskip+0.1cm {\bf Theorem 3.6.} {\em Let $(X,d,\mu)$ be an $SHT$ with $L<\infty$. Suppose that $p$, $q$ and $\alpha$ are measurable functions on $X$ satisfying the conditions $1<p_-\leq p(x)\leq q(x)\leq q_+<\infty$ and $1/p_-<\alpha_-\leq \alpha_+<1$. Assume that there is a point $x_0$ such that $p,q, \alpha \in \hbox{LH}(X, x_0)$ and $p$ has a minimum at $x_0$.
Let $v$ and $w$ be positive increasing functions on $(0,2L)$ satisfying the condition $J_1<\infty$ (see Theorem $3.4$). Then inequality $(7)$ is fulfilled.} \vskip+0.1cm {\bf Theorem 3.7.} {\em Let $(X, d, \mu)$ be an SHT with $L<\infty$ and let $\mu$ be upper Ahlfors $1$-regular. Suppose that $1<p_-\leq p_+<\infty$ and that $p \in {\overline{LH}}(X)$. Let $p$ have a minimum at $x_0$. Assume that $\alpha$ is a constant satisfying the condition $\alpha< 1/p_+$. We set $q(x)= \frac{p(x)}{1-\alpha p(x)}$. If $v$ and $w$ are positive increasing functions on $(0,2L)$ satisfying the condition $$E:=\!\!\!\sup\limits_{0\leq t\leq L}\!\!\!\!\!\!\! \int\limits_{t< d(x_{0},x)\leq L} \!\!\!\!\!\!\!\!\bigg(\frac{v(d(x_0,x))}{\big(d(x_0,x)\big)^{1-\alpha}}\bigg)^{q(x)} \bigg(\!\!\! \int\limits_{d(x_{0},y)\leq t} \!\!\!\!\!\!\!\!w^{-(p_{0})'(x)}(d(x_0,y))d\mu(y)\bigg)^{\frac{q(x)}{(p_{0})'(x)}}\!\!\! d\mu(x)<\infty, $$ then the inequality $$ \| v \big(d(x_0,\cdot)\big) (I_{\alpha}f)(\cdot) \|_{L^{q(\cdot)}(X)} \leq c \| w\big(d(x_0,\cdot)\big) f(\cdot)\|_{L^{p(\cdot)}(X)} $$ holds.} \vskip+0.1cm {\em Proof.} The proof is similar to that of Theorem 3.4; we only discuss some details. First observe that due to Remark 1.2 we have that $p\in {\mathcal{P}}(N)$, where $N= a_1(1+2a_0)$. It is easy to check that the condition $E<\infty$ implies that $\frac{v(A^2a_1t)}{w(t)} \leq C $ for all $t$, where the constant $A$ is defined in Definition 1.10 and $a_1$ is the constant from the triangle inequality for $d$. Further, Lemmas 1.7 and 1.9, the fact that $p$ has a minimum at $x_0$ and the inequality $$\int\limits_{d(x_0,y)>t}\big( d(x_0,y)\big)^{(\alpha-1)(p_1)'(x)} d\mu(y) \leq c t^{(\alpha-1)(p_1)'(x)+1},$$ where the constant $c$ does not depend on $t$ and $x$, yield that $$ \sup\limits_{0\leq t \leq L}\!\!\int\limits_{d(x_{0},x)\leq t} \!\!\! (v(d(x_0,x)))^{q(x)} \bigg(\int\limits_{d(x_{0},y)> t}\! \! \!\!\! \bigg(\frac{w (d(x_0,y))}{\big( d(x_0,y) \big)^{1-\alpha}}\bigg)^{-(p_{1})'(x)} \!\!\!\!
d\mu(y)\bigg)^{\frac{q(x)}{(p_{1})'(x)}}\!\!d\mu(x)<\infty. $$ Theorem 3.3 completes the proof. $\Box$ \vskip+0.2cm {\bf Example 3.8.} {\em Let $v(t)=t^{\gamma}$ and $w(t)= t^{\beta}$, where $\gamma$ and $\beta$ are constants satisfying the conditions $ 0\leq \beta< 1/(p_-)'$ and $\gamma\geq \max\{0,\; 1-\alpha-\frac{1}{q_+}-\frac{q_-}{q_+}(-\beta+\frac{1}{(p_-)'})\}$. Then $(v,w)$ satisfies the conditions of Theorem $3.4$.} \section{Maximal and Singular Operators} Let $$ Kf (x)= p.v. \int\limits_X k(x,y)f(y) d\mu(y), $$ where $k: X\times X\setminus \{(x,x): x\in X\} \to {\Bbb{R}}$ is a measurable function satisfying the conditions: \begin{gather*} |k(x,y)|\leq \frac{c}{\mu B(x, d(x,y))}, \;\; x,y\in X, \;\; x\neq y; \\ |k(x_1,y)-k(x_2,y)|+ |k(y, x_1)-k(y, x_2)| \leq c \omega \Big( \frac{d(x_2, x_1)}{d(x_2, y)}\Big) \frac{1}{\mu B(x_2, d(x_2,y))} \end{gather*} for all $x_1, x_2$ and $y$ with $d(x_2,y)\geq c\, d(x_1, x_2)$, where $\omega$ is a positive non-decreasing function on $(0,\infty)$ which satisfies the $\Delta_2$ condition: $\omega(2t)\leq c \omega(t)$ ($t>0$), and the Dini condition: $\int_0^1 \big(\omega(t)/t\big) dt <\infty$. We also assume that for some constant $s$, $1<s<\infty$, and all $f\in L^{s}(X)$ the limit $Kf(x)$ exists almost everywhere on $X$ and that $K$ is bounded in $L^{s}(X)$. \vskip+0.1cm It is known (see, e.g., \cite{EdKoMe}, Ch. 7) that if $r$ is a constant such that $1<r<\infty$, $(X, d, \mu)$ is an SHT and the weight function $w\in A_r(X)$, i.e. $$ \sup_{B} \bigg(\frac{1}{\mu (B)} \int\limits_B w(x) d\mu(x)\bigg) \bigg( \frac{1}{\mu (B)} \int\limits_B w^{1-r'}(x) d\mu(x)\bigg)^{r-1}<\infty,$$ where the supremum is taken over all balls $B$ in $X$, then the one--weight inequality $ \| w^{1/r} Kf\|_{L^{r}(X)} \leq c \|w^{1/r} f \|_{L^{r}(X)}$ holds. \vskip+0.1cm The boundedness of Calder\'on--Zygmund operators in $L^{p(\cdot)}({\Bbb{R}}^n)$ was established in \cite{DiRu}.
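To build intuition for the $A_r(X)$ condition just stated, the following sketch (our illustration, not part of the paper) evaluates the bracketed product numerically for the power weight $w(x)=|x|^{\gamma}$ on $X={\Bbb R}$, but only over balls $B=[-h,h]$ centered at the origin; a genuine $A_r$ check would take the supremum over all balls. The product turns out to be independent of $h$, consistent with the classical fact that $|x|^{\gamma}\in A_r({\Bbb R})$ exactly when $-1<\gamma<r-1$.

```python
import numpy as np

def trapz(y, x):
    # simple trapezoid rule (avoids NumPy-version differences around np.trapz)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def a_r_product(gamma, r, h, n=6000):
    """A_r quantity for w(x)=|x|^gamma over the centered ball B=[-h,h].

    By symmetry, both averages over B reduce to integrals of powers of x
    over (0,h] divided by h; a log-spaced grid resolves the integrable
    singularity of w^{1-r'} at the origin."""
    rp = r / (r - 1.0)                                  # conjugate exponent r'
    x = np.logspace(np.log10(h) - 12.0, np.log10(h), n)
    avg_w    = trapz(x**gamma,              x) / h      # (1/|B|) int_B w
    avg_dual = trapz(x**(gamma * (1 - rp)), x) / h      # (1/|B|) int_B w^{1-r'}
    return avg_w * avg_dual**(r - 1.0)
```

For $\gamma=1/2$, $r=2$ the product equals $4/3$ at every scale $h$; for $\gamma\geq r-1$ the dual average blows up as the lower cutoff of the grid decreases, reflecting the failure of the $A_r$ condition.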
\vskip+0.1cm {\bf Theorem E \cite{KoSa8}.} {\em Let $(X, d,\mu)$ be an \hbox{SHT}. Suppose that $p\in {\mathcal{P}}(1)$. Then the singular operator $K$ is bounded in $L^{p(\cdot)}(X)$.} \vskip+0.1cm Before formulating the main results of this section we introduce the notation: $$ \overline{v}(x):= \frac{v(x)}{\mu (B_{x_0 x})},\;\;\; \widetilde{w}(x) := \frac{1}{w(x)}, \;\; \widetilde{w}_1(x):= \frac{1}{w(x)\mu (B_{x_0 x})}.$$ The following statement is proved in the same way as Theorem 3.1. In this case Theorem 1.2 (for the maximal operator) and Theorem E (for singular integrals) are used instead of Theorem C. Details are omitted. \begin{theorem}Let $(X, d, \mu)$ be an $\hbox{SHT}$ and let $1<p_-\leq p_+<\infty$. Further suppose that $p\in {\mathcal{P}}(1)$. If $L=\infty$, then we assume that $p$ is constant outside some ball $B(x_0,a)$. Then the inequality $$ \| v (Nf) \|_{L^{p(\cdot)}(X)} \leq C \| w f \|_{L^{p(\cdot)}(X)}, \eqno{(8)}$$ where $N$ is $M$ or $K$, holds if the following three conditions are satisfied: $(a)$ $\;\;\;\; T_{\overline{v}, \widetilde{w}}$ is bounded in $L^{p(\cdot)}(X)$; $(b)$ $\;\;\;\; T'_{v, \widetilde{w}_1}$ is bounded in $L^{p(\cdot)}(X)$; $(c)$ there is a positive constant $b$ such that one of the following two conditions holds: $1)\;\; v_+(F_x) \leq b\, w(x)$ for $\mu$-a.e. $x\in X$; $2)\;\; v(x)\leq b\, w_-(F_x)$ for $\mu$-a.e. $x\in X$, where $F_x$ is the set depending on $x$ defined in Section $3$. \end{theorem} The next two statements are direct consequences of Theorems 4.1, 2.1 and 2.2. \begin{theorem} Let $(X, d, \mu)$ be an $\hbox{SHT}$ and let $1<p_-\leq p_+<\infty$. Further suppose that $p\in {\mathcal{P}}(1)$. If $L=\infty$, then we assume that $p\equiv p_c\equiv$ const outside some ball $B(x_0,a)$. Let $N$ be $M$ or $K$.
Then inequality $(8)$ holds if: \begin{eqnarray*} (i) \;\; \sup\limits_{0< t\leq L}\int\limits_{t< d(x_{0},x) \leq L}\bigg(\frac{v(x)}{\mu B_{x_0x}}\bigg)^{p(x)} \bigg(\int\limits_{\overline{B}(x_0,t)}w^{-({\widetilde{p}}_{0})'(x)}(y)d\mu(y)\bigg)^{\frac{p(x)}{({\widetilde{p}}_{0})'(x)}}d\mu(x)<\infty;\\ (ii)\;\; \sup\limits_{0< t\leq L}\int\limits_{\overline{B}(x_0, t)} \big(v(x)\big)^{p(x)} \bigg(\int\limits_{t< d(x_{0},x)\leq L} \bigg(\frac{w(y)}{\mu B_{x_0y}}\bigg)^{-({\widetilde{p}}_{1})'(x)}d\mu(y)\bigg)^{\frac{p(x)}{({\widetilde{p}}_1)'(x)}}d\mu(x)<\infty; \end{eqnarray*} $(iii)\;\;\;\; $ condition $(c)$ of the previous theorem is satisfied.\end{theorem} {\em Remark} 4.1. It is known (see \cite{EdKo}) that if $p\equiv$ const, then conditions (i) and (ii) (written for $X={\Bbb{R}}$, the Euclidean distance and the Lebesgue measure) of Theorem 4.2 are also necessary for the two--weight inequality $$ \| v (Hf) \|_{L^{p(\cdot)}({\mathbb{R}})} \leq C \| w f \|_{L^{p(\cdot)}({\mathbb{R}})}, $$ where $H$ is the Hilbert transform on ${\mathbb{R}}$: $(Hf)(x)=\; \text{p.v.}\; \int\limits_{{\mathbb{R}}} \frac{f(t)}{x-t}dt$. \vskip+0.1cm {\em Remark} 4.2. If $p\equiv$ const and $N=M$, then condition (i) of Theorem 4.2 is necessary for (8). This follows from the obvious estimate $Mf(x) \geq \frac{c}{\mu ( B_{x_0x} )} \int\limits_{B_{x_0x}} f(y) d\mu(y)$ ($f\geq 0$) and Remark 2.2. \begin{theorem} Let $(X, d, \mu)$ be an $\hbox{SHT}$ without atoms. Let $1<p_-\leq p_+<\infty$. Assume that $p$ has a minimum at $x_0$ and that $p \in \hbox{LH}(X)$. If $L=\infty$ we also assume that $p\equiv p_c \equiv$ const outside some ball $B(x_0,a)$. Let $v$ and $w$ be positive increasing functions on $(0,2L)$. 
Then the inequality $$ \| v(d(x_0,\cdot)) (N f) (\cdot) \|_{L^{p(\cdot)}(X)} \leq c \| w(d(x_0,\cdot))f(\cdot)\|_{L^{p(\cdot)}(X)}, \eqno{(9)} $$ where $N$ is $M$ or $K$, holds if the following condition is satisfied: $$ \sup\limits_{0<t \leq L}\int\limits_{t< d(x_{0},x) \leq L}\!\!\!\!\!\!\!\!\!\!\!\!\bigg(\frac{v(d(x_0,x))}{\mu(B_{x_0x}) } \bigg)^{p(x)} \bigg(\int\limits_{\overline{B}(x_0, t)} \!\!\!\!\! w^{-({\widetilde{p}}_{0})'(x)}(d(x_0,y)) d\mu(y)\bigg)^{\frac{p(x)}{({\widetilde{p}}_{0})'(x)}}d\mu(x)<\infty. $$ \end{theorem} The proof of this statement is similar to that of Theorem 3.4; therefore we omit it. Notice that Lemma 1.9 yields that $p\in LH(X)\Rightarrow p\in {\mathcal{P}}(1).$ \begin{example} Let $(X, d, \mu)$ be a quasimetric measure space with $L<\infty$. Suppose that $1<p_-\leq p_+<\infty$ and $p\in LH(X)$. Assume that the measure $\mu$ is upper and lower Ahlfors $1$-regular. Let there exist $x_0\in X$ such that $p$ has a minimum at $x_0$. Then the condition $$ S:= \sup\limits_{0<t \leq L}\int\limits_{t< d(x_{0},x)\leq L}\!\!\!\!\!\!\!\!\!\!\!\!\bigg(\frac{v(d(x_0,x))}{\mu(B_{x_0x}) } \bigg)^{p(x)} \bigg(\int\limits_{\overline{B}(x_0, t)} \!\!\!\!\!w^{-p'(x_0)}(d(x_0,y))d\mu(y)\bigg)^{\frac{p(x)}{p'(x_0)}}d\mu(x)<\infty $$ is satisfied for the weight functions $ v(t)= t^{1/p'(x_0)}$ and $w(t)=t^{1/p'(x_0)}\ln \frac{2L}{t}$, and, consequently, by Theorem $4.3$, inequality $(9)$ holds, where $N$ is $M$ or $K$. \end{example} Indeed, first observe that $v$ and $w$ are both increasing on $[0,L]$. Further, it is easy to check that $S \leq c \sup\limits_{0< t \leq L} V(t) \Big(W(t)\Big)^{\frac{p(x_0)}{p'(x_0)}}<\infty$ because $ W(L) <\infty$. By representing the integrals as improper integrals and using the fact that $\mu$ is Ahlfors $1$-regular, it follows that $ W(t) \leq C_1 \ln^{-1} \frac{2L}{t}$ and $ V(t) \leq C_2 \ln \frac{2L}{t}$ for $0<t\leq L$, where the positive constants $C_1$ and $C_2$ do not depend on $t$. Hence the result follows.
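The condition $S<\infty$ for these logarithmic weights can also be checked numerically. The sketch below is our illustration under simplifying assumptions, not the paper's general setting: $X=[0,1]$ with the Euclidean distance and Lebesgue measure, $x_0=0$, $L=1$, and a constant exponent $p$ (so that $\mu B_{x_0x}=x$, $\widetilde{p}_0=p'(x_0)=p'$, and the example's hypotheses hold trivially). For $p=2$ the quantity under the supremum reduces to $\ln(1/t)/\ln(2/t)<1$, so the computed supremum stays bounded.

```python
import numpy as np

def trapz(y, x):
    # simple trapezoid rule (avoids NumPy-version differences around np.trapz)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def inner(t, p_prime, n=40000):
    # int_0^t w(y)^{-p'} dy for w(s) = s^{1/p'} * log(2/s); the integrand is
    # y^{-1} log(2/y)^{-p'}, so a log-spaced grid (cut off at 1e-300, which
    # contributes only O(1/log(1e300))) resolves the singularity at 0
    y = np.logspace(-300.0, np.log10(t), n)
    return trapz(y**(-1.0) * np.log(2.0 / y)**(-p_prime), y)

def S_of_t(t, p=2.0, n=4000):
    # the quantity under the supremum in S for the model case above:
    # (v(x)/x)^p = x^{-1} for v(x) = x^{1/p'}, integrated over (t,1]
    p_prime = p / (p - 1.0)
    x = np.logspace(np.log10(t), 0.0, n)
    outer = trapz(x**(-1.0), x)            # equals log(1/t)
    return outer * inner(t, p_prime)**(p / p_prime)
```

For $p=2$, `S_of_t(t)` stays below $1$ and approaches $1$ as $t\to 0$, in line with the bounds $W(t)\lesssim \ln^{-1}(2L/t)$ and $V(t)\lesssim \ln(2L/t)$ used in the text.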
Observe that for constant $p$ both weights $v$ and $w$ are outside the Muckenhoupt class $A_p(X)$ (see, e.g., \cite{EdKoMe}, Ch. 8). \vskip+0.1cm {\bf Acknowledgement.} The first and second authors were partially supported by the Georgian National Science Foundation Grant (project numbers: No. GNSF/ST09/23/3-100 and No. GNSF/ST07/3-169). Part of this work was carried out at the Abdus Salam School of Mathematical Sciences, GC University, Lahore. The second and third authors are grateful to the Higher Education Commission of Pakistan for the financial support. Authors' Addresses V. Kokilashvili: A. Razmadze Mathematical Institute, 1. M. Aleksidze Str., 0193 Tbilisi, Georgia and Faculty of Exact and Natural Sciences, I. Javakhishvili Tbilisi State University, 2, University St., Tbilisi 0143, Georgia \\ e-mail: [email protected]\\ A. Meskhi: A. Razmadze Mathematical Institute, 1. M. Aleksidze Str., 0193 Tbilisi, Georgia and Department of Mathematics, Faculty of Informatics and Control Systems, Georgian Technical University, 77, Kostava St., Tbilisi, Georgia.\\ e-mail: [email protected]\\ M. Sarwar: Abdus Salam School of Mathematical Sciences, GC University, 68-B New Muslim Town, Lahore, Pakistan \\ e-mail: [email protected] \end{document}
\begin{document} \title[Composite Wavelet Transforms]{Composite Wavelet Transforms: Applications and Perspectives} \author{Ilham A. Aliev} \address{Department of Mathematics, Akdeniz University, 07058 Antalya TURKEY} \email{[email protected]} \author{Boris Rubin} \address{Department of Mathematics, Louisiana State University, Baton Rouge, Louisiana 70803} \email{[email protected]} \author{Sinem Sezer} \address{Faculty of Education, Akdeniz University, 07058 Antalya TURKEY} \email{[email protected]} \author{Simten B. Uyhan} \address{Department of Mathematics, Akdeniz University, 07058 Antalya TURKEY} \email{[email protected]} \thanks{The research was supported by the Scientific Research Project Administration Unit of the Akdeniz University (Turkey) and TUBITAK (Turkey). The second author was also supported by the NSF grants EPS-0346411 (Louisiana Board of Regents) and DMS-0556157.} \renewcommand{\subjclassname}{\textup{2000} Mathematics Subject Classification} \subjclass[2000]{42C40, 44A12, 47G10.} \keywords{Wavelet transforms, potentials, semigroups, generalized translation, Radon transforms, inversion formulas, matrix spaces.} \begin{abstract} We introduce a new concept of so-called {\it composite wavelet transforms}. These transforms are generated by two components, namely, a kernel function and a wavelet function (or a measure). The composite wavelet transforms and the relevant Calder\'{o}n-type reproducing formulas constitute a unified approach to explicit inversion of the Riesz, Bessel, Flett, parabolic, and some other operators of potential type generated by ordinary (Euclidean) and generalized (Bessel) translations. This approach is exhibited in the paper. Another concern is the application of composite wavelet transforms to explicit inversion of the $k$-plane Radon transform on ${\Bbb R}^n$.
We also discuss in detail a series of open problems arising in the wavelet analysis of $L_p$-functions of matrix argument. \end{abstract} \maketitle \centerline{Contents} \centerline{} 1. Introduction. 2. Composite wavelet transforms for dilated kernels. 3. Wavelet transforms associated to one-parametric semigroups and inversion of potentials. 4. Wavelet transforms with the generalized translation operator. 5. Beta-semigroups. 6. Parabolic wavelet transforms. 7. Some applications to inversion of the $k$-plane Radon transform. 8. Higher-rank composite wavelet transforms and open problems. References. \section{Introduction} Continuous wavelet transforms \begin{equation*} \mathcal{W}f(x,t)=t^{-n}\int_{{\Bbb R}^n}f(y)\, w\left(\frac{x-y}{t}\right) dy, \qquad x\in \mathbb{R}^{n}, \quad t>0, \end{equation*} where $w$ is an integrable radial function satisfying $\int_{{\Bbb R}^n}w(x)\,dx=0$, have proved to be a powerful tool in analysis and applications. There is a vast literature on this subject (see, e.g., \cite{Da}, \cite{16}, \cite{20}, just to mention a few). Owing to the formula \begin{equation} \int_{0}^{\infty }\mathcal{W}f(x,t)\,\frac{dt}{t^{1+\alpha }} =c_{\alpha,w}\,(-\Delta)^{\alpha/2}f(x),\qquad \alpha \in \mathbb{C},\quad \Delta =\sum\limits_{k=1}^{n}\frac{\partial^{2}}{\partial x_{k}^{2}}, \label{1.2} \end{equation} which can be given a precise meaning, continuous wavelet transforms enable us to resolve a variety of problems dealing with powers of differential operators. Such problems arise, e.g., in potential theory, fractional calculus, and integral geometry; see \cite{16}, \cite{21}-\cite{27}, \cite{32}. When dealing with functions of several variables, it is always tempting to reduce the dimension of the domain of the wavelet function $w$ and to find new tools that provide extra flexibility. This is the main motivation for our article.
We introduce a new concept of so-called {\it composite wavelet transforms}. Loosely speaking, this is a class of wavelet-like transforms generated by two components, namely, a kernel function and a wavelet. Both are at our disposal. The first one depends on as many variables as we need for our problem. The second component, which is a wavelet function (or a measure), depends on only one variable. Such transforms are usually associated with one-parametric semigroups, like the Poisson, Gauss-Weierstrass, or metaharmonic ones, and can be implemented to obtain explicit inversion formulas for diverse operators of potential type and fractional integrals. These arise in integral geometry in a canonical way; see, e.g., \cite{15, 22, 26, R9}. In the present article we study different types of composite wavelet transforms in the framework of the $L_p$-theory and the relevant Fourier and Fourier-Bessel harmonic analysis. The main focus is on reproducing formulas of Calder\'{o}n type and explicit inversion of Riesz, Bessel, Flett, parabolic, and some other potentials. Apart from a brief review of recent developments in the area, the paper contains a series of new results. These include wavelet transforms for dilated kernels and wavelet transforms generated by Beta-semigroups, which correspond to multiplication by $\exp(-t|\xi|^{\beta})$, $\beta>0$, on the Fourier transform side. Such semigroups arise in the context of stable random processes in probability and enjoy a number of remarkable properties \cite{Ko}, \cite{La}. Special emphasis is placed on a detailed discussion of open problems arising in the wavelet analysis of functions of matrix argument. Important results for $L_2$-functions in this ``higher-rank" set-up were obtained in \cite{OOR} using the Fourier transform technique. The $L_p$-case for $p\neq 2$ is still mysterious.
The main difficulties are related to the correct definition and handling of admissible wavelet functions on the cone of positive definite symmetric matrices. The paper is organized according to the Contents presented above. \section{Composite Wavelet Transforms for Dilated Kernels} \subsection{Preliminaries} Let $L_{p}\equiv L_{p}(\mathbb{R}^{n})$, $1\le p<\infty$, be the standard space of functions with the norm \begin{equation*} \left\| f\right\|_{p}=\Big( \int_{\mathbb{R}^{n}}\left| f(x)\right|^{p}dx\Big)^{1/p}<\infty. \end{equation*} For technical reasons, the notation $L_{\infty }$ will be used for the space $C_{0}\equiv C_{0}(\mathbb{R}^{n})$ of all continuous functions on $\mathbb{R}^{n}$ vanishing at infinity. The Fourier transform of a function $f$ on $\mathbb{R}^{n}$ is defined by \begin{equation*} Ff(\xi)=\int_{\mathbb{R}^{n}}f(x)\, e^{ix\cdot \xi }\,dx, \qquad x\cdot \xi=x_{1}\xi _{1}+\cdots +x_{n}\xi _{n}. \end{equation*} For $ 0\le a<b\le\infty$, we write $\int_a^b f(\eta)\,d\mu (\eta)$ to denote the integral of the form $\int_{[a,b)} f(\eta)\,d\mu (\eta)$. \begin{definition}\label{d1} Let $q$ be a measurable function on ${\Bbb R}^n$ satisfying the following conditions: (a) $q\in L_1\cap L_r$ for some $r>1$; (b) the least radial decreasing majorant of $q$ is integrable, i.e. $$\tilde q (x)=\sup_{|y|>|x|} |q(y)| \in L_1;$$ (c) $\int_{{\Bbb R}^n} q(x)\, dx =1$. \noindent We denote \begin{equation}\label{qu} q_t (x)=t^{-n} q(x/t), \qquad Q_t f(x)=(f*q_t)(x), \qquad t>0,\end{equation} and set \begin{equation}\label{cwtr} Wf(x,t)= \int_0^\infty Q_{t\eta} f(x)\,d\mu (\eta),\end{equation} where $\mu$ is a finite Borel measure on $[0,\infty)$. If $\mu$ is {\it a wavelet measure} (i.e., $\mu$ has a certain number of vanishing moments and obeys suitable decay conditions), then (\ref{cwtr}) will be called the {\it composite wavelet transform} of $f$. The function $q$ will be called a {\it kernel function} and $Q_t$ a {\it kernel operator} of the composite transform $W$. \end{definition} The integral (\ref{cwtr}) is well defined for any function $f \in L_p$, and $$ ||Wf(\cdot,t)||_p\le ||\mu || \, ||q||_{1} \, ||f||_p, $$ where $||\mu ||=\int_{[0,\infty)}d|\mu| (\eta)$. We will also consider the more general weighted transform \begin{equation}\label{cwtrw} W_af(x,t)= \int_0^\infty Q_{t\eta} f(x)\,e^{-at\eta}\,d\mu (\eta),\end{equation} where $a\geq 0$ is a fixed parameter. The kernel function $q$, the wavelet measure $\mu$, and the parameter $a \geq 0$ are at our disposal. This feature makes the new transform convenient in applications. \subsection{Calder\'on's identity} An analog of Calder\'on's reproducing formula for $W_af$ is given by the following theorem. \begin{theorem}\label{teo:34} Let $\mu$ be a finite Borel measure on $[0,\infty)$ satisfying \begin{equation}\label{eq:32} \mu([0,\infty))=0 \quad \text{and} \quad \int_0^\infty |\log \eta|\, d|\mu|(\eta) <\infty.\end{equation} If $f\in L_p$, $1 \le p \le \infty$\footnote{We recall that $L_\infty$ is interpreted as the space $C_0$ with uniform convergence.}, and $$c_\mu=\int_0^\infty \log \frac{1}{\eta} \, d\mu (\eta),$$ then \begin{equation}\label{eq33}\int_0^\infty W_af(x,t)\frac{dt}{t}\equiv \lim\limits_{\varepsilon \rightarrow 0}\int_\varepsilon^\infty W_af(x,t)\frac{dt}{t}=c_\mu f(x), \end{equation} where the limit exists in the $L_p$-norm and pointwise for almost all $x$. If $f\in C_0 $, this limit is uniform on ${\Bbb R}^n$. \end{theorem} \begin{proof} Consider the truncated integral \begin{equation}\label{eq:38} I_\varepsilon f(x)=\int_\varepsilon^\infty W_af(x,t)\frac{dt}{t}, \qquad \varepsilon >0. \end{equation} Our aim is to represent it in the form \begin{equation}\label{eq:3100} I_\varepsilon f(x)= \int_{0}^{\infty}Q_{\varepsilon s} f(x) \,e^{-a\varepsilon s}\, k(s) \, ds, \end{equation} where \begin{equation}\label{ka} k\in L_1 (0, \infty) \qquad \text{and} \qquad \int_0^\infty k(s)\, ds=c_\mu. \end{equation} Once (\ref{eq:3100}) is established, all the rest follows from properties (a)-(c) in Definition \ref{d1} according to the standard machinery of approximation to the identity; see \cite{St}. Equality (\ref{eq:3100}) can be formally obtained by changing the order of integration, namely, \begin{eqnarray} I_\varepsilon f(x) &=&\int_0^\infty d\mu(\eta) \int_{\varepsilon }^\infty Q_{t\eta} f(x)\,e^{-at\eta}\,\frac{dt}{t}\nonumber \\ &=& \int_{0}^{\infty}d\mu(\eta) \int_{\eta}^\infty Q_{\varepsilon s} f(x)\,e^{-a\varepsilon s}\,\frac{ds}{s}\nonumber \\ &=&\int_{0}^{\infty}Q_{\varepsilon s} f(x)\,e^{-a\varepsilon s}\, k(s)\, ds, \qquad k(s)=s^{-1}\int_{0}^{s}d\mu(\eta). \nonumber \end{eqnarray} Furthermore, since $\mu([0,\infty))=0$, we have \begin{eqnarray} \int_0^\infty |k(s)|\, ds&=&\int_0^1 \Big|\int_0^s d\mu(\eta)\Big|\,\frac{ds}{s}+\int_1^{\infty} \Big|\int_s^{\infty} d\mu(\eta)\Big|\,\frac{ds}{s}\nonumber \\ &\le&\int_0^1 d|\mu|(\eta)\int_\eta^1 \frac{ds}{s}+\int_1^{\infty}d|\mu|(\eta)\int_1^\eta \frac{ds}{s}\nonumber \\ &=&\int_0^\infty |\log \eta|\, d|\mu|(\eta) <\infty. \nonumber \end{eqnarray} Similarly, we have $$\int_0^\infty k(s)\, ds=\int_0^\infty \log \frac{1}{\eta} \, d\mu (\eta)=c_\mu,$$ which gives (\ref{ka}). Thus, to complete the proof, it remains to justify the application of Fubini's theorem leading to (\ref{eq:3100}). To this end, it suffices to show that the repeated integral $$ \int_\varepsilon^\infty \frac{dt}{t}\int_0^\infty |Q_{t\eta} f(x)|\,d|\mu| (\eta)$$ is finite for almost all $x\in {\Bbb R}^n$. We write it as $A(x)+B(x)$, where $$ A(x)=\int_\varepsilon^\infty \frac{dt}{t}\int_0^{1/t}|Q_{t\eta} f(x)|\,d|\mu| (\eta), \quad B(x)=\int_\varepsilon^\infty \frac{dt}{t}\int_{1/t}^\infty|Q_{t\eta} f(x)|\,d|\mu| (\eta).$$ Since the least radial decreasing majorant of $q$ is integrable (see property (b) in Definition \ref{d1}), we have $\sup_{t>0}|Q_{t} f(x)|\le c\, M_f (x)$, where $M_f (x)$ is the Hardy-Littlewood maximal function, which is finite for almost all $x$; see, e.g., \cite[Theorem 2, Section 2, Chapter III]{St}. Hence, for almost all $x$, $$ A(x)\le c\,M_f (x)\int_\varepsilon^\infty \frac{dt}{t}\int_0^{1/t}d|\mu| (\eta)= c\,M_f (x)\int_0^{1/\varepsilon}\big(\log \frac{1}{\eta}-\log \varepsilon\big)\, d|\mu| (\eta)<\infty.$$ To estimate $B(x)$, we observe that since $q\in L_r$, $r>1$, Young's inequality yields $$ ||Q_t f||_s \le ||f||_p\, ||q_t||_r=t^{-\delta}||f||_p\, ||q||_r, \qquad \delta=n(1-1/r)>0, \quad \frac{1}{s}=\frac{1}{r}+\frac{1}{p}-1.$$ This gives $$\Big\|\int_{1/t}^\infty|Q_{t\eta} f(\cdot)|\,d|\mu| (\eta)\Big\|_s \le t^{-\delta}||f||_p\,\int_{1/t}^\infty \eta^{-\delta}\,d|\mu| (\eta),$$ and therefore, \begin{eqnarray} ||B||_s&\le&||f||_p\,\int_\varepsilon^\infty \frac{dt}{t^{1+\delta}}\int_{1/t}^\infty\eta^{-\delta}\,d|\mu| (\eta)\nonumber\\ &=&\frac{||f||_p}{\delta}\Big(\, \int_0^{1/\varepsilon}d|\mu|(\eta)+\frac{1}{\varepsilon^\delta}\int_{1/\varepsilon}^\infty \eta^{-\delta}\, d|\mu|(\eta)\Big)\le \frac{||f||_p\, ||\mu||}{\delta}<\infty. \nonumber \end{eqnarray} This completes the proof. \end{proof} \section{Wavelet Transforms Associated to One-parametric Semigroups and Inversion of Potentials} In this section we consider an important subclass of wavelet transforms, generated by certain one-parametric semigroups of operators. Some composite wavelet transforms from the previous section belong to this subclass.
\subsection{Basic examples}
\begin{example}\label{e1} Consider the {\it Poisson semigroup} $\mathcal{P}_{t}$ generated by the Poisson integral
\begin{equation} \mathcal{P}_{t}f(x)=\int_{\mathbb{R}^{n}}p(y,t)f(x-y)\,dy, \qquad t>0, \label{1.1} \end{equation}
with the Poisson kernel
\begin{equation} p(y,t)=\frac{\Gamma \left( (n+1)/2\right) }{ \pi ^{(n+1)/2}}\frac{t}{(t^{2}+\left| y\right| ^{2})^{(n+1)/2}}=t^{-n}p(y/t, 1); \label{1.2} \end{equation}
see \cite{31}, \cite{St}. In this specific case, the kernel function of the relevant composite wavelet transform is $q(x)\equiv p(x, 1)$, and on the Fourier transform side we have
\begin{equation}\label{puf} F[\mathcal{P}_{t}f](\xi)=e^{-t\left| \xi\right| } Ff(\xi).\end{equation}
\end{example}

\begin{example} Another important example is the {\it Gauss-Weierstrass semigroup} $\mathcal{W}_{t}$ defined by
\begin{equation} \mathcal{W}_{t}f(x)=\int_{\mathbb{R}^{n}}w(y,t)f(x-y)\,dy,\qquad F[w(\cdot ,t)](\xi )=e^{-t|\xi |^{2}}, \quad t>0; \label{2.4} \end{equation}
see \cite{31}. The Gauss-Weierstrass kernel $w(y,t)$ is explicitly computed as
\begin{equation} w(y,t)=(4\pi t)^{-n/2}\exp (-\left| y\right| ^{2}/4t).\end{equation}
In comparison with (\ref{qu}), here the scaling parameter $t$ is replaced by $\sqrt{t}$, so that
\begin{equation}\label{scp2}w(y,t)=(\sqrt{t})^{-n}q (y/\sqrt{t}), \qquad q (y)=w (y, 1)=(4\pi)^{-n/2}\exp (-\left| y\right| ^{2}/4), \end{equation}
and the corresponding wavelet transform has the form
\begin{equation} \label{GWtr} Wf(x,t)=\int_0^\infty \mathcal{W}_{t\eta} f(x)\, e^{-at\eta}\, d\mu(\eta), \qquad x \in \mathbb{R}^n, \;\; t>0, \;\; a\ge 0.\end{equation}
This agrees with (\ref{cwtrw}) up to an obvious change of scaling parameters.
\end{example}

\begin{example} The following interesting example does not fall into the scope of the wavelet transforms in Section 2; however, it is of a very similar nature. Consider the {\it metaharmonic semigroup} $\mathcal{M}_t$ defined by
\begin{equation}\label{sg3}(\mathcal{M}_t f)(x) = \int\limits_{\mathbb{R}^n} m(y,t)f(x - y)\,dy, \qquad F[m(\cdot,t)](\xi) = e^{-t\sqrt{1+|\xi|^2}};\end{equation}
see \cite[p. 257-258]{21}. The corresponding kernel has the form
\begin{equation} m (y,t)=\frac{2t}{(2\pi)^{(n+1)/2}}\, \frac{K_{(n+1)/2} (\sqrt{|y|^2 +t^2})}{(\sqrt{|y|^2 +t^2})^{(n+1)/2}},\end{equation}
where $K_{(n+1)/2}(\cdot)$ is the Macdonald function. The relevant wavelet transform is
\begin{equation} \label{mh} Wf(x,t)=\int_0^\infty \mathcal{M}_{t\eta} f(x)\, d\mu(\eta), \qquad x \in \mathbb{R}^n, \;\; t>0.\end{equation}
\end{example}
This list of examples can be continued \cite{8}.
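For orientation, we record a routine check which is not spelled out above: on the Fourier transform side, each of these families is indeed a semigroup. For instance, (\ref{puf}) gives
$$ F[\mathcal{P}_{t}(\mathcal{P}_{s}f)](\xi)=e^{-t|\xi|}\,e^{-s|\xi|}\,Ff(\xi)=e^{-(t+s)|\xi|}\,Ff(\xi)=F[\mathcal{P}_{t+s}f](\xi), $$
so that $\mathcal{P}_{t}\mathcal{P}_{s}=\mathcal{P}_{t+s}$; the same argument applies to $\mathcal{W}_{t}$ and $\mathcal{M}_{t}$ with the multipliers $e^{-t|\xi|^{2}}$ and $e^{-t\sqrt{1+|\xi|^{2}}}$, respectively.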
\subsection{Operators of the potential type} One of the most remarkable applications of wavelet transforms associated to the Poisson, Gauss-Weierstrass, and metaharmonic semigroups is that they pave the way to a series of explicit inversion formulas for operators of the potential type arising in analysis and mathematical physics. Typical examples of such operators are the following:
\begin{eqnarray} \qquad I^{\alpha }f &=&F^{-1}\left| \xi \right| ^{-\alpha }Ff\equiv (-\Delta )^{-\alpha /2}f \quad \text{\rm (Riesz potentials)}, \label{1.8} \\
\qquad J^{\alpha }f &=&F^{-1}(1+\left| \xi \right| ^{2})^{-\alpha /2}Ff\equiv (E-\Delta )^{-\alpha /2}f \quad \text{\rm (Bessel potentials)}, \label{1.9} \\
\qquad \mathcal{F}^{\alpha }f &=&F^{-1}(1+\left| \xi \right| )^{-\alpha }Ff\equiv (E+\sqrt{-\Delta })^{-\alpha }f \quad \text{\rm (Flett potentials)}.\label{1.10} \end{eqnarray}
Here $Re\,\alpha >0$, $\left| \xi \right| =\left( \xi _{1}^{2}+\cdots +\xi _{n}^{2}\right) ^{1/2}$, $\Delta =\sum_{k=1}^{n}\frac{\partial ^{2}}{\partial x_{k}^{2}}$ is the Laplacian, and $E$ is the identity operator.
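On the Fourier transform side, the multipliers of these potentials arise from elementary Gamma-integrals; we include two instances as a sketch. For $Re\,\alpha>0$,
$$ \frac{1}{\Gamma(\alpha)}\int_{0}^{\infty}t^{\alpha-1}e^{-t|\xi|}\,dt=|\xi|^{-\alpha} \quad (\xi\neq 0), \qquad \frac{1}{\Gamma(\alpha)}\int_{0}^{\infty}t^{\alpha-1}e^{-t}e^{-t|\xi|}\,dt=(1+|\xi|)^{-\alpha}, $$
which are exactly the multipliers in (\ref{1.8}) and (\ref{1.10}). This simple observation underlies the semigroup representations of the potentials.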
For $f\in L_{p}(\mathbb{R}^{n})$, $1\leq p<\infty$, these potentials have remarkable integral representations via the Poisson and Gauss-Weierstrass semigroups, namely,
\begin{eqnarray} \label{pot1}I^{\alpha }f(x)&=&\frac{1}{\Gamma (\alpha )}\int_{0}^{\infty }t^{\alpha -1}\mathcal{P}_{t}f(x)\,dt,\qquad 0<Re\,\alpha <n/p\,;\\
\label{pot2}J^\alpha f(x) &=& \frac{1}{\Gamma (\alpha/2 )} \int_0^\infty t^{\alpha/2-1}e^{-t}\,\mathcal{W}_t f(x) \,dt,\qquad 0<Re\,\alpha <\infty ;\\
\label{fla}\mathcal{F}^{\alpha }f(x)&=&\frac{1}{\Gamma (\alpha )}\int_{0}^{\infty }t^{\alpha -1}e^{-t}\mathcal{P}_{t}f(x)\,dt,\qquad 0<Re\,\alpha <\infty ;\end{eqnarray}
see \cite{30}, \cite{21}, \cite{12}. Regarding Flett potentials, see, in particular, \cite[p. 446-447]{12}, \cite[p. 541-542]{28}, \cite{9}. We also mention another interesting representation of the Bessel potential, which is due to Lizorkin \cite{19} and employs the metaharmonic semigroup, namely,
\begin{equation} J^{\alpha }f(x)=\frac{1}{\Gamma (\alpha )}\int_{0}^{\infty }t^{\alpha -1}\mathcal{M}_{t}f(x)\,dt,\qquad 0<Re\,\alpha <\infty . \label{1.13} \end{equation}
Equalities (\ref{pot1})-(\ref{1.13}) have the same nature as the classical Balakrishnan formulas for fractional powers of operators (see \cite[p. 121]{28}). Let us show how these equalities generate wavelet inversion formulas for the corresponding potentials. The core of the method is the following statement, which is a particular case of Lemma 1.3 from \cite{23}.
\begin{lemma}\label{lB} Given a finite Borel measure $\mu $ on $[0,\infty )$ and a complex number $\alpha$, $\alpha'=Re\,\alpha \geq 0$, let
\begin{equation} \lambda _{\alpha }(s)=s^{-1} I_+^{\alpha +1}\mu (s), \label{1.15} \end{equation}
where
\begin{equation} I_+^{\alpha +1}\mu(s)=\frac{1}{\Gamma (\alpha +1)} \int_{0}^{s}(s-\eta )^{\alpha }\,d\mu (\eta ) \label{1.16} \end{equation}
is the Riemann-Liouville fractional integral of order $\alpha +1$ of the measure $\mu$. Suppose that $\mu$ satisfies the following conditions:
\begin{eqnarray} \label{con1}&{}& \int_{1}^{\infty }\eta^{\gamma }\,d|\mu |(\eta )<\infty \ \ \textit{for some} \ \gamma >\alpha' ; \\
\label{con2}&{}& \int_{0}^{\infty }\eta^{j}\,d\mu (\eta)=0 \ \ \forall j=0,1,\ldots ,[\alpha'] \ \textit{(the integer part of }\alpha'\textit{)}. \end{eqnarray}
Then
\begin{eqnarray}\label{impo} \lambda _{\alpha }(s)=\left\{ \begin{array}{lcl} O(s^{\alpha'-1}), & \text{if} & 0<s<1,\\ O(s ^{-1-\delta})\ \text{for some $\delta >0$}, & \text{if} & s>1, \end{array} \right. \end{eqnarray}
and
\begin{eqnarray} c_{\alpha,\mu } &\equiv &\int_{0}^{\infty }\lambda _{\alpha }(s)\,ds=\int_{0}^{\infty }\frac{\tilde{\mu}(t)}{t^{\alpha +1}}\,dt \notag \\ {}\nonumber\\
&=&\label{1.17n}\left\{ \begin{array}{lcl}\displaystyle{ \Gamma (-\alpha )\int_{0}^{\infty }\eta^{\alpha }\,d\mu (\eta)} & \text{if} & \alpha \notin \mathbb{N}_{0}=\{0,1,2,\ldots \}, \\ {}\\
\displaystyle{\frac{(-1)^{\alpha +1}}{\alpha !}\int_{0}^{\infty }\eta^{\alpha }\log \eta \, d\mu (\eta)} & \text{if} & \alpha \in \mathbb{N}_{0}, \end{array} \right. \end{eqnarray}
where $\tilde{\mu}(t)=\int_{0}^{\infty }e^{-t\eta}\,d\mu (\eta)$ is the Laplace transform of $\mu$. \end{lemma}

The estimate (\ref{impo}) is important in proving almost everywhere convergence in the forthcoming inversion formulas. Consider, for example, the Flett potential (\ref{1.10}), (\ref{fla}), and make use of the composite wavelet transform
\begin{equation} \label{pwt} W\varphi (x,t)=\int_0^\infty \mathcal{P}_{t\eta}\varphi(x)\,e^{-t\eta}\,d\mu (\eta),\end{equation}
cf. Example \ref{e1} and (\ref{cwtrw}) with $a=1$.

\begin{theorem} \label{t1.5} Let $f\in L_{p}$, $1\leq p\leq \infty$, and let $\varphi =\mathcal{F}^{\alpha }f$, $\alpha >0$, be the Flett potential of $f$. Suppose that $\mu$ is a finite Borel measure on $[0,\infty )$ satisfying (\ref{con1}) and (\ref{con2}). Then
\begin{equation} \int_{0}^{\infty }W\varphi (x,t )\,\frac{dt }{t ^{1+\alpha }}\equiv \lim_{\varepsilon \rightarrow 0}\int_{\varepsilon }^{\infty }W\varphi (x,t )\,\frac{dt }{t^{1+\alpha }} =c_{\alpha ,\mu }f(x), \label{1.20} \end{equation}
where $c_{\alpha ,\mu }$ is defined by (\ref{1.17n}) and the limit is interpreted in the $L_{p}$-norm and pointwise a.e. on $\mathbb{R}^n$. If $f\in C_{0}$, the statement remains true with the limit in (\ref{1.20}) interpreted in the sup-norm. \end{theorem}

\begin{proof} We sketch the proof and refer the reader to \cite{9} for details. Changing the order of integration, owing to (\ref{pwt}), (\ref{fla}), and the semigroup property of the Poisson integral, we get
\begin{equation} W\varphi (x,t )=\frac{1}{\Gamma (\alpha )}\int_{0}^{\infty }d\mu (\eta)\int_{t\eta}^{\infty }(\rho -t\eta )_{+}^{\alpha -1}\,e^{-\rho }\,\mathcal{P}_{\rho }f(x)\,d\rho . \label{1.21}\end{equation}
Then further calculations give
\begin{equation} \label{kuku}\int_{\varepsilon }^{\infty }W\varphi (x,t )\frac{dt }{t^{1+\alpha }}= \int_{0}^{\infty }e^{-\varepsilon s }\mathcal{P}_{\varepsilon s }f(x)\,\lambda _{\alpha}(s) \,ds, \qquad \lambda _{\alpha}\left(s \right)= s^{-1}I_+^{\alpha +1} \mu (s),\end{equation}
cf. (\ref{1.15}). It remains to apply Lemma \ref{lB} combined with the standard machinery of approximation to the identity. \end{proof}

Potentials (\ref{1.8})-(\ref{1.10}) and many others can be similarly inverted by making use of the wavelet transforms associated with suitable semigroups; see \cite{8}, \cite{9}.

\subsection{Examples of wavelet measures} Examples of wavelet measures that obey the conditions of Lemma \ref{lB} with $c_{\alpha ,\mu }\neq 0$ are the following.

1.
Fix an integer $m>Re \,\alpha$ and choose an even Schwartz function $h(\eta)$ on $\mathbb{R}^{1}$ so that
$$h^{\left( k\right) }(0)=0 \quad \forall \,k=0,1,2,\ldots,\quad \text{\rm and}\quad \int_{0}^{\infty }\eta^{\alpha -m}h\left( \eta \right) d\eta \neq 0.$$
One can take, for instance, $h\left( \eta \right) =\exp \left( -\eta ^{2}-1/\eta ^{2}\right)$, $h\left( 0\right) =0$. Set $d\mu \left( \eta \right) =h^{\left( m\right) }\left( \eta \right) d\eta$. It is not difficult to show that $\int_{0}^{\infty }\eta ^{k}d\mu \left( \eta \right) =0$ for all $k=0,1,\ldots,[Re \,\alpha]$, and $c_{\alpha,\mu }\neq 0$.

2. Let $\mu =\sum\limits_{j=0}^{m}\binom{m}{j}\left( -1\right) ^{j}\delta _{j}$, where $m>Re \,\alpha$ is a fixed integer and $\delta _{j}=\delta _{j}\left( \eta \right)$ denotes the unit mass at the point $\eta =j$, i.e., $\left\langle \delta _{j},f\right\rangle =f(j)$. It is known \cite[p. 117]{28} that
\begin{equation*} \int_{0}^{\infty }\eta ^{k}d\mu \left( \eta \right) \equiv \sum\limits_{j=0}^{m}\binom{m}{j}\left( -1\right) ^{j}j^{k}=0, \quad \forall \ k=0,1,\ldots,m-1 \quad \mathrm{(we\ set\ }0^{0}=1\mathrm{)}. \end{equation*}
Moreover, $c_{\alpha ,\mu }=\int_{0}^{\infty }t^{-\alpha -1}\left( 1-e^{-t}\right) ^{m}dt\neq 0$.

\section{Wavelet transforms with the generalized translation operator} Continuous wavelet transforms, studied in the previous sections, rely on the classical Fourier analysis on $\mathbb{R}^n$.
Interesting modifications of these transforms and the corresponding potential operators arise in the framework of the Fourier-Bessel harmonic analysis associated to the Laplace-Bessel differential operator
\begin{equation} \label{fb} \Delta_\nu =\sum\limits_{k=1}^{n} \frac{\partial^2}{\partial x_k^2}+\frac{2\nu}{x_n}\frac{\partial}{\partial x_n}\ , \qquad \nu>0.\end{equation}
This analysis goes back to the pioneering works of Delsarte \cite{De} and Levitan \cite{18}, and was extensively developed in subsequent publications; see \cite{17}, \cite{32}, \cite{bh}, and references therein. Let $\mathbb{R}^n_+=\{x: \, x=(x_1,\ldots , x_n)\in \mathbb{R}^n, \ x_n > 0\}$ and $x'=(x_1,\ldots , x_{n-1})$. Denote
$$ L_{p,\nu}(\mathbb{R}^n_+)=\Big \{f: ||f||_{p, \nu}=\Big ( \int_{\mathbb{R}^n_+}\left| f(x)\right| ^{p}x_n^{2\nu}dx\Big )^{1/p}<\infty \Big \}. $$
The Fourier-Bessel harmonic analysis is adapted to {\it the generalized convolutions}
\begin{equation}\label{conv} (f \ast g)(x)=\int_{\mathbb{R}^n_+} f(y)(T^y g)(x) \, y_n^{2\nu}dy, \qquad x \in \mathbb{R}^n_+ ,\end{equation}
with the {\it generalized translation operator}
\begin{equation}\label{gtop} (T^yf)(x)=\frac{\Gamma(\nu +1/2)}{\Gamma(\nu)\Gamma(1/2)}\int_0^\pi \!\! f(x'-y',\sqrt{x_n^2-2x_ny_n\cos \alpha +y_n^2})\,\sin^{2\nu-1}\alpha \,d\alpha; \end{equation}
see \cite{17}, \cite{18}, \cite{32}. The Fourier-Bessel transform $F_\nu$, for which $F_{\nu }\left( f\ast g\right) =F_{\nu }\left( f\right) F_{\nu }(g)$, is defined by
\begin{equation}\label{eq:211} (F_\nu f)(\xi)=\int_{\mathbb{R}^n_+} f (x)\,e^{i\xi' \cdot x'}j_{\nu-1/2}(\xi_n x_n) \, x_n^{2\nu} dx, \qquad \xi \in \mathbb{R}^n_+. \end{equation}
Here $j_\lambda (\tau)=2^\lambda\Gamma (\lambda+1) \, \tau^{-\lambda} J_\lambda(\tau)$, where $J_\lambda(\tau)$ is the Bessel function of the first kind. The {\it generalized Gauss-Weierstrass, Poisson, and metaharmonic semigroups} $\{\mathcal{W}_t^{(\nu)}\}$, $\{\mathcal{P}_t^{(\nu)}\}$, $\{\mathcal{M}_t^{(\nu)}\}$ are defined as follows:
\begin{eqnarray} \label{k11}(\mathcal{W}_t^{(\nu)}f)(x)&=&\int\limits_{\mathbb{R}^n_+}w^{(\nu)}(y,t)(T^yf)(x) \, y_n^{2\nu}dy,\\
&& \hskip -1.5truecm F_\nu [w^{(\nu)}(\cdot,t)](\xi) =e^{-t|\xi|^2}; \nonumber \\
\label{k21}(\mathcal{P}_t^{(\nu)}f)(x)&=&\int\limits_{\mathbb{R}^n_+}p^{(\nu)}(y,t)(T^yf)(x) \, y_n^{2\nu}dy, \\
&& \hskip -1.5truecm F_\nu [p^{(\nu)}(\cdot,t)](\xi) =e^{-t|\xi|}; \nonumber \\
\label{k31} (\mathcal{M}_t^{(\nu)}f)(x)&=& \int\limits_{\mathbb{R}^n_+}m^{(\nu)}(y,t)(T^yf)(x) \, y_n^{2\nu}dy, \\
&& \hskip -1.5truecm F_\nu [m^{(\nu)}(\cdot,t)](\xi) =e^{-t\sqrt{1+|\xi|^2}}. \nonumber \end{eqnarray}
The corresponding kernels $w^{(\nu)}(y,t)$, $p^{(\nu)}(y,t)$, and $m^{(\nu)}(y,t)$ have the form
\begin{eqnarray} \label{nu1} w^{(\nu)}(y,t)&=& \frac{2\pi^{\nu +1/2}}{\Gamma(\nu +1/2)} \, (4\pi t)^{-(n+2\nu)/2} e^{-|y|^2/4t},\\
\label{nu2} p^{(\nu)}(y,t)&=& \frac{2 \, \Gamma\big((n+2\nu+1)/2\big)}{\pi^{n/2}\Gamma (\nu+1/2)}\, \frac{t}{(|y|^2 +t^2)^{(n+2\nu +1)/2}},\\
\label{nu3} m^{(\nu)}(y,t)&=& \frac{ 2^{-\nu+3/2}t}{\Gamma (\nu+1/2)(2\pi)^{n/2}}\, \frac{K_{(n+2\nu +1)/2} (\sqrt{|y|^2 +t^2})}{(\sqrt{|y|^2 +t^2})^{(n+2\nu +1)/2}}.\end{eqnarray}
More information about these semigroups and their modifications
$$\{e^{-t}\mathcal{W}^{(\nu)}_t\}, \qquad \{e^{-t}\mathcal{P}^{(\nu)}_t\}, \qquad \{ e^{-t}\mathcal{M}^{(\nu)}_t\},$$
can be found in \cite{2}, \cite{3}, \cite{13}.
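Since $F_\nu$ takes the generalized convolution (\ref{conv}) to a pointwise product, the defining relations above immediately yield the semigroup property; e.g., by (\ref{k11}),
$$ F_\nu[\mathcal{W}^{(\nu)}_{t}f](\xi)=e^{-t|\xi|^{2}}\,F_\nu f(\xi), \qquad \text{whence} \qquad \mathcal{W}^{(\nu)}_{t}\mathcal{W}^{(\nu)}_{s}=\mathcal{W}^{(\nu)}_{t+s}, $$
exactly as in the classical case. This routine check is included here only for the reader's orientation.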
Modified Riesz, Bessel, and Flett potentials with the generalized translation operator (\ref{gtop}) are formally defined in terms of the Fourier-Bessel transform by
\begin{eqnarray} I_{\nu }^{\alpha }f &=&F_{\nu }^{-1}\left| \xi \right| ^{-\alpha }F_{\nu }f\equiv \left( -\Delta _{\nu }\right) ^{-\alpha /2}f, \label{2.15} \\
J_{\nu }^{\alpha }f &=&F_{\nu }^{-1}(1+\left| \xi \right| ^{2})^{-\alpha /2}F_{\nu }f\equiv \left( E-\Delta _{\nu }\right) ^{-\alpha /2}f, \label{2.17}\\
\mathcal{F}_{\nu }^{\alpha }f &=&F_{\nu }^{-1}(1+\left| \xi \right| )^{-\alpha }F_{\nu }f\equiv \left( E+\sqrt{-\Delta _{\nu }}\right) ^{-\alpha }f, \label{2.16} \end{eqnarray}
respectively. Here $Re\,\alpha >0$ and $\Delta _{\nu }$ is the Laplace-Bessel differential operator (\ref{fb}). These generalized potentials admit representations analogous to (\ref{pot1})-(\ref{fla}) in terms of the semigroups (\ref{k11})-(\ref{k31}); namely, if $f \in L_{p,\nu}(\mathbb{R}^n_+)$, then
\begin{eqnarray} \label{pot1n}I_{\nu }^{\alpha }f(x)&=&\frac{1}{\Gamma (\alpha )}\int_{0}^{\infty }t^{\alpha -1}\mathcal{P}_{t}^{(\nu)}f(x)\,dt,\qquad 0<Re\,\alpha <(n+2\nu)/p\,,\\
J_{\nu }^\alpha f(x) &=& \frac{1}{\Gamma (\alpha/2 )} \int_0^\infty t^{\alpha/2-1}e^{-t}\,\mathcal{W}_t^{(\nu)} f(x) \,dt,\qquad 0<Re\,\alpha <\infty ,\\
\label{flan}\mathcal{F}_{\nu }^{\alpha }f(x)&=&\frac{1}{\Gamma (\alpha )}\int_{0}^{\infty }t^{\alpha -1}e^{-t}\mathcal{P}_{t}^{(\nu)}f(x)\,dt,\qquad 0<Re\,\alpha <\infty .\end{eqnarray}
Moreover,
\begin{equation} J_{\nu }^{\alpha }f(x)=\frac{1}{\Gamma (\alpha )}\int_{0}^{\infty }t^{\alpha -1}\mathcal{M}_{t}^{(\nu)}f(x)\,dt,\qquad 0<Re\,\alpha <\infty . \label{1.13n} \end{equation}
We denote by $S_{t}^{(\nu)}$ any of the semigroups
\begin{equation}\label{sgr} \mathcal{W}_t^{(\nu)}, \;\; e^{-t}\mathcal{W}_t^{(\nu)}, \;\; \mathcal{P}_{t}^{(\nu)}, \;\; e^{-t}\mathcal{P}^{(\nu )}_t, \;\; \mathcal{M}_{t}^{(\nu)}, \;\; e^{-t}\mathcal{M}^{(\nu)}_t,\end{equation}
and define the relevant wavelet transform (cf. (\ref{cwtr}))
\begin{equation} \mathfrak{S}^{(\nu)}f(x,t)=\int_{0}^{\infty }S_{t\eta }^{(\nu)}\,f(x)\, d\mu (\eta ), \qquad t>0, \label{2.10} \end{equation}
generated by a finite Borel measure $\mu$ on $[0,\infty )$. There exist analogs of Calder\'{o}n's reproducing formula for the wavelet transforms (\ref{2.10}) of functions belonging to the weighted space $L_{p,\nu}(\mathbb{R}^n_+)$, and inversion formulas for the potentials $I_{\nu }^{\alpha }f$, $J_{\nu }^{\alpha }f$, $\mathcal{F}_{\nu }^{\alpha }f$, when $f\in L_{p,\nu}(\mathbb{R}^n_+)$. For example, the following statement holds.

\begin{theorem} \label{t2.6} Let $\varphi =I_{\nu }^{\alpha }f$, $f\in L_{p,\nu}(\mathbb{R}^n_+)$, $1\leq p<\left( n+2\nu \right) /\alpha$, and suppose that $\mu$ is a finite Borel measure on $[0,\infty )$ satisfying (\ref{con1}) and (\ref{con2}). If $\mathfrak{S}^{(\nu)}\varphi$ is the wavelet transform of $\varphi$ associated with the generalized Poisson semigroup $\mathcal{P}_{t}^{(\nu)}$, then
\begin{equation} \int_{0}^{\infty }\frac{\mathfrak{S}^{(\nu)}\varphi (x,t)}{t^{1+\alpha }}dt=\lim_{\varepsilon \rightarrow 0}\int_{\varepsilon }^{\infty } \frac{\mathfrak{S}^{(\nu)}\varphi \left( x,t\right) }{t^{1+\alpha }} dt=c_{\alpha ,\mu }f(x), \label{2.19} \end{equation}
where $c_{\alpha ,\mu }$ is defined by (\ref{1.17n}). The limit in (\ref{2.19}) exists in the $L_{p,\nu}(\mathbb{R}^n_+)$-norm and in the a.e. sense. If $f\in C_{0}$, the convergence in (\ref{2.19}) is uniform.
\end{theorem}

The proof of this theorem is presented in \cite{8} in the general context of the so-called admissible semigroups. This context includes all the semigroups (\ref{sgr}).

\section{Beta-semigroups} We recall the basic formulas from Section 3.1 for the kernels of the Poisson and Gauss-Weierstrass semigroups:
\begin{equation}\label{beta1} F[p(\cdot ,t)](\xi )=e^{-t\left| \xi\right| }, \qquad p(y,t)=\frac{\Gamma \left( (n+1)/2\right) }{ \pi ^{(n+1)/2}}\frac{t}{(t^{2}+\left| y\right| ^{2})^{(n+1)/2}};\end{equation}
\begin{equation} \label{beta2} F[w(\cdot ,t)](\xi )=e^{-t|\xi |^{2}}, \qquad w(y,t)=(4\pi t)^{-n/2}\exp (-\left| y\right| ^{2}/4t).\end{equation}
It is natural to consider a more general semigroup generated by the kernel $w^{(\beta)}(y,t)$ defined by
\begin{equation} \label{beta} F[w^{(\beta)}(\cdot ,t)](\xi )=e^{-t|\xi |^{\beta}}, \qquad \beta>0.\end{equation}
This semigroup arises in diverse contexts of analysis, integral geometry, and probability; see, e.g., \cite{Fe}, \cite{Ko}, \cite{La}, \cite{R8}. Unlike (\ref{beta1}) and (\ref{beta2}), the kernel function $w^{(\beta)}(y,t)$ cannot in general be computed explicitly; however, taking into account that
\begin{equation} w^{(\beta)}(y,t) =t^{-n/\beta } w^{(\beta)}( t^{-1/\beta }y),\qquad w^{(\beta)}(y)\equiv w^{(\beta)}(y,1), \end{equation}
the properties of $w^{(\beta)}(y,t)$ are well determined by the following lemma.
\begin{lemma}\label{lb} The function
\begin{equation}\label{gaql} w^{(\beta)}(y) =F^{-1}[e^{-|\cdot|^\beta}](y)=(2\pi)^{-n} \int_{\mathbb{R}^n}e^{-|\xi|^\beta} e^{i y\cdot \xi}\, d\xi, \qquad \beta>0,\end{equation}
is uniformly continuous on $\mathbb{R}^n$. If $\beta$ is an even integer, then $w^{(\beta)}(y)$ is infinitely smooth and rapidly decreasing. More generally, if $\beta\neq 2,4,\ldots$, then $w^{(\beta)}(y)$ has the following behavior as $|y| \to \infty$:
\begin{equation}\label{cbe} w^{(\beta)}(y) =c_\beta |y|^{-n-\beta} (1+o(1)), \qquad c_\beta=-\frac{2^{\beta}\pi^{-n/2} \Gamma ((n+\beta)/2)}{ \Gamma (-\beta/2)}.\end{equation}
If $0<\beta\le 2$, then $w^{(\beta)}(y)>0$ for all $y \in \mathbb{R}^n$. \end{lemma}

\begin{proof} (Cf. \cite[p. 44, for $n=1$]{Ko}). The uniform continuity of $w^{(\beta)}(y)$ follows immediately from (\ref{gaql}). Note that if $\beta$ is an even integer, then $e^{-|\cdot|^\beta}$ is a Schwartz function and therefore $w^{(\beta)}(y)$ is infinitely smooth and rapidly decreasing. Let us prove the positivity of $w^{(\beta)}(y)$ when $0<\beta\le 2$. For $y=0$ and for the cases $\beta=1$ and $\beta=2$, this is obvious. Let $0<\beta< 2$. By Bernstein's theorem \cite[Chapter 18, Sec. 4]{Fel}, there is a non-negative finite measure $\mu_\beta$ on $[0,\infty)$ such that $ e^{-z^{\beta/2}}=\int_0^\infty e^{-tz}\,d\mu_\beta (t)$, $z\in [0,\infty)$. Replace $z$ by $|\xi|^2$ to get
\begin{equation}\label{751} e^{-|\xi|^{\beta}}=\int_0^\infty e^{-t|\xi|^2}\,d\mu_\beta (t).\end{equation}
Then the equality
\begin{equation}\label{75} [e^{-t|\cdot\,|^2}]^{\wedge}(y)=\pi^{n/2}t^{-n/2}e^{-|y|^2/4t}, \qquad t>0,\end{equation}
yields
\begin{eqnarray} (2\pi)^{n}\,w^{(\beta)}(y)&=&\int_{\mathbb{R}^n}e^{i \xi\cdot y}d\xi\int_0^\infty e^{-t|\xi|^2}\,d\mu_\beta (t)= \int_0^\infty d\mu_\beta (t)\int_{\mathbb{R}^n}e^{i \xi\cdot y} e^{-t|\xi|^2}\,d\xi\nonumber\\
&=& \pi^{n/2}\int_0^\infty t^{-n/2}e^{-|y|^2/4t}\,d\mu_\beta (t)>0.\nonumber\end{eqnarray}
The Fubini theorem is applicable here because, by (\ref{751}),
$$ \int_{\mathbb{R}^n}|e^{i \xi\cdot y}|\,d\xi\int_0^\infty e^{-t|\xi|^2}\,d\mu_\beta (t)=\int_{\mathbb{R}^n} e^{-|\xi|^{\beta}}d\xi<\infty.$$
Let us prove (\ref{cbe}). It suffices to show that
\begin{equation}\label{73p} \lim\limits_{|y| \to \infty}|y|^{n +\beta}w^{(\beta)}(y)=2^{\beta}\pi^{-n/2-1}\Gamma (1+\beta/2)\Gamma ((n+\beta)/2)\, \sin (\pi \beta/2)\end{equation}
(we leave it to the reader to check that the right-hand side coincides with $c_\beta$). For $n=1$, this statement can be found in \cite[Chapter 3, Problem 154]{PS} and in \cite[p. 45]{Ko}. In the general case, the proof is more sophisticated and relies on the properties of Bessel functions.
By the well-known formula for the Fourier transform of a radial function (see, e.g., \cite{31}), we write $(2\pi)^{n}\,w^{(\beta)}(y)=I(|y|)$, where
\begin{eqnarray} I(s)&=&(2\pi)^{n/2}s^{1-n/2}\int_0^\infty e^{-r^{\beta}} r^{n/2} J_{{n/2-1}} (rs)\, dr\nonumber\\
&=&(2\pi)^{n/2}s^{-n}\int_0^\infty e^{-r^{\beta}} \frac{d}{dr}\,[(rs)^{n/2} J_{{n/2}} (rs)]\, dr.\nonumber\end{eqnarray}
Integration by parts yields
$$ I(s)=\beta(2\pi)^{n/2}s^{-n/2}\int_0^\infty e^{-r^{\beta}} r^{n/2+\beta-1} J_{{n/2}} (rs)\, dr.$$
Changing the variable $z=s^\beta r^\beta$, we obtain
$$ s^{n +\beta}I(s)=(2\pi)^{n/2} A(s^{-\beta}), \qquad A(\delta)= \int_0^\infty e^{-z\delta} z^{n/(2\beta)} J_{n/2} (z^{1/\beta})\,dz. $$
We actually have to compute the limit $A_0=\lim\limits_{\delta \to 0} A(\delta)$. To this end, we invoke the Hankel functions $H_\nu^{(1)} (z)$, so that $ J_\nu (z)=Re \,H_\nu^{(1)} (z)$ for real $z$ \cite{Er}. Let $h_\nu (z)=z^\nu H_\nu^{(1)} (z)$. This is a single-valued analytic function in the $z$-plane with the cut $(-\infty, 0]$. Using the properties of the Bessel functions \cite{Er}, we get
\begin{equation}\label{as} \lim\limits_{z \to 0}h_\nu (z)=2^\nu \Gamma (\nu)/\pi i,\end{equation}
\begin{equation}\label{as1} h_\nu (z) \sim \sqrt{2/\pi} \, z^{\nu -1/2}e^{iz-\frac{\pi i }{2}(\nu +\frac{1}{2})}, \qquad z \to \infty.\end{equation}
Then we write $A(\delta)$ as $ A(\delta)= Re \,\int_0^\infty e^{-z\delta} h_{n/2} (z^{1/\beta})\,dz$ and change the line of integration from $[0,\infty)$ to $n_\theta=\{z: z=re^{i\theta}, \; r>0\}$ for a small $\theta<\pi \beta/2$. By Cauchy's theorem, owing to (\ref{as}) and (\ref{as1}), we obtain $ A(\delta)= Re \,\int_{n_\theta} e^{-z\delta} h_{n/2} (z^{1/\beta})\,dz$.
Since for $z=re^{i\theta}$, $ h_{n/2} (z^{1/\beta})=O(1)$ as $r=|z|\to 0$ and $ h_{n/2} (z^{1/\beta})=O(r^{(n -1)/(2\beta)} e^{-r^{1/\beta}\sin (\theta /\beta)})$ as $r\to \infty$, the Lebesgue theorem on dominated convergence gives $ A_0=Re \,\int_{n_\theta} h_{n/2} (z^{1/\beta})\,dz$. To evaluate the last integral, we again use analyticity and replace $n_\theta$ by $n_{\pi \beta/2}=\{z: z=re^{i\pi \beta/2}, \; r>0\}$ to get
$$ A_0=Re \,\Big [e^{i\pi \beta/2}\int_0^\infty h_{n/2} (r^{1/\beta}e^{i\pi/2})\,dr\Big ].$$
To finalize the calculations, we invoke the Macdonald function $K_\nu (z)$, so that
$$ h_\nu (z)=z^\nu H_\nu^{(1)} (z)=-\frac{2i}{\pi}(z e^{-i\pi/2})^\nu K_\nu (z e^{-i\pi/2}).$$
This gives
$$ A_0=\frac{2\beta}{\pi}\, \sin (\pi \beta/2) \int_0^\infty s^{n/2 +\beta-1} K_{n/2} (s)\, ds. $$
The last integral can be explicitly evaluated by formula 2.16.2 (2) from \cite{PBM}, and we obtain the result. \end{proof}

The Beta-semigroup $\mathcal{B}_{t}$ generated by the kernel $w^{(\beta)} (y,t)$ (see (\ref{beta})) is defined by
\begin{equation} \mathcal{B}_{t}f(x)=\int_{\mathbb{R}^{n}}w^{\left( \beta \right) }\left( y,t\right) f\left( x-y\right) dy, \qquad t>0, \label{2.21} \end{equation}
and the corresponding weighted wavelet transform has the form
\begin{equation}\label{bew} W_a f(x,t)=\int_0^\infty \mathcal{B}_{t\eta} f (x)\, e^{-at\eta}\,d\mu (\eta),\end{equation}
where $a\ge 0$ is a fixed number which is at our disposal; cf. (\ref{cwtrw}).
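For $\beta=1$, the Bernstein measure in (\ref{751}) is explicit; we state this classical subordination formula for illustration. One has $d\mu_1(t)=\frac{1}{2\sqrt{\pi}}\,t^{-3/2}e^{-1/4t}\,dt$, so that
$$ e^{-|\xi|}=\frac{1}{2\sqrt{\pi}}\int_0^\infty t^{-3/2}e^{-1/4t}\,e^{-t|\xi|^{2}}\,dt, $$
which, combined with (\ref{75}), recovers the Poisson kernel (\ref{beta1}) from the Gauss-Weierstrass kernel (\ref{beta2}).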
Following \cite{A}, we introduce Beta-potentials \betaegin{equation}\label{bpot} J_\beta^\alpha f = (E+ (-{\Cal D}el^{\beta/2}))^{-\alpha/\beta}f,\qquad \alphalpha >0, \quad \beta >0,\varepsilonnd{equation} that can be realized through the Beta-semigroup as \betaegin{equation}\label{bpott} J_\beta^\alpha f (x)=\frac{1}{\mathcal{G}amma (\alphalpha /\betaegin{equation}ta )} \int_{0}^{\infty }t^{\alpha/\beta-1}\tauext{ }e^{-t}\tauext{ } \mathcal{B}_{t}f(x)\tauext{ }dt. \varepsilonnd{equation} For $\beta=2$, (\ref{bpot}) coincides with the classical Bessel potential (\ref{1.9}), and (\ref{bpott}) mimics (\ref{pot2}). Similarly, for $\beta=1$, the Beta-potentials coincide with the Flett potential (\ref{fla}). Explicit inversion formulas for Beta-potentials can be obtained with the aid of the wavelet transform (\ref{bew}) as follows. \betaegin{equation}gin{theorem} \label{t2.7} Let $f\in L_{p}(\mathbb{R}^{n})$, \ $1\leq p< \infty $, $\alphalpha >0, \; \beta>0$. Suppose that $\mu $ is a finite Borel measure on $[0,\infty )$ satisfying \betaegin{equation}gin{eqnarray*} &\tauext{(a) \ }&\int_{1}^{\infty }\varepsilonta^{\gammamaamma }\tauext{ }d\left| \mu \right| \left( \varepsilonta\right) <\infty \tauext{ \ \ for some \ }\gammamaamma >\alphalpha /\betaegin{equation}ta ; \\ &\tauext{(b) \ }&\int_{0}^{\infty }\varepsilonta^{j}\tauext{ }d\mu \left( \varepsilonta\right) =0, \quad \forall \,j=0,1,...,[\alphalpha /\betaegin{equation}ta ]. 
\varepsilonnd{eqnarray*} If $\varphihi =J_{\betaegin{equation}ta }^{\alphalpha }f$, then \betaegin{equation}\label{form} \int_{0}^{\infty }W\varphihi \left( x,t\right) \frac{dt}{t^{1+\alphalpha /\betaegin{equation}ta }}\varepsilonquiv \lim_{\varepsilonpsilon \rightarrow 0}\int_{\varepsilonpsilon }^{\infty }W\varphihi \left( x,t\right) \frac{dt}{t^{1+\alphalpha /\betaegin{equation}ta }}=c_{\alphalpha /\betaegin{equation}ta ,\mu }f(x), \varepsilonnd{equation} where $c_{\alpha/\beta,\mu }$ \ is defined by (\ref{1.17n}) (with $\alpha$ replaced by $\alpha/\beta$). The limit in (\ref{form}) exists in the $L_{p}$-norm and pointwise for almost all $x$. If $f\in C_{0}$, the convergence is uniform. \varepsilonnd{theorem} The proof of this theorem mimics that of Theorem \ref {t1.5}; see \cite {A} for details. \betaegin{equation}gin{remark} The classical Riesz potential $I^{\alphalpha}f$ has an integral representation via the Beta-semigroup, namely, \betaegin{equation}\label{formr} I^{\alphalpha}f(x)= \frac{1}{\mathcal{G}amma (\alphalpha /\betaegin{equation}ta )} \int_{0}^{\infty }t^{{\alpha/\beta}-1}\tauext{ } \mathcal{B}_{t}f(x)\tauext{ }dt.\varepsilonnd{equation} Here $f \in L_{p}(\mathbb{R}^{n}), \ 1\leq p < \infty$, and \ $ 0< Re \alphalpha < n/p$. For the cases $\betaegin{equation}ta=1$ and $\betaegin{equation}ta=2$ we have the representations in terms of the Poisson and Gauss-Weierstrass semigroups, respectively. The potential $I^{\alphalpha}f$ can be inverted in the framework of the $L_{p}$-theory by making use of (\ref{formr}) and the composite wavelet transform (\ref{bew}) with $a=0$. \varepsilonnd{remark} \section{Parabolic Wavelet Transforms} The following anisotropic wavelet transforms of the composite type, associated with the heat operators \betaegin{equation}\label{Ho} \P_martialial /\P_martialial t -{\Cal D}elta, \qquad E+\P_martialial /\P_martialial t -{\Cal D}elta, \varepsilonnd{equation} were introduced by Aliev and Rubin \cite{7}.
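In the Fourier domain, the representation (\ref{bpott}) of $J_\beta^\alpha$ and the analogous Riesz representation in the Remark above reduce to the elementary Gamma-integrals $\mathcal{G}amma(\alpha/\beta)^{-1}\int_0^\infty t^{\alpha/\beta-1}e^{-t(1+|\xi|^\beta)}\,dt=(1+|\xi|^\beta)^{-\alpha/\beta}$ and $\mathcal{G}amma(\alpha/\beta)^{-1}\int_0^\infty t^{\alpha/\beta-1}e^{-t|\xi|^\beta}\,dt=|\xi|^{-\alpha}$. A numerical sketch (the parameter values are arbitrary; $\alpha/\beta\ge 1$ keeps the integrand bounded at the origin):

```python
import numpy as np
from math import gamma

def subordination(u, g, weight, T=80.0, n=800001):
    """(1/Gamma(g)) * int_0^T t^{g-1} e^{-weight*t} e^{-u*t} dt, trapezoid rule (g >= 1 here)."""
    t = np.linspace(0.0, T, n)
    vals = t ** (g - 1.0) * np.exp(-(weight + u) * t)
    return float(np.sum((vals[:-1] + vals[1:]) * np.diff(t)) / 2.0) / gamma(g)

alpha, beta, xi = 1.5, 1.0, 2.0                 # arbitrary test values with alpha/beta > 1
g, u = alpha / beta, abs(xi) ** beta
bessel_side = subordination(u, g, weight=1.0)   # should match (1 + |xi|^beta)^(-alpha/beta)
riesz_side = subordination(u, g, weight=0.0)    # should match |xi|^(-alpha)
```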
These transforms are constructed using the Gauss-Weierstrass kernel $w(y, t) = (4\P_mi t)^{-n/2} \varepsilonxp (- |y|^2/4t)$ as follows. Let ${\Bbb R}^{n+1}$ be the $(n+1)$-dimensional Euclidean space of points $(x, t)$, $x = (x_1, \ldots, x_n) \in {\Bbb R}^n, \, $ $t \in {\Bbb R}^1$. We choose a wavelet measure $\mu$ on $[0,\infty)$, a scaling parameter $a>0$, and set \betaegin{equation} \label{pwtr} P_\mu f(x, t;a) = \int_{{\Bbb R}^n \tauimes (0, \infty)} f(x-\sqrt{a} y, t-a\tauau) \,w(y, \tauau) \, dyd\mu (\tauau), \varepsilonnd{equation} \betaegin{equation} \label{pwtrw}{\Cal P}_\mu f(x, t; a) = \int_{{\Bbb R}^n \tauimes (0, \infty)} f(x - \sqrt{a} y, t- a\tauau) \,w(y, \tauau)\, e^{-a\tauau} \,dyd\mu(\tauau)\varepsilonnd{equation} (to simplify the notation, without loss of generality we can assume $\mu (\{0\})=0$). We call (\ref{pwtr}) and (\ref{pwtrw}) the {\it parabolic wavelet transform} and the {\it weighted parabolic wavelet transform}, respectively. Parabolic potentials $H^\alpha f$ and ${\Cal H}^\alpha f$, associated with the differential operators in (\ref{Ho}), are defined in Fourier terms by \betaegin{equation}\label{ppo} F[H^\alpha f](\xi, \tauau) = (|\xi|^2 + i \tauau)^{-\alpha/2} F[f] (\xi, \tauau),\varepsilonnd{equation} \betaegin{equation}\label{ppo1} F[\mathcal H^\alpha f](\xi, \tauau) = (1+ |\xi|^2 + i \tauau)^{-\alpha/2} F[f](\xi, \tauau),\varepsilonnd{equation} where $F$ stands for the Fourier transform in ${\Bbb R}^{n+1}$. These potentials were introduced by Jones \cite{Jo} and Sampson \cite{Sa} and used as a tool for the characterization of anisotropic function spaces of fractional smoothness; see \cite{7} and references therein.
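The definitions (\ref{ppo}) and (\ref{ppo1}) rest on the Laplace integral $\mathcal{G}amma(\alpha/2)^{-1}\int_0^\infty \tauau^{\alpha/2-1}e^{-z\tauau}\,d\tauau=z^{-\alpha/2}$, $Re\, z>0$ (principal branch), applied with $z=|\xi|^2+i\tauau$, resp. $z=1+|\xi|^2+i\tauau$. A numerical sketch of this identity (the test values are arbitrary):

```python
import numpy as np
from math import gamma

# For Re z > 0 the Laplace integral yields the principal branch of z^(-a/2).
a = 3.0                                   # alpha; a/2 > 1 keeps the integrand bounded at 0
z = 2.0 + 1.5j                            # stands for |xi|^2 + i*tau with |xi|^2 = 2, tau = 1.5
tau = np.linspace(0.0, 60.0, 600001)
vals = tau ** (a / 2 - 1.0) * np.exp(-z * tau)
lhs = complex(np.sum((vals[:-1] + vals[1:]) * np.diff(tau)) / 2.0) / gamma(a / 2)
rhs = z ** (-a / 2)
```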
For $\alpha>0$, potentials $H^\alpha f$ and ${\Cal H}^\alpha f$ are representable by the integrals \betaegin{equation}a \qquad H^\alpha f(x, t) &=& {1\over \mathcal{G}amma(\alpha/2)} \int_{{\Bbb R}^n \tauimes (0,\infty)} \tauau^{\alpha/2-1} w(y, \tauau) \,f(x-y, t-\tauau)\, dyd\tauau,\\ {\Cal H}^\alpha f(x, t)&=&{1\over \mathcal{G}amma (\alpha/2)} \int_{{\Bbb R}^n \tauimes (0, \infty)} \tauau^{\alpha/2-1} e^{-\tauau} w(y, \tauau) \,f(x-y, t-\tauau) \, dyd\tauau.\varepsilonnd{equation}a Their behavior on functions $f \in L_p \varepsilonquiv L_p ({\Bbb R}^{n+1})$ is characterized by the following theorem. \betaegin{equation}gin{theorem} \cite {Ba}, \cite {Ra} \newline {\rm I.} \ Let $f \in L_{p}, \; 1 \le p < \infty, \; 0 < \alpha < (n+2)/p, \quad q = (n+2-\alpha p)^{-1} (n+2) p$. {\rm (a)} \ The integral $(H^\alpha f)(x, t)$ converges absolutely for almost all $(x, t) \in {\Bbb R}^{n+1}$. {\rm (b)} \ For $p > 1$, \ \ the operator $H^\alpha$ is bounded from $L_{p}$ into $L_{q}$. {\rm (c)} \ For $p = 1$, $H^\alpha$ is an operator of the weak $(1, q)$ type: $$ |\{ (x, t): |(H^\alpha f) (x, t) | > \gammamaamma\}| \le \left({c\| f \|_1 \over \gammamaamma}\right)^q. $$ \newline {\rm II.} \ The operator ${\Cal H}^\alpha$ is bounded on $L_{p}$ for all $\alpha \gammamae 0, \quad 1 \le p \le \infty$. \varepsilonnd{theorem} Explicit inversion formulas for parabolic potentials in terms of the wavelet transforms (\ref{pwtr}) and (\ref{pwtrw}) are given by the following theorem. \betaegin{equation}gin{theorem} \cite{7} Let $\mu$ be a finite Borel measure on $[0, \infty)$ satisfying the following conditions: \betaegin{equation}a \label{conn1}&{}&\tauext{ \ }\int_{1}^{\infty }t^{\gammamaamma }d|\mu |(t )<\infty \tauext{ \ \ \tauextit{for some} \ }\gammamaamma >\alphalpha/2 ; \\ \label{conn2}&{}&\tauext{ \ }\int_{0}^{\infty }t^{j}d\mu (t)=0\tauext{ \ , \ }\forall j=0,1,\ldots ,[\alphalpha/2 ].
\varepsilonnd{equation}a Suppose that $\varphihi = H^\alpha f, \; \; f \in L_p, \; \; 1 \le p < \infty, \; \; 0 < \alpha < (n+2)/p$. Then \betaegin{equation}\label {inf}\int_0^\infty P_\mu \varphihi(x, t; a) \; {da\over a^{1+\alpha/2}} \, \varepsilonquiv \, \lim_{\varepsilon \tauo 0} \int^\infty_\varepsilon (\dots) = c_{{\alpha}/2, \mu} \ f(x, t),\varepsilonnd{equation} where $c_{{\alpha}/2, \mu}$ is defined by (\ref{1.17n}) (with $\alphalpha$ replaced by $\alphalpha /2$). The limit in (\ref {inf}) is interpreted in the $L_{p}$-norm for $1 \le p < \infty$ and a.e. on ${\Bbb R}^{n+1}$ for $1 < p < \infty$. The same statement holds for all $\alpha > 0$ and $1 \le p \le \infty$ ($L_\infty$ is identified with $C_0$) provided that $H^\alpha$ and $P_\mu$ are replaced by ${\Cal H}^\alpha$ and ${\Cal P}_\mu$, respectively. \varepsilonnd{theorem} More general results for parabolic wavelet transforms with the generalized translation associated with the singular heat operators \betaegin{equation}\label{Hos} \P_martialial /\P_martialial t -{\Cal D}elta_{\nu}, \qquad E+\P_martialial /\P_martialial t -{\Cal D}elta_{\nu}, \qquad \qquad{\Cal B}ig ({\Cal D}elta _{\nu }=\sum\limits_{k=1}^{n}\frac{\P_martialial ^{2}}{\P_martialial x_{k}^{2}}+\frac{2\nu}{x_{n}}\,\frac{\P_martialial}{\P_martialial x_{n}}{\Cal B}ig ),\varepsilonnd{equation} were obtained in \cite{6}. These include the Calder\'{o}n-type reproducing formula and explicit $L_p$-inversion formulas for parabolic potentials with the generalized translation defined by \betaegin{equation}gin{eqnarray*} H_{\nu }^{\alphalpha }f(x,t) &=&F_{\nu }^{-1}[(\left| x\right| ^{2}+it)^{-\alphalpha /2}\tauext{ }F_{\nu }f(x,t)], \\ \mathcal{H}_{\nu }^{\alphalpha }f(x,t) &=&F_{\nu }^{-1}[(1+\left| x\right| ^{2}+it)^{-\alphalpha /2}\tauext{ }F_{\nu }f(x,t)].
\varepsilonnd{eqnarray*} In the last two expressions, $x\in \mathbb{R}_{+}^{n}=\{x\in \mathbb{R}^{n}:$ \ $ x_{n}>0\}$, $\; t\in \mathbb{R}^{1}$, and $F_{\nu}$ \ is the Fourier-Bessel transform, i.e., the Fourier transform with respect to the variables $ t $ and $x^{\P_mrime }=(x_{1},...,x_{n-1}),$ and the Bessel transform with respect to $x_{n}>0.$ These results were applied in \cite{6, 7} to the wavelet-type characterization of the parabolic Lebesgue spaces. \section{Some Applications to Inversion of the $k$-plane Radon Transform} We recall some basic definitions. More information can be found in \cite{GGG, 15, 10, 24, 25}. Let $\ \mathcal{G}_{n,k}$ \ and $ G_{n,k}$ be the affine Grassmann manifold of all non-oriented $k$-dimensional planes ($k$-planes) $\tauau $ \ in $\mathbb{R}^{n}$ and the ordinary Grassmann manifold of $k$-dimensional linear subspaces $\zetaeta $ of $ \mathbb{R}^{n}$, respectively. Each $k$-plane $\tauau \in \mathcal{G}_{n,k}$ is parameterized as $\tauau =\left( \zetaeta \tauext{, }u\right) $, where $\zetaeta \in G_{n,k}$ and $u\in \zetaeta ^{\P_merp }$ (the orthogonal complement of $\zetaeta $ in $\mathbb{R}^{n}$). We endow $\mathcal{G}_{n,k}$ with the product measure $d\tauau =d\zetaeta du$, where $d\zetaeta $ is the $O(n)$-invariant measure on $G_{n,k}$ of total mass $1,$ and $du$ denotes the Euclidean volume element on $\zetaeta ^{\P_merp }$. The \tauextit{$k$-plane Radon transform} of a function $f$ on $\mathbb{R}^{n}$ is defined by \betaegin{equation}gin{equation} \hat f (\tauau )\varepsilonquiv \hat f (\zetaeta \tauext{, } u)=\int_{\zetaeta }f(y+u)\,dy, \label{3.1} \varepsilonnd{equation} where $dy$ is the induced Lebesgue measure on the subspace \ $\zetaeta \in G_{n,k}.$ \ This transform assigns to a function $f$ a collection of integrals of $f$ over all $k$-planes in ${\Bbb R}^n$.
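For intuition, in the simplest case $k=1$, $n=2$ (the X-ray transform) one has, for $f(x)=e^{-|x|^2}$, $\hat f(\zetaeta, u)=\sqrt{\P_mi}\,e^{-|u|^2}$, since $|u+s\tauheta|^2=|u|^2+s^2$ when $u$ is orthogonal to the direction $\tauheta$ of the line. This is easy to confirm numerically; a small sketch (the direction and offset are arbitrary choices):

```python
import numpy as np

phi = 0.73                                           # arbitrary direction angle
theta = np.array([np.cos(phi), np.sin(phi)])         # unit vector spanning the line zeta
u = 1.1 * np.array([-np.sin(phi), np.cos(phi)])      # offset u in zeta-perp, |u| = 1.1
s = np.linspace(-30.0, 30.0, 600001)
pts = u[None, :] + s[:, None] * theta[None, :]
f_vals = np.exp(-np.sum(pts ** 2, axis=1))           # f(x) = exp(-|x|^2) along the line
radon = float(np.sum((f_vals[:-1] + f_vals[1:]) * np.diff(s)) / 2.0)
closed_form = float(np.sqrt(np.pi) * np.exp(-1.1 ** 2))
```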
The corresponding \tauextit{dual $k$-plane transform} of a function $\varphihi$ on $ \mathcal{G}_{n,k}$ is defined as the mean value of $\varphihi \left( \tauau \right) $ over all $k$-planes $\tauau $ through $x\in \mathbb{R}^{n}$: \betaegin{equation}gin{equation} \check{\varphihi}\left( x\right) =\int_{O(n)}\varphihi (\mathcal{\sigmama }\zetaeta _{0}+x) \, d\mathcal{\sigmama }, \qquad x\in \mathbb{R}^{n}. \label{3.2} \varepsilonnd{equation} Here $\zetaeta _{0}\in G_{n,k}$ \ is an arbitrary fixed $k$-plane through the origin. If $f\in L_{p}(\mathbb{R}^{n}),$ then $\hat f $ is finite a.e. on $\mathcal{G}_{n,k}$ \ if and only if $1\leq p<n/k.$ Several inversion procedures are known for $\hat f$. One of the most popular, which goes back to Blaschke and Radon, relies on the Fuglede formula \cite[p. 29]{15}, \betaegin{equation}gin{equation} ( \hat f)^{\vee }=d_{k,n}I^{k}f, \qquad d_{k,n}=\left( 2\P_mi \right) ^{k}\sigmama _{n-k-1}/\sigmama _{n-1}, \label{3.3} \varepsilonnd{equation} and reduces reconstruction of $f$ to inversion of the Riesz potentials $I^{k}f.$ The latter can also be inverted in a number of ways \cite{29}, \cite {28}, \cite{21}. In view of the considerations in Sections 3.2 and 5, one can employ a composite wavelet transform generated by the Poisson, Gauss-Weierstrass, or Beta semigroup and thus obtain new inversion formulas for the $k$-plane transform on ${\Bbb R}^n$ in terms of a wavelet measure on the one-dimensional set $[0,\infty)$. For instance, this way leads to the following \betaegin{equation}gin{theorem} \label{t3.1} Let $\varphihi =\hat f$ be the $k$-plane Radon transform of a function $f\in L_{p}$, $1\leq p<n/k$.
$\ $Let $\mu $ be a finite Borel measure on $ [0,\infty )$ satisfying \betaegin{equation}gin{eqnarray*} &\tauext{(a) \ }&\int_{1}^{\infty }\varepsilonta ^{\gammamaamma }d\left| \mu \right| \left( \varepsilonta \right) <\infty \tauext{ \ for some }\gammamaamma >k; \\ &\tauext{(b) \ }&\int_{0}^{\infty }\varepsilonta ^{j}d\mu \left( \varepsilonta \right) =0 \quad \forall \tauext{ }j=0,1,...,k. \varepsilonnd{eqnarray*} Let $W\check{\varphihi}$ be the wavelet transform of $\check{\varphihi}$, associated with the Poisson semigroup (\ref{1.1}), namely, \betaegin{equation} \label{pWtr} W\check{\varphihi}(x,t)=\int_0^\infty \mathcal{P}_{t\varepsilonta} \check{\varphihi}(x)\, d\mu(\varepsilonta), \qquad x \in {\Bbb R}^n, \;\; t>0.\varepsilonnd{equation} Then \betaegin{equation}gin{equation} \int_{0}^{\infty }W\check{\varphihi}\left( x,t\right) \frac{dt}{t^{1+k}}\varepsilonquiv \lim_{\varepsilonpsilon \rightarrow 0 }\int_{\varepsilonpsilon }^{\infty }W\check{\varphihi} \left( x,t\right) \frac{dt}{t^{1+k}}=c_{k,\mu }f(x), \label{3.4} \varepsilonnd{equation} where (cf. (\ref{1.17n})), \betaegin{equation}gin{equation*} c_{k,\mu }=\frac{\left( -1\right) ^{k+1}}{k!}\int_{0}^{\infty }t^{k}\log t\tauext{ }d\mu \left( t\right) . \varepsilonnd{equation*} The limit in (\ref{3.4}) exists in the $L_{p}$-norm and pointwise almost everywhere. If $f\in C_{0}\cap L_{p}$, the convergence is uniform on ${\Bbb R}^n$. \varepsilonnd{theorem} \betaegin{equation}gin{remark} The following observation might be interesting. Let \betaegin{equation} I_-^\alpha u(t)=\frac{1}{\mathcal{G}am (\alpha)}\int_t^\infty (s-t)^{\alpha -1} u(s)\,ds, \qquad t>0,\varepsilonnd{equation} be the Riemann-Liouville integral of $u$.
It is known \cite[formula (16.9)]{21} that the Poisson integral takes the Riesz potential $I^\alpha f$ to the Riemann-Liouville integral of the function $t \tauo {\Cal P}_t f$, namely, \betaegin{equation} \label {comb4} {\Cal P}_t I^\alpha f=I_-^\alpha {\Cal P}_{(\cdot)} f.\varepsilonnd{equation} Denoting by $R$ and $R^{\alphast }$ the Radon $k$-plane transform and its dual, owing to Fuglede's formula (\ref{3.3}), we have \betaegin{equation}gin{equation} R^{\alphast }Rf=d_{k,n}\,I^{k}f. \label{3.5} \varepsilonnd{equation} Combining (\ref{3.5}) and (\ref{comb4}), we get \betaegin{equation}gin{equation} R_{t}^{\alphast }Rf=d_{k,n}\,I_-^k {\Cal P}_{(\cdot)} f, \qquad R_{t}^{\alphast }\varphihi (x)=(\mathcal{P}_{t}R^{\alphast }\varphihi )(x) . \label{3.7} \varepsilonnd{equation} This formula has the same nature as the following one in terms of the spherical means, which lies in the scope of the classical Funk-Radon-Helgason theory: \betaegin{equation}\label{frh} (\hat f)^\vee_r (x)=\sigma_{k-1}\int_r^\infty (\fr{M}_t f)(x) (t^2 -r^2)^{k/2 -1} t \, dt; \varepsilonnd{equation} see Lemma 5.1 in \cite{24}. Here $\sigma_{k-1}$ is the volume of the $(k-1)$-dimensional unit sphere, \betaegin{equation} (\fr{M}_t f)(x)=\frac{1}{\sigma_{n-1}}\int_{S^{n-1}} f(x+t\tauheta ) \,d \tauheta, \qquad t>0, \varepsilonnd{equation} and $(\hat f)^\vee_r (x)$ is the so-called {\it shifted dual $k$-plane transform}, which is the mean value of $\hat f (\tau)$ over all $k$-planes $\tau$ at distance $r$ from $x$. \varepsilonnd{remark} \section{Higher-rank Composite Wavelet Transforms and Open Problems} Challenging perspectives and open problems for composite wavelet transforms are connected with functions of matrix argument and their application to integral geometry.
This relatively new area encompasses the so-called higher-rank problems, when traditional scalar notions, like distance or scaling, become matrix-valued. \subsection{Matrix spaces, preliminaries} We recall some basic notions, following \cite{R9}. Let $\fr{M}_{n,m} \sim {\Bbb R}^{nm}$ be the space of real matrices $x=(x_{i,j})$ having $n$ rows and $m$ columns, $n\gammamaeq m$; $dx=\P_mrod^{n}_{i=1}\P_mrod^{m}_{j=1} dx_{i,j}$ is the volume element on ${\Cal M}a$, $x'$ denotes the transpose of $x$, and $I_m$ is the identity $m \tauimes m$ matrix. Given a square matrix $a$, we denote by $\P_martialialet(a)$ the determinant of $a$, and by $|a|$ the absolute value of $\P_martialialet(a)$; ${\hbox{\rm tr}} (a)$ stands for the trace of $a$. For $x\in {\Cal M}a$, we denote \betaegin{equation}\label{xm}|x|_m =\P_martialialet (x'x)^{1/2}.\varepsilonnd{equation} If $m=1$, this is the usual Euclidean norm on ${\Bbb R}^n$. For $m>1$, $|x|_m$ is the volume of the parallelepiped spanned by the column-vectors of $x$. We use the standard notations $O(n)$ and $SO(n)$ for the orthogonal group and the special orthogonal group of ${\Bbb R}^{n}$ with the normalized invariant measure of total mass 1. Let ${\Cal S}_m \sim {\Bbb R}^{m(m+1)/2}$ be the space of $m \tauimes m$ real symmetric matrices $s=(s_{i,j})$ with the volume element $ds=\P_mrod_{i \le j} ds_{i,j}$. We denote by $\P_m$ the cone of positive definite matrices in ${\Cal S}_m$; $\overline\P_m$ is the closure of $\P_m$, that is, the set of all positive semi-definite $m\tauimes m$ matrices. For $r\in\P_m$ ($r\in\overline\P_m$), we write $r>0$ ($r\gammamaeq 0$). Given $a$ and $b$ in ${\Cal S}_m$, the inequality $a >b$ means $a - b \in \P_m$ and the symbol $\int_a^b f(s) ds$ denotes the integral over the set $(a +\P_m)\cap (b -\P_m)$. The group $G=GL(m,{\Bbb R})$ of real non-singular $m \tauimes m$ matrices $g$ acts transitively on $\P_m$ by the rule $r \tauo g rg'$.
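The geometric meaning of $|x|_m$ introduced above can be checked directly: $\P_martialialet(x'x)^{1/2}$ coincides with the Gram--Schmidt volume $|\P_martialialet(R)|$ from a QR factorization $x=QR$, and reduces to the Euclidean norm for $m=1$. A small numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 2))                      # n = 4, m = 2
vol_det = float(np.sqrt(np.linalg.det(x.T @ x)))     # |x|_m = det(x'x)^{1/2}
q, r = np.linalg.qr(x)                               # x = q r, q has orthonormal columns
vol_qr = float(abs(np.linalg.det(r)))                # Gram-Schmidt volume of the columns

x1 = rng.standard_normal((4, 1))                     # m = 1: |x|_1 is the Euclidean norm
norm_det = float(np.sqrt(np.linalg.det(x1.T @ x1)))
norm_euc = float(np.linalg.norm(x1))
```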
The corresponding $G$-invariant measure on $\P_m$ is \betaegin{equation}\label{2.1} d_{*} r = |r|^{-d} dr, \qquad |r|=\P_martialialet (r), \qquad d= (m+1)/2 \varepsilonnd{equation} \cite[p. 18]{Te}. \betaegin{equation}gin{lemma}\label{12.2} \cite[pp. 57--59] {Mu}\hskip10truecm \noindent {\rm (i)} \ If $ \; x=ayb$ where $y\in{\Cal M}a, \; a\in GL(n,{\Bbb R})$, and $ b \in GL(m,{\Bbb R})$, then $dx=|a|^m |b|^ndy$. \\ {\rm (ii)} \ If $ \; r=q'sq$ where $s\in S_m$, and $q\in GL(m,{\Bbb R})$, then $dr=|q|^{m+1}ds$. \\ {\rm (iii)} \ If $ \; r=s^{-1}$ where $s\in \P_m$, then $r\in \P_m$, and $dr=|s|^{-m-1}ds$. \varepsilonnd{lemma} For $Re \, \alpha >d-1$, the Siegel gamma function of $\P_m$ is defined by \betaegin{equation}\label{2.444} \gammamam (\alpha)=\int_{\P_m} \varepsilonxp(-{\hbox{\rm tr}} (r)) |r|^{\alpha } d_*r =\P_mi^{m(m-1)/4}\P_mrod\limits_{j=0}^{m-1} \mathcal{G}am (\alpha- j/2), \varepsilonnd{equation} \cite{FK, Te}. The relevant beta function has the form \betaegin{equation}\label{2.6} B_m (\alpha ,\beta)=\int_0^{I_m} |r|^{\alpha -d} |I_m-r|^{\betaegin{equation}ta -d} dr= \frac{\gammamam (\alpha)\gammamam (\beta)}{\gammamam (\alpha+\beta)}, \quad d= (m+1)/2. \varepsilonnd{equation} This integral converges absolutely if and only if $Re \, \alpha, Re \, \beta >d-1$. All function spaces on ${\Cal M}a$ are identified with the corresponding spaces on ${\Bbb R}^{nm}$. For instance, ${\Cal S}({\Cal M}a)$ denotes the Schwartz space of infinitely differentiable rapidly decreasing functions. The Fourier transform of a function $f\in L_{1}({\Cal M}a)$ is defined by \betaegin{equation}\label{ft} {\Cal F} f(y)=\int_{{\Cal M}a} \varepsilonxp({\hbox{\rm tr}}(iy'x)) f (x) dx,\qquad y\in{\Cal M}a \; .\varepsilonnd{equation} The {\it Cayley-Laplace operator} ${\Cal D}el$ on $ {\Cal M}a$ is defined by \betaegin{equation}\label{K-L} {\Cal D}el=\P_martialialet(\P_martialial '\P_martialial), \qquad \P_martialial=(\P_martialial/\P_martialial x_{i,j}). 
\varepsilonnd{equation} In terms of the Fourier transform, the action of ${\Cal D}el$ amounts to multiplication by the homogeneous polynomial $(-1)^m |y|_m^2$ of degree $2m$ in the $nm$ variables $y_{i,j}$. For the sake of simplicity, for some operators on functions of matrix argument we will use the same notation as in the previous sections. The G{\alphaa}rding-Gindikin integrals of functions $f$ on $\P_m$ are defined by \betaegin{equation}\label{3.1} (I_{+}^\alpha f)(s) \! = \! \frac {1}{\gammamama} \int\limits_0^s \! f(r)|s \! - \! r|^{\alpha-d} dr, \quad (I_{-}^\alpha f)(s) \! = \! \frac {1}{\gammamama} \int\limits_s^\infty \! f(r)|r \! - \! s|^{\alpha-d} dr,\varepsilonnd{equation} where $s \in \P_m$ in the first integral and $s \in \overline\P_m$ in the second one. We assume $Re \, \alpha > d-1$, $ d=(m+1)/2$ (this condition is necessary for the absolute convergence of these integrals). The first integral exists a.e. for an arbitrary locally integrable function $f$. Existence of the second integral requires extra assumptions on $f$ at infinity. The {\it Riesz potential} of a function $f\in{\Cal S}({\Cal M}a)$ is defined by \betaegin{equation}\label{rie} (I^\alpha f)(x)=\frac{1}{\gammamaam_{n,m} (\alpha)} \int_{{\Cal M}a} f(x-y) |y|^{\alpha-n}_m dy;\varepsilonnd{equation} \betaegin{equation}\label{gam} \gammamaam_{n,m} (\alpha)=\frac{2^{\alpha m} \, \P_mi^{nm/2}\, \mathcal{G}am_m (\alpha/2)}{\mathcal{G}am_m ((n-\alpha)/2)}, \ \ \ Re \, \alpha>m-1, \ \ \alpha\neq n-m+1, \, n-m+2, \ldots \varepsilonnd{equation} This integral is finite a.e. for $f \in L_{p}({\Cal M}a)$ provided $ 1 \le p <n(Re \, \alpha +m-1)^{-1}$ \cite[Theorem 5.10]{R9}.
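For $m=1$, the constant $\gammamaam_{n,m}(\alpha)$ in (\ref{gam}) must reduce to the classical normalization $2^\alpha\P_mi^{n/2}\mathcal{G}amma(\alpha/2)/\mathcal{G}amma((n-\alpha)/2)$ of the Riesz potential on ${\Bbb R}^n$; this is immediate from the product formula (\ref{2.444}) and is easy to confirm numerically:

```python
from math import gamma, pi

def gamma_m(m, a):
    """Siegel gamma function of the cone P_m, computed by the product formula (2.444)."""
    prod = pi ** (m * (m - 1) / 4.0)
    for j in range(m):
        prod *= gamma(a - j / 2.0)
    return prod

def gamma_nm(n, m, a):
    """Normalizing constant gamma_{n,m}(alpha) of the matrix Riesz potential."""
    return 2.0 ** (a * m) * pi ** (n * m / 2.0) * gamma_m(m, a / 2.0) / gamma_m(m, (n - a) / 2.0)

n, a = 5, 1.6                                        # arbitrary admissible test values
classical = 2.0 ** a * pi ** (n / 2.0) * gamma(a / 2.0) / gamma((n - a) / 2.0)
```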
An application of the Fourier transform gives \betaegin{equation}\label{hek} {\Cal F} [I^\alpha f](\xi)=|\xi|_m^{-\alpha} {\Cal F} f(\xi)\varepsilonnd{equation} (as in the case of ${\Bbb R}^n$), so that $I^\alpha$ can be formally identified with the negative power of the Cayley-Laplace operator (\ref{K-L}), namely, $I^\alpha=(-{\Cal D}el)^{-\alpha/2}$. A discussion of the precise meaning of the equality (\ref{hek}) and related references can be found in \cite{R9}, \cite {OR2}. \betaegin{equation}gin{definition} For $x \in {\Cal M}a, \; n \gammamae m$, and $t \in \P_m$, we define the (generalized) heat kernel $h_t (x)$ by the formula \betaegin{equation}\label{heat} h_t (x)=(4\P_mi)^{-nm/2}|t|^{-n/2} \varepsilonxp (-{\hbox{\rm tr}} (t^{-1} x'x)/4), \qquad |t|=\P_martialialet (t), \varepsilonnd{equation} and set \betaegin{equation}\label{ga} H_t f(x)=\int\limits_{{\Cal M}a} h_t (x-y) f(y) dy=\int\limits_{{\Cal M}a} h_{I_m} (y) f(x-yt^{1/2}) \, dy. \varepsilonnd{equation} \varepsilonnd{definition} Clearly, $H_t f(x)$ is a generalization of the Gauss-Weierstrass integral (\ref{2.4}). \betaegin{equation}gin{lemma}\label{hke}\cite{R9}{}\hfil \noindent {\rm (i)} For each $ \; t \in \P_m$, \betaegin{equation}\label{ed} \int_{{\Cal M}a} h_t (x) \,dx =1.\varepsilonnd{equation} \noindent {\rm (ii)} The Fourier transform of $ \; h_t (x)$ has the form \betaegin{equation}\label{fth}{\Cal F} h_t(y)= \varepsilonxp (-{\hbox{\rm tr}} (ty'y)), \varepsilonnd{equation} which implies the semi-group property \betaegin{equation}\label{cnv} h_t \alphast h_\tauau=h_{t+\tauau}, \qquad t, \tauau \in \P_m. \varepsilonnd{equation} \noindent {\rm (iii)} If $f \in L_{p}({\Cal M}a), \; 1\le p \le \infty$, then \betaegin{equation}\label{gw} ||H_t f||_p \le ||f||_p \, , \qquad \quad H_t H_\tauau f=H_{t+\tauau}f, \varepsilonnd{equation} and \betaegin{equation}\label{lim}\lim\limits_{t \tauo 0}(H_t f)(x)=f(x) \varepsilonnd{equation} in the $L_{p}$-norm.
If $f$ is a continuous function vanishing at infinity, then (\ref{lim}) holds in the $\sup$-norm. \varepsilonnd{lemma} \betaegin{equation}gin{theorem}\label{rrg} \cite{R9} \ Let $ \; m-1<Re \, \alpha<n-m+1$, $d=(m+1)/2$. Then \betaegin{equation}\label{rg} (I^\alpha f)(x) = \frac {1}{\mathcal{G}am_m(\alpha/2)} \int_{\P_m} |t|^{\alpha/2}H_t f(x) \,d_*t, \qquad d_*t=|t|^{-d}\, dt,\varepsilonnd{equation} \betaegin{equation}\label{rgg} H_t [I^\alpha f](x) =I_{-}^{\alpha/2}[H_{(\cdot)} f(x)](t),\varepsilonnd{equation} provided that the integrals on either side of the corresponding equality exist in the Lebesgue sense. \varepsilonnd{theorem} \subsection{Composite wavelet transforms: open problems} Formula (\ref{rg}) suggests a natural construction of the relevant composite wavelet transform on ${\Cal M}a$ associated with the heat kernel and containing a $\P_m$-valued scaling parameter. To find this construction, we first obtain an auxiliary integral representation of a power function of the form $|t|^{\lambda -d}$, $d=(m+1)/2$. \betaegin{equation}gin{definition} A function $w$ on $\P_m$ is said to be symmetric if \betaegin{equation}\label{defs} w(g\varepsilonta g^{-1})= w (\varepsilonta) \quad \tauext{for all} \quad g \in GL(m,{\Bbb R}), \quad \varepsilonta\in \P_m.\varepsilonnd{equation} \varepsilonnd{definition} Note that if $w$ is symmetric, then for any $s,t \in \P_m$, \betaegin{equation}\label{wsy} w(t^{1/2} s t^{1/2})= w(s^{1/2} t s^{1/2})\quad \tauext{and} \quad w(ts)=w(st).\varepsilonnd{equation} Indeed, the second equality follows from (\ref{defs}) if we set $\varepsilonta= ts, \; g=t^{-1}$.
The first equality in (\ref{wsy}) is a consequence of the second one: $$ w(t^{1/2} s t^{1/2})= w(t^{-1/2}[t^{1/2} s t^{1/2}]t^{1/2})=w(st)=w(ts)=w(s^{1/2} t s^{1/2}).$$ \betaegin{equation}gin{lemma} Let $w$ be a symmetric function on $\P_m$ satisfying \betaegin{equation} \int_{\P_m}\frac{|w(\varepsilonta)|}{|\varepsilonta|^\lambda}\, d\varepsilonta < \infty, \qquad c=\int_{\P_m}\frac{w(\varepsilonta)}{|\varepsilonta|^\lambda}\, d\varepsilonta\neq 0, \qquad |\varepsilonta|=\P_martialialet (\varepsilonta). \varepsilonnd{equation} Then for $t \in \P_m$, \betaegin{equation}\label{gra} |t|^{\lambda -d}=c^{-1}\, \int_{\P_m}\frac{w(a^{-1}t)}{|a|^{m+1-\lambda}}\, da, \qquad d=(m+1)/2.\varepsilonnd{equation} \varepsilonnd{lemma} \betaegin{equation}gin{proof} By (\ref{wsy}) we have (set $a=\rho^{-1}, \;da= |\rho|^{-2d}d\rho$) \betaegin{equation}a \int_{\P_m}\frac{w(a^{-1}t)}{|a|^{m+1-\lambda}}\, da&=&\int_{\P_m}\frac{w(t^{1/2}a^{-1}t^{1/2})}{|a|^{m+1-\lambda}}\, da=\int_{\P_m}\frac{w(t^{1/2}\rho t^{1/2})}{|\rho|^{\lambda-d}}\, d_*\rho \nonumber\\ &=&|t|^{\lambda -d}\int_{\P_m}\frac{w(\varepsilonta)}{|\varepsilonta|^\lambda}\, d\varepsilonta=c\, |t|^{\lambda -d}.\nonumber\varepsilonnd{equation}a \varepsilonnd{proof} Now we replace the power function in (\ref{rg}) according to (\ref{gra}) with $\lambda=\alpha/2$.
For $Re \, \alpha> (m-1)/2$, we obtain \betaegin{equation}a (I^\alpha f)(x) &=& \frac {c^{-1}}{\mathcal{G}am_m(\alpha/2)} \int_{\P_m} H_t f(x)\, dt \int_{\P_m}\frac{w(a^{-1}t)}{|a|^{m+1-\alpha/2}}\, da\nonumber \\ &=&\frac {c^{-1}}{\mathcal{G}am_m(\alpha/2)} \int_{\P_m}\frac{d_* a}{|a|^{d-\alpha/2}}\int_{\P_m} H_t f(x) \,w (a^{-1}t)\, dt.\nonumber \varepsilonnd{equation}a This gives \betaegin{equation} \label {prot}(I^\alpha f)(x) = \frac {c^{-1}}{\mathcal{G}am_m(\alpha/2)}\int_{\P_m} {\Cal H} f(x,a)|a|^{\alpha/2} \,d_*a,\qquad Re \, \alpha> (m-1)/2,\varepsilonnd{equation} with \betaegin{equation} {\Cal H} f(x,a)=|a|^{-d}\int_{\P_m} H_t f(x)\,w (a^{-1}t)\, dt\varepsilonnd{equation} or, by the symmetry of $w$, after changing variable, \betaegin{equation} \label {hewtr}{\Cal H} f(x,a)=\int_{\P_m} H_{a^{1/2}\varepsilonta a^{1/2}} f(x) w(\varepsilonta)\, d\varepsilonta, \qquad x \in {\Cal M}a, \quad a \in \P_m.\varepsilonnd{equation} Taking into account an obvious similarity between (\ref{hewtr}) and the corresponding ``rank-one" formula for $m=1$, we call ${\Cal H} f(x,a)$ the {\it composite wavelet transform of $f$ associated to the heat semigroup} $H_t$. Here $w$ is a symmetric integrable function on $\P_m$ (that will be endowed later with some cancelation properties) and $a$ is a $\P_m$-valued scaling parameter. One can replace $w$ by a more general {\it wavelet measure}, as we did in the previous sections, but here we want to minimize technicalities. Owing to (\ref{hek}), it is natural to expect that the inverse of $I^\alpha$ has the same form (\ref{prot}) with $\alpha$ formally replaced by $-\alpha$, and the case $\alpha=0$ gives a variant of Calder\'{o}n's reproducing formula. 
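The derivation of (\ref{prot}) rests on the representation (\ref{gra}); in the rank-one case $m=1$ ($d=1$) that representation reads $t^{\lambda-1}=c^{-1}\int_0^\infty w(t/a)\,a^{\lambda-2}\,da$ with $c=\int_0^\infty w(\varepsilonta)\varepsilonta^{-\lambda}\,d\varepsilonta$, which can be tested numerically. A sketch with an illustrative wavelet $w(\varepsilonta)=\varepsilonta^2 e^{-\varepsilonta}$ (our choice, not taken from the text):

```python
import numpy as np

def trapz(vals, x):
    """Composite trapezoid rule."""
    return float(np.sum((vals[:-1] + vals[1:]) * np.diff(x)) / 2.0)

w = lambda eta: eta ** 2 * np.exp(-eta)              # illustrative wavelet, m = 1
lam, t = 0.75, 2.0                                   # arbitrary test values, 0 < lam < 1

eta = np.linspace(1e-9, 150.0, 1500001)
c = trapz(w(eta) * eta ** (-lam), eta)               # c = int_0^inf w(eta) eta^{-lam} d eta

a = np.linspace(1e-6, 2000.0, 2000001)
rhs = trapz(w(t / a) * a ** (lam - 2.0), a)          # int_0^inf w(t/a) a^{lam-2} da
lhs = c * t ** (lam - 1.0)
```

The substitution $\varepsilonta=t/a$ shows the two sides agree exactly; the numerical comparison only confirms the truncation and quadrature are benign.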
Thus, we encounter the following open problem: {\betaf Problem A.} {\it Give precise meaning to the inversion formula \betaegin{equation}\label{rghy} f(x) =c_{m,\alpha} \int_{\P_m} \frac{{\Cal H} \varphihi (x,a)}{|a|^{\alpha/2}} \,d_*a, \qquad \varphihi=I^\alpha f,\varepsilonnd{equation} and the reproducing formula \betaegin{equation}\label{rghr} f(x) =c_{m} \int_{\P_m} {\Cal H} f (x,a)\, d_*a,\varepsilonnd{equation} say, for $ f\in L_p$ or any other ``natural" function space. Give examples of wavelet functions $w$ for which (\ref{rghy}) and (\ref{rghr}) hold. Find explicit formulas for the normalizing coefficients $c_{m,\alpha}$ and $c_{m}$, depending on $w$.} A solution of this problem would give a series of pointwise inversion formulas for diverse Radon-like transforms on matrix spaces; see, e.g., \cite {OR2}, \cite {OR3}, \cite {R9}, where such formulas are available in terms of distributions. Justification of (\ref{rghy}) and (\ref{rghr}) would also shed new light on a variety of inversion formulas for Radon transforms on Grassmannians, cf. \cite {GRu}. \subsection{Some discussion} Trying to solve Problem A, we come across new problems that are of independent interest. Let $Re \, \alpha>d-1, \; d=(m+1)/2$. Suppose, for instance, that $f(x), \; x \in {\Cal M}a$, is a Schwartz function and $w (\varepsilonta), \; \varepsilonta \in \P_m$, is ``good enough".
We anticipate the following equality: \betaegin{equation}\label{ant} I_\varepsilon f(x)\varepsilonquiv \int_{\varepsilon I_m}^\infty\frac{{\Cal H} [I^\alpha f] (x,a)}{|a|^{\alpha/2}} \,d_*a= \int_{\P_m} {\Cal L}am_{\alpha/2}(s)\, H_{\varepsilon s} f(x)\, ds,\varepsilonnd{equation} where ${\Cal L}am_{\alpha/2}(s)$ is expressed through the G{\alphaa}rding-Gindikin integral in (\ref{3.1}) as \betaegin{equation}\label {ky} {\Cal L}am_{\alpha/2}(s)=\frac{\mathcal{G}am_m (d)}{|s|^d}\, I_+^{\alpha/2 +d} w (s), \qquad s\in \P_m.\varepsilonnd{equation} If $m=1$ and $\alpha/2$ is replaced by $\alpha$, then (\ref{ky}) coincides with the function $\lambda _{\alphalpha }(s)=s^{-1} I_+^{\alphalpha +1}\mu (s)$ in Lemma \ref{lB}. Now, we give the following \betaegin{equation}gin{definition} An integrable symmetric function $w$ on $\P_m$ is called an {\it admissible wavelet} if \betaegin{equation}\label {wad} {\Cal L}am_{\alpha/2}(s)\varepsilonquiv \frac{\mathcal{G}am_m (d)}{|s|^d}\, I_+^{\alpha/2 +d} w (s) \in L_1 (\P_m)\quad \tauext{\rm and} \quad c_\alpha=\int_{\P_m}{\Cal L}am_{\alpha/2}(s)\,ds\neq 0.\varepsilonnd{equation} \varepsilonnd{definition} If $w$ is admissible, then, by Lemma \ref {hke}, the $L_p$-limit as $\varepsilon \tauo 0$ of the right-hand side of (\ref{ant}) is $c_\alpha \,f$, and we are done. This discussion includes the case $\alpha=0$ corresponding to the reproducing formula. Thus, our attempt to solve Problem A rests upon the following {\betaf Problem B.} {\it Find examples of admissible wavelets (both for $\alpha\neq 0$ and $\alpha=0$) and compute $c_\alpha$.} Now, let us try to prove (\ref{ant}). We say ``try", because along the way we come across one more open problem related to the application of the Fubini theorem; cf. the justification of the interchange of the order of integration in the proof of Theorem \ref{teo:34}.
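In the rank-one case $m=1$, $d=1$, the identity connecting $k(s)=\int_0^1 I_+^{\alpha/2}w(\lambda s)\,d\lambda$ with ${\Cal L}am_{\alpha/2}(s)=s^{-1}I_+^{\alpha/2+1}w(s)$ is just the semigroup property of fractional integration, and it can be confirmed numerically. A sketch with an illustrative $w(r)=re^{-r}$ (our choice; $\alpha/2\ge 1$ keeps the quadrature simple):

```python
import numpy as np
from math import gamma

def I_plus(b, w, s, n=20001):
    """I_+^b w (s) for m = 1: (1/Gamma(b)) int_0^s w(r) (s-r)^{b-1} dr, trapezoid rule."""
    if s == 0.0:
        return 0.0
    r = np.linspace(0.0, s, n)
    vals = w(r) * (s - r) ** (b - 1.0)
    return float(np.sum((vals[:-1] + vals[1:]) * np.diff(r)) / 2.0) / gamma(b)

w = lambda r: r * np.exp(-r)                         # illustrative wavelet (our choice)
alpha, s = 3.0, 2.0

lam_grid = np.linspace(0.0, 1.0, 401)
inner = np.array([I_plus(alpha / 2, w, float(l) * s) for l in lam_grid])
k_s = float(np.sum((inner[:-1] + inner[1:]) * np.diff(lam_grid)) / 2.0)   # int_0^1 ... d lambda
Lam_s = I_plus(alpha / 2 + 1.0, w, s) / s            # s^{-1} I_+^{alpha/2+1} w (s)
```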
By (\ref{hewtr}) and (\ref{rgg}), \betaegin{equation}a {\Cal H} I^\alpha f(x,a)&=&\int_{\P_m} H_{a^{1/2}\varepsilonta a^{1/2}} I^\alpha f(x) \,w(\varepsilonta)\, d\varepsilonta\nonumber \\ &=&\int_{\P_m} I_{-}^{\alpha/2}[H_{(\cdot)} f(x)](a^{1/2}\varepsilonta a^{1/2})\,w(\varepsilonta)\, d\varepsilonta.\nonumber \varepsilonnd{equation}a Assume that $x$ is fixed and denote $\P_msi (s)=H_{s} f(x)$. Then \betaegin{equation}a {\Cal H} I^\alpha f(x,a)&=&\int_{\P_m}w(\varepsilonta)\,I_{-}^{\alpha/2}\P_msi (a^{1/2}\varepsilonta a^{1/2})\, d\varepsilonta\nonumber \\ &=& \frac {1}{\mathcal{G}am_m(\alpha/2)} \int_{\P_m}w(\varepsilonta)\,d\varepsilonta \int_{a^{1/2}\varepsilonta a^{1/2}}^\infty \P_msi (s) |s-a^{1/2}\varepsilonta a^{1/2}|^{\alpha/2 -d}\, ds\nonumber \\ &=&\frac {1}{\mathcal{G}am_m(\alpha/2)} \int_{\P_m}\P_msi (s)\, ds\int_0^{a^{-1/2}s a^{-1/2}}w(\varepsilonta)\,|s-a^{1/2}\varepsilonta a^{1/2}|^{\alpha/2 -d}\,d\varepsilonta\nonumber \\ &=&|a|^{\alpha/2 -d} \int_{\P_m}\P_msi (s)\,I_{+}^{\alpha/2}w (a^{-1/2}s a^{-1/2})\, ds. \nonumber \varepsilonnd{equation}a Hence, the left-hand side of (\ref{ant}) transforms as follows.
\betaegin{equation}a I_\varepsilon f(x)&=& \int_{\varepsilon I_m}^\infty \frac{da}{|a|^{m+1}} \int_{\P_m}\P_msi (s)\,I_{+}^{\alpha/2}w (a^{-1/2}s a^{-1/2})\, ds\nonumber \\ &=&\int_{\P_m}\P_msi (s)\, ds\int_{\varepsilon I_m}^\infty I_{+}^{\alpha/2}w (a^{-1/2}s a^{-1/2})\frac{da}{|a|^{m+1}} \qquad \tauext{\rm (set $a=\tau^{-1}$)}\nonumber \\ &=&\int_{\P_m}\P_msi (s)\, ds\int_0^{\varepsilon^{-1}I_m} I_{+}^{\alpha/2}w (\tau^{1/2}s \tau^{1/2})\, d\tau\nonumber \\ &=&\varepsilon^{md}\int_{\P_m}\P_msi (\varepsilon s)\, ds\int_0^{\varepsilon^{-1}I_m} I_{+}^{\alpha/2}w (\tau^{1/2}\varepsilon^{1/2}s\varepsilon^{1/2} \tau^{1/2})\, d\tau.\nonumber\varepsilonnd{equation}a Thus we have \betaegin{equation} I_\varepsilon f(x)=\int_{\P_m}\P_msi (\varepsilon s)\,k(s)\, ds=\int_{\P_m}H_{\varepsilon s} f(x) \,k(s)\, ds, \varepsilonnd{equation} where $$k(s)=\int_0^{I_m} I_{+}^{\alpha/2}w(\lambda^{1/2}s\lambda^{1/2})\, d\lambda.$$ To get (\ref{ant}), it remains to show that $k(s)$ coincides with the function (\ref{ky}).
We have \begin{eqnarray} k(s)&=&\frac{1}{\Gamma_m(\alpha/2)}\int_0^{I_m} d\lambda \int_0^{\lambda^{1/2}s\lambda^{1/2}} w(t)\, |\lambda^{1/2}s\lambda^{1/2}-t|^{\alpha/2-d}\, dt\nonumber \\ &&\text{(set $t=\lambda^{1/2}z\lambda^{1/2}$ and note that $w(\lambda^{1/2}z\lambda^{1/2})=w(z^{1/2}\lambda z^{1/2})$)}\nonumber \\ &=&\frac{1}{\Gamma_m(\alpha/2)}\int_0^{I_m} |\lambda|^{\alpha/2}\, d\lambda \int_0^s |s-z|^{\alpha/2-d}\, w(z^{1/2}\lambda z^{1/2})\, dz\nonumber \\ &=&\frac{1}{\Gamma_m(\alpha/2)} \int_0^s |s-z|^{\alpha/2-d}\, dz \int_0^{I_m} |\lambda|^{\alpha/2}\, w(z^{1/2}\lambda z^{1/2})\, d\lambda\nonumber \\ &=&\frac{1}{\Gamma_m(\alpha/2)} \int_0^s |s-z|^{\alpha/2-d}\, \frac{dz}{|z|^{\alpha/2+d}} \int_0^z w(b)\, |b|^{\alpha/2}\, db\nonumber \\ &=&\frac{1}{\Gamma_m(\alpha/2)} \int_0^s w(b)\, |b|^{\alpha/2}\, u(b,s)\, db,\nonumber \end{eqnarray} where \begin{eqnarray} u(b,s)&=& \int_b^s |s-z|^{\alpha/2-d}\, \frac{dz}{|z|^{\alpha/2+d}} \qquad \text{(set $z=r^{-1}$)}\nonumber \\ &=& \int_{s^{-1}}^{b^{-1}} |sr-I_m|^{\alpha/2-d}\, dr=|s|^{\alpha/2-d}\int_{s^{-1}}^{b^{-1}} |r-s^{-1}|^{\alpha/2-d}\, dr.\nonumber \end{eqnarray} The last integral can be easily computed using the well-known formula for Siegel Beta functions \begin{equation}\int\limits_a^b |r-a|^{\alpha-d}\, |b-r|^{\beta-d}\, dr= B_m(\alpha,\beta)\, |b-a|^{\alpha+\beta-d}\end{equation} (many such formulas can be found, e.g., in \cite{OR2}), and we have \begin{equation} u(b,s)= B_m(\alpha/2, d)\, \frac{|s-b|^{\alpha/2}}{|s|^{d}\, |b|^{\alpha/2}}, \qquad B_m(\alpha/2, d)=\frac{\Gamma_m(\alpha/2)\, \Gamma_m(d)}{\Gamma_m(\alpha/2+d)}.\end{equation} Finally, we get $$ k(s)=\frac{\Gamma_m(d)}{|s|^{d}\,\Gamma_m(\alpha/2+d)} \int_0^s w(b)\, |s-b|^{\alpha/2}\, db=\frac{\Gamma_m(d)}{|s|^{d}}\, I^{\alpha/2+d}_+ w(s)= \Lambda_{\alpha/2}(s).$$ {\bf
Problem C.} Although all the calculations above go through smoothly, the interchange of the order of integration remains unjustified. We do not know how to justify it, or what additional requirements on the wavelet $w$ should be imposed (if any). One of the obstacles is that $\int_0^\infty \neq \int_0^s +\int_s^\infty$ when we integrate over the higher-rank cone. \bigskip \begin{thebibliography}{AB1} \bibitem[Al]{A} I.A. Aliev, \textit{On the Bessel type potentials and associated function spaces}, Preprint, 2007. \bibitem[AB1]{2} I.A. Aliev and S. Bayrakci, \textit{On inversion of B-elliptic potentials by the method of Balakrishnan-Rubin}, Fract. Calc. Appl. Anal., \textbf{1} (1998), 365-384. \bibitem[AB2]{3} \bysame, \textit{On inversion of Bessel potentials associated with the Laplace-Bessel differential operator}, Acta Math. Hungar., \textbf{95} (2002), 125-145. \bibitem[AE1]{4} I.A. Aliev and M. Eryigit, \textit{Inversion of Bessel potentials with the aid of weighted wavelet transforms}, Math. Nachr. \textbf{242} (2002), 27-37. \bibitem[AE2]{5} \bysame, \textit{Wavelet-type transform and Bessel potentials associated with the generalized translation}, Integr. Equation and Operator Theory, \textbf{51} (2005), 303-317. \bibitem[AR1]{6} I.A. Aliev and B. Rubin, \textit{Parabolic potentials and wavelet transforms with the generalized translations}, Studia Mathematica, \textbf{145} (2001), 1-16. \bibitem[AR2]{7} \bysame, \textit{Parabolic wavelet transforms and Lebesgue spaces of parabolic potentials}, Rocky Mountain J. of Math., \textbf{32} (2002), 391-408. \bibitem[AR3]{bh} \bysame, \textit{Spherical harmonics associated to the Laplace-Bessel operator and generalized spherical convolutions}, Anal. Appl. (Singap.) \textbf{1} (2003), 81-109. \bibitem[AR4]{8} \bysame, \textit{Wavelet-like transforms for admissible semi-groups; Inversion formulas for potentials and Radon transforms}, J.
of Fourier Anal. and Appl., \textbf{11} (2005), 333-352. \bibitem[ASE]{9} I.A. Aliev, S. Sezer, and M. Eryigit, \textit{An integral transform associated to the Poisson integral and inversion of Flett potentials}, J. of Math. Anal. and Appl., \textbf{321} (2006), 691-704. \bibitem[Ba]{Ba} R. Bagby, \textit{Lebesgue spaces of parabolic potentials}, Illinois J. Math., \textbf{15} (1971), 610-634. \bibitem[Da]{Da} I. Daubechies, \textit{Ten lectures on wavelets}, CBMS-NSF Series in Appl. Math., SIAM Publ., Philadelphia, 1992. \bibitem[De]{De} J. Delsarte, \textit{Sur une extension de la formule de Taylor}, J. Math. Pures Appl., \textbf{17}, Fasc. III (1938), 213-231. \bibitem[E]{10} L. Ehrenpreis, \textit{The Universality of the Radon Transform}, Clarendon Press, Oxford, 2003. \bibitem[Er]{Er} A. Erd\'elyi (Editor), \textit{Higher transcendental functions}, Vol. II, McGraw-Hill, New York, 1953. \bibitem[FK]{FK} J. Faraut and A. Kor\'anyi, \textit{Analysis on symmetric cones}, Clarendon Press, Oxford, 1994. \bibitem[Fe]{Fe} M. V. Fedorjuk, \textit{Asymptotic behavior of the Green function of a pseudodifferential parabolic equation}, (Russian) Differentsial'nye Uravneniya, \textbf{14} (1978), no. 7, 1296-1301. \bibitem[Fel]{Fel} W. Feller, \textit{An introduction to probability theory and its applications}, Wiley \& Sons, New York, 1971. \bibitem[Fl]{12} T.M. Flett, \textit{Temperatures, Bessel potentials, and Lipschitz spaces}, Proc. London Math. Soc. (3) \textbf{22} (1971), 385-451. \bibitem[GA]{13} A.D. Gadjiev and I.A. Aliev, \textit{Riesz and Bessel potentials generated by a generalized translation and their inverses}, Proc. IV All-Union Winter Conf. Theory of Functions and Approximation, Saratov (Russia), 1988. In the book: Theory of Functions and Approximation, Saratov Univ., 47-53, 1990 (Russian). \bibitem[GGG]{GGG} I. M. Gel'fand, S. G. Gindikin, and M. I.
Graev, \textit{Selected topics in integral geometry}, Translations of Mathematical Monographs, AMS, Providence, Rhode Island, 2003. \bibitem[GRu]{GRu} E. Grinberg and B. Rubin, \textit{Radon inversion on Grassmannians via G{\aa}rding-Gindikin fractional integrals}, Annals of Math. \textbf{159} (2004), 809-843. \bibitem[H]{15} S. Helgason, \textit{The Radon Transform}, Birkh\"{a}user, Boston, 2nd edition, 1999. \bibitem[HO]{16} M. Holschneider, \textit{Wavelets: an analysis tool}, Clarendon Press, Oxford, 1995. \bibitem[Jo]{Jo} B.F. Jones, \textit{Lipschitz spaces and heat equation}, J. Math. Mech., \textbf{18} (1968), 379-410. \bibitem[Ki]{17} I.A. Kipriyanov, \textit{Singular elliptic boundary problems}, Nauka, Fizmatlit, Moscow, 1997 (Russian). \bibitem[Ko]{Ko} A. Koldobsky, \textit{Fourier analysis in convex geometry}, Mathematical Surveys and Monographs, \textbf{116}, AMS, 2005. \bibitem[La]{La} N. S. Landkof, \textit{Several remarks on stable random processes and $\alpha$-superharmonic functions}, (Russian) Mat. Zametki, \textbf{14} (1973), 901-912. \bibitem[Le]{18} B.M. Levitan, \textit{Expansion in Fourier series and integrals in Bessel functions}, Uspekhi Mat. Nauk, \textbf{6} (1951), 102-143 (Russian). \bibitem[Li]{19} P.I. Lizorkin, \textit{The functions of Hirshman type and relations between the spaces $B_{p}^{r}(E_{n})$ and $L_{p}^{r}(E_{n})$}, Mat. Sb. \textbf{63} (1964), 505-535 (Russian). \bibitem[M]{20} Y. Meyer, \textit{Wavelets and operators}, Cambridge Studies in Adv. Math. 37, Cambridge Univ. Press, 1992. \bibitem[Mu]{Mu} R.J. Muirhead, \textit{Aspects of multivariate statistical theory}, John Wiley \& Sons, Inc., New York, 1982. \bibitem[OOR]{OOR} G. \'Olafsson, E. Ournycheva, and B. Rubin, \textit{Multiscale wavelet transforms, ridgelet transforms, and Radon transforms on the space of matrices}, Appl. Comput. Harm.
Anal., \textbf{21} (2006), 182-203. \bibitem[OR1]{OR1} E. Ournycheva and B. Rubin, \textit{The composite cosine transform on the Stiefel manifold and generalized zeta integrals}, Contemporary Math., \textbf{405} (2006), 111-133. \bibitem[OR2]{OR2} \bysame, \textit{Method of mean value operators for Radon transforms in the space of matrices}, Intern. J. Math. (in press). \bibitem[OR3]{OR3} \bysame, \textit{Semyanistyi's integrals and Radon transforms on matrix spaces}, J. of Fourier Anal. and Appl. (in press). \bibitem[PS]{PS} G. Polya and G. Szego, \textit{Aufgaben und Lehrs\"atze aus der Analysis}, Springer-Verlag, Berlin-New York, 1964. \bibitem[PBM]{PBM} A. P. Prudnikov, Y. A. Brychkov, and O. I. Marichev, \textit{Integrals and series: special functions}, Gordon and Breach Sci. Publ., New York-London, 1986. \bibitem[Ra]{Ra} V.R. Gopala Rao, \textit{A characterization of parabolic function spaces}, Amer. J. Math., \textbf{99} (1977), 985-993. \bibitem[R1]{21} B. Rubin, \textit{Fractional integrals and potentials}, Pitman Monographs and Surveys in Pure and Applied Mathematics, \textbf{82}, Longman, Harlow, 1996. \bibitem[R2]{22} \bysame, \textit{The Calder\'{o}n reproducing formula, windowed X-ray transforms and Radon transforms in $L_{p}$-spaces}, The Journal of Fourier Anal. and Appl. \textbf{4} (1998), 175-197. \bibitem[R3]{23} \bysame, \textit{Fractional integrals and wavelet transforms associated with Blaschke-Levy representations on the sphere}, Israel J. Math. \textbf{114} (1999), 1-27. \bibitem[R4]{24} \bysame, \textit{Reconstruction of functions from their integrals over $k$-planes}, Israel J. Math. \textbf{141} (2004), 93-117. \bibitem[R5]{25} \bysame, \textit{The convolution-backprojection method for $k$-plane transforms, and Calder\'{o}n's identity for ridgelet transforms}, Appl. Comput. Harmon. Anal. \textbf{16} (2004), 231-242.
\bibitem[R6]{26} \bysame, \textit{Fractional calculus and wavelet transforms in integral geometry}, Frac. Calc. Appl. Anal. \textbf{1} (1998), no. 2, 193-219. \bibitem[R7]{27} \bysame, \textit{Calderon-type reproducing formula}, in: Encyclopaedia Math., Supplement II, Kluwer, 2000, 104-105; reprinted in Frac. Calc. Appl. Anal., \textbf{3} (2000), 103-106. \bibitem[R8]{R8} \bysame, \textit{Intersection bodies and generalized cosine transforms}, Preprint, 2007, arXiv:0704.0061v2. \bibitem[R9]{R9} \bysame, \textit{Riesz potentials and integral geometry in the space of rectangular matrices}, Advances in Math. \textbf{205} (2006), 549-598. \bibitem[S]{29} S.G. Samko, \textit{Hypersingular integrals and their applications}, Analytical Methods and Special Functions, 5, Taylor and Francis, Ltd., London, 2002. \bibitem[SKM]{28} S.G. Samko, A.A. Kilbas, and O.I. Marichev, \textit{Fractional Integrals and Derivatives: Theory and Applications}, Gordon and Breach Science Publishers, 1993. \bibitem[Sa]{Sa} C.H. Sampson, \textit{A characterization of parabolic Lebesgue spaces}, Dissertation, Rice Univ., 1968. \bibitem[St]{St} E. Stein, \textit{Singular integrals and differentiability properties of functions}, Princeton Univ. Press, Princeton, N.J., 1970. \bibitem[SW1]{30} E. Stein and G. Weiss, \textit{On the theory of harmonic functions of several variables, I. The theory of $H^{p}$ spaces}, Acta Math. \textbf{103} (1960), 25-62. \bibitem[SW2]{31} \bysame, \textit{Introduction to Fourier Analysis on Euclidean Spaces}, Princeton University Press, Princeton, NJ, 1971. \bibitem[Te]{Te} A. Terras, \textit{Harmonic analysis on symmetric spaces and applications}, Vol. II, Springer, Berlin, 1988. \bibitem[Tr]{32} K. Trim\`{e}che, \textit{Generalized Wavelets and Hypergroups}, Gordon and Breach Sci. Publ., New York-London, 1997.
\end{thebibliography} \end{document}
\begin{document} \title{Exact hyperplane covers for subsets of the hypercube} \begin{abstract} Alon and F\"{u}redi (1993) showed that the number of hyperplanes required to cover $\{0,1\}^n\setminus \{0\}$ without covering $0$ is $n$. We initiate the study of such exact hyperplane covers of the hypercube for other subsets of the hypercube. In particular, we provide exact solutions for covering $\{0,1\}^n$ while missing up to four points and give asymptotic bounds in the general case. Several interesting questions are left open. \end{abstract} \section{Introduction} A vector $v\in \mathbb{R}^n$ and a scalar $\alpha\in \mathbb{R}$ determine the hyperplane \[ \{x\in \mathbb{R}^n:\langle v,x\rangle \coloneqq v_1x_1+\dots+v_nx_n=\alpha\} \] in $\mathbb{R}^n$. How many hyperplanes are needed to cover $\{0,1\}^n$? Only two are required; for instance, $\{x:x_1=0\}$ and $\{x:x_1=1\}$ will do. What happens, however, if $0\in \mathbb{R}^n$ is not allowed on any of the hyperplanes? We can `exactly' cover $\{0,1\}^n\setminus \{0\}$ with $n$ hyperplanes: for example, the collections $\{\{x:x_i=1\}:i\in [n]\}$ or $\{\{x:\sum_{i=1}^nx_i=j\}:j \in [n]\}$ can be used, where $[n]:=\{1,2,\ldots,n\}$. Alon and F\"{u}redi \cite{AlonFuredi} showed that in fact $n$ hyperplanes are always necessary. Recently, a variation was studied by Clifton and Huang \cite{CliftonHuang}, in which they require that each point from $\{0,1\}^n\setminus \{0\}$ is covered at least $k$ times for some $k\in \mathbb{N}$ (while $0$ is never covered). Another natural generalisation is to add more than just $0$ to the set of points we wish to avoid in the cover. For $B\subseteq \{0,1\}^n$, an \emph{exact cover} of $B$ is a set of hyperplanes whose union intersects $\{0,1\}^n$ exactly in~$B$ (points from $\{0,1\}^n\setminus B$ are not covered). Let $\ec(B)$ denote the \emph{exact cover number} of $B$, i.e., the minimum size of an exact cover of $B$.
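The two explicit covers above are easy to check mechanically. The following Python sketch (an illustration of ours, not part of the paper; the helper name `exactly_covers` is our own) brute-forces small cubes and confirms that both families of $n$ hyperplanes cover $\{0,1\}^n\setminus\{0\}$ exactly:

```python
from itertools import product

def exactly_covers(n, hyperplanes):
    """Check that the union of the hyperplanes meets {0,1}^n
    exactly in the nonzero points (the origin stays uncovered).
    Each hyperplane is a pair (v, alpha) encoding <v, x> = alpha."""
    covered = {x for x in product((0, 1), repeat=n)
               if any(sum(v[i] * x[i] for i in range(n)) == a
                      for v, a in hyperplanes)}
    target = set(product((0, 1), repeat=n)) - {(0,) * n}
    return covered == target

for n in range(1, 8):
    # the 'layer' cover: sum of coordinates equals j, for j = 1..n
    layers = [((1,) * n, j) for j in range(1, n + 1)]
    # the 'coordinate' cover: x_i = 1 for each i in [n]
    coords = [(tuple(int(k == i) for k in range(n)), 1) for i in range(n)]
    assert exactly_covers(n, layers) and exactly_covers(n, coords)
print("both n-hyperplane covers are exact for n = 1..7")
```

The same helper is convenient for experimenting with the exact cover numbers studied below.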
We will usually write $B$ in the form $\{0,1\}^n \setminus S$ for some subset $S \subseteq \{0,1\}^n$. In particular, the result of Alon and F\"{u}redi \cite{AlonFuredi} states that $\ec(\{0,1\}^n\setminus \{0\})=n$. We first determine what happens if we remove up to four points. \begin{theorem} \label{thm:uptofour} Let $S\subseteq\{0,1\}^n$. \begin{itemize} \item If $|S|\in \{2,3\}$, then $\ec(\{0,1\}^n\setminus S)=n-1$. \item If $|S|=4$, then $\ec(\{0,1\}^n\setminus S)=n-1$ if there is a hyperplane $Q$ with $|Q\cap S|=3$ and $\ec(\{0,1\}^n\setminus S)=n-2$ otherwise. \end{itemize} \end{theorem} The upper bounds are shown by iteratively reducing the dimension of the problem by one using a single `merge coordinates' hyperplane; this allows us to reduce the question to the case $n\leq 7$, which we can handle exhaustively. Since the number of required hyperplanes seems to decrease, a natural question is whether this pattern continues. For $n\in \mathbb{N}$ and $k\in [2^n]$, we also introduce the exact cover numbers \begin{align*} \ec(n,k)&=\max\{\ec(\{0,1\}^n\setminus S):S\subseteq\{0,1\}^n,~ |S|=k\},\\ \ec(n)&=\max\{\ec(B):B\subseteq\{0,1\}^n\}. \end{align*} Our main result concerns the asymptotics of $\ec(n)$ and implies that $\ec(n,k)$ can be much larger than $n$. \begin{theorem} \label{thm:ec:arbitrary} For any positive integer $n$, $2^{n-2}/n^2\leq \ec(n)\leq 2^{n+1}/n$. \end{theorem} The lower bound uses a random construction and the upper bound uses the fact that we can efficiently cover the hypercube with Hamming spheres. We leave open whether $\ec(n,k)\leq n$ when $k$ is sufficiently large with respect to $n$, but can show that $\ec(n,k)$ is always at most a constant (depending on $k$) away from $n$. \begin{theorem} \label{thm:ec:fixed_size} For any positive integer $k$, \[n-\log_2(k) \leq \ec(n,k) \leq n-2^k+\ec(2^k,k).\] \end{theorem} The proof of this theorem uses the same techniques as the proof of Theorem \ref{thm:uptofour}. 
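To make the first bullet of Theorem \ref{thm:uptofour} concrete, here is a small check (ours, for illustration only) of one instance of the $|S|=2$ upper bound, following the construction in the proof of Lemma~\ref{lem:cov:23}: for $S=\{(0,0,0,0),(1,1,0,0)\}\subseteq\{0,1\}^4$, the three hyperplanes $\{x:x_3=1\}$, $\{x:x_4=1\}$ and $\{x:x_1+x_2=1\}$ form an exact cover with $n-1=3$ hyperplanes.

```python
from itertools import product

n = 4
S = {(0, 0, 0, 0), (1, 1, 0, 0)}   # the two removed points
hyperplanes = [
    ((0, 0, 1, 0), 1),             # x3 = 1
    ((0, 0, 0, 1), 1),             # x4 = 1
    ((1, 1, 0, 0), 1),             # x1 + x2 = 1
]
covered = {x for x in product((0, 1), repeat=n)
           if any(sum(v[i] * x[i] for i in range(n)) == a
                  for v, a in hyperplanes)}
# exact cover: everything except the two points of S is hit
assert covered == set(product((0, 1), repeat=n)) - S
print("3 hyperplanes exactly cover {0,1}^4 minus two points")
```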
The problem of determining the asymptotics of $\ec(n)$ was also suggested by F\"{u}redi at Alon's birthday conference in 2016. \section{Covering all but up to four points} \label{sec:up_to_four} In this section, we determine $\ec(\{0,1\}^n\setminus S)$ for subsets $S$ of size 2, 3 and 4. For the lower bounds, we use the following result of Alon and F\"{u}redi \cite{AlonFuredi}. \begin{theorem}[Corollary 1 in \cite{AlonFuredi}] \label{thm:ATcor} If $n\geq m\geq 1$, then $m$ hyperplanes that do not cover all vertices of $\{0,1\}^n$ miss at least $2^{n-m}$ vertices. \end{theorem} For the upper bounds, it suffices to give an explicit construction of a collection of hyperplanes that exactly covers $\{0,1\}^n\setminus S$, for every subset $S$ of size 2, 3 or 4. We split the proof of Theorem \ref{thm:uptofour} into two cases, the case where $|S| \in \{2, 3\}$ and the case where $|S| = 4$. \begin{lemma} \label{lem:cov:23} Let $n \geq 2$ and $S\subseteq\{0,1\}^n$ with $|S|\in \{2,3\}$. Then $\ec(\{0,1\}^n\setminus S)=n-1$. \end{lemma} \begin{proof} For $n=2$ the statement is true, therefore let $n \geq 3$ and $S\subseteq\{0,1\}^n$ with $|S|\in \{2,3\}$. We first prove the lower bound $\ec(\{0,1\}^n\setminus S) \geq n-1$; this follows from applying the case of $m=n-2$ in Theorem \ref{thm:ATcor}. Indeed, this shows that any $n-2$ hyperplanes that do not cover all of $\{0,1\}^n$ miss at least $4$ vertices, and hence a minimum of $n-1$ hyperplanes are required to miss $2$ or $3$ vertices. For the upper bound, note that we may assume by vertex transitivity that $(0,\dots,0)\in S$. Consider first the case $|S|=2$. By relabelling the indices, we may assume the second vector $u$ in $S$ satisfies $\{i\in[n]:u_i=1\}=\{1,\dots,\ell\}$ for some $\ell\in \mathbb{N}$. 
We cover $\{0,1\}^n\setminus S$ by the collection of $n-1$ hyperplanes \[ \{\{x:x_i=1\}:i\in \{\ell+1,\dots,n\}\}\cup \left\{\left\{x:x_1+\dots+x_\ell=j\right\}:j\in [\ell-1]\right\}, \] noting none of these hyperplanes contain an element from $S$. Now consider the case $|S|=3$. We may assume the second and third vectors in $S$ correspond to the subsets $\{1,\dots,a+b\}$ and $\{1,\dots,a\}\cup\{a+b+1,\dots,a+b+c\}$ for some $a,b,c\in \mathbb{Z}_{\geq 0}$ with $a+b\geq 1$ and $c\geq 1$. We first add the $n-(a+b+c)$ hyperplanes of the form $\{x:x_i=1\}$ for $i\in \{a+b+c+1,\dots,n\}$. For $x\in S$, we have \begin{align*} &x_1+\dots+x_a\in \{0,a\},\\ &x_{a+1}+\dots+x_{a+b}\in \{0,b\},\\ &x_{a+b+1}+\dots+x_{a+b+c}\in \{0,c\}. \end{align*} If $a\geq 1$, we add the $a-1$ hyperplanes $\{x:x_1+\dots+x_a=i\}$ for $i\in [a-1]$. Analogously, we add the $b-1$ hyperplanes $\{x: x_{a+1}+\ldots+x_{a+b}=i\}$ for $i\in[b-1]$ if $b\geq 1$, and the $c-1$ hyperplanes $\{x: x_{a+b+1}+\ldots+x_{a+b+c}=i\}$ for $i\in [c-1]$. The only points of $\{0,1\}^n\setminus S$ that are yet to be covered satisfy the equations above and also satisfy $x_i=0$ for $i>a+b+c$. Suppose first that $a,b\geq 1$. In this case we have added $n-3$ hyperplanes so far. The problem has effectively been reduced to covering $\{0,1\}^3$ with three missing points $(0,0,0), (1,1,0)\text{ and } (1,0,1)$ using $2$ hyperplanes. Indeed, we may add the following two hyperplanes to our collection in order to exactly cover $\{0,1\}^n\setminus S$: \begin{align*} &\left\{x:\frac{x_1+\dots+x_a}a+\frac{x_{a+1}+\dots+x_{a+b}}b+\frac{x_{a+b+1}+\dots+x_{a+b+c}}c=1\right\},\\ &\left\{x:\frac{x_{a+1}+\dots+x_{a+b}}b+\frac{x_{a+b+1}+\dots+x_{a+b+c}}c=2\right\}. \end{align*} Suppose now that $a=0$ or $b=0$. Since $a+b\geq 1$ and $c\geq 1$, we have used $n-2$ hyperplanes so far.
If $a=0$, we may add the hyperplane \[ \left\{x:\frac{x_{1}+\dots+x_{b}}b+\frac{x_{b+1}+\dots+x_{b+c}}c=2\right\} \] and, if $b=0$, we add \[ \left\{x:-\frac{x_1+\dots+x_a}a+\frac{x_{a+1}+\dots+x_{a+c}}c=1\right\}. \] In either case, the resulting collection covers $\{0,1\}^n\setminus S$ without covering any point in $S$. \end{proof} For the case of four missing points, we always need at least $n-2$ hyperplanes by Theorem \ref{thm:ATcor}. For $n=3$, we may need either $1$ or $2$ hyperplanes. For example, we may exactly cover $\{0,1\}^3\setminus (\{0\}\times \{0,1\}^2)$ by the single hyperplane $\{x:x_1=1\}$, but if $S$ does not lie on a hyperplane then we need two hyperplanes. The set $\{0\}\times \{0,1\}^2$ has the special property that there is no hyperplane that covers three of its points without covering the fourth. It turns out this condition is exactly what decides how many hyperplanes are required when removing four points. \begin{lemma} \label{lem:cov:4} Let $S\subseteq\{0,1\}^n$ with $|S|=4$. Then $\ec(\{0,1\}^n\setminus S)=n-1$ if there is a hyperplane $Q$ with $|Q\cap S|=3$ and $\ec(\{0,1\}^n\setminus S)=n-2$ otherwise. \end{lemma} \begin{proof} We know that $\ec(\{0,1\}^n\setminus S)\geq n-2$ from Theorem \ref{thm:ATcor}. If there is a hyperplane $Q$ intersecting $S$ in exactly three points, then $\ec(\{0,1\}^n\setminus S)\geq n-1$. Indeed, by vertex transitivity, we may assume that $0$ is the point of $S$ uncovered by $Q$. Any exact cover of $\{0,1\}^n\setminus S$ can be extended to an exact cover of $\{0,1\}^n\setminus \{0\}$ by adding the hyperplane $Q$ to the collection. We prove the claimed upper bounds by induction on $n$, handling the case $n\leq 7$ by computer search. Again, we may assume that $0\in S$. Let $u,v,w$ denote the other three vectors in $S$. For any $i$ with $u_i=v_i=w_i=0$, we can use a hyperplane of the form $\{x:x_i=1\}$ to reduce the covering problem to one of a lower dimension.
(Note that dropping the coordinate $i$ in this case does not change whether three points in $S$ can be covered without covering the fourth.) Hence we may assume by induction that no such $i$ exists. After possibly permuting coordinates, we assume that $u_i=v_i=w_i=1$ on the first $a$ coordinates, $u_i=v_i=1$ and $w_i=0$ on the $b$ coordinates after that, and so on, i.e., sorted by decreasing Hamming weight and lexicographically within the same weight. In other words, our four vectors take the form \begin{equation} \label{eq:VennForm} \begin{pmatrix} 0\\ u\\ v\\ w\\ \end{pmatrix}= \begin{pmatrix} 0& 0 & 0& 0& 0 & 0& 0\\ 1 & 1 & 1& 0 &1 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 &1 & 0\\ 1 & 0 & 1 & 1 & 0 & 0 & 1\\ \end{pmatrix} ,\end{equation} where each column may be replaced with $0$ or more columns of its type. Since $n>7$, by the pigeonhole principle one of the columns must be repeated at least twice. We will show how to handle the case for which this is the first column (i.e. $a\geq 2$); the other cases are analogous. Our collection of hyperplanes will contain the hyperplanes \begin{equation} \label{eq:ec:a} \{\{x:x_1+\dots+x_a=i\}:i\in [a-1]\}. \end{equation} The only points $x$ which have yet to be covered have the property that $x_i$ takes the same value in $\{0,1\}$ for all $i\in [a]$. We now proceed similarly to the proof of Lemma \ref{lem:cov:23}. Informally, we wish to `merge' the first $a$ coordinates and then apply the induction hypothesis. For each $s\in S$, we define $\pi(s)=(s_{a},\dots,s_n)$. Let $\pi(S)=\{\pi(s):s\in S\}$. Then $|S|=|\pi(S)|=4$. Any hyperplane \[ P=\{y:v_1y_1+\dots+v_{n-a+1}y_{n-a+1}=\alpha\} \] in $\{0,1\}^{n-a+1}$ can be used to define a hyperplane \[ L(P)= \left\{x:v_1\frac{x_1+\dots+x_a}a+v_2x_{a+1}+\dots +v_{n-a+1} x_n=\alpha\right\} \] in $\{0,1\}^n$. For all $x\in \{0,1\}^n$ with $\sum_{i=1}^a x_i\in \{0,a\}$, we find that $\pi(x)\in P$ if and only if $x\in L(P)$. 
This shows that if $P_1,\dots,P_M$ form an exact cover for $\{0,1\}^{n-a+1}\setminus \pi(S)$, then $L(P_1),\dots,L(P_M)$, together with the hyperplanes from $(\ref{eq:ec:a})$, form an exact cover for $\{0,1\}^n\setminus S$. This proves \[ \ec(\{0,1\}^n\setminus S)\leq \ec(\{0,1\}^{n-a+1}\setminus \pi(S))+a-1. \] Since there is a hyperplane covering three points in $S$ without covering the fourth if and only if this is the case for $\pi(S)$, we find the claimed upper bounds by induction. Observe that the proof reduction works also in the case $n\leq 7$ if there are at least two coordinates of the same type in (\ref{eq:VennForm}). Thus, the computer verification is needed only in the case when each column in (\ref{eq:VennForm}) appears at most once. The code used to check the small cases is attached to the arXiv submission at \url{https://arxiv.org/abs/2010.00315}. \end{proof} Another natural variant on the original Alon-F\"{u}redi problem is to ask for the exact cover number of a single layer of a hypercube without one point. It turns out this can be easily solved by translating it to the original problem. \begin{proposition} Let $n\in \mathbb{N}$ and $i\in \{0,\dots,n\}$. Let $B$ be obtained by removing a single point from the $i$-th layer $\{x\in \{0,1\}^n:x_1+ \ldots + x_n=i\}$. Then $\ec(B)=\min\{i,n-i\}$. \end{proposition} \begin{proof} We may assume that $i\leq n/2$ and that $b=(1,\dots,1,0,\dots,0)$ is the missing point. The upper bound follows by taking the hyperplanes \[ \{\{x:x_1+\dots+x_i=j\}:j\in \{0,\dots,i-1\}\}. \] For the lower bound, we claim that we may find a cube of dimension $i$ within the $i$-th layer for which $b$ plays the role of the origin. Indeed, consider the affine map \[ \iota:\{0,1\}^i\to \{0,1\}^n: x \mapsto (1-x_1,1-x_2,\dots,1-x_i,0,\dots,0,x_i,x_{i-1},\dots,x_1). \] That is, we view the point $b$ as the origin and take the directions of the form $(-1,0,\dots,0,1)$, $(0,-1,0,\dots,0,1,0)$, etcetera, as the axes of the cube. 
Now $(B\setminus \{b\}) \cap \iota(\{0,1\}^i) = \iota(\{0,1\}^i\setminus\{0\})$, and hence we may convert any exact cover for $B\setminus\{b\}$ to an exact cover for $\{0,1\}^i\setminus\{0\}$. The lower bound follows from the result of Alon and F\"{u}redi \cite{AlonFuredi}. \end{proof} \section{Asymptotics} \label{sec:ec:asympt} We first consider the asymptotics of $\ec(n,k)$ when $k$ is held fixed. For the upper bound, we prove the following lemma. \begin{lemma} \label{lem:ec:k} For all $k\in \mathbb{N}$ and $n\geq 2^{k-1}$, $\ec(n,k)\leq 1+\ec(n-1,k)$. \end{lemma} \begin{proof} Fix $k\in \mathbb{N}$, $n\geq 2^{k-1}$ and a subset $S\subseteq \{0,1\}^n$ of size $|S|=k$. For any $i\in [n]$, let $S_{-i}\subseteq\{0,1\}^{n-1}$ be obtained from $S$ by deleting coordinate $i$ from each element of $S$. We claim that there exists an $i\in [n]$ such that $|S_{-i}|=k$ and \begin{equation} \label{eq:ec:i} \ec(\{0,1\}^n\setminus S)\leq 1+\ec(\{0,1\}^{n-1}\setminus S_{-i}). \end{equation} The lemma follows immediately from this claim. By vertex transitivity, we may assume that $0\in S$. Suppose first that there exists $i\in[n]$ for which $s_i=0$ for all $s\in S$. Then $|S_{-i}|=k$. From an exact cover for $\{0,1\}^{n-1}\setminus S_{-i}$, we may obtain an exact cover for $\{x \in \{0,1\}^n\setminus S:x_i=0\}$. Combining with the hyperplane $\{x:x_i=1\}$, this gives an exact cover for $\{0,1\}^n\setminus S$. This proves (\ref{eq:ec:i}). We henceforth assume that $0\in S$ and that the remaining $k-1$ elements of~$S$ cannot all be $0$ on the same coordinate. Hence there are at most $2^{k-1}-1$ possible values that $(s_i:s\in S)$ can take for $i\in [n]$. Since $n\geq 2^{k-1}$, by the pigeonhole principle, there must exist coordinates $1\leq i<j\leq n$ with $s_i=s_j$ for all $s\in S$. This implies that $|S_{-i}|=|S|=k$. We now show (\ref{eq:ec:i}) is satisfied. After permuting coordinates, we may assume that $(i,j)=(1,2)$. 
An exact cover for $\{0,1\}^{n-1}\setminus S_{-1}$ is converted to an exact cover for $\{0,1\}^n\setminus S$ as in the proof of Lemma \ref{lem:cov:4}: any hyperplane of the form \[ P=\{y:v_1y_1+\dots +v_{n-1}y_{n-1}=\alpha\} \] is converted to \[ L(P)=\left\{x:v_1\frac{x_1+x_2}2+v_2x_3+\dots +v_{n-1}x_n=\alpha\right\}, \] and we add the hyperplane $\{x:x_1+x_2=1\}$ to the adjusted collection. \end{proof} It is now easy to prove that $\ec(n,k)=n+\Theta_k(1)$. \begin{proof}[Proof of Theorem \ref{thm:ec:fixed_size}] Let $k\in \mathbb{N}$. We need to prove that for all $n\geq 2^k$, \[ n-\log_2(k)\leq \ec(n,k)\leq n-2^k+\ec(2^k,k). \] The upper bound is vacuous for $n=2^k$ and follows from $n-2^k$ applications of Lemma \ref{lem:ec:k} for $n>2^k$. The lower bound follows from Theorem \ref{thm:ATcor}: if $n-\ell$ hyperplanes cover all but $k$ vertices, then $k\geq 2^\ell$, and hence $n-\ell\geq n-\log_2(k)$. (In fact, this shows $\ec(\{0,1\}^n\setminus S)\geq n-\log_2(k)$ for each subset $S\subseteq\{0,1\}^n$ of size $k$.) \end{proof} We now turn to the problem of comparing exact cover numbers for sets $S$ of different sizes. We use two auxiliary lemmas. For the lower bound, we use a probabilistic argument for which we need to know the approximate number of intersection patterns of the hypercube. An \textit{intersection pattern} of $\{0,1\}^n$ is a non-empty subset $P\subseteq \{0,1\}^n$ for which there exists a hyperplane $H$ with $H\cap \{0,1\}^n=P$. \begin{lemma} \label{lem:q_n_num_int_patterns} $\{0,1\}^n$ has at most $2^{n^2}$ possible intersection patterns. \end{lemma} \begin{proof} We will associate each intersection pattern with a unique element from $(\{0,1\}^n)^n$. Let $P\subseteq \{0,1\}^n$ be an intersection pattern with $P= H\cap \{0,1\}^n$ for $H$ a hyperplane. Then $|P|<2^n$. Let $x \in P$ be such that $\sum_{i=1}^nx_i2^i$ is minimal. Let $\oplus$ denote coordinate-wise addition modulo $2$ and write $x\oplus P=\{x\oplus p:p\in P\}\subseteq \{0,1\}^n$.
Note that $0\in x\oplus P$ since $x\in P$, and that $x\oplus P$ is the intersection of a linear subspace of dimension $n-1$ with $\{0,1\}^n$. (The linear subspace can be obtained from $H$ by a series of reflections.) We greedily find $0\leq k\leq n-1$ linearly independent vectors $v_1,\dots,v_k\in x\oplus P$ whose linear span intersects $\{0,1\}^n$ in $x \oplus P$. We label $P$ with the $n$-tuple $(x,v_1,\dots,v_k,0,\dots,0)$, where we added $n-1-k$ copies of the vector $0$ at the end of the tuple. This associates each intersection pattern to a unique element from $(\{0,1\}^n)^n$. \end{proof} The above proof is rather crude, but in fact not far from the truth: the number of possible intersection patterns is $2^{(1+o(1))n^2}$ (see e.g. \cite[Lemma 4.3]{Baldi}). We also use an auxiliary result for the upper bound. The \textit{total domination number} of a graph $G$ is the minimum cardinality of a subset $D\subseteq V(G)$ such that each $v\in V(G)$ has a neighbour in $D$. \begin{lemma}[Theorem 5.2 in \cite{totaldomhypercube}] \label{lem:total_dom_num} The total domination number of the hypercube is at most $2^{n+1}/n$ for any $n \geq 1$. \end{lemma} Note that this bound must be close to tight since the hypercube is a regular graph of degree $n$, so any total dominating set has cardinality at least $2^n/n$. We are now ready to prove $2^{n-2}/n^2\leq \ec(n) \leq 2^{n+1}/n$. \begin{proof}[Proof of Theorem \ref{thm:ec:arbitrary}] For the lower bound, we need to give a subset $B\subseteq \{0,1\}^n$ that is difficult to cover exactly. We will find a subset $S$ for which all ``large" intersection patterns have a non-empty intersection with $S$. This means that to cover $\{0,1\}^n\setminus S$, we can only use hyperplanes with ``small" intersection patterns. We take a subset $S\subseteq \{0,1\}^n$ at random by including each point independently with probability $1/2$. Note that the lower bound is trivial for $n \leq 8$, so we may assume that $n > 8$. 
For any fixed intersection pattern $P$, the probability that it is disjoint from our random set $S$ is $\left(\frac12\right)^{|P|}$. By Lemma \ref{lem:q_n_num_int_patterns}, there are at most $2^{n^2}$ possible intersection patterns. Hence, by the union bound, the probability that there is an intersection pattern which has at least $2n^2$ elements and does not intersect with $S$, is at most $2^{n^2}\left(\frac12\right)^{2n^2}$, which is smaller than $1/2$ for $n \geq 2$. With probability at least $1/2$, our random set $S$ has at most $2^{n-1}$ points. Hence, there exists a subset $S$ of size $2^{n-1}$ that `hits' all intersection patterns of size at least $2n^2$. Any exact cover for $\{0,1\}^n \setminus S$ consists entirely of hyperplanes whose intersection pattern has size at most $2n^2$, and hence needs at least $|\{0,1\}^n\setminus S|/2n^2=2^{n-2}/n^2$ hyperplanes. We now prove the upper bound. The Hamming distance on $\{0,1\}^n$ is given by $d(x,y)=\sum_{i=1}^n |x_i-y_i|$. A Hamming sphere of radius 1 around a point $x\in \{0,1\}^n$ is given by $S(x)=\{y\in \{0,1\}^n:d(x,y)=1\}$. We claim that any subset of a Hamming sphere is an intersection pattern. Since the cube is vertex-transitive, it suffices to prove our claim for $S(0)$. The hyperplane $\{x:\sum_{i=1}^n x_i=1\}$ intersects $\{0,1\}^n$ in $S(0)$. Intersecting that hyperplane with hyperplanes of the form $\{x:x_j=0\}$ gives a lower-dimensional affine subspace, and we can construct such a subspace which intersects $S(0)$ in any subset we desire. In order to turn the affine subspace into a hyperplane with the same intersection pattern, we may add generic directions that do not yield new points in the hypercube (e.g. consider adding $(1,\pi,0,\dots,0)$). This proves each subset of a Hamming sphere is an intersection pattern. The hypercube has total domination number at most $2^{n+1}/n$ by Lemma~\ref{lem:total_dom_num}. Hence, we can find a subset $D$ of the cube such that each vertex has a neighbour in $D$. 
In particular, there are $M\leq 2^{n+1}/n$ Hamming spheres centered on the vertices in $D$ that cover the cube. For any $B\subseteq \{0,1\}^n$, we write $B=B_1\cup \dots \cup B_M$ such that each $B_i$ is covered by at least one of the Hamming spheres. This means that each $B_i$ is an intersection pattern, and we may cover $B$ exactly using $M$ hyperplanes. This gives the desired exact cover of $B$ with at most $2^{n+1}/n$ hyperplanes. \end{proof} Noga Alon pointed out the following improvement on the constant of the lower bound in Theorem~\ref{thm:ec:arbitrary}. There are at most $2^{n^2}$ possible intersection patterns by Lemma~\ref{lem:q_n_num_int_patterns}, so if every nonempty $B\subseteq \{0,1\}^n$ can be achieved by taking a union of $x$ of them, then $2^{n^2x}\geq 2^{2^n} -1$. The left-hand side of this inequality is even and the right-hand side is odd, hence $2^{n^2x}\geq 2^{2^n}$ and so $x\geq \frac{2^{n}}{n^2}$. \section{Conclusion} \label{sec:ec:concl} Based on the fact that $\ec(n,k)\leq n$ for $k=1,2,3,4$, one might hope to prove that in fact $\ec(n,k)\leq n+C$ for some constant $C>0$ (independent of~$k$). However, this is not true in general by Theorem \ref{thm:ec:arbitrary}. A~natural question is then whether it becomes true for $n$ sufficiently large when $k$ is fixed. \begin{problem} Is there a constant $C>0$, such that for any $k\in \mathbb{N}$ there exists an $n_0(k)\in \mathbb{N}$ such that $\ec(n,k)\leq n+C$ for all $n\geq n_0(k)$? \end{problem} In an earlier version of this paper \cite{AaronsonGGJK20}, we conjectured that for any $S\subseteq \{0,1\}^r$ and $n\in \mathbb{N}$ with $n\geq r$, \[ \ec(\{0,1\}^{n}\setminus (S\times\{0\}^{n-r}))=\ec(\{0,1\}^r\setminus S) +n-r. \] This would have given a negative answer to the problem above, but the counterexample $S = \{1000, 1111,1001,1011,0110,0001,0010,0111\}$ when $n = 6$ was given by Adam Zsolt Wagner \cite{Adam}.
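As a computational aside, the first step of the claim used in the upper-bound proof above, namely that the hyperplane $\{x:\sum_i x_i=1\}$ meets $\{0,1\}^n$ exactly in the Hamming sphere $S(0)$, and that adding constraints $x_j=0$ carves out any desired subset of $S(0)$, can be checked directly for small $n$. The following is an illustrative sketch (the variable names are ours, not from the text):

```python
from itertools import product

n = 5
cube = list(product((0, 1), repeat=n))

# S(0): points at Hamming distance exactly 1 from the origin.
sphere = {x for x in cube if sum(abs(xi) for xi in x) == 1}
# The hyperplane {x : sum(x_i) = 1} intersected with the cube.
plane = {x for x in cube if sum(x) == 1}
assert plane == sphere

# Adding constraints x_j = 0 for j outside an arbitrary index set J
# keeps exactly the unit vectors supported on J.
J = {0, 2, 3}
subspace = {x for x in cube
            if sum(x) == 1 and all(x[j] == 0 for j in range(n) if j not in J)}
assert subspace == {x for x in sphere if x.index(1) in J}
```

Only the step lifting the affine subspace back to a single hyperplane (via a generic direction such as $(1,\pi,0,\dots,0)$) is not exercised by this check.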
One approach to improving the lower bound in Theorem \ref{thm:ec:arbitrary} is to try to prove that, for some $\varepsilon\in (0,1)$, the number of intersection patterns of size at least $n^{1+\varepsilon}$ is $O(2^{n^{1+\varepsilon}})$. Unfortunately, this is false: there are $2^{(1+o(1))n^2}$ possible intersection patterns of size at least $n^2$. This can be seen by considering intersection patterns of the form $\{0,1\}^{\log_2(n^2)}\times B$ for $B\subseteq\{0,1\}^{n-\log_2(n^2)}$. (If $B$ is a non-empty intersection pattern, then $\{0,1\}^{\log_2(n^2)}\times B$ is an intersection pattern containing at least $n^2$ points.) On the other hand, by taking every other layer we may intersect each `axis-aligned subcube' of the form $\{0,1\}^a\times \{x\}$, ensuring that no such intersection pattern can be used in a cover. However, there is a more general type of subcube to consider. We say a subset $A\subseteq \{0,1\}^n$ of size $|A|=2^d$ forms a $d$-dimensional \emph{subcube} if there are vectors $u,w_1,\dots,w_d\in \mathbb{R}^n$ such that \[ A=\{u+\alpha_1w_1+\dots+\alpha_d w_d: \alpha_1,\dots,\alpha_d \in \{0,1\}\}. \] A solution to the following problem might help improve either the upper or lower bound of Theorem \ref{thm:ec:arbitrary}. \begin{problem} Fix $n,d \in \mathbb{N}$. What is the smallest cardinality of a subset $S\subseteq\{0,1\}^n$ for which $A\cap S\neq \emptyset$ for all $d$-dimensional subcubes $A \subseteq \{0,1\}^n$? \end{problem} This is of a similar flavour to a problem proposed by Alon, Krech and Szab\'{o}~\cite{AlonKrechSzabo}, who asked instead for the asymptotics of the above problem when the cubes have to be axis-aligned. A $d$-dimensional axis-aligned subcube is of the form $\{0,1\}^d\times\{x\}$ after permuting coordinates. Let $g(n,d)$ denote the minimal cardinality of a subset that hits all $d$-dimensional axis-aligned subcubes in $\{0,1\}^n$.
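For tiny parameters, $g(n,d)$ can be computed by exhaustive search. The following illustrative sketch (function names are ours) enumerates all axis-aligned $d$-dimensional subcubes and finds a minimum hitting set by brute force; for instance, two antipodal vertices hit all six 2-dimensional faces of the 3-cube, so $g(3,2)=2$:

```python
from itertools import combinations, product

def axis_subcubes(n, d):
    """All d-dimensional axis-aligned subcubes of {0,1}^n, as frozensets of points."""
    cubes = []
    for free in combinations(range(n), d):          # which d coordinates vary
        fixed = [i for i in range(n) if i not in free]
        for vals in product((0, 1), repeat=n - d):  # values of the fixed coordinates
            cube = set()
            for bits in product((0, 1), repeat=d):
                x = [0] * n
                for i, b in zip(free, bits):
                    x[i] = b
                for i, v in zip(fixed, vals):
                    x[i] = v
                cube.add(tuple(x))
            cubes.append(frozenset(cube))
    return cubes

def g(n, d):
    """Smallest hitting set for all d-dimensional axis-aligned subcubes (brute force)."""
    points = list(product((0, 1), repeat=n))
    cubes = axis_subcubes(n, d)
    for size in range(len(points) + 1):
        for S in map(set, combinations(points, size)):
            if all(S & c for c in cubes):
                return size

assert g(3, 2) == 2   # e.g. {(0,0,0), (1,1,1)} meets all six faces
assert g(2, 1) == 2
```

This is exponential in $2^n$ and only feasible for very small $n$, but it is handy for checking conjectured values.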
The best-known asymptotic bounds for $g(n,d)$ are from~\cite{AlonKrechSzabo}: \[ \frac{\log_2(d)}{2^{d+2}}\leq \lim_{n\to \infty}\frac{g(n,d)}{2^n} \leq \frac1{d+1}. \] Finally, we remark that we have already seen these subcubes come up in Lemma \ref{lem:cov:4} as well: the sets $S\subseteq \{0,1\}^n$ of size 4 with $\ec(\{0,1\}^n\setminus S)=n-2$ are exactly the 2-dimensional subcubes. \end{document}
\begin{document} \mainmatter \title{Space Saving by Dynamic Algebraization} \author{Martin F\"{u}rer \and Huiwen Yu} \institute{Department of Computer Science and Engineering\\ The Pennsylvania State University, University Park, PA, USA.\\ \emailpsu } \maketitle \begin{abstract} Dynamic programming is widely used for exact computations based on tree decompositions of graphs. However, the space complexity is usually exponential in the treewidth. We study the problem of designing efficient dynamic programming algorithms based on tree decompositions in polynomial space. We show how to construct a tree decomposition and extend the algebraic techniques of Lokshtanov and Nederlof \cite{savespace2010} such that the dynamic programming algorithm runs in time $O^*(2^h)$, where $h$ is the maximum number of vertices in the union of bags on a root-to-leaf path of the given tree decomposition, a parameter closely related to the tree-depth of a graph \cite{treedepth}. We apply our algorithm to the problem of counting perfect matchings on grids and show that it outperforms other polynomial-space solutions. We also apply the algorithm to other set covering and partitioning problems. \keywords{Dynamic programming, tree decomposition, space-efficient algorithm, exponential time algorithms, zeta transform} \end{abstract} \section{Introduction} Exact solutions to NP-hard problems typically adopt a branch-and-bound, inclusion/exclusion or dynamic programming framework. While algorithms based on branch-and-bound or inclusion/exclusion techniques \cite{polyspace13} have been shown to be both time and space efficient, one problem with dynamic programming is that for many NP-hard problems it requires exponential space to store the computation table. As in practice programs usually run out of space before they run out of time \cite{openproblem}, an exponential-space algorithm is considered not scalable.
Lokshtanov and Nederlof \cite{savespace2010} have recently shown that algebraic tools like the zeta transform and M\"{o}bius inversion \cite{mobiusorigin,stanley2000enumerative} can be used to obtain space-efficient dynamic programming under some circumstances. The idea is sometimes referred to as the coefficient extraction technique, which also appears in \cite{spaceicalp08,spaceicalp09}. The principle of space saving is best illustrated with the better known Fourier transform. Assume we want to compute a sequence of polynomial additions and multiplications modulo $x^n-1$. We can either use a linear amount of storage and do many complicated convolution operations throughout, or we can start and end with the Fourier transforms and do the simpler component-wise operations in between. Because we can handle one component after another during the main computation, very little space is needed. This principle works for the zeta transform and subset convolution \cite{fouriermobius} as well. In this paper, we study the problem of designing polynomial-space dynamic programming algorithms based on tree decompositions. Lokshtanov et al. \cite{kpath} have also studied polynomial-space algorithms based on tree decompositions; they employ a divide and conquer approach. For a general introduction to tree decompositions, see the survey \cite{discovertw}. It is well known that dynamic programming has wide applications and produces prominent results for efficient computations on path decompositions or tree decompositions in general \cite{dptw}. Tree decompositions are very useful on low-degree graphs, as such graphs are known to have relatively low pathwidth \cite{pathwidth}. For example, it is known that any degree-3 graph on $n$ vertices has a path decomposition of pathwidth approximately $\frac{n}{6}$.
As a consequence, the minimum dominating set problem can be solved in time $O^*(3^{n/6})$\footnote{$O^*$ notation hides the polynomial factors of the expression.}, which is the best known running time in this case \cite{mindominatingset}. However, the algorithm trades large space usage for fast running time. To tackle the high space complexity issue, we extend the method of \cite{savespace2010} in a novel way to problems based on tree decompositions. In contrast to \cite{savespace2010}, here we do not have a fixed ground set and cannot do the transformations only at the beginning and the end of the computation. The underlying set changes continuously, so a direct application to tree decompositions does not lead to an efficient algorithm. We introduce the new concept of zeta transforms for dynamic sets. Guided by a tree decomposition, the underlying set (of vertices in a bag) gradually changes. We adapt the transform so that it always corresponds to the current set of vertices. In this way, we might greatly expand the applicability of the space saving method by algebraization. We broadly explore problems which fit into this framework. In particular, we analyze the problem of counting perfect matchings on grids, which is an interesting problem in statistical physics \cite{monomer}. There is no previous theoretical analysis of the performance of any algorithm for counting perfect matchings on grids of dimension at least 3. We analyze two other natural types of polynomial-space algorithms, the branching algorithm and the dynamic programming algorithm based on a path decomposition of a subgraph \cite{pathwidthsparsegraph}, and we show that our algorithm outperforms both approaches. Our method is particularly useful when the treewidth of the graph is large. For example, grids, $k$-nearest-neighbor graphs \cite{knn} and low-degree graphs are important graphs in practice with large treewidth.
In these cases, the standard dynamic programming on tree decompositions requires exponential space. The paper is organized as follows. In Section 2, we summarize the basics of tree decompositions and the related techniques of \cite{savespace2010}. In Section 3, we present the framework of our algorithm. In Section 4, we study the problem of counting perfect matchings on grids and extend our algorithmic framework to other problems. \section{Preliminaries} \subsection{Saving space using algebraic transformations} Lokshtanov and Nederlof \cite{savespace2010} introduce algebraic techniques to solve three types of problems. The first technique is using discrete Fourier transforms (DFT) on problems with very large domains, e.g., for the subset sum problem. The second one is using M\"{o}bius and zeta transforms when the recurrences used in dynamic programming can be formulated as subset convolutions, e.g., for the unweighted Steiner tree problem. The third one is to solve the minimization version of the second type of problems by combining the above transforms, e.g., for the traveling salesman problem. For the purposes of this paper, we explain the techniques used in the second type of problems. Given a universe $V$, let $\mathcal{R}$ be a ring and consider functions from $2^V$ to $\mathcal{R}$. Denote the collection of such functions by $\mathcal{R}[2^V]$. A singleton $f_A$ is an element of $\mathcal{R}[2^V]$ with $f_A[X]$ zero unless $X=A$. The operator $\oplus$ is pointwise addition and the operator $\odot$ is pointwise multiplication. We first define some useful algebraic transforms. The {\it zeta transform} of a function $f\in \mathcal{R}[2^V]$ is defined to be \begin{equation} \label{zeta} \zeta f[Y] = \sum_{X\subseteq Y} f[X]. \end{equation} The {\it M\"{o}bius transform/inversion} \cite{mobiusorigin,stanley2000enumerative} of $f$ is defined to be \begin{equation} \label{mobius} \mu f[Y]=\sum_{X\subseteq Y} (-1)^{|Y\setminus X|}f[X].
\end{equation} The M\"{o}bius transform is the inverse transform of the zeta transform, as they have the following relation \cite{mobiusorigin,stanley2000enumerative}: \begin{equation} \label{zetamobius} \mu(\zeta f)[X]=f[X]. \end{equation} The high level idea of \cite{savespace2010} is that, rather than directly computing $f[V]$ by storing exponentially many intermediate results $\{f[S]\}_{S\subseteq V}$, they compute the zeta transform of $f[S]$ using only polynomial space. $f[V]$ can be obtained by M\"{o}bius inversion (\ref{mobius}) as $f[V]=\sum_{X\subseteq V} (-1)^{|V\setminus X|}(\zeta f)[X]$. Problems which can be solved in this manner have a common nature. They have recurrences which can be formulated by subset convolutions. The {\it subset convolution} \cite{fouriermobius} is defined to be \begin{equation} \label{subsetconvolution} f*_{\mathcal{R}} g[X]=\sum_{X'\subseteq X} f(X')g(X\setminus X'). \end{equation} To apply the zeta transform to $f*_{\mathcal{R}}g$, we need the {\it union product} \cite{fouriermobius} which is defined as \begin{equation} \label{unionproduct} f*_u g[X]=\sum_{X_1\bigcup X_2=X} f(X_1)g(X_2). \end{equation} The relation between the union product and the zeta transform is as follows \cite{fouriermobius}: \begin{equation} \label{unionproductzeta} \zeta(f*_u g)[X]= (\zeta f)\odot(\zeta g)[X]. \end{equation} In \cite{savespace2010}, functions over $(\mathcal{R}[2^V];\oplus,*_{\mathcal{R}})$ are modeled by arithmetic circuits. Such a circuit is a directed acyclic graph where every node is either a singleton (constant gate), a $\oplus$ gate or a $*_{\mathcal{R}}$ gate. Given any circuit $C$ over $(\mathcal{R}[2^V];\oplus,*_{\mathcal{R}})$ which outputs $f$, every gate in $C$ computing an output $a$ from its inputs $b,c$ is replaced by small circuits computing a relaxation $\{a^i\}_{i=1}^{|V|}$ of $a$ from relaxations $\{b^i\}_{i=1}^{|V|}$ and $\{c^i\}_{i=1}^{|V|}$ of $b$ and $c$ respectively. 
(A {\it relaxation} of a function $f\in\mathcal{R}[2^V]$ is a sequence of functions $\{f^i:f^i\in\mathcal{R}[2^V], 0\leq i\leq |V| \}$, such that $\forall i, X\subseteq V$, $f^i[X]=f[X]$ if $i=|X|$, $f^i[X]=0$ if $i<|X|$, and $f^i[X]$ is an arbitrary value if $i>|X|$.) For a $\oplus$ gate, replace $a=b\oplus c$ by $a^i=b^i\oplus c^i$, for $0\leq i\leq |V|$. For a $*_{\mathcal{R}}$ gate, replace $a=b *_{\mathcal{R}} c$ by $a^i=\sum_{j=0}^i b^j *_u c^{i-j}$, for $0\leq i\leq |V|$. This new circuit $C_1$ over $(\mathcal{R}[2^V];\oplus,*_u)$ is of size $O(|C|\cdot |V|)$ and outputs $f^{|V|}[V]=f[V]$. The next step is to replace every $*_u$ gate by a gate $\odot$ and every constant gate $a$ by $\zeta a$. This turns $C_1$ into a circuit $C_2$ over $(\mathcal{R}[2^V]; \oplus, \odot)$, such that for every gate $a\in C_1$, the corresponding gate in $C_2$ outputs $\zeta a$. Since additions and multiplications in $C_2$ are pointwise, $C_2$ can be viewed as $2^{|V|}$ disjoint circuits $C^Y$ over $(\mathcal{R}; +, \cdot)$, one for every subset $Y\subseteq V$. The circuit $C^Y$ outputs $(\zeta f)[Y]$. It is easy to see that the construction of every $C^Y$ takes polynomial time. As all problems of interest in this paper work over the integer domain $\mathbb{Z}$, we consider $\mathcal{R}=\mathbb{Z}$ and replace $*_{\mathcal{R}}$ by $*$ for simplicity. Assuming $0 \leq f[V] < m$ for some integer $m$, we can view the computation as taking place in the finite ring $\mathbb{Z}_{m}$. Additions and multiplications can be implemented efficiently on $\mathbb{Z}_{m}$ (e.g., using the fast algorithm of \cite{multiplication} for multiplication). \begin{theorem}[Theorem 5.1 \cite{savespace2010}] \label{thmsavespace} Let $C$ be a circuit over $(\mathbb{Z}[2^V]; \oplus, *)$ which outputs $f$. Let all constants in $C$ be singletons and let $f[V] < m$ for some integer $m$. Then $f[V]$ can be computed in time $O^*(2^{|V|})$ and space $O(|V||C|\log m)$.
\end{theorem} \subsection{Tree decomposition} For any graph $G=(V,E)$, a {\it tree decomposition} of $G$ is a tree $\mathcal{T}=(V_{\mathcal{T}}, E_{\mathcal{T}})$ such that every node $x$ in $V_\mathcal{T}$ is associated with a set $B_x$ (called the bag of $x$) of vertices in $G$, and $\mathcal{T}$ has the following additional properties: 1. For any nodes $x, y$, and any node $z$ belonging to the path connecting $x$ and $y$ in $\mathcal{T}$, $B_x\cap B_y\subseteq B_z$. 2. For any edge $e=\{u, v\}\in E$, there exists a node $x$ such that $u, v\in B_x$. 3. $\cup_{x\in V_{\mathcal{T}}} B_x = V$. The {\it width} of a tree decomposition $\mathcal{T}$ is $\max_{x\in V_{\mathcal{T}}} |B_x|-1$. The {\it treewidth} of a graph $G$ is the minimum width over all tree decompositions of $G$. We reserve the letter $k$ for the treewidth in what follows. Constructing a tree decomposition of minimum width is an NP-hard problem. If the treewidth of a graph is bounded by a constant, a linear-time algorithm for finding a tree decomposition of minimum width is known \cite{twconst}. An $O(\log n)$-approximation algorithm for the treewidth is given in \cite{twapproxlogn}; this has been further improved to $O(\log k)$ in \cite{twapproxlogk}. There is also a series of works studying constant-factor approximation of the treewidth $k$ with running time exponential in $k$; see \cite{twconst} and the references therein. To simplify the presentation of dynamic programming based on tree decompositions, an arbitrary tree decomposition is usually transformed into a {\it nice} tree decomposition, which has the following additional properties. A node in a nice tree decomposition has at most 2 children. Let $c$ be the only child of $x$, or let $c_1,c_2$ be the two children of $x$. Any node $x$ in a nice tree decomposition is of one of the following five types: \begin{enumerate} \item An {\it introduce vertex} node (introduce vertex $v$), where $B_x=B_c\cup\{v\}$.
\item An {\it introduce edge} node (introduce edge $e=\{u,v\}$), where $u,v\in B_x$ and $B_x=B_c$. We say that $e$ is associated with $x$. \item A {\it forget vertex} node (forget vertex $v$), where $B_x=B_c\setminus \{v\}$. \item A {\it join} node, where $x$ has two children and $B_x=B_{c_1}=B_{c_2}$. \item A {\it leaf} node, a leaf of $\mathcal{T}$. \end{enumerate} For any tree decomposition, a nice tree decomposition with the same treewidth can be constructed in polynomial time \cite{nicetree}. Notice that an introduce edge node is not a node type in the common definition of a nice tree decomposition; we create an introduce edge node after the two endpoints have been introduced. We further transform every leaf node and the root into nodes with an empty bag by adding a series of introduce nodes or forget nodes, respectively. \section{Algorithmic framework} We explain the algorithmic framework using the problem of counting perfect matchings based on a tree decomposition as an example, to help understand the recurrences; the result can easily be applied to other problems. A {\it perfect matching} in a graph $G=(V,E)$ is a collection of $|V|/2$ edges such that every vertex in $G$ belongs to exactly one of these edges. Consider a connected graph $G$ and a nice tree decomposition $\mathcal{T}$ of treewidth $k$ on $G$. Consider a function $f\in \mathbb{Z}[2^V]$. Assume that the recurrence for computing $f$ on a join node can be formulated as a subset convolution, while on other types of tree nodes it is an addition or subtraction. We explain how to efficiently evaluate $f[V]$ on a nice tree decomposition by dynamic programming in polynomial space. Let $\mathcal{T}_x$ be the subtree rooted at $x$. Let $T_x$ be the set of vertices contained in bags associated with nodes in $\mathcal{T}_x$ which are not in $B_x$. For any $X\subseteq B_x$, let $Y_X$ be the union of $X$ and $T_x$.
For any $X\subseteq B_x$, let $f_x[X]$ be the number of perfect matchings in the subgraph on $Y_X$ with the edges introduced in $\mathcal{T}_x$. As in the construction of Theorem \ref{thmsavespace}, we first replace $f_x$ by a relaxation $\{f_x^i\}_{0\leq i\leq k+1}$ of $f_x$, where $k$ is the treewidth. We then compute the zeta transform of $f_x^i$, for $0\leq i\leq k+1$. In what follows, we present the recurrences of $f_x$ for all types of tree nodes; only at the join node do we need to use the relaxations. The recurrences of $f_x$ based on $f_c$ can be directly applied to their relaxations with the same index as in Theorem \ref{thmsavespace}. For any leaf node $x$, $(\zeta f_x)[\emptyset]=f_x[\emptyset]$ is a problem-dependent constant. In the case of the number of perfect matchings, $f_x[\emptyset]=1$. For the root $x$, $(\zeta f_x)[\emptyset] = f_x[\emptyset]= f[V]$, which is the value of interest. For the other cases, consider an arbitrary subset $X\subseteq B_x$. 1. $x$ is an introduce vertex node. If the introduced vertex $v$ is not in $X$, $f_x[X]=f_c[X]$. If $v\in X$, in the case of the number of perfect matchings, $v$ has no adjacent edges, hence $f_x[X]=0$ (for other problems, $f_x[X]$ may equal $f_c[X]$, which implies a similar recurrence). By definition of the zeta transform, if $v\in X$, we have $(\zeta f_x)[X]=\sum_{v\in X'\subseteq X}f_x[X']+\sum_{v\notin X'\subseteq X}f_x[X']=\sum_{v\notin X'\subseteq X}f_x[X']$. Therefore, \begin{eqnarray} \label{introvertex} (\zeta f_x)[X] =\left\{ \begin{array}{ll} (\zeta f_c)[X] & \textrm{ $v\notin X$} \\ (\zeta f_c)[X\setminus\{v\}] & \textrm{ $v\in X$} \end{array} \right. \end{eqnarray} 2. $x$ is a forget vertex node. $f_x[X]=f_c[X\cup\{v\}]$ by definition. \begin{eqnarray} \label{forget} (\zeta f_x)[X]&=&\sum_{X'\subseteq X}f_x[X']=\sum_{X'\subseteq X}f_c[X'\cup\{v\}] \nonumber \\ &=&(\zeta f_c)[X\cup\{v\}]-(\zeta f_c)[X]. \end{eqnarray} 3. $x$ is a join node with two children.
By assumption, the computation of $f_x$ on a join node can be formulated as a subset convolution. We have \begin{eqnarray} \label{convolution} f_x[X]=\sum_{X'\subseteq X} f_{c_1}[X']f_{c_2}[X\setminus X']=f_{c_1}* f_{c_2}[X]. \end{eqnarray} For the problem of counting perfect matchings, it is easy to verify that $f_x[X]$ can be computed using (\ref{convolution}). Let $f_x^i=\sum_{j=0}^i f_{c_1}^j*_u f_{c_2}^{i-j}$. We can transform the computation to \begin{equation} \label{join} (\zeta f_x^i)[X]= \sum_{j=0}^i(\zeta f_{c_1}^j)[X]\cdot (\zeta f_{c_2}^{i-j})[X], \textrm{ for }0\leq i\leq k+1. \end{equation} 4. $x$ is an introduce edge node introducing $e=\{u,v\}$. The recurrence of $f_x$ with respect to $f_c$ is problem-dependent. Since the goal of the analysis of this case is to explain why we need to modify the construction of an introduce edge node, we consider only the recurrence for the counting perfect matchings problem. In this problem, if $e\nsubseteq X$, then $f_x[X]=f_c[X]$ and $(\zeta f_x)[X] = (\zeta f_c)[X]$. If $e\subseteq X$, we can either match $u$ and $v$ by $e$ or not use $e$ for the matching, thus $f_x[X]=f_c[X]+f_c[X\setminus\{u,v\}]$. In this case, we have \begin{eqnarray} (\zeta f_x)[X] &=& \sum_{e\subseteq X'\subseteq X}f_x[X']+\sum_{e\nsubseteq X'\subseteq X}f_x[X'] = \sum_{e\subseteq X'\subseteq X}(f_c[X']+f_c[X'\setminus \{u,v\}]) \nonumber \\ &+&\sum_{e\nsubseteq X'\subseteq X}f_c[X'] = \sum_{X'\subseteq X}f_c[X']+\sum_{e\subseteq X'\subseteq X}f_c[X'\setminus\{u,v\}]. \nonumber \end{eqnarray} Hence, \begin{eqnarray} \label{introedge} (\zeta f_x)[X] =\left\{ \begin{array}{ll} (\zeta f_c)[X] & \textrm{ $e\nsubseteq X$} \\ (\zeta f_c)[X]+(\zeta f_c)[X\setminus\{u,v\}] & \textrm{ $e\subseteq X$} \end{array} \right. \end{eqnarray} In cases 2 and 4, we see that the value of $(\zeta f_x)[X]$ depends on the values of $\zeta f_c$ on two different subsets. We can visualize the computation along a path from a leaf to the root as a computation tree.
This computation tree branches on introduce edge nodes and forget vertex nodes. Suppose that along any path from the root to a leaf in $\mathcal{T}$, the maximum number of introduce edge nodes is $m'$ and the maximum number of forget vertex nodes is $h$. To avoid exponentially large storage for keeping partial results in this computation tree, we compute along every path from a leaf to the root in this tree. This increases the running time by a factor of $O(2^{m'+h})$, but the computation is in polynomial space (explained in detail later). As $m'$ could be $\Omega(n)$, this could contribute a factor of $2^{\Omega(n)}$ to the time complexity. To reduce the running time, we eliminate the branching introduced by introduce edge nodes; the branching introduced by forget vertex nodes, on the other hand, seems inevitable. For any introduce edge node $x$ which introduces an edge $e$ and has a child $c$ in the original nice tree decomposition $\mathcal{T}$, we add an auxiliary child $c'$ of $x$, such that $B_{c'}=B_x$, and introduce the edge $e$ at $c'$. Here $c'$ is a special leaf whose bag is not empty. We assume that the evaluation of $\zeta f$ on $c'$ takes only polynomial time. For the counting perfect matchings problem, $f_{c'}[X]=1$ only when $X=e$ or $X=\emptyset$; otherwise it is equal to 0. Then $(\zeta f_{c'})[X]=2$ if $e\subseteq X$, otherwise $(\zeta f_{c'})[X]=1$. We will verify that this assumption is valid for the other problems considered in the following sections. We call $x$ a {\it modified introduce edge} node and $c'$ an {\it auxiliary leaf}. As the computation on $x$ is the same as that on a join node, we do not discuss the computation on modified introduce edge nodes separately. \\ In cases 1 and 2, we observe that the addition operation is not a strictly pointwise addition as in Theorem \ref{thmsavespace}. This is because in a tree decomposition, the set of vertices in each tree node's bag might not be the same.
However, there is a one-to-one correspondence from a set $X$ in node $x$ to a set $X'$ in its child $c$. We call this a {\it relaxed pointwise addition} and denote it by $\oplus'$. Hence, $f$ can be evaluated by a circuit $C$ over $(\mathbb{Z}[2^V]; \oplus', *)$. We transform $C$ to a circuit $C_1$ over $(\mathbb{Z}[2^V]; \oplus', *_u)$, then to $C_2$ over $(\mathbb{Z}[2^V]; \oplus', \odot)$, following the constructions in Theorem \ref{thmsavespace}. In Theorem \ref{thmsavespace}, $C_2$ can be viewed as $2^{|V|}$ disjoint circuits. In the case of tree decompositions, the computation branches at forget nodes, so we cannot view $C_2$ as $O(2^k)$ disjoint circuits. Consider a subtree $\mathcal{T}_x$ of $\mathcal{T}$ where the root $x$ is the only join node in the subtree. Take an arbitrary path from $x$ to a leaf $l$ and assume there are $h'$ forget nodes along this path. We compute along every path of the computation tree expanded by the path from $x$ to $l$, and sum up the results at the top. There are $2^{h'}$ computation paths, which are independent. Hence we can view the computation as $2^{h'}$ disjoint circuits over $(\mathbb{Z}; +, \cdot)$. Assuming the maximum number of forget nodes along any path from the root $x$ to a leaf in $\mathcal{T}_x$ is $h$ and there are $n_l$ leaves, the total computation takes time at most $n_l\cdot 2^{h}$ and polynomial space. In general, we carry out the computation in an in-order depth-first traversal of the tree decomposition $\mathcal{T}$. Every time we hit a join node $j$, we need to complete all computations in the subtree rooted at $j$ before going up. Suppose $j_{1},j_{2}$ are the closest join nodes in the two subtrees rooted at the children of $j$ (if there is no further join node, consider $j_1$ or $j_2$ to be empty). Assume there are at most $h_j$ forget nodes between $j$ and $j_1$ and between $j$ and $j_2$. Let $T_x$ be the time to complete the computation of $(\zeta f_x)[X]$ at node $x$.
We have $T_j\leq 2^{h_j}(T_{j_1}+T_{j_2})$. The modified introduce edge node is a special type of join node; since one of its children is always a leaf, the running time only depends on the subtree rooted at the other child, so it behaves like an introduce vertex node. Let $h$ be the maximum number of forget nodes along any path from the root to a leaf. By induction, the computation on $\mathcal{T}$ takes time at most $n_l\cdot 2^h$, where $n_l$ is the number of leaves of $\mathcal{T}$, and polynomial space. Notice that $n_l=O(|V|+|E|)$. To summarize, we present the algorithm for the problem of counting perfect matchings based on a modified nice tree decomposition $\mathcal{T}$ in Algorithm 1. \begin{algorithm} \caption{Counting perfect matchings on a modified nice tree decomposition} \begin{algorithmic} \State {\bf Input}: a modified nice tree decomposition $\mathcal{T}$ with root $r$. \State {\bf return} $(\zeta f)(r, \emptyset,0)$. \Procedure {$(\zeta f)$}{$x$, $X$,$i$}. // $(\zeta f)(x, X,i)$ represents $(\zeta f_x^i)[X]$. \State {\bf if} $x$ is a leaf: {\bf return} 1. \State {\bf if} $x$ is an auxiliary leaf: {\bf return} 2 when $e\subseteq X$, otherwise 1. \State {\bf if} $x$ is an introduce vertex node: {\bf return} $(\zeta f)(c, X,i)$ when $v\notin X$, or $(\zeta f)(c, X-\{v\},i)$ when $v\in X$. \State {\bf if} $x$ is a forget vertex node: {\bf return} $(\zeta f)(c, X\cup\{v\},i)-(\zeta f)(c, X,i)$. \State {\bf if} $x$ is a join node: {\bf return} $\sum_{j=0}^i(\zeta f)(c_1, X,j)\cdot (\zeta f)(c_2, X,i-j)$. \EndProcedure \end{algorithmic} \end{algorithm} \begin{comment} We introduce a new parameter, the {\it branch size} of a graph. For a given tree decomposition of a graph $G$, the branch size of this tree decomposition is the maximum size of the union of all bags along path from the root to a leaf.
The branch size of a graph is the minimum branch size of any tree decomposition of $G$. With the convention that the root of a modified nice tree decomposition has an empty bag, the branch size is equal to the maximum number of forget nodes along any path from the root to a leaf. It is this number of forget nodes that we will directly tie to the complexity of our algorithm. It is not known how to construct a tree decomposition minimizing this quantity. We first present a relation of the parameter branch size with treewidth. \end{comment} For any tree decomposition $\mathcal{T}$ of a graph $G$, we can transform it into a modified nice tree decomposition $\mathcal{T}'$ with the convention that the root has an empty bag. In this way, the parameter $h$, the maximum number of forget nodes along any path from the root to a leaf in $\mathcal{T}'$, is equal to the maximum size of the union of all bags along any path from the root to a leaf in $\mathcal{T}$. We directly tie this number $h$ to the complexity of our algorithm. Let $h_m(G)$ be the minimum value of $h$ over all tree decompositions of $G$. We show that $h_m(G)$ is closely related to a well-known parameter, the {\it tree-depth} of a graph \cite{treedepth}. \begin{definition}[tree-depth \cite{treedepth}] Given a rooted tree $T$ with vertex set $V$, the closure of $T$, $clos(T)$, is the graph $G$ with the same vertex set $V$ in which, for any two vertices $x,y\in V$ such that $x$ is an ancestor of $y$ in $T$, there is a corresponding edge $(x,y)$ in $G$. The tree-depth of $T$ is the height of $T$. The tree-depth of a graph $G$, $td(G)$, is the minimum height of a tree $T$ such that $G\subseteq clos(T)$. \end{definition} \begin{Proposition} For any connected graph $G$, $h_m(G) = td(G)$. \end{Proposition} \begin{proof} For any tree decomposition of $G$, we first transform it into a modified nice tree decomposition $\mathcal{T}$. We contract $\mathcal{T}$ by deleting all nodes except the forget nodes.
Let $T_f$ be this contracted tree, such that for every forget node in $\mathcal{T}$ which forgets a vertex $x$ of $G$, the corresponding vertex in $T_f$ is $x$. We have $G\subseteq clos(T_f)$. Therefore, $td(G)\leq h$, where $h$ is the maximum number of forget nodes along any path from the root to a leaf in $\mathcal{T}$. For any tree $T$ such that $G\subseteq clos(T)$, we construct a corresponding tree decomposition $\mathcal{T}$ of $G$ as follows: $\mathcal{T}$ is initialized to be $T$, every bag associated with a vertex $x$ of $T$ contains the vertex itself, and for every vertex $x\in T$, we also put all ancestors of $x$ in $T$ into the bag associated with $x$. It is easy to verify that this is a valid tree decomposition of $G$. Therefore, the height of $T$ is at least $h_m(G)$. $\square$ \end{proof} In what follows, we also call the parameter $h$, the maximum size of the union of all bags along any path from the root to a leaf in a tree decomposition $\mathcal{T}$, the tree-depth of $\mathcal{T}$. Let $k$ be the treewidth of $G$; it is shown in \cite{treedepth} that $td(G)\leq (k+1)\log |V|$. Therefore, we also have $h_m(G)\leq (k+1)\log |V|$. Moreover, we clearly have $h_m(G)\geq k+1$. \begin{comment} \begin{Proposition} For any graph $G=(V,E)$ with treewidth $k$ and branch size $h$. $k+1\leq h\leq O(k\log |V|)$. \end{Proposition} \begin{proof} $h$ is at least $k+1$, as there are at least $k+1$ vertices to forget before reaching the root. To obtain an upper bound of $h$, we first turn any tree decomposition of treewidth $k$ into a balanced tree decomposition of treewidth $3k$ \cite{kto3k}. Then transform the balanced tree decomposition into the modified nice tree decomposition with the same treewidth. In this way, $h=O(k \log |V|)$. \end{proof} \end{comment} Finally, we summarize the main result of this section in the following theorem. \begin{theorem} Given any graph $G=(V, E)$ and a tree decomposition $\mathcal{T}$ of $G$.
Let $f$ be a function evaluated by a circuit $C$ over $(\mathbb{Z}[2^V]; \oplus', \ast)$ with constants being singletons. Assume $f[V]< m$ for an integer $m$. We can compute $f[V]$ in time $O^*((|V|+|E|)2^{h})$ and in space $O(|V||C|\log m)$. Here $h$ is the maximum size of the union of all bags along any path from the root to a leaf in $\mathcal{T}$. \end{theorem} \section{Counting perfect matchings} The problem of counting perfect matchings is $\sharp$P-complete. It has long been known that in a bipartite graph on $2n$ vertices, perfect matchings can be counted in $O^*(2^n)$ time using the inclusion-exclusion principle. A recent breakthrough \cite{pmasryser} shows that the same running time is achievable for general graphs. For low-degree graphs, improved results based on dynamic programming on a path decomposition of a sufficiently large subgraph are known \cite{matchingpwsubgraph}. Counting perfect matchings on grids is an interesting problem in statistical physics \cite{monomer}. A more general problem is the Monomer-Dimer problem \cite{monomer}, which asks for the number of matchings of a specific size; we model it as the problem of computing the matching polynomial. For grids of dimension 2, the pure Dimer (perfect matching) problem is polynomial-time tractable and an explicit expression for the solution is known \cite{matching2dim}. We consider the problem of counting perfect matchings on the cube/hypercube in Section 4.1. Results on counting perfect matchings in more general grids, computing the matching polynomial and applications to other set covering and partitioning problems are presented in Section 4.2. \subsection{Counting perfect matchings on the cube/hypercube} We consider the case of counting perfect matchings on grids of dimension $d$, where $d\geq 3$ and the length of the grid is $n$ in each dimension. We denote this grid by $G_d(n)$.
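As a concrete baseline (not part of Algorithm 1), the grid $G_d(n)$ and a brute-force perfect-matching count can be sketched as follows; the helper names \texttt{grid\_graph} and \texttt{count\_perfect\_matchings} are ours, and the recursion is feasible only for very small instances.

```python
# A minimal sketch: build the grid graph G_d(n) and count its perfect
# matchings by brute-force recursion, as a correctness baseline.
from itertools import product

def grid_graph(d, n):
    """Vertices are d-tuples in {0,...,n-1}^d; edges join tuples at L1 distance 1."""
    vertices = list(product(range(n), repeat=d))
    adj = {v: [] for v in vertices}
    for v in vertices:
        for i in range(d):
            if v[i] + 1 < n:
                u = v[:i] + (v[i] + 1,) + v[i + 1:]
                adj[v].append(u)
                adj[u].append(v)
    return adj

def count_perfect_matchings(adj):
    """Match the smallest unmatched vertex to each free neighbour in turn."""
    free = set(adj)

    def rec():
        if not free:
            return 1
        v = min(free)
        free.remove(v)
        total = 0
        for u in adj[v]:
            if u in free:
                free.remove(u)
                total += rec()
                free.add(u)
        free.add(v)
        return total

    return rec()
```

For instance, $G_2(2)$ is a 4-cycle with 2 perfect matchings, and $G_3(2)$, the 3-cube, has 9.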
To apply Algorithm 1, we first construct a balanced tree decomposition of $G_d(n)$ with the help of balanced separators. The balanced tree decomposition can easily be transformed into a modified nice tree decomposition. \textbf{Tree decomposition using balanced vertex separators.} We first explain how to construct a balanced tree decomposition of a general graph using vertex separators. An $\alpha$-balanced vertex separator of a graph/subgraph $G$ is a set of vertices $S\subseteq G$ such that after removing $S$, $G$ is separated into two disjoint parts $A$ and $B$ with no edge between $A$ and $B$, and $|A|, |B|\leq \alpha|G|$, where $\alpha$ is a constant in $(0,1)$. Suppose we have an oracle for finding an $\alpha$-balanced vertex separator of a graph. We begin by creating the root of a tree decomposition $\mathcal{T}$ and associating the vertex separator $S$ of the whole graph with the root. Consider a subtree $\mathcal{T}_x$ of $\mathcal{T}$ with root $x$ associated with a bag $B_x$. Denote the set of vertices belonging to nodes in $\mathcal{T}_x$ by $V_x$. Initially, $V_x=V$ and $x$ is the root of $\mathcal{T}$. Suppose we have a vertex separator $S_x$ which partitions $V_x$ into two disjoint parts $V_{c_1}$ and $V_{c_2}$. We create two children $c_1,c_2$ of $x$, such that the set of vertices belonging to $\mathcal{T}_{c_i}$ is $S_x\cup V_{c_i}$. Denoting the set of vertices belonging to nodes on the path from $x$ to the root of $\mathcal{T}$ by $U_x$, we define the bag $B_{c_i}$ to be $S_x\cup (V_{c_i}\cap U_x)$, for $i=1,2$. It is easy to verify that this is a valid tree decomposition. Since $V_x$ decreases by a factor of at least $1-\alpha$ in each partition, the height of the tree is at most $\log_{\frac{1}{1-\alpha}} n$. To transform this decomposition into a modified nice tree decomposition, we only need to add a series of introduce vertex nodes, forget vertex nodes or modified introduce edge nodes between two originally adjacent nodes.
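The recursive construction just described can be sketched compactly; here a path graph with its midpoint separator stands in for a general separator oracle, and the helper names \texttt{separator\_decomposition} and \texttt{path\_oracle} are ours.

```python
# Sketch of the balanced-separator construction: each child's bag is
# S_x ∪ (V_{c_i} ∩ U_x), where U_x is the union of bags on the root path.
def separator_decomposition(vertices, oracle):
    """Return the list of bags produced by recursively splitting with `oracle`.

    oracle(Vx) must return (S, (P1, P2)): a balanced separator S of the
    subgraph on Vx and the two sides P1, P2 of Vx \\ S.
    """
    bags = []

    def build(Vx, U, Bx):
        if len(Vx) <= 2:              # base case: absorb the remaining vertices
            bags.append(Bx | Vx)
            return
        bags.append(Bx)
        S, parts = oracle(Vx)
        for P in parts:
            Bc = S | (P & U)          # child's bag: S_x ∪ (V_{c_i} ∩ U_x)
            build(S | P, U | Bc, Bc)

    build(frozenset(vertices), frozenset(), frozenset())
    return bags

def path_oracle(Vx):
    """1/2-balanced separator of a sub-path: its middle vertex."""
    vs = sorted(Vx)
    mid = len(vs) // 2
    return frozenset({vs[mid]}), (frozenset(vs[:mid]), frozenset(vs[mid + 1:]))
```

On a 7-vertex path every edge ends up inside some bag, consistent with the claim that the construction yields a valid tree decomposition (only edge coverage is checked here).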
We call this tree decomposition algorithm {\bf Algorithm 2}. \\ We observe that after the transformation, the number of forget nodes from $B_{c_i}$ to $B_x$ is the size of the balanced vertex separator of $V_x$, i.e. $|S_x|$. Therefore, the number of forget nodes from the root to a leaf is the sum of the sizes of the balanced vertex separators used to construct this path in the tree decomposition. A grid graph $G_d(n)$ has a nicely symmetric structure. Denote the $d$ dimensions by $x_1,x_2,...,x_d$ and consider an arbitrary subgrid $G'_d$ of $G_d(n)$ with length $n'_i$ in dimension $x_i$. The hyperplane in $G_d'$ which is perpendicular to $x_i$ and cuts $G'_d$ into halves can be used as a $1/2$-balanced vertex separator. We always cut along the dimension with the longest length. If $n_i'=n_{i+1}'$, we choose to first cut dimension $x_i$, then $x_{i+1}$. We illustrate the construction for the 2-dimensional case in the following example. \begin{example}[Balanced tree decomposition on $G_2(n)$] \label{exp2dgrid} The left picture is a partitioning of a 2-dimensional grid. We always bipartition the longer side of the grid/subgrid. The right picture is the corresponding balanced tree decomposition of this grid. The same letters on both sides represent the same set of nodes. Each $P_i$ represents a balanced vertex separator. We denote the left/top half of $P_i$ by $P_{i1}$, and the right/bottom half by $P_{i2}$ (see Figure \ref{example2dgrid}). The treewidth of this decomposition is $\frac{3}{2}n$. \begin{figure} \caption{An illustrative figure for the balanced tree decomposition on $G_2(n)$.} \label{example2dgrid} \end{figure} \end{example} To run Algorithm 2 on $G_d(n)$, we cut dimensions $x_1,x_2,...,x_d$ consecutively with separators of size $\frac{1}{2^{i-1}}n^{d-1}$, for $i=1,2,...,d$. Then we proceed with subgrids of length $n/2$ in every dimension. It is easy to see that the treewidth of this tree decomposition is $\frac{3}{2}n^{d-1}$.
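Summing the separator sizes met along one root-to-leaf path gives the tree-depth bound of this construction; a small numeric check of this geometric sum, for $n$ a power of two, with the illustrative helper name \texttt{path\_separator\_total}:

```python
def path_separator_total(d, n):
    """Total size of the separators on one root-to-leaf path for G_d(n):
    each round cuts the d dimensions with separators of size n'^{d-1}/2^i,
    i = 0,...,d-1, then recurses on a subgrid of side n'/2."""
    total = 0
    while n > 1:
        total += sum(n ** (d - 1) // 2 ** i for i in range(d))
        n //= 2
    return total
```

For $d=3$ and $n=8$ this gives $112+28+7=147$, below the bound $\frac{2^3-1}{2^2-1}\cdot 8^2\approx 149.3$.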
The tree-depth $h$ of this tree decomposition is at most $\sum_{j=0}^{\infty}\sum_{i=0}^{d-1} \frac{1}{2^i}\cdot (\frac{1}{2^j}n)^{d-1}$, which is $\frac{2^d-1}{2^{d-1}-1}n^{d-1}$. \begin{comment} In general case, it is sufficient to consider the construction of an arbitrary path from root to leaf, we denote it by $R,B_0,B_1,...,B_l$. We denote the original grid by $G$ for simplicity. \textbf{Algorithm 2:} 0. Create an empty root $R$. 1. Bipartition $G$ into two parts $G_1,G_2$ along some direction $x_1$ with separator $S_0$ of size $n^{d-1}$. Let $B_0=S_0$ be the child of $R$. Let $G_i\leftarrow G_i\bigcup B_0,i=1,2$. $G\leftarrow G_1$. \\ - The number of forget nodes from $B_0$ to $R$ is $|S_0|=n^{d-1}$. 2. Bipartition $G$ in direction $x_2\perp x_1$ into two parts $G_{1},G_{2}$ with separator $S_1$ of size $\frac{1}{2}n^{d-1}$. Let $B_1=B_0\bigcup S_1$ be a child of $B_0$. $|B_1|=\frac{3}{2}n^{d-1}$. Let $G_{1} \leftarrow B_1\bigcup G_{1}$. $G\leftarrow G_1$. \\ - The number of forget nodes is from $B_1$ to $B_0$ is $|S_1|=\frac{1}{2}n^{d-1}$. 3. Bipartition $G$ in direction $x_3\perp x_1,x_2$ into $G_{1},G_{2}$, with separator $S_2$ of size $\frac{1}{4}n^{d-1}$. Let $B_2$ be the union of $S_2$, half of $S_1$ and one fourth of $S_0$. $|B_2|=(\frac{1}{4}+\frac{1}{4}+\frac{1}{4})n^{d-1}=\frac{3}{4}n^{d-1}$. Let $G_i\leftarrow G_i\bigcup B_2$. $G\leftarrow G_1$. \\ - The number of forget nodes is from $B_2$ to $B_1$ is $|S_2|=\frac{1}{4}n^{d-1}$. 4. In general, bipartition $G$ in direction $x_{i+1} \perp x_1,....,x_{i}$, for $1\leq i\leq d-1$ into $G_1,G_2$ with separator $S_i$ of size $\frac{1}{2^{i}}n^{d-1}$. $B_i$ is the union of $S_i$, $\frac{1}{2}$ of $S_{i-1}$,...,$\frac{1}{2^{i}}$ of $S_0$. $|B_i|=\frac{i+1}{2^{i}}n^{d-1}$. \\ - The number of forget nodes is from $B_i$ to $B_{i-1}$ is $|S_i|=\frac{1}{2^i}n^{d-1}$. 5. After processing $d$ bipartitions, we have a subgrid with length $n/2$ in every dimension. Go to Step 1. 
The tree decomposition algorithm recurses. \end{comment} \begin{lemma} The treewidth of the tree decomposition $\mathcal{T}$ on $G_d(n)$ obtained by Algorithm 2 is $\frac{3}{2}n^{d-1}$. The tree-depth of $\mathcal{T}$ is at most $\frac{2^d-1}{2^{d-1}-1}n^{d-1}$. \end{lemma} To apply Algorithm 1 to the problem of counting perfect matchings, we verify that $f[S]\leq {|E|\choose |V|/2}\leq |E|^{|V|/2}$ and that all constants are singletons. \begin{theorem} The problem of counting perfect matchings on grids of dimension $d$ and uniform length $n$ can be solved in time $O^*(2^{\frac{2^d-1}{2^{d-1}-1}n^{d-1}})$ and in polynomial space. \end{theorem} To the best of our knowledge, there is no rigorous time complexity analysis of counting perfect matchings on grids in the literature. To demonstrate the efficiency of Algorithm 1, we compare it to three other natural algorithms. \textbf{1. Dynamic programming based on path decomposition.} A path decomposition is a special tree decomposition where the underlying tree is a path. A path decomposition of width $2n^{d-1}$ is obtained by putting all vertices with $x_1$ coordinate equal to $j$ or $j+1$ into the bag of node $j$, for $j=0,1,...,n-1$. A path decomposition with a smaller pathwidth of $n^{d-1}$ can be obtained as follows. Construct $n$ nodes $\{p_1,p_2,...,p_n\}$, where $p_{j+1}$ is associated with the bag of vertices with $x_1$ coordinate equal to $j$, for $j=0,1,...,n-1$. Between any $p_j$ and $p_{j+1}$, starting from $p_j$, add a sequence of nodes by alternating between adding a vertex with $x_1=j+1$ and deleting its neighbor with $x_1=j$. The number of nodes is larger by a factor of $n^{d-1}$ than in the first path decomposition. We run the standard dynamic programming on the second path decomposition. This algorithm runs in time $O^*(2^{n^{d-1}})$; however, the space complexity is also $O^*(2^{n^{d-1}})$. It is no surprise that it has a better running time than Algorithm 1, given the extra space usage. We remark that van Rooij et al.
\cite{dpgeneralsubsetconvolution} give a dynamic programming algorithm for counting perfect matchings on any tree decomposition of treewidth $k$ with running time $O^*(2^k)$ and space exponential in $k$. \\ \textbf{2. Dynamic programming based on path decomposition of a subgrid.} One way to obtain a polynomial-space dynamic programming algorithm is to construct a low-pathwidth decomposition of a sufficiently large subgraph. One can then run dynamic programming on this path decomposition and exhaustively enumerate over the remaining graph, in a similar way as in \cite{matchingpwsubgraph}. To extract from $G_d(n)$ a subgrid of pathwidth $O(\log n)$ (notice that this is the maximum pathwidth for a polynomial-space dynamic programming algorithm), we can delete a portion of the vertices of $G_d(n)$ to turn a ``cube''-shaped grid into a long ``stripe'' with $O(\log n)$ cross-section area. It is sufficient to remove $O(\frac{n^d}{(\log n)^{1/(d-1)}})$ vertices. This leads to a polynomial-space algorithm with running time $2^{O(\frac{n^d}{(\log n)^{1/(d-1)}})}$, which is worse than Algorithm 1. \\ \textbf{3. Branching algorithm.} A naive branching algorithm starting from an arbitrary vertex of the grid could have time complexity $2^{O(n^d)}$ in the worst case. We analyze a branching algorithm with a careful selection of the starting point. The branching algorithm works by first finding a balanced separator $S$ and partitioning the graph into $A\cup S\cup B$. The algorithm enumerates every subset $X\subseteq S$: the vertices of $X$ are matched to vertices in $A$ or in $B$, while the vertices of $S\setminus X$ are matched within $S$. Then the algorithm recurses on $A$ and $B$. Let $T_d(n)$ be the running time of this branching algorithm on $G_d(n)$. We use the same balanced separator as in Algorithm 2. We obtain the upper bound $T_d(n)\leq 2T_{d}(\frac{n-|S|}{2})\sum_{X\subseteq S} 2^{|X|} T_{d-1}(|S\setminus X|)$ on the running time.
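The subset sum in this recurrence is where a base-3 running time comes from: $\sum_{X\subseteq S}2^{|X|}=\sum_{i=0}^{|S|}\binom{|S|}{i}2^i=3^{|S|}$, since each separator vertex is matched into $A$, into $B$, or within $S$. A direct enumeration check (with our illustrative helper name \texttt{branching\_work}):

```python
# Verify ∑_{X ⊆ S} 2^{|X|} = 3^{|S|} by enumerating all subsets of S.
from itertools import chain, combinations

def branching_work(separator):
    """Enumerate all subsets X of the separator and sum 2^{|X|}."""
    separator = list(separator)
    subsets = chain.from_iterable(
        combinations(separator, k) for k in range(len(separator) + 1))
    return sum(2 ** len(X) for X in subsets)
```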
We can use any polynomial-space algorithm to count perfect matchings on $S\setminus X$. For example, using Algorithm 1, since the separator is of size $O(n^{d-1})$, we have $T_{d-1}(|S\setminus X|)=2^{O(n^{d-2})}$. Therefore, $T_d(n)\leq 2T_{d}(\frac{n}{2})\cdot 2^{o(n^{d-1})}\sum_{i=0}^{|S|}{|S|\choose i} 2^{i}=2T_{d}(\frac{n}{2})\cdot 2^{o(n^{d-1})} 3^{|S|}$. We get $T_d(n) = O^*(3^h)$, i.e. $O^*(3^{\frac{2^d-1}{2^{d-1}-1}n^{d-1}})$, which is worse than Algorithm 1. We remark that this branching algorithm can be viewed as a divide-and-conquer algorithm on a balanced tree decomposition, similar to that in \cite{kpath}. \subsection{Extensions} \textbf{Counting perfect matchings on general grids. } Consider more general grids of dimension $d$ with length $n_i$ in dimension $i$, $1\leq i\leq d$, where $n_m=\max_{1\leq i\leq d} n_i$. We use Algorithm 2 to construct a balanced tree decomposition $\mathcal{T}$ of a general grid and obtain an upper bound on the tree-depth $h$ of $\mathcal{T}$. \begin{lemma} \label{lemmagrid} Given any grid of dimension $d$ and volume $\mathcal{V}$. Using Algorithm 2, the tree-depth of the resulting tree decomposition is at most $\frac{3d\mathcal{V}}{n_m}$. \end{lemma} \begin{proof} Assume without loss of generality that $n_1\geq n_2\geq\cdots\geq n_d$, so that $n_1=n_m$, and that $2^{q_i-1}n_{i+1}< n_i\leq 2^{q_i}n_{i+1}$ for some integer $q_i\geq 0$ and $i=1,2,...,d-1$. Let $h(q_1,...,q_{d-1})$ be the maximum number of forget nodes from the root to a leaf in this case. We can think of the whole construction as proceeding in $d$ phases (the algorithm might do nothing in some phases). In Phase 1, the grid/subgrid is halved in dimension $x_1$ $q_1$ times. For $i=2,...,d$, suppose the lengths in dimensions $x_1,x_2,...,x_{i-1},x_{i}$ are $n_1',n_2',...,n_{i-1}',n_i'=n_i$ respectively; we have $n_i'/2<n_1'\leq n_2'\leq\cdots\leq n_{i-1}'\leq n_i'$. For any $1\leq j\leq i-1$, if $n_j'=n_{j+1}'$, Algorithm 2 will first cut dimension $x_j$, then $x_{j+1}$. If $n_j' < n_{j+1}'$, Algorithm 2 will first cut dimension $x_{j+1}$, then $x_{j}$.
In this way, we obtain a new partition order $x_1',...,x_i'$ which is a permutation of $x_1,...,x_i$. In Phase $i$ for $i\leq d-1$, the grid/subgrid is halved in dimensions $x_1',x_2',...,x_i'$ consecutively in $q_i$ rounds. In Phase $d$, the algorithm repeatedly bipartitions dimensions $x_1',x_2',...,x_d'$ until the construction is completed. We denote the maximum number of forget nodes from the root to a leaf created in Phase $i$ by $h_i$. $i=1$. $h_1=\frac{q_1\mathcal{V}}{n_1}$. Since $n_1=2^{q_1+\cdots+q_{d-1}}(\frac{\mathcal{V}}{2^{q_1+2q_2+\cdots +(d-1)q_{d-1}}})^{1/d}$, $h_1$ is maximized when $q_1=\frac{1}{\ln 2}\cdot\frac{d}{d-1}$ and $q_2=\cdots=q_{d-1}=0$. We have $h_1\leq \frac{3\mathcal{V}}{n_1}$. $i=2$. If $n_1=2^{q_1}n_{2}$, $h_2=\frac{\mathcal{V}}{n_1}\cdot(1+\frac{1}{2}+\cdots +\frac{1}{2^{q_2-1}})+\frac{\mathcal{V}}{n_2}\cdot(\frac{1}{2^{q_1+1}}+\frac{1}{2^{q_1+2}}+\cdots +\frac{1}{2^{q_1+q_2}})$, i.e. $h_2=\frac{\mathcal{V}}{n_1}\cdot\frac{2^2-1}{2-1}\cdot(1-\frac{1}{2^{(2-1)q_2}})\leq \frac{3\mathcal{V}}{n_1}$. If $n_1< 2^{q_1}n_{2}$, Algorithm 2 will alternate between cutting the $x_2$ dimension and the $x_1$ dimension in $q_2$ rounds. $h_2=\frac{\mathcal{V}}{n_1}\cdot(\frac{1}{2}+\cdots +\frac{1}{2^{q_2}})+\frac{\mathcal{V}}{n_2}\cdot(\frac{1}{2^{q_1}}+\frac{1}{2^{q_1+1}}+\cdots +\frac{1}{2^{q_1+q_2-1}})$. Since $2^{q_1}n_2>n_1$, $h_2<\frac{\mathcal{V}}{n_1}\cdot(\frac{\frac{1}{2}(1-\frac{1}{2^{q_2}})}{1-1/2}+\frac{1-\frac{1}{2^{q_2}}}{1-1/2})< \frac{3\mathcal{V}}{n_1}$. In general, for any $2\leq i\leq d-1$, we can bound $h_i$ as $h_i\leq \frac{\mathcal{V}}{2^{q_2+2q_3+\cdots+(i-2)q_{i-1}}n_1}\cdot(1+\frac{1}{2}+\cdots+\frac{1}{2^{i-1}})\cdot(1+\frac{1}{2^{i-1}}+\cdots+\frac{1}{2^{(i-1)(q_i-1)}})$. Hence, $h_i\leq \frac{\mathcal{V}}{2^{q_2+2q_3+\cdots+(i-2)q_{i-1}}n_1}\cdot\frac{2^i-1}{2^{i-1}-1}\cdot(1-\frac{1}{2^{(i-1)q_i}})$, which is at most $\frac{3\mathcal{V}}{n_1}$. $i=d$.
$h_d\leq \frac{2^{d}-1}{2^{d-1}-1}(\frac{\mathcal{V}}{2^{q_1+2q_2+\cdots +(d-1)q_{d-1}}})^{1-1/d}\leq \frac{3\mathcal{V}}{n_1}$. Hence, $h(q_1,...,q_{d-1})=h_1+h_2+\cdots+h_d\leq \frac{3d\mathcal{V}}{n_1}$. $\square$ \end{proof} Based on Lemma \ref{lemmagrid}, we now give time complexity results for the algorithms discussed in Section 4.1. First, $h$ is the only parameter governing the running time of Algorithm 1 and of the branching algorithm. Algorithm 1 runs in time $O^*(2^{\frac{3d\mathcal{V}}{n_m}})$ and the branching algorithm runs in time $O^*(3^{\frac{3d\mathcal{V}}{n_m}})$. The dynamic programming algorithm based on path decomposition of a subgrid has running time $2^{O(\frac{\mathcal{V}}{(\log n_m)^{1/(d-1)}})}$. These three algorithms have polynomial space complexity. For constant $d$, Algorithm~1 has the best time complexity. The dynamic programming algorithm based on path decomposition runs in time $O^*(2^{\frac{\mathcal{V}}{n_m}})$ but in exponential space. The result easily generalizes to the $k$-nearest-neighbor ($k$NN) graphs and their subgraphs in $d$-dimensional space, as it is known that there exists a vertex separator of size $O(k^{1/d}n^{1-1/d})$ which splits a $k$NN graph into two disjoint parts, each of size at most $\frac{d+1}{d+2}n$ \cite{knn}. More generally, a nontrivial result can be obtained by Algorithm 1 whenever the graph $G$ admits balanced separators with the following property. Let $s(n')$ be the size of a balanced separator $S$ of any subgraph $G'$ of $G$ of size $n'\leq n$, where $S$ partitions the subgraph into two disjoint parts $G_1',G_2'$ such that $S\cup G_i'$ has size at most $cn'$, for some constant $c\in(0,1)$, $i=1,2$. If there exists a constant $\gamma<1$ such that, for every $n'\leq n$, $s(cn')\leq \gamma s(n')$, then the number of forget nodes along any path from the root to a leaf is at most $s(n)+\gamma s(n)+\gamma^2 s(n)+\cdots\leq \frac{s(n)}{1-\gamma}$.
In this case, the tree decomposition of treewidth $k$ constructed by Algorithm 2 has tree-depth $h=\Theta(k)$. For $k=\Omega(\log n)$, Algorithm 1 has a running time similar to that of the standard dynamic programming algorithm, but with much better space complexity. \\ \textbf{Computing the matching polynomial.} The matching polynomial of a graph $G$ is defined as $m[G, \lambda]=\sum_{i=0}^{|G|/2} m^i[G]\lambda^i$, where $m^i[G]$ is the number of matchings of size $i$ in the graph $G$. We collect the coefficients of $m[G, \lambda]$ in a vector $\mathbf{m}[G]$; the problem is essentially to compute this coefficient vector $\mathbf{m}[G]$. For every node $x$ in a tree decomposition, let the vector $\mathbf{m}_x[X]$ be the coefficient vector of the matching polynomial defined on $Y_X$. Notice that every entry of $\mathbf{m}_x[X]$ is at most $|E|^{|V|/2}$ and all constants are singletons. $\mathbf{m}^0_x[X]=1$ and $\mathbf{m}^i_x[X]=0$ for $i>|X|/2$. The case of $x$ being a forget vertex node follows exactly as in Algorithm 1. For the other types of tree node $x$: - $x$ is a leaf node. $\mathbf{m}^i_x[\emptyset]=1$ if $i=0$, and 0 otherwise. - $x$ is an introduce vertex node. If $v\in X$, $\mathbf{m}^i_x[X] =\mathbf{m}^i_c[X\setminus \{v\}]$. Hence $(\zeta\mathbf{m}^i_x)[X]=2(\zeta\mathbf{m}^i_c)[X\setminus\{v\}]$ if $v\in X$, and $(\zeta\mathbf{m}^i_x)[X]=(\zeta\mathbf{m}^i_c)[X]$ otherwise. - $x$ is an auxiliary leaf of a modified introduce edge node. $\mathbf{m}_{x}^i[X]=1$ only when $u,v\in X$ and $i=1$, or when $i=0$; otherwise it is 0. - $x$ is a join node. $\mathbf{m}_x^i[X]=\sum_{X'\subseteq X}\sum_{j=0}^i \mathbf{m}_{c_1}^j[X']\mathbf{m}_{c_2}^{i-j}[X\setminus X']$. \\ {\bf Counting $l$-packings.} Given a universe $U$ of elements and a collection $\mathcal{S}$ of subsets of $U$, an $l$-packing is a collection of $l$ pairwise disjoint sets from $\mathcal{S}$. The $l$-packings problem can be solved in a similar way as computing the matching polynomial.
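The join-node rule above is an ordinary polynomial convolution; for a disjoint union of two graphs it reduces to multiplying the two matching polynomials. A brute-force sketch checking this on small graphs (the helpers \texttt{matching\_coeffs} and \texttt{convolve} are ours, not Algorithm 1):

```python
def matching_coeffs(n, edges):
    """m^i = number of matchings of size i in a graph on n vertices."""
    coeffs = [0] * (n // 2 + 1)

    def rec(i, matched, size):
        if i == len(edges):
            coeffs[size] += 1
            return
        rec(i + 1, matched, size)                  # skip edge i
        u, v = edges[i]
        if u not in matched and v not in matched:  # take edge i
            rec(i + 1, matched | {u, v}, size + 1)

    rec(0, frozenset(), 0)
    return coeffs

def convolve(a, b):
    """Join-node rule on coefficient vectors: c^i = sum_j a^j * b^{i-j}."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c
```

For instance, the path on vertices $0,1,2$ has coefficient vector $[1,2]$ and a single disjoint edge has $[1,1]$; their convolution $[1,3,2]$ matches a direct count on the union.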
Packing problems can be viewed as matching problems on hypergraphs. Tree decompositions of graphs can be generalized to tree decompositions of hypergraphs, where we require every hyperedge to be assigned to a specific bag \cite{hypertree}. A hyperedge is introduced after all vertices covered by this edge have been introduced. \\ \textbf{Counting dominating sets, counting set covers.} In the set cover problem, we are given a universe $U$ of elements and a collection $\mathcal{S}$ of sets over $U$, and we want to find a subcollection of sets from $\mathcal{S}$ which covers the entire universe $U$. The dominating set problem is defined on a graph $G=(V,E)$. Let $U=V$ and $\mathcal{S}=\{N[v]\}_{v\in V}$, where $N[v]$ is the union of the neighbors of $v$ and $v$ itself. The dominating set problem is to find a subset of vertices $S\subseteq V$ such that $\bigcup_{v\in S} N[v]$ covers $V$. The set cover problem can be viewed as a covering problem on a hypergraph, where one selects a collection of hyperedges which cover all vertices. The dominating set problem is then a special case of the set cover problem. If $\mathcal{S}$ is closed under taking subsets, a set cover can be viewed as a disjoint cover. We only consider the counting set covers problem here. For any subset $X\subseteq B_x$, we define $h_x[X]$ to be the number of set covers of $Y_X$. We have $h_x[X]\leq |U|^{|\mathcal{S}|}$, and all constants are singletons. We omit the recurrence for forget vertex nodes, as we can directly apply recurrence (\ref{forget}) of Algorithm 1. For any node $x$, $h_x[\emptyset]=1$. - $x$ is a leaf node. $h_x[\emptyset]=1$. - $x$ is an introduce vertex node. If $v\in X$, $h_x[X]=0$. If $v\notin X$, $h_x[X]=h_c[X]$. - $x$ is an auxiliary leaf of a modified introduce hyperedge node. $h_{x}[X]=1$ when $X\subseteq e$, and $h_x[X]=0$ otherwise. - $x$ is a join node. $h_x[X]=\sum_{X'\subseteq X}h_{c_1}[X']h_{c_2}[X\setminus X']$. \\ Finally, we point out that our framework has its limitations.
First, it cannot be applied to problems where the computation on a join node cannot be formalized as a convolution; the maximum independent set problem is an example. Also, it is not known whether there is a way to adapt the framework to the Hamiltonian path problem, the counting $l$-paths problem, and the unweighted Steiner tree problem. It seems that for these problems we need a large storage space to record intermediate results. It is interesting to find more problems which fit in our framework. \begin{comment} \section{Extension to other problems} More problems are given in Appendix Section 8. $\bullet$ \textbf{Maximum internal spanning tree}. Given a graph $G=(V,E)$, find out if there exists a spanning tree with maximum number of internal nodes. In other words, find a spanning tree with minimum number of leaves. For every subset $X\subseteq B_x$, $v\in V$, let $h_x[v, l, X]$ be an indicator function such that $h_x[v, l, X]>0$ if there exists a spanning tree of the set $Y_X\cup\{v\}$ which has $l$ leaves and is rooted at node $v$. $h_x[v, l, X]\leq |V|! \leq |V|^{|V|}$. We check that constants are singletons. We omit the recurrences for forget vertex nodes as we can directly apply recurrence (\ref{forget}) in Algorithm 1. - $h_x[v,0,\emptyset]=1$. $h_x[v,r,X]=0$ whenever there is an isolated vertex. - $x$ is an introduce vertex node introducing vertex $u$. If $u\in X$, $h_x[v,l,X]=0$. If $u\notin X$, $h_x[v,l,X]=h_c[v,r,X]$. - $x$ is a modified introduce edge node. In child $c'$, $h_x[v,l,X]=1$ if $X=\emptyset$ and $l=0$ or $X=\{u\}$, $(u,v)$ is the edge introduced at $c'$ and $l=1$. - $x$ is a join node. $h_x[v, l, X] = \sum_{i=0}^l\sum_{X'\subseteq X} h_{c_1}[v, i, X']\cdot h_{c_2}[v, l-i, X\setminus X']$. \\ Specifically, consider graphs of degree at most 3. A branching algorithm with running time $O(1.8612^n)$ where $n=|V|$ is given in \cite{maxinternalst}. Using Algorithm 1, we can obtain the result more efficiently.
We use a result from \cite{bisectionwidth}, which states that in graphs of degree at most 3, a bisection of width at most $\frac{n}{6}$ can be found in polynomial time. Suppose we use this bisection algorithm and find a bipartition of $G$ into $G_1,G_2$ with boundary $S$. Then $|S|\leq \frac{n}{3}$ can be viewed as a vertex separator of $G_1\setminus S$ and $G_2\setminus S$. We use a similar construction from Section 3.1 and get a balanced tree decomposition on $G$ of treewidth at most $\frac{n}{3}$. In this tree decomposition, the number of forget nodes is at most $(\frac{1}{3}+\frac{1}{6}+\frac{1}{12}+\cdots)n$, which is at most $\frac{2n}{3}$. Hence, our algorithm runs in $O^*(2^{2n/3})$, i.e., $O^*(1.5874^n)$ time and in polynomial space. \begin{theorem} In graphs of degree at most 3, the maximum internal spanning tree problem can be solved in $O^*(1.5874^n)$ time and in polynomial space. \end{theorem} We remark that Algorithm 1 does not lead to a time complexity improvement for other problems considered in this paper with known results for degree-3 graphs. We observe that the key to improve the running time in this case is rather than constructing a balanced tree decomposition on the whole graph, constructing it on a sufficiently large subgraph. Our result does not require the treewidth of the subgraph to be $O(\log n)$. Hence we allow removing less vertices than what has been done in \cite{twsubgraph}. It is an interesting question to find out how to balance the treewidth of the remaining graph and the number of vertices removed. \end{comment} \section{Conclusion} \label{conclusion} We study the problem of designing efficient dynamic programming algorithms based on tree decompositions in polynomial space. 
We show how to construct a modified nice tree decomposition $\mathcal{T}$ and extend the algebraic techniques in \cite{savespace2010} to dynamic sets such that we can run the dynamic programming algorithm in time $O^*(2^h)$ and in polynomial space, with $h$ being the maximum size of the union of bags along any path from the root to a leaf of $\mathcal{T}$, a parameter closely related to the tree-depth of a graph \cite{treedepth}. We apply our algorithm to many problems. It is interesting to find more natural graphs with nontrivial modified nice tree decompositions, and to find more problems which fit in our framework. \begin{comment} \section*{Appendix} \section{Counting perfect matchings and variants} \subsection{An example of constructing balanced tree decomposition on 2-dimensional grid} \label{gridexample} \begin{example}[Balanced tree decomposition on $G_2(n)$] \label{exp2dgrid} The left picture is a partitioning on a 2-dimensional grid. We always bipartition the longer side of the grid/subgrid. The right picture is the corresponding balanced tree decomposition on this grid. The same letters on both sides represent the same set of nodes. $P_i$ represent a balanced vertex separator. We denote the left/top half of $P_i$ by $P_{i1}$, and the right/bottom part by $P_{i2}$ (see Fig. \ref{example2dgrid}). The treewidth of this decomposition is $\frac{3}{2}n$. \begin{figure} \caption{An illustrative figure for balanced tree decomposition on $G_2(n)$.} \label{example2dgrid} \end{figure} \end{example} \subsection{Extensions to more general graphs} \label{pmgeneral} \begin{proof}[Proof of Lemma \ref{lemmagrid}] Assume that $2^{q_i-1}n_{i+1}< n_i\leq 2^{q_i}n_{i+1}$ for some integer $q_i\geq 0$ and $i=1,2,...,d-1$. Let $h(q_1,...,q_{d-1})$ be the maximum number of forget nodes from the root to a leaf in this case. We can think of the whole construction as in $d$ phases (the algorithm might do nothing in some phases). 
In Phase 1, the grid/subgrid is halved in dimension $x_1$ in $q_i$ times. For $i=2,...,d$, suppose the lengths of dimension $x_1,x_2,...,x_{i-1},x_{i}$ are $n_1',n_2',...,n_{i-1}',n_i'=n_i$ respectively, we have $n_i'/2<n_1'\leq n_2'\leq\cdots\leq n_{i-1}'\leq n_i'$. For any $1\leq j\leq i-1$, if $n_j'=n_{j+1}'$, Algorithm 2 will first cut dimension $x_j$ then $x_{j+1}$. If $n_j' < n_{j+1}'$, Algorithm 2 will first cut dimension $x_{j+1}$ then $x_{j}$. In this way, we obtain a new partition order $x_1',...,x_i'$ which is a permutation of $x_1,...,x_i$. In Phase $i$ for $i\leq d-1$, the grid/subgrid is halved in dimension $x_1',x_2',...,x_i'$ consecutively in $q_i$ rounds. In Phase $d$, the algorithm repeats bipartitioning dimension $x_1',x_2',...,x_d'$ until the construction is completed. We denote the maximum number of forget nodes from the root to a leaf created in Phase $i$ by $h_i$. $i=1$. $h_1=\frac{q_1\mathcal{V}}{n_1}$. Notice that $n_1=2^{q_1+\cdots q_{d-1}}(\frac{\mathcal{V}}{2^{q_1+2q_2+\cdots +(d-1)q_{d-1}}})^{1/d}$, $h_1$ is maximized when $q_1=\frac{1}{\ln 2}\cdot\frac{d}{d-1}$ and $q_2=...=q_{d-1}=0$. We have $h_1\leq \frac{3\mathcal{V}}{n_1}$. $i=2$. If $n_1=2^{q_1}n_{2}$, $h_2=\frac{\mathcal{V}}{n_1}\cdot(1+\frac{1}{2}+\cdots +\frac{1}{2^{q_2-1}})+\frac{\mathcal{V}}{n_2}\cdot(\frac{1}{2^{q_1+1}}+\frac{1}{2^{q_1+2}}+\cdots +\frac{1}{2^{q_1+q_2}})$, i.e. $h_2=\frac{\mathcal{V}}{n_1}\cdot\frac{2^2-1}{2-1}\cdot(1-\frac{1}{2^{(2-1)q_2}})\leq \frac{3\mathcal{V}}{n_1}$. If $n_1< 2^{q_1}n_{2}$, Algorithm 2 will alternate to cut the $x_2$ dimension and $x_1$ dimension in $q_2$ rounds. $h_2=\frac{\mathcal{V}}{n_1}\cdot(\frac{1}{2}+\cdots +\frac{1}{2^{q_2}})+\frac{\mathcal{V}}{n_2}\cdot(\frac{1}{2^{q_1}}+\frac{1}{2^{q_1+1}}+\cdots +\frac{1}{2^{q_1+q_2-1}})$. Since $2^{q_1}n_2>n_1$, $h_2<\frac{\mathcal{V}}{n_1}\cdot(\frac{\frac{1}{2}(1-\frac{1}{2^{q_2}})}{1-1/2}+\frac{1-\frac{1}{2^{q_2}}}{1-1/2})< \frac{3\mathcal{V}}{n_1}$. 
In general, for any $2\leq i\leq d-1$, we can bound $h_i$ as $h_i\leq \frac{\mathcal{V}}{2^{q_2+2q_3+\cdots+(i-2)q_{i-1}}n_1}\cdot(1+\frac{1}{2}+\cdots+\frac{1}{2^{i-1}})\cdot(1+\frac{1}{2^{i-1}}+\cdots+\frac{1}{2^{(i-1)(q_i-1)}})$. Hence, $h_i\leq \frac{\mathcal{V}}{2^{q_2+2q_3+\cdots+(i-2)q_{i-1}}n_1}\cdot\frac{2^i-1}{2^{i-1}-1}\cdot(1-\frac{1}{2^{(i-1)q_i}})$, which is at most $\frac{3\mathcal{V}}{n_1}$. For $i=d$, we have $h_d\leq \frac{2^{d}-1}{2^{d-1}-1}(\frac{\mathcal{V}}{2^{q_1+2q_2+\cdots +(d-1)q_{d-1}}})^{1-1/d}\leq \frac{3\mathcal{V}}{n_1}$. Hence, $h(q_1,...,q_{d-1})=h_1+h_2+\cdots+h_d\leq \frac{3d\mathcal{V}}{n_1}$. \end{proof} {\bf Remark. }The computation of the number of forget nodes shows that Algorithm 1 yields a nontrivial result whenever the graph $G$ admits balanced separators with the following property. Let $s(n')$ be the size of a balanced separator $S$ of any subgraph $G'$ of $G$ on $n'\leq n$ vertices; $S$ partitions $G'$ into two disjoint parts $G_1',G_2'$ such that $S\cup G_i'$ has size at most $cn'$ for some constant $c\in(0,1)$, $i=1,2$. If there exists a constant $\gamma<1$ such that $s(cn')\leq \gamma s(n')$ for every $n'\leq n$, then the number of forget nodes along any path from the root to a leaf is at most $s(n)+\gamma s(n)+\gamma^2 s(n)+\cdots\leq \frac{s(n)}{1-\gamma}$. In this case, the tree decomposition of treewidth $k$ constructed by Algorithm 2 has tree-depth $h=\Theta(k)$. For $k=\Omega(\log n)$, Algorithm 1 then has a running time similar to that of the standard dynamic programming algorithm, but with much better space complexity. \end{document}
\begin{document} \renewcommand{\le}{\leqslant} \renewcommand{\ge}{\geqslant} \renewcommand{\qed}{\hbox{\qedsymbol}} \newcommand{\diag}{\,\diagdown\,} \newcommand{\lin}{\,\frac{}{\quad}\,} \newcommand{\is}{\stackrel {\text{\raisebox{-1ex}{$\sim\ \;$}}}{\to}} \newcommand{\ci}{ \begin{picture}(6,6) \put(3,3){\circle*{3}} \end{picture}} \newcommand{\hhat} {\text{\raisebox{-0.5ex}{\,$\widehat{}$}}\ } \newcommand{\ddd}{ \text{\begin{picture}(12,8) \put(-2,-4){$\cdot$} \put(3,0){$\cdot$} \put(8,4){$\cdot$} \end{picture}}} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{corollary}{Corollary}[section] \theoremstyle{remark} \newtheorem{remark}{Remark}[section] \title{Canonical matrices of bilinear and sesquilinear forms} \footnotetext{This is the authors' version of a work that was accepted for publication in Linear Algebra and its Applications (2007), doi:10.1016/j.laa.2007.07.023.} \begin{abstract} Canonical matrices are given for \begin{itemize} \item bilinear forms over an algebraically closed or real closed field; \item sesquilinear forms over an algebraically closed field and over real quaternions with any nonidentity involution; and \item sesquilinear forms over a field $\mathbb F$ of characteristic different from $2$ with involution (possibly, the identity) up to classification of Hermitian forms over finite extensions of $\mathbb F$; the canonical matrices are based on any given set of canonical matrices for similarity over $\mathbb F$. \end{itemize} A method for reducing the problem of classifying systems of forms and linear mappings to the problem of classifying systems of linear mappings is used to construct the canonical matrices. This method has its origins in representation theory and was devised in [V.V. Sergeichuk, {\it Math. USSR-Izv.} 31 (1988) 481--501]. {\it AMS classification:} 15A21, 15A63.
{\it Keywords:} Canonical matrices; Bilinear and sesquilinear forms; Congruence and *congruence; Quivers and algebras with involution. \end{abstract} \section{Introduction} \label{s0} We give canonical matrices of bilinear forms over an algebraically closed or real closed field (familiar examples are $\mathbb C$ and $\mathbb R$), and of sesquilinear forms over an algebraically closed field and over $\mathbb P$-quaternions ($\mathbb P$ is a real closed field) with respect to any nonidentity involution. We also give canonical matrices of sesquilinear forms over a field $\mathbb F$ of characteristic different from $2$ with involution (possibly, the identity) up to classification of Hermitian forms over finite extensions of $\mathbb F$; the canonical matrices are based on any given set of canonical matrices for similarity. Bilinear and sesquilinear forms over a field $\mathbb F$ of characteristic different from $2$ have been classified by Gabriel, Riehm, and Shrader-Frechette. Gabriel \cite{gab2} reduced the problem of classifying bilinear forms to the nondegenerate case. Riehm \cite{rie} assigned to each nondegenerate bilinear form $\cal A\colon V\times V\to \mathbb F$ a linear mapping $A\colon V\to V$ and a finite sequence $\varphi^{\cal A}_1,\, \varphi^{\cal A}_2,\dots$ consisting of $\varepsilon_i$-Hermitian forms $\varphi^{\cal A}_i$ over finite extensions of $\mathbb F$ and proved that two nondegenerate bilinear forms $\cal A$ and $\cal B$ are equivalent if and only if the corresponding mappings $A$ and $B$ are similar and each form $\varphi^{\cal A}_i$ is equivalent to $\varphi^{\cal B}_i$ (results of this kind were earlier obtained by Williamson \cite{wil}). This reduction was studied in \cite{sch} and was improved and extended to sesquilinear forms by Riehm and Shrader-Frechette \cite{rie1}. But this classification of forms was not expressed in terms of canonical matrices, so it is difficult to use. 
Using Riehm's reduction, Corbas and Williams \cite{cor} obtained canonical forms of nonsingular matrices under congruence over an algebraically closed field of characteristic different from 2 (their list of nonsingular canonical matrices contains an inaccuracy, which can be easily fixed; see \cite[p.\,1013]{hor-ser_can}). Thompson \cite{thom} gave canonical pairs of symmetric or skew-symmetric matrices over $\mathbb C$ and $\mathbb R$ under simultaneous congruence. Since any square complex or real matrix can be expressed uniquely as the sum of a symmetric and a skew-symmetric matrix, Thompson's canonical pairs lead to canonical matrices for congruence; they are studied in \cite{lee-wei}. We construct canonical matrices that are much simpler than the ones in \cite{cor,lee-wei}. We construct canonical matrices of bilinear and sesquilinear forms by using the technique for reducing the problem of classifying systems of forms and linear mappings to the problem of classifying systems of linear mappings that was devised by Roiter \cite{roi} and the second author \cite{ser_first, ser_disch,ser_izv}. A system of forms and linear mappings satisfying some relations is given as a representation of a partially ordered graph $P$ with relations: each vertex corresponds to a vector space, each arrow or nonoriented edge corresponds to a linear mapping or a bilinear/sesquilinear form (see Section \ref{sub_pos1}). The problem of classifying such representations over a field or skew field $\mathbb F$ of characteristic different from $2$ reduces to the problems of classifying \begin{itemize} \parskip=-3pt \item representations of some quiver $\underline P$ with relations and involution (in fact, representations of a finite dimensional algebra with involution) over $\mathbb F$, and \item Hermitian forms over fields or skew fields that are finite extensions of the center of $\mathbb F$. 
\end{itemize} The corresponding reduction theorem was extended in \cite{ser_izv} to the problem of classifying selfadjoint representations of a linear category with involution and in \cite{ser_sym} to the problem of classifying symmetric representations of an algebra with involution. Similar theorems were proved by Quebbermann, Scharlau, and Schulte \cite{que-scha,w.schar} for additive categories with quadratic or Hermitian forms on objects, and by Derksen, Shmelkin, and Weyman \cite{der-wey,shme} for generalizations of quivers involving linear groups. Canonical matrices of \begin{itemize} \item[(i)] bilinear and sesquilinear forms, \item[(ii)] pairs of symmetric or skew-symmetric forms, and pairs of Hermitian forms, and \item[(iii)] isometric or selfadjoint operators on a space with scalar product given by a nondegenerate symmetric, skew-symmetric, or Hermitian form \end{itemize} were constructed in \cite{ser_disch, ser_izv} by this technique over a field $\mathbb F$ of characteristic different from 2 up to classification of Hermitian forms over fields that are finite extensions of $\mathbb F$. Thus, the canonical matrices of (i)--(iii) over $\mathbb C$ and $\mathbb R$ follow from the construction in \cite{ser_disch, ser_izv} since classifications of Hermitian forms over these fields are known. The canonical matrices of bilinear and sesquilinear forms over an algebraically closed field of characteristic different from 2 and over a real closed field given in \cite[Theorem 3]{ser_izv}, and the canonical matrices of bilinear forms over an algebraically closed field of characteristic $2$ given in \cite{ser0} are based on the Frobenius canonical form for similarity. In this article we simplify them by using the Jordan canonical form. 
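As a toy illustration (ours, not from the cited works) of the Jordan canonical form on which the simplification is based: for a small matrix with a single eigenvalue $\lambda$, the Jordan type is read off from the nilpotency of $A-\lambda I$. The sketch below checks, with exact integer arithmetic, that the hypothetical example matrix $A$ is similar to the single Jordan block $J_2(2)$.

```python
# Illustrative only: determine the Jordan type of a 2x2 matrix with one eigenvalue.
A = [[3, 1],
     [-1, 1]]
tr = A[0][0] + A[1][1]                                # trace = 4
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]           # determinant = 4
# Characteristic polynomial: t^2 - 4t + 4 = (t - 2)^2, so 2 is the only eigenvalue.
N = [[A[0][0] - 2, A[0][1]],
     [A[1][0], A[1][1] - 2]]                          # N = A - 2I
N2 = [[sum(N[i][k] * N[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]                              # N squared
assert any(N[i][j] != 0 for i in range(2) for j in range(2))   # N is nonzero ...
assert all(N2[i][j] == 0 for i in range(2) for j in range(2))  # ... but N^2 = 0,
# hence A is similar to the single Jordan block J_2(2), not to 2I.
```

Since $A-2I$ is nonzero but squares to zero, the eigenspace of $2$ is one-dimensional and the Jordan form consists of one block $J_2(2)$.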
Such a simplification was given by the authors in \cite{hor-ser} for canonical matrices of bilinear and sesquilinear forms over $\mathbb C$; a direct proof that the matrices from \cite{hor-ser} are canonical is given in \cite{hor-ser_regul, hor-ser_can}; applications of these canonical matrices were obtained in \cite{dok_iso,dok-zha_jor, dok-zha_rat, hor-ser_can, hor-ser_unit}. We also construct canonical matrices of sesquilinear forms over quaternions; they were given in \cite{ser_quat} with incorrect signs for the indecomposable direct summands; see Remark \ref{rema}. Analogous results for canonical matrices of isometric operators have been obtained in \cite{ser_iso}. The paper is organized as follows. In Section \ref{s_intre} we formulate our main results: Theorem \ref{t1.1} about canonical matrices of bilinear and sesquilinear forms over an algebraically or real closed field and over quaternions, and Theorem \ref{Theorem 5} about canonical matrices of bilinear and sesquilinear forms over any field $\mathbb F$ of characteristic not $2$ with an involution, up to classification of Hermitian forms. In Section \ref{sub_pos1} we give a brief exposition of the technique for reducing the problem of classifying systems of forms and linear mappings to the problem of classifying systems of linear mappings. We use it in Sections \ref{s_pro} and \ref{secmat}, in which we prove Theorems \ref{t1.1} and \ref{Theorem 5}. \section{Canonical matrices for congruence and *congruence} \label{s_intre} Let $\mathbb F$ be a field or skew field with involution $a\mapsto \bar{a}$, i.e., a bijection $\mathbb F\to \mathbb F$ satisfying $\overline{a+b}=\bar a+ \bar b,$ $\overline{ab}=\bar b \bar a,$ and $\bar{\bar a}=a.$ Thus, the involution may be the identity only if $\mathbb F$ is a field. For any matrix $A=[a_{ij}]$ over $\mathbb F$, we write $ A^*:=\bar{A}^T=[\bar{a}_{ji}]. 
$ Matrices $A,B\in{\mathbb F}^{n\times n}$ are said to be *{\it\!congruent}\/ over $\mathbb F$ if there is a nonsingular $S\in{\mathbb F}^{n\times n}$ such that $S^*AS=B$. If $S^TAS=B$, then the matrices $A$ and $B$ are called {\it congruent}. The transformations of congruence ($A\mapsto S^TAS$) and *congruence ($A\mapsto S^*AS$) are associated with the bilinear form $x^TAy$ and the sesquilinear form $x^*Ay$, respectively. \subsection{Canonical matrices over an algebraically or real closed field and over quaternions} \label{sub_01} In this section we give canonical matrices for congruence over: \begin{itemize} \item an algebraically closed field, and \item a {\it real closed field}---i.e., a field ${\mathbb P}$ whose algebraic closure ${\mathbb K}$ has a finite degree $\ne 1$ (that is, $1<\dim_{\mathbb P}{\mathbb K}<\infty$). \end{itemize} We also give canonical matrices for *congruence over: \begin{itemize} \item an algebraically closed field with nonidentity involution, and \item the {\it skew field of\/ $\mathbb P$-quaternions} \begin{equation*}\label{1ya} {\mathbb H}=\{a+bi+cj+dk\,|\,a,b,c,d\in\mathbb P\}, \end{equation*} in which $\mathbb P$ is a real closed field, $i^2=j^2=k^2=-1$, $ij=k=-ji,$ $jk=i=-kj,$ and $ki=j=-ik.$ \end{itemize} We consider only two involutions on $\mathbb H$: \emph{quaternionic conjugation} \begin{equation}\label{ne} a+bi+cj+dk\ \longmapsto\ a-bi-cj-dk, \qquad a,b,c,d\in\mathbb P, \end{equation} and \emph{quaternionic semiconjugation} \begin{equation} \label{nen} a+bi+cj+dk\ \longmapsto\ a-bi+cj+dk, \qquad a,b,c,d\in\mathbb P, \end{equation} because if an involution on $\mathbb H$ is not quaternionic conjugation, then it becomes quaternionic semiconjugation after a suitable reselection of the imaginary units $i,j,k$; see \cite[Lemma 2.2]{ser_iso}. 
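As a quick sanity check of these definitions, the following Python sketch (ours; quaternions are modelled as integer $4$-tuples $(a,b,c,d)$ purely for illustration) implements the Hamilton product and verifies that quaternionic conjugation \eqref{ne} and quaternionic semiconjugation \eqref{nen} are both involutions of $\mathbb H$, i.e.\ satisfy $\overline{xy}=\bar y\,\bar x$ and $\bar{\bar x}=x$.

```python
import random

def qmul(p, q):
    # Hamilton product of quaternions stored as (a, b, c, d) = a + bi + cj + dk,
    # using i^2 = j^2 = k^2 = -1, ij = k = -ji, jk = i = -kj, ki = j = -ik.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def conj(q):
    # quaternionic conjugation: a + bi + cj + dk -> a - bi - cj - dk
    a, b, c, d = q
    return (a, -b, -c, -d)

def semiconj(q):
    # quaternionic semiconjugation: a + bi + cj + dk -> a - bi + cj + dk
    a, b, c, d = q
    return (a, -b, c, d)

random.seed(0)
for bar in (conj, semiconj):
    for _ in range(100):
        x = tuple(random.randint(-5, 5) for _ in range(4))
        y = tuple(random.randint(-5, 5) for _ in range(4))
        assert bar(bar(x)) == x                         # involutive
        assert bar(qmul(x, y)) == qmul(bar(y), bar(x))  # anti-multiplicative
```

Both checks pass exactly on integer quaternions; for semiconjugation the identity $\overline{xy}=\bar y\,\bar x$ can also be verified by hand on the basis $1,i,j,k$ and extended by bilinearity.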
There is a natural one-to-one correspondence \begin{equation*}\label{1ye} \left\{\parbox{5cm}{$\ $algebraically closed fields\\ with nonidentity involution}\right\} \quad\longleftrightarrow \quad \bigl\{\text{real closed fields}\bigr\} \end{equation*} sending an algebraically closed field with nonidentity involution to its fixed field. This follows from our next lemma, in which we collect known results about such fields. \begin{lemma}\label{l00} {\rm(a)} Let\/ $\mathbb P$ be a real closed field and\/ let $\mathbb K$ be its algebraic closure. Then $\charact{\mathbb P}=0$ and \begin{equation}\label{1pp} \mathbb K={\mathbb P}+{\mathbb P}i,\qquad i^2=-1. \end{equation} The field\/ ${\mathbb P}$ has a unique linear ordering $\leqslant$ such that \begin{equation*}\label{slr} \text{$a>0$ and\, $b>0$} \quad\Longrightarrow\quad \text{$a+b>0$ and\, $ab>0$}. \end{equation*} The positive elements of\/ $\mathbb P$ with respect to this ordering are the squares of nonzero elements. {\rm(b)} Let\/ $\mathbb K$ be an algebraically closed field with nonidentity involution. Then $\charact\mathbb K=0$, \begin{equation}\label{123} \mathbb P:=\bigl\{k\in{\mathbb K}\,\bigr|\, \bar{k}=k\bigr\} \end{equation} is a real closed field, \begin{equation}\label{1pp11} \mathbb K={\mathbb P}+{\mathbb P}i,\qquad i^2=-1, \end{equation} and the involution is ``complex conjugation'': \begin{equation}\label{1ii} \overline{a+bi}=a-bi,\qquad a,b\in\mathbb P. \end{equation} {\rm(c)} Every algebraically closed field $\mathbb F$ of characteristic $0$ contains at least one real closed subfield. Hence, $\mathbb F$ can be represented in the form \eqref{1pp11} and possesses the involution \eqref{1ii}. \end{lemma} \begin{proof} (a) Let $\mathbb K$ be the algebraic closure of $\mathbb P$; by the definition of a real closed field, $1<\dim_{\mathbb P}{\mathbb K}<\infty$. By Corollary 2 in \cite[Chapter VIII, \S 9]{len}, we have $\charact{\mathbb P}=0$ and \eqref{1pp}.
The other statements of part (a) follow from Proposition 3 and Theorem 1 in \cite[Chapter XI, \S 2]{len}. (b) If $\mathbb K$ is an algebraically closed field with nonidentity involution $a\mapsto \bar{a}$, then this involution is an automorphism of order 2. Hence ${\mathbb K}$ has degree $2$ over its \emph{fixed field} ${\mathbb P}$ defined in \eqref{123}. Thus, ${\mathbb P}$ is a real closed field. Let $i\in \mathbb K$ be such that $i^2=-1$. By (a), every element of ${\mathbb K}$ is uniquely represented in the form $k=a+bi$ with $a,b\in{\mathbb P}$. The involution is an automorphism of ${\mathbb K}$, so $\bar{i}^2=-1$. Thus, $\bar{i}=-i$ and the involution has the form \eqref{1ii}. (c) This statement is proved in \cite[\S 82, Theorem 7c]{wan}. \end{proof} For notational convenience, write \[ A^{-T}:=(A^{-1})^T\quad\text{and}\quad A^{-*}:=(A^{-1})^*. \] The \emph{cosquare} of a nonsingular matrix $A$ is $A^{-T}A$. If two nonsingular matrices are congruent then their cosquares are similar because \[ (S^TAS)^{-T}(S^TAS) =S^{-1}A^{-T}AS. \] If $\Phi$ is a cosquare, every matrix $C$ such that $C^{-T}C=\Phi$ is called a {\it cosquare root} of $\Phi$; we choose any cosquare root and denote it by $\sqrt[T]{\Phi}$. Analogously, $A^{-*}A$ is the \emph{*cosquare} of $A$. If two nonsingular matrices are *congruent then their *cosquares are similar. If $\Phi$ is a *cosquare, every matrix $C$ such that $C^{-*}C=\Phi$ is called a {\it *cosquare root} of $\Phi$; we choose any *cosquare root and denote it by $\sqrt[\displaystyle *]{\Phi}$.\label{pppp} For each real closed field, we denote by $\leqslant$ the ordering from Lemma \ref{l00}(a). Let $\mathbb K=\mathbb P+\mathbb Pi$ be an algebraically closed field with nonidentity involution represented in the form \eqref{1pp11}. 
By the \emph{absolute value} of $k=a+bi\in\mathbb K$ $(a,b\in\mathbb P)$ we mean the unique nonnegative ``real'' square root of $a^2+b^2$, which we write as
\begin{equation}\label{1kk}
|k|:=\sqrt{a^2+b^2}
\end{equation}
(this definition is unambiguous since $\mathbb K$ is represented in the form \eqref{1pp11} uniquely up to replacement of $i$ by $-i$).

For each $M\in{\mathbb K}^{m\times n}$, its {\it realification} $M^{\mathbb P}\in{\mathbb P}^{2m\times 2n}$ is obtained by replacing every entry $a+bi$ of $M$ by the $2\times 2$ block
\begin{equation}\label{1j}
\begin{bmatrix}
a&-b\\b&a
\end{bmatrix}.
\end{equation}
Define the $n$-by-$n$ matrices
\begin{equation*}\label{1aaaa}
\Delta_n(\lambda):= \begin{bmatrix} 0&&&\lambda \\ &&\ddd&i\\ &\lambda&\ddd&\\ \lambda&i&&0 \end{bmatrix},\qquad J_n(\lambda) := \begin{bmatrix} \lambda&1&&0\\ &\lambda&\ddots&\\ &&\ddots&1\\ 0&&&\lambda \end{bmatrix},
\end{equation*}
\begin{equation*}\label{1aa}
\Gamma_n := \begin{bmatrix} 0&&&&&\ddd \\&&&&1& \ddd\\ &&&-1&-1&\\ &&1&1&\\ &-1&-1& &&\\ 1&1&&&&0 \end{bmatrix},
\end{equation*}
and
\begin{equation*}\label{1aaa}
\Gamma'_n:= \begin{bmatrix} 0&& & &&-1\\ && & & \ddd &1\\ && & -1& \ddd & \\ && 1& 1&& \\ &\ddd&\ddd&&& \\ 1&1&& & &0 \end{bmatrix} \text{\quad if $n$ is even},
\end{equation*}
\begin{equation*}\label{1aaa2}
\Gamma'_n:= \begin{bmatrix} 0&&& &&&1\\ &&& && \ddd &0\\ &&&& 1& \ddd & \\ &&& 1&0&& \\ &&1&1 &&& \\ & \ddd &\ddd& &&&\\ 1&1&& &&&0 \end{bmatrix} \text{\quad if $n$ is odd};
\end{equation*}
the middle groups of entries are in the center of $\Gamma'_n$. The {\it skew sum} of two matrices $A$ and $B$ is
\begin{equation*}\label{1.2a}
[A\,\diagdown\, B]:= \begin{bmatrix}0&B\\A &0 \end{bmatrix}.
\end{equation*}
The main result of this article is the following theorem, which is proved in Section \ref{secmat}. It was obtained for complex matrices in \cite{hor-ser, hor-ser_can}.
\begin{theorem} \label{t1.1}
{\rm(a)} Over an algebraically closed field of characteristic different from $2$, every square matrix is congruent to a direct sum, determined uniquely up to permutation of summands, of matrices of the form:
\begin{itemize}
\item [{\rm(i)}] $J_n(0)$;
\item [{\rm(ii)}] $[J_n(\lambda)\,\diagdown\, I_n]$, in which $\lambda\ne (-1)^{n+1}$, $\lambda\ne 0$, and $\lambda$ is determined up to replacement by $\lambda^{-1}$;
\item [{\rm(iii)}] $\sqrt[T]{ J_n ((-1)^{n+1})}$.
\end{itemize}
Instead of the matrix {\rm(iii)}, one may use\/ $\Gamma_n$, or\/ $\Gamma'_n$, or any other nonsingular matrix whose cosquare is similar to $J_n((-1)^{n+1})$; these matrices are congruent to {\rm(iii)}.

{\rm (b)} Over an algebraically closed field of characteristic $2$, every square matrix is congruent to a direct sum, determined uniquely up to permutation of summands, of matrices of the form:
\begin{itemize}
\item [{\rm(i)}] $J_n(0)$;
\item [{\rm(ii)}] $[J_n(\lambda)\,\diagdown\, I_n]$, in which $\lambda$ is nonzero and is determined up to replacement by $\lambda^{-1}$;
\item [{\rm(iii)}] $\sqrt[T]{J_{n}(1)}$ with odd $n$; no blocks of the form $[J_n(1)\,\diagdown\, I_n]$ are permitted for any odd $n$ for which a block $\sqrt[T]{J_{n}(1)}$ occurs in the direct sum.\footnote{If the direct sum would otherwise contain both $\sqrt[T]{J_{n}(1)}$ and $[J_n(1)\,\diagdown\, I_n]$ for the same odd $n$, then this pair of blocks must be replaced by three blocks $\sqrt[T]{J_{n}(1)}$. This restriction is imposed to ensure uniqueness of the canonical direct sum because $\sqrt[T]{J_{n}(1)} \oplus [J_n(1)\,\diagdown\, I_n]$ is congruent to $\sqrt[T]{J_{n}(1)} \oplus \sqrt[T]{J_{n}(1)} \oplus\sqrt[T]{J_{n}(1)}$; see \cite{ser0} and Remark \ref{rem3}.}
\end{itemize}
Instead of the matrix {\rm(iii)}, one may use\/ $\Gamma'_n$ or any other nonsingular matrix whose cosquare is similar to $J_n(1)$; these matrices are congruent to {\rm(iii)}.
{\rm(c)} Over an algebraically closed field with nonidentity involution, every square matrix is *congruent to a direct sum, determined uniquely up to permutation of summands, of matrices of the form: \begin{itemize} \item [{\rm(i)}] $J_n(0)$; \item [{\rm(ii)}] $[J_n(\lambda)\,\diagdown\, I_n]$, in which $|\lambda|\ne 1$ $($see \eqref{1kk}$)$, $\lambda\ne 0$, and $\lambda$ is determined up to replacement by $\bar{\lambda}^{-1}$ $($alternatively, in which $|\lambda|>1)$; \item [{\rm(iii)}] $\pm\sqrt[\displaystyle *]{ J_n (\lambda)}$, in which $|\lambda|=1$. \end{itemize} Instead of the matrices {\rm(iii)}, one may use any of the matrices \begin{equation}\label{jde} \mu\sqrt[\displaystyle *]{ J_n (1)},\quad \mu\Gamma_n,\quad \mu\Gamma_n',\quad \mu\Delta_n(1),\quad \mu A \end{equation} with $|\mu|=1$, where $A$ is any $n\times n$ matrix whose *cosquare is similar to a Jordan block. {\rm (d)} Over a real closed field\/ $\mathbb P$ whose algebraic closure is represented in the form \eqref{1pp}, every square matrix is congruent to a direct sum, determined uniquely up to permutation of summands, of matrices of the form: \begin{itemize} \item [{\rm(i)}] $J_n(0)$; \item [{\rm(ii)}] $[J_n(a)\,\diagdown\, I_n]$, in which $0\ne a \in{\mathbb P}$, $a\ne (-1)^{n+1}$, and $a$ is determined up to replacement by $a^{-1}$ $($alternatively, $a \in{\mathbb P}$ and $|a|>1$ or $a= (-1)^{n})$; \item [{\rm(iii)}] $\pm \sqrt[T]{ J_n((-1)^{n+1})}$; \item [{\rm(ii$'$)}] $[J_n(\lambda)^{\mathbb P} \,\diagdown\, I_{2n}]$, in which $\lambda\in({\mathbb P}+{\mathbb P}i)\smallsetminus {\mathbb P}$, $|\lambda|\ne 1$, and $\lambda$ is determined up to replacement by $\bar{\lambda}$, $\lambda^{-1}$, or $\bar{\lambda}^{-1}$ $($alternatively, $\lambda=a+bi$ with $a,b\in\mathbb P$, $b>0$, and $a^2+b^2>1)$; \item [{\rm(iii$'$)}] $\pm\sqrt[T]{ J_n(\lambda)^{\,\mathbb P}}$, in which $\lambda\in({\mathbb P}+{\mathbb P}i)\smallsetminus {\mathbb P}$, $|\lambda|=1$, and $\lambda$ is determined up to 
replacement by $\bar{\lambda}$ $($alternatively, $\lambda=a+bi$ with $a,b\in\mathbb P$, $b>0$, and $a^2+b^2=1)$.
\end{itemize}
Instead of {\rm(iii)}, one may use $\pm \Gamma_n$ or $\pm \Gamma_n'$.
\noindent Instead of {\rm(iii$'$)}, one may use $\pm\big(\sqrt[ \displaystyle *]{ J_n(\lambda)}\,\big)^{\mathbb P}$ with the same $\lambda$, or any of the matrices
\begin{equation}\label{dsk}
\big((c+i) \Gamma_n\big)^{\mathbb P},\quad\big((c+i) \Gamma'_n\big)^{\mathbb P},\quad\Delta_n(c+i)^{\mathbb P}
\end{equation}
with $0\ne c\in{\mathbb P}$.

{\rm(e)} Over a skew field of\/ $\mathbb P$-quaternions $($$\mathbb P$ is real closed$)$ with quaternionic conjugation \eqref{ne} or quaternionic semiconjugation \eqref{nen}, every square matrix is *congruent to a direct sum, determined uniquely up to permutation of summands, of matrices of the form:
\begin{itemize}
\item [{\rm(i)}] $J_n(0)$;
\item [{\rm(ii)}] $[J_n(\lambda)\,\diagdown\, I_n]$, in which $0\ne\lambda\in\mathbb P +\mathbb Pi$, $|\lambda|\ne 1$, and $\lambda$ is determined up to replacement by $\bar{\lambda}$, $\lambda^{-1}$, or $\bar{\lambda}^{-1}$ $($alternatively, $\lambda=a+bi$ with $a,b\in\mathbb P$, $b\geqslant 0$, and $a^2+b^2>1)$;
\item [{\rm(iii)}] $\varepsilon \sqrt[\displaystyle *]{ J_n (\lambda)}$, in which $\lambda\in\mathbb P +\mathbb Pi$, $|\lambda|=1$, $\lambda$ is determined up to replacement by $\bar{\lambda}$, and
\begin{equation}\label{kki}
\varepsilon :=
\begin{cases}
1,& \text{if the involution is \eqref{ne} and $\lambda =(-1)^{n}$,}\\
& \text{or the involution is \eqref{nen} and $\lambda =(-1)^{n+1}$},\\
\pm 1,& \text{otherwise.}
\end{cases}
\end{equation}
\end{itemize}
Instead of {\rm(iii)}, one may use
\begin{equation}\label{gyo}
(a+bi)\Gamma_n \quad\text{or}\quad (a+bi)\Gamma_n',
\end{equation}
in which $a,b\in\mathbb P$, $a^2+b^2=1$, and
\[
\begin{cases}
b\geqslant 0 & \text{if the involution is \eqref{ne}}, \\
a\geqslant 0 & \text{if the involution is \eqref{nen}}.
\end{cases} \] Instead of {\rm(iii)}, one may also use \begin{equation}\label{gyo1} (a+bi)\Delta_n(1), \end{equation} in which $a,b\in\mathbb P$, $a^2+b^2=1$, and \[ \begin{cases} a\geqslant 0, & \text{if the involution is \eqref{ne}, $n$ is even,} \\ &\text{and if the involution is \eqref{nen}, $n$ is odd}, \\ b\geqslant 0, & \text{otherwise}. \end{cases} \] \end{theorem} In this theorem ``determined up to replacement by'' means that a block is congruent or *congruent to the block obtained by making the indicated replacements. \begin{remark}\label{rem3} Theorem \ref{tetete} ensures that each system of linear mappings and bilinear forms on vector spaces over an algebraically or real closed field as well as each system of linear mappings and sesquilinear forms on vector spaces over an algebraically closed field or real quaternions with nonidentity involution, decomposes into a direct sum of indecomposable systems that is unique up to isomorphisms of summands. Over any field of characteristic not $2$, two decompositions into indecomposables may have nonisomorphic direct summands, but Theorem \ref{tetete1} tells us that the number of indecomposable direct summands does not depend on the decomposition. However, over an algebraically closed field $\mathbb F$ of characteristic $2$, not even the number of indecomposable direct summands is invariant. For example, the matrices \begin{equation}\label{1k} [\,1\,]\oplus [\,1\,]\oplus [\,1\,], \qquad \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \oplus [\,1\,] \end{equation} are congruent over $\mathbb F$ since \[ \begin{bmatrix} 1&0&1\\1&1&0\\1&1&1 \end{bmatrix} \begin{bmatrix} 1&0&0\\0&1&0\\0&0&1 \end{bmatrix} \begin{bmatrix} 1&1&1\\0&1&1\\1&0&1 \end{bmatrix} =\begin{bmatrix} 0&1&0\\1&0&0\\0&0&1 \end{bmatrix}, \] but each of the direct summands in \eqref{1k} is indecomposable by Theorem \ref{t1.1}(b). 
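This congruence can be checked by a direct computation in the prime subfield $\mathbb F_2\subseteq \mathbb F$. The following minimal sketch (Python, not part of the original text) multiplies the three displayed matrices modulo 2:

```python
# Numeric check of the congruence in (1k) over GF(2): for
# S = [[1,1,1],[0,1,1],[1,0,1]] one has S^T I S = [[0,1,0],[1,0,0],[0,0,1]].
def matmul2(A, B):
    """Multiply two integer matrices and reduce the entries modulo 2."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) % 2 for j in range(p)]
            for i in range(n)]

S = [[1, 1, 1],
     [0, 1, 1],
     [1, 0, 1]]
ST = [list(row) for row in zip(*S)]          # transpose of S
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

result = matmul2(matmul2(ST, I3), S)         # S^T I S over GF(2)
print(result)  # [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
```

The left factor in the displayed identity is exactly $S^T$, so the computation reproduces the matrix $[\,1\,]\oplus[\,1\,]\oplus[\,1\,]$ transformed to $\bigl[\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\bigr]\oplus[\,1\,]$.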
The cancellation theorem does not hold for bilinear forms over $\mathbb F$: the matrices \eqref{1k} are congruent but the matrices \[ [\,1\,]\oplus [\,1\,], \qquad \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \] are not congruent because they are canonical. \end{remark} \subsection{Canonical matrices for *congruence over a field of characteristic different from 2} \label{sub_02} Canonical matrices for congruence and *congruence over a field of characteristic different from 2 were obtained in {\cite[Theorem 3]{ser_izv}} up to classification of Hermitian forms. They were based on the Frobenius canonical matrices for similarity. In this section we rephrase \cite[Theorem 3]{ser_izv} in terms of an \emph{arbitrary} set of canonical matrices for similarity. This flexibility is used in the proof of Theorem \ref{t1.1}. The same flexibility is used in \cite{hor-ser} to construct simple canonical matrices for congruence or *congruence over $\mathbb C$, and in \cite{ser_iso} to construct simple canonical matrices of pairs $(A,B)$ in which $B$ is a nondegenerate Hermitian or skew-Hermitian form and $A$ is an isometric operator over an algebraically or real closed field or over real quaternions. In this section $\mathbb F$ denotes a field of characteristic different from 2 with involution $a\mapsto \bar{a}$, which can be the identity. Thus, congruence is a special case of *congruence. For each polynomial \[ f(x)=a_0x^n+a_1x^{n-1}+\dots +a_n\in \mathbb F[x], \] we define the polynomials \begin{align*}\label{mau} \bar f(x)&:=\bar a_0x^n+\bar a_1x^{n-1}+\dots+\bar a_n,\\ f^{\vee}(x)&:=\bar a_n^{-1}(\bar a_nx^n+\dots+\bar a_1x+\bar a_0)\quad\text{if } a_n\ne 0. \end{align*} The following lemma was proved in \cite[Lemma 6]{ser_izv} (or see \cite[Lemma 2.3]{ser_iso}). \begin{lemma} \label{LEMMA 7} Let $\mathbb F$ be a field with involution $a\mapsto \bar a$, let $p(x) = p^{\vee}(x)$ be an irreducible polynomial over $\mathbb F$, and let $r$ be the integer part of $(\deg p(x))/2$. 
Consider the field \begin{equation}\label{alft} \mathbb F(\kappa) = \mathbb F[x]/p(x)\mathbb F[x],\qquad \kappa:= x+p(x)\mathbb F[x], \end{equation} with involution \begin{equation}\label{alfta} f(\kappa)^{\circ} := \bar f(\kappa^{-1}). \end{equation} Then each element of\/ $\mathbb F(\kappa)$ on which the involution acts identically is uniquely representable in the form $q(\kappa)$, in which \begin{equation}\label{ser13} q(x)=a_rx^r+\dots+ a_1x +a_0+\bar a_1x^{-1}+\dots+\bar a_rx^{-r}, \quad a_0 = \bar a_0, \end{equation} $a_0,\dots a_r\in\mathbb F;$ if $\deg p(x) = 2r$ is even, then \begin{equation*}\label{uvp} a_r= \begin{cases} 0 &\text{if the involution on $\mathbb F$ is the identity}, \\ \bar a_r &\text{if the involution on $\mathbb F$ is not the identity and $p(0)\ne 1$},\\ -\bar a_r &\text{if the involution on $\mathbb F$ is not the identity and $p(0)=1$}. \end{cases} \end{equation*} \vskip-2em \hbox{\qedsymbol} \end{lemma} We say that a square matrix is \emph{indecomposable for similarity} if it is not similar to a direct sum of square matrices of smaller sizes. Denote by ${\cal O}_{\mathbb F}$ any maximal set of nonsingular indecomposable canonical matrices for similarity;\label{papage} this means that each nonsingular indecomposable matrix is similar to exactly one matrix from ${\cal O}_{\mathbb F}$. For example, ${\cal O}_{\mathbb F}$ may consist of all nonsingular {\it Frobenius blocks}, i.e., the matrices \begin{equation}\label{3.lfo} \Phi=\begin{bmatrix} 0&& 0&-c_n\\1&\ddots&&\vdots \\&\ddots&0&-c_2\\ 0&&1& -c_1 \end{bmatrix} \end{equation} whose characteristic polynomials $\chi_{\Phi}(x)$ are powers of irreducible monic polynomials $p_{\Phi}(x)\ne x$: \begin{equation}\label{ser24} \chi_{\Phi}(x)=p_{\Phi}(x)^s =x^n+ c_1x^{n-1}+\dots+c_n. \end{equation} If $\mathbb F$ is an algebraically closed field, then we may take ${\cal O}_{\mathbb F}$ to be all nonsingular Jordan blocks. 
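As a concrete illustration (a sketch, not from the paper, assuming numpy; `frobenius_block` is a hypothetical helper), the Frobenius block \eqref{3.lfo} can be generated from the coefficients $c_1,\dots,c_n$ of its characteristic polynomial and checked via Cayley--Hamilton:

```python
import numpy as np

# Sketch: build the Frobenius block (3.lfo) for a monic characteristic
# polynomial chi(x) = x^n + c_1 x^{n-1} + ... + c_n.
def frobenius_block(c):
    """n-by-n matrix with ones on the subdiagonal and last column -c_n,...,-c_1."""
    n = len(c)
    Phi = np.zeros((n, n))
    Phi[1:, :-1] = np.eye(n - 1)                     # subdiagonal of ones
    Phi[:, -1] = [-c[n - 1 - i] for i in range(n)]   # top to bottom: -c_n, ..., -c_1
    return Phi

# chi(x) = p(x) with p(x) = x^2 + 1 irreducible over the rationals
Phi = frobenius_block([0.0, 1.0])                    # c_1 = 0, c_2 = 1
print(Phi)                                           # [[0, -1], [1, 0]]
print(np.allclose(Phi @ Phi + np.eye(2), 0))         # chi(Phi) = 0, prints True
```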
It suffices to construct *cosquare roots $\sqrt[\displaystyle *]{\Phi}$ (see page \pageref{pppp}) only for $\Phi\in{\cal O}_{\mathbb F}$: then we can take \begin{equation}\label{ndw} \sqrt[\displaystyle *]{\Psi} =S^*\sqrt[\displaystyle *]{\Phi}S\qquad\text{if $\Psi =S^{-1}\Phi S$ and $\sqrt[\displaystyle *]{\Phi}$ exists} \end{equation} since $\Phi=A^{-*}A$ implies $S^{-1}\Phi S=(S^*AS)^{-*}(S^*AS). $ Existence conditions and an explicit form of $\sqrt[\displaystyle *]{\Phi}$ for Frobenius blocks $\Phi$ over a field of characteristic not $2$ were established in \cite[Theorem 7]{ser_izv}; this result is presented in Lemma \ref{lsdy1}. In the proof of Theorem \ref{t1.1}, we take another set ${\cal O}_{\mathbb F}$ and construct simpler *cosquare roots over an algebraically or real closed field $\mathbb F$. The version of the following theorem given in \cite[Theorem 3]{ser_izv} considers the case in which ${\cal O}_{\mathbb F}$ consists of all nonsingular Frobenius blocks. \begin{theorem} \label{Theorem 5} {\rm(a)} Let\/ $\mathbb F$ be a field of characteristic different from $2$ with involution $($which can be the identity$)$. Let ${\cal O}_{\mathbb F}$ be a maximal set of nonsingular indecomposable canonical matrices for similarity over $\mathbb F$. Every square matrix $A$ over\/ $\mathbb F$ is *congruent to a direct sum of matrices of the following types: \begin{itemize} \item [{\rm(i)}] $J_n(0)$; \item [{\rm(ii)}] $[\Phi\,\diagdown\, I_n]$, in which $\Phi\in {\cal O}_{\mathbb F}$ is an $n\times n$ matrix such that $\sqrt[\displaystyle *]{\Phi}$ does not exist $($see Lemma {\rm\ref{lsdy1}}$)$; and \item [{\rm(iii)}] $\sqrt[\displaystyle *]{\Phi}q(\Phi)$, in which $\Phi\in{\cal O}_{\mathbb F}$ is such that $\sqrt[\displaystyle *]{\Phi}$ exists and $q(x)\ne 0$ has the form \eqref{ser13} in which $r$ is the integer part of $(\deg p_{\Phi}(x))/2$ and $p_{\Phi}(x)$ is the irreducible divisor of the characteristic polynomial of $\Phi$. 
\end{itemize} The summands are determined to the following extent: \begin{description} \item [Type (i)] uniquely. \item [Type (ii)] up to replacement of $\Phi$ by the matrix $\Psi\in{\cal O}_{\mathbb F}$ that is similar to $\Phi^{-*}$ $($i.e., whose characteristic polynomial is $\chi_{\Phi}^{\vee}(x))$. \item [Type (iii)] up to replacement of the whole group of summands \[ \sqrt[\displaystyle *]{\Phi}q_1(\Phi) \oplus\dots\oplus \sqrt[\displaystyle *]{\Phi}q_s(\Phi) \] with the same $\Phi$ by a direct sum \[ \sqrt[\displaystyle *]{\Phi}q'_1(\Phi) \oplus\dots\oplus \sqrt[\displaystyle *]{\Phi}q'_s(\Phi) \] such that each $q'_i(x)$ is a nonzero function of the form \eqref{ser13} and the Hermitian forms \begin{gather*}\label{777} q_1(\kappa)x_1^{\circ}x_1+\dots+ q_s(\kappa)x_s^{\circ}x_s, \\\label{777s} q'_1(\kappa)x_1^{\circ}x_1+\dots+ q'_s(\kappa)x_s^{\circ}x_s \end{gather*} are equivalent over the field \eqref{alft} with involution \eqref{alfta}. \end{description} {\rm(b)} In particular, if\/ $\mathbb F$ is an algebraically closed field of characteristic different from $2$ with the identity involution, then the summands of type {\rm(iii)} can be taken equal to $\sqrt[\displaystyle *]{\Phi}$. If\/ $\mathbb F$ is an algebraically closed field with nonidentity involution, or a real closed field, then the summands of type {\rm(iii)} can be taken equal to $\pm\sqrt[\displaystyle *]{\Phi}$. In these cases the summands are uniquely determined by the matrix $A$. \end{theorem} Let $$f(x)= \gamma_0x^m + \gamma_1x^{m-1}+\dots+\gamma_m\in \mathbb F[x],\qquad \gamma_0\ne 0\ne\gamma_m.$$ A vector $(a_1, a_{2},\dots, a_n)$ over $\mathbb F$ is said to be \emph{$f$-recurrent} if $n\leqslant m$, or if \[ \gamma_0 a_{l} + \gamma_{1}a_{l+1}+\dots+ \gamma_ma_{l+m}=0,\qquad l=1,2,\dots,n - m \] (by definition, it is not $f$-recurrent if $m=0$). Thus, this vector is completely determined by any fragment of length $m$. 
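To make the recurrence concrete (a numeric sketch assuming numpy, with the identity involution and entries in $\mathbb R$; it anticipates the Toeplitz construction of Lemma \ref{lsdy1} below): for $\chi_\Phi(x)=x^2+1$ the $\chi_\Phi$-recurrence reads $a_l+a_{l+2}=0$, so the $\chi_\Phi$-recurrent extension of the length-2 fragment $(a_0,a_1)=(1,1)$ is $(a_{-1},a_0,a_1)=(-1,1,1)$, and the resulting Toeplitz matrix is a cosquare root of the Frobenius block:

```python
import numpy as np

# chi_Phi(x) = x^2 + 1: recurrence a_l + a_{l+2} = 0, fragment (a_0, a_1) = (1, 1)
Phi = np.array([[0.0, -1.0],          # Frobenius block of x^2 + 1
                [1.0,  0.0]])
A = np.array([[1.0, -1.0],            # Toeplitz matrix [a_{i-j}]:
              [1.0,  1.0]])           # [[a_0, a_{-1}], [a_1, a_0]]

cosquare = np.linalg.inv(A).T @ A     # A^{-T} A
print(np.allclose(cosquare, Phi))     # True: A is a cosquare root of Phi
```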
The following lemma was stated in \cite[Theorem 7]{ser_izv} but only a sketch of the proof was given. \begin{lemma} \label{lsdy1} Let $\mathbb F$ be a field of characteristic not $2$ with involution $a\mapsto \bar a$ $($possibly, the identity$)$. Let $\Phi\in\mathbb F^{n\times n}$ be nonsingular and indecomposable for similarity; thus, its characteristic polynomial is a power of some irreducible polynomial $p_{\Phi}(x)$. {\rm(a)} $\sqrt[\displaystyle *]{\Phi}$ exists if and only if \begin{equation}\label{lbdr} p_{\Phi}(x) = p_{\Phi}^{\vee}(x),\ \ \text{and} \end{equation} \begin{equation}\label{4.adlw} \text{if the involution on $\mathbb F$ is the identity, also $p_{\Phi}(x)\ne x + (-1)^{n+1}$}. \end{equation} {\rm(b)} If \eqref{lbdr} and \eqref{4.adlw} are satisfied and $\Phi$ is a nonsingular Frobenius block \eqref{3.lfo} with characteristic polynomial \begin{equation}\label{ser24lk} \chi_{\Phi}(x)=p_{\Phi}(x)^s =x^n+ c_1x^{n-1}+\dots+c_n, \end{equation} then for $\sqrt[\displaystyle *]{\Phi}$ one can take the Toeplitz matrix \begin{equation}\label{okjd} \sqrt[\displaystyle *]{\Phi}:= [a_{i-j}]= \begin{bmatrix} a_0 &a_{-1}&\ddots&a_{1-n} \\a_{1}&a_0 &\ddots&\ddots \\\ddots&\ddots& \ddots&a_{-1} \\ a_{n-1}&\ddots& a_{1}&a_0 \end{bmatrix}, \end{equation} whose vector of entries $(a_{1-n},a_{2-n},\dots,a_{n-1})$ is the $\chi_{\Phi}$-recurrent extension of the vector \begin{equation}\label{ksy} v=(a_{1-m},\dots,a_{m}) =(a,0,\dots,0,\bar a) \end{equation} of length \begin{equation}\label{leg} 2m= \begin{cases} n & \text{if $n$ is even}, \\ n+1 & \text{if $n$ is odd,} \end{cases} \end{equation} in which \begin{equation} \label{mag} a:= \begin{cases} 1 & \text{if $n$ is even, except for the case} \\ & \qquad p_{\Phi}(x)=x+c\ \text{with }c ^{n-1}=-1, \\ \chi_{\Phi}(-1)& \text{if $n$ is odd and $p_{\Phi}(x)\ne x+1$,} \\ e-\bar e&\text{otherwise, with any fixed $\bar e\ne e\in\mathbb F$}. 
\end{cases} \end{equation} \end{lemma} \begin{proof} (a) Let $\Phi\in\mathbb F^{n\times n}$ be nonsingular and indecomposable for similarity. We prove here that if $\sqrt[\displaystyle *]{\Phi}$ exists then the conditions \eqref{lbdr} and \eqref{4.adlw} are satisfied; we prove the converse statement in (b). Suppose $A:=\sqrt[\displaystyle *]{\Phi}$ exists. Since \begin{equation} \label{mau3} A = A^*\Phi= \Phi^*A\Phi, \end{equation} we have $A\Phi A^{-1}=\Phi^{-*}$ and \begin{align*} \chi_{\Phi}(x) &= \det(xI-\Phi^{-*})= \det(xI-\bar\Phi^{-1})= \det((-\bar\Phi^{-1})(I -x\bar\Phi))= \\&=\det(-\bar\Phi^{-1})\cdot x^n\cdot \det(x^{-1}I -\bar\Phi)=\chi_{\Phi}^{\vee}(x). \end{align*} In the notation \eqref{ser24}, $p_{\Phi}(x)^s = p_{\Phi}^{\vee}(x)^s$, which verifies \eqref{lbdr}. It remains to prove \eqref{4.adlw}. Because of \eqref{ndw}, we may assume that $\Phi$ is a nonsingular Frobenius block \eqref{3.lfo} with characteristic polynomial \eqref{ser24lk}. If $a_{ij}$ are the entries of $A$, then we define $a_{i,n+1}$ by $A\Phi= [a_{ij}] \Phi=[a_{i,j+1}]$, $a_{n+1,j}$ by $\Phi^*A\Phi= \Phi^*[a_{i,j+1}]= [a_{i+1,j+ 1}]$; and we then use \eqref{mau3} to obtain $[a_{ij}]= [a_{i+1,j+ 1}]$. Hence the matrix entries depend only on the difference of the indices and $A$ has the form \eqref{okjd} with $a_{i-j}:=a_{ij}$. That $(a_{1-n},a_{2-n},\dots,a_{n-1})$ is $\chi_{\Phi}$-recurrent follows from \begin{equation}\label{fol} [a_{i-j}]\Phi=[a_{i-j-1}]. \end{equation} In view of \begin{equation}\label{lyf1} \begin{aligned} &\chi_{\Phi}(x) =x^n+ c_1x^{n-1}+\dots+c_{n-1}x+c_n \\=&\chi_{\Phi}^{\vee}(x)= \bar c_n^{-1}(\bar c_nx^n+ \bar c_{n-1}x^{n-1} +\dots+\bar c_1x+1), \end{aligned} \end{equation} the vector $(\bar a_{n-1},\dots,\bar a_{1-n})$ is $\chi_{\Phi}$-recurrent, so $ [a_{i-j}]= A= A^*\Phi = [\bar a_{j-i+1}], $ and we have \begin{equation}\label{ser27} (a_{1-n},\dots,a_{n-1})= (a_{1-n},\dots,a_0, \bar a_0,\dots, \bar a_{2-n}). 
\end{equation} Since this vector is $\chi_{\Phi}$-recurrent, it is completely determined by the fragment \begin{equation}\label{ser28} (a_{1-m},\dots, a_0, \bar a_0,\dots, \bar a_{1-m}) \end{equation} of length $2m$ defined in \eqref{leg}. Write \begin{equation}\label{ser25a} \mu_{\Phi}(x):= p_{\Phi}(x)^{s-1}=x^t+ b_1x^{t-1}+\dots+b_t,\qquad b_0:=1. \end{equation} Suppose that \eqref{4.adlw} is not satisfied; i.e., the involution is the identity and $p_{\Phi}(x)= x + (-1)^{n-1}$. Let us prove that \begin{equation}\label{kyyyd} \text{the vector \eqref{ser28} is $\mu_{\Phi}(x)$-recurrent. } \end{equation} If $n = 2m$ then $\mu_{\Phi}(x) =(x-1)^{2m-1}$ and \eqref{kyyyd} is obvious. Let $n = 2m - 1$. Then the coefficients of $\chi_{\Phi}(x) =(x+1)^{n}$ in \eqref{ser24lk} and $\mu_{\Phi}(x) =(x+1)^{n-1}$ in \eqref{ser25a} are binomial coefficients: \[ c_i=\binom ni,\qquad b_i=\binom {n-1}i. \] Standard identities for binomial coefficients ensure that \[ c_i=b_i+b_{i-1}= b_i+b_{n-i},\qquad 0<i<n. \] Thus \eqref{kyyyd} follows since \begin{equation*} \label{ser29} \begin{split} 2[b_0a_{1-m} &+ b_1a_{2-m}+\dots +b_{n-2}a_{3-m} +b_{n-1}a_{2-m}]\\%\nonumber &=(b_0+0)a_{1-m}+ (b_1+b_{n-1})a_{2-m} + (b_2+b_{n-2})a_{3-m}\\ &\qquad +\dots +(b_{n-1}+b_1)a_{2-m} +(0+b_0)a_{1-m}\\ &=c_0a_{1-m}+ c_1a_{2-m}+\dots+ c_na_{1-m}=0 \end{split} \end{equation*} in view of the $\chi_{\Phi}$-recurrence of (\ref{ser28}). But then the $\mu_{\Phi}$-recurrent extension of \eqref{ser28} coincides with \eqref{ser27} and we have $$(0,\dots,0,b_0,\dots,b_t) A= 0$$ (see \eqref{ser25a}), which contradicts our assumption that $A$ is nonsingular. (b) Let $\Phi$ be a nonsingular Frobenius block \eqref{3.lfo} with characteristic polynomial \eqref{ser24lk} satisfying \eqref{lbdr} and \eqref{4.adlw}. 
We first prove the nonsingularity of every Toeplitz matrix $A:=[a_{i-j}]$ whose vector of entries
\begin{equation}\label{oyuyf}
(a_{1-n},a_{2-n},\dots,a_{n-1})
\end{equation}
is $\chi_{\Phi}$-recurrent (and so \eqref{fol} holds) but is not $\mu_{\Phi}$-recurrent. If $w:= (a_{n-1},\dots, a_0)$ is the last row of $A$, then
\begin{equation}\label{ldyf}
w\Phi^{n-1},\ w\Phi^{n-2},\dots, w
\end{equation}
are all the rows of $A$ by \eqref{fol}. If they are linearly dependent, then $wf(\Phi) = 0$ for some nonzero polynomial $f(x)$ of degree less than $n$. If $p_{\Phi}(x)^r$ is the greatest common divisor of $f(x)$ and $\chi_{\Phi}(x)=p_{\Phi}(x)^s$, then $r<s$ and
$$p_{\Phi}(x)^r=f(x)g(x)+ \chi_{\Phi}(x)h(x)\qquad \text{for some }g(x),h(x)\in\mathbb F[x].$$
Since $wf(\Phi) = 0$ and $w\chi_{\Phi}(\Phi) = 0$, we have $wp_{\Phi}(\Phi)^r = 0$. Thus, $w\mu_{\Phi}(\Phi) = 0$. Because \eqref{ldyf} are the rows of $A$,
\begin{align*}
(0,\dots,0,&b_0,\dots,b_t, \underbrace{0,\dots,0}_{i})A\\ &=b_0w\Phi^{i+t}+ b_1w\Phi^{i+t-1}+\dots +b_tw\Phi^{i} = w\Phi^i\mu_{\Phi}(\Phi)=0
\end{align*}
for each $i=0,1,\dots, n-t-1$. Hence, \eqref{oyuyf} is $\mu_{\Phi}$-recurrent, a contradiction.

Finally, we must show that \eqref{ksy} is $\chi_{\Phi}$-recurrent but not $\mu_{\Phi}$-recurrent (and so in view of \eqref{lyf1} its $\chi_{\Phi}$-recurrent extension has the form \eqref{ser27}, which ensures that $A= [a_{i-j}] = A^*\Phi$ is nonsingular and can be taken for $\sqrt[\displaystyle *]{\Phi}$). Suppose first that $n = 2m$. Since \eqref{ksy} has length $n$, it suffices to verify that it is not $\mu_{\Phi}$-recurrent. This is obvious if $\deg \mu_{\Phi}(x)<n -1$. Let $\deg\mu_{\Phi}(x)=n -1$. Then $\mu_{\Phi}(x) =(x+c)^{n-1}$ for some $c$ and we need to show only that
\begin{equation}\label{nlp}
a + b_{n-1}\bar a = a + c^{n-1}\bar a\ne 0.
\end{equation}
If $c ^{n-1}\ne -1$ then by \eqref{mag} $a=1$ and so \eqref{nlp} holds. Let $c ^{n-1}= -1$.
If the involution on $\mathbb F$ is the identity then by \eqref{lbdr} $c=\pm 1$ and so $c=-1$, contrary to \eqref{4.adlw}. Hence the involution is not the identity, $a=e-\bar e$, and \eqref{nlp} is satisfied. Now suppose that $n = 2m - 1$. Since \eqref{ksy} has length $n + 1$, it suffices to verify that it is $\chi_{\Phi}$-recurrent, i.e., that \begin{equation}\label{uuv} a +c_{n}\bar a= 0. \end{equation} By \eqref{lyf1}, $c_n =\bar c_n^{-1}$. Because $\chi_{\Phi}(x) =\chi_{\Phi}^{\vee}(x) = \bar c_n^{-1}x^n\bar \chi_{\Phi}(x^{-1}),$ we have $$\chi_{\Phi}(-1) = -c_n\overline{ \chi_{\Phi}({-1})}.$$ If $p_{\Phi}(x)\ne x+1$ then $a=\chi_{\Phi}(-1)\ne 0$ and \eqref{uuv} holds. If $p_{\Phi}(x)= x+1$ then the involution on $\mathbb F$ is not the identity by \eqref{4.adlw}. Hence $a=e-\bar e$ and \eqref{uuv} is satisfied. \end{proof} \section{Reduction theorems for systems of forms and linear mappings }\label{sub_pos1} Classification problems for systems of forms and linear mappings can be formulated in terms of representations of graphs with nonoriented, oriented, and doubly oriented ($\longleftrightarrow$) edges; the notion of quiver representations was extended to such representations in \cite{ser_first}. In this section we give a brief summary of definitions and theorems about such representations; for the proofs and a more detailed exposition we refer the reader to \cite{ser_izv} and \cite{ser_iso}. For simplicity, we consider representations of graphs without doubly oriented edges. Let $\mathbb F$ be a field or skew field with involution $a\mapsto \bar{a}$ (possibly, the identity). A {\it sesquilinear form} on right vector spaces $U$ and $V$ over $\mathbb F$ is a mapping $B\colon U\times V\to \mathbb F$ satisfying \[ B(ua+u'a',v)= \bar{a}B(u,v)+ \bar{a'}B(u',v) \] and \[ B(u,va+v'a') =B(u,v)a+B(u,v')a' \] for all $u,u'\in U$, $v,v'\in V$, and $a,a'\in \mathbb F$. This form is \emph{bilinear} if the involution $a\mapsto\bar{a}$ is the identity. 
If $e_1,\dots,e_m$ and $f_1,\dots, f_n$ are bases of $U$ and $V$, then $B(u,v)=[u]_e^{*}B_{ef}[v]_f$ for all $u\in U$ and $v\in V$, in which $[u]_e$ and $[v]_f$ are the coordinate vectors and $ B_{ef}:=[B(e_i,f_j)]$ is the matrix of $B$. Its matrix in other bases is $R^*B_{ef}S$, in which $R$ and $S$ are the transition matrices. A \emph{pograph} (partially ordered graph) is a graph in which every edge is nonoriented or oriented; for example, \begin{equation}\label{2.6} \raisebox{20pt}{\xymatrix{ &{1}&\\ {2}\ar@(ul,dl)@{-}_{\mu} \ar@{-}[ur]^{\lambda} \ar@/^/[rr]^{\beta} \ar@/_/@{-}[rr]_{\nu} &&{3} \ar[ul]_{\alpha} \ar@(ur,dr)^{\gamma} }} \end{equation} We suppose that the vertices are $1,2,\dots,n$, and that there can be any number of edges between any two vertices. A {\it representation} ${\cal A}$ of a pograph $P$ over $\mathbb F$ is given by assigning to each vertex $i$ a right vector space ${\cal A}_i$ over $\mathbb F$, to each arrow $\alpha\colon i\to j$ a linear mapping ${\cal A}_{\alpha}\colon {\cal A}_i\to {\cal A}_j$, and to each nonoriented edge $\lambda\colon i\lin\, j\ (i\leqslant j)$ a sesquilinear form ${\cal A}_{\lambda}\colon {\cal A}_i\times {\cal A}_j\to {\mathbb F}$. 
For example, each representation of the pograph \eqref{2.6} is a system \begin{equation*}\label{2.6aa} {\cal A}:\quad\raisebox{20pt}{\xymatrix{ &{{\cal A}_1}&\\ {{\cal A}_2} \save !<-3mm,0cm> \ar@(ul,dl) @{-}_{{\cal A}_{\mu}} \restore \ar@{-}[ur]^{{\cal A}_{\lambda}} \ar@/^/[rr]^{{\cal A}_{\beta}} \ar@/_/@{-}[rr]_{{\cal A}_{\nu}} &&{{\cal A}_3} \ar[ul]_{{\cal A}_{\alpha}} \save !<3mm,0cm> \ar@(ur,dr)^{{\cal A}_{\gamma}} \restore }} \end{equation*} of vector spaces ${\cal A}_1,{\cal A}_2,{\cal A}_3$ over $\mathbb F$, linear mappings ${\cal A}_{\alpha}$, ${\cal A}_{\beta}$, ${\cal A}_{\gamma}$, and forms $ {\cal A}_{\lambda}\colon {\cal A}_1\times {\cal A}_2\to {\mathbb F}$, ${\cal A}_{\mu}\colon {\cal A}_2\times {\cal A}_2\to {\mathbb F}$, ${\cal A}_{\nu}\colon {\cal A}_2\times {\cal A}_3\to {\mathbb F}.$ A {\it morphism} $f=(f_1,\dots,f_n)\colon {\cal A}\to{\cal A}'$ of representations ${\cal A}$ and ${\cal A}'$ of $P$ is a set of linear mappings $f_i\colon {\cal A}_i\to {\cal A}'_i$ that transform $\cal A$ to ${\cal A}'$; this means that \[ f_j{\cal A}_{\alpha}= {\cal A}'_{\alpha}{f}_i,\qquad {\cal A}_{\lambda }(x,y)= {\cal A}'_{\lambda } (f_ix,f_jy)\] for all arrows $ \alpha\colon i\longrightarrow j$ and nonoriented edges $\lambda \colon i\lin j\ (i\leqslant j)$. The composition of two morphisms is a morphism. A morphism $f\colon{\cal A}\to{\cal A}'$ is called an {\it isomorphism} and is denoted by $f\colon {\cal A}\is{\cal A}'$ if all $f_i$ are bijections. We write ${\cal A}\simeq{\cal A}'$ if ${\cal A}$ and ${\cal A}'$ are isomorphic. The {\it direct sum}\/ ${\cal A}\oplus{\cal A}'$ of representations ${\cal A}$ and ${\cal A}'$ of $P$ is the representation consisting of the vector spaces ${\cal A}_i\oplus {\cal A}'_i$, the linear mappings ${\cal A}_{\alpha}\oplus {\cal A}'_{\alpha}$, and the forms $ {\cal A}_{\lambda }\oplus {\cal A}'_{\lambda }$ for all vertices $i$, arrows $\alpha$, and nonoriented edges $\lambda $. 
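Returning for a moment to the matrix description of sesquilinear forms given at the start of this section, the base-change rule $R^*B_{ef}S$ can be checked numerically (a sketch, not part of the original text, assuming numpy over $\mathbb C$; the random matrices are hypothetical data):

```python
import numpy as np

# Check: B(u, v) = [u]^* B_ef [v] transforms as R^* B_ef S under base change,
# where R, S are transition matrices and [u], [v] coordinates in the new bases.
rng = np.random.default_rng(0)
def cmat(n):
    return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

B_ef = cmat(3)                                 # matrix of the form in old bases
R, S = cmat(3), cmat(3)                        # transition matrices
u_new, v_new = cmat(3)[:, 0], cmat(3)[:, 0]    # coordinates in the new bases

lhs = (R @ u_new).conj() @ B_ef @ (S @ v_new)        # B(u, v) via old coordinates
rhs = u_new.conj() @ (R.conj().T @ B_ef @ S) @ v_new  # via the transformed matrix
print(np.isclose(lhs, rhs))  # True
```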
A representation $\cal A$ is {\it indecomposable} if ${\cal A}\simeq {\cal B}\oplus{\cal C}$ implies ${\cal B}=0$ or ${\cal C}=0$, where $0$ is the representation in which all vector spaces are $0$.

The {\it *dual space} to a vector space $V$ is the vector space $V^{*}$ of all mappings $\varphi:V\to \mathbb F$ that are \emph{semilinear}; this means that
$$ \varphi(va+v'a')=\bar{a}(\varphi v)+ \bar{a'}(\varphi v'),\qquad v,v'\in V,\ \ a,a'\in \mathbb F. $$
We identify $V$ with $V^{**}$ by identifying $v\in V$ with $\varphi\mapsto \overline{\varphi v}$. For every linear mapping $A:U\to V$, we define the {\it *adjoint mapping} $A^{*}\colon V^{*}\to U^{*}$ by setting $A^{*}\varphi:=\varphi A$ for all $\varphi\in V^*.$

For every pograph ${P}$, we construct the \emph{quiver $\underline{P}$ with an involution on the set of vertices and an involution on the set of arrows} as follows: we replace
\begin{itemize}
\item each vertex $i$ of ${P}$ by two vertices $i$ and $i^*$,
\item each oriented edge $\alpha\colon i\to j$ by two arrows $\alpha\colon i\to j$ and $\alpha^*\colon j^*\to i^*$,
\item each nonoriented edge $\lambda\colon k\lin\, l\ (k\leqslant l)$ by two arrows $\lambda\colon l\to k^*$ and $\lambda^*\colon k\to l^*$,
\end{itemize}
and set $i^{**}:=i$ and $\alpha^{**}:=\alpha$ for all vertices and arrows of the quiver $\underline{P}$.
For example,
\begin{equation}\label{4.1} \raisebox{20pt}{\xymatrix@R=4pt{ &{2}\ar[dd]_{\alpha} \ar@{-}@/^/[dd]^{\lambda}\\ {P}:& \\ &{1} \ar@(ur,dr)@{-}^{\mu}}} \qquad\qquad\qquad \raisebox{23pt}{\xymatrix@R=4pt{ &{2}\ar[dd]_{{\alpha}} \ar[ddrr]^(.25){{\lambda}}& &{2^*} \\ {\underline{P}:}&&\\ &{1}\ar[uurr]^(.75){{\lambda}^*} \ar@<0.4ex>[rr]^{{\mu}} \ar@<-0.4ex>[rr]_{{\mu}^*} &&{1^*}\ar[uu]_{{\alpha}^*} }} \end{equation}
Correspondingly, for each representation $\cal M$ of $P$ over $\mathbb F$, we define the representation $\underline {\cal M}$ of $\underline {P}$ by replacing
\begin{itemize}
\item each vector space $V$ in ${\cal M}$ by the pair of spaces $V$ and $V^*$,
\item each linear mapping $A\colon U\to V$ by the pair of mutually *adjoint mappings $A\colon U\to V$ and $A^*\colon V^*\to U^*$,
\item each sesquilinear form $B\colon V\times U\to\mathbb F$ by the pair of mutually *adjoint mappings
\begin{equation*}\label{mse}
B\colon u\in U\mapsto B(?,u)\in V^*,\qquad B^*\colon v\in V\mapsto \overline{B(v,?)}\in U^*.
\end{equation*}
\end{itemize}
For example, the following are representations of \eqref{4.1}:
\begin{equation}\label{4.2} \raisebox{23pt}{\xymatrix@R=4pt{ &{U}\ar[dd]_{A} \ar@{-}@/^/[dd]^{B}\\ {\cal A}:& \\ &{V} \save !<1mm,0cm> \ar@(ur,dr)@{-}^{C} \restore }} \qquad \qquad \qquad \raisebox{23pt}{\xymatrix@R=4pt{ &{U}\ar[dd]_{A} \ar[ddrr]^(.25){B}& &{U^{\star}} \\ {\underline{\cal A}:}&&\\ &{V}\ar[uurr]^(.75){B^{\star}} \ar@<0.4ex>[rr]^{C} \ar@<-0.4ex>[rr]_{C^{\star}} &&{V^{\star}}\ar[uu]_{A^{\star}} }} \end{equation}
For each representation $\cal M$ of $\underline {P}$ we define an \emph{adjoint representation} ${\cal M}^{\circ}$ of $\underline {P}$ consisting of the vector spaces ${\cal M}^{\circ}_v:={\cal M}^*_{v^*}$ and the linear mappings ${\cal M}^{\circ}_{\alpha}:={\cal M}^*_{\alpha^*}$ for all vertices $v$ and arrows $\alpha$ of $\underline {P}$.
For example, the following are representations of the quiver ${\underline{P}}$ defined in \eqref{4.1}: \[ \xymatrix@R=4pt{ &{U_1}\ar[dd]_{A_1} \ar[ddrr]^(.25){B_1}& &{U_2} \\ {{\cal M}:}&&\\ &{V_1}\ar[uurr]^(.75){B_2} \ar@<0.4ex>[rr]^{C_1} \ar@<-0.4ex>[rr]_{C_2} &&{V_2}\ar[uu]_{A_2} } \qquad\quad \xymatrix@R=4pt{ &{U_2^{\star}}\ar[dd]_{A_2^{\star}} \ar[ddrr]^(.25){B_2^{\star}}& &{U_1^{\star}} \\ {{\cal M}^{\circ}:}&&\\ &{V_2^{\star}} \ar[uurr]^(.75){B_1^{\star}} \ar@<0.4ex>[rr]^{C_2^{\star}} \ar@<-0.4ex>[rr]_{C_1^{\star}} &&{V_1^{\star}}\ar[uu]_{A_1^{\star}} } \] The second representation in \eqref{4.2} is \emph{selfadjoint}: $\underline{\cal A}^{\circ} =\underline{\cal A}$. In a similar way, for each morphism $f\colon {\cal M}\to{\cal N}$ of representations of $\underline{P}$ we construct the {\it adjoint morphism} \begin{equation}\label{kdtc} f^{\circ}\colon {\cal N}^{\circ}\to{\cal M}^{\circ},\qquad \text{in which }\ f^{\circ}_i:=f^*_{i^*} \end{equation} for all vertices $i$ of $\underline {P}$. An isomorphism $f\colon {\cal M}\is{\cal N}$ of selfadjoint representations ${\cal M}$ and ${\cal N}$ is called a {\it congruence} if $f^{\circ}=f^{-1}$. For each isomorphism $f\colon {\cal A}\is{\cal B}$ of representations of a pograph ${P}$, we define the \emph{congruence} $\underline{f}\colon \underline{\cal A}\is \underline{\cal B}$ of the corresponding selfadjoint representations of $\underline{P}$ by defining: \begin{equation*}\label{ldi} \underline{f}_{\,i}:=f_i, \quad \underline{f}_{\,i^*}:=f_i^{-*} \quad\text{for each vertex $i$ of $P$.} \end{equation*} Two representations $\cal A$ and $\cal B$ of a pograph $P$ are isomorphic if and only if the corresponding selfadjoint representations $\underline{\cal A}$ and $\underline{\cal B}$ of the quiver $\underline{P}$ are congruent. 
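For completeness, a direct check (ours) that $\underline{f}$ is indeed a congruence, i.e. that $\underline{f}^{\,\circ}=\underline{f}^{\,-1}$: by \eqref{kdtc} and the identification $V=V^{**}$, for each vertex $i$ of $P$ we have \[ (\underline{f}^{\,\circ})_i=(\underline{f}_{\,i^*})^*=(f_i^{-*})^*=f_i^{-1} =(\underline{f}_{\,i})^{-1},\qquad (\underline{f}^{\,\circ})_{i^*}=(\underline{f}_{\,i})^*=f_i^{*} =(f_i^{-*})^{-1}=(\underline{f}_{\,i^*})^{-1}. \]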
Therefore, \emph{the problem of classifying representations of a pograph $P$ up to isomorphism reduces to the problem of classifying selfadjoint representations of the quiver $\underline{P}$ up to congruence.} Let us show how to solve the latter problem if we know a maximal set $\ind (\underline{P})$ of nonisomorphic indecomposable representations of the quiver $\underline{P}$ (this means that every indecomposable representation of $\underline{P}$ is isomorphic to exactly one representation from $\ind (\underline{P})$). We first replace each representation in $\ind (\underline{P})$ that is {\it isomorphic} to a selfadjoint representation by one that is {\it actually} selfadjoint---i.e., has the form $\underline{\cal A}$, and denote the set of these $\underline{\cal A}$ by $\ind_0(\underline{P})$. Then in each of the one- or two-element subsets $$ \{{\cal M},{\cal L}\}\subset\ind (\underline{P}) \smallsetminus \ind_0(\underline{P}) \quad \text{such that }{\cal M}^{\circ}\simeq {\cal L}, $$ we select one representation and denote the set of selected representations by $\ind_1(\underline{P})$. We obtain a new set $\ind (\underline{P})$ that we partition into 3 subsets: \begin{equation}\label{4.8d} {\ind (\underline{P})} = \begin{tabular}{|c|c|} \hline &\\[-12pt] $\; {\cal M}\; $& ${\cal M}^{\circ}\text{ (if ${\cal M}^{\circ}\not \simeq{\cal M})$}$\\ \hline \multicolumn{2}{|c|} {$\underline{\cal A}\vphantom{{\hat{N}}}$}\\ \hline \end{tabular}\,,\; \begin{matrix} {\cal M}\in \ind_1(\underline{P}),\\[1pt] \underline{\cal A}\in\ind_0(\underline{P}). 
\end{matrix} \end{equation} For each representation ${\cal M}$ of $\underline{P}$, we define a representation ${\cal M}^+$ of $P$ by setting $ {\cal M}^+_i:={\cal M}_i\oplus {\cal M}_{i^*}^*$ for all vertices $i$ in $P$ and \begin{equation}\label{piyf} {\cal M}^+_{\alpha }:= \begin{bmatrix} {\cal M}_{\alpha}&0\\0&{\cal M}_{\alpha^*}^{*} \end{bmatrix},\qquad {\cal M}^+_{\beta}:= \begin{bmatrix} 0&{\cal M}_{\beta^*}^{*}\\{\cal M}_{\beta}&0 \end{bmatrix} \end{equation} for all edges $ \alpha\colon i\longrightarrow j$ and $\beta\colon i\lin j\ (i\leqslant j)$. The representation ${\cal M}^+$ arises as follows: each representation ${\cal M}$ of $\underline P$ defines the selfadjoint representation ${\cal M}\oplus {\cal M}^{\circ}$; the corresponding representation of $P$ is ${\cal M}^+$ (and so $\underline{\cal M}^+={\cal M}\oplus {\cal M}^{\circ}$). For every representation ${\cal A}$ of ${P}$ and for every selfadjoint automorphism $f=f^{\circ}\colon \underline{\cal A}\is\underline{\cal A}$, we denote by ${\cal A}^f$ the representation of $P$ that is obtained from ${\cal A}$ by replacing each form ${\cal A}_{\beta}$ $(\beta\colon i\lin j$, $i\leqslant j)$ by ${\cal A}^f_{\beta}:={\cal A}_{\beta}f_j$. Let ${\ind (\underline{P})}$ be partitioned as in \eqref{4.8d}, and let $\underline{\cal A}\in{\ind_0 (\underline{P})}$. By \cite[Lemma 1]{ser_izv}, the set $R$ of noninvertible elements of the endomorphism ring $\End (\underline{\cal A})$ is the radical. Therefore, $\mathbb T({\cal A}):=\End (\underline{\cal A})/R$ is a field or skew field, on which we define the involution \begin{equation}\label{kyg} (f+R)^{\circ}:=f^{\circ}+R. \end{equation} For each nonzero $a=a^{\circ}\in \mathbb T({\cal A})$, we fix a selfadjoint automorphism \begin{equation}\label{lfw} f_a=f_a^{\circ}\in a,\quad\text{ and define ${\cal A}^a:={\cal A}^{f_a}$} \end{equation} (we can take $f_a:=(f+f^{\circ})/2$ for any $f\in a$). 
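Returning to the definition \eqref{piyf} of ${\cal M}^+$ (our illustration in the simplest case): let $P$ consist of a single vertex $1$ with one nonoriented loop $\beta\colon 1\lin 1$, so that a representation ${\cal M}$ of $\underline{P}$ is a pair of mappings $A:={\cal M}_{\beta}$ and $B:={\cal M}_{\beta^*}$ from ${\cal M}_1$ to ${\cal M}_{1^*}$. Then \[ {\cal M}^+_1={\cal M}_1\oplus{\cal M}_{1^*}^{*},\qquad {\cal M}^+_{\beta}= \begin{bmatrix} 0&B^{*}\\ A&0 \end{bmatrix}\colon\ {\cal M}^+_1\to ({\cal M}^+_1)^{*}, \] in agreement with $\underline{\cal M}^+={\cal M}\oplus{\cal M}^{\circ}$.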
The set of representations ${\cal A}^a$ is called the \emph{orbit of} ${\cal A}$. For each Hermitian form \[ \varphi(x)=x^{\circ}_1a_1x_1+\dots+ x^{\circ}_ra_rx_r,\qquad 0\ne a_i=a_i^{\circ}\in \mathbb T({\cal A}), \] we write \[ {\cal A}^{\varphi(x)}:= {\cal A}^{a_1}\oplus\dots\oplus {\cal A}^{a_r}. \] The following theorem is a special case of \cite[Theorem 1]{ser_izv} (or \cite[Theorem 3.1]{ser_iso}). \begin{theorem} \label{tetete1} Over a field or skew field\/ $\mathbb F$ of characteristic different from $2$ with involution $a\mapsto \bar{a}$ $($possibly, the identity$)$, every representation of a pograph $P$ is isomorphic to a direct sum \begin{equation*}\label{iap} {\cal M}_1^+\oplus\dots\oplus {\cal M}_p^+\oplus {\cal A}_1^{\varphi_1(x)}\oplus \dots\oplus {\cal A}_q^{\varphi_q(x)}, \end{equation*} in which \[ {\cal M}_i\in \ind_1(\underline{P}),\qquad \underline{\cal A}_j\in \ind_0(\underline{P}), \] and ${\cal A}_j\ne {\cal A}_{j'}$ if $j\ne j'$. This sum is determined by the original representation uniquely up to permutation of summands and replacement of ${\cal A}_j^{\varphi_j(x)}$ by ${\cal A}_j^{\psi_j(x)}$, in which ${\varphi_j(x)}$ and ${\psi_j(x)}$ are equivalent Hermitian forms over $\mathbb T({\cal A}_j)$ with involution \eqref{kyg}. \hbox{\qedsymbol} \end{theorem} Theorem \ref{tetete1} implies the following generalization of the law of inertia for quadratic forms. \begin{theorem}[{\cite [Theorem 3.2]{ser_iso}}] \label{tetete} Let $\mathbb F$ be either \begin{itemize} \item[\rm(i)] an algebraically closed field of characteristic different from $2$ with the identity involution, or \item[\rm(ii)] an algebraically closed field with nonidentity involution, or \item[\rm(iii)] a real closed field, or the skew field of quaternions over a real closed field. 
\end{itemize} Then every representation of a pograph $P$ over $\mathbb F$ is isomorphic to a direct sum, uniquely determined up to permutation of summands, of representations of the types: \begin{equation}\label{do} {\cal M}^+,\ \begin{cases} {\cal A}& \text{if ${\cal A}^{-}\simeq{\cal A}$}, \\ {\cal A},\ {\cal A}^{-} & \text{if ${\cal A}^{-}\not\simeq{\cal A}$}, \end{cases} \end{equation} in which ${\cal M}\in \ind_1(\underline{P})$ and $\underline{\cal A}\in\ind_0(\underline{P})$. In the respective cases {\rm(i)--(iii)}, the representations \eqref{do} have the form \begin{itemize} \item[\rm(i)] ${\cal M}^+$, ${\cal A}$, \item[\rm(ii)] ${\cal M}^+$, ${\cal A}$, ${\cal A}^{-}$, \item[\rm(iii)] $ {\cal M}^+, \begin{cases} \ \ {\cal A}, & \parbox[t]{270pt}{if $\mathbb T({\cal A})$ is an algebraically closed field with the identity involution or a skew field of quaternions with involution different from quaternionic conjugation,} \\ {\cal A},{\cal A}^{-}, & \text{otherwise}. \end{cases} $ \end{itemize} \end{theorem} \begin{remark}\label{rema} Theorem \ref{tetete} is a special case of Theorem 2 in \cite{ser_izv}, which was formulated incorrectly in the case of quaternions. To correct it, remove ``or the algebra of quaternions \dots'' in a) and b) and add ``or the algebra of quaternions over a maximal ordered field'' in c). The paper \cite{ser_quat} is based on the incorrect Theorem 2 in \cite{ser_izv} and so the signs $\pm$ of the sesquilinear forms in the indecomposable direct summands in \cite[Theorems 1--4]{ser_quat} are incorrect. Correct canonical forms are given for bilinear/sesquilinear forms in Theorem \ref{t1.1}, for pairs of symmetric/skew-symmetric matrices in \cite{rod_pair_nonst,rod_pair_stand}, for selfadjoint operators in \cite{kar}, and for isometries in \cite{ser_iso}. 
\end{remark} \section{Proof of Theorem \ref{Theorem 5} } \label{s_pro} Each sesquilinear form defines a representation of the pograph \begin{equation}\label{jsos+} \xymatrix{ P\,: &{1} \ar@(ur,dr)@{-}^{\alpha} } \end{equation} The corresponding quiver is \begin{equation*}\label{jso+} \underline{ P}:\quad\xymatrix{ {1} \ar@/^/@{->}[rr]^{\alpha} \ar@/_/@{->}[rr]_{\alpha^*} &&{1^*} } \end{equation*} We prove Theorem \ref{Theorem 5} using Theorem \ref{tetete1}; to do this, we first identify in Lemma \ref{lenhi} the sets $\ind_1(\underline{P})$ and $\ind_0(\underline{P})$, and the orbit of $\cal A$ for each $\underline{\cal A}\in \ind_0(\underline{P})$. Every representation of $P$ or $\underline P$ over $\mathbb F$ is isomorphic to a representation in which all vector spaces are $\mathbb F\oplus\dots\oplus\mathbb F$. From now on, we consider only such representations of $P$ and $\underline P$; they can be given by a square matrix $A$: \begin{equation}\label{ren7a+} {\cal A}:\quad \xymatrix{ *{\ci} \ar@(ur,dr)@{-}^{ A}}\qquad\text{(we write ${\cal A}=A$)} \end{equation} and, respectively, by rectangular matrices $A$ and $B$ of the same size: \begin{equation*}\label{ser14sd} {\cal M}:\quad\xymatrix{ {\ci} \ar@/^/@{->}[rr]^{A} \ar@/_/@{->}[rr]_{B} &&{\ci} }\qquad\text{(we write ${\cal M}=(A,B)$).} \end{equation*} We omit the spaces $\mathbb F\oplus\dots\oplus\mathbb F$ since they are completely determined by the sizes of the matrices. The adjoint representation \begin{equation*}\label{ren2+} {\cal M}^{\circ}:\quad\xymatrix{ {\ci} \ar@/^/@{->}[rr]^{B^*} \ar@/_/@{->}[rr]_{A^*} &&{\ci} } \end{equation*} is given by the matrix pair \begin{equation}\label{ldbye} (A,B)^{\circ} = (B^*,A^*).
\end{equation} A morphism of representations \begin{equation*}\label{ser14d} \xymatrix@R=12pt{{\ {\cal M}:}\ar[dd]_f&& *{\ci}\ar[dd]_{F_1} \ar@/^/@{->}[rr]^{A} \ar@/_/@{->}[rr]_{B} &&*{\ci}\ar[dd]^{F_2} \\ \\ {\ {\cal M}':}&&*{\ci} \ar@/^/@{->}[rr]^{A'} \ar@/_/@{->}[rr]_{B'} &&*{\ci} } \end{equation*} is given by the matrix pair $f=[F_1,F_2]\colon\; {\cal M}\to {\cal M}'$ (for morphisms we use square brackets) satisfying \begin{equation}\label{msp} F_2A=A'F_1,\qquad F_2B=B'F_1. \end{equation} Denote by $0_{m0}$ and $0_{0n}$ the $m\times 0$ and $0\times n$ matrices representing the linear mappings $0\to {\mathbb F}^m$ and ${\mathbb F}^n\to 0$. Thus, $0_{m0}\oplus 0_{0n}$ is the $m\times n$ zero matrix. \begin{lemma}\label{lenhi} Let $\mathbb F$ be a field or skew field of characteristic different from $2$. Let ${\cal O}_{\mathbb F}$ be a maximal set of nonsingular indecomposable canonical matrices over\/ $\mathbb F$ for similarity. Let $P$ be the pograph \eqref{jsos+}. Then: \begin{itemize} \item[{\rm(a)}] The set $\ind(\underline{P})$ can be taken to be the set of all representations \begin{equation*}\label{ser16} (\Phi, I_n),\ (J_n(0), I_n),\, (I_n,J_n(0)),\ (M_n,N_n),\ (N_n^T,M_n^T) \end{equation*} in which $\Phi\in {\cal O}_{\mathbb F}$ is $n$-by-$n$ and \begin{equation*}\label{ser17} M_n:=\begin{bmatrix} 1&0&&0\\&\ddots&\ddots&\\0&&1&0 \end{bmatrix},\quad N_n:=\begin{bmatrix} 0&1&&0\\&\ddots&\ddots&\\0&&0&1 \end{bmatrix} \end{equation*} are $(n-1)$-by-$n$ for each natural number $n$. 
\item[\rm(b)] The set $\ind_1(\underline{P})$ can be taken to be the set of all representations \begin{equation*}\label{sew} (\Phi, I_n),\quad (J_n(0), I_n),\quad (M_n,N_n) \end{equation*} in which $\Phi\in{\cal O}_{\mathbb F}$ is an $n\times n$ matrix such that $\sqrt[\displaystyle *]{\Phi}$ does not exist, and \begin{equation}\label{4.adg} \parbox{18em} {$\Phi$ is determined up to replacement by\\ the unique $\Psi\in{\cal O}_{\mathbb F}$ that is similar to $\Phi^{-*}$.} \end{equation} The corresponding representations of ${P}$ are \begin{equation}\label{dwh} (\Phi, I_n)^+=[\Phi\,\diagdown\, I_n], \end{equation} \begin{equation}\label{dld} (M_n,N_n)^+\simeq J_{2n-1}(0),\qquad (J_n(0), I_n)^+\simeq J_{2n}(0). \end{equation} \item[\rm(c)] The set $\ind_0(\underline{{P}})$ can be taken to be the set of all representations \begin{equation}\label{nsp} \underline{\cal A}_{\Phi} := (\sqrt[\displaystyle *]{\Phi}, (\sqrt[\displaystyle *]{\Phi})^*) \end{equation} in which $\Phi\in{\cal O}_{\mathbb F}$ is such that $\sqrt[\displaystyle *]{\Phi}$ exists. The corresponding representations of ${P}$ are \begin{equation}\label{5mi} {\cal A}_{\Phi} =\sqrt[\displaystyle *]{\Phi},\quad {\cal A}_{\Phi}^- =-\sqrt[\displaystyle *]{\Phi}, \quad {\cal A}_{\Phi}^f= \sqrt[\displaystyle *]{\Phi}F, \end{equation} in which $f=[F,F^*]\colon \underline{\cal A}_{\Phi}\is \underline{\cal A}_{\Phi}$ is a selfadjoint automorphism. \item[\rm(d)] Let $\mathbb F$ be a field and let $\underline{\cal A}_{\Phi} := (\sqrt[\displaystyle *]{\Phi}, (\sqrt[\displaystyle *]{\Phi})^*)\in \ind_0(\underline{{P}})$, in which $\Phi$ is a nonsingular matrix over $\mathbb F$ that is indecomposable for similarity $($thus, its characteristic polynomial is a power of some irreducible polynomial $p_{\Phi})$. 
\begin{itemize} \item[\rm(i)] The ring $\End(\underline{\cal A}_{\Phi})$ of endomorphisms of $\underline{\cal A}_{\Phi}$ consists of the matrix pairs \begin{equation}\label{ldy} [f(\Phi),f(\Phi^{-*})],\qquad f(x)\in\mathbb F[x], \end{equation} and the involution on $\End(\underline{\cal A}_{\Phi})$ is \[ [f(\Phi),f(\Phi^{-*})]^{\circ}= [\bar f(\Phi^{-1}),\bar f(\Phi^*)]. \] \item[\rm(ii)] $\mathbb T({\cal A}_{\Phi})$ can be identified with the field \begin{equation}\label{ksyq} {\mathbb F}(\kappa)={\mathbb F}[x]/p_{\Phi}(x){\mathbb F}[x],\qquad \kappa:=x+p_{\Phi}(x){\mathbb F}[x], \end{equation} with involution \begin{equation}\label{kxu} f(\kappa)^{\circ}= \bar f(\kappa^{-1}). \end{equation} Each element of $\mathbb T({\cal A}_{\Phi})$ on which this involution acts identically is uniquely represented in the form $q(\kappa)$ for some nonzero function \eqref{ser13}. The representations \begin{equation}\label{wmp} {\cal A}_{\Phi}^{q(\kappa)}: \quad \ \xymatrix{ *{\ci} \ar@(ur,dr)@{-}^{ \sqrt[\displaystyle *]{\Phi}\,q(\Phi)}} \end{equation} $($see \eqref{lfw}$)$ constitute the orbit of ${\cal A}_{\Phi}$. \end{itemize} \end{itemize} \end{lemma} \begin{proof} (a) This form of Kronecker's theorem about matrix pencils follows from \cite[Sect. 11.1]{gab-roi}. (b)\,\&\,(c) Let $\Phi,\Psi\in{\cal O}_{\mathbb F}$ be $n$-by-$n$. In view of \eqref{ldbye}, $(\Phi,I_n)^{\circ} =(I_n,\Phi^*)\simeq (\Phi^{-*},I_n)$ and so \begin{equation}\label{mdpiug} (\Psi,I_n)\simeq (\Phi,I_n)^{\circ} \quad\Longleftrightarrow\quad \text{$\Psi$ is similar to $\Phi^{-*}$.} \end{equation} Suppose $(\Phi,I_n)$ is isomorphic to a selfadjoint representation: \begin{equation}\label{bgd} [F_1,F_2]\colon\; (\Phi,I_n) \is (B,B^*). \end{equation} Define a selfadjoint representation $(A,A^*)$ by the congruence \begin{equation}\label{wfw} [F_1^{-1},F_1^*]\colon\; (B,B^*) \is (A,A^*). 
\end{equation} The composition of \eqref{bgd} and \eqref{wfw} is the isomorphism \[ \xymatrix@R=15pt{ *{\ci}\ar[dd]_{I_n} \ar@/^/@{->}[rr]^{\Phi} \ar@/_/@{->}[rr]_{I_n} &&*{\ci}\ar[dd]^{F:=F^*_1F_2} \\ \\ *{\ci} \ar@/^/@{->}[rr]^{A} \ar@/_/@{->}[rr]_{A^*} &&*{\ci} } \] By \eqref{msp}, $ A = F\Phi$ and $A^* = F$. Thus $A = A^*\Phi$. Taking $A=\sqrt[\displaystyle *]{\Phi}$, we obtain \begin{equation*}\label{kdi} [I_n,(\sqrt[\displaystyle *]{\Phi})^*]\colon (\Phi,I_n)\is (\sqrt[\displaystyle *]{\Phi},(\sqrt[\displaystyle *]{\Phi})^*). \end{equation*} This means that if $(\Phi,I_n)\in \ind(\underline{{P}})$ is isomorphic to a selfadjoint representation, then $(\Phi,I_n)$ is isomorphic to \eqref{nsp}. Hence, the representations \eqref{nsp} comprise $\ind_0(\underline{{P}})$. Due to \eqref{mdpiug}, we can identify isomorphic representations in the set of remaining representations $(\Phi,I_n)\in \ind(\underline{{P}})$ by imposing the condition \eqref{4.adg}; we then obtain $\ind_1(\underline{P})$ from Lemma \ref{lenhi}(b). To verify \eqref{dld}, we prove that $J_m(0)$ is permutationally similar to \[ \begin{cases} (M_n,N_n)^+=[M_n\,\diagdown\, N_n^T] & \text{if $m=2n-1$}, \\ (J_n(0), I_n)^+=[J_n(0)\,\diagdown\, I_n] & \text{if $m=2n$} \end{cases} \] (see \eqref{piyf}). The unit entries of $J_m(0)$ are at the positions $(1,2),\ (2,3),\ \dots,\ (m-1,m)$; so it suffices to prove that there is a permutation $f$ on $\{1,2,\dots,m\}$ such that \[ (f(1),f(2)),\ \ (f(2),f(3)),\ \dots,\ (f(m-1),f(m)) \] are the positions of the unit entries in $[M_n\,\diagdown\, N_n^T]$ or $[J_n(0)\,\diagdown\, I_n]$.
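Before the general argument, consider the minimal case $n=2$, $m=3$ (our illustration): \[ [M_2\,\diagdown\, N_2^T]= \begin{bmatrix} 0&0&0\\ 0&0&1\\ 1&0&0 \end{bmatrix} \] has unit entries at the positions $(2,3)$ and $(3,1)$, so the permutation $f$ with $f(1)=2$, $f(2)=3$, $f(3)=1$ works, and $[M_2\,\diagdown\, N_2^T]$ is permutationally similar to $J_3(0)$.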
This becomes clear if we arrange the positions of the unit entries in the $(2n-1)\times(2n-1)$ matrix $$ [M_n\,\diagdown\, N_n^T]=\left[ \begin{array}{c|c} \text{\large\rm 0}& \begin{matrix} 0&&0\\1&\ddots&\\ &\ddots&0\\0&&1 \end{matrix} \\ \hline \begin{matrix} 1&0&&0\\&\ddots&\ddots&\\ 0&&1&0 \end{matrix} & \text{\large\rm 0} \end{array}\right] $$ as follows: \begin{multline*} (n,2n-1),\, (2n-1,n-1),\, (n-1,2n-2),\\ (2n-2,n-2),\dots, (2,n+1),\, (n+1,1), \end{multline*} and the positions of the unit entries in the $2n\times 2n$ matrix $[J_n(0)\,\diagdown\, I_n]$ as follows: \[ (1,n+1),\,(n+1,2),\, (2,n+2),\, (n+2,3), \dots,(2n-1,n),\,(n,2n). \] (d) Let $\mathbb F$ be a field. If $\Phi$ is a square matrix over $\mathbb F$ that is indecomposable for similarity, then each matrix over $\mathbb F$ that commutes with $\Phi$ is a polynomial in $\Phi$. To verify this, we may assume that $\Phi$ is an $n\times n$ Frobenius block \eqref{3.lfo}. Then the vectors \begin{equation}\label{gwz} e:=(1,0,\dots,0)^T,\ \Phi e,\ \dots,\ \Phi^{n-1} e \end{equation} form a basis of $\mathbb F^n$. Let $S\in\mathbb F^{n\times n}$ commute with $\Phi$, let \[S e=a_0e+a_1\Phi e+\dots+ a_{n-1}\Phi^{n-1}e,\qquad a_0,\dots, a_{n-1}\in\mathbb F,\] and let $f(x):=a_0+a_1x +\dots+ a_{n-1}x^{n-1}\in\mathbb F[x].$ Then $Se=f(\Phi)e$ and \[S\Phi e=\Phi Se=\Phi f(\Phi)e= f(\Phi)\Phi e, \ \dots, \ S\Phi^{n-1} e=f(\Phi)\Phi^{n-1}e. \] Since \eqref{gwz} is a basis, $S=f(\Phi)$. (i) Let $\underline{\cal A}_{\Phi} := (A, A^*)\in \ind_0(\underline{{P}})$, in which $\Phi$ is a nonsingular matrix over $\mathbb F$ that is indecomposable for similarity and $A:=\sqrt[\displaystyle *]{\Phi}$. Let $g=[G_1, G_2]\in \End(\underline{\cal A}_{\Phi})$. Then \eqref{msp} ensures that \begin{equation}\label{mdtc} G_2A=A G_1,\qquad G_2A^*=A^* G_1, \end{equation} and so \begin{equation}\label{dkr} \Phi G_1=A^{-*} AG_1= A^{-*}G_2 A= G_1A^{-*} A= G_1{\Phi}.
\end{equation} Since $G_1$ commutes with $\Phi$, we have $G_1 = f(\Phi)$ for some $f(x)\in \mathbb F[x]$, and \begin{equation}\label{lrsh} G_2=A G_1 A^{-1} = f(A \Phi A^{-1})= f(A A^{-*} A A^{-1}) =f(\Phi^{-*}). \end{equation} Consequently, the ring $\End(\underline{\cal A}_{\Phi})$ of endomorphisms of $\underline{\cal A}_{\Phi}$ consists of the matrix pairs \eqref{ldy}, and the involution \eqref{kdtc} has the form \[ [f(\Phi),f(\Phi^{-*})]^{\circ}= [f(\Phi^{-*})^*,f(\Phi)^*]= [\bar f(\Phi^{-1}),\bar f(\Phi^*)]. \] (ii) The first equality in \eqref{lrsh} ensures that each endomorphism $[f(\Phi),f(\Phi^{-*})]$ is completely determined by $f(\Phi)$. Thus, the ring $\End(\underline{\cal A}_{\Phi})$ can be identified with \[ \mathbb F[\Phi]=\{f(\Phi)\,|\,f\in\mathbb F[x]\}\quad \text{with involution $f(\Phi)\mapsto \bar f(\Phi^{-1})$,}\] which is isomorphic to $\mathbb F[x]/p_{\Phi}(x)^s\mathbb F[x]$, in which $p_{\Phi}(x)^s$ is the characteristic polynomial \eqref{ser24} of $\Phi$. Thus, the radical of the ring $ \mathbb F[\Phi]$ is generated by $p_{\Phi}(\Phi)$ and $\mathbb T({\cal A}_{\Phi})$ can be identified with the field \eqref{ksyq} with involution $f(\kappa)^{\circ}= \bar f(\kappa^{-1})$. According to Lemma \ref{LEMMA 7}, each element of the field \eqref{ksyq} on which the involution acts identically is uniquely representable in the form $q(\kappa)$ for some nonzero function $q(x)$ of the form \eqref{ser13}. The pair $[q(\Phi), A q(\Phi) A^{-1}]$ is an endomorphism of $\underline{\cal A}_{\Phi}$ due to \eqref{mdtc}. This endomorphism is selfadjoint since the function \eqref{ser13} satisfies $q(x^{-1})=\bar q(x)$, and so \[ A q(\Phi) A^{-1} =q(\Phi^{-*})=\bar q(\Phi^*)= q(\Phi)^*.\] Since distinct functions $q(x)$ give distinct $q(\kappa)$ and \[ q(\Phi)\in q(\kappa)=q(\Phi) +p_{\Phi}(\Phi){\mathbb F}[\Phi], \] in \eqref{lfw} we may take $ f_{q(\kappa)}:= [q(\Phi),q(\Phi)^*] \in \End(\underline{\cal A}_{\Phi}). 
$ By \eqref{5mi}, the corresponding representations $ {\cal A}_{\Phi}^{q(\kappa)} = {\cal A}_{\Phi}^{f_{q(\kappa)}} $ have the form \eqref{wmp} and constitute the orbit of ${\cal A}_{\Phi}$. \end{proof} \begin{proof}[Proof of Theorem \ref{Theorem 5}] (a) Each square matrix $A$ gives the representation \eqref{ren7a+} of the pograph \eqref{jsos+}. Theorem \ref{tetete1} ensures that each representation of \eqref{jsos+} over a field $\mathbb F$ of characteristic different from $2$ is isomorphic to a direct sum of representations of the form ${\cal M}^+$ and ${\cal A}^a$, where ${\cal M}\in \ind_1(\underline{P})$, $\underline{\cal A}\in \ind_0(\underline{P})$, and $0\ne a=a^{\circ}\in\mathbb T({\cal A})$. This direct sum is determined uniquely up to permutation of summands and replacement of the whole group of summands $ {\cal A}^{a_1} \oplus\dots\oplus {\cal A}^{a_s} $ with the same $\cal A$ by $ {\cal A}^{b_1} \oplus\dots\oplus {\cal A}^{b_s} $, provided that the Hermitian forms $ a_1x_1^{\circ}x_1+\dots+ a_sx_s^{\circ}x_s$ and $ b_1x_1^{\circ}x_1+\dots+ b_sx_s^{\circ}x_s$ are equivalent over $\mathbb T({\cal A})$, which is a field by \eqref{ksyq}. This proves (a) since we can use the sets $\ind_1(\underline{P})$ and $\ind_0(\underline{P})$ from Lemma \ref{lenhi}; the field $\mathbb T({\cal A})$ is isomorphic to \eqref{ksyq}, and the representations ${\cal M}^+$ and ${\cal A}^a$ have the form \eqref{dwh}, \eqref{dld}, and \eqref{wmp}. (b) Let $\mathbb F$ be a real closed field and let $\Phi\in{\cal O}_{\mathbb F}$ be such that $\sqrt[\displaystyle *]{\Phi}$ exists. Let us identify $\mathbb T({\cal A}_{\Phi})$ with the field \eqref{ksyq}. Then $\mathbb T({\cal A}_{\Phi})$ is either $\mathbb F$ or its algebraic closure, since the irreducible polynomial $p_{\Phi}(x)$ over a real closed field has degree $1$ or $2$. In the latter case, the involution \eqref{kxu} on $\mathbb T({\cal A}_{\Phi})$ is not the identity; otherwise $\kappa= \kappa^{-1}$, so $\kappa^2-1=0$ and (since $\deg p_{\Phi}(x)=2$ in this case) $p_{\Phi}(x)=x^2-1$, which contradicts the irreducibility of $p_{\Phi}(x)$.
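To see the two cases concretely (our illustration): let $\mathbb F=\mathbb R$ with the identity involution and $\Phi=[1]$, so that $p_{\Phi}(x)=x-1$, ${\cal A}_{\Phi}=\sqrt[\displaystyle *]{[1]}=[1]$, and $\mathbb T({\cal A}_{\Phi})\cong\mathbb R$ with the identity involution. The orbit of ${\cal A}_{\Phi}$ consists of the $1\times 1$ forms $[a]$ with $0\ne a\in\mathbb R$, and the classification of Hermitian forms over $\mathbb R$ by signature reduces them to $[1]$ and $[-1]$; this recovers the law of inertia for $1\times 1$ symmetric forms.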
Applying Theorem \ref{tetete}, we complete the proof of (b). \end{proof} \section{Proof of Theorem \ref{t1.1}} \label{secmat} \subsection{Proof of Theorem \ref{t1.1}(a)} \label{sec(a)} Let $\mathbb F$ be an algebraically closed field of characteristic different from $2$ with the identity involution. Take ${\cal O}_{\mathbb F}$ to be all nonsingular Jordan blocks. The summands (i)--(iii) of Theorem \ref{t1.1}(a) can be obtained from the summands (i)--(iii) of Theorem \ref{Theorem 5} because for nonzero $\lambda,\mu\in\mathbb F$ \begin{align*} J_n(\lambda)\text{ is similar to }J_n(\mu)^{-T}& \quad\Longleftrightarrow\quad \lambda={\mu}^{-1},\\ \sqrt[ T]{J_n(\lambda)}\ \text{ exists }&\quad\Longleftrightarrow\quad \lambda=(-1)^{n+1}. \end{align*} The first of these two equivalences is obvious. Let us prove the second. By \eqref{lbdr} and \eqref{4.adlw}, if $\sqrt[T]{J_n(\lambda)}$ exists then $\lambda=(-1)^{n+1}$. Conversely, let $\lambda=(-1)^{n+1}$. It suffices to prove the following useful statement: \begin{equation}\label{cnt} \text{the cosquares of $\Gamma_n$ and $\Gamma_n'$ are similar to $J_n((-1)^{n+1})$}, \end{equation} which implies that $\sqrt[T]{J_n((-1)^{n+1})}$ exists by \eqref{ndw} with $\sqrt[T]{\Phi}=\Gamma_n$ and $\Psi=J_n((-1)^{n+1})$. To verify the first similarity in \eqref{cnt}, compute \begin{equation*}\label{1n} \Gamma_n^{-1}=(-1)^{n+1} \begin{bmatrix} \vdots&\vdots&\vdots&\vdots&\ddd \\ -1&-1&-1&-1&\\ 1&1&1&&\\ -1&-1&&&\\ 1&&&& 0 \end{bmatrix} \end{equation*} and \begin{equation}\label{1x11} \Gamma_n^{-T}\Gamma_n= (-1)^{n+1} \begin{bmatrix} 1&2&&\text{\raisebox{-6pt} {\large\rm *}} \\&1&\ddots&\\ &&\ddots&2\\ 0 &&&1 \end{bmatrix}. 
\end{equation} To verify the second similarity in \eqref{cnt}, there are two cases to consider. If $n$ is even then \[ (\Gamma'_{n})^{-1}= \left[\begin{array}{c|c} \begin{matrix} \vdots&\vdots&&\vdots \\ 1&1&\cdots&1\\ -1&-1&\cdots&-1 \\ 1&1&\cdots&1 \end{matrix} & \begin{matrix} \vdots&\ddd&-1&1 \\ 1&\ddd&\ddd \\ -1&1&& \\ 1&&& \end{matrix} \\ \hline \begin{matrix} -1&-1&\cdots&-1 \\ \vdots&\vdots&\ddd& \\ -1&-1&& \\ -1&&& \end{matrix} &\text{\large\rm 0} \end{array}\right] \] and \begin{equation*}\label{kiy} (\Gamma'_n)^{-T} \Gamma'_n= \begin{bmatrix} -1&\pm 2& &\text{\raisebox{-6pt} {\large\rm *}} \\ &-1&\ddots& \\ &&\ddots&\pm 2 \\ 0&&& -1 \end{bmatrix}. \end{equation*} If $n$ is odd then \begin{equation}\label{1n'} (\Gamma'_n)^{-1}= \begin{bmatrix} &&&\pm 1&\dots&-1&1\\ &0&&\vdots&\ddd& \ddd\\ &&&-1&1\\ &&&1\\ &&1\\ &\ddd\\ 1&&&&&&0 \end{bmatrix} \end{equation} and \begin{equation}\label{1n''} (\Gamma'_n)^{-T} \Gamma'_n= \begin{bmatrix} 1&\pm 1& &\text{\raisebox{-6pt} {\large\rm *}} \\ &1&\ddots& \\ &&\ddots&\pm 1 \\ 0&&& 1 \end{bmatrix}. \end{equation} We have proved that all direct sums of matrices of the form (i)--(iii) are canonical matrices for congruence. Let us prove the last statement of Theorem \ref{t1.1}(a). If two nonsingular matrices over $\mathbb F$ are congruent then their cosquares are similar. The converse also holds because the cosquares of distinct canonical matrices for congruence have distinct Jordan canonical forms. Due to \eqref{cnt}, $\Gamma_{n}$ and $\Gamma'_{n}$ are congruent to $\sqrt[T]{J_n((-1)^{n+1})}$. \subsection{Proof of Theorem \ref{t1.1}(b)} \label{sec(b)} Let $\mathbb F$ be an algebraically closed field of characteristic $2$.
According to \cite{ser0}, each square matrix over\/ $\mathbb F$ is congruent to a matrix of the form \begin{equation}\label{ogy} \bigoplus_i [J_{m_i}(\lambda_i)\,\diagdown\, I_{m_i}]\oplus \bigoplus_j \sqrt[T]{J_{n_j}(1)} \oplus \bigoplus_k J_{r_k}(0), \end{equation} in which $\lambda_i\ne 0$, $n_j$ is odd, and $J_{m_i}(\lambda_i)\ne J_{n_j}(1)$ for all $i$ and $j$. This direct sum is determined uniquely up to permutation of summands and replacement of any $J_{m_i}(\lambda_i)$ by $J_{m_i}(\lambda_i^{-1})$. The matrix $\sqrt[ T]{J_{n}(1)}$ was constructed in \cite[Lemma 1]{ser0} for any odd $n$, but it is cumbersome. Let us prove that $\Gamma'_{n}$ is congruent to $\sqrt[ T]{J_{n}(1)}$. Due to \eqref{1n'} and \eqref{1n''} (with $-1=1$), the cosquare of $\Gamma'_{n}$ is similar to $J_{n}(1)$. Let $\Sigma$ be the canonical matrix of the form \eqref{ogy} for $\Gamma'_{n}$. Then the cosquares of $\Sigma$ and $\Gamma'_{n}$ are similar, and so $\Sigma=\sqrt[ T]{J_{n}(1)}$. \subsection{Proof of Theorem \ref{t1.1}(c)} \label{sec(c)} Let $\mathbb F=\mathbb P+\mathbb Pi$ be an algebraically closed field with nonidentity involution represented in the form \eqref{1pp11}. Take ${\cal O}_{\mathbb F}$ to be all nonsingular Jordan blocks. The summands (i)--(iii) of Theorem \ref{t1.1}(c) can be obtained from the summands (i)--(iii) of Theorem \ref{Theorem 5} because for nonzero $\lambda,\mu\in\mathbb F$ \begin{align}\nonumber J_n(\lambda)\text{ is similar to }J_n(\mu)^{-*} &\quad\Longleftrightarrow\quad \lambda=\bar{\mu}^{-1},\\ \label{nlsi} \sqrt[\displaystyle *]{J_n(\lambda)}\ \text{ exists }&\quad\Longleftrightarrow\quad |\lambda|=1\ \ (\text{see \eqref{1kk}}). \end{align} Let us prove \eqref{nlsi}. By \eqref{lbdr}, if $\sqrt[\displaystyle *]{J_n(\lambda)}$ exists for $\lambda =a+bi\ (a,b\in \mathbb P)$ then $x-\lambda =x-\bar\lambda^{-1}$. Thus, $\lambda =\bar\lambda^{-1}$ and $1=\lambda \bar\lambda=a^2+b^2=|\lambda |^2$. Conversely, let $|\lambda|=1$. 
It suffices to show that the *cosquare of $i^{n+1}\sqrt{\lambda} \Gamma_n$ is similar to $J_n(\lambda)$ since then $\sqrt[\displaystyle *]{J_n(\lambda)}$ exists by \eqref{ndw} with $\Psi=J_n(\lambda)$. To verify this similarity, observe that for each unimodular $\lambda\in\mathbb F$, \begin{equation}\label{kum} (i^{n+1}\sqrt{\lambda} \Gamma_n)^{-*} (i^{n+1}\sqrt{\lambda} \Gamma_n)= \lambda\,(-1)^{n+1} \Gamma_n^{-T}\Gamma_n; \end{equation} by \eqref{1x11}, $\lambda\,(-1)^{n+1} \Gamma_n^{-T}\Gamma_n$ is similar to $\lambda J_n (1)$, which is similar to $J_n (\lambda).$ It remains to prove that each of the matrices \eqref{jde} can be used instead of (iii) in Theorem \ref{t1.1}(c). Let us show that if $\lambda\in\mathbb F$ is unimodular, then $J_n(\lambda)$ is similar to the *cosquare of each of the matrices \begin{equation}\label{ksim} \sqrt{\lambda}\sqrt[\displaystyle *]{ J_n (1)},\qquad i^{n+1}\sqrt{\lambda} \Gamma_n,\qquad i^{n+1}\sqrt{\lambda} \Gamma'_n,\qquad \sqrt{\lambda}\, \Delta_n(1). \end{equation} The first similarity is obvious. The second was proved in \eqref{kum}. The third can be proved analogously since $(\Gamma_n')^{-T}\Gamma'_n$ is similar to $\Gamma_n^{-T}\Gamma_n$ by \eqref{cnt}. 
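In the smallest case $n=1$ (our sanity check of \eqref{kum}): $\Gamma_1=[1]$, and \[ (i^{2}\sqrt{\lambda}\,\Gamma_1)^{-*}(i^{2}\sqrt{\lambda}\,\Gamma_1) =\bigl(\overline{-\sqrt{\lambda}}\bigr)^{-1} \bigl(-\sqrt{\lambda}\bigr) =\sqrt{\lambda}\cdot\sqrt{\lambda}=\lambda=J_1(\lambda), \] using $\overline{\sqrt{\lambda}}^{\,-1}=\sqrt{\lambda}$ for unimodular $\lambda$.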
The fourth similarity holds since $J_n(1)$ is similar to the *cosquare of $\Delta_n(1)$ as a consequence of the following useful property: for each $\mu\in \mathbb F$ with $\bar\mu^{-1} \mu\ne -1$, \begin{equation}\label{kdt} \text{ $J_n(\bar{\mu}^{-1} \mu)$ is similar to the *cosquare of $\Delta_n(\mu)$.} \end{equation} To verify this assertion, compute \begin{multline*} \Delta_n(\mu)^{-*} \Delta_n(\mu)\\= \begin{bmatrix} \text{\raisebox{-6pt} {\large\rm *}}&&i\bar{\mu}^{-2}&\ \bar{\mu}^{-1}\\ &\ddd&\ddd&\\ i\bar{\mu}^{-2}&\ \bar{\mu}^{-1}&&\\ \bar{\mu}^{-1}&&&0 \end{bmatrix} \Delta_n(\mu)= \begin{bmatrix} \mu\bar{\mu}^{-1} &i\bar{\mu}^{-1}u&& \text{\raisebox{-6pt} {\large\rm *}} \\&\mu\bar{\mu}^{-1} &\ddots&\\ &&\ddots& i\bar{\mu}^{-1}u\\ 0 &&&\mu\bar{\mu}^{-1} \end{bmatrix} \end{multline*} with $ u:=\bar{\mu}^{-1} \mu+ 1\ne 0.$ Therefore, the *cosquare of each of the matrices \eqref{ksim} can replace $J_n(\lambda)$ in ${\cal O}_{\mathbb F}$, and so each of the matrices \eqref{ksim} may be used as $\sqrt[\displaystyle *]{\Phi}$ in (iii) of Theorem \ref{Theorem 5}(a). Thus, instead of $\pm\sqrt[\displaystyle *]{J_n(\lambda)}$ in (iii) of Theorem \ref{t1.1}(c) we may use any of the matrices \eqref{ksim} multiplied by $\pm 1$; and hence any of the matrices \eqref{jde} except for $\mu A$ since each $\sqrt{\lambda}$ can be represented in the form $a+bi$ with $a,b\in\mathbb P$, $b\geqslant 0$, and $a+bi\ne -1$. Let $A$ be any nonsingular $n\times n$ matrix whose *cosquare is similar to a Jordan block. Then $A$ is *congruent to some matrix of type (iii), and hence $A$ is *congruent to $\mu_0\Gamma_n$ for some unimodular $\mu_0$. Thus, $\mu A$ is *congruent to $\mu\mu_0\Gamma_n$, and so we may use $\mu A$ instead of $\pm\sqrt[\displaystyle *]{J_n(\lambda)}$ in (iii). \subsection{Proof of Theorem \ref{t1.1}(d)} \label{sec(d)} Let $\mathbb P$ be a real closed field. 
Let $\mathbb K:=\mathbb P+\mathbb Pi$ be the algebraic closure of $\mathbb P$ represented in the form \eqref{1pp} with involution $a+bi\mapsto a-bi$. By \cite[Theorem 3.4.5]{hor}, we may take ${\cal O}_{\mathbb P}$ to be all $J_n(a)$ with $a \in\mathbb P$, and all $J_n(\lambda )^{\mathbb P}$ with $\lambda\in{\mathbb K}\smallsetminus\mathbb P$ determined up to replacement by $\bar\lambda$. Let $a \in\mathbb P$. Reasoning as in the proof of Theorem \ref{t1.1}(a), we conclude that \begin{itemize} \item $J_n(a)$ is similar to $J_n(b)^{-T}$ with $b \in\mathbb P$ if and only if $a=b^{-1}$; \item $\sqrt[ T]{J_n(a)}$ exists if and only if $a=(-1)^{n+1}$. \end{itemize} Thus, the summands (i)--(iii) of Theorem \ref{Theorem 5} give the summands (i)--(iii) in Theorem \ref{t1.1}(d). Due to \eqref{cnt}, we may take $(\Gamma_n)^{-T} \Gamma_n$ or $(\Gamma'_n)^{-T} \Gamma'_n$ instead of $J_n((-1)^{n+1})$ in ${\cal O}_{\mathbb P}$. Thus, we may use $\pm \Gamma_n$ or $\pm \Gamma_n'$ instead of $\pm \sqrt[T]{ J_n((-1)^{n+1})}$ in Theorem \ref{t1.1}(d). Let $\lambda,\mu\in(\mathbb P+\mathbb Pi)\smallsetminus\mathbb P$. Then \begin{align}\nonumber J_n(\lambda)^{\mathbb P}\text{ is similar to }(J_n(\mu)^{\mathbb P})^{-T} &\quad\Longleftrightarrow\quad \lambda\in\{{\mu}^{-1}, \bar{\mu}^{-1}\},\\ \label{ndsis} \sqrt[T]{J_n(\lambda)^{\mathbb P}}\ \text{ exists }&\quad\Longleftrightarrow\quad |\lambda |=1. \end{align} Let us prove \eqref{ndsis}. For $\Phi:=J_n(\lambda )^{\mathbb P}$, we have \begin{equation*}\label{kgv} p_{\Phi}(x)=(x-\lambda)(x-\bar\lambda) =x^2-(\lambda+\bar\lambda)x+|\lambda|^2. \end{equation*} If $\sqrt[ T]{\Phi}$ exists then $|\lambda|=1$ by \eqref{lbdr} and \eqref{lyf1}. Conversely, let $|\lambda|=1$. We can take \begin{equation}\label{mdu} \sqrt[T]{J_n(\lambda)^{ \mathbb P}}=\Big(\sqrt[\displaystyle *]{J_n(\lambda)}\Big)^{ \mathbb P}.
\end{equation} Indeed, $M:=\sqrt[\displaystyle *]{J_n(\lambda )}$ exists by \eqref{nlsi}; it suffices to prove \begin{equation}\label{hfe} (M^{\mathbb P})^{-T} M^{\mathbb P}=J_n(\lambda )^{\mathbb P}. \end{equation} If $M$ is represented in the form $M=A+Bi$ with $A$ and $B$ over $\mathbb P$, then its realification $M^{\mathbb P}$ (see \eqref{1j}) is permutationally similar to \[ M_{\mathbb P}:=\begin{bmatrix} A&-B\\B&A \end{bmatrix}. \] Applying the same transformation of permutation similarity to the matrices of \eqref{hfe} gives \begin{equation}\label{jpoi} (M_{\mathbb P})^{-T} M_{\mathbb P}=J_n(\lambda )_{\mathbb P}. \end{equation} Since \[ \begin{bmatrix} A+Bi&0\\0&A-Bi \end{bmatrix} \begin{bmatrix} I&iI\\I&-iI \end{bmatrix}= \begin{bmatrix} I&iI\\I&-iI \end{bmatrix} \begin{bmatrix} A&-B\\B&A \end{bmatrix}, \] we have \begin{equation*}\label{luf} M_{\mathbb P}=S^{-1}(M\oplus \bar M)S=S^{*}(M\oplus \bar M)S \end{equation*} with \[ S:=\frac{1}{\sqrt{2}} \begin{bmatrix} I&iI\\I&-iI \end{bmatrix}=S^{-*}. \] Thus, \eqref{jpoi} is represented in the form \[ \left(S^*(M\oplus \bar M)S\right)^{-*} S^*(M\oplus \bar M)S= S^{-1}\left(J_n(\lambda )\oplus J_n(\bar\lambda )\right)S. \] This equality is equivalent to the pair of equalities \[ M^{-*} M=J_n(\lambda ),\qquad \bar M^{-*} \bar M=J_n(\bar \lambda ), \] which are valid since $M=\sqrt[\displaystyle *]{J_n(\lambda )}$. This proves \eqref{mdu}, which completes the proof of \eqref{ndsis}. Thus, the summands (ii) and (iii) of Theorem \ref{Theorem 5} give the summands (ii$'$) and (iii$'$) in Theorem \ref{t1.1}(d). It remains to prove that each of the matrices \eqref{dsk} can be used instead of (iii$'$). Every unimodular $\lambda=a+bi\in {\mathbb P}+{\mathbb P}i$ with $b>0$ can be expressed in the form \begin{equation}\label{roo4} \lambda=\frac{e+i}{e-i}\,, \qquad e\in{\mathbb P},\quad e>0. 
\end{equation} Due to \eqref{cnt}, the *cosquares \[ ((e+i)\Gamma_n)^{-*} (e+i)\Gamma_n=\lambda \Gamma_n^{-*}\Gamma_n,\quad ((e+i)\Gamma'_n)^{-*} (e+i)\Gamma'_n=\lambda (\Gamma'_n)^{-*}\Gamma'_n \] are similar to $\lambda J_n((-1)^{n+1})$, which is similar to $(-1)^{n+1} J_n(\lambda)$. Theorem \ref{Theorem 5} ensures that the matrix $\pm\sqrt[T]{ J_n(\lambda)^{\,\mathbb P}}$ in (iii$'$) can be replaced \begin{equation}\label{jyf} \text{by $\pm((e+i)\Gamma_n)^{\mathbb P}$ and also by $\pm((e+i)\Gamma'_n)^{\mathbb P}$ with $e>0$.} \end{equation} For each square matrix $A$ over $\mathbb P+\mathbb Pi$ we have \begin{equation}\label{roo343} S^TA^{\mathbb P}S=\overline A^{\:\mathbb P},\qquad S:=\operatorname{diag}(1,-1,1,-1,\dots), \end{equation} and so $-\big((e+i)\Gamma_n\big)^{\mathbb P}$ is congruent to \[ -\overline{ (e+i)\Gamma_n}^{\,\mathbb P}= -\big((e-i)\Gamma_n\big)^{\mathbb P}= \big((-e+i)\Gamma_n\big)^{\mathbb P}. \] Therefore, the matrices \eqref{jyf} are congruent to $((c+i)\Gamma_n)^{\mathbb P}$ and $((c+i)\Gamma'_n)^{\mathbb P}$ with $0\ne c\in{\mathbb P}$ and $|c|=e$. Let us show that the summands {\rm(iii$'$)} can also be replaced by $\Delta_n(c+i)$ with $0\ne c\in{\mathbb P}$. By \eqref{kdt}, the *cosquare of $\Delta_n(e+i)$ with $e>0$ is similar to $J_n(\lambda)$, in which $\lambda$ is defined by \eqref{roo4}. Reasoning as in the proof of \eqref{hfe}, we find that the cosquare of $\Delta_n(e+i)^{\mathbb P}$ is similar to $J_n(\lambda)^{\,\mathbb P}$. Hence, $\pm\Delta_n(e+i)^{\mathbb P}$ with $e>0$ can be used instead of (iii$'$). Due to \eqref{roo343}, the matrix $-\Delta_n(e+i)^{\mathbb P}$ is congruent to \[ -\overline{ \Delta_n(e+i)}^{\mathbb P}= \Delta_n(-e+i)^{\mathbb P}. \] \subsection{Proof of Theorem \ref{t1.1}(e)} \label{sec(e)} \begin{lemma} \label{le} Let $\mathbb H$ be the skew field of quaternions over a real closed field $\mathbb P$. 
Let ${\cal O}_{\mathbb H}$ be a maximal set of nonsingular indecomposable canonical matrices over $\mathbb H$ for similarity. \begin{itemize} \item[\rm(a)] Each square matrix over $\mathbb H$ is *congruent to a direct sum, determined uniquely up to permutation of summands, of matrices of the form: \begin{itemize} \item[\rm(i)] $J_n(0)$. \item[\rm(ii)] $(\Phi, I_n)^+=[\Phi\,\diagdown\, I_n]$, in which $\Phi\in{\cal O}_{\mathbb H}$ is an $n\times n$ matrix such that $\sqrt[\displaystyle *]{\Phi}$ does not exist; $\Phi$ is determined up to replacement by the unique $\Psi\in{\cal O}_{\mathbb H}$ that is similar to $\Phi^{-*}$. \item[\rm(iii)] $\varepsilon_{\Phi} \sqrt[\displaystyle *]{\Phi}$, in which $\Phi\in{\cal O}_{\mathbb H}$ is such that $\sqrt[\displaystyle *]{\Phi}$ exists; $\varepsilon_{\Phi} =1$ if $\sqrt[\displaystyle *]{\Phi}$ is *congruent to $-\sqrt[\displaystyle *]{\Phi}$ and $\varepsilon_{\Phi} =\pm 1$ otherwise. This means that $\varepsilon_{\Phi} =1$ if and only if $\mathbb T({\cal A}_{\Phi})$ is an algebraically closed field with the identity involution or $\mathbb T({\cal A}_{\Phi})$ is a skew field of quaternions with involution different from quaternionic conjugation \eqref{ne}. \end{itemize} \item[\rm(b)] If $\varepsilon_{\Phi} = 1$ and $\Phi$ is similar to $\Psi$, then $\varepsilon_{\Psi} =1$. \end{itemize} \end{lemma} \begin{proof} (a) Theorem \ref{tetete} ensures that any given representation of any pograph $P$ over $\mathbb H$ decomposes uniquely, up to isomorphism of summands, into a direct sum of indecomposable representations. Hence the problem of classifying representations of $P$ reduces to the problem of classifying indecomposable representations. By Theorem \ref{tetete} and Lemma \ref{lenhi}, the matrices (i)--(iii) form a maximal set of nonisomorphic indecomposable representations of the pograph \eqref{jsos+}. (b) Suppose, to the contrary, that $\varepsilon_{\Psi} =\pm 1$. 
Then $\sqrt[\displaystyle *]{\Psi}$ and $-\sqrt[\displaystyle *]{\Psi}$ have the same canonical form $\sqrt[\displaystyle *]{\Phi}$, a contradiction. \end{proof} Let $\mathbb P$ be a real closed field and let $\mathbb H$ be the skew field of $\mathbb P$-quaternions with quaternionic conjugation \eqref{ne} or quaternionic semiconjugation \eqref{nen}. These involutions act as complex conjugation on the algebraically closed subfield $\mathbb K:=\mathbb P+\mathbb Pi$. By \cite[Section 3, \S 12]{jac}, we can take ${\cal O}_{\mathbb H}$ to be all $J_n(\lambda)$, in which $\lambda\in\mathbb K$ and $\lambda$ is determined up to replacement by $\bar \lambda$. For any nonzero $\mu\in\mathbb K$, the matrix $J_n(\mu)^{-*}$ is similar to $J_n(\bar\mu^{-1})$. Since $\bar\mu^{-1}$ is determined up to replacement by $\mu^{-1}$, \begin{equation*}\label{lust} J_n(\lambda)\text{ is similar to }J_n(\mu)^{-*} \quad\Longleftrightarrow\quad \lambda\in\{{\mu}^{-1}, \bar{\mu}^{-1}\}. \end{equation*} Let us prove that for a nonzero $\lambda \in\mathbb K$ \begin{equation*}\label{nsis} \sqrt[\displaystyle *]{J_n(\lambda)}\ \text{ exists }\quad\Longleftrightarrow\quad |\lambda |=1. \end{equation*} If $\sqrt[\displaystyle *]{J_n(\lambda)}$ exists then by \eqref{lbdr} $x-\lambda =x-\bar\lambda^{-1}$ and so $|\lambda|=1$. Conversely, let $|\lambda|=1$. In view of \eqref{1x11}, the *cosquare of $A:=\sqrt{\lambda (-1)^{n+1}}\Gamma_n$ is \begin{equation}\label{ntd} \Phi:=A^{-*} A= \lambda F,\qquad F:=(-1)^{n+1} \Gamma_n^{-T}\Gamma_n = \begin{bmatrix} 1&2&&\text{\raisebox{-6pt} {\large\rm *}} \\&1&\ddots&\\ &&\ddots&2\\ 0 &&&1 \end{bmatrix}, \end{equation} and so $\Phi$ is similar to $J_n(\lambda)$. Thus, $\sqrt[\displaystyle *]{J_n(\lambda)}$ exists by \eqref{ndw} with $\sqrt[\displaystyle *]{\Phi}=A$. Lemma \ref{le}(a) ensures the summands (i)--(iii) in Theorem \ref{t1.1}(e); the coefficient $\varepsilon$ in (iii) is defined in Lemma \ref{le}(a). 
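For orientation we add a minimal worked instance of \eqref{ntd}; this example is not part of the original argument and assumes the convention $\Gamma_1=(1)$.

```latex
% Added illustration (assumption: Gamma_1 = (1)): the case n = 1 of \eqref{ntd}.
% For unimodular lambda, A = \sqrt{\lambda(-1)^{2}}\,\Gamma_1 = \sqrt{\lambda}, and
\[
A^{-*}A=\bigl(\overline{\sqrt{\lambda}}\,\bigr)^{-1}\sqrt{\lambda}
       =\frac{\lambda}{\,\overline{\sqrt{\lambda}}\,\sqrt{\lambda}\,}
       =\frac{\lambda}{|\lambda|}=\lambda=J_1(\lambda),
\]
% so F = (1), Phi = lambda, and \sqrt[*]{J_1(\lambda)} = \sqrt{\lambda} exists
% precisely because |lambda| = 1.
```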
Let us prove that $\varepsilon$ can be calculated by \eqref{kki}. By Lemma \ref{le}(b) and since $\Phi$ in \eqref{ntd} is similar to $J_n(\lambda)$, we have $\varepsilon =\varepsilon_{\Phi}$, so it suffices to prove \eqref{kki} for $\varepsilon_{\Phi}$. Two matrices $G_1,G_2\in\mathbb H^{n\times n}$ give an endomorphism $[G_1,G_2]$ of $\underline{\cal A}_{\Phi}=(A,A^*)$ if and only if they satisfy \eqref{mdtc}. By \eqref{dkr}, the equalities \eqref{mdtc} imply \begin{equation}\label{kdn} G_1{\Phi}={\Phi} G_1. \end{equation} \emph{Case $\lambda\ne \pm 1$}. Represent $G_1$ in the form $U+Vj$ with $U,V\in\mathbb K^{n\times n}$. Then \eqref{kdn} implies two equalities \begin{equation}\label{kuf} U\Phi =\Phi U,\qquad V\bar \Phi j =\Phi Vj. \end{equation} By the second equality and \eqref{ntd}, $\bar\lambda VF=\lambda FV$, \[ (\bar\lambda-\lambda)V= \lambda (F-I)V-\bar\lambda V(F-I). \] Thus $V=0$ since $\lambda\ne\bar\lambda$ and $F-I$ is nilpotent upper triangular. By the first equality in \eqref{kuf} (which is over the field $\mathbb K$), $G_1=U=f(\lambda F)=f(\Phi)$ for some $f\in\mathbb K[x]$; see the beginning of the proof of Lemma \ref{lenhi}(d). Since $A$ is over $\mathbb K$, the identities \eqref{mdtc} imply \eqref{lrsh}. Because $G_2=A G_1 A^{-1}$, the homomorphism $[G_1,G_2]\in \End (\underline{\cal A}_{\Phi})$ is completely determined by $G_1=f(\Phi)$. The matrix $\Phi=\lambda F$ is upper triangular, so the mapping $f(\Phi)\mapsto f(\lambda)$ on $\mathbb K[\Phi]$ defines a homomorphism of rings $\End (\underline{\cal A}_{\Phi})\to \mathbb K$; its kernel is the radical of $\End (\underline{\cal A}_{\Phi})$. Hence $\mathbb T( {\cal A}_{\Phi})$ can be identified with $\mathbb K$. 
Using \eqref{ldbye}, we see that the involution on $\mathbb T({\cal A}_{\Phi})$ is induced by the mapping $G_1\mapsto G_2^*$ of the form \[f(\lambda F)\mapsto f((\lambda F)^{-*})^*= \bar f((\lambda F)^{-1}).\] Therefore, the involution is \[f(\lambda)\ \longmapsto\ \bar f({\lambda}^{-1})= \bar f(\bar{\lambda})= \overline{f({\lambda})}\] and coincides with the involution $a+bi\mapsto a-bi$ on $\mathbb K$. The statement (iii) in Lemma \ref{le}(a) now implies $\varepsilon_{\Phi}= \pm 1$; this proves \eqref{kki} in the case $\lambda\ne \pm 1$. \emph{Case $\lambda= \pm 1$.} Then \begin{equation}\label{mse1} A=\sqrt{\lambda (-1)^{n+1}}\Gamma_n= \begin{cases} \Gamma_n & \text{if $ \lambda=(-1)^{n+1}$}, \\ i\Gamma_n & \text{if $ \lambda=(-1)^{n}$}. \end{cases} \end{equation} Define \begin{align*} \check h&:=a+bi-cj-dk\quad \text{for each}\ \ h=a+bi+cj+dk\in\mathbb H, \\ \check f(x)&:=\sum_l \check h_lx^l\quad \text{for each}\ \ f(x)=\sum_l h_{l}x^l\in \mathbb H[x]. \end{align*} Because $\lambda= \pm 1$ and by \eqref{kdn}, $G_1$ has the form \[ G_1=\begin{bmatrix} a_1&a_2&\cdots&a_{n} \\&a_1&\ddots&\vdots \\&&\ddots&a_2\\ 0&&&a_1 \end{bmatrix}, \qquad a_1,\dots,a_n\in\mathbb H. \] Thus, $G_1=f(\Phi)$ for some polynomial $f(x)\in\mathbb H[x]$. Using the first equality in \eqref{mdtc}, the identity $if(x)=\check f(ix)$, and \eqref{mse1}, we obtain \begin{align*} G_2=A G_1 A^{-1} =A f(\Phi) A^{-1} =\begin{cases} f( A \Phi A^{-1})= f( \Phi^{-*}) & \text{if $ \lambda=(-1)^{n+1}$,} \\ \check f( A \Phi A^{-1})= \check f( \Phi^{-*}) & \text{if $ \lambda=(-1)^{n}$}. \end{cases} \end{align*} Since the homomorphism $[G_1,G_2]$ is completely determined by $G_1=f(\Phi)$ and $\Phi$ has the upper triangular form \eqref{ntd} with $\lambda=\pm 1$, we conclude that the mapping $f(\Phi)\mapsto f(\lambda)$ defines a homomorphism of rings $\End(\underline{\cal A}_{\Phi})\to \mathbb H$; its kernel is the radical of $\End(\underline{\cal A}_{\Phi})$. 
Hence $\mathbb T( {\cal A}_{\Phi})$ can be identified with $\mathbb H$. The involution on $\mathbb T( {\cal A}_{\Phi})$ is induced by the mapping $G_1\mapsto G_2^*$; i.e., by \[ f(\Phi)\mapsto \begin{cases} \bar{f}( \Phi^{-1}) & \text{if $ \lambda=(-1)^{n+1}$,} \\ \widehat{f}( \Phi^{-1}) & \text{if $ \lambda=(-1)^{n}$}, \end{cases} \] in which the involution $h\mapsto\bar h$ on $\mathbb H$ is either quaternionic conjugation \eqref{ne} or quaternionic semiconjugation \eqref{nen}, and $h\mapsto\widehat{h}$ denotes the other involution \eqref{nen} or \eqref{ne}. Thus the involution on $\mathbb T( {\cal A}_{\Phi})$ is $h\mapsto\bar h$ if $\lambda=(-1)^{n+1}$ and is $h\mapsto \widehat{h}$ if $\lambda=(-1)^{n}$. Due to (iii) in Lemma \ref{le}(a), this proves \eqref{kki} in the case $\lambda= \pm 1$. It remains to prove that the matrices \eqref{gyo} and \eqref{gyo1} can be used instead of (iii) in Theorem \ref{t1.1}(e). Let us prove this statement for the first matrix in \eqref{gyo}. For each unimodular $\lambda\in\mathbb K$, the *cosquare \eqref{ntd} of $A=\sqrt{\lambda (-1)^{n+1}}\Gamma_n$ is similar to $J_n(\lambda)$, so we can replace $J_n(\lambda)$ by $\Phi$ in ${\cal O}_{\mathbb H}$ and conclude by Lemma \ref{le}(a) that $\varepsilon A$ can be used instead of (iii) in Theorem \ref{t1.1}(e). First, let the involution on $\mathbb H$ be quaternionic conjugation. By \eqref{kki} the matrix $\varepsilon A$ is \begin{equation}\label{su1a} \text{either }\ i\Gamma_n,\ \text{ or }\ \pm\mu\Gamma_n \text{ with } \mu:=\sqrt{\lambda(-1) ^{n+1}}\ne i. \end{equation} Since $\lambda$ is determined up to replacement by $\bar{\lambda}$ and $\sqrt{\lambda(-1) ^{n+1}}\ne i$, we can take $\lambda(-1)^{n+1}=u+vi\ne -1$ with $v\geqslant 0$, and obtain $\mu=\sqrt{\lambda(-1)^{n+1}}=a+bi$ with $a> 0$ and $b\geqslant 0$. 
Replacing the matrices $-\mu\Gamma_n=(-a-bi)\Gamma_n$ in \eqref{su1a} by the *congruent matrices $\bar{j}\cdot(-a-bi)\Gamma_n\cdot j=(-a+bi)\Gamma_n$, we get the first matrix in \eqref{gyo}. Now let the involution be quaternionic semiconjugation. By \eqref{kki} the matrix $\varepsilon A$ is \begin{equation}\label{su2} \text{either \ $\Gamma_n$, \ or \ $\pm\mu\Gamma_n$ with } \mu:=\sqrt{\lambda(-1) ^{n+1}}\ne 1. \end{equation} In \eqref{su2} we can take $\lambda(-1)^{n+1}=u+vi\ne 1$ with $v\geqslant 0$. Then $\mu=\sqrt{\lambda(-1)^{n+1}}=a+bi$ with $a\geqslant 0$ and $b> 0$. Replacing the matrices $-\mu\Gamma_n=(-a-bi)\Gamma_n$ in \eqref{su2} by the *congruent matrices $\bar{j}\cdot(-a-bi)\Gamma_n\cdot j=(a-bi)\Gamma_n$ ($\bar j=j$ since the involution is quaternionic semiconjugation), we get the first matrix in \eqref{gyo}. The same reasoning applies to the second matrix in \eqref{gyo}. Let us prove that the matrix \eqref{gyo1} can be used instead of (iii) in Theorem \ref{t1.1}(e). By \eqref{kdt}, $J_n(\lambda)$ with a unimodular $\lambda\in\mathbb K$ is similar to the *cosquare of $\sqrt{\lambda}\, \Delta_n$ with $\Delta_n:=\Delta_n(1)$. Therefore, $\varepsilon \sqrt[\displaystyle *]{ J_n (\lambda)}$ in (iii) can be replaced by $\varepsilon \sqrt{\lambda}\, \Delta_n$. Suppose that either the involution is quaternionic conjugation and $n$ is odd, or that the involution is quaternionic semiconjugation and $n$ is even. Then $\bar j=(-1)^{n}j$. By \eqref{kki}, $\varepsilon =1$ if $\lambda =-1$ and $\varepsilon =\pm 1$ if $\lambda \ne -1$. So each $\varepsilon \sqrt{\lambda}\, \Delta_n$ is either $i\Delta_n$ or $\pm\mu\Delta_n$, in which $\mu:=\sqrt{\lambda}$ and $\lambda=u+vi\ne -1$. We can suppose that $v\geqslant 0$ since $\lambda$ is determined up to replacement by $\bar{\lambda}$. 
Because $\mu$ is represented in the form $a+bi$ with $a> 0$ and $b\geqslant 0$, the equality \begin{equation*}\label{hdu} S_n\Delta_nS_n =(-1)^n\Delta_n, \qquad S_n:=\operatorname{diag}(j,-j,j,-j,\dots), \end{equation*} shows that we can replace $-\mu\Delta_n=(-a-bi) \Delta_n$ by the *congruent matrix \[ S_n^*(-a-bi)\Delta_nS_n =(-1)^n S_n(-a-bi)\Delta_n S_n=(-a+bi)\Delta_n \] and obtain the matrix \eqref{gyo1}. Now suppose that the involution is quaternionic conjugation and $n$ is even, or that the involution is quaternionic semiconjugation and $n$ is odd. Then $\bar j=(-1)^{n+1}j$. By \eqref{kki}, each $\varepsilon \sqrt{\lambda}\, \Delta_n$ is either $\Delta_n$ or $\pm\mu\Delta_n$, in which $\mu:=\sqrt{\lambda}$ and $\lambda=u+vi\ne 1$ with $v\geqslant 0$. Since $\mu$ is represented in the form $a+bi$ with $a\geqslant 0$ and $b>0$, we can replace $-\mu\Delta_n =(-a-bi)\Delta_n$ by the *congruent matrix \[ S_n^*(-a-bi)\Delta_nS_n =(-1)^{n+1} S_n(-a-bi)\Delta_n S_n=(a-bi)\Delta_n \] and obtain the matrix \eqref{gyo1}. \end{document}
\begin{document} \newtheorem{lemma}{Lemma}[section] \newtheorem{thm}[lemma]{Theorem} \newtheorem{cor}[lemma]{Corollary} \newtheorem{voorb}[lemma]{Example} \newtheorem{rem}[lemma]{Remark} \newtheorem{prop}[lemma]{Proposition} \newtheorem{stat}[lemma]{{\hspace{-5pt}}} \newtheorem{obs}[lemma]{Observation} \newtheorem{defin}[lemma]{Definition} \newenvironment{remarkn}{\begin{rem} \rm}{\end{rem}} \newenvironment{exam}{\begin{voorb} \rm}{\end{voorb}} \newenvironment{defn}{\begin{defin} \rm}{\end{defin}} \newenvironment{obsn}{\begin{obs} \rm}{\end{obs}} \newenvironment{emphit}{\begin{itemize} }{\end{itemize}} \newcounter{teller} \renewcommand{\theteller}{\Roman{teller}} \newenvironment{tabel}{\begin{list} {\rm \bf \Roman{teller}. 
}{\usecounter{teller} \leftmargin=1.1cm \labelwidth=1.1cm \labelsep=0cm \parsep=0cm} }{\end{list}} \newcounter{tellerr} \renewcommand{\thetellerr}{(\roman{tellerr})} \newenvironment{subtabel}{\begin{list} {\rm (\roman{tellerr}) }{\usecounter{tellerr} \leftmargin=1.1cm \labelwidth=1.1cm \labelsep=0cm \parsep=0cm} }{\end{list}} \newenvironment{ssubtabel}{\begin{list} {\rm (\roman{tellerr}) }{\usecounter{tellerr} \leftmargin=1.1cm \labelwidth=1.1cm \labelsep=0cm \parsep=0cm \topsep=1.5mm} }{\end{list}} \newcommand{\simh}{{\stackrel{{\rm cap}}{\sim}}} 
\newcommand{\DWR}{} \hyphenation{groups} \hyphenation{unitary} \newcommand{\tfrac}[2]{{\textstyle \frac{#1}{#2}}} \thispagestyle{empty} \begin{center} \vspace*{1.5cm} {\Large{\bf The weighted Hardy inequality }}\\[3mm] {\Large{\bf and self-adjointness of }}\\[3mm] {\Large{\bf symmetric diffusion operators }} \\[5mm] \large Derek W. Robinson$^\dag$ \\[1mm] \normalsize{25th June 2020}\\[1mm] \end{center} \begin{list}{}{\leftmargin=1.7cm \rightmargin=1.7cm \listparindent=15mm \parsep=0pt} \item {\bf Abstract} $\;$ Let $\Omega$ be a domain in ${\bf R}^d$ with boundary $\Gamma$, $d_\Gamma$ the Euclidean distance to the boundary and $H=-\mathop{\rm div}(C\,\nabla)$ an elliptic operator with $C=(\,c_{kl}\,)>0$ where $c_{kl}=c_{lk}$ are real, bounded, Lipschitz functions. We assume that $C\sim c\,d_\Gamma^{\,\delta}$ as $d_\Gamma\to0$ in the sense of asymptotic analysis where $c$ is a strictly positive, bounded, Lipschitz function and $\delta\geq0$. We also assume that there is an $r>0$ and a $ b_{\delta,r}>0$ such that the weighted Hardy inequality \[ \int_{\Gamma_{\!\!r}} d_\Gamma^{\,\delta}\,|\nabla \psi|^2\geq b_{\delta,r}^{\,2}\int_{\Gamma_{\!\!r}} d_\Gamma^{\,\delta-2}\,| \psi|^2 \] is valid for all $\psi\in C_c^\infty(\Gamma_{\!\!r})$ where $\Gamma_{\!\!r}=\{x\in\Omega: d_\Gamma(x)<r\}$. We then prove that the condition $(2-\delta)/2<b_\delta$ is sufficient for the essential self-adjointness of $H$ on $C_c^\infty(\Omega)$ with $b_\delta$ the supremum over $r$ of all possible $b_{\delta,r}$ in the Hardy inequality. This result extends all known results for domains with smooth boundaries and also gives information on self-adjointness for a large family of domains with rough, e.g.\ fractal, boundaries. \end{list} \noindent AMS Subject Classification: 31C25, 47D07. \noindent Keywords: Self-adjointness, diffusion operators, weighted Hardy inequality. 
\noindent \begin{tabular}{@{}cl@{\hspace{10mm}}cl} $ {}^\dag\hspace{-5mm}$& Mathematical Sciences Institute (CMA) & {} &{}\\ &Australian National University& & {}\\ &Canberra, ACT 0200 && {} \\ & Australia && {} \\ &[email protected] & &{}\\ \end{tabular} \setcounter{page}{1} \section{Introduction}\label{S1} In this paper we derive sufficiency criteria for the self-adjointness of degenerate elliptic operators $H=-\mathop{\rm div}(C\nabla)$ defined on $C_c^\infty(\Omega)$ where $\Omega$ is a domain in ${\bf R}^d$ with boundary $\Gamma$ and $C=(c_{kl})$ is a strictly positive, symmetric, $d\times d$-matrix with $c_{kl}$ real Lipschitz continuous functions. We assume that $C$ resembles the diagonal matrix $c\,d_\Gamma^{\,\delta}I$ near the boundary where $c$ is a strictly positive bounded Lipschitz function, $d_\Gamma$ the Euclidean distance to the boundary and $\delta\geq0$ a parameter which measures the order of degeneracy. A precise definition of the degeneracy condition will be given in Section~\ref{S5} but the idea is that the matrices $C$ and $c\,d_\Gamma^{\,\delta}I$ are equivalent on a boundary layer $\Gamma_{\!\!r}=\{x\in\Omega: d_\Gamma(x)<r\}$ and asymptotically equal as $r\to0$. In an earlier article \cite{Rob15} we established that if $\Omega$ is a $C^2$-domain, or the complement of a lower dimensional $C^2$-domain, then the condition $\delta>2-(d-d_{\!H})/2$, with $d_{\!H}$ the Hausdorff dimension of the boundary $\Gamma$, is sufficient for self-adjointness. Broadly similar conclusions had been reached earlier for bounded domains by Nenciu and Nenciu \cite{NeN} by quite disparate arguments. In this paper we develop an alternative approach which reconciles the two sets of arguments and gives sufficiency criteria for a much broader class of domains. In particular we are able to derive results for domains with rough boundaries, e.g.\ boundaries with a fractal nature or boundaries which are uniformly disconnected. 
Our arguments rely on two basic ideas. First, self-adjointness is determined by the properties of the coefficients of the operator on an arbitrarily thin boundary layer $\Gamma_{\!\!r}$. Secondly, the existence of the weighted Hardy inequality on the boundary layer, \begin{equation} \int_{\Gamma_{\!\!r}} d_\Gamma^{\,\delta}\,|\nabla \psi|^2\geq b_{\delta,r}^{\,2}\int_{\Gamma_{\!\!r}} d_\Gamma^{\,\delta-2}\,| \psi|^2 \label{esa1.2} \end{equation} for some $b_{\delta,r}>0$ and all $\psi\in C_c^\infty(\Gamma_{\!\!r})$ is crucial. If the Hardy inequality is valid for one $r>0$ it is clearly valid for all $s\in\langle0,r]$ and one can choose the corresponding $b_{\delta,s}\geq b_{\delta,r}$. Therefore we define the (boundary) Hardy constant $b_\delta$ by \[ b_\delta= \textstyle{\sup_{r>0}}\;\textstyle{\inf_{\psi\in C_c^\infty(\Gamma_{\!\!r})}}\;\Big\{\Big( {\displaystyle\int_{\Gamma_{\!\!r}}} d_\Gamma^{\,\delta}\,|\nabla \psi|^2\Big)^{1/2}\Big /\,\Big({\displaystyle\int_{\Gamma_{\!\!r}}} d_\Gamma^{\,\delta-2}\,| \psi|^2\Big)^{1/2}\Big\}\;, \] i.e.\ $b_\delta$ is the supremum over the possible choices of the $b_{\delta,s}$. Then, under these two assumptions, we establish in Theorem~\ref{tsa5.1} that the condition \begin{equation} (2-\delta)/2<b_\delta \label{esa1.3} \end{equation} is sufficient for self-adjointness of $H$. It is notable that there are no explicit restrictions on the domain $\Omega$ or its boundary $\Gamma$, only the implicit conditions necessary for the boundary Hardy inequality (\ref{esa1.2}). In all the specific cases considered in \cite{Rob15} the Hardy inequality~(\ref{esa1.2}) is valid and the Hardy constant has the standard value $b_\delta=(d-d_{\!H}+\delta-2)/2$. Then (\ref{esa1.3}) reduces to the condition $\delta>2-(d-d_{\!H})/2$. Consequently a similar conclusion is valid for all domains which support the weighted Hardy inequality (\ref{esa1.2}) on a boundary layer with the standard constant. 
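To make the standard value concrete we add a one-dimensional illustration (not part of the original text): take $\Omega=\langle0,\infty\rangle\subset{\bf R}$, so that $d_\Gamma(x)=x$, $d=1$ and $d_{\!H}=0$, and let $\psi\in C_c^\infty(\Omega)$ be real-valued with $\delta\ne1$.

```latex
% Added illustration of the weighted Hardy inequality on the half-line.
% Integration by parts, using (x^{\delta-1})' = (\delta-1) x^{\delta-2}:
\[
\int_0^\infty x^{\delta-2}\,\psi^2
 =\frac{1}{\delta-1}\int_0^\infty (x^{\delta-1})'\,\psi^2
 =-\frac{2}{\delta-1}\int_0^\infty x^{\delta-1}\,\psi\,\psi' ,
\]
% and the Cauchy--Schwarz inequality applied to x^{\delta/2}\psi' and x^{(\delta-2)/2}\psi gives
\[
\int_0^\infty x^{\delta}\,|\psi'|^2
 \geq\Big(\frac{\delta-1}{2}\Big)^{2}\int_0^\infty x^{\delta-2}\,\psi^2 .
\]
% Thus b_delta = |\delta-1|/2, which for \delta > 1 equals the standard value
% (d - d_H + \delta - 2)/2, and the condition (2-\delta)/2 < b_\delta becomes
% \delta > 3/2 = 2 - (d - d_H)/2.
```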
Note that if $\delta\geq 2$ then (\ref{esa1.3}) is obviously satisfied. In fact in this case self-adjointness of $H$ follows from an upper bound $C\leq a\,d_\Gamma^{\,\delta}I$ on the coefficients and the weighted Hardy inequality is irrelevant. This latter result was already known (see, for example, \cite{ERS5} Theorem~4.10 and Corollary~4.12) but we give a short proof in Section~\ref{S2}. It is also known from an earlier collaboration \cite{LR} with Lehrb{\"a}ck on Markov uniqueness that the condition $\delta\geq 2-(d-d_{\!H})$ is necessary for self-adjointness. Theorem~\ref{tsa5.1} establishes, however, that the condition $\delta>2-(d-d_{\!H})/2$ is sufficient for $H$ to be self-adjoint in the standard case. The proof of this statement utilizes the ideas of Agmon, \cite{Agm1} Theorem~1.5, as developed by Nenciu and Nenciu \cite{NeN} but extended to unbounded domains. These ideas are elaborated in detail in Section~\ref{S4} where we establish a prototype of our main result from the upper bounds $C\leq a\,d_\Gamma^{\,\delta}I$ and a stronger version of the weighted Hardy inequality (\ref{esa1.2}). Then in Section~\ref{S5} we establish the key result, Theorem~\ref{tsa5.1}. In Section~\ref{S6} we apply the latter result to uniform domains with Ahlfors regular boundaries. This allows a wide range of `rough' boundaries. The application is made possible by a result of Lehrb{\"a}ck, \cite{Leh3} Theorem~1.3, which establishes the validity of the weighted Hardy inequality (\ref{esa1.2}) on unbounded John domains. A modification of Lehrb{\"a}ck's arguments \cite{Leh5} also demonstrates the validity of the Hardy inequality on boundary layers for bounded John domains. In combination with the Ahlfors regularity one can then deduce that the condition (\ref{esa1.3}) again suffices for the self-adjointness of $H$. Unfortunately little is known about the optimal value of the (boundary) Hardy constant $b_\delta$ at this level of generality. 
Nevertheless we establish that $b_\delta$ is bounded above by the standard value $(d-d_{\!H}+\delta-2)/2$ and that $b_\delta+\delta/2$ is an increasing function on the interval of interest. It then follows that there is a critical degeneracy $\delta_c\in\langle 2-(d-d_{\!H})/2, 2\rangle$ such that $H$ is self-adjoint if $\delta>\delta_c$. In addition $b_\delta$ is equal to the standard value if and only if $b_2=(d-d_{\!H})/2$ and in this case $\delta_c=2-(d-d_{\!H})/2$ (see Theorem~\ref{tsa6.1}). One can, however, construct examples for which $b_\delta<(d-d_{\!H}+\delta-2)/2$. In fact for each $\delta>2-(d-d_{\!H})/2$ there are examples with $b_\delta$ arbitrarily close to zero and consequently $\delta_c$ is arbitrarily close to $2$. Moreover, if the sufficiency condition (\ref{esa1.3}) is satisfied then $H$ satisfies a weighted Rellich inequality on a boundary layer $\Gamma_{\!\!r}$. Finally we emphasize that the weighted Hardy inequality (\ref{esa1.2}) only depends on the operator $H$ through the order of degeneracy $\delta$ and the conclusions of our theorems only depend on properties near the boundary. Thus if $\Gamma$ decomposes as the countable union of positively separated components $\Gamma^{(j)}$, Theorems~\ref{tsa5.1} and \ref{tsa6.1} can be elaborated. In this situation the boundary layer $\Gamma_{\!\!r}$ also decomposes into separate components $\Gamma^{(j)}_{\!\!r}$ if $r$ is sufficiently small. Then one can assign different orders of degeneracy $\delta_j$ to each component and introduce different Hardy constants $b_{\delta_j}$. After this modification self-adjointness of $H$ follows from the family of conditions $(2-\delta_j)/2<b_{\delta_j}$. This is discussed in Section~4. \section{Preliminaries}\label{S2} In this section we gather some preliminary information on self-adjointness of diffusion operators. The elliptic operator $H=-\mathop{\rm div}(C\nabla)$, with domain $C_c^\infty(\Omega)$, is a positive symmetric operator on $L_2(\Omega)$. 
Consequently it is closable with respect to the graph norm $\|\varphi\|_{D(H)}=(\|H\varphi\|_2^2+\|\varphi\|_2^2)^{1/2}$. For simplicity of notation we let $H$ and $D(H)$ denote the closure of the operator and its domain, respectively. Since the coefficients $c_{kl}$ are bounded, $\sup_{x\in\Omega}\|C(x)\|<\infty$, i.e.\ there is a $\nu>0$ such that $C\leq \nu I$. It also follows from the strict positivity of $C$ that for each compact subset $K$ of $\Omega$ there is a $\mu_K>0$ such that $C\geq\mu_K I$. Thus the operator $H$ is locally strongly elliptic. These local properties imply, by elliptic regularity, that the domain $D(H^*)$ of the $L_2$-adjoint $H^*$ of $H$ is contained in $W^{2,2}_{\rm loc}(\Omega)$. Moreover, $W^{2,\infty}_{c}(\Omega)D(H^*)\subseteq D(H)$. Next let $h$ denote the positive bilinear form associated with $H$ on $L_2(\Omega)$, i.e.\ the form with domain $D(h)=C_c^\infty(\Omega)$ given by \[ h(\psi,\varphi)=(\psi,H\varphi)=\sum\nolimits^d_{k,l=1}(\partial_k\psi, c_{kl}\partial_l\varphi) \] for all $\psi, \varphi\in D(h)$ and set $h(\varphi)=h(\varphi, \varphi)$. The form is closable with respect to the graph norm $\|\varphi\|_{D(h)}=(h(\varphi)+\|\varphi\|_2^2)^{1/2}$ and we also use $h$ and $D(h)$ to denote the closed form and its domain. Then $h$ is a Dirichlet form \cite{BH}, \cite{FOT} and by elliptic regularity $D(h)\subseteq W^{1,2}_{\rm loc}(\Omega)$. The Dirichlet form $h$ has a {\it carr\'e du champ}, a positive bilinear form $\psi, \varphi\in D(h)\mapsto \Gamma_{\!\!c}(\psi, \varphi)\in L_1(\Omega)$ such that \[ \Gamma_{\!\!c}(\psi, \varphi)=\sum\nolimits^d_{k,l=1}c_{kl} (\partial_k\psi)( \partial_l\varphi) \] for all $\psi, \varphi\in C_c^\infty(\Omega)$ (see, for example, \cite{BH} Section~1.4). Consequently $h(\varphi)=\|\Gamma_{\!\!c}(\varphi)\|_1$ where $\Gamma_{\!\!c}(\varphi)=\Gamma_{\!\!c}(\varphi, \varphi)$. 
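As an added illustration (not in the original) consider the model coefficients $C=c\,d_\Gamma^{\,\delta}I$ of the introduction; the carr\'e du champ and the form then reduce to weighted Dirichlet integrals.

```latex
% Added illustration: for C = c d_Gamma^delta I one has, for psi, varphi in C_c^infty(Omega),
\[
\Gamma_{\!\!c}(\psi,\varphi)=c\,d_\Gamma^{\,\delta}\,\nabla\psi\cdot\nabla\varphi
\qquad\text{and}\qquad
h(\varphi)=\int_\Omega c\,d_\Gamma^{\,\delta}\,|\nabla\varphi|^2 ,
\]
% so the weighted Hardy inequality (\ref{esa1.2}) is a lower bound for the form h,
% restricted to the boundary layer, in terms of the weight d_Gamma^{delta-2}.
```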
In the remainder of this section we introduce a well-known identity which has been used extensively to establish self-adjointness and apply it to operators on ${\bf R}^d$ and to operators with $\delta\geq2$. These applications are subsequently useful. At this stage we do not require any additional restrictions on the boundary properties of the coefficients. Further assumptions will be introduced in Section~\ref{S5}. The standard Stone--von Neumann criterion for the self-adjointness of $H$ is the range property $R(\lambda I+H)=L_2(\Omega)$ for some, or for all, $\lambda>0$. Equivalently one has the kernel condition $\ker(\lambda I+H^*)=\{0\}$ on the adjoint. The following fundamental proposition gives a method for verifying the latter condition in some very general situations. \begin{prop}\label{psa2.1} If $\varphi\in D(H^*)$ and $\eta\in W^{1,\infty}_c(\Omega)$ then $\eta\varphi\in D(h)$ and \begin{equation} (H^*\varphi, \eta^2\varphi)=h(\eta\varphi)-(\varphi,\Gamma_{\!\!c}(\eta)\varphi) \;.\label{esa2.1} \end{equation} Thus if $(\lambda I+H^*)\varphi=0$ for some $\lambda>0$ then \begin{equation} \lambda\, \|\eta\varphi\|_2^2+h(\eta\varphi)=(\varphi,\Gamma_{\!\!c}(\eta)\varphi) \label{esa2.2} \end{equation} for all $\eta\in W^{1,\infty}_c(\Omega)$. \end{prop} This result has a long history and a variety of different proofs. The identity (\ref{esa2.1}) occurs in Wienholtz' 1958 thesis (see \cite{Wie} Section~3) so it was certainly known to Rellich in the 1950s. It was also derived by Agmon in his 1982 lectures (see \cite{Agm1} equation (1.16)). Both these authors used standard methods of elliptic differential equations. Alternatively~(\ref{esa2.1}) can be established in the abstract setting of local Dirichlet forms, \cite{Rob12} Lemma~2.2, or on graphs, \cite{KPP} Lemma~2.1. The identity is referred to as the localization lemma in \cite{NeN} where numerous other background references are given. 
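The identity (\ref{esa2.1}) can be motivated by a formal Leibniz computation; for real-valued $\varphi$ (the general case follows by decomposition into real and imaginary parts) one has

```latex
% Formal sketch of (esa2.1): expand both sides by the Leibniz rule.
\[
(H^*\varphi,\eta^2\varphi)=h(\varphi,\eta^2\varphi)
=\int_\Omega\eta^2\,\Gamma_{\!\!c}(\varphi)
+2\int_\Omega\eta\varphi\,\Gamma_{\!\!c}(\varphi,\eta)
\]
\[
h(\eta\varphi)
=\int_\Omega\eta^2\,\Gamma_{\!\!c}(\varphi)
+2\int_\Omega\eta\varphi\,\Gamma_{\!\!c}(\eta,\varphi)
+\int_\Omega\varphi^2\,\Gamma_{\!\!c}(\eta)
\;,
\]
```

and subtraction of the first expression from the second gives (\ref{esa2.1}). The rigorous justification requires the regularity property $\eta\,D(H^*)\subseteq D(h)$ discussed in Remark~\ref{rsa2.1} below.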
The most straightforward application of the result follows by noting that the form $h$ is positive. Hence if $(\lambda I+H^*)\varphi=0$ then (\ref{esa2.2}) implies that \begin{equation} \lambda \,\|\eta\varphi\|_2^2\leq (\varphi,\Gamma_{\!\!c}(\eta)\varphi) \label{esa2.3} \end{equation} for all $\eta\in W^{1,\infty}_c(\Omega)$. This latter estimate can be exploited to deduce the self-adjointness of $H$ by constructing a sequence of $\eta$ which converges to the identity function whilst $\Gamma_{\!\!c}(\eta)$ converges to zero, thereby implying that $\varphi=0$. This argument appears as Theorem~3.1 of \cite{Dav14}, where several earlier references dating back to the 1960s are also given. \begin{remarkn}\label{rsa2.1} There is a certain delicacy in the derivation of the identity (\ref{esa2.1}) since $D(H^*)$ is not generally a subset of $D(h)$. Therefore it is essential that $\eta$ is a differentiable function with compact support. This guarantees that $\eta D(H^*)\subseteq D(h)$ by elliptic regularity. In fact $D(H^*)\subseteq D(h)$ if and only if $H$ is self-adjoint. This is a consequence of a key property of the Friedrichs extension $H_{\!F}$ of $H$. This is the self-adjoint extension of $H$ determined by the Dirichlet form $h$ and it is the only self-adjoint extension with domain contained in $D(h)$ (see \cite{Kat1} Theorem~VI.2.11). Thus if $H$ is self-adjoint then $H^*=H=H_{\!F}$. Consequently $D(H^*)\subseteq D(h)$. Conversely all self-adjoint extensions of $H$ are restrictions of $H^*$ so if $D(H^*)\subseteq D(h)$ then they must all be equal to $H_{\!F}$ by the cited result of Kato. \end{remarkn} It is significant that the inequality (\ref{esa2.3}) does not explicitly depend on the form $h$. Therefore the compactness of the support of $\eta$ is no longer critical. Consequently the inequality can be extended by continuity to a larger class of functions. 
\begin{cor}\label{csa2.1} If $\varphi\in D(H^*)$ with $(\lambda I+H^*)\varphi=0$ for some $\lambda>0$ then \begin{equation} \lambda\, \|\eta\varphi\|_2^2\leq (\varphi,\Gamma_{\!\!c}(\eta)\varphi) \label{esa2.4} \end{equation} for all $\eta\in\bigcap_{s>0} W^{1,\infty}(\Omega_s)$ where $\Omega_s=\{x\in\Omega: d_\Gamma(x)>s\}$. \end{cor} \mbox{\bf Proof} \hspace{5pt}\ Fix $\eta\in W^{1,\infty}(\Omega_s)$. Then let $\rho\in W^{1,\infty}_c({\bf R}^d)$ be a positive function with $\rho=1$ on a ball $B\subset {\bf R}^d$ and zero on the complement of a larger concentric ball. Then define $\rho_n(x)=\rho(x/n)$. Since $W^{1,\infty}_c({\bf R}^d)$ is an algebra of multipliers on $W^{1,\infty}(\Omega)$ it follows that $\rho_n\eta\in W^{1,\infty}_c(\Omega)$. Therefore replacing $\eta$ in (\ref{esa2.2}) by $\rho_n\eta$ and using the Leibniz rule combined with the Cauchy--Schwarz inequality one deduces that for each $\varepsilon>0$ one has \[ \lambda\, \|\rho_n\eta\varphi\|_2^2\leq (\varphi,\Gamma_{\!\!c}(\rho_n\eta)\varphi) \leq (1+\varepsilon)\,(\rho_n\varphi,\Gamma_{\!\!c}(\eta)\rho_n\varphi)+(1+\varepsilon^{-1})\,(\eta\varphi,\Gamma_{\!\!c}(\rho_n)\eta\varphi) \;. \] But $ \|\rho_n\eta\varphi\|_2^2\to \|\eta\varphi\|_2^2$ as $n\to\infty$. Moreover, \[ (\eta\varphi,\Gamma_{\!\!c}(\rho_n)\eta\varphi)\leq \nu\,(\eta\varphi,|\nabla\!\rho_n|^2\eta\varphi)\leq \nu\,\|\eta\varphi\|_2^2\,\|\nabla\!\rho\|_\infty^2/n^2\to0 \] as $n\to\infty$ by the definition of $\rho_n$. The conclusion follows immediately. $\Box$ This argument will be used in a more complicated context in Section~\ref{S4}. Another simple corollary of the proposition which will be applied in the following section is the case $\Omega={\bf R}^d$. \begin{cor}\label{csa2.2} If $\Omega={\bf R}^d$ then $H=-\mathop{\rm div}(C\nabla)$ is self-adjoint. 
\end{cor} \mbox{\bf Proof} \hspace{5pt}\ It follows from (\ref{esa2.3}) that $\lambda \,\|\eta\varphi\|_2^2\leq (\varphi,\Gamma_{\!\!c}(\eta)\varphi)\leq \nu\,(\varphi, |\nabla\!\eta|^2\varphi)$ for all $\eta\in W^{1,\infty}_c({\bf R}^d)$. Now replace $\eta$ by $\rho_n$ as above. Since $\|\rho_n\varphi\|_2^2\to \|\varphi\|_2^2$ and $\|\nabla\!\rho_n\|_\infty^2\to0$ as $n\to\infty$ it follows in the limit that $\varphi=0$ and $H$ is self-adjoint. $\Box$ It is more difficult to utilize this technique if $\Omega$ has a boundary $\Gamma$ but one can modify the argument slightly to conclude that $H$ is self-adjoint if the coefficients of $H$ are sufficiently degenerate at $\Gamma$. \begin{cor}\label{csa2.3} If $\Omega$ is a general domain in ${\bf R}^d$ but there is an $r>0$ and $\delta\geq 2$ such that $C\leq \nu\,d_\Gamma^{\,\delta}I$ on $\Gamma_{\!\!r}$ then $H=-\mathop{\rm div}(C\nabla)$ is self-adjoint. \end{cor} \mbox{\bf Proof} \hspace{5pt}\ Define $\xi_n\in W^{1,\infty}(0,\infty)$ by $\xi_n(t)=0$ if $t<1/n$, $\xi_n(t)=1$ if $t>1$ and $\xi_n(t)=\log(nt)/\log n$ if $1/n\leq t\leq 1$. Then set $\eta_n=\chi\,(\xi_n\circ(r^{-1}d_\Gamma))$ where $\chi\in C_c^\infty({\bf R}^d)$ with $D=(\Omega\cap \mathop{\rm supp}\chi)$ non-empty. Thus the $\eta_n\in W^{1,\infty}_c(\Omega)$ have support in $D\cap (\Omega_{r/n})$. It follows immediately that $\lim_{n\to\infty}\|\eta_n\psi-\chi\psi\|_2=0$ for all $\psi\in L_2(\Omega)$. Moreover, by the Cauchy--Schwarz inequality, \[ (\psi,\Gamma_{\!\!c}(\eta_n)\psi) \leq (1+\varepsilon)\,(\psi,\Gamma_{\!\!c}(\chi)|(\xi_n\circ(r^{-1}d_\Gamma))|^2\psi) +(1+\varepsilon^{-1})\,(\psi,\chi^2\,\Gamma_{\!\!c}(\xi_n\circ(r^{-1}d_\Gamma))\psi) \] for all $\varepsilon>0$. 
But $\mathop{\rm supp}\Gamma_{\!\!c}(\xi_n\circ(r^{-1}d_\Gamma))\subseteq{\overline \Gamma}_{\!\!r}$ and $\Gamma_{\!\!c}(\xi_n\circ(r^{-1}d_\Gamma))\leq \nu \,(r^{-1}d_\Gamma)^{\delta-2}(\log n)^{-2}\leq \nu\,(\log n)^{-2}$ on its support. Therefore \[ \limsup_{n\to\infty}(\psi,\Gamma_{\!\!c}(\eta_n)\psi)\leq (1+\varepsilon)\,(\psi,\Gamma_{\!\!c}(\chi)\psi)\leq (1+\varepsilon)\,\nu\,(\psi,|\nabla\!\chi|^2\psi) \;. \] Now one can replace $\eta$ by $\eta_n$ in (\ref{esa2.3}) and take the limit $n\to\infty$ followed by the limit $\varepsilon\to0$ to conclude that if $\varphi\in D(H^*)$ and $(\lambda I+H^*)\varphi=0$ then \[ \lambda\,\|\chi\varphi\|_2^2\leq \nu\,(\varphi,|\nabla\!\chi|^2\varphi) \] for all $\chi\in C_c^\infty({\bf R}^d)$. This effectively reduces the problem to the ${\bf R}^d$-case covered by the previous corollary. One again deduces that $\varphi=0$ by constructing a sequence of $\chi_n$ such that $\|\chi_n\varphi\|_2^2\to\|\varphi\|_2^2$ and $(\varphi,|\nabla\!\chi_n|^2\varphi)\to0$ as $n\to\infty$. $\Box$ There are two distinct but related problems that occur if one tries to apply the foregoing arguments to less degenerate situations, e.g.\ to operators with $C\sim d_\Gamma^{\,\delta}I$ near the boundary with $\delta<2$. First the criterion $\ker(\lambda I+H^*)=\{0\}$ for self-adjointness is clearly a global property but self-adjointness should be determined by boundary behaviour. Therefore one needs to reformulate the criterion appropriately. Secondly, the inequality (\ref{esa2.3}) is not sufficiently sensitive to the boundary behaviour. This problem arises since we totally discarded the term $h(\eta\varphi)$ in the identity (\ref{esa2.2}). Therefore one needs to exploit more detailed properties of $h$ near the boundary. 
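For completeness we note that the bound on $\Gamma_{\!\!c}(\xi_n\circ(r^{-1}d_\Gamma))$ invoked in the proof of Corollary~\ref{csa2.3} is a direct chain-rule estimate: since $|\xi_n'(t)|=(t\log n)^{-1}$ on the interval $[n^{-1},1]$ and $|\nabla d_\Gamma|\leq1$ almost everywhere,

```latex
% Chain-rule bound on the logarithmic cut-off, valid on the support
% {n^{-1} <= r^{-1} d_\Gamma <= 1} where C <= nu d_\Gamma^delta I.
\[
\Gamma_{\!\!c}\big(\xi_n\circ(r^{-1}d_\Gamma)\big)
\leq \nu\,d_\Gamma^{\,\delta}\,r^{-2}\,\big|\xi_n'(r^{-1}d_\Gamma)\big|^2
=\nu\,d_\Gamma^{\,\delta-2}\,(\log n)^{-2}
=\nu\,r^{\,\delta-2}\,(r^{-1}d_\Gamma)^{\delta-2}\,(\log n)^{-2}
\;,
\]
```

and for $\delta\geq2$ one has $(r^{-1}d_\Gamma)^{\delta-2}\leq1$ on the support, so for fixed $r$ the whole expression is of order $(\log n)^{-2}$.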
If $\delta\geq2$ then the boundary is essentially inaccessible to the related diffusion process and this explains why the conclusion of Corollary~\ref{csa2.3} is independent of the details of the boundary. An alternative expression of this inaccessibility is that $\Omega$ equipped with the Riemannian metric $ds^2=d_\Gamma^{\,-\delta}\,dx^2$ is complete for all $\delta\geq 2$. Thus the corresponding Riemannian distance to the boundary, which is comparable with $\int^r_0 t^{-\delta/2}\,dt$, is infinite. \section{Boundary estimates}\label{S3} In this section we examine two more preparatory topics. First we show that the self-adjointness property $H=H^*$ can be verified in two steps, an interior estimate and a boundary estimate. A similar approach was taken in \cite{Rob15} for the verification of the alternative self-adjointness criterion $H=H_{\!F}$ and the following discussion relies partly on the results in Section~2.1 of the previous paper. Secondly, we discuss extensions of Hardy inequalities on a boundary layer to weak Hardy inequalities on the whole domain. The first step in establishing that $H=H^*$ is to verify the property on the interior sets $\Omega_r=\{x\in\Omega: d_\Gamma(x)>r\}$. \begin{lemma}\label{lsa3.1} Assume $\mathop{\rm supp}\varphi\subseteq \Omega_r$ for some $r>0$. Then $\varphi\in D(H^*)$ if and only if $\varphi\in D(H)$. Moreover, if these conditions are satisfied then $H^*\varphi=H\varphi$. \end{lemma} \mbox{\bf Proof} \hspace{5pt}\ First assume $\varphi\in D(H^*)$ with $\mathop{\rm supp}\varphi\subseteq \Omega_r$. Then fix $s,t>0$ such that $s<t<r$. Secondly, choose a $\xi\in W^{2,\infty}({\bf R}^d)$ with the properties $0\leq\xi\leq1$, $\mathop{\rm supp}\xi\subseteq \Omega_s$ and $\xi=1$ on $\Omega_t$. Then define the (closed) operator $H_\xi=-\mathop{\rm div}(C_\xi\nabla)$ on $L_2({\bf R}^d)$ where the matrix $C_\xi$ is given by $C_\xi=\xi\,C+(1-\xi)\,\mu I$ with $\mu>0$. Since $C>0$ on $\Omega$ and $\mu>0$ it follows that $C_\xi>0$ on ${\bf R}^d$. 
Moreover, the coefficients of $C_\xi$ are locally Lipschitz. Therefore the operator $H_\xi$ on $L_2({\bf R}^d)$ is self-adjoint by Corollary~\ref{csa2.2}. It follows from this construction that $(H^*\varphi, \psi)=(\varphi, H\psi)=(\varphi, H_\xi\psi)$ for all $\psi\in C_c^\infty(\Omega_t)$. But if $\psi\in C_c^\infty(\Omega\backslash \Omega_r)$ the relation follows by locality of $H$ and $H_\xi$ since this implies that all terms are identically zero. Therefore the identity is valid for all $\psi\in C_c^\infty(\Omega)$ by decomposition. Consequently $|(\varphi, H_\xi\psi)|\leq \|H^*\varphi\|_2\,\|\psi\|_2$. Since $H_\xi$ is self-adjoint it follows that $\varphi\in D(H_\xi)$ and $H_\xi\varphi=H^*\varphi$. But it follows from the proof of Proposition~2.2 in \cite{Rob15} that if $\mathop{\rm supp}\varphi\subseteq \Omega_r$ then $\varphi\in D(H_\xi)$ if and only if $\varphi\in D(H)$ and in this case $H_\xi\varphi=H\varphi$. Combining these statements one concludes that $\varphi\in D(H)$ and $H\varphi=H^*\varphi$. Conversely, if $\varphi\in D(H)$ with $\mathop{\rm supp}\varphi\subseteq \Omega_r$ then $\varphi\in D(H_\xi)$ and $H_\xi\varphi=H\varphi$ by Proposition~2.2 in \cite{Rob15}. Then $(\varphi, H\psi)=(\varphi,H_\xi\psi)$ for all $\psi\in C_c^\infty(\Omega)$ and $|(\varphi, H\psi)|\leq \|H_\xi\varphi\|_2\,\|\psi\|_2$. Therefore $\varphi\in D(H^*)$ and $H^*\varphi=H_\xi\varphi=H\varphi$. $\Box$ Lemma~\ref{lsa3.1} now allows one to characterize the self-adjointness of $H$ by its boundary behaviour. This follows from the next proposition since $H$ is self-adjoint if and only if $R(\lambda I+H)=L_2(\Omega)$. \begin{prop}\label{psa3.1} If the range condition $(\varphi, (\lambda I+H)\psi)=0$ for a $\lambda>0$ and all $\psi\in C_c^\infty(\Omega)$ implies that $\varphi=0$ on a boundary layer $\Gamma_{\!\!r}$ then $H$ is self-adjoint. 
\end{prop} \mbox{\bf Proof} \hspace{5pt}\ The range condition implies that $|(\varphi, H\psi)|=\lambda|(\varphi, \psi)|\leq \lambda\,\|\varphi\|_2\,\|\psi\|_2$ for all $\psi\in C_c^\infty(\Omega)$. Therefore $\varphi\in D(H^*)$. But by assumption this condition also implies that $\varphi=0$ on $\Gamma_{\!\!r}$. Therefore $\mathop{\rm supp} \varphi\subseteq \Omega_s$ for some $s\in\langle0,r\rangle$. Then $\varphi\in D(H)$ and $H\varphi=H^*\varphi$ by Lemma~\ref{lsa3.1}. Consequently, $\varphi\in D(h)$ and \[ h(\varphi)=(\varphi, H\varphi)=(H^*\varphi,\varphi)=-\lambda\,\|\varphi\|_2^2 \;. \] Since $h(\varphi)\geq 0$ and $\lambda>0$ it follows immediately that $\|\varphi\|_2=0$ and $\varphi=0$. $\Box$ Proposition~\ref{psa3.1} reduces the proof of self-adjointness of $H$ to the verification that all the elements in the kernel of $\lambda I+H^*$ vanish on some thin boundary layer $\Gamma_{\!\!r}$. This will be achieved with a stronger version of the inequality (\ref{esa2.3}) used in Section~\ref{S2} which follows from the introduction of a Hardy-type lower bound on the form $h$. The general Hardy inequality for second-order operators can be expressed as \[ h(\varphi)\geq \|\chi\varphi\|_2^2 \] for all $\varphi \in D(h)$ with $\chi$ a positive multiplier on $D(h)$. Thus the weighted Hardy inequality~(\ref{esa1.2}) corresponds to the form of the operator $-\sum^d_{k=1}\partial_k\,d_\Gamma^{\,\delta}\,\partial_k$ with $\chi=b_{\delta,r}\,d_\Gamma^{\,\delta/2-1}\!$. Inequalities of this type are known on a wide variety of domains (see \cite{BEL} and the references therein for background) but are also known to fail in quite simple situations. For example, the weighted Hardy inequality~(\ref{esa1.2}) fails if $\Omega$ is the unit ball and $\delta>1$ although it is valid for all $\psi$ with support in each boundary layer $\Gamma_{\!\!r}$ with $r<1$ (see Example~\ref{exsa5.4} below). 
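The one-dimensional model behind these weighted inequalities is elementary and fixes the expected size of the constant: for real-valued $\psi\in C_c^1\langle0,\infty\rangle$ and $\delta\neq1$ an integration by parts gives

```latex
% One-dimensional weighted Hardy inequality with constant (delta-1)/2.
\[
\int^\infty_0 t^{\,\delta-2}\,\psi^2
=(\delta-1)^{-1}\int^\infty_0 \big(t^{\,\delta-1}\big)'\,\psi^2
=-2\,(\delta-1)^{-1}\int^\infty_0 t^{\,\delta-1}\,\psi\,\psi'
\;,
\]
```

and an application of the Cauchy--Schwarz inequality to the right hand side followed by rearrangement yields $\int^\infty_0 t^{\,\delta}\,|\psi'|^2\geq ((\delta-1)/2)^2\int^\infty_0 t^{\,\delta-2}\,\psi^2$, consistent with the value $b_\delta=(\delta-1)/2$ for domains with a $C^2$-boundary.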
The boundary Hardy inequality is also valid on thin boundary layers for domains with a $C^2$-boundary (see \cite{Rob15}, Proposition~2.9). In this situation the Hardy constant is $b_\delta=(\delta-1)/2$. It is, however, of greater interest that boundary estimates of this type are also valid for domains with very rough boundaries, e.g.\ boundaries of a fractal nature \cite{KZ} \cite{Leh3}. We will discuss this in Section~\ref{S6} but for the present these observations motivate the examination of this restricted form of the Hardy inequality. The following lemma establishes that the general Hardy inequality on $\Gamma_{\!\!r}$ extends to a weaker form of the inequality on $\Omega$. This will be of utility in Section~\ref{S4}. \begin{lemma}\label{lsa3.2} Fix $r>0$ and $\mu\geq0$. Then assume there is a positive $\chi\in\bigcap_{s>0}L_\infty(\Omega_s)$ such that \begin{equation} \mu\,\|\psi\|_2^2+h(\psi)\geq \|\chi\psi\|_2^2 \label{esa3.1} \end{equation} for all $\psi\in C_c^1(\Gamma_{\!\!r})$. It follows that for each $\varepsilon>0$ there is a $\lambda_{r,\varepsilon}>0$ such that \begin{equation} \lambda_{r,\varepsilon}\|\psi\|_2^2+h(\psi)\geq (1-\varepsilon)\|\chi\psi\|_2^2 \label{esa3.2} \end{equation} for all $\psi\in D(h)$. \end{lemma} \mbox{\bf Proof} \hspace{5pt}\ The $\mu$ plays no essential role in the proof of the lemma. Its presence only changes the value of the resulting $\lambda_{r,\varepsilon}$. Therefore we assume in the following argument that $\mu=0$ although in Section~\ref{S5} we use the result with $\mu>0$. First fix $\xi\in C^1(\Omega)$ with $0\leq\xi\leq1$, $\xi=1$ on $\Gamma_{\!\!s}$ for some $s\in\langle0,r\rangle$ and $\xi=0$ on $\Omega_r$. Then for each $\psi\in C_c^1(\Omega)$ one has $\xi\psi\in C_c^1(\Gamma_{\!\!r})$ and \[ \Gamma_{\!\!c}(\xi\psi)=\xi^2\,\Gamma_{\!\!c}(\psi)+2\,\xi\psi \,\Gamma_{\!\!c}(\xi,\psi)+\psi^2\,\Gamma_{\!\!c}(\xi) \] by the Leibniz rule. 
Hence for each $\varepsilon>0$ one has \[ \Gamma_{\!\!c}(\xi\psi)\leq (1+\varepsilon)\,\xi^2\,\Gamma_{\!\!c}(\psi)+(1+\varepsilon^{-1})\,\psi^2\,\Gamma_{\!\!c}(\xi) \] by the Cauchy--Schwarz inequality. Therefore by integration and rearrangement \[ h(\psi)\geq \int_\Omega \xi^2\,\Gamma_{\!\!c}(\psi)\geq (1+\varepsilon)^{-1}\,h(\xi\psi)-\varepsilon^{-1}\int_{\Gamma_{\!\!r}\backslash \Gamma_{\!\!s}}\psi^2\, \Gamma_{\!\!c}(\xi) \] since $\mathop{\rm supp}\Gamma_{\!\!c}(\xi)\subseteq \Gamma_{\!\!r}\backslash \Gamma_{\!\!s}$. Hence \[ \varepsilon^{-1}\lambda_{r,s}\,\|\psi\|_2^2+h(\psi)\geq (1+\varepsilon)^{-1}\,h(\xi\psi) \] with $\lambda_{r,s}$ the $L_\infty$-norm of $\Gamma_{\!\!c}(\xi)$. Then it follows from the assumption (\ref{esa3.1}), with $\mu=0$ and $\psi$ replaced by $\xi\psi$, that \[ h(\xi\psi)\geq \|\chi\xi\psi\|_2^2\geq \|\chi_s\psi\|_2^2 \] where $\chi_s$ denotes the restriction of $\chi$ to $\Gamma_{\!\!s}$. But \[ \|\chi_s\psi\|_2^2= \|\chi\psi\|_2^2-\int_{\Gamma_{\!\!s}^c}|\chi|^2|\psi|^2\geq \|\chi\psi\|_2^2-\lambda_s\|\psi\|_2^2 \] where $\lambda_s=\sup\{\chi(x)^2:d_\Gamma(x)>s\}$. Combining these estimates gives \[ (\lambda_s+\varepsilon^{-1}\lambda_{r,s})\,\|\psi\|_2^2+h(\psi)\geq (1+\varepsilon)^{-1}\,\|\chi\psi\|_2^2 \] for all $\varepsilon>0$ and all $\psi\in C_c^1(\Omega)$. Finally with the replacement $\varepsilon\mapsto(1-\varepsilon)^{-1}-1$ one obtains (\ref{esa3.2}) for all $\psi\in C_c^1(\Omega)$ with $\lambda_{r,\varepsilon}=\inf_{s\in\langle0,r\rangle}(\lambda_s+\varepsilon^{-1}\lambda_{r,s})$. Then (\ref{esa3.2}) follows for all $\psi\in D(h)$ by closure. If $\mu>0$ then $\lambda_{r,\varepsilon}$ is replaced by $\mu+ \lambda_{r,\varepsilon}$. $\Box$ Note that the strong Hardy inequality (\ref{esa3.1}) only involves the restriction of $\chi$ to $\Gamma_{\!\!r}$; the value of $\chi$ on the interior sets $\Omega_s$ with $s>r$ is arbitrary up to the boundedness hypothesis. 
This freedom pertains in the setting of the next proposition which establishes a generalization of the basic inequality (\ref{esa2.3}) used to discuss the self-adjointness of operators with a degeneracy of order $\delta\geq 2$ at the boundary in Corollary~\ref{csa2.3}. \begin{prop}\label{psa3.2} Assume that $\chi$ satisfies the Hardy inequality $(\ref{esa3.1})$ on $\Gamma_{\!\!r}$ and fix $\lambda_{r,\varepsilon}\geq 0$, with $\varepsilon>0$, such that \begin{equation} \lambda_{r,\varepsilon}\|\psi\|_2^2+h(\psi)\geq (1-\varepsilon)\,\|\chi\psi\|_2^2 \label{esa3.3} \end{equation} for all $\psi\in D(h)$. Further fix $\lambda>\lambda_{r,\varepsilon}$ and $\varphi\in D(H^*)$ such that $(\lambda I+H^*)\varphi=0$. It follows that \begin{equation} (\lambda-\lambda_{r,\varepsilon})\,\|\eta\varphi\|_2^2+(1-\varepsilon)\,\|\eta\chi\varphi\|_2^2\leq (\varphi, \Gamma_{\!\!c}(\eta)\varphi) \label{esa3.4} \end{equation} for all $\eta\in\bigcap_{s>0} W^{1,\infty}(\Omega_s)$. \end{prop} \mbox{\bf Proof} \hspace{5pt}\ First it follows from Lemma~\ref{lsa3.2} that for each $\varepsilon>0$ one may indeed choose $\lambda_{r,\varepsilon}$ such that (\ref{esa3.3}) is satisfied. Then the inequality (\ref{esa3.4}) follows for all $\eta\in W^{1,\infty}_c(\Omega)$ by combination of the basic identity (\ref{esa2.2}) and the weak Hardy inequality (\ref{esa3.3}). It then extends to the larger set of $\eta$ by repetition of the argument used to establish Corollary~\ref{csa2.1}. $\Box$ In the sequel inequality (\ref{esa3.4}) plays a similar role in the discussion of self-adjointness for operators with a degeneracy $\delta<2$ to that played by (\ref{esa2.3}) in the proof of self-adjointness of operators with $\delta\geq 2$ in Section~\ref{S2}. 
\section{A prototypical theorem}\label{S4} In this section we develop the ideas of Agmon \cite{Agm1} and Nenciu and Nenciu \cite{NeN} to derive a general self-adjointness theorem which serves as a prototype for a more specific result in the following section. The discussion in Section~\ref{S2} of self-adjointness of operators with coefficients $C$ satisfying the degeneracy condition $C\leq \nu\,d_\Gamma^{\,\delta} I$, with $\delta\geq2$, on a boundary layer $\Gamma_{\!\!r}$ was based on the inequality (\ref{esa2.3}). The proof proceeded by choosing a sequence $\eta\in W_c^{1,\infty}(\Omega)$ such that $\eta\to1\hspace{-4.5pt}1_\Omega$ pointwise and $\Gamma_{\!\!c}(\eta)\to0$. But the first condition indicates that $|\nabla\eta|$ could increase as $d_\Gamma^{\,-1}$ near the boundary. In this case it is inevitable that $\Gamma_{\!\!c}(\eta)\sim d_\Gamma^{\,\delta-2}$ for small $d_\Gamma$. Hence if $\delta\geq2$ then $\Gamma_{\!\!c}(\eta)$ is bounded. This was the key feature of the arguments in Section~\ref{S2}. If, however, $\delta<2$ then $\Gamma_{\!\!c}(\eta)$ is unbounded and the reasoning of Section~\ref{S2} is totally inadequate. But the improved inequality (\ref{esa3.4}) gives a potential path to circumvent this difficulty. The factor $\Gamma_{\!\!c}(\eta)$ on the right hand side might well diverge at the boundary but the term on the left with the factor $\eta\chi$ could also diverge. Therefore the idea is to choose a sequence of $\eta$ such that the two divergences cancel and do not interfere with the estimation argument. This is the strategy of Nenciu and Nenciu \cite{NeN} in the case of bounded $\Omega$ with some basic smoothness of the boundary. But the method also extends to the more general case of unbounded domains with rough boundaries if one has sufficient information on the Hardy inequality near the boundary. This is illustrated by the following prototypical result. 
\begin{thm}\label{tsa4.1} Assume there is an $r>0$ such that $C\leq \nu\, d_\Gamma^{\,\delta}I$ on $\Gamma_{\!\!r}$ with $\nu>0$ and $\delta\in[0,2\rangle$. Moreover, assume the Hardy inequality \begin{equation} h(\psi)\geq \nu\,b_{\delta,r}^{\,2}\, \|d_\Gamma^{\,\delta/2-1}\psi\|_2^2 \label{esa4.00} \end{equation} is satisfied with $b_{\delta,r}>0$ for all $\psi\in D(h)$ with $\mathop{\rm supp}\psi\subseteq \Gamma_{\!\!r}$. Let $b_\delta$ denote the supremum over small $r$ of the possible $b_{\delta,r}$. If $(2-\delta)/2<b_\delta$ then $H$ is self-adjoint. \end{thm} \mbox{\bf Proof} \hspace{5pt}\ The Hardy inequality (\ref{esa4.00}) is analogous to the weighted inequality (\ref{esa1.2}) and the Hardy constant $b_\delta$ is defined similarly. The inequality corresponds to (\ref{esa3.1}) with $\mu=0$ and $\chi= (\nu^{1/2}\,b_{\delta,r})\,d_\Gamma^{\,\delta/2-1}$. Thus it follows from Lemma~\ref{lsa3.2} and Proposition~\ref{psa3.2} that (\ref{esa3.4}) is satisfied with this choice of $\chi$ on $\Omega$. Note that Lemma~\ref{lsa3.2} is applicable since $\chi$ is positive and bounded on the interior set $\Omega_r$ for $\delta\leq 2$. Now the principal idea is to utilize (\ref{esa3.4}) with $\varphi\in \ker(\lambda I+H^*)$ to deduce that $\varphi=0$ on $\Gamma_{\!\!r}$. This suffices for self-adjointness by Proposition~\ref{psa3.1}. The proof uses a method introduced by Agmon with a different aim in mind (see \cite{Agm1} Theorem~1.5). First one expresses $\eta$ in the form $\eta=e^\xi\,\zeta$, with suitable support restrictions on $\xi$ and $\zeta$. Then (\ref{esa3.4}) gives \[ (\lambda-\lambda_{r,\varepsilon})\,\|e^\xi\zeta\varphi\|_2^2 +(1-\varepsilon)\,\|\chi e^\xi\zeta\varphi\|_2^2\leq (\varphi, \Gamma_{\!\!c}(e^\xi\zeta)\varphi) \;. 
\] Secondly, one chooses $\xi$ such that $\Gamma_{\!\!c}(\xi)\leq (1-\varepsilon)\,\chi^2$ on $\Omega$ to obtain \begin{equation} (\lambda-\lambda_{r,\varepsilon})\,\|e^\xi\zeta\varphi\|_2^2+(e^\xi\zeta\varphi, \Gamma_{\!\!c}(\xi) e^\xi\zeta\varphi)\leq (\varphi, \Gamma_{\!\!c}(e^\xi\zeta)\varphi) \;. \label{esa4.0} \end{equation} This can be reformulated as \begin{equation} (\lambda-\lambda_{r,\varepsilon})\,\|e^\xi\zeta\varphi\|_2^2\leq (e^\xi\varphi, \Gamma_{\!\!c}(\zeta)e^\xi\varphi)+2\,(e^\xi\varphi, \Gamma_{\!\!c}(\xi,\zeta)\zeta e^{\xi}\varphi) \label{esa4.1} \end{equation} by evaluating the right hand side of (\ref{esa4.0}) with the Leibniz rule. This corresponds closely to Lemma~3.4 of \cite{NeN} although the latter lemma is expressed in a different manner. The key point is that the second term on the left hand side of (\ref{esa4.0}) is cancelled by the leading term in the Leibniz expansion of the right hand side. Finally one replaces $\zeta$ by a sequence $\zeta_n$ where $\zeta_n\to1$ in such a way that one can control the growth of the right hand side and subsequently deduce that $\varphi=0$ on $\Gamma_{\!\!r}$. First, define $\hat\xi$ on $\langle0,\infty\rangle$ by \[ {\hat\xi}(t)=\log(t/(1+t))^{(2-\delta)/2} +2^{-1}\log\log((1+t)/t) \;. \] (This corresponds to the function $G$ used by Nenciu and Nenciu \cite{NeN} in the proof of their Theorem~5.3 but with $t$ replaced by $t/(1+t)$.) It follows immediately that one has \[ e^{2{\hat\xi}(t)}=(t/(1+t))^{(2-\delta)}\log((1+t)/t)=-(2-\delta)^{-1}(t/(1+t))^{(2-\delta)}\log(t/(1+t))^{(2-\delta)} \;. \] But $t/(1+t)\in \langle0,1\rangle$ for $t\in\langle0,\infty\rangle$ and $s\in\langle0,1\rangle\mapsto -s\log s$ is both positive and bounded. Therefore $e^{2\hat \xi}$ is uniformly bounded on $\langle0,\infty\rangle$ for $\delta\in[0,2\rangle$. Moreover, \[ {\hat\xi}^{\,\prime}(t)=((2-\delta)/2t)(1/(1+t))(1-((2-\delta)\log((1+t)/t))^{-1})\leq (2-\delta)/2t \;. 
\] Now set $\xi={\hat\xi}\circ d_\Gamma$. It follows that \begin{equation} e^{2\xi}=-(2-\delta)^{-1}(d_\Gamma/(1+d_\Gamma))^{(2-\delta)}\log(d_\Gamma/(1+d_\Gamma))^{(2-\delta)} \label{esa4.2} \end{equation} and \begin{equation} \Gamma_{\!\!c}(\xi)\leq \nu\,((2-\delta)/2)^2\,d_\Gamma^{\,\delta-2} \;. \label{esa4.3} \end{equation} Thus if $((2-\delta)/2)^2\leq (1-\varepsilon)\,b_{\delta,r}^{\,2}$ then the condition $\Gamma_{\!\!c}(\xi)\leq (1-\varepsilon)\,\chi^2$ used for the cancellation in passing from (\ref{esa4.0}) to (\ref{esa4.1}) is satisfied. Note that $s\in\langle0,e^{-1}\rangle\mapsto -s\log s$ is increasing. Therefore \begin{equation} e^{2\xi}\leq -d_\Gamma^{\,2-\delta}\log d_\Gamma\;\;\;\;\;\mbox{and}\;\;\;\;\;e^{2\xi}\,\Gamma_{\!\!c}(\xi)\leq -\nu\,((2-\delta)/2)^2\,\log d_\Gamma \label{esa4.4} \end{equation} on $\Gamma_{\!\!r}$ for all small $r$. In particular the term which cancelled in the passage from (\ref{esa4.0}) to (\ref{esa4.1}) diverges logarithmically as $d_\Gamma\to0$. Secondly, let $\hat \zeta\in W^{1,\infty}(0,\infty)$ be an increasing function with $0\leq \hat\zeta\leq1$, $\hat\zeta=0$ if $t<r/2$, $\hat \zeta=1$ if $t\geq r$ and $|{\hat\zeta}^{\,\prime}|\leq 2/r$. Then set $\zeta=\hat \zeta\circ d_\Gamma$ and $\zeta_n={\hat\zeta}\circ(2^nd_\Gamma)$. Thus $\zeta=\zeta_0$ and $\zeta_n=0$ if $d_\Gamma<2^{-(n+1)}r$, $\zeta_n=1$ if $d_\Gamma\geq 2^{-n}r$ and $|\nabla \zeta_n|\leq 2^{n+1}/r$. Thirdly, with these choices we examine the bound (\ref{esa4.1}). It follows by definition that $\zeta_n=1$ on $B_m=\{x\in\Omega:2^{-m+1}r\leq d_\Gamma\leq r\}\subset \Gamma_{\!\!r}$ if $n>m$. But $e^{\,\xi}$ is bounded away from zero on $B_m$ by (\ref{esa4.2}). Therefore there is a $b_m>0$ such that the norm on the left hand side satisfies \begin{equation} \|e^{\,\xi}\zeta_n\varphi\|_2^2\geq b_m\|1\hspace{-4.5pt}1_{B_m}\varphi\|_2^2 \label{esa4.41} \end{equation} for all $n\geq m$. 
Next consider the factor $e^{2\xi}\,\Gamma_{\!\!c}(\zeta_n)$ on the right hand side of (\ref{esa4.1}). It clearly has support in the set $A_n=\{x\in\Omega: 2^{-(n+1)}r\leq d_\Gamma\leq 2^{-n}r\}$ because the function $\zeta_n$ is equal to $0$ if $d_\Gamma<2^{-(n+1)}r$ and to $1$ if $d_\Gamma\geq 2^{-n}r$. But on $A_n$ one has \[ \Gamma_{\!\!c}(\zeta_n)\leq 4\,\nu\,(2^{-n}r)^{(\delta-2)} \] by the assumed bounds on $C$ and the definition of $\zeta_n$. Since $e^{2\xi}\leq -(2^{-n}r)^{(2-\delta)}\log(2^{-n}r)$ on $A_n$ by (\ref{esa4.4}) it follows that \begin{equation} e^{2\xi}\,\Gamma_{\!\!c}(\zeta_n)\leq -4\,\nu\,\log(2^{-n}r)\leq 8\,n\,\nu \label{esa4.5} \end{equation} on $A_n$ for all large $n$. The second factor $e^{2\xi}\,\Gamma_{\!\!c}(\xi,\zeta_n)\,\zeta_n$ on the right hand side of (\ref{esa4.1}) also has support in $A_n$ and \[ |e^{2\xi}\,\Gamma_{\!\!c}(\xi,\zeta_n)\,\zeta_n|\leq (e^{2\xi}\,\Gamma_{\!\!c}(\xi))^{1/2}\,(e^{2\xi}\,\Gamma_{\!\!c}(\zeta_n))^{1/2} \] by the Cauchy--Schwarz inequality. Therefore it follows from (\ref{esa4.4}) and (\ref{esa4.5}) that \begin{equation} |e^{2\xi}\,\Gamma_{\!\!c}(\xi,\zeta_n)\zeta_n|\leq 8\,(2-\delta)\,n\,\nu \label{esa4.6} \end{equation} on $A_n$ for all large $n$. Finally substituting the estimates (\ref{esa4.41}), (\ref{esa4.5}) and (\ref{esa4.6}) into (\ref{esa4.1}) one concludes that there is an $a>0$, independent of $n$, such that \[ (\lambda-\lambda_{r,\varepsilon})\,b_m\,\|1\hspace{-4.5pt}1_{B_m}\varphi\|_2^2\leq a\,n\,\|1\hspace{-4.5pt}1_{A_n}\varphi\|_2^2 \] for all large $n$, and in particular $n\geq m$. But then for all large $N_1, N_2$ one must have \[ (\lambda-\lambda_{r,\varepsilon})\,b_m\,\Big(\sum\nolimits^{N_2}_{n=N_1}n^{-1}\Big)\,\|1\hspace{-4.5pt}1_{B_m}\varphi\|_2^2\leq a\,\Big(\sum\nolimits^{N_2}_{n=N_1}\|1\hspace{-4.5pt}1_{A_n}\varphi\|_2^2\Big)\leq a\,\|\varphi\|_2^2 \;. \] Since the sum on the left diverges as $N_2\to\infty$ it follows that $1\hspace{-4.5pt}1_{B_m}\varphi=0$. 
As this is valid for all $m$ it follows that $\varphi=0$ on $\Gamma_{\!\!r}$. Therefore we have now deduced that if $((2-\delta)/2)^2\leq (1-\varepsilon)\,b_{\delta,r}^{\,2}$, which ensures that $\Gamma_{\!\!c}(\xi)\leq (1-\varepsilon)\,\chi^2$, then $H$ is self-adjoint by Proposition~\ref{psa3.1}. But this conclusion is valid for all small $\varepsilon$ and $r$. Thus $H$ is self-adjoint whenever $(2-\delta)/2<b_\delta$. $\Box$ As pointed out in the introduction the statement of Theorem~\ref{tsa4.1} can be strengthened if the boundary $\Gamma$ of $\Omega$ separates into a countable union of positively separated components $\Gamma^{(j)}$. Then the degeneracy can vary from component to component. \begin{cor}\label{csa4.1} Assume $\Gamma=\bigcup_{j\geq 1}\Gamma^{(j)}$ with $d(\Gamma^{(j)},\Gamma^{(k)})\geq r_0>0$ for all $j\neq k$. If, for each $j$, one has $C\leq \nu\,d_{\Gamma^{(j)}}^{\;\delta_j}I$ on $\Gamma^{(j)}_{\!\!r}$ with $r<r_0/2$ and $\delta_j\in[0,2\rangle$ and if the Hardy inequality $(\ref{esa3.1})$ is satisfied with $\chi_j= \nu^{1/2}\,b_{\delta_j}\,d_{\Gamma^{(j)}}^{\;\delta_j/2-1}$ on $\Gamma^{(j)}_{\!\!r}$ where $b_{\delta_j}>(2-\delta_j)/2$ then $H$ is self-adjoint. \end{cor} \mbox{\bf Proof} \hspace{5pt}\ The proof is essentially unchanged. First one can prove that if $\varphi\in D(H^*)$ and $(\lambda I+H^*)\varphi=0$ then $\varphi=0$ on each $\Gamma^{(j)}_{\!\!r}$ by repetition of the above argument component by component, with $\zeta$ successively replaced by $\zeta^{(j)}=\zeta\circ d_{\Gamma^{(j)}}$. One then establishes that $\varphi=0$ on $\Gamma_{\!\!r}=\bigcup_{j\geq1}\Gamma^{(j)}_{\!\!r}$ and the proof follows as before. $\Box$ \section{A direct theorem}\label{S5} Theorem~\ref{tsa4.1} established a self-adjointness criterion from a weak but explicit upper bound on the coefficients $C$ and an implicit lower bound, a Hardy inequality on a boundary layer. 
In this section we show that self-adjointness also follows from the asymptotic degeneracy condition on the coefficients $C$ at the boundary used in the earlier paper \cite{Rob15} together with the weighted Hardy inequality (\ref{esa1.2}). This form of the result is convenient in verifying self-adjointness for particular types of domain. Throughout the sequel the coefficients are assumed to satisfy the boundary condition \begin{equation} \textstyle{\inf_{r\in\langle0,r_0]}}\;\textstyle{\sup_{x\in\Gamma_{\!\!r}}}\|(C\,d_\Gamma^{\,-\delta})(x)-c(x)I\|=0 \label{esa5.1} \end{equation} for some $r_0>0$ where $c$ is a bounded Lipschitz function satisfying $\inf_{x\in\Gamma_{\!\!r}}c(x)\geq \mu>0$ and $\delta\geq0$. Condition~(\ref{esa5.1}) can be interpreted in an obvious way as \[ \textstyle{\limsup_{d_\Gamma\to0}} \;C(c\,d_\Gamma^{\,\delta}I)^{-1}=I \;. \] Thus in the language of asymptotic analysis $C$ converges to $c\,d_\Gamma^{\,\delta} I$ as $d_\Gamma\to0$ (see \cite{DeB}). The parameter $\delta$ determines the order of degeneracy at the boundary and $c$ describes the boundary profile of $C$. The comparability of $C$ and $c\,d_\Gamma^{\,\delta}I$ can be made more precise by noting that for each $r\in\langle0,r_0]$ there are $\sigma_{\!r}, \tau_{\!r}>0$ such that \begin{equation} \sigma_{\!r}(c \,d_\Gamma^{\,\delta})(x)I\leq C(x)\leq \tau_{\!r} (c \,d_\Gamma^{\,\delta})(x)I \label{esa5.2} \end{equation} for all $x\in \Gamma_{\!\!r}$. The earlier discussions of Markov uniqueness in \cite{RSi4} and \cite{LR} were based on these latter conditions. They play a key role in the following together with the observation that the limit condition (\ref{esa5.1}) implies that $\sigma_{\!r},\tau_{\!r}\to1$ as $r\to0$. In fact one may assume that $\sigma_{\!r}$ converges monotonically upward and $\tau_{\!r}$ converges monotonically downward. Secondly, we assume that the weighted Hardy inequality (\ref{esa1.2}) is valid on $\Gamma_{\!\!r}$. 
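As an aside, the interplay between (\ref{esa5.1}) and (\ref{esa5.2}) is easily visualized numerically. The following sketch uses an illustrative one-dimensional model which is not taken from the paper: a boundary point at the origin with $d_\Gamma(x)=x$ on $\langle0,1]$, a hypothetical coefficient $C(x)=c(x)\,x^{\delta}(1+x)$ and profile $c(x)=1+x/2$. The sampled deviation in (\ref{esa5.1}) tends to zero and the constants $\sigma_{\!r},\tau_{\!r}$ of (\ref{esa5.2}) tend to one as $r\to0$.

```python
import numpy as np

# Hypothetical 1D model (illustration only): d_Gamma(x) = x near the boundary,
# coefficient C(x) = c(x) * x**delta * (1 + x) with profile c(x) = 1 + x/2.
delta = 1.5

def c(x):
    return 1.0 + x / 2.0

def C(x):
    return c(x) * x**delta * (1.0 + x)

def sup_deviation(r, n=10000):
    """Sampled sup over the layer (0, r] of |C(x) x^{-delta} - c(x)|, cf. (esa5.1)."""
    x = np.linspace(r / n, r, n)
    return float(np.max(np.abs(C(x) * x ** (-delta) - c(x))))

def sigma_tau(r, n=10000):
    """Sampled best constants with sigma_r c d^delta <= C <= tau_r c d^delta, cf. (esa5.2)."""
    x = np.linspace(r / n, r, n)
    ratio = C(x) / (c(x) * x**delta)
    return float(ratio.min()), float(ratio.max())

for r in (0.5, 0.1, 0.01):
    print(r, sup_deviation(r), sigma_tau(r))
```

Here the deviation equals $\sup_{0<x\leq r}c(x)\,x\leq(1+r/2)r$, so it vanishes as $r\to0$, while $\sigma_{\!r}\approx1$ and $\tau_{\!r}=1+r$ converge monotonically to one, as asserted after (\ref{esa5.2}).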
Thus for each $r\in\langle0,r_0]$ and $\delta\geq 0$ there is a $b_{\delta,r}>0$ such that \begin{equation} \int_{\Gamma_{\!\!r}}d_\Gamma^{\,\delta}\,|\nabla\psi|^2\geq b_{\delta, r}^{\,2}\int_{\Gamma_{\!\!r}}d_\Gamma^{\,\delta-2}\,|\psi|^2 \label{esa5.3} \end{equation} for all $\psi\in C_c^1(\Gamma_{\!\!r})$. Then the Hardy constant $b_\delta=\sup_{r>0} b_{\delta,r}$ where again the supremum is over all possible $b_{\delta,r}$ for which (\ref{esa5.3}) holds. Although the weighted Hardy inequality (\ref{esa5.3}) near the boundary is independent of $c$ it does lead to a weak form of the Hardy inequality for the operator $H$ or, more precisely, for the form $h$, on the whole domain $\Omega$. \begin{lemma}\label{lsa5.1} Assume the boundary condition $(\ref{esa5.1})$ and the weighted Hardy inequality $(\ref{esa5.3})$ are valid on the boundary layer $\Gamma_{\!\!r}$. Then for each $\varepsilon>0$ there is a $\lambda_{r,\varepsilon}>0$ such that \[ \lambda_{r,\varepsilon}\,\|\psi\|_2^2+h(\psi)\geq \sigma_{\!r}(1-\varepsilon)^2\,b_{\delta, r}^{\,2}\,\|c^{1/2}d_\Gamma^{\,\delta/2-1}\psi\|_2^2 \] for all $\psi\in C_c^1(\Omega)$. \end{lemma} \mbox{\bf Proof} \hspace{5pt}\ It follows from (\ref{esa5.2}) that \begin{eqnarray*} h(\psi)\geq \sigma_{\!r}\int_{ \Gamma_{\!\!r}}c\,d_\Gamma^{\,\delta}\,|\nabla\psi|^2 =\sigma_{\!r}\int_{ \Gamma_{\!\!r}}\,d_\Gamma^{\,\delta}\,|\nabla( c^{1/2}\psi)- (\nabla c^{1/2})\psi|^2 \end{eqnarray*} for all $\psi\in C_c^\infty(\Gamma_{\!\!r})$. Then by the Cauchy--Schwarz inequality one has for each $\varepsilon>0$ \[ h(\psi)\geq \sigma_{\!r}(1-\varepsilon)\int_{ \Gamma_{\!\!r}}\,d_\Gamma^{\,\delta}\,|\nabla( c^{1/2}\psi)|^2 - \sigma_{\!r}(\varepsilon^{-1}-1)\int_{ \Gamma_{\!\!r}}\,d_\Gamma^{\,\delta}\,|(\nabla c^{1/2})\psi|^2 \] for all $\psi\in C_c^\infty(\Gamma_{\!\!r})$. 
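The Cauchy--Schwarz step above rests on the elementary pointwise inequality $|a-b|^2\geq(1-\varepsilon)\,a^2-(\varepsilon^{-1}-1)\,b^2$, which follows from $2ab\leq\varepsilon a^2+\varepsilon^{-1}b^2$, i.e.\ from $(\varepsilon^{1/2}a-\varepsilon^{-1/2}b)^2\geq0$. A quick numerical sanity check of this inequality on random samples (illustration only, not part of the proof):

```python
import numpy as np

def epsilon_split_holds(a, b, eps, tol=1e-12):
    """Check |a - b|^2 >= (1 - eps) a^2 - (1/eps - 1) b^2 pointwise.

    The difference equals (sqrt(eps) a - b/sqrt(eps))^2 >= 0, so the check
    should always pass; tol absorbs floating-point rounding.
    """
    lhs = (a - b) ** 2
    rhs = (1.0 - eps) * a**2 - (1.0 / eps - 1.0) * b**2
    return bool(np.all(lhs >= rhs - tol))

rng = np.random.default_rng(0)
a = rng.normal(size=100000)
b = rng.normal(size=100000)
for eps in (0.1, 0.5, 0.9):
    print(eps, epsilon_split_holds(a, b, eps))
```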
Therefore, by the weighted Hardy inequality (\ref{esa5.3}), one deduces that \[ \mu_{r,\varepsilon}\|\psi\|_2^2+h(\psi)\geq \sigma_{\!r}(1-\varepsilon)\,b_{\delta, r}^{\,2}\,\int_{ \Gamma_{\!\!r}}\,c\,d_\Gamma^{\,\delta-2}\,|\psi|^2 \] for all $\psi\in C_c^1(\Gamma_{\!\!r})$ with $\mu_{r,\varepsilon}=\sigma_{\!r}(\varepsilon^{-1}-1)r^\delta(\|\nabla c\|_\infty^2/\lambda)$. Finally it follows from Lemma~\ref{lsa3.2} with $\mu=\mu_{r,\varepsilon}$ and $\chi^2= \sigma_{\!r}(1-\varepsilon)\,b_{\delta, r}^{\,2}\,c\,d_\Gamma^{\,\delta-2}$ that there is a $\lambda_{r,\varepsilon}>0$ such that \[ \lambda_{r,\varepsilon}\,\|\psi\|_2^2+h(\psi)\geq \sigma_{\!r}(1-\varepsilon)^2\,b_{\delta, r}^{\,2}\,\|c^{1/2}d_\Gamma^{\,\delta/2-1}\psi\|_2^2 \] for all $\psi\in C_c^1(\Omega)$. $\Box$ Now one has the direct version of Theorem~\ref{tsa4.1}. \begin{thm}\label{tsa5.1} Assume the coefficient matrix $C$ satisfies the boundary condition $(\ref{esa5.1})$ and the weighted boundary Hardy inequality $(\ref{esa5.3})$ is valid with $\delta\in[0,2\rangle$. If $(2-\delta)/2<b_\delta$ then $H$ is self-adjoint. \end{thm} \mbox{\bf Proof} \hspace{5pt}\ The proof is very similar to the proof of Theorem~\ref{tsa4.1} but with some small changes. First the upper bound on the coefficient matrix $C$ is now replaced by the upper bound in (\ref{esa5.2}). Secondly, one observes that since the weighted Hardy inequality (\ref{esa5.3}) is valid the weak Hardy inequality of Lemma~\ref{lsa5.1} is also valid. But then one obtains an inequality identical in form to (\ref{esa3.4}) with a slightly different choice of $\chi$. Nevertheless the choices of $\eta$, $\xi$ and $\zeta$ are as before. Now, however, to verify the cancellation condition $\Gamma_{\!\!c}(\xi)\leq(1-\varepsilon)\,\chi^2$ we have to take into account the modified bound on $C$ and the different choice of $\chi$. 
First, since the bound $C\leq \nu\,d_\Gamma^{\,\delta }I$ used previously to estimate $\Gamma_{\!\!c}(\xi)$ is now replaced by the right-hand bound $C\leq \tau_{\!r} \,c \,d_\Gamma^{\,\delta} I$ of (\ref{esa5.2}) one effectively replaces $\nu$ by $ \tau_{\!r} \,c$ in the upper bound on $\Gamma_{\!\!c}(\xi)$. Explicitly, the earlier bound $\Gamma_{\!\!c}(\xi)\leq \nu\,d_\Gamma^{\,\delta}\,|\nabla\xi|^2$ is replaced by $\Gamma_{\!\!c}(\xi)\leq \tau_{\!r}\,(c\,d_\Gamma^{\,\delta})\,|\nabla\xi|^2$ with a similar change in the bound on $\Gamma_{\!\!c}(\zeta)$. Secondly, the earlier argument used the Hardy inequality $h(\psi)\geq \|\chi\psi\|_2^2$ for $\psi\in C_c^\infty(\Gamma_{\!\!r})$ with the identification $\chi^2=\nu\,b_{\delta,r}^{\,2}\,d_\Gamma^{\,\delta-2}$. This then led to the weak Hardy inequality \[ \lambda_{r,\varepsilon}\|\psi\|_2^2+h(\psi)\geq (1-\varepsilon)\,\|\chi\psi\|_2^2 \] for all $\psi\in D(h)$ by Lemma~\ref{lsa3.2}. Now, however, the weighted Hardy inequality (\ref{esa5.3}) gives the analogous weak Hardy inequality of Lemma~\ref{lsa5.1} but with $\chi^2= \sigma_{\!r}(1-\varepsilon)\,b_{\delta, r}^{\,2}\,(c\,d_\Gamma^{\,\delta-2})$. Therefore $\nu\,b_{\delta,r}^{\,2}\,d_\Gamma^{\,\delta-2}$ is replaced by $\sigma_{\!r}\,(1-\varepsilon)\, b_{\delta,r}^{\,2}\,(c\,d_\Gamma^{\,\delta-2})$ in the identification of $\chi^2$. Thirdly, after these replacements the condition $\Gamma_{\!\!c}(\xi)\leq(1-\varepsilon)\,\chi^2$ which previously translated to $\nu\,((2-\delta)/2)^2\,d_\Gamma^{\,\delta-2}\leq(1-\varepsilon)\, \nu\,b_{\delta,r}^{\,2}\,d_\Gamma^{\,\delta-2}$ is replaced by the similar inequality $ \tau_{\!r} \, ((2-\delta)/2)^2\,(c\,d_\Gamma^{\,\delta-2})\leq (1-\varepsilon)^2\,\sigma_{\!r}\, b_{\delta,r}^{\,2}\,(c\,d_\Gamma^{\,\delta-2})$. 
Thus after cancellation of the strictly positive function $c\,d_\Gamma^{\,\delta-2}$ one obtains the condition $((2-\delta)/2)^2\leq (1-\varepsilon)^2\,(\sigma_{\!r}/\tau_{\!r})\, b_{\delta,r}^{\,2}$ for each $r\in \langle0,r_0\rangle$ and $\varepsilon\in\langle0,1\rangle$. Fourthly, all the arguments of the previous proof carry through with these modifications. Therefore one concludes that for fixed $r$ the condition $((2-\delta)/2)^2\leq (1-\varepsilon)^2\,(\sigma_{\!r}/\tau_{\!r})\, b_{\delta,r}^{\,2}$ suffices for self-adjointness of $H$. Finally one may take the supremum of the right hand side of this condition over $r$ followed by the limit $\varepsilon\to0$ to deduce that $((2-\delta)/2)^2< b_{\delta}^{\,2}$ is sufficient for self-adjointness. $\Box$ Theorem~\ref{tsa5.1} reduces the proof of self-adjointness to verification of the boundary Hardy inequality (\ref{esa5.3}) and calculation of the corresponding boundary Hardy constant $b_\delta$. Both these problems have been resolved with additional smoothness or convexity assumptions for the boundary. A general result of this nature is the following. \begin{lemma} \label{lxsa5.1} Assume there are $\beta, \gamma>0$ such that \begin{equation} |d_\Gamma(\nabla^2d_\Gamma)-(\beta-1)|\leq \gamma\,d_\Gamma \label{exsa5.1} \end{equation} in the weak sense on a boundary layer $\Gamma_{\!\!r}$. Then if $\delta+\beta-2>\gamma r$ the weighted Hardy inequality $(\ref{esa5.3})$ is valid on $\Gamma_{\!\!r}$ with $b_{\delta,r}=(\delta+\beta-2-\gamma r)/2$. In particular $b_\delta=(\delta+\beta-2)/2$ under the restriction $\delta>2-\beta$. \end{lemma} \mbox{\bf Proof} \hspace{5pt}\ The proof is based on a standard argument which will be applied to other settings below. It was used in the proof of Proposition~2.9 in \cite{Rob15}. Set $\chi=d_\Gamma^{\,\delta-1}(\nabla d_\Gamma)$. 
Then \[ \mathop{\rm div} \chi=(\delta-1+d_\Gamma (\nabla^2 d_\Gamma))\,d_\Gamma^{\,\delta-2}\geq (\delta+\beta-2-\gamma\,r)\,d_\Gamma^{\,\delta-2} \] on $\Gamma_{\!\!r}$ where we have used (\ref{exsa5.1}). Therefore if $\delta+\beta-2-\gamma\,r>0$ one has \begin{eqnarray*} (\delta+\beta-2-\gamma\,r)\int_{\Gamma_{\!\!r}}d_\Gamma^{\,\delta-2}\psi^2&\leq &\int_{\Gamma_{\!\!r}}(\mathop{\rm div}\chi)\psi^2\\[5pt] &\leq&2\int_{\Gamma_{\!\!r}}|\chi\cdot\nabla\psi|\,|\psi| \leq 2\Big(\int_{\Gamma_{\!\!r}}d_\Gamma^{\,\delta-2}\psi^2\Big)^{1/2}\,\Big(\int_{\Gamma_{\!\!r}}d_\Gamma^{\,\delta}(\nabla\psi)^2\Big)^{1/2} \end{eqnarray*} for all $\psi\in C_c^\infty(\Gamma_{\!\!r})$. Then the Hardy inequality follows by squaring the last inequality and dividing out the common factor. The identification of $b_\delta$ is immediate. $\Box$ Condition (\ref{exsa5.1}) was derived for $C^2$-domains and subdomains by Brusentsev \cite{Brus}, Section~6, and also exploited by Filippas, Maz'ya and Tertikas \cite{FMT1}, Section~4, in their analysis of Hardy--Sobolev inequalities. It was also used to establish Theorem~5.3 in \cite{NeN1} and Theorem~3.1 in \cite{Rob15}. Combination of Theorem~\ref{tsa5.1} and Lemma~\ref{lxsa5.1}, however, yields a stronger version of this latter result. \begin{cor}\label{cexsa5.2} Assume that the coefficients $C$ of $H$ satisfy the boundary condition $(\ref{esa5.1})$. Further assume either $\Omega$ is a $C^2$-domain in ${\bf R}^d$, or $\Omega={\bf R}^d\backslash\{0\}$, or $\Omega={\bf R}^d\backslash\overline \Pi$ with $\Pi$ a $C^2$-domain in the subspace ${\bf R}^s$. It follows that $H$ is self-adjoint whenever $\delta>2-(d-d_{\!H})/2$. \end{cor} \mbox{\bf Proof} \hspace{5pt}\ The conclusion follows since in each case $d_\Gamma$ satisfies (\ref{exsa5.1}) with $\beta=d-d_{\!H}$ (see \cite{Rob15} Subsection~2.3). Therefore $b_\delta=(d-d_{\!H}+\delta-2)/2$, by Lemma~\ref{lxsa5.1}. 
Then the sufficient condition $b_\delta>(2-\delta)/2$ for self-adjointness of Theorem~\ref{tsa5.1} is equivalent to $\delta>2-(d-d_{\!H})/2$. $\Box$ The earlier version of this result, Theorem~3.1 in \cite{Rob15}, required some explicit bounds on the derivatives of the coefficients of the operator on the boundary layer $\Gamma_{\!\!r}$. In the above corollary no such constraint is necessary. One can also apply Theorem~\ref{tsa5.1} to domains which are the complement of convex sets. \begin{prop}\label{pexsa5.3} Assume that the coefficients $C$ of $H$ satisfy the boundary condition $(\ref{esa5.1})$. Further assume that $\Omega={\bf R}^d\backslash K$ where $K$ is a non-empty closed convex set. It follows that $H$ is self-adjoint whenever $\delta>2-(d-d_{\!H})/2$. \end{prop} \mbox{\bf Proof} \hspace{5pt}\ First consider the case $\dim(K)=d$. Then $d_{\!H}=d-1$ and the condition to establish is $\delta>3/2$. But it follows from the convexity of $K$ that $d_\Gamma$ is convex on open convex subsets of $\Omega$ and $\nabla^2d_\Gamma\geq0$ (see, for example, \cite{Rob13} Proposition~2.1). Therefore repeating the argument in the proof of Lemma~\ref{lxsa5.1} gives $\mathop{\rm div}\chi\geq (\delta-1)\,d_\Gamma^{\,\delta-2}$. Then the weighted Hardy inequality is valid for $\delta>1$ with $b_{\delta,r}=b_\delta=(\delta-1)/2$ and the sufficiency condition $b_\delta>(2-\delta)/2$ is equivalent to $\delta>3/2$. Secondly, assume $\dim(K)=s$ with $s\in\{1,\ldots, d-1\}$. Then $d_{\!H}=s$. Now one can factorize ${\bf R}^d={\bf R}^{s}\times {\bf R}^{d-s}$ such that $K$ is a closed convex subset of ${\bf R}^s$. Choose coordinates $x=(y,z)$ with $y\in {\bf R}^s$ and $z\in {\bf R}^{d-s}$. Then for $x\in \Omega$ one has $d_\Gamma(x)=(d_K(y)^2+d_A(z)^2)^{1/2}$ where $d_K$ is the Euclidean distance to $K$ in ${\bf R}^s\backslash K$ and $d_A(z)=|z|$. Then it follows as in \cite{Rob15} that \[ d_\Gamma(\nabla_{\!\!x}^2d_\Gamma)=d_K(\nabla_{\!\!y}^2d_K)+(d-s-1) \;. 
\] But now $\nabla_{\!y}^2d_K\geq0$ and the argument of Lemma~\ref{lxsa5.1} gives $\mathop{\rm div}\chi\geq (\delta+d-s-2)\,d_\Gamma^{\,\delta-2}$. So one now has the weighted Hardy inequality for $\delta>2-(d-d_{\!H})$ with $b_{\delta,r}=b_\delta=(d-d_{\!H}+\delta-2)/2$. Therefore the sufficiency condition for self-adjointness $b_\delta>(2-\delta)/2$ of Theorem~\ref{tsa5.1} is again equivalent to $\delta>2-(d-d_{\!H})/2$. $\Box$ It is to be expected that there is a similar result to Corollary~\ref{cexsa5.2} or Proposition~\ref{pexsa5.3} for convex domains. But the relevant Hardy inequality is not known at present. It is known that the weighted Hardy inequality (\ref{esa5.3}) is valid for convex domains, bounded or unbounded, if $\delta\in[0,1\rangle$ with $b_\delta=b_{\delta,r}=(1-\delta)/2$ (see \cite{Avk1} for the general $L_p$-case). It is also known that it is valid for $\delta>1$ for unbounded convex domains or on thin boundary layers for bounded convex domains but it is not known whether the Hardy constant $b_\delta=(\delta-1)/2$. We will return to this discussion in a broader context in the next section. The convex situation is illustrated by the following example which also anticipates the more general results of Section~\ref{S6}. \begin{exam}\label{exsa5.4} Let $\Omega=B(0\,;1)$, the unit ball centred at the origin. Then $d_\Gamma(x)=1-|x|$, $(\nabla d_\Gamma)(x)=-x/|x|$ and $(\nabla^2d_\Gamma)(x)=-(d-1)/|x|$. Thus if $r\in\langle0,r_0]$ with $r_0<1$ then $0\geq d_\Gamma(\nabla^2d_\Gamma)\geq -(d-1)r/(1-r_0)$. In particular (\ref{exsa5.1}) is satisfied with $\beta=1$ and $\gamma=(d-1)/(1-r_0)$. Repeating the argument used to prove Corollary~\ref{cexsa5.2} one then obtains the weighted Hardy inequality (\ref{esa5.3}) on the boundary layer $\Gamma_{\!\!r}$ with $b_{\delta,r}=(\delta-1-\gamma\,r)/2$ for all $\delta>1+\gamma \,r$ and $r\in \langle0,r_0]$. Therefore $b_\delta\geq (\delta-1)/2$ for $\delta>1$. 
Conversely if $\delta>1$ one can construct a sequence $\psi_n\in C^1(\Gamma_{\!\!r})$ such that $\int d_\Gamma^{\,\delta}\, |\nabla\psi_n|^2/\int d_\Gamma^{\,\delta-2}\,|\psi_n|^2\to (\delta-1)^2/4$ as $n\to \infty$ (see Proposition~\ref{psa6.3}). Therefore $b_\delta=(\delta-1)/2$. Note that the foregoing argument requires $r<1$. In fact the weighted Hardy inequality on $\Omega$ fails for $\delta>1$. Specifically $\int_\Omega d_\Gamma^{\,\delta}\, |\nabla\psi|^2/\int_\Omega d_\Gamma^{\,\delta-2}\,|\psi|^2=\alpha^2$ if $\psi=d_\Gamma^{\,\alpha}$ with $\alpha\in\langle0,(\delta-1)/2\rangle$ where the upper bound on $\alpha $ ensures that $d_\Gamma^{\,\alpha+\delta/2-1}\in L_2(\Gamma_{\!\!r})$. Therefore \[ \textstyle{\inf_{\psi\in C_c^1(\Omega)}}\int_\Omega d_\Gamma^{\,\delta}\, |\nabla\psi|^2/\int_\Omega d_\Gamma^{\,\delta-2}\,|\psi|^2=0 \] and the Hardy inequality fails on $\Omega$. \end{exam} Self-adjointness of $H$ should follow in Corollary~\ref{cexsa5.2} and Proposition~\ref{pexsa5.3} from the slightly more general condition $\delta\geq 2-(d-d_{\!H})/2$ but the critical case $\delta= 2-(d-d_{\!H})/2$ does not follow from the foregoing arguments or from the arguments of \cite{Rob15}. The conjecture is partially supported by the results for the $C^2$-case. If $\Omega$ is a $C^2$-domain then $d_{\!H}=d-1$ and the sufficient condition of Corollary~\ref{cexsa5.2} is $\delta>3/2$. It follows, however, from Theorem~3.2 of \cite{Rob15} that the matching condition $\delta\leq 3/2$ is necessary for self-adjointness. Note that the proof of this latter result does use a bound on the derivatives of the coefficients through a bound on $|\mathop{\rm div} (C d_\Gamma^{\,-\delta})|$ and it is not clear whether it still holds in the more general setting of the current paper. 
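The failure of the Hardy inequality on the whole ball in Example~\ref{exsa5.4} can be checked numerically: for $\psi=d_\Gamma^{\,\alpha}$ the two integrands are pointwise proportional, so the quotient equals $\alpha^2$ exactly and tends to zero as $\alpha\to0$. A sketch in radial coordinates, with illustrative choices $d=3$ and $\delta=3/2$ (these particular values are assumptions, not from the text):

```python
import numpy as np

d, delta = 3, 1.5   # illustrative dimension and degeneracy parameter

def rayleigh_quotient(alpha, n=200000):
    """Quotient int d^delta |grad psi|^2 / int d^{delta-2} psi^2 on the unit
    ball for psi = d_Gamma^alpha, with d_Gamma(x) = 1 - |x|.  Midpoint rule
    in u = d_Gamma; both integrands are proportional, so the quotient is
    alpha**2 up to rounding, independently of the quadrature."""
    u = (np.arange(n) + 0.5) / n            # u = d_Gamma in (0, 1)
    w = (1.0 - u) ** (d - 1)                # radial volume weight |x|^{d-1}
    num = np.sum(u**delta * alpha**2 * u ** (2 * alpha - 2) * w)
    den = np.sum(u ** (delta - 2) * u ** (2 * alpha) * w)
    return num / den

for alpha in (0.2, 0.1, 0.05, 0.02):        # alpha in (0, (delta-1)/2)
    print(alpha, rayleigh_quotient(alpha))
```

The printed quotients decrease to zero with $\alpha$, matching the displayed infimum above.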
Finally we note that Theorem~\ref{tsa5.1}, Corollary~\ref{cexsa5.2} and Proposition~\ref{pexsa5.3} can again be used as building blocks for the consideration of more complicated domains whose boundaries decompose into separated components. In particular these results extend to the case that $\Gamma$ consists of a union of positively separated components in exactly the same manner that Theorem~\ref{tsa4.1} extended to Corollary~\ref{csa4.1}. The value of the parameter $\delta$ can vary component by component in the degeneracy condition (\ref{esa5.1}) and the Hardy inequality (\ref{esa5.3}). For example, if $\Omega={\bf R}^d\backslash S$ with $S$ a countable set of positively separated points then the condition $\delta>2-d/2$ is sufficient for self-adjointness. \section{Rough boundaries}\label{S6} Theorems~\ref{tsa4.1} and \ref{tsa5.1} demonstrate that the weighted Hardy inequality at the boundary is the key property underlying self-adjointness. Moreover, Corollary~\ref{cexsa5.2} and Proposition~\ref{pexsa5.3} give a variety of cases in which the weighted Hardy inequality is valid and the Hardy constant can be explicitly calculated. This has the consequence of identifying the critical degeneracy for self-adjointness as $\delta_c=2-(d-d_{\!H})/2$. But all these cases rely on some smoothness or convexity properties of the boundary of the domain. In this section we examine the situation of rough boundaries which is much more opaque. There are several possible interpretations of domains with rough boundary. Two established notions are John domains and the subclass of uniform domains. For simplicity we will consider uniform domains although many of the following conclusions follow for John domains with some local uniformity. In addition there is the concept of Ahlfors regularity which, despite its name, describes boundary sets of a very irregular nature, e.g.\ self-similar fractals. The Ahlfors property was used in \cite{LR} to characterize Markov uniqueness. 
We begin by summarizing some relevant definitions and results. First, $\Omega$ is defined to be a uniform domain if there is a $\sigma\geq1$ and for each pair of points $x,y\in \Omega$ a rectifiable curve $\gamma\colon[0,l]\to\Omega$, parametrized by arc length, such that $\gamma(0)=x$, $\gamma(l)=y$ with arc length $l(\gamma(x\,;y))\leq \sigma\,|x-y|$ and $d_\Gamma(\gamma(t))\geq \sigma^{-1}\,(t\wedge(l-t))$ for all $t\in [0,l]$. Uniform domains were introduced by Martio and Sarvas \cite{MarS} as a special subclass of domains studied earlier by John \cite{John} in which the bound on the length of the curve $\gamma$ is omitted. In fact these authors only examined bounded domains and the extension to unbounded domains was given subsequently by V{\"a}is{\"a}l{\"a} \cite{Vai1} (see also \cite{Leh3}, Section~4). In the case of bounded John domains the definition can be simplified. Then it suffices that there is a preferred point $x_c\in\Omega$, the centre point, which can be connected to every other $x\in\Omega$ by a curve with the foregoing properties. In the sequel we are interested in boundary properties of the domains. Then the uniformity property is assumed to be valid for all pairs of points $x,y$ in the boundary layer $\Gamma_{\!\!r}$ but the curve $\gamma$ joining the points, which lies in $\Omega$, is not constrained to $\Gamma_{\!\!r}$. Secondly, let $B(x_0\,;r_0)$ denote the open Euclidean ball centred at $x_0$ with radius $r_0$. Then the boundary $\Gamma$ is defined to be Ahlfors $s$-regular if there is a regular Borel measure $\mu$ on $\Gamma$ and an $s>0$ such that for each subset $A=\Gamma \cap B(x_0\,;r_0)$, with $x_0\in\Gamma$ and $r_0>0$, one can choose $a>0$ so that \begin{equation} a^{-1}\,r^s \leq \mu(A\cap B(x\,;r)) \leq a\, r^s \label{euni1.3} \end{equation} for all $x\in A$ and $r\in \langle0, 2r_0\rangle$. 
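For orientation, the simplest instance of (\ref{euni1.3}) is smooth: take $\Gamma$ the unit circle in ${\bf R}^2$, $\mu$ arc length, $s=1$ and $A$ the whole circle (a toy example, not from the text). For $x$ on the circle the arc inside the Euclidean ball $B(x\,;r)$ subtends the angle $4\arcsin(r/2)$, which is comparable to $2r$, so (\ref{euni1.3}) holds with $s=1$ and, say, $a=3$ for $r\leq1$.

```python
import numpy as np

def arc_measure(r):
    """Arc length of the unit circle inside B(x; r) for x on the circle.
    Chord geometry: points at chordal distance <= r from x form an arc
    subtending the angle 4*arcsin(r/2)."""
    return 4.0 * np.arcsin(r / 2.0)

# The ratio mu(A n B(x; r)) / r stays between 2 and 2*pi/3 for r <= 1,
# confirming Ahlfors 1-regularity with a = 3.
for r in (1.0, 0.5, 0.1, 0.01):
    print(r, arc_measure(r) / r)
```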
This is a locally uniform version of the Ahlfors regularity property used in the theory of metric spaces (see, for example, the monographs \cite{DaS, Semm, Hei, MaT}). It implies that $\mu$ and the Hausdorff measure ${\cal H}^s$ on $\Gamma$ are locally equivalent and $s=d_{\!H}$, the Hausdorff dimension of~$\Gamma$. The Ahlfors property implies that $\Gamma$ is regular in the sense that each of the subsets $\Gamma_{\!\!x,r}=\Gamma \cap B(x\,;r)$ with $x\in\Gamma$ has Hausdorff dimension $s$ but $\Gamma$ could have a wildly irregular fractal nature. As a prelude to the discussion of self-adjointness we begin with a characterization of Markov uniqueness of operators satisfying the boundary condition (\ref{esa5.1}). Recall that $H$ is defined to be Markov unique if it has a unique self-adjoint extension on $L_2(\Omega)$ which generates a positive semigroup, i.e.\ a semigroup which maps positive functions into positive functions. \begin{prop}\label{psa6.1} Assume that $\Omega$ is a uniform domain whose boundary $\Gamma$ is Ahlfors $s$-regular. Further assume the coefficients of $H$ satisfy the boundary condition $(\ref{esa5.1})$. Then the following conditions are equivalent: \begin{tabel} \item\label{psa6.1-1} $H$ is Markov unique, \item\label{psa6.1-2} $\delta\geq 2-(d-s)\;\;(\,= 2-(d-d_{\!H}))$. \end{tabel} \end{prop} The proof of the proposition is a repeat of the proof of Theorem~1.1 in \cite{LR}. In the latter reference it was assumed that $C$ satisfied bounds $a\,d_{\Gamma}^{\,\delta}I\leq C\leq b\,d_{\Gamma}^{\,\delta}I$ on a boundary layer $\Gamma_{\!\!r}$ with $a, b>0$ constant. But these bounds follow from (\ref{esa5.2}) since $c$ is bounded, positive and bounded away from zero on $\Gamma_{\!\!r}$. Then the proof that Condition~\ref{psa6.1-2} implies Condition~\ref{psa6.1-1} is by a capacity estimate which relies solely on the Ahlfors regularity. It does not require the uniform domain property. 
The proof of the converse does require the uniform property if $d_{\!H}\in [d-1,d\rangle$ but only in one small bounded neighbourhood of $\Gamma$. We refer to \cite{LR} for details. The relevance of the proposition for self-adjointness is the following. \begin{cor}\label{csa6.1} Under the assumptions of Proposition~$\ref{psa6.1}$ the condition $\delta\geq 2-(d-d_{\!H})$ is necessary for self-adjointness. \end{cor} \mbox{\bf Proof} \hspace{5pt}\ If $\delta<2-(d-d_{\!H})$ then $H$ is not Markov unique. Hence it is not self-adjoint. $\Box$ Now we have the following crucial existence result for the weighted Hardy inequality on John domains. \begin{prop}\label{psa6.2} $(${\rm Lehrb{\"a}ck}$)$ Assume that $\Omega$ is a John domain whose boundary $\Gamma$ is Ahlfors $s$-regular. Then for each $\delta>2-(d-d_{\!H})$ there are an $r_0>0$ and for each $r\in\langle0,r_0\rangle$ a $b_{\delta,r}>0$ such that \begin{equation} \|d_\Gamma^{\,\delta/2}\,\nabla\psi\|_2\geq b_{\delta,r}\,\|d_\Gamma^{\,\delta/2-1}\psi\|_2 \label{esa6.1} \end{equation} for all $\psi\in C_c^1(\Gamma_{\!\!r})$. \end{prop} \mbox{\bf Proof} \hspace{5pt}\ First note that (\ref{esa6.1}) is just an operator version of the weighted Hardy inequality (\ref{esa1.2}). Thus if $\Omega$ is unbounded it follows for all $\psi\in C_c^1(\Omega)$ by Theorem~1.3 of \cite{Leh3} but with $s$ the Aikawa dimension of the boundary which is larger than the Hausdorff dimension in general. It follows, however, from the Ahlfors regularity of the boundary that the Hausdorff dimension and the Aikawa dimension are equal (see Lemma~2.1 of \cite{Leh3}). Therefore the statement of the proposition follows by restriction to $\psi$ with support in $\Gamma_{\!\!r}$. If $\Omega$ is bounded then the weighted Hardy inequality is not generally valid on the whole domain. For example, it fails on the unit ball (see Example~\ref{exsa5.4}). 
Nevertheless if $r_0$ is sufficiently small the arguments of \cite{Leh3} establish the Hardy inequality on the boundary layer. In fact the argument simplifies considerably \cite{Leh5}. The idea is to prove that if $r\in\langle0,r_0\rangle$ then there is a $B_{\delta,r}>0$ such that \[ B_{\delta,r}^{\,2}\int_{\Gamma_{\!\!r}}d_\Gamma^{\,\delta-2}\,|\psi|^2 \leq \int_{\Gamma_{\!\!r}}d_\Gamma^{\,\delta-1}\,|\psi|\,|\nabla\psi|+\int_{\Gamma_{\!\!r}}d_\Gamma^{\,\delta}\,|\nabla\psi|^2 \] for all $\psi\in C_c^1(\Gamma_{\!\!r})$. Then the weighted Hardy inequality follows by applying the Cauchy--Schwarz inequality to the first term on the right hand side and rearranging. But the latter inequality is established by an argument involving a covering of the John domain $\Omega$ by Whitney cubes satisfying the Boman chain condition (see, for example, \cite{BKL}, \cite{HK}). Since $\Omega$ is bounded it suffices to consider chains which begin with a cube $Q_c$ which contains the centre point $x_c$. Then, however, one can suppose that $r$ is sufficiently small that $Q_c$ has an empty intersection with $\Gamma_{\!\!r}$. Therefore the average of each $\psi\in C_c^1(\Gamma_{\!\!r})$ over $Q_c$ is equal to zero and the main assumption (8) in Theorem~3.1 of \cite{Leh3} is automatically satisfied. Then repeating the proof of Theorem~3.1, with $\Omega=\Omega'=\Gamma_{\!\!r}$, one obtains the desired inequality. $\Box$ Although the principal interest in Proposition~\ref{psa6.2} is its applicability to domains with rough or uniformly disconnected boundaries it also gives information for simpler cases such as convex domains. In particular it establishes that the weighted Hardy inequality (\ref{esa6.1}) is satisfied on a boundary layer for a general convex domain and for all $\delta>1$. This is consistent with the behaviour exhibited for the unit ball in Example~\ref{exsa5.4}. 
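The content of the boundary-layer Hardy inequality can be sampled in the simplest model, a one-dimensional layer $\Gamma_{\!\!r}=\langle0,1\rangle$ with $d_\Gamma(t)=t$ and $\delta=3/2$ (an illustrative assumption, not a configuration treated in the paper). For any test function vanishing at both ends the quotient should dominate $((\delta-1)/2)^2$; checking one test function is a sanity check, not a proof.

```python
import numpy as np

def hardy_quotient(psi, dpsi, delta, n=200000):
    """int_0^1 t^delta psi'(t)^2 dt / int_0^1 t^(delta-2) psi(t)^2 dt,
    evaluated by the midpoint rule for a test function with psi(0)=psi(1)=0."""
    t = (np.arange(n) + 0.5) / n
    num = np.sum(t**delta * dpsi(t) ** 2)
    den = np.sum(t ** (delta - 2) * psi(t) ** 2)
    return num / den

delta = 1.5
bound = ((delta - 1.0) / 2.0) ** 2            # predicted constant squared, 1/16

psi = lambda t: np.sin(np.pi * t) ** 2        # vanishes at t = 0 and t = 1
dpsi = lambda t: np.pi * np.sin(2.0 * np.pi * t)
print(hardy_quotient(psi, dpsi, delta), ">=", bound)
```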
In this particular example one has $b_\delta=(\delta-1)/2$ and it could well be that this remains the case for the Hardy constant and a general convex domain. This is not currently known. It would follow from an estimate $0\geq d_\Gamma(\nabla^2d_\Gamma)\geq -\gamma\, r$ on $\Gamma_{\!\!r}$. The Hardy inequality (\ref{esa6.1}) of Proposition~\ref{psa6.2} is the main ingredient in Theorem~\ref{tsa5.1}. Therefore if $\delta\in \langle2-(d-d_{\!H}), 2\rangle$ and the coefficients of $H$ satisfy the boundary condition (\ref{esa5.1}) it follows that the condition $b_\delta>(2-\delta)/2$ is sufficient for self-adjointness of $H$. But for this criterion to be of utility it is necessary to have further information on the Hardy constant $b_\delta$. It follows from \cite{Rob15} that $b_\delta=(d-d_{\!H}+\delta-2)/2$ for the domains covered by Corollary~\ref{cexsa5.2} and Proposition~\ref{pexsa5.3}. This then yields the explicit sufficiency condition on the degeneracy parameter $\delta>2-(d-d_{\!H})/2$ for self-adjointness. Although there is a large literature on the Hardy constant it appears to be confined to domains with smooth boundaries or satisfying convexity properties. Little appears to be known about the value of the Hardy constant for domains with rough boundaries. Nevertheless one can derive some general properties of $b_\delta$ which lead to a criterion for it to have the standard value $(d-d_{\!H}+\delta-2)/2$. In addition one can demonstrate by explicit example that this is not always the case. In fact the Hardy constant can be arbitrarily small (see Example~\ref{exsa6.1}). First we examine the general properties of $b_\delta$. The most complicated one to establish in the current setting is that $b_\delta$ is bounded from above by the standard value. Note that the following proposition applies equally well to convex domains or Lipschitz domains. 
In both these cases it establishes that the boundary Hardy constant satisfies the upper bound $b_\delta\leq (\delta-1)/2$ for $\delta>1$. This result was already cited for convex domains in Example~\ref{exsa5.4}. The general result does, however, require uniformity of the domain if $d_{\!H}\in[\,d-1,d\,\rangle$. \begin{prop}\label{psa6.3} Assume $\Omega$ is a uniform domain with an Ahlfors $s$-regular boundary $\Gamma$. Further assume that if $\delta>2-(d-s)$ then there is a $b_{\delta,r}>0$ such that \[ \|d_\Gamma^{\,\delta/2}\,\nabla\psi\|_2\geq b_{\delta,r}\,\|d_\Gamma^{\,\delta/2-1}\psi\|_2 \] for all $\psi\in C_c^1(\Gamma_{\!\!r})$. It follows, since $s=d_{\!H}$, that the Hardy constant $b_\delta$ satisfies \begin{equation} b_\delta\leq (d-d_{\!H}+\delta-2)/2 \;.\label{esa6.10} \end{equation} \end{prop} \mbox{\bf Proof} \hspace{5pt}\ It is evident that one may assume \begin{equation} b_{\delta,r}\leq b_\delta=\inf\Big\{\|d_\Gamma^{\,\delta/2}\,\nabla\psi\|_2/\|d_\Gamma^{\,\delta/2-1}\psi\|_2:\psi\in C_c^1(\Gamma_{\!\!r}),\,r>0\Big\} \;.\label{esa6.2} \end{equation} Moreover, since $b_\delta$ is the supremum over all possible choices of $b_{\delta, r}$ for small $r>0$ it suffices to prove that $b_{\delta,r}$ satisfies the bound (\ref{esa6.10}). But this can be established by modification of a lemma of Adam Ward \cite{War} (see also \cite{War1}, Lemma~5.1). \begin{lemma}\label{lsa6.1} For each $\beta\geq0$ \begin{equation} b_{\delta,r}\leq |(\beta+\delta-2)/2|+\|d_\Gamma^{\,1-\beta/2}\nabla\psi\|_2/\|d_\Gamma^{\,-\beta/2}\psi\|_2 \label{esa6.3} \end{equation} for all $\psi\in C_c^1(\Gamma_{\!\!r})$. \end{lemma} \mbox{\bf Proof} \hspace{5pt}\ Set $\alpha=\beta+\delta-2$. Then replace $\psi$ in (\ref{esa6.3}) by $d_\Gamma^{\,-\alpha/2}\psi$. 
It follows from the Leibniz rule and the triangle inequality that \begin{eqnarray*} \|d_\Gamma^{\,\delta/2}\nabla(d_\Gamma^{\,-\alpha/2}\psi)\|_2&\leq&\|d_\Gamma^{\,\delta/2}(\nabla d_\Gamma^{\,-\alpha/2})\psi\|_2+\|d_\Gamma^{\,\delta/2}d_\Gamma^{\,-\alpha/2}(\nabla \psi)\|_2\\[5pt] &=&|(\beta+\delta-2)/2|\,\|d_\Gamma^{\,-\beta/2}\psi\|_2+\|d_\Gamma^{\,1-\beta/2}(\nabla\psi)\|_2 \;. \end{eqnarray*} Similarly $\|d_\Gamma^{\,\delta/2-1}d_\Gamma^{\,-\alpha/2}\psi\|_2=\|d_\Gamma^{\,-\beta/2}\psi\|_2$. The desired conclusion follows immediately. $\Box$ The next step in the proof of Proposition~\ref{psa6.3} is to construct a sequence of $\psi_n\in C_c^1(\Gamma_{\!\!r})$ such that if $\beta=d-d_{\!H}$ then the numerator in the last term of (\ref{esa6.3}) is bounded uniformly in $n$ but the denominator diverges as $n\to\infty$. Once this is achieved one immediately deduces the bound (\ref{esa6.10}). The construction is based on some estimates which are a consequence of the uniformity of $\Omega$ and the Ahlfors regularity of $\Gamma$. Let $A=\Gamma\cap B(x_0;R)$ with $x_0\in \Gamma$ and $R>0$, and set $A_r=\{x\in\overline \Omega:d_A(x)<r\}$ where $d_A$ is the Euclidean distance to the set $A$. Then there are $\kappa,\kappa'>0$ such that \begin{equation} \kappa'r^{(d-d_{\!H})}\leq |A_r|\leq \kappa\, r^{(d-d_{\!H})} \label{esa6.30} \end{equation} for all small $r>0$. These estimates are established in Section~2 of \cite{LR}. Their proof is based on ideas of Salli \cite{Sall}. The upper bound only requires the Ahlfors regularity of $\Gamma$ but if $s\in [\,d-1,d\,\rangle$ the lower bound also requires the uniformity property. The $\psi_n$ are now defined with the aid of the $W^{1,\infty}(0,\infty)$-functions $\xi_n$ used in the proof of Corollary~\ref{csa2.1}. One has $\xi_n(t)=0$ if $t<1/n$, $\xi_n(t)=1$ if $t>1$ and $\xi_n(t)=\log(nt)/\log n$ if $1/n\leq t\leq 1$. Thus $\xi_n(1)=1$ and $\xi_n(t)\geq 1/2$ if $t\in [n^{-1/2},1]$. 
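The logarithmic cutoffs $\xi_n$ are the standard choice because their weighted energy vanishes: $\int_0^1 t\,|\xi_n'(t)|^2\,dt=1/\log n$, since $\xi_n'(t)=1/(t\log n)$ on $[1/n,1]$ and $0$ elsewhere. The following numerical check of this identity is background motivation only; the argument below uses only the pointwise bounds on $\xi_n$ stated above.

```python
import numpy as np

def xi_weighted_energy(n, m=200001):
    """Evaluate int_0^1 t * xi_n'(t)^2 dt for xi_n(t) = log(n t)/log(n) on
    [1/n, 1] by the trapezoid rule; the exact value is 1/log(n)."""
    t = np.linspace(1.0 / n, 1.0, m)
    f = t * (1.0 / (t * np.log(n))) ** 2      # t * xi_n'(t)^2 = 1/(t log^2 n)
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)

for n in (10, 100, 1000):
    print(n, xi_weighted_energy(n), 1.0 / np.log(n))
```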
Then fix a decreasing function $\chi\in C^1(0,1)$ with $\chi(t)=1$ if $t\in[0,1/2]$, $\chi(1)=0$ and $|\chi'|\leq 4$. Finally define $\psi_n=(\xi_n\circ(r^{-1}d_\Gamma))\,(\chi\circ(r^{-1}d_A))$. It follows from this construction that $0\leq\psi_n\leq 1$, $\mathop{\rm supp}\psi_n\subset A_r\subset\Gamma_{\!\!r}$ and the $\psi_n$ converge pointwise to $\chi\circ(r^{-1}d_A)$ as $n\to\infty$. Further $(\xi_n\circ(r^{-1}d_\Gamma))\geq 1/2$ if $d_\Gamma\geq r\,n^{-1/2}$ and $(\chi\circ(r^{-1}d_A))=1$ if $d_A\leq r/2$. In addition $d_A\geq d_\Gamma$. Therefore, setting $\beta=d-d_{\!H}$, \[ \|d_\Gamma^{\,-\beta/2}\psi_n\|_2^2=\int d_\Gamma^{\,-\beta}\,|\psi_n|^2\geq (1/4)\int d_A^{\,-\beta}\,1\hspace{-4.5pt}1_{D_{r,n}} \] where $D_{r,n}=\{x\in\Omega:r/n^{1/2}\leq d_A(x)\leq r/2\,,\,r/n^{1/2}\leq d_\Gamma(x)\leq r\}$. Then since $d_A^{\,-\beta}=(r/2)^{-\beta}(1+\beta\int^1_{2d_A(x)/r}dt\,t^{-(\beta+1)})$ one has \begin{eqnarray*} \int d_\Gamma^{\,-\beta}\,|\psi_n|^2&\geq &(1/4)(r/2)^{-\beta}\int 1\hspace{-4.5pt}1_{D_{r,n}}\Big(1+\beta\int^1_{2d_A(x)/r}dt\,t^{-(\beta+1)}\Big)\\[5pt] &\geq &(1/4)(r/2)^{-\beta}|D_{r,n}| +(1/4)\,\beta\int^1_{2n^{-1/2}}dt\,t^{-1}((rt/2)^{-\beta}|D_{r,t,n}|) \end{eqnarray*} where $D_{r,t,n}=\{x\in\Omega:r/n^{1/2}\leq d_A(x)\leq tr/2\,,\,r/n^{1/2}\leq d_\Gamma(x)\leq r\}$. But it follows from the volume estimates (\ref{esa6.30}) that $\lim_{n\to\infty}((r/2)^{-\beta}|D_{r,n}|)=((r/2)^{-\beta}|A_{r/2}|)\geq \kappa'$ and $\lim_{n\to\infty}((rt/2)^{-\beta}|D_{r,t,n}|)=((rt/2)^{-\beta}|A_{rt/2}|)\geq \kappa'$ uniformly for $t\in\langle0,1\rangle$. Since the integral of $t^{-1}$ is divergent at the origin one concludes that $\lim_{n\to\infty}\|d_\Gamma^{\,-\beta/2}\psi_n\|_2=\infty$. This is the first step in handling the last term in (\ref{esa6.3}). The second step is to estimate $\|d_\Gamma^{\,1-\beta/2}(\nabla\psi_n)\|_2^2=\int d_\Gamma^{\,2-\beta}|\nabla\psi_n|^2$.
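Both this estimate and those of the next step rest on the same elementary identity, recorded here for convenience: if $0<u\leq\rho$ and $\beta>0$ then
\[
u^{-\beta}=\rho^{-\beta}\Big(1+\beta\int^1_{u/\rho}dt\,t^{-(\beta+1)}\Big)
\;,
\]
as one verifies by evaluating the integral. It is applied with $u$ equal to $d_A(x)$ or $d_\Gamma(x)$ and $\rho$ a suitable multiple of $r$; interchanging the $t$-integral with the integral over $\Omega$ then reduces the estimates to the volume bounds (\ref{esa6.30}).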
First observe that \[ |\nabla\psi_n|^2\leq 2r^{-2}|(\xi'_n\circ(r^{-1}d_\Gamma))|^2|(\chi\circ(r^{-1}d_A))|^2 +2r^{-2}|(\xi_n\circ(r^{-1}d_\Gamma))|^2|(\chi'\circ(r^{-1}d_A))|^2 \] by the Leibniz rule and the Cauchy--Schwarz inequality. Denote the integrals involving the first and second terms on the right hand side by $I_1$ and $I_2$, respectively. Then since $|\xi_n|\leq 1$ one has \begin{eqnarray*} I_2&\leq &8r^{-2}\int d_\Gamma^{\,2-\beta} |(\xi_n\circ(r^{-1}d_\Gamma))|^2\,1\hspace{-4.5pt}1_{\{x:r/2\leq d_A(x)\leq r\}}\\[5pt] &\leq&8r^{-2}\int d_\Gamma^{\,2-\beta}\,1\hspace{-4.5pt}1_{\{x:r/2\leq d_A(x)\leq r\}} \leq 8(r^{-\beta}|A_r|)\leq 8\kappa \end{eqnarray*} if $\beta\leq 2$. Alternatively, if $\beta>2$ one deduces that \[ I_2\leq 8r^{-2}\int d_\Gamma^{\,2-\beta}\, 1\hspace{-4.5pt}1_{D'_{r,n}}\leq 8\Big(r^{-\beta}\,|D'_{r,n}|+(\beta-2)\int^1_{n^{-1}}dt\,t\,((rt)^{-\beta}\,|D'_{r,t,n}|)\Big) \] where $D'_{r,t,n}=\{x\in\Omega:rn^{-1}\leq d_\Gamma(x)\leq rt\,,\, r/2\leq d_A(x)\leq r\}$ and $D'_{r,n}=D'_{r,1,n}$. As before one has $\lim_{n\to\infty}(r^{-\beta}|D'_{r,n}|)\leq (r^{-\beta}|A_{r}|)\leq \kappa$ but the estimate on $(rt)^{-\beta}|D'_{r,t,n}|$ is more delicate. First note that $D'_{r,t,n}\subset\{x\in\Omega:0\leq d_\Gamma(x)\leq rt\,,\,r/2\leq d_A(x)\leq r\}$. But if $t<1/2$ then the condition $d_\Gamma(x)\leq rt$ eliminates all those $x\in A_r$ such that $d_A(x)=d_\Gamma(x)$. Therefore $D'_{r,t,n}\subset {\hat A}_{rt}$ where $\hat A$ is a bounded subset of $\Gamma$ determined by $A$. This can be explicitly verified as follows. Since $A=\Gamma\cap B(x_0\,;R)$ one clearly has $D'_{r,t,n}\subset \Omega\cap B(x_0\,;R+r)$. More specifically \begin{eqnarray*} D'_{r,t,n}&\subseteq &( \Omega\cap B(x_0\,;R+r))\cap\{x\in\Omega: d_A(x)\geq r/2\,,\, d_\Gamma(x)\leq rt\}\\[5pt] &\subseteq &( \Omega\cap B(x_0\,;R+r)\backslash B(x_0\,;R))\cap\{x\in\Omega: d_\Gamma(x)\leq rt\} \end{eqnarray*} for all small $t$.
The second inclusion follows since for $t<1/2$ the combination of the conditions $d_A(x)\geq r/2$ and $d_\Gamma(x)<rt$ implies that $x$ is in the complement of $B(x_0\,;R)$. Now it follows that one can choose $\hat A=\Gamma\cap B(x_0\,;R+r)$. Therefore $(rt)^{-\beta}|D'_{r,t,n}|\leq (rt)^{-\beta}|{\hat A}_{rt}|$ is bounded uniformly for all $n\geq 1$ and $t\leq 1$ by (\ref{esa6.30}). Consequently $I_2$ is uniformly bounded in $n$. Next we have to estimate the integral $I_1$. But \[ I_1=2r^{-2}\int d_\Gamma^{\,2-\beta}|(\xi'_n\circ(r^{-1}d_\Gamma))|^2|(\chi\circ(r^{-1}d_A))|^2\leq 2(\log n)^{-2}\int d_\Gamma^{\,-\beta}\,1\hspace{-4.5pt}1_{D''_{r,n}} \] where one now has $D''_{r,n}=\{x\in\Omega:rn^{-1}\leq d_\Gamma(x)\leq r\,,\,0< d_A(x)\leq r\}$. Arguing as above \[ I_1\leq 2(\log n)^{-2}\Big(r^{-\beta}|D''_{r,n}|+\beta\int^1_{n^{-1}} dt\,t^{-1}((rt)^{-\beta}\,|D''_{r,t,n}|)\Big) \] with $D''_{r,t,n}=\{x\in\Omega:rn^{-1}\leq d_\Gamma(x)\leq rt\,,\, 0< d_A(x)\leq r\}$. Then the volume estimates (\ref{esa6.30}) establish that $\sup_{n\geq1}(r^{-\beta}|D''_{r,n}|)\leq \kappa$ and $\sup_{n\geq1}((rt)^{-\beta}|D''_{r,t,n}|)\leq \kappa$. But the integral gives a factor $\log n$ and one obtains a bound $I_1\leq b\, (\log n)^{-1}$ with $b>0$. Hence one concludes that \[ \|d_\Gamma^{\,1-\beta/2}(\nabla\psi_n)\|_2^2=\int d_\Gamma^{\,2-\beta}|\nabla\psi_n|^2\leq a+b\,(\log n)^{-1} \] with $a,b>0$. This completes the proof of Proposition~\ref{psa6.3}. $\Box$ It follows from the upper bound of Proposition~\ref{psa6.3} combined with the lower bound of Proposition~2.9 of \cite{Rob15} that for the domains covered by Corollary~\ref{cexsa5.2} and Proposition~\ref{pexsa5.3} the optimal Hardy constant always has the standard value $b_\delta=(d-d_{\!H}+\delta-2)/2$. In the current setting this is not necessarily the situation. Nonetheless one can draw some interesting conclusions from general properties of $b_\delta$.
The next proposition collects three related properties which are independent of the particular characteristics of $\Omega$ and $\Gamma$. \begin{prop}\label{lsa6.2} Assume $\Omega$ is such that the weighted Hardy inequality \[ \|d_\Gamma^{\,\delta/2}\,\nabla\psi\|_2\geq b_{\delta,r}\,\|d_\Gamma^{\,\delta/2-1}\psi\|_2 \] is valid for all $\psi\in C_c^1(\Gamma_{\!\!r})$ and all $\delta\in I=\langle 2-(d-d_{\!H}), 2\,]$. It follows that \begin{tabel} \item\label{lsa6.2-1} $\delta\in I\mapsto b_\delta+\delta/2$ is non-decreasing and $b_2\geq b_\delta-(2-\delta)/2$ for all $\delta\in I$, \item\label{lsa6.2-2} if $b_2\geq (d-d_{\!H})/2$ then $|b_2-b_\delta|\leq (2-\delta)/2$ for all $\delta\in I$, \item\label{lsa6.2-3} if, in addition, $b_\delta\leq (d-d_{\!H}+\delta-2)/2$ then $b_2=b_\delta-(2-\delta)/2$ for all $\delta\in I$. \end{tabel} \end{prop} \mbox{\bf Proof} \hspace{5pt}\ {\ref{lsa6.2-1}.} Fix $\delta_1,\delta_2\in I$ with $\delta_1<\delta_2$. Then \begin{eqnarray*} \|d_\Gamma^{\,\delta_2/2-1}\psi\|_2&=&\|d_\Gamma^{\,\delta_1/2-1}(d_\Gamma^{\,(\delta_2-\delta_1)/2}\psi)\|_2\\[5pt] &\leq& b_{\delta_1,r}^{\,-1}\,\|d_\Gamma^{\,\delta_1/2}\,\nabla (d_\Gamma^{\,(\delta_2-\delta_1)/2}\psi)\|_2\\[5pt] &\leq& b_{\delta_1,r}^{\,-1}\Big(\|d_\Gamma^{\,\delta_2/2}(\nabla \psi)\|_2+((\delta_2-\delta_1)/2)\,\|d_\Gamma^{\,\delta_2/2-1}\psi\|_2\Big)\\[5pt] &\leq &b_{\delta_1,r}^{\,-1}\Big(1+((\delta_2-\delta_1)/2)\,b_{\delta_2,r}^{-1}\Big)\,\|d_\Gamma^{\,\delta_2/2}(\nabla \psi)\|_2 \end{eqnarray*} for all $\psi\in C_c^1(\Gamma_{\!\!r})$. Therefore \[ b_{\delta_1,r}\Big(1+((\delta_2-\delta_1)/2)\,b_{\delta_2,r}^{-1}\Big)^{-1}\leq b_{\delta_2,r} \] or, equivalently, $b_{\delta_1,r}+\delta_1/2\leq b_{\delta_2,r}+\delta_2/2$. Thus taking the supremum over the choice of the $b_{\delta_j,r}$ one has $b_{\delta_1}+\delta_1/2\leq b_{\delta_2}+\delta_2/2$. Hence $\delta\in I\mapsto b_\delta+\delta/2$ is non-decreasing. 
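The equivalence invoked in the last step is elementary algebra: setting $c=(\delta_2-\delta_1)/2$ and multiplying through by $1+c\,b_{\delta_2,r}^{-1}$ one finds
\[
b_{\delta_1,r}\,\Big(1+c\,b_{\delta_2,r}^{-1}\Big)^{-1}\leq b_{\delta_2,r}
\;\;\Longleftrightarrow\;\;
b_{\delta_1,r}\leq b_{\delta_2,r}+c
\;,
\]
and the right hand inequality is precisely the statement $b_{\delta_1,r}+\delta_1/2\leq b_{\delta_2,r}+\delta_2/2$.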
\noindent{\ref{lsa6.2-2}.} First it follows from I that $b_2+1\geq b_\delta+\delta/2$ for all $\delta\in I$. Therefore $b_2-b_\delta\geq -(2-\delta)/2$. Secondly, $(2-\delta)/2<(d-d_{\!H})/2$ since $\delta\in I$. Hence $b_2-(2-\delta)/2>b_2-(d-d_{\!H})/2\geq 0$ by assumption. Thirdly, it follows from the weighted Hardy inequality that \[ b_{2,r}\,\|d_\Gamma^{\,\delta/2-1}\psi\|_2\leq \|d_\Gamma\nabla(d_\Gamma^{\,\delta/2-1}\psi)\|_2\leq (1-\delta/2)\|d_\Gamma\,d_\Gamma^{\,\delta/2-2}\psi\|_2+\|d_\Gamma^{\,\delta/2}\,\nabla\psi\|_2 \;. \] But by the preceding one may choose $r$ sufficiently small that $(2-\delta)/2<b_{2,r}$. Hence \[ (b_{2,r}-(2-\delta)/2)\,\|d_\Gamma^{\,\delta/2-1}\psi\|_2\leq \|d_\Gamma^{\,\delta/2}\,\nabla\psi\|_2 \] and $b_\delta\geq b_2-(2-\delta)/2$. Therefore $b_2-b_\delta\leq (2-\delta)/2$. Thus $|b_2-b_\delta|\leq (2-\delta)/2$ for all $\delta\in I$. \noindent{\ref{lsa6.2-3}.} If $b_2\geq (d-d_{\!H})/2$ and $b_\delta\leq (d-d_{\!H}+\delta-2)/2$ then it immediately follows that $b_\delta\leq (d-d_{\!H})/2-(2-\delta)/2\leq b_2-(2-\delta)/2$. Therefore $b_\delta\leq b_2-(2-\delta)/2$. But the converse was established in the proof of II. Hence one must have equality. $\Box$ The final assumption $b_\delta\leq (d-d_{\!H}+\delta-2)/2$ in Proposition~\ref{lsa6.2} was established in Proposition~\ref{psa6.3}. Therefore under the assumptions of that proposition one concludes that the bound $b_2\geq (d-d_{\!H})/2$ implies that $b_\delta=b_2-(2-\delta)/2$ and $b_\delta $ is a strictly increasing function of $\delta$ on the interval $\langle2-(d-d_{\!H}),2\,]$. This observation is used in the proof of the following more precise version of Theorem~\ref{tsa5.1}. In contrast to that theorem we do not have to assume explicitly the validity of the weighted Hardy inequality as it is a consequence of Lehrb{\"a}ck's theorem, Proposition~\ref{psa6.2}.
\begin{thm}\label{tsa6.1} Assume that $\Omega$ is a uniform domain whose boundary $\Gamma$ is Ahlfors $s$-regular. Further assume that the coefficients $C$ of $H$ satisfy the boundary condition $( \ref{esa5.1})$. Then one has the following: \begin{tabel} \item \label{tsa6.1-1} there is a unique $\delta_c\in[\, 2-(d-d_{\!H})/2,2\rangle$ such that $H$ is self-adjoint for all $\delta> \delta_c$, \item \label{tsa6.1-2} $b_\delta=(d-d_{\!H}+\delta-2)/2$ for all $\delta\in\langle 2-(d-d_{\!H})/2,2\,]$ if and only if $b_2\geq(d-d_{\!H})/2$, and if these conditions are satisfied then $b_2=(d-d_{\!H})/2$ and $\delta_c=2-(d-d_{\!H})/2$. \end{tabel} \end{thm} \mbox{\bf Proof} \hspace{5pt}\ \ref{tsa6.1-1}. First it follows from Corollary~\ref{csa2.3} that $H$ is self-adjoint if $\delta\geq2$. Secondly, the weighted Hardy inequality is valid on a boundary layer $\Gamma_{\!\!r}$ for all $\delta>2-(d-d_{\!H})$ by Proposition~\ref{psa6.2}. Therefore it follows from Theorem~\ref{tsa5.1} that the condition $(2-\delta)/2<b_\delta$ is sufficient for the self-adjointness of $H$ for $\delta\in \langle 2-(d-d_{\!H}), 2\,]$. Setting $B_\delta=b_\delta+\delta/2$ this sufficiency condition becomes $B_\delta>1$. But $b_\delta>0$ for $\delta>2-(d-d_{\!H})$ by Proposition~\ref{psa6.2}. Therefore $B_\delta>\delta/2$. In particular $B_2>1$. Thirdly, $b_\delta\leq (d-d_{\!H}+\delta-2)/2$ by Proposition~\ref{psa6.3}. Therefore $B_\delta\leq (d-d_{\!H}+2\delta-2)/2$. In particular $B_{2-(d-d_{\!H})/2}\leq 1$. Finally $b_\delta=b_2-(2-\delta)/2$ for $\delta\in \langle 2-(d-d_{\!H}),2\,]$ by the remark preceding the theorem. Therefore there must be a unique $\delta_c$ in this range for which $B_{\delta_c}=1$. Thus if $\delta>\delta_c$ then $H$ is self-adjoint by Theorem~\ref{tsa5.1}. \noindent{ \ref{tsa6.1-2}.} Assume $b_2\geq (d-d_{\!H})/2$. It follows from Proposition~\ref{psa6.3} that $b_\delta\leq (d-d_{\!H}+\delta-2)/2$. In particular $b_2\leq (d-d_{\!H})/2$. Therefore $b_2= (d-d_{\!H})/2$. 
But $b_\delta=b_2-(2-\delta)/2$ by the remark preceding the theorem. Hence $b_\delta=(d-d_{\!H}+\delta-2)/2$. Conversely if $b_\delta=(d-d_{\!H}+\delta-2)/2$ then $b_2=(d-d_{\!H})/2$. Finally if these equivalent conditions are satisfied then $b_\delta>(2-\delta)/2$ is equivalent to the condition $\delta>2-(d-d_{\!H})/2$. Therefore one has $\delta_c=2-(d-d_{\!H})/2$. $\Box$ It now follows from the remark preceding the theorem and Statement~\ref{tsa6.1-2} of the theorem that one has the following reformulation of the self-adjointness result for the standard case. \begin{cor}\label{csa6.11} Assume $\Omega$ is a uniform domain with an Ahlfors $s$-regular boundary $\Gamma$ and that the coefficients $C$ of $H$ satisfy the boundary condition $( \ref{esa5.1})$. If $b_2\geq (d-d_{\!H})/2$ then $H$ is self-adjoint for all $\delta>2-(d-d_{\!H})/2$ and the Hardy constant $b_\delta=(d-d_{\!H}+\delta-2)/2$ for all $\delta\in\langle 2-(d-d_{\!H})/2,2\,]$. \end{cor} Next consider the non-standard case under the ongoing uniformity and Ahlfors regularity assumptions. Since $b_\delta\leq (d-d_{\!H}+\delta-2)/2$ for $\delta>2-(d-d_{\!H})$ by Proposition~\ref{psa6.3} the non-standard case corresponds to the condition $b_\delta<(d-d_{\!H}+\delta-2)/2$. The following example demonstrates that this can occur and one can even have $b_\delta$ arbitrarily small. \begin{exam}\label{exsa6.1} First let $B_R$ denote a ball of radius $R$. Secondly let $\psi\in C_c^1(B_R)$ denote a function which is one on the concentric ball $B_{(1-2\varepsilon)R}$, zero outside $B_{(1-\varepsilon)R}$ and such that $\|\nabla\psi\|_\infty\leq 2/(\varepsilon R)$ where $\varepsilon\in \langle0,1/4\rangle$.
Then \[ \int_{B_R}d_\Gamma^{\,\delta-2}|\psi|^2\geq \int_{B_{R/2}}d_\Gamma^{\,\delta-2}|\psi|^2\geq aR^{\,d+\delta-2} \;\;\;\;{\rm and}\;\;\;\;\int_{B_R}d_\Gamma^{\,\delta}\,|\nabla\psi|^2\leq bR^{\,d+\delta-2} \varepsilon^{\delta-1} \] with $a,b>0$ where the values of $a$ and $b$ are independent of $R,\varepsilon$ and $\delta$. Therefore \[ \int_{B_R}d_\Gamma^{\,\delta}\,|\nabla\psi|^2\Big/\int_{B_R}d_\Gamma^{\,\delta-2}|\psi|^2\leq (b/a)\,\varepsilon^{\delta-1} \] and if $\delta>1$ this ratio tends to zero as $\varepsilon\to0$. Secondly, let $B_k$ denote the concentric ball with radius $2^{-k}R$ where $k=0,1,\ldots$. Thus $B_0=B_R$. Then let $\psi_k\in C_c^1(B_k)$ be the scaled function with $\psi_k(x)=\psi(2^{k}x)$ for $x\in B_k$. It follows immediately by scaling invariance that the above ratio is unchanged by the replacement $B_0\to B_k$ and $\psi\to\psi_k$. \end{exam} \begin{wrapfigure}{r}{5cm} \begin{center} \includegraphics[width=4cm,keepaspectratio]{lollipops3rot} \end{center} \end{wrapfigure} Thirdly, let $B$ denote a large ball and attach (a family of translates of) the balls $B_k$ to $B$ by narrow tunnels of length $2^{-k}R$ and width $\varepsilon\,(2^{-k}R)$. The balls and tunnels are understood to lie in the exterior of $B$ with the attachments separated from each other such that no pair overlap, as indicated in the illustration. Then let $\Omega$ denote the open interior of the `decorated' ball. It follows that $\Omega$ is a uniform domain with an Ahlfors $(d-1)$-regular boundary. In particular $d_{\!H}=d-1$. Hence the standard value $(d-d_{\!H}+\delta-2)/2$ of the Hardy constant would be $(\delta-1)/2>0$. Finally let $\Psi_{\!n}$ be the function with $\mathop{\rm supp}\Psi_{\!n}=\bigcup_{k\geq n}B_k$ such that $\Psi_{\!n}|_{B_k}=\psi_k$. It then follows that for each small $r>0$ there is an $n$ such that $\mathop{\rm supp}\Psi_{\!n}\subset \Gamma_{\!\!r}$.
Therefore $b_{\delta,r}\leq 2 (b/a)\varepsilon^{\delta-1}$ where the factor~$2$ is added to compensate for any small adjustment needed to account for the effect of the tunnels. Hence $b_\delta\leq 2(b/a)\varepsilon^{\delta-1}<(\delta-1)/2$ for all sufficiently small $\varepsilon$. Therefore $b_\delta$ does not attain the standard value and can be made arbitrarily small by appropriate choice of~$\varepsilon$. Although this example is rather special it does indicate a general property. The value of the parameter $\sigma$ which enters the definition of uniform domains is governed by the ratio $2/\varepsilon$ of the length of the tunnels to their width. In particular as $\varepsilon$ decreases the uniformity parameter increases. In addition the Ahlfors parameter $a$ governing the regularity of the boundary has a dependence on the width of the tunnels. Therefore one must expect that the Hardy constant depends both on the degree of uniformity and also the Ahlfors regularity parameter. Finally we note that the value $b_2$ of the Hardy constant has a particular significance. First since $b_\delta\leq (d-d_{\!H}+\delta-2)/2$ if $\delta>2-(d-d_{\!H})$ by Proposition~\ref{psa6.3} one must have $b_2\leq (d-d_{\!H})/2$. Moreover, $b_\delta=(d-d_{\!H}+\delta-2)/2$ if and only if $b_2=(d-d_{\!H})/2$. Secondly, the weighted Hardy inequality with $\delta=2$ states that \[ (\psi, H_2\psi)\geq b_{2,r}^{\,2}(\psi, \psi) \] for all $\psi\in C_c^2(\Gamma_{\!\!r})$ where $H_2\psi=-\mathop{\rm div}(d_\Gamma^{\,2}\,\nabla\psi)$. Thus \[ b_{2,r}^{\,2}\leq \inf\{(\psi, H_2\psi):\psi\in C_c^2(\Gamma_{\!\!r})\,,\; \|\psi\|_2=1\} \;. \] Therefore the supremum of the possible $ b_{2,r}^{\,2}$ is equal to the infimum of the spectrum of $H_2$ acting on $L_2(\Gamma_{\!\!r})$, and $b_2^{\,2}$ is the supremum over all small $r$ of the infima of the spectra of $H_2$ acting on $L_2(\Gamma_{\!\!r})$. In particular the bottom of the spectrum is $(d-d_{\!H})^2/4$ if and only if $b_\delta$ has the standard value.
In the non-standard case the bottom of the spectrum is strictly smaller than $(d-d_{\!H})^2/4$ and as the example shows it can be arbitrarily small. This observation might provide a practical method in special cases, such as convex domains, of confirming that the Hardy constant has the standard value. \section{Rellich inequalities} Theorems~\ref{tsa5.1} and \ref{tsa6.1} demonstrate that the weighted Hardy inequality is the essential ingredient in the derivation of self-adjointness of the degenerate elliptic operator $H$. In contrast in the earlier paper \cite{Rob15} a weighted Rellich inequality for functions supported near the boundary was the crucial element. Although the Rellich inequality is no longer needed for the self-adjointness problem it is of interest that it can nevertheless be derived in the broader setting of John-Ahlfors domains. First we derive a Hardy inequality for the quadratic form $h$ associated with $H$. Note that Lehrb{\"a}ck's result, Proposition~\ref{psa6.2}, establishes the weighted Hardy inequality (\ref{esa5.3}) on a boundary layer for John domains with Ahlfors regular boundaries so the following statements apply in this setting. \begin{lemma}\label{lsa7.1} Assume that the coefficients $C$ of $H$ satisfy the boundary condition $( \ref{esa5.1})$ and that the weighted Hardy inequality $(\ref{esa5.3})$ is valid on the boundary layer $\Gamma_{\!\!r}$ with $\delta> 2-(d-d_{\!H})$. Then there are $s\in\langle0,r\rangle$ and $a_{\delta, s}\in\langle0,b_{\delta,r}\rangle$ such that the Hardy inequality \[ h(\psi)\geq a_{\delta,s}^{\,2}\,\|c^{1/2}d_\Gamma^{\,\delta/2-1}\,\psi\|_2^2 \] is valid for all $\psi\in D(h)$ with $\mathop{\rm supp} \psi\subset \Gamma_{\!\!s}$. Moreover $a_\delta$, the supremum of the possible $a_{\delta,s}$, is equal to $b_\delta$, the Hardy constant of $(\ref{esa5.3})$.
\end{lemma} \mbox{\bf Proof} \hspace{5pt}\ $\;$ First it follows from the boundary condition (\ref{esa5.2}) that \[ h(\psi)\geq \sigma_{\!r}\|c^{1/2}d_\Gamma^{\,\delta/2}(\nabla\psi)\|_2^2 \] for all $\psi\in C_c^1(\Gamma_{\!\!r})$. Hence \begin{eqnarray*} h(\psi)^{1/2}&\geq& \sigma_{\!r}^{1/2}\Big(\|d_\Gamma^{\,\delta/2}\,\nabla(c^{1/2}\psi)\|_2-a_c\,\|c^{1/2}d_\Gamma^{\,\delta/2}\,\psi\|_2\Big)\\[5pt] &\geq& \sigma_{\!r}^{1/2}\Big(b_{\delta,s}\|c^{1/2}d_\Gamma^{\,\delta/2-1}\psi\|_2-sa_c\,\|c^{1/2}d_\Gamma^{\,\delta/2-1}\,\psi\|_2\Big) =(\sigma_{\!r}^{1/2}\,a_{\delta,s})\,\|c^{1/2}d_\Gamma^{\,\delta/2-1}\psi\|_2 \end{eqnarray*} where $a_{\delta,s}=b_{\delta,s}-sa_c$ and $a_c=2^{-1}\|(\nabla c)/c\|_\infty$ with $\|\cdot\|_\infty$ the $L_\infty(\Gamma_{\!\!r})$-norm. Note that the $b_{\delta,s}>0$ may be chosen such that $b_{\delta,s}$ converges upward to $b_\delta$ as $s\to0$. Therefore one may assume $a_{\delta,s}>0$ for all small $s>0$. Then $a_{\delta,s}\to b_\delta$ as $s\to0$ and since $\sigma_{\!r}\to1$ as $r\to0$ the proof is complete for $\psi\in C_c^1(\Gamma_{\!\!s})$. But the Hardy inequality extends by continuity to the $\psi\in D(h)$ with support in the boundary layer. $\Box$ The Hardy inequality of the lemma now implies the Rellich inequality by adaptation of the simpler part of the reasoning in Section~2 of \cite{Rob12}. \begin{prop}\label{psa7.2} Assume that the coefficients $C$ of $H$ satisfy the boundary condition $( \ref{esa5.1})$ and that the weighted Hardy inequality $(\ref{esa5.3})$ is valid on $\Gamma_{\!\!r}$ with $\delta> 2-(d-d_{\!H})$. If $b_\delta>(2-\delta)/2$ then there are $r>0$ and a map $s\in\langle0,r\rangle\mapsto B_{\delta,s}>0$ such that the Rellich inequality \begin{equation} \|H\psi\|_2^2\geq B_{\delta,s}^{\,2}\,\|c\,d_\Gamma^{\,\delta-2}\psi\|_2^2 \label{ensa1} \end{equation} is valid for all $\psi$ in the domain $ D(H)$ of the self-adjoint operator $H$ with $\mathop{\rm supp}\psi\subset\Gamma_{\!\!s}$.
Moreover, the Rellich constant $B_\delta$, the supremum of the possible $B_{\delta,s}$, is given by $B_\delta=b_\delta^{\,2}-((2-\delta)/2)^2$. \end{prop} \mbox{\bf Proof} \hspace{5pt}\ First it follows from Lemma~\ref{lsa7.1} that there is an $s>0$ such that the Hardy inequality $h(\psi)\geq \|\chi_s\psi\|_2^2$ is valid for all $\psi\in C_c^\infty(\Gamma_{\!\!s})$ with $\chi_s=a_{\delta,s}(c^{1/2}d_\Gamma^{\,\delta/2-1})$. Secondly, it follows from the boundary condition (\ref{esa5.2}) that \[ \Gamma_{\!\!c}(\chi_s)\leq \tau_s\,(c\,d_\Gamma^{\,\delta})\,|\nabla\chi_s|^2 \;. \] Then a straightforward calculation establishes the eikonal inequality \begin{equation} \Gamma_{\!\!c}(\chi_s)\leq {\gamma}_s\,\chi_s^4 \label{ensa2} \end{equation} where ${\gamma}_s=\tau_s\,((2-{\delta}_s)/(2\,a_{\delta,s}))^2$ with ${\delta}_s=\delta-sa_c$ and we have used the notation of the proof of Lemma~\ref{lsa7.1} (see \cite{Agm1}, Theorem~1.4(ii)). Thirdly, it follows from the basic identity (\ref{esa2.1}) that \[ (H\psi, \chi_s^2\psi)=h(\chi_s\psi)-(\psi,\Gamma_{\!\!c}(\chi_s)\psi) \] for all $\psi\in C_c^2(\Gamma_{\!\!s})$. Therefore \[ \|H\psi\|_2\,\|\chi_s^2\psi\|_2\geq (1-{\gamma}_s)\,\|\chi_s^2\psi\|_2^2 \] for all $\psi\in C_c^2(\Gamma_{\!\!s})$ by the Hardy inequality and (\ref{ensa2}). It follows immediately that if ${\gamma}_s<1$ then \[ \|H\psi\|_2^2\geq (1-{\gamma}_s)^2\,\|\chi_s^2\psi\|_2^2= a_{\delta,s}^{\,4}\,(1-{\gamma}_s)^2\,\|c\,d_\Gamma^{\,\delta-2}\psi\|_2^2 \;. \] After substituting the value of ${\chi}_s$ and rearranging one obtains the Rellich inequality (\ref{ensa1}) with $B_{\delta,s}=a_{\delta,s}^{\,2}\,(1-{\gamma}_s)=a_{\delta,s}^{\,2}-\tau_s\,((2-{\delta}_s)/2)^2>0$ for all $\psi\in C_c^2(\Gamma_{\!\!s})$. Now $\tau_s$ and $2-\delta_s$ converge downward to $1$ and $2-\delta$, respectively, as $s\to0$. Moreover $a_{\delta,s}$ converges upward to $b_\delta$ by Lemma~\ref{lsa7.1}.
Therefore $B_{\delta,s}$ converges upward to $ B_\delta=b_\delta^{\,2}-((2-{\delta})/2)^2$ as $s\to0$. Since $B_{\delta,s}>0$ for all small $s>0$ it follows that $B_\delta>0$. Hence $b_\delta>(2-\delta)/2$. Conversely, if $b_\delta>(2-\delta)/2$ then $B_\delta>0$ and $B_{\delta, s}>0$ or, equivalently, $\gamma_s<1$ for all small $s>0$. Finally if $b_\delta>(2-\delta)/2$ then $H$ is self-adjoint by Theorem~\ref{tsa5.1}. Therefore $C_c^2(\Omega)$ is a core of $H$. Then the Rellich inequality (\ref{ensa1}) extends from $C_c^2(\Gamma_{\!\!r})$ to all $\psi\in D(H)$ with support in $\Gamma_{\!\!r}$. This follows since $\psi$ can be approximated by a sequence $\psi_n\in C_c^\infty(\Omega)$ and then the $\psi_n$ can be replaced by $\xi\psi_n$ where $\xi$ is a $C^\infty$-function with support in ${\overline\Gamma_{\!\!r}}$ which is equal to $1$ on $\Gamma_{\!\!s}$ with $s<r$. The modified functions are in $D(H)$ and still approximate $\psi$ in the graph norm as a simple corollary of Proposition~2.1.II in \cite{Rob15}. $\Box$ \section{Summary and comments} The foregoing investigation developed from the earlier works \cite{RSi4} and \cite{LR} on Markov uniqueness. The Markov property is equivalent to the parabolic diffusion equation \[ \partial\varphi_t/\partial t+H\varphi_t=0 \] having a unique weak solution on $L_1(\Omega)$ (see \cite{Dav14} or \cite{RSi4}) whilst self-adjointness of $H$ is equivalent to uniqueness on $L_2(\Omega)$. The main conclusion of \cite{LR} for uniform domains with Ahlfors regular boundaries was the equivalence of $L_1$-uniqueness with the condition $\delta\geq 2-(d-d_{\!H})$. In particular $L_1$-uniqueness depends on the geometry of the domain only through the Hausdorff dimension of its boundary. The foregoing analysis indicates that the situation is more complicated for $L_2$-uniqueness.
Although we have only derived the sufficiency condition $b_\delta>(2-\delta)/2$ it is plausible that $b_\delta\geq(2-\delta)/2$ is both necessary and sufficient for self-adjointness, i.e.\ for $L_2$-uniqueness. There are not many examples for guidance. One simple case is $\Omega={\bf R}^d\backslash \{0\}$ and the operator $H=-\sum^d_{k=1}\partial_k(|x|^\delta\partial_k)$. If $\delta>2-d$ the Hardy inequality is valid with $b_\delta=(d+\delta-2)/2$ and the condition $b_\delta>(2-\delta)/2$ is equivalent to $\delta>2-d/2$. Since, however, the operator is rotationally invariant it can be established by taking radial coordinates and applying the classical Weyl limit-point/limit-circle theory that $H$ is essentially self-adjoint if and only if $\delta\geq 2-d/2$. A closely related result was established in \cite{Rob15} for $C^2$-domains. Theorem~\ref{tsa5.1} establishes that $\delta>3/2$ is sufficient for self-adjointness but Theorem~3.2 of \cite{Rob15} also established that the condition $\delta\geq 3/2$ is necessary. The latter result did, however, require some bounds on the derivatives of the coefficients of the operator. One thing that is clear is that the Hardy constant $b_\delta$ corresponding to the weighted Hardy inequality on boundary layers plays a significant role in determining self-adjointness. A similar conclusion was reached by Ward \cite{War, War1} in his analysis of Schr{\"o}dinger operators on domains by extension of the arguments of Nenciu and Nenciu \cite{NeN1}. In addition $b_\delta$ can have a quite complicated dependency on the geometry of the boundary. For domains with $C^2$-boundaries the Hardy constant only depends on the Hausdorff dimension of the boundary. It is given by $b_\delta=(d-d_{\!H}+\delta-2)/2$, an expression we have referred to as the standard value. This is also the case for the complement of lower dimensional $C^2$-domains or the complement of convex subsets.
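In the rotationally invariant example $\Omega={\bf R}^d\backslash\{0\}$ discussed above the value of the Hardy constant can also be seen directly. For radial $\psi$ the weighted Dirichlet form reduces, up to the surface measure of the unit sphere, to a one-dimensional integral, and the classical weighted Hardy inequality
\[
\int^\infty_0 dr\,r^{\mu}\,|\psi'(r)|^2\geq ((\mu-1)/2)^2\int^\infty_0 dr\,r^{\mu-2}\,|\psi(r)|^2
\]
for $\psi\in C_c^1(0,\infty)$ and $\mu>1$, applied with $\mu=\delta+d-1$, gives the constant $((d+\delta-2)/2)^2=b_\delta^{\,2}$. The constant is optimal, as one sees by testing with powers of $r$ cut off logarithmically.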
Example~\ref{exsa6.1} demonstrates, however, that for uniform domains with Ahlfors regular boundaries $b_\delta$, and consequently the self-adjointness property, can depend on the regularity parameters governing the boundary. It appears unlikely that one could calculate $b_\delta$ exactly in such situations. Nevertheless Propositions~\ref{psa6.3} and \ref{lsa6.2} do give general properties which allow one to gain considerable information about the diffusion. In particular $b_\delta$ is bounded from above by the standard value and the corresponding critical degeneracy for self-adjointness is larger than the value $2-(d-d_{\!H})/2$ of the standard case. This raises the question as to the minimal smoothness requirements on the domain and its boundary to ensure that $b_\delta$ attains the standard value. For example, is this the case for $C^{1,1}$-domains or, more generally, for Lipschitz domains? What is the situation for convex domains? Finally we have only considered symmetric diffusion operators but our arguments should extend to the non-symmetric operators with drift terms considered by Nenciu and Nenciu \cite{NeN}. Following these authors the non-symmetric operators on $L_2(\Omega)$ can be reformulated as symmetric operators on weighted spaces $L_2(\Omega\,; \rho)$. Then, however, one would need some control on the behaviour of the weights $\rho$ near the boundary as in Theorem~5.3 of \cite{NeN}. \end{document}
\begin{document} \title{Groups where free subgroups are abundant\thanks{2010 Mathematics Subject Classification numbers: 22A05, 20E05 (primary); 20B35, 54H11 (secondary).}} \author{Zachary Mesyan\thanks{This work was done while the author was supported by a Postdoctoral Fellowship from the Center for Advanced Studies in Mathematics at Ben Gurion University, a Vatat Fellowship from the Israeli Council for Higher Education, and ISF grant 888/07.}} \maketitle \begin{abstract} Given an infinite topological group $G$ and a cardinal $\kappa > 0$, we say that $G$ is \emph{almost $\kappa$-free} if the set of $\kappa$-tuples $(g_i)_{i \in \kappa} \in G^\kappa$ which freely generate free subgroups of $G$ is dense in $G^\kappa$. In this note we examine groups having this property and construct examples. For instance, we show that if $G$ is a non-discrete Hausdorff topological group that contains a dense free subgroup of rank $\kappa > 0$, then $G$ is almost $\kappa$-free. A consequence of this is that for any infinite set $\Omega$, the group of all permutations of $\Omega$ is almost $2^{|\Omega|}$-free. We also show that an infinite topological group is almost $\aleph_0$-free if and only if it is almost $n$-free for each positive integer $n$. This generalizes the work of Dixon and Gartside-Knight. \end{abstract} \section{Introduction} In 1990 Dixon~\cite{Dixon} showed that almost all finite sequences of permutations of a countably infinite set freely generate free subgroups. More precisely, letting $S = \mathrm{Sym}(\mathbb{Z}_+)$ denote the group of all permutations of the positive integers, Dixon showed that for each integer $n \geq 2$, the set $\{(g_1, \dots, g_n) \in S^n : \{g_1, \dots, g_n\} \ \text{freely generates a free subgroup of} \ S\}$ is comeagre in $S^n$, where $S$ is viewed as a topological group under the function topology.
(The notion of ``comeagre'' will be recalled in Section~\ref{generalities}, and the function topology will be discussed in Section~\ref{perm_section}.) Various authors have subsequently proved results of this form for other completely metrizable groups (e.g., see \cite{Bhattarcharjee} and \cite{GMR}). Then, Gartside and Knight~\cite{GK} undertook a more general study of the phenomenon in question. They defined a Polish group $G$ (i.e., one that is completely metrizable and separable) to be \emph{almost free} if the set $\{(g_1, \dots, g_n) \in G^n : \{g_1, \dots, g_n\} \ \text{freely generates a free subgroup of} \ G\}$ is comeagre in $G^n$ for each integer $n \geq 2$. They then gave various other characterizations of almost free groups and constructed new examples. Our goal in this note is to extend the above definition of ``almost free'' to groups that are not necessarily Polish (this is complicated by the fact that ``comeagre'' is not a particularly useful notion for topological groups that are not completely metrizable, or more generally, ones that are not Baire spaces), generalize various results about almost free Polish groups to this context, and give new examples of groups with this property. Thus, given any infinite topological group $G$ and a cardinal $\kappa > 0$, we shall say that $G$ is \emph{almost $\kappa$-free} if the set $\{(g_i)_{i \in \kappa} \in G^\kappa : \{g_i\}_{i \in \kappa} \ \text{freely generates a free subgroup of} \ G\}$ is dense in $G^\kappa$. It turns out that if $G$ is a Polish group, then it is almost free (in the sense of Gartside and Knight) if and only if it is almost $n$-free for each positive integer $n$ (Lemma~\ref{def-equiv}). Therefore, our definition indeed generalizes the existing one, while preserving the intuition behind it. Namely, it still describes groups where sequences of elements that freely generate free subgroups are ubiquitous.
We shall prove that an infinite topological group is almost free (i.e., almost $n$-free for each positive integer $n$) if and only if it is almost $\aleph_0$-free (Proposition~\ref{count-free}). We shall then show that if $G$ is a non-discrete Hausdorff topological group that contains a dense free subgroup of rank $\kappa > 0$, then $G$ is almost $\kappa$-free, as well as a more general version of this statement (Corollary~\ref{discrete-free2} and Theorem~\ref{discrete-free}). These are generalizations of parts of the main result of~\cite{GK}, though we employ very different methods (the proofs in~\cite{GK} are primarily topological, whereas ours are primarily algebraic). Using the results above, we shall deduce that for any infinite set $\Omega$, the group $\mathrm{Sym}(\Omega)$ of all permutations of $\Omega$ is almost $2^{|\Omega|}$-free (with respect to the function topology), generalizing the aforementioned theorem of Dixon (Corollary~\ref{perm}). In a similar vein, we shall show that many $\kappa$-fold products of finite permutation groups are almost $2^\kappa$-free (Theorem~\ref{prod-main}). We shall also show that for any cardinal $\kappa > 0$, a free group of rank $\kappa$ is almost $\kappa$-free, with respect to any nondiscrete topology (Proposition~\ref{free-groups}). It turns out that every dense subgroup of a connected semi-simple real Lie group is also almost free (Corollary~\ref{lie}). Finally, we shall show that the direct product of two topological groups is almost $\kappa$-free if and only if at least one of the two groups is itself almost $\kappa$-free (Proposition~\ref{prod-example}). All the results mentioned in this paragraph give rise to examples of almost ($\kappa$-)free groups that are not Polish. \section{Generalities and examples} \label{generalities} Throughout, $\mathbb{Z}_+$ will denote the set of positive integers. 
Given a set $\Gamma$ and a cardinal $\kappa$, we shall denote the direct product $\prod_\kappa \Gamma$ by $\Gamma^\kappa$. If $\Gamma$ is a topological space, then $\Gamma^\kappa$ will be understood to be a topological space under the product topology. Topological groups will not be assumed to be Hausdorff, unless it is specifically stated otherwise. \begin{definition} Given an infinite topological group $G$ and a cardinal $\kappa > 0$, let $G_\kappa$ denote the set $\, \{(g_i)_{i \in \kappa} \in G^\kappa : \{g_i\}_{i \in \kappa} \ \text{freely generates a free subgroup of} \ G\}$. We shall say that $G$ is \emph{almost $\kappa$-free} if $G_\kappa$ is dense in $G^\kappa$. Moreover, we shall say that $G$ is \emph{almost free} if $G$ is almost $n$-free for each $n \in\mathbb{Z}_+$. \end{definition} For the convenience of the reader, let us next recall some topological notions. A topological space is said to be \textit{Polish} if it is completely metrizable (i.e., its topology is induced by a metric with respect to which it is complete) and separable (i.e., it contains a countable dense subset). A subset of a topological space is called \textit{nowhere dense} if its closure contains no nonempty open subsets. Also, a subset is called \textit{comeagre} if it is the complement of a countable union of nowhere dense sets. In~\cite{GK} Gartside and Knight defined a Polish topological group $G$ to be \emph{almost free} if for each $n \geq 2$, $G_n$ is comeagre in $G^n$, and they defined $G$ to be \emph{almost countably free} if $G_{\aleph_0}$ is comeagre in $G^{\aleph_0}$. At first glance it might appear that our definition clashes with this one. However, it turns out that if $G$ is a Polish (or even just a completely metrizable) group, then it is almost free in the sense of Gartside and Knight if and only if it is almost free in our sense, and $G$ is almost countably free if and only if it is almost $\aleph_0$-free, as the next lemma shows.
Thus, from now on we shall use ``almost countably free" and ``almost $\aleph_0$-free" interchangeably. \begin{definition} Let $G$ be a group. Given a cardinal $\kappa > 0$, a set of variables $\, \{x_i\}_{i\in \kappa}$, and a free word $w = w(x_{i_1}, \dots, x_{i_n})$ $(i_1, \dots, i_n \in \kappa)$, let $C^\kappa (w)$ denote the set $\, \{(g_i)_{i \in \kappa} \in G^\kappa : w(g_{i_1}, \dots, g_{i_n}) = 1\}$. \end{definition} \begin{lemma} \label{def-equiv} Let $G$ be a completely metrizable topological group, and let $\kappa$ be a cardinal satisfying $1 \leq \kappa \leq \aleph_0$. Then $G_\kappa$ is dense in $G^\kappa$ if and only if $G_\kappa$ is comeagre in $G^\kappa$. \end{lemma} \begin{proof} Since $G$ is completely metrizable and $\kappa$ is a countable cardinal, $G^\kappa$ is completely metrizable as well. Hence, the ``if" direction follows from the Baire Category Theorem. For the converse, we begin by noting that $G^\kappa \setminus G_\kappa = \bigcup_w C^\kappa(w)$, where $w$ ranges over the nontrivial free words in the variables $\{x_i\}_{i \in \kappa}$. Since there are only countably many such words $w$, this is a union of countably many sets. Since the ``evaluation" map $G^\kappa \rightarrow G$ induced by a free word $w$ is continuous, $\{1\}$ is closed in $G$ (since $G$, being metrizable, is Hausdorff), and $C^\kappa(w) = w^{-1}(\{1\})$, we conclude that $C^\kappa(w)$ is closed in $G^\kappa$. Since $G_\kappa$ is dense in $G^\kappa$, and since $C^\kappa(w) \subseteq G^\kappa \setminus G_\kappa$, it follows that $C^\kappa(w)$ cannot contain a nonempty open subset, for any nontrivial free word $w$. Thus, $G^\kappa \setminus G_\kappa$ is the (countable) union of the nowhere dense sets $C^\kappa(w)$, and hence $G_\kappa$ is comeagre in $G^\kappa$. \end{proof} Let us pause to give some examples. As mentioned in the Introduction, Dixon~\cite{Dixon} showed that $\mathrm{Sym}(\mathbb{Z}_+)$ is almost free (see also~\cite{GMR}; more on this below). In~\cite{GK} Gartside and Knight gave a number of other examples of Polish almost free groups.
Specifically, all Polish oligomorphic permutation groups have this property (i.e., subgroups of the group of all permutations of an infinite set $\Omega$, $\mathrm{Sym}(\Omega)$, whose action on $\Omega^n$ has only finitely many orbits, for all $n \in \mathbb{Z}_+$). Every finite-dimensional connected non-solvable Lie group is almost free (see also~\cite{Epstein}), as is the absolute Galois group of the rational numbers (i.e., the group of all automorphisms of the algebraic closure of $\mathbb{Q}$). An inverse limit of wreath products of nontrivial groups is also almost free (see also~\cite{Bhattarcharjee}), although such a group is completely metrizable rather than Polish. The next result allows us to construct, from existing ones, new almost free groups that are not necessarily completely metrizable. \begin{proposition} \label{prod-example} Let $\kappa > 0$ be a cardinal, and let $G$ and $H$ be topological groups. Then the direct product $G \times H$ $($regarded as a topological group in the product topology$)$ is almost $\kappa$-free if and only if at least one of $G$ and $H$ is almost $\kappa$-free. \end{proposition} \begin{proof} Suppose that $G$ is almost $\kappa$-free, and let $U \subseteq (G \times H)^\kappa$ be a nonempty open set. We wish to show that $U \cap (G \times H)_\kappa \neq \emptyset$. After passing to a subset and reindexing, we may assume without loss of generality that $U$ is of the form $$(U_1 \times V_1) \times \ldots \times (U_n \times V_n) \times \prod_{\kappa \setminus \{1, \ldots, n\}} (G \times H),$$ where $U_1, \ldots, U_n \subseteq G$, $V_1, \ldots, V_n \subseteq H$ are some nonempty open sets. Since $G$ is almost $\kappa$-free, we can find a sequence $(g_i)_{i \in \kappa} \in G_\kappa$ such that $g_i \in U_i$ for $i \in \{1, \ldots, n\}$. Let $(h_i)_{i \in \kappa} \in H^\kappa$ be any sequence such that $h_i \in V_i$ for $i \in \{1, \ldots, n\}$.
Setting $f_i = (g_i, h_i) \in G\times H$ for all $i\in \kappa$, we have $(f_i)_{i \in \kappa} \in U$. Further, since $\{g_i\}_{i \in \kappa}$ freely generates a free subgroup, so does $\{f_i\}_{i \in \kappa}$. (If the $f_i$ satisfy some free word, then so do their projections onto $G$.) Hence $(f_i)_{i \in \kappa} \in U \cap (G \times H)_\kappa$, as desired. Similarly, if $H$ is almost $\kappa$-free, then so is $G \times H$. For the converse, suppose that neither $G$ nor $H$ is almost $\kappa$-free. Then we can find open subsets $U = \prod_{i \in \kappa} U_i \subseteq G^\kappa$ and $V = \prod_{i \in \kappa} V_i \subseteq H^\kappa$ such that $U \cap G_\kappa = \emptyset = V \cap H_\kappa$. Set $W = \prod_{i \in \kappa} (U_i \times V_i) \subseteq (G \times H)^\kappa,$ and let $(f_i)_{i \in \kappa} = ((g_i, h_i))_{i \in \kappa} \in W$ be an arbitrary element. We wish to show that $(f_i)_{i \in \kappa} \notin (G \times H)_\kappa$, which implies that $G \times H$ is not almost $\kappa$-free, since $W$ is a nonempty open subset of $(G \times H)^\kappa$. After reindexing, if necessary, we can find free words $w_G = w_G(x_1, \dots, x_n)$ and $w_H = w_H(x_1, \dots, x_n)$, for some $n \geq 1$, such that $w_G(g_1, \dots, g_n) = 1$ and $w_H(h_1, \dots, h_n) = 1$, by our assumptions on $U$ and $V$. It follows that $w_Gw_Hw_G^{-1}w_H^{-1}(f_1, \dots, f_n) = 1$. If $w_Gw_Hw_G^{-1}w_H^{-1}$ is not the trivial word, then this shows that $(f_i)_{i \in \kappa} \notin (G \times H)_\kappa$, as desired. If, however, $w_Gw_Hw_G^{-1}w_H^{-1}$ is the trivial word, then it must be the case that $w_G$ and $w_H$ are powers of the same element. (This follows from the fact that all subgroups of a free group are free, and hence every commutative subgroup is cyclic.) If $n = 1$, then it must be the case that there is a (nontrivial) free word of the form $x^m$ ($m \in \mathbb{Z}_+$) satisfied by $f_1$. 
Otherwise, assuming that $n > 1$, we can find a free word $w = w(x_1, \dots, x_n)$ that commutes with neither $w_G$ nor $w_H$. Also, we still have $ww_Hw^{-1}w_H^{-1}(h_1, \dots, h_n) = 1$. Therefore, replacing $w_H$ with $ww_Hw^{-1}w_H^{-1}$ in the expression $w_Gw_Hw_G^{-1}w_H^{-1}$, we obtain a nontrivial free word satisfied by $f_1, \dots, f_n$. \end{proof} Here is another way of constructing new almost free groups from old ones, which will be needed in the sequel. \begin{lemma} \label{dense-subgroup} Let $\kappa > 0$ be a cardinal, and let $G$ be a topological group containing a dense subgroup $H$ which is almost $\kappa$-free with respect to the induced topology. Then $G$ is itself almost $\kappa$-free. \end{lemma} \begin{proof} Let $U \subseteq G^\kappa$ be a nonempty open subset. Then $U \cap H^\kappa$ is a nonempty open subset of $H^\kappa$, and hence $\emptyset \neq (U \cap H^\kappa) \cap H_\kappa = U \cap H_\kappa \subseteq U \cap G_\kappa$. Therefore, $G$ is almost $\kappa$-free. \end{proof} Let us next record another useful basic fact. \begin{lemma} \label{down} Let $G$ be a topological group, and let $\, 0 < \lambda < \kappa$ be cardinals. If $G$ is almost $\kappa$-free, then it is almost $\lambda$-free. \end{lemma} \begin{proof} Writing $\kappa = \lambda + \nu$, for some cardinal $\nu$, we may identify $G^\kappa$ with $G^\lambda \times G^\nu$. Let $U \subseteq G^\lambda$ be a nonempty open subset. Then $V = U \times G^\nu \subseteq G^\kappa$ is an open subset as well. Since $G$ is almost $\kappa$-free, $G_\kappa \cap V \neq \emptyset$. Let $\pi_\lambda : G^\kappa \rightarrow G^\lambda$ be the natural projection with respect to the product decomposition $G^\kappa = G^\lambda \times G^\nu$. Then $\pi_\lambda(G_\kappa) \cap U = \pi_\lambda(G_\kappa) \cap \pi_\lambda (V) \neq \emptyset$. Now, $\pi_\lambda(G_\kappa) \subseteq G_\lambda$, since if a sequence of elements freely generates a free group, then so does any subsequence. 
Hence, $G_\lambda \cap U \neq \emptyset$, showing that $G$ is almost $\lambda$-free. \end{proof} The following is a generalization of a result of Gartside and Knight in~\cite{GK} about Polish groups. \begin{proposition} \label{count-free} Let $G$ be a topological group. Then $G$ is almost free if and only if $G$ is almost countably free. \end{proposition} \begin{proof} The ``if" direction follows immediately from the previous lemma. For the converse, let $U \subseteq G^{\aleph_0}$ be a nonempty open set. After passing to a subset and reindexing, we may assume without loss of generality that $U$ is of the form $U_1 \times \ldots \times U_n \times G \times G \times \ldots$, where $U_1, \ldots, U_n \subseteq G$ are some nonempty open sets. Since $G$ is almost free, we can find some $(g_1, \ldots, g_{n+2}) \in U_1 \times \ldots \times U_n \times G \times G$ such that $g_1, \ldots, g_{n+2}$ freely generate a free subgroup of $G$. Then, in particular, $g_{n+1}, g_{n+2}$ also freely generate a free subgroup of $G$. It is well known that one can embed a free group on $\aleph_0$ generators into a free group on two generators. Thus, we can find $\{h_1, h_2, \ldots\} \subseteq \langle g_{n+1}, g_{n+2} \rangle$ (the subgroup generated by $g_{n+1}$ and $g_{n+2}$) that freely generates a free subgroup. Then $\{g_1, \ldots, g_n\} \cup \{h_1, h_2, \ldots \}$ also freely generates a free subgroup of $G$. (If there is a free word $w(x_1, \dots, x_m)$ and elements $k_1, \ldots, k_m \in \{g_1, \ldots, g_n\} \cup \{h_1, h_2, \ldots \}$ such that $w(k_1, \dots, k_m) = 1$, then there must be a free word $\bar{w}(x_1, \dots, x_{n+2})$ such that $\bar{w}(g_1, \dots, g_{n+2}) = 1$.) Thus $(g_1, \ldots, g_n, h_1, h_2, \ldots) \in G_{\aleph_0} \cap U$, showing that $G$ is almost countably free. \end{proof} When studying almost $\kappa$-freeness of groups, it is harmless to focus on ones that are not discrete, as the next observation shows. 
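The well-known embedding invoked in the proof above can be made explicit; the following standard choice of free generators (one of many possible ones, recorded here only for illustration) already lies in $\langle g_{n+1}, g_{n+2} \rangle$:

```latex
% Conjugates of g_{n+2} by powers of g_{n+1} freely generate
% a free subgroup of infinite rank:
\[
  h_i \;=\; g_{n+1}^{\, i}\, g_{n+2}\, g_{n+1}^{-i} \qquad (i \in \mathbb{Z}_+).
\]
```

Indeed, when a nontrivial reduced word in the $h_i$ is rewritten in terms of $g_{n+1}$ and $g_{n+2}$, no occurrence of $g_{n+2}^{\pm 1}$ is cancelled (adjacent $h$-letters with the same index must carry same-sign exponents, and distinct indices leave a nonzero power of $g_{n+1}$ between the occurrences), so the word cannot equal $1$.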
\begin{lemma} \label{discrete} Let $G$ be a discrete topological group. Then $G$ is not almost $\kappa$-free for any cardinal $\kappa > 0$. \end{lemma} \begin{proof} It suffices to note that $\{1\} \times \prod_{\kappa \setminus \{1\}} G \subseteq G^\kappa \setminus G_\kappa$ is an open set, for any cardinal $\kappa > 0$. \end{proof} \section{Dense free subgroups} The main goal of this section is to prove that, under mild assumptions, a group containing a dense free subgroup of rank $\kappa > 0$ is almost $\kappa$-free (Theorem~\ref{discrete-free} and Corollary~\ref{discrete-free2}). We begin with several lemmas. \begin{lemma} \label{non_comm} Let $F$ be a non-discrete free group of rank $\geq 2$, and let $U \subseteq F$ be an open neighborhood of the identity. Then $U$ is not commutative. Hence, one can find two elements of $U$ that freely generate a free subgroup. \end{lemma} \begin{proof} Suppose that $U$ is commutative, and let $S \subseteq F$ be the subgroup of all elements that commute with all elements of $U$. Since $F$ is free and non-discrete, there is an element $x \in F \setminus \{1\}$ such that $U \subseteq S = \langle x \rangle$. (As in the proof of Proposition~\ref{prod-example}, this follows from the fact that all subgroups of $F$ are free, and hence every commutative subgroup is cyclic.) Since $F$ has rank $\geq 2$, we can find an element $y \in F$ such that $x$ and $y$ freely generate a free subgroup. Now, since $x \cdot 1 \cdot x^{-1}, y \cdot 1 \cdot y^{-1} \in U$, we can find a neighborhood of the identity $W$ such that $xWx^{-1}, yWy^{-1} \subseteq U$. It follows that $W \subseteq \langle x \rangle$. (For, if $g \in W$ does not commute with an element of $U$, then neither does $xgx^{-1} \in U$.) Since $F$ is non-discrete, $W \neq \{1\}$, and hence $yWy^{-1}$ must contain an element that does not commute with $x$ (since $x$ and $y$ freely generate a free subgroup), contradicting the assumption that $U$ is commutative. 
Therefore, $U$ cannot be commutative. The final claim follows from the aforementioned fact that all subgroups of $F$ are free, and hence if two elements of $F$ generate a noncommutative subgroup, then it must have rank $2$. \end{proof} \begin{lemma} \label{prod-neigh} Let $F$ be a topological group, let $U_1, \ldots, U_n \subseteq F$ be nonempty open subsets, let $g_1 \in U_1, \ldots, g_n \in U_n$ be any elements, and let $m \in \mathbb{Z}_+$. Then there is an open neighborhood $U$ of the identity such that for all $\, 0 \leq j \leq m$ and $\, 1 \leq i \leq n$, we have $U^{m-j}g_i U^j \subseteq U_i$. \end{lemma} \begin{proof} For each $0 \leq j \leq m$, let $w_j = w_j(x_1, \ldots, x_m, y)$ be the word $x_1 \cdots x_{m-j} \, y \, x_{m-j+1} \cdots x_m$. Since the ``evaluation" map $F^{m+1} \rightarrow F$ induced by the word $w_j$ is continuous, and since $w_j(1, \ldots, 1, g_i) = g_i \in U_i$, for each $0 \leq j \leq m$ and $1 \leq i \leq n$ there is an open neighborhood $U_{ij}$ of the identity such that $U_{ij}^{m-j}g_i U_{ij}^j \subseteq U_i$. Then the open neighborhood of the identity $U = \bigcap_{i,j} U_{ij}$ has the desired properties. \end{proof} \begin{lemma} \label{fin-case} Let $F$ be a non-discrete free group of rank $\kappa \geq 2$. Then $F$ is almost free. \end{lemma} \begin{proof} Let $\{x_i\}_{i \in \kappa}$ be a set of free generators for $F$. Given an element $g \in F$, let $L(g)$ denote the length of $g$ as a reduced monoid word in $\{x_i\}_{i \in \kappa} \cup \{x_i^{-1}\}_{i \in \kappa}$. Also, let $n$ be a positive integer, and let $U_1, \ldots, U_n \subseteq F$ be nonempty open subsets. We shall construct elements $f_1 \in U_1, \ldots, f_n \in U_n$ that freely generate a free subgroup. We begin by picking arbitrary elements $g_1 \in U_1, \ldots, g_n \in U_n$. Let $m$ be an integer greater than $2L(g_i) + 2n + 2$, for all $1 \leq i \leq n$, and let $U$ be a neighborhood of the identity as in Lemma~\ref{prod-neigh}. By Lemma~\ref{non_comm}, there are elements $y,z \in U$ that freely generate a free subgroup.
For each $1 \leq i \leq n$, we can find elements $h_{i,1}, h_{i,2} \in \langle y, z \rangle$ such that $f_i = z^{-i}yh_{i,1}g_ih_{i,2}yz^i$, when reduced, yields a word of the form $z^{-i}ywyz^i$, where $w$ is reduced as a word in the $x_i$, and where no additional reductions are possible other than those resulting from ones that occur in $z^{-i}y$ or $yz^i$ (as words in the $x_i$). Moreover, we may choose the $h_{i,1}$ and $h_{i,2}$ so that each has length no greater than $L(g_i)$ as a monoid word in $\{y, y^{-1}, z, z^{-1}\}$. (For, $g_i$ cannot cancel more than $L(g_i)$ copies of $y$ and $z$ on either side.) Hence, by our choice of $U$, for each $1 \leq i \leq n$, we have $f_i \in U_i$. Now, let $z^{i_1}y^{i_2}ry^{i_3}z^{i_4}$ and $z^{i_5}y^{i_6}sy^{i_7}z^{i_8}$ be two reduced group words in the $x_i$ (except for possible unresolved reductions in sub-words of the form $z^jy^k$ at the beginning and sub-words of the form $y^kz^j$ at the end), for some $r,s \in F$ and nonzero integers $i_1, \ldots, i_8$, where $i_4 \neq - i_5$. Then $$(z^{i_1}y^{i_2}ry^{i_3}z^{i_4})(z^{i_5}y^{i_6}sy^{i_7}z^{i_8}) = z^{i_1}y^{i_2}(ry^{i_3}z^{i_4+i_5}y^{i_6}s)y^{i_7}z^{i_8},$$ when reduced, also yields a reduced group word (with the same caveat) of the same form. It follows that if $w = w(t_1, \ldots, t_n)$ is a nontrivial reduced free group word, then $w(f_1, \ldots, f_n) \neq 1$, and hence $f_1, \ldots, f_n$ freely generate a free subgroup. \end{proof} We are now in a position to characterize when a free group is almost $\kappa$-free. \begin{proposition} \label{free-groups} Let $F$ be a free topological group of rank $\kappa$, for some cardinal $\kappa > 0$. Then the following are equivalent. \begin{enumerate} \item[$(1)$] The topology on $F$ is non-discrete. \item[$(2)$] $F$ is almost $\kappa$-free. \end{enumerate} Moreover, in the above situation, if $\, 2 \leq \kappa \leq \aleph_0$, then $F$ is almost countably free. \end{proposition} \begin{proof} By Lemma~\ref{discrete}, (2) implies (1).
Let us show that (1) implies (2) and the final claim. First, suppose that $\kappa = 1$. Since $F$ is non-discrete, for any nonempty open subset $U \subseteq F$, we can find an element $f \in U\setminus \{1\}$, and $f$ freely generates a free subgroup. Hence $F$ is almost $1$-free. Next, suppose that $2 \leq \kappa \leq \aleph_0$. By Lemma~\ref{fin-case}, $F$ is almost free, and therefore, Proposition~\ref{count-free} implies that $F$ is almost countably free. In particular, $F$ is almost $\kappa$-free, by Lemma~\ref{down}. Finally, suppose that $\aleph_0 \leq \kappa$, and let $\{x_i\}_{i \in \kappa}$ be a set of free generators for $F$. Also, let $U \subseteq F^\kappa$ be a nonempty open subset. We wish to show that $U \cap F_\kappa \neq \emptyset$. After passing to a subset and reindexing, we may assume without loss of generality that $U$ is of the form $U_1 \times \ldots \times U_n \times \prod_{\kappa \setminus \{1, \ldots, n\}} F$, where $U_1, \ldots, U_n \subseteq F$ are some nonempty open sets. By Lemma~\ref{fin-case}, we can find elements $f_1 \in U_1, \ldots, f_n \in U_n$ that freely generate a free subgroup of $F$. As words in $\{x_i\}_{i \in \kappa}$, the elements $f_1, \ldots, f_n$ involve only finitely many of the $x_i$, say, $\{x_{i_1}, \ldots, x_{i_m}\}$ $(i_1, \ldots, i_m \in \kappa)$. Let $\Lambda = \kappa \setminus \{i_1, \ldots, i_m\}$. Then $\{f_1, \ldots, f_n\} \cup \{x_i\}_{i \in \Lambda}$ freely generates a free subgroup of rank $\kappa$. Relabeling $\{x_i\}_{i \in \Lambda}$ as $\{f_i\}_{i \in \kappa, \, n<i}$ (which is possible, since $|\Lambda| = \kappa$), we therefore have $(f_i)_{i\in \kappa} \in U \cap F_\kappa$, as desired. \end{proof} We recall that no countable non-discrete group $G$ can be completely metrizable. For, unless there is an element $g \in G$ that is isolated, $G$ can be written as a countable union of nowhere dense sets, namely the singleton sets, which contradicts the Baire Category Theorem. On the other hand, if some $g \in G$ is isolated, then $G$ must be discrete.
Hence, in particular, Proposition~\ref{free-groups} gives additional examples of non-Polish almost $\kappa$-free groups when $\kappa$ is countable. We are ready for our main result. \begin{theorem} \label{discrete-free} Let $G$ be a topological group, and let $\kappa > 0$ be a cardinal. Suppose that $G$ contains a free subgroup $F$ of rank $\kappa$, such that $F\setminus \{1\}$ is dense. Then $G$ is almost $\kappa$-free. Moreover, if $\, 2 \leq \kappa \leq \aleph_0$, then $G$ is almost countably free. \end{theorem} \begin{proof} Our hypotheses imply that the induced topology on $F$ is non-discrete. The result now follows from Proposition~\ref{free-groups} and Lemma~\ref{dense-subgroup}. \end{proof} The following consequence of Theorem~\ref{discrete-free} generalizes a result of Gartside and Knight in~\cite{GK} about Polish groups. \begin{corollary} \label{discrete-free2} Let $G$ be a non-discrete Hausdorff topological group, and let $\kappa > 0$ be a cardinal. Suppose that $G$ contains a dense free subgroup $F$ of rank $\kappa$. Then $G$ is almost $\kappa$-free. Moreover, if $\, 2 \leq \kappa \leq \aleph_0$, then $G$ is almost countably free. \end{corollary} \begin{proof} Since $G$ is non-discrete, it contains a point that is not isolated. By translation-invariance of the topology, it follows that $1$ is not isolated. Since $G$ is Hausdorff, this implies that every nonempty open set contains a nonempty open subset that does not contain the identity. Thus, if $F \subseteq G$ is dense, then $F \setminus \{1\}$ must be dense as well. The result now follows from Theorem~\ref{discrete-free}. \end{proof} By Lemma~\ref{discrete}, the non-discreteness assumption in the above corollary is necessary. Using similar reasoning, it is possible to construct an example showing that the Hausdorff condition is necessary as well.
For, let $F$ be a discrete free group of rank $\kappa > 0$, and let $H$ be an indiscrete group which contains no nontrivial free subgroups (e.g., a group where all elements have finite order). Then $F \times \{1\}$ is a dense free subgroup of $G = F \times H$, which is clearly a non-discrete non-Hausdorff group (if $H \neq \{1\}$). However, $G$ cannot be almost $\kappa$-free, since $(\{1\} \times H) \times \prod_{\kappa \setminus \{1\}} (F \times H)$ is an open subset of $G^\kappa \setminus G_\kappa$. Gartside and Knight~\cite{GK} showed that if $G$ is a non-discrete Polish topological group that is almost countably free, then it contains a dense free subgroup (of rank $\aleph_0$). Thus, one might wonder whether more general converses to Theorem~\ref{discrete-free} and Corollary~\ref{discrete-free2} hold. Let us note that it is possible to construct groups that are almost $\kappa$-free but have no dense free subgroups at all. For instance, let $\kappa < \lambda$ be infinite cardinals, let $F$ be a free group of rank $\kappa$ (with any non-discrete topology; e.g., the profinite topology), and let $A$ be a discrete abelian group of cardinality $\lambda$. Then, by Propositions~\ref{free-groups} and~\ref{prod-example}, $F \times A$ is almost $\kappa$-free. However, since $A$ is abelian, all free subgroups of $F \times A$ must have cardinality $\leq \kappa$, and hence cannot be dense. One can even construct a similar example of a non-discrete Hausdorff group that is almost $\kappa$-free and has a dense subgroup on $\kappa$ generators but no dense free subgroups. For instance, let $F$ be a non-discrete free group of rank $\aleph_0$, and let $G = \bigoplus_{\aleph_0} F$, the subgroup of $\prod_{\aleph_0} F$ consisting of elements having only finitely many nonidentity components. We regard $G$ as a topological group under the topology induced by the product topology on $\prod_{\aleph_0} F$ (which makes $G$ Hausdorff if $F$ is).
Again, by Proposition~\ref{prod-example}, $G$ is almost countably free, and as a countable group it clearly has a dense subgroup on $\aleph_0$ generators. However, for each $j \in \aleph_0$, any dense subgroup of $G$ must contain an element of the form $g_j = (f_i)_{i \in \aleph_0}$, where $f_i = 1$ for all $i \neq j$, and $f_j \neq 1$. Now, any two such elements $g_j$ and $g_k$ commute, and they cannot be powers of the same element of $G$ unless $j = k$ (since only $1 \in F$ has finite order). Hence, no dense subgroup of $G$ can be free. Thus, the most obvious attempts to relax the complete metrizability assumption in the aforementioned converse to Corollary~\ref{discrete-free2} fail. In~\cite{BG} Breuillard and Gelander showed that every dense subgroup of a connected semi-simple real Lie group $G$ contains a free subgroup of rank $2$ that is dense (in $G$). Hence, we have the following consequence of Corollary~\ref{discrete-free2}. \begin{corollary} \label{lie} Every dense subgroup of a connected semi-simple real Lie group is almost countably free. \end{corollary} Many such groups are not Polish; for instance, the countable ones. We note in passing that the main results of this section have a flavor similar to that of the following theorem from~\cite{Gelander}. \begin{theorem}[Gelander] Let $G$ be a connected compact Lie group, let $n \geq 3$ be an integer, and let $H \subseteq G$ be an $(n-1)$-generated dense subgroup. Then $\, \{(h_1, \dots, h_n) \in H^n : \langle h_1, \dots, h_n \rangle = H \}$ is dense in $G^n$. \end{theorem} \section{Permutations} \label{perm_section} Let $\Omega$ be an infinite set. Regarding $\Omega$ as a discrete topological space, the monoid $\mathrm{Self}(\Omega)$ of all set maps from $\Omega$ to itself becomes a topological space under the \emph{function topology}. 
A subbasis of open sets in this topology is given by the sets $\{f \in \mathrm{Self}(\Omega) : f(\alpha) = \beta\}$ $(\alpha,\beta \in \Omega).$ It is easy to see that composition of maps is continuous in this topology. The group $\mathrm{Sym}(\Omega)$ of all permutations of $\Omega$ inherits from $\mathrm{Self}(\Omega)$ the function topology. Moreover, when restricted to $\mathrm{Sym}(\Omega)$, this topology makes $( )^{-1}$ continuous, since $$\{f \in \mathrm{Sym}(\Omega) : f(\alpha) = \beta\}^{-1} = \{f \in \mathrm{Sym}(\Omega) : f(\beta) = \alpha\},$$ turning $\mathrm{Sym}(\Omega)$ into a topological group, which can easily be seen to be Hausdorff. In the case where $\Omega = \mathbb{Z}_+$, one can put a metric $d$ on $\mathrm{Sym}(\Omega)$ which induces the function topology. Specifically, given $f, g \in \mathrm{Sym}(\mathbb{Z}_+)$, let $d(f,g) = 0$ if $f = g$, and otherwise let $d(f,g) = 2^{-n}$, where $n \in \mathbb{Z}_+$ is the least number such that either $f(n) \neq g(n)$ or $f^{-1}(n) \neq g^{-1}(n)$. A subbasis of open sets in the topology induced by this metric on $\mathrm{Sym}(\mathbb{Z}_+)$ consists of sets of the form $$(\ast) \ \ \bigcap_{i = 1}^n \{f \in \mathrm{Sym}(\mathbb{Z}_+) : f(i) = \alpha_i \ \text{and} \ f^{-1}(i) = \beta_i\} \ (n, \alpha_1, \ldots, \alpha_n, \beta_1, \ldots, \beta_n \in \mathbb{Z}_+).$$ We note that, with $\alpha_i$ and $\beta_i$ as above, both $\{f \in \mathrm{Sym}(\mathbb{Z}_+) : f(i) = \alpha_i\}$ and $\{f \in \mathrm{Sym}(\mathbb{Z}_+) : i = f(\beta_i)\}$ are open sets with respect to the function topology, as defined in the previous paragraph, and hence so is $$\{f \in \mathrm{Sym}(\mathbb{Z}_+) : f(i) = \alpha_i \ \text{and} \ f^{-1}(i) = \beta_i\}$$ $$= \{f \in \mathrm{Sym}(\mathbb{Z}_+) : f(i) = \alpha_i\} \cap \{f \in \mathrm{Sym}(\mathbb{Z}_+) : i = f(\beta_i)\}.$$ Thus, a set open in the topology induced by $d$ is open in the function topology. 
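As a concrete illustration of the metric $d$, the following sketch encodes a finitely supported permutation of $\mathbb{Z}_+$ as a dict recording only its non-fixed points; the encoding and helper names are illustrative choices of mine, not taken from the text.

```python
def apply(p, x):
    """Apply a permutation stored as {point: image, ...}; absent points are fixed."""
    return p.get(x, x)

def inverse(p):
    """Invert a finitely supported permutation stored as a dict."""
    return {v: k for k, v in p.items()}

def d(f, g):
    """d(f, g) = 2^{-n}, where n is least with f(n) != g(n) or f^{-1}(n) != g^{-1}(n)."""
    if f == g:                       # dicts in canonical form (no stored fixed points)
        return 0.0
    fi, gi = inverse(f), inverse(g)
    n = 1
    while apply(f, n) == apply(g, n) and apply(fi, n) == apply(gi, n):
        n += 1
    return 2.0 ** (-n)

# The transposition (1 2) differs from the identity already at n = 1:
print(d({1: 2, 2: 1}, {}))                     # 0.5
# The 3-cycle (1 2 3) and the transposition (1 2) agree at f(1),
# but their inverses disagree at 1, so the inverse clause matters:
print(d({1: 2, 2: 3, 3: 1}, {1: 2, 2: 1}))     # 0.5
```

In this encoding, the open ball of radius $2^{-n}$ around a permutation $f$ is precisely the set of the form ($\ast$) determined by the values $f(i)$ and $f^{-1}(i)$ for $i \leq n$.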
Conversely, a set of the form $\{f \in \mathrm{Sym}(\mathbb{Z}_+) : f(n) = m\}$ can be expressed as the union of all the sets of the form ($\ast$) where $\alpha_n = m$. Therefore, a set that is open in the function topology is open in the topology induced by $d$ as well, showing that the two are in fact the same topology. We wish to show that for any infinite set $\Omega$, $\mathrm{Sym}(\Omega)$ is almost $2^{|\Omega|}$-free, with respect to the topology described above. This task can be accomplished easily if we rely on the following result from~\cite{Bruijn}. \begin{theorem}[de Bruijn] \label{debruijn} Let $\Omega$ be an infinite set. Then $\, \mathrm{Sym}(\Omega)$ contains a free subgroup of rank $\, 2^{|\Omega|}.$ Moreover, this free subgroup can be taken to be dense $($in the function topology$)$. \end{theorem} The last claim in the above theorem does not actually appear in~\cite{Bruijn}, but it does follow from de Bruijn's proof, as noted by Hodges. See~\cite{MS} for a discussion of this, along with a model-theoretic generalization of the result. Applying Corollary~\ref{discrete-free2} to Theorem~\ref{debruijn}, we obtain the following. \begin{corollary} \label{perm} Let $\Omega$ be an infinite set. Then $\, \mathrm{Sym}(\Omega)$ is almost $\, 2^{|\Omega|}$-free, with respect to the function topology. \end{corollary} Corollary~\ref{perm} is a generalization of the result of Dixon~\cite{Dixon} mentioned in the Introduction, showing that $\mathrm{Sym}(\mathbb{Z}_+)$ is almost free with respect to the function topology. (See Lemma~\ref{down}. See also~\cite{BR} for generalizations of Dixon's result and Theorem~\ref{debruijn} in a different direction, to automorphism groups of relatively free algebras.) We note that $\mathrm{Sym}(\mathbb{Z}_+)$ is complete with respect to the metric defined above, and it is not hard to see that it is separable as well.
Thus, with respect to the function topology, $\mathrm{Sym}(\Omega)$ is Polish whenever $|\Omega| = \aleph_0$. This is not the case, however, if $|\Omega| > \aleph_0$. For, in this situation, $\mathrm{Sym}(\Omega)$ is not separable, and it is not metrizable, since it is not first-countable. (A topological space is said to be \emph{first-countable} if each point has a countable base for its system of neighborhoods. Every metric space is first-countable, since given a point $p$, the open balls centered at $p$ of radii $1/n$, for $n \in \mathbb{Z}_+$, form such a countable base for this point.) As mentioned above, in~\cite{MS} Melles and Shelah proved a more general version of Theorem~\ref{debruijn}, showing that for certain models $M,$ the automorphism group $\mathrm{Aut}(M)$ of $M$ contains a free subgroup of rank $2^{|M|}$ that is dense (in the function topology). Hence, Corollary~\ref{perm} can be generalized accordingly. Let us next construct a different sort of almost $\kappa$-free group of permutations. For each $m \in \mathbb{Z}_+$, let $S_m = \mathrm{Sym}(\{1, 2, \ldots, m\})$ denote the group of all permutations of the set $\{1, 2, \ldots, m\}$. Let $\kappa$ be an infinite cardinal, let $\varphi : \kappa \rightarrow \mathbb{Z}_+$ be a function, and let $G = \prod_{i \in \kappa} S_{\varphi(i)}$. Endowing each $S_{\varphi(i)}$ with the discrete topology, $G$ becomes a Hausdorff topological group in the product topology. As with $\mathrm{Sym}(\Omega)$ above, $G$ cannot be Polish if $\kappa > \aleph_0$. We shall show that $G$ is almost $2^\kappa$-free with respect to this topology, assuming that for each $l \in \mathbb{Z}_+$, $|\{i : \varphi(i) \geq l\}| = \kappa$. The argument is divided into several steps. \begin{lemma} \label{prod1} Let $w = w(x_1, \ldots, x_n)$ be a nontrivial free word. Then there exist $m \in \mathbb{Z}_+$ and permutations $f_1, \ldots, f_n \in S_m$ such that $w(f_1, \ldots, f_n) \neq 1$.
\end{lemma} \begin{proof} Since $\mathrm{Sym}(\mathbb{Z}_+)$ contains a free group of rank $2^{\aleph_0}$, we can find $g_1, \ldots, g_n \in \mathrm{Sym}(\mathbb{Z}_+)$ such that $w(g_1, \ldots, g_n) \neq 1$. (Actually, here we only need the fact that $\mathrm{Sym}(\mathbb{Z}_+)$ contains a free group of rank $n$, and this is easy to see without reference to Theorem~\ref{debruijn}. For, every countable group acts faithfully on itself by left translation and hence can be embedded in $\mathrm{Sym}(\mathbb{Z}_+)$. This, in particular, holds for countable free groups.) Thus, for some distinct $i, j \in \mathbb{Z}_+$ we have $w(g_1, \ldots, g_n)(i) = j$. Write $w(g_1, \ldots, g_n) = h_1 \ldots h_k$, where $h_1, \ldots, h_k \in \{g_1, \ldots, g_n\} \cup \{g_1^{-1}, \ldots, g_n^{-1}\}$. Let $\Gamma = \{i, h_k(i), h_{k-1}h_k(i), \ldots, h_1 \dots h_k(i)\}$, and set $$\Delta = \Gamma \cup g_1(\Gamma) \cup \dots \cup g_n(\Gamma) \cup g_1^{-1}(\Gamma) \cup \dots \cup g_n^{-1}(\Gamma).$$ Let $m$ be the maximal element of the finite set $\Delta \subseteq \mathbb{Z}_+$. Then we can find $f_1, \ldots, f_n \in S_m$ such that for each $r \in \{1, \ldots, n\}$, $g_r$ agrees with $f_r$ on $\Gamma$, and $g_r^{-1}$ agrees with $f_r^{-1}$ on $\Gamma$. In particular, we have $w(f_1, \ldots, f_n)(i) = j \neq i$, as desired. \end{proof} \begin{proposition} \label{prod2} Let $\kappa$ be an infinite cardinal, let $\varphi : \kappa \rightarrow \mathbb{Z}_+$ be a function, and let $G = \prod_{i \in \kappa} S_{\varphi(i)}$. Suppose that $\, |\{i : \varphi(i) \geq l\}| = \kappa$ for each $l \in \mathbb{Z}_+$. Then $G$ contains a free subgroup of rank $\kappa$. \end{proposition} \begin{proof} First, suppose that $\kappa = \aleph_0$, and let us show that for all $n \in \mathbb{Z}_+$, $G$ contains a free subgroup of rank $n$. Let $\{w_i\}_{i \in \aleph_0}$ be an enumeration of all the free words in the $n$ letters $x_1, \ldots, x_n$.
By Lemma~\ref{prod1}, to each $i \in \aleph_0$ we can assign an element $m_i \in \aleph_0$ such that there exist permutations $f_{i1}, \ldots, f_{in} \in S_{\varphi(m_i)}$ for which $w_i(f_{i1}, \ldots, f_{in}) \neq 1$. Moreover, by our hypotheses on $G$, we may assume that $m_i \neq m_j$ for $i \neq j$. Now, let $g_1, \ldots, g_n \in G$ be any permutations such that the natural projection of $g_j$ on $S_{\varphi(m_i)}$ is $f_{ij}$ ($1 \leq j \leq n$, $i \in \aleph_0$). Then for all $i \in \aleph_0$ we have $w_i(g_1, \ldots, g_n) \neq 1$, since the natural projection of $w_i(g_1, \ldots, g_n)$ on $S_{\varphi(m_i)}$ is $w_i(f_{i1}, \ldots, f_{in})$. Hence, $g_1, \ldots, g_n$ freely generate a free group. For the general case, upon relabeling (and using the axiom of choice), we can identify $G$ with $\prod_{i \in \kappa} H_i$, where $H_i = \prod_{j \in \aleph_0} S_{\varphi_i(j)}$, each $\varphi_i : \aleph_0 \rightarrow \mathbb{Z}_+$ ($i \in \kappa$) is a function, and we have $|\{j : \varphi_i(j) \geq l\}| = \aleph_0$ for all $l \in \mathbb{Z}_+$ and $i \in \kappa$. We shall construct a set $\{f_i\}_{i \in \kappa} \subseteq G$ that freely generates a free subgroup. Let $\{U_i\}_{i \in \kappa}$ be a well-ordering of all the finite subsets of $\{f_i\}_{i \in \kappa}$ (for now, treated simply as a set of symbols), and write $U_i = \{f_{i1}, \ldots, f_{in_i}\}$, where $n_i = |U_i|$. Now, we define the permutations $f_i$ so that, for each $i \in \kappa$, the natural projections of $f_{i1}, \ldots, f_{in_i}$ on $H_i$ generate a free subgroup (this is possible, by the previous paragraph), and if $f_i \notin U_j$ for some $i, j \in \kappa$, then we let the natural projection of $f_i$ on $H_j$ be the identity. It follows that if $\{g_1, \ldots, g_n\} \subseteq \{f_i\}_{i \in \kappa}$ is any finite subset, then $g_1, \ldots, g_n$ cannot satisfy a free word, and hence $\{f_i\}_{i \in \kappa}$ freely generates a free group. 
\end{proof} The above proposition actually implies that the group $G$ in question contains a free subgroup of rank $2^\kappa$. To show this we shall need the following result from~\cite{Bergman}. (The second paragraph of the proof of Proposition~\ref{prod2} employs a similar idea to that used to prove this theorem.) \begin{theorem}[Bergman] \label{bergman} Let $\, V$ be any variety of finitary $($general$)$ algebras, let $F$ be a free $\, V$-algebra on $\, \aleph_0$ generators, and let $\kappa$ be an infinite cardinal. Then $F^\kappa$ contains a free $\, V$-algebra on $2^\kappa$ generators. In particular, if $F$ is a free group on $\, \aleph_0$ generators, then $F^\kappa$ contains a free group on $2^\kappa$ generators. \end{theorem} The proof of the following corollary is similar to that of Theorem~\ref{debruijn}. \begin{corollary} \label{prod3} Let $\kappa$ and $G$ be as in Proposition~\ref{prod2}. Then $G$ contains a free subgroup of rank $2^\kappa$. \end{corollary} \begin{proof} First, we note that $G$ contains a $\kappa$-fold direct product of groups of the same form, by the same argument as in the second paragraph of the proof of Proposition~\ref{prod2}. (For, $\kappa$ can be written as a disjoint union of $\kappa$ subsets of cardinality $\kappa$.) Now, by the same proposition, it follows that $G$ contains a $\kappa$-fold direct product of free groups of rank $\aleph_0$. Thus, by Theorem~\ref{bergman}, $G$ contains a free group of rank $2^\kappa$. \end{proof} We require one more ingredient to construct our desired almost $\kappa$-free group of permutations. \begin{lemma} \label{prod4} Let $\kappa$ be an infinite cardinal, and let $G = \prod_{i \in \kappa} G_i$, where each $G_i$ is of the form $S_m$, for some $m \in \mathbb{Z}_+$. 
Suppose that $f_1, \ldots, f_n \in G$ freely generate a free subgroup, and let $g_1, \ldots, g_n \in G$ be any permutations such that there exists a finite subset $\, \Gamma \subseteq \kappa$ with the property that for each $j \in \{1, \ldots, n\}$ and $i \in \kappa \setminus \Gamma$, the natural projections of $g_j$ and $f_j$ on $G_i$ are equal. Then $g_1, \ldots, g_n$ also freely generate a free subgroup. \end{lemma} \begin{proof} Suppose, on the contrary, that $w(g_1, \ldots, g_n) = 1$ for some free word $w$. Then the natural projection of $w(f_1, \ldots, f_n)$ on $\prod_{i \in \kappa \setminus \Gamma} G_i$ agrees with that of $w(g_1, \ldots, g_n)$, and hence is the identity. Now, $\prod_{i \in \Gamma} G_i$ is a finite group, and hence the natural projection of $w(f_1, \ldots, f_n)$ on $\prod_{i \in \Gamma} G_i$ has some finite order $N \in \mathbb{Z}_+$. Then $w(f_1, \ldots, f_n)^N = 1$, and $w^N$ is again a free word, since free groups are torsion-free. It follows that $f_1, \ldots, f_n$ satisfy a free word, contradicting our hypothesis. Hence $g_1, \ldots, g_n$ freely generate a free subgroup. \end{proof} \begin{theorem} \label{prod-main} Let $\kappa$ be an infinite cardinal, let $\varphi : \kappa \rightarrow \mathbb{Z}_+$ be a function, and let $G = \prod_{i \in \kappa} S_{\varphi(i)}$ $($where $S_{\varphi(i)} = \mathrm{Sym}(\{1, 2, \ldots, \varphi(i)\}))$. Suppose that $\, |\{i : \varphi(i) \geq l\}| = \kappa$ for each $l \in \mathbb{Z}_+$. Then $G$ contains a free subgroup of rank $2^\kappa$ that is dense with respect to the product topology resulting from putting the discrete topology on each $S_{\varphi(i)}$. In particular, $G$ is almost $2^\kappa$-free. \end{theorem} \begin{proof} For each $i \in \kappa$, let $\pi_i : G \rightarrow S_{\varphi(i)}$ denote the natural projection. We note that each open subset of $G$ contains a set of the form $\bigcap_{i \in I} \{f \in G : \pi_i f = g_{i}\}$, for some finite $I \subseteq \kappa$ and $g_{i} \in S_{\varphi(i)}$ ($i \in I$). Let $\Delta$ be the set of all finite subsets of $\kappa$ (so in particular, $|\Delta| = \kappa$).
For each $I \in \Delta$ let $G_I = \prod_{i \in I} S_{\varphi(i)}$, and write $G_I = \{g_{(I,j)}\}_{1\leq j \leq |G_I|}$. By Corollary~\ref{prod3}, we can find a subset $\{f_i\}_{i \in 2^\kappa} \subseteq G$ that freely generates a free group. Let us also reindex the first $\kappa$-many $f_i$ as $\{f_i\}_{i \in \kappa} = \{f_{(I,j)} : I \in \Delta, 1\leq j \leq |G_I|\}$. Now, for each $I \in \Delta$ and $1\leq j \leq |G_I|$ define $h_{(I,j)} \in G$ so that $h_{(I,j)}$ agrees with $g_{(I,j)}$ on the coordinates indexed by $I$ and with $f_{(I,j)}$ elsewhere. Letting $H = \{h_{(I,j)} : I \in \Delta, 1\leq j \leq |G_I|\} \cup \{f_i\}_{i \in 2^\kappa \setminus \kappa},$ we see that $H$ is dense in $G$, by the observation about the open subsets of $G$ made in the previous paragraph. Also, by Lemma~\ref{prod4} (applied to all finite subsets of $H$), the elements of $H$ freely generate a free group of rank $2^\kappa$, giving us the desired conclusion. The final claim follows from Corollary~\ref{discrete-free2}. \end{proof} \section{Further questions} We conclude with a couple of questions to which we would like to know the answers. \begin{question} Given an integer $n > 1$, is there a topological group that is almost $n$-free but not almost $(n+1)$-free? \end{question} \begin{question} Is there a completely metrizable group that is almost free but not almost $\aleph_1$-free? \end{question} \noindent Department of Mathematics \newline University of Colorado \newline Colorado Springs, CO 80933-7150 \newline USA \newline \noindent Email: {\tt [email protected]} \end{document}
\begin{document} \title{Symmetries and choreographies in families bifurcating from \\the polygonal relative equilibrium of the $n$-body problem} \author{Renato Calleja\thanks{Instituto de Investigaciones en Matem\'{a}ticas Aplicadas y en Sistemas, Universidad Nacional Aut\'{o}noma de M\'{e}xico, [email protected]}, Eusebius Doedel\thanks{Department of Computer Science, Concordia University, Montreal, Canada, [email protected]}, Carlos Garc\'{\i}a-Azpeitia\thanks{Facultad de Ciencias, Universidad Nacional Aut\'{o}noma de M\'{e}xico, [email protected]}} \maketitle \begin{abstract} We use numerical continuation and bifurcation techniques in a boundary value setting to follow Lyapunov families of periodic orbits. These arise from the polygonal system of $n$ bodies in a rotating frame of reference. When the frequency of a Lyapunov orbit and the frequency of the rotating frame have a rational relationship, the orbit is also periodic in the inertial frame. We prove that a dense set of Lyapunov orbits, with frequencies satisfying a Diophantine equation, corresponds to choreographies. We present a sample of the many choreographies that we have determined numerically along the Lyapunov families and along bifurcating families, namely for the cases $n=4,~6,~7,~8$, and $9$. We also present numerical results for the case where there is a central body that affects the choreography, but that does not participate in it. Animations of the families and the choreographies can be seen at the link below\footnote{\texttt{http://mym.iimas.unam.mx/renato/choreographies/index.html} }. \end{abstract} \section*{Introduction} The study of $n$ equal masses that follow the same path has attracted much attention in recent years. The first solution that differs from the classical Lagrange circular one was discovered numerically by C.~Moore in 1993 \cite{Mo93}, where three bodies follow one another around the now famous figure-eight orbit.
This orbit was located by minimizing the action among symmetric paths. Independently in \cite{ChMo00}, Chenciner and Montgomery (2000) gave a rigorous mathematical proof of the existence of this orbit, by minimizing the action over paths that connect a collinear and an isosceles configuration. Such solutions are now commonly known as \textquotedblleft choreographies\textquotedblright, after the work in \cite{Si00}, where C.~Sim\'{o} presented extensive numerical computations of choreographies for many choices of the number of bodies. The results in \cite{ChMo00} mark the beginning of the development of variational methods, where the existence of choreographies can be associated with the problem of finding critical points of the classical action of the Newton equations of motion. The main obstacles encountered in the application of the principle of least action are the existence of paths with collisions, and the lack of compactness of the action. In \cite{FeTe04}, Ferrario and Terracini (2004) applied the principle of least action systematically over symmetric paths to avoid collisions, using ideas introduced by Marchal \cite{Ma02}. For the discussion of these and other variational approaches we refer to \cite{BaTe04,Ch03,Fe06,FePo08,TeVe07}, and references therein. Another way to obtain choreographies is by using continuation methods. Chenciner and F\'{e}joz (2009) pointed out in \cite{ChFe08} that choreographies appear in dense sets along the Vertical Lyapunov families that arise from $n$ bodies rotating in a polygon; see also \cite{ChFe05,GaIz13,Ma00}. The local existence of the Vertical Lyapunov families is proven in \cite{ChFe08} using the Weinstein-Moser theory. When the frequency varies along the Vertical Lyapunov families, an infinite number of choreographies exists, a fact established in \cite{ChFe08} for orbits close to the polygon equilibrium, with $n\leq6$.
While similar computations can be carried out for other values of $n$, a general analytical proof that is valid for all $n$ remains an open problem. In \cite{GaIz13} C.~Garc\'{\i}a-Azpeitia and J. Ize (2013) proved the global existence of bifurcating Planar and Vertical Lyapunov families, using the equivariant degree theory from \cite{IzVi03}. The purpose of our current work is to compute such global families numerically, as well as subsequently bifurcating families. To explain our numerical results in a precise notational setting we first recall some relevant results from \cite{GaIz13}. The equations of motion of $n$ bodies of unit mass in a rotating frame are given by \begin{align} \ddot{u}_{j}+2\sqrt{s_{1}}~i~\dot{u}_{j} & =s_{1}u_{j}-\sum_{i=1(i\neq j)}^{n}\frac{u_{j}-u_{i}}{\left\Vert (u_{j},z_{j})-(u_{i},z_{i})\right\Vert ^{3}}~,\label{NE}\\ \ddot{z}_{j} & =-\sum_{i=1(i\neq j)}^{n}\frac{z_{j}-z_{i}}{\left\Vert (u_{j},z_{j})-(u_{i},z_{i})\right\Vert ^{3}}~,\nonumber \end{align} where the $(u_{j},z_{j})\in\mathbb{C}\times\mathbb{R}$ are the positions of the bodies in space, and $s_{1}$ is obtained by setting $k=1$ in \begin{equation} s_{k}=\frac{1}{4}\sum_{j=1}^{n-1}\frac{\sin^{2}(kj\zeta/2)}{\sin^{3} (j\zeta/2)}~,\qquad\zeta=\frac{2\pi}{n}~. \label{S} \end{equation} The circular, polygonal relative equilibrium consists of the positions \begin{equation} u_{j}=e^{ij\zeta},\qquad z_{j}=0\text{.} \label{RE} \end{equation} The frequency of the rotating frame is chosen to be $\sqrt{s_{1}}$, so that the polygon (\ref{RE}) is an equilibrium of (\ref{NE}). The emanating Lyapunov families have starting frequencies equal to the frequencies of the natural modes of oscillation of the equilibrium (\ref{RE}). These Lyapunov families constitute continuous families in the space of renormalized $2\pi$-periodic functions. The \emph{global} property means that the norm or the period of the orbits along the family tends to infinity, or that the family ends in a collision or at a bifurcation orbit.
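As a quick numerical sanity check of the setup above (our own illustration, not taken from the references, with function names that are ours), one can verify in a few lines of Python that the coefficient $s_1$ of (\ref{S}) is exactly the one that makes the polygon (\ref{RE}) an equilibrium of (\ref{NE}):

```python
import cmath
import math

def s(k, n):
    """Coefficient s_k of Eq. (S): (1/4) * sum_{j=1}^{n-1} sin^2(k j zeta/2) / sin^3(j zeta/2)."""
    zeta = 2 * math.pi / n
    return 0.25 * sum(math.sin(k * j * zeta / 2) ** 2 / math.sin(j * zeta / 2) ** 3
                      for j in range(1, n))

def attraction_on_vertex(n):
    """Planar force exerted on body n (at u_n = 1) by the other vertices of the n-gon (RE)."""
    zeta = 2 * math.pi / n
    u = [cmath.exp(1j * j * zeta) for j in range(1, n + 1)]  # u_n = e^{i n zeta} = 1
    return sum((u[-1] - ui) / abs(u[-1] - ui) ** 3 for ui in u[:-1])

# Equilibrium condition of (NE) at (RE): the mutual attraction equals s_1 * u_n.
for n in (4, 6, 7, 8, 9):
    assert abs(attraction_on_vertex(n) - s(1, n)) < 1e-9
```

The resonance identity $s_{n-k}=s_k$ used for the spatial families can be checked with the same function.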
The theorem in \cite{GaIz13} states that for $n\geq6$ and for each integer $k$ such that \[ 3\leq k\leq n-3, \] the polygonal relative equilibrium has one \emph{global bifurcation of planar periodic solutions} with symmetries \begin{equation} u_{j}(t)=e^{ij\zeta}u_{n}(t+jk\zeta),\qquad u_{n}(t)=\bar{u}_{n}(-t)\text{.} \label{PS} \end{equation} Moreover, the proof in \cite{GaIz13} predicts solutions with $k=2$ or $n-2$ if the linear equations at the polygonal equilibrium have normal modes corresponding to these symmetries. In fact, three cases occur for different values of $n$: for $n=4,5,6$ there are no solutions with $k=2$ or $n-2$, for $n=7,8,9$ there are two solutions with $k=2$ and no solutions with $k=n-2$, and for $n\geq10$ there is one solution with $k=2$ and one with $k=n-2$. In the case of spatial Lyapunov families the eigenvalues of the linearized system of equations are given explicitly by $i\sqrt{s_{k}}$, for $k=1,...,n-1$; see \cite{ChFe08} and \cite{GaIz13}. The eigenvalues $i\sqrt{s_{k}}$ are resonant due to the fact that $s_{n-k}=s_{k}$ for $1\leq k<n/2$. Moreover, the first eigenvalue $i\sqrt{s_{1}}$ is resonant with the triple planar eigenvalue $i\sqrt{s_{1}}$, and hence is highly degenerate. These resonances can be dealt with using the equivariant degree theory in \cite{IzVi03}. The theorem in \cite{GaIz13} states that for $n\geq3$ and for each $k$ such that \[ 1\leq k\leq n/2\text{,} \] the polygonal relative equilibrium has one \emph{global bifurcation of spatial periodic solutions}, which start with frequency $\sqrt{s_{k}}$, have the symmetry (\ref{PS}), as well as the symmetries \begin{equation} z_{j}(t)=z_{n}(t+jk\zeta), \label{SS} \end{equation} and \begin{equation} u_{n}(t)=u_{n}(t+\pi),\qquad z_{n}(t)=-z_{n}(t+\pi). \label{8} \end{equation} For example, for the case where $k=n/2$ and $n$ is even, we have $k\zeta=\pi$.
Then the symmetries (\ref{PS}), (\ref{SS}) and (\ref{8}) imply that \begin{align*} u_{j}(t) & =e^{ij\zeta}u_{n}(t+j\pi)=e^{ij\zeta}u_{n}(t),\\ z_{j}(t) & =z_{n}(t+jk\zeta)=(-1)^{j}z_{n}(t). \end{align*} Solutions having these symmetries are known as Hip-Hop orbits, and have been studied in \cite{BaCo06,DaTr83,MeSc93,TeVe07}. Solutions with symmetries (\ref{PS}) and (\ref{SS}) are \textquotedblleft traveling waves\textquotedblright\ in the sense that each body follows the same path, but with a rotation and a time shift. The symmetries allow us to establish that a dense set of solutions along the family are choreographies in the inertial frame of reference. We say that a planar or spatial Lyapunov orbit is $\ell:m$\emph{\ resonant} if its period and frequency are \[ T=\frac{2\pi}{\sqrt{s_{1}}}\left( \frac{\ell}{m}\right) \text{,\qquad} \nu=\sqrt{s_{1}}\frac{m}{\ell}\text{,} \] where $\ell$ and $m$ are relatively prime, and such that \[ k\ell-m\in n\mathbb{Z}. \] In Theorem~\ref{proposition} we prove that $\ell:m$ resonant Lyapunov orbits are choreographies in the inertial frame. Each of the integers $k$, $\ell$, and $m$ plays a different role in the description of the choreographies. Indeed, the projection of the choreography onto the $xy$-plane has winding number $\ell$ around a center, and is symmetric with respect to the $\mathbb{Z}_{m}$-group of rotations by $2\pi/m$. In addition, the $n$ bodies form groups of $d$-polygons, where $d$ is the greatest common divisor of $k$ and $n$. Some choreographies wind around a toroidal manifold with winding numbers $\ell$ and $m$, \textit{i.e.}, the choreography path is an $(\ell,m)$-torus knot. In particular, such orbits appear in families that we refer to as ``Axial families'', \textit{e.g.}, in Figure~\ref{fig08}. In \cite{BaTe04} and \cite{Mo} different classifications for the symmetries of planar choreographies have been presented.
These classifications differ from the one presented here since they are designed for choreographies found by means of a variational approach. Our approach is based on continuation and, as such, the winding numbers $\ell$ and $m$ appear naturally in the classification of the choreographies. Therefore, our approach provides complementary information not available with variational methods. We note that for other values of $\ell$ and $m$ the orbits of the $n$ bodies in the inertial frame are also closed, but consist of multiple curves, called ``\textit{multiple choreographic solutions}'' in \cite{Ch03}. We use robust and highly accurate boundary value techniques with adaptive meshes to continue the Lyapunov families. An extensive collection of Python scripts that reproduce the results reported in this article for a selection of values of $n$ will be made freely available. These scripts control the software AUTO to carry out the necessary sequences of computations. Similar scripts will be available for related problems, including an $n$-vortex problem and a periodic lattice of Schr\"odinger sites. In \cite{ChFe08} the Vertical Lyapunov families are continued numerically as local minimizers in subspaces of symmetric paths. Presumably not all families consist of local minimizers, even when restricted to such subspaces. One advantage of our procedure is that it allows the numerical continuation of all Planar and Vertical Lyapunov families that arise from simple eigenvalues. The systematic computation of periodic orbits that arise from eigenvalues of higher multiplicity remains under investigation. Previous numerical work has established the existence of many choreographies; see for example \cite{ChGe02}. Computer-assisted proofs of the existence of choreographies have been given in, for example, \cite{KaZg03} and \cite{KaSi07}. It would be of interest to use such techniques to mathematically validate the existence of some of the choreographies in our article.
The figure-eight orbit is still the only choreography known to be stable \cite{KaSi07}, and so far we have not found evidence of other stable choreographies. In Section~1 we prove that a dense set of orbits along the Lyapunov families corresponds to choreographies. In Section~2 we describe the numerical continuation procedure used to determine the periodic solution families, and in Section~3 we give examples of numerically computed Lyapunov families and some of their bifurcating families. In Section~4 we provide a sample of the choreographies that appear along Planar Lyapunov families. Section~5 presents choreographies along the Vertical Lyapunov families and along their bifurcating families. In particular, a family of axially symmetric orbits forms a connection between a Vertical family and a Planar family. Choreographies along such tertiary Planar families are referred to as ``unchained polygons'' in \cite{ChFe08}. In Section~6 we present results for a similar configuration, namely the Maxwell relative equilibrium, where a central body is added at the center of the $n$-polygon. This configuration has been used as a model to study the stability of the rings of Saturn, as established in \cite{Mo92} and in \cite{GaIz13,VaKo07,Ro00} for $n\geq7$. Using an approach similar to that of the earlier sections, we determine solutions where $n$ bodies of equal mass $1$ follow a single trajectory, but with an additional body of mass $\mu$ at or near the center. While this extra body does not participate in the choreography, it does affect its structure and its stability properties. We also present Vertical Lyapunov families that bifurcate from a non-circular, polygonal equilibrium, whose solutions have symmetries that correspond to \textquotedblleft standing waves\textquotedblright, and which do not give rise to choreographies in the inertial frame.
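Before turning to the proofs, note that the $\ell:m$ resonance condition introduced above is purely arithmetic, so small resonant pairs are easy to enumerate; the following sketch (our own illustration, with names that are ours) lists them for a given $k$ and $n$:

```python
from math import gcd

def is_resonant(l, m, k, n):
    """l:m resonance for the k-th Lyapunov family of the n-gon:
    l and m relatively prime with k*l - m divisible by n."""
    return gcd(l, m) == 1 and (k * l - m) % n == 0

# Example: n = 7 bodies, family with symmetry number k = 3.
pairs = [(l, m) for l in range(1, 10) for m in range(1, 10)
         if is_resonant(l, m, 3, 7)]
```

Each such pair corresponds to an orbit of period $(2\pi/\sqrt{s_{1}})(\ell/m)$ in the rotating frame whose trajectory closes in the inertial frame after $m$ of these periods.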
\section{Choreographies and Lyapunov Families} In this section we prove that there are Lyapunov orbits of the $n$-body problem that correspond to choreographies in the inertial frame of reference. \begin{lemma} \label{Lemma} Let \[ \Omega=\frac{1}{n}\left( k\frac{\sqrt{s_{1}}}{\nu}-1\right) \text{.} \] Then in the inertial frame of reference, with period scaled to $2\pi$, the Planar Lyapunov orbits satisfy \[ q_{j}(t)=e^{-ij(2\pi)\Omega}q_{n}(t+jk\zeta). \] \end{lemma} \begin{proof} In the inertial frame the solutions are given by \[ q_{j}(t)=e^{i\sqrt{s_{1}}t}u_{j}(\nu t), \] where $\nu$ is the frequency and $T=2\pi/\nu$ is the period. Reparametrizing time, the solution becomes $q_{j}(t)=e^{it\sqrt{s_{1}}/\nu}u_{j}(t)$, where $u_{j} $ is the $2\pi$-periodic solution with the symmetries (\ref{PS}). We have \[ q_{j}(t)=e^{it\sqrt{s_{1}}/\nu}u_{j}(t)=e^{it\sqrt{s_{1}}/\nu}e^{ij\zeta} u_{n}(t+jk\zeta)\text{.} \] Since \[ q_{n}(t+jk\zeta)=e^{i(t+jk\zeta)\sqrt{s_{1}}/\nu}u_{n}(t+jk\zeta)\text{,} \] it follows that \[ q_{j}(t)=e^{it\sqrt{s_{1}}/\nu}e^{ij\zeta}\left( e^{-i(t+jk\zeta)\sqrt{s_{1} }/\nu}q_{n}(t+jk\zeta)\right) =e^{-ij\zeta n\Omega}q_{n}(t+jk\zeta). \] \end{proof} In particular, if $\Omega\in\mathbb{Z}$ then the Lyapunov solutions satisfy \begin{equation} q_{j}(t)=q_{n}(t+jk\zeta)\text{,} \label{qn} \end{equation} and are choreographies. In fact, planar choreographies exist for any rational number $\Omega=p/q$ where $q$ is relatively prime to $n$. \begin{proposition} \label{PC} If $\Omega=p/q$, with $q$ relatively prime to $n$, then \begin{equation} q_{j}(t)=q_{n}(t+j\left( 1_{n}k\zeta\right) )\text{,} \end{equation} where $1_{n}=1$ mod $n$. The solution $q_{n}(t)$ is $2\pi m$-periodic, where $m$ and $\ell$ are relatively prime such that \begin{equation} \frac{\ell}{m}=\frac{np+q}{kq}\text{.} \label{lm} \end{equation} \end{proposition} \begin{proof} If $\Omega=p/q$, the solution satisfies \begin{equation} q_{j}(t)=e^{-i2\pi jp/q}q_{n}(t+jk\zeta).
\end{equation} Since $n$ and $q$ are relatively prime, we can define $q^{\ast}$ as the inverse of $q$ modulo $n$. Setting $1_{n}=q^{\ast}q$, for each $j$ there is an integer $a_{j}$ such that $j 1_{n}=j+n a_{j}$. Then we have \begin{equation} q_{j}(t)=q_{j+n a_{j}}(t)=e^{-i2\pi j 1_{n}p/q}q_{n}(t+j(1_{n}k\zeta ))=e^{-i2\pi(jq^{\ast}p)}q_{n}(t+j(1_{n}k\zeta))\text{.} \end{equation} Since \[ \frac{\sqrt{s_{1}}}{\nu}=\frac{n\Omega+1}{k}=\frac{np+q}{qk}=\frac{\ell} {m}\text{,} \] it follows that $e^{it\sqrt{s_{1}}/\nu}$ is $2\pi m$-periodic, and since $u_{n}(t)$ is $2\pi$-periodic, we also have that the function $q_{n} (t)=e^{it\sqrt{s_{1}}/\nu}u_{n}(t)$ is $2\pi m$-periodic. \end{proof} \begin{proposition} \label{SC} For $\Omega=p/q$, with $q$ and $n$ relatively prime, the spatial Lyapunov solution is a choreography that satisfies \begin{equation} (q_{j},z_{j})(t)=(q_{n},z_{n})(t+j(1_{n}k\zeta))\text{,} \end{equation} where $1_{n}=1$ mod $n$ and $(q_{j},z_{j})(t)$ is $2\pi m$-periodic. \end{proposition} \begin{proof} For the planar component of the spatial Lyapunov families we have $q_{j}(t)=q_{n}(t+j1_{n}k\zeta)$, where $q_{n}(t)$ is $2\pi m$-periodic. We have in addition that the spatial component $z_{n}$ is $2\pi$-periodic and satisfies $z_{j}(t)=z_{n}(t+jk\zeta)$. Since $1_{n}=1$ mod $n$, we have \[ z_{j}(t)=z_{n}(t+jk\zeta)=z_{n}(t+j1_{n}k\zeta), \] where $z_{n}(t)$ is also $2\pi m$-periodic. \end{proof} For fixed $n$ the set of rational numbers $p/q$ such that $q$ and $n$ are relatively prime is dense. If the range of the frequency $\nu$ along the Lyapunov family contains an interval, then the range of $\Omega$ contains an interval as well, and inside it a dense set of rational numbers $\Omega=p/q$ with $q$ relatively prime to $n$. Hence there is an infinite number of Lyapunov orbits that correspond to choreographies. To be precise, the resonant Lyapunov orbit gives a choreography that has period \[ mT=m\frac{2\pi}{\nu}=\frac{2\pi}{\sqrt{s_{1}}}\ell~\text{,} \] where $T$ is the period of the resonant Lyapunov orbit.
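Concretely, the reduction of (\ref{lm}) to lowest terms can be carried out with exact rational arithmetic; the following sketch (our own illustration, with names that are ours) computes $(\ell,m)$ from $\Omega=p/q$:

```python
from fractions import Fraction
from math import gcd

def winding_numbers(p, q, k, n):
    """(l, m) of Eq. (lm): l/m = (n p + q)/(k q) in lowest terms,
    for Omega = p/q with q relatively prime to n."""
    assert gcd(q, n) == 1
    r = Fraction(n * p + q, k * q)  # Fraction normalizes to lowest terms
    return r.numerator, r.denominator

# Omega = 1/2 on the k = 3 family of the 7-gon gives the 3:2 resonance.
l, m = winding_numbers(1, 2, 3, 7)
assert (l, m) == (3, 2)
# Consistency with the resonance condition: k*l - m is divisible by n.
assert (3 * l - m) % 7 == 0
```

The choreography then has period $2\pi m$ in the rescaled time, i.e. $mT$ in physical time, as stated above.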
Furthermore, the number $\ell$ is related to the number of times that the orbit of the choreography winds around a central point. Rational numbers $p/q$, where $q$ is relatively prime to $n$, appear infinitely often in an interval, with $p$ and $q$ arbitrarily large. In such a frequency interval the infinite number of rationals $p/q$ that correspond to choreographies give arbitrarily large $\ell$ and $m$ as well. This gives rise to an infinite number of choreographies, with arbitrarily large periods $\frac{2\pi}{\sqrt{s_{1}}}\ell$, and orbits of correspondingly increasing complexity. Although the previous results give sufficient conditions for the existence of infinitely many choreographies, there can be additional choreographies due to the fact that the orbit of the choreography $q_{n}(t)$ has additional symmetries by rotations of $2\pi/m$. We now describe these symmetries and the necessary conditions. \begin{definition} We define a Lyapunov orbit as being $\ell:m$ resonant if it has period \[ T_{\ell:m}=\frac{2\pi}{\sqrt{s_{1}}}\frac{\ell}{m}\text{,} \] where $\ell$ and $m$ are relatively prime such that \[ k\ell-m\in n\mathbb{Z}\text{.} \] \end{definition} \begin{theorem} \label{proposition} In the inertial frame an $\ell:m$ resonant Lyapunov orbit is a choreography, \[ (q_{j},z_{j})(t)=(q_{n},z_{n})(t+j\tilde{k}\zeta)\text{,} \] where $\tilde{k}=k-(k\ell-m)\ell^{\ast}$ with $\ell^{\ast}$ the $m$-modular inverse of $\ell$. The projection on the $xy$-plane of the choreography is symmetric under rotations by the angle $2\pi/m$ and winds around a center $\ell$ times. The period of the choreography is $m~T_{\ell:m}$. \end{theorem} \begin{proof} Since $u_{n}(t)$ is $2\pi$-periodic and \[ e^{it\sqrt{s_{1}}/\nu}=e^{it\ell/m} \] is $2\pi m$-periodic, the function $q_{n}(t)=e^{it\sqrt{s_{1}}/\nu}u_{n}(t)$ is $2\pi m$-periodic.
Furthermore, since \begin{equation} q_{n}(t-2\pi)=e^{-i2\pi\ell/m}q_{n}(t), \label{symq} \end{equation} the orbit of $q_{n}(t)$ is invariant under rotations of $2\pi/m$. By Lemma\ \ref{Lemma}, since \[ \Omega=\frac{k\ell-m}{nm}=\frac{r}{m}, \] with $r=(k\ell-m)/n\in\mathbb{Z}$, the solutions satisfy \begin{equation} q_{j}(t)=e^{-i2\pi j(r/m)}q_{n}(t+jk\zeta)\text{.} \end{equation} Since $\ell$ and $m$ are relatively prime we can find $\ell^{\ast}$, the $m$-modular inverse of $\ell$. Since $\ell\ell^{\ast}=1$ mod $m$, it follows from the symmetry (\ref{symq}) that \[ q_{n}(t-2\pi jr\ell^{\ast})=e^{-i2\pi j(r/m)}q_{n}(t). \] Therefore, \begin{equation} q_{j}(t)=e^{-i2\pi j(r/m)}q_{n}(t+jk\zeta)=q_{n}(t+j(k-rn\ell^{\ast})\zeta). \end{equation} For the planar component $q_{j}(t)$ of spatial Lyapunov families we have the same relation. In addition we have that the spatial component $z_{n}$ is $2\pi$-periodic and satisfies $z_{j}(t)=z_{n}(t+jk\zeta)$. Since $rn\ell ^{\ast}\zeta=2\pi r\ell^{\ast}\in2\pi\mathbb{Z}$, it follows that \[ z_{j}(t)=z_{n}(t+jk\zeta)=z_{n}(t+j(k-rn\ell^{\ast})\zeta), \] and thus $z_{n}(t)$ is also $2\pi m$-periodic. \end{proof} \section{Numerical continuation of Lyapunov families} To continue the Lyapunov families numerically it is necessary to take the symmetries into account. The equations (\ref{NE}) in the rotating frame have two symmetries that are inherited from Newton's equations in the inertial frame, namely rotations in the plane, $u_{j}\mapsto e^{i\theta}u_{j}$, and translations in the spatial coordinate, $z_{j}\mapsto z_{j}+c$. This implies that any rotation in the plane and any translation of an equilibrium is also an equilibrium, and that the linear equations have two conserved quantities and two trivial eigenvalues.
To determine the conserved quantities, we can sum the equation (\ref{NE}) over the $z_{j}$ coordinates to obtain that $\sum_{j=1}^{n}\ddot{z}_{j}=0$, \textit{i.e.}, the linear momentum in $z$ is conserved \begin{equation} \sum_{j=1}^{n}\dot{z}_{j}(t)= \text{constant.} \end{equation} The other conserved quantity can be obtained easily in real coordinates. Identifying $i$ with the symplectic matrix $J$, taking the real inner product of the $u$ component of equation (\ref{NE}) with the generator of the rotations $Ju_{j}$, and summing over $j$, we obtain \[ 0=\sum_{j=1}^{n}\left\langle \ddot{u}_{j}+2\sqrt{s_{1}}J\dot{u}_{j} ,Ju_{j}\right\rangle _{\mathbb{R}^{2}}=\frac{d}{dt}\sum_{j=1}^{n}\left\langle \dot{u}_{j}+\sqrt{s_{1}}Ju_{j},Ju_{j}\right\rangle _{\mathbb{R}^{2}}\text{.} \] Therefore, the second conserved quantity is \begin{equation} \sum_{j=1}^{n}\left( \dot{u}_{j}\cdot Ju_{j} + \sqrt{s_{1}}\left\vert u_{j} \right\vert ^{2}\right) .\nonumber \end{equation} To continue the Lyapunov families numerically we need to take the conserved quantities into account. Let $x_{j}=(u_{j},z_{j})$ be the vector of positions and $v_{j}=(\dot{u}_{j},\dot{z}_{j})$ the vector of velocities. In our numerical computations we use the augmented equations \begin{align} \label{EqA}\dot{x}_{j} & = v_{j}\text{,}\nonumber\\ \dot{v}_{j} & = 2\sqrt{s_{1}}~diag(J,0)~v_{j}+\nabla_{x_{j}}V+ \sum_{k=1} ^{3}\lambda_{k}F_{j}^{k} , \end{align} where $V(x)=\sum_{i<j}\left\Vert x_{j}-x_{i}\right\Vert ^{-1}$, and where $F_{j}^{1}=e_{3}$ corresponds to the generator of the translations in $z$, $F_{j}^{2}=diag(J,0)x_{j}$ to rotations in the plane, and $F_{j}^{3}=v_{j}$ to the conservation of the energy. The solutions of the equation (\ref{EqA}) are solutions of the original equations of motion when the values of the three parameters $\lambda_{k}$ are zero. It is known that the converse of this statement is also true, for instance see \cite{IzVi03} and \cite{DoVa03}.
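For concreteness, the velocity part of the augmented system (\ref{EqA}) can be transcribed literally as follows; this is a schematic sketch with our own names (it is not the AUTO implementation), where the arrays x and v hold the $n$ position and velocity vectors:

```python
import numpy as np

# diag(J, 0), with J the planar symplectic matrix identified with i
Jd = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 0.0]])

def grad_V(x):
    """Gradient with respect to x_j of V(x) = sum_{i<j} ||x_j - x_i||^{-1}."""
    n = len(x)
    g = np.zeros_like(x)
    for j in range(n):
        for i in range(n):
            if i != j:
                d = x[j] - x[i]
                g[j] -= d / np.linalg.norm(d) ** 3
    return g

def augmented_vdot(x, v, lam, s1):
    """v-equation of (EqA), transcribed as printed; lam = (lam1, lam2, lam3)
    multiplies the unfolding fields F^1 = e3, F^2 = diag(J,0) x_j, F^3 = v_j."""
    e3 = np.array([0.0, 0.0, 1.0])
    vdot = 2.0 * np.sqrt(s1) * v @ Jd.T + grad_V(x)
    return vdot + lam[0] * e3 + lam[1] * x @ Jd.T + lam[2] * v
```

For lam = (0, 0, 0) the unfolding terms vanish, and each parameter enters linearly through its own field, which is what the proposition below exploits.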
\begin{proposition} Assume that the functions $F^{k}=(F_{1}^{k},...,F_{n}^{k})$, $k=1,2,3$, are orthogonal (or merely linearly independent). Then a periodic solution $(x,v)$ of the augmented equations (\ref{EqA}) is a solution of the original equations of motion if and only if $\lambda_{k}=0$ for $k=1,2,3$. \end{proposition} \begin{proof} Multiplying the second equation in (\ref{EqA}) by $F_{j}^{k}$, summing over $j$, and integrating by parts, we obtain \[ \int_{0}^{2\pi}\sum_{j=1}^{n}\dot{v}_{j}\cdot F_{j}^{k}dt=\lambda_{k}\int _{0}^{2\pi}\sum_{j=1}^{n}\left\vert F_{j}^{k}\right\vert ^{2}dt\text{.} \] Suppose that $(x,v)$ is a solution of the equations of motion. Then it conserves the aforementioned quantities, and therefore \[ \int_{0}^{2\pi}\sum_{j=1}^{n}\dot{v}_{j}\cdot F_{j}^{k}dt=0. \] That $\lambda_{k}=0$ then follows from the orthogonality of the fields $F^{k}$. \end{proof} For the purpose of numerical continuation the period of the solutions is rescaled to $1$, so that it appears explicitly in the equations. Let $\varphi(t,x,v)$ be the flow of the rescaled equations. Then we define the time-$1$ map for the rescaled flow as \[ \varphi(1,x,v;T,\lambda_{1},\lambda_{2},\lambda_{3}):\mathbb{R}^{6n} \times\mathbb{R}^{4}\rightarrow\mathbb{R}^{6n}. \] Let $\tilde{x}(t)$ be the solution computed in the previous step along a family. We implement Poincar\'{e} restrictions given by the integrals \begin{align*} I_{1}(x,v) & =\int_{0}^{1}x_{n}\cdot e_{2}~dt=0,\\ I_{2}(x,v) & =\int_{0}^{1}x_{n}\cdot e_{3}~dt=0,\\ I_{3}(x,v) & =\int_{0}^{1}\left( x_{n}(t)-\tilde{x}_{n}(t)\right) \cdot\tilde{x}_{n}^{\prime}(t)~dt=0\text{,} \end{align*} which correspond to rotations, translations in $z$, and the energy, respectively. The results in \cite{DoVa03} are based on the continuation of zeros of the map \[ F(x,v;T,\lambda_{1},\lambda_{2},\lambda_{3}):=\left( (x,v)-\varphi (1,x,v),I_{1},I_{2},I_{3}\right) :\mathbb{R}^{6n+4}\rightarrow\mathbb{R} ^{6n+3}.
\] In practice, the continuation is done with AUTO for the complete operator equation in function space. That is, the numerical computation of the maps $\varphi$ and $I_{j}$ is done for the corresponding operators in $C_{2\pi}^{2} (\mathbb{R}^{6n})$. This operator equation is discretized using highly accurate piecewise polynomial collocation at Gauss points. \section{Lyapunov families and bifurcating families} In this section we give a brief description of some of the many solution families that we have computed using Python scripts that drive the AUTO software. We start with Planar families that arise from the circular, polygonal equilibrium state of the $n$-body problem when $n\geq6$. For the case $n=6$ there is a single such Planar family. While of interest, its orbits are of relatively small amplitude, and for this reason we have chosen to illustrate the numerical results for the case $n=7$ in this section. One of the four Planar families that exist for $n=7$ also consists of relatively small amplitude orbits. The other three Planar families are illustrated in Figure~\ref{fig01}, where the panels on the left show an orbit along each of three distinct Planar Lyapunov families. These orbits are well away from the polygonal relative equilibrium from which the respective families originate, while also still well away from the collision orbits which these families appear to approach. The panels on the right in Figure~\ref{fig01} show orbits along the three families that are further away from the relative equilibria. Orbits along the Planar families for the cases $n=8$ and $n=9$ share many features with those for the case $n=7$. Families of spatial orbits, which have nonzero $z$-component, emanate from the polygonal relative equilibrium when $n\geq3$.
These families and their orbits are often referred to as \textquotedblleft Vertical\textquotedblright, because the solution of the linearized Newton equations at the equilibrium is perfectly vertical, \textit{i.e.}, the $x$- and $y$-components are identically zero. For the case $n=3$ the Vertical Lyapunov family is highly degenerate, as it corresponds to an eigenvalue of algebraic multiplicity $5$, and there are no further eigenvalues that give rise to Vertical orbits. For the case $n=4$ there is an equally degenerate eigenvalue ($k=1$). However, there is also a nondegenerate eigenvalue that gives rise to a Vertical family, namely the one known as the \textquotedblleft Hip-Hop family\textquotedblright\ ($k=2$). The top-left panel of Figure~\ref{fig02} shows orbits along this family, which terminates in a collision orbit. The coloring of the orbits along the family gradually changes from solid blue (near the equilibrium) to solid red (near the terminating collision orbit). The same coloring scheme is used when showing other entire families of orbits in rotating coordinates. The top-right panel shows a single orbit from the Hip-Hop family, namely the first bifurcation orbit encountered along it. The color of this orbit gradually changes from blue to red as the orbit is traversed, so that one can infer the direction of motion. The masses are shown at their ``initial'' positions. The same coloring scheme is used when showing other individual orbits in rotating coordinates. The center-left panel of Figure~\ref{fig02} shows the Axial family that bifurcates from the Hip-Hop family. The name ``Axial'' alludes to the fact that the orbits of this family are invariant under the transformation $(-y,-z)$, when the $x$-axis is chosen to pass through the \textquotedblleft center\textquotedblright of the orbit. The Axial family connects to a Planar family, namely at the planar bifurcation orbit shown in the center-right panel of Figure~\ref{fig02}. 
We refer to this Planar family as ``Unchained'', because some of its orbits give rise to choreographies called ``Unchained polygons'' in \cite{ChFe08}. The Hip-Hop family for $n=4$, and its bifurcating families, are qualitatively similar to corresponding families that we have computed for the cases $n=6$ and $n=8$. The examples of orbit families given in this section are representative of the many planar and spatial Lyapunov families that we have computed, their secondary and tertiary bifurcating families, as well as corresponding families for other values of $n$. Complete bifurcation pictures are rather complex, but our algorithms are capable of attaining a high degree of detail, which at this point excludes only the degenerate bifurcations mentioned earlier. In the following sections we focus our attention on choreographies that arise from resonant periodic orbits. The statements proved for the Lyapunov families also hold true for subsequent spatial and planar bifurcations, as long as the symmetries (\ref{PS}) and (\ref{SS}) are present. However, this is not always the case, and in Section 6 we give details on a Lyapunov family that does not possess these symmetries. Figure~\ref{fig03} illustrates the appearance of choreographies from resonant Lyapunov orbits and from resonant orbits along subsequent bifurcating families. Specifically, the top-left panel of Figure~\ref{fig03} shows a resonant Planar Lyapunov orbit for the case $n=7$, and the top-right panel shows the same orbit in the inertial frame, where it is seen to correspond to a choreography. Similarly the center panels show a resonant spatial Lyapunov orbit and corresponding choreography for $n=9$, while the bottom panels show a resonant Axial orbit and corresponding choreography for $n=4$. \begin{figure} \caption{ Some orbits along Planar families for the case $n=7$. Top: two orbits with $k=2$. Center: two orbits with $k=3$. Bottom: two orbits with $k=4$.
} \label{fig01} \end{figure} \begin{figure} \caption{ Top-Left: the Vertical Lyapunov family for $n=4$ and $k=2$. Top-Right: the first bifurcation orbit along the Vertical family. Center-Left: the Axial family that bifurcates from the Vertical family. Center-Right: the bifurcation orbit where the Axial family connects to a Planar family. Bottom-Left: one branch of the Planar family to which the Axial family connects. Bottom-Right: the other branch of the family to which the Axial family connects. } \label{fig02} \end{figure} \begin{figure} \caption{The panels on the left show orbits in the rotating frame, while the panels on the right show the same orbits in the inertial frame, where they correspond to choreographies. Top: a resonant Planar Lyapunov orbit. Center: a resonant Vertical Lyapunov orbit. Bottom: a resonant Axial orbit. } \label{fig03} \end{figure} \section{Choreographies along Planar Lyapunov families} In this section we present some of the infinitely many choreographies that appear along the Planar Lyapunov families, namely for the cases $n=7$, $n=8$ and $n=9$, as shown in Figures~\ref{fig04},~\ref{fig05}, and \ref{fig06}, respectively. Corresponding data are given in Tables~1--3. Each choreography winds $\ell$ times around a center and is invariant under rotations of $2\pi/m$. The bodies move in groups of $d$-polygons, where $d$ is the greatest common divisor of $n$ and $k$. In addition, these choreographies are symmetric with respect to reflection in the plane generated by the second symmetry in (\ref{PS}). Since there are infinitely many choreographies, the winding number $\ell$ and the symmetry indicator $m$ can be arbitrarily large, and the choreographies arbitrarily complex. From the observed range of values of the periods along a Lyapunov family we mostly choose the simpler resonances, and hence the simpler choreographies. For example, the family $k=2$ for $n=7$ has a relatively simple choreography.
Here $\ell=5$ and $m=3$ are relatively prime, with \[ k\ell-m=2\times5-3=7\in n\mathbb{Z}\text{.} \] In this example $2\pi s_{1}^{-1/2}=4.1387$, and since $T_{5:3}=(2\pi s_{1}^{-1/2})(5/3)=6.8978$ is within the range of periods of the Lyapunov family, it follows that the $5:3$ resonant Lyapunov orbit corresponds to a choreography in the inertial frame. This choreography is shown in the center-right panel of Figure~\ref{fig04}. It has period $3T_{5:3}$, winding number $5$, and it is invariant under rotations of $2\pi/3$. Similar statements apply to other planar choreographies. \noindent For $n=7$ there is a sequence of Planar Lyapunov families having $k=2,3,4,2$, respectively. The last of these families, with $k=2$, has orbits of rather small amplitude, and is not included in the families shown in Figure~\ref{fig04}. \[ \begin{tabular} [c]{|l|l|l|c|}\hline $k$ & Eigenvalue & Period Interval & Resonant Orbit\\\hline 2 & 1.53960$i$ & [4.0811, 28.328] & 4:1 and 5:3\\\hline 3 & 1.85058$i$ & [3.3953, 27.974] & 3:2 and 5:1\\\hline 4 & 1.50806$i$ & [4.1664, 28.499] & 2:1 and 15:4\\\hline 2 & 0.761477$i$ & [8.2513, 8.3328] & --\\\hline \end{tabular} \] \centerline{Table~1: Data for $n=7$ bodies.} \noindent For $n=8$ there is a sequence of Planar Lyapunov families having $k=2,3,4,5,2$, respectively. We have chosen one choreography from each one of these families, with an additional one for $k=5$, in Figure~\ref{fig05}. \[ \begin{tabular} [c]{|l|l|l|c|}\hline $k$ & Eigenvalue & Period Interval & Resonant Orbit\\\hline 2 & 1.94947$i$ & [3.2230, 14.836] & 11:6\\\hline 3 & 2.39714$i$ & [2.6211, 29.654] & 11:9\\\hline 4 & 2.41171$i$ & [2.6053, 7.4935] & 5:4\\\hline 5 & 1.91468$i$ & [3.2814, 29.122] & 5:1 and 9:5\\\hline 2 & 0.435437$i$ & [5.4804, 14.430] & 5:2\\\hline \end{tabular} \] \centerline{Table~2: Data for $n=8$ bodies.} \noindent For $n=9$ there is a sequence of Planar Lyapunov families having $k=2,3,4,5,6,2$, respectively.
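This resonance bookkeeping is easy to automate. The sketch below (a stand-alone illustration, not the authors' tooling) enumerates the coprime pairs $(\ell,m)$ with $k\ell-m\in n\mathbb{Z}$ whose period $T_{\ell:m}=(2\pi s_{1}^{-1/2})(\ell/m)$ falls in a family's period interval, using the $k=2$ data for $n=7$ from Table~1; the bound on $\ell$ and $m$ is an arbitrary cutoff.

```python
from math import gcd, pi

def resonances(n, k, base_period, t_min, t_max, max_lm=12):
    """Coprime pairs (l, m) with k*l - m in n*Z whose resonance period
    T = base_period*l/m lies in the family's period interval."""
    found = []
    for l in range(1, max_lm + 1):
        for m in range(1, max_lm + 1):
            if gcd(l, m) == 1 and (k * l - m) % n == 0:
                T = base_period * l / m
                if t_min <= T <= t_max:
                    found.append((l, m))
    return found

# n = 7, k = 2: base period 2*pi*s1^(-1/2) = 4.1387 and period interval
# [4.0811, 28.328] from Table 1.  The 4:1 and 5:3 resonances quoted there
# appear in the output; the search also returns more complicated
# resonances that are not selected in the text.
res = resonances(7, 2, 4.1387, 4.0811, 28.328)
```

Note also that one endpoint of each period interval in Tables~1--3 is $2\pi/|\text{eigenvalue}|$, the period of the Lyapunov family at the equilibrium.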
In Figure~\ref{fig06} we have selected one choreography from each of these families. \[ \begin{tabular} [c]{|l|l|l|c|}\hline $k$ & Eigenvalue & Period Interval & Resonant Orbit\\\hline 2 & 2.27175$i$ & [2.7660, 30] & 5:1\\\hline 3 & 2.85442$i$ & [2.2012, 10.298] & 4:3\\\hline 4 & 3.06012$i$ & [2.0534, 30.411] & 5:2\\\hline 5 & 2.90713$i$ & [2.1613, 30.612] & 2:1\\\hline 6 & 2.26399$i$ & [2.7197, 10.008] & 5:3\\\hline 2 & 0.196565$i$ & [15.400, 31.927] & 5:1\\\hline \end{tabular} \] \centerline{Table~3: Data for $n=9$ bodies.} \begin{figure} \caption{ Planar choreographies for $n=7$ bodies: Top-Left: a $3:2$-resonant orbit for $k=3$. Top-Right: a $5:1$-resonant orbit for $k=3$. Center-Left: a $4:1$-resonant orbit for $k=2$. Center-Right: a $5:3$-resonant orbit for $k=2$. Bottom-Left: a $15:4$-resonant orbit for $k=4$. Bottom-Right: a $2:1$-resonant orbit for $k=4$. } \label{fig04} \end{figure} \begin{figure} \caption{ Planar choreographies for $n=8$. Top-Left: a $5:4$-resonant orbit for $k=4$. Top-Right: a $5:1$-resonant orbit for $k=5$. Center-Left: an $11:9$-resonant orbit for $k=3$. Center-Right: an $11:6$-resonant orbit for $k=2$. Bottom-Left: a $9:5$-resonant orbit for $k=5$. Bottom-Right: a $5:2$-resonant orbit for $k=2$. } \label{fig05} \end{figure} \begin{figure} \caption{ Planar choreographies for $n=9$. Top-Left: a $5:2$-resonant orbit for $k=4$. Top-Right: a $2:1$-resonant orbit for $k=5$. Center-Left: a $4:3$-resonant orbit for $k=3$. Center-Right: a $5:3$-resonant orbit for $k=6$. Bottom-Left: a $5:1$-resonant orbit for $k=2$. Bottom-Right: a $5:1$-resonant orbit for $k=2$. } \label{fig06} \end{figure} \section{Choreographies along Vertical Lyapunov families and their bifurcating families} In this section we give examples of choreographies along the Vertical Lyapunov families and along their bifurcating families. The projections of these choreographies onto the $xy$-plane are invariant under rotations of $2\pi/m$.
The bodies form groups of $d$-polygons, where $d$ is the greatest common divisor of $n$ and $k$. Since we obtain choreographies by rotating closed orbits, each choreography is contained in a surface of revolution. Indeed, due to the symmetries of the Vertical families the choreographies wind around a cylindrical manifold with winding number $\ell$, while for the Axial families the choreographies wind around a toroidal manifold with winding numbers $\ell$ and $m$. The spatial choreographies along the Vertical Lyapunov families are symmetric with respect to the reflections $-y$ and $-z$, when the $x$-axis is chosen to pass through the \textquotedblleft center\textquotedblright\ of the orbit. While planar choreographies for large values of $\ell$ and $m$ are somewhat difficult to appreciate, spatial choreographies of this type are easier to visualize because they wind around a cylindrical manifold. For even values of $n$ we mention the case $k=n/2$, for which the orbits of the Vertical family are known as Hip-Hop orbits. Choreographies along such families have been described before in \cite{TeVe07} and in \cite{ChFe08}, where they were found numerically as local minimizers of the action restricted to symmetric paths. Along Hip-Hop families we have located the choreography for $n=4$ found in \cite{TeVe07}. Several choreographies along Hip-Hop families are shown in the top four panels of Figure~\ref{fig07}. We have not computed all Vertical families, due to the presence of doubly resonant eigenvalues. For the case $n=9$, we show two choreographies along a family of periodic orbits that is not a Hip-Hop family, namely in the two bottom panels of Figure~\ref{fig07}. Such families were not determined in \cite{ChFe08} because they do not correspond to local minimizers of the action. Further investigation is needed for a systematic approach to determine these families.
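The combinatorial data attached to these choreographies can be tabulated mechanically. The sketch below (an illustration under our own conventions, not the authors' code) computes the polygon grouping $d=\gcd(n,k)$ and checks the choreography condition, $\ell$ and $m$ coprime with $k\ell-m\in n\mathbb{Z}$, against the resonant Axial orbits listed in the caption of Figure~\ref{fig08}.

```python
from math import gcd

def choreography_data(n, k, l, m):
    """Grouping and validity data for an l:m resonant orbit in mode k:
    the bodies move in groups of d-polygons with d = gcd(n, k), and a
    single choreography requires l, m coprime with k*l - m in n*Z."""
    return {
        "d": gcd(n, k),
        "choreography": gcd(l, m) == 1 and (k * l - m) % n == 0,
    }

# Resonant Axial orbits from the caption of Figure fig08 (k = n/2 in
# every case), as tuples (n, k, l, m).
axial_orbits = [(4, 2, 7, 10), (4, 2, 9, 14), (6, 3, 5, 9),
                (6, 3, 11, 15), (8, 4, 7, 12), (8, 4, 15, 28)]
```

All six entries satisfy the condition, with $d=n/2$, so in each case the bodies move in two groups of $(n/2)$-gons.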
We now present some choreographies along the families that emanate from the first bifurcation along Hip-Hop families in the rotating frame. The projections of these spatial choreographies onto the $xy$-plane are somewhat similar to those along the Planar Lyapunov orbits. However, the spatial periodic orbits in the rotating frame that correspond to these choreographies have only one symmetry, which is given by the transformation $(-y,-z)$ when the $x$-axis is chosen to pass through the \textquotedblleft center\textquotedblright of the orbit in the rotating frame; see the bottom left panel of Figure~\ref{fig03}. This is due to the fact that the Axial family arises from the Vertical Lyapunov family via a symmetry-breaking bifurcation. The symmetry implies that choreographies along the Axial families wind around a toroidal manifold with winding numbers $\ell$ and $m$. Since we assume that $\ell$ and $m$ are co-prime, the choreography path is known as a \textit{torus knot}. The simplest nontrivial example is the $(2,3)$-torus knot, also known as the \textit{trefoil knot}. We note that for other integers $\ell$ and $m$ such that $k\ell-m\notin n\mathbb{Z}$, the orbit of the $n$ bodies in the inertial frame consists of separate curves that form a torus link. Some of the choreographies along the Axial families are shown in Figure~\ref{fig08}. In Section 3 we already mentioned that there are planar bifurcation orbits along Axial families that give rise to planar families. Such an Axial family and its planar bifurcation orbit are shown in the center panels of Figure~\ref{fig02}, namely for the case $n=4$. Orbits along the two branches of the bifurcating planar family are shown in the bottom panels. Specifically, our numerical computations indicate that Hip-Hop families connect indirectly to planar families via the above-described tertiary bifurcation. 
Choreographies along such planar families have symmetries that are similar to those of Planar Lyapunov families, although in fact these families do not correspond to Lyapunov families. While there are no Planar Lyapunov families for $n=4, 5$, and $6$, there are such tertiary planar families for these values of $n$, and these contain planar choreographies. Such choreographies are called unchained polygons in \cite{ChFe08}, and there are infinitely many of these. In particular, the Vertical family for $n=3$ and $k=1$ leads indirectly to the planar $P_{12}$-family of Marchal \cite{Ma00}. We have continued such families numerically for $k=n/2$, where $n=4, 6$, and $8$, and six choreographies along them are shown in the panels of Figure~\ref{fig09}. \begin{figure} \caption{ Vertical Lyapunov families. Top-Left: a $3:2$-resonant Hip-Hop orbit along $V_{1}$. } \label{fig07} \end{figure} \begin{figure} \caption{ Resonant Axial orbits with $k=n/2$. Top-Left: a $7:10$ resonant Axial orbit for $n=4$. Top-Right: a $9:14$ resonant Axial orbit for $n=4$. Center-Left: a $5:9$ resonant Axial orbit for $n=6$. Center-Right: an $11:15$ resonant Axial orbit for $n=6$. Bottom-Left: a $7:12$ resonant orbit for $n=8$. Bottom-Right: a $15:28$ resonant Axial orbit for $n=8$. } \label{fig08} \end{figure} \begin{figure} \caption{ Unchained polygons for $k=n/2$. Top-Left: a $1:6$ resonant orbit for $n=4$. Top-Right: a $1:2$ resonant orbit for $n=4$. Center-Left: a $5:9$ resonant orbit for $n=6$. Center-Right: a $1:3$ resonant orbit for $n=6$. Bottom-Left: a $3:4$ resonant orbit for $n=8$. Bottom-Right: a $1:4$ resonant orbit for $n=8$. } \label{fig09} \end{figure} \section{Other configurations} \subsection{The Maxwell configuration} The choreographies in the preceding sections are unstable, in part because they arise directly or indirectly from an unstable relative equilibrium. To determine stable solutions it is helpful to consider orbits that emanate from a stable relative equilibrium.
The polygonal equilibrium is never stable; about half of its eigenvalues are stable and about half are unstable. For this reason we now consider the Maxwell configuration, consisting of an $n$-polygon with an additional massive body at the center, which is known to be stable for $n\geq7$ when the central mass is sufficiently large. Specifically, the central body has mass $m_{0}=\mu$, and the other $n$ bodies have equal mass $m_{j}=1$ for $j\in\{1,...,n\}$. Let $(u_{j},z_{j})\in\mathbb{C}\times\mathbb{R}$ be the position of body $j\in\{0,1,...,n\}$. The Newton equations of motion for the $n+1$ bodies in rotating coordinates \[ q_{j}(t)=(e^{i\sqrt{\omega}t}u_{j}(t),z_{j}(t)) \] have an equilibrium with $\left( u_{0},z_{0}\right) =\left( 0,0\right) $ and $\left( u_{j},z_{j}\right) =\left( e^{ij\zeta},0\right) $ for $j\in\{1,...,n\}$, where $\zeta=2\pi/n$ and \[ \omega=\mu+s_{1}\text{.} \] This well-known Maxwell configuration reduces to the polygonal relative equilibrium when $\mu=0$. For $n\geq7$ all planar eigenvalues are imaginary, and produce Planar Lyapunov families. The $n+1$ spatial eigenvalues include $0$ (due to symmetries), $i\sqrt{\mu+n}$ for $k=n$, and \[ i\sqrt{\mu+s_{k}},\qquad k=1,\ldots,n-1\text{.} \] The frequency $\sqrt{\mu+n}$ produces the Vertical Lyapunov family, which corresponds to the oscillatory ring in \cite{MeSc93}. For $k=n/2$, with $n$ even, we obtain a Hip-Hop family \cite{MeSc93}. For the Maxwell configuration we say that a Lyapunov orbit is $\ell:m$ resonant when its period satisfies \[ T_{\ell:m}=\frac{2\pi}{\sqrt{\mu+s_{1}}}\frac{\ell}{m}, \] where $\ell$ and $m$ are relatively prime such that $k\ell-m\in n\mathbb{Z}$. For an $\ell:m$ resonant Lyapunov orbit the $n$ bodies of equal mass follow the same path as in Theorem \ref{proposition}. To illustrate our numerical computations we chose the first stable case for $\mu=200$, namely $n=7$. We also consider the case $n=8$ with $\mu=300$.
We computed many families for $n=7$ and $n=8$, and we present only a few planar resonant orbits in Figure~\ref{fig10} and spatial resonant orbits in Figure~\ref{fig11}. \subsection{A triangular configuration} Here we present some families of periodic solutions that emanate from the \textquotedblleft triangular\textquotedblright\ equilibrium shown in the top-left panel of Figure~\ref{fig12}, with $9$ bodies of equal mass. Periodic solutions that emanate from the triangular equilibrium have been determined with the same numerical scheme used throughout this paper. However, a detailed description of these results is outside the scope of the current paper, whose aim is the continuation of solutions with symmetries that produce choreographies. The triangular equilibrium can be reached by following one of the families of spatial periodic orbits that bifurcate from the polygonal relative equilibrium for $n=9$. These spatial solutions have the symmetry \[ u_{j}(t)=\bar{u}_{n-j}(-t)\text{.} \] In fact, in addition to the Vertical Lyapunov families that produce choreographies from the polygonal configuration, we have also determined these solutions, which do not produce choreographies. To the best of our knowledge, the existence of these families has not been established before. \vskip0.25cm \textbf{Acknowledgements.} We would like to thank R. Montgomery, J. Montaldi, D. Ayala and L. Garc\'{\i}a-Naranjo for many interesting discussions. We also acknowledge the assistance of Ramiro Chavez Tovar with the preparation of figures and animations. \begin{figure} \caption{ Top-Left: a $4:3$ resonant orbit for $7+1$ bodies and $k=6$. Top-Right: a $5:2$ resonant orbit for $7+1$ bodies and $k=6$. Center-Left: a $5:3$ resonant orbit for $7+1$ bodies and $k=2$. Center-Right: a $7:3$ resonant orbit for $8+1$ bodies and $k=8$. Bottom-Left: a $5:2$ resonant orbit for $8+1$ bodies and $k=2$. Bottom-Right: a $5:3$ resonant orbit for $8+1$ bodies and $k=7$.
} \label{fig10} \end{figure} \begin{figure} \caption{ Top-Left: a $15:14$ resonant orbit for $7+1$ bodies and $k=7$. Top-Right: an $8:7$ resonant orbit for $7+1$ bodies and $k=7$. Center-Left: a $9:8$ resonant orbit for $8+1$ bodies and $k=8$. Center-Right: a $17:16$ resonant orbit for $8+1$ bodies and $k=8$. Bottom-Left: a $13:12$ resonant orbit for $8+1$ bodies and $k=4$. Bottom-Right: a $15:12$ resonant orbit for $8+1$ bodies and $k=4$. } \label{fig11} \end{figure} \begin{figure} \caption{ Some Lyapunov families with different symmetries for $n=9$. Top-Left: another equilibrium of the 9-body problem. Top-Right: an orbit along a bifurcating Planar family. Center-Left: an orbit along another bifurcating Planar family. Center-Right: an orbit along yet another bifurcating Planar family. Bottom-Left: an orbit along a bifurcating spatial family. Bottom-Right: an orbit along another bifurcating spatial family. } \label{fig12} \end{figure} \end{document}
\begin{document} \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{proposition}{Proposition} \newtheorem{Cor}{Corollary} \begin{center} {\bf Automorphisms of Formal Matrix Rings} \end{center} \textbf{Piotr Krylov}, National Research Tomsk State University, [email protected] \textbf{Askar Tuganbaev}, National Research University <<MPEI>>, [email protected] \textbf{Abstract.} We study automorphism groups of formal matrix algebras. We also consider automorphisms of ordinary matrix algebras (in particular, triangular matrix algebras). \textbf{Key words:} formal matrix algebra, triangular matrix algebras, automorphism \textbf{MSC2010: 16S50; 16D10} \tableofcontents \section{Introduction}\label{section1} Automorphisms and isomorphisms of various matrix rings are studied in many papers; for example, see \cite{AbyT15}, \cite{AnhW11}, \cite{AnhW13}, \cite{HaeH00}, \cite{Isa80}, \cite{Jon91}, \cite{Jon95}, \cite{Kez90}, \cite{KryN18}, \cite{KryN18b}, \cite{KryT21}, \cite{Lev75}, \cite{LiW12}, \cite{Tap15}, \cite{Tap17}, \cite{Tap18}, \cite{XiaW10}, \cite{XiaW14}. Some other mappings of matrix rings have also been studied; in particular, commuting and centralizing mappings (for example, see \cite{LiW12}, \cite{LiWF18}, \cite{XiaW10}, \cite{XiaW14}). The authors' work \cite{KryT21} is devoted to automorphisms and homomorphisms of formal matrix algebras. First, the authors consider automorphisms of the algebra $S=L\oplus M$, where $L$ is some subalgebra and $M$ is a nilpotent ideal. In such a case, one says that $S$ is a \textsf{splitting extension} of the ideal $M$ by the subalgebra $L$. In \cite{KryT21}, results on the group $\text{Aut}\,S$ are applied to formal matrix algebras. At the same time, some assertions there are not given in full generality. In the present paper, we significantly strengthen some results of \cite{KryT21} and give many new results on automorphisms of formal matrix rings.
We also study automorphisms of ordinary matrix rings (in particular, triangular matrix rings). We note that papers \cite{AnhW11} and \cite{AnhW13} contain interesting results and methods of searching for automorphisms of formal triangular matrix rings. We have taken and used some ideas from these papers. We consider only associative rings which are unital algebras over some commutative unital ring $T$. However, the ring $T$ itself plays almost no role in what follows. We sometimes write <<an algebra>> and sometimes <<a ring>>. Let $K$ be some algebra. Then $\text{Aut}\,K$ is the automorphism group of $K$, $\text{In(Aut}\,K)$ is the subgroup of inner automorphisms of $K$, and $\text{Out}\,K$ is the group of outer automorphisms of $K$, i.e., the factor group $\text{Aut}\,K/\text{In(Aut}\,K)$. If $S$ is a ring, then $U(S)$ is the group of invertible elements and $P(S)$ is the prime radical of the ring $S$. For an $S$-$S$-bimodule $M$, we denote by $\text{Aut}_SM$ the automorphism group of $M$. Let $R$, $S$ be two rings, let $A$ be an $R$-$S$-bimodule, and let $\alpha$, $\gamma$ be automorphisms of the rings $R$ and $S$, respectively. We can define a new bimodule structure on $A$ by setting $$ x\circ a=\alpha(x)a,\; a\circ y=a\gamma(y) \text{ for all } x\in R,\, y\in S,\, a\in A. $$ Usually, this bimodule is denoted by $_{\alpha}A_{\gamma}$, and the initial bimodule can be denoted by $_1A_1$. The semidirect product of groups $A$ and $B$ is denoted by $A\leftthreetimes B$. This notation is somewhat unusual, but it is convenient. The relation $G\cong A\leftthreetimes B$ means that the group $G$ contains a normal subgroup $H$ and a subgroup $E$ such that $$ G=H\cdot E,\; H\cap E=\langle e\rangle,\; A\cong H,\; B\cong E\cong G/H. $$ \section{Group $\text{Aut}\,K$ for Formal Matrix Rings $K$ with zero trace ideals}\label{section2} The authors' book \cite{KryT17} is devoted to formal matrix rings and formal matrix algebras.
For ease of reading, we briefly recall some of the material from \cite{KryT17} and \cite{KryT21}. We fix a positive integer $n\ge 2$. Let $R_1,\ldots,R_n$ be rings and let $M_{ij}$ be $R_i$-$R_j$-bimodules with $M_{ii}=R_i$, $i,j=1,\ldots,n$. Assume that for any subscripts $i,j,k=1,\ldots,n$ such that $i\ne j$ and $j\ne k$, an $R_i$-$R_k$-bimodule homomorphism $\varphi_{ijk}\colon M_{ij}\otimes_{R_j}M_{jk}\to M_{ik}$ is defined. We denote by $\varphi_{iik}$ and $\varphi_{ikk}$ the canonical isomorphisms $$ R_{i}\otimes_{R_i}M_{ik}\to M_{ik},\quad M_{ik}\otimes_{R_k}R_k\to M_{ik}, $$ respectively, $i,k=1,\ldots,n$. Instead of $\varphi_{ijk}(a\otimes b)$, we write $ab$. With these conventions, we also assume that $(ab)c=a(bc)$ for all elements $a\in M_{ij}$, $b\in M_{jk}$, $c\in M_{k\ell}$ and all subscripts $i,j,k,\ell$. We denote by $K$ the set of all square matrices $(a_{ij})$ of order $n$ with values in the bimodules $M_{ij}$. With respect to the standard operations of matrix addition and matrix multiplication, $K$ forms a ring. We can write it in the following form: $$ K=\begin{pmatrix} R_1&M_{12}&\ldots&M_{1n}\\ M_{21}&R_{2}&\ldots&M_{2n}\\ \ldots&\ldots&\ldots&\ldots\\ M_{n1}&M_{n2}&\ldots&R_{n} \end{pmatrix}. $$ The ring $K$ is called a \textsf{formal} (or \textsf{generalized}) \textsf{matrix ring} of order $n$. If $M_{ij}=0$ for all $i,j$ with $i>j$, then $K$ is a \textsf{formal (upper) triangular matrix ring}. For every $k=1,\ldots,n$, we set $$ I_k=\sum\limits_{i\ne k}\text{Im}(\varphi_{kik}),\;\text{or, in other words,}\; I_k=\sum\limits_{i\ne k}M_{ki}M_{ik}. $$ Here $M_{ki}M_{ik}$ is the set of all finite sums of elements of the form $ab$, where $a\in M_{ki}$ and $b\in M_{ik}$. Then $I_k$ is an ideal of the ring $R_k$. One says that $I_1,\ldots,I_n$ are the \textsf{trace ideals} of the ring $K$. As usual, we identify some matrices with the corresponding elements.
For example, we can identify the matrix of the form $\begin{pmatrix} r&0\\ 0&0 \end{pmatrix}$ with the element $r$, and so on. Similar agreements also apply to sets of matrices. Let $K$ be some formal matrix algebra. We denote by $L$ the subring of all diagonal matrices and by $M$ the subgroup of all matrices with zeros on the main diagonal. We can write the direct sum $K=L\oplus M$ of Abelian groups. The subgroup $M$ is an ideal if and only if all trace ideals of the ring $K$ are equal to zero. In this case, one says that $K$ is a ring \textsf{with zero trace ideals}. These rings include all triangular matrix rings. Let $K$ be a formal matrix ring with zero trace ideals. We have a splitting extension $K=L\oplus M$, where $M$ is a nilpotent ideal of nilpotence degree $\le n$ and an $L$-$L$-bimodule. In \cite{KryT21}, automorphisms of such rings $K$ are represented by certain matrices of order $2$. This is done as follows. To an arbitrary automorphism $\varphi$ of the algebra $K$, one can associate, in a standard way, the matrix $\begin{pmatrix} \alpha&\gamma\\ \delta&\beta \end{pmatrix}$. Here $$ \alpha\colon L\to L,\; \beta\colon M\to M,\; \gamma\colon M\to L,\; \delta\colon L\to M $$ are $T$-module homomorphisms and $$ \varphi(x+y)=\begin{pmatrix} \alpha&\gamma\\ \delta&\beta \end{pmatrix} \begin{pmatrix} x\\ y \end{pmatrix}= (\alpha(x)+\gamma(y))+(\delta(x)+\beta(y)) $$ for all $x\in L$ and $y\in M$. As in \cite{KryT21}, we mainly consider (except for Section 10) only the <<triangular>> case; we mean that $\gamma=0$ for any automorphism $\varphi$. In what follows, we do not distinguish the automorphism $\varphi$ and the matrix corresponding to it. For brevity, we sometimes write <<triangular automorphism $\varphi$>> if $\varphi=\begin{pmatrix} \alpha&0\\ \delta&\beta \end{pmatrix}$, and <<diagonal automorphism $\varphi$>> if $\varphi=\begin{pmatrix} \alpha&0\\ 0&\beta \end{pmatrix}$.
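A minimal concrete illustration, with all choices our own: take $K$ to be the ring of $2\times2$ upper triangular integer matrices, so that $L$ is the diagonal part, $M$ is the strictly upper triangular part, $M^{2}=0$, and the trace ideals are zero. Conjugation by $1+y$ with $y\in M$ is then a triangular automorphism $\begin{pmatrix} 1&0\\ \delta&1 \end{pmatrix}$ with $\delta(x)=yx-xy$, and $\delta$ is a derivation of $L$, which can be checked numerically:

```python
def mat_mul(a, b):
    """Product of 2x2 integer matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def conj(y, w):
    """Conjugation of w by 1 + y, where y is strictly upper triangular;
    (1 + y)^(-1) = 1 - y because y*y = 0."""
    one = [[1, 0], [0, 1]]
    p = mat_add(one, y)                                 # 1 + y
    q = mat_add(one, [[-c for c in row] for row in y])  # 1 - y
    return mat_mul(mat_mul(p, w), q)

def delta(y, x):
    """The map delta(x) = y*x - x*y associated with this automorphism."""
    yx, xy = mat_mul(y, x), mat_mul(x, y)
    return [[yx[i][j] - xy[i][j] for j in range(2)] for i in range(2)]
```

On $L$ this automorphism acts as $x\mapsto x+\delta(x)$, and it fixes $M$ pointwise, so in the matrix notation above $\alpha=1$, $\beta=1$, and $\gamma=0$.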
Let $\varphi=\begin{pmatrix} \alpha&0\\ \delta&\beta \end{pmatrix}$ be some automorphism of the algebra $K$. In this case, $\alpha$, $\delta$, and $\beta$ satisfy the relations of \cite[Section 3]{KryT21}. In particular, $\alpha$ is an automorphism of the algebra $L$ and $\beta$ is an automorphism of the algebra $M$ (as a non-unital algebra). If $M^2=0$, then $\delta$ is a derivation of the algebra $L$ with values in the bimodule $_{\alpha}M_{\alpha}$ and $\beta$ is an isomorphism of $L$-$L$-bimodules $M\to {}_{\alpha}M_{\alpha}$. We denote by $\text{In}_1(\text{Aut}\,K)$ (resp., $\text{In}_0(\text{Aut}\,K)$) the subgroup of inner automorphisms of the algebra $K$ defined by invertible elements of the form $1+y$, $y\in M$ (resp., by invertible elements of the algebra $L$). The first subgroup is normal in $\text{Aut}\,K$ and we have the semidirect decomposition $$ \text{In(Aut}\,K)=\text{In}_1(\text{Aut}\,K)\leftthreetimes \text{In}_0(\text{Aut}\,K) $$ (see \cite[Section 4]{KryT21}). We now define a homomorphism and several groups (see \cite[Section 3]{KryT21}). Let $f\colon \text{Aut}\,K\to \text{Aut}\,L$ be the homomorphism such that $f(\varphi)=\alpha$ for every automorphism $\varphi=\begin{pmatrix} \alpha&0\\ \delta&\beta \end{pmatrix}$. Next, let $\Lambda$ be the subgroup of diagonal automorphisms and let $\Psi$ be the subgroup consisting of automorphisms of the form $\begin{pmatrix} 1&0\\ 0&\beta \end{pmatrix}$. In addition, we denote by $\Omega$ the image of the homomorphism $f$. Further, let $\Phi$ be the normal subgroup $$ \left\{\varphi\in \text{Aut}\,K \,|\, \varphi=\begin{pmatrix} \alpha&0\\ \delta&\beta \end{pmatrix},\;\alpha\in\text{In(Aut}\,L)\right\} $$ of the group $\text{Aut}\,K$. Information on these groups is very important for understanding the structure of the group $\text{Aut}\,K$. Let $e_1,\ldots,e_n$ be the identity elements of the rings $R_1,\ldots,R_n$, respectively. We identify them with the corresponding matrix units.
We formulate two conditions for the algebra $K$ (see \cite[Section 9]{KryT21}). \textbf{(I)} For any $\varphi\in\text{Aut}\,K$, the relation $\varphi M=M$ holds, i.e., every automorphism is triangular. \textbf{(II)} For any $\varphi\in\text{Aut}\,K$ and every $i=1,\ldots,n$, we have the inclusion $\varphi(e_i)\in e_k+M$ for some $k$. \textsf{Condition \textbf{(II)} implies condition \textbf{(I)}.} We now state, in detail and in a more complete form, the main results of \cite[Sections 8 and 9]{KryT21} about the group $\text{Aut}\,K$, where $K$ is a formal matrix algebra with zero trace ideals. First, we write several useful relations and isomorphisms: $$ \Psi\cap \text{In(Aut}\,K)=\Psi\cap \text{In}_0(\text{Aut}\,K)=\Psi_0;\leqno \textbf{(1)} $$ $$ \Lambda/(\text{In}_0(\text{Aut}\,K)\cdot \Psi)\cong \Omega/\text{In(Aut}\,L); \leqno \textbf{(2)} $$ $$ \Phi/\text{Ker}\,f\cong \text{In}_0(\text{Aut}\,K)/\Psi_0\cong \text{In(Aut}\,L);\leqno \textbf{(3)} $$ $$ \Phi/\text{In(Aut}\,K)\cong \Psi/\Psi_0.\leqno \textbf{(4)} $$ The group $\Psi_0$ is the subgroup of inner automorphisms of the algebra $K$ defined by the central elements of the algebra $L$ (this group is defined in \cite[Section 4]{KryT21}). In the following theorem, we collect the main information about the group $\text{Aut}\,K$. \textbf{Theorem 2.1.} Let $K$ be a formal matrix algebra with zero trace ideals such that condition \textbf{(I)} holds. Then we have the following assertions. 
\textbf{(a)} We have the relations $$ \text{Aut}\,K=\text{In}_1(\text{Aut}\,K)\leftthreetimes \Lambda; \leqno \textbf{a1)} $$ $$ \text{Ker}\,f=\text{In}_1(\text{Aut}\,K)\leftthreetimes \Psi;\leqno \textbf{a2)} $$ $$ \Phi=\text{In}(\text{Aut}\,K)\cdot \Psi= \text{In}_1(\text{Aut}\,K)\leftthreetimes (\text{In}_0(\text{Aut}\,K)\cdot \Psi).\leqno \textbf{a3)} $$ \textbf{(b)} We have the isomorphisms $$ \text{Aut}\,K/\text{Ker}\,f\cong \Omega\cong \Lambda/\Psi; \leqno \textbf{b1)} $$ $$ \text{Aut}\,K/\Phi\cong \Omega/\text{In}(\text{Aut}\,L).\leqno \textbf{b2)} $$ \textbf{(c)} The group $\text{Out}\,K$ has a normal subgroup isomorphic to $\Psi/\Psi_0$ such that the corresponding factor group is isomorphic to $\Omega/\text{In}(\text{Aut}\,L)$. \textbf{(d)} If the relation $\Omega=\text{In}(\text{Aut}\,L)$ holds, then $$ \text{Aut}\,K=\Phi=\text{In}_1(\text{Aut}\,K)\leftthreetimes (\text{In}_0(\text{Aut}\,K)\cdot \Psi); \leqno \textbf{d1)} $$ $$ \text{Out}\,K\cong \Psi/\Psi_0.\leqno \textbf{d2)} $$ \textbf{(e)} If the relation $\Psi=\Psi_0$ holds, then $$ \Phi=\text{In}(\text{Aut}\,K), \quad \text{Out}\,K\cong \Omega/\text{In}(\text{Aut}\,L). $$ We conclude that if the structure of the groups $\Psi$ and $\Omega$ is known, then the structure of the groups $\text{Aut}\,K$ and $\text{Out}\,K$ is known to a considerable extent. In \cite{KryT21}, the authors calculate the groups $\Psi$ and $\Omega$ for an algebra $K$ over a commutative indecomposable ring under some conditions. In Sections $7$ and $8$, these results are developed considerably; the algebras in question are defined in Section 5. A ring $R$ is called \textsf{indecomposable} if $1$ is the unique non-zero central idempotent of $R$. \textbf{Corollary 2.2.} Let all factor rings $R_1/P(R_1),\ldots, R_n/P(R_n)$ be indecomposable. Then conditions \textbf{(II)} and \textbf{(I)} hold for the algebra $K$; consequently, we obtain the assertions of Theorem 2.1. 
As we agreed, we identify the ring $R_i$ with the ring $e_iKe_i$; we also identify the bimodule $M_{ij}$ with the bimodule $e_iKe_j$. The example from \cite{Jon95}, also given in \cite{KryT21}, shows that automorphisms can ``mix'' the rings $R_i$ and the bimodules $M_{ij}$. In \cite{KryT21}, we highlighted some conditions preventing such mixing. Taking Theorem 2.1(a) into account, we can restrict ourselves to diagonal automorphisms. \textbf{Theorem 2.3.} Assume that all factor rings $R_1/P(R_1),\ldots, R_n/P(R_n)$ are indecomposable. Let $\varphi=\begin{pmatrix}\alpha&0\\ 0&\beta\end{pmatrix}$ be a diagonal automorphism of the algebra $K$. Then the automorphism $\alpha$ of the algebra $L$ permutes the rings $R_1,\ldots, R_n$ and the automorphism $\beta$ of the $L$-$L$-bimodule $M$ permutes the bimodules $M_{ij}$ according to some permutation $\tau$ of degree $n$. In addition, the restriction of $\beta$ to $M_{ij}$ is a bimodule isomorphism $M_{ij}\to M_{\tau(i)\tau(j)}$ (with respect to the ring isomorphisms $\alpha\big|_{R_i}\colon R_i\to R_{\tau(i)}$ and $\alpha\big|_{R_j}\colon R_j\to R_{\tau(j)}$). \section{Formal Triangular Matrix Rings}\label{section3} In \cite{KryT21}, formal triangular matrix rings were not specifically considered. They have a certain specificity which allows one to penetrate more deeply into the structure both of the rings themselves and of their automorphism groups; see \cite{BirHKP00}. As stated in Section 1, we recall that all our rings are $T$-algebras. For the ring $K$, we give a condition on the rings $R_1,\ldots,R_n$ which is weaker than the condition from Corollary 2.2 and Theorem 2.3; this condition guarantees that condition \textbf{(I)} holds. \textbf{Definition 3.1 \cite{AnhW13}.} An idempotent $e$ of a ring $R$ is called \textsf{semicentral} if $(1-e)Re=0$. A ring $R$ is said to be \textsf{strongly indecomposable} if $1$ is its unique non-zero semicentral idempotent. For a ring $R$, we consider the following conditions. 
\textbf{(1)} $R$ is an indecomposable ring. \textbf{(2)} $R$ is a strongly indecomposable ring. \textbf{(3)} The factor ring $R/P(R)$ is indecomposable. \textbf{(4)} For any idempotent $e$ of the ring $R$, the relation $(1-e)Re=0$ implies the relation $eR(1-e)=0$. These conditions are related as follows: $$ (3)\,\Rightarrow\,(2)\,\Rightarrow\,(1),\qquad (2)\,\Rightarrow\,(4). $$ To conditions \textbf{(I)} and \textbf{(II)} from Section $2$, we add another condition on the formal matrix ring $K$: \textbf{(III)} Each of the rings $R_1,\ldots,R_n$ satisfies condition \textbf{(4)} above. In Section $2$, it was remarked that condition \textbf{(II)} implies condition \textbf{(I)}. We show below that condition \textbf{(III)} also implies condition \textbf{(I)} in the ``triangular'' case. We write again the formal triangular matrix ring $K$ of order $n$ in full form: $$ K=\begin{pmatrix} R_1&M_{12}&\ldots&M_{1n}\\ 0&R_2&\ldots&M_{2n}\\ \ldots&\ldots&\ldots&\ldots\\ 0&0&\ldots&R_n \end{pmatrix}. $$ \textbf{Proposition 3.2.} Let $K$ be a formal triangular matrix algebra and let all rings $R_1,\ldots,R_n$ satisfy condition \textbf{(4)}. Then $K$ satisfies condition \textbf{(I)}, i.e., any automorphism of the algebra $K$ is triangular. $\lhd$ As in Section $2$, we denote by $e_1,\ldots,e_n$ the identity elements of the rings $R_1,\ldots,R_n$, respectively. We write the splitting extension $K=L\oplus M$. Suppose, on the contrary, that there exists an automorphism $\varphi$ of the algebra $K$ which is not triangular. For every $i=1,\ldots,n$, we have $$ \varphi(e_i)=g_i+y_i, \; \text{where } g_i\in L, \, y_i\in M. $$ Here $g_1,\ldots,g_n$ is a complete orthogonal system of idempotents in $L$. We write these idempotents with respect to the decomposition $L=R_1\oplus\ldots\oplus R_n$: $$ g_1=g_1^{(1)}+\ldots+g_n^{(1)}, $$ $$ \ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots \eqno (1) $$ $$ g_n=g_1^{(n)}+\ldots+g_n^{(n)}. 
$$ All first summands in the relations $(1)$ form a complete orthogonal system of idempotents in the ring $R_1$, and so on. Since $e_iKe_j=0$ for any $i,j$ with $i>j$, we have that $\varphi(e_i)K\varphi(e_j)=0$ and, consequently, $g_iLg_j=0$ for all such $i$ and $j$. Using the relations $(1)$, we obtain the following relations: $$ \left(g_1^{(n)}+g_1^{(n-1)}+\ldots+g_1^{(2)}\right)R_1g_1^{(1)}=0, $$ $$ \left(g_2^{(n)}+g_2^{(n-1)}+\ldots+g_2^{(3)}\right)R_2\left(g_2^{(1)}+g_2^{(2)}\right)=0, $$ $$ \ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots \eqno (2) $$ $$ g_n^{(n)}R_n\left(g_n^{(1)}+g_n^{(2)}+\ldots+g_n^{(n-1)}\right)=0. $$ By the assumptions on the rings $R_1,\ldots,R_n$, we can write the following relations: $$ g_1^{(1)}R_1\left(g_1^{(n)}+g_1^{(n-1)}+\ldots+g_1^{(2)}\right)=0, $$ $$ \left(g_2^{(1)}+g_2^{(2)}\right)R_2\left(g_2^{(n)}+g_2^{(n-1)}+\ldots+g_2^{(3)}\right)=0, $$ $$ \ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots \eqno (3) $$ $$ \left(g_n^{(1)}+g_n^{(2)}+\ldots+g_n^{(n-1)}\right)R_ng_n^{(n)}=0. $$ From the relations $(3)$, we obtain that $$ g_k^{(i)}R_kg_k^{(j)}=0 \eqno (4) $$ for all $i,j,k=1,\ldots,n$ with $i<j$. Since the automorphism $\varphi$ is not triangular, there exist subscripts $i$ and $j$ ($i<j$) such that $\varphi M_{ij}$ is not contained in $M$. Therefore, it follows from the relation $\varphi M_{ij}=(g_i+y_i)(L\oplus M)(g_j+y_j)$ that $g_iLg_j\ne 0$. This implies that $$ g_1^{(i)}R_1g_1^{(j)}\oplus\ldots\oplus g_n^{(i)}R_ng_n^{(j)}\ne 0. $$ Therefore, $g_k^{(i)}R_kg_k^{(j)}\ne 0$ for some $k$. This contradicts $(4)$. Consequently, the assertion of the proposition is true.~$\rhd$ It is quite common for a formal triangular matrix ring to satisfy condition \textbf{(III)}. This is confirmed by the following result. \textbf{Lemma 3.3.} If a ring $S$ is semiprime, normal (e.g., commutative), or strongly indecomposable, then $S$ satisfies condition \textbf{(4)}. 
$\lhd$ For a normal or strongly indecomposable ring, the assertion is obvious. Assume that the ring $S$ is semiprime but does not satisfy condition \textbf{(4)}. Then $S$ contains an idempotent $e$ such that $(1-e)Se=0$ but $eS(1-e)\ne 0$. The ring $S$ can be identified with the formal matrix ring $\begin{pmatrix} eSe&eS(1-e)\\ 0&(1-e)S(1-e) \end{pmatrix}$. In this ring, $eS(1-e)$ is a non-zero nilpotent ideal; this is a contradiction. Consequently, $S$ satisfies condition \textbf{(4)}.~$\rhd$ The following result strengthens \cite[Corollary 9.9(2)]{KryT21}. \textbf{Corollary 3.4.} Let $K$ be a formal triangular matrix algebra such that each of the rings $R_1,\ldots,R_n$ satisfies condition \textbf{(4)}. Then the assertions of Theorem 2.1 hold for $K$. We give a particular case of Corollary 3.4. \textbf{Corollary 3.5.} Assume that $K$ is a formal triangular matrix algebra such that $R_1,\ldots,R_n$ are commutative rings and $\Omega=\langle 1\rangle$. For example, let $R_1=\ldots=R_n=T$, where $T$ is a commutative indecomposable ring and $M_{ij}\ne 0$ for all $i,j$ with $i<j$. Then we have the relation $\text{Aut}\,K=\text{In}_1(\text{Aut}\,K)\leftthreetimes \Psi$. $\lhd$ It follows from the relation $\Omega=\langle 1\rangle$ that $\text{Aut}\,K=\text{Ker}\,f$. Therefore, the relation from the corollary follows from Corollary 3.4 and Theorem 2.1. We pass to the particular case. It follows from assertions 2.1, 3.2 and 3.3 that $\text{Aut}\,K=\text{In}_1(\text{Aut}\,K)\leftthreetimes \Lambda$. Let $\varphi=\begin{pmatrix} \alpha&0\\0&\beta \end{pmatrix}\in \Lambda$. The automorphism $\beta$ permutes the bimodules $M_{ij}$ according to some permutation $\tau$ (Theorem 2.3). Since all bimodules $M_{ij}$ are non-zero, $\tau$ is the identity permutation. 
Therefore, it follows from the relations $R_1=\ldots=R_n=T$ that $\alpha=1$; hence we obtain the relation $\Omega=\langle 1\rangle$.~$\rhd$ We specialize Theorem 2.3 to the case where $K$ is a formal triangular matrix ring. \textbf{Corollary 3.6.} Let $K$ be a formal triangular matrix algebra such that condition \textbf{(I)} holds and the factor rings $R_1/P(R_1),\ldots,R_n/P(R_n)$ are indecomposable. For example, let the rings $R_1,\ldots,R_n$ be strongly indecomposable. In addition, let $M_{ij}\ne 0$ for all $i,j$ with $i<j$. Then any diagonal automorphism $\varphi=\begin{pmatrix}\alpha&0\\ 0&\beta\end{pmatrix}$ of the algebra $K$ leaves each of the rings $R_1,\ldots,R_n$ in place, and the restriction $\beta\big|_{M_{ij}}$ is an isomorphism of $R_i$-$R_j$-bimodules $M_{ij}\to {}_{R_i}(M_{ij})_{R_j}$ for all $i,j$ with $i<j$. $\lhd$ The automorphisms $\alpha$ and $\beta$ permute the rings $R_1,\ldots,R_n$ and the bimodules $M_{ij}$ according to some permutation $\tau$. Similar to the proof of Corollary 3.5, we obtain that $\tau$ is the identity permutation.~$\rhd$ \section{Subgroup $\Psi$ and Inner Automorphisms}\label{section4} At the beginning of this section, $K$ denotes some formal matrix algebra with zero trace ideals. In \cite[Section 3]{KryT21}, we formulated the problem of computing the subgroup $\Psi$. Theorem 2.1 shows the important role of this subgroup in the description problem for the automorphism group of the algebra $K$. We give several remarks on the subgroup $\Psi$. Everything below is true for any algebra $K$ (i.e., it is not assumed that all its automorphisms are triangular). We take an arbitrary automorphism $\varphi=\begin{pmatrix}1&0\\ 0&\beta\end{pmatrix}\in \Psi$. It is known that $\beta$ is an automorphism of the algebra $M$ (as a non-unital algebra) and an automorphism of the $L$-$L$-bimodule $M$. 
Conversely, if a mapping $\beta$ satisfies these properties, then $\begin{pmatrix}1&0\\ 0&\beta\end{pmatrix}$ is an automorphism of the algebra $K$ contained in $\Psi$. We clarify this observation as follows. The automorphism $\beta$ induces an automorphism $\beta_{ij}$ on every $R_i$-$R_j$-bimodule $M_{ij}$. At the same time, for any pairwise distinct subscripts $i,j,k$ and elements $a\in M_{ij}$, $b\in M_{jk}$, the relation $$ \beta_{ik}(a\cdot b)=\beta_{ij}(a)\cdot \beta_{jk}(b) \eqno (*) $$ must hold. Conversely, for any two subscripts $i,j$, suppose we are given an automorphism $\beta_{ij}$ of the bimodule $M_{ij}$ such that relation $(*)$ holds for all values of the symbols in it. By setting $$ \beta=\sum_{i,j=1,\,i\ne j}^n\beta_{ij}, $$ we obtain an automorphism $\begin{pmatrix}1&0\\ 0&\beta\end{pmatrix}$ which belongs to the subgroup $\Psi$. We obtain a group embedding $$ \Psi\to \text{Aut}_LM= \prod_{i,j=1,\,i\ne j}^n\text{Aut}_LM_{ij} $$ by assigning the set of restrictions $\beta\big|_{M_{ij}}$ to the automorphism $\begin{pmatrix}1&0\\ 0&\beta\end{pmatrix}$ from the subgroup $\Psi$. If the algebra $K$ satisfies the property $M^2=0$, then the correspondence $\begin{pmatrix}1&0\\ 0&\beta\end{pmatrix}\to \beta$ defines a group isomorphism $\Psi\cong \text{Aut}_LM$. There is another situation where it is also possible to specify the structure of the subgroup $\Psi$. To describe this situation, we impose an additional restriction on the ring $K$. Namely, for any subscripts $i,j,k$ with $i<j<k$, we assume that the $R_i$-$R_k$-bimodule homomorphisms $\varphi_{ijk}\colon M_{ij}\otimes M_{jk}\to M_{ik}$, used in the definition of the multiplication in $K$, are isomorphisms. In Section 2, we agreed to write $ab$ instead of $\varphi_{ijk}(a\otimes b)$; in addition, the symbol $M_{ij}\cdot M_{jk}$ denotes the set of all finite sums of elements of the form $ab$, where $a\in M_{ij}$, $b\in M_{jk}$. 
In other words, $M_{ij}\cdot M_{jk}$ is the image of the homomorphism $\varphi_{ijk}$. It is also clear what we mean by the product of several bimodules $M_{ij}$. Thus, for $i<j<k$ or $i>j>k$, we have the relation $M_{ij}\cdot M_{jk}=M_{ik}$. We also have the relations $$ M_{ik}=M_{i,i+1}\cdot M_{i+1,i+2}\cdot\ldots\cdot M_{k-1,k}, $$ $$ M_{ik}=M_{i,i-1}\cdot M_{i-1,i-2}\cdot\ldots\cdot M_{k+1,k}, $$ for $i<k$ and $i>k$, respectively. Next, we obtain the following. For every $i=1,\ldots,n-1$, suppose we are given an automorphism $\beta_{i,i+1}$ of the bimodule $M_{i,i+1}$. These automorphisms induce a uniquely defined automorphism $\beta_{ik}$ of the bimodule $M_{ik}$ for all $i,k$ with $i<k$. Similarly, the set of automorphisms $\beta_{i,i-1}$ of the bimodules $M_{i,i-1}$ for $i=2,\ldots,n$ induces an automorphism $\beta_{ik}$ of the bimodule $M_{ik}$ for all $i,k$ with $i>k$. Moreover, relation $(*)$ holds. Thus, the automorphisms $\beta_{i,i+1}$ ($i=1,\ldots,n-1$) and $\beta_{i,i-1}$ ($i=2,\ldots,n$) induce a uniquely defined automorphism $\beta$ of the algebra $M$ and of the $L$-$L$-bimodule $M$. Consequently, the automorphism $\begin{pmatrix}1&0\\ 0&\beta\end{pmatrix}$ is contained in the subgroup $\Psi$. We can write the following result. \textbf{Corollary 4.1.} In the above situation, we have an isomorphism $$ \Psi\cong \prod_{i=1}^{n-1}\text{Aut}(M_{i,i+1})\times \prod_{i=2}^{n}\text{Aut}(M_{i,i-1}). $$ If $K$ is a triangular matrix ring, then the second factor on the right-hand side is absent. In the second half of this section, we touch on the following familiar question: when are all automorphisms of the algebra $K$ inner? It was considered in \cite[Section 10]{KryT21} for triangular matrix algebras. Here much depends on the subgroup $\Psi$. 
\textsf{Up to the end of this section, we assume that the algebra $K$ of formal matrices with zero trace ideals satisfies condition \textbf{(I)} given in Section 2.} The following facts follow from Theorem 2.1(c) and relation $(1)$ before this theorem. \textbf{Corollary 4.2.}\\ \textbf{1.} Every automorphism of the algebra $K$ is inner if and only if $$ \Psi=\Psi_0 \text{ and } \Omega=\text{In(Aut}\,L). $$ \textbf{2.} The inclusion $\Psi\subseteq\text{In(Aut}\,K)$ is equivalent to the relation $\Psi=\Psi_0$. We give several remarks related to the relation $\Psi=\Psi_0$. It is hardly possible to find criteria for this relation to hold without additional information about the rings $R_i$ and the bimodules $M_{ij}$. What can be said about automorphisms from $\Psi_0$? Let an automorphism $\varphi=\begin{pmatrix}1&0\\ 0&\beta\end{pmatrix}$ belong to $\Psi_0$ and be defined by an invertible central matrix $v=\text{diag}(v_1,\ldots,v_n)\in L$, where $L=\oplus_{i=1}^nR_i$. For any distinct subscripts $i,j$ and any $y\in M_{ij}$, we have the relation $$ \varphi(y)=\beta(y)=v^{-1}yv=v_i^{-1}yv_j; \eqno (1) $$ this is all we know about $\varphi$. \textsf{Now we assume that the rings $R_1,\ldots,R_n$ have pairwise isomorphic centers.} We identify these centers and speak of the ``common center''; we denote it by $C$. Assume that the automorphism group of every $R_i$-$R_j$-bimodule $M_{ij}$ consists of multiplications by invertible elements from the center $C$, i.e., $\text{Aut}(M_{ij})=U(C)$. We take $\varphi=\begin{pmatrix}1&0\\ 0&\beta\end{pmatrix}\in \Psi$. Let $c_{ij}$ be an invertible element of the ring $C$ such that the relation $\beta(y)=c_{ij}y$ holds for $y\in M_{ij}$. For the automorphism $\varphi$, the relation $(*)$ takes the form $$ c_{ik}ab=c_{ij}c_{jk}ab \eqno (**) $$ $$ \text{or } (c_{ik}-c_{ij}c_{jk})M_{ij}M_{jk}=0. 
\eqno (***) $$ Thus, we can assign to the automorphism $\varphi$ a system of invertible elements $c_{ij}$, $i,j=1,\ldots,n$, where we assume that $c_{ii}=1$. For these elements, relations $(**)$ and $(***)$ hold. The above group embedding $\Psi\to \text{Aut}_LM$ turns into an embedding $\Psi\to \prod_{n^2-n}U(C)$. It is difficult to find its image in general. In Section 7, we do this for the matrix ring over a given ring $R$. \textbf{Proposition 4.3.} Let $K$ be the algebra from Theorem 2.1. In addition, we assume that all rings $R_i$ have a common center $C$ and that $cm=mc$ for all $c\in C$ and $m\in M$. Under these assumptions, the relation $\Psi=\Psi_0$ is true if and only if for any automorphism $\varphi=\begin{pmatrix}1&0 \\ 0&\beta\end{pmatrix}\in \Psi$, there exist invertible elements $c_{ij}\in C$, $i,j=1,\ldots,n$, such that $c_{ii}=1$ and \textbf{a)} $\varphi(y)=c_{ij}y$ for any $i,j$ and $y\in M_{ij}$; \textbf{b)} $c_{ij}\cdot c_{jk}=c_{ik}$ for all $i,j,k$. $\lhd$ Assume that the relation $\Psi=\Psi_0$ is true and $\varphi=\begin{pmatrix}1&0 \\ 0&\beta\end{pmatrix}\in \Psi$. Continuing relation $(1)$, we obtain $\varphi(y)=v_i^{-1}v_jy$. We set $$ c_{ij}=v_i^{-1}v_j \text{ and } c_{ii}=1 \text{ for all } i,j. $$ The elements $c_{ij}$ satisfy \textbf{a)} and \textbf{b)}. Conversely, let there exist, for every automorphism $\varphi\in \Psi$, elements $c_{ij}$ with the properties mentioned in \textbf{a)} and \textbf{b)}. We choose some invertible element $v_1$ in $C$ and set $v_2=v_1c_{12},\ldots,v_n=v_1c_{1n}$. Then $v_i^{-1}v_j=c_{ij}$ for all $i,j$. In addition, conjugation by the matrix $\text{diag}(v_1,\ldots,v_n)$ coincides with the automorphism $\varphi$. Consequently, $\varphi\in \Psi_0$ and $\Psi=\Psi_0$.~$\rhd$ \textbf{Corollary 4.4.} To the conditions of Proposition 4.3, we add one more condition:\\ $M_{ij}M_{jk}$ is a faithful $C$-module for all $i,j,k$.\\ Then item \textbf{b)} of the proposition can be omitted. 
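The correspondence in the proof of Proposition 4.3 between a family $c_{ij}$ with the cocycle property \textbf{b)} and a diagonal matrix $\text{diag}(v_1,\ldots,v_n)$ can be checked on a toy example. The following sketch is our own illustration (not from the paper), with rational numbers playing the role of the center $C$:

```python
from fractions import Fraction as F

# Toy illustration of Proposition 4.3 (our own, not from the paper): over
# C = Q, a family c_ij with c_ii = 1 and the cocycle condition
# c_ij * c_jk = c_ik arises from v = diag(v_1, ..., v_n) via c_ij = v_i^{-1} v_j,
# and conversely v can be reconstructed from c up to a common factor v_1.

def cocycle_from_diag(v):
    n = len(v)
    return {(i, j): v[i] ** -1 * v[j] for i in range(n) for j in range(n)}

def is_cocycle(c, n):
    return all(c[i, j] * c[j, k] == c[i, k]
               for i in range(n) for j in range(n) for k in range(n))

def diag_from_cocycle(c, n, v1=F(1)):
    # As in the proof: fix v_1 and set v_j = v_1 * c_{1j}.
    return [v1 * c[0, j] for j in range(n)]

v = [F(2), F(3), F(5, 7)]
c = cocycle_from_diag(v)
assert is_cocycle(c, 3) and all(c[i, i] == 1 for i in range(3))

# Reconstruction yields a diagonal inducing the same cocycle:
w = diag_from_cocycle(c, 3, v1=v[0])
assert cocycle_from_diag(w) == c
```

The reconstruction step mirrors the second half of the proof: only $v_1$ is a free choice, which reflects that $\text{diag}(v_1,\ldots,v_n)$ is determined by $\varphi$ only up to a central invertible factor.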
As for the relation $\Omega=\text{In(Aut}\,L)$, Section 8 contains various information about the group $\Omega$ for the ring of formal matrices over a given ring $R$. \section[Formal Matrix Rings over a Given Ring]{Formal Matrix Rings \\ over a Given Ring}\label{section5} There is an interesting class of formal matrix rings, indicated in the title of this section. Such rings are considered in the book \cite{KryT17}. Namely, let $R$ be some ring. If $K$ is a formal matrix ring such that $R_1=\ldots=R_n=R$ and $M_{ij}=R$ for all distinct subscripts $i$ and $j$, then one says that $K$ is a \textsf{formal matrix ring over the ring} $R$ or a \textsf{formal matrix ring with values in the ring} $R$. We denote by $e_{ij}$ ($i,j=1,\ldots,n$) the matrix units of the ring $K$. For all values of the subscripts $i,j,k$, we have $e_{ij}e_{jk}=s_{ijk}e_{ik}$ for some central elements $s_{ijk}$ of the ring $R$. When multiplying matrices $A=(a_{ij})$ and $B=(b_{ij})$ from $K$, we need to take into account the relation $$ c_{ij}=\sum_{k=1}^ns_{ikj}a_{ik}b_{kj}, \eqno (*) $$ where $A\cdot B=C=(c_{ij})$. The elements $s_{ijk}$ satisfy the identities $$ s_{iik}=1=s_{ikk},\; s_{ijk}\cdot s_{ik\ell}=s_{ij\ell}\cdot s_{jk\ell}. \eqno (1) $$ Now let $\{s_{ijk}\,|\,i,j,k=1,\ldots,n\}$ be some set of central elements of the ring $R$ which satisfy the identities $(1)$. If we define the multiplication of matrices $A=(a_{ij})$ and $B=(b_{ij})$ by the relation $(*)$, then we obtain a formal matrix ring over the ring $R$. Therefore, the two given definitions are equivalent. Let $K$ be some formal matrix ring over the ring $R$ and let $\Sigma =\{s_{ijk}\,|\,i,j,k=1,\ldots,n\}$ be the corresponding system of central elements. The set $\Sigma$ is called a \textsf{multiplier system} and its elements are called \textsf{multipliers} of the ring $K$. Instead of ``multipliers'' one also says ``multiplicative coefficients''; for example, see \cite{Tap15}. The ring $K$ can be denoted by $M(n,R,\Sigma)$. 
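The twisted multiplication $(*)$ and the role of the identities $(1)$ can be made concrete. The following sketch is our own illustration (not from the paper): it implements $(*)$ for $R=\mathbb{Z}$ and checks that a $0/1$ multiplier system satisfying $(1)$ yields an associative product, while the all-ones system recovers the ordinary matrix product.

```python
from itertools import product

# A minimal sketch (our own) of M(n, R, Sigma) with R = Z:
# (A.B)_{ij} = sum_k s_{ikj} a_{ik} b_{kj}, where the multipliers satisfy
# the identities (1): s_{iik} = 1 = s_{ikk}, s_{ijk} s_{ikl} = s_{ijl} s_{jkl}.

def check_identities(s, n):
    ok = all(s[i, i, k] == 1 == s[i, k, k] for i in range(n) for k in range(n))
    return ok and all(s[i, j, k] * s[i, k, l] == s[i, j, l] * s[j, k, l]
                      for i, j, k, l in product(range(n), repeat=4))

def mul(A, B, s, n):
    return [[sum(s[i, k, j] * A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

n = 2
# s_{121} = s_{212} = 0 (1-based indices), all other multipliers equal to 1:
s = {(i, j, k): 0 if (i, j, k) in {(0, 1, 0), (1, 0, 1)} else 1
     for i, j, k in product(range(n), repeat=3)}
assert check_identities(s, n)

A, B, C = [[1, 2], [3, 4]], [[5, 6], [7, 8]], [[2, 0], [1, 3]]
# The identities (1) are exactly the associativity conditions for (*):
assert mul(mul(A, B, s, n), C, s, n) == mul(A, mul(B, C, s, n), s, n)

# With all multipliers equal to 1, (*) is the ordinary matrix product:
ones = {t: 1 for t in product(range(n), repeat=3)}
assert mul(A, B, ones, n) == [[19, 22], [43, 50]]
```

The chosen system with $s_{121}=s_{212}=0$ is the smallest non-trivial example; for $n=2$ the identities $(1)$ force $s_{121}=s_{212}$, and all other multipliers have a repeated subscript and so equal $1$.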
If all $s_{ijk}$ are equal to $1$, then we obtain the ordinary matrix ring $M(n,R)$. Let $\tau$ be a permutation of degree $n$. For any matrix $A=(a_{ij})$ of order $n$, we set $\tau A=(a_{\tau(i)\tau(j)})$, i.e., we conjugate the matrix $A$ by the permutation matrix of $\tau$. Next, if $\Sigma=\{s_{ijk}\}$ is some multiplier system, then we set $t_{ijk}=s_{\tau(i)\tau(j)\tau(k)}$. Then $\{t_{ijk}\}$ is also a multiplier system, since it satisfies the identities $(1)$. We denote it by $\tau\Sigma$. Consequently, there exists a formal matrix ring $M(n,R,\tau\Sigma)$. The rings $M(n,R,\Sigma)$ and $M(n,R,\tau\Sigma)$ are isomorphic under the correspondence $A\to \tau A$. \textsf{Up to the end of the section, we assume that $K$ is a formal matrix ring over a given ring $R$ which is a $T$-algebra. We also assume that every multiplier $s_{ijk}$ is equal to $1$ or $0$.} In \cite{KryT21}, it is shown that, under this assumption, there exists a permutation $\tau$ with the property that the ring $\tau K$ can be represented as a ring of formal block matrices with zero trace ideals. We briefly recall this material. Using the identities $(1)$, it is easy to verify the following lemma. \textbf{Lemma 5.1.} Let the subscripts $i,j,k$ be pairwise distinct. Then for the elements $s_{iji}$, $s_{jkj}$ and $s_{kik}$, we have one of the following possibilities. \textbf{1)} All three elements are equal to $1$. \textbf{2)} Two of these three elements are zeros and the third element is $1$. \textbf{3)} All three elements are zeros. On the set of integers $\{1,\ldots,n\}$, we define a binary relation $\sim$ by setting $i\sim j$ $\Leftrightarrow$ $s_{iji}=1$. \textbf{Lemma 5.2.} The relation $\sim$ is an equivalence relation. The symmetric matrix $S=(s_{iji})$ is called the \textsf{multiplier matrix} of the ring $K$. We construct a permutation $\tau$ as follows. In the upper row, we arrange the positive integers from $1$ to $n$ in natural order. 
The bottom row consists of the equivalence classes with respect to the relation $\sim$, arranged in an arbitrary order. Inside the classes, the integers are also arranged in an arbitrary order. Then the main diagonal of the matrix $\tau S$ contains blocks consisting of 1's. There is a one-to-one correspondence between these blocks and the equivalence classes with respect to the relation $\sim$. The order of a block is equal to the number of elements of the corresponding equivalence class. In the matrix $\tau S$, all positions outside the considered blocks are occupied by zeros. As mentioned above, the rings $K$ and $\tau K$ are isomorphic under the correspondence $A\to \tau A$, $A\in K$. To simplify the text, we agree that the multiplier matrix $S$ of the ring $K$ already has the above block form. Let the number of blocks on the main diagonal of the matrix $S$ be equal to $m$. On the main diagonal of any matrix $A\in K$, we select blocks $A_1,\ldots,A_m$ of the same order and in the same sequence as on the main diagonal of the matrix $S$. For a fixed $\ell$, the blocks $A_{\ell}$ of all matrices in $K$ form the ordinary matrix ring $M(k_{\ell},R)$ for some $k_{\ell}$. We denote it by $R_{\ell}$. The blocks $A_1,\ldots,A_m$ define an obvious block decomposition of matrices $A$. The symbol $L$ denotes the direct sum of the rings $R_1\oplus\ldots\oplus R_m$. By $M$, we denote the set of all matrices $A\in K$ such that the corresponding blocks $A_1,\ldots,A_m$ consist of zeros. It is clear that $M$ is an $L$-$L$-bimodule. The decomposition $L=R_1\oplus\ldots\oplus R_m$ induces the block decomposition of every matrix mentioned above. Namely, we write $1=e_1+\ldots+e_m$, where $e_i$ is the identity element of the ring $R_i$. Now we denote by $M_{ij}$ the subbimodule $e_iMe_j$ of $M$. The action of the ring $L$ on the subbimodule $M_{ij}$ coincides with the action of the rings $R_i$ and $R_j$ from the left and the right, respectively. 
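The passage from a $0/1$ multiplier matrix $S=(s_{iji})$ to the block form is a purely combinatorial computation. The following sketch is our own illustration (not from the paper): it computes the $\sim$-classes and a permutation $\tau$ bringing $S$ into the block-diagonal shape described above.

```python
# Our own illustration of the block-form construction: from a 0/1 multiplier
# matrix S = (s_{iji}) (0-based here), compute the equivalence classes of
# i ~ j <=> s_{iji} = 1, and a permutation tau with tau*S block-diagonal.

def classes(S):
    n = len(S)
    seen, result = set(), []
    for i in range(n):
        if i not in seen:
            cls = [j for j in range(n) if S[i][j] == 1]  # the class of i
            seen.update(cls)
            result.append(cls)
    return result

def apply_tau(S, tau):
    # Conjugation by the permutation matrix of tau: entry (i,j) -> (tau_i,tau_j).
    n = len(S)
    return [[S[tau[i]][tau[j]] for j in range(n)] for i in range(n)]

# Example: n = 4, classes {1, 3} and {2, 4} in the paper's 1-based numbering.
S = [[1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1]]
tau = [i for cls in classes(S) for i in cls]  # bottom row of the permutation
assert classes(S) == [[0, 2], [1, 3]]
assert apply_tau(S, tau) == [[1, 1, 0, 0],
                             [1, 1, 0, 0],
                             [0, 0, 1, 1],
                             [0, 0, 1, 1]]
```

Here $m=2$ and the block orders are $k_1=k_2=2$, so $L=R_1\oplus R_2$ with $R_\ell=M(2,R)$, matching the block decomposition described in the text.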
We have the bimodule direct decomposition $M=\oplus_{i,j=1,\,i\ne j}^mM_{ij}$. Similar to Section $2$, we have the direct sum $K=L\oplus M$. The ring $K$ is a formal (block) matrix ring constructed from the rings $R_1,\ldots, R_m$ and the bimodules $M_{ij}$ in accordance with the procedure given in Section 2; see \cite[Section 2.3]{KryT17}. Basically, we will consider the ring $K$ as a ring of block matrices. As a ring of block matrices, the algebra $K$ has zero trace ideals. Consequently, we are in the situation of Section 2. If the factor ring $R/P(R)$ is indecomposable, then all factor rings $R_i/P(R_i)$ ($i=1,\ldots,m$) are indecomposable as well. Taking Corollary 2.2 into account, we can write the following result. \textbf{Corollary 5.3.} Let the ring $R/P(R)$ be indecomposable. Then the algebra $K$ satisfies conditions \textbf{(II)} and \textbf{(I)}; consequently, Theorems 2.1 and 2.3 are true for the group $\text{Aut}\,K$. \textbf{Remark 5.4.} For a commutative ring $R$, the ring $R/P(R)$ is indecomposable if and only if the ring $R$ is indecomposable. Therefore, if $R$ is an indecomposable commutative ring, then Theorems 2.1 and 2.3 are true for the automorphism group of the $R$-algebra $K$. \section{Case $M^2=0$}\label{section6} We preserve all designations and agreements of the previous section. Thus, $K$ is a formal matrix algebra over a given ring $R$ and $K=L\oplus M$, where the symbols $L$ and $M$ have the same meaning. We consider $K$ as a ring of formal block matrices in accordance with Section 5. For an algebra $K$ with $M^2=0$, some facts were clarified in \cite{KryT21}; additional information about the group $\text{Aut}\,K$ was also obtained there. We recall this material. First, we briefly repeat some general considerations from \cite[Section 9]{KryT21}. The following fact is true; see \cite[Lemma 9.6]{KryT21}. 
\textbf{Lemma 6.1.} Let $R_1,\ldots,R_n$ be indecomposable rings, let $L=R_1\oplus\ldots\oplus R_n$, and let $\alpha\in \text{Aut}\,L$. Then for every subscript $i=1,\ldots,n$, there exists a subscript $j$ such that $\alpha R_i=R_j$. In \cite{KryT21}, based on Lemma 6.1, a certain permutation group $\Sigma$ of degree $n$ is defined; it acts on the ring $L$. At the same time, a permutation $\sigma\in\Sigma$ is identified with the corresponding automorphism $\alpha_{\sigma}$ of the ring $L$. In that article, a normal subgroup $\Gamma$ of automorphisms of the ring $L$, leaving all $R_i$ in place, is also introduced. Then $$ \Gamma\cap \Sigma=\langle 1\rangle \, \text{ and } \text{Aut}\,L=\Gamma\leftthreetimes \Sigma. $$ We return to formal matrix algebras $K$ over $R$, where $R$ is some ring. Similar to Section 2, we write $$ K=L\oplus M, \text{ where } L=R_1\oplus\ldots\oplus R_m $$ and every $R_i$ is an ordinary matrix ring $M(k_i,R)$ for some $k_i\ge 1$. We impose the same restrictions on the ring $R$ as in Section 5. Namely, we assume that the ring $R/P(R)$ is indecomposable. Then all rings $R_i/P(R_i)$ are indecomposable as well. Consequently, the ring $K$ satisfies condition \textbf{(II)}. In addition, all rings $R_i$ are indecomposable as well. Therefore, we can apply Lemma 6.1 to $L$. We can write $\text{Aut}\,L=\Gamma\leftthreetimes \Sigma$, where $\Gamma$ and $\Sigma$ are the subgroups indicated after Lemma 6.1. Let $h\colon \text{Aut}\,L\to \Sigma$ be the canonical homomorphism and $g=hf\colon \text{Aut}\,K\to \Sigma$. Then we have $$ \text{Ker}\,g=\left\{\varphi=\begin{pmatrix}\alpha&0\\ \delta&\beta\end{pmatrix} \,|\, \alpha R_i=R_i, \, i=1,\ldots,m\right\}. $$ Using relation $(1)$ of Section $5$, it is easy to verify the following fact. 
\textbf{Lemma 6.2.} For a given algebra $K$, the relation $M^2=0$ is true if and only if the following condition holds:\\ for any pairwise distinct subscripts $i,j,k$, the relations $s_{iji}=s_{jkj}=s_{kik}=0$ imply the relation $s_{ijk}=0$. In \cite[Section 3]{KryT21}, a subgroup $\Delta$ is defined. It consists of the automorphisms of the form $\begin{pmatrix}1&0\\ \delta&1\end{pmatrix}$. For our algebra $K$, we have the relations $$ \Delta=\text{In}_1(\text{Aut}\,K) \text{ and } \Delta\cong 1+M. $$ According to Theorem 2.1, we have the relation $\text{Aut}\,K=\Delta\leftthreetimes \Lambda$. We can also write the decomposition $\text{Ker}\,g=\Delta\leftthreetimes C$, where $C$ denotes $$ \left\{\begin{pmatrix} \alpha&0\\0&\beta \end{pmatrix}\, \Big|\, \alpha R_i=R_i \text{ for all } i=1,\ldots,m\right\}. $$ Taking Theorem 2.1 into account, we can formulate the following theorem. \textbf{Theorem 6.3.} Let $R$ be a ring with indecomposable factor ring $R/P(R)$ and let $K$ be a formal matrix algebra over $R$ with $M^2=0$. \textbf{1.} We have the relations $$ \text{Aut}\,K=\Delta\leftthreetimes \Lambda,\; \text{Aut}\,K=\Delta\leftthreetimes C\leftthreetimes \Sigma,\; \Phi=\Delta\leftthreetimes (\text{In}_0(\text{Aut}\,K)\cdot\Psi). $$ \textbf{2.} If all automorphisms of each of the algebras $R_1,\ldots,R_m$ are inner, then we have the relations $$ \text{Aut}\,K=\Delta\leftthreetimes (\text{In}_0(\text{Aut}\,K)\cdot\Psi)\leftthreetimes \Sigma, $$ $$ \text{Out}\,K\cong \Psi/\Psi_0\leftthreetimes \Sigma. $$ \textbf{Remarks on item 2:} When the conditions of this item are met, the relation $\text{Ker}\,g=\Phi$ is true and $$ \text{Aut}\,K=\text{Ker}\,g\leftthreetimes \Sigma=\Phi\leftthreetimes \Sigma=\Delta\leftthreetimes (\text{In}_0(\text{Aut}\,K)\cdot \Psi)\leftthreetimes\Sigma. $$ The structure of the subgroups $\Delta$, $\text{In}_0(\text{Aut}\,K)$, $\Psi$ and $\Sigma$ appearing in Theorem 6.3 is known (see Sections 7 and 8). 
Therefore, we know the structure of the whole group $\text{Aut}\,K$ from this item. For example, the condition on automorphisms of the algebras $R_1,\ldots,R_m$ is satisfied for a commutative ring $R$ which is a unique factorization domain or a local ring. \section[Subgroup $\Psi$ and Inner Automorphisms, II]{Subgroup $\Psi$ and\\ Inner Automorphisms, II}\label{section7} Section 4 contains some information about the subgroup $\Psi$ for a formal matrix algebra with zero trace ideals. In Section 7, we will calculate this subgroup for a formal matrix ring with values in an arbitrary ring $R$. Thus, we continue the line of Sections 5 and 6. At the same time, we develop the results from \cite[Section 13]{KryT21}. We preserve all designations and terms of Sections 5 and 6. In addition, we denote by $C(U(R))$ the center of the group $U(R)$. Let $K$ be a formal matrix algebra with values in a ring $R$. According to Section 4, there exists a group embedding $$ \Psi\to\text{Aut}_LM=\prod_{i,j=1,i\ne j}^m\text{Aut}_LM_{ij}. $$ By \cite[Proposition 13.2]{KryT21}, automorphisms of the $R_i$-$R_j$-bimodule $M_{ij}$ coincide with multiplications by invertible central elements of the ring $R$. Therefore, we can write $\text{Aut}_LM_{ij}=C(U(R))$. Let $\varphi=\begin{pmatrix}1&0\\ 0&\beta\end{pmatrix}\in \Psi$. We have a system of invertible central elements $c_{ij}$ ($i,j=1,\ldots,m$) of $R$ with $c_{ii}=1$, which satisfy the relations $(**)$ and $(***)$ from Section 4 for all values of subscripts $i,j,k$. We show that for our algebra $K$, we can limit ourselves in a certain sense to a smaller number of elements $c_{ij}$. We will also specify exactly the image of the embedding $\Psi\to \prod_{m^2-m}C(U(R))$. For this purpose, we carry out the following argument. We fix a subscript $i$, where $1\le i\le m-1$.
Let $k_i$ be a subscript such that $$ i< k_i\le m,\; M_{i,i+1}\cdots M_{k_i-1,k_i}\ne 0 $$ and $k_i$ is the maximal number with such a property (the meaning of the written product of bimodules is explained in Section 4). Then $M_{ij}\cdot M_{jk}\ne 0$ for any $k$ and $j$ such that $i<k\le k_i$ and $i<j<k$. It follows from relation $(***)$ of Section 4 that $c_{ik}=c_{ij}c_{jk}$ (note that the multipliers $s_{ijk}$ only take the values $0$ or $1$). If there are subscripts $\ell$ such that $k_i<\ell\le m$, then the elements $c_{i\ell}$ and $c_{ij}c_{j\ell}$ ($i<j< \ell$) may not be related in any way, since $M_{ij}M_{j\ell}=0$. Accordingly, the element $c_{i\ell}$ does not depend on the elements $c_{i,i+1},\ldots, c_{\ell-1,\ell}$. We choose some positions. First of all, we take the positions $(1,2)$, $\ldots$, $(m-1,m)$. Further, for every $i$, where $1\le i\le m-1$ and $k_i<m$, we choose the positions $$ (i,k_i+1), \ldots, (i,m). \eqno(1) $$ Now we do the same for the positions $(i,j)$ with $i>j$. Namely, we fix the positions $(2,1),\ldots,(m,m-1)$, and for every $j$, where $1\le j\le m-1$ and $k_j<m$, we choose the positions $$ (k_j+1,j), \ldots, (m,j), \eqno(2) $$ where $k_j$ is the maximal number with $M_{k_j,k_j-1}\cdots M_{j+1,j}\ne 0$. We arrive at the corresponding facts on the elements $c_{ij}$ for $i>j$. After this work, we can formulate the following assertion. \textbf{Proposition 7.1.}\\ \textbf{1.} There exists an isomorphism $\Psi\cong \prod_{(i,j)}C(U(R))$, where the pairs $(i,j)$ run over all positions selected above. More precisely, $\Psi\cong\prod_{p}C(U(R))$, where $$ p=2(m-1)+q,\; q=\sum_{i=1}^{m-1}s_i+\sum_{j=2}^{m}t_j $$ and $s_i$ (resp., $t_j$) is the number of selected positions in $(1)$ (resp., $(2)$). \textbf{2.} We have isomorphisms $$ \Psi_0\cong \prod_{m-1}C(U(R)) \text{ and } \Psi/\Psi_0\cong \prod_{(m-1)+q}C(U(R)).
$$ $\lhd$ \textbf{1.} The embedding $\Psi\to \prod_{m^2-m}C(U(R))$ associates an automorphism $\beta\in\Psi$ with the system of invertible central elements $$ \{c_{ij} \,|\, i,j=1,\ldots,m,\, i\ne j\} $$ of the ring $R$, where $\beta(y)=c_{ij}y$, $y\in M_{ij}$ (see Section 4 and the above). It follows from the text before the proposition that we can also restrict ourselves to the elements $c_{ij}$ for pairs $(i,j)$ running only over the positions indicated there. \textbf{2.} Let $\beta\in\Psi_0$ and let $c_{ij}$ be the elements from \textbf{1}. For all $i,j,k=1,\ldots,m$, the relation $c_{ik}=c_{ij}\cdot c_{jk}$ holds (see Proposition 4.3 and its proof). From here, we obtain that the elements $c_{ij}$ with $i<j$ are products of elements of the form $c_{k,k+1}$. Taking into account that $1=c_{ii}=c_{ij}c_{ji}$, we obtain $c_{ji}= c_{ij}^{-1}$. Therefore, the elements $c_{ji}$ with $i<j$ are expressed via the inverses of the elements $c_{k,k+1}$. These considerations lead to the first isomorphism from \textbf{2}. The second isomorphism follows from the first isomorphism and \textbf{1}.~$\rhd$ \textbf{Corollary 7.2, \cite{KryT21}.} If $M^2=0$, then we have isomorphisms $$ \Psi\cong \prod_{m^2-m}C(U(R)),\; \Psi_0\cong \prod_{m-1}C(U(R)),\; \Psi/\Psi_0\cong \prod_{(m-1)^2}C(U(R)). $$ From Proposition 4.3, Corollary 4.4, and the material of this section, we can determine when an automorphism from $\Psi$ is inner. We note that automorphisms from $\Psi$ can be called \textsf{multiplicative}. \textbf{Corollary 7.3.} Let $K$ be a formal matrix algebra over a ring $R$ with indecomposable factor ring $R/P(R)$. A multiplicative automorphism $\varphi$ is inner if and only if the corresponding system of elements $c_{ij}\in C(U(R))$ ($i,j=1,\ldots,m$) satisfies the relations $c_{ij}\cdot c_{jk}=c_{ik}$ for all $i,j,k=1,\ldots,m$.
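As a consistency check (our computation, not from the source), the counts of Proposition 7.1 recover Corollary 7.2 in the case $M^2=0$:

```latex
% If M^2=0, then M_{i,i+1}M_{i+1,i+2}=0, so k_i=i+1 and the positions
% in (1) are (i,i+2),\ldots,(i,m); hence s_i=m-i-1, and symmetrically
% for the t_j. Therefore
q=2\sum_{i=1}^{m-1}(m-i-1)=(m-1)(m-2),
% and the counts of Proposition 7.1 give
p=2(m-1)+(m-1)(m-2)=m^2-m,\qquad (m-1)+q=(m-1)^2,
% in agreement with the three isomorphisms of Corollary 7.2.
```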
\textbf{Corollary 7.4.} If we add the condition $$ M_{ij}\cdot M_{jk}\ne 0 \; \text{ for all } i,j,k, $$ to the conditions of Corollary 7.3, then the relation $\Psi=\Psi_0$ holds, i.e., every multiplicative automorphism is inner. $\lhd$ We have that the $R$-module $M_{ij}\cdot M_{jk}$ is faithful. (This has already been used at the beginning of this section.) This implies the relations $c_{ij}\cdot c_{jk}=c_{ik}$.~$\rhd$ \section{Groups $\Omega$ and $\Omega_1$}\label{section8} As before, $K$ is a formal matrix algebra over the ring $R$. At the beginning of Section 2, the group $\Omega$ was defined; it is the image of the homomorphism $f\colon \text{Aut}\,K\to \text{Aut}\,L$. In this section, we consider the group $\Omega$ in the case of the ring $K$. The role of this group and the group $\Psi$ has already been mentioned in Section 2 (especially see the end of Section 2). We also define a group $\Omega_1$ as the image of the restriction of the homomorphism $f$ to $\text{Ker}\,g$. We return to the decomposition $\text{Aut}\,L=\Gamma\leftthreetimes \Sigma$ from Section 6 and the homomorphism $g\colon \text{Aut}\,K\to \Sigma$. We recall the decomposition $K=L\oplus M$. If $M^2=0$, then $\text{Aut}\,K=\text{Ker}\,g\leftthreetimes \Sigma$ \cite[Theorem 13.3]{KryT21}. Next, we have the semidirect decomposition $\Omega=\Omega_1\leftthreetimes \Sigma$ (see the beginning of Section 14 in \cite{KryT21}). If $M^2\ne 0$, then we can only say that $\Omega_1$ is a normal subgroup in $\Omega$ and the factor group $\Omega/\Omega_1$ is isomorphically embedded in the permutation group $\Sigma$. We pay special attention to the subgroup $\Omega_1$. We formulate several questions about the structure of the group $\Omega_1$. \textbf{1.} Which automorphisms from $\Gamma$ belong to $\Omega_1$? \textbf{2.} What is the structure of the group $\Omega_1$? \textsf{Next, we assume that the factor ring $R/P(R)$ is indecomposable.} We will answer the first question and, in one case, the second question.
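One family of automorphisms always lies in $\Omega_1$; the following direct verification (our addition) records this fact, which is used in the proof of Corollary 8.3:

```latex
% Let \alpha\in\text{In(Aut}\,L) be conjugation by a unit
% u=u_1+\ldots+u_m of L, where u_i\in U(R_i). The same u is a unit
% of K, and conjugation by u in K preserves L and every M_{ij}:
u\,y\,u^{-1}=u_i\,y\,u_j^{-1}\in M_{ij},\qquad y\in M_{ij}.
% Hence this conjugation has the matrix form
% \varphi=\begin{pmatrix}\alpha&0\\ 0&\beta\end{pmatrix}\in\text{Ker}\,g
% with f(\varphi)=\alpha, which shows \text{In(Aut}\,L)\subseteq\Omega_1.
```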
We pay attention to the following circumstance. By Theorem 2.1 and Corollary 5.3, we have the relation $$ \text{Aut}\,K=\text{In}_1(\text{Aut}\,K)\leftthreetimes \Lambda. $$ This implies the relation $\text{Ker}\,g=\text{In}_1(\text{Aut}\,K)\leftthreetimes C$, where $$ C=\text{Ker}\,g\cap\Lambda= \{\begin{pmatrix}\alpha&0\\ 0&\beta\end{pmatrix}\,|\, \alpha R_i=R_i, \,i=1,\ldots,m\}. $$ Now we can assert that an automorphism $\alpha\in\Gamma$ is contained in $\Omega_1$ if and only if there is a transformation $\beta$ of the algebra $M$ which is both its automorphism and an isomorphism of $L$-$L$-bimodules $M\to {}_{\alpha}M_{\alpha}$. The last property is equivalent to the property that the matrix $\begin{pmatrix}\alpha&0\\ 0&\beta\end{pmatrix}$ defines an automorphism of the algebra $K$ contained in $\text{Ker}\,g$. Let $\alpha\in\Omega_1$ and let $\varphi=\begin{pmatrix}\alpha&0\\ 0&\beta\end{pmatrix}$ be the corresponding automorphism of the algebra $K$. For any $i,j=1,\ldots,m$, we have the relation $$ \beta M_{ij}=\beta(e_iMe_j)=\alpha(e_i)\beta(M)\alpha(e_j)= e_iMe_{j}=M_{ij}. $$ We set $\beta_{ij}=\beta\big|_{M_{ij}}$ and $\alpha_i=\alpha\big|_{R_i}$. Then $\beta_{ij}\colon M_{ij}\to {}_{\alpha_i}(M_{ij})_{\alpha_j}$ is an isomorphism of $R_i$-$R_j$-bimodules (bimodules of the form $_{\alpha}A_{\gamma}$ are defined in Section 1). We can give the following form to the question of which elements of $\Gamma$ are contained in $\Omega_1$: for which automorphisms $\alpha_i\in \text{Aut}\,R_i$ and $\alpha_j\in \text{Aut}\,R_j$ do there exist isomorphisms of $R_i$-$R_j$-bimodules $M_{ij}\to {}_{\alpha_i}(M_{ij})_{\alpha_j}$, and how are such isomorphisms arranged? We prove a general fact. It generalizes the following result (see \cite[Chapter 2, Proposition 5.2]{Bas68}): Let $H$ be some algebra and let $\alpha$, $\gamma$ be automorphisms of $H$. There exists an isomorphism of $H$-$H$-bimodules $H\to {}_{\alpha}H_{\gamma}$ if and only if $\alpha^{-1}\gamma$ is an inner automorphism.
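The ``if'' direction of this criterion can be made explicit; the following verification (our addition, written with the convention that matches the formulas in the proof of Theorem 8.2) shows how such a bimodule isomorphism is arranged:

```latex
% Assume \alpha^{-1}\gamma is conjugation by u^{-1}, i.e.,
% \gamma(x)=\alpha(u^{-1}xu) for an invertible element u\in H. Then
\beta\colon H\to {}_{\alpha}H_{\gamma},\qquad \beta(x)=\alpha(x)\,\alpha(u),
% is an isomorphism of H-H-bimodules: for all a,b,x\in H,
\beta(axb)=\alpha(a)\,\alpha(x)\,\alpha(b)\,\alpha(u)
          =\alpha(a)\,\beta(x)\,\gamma(b),
% since \alpha(u)\gamma(b)=\alpha(u)\alpha(u^{-1}bu)=\alpha(b)\alpha(u).
```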
Let $k$, $\ell$ be two positive integers and let $c=\text{LCM}(k,\ell)$. We denote $$ P=M(k,R),\, Q=M(\ell,R),\, H=M(c,R),\, V=M(k\times\ell,R),\, \ell'=\dfrac{c}{k},\, k'=\dfrac{c}{\ell}. $$ The ring $H$ can be represented as a block matrix ring by two methods: as a block matrix ring over $P$ of order $\ell'$ and as a block matrix ring over $Q$ of order $k'$. It is also a $P$-$Q$-bimodule of block matrices over $V$ of size $\ell'\times k'$. Let $\alpha$ and $\gamma$ be automorphisms of the algebras $P$ and $Q$, respectively. They induce automorphisms of the algebra $H$; we call them \textsf{ring automorphisms} and keep the designations $\alpha$ and $\gamma$ for them. This agreement is also preserved in the following proposition. \textbf{Proposition 8.1.} If $\alpha\in\text{Aut}\,P$ and $\gamma\in\text{Aut}\,Q$, then an isomorphism of $P$-$Q$-bimodules $V\to {}_{\alpha}V_{\gamma}$ exists if and only if $\alpha^{-1}\gamma$ is an inner automorphism of the algebra $H$. $\lhd$ Let $\beta\colon V\to {}_{\alpha}V_{\gamma}$ be an isomorphism of $P$-$Q$-bimodules. The isomorphism $\beta$ induces an $H$-$H$-bimodule isomorphism $$ \overline{\beta}\colon H\to {}_{\alpha}H_{\gamma},\; \overline{\beta}(A)=(\beta(A_{ij})) $$ for every matrix $A=(A_{ij})\in H$. Here the matrix $A$ is represented in the above block form, i.e., the $A_{ij}$ are blocks belonging to $V$ which form an $\ell'\times k'$ array. Consequently, $\alpha^{-1}\gamma$ is an inner automorphism of the algebra $H$. Now we assume that $\alpha^{-1}\gamma$ is an inner automorphism of the algebra $H$. Consequently, there exists an isomorphism of $H$-$H$-bimodules $\beta\colon H\to {}_{\alpha}H_{\gamma}$. We take the triangular matrix algebra $S=\begin{pmatrix}H&H\\ 0&H\end{pmatrix}$.
We denote by $\psi$ the automorphism of the algebra $S$ which converts the matrix $\begin{pmatrix}a&c\\0&b\end{pmatrix}$ to the matrix $\begin{pmatrix}\alpha(a)&\beta(c)\\0&\gamma(b)\end{pmatrix}$, i.e., $\psi=\begin{pmatrix}(\alpha,\gamma)&0\\0&\beta\end{pmatrix}$ in the matrix form of automorphisms accepted by us. Let $e_1,\ldots,e_{\ell'}$ and $f_1,\ldots,f_{k'}$ be the diagonal matrix units which correspond to the two block partitions of matrices in $H$. We have the relations $$ \alpha(e_i)=e_i,\; i=1,\ldots,\ell',\; \gamma(f_j)=f_j,\; j=1,\ldots,k'. $$ This implies that $\psi$ induces an automorphism of the triangular matrix algebras $\begin{pmatrix}e_iHe_i&e_iHf_j\\ 0&f_jHf_j\end{pmatrix}$; in fact, this means that $\psi$ induces an automorphism of the algebra $\begin{pmatrix}P&V\\ 0&Q\end{pmatrix}$. Consequently, $\beta\big|_V$ is an isomorphism of $P$-$Q$-bimodules $V\to {}_{\alpha}V_{\gamma}$.~$\rhd$ Let $n_i$ be the order of matrices in the ring $R_i$, $i=1,\ldots,m$. We set $c_{ij}=\text{LCM}(n_i,n_j)$ for all pairwise distinct $i,j=1,\ldots,m$. We denote by $H_{ij}$ the matrix ring $M(c_{ij},R)$. It is a ring of block matrices over $R_i$ and over $R_j$, and it is also an $R_i$-$R_j$-bimodule of block matrices over $M_{ij}$. We consider automorphisms of the rings $R_i$ and $R_j$ as ring automorphisms of the algebra $H_{ij}$. \textbf{Theorem 8.2.} The automorphism $\alpha=(\alpha_i)$ of the algebra $L=\oplus_{i=1}^mR_i$ belongs to the group $\Omega_1$ if and only if $\alpha_i^{-1}\alpha_j$ is an inner automorphism of the algebra $H_{ij}$ for all distinct subscripts $i$ and $j$. $\lhd$ \textsf{Necessity.} Let $\alpha\in\Omega_1$. Consequently, there exists an automorphism $\varphi=\begin{pmatrix}\alpha&0\\ 0&\beta\end{pmatrix}$ of the algebra $K$, where $\beta$ is an isomorphism of $L$-$L$-bimodules $M\to {}_{\alpha}M_{\alpha}$. The restriction of $\beta$ to $M_{ij}$ is an $R_i$-$R_j$-bimodule isomorphism $M_{ij}\to {}_{\alpha_i}(M_{ij})_{\alpha_j}$.
According to Proposition 8.1, $\alpha_i^{-1}\alpha_j\in \text{In(Aut}\,H_{ij})$. \textsf{Sufficiency.} By Proposition 8.1, there is an isomorphism of $R_i$-$R_j$-bimodules $\beta_{ij}\colon M_{ij}\to {}_{\alpha_i}(M_{ij})_{\alpha_j}$, $i,j=1,\ldots,m$, $i\ne j$. Let $\beta=\sum_{i,j=1,i\ne j}^m\beta_{ij}$. Then $\beta$ is an isomorphism of $L$-$L$-bimodules $M\to {}_{\alpha}M_{\alpha}$. For the transformation $\begin{pmatrix}\alpha&0\\ 0&\beta\end{pmatrix}$ of the algebra $K$ to be its automorphism, it suffices to verify that the equality $$ \beta_{ik}(ab)=\beta_{ij}(a)\beta_{jk}(b) \eqno (*) $$ holds for any elements $a\in M_{ij}$, $b\in M_{jk}$ and all pairwise distinct subscripts $i,j,k$. We fix the three mentioned subscripts $i,j,k$ and define another matrix ring. We set $H=M(d,R)$, where $d=\text{LCM}(n_i,n_j,n_k)$. The ring $H$ is a ring of block matrices which involves the bimodules $M_{ij}$, $M_{jk}$, $M_{ik}$ as blocks. We consider the automorphisms $\alpha_i$, $\alpha_j$, $\alpha_k$ as ring automorphisms of the algebra $H$. We consider the isomorphisms $\beta_{ij}$, $\beta_{jk}$, $\beta_{ik}$ as bimodule isomorphisms $$ H\to {}_{\alpha_i}H_{\alpha_j},\; H\to {}_{\alpha_j}H_{\alpha_k},\; H\to {}_{\alpha_i}H_{\alpha_k}, $$ respectively. At the same time, the products $\alpha_i^{-1}\alpha_j$, $\alpha_j^{-1}\alpha_k$, $\alpha_i^{-1}\alpha_k$ are inner automorphisms of the algebra $H$. They induce the same bimodule isomorphisms $\beta_{ij}$, $\beta_{jk}$ and $\beta_{ik}$ as above. These bimodule isomorphisms act as follows (see \cite[Section 2]{KryT21}). There exist invertible elements $u,v,w\in H$ such that we have the relations $$ \beta_{ij}(x)=\alpha_i(x)\alpha_i(u)=\alpha_i(u)\alpha_j(x), $$ $$ \beta_{jk}(y)=\alpha_j(y)\alpha_j(v)=\alpha_j(v)\alpha_k(y), $$ $$ \beta_{ik}(z)=\alpha_i(z)\alpha_i(w)=\alpha_i(w)\alpha_k(z) $$ for all $x,y,z\in H$. We verify that the relation $$ \beta_{ik}(xy)=\beta_{ij}(x)\beta_{jk}(y) \eqno (**) $$ holds in $H$ for any $x,y\in H$.
It follows from the relation $$ \alpha_i^{-1}\alpha_k=\alpha_i^{-1}\alpha_j\alpha_j^{-1}\alpha_k $$ that the relation $w=vu$ holds (it is also necessary to take into account that the elements $u,v,w$ are defined up to invertible central elements). Now we have the relation $$ \beta_{ik}(xy)=\alpha_i(xy)\alpha_i(w)= \alpha_i(x)\alpha_i(y)\alpha_i(v)\alpha_i(u). $$ Also we have the relation $$ \beta_{ij}(x)\beta_{jk}(y)=\alpha_i(x)\alpha_i(u)\alpha_j(y)\alpha_j(v)= $$ $$ =\alpha_i(x)\alpha_i(y)\alpha_i(u)\alpha_j(v)= \alpha_i(x)\alpha_i(y)\alpha_i(v)\alpha_i(u). $$ Thus, the relation $(**)$ is proved. In it, matrix multiplication is executed in block form. This implies the relation $(*)$.~$\rhd$ In the remaining part of the section, we touch on the structure problem for the group $\Omega_1$. This problem seems to be quite complicated. We will find the structure of the group $\Omega_1$ under one condition on the integers $n_i$. We recall that $n_i$ is the order of matrices in the ring $R_i$. Let $n_s$ be the least of the integers $n_1,\ldots,n_m$. Next, we assume that the algebra $K$ satisfies the following property: $n_s$ divides each of the integers $n_i$. Under such an assumption, every ring $R_i$ is a block matrix ring over $R_s$ of order $n_i/n_s$. Therefore, every automorphism $\alpha_s$ of the algebra $R_s$ extends to a ring automorphism $\alpha_i$ of the algebra $R_i$. We call the obtained automorphism $(\alpha_1,\alpha_2,\ldots,\alpha_m)$ of $L$ a \textsf{scalar automorphism}. We also write it in the form $(\alpha_s,\alpha_s,\ldots,\alpha_s)$, i.e., we identify $\alpha_i$ with $\alpha_s$. We denote by $D$ the subgroup of all scalar automorphisms of the algebra $L$. The following result extends \cite[Corollary 14.3]{KryT21}. \textbf{Corollary 8.3.}\\ \textbf{1.} The relation $\Omega_1=\text{In(Aut}\,L)\cdot D$ holds. \textbf{2.} There exists an isomorphism $\Omega_1/\text{In(Aut}\,L)\cong \text{Out}\,R_s$. $\lhd$ \textbf{1.} Let $\alpha=(\alpha_i)\in \Omega_1$.
For every $j=1,\ldots,m$, we denote by $\mu_j$ the automorphism $\alpha_j\alpha_s^{-1}$ of the algebra $R_j$. It follows from Theorem 8.2 that $\mu_j$ is an inner automorphism of the algebra $R_j$. Now it follows from the relations $\alpha_j=\mu_j\alpha_s$ ($j=1,\ldots,m$) that $$ \alpha=(\mu_1,\ldots,\mu_m)(\alpha_s,\ldots,\alpha_s)=\mu\gamma\in\text{In(Aut}\,L)\cdot D. $$ The inclusion $\Omega_1\subseteq \text{In(Aut}\,L)\cdot D$ is proved. We prove the converse inclusion. The inclusion $\text{In(Aut}\,L)\subseteq \Omega_1$ always holds. Now we take an arbitrary automorphism $\gamma=(\alpha,\ldots,\alpha)$ from $D$, where $\alpha\in \text{Aut}\,R_s$. The inclusion $\gamma\in\Omega_1$ follows from Theorem 8.2. \textbf{2.} The assertion follows from \textbf{1}.~$\rhd$ \section[Automorphisms of Triangular Matrix Rings]{Automorphisms of Triangular Matrix Rings}\label{section9} In this section, $R$ is an arbitrary algebra over some commutative ring $T$. The (upper) triangular matrix ring over $R$ is denoted by $T(n,R)$. We denote this matrix ring by $K$. We consider the ring $K$ as a splitting extension: $T(n,R)=K=L\oplus M$, where the symbols $L$ and $M$ have the same meaning as before. It is also convenient to assume that the ring $L$ is the sum $R_1\oplus \ldots\oplus R_n$ and the $L$-$L$-bimodule $M$ is equal to $\oplus_{i,j=1, i<j}^nM_{ij}$. Every automorphism $\alpha$ of the algebra $R$ induces the automorphism $\overline{\alpha}$ of the algebra $K$, where $$ \overline{\alpha}((a_{ij}))=(\alpha(a_{ij})),\;(a_{ij})\in K. $$ The automorphism $\overline{\alpha}$ is called the \textsf{ring automorphism induced by $\alpha$.} Such automorphisms were considered in the previous section. The theorem below is another formulation of one theorem from \cite{Kop96}; the proof differs from the proof in \cite{Kop96}. \textbf{Theorem 9.1.} Every triangular automorphism of the algebra $T(n,R)$ is a product of an inner automorphism and a ring automorphism.
$\lhd$ Let $\varphi$ be some triangular automorphism of the algebra $K$. We represent it as a product of an inner automorphism and a diagonal automorphism, since Theorem 2.1(1) is true for the subgroup of triangular automorphisms. Therefore, we can assume that $\varphi$ is a diagonal automorphism. Thus, $\varphi=\begin{pmatrix}\alpha&0\\ 0&\beta\end{pmatrix}$, where $\alpha$ is an automorphism of the algebra $L$, and $\beta$ is an automorphism of the algebra $M$ and an isomorphism of $L$-$L$-bimodules $M\to {}_{\alpha}M_{\alpha}$. Since $e_1,\ldots,e_n$ is a complete orthogonal system of central idempotents of the ring $L$, the system $\alpha(e_1),\ldots,\alpha(e_n)$ has the same properties. For every $i$, we write $$ \alpha(e_i)=f_1^{(i)}+f_2^{(i)}+\ldots+f_n^{(i)},\eqno (1) $$ where $f_k^{(i)}\in R_k$, $k=1,\ldots,n$. The elements $f_k^{(1)},f_k^{(2)},\ldots, f_k^{(n)}$ form a complete orthogonal system of central idempotents of the ring $R_k$. There exist inclusions $$ M\supset M^2\supset\ldots\supset M^{n-1}\supset M^n=0, $$ where $M^{n-1}=M_{1n}$. Therefore, the relation $\varphi M_{1n}=M_{1n}$ is true. The relation $e_1Ke_n=M_{1n}$ implies the relation $$ \varphi(e_1Ke_n)=\alpha(e_1)K\alpha(e_n)=\varphi M_{1n}=M_{1n}=R. $$ Next, we can write the relation $$ \alpha(e_1)K\alpha(e_n)=\alpha(e_1)(L\oplus M)\alpha(e_n)=f_1^{(1)}Mf_n^{(n)}. $$ Thus, $f_1^{(1)}Mf_n^{(n)}=R$. Consequently, $f_1^{(1)}=1$ (i.e., $e_1$), $f_n^{(n)}=1$ (i.e., $e_n$) and $$ f_1^{(2)}=\ldots =f_1^{(n)}=0,\; f_n^{(1)}=\ldots =f_n^{(n-1)}=0. $$ We use induction on $n$ to show that $\alpha(e_i)=e_i$, $i=1,\ldots,n$. This has already been proved for $n=2$. Let $n\ge 3$.
We represent the ring $K$ in the form of a block-triangular matrix ring of order 2: $$ K=\begin{pmatrix}S&N\\ 0&R_n\end{pmatrix}, \text{ where} $$ $$ S=\begin{pmatrix} R_1&M_{12}&\ldots&M_{1,n-1}\\ 0&R_2&\ldots&M_{2,n-1}\\ \ldots&\ldots&\ldots&\ldots\\ 0&0&\ldots&R_{n-1} \end{pmatrix}, N=\begin{pmatrix} M_{1n}\\ M_{2n}\\ \ldots\\ M_{n-1,n} \end{pmatrix}. $$ We verify that $\varphi S=S$, i.e., that $\varphi$ induces an automorphism of the ring $S$. First, it follows from relations $(1)$ that $$ \varphi(R_1),\ldots,\varphi(R_{n-1})\subseteq R_1\oplus\ldots\oplus R_{n-1}. $$ Let $M_{ik}$ be an arbitrary bimodule, where $i<k$ and $k\ne n$. Since the element $\alpha(e_k)$ has the zero component in $R_n$, the last column of all matrices from $\alpha(e_i)M\alpha(e_k)$ consists of zeros. Therefore, $\varphi(M_{ik})\subseteq S$ and $\varphi(S)\subseteq S$. Similarly, we obtain $\varphi^{-1}(S)\subseteq S$. Let $\begin{pmatrix}\alpha_1&\gamma_1\\ 0&\beta_1\end{pmatrix}$ and $\begin{pmatrix}\alpha_2&\gamma_2\\ 0&\beta_2\end{pmatrix}$ be the matrices of the automorphisms $\varphi$ and $\varphi^{-1}$, respectively, with respect to the decomposition $K=L_1\oplus N$, where $L_1=S\oplus R_n$. Then $\alpha_1\alpha_2=1=\alpha_2\alpha_1$, i.e., $\alpha_1=\varphi\big|_S$ is an automorphism of the algebra $S$. By the induction hypothesis, we have $f_2^{(2)}=\ldots =f_{n-1}^{(n-1)}=1$ and all ``remaining'' $f_i^{(j)}$ are equal to zero. It is proved that $$ \alpha(e_1)=e_1,\ldots,\alpha(e_n)=e_n. $$ We return to the initial representation of the ring $K$; namely, to the relation $K=L\oplus M$. We take an arbitrary subscript $i$ and an element $x\in R_i$. Then $$ x=xe_i,\; \varphi(x)=\varphi(x)\varphi(e_i)=\varphi(x)e_i\in L\cap Ke_i=R_i. $$ Therefore, $\alpha R_i=R_i$ for all $i=1,\ldots,n$. We denote by $\alpha_i$ the restriction of $\alpha$ to $R_i$ ($i=1,\ldots,n$). Then $\alpha_i$ is an automorphism of the ring $R_i$.
We also have the relation $$ \beta M_{ij}=\beta(e_iMe_j)=\alpha(e_i)M\alpha(e_j)=e_iMe_j=M_{ij}. $$ We denote by $\beta_{ij}$ the restriction $\beta\big|_{M_{ij}}$. Since $\beta_{ij}(xyz)=\alpha_i(x)\beta(y)\alpha_j(z)$ for any $x\in R_i$, $z\in R_j$, $y\in M_{ij}$, we have that $\beta_{ij}$ is an isomorphism of the $R$-$R$-bimodules $M_{ij}$ and ${}_{\alpha_i}(M_{ij})_{\alpha_j}$. In such a case, $\alpha_i\alpha_j^{-1}$ is an inner automorphism of the algebra $R$ \cite[Chapter 2, Proposition 5.2]{Bas68}. For every $i=1,\ldots,n$, we set $\mu_i=\alpha_i\alpha_1^{-1}$. Then $$ \alpha_i=\mu_i\alpha_1, \; \alpha=(\mu_1,\ldots,\mu_n)(\alpha_1,\ldots,\alpha_1), $$ where $\mu_1,\ldots,\mu_n$ are inner automorphisms of the ring $R$. More briefly, $\alpha=\mu\gamma$, where $\mu$ is an inner automorphism and $\gamma$ is a ring automorphism of the ring $L$. The automorphisms $\mu$ and $\gamma$ induce an inner automorphism and a ring automorphism of the ring $K$, for which we keep the notation $\mu$ and $\gamma$, respectively. We set $\zeta=(\mu\gamma)^{-1}\varphi$ and show that $\zeta$ is an inner automorphism. With respect to the decomposition $K=L\oplus M$, its matrix is of the form $\zeta=\begin{pmatrix}1&0\\ 0&\rho\end{pmatrix}$, where $\rho$ is an automorphism of the algebra $M$ and an automorphism of the $L$-$L$-bimodule $M$. We set $\rho_{ij}=\rho\big|_{M_{ij}}$ for all $i,j$ with $i<j$. Then $\rho_{ij}$ is an automorphism of the $R$-$R$-bimodule $M_{ij}$, i.e., $\rho_{ij}$ is an automorphism of the $R$-$R$-bimodule $R$, since $$ \rho M_{ij}=\rho(e_iMe_j)=e_iMe_j=M_{ij}. $$ Consequently, $\rho_{ij}$ acts on $M_{ij}$ as multiplication by some invertible central element $c_{ij}$ of the ring $R$. Thus, the automorphism $\zeta$ defines a system of invertible central elements $c_{ij}$ ($i,j=1,\ldots,n$, $i<j$). At the same time, the relations $c_{ij}\cdot c_{jk}=c_{ik}$ hold for all $i,j,k=1,\ldots,n$ such that $i<j<k$.
To verify that $\zeta$ is an inner automorphism, we can use an argument similar to the argument from the proof of Proposition 4.3. As a result, we obtain the relation $\varphi=\mu\gamma\zeta$, in which $\mu,\zeta$ are inner automorphisms and $\gamma$ is a ring automorphism. Rearranging, we obtain the relation $\varphi=\mu\zeta'\gamma$, where $\zeta'=\gamma\zeta\gamma^{-1}$ is an inner automorphism. Thus, the automorphism $\varphi$ is equal to a product of an inner automorphism and a ring automorphism, which is required.~$\rhd$ The following Corollaries 9.2 and 9.3 directly follow from the above theorem, Proposition 3.2 and Lemma 3.3. \textbf{Corollary 9.2.} Any automorphism of the algebra $T(n,R)$ is a product of an inner automorphism and a ring automorphism in each of the following cases. \textbf{1)} $R$ is a strongly indecomposable algebra. \textbf{2)} $R$ is a semiprime or normal algebra \cite{Jon95}. \textbf{Corollary 9.3, \cite{Kez90}.} If $R$ is a commutative ring, then all automorphisms of the $R$-algebra $T(n,R)$ are inner. \section{Group $\text{Aut}\,K$ with $K=M(n,R)$}\label{section10} Similar to the previous section, $R$ is an algebra over a commutative ring $T$. We give some remarks on automorphisms of the $T$-algebra $K=M(n,R)$. In previous sections, we used methods based on splitting extensions. However, this method is not applicable to the algebra $M(n,R)$. One of the possible approaches to the study of the group $\text{Aut}\,K$ is based on interrelations of this group with the Picard group of the ring $R$. Such an approach is used in \cite{RosZ61} for separable algebras and in \cite{Isa80} for the $T$-algebra $M(n,T)$. The Picard group $\text{Pic}\,S$ of the $T$-algebra $S$ is defined as the group of isomorphism classes $[P]$ of invertible $S$-$S$-bimodules $P$ with the operation $[P]\cdot [Q]=[P\otimes_SQ]$; see \cite{Bas68}. As usual, finitely generated projective generators are called \textsf{progenerators}.
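To illustrate the role of the twisted bimodules ${}_1R_{\alpha}$ in this group, we record a standard computation (our addition, not from the source):

```latex
% For \alpha,\gamma\in\text{Aut}\,R, the map x\otimes y\mapsto x\,\alpha(y)
% defines an isomorphism of R-R-bimodules
{}_1R_{\alpha}\otimes_R {}_1R_{\gamma}\;\cong\;{}_1R_{\alpha\gamma},
% so \alpha\mapsto[{}_1R_{\alpha}] is a group homomorphism
% \text{Aut}\,R\to\text{Pic}\,R. By the criterion of Bass cited in
% Section 8, [{}_1R_{\alpha}]=[R] if and only if \alpha is inner,
% so the kernel of this homomorphism is \text{In(Aut}\,R).
```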
The direct sum of $n$ copies of the module $A$ is denoted by $A^n$. We use the following fact \cite[Chapter 2, Proposition 5.2]{Bas68}. Let $A$ be an $R$-$R$-bimodule and suppose that there is an isomorphism $A\cong R$ of left $R$-modules. Then for some $\alpha\in \text{Aut}\,R$, there exists an isomorphism of $R$-$R$-bimodules $A\cong {}_1R_{\alpha}$. We give a familiar result \cite[Chapter 2, Proposition 5.3]{Bas68}. \textbf{Proposition 10.1.} Let $Q$ be a left progenerator $R$-module and let the endomorphism ring $S=\text{End}_RQ$ be considered as a $T$-algebra. Then we have a group exact sequence $$ 1\to\text{In(Aut}_TS)\to\text{Aut}_TS\stackrel{\eta_Q}{\longrightarrow} \text{Pic}\,R, $$ $$ \text{where Im}\,\eta_Q=\{[P]\in\text{Pic}\,R\,|\,P\otimes_RQ\cong Q \text{ as left } R\text{-modules}\}. $$ We apply Proposition 10.1 to the algebra $K=M(n,R)$. We take $R^n$ as the module $Q$. Then we can identify the algebra $S$ with $K$. Next, if $[P]\in\text{Pic}\,R$, then there exists an isomorphism of left $R$-modules $P\otimes_RQ\cong Q$ if and only if the left $R$-modules $P^n$ and $R^n$ are isomorphic. We can write the following result. \textbf{Proposition 10.2.} There exists an exact sequence of groups $$ 1\to\text{In(Aut}\,K)\to\text{Aut}\,K\stackrel{\eta}{\longrightarrow} \text{Pic}\,R,\eqno (1) $$ $$ \text{where Im}\,\eta=\{[P]\in\text{Pic}\,R\,|\,P^n\cong R^n \text{ as left } R\text{-modules}\}. $$ The homomorphism $\eta$ is the composition of the homomorphisms $$ \text{Aut}\,K\stackrel{\zeta}{\longrightarrow} \text{Pic}\,K\stackrel{\theta}{\longrightarrow} \text{Pic}\,R. $$ Here $\zeta\colon \varphi\mapsto[_1K_{\varphi}]$; in addition, $\theta$ is the canonical isomorphism which exists, since the categories $K$-mod and $R$-mod are equivalent. More precisely, we identify $R^n$ with the module of row vectors of length $n$ and denote the $R$-$K$-bimodule $R^n$ by $M$. We also denote the $K$-$R$-bimodule of column vectors $R^n$ by $N$.
Then $$ \theta\colon [Q]\mapsto [M\otimes_KQ\otimes_KN],\; \theta^{-1}\colon [P]\mapsto [N\otimes_RP\otimes_RM]. $$ Thus, we have $$ \eta\colon \varphi\mapsto [M\otimes_K {}_1K_{\varphi}\otimes_KN]= [M_{\varphi}\otimes_KN]. $$ \textbf{Proposition 10.3.} The following two assertions are equivalent. \textbf{1)} Every automorphism of the algebra $K=M(n,R)$ is a product of an inner automorphism and a ring automorphism. \textbf{2)} If $[P]\in \text{Pic}\,R$ and the left $R$-modules $P^n$ and $R^n$ are isomorphic, then the left $R$-modules $P$ and $R$ are isomorphic. $\lhd$ \textbf{1)\,$\Rightarrow$\,2).} Let $[P]\in\text{Pic}\,R$ and let the left $R$-modules $P^n$ and $R^n$ be isomorphic. According to Proposition 10.2, we have $[P]\in\text{Im}\,\eta$. We choose $\varphi\in\text{Aut}\,K$ such that $\eta(\varphi)=[P]$. Based on \textbf{1)}, we have $\varphi=\mu\psi$, where $\mu$ is an inner automorphism and $\psi$ is a ring automorphism. Therefore, $\eta(\varphi)=\eta(\psi)=[P]$. Let $\psi$ be defined by an automorphism $\alpha\in \text{Aut}\,R$. Then we have isomorphisms $$ P\cong M_{\psi}\otimes_KN\cong {}_1R_{\alpha}. $$ \textbf{2)\,$\Rightarrow$\,1).} If $\varphi$ is an arbitrary automorphism of the algebra $K$, then $$ \eta(\varphi)=[P]\in\text{Pic}\,R,\; P^n\cong R^n \text{ as left } R\text{-modules}. $$ According to \textbf{2)}, there exists an isomorphism ${}_RP\cong {}_RR$. Consequently, ${}_1P_1\cong {}_1R_{\alpha}$ for some $\alpha\in\text{Aut}\,R$. We also have the following relation in the group $\text{Pic}\,K$: $$ [N\otimes_R {}_1R_{\alpha}\otimes_RM]=[N_{\alpha}\otimes_RM]=[_1K_{\overline{\alpha}}]. $$ Now we can write the following relation: $$ \varphi=\eta^{-1}([P])=\eta^{-1}([_1R_{\alpha}])= \zeta^{-1}\theta^{-1}([_1R_{\alpha}])= $$ $$ =\zeta^{-1}([N\otimes_R {}_1R_{\alpha}\otimes_RM])= \zeta^{-1}([_1K_{\overline{\alpha}}]). $$ In other words, $\zeta(\varphi)=[_1K_{\overline{\alpha}}]=\zeta(\overline{\alpha})$, where $\overline{\alpha}$ is the automorphism induced by $\alpha$.
Consequently, $\varphi=\mu\overline{\alpha}$, where $\mu$ is some inner automorphism of the algebra $K$.~$\rhd$ \textbf{Corollary 10.4.} Let the ring $R$ have no non-trivial idempotents and let every left $R$-progenerator satisfy the isomorphism property of direct decompositions. Then any automorphism of the algebra $M(n,R)$ is equal to a product of an inner automorphism and a ring automorphism. $\lhd$ Let $[P]\in \text{Pic}\,R$ and let the left $R$-modules $P^n$, $R^n$ be isomorphic. It follows from the isomorphism $\text{End}_RP\cong R$ that $_RP$ is an indecomposable module. Therefore, $_RP\cong {}_RR$ and we can use Proposition 10.3.~$\rhd$ \textbf{Corollary 10.5.} If $R$ is a local ring or a principal left ideal domain, then any automorphism of the algebra $M(n,R)$ is equal to a product of an inner automorphism and a ring automorphism. $\lhd$ Let the left $R$-modules $P^n$ and $R^n$ be isomorphic. Then $\text{End}_RP\cong R$. If the ring $R$ is local, then we immediately obtain $_RP\cong {}_RR$. If $R$ is a principal left ideal domain, then the module $_RP$ is isomorphic to a direct sum of left ideals of the ring $R$. Therefore, $_RP\cong {}_RR$ and we can use Proposition 10.3.~$\rhd$ \textbf{Remark 10.6.} Under the conditions of Corollaries 10.4 and 10.5, we have an isomorphism $\text{Out}\,K\cong\text{Out}\,R$. Tuganbaev's study was supported by the Russian Science Foundation. \addtocontents{toc}{\textbf{$\quad\;$Bibliography$\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$ \pageref{biblio}}\par} \label{biblio} \end{document}
\begin{document} \subjclass{14D20} \title[A GIT construction of $\overline{M}_{0,n}({\mathbb P}^r,d)$]{An elementary GIT construction of the moduli space of stable maps} \author{Adam E. Parker} \address{Department of Mathematics and Computer Science\\ Wittenberg University\\Springfield, Ohio 45504} \email{[email protected]} \begin{abstract} This paper provides an elementary construction of the moduli space of stable maps $\overline{M}_{0,0}(\mathbb{P}^r,d)$ as a sequence of ``weighted blow-ups along regular embeddings" of a projective variety. This is a corollary to a more general GIT construction of $\overline{M}_{0,n}(\mathbb{P}^r,d)$ that places stable maps, the Fulton-MacPherson space $\mathbb{P}^1[n]$, and curves $\overline{M}_{0,n}$ into a single context. \end{abstract} \maketitle Given a projective space ${\mathbb P}^r$ and a class $d \in A_1({\mathbb P}^r)\cong {\mathbb Z}$, an \emph{$n$-pointed, stable map of degree $d$} consists of the data $\{\mu:C \to {\mathbb P}^r, \{p_i\}_{i=1}^n \}$ where: \begin{itemize} \item $C$ is a complex, projective, connected, reduced, $n$-pointed, genus $0$ curve with at worst nodal singularities. \item $\{ p_i \}$ are smooth points of $C$. \item $\mu: C \to {\mathbb P}^r$ is a morphism. \item $\mu_{*}[C] = d l$, where $l$ is the class of a line generating $A_1({\mathbb P}^r)$. \item If $\mu$ collapses a component $E$ of $C$ to a point, then $E$ must contain at least three special points (nodes or marked points). \end{itemize} We say that two stable maps are \emph{isomorphic} if there is an isomorphism of the pointed domain curves $f:C \to C'$ that commutes with the morphisms to ${\mathbb P}^r$. Then there is a projective coarse moduli space $\overline{M}_{0,n}({\mathbb P}^r,d)$ that parametrizes stable maps up to isomorphism \cite{FP}. 
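To see why the three-special-point condition is imposed, here is a small example of our own (not part of the original discussion):

```latex
Suppose $C = C_0 \cup E$ has a single node $q = C_0 \cap E$, that
$\mu|_E$ is constant, and that $E$ carries at most one marked point.
Then $E \cong {\mathbb P}^1$ has at most two special points, so the
subgroup of $\mathrm{Aut}(E)$ fixing them is positive dimensional, and
$(\mu, \{p_i\})$ would admit infinitely many automorphisms.
Stabilizing contracts $E$:
\[
\bigl(\mu \colon C_0 \cup E \to {\mathbb P}^r\bigr)
\;\longmapsto\;
\bigl(\mu|_{C_0} \colon C_0 \to {\mathbb P}^r\bigr).
\]
```

With the condition imposed, every stable map has only finitely many automorphisms, which is what makes the coarse moduli space of \cite{FP} possible.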
The open locus $M_{0,n}({\mathbb P}^r,d)$ corresponds to maps with a smooth domain, while the \emph{boundary} is naturally broken into divisors $D(N_1, N_2, d_1, d_2)$ where $N_1 \cup N_2$ is a partition of $\{1,2, \dots, n\}$ and $d_1 + d_2 = d$. Such a divisor corresponds to maps where the domain curve has two components, one of degree $d_1$ carrying the points of $N_1$ and the other of degree $d_2$ carrying the points of $N_2$. Similarly, we can define stable maps to ${\mathbb P}^r \times {\mathbb P}^1$ of bi-degree $(d,1)$, and look at the corresponding coarse moduli space $\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$. The boundary again is broken into divisors. When no confusion is possible, we write $D(N_1, N_2, d_1,d_2)$ where we should instead use $D(N_1, N_2, (d_1,1),(d_2,0))$. In \cite{P1}, Pandharipande constructs the open $M_{0,0}({\mathbb P}^r,d) \subset \overline{M}_{0,0}({\mathbb P}^r,d)$ as the GIT quotient of the open basepoint free locus $U(1,r,d) \subset \oplus^r_0 H^0({\mathbb P}^1,{\mathcal O}(d))$. We have a similar construction for the open pointed locus $M_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1)) \subset \overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$. Our main result is to construct the compact $\overline{M}_{0,n}({\mathbb P}^r,d)$ as a geometric quotient of $\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$ by $G=\text{Aut}({\mathbb P}^1)$. \begin{main}\label{t1} Let $E$ be an effective divisor such that $-E$ is $\phi$-ample. Take a linearized line bundle ${\mathcal L} \in \Pic^G(({\mathbb P}^1)^n \times {\mathbb P}^r_d)$ such that \[ (({\mathbb P}^1)^n \times {\mathbb P}^r_d)^{ss}({\mathcal L}) = (({\mathbb P}^1)^n \times {\mathbb P}^r_d)^s({\mathcal L}) \neq \emptyset. 
\] Then for each sufficiently small $\epsilon > 0$, the line bundle ${\mathcal L}' = \phi^*({\mathcal L})(-\epsilon E)$ is ample and \begin{multline*} (\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1)))^{ss}({\mathcal L}') = (\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1)))^{s}({\mathcal L}') \\ = \phi^{-1}\{(({\mathbb P}^1)^n \times {\mathbb P}^r_d)^{ss}({\mathcal L})\}. \end{multline*} There is a canonical identification \[ (\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1)))^s({\mathcal L}') /G = \overline{M}_{0,n}({\mathbb P}^r,d) \] and a commutative diagram \[ \begin{CD} (\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1)))^{s}({\mathcal L}') @>f>> \overline{M}_{0,n}({\mathbb P}^r,d) \\ @V{\phi}VV @V{\bar{\phi}}VV \\ (({\mathbb P}^1)^n \times {\mathbb P}^r_d)^{s}({\mathcal L}) @>>> (({\mathbb P}^1)^n \times {\mathbb P}^r_d)^{s}({\mathcal L}) / G \\ \end{CD} \] where ${\mathbb P}^r_d := {\mathbb P}(H^0({\mathbb P}^1,{\mathcal O}(d))^{r+1})$ and $\phi: \overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1)) \to ({\mathbb P}^1)^n \times {\mathbb P}^r_d$ is the Givental contraction map \cite{G}. \end{main} The eventual goal would be to construct $\overline{M}_{0,0}({\mathbb P}^r,d)$ as a sequence of blow-ups of some projective variety. One benefit of such a construction is the ability to compute the Chow ring of $\overline{M}_{0,0}({\mathbb P}^r,d)$, as Keel's Theorem 1 from the appendix of \cite{K} gives the Chow ring of a blow up. This cannot happen as stated, for two reasons. First, $\overline{M}_{0,0}({\mathbb P}^r,d)$ is not smooth. It has singularities at points corresponding to maps with nontrivial automorphisms. However, $\overline{M}_{0,0}({\mathbb P}^r,d)$ is actually smooth when considered as a stack, and so at best we may hope for a stack analogue of the sequence of blow-ups mentioned above. The second issue seems more serious. There are no known maps from $\overline{M}_{0,0}({\mathbb P}^r,d)$ to anything nice, and a birational map from $\overline{M}_{0,0}({\mathbb P}^r,d)$ is exactly what is needed to carry out the above project. 
As corollaries to our GIT construction, we are able to construct a birational map $\bar{\phi}$ from $\overline{M}_{0,0}({\mathbb P}^r,d)$. Recently, progress has been made on understanding $\overline{M}_{0,0}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$. For example, in \cite{M}, the above $\phi$ is factored into a sequence of intermediate moduli spaces such that the map between two successive spaces is a ``weighted blow up of a regular local embedding". As a corollary of the above theorem, we take the quotient of these intermediate spaces and factor $\bar{\phi}$. In Section 1, we collect some preliminary results and definitions that will be used throughout the paper. Section 2 identifies the stable locus in $({\mathbb P}^1)^n \times {\mathbb P}^r_d$ and explains how to pull it back to $\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$. In Section 3, we prove the above theorem. Finally, in Section 4 we explain the factorization of $\varphi$ from \cite{M} and construct the intermediate spaces and the induced quotient map. All work is done over ${\mathbb C}$. This paper is part of my thesis written at the University of Texas at Austin under the direction of Prof. Sean Keel. I wish to extend my sincere gratitude for all of his encouragement, guidance, and patience. \section{Preliminaries} Suppose that we wanted to compactify the space of $n$-pointed degree-$d$ morphisms ${\mathbb P}^1 \to {\mathbb P}^r$. Perhaps after the above discussion, one would expect that $\overline{M}_{0,n}({\mathbb P}^r,d)$ correctly compactifies these objects. However, since we quotient out by isomorphisms, we only get degree $d$ \emph{un}-parametrized pointed morphisms to ${\mathbb P}^r$. Here we discuss two spaces that do correctly answer this question. \subsection{Linear Sigma Model} On one hand, an $n$-pointed, degree-$d$ morphism $f$ is given by $(r+1)$ homogeneous degree $d$ polynomials in two variables, along with a choice of $n$ distinct points on the domain ${\mathbb P}^1$. 
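As a concrete dimension count (our illustration; the general notation is set up in the next paragraph), take $r=1$ and $d=2$ with no marked points:

```latex
A degree-$2$ morphism ${\mathbb P}^1 \to {\mathbb P}^1$ is a pair of
binary quadrics with no common root,
\[
[x:y] \;\longmapsto\; [f_0(x,y) : f_1(x,y)], \qquad
f_j(x,y) = a^j_0 x^2 + a^j_1 xy + a^j_2 y^2,
\]
determined up to simultaneous scaling of the six coefficients, i.e.\ a
point of ${\mathbb P}^{(r+1)(d+1)-1} = {\mathbb P}^5$. The pairs with a
common root form the complement of the basepoint free locus
${\mathbb P}(U(1,1,2))$.
```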
In the notation of \cite{P1}, these maps correspond to the basepoint free locus \[ \begin{aligned} (({\mathbb P}^1)^n \setminus \Delta) \times {\mathbb P}(U(1, r, d)) &\subset ({\mathbb P}^1)^n \times {\mathbb P}( \bigoplus_0^r H^0({\mathbb P}^1, {\mathcal O}_{{\mathbb P}^1}(d))) \\ & := ({\mathbb P}^1)^n \times {\mathbb P}^r_d. \end{aligned} \] Once we pick coordinates on ${\mathbb P}^1$, we can consider a closed point on the basepoint free locus as \[ [x_1:y_1] \times \dots \times [x_n: y_n] \times [ f_0(x,y) : f_1(x,y) : \dots : f_r(x,y)] \] where $[x_s:y_s] \neq [x_t:y_t]$, the $f_j$ don't have any common roots, and scaling doesn't change the map. The coefficients of these $f_j$ determine a point in projective space ${\mathbb P}^r_d := {\mathbb P}^{(r+1)(d+1)-1}$. We will sometimes write $a_i^j$ for the coefficient of $x^{d-i}y^i$ in $f_j$ (after choosing the obvious coordinates on ${\mathbb P}^r_d$). We thus have a simple compactification by allowing the $r+1$ forms to have common roots, and allowing the $n$ points to come together. This space is sometimes referred to as the linear sigma model. Moreover, there is a $G$ action on this space, similar to the action examined in \cite{MFK} on binary quantics. On closed points, the action is given by \[ G \times (({\mathbb P}^1)^n \times {\mathbb P}^r_d)\to ({\mathbb P}^1)^n \times {\mathbb P}^r_d \] \begin{multline*} g \cdot [ [x_1:y_1] \times \dots \times [x_n:y_n] \times [f_0(x:y) : \dots : f_r( x:y)]] = \\ [g[x_1: y_1] \times \dots \times g[x_n:y_n] \times [f_0 \circ g^{-1}(x:y): \dots :f_r \circ g^{-1}(x:y)]] \end{multline*} where $g$ and $g^{-1}$ act on $[x:y]$ by matrix multiplication. \subsection{The Graph Space} There is another, less simple (non-linear) compactification of $(({\mathbb P}^1)^n \setminus \Delta) \times {\mathbb P}(U(1, r, d))$. 
It is clear that this set equals $M_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$, and thus $\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$ provides another compactification. We will refer to the domain curve $C$ for a map in $\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$ as a \emph{comb}. There is an obvious distinguished component $C_0$ on which $\mu|_{C_0}$ will be of degree $(d',1)$. We will call this component the \emph{handle}. The other components fit into \emph{teeth} $T_i$, which are (perhaps reducible) genus-$0$, $n_i$-pointed curves meeting $C_0$ at unique points $q_i$. There is always a representative of the map so that the degree $1$ part of $\mu$ restricted to the handle is the identity. The action on $\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$ is induced by the action on the image ${\mathbb P}^1$. Namely we have \[ G \times \overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1)) \to \overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1)) \] \[ g \cdot [\mu_1 \times \mu_2 : C \to {\mathbb P}^r \times {\mathbb P}^1] \to [\mu_1 \times g \circ \mu_2 : C \to {\mathbb P}^r \times {\mathbb P}^1]. \] \subsection{The Givental Map} Recall that in \cite{G}, Givental constructs a projective morphism that relates the graph space and the linear sigma model. \begin{theorem} (Givental) \label{g} There is a projective morphism \[ \varphi: \overline{M}_{0,0}({\mathbb P}^r \times {\mathbb P}^1, (d,1)) \to {\mathbb P}^r_d. \] \end{theorem} Set theoretically, consider a point in $\overline{M}_{0,0}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$. As mentioned above, there is a representative \[ [\mu:C \to {\mathbb P}^r \times {\mathbb P}^1 \mbox{ of bi-degree } (d,1) ] \] and a component $C_0 \subset C$ such that $\mu|_{C_0}$ is the graph of $r+1$ degree $d'$ polynomials $(f_0, \dots, f_r)$ with no common zero. On the teeth $T_1, \dots, T_s$, $\mu$ has degree $(d_i, 0)$ respectively, and $d_1 + \dots + d_s = d - d'$. 
Thus $\mu$ sends $T_i$ into ${\mathbb P}^r \times \{z_i\} \subset {\mathbb P}^r \times {\mathbb P}^1$. Let $h$ be a degree $d-d'$ form that vanishes at each $z_i$ with multiplicity $d_i$. Then \[ \varphi(\mu) = [f_0 \cdot h , f_1 \cdot h, \dots, f_r \cdot h] \in {\mathbb P}^r_d \] where we read off the coefficients to obtain the point in projective space. The projective morphism that we consider is thus the product of $\varphi$ with the $n$ evaluation morphisms $ev_i: \overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1)) \to {\mathbb P}^r \times {\mathbb P}^1 \to {\mathbb P}^1$. We call it $\phi$. On $M_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$, $\phi$ gives the isomorphism with $(({\mathbb P}^1)^n \setminus \Delta) \times {\mathbb P}(U(1, r, d))$ mentioned above. The following lemma is needed when we take the quotients. \begin{lemma} The above map \[ \phi: \overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1)) \to ({\mathbb P}^1)^n \times {\mathbb P}^r_d \] is equivariant with respect to the above $G$ actions. \end{lemma} \begin{proof}We show that both the evaluation morphisms $ev_i$ and the Givental map $\varphi$ are equivariant. Then their product is as well. Take a point in $\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$. Choose a representative $(\mu:C \to {\mathbb P}^r \times {\mathbb P}^1, \{p_i \})$. Write $C = C_0 \cup T_i$ as a comb, such that $T_i \cap C_0 = q_i$. Also write $\mu_i = \pi_1 \circ \mu|_{T_i}$. If we look at the image of the above map under $\varphi$, we see that $\varphi(\mu)$ will be the product of $r+1$ forms $(f_0, \dots, f_r)$ of degree $d'$ representing the handle and a form $h$ of degree $d-d'$ that vanishes at the $q_i$ with the correct degrees. We see that $g\cdot \varphi(\mu)$ will be the product of $(f_0 \circ g^{-1}, \dots, f_r \circ g^{-1})$, which are $r+1$ forms of degree $d'$ with no common zero, with a form $h'$ of degree $d- d'$ that vanishes at $g(q_i)$ with the same degree that $h$ vanished at $q_i$. 
We now need to calculate $\varphi(g\cdot \mu)$. With the above notation, we see that $g \cdot (\mu:C \to {\mathbb P}^r \times {\mathbb P}^1)$ would send $p \in T_i$ to $(\mu_i(p),g(q_i))$, and $p \in C_0$ to $(\mu_0(p),g(p))$. We find a representative of this new map that has the degree $1$ part be the identity. Take the curve $C' = g(C_0) \cup T_i$, where now the teeth $T_i$ are glued to $g(C_0)$ at the points $g(q_i)$. Define the map $C' \to {\mathbb P}^r \times {\mathbb P}^1$ to be $(\mu_0 \circ g^{-1}, id)$ on $g(C_0)$ and to agree with $\mu_i$ on the teeth. The corresponding map is isomorphic to $g \cdot \mu$. We look at the image under the Givental map. The image will be the $r+1$ degree $d'$ forms $\mu_0 \circ g^{-1}$, along with a form $h$ that vanishes at $g(q_i)$ of the correct degree. This is the same as $g \cdot \varphi(\mu)$. This shows that $\varphi$ is equivariant. That the evaluation morphisms are equivariant is immediate. \end{proof} \subsection{The Forgetful Morphism} The second map that we will be interested in is the ``forgetful" morphism \[ f: \overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1)) \to \overline{M}_{0,n}({\mathbb P}^r,d) \] defined by forgetting the map to ${\mathbb P}^1$ and collapsing any components that become unstable. Moreover, since the $G$ action on $\overline{M}_{0,n}({\mathbb P}^r,d)$ is trivial, we automatically have that $f$ is $G$-equivariant. \section{Calculations on ${\mathbb P}^r_d$} Immediately, one would expect that $\overline{M}_{0,n}({\mathbb P}^r,d)$ is the quotient of $\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$ by $G$, as $G$ ``takes into account" the map to ${\mathbb P}^1$. The question is, ``How to take the quotient?" We will use Geometric Invariant Theory in order to find an open set in $\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$ such that the quotient is $\overline{M}_{0,n}({\mathbb P}^r,d)$. All background concerning GIT will be taken from \cite{D} and \cite{MFK}, though we recall the main theorem here for reference. 
\begin{theorem} \cite{D} \cite{MFK} \label{D} Let $X$ be an algebraic variety, ${\mathcal L}$ a $G$-linearized line bundle on $X$. Then there are open sets $X^{s}({\mathcal L}) \subset X^{ss}({\mathcal L})\subset X$ (the \emph{stable and semi-stable loci}), such that the quotient \[ \pi:X^{ss}({\mathcal L}) \to X^{ss}({\mathcal L}) /G \] is quasi-projective and a ``good categorical quotient". This says (among other things) that for any other $G$-invariant morphism $g:X^{ss}({\mathcal L}) \to Z$, there is a unique morphism $h: X^{ss}({\mathcal L}) //G \to Z$ satisfying $h\circ \pi = g$. If we restrict to $X^{s}({\mathcal L})$ then we have a ``geometric quotient". This says (among other things) that the geometric fibers are orbits of the geometric points of $X$, and the regular functions on $X^{s}({\mathcal L})/G$ are $G$-invariant functions on $X$. \end{theorem} Of special interest to us will be when $X$ is proper over ${\mathbb C}$ (as in this case) and ${\mathcal L}$ is ample, as $X^{ss}({\mathcal L}) /G$ will then be projective \cite{MFK}. We start by considering the $G$ action on $({\mathbb P}^1)^n \times {\mathbb P}^r_d$. In our case, $G$ acts on a normal, irreducible, proper variety $X$ (such as ${\mathbb P}^r$), so any line bundle admits a unique $G$ linearization. \begin{proposition} \[ \Pic^{G}(({\mathbb P}^1)^n \times {\mathbb P}^r_d) \cong {\mathbb Z}^{n+1} \] \end{proposition} \begin{proof} For any vector $\vec{k}= (k_1, \cdots, k_n,k_{n+1}) \in {\mathbb Z}^{n+1}$, we define a line bundle on $({\mathbb P}^1)^n \times {\mathbb P}^r_d$ by \[ {\mathcal L}_{\vec{k}}=\bigotimes_{i=1}^{n+1} \pi_i^{*}({\mathcal O}(k_i)) \] where $\pi_i$ is projection onto the $i$-th component. Every line bundle on $({\mathbb P}^1)^n \times {\mathbb P}^r_d$ is isomorphic to ${\mathcal L}_{\vec{k}}$ for some choice of $\vec{k}$ (\cite{hart}). We need only show that each of these line bundles has one (and only one) linearization. 
However, since each $\pi_i$ is $G$-equivariant, and each of the restrictions of ${\mathcal L}_{\vec{k}}$ to a factor has a unique linearization (\cite{D}), ${\mathcal L}_{\vec{k}}$ has a canonical $G$-linearization. \end{proof} \begin{corollary} \[ {\mathcal L}_{\vec{k}} \text{ is ample } \iff k_i >0 \text{ for all } i \] \end{corollary} \begin{proof} If all $k_i >0$ then ${\mathcal L}_{\vec{k}}$ defines the projective embedding \begin{multline*} \begin{CD} ({\mathbb P}^1)^n \times {\mathbb P}^r_d@>{\text{Veronese}}>> \prod_{i=1}^n {\mathbb P}^{(1+k_i) -1} \times {\mathbb P}^{\binom{rd+r+d + k_{n+1}}{rd+r+d}-1} \end{CD}\\ \begin{CD} @>{\text{Segre}}>> {\mathbb P}^{\left( ( \prod_{i=1}^n 1+k_i) \times {\binom{rd+r+d + k_{n+1}}{rd+r+d}} \right)-1} \end{CD} \end{multline*} On the other hand, if some multiple of ${\mathcal L}_{\vec{k}}$ defines a closed embedding, its restriction to any factor must be ample. But this restriction is ${\mathcal O}_{{\mathbb P}^1}(k_i)$ (or ${\mathcal O}_{{\mathbb P}^r_d}(k_{n+1})$), and these are ample iff $k_i, k_{n+1} >0$. \end{proof} In order to find the stable and semi-stable loci in $({\mathbb P}^1)^n \times {\mathbb P}^r_d$, we will look at the image under the above Veronese / Segre maps. The main point is the following. \begin{proposition} Let $\Omega$ be the composition of the Veronese and Segre maps above. Then \[ (({\mathbb P}^1)^n \times {\mathbb P}^r_d)^{ss}({\mathcal L}_{\vec{k}}) = \Omega^{-1}\left\{\left({\mathbb P}^{\left( ( \prod_{i=1}^n 1+k_i) \times {\binom{rd+r+d + k_{n+1}}{rd+r+d}} \right)-1}\right)^{ss}({\mathcal O}(1))\right\} \] and similarly for the stable locus. \end{proposition} \begin{proof} Call the image projective space ${\mathbb P}^{BIG}$. First, we show that there is an action of $G$ on ${\mathbb P}^{k}$ such that the Veronese map ${\mathbb P}^1 \to {\mathbb P}^k$ is $G$-equivariant. We need a representation of $G$ in $GL(k+1)$. 
We can explicitly write it out, where we choose $[x,y]$ as coordinates on ${\mathbb P}^1$ and the obvious coordinates $[x^k: x^{k-1}y: \dots: y^k]$ on ${\mathbb P}^k$. Namely, \[ \rho: G \to GL(k+1) \] \[ \left( \begin{matrix} a & b \\ c & d \end{matrix} \right) \to [a_{i,j}]_{i,j=0}^k \] where $a_{i,j}=\sum_{n=0}^j \binom{k-i}{n}\binom{i}{j-n} b^n a^{k-i-n} d^{j-n} c^{i-j+n}$. This is the coefficient of $x^{k-j}y^j$ in $(ax+by)^{k-i} (cx+dy)^i$, and one checks that $\rho$ is a homomorphism. We can define the representation of $G$ into $GL(\binom{rd+r+d + k_{n+1}}{rd+r+d})$ similarly. We now have representations \[ \rho_i: G \to GL(k_i +1) \,\,\,\,\, \text{and} \,\,\,\,\,\, \rho_{n+1}: G\to GL\left(\binom{rd+r+d + k_{n+1}}{rd+r+d}\right). \] We define the action on ${\mathbb P}^{BIG}$ by taking the tensor representation. This extends to an action on all of ${\mathbb P}^{BIG}$, and $\Omega$ is $G$-equivariant by construction. Take the composition $({\mathbb P}^1)^n \times {\mathbb P}^r_d \to \Omega(({\mathbb P}^1)^n \times {\mathbb P}^r_d) \hookrightarrow {\mathbb P}^{BIG}$. We apply the following theorem of \cite{MFK} to each of these arrows. \begin{theorem} \cite{MFK} (pg 46) Assume that $f:X \to Y$ is finite and $G$-equivariant with respect to actions of $G$ on $X$ and $Y$. If $X$ is proper over $k$ (${\mathbb C}$ for us) and $M$ is ample on $Y$, then \[ X^{ss}(f^*M) = f^{-1}\{Y^{ss}(M)\} \] and the same result holds for the stable locus. \end{theorem} Finally, that $\Omega^* {\mathcal O}(1) = {\mathcal L}_{\vec{k}}$ is obvious. \end{proof} We are now able to determine the stable and semi-stable locus in the linear sigma model. \begin{theorem} Let $\vec{k}=(k_1, k_2, \dots, k_{n+1}) \in {\mathbb Z}^{n+1}_+$. 
Then $[x_1: y_1] \times \dots \times [x_n: y_n] \times [a_i^j] \in (({\mathbb P}^1)^n \times {\mathbb P}^r_d)^{ss}({\mathcal L}_{\vec{k}})$ (respectively $\in (({\mathbb P}^1)^n \times {\mathbb P}^r_d)^{s}({\mathcal L}_{\vec{k}})$) if for every point $p\in {\mathbb P}^1$ \[ \sum_{i\,|\, [x_i: y_i] = p} k_i + d_p \cdot k_{n+1} \leq \frac{1}{2} \left(\sum_{i=1}^n k_i + d\cdot k_{n+1} \right) \] (respectively strict inequality holds) where $d_p$ is the degree of common vanishing of the forms $f_0, \dots, f_r$ at $p \in {\mathbb P}^1$. \end{theorem} \begin{proof} We prove the theorem by first looking at the action of a maximal torus on ${\mathbb P}^{BIG}$. Here, there is only one line bundle, so everything is canonical. Then we pull back to find the corresponding locus in $({\mathbb P}^1)^n \times {\mathbb P}^r_d$. We then move on to the entire group $G$. Let $T$ be the maximal torus of $SL_2({\mathbb C})$ equal to the image of the 1-parameter subgroup \begin{equation*} \lambda(t) = \left( \begin{matrix} t^{-1} & 0 \\ 0 & t \end{matrix} \right). \end{equation*} We choose coordinates $a^j_i$ on ${\mathbb P}^r_d$, where $a^j_i$ is the coefficient of $x^{d-i}y^i$ in $f_j(x,y)$. Similarly, we choose the following coordinates on ${\mathbb P}^{BIG}$. For $0 \leq s_i \leq k_i$ $(1\leq i \leq n)$, and $v_{ij}$ such that $\sum_{i=0}^d \sum_{j=0}^r v_{ij} = k_{n+1}$, we have the coordinate $x_i^{k_i - s_i} y_i^{s_i}(a_i^j)^{v_{ij}}$ (with products over $i$ and over the pairs $(i,j)$ understood). Then $T$ acts on ${\mathbb P}^{BIG}$ by \[ \lambda(t) \cdot (x_i^{k_i - s_i} y_i^{s_i}(a_i^j)^{v_{ij}}) \to t^{(\sum_{i=1}^n 2s_i-k_i) + (\sum_{ij} (d-2i)v_{ij})}x_i^{k_i - s_i} y_i^{s_i}(a_i^j)^{v_{ij}} \] By the theorem above, it is enough to compute the semi-stable locus of this action on ${\mathbb P}^{BIG}$ and pull it back via the various inclusions and embeddings. Luckily we know how to compute the semi-stable locus of a torus acting on a projective space. 
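To make the weight bookkeeping concrete, here is the special case $n=0$, $k_{n+1}=1$ (an aside we add; it is the classical binary-quantics setting of \cite{MFK}):

```latex
In this case ${\mathbb P}^{BIG}$ is ${\mathbb P}^r_d$ itself, with
coordinates $a^j_i$ scaling by $t^{d-2i}$ under $\lambda(t)$. If the
forms $f_0,\dots,f_r$ have common vanishing order $d_{[1:0]} > d/2$ at
$[1:0]$, then every nonzero coordinate has $i \ge d_{[1:0]}$, so every
weight satisfies
\[
d-2i \;\le\; d - 2\,d_{[1:0]} \;<\; 0,
\]
and the point is unstable. For $r=0$ this recovers the classical
statement that a binary form of degree $d$ is semi-stable if and only
if no root has multiplicity greater than $d/2$.
```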
From \cite{D} we know that a point of projective space is stable (resp.\ semi-stable) with respect to $T$ if and only if $0 \in \mbox{interior}(\overline{wt})$ (resp.\ $0 \in \overline{wt}$). In our case, the weight set ($wt$) is the subset of \[ \left\{ -\sum_{i=1}^n k_i - d \cdot k_{n+1}, \dots, \sum_{i=1}^n k_i + d \cdot k_{n+1} \right\} \] consisting of powers of $t$ such that the coordinate $x_i^{k_i - s_i} y_i^{s_i}(a_i^j)^{v_{ij}} $ is nonzero. If the point is unstable, then all the powers $(\sum_{i=1}^n 2s_i-k_i) + (\sum_{ij} (d-2i)v_{ij}) < 0 $ (or perhaps all $>0$). So $x_i^{k_i - s_i} y_i^{s_i}(a_i^j)^{v_{ij}} =0$ if \begin{multline*} 0 \leq \left(\sum_{i=1}^n (2s_i-k_i) + \sum_{ij} (d-2i)v_{ij} \right) \iff \\ 0 \leq 2 \sum_{i=1}^n (s_i-k_i) - 2 \sum_{ij} i \cdot v_{ij} + \sum_{i=1}^n k_i + d k_{n+1} \iff \\ \sum_{ij} i\cdot v_{ij} + \sum_{i=1}^n (k_i -s_i) \leq \frac{1}{2}\left( \sum_{i=1}^n k_i + d k_{n+1} \right) . \end{multline*} Define the following sets in $({\mathbb P}^1)^n \times {\mathbb P}^r_d$: \begin{multline*} US = \{ [x_1: y_1] \times \dots \times [x_n: y_n] \times [a_i^j] \,\,| \\ x_i^{k_i - s_i} y_i^{s_i}(a_i^j)^{v_{ij}}= 0\,\, \text{ if } \,\, \sum_{ij} i\cdot v_{ij} + \sum_{i=1}^n (k_i -s_i) \leq \frac{1}{2}\left( \sum_{i=1}^n k_i + d k_{n+1} \right) \} \end{multline*} and \begin{multline*} X = \{[x_1: y_1] \times \dots \times [x_n: y_n] \times [a_i^j] \,\, | \\ \frac{1}{2} \left( \sum_{i=1}^n k_i + d k_{n+1} \right) < \sum_{[x_i:y_i] = [1:0]} k_i + k_{n+1} \cdot d_{[1:0] } \} \end{multline*} We show that $US = X$. First, we show $US \subseteq X$. Let $x = \{ [x_1: y_1] \times \dots \times [x_n: y_n] \times [a_i^j] \}$ be in $US \setminus X$. So, $ x_i^{k_i - s_i} y_i^{s_i}(a_i^j)^{v_{ij}}= 0$ if \[ \sum_{ij} i\cdot v_{ij} + \sum_{i=1}^n (k_i -s_i) \leq \frac{1}{2}\left( \sum_{i=1}^n k_i + d k_{n+1} \right). 
\] But we also have \[ \sum_{[x_i:y_i] = [1:0]} k_i + k_{n+1} \cdot d_{[1:0] } \leq \frac{1}{2} \left( \sum_{i=1}^n k_i + d k_{n+1} \right) . \] Then, take $s_i = 0$ if $[x_i: y_i] = [1:0]$ and $s_i = k_i$ otherwise. At least one of the $a_{d_{[1:0]}}^j \neq 0$; for that value of $j$, let $v_{ij}= k_{n+1}$ with $i = d_{[1:0]}$. Then we have \[ \sum_{i=0}^n(k_i - s_i) + \sum_{ij} i\cdot v_{ij} = \sum_{[x_i:y_i] = [1:0]} k_i + k_{n+1} \cdot d_{[1:0] } \leq \frac{1}{2} \left( \sum_{i=1}^n k_i + d k_{n+1} \right). \] The coordinate $x_i^{k_i - s_i} y_i^{s_i}(a_i^j)^{v_{ij}}\neq 0$ by construction, which says $x \notin US$, a contradiction. Thus $US \subseteq X$. Next we show $X \subseteq US$. Take $y = \{ [x_1: y_1] \times \dots \times [x_n: y_n] \times [a_i^j] \} \in X \setminus US$. So $ x_i^{k_i - s_i} y_i^{s_i}(a_i^j)^{v_{ij}}\neq 0$, but \[ \sum_{ij} i\cdot v_{ij} + \sum_{i=1}^n (k_i -s_i) \leq \frac{1}{2}\left( \sum_{i=1}^n k_i + d k_{n+1} \right). \] Combining with the fact that $y \in X$, we see that \[ \sum_{i=1}^n (k_i - s_i) + \sum_{ij} i \cdot v_{ij} < \sum_{[x_i:y_i] = [1:0]} k_i + k_{n+1} \cdot d_{[1:0] } \] Then, for all $i$ with $[x_i:y_i] = [1:0]$, we must have $s_i = 0$. Thus, \[ \sum_{[x_i:y_i] = [1:0]} k_i \leq \sum_{i=1}^n(k_i - s_i). \] Similarly, since $(a_i^j)^{v_{ij}} \neq 0$, we know that if $a_i^j = 0$, then $v_{ij}=0$. Thus, \[ \sum_{j=0}^r \sum_{i=0}^d i \cdot v_{ij} = \sum_{j=0}^r \sum_{i = d_{[1:0]}}^d i \cdot v_{ij} \geq d_{[1:0]} \sum_{ij} v_{ij} = d_{[1:0]} \cdot k_{n+1} \] Combining these gives our contradiction, showing that $X\subseteq US$ as desired. If we repeat this calculation, except replacing the condition $(\sum_{i=1}^n 2s_i-k_i) + (\sum_{ij} (d-2i)v_{ij}) < 0 $ with $(\sum_{i=1}^n 2s_i-k_i) + (\sum_{ij} (d-2i)v_{ij}) > 0$, we get the following Lemma. 
\begin{lemma}\label{l1} $[x_1: y_1] \times \dots \times [x_n: y_n] \times [a_i^j]$ is unstable with respect to $T$ if \[ \sum_{i \, | \,[x_i:y_i] = [1:0]} k_i + k_{n+1}\cdot d_{[1:0]} > \frac{1}{2} \left( \sum_{i=1}^n k_i + d k_{n+1}\right) \] or \[ \sum_{i \, | \,[x_i:y_i] = [0:1]} k_i + k_{n+1} \cdot d_{[0:1]}> \frac{1}{2} \left( \sum_{i=1}^n k_i + d k_{n+1}\right) \] \end{lemma} We are now ready to move on to stability with respect to $G$. Suppose that $[x_1: y_1] \times \dots \times [x_n: y_n] \times [a_i^j]$ is stable with respect to $G$ and there is a point $p$ in ${\mathbb P}^1$ such that \[ \sum_{[x_i:y_i] = p} k_i + k_{n+1} \cdot d_p > \frac{1}{2} \left( \sum_{i=1}^n k_i + d k_{n+1}\right). \] Let $g\in G$ map $ p \to [1:0]$. Then $g\cdot [x_1: y_1] \times \dots \times [x_n: y_n] \times [a_i^j]$ is unstable with respect to $T$, and $[x_1: y_1] \times \dots \times [x_n: y_n] \times [a_i^j]$ is unstable with respect to $G$, contradicting the assumption. Now assume that $[x_1: y_1] \times \dots \times [x_n: y_n] \times [a_i^j]$ is unstable, but has no point $p$ such that \[ \sum_{i | [x_i:y_i] = p} k_i + k_{n+1} \cdot d_p > \frac{1}{2} \left( \sum_{i=1}^n k_i + d k_{n+1}\right). \] Then, there is some maximal torus $T'$ with respect to which $[x_1: y_1] \times \dots \times [x_n: y_n] \times [a_i^j]$ is unstable. For any maximal torus in $G$, there is $g\in G$ such that $gT'g^{-1} = T$. Then we have that $g \cdot [x_1: y_1] \times \dots \times [x_n: y_n] \times [a_i^j]$ is unstable with respect to $T$, hence must have either $[1:0]$ or $[0:1]$ satisfying Lemma \ref{l1}. Then $[x_1: y_1] \times \dots \times [x_n: y_n] \times [a_i^j]$ has $g^{-1}[1:0]$ (or $g^{-1}[0:1]$) satisfying Lemma \ref{l1}. \end{proof} We are now ready to describe the chamber decomposition of the ample cone of $\Pic^G (({\mathbb P}^1)^n \times {\mathbb P}^r_d)$. 
As a first step we normalize our line bundle so that we form the simplex \[ \Delta = \left\{ (k_1, k_2, \dots, k_{n+1}) | \sum_{i=1}^n k_i + d \cdot k_{n+1} = 2 \right\} \] Then for each subset $I \subseteq \{1,2, \dots, n\}$ and each integer $0 \leq d_{I} \leq d$, we get a wall $W_{I, d_I}$ given by \[ \sum_{i \in I} k_i + d_I \cdot k_{n+1} =1 \] and the walls break $\Delta$ into chambers. Following \cite{H2}, we mention the following obvious statements. \begin{enumerate} \item \[ W_{S, d_S} = W_{S^c, d-d_S} \] \item Each interior wall divides $\Delta$ into two parts \[ \left\{ (k_1, k_2, \dots, k_{n+1}) | \sum_{i \in I} k_i + d_I \cdot k_{n+1} \leq 1\right\} \] and \[ \left\{(k_1, k_2, \dots, k_{n+1}) | \sum_{i \in I} k_i + d_I \cdot k_{n+1} \geq 1\right\} \] \item Two vectors $\vec{k}= (k_1, \dots, k_{n+1})$ and $\vec{k'} = (k'_1, \dots, k'_{n+1})$ lie in the same chamber if for all $I \subseteq \{ 1, 2, \dots, n \}$ and $0 \leq d_I \leq d$ we have \[ \sum_{i \in I} k_i + d_I \cdot k_{n+1} \leq 1 \iff \sum_{i \in I}k' _i + d_I \cdot k'_{n+1} \leq 1 \] This means that vectors in the same chamber will define the same stable and semi-stable loci, and hence the same quotient. \item There are semi-stable points that aren't stable iff $\vec{k}$ lies on a wall. \end{enumerate} Recall that our goal isn't to take the GIT quotient of $({\mathbb P}^1)^n \times {\mathbb P}^r_d$, but of $\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$, and at this point we haven't said anything about the stable or semi-stable loci in $\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$. We are able to pull back the stable locus via $\phi$, by the following Theorem of Yi Hu. \begin{theorem} \cite{H1}\label{H1} Let $\pi:Y \to X$ be a $G$-equivariant projective morphism between two (possibly singular) quasi-projective varieties. Given any linearized ample line bundle $L$ on $X$, choose a relatively ample linearized line bundle $M$ on $Y$. Assume moreover that $X^{ss}(L)=X^{s}(L)$. 
Then there exists an $n_0$ such that when $n\geq n_0$, we have \[ Y^{ss}(\pi^* L^n \otimes M) = Y^{s}(\pi^* L^n \otimes M) = \pi^{-1}\{X^s(L)\} \] \end{theorem} For example, the locus of maps in $\overline{M}_{0,0}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$ that are stable will be maps such that no tooth of the comb $C$ has degree $\geq d/2$. See Figure \ref{f7} to see how the stable locus depends on the line bundle. \begin{figure} \caption{Stable locus of $\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$ as the line bundle varies} \label{f7} \end{figure} \section{The Geometric Quotient} We are now ready to present our GIT description of $\overline{M}_{0,n}({\mathbb P}^r,d)$. The construction is similar to that of $\overline{M}_{0,n}$ from \cite{HK}. We state it similarly. First let $E$ be an effective divisor with support the full exceptional locus of $\phi$, such that $-E$ is $\phi$-ample. Such an $E$ exists by the following Lemma from \cite{mori}. \begin{lemma} \cite{mori} (pg. 70) \label{mori} Let $f:X \to Y$ be a birational morphism. Assume that $Y$ is projective and $X$ is ${\mathbb Q}$-factorial. Then there is an effective $f$-exceptional divisor $E$ such that $-E$ is $f$-ample. \end{lemma} $\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1))$ is ${\mathbb Q}$-factorial because it is locally the quotient of a smooth scheme by a finite group. \begin{main} For each linearized line bundle ${\mathcal L} \in \Pic^G(({\mathbb P}^1)^n \times {\mathbb P}^r_d)$ such that \[ (({\mathbb P}^1)^n \times {\mathbb P}^r_d)^{ss}({\mathcal L}) = (({\mathbb P}^1)^n \times {\mathbb P}^r_d)^s({\mathcal L}) \neq \emptyset \] and for each sufficiently small $\epsilon > 0$, the line bundle ${\mathcal L}' = \phi^*({\mathcal L})(-\epsilon E)$ is ample and \begin{multline*} (\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1)))^{ss}({\mathcal L}') = (\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1)))^{s}({\mathcal L}') \\ = \phi^{-1}\{(({\mathbb P}^1)^n \times {\mathbb P}^r_d)^{ss}({\mathcal L})\}. 
\end{multline*} There is a canonical identification \[ (\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1)))^s({\mathcal L}') /G = \overline{M}_{0,n}({\mathbb P}^r,d) \] and a commutative diagram \[ \begin{CD} (\overline{M}_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1)))^{ss}({\mathcal L}') @>f>> \overline{M}_{0,n}({\mathbb P}^r,d) \\ @V{\phi}VV @VVV \\ (({\mathbb P}^1)^n \times {\mathbb P}^r_d)^{ss}({\mathcal L}) @>>> (({\mathbb P}^1)^n \times {\mathbb P}^r_d)^{ss}({\mathcal L}) / G \\ \end{CD} \] where $\phi$ is the generalized Givental map and $f$ is the forgetful morphism. \end{main} \begin{proof} For the first two statements, we apply the above Theorem \ref{H1} of Hu. Following the notation from \cite{HK}, let $U$ be the semi-stable locus in $({\mathbb P}^1)^n \times {\mathbb P}^r_d$ for the above action of $G$ corresponding to ${\mathcal L}_{\vec{k}}$. Recall that this corresponds to $([x_i: y_i], f_0, \dots, f_r)$ such that for any $p \in {\mathbb P}^1$, we have \[ \sum_{i | [x_i: y_i] = p} k_i + k_{n+1} \cdot d_p \leq \frac{1}{2} \left( \sum_{i=1}^n k_i + k_{n+1} \cdot d \right). \] Let $U' = \phi^{-1}(U)$. Let the corresponding quotients be $Q$ and $Q'$. We have the obvious composition of $G$-invariant maps: \[ U' \to U \to Q. \] By the universal property of GIT quotients, we get a proper birational map $Q' \to Q$. Similarly, since $G$ acts trivially on $\overline{M}_{0,n}({\mathbb P}^r,d)$, we have by the universal property again a proper birational map from $Q' \to \overline{M}_{0,n}({\mathbb P}^r,d)$. We will show that this is an isomorphism by showing that both sides have the same Picard number. This is enough since both sides are ${\mathbb Q}$-factorial. \[ \rho(Q') = \rho(U') = \rho(U) +e(U) = \rho (Q) + e(U) \] where $e(U)$ is the number of $\phi$-exceptional divisors that meet $U'$. 
Since $\phi$ is an isomorphism on the open locus $M_{0,n}({\mathbb P}^r \times {\mathbb P}^1, (d,1)) \subset \overline{M}_{0,0}(\bbP^r \times \bbP^1, (d,1))N$, we need only look at the boundary divisors in $\overline{M}_{0,0}(\bbP^r \times \bbP^1, (d,1))N$. We use Lemma \ref{codim} to see which divisors are exceptional. \begin{lemma}\label{codim} \[ \phi(D(N_1,N_2,d_1, d_2)) \subset ({\mathbb P}^1)^n \times {\mathbb P}^r_d \] has codimension $|N_2| + (r+1)d_2-1$. \end{lemma} \begin{proof} The idea for this proof comes from Kirwan (\cite{k1}). First notice that \[ \phi(D(N_1, N_2, d_1,d_2)) = (p_1, p_2, \dots, p_n, f_0, \dots, f_r) \] where $ p_i =p_j \text { for } i,j \in N_2$, and each of the $f_s$ vanishes at that point with multiplicity $d_2$. First, we calculate the codimension of the locus of $(p_1, p_2, \dots, p_n, f_0, \dots, f_r)$ where each of the $p_i$ is $[0:1]$, and where each $f_j$ has a zero of order $d_2$ at $[0:1]$ and a zero of order $d_1 = d-d_2$ at $[1:0]$ (i.e. each $f_j$ consists of only the monomial $x^{d_2}y^{d_1}$). It is clear this has codimension $n+rd+r+d - r = n+ (r+1)d$. If we remove the condition that each $f_j$ have a root of order $d_1$ at $[1:0]$, then we allow each $f_j$ to have higher powers of $x$. We also remove the condition that those $p_i$ with $i\notin N_2$ are equal to $[0:1]$. Thus we see that the set of $(p_1, p_2, \dots, p_n, f_0, \dots, f_r)$ such that $[0:1] = p_i$ for $i \in N_2$, and each $f_i$ vanishes at $[0:1]$ with multiplicity $d_2$ has codimension \[ n +(r+1)d - (r+1) d_1 - |N_1| = |N_2| + (r+1)d_2. \] Finally, we act on this set by $G$. We subtract one from the above codimension: $G$ has dimension three, but the stabilizer of $[0:1]$ is two-dimensional, so acting by $G$ only decreases the codimension by one. \end{proof} Next we show that $\rho(Q')$ is independent of the chamber from which ${\mathcal L}_{\vec{k}}$ comes. We check that as we cross a wall $W_{I,d_I}$, $\rho(Q')$ doesn't change. Let our two open sets be $U_1$ and $U_2$.
Recall that $W_{I,d_I}$ breaks our chamber into two parts \[ \left\{(k_1, \dots, k_n, k_{n+1}) | \sum_{i \in I} k_i + d_I \cdot k_{n+1} \leq 1 \right\} \] and \[ \left\{(k_1, \dots, k_n, k_{n+1}) | \sum_{i \in I} k_i + d_I \cdot k_{n+1} \geq 1 \right\} \] so suppose that $U_1$ meets the first set. Notice that $U'_1$ and $U'_2$ meet the same divisors $D(N_1, N_2, d_1, d_2)$ except that $U'_1$ meets $D(I^c, I, d-d_I, d_I)$ but not $D(I, I^c, d_I, d-d_I)$. Similarly, $U'_2$ meets $D(I, I^c, d_I, d-d_I)$ but not $D(I^c, I, d-d_I, d_I)$. If $2<|I|$, $1< r$ and $1 \leq d_I \leq d$, or $r=1$ and $1<d_I \leq d$, then $Q_1 \dashrightarrow Q_2$ is a small modification (an isomorphism in codimension $1$ in the notation of \cite{mori}). Hence $\rho(Q_1) = \rho (Q_2)$, and it's clear that $e(U_1) = e(U_2)$. If $2=|I| \text{ and } d_I = 0$, then we see that $Q_1 \dashrightarrow Q_2$ contracts the divisor of points $(p_1, \dots, p_n, f_0, \dots f_r)$ where $p_i = p_j, i,j \in I$. Therefore $\rho(Q_1) = \rho (Q_2)+1$. However, by Lemma \ref{codim}, we see that the divisor $D(I^c, I, d, 0)$ with $|I|=2$ lying over $U_1$ is not exceptional, while its complement $D(I, I^c, 0 ,d)$ lying over $U_2$ is exceptional. Hence $e(U_2) = e(U_1)+1$. Putting these together we see that $\rho(Q'_1) = \rho(Q'_2)$ as desired. If $r=1, |I|=0, d_I =1$, then we see that $Q_1 \dashrightarrow Q_2$ contracts the divisor of points $(p_1, \dots, p_n, f_0, f_1)$ where $f_0, f_1$ have a common root. Therefore $\rho(Q_1) = \rho (Q_2)+1$. However, by Lemma \ref{codim}, we see that the divisor $D(N,0,d-1,1)$ lying over $U_1$ is not exceptional, while its complement $D(0,N, 1, d-1)$ is contracted. Hence $e(U_2) = e(U_1)+1$. Putting these together we see that $\rho(Q'_1) = \rho(Q'_2)$ as desired. Finally, we prove the Theorem for one vector of one chamber. Here we look at all divisors $D(N_1, N_2, d_1,d_2) \subset \overline{M}_{0,0}(\bbP^r \times \bbP^1, (d,1))N$.
We have $2^n$ ways to distribute the $n$ points on the domain curve, and we can label the collapsed component with any degree $\leq d$. Hence there are $2^n ( d+1) $ potential configurations; however, the configurations $D(I, I^c, d, 0)$ are not stable maps if $|I| = n$ or $n-1$. Hence there are $2^n(d+1) - n-1$ total boundary divisors in $\overline{M}_{0,0}(\bbP^r \times \bbP^1, (d,1))N$. We need to determine how many are stable (with respect to the group). We do several calculations, as a given linearization $\vec{k}$ may lie in a maximal chamber for certain values of $d,n$, but lie on a wall for others. All the calculations are very similar though. Assume that $r>1$. \begin{itemize} \item CASE 1 ($d+n$ odd, $d>n$) We choose the linearization corresponding to $(1,1,1, \dots, 1, 1)$. We count the unstable divisors, i.e. the number of $D(N_1, N_2, d_1, d_2)$ such that \[ |N_2| + d_2 \geq \frac{d+n+1}{2}. \] Any divisor $D(N_1, N_2, d_1, d_2)$ with $\frac{d+n+1}{2} \leq d_2 \leq d$ is unstable. There are $2^n(\frac{d-n+1}{2})$ of these. Thus the total number of unstable divisors is \[ 2^n\left(\frac{d-n+1}{2}\right) + \overbrace{(2^n - \binom{n}{0})}^{\# \text{ with } d_2 = \frac{d+n-1}{2}}+\overbrace{(2^n - \binom{n}{0} - \binom{n}{1})}^{ \# \text{ with } d_2 = \frac{d+n-3}{2}} + \] \[ \dots +\overbrace{2^n - \binom{n}{0} - \binom{n}{1} - \dots - \binom{n}{n-1}}^{\# \text{ with } d_2 = \frac{d-n+1}{2}} \] \begin{align*} &= 2^n\left(\frac{d-n+1}{2}\right) +n2^n -n\binom{n}{0} - (n-1)\binom{n}{1} - \dots - 1\binom{n} {n-1} \\ & = 2^n\left(\frac{d-n+1}{2}\right) +n2^n - n \binom{n}{n} - (n-1)\binom{n}{n-1} - \dots - 1\binom{n}{1}\\ & = 2^n\left(\frac{d-n+1}{2}\right) + n2^n - \sum_{i=1}^{n}(i) \binom{n}{i} \\ &= 2^n\left(\frac{d-n+1}{2}\right) +n2^n - n2^{n-1} = 2^{n-1}(d+1) \end{align*} Hence the total number of stable divisors is $2^{n-1}(d+1)-1-n$ (stable with respect to the group).
The number which are $\phi$-exceptional are all except those with $|N_2| = 2$ and $d_2 = 0$ (by Lemma \ref{codim}). So there are $2^{n-1}(d+1)-1-n-\binom{n}{2}$ $\phi$-exceptional divisors. Thus, since $\rho(Q) = n+1$, \begin{align*} \rho(Q') = \rho(Q) + e(U) &= n+1 + 2^{n-1}(d+1) - 1 -n -\binom{n}{2} \\ & = 2^{n-1}(d+1)- \binom{n}{2}. \end{align*} \item CASE 2 ($d+n$ odd, $d <n$) We again choose the linearization corresponding to $(1,1,1, \dots, 1,1)$. \item CASE 3 ($d+n$ even, $n$ odd) We choose the linearization corresponding to $(1,1, \dots, 1, 2)$. \item CASE 4 ($d+n$ even, $n$ even) We choose the linearization corresponding to $(1,2,2, \dots, 2, 1)$. \end{itemize} Note that care must be taken when $n=2$ and $d=1,2$. For here, $\rho(({\mathbb P}^1)^n \times {\mathbb P}^r_d) = 2$ for the given linearizations (instead of the expected $3$). This is because the unstable locus contains a divisor. However, we wouldn't need to subtract out the divisor $D(0,2,d_1, d_2)$ for not being $\phi$-exceptional, because it would have been unstable with respect to the group. Thus, the sums work out to be the same. We have shown for every line bundle such that the stable locus equals the semi-stable locus that $\rho(Q') = 2^{n-1}(d+1) -\binom{n}{2}.$ From \cite{P1} we know \[ \rho(\overline{M}_{0,0}(\bbP^r,d)N) = 2^{n-1}(d+1) - \binom{n}{2} \] which completes the proof for $r>1$. When $r=1$ we repeat the above construction. Here we see, by Lemma \ref{codim}, that the divisor $D(N, 0, d-1,1)$ is not $\phi$-exceptional. So we subtract one from the above count of $\phi$-exceptional divisors. Thus \[ \rho(Q') = \rho(Q) + e(U) = n+1 + 2^{n-1}(d+1) - 2 -n -\binom{n}{2} = 2^{n-1}(d+1)- \binom{n}{2} - 1 \] An immediate consequence of Theorem 4.4 in \cite{BF} gives $\rho(\overline{M}_{0,n}({\mathbb P}^1,d))$, and it agrees with the above calculation. When $r=0$, we must have $d=0$, or else the moduli space is empty. Thus ${\mathbb P}^0_0 = pt$.
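The Case 1 count of unstable divisors can be sanity-checked by brute-force enumeration. The following short script (our own verification aid, not part of the proof; `unstable_count` is a hypothetical helper) sums $\binom{n}{|N_2|}$ over all pairs $(|N_2|, d_2)$ meeting the instability threshold and compares with the closed form $2^{n-1}(d+1)$:

```python
from math import comb

def unstable_count(n, d):
    # number of boundary divisors D(N_1, N_2, d_1, d_2) with
    # |N_2| + d_2 >= (d + n + 1)/2, counting comb(n, k) subsets N_2 of size k
    return sum(comb(n, k)
               for k in range(n + 1)
               for d2 in range(d + 1)
               if 2 * (k + d2) >= d + n + 1)

# agrees with 2^(n-1) * (d + 1) whenever d + n is odd (Cases 1 and 2)
for n, d in [(1, 2), (2, 3), (3, 2), (3, 4), (3, 6), (4, 7)]:
    assert unstable_count(n, d) == 2 ** (n - 1) * (d + 1)
```

The symmetry $(|N_2|, d_2) \leftrightarrow (n-|N_2|, d-d_2)$ explains the closed form: when $d+n$ is odd, exactly one member of each pair is unstable, so half of the $2^n(d+1)$ configurations are counted.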
The calculation follows as now we are only dealing with stable curves, and not stable maps, and was proven originally in \cite{HK}. We have that \[ \rho(Q') = 2^{n-1} - \binom{n}{2} -1 = \rho(\overline{M}_{0,n}) \] \end{proof} There are three immediate corollaries that are interesting. The first is a new proof of a result of Hu and Keel in \cite{HK}. By letting $d=r=0$ in the above Theorem we have the following. \begin{corollary} \label{c2} \cite{HK} For each linearized line bundle ${\mathcal L} \in \Pic^G(({\mathbb P}^1)^n)$ such that \[ (({\mathbb P}^1)^n)^{ss}({\mathcal L}) = (({\mathbb P}^1)^n)^s({\mathcal L}) \neq \emptyset \] and for each sufficiently small $\epsilon > 0$, the line bundle ${\mathcal L}' = \phi^*({\mathcal L})(-\epsilon E)$ is ample and \[ ({\mathbb P}^1[n])^{ss}({\mathcal L}') = ({\mathbb P}^1[n])^{s}({\mathcal L}') = \phi^{-1}\{(({\mathbb P}^1)^n)^{ss}({\mathcal L})\} \] There is a canonical identification \[ ({\mathbb P}^1[n])^s({\mathcal L}') /G = \overline{M}_{0,n} \] and a commutative diagram \[ \begin{CD} ({\mathbb P}^1[n])^{ss}({\mathcal L}') @>f>> \overline{M}_{0,n} \\ @V{\phi}VV @VVV \\ (({\mathbb P}^1)^n)^{ss}({\mathcal L}) @>>> (({\mathbb P}^1)^n )^{ss}({\mathcal L}) / G \\ \end{CD} \] \end{corollary} \begin{proof} In the case when $d=1$ and $r=1$, we have that \[ \overline{M}_{0,n}({\mathbb P}^1,1) = {\mathbb P}^1[n] \] where ${\mathbb P}^1[n] $ is the Fulton-MacPherson compactification of $n$ points on ${\mathbb P}^1$. The Fulton-MacPherson map ${\mathbb P}^1[n] \to ({\mathbb P}^1)^n$ is exactly the product of evaluation morphisms $\phi$. \end{proof} Secondly, we find that the Grassmannian of lines is a GIT quotient of a projective space. \begin{corollary} \label{c3} The Grassmannian of lines in ${\mathbb P}^r$ is the GIT quotient of ${\mathbb P}^r_1={\mathbb P}^{2(r+1)-1}$ by the above action of $G$.
\end{corollary} \begin{proof} We know that $\overline{M}_{0,0}({\mathbb P}^r,1) =M_{0,0}({\mathbb P}^r,1) = {\mathbb G}(1,r).$ By the Theorem \[ (\overline{M}_{0,0}({\mathbb P}^r \times {\mathbb P}^1, (1,1)))^{s} / G = {\mathbb G}(1,r). \] But $\overline{M}_{0,0}({\mathbb P}^r \times {\mathbb P}^1, (1,1))^{s} = {M}_{0,0}({\mathbb P}^r \times {\mathbb P}^1,(1,1)) \cong ({\mathbb P}^r_1)^{s}$. Note that when $n=0$, there is only one ample line bundle (up to multiple) on the linear sigma model and it has a unique linearization. \end{proof} The third corollary constructs $\overline{M}_{0,0}(\bbP^r,d)$ via a sequence of intermediate moduli spaces. It requires more background and is given in the next section. \section{Intermediate Moduli Spaces} In the case when $n=0$, we obtain as a corollary a factorization of the induced map \[ \bar{\varphi}:\overline{M}_{0,0}(\bbP^r,d) \to ({\mathbb P}^r_d)^{s}/Aut({\mathbb P}^1) \] into a sequence of intermediate moduli spaces, such that the map between successive spaces is ``almost" a blow up. We will use a factorization of the Givental map $\varphi$ presented in \cite{M}. We give the necessary definitions and results here. Recall the construction of $\overline{M}_{0,0}(\bbP^r,d)$ presented by Fulton and Pandharipande in \cite{FP}. Given a basis of hyperplanes $\bar{t} \in H^0({\mathbb P}^r, {\mathcal O}(1))$, there is an open subset $U_{\bar{t}}\subset \overline{M}_{0,0}(\bbP^r,d)$ such that if we pull back those hyperplanes, the corresponding domain curves along with the sections will be $(r+1)d$ pointed stable \emph{curves}. By choosing an ordering on the sections, we get an \'{e}tale rigidification of that open set by a smooth moduli space denoted $M_{0,0}({\mathbb P}^r,d,\bar{t})$ that is a $({\mathbb C}^*)^r$ bundle over $\overline{M}_{0,d(r+1)}$. \[ \begin{CD} \overline{M}_{0,d(r+1)} \supset B@<{({\mathbb C}^*)^{r}}<< {M}_{0,0}({\mathbb P}^r,d, \bar{t}) @>{(S_d)^{r+1}}>> U_{\bar{t}}\subset \overline{M}_{0,0}(\bbP^r,d) \\
\end{CD} \] $\overline{M}_{0,0}(\bbP^r,d)$ is then constructed by gluing together the $U_{\bar{t}}$ for different choices of $\bar{t}$. In \cite{M}, Musta\c{t}\v{a} constructs similar rigidifications of $\overline{M}_{0,0}(\bbP^r \times \bbP^1, (d,1))$ and ${\mathbb P}^r_d$ that are $({\mathbb C}^*)^r$ bundles over ${\mathbb P}^1[d(r+1)]$ and $({\mathbb P}^1)^{d(r+1)}$, where ${\mathbb P}^1[d(r+1)]$ is the Fulton-MacPherson compactification of $d(r+1)$ points on ${\mathbb P}^1$. \[ \begin{CD} {\mathbb P}^1[d(r+1)] \supset B@<{({\mathbb C}^*)^{r}}<< G(r,d,\bar{t}) @>{(S_d)^{r+1}}>> U_{\bar{t}}\subset \overline{M}_{0,0}(\bbP^r \times \bbP^1, (d,1)) \\ @V{F-M}VV @V{\varphi(\bar{t})}VV @V{\varphi}VV \\ ({\mathbb P}^1)^{d(r+1)} @<{({\mathbb C}^*)^{r}}<< {\mathbb P}^r_d(\bar{t}) @>{(S_d)^{r+1}}>> {\mathbb P}^r_d \\ \end{CD} \] The factorization of $\varphi$ is obtained by gluing together the pull back of the following factorization of the Fulton-MacPherson map. \begin{defthe} (pg 13) Consider a degree 1 morphism $\phi: C \to {\mathbb P}^1$ having as domain $C$ a rational curve with $N$ marked points. The morphism will be called $n$-stable if \begin{enumerate} \item Not more than $N-n$ of the marked points coincide. \item Any ending curve that is not the parametrized component contains more than $N-n$ points. \item All the marked points are smooth, and every component that is not the parametrized component has at least three distinct special points. \end{enumerate} There is a smooth projective moduli space ${\mathbb P}^1[N,n]$ for families of $n$-stable degree 1 morphisms. Moreover ${\mathbb P}^1[N,n]$ is the blow up of ${\mathbb P}^1[N,n-1]$ along the strict transforms of the $n$-dimensional diagonals in $({\mathbb P}^1)^N$. \end{defthe} There is an analogous factorization of $\varphi(\bar{t})$.
\begin{defthe}(pg 22) A $(\bar{t},d,k)$-acceptable family of morphisms over $S$ is given by the following data \[ (\pi:{\mathcal C} \to S, \phi: {\mathcal C} \to {\mathbb P}^1, \{q_{i,j}\}_{0 \leq i \leq r , 1 \leq j \leq d }, {\mathcal L}, e) \] where \begin{enumerate} \item The family $(\pi:{\mathcal C} \to S, \phi: {\mathcal C} \to {\mathbb P}^1, \{q_{i,j}\}_{0 \leq i \leq r , 1 \leq j \leq d })$ is a $(r+1)(k-1)+1$-stable family of degree $1$ morphisms to ${\mathbb P}^1$. \item ${\mathcal L}$ is a line bundle on ${\mathcal C}$ \item $e:{\mathcal O}_{{\mathcal C}}^{r+1} \to {\mathcal L}$ is a morphism of sheaves with $\pi_*e$ nowhere zero such that, via the natural isomorphism $H^0({\mathbb P}^r,{\mathcal O}(1)) \cong H^0(S \times {\mathbb P}^1, {\mathcal O}_{S\times {\mathbb P}^1}^{r+1})$, we have \[ (e(\bar{t}_i)=0) = \sum_{j=1}^d q_{i,j} \] \end{enumerate} There is a smooth moduli space ${\mathbb P}^r_d(\bar{t},k)$ for these families that is a torus bundle over an open subset of ${\mathbb P}^1[(r+1)d, (r+1)(k-1) +1]$. \end{defthe} Finally, \cite{M} creates global objects factoring $\varphi$ the same way that $\overline{M}_{0,0}(\bbP^r,d)$ was constructed in \cite{FP}. \begin{defthe}(pg 23) A $(d,k)$-acceptable family of morphisms is given by the following data \[ (\pi:{\mathcal C} \to S, \mu=(\mu_1, \mu_2): {\mathcal C} \to {\mathbb P}^r \times {\mathbb P}^1, {\mathcal L}, e) \] where: \begin{enumerate} \item ${\mathcal L}$ is a line bundle on ${\mathcal C}$ which, together with the morphism \[ e:{\mathcal O}_{{\mathcal C}}^{r+1} \to {\mathcal L} \] determines the rational map $\mu_1:{\mathcal C} \to {\mathbb P}^r$. \item For any $s \in S$ and any irreducible component $C'$ of $C_s$, the restriction $e_{C'}: {\mathcal O}_{{\mathcal C}'}^{r+1} \to {\mathcal L}_{C'}$ is non-zero.
\item For any $s\in S$, $\deg {\mathcal L}_{C_s} = d$ and the image $e_{C_s}(H) \in H^0({\mathcal C}_s, {\mathcal L}_{{\mathcal C}_s})$ of a generic section $H\in H^0({\mathcal C}_s, {\mathcal O}_{{\mathcal C}_s}^{r+1})$ determines the structure of a $(r+1)(k-1)+1$-stable morphism on $\mu_2:{\mathcal C}_s \to {\mathbb P}^1$ \end{enumerate} There is a projective coarse moduli space ${\mathbb P}^r_d(k)$ for these objects. \end{defthe} It is exactly these objects that we take the quotient of by $Aut({\mathbb P}^1)$. After we quotient, there is no longer a parametrized component to refer to in the above definitions. However, when $d$ is odd, there is a unique component of the domain curve that will play this role. \begin{proposition} Let $C$ be a connected genus-$0$ curve with each irreducible component labeled by a nonnegative integer $d_i$. If $d = \sum d_i$ is odd, then there is a unique irreducible component $\bar{C}$ such that, when $C$ is viewed as a comb with handle $\bar{C}$, no tooth has sum of degrees $> d/2$ \end{proposition} \begin{proof} Let $\{C\}_{d/2}$ be the set of all connected subcurves of degree $\geq d/2$. Since $d$ is odd, any two such subcurves have combined degree greater than $d$, so they must share a component; hence the intersection of all of them is nonempty. There is a unique component in the intersection that will be $\bar{C}$. \end{proof} \begin{definition} A $(d,k)^*$-acceptable morphism is given by the following data. \[ (\pi:{\mathcal C} \to S, \mu:{\mathcal C} \to {\mathbb P}^r, {\mathcal L}, e) \] where: \begin{enumerate} \item ${\mathcal L}$ is a line bundle on ${\mathcal C}$ which, together with the morphism \[ e:{\mathcal O}_{{\mathcal C}}^{r+1} \to {\mathcal L} \] determines the rational map $\mu: {\mathcal C} \to {\mathbb P}^r$. \item For any $s \in S$ and any irreducible component $C'$ of ${{\mathcal C}}_s$, the restriction $e_{C'}: {\mathcal O}_{{\mathcal C}'}^{r+1} \to {\mathcal L}_{C'}$ is non-zero.
\item For any $s\in S$, $\deg {\mathcal L}_{C_s} = d$ and the image $e_{C_s}(H) \in H^0({\mathcal C}_s, {\mathcal L}_{{\mathcal C}_s})$ of a generic section $H\in H^0({\mathcal C}_s, {\mathcal O}_{{\mathcal C}_s}^{r+1})$ determines the structure of a $(r+1)(k-1)+1$-stable rigid morphism, where $\bar{C}_{s}$ plays the role of the parametrized component in the definition of an $n$-stable degree $1$ morphism above. \end{enumerate} \end{definition} \begin{corollary} There is a projective coarse moduli space $\overline{M}_{0,0}({\mathbb P}^r,d,k)$ for families of $(d,k)^*$-acceptable morphisms. \end{corollary} \begin{proof} We show that \[ \overline{M}_{0,0}({\mathbb P}^r,d,k) := ({\mathbb P}^r_d(k))^{s}/Aut({\mathbb P}^1) \] satisfies the properties of a coarse moduli space. This quotient is constructed identically to that from Theorem 0.1. We pull back the stable locus from ${\mathbb P}^r_d$ by Theorem \ref{H1}. Again, the stable locus in ${\mathbb P}^r_d(k)$ will be those $(d,k)$-acceptable maps such that no tooth has degree $> d/2$. The universal properties of this space are inherited from the universal properties of ${\mathbb P}^r_d(k)$ as well as the universal properties of a categorical quotient. First we need to show that there is a natural transformation of functors \[ \phi:\overline{{\mathcal M}}_{0,0}({\mathbb P}^r,d,k) \to Hom_{Sch}(*, \overline{M}_{0,0}({\mathbb P}^r,d,k)) \] where $\overline{{\mathcal M}}_{0,0}({\mathbb P}^r,d,k)$ is the obvious moduli functor $\{ schemes \} \to \{ sets \}$. Given a family of $(d,k)^*$-acceptable morphisms \[ (\pi:{\mathcal C} \to S, \mu:{\mathcal C} \to {\mathbb P}^r, {\mathcal L}, e) \] we can get a $(d,k)$-acceptable morphism \[ (\pi:{\mathcal C} \to S, \mu=(\mu_1, \mu_2): {\mathcal C} \to {\mathbb P}^r \times {\mathbb P}^1, {\mathcal L}, e) \] by taking $\mu_2: {\mathcal C} \to {\mathbb P}^1$ to be the identity on $\bar{C}_s$ and constant on the other components.
This will lie in the stable locus by construction, and thus gives a map $S \to ({\mathbb P}^r_d(k))^{s}$. Composing with the quotient gives an element of $Hom_{Sch}(S, \overline{M}_{0,0}({\mathbb P}^r,d,k))$. We need to show that given a scheme $Z$ and a natural transformation of functors $\psi: \overline{{\mathcal M}}_{0,0}({\mathbb P}^r,d,k) \to Hom_{Sch}(*, Z)$, there exists a unique morphism of schemes \[ \gamma:\overline{M}_{0,0}({\mathbb P}^r,d,k) \to Z \] such that $\psi = \tilde{\gamma} \circ \phi$. By the above, we have a functor \[ \overline{{\mathcal M}}_{0,0}({\mathbb P}^r,d,k) \to ({\mathcal P}^r_d(k))^s \] and as such we get a functor \[ \bar{\psi}: ( {\mathcal P}^r_d(k))^s \to Hom_{Sch}(*, Z) \] which by representability gives a map \[ \bar{\gamma}: ({\mathbb P}^r_d(k))^s \to Z. \] This map is $G$-equivariant by construction, hence factors through the quotient \[ \gamma:\overline{M}_{0,0}({\mathbb P}^r,d,k) \to Z \] \end{proof} We can sum up this corollary as follows. Notice that ${\mathbb P}^r_d / G = \overline{M}_{0,0}({\mathbb P}^r,d,1) = \dots = \overline{M}_{0,0}({\mathbb P}^r,d,\frac{d+1}{2})$. This is because up to that point, the exceptional loci of the blow ups lie outside the stable locus. For example, the exceptional divisor of ${\mathbb P}^r_d(\bar{t},1)\to {\mathbb P}^r_d(\bar{t})$ corresponds to a curve with two components. One component is parametrized, and the other has all the $d(r+1)$ points on it. \index{Bibliography@\emph{Bibliography}} \end{document}
\begin{document} \title{ An eigenvalue problem for fully nonlinear elliptic equations with gradient constraints} \author{Ryan Hynd\footnote{Department of Mathematics, University of Pennsylvania. Partially supported by NSF grant DMS-1301628.}} \maketitle \begin{abstract} We consider the problem of finding $\lambda\in \mathbb{R}$ and a function $u:\mathbb{R}^n\rightarrow\mathbb{R}$ that satisfy the PDE $$ \max\left\{\lambda + F(D^2u) -f(x),H(Du)\right\}=0, \quad x\in \mathbb{R}^n. $$ Here $F$ is elliptic, positively homogeneous and superadditive, $f$ is convex and superlinear, and $H$ is typically assumed to be convex. Examples of this type of PDE arise in the theory of singular ergodic control. We show that there is a unique $\lambda^*$ for which the above equation has a solution $u$ with appropriate growth as $|x|\rightarrow \infty$. Moreover, associated to $\lambda^*$ is a convex solution $u^*$ that has bounded second derivatives, provided $F$ is uniformly elliptic and $H$ is uniformly convex. It is unknown whether or not $u^*$ is unique up to an additive constant; however, we verify this is the case when $n=1$ or when $F, f,H$ are ``rotational." \end{abstract} \section{Introduction} The eigenvalue problem of singular ergodic control is to find a real number $\lambda$ and function $u:\mathbb{R}^n\rightarrow\mathbb{R}$ that satisfy the PDE \begin{equation}\label{LinEigProb} \max\left\{\lambda -\Delta u -f(x),|Du|-1\right\}=0, \quad x\in \mathbb{R}^n. \end{equation} Here $Du=(u_{x_i})$ is the gradient of $u$, $\Delta u=\sum^{n}_{i=1}u_{x_ix_i}$ is the usual Laplacian, and $f$ is assumed to be convex and superlinear $$ \lim_{|x|\rightarrow \infty}\frac{f(x)}{|x|}=\infty. $$ We call any such $\lambda$ an {\it eigenvalue}. In previous work \cite{Hynd}, we showed there is a unique eigenvalue $\lambda^*\in \mathbb{R}$ such that the PDE \eqref{LinEigProb} admits a viscosity solution $u$ satisfying the growth condition $$ \lim_{|x|\rightarrow \infty}\frac{u(x)}{|x|}=1. 
$$ Moreover, associated to $\lambda^*$, there is always one solution $u^*$ that is convex with $D^2u^*\in L^\infty(\mathbb{R}^n;S_n(\R))$. Here $S_n(\R)$ denotes the collection of real, symmetric $n\times n$ matrices. \par The eigenvalue $\lambda^*$ is also known to have the ergodic control theoretic interpretation $$ \lambda^*:=\inf_{\nu}\limsup_{t\rightarrow \infty}\frac{1}{t}\left\{\mathbb{E}\int^{t}_{0}f\left(\sqrt{2}W(s) + \nu(s)\right)ds + |\nu|(t)\right\} $$ as shown in \cite{Menaldi}. Here $(W(t),t\ge 0)$ is an $n$-dimensional Brownian motion on a probability space $(\Omega, {\mathcal F}, \mathbb{P})$ and $\nu$ is an $\mathbb{R}^n$-valued control process. Each $\nu$ is required to be adapted to the filtration generated by $W$ and satisfy $$ \begin{cases} \nu(0)=0\;\\ t\mapsto \nu(t) \; \text{is left continuous}\\ |\nu|(t)<\infty , \;\text{for all}\; t> 0\; \end{cases} $$ $\mathbb{P}$ almost surely; the notation $|\nu|(t)$ denotes the total variation of $\nu$ restricted to the interval $[0,t)$. We say $\nu$ is a {\it singular control} as it may have sample paths that are not absolutely continuous with respect to the standard Lebesgue measure on $[0,\infty)$. We refer the reader to \cite{Borkar, Fleming, Oksendal} for more information on how PDE arise in singular stochastic control. \par We also showed in \cite{Hynd} that $\lambda^*$ is given by the following ``minmax" formula \begin{equation}\label{minmaxLap} \lambda^*=\inf\left\{\sup_{|D\psi(x)|<1}\left\{\Delta\psi(x) + f(x)\right\} : \psi\in C^2(\mathbb{R}^n), \; \liminf_{|x|\rightarrow \infty}\frac{\psi(x)}{|x|}\ge 1 \right\} \end{equation} and the ``maxmin" formula \begin{equation}\label{maxminLap} \lambda^*=\sup\left\{\inf_{x\in\mathbb{R}^n}\left\{\Delta\phi(x) + f(x)\right\} : \phi\in C^2(\mathbb{R}^n), \, |D\phi|\le 1 \right\}. \end{equation} The purpose of this paper is to verify generalizations of these results.
\par In particular, we consider the following eigenvalue problem: find $\lambda\in \mathbb{R}$ and $u:\mathbb{R}^n\rightarrow \mathbb{R}$ satisfying the PDE \begin{equation}\label{EigProb} \max\left\{\lambda + F(D^2u) -f(x),H(Du)\right\}=0, \quad x\in \mathbb{R}^n. \end{equation} Here $D^2u=(u_{x_ix_j})$ is the Hessian of $u$. A standing assumption in this paper is that the nonlinearity $F:S_n(\R)\rightarrow \mathbb{R}$ is elliptic, positively homogeneous, and superadditive: \begin{equation}\label{Fassump} \begin{cases} -\Theta\tr N\le F(M+N)-F(M)\le -\theta\tr N, \quad (N\ge 0) \\ F(tM)=tF(M) \\ F(M)+F(N)\le F(M+N) \end{cases} \end{equation} for each $M,N\in S_n(\R)$, $t\ge 0$ and some $\theta,\Theta\ge 0$. If $\theta>0$, we say $F$ is uniformly elliptic. For instance, in \eqref{LinEigProb} $F$ is the linear function $F(M)=-\tr M.$ And a more typical nonlinear example we have in mind is $$ F(M)=\min_{1\le k\le N}\{-\tr(A_kM)\}, $$ where each $A_k$ in $\{A_k\}_{k=1,\dots,N}\subset S_n(\R)$ satisfies $$ \theta|\xi|^2\le A_k\xi\cdot\xi\le \Theta|\xi|^2, \quad \xi\in \mathbb{R}^n. $$ \par We will assume throughout that the gradient constraint function $H\in C(\mathbb{R}^n)$ satisfies \begin{equation}\label{Hassump} \begin{cases} H(0)<0\\ \{p\in \mathbb{R}^n: H(p)\le 0\}\; \text{is compact and strictly convex.} \end{cases} \end{equation} In the motivating equation \eqref{LinEigProb}, $H(p)=|p|-1$. And in view of the results of \cite{Hynd}, it is natural to study solutions of \eqref{EigProb} subject to a suitable growth condition. To this end, we define the function $$ \ell(v):=\max\{p\cdot v: H(p)\le 0\}, \quad v\in \mathbb{R}^n $$ which is also known as the support function of the convex set $\{p\in \mathbb{R}^n: H(p)\le 0\}$. \par Note that we can replace $H$ in \eqref{EigProb} with the explicit convex gradient constraint $$ H_0(p):=\max_{|v|=1}\left\{p\cdot v -\ell(v)\right\} $$ since $H(p)\le 0$ if and only if $H_0(p)\le 0$ (Theorem 8.24 in \cite{Rock}).
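To make this equivalence concrete, here is a small numerical illustration (ours, not from the text) for the model constraint set $K=\{p : p_1^2+4p_2^2\le 1\}$, whose support function is $\ell(v)=\sqrt{v_1^2+v_2^2/4}$; the discretized $H_0$ is negative at interior points of $K$ and positive outside:

```python
import math

def ell(v):
    # support function of K = {p : p_1^2 + 4 p_2^2 <= 1}
    return math.sqrt(v[0] ** 2 + v[1] ** 2 / 4)

def H0(p, m=4000):
    # H_0(p) = max over |v| = 1 of { p . v - ell(v) }, discretized over m angles
    return max(p[0] * math.cos(t) + p[1] * math.sin(t)
               - ell((math.cos(t), math.sin(t)))
               for t in (2 * math.pi * i / m for i in range(m)))

inside = [(0.0, 0.0), (0.5, 0.2), (-0.9, 0.0)]   # interior points of K
outside = [(1.5, 0.0), (0.0, 0.6), (1.0, 0.5)]   # points outside K

assert all(H0(p) < 0 for p in inside)
assert all(H0(p) > 0 for p in outside)
```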
This is something we will do repeatedly in the work that follows. We also note that by the assumptions \eqref{Hassump}, there are positive constants $c_0, c_1$ such that \begin{equation}\label{UpperLowerell} c_0|v|\le \ell(v)\le c_1|v|, \quad v\in \mathbb{R}^n \end{equation} and consequently \begin{equation}\label{UpperLowerH} |p|-c_1\le H_0(p)\le |p|-c_0, \quad p\in \mathbb{R}^n. \end{equation} \par The main result of this paper is as follows. \begin{thm}\label{Thm1} Assume \eqref{Fassump}, \eqref{Hassump}, and that $f$ is convex and superlinear. \\ (i) There is a unique $\lambda^*\in\mathbb{R}$ such that \eqref{EigProb} has a viscosity solution $u\in C(\mathbb{R}^n)$ satisfying the growth condition \begin{equation}\label{ellgrowth} \lim_{|x|\rightarrow \infty}\frac{u(x)}{\ell(x)}=1. \end{equation} Associated to $\lambda^*$ is a convex viscosity solution $u^*$ that satisfies \eqref{ellgrowth}. \\ (ii) Suppose that $F$ is uniformly elliptic, $H$ is convex and that there are $\sigma,\Sigma>0$ such that \begin{equation}\label{Hassump2} \sigma|\xi|^2\le D^2H(p)\xi\cdot \xi\le \Sigma|\xi|^2, \quad \xi\in\mathbb{R}^n \end{equation} for Lebesgue almost every $p\in \mathbb{R}^n$. Then we may choose $u^*$ to satisfy $D^2u^*\in L^\infty(\mathbb{R}^n;S_n(\R))$. \end{thm} \par When $\lambda=\lambda^*$ in \eqref{EigProb}, we will call solutions that satisfy the growth condition \eqref{ellgrowth} {\it eigenfunctions}. It is unknown if eigenfunctions are unique up to an additive constant. However, we establish below that when $n=1$ any two convex eigenfunctions differ by a constant; see Proposition \ref{UniqunessN1}. We also show that if $F$, $f$ and $H$ are ``rotational," then $u^*$ can be chosen radial and twice continuously differentiable. This generalizes Theorem 2.3 of \cite{Kruk} and Theorem 1.3 of our previous work \cite{Hynd}. 
\begin{thm}\label{SymmRegThm} Suppose \begin{equation}\label{SymmetryCond} \begin{cases} f(Ox)=f(x)\\ H(O^tp)=H(p)\\ F(OMO^t)=F(M) \end{cases} \end{equation} for each $x,p\in \mathbb{R}^n$, $M\in S_n(\R)$ and orthogonal $n\times n$ matrix $O$. If $F$ is uniformly elliptic and $H$ satisfies \eqref{Hassump2}, then there is a radial eigenfunction $u^*\in C^2(\mathbb{R}^n)$. \end{thm} \par In Proposition \ref{ConvUniqueness} below, we assume \eqref{SymmetryCond} and show any two convex, radial eigenfunctions differ by an additive constant. Unfortunately, we do not know if this symmetry assumption ensures that every eigenfunction is radial. Finally, we verify a minmax formula for $\lambda^*$ which is the fully nonlinear analog of the formula \eqref{minmaxLap}. However, for nonlinear $F$, we only establish an inequality corresponding to the formula \eqref{maxminLap}. \begin{thm}\label{minmaxThm} Define $$ \lambda_+:=\inf\left\{\sup_{H(D\psi(x))<0}\left\{-F(D^2\psi(x)) + f(x)\right\} : \psi\in C^2(\mathbb{R}^n), \; \liminf_{|x|\rightarrow \infty}\frac{\psi(x)}{\ell(x)}\ge 1 \right\} $$ and $$ \lambda_-:=\sup\left\{\inf_{x\in\mathbb{R}^n}\left\{-F(D^2\phi(x)) + f(x)\right\} : \phi\in C^2(\mathbb{R}^n), \, H(D\phi)\le 0 \right\}. $$ Then $$ \lambda_-\le \lambda^*\le\lambda_+. $$ If there is an eigenfunction $u^*$ that satisfies $D^2u^*\in L^\infty(\mathbb{R}^n;S_n(\R))$, then $\lambda^*=\lambda_+.$ \end{thm} The organization of this paper is as follows. In section \ref{CompSect}, we verify the uniqueness of eigenvalues as detailed in Theorem \ref{Thm1}. Then we consider the existence of an eigenvalue $\lambda^*$ in section \ref{ExistSec}. Next, we verify Theorem \ref{SymmRegThm} in section \ref{RegSect}, and we prove the uniqueness results of Propositions \ref{UniqunessN1} and \ref{ConvUniqueness} in section \ref{1DandRotSymmSect}. Section \ref{MinMaxSect} of this paper is dedicated to the proof of Theorem \ref{minmaxThm}.
Finally, we would like to acknowledge the hospitality of the University of Pennsylvania's Center of Race $\&$ Equity in Education where part of this paper was written. \section{Comparison principle}\label{CompSect} In this section, we show there can be at most one eigenvalue as detailed in Theorem \ref{Thm1}. As equation \eqref{EigProb} is a fully nonlinear elliptic equation for a scalar function $u$, we will employ the theory of viscosity solutions \cite{Bardi, Crandall, CIL, Fleming}. In particular, we will use results and notation from the ``user guide" \cite{CIL}. Moreover, going forward we typically will omit the modifier ``viscosity" when we refer to sub- and supersolutions. We begin our discussion with a basic proposition about subsolutions of the first order PDE $H(Du)=0$. \begin{lem}\label{HLipLem} A function $u\in C(\mathbb{R}^n)$ satisfies \begin{equation}\label{HSubsoln} H(Du(x))\le 0, \quad x\in \mathbb{R}^n \end{equation} if and only if \begin{equation}\label{ellLip} u(x)-u(y)\le \ell(x-y), \quad x,y\in \mathbb{R}^n. \end{equation} \end{lem} \begin{proof} Assume \eqref{HSubsoln}. Then $u$ is Lipschitz by \eqref{UpperLowerH}, and $H(Du(x))\le 0$ for almost every $x\in \mathbb{R}^n$. Let $u^\epsilon:=\eta^\epsilon*u$ be a standard mollification of $u$. That is, $\eta\in C^\infty_c(\mathbb{R}^n)$ is a nonnegative, radial function supported in $B_1(0)$ that satisfies $\int_{\mathbb{R}^n}\eta(z)dz=1$ and $\eta^\epsilon:=\epsilon^{-n}\eta(\cdot/\epsilon)$. It is readily verified that $u^\epsilon\in C^\infty(\mathbb{R}^n)$ and $u^\epsilon$ converges to $u$ uniformly as $\epsilon$ tends to 0; see Appendix C.5 of \cite{Evans2} for more on mollification. As $H_0$ is convex, we have by Jensen's inequality $$ H_0(Du^\epsilon)=H_0\left(D(\eta^\epsilon*u)\right)=H_0\left(\eta^\epsilon*Du\right)\le \eta^\epsilon*H_0(Du)\le 0.
$$ It follows that for any $x,y\in \mathbb{R}^n$ $$ u^\epsilon(x)-u^\epsilon(y)=\int^1_0Du^\epsilon(y+t(x-y))\cdot (x-y)dt\le \ell(x-y). $$ Sending $\epsilon\rightarrow 0^+$ gives \eqref{ellLip}. \par For the converse, suppose there is $p\in \mathbb{R}^n$ such that $$ u(x)\le u(x_0)+p\cdot(x-x_0) + o(|x-x_0|) $$ as $x\rightarrow x_0$. Substituting $x=x_0 - tv$ for $t>0$ and $|v|=1$ above gives $$ u(x_0)- t\ell(v)\le u(x_0-tv)\le u(x_0) -t p\cdot v+o(t). $$ As a result, $p\cdot v\le \ell(v)$. As $v$ was arbitrary, $H(p)\le 0$. \end{proof} \begin{cor} The function $\ell$ satisfies \eqref{HSubsoln}. Moreover, at any $x\in \mathbb{R}^n$ for which $\ell$ is differentiable $$ \ell(x)=D\ell(x)\cdot x\quad \text{and} \quad H(D\ell(x))=0. $$ \end{cor} \begin{proof} As $\ell$ is convex and positively homogeneous, it is sublinear. Therefore, $\ell(x)\le \ell(y)+\ell(x-y)$ for each $x,y\in \mathbb{R}^n$. By the previous lemma, $\ell$ satisfies \eqref{HSubsoln}. Now suppose that $\ell$ is differentiable at $x$, and choose $\xi$ such that $H(\xi)\le 0$ and $\ell(x)=\xi\cdot x$. Then, as $y\rightarrow x$ \begin{align*} \xi\cdot y& \le\ell(y) \\ & =\ell(x)+ D\ell(x)\cdot (y-x)+o(|y-x|)\\ & = \xi\cdot x+D\ell(x)\cdot (y-x)+o(|y-x|). \end{align*} Choosing $y=x+tv$ for $t>0$ and $v\in \mathbb{R}^n$ gives $\xi\cdot v\le D\ell(x)\cdot v + o(1)$ as $t\rightarrow 0^+$. Thus, $\xi=D\ell(x)$ and $H(D\ell(x))\le 0$. If $x\neq 0$, $$ H_0(D\ell(x))\ge D\ell(x)\cdot \frac{x}{|x|} -\ell\left(\frac{x}{|x|}\right)=\frac{D\ell(x)\cdot x-\ell(x)}{|x|}=0 $$ and so $H(D\ell(x))=0$. If instead $x=0$, then $\ell$ is linear since it is positively homogeneous. However, this would contradict \eqref{UpperLowerell}. \end{proof} The following assertion is a comparison principle for eigenvalues that makes use of the growth condition \eqref{ellgrowth}.
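As a concrete aside, the objects $H$ and $\ell$ are easy to probe numerically. The sketch below is our own illustration (not part of the argument), using the hypothetical choice $H(p)=|p|-1$ in two dimensions, for which $\ell(v)=\sup\{p\cdot v:H(p)\le 0\}$ is just the Euclidean norm; it checks the sublinearity used in the corollary above and the Euler identity $\ell(x)=D\ell(x)\cdot x$.

```python
import numpy as np

# Illustration only (not from the paper): take n = 2 and H(p) = |p| - 1,
# so {H <= 0} is the closed unit disk and ell(v) = sup{p.v : H(p) <= 0} = |v|.
theta = np.linspace(0.0, 2.0 * np.pi, 400_000, endpoint=False)
P = np.column_stack([np.cos(theta), np.sin(theta)])   # boundary of {H <= 0}

def ell(v):
    """Approximate ell(v) = sup{p.v : H(p) <= 0} over the boundary circle."""
    return float(np.max(P @ np.asarray(v, dtype=float)))

x, y = np.array([3.0, 4.0]), np.array([-1.0, 2.0])
assert abs(ell(x) - 5.0) < 1e-6               # ell is the Euclidean norm here
assert ell(x) <= ell(y) + ell(x - y) + 1e-9   # sublinearity of ell
# Euler identity ell(x) = Dell(x).x: here Dell(x) = x/|x|, so Dell(x).x = |x|.
assert abs(float(x @ x) / ell(x) - ell(x)) < 1e-6
```

The supremum over the disk is attained on its boundary because $p\mapsto p\cdot v$ is linear, so sampling the circle suffices.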
\begin{prop}\label{lamCompProp} Assume $u\in USC(\mathbb{R}^n)$ is a subsolution of \eqref{EigProb} with eigenvalue $\lambda$ and $v\in LSC(\mathbb{R}^n)$ is a supersolution of \eqref{EigProb} with eigenvalue $\mu$. If \begin{equation}\label{growthComp} \limsup_{|x|\rightarrow\infty}\frac{u(x)}{\ell(x)}\le 1\le \liminf_{|x|\rightarrow\infty}\frac{v(x)}{\ell(x)}, \end{equation} then $\lambda\le \mu$. \end{prop} \begin{rem} Any subsolution $u$ of \eqref{EigProb} satisfies $H(Du)\le 0$. By Lemma \ref{HLipLem}, $u$ then satisfies \eqref{ellLip} and therefore the first inequality in \eqref{growthComp} automatically holds. We have included both inequalities in \eqref{growthComp} simply for aesthetic purposes, and we continue this practice throughout this paper. \end{rem} \begin{proof} For $\tau\in (0,1)$ and $\eta>0$, set $$ w^{\tau}(x,y):=\tau u(x) - v(y), \quad \varphi^\eta(x,y):=\frac{1}{2\eta}|x-y|^2 $$ for $x,y\in \mathbb{R}^n$. Observe \begin{align}\label{wminusphi} (w^{\tau}-\varphi^\eta)(x,y)&=\tau(u(x)-u(y))+\tau u(y)-v(y)-\frac{1}{2\eta}|x-y|^2 \nonumber \\ &\le \tau\ell(x-y)+\tau u(y)-v(y)-\frac{1}{2\eta}|x-y|^2 \nonumber \\ &\le \tau c_1|x-y|+\tau u(y)-v(y)-\frac{1}{2\eta}|x-y|^2 \nonumber \\ &\le \eta \tau^2 c_1^2+\tau u(y)-v(y)-\frac{1}{4\eta}|x-y|^2. \end{align} In view of \eqref{growthComp}, $\lim_{|y|\rightarrow \infty}(\tau u(y)-v(y))=-\infty$ and so $$ \lim_{|x|+|y|\rightarrow \infty}(w^{\tau}-\varphi^\eta)(x,y)=-\infty. $$ As a result, there is $(x_\eta,y_\eta)$ maximizing $w^{\tau}-\varphi^\eta$. \par By Theorem 3.2 in \cite{CIL}, there are $X,Y\in S_n(\R)$ with $X\le Y$ such that $$ \left(\frac{x_\eta-y_\eta}{\eta},X\right)\in \overline{J}^{2,+}(\tau u)(x_\eta) $$ and $$ \left(\frac{x_\eta-y_\eta}{\eta},Y\right)\in \overline{J}^{2,-}v(y_\eta).
$$ Note that \begin{align*} H_0\left(\frac{x_\eta-y_\eta}{\eta}\right)&=H_0\left(\tau\frac{x_\eta-y_\eta}{\tau\eta}+(1-\tau)0\right)\\ &\le \tau H_0\left(\frac{x_\eta-y_\eta}{\tau\eta}\right)+(1-\tau)H_0(0)\\ &\le (1-\tau)H_0(0)\\ &<0. \end{align*} As $v$ is a supersolution of \eqref{EigProb}, $$ \mu + F(Y)-f(y_\eta)\ge 0. $$ Since $F$ is elliptic and positively homogeneous, \begin{align}\label{ComparisonIneq} \tau \lambda -\mu&\le -\tau F\left(\frac{X}{\tau}\right)+F(Y)+\tau f(x_\eta)-f(y_\eta) \nonumber\\ &= - F\left(X\right)+F(Y)+\tau f(x_\eta)-f(y_\eta)\nonumber \\ &\le \tau f(x_\eta)-f(y_\eta)\nonumber \\ &= f(x_\eta)-f(y_\eta) + (\tau-1)f(x_\eta)\nonumber \\ &\le f(x_\eta)-f(y_\eta) +(\tau-1)\inf_{\mathbb{R}^n}f. \end{align} \par We now claim that $(y_\eta)_{\eta>0}\subset\mathbb{R}^n$ is bounded. To see this, recall inequality \eqref{wminusphi}. If there is a sequence $\eta_k\rightarrow 0$ as $k\rightarrow \infty$ for which $|y_{\eta_k}|$ is unbounded, then $(w^\tau-\varphi^{\eta_k})(x_{\eta_k},y_{\eta_k})$ tends to $-\infty$ as $k\rightarrow\infty$. However, \begin{align*} (w^\tau-\varphi^{\eta_k})(x_{\eta_k},y_{\eta_k})&=\sup_{\mathbb{R}^n\times\mathbb{R}^n}(w^\tau-\varphi^{\eta_k})\\ &\ge (w^\tau-\varphi^{\eta_k})(0,0)\\ &=\tau u(0)-v(0). \end{align*} Thus, $(y_\eta)_{\eta>0}$ and similarly $(x_\eta)_{\eta>0}$ are bounded. It then follows from Lemma 3.1 in \cite{CIL} that $$ \lim_{\eta\rightarrow 0^+}\frac{|x_\eta-y_\eta|^2}{2\eta}=0 $$ and $(x_\eta,y_\eta)_{\eta>0}\subset\mathbb{R}^n\times\mathbb{R}^n$ has a cluster point $(x_\tau,x_\tau)$. Passing to the limit along an appropriate sequence $\eta$ tending to $0$ in \eqref{ComparisonIneq} then gives \begin{equation}\label{ComparisonIneq2} \tau \lambda-\mu\le (\tau-1)\inf_{\mathbb{R}^n}f. \end{equation} We conclude after sending $\tau\rightarrow 1^-$. \end{proof} \begin{cor} There can be at most one $\lambda\in \mathbb{R}$ for which \eqref{EigProb} has a solution $u$ satisfying \eqref{ellgrowth}.
\end{cor} \par We are uncertain whether or not eigenfunctions $u$ are uniquely defined up to an additive constant. However, we do know that if $F$ is not uniformly elliptic and $f$ is not strictly convex, eigenfunctions are not necessarily unique. For instance when $F\equiv 0$ and $H(p)=|p|-1$, equation \eqref{EigProb} reduces to \begin{equation}\label{FzeroEqn} \max\{\lambda-f,|Du|-1\}=0, \quad \mathbb{R}^n. \end{equation} It is easily verified that $\lambda^*=\inf_{\mathbb{R}^n}f$ and $u(x)=|x-x_0|$ is a solution of \eqref{FzeroEqn} for each $x_0$ such that $\inf_{\mathbb{R}^n}f=f(x_0)$. Notice that if there is another point $y_0\neq x_0$ where $f$ attains its minimum, then $u(x)=|x-y_0|$ is another solution. \par We will give some conditions in Proposition \ref{UniqunessN1} below that guarantee uniqueness when $n=1$. However, we postpone this discussion until after we have considered the regularity of solutions of \eqref{EigProb}. We conclude this section by giving a few examples with explicit solutions. \begin{ex} Assume $n=1$, and consider the eigenvalue problem $$ \begin{cases} \max\{\lambda - u'' -x^2, |u'|-1\}=0, \quad x\in \mathbb{R}\\ \lim_{|x|\rightarrow \infty}\frac{u(x)}{|x|}=1 \end{cases}. $$ Direct computation gives the explicit eigenvalue $$ \lambda^*=(3/2)^{2/3} $$ with a corresponding eigenfunction \begin{align*} u^*(x)&=\inf_{|y|<(\lambda^*)^{1/2}}\left\{\frac{\lambda^*}{2}y^2-\frac{1}{12}y^4 +|x-y|\right\}\\ &= \begin{cases} \frac{\lambda^*}{2}x^2-\frac{1}{12}x^4, \quad |x|<(\lambda^*)^{1/2}\\ \frac{\lambda^*}{2}[(\lambda^*)^{1/2}]^2-\frac{1}{12}[(\lambda^*)^{1/2}]^4 +(x-(\lambda^*)^{1/2}),\quad x\ge (\lambda^*)^{1/2} \\ \frac{\lambda^*}{2}[(\lambda^*)^{1/2}]^2-\frac{1}{12}[(\lambda^*)^{1/2}]^4 -(x+(\lambda^*)^{1/2}),\quad x\le -(\lambda^*)^{1/2} \\ \end{cases}. \end{align*} One checks additionally that $u^*\in C^2(\mathbb{R})$.
In fact, searching for a solution that is twice continuously differentiable led us to the particular value of $\lambda^*$. \end{ex} \begin{ex} The problem in the previous example can be generalized to any dimension $n\in \mathbb{N}$ \begin{equation}\label{SepVarProb} \begin{cases} \max\left\{\lambda - \Delta u -|x|^2, \max_{1\le i\le n}|u_{x_i}|-1\right\}=0, \quad x\in \mathbb{R}^n\\ \lim_{|x|\rightarrow \infty}u(x)/\sum^n_{i=1}|x_i|=1 \end{cases}. \end{equation} Note that this problem corresponds to \eqref{EigProb} when $F(M)=-\tr M$, $f(x)=|x|^2$ and $H(p)=\max_{1\le i\le n}|p_i|-1$. In this case, $\ell(v)=\sum^{n}_{i=1}|v_i|$. Now assume $(\lambda_1, u_1)$ is a solution of the eigenvalue problem in the previous example. Then $\lambda^*=n\lambda_1$ and $$ u^*(x)=\sum^{n}_{i=1}u_1(x_i) $$ is a solution of the eigenvalue problem \eqref{SepVarProb} with $\lambda=\lambda^*$. Moreover, $u^*\in C^2(\mathbb{R}^n)$. \end{ex} \section{Existence of an eigenvalue}\label{ExistSec} In order to prove the existence of an eigenvalue, we will study solutions of the following PDE for $\delta>0$: \begin{equation}\label{deltaProb} \max\left\{\delta u + F(D^2u) -f(x),H(Du)\right\}=0, \quad x\in \mathbb{R}^n. \end{equation} In particular, we will follow section 3 of our previous work \cite{Hynd}, which was inspired by the approach of J. Menaldi, M. Robin and M. Taksar \cite{Menaldi}. Employing the same techniques used to verify Proposition \ref{lamCompProp} above, we can establish the following assertion. \begin{prop}\label{DeltaCompProp} Assume $\delta>0$, $u\in USC(\mathbb{R}^n)$ is a subsolution of \eqref{deltaProb} and $v\in LSC(\mathbb{R}^n)$ is a supersolution of \eqref{deltaProb}. If $u$ and $v$ satisfy \eqref{growthComp}, then $u\le v$. \end{prop} It is now immediate that there can be at most one solution of \eqref{deltaProb} that satisfies the growth condition \eqref{ellgrowth}. We will call this solution $u_\delta$.
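As a sanity check on the first example above (a numerical illustration only, not part of any proof), the eigenvalue there is pinned down by a smooth-fit condition: inside the free region the gradient of $u^*$ is $\lambda x - x^3/3$, and matching the slope $\pm 1$ of the affine pieces at $x=\pm\sqrt{\lambda}$ forces $\frac{2}{3}\lambda^{3/2}=1$.

```python
# Numerical check of the one-dimensional example (illustration only): inside
# the free region |x| < sqrt(lam),
#   u(x) = (lam/2) x^2 - x^4/12,  u'(x) = lam x - x^3/3,  u''(x) = lam - x^2,
# so lam - u''(x) - x^2 = 0 there; u is affine with slope +-1 outside, and
# u in C^2 forces the smooth-fit condition u'(sqrt(lam)) = (2/3) lam^{3/2} = 1.
lam = (3.0 / 2.0) ** (2.0 / 3.0)   # the unique positive root of (2/3) lam^{3/2} = 1
r = lam ** 0.5                     # free-boundary point

assert abs((2.0 / 3.0) * lam ** 1.5 - 1.0) < 1e-12   # smooth fit: u'(r) = 1
assert abs(lam - r * r) < 1e-12                      # u''(r^-) = 0 = u''(r^+)

for k in range(1, 200):            # the PDE holds inside the free region
    x = -r + 2.0 * r * k / 200.0
    assert abs(lam - (lam - x * x) - x * x) < 1e-12  # lam - u'' - x^2 = 0
    assert abs(lam * x - x ** 3 / 3.0) <= 1.0 + 1e-12  # gradient constraint
```

Since $u''$ vanishes on both sides of the free boundary, the matching of $u'$ is the only nontrivial condition, and it determines $\lambda^*$ uniquely.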
To verify that $u_\delta$ exists, we can appeal to Perron's method once we have appropriate sub- and supersolutions. To this end, we first characterize the largest function $v$ that is less than a given function $g$ and satisfies $H(Dv)\le 0$. \begin{lem}\label{infConvLem} Assume $g\in C(\mathbb{R}^n)$ is superlinear. The unique solution of the PDE \begin{equation}\label{DeterministicEq} \max\{v-g,H(Dv)\}=0, \quad x\in \mathbb{R}^n \end{equation} that satisfies the growth condition \eqref{ellgrowth} is given by the inf-convolution of $g$ and $\ell$ \begin{equation}\label{vInfConv} v(x):=\inf_{y\in\mathbb{R}^n}\left\{g(y)+\ell(x-y)\right\}. \end{equation} \end{lem} \begin{proof} The uniqueness follows from Proposition \ref{DeltaCompProp}. In particular, this equation corresponds to \eqref{deltaProb} with $F\equiv 0$ and $\delta=1$. Therefore, we only verify that $v$ given in \eqref{vInfConv} is a solution that satisfies the growth condition \eqref{ellgrowth}. Choosing $y=x$ gives $v(x)\le g(x)$. Also note $x\mapsto g(y) +\ell(x-y)$ satisfies \eqref{ellLip}, which implies that $v$ does as well. Hence, $v$ is a subsolution of \eqref{DeterministicEq}. In particular, $\limsup_{|x|\rightarrow\infty}v(x)/\ell(x)\le 1$. Using $\ell(x-y)\ge \ell(x)-\ell(y)$, $$ v(x)\ge \inf_{y\in\mathbb{R}^n}\left\{g(y)-\ell(y)\right\}+\ell(x). $$ As $g$ is assumed superlinear, $\inf_{\mathbb{R}^n}\left\{g(y)-\ell(y)\right\}$ is finite. Thus, $\liminf_{|x|\rightarrow\infty}v(x)/\ell(x)\ge 1$. \par Finally, if $\psi$ is another subsolution of \eqref{DeterministicEq}, then \begin{align*} v(x)&=\inf_{y\in\mathbb{R}^n}\left\{g(y)+\ell(x-y)\right\}\\ &\ge\inf_{y\in\mathbb{R}^n}\left\{\psi(y)+\ell(x-y)\right\}\\ &\ge \psi(x). \end{align*} By Lemma 4.4 of \cite{CIL}, $v$ must be a supersolution of \eqref{DeterministicEq}.
\end{proof} The solution of \eqref{DeterministicEq} when $g(x)=\frac{1}{2}|x|^2$ will be of particular interest to us and will help us construct a useful supersolution of PDE \eqref{deltaProb}. \begin{lem}\label{xsquaredLemma} Let $g(x):=\frac{1}{2}|x|^2$ and let $v$ be the solution of \eqref{DeterministicEq} subject to the growth condition \eqref{ellgrowth}. Then $$ v(x)=\frac{1}{2}|x|^2 $$ when $H(x)\le 0$, and $$ H(Dv)=0 $$ in $\{x\in \mathbb{R}^n: H(x)>0\}$. \end{lem} \begin{proof} Recall that $H(x)\le 0$ implies $\ell(v)\ge x\cdot v$ for all $v\in \mathbb{R}^n$. Thus \begin{align*} v(x)&=\inf_{y\in\mathbb{R}^n}\left\{\frac{1}{2}|y|^2+\ell(x-y)\right\}\\ &\ge\inf_{y\in\mathbb{R}^n}\left\{\frac{1}{2}|y|^2+x\cdot (x-y)\right\}\\ &=\inf_{y\in\mathbb{R}^n}\left\{\frac{1}{2}|y-x|^2+\frac{1}{2}|x|^2\right\}\\ &=\frac{1}{2}|x|^2. \end{align*} As $v(x)\le \frac{1}{2}|x|^2$ for all $x$, the first claim follows. \par Now suppose that $H(x)>0$. Then there is a $v_0\in \mathbb{R}^n$ with $|v_0|=1$ such that $\ell(v_0)< x \cdot v_0$. Fix $\epsilon>0$ so small that $\ell(v_0)< x\cdot v_0-\epsilon$. Then \begin{align*} v(x)&=\inf_{y\in\mathbb{R}^n}\left\{\frac{1}{2}|y-x|^2+\ell(y)\right\}\\ &\le \frac{1}{2}\left|(\epsilon v_0)-x\right|^2+\ell(\epsilon v_0)\\ &=\frac{1}{2}|x|^2 +\frac{\epsilon^2}{2}|v_0|^2-(\epsilon v_0)\cdot x+\ell(\epsilon v_0)\\ &=\frac{1}{2}|x|^2 +\frac{\epsilon^2}{2}+\epsilon[- v_0\cdot x+\ell(v_0)] \\ &\le \frac{1}{2}|x|^2 +\frac{\epsilon^2}{2}-\epsilon^2\\ &<\frac{1}{2}|x|^2. \end{align*} Since $v$ satisfies \eqref{DeterministicEq}, the PDE $H(Dv)=0$ holds on the open set $\{x\in \mathbb{R}^n: H(x)>0\}$. \end{proof} We are now ready to exhibit sub- and supersolutions of \eqref{deltaProb} that are comparable to $\ell(x)$ for large values of $|x|$. \begin{lem} Let $\delta\in (0,1)$.
There are constants $K_1, K_2\ge 0$ such that \begin{equation}\label{uUpper} \overline{u}(x)= \frac{K_1}{\delta}+\inf_{y\in\mathbb{R}^n}\left\{\frac{1}{2}|y|^2+\ell(x-y)\right\} \end{equation} is a supersolution of \eqref{deltaProb} satisfying \eqref{ellgrowth} and \begin{equation}\label{uLower} \underline{u}(x)=(\ell(x)-K_2)^++\inf_{\mathbb{R}^n}f \end{equation} is a subsolution of \eqref{deltaProb} satisfying \eqref{ellgrowth}. \end{lem} \begin{proof} 1. Choose $$ K_1:= - F(I_n) + \sup_{H(x)\le 0}f(x). $$ Lemma \ref{xsquaredLemma} implies $\overline{u}(x)=\frac{K_1}{\delta}+\frac{1}{2}|x|^2$ when $H(x)\le 0$. Thus, $$ \delta \overline{u}+F(D^2\overline{u})-f\ge K_1+F(I_n)-f\ge 0 $$ on $\{x\in\mathbb{R}^n: H(x)< 0\}$. \par We also have by Lemma \ref{xsquaredLemma} that $H(D\overline{u})=0$ on $\{x\in\mathbb{R}^n: H(x)> 0\}$. We will now verify that $H(D\overline{u}(x_0))=0$ when $H(x_0)=0$. To this end, suppose that $$ \overline{u}(x_0)+p\cdot (x-x_0)+o(|x-x_0|) \le \overline{u}(x) $$ as $x\rightarrow x_0$. Using $\overline{u}(x_0)=\frac{K_1}{\delta}+\frac{1}{2}|x_0|^2$ and $\overline{u}(x)\le \frac{K_1}{\delta}+\frac{1}{2}|x|^2$ with the above inequality gives $$ \frac{1}{2}|x_0|^2+p\cdot (x-x_0)+o(|x-x_0|) \le \frac{1}{2}|x|^2, $$ as $x\rightarrow x_0$. It follows that $p=x_0$, and so $H(p)=H(x_0)=0$. \par 2. Choose $K_2\ge 0$ so large that $$ (\ell(x)-K_2)^+\le f(x)-\inf_{\mathbb{R}^n}f, \quad x\in \mathbb{R}^n. $$ Such a $K_2$ exists by the assumption that $f$ is superlinear and \eqref{UpperLowerell}. Observe $\underline{u}$ defined in \eqref{uLower} satisfies \eqref{ellLip}; thus $H(D\underline{u})\le 0$. And as $\ell$ is convex, $\underline{u}$ is convex. Therefore, $F(D^2\underline{u})\le 0$ and $$ \delta\underline{u}+F(D^2\underline{u})-f\le \delta\underline{u}- f\le (\ell-K_2)^++\inf_{\mathbb{R}^n}f-f\le 0, \quad x\in \mathbb{R}^n $$ for $\delta\le 1$. \end{proof} A key property of $u_\delta$ is that it is a convex function.
This is critical to the arguments to follow. We also remark that our proof of this fact below was inspired by Korevaar's work \cite{KO} and is an adaptation of Lemma 3.7 in \cite{Hynd}. The new feature we verify here is that the assumption that $F$ is superadditive still produces a convex solution. \begin{prop}\label{UdelConvex} The function $u_\delta$ is convex. \end{prop} \begin{proof} Write $u:=u_\delta$. For $\tau\in (0,1)$ and $\eta>0$, we define $$ w^\tau(x,y,z):=\tau u(z) -\frac{u(x)+u(y)}{2} $$ and $$ \varphi_\eta(x,y,z):=\frac{1}{2\eta}\left|\frac{x+y}{2}-z\right|^2 $$ for $x,y,z\in \mathbb{R}^n.$ Notice that \begin{eqnarray}\label{SimpleEstW} (w^\tau -\varphi_\eta)(x,y,z)&=& \tau \left\{u\left(z\right) - u\left(\frac{x+y}{2}\right)\right\} - \frac{1}{2\eta}\left|\frac{x+y}{2}-z\right|^2\nonumber \\ & & + \tau u\left(\frac{x+y}{2}\right) - \frac{u(x) + u(y)}{2}\nonumber \\ &\le &\left( \tau\ell\left(z-\frac{x+y}{2}\right) - \frac{1}{2\eta}\left|\frac{x+y}{2}-z\right|^2\right) \nonumber \\ && + \tau u\left(\frac{x+y}{2}\right)-\frac{u(x) + u(y)}{2}. \end{eqnarray} By the growth condition \eqref{ellgrowth}, it follows that $$ \lim_{|x|+|y|\rightarrow \infty}\left\{\tau u\left(\frac{x+y}{2}\right)-\frac{u(x) + u(y)}{2}\right\}=-\infty $$ and therefore $$ \lim_{|x|+|y|+|z|\rightarrow \infty}(w^\tau-\varphi_\eta)(x,y,z)=-\infty. $$ In particular, there is $(x_\eta,y_\eta,z_\eta)\in \mathbb{R}^n\times \mathbb{R}^n\times \mathbb{R}^n$ maximizing $w^\tau-\varphi_\eta.$ By Theorem 3.2 in \cite{CIL}, there are $X,Y,Z\in S_n(\R)$ such that \begin{equation}\label{3JetInc} \begin{cases} \left( -2D_x\varphi_\eta(x_\eta,y_\eta,z_\eta), X\right)\in \overline{J}^{2,-}u(x_\eta)\\ \left( -2D_{y}\varphi_\eta(x_\eta,y_\eta,z_\eta), Y\right)\in \overline{J}^{2,-}u(y_\eta)\\ \left( \frac{1}{\tau}D_{z}\varphi_\eta(x_\eta,y_\eta,z_\eta), Z\right)\in \overline{J}^{2,+}u(z_\eta)\\ \end{cases} \end{equation} and \begin{equation}\label{XYZineq} \tau Z\le \frac{1}{2}(X+Y).
\end{equation} \par Now set $$ p_\eta := -2D_x\varphi_\eta(x_\eta,y_\eta,z_\eta)=-2D_y\varphi_\eta(x_\eta,y_\eta,z_\eta)=D_z\varphi_\eta(x_\eta,y_\eta,z_\eta)=\frac{1}{\eta}\left(z_\eta -\frac{x_\eta+y_\eta}{2}\right). $$ By the bottom inclusion in \eqref{3JetInc}, $$ \max\{\delta u(z_\eta) +F(Z) - f(z_\eta), H(p_\eta/\tau)\}\le 0. $$ It follows that $$ H(p_\eta)=H\left(\tau\frac{p_\eta}{\tau}+(1-\tau)0\right)\le \tau H\left(\frac{p_\eta}{\tau}\right)+(1-\tau)H(0)<0 $$ and by the top two inclusions in \eqref{3JetInc}, $$ \begin{cases} \delta u(x_\eta) + F(X) - f(x_\eta)\ge 0\\ \delta u(y_\eta) + F(Y) - f(y_\eta)\ge 0 \end{cases}. $$ Combining these inequalities with \eqref{XYZineq} gives \begin{align}\label{usingfconvex} \delta (w^\tau-\varphi_\eta)(x,y,z)&\le \delta w^\tau(x_\eta,y_\eta,z_\eta) \nonumber \\ & = \tau\delta u(z_\eta) -\frac{\delta u(x_\eta)+\delta u(y_\eta)}{2} \nonumber \\ &\le \tau(-F(Z) + f(z_\eta)) -\frac{(-F(X)+f(x_\eta) ) +(-F(Y)+ f(y_\eta))}{2} \nonumber \\ &= \left[-F(\tau Z) +\frac{F(X)+F(Y)}{2}\right] +\tau f(z_\eta)-\frac{f(x_\eta) + f(y_\eta)}{2}\nonumber \\ &\le \left[-F\left(\frac{X+Y}{2}\right) +\frac{F(X)+F(Y)}{2}\right] +\tau f(z_\eta)-\frac{f(x_\eta) + f(y_\eta)}{2}\nonumber \\ &\le f(z_\eta)-\frac{f(x_\eta) + f(y_\eta)}{2}+(\tau -1)f(z_\eta)\nonumber\\ &\le f(z_\eta)-\frac{f(x_\eta) + f(y_\eta)}{2}+(\tau -1)\inf_{\mathbb{R}^n}f \end{align} for each $(x,y,z)\in \mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^n.$ \par Another basic estimate for $w^\tau -\varphi_\eta$ that stems from \eqref{SimpleEstW} and \eqref{UpperLowerell} is $$ (w^\tau-\varphi_\eta)(x,y,z)\le \tau u\left(\frac{x+y}{2}\right) - \frac{u(x)+u(y)}{2} + \tau^2c_1^2\eta. $$ This inequality gives that $(x_\eta, y_\eta)_{\eta>0}\subset\mathbb{R}^n\times\mathbb{R}^n$ is bounded.
For if this were not the case, then $(w^\tau-\varphi_\eta)(x_\eta,y_\eta,z_\eta)$ would tend to $-\infty$ along a sequence of $\eta$, yet \begin{eqnarray} (w^\tau-\varphi_\eta)(x_\eta,y_\eta,z_\eta)&=&\max_{x,y,z}(w^\tau-\varphi_\eta)(x,y,z) \nonumber \\ &\ge &(w^\tau-\varphi_\eta)(0,0, 0) \nonumber \\ & =& (\tau -1) u(0) \nonumber \\ &>& -\infty, \nonumber \end{eqnarray} for each $\eta>0.$ Similarly, $(z_\eta)_{\eta>0}\subset \mathbb{R}^n$ is bounded. \par Again we appeal to Lemma 3.1 in \cite{CIL}, which asserts the existence of a cluster point $(x_\tau,y_\tau, (x_\tau+ y_\tau)/2)$ of $((x_\eta, y_\eta,z_\eta))_{\eta>0}$ such that $(x_\tau,y_\tau)$ maximizes $$ (x,y)\mapsto \tau u\left(\frac{x+y}{2}\right) - \frac{u(x)+u(y)}{2}. $$ Thus, we may pass to the limit through an appropriate sequence of $\eta$ tending to $0$ in \eqref{usingfconvex} to find for any $x,y\in\mathbb{R}^n$ $$ \delta\left(\tau u\left(\frac{x+y}{2}\right) - \frac{u(x)+u(y)}{2}\right) \le f\left(\frac{x_\tau+y_\tau}{2}\right) - \frac{f(x_\tau)+f(y_\tau)}{2}+(\tau -1)\inf_{\mathbb{R}^n}f\le (\tau -1)\inf_{\mathbb{R}^n}f. $$ Here we have used the convexity of $f$. Finally, we conclude upon dividing by $\delta$ and sending $\tau\rightarrow 1^-$; as $u$ is continuous, the resulting midpoint convexity implies that $u$ is convex. \end{proof} By Aleksandrov's theorem (section 6.4 of \cite{Gariepy}), $u_\delta$ is twice differentiable at Lebesgue almost every $x\in\mathbb{R}^n$. At any such $x$, if $H(Du_\delta(x))<0$, then $x$ must be uniformly bounded, for $$ f(x)=\delta u_\delta(x) +F(D^2u_\delta(x))\le \delta u_\delta(x); $$ here $F(D^2u_\delta(x))\le 0$ since $u_\delta$ is convex and $F$ is elliptic with $F(O_n)=0$. Recall that $f$ is superlinear and $u_\delta$ grows at most linearly. A precise statement is as follows. \begin{cor}\label{boundedDerOmega} There is a constant $R$, independent of $\delta\in (0,1)$, such that if $p\in J^{1,-}u_\delta(x)$ and $H(p)<0$, then $|x|\le R$. \end{cor} \begin{proof} As $u_\delta$ is convex, $J^{1,-}u_\delta(x)=\partial u_\delta(x)$; see Proposition 4.7 in \cite{Bardi}. It then follows that $(p,O_n)\in J^{2,-}u_\delta(x)$. Thus, $$ \max\{\delta u_\delta(x)-f(x),H(p)\}\ge 0. $$ As $H(p)<0$, it must be that $\delta u_\delta(x)-f(x)\ge 0$.
As a result, $$ f(x)\le \delta u_\delta(x)\le K_1+\ell(x)\le K_1 +c_1|x|. $$ Thus, $|x|\le R$ for some $R$ that is independent of $\delta\in (0,1)$. \end{proof} Another important corollary is the following ``extension formula" for solutions. We interpret this formula informally as: once the values of $u_\delta(x)$ are known for each $x$ satisfying $H(Du_\delta(x))<0$, $u_\delta$ is determined on all of $\mathbb{R}^n$. \begin{cor}\label{ExtCor} Let \begin{equation}\label{OmegaDel} \Omega_\delta:=\mathbb{R}^n\setminus\{x\in \mathbb{R}^n: H(Du_\delta(x))\ge 0\;\text{in the viscosity sense}\;\}. \end{equation} Then \begin{equation}\label{ExtensionForm} u_\delta(x)=\inf\left\{u_\delta(y)+\ell(x-y): y\in \Omega_\delta\right\}, \quad x\in \mathbb{R}^n. \end{equation} Moreover, the infimum in \eqref{ExtensionForm} can be taken over $\partial\Omega_\delta$ when $x\notin\Omega_\delta$. \end{cor} \begin{proof} Set $u=u_\delta$ and define $v$ to be the right hand side of \eqref{ExtensionForm}. Since $u(x)\le u(y)+\ell(x-y)$ for each $x,y\in\mathbb{R}^n$, $u\le v$. If $x\in \overline{\Omega}_\delta$, there is a sequence $(x_k)_{k\in \mathbb{N}}\subset \Omega_\delta$ converging to $x$ as $k\rightarrow\infty$. Clearly, $v(x)\le u(x_k)+\ell(x-x_k)$ and sending $k\rightarrow\infty$ gives $v(x)\le u(x).$ Thus, $u(x)=v(x)$ for $x\in \overline{\Omega}_\delta$. \par Observe that $v(x)-v(y)\le \ell(x-y)$ for all $x,y\in \mathbb{R}^n$. Therefore, $v$ satisfies the PDE $H(Dv)\le 0$ on $\mathbb{R}^n.$ In particular, $$ \begin{cases} H(Dv)\le 0\le H(Du), \quad & x\in \mathbb{R}^n\setminus\overline{\Omega}_\delta\\ v=u, \quad & x\in\partial\Omega_\delta \end{cases} $$ while $$ \limsup_{|x|\rightarrow \infty}\frac{v(x)}{\ell(x)}\le 1\le \liminf_{|x|\rightarrow \infty}\frac{u(x)}{\ell(x)}. $$ It follows from an argument similar to the one given in Proposition \ref{lamCompProp} used to derive \eqref{ComparisonIneq2} that $$ \tau v-u\le (\tau- 1)\inf_{\mathbb{R}^n}f $$ for each $\tau\in (0,1)$.
In particular, $v\le u$ on $\mathbb{R}^n\setminus\overline{\Omega}_\delta$. So we are able to conclude \eqref{ExtensionForm}. \par Now suppose $x\notin\Omega_\delta$ and choose $y\in\Omega_\delta$ such that $u(x)=u(y)+\ell(x-y)$. There is a $t\in [0,1]$ such that $$ z=t y +(1-t)x \in \partial \Omega_\delta. $$ Observe that since $u$ is convex and $\ell$ is positively homogeneous \begin{align*} u(z)+\ell(x-z)&=u(t y +(1-t)x)+\ell(t(x-y))\\ &\le t(u(y) + \ell(x-y)) +(1-t)u(x)\\ &=tu(x)+(1-t)u(x)\\ &=u(x). \end{align*} Thus, the minimum in \eqref{ExtensionForm} occurs on the boundary $\partial \Omega_\delta$ when $x\notin \Omega_\delta$. \end{proof} We will now verify the existence of an eigenvalue. Let $\delta\in (0,1)$ and $x_\delta$ denote a global minimizer of $u_\delta$: $$ \min_{x\in\mathbb{R}^n}u_\delta(x)=u_\delta(x_\delta). $$ Clearly, $0\in J^{1,-}u_\delta(x_\delta)$ and by assumption $H(0)<0$; thus $x_\delta\in \Omega_\delta$. And by Corollary \ref{boundedDerOmega}, $|x_\delta|\le R$. Set $$ \begin{cases} \lambda_\delta:=\delta u_\delta(x_\delta)\\ v_\delta(x):=u_\delta(x)-u_\delta(x_\delta),\quad x\in \mathbb{R}^n \end{cases}. $$ In view of \eqref{uUpper} and \eqref{uLower}, \begin{equation}\label{LamDelBounds} -\left(\inf_{\mathbb{R}^n}f\right)^{-}\le \lambda_\delta \le K_1 +\frac{1}{2}R^2; \end{equation} and by \eqref{UpperLowerell} \begin{equation}\label{veeDelBounds} \begin{cases} 0\le v_\delta(x)\le c_1(|x|+R)\\ |v_\delta(x)-v_\delta(y)|\le c_1|x-y| \end{cases} \end{equation} for $x,y\in \mathbb{R}^n$ and $0<\delta<1$. \begin{proof} (part $(i)$ of Theorem \ref{Thm1}) By \eqref{LamDelBounds} and \eqref{veeDelBounds}, there is a sequence of positive numbers $(\delta_k)_{k\in \mathbb{N}}$ tending to $0$, $\lambda^*\in \mathbb{R}$ and $u^*\in C(\mathbb{R}^n)$ such that $\lambda_{\delta_k}\rightarrow \lambda^*$ and $v_{\delta_k}\rightarrow u^*$ locally uniformly on $\mathbb{R}^n$.
By the stability of viscosity solutions under locally uniform convergence (Lemma 6.1 in \cite{CIL}), $u^*$ satisfies \eqref{EigProb} with $\lambda=\lambda^*$. \par In view of the extension formula \eqref{ExtensionForm}, \begin{align*} v_{\delta_k}(x)&=u_{\delta_k}(x)-u_{\delta_k}(x_{\delta_k})\\ &=\inf_{y\in \Omega_{\delta_k}}\{u_{\delta_k}(y)-u_{\delta_k}(x_{\delta_k})+\ell(x-y)\}\\ &\ge \inf_{y\in \Omega_{\delta_k}}\{\ell(x-y)\}\\ &\ge \inf_{y\in \Omega_{\delta_k}}\{\ell(x)-\ell(y)\}\\ &=\ell(x)-\sup_{y\in\Omega_{\delta_k}}\ell(y)\\ &\ge \ell(x)-\sup_{|y|\le R}\ell(y). \end{align*} Thus, $u^*(x)\ge \ell(x)-\sup_{|y|\le R}\ell(y)$ and in particular, $u^*$ satisfies the growth condition \eqref{ellgrowth}. It now follows that $\lambda^*$ is the desired eigenvalue. \end{proof} We now have the following characterization of the eigenvalue $\lambda^*$. See also \cite{Armstrong} for a similar characterization of eigenvalues of operators that are uniformly elliptic, fully nonlinear, and positively homogeneous. \begin{cor} Let $\lambda^*$ be as described in part $(i)$ of Theorem \ref{Thm1}. Then \begin{align}\label{LamChar1} \lambda^*&=\sup\{\lambda\in \mathbb{R}: \text{there is a subsolution $u$ of \eqref{EigProb} with eigenvalue $\lambda$} \nonumber \\ &\left.\hspace{1in} \text{satisfying}\; \limsup_{|x|\rightarrow \infty}\frac{u(x)}{\ell(x)}\le 1\right\}. \end{align} and \begin{align}\label{LamChar2} \lambda^*&=\inf\{\mu\in \mathbb{R}: \text{there is a supersolution $v$ of \eqref{EigProb} with eigenvalue $\mu$} \nonumber \\ &\left.\hspace{1in} \text{satisfying}\; \liminf_{|x|\rightarrow \infty}\frac{v(x)}{\ell(x)}\ge 1\right\}. \end{align} \end{cor} In particular, choosing $\lambda=\inf_{\mathbb{R}^n}f$ and $u\equiv 0$ in \eqref{LamChar1} gives $\lambda^*\ge \inf_{\mathbb{R}^n}f$. And selecting $\mu=-F(I_n)+\sup_{H(x)\le 0}f(x)$ and $v(x)=\inf_{\mathbb{R}^n}\{|y|^2/2+\ell(x-y)\}$ in \eqref{LamChar2} gives $\lambda^*\le -F(I_n)+\sup_{H(x)\le 0}f(x)$. 
In summary, we have the bounds on $\lambda^*$ $$ \inf_{x\in\mathbb{R}^n}f(x)\le \lambda^*\le-F(I_n)+\sup_{H(x)\le 0}f(x). $$ \section{Regularity of solutions}\label{RegSect} Our goal in this section is to prove part $(ii)$ of Theorem \ref{Thm1}. To this end, we will assume that $F$ is uniformly elliptic, assume $H$ satisfies \eqref{Hassump2} and derive a uniform upper bound on $D^2u_\delta$. Recall $u_\delta$ is the unique solution of \eqref{deltaProb} that satisfies \eqref{ellgrowth}. We will first use an easy semiconcavity argument to bound $D^2u_\delta(x)$ for all large values of $|x|$. Then we will pursue second derivative bounds on $u_\delta$ for smaller values of $|x|$. To this end, we will employ the so-called ``penalty method" introduced by L. C. Evans \cite{Evans}. For other related work, consult also \cite{HyndMawi,Ishii, Soner, Wiegner}. \subsection{Preliminaries} An important identity for us will be \begin{equation}\label{ellFormula} \ell(v)=\inf_{\lambda>0}\lambda H^*\left(\frac{v}{\lambda}\right), \quad v\in \mathbb{R}^n\setminus\{0\} \end{equation} where $H^*(w)=\sup_{p\in \mathbb{R}^n}\{p\cdot w - H(p)\}$ is the Legendre transform of $H$; see exercise 11.6 of \cite{Rock}. This formula is crucial to our method for deriving second derivative estimates on $u_\delta$ for large values of $|x|$. \begin{lem}\label{W2infFarOutBound} Define $\Omega_\delta$ as in \eqref{OmegaDel}. There is a constant $C$ such that $$ D^2u_\delta(x)\le \frac{C}{\text{dist}(x,\Omega_\delta)}I_n $$ for Lebesgue almost every $x\in \mathbb{R}^n\setminus\overline{\Omega}_\delta$. \end{lem} \begin{proof} We will employ formula \eqref{ellFormula}. We will also use that \begin{equation}\label{Hstar} H^*(0)>0 \end{equation} and \begin{equation}\label{Hstar2} \frac{1}{\Sigma}|\xi|^2\le D^2H^*(w)\xi\cdot \xi\le \frac{1}{\sigma}|\xi|^2,\quad \xi\in\mathbb{R}^n \end{equation} for almost every $w\in \mathbb{R}^n$. Let $v\in \mathbb{R}^n\setminus\{0\}$ and $\lambda>0$.
Note \eqref{Hstar2} implies \begin{equation}\label{lowerHstar} \lambda H^*(0)+DH^*(0)\cdot v +\frac{1}{2\Sigma \lambda}|v|^2\le \lambda H^*\left(\frac{v}{\lambda}\right)\le \lambda H^*(0)+DH^*(0)\cdot v +\frac{1}{2\sigma \lambda}|v|^2. \end{equation} Thus, $\lim_{\lambda\rightarrow 0^+}\lambda H^*\left(v/\lambda\right)=+\infty$. And with \eqref{Hstar}, we also conclude that $\lim_{\lambda\rightarrow \infty}\lambda H^*\left(v/\lambda\right)=+\infty.$ As $\lambda\mapsto \lambda H^*\left(v/\lambda\right)$ is strictly convex, there is a unique $\lambda=\lambda(v)>0$ for which $\ell(v)=\lambda(v) H^*(v/\lambda(v))$. \par Using the positive homogeneity of $\ell$, for $t>0$ \begin{align*} \lambda(tv) H^*\left(\frac{tv}{\lambda(tv)}\right)&=\ell(tv)\\ &=t\ell(v)\\ &=t\lambda(v) H^*\left(\frac{v}{\lambda(v)}\right)\\ &=t\lambda(v) H^*\left(\frac{tv}{t\lambda(v)}\right). \end{align*} Thus, $\lambda(tv)=t\lambda(v)$. It also follows from \eqref{lowerHstar} that $$ \gamma:=\inf_{|v|=1}\lambda(v)>0. $$ In particular, $\lambda(v)\ge \gamma |v|$, for each $v\neq 0.$ \par Again let $v\neq 0$, and choose $h\in \mathbb{R}^n$ so small that $v\pm h\neq 0$. Then for $\lambda=\lambda(v)$ \begin{align*} \ell(v+h)-2\ell(v)+\ell(v-h)&\le \lambda H^*\left(\frac{v+h}{\lambda}\right) - 2\lambda H^*\left(\frac{v}{\lambda}\right)+\lambda H^*\left(\frac{v-h}{\lambda}\right)\\ &=\lambda\left[ H^*\left(\frac{v}{\lambda}+\frac{h}{\lambda}\right)-2H^*\left(\frac{v}{\lambda}\right)+H^*\left(\frac{v}{\lambda}-\frac{h}{\lambda}\right) \right]\\ &\le \lambda \frac{1}{\sigma}\left|\frac{h}{\lambda}\right|^2\\ &=\frac{1}{\sigma \lambda}|h|^2\\ &\le \frac{1}{\gamma \sigma|v|}|h|^2. \end{align*} \par Now we can employ the extension formula \eqref{ExtensionForm}. Let $x\in \mathbb{R}^n\setminus\overline{\Omega}_\delta$ and choose $h$ so small that $x\pm h\in\mathbb{R}^n\setminus\overline{\Omega}_\delta$. 
Selecting $y\in \partial\Omega_\delta$ so that $u_\delta(x)=u_\delta(y)+\ell(x-y)$ gives \begin{align*} u_\delta(x+h)-2u_\delta(x)+u_\delta(x-h)&\le \ell(x-y+h)-2\ell(x-y)+\ell(x-y-h)\\ &\le \frac{1}{\gamma \sigma |x-y|}|h|^2\\ &\le \frac{C }{\text{dist}(x,\partial \Omega_\delta)}|h|^2. \end{align*} The claim follows as $u_\delta$ is twice differentiable Lebesgue almost everywhere by Aleksandrov's theorem. \end{proof} In order to complete the proof of part $(ii)$ of Theorem \ref{Thm1}, we must bound the second derivatives of $u_\delta$ on some subset of $\mathbb{R}^n$ that includes $\overline{\Omega}_\delta$. Before we detail our approach, it will be necessary for us to differentiate (a smoothing) of $F$. To this end, we extend $F$ to the space $M_n(\R)$ of all $n\times n$ real matrices as follows $$ \overline{F}(M):=F\left(\frac{1}{2}(M+M^t)\right), \quad M\in M_n(\R). $$ We can then treat $\overline{F}(M)$ as a function of the $n^2$ real entries of the matrix $M\in M_n(\R)$. It is readily checked that $\overline{F}$ is uniformly elliptic, positively homogeneous and superadditive on $M_n(\R)$. In particular, $\overline{F}$ satisfies \eqref{Fassump} for each $M,N\in M_n(\R)$ and $t\ge 0$. This allows us to identify $F$ with $\overline{F}$ and we shall do this for the remainder of this section. \par We now define $F^\varrho$ as the standard mollification of $F$ $$ F^\varrho(M):=\int_{M_n(\R)}\eta^\varrho(N)F(M-N)dN, \quad M\in M_n(\R). $$ The integral above is over the $n^2$ real variables $N=(N_{ij})\in M_n(\R)$, and as in Lemma \ref{HLipLem}, $\eta\in C^\infty_c(M_n(\R))$ is a nonnegative function that is supported in $\{M\in M_n(\R): |M|\le 1\}$ and $\eta(M)$ only depends on $|M|$. Moreover, $\eta$ satisfies $\int_{M_n(\R)}\eta(Z)dZ=1$ and we have defined $\eta^\varrho:=\varrho^{-n^2}\eta(\cdot/\varrho)$. See also section 4 of \cite{HyndMawi} or Proposition 9.8 in \cite{CC} for more details on mollifying functions of matrices.
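To illustrate mollification concretely, here is a one-dimensional sketch with hypothetical data (our own illustration, not from the paper): it checks that a Lipschitz function lies uniformly within $\mathrm{Lip}\cdot\varrho$ of its mollification, the scalar analogue of \eqref{FFvarrhoEst}, and that mollifying a convex function can only increase it, by Jensen's inequality.

```python
import numpy as np

# One-dimensional sketch of standard mollification (hypothetical data):
# for Lipschitz F, |F^rho - F| <= Lip(F) * rho, and mollification preserves
# convexity -- mirroring the estimate F^rho <= F <= F^rho + sqrt(n)*Theta*rho.
rho = 0.05
s = np.linspace(-1.0, 1.0, 2001)
ds = s[1] - s[0]
eta = np.where(np.abs(s) < 1.0, np.exp(-1.0 / np.maximum(1.0 - s**2, 1e-12)), 0.0)
eta /= eta.sum() * ds                          # unit mass, supported in [-1, 1]

def mollify(F, x):
    """(eta^rho * F)(x) = int eta(s) F(x - rho s) ds, via a Riemann sum."""
    return np.array([float((eta * F(xi - rho * s)).sum() * ds) for xi in x])

F = np.abs                                     # Lipschitz constant 1, convex
x = np.linspace(-2.0, 2.0, 401)
F_rho = mollify(F, x)
assert float(np.max(np.abs(F_rho - F(x)))) <= rho + 1e-6   # uniform rate O(rho)
assert F_rho[200] >= F(0.0)                                # Jensen: F^rho >= F at x = 0
```

The kink of $|x|$ at the origin is where the two bounds are simultaneously sharp: there the mollification exceeds the function by exactly $\varrho\int\eta(s)|s|\,ds$.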
\par It is readily verified that $F^\varrho\in C^\infty(M_n(\R))$ and, with the help of \eqref{Fassump}, $F^\varrho$ is uniformly elliptic, concave and satisfies \begin{equation}\label{FFvarrhoEst} F^\varrho(M)\le F(M)\le F^\varrho(M)+\sqrt{n}\Theta \varrho, \quad M\in M_n(\R). \end{equation} However, $F^\varrho$ is not in general positively homogeneous. Nevertheless, $F^\varrho$ inherits a certain almost homogeneity property. \begin{lem}\label{HomogeneityLEM} For every $M\in M_n(\R)$, $$ F^\varrho(M)=F^\varrho_{M_{ij}}(M)M_{ij} - \int_{M_n(\R)}\eta^\varrho(N)F_{M_{ij}}(M-N)N_{ij}dN. $$ In particular, \begin{equation}\label{FrhoAlmostHomo} |F^\varrho(M)-F^\varrho_{M_{ij}}(M)M_{ij}|\le \sqrt{n}\Theta\varrho, \quad M\in M_n(\R). \end{equation} \end{lem} \begin{proof} By the ellipticity assumption \eqref{Fassump}, $F$ is Lipschitz continuous. Rademacher's Theorem then implies that $F$ is differentiable for Lebesgue almost every $M\in M_n(\R)$, which we identify with $\mathbb{R}^{n^2}$. Therefore, $$ F^\varrho_{M_{ij}}(M)=\int_{M_n(\R)}\eta^\varrho(N)F_{M_{ij}}(M-N)dN. $$ See Theorem 1 of section 5.3 in \cite{Evans2} for an easy verification of this equality. Since $F$ is positively homogeneous of degree one, $$ F(M)=F_{M_{ij}}(M)M_{ij} $$ for Lebesgue almost every $M\in M_n(\R)$. And therefore, \begin{align*} F^\varrho_{M_{ij}}(M)M_{ij}&=\int_{M_n(\R)}\eta^\varrho(N)F_{M_{ij}}(M-N)M_{ij}dN\\ &=\int_{M_n(\R)}\eta^\varrho(N)F_{M_{ij}}(M-N)(M_{ij}-N_{ij})dN\\ &\quad +\int_{M_n(\R)}\eta^\varrho(N)F_{M_{ij}}(M-N)N_{ij}dN\\ &=\int_{M_n(\R)}\eta^\varrho(N)F(M-N)dN+\int_{M_n(\R)}\eta^\varrho(N)F_{M_{ij}}(M-N)N_{ij}dN\\ &=F^\varrho(M)+\int_{M_n(\R)}\eta^\varrho(N)F_{M_{ij}}(M-N)N_{ij}dN. \end{align*} \par The ellipticity assumption \eqref{Fassump} also implies $$ -\Theta|\xi|^2\le F_{M_{ij}}(M)\xi_i\xi_j\le -\theta|\xi|^2, \quad \xi\in \mathbb{R}^n $$ for almost every $M\in M_n(\R)$.
Therefore, \begin{align*} \left|\int_{M_n(\R)}\eta^\varrho(N)F_{M_{ij}}(M-N)N_{ij}dN\right|&\le\int_{M_n(\R)}\eta^\varrho(N)\left|F_{M_{ij}}(M-N)N_{ij}\right|dN \\ &\le \int_{M_n(\R)}\eta^\varrho(N)\sqrt{\sum^n_{i,j=1}\left(F_{M_{ij}}(M-N)\right)^2}\; |N|dN\\ &\le \sqrt{n}\Theta \int_{M_n(\R)}\eta^\varrho(N)|N|dN\\ & = \sqrt{n}\Theta\varrho \int_{|Z|\le 1}\eta(Z)|Z|dZ \quad (Z=N/\varrho)\\ & \le \sqrt{n}\Theta\varrho \int_{|Z|\le 1}\eta(Z)dZ \\ & = \sqrt{n}\Theta\varrho. \end{align*} \end{proof} \par We will additionally need to smooth out $H$ and $f$, and we will do so by using the standard mollifications $H^\varrho=\eta^\varrho*H$ and $f^\varrho=\eta^\varrho*f$. Here $\eta$ is a standard mollifier on $\mathbb{R}^n$. We also select $\varrho_1$ so small that \begin{equation}\label{Hsmallvarrho1} H^\varrho(0)<0, \quad \varrho\in (0,\varrho_1). \end{equation} The following lemma asserts that the solution of the PDE \eqref{deltaProb} is well approximated by a solution of the same equation with $H^\varrho$ and $f^\varrho$ replacing $H$ and $f$. \begin{lem}\label{firstApproxLem} Assume $\delta\in (0,1)$ and $\varrho\in (0,\varrho_1)$. Let $u_{\delta,\varrho}$ be the solution of \eqref{deltaProb} with $F, H^\varrho$, and $f^\varrho$ subject to the growth condition \eqref{ellgrowth} with $\ell^\varrho(v)=\sup\{p\cdot v: H^\varrho(p)\le 0\}$ replacing $\ell$. Then $\lim_{\varrho \rightarrow 0^+} u_{\delta,\varrho}=u_\delta$ locally uniformly on $\mathbb{R}^n$. \end{lem} \begin{proof} Using test functions as in \eqref{uUpper} and \eqref{uLower} that correspond to \eqref{deltaProb} with $F, H^\varrho$, and $f^\varrho$ we find $$ \inf_{\mathbb{R}^n}f^\varrho\le u_{\delta,\varrho}\le \frac{1}{\delta}\left(-F(I_n) + \sup_{H^\varrho\le 0}f^\varrho\right)+\ell^\varrho. $$ By the convexity of $H$ and $f$, Jensen's inequality implies $H\le H^\varrho$ and $f\le f^\varrho$. It then follows that $\ell^\varrho\le \ell$.
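For the reader's convenience, we record the Jensen computation behind the first of these inequalities: as $\eta^\varrho$ is a probability density whose barycenter is zero (recall $\eta$ is radial),
$$
H^\varrho(p)=\int_{\mathbb{R}^n}\eta^\varrho(q)H(p-q)dq\ge H\left(\int_{\mathbb{R}^n}\eta^\varrho(q)(p-q)dq\right)=H(p), \quad p\in \mathbb{R}^n.
$$
In particular, $\{H^\varrho\le 0\}\subset \{H\le 0\}$, which yields $\ell^\varrho\le \ell$ directly from the definitions of these support functions.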
By the ellipticity of $F$, $-F(I_n)\le n\Theta$ and so \begin{equation}\label{udeltarhoBounds} \inf_{\mathbb{R}^n}f\le u_{\delta,\varrho}\le \frac{1}{\delta}\left(n\Theta + \sup_{H\le 0}f^\varrho\right)+\ell. \end{equation} Since $f^\varrho\rightarrow f$ locally uniformly on $\mathbb{R}^n$, $u_{\delta,\varrho}$ is locally bounded on $\mathbb{R}^n$ independently of $\varrho\in (0,\varrho_1)$. \par Also notice that $H(Du_{\delta,\varrho})\le H^\varrho(Du_{\delta,\varrho})\le 0$, which implies that the family $(u_{\delta,\varrho})_{\varrho\in(0,\varrho_1)}$ is uniformly equicontinuous on $\mathbb{R}^n$. It follows that for each sequence of positive numbers $(\varrho_k)_{k\in \mathbb{N}}$ tending to 0, there is a subsequence of $(u_{\delta,\varrho_k})_{k\in \mathbb{N}}$ converging locally uniformly to some $u\in C(\mathbb{R}^n)$. By the stability of viscosity solutions under local uniform convergence, $u$ is a solution of \eqref{deltaProb}. In order to conclude, it suffices to verify that $u$ satisfies \eqref{ellgrowth}. Then by uniqueness we would have $u=u_\delta$ and the full sequence $(u_{\delta,\varrho_k})_{k\in \mathbb{N}}$ must converge to $u_\delta$. \par We now employ the extension formula \eqref{ExtensionForm} with $$ \Omega_{\delta,\varrho}:=\mathbb{R}^n\setminus\{x\in \mathbb{R}^n: H(Du_{\delta,\varrho}(x))\ge 0\;\text{in the viscosity sense}\;\} $$ to get \begin{align}\label{Lowerudeltarhobound} u_{\delta,\varrho}(x)&=\inf_{y\in \Omega_{\delta,\varrho}}\{u_{\delta,\varrho}(y)+\ell^\varrho(x-y) \}\nonumber \\ &\ge\inf_{y\in \Omega_{\delta,\varrho}}\left\{ \inf_{\mathbb{R}^n}f+\ell^\varrho(x)-\ell^\varrho(y)\right\} \nonumber \\ &\ge \inf_{\mathbb{R}^n}f+\ell^\varrho(x)-\sup_{y\in \Omega_{\delta,\varrho}}\ell(y). \end{align} It is immediate from the proof of Corollary \ref{boundedDerOmega} that there is an $R>0$ such that $\Omega_{\delta,\varrho}\subset B_R(0)$ for each $\delta\in (0,1)$ and $\varrho\in (0,\varrho_1)$.
We also leave it to the reader to verify that $\ell(v)=\lim_{\varrho\rightarrow 0^+}\ell^\varrho(v)$ for each $v\in \mathbb{R}^n$. Passing to the limit along an appropriate sequence of $\varrho$ tending to 0 in \eqref{Lowerudeltarhobound} gives $$ u(x)\ge \inf_{\mathbb{R}^n}f+\ell(x)-\sup_{|y|\le R}\ell(y). $$ Hence, $u$ satisfies \eqref{ellgrowth}. \end{proof} \subsection{The penalty method} Now we fix $\delta\in (0,1)$, $\varrho\in (0,\varrho_1)$ and choose a ball $B=B_R(0)\subset\mathbb{R}^n$ so large that \begin{equation}\label{BbigEnough} H(p)\le 0\quad \Longrightarrow\quad |p|\le R. \end{equation} For $\epsilon>0$, we will now focus on solutions of the fully nonlinear PDE \begin{equation}\label{PenalizedEqn} \delta u+F^\varrho(D^2u)+\beta_\epsilon(H^\varrho(Du))=f^\varrho, \quad x\in B \end{equation} subject to the boundary condition \begin{equation}\label{PenalizedBC} u(x)=u_{\delta, \varrho}(x), \quad x\in \partial B. \end{equation} Recall that $u_{\delta,\varrho}$ is the solution of \eqref{deltaProb} with $F, H^\varrho$, and $f^\varrho$ subject to the growth condition \eqref{ellgrowth} with $\ell^\varrho(v)=\sup\{p\cdot v: H^\varrho(p)\le 0\}$ instead of $\ell$. \par In \eqref{PenalizedEqn}, $F^\varrho$ is a standard mollification of $F$ and each member of the family $\{\beta_\epsilon\}_{\epsilon> 0}$ satisfies \begin{equation}\label{betaAss} \begin{cases} \beta_\epsilon\in C^\infty(\mathbb{R})\\ \beta_\epsilon(z)=0, \quad z\le 0\\ \beta_\epsilon(z)>0, \quad z>0\\ \beta_\epsilon'\ge0\\ \beta_\epsilon''\ge0\\ \beta_\epsilon(z)=(z-\epsilon)/\epsilon, \quad z\ge 2\epsilon \end{cases}.
\end{equation} Our intuition is that $\beta_\epsilon$ is a smoothing of the Lipschitz function $z\mapsto (z/\epsilon)^+$; and therefore, solutions of \eqref{PenalizedEqn} should be close to solutions of $\max\{\delta u+F^\varrho(D^2u)-f^\varrho,H^\varrho(Du)\}=0$ that satisfy \eqref{PenalizedBC}. These solutions will in turn be very close to $u_{\delta,\varrho}|_{B}$ for $\varrho$ small (see Proposition \ref{FFvarrholocLem} below). \par By a theorem of N. Trudinger (Theorem 8.2 in \cite{Trudinger}) there is a unique classical solution $u^\epsilon\in C^\infty(B)\cap C(\overline{B})$ solving \eqref{PenalizedEqn} and satisfying the boundary condition \eqref{PenalizedBC}. This result relies on the Evans--Krylov a priori estimates for solutions of concave, fully nonlinear elliptic equations and the continuity method \cite{EvansC2, KrylovC2}. Along with the concavity of $F$, the main structural condition that allows us to apply this theorem is that $p\mapsto \beta_\epsilon(H^\varrho(p))$ grows at most quadratically for each $\epsilon>0$. We remark that $u^\epsilon$ naturally depends on the other parameters $\delta\in (0,1)$ and $\varrho\in (0,\varrho_1)$; we have chosen not to indicate this dependence for ease of notation. \par Since $u_{\delta,\varrho}$ solves \eqref{deltaProb} with $F, H^\varrho$, and $f^\varrho$, we have from \eqref{betaAss} and \eqref{FFvarrhoEst} that \begin{align*} \delta u_{\delta,\varrho}+F^\varrho(D^2u_{\delta,\varrho}) + \beta_\epsilon(H^\varrho(Du_{\delta,\varrho}))&= \delta u_{\delta,\varrho}+F^\varrho(D^2u_{\delta,\varrho})\\ &\le\delta u_{\delta,\varrho}+ F(D^2u_{\delta,\varrho})\\ &\le f^\varrho. \end{align*} In view of \eqref{PenalizedEqn} and \eqref{PenalizedBC}, $u_{\delta,\varrho}\le u^\epsilon$ by a routine maximum principle argument. Also note $$ F^\varrho(D^2u^\epsilon)\le f^\varrho -\delta u^\epsilon\le f^\varrho -\delta u_{\delta,\varrho}.
$$ The Aleksandrov--Bakelman--Pucci estimate (Theorem 3.6 in \cite{CC}, Theorem 17.3 in \cite{Gilbarg}) then implies $$ \sup_{B}u^\epsilon\le C\left(\sup_{\partial B}|u_{\delta,\varrho}|+\sup_{B}|f^\varrho-\delta u_{\delta,\varrho}|\right) $$ for some constant $C=C(\text{diam}(B),n,\theta,\Theta)$. Combined with \eqref{udeltarhoBounds} and \eqref{BbigEnough}, we have the following supremum norm bound $$ |u^\epsilon|_{L^\infty(B)}\le C\left\{\left(\inf_{\mathbb{R}^n}f\right)^- + \sup_B\ell + \frac{1}{\delta}\left(n\Theta+\sup_B|f^\varrho|\right)\right\}. $$ We will use this estimate to derive bounds on the higher derivatives of $u^\epsilon$ that are independent of all sufficiently small $\epsilon>0$. We will borrow from the recent work by the author and H. Mawi on fully nonlinear elliptic equations with convex gradient constraints \cite{HyndMawi}. Note, however, that one of the main assumptions in \cite{HyndMawi} is that the nonlinearity is uniformly elliptic and {\it convex}, while the class of nonlinearities we study in this paper satisfies \eqref{Fassump} and is {\it concave}. We will make use of Lemma \ref{HomogeneityLEM} instead of a convexity assumption on $F$. \par We will also employ the uniform convexity assumption \eqref{Hassump2}, which implies \begin{align}\label{Coercive} \begin{cases} H^\varrho(p)\ge H^\varrho(0)+DH^\varrho(0)\cdot p+\frac{\sigma}{2}|p|^2\\ DH^\varrho(p)\cdot p-H^\varrho(p)\ge -H^\varrho(0)+\frac{\sigma}{2}|p|^2\\ |DH^\varrho(p)|\le |DH^\varrho(0)| + \sqrt{n}\Sigma|p| \end{cases}(p\in \mathbb{R}^n). \end{align} And we choose $\varrho_1>0$ smaller if necessary so that \eqref{Hsmallvarrho1} holds and $$ \begin{cases} |H^\varrho(0)|\le |H(0)|+1\\ |DH^\varrho(0)|\le |DH(0)|+1\\ |f^\varrho|_{W^{1,\infty}(B)}\le |f|_{W^{1,\infty}(B_{R+1}(0))} \quad (B=B_R(0)) \end{cases} $$ for $0<\varrho<\varrho_1$.
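\par We note that the inequalities in \eqref{Coercive} follow from Taylor expansion together with the bounds $\sigma I_n\le D^2H^\varrho\le \Sigma I_n$, which $H^\varrho$ inherits from \eqref{Hassump2} upon mollification. For instance, for the second inequality,
$$
DH^\varrho(p)\cdot p-H^\varrho(p)=-H^\varrho(0)+\int^1_0 t\, D^2H^\varrho(tp)p\cdot p\;dt\ge -H^\varrho(0)+\frac{\sigma}{2}|p|^2
$$
for each $p\in \mathbb{R}^n$.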
In stating our uniform estimates below, it will be convenient for us to label the following list $$ \Pi:=\left(\sigma,\Sigma,\theta,\Theta, n,\text{diam}(B), H(0),|DH(0)|, |f|_{W^{1,\infty}(B_{R+1}(0))}, \inf_{\mathbb{R}^n}f,\sup_B\ell, \varrho_1\right). $$ \begin{lem} Let $\delta\in(0,1)$, $\varrho\in (0,\varrho_1)$, $\epsilon\in (0,1)$ and suppose $\zeta\in C^\infty_c(B)$ is nonnegative. There is a constant $C$ depending only on the list $\Pi$ and $|\zeta|_{W^{2,\infty}(B)}$ such that $$ \zeta(x) |Du^\epsilon(x)|\le C, \quad x\in B. $$ \end{lem} \begin{proof} 1. Set $$ M_\epsilon:=\sup_{x\in B}|\zeta(x)Du^\epsilon(x)| $$ and define $$ v^\epsilon(x):=\frac{1}{2}\zeta^2(x)|Du^\epsilon(x)|^2 - \alpha_\epsilon u^\epsilon(x). $$ Here $\alpha_\epsilon$ is a positive constant that will be chosen below. We will first obtain a bound on $v^\epsilon$ from above and then use the resulting estimate to bound $M_\epsilon$. We emphasize that each constant below will only depend on the list $\Pi$ and $|\zeta|_{W^{2,\infty}(B)}$; in particular, the constants will not depend on $\epsilon$ and $\alpha_\epsilon$. \par 2. We first differentiate equation \eqref{PenalizedEqn} with respect to $x_k$ $(k=1,\dots, n)$ to get \begin{equation}\label{1stDerPenEqn} \delta u^\epsilon_{x_k}+F^\varrho_{M_{ij}}(D^2u^\epsilon)u^\epsilon_{x_i x_j x_k} + \beta'_\epsilon(H^\varrho(Du^\epsilon))DH^\varrho(Du^\epsilon)\cdot Du^\epsilon_{x_k}=f^\varrho_{x_k}.
\end{equation} We suppress $\epsilon, \varrho$ dependence and function arguments and use \eqref{1stDerPenEqn} to compute \begin{align}\label{BernIdentity1} F_{M_{ij}}v_{x_i x_j}+ \beta' H_{p_k}v_{x_k}&=\left(F_{M_{ij}}\zeta_{x_i}\zeta_{x_j} + \zeta F_{M_{ij}}\zeta_{x_i x_j}\right)|Du|^2 + \nonumber \\ & \quad\quad 4F_{M_{ij}}\zeta\zeta_{x_i} Du\cdot Du_{x_j} +\zeta^2 F_{M_{ij}}Du_{x_i}\cdot Du_{x_j}\nonumber \\ &\quad\quad - \beta' H_{p_k}(\alpha u_{x_k}- \zeta\zeta_{x_k}|Du|^2) \nonumber \\ & \quad\quad +\zeta^2 u_{x_k}(f_{x_k}-\delta u_{x_k}) - \alpha F_{M_{ij}}u_{x_i x_j}. \end{align} We reiterate that in \eqref{BernIdentity1}, we have written $u$ for $u^\epsilon$, $v$ for $v^\epsilon$, $F$ for $F^\varrho(D^2u^\epsilon)$, $\beta$ for $\beta_\epsilon(H^\varrho(Du^\epsilon))$, $H$ for $H^\varrho(Du^\epsilon)$ and $f$ for $f^\varrho$. We will continue this convention for the remainder of this proof. \par 3. Now we recall Lemma \ref{HomogeneityLEM}. In particular, the inequality \eqref{FrhoAlmostHomo} along with the convexity of $\beta=\beta_\epsilon$ implies \begin{align*} -F_{M_{ij}}u_{x_i x_j} &:=-F_{M_{ij}}(D^2u)(D^2u)_{ij}\\ & \le -F(D^2u)+\sqrt{n}\Theta\varrho_1\\ &= \beta(H(Du))+\delta u -f+\sqrt{n}\Theta\varrho_1\\ &\le H(Du)\beta'(H(Du))+\delta u -f+\sqrt{n}\Theta\varrho_1. \end{align*} Combining with \eqref{BernIdentity1} gives \begin{align}\label{BernIdentity2} F_{M_{ij}}v_{x_i x_j}+ \beta' H_{p_k}v_{x_k}&\le \left(F_{M_{ij}}\zeta_{x_i}\zeta_{x_j} + \zeta F_{M_{ij}}\zeta_{x_i x_j}\right)|Du|^2 + \nonumber \\ & \quad\quad 4F_{M_{ij}}\zeta\zeta_{x_i} Du\cdot Du_{x_j} +\zeta^2 F_{M_{ij}}Du_{x_i}\cdot Du_{x_j}\nonumber \\ &\quad\quad - \beta' (\alpha(H_{p_k}u_{x_k}-H)-\zeta H_{p_k}\zeta_{x_k}|Du|^2) \nonumber \\ & \quad\quad +\zeta^2 u_{x_k}(f_{x_k}-\delta u_{x_k}) +\alpha(\delta u -f+\sqrt{n}\Theta\varrho_1). \end{align} \par 4. Assume $x_0\in \overline{B}$ is a maximizing point for $v$.
If $x_0\in \partial B$, then $v\le -\alpha u_{\delta,\varrho}(x_0)\le -\alpha \inf_{\mathbb{R}^n} f$. Therefore, \begin{equation}\label{v1Upp} v\le C (\alpha+1). \end{equation} Alternatively, suppose $x_0\in B.$ If $\beta'=\beta'(H(Du(x_0)))\le 1<1/\epsilon$, then $H(Du(x_0))\le 2\epsilon\le 2$. By \eqref{Coercive}, $|Du(x_0)|$ is bounded from above independently of $\epsilon$. Hence, \eqref{v1Upp} holds for an appropriate constant $C$. The final situation to consider is when $\beta'=\beta'(H(Du(x_0)))>1$. \par Recall the uniform ellipticity assumption gives $$ \zeta^2 F_{M_{ij}}Du_{x_i}\cdot Du_{x_j}\le -\zeta^2\theta |D^2u|^2. $$ Employing the necessary conditions $Dv(x_0)=0$ and $D^2v(x_0)\le 0$, and applying the Cauchy--Schwarz inequality to the term $4F_{M_{ij}}\zeta\zeta_{x_i} Du\cdot Du_{x_j}\le (\zeta|D^2u|) (C|D\zeta||Du|)$, allows us to evaluate \eqref{BernIdentity2} at the point $x_0$ to get \begin{align*} 0 & \le C(|Du|^2+1+\alpha) - \beta' (\alpha(H_{p_k}u_{x_k}-H)-\zeta H_{p_k}\zeta_{x_k}|Du|^2)\\ & \le C(|Du|^2+1+\alpha) - \beta' (\sigma\alpha |Du|^2- C_0 (1+\zeta|Du|)|Du|^2)\\ & \le C\beta'\left\{|Du|^2+1+\alpha - \sigma\alpha |Du|^2+C_0 (1+\zeta|Du|)|Du|^2\right\}. \end{align*} After multiplying through by $\zeta(x_0)^2$, we have \begin{equation}\label{betaprimeINeq} 0\le C\beta'\left\{(\zeta|Du|)^2+1+\alpha - \sigma\alpha (\zeta |Du|)^2+C_0 (1+\zeta|Du|)(\zeta |Du|)^2\right\} \end{equation} which of course holds at $x_0$. \par We now choose $$ \alpha:=\frac{2C_0}{\sigma}M_\epsilon. $$ Note $\sigma\alpha \ge 2C_0 \zeta(x_0)|Du(x_0)|$ and so \eqref{betaprimeINeq} gives $$ 0\le C\beta'\left\{(\zeta|Du|)^2+1+\alpha - 2C_0 (\zeta |Du|)^3+C_0 (1+\zeta|Du|)(\zeta |Du|)^2\right\}. $$ As $\beta'>1$, the expression in the braces is necessarily nonnegative. It follows that there is a constant $C$ such that $$ \zeta(x_0) |Du(x_0)|\le C(1+\alpha)^{1/3}.
$$ As a result, \eqref{v1Upp} holds for another appropriately chosen constant $C$. \par 5. Therefore, $$ M_\epsilon^2=\sup_{B}|\zeta Du^\epsilon|^2 =2\sup_{B}(v^\epsilon + \alpha_\epsilon u^\epsilon)\le C(\alpha_\epsilon+1) \le C\left(\frac{2C_0}{\sigma}M_\epsilon+1\right). $$ Consequently, $M_\epsilon$ is bounded above independently of $\epsilon\in (0,1)$. \end{proof} Next we assert that $\beta_\epsilon(H^\varrho(Du^\epsilon))$ is locally bounded, independently of all sufficiently small $\epsilon$. \begin{lem}\label{betaboundLem} Let $\delta\in(0,1)$, $\varrho\in (0,\varrho_1)$, $\epsilon\in (0,1)$ and suppose $\zeta\in C^\infty_c(B)$ is nonnegative. There is a constant $C$ depending only on the list $\Pi$ and $|\zeta|_{W^{2,\infty}(B)}$ such that $$ \zeta(x)\beta_\epsilon(H^\varrho(Du^\epsilon(x)))\le C, \quad x\in B. $$ \end{lem} We omit a proof of Lemma \ref{betaboundLem} as the proof of Lemma 3.3 in our recent work \cite{HyndMawi} immediately applies here. We also note that $$ F^\varrho(D^2u^\epsilon)=-\beta_\epsilon(H^\varrho(Du^\epsilon)) +f^\varrho-\delta u^\epsilon $$ is locally bounded, independently of $\epsilon \in (0,1)$. By the $W^{2,p}_{\text{loc}}$ estimates for fully nonlinear elliptic equations due to L. Caffarelli (Theorem 1 in \cite{CaffAnn}, Theorem 7.1 in \cite{CC}), we have the following. \begin{lem}\label{W2pBoundUeps} Let $\delta\in(0,1)$, $\varrho\in (0,\varrho_1)$, $\epsilon\in (0,1)$, $p\in (n,\infty)$, and assume $G\subset B$ is open with $\overline{G}\subset B$. There is a constant $C$ depending on $p$, the list $\Pi$, $1/\text{dist}(\partial G,\partial B)$ and $G$ such that $$ |D^2u^\epsilon|_{L^p(G)}\le C\left\{|u^\epsilon|_{L^\infty(B)}+1\right\}.
$$ \end{lem} \begin{proof} Let $B_r(x_0)\subset B$, and choose $\zeta\in C^\infty_c(B_r(x_0))$ such that $0\le \zeta\le 1$, $\zeta\equiv 1$ on $B_{r/2}(x_0)$ and \begin{equation}\label{DzetaBounds} |D\zeta|_{L^\infty(B_{r/2}(x_0))}\le \frac{C}{r}, \quad |D^2\zeta|_{L^\infty(B_{r/2}(x_0))}\le \frac{C}{r^2}. \end{equation} From Lemma \ref{betaboundLem}, $\beta_\epsilon(H^\varrho(Du^\epsilon(x)))\le C_1$ for all $x\in B_{r/2}(x_0)$, where $C_1$ depends only on the list $\Pi$ and $r$. By the assumption that $F$ is uniformly elliptic and concave, Theorem 7.1 in \cite{CC} implies there is a universal constant $c_0$ such that \begin{align*} r^2|D^2u^\epsilon|_{L^p(B_{r/4}(x_0))}&\le c_0\left\{|u^\epsilon|_{L^\infty(B_{r/2}(x_0))} + |f^\varrho-\delta u^\epsilon - \beta_\epsilon(H^\varrho(Du^\epsilon))|_{L^\infty(B_{r/2}(x_0))}\right\}\\ &\le c_0\left\{|u^\epsilon|_{L^\infty(B_{r/2}(x_0))} + |f^\varrho-\delta u^\epsilon|_{L^\infty(B_{r/2}(x_0))}+C_1\right\}\\ &\le C_0\left\{|u^\epsilon|_{L^\infty(B)} + |f^\varrho|_{L^\infty(B)}+C_1 \right\}. \end{align*} Here $C_0$ depends only on the list $\Pi$. \par Now select $r=\frac{1}{2}\text{dist}(\partial G, \partial B)$ and cover $\overline{G}$ with finitely many balls $B_{r/4}(x_1), \dots, B_{r/4}(x_m)$, with each $x_1,\dots,x_m\in G$. Then \begin{align*} \int_G|D^2u^\epsilon(x)|^pdx&\le \int_{\cup^m_{i=1}B_{r/4}(x_i)}|D^2u^\epsilon(x)|^pdx \\ &\le \sum^m_{i=1}\int_{B_{r/4}(x_i)}|D^2u^\epsilon(x)|^pdx \\ &\le mC_0^p\left(|u^\epsilon|_{L^\infty(B)} + |f^\varrho|_{L^\infty(B)}+C_1\right)^p. \end{align*} \end{proof} In view of our uniform estimates, we are in a position to send $\epsilon\rightarrow 0^+$ in the equation \eqref{PenalizedEqn}. \begin{prop} Let $\delta\in (0,1)$, $\varrho\in (0,\varrho_1)$, $p\in (n,\infty)$ and assume $G\subset B$ is open with $\overline{G}\subset B$.
\\ (i) There is $v_{\delta,\varrho}\in C(\overline{B})\cap W^{2,p}_{\text{loc}}(B)$ such that $u^\epsilon\rightarrow v_{\delta,\varrho}$, as $\epsilon\rightarrow 0^+$, uniformly in $\overline{B}$ and weakly in $W^{2,p}(G)$. \\ (ii) Moreover, $v_{\delta,\varrho}$ is the unique solution of the boundary value problem \begin{equation}\label{vdeltavarrhoPDE} \begin{cases} \max\{\delta v+F^\varrho(D^2v)-f^\varrho,H^\varrho(Dv)\}=0&\quad x\in B \\ \hspace{2.38in}v=u_{\delta,\varrho}& \quad x\in \partial B \end{cases}. \end{equation} (iii) There is a constant $C$ depending on $p$, the list $\Pi$, $1/\text{dist}(\partial G,\partial B)$ and $G$ such that \begin{equation}\label{vdeltarhoEst1} |D^2v_{\delta,\varrho}|_{L^p(G)}\le C\left\{|v_{\delta,\varrho}|_{L^\infty(B)} +1\right\} \end{equation} and \begin{equation}\label{vdeltarhoEst2} -C\le F^\varrho(D^2v_{\delta,\varrho}(x)) \end{equation} for Lebesgue almost every $x\in G$. \end{prop} \begin{proof} $(i)-(ii)$ The convergence to $v_{\delta,\varrho}$ satisfying \eqref{vdeltavarrhoPDE} is proved very similarly to Proposition 4.1 in \cite{Hynd} and part $(ii)$ of Theorem 1.1 in \cite{HyndMawi}, so we omit the details. In both arguments, the uniqueness of solutions of a related boundary value problem of the type \eqref{vdeltavarrhoPDE} is crucial; in our case, uniqueness follows from the estimate \eqref{GenComparisonEst} below. \par $(iii)$ The bound \eqref{vdeltarhoEst1} follows from part $(i)$ and Lemma \ref{W2pBoundUeps}. Let us now verify \eqref{vdeltarhoEst2}. Recall that $F^\varrho$ is concave. As $u^\epsilon$ converges to $v_{\delta,\varrho}$ weakly in $W^{2,p}_{\text{loc}}(B)$, for each $\zeta\in C^\infty_c(B)$ that is nonnegative, \begin{equation}\label{LimSupConcav} \limsup_{\epsilon\rightarrow 0^+}\int_B F^\varrho(D^2u^\epsilon(x))\zeta(x)dx\le \int_B F^\varrho(D^2v_{\delta,\varrho}(x))\zeta(x)dx.
\end{equation} By Lemma \ref{betaboundLem}, there is a constant $C$ depending only on the list $\Pi$ and $|\zeta|_{W^{2,\infty}(B)}$ such that $\zeta F^\varrho(D^2u^\epsilon)\ge -C$. Inequality \eqref{LimSupConcav} then gives \begin{equation}\label{vdeltarhoEst3} -C\le \zeta(x) F^\varrho(D^2v_{\delta,\varrho}(x)) \end{equation} for almost every $x\in B$. Let $x_0\in G$ and $r:=\frac{1}{2}\text{dist}(\partial G, \partial B)$, and choose $0\le \zeta\le 1$ to be supported in $B_r(x_0)$ and satisfy $\zeta\equiv 1$ on $B_{r/2}(x_0)$ and \eqref{DzetaBounds}. Then \eqref{vdeltarhoEst3} implies that \eqref{vdeltarhoEst2} holds for almost every $x\in B_{r/2}(x_0)$ for some constant $C$ depending on $\Pi$ and $r$. The general bound follows by a routine covering argument. \end{proof} \begin{prop}\label{FFvarrholocLem} Let $\delta\in(0,1)$, $p\in (n,\infty)$ and assume $G\subset B$ is open with $\overline{G}\subset B$.\\ $(i)$ Then $v_{\delta,\varrho}\rightarrow u_\delta$, as $\varrho\rightarrow 0^+$, uniformly on $\overline{B}$ and weakly in $W^{2,p}(G)$. \\ $(ii)$ There is a constant $C$ depending on $p$, the list $\Pi$, $1/\text{dist}(\partial G,\partial B)$ and $G$ such that $$ -C\le F(D^2u_{\delta}(x)) $$ for almost every $x\in G$. \end{prop} \begin{proof} $(i)$ We first claim \begin{equation}\label{UdeltarhoVdeltarhoEst} u_{\delta,\varrho}(x)\le v_{\delta,\varrho}(x)\le u_{\delta,\varrho}(x) +\frac{1}{\delta}\sqrt{n}\Theta\varrho \end{equation} for $x\in B$ and $\varrho\in (0,\varrho_1)$. And in order to prove \eqref{UdeltarhoVdeltarhoEst}, we will need the estimate \begin{equation}\label{GenComparisonEst} \max_{\overline{B}}\{u-v\}\le \max_{\partial B}\{u-v\}+\frac{1}{\delta}\max_{\overline{B}}\{g-h\} \end{equation} which holds for each $u\in USC(\overline{B})$ and $v\in LSC(\overline{B})$ that satisfy \begin{equation}\label{GenComparisonPDE} \max\{\delta u+F(D^2u)-g,H^\varrho(Du)\}\le 0\le \max\{\delta v+F(D^2v)-h,H^\varrho(Dv)\}, \quad x\in B.
\end{equation} Here $g,h\in C(\overline{B})$. The estimate \eqref{GenComparisonEst} can be proved with the ideas used to verify Proposition \ref{lamCompProp}; see also Proposition 2.2 of \cite{HyndMawi}. We leave the details to the reader. \par Using $F^\varrho\le F$, the inequality $u_{\delta,\varrho}\le v_{\delta,\varrho}$ follows from \eqref{GenComparisonEst} as $u=u_{\delta,\varrho}$, $v=v_{\delta,\varrho}$ satisfy \eqref{GenComparisonPDE} with $g=h=f^\varrho$. Likewise, we can use the bound $F\le F^\varrho+\sqrt{n}\Theta \varrho$ to show the inequality $v_{\delta,\varrho}\le u_{\delta,\varrho} + \sqrt{n}\Theta \varrho/\delta$ follows from \eqref{GenComparisonEst} as $u=v_{\delta,\varrho}$, $v=u_{\delta,\varrho}$ satisfy \eqref{GenComparisonPDE} with $g=f^\varrho+\sqrt{n}\Theta\varrho$ and $h=f^\varrho$. The assertion that $v_{\delta,\varrho}$ converges to $u_\delta$ in $W^{2,p}(G)$ weakly follows from \eqref{vdeltarhoEst1}. \par $(ii)$ Let $U\subset G$ be measurable and recall that $F^\varrho\le F$ and $F$ is concave. By \eqref{vdeltarhoEst2}, there is a constant $C$ depending on $p$, the list $\Pi$, $1/\text{dist}(\partial G,\partial B)$ and $G$ such that \begin{align*} -C|U|&\le \limsup_{\varrho \rightarrow 0^+}\int_{U}F^\varrho(D^2v_{\delta,\varrho}(x))dx \\ &\le \limsup_{\varrho \rightarrow 0^+}\int_{U}F(D^2v_{\delta,\varrho}(x))dx \\ &\le \int_{U}F(D^2u_{\delta}(x))dx. \end{align*} \end{proof} \begin{cor}\label{W2infuDel} For each $\delta\in (0,1)$, $D^2u_\delta\in L^\infty(\mathbb{R}^n;S_n(\R))$. Moreover, there is a constant $C$ depending only on the list $\Pi$ for which $$ |D^2u_\delta|_{ L^\infty(\mathbb{R}^n;S_n(\R))}\le C $$ for each $\delta\in(0,1)$. \end{cor} \begin{proof} Choose $R_1>0$ so that $\Omega_\delta\subset B_{R_1}(0)$ for all $\delta\in (0,1)$; such an $R_1$ exists by Corollary \ref{boundedDerOmega}. Lemma \ref{W2infFarOutBound} gives that there is a universal constant $C$ such that $$ D^2u_\delta(x)\le \frac{C}{R_1}I_n
$$ for almost every $|x|\ge 2R_1$. \par Now select $R>2R_1$ so large that \eqref{BbigEnough} is satisfied. Part $(ii)$ of Proposition \ref{FFvarrholocLem}, with $G=B_{2R_1}(0)$ and $B=B_R(0)$, gives a constant $C_1$ depending on $R_1$ and the list $\Pi$ such that \begin{equation}\label{FD2uBounds} -C_1\le F(D^2u_\delta(x)) \end{equation} for almost every $|x|\le 2R_1$. Since $u_\delta$ is convex (Proposition \ref{UdelConvex}), the uniform ellipticity assumption on $F$ implies \begin{equation}\label{FD2uBounds2} F(D^2u_\delta(x))\le -\theta\Delta u_\delta(x) \end{equation} for almost every $x\in \mathbb{R}^n$. Therefore, we can again appeal to the convexity of $u_\delta$ and employ \eqref{FD2uBounds} and \eqref{FD2uBounds2} to get $$ D^2u_\delta(x)\le \Delta u_\delta(x) I_n\le \frac{C_1}{\theta}I_n $$ for almost every $|x|\le 2R_1$. \end{proof} \begin{proof} (part $(ii)$ of Theorem \ref{Thm1}) By the convexity of $u_\delta$ and Corollary \ref{W2infuDel}, there is a constant $C$ independent of $\delta\in (0,1)$ for which $$ 0\le u_\delta(x+h)-2u_\delta(x)+u_\delta(x-h)\le C|h|^2 $$ for every $x, h\in \mathbb{R}^n$. The assertion now follows from passing to the limit along an appropriate sequence of $\delta$ tending to $0$ as was done in the proof of part $(i)$ of Theorem \ref{Thm1}. \end{proof} \begin{rem}\label{remboundedOmega} By part $(ii)$ of Theorem \ref{Thm1}, $Du_\delta$ exists everywhere and is continuous. By Corollary \ref{boundedDerOmega}, $$ \Omega_\delta=\{x\in \mathbb{R}^n : H(Du_\delta(x))<0\} $$ is open and bounded. \end{rem} \section{1D and rotationally symmetric problems}\label{1DandRotSymmSect} Now we will discuss a few results for solutions of the eigenvalue problem \eqref{EigProb} when the dimension $n=1$ and when $F,f,H$ satisfy the symmetry hypothesis \eqref{SymmetryCond}: $$ \begin{cases} f(Ox)=f(x)\\ H(O^tp)=H(p)\\ F(OMO^t)=F(M) \end{cases} $$ for each $x,p\in \mathbb{R}^n$, $M\in S_n(\R)$ and orthogonal $n\times n$ matrix $O$.
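\par Before turning to the proofs, we mention a model example satisfying \eqref{SymmetryCond}. For the nonlinearity, one may take $F(M)=-\text{tr}(M)$, for which $F(D^2u)=-\Delta u$, or more generally the negative of the maximal Pucci operator
$$
F(M)=-\mathcal{P}^+_{\theta,\Theta}(M):=\inf\left\{-\text{tr}(AM): A\in S_n(\R),\ \theta I_n\le A\le \Theta I_n\right\},
$$
which is uniformly elliptic, concave, positively homogeneous, superadditive and invariant under conjugation by orthogonal matrices, being an infimum of linear functions of $M$ over a conjugation invariant family of coefficient matrices. Rotationally symmetric examples of the remaining data are $f(x)=f_0(|x|)$ with $f_0$ convex and nondecreasing, and $H(p)=\frac{1}{2}|p|^2-a$ with $a>0$, which is radial, uniformly convex and negative at $p=0$.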
First, we prove Theorem \ref{SymmRegThm}, which involves the regularity of symmetric eigenfunctions. Then we consider the uniqueness of eigenfunctions of \eqref{EigProb} that satisfy the growth condition \eqref{ellgrowth}. \begin{proof} (Theorem \ref{SymmRegThm}) The assumption \eqref{SymmetryCond} implies that $u_\delta$ is radial; this follows from the uniqueness assertion in \ref{DeltaCompProp}. In particular, $u^*$ constructed in the proof of part $(i)$ of Theorem \ref{Thm1} will also be radial. Consequently, there is a function $\phi: [0,\infty)\rightarrow \mathbb{R}$ such that $u^*(x)=\phi(|x|)$. As $u^*$ is convex, $\phi$ is nondecreasing and convex. Moreover, for almost every $x\in \mathbb{R}^n$ $$ \begin{cases} Du^*(x)=\phi'(|x|)\frac{x}{|x|}\\ D^2u^*(x)=\phi''(|x|)\frac{x\otimes x}{|x|^2} +\frac{\phi'(|x|)}{|x|}\left(I_n - \frac{x\otimes x}{|x|^2}\right) \end{cases}. $$ \par Similar arguments imply $f(x)=f_0(|x|)$ for a nondecreasing, convex function $f_0$. Likewise $H(p)$ only depends on $|p|$ and so $\{p\in \mathbb{R}^n: H(p)\le 0\}$ is a ball. Thus, $\ell(v)=a|v|$ for some $a>0$, and as a result $H_0(p)=|p|-a$. The assumption \eqref{SymmetryCond} also implies $F=F(M)$ only depends on the eigenvalues of $M$. In particular, the symmetric function $G(\mu_1,\dots,\mu_n):=F(\diag(\mu_1,\dots,\mu_n))$ completely determines $F$. And as $F$ is uniformly elliptic, $$ G(\mu_1+h,\dots,\mu_n)-G(\mu_1,\dots,\mu_n)\le-\theta h $$ for $h\ge 0$. \par From our comments above, $\phi$ satisfies \begin{equation}\label{phiEq} \max\left\{\lambda^*+G\left(\frac{\phi'}{r},\dots,\frac{\phi'}{r}, \phi''\right) - f_0(r), \phi'-a\right\}=0, \quad r>0. \end{equation} And since $\phi'$ is nondecreasing, $$ \{r>0: \phi'(r)<a\}=(0,r_0) $$ for some $r_0>0$; this is another way of expressing $\Omega:=\{x\in \mathbb{R}^n: H(Du^*(x))<0\}=B_{r_0}(0)$. Part $(ii)$ of Theorem \ref{Thm1} then implies $u^*\in C^2(\Omega)\cap C^{1,1}_\text{loc}(\mathbb{R}^n)$.
Thus, $\phi'(r)=a$ for $r\ge r_0$ and $\phi\in C^2((0,\infty)\setminus\{r_0\})\cap C^{1,1}_\text{loc}((0,\infty))$. Furthermore, as $\phi''(r_0+)=0$, we just need to show $\phi''(r_0-)=0$. \par Recall the left-hand limit $\phi''(r_0-)$ exists and is nonnegative since $\phi$ is convex. By \eqref{phiEq}, $$ \lambda^*+G\left(\frac{\phi'}{r},\dots,\frac{\phi'}{r}, \phi''\right) - f_0(r)\le 0, \quad r>0. $$ Sending $r\rightarrow r_0^+$ gives \begin{equation}\label{Geq1} \lambda^*+G\left(\frac{a}{r_0},\dots,\frac{a}{r_0}, 0\right) - f_0(r_0)\le 0. \end{equation} Now, $$ \lambda^*+G\left(\frac{\phi'}{r},\dots,\frac{\phi'}{r}, \phi''\right) - f_0(r)= 0, \quad r\in (0,r_0) $$ and sending $r\rightarrow r_0^-$ gives \begin{equation}\label{Geq2} \lambda^*+G\left(\frac{a}{r_0},\dots,\frac{a}{r_0},\phi''(r_0-) \right) - f_0(r_0)=0. \end{equation} Combining \eqref{Geq1} and \eqref{Geq2} gives $$ G\left(\frac{a}{r_0},\dots,\frac{a}{r_0}, 0\right) \le f_0(r_0)-\lambda^*=G\left(\frac{a}{r_0},\dots,\frac{a}{r_0},\phi''(r_0-) \right). $$ By the monotonicity of $G$ in its last argument, $\phi''(r_0-)\le 0$. Thus $\phi''(r_0)=0$, and as a result, $u^*\in C^2(\mathbb{R}^n)$. \end{proof} \begin{prop}\label{UniqunessN1} Assume $n=1$. Any two convex solutions of \eqref{EigProb} that satisfy \eqref{ellgrowth} differ by an additive constant. \end{prop} \begin{proof} Assume $u_1, u_2$ are convex and satisfy $$ \begin{cases} \max\{\lambda^*+F(u'')-f,H(u')\}=0, \quad x\in \mathbb{R}\\ \lim_{|x|\rightarrow \infty}\frac{u(x)}{\ell(x)}=1 \end{cases}. $$ As in the proof of Theorem \ref{SymmRegThm}, we may deduce that necessarily $u_1,u_2\in C^2(\mathbb{R})$. Also observe $$ H_0(p)=\max_{v=\pm 1}\left\{pv-\ell(v)\right\}=\max\{p-\ell(1), -p -\ell(-1)\}. $$ In particular, $$ \{p\in \mathbb{R}: H(p)\le 0\}=[-\ell(-1), \ell(1)]. $$ It then follows from the convexity of $u_1$ and $u_2$ that $$ I_1:=\{x\in \mathbb{R}: H(u_1'(x))<0\}\quad \text{and}\quad I_2:=\{x\in \mathbb{R}: H(u_2'(x))<0\} $$ are bounded, open intervals.
\par Let us first assume $I_1=I_2=(\alpha,\beta)$. Then $$ \lambda^*+F(u_1'')-f=0=\lambda^*+F(u_2'')-f, \quad x\in (\alpha,\beta). $$ As $F$ is uniformly elliptic, $u_1''=u_2''=F^{-1}(f-\lambda^*)$ for $x\in (\alpha,\beta)$. Hence, $u'_1-u_2'$ is constant. The above characterization of $\{p\in \mathbb{R}: H(p)\le 0\}$ also implies $$ \begin{cases} u_1'=u_2'=-\ell(-1), \quad x\in (-\infty,\alpha]\\ u_1'=u_2'=\ell(1), \quad x\in [\beta,\infty) \end{cases} $$ It now follows that necessarily $u'_1=u'_2$, and so $u_1-u_2$ is constant. \par We are now left to prove that $I_1=I_2$; for definiteness, we shall assume $I_1= (\alpha_1,\beta_1)$ and $I_2= (\alpha_2,\beta_2)$. First suppose that $I_1\cap I_2=\emptyset$ and, without loss of generality, $\beta_1\le\alpha_2$. Then on $I_1$, $\lambda^*+F(u_1'')-f=0$ and $u_2'=-\ell(-1)$. We always have $\lambda^*+F(u_2'')-f\le 0$, which implies $\lambda^*-f\le 0$ on $I_1$ since $u_2''=0$ there. It then follows that $F(u_1'')=f-\lambda^*\ge 0$ and thus $u_1''\le 0$. As $u_1$ is convex, $u_1''=0$ in $I_1$. But then $u_1'$ is constant on $I_1$, which is incompatible with $u'_1(\alpha_1)=-\ell(-1)<0$ and $u'_1(\beta_1)=\ell(1)>0$. Therefore, $I_1\cap I_2\neq\emptyset$. \par Without any loss of generality, we may assume $\alpha_1<\alpha_2<\beta_1$. Repeating our argument above, we find $u_1''=0$ on $(\alpha_1,\alpha_2)$. It must be that $u_1'$ is constant and thus equal to $-\ell(-1)$ on $[\alpha_1,\alpha_2]$. But then $H(u_1')=0$ on $[\alpha_1,\alpha_2]$, which contradicts the definition of $I_1$. Hence, $I_1=I_2$ and the assertion follows. \end{proof} \begin{prop}\label{ConvUniqueness} Assume the symmetry condition \eqref{SymmetryCond} and that $F$ is uniformly elliptic. Then any two convex, rotationally symmetric solutions of \eqref{EigProb} that satisfy \eqref{ellgrowth} differ by an additive constant.
\end{prop} \begin{proof} As remarked in the proof of Theorem \ref{SymmRegThm} above, the symmetry assumption on $H$ results in $H_0(p)=|p|-a$ for some $a>0$. Now assume $u_1, u_2$ are convex, rotationally symmetric solutions of \eqref{EigProb} that satisfy \eqref{ellgrowth}. Then it follows that $$ \{x\in\mathbb{R}^n: H(Du_i(x))<0\}=B_{r_i}(0) $$ for $i=1,2$ and some $r_1,r_2>0$. Thus, \begin{equation}\label{u1u2const} u_i(x)=a|x|+b_i, \quad |x|\ge r_i \end{equation} for some constants $b_i$. If $r_1=r_2=:r$, then $$ \begin{cases} F(D^2u_1)=f(x)-\lambda^*=F(D^2u_2), \quad &x\in B_{r}(0)\\ u_1=ar+b_1, \quad u_2=ar+b_2, \quad & x\in \partial B_r(0). \end{cases} $$ As $F$ is uniformly elliptic, $u_1\equiv u_2+b_1-b_2$ on $\overline{B_r(0)}$ and thus on $\mathbb{R}^n$. \par Now suppose $r_1<r_2$, and set $v:= u_2+b_1-b_2$; by \eqref{u1u2const}, $u_1\equiv v$ for $|x|\ge r_2$. Since $$ F(D^2u_1)\le f(x)-\lambda^*=F(D^2v),\quad x\in B_{r_2}, $$ the maximum principle implies $u_1\le v$ in $\overline{B}_{r_2}$. By the strong maximum principle, either $u_1\equiv v$ in $\overline{B}_{r_2}$, in which case the proof is complete, or $u_1< v$ in $B_{r_2}$. However, if $u_1< v$ in $B_{r_2}$, Hopf's Lemma (see the appendix of \cite{Armstrong}) implies \begin{equation}\label{HopfCond} \frac{\partial v}{\partial \nu}(x_0)<\frac{\partial u_1}{\partial \nu}(x_0) \end{equation} for each $x_0\in\partial B_{r_2}$. Here $\nu=x_0/|x_0|$. As $u_1$ and $v$ are rotationally symmetric and convex, \eqref{HopfCond} implies $$ |Dv(x_0)|<|Du_1(x_0)|\le a. $$ However, $|Dv|=a$ on $\partial B_{r_2}$. This contradiction rules out $r_1<r_2$, and by symmetry $r_1>r_2$ is ruled out as well. \end{proof} \section{Minmax formulae}\label{MinMaxSect} This final section is devoted entirely to the proof of Theorem \ref{minmaxThm}. In particular, we will make use of the characterizations of $\lambda^*$ given in \eqref{LamChar1} and \eqref{LamChar2}. We will also use that the functions $H$ and $H_0$ have the same sign.
\par Let $\phi \in C^2(\mathbb{R}^n)$ and suppose that $H(D\phi)\le 0$. If $$ \lambda_\phi:=\inf_{\mathbb{R}^n}\left\{-F(D^2\phi(x))+f(x)\right\}>-\infty, $$ then $\phi$ is a subsolution of \eqref{EigProb} with eigenvalue $\lambda_\phi$. By \eqref{LamChar1}, $\lambda_\phi\le \lambda^*$. Hence, $\lambda_-=\sup_\phi\lambda_\phi\le \lambda^*.$ Now let $\psi\in C^2(\mathbb{R}^n)$ satisfy $$ \liminf_{|x|\rightarrow \infty}\frac{\psi(x)}{\ell(x)}\ge 1. $$ If $$ \mu_\psi:=\sup_{H(D\psi)<0}\left\{-F(D^2\psi(x))+f(x)\right\}<\infty, $$ then $\psi$ is a supersolution of \eqref{EigProb} with eigenvalue $\mu_\psi$. It follows from \eqref{LamChar2} that $\mu_\psi\ge \lambda^*$. As a result, $\lambda_+=\inf_\psi\mu_\psi\ge \lambda^*.$ \par Let $u^*$ be an eigenfunction associated with $\lambda^*$ that satisfies $D^2u^*\in L^\infty(\mathbb{R}^n; S_n(\mathbb{R}))$. As in Remark \ref{remboundedOmega}, $$ \Omega_0:=\{x\in \mathbb{R}^n: H(Du^*(x))<0\} $$ is open and bounded. For $\epsilon>0$ and $\tau>1$, set $$ u^{\epsilon,\tau}=\tau u^\epsilon=\tau(\eta^\epsilon*u^*). $$ Here $u^\epsilon=\eta^\epsilon*u^*$ is the standard mollification of $u^*$. Observe that $H_0$ is Lipschitz continuous with Lipschitz constant no more than one; in view of the basic estimate $|Du^*-Du^\epsilon|_{L^\infty(\mathbb{R}^n)}\le \epsilon |D^2u^*|_{L^\infty(\mathbb{R}^n)}$, \begin{equation}\label{LipHCond} H_0(Du^*(x))\le H_0(Du^\epsilon(x))+\epsilon |D^2u^*|_{L^\infty(\mathbb{R}^n)}, \quad x\in \mathbb{R}^n. \end{equation} \par So for any $x\in \mathbb{R}^n$ where $H(Du^{\epsilon,\tau}(x))<0$, \begin{equation}\label{LipHCond2} H_0(Du^{\epsilon}(x))=H_0\left(\frac{1}{\tau}Du^{\epsilon,\tau}(x)+\frac{\tau-1}{\tau}0\right)< \frac{\tau-1}{\tau}H_0(0)<0.
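To illustrate how $\lambda_\phi$ furnishes a certified lower bound for $\lambda^*$, here is a minimal numerical sketch for a one-dimensional model instance of our own choosing (not data from the paper): $F(m)=m$, $f(x)=x^2$, and $\ell(v)=|v|$, so that $H_0(p)=|p|-1$. The test function $\phi$ equals $x^2/2$ on $[-1,1]$ and $|x|-1/2$ outside, so $|\phi'|\le 1$ and the gradient constraint holds:

```python
import numpy as np

# Grid on a large interval; off [-1,1] the integrand only grows.
x = np.linspace(-5, 5, 20001)
f = x**2

# phi'' almost everywhere: 1 on [-1,1] (where phi = x^2/2), 0 outside.
phi_pp = np.where(np.abs(x) <= 1.0, 1.0, 0.0)

# lambda_phi = inf_x { -F(phi'') + f } is a lower bound for lambda*.
lam_phi = np.min(-phi_pp + f)
print(lam_phi)  # -1.0, attained at x = 0
```

So this particular $\phi$ certifies $\lambda^*\ge -1$ for the model data; sharper test functions would tighten the bound.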
\end{equation} In view of \eqref{LipHCond} and \eqref{LipHCond2}, we can choose $\epsilon_1=\epsilon_1(\tau)>0$ such that $$ \varrho:=-\left(\frac{\tau-1}{\tau}H_0(0)+\epsilon_1 |D^2u^*|_{L^\infty(\mathbb{R}^n)}\right)>0 $$ and $$ \{x\in \mathbb{R}^n: H(Du^{\epsilon,\tau}(x))<0\}\subset\{x\in \mathbb{R}^n: H_0(Du^*(x))<-\varrho\} $$ for $\epsilon\in (0,\epsilon_1)$. Since $\{x\in \mathbb{R}^n: H_0(Du^*(x))<-\varrho\}$ is a proper open subset of $\Omega_0$, we can further select $\epsilon_2=\epsilon_2(\tau)>0$ so that \begin{equation}\label{ImportantIncludsionepstau} \{x\in \mathbb{R}^n: H_0(Du^*(x))<-\varrho\}\subset \Omega^\epsilon:= \{x\in \Omega_0: \text{dist}(x,\partial\Omega_0)>\epsilon\} \end{equation} for $\epsilon\in (0,\epsilon_2)$. \par By assumption, $u^*$ satisfies $\lambda^* +F(D^2u^*)-f=0$ for almost every $x\in \Omega_0$. Mollifying both sides of this equation gives $ \lambda^*+F(D^2u^*)^\epsilon-f^\epsilon=0$ in $\Omega^\epsilon$. Since $F$ is concave, $$ F(D^2u^\epsilon(x))=F\left(\int_{\mathbb{R}^n}\eta^\epsilon(y)D^2u^*(x-y)dy\right)\ge \int_{\mathbb{R}^n}\eta^\epsilon(y)F(D^2u^*(x-y))dy=F(D^2u^*)^\epsilon(x). $$ Consequently, $\lambda^*+F(D^2u^\epsilon)-f^\epsilon\ge 0$ in $\Omega^\epsilon$. And since $\Omega_0$ is bounded, $|f^\epsilon-f|_{L^\infty(\Omega_0)}=o(1)$ as $\epsilon \rightarrow 0^+$. Therefore, \begin{equation}\label{lambdaMollifiedEqn} \lambda^*+F(D^2u^\epsilon)-f\ge o(1), \quad x\in \Omega^\epsilon, \end{equation} as $\epsilon\rightarrow 0^+$. \par We can now combine the inclusion \eqref{ImportantIncludsionepstau} and the inequality \eqref{lambdaMollifiedEqn}.
For $\epsilon\in (0,\min\{\epsilon_1,\epsilon_2\})$, \begin{align*} \lambda_+&\le \sup_{H(Du^{\epsilon,\tau})<0}\left\{-F(D^2u^{\epsilon,\tau}(x)) +f(x)\right\}\\ &= \sup_{H(Du^{\epsilon,\tau})<0}\left\{-\tau F(D^2u^{\epsilon}(x)) +f(x)\right\}\\ &= \sup_{H(Du^{\epsilon,\tau})<0}\left\{- F(D^2u^{\epsilon}(x)) +f(x)\right\}+O(\tau-1)\\ &\le \sup_{\Omega^\epsilon}\left\{- F(D^2u^{\epsilon}(x)) +f(x)\right\}+O(\tau-1)\\ &\le \lambda^* +o(1)+O(\tau-1). \end{align*} We conclude by first sending $\epsilon\rightarrow 0^+$ and then $\tau\rightarrow 1^+$. \end{document}
\begin{document} \title{Spontaneous symmetry breaking in non-steady modes of open quantum many-body systems} \author{Taiki Haga} \email[]{[email protected]} \affiliation{Department of Physics and Electronics, Osaka Metropolitan University, Sakai-shi, Osaka 599-8531, Japan} \date{\today} \begin{abstract} In a quantum many-body system coupled to the environment, its steady state can exhibit spontaneous symmetry breaking when a control parameter exceeds a critical value. In this study, we consider spontaneous symmetry breaking in non-steady modes of an open quantum many-body system. Assuming that the time evolution of the density matrix of the system is described by a Markovian master equation, the dynamics of the system is fully characterized by the eigenmodes and spectrum of the corresponding time evolution superoperator. Among the non-steady eigenmodes with finite lifetimes, we focus on the eigenmodes with the highest frequency, which we call the most coherent mode. For a dissipative spin model, it is shown that the most coherent mode exhibits a transition from a disordered phase to a symmetry-broken ordered phase, even if the steady state does not show singular behavior. We further argue that the phase transition of the most coherent mode induces a qualitative change in the decoherence dynamics of highly entangled states, i.e., the Schr\"odinger's cat states. \end{abstract} \maketitle \section{Introduction} Recent advances in quantum engineering have made it possible to precisely control atomic, molecular, and optical systems, which include ultracold atoms in optical lattices \cite{Bloch-08-1, Bloch-08-2, Syassen-08, Bloch-12, Ritsch-13}, trapped ions \cite{Lanyon-09, Barreiro-11, Blatt-12}, Rydberg atoms \cite{Low-12, Browaeys-20, Schauss-12, Sanchez-18, Lienhard-18, Borish-20}, and coupled optical cavities \cite{Hartmann-06, Baumann-10, Eichler-14, Rodriguez-16}. Decoherence due to coupling with the environment is inevitable in these systems. 
The nonequilibrium dynamics of open quantum many-body systems is highly complex and largely unexplored due to the intricate interplay of coherent Hamiltonian dynamics and dissipative dynamics arising from interactions with the environment. Spontaneous symmetry breaking (SSB) is a pivotal concept in condensed matter physics and statistical mechanics. The ground state of some quantum many-body systems, such as the transverse field Ising model and the Bose-Hubbard model, is known to exhibit a phase transition from a disordered phase to a symmetry-broken ordered phase \cite{Sachdev}. In a typical open quantum many-body system, a unique steady state is realized in the long-time limit, so it is natural to consider SSB in the steady state. Such a steady state is not necessarily at thermal equilibrium but can be a nonequilibrium state maintained by a balance between external driving and energy dissipation to the environment. Phase transitions in nonequilibrium steady states of open quantum many-body systems are known as dissipative phase transitions and have attracted much attention in recent years \cite{Tomadin-11, Lee-11, Kessler-12, Honing-12, Horstmann-13, Torre-13, Lee-13, Ludwig-13, Carr-13, Sieberer-14, Marcuzzi-14, Weimer-15, Maghrebi-16, Sieberer-16, Biondi-17, Rota-18, Minganti-18, Ferreira-19, Young-20, Domokos-17, Fitzpatrick-17, Imamoglu-18}. The dynamics of Markovian open quantum systems is governed by the Gorini--Kossakowski--Sudarshan--Lindblad (GKSL) master equation \cite{Lindblad-76, Gorini-76}. The superoperator that generates the time evolution of a density matrix is called the Liouvillian. The dynamics of an open quantum system is completely determined by the eigenmodes and spectrum of the Liouvillian. The steady states correspond to eigenmodes with zero eigenvalues. Therefore, the dissipative phase transition is interpreted as a transition of a Liouvillian eigenmode.
In this study, we consider the phase structure of non-steady eigenmodes that have nonzero decay rates. These eigenmodes are irrelevant to the long-time behavior of the system but contribute to the transient dynamics toward a steady state. We here ask what happens when non-steady eigenmodes exhibit SSB as certain control parameters are varied. Phase transitions of non-steady eigenmodes can significantly alter the relaxation dynamics of the system. However, even if some non-steady eigenmodes undergo a phase transition, the steady state does not necessarily exhibit singular behavior. Phase transitions of non-steady eigenmodes can provide a new mechanism for dynamical phase transitions in open quantum many-body systems. The purpose of this study is to demonstrate that non-steady eigenmodes of a well-studied open quantum many-body system exhibit previously unrecognized SSB. In particular, we consider one of the simplest models of open quantum many-body systems, the transverse field Ising model under dephasing. The steady state of this system is an infinite-temperature state, regardless of the strength of the transverse field or dephasing. Among the non-steady eigenmodes, we focus on the one with the highest frequency and call it the most coherent mode of the Liouvillian. Mean-field analysis for the infinite-dimensional case and finite-size numerics for the one-dimensional case show that the most coherent mode exhibits a phase transition from a disordered (paramagnetic) phase to a symmetry-broken (ferromagnetic) phase at a critical transverse field that depends on the dephasing rate. This SSB affects the relaxation dynamics of highly entangled states. Suppose that an initial state is taken to be an equal superposition of two symmetry-broken states, which is known as the ``Schr\"odinger's cat state". 
As the disorder-to-order phase transition of the most coherent mode occurs, the early dynamics of the density matrix shows a crossover from strongly damped relaxation to underdamped relaxation with temporal oscillations. This study reveals a typical situation in which SSB of non-steady eigenmodes causes a qualitative change in the decoherence dynamics of an open quantum system. This paper is organized as follows. In Sec.~\ref{sec:transition_non_steady_eigenmodes}, we review the quantum master equation for open quantum systems and define phase transitions in non-steady eigenmodes. In Sec.~\ref{sec:model}, we introduce the transverse field Ising model under dephasing. The structures of eigenmodes and spectra are discussed for a small system. Section \ref{sec:mean_field} presents the results of the mean-field analysis for the infinite-dimensional case. The phase diagram for the transverse field and dephasing rate is determined. In Sec.~\ref{sec:numerical_1D}, the critical field in the one-dimensional case is determined by numerically solving the quantum master equation, and qualitative agreement with the mean-field calculation is confirmed. In Sec.~\ref{sec:relaxation}, we discuss the effect of the transition of the most coherent mode on the relaxation dynamics of a highly entangled cat state. Section \ref{sec:conclusion} is devoted to discussion and conclusions. \section{Phase transition of non-steady modes} \label{sec:transition_non_steady_eigenmodes} The time evolution of a Markovian open quantum system is described by the GKSL quantum master equation \cite{Lindblad-76, Gorini-76}: \begin{equation} i\frac{d \rho}{dt} = \mathcal{L}(\rho) := [H, \rho] + i \sum_{\nu} \left( L_{\nu} \rho L_{\nu}^{\dag} - \frac{1}{2} \{ L_{\nu}^{\dag}L_{\nu}, \rho \} \right), \label{master_eq_1} \end{equation} where $\rho$ is the density matrix of the system, $[A,B] = AB-BA$, $\{A,B\} = AB+BA$, and $L_{\nu}$ are called jump operators.
The index $\nu$ of $L_{\nu}$ represents the types of dissipation or the lattice sites. The superoperator $\mathcal{L}$ that generates the time evolution of $\rho$ is known as the Liouvillian. The quantum master equation \eqref{master_eq_1} is justified when the time scale of the dynamics caused by the interaction with the environment is much longer than the time scale of the environment \cite{Breuer, Rivas}. Here, we define the inner product between operators $A$ and $B$ by \begin{equation} (A|B) := \mathrm{Tr}[A^{\dag}B]. \label{inner_product_operators} \end{equation} The adjoint operator of $\mathcal{L}$ is defined by $(A|\mathcal{L}(B))=(\mathcal{L}^\dag(A)|B)$, and is given by \begin{equation} \mathcal{L}^{\dag}(A) = [H, A] - i \sum_{\nu} \left( L_{\nu}^{\dag} A L_{\nu} - \frac{1}{2} \{ L_{\nu}^{\dag}L_{\nu}, A \} \right). \label{Liouvillian_dag} \end{equation} The right eigenmodes $\Phi_{\alpha}^{\mathrm{R}}$ and left eigenmodes $\Phi_{\alpha}^{\mathrm{L}}$ of $\mathcal{L}$ are defined by \begin{equation} \mathcal{L}(\Phi_{\alpha}^{\mathrm{R}}) = \lambda_{\alpha} \Phi_{\alpha}^{\mathrm{R}}, \quad \mathcal{L}^{\dag}(\Phi_{\alpha}^{\mathrm{L}}) = \lambda_{\alpha}^* \Phi_{\alpha}^{\mathrm{L}}, \label{right_left_eigenmode_eq} \end{equation} where $\lambda_{\alpha}$ denotes the $\alpha$th eigenvalue, and $\alpha=0, 1, ... , D^2-1$, where $D$ is the dimension of the Hilbert space $\mathcal{H}$ of the system. We assume that the eigenmodes are normalized as \begin{equation} (\Phi_{\alpha}^{\mathrm{R}} | \Phi_{\alpha}^{\mathrm{R}})=1, \quad (\Phi_{\alpha}^{\mathrm{L}} | \Phi_{\beta}^{\mathrm{R}})=\delta_{\alpha \beta}, \end{equation} for all $\alpha$ and $\beta$. Note that, in general, the right eigenmodes $\Phi_{\alpha}^{\mathrm{R}}$ are not orthogonal to each other because $\mathcal{L}$ is not Hermitian.
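The adjoint relation $(A|\mathcal{L}(B))=(\mathcal{L}^\dag(A)|B)$ can be verified numerically. The following sketch (all matrices are randomly generated illustrations, not tied to any particular model) checks it for a single jump operator in dimension $D=3$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
def rand_c():
    return rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

Hm = rand_c(); Hm = Hm + Hm.conj().T        # random Hermitian Hamiltonian
Lj = rand_c()                               # one random jump operator
A, B = rand_c(), rand_c()                   # arbitrary test operators

comm = lambda X, Y: X @ Y - Y @ X
anti = lambda X, Y: X @ Y + Y @ X

def Lsup(rho):    # L(rho) = [H, rho] + i (L rho L† - (1/2){L†L, rho})
    return comm(Hm, rho) + 1j * (Lj @ rho @ Lj.conj().T
                                 - 0.5 * anti(Lj.conj().T @ Lj, rho))

def Lsup_dag(X):  # L†(X) = [H, X] - i (L† X L - (1/2){L†L, X})
    return comm(Hm, X) - 1j * (Lj.conj().T @ X @ Lj
                               - 0.5 * anti(Lj.conj().T @ Lj, X))

ip = lambda X, Y: np.trace(X.conj().T @ Y)  # (X|Y) = Tr[X† Y]
assert np.isclose(ip(A, Lsup(B)), ip(Lsup_dag(A), B))
```

Multiple jump operators work the same way, since the relation holds term by term in the sum over $\nu$.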
The right eigenmodes $\Phi_\alpha^\mathrm{R}$ and eigenvalues $\lambda_{\alpha}$ have the following properties \cite{Minganti-18}: \begin{enumerate} \item There is at least one right eigenmode $\Phi_0^\mathrm{R}$ with zero eigenvalue, $\mathcal{L}(\Phi_0^\mathrm{R})=0$, corresponding to the steady state. Here, $\Phi_0^\mathrm{R}$ is Hermitian and positive-semidefinite. \item All eigenvalues have non-positive imaginary parts, which guarantees the convergence of the density matrix to the steady state in the long-time limit. We assume that the eigenmodes are sorted such that $0 = |\mathrm{Im}[\lambda_0]| \leq |\mathrm{Im}[\lambda_1]| \leq \cdots \leq |\mathrm{Im}[\lambda_{D^2-1}]|$. \item $\mathrm{Tr}[\Phi_{\alpha}^{\mathrm{R}}]=0$ if $\lambda_\alpha \neq 0$, which follows from $\mathrm{Tr}[\mathcal{L}(\rho)]=0$. In general, $\Phi_{\alpha}^{\mathrm{R}}$ with nonzero $\lambda_\alpha$ is neither Hermitian nor positive-semidefinite. \item If $\mathcal{L}(\Phi_\alpha^\mathrm{R})=\lambda_\alpha \Phi_\alpha^\mathrm{R}$, then $\mathcal{L}((\Phi_\alpha^\mathrm{R})^\dag)=\lambda_\alpha^* (\Phi_\alpha^\mathrm{R})^\dag$. This means that the Liouvillian spectrum on the complex plane is symmetric with respect to the real axis. \end{enumerate} In the following, we see that the Liouvillian $\mathcal{L}$ can be interpreted as a non-Hermitian operator on an extended Hilbert space \cite{Prosen-12, Znidaric-15}. Let $\{ \ket{i} \}_{i=1,...,D}$ be an orthonormal basis set of the Hilbert space $\mathcal{H}$ of the system. An arbitrary operator $A$ can be mapped to a vector in $\mathcal{H} \otimes \mathcal{H}$ by \begin{equation} A = \sum_{i,j=1}^D A_{ij} \ket{i} \bra{j} \ \to \ |A) = \sum_{i,j=1}^D A_{ij} \ket{i} \otimes \ket{j} \in \mathcal{H} \otimes \mathcal{H}, \end{equation} where $A_{ij}=\braket{i|A|j}$. We here denote a vector in $\mathcal{H} \otimes \mathcal{H}$ by the round ket $|...)$. 
The inner product between $|A)$ and $|B)$ reads $(A|B)=\sum_{i,j=1}^D A_{ij}^* B_{ij}$ from Eq.~\eqref{inner_product_operators}. In terms of this vector representation, the Liouvillian is rewritten as \begin{equation} \mathcal{L} = H \otimes I - I \otimes H^{\mathrm{T}} + i \sum_{\nu} \mathcal{D}[L_\nu], \label{Liouvillian_ladder} \end{equation} with \begin{equation} \mathcal{D}[L_\nu] = L_{\nu} \otimes L_{\nu}^* - \frac{1}{2} L_{\nu}^{\dag}L_{\nu} \otimes I - \frac{1}{2} I \otimes L_{\nu}^{\mathrm{T}} L_{\nu}^*, \end{equation} where $I$ represents the identity operator on $\mathcal{H}$, and $L_{\nu}^*$ is defined as $\braket{i|L_{\nu}^*|j} = \braket{i|L_{\nu}|j}^*$. For simplicity of notation, the same notation $\mathcal{L}$ is used for the Liouvillian in the vector representation. The master equation \eqref{master_eq_1} is then rewritten as the Schr\"odinger equation-like form, \begin{equation} i\frac{d}{dt}|\rho) = \mathcal{L}|\rho). \label{master_eq_2} \end{equation} The right and left eigenmodes are written as $|\Phi_{\alpha}^{\mathrm{R}})$ and $|\Phi_{\alpha}^{\mathrm{L}})$ in the vector representation, respectively. If the set of eigenmodes forms a basis of $\mathcal{H} \otimes \mathcal{H}$, the spectral decomposition of $\mathcal{L}$ reads \begin{equation} \mathcal{L} = \sum_{\alpha} \lambda_{\alpha} | \Phi_{\alpha}^{\mathrm{R}}) (\Phi_{\alpha}^{\mathrm{L}} |. \label{L_spectral_decomposition} \end{equation} The time evolution of the density matrix $|\rho_t)$ is given by \begin{equation} |\rho_t) = e^{-i\mathcal{L}t}|\rho_{\mathrm{ini}}) = \sum_{\alpha=0}^{D^2-1} c_{\alpha} e^{-i\lambda_{\alpha} t} |\Phi_{\alpha}^{\mathrm{R}}), \quad c_{\alpha} = (\Phi_{\alpha}^{\mathrm{L}}|\rho_{\mathrm{ini}}), \label{rho_expansion} \end{equation} where $|\rho_{\mathrm{ini}})$ is an initial state. 
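As a concrete illustration of the vector representation \eqref{Liouvillian_ladder}, the sketch below builds the $4\times 4$ Liouvillian of a single spin with a transverse field and dephasing (the parameter values are arbitrary) and checks two of the spectral properties listed above: a zero eigenvalue exists, and all imaginary parts are non-positive:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

g, gamma = 1.0, 0.5          # illustrative parameter choices
H = -g * sx                  # single spin in a transverse field
L = np.sqrt(gamma) * sz      # dephasing jump operator

def dissipator(L):
    """D[L] = L (x) L* - (1/2) L†L (x) I - (1/2) I (x) (L†L)^T."""
    LdL = L.conj().T @ L
    return (np.kron(L, L.conj())
            - 0.5 * np.kron(LdL, I2)
            - 0.5 * np.kron(I2, LdL.T))

# Vectorized Liouvillian: H (x) I - I (x) H^T + i * sum_nu D[L_nu]
Lv = np.kron(H, I2) - np.kron(I2, H.T) + 1j * dissipator(L)

evals = np.linalg.eigvals(Lv)
assert np.min(np.abs(evals)) < 1e-10   # steady state: a zero eigenvalue
assert np.max(evals.imag) < 1e-10      # decaying modes: Im(lambda) <= 0
```

Here the steady state is the maximally mixed state $I/2$, since it commutes with $\sigma^z$ and the commutator with $H$ vanishes.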
While the structure of the steady state $|\Phi_0^{\mathrm{R}})$ has been extensively studied in many previous studies, here we focus on the behavior of non-steady eigenmodes $|\Phi_{\alpha}^{\mathrm{R}})$ ($\mathrm{Im}[\lambda_\alpha]<0$). Suppose that the Liouvillian $\mathcal{L}$ contains a control parameter $g$. Let $|\Phi_{\alpha}^{\mathrm{R}}(g))$ and $|\Phi_{\alpha}^{\mathrm{L}}(g))$ be some (non-degenerate) right and left eigenmodes with a nonzero eigenvalue $\lambda_{\alpha}(g)$. For a finite system, when $g$ is varied continuously, $|\Phi_{\alpha}^{\mathrm{R}}(g))$, $|\Phi_{\alpha}^{\mathrm{L}}(g))$, and $\lambda_{\alpha}(g)$ should also vary continuously almost everywhere in the parameter space. It should be noted that in non-Hermitian systems, eigenvectors and eigenvalues can exhibit singular behavior when two eigenvalues collide at a certain parameter value, called the exceptional point \cite{Bender-98, Guo-09, Ruter-10, Heiss-12, El-Ganainy-18, Minganti-19, Ozdemir-19, Miri-19, Ashida-20}. We assume that the eigenvalue $\lambda_{\alpha}(g)$ under consideration does not collide with another eigenvalue when $g$ is varied. Suppose that for an operator $\mathcal{O}$ on $\mathcal{H} \otimes \mathcal{H}$, the following property is satisfied: \begin{equation} \lim_{L \to \infty} ( \Phi_{\alpha}^{\mathrm{L}}(g)| \mathcal{O} | \Phi_{\alpha}^{\mathrm{R}}(g)) = \begin{cases} M(g) \neq 0 & (g < g_\mathrm{c}); \\ 0 & (g \geq g_\mathrm{c}), \end{cases} \label{def_eigenmode_phase_transition} \end{equation} where $L$ represents the system size and $M(g)$ is a complex-valued analytic function. Equation \eqref{def_eigenmode_phase_transition} defines the phase transition of a non-steady eigenmode. The critical parameter $g_\mathrm{c}$ depends on the eigenmode under consideration. Note that the phase transition of a non-steady eigenmode is not necessarily accompanied by a phase transition in the steady state. 
In general, the phase transitions of non-steady eigenmodes do not affect the long-time behavior of the system, but they can affect the transient dynamics to the steady state. \begin{figure} \caption{Schematic phase diagrams and Liouvillian spectra. (a) The case where the order exists only at zero temperature. (b) The case where the order exists at finite temperature. Figures (a-1) and (b-1) show the expected phase diagrams with respect to the control parameter $g$ and the fictitious temperature $\beta^{-1}$.\label{Fig-QPT-diagram}} \end{figure} Unfortunately, it is difficult to theoretically investigate the large-scale structure of each non-steady eigenmode. Instead, we consider a quantity obtained by averaging with appropriate weights with respect to the eigenmodes. By interpreting $\mathcal{L}$ as a non-Hermitian Hamiltonian on $\mathcal{H} \otimes \mathcal{H}$, we define the Liouvillian canonical average by \begin{equation} \langle \mathcal{O} \rangle_\beta := \frac{\mathrm{Tr}\left[\mathcal{O} e^{-\beta \mathcal{L}}\right]}{\mathrm{Tr}\left[e^{-\beta \mathcal{L}}\right]} = \frac{\sum_\alpha e^{-\beta \lambda_\alpha} ( \Phi_{\alpha}^{\mathrm{L}}| \mathcal{O} | \Phi_{\alpha}^{\mathrm{R}})}{\sum_\alpha e^{-\beta \lambda_\alpha}}, \label{def_canonical_average} \end{equation} where $\beta$ is a fictitious inverse temperature. The canonical average defined by Eq.~\eqref{def_canonical_average} can be calculated by various theoretical methods developed in statistical mechanics and field theory, such as mean-field approximation, diagrammatic techniques, and renormalization group analysis. Note that the expectation value $\langle \mathcal{O} \rangle_\beta$ itself has no physical meaning because the ``Gibbs state" given by $e^{-\beta \mathcal{L}}/\mathrm{Tr}[e^{-\beta \mathcal{L}}]$ is neither Hermitian nor positive-semidefinite. It should be considered as a mathematical tool for extracting the large-scale structure of non-steady eigenmodes.
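The canonical average \eqref{def_canonical_average} can be evaluated directly from the matrix exponential $e^{-\beta\mathcal{L}}$. The sketch below does this for the single dephased spin in a transverse field (an illustrative system with arbitrary parameters; the truncated Taylor series for the exponential is our own helper, adequate for small $\beta\|\mathcal{L}\|$), and checks the trace identity $\mathrm{Tr}[e^{-\beta\mathcal{L}}]=\sum_\alpha e^{-\beta\lambda_\alpha}$ underlying the spectral form:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

g, gamma, beta = 1.0, 0.5, 0.3
H = -g * sx
Lm = (np.kron(H, I2) - np.kron(I2, H.T)
      + 1j * gamma * (np.kron(sz, sz) - np.eye(4)))  # dephasing-qubit Liouvillian

def expm_series(M, terms=60):
    """Matrix exponential by truncated Taylor series (fine for small ||M||)."""
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

E = expm_series(-beta * Lm)
O = np.kron(sz, sz)                       # a sample observable on H (x) H
avg = np.trace(O @ E) / np.trace(E)       # Liouvillian canonical average <O>_beta

# Consistency: Tr[e^{-beta L}] equals the spectral sum over eigenvalues.
evals = np.linalg.eigvals(Lm)
assert np.isclose(np.trace(E), np.sum(np.exp(-beta * evals)))
```

Note that `avg` is complex in general, in line with the remark that the ``Gibbs state" here has no direct physical meaning.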
In predicting the qualitative behavior of $\langle \mathcal{O} \rangle_\beta$, it is useful to invoke the analogy with the conventional quantum phase transition in Hermitian systems \cite{Sachdev}. Figures \ref{Fig-QPT-diagram}(a-1) and (b-1) show the expected phase diagram with respect to the control parameter $g$ and the fictitious temperature $\beta^{-1}$. The order parameter $\langle \mathcal{O} \rangle_\beta$ takes a nonzero value in the ordered region, but it vanishes in the disordered region. Figures \ref{Fig-QPT-diagram}(a-2) and (b-2) show the Liouvillian spectra for $g < g_\mathrm{c}$. The red (blue) region represents the eigenvalues corresponding to the ordered (disordered) eigenmode. As in Hermitian systems, there are two cases, depending on whether the order exists only at ``zero temperature" $\beta=\infty$ (see Fig.~\ref{Fig-QPT-diagram}(a)) or at finite temperature $\beta<\infty$ (see Fig.~\ref{Fig-QPT-diagram}(b)). In the former case, the phase transition occurs only in the eigenmodes with the smallest real part of eigenvalues, and these modes are denoted as $|\Phi_{\mathrm{mc}}^{\mathrm{R}})$ and $|\Phi_{\mathrm{mc}}^{\mathrm{L}})$. Then, the zero-temperature limit of Eq.~\eqref{def_canonical_average} reads \begin{equation} \lim_{\beta \to \infty} \langle \mathcal{O} \rangle_\beta = (\Phi_{\mathrm{mc}}^{\mathrm{L}}| \mathcal{O} |\Phi_{\mathrm{mc}}^{\mathrm{R}}). \end{equation} Since the eigenmodes $|\Phi_{\mathrm{mc}}^{\mathrm{R}})$ and $|\Phi_{\mathrm{mc}}^{\mathrm{L}})$ have the highest frequency or the highest coherence, we call them the most coherent modes of the Liouvillian. If the order exists at finite temperature, other eigenmodes close to the most coherent mode are also ordered (see Fig.~\ref{Fig-QPT-diagram}(b-2)). Lower-dimensional systems are expected to correspond to case (a) and higher-dimensional systems to case (b).
The dashed lines in Figs.~\ref{Fig-QPT-diagram}(a-1) and (b-1) represent the boundaries of the ``quantum critical region", which is characterized by the absence of quasiparticle-like excitations \cite{Sachdev}. However, the physical meaning of the quantum critical region in this case is unclear at present, because it includes non-steady eigenmodes far from the steady state. \section{Model} \label{sec:model} We introduce a prototypical model that exhibits the phase transition of non-steady eigenmodes, the transverse field Ising model under dephasing. The Hamiltonian is given by \begin{equation} H = -J \sum_{\langle j k \rangle} \sigma^z_j \sigma^z_k - g \sum_{j} \sigma^x_j, \end{equation} where $\sigma^{\mu}_j \ (\mu=x,y,z)$ denote the Pauli matrices at site $j$ and $\langle j k \rangle$ represents a pair of nearest-neighbor sites. $J \ (>0)$ and $g$ represent the strength of the exchange interaction and the transverse field, respectively. Suppose that each spin is affected by dephasing with a rate $\gamma$. The corresponding jump operator at site $j$ is given by \begin{equation} L_j = \sqrt{\gamma} \sigma^z_j. \label{jump_operator_dephasing} \end{equation} The Liouvillian is written in the vector representation as \begin{align} \mathcal{L} &= -J \sum_{\langle jk \rangle} (\sigma^z_{j,+} \sigma^z_{k,+} - \sigma^z_{j,-} \sigma^z_{k,-}) - g \sum_{j} (\sigma^x_{j,+} - \sigma^x_{j,-}) \nonumber \\ &\quad + i \gamma \sum_j \sigma^z_{j,+} \sigma^z_{j,-} - i \gamma N, \label{Liouvillian_Ising} \end{align} where $\sigma^{\mu}_{j,+(-)}$ acts on the first (second) Hilbert space of $\mathcal{H} \otimes \mathcal{H}$, and $N$ represents the number of spins. 
The transverse field Ising model under dephasing is equivalent to the usual Ising model affected by a fluctuating longitudinal field, \begin{equation} \mathcal{H}(t) = -J \sum_{\langle j k \rangle} \sigma^z_j \sigma^z_k - g \sum_{j} \sigma^x_j + \sqrt{\gamma} \sum_{j} \xi_j(t) \sigma^z_j, \label{H_Ising_fluc_field} \end{equation} where $\xi_j(t)$ represent Gaussian white-noise processes with $\langle \langle \xi_j(t) \rangle \rangle=0$ and $\langle \langle \xi_j(t)\xi_k(t') \rangle \rangle = \delta_{jk} \delta(t-t')$. Here, $\langle \langle ... \rangle \rangle$ denotes the average with respect to the noise $\xi_j(t)$. The time evolution of the state vector $\ket{\psi(t)}$ reads $i\partial_t \ket{\psi(t)} = \mathcal{H}(t) \ket{\psi(t)}$. The density matrix $\rho(t)$ can be obtained by averaging over the noise, $\rho(t) = \langle \langle \ket{\psi(t)}\bra{\psi(t)} \rangle \rangle$. It can be shown that the time evolution of $\rho(t)$ is given by the GKSL master equation with the jump operator \eqref{jump_operator_dephasing} (see Refs.~\cite{Pichler-13, Stannigel-14, Cai-13} for details of the derivation). This type of master equation can also be derived, under certain conditions, on the assumption that the environment is a set of independent harmonic oscillators and that the system-environment coupling is linear with respect to the bosonic annihilation and creation operators \cite{Ban-10}. Experimental realization of the transverse field Ising model is possible with trapped ions \cite{Lanyon-09} and Rydberg atoms \cite{Schauss-12, Sanchez-18, Lienhard-18, Borish-20}, where fluctuations in the trapping field lead to dephasing of the qubit. Note that for the GKSL master equation to be valid, the correlation time $\tau_\mathrm{c}$ of the noise $\xi_j(t)$ must be much shorter than the time scales of the system. 
Since the time scales of the system are characterized by $1/J$, $1/g$, and $1/\gamma$, the legitimate region of the parameters is given by $J, g, \gamma \ll \tau_\mathrm{c}^{-1}$. In the absence of the transverse field $(g=0)$, the most coherent modes of $\mathcal{L}$ are the product states of the ferromagnetic state and N\'eel state, which are four-fold degenerate due to the $\mathbb{Z}_2$ symmetry. In particular, for the one-dimensional case with the periodic boundary condition, the most coherent modes are written as \begin{align} |\Phi_{\mathrm{mc}}^{\mathrm{R}}) &= \ket{\uparrow \uparrow \uparrow \cdots \uparrow} \otimes \ket{\uparrow \downarrow \uparrow \cdots \downarrow}, \nonumber \\ &\quad \ket{\uparrow \uparrow \uparrow \cdots \uparrow} \otimes \ket{\downarrow \uparrow \downarrow \cdots \uparrow}, \nonumber \\ &\quad \ket{\downarrow \downarrow \downarrow \cdots \downarrow} \otimes \ket{\uparrow \downarrow \uparrow \cdots \downarrow}, \nonumber \\ &\quad \ket{\downarrow \downarrow \downarrow \cdots \downarrow} \otimes \ket{\downarrow \uparrow \downarrow \cdots \uparrow}, \label{MCM_Ising_zero_g} \end{align} where $N$ is assumed to be even. The corresponding eigenvalue is given by $\lambda = -2JN - i \gamma N$. In the presence of infinitesimal $g$, the four-fold degeneracy of Eq.~\eqref{MCM_Ising_zero_g} is lifted, and $|\Phi_{\mathrm{mc}}^{\mathrm{R}})$ is the superposition of each term in Eq.~\eqref{MCM_Ising_zero_g} with equal weight. Figure \ref{Fig-spec-Ising}(a) shows the Liouvillian spectrum for the one-dimensional case with $g=0$ and $N=4$. Note that each eigenvalue is highly degenerate. For example, the zero eigenvalue is $2^N$-fold degenerate because the corresponding eigenmodes are the product states of two copies of the same spin configuration. As $g$ increases, each cluster of eigenvalues becomes elongated, and eventually they merge into a large cluster (see Fig.~\ref{Fig-spec-Ising}(b)). 
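The eigenvalue $\lambda=-2JN-i\gamma N$ of the most coherent mode at $g=0$ can be confirmed by direct enumeration, since the Liouvillian \eqref{Liouvillian_Ising} is diagonal in the $z\otimes z$ product basis when $g=0$. A short sketch for the periodic chain with $N=4$ and $J=\gamma=1$:

```python
import numpy as np
from itertools import product

J, gamma, N = 1.0, 1.0, 4   # periodic chain at g = 0

# At g = 0 the Liouvillian is diagonal over pairs of spin configurations
# (s for the "+" copy, t for the "-" copy of the doubled Hilbert space).
lams = []
for s in product([1, -1], repeat=N):
    for t in product([1, -1], repeat=N):
        Ez_s = sum(s[j] * s[(j + 1) % N] for j in range(N))
        Ez_t = sum(t[j] * t[(j + 1) % N] for j in range(N))
        overlap = sum(sj * tj for sj, tj in zip(s, t))
        lams.append(-J * (Ez_s - Ez_t) + 1j * gamma * (overlap - N))
lams = np.array(lams)

# Most coherent mode: the leftmost eigenvalue, lambda = -2JN - i*gamma*N.
mc = lams[np.argmin(lams.real)]
print(mc)   # (-8-4j)
```

The minimum real part $-2JN$ is attained exactly by the four ferromagnet-times-Néel products of Eq.~\eqref{MCM_Ising_zero_g}, each with site overlap zero and hence imaginary part $-\gamma N$, matching the stated four-fold degeneracy.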
It should also be noted that the spectrum is symmetric under reflection about the vertical line $\mathrm{Re}[\lambda] = 0$ and about the horizontal line $\mathrm{Im}[\lambda] = -\gamma N$, owing to the PT symmetry of the Liouvillian \cite{Prosen-12}. \begin{figure} \caption{Liouvillian spectra of the transverse field Ising chain under dephasing with $N=4$. (a) Spectrum for $g=0$. The left-most red point corresponds to the most coherent mode. The insets show the spin configuration of the eigenmodes with the largest and smallest real part of eigenvalues. (b) Spectra for $g=0.3$, $0.4$, $0.5$, and $1$ with $J=\gamma=1$.} \label{Fig-spec-Ising} \end{figure} Conversely, for $g \gg J, \gamma$, the most coherent mode is the paramagnetic state in which all spins are parallel or antiparallel in the $x$-direction: \begin{equation} |\Phi_{\mathrm{mc}}^{\mathrm{R}}) \simeq \ket{\rightarrow \rightarrow \cdots \rightarrow} \otimes \ket{\leftarrow \leftarrow \cdots \leftarrow}, \label{MCM_Ising_zero_J} \end{equation} where $\ket{\rightarrow} = (\ket{\uparrow} + \ket{\downarrow})/\sqrt{2}$ and $\ket{\leftarrow} = (\ket{\uparrow} - \ket{\downarrow})/\sqrt{2}$. To distinguish between the ferromagnetic and paramagnetic states, we define the magnetization for $+$ spins by \begin{equation} M_+^z := \frac{1}{N} \sum_{j=1}^N \sigma^z_{j,+}. \label{def_M} \end{equation} Note that the expectation value of $M_+^z$ vanishes due to the $\mathbb{Z}_2$ symmetry, $(\Phi_{\mathrm{mc}}^{\mathrm{L}}| M_+^z |\Phi_{\mathrm{mc}}^{\mathrm{R}}) = 0$. Instead, we consider the squared magnetization, \begin{equation} \mathcal{O} := (M_+^z)^2.
\end{equation} If the most coherent mode $|\Phi_{\mathrm{mc}}^{\mathrm{R}})$ is the paramagnetic state given by Eq.~\eqref{MCM_Ising_zero_J}, the spin correlation function decays exponentially, \begin{equation} (\Phi_{\mathrm{mc}}^{\mathrm{L}}| \sigma^z_{j,+} \sigma^z_{k,+} |\Phi_{\mathrm{mc}}^{\mathrm{R}}) \sim e^{-d_{jk}/\xi}, \end{equation} where $d_{jk}$ is the distance between sites $j$ and $k$, and $\xi$ is the correlation length. On the other hand, if the most coherent mode $|\Phi_{\mathrm{mc}}^{\mathrm{R}})$ is the ferromagnetic state given by Eq.~\eqref{MCM_Ising_zero_g}, the spin correlation function converges to a nonzero value, \begin{equation} \lim_{d_{jk} \to \infty} (\Phi_{\mathrm{mc}}^{\mathrm{L}}| \sigma^z_{j,+} \sigma^z_{k,+} |\Phi_{\mathrm{mc}}^{\mathrm{R}}) = m^2 \neq 0. \end{equation} Thus, there exists a critical value $g_\mathrm{c}$ such that $(\Phi_{\mathrm{mc}}^{\mathrm{L}}|\mathcal{O}|\Phi_{\mathrm{mc}}^{\mathrm{R}}) = m^2 \neq 0$ for $g<g_\mathrm{c}$ and $(\Phi_{\mathrm{mc}}^{\mathrm{L}}|\mathcal{O}|\Phi_{\mathrm{mc}}^{\mathrm{R}}) = 0$ for $g>g_\mathrm{c}$ in the thermodynamic limit $N \to \infty$. In the following, we confirm the existence of the phase transition in the most coherent mode using mean-field analysis and finite-size numerics. \section{Mean-field analysis} \label{sec:mean_field} \begin{figure*} \caption{Magnetization $m := (m_\mathrm{A} + m_\mathrm{B})/2$ obtained from the mean-field approximation as a function of the dephasing rate $\gamma$ and the transverse field $g$.} \label{Fig-MF-phase-diagram} \end{figure*} \begin{figure*} \caption{Magnetization $m$ of the one-dimensional dissipative Ising model as a function of the dephasing rate $\gamma$ and the transverse field $g$. The parameters are $J=1$, and $\beta=\infty$, $1.5$, $1.0$, and $0.8$. The spin number is $N=4$. A clear correlation can be seen with the phase diagram based on the mean-field approximation in Figs.~\ref{Fig-MF-phase-diagram}.} \label{Fig-1D-phase-diagram} \end{figure*} We apply the mean-field approximation to the non-steady eigenmodes of the dissipative Ising model.
Here, we consider a hypercubic lattice of arbitrary dimension. To describe the N\'eel order, the lattice points are divided into the A-sublattice and the B-sublattice. It is convenient to introduce the following unitary transformation: \begin{equation} U := \prod_j \exp\left( i\frac{\pi}{2} \sigma_{j,-}^z \right) \prod_{j \in \mathrm{B}} \exp\left( i\frac{\pi}{2} \sigma_{j,-}^x \right), \label{def_U} \end{equation} where $\prod_{j \in \mathrm{B}}$ represents the product over the lattice sites belonging to the B-sublattice. The first term in Eq.~\eqref{def_U} flips the $x$-component of all spins and the second term flips the $z$-component of the B-sublattice spins. The Liouvillian is transformed as \begin{align} \tilde{\mathcal{L}} := U^{\dag} \mathcal{L} U &= -J \sum_{\langle jk \rangle} (\sigma^z_{j,+} \sigma^z_{k,+} + \sigma^z_{j,-} \sigma^z_{k,-}) \nonumber \\ &\quad - g \sum_{j} (\sigma^x_{j,+} + \sigma^x_{j,-}) \nonumber \\ &\quad + i \gamma \sum_j \mathrm{sgn}(j) \sigma^z_{j,+} \sigma^z_{j,-} - i \gamma N, \label{Liouvillian_Ising_ferro} \end{align} where $\mathrm{sgn}(j)=1$ for the A-sublattice and $\mathrm{sgn}(j)=-1$ for the B-sublattice. The transformed Liouvillian \eqref{Liouvillian_Ising_ferro} is symmetric with respect to the exchange of $\sigma^\mu_{j,+}$ and $\sigma^\mu_{j,-}$. Thus, the most coherent mode for $g=0$ is the tensor product of the ferromagnetic states. Let us define the ``Gibbs state" with a fictitious inverse temperature $\beta$ by \begin{equation} \rho_\beta = \frac{e^{-\beta \tilde{\mathcal{L}}}}{\mathrm{Tr}\left[e^{-\beta \tilde{\mathcal{L}}}\right]}. \end{equation} Note that $\rho_\beta$ is not a physical density matrix because it is neither Hermitian nor positive-semidefinite. The reduced density matrix for lattice site $j$ is defined by $\rho_{\beta, j} = \mathrm{Tr}_{\neq j}[\rho_\beta]$, where $\mathrm{Tr}_{\neq j}$ represents the trace for all spins except site $j$. 
In the mean-field analysis, the reduced density matrix is approximated by \begin{equation} \rho_{\beta, j} \simeq \frac{e^{-\beta \tilde{\mathcal{L}}_j^\mathrm{MF}}}{\mathrm{Tr}_j \left[ e^{-\beta \tilde{\mathcal{L}}_j^\mathrm{MF}} \right]}, \end{equation} with \begin{align} \tilde{\mathcal{L}}_j^\mathrm{MF} &= - J \sum_{\langle k \rangle_j} (\sigma^z_{j,+} m_{k,+} + \sigma^z_{j,-} m_{k,-}) - g (\sigma^x_{j,+} + \sigma^x_{j,-}) \nonumber \\ &\quad + i \gamma \mathrm{sgn}(j) \sigma^z_{j,+} \sigma^z_{j,-} - i \gamma, \end{align} where $\langle k \rangle_j$ represents the nearest-neighbor sites of $j$ and $\mathrm{Tr}_j$ represents the trace for spins at site $j$. The magnetization $m_{k, \pm} := \mathrm{Tr}_k [\sigma^z_{k, \pm} \rho_{\beta, k}]$ at site $k$ is determined self-consistently. Note that $m_{k, \pm}$ is generally complex-valued. Since $m_{k, \pm}$ depends only on the sublattice to which $k$ belongs, we denote it as $m_\mathrm{A}$ or $m_\mathrm{B}$. Then, the mean-field Liouvillian is given by \begin{align} \tilde{\mathcal{L}}_\mathrm{A}^\mathrm{MF} &= - zJ m_\mathrm{B} (\sigma^z_{+} + \sigma^z_{-}) - g (\sigma^x_{+} + \sigma^x_{-}) \nonumber \\ &\quad + i \gamma \sigma^z_{+} \sigma^z_{-} - i \gamma, \label{mf_Liouvillian_A} \end{align} \begin{align} \tilde{\mathcal{L}}_\mathrm{B}^\mathrm{MF} &= - zJ m_\mathrm{A} (\sigma^z_{+} + \sigma^z_{-}) - g (\sigma^x_{+} + \sigma^x_{-}) \nonumber \\ &\quad - i \gamma \sigma^z_{+} \sigma^z_{-} - i \gamma, \label{mf_Liouvillian_B} \end{align} where $z$ is the coordination number of the lattice and we have omitted the site index of the spin operator.
The self-consistent equations read \begin{equation} m_\mathrm{A} = \frac{\mathrm{Tr}_1 \left[\sigma_+^z e^{-\beta \tilde{\mathcal{L}}_\mathrm{A}^\mathrm{MF}}\right]}{\mathrm{Tr}_1 \left[ e^{-\beta \tilde{\mathcal{L}}_\mathrm{A}^\mathrm{MF}} \right]}, \quad m_\mathrm{B} = \frac{\mathrm{Tr}_1 \left[\sigma_+^z e^{-\beta \tilde{\mathcal{L}}_\mathrm{B}^\mathrm{MF}}\right]}{\mathrm{Tr}_1 \left[ e^{-\beta \tilde{\mathcal{L}}_\mathrm{B}^\mathrm{MF}} \right]}, \label{mf_self_consistent} \end{equation} where ``$\mathrm{Tr}_1$" represents the trace over the one-site Hilbert space with a basis set $\{ \ket{\uparrow} \otimes \ket{\uparrow}, \ket{\uparrow} \otimes \ket{\downarrow}, \ket{\downarrow} \otimes \ket{\uparrow}, \ket{\downarrow} \otimes \ket{\downarrow} \}$. From Eqs.~\eqref{mf_Liouvillian_A}, \eqref{mf_Liouvillian_B}, and \eqref{mf_self_consistent}, the following relation holds: \begin{equation} m_\mathrm{A} = m_\mathrm{B}^*. \label{m_A_m_B_relation} \end{equation} The self-consistent equation \eqref{mf_self_consistent} together with Eqs.~\eqref{mf_Liouvillian_A}, \eqref{mf_Liouvillian_B}, and \eqref{m_A_m_B_relation} is solved numerically by iteration. Figure \ref{Fig-MF-phase-diagram} shows the averaged magnetization $m :=(m_\mathrm{A} + m_\mathrm{B})/2$ as a function of the dephasing rate $\gamma$ and the transverse field $g$. From Eq.~\eqref{m_A_m_B_relation}, the magnetization $m$ is real. For Hermitian systems the magnetization satisfies $-1 \leq m \leq 1$; in the present case, by contrast, $m$ can exceed $1$ when $\gamma$ and $g$ are large. This is because the right and left eigenmodes are not equal, owing to the non-Hermiticity of the Liouvillian. In Figs.~\ref{Fig-MF-phase-diagram}(a)--(d), the bright region with $m>0$ corresponds to the ferromagnetic (ordered) phase, and the dark region with $m=0$ corresponds to the paramagnetic (disordered) phase. The phase boundary determines the critical field $g_\mathrm{c}(\gamma)$.
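The iterative solution of Eq.~\eqref{mf_self_consistent} can be sketched in a few lines. The snippet below is ours (the parameter values are illustrative, not those of the paper's figures): it iterates the $4\times 4$ mean-field Liouvillians and reproduces the relation $m_\mathrm{A} = m_\mathrm{B}^*$ numerically.

```python
import numpy as np
from scipy.linalg import expm

# Sketch (ours) of the self-consistent mean-field iteration; the
# parameter values below are illustrative, not the paper's.
J, g, gamma, beta, z = 1.0, 0.1, 0.2, 2.0, 2  # z = coordination number (1D)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
szp, szm = np.kron(sz, I2), np.kron(I2, sz)   # sigma^z_{+}, sigma^z_{-}
sxp, sxm = np.kron(sx, I2), np.kron(I2, sx)

def L_mf(m_other, sign):
    # sign = +1 for sublattice A, -1 for sublattice B
    return (-z * J * m_other * (szp + szm) - g * (sxp + sxm)
            + 1j * gamma * sign * (szp @ szm) - 1j * gamma * np.eye(4))

def magnetization(L):
    w = expm(-beta * L)
    return np.trace(szp @ w) / np.trace(w)

mA, mB = 0.5 + 0j, 0.5 + 0j                   # real starting guess
for _ in range(200):
    mA, mB = magnetization(L_mf(mB, +1)), magnetization(L_mf(mA, -1))

print(mA, mB)  # converged magnetizations, satisfying mA = conj(mB)
```

Starting from a real guess preserves $m_\mathrm{A} = m_\mathrm{B}^*$ at every iteration, and a nonzero fixed point signals the ferromagnetic phase.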
For $\gamma=0$, since the model reduces to two decoupled copies of the transverse field Ising model, it exhibits a quantum phase transition at $g=1$ at zero temperature ($\beta=\infty$). The critical field $g_\mathrm{c}$ for $\gamma=0$ decreases as the temperature increases, and it eventually vanishes at $\beta=1$. A remarkable feature of the magnetization is its oscillatory behavior as a function of $\gamma$. In the absence of the transverse field ($g=0$), the magnetization $m(\gamma, g)$ has a periodicity, \begin{equation} m\left(\gamma + \frac{\pi}{\beta}, 0\right) = m(\gamma, 0), \label{m_periodicity} \end{equation} which directly follows from the self-consistent equation \eqref{mf_self_consistent}. Since $m(\gamma, g)$ vanishes near $\gamma=g=0$ for $\beta<1$, the periodicity \eqref{m_periodicity} implies the existence of infinitely many islands of the paramagnetic phase, as shown in Fig.~\ref{Fig-MF-phase-diagram}(d). In this case, the phase diagram has a reentrant structure. For $\beta=0.8$ and $\gamma=4$, the transition from the paramagnetic phase to the ferromagnetic phase occurs at $g\simeq1.1$ with increasing $g$, and a second transition back to the paramagnetic phase occurs at $g\simeq2.5$ (see Figs.~\ref{Fig-MF-phase-diagram}(d) and (h)). \section{Numerical results for 1D model} \label{sec:numerical_1D} We study the phase structure of the most coherent mode by finite-size numerics of the one-dimensional dissipative Ising model. We assume the periodic boundary condition and an even number of spins. First, let us consider the phase diagram at finite temperature. The magnetization $M_+^z$ for $+$ spins is defined by Eq.~\eqref{def_M}, and we consider the average of the squared magnetization, \begin{equation} m^2 := \frac{\mathrm{Tr}\left[(M_+^z)^2 e^{-\beta \tilde{\mathcal{L}}}\right]}{\mathrm{Tr}\left[e^{-\beta \tilde{\mathcal{L}}}\right]}.
\label{def_m_finite_size} \end{equation} The magnetization $m$ is given by the square root of Eq.~\eqref{def_m_finite_size}, which corresponds to $(m_\mathrm{A} + m_\mathrm{B})/2$ of the mean-field analysis in Sec.~\ref{sec:mean_field}, where $m_\mathrm{A} \:(m_\mathrm{B})$ is the magnetization on sublattice A (B). Figure \ref{Fig-1D-phase-diagram} shows the magnetization $m$ as a function of the dephasing rate $\gamma$ and the transverse field $g$. The spin number is $N=4$. The clear correlation between Figs.~\ref{Fig-MF-phase-diagram} and \ref{Fig-1D-phase-diagram} confirms the validity of the mean-field approximation. In particular, one can observe a precursor of the reentrant structure of the phase diagram in Fig.~\ref{Fig-1D-phase-diagram}(d). As in the case of the conventional transverse field Ising model, no true long-range order is expected to exist at finite temperature in one dimension. Thus, the magnetization for $\beta < \infty$ should vanish in the thermodynamic limit $N \to \infty$. \begin{figure} \caption{(a) Binder cumulant $U_4$ of the most coherent mode as a function of the transverse field $g$ with $\gamma=1$. The system sizes are $N=4$, $6$, $8$, $10$, and $12$. The crossing point of $U_4$ for different system sizes indicates the critical field $g_\mathrm{c}$. (b) Critical field $g_\mathrm{c}$ as a function of the dephasing rate $\gamma$.} \label{Fig-1D-binder} \end{figure} Next, we focus on the phase transition of the most coherent mode ($\beta=\infty$). It is difficult to obtain the most coherent mode by direct numerical diagonalization of the Liouvillian for $N>8$. Thus, we consider the imaginary time evolution of the master equation, \begin{equation} \frac{d}{dt}|\rho) = -\tilde{\mathcal{L}}|\rho). \label{master_eq_imag_time} \end{equation} By integrating Eq.~\eqref{master_eq_imag_time} for a sufficiently long time, $|\rho)$ converges to the most coherent mode $|\Phi_{\mathrm{mc}}^{\mathrm{R}})$.
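The imaginary-time scheme of Eq.~\eqref{master_eq_imag_time} is essentially a power iteration. The following sketch (ours) demonstrates the principle on a small matrix with a prescribed spectrum standing in for $\tilde{\mathcal{L}}$: a normalized Euler iteration of $d|\rho)/dt = -\tilde{\mathcal{L}}|\rho)$ converges to the eigenmode whose eigenvalue has the smallest real part.

```python
import numpy as np

# Sketch (ours) of the imaginary-time extraction of the most coherent mode,
# demonstrated on a small matrix with a known spectrum standing in for the
# transformed Liouvillian.
rng = np.random.default_rng(1)
lams = np.array([-3.0 - 2.0j, -1.0 + 0.5j, 0.0 + 0.0j, 0.5 + 1.0j])
Q = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
L = Q @ np.diag(lams) @ np.linalg.inv(Q)   # eigenmodes are the columns of Q

v = rng.normal(size=4) + 0j
dt = 0.01
for _ in range(5000):
    v = v - dt * (L @ v)        # Euler step of the imaginary-time flow
    v /= np.linalg.norm(v)      # renormalize to avoid over/underflow

target = Q[:, 0] / np.linalg.norm(Q[:, 0])   # mode with minimal Re(lambda)
print(abs(np.vdot(target, v)))  # -> close to 1
```

The step $dt$ must be small enough that $|1-dt\,\lambda|$ is maximized by the target eigenvalue; the convergence rate is set by the gap in $\mathrm{Re}[\lambda]$.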
To determine the critical field $g_\mathrm{c}(\gamma)$, we define the Binder cumulant $U_4$ for the most coherent mode as \begin{equation} U_4 := 1 - \frac{\mathrm{tr}\left[(M^z)^4 \Phi_{\mathrm{mc}}^{\mathrm{R}}\right]}{3\mathrm{tr}\left[(M^z)^2 \Phi_{\mathrm{mc}}^{\mathrm{R}}\right]^2}, \label{def_binder} \end{equation} where $\Phi_{\mathrm{mc}}^{\mathrm{R}}$ is the original matrix representation of $|\Phi_{\mathrm{mc}}^{\mathrm{R}})$ and \begin{equation} M^z = \frac{1}{N} \sum_{j=1}^N \sigma^z_{j} \end{equation} is the magnetization for the original spin operator. Note that the trace ``$\mathrm{tr}$" in Eq.~\eqref{def_binder} is taken over the original Hilbert space $\mathcal{H}$. In the thermodynamic limit, the Binder cumulant $U_4$ takes the value $2/3$ in the long-range ordered phase and zero in the disordered phase. Figure \ref{Fig-1D-binder}(a) shows the Binder cumulant $U_4$ as a function of the transverse field $g$ with $\gamma=1$. For sufficiently large system sizes, the curves of $U_4$ for different system sizes are known to intersect at the critical field $g_\mathrm{c}$. To determine $g_\mathrm{c}$, let us denote the crossing point of $U_4$ for two system sizes $N$ and $N-2$ as $g_\mathrm{c}^{(N)}$. The critical field is given by $g_\mathrm{c}=\lim_{N\to\infty} g_\mathrm{c}^{(N)}$. We fit $g_\mathrm{c}^{(N)}$ for $N=6$, $8$, $10$, $12$ with the algebraic function $f(N) = a+bN^{-c} \:(c>0)$, where $g_\mathrm{c}=a$. Figure \ref{Fig-1D-binder}(b) shows $g_\mathrm{c}$ determined by this procedure as a function of the dephasing rate $\gamma$. The regions $g<g_\mathrm{c}$ and $g>g_\mathrm{c}$ correspond to the long-range ordered phase and the disordered phase, respectively. Note that for $\gamma=0$, $g_\mathrm{c}=1$ because the model is equivalent to two copies of the conventional transverse field Ising model. The curve of $g_\mathrm{c}$ in Fig.~\ref{Fig-1D-binder}(b) agrees with the phase boundary in Fig.~\ref{Fig-1D-phase-diagram}(a), which was obtained for a small system size.
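The finite-size extrapolation of the crossing points can be sketched with a standard least-squares fit. The data below are synthetic, generated from an assumed $g_\mathrm{c}=1.11$; they are not the paper's numerical values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the finite-size extrapolation: fit the crossing points
# g_c^(N) with f(N) = a + b*N^{-c} and read off g_c = a.  The data are
# synthetic (generated from a = 1.11), not the paper's numerics.
def f(N, a, b, c):
    return a + b * N**(-c)

N = np.array([6.0, 8.0, 10.0, 12.0])
gcN = f(N, 1.11, 0.5, 1.5)               # synthetic crossing points

(a, b, c), _ = curve_fit(f, N, gcN, p0=(1.0, 1.0, 1.0))
print(a)  # extrapolated critical field, here ~1.11 by construction
```

With noisy crossing points, the uncertainty of $a$ can be read off from the covariance matrix returned by `curve_fit`.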
These results provide evidence for the existence of an order-disorder transition of the most coherent mode in one dimension. \section{Relaxation dynamics of a cat state} \label{sec:relaxation} We discuss how the phase transition of the most coherent mode affects the relaxation dynamics of the system. As an initial state, we consider the following pure state: \begin{equation} \ket{\psi_\mathrm{ini}} = \frac{1}{\sqrt{2}} (\ket{\mathrm{F}} + \ket{\mathrm{N}}), \label{cat_state} \end{equation} where $\ket{\mathrm{F}}=\ket{\uparrow \uparrow \uparrow \cdots \uparrow}$ is a ferromagnetic state and $\ket{\mathrm{N}}=\ket{\uparrow \downarrow \uparrow \cdots \downarrow}$ is a N\'eel state. Superpositions of two macroscopically different quantum states, such as Eq.~\eqref{cat_state}, are called ``Schr\"odinger's cat states" \cite{Leibfried-05, Monz-11, Omran-19, Song-19}, and they are an important resource in quantum computing and quantum communication. The corresponding density matrix reads \begin{align} \rho_\mathrm{ini} &= \ket{\psi_\mathrm{ini}} \bra{\psi_\mathrm{ini}} \nonumber \\ &= \frac{1}{2} (\ket{\mathrm{F}} \bra{\mathrm{F}} + \ket{\mathrm{N}} \bra{\mathrm{N}} + \ket{\mathrm{F}} \bra{\mathrm{N}} + \ket{\mathrm{N}} \bra{\mathrm{F}}). \label{rho_ini} \end{align} In the limit $g \to 0$, the most coherent mode is the equal superposition of the states in Eq.~\eqref{MCM_Ising_zero_g}. Thus, the overlap between the initial state $\rho_\mathrm{ini}$ and the most coherent mode $\Phi_{\mathrm{mc}}^{\mathrm{R}}$ is given by \begin{equation} (\rho_\mathrm{ini}| \Phi_{\mathrm{mc}}^{\mathrm{R}}) = \frac{1}{4}. \end{equation} From the eigenmode expansion \eqref{rho_expansion}, the large overlap between the initial state and the most coherent mode means that the time evolution of the density matrix is dominated by the most coherent mode. Thus, the density matrix is expected to exhibit coherent oscillations at the frequency of the most coherent mode.
In the opposite case $g \gg J$, since the most coherent mode is approximately given by Eq.~\eqref{MCM_Ising_zero_J}, the overlap with the initial state is very small. Thus, the density matrix is expected to exhibit monotonic relaxation. In summary, when $g$ exceeds the critical field $g_\mathrm{c}$, a crossover from coherent relaxation with oscillations to incoherent relaxation without oscillations is expected to occur. \begin{figure} \caption{Overlap $r_\alpha = |(\rho_\mathrm{ini}|\Phi_\alpha^{\mathrm{R}})|$ between each eigenmode and the initial state for (a) $g \ll J$ and (b) $g$ comparable to $J$.} \label{Fig-overlap} \end{figure} Figure \ref{Fig-overlap} shows the overlap \begin{equation} r_\alpha := |(\rho_\mathrm{ini}|\Phi_\alpha^{\mathrm{R}})| \end{equation} between eigenmodes $|\Phi_\alpha^{\mathrm{R}})$ and the initial state $|\rho_\mathrm{ini})$ given by Eq.~\eqref{rho_ini}. For $g \ll J$, the overlap has large values for eigenmodes with $\mathrm{Re}[\lambda_\alpha]=0, \pm 2NJ$ (see Fig.~\ref{Fig-overlap}(a)), because the eigenmodes with $\mathrm{Re}[\lambda_\alpha]=0$, $-2NJ$, and $2NJ$ have large overlaps with $\ket{\mathrm{F}} \bra{\mathrm{F}} + \ket{\mathrm{N}} \bra{\mathrm{N}}$, $\ket{\mathrm{F}} \bra{\mathrm{N}}$, and $\ket{\mathrm{N}} \bra{\mathrm{F}}$, respectively. This implies that the time evolution of the density matrix starting from $\rho_{\mathrm{ini}}$ is governed by a small number of eigenmodes with frequency $2NJ$. When $g$ is comparable to $J$ (see Fig.~\ref{Fig-overlap}(b)), the overlap is delocalized over all eigenmodes. In this case, since the time evolution of the density matrix is given by the superposition of a large number of eigenmodes with various frequencies, an incoherent relaxation without oscillations is expected. \begin{figure} \caption{(a) Time evolution of the fidelity $F(t)$ for $g=0.2$, $0.4$, $0.6$, $0.8$, $1.0$, $1.2$, $1.4$, and $1.6$ from top to bottom. The spin number is $N=8$ and the other parameters are $J=\gamma=1$.
(b) Fourier transform $\tilde{F}(\omega)$ of the fidelity.} \label{Fig-fidelity} \end{figure} We consider the fidelity \begin{equation} F(t) := \left( \mathrm{tr}\left[ \left(\rho_\mathrm{ini}^{1/2} \ \rho_t \ \rho_\mathrm{ini}^{1/2} \right)^{1/2} \right] \right)^2, \end{equation} which measures how close the state $\rho_t$ at time $t$ is to the initial state \cite{Jozsa-94, Liang-19}. We have $F(0)=1$ and $F(t) \to D^{-1}$ in the long-time limit $t \to \infty$, where $D=2^N$ is the dimension of the Hilbert space. Figure \ref{Fig-fidelity}(a) shows the time evolution of the fidelity $F(t)$ for different values of $g$. Note that the horizontal axis is the rescaled time $tJN$. We confirm that $F(t)$ with respect to the rescaled time $tJN$ is almost independent of the system size $N$. For $g<1$, $F(t)$ exhibits temporal oscillations, whereas for $g>1$, $F(t)$ decays monotonically. To highlight this crossover, in Fig.~\ref{Fig-fidelity}(b), we show the Fourier transform of $F(t)$, \begin{equation} \tilde{F}(\omega) = \int_{-\infty}^{\infty} dt e^{-i\omega t} F(|t|). \end{equation} The horizontal axis is the rescaled frequency $\omega/(NJ)$. For $g<1$, $\tilde{F}(\omega)$ shows a peak at $\omega/(NJ)=2$, which is consistent with the fact that the time evolution is dominated by the most coherent mode with frequency $2NJ$. As $g$ increases, the peak is smeared and eventually disappears at $g \simeq 1.1$. The value of $g$ at which the peak of $\tilde{F}(\omega)$ disappears is close to the critical field $g_\mathrm{c}\simeq 1.11$ of the most coherent mode. These results suggest that the transition of the most coherent mode leads to a qualitative change in the decoherence of a cat state. \section{Conclusions} \label{sec:conclusion} In this study, we found that non-steady eigenmodes of a simple open quantum many-body system exhibit spontaneous symmetry breaking.
The transition from the disordered phase to the symmetry-broken ordered phase occurs only in the vicinity of the most coherent mode, which is the eigenmode with the highest frequency. This means that the transition does not change the long-time behavior of the system, but only affects the transient relaxation dynamics. In particular, we demonstrated the crossover from under-damped to over-damped relaxation of a Schr\"odinger's cat state. We now discuss the experimental feasibility of the setup considered in this study. The transverse field Ising model can be realized with trapped ions \cite{Lanyon-09} or Rydberg atoms \cite{Schauss-12, Sanchez-18, Lienhard-18, Borish-20}. The dephasing of the system is caused by different types of noise, such as fluctuations in the magnetic field of the ion-trap device. Recent advances in qubit manipulation techniques have made it possible to generate Schr\"odinger's cat states with dozens of qubits \cite{Leibfried-05, Monz-11, Omran-19, Song-19}. The off-diagonal elements of the density matrix, which characterize the coherence of the state, can be obtained by observing parity oscillations. We expect that there exists a critical value of the transverse field at which the way the coherence decays changes qualitatively. The introduction of the Liouvillian canonical ensemble defined by Eq.~\eqref{def_canonical_average} enables various field-theoretical approaches, such as mapping to a classical model with an additional dimension, diagrammatic techniques, and renormalization group analysis, which have been developed in the study of conventional quantum phase transitions \cite{Sachdev}. Such an analysis can be useful in clarifying the ``low-energy" excitations near the most coherent mode and the nature of the quantum critical regime shown in Fig.~\ref{Fig-QPT-diagram}.
The field-theoretical approach outlined above should not be confused with the Keldysh formalism \cite{Torre-13, Maghrebi-16, Sieberer-16, Young-20}, which was developed to study steady states. Establishing a field theory for non-steady eigenmodes may open a new research direction in the nonequilibrium dynamics of open quantum many-body systems. \begin{acknowledgments} This work was supported by JSPS KAKENHI Grant Number JP22K13983. \end{acknowledgments} \end{document}
\begin{document} \title[Transmission problem with delay and weight] {A transmission problem for waves under time-varying delay and nonlinear weight} \author[Carlos A. S. Nonato]{Carlos A. S. Nonato} \address{ Federal University of Bahia, Mathematics Department, Salvador, 40170-110, Bahia, Brazil} \email{[email protected]} \author{Carlos A. Raposo } \address{Federal University of S\~ao Jo\~ao del-Rei, Mathematics Department, S\~ao Jo\~ao del-Rei, 36307-352, Minas Gerais, Brazil} \email{[email protected]} \author{Waldemar D. Bastos} \address{S\~ao Paulo State University, Mathematics Department, S\~ao Jos\'e do Rio Preto, 15054-352, S\~ao Paulo, Brazil} \email{[email protected]} \subjclass{Primary 35L20; 35B40; Secondary 93D15} \keywords{Transmission problem, Time-variable delay, Nonlinear weights, Exponential stability} \begin{abstract} This manuscript focuses on the transmission problem for one-dimensional waves with nonlinear weights on the frictional damping and a time-varying delay. We prove global existence of solutions using Kato's variable norm technique, and we show exponential stability by the energy method with the construction of a suitable Lyapunov functional. \end{abstract} \maketitle \section{Introduction} In this paper we investigate global existence and decay properties of solutions of a transmission problem for waves with nonlinear weights and time-varying delay. We consider the following system \begin{equation}\label{EQ1.1} \begin{gathered} u_{tt}(x,t) - au_{xx}(x,t) + \mu_1(t)u_t(x,t) + \mu_2(t)u_t(x,t-\tau(t)) = 0 \,\, \mbox{in } \Omega \times ]0, \infty[,\\ v_{tt}(x,t) - bv_{xx}(x,t) = 0 \,\, \mbox{in } ]L_1, L_2[ \times ]0, \infty[, \end{gathered} \end{equation} where $0 < L_1 < L_2 < L_3$, $\Omega = ]0, L_1[ \cup ]L_2, L_3[$ and $a$, $b$ are positive constants.
\setlength{\unitlength}{1cm} \begin{center} \begin{picture}(12.5,4.5) \put (0,2){\framebox(12,1){}} \put (0,1.5){\line(0,1){1.9}} \put (12,1.48){\line(0,1){1.9}} \multiput (-0.4,1.2)(0,0.2){10}{\line(1,1){0.4}} \multiput (12,1.5)(0,0.2){10}{\line(1,1){0.4}} \multiput (0,2)(0.2,0){21}{\line(0,1){1}} \put(8,2){\line(0,1){1}} \multiput (8,2)(0.2,0){21}{\line(0,1){1}} \put (0.2,3.2){{\small\bf Part with delay }} \put (4.2,3.2){{\small\bf Elastic Part }} \put (8.2,3.2){{\small\bf Part with delay }} \put (2,1){\vector(-1,0){2}} \put (2,1){\vector(1,0){2}} \put (6,1){\vector(-1,0){2}} \put (6,1){\vector(1,0){2}} \put (10,1){\vector(-1,0){2}} \put (10,1){\vector(1,0){2}} \multiput (0,0.8)(4,0){4}{\line(0,1){0.4}} \put (0,0.5){$0$} \put (4,0.5){$L_1$} \put (8,0.5){$L_2$} \put (12,0.5){$L_3$} \put(1.7,1.5){$u(x,t)$} \put(5.7,1.5){$v(x,t)$} \put(9.7,1.5){$u(x,t)$} \end{picture} \end{center} The system \eqref{EQ1.1} is subjected to the transmission conditions \begin{equation} \begin{gathered} u(L_i, t) = v(L_i, t), \quad i= 1,2 \\ au_x(L_i, t) = bv_x(L_i, t), \quad i= 1,2, \end{gathered} \label{EQ1.2} \end{equation} the boundary conditions \begin{equation}\label{EQ1.3} u(0, t) = u(L_3, t) = 0 \end{equation} and initial conditions \begin{equation} \begin{gathered} u(x,0) = u_0(x), \quad u_t(x,0) = u_1(x) \quad \mbox{on } \Omega, \\ u_t(x, t-\tau(0)) = f_0(x, t-\tau(0)) \quad \mbox{in } \Omega \times ]0, \tau(0)[, \\ v(x,0) = v_0(x), \quad v_t(x,0) = v_1(x) \quad \mbox{on } ]L_1, L_2[, \end{gathered} \label{EQ1.4} \end{equation} where the initial datum $\left(u_0, u_1, v_0, v_1, f_0 \right)$ belongs to a suitable Sobolev space. Here $0 < \tau(t)$ is the time-varying delay and $\mu_1(t)$ and $\mu_2(t)$ are nonlinear weights acting on the frictional damping. 
As in \cite{NICAISE_PIGNOTTI_1}, we assume that \begin{equation}\label{hipot_1} \tau(t) \in W^{2,\infty}([0,T]), \quad \forall T>0 \end{equation} and that there exist positive constants $\tau_0$, $\tau_1$ and $d$ satisfying \begin{equation}\label{hipot_2} 0 < \tau_0 \leq \tau(t) \leq \tau_1, \,\,\, \tau'(t) \leq d < 1, \quad \forall t>0. \end{equation} We are interested in proving the exponential stability of the problem \eqref{EQ1.1}-\eqref{EQ1.4}. In order to obtain this, we will assume that \begin{equation}\label{assumption} \max \{1, \frac{a}{b}\} < \frac{L_1 + L_3 - L_2}{2(L_2 - L_1)}. \end{equation} As described in \cite{Benseghir}, assumption \eqref{assumption} gives a relationship between the boundary regions and the transmission permitted. It can also be seen as a restriction on the wave speeds of the two equations and on the damped part of the domain. It is known that for Timoshenko systems \cite{Soufyane} and Bresse systems \cite{Boussouira_Rivera_Dilberto} the wave speeds always control the decay rate of the solution. It is an interesting open question to investigate the behavior of the solution when \eqref{assumption} is not satisfied. Time delay is the property of a physical system by which the response to an applied force is delayed in its effect, and a central issue is that a delay source can destabilize a system that is asymptotically stable in the absence of delays, see \cite{DATKO,Datko2,Guesmia,Xu}. Transmission problems are closely related to the design of material components, and have attracted considerable attention in recent years, e.g., in the analysis of damping mechanisms in the metallurgical industry or smart materials technology, see \cite{Balmes,Rao} and the references therein. Studies of fluid-structure interaction and the added mass effect, also known as the virtual mass effect, hydrodynamic mass, and hydroelastic vibration of structures, started with H.
Lamb \cite{Lamb}, who investigated the vibrations of a thin elastic circular plate in contact with water. An experimental study of impact on composite plates with fluid-structure interaction was reported in \cite{Kwon}. From the mathematical point of view, a transmission problem for wave propagation consists of a hyperbolic equation for which the corresponding elliptic operator has discontinuous coefficients. We consider wave propagation over bodies consisting of two physically different materials, one purely elastic and the other subject to frictional damping. The type of wave propagation generated by mixed materials gives rise to a transmission (or diffraction) problem. To the best of our knowledge, the first contribution in the literature to the transmission problem with a time delay was given by A. Benseghir in \cite{Benseghir}. More precisely, in \cite{Benseghir} the transmission problem \begin{equation}\label{problem_C} \begin{gathered} u_{tt} - au_{xx} + \mu_1 u_t(x,t) + \mu_2 u_t(x,t-\tau) = 0, \,\, \mbox{in } \Omega \times ]0, \infty[, \\ v_{tt} - bv_{xx} = 0, \,\, \mbox{in } ]L_1, L_2[ \times ]0, \infty[. \end{gathered} \end{equation} with constant weights $\mu_1, \mu_2$ and time delay $\tau > 0$ was studied. Under an appropriate assumption on the weights of the two feedbacks ($\mu_2 < \mu_1$), the well-posedness of the system was proved and, under condition \eqref{assumption}, an exponential decay result was established. The results in \cite{Benseghir} were improved by S. Zitouni et al. \cite{Zitouni_Abdelouaheb_Zennir_Rachida}.
There, the authors considered the problem with a time-varying delay $\tau(t)$ of the form \begin{equation}\label{problem_E} \begin{gathered} u_{tt} - au_{xx} + \mu_1 u_t(x,t) + \mu_2 u_t(x,t-\tau(t)) = 0, \,\, \mbox{in } \Omega \times ]0, \infty[, \\ v_{tt} - bv_{xx} = 0, \,\, \mbox{in } ]L_1, L_2[ \times ]0, \infty[ \end{gathered} \end{equation} and proved global existence and exponential stability under suitable assumptions on the delay term and assumption \eqref{assumption}. Without delay, systems \eqref{problem_C}, \eqref{problem_E} have been investigated in \cite{Bastos_Raposo}. The transmission problem with history and delay was considered by G. Li et al. \cite{Li}, where the equations were expressed as \begin{equation}\label{problem_D} \begin{gathered} u_{tt} \! - \! au_{xx} \! + \!\! \int_{0}^{\infty} \!\! g(s)u_{xx}(x, t-s)ds \! + \! \mu_1 u_t(x,t) \! + \! \mu_2 u_t(x,t-\tau) = 0, \, \mbox{in } \Omega \times ]0, \infty[,\\ v_{tt} \! - \! bv_{xx} = 0, \, \mbox{in } ]L_1, L_2[ \times ]0, \infty[. \end{gathered} \end{equation} Under suitable assumptions on the delay term and on the function $g$, the authors proved an exponential stability result for two cases. In the first case, they considered $\mu_2 < \mu_1$, and in the second case, they assumed that $\mu_2 = \mu_1$. S. Zitouni et al. \cite{Zitouni_Ardjouni_Zennir_Amiar_2} extended the result in \cite{Li} to a time-varying delay. In \cite{Zitouni_Ardjouni_Zennir_Amiar_2}, existence and uniqueness of the solution were proved by using semigroup theory, and exponential stability was proved by the energy method, for the following problem \begin{equation}\label{problem_D_1} \begin{gathered} u_{tt} \! - \! au_{xx} \! + \!\! \int_{0}^{\infty} \!\!\! g(s)u_{xx}(x, t \! - \! s)ds \! + \! \mu_1 u_t(x,t) \! + \! \mu_2 u_t(x,t-\tau(t)) \! = \! 0, \, \mbox{in } \Omega \times ]0, \infty[,\\ v_{tt} \! - \! bv_{xx} \! = \! 0, \, \mbox{in } ]L_1, L_2[ \times ]0, \infty[.
\end{gathered} \end{equation} Stability for a localized viscoelastic transmission problem was considered by Mu\~noz Rivera et al. \cite{Rivera_Octavio_Mauricio}, who studied \begin{equation}\label{problem_D2} \begin{gathered} \rho \varphi_{tt} + \sigma_{x} = 0, \\ \sigma(x,t) = \alpha(x)\varphi_{x} - k(x)\varphi_{xt} - \beta(x)\varphi_{xt}. \end{gathered} \end{equation} In \cite{Rivera_Octavio_Mauricio} the authors investigated the effect of the positions of the dissipative mechanisms on a bar with three components $]0,L_0[,\, ]L_0,L_1[, \, ]L_1,L[$, and showed that the system is exponentially stable if and only if the viscous component is not in the center of the bar. Otherwise, they showed the lack of exponential stability, and that the solutions still decay, but only polynomially, to zero. The case of a time-varying delay has already been considered in other works, such as \cite{Orlov_Fridman,Kirane_Said_Anwar,Liu,Zitouni_Ardjouni_Zennir_Amiar}. Wave equations with time-varying delay and nonlinear weights were considered in the recent work of Barros et al. \cite{Vanessa}, where the equation \begin{equation}\label{NLS} u_{tt} - u_{xx} + \mu_1(t)u_t +\mu_2(t)u_t(x,t-\tau(t)) = 0, \,\, \mbox{in } ]0,L[ \times ]0, +\infty[ \end{equation} was studied. Under proper conditions on the nonlinear weights $\mu_1(t), \mu_2(t)$ and on $\tau(t)$, the authors proved global existence and an estimate for the decay rate of the energy. In the present work we improve the results in \cite{Zitouni_Abdelouaheb_Zennir_Rachida} where, for constant weights $\mu_1(t)=\mu_1$, $\mu_2(t)=\mu_2$ and under adequate assumptions regarding the weights and the time-varying delay, the well-posedness and regularity of solutions were proved by using semigroup theory. The authors also showed exponential stability by introducing an appropriate Lyapunov functional. Here we consider a transmission problem with nonlinear weights and time-varying delay, which is the main novelty of this work.
Although there are some works on the laminated beam and on the Timoshenko system with delay, all of them consider constant weights, i.e., $\mu_1$ and $\mu_2$ are constants. To the best of our knowledge, there is no result for these systems with nonlinear weights. Moreover, since the weights are nonlinear, a difficulty arises: the operator is nonautonomous, which makes it hard to use semigroup theory directly to study well-posedness. To overcome this, we use Kato's variable norm technique together with semigroup theory to show that the system is well-posed. The remainder of this paper is organized as follows. In Section \ref{sec:Notation_preliminaries} we introduce some notation and prove the dissipative property for the energy of the system. In Section \ref{sec:global_solution}, by using Kato's variable norm technique and under some restrictions on the nonlinear weights and the time-varying delay, the system is shown to be well-posed. In Section \ref{sec:Exponential_stability}, we present the exponential stability result, obtained by the energy method and by using suitable multiplier estimates to construct an appropriate Lyapunov functional. \section{Notation and preliminaries}\label{sec:Notation_preliminaries} We start by setting the following hypotheses:\\ {\bf (H1)} $\mu_1:\mathbb{R}_+ \rightarrow ]0,+\infty[$ is a non-increasing function of class $C^1(\mathbb{R}_+)$ satisfying \begin{equation}\label{H1} \left| \frac{\mu'_1(t)}{\mu_1(t)}\right| \leq M_1, \quad \forall t \geq 0, \end{equation} where $M_1 > 0$ is a constant. {\bf (H2)} $\mu_2:\mathbb{R}_+ \rightarrow \mathbb{R}$ is a function of class $C^1(\mathbb{R}_+)$, which is not necessarily positive or monotone, such that \begin{gather}\label{H2_1} \left| \mu_2(t) \right| \leq \beta \mu_1(t), \\ \left| \mu'_2(t) \right| \leq M_2 \mu_1(t), \end{gather} for some $0 < \beta < \sqrt{1-d}$ and $M_2>0$.
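As a simple illustration (ours, not needed in the sequel), hypotheses {\bf (H1)} and {\bf (H2)} are satisfied, for instance, by exponentially decaying weights: for constants $\bar{\mu} > 0$, $\alpha > 0$ and $0 < \beta_0 < \sqrt{1-d}$, the choice
\begin{equation*}
\mu_1(t) = \bar{\mu}e^{-\alpha t}, \qquad \mu_2(t) = \beta_0 \mu_1(t),
\end{equation*}
gives $\left| \mu'_1(t)/\mu_1(t) \right| = \alpha$, $\left| \mu_2(t) \right| = \beta_0 \mu_1(t)$ and $\left| \mu'_2(t) \right| = \alpha \beta_0 \mu_1(t)$, so that {\bf (H1)} and {\bf (H2)} hold with $M_1 = \alpha$, $\beta = \beta_0$ and $M_2 = \alpha \beta_0$.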
As in Nicaise and Pignotti \cite{NICAISE_PIGNOTTI_1} we introduce the new variable \begin{equation}\label{EQ2.1} z(x,\rho,t) = u_t(x,t - \tau(t) \rho),\quad (x,\rho) \in \Omega \times ]0,1[, \; t>0. \end{equation} Since $z_t(x,\rho,t) = (1 - \tau'(t)\rho)u_{tt}(x,t-\tau(t)\rho)$ and $z_\rho(x,\rho,t) = -\tau(t)u_{tt}(x,t-\tau(t)\rho)$, the new variable satisfies $$\tau(t)z_t(x,\rho,t) + (1 - \tau'(t)\rho)z_\rho(x,\rho,t) = 0$$ and the problem \eqref{EQ1.1} is equivalent to \begin{equation}\label{EQ2.2} \begin{gathered} u_{tt}(x,t) - au_{xx}(x,t) + \mu_1(t)u_t(x,t) + \mu_2(t)z(x,1,t) = 0 \quad \mbox{in } \Omega \times ]0, \infty[, \\ v_{tt}(x,t) - bv_{xx}(x,t) = 0 \quad \mbox{in } ]L_1, L_2[ \times ]0, \infty[, \\ \tau(t)z_t(x,\rho,t) + (1 - \tau'(t)\rho)z_\rho(x,\rho,t) = 0 \quad \mbox{in } \Omega \times ]0,1[ \times ]0, \infty[. \end{gathered} \end{equation} This system is subject to the transmission conditions \begin{equation} \begin{gathered} u(L_i, t) = v(L_i, t), \quad i= 1,2, \\ au_x(L_i, t) = bv_x(L_i, t), \quad i= 1,2, \end{gathered} \label{EQ2.2.1} \end{equation} the boundary conditions \begin{equation}\label{EQ2.2.2} u(0, t) = u(L_3, t) = 0 \end{equation} and the initial conditions \begin{equation}\label{EQ2.3} \begin{gathered} u(x,0) = u_0(x), \quad u_t(x,0) = u_1(x) \quad \mbox{on } \Omega, \\ v(x,0) = v_0(x), \quad v_t(x,0) = v_1(x) \quad \mbox{on } ]L_1, L_2[, \\ z(x,\rho,0) = u_t(x, -\tau(0)\rho) = f_0(x, -\tau(0)\rho), \quad (x,\rho) \in \Omega \times ]0, 1[. \end{gathered} \end{equation} For any regular solution of \eqref{EQ2.2}, we define the partial energies \begin{gather*} E_1(t) = \frac{1}{2} \int_\Omega \left( |u_t(x,t)|^2 + a|u_x(x,t)|^2 \right)\,dx, \\ E_2(t) = \frac{1}{2} \int_{L_1}^{L_2}\left( |v_t(x,t)|^2 + b|v_x(x,t)|^2 \right)\,dx.
\end{gather*} The total energy is defined by \begin{equation}\label{EQ2.8} E(t) = E_1(t) + E_2(t) + \frac{\xi(t)\tau(t)}{2} \int_\Omega \int_{0}^{1} z^2(x,\rho,t)\,d\rho\,dx, \end{equation} where \begin{equation}\label{hipotese_7} \xi(t)=\bar{\xi}\mu_1(t) \end{equation} is a non-increasing function of class $C^1(\mathbb{R}_+)$ and $\bar{\xi}$ is a positive constant such that \begin{equation}\label{hipotese_8} \frac{\beta}{\sqrt{1-d}} < \bar{\xi} < 2 - \frac{\beta}{\sqrt{1-d}}. \end{equation} Our first result states that the energy is a non-increasing function. \begin{lemma}\label{Lemma_2.2} Let $(u,v,z)$ be a solution to the system \eqref{EQ2.2}-\eqref{EQ2.3}. Then the energy functional defined by \eqref{EQ2.8} satisfies \begin{align}\label{derivate_energy} E'(t) & \leq -\mu_1(t) \left( 1-\frac{\bar{\xi}}{2}- \frac{\beta}{2\sqrt{1-d}} \right) \int_\Omega u_t^2\,dx \\ & \quad - \mu_1(t) \left( \frac{\bar{\xi}(1-\tau'(t))}{2}- \frac{\beta \sqrt{1-d}}{2} \right) \int_\Omega z^2(x,1,t)\,dx \nonumber\\ & \leq 0. \nonumber \end{align} \end{lemma} \begin{proof} Multiplying the first and second equations of \eqref{EQ2.2} by $u_t(x,t)$ and $v_t(x,t)$, integrating on $\Omega$ and $]L_1,L_2[$ respectively and using integration by parts, we get \begin{align}\label{EQ2.12} \frac{1}{2}\frac{d}{dt} \int_\Omega \left( u_t^2 + au_x^2 \right)\,dx = & -\mu_1(t) \int_\Omega u_t^2\,dx - \mu_2(t)\int_\Omega z(x,1,t) u_t\,dx + a \left[ u_x u_t \right]_{\partial \Omega}, \end{align} \begin{equation} \frac{1}{2} \frac{d}{dt} \int_{L_1}^{L_2}\left( v_t^2 + bv_x^2 \right)\,dx = b \left[ v_x v_t \right]_{L_1}^{L_2}. \label{EQ2.13} \end{equation} Now multiplying the third equation of \eqref{EQ2.2} by $\xi(t)z(x,\rho,t)$ and integrating on $\Omega \times ]0,1[$, we obtain \begin{align*} &\tau(t)\xi(t)\int_{\Omega} \int_0^1 z_t(x,\rho,t)z(x,\rho,t)\,d\rho\,dx = -\frac{\xi(t)}{2} \int_{\Omega} \int_0^1 (1- \tau'(t)\rho)\frac{\partial}{\partial \rho}(z(x,\rho,t))^2\,d\rho\,dx.
\end{align*} Consequently, \begin{align}\label{equ1} \frac{d}{dt} \left( \frac{\xi(t)\tau(t)}{2} \int_{\Omega} \int_0^1 z^2(x,\rho,t)\,d\rho\,dx \right) & = \frac{\xi(t)}{2} \int_{\Omega} (z^2(x,0,t)-z^2(x,1,t))\,dx \\ &\quad + \frac{\xi(t)\tau'(t)}{2} \int_{\Omega} z^2(x,1,t)\,dx \nonumber \\ &\quad + \frac{\xi'(t)\tau(t)}{2} \int_{\Omega} \int_0^1 z^2(x,\rho,t)\,d\rho\,dx. \nonumber \end{align} From \eqref{EQ2.8}, \eqref{EQ2.12}, \eqref{EQ2.13}, \eqref{equ1} and using the conditions \eqref{EQ2.2.1} and \eqref{EQ2.2.2}, we obtain \begin{align}\label{equ2} E'(t) &= \frac{\xi(t)}{2} \int_{\Omega} u_t^2\,dx - \frac{\xi(t)}{2} \int_{\Omega} z^2(x,1,t)\,dx + \frac{\xi(t)\tau'(t)}{2} \int_{\Omega} z^2(x,1,t)\,dx \\ & \quad + \frac{\xi'(t)\tau(t)}{2} \int_{\Omega} \int_0^1 z^2(x,\rho,t)\,d\rho\,dx - \mu_1(t)\int_{\Omega} u_t^2\,dx - \mu_2(t)\int_{\Omega} z(x,1,t) u_t\,dx. \nonumber \end{align} Due to Young's inequality, we have \begin{align}\label{equ3} \left| \mu_2(t) \int_{\Omega} z(x,1,t) u_t\,dx \right| \leq & \frac{\left| \mu_2(t) \right|}{2\sqrt{1-d}} \int_{\Omega}u_t^2\,dx + \frac{\left| \mu_2(t) \right| \sqrt{1-d}}{2} \int_{\Omega} z^2(x,1,t)\,dx. \end{align} Inserting \eqref{equ3} into \eqref{equ2}, we obtain \begin{align*} E'(t) &\leq -\left( \mu_1(t) - \frac{\xi(t)}{2} - \frac{\left| \mu_2(t) \right|}{2\sqrt{1-d}} \right) \int_{\Omega} u_t^2\,dx \\ &\;\;\;\; - \left( \frac{\xi(t)}{2} - \frac{\xi(t)\tau'(t)}{2} - \frac{\left| \mu_2(t) \right| \sqrt{1-d}}{2} \right) \int_{\Omega} z^2(x,1,t)\,dx \\ &\;\;\;\; + \frac{\xi'(t)\tau(t)}{2} \int_{\Omega} \int_0^1 z^2(x,\rho,t)\,d\rho\,dx \\ &\leq -\mu_1(t) \left( 1-\frac{\bar{\xi}}{2}- \frac{\beta}{2\sqrt{1-d}} \right) \int_{\Omega} u_t^2\,dx \\ & \;\;\;\; - \mu_1(t) \left( \frac{\bar{\xi}(1-\tau'(t))}{2}- \frac{\beta \sqrt{1-d}}{2} \right) \int_{\Omega} z^2(x,1,t)\,dx \\ & \leq 0. \end{align*} Hence, the proof is complete.
\end{proof} \section{Global solution}\label{sec:global_solution} In this section, our goal is to prove existence and uniqueness of solutions to the system \eqref{EQ2.2}-\eqref{EQ2.3}. This is the content of Theorem \ref{Global_Solution}. We begin by introducing the vector function $U = (u,v,\varphi,\psi,z)^T$, where $\varphi(x,t) = u_t(x,t)$ and $\psi(x,t) = v_t(x,t)$. The system \eqref{EQ2.2}-\eqref{EQ2.3} can be written as \begin{equation}\label{EQ2.17} \left\{\begin{array}{ll} U_{t} - \mathcal{A}(t) U= 0, \\ U(0) = U_0 = (u_0,v_0,u_1,v_1,f_0(\cdot,-\tau(0)\,\cdot\,))^T, \end{array} \right.\quad \end{equation} where the operator $\mathcal{A}(t)$ is defined by \begin{equation}\label{EQ2.18} \mathcal{A}(t)\,U = \left( \begin{array}{c} \varphi(x,t) \\ \psi(x,t) \\ au_{xx}(x,t) - \mu_1(t)\varphi(x,t) - \mu_2(t)z(x,1,t) \\ bv_{xx}(x,t) \\ - \frac{1 - \tau'(t)\rho}{\tau(t)} z_{\rho}(x,\rho,t) \end{array} \right). \end{equation} Now, taking into account the conditions \eqref{EQ1.2}-\eqref{EQ1.3}, as well as previous results presented in \cite{Benseghir,LiuG,Li,Zitouni_Abdelouaheb_Zennir_Rachida}, we introduce the set \begin{align*} X_{*} = & \{ (u,v) \in H^1(\Omega) \times H^1(]L_1,L_2[)/ u(0)=u(L_3)=0, u(L_i)=v(L_i), au_x(L_i) = bv_x(L_i), i= 1,2 \}. \end{align*} We define the phase space as $$\mathcal{H} = X_{*} \times L^2(\Omega) \times L^2(]L_1,L_2[) \times L^2(\Omega \times ]0,1[)$$ equipped with the inner product \begin{align}\label{inner_product} \langle U,\hat{U} \rangle_{\mathcal{H}} = \int_{\Omega} \left( \varphi \hat{\varphi} + au_x \hat{u}_x \right)\,dx + \int_{L_1}^{L_2} \left( \psi \hat{\psi} + bv_x \hat{v}_x \right)\,dx + \xi(t)\tau(t) \int_{\Omega} \int_{0}^{1} z \hat{z}\,d\rho\,dx, \end{align} for $U = (u,v,\varphi,\psi,z)^T$ and $\hat{U} = (\hat{u},\hat{v},\hat{\varphi},\hat{\psi},\hat{z})^T$. Note that this inner product depends on $t$; in what follows we write $\langle \cdot,\cdot \rangle_t$ and $\| \cdot \|_t$ for the inner product \eqref{inner_product} and its induced norm at time $t$.
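Let us also record, for orientation (a remark we add, not needed for the proofs below), that along solutions the norm induced by \eqref{inner_product} is exactly twice the energy \eqref{EQ2.8}: since $\varphi = u_t$ and $\psi = v_t$,
\begin{equation*}
\| U(t) \|_t^2 = \int_{\Omega} \left( u_t^2 + au_x^2 \right)dx + \int_{L_1}^{L_2} \left( v_t^2 + bv_x^2 \right)dx + \xi(t)\tau(t) \int_{\Omega} \int_0^1 z^2(x,\rho,t)\,d\rho\,dx = 2E(t),
\end{equation*}
so the decay of $E(t)$ established in Lemma \ref{Lemma_2.2} translates directly into decay of $\| U(t) \|_t$.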
The domain $D(\mathcal{A}(t))$ of $\mathcal{A}(t)$ is defined by \begin{align}\label{domain} D(\mathcal{A}(t)) = \{ & (u,v,\varphi,\psi,z)^T \in \mathcal{H}/ (u,v) \in \left( H^2(\Omega) \times H^2(]L_1,L_2[) \right) \cap X_{*}, \\ & \varphi \in H^1(\Omega), \psi \in H^1(]L_1,L_2[), z \in L^2 \left( ]0,L[; H_0^1(]0,1[) \right), \varphi=z(\cdot,0) \}. \nonumber \end{align} Notice that the domain of the operator $\mathcal{A}(t)$ does not depend on the time $t$, i.e., \begin{equation}\label{DA(t)=DA(0)} D(\mathcal{A}(t)) = D(\mathcal{A}(0)), \quad \forall t>0. \end{equation} A general theory for nonautonomous operators given by equations of type \eqref{EQ2.17} has been developed using semigroup theory, see \cite{Kato_1}, \cite{Kato_3} and \cite{Pazy}. The simplest way to prove existence and uniqueness results is to show that the triplet $\left\{ (\mathcal{A}, \mathcal{H}, Y) \right\}$, with $\mathcal{A} = \left\{ \mathcal{A}(t)/ t \in [0,T] \right\}$, for some fixed $T>0$ and $Y=D(\mathcal{A}(0))$, forms a CD-system (constant domain system, see \cite{Kato_1} and \cite{Kato_3}). More precisely, the following theorem, due to Tosio Kato, gives the existence and uniqueness result; it is proved in Theorem $1.9$ of \cite{Kato_1} (see also Theorem $2.13$ of \cite{Kato_3} or \cite{Ali}). For convenience, we state Kato's result here.
\begin{theorem}\label{Theorem_preliminar} Assume that \begin{enumerate} \item[(i)] $Y=D(\mathcal{A}(0))$ is a dense subset of $\mathcal{H}$, \item[(ii)] \eqref{DA(t)=DA(0)} holds, \item[(iii)] for all $t \in [0,T]$, $\mathcal{A}(t)$ generates a strongly continuous semigroup on $\mathcal{H}$ and the family $\mathcal{A} = \left\{ \mathcal{A}(t)/ t \in [0,T] \right\}$ is stable with stability constants $C$ and $m$ independent of $t$ (i.e., the semigroup $(S_t(s))_{s\geq 0}$ generated by $\mathcal{A}(t)$ satisfies $\| S_t(s)u \|_{\mathcal{H}} \leq Ce^{ms} \| u \|_{\mathcal{H}}$, for all $u \in \mathcal{H}$ and $s\geq 0$), \item[(iv)] $\partial_t \mathcal{A}(t)$ belongs to $L_{*}^{\infty}([0,T],B(Y, \mathcal{H}))$, which is the space of equivalence classes of essentially bounded, strongly measurable functions from $[0,T]$ into the set $B(Y, \mathcal{H})$ of bounded linear operators from $Y$ into $\mathcal{H}$. \end{enumerate} Then, problem \eqref{EQ2.17} has a unique solution $U \in C([0,T],Y) \cap C^1([0,T], \mathcal{H})$ for any initial datum in $Y$. \end{theorem} Using the time-dependent inner product \eqref{inner_product} and Theorem \ref{Theorem_preliminar}, we obtain the following existence and uniqueness result for global solutions of problem \eqref{EQ2.17}. \begin{theorem}[Global solution]\label{Global_Solution} For any initial datum $U_0 \in \mathcal{H}$, problem \eqref{EQ2.17} has a unique solution $$ U \in C([0,+\infty[, \mathcal{H}). $$ Moreover, if $U_0 \in D(\mathcal{A}(0))$, then $$ U \in C([0,+\infty[, D(\mathcal{A}(0))) \cap C^1([0,+\infty[, \mathcal{H}). $$ \end{theorem} \begin{proof} Our goal is to check the assumptions of Theorem \ref{Theorem_preliminar} for problem \eqref{EQ2.17}. First, we show that $D(\mathcal{A}(0))$ is dense in $\mathcal{H}$. We follow the method used in \cite{Nicaise_Pignotti}, with the necessary modifications imposed by the nature of our problem.
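Recall the standard Hilbert-space criterion behind this step (added here for completeness): a subspace $D \subset \mathcal{H}$ is dense if and only if its orthogonal complement is trivial, that is,
\begin{equation*}
\left( \langle U, \hat{U} \rangle_{\mathcal{H}} = 0, \quad \forall\, U \in D \right) \ \Longrightarrow \ \hat{U} = 0.
\end{equation*}
We apply this with $D = D(\mathcal{A}(0))$, choosing test elements $U$ with all but one component equal to zero.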
Let $\hat{U} = (\hat{u},\hat{v},\hat{\varphi},\hat{\psi},\hat{z})^T \in \mathcal{H}$ be orthogonal to all elements of $D(\mathcal{A}(0))$, namely \begin{align}\label{x_1} 0 = \langle U,\hat{U} \rangle_{\mathcal{H}} = & \int_{\Omega} \left( \varphi \hat{\varphi} + au_x \hat{u}_x \right)\,dx + \int_{L_1}^{L_2} \left( \psi \hat{\psi} + bv_x \hat{v}_x \right)\,dx + \xi(t)\tau(t) \int_{\Omega} \int_{0}^{1} z \hat{z}\,d\rho\,dx, \end{align} for all $U = (u,v,\varphi,\psi,z)^T \in D(\mathcal{A}(0))$. We first take $u=v=\varphi=\psi=0$ and $z \in C_{0}^{\infty}\left(\Omega \times ]0,1[ \right)$. Then $U=(0,0,0,0,z)^T \in D(\mathcal{A}(0))$ and, from \eqref{x_1}, we deduce that $$ \int_{\Omega} \int_0^1 z\hat{z}\,d\rho\,dx = 0. $$ Since $C_{0}^{\infty}\left(\Omega \times ]0,1[ \right)$ is dense in $L^2\left(\Omega \times ]0,1[ \right)$, it follows that $\hat{z}=0$. Similarly, let $\varphi \in C_{0}^{\infty}(\Omega)$; then $U=(0,0,\varphi,0,0)^T \in D(\mathcal{A}(0))$, which implies from \eqref{x_1} that $$ \int_{\Omega} \varphi \hat{\varphi}\,dx = 0. $$ So, as above, it follows that $\hat{\varphi} = 0$. In the same way, by taking $\psi \in C_{0}^{\infty}(]L_1,L_2[)$, we get from \eqref{x_1} $$ \int_{L_1}^{L_2} \psi \hat{\psi}\,dx = 0 $$ and by density of $C_{0}^{\infty}(]L_1,L_2[)$ in $L^2(]L_1,L_2[)$, we obtain $\hat{\psi} = 0$. Finally, for $(u,v) \in C_{0}^{\infty}(\Omega) \times C_{0}^{\infty}(]L_1,L_2[)$ (so that $(u_x,v_x) \in C_{0}^{\infty}(\Omega) \times C_{0}^{\infty}(]L_1,L_2[)$) we obtain $$ a\int_{\Omega} u_x\hat{u}_x\,dx + b\int_{L_1}^{L_2} v_x\hat{v}_x\,dx = 0. $$ Since $C_{0}^{\infty}(\Omega) \times C_{0}^{\infty}(]L_1,L_2[)$ is dense in $L^2(\Omega) \times L^2(]L_1,L_2[)$, we deduce that $\left( \hat{u}_x,\hat{v}_x \right)=(0,0)$; since $\left( \hat{u},\hat{v} \right) \in X_{*}$, this gives $\left( \hat{u},\hat{v} \right)=(0,0)$. We consequently have \begin{equation}\label{x_2} D(\mathcal{A}(0)) \,\, \text{is dense in } \mathcal{H}.
\end{equation} Now, we show that, for fixed $t$, the operator $\mathcal{A}(t)$ generates a $C_0$-semigroup in $\mathcal{H}$. We first calculate $\langle \mathcal{A}(t)U, U \rangle_t$. Take $U=(u,v,\varphi,\psi,z)^T \in D(\mathcal{A}(t))$. Then \begin{align*} \langle \mathcal{A}(t)U, U \rangle_t = & - \mu_1(t) \int_{\Omega} \varphi^2\,dx - \mu_2(t) \int_{\Omega} z(x,1)\varphi\,dx - \frac{\xi(t)}{2} \int_{\Omega} \int_0^1 (1-\tau'(t)\rho)\dfrac{\partial}{\partial \rho} z^2(x,\rho)\,d\rho\,dx. \end{align*} Since $$ \left(1- \tau'(t)\rho \right)\dfrac{\partial}{\partial \rho} z^2(x,\rho) = \dfrac{\partial}{\partial \rho} \left( \left(1- \tau'(t)\rho \right) z^2(x,\rho) \right) + \tau'(t)z^2(x,\rho), $$ we have \begin{align*} \int_0^1 \left(1- \tau'(t)\rho \right)\dfrac{\partial}{\partial \rho} z^2(x,\rho)\,d\rho = & \left( 1- \tau'(t) \right)z^2(x,1) - z^2(x,0) + \tau'(t)\int_0^1 z^2(x,\rho)\,d\rho. \end{align*} Whereupon \begin{align*} \langle \mathcal{A}(t)U, U \rangle_t =& - \mu_1(t) \int_{\Omega} \varphi^2\,dx - \mu_2(t) \int_{\Omega} z(x,1)\varphi\,dx + \frac{\xi(t)}{2} \int_{\Omega} \varphi^2\,dx \\ & - \frac{\xi(t)(1-\tau'(t))}{2} \int_{\Omega} z^2(x,1)\,dx - \frac{\xi(t)\tau'(t)}{2} \int_{\Omega} \int_0^1 z^2(x,\rho)\,d\rho\,dx. \end{align*} Therefore, from \eqref{equ3}, we deduce \begin{align*} \langle \mathcal{A}(t)U, U \rangle_t \leq &- \mu_1(t) \left( 1-\frac{\bar{\xi}}{2}- \frac{\beta}{2\sqrt{1-d}} \right) \int_{\Omega} \varphi^2\,dx \\ &- \mu_1(t) \left( \frac{\bar{\xi}(1-\tau'(t))}{2}- \frac{\beta \sqrt{1-d}}{2} \right)\int_{\Omega} z^2(x,1)\,dx \\ &+ \frac{\xi(t)|\tau'(t)|}{2\tau(t)}\tau(t) \int_{\Omega}\int_0^1 z^2(x,\rho)\,d\rho\,dx.
\end{align*} Then, we have \begin{align*} \langle \mathcal{A}(t)U, U \rangle_t \leq &- \mu_1(t) \left( 1-\frac{\bar{\xi}}{2}- \frac{\beta}{2\sqrt{1-d}} \right) \int_{\Omega} \varphi^2\,dx \\ &- \mu_1(t) \left( \frac{\bar{\xi}(1-\tau'(t))}{2}- \frac{\beta \sqrt{1-d}}{2} \right)\int_{\Omega} z^2(x,1)\,dx \\ &+ \kappa(t) \langle U, U \rangle_t, \end{align*} where $$ \kappa(t) = \frac{\sqrt{1+\tau'(t)^2}}{2\tau(t)}. $$ From \eqref{derivate_energy} we conclude that \begin{equation}\label{dissip} \langle \mathcal{A}(t)U, U \rangle_t - \kappa(t)\langle U,U \rangle_t \leq 0, \end{equation} which means that the operator $\tilde{\mathcal{A}}(t) = \mathcal{A}(t) - \kappa(t) I$ is dissipative. Now, we prove the surjectivity of the operator $\lambda I - \mathcal{A}(t)$ for fixed $t>0$ and $\lambda >0$. For this purpose, let $F=(f_1,f_2,f_3,f_4,f_5)^T \in \mathcal{H}$. We seek $U=(u,v,\varphi,\psi,z)^T \in D(\mathcal{A}(t))$ which is a solution of $$ \left( \lambda I - \mathcal{A}(t) \right)U=F, $$ that is, the entries of $U$ satisfy the system of equations \begin{eqnarray} \lambda u - \varphi = f_1, \label{q_1}\\ \lambda v - \psi = f_2, \label{q_2}\\ \lambda \varphi - au_{xx} + \mu_1(t)\varphi + \mu_2(t) z(x,1) = f_3, \label{q_3}\\ \lambda \psi - bv_{xx} = f_4, \label{q_4}\\ \lambda z + \frac{1 - \tau'(t)\rho}{\tau(t)}z_{\rho} = f_5. \label{q_5} \end{eqnarray} Suppose that we have found $u$ and $v$ with the appropriate regularity. Therefore, from \eqref{q_1} and \eqref{q_2} we have \begin{eqnarray} \varphi = \lambda u - f_1, \label{x_6.1} \\ \psi = \lambda v - f_2. \label{x_6.2} \end{eqnarray} It is clear that $\varphi \in H^1(\Omega)$ and $\psi \in H^1(]L_1,L_2[)$.
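Before solving \eqref{q_5} explicitly, observe (a remark we add for clarity) that, for fixed $x$ and $t$, equation \eqref{q_5} is a linear first-order ODE in the variable $\rho$:
\begin{equation*}
z_{\rho}(x,\rho) + \frac{\lambda \tau(t)}{1 - \tau'(t)\rho}\, z(x,\rho) = \frac{\tau(t)}{1 - \tau'(t)\rho}\, f_5(x,\rho),
\end{equation*}
which can be integrated by the method of integrating factors, with the value at $\rho = 0$ prescribed by the compatibility condition $z(\cdot,0) = \varphi$ from the definition of $D(\mathcal{A}(t))$.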
Furthermore, if $\tau'(t) \neq 0$, following the same approach as in \cite{Nicaise_Pignotti}, we obtain $$ z(x,\rho) = \varphi(x)e^{\sigma(\rho,t)} + \tau(t) e^{\sigma(\rho,t)} \int_{0}^{\rho} \frac{f_5(x,s)}{1-s\tau'(t)} e^{-\sigma(s,t)}\,ds, $$ where $$\sigma(\rho,t) = \lambda \frac{\tau(t)}{\tau'(t)}\ln(1- \rho \tau'(t)),$$ which is a solution of \eqref{q_5} satisfying \begin{equation}\label{t_1} z(x,0)=\varphi(x). \end{equation} Otherwise, $$ z(x,\rho) = \varphi(x)e^{-\lambda \tau(t) \rho} + \tau(t)e^{-\lambda \tau(t) \rho} \int_{0}^{\rho} f_5(x,s)e^{\lambda \tau(t) s}\,ds $$ is a solution of \eqref{q_5} satisfying \eqref{t_1}. From now on, for practical purposes, we will consider $\tau'(t) \neq 0$ (the case $\tau'(t) = 0$ is analogous). Taking into account \eqref{x_6.1} we have \begin{align}\label{x_7} z(x,1) & = \varphi e^{\sigma(1,t)} + \tau(t) e^{\sigma(1,t)} \int_{0}^{1} \frac{f_5(x,s)}{1-s\tau'(t)} e^{-\sigma(s,t)}\,ds \\ & = \left( \lambda u - f_1 \right) e^{\sigma(1,t)} + \tau(t) e^{\sigma(1,t)} \int_{0}^{1} \frac{f_5(x,s)}{1-s\tau'(t)} e^{-\sigma(s,t)}\,ds \nonumber \\ & = \lambda u e^{\sigma(1,t)} - f_1 e^{\sigma(1,t)} + \tau(t) e^{\sigma(1,t)} \int_{0}^{1} \frac{f_5(x,s)}{1-s\tau'(t)} e^{-\sigma(s,t)}\,ds. \nonumber \end{align} Substituting \eqref{x_6.1} and \eqref{x_7} in \eqref{q_3}, and \eqref{x_6.2} in \eqref{q_4}, we obtain \begin{equation}\label{x_11} \left\{ \begin{array}{l} \eta u - au_{xx} = g_1, \\ \lambda^2 v - b v_{xx} = g_2, \end{array}\right. \end{equation} where \begin{equation*} \eta := \lambda^2 + \lambda \mu_1(t) + \lambda \mu_2(t) e^{\sigma(1,t)}, \end{equation*} \begin{align*} g_1 := & f_3 + \lambda f_1 + \mu_1(t) f_1 + \mu_2(t) f_1 e^{\sigma(1,t)} \\ & - \mu_2(t) \tau(t) e^{\sigma(1,t)} \int_{0}^{1} \frac{f_5(x,s)}{1-s\tau'(t)} e^{-\sigma(s,t)}\,ds, \end{align*} \begin{equation*} g_2:= f_4 + \lambda f_2.
\end{equation*} In order to solve \eqref{x_11}, we use a standard procedure, considering the variational problem \begin{equation}\label{variational_problem} \Upsilon( (u,v), (\tilde{u}, \tilde{v}) ) = L(\tilde{u},\tilde{v}), \end{equation} where the bilinear form \begin{eqnarray*} \Upsilon: X_{*} \times X_{*} \rightarrow \mathbb{R} \end{eqnarray*} and the linear form $$ L: X_{*} \rightarrow \mathbb{R} $$ are defined by \begin{align*} \Upsilon( (u,v), (\tilde{u},\tilde{v}) ) = & \eta \int_{\Omega} u \tilde{u}\,dx + a \int_{\Omega} u_x \tilde{u}_x\,dx + \lambda^2 \int_{L_1}^{L_2} v \tilde{v}\,dx + b \int_{L_1}^{L_2} v_x \tilde{v}_x\,dx - a \left[ u_x \tilde{u} \right]_{\partial \Omega} - b \left[ v_x \tilde{v} \right]_{L_1}^{L_2} \end{align*} and \begin{equation*} L(\tilde{u},\tilde{v}) = \int_{\Omega} g_1 \tilde{u}\,dx + \int_{L_1}^{L_2} g_2 \tilde{v}\,dx. \end{equation*} It is easy to verify that $\Upsilon$ is continuous and coercive (note that, for $(u,v), (\tilde{u},\tilde{v}) \in X_{*}$, the boundary terms cancel, since $a\left[ u_x \tilde{u} \right]_{\partial \Omega} + b\left[ v_x \tilde{v} \right]_{L_1}^{L_2} = 0$ by the transmission conditions $au_x(L_i)=bv_x(L_i)$, $\tilde{u}(L_i)=\tilde{v}(L_i)$ and the boundary conditions $\tilde{u}(0)=\tilde{u}(L_3)=0$), and that $L$ is continuous, so by applying the Lax-Milgram Theorem, we obtain a solution $(u,v) \in X_{*}$ of \eqref{x_11}. In addition, it follows from \eqref{q_3} and \eqref{q_4} that $(u,v) \in H^2(\Omega) \times H^2(]L_1,L_2[)$ and so $(u,v,\varphi,\psi,z)^T \in D(\mathcal{A}(t))$. Therefore, the operator $\lambda I - \mathcal{A}(t)$ is surjective for any $\lambda > 0$ and $t>0$. Since $\kappa(t)>0$, this proves that \begin{equation}\label{lambdaI-tildeA_surjective} \lambda I - \tilde{\mathcal{A}}(t) = \left( \lambda + \kappa(t) \right)I - \mathcal{A}(t) \ \text{is surjective} \end{equation} for any $\lambda >0$ and $t>0$. To complete the proof of (iii), it suffices to show that \begin{equation}\label{norma} \dfrac{\| \Phi \|_t}{\| \Phi \|_s} \leq e^{\frac{c}{2\tau_0}|t-s|}, \quad t,s \in [0,T], \end{equation} where $\Phi = (u,v,\varphi,\psi,z)^T$, $c$ is a positive constant and $\| \cdot \|_t$ is the norm associated with the inner product \eqref{inner_product}.
For all $t,s \in [0,T]$, we have \begin{align*} \| \Phi \|_t^2 - \| \Phi \|_s^2 e^{\frac{c}{\tau_0}|t-s|} = & \left(1 - e^{\frac{c}{\tau_0}|t-s|} \right) \left[ \int_{\Omega} \left( \varphi^2 + au_x^2 \right)dx + \int_{L_1}^{L_2} \left( \psi^2 + bv_x^2 \right)dx \right] \\ &+ \left( \xi(t)\tau(t) - \xi(s)\tau(s)e^{\frac{c}{\tau_0}|t-s|} \right) \int_{\Omega} \int_0^1 z^2(x,\rho)\,d\rho\,dx. \end{align*} It is clear that $1 - e^{\frac{c}{\tau_0}|t-s|} \leq 0$. Now we will prove that $\xi(t)\tau(t) - \xi(s)\tau(s)e^{\frac{c}{\tau_0}|t-s|} \leq 0$ for some $c>0$. To do this, first observe that, by the mean value theorem, $$ \tau(t) = \tau(s) + \tau'(r)(t-s), $$ for some $r \in (s,t)$. Since $\xi$ is a non-increasing function and $\xi>0$, we get $$ \xi(t)\tau(t) \leq \xi(s)\tau(s) + \xi(s)\tau'(r)(t-s), $$ which implies $$ \dfrac{\xi(t)\tau(t)}{\xi(s)\tau(s)} \leq 1 + \dfrac{|\tau'(r)|}{\tau(s)}|t-s|. $$ Using \eqref{hipot_1} and that $\tau'$ is bounded, we deduce $$ \frac{\xi(t)\tau(t)}{\xi(s)\tau(s)} \leq 1 + \frac{c}{\tau_0}|t-s| \leq e^{\frac{c}{\tau_0}|t-s|}, $$ which proves \eqref{norma}, and therefore $(iii)$ follows. Moreover, as $\kappa'(t) = \frac{\tau'(t)\tau''(t)}{2\tau(t)\sqrt{1+\tau'(t)^2}} - \frac{\tau'(t)\sqrt{1+\tau'(t)^2}}{2\tau(t)^2}$ is bounded on $[0,T]$ for all $T>0$ (by \eqref{hipot_1} and \eqref{hipotese_8}) we have $$ \frac{d}{dt}\mathcal{A}(t)U = \left( \begin{array}{c} 0 \\ 0 \\ -\mu_1'(t) \varphi - \mu_2'(t)z(\cdot,1) \\ 0 \\ \frac{\tau''(t)\tau(t)\rho-\tau'(t)(\tau'(t)\rho-1)}{\tau(t)^2}z_{\rho} \end{array} \right), $$ with $\frac{\tau''(t)\tau(t)\rho-\tau'(t)(\tau'(t)\rho-1)}{\tau(t)^2}$ bounded on $[0,T]$ by \eqref{hipot_1} and \eqref{hipotese_8}.
Thus \begin{equation}\label{derivate_tilde_A} \frac{d}{dt}\tilde{\mathcal{A}}(t) \in L_{*}^{\infty}([0,T], B(D(\mathcal{A}(0)), \mathcal{H})), \end{equation} where $L_{*}^{\infty}([0,T], B(D(\mathcal{A}(0)), \mathcal{H}))$ is the space of equivalence classes of essentially bounded, strongly measurable functions from $[0,T]$ into $B(D(\mathcal{A}(0)), \mathcal{H})$. Here $B(D(\mathcal{A}(0)), \mathcal{H})$ is the set of bounded linear operators from $D(\mathcal{A}(0))$ into $\mathcal{H}$. Then, \eqref{dissip}, \eqref{lambdaI-tildeA_surjective} and \eqref{norma} imply that the family $\tilde{\mathcal{A}} = \left\{ \tilde{\mathcal{A}}(t): t \in [0,T] \right\}$ is a stable family of generators in $\mathcal{H}$ with stability constants independent of $t$, by Proposition $1.1$ from \cite{Kato_1}. Therefore, the assumptions $(i)-(iv)$ of Theorem \ref{Theorem_preliminar} are verified by \eqref{DA(t)=DA(0)}, \eqref{x_2}, \eqref{dissip}, \eqref{lambdaI-tildeA_surjective}, \eqref{norma} and \eqref{derivate_tilde_A}. Thus, the problem \begin{equation} \left\{\begin{array}{ll} \tilde{U}_{t} = \tilde{\mathcal{A}}(t) \tilde{U},\\ \tilde{U}(0) = U_0 \end{array} \right.\quad \end{equation} has a unique solution $\tilde{U} \in C\left( [0,+\infty[, D(\mathcal{A}(0)) \right) \cap C^1\left( [0,+\infty[, \mathcal{H} \right)$ for $U_0 \in D(\mathcal{A}(0))$. The desired solution of \eqref{EQ2.17} is then given by $$ U(t) = e^{\int_0^t \kappa(s)\,ds}\tilde{U}(t) $$ because \begin{align*} U_t(t) &= \kappa(t)e^{\int_0^t \kappa(s)\,ds}\tilde{U}(t) + e^{\int_0^t \kappa(s)\,ds}\tilde{U}_t(t) \\ &= e^{\int_0^t \kappa(s)\,ds} \left( \kappa(t) + \tilde{\mathcal{A}}(t) \right) \tilde{U}(t) \\ &= \mathcal{A}(t)e^{\int_0^t \kappa(s)\,ds}\tilde{U}(t) \\ &=\mathcal{A}(t) U(t), \end{align*} which concludes the proof. \end{proof} \section{Exponential stability}\label{sec:Exponential_stability} This section is dedicated to the study of the asymptotic behavior of solutions.
The main goal of this section is to study the stability of solutions to the system \eqref{EQ2.2}-\eqref{EQ2.3}. This is the content of Theorem \ref{Principal}, where we show that the solution of problem \eqref{EQ2.2}-\eqref{EQ2.3} is exponentially stable. Our effort consists in building a suitable Lyapunov functional by the energy method. For this goal we present several technical lemmas. \begin{lemma} Let $(u,v,z)$ be a solution of \eqref{EQ2.2}-\eqref{EQ2.3}. Then, for any $\varepsilon_1 > 0$, with $c_1$ the Poincar\'e constant, we have the estimate \begin{align}\label{y_2} \frac{d}{dt}\mathcal{I}_1(t) \leq & - \left( a - \mu_1^2(0) c_1^2 \varepsilon_1 \right) \int_{\Omega} u_x^2\,dx - b \int_{L_1}^{L_2} v_x^2\,dx \\ & + \left( 1 + \frac{1}{2\varepsilon_1} \right) \int_{\Omega} u_t^2\,dx + \int_{L_1}^{L_2} v_t^2\,dx + \frac{\beta^2}{2 \varepsilon_1} \int_{\Omega} z^2(x,1,t)\,dx, \nonumber \end{align} where \begin{equation}\label{y_1} \mathcal{I}_1(t) = \int_{\Omega} uu_t\,dx + \int_{L_1}^{L_2} vv_t\,dx. \end{equation} \end{lemma} \begin{proof} Differentiating $\mathcal{I}_1(t)$ and using \eqref{EQ2.2}, we obtain \begin{align*} \frac{d}{dt}\mathcal{I}_1(t) & = \int_{\Omega} u_t^2\,dx - a \int_{\Omega} u_x^2\,dx - \mu_1(t) \int_{\Omega} u u_t\,dx - \mu_2(t) \int_{\Omega} u z(x,1,t)\,dx + \int_{L_1}^{L_2} v_t^2\,dx - b \int_{L_1}^{L_2} v_x^2\,dx \nonumber \\ &\leq \int_{\Omega} u_t^2\,dx - a \int_{\Omega} u_x^2\,dx + \left| \mu_1(t) \int_{\Omega} u u_t\,dx \right| + \left| \mu_2(t) \int_{\Omega} u z(x,1,t)\,dx \right| + \int_{L_1}^{L_2} v_t^2\,dx - b \int_{L_1}^{L_2} v_x^2\,dx. \end{align*} From hypotheses (H1) and (H2), we have \begin{align}\label{s_1} \frac{d}{dt}\mathcal{I}_1(t) \leq & \int_{\Omega} u_t^2\,dx - a \int_{\Omega} u_x^2\,dx + \mu_1(0) \left| \int_{\Omega} u u_t\,dx \right| \\ & + \beta \mu_1(0) \left| \int_{\Omega} u z(x,1,t)\,dx \right| + \int_{L_1}^{L_2} v_t^2\,dx - b \int_{L_1}^{L_2} v_x^2\,dx.
\nonumber \end{align} By using the conditions \eqref{EQ2.2.1} and \eqref{EQ2.2.2}, we obtain \begin{gather*} u^2(x,t) = \left( \int_0^x u_x(s,t)\,ds \right)^2 \leq L_1 \int_{0}^{L_1} u_x^2(x,t)\,dx, \quad x \in [0,L_1], \\ u^2(x,t) \leq (L_3 - L_2) \int_{L_2}^{L_3} u_x^2(x,t)\,dx, \quad x \in [L_2,L_3], \end{gather*} which imply the Poincar\'e inequality \begin{equation}\label{Poinc_1} \int_{\Omega} u^2(x,t)\,dx \leq c_1^2 \int_{\Omega} u_x^2\,dx, \end{equation} where $c_1 = \max \{ L_1, L_3 - L_2 \}$ is the Poincar\'e constant. Using Young's inequality and \eqref{Poinc_1}, we have \begin{equation}\label{s_2} \mu_1(0) \left| \int_{\Omega} u u_t\,dx \right| \leq \frac{\varepsilon_1 \mu_1^2(0) c_1^2}{2} \int_{\Omega} u_x^2\,dx + \frac{1}{2\varepsilon_1} \int_{\Omega} u_t^2\,dx \end{equation} and \begin{equation}\label{s_3} \beta \mu_1(0) \left| \int_{\Omega} u z(x,1,t)\,dx \right| \leq \frac{\varepsilon_1 \mu_1^2(0) c_1^2}{2} \int_{\Omega} u_x^2\,dx + \frac{\beta^2}{2\varepsilon_1} \int_{\Omega} z^2(x,1,t)\,dx. \end{equation} Substituting \eqref{s_2} and \eqref{s_3} in \eqref{s_1} we conclude the lemma. \end{proof} Now, inspired by \cite{Marzocchi_Naso_Rivera}, we introduce the function \begin{equation} q(x) = \left\{ \begin{array}{lc} x - \dfrac{L_1}{2}, & x \in [0, L_1], \\ \\ \dfrac{L_2 -L_3 -L_1}{2(L_2 - L_1)}(x - L_1) + \dfrac{L_1}{2}, & x \in [L_1, L_2], \\ \\ x - \dfrac{L_2 + L_3}{2}, & x \in [L_2, L_3]. \end{array} \right. \end{equation} It is easy to see that $q(x)$ is continuous, piecewise linear and bounded, i.e., $\left| q(x) \right| \leq M$, where $$ M = \max \left\{ \frac{L_1}{2}, \frac{L_3 - L_2}{2} \right\}. $$ We have the following result.
\begin{lemma} Let $(u,v,z)$ be a solution of \eqref{EQ2.2}-\eqref{EQ2.3}. Then, for any $\varepsilon_2 > 0$, the following estimates hold: \begin{align}\label{y_4} \frac{d}{dt}\mathcal{I}_2(t) \leq & \left( \frac{1}{2} + \frac{1}{2\varepsilon_2} \right) \int_{\Omega} u_t^2\,dx + \left( \frac{a}{2} + M^2 \mu_1^2(0) \varepsilon_2 \right) \int_{\Omega} u_x^2\,dx + \frac{\beta^2}{2 \varepsilon_2} \int_{\Omega} z^2(x,1,t)\,dx \\ & - \frac{1}{4} \left[ L_1 u_t^2(L_1,t) + (L_3-L_2)u_t^2(L_2,t) \right] - \frac{a}{4} \left[ L_1 u_x^2(L_1,t) + (L_3-L_2)u_x^2(L_2,t) \right], \nonumber \end{align} and \begin{align}\label{y_5} \frac{d}{dt}\mathcal{I}_3(t) = & \frac{L_2 - L_3 - L_1}{4(L_2 - L_1)} \left( \int_{L_1}^{L_2} v_t^2\,dx + b \int_{L_1}^{L_2} v_x^2\,dx \right) + \frac{1}{4} \left[ L_1 v_t^2(L_1,t) + (L_3-L_2)v_t^2(L_2,t) \right] \\ & + \frac{b}{4} \left[ L_1 v_x^2(L_1,t) + (L_3-L_2)v_x^2(L_2,t) \right], \nonumber \end{align} where \begin{equation}\label{y_3} \mathcal{I}_2(t) = - \int_{\Omega} q(x)u_x u_t\,dx \quad \mbox{and} \quad \mathcal{I}_3(t) = - \int_{L_1}^{L_2} q(x)v_x v_t\,dx. \end{equation} \end{lemma} \begin{proof} Differentiating $\mathcal{I}_2(t)$ and using \eqref{EQ2.2}, we obtain \begin{align*} \frac{d}{dt}\mathcal{I}_2(t) = & -\int_{\Omega} q(x)u_{xt} u_t\,dx - a \int_{\Omega} q(x)u_{xx} u_x\,dx \\ & + \mu_1(t) \int_{\Omega} q(x)u_{x} u_t\,dx + \mu_2(t) \int_{\Omega} q(x)u_{x} z(x,1,t)\,dx.
\end{align*} Integrating by parts and using hypotheses (H1) and (H2), we have \begin{align}\label{y_6} \frac{d}{dt}\mathcal{I}_2(t) & \leq \frac{1}{2} \int_{\Omega} q'(x)u_t^2\,dx - \frac{1}{2} \left[ q(x) u_t^2 \right]_{\partial \Omega} + \frac{a}{2} \int_{\Omega} q'(x)u_x^2\,dx - \frac{a}{2} \left[ q(x) u_x^2 \right]_{\partial \Omega} \\ &\quad + \mu_1(0) \left| \int_{\Omega} q(x)u_{x} u_t\,dx \right| + \beta \mu_1(0) \left| \int_{\Omega} q(x)u_{x} z(x,1,t)\,dx \right| \nonumber \\ & \leq \frac{1}{2} \int_{\Omega} u_t^2\,dx - \frac{1}{2} \left[ q(x) u_t^2 \right]_{\partial \Omega} + \frac{a}{2} \int_{\Omega} u_x^2\,dx - \frac{a}{2} \left[ q(x) u_x^2 \right]_{\partial \Omega} \nonumber \\ &\quad + \mu_1(0)M \left| \int_{\Omega} u_{x} u_t\,dx \right| + \beta \mu_1(0) M \left| \int_{\Omega} u_{x} z(x,1,t)\,dx \right|. \nonumber \end{align} On the other hand, by using the boundary conditions \eqref{EQ2.2.2}, we have \begin{gather*} \frac{1}{2} \left[ q(x) u_t^2 \right]_{\partial \Omega} = \frac{1}{4} \left[ L_1 u_t^2(L_1,t) + (L_3-L_2) u_t^2(L_2,t) \right], \\ -\frac{a}{2} \left[ q(x) u_x^2 \right]_{\partial \Omega} \leq -\frac{a}{4} \left[ L_1 u_x^2(L_1,t) + (L_3-L_2) u_x^2(L_2,t) \right]. \end{gather*} Inserting the above two relations into \eqref{y_6} and using Young's inequality, we conclude that \eqref{y_6} gives \eqref{y_4}. By the same argument, taking the derivative of $\mathcal{I}_3(t)$, we obtain \begin{align*} \frac{d}{dt}\mathcal{I}_3(t) & = \frac{1}{2} \int_{L_1}^{L_2} q'(x)v_t^2\,dx - \frac{1}{2} \left[ q(x) v_t^2 \right]_{L_1}^{L_2} + \frac{b}{2} \int_{L_1}^{L_2} q'(x)v_x^2\,dx - \frac{b}{2} \left[ q(x) v_x^2 \right]_{L_1}^{L_2} \\ & = \frac{L_2 - L_3 - L_1}{4(L_2 - L_1)} \left( \int_{L_1}^{L_2} v_t^2\,dx + b \int_{L_1}^{L_2} v_x^2\,dx \right) + \frac{1}{4} \left[ L_1 v_t^2(L_1,t) + (L_3-L_2)v_t^2(L_2,t) \right] \nonumber \\ &\quad + \frac{b}{4} \left[ L_1 v_x^2(L_1,t) + (L_3-L_2)v_x^2(L_2,t) \right]. \end{align*} Hence, the proof is complete.
\end{proof} As in \cite{Kirane_Said_Anwar}, taking into account the last lemma, we introduce the functional \begin{equation}\label{y_7} \mathcal{J}(t) = \bar{\xi} \tau(t) \int_{\Omega} \int_0^1 e^{-2\tau(t) \rho} z^2(x,\rho,t)\,d\rho\,dx. \end{equation} For this functional we have the following estimate. \begin{lemma}[{\cite[Lemma~3.7]{Kirane_Said_Anwar}}] Let $(u,v,z)$ be a solution of \eqref{EQ2.2}-\eqref{EQ2.3}. Then the functional $\mathcal{J}(t)$ satisfies \begin{equation}\label{y_8} \frac{d}{dt}\mathcal{J}(t) \leq -2 \mathcal{J}(t) + \bar{\xi} \int_{\Omega} u_t^2\,dx. \end{equation} \end{lemma} We are now in a position to prove our stability result. \begin{theorem}\label{Principal} Let $U(t) = (u(t),v(t),\varphi(t),\psi(t),z(t))$ be the solution of \eqref{EQ2.2}-\eqref{EQ2.3} with initial data $U_0 \in D\left( \mathcal{A}(0) \right)$ and $E(t)$ the energy of $U$. Assume that the hypotheses \eqref{hipot_1}, \eqref{hipot_2}, (H1), (H2) and \begin{equation}\label{hipot_3} \max \left\{ 1, \frac{a}{b} \right\} < \frac{L_1 + L_3 - L_2}{2(L_2 - L_1)} \end{equation} hold. Then there exist positive constants $c$ and $\alpha$ such that \begin{equation}\label{theorem_principal} E(t) \leq cE(0)e^{-\alpha t}, \quad \forall t \geq 0. \end{equation} \end{theorem} \begin{proof} Let us define the Lyapunov functional \begin{equation}\label{y_9} \mathcal{L}(t) = N E(t) + \sum_{i=1}^{3}N_i \mathcal{I}_i(t) + \mathcal{J}(t), \end{equation} where $N$ and $N_i$, $i=1,2,3$, are positive real numbers which will be chosen later. By Lemma \ref{Lemma_2.2}, there exists a positive constant $K$ such that \begin{equation}\label{y_10} \frac{d}{dt}E(t) \leq -K \left[ \int_{\Omega} u_t^2\,dx + \int_{\Omega} z^2(x,1,t)\,dx \right]. \end{equation} It follows from the transmission conditions \eqref{EQ2.2.1} that \begin{equation}\label{y_11} a^2 u_x^2(L_i,t) = b^2 v_x^2(L_i,t), \quad i=1,2.
\end{equation} Using the estimates \eqref{y_2}, \eqref{y_4}, \eqref{y_5}, \eqref{y_8}, \eqref{y_10} and the equation \eqref{y_11}, we obtain \begin{align}\label{y_12} \frac{d}{dt}\mathcal{L}(t) \leq & - \left[ KN - \left( 1 + \frac{1}{2\varepsilon_1} \right)N_1 - \left( \frac{1}{2} + \frac{1}{2\varepsilon_2} \right)N_2 - \bar{\xi} \right] \int_{\Omega} u_t^2\,dx \\ & - \left( KN - \frac{\beta^2}{2\varepsilon_1}N_1 - \frac{\beta^2}{2\varepsilon_2}N_2 \right) \int_{\Omega} z^2(x,1,t)\,dx \nonumber \\ & - \left[ \left( a - \mu_1^2(0)c_1^2 \varepsilon_1 \right)N_1 - \left( \frac{a}{2} + M^2 \mu_1^2(0) \varepsilon_2 \right)N_2 \right] \int_{\Omega} u_x^2\,dx \nonumber \\ & + \left[ N_1 + \frac{L_2-L_3-L_1}{4(L_2-L_1)} N_3 \right] \int_{L_1}^{L_2} v_t^2\,dx \nonumber \\ & - \left[ N_1 - \frac{L_2-L_3-L_1}{4(L_2-L_1)} N_3 \right] b\int_{L_1}^{L_2} v_x^2\,dx \nonumber \\ & - \left(N_2 - N_3 \right) \left[ \frac{L_1}{4}u_t^2(L_1,t) + \frac{L_3-L_2}{4}u_t^2(L_2,t) \right] \nonumber \\ & - \left(N_2 - \frac{a}{b} N_3 \right) \frac{a}{4} \left[ L_1 u_x^2(L_1,t) + (L_3-L_2) u_x^2(L_2,t) \right] -2 \mathcal{J}(t). \nonumber \end{align} Now we observe that, under assumption \eqref{hipot_3}, we can always find positive constants $N_1$, $N_2$ and $N_3$ in such a way that \begin{equation*} N_1 + \frac{L_2-L_3-L_1}{4(L_2-L_1)} N_3 < 0, \quad N_2 > \max \left\{ 1, \frac{a}{b} \right\} N_3, \quad N_1 > \frac{N_2}{2}. \end{equation*} After that, we pick positive constants $\varepsilon_1$ and $\varepsilon_2$ small enough that \begin{equation*} \mu_1^2(0)c_1^2 \varepsilon_1 N_1 + M^2 \mu_1^2(0) \varepsilon_2 N_2 < a \left( N_1 - \frac{N_2}{2} \right).
\end{equation*} Finally, since $\xi(t)\tau(t)$ is non-negative and bounded, we can choose $N$ large enough that \eqref{y_12} reduces to the estimate \begin{align*} \frac{d}{dt} \mathcal{L}(t) & \leq - \eta_1 \int_{\Omega} \left( u_t^2 + u_x^2 \right)\,dx - \eta_1 \int_{L_1}^{L_2} \left( v_t^2 + v_x^2 \right)\,dx - \eta_1 \int_{\Omega} \int_0^1 z^2(x,\rho,t)\,d\rho\,dx - \eta_1 \int_{\Omega} z^2(x,1,t)\,dx \\ &\leq - \eta_1 \int_{\Omega} \left( u_t^2 + u_x^2 \right)\,dx - \eta_1 \int_{L_1}^{L_2} \left( v_t^2 + v_x^2 \right)\,dx - \eta_1 \int_{\Omega} \int_0^1 z^2(x,\rho,t)\,d\rho\,dx, \end{align*} for a certain positive constant $\eta_1$. This implies by \eqref{EQ2.8} that there exists $\eta_2 > 0$ such that \begin{equation}\label{y_13} \frac{d}{dt} \mathcal{L}(t) \leq - \eta_2 E(t), \quad \forall t \geq 0. \end{equation} On the other hand, it is not hard to see, for $N$ large enough, that $\mathcal{L}(t)\sim E(t)$; that is, there exist two positive constants $\gamma_1$ and $\gamma_2$ such that \begin{equation}\label{y_14} \gamma_1 E(t) \leq \mathcal{L}(t) \leq \gamma_2 E(t), \quad \forall t \geq 0. \end{equation} Combining \eqref{y_13} and \eqref{y_14}, we obtain \begin{equation*} \frac{d}{dt} \mathcal{L}(t) \leq -\alpha \mathcal{L}(t), \quad \forall t \geq 0, \end{equation*} with $\alpha = \eta_2/\gamma_2$, which leads to \begin{equation}\label{y_17} \mathcal{L}(t) \leq \mathcal{L}(0)e^{-\alpha t}, \quad \forall t \geq 0. \end{equation} The desired result \eqref{theorem_principal} follows by using estimates \eqref{y_14} and \eqref{y_17}. The proof of Theorem \ref{Principal} is then complete. \end{proof} \subsection*{Acknowledgment} The authors thank CAPES (Brazil). \end{document}
\begin{document} \begin{center} {\bf On the Existence of Finite Type Link Homotopy Invariants}\\ {\footnotesize BLAKE MELLOR}\\ {\footnotesize Honors College}\\ {\footnotesize Florida Atlantic University}\\ {\footnotesize 5353 Parkside Drive}\\ {\footnotesize Jupiter, FL 33458}\\ {\footnotesize\it [email protected]}\\ {\footnotesize DYLAN THURSTON}\\ {\footnotesize Department of Mathematics}\\ {\footnotesize Harvard University}\\ {\footnotesize Cambridge, MA 02138}\\ {\footnotesize\it [email protected]}\\ {\footnotesize ABSTRACT}\\ {\ }\\ \parbox{4.5in}{\footnotesize \ \ \ \ \ We show that for links with at most 5 components, the only finite type homotopy invariants are products of the linking numbers. In contrast, we show that for links with at least 9 components, there must exist finite type homotopy invariants which are {\it not} products of the linking numbers. This corrects the errors of the first author in \cite{me1, me2}. \noindent {\it Keywords:} Finite type invariants; link homotopy.}\\ \end{center} \tableofcontents \section{Introduction} \label{S:intro} In \cite{me1, me2} the first author claimed, erroneously, that there are no finite type link homotopy or concordance invariants other than the pairwise linking numbers (and their products). However, the proofs of this result in both of these papers contained a serious algebraic error. The purpose of this paper is to show the opposite: in fact, there {\it do} exist finite type link homotopy (and, hence, concordance) invariants other than the linking numbers. However, the proof is not constructive; it is still an open problem to actually construct such an invariant (see Section~\ref{S:questions}). There have been many excellent introductions to the theory of finite type invariants, such as \cite{bl, bn1, cd}; we will not try to replicate them here. We will provide a few basic definitions in order to clarify our notation and terminology.
It should be mentioned that our approach and proofs are combinatorial in nature. \subsection{Singular Links} \label{SS:singular} Recall that, in the most general sense, a {\it link invariant} is a map from the set of equivalence classes of links under isotopy to another set $G$. We will need to have some additional structure on $G$. For our purposes, $G$ will be the field of complex numbers $\mathbb{C}$. In this theory, it is also convenient to look at invariants of {\it regular} isotopy (i.e. links with framing), rather than just isotopy. So we will not allow the first Reidemeister move. We first note that we can extend any link invariant to an invariant of {\it singular} links, where a singular link is an immersion of several copies of $S^1$ into 3-space which is an embedding except for a finite number of isolated double points. Given a link invariant $v$, we extend it via the relation: $$\includegraphics{extend.eps}$$ An invariant $v$ of singular links is then said to be of {\it finite type} if there is an integer $d$ such that $v$ is zero on any link with more than $d$ double points. Such a $v$ is then said to be of {\it type} $d$. We denote by $V_d$ the vector space over $\mathbb{C}$ generated by $\mathbb{C}$-valued finite type invariants of type $d$. We can completely understand the space of $\mathbb{C}$-valued finite type invariants by understanding all of the quotient spaces $V_d/V_{d-1}$. \subsection{Link homotopy and link concordance} The idea of {\it link homotopy} (or just {\it homotopy}) was introduced by Milnor~\cite{mi}. Two links are homotopic if one can be transformed into the other through a sequence of ambient isotopies of $S^3$ and crossing changes of a component with itself (but {\it not} crossing changes of different components). Habegger and Lin~\cite{hl} succeeded in classifying links up to homotopy.
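For later reference, the extension of an invariant $v$ to singular links, given in Section~\ref{SS:singular} only by a figure, is the usual Vassiliev skein relation; with the standard sign convention (this is our reading, since the figure is not reproduced here), it reads:

```latex
% Extension of a link invariant v to singular links:
% the value at a double point is the difference of the two resolutions.
v\bigl(L_\times\bigr) \;=\; v\bigl(L_+\bigr) \;-\; v\bigl(L_-\bigr)
```

Here $L_\times$ is a singular link with a distinguished double point, and $L_+$, $L_-$ are obtained by resolving that double point into a positive and a negative crossing, respectively. Iterating the relation writes the value of $v$ on a link with $d$ double points as an alternating sum of its values on $2^d$ honest links.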
We construct a theory of finite type invariants in exactly the same way as before; the difference is that the invariants are trivial when evaluated on a link with a singularity in which a component intersects itself. In this case, the two ``resolutions'' of the singular point are homotopically equivalent, so the value of a homotopy invariant on their difference is zero. We will denote the vector space of type $d$ link homotopy invariants by $V_d^h$. \begin{defn} \label{D:concordance} Consider two $k$-component links $L_0$ and $L_1$. These can be thought of as embeddings: $$L_i: \bigsqcup_{j=1}^k S^1 \hookrightarrow {\bf R}^3$$ A {\bf (link) concordance} between $L_0$ and $L_1$ is an embedding: $$H: \left({\bigsqcup_{j=1}^k S^1}\right) \times I \hookrightarrow {\bf R}^3 \times I$$ such that $H(x,0) = (L_0(x),0)$ and $H(x,1) = (L_1(x),1)$. A concordance is an isotopy if and only if $H$ is level preserving; i.e. if the image of $H_t$ is a link at level $t$ for each $t \in I$. \end{defn} We will denote the vector space of type $d$ link concordance invariants by $V_d^c$. \subsection{Unitrivalent diagrams} \label{SS:unitrivalent} It is a marvelous fact that the vector spaces $V_d^*/V_{d-1}^*$ can be given relatively simple combinatorial descriptions in terms of {\it unitrivalent diagrams}. These are spaces of unitrivalent graphs (with colored endpoints and oriented vertices) with various relations imposed upon them. That these descriptions are isomorphic to the original vector spaces is largely due to Kontsevich and his integral (see \cite{cd} for an excellent exposition of the Kontsevich integral). The description of the space for link homotopy was developed by Bar-Natan and others \cite{bn1, bgrt}. The modification for concordance was found by Habegger and Masbaum \cite{hm}. For a more detailed development, see \cite{me1, me2}.
\begin{defn} \label{D:homotopy} $B^h$ is defined as the vector space of (disjoint unions of) unitrivalent diagrams modulo the following relations: \begin{itemize} \item The antisymmetry (AS) relation (see Figure~\ref{F:ihx}). \item The IHX relation (see Figure~\ref{F:ihx}). \item The link relation (see Figure~\ref{F:link_rel}), where the sum is over {\it all} univalent vertices of the diagram with the same color. Another example of diagrams appearing in a link relation can be found in Figure~\ref{F:arise}. \item Any diagram with a loop is trivial. \item Any diagram with a connected component which has two univalent vertices of the same color is trivial. \end{itemize} The {\bf degree} $d$ of a diagram in $B^h$ is defined to be one half of the number of vertices of the diagram. Let $B_d^h$ be the vector space of unitrivalent diagrams of degree $d$ (notice that all of the relations involve diagrams of the same degree, so they apply equally well to $B_d^h$). So $B^h$ is just the graded vector space $\bigoplus_{d=1}^{\infty} B_d^h$. We define $B^c$ to be the space of unitrivalent diagrams modulo only the first four relations (so components can have multiple endpoints with the same color), and $B^c = \bigoplus_{d=1}^{\infty} B_d^c$ in the same way. \end{defn} \begin{figure} \caption{AS and IHX relations} \label{F:ihx} \end{figure} \begin{figure} \caption{The link relation for unitrivalent diagrams} \label{F:link_rel} \end{figure} \begin{thm} \label{T:homotopy} \cite{bn1, bgrt, hl} $B_d^h \cong V_d^h/V_{d-1}^h$, and $B_d^c \cong V_d^c/V_{d-1}^c$. \end{thm} \section{Non-existence results for $B^h$ and $B^c$} \label{S:nonexistence} Now that we have properly defined the spaces $B^h$ and $B^c$ of unitrivalent diagrams for link homotopy, we want to analyze them more closely. Let $B^h(k)$ (respectively $B^c(k)$) denote the space of unitrivalent diagrams for link homotopy (resp. concordance) with $k$ possible colors for the univalent vertices (i.e. 
we are looking at links with $k$ components). \subsection{Previous results for $B^h(k)$} \label{SS:3and4} Consider a diagram $D \in B^h(k)$. Each component of $D$ is a tree diagram with at most one endpoint of each color. Since a unitrivalent tree with $n$ endpoints has $2n-2$ vertices, and hence degree $n-1$, $D$ cannot have any components of degree greater than $k-1$. {\bf Notation:} Before we continue, we will introduce a bit of notation which will be useful in this section. Given a unitrivalent diagram $D$, we define $m(D;i,j)$ to be the number of components of $D$ which are simply line segments with ends colored $i$ and $j$, as shown below: $$i-----j$$ We call these components {\it struts}. Recall the following (correct) results from \cite{me1}. We include the proofs for completeness, and as a warm-up for the more complicated proof in Section~\ref{SS:k=5}: \begin{thm} \label{T:3comp} If $D$ has a component $C$ of degree $k-1$ (with $k \geq 3$), then $D$ is trivial in $B^h(k)$. \end{thm} {\sc Proof:} $C$ has one endpoint of each color $1,2,...,k$. Without loss of generality, we may assume that $C$ has a branch as shown, where $\bar{C}$ denotes the remainder of $C$: $$C:\ \ \begin{matrix} \bar{C} \\ | \\ | \\ 1-----2 \end{matrix}$$ We are going to apply the link relation with the color 1. Let $\{C_1,...,C_n\}$ be the components of $D$ with an endpoint colored 1. So, ignoring the other components of $D$, we have the diagrams of Figure~\ref{F:arise} (where $\bar{C_i}$ denotes all of $C_i$ except for the endpoint colored 1). \begin{figure} \caption{Diagrams arising from the link relation} \label{F:arise} \end{figure} The link relation then implies that $D + \sum{D_i} = 0$. If $C_i$ is just a line segment with endpoints colored 1 and 2, then $D_i = D$. Otherwise, $\bar{C_i}$ will have an endpoint of some color $j \in \{3,...,k\}$.
In this case, since $\bar{C}$ has an endpoint of each color 3,...,$k$, including $j$, $D_i$ will have a component with two endpoints colored $j$, and hence be trivial in $B^h$. Therefore, we find that $D+m(D;1,2)D = 0$ where $m(D;1,2) \geq 0$. We can divide both sides by $1+m(D;1,2)$ (since we are working over a field of characteristic 0) to conclude that $D = 0$. $\Box$ \begin{thm} \label{T:4comp} If $D$ has a component $C$ of degree $k-2$ (with $k \geq 4$), then $D$ is trivial in $B^h(k)$. \end{thm} {\sc Proof:} Without loss of generality, $C$ has endpoints colored $1,2,...,k-1$. We will prove the theorem by inducting on $m(D;1,k)$, inducting among the set of diagrams having a component with endpoints colored $1,2,...,k-1$. As in the previous theorem, we may assume that $C$ has a branch as shown: $$C:\ \ \begin{matrix} \bar{C} \\ | \\ | \\ 1-----2 \end{matrix}$$ and conclude that $D+\sum{D_i}=0$, where the $D_i$ are defined as before. Since $\bar{C}$ contains endpoints of all colors except 1, 2, and $k$, $D_i$ has two endpoints of the same color (and hence is trivial) unless $C_i$ has one of the following 3 forms (as in Theorem~\ref{T:3comp}): $$(1)\ \ C_i = \ 1-----2$$ $$(2)\ \ C_i = \ 1-----k$$ $$(3)\ \ C_i = \ \begin{matrix} k \\ | \\ | \\ 1-----2 \end{matrix}$$ In the first case, $D_i = D$; and in the second case, $D_i = D'$, where $D'$ is the same as $D$ except that: \begin{itemize} \item $C$ is replaced by a component $C'$ identical to it except that the endpoint colored 2 in $C$ is colored $k$ in $C'$ (so $\bar{C'} = \bar{C}$). \item A line segment with endpoints colored 1 and $k$ has been replaced by a line segment with endpoints colored 1 and 2. In other words, $m(D';1,2) = m(D;1,2)+1$ and $m(D';1,k) = m(D;1,k)-1$. \end{itemize} In the third case, $D_i$ has a component of degree $k-1$, and so is trivial by the previous theorem. Therefore, we find that $D + m(D;1,2)D + m(D;1,k)D' = 0$.
If $m(D;1,k) = 0$ we conclude, as before, that $D$ is trivial modulo the link relation, which proves the base case of our induction. For the inductive step, we use the IHX relation on $C'$ to decompose $D' = \sum_{i\neq 1,2,k}{\pm D_i'}$, where $D_i'$ is the same as $D'$ except that $C'$ has been replaced by a component $C_i'$ with endpoints of the same colors (although arranged differently), and a branch as shown: $$C_i':\ \ \begin{matrix} \bar{C_i'} \\ | \\ | \\ i-----k \end{matrix}$$ (The decomposition is simply a matter of letting the endpoint colored $k$ ``travel'' the tree - see Figure~\ref{F:expand} for an example.) In particular, $m(D_i';a,b) = m(D';a,b)$ for all colors $a$ and $b$. \begin{figure} \caption{Using the IHX relation to decompose a diagram} \label{F:expand} \end{figure} We now apply the link relation to $D_i'$ using color $i$ (and component $C_i'$). In this case, the only other components which matter (modulo trivial diagrams) are ones which look like one of the following: $$(1)\ \ i-----k$$ $$(2)\ \ i-----2$$ $$(3)\ \ \begin{matrix} 2 \\ | \\ | \\ i-----k \end{matrix}$$ As before, the first case gives $D_i'$ again, the third case is trivial by Theorem~\ref{T:3comp}, and the second case gives a diagram $D_i''$ such that: \begin{itemize} \item $C_i'$ is replaced by a component $C_i''$ identical to it except that the endpoint colored $k$ in $C_i'$ is colored 2 in $C_i''$ (so $\bar{C_i''} = \bar{C_i'}$). \item A line segment with endpoints colored $i$ and 2 has been replaced by a line segment with endpoints colored $i$ and $k$. In other words, $m(D_i'';i,k) = m(D_i';i,k)+1$ and $m(D_i'';i,2) = m(D_i';i,2)-1$. \end{itemize} Otherwise, $D_i''$ is the same as $D_i'$; in particular, $m(D_i'';1,k) = m(D_i';1,k) = m(D';1,k) = m(D;1,k)-1$. Then the link relation tells us that $D_i' + m(D';i,k)D_i' + m(D';2,i)D_i'' = 0$. 
Since $D_i''$ has a component of degree $k-2$ with endpoints colored $1,...,k-1$ (namely, $C_i''$), the inductive hypothesis implies that $D_i''$ is trivial. Therefore, $(1+m(D';i,k))D_i' = 0$, so $D_i'$ is trivial in $B^h(k)$. This is true for every $i$, so it immediately follows that $D'$, and hence $D$, are also trivial in $B^h(k)$. $\Box$ So the largest possible degree of a component of a diagram in $B^h(k)$ is $k-3$ (if $k \geq 4$). In particular, this means that if $k$ is 3 or 4, then the largest possible degree of a component of a diagram in $B^h(k)$ is 1. It is well-known that the pairwise linking numbers are the only type 1 link homotopy invariants, and are dual to struts via the isomorphism of Theorem~\ref{T:homotopy}. Their products are dual to disjoint unions of struts. So we have as a corollary: \begin{cor} \label{C:3and4} On links with at most 4 components, the only finite type homotopy invariants are the pairwise linking numbers and their products. \end{cor} The obvious question is whether this result will generalize to links with more components. In the next section we will show, by a rather involved combinatorial argument, that it {\it does} extend to links with five components. However, in Section~\ref{S:existence} we will show that it fails for links with more than 8 components. {\sc Remark:} The error in \cite{me1} (replicated in \cite{me2}) was in the attempt to generalize the result to all $k$. On page 785 of \cite{me1}, line 15, the possibility that $c=a$ was neglected. This adds another term to the sum, which ends up cancelling everything out. This was pointed out by Alexander Merkov \cite{mer}. \subsection{The case of $B^h(5)$} \label{SS:k=5} In this section we consider $B^h(5)$. We know that no diagram in this space has a component of degree 3 or more. So the question is whether a diagram can have a component of degree 2. Any such component will be a ``Y-component'', i.e.
a graph with three (colored) univalent vertices connected to a single trivalent vertex. \begin{thm} \label{T:k=5} If $D \in B^h(5)$ has a component $C$ of degree 2, then $D$ is trivial. \end{thm} {\sc Proof:} This proof is significantly more delicate than that for Theorem~\ref{T:4comp}, involving an extra level of induction. Without loss of generality, $C$ has endpoints colored $1,2,3$. $$C:\ \ \begin{matrix} 3 \\ | \\ | \\ 1-----2 \end{matrix}$$ Our first induction is on $m(D;1,4)+m(D;1,5)$, inducting among the set of diagrams having a component with endpoints colored $1,2,3$. We will begin by proving the base case of this induction. Let $\{C_1,...,C_n\}$ be the other components of $D$ with an endpoint colored 1. Then we apply the link relation as in Figure~\ref{F:arise}. We will apply the relation along the color 1, fixing the color 3. This means that $\bar{C}$ (in Figure~\ref{F:arise}) is just a single univalent vertex, colored 3. This will be successively attached to the components $C_i$ to form the diagrams $D_i$ (in the figure $\bar{C_i}$ denotes all of $C_i$ except for the endpoint colored 1). We will refer to this operation as ``expanding along 1, fixing 3.'' So then $D+\sum{D_i}=0$. $D_i$ has two endpoints of the same color unless $C_i$ has one of the following 4 forms: $$(1)\ \ C_i = \ 1-----2$$ $$(2)\ \ C_i = \ 1-----4$$ $$(3)\ \ C_i = \ 1-----5$$ $$(4)\ \ C_i = \ \begin{matrix} a \\ | \\ | \\ 1-----b \end{matrix}\ \ a,b \in \{2,4,5\}$$ In the first case, $D_i = D$. In the fourth case, $D_i$ has a component of degree 3, and so is trivial by Theorem~\ref{T:4comp}. In the second case, $D_i = D_4$, where $D_4$ is the same as $D$ except that: \begin{itemize} \item $C$ is replaced by a component $C'$ identical to it except that the endpoint colored 2 in $C$ is colored 4 in $C'$. $$C':\ \ \begin{matrix} 3 \\ | \\ | \\ 1-----4 \end{matrix}$$ \item A strut with endpoints colored 1 and 4 has been replaced by a strut with endpoints colored 1 and 2.
In other words, $m(D_4;1,2) = m(D;1,2)+1$ and $m(D_4;1,4) = m(D;1,4)-1$. \end{itemize} {\sc Notation:} For the remainder of this proof, we will represent diagrams by giving the changes made from $D$. We will draw the new component of degree 2 which has replaced $C$ (we will see that for all of our diagrams, any other components of degree 2 remain unchanged). Although the total number of struts is always preserved, some struts have been replaced by others. We represent a strut by the (unordered) pair of the colors of its endpoints, and use an arrow to show how the struts have been traded. For example, we will represent $D_4$ as follows: $$D_4 = \begin{matrix} 3 \\ | \\ | \\ 1-----4 \end{matrix} (1,4) \rightarrow (1,2)$$ Finally, in the third case, $D_i = D_5$, which is defined similarly to $D_4$. $$D_5 = \begin{matrix} 3 \\ | \\ | \\ 1-----5 \end{matrix} (1,5) \rightarrow (1,2)$$ Therefore, we find that $D + m(D;1,2)D + m(D;1,4)D_4 + m(D;1,5)D_5 = 0$. If $m(D;1,4)+m(D;1,5) = 0$, then we conclude $D + m(D;1,2)D = 0$. Since $m(D;1,2) \geq 0$, we can divide by $1 + m(D;1,2)$ to conclude $D=0$, which proves the base case of our first induction. Henceforth, for convenience, we will let $m(i,j) = m(D;i,j)$. We will now assume the inductive hypothesis that any diagram $E \in B^h(5)$ with a component of degree 2 with endpoints colored 1, 2 and 3, and such that $m(E;1,4)+m(E;1,5) < m(1,4)+m(1,5)$, is trivial. Our inductive step consists of using this hypothesis to prove that $D_4$ and $D_5$ are trivial. This will immediately imply that $D+m(1,2)D=0$, and hence that $D=0$. We will prove that $D_4$ is trivial. The proof that $D_5$ is trivial is very similar. This proof involves a second induction. We will be looking at diagrams which do not have a component with endpoints colored 1, 2 and 3, so are not directly trivial by the (first) inductive hypothesis.
However, we will find that, modulo the inductive hypothesis, we can effectively ``swap'' struts in these diagrams so that the number of struts with certain colors on their endpoints always decreases. Since there are only a finite number of such struts, the supply eventually disappears, and we are able to conclude that the diagrams are trivial. We begin with $D_4$. We expand along 3, fixing 1. $$D_4 + m(3,4)D_4 + m(2,3)D_{42} + m(3,5)D_{45} = 0$$ $$D_{42} = \begin{matrix} 3 \\ | \\ | \\ 1-----2 \end{matrix}\ \begin{matrix} (1,4) \rightarrow (1,2) \\ (2,3) \rightarrow (3,4) \end{matrix}$$ $$D_{45} = \begin{matrix} 3 \\ | \\ | \\ 1-----5 \end{matrix}\ \begin{matrix} (1,4) \rightarrow (1,2) \\ (3,5) \rightarrow (3,4) \end{matrix}$$ $D_{42}$ has a component of degree 2 with endpoints colored 1, 2, and 3. Also, $m(D_{42};1,4) + m(D_{42};1,5) = (m(1,4)-1) + m(1,5) = m(1,4)+m(1,5)-1$. So by the inductive hypothesis, $D_{42} = 0$. Therefore: $$D_4 = \frac{-m(3,5)}{1+m(3,4)} D_{45}$$ Consider $D_{45}$. We expand along 3, fixing 5. $$D_{45} + m(1,3)D_{45} + m(2,3)D_{452} + m(3,4)D_{454} = 0$$ $$D_{452} = \begin{matrix} 3 \\ | \\ | \\ 2-----5 \end{matrix}\ \begin{matrix} (1,4) \rightarrow (1,2) \\ (3,5) \rightarrow (3,4) \\ (2,3) \rightarrow (1,3) \end{matrix}$$ $$D_{454} = \begin{matrix} 3 \\ | \\ | \\ 4-----5 \end{matrix}\ \begin{matrix} (1,4) \rightarrow (1,2) \\ (3,5) \rightarrow (3,4) \\ (3,4) \rightarrow (1,3) \end{matrix} \Rightarrow \begin{matrix} (1,4) \rightarrow (1,2) \\ (3,5) \rightarrow (1,3) \end{matrix}$$ Therefore: $$D_4 = \frac{-m(3,5)}{1+m(3,4)}\frac{1}{1+m(1,3)}\left({-m(2,3)D_{452}-m(3,4)D_{454}}\right)$$ Neither of the new diagrams are trivial inductively, so we will need to analyze both of them. First, we consider $D_{452}$. We will show that, modulo the inductive hypothesis, we can swap a strut (3,5) (i.e. a strut with endpoints colored 3 and 5) for a strut (3,4), while simultaneously swapping a strut (2,4) for a strut (2,5).
We begin by expanding along 2, fixing 3. $$D_{452} + m(2,5)D_{452} + (m(1,2)+1)D_{4521} + m(2,4)D_{4524} = 0$$ $$D_{4521} = \begin{matrix} 3 \\ | \\ | \\ 2-----1 \end{matrix}\ \begin{matrix} (1,4) \rightarrow (1,2) \\ (3,5) \rightarrow (3,4) \\ (2,3) \rightarrow (1,3) \\ (1,2) \rightarrow (2,5) \end{matrix} \Rightarrow \begin{matrix} (1,4) \rightarrow (2,5) \\ (3,5) \rightarrow (3,4) \\ (2,3) \rightarrow (1,3) \end{matrix}$$ $$D_{4524} = \begin{matrix} 3 \\ | \\ | \\ 2-----4 \end{matrix}\ \begin{matrix} (1,4) \rightarrow (1,2) \\ (3,5) \rightarrow (3,4) \\ (2,3) \rightarrow (1,3) \\ (2,4) \rightarrow (2,5) \end{matrix}$$ By the inductive hypothesis, $D_{4521} = 0$. Therefore: $$D_{452} = \frac{-m(2,4)}{1+m(2,5)}D_{4524}$$ Now consider $D_{4524}$. We expand along 3, fixing 2. $$D_{4524} + (m(3,4)+1)D_{4524} + (m(1,3)+1)D_{45241} + (m(3,5)-1)D_{45245} = 0$$ $$D_{45241} = \begin{matrix} 3 \\ | \\ | \\ 2-----1 \end{matrix}\ \begin{matrix} (1,4) \rightarrow (1,2) \\ (3,5) \rightarrow (3,4) \\ (2,3) \rightarrow (1,3) \\ (2,4) \rightarrow (2,5) \\ (3,4) \rightarrow (1,3) \end{matrix} \Rightarrow \begin{matrix} (1,4) \rightarrow (1,2) \\ (3,5) \rightarrow (1,3) \\ (2,3) \rightarrow (1,3) \\ (2,4) \rightarrow (2,5) \end{matrix}$$ $$D_{45245} = \begin{matrix} 3 \\ | \\ | \\ 2-----5 \end{matrix}\ \begin{matrix} (1,4) \rightarrow (1,2) \\ (3,5) \rightarrow (3,4) \\ (2,3) \rightarrow (1,3) \\ (2,4) \rightarrow (2,5) \\ (3,5) \rightarrow (3,4) \end{matrix}$$ By the inductive hypothesis, $D_{45241} = 0$. Therefore: $$D_{4524} = \frac{-(m(3,5)-1)}{2+m(3,4)}D_{45245}$$ Combining this with the result of the previous step, we have: $$D_{452} = \frac{-m(2,4)}{1+m(2,5)}\frac{-(m(3,5)-1)}{2+m(3,4)}D_{45245} = \frac{m(2,4)}{1+m(2,5)}\frac{(m(3,5)-1)}{2+m(3,4)}D_{45245}$$ Notice that the component of degree 2 in $D_{45245}$ is the same as that in $D_{452}$, so we could start the whole procedure again. Also notice that we have swapped the struts as we wanted. 
Inductively, we can see that: $$D_{452} = \prod_{k=1}^n{\frac{(m(2,4)-k+1)}{k+m(2,5)}\frac{(m(3,5)-k)}{1+k+m(3,4)}}D_{452(45)^n}$$ Eventually, $n > m(3,5)$, so $D_{452} = 0$. This means that: $$D_4 = \frac{-m(3,5)}{1+m(3,4)}\frac{-m(3,4)}{1+m(1,3)}D_{454}$$ Now we examine $D_{454}$. We expand along 4, fixing 3. $$D_{454} + m(4,5)D_{454} + (m(1,4)-1)D_{4541} + m(2,4)D_{4542} = 0$$ $$D_{4541} = \begin{matrix} 3 \\ | \\ | \\ 4-----1 \end{matrix}\ \begin{matrix} (1,4) \rightarrow (1,2) \\ (3,5) \rightarrow (1,3) \\ (1,4) \rightarrow (4,5) \end{matrix}$$ $$D_{4542} = \begin{matrix} 3 \\ | \\ | \\ 4-----2 \end{matrix}\ \begin{matrix} (1,4) \rightarrow (1,2) \\ (3,5) \rightarrow (1,3) \\ (2,4) \rightarrow (4,5) \end{matrix}$$ Notice that $D_{4542}$ has a degree 2 component which is the same, up to sign, as $D_{4524}$. So, by using the same kind of argument used for $D_{452}$, we can show that $D_{4542} = 0$ (we will look at the diagrams $D_{4542(54)^n}$). So: $$D_{454} = \frac{m(1,4)-1}{1+m(4,5)}(-D_{4541})$$ Therefore, we can express $D_4$ in terms of $D_{4541}$: $$D_4 = \frac{m(3,5)}{1+m(3,4)}\frac{m(3,4)}{1+m(1,3)}\frac{m(1,4)-1}{1+m(4,5)}(-D_{4541})$$ Notice that we have swapped a strut (1,4) for a strut (4,5), and a strut (3,5) for a strut (3,4). Since $D_{4541}$ has the same degree 2 component (up to sign) as $D_4$, we can repeat the same sequence of operations. Inductively, we will find that: $$D_4 = \prod_{k=1}^n{\frac{(m(3,5)-k+1)}{1+m(3,4)}\frac{m(3,4)}{k+m(1,3)}\frac{(m(1,4)-k)}{k+m(4,5)}} (-1)^nD_{4(541)^n}$$ Eventually, $n > m(1,4)$, so $D_4 = 0$. By a similar argument, $D_5$ will equal 0, so we can conclude that $D + m(1,2)D=0$. Therefore, $D=0$, which completes the proof. $\Box$ \begin{cor} \label{5comp} On links with at most 5 components, the only finite type homotopy invariants are the pairwise linking numbers and their products. \end{cor} It appears to be difficult to extend the proof of Theorem~\ref{T:k=5} to $B^h(6)$.
In this case, the IHX relation comes into play, and it is not clear that one can decrease $m(1,4)+m(1,5)$ monotonically. \subsection{Previous results for $B^c(k)$} \label{SS:2and3} Unlike for homotopy, there is no {\it a priori} limit on the size of the components of diagrams in $B^c(k)$. However, we are able to prove some non-existence results for small values of $k$. The proofs of the following results can be found in \cite{me2}: \begin{thm} \label{T:123} The only nontrivial diagrams in $B^c(k)$ for $k=1,2,3$ are disjoint unions of struts. In other words, any diagram with a component of degree greater than 1 is trivial. \end{thm} As with homotopy, it seems difficult to extend the methods used to prove Theorem~\ref{T:123} to higher values of $k$. The attempt in \cite{me2} fell prey to the same error as in \cite{me1} (see the Remark in Section~\ref{SS:3and4}). {\sc Remark:} If we allow the first Reidemeister move, we can prove that $B^c(1)$ (with the new relation) is trivial, confirming a result of \cite{ng} that the Arf invariant (which is $\mathbb{Z}_2$-valued) is the only finite type knot concordance invariant. \section{Existence results for $B^h$} \label{S:existence} In this section we demonstrate the existence of non-trivial diagrams in $B^h(k)$ which are {\it not} just the products of small components, for $k \geq 9$. Since $B^h$ is just a quotient of $B^c$, this implies the existence of non-trivial diagrams in $B^c$. This can also be proved directly using the same methods, but we will leave that as an exercise for the reader. The arguments used are not constructive; we simply use a counting argument to show that (within a certain subspace) there are more diagrams than relations. Since all the relations are just linear combinations, we have a homogeneous system of linear equations with more unknowns than equations, and we conclude that there must be non-trivial solutions.
Each such solution corresponds to some finite type invariant which is not just a product of linking numbers. We consider the subspace $Y^h(k)$ of $B^h(k)$ which is spanned by diagrams which have a single Y-component (degree 2 component) and all other components are struts (degree 1). This is the space of diagrams which have exactly one trivalent vertex. Since all the relations of $B^h$ preserve the number of trivalent vertices (i.e. any two diagrams in a given relation have the same number of trivalent vertices), $Y^h(k)$ is closed under the relations of $B^h$. We will show that $Y^h(k)$ contains non-trivial diagrams for $k \geq 9$. We count the number of diagrams in $Y^h(k)$ which have exactly $n$ struts. We count these diagrams by counting the number of ways of coloring the endpoints of the Y-component (i.e. of choosing 3 distinct colors), and then counting the number of ways of choosing the $n$ struts (i.e. of choosing $n$ pairs of distinct colors). Notice that this count does not distinguish the orientation of the trivalent vertex. This would double the number of diagrams, except that the new ones are just the negatives of the previous ones by the antisymmetry relation. So we will leave them out, and simply not count the antisymmetry relations among our relations. Since our diagrams only have one trivalent vertex, there are no IHX relations. Also, since our diagrams have no loops, and the endpoints of any component are given distinct colors, we can ignore the fourth and fifth relations of Definition~\ref{D:homotopy}. This means that the only relations we need to count are the link relations. Let $u(n,k)$ be the number of elements of $Y^h(k)$ which have $n$ struts (i.e. our number of ``unknowns''). There are $\binom{k}{3}$ ways of choosing the labels for the Y-component. There are $\binom{k}{2}$ possible struts. The number of ways of selecting $n$ of them, with repetition allowed, is simply $\binom{\binom{k}{2} + n -1}{n}$.
Putting these together, we find: $$u(n,k) = \binom{k}{3}\binom{\binom{k}{2} + n -1}{n}$$ Now we want to count the link relations among these diagrams. Notice that the diagrams in $Y^h(k)$ with $n$ struts are exactly the diagrams in $Y^h(k)$ with $2n+3$ endpoints. Since the link relation preserves the number of endpoints, as well as the number of univalent vertices, if one diagram in a relation is an element of $Y^h(k)$ with $n$ struts, so is every other diagram in the relation. Let $r(n,k)$ be the number of link relations among elements in $Y^h(k)$ which have $n$ struts. We can think of one of these relations as consisting of $n+1$ struts, together with one ``special'' strut. The special strut will have a distinguished endpoint. The link relation is created by attaching this endpoint in turn to all the other struts which have an endpoint of the same color, forming a series of diagrams with a single Y-component. An example is shown below: $$\begin{matrix} 3-----2* \\ \\ 2-----1 \\ \\ 2-----4 \\ \\ 5-----6 \end{matrix} \ \ \longrightarrow \ \ \begin{matrix} \begin{matrix} 3 \\ | \\ | \\ 2-----1 \end{matrix} \\ \\ 2-----4 \\ \\ 5-----6 \end{matrix} \ \ + \ \ \begin{matrix} \begin{matrix} 3 \\ | \\ | \\ 2-----4 \end{matrix} \\ \\ 2-----1 \\ \\ 5-----6 \end{matrix} \ \ = \ \ 0$$ There are $k(k-1)$ ways of coloring the ``special'' strut (not $\binom{k}{2}$, since the endpoints are distinguished). Then, as before, there are $\binom{\binom{k}{2} + n}{n+1}$ ways of choosing the other $n+1$ struts.
We conclude that: $$r(n,k) = k(k-1)\binom{\binom{k}{2} + n}{n+1}$$ To compare these two counts, we look at the quotient of the number of relations by the number of diagrams: $$\frac{r(n,k)}{u(n,k)} = \frac{k(k-1)\binom{\binom{k}{2} + n}{n+1}}{\binom{k}{3}\binom{\binom{k}{2} + n -1}{n}} = \frac{6}{k-2}\frac{\binom{k}{2}+n}{n+1}$$ For a fixed value of $k$, we can look at the limit of this ratio as $n \to \infty$: $$\lim_{n \to \infty} \frac{r(n,k)}{u(n,k)} = \frac{6}{k-2}$$ If $k \geq 9$, then this limit is less than 1, which means that for large $n$ there are more diagrams than relations, so there must be non-trivial diagrams. If we plug in $k=9$ and solve for the ratio to be 1, we obtain $n=209$. So if we have 210 struts (i.e. diagrams of degree 212) there will definitely be nontrivial diagrams. \begin{thm} There is a non-trivial homotopy invariant on links with 9 components, of type 212, which is not a product of linking numbers. \end{thm} {\sc Remark:} In fact, we have slightly overcounted the relations. We have counted relations where the distinguished endpoint of the special strut has a color which does not appear elsewhere in the diagram, so it cannot be attached to any other strut to form a Y-component. However, if we take the limit of the number of these extra relations divided by $u(n,k)$ as $n$ tends to $\infty$, we get 0. So removing these relations from the count does not significantly improve our result. In general, of course, many of the relations are dependent. So we would expect that there are also non-trivial diagrams when $k=8$, where the ratio tends to 1, and possibly for even lower values of $k$. \section{Questions} \label{S:questions} \begin{quest} What is an explicit example of a non-trivial finite type link homotopy invariant which is not a product of linking numbers? \end{quest} Any such invariant would immediately give a finite type invariant for string links.
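(As an aside, the counts of Section~\ref{S:existence} are easy to check by machine. The following sketch, not part of the paper, verifies the simplified form of the ratio $r(n,k)/u(n,k)$ and the threshold $n=209$ for $k=9$.)

```python
from fractions import Fraction
from math import comb

def u(n, k):
    """Diagrams in Y^h(k) with n struts: choose the 3 colors of the
    Y-component, then n struts (pairs of colors, repetition allowed)."""
    return comb(k, 3) * comb(comb(k, 2) + n - 1, n)

def r(n, k):
    """Link relations: an ordered special strut plus n+1 further struts."""
    return k * (k - 1) * comb(comb(k, 2) + n, n + 1)

def ratio(n, k):
    return Fraction(r(n, k), u(n, k))

# the algebraic simplification of the ratio holds exactly
for k in (7, 8, 9, 10):
    for n in (1, 50, 500):
        assert ratio(n, k) == Fraction(6, k - 2) * Fraction(comb(k, 2) + n, n + 1)

# for k = 9 the ratio equals 1 exactly at n = 209 and drops below 1 afterwards
assert ratio(208, 9) > 1
assert ratio(209, 9) == 1
assert ratio(210, 9) < 1
```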
Bar-Natan \cite{bn2} has shown that the finite type invariants for string links are exactly the Milnor invariants, which classify string links up to homotopy. However, Milnor's invariants, other than the linking numbers, have indeterminacies which prevent them from being lifted to links as integer- (or $\mathbb{C}$-) valued invariants. It is conceivable that in some product of these invariants the indeterminacies ``cancel'', so that the product can be lifted, though this would be unexpected. \begin{quest} What is the first value of $k$ for which $B^h(k)$ has non-trivial diagrams which are not disjoint unions of struts? \end{quest} We know that such diagrams exist for $k \geq 9$, and that they do {\it not} exist for $k \leq 5$, but the situation for $k = 6,7,8$ is still unknown. It seems likely that there are non-trivial diagrams in $B^h(8)$, but as yet we do not have a proof. \begin{quest} What is the first value of $k$ for which $B^c(k)$ has non-trivial diagrams which are not disjoint unions of struts? \end{quest} As for $B^h(k)$, we know that such diagrams exist for $k \geq 9$, and do not exist for $k \leq 3$, but the situation for $4 \leq k \leq 8$ is unknown. \begin{quest} Can we refine the methods of Section~\ref{S:existence} to prove that there are nontrivial diagrams for lower values of $k$? \end{quest} We mentioned in Section~\ref{S:existence} that the link relations preserve the number of trivalent vertices of the diagram. They also preserve the number of univalent vertices of each color. Perhaps this could be used to find smaller ``closed'' subspaces with fewer dependent relations, so that the ratio of relations to diagrams is smaller in the limit. \end{document}
\begin{document} \title{Rotated $D_n$-lattices} \begin{center} {\footnotesize $^{1,3}$ UNICAMP - Universidade Estadual de Campinas, 13083-859, Campinas, SP, BRAZIL \\ $^{2}$ UFLA - Universidade Federal de Lavras, 37200-000, Lavras, MG, BRAZIL \\ Email addresses: [grajorge, ferrari, sueli] \ @ime.unicamp.br \\ }\end{center} \hrule \begin{abstract} Based on algebraic number theory we construct some families of rotated $D_n$-lattices with full diversity which can be good for signal transmission over both Gaussian and Rayleigh \linebreak fading channels. Closed-form expressions for the minimum product distance of those lattices are obtained through algebraic properties. \end{abstract} \noindent \keywords{\footnotesize {\bf Keywords:} $D_n$-lattices, Signal transmission, Cyclotomic Fields, Minimum product distance} \hrule \section{Introduction} A lattice $\Lambda=\Lambda^{n} \subseteq \mathbb{R}^n$ is a discrete set generated by integer combinations of $n$ linearly independent vectors ${\bm v_1},\ldots,{\bm v_n} \in \mathbb{R}^n$. Its packing density $\Delta(\Lambda)$ is the proportion of the space $\mbox{\bms R}^{n}$ covered by congruent disjoint spheres of maximum radius \cite{sloane}. A lattice $\Lambda$ has diversity $m\leq n$ if $m$ is the maximum number such that for all ${\bm y}=(y_1,\cdots,y_n) \in \Lambda$, ${\bm y} \neq {\bm 0}$ there are at least $m$ non-vanishing coordinates. Given a full diversity lattice $\Lambda \subseteq \mbox{\bms R}^{n}$ $(m=n)$, the minimum product distance is defined as $d_{min}(\Lambda) = \min\{\prod_{i=1}^{n}|y_i|\,\, \mbox{for all}\,\, {\bm y}=(y_1,\cdots,y_n) \in \Lambda,{\bm y} \neq {\bm 0} \}$ \cite{Oggier}. Signal constellations having lattice structure have been studied as meaningful means for \linebreak signal transmission over both Gaussian and single-antenna Rayleigh fading channels \cite{boutros}.
\linebreak Usually the problem of finding good signal constellations for a Gaussian channel is associated with the search for lattices with high packing density \cite{sloane}. On the other hand, for a Rayleigh fading channel the efficiency, measured by lower error probability in the transmission, is strongly related to the lattice diversity and minimum product distance \cite{boutros}, \cite{Oggier}. The approach in this work, following \cite{eva2} and \cite{Oggier}, is the use of algebraic number theory to construct lattices with good performance for both channels. For general lattices the packing density and the minimum product distance are usually hard to estimate \cite{mic}. Those parameters can be obtained in certain cases of lattices associated with number fields, through algebraic properties. In \cite{gabriele}, \cite{Oggier} and \cite{upper-bound} some families of rotated $\mbox{\bms Z}^{n}$-lattices with full diversity and good minimum product distance are studied for transmission over Rayleigh fading channels. In \cite{suarez} the lattices $A_{p-1}$, $p$ prime, $E_6$, $E_8$, $K_{12}$ and $\Lambda_{24}$ were realized as full diversity ideal lattices via some subfields of cyclotomic fields. In \cite{boutros} rotated $n$-dimensional lattices (including $D_4$, $K_{12}$ and $\Lambda_{16}$) which are good for both channels are constructed with diversity $n/2$. In this work we also attempt to consider lattices which are feasible for both channels by constructing rotated $D_n$-lattices with full diversity $n$ and obtaining a closed-form expression for their minimum product distance. The results were obtained for $n=2^{r-2},$ $r \geq 5$ and $n=(p-1)/2$, $p$ prime and $p\geq 7,$ in Propositions \ref{Prop1}, \ref{idealI} and \ref{rotacionado}.
As is known, a $D_n$ lattice has better packing density $\delta(D_n)$ when compared to $\mbox{\bms Z}^{n}$ ($D_n$ has the best lattice packing density for $n=3,4,5$ and $\lim_{n\longrightarrow \infty}\frac{\delta(\mbox{\bmsi Z}^{n})}{\delta(D_n)} =0$) and also a very efficient decoding algorithm \cite{sloane}. The relative minimum product distances $d_{p,rel}(D_n)$ of the rotated $D_n$-lattices obtained here are smaller than the minimum product distance $d_{p,rel}(\mbox{\bms Z}^{n})$ of rotated $\mbox{\bms Z}^{n}$-lattices constructed for the Rayleigh channels in \cite{and} and \cite{Oggier}, but, as is shown in Sections 4 and 5, $\lim_{n\longrightarrow \infty}\frac{\sqrt[n]{d_{p,rel}(\mbox{\bmsi Z}^{n})}}{\sqrt[n]{d_{p,rel}(D_n)}} =\sqrt{2}$, which offers a good trade-off. In Sections 2 and 3 we summarize some definitions and results on Algebraic Number Theory. Sections 4 and 5 are devoted to the construction of full diversity rotated $D_n$-lattices through cyclotomic fields and the deduction of their minimum product distance. \section{Number Fields} In this section we summarize some concepts and results of algebraic \linebreak number theory and establish the notation to be used from now on. The results presented here can be found in \cite{traj}, \cite{pierre}, \cite{stewart} and \cite{was}. Let $\mbox{\bms K}$ be a number field of degree $n$ and $\mathcal O_{\mbox{\bmsi K}}$ its ring of integers. It can be shown that every nonzero fractional ideal $I$ of $\mathcal O_{\mbox{\bmsi K}}$ is a free $\mbox{\bms Z}$-module of rank $n$.
There are exactly $n$ distinct $\mbox{\bms Q}$-homomorphisms $\{\sigma_i\}_{i=1}^{n}$ of $\mbox{\bms K}$ in $\mbox{\bms C}.$ A homomorphism $\sigma_i$ is said {\it real} if $\sigma_i(\mbox{\bms K})\subset \mbox{\bms R}$, and the field $\mbox{\bms K}$ is said {\it totally real} if $\sigma_i$ is real for all $i=1,\cdots,n.$ Given $x \in \mbox{\bms K},$ the values $N(x) = N_{\mbox{\bmsi K}|\mbox{\bmsi Q}}(x) = \prod_{i=1}^{n} \sigma_i(x), \ \ \ Tr(x)=Tr_{\mbox{\bmsi K}|\mbox{\bmsi Q}}(x) = \sum_{i=1}^{n} \sigma_i(x)$ are called the {\it norm} and {\it trace} of $x$ in $\mbox{\bms K}|\mbox{\bms Q},$ respectively. It can be shown that if $x \in \mathcal{O}_{\mbox{\bmsi K}}$, then $N(x), Tr(x) \in \mbox{\bms Z}$. Let $\{\omega_1,\ldots,\omega_n\}$ be a $\mbox{\bms Z}$-basis of $\mathcal O_{\mbox{\bmsi K}}.$ The integer $d_{\mbox{\bmsi K}} = (det[ \sigma_j(\omega_i)]_{i,j=1}^{n})^2$ is called the {\it discriminant} of $\mbox{\bms K}$. The {\it norm} of an ideal $I \subseteq \mathcal O_{\mbox{\bmsi K}}$ is defined as $N(I) = |\mathcal O_{\mbox{\bmsi K}}/I|.$ The {\it codifferent} of $\mbox{\bms K}|\mbox{\bms Q}$ is the fractional ideal $\Delta(\mbox{\bms K}|\mbox{\bms Q})^{-1}=\{x\in \mbox{\bms K}; \,\, \forall \, \alpha \in {\mathcal{O}}_{\mbox{\bmsi K}},\,Tr_{\mbox{\bmsi K}|\mbox{\bmsi Q}}(x\alpha) \in \mbox{\bms Z}\}$ of ${\mathcal{O}}_{\mbox{\bmsi K}}.$ Let $\zeta=\zeta_m \in \mbox{\bms C}$ be a primitive $m$-th root of unity. We consider here the {\it cyclotomic field} $\mbox{\bms Q}(\zeta)$ and its subfield $\mbox{\bms K}=\mbox{\bms Q}(\zeta+\zeta^{-1}).$ We have that $[\mbox{\bms Q}(\zeta+\zeta^{-1}):\mbox{\bms Q}]= \varphi(m)/2$, where $\varphi$ is the Euler function; $\mathcal O_{\mbox{\bmsi K}} = \mbox{\bms Z} [\zeta+\zeta^{-1}];$ $d_{\mbox{\bmsi K}}=p^{\frac{p-3}{2}}$ if $m=p$, $p$ prime, $p\geq 5$ and $d_{\mbox{\bmsi K}}=2^{(r-1)2^{r-2}-1}$ if $m=2^r$. \section{Ideal lattices} From now on, let $\mbox{\bms K}$ be a totally real number field.
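(Returning briefly to the discriminants quoted above: they can be cross-checked numerically from the definition $d_{\mbox{\bmsi K}} = (det[\sigma_j(\omega_i)])^2$. The sketch below is a floating-point verification, not part of the paper, for $m=2^r$, using the $\mbox{\bms Z}$-basis $\{1, e_1, \ldots, e_{n-1}\}$ with $e_i \mapsto 2\cos(2\pi i k/2^r)$ under the real embeddings.)

```python
import math

def disc_real_cyclotomic(r):
    """Numerically compute d_K for K = Q(zeta_{2^r} + zeta_{2^r}^{-1})
    as det(M)^2, where M lists the embeddings of the basis {1, e_1, ...}."""
    m, n = 2 ** r, 2 ** (r - 2)
    ks = range(1, 2 ** (r - 1), 2)  # one odd exponent per conjugate pair
    M = [[2 * math.cos(2 * math.pi * i * k / m) if i else 1.0 for k in ks]
         for i in range(n)]
    # determinant by Gaussian elimination with partial pivoting
    det = 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda i: abs(M[i][c]))
        if p != c:
            M[c], M[p] = M[p], M[c]
            det = -det
        det *= M[c][c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            for j in range(c, n):
                M[i][j] -= f * M[c][j]
    return det ** 2

# claimed: d_K = 2^{(r-1)2^{r-2}-1}; e.g. r = 4 gives 2^11 = 2048
for r in (4, 5):
    claimed = 2 ** ((r - 1) * 2 ** (r - 2) - 1)
    assert abs(disc_real_cyclotomic(r) / claimed - 1) < 1e-6
```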
Let $\alpha \in \mbox{\bms K}$ be such that $\alpha_i=\sigma_i(\alpha) > 0$ for all $i=1,\cdots,n.$ The homomorphism $$\left.\begin{array}{l} \sigma_{\alpha}:\mbox{\bms K} \longrightarrow \mbox{\bms R}^{n} \\ \hspace{0.8cm} x\longmapsto \left(\sqrt{\alpha_{1}}\sigma_1(x),\ldots,\sqrt{\alpha_{n}}\sigma_{n}(x)\right)\end{array} \right.$$ is called the {\it twisted homomorphism}. When $\alpha=1$ the twisted homomorphism is the {\it Minkowski homomorphism}. It can be shown that if $I\subseteq {\mbox{\bms K}}$ is a free $\mbox{\bms Z}$-module of rank $n$ with $\mbox{\bms Z}$-basis $\{w_1,\ldots,w_n\}$, then the image $\Lambda = \sigma_{\alpha}(I)$ is a lattice in $\mbox{\bms R}^n$ with basis $\{{ \sigma_{\alpha}(w_1)},\ldots,{ \sigma_{\alpha}(w_n)}\},$ or equivalently with generator matrix ${\bm M}= (\sigma_{\alpha}(w_{ij}))_{i,j=1}^{n}$ where $w_i = (w_{i1},\cdots,w_{in})$ for all $i=1,\cdots,n$. Moreover, if $\alpha I \overline{I} \subseteq \Delta(\mbox{\bms K}|\mbox{\bms Q})^{-1}$ where $\overline{I}$ denotes the complex conjugate of $I,$ then $\sigma_{\alpha}(I)$ is an integer lattice. Since $\mbox{\bms K}$ is totally real, the associated Gram matrix of $\sigma_{\alpha}(I)$ is ${\bm G}={\bm M}.{\bm M^{t}} = \left( Tr_{\mbox{\bmsi K}|\mbox{\bmsi Q}}(\alpha w_{i}\overline{w_{j}}) \right)_{i,j=1}^{n}$ \cite{Oggier}. \begin{proposition}\label{detb}\cite{eva2} If $I\subseteq \mbox{\bms K}$ is a fractional ideal, then for $\Lambda=\sigma_{\alpha}(I)$ and $det(\Lambda)=det(G)$, we have: \begin{equation} \label{detb1} det(\Lambda)= det(G) = N(I)^{2} N_{\mbox{\bmsi K}|\mbox{\bmsi Q}}(\alpha)|d_{\mbox{\bmsi K}}|.\end{equation} \end{proposition} \begin{proposition} \label{distanciaminima}\cite{Oggier} Let $\mbox{\bms K}$ be a totally real number field with $[\mbox{\bms K}:\mbox{\bms Q}]=n$ and $I \subseteq {\mbox{\bms K}}$ a fractional ideal.
The {\it minimum product distance} of $\Lambda =\sigma_{\alpha}(I)$ is \begin{equation} d_{p,min}(\Lambda)= \sqrt{N_{\mbox{\bmsi K}|\mbox{\bmsi Q}}(\alpha)}\min_{0\neq y\in I}|N_{\mbox{\bmsi K}|\mbox{\bmsi Q}}(y)|.\end{equation} In particular, if $I$ is a principal ideal then $ d_{p,min}(\Lambda)= \sqrt{\frac{det(\Lambda)}{|d_{\mbox{\bmsi K}}|}}.$ \end{proposition} \begin{definition} \label{relative} The {\it relative minimum product distance} of $\Lambda,$ denoted by ${\bm d_{p,rel}(\Lambda)}$, is the minimum product distance of a scaled version of $\Lambda$ with unitary minimum norm vector. \end{definition} \section{Rotated $D_n$-lattices for $n=2^{r-2}$, $r \geq 5$ via $\mbox{\bms K}=\mbox{\bms Q}(\zeta_{2^{r}}+\zeta_{2^{r}}^{-1})$} In this section we will present some families of rotated $D_n$-lattices using ideals and modules in the totally real number field $\mbox{\bms K}=\mbox{\bms Q}(\zeta_{2^{r}}+\zeta_{2^{r}}^{-1}).$ One of the strategies to construct these lattices was to start from the standard characterization of $D_n$ as generated by the basis \begin{equation}\label{beta} \beta= \{(-1,-1,0,\cdots,0),(1, -1,0, \cdots,0),\cdots,(0,0,\cdots,1,-1)\}. \end{equation} We derive in 4.2 a rotated $D_n$-lattice as a sublattice of the rotated $\mbox{\bms Z}^{n}$ algebraic constructions presented in \cite{and}, \cite{Oggier} and \cite{gabriele}. Another strategy, explored next in 4.1, is to investigate the necessary condition given in Proposition \ref{detb} for the existence of rotated $D_n$-lattices. Let $\zeta=\zeta_{2^r}$ be a primitive $2^r$-th root of unity, $m = 2^{r},$ $\mbox{\bms K}=\mbox{\bms Q} (\zeta +\zeta^{-1})$ and $n=[\mbox{\bms K}:\mbox{\bms Q}]=2^{r-2}$. \subsection{\bf A first construction:} Let $\alpha \in \mathcal O_{\mbox{\bmsi K}}$ and let $I \subseteq \mathcal O_{\mbox{\bmsi K}}$ be an ideal. If $\sigma_{\alpha}(I)$ is a rotated $D_n$-lattice scaled by $\sqrt{c}$, then $det(\sigma_{\alpha}(I))=4\, c^n$.
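(Here $4$ is the determinant of the Gram matrix of the standard $D_n$, and scaling by $\sqrt{c}$ multiplies the determinant by $c^n$. A quick sketch, not part of the paper, confirming $det(D_n)=4$ directly from the basis $\beta$ of (\ref{beta}):)

```python
from fractions import Fraction

def dn_basis(n):
    """Generator matrix of D_n in the basis beta:
    (-1,-1,0,...), (1,-1,0,...), (0,1,-1,0,...), ..., (0,...,1,-1)."""
    rows = [[-1, -1] + [0] * (n - 2), [1, -1] + [0] * (n - 2)]
    for i in range(1, n - 1):
        v = [0] * n
        v[i], v[i + 1] = 1, -1
        rows.append(v)
    return rows

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        p = next(i for i in range(c, n) if M[i][c] != 0)
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            for j in range(c, n):
                M[i][j] -= f * M[c][j]
    return d

# det(Gram) = det(B)^2 = 4 for every n
for n in range(2, 8):
    assert det(dn_basis(n)) ** 2 == 4
```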
Based on Proposition \ref{detb}, taking $I=\mathcal O_{\mbox{\bmsi K}}$ and $c=2^{r-1}$, since $d_{\mbox{\bmsi K}}=2^{(r-1)2^{r-2}-1}$ and $n=2^{r-2}$ it follows that a necessary condition to construct a rotated $D_n$-lattice $\sigma_{\alpha}(I)$ is to find an element $\alpha \in \mathcal O_{\mbox{\bmsi K}}$ such that $N(\alpha)=8$. Table $1$ shows some elements $\alpha \in \mathcal O_{\mbox{\bmsi K}}$ such that $N(\alpha)=8$ in low dimensions. From it we inferred a general expression for $\alpha$ as \begin{equation} \label{alpha} \alpha=4 + (\zeta_{2^r}+ {\zeta{_{2^r}^{-1}}})- 2({\zeta{_{2^r}^{2}}}+ {\zeta{_{2^r}^{-2}}})-({\zeta{_{2^r}^{3}}}+ {\zeta{_{2^r}^{-3}}}) \end{equation} and then derive Proposition \ref{Prop1}. \begin{table}[h] \begin{center} \begin{tabular}{c|c|c} \hline \hline $r $& $\alpha $& ${N}(\alpha) $ \\ \hline $4$ & $4 + (\zeta_{16}+ {\zeta{_{16}^{-1}}})- 2({\zeta{_{16}^{2}}}+ {\zeta{_{16}^{-2}}})-({\zeta{_{16}^{3}}}+ {\zeta{_{16}^{-3}}})$& $8$ \\ $5 $& $4 + (\zeta_{32}+ {\zeta{_{32}^{-1}}})- 2({\zeta{_{32}^{2}}}+ {\zeta{_{32}^{-2}}})-({\zeta{_{32}^{3}}}+ {\zeta{_{32}^{-3}}})$& $8$ \\ $6 $& $4 + (\zeta_{64}+ {\zeta{_{64}^{-1}}})- 2({\zeta{_{64}^{2}}}+ {\zeta{_{64}^{-2}}})-({\zeta{_{64}^{3}}}+ {\zeta{_{64}^{-3}}})$& $8$ \\ \hline \hline \end{tabular} \caption{Some elements $\alpha \in \mathcal O_{\mbox{\bmsi K}}$ with $N(\alpha)=8$.} \end{center} \end{table} To prove that $\frac{1}{\sqrt{2^{r-1}}}\sigma_{\alpha}(I)$ is a rotated $D_n$-lattice we need the following preliminary results. \begin{proposition}\label{traco} \cite{and} If $\zeta=\zeta_{2^r}$ and $\mbox{\bms K}=\mbox{\bms Q} (\zeta +\zeta^{-1})$, then \newline $ Tr_{\mbox{\bmsi K}|\mbox{\bmsi Q}}(\zeta^k + \zeta^{-k})=\left\{\begin{array}{l} 0, \ \ \mbox{if} \ \ gcd(k,2^r)<2^{r-1}; \\ -2^{r-1}, \ \ \mbox{if} \ \ gcd(k,2^r)=2^{r-1}; \\ 2^{r-1}, \ \ \mbox{if} \ \ gcd(k,2^r)=2^{r}.
\end{array}\right .$\end{proposition} \begin{proposition}\label{quase} If $\mbox{\bms K}=\mbox{\bms Q}(\zeta+\zeta^{-1})$, $e_0 =1$ and $e_i=\zeta^{i}+\zeta^{-i}$ for $i = 1,\cdots, {2^{r-2}-1}$, then \\ (a) $Tr_{\mbox{\bmsi K}|\mbox{\bmsi Q}}(\alpha e_i e_i) = \left\{\begin{array}{ll} 2^{r}, & \mbox{if }\, i=0,1 \\ 2^{r+1}, & \mbox{if }\, 2 \leq i < 2^{r-2}-1 \\ 3\cdot 2^{r}, & \mbox{if }\, i=2^{r-2}-1 \end{array} \right.$ \\ (b) $Tr_{\mbox{\bmsi K}|\mbox{\bmsi Q}}(\alpha e_i e_0) = \left\{\begin{array}{ll} 2^{r-1}, & \mbox{if }\, i=1 \\ -2^{r}, & \mbox{if }\, i=2 \\ -2^{r-1}, & \mbox{ if }\,\, i=3 \\ 0, & \mbox{if }\, 3<i \leq 2^{r-2}-1 \end{array}\right.$ \\ (c) If $0 < i < j \leq 2^{r-2}-1$ then $$Tr_{\mbox{\bmsi K}|\mbox{\bmsi Q}}(\alpha e_i e_j) = \left\{\begin{array}{ll} 2^{r-1}, & \mbox{ if }\,\, |i-j|=1\,\, \mbox{ and} \\ & (i,j)\not\in \{(1,2),(2,1),(2^{r-2}-2, 2^{r-2}-1), \\ & (2^{r-2}-1,2^{r-2}-2)\} \\ -2^{r}, & \mbox{ if }\,\, |i-j|=2 \\ -2^{r-1}, & \mbox{ if }\,\, |i-j|=3 \\ 2^{r}, & \mbox{ if }\,\, (i,j) \in \{(2^{r-2}-2, 2^{r-2}-1), \\ & (2^{r-2}-1,2^{r-2}-2)\} \\ 0, & \mbox{ otherwise.} \end{array}\right.$$ \end{proposition} {\it Proof:} The proof is straightforward by calculating the $gcd(k,2^{r})$ for some values of $k$ and applying Proposition \ref{traco}. For $0 < i < j \leq 2^{r-2}-1$ we have: \[\begin{split} Tr(\alpha e_i e_i) & = Tr(8)+ 4Tr({\zeta^{2i}}+ {\zeta^{-2i}})+ 2Tr({\zeta}+ {\zeta^{-1}})+ Tr({\zeta^{2i+1}} + {\zeta^{-2i-1}}) \\& + Tr({\zeta^{2i-1}}+ {\zeta^{-(2i-1)}})- 4Tr({\zeta^{2}}+ {\zeta^{-2}}) -2Tr({\zeta^{2i+2}}+ {\zeta^{-(2i+2)}}) \\& -2Tr({\zeta^{2i-2}}+ {\zeta^{-(2i-2)}})-2Tr({\zeta^{3}}+ {\zeta^{-3}}) \\& - Tr({\zeta^{2i+3}}+ {\zeta^{-(2i+3)}}) - Tr({\zeta^{2i-3}}+ {\zeta^{-(2i-3)}}) \end{split} \] For $2\leq i < 2^{r-2}-1$, since $gcd(k,2^{r})< 2^{r-1}$ for $k= 2i,2i \pm 1,2i\pm2, 2i \pm 3$ it follows that $Tr(\alpha e_i e_i) = 2^{r+1}$. For $i=1,2^{r-2}-1$ the development is analogous.
For $i=0$ we have: \[ Tr(\alpha e_0e_0)=Tr(4)+ Tr(\zeta+ {\zeta^{-1}}) -2Tr({\zeta^{2}}+ {\zeta^{-2}})-Tr({\zeta^{3}}+{\zeta^{-3}})=2^r\] and then it follows (a). \[ \begin{split} Tr(\alpha e_ie_0) & =4Tr({\zeta^{i}}+{\zeta^{-i}})+Tr({\zeta^{i+1}}+{\zeta^{-(i+1)}}) + Tr({\zeta^{i-1}}+{\zeta^{-(i-1)}}) \\&-2Tr({\zeta^{i+2}}+ {\zeta^{-(i+2)}})-2Tr({\zeta^{i-2}}+{\zeta^{-(i-2)}})\\&- Tr({\zeta^{i+3}}+{\zeta^{-(i+3)}})-Tr({\zeta^{i-3}}+ {\zeta^{-(i-3)}})\end{split} \] For $i\neq 1,2,3$, since $gcd(k,2^{r})< 2^{r-1}$ for $k=i, i\pm 1, i\pm 2, i\pm 3$ then $Tr(\alpha e_ie_0)=0$.\\ For $i=1,2,3$ using $Tr(\zeta^{0}+ \zeta^{0})=2^{r-1}$ it follows (b). \[ \begin{split} Tr(\alpha e_ie_j)& =Tr(\zeta^{i-j+1}+\zeta^{-(i-j+1)})+ Tr(\zeta^{i-j-1}+\zeta^{-(i-j-1)}) \\&-2[Tr(\zeta^{i-j+2}+\zeta^{-(i-j+2)})+ Tr(\zeta^{i-j-2}+\zeta^{-(i-j-2)})]\\& -[Tr(\zeta^{i-j+3}+ \zeta^{-(i-j+3)}) + Tr(\zeta^{i-j-3}+\zeta^{-(i-j-3)})]\\& -2 Tr(\zeta^{i+j-2}+\zeta^{-(i+j-2)})- Tr(\zeta^{i+j+3}+\zeta^{-(i+j+3)})\\&- Tr(\zeta^{i+j-3}+\zeta^{-(i+j-3)}) \end{split}\] Since $gcd(k,2^{r})<2^{r-1}$ for $k=i+j+1, i+j+ 2$; $gcd(i+j+3,2^{r})<2^{r-1}$ for $i+j \neq 2^{r-1}-3$; $gcd(i+j+3,2^r) =2^{r-1}$, for $i+j = 2^{r-1}-3$ and $gcd(i+j-3,2^r)<2^{r-1}$, for $i+j \neq 3$, it follows (c). \bb \begin{proposition}\label{Prop1} The lattice $\frac{1}{\sqrt{2^{r-1}}}\sigma_{\alpha}(\mathcal O_{\mbox{\bmsi K}}) \subseteq \mbox{\bms R}^{2^{r-2}}$, $\alpha =4 + (\zeta_{2^r}+ {\zeta{_{2^r}^{-1}}})- 2({\zeta{_{2^r}^{2}}}+ {\zeta{_{2^r}^{-2}}})-({\zeta{_{2^r}^{3}}}+ {\zeta{_{2^r}^{-3}}})$ is a rotated $D_n$-lattice for $n=2^{r-2}$. 
\end{proposition} {\it Proof:} The Gram matrix for $\frac{1}{\sqrt{2^{r-1}}}\sigma_{\alpha}(\mathcal O_{\mbox{\bmsi K}})$ related to the $\mbox{\bms Z}$-basis $\{e_0,e_1,\cdots,e_{n-1}\}$ is \[{\bm G}={ \left( \begin{array}{ccccccccccc} 2 & 1 & -2 & -1 & 0 & \cdots & & & & \cdots & 0 \\ 1 & 2 & 0 & -2 & -1 & 0 & & & & & \vdots \\ \vdots & 0 & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & 0 & \vdots \\ & & & & \ddots & 0 & -1 & -2 & 1 & 4 & 2 \\ 0 & \vdots & & & & \cdots & 0 & -1 & -2 & 2 & 6 \end{array} \right)}\] \noindent and it is easy to see that ${\bm G}$ is the Gram matrix for $D_n$ related to the generator matrix ${\bm T}{\bm B}$ where \[{\bm T}={\left( \begin{array}{cccccccc} 0 & \cdots & & & & \cdots & 0 & -1 \\ 0 & \cdots & & & \cdots & 0 & 1 & 0 \\ 0 & \cdots & & \cdots & 0 & -1 & 0 & 1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \\ 0 & 0 & 1 & 0 & -1 & 0 & \cdots & 0 \\ 0 & -1 & 0 & 1 & 0 & \cdots & \cdots & 0 \\ 1 & -1 & -1 & 0 & \cdots & & \cdots & 0 \end{array} \right)}\] and ${\bm B}$ is the standard generator matrix for $D_n$ given by basis $\beta$ (\ref{beta}). So, since lattices with the same Gram matrix must be Euclidean equivalent, $\frac{1}{\sqrt{2^{r-1}}}\sigma_{\alpha}(\mathcal O_{\mbox{\bmsi K}})$ is a rotated $D_n$-lattice. \bb We determine next the relative minimum product distance of the rotated $D_n$-lattice considered in Proposition \ref{Prop1}.
Using Propositions \ref{detb} and \ref{Prop1} we conclude: \begin{corollary} If $m = 2^{r}$, $r \geq 4$, $\mbox{\bms K}=\mbox{\bms Q}(\zeta_m+\zeta_m^{-1})$ and $\alpha =4 + (\zeta_{2^r}+ {\zeta{_{2^r}^{-1}}})- 2({\zeta{_{2^r}^{2}}}+ {\zeta{_{2^r}^{-2}}})-({\zeta{_{2^r}^{3}}}+ {\zeta{_{2^r}^{-3}}})$ then $N_{\mbox{\bmsi K}|\mbox{\bmsi Q}}(\alpha) = 8.$ \end{corollary} \begin{proposition} For $n=2^{r-2}$, if $\Lambda = \frac{1}{\sqrt{2^{r-1}}}\sigma_{\alpha}(\mathcal O_{\mbox{\bmsi K}})$ and $\alpha$ as in (\ref{alpha}) then the relative minimum product distance of the lattice is $${\bm d_{p,rel}\left(\frac{1}{\sqrt{2^{r-1}}}\sigma_{\alpha}(\mathcal O_{\mbox{\bmsi K}})\right)} =2^{\frac{3-r n}{2}}.$$ \end{proposition} {\it Proof:} The minimum norm of the standard $D_n$ is $\sqrt{2}.$ $\mathcal O_{\mbox{\bmsi K}}$ is a principal ideal, therefore using Proposition \ref{distanciaminima} we have $d_{p,min}(\sigma_{\alpha}(\mathcal O_{\mbox{\bmsi K}})) = \sqrt{N(\alpha)N(\mathcal O_{\mbox{\bmsi K}})^{2}}.$ Since $N(\alpha)=8$ and $N(\mathcal O_{\mbox{\bmsi K}})=1,$ then $${\bm d_{p,rel}\left(\frac{1}{\sqrt{2^{r-1}}}\sigma_{\alpha}(\mathcal O_{\mbox{\bmsi K}})\right)} = \frac{1}{\sqrt{2}^{n}}\frac{1}{\sqrt{2^{r-1}}^{n}} \sqrt{8} =\frac{\sqrt{8}}{2^{r \frac{n}{2}}} = 2^{\frac{3-rn}{2}}.$$ \bb \subsection{\bf A second construction:} In \cite{and} and \cite{gabriele} families of rotated $\mbox{\bms Z}^n$-lattices obtained as the image of a twisted homomorphism applied to $\mbox{\bms Z}[\zeta+\zeta^{-1}]$ and having full diversity are constructed. Those constructions consider $\alpha = 2 + e_1$ and $\alpha=2-e_1,$ respectively, and generate equivalent lattices in the Euclidean metric by permutations and coordinate sign changes.
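(The arithmetic in the proof of the proposition above is easy to cross-check numerically. The sketch below, not part of the paper, also recovers the per-coordinate values $\sqrt[n]{d_{p,rel}(D_n)}$ that appear in Table 2.)

```python
import math

def d_rel_power2(r):
    """Relative minimum product distance of (1/sqrt(2^{r-1})) sigma_alpha(O_K),
    computed directly from sqrt(N(alpha)) = sqrt(8) and the two scalings."""
    n = 2 ** (r - 2)
    return math.sqrt(8) / (math.sqrt(2) ** n * math.sqrt(2 ** (r - 1)) ** n)

# agrees with the closed form 2^{(3-rn)/2}
for r in (4, 5, 6):
    n = 2 ** (r - 2)
    assert math.isclose(d_rel_power2(r), 2 ** ((3 - r * n) / 2))

# per-coordinate value for r = 5 (n = 8), as listed in Table 2: 0.201311
assert abs(d_rel_power2(5) ** (1 / 8) - 0.201311) < 5e-6
```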
We will use in our construction the rotated $\mbox{\bms Z}^{n}$-lattice $\Lambda=\frac{1}{\sqrt{2^{r-1}}}\sigma_{\alpha}(I)$ with $\alpha = 2 + e_1$ and $I=\mathcal O_{\mbox{\bmsi K}}=\mbox{\bms Z}[\zeta+\zeta^{-1}],$ and then consider $D_n$ as a sublattice of $\Lambda$. If $e_0 =1$ and $e_i=\zeta^{i}+\zeta^{-i}$ for $i = 1,\cdots, {2^{r-2}-1}$, by \cite{gabriele} a generator matrix for the rotated $\mbox{\bms Z}^{n}$-lattice $\Lambda=$ $\frac{1}{\sqrt{2^{r-1}}}\sigma_{\alpha}(\mathcal O_{\mbox{\bmsi K}})$ is ${\bm M_1} = \displaystyle\frac{1}{\sqrt{2^{r-1}}}{\bm N} {\bm A},$ where ${\bm N}=(\sigma_i(e_{j-1}))_{i,j=1}^{n}$ and ${\bm A}= diag(\sqrt{\sigma_{k}(\alpha)}).$ Let ${\bm T}$ be the basis change matrix \[{{\bm T}=\left( \begin{array}{ccccc} 1 & -1 & \cdots & 1 & -1 \\ 1 & -1 & \cdots & 1 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & 0 & \cdots & 0 & 0 \end{array}\right)}.\] For ${\bm M} = {\bm T}{\bm M_1}$, ${\bm G}={\bm M} {\bm M^{t}}={\bm I_n}$ and we will consider the standard lattice $D_n \subseteq \mbox{\bms Z}^{n}$ rotated by ${\bm M}$. \begin{proposition}\label{idealI} Let $I \subseteq \mathcal O_{\mbox{\bmsi K}}$ be the $\mbox{\bms Z}$-module with $\mbox{\bms Z}$-basis \linebreak $\{-e_1,e_2,\cdots,-e_{n-1}, -2e_0 + 2 e_1 - 2e_2 + \cdots -2e_{n-2} + e_{n-1}\}$ and $\alpha=2+e_1.$ The lattice $\frac{1}{\sqrt{2^{r-1}}}\sigma_{\alpha}(I) \subseteq \mbox{\bms R}^{2^{r-2}}$ is a rotated $D_n$-lattice. \end{proposition} \noindent{\it Proof:} Let ${\bm B}$ be the generator matrix of $D_n$ associated to the basis $\beta$ (\ref{beta}).
Using homomorphism properties, a straightforward computation shows that ${\bm B} {\bm M} =$ $$ { \frac{1}{\sqrt{2^{r-1}}}\footnotesize\left(\begin{array}{ccc} \sigma_1(-2 e_0 + \cdots - 2e_{n-2} + e_{n-1} ) & \cdots & \sigma_n(-2 e_0 + \cdots - 2e_{n-2} + e_{n-1} ) \\ \sigma_1(-e_{n-1}) & \cdots & \sigma_n( -e_{n-1}) \\ \vdots & \ddots & \vdots \\ \sigma_1(-e_1) & \cdots & \sigma_n(-e_1) \end{array}\right) {\bm A}} $$ is a generator matrix for $\frac{1}{\sqrt{2^{r-1}}}\sigma_{\alpha}(I).$ This lattice is a rotated $D_n$-lattice since ${\bm B} {\bm M} ({\bm B} {\bm M})^{t} = {\bm B} {\bm B^{t}}$ is the standard Gram matrix of $D_n$ relative to the basis $\beta$. \bb We show next that the rotated $D_n$-lattice of the last proposition is associated to a principal ideal of $\mathcal O_{\mbox{\bmsi K}}$ and then calculate its relative minimum product distance. \begin{proposition} Let $I$ be the $\mbox{\bms Z}$-module given in Proposition \ref{idealI}. $I$ is a principal ideal and $I = e_1 \mathcal O_{\mbox{\bmsi K}}.$ \end{proposition} {\it Proof:} It is easy to see that $I = 2e_0\mbox{\bms Z} + e_1\mbox{\bms Z} + \cdots + e_{n-1}\mbox{\bms Z}.$ Let $x \in e_1 \mathcal O_{\mbox{\bmsi K}}.$ Then $x = e_1 (a_0 e_0 + a_1 e_1 + a_2 e_2 + \cdots + a_{n-1}e_{n-1}) = a_0 (e_1 + e_{-1}) + a_1 (e_2 + 2e_0) + a_2 (e_3 + e_{-1}) + \cdots + a_{n-1}(e_n + e_{-n+2}) = a_1 (2 e_0) + (2 a_0 + a_2)(e_1) + (a_1 +a_3)(e_2) + \cdots + (a_{n-2})(e_{n-1}) \in I.$ Now, if $x \in I,$ then $x = a_0 2e_0 + a_1 e_1 + \cdots + a_{n-1} e_{n-1} = (e_1)[a_0 e_1 + a_1 e_2 + (a_2 - a_0) e_3 + ( a_3 - a_1) e_4 + (a_4 - a_2 -a_0) e_5 + (a_5 - a_3 -a_1)e_6 + \cdots + ( a_{n-1})e_{n-2} + (a_{n-2} - a_{n-4} \cdots - a_0)e_{n-1}] \in e_1 \mathcal O_{\mbox{\bmsi K}}.$ So, $I$ is a principal ideal of $\mathcal O_{\mbox{\bmsi K}}.$ \bb \begin{remark} It follows from Proposition \ref{distanciaminima} and Definition \ref{relative} that the relative minimum product distance of $D_n$-lattices constructed from
principal ideals in $\mathcal O_{\mbox{\bmsi K}}$, $\mbox{\bms K}=\mbox{\bms Q}(\zeta_m+\zeta_m^{-1})$, $m=2^{r}$, $r \geq 5$, depends only on the determinant of $D_n$ and the discriminant of $\mbox{\bms K}$. Therefore, for any construction of a rotated $D_n$ lattice from a principal ideal $I$ in $\mathcal O_{\mbox{\bmsi K}}$ the relative minimum product distance is $d_{p,rel}(\sigma_{\alpha}(I)) = 2^{\frac{3-r n}{2}}.$ \end{remark} It is also interesting to note that besides being Euclidean equivalent, the lattices obtained through the first and second constructions are equivalent in the Lee metric since the isometry is a composition of permutations and coordinate sign changes. The density $\Delta(\Lambda)$ of a lattice $\Lambda \subseteq \mbox{\bms R}^{n}$ is given by $\Delta(\Lambda) = \frac{(d/2)^{n}Vol(B(1))}{det(\Lambda)^{1/2}}$ where $Vol(B(1))$ is the volume of the unit sphere in $\mbox{\bms R}^{n}$ and $d$ is the minimum norm of $\Lambda$. The parameter $\delta(\Lambda) = \frac{(d/2)^{n}}{det(\Lambda)^{1/2}}$ is the so-called {\it center density}. Table $2$ shows a comparison between the normalized $d_{p,rel}$ and the center density of rotated $\mbox{\bms Z}^n$-lattices constructed in \cite{gabriele} and rotated $D_n$-lattices constructed here via principal ideals in $\mbox{\bms K}=\mbox{\bms Q}(\zeta+\zeta^{-1})$, $n=2^{r-2}$. Asymptotically we have \begin{equation} \label{limite} \lim_{n\longrightarrow \infty}\frac{\sqrt[n]{d_{p,rel}(\mbox{\bms Z}^{n})}}{\sqrt[n]{d_{p,rel}(D_n)}} =\sqrt{2}\,\, \, \mbox{and}\,\,\, \lim_{n\longrightarrow \infty}\frac{\delta(\mbox{\bms Z}^{n})}{\delta(D_n)} =0.
\end{equation} \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \hline $r$ & $n$ & $\sqrt[n]{d_{p,rel}(\mbox{\bms Z}^n)}$ & $\sqrt[n]{d_{p,rel}(D_n)}$ & $\delta(\mbox{\bms Z}^n)$ & $\delta(D_n)$ \\ \hline $4$ & $4$ & $ 0.385553 $ & $ 0.324210 $ &$0.062500$ & $0.125000$\\ \hline $5$ & $8$ & $ 0.261068 $ & $ 0.201311 $ &$0.003906$ & $0.031250$ \\ \hline $6$ & $16$ & $ 0.180648 $ & $ 0.133393 $ &$0.000015$ & $0.001953$ \\ \hline $7$ & $32$ & $ 0.126361 $ & $ 0.091307 $ &$2.3 \times 10^{-10}$ & $7.6 \times 10^{-6}$\\ \hline $8$ & $64$ & $ 0.088868 $ & $ 0.063523 $ &$5.4 \times 10^{-20}$ & $1.1 \times 10^{-10}$\\ \hline $9$ & $128$ & $ 0.062669 $ & $ 0.044554 $ &$2.9 \times 10^{-39}$ & $2.7 \times 10^{-20}$\\ \hline \hline \end{tabular} \caption{Normalized minimum product distances and center densities, $n=2^{r-2}$.} \end{center} \end{table} If the goal is to construct lattices which have good performance on both Gaussian and Rayleigh channels, we may assert that, taking into account the trade-off density versus product distance, there are some advantages in considering these rotated $D_n$-lattices instead of rotated $\mbox{\bms Z}^{n}$-lattices, $n=2^{r-2}$, $r \geq 5,$ in high dimensions. \section{Rotated $D_n$-lattices for $n=\frac{p-1}{2}$, $p$ prime, via $\mbox{\bms K}=\mbox{\bms Q}(\zeta_{p}+\zeta_{p}^{-1})$} Let $\zeta=\zeta_{p}$ be a primitive $p$-th root of unity, $p$ prime, $\mbox{\bms L}=\mbox{\bms Q} (\zeta)$ and $\mbox{\bms K}=\mbox{\bms Q} (\zeta+\zeta^{-1})$. We will construct a family of rotated $D_n$-lattices, derived from the construction of a rotated $\mbox{\bms Z}^{n}$-lattice in \cite{Oggier}, via a $\mbox{\bms Z}$-module that is not an ideal.
Let $e_j=\zeta^{j}+\zeta^{-j}$ for $j = 1,\cdots, {(p-1)/2}.$ By \cite{Oggier}, a generator matrix of the rotated $\mbox{\bms Z}^{n}$-lattice $\Lambda=\frac{1}{\sqrt{p}}\sigma_{\alpha}(\mathcal O_{\mbox{\bmsi K}})$ is ${\bm M}= \displaystyle\frac{1}{\sqrt{p}}{\bm T} {\bm N} {\bm A} , \mbox{ where }$ ${\bm T}=(t_{ij})$ is the upper triangular matrix with $t_{ij}=1$ if $i\leq j$ and $t_{ij}=0$ otherwise, ${\bm N} = (\sigma_i(e_j))_{i,j=1}^{n}$ and ${\bm A} =diag(\sqrt{\sigma_{k}(\alpha)}).$ We have ${\bm G}={\bm M} {\bm M^{t}}={\bm I_n}$ \cite{Oggier}. \begin{proposition} \label{rotacionado} Let $I \subseteq \mathcal O_{\mbox{\bmsi K}}$ be the $\mbox{\bms Z}$-module with $\mbox{\bms Z}$-basis $\{e_1,e_2,\cdots,e_{n-1},-e_1 - 2e_2 - \cdots -2e_n\}$ and let $\alpha=2-e_1.$ The lattice $\frac{1}{\sqrt{p}}\sigma_{\alpha}(I) \subseteq \mbox{\bms R}^{\frac{p-1}{2}}$ is a rotated $D_n$-lattice. \end{proposition} \noindent{\it Proof:} Let ${\bm B}$ be a generator matrix for $D_n$ given by the basis $\beta$ in (\ref{beta}). Using homomorphism properties, a straightforward computation shows that ${\bm B}{\bm M}$ is a generator matrix for $\Lambda=\frac{1}{\sqrt{p}}\sigma_{\alpha}(I).$ This lattice is a rotated $D_n$-lattice since ${\bm B} {\bm M} ({\bm B} {\bm M})^{t} = {\bm B} {\bm B^{t}}$ is a Gram matrix of $D_n.$ It has full diversity since it is contained in $\frac{1}{\sqrt{p}}\sigma_{\alpha}(\mathcal O_{\mbox{\bmsi K}})$ \cite{Oggier}. \bb \begin{proposition} The $\mbox{\bms Z}$-module $I \subseteq \mathcal O_{\mbox{\bmsi K}}$ is not an ideal of $\mathcal O_{\mbox{\bmsi K}}$. \end{proposition} \noindent{\it Proof:} The set $\{e_1,e_2,\cdots,e_{n-1},2e_n\}$ is another $\mbox{\bms Z}$-basis of $I$. We first show that $e_n \not\in I.$ Indeed, if $e_n \in I$, then $I = \mathcal O_{\mbox{\bmsi K}},$ but $\left|\frac{\mathcal O_{\mbox{\bmsi K}}}{I}\right| = 2.$ So $e_n \not\in I.$ Next we show that $e_{n-1} e_{1} \not\in I$. Note that $e_{n-1} e_1 = e_{n} + e_{n-2}$ and $e_{n-2} \in I$.
If $e_{n-1}e_1 \in I$, then $e_n = e_{n-1}e_1 - e_{n-2} \in I,$ which contradicts the above. Hence $I$ is not an ideal of $\mathcal O_{\mbox{\bmsi K}}$. \bb \begin{proposition} If $\Lambda =\frac{1}{\sqrt{p}}\sigma_{\alpha}(I) \subseteq \mbox{\bms R}^{\frac{p-1}{2}}$ with $\alpha$ and $I$ as in Proposition \ref{rotacionado}, then the relative minimum product distance is $${\bm d_{p,rel}}(\Lambda)= 2^{\frac{1-p}{4}} p^{\frac{3-p}{4}} .$$ \end{proposition} \noindent{\it Proof:} First note that $|N(e_1)| =1.$ Indeed, $(\zeta+\zeta^{-1})(-\zeta^{p-1} - \zeta^{p-2} - \cdots - \zeta - 1)=1$ and so $$N(\zeta+\zeta^{-1})N(-\zeta^{p-1} - \zeta^{p-2} - \cdots - \zeta - 1)=N(1)=1.$$ Since $e_1 \in \mathcal O_{\mbox{\bmsi K}}$, we have $N(e_1) \in \mbox{\bms Z}$, which implies $|N(e_1)|=1.$ Now, the minimum norm in $D_n$ is $\sqrt{2}.$ By Proposition \ref{distanciaminima}, ${\bm d_{p}(\sigma_{\alpha}(I))} =\sqrt{N(\alpha)} \min_{0\neq y \in I}|N(y)| = \sqrt{p},$ since $\min_{0\neq y \in I}|N(y)| =1.$ Therefore, the relative minimum product distance is $$d_{p,rel}\left(\frac{1}{\sqrt{p}}\sigma_{\alpha}(I)\right) = \left(\frac{1}{\sqrt{p}^{\frac{p-1}{2}}}\right) \left(\frac{1}{\sqrt{2}^{\frac{p-1}{2}}}\right)\sqrt{p} = 2^{\frac{1-p}{4}} p^{\frac{3-p}{4}}.$$ \bb Table $3$ shows a comparison between the normalized $d_{p,rel}$ and the center density $\delta$ of the rotated $\mbox{\bms Z}^n$-lattices constructed in \cite{Oggier} and the rotated $D_n$-lattices constructed here, $n=(p-1)/2$.
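Both the orthogonality of ${\bm M}$ and the closed form for $d_{p,rel}$ can be spot-checked numerically. The following Python sketch is illustrative only; it computes the real embeddings as $\sigma_i(e_j)=2\cos(2\pi ij/p)$ and $\sigma_k(\alpha)=2-2\cos(2\pi k/p)$ for $\alpha=2-e_1$.

```python
import math

def rotated_Zn_generator(p):
    """M = (1/sqrt(p)) T N A with T upper triangular of ones,
    N_{ij} = sigma_i(e_j) = 2 cos(2 pi i j / p), A = diag(sqrt(sigma_k(alpha)))."""
    n = (p - 1) // 2
    N = [[2 * math.cos(2 * math.pi * i * j / p) for j in range(1, n + 1)]
         for i in range(1, n + 1)]
    alpha = [2 - 2 * math.cos(2 * math.pi * k / p) for k in range(1, n + 1)]
    return [[sum(N[j][k] for j in range(i, n)) * math.sqrt(alpha[k]) / math.sqrt(p)
             for k in range(n)] for i in range(n)]

def gram(M):
    n = len(M)
    return [[sum(M[i][t] * M[j][t] for t in range(n)) for j in range(n)]
            for i in range(n)]

def dprel_root(p):
    """n-th root of d_{p,rel} = 2^{(1-p)/4} p^{(3-p)/4}, where n = (p-1)/2."""
    n = (p - 1) // 2
    return (2 ** ((1 - p) / 4) * p ** ((3 - p) / 4)) ** (1 / n)

p, n = 11, 5
G = gram(rotated_Zn_generator(p))
# M M^t = I_n, i.e., the construction really yields a rotated Z^n-lattice
assert all(abs(G[i][j] - (i == j)) < 1e-9 for i in range(n) for j in range(n))
print(round(dprel_root(p), 5))  # Table 3 entry for p = 11: 0.27097
```

The same check for the other primes in Table 3 reproduces the $D_n$ column there.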
As in the previous section (cf.~(\ref{limite})), we also have, for $\Lambda = \frac{1}{\sqrt{p}}(\sigma_{\alpha}(I)) \subseteq \mbox{\bms R}^{\frac{p-1}{2}}$ and $p$ prime, the following results: $$\lim_{n\longrightarrow \infty}\frac{\sqrt[n]{d_{p,rel}(\mbox{\bms Z}^{n})}}{\sqrt[n]{d_{p,rel}(D_n)}} =\sqrt{2}\,\, \mbox{and} \,\, \lim_{n\longrightarrow \infty}\frac{\delta(\mbox{\bms Z}^{n})}{\delta(D_n)} =0.$$ \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \hline $p$ & $n$ & $\sqrt[n]{d_{p,rel}(\mbox{\bms Z}^n)}$ & $\sqrt[n]{d_{p,rel}(D_n)}$ & $\delta(\mbox{\bms Z}^n)$ & $\delta(D_n)$ \\ \hline $11$ & $5$ & $ 0.38321 $ & $0.27097 $ & $0.03125$ & $0.08838$ \\ \hline $13$ & $6$ & $ 0.34344 $ & $0.24285 $ & $0.01563$ & $0.06250$ \\ \hline $17$ & $8$ & $ 0.28952$ & $0.20472 $ & $0.00390$ & $0.03125$ \\ \hline $19$ & $9$ & $0.27187 $ & $ 0.19105 $ & $0.00195$ & $0.02209$ \\ \hline $23$ & $11$ & $ 0.24045$ & $ 0.17003 $ & $0.00049 $ & $0.01105$\\ \hline \hline \end{tabular} \caption{Normalized minimum product distance and center density of rotated $\mbox{\bms Z}^n$- and $D_n$-lattices, $n=\frac{p-1}{2}$, $p$ prime} \end{center} \end{table} \section{Conclusion} In this work we have constructed families of full diversity rotated $D_n$-lattices. These lattices present good performance for signal transmission over both Gaussian and Rayleigh channels. Considering the trade-off between density and relative product distance, we may assert that the rotated $D_n$-lattices presented here have better performance than the known rotated $\mbox{\bms Z}^{n}$-lattices for $n=2^{r-2},$ $r \geq 5$ and $n=\frac{p-1}{2}$, $p$ prime, $p \geq 7$. \end{document}
\begin{document} \title{\bf{Algorithmic aspects of multigrid methods for optimization in shape spaces}} \author{Martin Siebenborn\thanks{Universit\"at Trier, D-54286 Trier, Germany, Email: [email protected], [email protected]} \and Kathrin Welker\footnotemark[1] } \date{} \maketitle \begin{abstract} \noindent We examine the interaction of multigrid methods and shape optimization in appropriate shape spaces. Our aim is a scalable algorithm for application on supercomputers, which can only be achieved by mesh-independent convergence. The impact of discrete approximations of geometrical quantities, like the mean curvature, on a multigrid shape optimization algorithm with quasi-Newton updates is investigated. For the purpose of illustration, we consider a complex model for the identification of cellular structures in biology with minimal compliance in terms of elasticity and diffusion equations. \end{abstract} \section{Introduction} PDE constrained shape optimization is becoming more and more suited for practical applications, e.g., acoustics \cite{Berggren2016largescale}, aerodynamics \cite{Mohammadi-2001} and electrostatics \cite{Langer-2015}. A finite dimensional optimization problem can be obtained for example by representing shapes as splines. However, the connection of shape calculus with infinite dimensional spaces \cite{Delfour-Zolesio-2001,ItoKunisch,SokoZol} leads to a more flexible approach. In recent work, it has been shown that PDE constrained shape optimization problems can be embedded in the framework of optimization on shape spaces. Finding a shape space and an associated metric is a challenging task and different approaches lead to various models. There exists no common shape space suitable for all applications. One possible approach is to define shapes as elements of a Riemannian manifold as proposed in \cite{MichorMumford2,MichorMumford1}. 
In \cite{MichorMumford}, a survey of various suitable inner products is given, e.g., the curvature weighted Riemannian metric and the Sobolev metric. From a theoretical point of view this is attractive because algorithmic ideas from \cite{Absil} can be combined with approaches from differential geometry. For example in \cite{schulz2014structure}, shape optimization is considered as optimization on a Riemannian shape manifold. This particular manifold contains only shapes with infinitely differentiable boundaries, which limits the practical applicability. From a computational point of view, usually finite element methods are used to discretize the PDE models, where one has to deal with polygonal shape representations. A well-established approach is to deal with shape derivatives in a so-called Hadamard form, i.e., in the form of integrals over the surface (cf.~\cite{Mohammadi-2001,SokoZol}). An equivalent and intermediate result in the process of deriving Hadamard expressions is a volume expression of the shape derivative. One usually has to require additional regularity assumptions in order to transform volume into surface forms. In addition to saving analytical effort, this makes volume expressions preferable to Hadamard forms, which is also utilized in \cite{Langer-2015}. In the case of the more attractive volume formulation, the shape manifold and the corresponding inner products mentioned above are not appropriate. One possible approach to use these formulations is given in \cite{schulz2015Steklov}. Here an inner product, which is called Steklov-Poincar\'{e} metric, and a shape space are proposed. These are further considered in the following. The combination of this particular shape space and its associated inner product is an essential step towards applying efficient finite element solvers. This is especially important in order to avoid complicated load balancing for parallel computers with respect to both the volume and the surface elements.
This is necessary if there are many computationally costly operations on surfaces, such as the evaluation of the Sobolev metric. By using volume forms and Steklov-Poincar\'{e}-type metrics one only has to consider standard finite element assemblies and a classical load balancing with respect to volume elements. In general, practical applications necessitate very fine discretizations. Here one can observe mesh-dependence at several points, e.g., the number of iterations of the PDE solver and the overall optimization loop. The same holds for discrete approximations of geometrical quantities, like the mean curvature (cf.~\cite{meyer2003discrete}), which strongly depend on the chosen mesh. A perimeter regularization in particular is affected by the increasing values of the discretized mean curvature within a multigrid framework. In particular, scalable algorithms for application on supercomputers can only be achieved by mesh-independent convergence of both the PDE simulation and the overall optimization loop. In order to come up with such an algorithm, one has to apply, on the one hand, a multigrid preconditioner to the PDE solver and, on the other hand, quasi-Newton methods or something even more sophisticated for the optimization. In this paper, we examine the interaction of multigrid and shape optimization based on Steklov-Poincar\'{e} metrics and the corresponding shape space. We focus on this particular combination because it is well-suited for a large-scale finite element algorithm with quasi-Newton techniques. This paper has the following structure. In section \ref{section_model}, besides a short review of the background of shape optimization, PDE models are formulated. Furthermore, the corresponding shape derivatives are given. Section \ref{section_algorithm} summarizes the entire process from shape derivatives to a complete optimization algorithm in suitable shape spaces. A numerical test framework and results are presented in section \ref{section_results}.
\section{Model formulations and their shape derivatives}\label{section_model} After setting up notation and terminology we formulate the model problems. They are motivated by finding the optimal design of cellular structures. In the third part of this section we deduce the shape derivatives of the model problems. \subsection{Notations and definitions} One focus in shape optimization is to investigate shape functionals. First, we define such a functional. \begin{definition}[Shape functional] Let $D$ denote a non-empty subset of ${\mathbb{R}}^d$, where $d\in\mathbb{N}$. Moreover, $\mathcal{A}\subset \{\Omega\colon \Omega \subset D\}$ denotes a set of subsets. A function $$J\colon \mathcal{A}\to {\mathbb{R}}\text{, } \Omega\mapsto J(\Omega)$$ is called a shape functional. \end{definition} \noindent In the following, let $D$ be as in the definition above. Moreover, let $\{F_t\}_{t\in[0,T]}$ be a family of mappings $F_t\colon \overline{D}\to\mathbb{R}^d$ such that $F_0=id$, where $T>0$. This family transforms a domain $\Omega\subset D$ into \emph{perturbed domains} \begin{equation*} \Omega_t := F_t(\Omega)=\{F_t(x)\colon x\in \Omega\}\text{ with }\Omega_0=\Omega \end{equation*} and the boundary $\Gamma=\partial\Omega$ into \emph{perturbed boundaries} \begin{equation*} \Gamma_t:= F_t(\Gamma)=\{F_t(x)\colon x\in \Gamma\}\text{ with }\Gamma_0=\Gamma. \end{equation*} Considering the domain $\Omega$ as a collection of material particles, which are changing their position in the time-interval $[0,T]$, the family $\{F_t\}_{t\in[0,T]}$ describes the motion of each particle. This means that at time $t\in [0,T]$ a particle $x\in\Omega$ has the new position $x_t:= F_t(x)\in\Omega_t$ with $x_0=x$. The motion of each such particle $x$ can be described by the \emph{velocity method} or by the \emph{perturbation of identity}. Let $V$ be a sufficiently smooth vector field. 
For $V$, the velocity method defines the family of the above-mentioned mappings as the flow $F_t(x):= \xi(t,x)$, which is determined by the following initial value problem: \begin{equation*} \begin{split} \frac{d\xi(t,x)}{dt}&=V(\xi(t,x))\\ \xi(0,x)&=x \end{split} \end{equation*} The perturbation of identity is defined by $F_t(x):= x+tV(x)$ and is used in the following. This paper deals with PDE constrained shape optimization problems, i.e., shape optimization problems constrained by equations involving an unknown function of two or more variables and at least one partial derivative of this function. Such a problem is given by \begin{equation*} \min_{\Omega} J(\Omega), \end{equation*} where $J$ is a shape functional that additionally depends on the solution of a PDE. To solve these problems, we need their shape derivatives: \begin{definition}[Shape derivative] \label{def_shapeder} Let $D\subset {\mathbb{R}}^d$ be open, where $d\geq 2$ is a natural number. Moreover, let $k\in\mathbb{N}\cup \{\infty\}$ and let $\Omega\subset D$ be measurable. The Eulerian derivative of a shape functional $J$ at $\Omega$ in direction $V\in\mathcal{C}^k_0(D,{\mathbb{R}}^d)$ is defined by \begin{equation} \label{eulerian} DJ(\Omega)[V]:= \lim\limits_{t\to 0^+}\frac{J(\Omega_t)-J(\Omega)}{t}. \end{equation} If for all directions $V\in\mathcal{C}^k_0(D,{\mathbb{R}}^d)$ the Eulerian derivative (\ref{eulerian}) exists and the mapping \begin{equation*} G(\Omega)\colon \mathcal{C}^k_0(D,{\mathbb{R}}^d)\to {\mathbb{R}}, \ V\mapsto DJ(\Omega)[V] \end{equation*} is linear and continuous, the expression $DJ(\Omega)[V]$ is called the shape derivative of $J$ at $\Omega$ in direction $V\in\mathcal{C}^k_0(D,{\mathbb{R}}^d)$. In this case, $J$ is called shape differentiable of class $\mathcal{C}^k$ at $\Omega$. \end{definition} For a detailed introduction to shape calculus, we refer to the monographs \cite{Delfour-Zolesio-2001,SokoZol}.
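To illustrate the definition above, consider the simplest shape functional, the volume $J(\Omega)=\int_\Omega 1\,dx$, whose shape derivative is $DJ(\Omega)[V]=\int_\Omega \operatorname{div} V\,dx$. The following Python sketch (purely illustrative; the choices $\Omega=(0,1)^2$ and $V(x,y)=(x^2,xy)$ are ours, not from the paper) compares a finite-difference approximation of the Eulerian derivative under the perturbation of identity with this volume expression:

```python
# J(Omega) = vol(Omega).  Under F_t(x) = x + t V(x),
# vol(F_t(Omega)) = int_Omega det(I + t DV(x)) dx, and the Eulerian
# derivative at t = 0 is int_Omega div V dx.
# Illustrative choices: Omega = (0,1)^2, V(x,y) = (x^2, x*y),
# hence DV = [[2x, 0], [y, x]] and div V = 3x.

def perturbed_volume(t, m=400):
    # midpoint rule in x; det(I + t DV) = (1 + 2tx)(1 + tx) - 0*(ty)
    # does not depend on y here, so the y-integration is trivial
    h = 1.0 / m
    return sum((1 + 2 * t * x) * (1 + t * x)
               for x in ((i + 0.5) * h for i in range(m))) * h

def volume_expression(m=400):
    # int_Omega div V dx with div V = 3x
    h = 1.0 / m
    return sum(3 * (i + 0.5) * h for i in range(m)) * h

t = 1e-6
fd = (perturbed_volume(t) - perturbed_volume(0.0)) / t
assert abs(fd - volume_expression()) < 1e-4
print(fd, volume_expression())
```

Both values are close to $3/2$, as the exact calculation $\int_{(0,1)^2} 3x\,dx = 3/2$ predicts.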
In particular, \cite{SokoZol} states that shape derivatives can always be expressed as boundary integrals due to the Hadamard structure theorem \cite[theorem 2.27]{SokoZol}. In many cases, the shape derivative arises in two equivalent notational forms: \begin{align} \label{dom_form} DJ_\Omega[V]&:=\int_\Omega F(x)V(x)\, dx &\text{(volume formulation)}\\ \label{bound_form} DJ_\Gamma[V]&:=\int_\Gamma f(s)V(s)^T n(s)\, ds &\text{(surface formulation)} \end{align} Here $F$ is a (differential) operator acting linearly on the vector field $V$ and $f\in L^1(\Gamma)$ with $DJ_\Omega[V]=DJ(\Omega)[V]=DJ_\Gamma[V]$. In general, we have to deal with so-called material derivatives in order to derive shape derivatives. The definition of the material derivative is given in the following. For a material-derivative-free approach we refer to \cite{Sturm2013}. \begin{definition}[Material derivative] \label{material_deriv} Let $\Omega,\Omega_t,F_t$ and $T$ be as above. Moreover, let $\{p_t\colon \Omega_t\to {\mathbb{R}}\colon t\leq T \}$ denote a family of mappings. The material derivative of a generic function $p\hspace{.3mm}(=p_0)\colon \Omega\to \mathbb{R}$ at $x\in\Omega$ is denoted by $D_mp$ or $\dot{p}$ and given by the derivative of the composed function $p_t\circ F_t\colon \Omega\to\Omega_t\to\mathbb{R}$ defined in the fixed domain $\Omega$, i.e., \begin{equation*} \dot{p}(x):=\lim\limits_{t\to 0^+}\frac{\left(p_t\circ F_t\right)(x)-p(x)}{t} =\frac{d^+}{dt}\left(p_t\circ F_t\right)(x)\,\rule[-2.5mm]{.1mm}{6mm}_{\hspace{1mm}t=0}. \end{equation*} \end{definition} \noindent In the following, let $p\colon \Omega\to\mathbb{R}$ be a function. The classical chain rule for differentiation applied to $\dot{p}$ gives the relation between material and shape derivatives. The shape derivative of $p$ in the direction of a vector field $V$ is denoted by $p'$ and given by \begin{equation} \label{shape_der} p'=\dot{p}-V^T\nabla p \quad \text{in }\Omega.
\end{equation} In subsection \ref{subsection_shapederivatives}, the following rules for the material derivative are needed to derive shape derivatives of objective functions depending on solutions of PDEs. For the material derivative the product rule holds, i.e., \begin{equation} \label{product_rule} D_m(p\hspace{.7mm}q)=D_mp\hspace{.7mm}q+p\hspace{.7mm}D_mq. \end{equation} While the shape derivative commutes with the gradient, the material derivative does not, but the following equality, which is proven in \cite{Berggren}, holds: \begin{equation} \label{material_grad} D_m\nabla p=\nabla D_mp-\nabla V^T\nabla p. \end{equation} The concept of material and shape derivatives of a function $p\colon \Omega\to {\mathbb{R}}$ can be extended to its boundary $\Gamma=\partial\Omega$. We mention only a few aspects. For more details we refer to the literature, e.g., \cite{SokoNov}. Let $z\colon \Gamma\to {\mathbb{R}}$ be the trace of $p$ on the boundary $\Gamma$. In this setting, the boundary shape derivative $z'$ is defined by \begin{equation} \label{shape_der_boundary} z'=\dot{p}-V^T\nabla_\Gamma p, \end{equation} where $\nabla_\Gamma$ denotes the tangential gradient given by \begin{equation*} \nabla_\Gamma p=\nabla p-\frac{\partial p}{\partial n}n. \end{equation*} Here $\frac{\partial}{\partial n}$ denotes the derivative normal to $\Gamma$.
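The identity (\ref{material_grad}) can be spot-checked numerically for the special family $p_t\equiv p$ (a fixed function evaluated on the moving domain), for which $D_mp=V^T\nabla p$. The Python sketch below is purely illustrative: $p(x,y)=\sin(x)\,y^2$ and $V(x,y)=(xy,\,x+y)$ are our own choices, and all derivatives are approximated by central differences.

```python
import math

h = 1e-5  # finite-difference step

def p(x, y):
    return math.sin(x) * y * y

def V(x, y):
    return (x * y, x + y)

def grad(f, x, y):
    # central finite differences
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

def Dm(f):
    # material derivative of the fixed function f: D_m f = V . grad f
    def g(x, y):
        vx, vy = V(x, y)
        fx, fy = grad(f, x, y)
        return vx * fx + vy * fy
    return g

x0, y0 = 0.7, 0.4
# left-hand side: componentwise material derivative of grad p
lhs = (Dm(lambda x, y: grad(p, x, y)[0])(x0, y0),
       Dm(lambda x, y: grad(p, x, y)[1])(x0, y0))
# right-hand side: grad(D_m p) - (grad V)^T grad p
gDmp = grad(Dm(p), x0, y0)
gp = grad(p, x0, y0)
dV1 = grad(lambda x, y: V(x, y)[0], x0, y0)  # (dV_1/dx, dV_1/dy)
dV2 = grad(lambda x, y: V(x, y)[1], x0, y0)  # (dV_2/dx, dV_2/dy)
rhs = (gDmp[0] - (dV1[0] * gp[0] + dV2[0] * gp[1]),
       gDmp[1] - (dV1[1] * gp[0] + dV2[1] * gp[1]))
assert all(abs(a - b) < 1e-4 for a, b in zip(lhs, rhs))
print(lhs, rhs)
```

Both sides agree to within finite-difference accuracy at the chosen test point.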
Combining (\ref{shape_der}) with (\ref{shape_der_boundary}) gives the relation between boundary and domain shape derivatives: \begin{equation*} z'=p'+V^T\frac{\partial p}{\partial n}n \end{equation*} In order to deduce shape derivative formulas we have to consider the derivative of perturbed objective functions: \begin{align*} \frac{d^+}{dt}\left(\int_{\Omega_t}p_t\ dx_t\right)\,\rule[-4mm]{.1mm}{9mm}_{\hspace{1mm}t=0} \qquad \text{and}\qquad \frac{d^+}{dt}\left(\int_{\Gamma_t}z_t\ ds_t\right)\,\rule[-4mm]{.1mm}{9mm}_{\hspace{1mm}t=0}, \end{align*} where $\Omega,\Omega_t,p,p_t,\Gamma_t$ are as above and $z_t\colon\Gamma_t\to {\mathbb{R}}$ denotes a mapping. They are given by \begin{align} \label{der_domain_int} \frac{d^+}{dt}\left(\int_{\Omega_t}p_t\ dx_t\right)\,\rule[-4mm]{.1mm}{9mm}_{\hspace{1mm}t=0} & =\int_{\Omega}\dot{p}+\mathrm{div}(V)p\ dx \text{,}\\ \label{der_boundary_int} \frac{d^+}{dt}\left(\int_{\Gamma_t}z_t\ ds_t\right)\,\rule[-4mm]{.1mm}{9mm}_{\hspace{1mm}t=0} & =\int_{\Gamma}\dot{z}+\mathrm{div}_\Gamma(V)z\ ds, \end{align} where \begin{equation*} \mathrm{div}_\Gamma(V) := \text{div} (V) - n^T \frac{\partial V}{\partial n} \end{equation*} denotes the tangential divergence of $V$. For their proofs we refer to the literature, e.g., \cite{HaslingerMakinen,ItoKunisch,Welker}. \subsection{Problem formulations} Let $\Omega\subset D$ be a bounded Lipschitz domain with boundary $\partial\Omega$. This domain is assumed to be a cuboid and partitioned into a subdomain $\Omega_\text{out}\subset\Omega$ and a finite number of disjoint subdomains $\Omega_i\subset \Omega$ with boundaries $\Gamma_i:=\partial\Omega_i$ such that $\Omega_\text{out}\bigcupdot\left(\cupdot_{i\in\mathbb{N}} \Omega_i\right) \bigcupdot\left(\cupdot_{i\in\mathbb{N}}\Gamma_i\right)=\Omega$ and $\mathrm{\Gamma_{bottom}}\cupdot\mathrm{\Gamma_{sides}}\cupdot\mathrm{\Gamma_{top}}=\partial\Omega\ (=:\mathrm{\Gamma_{out}})$, where $\cupdot$ denotes the disjoint union.
The union of all domains $\Omega_i$ is denoted by $\Omega_\text{int} := \bigcupdot_{i\in\mathbb{N}}\Omega_i$ and the union of all boundaries $\Gamma_i$ is called the interface and denoted by $\mathrm{\Gamma_{int}}:=\bigcupdot_{i\in\mathbb{N}}\Gamma_i$. The outer normal vector field to $\Omega_\text{int}$ is given by $n$. Figure \ref{fig_domain} illustrates this situation. In our setting, $\Omega$ is meant to be composed of two distinct materials, one in $\Omega_\text{int}$ and one in $\Omega_\text{out}$. Let $\nu_l\in{\mathbb{R}}$ be arbitrary constants, where $l\in\{1,\cdots,4\}$. For the objective function \begin{equation} J(y,u,\Omega) = j_1(u,\Omega)+j_2(y,\Omega)+j_3(\Omega)+j_4(\Omega)\label{objective} \end{equation} with \begin{align*} j_1(u,\Omega)&:=\nu_1 \int_{\Omega} \sigma(u):\epsilon(u)\; dx,\\ j_2(y,\Omega) &:=\frac{\nu _2}{2} \int_0^T \int_{\Omega} (y - \bar{y})^2\; dx\, dt,\\ j_3(\Omega)& := \nu_3 \int_{\Omega_\text{out}} 1\; dx ,\\ j_4(\Omega) & := \nu_4 \int_{\mathrm{\Gamma_{int}}} 1\; ds \end{align*} we consider the following PDE constrained optimization problem in strong form: \begin{align} \min\limits_{\mathrm{\Gamma_{int}}}\; J(y,u,\Omega)\label{eq_minimization} \end{align} subject to the following constraints: \begin{align} - \text{div}\,\sigma(u) & = 0 \quad \text{in}\; \Omega \label{le1} \\ u & = 0 \quad \text{on}\; \mathrm{\Gamma_{bottom}} \label{le2} \\ \sigma(u) \cdot n & = f \quad \text{on}\; \mathrm{\Gamma_{top}} \cup \mathrm{\Gamma_{sides}} \label{le4}\\[10pt] \frac{\partial y}{\partial t} - \text{div} (k \nabla y) & = 0 \quad \text{in} \; \Omega \times (0,T] \label{diff1}\\ y & = 1 \quad \text{on}\; \mathrm{\Gamma_{top}} \times (0,T] \label{diff2} \\ \frac{\partial y}{\partial n} & = 0 \quad \text{on}\; \left(\mathrm{\Gamma_{bottom}}\cup\mathrm{\Gamma_{sides}}\right) \times (0,T] \label{diff3} \\ y & = 0 \quad \text{in} \; \Omega \times \lbrace 0 \rbrace \label{diff4} \end{align} Equations \eqref{le1}-\eqref{le4} describe the linear 
elasticity model, where $\sigma := \lambda \text{Tr}(\epsilon) I + 2\mu \epsilon$ denotes the stress tensor and $\epsilon = \frac{1}{2}\left( \nabla u + \nabla u^T \right)$ the strain tensor with respect to the Lam\'{e} parameters $\lambda$ and $\mu$. We further assume \begin{equation} f = \begin{cases} 0 &\quad \text{on}\; \mathrm{\Gamma_{sides}}\\ f_\text{top} &\quad \text{on}\; \mathrm{\Gamma_{top}}\\ \end{cases}, \end{equation} where $f_\text{top}$ is a force at the top boundary of the domain, which leads to the deformation $u$. A diffusion model is given by \eqref{diff1}-\eqref{diff4} with a jumping permeability coefficient $k$. The different properties of the two materials with respect to elasticity and permeability are modelled by the following choice of coefficients: \begin{equation*} k := \begin{cases} k_\text{out} &\text{in} \; \Omega_\text{out}\\ k_\text{int} &\text{in} \; \Omega_\text{int}\\ \end{cases},\quad \lambda := \begin{cases} \lambda_\text{out} & \text{in} \; \Omega_\text{out}\\ \lambda_\text{int} & \text{in} \; \Omega_\text{int}\\ \end{cases},\quad \mu := \begin{cases} \mu_\text{out} & \text{in} \; \Omega_\text{out}\\ \mu_\text{int} & \text{in} \; \Omega_\text{int}\\ \end{cases}. \end{equation*} Due to the jump in these coefficients, the formulations (\ref{le1}) and (\ref{diff1}) are to be understood only formally. The objective function $J$ is composed of the four shape functionals $j_l$. Here $j_1$ corresponds to the minimization of the compliance of the composite material in the domain $\Omega$. With $j_2$ the diffusion model is fitted to data measurements $\bar{y}$. We assume that the observation satisfies $\bar{y}\in L^2\left(0,T;L^2 (\Omega)\right)$. Since we are interested in a space-filling design of the inclusions $\Omega_\text{int}$, the purpose of the functional $j_3$ is to minimize the volume of $\Omega_\text{out}$. This is equivalent to maximizing the volume of $\Omega_\text{int}$ if the outer shape $\mathrm{\Gamma_{out}}$ is fixed.
The functional $j_4$ is a perimeter regularization. Of course, the integral in $j_4$ has to be understood as the sum of the integrals over the boundaries $\Gamma_i$. We formulate explicitly the continuity of the state and of the flux at the interface as \begin{align} \label{interface_cond1} \left\llbracket u\right\rrbracket & =0, & \hspace{-2cm} \left\llbracket \sigma(u)\cdot n\right\rrbracket =0& \hspace{1cm} \text{on }\mathrm{\Gamma_{int}},\\[5pt] \label{interface_cond2} \left\llbracket y\right\rrbracket & =0, & \hspace{-2cm} \left\llbracket k\nd{y}\right\rrbracket =0& \hspace{1cm}\text{on }\mathrm{\Gamma_{int}}\times(0,T], \end{align} where the jump symbol $\llbracket\cdot\rrbracket$ denotes the discontinuity across the interface $\mathrm{\Gamma_{int}}$ and is defined by $\llbracket v \rrbracket := v\,\rule[-2mm]{.1mm}{4mm}_{\hspace{.5mm}\Omega_\text{int}}-v\,\rule[-2mm]{.1mm}{4mm}_{\hspace{.5mm}\Omega_\text{out}}$ for functions $v$ defined on $\Omega$. \begin{figure} \caption{Illustration of the domain $\Omega$} \label{fig_domain} \end{figure} \begin{remark} Equations (\ref{le1})-(\ref{le4}) admit a solution $u\in H^1(\Omega, \mathbb{R}^d)$ and (\ref{diff1})-(\ref{diff4}) have a solution $y\in L^2\left(0,T;H^1(\Omega)\right)$. However, it turns out that the restrictions of these solutions to $\Omega_\text{int}$ and $\Omega_\text{out}$ have a higher regularity (cf.~for example \cite[proposition 11.7]{ItoKunisch}). To be more precise, $u\,\rule[-2mm]{.1mm}{4mm}_{\hspace{.5mm}\Omega_\text{int}}\in H^2(\Omega_\text{int}, \mathbb{R}^d)$, $u\,\rule[-2mm]{.1mm}{4mm}_{\hspace{.5mm}\Omega_\text{out}}\in H^2(\Omega_\text{out}, \mathbb{R}^d)$, $y\,\rule[-2mm]{.1mm}{4mm}_{\hspace{.5mm}\Omega_\text{int}}\in L^2\left(0,T;H^2(\Omega_\text{int})\right)$ and $y\,\rule[-2mm]{.1mm}{4mm}_{\hspace{.5mm}\Omega_\text{out}}\in L^2\left(0,T;H^2(\Omega_\text{out})\right)$. In this setting, the integral over $\Omega$ has to be understood as the sum of the integrals over $\Omega_\text{int}$ and $\Omega_\text{out}$.
In the following, the integral over $\Omega$ always denotes this sum. Thus, it is guaranteed that $\frac{\partial u\,\rule[-.7mm]{.1mm}{2mm}_{\hspace{.5mm}\Omega_\text{int}}}{\partial n}\in H^{1/2}\left(\mathrm{\Gamma_{int}}, \mathbb{R}^d\right)$ and $\frac{\partial y\,\rule[-.7mm]{.1mm}{2mm}_{\hspace{.5mm}\Omega_\text{int}}}{\partial n}\in L^2\left(0,T;H^{1/2}(\mathrm{\Gamma_{int}})\right)$ by the trace theorem for Sobolev spaces. \end{remark} The boundary value problem (\ref{le1})-(\ref{le4}) is given in its weak form by \begin{equation} a_1(u,w)=b_1(u,w_1, w_2)\, , \ \forall w\in H^1\left(\Omega, \mathbb{R}^d\right) \end{equation} and for all $w_1\in H^{-1/2}\left(\mathrm{\Gamma_{bottom}}, \mathbb{R}^d\right)$, $w_2\in H^{1/2}\left(\mathrm{\Gamma_{sides}} \cup\mathrm{\Gamma_{top}}, \mathbb{R}^d\right)$. Here the bilinear form $a_1(u,w)$ is given by \begin{equation} \begin{split} a_1(u,w) & :=\int_{\Omega}\sigma(u):\epsilon(w)\hspace{.7mm}dx -\int_{\mathrm{\Gamma_{int}}}\left\llbracket \left(\sigma(u)\cdot n\right)^Tw\right\rrbracket ds \\ & \hspace*{.5cm} -\int_{\mathrm{\Gamma_{out}}}\left(\sigma(u)\cdot n\right)^Tw \hspace{.7mm}ds \end{split} \end{equation} and $b_1(u,w_1,w_2)$ is defined by \begin{equation} b_1(u,w_1,w_2) :=\int_{\mathrm{\Gamma_{bottom}}}w_1^T u\hspace{.7mm}ds +\int_{\mathrm{\Gamma_{out}}\setminus\mathrm{\Gamma_{bottom}}}w_2^T \left(\sigma (u)\cdot n-f\right)\hspace{.3mm}ds. \end{equation} Moreover, problem (\ref{diff1})-(\ref{diff4}) is written in weak form as \begin{equation} \label{wf} a_2(y,z)=b_2(y,z_1, z_2)\, , \ \forall z\in W\left(0,T;H^1(\Omega)\right) \end{equation} and for all $z_1\in L^2\left(0,T;H^{-1/2}(\mathrm{\Gamma_{top}})\right)$, $z_2\in L^2\left(0,T;H^{1/2}(\mathrm{\Gamma_{bottom}}\cup\mathrm{\Gamma_{sides}})\right)$ as in \cite{schulz2014structure}.
Here the bilinear form $a_2(y,z)$ is given by \begin{equation} \begin{split} a_2(y,z) & :=\intdx{y(T,x)\, z(T,x)}-\intdxdt{\td{z}y+k\nabla y^T\nabla z}\\ & \hspace*{.5cm}-\intdsidt{\left\llbracket k\nd{y}z\right\rrbracket}-\intdsodt{k_\text{out}\nd{y}z}\label{wfa2} \end{split} \end{equation} and $b_2(y,z_1,z_2)$ by \begin{equation} b_2(y,z_1,z_2) :=\intdt{\int_{\mathrm{\Gamma_{top}}}z_1(y-1)\, ds}+\intdt{\int_{\mathrm{\Gamma_{out}}\setminus\mathrm{\Gamma_{top}}}z_2\nd{y}\, ds}. \label{wfb} \end{equation} For properties of the function spaces, we refer to the literature, e.g., \cite{GrossReusken,Troeltzsch}. The Lagrangian of (\ref{objective})-(\ref{diff4}) is defined by \begin{equation} {\mathscr{L}}(y,u,w,z,\Omega)={\mathscr{L}}_1(u,w,\Omega)+{\mathscr{L}}_2(y,z,\Omega)+{\mathscr{L}}_3(\Omega)+{\mathscr{L}}_4(\Omega) \label{lagrangian} \end{equation} with \begin{align*} {\mathscr{L}}_1(u,w,\Omega)&:=j_1(u,\Omega) + a_1(u,w)-b_1(u,w_1,w_2) ,\\ {\mathscr{L}}_2(y,z,\Omega) &:=j_2(y,\Omega) + a_2(y,z)-b_2(y,z_1,z_2),\\ {\mathscr{L}}_3(\Omega)& := j_3(\Omega) ,\\ {\mathscr{L}}_4(\Omega) & := j_4(\Omega). \end{align*} The adjoint problem to (\ref{le1})-(\ref{le4}), which we obtain from differentiating the Lagrangian ${\mathscr{L}}_1$ with respect to $u$, is given in strong form by \begin{align} -\text{div}\,\sigma(w) & = 0 \quad \text{in}\; \Omega \label{adjointle1} \\ w & = 0 \quad \text{on}\; \mathrm{\Gamma_{bottom}} \\ \sigma(w) \cdot n & = -\nu_1 f \quad \text{on}\; \mathrm{\Gamma_{sides}}\cup\mathrm{\Gamma_{top}} \\ w_1&= \sigma(w) \cdot n \quad \text{on}\; \mathrm{\Gamma_{bottom}} \\ w_2 & = - w \quad \text{on}\; \mathrm{\Gamma_{sides}}\cup\mathrm{\Gamma_{top}} \label{adjointle3} \end{align} and the state equation to (\ref{le1})-(\ref{le4}), which we get by differentiating the Lagrangian ${\mathscr{L}}_1$ with respect to $w$, is given in strong form by \begin{equation} -\text{div}\,\sigma(u) = 0 \quad \text{in}\; \Omega. 
\label{designle} \end{equation} Moreover, the adjoint problem to (\ref{diff1})-(\ref{diff4}), which we obtain from differentiating the Lagrangian ${\mathscr{L}}_2$ with respect to $y$, is given in strong form by \begin{align} -\frac{\partial z}{\partial t}-\mathrm{div}(k\nabla z)&=-\nu_2(y-\overline{y}) \quad \text{in }\Omega \times [0,T)\label{adjointdiff1}\\ \nd{z}&=0\quad\text{on } \left(\mathrm{\Gamma_{bottom}}\cup\mathrm{\Gamma_{sides}}\right) \times [0,T)\\ z&=0\quad\text{on }\mathrm{\Gamma_{top}} \times [0,T)\\ z&= 0\quad\text{in }\Omega \times \{T\}\\ z_1&=k_\text{out}\nd{z}\quad\text{on }\mathrm{\Gamma_{top}} \times [0,T)\\ z_2&=-k_\text{out}z\quad\text{on }\left(\mathrm{\Gamma_{bottom}}\cup\mathrm{\Gamma_{sides}}\right) \times [0,T)\label{adjointdiff7} \end{align} and the state equation to (\ref{diff1})-(\ref{diff4}), which we get by differentiating the Lagrangian ${\mathscr{L}}_2$ with respect to $z$, is given in strong form by \begin{equation} \td{y}-\mathrm{div}(k\nabla y)=0 \quad \text{in }\Omega \times (0,T]. \label{designdiff} \end{equation} We formulate explicitly the interface conditions of (\ref{adjointle1})-(\ref{adjointle3}) and (\ref{adjointdiff1})-(\ref{adjointdiff7}) by \begin{align} \label{interface_w}\left\llbracket w\right\rrbracket & =0, & \hspace{-2cm} \left\llbracket \sigma(w)\cdot n\right\rrbracket =0& \hspace{1cm} \text{on }\mathrm{\Gamma_{int}},\\[5pt] \left\llbracket z\right\rrbracket & =0, & \hspace{-2cm} \left\llbracket k\nd{z}\right\rrbracket =0& \hspace{1cm}\text{on }\mathrm{\Gamma_{int}}\times[0,T). \end{align} There are several options to prove shape differentiability of shape functionals that depend on the solution of a PDE and to derive the shape derivative of a PDE constrained shape optimization problem.
The min-max approach \cite{Delfour-Zolesio-2001}, the chain rule approach \cite{SokoZol}, the Lagrange method of C\'{e}a \cite{Cea-RAIRO} and the rearrangement method \cite{Ito-Kunisch-Peichl} have to be mentioned in this context. A nice overview of these approaches is given in \cite{Sturm}. The next subsection deals with the shape derivatives of the models above. \subsection{Shape derivatives} \label{subsection_shapederivatives} In the sequel, we deduce the shape derivative of each shape functional $j_l$, where $l\in \{1,2,3,4\}$. The sum of these four shape derivatives is the shape derivative of $J$. Note that we need only the volume form of the shape derivative of $j_l$ with $l\in \{1,2,3\}$ with respect to the Steklov-Poincar\'{e} metric and the optimization techniques established in \cite{schulz2015Steklov,Welker}. The existence of these shape derivatives is given by the theorem of Correa and Seeger \cite[theorem 2.1]{CorreaSeger}. The following theorem gives a representation of the shape derivative of $j_1$ expressed as a volume integral. \begin{theorem} Let $u\in H^1(\Omega,\mathbb{R}^d)$ denote the weak solution of (\ref{le1})-(\ref{le4}). Moreover, let $w\in H^1(\Omega,\mathbb{R}^d)$ denote the weak solution of the adjoint equation (\ref{adjointle1})-(\ref{adjointle3}). Then the shape derivative of $j_1$ in direction $V$ is given by \begin{equation} \boxed{ \begin{aligned} Dj_1(u,\Omega)[V] =\int_{\Omega} & -\sigma(u):\left(\nabla V^T\nabla w\right)-\sigma(w):\left(\nabla V^T \nabla u\right)\\ & +\text{\emph{div}}(V) \left(\nu_1\sigma(u):\epsilon(u)+\sigma(w):\nabla u\right)\hspace{.7mm}dx. \end{aligned} } \label{sd_j1} \end{equation} \end{theorem} \begin{proof} We consider the Lagrangian ${\mathscr{L}}_1$.
In analogy to \cite[chapter 10, subsection 5.2]{Delfour-Zolesio-2001}, we can verify that \begin{equation} \label{minmax} j_1(u,\Omega)= \min_{u\in H^1(\Omega,\mathbb{R}^d)} \max_{w\in H^1(\Omega,\mathbb{R}^d)} {\mathscr{L}}_1 (u,w,\Omega) \end{equation} holds. We apply the theorem of Correa and Seeger to the right-hand side of (\ref{minmax}). The assumptions of this theorem can be verified in the same way as in \cite[chapter 10, subsection 6.4]{Delfour-Zolesio-2001}. Applying the rules for differentiating volume and surface integrals given in (\ref{der_domain_int}) and (\ref{der_boundary_int}) yields \begin{equation} \label{shape_der_1} \begin{split} & D{\mathscr{L}}_1(u,w,\Omega)[V]\\ &=\int_{\Omega}\nu_1 D_m\left(\sigma(u):\epsilon(u)\right)+D_m\left(\sigma(u):\epsilon(w)\right)\\ &\hspace*{10mm}+\mathrm{div}(V)\left(\nu_1 \sigma(u):\epsilon(u)+\sigma(u):\epsilon(w)\right)dx\\ &\hspace*{4mm}-\int_{\mathrm{\Gamma_{int}}} D_m\left( \left\llbracket \left(\sigma(u)\cdot n\right)^Tw\right\rrbracket\right) + \mathrm{div}_{\mathrm{\Gamma_{int}}}(V)\left\llbracket \left(\sigma(u)\cdot n\right)^Tw \right\rrbracket ds \\ &\hspace*{4mm}-\int_{\mathrm{\Gamma_{out}}}D_m\left(\left(\sigma(u)\cdot n\right)^Tw\right) + \mathrm{div}_{\mathrm{\Gamma_{out}}}(V)\left( \left(\sigma(u)\cdot n\right)^Tw\right) \hspace{.7mm}ds\\ &\hspace*{4mm}-\int_{\mathrm{\Gamma_{bottom}}}D_m\left(w_1^T u\right)+ \mathrm{div}_{\mathrm{\Gamma_{bottom}}}(V) w_1^T u \hspace{.7mm}ds \\ &\hspace*{4mm}-\int_{\mathrm{\Gamma_{out}}\setminus\mathrm{\Gamma_{bottom}}}D_m\left(w_2^T \left(\sigma (u)\cdot n-f\right)\right)\\ &\hspace*{27mm}+ \mathrm{div}_{\mathrm{\Gamma_{out}}\setminus\mathrm{\Gamma_{bottom}}}(V)\hspace{.7mm}w_2^T \left(\sigma (u)\cdot n-f\right)\hspace{.3mm}ds. \end{split} \end{equation} Note that \begin{equation} \label{umformung} \sigma(u):\epsilon(w)=\sigma(u):\nabla w \end{equation} holds because $\sigma(u)$ is symmetric (cf.~\cite{Sewell}).
Moreover, we have \begin{align} D_m(\sigma(u):\nabla w)=\mu D_m\left((\nabla u+\nabla u^T): \nabla w\right)+\lambda D_m\left(\text{div}(u)\text{div}(w)\right) \end{align} due to the definition of $\sigma$ and $\epsilon$. Combining (\ref{product_rule}) with (\ref{material_grad}) in $\mathbb{R}^d$ yields \begin{equation} \label{div_umformung} D_m(\text{div}(u))=D_m(I:\nabla u)=\text{div}(\dot{u})-I:(\nabla V^T\nabla u). \end{equation} By applying the product rule and appropriate transformations we get \begin{equation} \label{derivative_sigmaespilon} \begin{split} & D_m(\sigma(u):\epsilon(w))\\ &= \sigma(\dot{u}):\nabla w+\sigma(u):\nabla \dot{w}-\sigma(u):(\nabla V^T\nabla w)-\sigma(w):(\nabla V^T\nabla u) \end{split} \end{equation} due to (\ref{material_grad}) and (\ref{umformung})-(\ref{div_umformung}). Since the outer boundary $\mathrm{\Gamma_{out}}$ is fixed, we can choose the deformation vector field $V$ equal to zero in small neighbourhoods of $\mathrm{\Gamma_{out}}$. Moreover, each material derivative in small neighbourhoods of $\mathrm{\Gamma_{out}}$ is equal to zero. Thus, the integrals over the outer boundary in (\ref{shape_der_1}) vanish and are not considered further.
From (\ref{shape_der_1}), taking the adjoint and design equations in weak form into account and applying appropriate transformations, we obtain \begin{equation} \label{shape_der_2} \begin{split} D{\mathscr{L}}_1(u,w,\Omega)[V] &=\int_{\Omega}-\sigma(u):\left(\nabla V^T\nabla w\right)-\sigma(w):\left(\nabla V^T \nabla u\right)\\ &\hspace*{10mm}+\mathrm{div}(V)\left(\nu_1\sigma(u):\epsilon(u)+\sigma(w):\nabla u\right)\hspace{.7mm}dx\\ &\hspace*{4mm}+\int_{\mathrm{\Gamma_{int}}} \left\llbracket \left(\sigma(w)\cdot n\right)^T\dot{u} - D_m\left(\left(\sigma(u)\cdot n\right)^T\right)w\right\rrbracket \\ &\hspace*{15mm}+ \mathrm{div}_{\mathrm{\Gamma_{int}}}(V)\left\llbracket \left(\sigma(u)\cdot n\right)^Tw \right\rrbracket ds. \end{split} \end{equation} Due to the identity $\left\llbracket \Phi \Psi\right\rrbracket=\left\llbracket \Phi\right\rrbracket \Psi_\text{out} +\Phi_\text{int} \left\llbracket \Psi\right\rrbracket= \Phi_\text{out} \left\llbracket \Psi\right\rrbracket+\left\llbracket \Phi\right\rrbracket \Psi_\text{int}$, which implies \begin{equation*} \left\llbracket \Phi \Psi\right\rrbracket=0 \text{ if } \left\llbracket \Phi\right\rrbracket=0\wedge \left\llbracket \Psi\right\rrbracket=0, \end{equation*} and the interface conditions (\ref{interface_cond1}) and (\ref{interface_w}), the interface integral in (\ref{shape_der_2}) vanishes. By applying the theorem of Correa and Seeger, we obtain (\ref{sd_j1}). \end{proof} Now, we consider the shape functional $j_2$ and the diffusion problem (\ref{diff1})-(\ref{diff4}). If we consider ${\mathscr{L}}_2$, its shape derivative can be deduced analogously to the computations in \cite{schulz2014structure}. Thus, we get \begin{equation} \boxed{ \begin{aligned} Dj_2(y,\Omega)[V] =\int_{0}^{T}\hspace{-.1cm}\int_{\Omega} & -k\nabla y^T\left(\nabla V+\nabla V^T\right)\nabla z\\ &+\text{div}(V) \left( \frac{\nu_2}{2}(y-\bar{y})^2 + \td{y}z+k\nabla y^T\nabla z\right)\hspace{.7mm}dx\hspace{.5mm}dt.
\end{aligned} } \label{sd_j2} \end{equation} The shape derivative of $j_3$, which is responsible for a space-filling design of $\Omega_\text{int}$, is given by \begin{equation} \boxed{ Dj_3(\Omega)[V] =\nu_3\int_{\Omega_{\text{out}}} \text{div}(V) \hspace{.7mm}dx.\label{sd_j3} } \end{equation} It results directly from an application of (\ref{der_domain_int}). Moreover, the objective function $J$ includes the perimeter regularization term $j_4$. We get the shape derivative of this regularization term by applying (\ref{der_boundary_int}): \begin{align*} Dj_4(\Omega)[V] & =\nu_4\int_{\mathrm{\Gamma_{int}}}\text{div}_\mathrm{\Gamma_{int}}(V) \hspace{.7mm}ds=\nu_4\int_{\mathrm{\Gamma_{int}}}\text{div}_\mathrm{\Gamma_{int}}(\left<V,n\right>n) \hspace{.7mm}ds\\ &= \nu_4\int_{\mathrm{\Gamma_{int}}}\left<V,n\right>\text{div}_\mathrm{\Gamma_{int}}(n) \hspace{.7mm}ds, \end{align*} i.e., \begin{equation} \boxed{ Dj_4(\Omega)[V] =\nu_4\int_{\mathrm{\Gamma_{int}}}\kappa\left<V,n\right> ds,\label{sd_j4} } \end{equation} where $\kappa:=\text{div}_\mathrm{\Gamma_{int}}(n)$ denotes the mean curvature of $\mathrm{\Gamma_{int}}$. So far, we have deduced the shape derivatives of the individual terms. However, in order to optimize on shape spaces, we need the gradient with respect to the inner product under consideration. In the next section, we comment on shape spaces and metrics. \section{Optimization based on Steklov-Poincar\'{e} metrics}\label{section_algorithm} As pointed out for example in \cite{schulz2015Steklov,schulz2014structure,Welker}, PDE constrained shape optimization problems can be embedded in the framework of optimization on shape spaces. This section summarizes the way from shape derivatives to an entire optimization algorithm in a suitable shape space. In our setting, we think of shapes as boundary contours of deforming objects. We consider so-called prior shapes $\Gamma_0$.
These are boundaries $\Gamma_0=\partial\mathcal{X}_0$ of connected and compact subsets $\mathcal{X}_0\subset \Omega\subset{\mathbb{R}}^d$ with $\mathcal{X}_0\neq\emptyset$, where $\Omega$ denotes a bounded Lipschitz domain. The sets $\mathcal{X}_0$ are assumed to be Lipschitz domains and are called prior sets. Shapes -- in our setting -- arise from $H^1$-deformations of such prior sets $\mathcal{X}_0$. These $H^1$-deformations, evaluated at a prior shape $\Gamma_0=\partial \mathcal{X}_0$, give deformed shapes $\Gamma$ if the deformations are injective and continuous. Such shapes are said to be of class $H^{1/2}$ and were first proposed in \cite{schulz2015Steklov}. They are defined by \begin{equation}\label{shape_mainifold} {\cal B}^{1/2}(\Gamma_0,{\mathbb{R}}^d):= {\cal H}^{1/2}(\Gamma_0,{\mathbb{R}}^d)\big\slash \mbox{Homeo}^{1/2}(\Gamma_0), \end{equation} where ${\cal H}^{1/2}(\Gamma_0,{\mathbb{R}}^d)$ is given by \begin{equation} \label{force-ball} \begin{split} {\cal H}^{1/2}(\Gamma_0,{\mathbb{R}}^d):= \{w\colon \Gamma_0\to \Omega \colon & \exists W\in H^1(\Omega,\Omega) \text{ s.t.}\\ & W\hspace{-.5mm}\,\rule[-2mm]{.1mm}{4mm}_{\hspace{.5mm}\Gamma_0} \text{ injective, continuous}\text{, }W\hspace{-.5mm}\,\rule[-2mm]{.1mm}{4mm}_{\hspace{.5mm}\Gamma_0}=w\} \end{split} \end{equation} and $\mbox{Homeo}^{1/2}(\Gamma_0)$ is defined by \begin{equation} \mbox{Homeo}^{1/2}(\Gamma_0):= \{ w\colon w\in {\cal H}^{1/2}(\Gamma_0,{\mathbb{R}}^d)\text{, }w\colon \Gamma_0\to \Gamma_0 \text{ homeomorphism}\}. \end{equation} In the following, we assume that ${\cal B}^{1/2}\left(\Gamma_0,{\mathbb{R}}^d\right)$ has a manifold structure. If necessary, we can refine the space ${\cal B}^{1/2}(\Gamma_0,{\mathbb{R}}^d)$ as already described in \cite{Welker}. However, this possible restriction does not affect the following theory.
If $\Gamma\in{\cal B}^{1/2}(\Gamma_0,{\mathbb{R}}^d)$ is smooth enough to admit a normal vector field $n$, the following isomorphisms arise from definition (\ref{force-ball}): \begin{equation} \label{isomophism} \begin{split} T_\Gamma {\cal B}^{1/2}(\Gamma_0,{\mathbb{R}}^d) & \cong \{h\colon h=\phi n \text{ a.e.}\text{, } \phi\in H^{1/2}(\Gamma) \text{ continuous} \} \\ & \cong \{\phi\colon \phi\in H^{1/2}(\Gamma) \text{ continuous} \}. \end{split} \end{equation} We consider the following scalar products, the so-called \emph{Steklov-Poincar\'{e} metrics} (cf.~\cite{schulz2015Steklov}): \begin{equation}\label{scp} \begin{split} g^S\colon H^{1/2}(\Gamma_\text{int})\times H^{1/2}(\Gamma_\text{int}) & \to {\mathbb{R}},\\ (\alpha,\beta) &\mapsto \langle\alpha,(S^{pr})^{-1}\beta\rangle= \int_{\Gamma_\text{int}} \alpha(s)\cdot [(S^{pr})^{-1}\beta](s)\ ds. \end{split} \end{equation} Here $S^{pr}$ denotes the projected Steklov-Poincar\'{e} operator, which is given by \begin{equation} S^{pr}\colon H^{-1/2}(\Gamma) \to H^{1/2}(\Gamma),\ \alpha \mapsto (\gamma_0 U)^T n, \end{equation} where $\gamma_0\colon H^1_0(\Omega,{\mathbb{R}}^d) \to H^{1/2}(\Gamma,{\mathbb{R}}^d)$, $U \mapsto U\,\rule[-2mm]{.1mm}{4mm}_{\, \Gamma}$ and $U\in H^1_0(\Omega,{\mathbb{R}}^d)$ solves the Neumann problem \begin{equation}\label{weak-elasticity-N2} a(U,V)=\int_{\Gamma} \alpha\cdot (\gamma_0 V)^T n\ ds\quad \forall\hspace{.3mm} V\in H^1_0(\Omega,{\mathbb{R}}^d) \end{equation} with $a$ being a symmetric and coercive bilinear form. We are now able to formulate an optimization algorithm on the shape space ${\cal B}^{1/2}\left(\Gamma_0,{\mathbb{R}}^d\right)$ with respect to $g^S$. Before we can do that, we have to state its connection to shape calculus. Due to the Hadamard structure theorem (cf.~\cite[theorem 2.27]{SokoZol}), there exists a scalar distribution $r$ on the boundary of the domain under consideration.
If we assume $r\in L^1(\Gamma)$ and $V\,\rule[-2mm]{.1mm}{4mm}_{\hspace{.6mm}\Gamma}=\alpha n$, the shape derivative can be expressed as the boundary integral \begin{equation} \label{HadamardConcisely} DJ_{\Gamma}[V]=\int_{\Gamma} \alpha \hspace{.5mm} r \ ds. \end{equation} Due to the isomorphism (\ref{isomophism}) and the expression (\ref{HadamardConcisely}), we can state the connection of $\mathcal{B}^{1/2}(\Gamma_0,{\mathbb{R}}^d)$ with respect to the Steklov-Poincar\'{e} metric $g^S$ to shape calculus. The distribution $r$ is often called the \emph{shape gradient}. This terminology can be confusing, since gradients always depend on the scalar product chosen on the space under consideration; more precisely, $r$ is the shape gradient with respect to the $L^2$ scalar product. However, we have to find a representation of the shape gradient with respect to $g^S$. Such a representation $h\in T_{\Gamma} {\cal B}^{1/2}(\Gamma_0,{\mathbb{R}}^d) \cong \{h\colon h\in H^{1/2}(\Gamma) \text{ continuous} \}$ is determined by \begin{equation} \label{connection1} g^S(\phi,h)=\left(r,\phi\right)_{L^2(\Gamma)}, \end{equation} which is equivalent to \begin{equation} \label{connection2} \int_{\Gamma} \phi(s)\cdot [(S^{pr})^{-1}h](s) \ ds=\int_{\Gamma} r(s)\phi(s) \ ds \end{equation} for all continuous $\phi\in H^{1/2}(\Gamma)$. Based on the connection (\ref{connection1}), we can formulate an algorithm to solve PDE constrained shape optimization problems on $\mathcal{B}^{1/2}(\Gamma_0,{\mathbb{R}}^d)$ with respect to $g^S$. From (\ref{connection2}) we get $h=S^{pr}r=(\gamma_0 U)^T n$, where $U\in H^1_0(\Omega,{\mathbb{R}}^d)$ solves \begin{equation} \label{main} a(U,V)=\int_{\Gamma} r\cdot (\gamma_0 V)^T n \ ds =DJ_{\Gamma}[V]=DJ_\Omega[V] \quad \forall\hspace{.3mm} V\in H^1_0(\Omega,{\mathbb{R}}^d).
\end{equation} This means that we get the gradient representation $h$ and the mesh deformation $U$ all at once and that we have to solve \begin{equation} a(U, V) = b(V) \label{deformatio_equation} \end{equation} for all test functions $V$ in the optimization algorithm, where $b$ is a linear form given by $$b(V):=DJ_\text{vol}(\Omega)[V]+DJ_\text{surf}(\Omega)[V].$$ $J_\text{surf}(\Omega)$ denotes the parts of the objective function leading to surface shape derivative expressions, e.g., perimeter regularizations. The shape derivative $DJ_\text{surf}(\Omega)[V]$ of these terms is incorporated as a Neumann boundary condition. Parts of the objective function leading to volume shape derivative expressions are denoted by $J_\text{vol}(\Omega)$. However, note that from a theoretical point of view the volume and surface shape derivative formulations have to be equal to each other for all test functions. Thus, $DJ_\text{vol}[V]$ is assembled only for test functions $V$ whose support includes $\Gamma$, i.e., $$DJ_\text{vol}(\Omega)[V]=0 \quad \forall\hspace{.3mm} V \text{ with } \text{supp}(V)\cap\Gamma=\emptyset.$$ We call (\ref{deformatio_equation}) the \emph{deformation equation}. The entire optimization algorithm is given in figure \ref{algorithm} and explained in detail in the next section. \tikzset{mybox/.style={ rectangle, rounded corners=2mm, thick, draw=black, text width=30em, text centered, drop shadow,fill=white} } \begin{figure} \caption{Optimization algorithm} \label{algorithm} \end{figure} In the setting of section \ref{section_model}, each $\Gamma_i$ is a shape in the above sense, i.e., an element of $\mathcal{B}^{1/2}\left(\Gamma_0,\mathbb{R}^d\right)$.
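The deformation equation is, at its core, a Riesz-representation step: a rough $L^2$ shape gradient $r$ is tested against $H^1_0$ functions, and the solution $U$ of a coercive problem serves simultaneously as gradient representation and mesh deformation. The following one-dimensional Python sketch illustrates this idea under simplifying assumptions: a Laplace stiffness matrix stands in for the linear elasticity form $a(\cdot,\cdot)$, and the nodal gradient $r$ is a hypothetical example, not data from the paper's experiments.

```python
import numpy as np

# 1D sketch of the deformation equation a(U, V) = b(V):
# A is a Laplace stiffness matrix standing in for linear elasticity,
# M a lumped mass matrix, r a hypothetical (rough) nodal L^2 gradient.
n, h = 99, 1.0 / 100                               # interior nodes, mesh width
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h            # stiffness matrix (H^1_0)
M = h * np.eye(n)                                  # lumped mass matrix (L^2)
x = np.linspace(h, 1.0 - h, n)
r = np.sign(np.sin(3 * np.pi * x))                 # discontinuous L^2 gradient
U = np.linalg.solve(A, M @ r)                      # smooth H^1 representative

# U is a descent direction for the linearized functional, since
# b(U) = (r, U)_{L^2} = U^T M r = U^T A U > 0 (A is s.p.d.).
assert float(U.T @ M @ r) > 0
```

The solve smooths the discontinuous input $r$ into a globally $H^1$ field, which is exactly why $U$ can double as a mesh deformation: applying $r$ itself to the node positions would tear the grid.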
The volume formulation is given by $$DJ_\text{vol}(\Omega)[V]= Dj_1(\Omega)[V]+Dj_2(\Omega)[V]+Dj_3(\Omega)[V]$$ and the surface formulation by $$DJ_\text{surf}(\Omega)[V]=Dj_4(\Omega)[V].$$ The next section is devoted to the numerical realization of algorithm \ref{algorithm} in order to solve the PDE constrained model problems given above. \section{Numerical results}\label{section_results} We focus on three numerical experiments, which are selected in order to demonstrate challenges arising for large-scale multigrid shape optimizations. All results involved are computed at the High Performance Computing Center Stuttgart using the machine HAZELHEN with up to $16\,384$ cores. In the \textit{first experiment}, we choose $\nu_1=\nu_2=0$ such that we end up with a pure geometric optimization problem without any PDE constraints. With this experiment, we want to demonstrate a proper choice of boundary conditions in the mesh deformation in order to obtain results for periodic domains. This optimization problem is motivated by the search for a space-filling cell design with least surface area and only one type of cell. It goes back to the 19th century and is also known as the Kelvin problem, for which Kelvin himself proposed a solution (cf.~\cite{thomson1887division}) based on cells as depicted in figure \ref{fig_volume_only_optimal}. His conjecture was disproved by a counterexample in \cite{weaire1994counter}. However, these authors propose a solution with two types of cells. The resulting optimal shape might be understood as the result of a biological growing process and used as a building block for finite element models of the human skin (cf.~\cite{muha2011effective}).
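The driving mechanism of this first experiment, a descent along the curvature-weighted perimeter gradient (\ref{sd_j4}), can be illustrated in two dimensions. The following toy sketch (an illustration under simplifying assumptions, not the solver used for the results below) performs discrete curve-shortening on a closed polygon: moving each vertex along the discrete curvature vector, here the vertex Laplacian, strictly decreases the perimeter.

```python
import numpy as np

# Toy 2D analogue of the perimeter gradient (sd_j4): for a closed
# polygon, the Laplacian of the vertex positions approximates the
# curvature vector kappa*n, and a step along it shortens the
# perimeter (discrete curve-shortening flow).
def perimeter(P):
    return np.linalg.norm(np.roll(P, -1, axis=0) - P, axis=1).sum()

theta = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
radius = 1.0 + 0.2 * np.cos(5 * theta)          # wobbly circle as initial shape
P = np.stack([radius * np.cos(theta), radius * np.sin(theta)], axis=1)

p0 = perimeter(P)
for _ in range(20):                             # explicit "gradient" steps
    lap = np.roll(P, -1, axis=0) - 2.0 * P + np.roll(P, 1, axis=0)
    P = P + 0.1 * lap                           # step along curvature normal
assert perimeter(P) < p0                        # perimeter regularization shrinks the shape
```

Each update replaces a vertex by a convex combination of itself and its neighbours, so every edge length, and hence the perimeter, can only decrease; this is the same smoothing effect that the perimeter term $j_4$ exerts on the discretized interface $\mathrm{\Gamma_{int}}$.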
\begin{figure} \caption{Initial configuration with symmetries} \label{fig_volume_only_initial} \caption{Optimal shape} \label{fig_volume_only_optimal} \caption{Approximation of a Kelvin cell in a symmetric domain} \label{fig_volume_only} \end{figure} Figure \ref{fig_volume_only_initial} depicts the initial geometry for the optimization, where we want to exploit symmetries on all outer surfaces. The shape derivative of the objective \eqref{objective} for this particular case is given as the sum of the derivatives \eqref{sd_j3} and \eqref{sd_j4} with $\nu_3 = \nu_4 = 1.0$. Note that this results in a mixture of volume and surface formulations. \begin{figure} \caption{Initial shape} \label{fig_initial_tube} \caption{Optimal shape} \label{fig_optimal} \caption{Sharp-edged initial geometry with identical coarse and fine grid under 6 levels of hierarchical refinements and optimal solution on finest level} \label{fig_tube_optim} \end{figure} \begin{figure} \caption{Cross-section of the tube at $25\%$ of the length for the first iterations with perimeter regularization} \label{fig_tube_multigrid} \end{figure} The linear and bilinear forms defining the shape metric \eqref{deformatio_equation}, which also determine the mesh deformation field $U$, are chosen as the weak form of a linear elasticity model.
In order to model the periodicity of the domain, we choose the following Dirichlet and Neumann conditions at the outer boundaries depending on the components $U = (U_x, U_y, U_z)^T$: \begin{align} U_x & = 0,\quad \sigma\left( (0, U_y, U_z)^T \right) \cdot n = 0 \qquad \text{on}\; \mathrm{\Gamma_{front}} \cup \mathrm{\Gamma_{back}} \label{eq_def_2} \\ U_y & = 0,\quad \sigma\left( (U_x, 0, U_z)^T \right) \cdot n = 0 \qquad \text{on}\; \mathrm{\Gamma_{left}} \cup \mathrm{\Gamma_{right}} \label{eq_def_3} \\ U_z & = 0,\quad \sigma\left( (U_x, U_y, 0)^T \right) \cdot n = 0 \qquad \text{on}\; \mathrm{\Gamma_{bottom}} \cup \mathrm{\Gamma_{top}} \label{eq_def_4} \end{align} This particular choice ensures that nodes are allowed to slide within $\mathrm{\Gamma_{out}}$, but not to leave the planes. The Lam\'e parameters are fixed to $\lambda = 0.01$ and $\mu = 0.1$. We apply this shape metric to all numerical examples in this section, since the sliding conditions minimize the influence of the outer boundary on the optimization of $\mathrm{\Gamma_{int}}$. In particular, when $\mathrm{\Gamma_{int}}$ intersects $\mathrm{\Gamma_{out}}$, as in the volume experiment shown in figure \ref{fig_volume_only}, the sliding effect at the boundaries is essential. The result of the optimization is visualized in figure \ref{fig_volume_only_optimal}. It can be identified as an approximation to a truncated octahedron according to Kelvin's conjecture. This shape is also known as a tetrakaidecahedron with 6 square faces and 8 hexagonal faces. We obtain this result after 12 gradient steps on a fine grid with approximately $3.9\cdot 10^6$ finite elements and a coarse grid with $60\,533$ finite elements. Throughout this section all linear systems are solved with the multigrid preconditioned conjugate gradient solver of the software PETSC. Here a symmetric Gauss-Seidel smoother is applied and the coarse grid solver is a direct factorization computed with the SUPERLU-DIST library.
This choice is possible, since the discretizations of all PDEs to be solved, i.e., the parabolic diffusion equation and linear elasticity, lead to symmetric, positive definite matrices. Moreover, we choose linear finite elements on tetrahedral grids for the discretization of all PDE models involved. Load balancing is performed with the PARMETIS graph partitioning library. As mentioned above, it is sufficient to partition and balance only with respect to tetrahedrons and not also surface triangles at $\mathrm{\Gamma_{int}}$. This is due to the fact that we plug in the volume form of shape derivatives whenever possible. Thus, there are only very few surface-only operations, which do not dramatically affect the scalability. By solving the deformation equation \eqref{deformatio_equation} we then obtain a deformation field $U$. This vector field is added as a deformation to all nodes in the finite element meshes on all multigrid levels. This means that the fine grid solution $U$ is interpolated at the nodes of the coarser grids. The multigrid hierarchy is designed such that for the initial geometry the nodes of the coarse grid are a subset of the nodes of the fine grid. Thus, this property is maintained throughout the whole optimization process. In principle, this means that the shape optimization takes place on the fine grid and the coarse grids are carried along during this process for the preconditioner of the linear solver. The \textit{second numerical test} is chosen in order to touch upon the main topic of this paper, namely the interaction of multigrid solvers and shape optimization. We are in a simplified situation of the optimization problem \eqref{eq_minimization}, which is only subject to the diffusion model \eqref{diff1}-\eqref{diff4}. Thus, the objective function is of tracking type with regularization, corresponding to $\nu_1 = \nu_3 = 0$.
The domain $\Omega = \Omega_\text{out} \cupdot \Omega_\text{int}$ has only one inclusion with a jumping diffusion coefficient $k_\text{out} = 1.0$ and $k_\text{int} = 0.001$. Figure \ref{fig_tube_optim} shows this situation. The initial geometry is depicted on the left hand side and the target is shown on the right hand side. Data measurements $\bar{y}$ are generated in terms of the same model equation and a similar shape to the one in figure \ref{fig_optimal}. We assume two points in time for measurements -- one at the final time $T=15$ and one after $7.5$ seconds. \begin{figure} \caption{Objective value} \label{fig_objective} \caption{Gradient in $g^S$ norm} \label{fig_gradient} \caption{Objective value and gradient in $g^S$ norm for the pure parabolic test case with regularization in the first 10 iterations} \label{fig_objecive_and_gradient} \end{figure} Measurements are assumed to be represented by $10\,000$ Gaussian-type radial basis functions. This involves expensive but necessary computations, since $\bar{y}$ has to be evaluated on varying meshes during the optimization. A general mesh-to-mesh interpolation on a distributed memory computer cannot be computed efficiently, which makes the detour via radial basis functions the expensive yet scalable method of choice. The optimization problem is discretized using 7 levels of hierarchical mesh refinements starting from a coarse grid with $3\,390$ tetrahedral elements up to a fine grid with approximately $8.89\cdot 10^8$ elements. For the diffusion we choose a backward Euler time stepping with a step-size of $\Delta t = 1.5$. In figure \ref{fig_initial_tube}, we observe sharp edges in the geometry, which, in principle, should resolve a tube with radius $0.5$ and length $5$. This is due to the hierarchical grid structure, which is typical for multigrid methods, since coarse and fine grid have to resolve the same geometry.
In our particular numerical test, this property of the mesh hierarchy together with the shape metric leads to problems in the discretization. Figure \ref{fig_tube_multigrid} shows a cross-section of $\Omega_\text{int}$ at $25\%$ of the tube's length during the first iterations of the optimization. We observe that edges, which have a smaller influence on the objective function than planes, tend to remain at their initial position. This effect is intensified by the choice of the shape metric \eqref{deformatio_equation}. Hereby, the impact of the shape derivative \eqref{sd_j2} on the geometry can be understood as a traction on the boundary, which is stronger on large planes compared to sharp edges. The application of limited memory BFGS update techniques also influences this behaviour, since relatively large steps are chosen especially in the first iterations. \begin{figure} \caption{log-log-plot of $L^\infty$-norm of mean curvature with respect to number of tetrahedral finite elements} \label{fig_tube_curvature} \end{figure} \begin{figure} \caption{Initial grid} \label{fig_initial_grid} \caption{10 gradient steps} \label{fig_intermediate1_grid} \caption{20 steps} \label{fig_intermediate2_grid} \caption{30 steps} \label{fig_final_grid} \caption{Optimization with respect to diffusion data measurements and elastic compliance of a cellular structure} \label{fig_compliance_parabolic_opt} \end{figure} In figure \ref{fig_tube_multigrid}, it can be observed that the plain optimization problem tends to produce overlapping elements. This would lead to an immediate breakdown of the PDE solver. In order to prevent this, the problem is regularized by the perimeter term $j_4$ with $\nu_4=0.01$ in \eqref{objective}. The effect of the perimeter regularization can be observed throughout the figures. As soon as sharp edges are smoothed out by the influence of the perimeter regularization, we can switch off this regularization by setting $\nu_4=0$.
In our particular case, we do this after 10 quasi-Newton steps. The effect of this strategy is visualized in figure \ref{fig_objecive_and_gradient}. The objective value is depicted on the left hand side, where there is a small step when the regularization is switched off. Note that this step is a bit delayed, since we choose a limited memory BFGS strategy with a storage of 5 gradients. The figure on the right hand side shows the $g^S$-norm of the gradient, where we observe a jump after 10 iterations. The two figures in \ref{fig_objecive_and_gradient} can be seen as an indicator for mesh-independent convergence, as expected due to the quasi-Newton method. We can thus conclude that for this particular test a regularization is only required due to the multigrid approach. Otherwise, if there are no fine-resolved kinks in the initial geometry, one could be successful without regularization. We should mention that there is a formulation according to \cite[proposition 2.50]{SokoZol}, which is equivalent to (\ref{sd_j4}), for the derivative of the perimeter regularization given by \begin{equation} Dj_4(\Omega)[V] =\nu_4\int_{\mathrm{\Gamma_{int}}} \text{div}(V) - \left<\frac{\partial V}{\partial n},n\right> ds.\label{sd_j4_vol} \end{equation} This is attractive from a computational point of view, since the evaluation of $\kappa$ in each iteration is a surface-only operation. Thus, its scalability is affected by the load balancing, which one would have to perform with respect to both volume and surface elements. However, in our experiments, it seems that for the discretized problem the formulation \eqref{sd_j4} is more successful in eliminating kinks arising due to the multigrid mesh hierarchy. Figure \ref{fig_tube_curvature} shows the approximation to the mean curvature on the 6 multigrid levels for the initial geometry. The curvature computation follows the approach presented in \cite{meyer2003discrete}.
An exponential growth of the curvature over the grid levels can be observed, which makes it necessary to adapt $\nu_4$ depending on the finest grid level throughout the refined computations presented in figure \ref{fig_objecive_and_gradient}. We now concentrate on the \textit{third numerical example}, which is the optimization with respect to the complete system \eqref{le1}-\eqref{diff4} combining diffusion and elasticity. The objective, in our particular case, is then formed by the constants $\nu_1 = 0.15$, $\nu_2 = 0.1$, $\nu_3 = 16.0$ and $\nu_4 = 0.01$. Furthermore, data measurements $\bar{y}$ are chosen to be the simulated values $y$ for the initial geometry. These are again represented by radial basis functions in order to interpolate them on arbitrary meshes. The first challenge encountered here is that the coarse grid has to resolve the desired geometry as depicted in figure \ref{fig_compliance_parabolic_opt}. This limits the coarseness, since each inclusion that is to be simulated has to be present in the entire grid hierarchy. In addition to the problem of large kinks in the fine grid, we now have to deal with coarse grids having a large number of finite elements. Here we want to simulate $7\cdot 7 \cdot 7$ inclusions, resulting in a coarse grid consisting of $74\,096$ tetrahedral finite elements. A coarser grid is not reasonable, since this would have a negative influence on the aspect ratios of the elements, which affects the convergence of iterative solvers. Moreover, the large number of elements in the coarse grid is a major challenge for solvers based on multigrid. In order to be efficient, a direct factorization is usually applied on the coarsest level, which is known to be scalable only to some extent. Figure \ref{fig_compliance_parabolic_opt} shows some snapshots of the optimization on a fine grid with $4.74\cdot 10^6$ tetrahedral elements.
The aim of this particular model is to show how cellular structures can be fortified by tightly stacking cells together close to the surface where forces are acting. This, to some extent, reflects growing processes in biological skin structures. In this particular case, we are not able to apply BFGS update techniques as in the pure parabolic test case. The quasi-Newton method leads to step sizes which are too large to be feasible as deformations on the finite element grid. We thus apply only a gradient descent method. The geometry after 30 gradient steps, which is depicted in figure \ref{fig_final_grid}, is not yet optimal in the sense of being a stationary point. Due to increasing aspect ratios of the discretization elements, the multigrid solver does not converge anymore. This illustrates the challenge of choosing a proper regularization in shape optimization, such that an optimal solution can still be represented in a finite element mesh. \section{Conclusions} One of the main focuses of this paper is to describe the interaction of algorithmic building blocks ranging from a multigrid finite element solver to quasi-Newton methods for shape optimization problems. Here we concentrate on shape spaces and metrics which are especially suited for large-scale computations. This results in scalable algorithms for supercomputers. Based on complex examples, this paper presents the possibilities and challenges of multigrid methods in shape optimization. In particular, it is shown how to handle the increasing values of approximated curvature in a hierarchical grid structure affecting the regularization. The purpose of the underlying models within this article is to form building blocks for further studies of biological cell structures.
\section*{Acknowledgment} This work has been partly supported by the German Research Foundation (DFG) within the priority program SPP 1648 ``Software for Exascale Computing'' under contract number Schu804/12-1, the priority program SPP 1962 ``Non-smooth and Complementarity-based Distributed Parameter Systems: Simulation and Hierarchical Optimization'' under contract number Schu804/15-1 and the research training group 2126 ``Algorithmic Optimization''. \end{document}
\begin{document} \title[Modular Inverse and Chinese Remainder Algorithm]{Modular Inverses and\\ Chinese Remainder Algorithm} \author{W. H. Ko} \address{} \curraddr{Park Avenue, Mongkok, HKSAR, China.} \email{wh\[email protected]} \thanks{} \subjclass[2010]{Primary 11A25; Secondary 11A05, 11D04} \date{} \dedicatory{} \begin{abstract} This paper introduces two forms of modular inverses and proves their respective reciprocity formulas. These formulas are then applied to formulate a new and generalized algorithm for computing these modular inverses. The same algorithm is also shown to be applicable to the Chinese Remainder problem, i.e., simultaneous linear congruence equations, for co-prime as well as non-co-prime moduli. \end{abstract} \maketitle \section{Introduction} The modular inverse is one of the basic operations in modular arithmetic, and it is applied extensively in computer science and telecommunications, particularly in cryptography. However, it is also a time-consuming operation, whether implemented in hardware or software, compared with other modular arithmetic operations such as addition, subtraction and even multiplication. Efficient algorithms for calculating the modular inverse have been sought for the past few decades, and any improvement is still welcome. In implementing an efficient algorithm to calculate the modular inverse of the form $b^{-1} \pmod{2^m}$, Arazi and Qi \cite{AraziQi} have made use of a reciprocity trick, in Algorithm 3b, which can be translated mathematically to Eq.\eqref{eq:reciprocity}. Although this reciprocity formula, Eq.\eqref{eq:reciprocity}, can be regarded as a modified form of the linear Diophantine equation Eq.\eqref{eq:diophan}, this formula as written in the form of Eq.
\eqref{eq:reciprocity} brings more insight into the properties of the modular inverse, and thus allows us to formulate new algorithms to calculate modular inverses and to solve the Chinese Remainder problem efficiently, as we show in the following discussions. In a recent attempt \cite{zhangs} to modernize some ancient Chinese algorithms, Zhang and Zhang introduced a reciprocity formula similar to Eq. \eqref{eq:reciprocity}; however, their definition of $f_{a,b}$ and $f_{b,a}$ differs slightly from Eq. \eqref{eq:invdef}, and thus their reciprocity formula is different. The classical definition of the modular inverse is reproduced in this section, and a modified definition is introduced in the next section. In Section 3, a new, extended modular inverse is introduced. Reciprocity formulas for these modular inverses are also proved. At the end of this paper, examples illustrate the use of the generalized modular inverse algorithm to calculate the classical and extended modular inverses, as well as to solve the Chinese Remainder problem with co-prime and non-co-prime moduli. \begin{definition} In modular arithmetic, the classical modular inverse of an integer $a$ modulo $m$ is an integer $x$ such that \begin{equation} \label{eq:congruence} ax\equiv 1 \pmod{m} \end{equation} The modular inverse is generally denoted by \begin{equation*} x \equiv a^{-1} \pmod{m} \end{equation*} \end{definition} Finding the modular inverse is equivalent to solving the following linear Diophantine equation, where $a,x,k,m \in\mathbb{Z}$, \begin{equation}\label{eq:diophan} a\,x-k\,m=1 \end{equation} \section{Modular Inverse} \subsection{Modular Inverse Definition} A slightly different version of the modular inverse is used throughout this discussion; it is introduced in this section and still satisfies the same congruence equation, Eq.\eqref{eq:congruence}.
Furthermore, in order to present equations containing modular inverses more cleanly, new notations for the modulo operation and the modular inverse will be used within this paper. \begin{definition} For all $a, m \in\mathbb{Z}$ with $m \ne 0$, $a$ modulo $m$, denoted by $(a)_{m}$, is defined as \begin{equation}\label{eq:moddef} (a)_{m}=a-m \bigg\lfloor\dfrac{a}{m}\bigg\rfloor \end{equation} Note that \\ \indent \quad \quad $\begin{cases} 0 \le (a)_{m} < m & \text{if }m>0\\ m < (a)_{m} \le 0 & \text{if }m<0 \end{cases}$ \end{definition} \begin{definition} Let $a, m \in\mathbb{Z}\setminus \{0\}$ and $\gcd(a,m)=1$. The modular inverse of $a$ modulo $m$, denoted by $(a^{-1})_{m}$, is defined as \begin{equation}\label{eq:invdef} (a^{-1})_{m}=x, \end{equation} \begin{equation*} \text{where} \begin{cases} 1\le x\le m-1 & \text{if } 1<m \,\&\, a\,x \equiv 1 \pmod{m} \\ m+1\le x \le-1 & \text{if } m<-1 \,\&\, a\,x \equiv 1 \pmod{m} \\ x=\dfrac{1}{2}|m|(\operatorname{sgn}(m)-\operatorname{sgn}(a))+\operatorname{sgn}(a) & \text{if } |m|=1 \\ \text{Undefined} & \text{if } a\,m=0 \text{ or } \gcd(a,m) \ne 1 \end{cases}\notag \end{equation*} \end{definition} Obviously, for $|m|>1$, $(a^{-1})_{m}$ satisfies the congruence requirement that $$a(a^{-1})_{m} \equiv 1 \pmod{m}.$$ For $|m|=1$, there are two cases. \begin{description} \item[Case 1]$m=1\\ \begin{aligned} (a^{-1})_{m}&=\dfrac{1}{2}|1|(\operatorname{sgn}(1)-\operatorname{sgn}(a))+\operatorname{sgn}(a)=\dfrac{1}{2}(1-\operatorname{sgn}(a))+\operatorname{sgn}(a)\notag\\ &=\begin{cases} 1 & \text{if } a>0 \\ 0 & \text{if } a<0 \end{cases}\notag \end{aligned}$ \item[Case 2] $m=-1\\ \begin{aligned} (a^{-1})_{m}&=\dfrac{1}{2}|-1|(\operatorname{sgn}(-1)-\operatorname{sgn}(a))+\operatorname{sgn}(a)=\dfrac{1}{2}(-1-\operatorname{sgn}(a))+\operatorname{sgn}(a)\notag\\&=\begin{cases} 0 & \text{if } a>0 \\ -1 & \text{if } a<0 \end{cases}\notag.
\end{aligned}$ \end{description} Hence for $|m|=1$, $(a^{-1})_{m}$ also satisfies the requirement that $a(a^{-1})_{m} \equiv 1 \pmod{m}$. In particular, note that \begin{equation*} (1^{-1})_m=(1)_m= \begin{cases} 1 & m \ge 1\\ m+1 & m \le -1 \end{cases} \end{equation*} \begin{equation*} (-1^{-1})_m=(-1)_m= \begin{cases} m-1 & m \ge 1\\ -1 & m \le -1 \end{cases} \end{equation*} It is also interesting to note that the modular inverse for $|m|=1$ as defined by Eq.\eqref{eq:invdef} differs slightly from the classical convention that $a^{-1} \pmod{m} =0$ for $|m|=1$ and non-zero $a$. \subsection{Reciprocity Formula} The reciprocity relationship between modular inverses seems obvious from the linear Diophantine equation, Eq.\eqref{eq:diophan}; as a matter of fact, it is the Euclidean algorithm (i.e., iterative division) in disguise. However, this reciprocity formula is not found in classic texts such as \cite{hardy+wright}, nor in modern texts such as \cite{ireland+rosen, song}. The reciprocity identity first appeared in \cite{joye+paillier} as Lemma 1, in a format similar to Eq.\eqref{eq:reciprocity}; however, only positive integers were discussed, for specific cryptographic applications, and the case $|m|=1$ was not handled. In a footnote on p.~244 of \cite{joye+paillier}, it is stated that Arazi was ``the first to take advantage of this folklore theorem to implement fast modular inversions''. \begin{theorem}\label{th:reciprocity}\emph{(Reciprocity formula)} Let $a, b \in\mathbb{Z}\setminus\{0\}$ and $\gcd(a,b)=1$; then \begin{equation}\label{eq:reciprocity} a\,(a^{-1})_{b}+b\,(b^{-1})_{a}=1+a \, b. \end{equation} \end{theorem} \begin{proof} \begin{description} \item[Case 1]$a>1,b>1$\\ Let $U=a(a^{-1})_{b}+b(b^{-1})_{a}$. Since $U\equiv1\pmod{a}$ and $U\equiv1\pmod{b}$, then $U\equiv1\pmod{a b}.$ That is, $U=1+k\,a\,b, \text{ where } k\in\mathbb{Z}$.
Therefore,\\ $1<a+b \le U=1+k\,a\,b \le a(b-1)+b(a-1)=2a\,b-(a+b)<2a\,b \implies 0<k\,a\,b<2a\,b \implies 0<k<2 \implies k=1$.\\ Therefore $a\,(a^{-1})_{b}+b\,(b^{-1})_{a}=1+a\,b.$ \item[Case 2]$a>1,b<-1$\\ Since $b+1\le (a^{-1})_{b} \le -1 \implies a(b+1) \le a(a^{-1})_{b} \le -a$, and $1 \le (b^{-1})_{a} \le a-1 \implies b(a-1) \le b(b^{-1})_{a} \le b$, therefore, $a(b+1) + b(a-1) \le a(a^{-1})_{b} + b(b^{-1})_{a} \le -a+b \implies 2a\,b+(a-b) \le U=1+k\,a\,b \le -(a-b) \implies 2a\,b<2a\,b+(a-b-1) \le k\,a\,b \le -(a-b+1)<0 \implies 0<k<2 \implies k=1$. \item[Case 3]$a<-1,b>1$\\ Similar to Case 2, and therefore $k=1$. \item[Case 4]$a<-1,b<-1$\\ Since $b+1 \le (a^{-1})_{b} \le -1 \implies -a \le a(a^{-1})_{b} \le a(b+1)$, and $a+1 \le (b^{-1})_{a} \le -1 \implies -b \le b(b^{-1})_{a} \le b(a+1)$, therefore $ -a-b \le a(a^{-1})_{b}+b(b^{-1})_{a} \le a(b+1)+b(a+1) \implies -(a+b) \le U=1+k\,a\,b \le 2a\,b+(a+b) \implies 0<-(a+b+1) \le k\,a\,b \le 2a\,b+(a+b-1) < 2a\,b \implies 0<k<2 \implies k=1.$ \item[Case 5]$a=1,b>1$\\ Since $(a^{-1})_b = 1$ and $(b^{-1})_a = 1$, hence, $a(a^{-1})_{b}+b(b^{-1})_{a}= 1 \cdot 1+b \cdot 1=1+b=1+a\,b$. \item[Case 6]$a=1,b<-1$\\ Since $(a^{-1})_b = b+1$ and $(b^{-1})_a = 0$, hence, $a(a^{-1})_{b}+b(b^{-1})_{a}= 1(b+1)+b \cdot 0=1+b=1+a\,b$. \item[Case 7]$a=-1,b>1$\\ Since $(a^{-1})_b = b-1$ and $(b^{-1})_a = 0$, hence, $a(a^{-1})_{b}+b(b^{-1})_{a}= (-1)(b-1)+b \cdot 0=1-b=1+a\,b$. \item[Case 8]$a=-1,b<-1$\\ Since $(a^{-1})_b = -1$ and $(b^{-1})_a = -1$, hence, $a(a^{-1})_{b}+b(b^{-1})_{a}= (-1) \cdot (-1)+b \cdot (-1)=1-b=1+a\,b$.
\item[Case 9]$|a|=1 \text{ and } |b|=1$\\ $a(a^{-1})_{b}+b(b^{-1})_{a}=a(\frac{1}{2}|b|(\operatorname{sgn}(b)-\operatorname{sgn}(a))+\operatorname{sgn}(a))+b(\frac{1}{2}|a|(\operatorname{sgn}(a)-\operatorname{sgn}(b))+\operatorname{sgn}(b))=b(a+\operatorname{sgn}(b))+\operatorname{sgn}(a)(a-a\, b\, \operatorname{sgn}(b)) =(1+a\,b)-(1-a \,\operatorname{sgn}(a))(1-b \,\operatorname{sgn}(b))=(1+a\,b)-(1-|a|)(1-|b|)=1+a\,b.$ \end{description} \end{proof} \begin{corollary} If $a,b,k \in\mathbb{Z} \text{ and } \gcd(a,b)=1$, then\\ $((k\,a+b)^{-1})_{a}=\begin{cases} (b^{-1})_{a} & |a|>1\\ (b^{-1})_{a}+\frac{1}{2}(\operatorname{sgn}(k\,a+b)-\operatorname{sgn}(b)) & |a|=1 \end{cases}$ \end{corollary} \begin{proof} \begin{description} \item[Case 1] $|a|>1$\\ $a(a^{-1})_{b}+b(b^{-1})_{a}=1+a\,b \implies (k\,a+b)(b^{-1})_{a}=1+a(b-(a^{-1})_{b}+k(b^{-1})_{a}) \implies ((k\,a+b)^{-1})_{a}=(b^{-1})_{a}$ \item[Case 2] $|a|=1$\\ $((k\,a+b)^{-1})_{a}-(b^{-1})_{a}\\ =\frac{1}{2}|a|(\operatorname{sgn}(a)-\operatorname{sgn}(k\,a+b))+\operatorname{sgn}(k\,a+b)-(\frac{1}{2}|a|(\operatorname{sgn}(a)-\operatorname{sgn}(b))+\operatorname{sgn}(b))\\ =\frac{1}{2}(|a|-2)(\operatorname{sgn}(b)-\operatorname{sgn}(k\,a+b))=\frac{1}{2}(\operatorname{sgn}(k\,a+b)-\operatorname{sgn}(b))$ \end{description} \end{proof} \begin{corollary} If $a,b,k \in\mathbb{Z}, |a| > 1$ and $\gcd(a,b)=1$, then \begin{equation}\label{eq:invform} \begin{aligned} (a^{-1})_{k\,a+b}&=k(a-(b^{-1})_{a})+(a^{-1})_{b}\\ (a^{-1})_{k\,a-b}&=k(b^{-1})_{a}-(b-(a^{-1})_{b}) \end{aligned} \end{equation} \end{corollary} \begin{proof} Let $c=k\,a+b$, and note $a\,(a^{-1})_{c}+c\,(c^{-1})_{a}=1+a \, c.$ \begin{align} (a^{-1})_{k\,a+b}&=\dfrac{1+a(k\,a+b)-(k\,a+b)((k\,a+b)^{-1})_{a}}{a}\notag\\ &=(k\,a+b)+\dfrac{1-(k\,a+b)(b^{-1})_{a}}{a}\notag\\ &=(k\,a+b)-k(b^{-1})_{a}+\dfrac{1-b(b^{-1})_{a}}{a}\notag\\ &=k(a-(b^{-1})_{a})+(a^{-1})_{b}\notag \end{align} This completes the first equation.
Noting that $((k\,a-b)^{-1})_a=((-b)^{-1})_a=a-(b^{-1})_a$, the second equation is obtained. \end{proof} It is interesting to note that Eq.\eqref{eq:invform} fails if the classical definition $(a^{-1})_b=0 \text{ for }|b|=1$ is used. \section{Extended Modular Inverse} \subsection{Extended Modular Inverse Definition} We are also interested in finding $x$ satisfying \begin{equation}\label{eq:extcongruence} ax\equiv b \pmod{m} \end{equation} and the solution is well known. For example, in \cite[p.125]{song}, the solution is given as \begin{equation}\label{eq:extmodinv} x\equiv b \cdot \dfrac{1}{a} \pmod{m} \end{equation} provided that $\gcd(a,m)=1$. The solution provided by Eq.\eqref{eq:extmodinv} comprises all the integers congruent to $x\pmod{m}$. A new function, the extended modular inverse, is introduced as follows. \begin{definition} Let $a,b,m \in\mathbb{Z}$ and $\gcd(a,m)=d$. The extended modular inverse, denoted by $(b(a^{-1})_m)_m$, is defined as \begin{equation}\label{eq:extmoddef} (b(a^{-1})_m)_m =\begin{cases} (x)_m & d=1 \,\&\, a\,x\equiv b\pmod{m}\\ (x)_{\frac{m}{d}} & d\ne1 \,\& \, d|b \,\&\, \left(\dfrac{a}{d}\right)x\equiv\left(\dfrac{b}{d}\right) \pmod{\left(\dfrac{m}{d}\right)}\\ \text{Undefined} & d \nmid b \end{cases} \end{equation} \end{definition} The extended modular inverse has the following properties: \begin{enumerate}[i.] \item If $\gcd(a,m)=1$, then $x=(b(a^{-1})_m)_m$ satisfies the linear congruence equation, Eq.\eqref{eq:extcongruence}.\\ This is obvious from the definition above. \item If $\gcd(a,m)=d\ne1$ and $d|b$, then \begin{equation}\label{eq:coprimesol} x_i=\left(\left(\dfrac{b}{d}\right)\left(\dfrac{a}{d}\right)^{-1}\right)_{\frac{m}{d}}+i\dfrac{m}{d}, \qquad i=0,1, \ldots ,d-1 \end{equation} satisfies the linear congruence equation, Eq.\eqref{eq:extcongruence}.
\begin{flalign} a\,x_i&=a\left(\left(\left(\dfrac{b}{d}\right)\left(\dfrac{a}{d}\right)^{-1}\right)_{\frac{m}{d}}+i\dfrac{m}{d}\right)=d\left(\dfrac{a}{d}\right)\left(\left(\dfrac{b}{d}\right)\left(\dfrac{a}{d}\right)^{-1}\right)_{\frac{m}{d}}+m\dfrac{i\,a}{d}\notag\\ &=d\left(\dfrac{b}{d}+k\dfrac{m}{d}\right)+m\dfrac{i\,a}{d}=b+m\left(k+i\dfrac{a}{d}\right) , \text{where } k \in \mathbb{Z}. \notag \end{flalign} Hence, $a\,x_i \equiv b \pmod{m}$. \item If $d\mid b$, then by Eq.\eqref{eq:moddef}, \begin{equation} \begin{cases} 0 \le (b(a^{-1})_m)_m < m & 0<m\\ m<(b(a^{-1})_m)_m \le 0 & m<0 \end{cases}\notag \end{equation} \item If $\gcd(a,m)=1, \gcd(a,b)=g$, then \begin{equation}\label{eq:coprimemodinv} (b(a^{-1})_m)_m=\left(\dfrac{b}{g}\left(\dfrac{a}{g}\right)^{-1}_m\right)_m \end{equation} Let $x=\left(\dfrac{b}{g}\left(\dfrac{a}{g}\right)^{-1}_m\right)_m$.\\ $a\,x=g\left(\dfrac{a}{g}\right)\left(\dfrac{b}{g}\left(\dfrac{a}{g}\right)^{-1}_m\right)_m=g\left(\dfrac{b}{g}+k\,m\right)=b+k\,g\,m, \text{ where } k \in \mathbb{Z}$.\\ Hence, $x$ satisfies Eq.\eqref{eq:extcongruence} and $(b(a^{-1})_m)_m=x$. \end{enumerate} \subsection{Reciprocity Formula for Extended Modular Inverse} The extended Euclidean algorithm, with roots going back more than 2000 years, is still the most basic algorithm for calculating the ordinary modular inverse, i.e., $(a^{-1})_m$. On the other hand, in ancient China, Dayan Qiuyi Shu (Dayan Algorithm to Find One) was used to calculate the modular inverse, as noted by Ding et al. in \cite[p.20]{ding+pei+salomaa}. Both the Euclidean algorithm and the Dayan algorithm use the trick of continued division to obtain pairs of smaller numbers until the solution is reached. However, the Euclidean algorithm stops at zero, while the Dayan algorithm stops when it reaches one, i.e., $r_n=1$, since the two numbers are assumed to be co-prime to each other. Because of this characteristic, the Chinese name of the algorithm states that its objective is to ``find one''.
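As a quick numerical illustration (not part of the original text), the signed modulo of Eq.\eqref{eq:moddef}, the inverse of Eq.\eqref{eq:invdef} and the reciprocity formula of Theorem \ref{th:reciprocity} can be spot-checked in Python; the function names \texttt{sgn} and \texttt{inv} and the brute-force search below are illustrative assumptions, not an efficient algorithm:

```python
# A quick numerical check (not from the paper) of (a)_m, (a^{-1})_m and the
# reciprocity formula  a*(a^{-1})_b + b*(b^{-1})_a = 1 + a*b.
# Note: Python's % operator already matches (a)_m = a - m*floor(a/m),
# giving a result in [0, m) for m > 0 and in (m, 0] for m < 0.
from math import gcd

def sgn(x):
    return (x > 0) - (x < 0)

def inv(a, m):
    """(a^{-1})_m as in the signed definition; needs a*m != 0, gcd(a, m) == 1."""
    if a == 0 or m == 0 or gcd(a, m) != 1:
        raise ValueError("undefined")
    if abs(m) == 1:
        # (1/2)|m|(sgn m - sgn a) + sgn a, with |m| = 1
        return (sgn(m) - sgn(a)) // 2 + sgn(a)
    # brute-force search over the admissible range (illustration only)
    rng = range(1, m) if m > 1 else range(m + 1, 0)
    return next(x for x in rng if (a * x - 1) % m == 0)

# reciprocity over small coprime non-zero pairs, including |a| = 1 or |b| = 1
for a in range(-9, 10):
    for b in range(-9, 10):
        if a != 0 and b != 0 and gcd(a, b) == 1:
            assert a * inv(a, b) + b * inv(b, a) == 1 + a * b
```

The brute-force search simply walks the admissible range of Eq.\eqref{eq:invdef}; for negative moduli it searches $m+1,\ldots,-1$, so e.g. $(2^{-1})_{-3}=-1$.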
A straightforward way to calculate the extended modular inverse is to first calculate the ordinary modular inverse, multiply it by $b$, and then perform a modulo operation to obtain the remainder, as Eq. \eqref{eq:extmodinv} shows. This is the most obvious approach by the definition of the extended modular inverse, Eq.\eqref{eq:extmoddef}. The following section, however, introduces a different algorithm, also based on repeated division, as the Euclidean and Dayan algorithms are. For the new algorithm, the following reciprocity formula is needed. \begin{theorem}\label{th:reciprocity2}\emph{(Reciprocity Formula for Extended Modular Inverse)}\\ Let $p,q,a \in\mathbb{Z}$, $0<q<p,0 \le a <p, \gcd(p,q)=1$, and let $p=c\,q+s\,r, a=\beta\,q-s\,\gamma$, where $0 \le r<q, 0\le\gamma<q,s\in\{-1,1\}$, then \begin{equation}\label{eq:extreciprocity} (a(q^{-1})_p)_p=\begin{cases} \dfrac{a}{q}+\dfrac{p}{q}(\gamma(r^{-1})_q)_q=\dfrac{a}{q}+\dfrac{p}{q}(-(s\,a)_q((s\,p)^{-1})_q)_q & 1<q\\ a & q=1 \end{cases} \end{equation} \end{theorem} \begin{proof} \begin{enumerate}[1.] \item $q=1$\\ Since $(1^{-1})_p=1$ and $0\le a<p$, therefore $(a(1^{-1})_p)_p=(a)_p=a$. \item $s=1$\\ Note $a=\beta\,q-\gamma$, where $\beta, \gamma \in\mathbb{Z}$, and $0 \le \gamma < q$.
With Eq.\eqref{eq:reciprocity} and Eq.\eqref{eq:invform}, \begin{enumerate}[i)] \item For $1<q$, with $(q^{-1})_{c\,q+r}=c(q-(r^{-1})_q)+(q^{-1})_r$ and $q(q^{-1})_r+r(r^{-1})_q=1+q\,r$,\\ $(a(q^{-1})_p)_p=((\beta\,q-\gamma)(c(q-(r^{-1})_q)+(q^{-1})_r))_p\\ =(\beta(c\,q^2-c\,q(r^{-1})_q+q(q^{-1})_r)-\gamma(c\,q-c(r^{-1})_q+(q^{-1})_r))_p\\ =(\beta(1+(c\,q+r)(q-(r^{-1})_q))-\gamma(c\,q-c(r^{-1})_q+(q^{-1})_r))_p\\ =\left(\beta-\gamma\left(c\,q-c(r^{-1})_q+\dfrac{1+q\,r-r(r^{-1})_q}{q}\right)\right)_p\\ =\left(\beta-\gamma\left(\dfrac{1+(c\,q+r)(q-(r^{-1})_q)}{q}\right)\right)_p =\left(\beta-\dfrac{\gamma}{q}+\dfrac{p}{q}\gamma(r^{-1})_q\right)_{p}\\ =\left(\beta-\dfrac{\gamma}{q}+\dfrac{p}{q}\left(q \bigg \lfloor \dfrac{\gamma(r^{-1})_q}{q} \bigg \rfloor+(\gamma(r^{-1})_q)_q\right)\right)_{p}\\ =\left(\beta-\dfrac{\gamma}{q}+\dfrac{p}{q}(\gamma(r^{-1})_q)_q\right)_p=\left(\dfrac{a}{q}+\dfrac{p}{q}(\gamma(r^{-1})_q)_q\right)_p$ \item $0 \le a\le p-1 \implies 0\le \dfrac{a}{q}+\dfrac{p}{q}(\gamma(r^{-1})_q)_q \le \dfrac{p-1}{q}+\dfrac{p}{q}(q-1)=\dfrac{p\,q-1}{q}<p$ \item From i) and ii), we get, for $1<q$, \begin{flalign} \left(\dfrac{a}{q}+\dfrac{p}{q}(\gamma(r^{-1})_q)_q\right)_p&=\dfrac{a}{q}+\dfrac{p}{q}(\gamma(r^{-1})_q)_q\notag\\ &=\dfrac{a}{q}+\dfrac{p}{q}((\beta\,q-a)_q((p-c\,q)^{-1})_q)_q\notag\\ &=\dfrac{a}{q}+\dfrac{p}{q}(-(a)_q(p^{-1})_q)_q\notag \end{flalign} This completes the proof for the case $s=1$. \end{enumerate} \item $s=-1$\\ Note that $a=\beta\,q+\gamma$ and, from Eq.\eqref{eq:invform}, $(q^{-1})_{c\,q-r}=c\,(r^{-1})_q-(r-(q^{-1})_r)$.
\begin{enumerate}[i.)] \item \begin{flalign} (a(q^{-1})_p)_p&=((\beta\,q+\gamma)(q^{-1})_{c\,q-r})_{c\,q-r}\notag\\ &=((\beta\,q+\gamma)(c\,(r^{-1})_q-(r-(q^{-1})_r)))_{c\,q-r}\notag\\ &=(c(\beta\,q+\gamma)(r^{-1})_q-r(\beta\,q+\gamma)+(\beta\,q+\gamma)(q^{-1})_r)_{c\,q-r}\notag\\ &=\left(\beta+\dfrac{\gamma}{q}\left(q(q^{-1})_r-q\,r\right)+c\,\gamma(r^{-1})_q+\beta(c\,q-r)(r^{-1})_q\right)_{c\,q-r}\notag\\ &=\left(\beta+\dfrac{\gamma}{q}\left(q(q^{-1})_r-q\,r\right)+c\,\gamma(r^{-1})_q\right)_{c\,q-r}\notag\\ &=\left(\beta+\dfrac{\gamma}{q}\left(1-r(r^{-1})_q\right)+c\,\gamma(r^{-1})_q\right)_{c\,q-r}=\left(\dfrac{a}{q}+\dfrac{p}{q}\gamma(r^{-1})_q\right)_p\notag\\ &=\left(\dfrac{a}{q}+\dfrac{p}{q}\left( q\bigg\lfloor \dfrac{\gamma(r^{-1})_q}{q} \bigg\rfloor+\left(\gamma(r^{-1})_q\right)_q\right) \right)_p\notag\\ &=\left(\dfrac{a}{q}+\dfrac{p}{q}\left(\gamma(r^{-1})_q\right)_q \right)_p=\dfrac{a}{q}+\dfrac{p}{q}\left(\gamma(r^{-1})_q\right)_q \notag \end{flalign} \item \begin{flalign} \dfrac{a}{q}+\dfrac{p}{q}\left(\gamma(r^{-1})_q\right)_q&=\dfrac{a}{q}+\dfrac{p}{q}\left((a-\beta q)((c\,q-p)^{-1})_q\right)_q\notag\\ &=\dfrac{a}{q}+\dfrac{p}{q}(-(-a)_q((-p)^{-1})_q)_q\notag \end{flalign} \end{enumerate} \end{enumerate} \end{proof} Obviously, Eq. \eqref{eq:extreciprocity} reduces to Eq. \eqref{eq:reciprocity}, the reciprocity formula for the ordinary modular inverse for positive integers $p, q$, when $a=1$, since $(-(p^{-1})_q)_q=q-(p^{-1})_q$. \begin{corollary} Let $p,q,a \in\mathbb{Z}$, $0<q<p,0 \le a <p, \gcd(p,q)=d\ne1$, and let $p=c\,q+s\,r, a=\beta\,q-s\,\gamma$, where $0 \le r<q, 0\le\gamma<q,s\in\{-1,1\}$, then \begin{equation*} (a(q^{-1})_p)_p=\begin{cases} \dfrac{a}{q}+\dfrac{p}{q}(\gamma(r^{-1})_q)_q=\dfrac{a}{q}+\dfrac{p}{q}(-(s\,a)_q((s\,p)^{-1})_q)_q & d|a \\ \text{Undefined} & d\nmid a\end{cases} \end{equation*} \end{corollary} \begin{proof} Assume $d|a$, let $p'=\dfrac{p}{d}, q'=\dfrac{q}{d},a'=\dfrac{a}{d}$, then \begin{enumerate}[1.]
\item $p=c\,q+s\,r, a=\beta\,q-s\,\gamma \implies p'=c\,q'+s\,r', a'=\beta\,q'-s\,\gamma'$, and \item Pick the smallest $x_i$ from Eq.\eqref{eq:coprimesol} and note Eq.\eqref{eq:extreciprocity}, \begin{flalign} (a(q^{-1})_p)_p&=\left(\left(\dfrac{a}{d}\right)\left(\dfrac{q}{d}\right)^{-1}_{\frac{p}{d}}\right)_{\frac{p}{d}}=(a'(q'^{-1})_{p'})_{p'} =\dfrac{a'}{q'}+\dfrac{p'}{q'}(\gamma'(r'^{-1})_{q'})_{q'}\notag\\ &=\dfrac{a'}{q'}+\dfrac{p'}{q'}((d\,\gamma')((d\,r')^{-1})_{d\,q'})_{d\,q'}=\dfrac{a}{q}+\dfrac{p}{q}(\gamma(r^{-1})_q)_q\notag \end{flalign} \end{enumerate} \end{proof} \begin{theorem}\label{th:theorem0}\emph{(Extended Modular Inverse Formula)} \footnote{As Theorem \ref{th:theorem0} reduces to the Dayan algorithm for specific values, it may be called a generalized Dayan algorithm. In fact, this algorithm was first inspired by my study of the Dayan algorithm \cite{man}.} \\ Let $p,q,a,k_i,r_i, c_i, \beta_i, \gamma_i \in\mathbb{N}_+, 1<q<p, 0\le a<p, \gcd(p,q)=1$.\\ If $r_{i-1}=c_{i+1}\,r_i+s_{i+1}\,r_{i+1}, \gamma_i=\beta_i\,r_i-s_{i+1}\,\gamma_{i+1}, f_i=c_if_{i-1}+s_{i-1}f_{i-2}$, where $r_{-1}=p,r_0=q,\gamma_0=(a)_p, f_0=1,f_1=c_1, 0\le r_{i+1}<r_i, 0\le\gamma_{i+1}<r_i$, then \begin{equation}\label{eq:extmodinvform1} (a(q^{-1})_p)_p=\displaystyle\sum_{i=0}^{n} \dfrac{p\,\gamma_i}{r_{i-1}r_i}, \footnote{Eq.~\eqref{eq:extmodinvform1} can be viewed as a generalized form of the following formula deduced by Euler, $z=q+a\,b\,v\left(\dfrac{1}{ab}-\dfrac{1}{bc}+\dfrac{1}{cd}-\dfrac{1}{de}+ \ldots\right)$ \cite[p.61]{dickson}.} \end{equation} and \begin{equation}\label{eq:extmodinvform2} (a(q^{-1})_p)_p=\displaystyle\sum_{i=0}^{n} f_i\beta_i \end{equation} where $n$ is such that $r_n=1$, or $\gamma_{n+1}=0$.\\ Note that this algorithm can be further generalized if the following formulas are used instead: $r_{i-1}=c_{i+1}\,r_i+s_{i+1,1}\,r_{i+1}, \gamma_i=\beta_i\,r_i+s_{i+1,2}\,\gamma_{i+1}$.
\end{theorem} \begin{proof} Note $q>1, (a(q^{-1})_p)_p=(\gamma_0(r_0^{-1})_{r_{-1}})_{r_{-1}}$, and Eq. \eqref{eq:extreciprocity}. \begin{enumerate}[1.)] \item Assume $r_i\ne1$ and $\gamma_{i+1}\ne0$ for all $i=0, 1, 2, \ldots, n-1$, then we can prove, by induction that \begin{equation}\label{eq:extmodinvform3} (\gamma_0(r_0^{-1})_{r_{-1}})_{r_{-1}}=\displaystyle\sum_{i=0}^{n} \dfrac{r_{-1}\gamma_i}{r_{i-1}r_i}+\dfrac{r_{-1}}{r_n}(\gamma_{n+1}(r_{n+1}^{-1})_{r_{n}})_{r_{n}} \end{equation} \begin{enumerate}[a.)] \item Since $(\gamma_0(r_0^{-1})_{r_{-1}})_{r_{-1}}=\dfrac{\gamma_0}{r_0}+\dfrac{r_{-1}}{r_0}(\gamma_1(r_1^{-1})_{r_{0}})_{r_{0}} $ therefore, Eq. \eqref{eq:extmodinvform3} is true when $n=0$. \item If $r_i\ne1$ and $\gamma_{i+1}\ne0$ for all $i=0, 1, 2, \ldots ,k-1$, then\\ $(\gamma_0(r_0^{-1})_{r_{-1}})_{r_{-1}}=\displaystyle\sum_{i=0}^{k} \dfrac{r_{-1}\gamma_i}{r_{i-1}r_i}+\dfrac{r_{-1}}{r_k}(\gamma_{k+1}(r_{k+1}^{-1})_{r_{k}})_{r_{k}}$.\\ If $r_k\ne1$, and $\gamma_{k+1}\ne0$, then\\ $(\gamma_0(r_0^{-1})_{r_{-1}})_{r_{-1}}=\displaystyle\sum_{i=0}^{k} \dfrac{r_{-1}\gamma_i}{r_{i-1}r_i}+\dfrac{r_{-1}}{r_k}(\gamma_{k+1}(r_{k+1}^{-1})_{r_{k}})_{r_{k}}\\ =\displaystyle\sum_{i=0}^{k} \dfrac{r_{-1}\gamma_i}{r_{i-1}r_i}+\dfrac{r_{-1}}{r_k}\left(\dfrac{\gamma_{k+1}}{r_{k+1}}+\dfrac{r_k}{r_{k+1}}(\gamma_{k+2}(r_{k+2}^{-1})_{r_{k+1}})_{r_{k+1}}\right)\\ =\displaystyle\sum_{i=0}^{k+1} \dfrac{r_{-1}\gamma_i}{r_{i-1}r_i}+\dfrac{r_{-1}}{r_{k+1}}(\gamma_{k+2}(r_{k+2}^{-1})_{r_{k+1}})_{r_{k+1}}$.\\ Hence, Eq. \eqref{eq:extmodinvform3} is also true for $n=k+1$, provided that it is true for $n=k$. \item If $\gamma_{n+1}=0$, or $r_n=1$, then $(\gamma_{n+1}(r_{n+1}^{-1})_{r_{n}})_{r_{n}}=0$, and hence, Eq.\eqref{eq:extmodinvform1} is obtained. 
\end{enumerate} \item Assume $r_i\ne1$ and $\gamma_{i+1}\ne0$ for all $i=0, 1, 2, \ldots, n-1$; then we can prove, by induction, that \begin{equation}\label{eq:extmodinvform4} (\gamma_0(r_0^{-1})_{r_{-1}})_{r_{-1}}=\displaystyle\sum_{i=0}^{n}f_i\beta_i-s_{n+1}f_n\dfrac{\gamma_{n+1}}{r_n}+\dfrac{r_{-1}}{r_n}(\gamma_{n+1}(r_{n+1}^{-1})_{r_{n}})_{r_{n}} \end{equation} \begin{enumerate}[a.] \item First we want to prove \begin{equation*} r_{-1}=f_i r_{i-1}+s_i f_{i-1}r_i \end{equation*} \begin{enumerate}[i.)] \item When $i=1, r_{-1}=f_1 r_0+s_1 f_0 r_1=c_1 r_0+s_1 r_1$. Hence, it is true. \item Assume it is true for $i=k$; then\\ $r_{-1}=f_k r_{k-1}+s_k f_{k-1}r_k=f_k(c_{k+1}r_k+s_{k+1}r_{k+1})+s_k f_{k-1}r_k=(c_{k+1}f_k+s_k f_{k-1})r_k+s_{k+1}f_k r_{k+1}=f_{k+1}r_k+s_{k+1}f_k r_{k+1}$. Hence, it is true for $i=k+1$. \end{enumerate} \item Since $(\gamma_0(r_0^{-1})_{r_{-1}})_{r_{-1}}=\dfrac{\gamma_0}{r_0}+\dfrac{r_{-1}}{r_0}(\gamma_1(r_1^{-1})_{r_0})_{r_0}\\ =\dfrac{\beta_0r_0-s_1\gamma_1}{r_0}+\dfrac{r_{-1}}{r_0}(\gamma_1(r_1^{-1})_{r_0})_{r_0} =\beta_0-s_1 f_0\dfrac{\gamma_1}{r_0}+\dfrac{r_{-1}}{r_0}(\gamma_1(r_1^{-1})_{r_0})_{r_0}$, therefore, Eq. \eqref{eq:extmodinvform4} is true when $n=0$.
\item If it is true for $n=k-1$, then\\ $(\gamma_0(r_0^{-1})_{r_{-1}})_{r_{-1}} =\displaystyle\sum_{i=0}^{k-1}f_i\beta_i-s_{k}f_{k-1}\dfrac{\gamma_{k}}{r_{k-1}}+\dfrac{r_{-1}}{r_{k-1}}(\gamma_{k}(r_{k}^{-1})_{r_{k-1}})_{r_{k-1}}\\ =\displaystyle\sum_{i=0}^{k-1}f_i\beta_i-s_{k}f_{k-1}\dfrac{\gamma_{k}}{r_{k-1}}+\dfrac{r_{-1}}{r_{k-1}}\left(\dfrac{\gamma_k}{r_k}+\dfrac{r_{k-1}}{r_k}(\gamma_{k+1}(r_{k+1}^{-1})_{r_{k}})_{r_{k}}\right)\\ =\displaystyle\sum_{i=0}^{k-1}f_i\beta_i-s_{k}f_{k-1}\dfrac{\gamma_{k}}{r_{k-1}}+\dfrac{f_k r_{k-1}+s_k f_{k-1}r_k}{r_{k-1}}\dfrac{\gamma_k}{r_k}+\dfrac{r_{-1}}{r_{k}}(\gamma_{k+1}(r_{k+1}^{-1})_{r_{k}})_{r_{k}}\\ =\displaystyle\sum_{i=0}^{k-1}f_i\beta_i+f_k\dfrac{\beta_{k}r_k-s_{k+1}\gamma_{k+1}}{r_k}+\dfrac{r_{-1}}{r_{k}}(\gamma_{k+1}(r_{k+1}^{-1})_{r_{k}})_{r_{k}}\\ =\displaystyle\sum_{i=0}^{k}f_i\beta_{i}-s_{k+1}f_k\dfrac{\gamma_{k+1}}{r_k}+\dfrac{r_{-1}}{r_{k}}(\gamma_{k+1}(r_{k+1}^{-1})_{r_{k}})_{r_{k}}$ \\Hence, it is true for $n=k$. \item Since $r_n=1 \implies \gamma_{n+1}=0$, $(\gamma_0(r_0^{-1})_{r_{-1}})_{r_{-1}}=\displaystyle\sum_{i=0}^{n}f_i\beta_i$ when $r_n=1$ or $\gamma_{n+1}=0$. Hence, Eq.\eqref{eq:extmodinvform2} is proved. \end{enumerate} \end{enumerate} \end{proof} \begin{corollary} For the algorithm in Theorem \ref{th:theorem0}, given $s_i\in\{-1,1\}$, the $r_i, c_i, \beta_i, \gamma_i$ are generated by the following equations: \begin{equation}\label{eq:extinvalgo1} \left.\begin{aligned} c_{i+1} &=s_{i+1}\Bigl \lfloor s_{i+1}\dfrac{r_{i-1}}{r_i} \Bigr \rfloor,\\ r_{i+1}&=(s_{i+1}\,r_{i-1})_{r_i},\\ \gamma_{i+1}&=(-s_{i+1}\gamma_i)_{r_i},\\ \beta_i&=s_{i+1}\Bigl \lceil s_{i+1}\dfrac{\gamma_i}{r_i} \Bigr \rceil\\ f_i&=c_if_{i-1}+s_{i-1}f_{i-2} \end{aligned}\right\}\text{ when } r_i>1 \end{equation} When $r_i=1$: $c_{i+1}=r_{i-1}, r_{i+1}=0,\gamma_{i+1}=0,\beta_i=\gamma_i$.
\end{corollary} \begin{proof} The algorithm in Theorem \ref{th:theorem0} requires $r_i, c_i, \beta_i, \gamma_i$ to satisfy \begin{equation*} r_{i-1}=c_{i+1}\,r_i+s_{i+1}\,r_{i+1}, \gamma_i=\beta_i\,r_i-s_{i+1}\,\gamma_{i+1} \end{equation*} When $r_i=1$, these conditions are obviously satisfied. When $r_i>1$, these conditions are also satisfied with Eq.\eqref{eq:extinvalgo1}, by the following substitutions. \begin{enumerate}[1.] \item $r_i$ \begin{flalign} c_i\,r_{i-1}+s_i\,r_i&=\left(s_i\Bigl \lfloor s_i\dfrac{r_{i-2}}{r_{i-1}} \Bigl \rfloor\right)r_{i-1}+s_i(s_i\,r_{i-2})_{r_{i-1}}\notag\\ &=s_i\left(\Bigl \lfloor s_i\dfrac{r_{i-2}}{r_{i-1}} \Bigl \rfloor r_{i-1}+(s_i\,r_{i-2})_{r_{i-1}}\right)=s_i(s_i\,r_{i-2})=r_{i-2}\notag \end{flalign} \item $\gamma_i$ \begin{flalign} \beta_i\,r_i-s_{i+1}\,\gamma_{i+1}&=s_{i+1}\Bigl \lceil s_{i+1}\dfrac{\gamma_i}{r_i} \Bigl \rceil\,r_i-s_{i+1}(-s_{i+1}\gamma_i)_{r_i}\notag\\ &=-s_{i+1}\Bigl \lfloor -s_{i+1}\dfrac{\gamma_i}{r_i} \Bigl \rfloor\,r_i-s_{i+1}(-s_{i+1}\gamma_i)_{r_i}\notag\\ &=-s_{i+1}\left(\Bigl \lfloor -s_{i+1}\dfrac{\gamma_i}{r_i} \Bigl \rfloor\,r_i+(-s_{i+1}\gamma_i)_{r_i}\right)\notag\\ &=-s_{i+1}(-s_{i+1}\gamma_i)=\gamma_i\notag \end{flalign} \end{enumerate} \end{proof} Alternately, Eq. \eqref{eq:extinvalgo1} can be written as : \begin{equation}\label{eq:extinvalgo2} \left.\begin{aligned} c_i &=s_i\Bigl \lfloor s_i\dfrac{r_{i-2}}{r_{i-1}} \Bigl \rfloor,\\ r_i&=(s_i\,r_{i-2})_{r_{i-1}},\\ \gamma_i&=(-s_i\gamma_{i-1})_{r_{i-1}},\\ \beta_i&=s_{i+1}\Bigl \lceil s_{i+1}\dfrac{\gamma_i}{r_i} \Bigl \rceil\\ f_i&=c_if_{i-1}+s_{i-1}f_{i-2} \end{aligned}\right\}\text{ when } r_i>0 \end{equation} \begin{corollary}\label{th:corol1} In Theorem \ref{th:theorem0}, if $\gcd(p,q)=d\ne1$, and with the same algorithm to generate $r_{i-1}=c_{i+1}\,r_i+s_{i+1}\,r_{i+1}, \gamma_i=\beta_i\,r_i-s_{i+1}\,\gamma_{i+1}, f_i=c_if_{i-1}+s_{i-1}f_{i-2}$, then \begin{enumerate}[1.] 
\item The sequence $r_i$ will terminate at $r_{n+1}=0$, or $\gamma_{n+1}=0$. \item If $r_{n+1}=0$, then $r_n=d=\gcd(p,q)$, and $d|r_i$ for all $i=-1,0,1, \ldots n$. \item If $r_{n+1}=0 \,\&\, r_n \nmid \gamma_{n+1}$, then $d\nmid a$, and there is no solution for $(a(q^{-1})_p)_p$. \item If $r_{n+1}=0$ and $\gamma_{m+1}=0, m\le n$, then \begin{equation*} \left(\left(\dfrac{a}{d}\right)\left(\dfrac{q}{d}\right)^{-1}_{\frac{p}{d}}\right)_{\frac{p}{d}}=\displaystyle\sum_{i=0}^{m} \dfrac{p\,\gamma_i}{r_{i-1}r_i}=\displaystyle\sum_{i=0}^{m} f_i\beta_i \end{equation*} \end{enumerate} \end{corollary} \begin{proof} \begin{enumerate}[1.] \item Since $r_i$ is a decreasing sequence of positive integers, it will eventually terminate at 0. If $\gamma_m$ reaches 0 before $r_n$ does, where $m<n$, then $\gamma_i=0$ for $i=m,m+1, \ldots, n$. \item Let $r_{n+1}=0$; then \begin{flalign} \gcd(r_{n-1},r_n)&=\gcd(c_{n+1}\,r_n+s_{n+1}\,r_{n+1},r_n)=\gcd(c_{n+1}\,r_n,r_n)\notag\\ &=r_n=d\notag. \end{flalign} For $i=0, 1, \ldots, n-1$, \begin{flalign} \gcd(r_{i-1},r_i)&=\gcd(c_{i+1}\,r_i+s_{i+1}\,r_{i+1},r_i)=\gcd(s_{i+1}\,r_{i+1},r_i)\notag\\ &=\gcd(r_{i+1},r_i)=\gcd(r_i,r_{i+1})\notag \end{flalign} Hence, $\gcd(p,q)=\gcd(r_{-1},r_0)=\gcd(r_{n-1},r_n)=d$, and $d|r_i$. \item For $i=0, 1, \ldots, n$, $\gamma_{i+1}=s_{i+1}(\beta_i\,r_i-\gamma_i) \implies \gamma_{i+1}\equiv-s_{i+1}\gamma_i \pmod{d}$.\\ Hence, $\gamma_{n+1}\equiv(-1)^{n+1}s_1s_2\ldots s_{n+1}\gamma_0 \pmod{d}$.\\ Since $d\nmid\gamma_{n+1}$, therefore $d\nmid \gamma_0$, i.e., $d\nmid a$. \item Assume $r_{n+1}=0$. Then, $r_n=1$, or $r_n\ne1$. \begin{enumerate}[a.)] \item $r_n=1$\\ $\gcd(p,q)=1$, and therefore Theorem \ref{th:theorem0} applies. \item $r_n =d\ne1$. Assume $d|\gamma_{n+1}$; otherwise, as seen above, $d\nmid a$ and there is no solution. Then let $\gamma_{m+1}=0$, where $m\le n$ and $\gamma_i\ne 0$ for $i=0,1,\ldots,m$. Therefore, the equations in Theorem \ref{th:theorem0} will terminate at $i=m$.
\end{enumerate} \end{enumerate} \end{proof} The algorithm for the extended modular inverse detailed in Theorem \ref{th:theorem0} exhibits some flexibility which can be exploited to optimize for specific types of problems. For example, if $a=1$ and all $s_i$'s are 1, then $\beta_i=1$ and the formula for calculating the $f_i$'s is equivalent to $j_s=q_sj_{s-1}+j_{s-2}$ in \cite[p.20]{ding+pei+salomaa}, which is the Dayan Qiuyi Shu. For this reason, Theorem \ref{th:theorem0} may be called a general Dayan algorithm. The Dayan algorithm has been known in China since the 13th century or earlier. However, apart from brief mentions in a few sources such as \cite{ding+pei+salomaa}, the Dayan algorithm is rarely discussed in detail in the English literature other than from a historical point of view. Obviously, when $a=1$, this algorithm produces the ordinary modular inverse, i.e., $(q^{-1})_p$. One may also notice that the algorithm terminates when $\gamma_{n+1}=0$, even if $r_n\ne1$. That means that for some $a$'s it is not necessary to carry the division steps all the way to $r_n=1$. At one extreme, for example, if $a=k\,q<p$, then Eq.\eqref{eq:extreciprocity} provides the answer $k$ after the first division step, when $\gamma_1=0$. This is also apparent from the following equation : \begin{equation} (k\,q(q^{-1})_p)_p =(k(1+p\,q-p(p^{-1})_q))_p=k\notag \end{equation} For $a$'s of the form $a=k_1\,q-k_2\,r_1$, where $k_2 r_1<q$, the algorithm terminates after the second division step, when $\gamma_2=0$. Other forms of $a$ may also cause the algorithm to terminate before $r_n=1$. Examples illustrating this property will be shown at the end of this paper. Given the flexibility in choosing the $s_i$'s, one may also select each $s_i$ independently so that $r_i$ has the least absolute value; for the Euclidean algorithm, this is known as the method of least absolute remainders.
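To make the termination behaviour concrete, here is a minimal Python sketch (assumed names, not the paper's code) of the algorithm of Theorem \ref{th:theorem0} specialized to $s_i=1$ for every $i$, computing $(a(q^{-1})_p)_p$ as $\sum f_i\beta_i$ via the recurrences of Eq.\eqref{eq:extinvalgo1}:

```python
# A sketch of the generalized Dayan algorithm of Theorem th:theorem0 with
# every s_i = +1:
#   r_{i+1} = (r_{i-1})_{r_i},   gamma_{i+1} = (-gamma_i)_{r_i},
#   beta_i  = ceil(gamma_i/r_i), f_{i+1} = c_{i+1}*f_i + f_{i-1},
# stopping at r_n = 1 (where beta_n = gamma_n) or at gamma_{n+1} = 0.
from math import gcd

def ext_inv(a, q, p):
    """Return (a (q^{-1})_p)_p as the sum of f_i * beta_i."""
    assert 1 < q < p and 0 <= a < p and gcd(p, q) == 1
    r_prev, r, g = p, q, a % p         # r_{i-1}, r_i, gamma_i
    f_prev, f = 0, 1                   # f_{i-1}, f_i  (f_{-1} = 0, f_0 = 1)
    total = 0
    while True:
        if r == 1:                     # r_n = 1: beta_n = gamma_n
            return total + f * g
        beta = -(-g // r)              # beta_i = ceil(gamma_i / r_i)
        total += f * beta
        g = (-g) % r                   # gamma_{i+1} = (-gamma_i)_{r_i}
        if g == 0:                     # early exit: gamma_{n+1} = 0
            return total
        c = r_prev // r                # c_{i+1}
        r_prev, r = r, r_prev % r      # r_{i+1} = (r_{i-1})_{r_i}
        f_prev, f = f, c * f + f_prev  # f_{i+1} = c_{i+1} f_i + f_{i-1}

# a = 1 gives the ordinary modular inverse (q^{-1})_p
assert ext_inv(1, 3, 7) == 5
# a = k*q terminates after one division step with answer k
assert ext_inv(6, 3, 7) == 2
```

Note the early exit at $\gamma_{n+1}=0$: for $a=k\,q$ it fires on the very first step, exactly as discussed above.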
Among all the different possible variations and flavours of this algorithm, some of the applications will be discussed in slightly greater detail in the next section. \section{Applications} \subsection{Modular Inverse Formula} \begin{corollary}\emph{(Modular Inverse Formula, first type)} Let $p,q$ be positive integers, $1<q<p$ and co-prime to each other. The modular inverse is given by \begin{flalign} \label{eq:modinvform1} (q^{-1})_p&=p\sum_{i=0}^{n}\dfrac{(-1)^i}{r_{i-1}r_i}+\begin{cases} 0 & n=even \\p & n=odd \end{cases}\\ \label{eq:modinvform2} (q^{-1})_p&=-p\sum_{i=0}^{\lfloor\frac{n-1}{2}\rfloor}\dfrac{c_{2i+1}}{r_{2i-1}r_{2i+1}}+\begin{cases} \dfrac{p}{r_{n-1}} & n=even \\p & n=odd \end{cases} \end{flalign} where $r_i=(r_{i-2})_{r_{i-1}}$ and $r_n=1$. \end{corollary} \begin{proof} \begin{enumerate}[1.] \item Equation \eqref{eq:modinvform1}\\ In Theorem \ref{th:theorem0}, let $a=1, s_i=1$; then from Eq.\eqref{eq:extinvalgo1}, we obtain $r_i=(r_{i-2})_{r_{i-1}}$ and $\gamma_0=1, \gamma_i=(-\gamma_{i-1})_{r_{i-1}}$. \begin{enumerate}[i.)] \item First, we show that $\gamma_{i}=r_{i-1}-r_i+(-1)^i$ for $i= 2,3, \ldots, n-1$, and\\ $\gamma_n=\begin{cases} (-1)^n-r_n & n=even\\r_{n-1}-r_n+(-1)^n & n=odd \end{cases}$. \begin{enumerate}[(a)] \item $\gamma_1=(-1)_{r_0}=r_0-1$,\\ $\gamma_2=(-(r_0-1))_{r_1}=(1-(r_0)_{r_1})_{r_1}=(1-r_2)_{r_1}=r_1-r_2+1$\\ $\gamma_3=(-(r_1-r_2+1))_{r_2}=(-r_1-1)_{r_2}=(-r_3-1)_{r_2}=r_2-r_3-1$ \item If it is true for $i=k$, then $\gamma_{k+1}=(-\gamma_k)_{r_k}=(-(r_{k-1}-r_k+(-1)^k))_{r_k}=(-r_{k-1}-(-1)^k)_{r_k}=(-r_{k+1}+(-1)^{k+1})_{r_k}=r_k-r_{k+1}+(-1)^{k+1}$, since $-r_{k+1}+(-1)^{k+1}<0$.\\ Hence, this equation is true for $i=2,3, \ldots, n-1.$ \item When $r_n=1, \gamma_n=(-\gamma_{n-1})_{r_{n-1}}=(-r_{n-2}+r_{n-1}-(-1)^{n-1})_{r_{n-1}}=(-r_{n-2}+1)_{r_{n-1}}=(-r_n+1)_{r_{n-1}}=0$, if $n=$even. If $n=$odd, then $\gamma_n=(-r_n-1)_{r_{n-1}}=r_{n-1}-r_n-1$, since $-r_n-1<0$.
\end{enumerate} \item \begin{flalign} \displaystyle\sum_{i=0}^{n}\dfrac{\gamma_i}{r_{i-1}r_i}&=\dfrac{1}{r_{-1}r_0}+\dfrac{r_0-1}{r_0 r_1}+\displaystyle\sum_{i=2}^{n}\dfrac{\gamma_i}{r_{i-1}r_i}\notag\\ &=\dfrac{1}{r_{-1}r_0}+\dfrac{r_0-1}{r_0 r_1}+\displaystyle\sum_{i=2}^{n-1}\dfrac{r_{i-1}-r_i+(-1)^i}{r_{i-1}r_i}+\dfrac{\gamma_n}{r_{n-1}r_n}\notag\\ &=\dfrac{1}{r_{-1}r_0}+\dfrac{1}{r_1}-\dfrac{1}{r_0 r_1}+\displaystyle\sum_{i=2}^{n-1}\left( \dfrac{1}{r_i} - \dfrac{1}{r_{i-1}} + \dfrac{(-1)^i}{r_{i-1}r_i} \right)+\dfrac{\gamma_n}{r_{n-1}r_n}\notag\\ &=\displaystyle\sum_{i=0}^{n-1}\dfrac{(-1)^i}{r_{i-1}r_i}+\dfrac{1}{r_{n-1}}+\dfrac{\gamma_n}{r_{n-1}r_n}\notag\\ &=\displaystyle\sum_{i=0}^{n-1}\dfrac{(-1)^i}{r_{i-1}r_i}+\dfrac{1}{r_{n-1}}+\begin{cases} \dfrac{(-1)^n-r_n}{r_{n-1}r_n} & n=even \\ \dfrac{r_{n-1}-r_n+(-1)^n}{r_{n-1}r_n} & n=odd \end{cases}\notag\\ &=\displaystyle\sum_{i=0}^{n}\dfrac{(-1)^i}{r_{i-1}r_i}+\begin{cases} 0 & n=even \\ 1 & n=odd \end{cases}\notag \end{flalign} \end{enumerate} \item Equation \eqref{eq:modinvform2}\\ Since \begin{flalign} \displaystyle\sum_{i=0}^{2\lfloor \frac{n-1}{2} \rfloor+1}\dfrac{(-1)^i}{r_{i-1}r_i}&=\displaystyle\sum_{i=0}^{\lfloor\frac{n-1}{2}\rfloor}\left(\dfrac{(-1)^{2i}}{r_{2i-1}r_{2i}}+\dfrac{(-1)^{2i+1}}{r_{2i}r_{2i+1}} \right) \notag \\ &=\displaystyle\sum_{i=0}^{\lfloor\frac{n-1}{2}\rfloor}\left(\dfrac{(-1)^{2i}(r_{2i+1}-r_{2i-1})}{r_{2i-1}r_{2i}r_{2i+1}} \right) =\displaystyle\sum_{i=0}^{\lfloor\frac{n-1}{2}\rfloor}\dfrac{-c_{2i+1}}{r_{2i-1}r_{2i+1}} \notag \end{flalign} it follows that $\displaystyle\sum_{i=0}^{n}\dfrac{(-1)^i}{r_{i-1}r_i}=\displaystyle\sum_{i=0}^{\lfloor\frac{n-1}{2}\rfloor}\dfrac{-c_{2i+1}}{r_{2i-1}r_{2i+1}} +\begin{cases} \dfrac{1}{r_{n-1}r_n} & n=even \\ 0 & n=odd\end{cases}$. \\Combining this with Eq.\eqref{eq:modinvform1} yields Eq.\eqref{eq:modinvform2}.
\end{enumerate} \end{proof} In order not to keep track of the parity of $n$, we may simply calculate the value of $p\displaystyle\sum_{i=0}^{n}\dfrac{(-1)^i}{r_{i-1}r_i}$; if it is negative, we add $p$ to obtain the modular inverse. Eq.\eqref{eq:modinvform1} can also be obtained through the reciprocity formula for the ordinary modular inverse, since \begin{equation} \dfrac{(q^{-1})_p}{p}+\dfrac{(p^{-1})_q}{q}=1+\dfrac{1}{p\,q}\notag \end{equation} \begin{corollary}\emph{(Modular Inverse Formula, second type)} Let $p,q$ be positive integers with $1<q<p$ and co-prime to each other. The modular inverse is given by \begin{flalign} \label{eq:modinvform3} (q^{-1})_p&=p\sum_{i=0}^{n}\dfrac{1}{r_{i-1}r_i} \\ \label{eq:modinvform4} (q^{-1})_p&=p\sum_{i=0}^{\lfloor\frac{n-1}{2}\rfloor}\dfrac{c_{2i+1}}{r_{2i-1}r_{2i+1}}+\begin{cases} \dfrac{p}{r_{n-1}} & n=even \\0 & n=odd \end{cases} \end{flalign} where $r_i=(-r_{i-2})_{r_{i-1}}$ and $r_n=1$. \end{corollary} \begin{proof} \begin{enumerate}[1.)] \item Equation \eqref{eq:modinvform3}\\ In Theorem \ref{th:theorem0}, let $a=1, s_i=-1$; then from Eq.\eqref{eq:extinvalgo1} we obtain $r_i=(-r_{i-2})_{r_{i-1}}$ and $\gamma_i=(\gamma_{i-1})_{r_{i-1}}$. Since $\gamma_0=1$, we have $\gamma_i=1$ for all $i$'s. Hence, the equation is proved. \item Equation \eqref{eq:modinvform4}\\ Note that $\dfrac{r_{i-1}+r_{i+1}}{r_{i-1}r_{i}r_{i+1}}=\dfrac{c_{i+1}}{r_{i-1}r_{i+1}}$; proceeding as in the proof of Eq. \eqref{eq:modinvform2}, it is straightforward to prove Eq.\eqref{eq:modinvform4}. \end{enumerate} \end{proof} With Eqs.\eqref{eq:modinvform1} and \eqref{eq:modinvform3}, it is necessary to keep track of only the remainders; no quotients are needed. They are probably the shortest or simplest formulas (algorithms) for calculating the modular inverse, though the trade-off is that floating-point divisions are used. However, due to the granularity of floating-point numbers, this may be useful for small numbers only.
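The remainder-only character of Eq.\eqref{eq:modinvform1} can be checked with a short Python sketch (names ours). To sidestep the floating-point granularity issue just mentioned, the sketch evaluates the sum with exact rational arithmetic from the standard library instead of floating-point division:

```python
from fractions import Fraction

def mod_inverse_by_remainders(q, p):
    """Evaluate Eq. (modinvform1): (q^{-1})_p = p * sum_i (-1)^i / (r_{i-1} r_i),
    adding p when the result is negative (the n = odd case).  Only the
    remainders r_i = r_{i-2} mod r_{i-1} are needed -- no quotients.
    Exact Fractions replace the floating-point divisions of the text;
    p and q must be co-prime with 1 < q < p."""
    r_prev, r_cur = p, q      # r_{-1} = p, r_0 = q
    total, sign = Fraction(0), 1
    while True:
        total += Fraction(sign, r_prev * r_cur)
        if r_cur == 1:        # reached r_n = 1
            break
        r_prev, r_cur = r_cur, r_prev % r_cur
        sign = -sign
    value = int(p * total)    # exact integer: (q^{-1})_p or (q^{-1})_p - p
    return value if value > 0 else value + p
```

For $q=106$, $p=189$ the raw sum gives $-41$ and adding $p$ yields $148$, matching the worked example in the Examples section.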
Furthermore, at first glance Eq.\eqref{eq:modinvform3} seems to be the better choice for calculating the modular inverse, since all the terms are positive and there is no need to keep track of the parity of $n$. However, expanding $\dfrac{p}{q}$ into continued fractions shows that this formula generally takes more division steps. \begin{corollary} Let $p,q$ be positive integers, $1<q<p$ and co-prime to each other. If $r_i=(-r_{i-2})_{r_{i-1}}$ and $r_n=1$, then for $j=0,1,\ldots,n$ \begin{equation} \label{eq:modinvcal} (r_j(q^{-1})_p)_p=f_j \end{equation} \end{corollary} \begin{proof} In Theorem \ref{th:theorem0}, let $a=r_j$ and $s_i=-1$. Then $\gamma_0=r_j, \gamma_i=(\gamma_{i-1})_{r_{i-1}}$ and $\beta_i=-\Bigl \lceil -\dfrac{\gamma_i}{r_{i}} \Bigl \rceil = \Bigl \lfloor \dfrac{\gamma_i}{r_{i}} \Bigl \rfloor$. Since $r_j<r_{j-1}< \ldots < r_1<r_0=q$, we have $\beta_0=\beta_1=\ldots=\beta_{j-1}=0, \beta_j=1$, and $\beta_{j+1}=\ldots=\beta_n=0$. Hence, from Eq. \eqref{eq:extmodinvform2},\\ \centerline{$(r_j(q^{-1})_p)_p=\displaystyle\sum_{i=0}^{n} f_i\beta_i=f_j$} \end{proof} Furthermore, from Eq.\eqref{eq:modinvcal} it can easily be deduced that, for $k\in\mathbb{Z}$, \begin{equation} (k\,r_j(q^{-1})_p)_p=(k\,f_j)_p\notag \end{equation} \subsection{Chinese Remainder Algorithm} The next example demonstrates the application of the extended modular inverse to simultaneous linear congruence equations, i.e., the Chinese Remainder Theorem. The algorithm below is the $n$-congruence iterative CRA from \cite[p.23]{ding+pei+salomaa}, also known as Garner's formula. \begin{theorem}\label{th:CRT}\emph{(Chinese Remainder Theorem)} Let $m_1,m_2, \ldots, m_n$ be pairwise relatively prime integers greater than 1, and let $a_1, a_2, \ldots$ , $a_n$ be integers less than $m_1, m_2, \ldots , m_n$ respectively.
\begin{equation}\label{eq:CRTeq} \left.\begin{aligned} x&\equiv a_1 \pmod{m_1}\\ x&\equiv a_2 \pmod{m_2}\\ &\ldots \ldots\\ &\ldots \ldots\\ x&\equiv a_n \pmod{m_n} \end{aligned}\right\} \end{equation} The solution of Eq.\eqref{eq:CRTeq} is given by $x=x_n$ from the following iterative algorithm: \begin{equation} \left.\begin{aligned} x_1&=a_1\\ x_2&=m_1((a_2-x_1)(m_{1}^{-1})_{m_2})_{m_2}+x_1\\ &\ldots \ldots\\ &\ldots \ldots\\ x_n&=m_1m_2 \ldots m_{n-1}((a_n-x_{n-1})((m_1m_2 \ldots m_{n-1})^{-1})_{m_n})_{m_n}+x_{n-1} \end{aligned}\right\}\notag \end{equation} \end{theorem} \begin{proof} It suffices to show that $x_2$ satisfies the first two congruence equations.\\ It is obvious that $x_2$ satisfies the first congruence equation, so we need only show that the second congruence equation is also satisfied. \begin{flalign} (x_2)_{m_2}&=(m_1((a_2-x_1)(m_{1}^{-1})_{m_2})_{m_2}+x_1)_{m_2}\notag\\ &=((m_1(a_2-x_1)(m_{1}^{-1})_{m_2})_{m_2}+x_1)_{m_2}\notag\\ &=(((a_2-x_1)(1+m_1 m_2-m_2(m_{2}^{-1})_{m_1}))_{m_2}+x_1)_{m_2}\notag\\ &=((a_2-x_1)+x_1)_{m_2}=(a_2)_{m_2}=a_2\notag \end{flalign} \end{proof} This algorithm requires only $(n-1)$ extended modular inverses and, more importantly, the extended modular inverse calculation statistically requires fewer division steps, though a detailed efficiency analysis remains to be carried out. \begin{theorem}\emph{(Chinese Remainder Theorem, non-co-prime moduli)}\\ Let $m_1,m_2$ be integers greater than 1, not necessarily co-prime to each other, and let $a_1, a_2$ be integers less than $m_1, m_2$ respectively.
\begin{equation*} \left.\begin{aligned} x&\equiv a_1 \pmod{m_1}\\ x&\equiv a_2 \pmod{m_2} \end{aligned}\right\}\notag \end{equation*} Let $d=\gcd(m_1,m_2)$ and suppose $d|(a_1-a_2)$. Then the solution is given by \begin{equation}\label{eq:CRTsol} x_2=m_1((a_2-a_1)(m_1^{-1})_{m_2})_{m_2}+a_1+k\dfrac{m_1 m_2}{d} \end{equation} where $k\in\mathbb{Z}$ and, by Corollary \ref{th:corol1}, with $a=(a_2-a_1)_{m_2}$,\\ \centerline{$((a_2-a_1)(m_{1}^{-1})_{m_2})_{m_2}=\left(\left(\dfrac{a}{d}\right)\left(\dfrac{m_1}{d}\right)^{-1}_{\frac{m_2}{d}}\right)_{\frac{m_2}{d}}$.} \end{theorem} \begin{proof} This solution is obviously true for co-prime moduli, by Theorem \ref{th:CRT}.\\ Assume now that $\gcd(m_1,m_2)=d\ne1$ and $d|a$.\\ It is obvious that $(x_2)_{m_1}=a_1$. \begin{flalign} (x_2)_{m_2}&=\left(m_1(a(m_{1}^{-1})_{m_2})_{m_2}+a_1+k\dfrac{m_1 m_2}{d}\right)_{m_2}\notag\\ &=\left(d\dfrac{m_1}{d}\left(\left(\dfrac{a}{d}\right)\left(\dfrac{m_1}{d}\right)^{-1}_{\frac{m_2}{d}}\right)_{\frac{m_2}{d}}+a_1\right)_{m_2} =\left(d\left(\dfrac{a}{d}+\dfrac{m_2}{d}k_2\right)+a_1\right)_{m_2}\notag\\ &=((a_2-a_1)_{m_2}+a_1)_{m_2}=(a_2)_{m_2}\notag \end{flalign} \end{proof} With Eq.\eqref{eq:CRTsol} as the solution of a two-modulus Chinese Remainder Algorithm, it is possible to parallelize the algorithm for improved efficiency. For example, given a set of 16 simultaneous linear congruence equations, with moduli not necessarily all co-prime to each other, we may first calculate, in parallel and independently, the 8 solutions of 8 sets of two-modulus simultaneous linear congruence equations. These 8 solutions can then be processed by 4 parallel threads or processors, and then by two threads in the next round. Thus, in general, it takes $\lceil\log_2(n)\rceil$ rounds of parallel computations of a two-modulus Chinese Remainder Algorithm.
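The iterative (Garner-style) construction of Theorem \ref{th:CRT} can be sketched in a few lines of Python. The function name is ours, and Python's built-in `pow(M, -1, m)` (available since Python 3.8) stands in for the modular inverse computation used in the text; the moduli are assumed pairwise co-prime:

```python
def crt_iterative(residues, moduli):
    """Fold in one congruence at a time, as in the iterative algorithm:
        x_1 = a_1
        x_k = M * (((a_k - x_{k-1}) * (M^{-1} mod m_k)) mod m_k) + x_{k-1},
    where M = m_1 * ... * m_{k-1}.  Moduli must be pairwise co-prime."""
    x, M = residues[0] % moduli[0], moduli[0]
    for a, m in zip(residues[1:], moduli[1:]):
        x = M * (((a - x) * pow(M, -1, m)) % m) + x
        M *= m
    return x
```

For the two-congruence example worked in the next section, `crt_iterative([5, 51], [106, 189])` returns 429.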
\section{Examples} \begin{table}[t] \centering \begin{tabular}{ | r | r | r | r | r | r | r | r | r | r |} \hline Step $i$ & -1 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline \hline $r_i=(r_{i-2})_{r_{i-1}}$ & 189 & 106 & 83 & 23 & 14 & 9 & 5 & 4 & 1 \\ \hline $\gamma_i=(-\gamma_{i-1})_{r_{i-1}}$ & & 1 & 105 & 61 & 8 & 6 & 3 & 2 & 2 \\ \hline $c_i=\lfloor \frac{r_{i-2}}{r_{i-1}}\rfloor$ & & & 1 & 1 & 3 & 1 & 1 & 1 & 1 \\ \hline $\beta_i=\lceil\frac{\gamma_i}{r_i}\rceil$ & & 1 & 2 & 3 & 1 & 1 & 1 & 1 & 2 \\ \hline $f_i=c_i f_{i-1}+f_{i-2}$ & 0 & 1 & 1 & 2 & 7 & 9 & 16 & 25 & 41 \\ \hline \end{tabular} \caption{Illustration for the Extended Modular Inverse Algorithm to find $(106^{-1})_{189}$ with $s_i=1$} \label{tab:XMIAlg} \end{table} Here are some examples to demonstrate the use of the algorithms and formulas discussed in this paper. Table \ref{tab:XMIAlg} gives the results generated by the Extended Modular Inverse Algorithm detailed in Theorem \ref{th:theorem0}. \begin{example} Equation \eqref{eq:extmodinvform2} \begin{flalign} (106^{-1})_{189}&=\displaystyle\sum_{i=0}^{n} f_i\beta_i \notag\\ &=1 \cdot 1+1\cdot2+2\cdot3+7\cdot1+9\cdot1+16\cdot1+25\cdot1+41\cdot2\notag\\ &=148\notag \end{flalign} \end{example} \begin{example} Equation \eqref{eq:extmodinvform1} \begin{flalign} (106^{-1})_{189}&=p\displaystyle\sum_{i=0}^{n} \dfrac{\gamma_i}{r_{i-1}r_i}\notag\\ &=189(\dfrac{1}{189\cdot106}+\dfrac{105}{106\cdot83}+\dfrac{61}{83\cdot23}+\dfrac{8}{23\cdot14}+\dfrac{6}{14\cdot9}\notag\\&+\dfrac{3}{9\cdot5}+\dfrac{2}{5\cdot4}+\dfrac{2}{4\cdot1})\notag\\ &=148\notag \end{flalign} \end{example} \begin{example} Equation \eqref{eq:modinvform1} \begin{flalign} (106^{-1})_{189}&=p\displaystyle\sum_{i=0}^{n} \dfrac{(-1)^i}{r_{i-1}r_i}\notag\\ &=189(\dfrac{1}{189\cdot106}-\dfrac{1}{106\cdot83}+\dfrac{1}{83\cdot23}-\dfrac{1}{23\cdot14}+\dfrac{1}{14\cdot9}\notag\\&-\dfrac{1}{9\cdot5}+\dfrac{1}{5\cdot4}-\dfrac{1}{4\cdot1})\notag\\ &=-41\notag \end{flalign} Since it gives a negative
value, and also note that $n=7$ is odd, we have to add $p$ to it; hence $(106^{-1})_{189}=-41+189=148$. \end{example} \begin{example} Non-identical $s_i$'s\\ Table \ref{tab:XMImode0} shows the results of the extended modular inverse calculation with a different $s_i$ in each step. \begin{flalign} (106^{-1})_{189}&=\displaystyle\sum_{i=0}^{n} f_i\beta_i \notag\\ &=1 \cdot 0+2\cdot0+9\cdot0+25\cdot1+41\cdot3\notag\\ &=148\notag\\ (106^{-1})_{189}&=189\left(\dfrac{1}{189\cdot106}+\dfrac{1}{106\cdot23}+\dfrac{1}{23\cdot9}+\dfrac{1}{9\cdot4}+\dfrac{3}{4\cdot1}\right)\notag\\ &=148\notag \end{flalign} \begin{table}[t] \centering \begin{tabular}{ | r | r | r | r | r | r | r | } \hline Step $i$ & -1 & 0 & 1 & 2 & 3 & 4 \\ \hline \hline $r_i=(s_i\,r_{i-2})_{r_{i-1}}$ & 189 & 106 & 23 & 9 & 4 & 1 \\ \hline $\gamma_i=(-s_i\,\gamma_{i-1})_{r_{i-1}}$ & & 1 & 1 & 1 & 1 & 3 \\ \hline $s_i$ & & & -1 & -1 & -1 & 1 \\ \hline $c_i=s_i\lfloor s_i\frac{r_{i-2}}{r_{i-1}}\rfloor$ & & & 2 & 5 & 3 & 2 \\ \hline $\beta_i=s_{i+1}\lceil s_{i+1}\frac{\gamma_i}{r_i}\rceil$ & & 0 & 0 & 0 & 1 & 3 \\ \hline $f_i=c_i f_{i-1}+s_{i-1}f_{i-2}$ & 0 & 1 & 2 & 9 & 25 & 41 \\ \hline \end{tabular} \caption{Illustration for the Extended Modular Inverse Algorithm to find $(106^{-1})_{189}$ with varying $s_i$'s.} \label{tab:XMImode0} \end{table} \end{example} \begin{example} Chinese Remainder Algorithm\\ We are going to solve the following simultaneous congruence equations: \begin{equation*} \left.\begin{aligned} x&\equiv 5 &\pmod{106}\\ x&\equiv 51 &\pmod{189} \end{aligned}\right\}\notag \end{equation*} \begin{table}[t] \centering \begin{tabular}{ | r | r | r | r | r | r | r | r | r | r |} \hline Step $i$ & -1 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline \hline $r_i=(r_{i-2})_{r_{i-1}}$ & 189 & 106 & 83 & 23 & 14 & 9 & 5 & 4 & 1 \\ \hline $\gamma_i=(-\gamma_{i-1})_{r_{i-1}}$ & & 46 & 60 & 23 & 0 & & & & \\ \hline $c_i=\lfloor \frac{r_{i-2}}{r_{i-1}}\rfloor$& & & 1 & 1 & 3 & & & & \\ \hline
$\beta_i=\lceil\frac{\gamma_i}{r_i}\rceil$ & & 1 & 1 & 1 & 0 & & & & \\ \hline $f_i=c_i f_{i-1}+f_{i-2}$ & 0 & 1 & 1 & 2 & 7 & & & & \\ \hline \end{tabular} \caption{Illustration for the Extended Modular Inverse Algorithm to find $(46(106^{-1})_{189})_{189}$ with $s_i=1$} \label{tab:CRT} \end{table} Note that we have to find the extended modular inverse $(46(106^{-1})_{189})_{189}$; by Eq.\eqref{eq:extmodinvform2}, and using the result from the Extended Modular Inverse algorithm shown in Table \ref{tab:CRT}, we get \begin{flalign} (46(106^{-1})_{189})_{189}&=1\cdot1+1\cdot1+2\cdot1+7\cdot0=4\notag\\ \notag\\ x&=m_1((a_2-x_1)(m_{1}^{-1})_{m_2})_{m_2}+x_1\notag\\ &=106((51-5)(106^{-1})_{189})_{189}+5=106(46(106^{-1})_{189})_{189}+5\notag\\ &=106\cdot4+5=429\notag \end{flalign} Comparing Tables \ref{tab:XMIAlg} and \ref{tab:CRT}, we notice that, if it is already known that the two numbers are co-prime to each other, it takes only three steps to calculate $(46(106^{-1})_{189})_{189}$, whereas it takes 8 steps to calculate $(106^{-1})_{189}$. If it is not known in advance whether the two moduli are co-prime to each other, it is still necessary to carry out the algorithm for the $r_i$'s until either $1$ or $0$ is reached. \end{example} \begin{example} Non-co-prime Chinese Remainder Theorem\\ The following example demonstrates the efficiency of the algorithm for the non-co-prime Chinese Remainder Theorem. There is no need to pre-determine whether the two moduli are co-prime with each other: $\gcd(m_1,m_2)$ is found as part of the algorithm, and the solution may even be found before the GCD is.
\begin{table}[t] \centering \begin{tabular}{ | r | r | r | r | r | r | r | r | r | r || r |} \hline Step $i$ & -1 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline \hline $r_i=(r_{i-2})_{r_{i-1}}$ & 945 & 530 & 415 & 115 & 70 & 45 & 25 & 20 & 5 & 0 \\ \hline $\gamma_i=(-\gamma_{i-1})_{r_{i-1}}$ & & 230 & 300 & 115 & 0 & & & & & \\ \hline $c_i=\lfloor \frac{r_{i-2}}{r_{i-1}}\rfloor$ & & & 1 & 1 & 3 & & & & & \\ \hline $\beta_i=\lceil\frac{\gamma_i}{r_i}\rceil$ & & 1 & 1 & 1 & 0 & & & & & \\ \hline $f_i=c_i f_{i-1}+f_{i-2}$ & 0 & 1 & 1 & 2 & 7 & & & & & \\ \hline \end{tabular} \caption{Illustration for the Extended Modular Inverse Algorithm to find $(230(530^{-1})_{945})_{945}$ with $s_i=1$} \label{tab:CRT2} \end{table} \begin{equation*} \left.\begin{aligned} x&\equiv 79 &\pmod{530}\\ x&\equiv 309 &\pmod{945} \end{aligned}\right\}\notag \end{equation*} With Table \ref{tab:CRT2}, it is found that \begin{flalign} (230(530^{-1})_{945})_{945}&=1\cdot1+1\cdot1+2\cdot1=4\notag\\ \gcd(530,945)&=5\notag \end{flalign} The solution is: \begin{flalign} x&=530(230(530^{-1})_{945})_{945}+79+k\dfrac{530\cdot945}{5}\notag\\ &=530\cdot4+79+100170k=2199+100170k\notag \end{flalign} As a verification, we get\\ $(2199+100170k)_{530}=79,\\ (2199+100170k)_{945}=309$. \end{example} \section{Conclusions} We have introduced a modified modular inverse and a new extended modular inverse, and proved the reciprocity formulas for them. An algorithm was found for calculating the extended modular inverse; for some specific sets of numbers, it is more efficient than the classical extended Euclidean algorithm. It is also applicable to the Chinese Remainder problem, with co-prime as well as non-co-prime moduli. \end{document}
\begin{document} \title{\bf Stabilization of the weakly coupled wave-plate system with one internal damping\thanks{This work is partially supported by the NSFC under grants 11471231, 11231007 and 11322110, by the Program for Changjiang Scholars and Innovative Research Team in University under grant IRT\_16R53, and the Fundamental Research Funds for the Central Universities in China under grant 2015SCU04A02.}} \author{Xiaoyu Fu\thanks{School of Mathematics, Sichuan University, Chengdu 610064, China. E-mail address: rj\[email protected].}\quad and\quad Qi L\"u\thanks{School of Mathematics, Sichuan University, Chengdu 610064, China. E-mail address: [email protected].}} \date{} \maketitle \begin{abstract} \noindent This paper addresses the stabilization of a system coupling a wave equation and an Euler--Bernoulli plate equation. Only one of the equations is assumed to be damped, with a damping function $d(\cdot)$. Under suitable assumptions on the damping and the coupling terms, it is shown that sufficiently smooth solutions of the system decay logarithmically at infinity, without any geometric condition on the effective damping domain. The proofs of these decay results rely on interpolation inequalities for coupled elliptic-parabolic systems and make use of an estimate of the resolvent operator for the coupled system. The main tools used to derive the desired interpolation inequalities are global Carleman estimates. \end{abstract} \medskip \noindent{\bf Key Words}. Logarithmic stability, coupled wave-plate equations, interpolation inequality, resolvent operator estimate. \section{Introduction} Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ ($n\in \mathbb{N}$) with $C^4$ boundary $\Gamma$.
Consider the following weakly coupled wave-plate system: \begin{equation}\label{0a1}\left\{\begin{array}{ll}\displaystyle y_{tt}-\Delta y+c(x) z+\alpha d(x)y_t=0 &\hbox{ in } \mathbb{R}^+\times \Omega,\\ \displaystyle z_{tt}+\Delta^2 z+c(x) y+(1-\alpha)d(x)z_t=0 &\hbox{ in } \mathbb{R}^+\times \Omega,\\ \displaystyle y=z=\Delta z =0&\hbox{ on } \mathbb{R}^+\times \Gamma,\\ \displaystyle (y(0), y_t(0))=(y^0,y^1),\ (z(0), z_t(0))=(z^0,z^1)&\hbox{ in } \Omega. \end{array}\right.\end{equation} Here $c(\cdot)\in L^\infty(\Omega)$ denotes the coupling function, $d(\cdot)\in L^\infty(\Omega)$ denotes the damping function, and $\alpha=0$ or $\alpha=1$. Both $c(\cdot)$ and $d(\cdot)$ are nonnegative functions. \medskip When $d=0$, the system \eqref{0a1} is a classical model for the propagation of waves in elastic solids (e.g. \cite{Achenbach,Graff}). In recent years, the model with $d\neq 0$ has attracted considerable attention, owing to the study of plastic composite materials. These materials are widely used in industry, for example in aircraft, ships, submarines and automobiles. Light weight is one of their main advantages; however, light weight may cause the structural elements of the composite to undergo unwanted vibrations. It is therefore important to add some damping to the system to attenuate these vibrations in the design of composite dynamic structures (e.g. \cite{Adams,Yim}). In \eqref{0a1}, if $\alpha=1$, then the term ``$d(x)y_t$'' is a damping acting on the wave, which stabilizes the wave directly and the plate indirectly. On the other hand, if $\alpha=0$, then ``$d(x)z_t$'' is a damping acting on the plate, which stabilizes the plate directly and the wave indirectly.
The stabilization of both wave and plate equations has been studied extensively in the literature (see \cite{BLR,BH,Fu1,L,LR,Ortega,P1,Zua} and the rich references therein). Generally speaking, there are three types of decay for the energy of damped systems: exponential decay, polynomial decay and logarithmic decay. To obtain the first two, one needs restrictive conditions on the support of the damping. For example, to get exponential decay of the energy of wave equations, one needs the set $$ \omega\buildrel \triangle \over =\{x\in\Omega:\, d(x)\geq d_0\mbox{ for some $d_0>0$}\} $$ to fulfill the geometric control condition (e.g. \cite{BLR}). To get polynomial decay, one needs $\Omega$ and $\omega$ to fulfill some special geometric condition (see \cite{BH,BZ,P1} for example). Similar considerations apply to plate equations (e.g. \cite{ALM,BZ1,Haraux}). When $\Omega$ and $\omega$ do not fulfill any special condition, the energies of both wave and plate equations are known to decay logarithmically (e.g. \cite{L,LR}). In this paper, we show that such decay also holds for \eqref{0a1}. There exist many results on the stabilization of coupled systems in the literature (see \cite{AB99, B1,ACK,AN,BL,RZZ,ZZ2} and the rich references therein). In particular, in \cite{ACK}, the author obtained polynomial decay for the system \eqref{0a1} with a constant coupling parameter and a damping which is effective on the whole boundary. As far as we know, there is no reference addressing the asymptotic behavior of the system (\ref{0a1}) with only one damping but without geometric assumptions on the effective damping domain. In this paper, we establish the logarithmic decay property for solutions of the system (\ref{0a1}).
According to a well-known result of Burq (see \cite[Theorem 3]{B}), to obtain the logarithmic decay rate it suffices to show some high-frequency estimates, with exponential loss, on the resolvent. Thus, the main difficulty is the estimate of the resolvent operators, which will be handled by means of Carleman inequalities. To this end, we borrow some ideas from \cite{Fu3}. However, there are some new essential difficulties. Indeed, to get the energy decay for a system coupled by two wave equations, one should establish an interpolation inequality for a system coupled by two elliptic equations. In our case, we have to obtain two interpolation inequalities for a system coupled by one elliptic and two parabolic equations. One cannot simply mimic the techniques in \cite{Fu3} to obtain our result; see Section \ref{ss5} for more details. The rest of this paper is organized as follows. In Section \ref{ops}, we state the main results of this paper. Section \ref{sec-Carleman} is devoted to establishing Carleman estimates for some second order partial differential operators. Section \ref{ss5} is devoted to proving some interpolation inequalities by means of those Carleman estimates. Finally, in Section \ref{ss6}, we prove our main results.

\section{Statement of the main results}\label{ops}

Throughout this paper, we always assume that $c(\cdot)$ and $d(\cdot)$ are bounded, real-valued, nonnegative functions satisfying
\begin{equation}\label{cond1} c(x)\ge c_0>0 \ \hbox{ in } \omega_c \end{equation}
and
\begin{equation}\label{cond1.1} d(x)\ge d_0>0 \ \hbox{ in }\omega_{d}, \end{equation}
where $\omega_c$ and $\omega_d$ are any fixed non-empty open subsets of $\Omega$, and $c_0$ and $d_0$ are given constants.

\begin{remark} The condition \eqref{cond1} means that the two equations are coupled effectively at least on the non-empty open set $\omega_c$. If it does not hold, then the wave and the plate may not influence each other efficaciously. In that case one cannot stabilize the system by putting damping on only one equation. \end{remark}

\begin{remark} The condition \eqref{cond1.1} means that the damping is effective at least on the non-empty open set $\omega_d$. If it does not hold, then the damping may not be strong enough to stabilize the system. An extreme example is $d=0$. \end{remark}

In what follows, we use $C=C(\Omega,\omega_c,\omega_{d})$ to denote generic positive constants which may vary from line to line. Let
$$ H\buildrel \triangle \over =H_0^1(\Omega)\times L^2(\Omega)\times (H^2(\Omega)\cap H_0^1(\Omega))\times L^2(\Omega). $$
Write an element $U\in H$ as $U=(y,u,z,v)$, where $y\in H_0^1(\Omega)$, $u, v\in L^2(\Omega)$ and $z\in H^2(\Omega)\cap H_0^1(\Omega)$. Define an unbounded operator ${\cal A}: \; D({\cal A})\subset H\to H$ by
\begin{equation}\label{0a2}\left\{\begin{array}{ll} D({\cal A})\buildrel \triangle \over =\left\{U\in H: \ {\cal A} U\in H, \ y\big|_{\Gamma}=z\big|_{\Gamma}=\Delta z\big|_{\Gamma}=0\right\}, \\ \displaystyle {\cal A} U=\Big (u, \Delta y- c(x) z-\alpha d(x)u,v, -\Delta^2 z-c(x)y-(1-\alpha)d(x)v\Big ).\end{array}\right.\end{equation}
It is easy to show that ${\cal A}$ generates a $C_0$-semigroup $\{e^{t{\cal A}}\}_{t\geq 0}$ on $H$. Therefore, system (\ref{0a1}) is well-posed.

The energy at time $t$ of a solution $(y,z)$ to the system (\ref{0a1}) is given by
\begin{equation}\label{energy}\begin{array}{ll}\displaystyle E(t)&\displaystyle=\frac{1}{2}\int_\Omega\big(|\nabla y(t)|^2+|y_t(t)|^2\big)dx\\ &\displaystyle\quad+\frac{1}{2}\int_\Omega\big(|\Delta z(t)|^2+|z_t(t)|^2\big)dx+\int_\Omega c(x){\mathop{\rm Re}\,}\Big ( y(t)\overline z(t)\Big )dx. \end{array}\end{equation}
When $d=0$, it is easy to see that $E(\cdot)$ is conservative. When $d\neq 0$, our main results are stated as follows:

\begin{theorem}\label{0t1} Let $c(\cdot)$ and $d(\cdot)$ satisfy (\ref{cond1}) and (\ref{cond1.1}). Suppose that $\omega_c\cap\omega_{d}\neq\emptyset$. Then, solutions $e^{t{\cal A}}(y^0,y^1,z^0,z^1)\equiv (y,y_t,z,z_t)\in C(\mathbb{R};\; D({\cal A}))\cap C^1(\mathbb{R};\;H)$ to system (\ref{0a1}) satisfy
\begin{equation}\label{0a3}\begin{array}{ll}\displaystyle ||e^{t{\cal A}}(y^0,y^1,z^0,z^1)||_{H}\le \frac{C}{\log (2+t)}||(y^0,y^1,z^0,z^1)||_{D({\cal A})},\\ \displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\forall\;(y^0,y^1,z^0,z^1)\in D({\cal A}),\ \forall\; t>0. \end{array}\end{equation}
\end{theorem}

\begin{remark} From \cite{L}, we know that the decay rate in Theorem \ref{0t1} is sharp. \end{remark}

Following \cite[Theorem 7.1]{D} (see also \cite[Theorem 3]{B}), Theorem \ref{0t1} is a consequence of the following resolvent estimate for the operator ${\cal A}$:

\begin{theorem}\label{0t2} Under the assumptions of Theorem \ref{0t1}, there exists a constant $C>0$ such that if
$$ {\mathop{\rm Re}\,}\lambda\in\left[-{e^{-C|{\mathop{\rm Im}\,}\lambda|}/C},\,0\right], $$
then it holds that
$$ ||({\cal A}-\lambda I)^{-1}||_{{\cal L}(H)}\le Ce^{C|{\mathop{\rm Im}\,}\lambda|}, \quad\hbox{ for } |\lambda|>1. $$
\end{theorem}

\begin{remark} In this paper, we assume that $\omega_c\cap\omega_{d}\neq\emptyset$. It would be quite interesting to consider the case $\omega_c\cap\omega_{d}=\emptyset$. As far as we know, this is an unsolved problem. \end{remark}

\section{Carleman estimates for the elliptic and parabolic operators} \label{sec-Carleman}

In this section, we establish Carleman estimates for the elliptic operator $\partial_{ss}+\Delta$ and the parabolic operators $\partial_t+\Delta$ and $\partial_s-\Delta$, respectively. To begin with, we assume that $\omega_k$ ($k=0, 1, 2, 3, 4$) are subdomains of $\Omega$ such that $\omega_0\subset\omega_1\subset\omega_2\subset\omega_3\subset\omega_c\cap\omega_{d}\buildrel \triangle \over =\omega_4$. Recall that there exists a function $\hat\psi\in C^2(\overline \Omega)$ such that (see \cite{FI} for example)
\begin{equation}\label{as} \hat\psi> 0 \hbox{ in } \Omega, \quad\hat\psi=0 \hbox{ on }\partial\Omega,\quad |\nabla\hat\psi|>0 \hbox{ in }\overline{\Omega\setminus\omega_0}. \end{equation}
With the aid of the function $\hat\psi$ defined above, we introduce the weight functions
\begin{equation}\label{as4} \theta=e^{\ell}, \quad \ell=\lambda\varphi,\quad \varphi=e^{\mu\psi},\quad \psi=\psi(s,x)\buildrel \triangle \over =\frac{\hat\psi(x)}{ ||\hat\psi||_{L^\infty(\Omega)}}+b^2-s^2. \end{equation}
Here $1<b\le2$ will be specified later, $\lambda, \mu, s\in \mathbb{R}$ are parameters and $x\in\overline{\Omega}$. We first recall the following known global Carleman estimate for elliptic equations.

\begin{lemma}\label{car1} (\cite[Theorem 3.1]{Fu3}) Let $p\in C^2((-b,b)\times\Omega;\;\mathbb{C})$, and let $\ell\in C^2((-b,b)\times\Omega)$ be given by (\ref{as4}). Then, there is a constant $\mu_0>0$ such that for all $\mu\ge \mu_0$, one can find two constants $C=C(\mu)>0$ and $\lambda_0=\lambda_0(\mu)$ so that for all $p\in H_0^1((-b,b)\times\Omega)$ with $\displaystyle p_{ss}+\Delta p=f_1$ (in $(-b,b)\times\Omega$, in the sense of distributions) for some $f_1\in L^2((-b,b)\times\Omega)$, and for all $\lambda\ge\lambda_0$, it holds that
\begin{equation}\label{crf}\begin{array}{ll}\displaystyle \lambda\mu^2\int_{-b}^b\int_{\Omega}\theta^2\varphi(|\nabla p|^2+|p_s|^2+\lambda^2\mu^2\varphi^2|p|^2)dxds\\ \displaystyle\le C\Big[\int_{-b}^b\int_{\Omega}\theta^2|f_1|^2dxds+\lambda\mu^2\int_{-b}^b\int_{\omega_0}\theta^2\varphi(|\nabla p|^2+|p_s|^2+\lambda^2\mu^2\varphi^2|p|^2)dxds\Big]. \end{array}\end{equation}
\end{lemma}

\medskip

Further, choose cut-off functions $\eta_{j+1} \in C_0^\infty(\omega_{j+1})$ ($j=0, 1, 2, 3, 4$) such that $\eta_{j+1} (x)=1$ in $\omega_{j}$. Then, we have the following local weighted energy estimate.

\begin{lemma}\label{le0m-01} Let $\gamma\in\mathbb{R}$ and let $\ell\in C^2((-b,b)\times\Omega;\mathbb{R})$ be given by (\ref{as4}). Then, there is a constant $\lambda_0>0$ such that for all $\lambda\ge \lambda_0$, one can find a constant $C>0$ so that for all $q\in H_0^1((-b,b)\times\Omega)$ with $\displaystyle \gamma q_{s}+\Delta q=f_2$ (in $(-b,b)\times\Omega$, in the sense of distributions) for some $f_2\in L^2((-b,b)\times\Omega)$, and for any $\beta\ge 2$, it holds that
\begin{equation}\label{1224-x1}\begin{array}{ll}\displaystyle \int_{-b}^b\int_{\Omega}\theta^2\eta_{j+1}^2\varphi^k |\nabla q|^2dxds\\ \displaystyle\le \frac{1}{(\lambda\mu)^\beta}\int_{-b}^b\int_{\Omega}\theta^2|f_2|^2dxds+C(\lambda\mu)^\beta\int_{-b}^b\int_{\Omega}\theta^2\eta_{j+1}^2\varphi^{k+2}|q|^2 dxds \end{array}\end{equation}
where $j=0, 1, 2, 3, 4$ and $k\in\mathbb{N}$.
\end{lemma}

{\it Proof. } We choose a cut-off function $\eta_{j+1} \in C_0^\infty(\omega_{j+1}; [0,1])$ such that $\eta_{j+1} (x)=1$ in $\omega_{j}$. A short calculation shows that
\begin{equation}\label{1202eq1}\begin{array}{ll}\displaystyle \theta^2\varphi^k\eta_{j+1}^2\Big[\overline {q}(\gamma q_{s}+\Delta {q})+ {q}\overline{(\gamma q_{s}+\Delta {q})}\Big]\\ \displaystyle=(\gamma\theta^2\varphi^k\eta_{j+1}^2|q|^2)_s-(\gamma\theta^2\varphi^k\eta_{j+1}^2)_{s}|q|^2+\sum_{l=1}^n\Big[\theta^2\varphi^k\eta_{j+1}^2(\overline {q}q_{x_l}+q{\overline q}_{x_l})\Big]_{x_l}\\ \displaystyle\qquad-2\theta^2\varphi^k\eta_{j+1}^2|\nabla q|^2-\sum_{l=1}^n\Big (\theta^2\varphi^k\eta_{j+1}^2\Big )_{x_l}(\overline {q}q_{x_l}+q{\overline q}_{x_l}). \end{array}\end{equation}
Next, recalling (\ref{as4}) for the definition of $\ell$, it is easy to see that
\begin{equation}\label{1202-as6}\left\{\begin{array}{ll}\displaystyle \ell_s=\lambda\mu\varphi\psi_s,\qquad\ell_{x_j}=\lambda\mu\varphi\psi_{x_j},\quad\ell_{x_j s}=\lambda\mu^2\varphi\psi_s\psi_{x_j},\\ \displaystyle \ell_{ss}=\lambda\mu^2\varphi\psi_s^2+\lambda\mu\varphi\psi_{ss},\quad\ell_{x_jx_k}=\lambda\mu^2\varphi\psi_{x_j}\psi_{x_k}+\lambda\mu\varphi\psi_{x_jx_k}. \end{array}\right. \end{equation}
Integrating (\ref{1202eq1}) over $(-b,b)\times\Omega$, noting that $q (-b)=q(b)=0$ in $\Omega$ and $q=0$ on the boundary, by (\ref{1202-as6}) we get the desired result immediately. \hfill$\square$

\medskip

Further, we recall the following pointwise weighted inequality for the parabolic operators.

\begin{lemma}\label{1218t1} (\cite[Theorem 2.1]{Fu01}) Let $\gamma\in \mathbb{R}$ and $q\in C^2(\mathbb{R}^{1+n}; \;\mathbb{C})$. Set $\theta=e^\ell,\ v=\theta q$ and $\displaystyle\Psi=-2\Delta\ell$. Then
\begin{equation}\label{1218a2}\begin{array}{ll}\displaystyle \frac{1}{2}\theta^2|\gamma q_s+\Delta q|^2+M_s+{\rm div}\, V\\ \displaystyle\ge2 \sum_{j,k=1}^n\ell_{x_jx_k}(v_{x_k}\bar v_{x_j}+\bar v_{x_k} v_{x_j})+2\Delta\ell|\nabla v|^2+B|v|^2, \end{array}\end{equation}
where
\begin{equation}\label{1218f3} A\buildrel \triangle \over =|\nabla\ell|^2+\Delta\ell, \end{equation}
and
\begin{equation}\label{1218a3}\left\{\begin{array}{ll}\displaystyle M\buildrel \triangle \over =(\gamma^2 \ell_s-\gamma A) |v|^2+\gamma|\nabla v|^2,\\ \displaystyle V\buildrel \triangle \over =[V^1,\cdots,V^k,\cdots,V^n],\\ \displaystyle V^k\buildrel \triangle \over =-\gamma (v_{x_k}\bar v_s+\bar v_{x_k}v_s)+2\sum_{j=1}^n\ell_{x_j}(v_{x_j}\bar v_{x_k}+\bar v_{x_j}v_{x_k})-2\ell_{x_k}|\nabla v|^2\\ \displaystyle\qquad\quad +2\Delta\ell(v_{x_k}\bar v+\bar v_{x_k}v) +(2A\ell_{x_k}-2\Delta\ell_{x_k}-2\gamma\ell_{x_k}\ell_s)|v|^2,\\ \displaystyle B\buildrel \triangle \over =\gamma^2\ell_{ss}-\gamma A_s-2\gamma(\nabla\ell\cdot\nabla\ell_s-\Delta\ell\,\ell_s)-2\Delta^2\ell+2(\nabla A\cdot\nabla \ell-A\Delta\ell). \end{array}\right.\end{equation}
\end{lemma}

\medskip

{\it Proof.} Take $\alpha=\gamma\in\mathbb{R}$, $\beta=0$, $m=n$ and $(a^{jk})_{n\times n}= I_n$ (the identity matrix) in \cite[Theorem 2.1]{Fu01}, with $t$ replaced by $s$; in this case ${\cal P} q=\gamma q_s+\Delta q$.
Noting that $\Psi=-2\Delta\ell$ and $\displaystyle \theta({\cal P} q\,\overline{I_1}+\overline{{\cal P} q}\, I_1)\le 2|I_1|^2+\frac{1}{2}\theta^2|{\cal P} q|^2$, we immediately get the desired result. \hfill$\square$

\begin{remark}
A similar point-wise weighted identity was given in \cite[Lemma 2.1]{lz} for real-valued parabolic equations.
\end{remark}

\medskip

Based on Lemma \ref{le0m-01} and Lemma \ref{1218t1}, we have the following Carleman estimate for heat equations.

\begin{lemma}\label{1202-car1} Let $\gamma\in\mathbb{R}$ and $\ell\in C^2((-b,b)\times\Omega;\mathbb{R})$ be given by (\ref{as4}). Then, there is a constant $\mu_1>0$ such that for all $\mu\ge \mu_1$, one can find two constants $C=C(\mu)>0$ and $\lambda_1=\lambda_1(\mu)$ so that for all $q\in H_0^1((-b,b)\times\Omega)$ satisfying $\gamma q_{s}+\Delta q=f_2$ (in $(-b,b)\times\Omega$, in the sense of distributions) with $f_2\in L^2((-b,b)\times\Omega)$, and for all $\lambda\ge\lambda_1$, it holds that
\begin{equation}\label{1201-crf}
\begin{array}{ll}
\displaystyle \int_{-b}^b\!\int_{\Omega}\theta^2(\lambda\varphi)^{-1}(|\Delta q|^2+|\gamma q_s|^2)dxds+ \lambda\mu^2\int_{-b}^b\!\int_{\Omega}\theta^2\varphi(|\nabla q|^2+\lambda^2\mu^2\varphi^2|q|^2)dxds\\
\displaystyle\le C\Big[\int_{-b}^b\!\int_{\Omega}\theta^2|f_2|^2dxds+\lambda^3\mu^4\int_{-b}^b\!\int_{\omega_1}\theta^2\varphi^3|q|^2dxds\Big].
\end{array}
\end{equation}
\end{lemma}

\medskip

{\it Proof. } We divide the proof into several steps.

\medskip

{\it Step 1.
} By (\ref{1202-as6}), a short calculation shows that
\begin{equation}\label{1218x1}
\begin{array}{ll}
\displaystyle 2\sum_{j,k=1}^n\ell_{x_jx_k}(v_{x_k}\bar v_{x_j}+\bar v_{x_k} v_{x_j})+2\Delta\ell|\nabla v|^2\\
\displaystyle\ge 4\lambda\mu^2\varphi |\nabla\psi\cdot\nabla v|^2+2\lambda\mu^2\varphi |\nabla\psi|^2|\nabla v|^2-C\lambda\mu\varphi |\nabla v|^2.
\end{array}
\end{equation}
Next, by (\ref{as4}), (\ref{1202-as6}) and (\ref{1218f3}), we obtain that
\begin{equation}\label{1218x3}
\left\{
\begin{array}{ll}
\displaystyle A=\lambda^2\mu^2\varphi^2|\nabla\psi|^2+\lambda\mu^2\varphi|\nabla\psi|^2+\lambda\mu\varphi\Delta\psi,\\
\displaystyle \nabla(\varphi^2|\nabla\psi|^2)\cdot\nabla\ell=2\lambda\mu^2\varphi^3 |\nabla\psi|^4+\lambda\mu\varphi^3\nabla(|\nabla\psi|^2)\cdot\nabla\psi.
\end{array}
\right.
\end{equation}
Recalling (\ref{1218a3}) for the definition of $B$, it follows from (\ref{1218x3}) that
\begin{equation}\label{1218x2}
\begin{array}{ll}
\displaystyle B&\displaystyle=\gamma^2\ell_{ss}-\gamma A_s-2\gamma(\nabla\ell\cdot\nabla\ell_s-\Delta\ell\,\ell_s)-2\Delta^2\ell+2(\nabla A\cdot\nabla\ell-A\Delta\ell)\\
&\displaystyle\ge 2\lambda^3\mu^4\varphi^3|\nabla\psi|^4-C\lambda^3\mu^3\varphi^3-C\lambda\mu^4\varphi.
\end{array}
\end{equation}
Combining (\ref{1218x1}) and (\ref{1218x2}), we find that
\begin{equation}\label{0619-1}
\begin{array}{ll}
\displaystyle \hbox{RHS of (\ref{1218a2})}&\displaystyle\ge 2\lambda\mu^2\varphi |\nabla\psi|^2 (|\nabla v|^2+\lambda^2\mu^2\varphi^2|\nabla\psi|^2|v|^2)\\
&\displaystyle\quad -C\lambda\mu\varphi(|\nabla v|^2+\lambda^2\mu^2\varphi^2|v|^2+\mu^3 |v|^2).
\end{array}
\end{equation}
Integrating inequality (\ref{1218a2}) over $(-b,b)\times\Omega$, using integration by parts, noting that $v_{x_j}=\frac{\partial v}{\partial\nu}\nu_j$ on $\Sigma$ (which follows from $v|_\Sigma=0$) and $v(-b)=v(b)=0$, by (\ref{as}) and (\ref{0619-1}) we have that
\begin{equation}\label{1202oo1}
\begin{array}{ll}
\displaystyle \lambda\mu^2\int_{-b}^b\!\int_\Omega\varphi |\nabla\psi|^2\Big(|\nabla v|^2+\lambda^2\mu^2\varphi^2|\nabla\psi|^2|v|^2\Big)dxds\\
\displaystyle\le C\Big[||\theta f_2||^2_{L^2(Q)}+\lambda\mu\int_{-b}^b\!\int_\Omega\varphi \Big(|\nabla v|^2+\lambda^2\mu^2\varphi^2|v|^2+\mu^3|v|^2\Big)dxds\Big],
\end{array}
\end{equation}
where we have used the fact that
$$
\int_{-b}^b\!\int_\Omega {\rm div}\, V\,dxds=2\lambda\mu\int_{-b}^b\!\int_\Gamma \varphi \frac{\partial\psi}{\partial\nu}\Big|\frac{\partial v}{\partial\nu}\Big|^2dxds\le0.
$$
On the other hand, by (\ref{as}) and (\ref{as4}), it is easy to see that
$$
h\buildrel \triangle \over =|\nabla\psi|=\frac{1}{||\hat\psi||_{L^\infty(\Omega)}}|\nabla\hat\psi(x)|>0\qquad \hbox{ in }\overline{\Omega\setminus\omega_0}.
$$
Then
\begin{equation}\label{1202oo4}
\begin{array}{ll}
&\displaystyle \lambda\mu^2\int_{-b}^b\!\int_\Omega\varphi h^2\Big(|\nabla v|^2+\lambda^2\mu^2\varphi^2h^2|v|^2\Big)dxds\\
&\displaystyle\ge c_1\lambda\mu^2\int_{-b}^b\!\int_{\Omega\setminus\omega_0}\varphi \Big(|\nabla v|^2+\lambda^2\mu^2\varphi^2|v|^2\Big)dxds\\
&\displaystyle\quad-C\lambda\mu^2\int_{-b}^b\!\int_{\omega_0}\varphi \Big(|\nabla v|^2+\lambda^2\mu^2\varphi^2|v|^2\Big)dxds.
\end{array}
\end{equation}
Combining (\ref{1202oo1})--(\ref{1202oo4}) and noting that $v=\theta q$, we get that
\begin{equation}\label{1224-0}
\begin{array}{ll}
\displaystyle \lambda\mu^2\int_{-b}^b\!\int_{\Omega}\theta^2\varphi\Big(|\nabla q|^2+\lambda^2\mu^2\varphi^2|q|^2\Big)dxds\\
\displaystyle\le C\Big[||\theta f_2||^2_{L^2(Q)}+\lambda\mu\int_{-b}^b\!\int_\Omega\theta^2\varphi \Big(|\nabla q|^2+\lambda^2\mu^2\varphi^2|q|^2+\mu^3|q|^2\Big)dxds\Big]\\
\displaystyle\qquad+C\lambda\mu^2\int_{-b}^b\!\int_{\omega_0}\theta^2\varphi\Big(|\nabla q|^2+\lambda^2\mu^2\varphi^2|q|^2\Big)dxds.
\end{array}
\end{equation}
Taking $\mu_1\buildrel \triangle \over =4C+1>0$, for any $\mu\ge\mu_1$ we have
\begin{equation}\label{1224-1}
\begin{array}{ll}
\displaystyle \lambda\mu^2\int_{-b}^b\!\int_{\Omega}\theta^2\varphi\Big(|\nabla q|^2+\lambda^2\mu^2\varphi^2|q|^2\Big)dxds\\
\displaystyle\le C\Big(||\theta f_2||^2_{L^2(Q)}+\lambda\mu^4\int_{-b}^b\!\int_\Omega\theta^2\varphi |q|^2dxds\Big)\\
\displaystyle\qquad+C\lambda\mu^2\int_{-b}^b\!\int_{\omega_0}\theta^2\varphi\Big(|\nabla q|^2+\lambda^2\mu^2\varphi^2|q|^2\Big)dxds.
\end{array}
\end{equation}
Taking $\lambda_2\buildrel \triangle \over =2\sqrt{C}+1>0$, for any $\lambda\ge\lambda_2$ we have
\begin{equation}\label{611-0}
\begin{array}{ll}
\displaystyle \lambda\mu^2\int_{-b}^b\!\int_{\Omega}\theta^2\varphi(|\nabla q|^2+\lambda^2\mu^2\varphi^2|q|^2)dxds\\
\displaystyle\le C\Big[\int_{-b}^b\!\int_{\Omega}\theta^2|f_2|^2dxds+\lambda\mu^2\int_{-b}^b\!\int_{\omega_0}\theta^2\varphi(|\nabla q|^2+\lambda^2\mu^2\varphi^2|q|^2)dxds\Big].
\end{array}
\end{equation}

\medskip

{\it Step 2. } Let us estimate ``$\displaystyle\int_{-b}^b\!\int_{\Omega}\theta^2(\lambda\varphi)^{-1}(|\Delta q|^2+|\gamma q_s|^2)dxds$''.
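For the reader's convenience, the absorption behind the choices $\mu_1=4C+1$ and $\lambda_2=2\sqrt C+1$ above can be spelled out; this is a sketch assuming, as is standard for weights of the form $\varphi=e^{\mu\psi}$ with $\psi\ge0$, that $\varphi\ge1$:
$$
C\lambda\mu\,\varphi\big(|\nabla q|^2+\lambda^2\mu^2\varphi^2|q|^2\big)\le\frac12\,\lambda\mu^2\varphi\big(|\nabla q|^2+\lambda^2\mu^2\varphi^2|q|^2\big)\quad\hbox{once }\mu\ge 2C,
$$
$$
C\lambda\mu^4\varphi|q|^2\le\frac12\,\lambda^3\mu^4\varphi^3|q|^2\quad\hbox{once }\lambda\ge \sqrt{2C},
$$
so the corresponding lower-order terms on the right-hand sides are absorbed by the left-hand sides; the slightly larger thresholds $\mu_1=4C+1$ and $\lambda_2=2\sqrt C+1$ clearly suffice.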
By (\ref{1202-as6}) and (\ref{as}), a short calculation shows that
\begin{eqnarray}\label{1202-213}
\begin{array}{ll}
\displaystyle\int_{-b}^b\!\int_{\Omega} \theta^2(\lambda\varphi)^{-1}\gamma (\overline{q}_s\cdot\Delta q+q_s\overline{\Delta q})dxds\\
\displaystyle =-\gamma\int_{-b}^b\!\int_{\Omega} \nabla\big(\theta^2(\lambda\varphi)^{-1}\big)\cdot(\nabla \overline{q}\, q_s+\nabla q\, \overline{q}_s) dxds +\gamma\int_{-b}^b\!\int_{\Omega}\big(\theta^2(\lambda\varphi)^{-1}\big)_s|\nabla q|^2 dxds\\
\displaystyle \le\frac{1}{2}\int_{-b}^b\!\int_{\Omega} \theta^2(\lambda\varphi)^{-1}|\gamma q_s|^2dxds+C\lambda\mu^2\int_{-b}^b\!\int_{\Omega}\theta^2\varphi |\nabla q|^2 dxds.
\end{array}
\end{eqnarray}
Combining (\ref{1202-213}) and (\ref{611-0}), we end up with
\begin{eqnarray}\label{1202-214}
\begin{array}{ll}
\displaystyle\int_{-b}^b\!\int_{\Omega}\theta^2(\lambda\varphi)^{-1}(|\Delta q|^2+|\gamma q_s|^2)dxds\\
\displaystyle \le C\Big[\int_{-b}^b\!\int_{\Omega}\theta^2|f_2|^2dxds+\lambda\mu^2\int_{-b}^b\!\int_{\omega_0}\theta^2\varphi(|\nabla q|^2+\lambda^2\mu^2\varphi^2|q|^2)dxds\Big].
\end{array}
\end{eqnarray}
Finally, combining (\ref{611-0}) and (\ref{1202-214}), by Lemma \ref{le0m-01}, and taking $\lambda_1=\max\{\lambda_0,\ \lambda_2\}$, we get the desired estimate in Lemma \ref{1202-car1} immediately. \hfill$\square$

\section{Two interpolation inequalities for the weakly coupled elliptic-parabolic system}\label{ss5}

In this section we shall prove two interpolation inequalities for the following weakly coupled elliptic-parabolic system:
\begin{equation}\label{1202-s1}
\left\{
\begin{array}{ll}
\displaystyle p_{ss}+\Delta p-c(x)q+i\alpha d(x) p_s=p^0 & \mbox{ in } X, \\
\displaystyle w_{s}+\Delta w-c(x)p+i(1-\alpha)d(x) q_s=w^0 & \mbox{ in } X, \\
\displaystyle q_{s}-\Delta q=w & \mbox{ in } X, \\
\displaystyle p=q=w=0&\mbox{ on }\Sigma.
\end{array}
\right.
\end{equation}
Here $X=(-2,2)\times\Omega,\ \Sigma=(-2,2)\times\Gamma$ and $p^0,\ w^0\in L^2(X)$. In what follows, we will use the notations
$$Y\buildrel \triangle \over =(-1,1)\times\Omega,\qquad X^*\buildrel \triangle \over =(-2,2)\times(\omega_c\cap\omega_{d}).$$
We have the following interpolation inequalities for system (\ref{1202-s1}).

\begin{theorem}\label{1202-t1} Let $\alpha=1$. Under the assumptions of Theorem \ref{0t1}, there exists a constant $C>0$ such that, for any $\varepsilon>0$, any solution $(p,w,q)$ of system (\ref{1202-s1}) satisfies
\begin{equation}\label{1202-0a2}
\begin{array}{ll}
\displaystyle ||p||_{H^1(Y)}+||w||_{L^2(Y)}+||q||_{L^2(Y)}\\
\displaystyle\le Ce^{C/\varepsilon}\big(||p^0||_{L^2(X)}+||w^0||_{L^2(X)}+||p||_{L^2(X^*)}+||p_s||_{L^2(X^*)}\big)\\
\displaystyle\quad+Ce^{-2/\varepsilon}\big(||p||_{L^2(X)}+||p_s||_{L^2(X)}+||w||_{L^2(X)}+||q||_{L^2(X)}\big).
\end{array}
\end{equation}
\end{theorem}

\begin{theorem}\label{1202-t2} Let $\alpha=0$. Under the assumptions of Theorem \ref{0t1}, there exists a constant $C>0$ such that, for any $\varepsilon>0$, any solution $(p,w,q)$ of system (\ref{1202-s1}) satisfies
\begin{equation}\label{1202-a2}
\begin{array}{ll}
\displaystyle ||p||_{H^1(Y)}+||w||_{L^2(Y)}+||q||_{L^2(Y)}\\
\displaystyle\le Ce^{C/\varepsilon}\big(||p^0||_{L^2(X)}+||w^0||_{L^2(X)}+||q||_{L^2(X^*)}+||q_s||_{L^2(X^*)}\big)\\
\displaystyle\quad+Ce^{-2/\varepsilon}\big(||p||_{L^2(X)}+||p_s||_{L^2(X)}+||w||_{L^2(X)}+||q||_{L^2(X)}\big).
\end{array}
\end{equation}
\end{theorem}

\medskip

{\it Proof of Theorem \ref{1202-t1}. } The proof is based on the global Carleman estimates in Lemma \ref{car1} and Lemma \ref{1202-car1}. In the case $\alpha=1$, the damping term involves $d(x)p_s$, and the main difficulty is to bound the energy of the coupled system $(p,w,q)$ by the localized quantity $\displaystyle \int_{\omega_c\cap\omega_d}(|p|^2+|p_s|^2)dx$ alone. The proof is long, hence we divide it into several steps.

{\it Step 1. } Note that there are no boundary conditions for $p,\ w$ and $q$ at $s=\pm2$ in system (\ref{1202-s1}).
Therefore, we need to introduce a cut-off function $\phi=\phi(s)\in C_0^\infty(-b,b)\subset C_0^\infty(\mathbb{R})$ such that
\begin{equation}\label{1202-as7}
\left\{
\begin{array}{ll}
\displaystyle 0\le\phi(s)\le1, \quad &|s|<b,\\
\displaystyle \phi(s)=1,&|s|\le b_0,
\end{array}
\right.
\end{equation}
where $1<b_0<b\le2$ are given as follows:
\begin{equation}\label{1202-as2}
b\buildrel \triangle \over =\sqrt{1+\frac{1}{\mu}\ln(2+e^\mu)},\qquad b_0\buildrel \triangle \over =\sqrt{b^2-\frac{1}{\mu}\ln\left(\frac{1+e^\mu}{e^\mu}\right)},\qquad\forall\,\mu>\ln 2.
\end{equation}
Put
\begin{equation}\label{1202-as8}
\hat p=\phi p,\quad \hat w=\phi w, \quad \hat q=\phi q.
\end{equation}
Then, noting that $\phi$ does not depend on $x$, by (\ref{1202-s1}) it follows that
\begin{equation}\label{1202-as9}
\left\{
\begin{array}{ll}
\displaystyle \hat p_{ss}+\Delta \hat p=F_1& \mbox{ in } X, \\
\displaystyle \hat w_{s}+\Delta \hat w=F_2& \mbox{ in } X,\\
\displaystyle \hat q_{s}-\Delta \hat q=F_3& \mbox{ in } X,\\
\displaystyle\hat p=\hat q=\hat w=0 &\mbox{ on }\Sigma,
\end{array}
\right.
\end{equation}
where
\begin{equation}\label{lab-1}
\left\{
\begin{array}{ll}
\displaystyle F_1\buildrel \triangle \over =\phi_{ss}p+2\phi_sp_s+\phi p^0-i\alpha d(x)\phi p_s+c(x)\hat q,\\
\displaystyle F_2\buildrel \triangle \over =\phi_{s}w+\phi w^0-i(1-\alpha)d(x)\phi q_s+c(x)\hat p,\\
\displaystyle F_3\buildrel \triangle \over =\phi_{s}q+\hat w.
\end{array}
\right.
\end{equation}
For system (\ref{1202-as9}), using Lemmas \ref{car1} and \ref{1202-car1} and noting that $\alpha=1$, we conclude that there is a $\mu_1>0$ such that for all $\mu\ge \mu_1$, one can find two constants $C=C(\mu)>0$ and $\lambda_1=\lambda_1(\mu)$ so that for all $\lambda\ge\lambda_1$, it holds that
\begin{equation}\label{as216}
\begin{array}{ll}
\displaystyle \lambda\mu^2\int_{-b}^b\!\int_{\Omega}\theta^2\varphi(|\nabla\hat p|^2+|\hat p_s|^2+\lambda^2\mu^2\varphi^2|\hat p|^2+\lambda^2\mu^2|\hat w|^2+\lambda^2\mu^2|\hat q|^2)dxds\\
\displaystyle\quad+\int_{-b}^b\!\int_{\Omega}\theta^2(\lambda\varphi)^{-1}(|\Delta \hat w|^2+|\hat w_s|^2+|\Delta \hat q|^2+|\hat q_s|^2)dxds\\
\displaystyle\le C\Big[\int_{-b}^b\!\int_{\Omega}\theta^2\Big(|F_1|^2+|F_2|^2+|F_3|^2\Big)dxds\\
\displaystyle\quad+\lambda\mu^2\int_{-b}^b\!\int_{\omega_1}\theta^2\varphi \Big(\lambda^2\mu^2\varphi^2|\hat p|^2+|\hat p_s|^2+|\nabla\hat p|^2+\lambda^2\mu^2\varphi^2|\hat w|^2+\lambda^2\mu^2\varphi^2|\hat q|^2\Big) dxds\Big].
\end{array}
\end{equation}

\medskip

{\it Step 2. } Let us estimate ``$\displaystyle\int_{-b}^b\!\int_{\omega_1}\theta^2\varphi^3|\hat w|^2dxds$''. Recall that $\eta_2\in C_0^\infty(\omega_2)$ satisfies $\eta_2=1$ in $\omega_1$.
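As a quick check, which we record for convenience using only (\ref{1202-as2}), the time levels are indeed ordered as claimed, $1<b_0<b\le 2$, for every $\mu>\ln 2$:
$$
b^2-b_0^2=\frac{1}{\mu}\ln\frac{1+e^\mu}{e^\mu}>0,\qquad
b_0^2-1=\frac{1}{\mu}\ln\frac{(2+e^\mu)e^\mu}{1+e^\mu}>0,\qquad
b^2-4=\frac{1}{\mu}\ln\frac{2+e^\mu}{e^{3\mu}}<0,
$$
since $e^\mu+e^{2\mu}>1$ always, and $e^{3\mu}>2+e^\mu$ whenever $e^\mu>2$.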
By (\ref{1202-as9}) and (\ref{lab-1}), we have
\begin{equation}\label{1201-x0}
\begin{array}{ll}
&\displaystyle \theta^2\varphi^3\eta_2^2|\hat w|^2\\
&\displaystyle=\theta^2\varphi^3\eta_2^2\overline{\hat w}(\hat q_s-\Delta\hat q)-\theta^2\varphi^3\eta_2^2\overline{\hat w}\phi_s q\\
&\displaystyle=-\theta^2\varphi^3\eta_2^2\hat q\overline{(\hat w_s+\Delta\hat w)}+(\theta^2\varphi^3\eta_2^2\overline{\hat w}\hat q)_s-(\theta^2\varphi^3\eta_2^2)_s\overline{\hat w}\hat q-\theta^2\varphi^3\eta_2^2\overline{\hat w}\phi_s q\\
&\displaystyle\quad-\sum_{j=1}^n\Big[\theta^2\varphi^3\eta_2^2(\overline{\hat w}\hat q_{x_j}-\overline{\hat w}_{x_j}\hat q)\Big]_{x_j}+\sum_{j=1}^n(\theta^2\varphi^3\eta_2^2)_{x_j}(\overline{\hat w}\hat q_{x_j}-\overline{\hat w}_{x_j}\hat q).
\end{array}
\end{equation}
Now, integrating (\ref{1201-x0}) over $(-b,b)\times\Omega$, noting that $\hat w(-b)=\hat w(b)=0$ in $\Omega$, by (\ref{1202-as9}) and (\ref{1202-as6}) we find that
\begin{equation}\label{1219-x0}
\begin{array}{ll}
\displaystyle \int_{-b}^b\!\int_{\Omega}\theta^2\eta_2^2\varphi^3|\hat w|^2dxds\\
\displaystyle\le -\int_{-b}^b\!\int_{\Omega}\theta^2\varphi^3\eta_2^2\hat q\,\overline{F_2}\,dxds+C\lambda^2\mu^2\int_{-b}^b\!\int_{\Omega}\theta^2\varphi^5\eta_2^2|\hat q|^2dxds\\
\displaystyle\quad+C\int_{-b}^b\!\int_{\Omega}\Big|\sum_{j=1}^n(\theta^2\varphi^3\eta_2^2)_{x_j}(\overline{\hat w}\hat q_{x_j}-\overline{\hat w}_{x_j}\hat q)\Big| dxds.
\end{array}
\end{equation}
Moreover, by Lemma \ref{le0m-01}, it is easy to check that
\begin{equation}\label{1219-x1}
\begin{array}{ll}
\displaystyle \int_{-b}^b\!\int_{\Omega}\Big|\sum_{j=1}^n(\theta^2\varphi^3\eta_2^2)_{x_j}(\overline{\hat w}\hat q_{x_j}-\overline{\hat w}_{x_j}\hat q)\Big| dxds\\
\displaystyle\le \frac{1}{(\lambda\mu)^3}\int_{-b}^b\!\int_{\omega_1}\theta^2\eta_2^2\varphi(|\nabla \hat w|^2+\varphi^2|\hat w|^2)dxds\\
\displaystyle\quad+C(\lambda\mu)^5\Big[\int_{-b}^b\!\int_{\omega_1}\theta^2\eta_2^2\varphi^5|\nabla \hat q|^2dxds+\int_{-b}^b\!\int_{\omega_2}\theta^2\varphi^7|\hat q|^2dxds\Big]\\
\displaystyle\le \frac{1}{(\lambda\mu)^5}\int_{-b}^b\!\int_{\Omega}\theta^2|F_2|^2dxds+\frac{C}{\lambda\mu}\int_{-b}^b\!\int_{\Omega}\theta^2\varphi^3\eta_2^2|\hat w|^2dxds\\
\displaystyle\qquad+\frac{C}{(\lambda\mu)^4}\int_{-b}^b\!\int_{\Omega}\theta^2|F_3|^2dxds+C(\lambda\mu)^{14}\int_{-b}^b\!\int_{\omega_2}\theta^2\varphi^7|\hat q|^2dxds.
\end{array}
\end{equation}
Combining (\ref{1219-x0}) and (\ref{1219-x1}), for fixed $\mu$ we conclude that there are constants $C=C(\mu)>0$ and $\lambda_3=\lambda_3(\mu)>0$ such that for all $\lambda\ge\lambda_3$, it holds that
\begin{equation}\label{1219-x2}
\begin{array}{ll}
\displaystyle \lambda^3\mu^4\int_{-b}^b\!\int_{\Omega}\theta^2\eta_2^2\varphi^3|\hat w|^2dxds\\
\displaystyle\le C\Big[\int_{-b}^b\!\int_{\Omega}\theta^2\Big(|F_2|^2+|F_3|^2\Big)dxds+(\lambda\mu)^{18}\int_{-b}^b\!\int_{\Omega}\theta^2\eta_2^2\varphi^7|\hat q|^2dxds\Big].
\end{array}
\end{equation}

\medskip

{\it Step 3. } Let us estimate ``$\displaystyle\int_{-b}^b\!\int_{\omega_2}\theta^2\varphi^7 |\hat q|^2dxds$''. Recall that the cut-off function $\eta_3\in C_0^\infty(\omega_3)$ satisfies $\eta_3=1$ in $\omega_2$.
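Step 3 exploits the coupling term $c(x)\hat q$ in the first equation of (\ref{1202-as9}). Assuming, as condition (\ref{cond1}) is used here, that $c\ge c_0>0$ on $\omega_c$ for some constant $c_0$, and since $\eta_3$ is supported in $\omega_3\subset\omega_c$, the weighted norm of $\hat q$ is recovered from the coupling via the elementary bound
$$
\int_{-b}^b\!\int_{\Omega}\theta^2\varphi^7\eta_3^2|\hat q|^2dxds
\le \frac{1}{c_0}\int_{-b}^b\!\int_{\Omega}c(x)\,\theta^2\varphi^7\eta_3^2|\hat q|^2dxds,
$$
so it suffices to bound the weighted integral of $c(x)|\hat q|^2$, which is exactly what multiplying the first equation by $\theta^2\varphi^7\eta_3^2\overline{\hat q}$ achieves.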
By elementary calculation, we have \begin{equation}tael{eq2}\begin{equation}taa{ll}\deltaisplaystyle \timesh^2{{\partial}ime}hi^7\frepsilonta_3^2\omegaverline {\widehatat q}(\widehatat p_{ss}+\Delta\widehatat p)\\ \nablas\deltaisplaystyle=(\timesh^2{{\partial}ime}hi^7\frepsilonta_3^2\omegaverline {\widehatat q}\widehatat p_{s})_s-\timesh^2{{\partial}ime}hi^7\frepsilonta_3^2\widehatat p_s\omegaverline{\widehatat q}_{s}-(\timesh^2{{\partial}ime}hi^7\frepsilonta_3^2)_s\omegaverline {\widehatat q}\widehatat p_{s}+\timesh^2{{\partial}ime}hi^7\frepsilonta_3^2\widehatat p\Delta\omegaverline{\widehatat q}\\ \nablas\deltaisplaystyle\quad+\sum_{j=1}^n\Big[\timesh^2{{\partial}ime}hi^7\frepsilonta_3^2(\omegaverline {\widehatat q}\widehatat p_{x_j}-\omegaverline {\widehatat q}_{x_j}\widehatat p)\Big]_{x_j}-\sum_{j=1}^n(\timesh^2{{\partial}ime}hi^7\frepsilonta_3^2)_{x_j}(\omegaverline {\widehatat q}\widehatat p_{x_j}-\omegaverline {\widehatat q}_{x_j}\widehatat p). \frepsilona \frepsilone On the other hand, multiplying the first equation of (\ref{1202-as9}) by $ \timesh^2{{\partial}ime}hi^7\frepsilonta_3^2\omegaverline {\widehatat q}$, we have \begin{equation}tael{eq3}\begin{equation}taa{ll}\deltaisplaystyle c(x)\timesh^2{{\partial}ime}hi^7\frepsilonta_3^2|\widehatat q|^2&\deltaisplaystyle=\timesh^2{{\partial}ime}hi^7\frepsilonta_3^2\omegaverline {\widehatat q}\Big (\widehatat p_{ss}+\Delta\widehatat p\Big )-\timesh^2{{\partial}ime}hi^7\frepsilonta_3^2\omegaverline {\widehatat q}(F_1-c(x)\widehatat q). 
\frepsilona \frepsilone Now, integrating (\ref{eq3}) on $(-b,b)\times\Omega$, noting that $\widehatat p(-b)=\widehatat p(b)=0$ in $\Omega$, by (\ref{1202-as9}), (\ref{1202-as6}), (\ref{eq2}) and (\ref{cond1}), we find that \begin{equation}tael{eq5}\begin{equation}taa{ll}\deltaisplaystyle \hbox{\rm inf$\,$}tynt_{\Omega}\timesh^2\frepsilonta_3^2{{\partial}ime}hi^7|\widehatat q|^2 dxds\\ \nablas\deltaisplaystyle\lambdae C\hbox{\rm inf$\,$}tynt_{-b}^b\hbox{\rm inf$\,$}tynt_{\Omega}\timesh^2{{\partial}ime}hi^7|F_1-c(x)\widehatat q|^2dxds+\frphirac{C}{(\lambda\muu)^{16}}\hbox{\rm inf$\,$}tynt_{-b}^b\hbox{\rm inf$\,$}tynt_{\Omega}\timesh^2{{\partial}ime}hi^{5}\frepsilonta_3^2|\nabla\widehatat q|^2dxds\\ \nablas\deltaisplaystyle\quad+\frphirac{1}{(\lambda\muu)^{18}}\hbox{\rm inf$\,$}tynt_{-b}^b\hbox{\rm inf$\,$}tynt_{\Omega}\timesh^2(\lambda{{\partial}ime}hi)^{-1}\Big (|\widehatat q_s|^2+|\Delta\widehatat q|^2\Big )dxds\\ \nablas\deltaisplaystyle\quad+C(\lambda\muu)^{19}\hbox{\rm inf$\,$}tynt_{-b}^b\hbox{\rm inf$\,$}tynt_{\omega_3}\timesh^2{{\partial}ime}hi^{15}(|\nabla\widehatat p|^2+|\widehatat p_s|^2+|\widehatat p|^2)dxds. \frepsilona\frepsilone On the one hand, by Lemma \ref{le0m-01}, we know that \begin{equation}tael{122-y}\begin{equation}taa{ll}\deltaisplaystyle \frphirac{C}{(\lambda\muu)^{16}}\hbox{\rm inf$\,$}tynt_{-b}^b\hbox{\rm inf$\,$}tynt_{\Omega}\timesh^2{{\partial}ime}hi^{5}\frepsilonta_3^2|\nabla\widehatat q|^2dxds\\ \nablas\deltaisplaystyle\lambdae \frphirac{C}{(\lambda\muu)^{18}}\hbox{\rm inf$\,$}tynt_{-b}^b\hbox{\rm inf$\,$}tynt_{\Omega}\timesh^2|F_3|^2dxds+\frphirac{C}{(\lambda\muu)^{14}}\hbox{\rm inf$\,$}tynt_{\Omega}\timesh^2\frepsilonta_3^2{{\partial}ime}hi^7|\widehatat q|^2 dxds. 
\frepsilona\frepsilone On the other hand, by Lemma \ref{1202-car1}, we have \begin{equation}tael{122-y1}\begin{equation}taa{ll}\deltaisplaystyle \frphirac{1}{(\lambda\muu)^{18}}\hbox{\rm inf$\,$}tynt_{-b}^b\hbox{\rm inf$\,$}tynt_{\Omega}\timesh^2(\lambda{{\partial}ime}hi)^{-1}\Big (|\widehatat q_s|^2+|\Delta\widehatat q|^2\Big )dxds\\ \nablas\deltaisplaystyle\lambdae \frphirac{C}{(\lambda\muu)^{18}}\Big[\hbox{\rm inf$\,$}tynt_{-b}^b\hbox{\rm inf$\,$}tynt_{\Omega}\timesh^2|F_3|^2dxds+\lambda^3\muu^4\hbox{\rm inf$\,$}tynt_{-b}^b\hbox{\rm inf$\,$}tynt_{\omega_1}\timesh^2{{\partial}ime}hi^3|\widehatat q|^2dxds\Big] \\ \nablas\deltaisplaystyle\lambdae \frphirac{C}{(\lambda\muu)^{18}}\hbox{\rm inf$\,$}tynt_{-b}^b\hbox{\rm inf$\,$}tynt_{\Omega}\timesh^2|F_3|^2dxds+\frphirac{C}{(\lambda\muu)^{14}}\hbox{\rm inf$\,$}tynt_{-b}^b\hbox{\rm inf$\,$}tynt_{\omega_2}\timesh^2{{\partial}ime}hi^7|\widehatat q|^2dxds. \frepsilona\frepsilone By (\ref{eq5})--(\ref{122-y1}), for fixed $\muu$, we conclude that there is a $\lambda_4>0$ such that for all $\lambda\gammae \lambda_4$, one can find two constants $C=C(\muu)>0$ and $\lambda_4=\lambda_4(\muu)$ so that for all $\lambda\gammae\lambda_3$, it holds that \begin{equation}tael{122-y3}\begin{equation}taa{ll}\deltaisplaystyle (\lambda\muu)^{18}\hbox{\rm inf$\,$}tynt_{-b}^b\hbox{\rm inf$\,$}tynt_{\omega_2}\timesh^2{{\partial}ime}hi^7 |\widehatat q|^2dxds\nablaegthinspace \nablaegthinspace \nablaegthinspace &\deltaisplaystyle\lambdae C\hbox{\rm inf$\,$}tynt_{-b}^b\hbox{\rm inf$\,$}tynt_{\Omega}\timesh^2{{\partial}ime}hi^7\Big ((\lambda\muu)^{18}|F_1-c(x)\widehatat q|^2+|F_3|^2\Big )dxds\\ \nablas&\deltaisplaystyle\quad+C(\lambda\muu)^{37}\hbox{\rm inf$\,$}tynt_{-b}^b\hbox{\rm inf$\,$}tynt_{\omega_3}\timesh^2{{\partial}ime}hi^{15}(|\nabla\widehatat p|^2+|\widehatat p_s|^2+|\widehatat p|^2)dxds. 
\end{array}\end{equation}
Finally, we choose a cut-off function $\eta_4\in C_0^\infty(\omega_c\cap\omega_{d})$ such that $\eta_4=1$ in $\omega_3$. Multiplying the first equation of (\ref{1202-as9}) by $\theta^2\phi^{15}\eta_4^2\overline{\hat p}$, integrating it on $(-b,b)\times\Omega$, using integration by parts, and noting that $\hat p(-b)=\hat p(b)=0$ in $\Omega$, a direct calculation gives
\begin{equation}\label{nr3}\begin{array}{ll}\displaystyle \int_{-b}^b\int_{\omega_3}\theta^2\phi^{15}(|\nabla\hat p|^2+|\hat p_s|^2)dxds\\
\displaystyle\le C\Big[e^{C\lambda}\int_{-b}^b\int_{\omega_c\cap\,\omega_d}|\hat p|^2 dxds+\frac{1}{(\lambda\mu)^{37}}\int_{-b}^b\int_{\Omega}\theta^2|F_1|^2dxds\Big].
\end{array}\end{equation}
Combining (\ref{as216}) and (\ref{eq5})--(\ref{nr3}), by (\ref{cond1}) and (\ref{1202-as7}), noting that $\hat p=\chi p,\ \hat w=\chi w,\ \hat q=\chi q$, by (\ref{lab-1}) and noting that $\alpha=1$, and taking $\lambda_*=\max\{\lambda_j,\ j=0, 1, 2, 3, 4\}$, for any $\lambda\ge\lambda_*$ we have
\begin{equation}\label{equ216}\begin{array}{ll}\displaystyle \lambda\mu^2\int_{-b}^b\int_{\Omega}\theta^2\phi(|\nabla p|^2+|p_s|^2+\lambda^2\mu^2\phi^2|p|^2+\lambda^2\mu^2\phi^2|w|^2+\lambda^2\mu^2\phi^2|q|^2)dxds\\
\displaystyle\le Ce^{C\lambda}\Big\{\int_{-b}^b\int_{\Omega}(| p^0|^2+|w^0|^2)dxds+\int_{-b}^b\int_{\omega_c\cap\,\omega_{d}} (| p|^2+|p_s|^2)dxds\Big\}\\
\displaystyle\quad+C(\lambda\mu)^{18}\int_{(-b,-b_0)\bigcup (b_0, b)}\int_{\Omega}\theta^2\phi^7(|p|^2+|p_s|^2+|q|^2+|w|^2)dxds.
\end{array}\end{equation}

\medskip

{\it Step 4. } Recalling (\ref{as4}) and (\ref{1202-as2}) for the definitions of $\phi$ and of $b,\ b_0$, respectively, it is easy to check that
\begin{equation}\label{as5}\left\{\begin{array}{ll}\displaystyle \phi(s,\cdot)\ge 2+e^\mu , &\hbox{ for any $s$ satisfying }|s|\le 1,\\
\displaystyle \phi(s,\cdot)\le 1+e^{\mu}, &\hbox{ for any $s$ satisfying }b_0\le |s|\le b.
\end{array}\right.\end{equation}
Finally, denote $c_0=2+e^{\mu}>1$. Fixing the parameter $\mu$ in (\ref{as216}) and using (\ref{as5}), one finds that
\begin{equation}\label{as217}\begin{array}{ll}\displaystyle \lambda e^{2\lambda c_0}\int_{-1}^1\int_{\Omega}(|\nabla p|^2+|p_s|^2+|p|^2+| w|^2+|q|^2)dxds\\[3mm]
\displaystyle\le Ce^{C\lambda}\left\{\int_{-2}^2\int_{\Omega}(|p^0|^2+|w^0|^2)dxds+\int_{-2}^2\int_{\omega_c\cap\,\omega_{d}}(|p|^2+|p_s|^2) dxds\right\}\\[3mm]
\displaystyle\quad+C(\lambda\mu)^{18}e^{c_2\mu} e^{2\lambda(c_0-1)}\int_{(-b,-b_0)\bigcup (b_0, b)}\int_{\Omega}(|p|^2+|p_s|^2+|w|^2+|q|^2)dxds,
\end{array}\end{equation}
where $c_2=7||\psi||_{L^{\infty}(\Omega)}>0$. From (\ref{as217}), one concludes that there exists an $\epsilon_2>0$ such that the desired inequality (\ref{1202-0a2}) holds for $\epsilon\in (0,\epsilon_2]$, which, in turn, implies that it holds for any $\epsilon>0$. This completes the proof of Theorem \ref{1202-t1}.\hfill$\Box$

\medskip

{\it Proof of Theorem \ref{1202-t2}.
} In the case $\alpha=0$, the damping is imposed through the term $d(x)q_s$. In this situation, we estimate the energy of the coupled system $(p, w, q)$ localized in $\omega_c\cap\omega_d$ by $\displaystyle \int_{\omega_c\cap\omega_d}(|q|^2+|q_s|^2)dx$. We divide the proof into several steps.

\medskip

{\it Step 1. } Recall that $\eta_2\in C_0^\infty(\omega_2)$ satisfies $\eta_2=1$ in $\omega_1$. Multiplying the first equation of (\ref{1202-as9}) by $\theta^2\phi\eta_2^2\overline{\hat p}$, integrating it on $(-b,b)\times\Omega$, using integration by parts, and noting that $\hat p(-b)=\hat p(b)=0$ in $\Omega$ and $\alpha=0$, a direct calculation yields
\begin{equation}\label{0nr3}\begin{array}{ll}\displaystyle \int_{-b}^b\int_{\Omega}\eta_2^2\theta^2\phi(|\nabla\hat p|^2+|\hat p_s|^2)dxds\\
\displaystyle\le C(\lambda\mu)^2\int_{-b}^b\int_{\Omega}\theta^2\eta_2^2\phi^3|\hat p|^2 dxds+ \frac{1}{(\lambda\mu)^2}\int_{-b}^b\int_{\Omega}\theta^2|F_1|^2dxds.
\end{array}\end{equation}
Therefore, by (\ref{as216}) and (\ref{0nr3}), we conclude that there is a $\mu_1>0$ such that for all $\mu\ge \mu_1$, one can find two constants $C=C(\mu)>0$ and $\lambda_1=\lambda_1(\mu)$ so that for all $\lambda\ge\lambda_1$, it holds that
\begin{equation}\label{01as216}\begin{array}{ll}\displaystyle \lambda\mu^2\int_{-b}^b\int_{\Omega}\theta^2\phi(|\nabla\hat p|^2+|\hat p_s|^2+\lambda^2\mu^2\phi^2|\hat p|^2+\lambda^2\mu^2| \hat w|^2+\lambda^2\mu^2|\hat q|^2)dxds\\
\displaystyle\le C\int_{-b}^b\int_{\Omega}\theta^2\Big (|F_1|^2+|F_2|^2+|F_3|^2\Big )dxds\\
\displaystyle\qquad+C\lambda^3\mu^4\int_{-b}^b\int_{\omega_2}\theta^2\phi^3(|\hat p|^2+|\hat w|^2)dxds+Ce^{C\lambda}\int_{-b}^b\int_{\omega_2}|\hat q|^2dxds.
\end{array}\end{equation}

\medskip

{\it Step 2. } Let us estimate ``$\displaystyle \int_{-b}^b\int_{\omega_2}\theta^2\phi^3|\hat p|^2dxds$". Recall that $\eta_3\in C_0^\infty(\omega_3)$ satisfies $\eta_3=1$ in $\omega_2$. Then, multiplying the second equation of (\ref{1202-as9}) by $\theta^2\phi^3\eta_3^2\overline {\hat p}$, we have
\begin{equation}\label{03eq3}\begin{array}{ll}\displaystyle c(x)\theta^2\phi^3\eta_3^2|\hat p|^2 =\theta^2\phi^3\eta_3^2\overline {\hat p}\big(\hat w_{s}+\Delta\hat w\big) -\theta^2\phi^3\eta_3^2\overline {\hat p}\big(F_2-c(x)\hat p\big).
\end{array}\end{equation}
Note that
\begin{equation}\label{122-f1}\begin{array}{ll}\displaystyle \theta^2\phi^3\eta_3^2\overline {\hat p}\big(\hat w_{s}+\Delta\hat w\big) \\
\displaystyle=(\theta^2\phi^3\eta_3^2\overline {\hat p}\hat w)_{s}-(\theta^2\phi^3\eta_3^2)_s\overline {\hat p}\hat w-\theta^2\phi^3\eta_3^2\overline {\hat p}_s\hat w\\
\displaystyle\quad+\sum_{j=1}^n\big(\theta^2\phi^3\eta_3^2\overline {\hat p}\hat w_{x_j}\big)_{x_j}-\sum_{j=1}^n(\theta^2\phi^3\eta_3^2)_{x_j}\overline {\hat p}\hat w_{x_j}-\sum_{j=1}^n\theta^2\phi^3\eta_3^2\overline {\hat p}_{x_j}\hat w_{x_j}.
\end{array}\end{equation}
Now, integrating (\ref{03eq3}) on $(-b,b)\times\Omega$, by (\ref{122-f1}) and Lemma \ref{1202-car1}, we get that
\begin{equation}\label{03eq5}\begin{array}{ll}\displaystyle \int_{-b}^b\int_{\Omega}\theta^2\eta_3^2\phi^3|\hat p|^2 dxds\\
\displaystyle\le C\int_{-b}^b\int_{\Omega}\theta^2\phi^3|F_2-c(x)\hat p|^2 dxds+C(\lambda\mu)^3\int_{-b}^b\int_{\Omega}\theta^2\eta_3^2\phi^5(|\hat w|^2+|\nabla \hat w|^2) dxds\\
\displaystyle\quad+\frac{C}{(\lambda\mu)^3}\int_{-b}^b\int_{\Omega}\theta^2\eta_3^2\phi(|\hat p_s|^2+|\nabla \hat p|^2) dxds.
\end{array}\end{equation}
Proceeding exactly as for (\ref{0nr3}), we have
\begin{equation}\label{122-f2}\begin{array}{ll}\displaystyle \frac{C}{(\lambda\mu)^3}\int_{-b}^b\int_{\Omega}\theta^2\eta_3^2\phi(|\hat p_s|^2+|\nabla\hat p|^2) dxds\\
\displaystyle\le \frac{C}{\lambda\mu}\int_{-b}^b\int_{\Omega}\theta^2\eta_2^2\phi^3|\hat p|^2 dxds+\frac {C}{(\lambda\mu)^5}\int_{-b}^b\int_{\Omega}\theta^2|F_1|^2dxds.
\end{array}\end{equation}
Combining (\ref{03eq5}) and (\ref{122-f2}), for fixed $\mu$, there is a $\lambda_5>0$ such that for any $\lambda\ge\lambda_5$, the following holds:
\begin{equation}\label{003eq5}\begin{array}{ll}\displaystyle \int_{-b}^b\int_{\omega_2}\theta^2\phi^3|\hat p|^2 dxds &\displaystyle\le C\int_{-b}^b\int_{\Omega}\theta^2\phi^3\big(|F_2-c(x)\hat p|^2+(\lambda\mu)^{-5}|F_1|^2\big) dxds\\
&\displaystyle\qquad+C(\lambda\mu)^3\int_{-b}^b\int_{\Omega}\theta^2\eta_3^2\phi^5(|\hat w|^2+|\nabla \hat w|^2) dxds.
\end{array}\end{equation}

\medskip

{\it Step 3.
} By using Lemma \ref{le0m-01} and proceeding with an analysis similar to (\ref{1219-x2}), we have
\begin{equation}\label{122-fx0}\begin{array}{ll}\displaystyle C(\lambda\mu)^3\int_{-b}^b\int_{\Omega}\theta^2\eta_3^2\phi^5(|\hat w|^2+|\nabla\hat w|^2) dxds\\
\displaystyle \le C(\lambda\mu)^6\int_{-b}^b\int_{\Omega}\theta^2\eta_3^2\phi^5|\hat w|^2dxds+C\int_{-b}^b\int_{\Omega}\theta^2\phi^3|F_2|^2 dxds\\
\displaystyle\le C(\lambda\mu)^{-4}\int_{-b}^b\int_{\Omega}\theta^2\phi^3\Big (|F_2|^2+|F_3|^2\Big )dxds+Ce^{C\lambda}\int_{-b}^b\int_{\omega_c\cap\omega_d}|\hat q|^2dxds.
\end{array}\end{equation}
Finally, by (\ref{01as216}), (\ref{1201-x0}) and (\ref{03eq5}), noting that $\hat p=\chi p,\ \hat w=\chi w,\ \hat q=\chi q$, by (\ref{lab-1}), and taking $\lambda^*=\max\{\lambda_1,\lambda_5\}$, for any $\lambda\ge\lambda^*$ we obtain that
\begin{equation}\label{03equ216}\begin{array}{ll}\displaystyle \lambda\mu^2\int_{-b}^b\int_{\Omega}\theta^2\phi(|\nabla p|^2+|p_s|^2+\lambda^2\mu^2\phi^2|p|^2+\lambda^2\mu^2\phi^2|w|^2+\lambda^2\mu^2\phi^2|q|^2)dxds\\
\displaystyle\le Ce^{C\lambda}\Big[\int_{-b}^b\int_{\Omega}(| p^0|^2+|w^0|^2)dxds+\int_{-b}^b\int_{\omega_c\cap\omega_{d}}( | q|^2+|q_s|^2)dxds\Big]\\
\displaystyle\quad+C(\lambda\mu)^5\int_{(-b,-b_0)\bigcup (b_0, b)}\int_{\Omega}\theta^2\phi^3(|p|^2+|p_s|^2+|q|^2+|w|^2)dxds.
\end{array}\end{equation}
Then, proceeding exactly as in (\ref{as5}) and (\ref{as217}), we complete the proof of Theorem \ref{1202-t2}.\hfill$\Box$

\medskip

\section{Proof of the main results}\label{ss6}

In this section, we give the proof of the logarithmic decay results. Recall (\ref{0a2}) for the definitions of ${\cal A}$ and $D({\cal A})$. Denote by ${\cal R}({\cal A})$ the resolvent set of ${\cal A}$. The proof of the logarithmic decay relies on a result of Burq (\cite[Theorem 3]{B}), which links it to estimates on the resolvent of ${\cal A}-i\mu$ for $\mu\in\mathbb{R}$. Later, Duyckaerts considered the resolvent operator $({\cal A}-\lambda I)^{-1}$ for complex $\lambda$. Concerning the resolvent estimate established in Theorem \ref{0t2}, let us recall the following known result.
\begin{lemma}\label{Duy}(\cite[Theorem 7.1]{D}) Let $D>0$, and
$$ {\cal O}_D\buildrel \triangle \over =\big\{\lambda\in\mathbb{C}: \ |{\rm Re}\,\lambda|< D^{-1}e^{-D|{\mathop{\rm Im}\,}\lambda|}\big\}. $$
Assume that, for some $D>0$, ${\cal O}_D$ is included in ${\cal R}({\cal A})$, and that there is a positive constant $C$ such that, in ${\cal O}_D$,
$$ ||({\cal A}-\lambda I)^{-1}||_{{\cal L}(H)}\le Ce^{C|{\mathop{\rm Im}\,}\lambda|}. $$
Then for all $k$ there exists $C_k$ such that
$$ ||e^{t{\cal A}}U_0||_{H}\le \frac{C_k}{(\log (t+2))^k}||U_0||_{D({\cal A}^k)}, \quad \forall\, U_0\in D({\cal A}^k). $$
\end{lemma}
Therefore, once we establish the existence and the norm estimate of the resolvent $({\cal A}-\lambda I)^{-1}$ for ${\rm Re}\,\lambda\in\Big[-e^{-C|{\mathop{\rm Im}\,}\lambda|}/ C,0\Big]$, as stated in Theorem \ref{0t2}, Lemma \ref{Duy} immediately yields Theorem \ref{0t1}.
\medskip

{\it Proof of Theorem \ref{0t2}. } We divide the proof into two steps.

\medskip

{\it Step 1.} First, fix $F=(f^0,f^1,g^0,g^1)\in H$ and $U_0\buildrel \triangle \over =U(0)=(y^0,y^1,z^0,z^1)\in D({\cal A})$. It is easy to see that the equation
\begin{equation}\label{6a1} ({\cal A}-\lambda I)U_0=F \end{equation}
is equivalent to
\begin{equation}\label{6a2}\left\{\begin{array}{ll}\displaystyle -\lambda y^0+y^1=f^0, & \hbox { in } \Omega,\\
\displaystyle \Delta y^0-[\alpha d(x)+\lambda] y^1-c(x)z^0=f^1, & \hbox { in } \Omega,\\
\displaystyle -\lambda z^0+z^1=g^0, & \hbox { in } \Omega,\\
\displaystyle -\Delta^2 z^0-[(1-\alpha)d(x)+\lambda]z^1-c(x)y^0=g^1, & \hbox { in } \Omega,\\
\displaystyle y^0=z^0=\Delta z^0=0, &\hbox{ on } \Gamma.
\end{array}\right.\end{equation}
By (\ref{6a2}), we conclude that
\begin{equation}\label{6a3}\left\{\begin{array}{ll}\displaystyle \Delta y^0 -\lambda^2 y^0-\alpha\lambda d(x)y^0-c(x)z^0=[\alpha d(x)+\lambda] f^0+f^1 & \hbox { in } \Omega,\\
\displaystyle-\Delta^2z^0-\lambda^2 z^0-(1-\alpha)\lambda d(x)z^0-c(x)y^0=[(1-\alpha)d(x)+\lambda] g^0+g^1 & \hbox { in } \Omega,\\
\displaystyle y^0=z^0=\Delta z^0=0 &\hbox{ on } \Gamma,\\
\displaystyle y^1=f^0+\lambda y^0,\ z^1=g^0+\lambda z^0 & \hbox { in } \Omega.
\end{array}\right.\end{equation}
Put
\begin{equation}\label{v} p=e^{i\lambda s}y^0,\qquad q=e^{i\lambda s}z^0.
\end{equation}
It is easy to check that $p$ and $q$ satisfy the following equations:
\begin{equation}\label{6a4}\left\{\begin{array}{ll}\displaystyle p_{ss}+\Delta p+i\alpha d(x) p_s-c(x)q=(\lambda f^0+\alpha d(x)f^0+f^1)e^{i\lambda s} & \hbox { in } \mathbb{R}\times \Omega,\\
\displaystyle q_{ss}-\Delta^2 q+i(1-\alpha)d(x) q_s-c(x)p=(\lambda g^0+(1-\alpha)d(x)g^0+g^1)e^{i\lambda s} & \hbox { in } \mathbb{R}\times \Omega,\\
\displaystyle p=q=\Delta q=0& \hbox { on }\mathbb{R}\times \Gamma.
\end{array}\right.\end{equation}
Further, we set
\begin{equation}\label{1201-v} w=q_s-\Delta q. \end{equation}
Then, clearly, $p, q$ and $w$ satisfy the following equations:
\begin{equation}\label{1202-6a4}\left\{\begin{array}{ll}\displaystyle p_{ss}+\Delta p+i\alpha d(x) p_s-c(x)q=(\lambda f^0+\alpha d(x)f^0+f^1)e^{i\lambda s} & \hbox { in } \mathbb{R}\times \Omega,\\
\displaystyle w_{s}+\Delta w+i(1-\alpha)d(x) q_s-c(x)p=(\lambda g^0+(1-\alpha)d(x)g^0+g^1)e^{i\lambda s} & \hbox { in } \mathbb{R}\times \Omega,\\
\displaystyle q_{s}-\Delta q=w & \hbox { in } \mathbb{R}\times \Omega,\\
\displaystyle p=q=w=0& \hbox { on }\mathbb{R}\times \Gamma.
\end{array}\right.\end{equation}

\medskip

{\it Step 2. } By (\ref{v}), we have the following estimates:
\begin{equation}\label{6a7}\left\{\begin{array}{ll}\displaystyle ||y^0||_{H^1_0(\Omega)}+||z^0||_{H^2(\Omega)}\le Ce^{C|{\mathop{\rm Im}\,}\lambda|}\big(||p||_{H^1(Y)}+||w||_{L^2(Y)}+||q||_{L^2(Y)}\big),\\
\displaystyle ||p||_{H^1(X)}+||w||_{L^2(X)}+||q||_{L^2(X)}\le C(|\lambda|+1)e^{C|{\mathop{\rm Im}\,}\lambda|}\big(||y^0||_{H^1_0(\Omega)}+||z^0||_{H^2(\Omega)}\big),\\
\displaystyle ||p||_{L^2(X^*)}+||p_s||_{L^2(X^*)}\le C(1+|\lambda|)e^{C|{\mathop{\rm Im}\,}\lambda|}||y^0||_{L^2(\omega_c\cap\omega_d)},\\
\displaystyle ||q||_{L^2(X^*)}+||q_s||_{L^2(X^*)}\le C(1+|\lambda|)e^{C|{\mathop{\rm Im}\,}\lambda|}||z^0||_{L^2(\omega_c\cap\omega_d)}.
\end{array}\right.\end{equation}
Now, in the case $\alpha=1$, applying Theorem \ref{1202-t1} to equation (\ref{6a4}) and combining (\ref{6a7}), we get that
\begin{equation}\label{6a5}\begin{array}{ll}\displaystyle ||y^0||_{H^1_0(\Omega)}+||z^0||_{H^2(\Omega)}\\
\displaystyle\le Ce^{C|{\mathop{\rm Im}\,}\lambda|}\Big (||f^0||_{H^1_0(\Omega)}+||f^1||_{L^2(\Omega)}+||g^0||_{H^2(\Omega)}+||g^1||_{L^2(\Omega)}+||y^0||_{L^2(\omega_c\cap\,\omega_d)}\Big ).
\end{array}\end{equation}
In the case $\alpha=0$, applying Theorem \ref{1202-t2} to equation (\ref{6a4}) and combining (\ref{6a7}), we obtain that
\begin{equation}\label{06a5}\begin{array}{ll}\displaystyle ||y^0||_{H^1_0(\Omega)}+||z^0||_{H^2(\Omega)}\\
\displaystyle\le Ce^{C|{\mathop{\rm Im}\,}\lambda|}\Big (||f^0||_{H^1_0(\Omega)}+||f^1||_{L^2(\Omega)}+||g^0||_{H^2(\Omega)}+||g^1||_{L^2(\Omega)}+||z^0||_{L^2(\omega_c\cap\,\omega_d)}\Big ).
\end{array}\end{equation}
On the other hand, multiplying the first equation of (\ref{6a3}) by $2\overline y^0$ and integrating it on $\Omega$, it follows that
\begin{equation}\label{6a8}\begin{array}{ll}\displaystyle \int_{\Omega}\Big (-\Delta y^0+\lambda^2y^0+\alpha\lambda d(x)y^0+c(x)z^0\Big )\cdot2\overline y^0dx\\
\displaystyle=2\lambda^2\int_{\Omega}|y^0|^2dx+2\int_\Omega |\nabla y^0|^2dx+2\alpha\int_{\Omega}\lambda d(x) |y^0|^2dx+2\int_\Omega c(x) z^0\overline y^0dx.
\end{array}\end{equation}
Similarly, multiplying the second equation of (\ref{6a3}) by $2\overline z^0$ and integrating it on $\Omega$, we find that
\begin{equation}\label{n6a8}\begin{array}{ll}\displaystyle \int_{\Omega}\Big (\Delta^2 z^0+\lambda^2z^0+(1-\alpha)\lambda d(x)z^0+c(x)y^0\Big )\cdot2\overline z^0dx\\
\displaystyle=2\lambda^2\int_\Omega|z^0|^2dx+2\int_\Omega |\Delta z^0|^2dx+2(1-\alpha)\int_{\Omega}\lambda d(x) |z^0|^2dx+2\int_\Omega c(x) y^0\overline z^0dx.
\end{array}\end{equation}
Taking the imaginary part of both sides of (\ref{6a8})+(\ref{n6a8}), by (\ref{6a4}), we have
\begin{equation}\label{6a9}\begin{array}{ll}\displaystyle 2\alpha|{\mathop{\rm Im}\,}\lambda|\int_{\Omega}d(x)|y^0|^2dx+2(1-\alpha)|{\mathop{\rm Im}\,}\lambda|\int_{\Omega}d(x)|z^0|^2dx\\
\displaystyle \le C\Big[||\lambda f^0+\alpha d(x)f^0+f^1||_{L^2(\Omega)}||y^0||_{L^2(\Omega)}+|{\mathop{\rm Im}\,}\lambda|\,|{\rm Re}\,\lambda|\,||y^0||^2_{L^2(\Omega)}\Big]\\
\displaystyle\quad+C\Big[||\lambda g^0+(1-\alpha)d(x)g^0+g^1||_{L^2(\Omega)}||z^0||_{L^2(\Omega)}+|{\mathop{\rm Im}\,}\lambda|\,|{\rm Re}\,\lambda|\,|| z^0||^2_{L^2(\Omega)}\Big],
\end{array}\end{equation}
where we have used the following obvious fact:
$$ {\mathop{\rm Im}\,}\int_\Omega c(x) z^0\overline y^0dx+{\mathop{\rm Im}\,}\int_\Omega c(x) y^0\overline z^0dx={\mathop{\rm Im}\,} \int_\Omega c(x) (z^0\overline y^0+y^0\overline z^0)dx=0. $$
Hence, combining (\ref{6a5}) and (\ref{6a9}), and noting that $d(x)\ge d_0>0$ on $\omega_{d}$, we arrive at
\begin{equation}\label{6a10}\begin{array}{ll}\displaystyle ||y^0||_{H^1_0(\Omega)}+||z^0||_{H^2(\Omega)}\\
\displaystyle\le Ce^{C|{\mathop{\rm Im}\,}\lambda|}\big(||f^0||_{H^1_0(\Omega)}+||f^1||_{L^2(\Omega)}+||g^0||_{H^2(\Omega)}+||g^1||_{L^2(\Omega)}\big)\\
\displaystyle\quad+Ce^{C|{\mathop{\rm Im}\,}\lambda|}|{\rm Re}\,\lambda|\,(||y^0||_{H^1_0(\Omega)}+||z^0||_{H^2(\Omega)}).
\end{array}\end{equation}
We now require
$$ Ce^{C|{\mathop{\rm Im}\,}\lambda|}|{\rm Re}\,\lambda|\le\frac{1}{2}, $$
which holds whenever $\displaystyle |{\rm Re}\,\lambda|\le e^{-C|{\mathop{\rm Im}\,}\lambda|}/C$ for $C>0$ sufficiently large.
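\medskip

\noindent{\it Remark. } Two elementary identities used in this step can be checked in one line. First, the substitution $w=q_s-\Delta q$ works because it factorizes the plate operator into two heat-type operators:
$$ w_s+\Delta w=(q_s-\Delta q)_s+\Delta(q_s-\Delta q)=q_{ss}-\Delta q_s+\Delta q_s-\Delta^2 q=q_{ss}-\Delta^2 q, $$
so the second equations of (\ref{6a4}) and (\ref{1202-6a4}) are interchangeable. Second, the ``obvious fact" above holds because $c(x)$ is real-valued and $z^0\overline y^0+y^0\overline z^0=2\,{\rm Re}\,(z^0\overline y^0)$ is real, so its integral against $c(x)$ has vanishing imaginary part.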
Then, by (\ref{6a10}), we find that
\begin{equation}\label{6a11}\begin{array}{ll}\displaystyle ||y^0||_{H^1_0(\Omega)}+||z^0||_{H^2(\Omega)}\\
\displaystyle\le Ce^{C|{\mathop{\rm Im}\,}\lambda|}\big(||f^0||_{H^1_0(\Omega)}+||f^1||_{L^2(\Omega)}+||g^0||_{H^2(\Omega)}+||g^1||_{L^2(\Omega)}\big).
\end{array}\end{equation}
Recalling that $y^1=f^0+\lambda y^0,\ z^1=g^0+\lambda z^0$, it follows that
\begin{equation}\label{6a12}\begin{array}{ll}\displaystyle ||y^1||_{L^2(\Omega)}+||z^1||_{L^2(\Omega)}\\
\displaystyle\le ||f^0||_{L^2(\Omega)}+|\lambda|\,||y^0||_{L^2(\Omega)}+||g^0||_{H^2(\Omega)}+|\lambda|\,||z^0||_{H^2(\Omega)}\\
\displaystyle\le Ce^{C|{\mathop{\rm Im}\,}\lambda|}\big(||f^0||_{H^1_0(\Omega)}+||f^1||_{L^2(\Omega)}+||g^0||_{H^2(\Omega)}+||g^1||_{L^2(\Omega)}\big).
\end{array}\end{equation}
By (\ref{6a11})--(\ref{6a12}), we know that ${\cal A}-\lambda I$ is injective, and hence bijective from $D({\cal A})$ to $H$. Moreover,
$$ ||({\cal A}-\lambda I)^{-1}||_{{\cal L}(H;H)}\le Ce^{C|{\mathop{\rm Im}\,}\lambda|},\quad {\rm Re}\,\lambda\in [-e^{-C|{\mathop{\rm Im}\,}\lambda|}/C,0],\qquad |\lambda|\ge 1. $$
This completes the proof of Theorem \ref{0t2}.\hfill$\Box$

\medskip

\begin{thebibliography}{99}

\bibitem{Achenbach} J.~D.~Achenbach, \sl Wave Propagation in Elastic Solids, \rm North-Holland Publishing Company, Amsterdam, 1973.

\bibitem{Adams} R.~D.~Adams, \it Damping properties analysis of composites, \sl Engineering Materials Handbook, Composites, ASM, \rm {\bf 1} (1987), 206--217.

\bibitem{AB99} F.~Alabau, \it Stabilisation fronti\`ere indirecte de syst\`emes faiblement coupl\'es, \sl C. R. Acad. Sci. Paris S\'er. I Math., \rm 328 (1999), 1015--1020.
\bibitem{B1} F.~Alabau, \it Indirect boundary stabilization of weakly coupled hyperbolic systems, \sl SIAM J. Control Optim., \rm 41 (2002), 511--541.

\bibitem{ACK} F.~Alabau, P.~Cannarsa and V.~Komornik, \it Indirect internal stabilization of weakly coupled evolution equations, \sl J. Evol. Equ., \rm 2 (2002), 127--150.

\bibitem{BL} F.~Alabau and M.~L\'eautaud, \it Indirect stabilization of locally coupled wave-type equations, \sl ESAIM Control Optim. Calc. Var., \rm 18 (2012), 548--582.

\bibitem{AN} K.~Ammari and S.~Nicaise, \it Stabilization of a transmission wave/plate equation, \sl J. Differential Equations, \rm {\bf 249} (2010), 707--727.

\bibitem{ALM} N.~Anantharaman, M.~L\'{e}autaud and F.~Maci\`{a}, \it Wigner measures and observability for the Schr\"{o}dinger equation on the disk, \sl Invent. Math., \rm 206 (2016), 485--599.

\bibitem{BLR} C.~Bardos, G.~Lebeau and J.~Rauch, \it Sharp sufficient conditions for the observation, control and stabilization from the boundary, \sl SIAM J. Control Optim., \rm 30 (1992), 1024--1065.

\bibitem{BD} C.~J.~K.~Batty and T.~Duyckaerts, \it Non-uniform stability for bounded semi-groups on Banach spaces, \sl J. Evol. Equ., \rm 8 (2008), 765--780.

\bibitem{B} N.~Burq, \it D\'ecroissance de l'\'energie locale de l'\'equation des ondes pour le probl\`eme ext\'erieur et absence de r\'esonance au voisinage du r\'eel, \sl Acta Math., \rm 180 (1998), 1--29.

\bibitem{BH} N.~Burq and M.~Hitrik, \it Energy decay for damped wave equations on partially rectangular domains, \sl Math. Res. Lett., \rm 14 (2007), 35--47.
\bibitem{BZ1} N.~Burq and M.~Zworski, \it Geometric control in the presence of a black box, \sl J. Amer. Math. Soc., \rm {\bf 17} (2004), 443--471.

\bibitem{BZ} N.~Burq and C.~Zuily, \it Concentration of Laplace eigenfunctions and stabilization of weakly damped wave equation, \sl Comm. Math. Phys., \rm 345 (2016), 1055--1076.

\bibitem{CFNS} G.~Chen, S.~A.~Fulling, F.~J.~Narcowich and S.~Sun, \it Exponential decay of energy of evolution equations with locally distributed damping, \sl SIAM J. Appl. Math., \rm 51 (1991), 266--301.

\bibitem{D} T.~Duyckaerts, \it Optimal decay rates of the energy of a hyperbolic-parabolic system coupled by an interface, \sl Asymptot. Anal., \rm 51 (2007), 17--45.

\bibitem{Fu1} X.~Fu, \it Logarithmic decay of hyperbolic equations with arbitrary small boundary damping, \sl Comm. Partial Differential Equations, \rm 34 (2009), 957--975.

\bibitem{Fu01} X.~Fu, \it Null controllability for the parabolic equation with a complex principal part, \sl J. Funct. Anal., \rm 257 (2009), 1333--1354.

\bibitem{Fu2} X.~Fu, \it Longtime behavior of the hyperbolic equations with an arbitrary internal damping, \sl Z. Angew. Math. Phys., \rm 62 (2011), 667--680.

\bibitem{Fu3} X.~Fu, \it Sharp decay rates for the weakly coupled hyperbolic system with one internal damping, \sl SIAM J. Control Optim., \rm 50 (2012), 1643--1660.

\bibitem{FI} A.~V.~Fursikov and O.~Yu.~Imanuvilov, \it Controllability of Evolution Equations, \rm Lecture Notes Series 34, Research Institute of Mathematics, Seoul National University, Seoul, Korea, 1994.

\bibitem{Graff} K.~F.~Graff, \sl Wave Motion in Elastic Solids, \rm Clarendon Press, Oxford, 1975.
\bibitem{Haraux} A.~Haraux, \it S\'{e}ries lacunaires et contr\^{o}le semi-interne des vibrations d'une plaque rectangulaire, \sl J. Math. Pures Appl. (9), \rm 68 (1989), 457--465.

\bibitem{Lagnese} J.~E.~Lagnese, \sl Boundary Stabilization of Thin Plates, \rm SIAM Studies in Applied Mathematics, 10, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1989.

\bibitem{L} G.~Lebeau, \it Equation des ondes amorties, \rm in Algebraic and Geometric Methods in Mathematical Physics (Kaciveli, 1993), Kluwer Acad. Publ., Dordrecht, 1996, 73--109.

\bibitem{LR} G.~Lebeau and L.~Robbiano, \it Stabilisation de l'\'equation des ondes par le bord, \sl Duke Math. J., \rm 86 (1997), 465--491.

\bibitem{lz} X.~Liu and X.~Zhang, \it Local controllability of multidimensional quasi-linear parabolic equations, \sl SIAM J. Control Optim., \rm 50 (2012), 2046--2064.

\bibitem{Ortega} J.~H.~Ortega and E.~Zuazua, \it Generic simplicity of the spectrum and stabilization for a plate equation, \sl SIAM J. Control Optim., \rm 39 (2000), 1585--1614.

\bibitem{P1} K.-D.~Phung, \it Polynomial decay rate for the dissipative wave equation, \sl J. Differential Equations, \rm 240 (2007), 92--124.

\bibitem{PZ} K.-D.~Phung and X.~Zhang, \it Time reversal focusing of the initial state for Kirchhoff plate, \sl SIAM J. Appl. Math., \rm 68 (2008), 1535--1556.

\bibitem{RZZ} J.~Rauch, X.~Zhang and E.~Zuazua, \it Polynomial decay of a hyperbolic-parabolic coupled system, \sl J. Math. Pures Appl., \rm 84 (2005), 407--470.

\bibitem{Yim} J.~H.~Yim and B.~Z.~Jang, \it An analytical method for prediction of the damping in symmetric balanced laminated composites,
\sl Polymer Composites, \rm 20 (1999), 192--199.

\bibitem{ZZ2} X.~Zhang and E.~Zuazua, \it Long time behavior of a coupled heat-wave system arising in fluid-structure interaction, \sl Arch. Ration. Mech. Anal., \rm 184 (2007), 49--120.

\bibitem{Zua} E.~Zuazua, \it Exponential decay for the semilinear wave equation with locally distributed damping, \sl Comm. Partial Differential Equations, \rm 15 (1990), 205--235.

\end{thebibliography}

\end{document}
\begin{document} \noindent {\Large {\bf Squeezing with cold atoms}}

{\sc A. Lambrecht, T. Coudreau, A.M. Steinberg and E. Giacobino,} \\ {\it Laboratoire Kastler Brossel, UPMC, ENS et CNRS,} {\it Universit\'e Pierre et Marie Curie, case 74, F75252 Paris, France} \\ ({\sc EuroPhysics Letters} {\bf 36}, p. 93 (1996))\\ PACS: 32.80.Pj; 42.50.Lc; 42.65.Pc \\

\noindent {\bf Abstract. -} Cold atoms from a magneto-optic trap have been used as a nonlinear ($\chi^{(3)}$) medium in a nearly resonant cavity. Squeezing in a probe beam passing through the cavity was demonstrated. The measured noise reduction is 40\% for free atoms and 20\% for weakly trapped atoms.

Soon after the first implementations of magneto-optic traps \cite{ref1a, ref1b}, the strong nonlinear properties of laser-cooled atoms were recognized. It was shown that a probe beam going through a cloud of cold atoms could experience gain due to Raman transitions involving the trapping beams \cite{ref2a,ref2b}. When the atoms were placed in a resonant optical cavity, laser action corresponding to that gain feature was demonstrated \cite{ref3}. Furthermore, when cold atoms are driven by a slightly detuned probe laser beam inside an optical cavity, bistability is observed at very low light powers \cite{ref4a}. This strong nonlinear dispersion comes from the fact that, since the atoms are virtually motionless, the Doppler width is smaller than the natural linewidth and the probe frequency can be set close to atomic resonance. Such nonlinear behavior indicates that the system is capable of significantly modifying the quantum fluctuations of a probe beam, in particular leading to squeezing. The generation of squeezed light through interaction with nonlinear media has been the subject of extensive theoretical and experimental studies (see for example \cite{ApplB}). The use of atomic media looked particularly promising in the absence of Doppler broadening \cite{castelli,reid,hilico}; laser-cooled atoms should therefore be ideal.
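The dispersive bistability mentioned above is commonly captured by the textbook dimensionless Kerr-cavity state equation $I\,[1+(\theta-I)^2]=P$, relating intracavity intensity $I$ to input power $P$ at normalized detuning $\theta$; for $\theta>\sqrt{3}$ the response becomes S-shaped and three steady states coexist. The following toy numerical sketch is ours, not the model used in the experiment, and all parameter values are illustrative:

```python
import numpy as np

def intracavity_intensities(p_in, theta):
    """Positive real steady-state intracavity intensities I solving the
    dimensionless Kerr-bistability state equation I*(1 + (theta - I)**2) = p_in.
    Expanding gives the cubic I**3 - 2*theta*I**2 + (1 + theta**2)*I - p_in = 0."""
    roots = np.roots([1.0, -2.0 * theta, 1.0 + theta**2, -p_in])
    # Keep only physically meaningful roots: real and positive.
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)

# Below the critical detuning theta = sqrt(3) the response is single-valued;
# above it, a range of input powers admits three steady states (bistability).
monostable = intracavity_intensities(2.0, 1.0)   # theta < sqrt(3): one branch
bistable = intracavity_intensities(4.5, 3.0)     # theta > sqrt(3): three branches
```

Scanning `p_in` at fixed `theta` traces the S-shaped response whose upper and lower branches form the hysteresis loop observed when the cavity is scanned across resonance.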
However, no experiment performed so far has used cold atoms for squeezing, while those which relied on atomic beams failed to live up to expectations. In this paper we report quantum noise reduction in a probe beam that has interacted with cold atoms in an optical cavity. Quadrature squeezing as large as $40\%$ (uncorrected for detection efficiency) was measured at the output of the cavity. This value is the largest ever measured in an atomic medium. Furthermore, it is in good agreement with the theoretical predictions. It therefore offers very good prospects for much higher levels of squeezing once the apparatus is suitably improved. The squeezing was first observed in a free cloud, just after the trap had been turned off. It was also observed in a weakly bound trap, with trapping beams turned down by a factor of $10$ as compared to the original trap. In light of the rapid development of atom-cooling technology, it is conceivable that a set-up of this type could be constructed on a much smaller scale, relying exclusively on diode lasers. This could lead to the realization of a compact and efficient quantum ``noise eater''.

Up to now, all experiments of this type had been performed on atomic beams, where the transverse Doppler effect is small but often not completely negligible \cite{raizen,hope,poizat}. The best value of quantum noise reduction measured in that case is of the order of 20$\%$, while theory predicts higher figures that have never been obtained. In contrast to atomic beams, cold atoms constitute a well-controlled medium, where theoretical models can in principle be fully tested against experimental results, including realistic conditions such as the spatial character of the laser beam and additional noise sources.
Under these conditions one is able to accurately model the squeezing data theoretically for the first time, whereas in the atomic beam experiments no quantitative explanation has been given of the discrepancy between the measured squeezing and the significantly higher theoretical predictions. To fit our experimental data, we have developed a theoretical model which fully takes into account the transverse structure of the probe beam \cite{ref10} and which also considers the influence of additional noise sources like atomic number fluctuations \cite{Lambrecht}. With this treatment we have not only obtained a correct prediction for the magnitude of the quadrature squeezing, but also a continuous prediction for the minimal and maximal noise power as a function of cavity detuning, which fits the experimental result well. The experimental set-up used to demonstrate squeezing with cold atoms has been described in detail elsewhere \cite{ref5}. Here, we will only recall its main features. A circularly polarized probe beam is sent into a resonant optical cavity containing a cloud of cold Cesium atoms prepared in a standard magneto-optic trap \cite{ref1a,ref1b}. With large and rather intense trapping beams, detuned by $3$ times the linewidth below resonance of the $6S_{1/2},\ F=4$ to $6P_{3/2},\ F=5$ transition, we obtain a cloud $1$cm in diameter, with densities on the order of $10^9$ atoms/cm$^3$. The temperature of the atoms is of the order of a few mK. The number of atoms interacting with the probe beam is measured with a method described below. The cavity is a $25$cm-long linear cavity, with a waist of $260\mu$m, built around the cell (fig.1). The input mirror has a transmission coefficient of $10\%$ and the end mirror is highly reflecting. The cavity is thus close to the ``bad cavity'' case, where the linewidth of the cavity ($5$MHz) is larger than the radiative linewidth ($\gamma =2.6$MHz), which is expected to be the most favorable for squeezing.
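As a rough consistency check on these numbers, the quoted cavity linewidth can be compared with the value implied by the cavity length and the input-mirror transmission. The sketch below is our own back-of-the-envelope estimate, not the authors' calculation; in particular, identifying the quoted $5$MHz with the half-width of a lossless, single-ended linear cavity is an assumption.

```python
# Back-of-the-envelope check of the quoted cavity linewidth (our sketch,
# assuming a lossless single-ended linear cavity and that the quoted
# 5 MHz refers to the half-width at half-maximum).
import math

c = 3.0e8   # speed of light (m/s)
L = 0.25    # cavity length (m), "25 cm-long linear cavity"
T = 0.10    # input-mirror energy transmission, "10%"

fsr = c / (2 * L)             # free spectral range: 600 MHz
finesse = 2 * math.pi / T     # high-finesse approximation: ~63
fwhm = fsr / finesse          # full linewidth: ~9.5 MHz
hwhm = fwhm / 2               # half-width: ~4.8 MHz, close to the quoted 5 MHz

print(fsr / 1e6, finesse, hwhm / 1e6)
```

The agreement supports reading the quoted $5$MHz as a half-width; either way it exceeds the radiative linewidth $\gamma = 2.6$MHz, consistent with the ``bad cavity'' regime.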
The cavity is in the horizontal symmetry plane of the trap, making a $45^{\circ }$ angle with the two trapping beams propagating in this plane. The probe beam, generated by the same Ti:Sapphire laser as the trapping beams, can be detuned by $0$ to $130$MHz on either side of the $6S_{1/2},\ F=4$ to $6P_{3/2},\ F=5$ transition. We measure the probe beam intensity transmitted through the cavity with a photodiode located behind the end mirror. \begin{figure} \caption{Experimental set-up designed to study quantum fluctuations of a probe beam that has interacted with cold atoms in a nearly single-ended cavity. PBS, polarizing beamsplitter; QWP, quarterwave plate; HWP, halfwave plate; PD, photodiode.} \end{figure} The field coming out of the cavity is separated from the incoming one by an optical circulator, made of a polarizing beamsplitter and a quarter-wave plate, and mixed with a local oscillator beam using the second input port of the same beamsplitter. Orthogonally polarized at the output of the beamsplitter, the signal and local oscillator beams are split into equal-intensity sum and difference fields by a half-wave plate and a second polarizing beamsplitter. Both parts are detected by photodiodes with a quantum efficiency of $96\%$. The total homodyne efficiency is of the order of $90\%$. The ac parts of the photodiode currents are amplified and subtracted. The resulting signal is further amplified and sent to a spectrum analyzer and to a computer. When scanning the cavity resonance, bistability due to the nonlinear dispersion of the cold atoms is easily observed with incident powers as low as a few $\mu$W, as soon as the number $N$ of interacting atoms is large enough.
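The balanced homodyne scheme measures a phase-dependent quadrature variance, and the finite homodyne efficiency mixes vacuum noise into the result. A minimal numerical sketch (our illustration; the squeezed and antisqueezed variances below are hypothetical values chosen only to mimic the reported behavior, with the $90\%$ efficiency quoted above):

```python
# Minimal model of the phase-dependent homodyne variance (illustrative only).
# The variances v_min, v_max are hypothetical, not fitted values from the paper.
import math

def detected_variance(theta, v_min, v_max, eta):
    """Shot-noise-normalized variance at local-oscillator phase theta,
    with homodyne efficiency eta mixing in vacuum noise (variance 1)."""
    v = v_min * math.cos(theta) ** 2 + v_max * math.sin(theta) ** 2
    return eta * v + (1.0 - eta)

eta = 0.90  # total homodyne efficiency quoted in the text
v_min_det = detected_variance(0.0, 0.5, 4.0, eta)          # squeezed quadrature
v_max_det = detected_variance(math.pi / 2, 0.5, 4.0, eta)  # antisqueezed quadrature

print(v_min_det, v_max_det)  # one quadrature below shot noise, one above
```

Sweeping `theta` reproduces the oscillation between a sub-shot-noise minimum and an excess-noise maximum of the kind visible in the noise traces discussed below.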
The nonlinear phase shift of the cavity giving rise to bistability is proportional to the cooperativity parameter $C$, also called the bistability parameter ($C=g^2N/\gamma T$, where $g$ is the atom-light coupling coefficient, $\gamma $ the radiative linewidth of the transition and $T$ the energy transmission coefficient of the mirror). $C$ is determined for each recording by measuring the ratio between the amplitudes of the bistability curve and of the resonance curve of the empty cavity. We find a cooperativity of the order of $100$ in the presence of the trap. It should be noted that the atoms are partly saturated by the trapping beams, and that therefore the measured $C$ value is smaller than the one corresponding to the total number of atoms in the interaction area. The fluorescence emitted by atoms excited by the trapping beams constitutes a source of excess noise, which decreases the amount of quantum noise reduction attainable in principle. To avoid this effect, we turn off the trap during the noise measurements. With this method the measurement time is limited to $20$ to $30$ms, due to expansion and free fall of the atomic cloud. When the trap is turned off, the number of interacting atoms becomes time-dependent. The variation of the refractive index due to the escape of the atoms out of the probe beam provides a natural scan of the cavity across resonance. In this way, the resonance peak is scanned in about $10$ms. However, under these conditions, the $C$ value, being proportional to $N$, is no longer constant over the scan and it becomes necessary to adopt a specific model of its time-dependence in order to interpret the noise spectra. A model for the variation of $C$ with time can be obtained by calculating the variation of the linear phase shift caused by an expanding and falling ensemble of atoms in a Gaussian laser beam \cite{Lambrecht}. The atomic sample is assumed to have initially a Gaussian velocity and position distribution.
Assuming, in accordance with the experimental conditions, that the Rayleigh length of the beam is much larger than the cloud size, which is itself large compared to the beam waist, the variation of $ C\left( t\right) $ is given by the product of a Lorentzian function representing the ballistic flight of the atoms with an exponential function accounting for the effect of gravity: \begin{equation} C\left( t\right) =C\left( 0\right) \frac{\tau _r^2}{\tau _r^2+t^2}\exp \left( -\frac{t^4}{\tau _g^2\left( \tau _r^2+t^2\right) }\right) \label{Coop} \end{equation} $C\left( 0\right) $ is the cooperativity value right after the trap is turned off; the time constant $\tau _r=\sigma _r/\sigma _v $ is the time for the atoms with temperature $T_{\rm at}$ and with mean velocity $ \sigma _v=\sqrt{kT_{\rm at}/m}$ to fly through the cloud of radius $\sigma _r$ and $ \tau _g=2\sqrt{2}\sigma _v/g$ is the time it takes for the falling atoms to accelerate to $2\sqrt{2}$ times their original thermal velocity. To check this model, the cooperativity was experimentally studied as a function of time. Fig.2 shows the result of such a measurement. The experimental points are fitted by the theoretical expression (\ref{Coop}). The line corresponds to the theoretical prediction for a cloud radius of $4$mm and a temperature of $5$mK, which are consistent with the characteristics of our trap. As the theoretical curve fits the experimental data very well, these parameters were subsequently used to calculate the noise spectrum. \begin{figure} \caption{Time dependence of the cooperativity parameter $C$ as the cloud of cold atoms is released at $t=0$. The dots correspond to experimental data as described in the text, the curves are derived from Eq.(1).} \end{figure} While the cavity resonance is scanned by the escape of the atoms, the field fluctuations of the output beam are monitored with the homodyne detection described above at a fixed analyzing frequency of $5$MHz.
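Equation (\ref{Coop}) can be evaluated directly with the fitted parameters quoted above (cloud radius $4$mm, temperature $5$mK). The following sketch is our own numerical check, not the analysis code used for Fig.2; it reproduces the $\sim 10$ms scan timescale mentioned in the text.

```python
# Numerical check of Eq. (1) with the fitted parameters (our sketch):
# cloud radius 4 mm, temperature 5 mK, Cs atomic mass.
import math

kB = 1.380649e-23   # Boltzmann constant (J/K)
m_cs = 2.207e-25    # mass of a Cs-133 atom (kg)
g = 9.81            # gravitational acceleration (m/s^2)

sigma_r = 4e-3                            # cloud radius (m)
T_at = 5e-3                               # atomic temperature (K)
sigma_v = math.sqrt(kB * T_at / m_cs)     # thermal velocity, ~0.56 m/s
tau_r = sigma_r / sigma_v                 # ballistic time constant, ~7 ms
tau_g = 2 * math.sqrt(2) * sigma_v / g    # gravity time constant, ~0.16 s

def C(t, C0):
    """Cooperativity C(t) of Eq. (1): Lorentzian ballistic decay
    times an exponential gravity correction."""
    lorentz = tau_r**2 / (tau_r**2 + t**2)
    grav = math.exp(-t**4 / (tau_g**2 * (tau_r**2 + t**2)))
    return C0 * lorentz * grav

# C has dropped to about half its initial value after tau_r ~ 7 ms,
# consistent with the ~10 ms cavity scan quoted in the text.
print(tau_r * 1e3, C(tau_r, 220) / 220)
```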
At the same time, the local oscillator phase is rapidly varied with a piezoelectric transducer to explore all noise quadratures of the probe beam. As we have only $20-30$ms for the measurement, the phase of the local oscillator must be scanned at frequencies on the order of kHz, which determines the analyzing bandwidth of the spectrum analyzer to be about $100$kHz. The video bandwidth of the spectrum analyzer should be adjusted to avoid any distortion of the spectrum. As the videofilter of our model (Tektronix 2753 P) did not have enough flexibility, we used numerical filters in the processing of the spectrum. The spectrum shown in fig.3 was obtained in such a manner. The electronic noise was subtracted from the signal provided by the spectrum analyzer before the filtering process. The averaged shot noise level (determined by blocking the cavity) is indicated by the straight line. It can be seen that the noise on the left hand side of the figure, when the cavity is out of resonance, is at shot noise, whereas it goes below shot noise on the lower branch of the bistability curve. The observed squeezing is $(40 \pm 10)\%$. On the upper branch (right hand part of the figure), large excess noise is observed in some quadratures. The powers of the probe laser and local oscillator were $25\mu$W and $9$mW. The large ratio of these powers ensures that the noise measured by the homodyne detection does not require any normalization, even if the probe beam reflected by the cavity does not have a constant intensity. The probe was detuned by $52$MHz below atomic resonance. The cooperativity parameter $C$ was found to be $220$ right after turning off the trap. The error bar for the squeezing measurement is due to the width of the random noise of the signal, which is generated within the measurement system itself. This random noise varies as the mean noise level and is estimated from the shot noise level to be $\pm 10\%$ at the minimum of fig.3.
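For comparison with other experiments it is common to quote squeezing in decibels and to correct the measured value for linear detection losses. The short sketch below applies the standard linear-loss model to the numbers above; it is our own computation, not taken from the paper.

```python
# Converting the measured 40% noise reduction to dB, and inferring the
# squeezing at the cavity output before the ~90% homodyne efficiency.
# Standard linear-loss model (our computation): V_meas = eta*V_out + (1-eta).
import math

v_meas = 0.60   # measured variance, shot noise = 1 (40% reduction)
eta = 0.90      # total homodyne efficiency quoted in the text

db = 10 * math.log10(v_meas)          # ~ -2.2 dB below shot noise
v_out = (v_meas - (1 - eta)) / eta    # inferred variance at the cavity output

print(db, 1 - v_out)  # ~ -2.2 dB measured; ~44% inferred before detection losses
```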
The variation of the transmitted field reproduced in the insert of fig.3 shows that the system is slightly below the bistability threshold. The spectrum presented in fig.3 shows the best squeezing value obtained in a series of experiments where several parameters such as the detuning of the probe field from the cavity and atomic resonances, the analyzing frequency and the atomic number were varied. \begin{figure} \caption{Noise signal taken with free atoms at a fixed observation frequency of 5MHz as a function of time, while the cavity resonance is scanned by the departure of the atoms. Oscillations correspond to phase scan of the local oscillator. The broken lines are theoretical predictions for the minimal and maximal noise, the solid line indicates the shot noise level. The probe beam was detuned by 52MHz below resonance and its power was 25$\mu$W. The error bar is due to the width of the noise trace. The insert shows the corresponding mean intensity.} \end{figure} In a second series of experiments, we have looked for quantum noise reduction in the presence of the trap. As mentioned above, intense trapping beams produce too much excess noise to allow observation of squeezing. However, we have been able to find a compromise by first trapping the atoms from the vapor with intense laser beams, and then turning down the trapping beams to about $1/10$ of their original power. Under these conditions, the cooperativity parameter is on the order of $20$. The noise spectrum shown in fig.4 is recorded under conditions similar to those above (detuning of 52MHz and incident probe power of $16\mu$W) except that the cavity is scanned by means of a piezoelectric transducer while the number of atoms remains constant during the measurement. The best value for squeezing obtained under such conditions is on the order of $(20 \pm 10)\%$. Here, the quantum noise reduction appears on the upper branch of the bistability curve.
An interesting feature of this result is that the steep edge on the left hand side of the bistability curve (upper trace in fig.4) is due to optical pumping, rather than to saturation, which is responsible for squeezing. \begin{figure} \caption{Noise signal (measured at a fixed frequency of 5MHz) and mean intensity taken with weakly trapped atoms, while the cavity length was swept with a piezoelectric transducer and the local oscillator phase was modulated rapidly. The solid line indicates the shot noise level. The error bar is determined as in fig.3.} \end{figure} Indeed, the two processes that can lead to bistability, saturation of the optical transition and optical pumping, are easily distinguished by their sign. Saturation of the atomic transition causes a decrease of the refractive index, whereas optical pumping increases it. Thus the bistability curves due to the two processes have steep edges on opposite sides. In the case shown in fig.4, the left hand side of the bistability curve can be unambiguously attributed to optical pumping. With increasing intensity of the probe beam, the dominant phenomenon becomes the saturation of the atomic transition \cite{ref8} and the steep edge changes sides. In this second experiment squeezing is reproducibly observed only in the range of parameters where the bistability curve originates from optical pumping. However, squeezing is still linked to the saturation of the atomic transition and the observed features can be interpreted as a consequence of the dynamical processes that take place in the cavity. As the cavity length is scanned, light from the probe beam enters the cavity and pumps the atoms towards the sublevels with the highest magnetic quantum number. At that stage, saturation of the atomic transition starts to take place, and this non-linearity is at the origin of squeezing in the output field.
As was shown in \cite{ref8}, the instabilities on the right hand side of the curve (after the region where the squeezing occurs) come from a competition between optical pumping, which is not fully completed here, and saturation of the atomic transition. The experimental spectra for free atoms have been compared with the theoretical predictions given by the two-level atom model derived from ref. \cite{ref10}. The measured experimental parameters are included in the model, which takes into account the Gaussian character of the beam. The minimal and maximal quantum noise were calculated in conditions reproducing those of the experiments, i.e. by including the variation in time of the cooperativity as represented in fig.2 and the homodyne efficiency of $90\%$. The resulting spectra are shown by the broken lines in fig.3. It can be seen that the agreement between theory and experiment is satisfactory. As far as the squeezing in the presence of the trap is concerned, one can calculate the value of squeezing from the same two-level atom theory. The expected quantum noise reduction is then 20\%, in good agreement with the observed value. Squeezing is here limited by the lower value of the cooperativity, which is on the order of $20$. We have evaluated the potential effect of spurious noise sources in the experiments. In the experiment with free atoms, the excess noise due to the atomic number fluctuations in the cold atom cloud in the process of expanding and falling \cite{Lambrecht} was shown to have a spectrum peaked at frequencies that are too low to affect the noise measurements at 5MHz. Additional causes of excess noise are the repumping laser and the weak trapping beams in the second experiment. Their effects have been evaluated with a calculation based on the atomic number fluctuations they produce. In the frequency range studied, we find their excess noise to be negligible for the rather low powers that are used for repumping and trapping beams.
This study shows that cold atoms provide a powerful medium for quantum optics. The measured squeezing is higher than any value observed in previous atomic experiments. Even higher values are expected by increasing the ratio of the cavity linewidth to the atomic linewidth, i.e. by going more towards the bad-cavity limit. One of the difficulties of our first set of experiments, in which the trap had to be turned off during the noise measurements, is avoided in the second kind of experiments. It should be possible to go further by working with cold atoms that do not interact at all with the trapping beams, for example, by using a ``dark SPOT'' set-up\cite{ketterle}. This work was supported in part by the EC contract ESPRIT BRA 6934 and the EC HMC contract CHRX-CT 930114. \end{document}
\begin{document} \title{Height estimates for Killing graphs} \author{Debora Impera} \address{Dipartimento di Matematica e Applicazioni\\ Universit\`a di Milano--Bicocca Via Cozzi 53\\ I-20125 Milano, ITALY} \email{[email protected]} \author{Jorge H. de Lira} \address{Departamento de Matem\'atica\\Universidade Federal do Cear\'a-UFC\\60455-760 Fortaleza, CE, Brazil.} \email{[email protected]} \author{Stefano Pigola} \address{Sezione di Matematica - DiSAT\\ Universit\'a dell'Insubria - Como\\ via Valleggio 11\\ I-22100 Como, Italy} \email{[email protected]} \author{Alberto G. Setti} \address{Sezione di Matematica - DiSAT\\ Universit\'a dell'Insubria - Como\\ via Valleggio 11\\ I-22100 Como, Italy} \email{[email protected]} \date{\today} \maketitle \begin{abstract} The paper aims at proving global height estimates for Killing graphs defined over a complete manifold with nonempty boundary. To this end, we first point out how the geometric analysis on a Killing graph is naturally related to a weighted manifold structure, where the weight is defined in terms of the length of the Killing vector field. According to this viewpoint, we introduce some potential theory on weighted manifolds with boundary and we prove a weighted volume estimate for intrinsic balls on the Killing graph. Finally, using these tools, we provide the desired estimate for the weighted height function under the assumption that the Killing graph has constant weighted mean curvature and the weighted geometry of the ambient space is suitably controlled. \end{abstract} \section*{Introduction and main results} Let $\left(M, \langle \cdot, \cdot \rangle_M\right) $ be a complete, $(n+1) $-dimensional Riemannian manifold endowed with a complete Killing vector field $Y$ whose orthogonal distribution has constant rank $n$ and is integrable.
Let $(P, \langle \cdot, \cdot \rangle_P)$ be an integral leaf of that distribution equipped with its induced complete Riemannian metric $\langle \cdot, \cdot \rangle_P$. The flow $\vartheta: P \times \mathbb{R} \rightarrow M$ generated by $Y$ is an isometry between $M$ and the warped product $P\times_{e^{-\psi}}\mathbb{R} $ with metric \[ \langle \cdot, \cdot \rangle_{M}= \langle \cdot, \cdot \rangle_P+e^{-2\psi}\mathrm{d} s \otimes \mathrm{d} s \] where $s$ is the flow parameter and $\psi = - \log |Y|$. Let $\Omega\subset P$ be a possibly unbounded domain with regular boundary $\partial\Omega\neq\emptyset$. The {\it Killing graph} of a smooth function $u:\bar{\Omega}\rightarrow \mathbb{R} $ is the hypersurface $\Sigma\subset M$ parametrized by the map \[ X(x)=\vartheta(x,u(x)), \quad x\in \bar\Omega. \] Obviously, if $Y$ is a parallel vector field, then $M$ is isometric to the Riemannian product $P\times\mathbb{R} $ and the notion of a Killing graph reduces to that of a usual vertical graph. The above terminology, together with some existence results, was first introduced by M. Dajczer, P.A. Hinojosa, and J.H. de Lira in \cite{DHL-CalcVar}. Since then, Killing graphs have become the subject of a systematic investigation both in order to understand their geometry and as a tool to study different problems such as the existence of solutions of the asymptotic Plateau problem in certain symmetric spaces; see \cite{CR-Asian, Ri-preprint}. The aim of this paper is to obtain quantitative height estimates for a smooth Killing graph \[ \Sigma=\mathrm{Graph}_{\bar \Omega}\left( u\right) \hookrightarrow M=P\times_{e^{-\psi}}\mathbb{R} \] parametrized over (the closure of) a possibly unbounded domain $\Omega\subset P$ and whose smooth boundary $\partial\Sigma\neq\emptyset$ is contained in the totally geodesic slice $P\times\{0\}$ of $M$.
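A concrete model case, not taken from the paper, may help fix the terminology: the rotational Killing field on Euclidean three-space.

```latex
% Illustrative model case (our addition): rotational Killing graphs in R^3.
% On M = \mathbb{R}^3 minus the z-axis, in cylindrical coordinates,
\[
\langle \cdot, \cdot \rangle_M = \mathrm{d} r^2 + \mathrm{d} z^2 + r^2\, \mathrm{d}\theta^2,
\]
% the field Y = \partial_\theta is Killing and its orthogonal distribution
% is integrable; an integral leaf is the half-plane P = \{\theta = 0\}, and
\[
|Y| = r, \qquad \psi = -\log r, \qquad e^{-2\psi} = r^2.
\]
% A Killing graph over \Omega \subset P is then the ``radial graph''
% \theta = u(r,z), and the ambient metric takes exactly the warped-product
% form \langle\cdot,\cdot\rangle_P + e^{-2\psi}\, \mathrm{d} s \otimes \mathrm{d} s
% with flow parameter s = \theta.
```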
When $\psi (x) \equiv \mathrm{const}$ and the ambient manifold is the Riemannian product $P\times\mathbb{R} $, it is well understood that quantitative a-priori estimates can be deduced by assuming that the mean curvature $H$ of the graph is constant (\textit{CMC graphs} for short). For bounded domains in the Euclidean plane $P=\mathbb{R} ^{2}$ this was first observed in seminal papers by E. Heinz, \cite{He}, and J. Serrin, \cite{Se}. More precisely, assume that $H>0$ with respect to the \textit{downward pointing Gauss map} $\mathcal{N}$. Then, $\Sigma$ is confined to the slab $\mathbb{R} ^{2}\times \lbrack0,1/H]$ regardless of the size of the domain $\Omega$. This type of estimate has been recently extended to unbounded domains $\Omega \subset\mathbb{R} ^{2}$ by A. Ros and H. Rosenberg, \cite{RoRo-AJM}. Their technique, which is based on smooth convergence of CMC surfaces, relies strongly on the homogeneity of the base leaf $P=\mathbb{R} ^{2}$ and cannot be trivially adapted to general manifolds. In the case of a generic base manifold $P$, and maintaining the assumption that the CMC vertical graph $\Sigma$ is parametrized over a bounded domain, the corresponding height estimates have been obtained by D. Hoffman, J.H. de Lira and H. Rosenberg, \cite{HLR-TAMS}, J.A. Aledo, J.M. Espinar and J.A. Galvez, \cite{AEG-Illinois}, L. Al\'\i as and M. Dajczer, \cite{AD-PEMS}, among others. The geometry of $P$ enters the game in the form of curvature conditions, namely, the Ricci curvature of $P$ cannot be too negative when compared with $H$. In particular, for non-negatively Ricci curved bases the height estimates hold with respect to any choice of $H>0$. In the very recent paper \cite{IPS-Crelle}, the boundedness assumption on the domain $\Omega$ has been replaced by a quadratic volume growth condition, thus obtaining a complete extension of the Ros-Rosenberg result to any complete manifold $P$ with non-negative sectional curvature and dimension $n\leq4$.
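For the reader's convenience we recall the standard model example (our addition) showing that the constant $1/H$ in the Heinz--Serrin slab estimate cannot be improved.

```latex
% Sharpness of the slab estimate (standard example, our addition):
% the hemisphere of radius 1/H,
\[
u(x,y) = \frac{1}{H} - \sqrt{\frac{1}{H^{2}} - x^{2} - y^{2}},
\qquad x^{2} + y^{2} \leq \frac{1}{H^{2}},
\]
% is a graph over the disc \Omega = D_{1/H} \subset \mathbb{R}^2 with
% constant mean curvature H (computed with respect to the downward
% pointing Gauss map) and height exactly 1/H, so the confinement
% \Sigma \subset \mathbb{R}^{2} \times [0, 1/H] is attained.
```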
The restriction on the dimension is due to the fact that, up to now, it is not known whether CMC graphs over non-negatively curved manifolds $P$ are necessarily contained in a vertical slab. Granted this, the desired estimates can be obtained. From a slightly different perspective, qualitative bounds on the height of CMC vertical graphs on bounded domains have been obtained by J. Spruck, \cite{S-PAMQ}. It is worth observing that his technique, based on Serrin-type gradient estimates and Harnack inequalities, is robust enough to give a-priori bounds even in the case where the mean curvature is non-constant. Actually, it works even for Killing graphs, combined with the work by M. Dajczer and J.H. de Lira, \cite{DL-Poincare}; see also \cite{DLR-JAM}. However, in the Killing setting, the problem of obtaining quantitative bounds both on bounded and on unbounded domains remained open. Due to the structure of the ambient space, it is reasonable to expect that an a-priori estimate for the height function of a CMC Killing graph is sensitive to the deformation function ${\psi}$. In fact, since the length element of the fibre $\left\{ x\right\} \times\mathbb{R} $ is weighted by the factor $e^{-\psi(x)}$, a reasonable pointwise bound should be of the form \[ 0 \leq e^{-\psi(x)}u(x) \lesssim \frac{1}{|H|}. \] Actually, the same weight $e^{-\psi(x)}$ appears also in the expression of the volume element of $\Sigma$, thus suggesting the existence of an intriguing interplay between Killing graphs and smooth metric measure spaces (also called {\it weighted manifolds}). Since this interplay represents the leading idea of the entire paper, we are going to take a closer look at how it arises. In view of the fact that we are considering graphical hypersurfaces, weighted structures should appear both at the level of the base manifold $P$, where $\Sigma$ is parametrised, and at the level of the ambient space $M$, where $\Sigma$ is realised.
In fact, these two weighted contexts will interact in the formulation of the main result. To begin with, we note that the induced metric on $\Sigma = \mathrm{Graph}_{\bar \Omega}(u)$ is given by \begin{equation} \langle \cdot, \cdot \rangle_{\Sigma} = \langle \cdot, \cdot \rangle_P+e^{-2\psi}\mathrm{d} u\otimes \mathrm{d} u.\label{1st} \end{equation} Thus, the corresponding Riemannian volume element $\mathrm{d} \Sigma$ has the expression \begin{equation}\label{volumelement} \mathrm{d}\Sigma= W e^{-\psi}\mathrm{d} P, \end{equation} where $W= \sqrt{e^{2\psi} +|\nabla^Pu|^2}$ and $\mathrm{d} P$ is the volume element of $P$. As alluded to above, the special form of \eqref{volumelement}, when compared with the case of a product ambient space, suggests switching the viewpoint from that of the Riemannian manifold $(P,\langle \cdot, \cdot \rangle_{P})$ to that of the smooth metric measure space \[ P_{\psi}:= (P,\langle \cdot, \cdot \rangle_{P},\mathrm{d} P_{\psi}) \] where we are using the standard notation \[ \mathrm{d} P_{\psi} = e^{-\psi} \mathrm{d} P. \] In particular, \[ \mathrm{d} \Sigma = W \mathrm{d} P_{\psi} \] and we are naturally led to investigate to what extent the geometry of $\Sigma$ is influenced by the geometry of the weighted space $P_{\psi}$. As we shall see momentarily, the geometry of $P_{\psi}$ will enter the game in the form of a growth condition on the weighted volume of its geodesic balls $B^{P}_{R}(o)$: \[ \mathrm{vol}_{\psi} (B^{P}_{R}(o)) = \int_{B^{P}_{R}(o)} \mathrm{d} P_{\psi}. \] In a different direction, we observe that the smooth metric measure space structure of $P_{\psi}$ extends to the whole ambient space upon identifying $\psi : P \to \mathbb{R}$ with the function $\bar \psi : P \times_{e^{-\psi}} \mathbb{R} \to \mathbb{R}$ given by \[ \bar \psi (x,s) = \psi (x).
\] With a slight abuse of notation, we write \begin{equation} M_{\psi} := (M, \langle \cdot, \cdot \rangle_{M}, \mathrm{d} M_{\psi}) = (P \times_{e^{-\psi}} \mathbb{R} , e^{-\psi} \mathrm{d} M) \end{equation} and we can consider the original Killing graph as a hypersurface \[ \Sigma = \mathrm{Graph}_{\bar \Omega}(u) \hookrightarrow M_{\psi}. \] Previous works on classical height estimates for CMC graphs show that the relevant geometry of the ambient space is encoded in a condition on its Ricci tensor. Thus, if we think of realizing $\Sigma$ inside $M_{\psi}$ we can expect that height estimates need a condition on its Bakry-\'Emery Ricci tensor defined by \[ \mathrm{Ric}^{M}_{\psi} = \mathrm{Ric}^{M} + \mathrm{Hess}^{M}(\psi). \] We shall come back to this later on. Following \cite{DHL-CalcVar}, we now orient $\Sigma$ using the {\it upward pointing} unit normal \begin{equation} \label{N} \mathcal{N} = \frac{e^{2\psi} Y - \vartheta_* \nabla^P u}{\sqrt{e^{2\psi}+|\nabla^P u|^2}}=\frac{1}{W}\big(e^{2\psi} Y - \vartheta_* \nabla^P u\big). \end{equation} Note that $\mathcal{N}$ is upward pointing in the sense that \begin{equation}\label{NY} \langle \mathcal{N}, Y\rangle_{M}=\frac{1}{W} >0. \end{equation} Let $H:\Omega\subseteq P \to \mathbb{R} $ be the corresponding mean curvature function. The weighted $n$-volume associated to (the restriction of) $\psi$ (to $\Sigma$) is defined by \begin{equation} \label{Apsi} \mathcal{A}_{\psi}[\Sigma] := \int_\Sigma \mathrm{d} \Sigma_{\psi}. \end{equation} We are not concerned with the convergence of the integral. Given a compactly supported variational vector field $Z$ along $\Sigma$, the first variation formula reads \begin{equation} \delta_Z \mathcal{A}_{\psi} = \int_\Sigma \left(\mathrm{div} ^\Sigma Z - \langle {\nabla}^{M} \psi, Z\rangle_{M}\right) \mathrm{d} \Sigma_{\psi}.
\end{equation} In particular, if $Z = v \mathcal{N}$ for some $v\in C_{c}^\infty(\Sigma)$ we have \begin{equation} \delta_Z \mathcal{A}_{\psi} = \int_\Sigma \left(-nH - \langle \nabla^{M} \psi, \mathcal{N}\rangle_{M}\right) v \, \mathrm{d} \Sigma_{\psi}= -n\int_\Sigma H_{\psi} v \, \mathrm{d} \Sigma_\psi, \end{equation} where, using the definition proposed by M. Gromov, \cite{Gro}, \begin{equation} \label{Hpsi} H_{\psi} = H + \frac1n \langle {\nabla}^{M} \psi, \mathcal{N}\rangle_{M} \end{equation} is the {\it $\psi$-weighted mean curvature} of $\Sigma$. The route we have followed to introduce the weighted structure on the ambient space $M$ may look the most natural: it is trivially compatible with the weighted structure of the base space $P_{\psi}$ and with the weighted height function of $\Sigma$. Moreover, the weight $\psi$ appears in the volume element of $\Sigma$. However, it is worth noting that this is not the only ``natural'' choice. This becomes clear as soon as we express the mean curvature (and its modified version) of $\Sigma = \mathrm{Graph}_{\Omega}(u)$ in the classical form of a capillarity equation. Indeed, it is shown in \cite{DL-Poincare, DHL-CalcVar} that \begin{equation}\label{capillary-2} \mathrm{div} ^P\Big(\frac{\nabla^P u}{W}\Big) =n H_{-\psi}, \end{equation} which, in view of \eqref{Hpsi}, is completely equivalent to \begin{equation}\label{capillary-3} \mathrm{div} _{\psi}^P\left(\frac{\nabla^P u}{W} \right)= nH. \end{equation} Here, we are using the standard notation for the {\it weighted divergence} in $P_{\psi}$: \[ \mathrm{div}^{P}_{\psi}X \, = e^{\psi} \mathrm{div}^{P} (e^{-\psi} X) = \mathrm{div}^{P} X - \langle X, \nabla \psi\rangle_{P}. \] Thus, with respect to the capillarity type equation \eqref{capillary-2}, the natural and relevant weighted structure on $M$ arises from the weight $-\psi$ and we might be led to consider $\Sigma$ as a hypersurface in $M_{-\psi}$.
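The equivalence just stated between \eqref{capillary-2} and \eqref{capillary-3} can be verified in one line from \eqref{N} and \eqref{Hpsi}; we sketch the computation, which is implicit in the discussion above.

```latex
% Verification of the equivalence of \eqref{capillary-2} and
% \eqref{capillary-3} (our computation). Since \bar\psi is independent of
% the flow parameter, \langle \nabla^M \psi, Y \rangle_M = 0, so \eqref{N} gives
\[
\langle \nabla^{M}\psi, \mathcal{N} \rangle_{M}
= -\frac{\langle \nabla^{P}\psi, \nabla^{P} u \rangle_{P}}{W},
\qquad \text{hence} \qquad
nH_{-\psi} = nH + \frac{\langle \nabla^{P}\psi, \nabla^{P} u \rangle_{P}}{W}.
\]
% Therefore, by the definition of the weighted divergence,
\[
\mathrm{div}^{P}\Big(\frac{\nabla^P u}{W}\Big)
= \mathrm{div}^{P}_{\psi}\Big(\frac{\nabla^P u}{W}\Big)
+ \Big\langle \frac{\nabla^P u}{W}, \nabla^{P}\psi \Big\rangle_{P}
= nH + \frac{\langle \nabla^{P}\psi, \nabla^{P} u \rangle_{P}}{W}
= nH_{-\psi}.
\]
```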
On the other hand, the original choice $\psi$ fits very well into the capillarity type equation \eqref{capillary-3}. Both these weighted structures are relevant and a choice has to be made. To give an idea of this kind of duality between $\psi$ and $-\psi$ structures, we extend, in the setting of Killing graphs, the classical relation between the mean curvature of the graph and the isoperimetric properties of the parametrization domain. This is the content of the following weighted versions of a result by E. Heinz, \cite{He2}, S.S. Chern, \cite{Ch}, H. Flanders, \cite{Fl}, and I. Salavessa, \cite{Salavessa-PAMS}. Define the ``standard'' and the ``weighted'' Cheeger constants of a domain $\Omega$ by, respectively, \begin{equation}\nonumber \mathfrak{b}(\Omega) = \inf_{D} \frac{\mathrm{vol}(\partial D)}{\mathrm{vol}(D)}, \quad \mathfrak{b}_\psi (\Omega) = \inf_{D} \frac{\mathrm{vol}_\psi(\partial D)}{\mathrm{vol}_\psi(D)}, \end{equation} where $D$ is a bounded subdomain with compact closure in $\Omega$ and with regular boundary $\partial D \not= \emptyset$. \begin{proposition*} Let $\Sigma = \mathrm{Graph}_{\Omega}(u) \hookrightarrow P \times_{e^{-\psi}} \mathbb{R}$ be an $n$-dimensional Killing graph defined over the domain $\Omega\subset P$. Then, the mean curvature $H$ of $\Sigma$, and its weighted version $H_{-\psi}$, satisfy the following inequalities: \begin{equation} n\inf_\Omega |H_{-\psi}|\le \mathfrak{b}(\Omega), \quad n\inf_\Omega |H|\le \mathfrak{b}_\psi (\Omega). \end{equation} In particular: \begin{itemize} \item [(i)] If $\Omega \subset P_{\psi}$ has zero weighted Cheeger constant, and $\Sigma$ has constant mean curvature $H$, then $\Sigma$ is a minimal graph. \item [(ii)] If $\Omega \subset P$ has zero Cheeger constant, and $\Sigma \hookrightarrow M_{-\psi}$ has constant weighted mean curvature $H_{-\psi}$, then $\Sigma$ is a $(-\psi)$-minimal graph.
\end{itemize} \end{proposition*} Indeed, if $D \Subset P$ is a relatively compact domain in $P$ with boundary $\partial D = \Gamma$ and outward pointing unit normal $\nu_0\in TP$, integrating \eqref{capillary-3} and using the weighted version of the divergence theorem \[ \int_{D} \mathrm{div}_{\psi} Z\, \mathrm{d} P_{\psi} = \int_{\Gamma} \langle Z , \nu_{0} \rangle_{P}\, \mathrm{d}\Gamma_{\psi} \] we obtain \[ \int_{D} nH \mathrm{d} P_{\psi} = \int_{\Gamma} \left\langle \frac{\nabla^P u}{W},\nu_0\right\rangle_{P} \, \mathrm{d} \Gamma_{\psi}. \] If $H$ has constant sign, multiplying by $-1$ if necessary, we deduce that \begin{equation*} \int_D n|H| \mathrm{d} P_{\psi} \leq \int_\Gamma \left\vert \left\langle \frac{\nabla^P u}{W},\nu_0 \right\rangle \right\vert \mathrm{d} \Gamma_{\psi} \end{equation*} and recalling that $|\nabla^P u|/W\leq 1$, we conclude that \[ n\inf_D |H| \int_D \mathrm{d} P_{\psi} \leq \int_\Gamma \mathrm{d} \Gamma_{\psi}, \] that is, \begin{equation}\label{Salavessa_weighted} n \inf_D |H| \mathrm{vol}_{\psi}(D) \leq \mathrm{vol}_{\psi}(\partial D). \end{equation} Note that \eqref{Salavessa_weighted} certainly holds even if $H$ does not have constant sign, for then $\inf |H| =0$. In a completely similar way, starting from equation \eqref{capillary-2}, we obtain \begin{equation}\nonumber n \inf_{D} |H_{-\psi}| \mathrm{vol}(D) \leq \mathrm{vol}(\partial D). \end{equation} The desired conclusions now follow trivially. This brief discussion should help to put the following theorem, which represents the main result of the paper, in the appropriate perspective. \begin{theoremA} \label{th_fheightestimate} Let $(M,\langle \cdot, \cdot \rangle_{M}) $ be a complete, $(n+1)$-dimensional Riemannian manifold endowed with a complete Killing vector field $Y$ whose orthogonal distribution has constant rank $n$ and is integrable.
Let $(P,\langle \cdot, \cdot \rangle_{P})$ be an integral leaf of that distribution and let $\Omega\subset P$ be a smooth domain. Set $\psi=-\log(|Y|)$ and assume that: \begin{enumerate} \item[(a)] $-\infty<\inf_{{\Omega}} \psi \leq \sup_{{\Omega}} \psi<+\infty$; \item[(b)] $\mathrm{vol}_{\psi}\left( B_{R}^{P}\cap\Omega\right) =\mathcal{O}(R^2) $, as $R\rightarrow+\infty$; \item[(c)] $\mathrm{Ric}^{M}_\psi\geq 0$ in a neighborhood of $\Omega\times\mathbb{R} \subset M_{\psi}$. \end{enumerate} Let $\Sigma=\mathrm{Graph}_{\bar \Omega}(u)$ be a Killing graph over $\bar \Omega$ with weighted mean curvature $H_{\psi} \equiv \mathrm{const} < 0$ with respect to the upward pointing Gauss map $\mathcal{N}$. Assume that: \begin{enumerate} \item[(d)] The boundary $\partial\Sigma$ of $\Sigma$ lies in the slice $P\times\{0\}$; \item[(e)] The weighted height function of $\Sigma$ is bounded: $\sup_{\Sigma} |u e^{-\psi}|< +\infty$. \end{enumerate} \noindent Then \begin{equation} \label{hest} 0\leq u(x) e^{-\psi(x)} \leq\frac{C}{|H_{\psi}|}, \end{equation} where $C:=e^{2(\sup_\Omega \psi-\inf_\Omega\psi)}\geq 1$. \end{theoremA} It is worth pointing out that the constant $C$ in \eqref{hest} depends only on the variation of $\psi$. The strategy of proof follows the main steps in \cite{IPS-Crelle}. For instance, in order to get the upper estimate, we will use potential theoretic properties of the weighted manifold with boundary $\Sigma_{\psi} = (\Sigma, \langle \cdot, \cdot \rangle_{\Sigma}, \mathrm{d} \Sigma_{\psi})$. The idea is to show that, thanks to the capillarity equation \eqref{capillary-2}, the moderate volume growth assumption (b) on $\Omega$ is inherited by $\Sigma_{\psi}$.
Therefore, the weighted Laplacian, defined on a smooth function $w : \Sigma \to \mathbb{R}$ by \[ \Delta^{\Sigma}_{\psi} w = \mathrm{div}^{\Sigma}_{\psi}(\nabla^{\Sigma} w) = \Delta^{\Sigma} w - \langle \nabla^{\Sigma} \psi, \nabla^{\Sigma} w \rangle_{\Sigma}, \] satisfies a global maximum principle similar to the one valid in the compact setting. And it is precisely in the compact setting that, through the construction of explicit examples, we shall show that our height estimate is essentially sharp. Moreover, in the spirit of known results in the product case (see above), since we do not impose any restriction on the size of the mean curvature, the ambient space $M_{\psi}$ is assumed to have non-negative weighted (i.e. Bakry-\'Emery) Ricci curvature. Note however, see Remark~\ref{RmkHLR}, that the result extends to the case where $\mathrm{Ric}^{M}_\psi$ is bounded below by a negative constant, provided a suitable bound on $H^2$ is imposed. On the other hand, to show that $u\geq 0$ one uses the parabolicity of the operator $\Delta^\Sigma_{3\psi}$. This provides a further instance of the interplay between different weight structures on $M.$ It is worth pointing out that, as often happens in submanifold theory, the case $n=2$ is very special. In fact, for two-dimensional Killing graphs, one can show that the curvature assumption (c) implies the boundedness condition (e). Therefore, the previous result takes the following striking form. \begin{corollaryA} \label{coro_fheightestimate} Let $(M,\langle \cdot, \cdot \rangle_{M}) $ be a complete, $3$-dimensional Riemannian manifold endowed with a complete Killing vector field $Y$ whose orthogonal distribution has constant rank $2$ and is integrable. Let $(P,\langle \cdot, \cdot \rangle_{P})$ be an integral leaf of that distribution and let $\Omega\subset P$ be a smooth domain.
Set $\psi=-\log(|Y|)$ and assume that: \begin{enumerate} \item[(a)] $-\infty<\inf_{{\Omega}} \psi \leq \sup_{{\Omega}} \psi<+\infty$; \item[(b)] $\mathrm{vol}_{\psi}\left( B_{R}^{P}\cap\Omega\right) = \mathcal{O}(R^2) $, as $R\rightarrow+\infty$; \item[(c)] $\mathrm{Ric}^{M}_\psi\geq 0$ in a neighborhood of $\Omega\times\mathbb{R} $. \end{enumerate} Let $\Sigma=\mathrm{Graph}_{\Omega}(u)$ be a $2$-dimensional Killing graph over $\Omega$ with constant weighted mean curvature $H_{\psi} \equiv \mathrm{const}<0$ with respect to the upward pointing Gauss map and with boundary $\partial \Sigma \subset P\times\{0\}$. \noindent Then \[ 0\leq u(x) e^{-\psi(x)} \leq\frac{C}{|H_{\psi}|}, \] where $C:=e^{2(\sup_\Omega \psi-\inf_\Omega\psi)}\geq 1$. \end{corollaryA} The organization of the paper is as follows: \noindent In Section \ref{PotTheory} we prove some potential theoretic properties of weighted manifolds with boundary, related to global maximum principles for the weighted Laplacian. These results, using new direct arguments, extend to the weighted context the previous investigation in \cite{IPS-Crelle}. \noindent Section \ref{HeightEst} contains the proof of the quantitative height estimates for Killing graphs over possibly unbounded domains. To this end, we shall introduce: (i) some basic formulas for the weighted Laplacian of the height and the angle functions of the Killing graph and (ii) a weighted volume growth estimate that will enable us to apply the global maximum principles obtained in Section \ref{PotTheory}.
\noindent In Section \ref{section-examples} we construct concrete examples of Killing graphs with constant weighted mean curvature which, in particular, show that the constant $C$ in our estimate has the correct functional dependence on the variation $\sup \psi-\inf \psi$ of $\psi$.\\ \noindent{\bf Acknowledgments.} It is our pleasure to thank the anonymous referee for a very careful reading of the manuscript and for important suggestions that greatly improved the exposition. \section{Some potential theory on weighted manifolds}\label{PotTheory} The aim of this section is to study potential theoretic properties, and, more precisely, parabolicity with respect to the weighted Laplacian, of a weighted Riemannian manifold with possibly empty boundary. This relies on the notion of weak sub (super) solution subject to Neumann boundary conditions that we are going to introduce. In order to avoid confusion, and since we are dealing with general results valid on any weighted manifold, throughout this section we shall call $f:M \to \mathbb{R}$ the weight function. The symbol $\psi$ is thus reserved for the special weight related to Killing graphs. Let $M_f = (M,\langle \cdot, \cdot \rangle,e^{-f}\mathrm{d} M)$ be a smooth, $n$-dimensional, weighted manifold with smooth boundary $\partial M\neq\emptyset$ oriented by the exterior unit normal $\nu.$ The interior of $M$ is denoted by $\mathrm{int}M = M \setminus \partial M$. By a domain in $M$ we mean a not necessarily connected open set $D\subseteq M$. We say that the domain $D$ is smooth if its topological boundary $\partial D$ is a smooth hypersurface $\Gamma$ with boundary $\partial\Gamma=\partial D\cap\partial M$.
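A standard model to keep in mind, not needed in the sequel, is the following.

\begin{remark}
\rm{ A basic example of a weighted manifold with boundary is the Gaussian half-space $M=\{x\in\mathbb{R}^{n} : x_{1}\geq 0\}$, endowed with the Euclidean metric and the weight function $f(x)=|x|^{2}/2$, so that $\mathrm{d} M_{f}=e^{-|x|^{2}/2}\,\mathrm{d} x$. In this case the weighted Laplacian introduced in the Introduction reduces to the Ornstein-Uhlenbeck operator \[ \Delta_{f} w = \Delta w - \langle x, \nabla w \rangle, \] and $M_{f}$ has finite weighted volume; in particular $\mathrm{vol}_{f}(B_{R}^{M})=\mathcal{O}(R^{2})$ as $R\rightarrow+\infty$. }
\end{remark}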
Adopting the notation in \cite{IPS-Crelle}, for any domain $D\subseteq M$ we define \begin{align*} \partial_{0}D&=\partial D\cap\mathrm{int}M,\\ \partial_{1}D&=\partial M\cap D. \end{align*} We will refer to $\partial_0 D$ and to $\partial_1 D$ respectively as the \textit{Dirichlet boundary} and the \textit{Neumann boundary} of the domain $D$. Finally, the \textit{interior part} of $D$, in the sense of manifolds with boundary, is defined as \[ \mathrm{int}D=D \cap \mathrm{int}M, \] so that, in particular, \[ D= \mathrm{int}D \cup \partial_1 D. \] We recall that the Sobolev space $W^{1,2}( \mathrm{int}D_f)$ is defined as the Banach space of functions $u \in L^2( \mathrm{int}D_f)$ whose distributional gradient satisfies $\nabla u \in L^2( \mathrm{int}D_f)$. Here we are using the notation \[ L^2( \mathrm{int}D_f):= \{w\, :\, \int_{D} w^2\,\mathrm{d} M_f<+\infty\}. \] By the Meyers-Serrin density result, this space coincides with the closure of $C^{\infty}(\mathrm{int}D)$ with respect to the Sobolev norm $\|u\|_{W^{1,2}(M_f)}=\|u\|_{L^2(M_f)}+\|\nabla u\|_{L^2(M_f)}$. Moreover, when $D=M$ and $M$ is complete, $W^{1,2}(\mathrm{int}M_f)$ can also be realised as the $W^{1,2}(M_f)$-closure of $C^{\infty}_{c}(M)$. Finally, the space $W^{1,2}_{\mathrm{loc}}(\mathrm{int}D_f)$ is defined by the condition that $u \cdot \chi \in W^{1,2}(\mathrm{int}D_f)$ for every cut-off function $\chi \in C^{\infty}_c(\mathrm{int}D)$. We extend this notion by including the Neumann boundary of the domain as follows: \[ W^{1,2}_{\mathrm{loc}}(D_f)=\{u \,:\, u\in W^{1,2}(\mathrm{int}\,\Omega_f) \text{ for every domain } \Omega \Subset D=\mathrm{int}D \cup \partial_1 D\}. \] Now, let $D\subseteq M$ be any domain. We introduce the following definition.
Recall from the Introduction that the $f$-Laplacian of $M_{f}$ is the second order differential operator defined by \[ \Delta_{f} w = e^{f} \mathrm{div}(e^{-f} \nabla w) = \Delta w - \langle \nabla f , \nabla w \rangle, \] where $\Delta$ denotes the Laplace-Beltrami operator of $(M,\langle \cdot, \cdot \rangle)$. Clearly, as one can verify from the weighted divergence theorem, $-\Delta_{f}$ is a non-negative, symmetric operator on $L^{2}(M_{f})$. \begin{definition} By a weak Neumann sub-solution $u\in W_{loc}^{1,2}\left(D_f\right) $ of the $f$-Laplace equation, that is, a weak solution of the problem \begin{equation} \left\{ \begin{array} [c]{ll} \Delta_f u\geq0 & \text{on }\mathrm{int}D\\ \dfrac{\partial u}{\partial\nu}\leq0 & \text{on }\partial_{1}D, \end{array} \right. \label{subneumannproblem} \end{equation} we mean a function for which the inequality \begin{equation} -\int_{D}\left\langle \nabla u,\nabla\varphi\right\rangle \mathrm{d} M_f\geq0 \label{fsubsol} \end{equation} holds for every $0\leq\varphi\in C_{c}^{\infty}\left( D\right) $. Similarly, by taking $D=M$, one defines the notion of weak Neumann subsolution of the $f$-Laplace equation on $M_f$ as a function $u\in W_{loc}^{1,2}\left( M_f\right) $ which satisfies (\ref{fsubsol}) for every $0\leq\varphi\in C_{c}^{\infty }\left( M\right) $. As usual, the notion of weak supersolution is obtained by reversing the inequality and, finally, we speak of a weak solution when equality holds in (\ref{fsubsol}) without any sign condition on $\varphi$. \end{definition} \begin{remark} \rm{ Analogously to the classical case, in the above definition it is equivalent to require that (\ref{fsubsol}) holds for every $0\leq\varphi\in \mathrm{Lip}_{c}\left( D\right) $.
} \end{remark} Following the terminology introduced in \cite{IPS-Crelle}, we are now ready to give the following \begin{definition} \label{def_fparab} A weighted manifold $M_f$ with boundary $\partial M\neq\emptyset$ oriented by the exterior unit normal $\nu$ is said to be $f$-parabolic if any bounded above, weak Neumann subsolution of the $f$-Laplace equation on $M_f$ must be constant. Namely, for every $u\in C^{0}\left( M\right) \cap W_{loc}^{1,2}\left( M_f\right) $, \begin{equation} \label{def_fpar} \begin{array} [c]{ccc} \left\{ \begin{array} [c]{ll} \Delta_f u\geq0 & \text{on }\mathrm{int}M\\ \dfrac{\partial u}{\partial\nu}\leq0 & \text{on }\partial M\\ \sup_{M}u<+\infty & \end{array} \right. & \Rightarrow & u\equiv\mathrm{const}. \end{array} \end{equation} \end{definition} In the boundary-less setting it is by now well-known that $f$-parabolicity is related to a wide class of equivalent properties involving the recurrence of the Brownian motion, $f$-capacities of condensers, the heat kernel associated to the drifted Laplacian, weighted volume growth, function theoretic tests, global divergence theorems and many other geometric and potential-analytic properties. All these characterizations can be proven to hold true also in the case of weighted manifolds with non-empty boundary. However, here we limit ourselves to pointing out the following characterization. \begin{theorem}\label{thm_fAhlfors} A weighted manifold $M_f$ is $f$-parabolic if and only if the following maximum principle holds.
For every domain $D\subseteq M$ with $\partial_{0}D\neq\emptyset$ and for every $u\in C^{0}\left( \overline{D}\right) \cap W_{loc}^{1,2}\left( D_f\right) $ satisfying \[ \left\{ \begin{array} [c]{ll} \Delta_f u\geq0 & \text{on }\mathrm{int}D\\ \dfrac{\partial u}{\partial\nu}\leq0 & \text{on }\partial_{1}D\\ \sup\limits_{D}u<+\infty & \end{array} \right. \] in the weak sense, it holds that \[ \sup_{D}u=\sup_{\partial_{0}D}u. \] Moreover, if $M_f$ is an $f$-parabolic manifold with boundary $\partial M\neq\emptyset$ and if $u\in C^{0}\left( M\right) \cap W_{loc} ^{1,2}\left( \mathrm{int}M_f\right) $ satisfies \[ \left\{ \begin{array} [c]{ll} \Delta_f u\geq0 & \text{on }\mathrm{int}M\\ \sup_{M}u<+\infty & \end{array} \right. \] then \[ \sup_{M}u=\sup_{\partial M}u. \] \end{theorem} We refer the reader to \cite[Theorem 0.9]{IPS-Crelle} for a detailed proof of the previous result in the unweighted setting. Although the proof of this theorem can be deduced by adapting to the weighted Laplacian $\Delta_{f}$ the arguments in \cite{IPS-Crelle} (indeed, all the results obtained there can be adapted to the weighted case with only minor modifications of the proofs), we provide here a shorter and more elegant argument. In order to do this we will need the following preliminary fact, which will be proved without the use of any capacitary argument and, therefore, can be adapted to deal also with $f$-parabolicity under Dirichlet boundary conditions; see \cite{PPS-L1Liouville} for old results and recent advances on the Dirichlet parabolicity of (unweighted) Riemannian manifolds.
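The following elementary example, not needed later, may help to visualize the two maximum principles above.

\begin{remark}
\rm{ Let $M=\{x\in\mathbb{R}^{2} : x_{2}\geq 0\}$ be the closed Euclidean half-plane with trivial weight $f\equiv 0$. Since $\mathrm{vol}(B_{R}^{M})\leq \pi R^{2}$, $M_{f}$ is $f$-parabolic; see Proposition \ref{prop-fparab-volume} below. The non-constant harmonic function $u(x)=-x_{2}$ is bounded above, so, consistently with Definition \ref{def_fparab}, the Neumann condition must fail: indeed, $\partial u/\partial\nu = \langle \nabla u, \nu\rangle = 1 > 0$ on $\partial M=\{x_{2}=0\}$, where $\nu=(0,-1)$ is the exterior unit normal. On the other hand, in accordance with the second part of Theorem \ref{thm_fAhlfors}, $\sup_{M}u=0=\sup_{\partial M}u$. }
\end{remark}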
\begin{proposition}\label{prop_equivfpar} Let $M_f$ be a weighted manifold with boundary $\partial M$ oriented by the exterior unit normal $\nu$ and consider the warped product manifold $\hat{M}=M\times_{e^{-f}}\mathbb{T}$, with boundary $\partial\hat{M}=\partial{M}\times\mathbb{T}$ oriented by the exterior unit normal $\hat \nu(x,\theta) = (\nu(x),0)$. Here we are setting $\mathbb{T}=\mathbb{R} /\mathbb{Z}$, normalized so that $\mathrm{vol}(\mathbb{T})=1$. Then $M_f$ is $f$-parabolic if and only if $\hat M$ is parabolic. \end{proposition} \begin{proof} To illustrate the argument we are going to use pointwise computations for $C^{2}$ functions, which however can be easily formulated in the weak sense for functions in $C^{0}\cap W^{1,2}_{loc}$. We also recall that the $f$-Laplacian $\Delta_{f}$ on $M$ is related to the Laplace-Beltrami operator $\hat\Delta$ of $\hat M$ by the formula \[ \hat\Delta = \Delta_f + e^{2f} \Delta_{\mathbb{T}}. \] With this preparation, assume first that $\hat{M}$ is parabolic. We have to show that any (regular enough) solution of the problem \[ \left\{ \begin{array} [c]{ll} {\Delta}_{f} u\geq0 & \text{on }\mathrm{int}{M}\\ \dfrac{\partial u}{\partial {\nu}}\leq0 & \text{on }\partial {M}\\ \sup_{{M}} u<+\infty & \end{array} \right. \] must be constant. To this end, having selected $u$ we simply define $\hat u(x,t)=u(x)$, $(x,t)\in\hat{M}$, and we observe that $\hat u$ satisfies the analogous problem on $\hat M$. Since $\hat M$ is parabolic, $\hat u$, and hence $u$, must be constant, as required.
Conversely, assume that $M_f$ is $f$-parabolic and let $u$ be a (regular enough) solution of the problem \begin{equation} \label{subneumannproblem2} \left\{ \begin{array} [c]{ll} \hat \Delta u\geq0 & \text{on }\mathrm{int} \hat{M}\\ \dfrac{\partial u}{\partial \hat\nu}\leq0 & \text{on }\partial \hat{M}=\partial M\times \mathbb{T}\\ \sup_{\hat M}u<+\infty. & \end{array} \right. \end{equation} By translating and scaling we may assume that $\sup u=1 $, and since $\max\{u,0\}$ is again a solution of \eqref{subneumannproblem2}, we may in fact assume that $0\leq u\leq 1$. Let \[ \bar{u}(x) = \int_{\mathbb{T}} u(x,t)\,dt. \] Recalling that rotations in $\mathbb{T}$ are isometries of $\hat{M}$, and therefore commute with the Laplacian $\hat \Delta$, we deduce that \[ \Delta_{f} \bar u (x) = \hat \Delta \bar u (x)= \int_{\mathbb{T}} \hat \Delta u (x,t)\, dt \geq 0 \] and \[ \frac{\partial \bar u}{\partial \nu}\leq 0, \] so, by the assumed $f$-parabolicity of $M_f$, $\bar u$ is constant, and \[ 0=\Delta_f \bar{u}=\int_{\mathbb{T}} \hat \Delta u (x,t)\,dt. \] Since $\hat \Delta u \geq 0$ we conclude that $\hat \Delta u=0$ in $\mathrm{int} \,\hat{M}$. Applying the above argument to $u^2$, which is again a solution of \eqref{subneumannproblem2}, we obtain that \[ 0=\hat \Delta u^2 = 2 u \hat \Delta u + 2 |\hat \nabla u|^2 = 2 |\hat \nabla u|^2 \quad \text{ in } \mathrm{int} \, \hat{M}, \] and therefore $u$ is constant, as required. \end{proof} We are now ready to present the \begin{proof}[Proof of Theorem \ref{thm_fAhlfors}] Assume first that $M_f$ is $f$-parabolic. As a consequence of Proposition \ref{prop_equivfpar} this is equivalent to the parabolicity of $\hat{M}$ which, in turn, is equivalent to the validity of the following Ahlfors-type maximum principle (see \cite[Theorem 0.9]{IPS-Crelle}).
For every domain $\hat{D}\subseteq \hat{M}$ with $\partial_{0}\hat{D}\neq\emptyset$ and for every $u\in C^{0}(\overline{\hat{D}}) \cap W_{loc}^{1,2}( \hat{D}) $ satisfying \begin{equation}\label{eq_ahlforsmhat} \left\{ \begin{array} [c]{ll} \hat{\Delta} u\geq0 & \text{on }\mathrm{int}\hat{D}\\ \dfrac{\partial u}{\partial\hat{\nu}}\leq0 & \text{on }\partial_{1}\hat{D}\\ \sup\limits_{\hat{D}}u<+\infty & \end{array} \right. \end{equation} in the weak sense, it holds that \[ \sup_{\hat{D}}u=\sup_{\partial_{0}\hat{D}}u. \] Furthermore, in case $\hat{D}=\hat{M}$, for every $u\in C^{0}(\overline{\hat{M}}) \cap W_{loc}^{1,2}( \hat{M}) $ satisfying \[ \left\{ \begin{array} [c]{ll} \hat{\Delta} u\geq0 & \text{on }\mathrm{int}\hat{M}\\ \sup\limits_{\hat{M}}u<+\infty & \end{array} \right. \] in the weak sense, it holds that \[ \sup_{\hat{M}}u=\sup_{\partial\hat{M}}u. \] In particular, if $D\subset M$ is any smooth domain with $\partial_0 D\neq\emptyset$ and if $u\in C^{0}\left( \overline{D}\right) \cap W_{loc}^{1,2}\left(D\right) $ satisfies \[ \left\{ \begin{array} [c]{ll} \Delta_f u\geq0 & \text{on }\mathrm{int}D\\ \dfrac{\partial u}{\partial\nu}\leq0 & \text{on }\partial_{1}D\\ \sup\limits_{D}u<+\infty & \end{array} \right. \] in the weak sense, then $\hat{u}(x,t)=u(x)$ is a solution of \eqref{eq_ahlforsmhat} on $\hat{D}=D\times\mathbb{T}$. Hence, the parabolicity of $\hat{M}$, in the form of the Ahlfors-type maximum principle, implies that \[ \sup_Du=\sup_{\hat{D}}\hat{u}=\sup_{\partial_{0}\hat{D}=\partial_0D\times\mathbb{T}}\hat{u}=\sup_{\partial_0D}u. \] The same reasoning applies in case $D=M$.
Conversely, assume that for every domain $D\subseteq M$ with $\partial_{0}D\neq\emptyset$ and for every $u\in C^{0}\left( \bar{D}\right) \cap W_{loc}^{1,2}( D_f) $ satisfying \[ \left\{ \begin{array} [c]{ll} \Delta_f u\geq0 & \text{on }\mathrm{int}D\\ \dfrac{\partial u}{\partial\nu}\leq0 & \text{on }\partial_{1}D\\ \sup\limits_{D}u<+\infty & \end{array} \right. \] in the weak sense, it holds that \[ \sup_{D}u=\sup_{\partial_{0}D}u. \] Suppose by contradiction that $M_f$ is not $f$-parabolic. Then there exists a non-constant function $v\in C^{0}\left( M\right) \cap W_{loc}^{1,2}\left( M_f\right) $ satisfying \[ \left\{ \begin{array} [c]{ll} \Delta_f v\geq0 & \text{on }\mathrm{int}M\\ \dfrac{\partial v}{\partial\nu}\leq0 & \text{on }\partial M\\ \sup\limits_{M}v<+\infty & \end{array} \right. \] in the weak sense. Given $\eta<\sup_M v$, consider the domain $D_{\eta}=\{x\in M:v(x)>\eta \}\neq\emptyset$. We can choose $\eta$ sufficiently close to $\sup_M v$ in such a way that $\mathrm{int}M\not \subseteq D_{\eta}$. In particular, $\partial D_{\eta}\subseteq\left\{ v=\eta\right\} $ and $\partial _{0}D_{\eta}\neq\emptyset$. Now, $v\in C^{0}\left( \overline{D }_{\eta}\right) \cap W_{loc}^{1,2}\left( (D_{\eta})_f\right) $ is a bounded above weak Neumann subsolution of the $f$-Laplace equation on $D_{\eta}$. Moreover, \[ \sup_{\partial_{0}D_{\eta}}{v}=\eta<\sup_{D_{\eta}}{v}, \] contradicting our assumptions. \end{proof} From the geometric point of view, it can be proved that $f$-parabolicity is related to the growth rate of the weighted volume of intrinsic metric objects. Indeed, exploiting a result due to A. Grigor'yan, \cite{Gr1}, one can prove the following result; see also Theorem 0.7 and Remark 0.8 in \cite{IPS-Crelle}.
For the sake of clarity, we recall from the Introduction that the weighted volume of the metric ball $B^{M}_{R}(o) = \{ x\in M : \mathrm{dist}_{M}(x,o)<R\}$ of $M_{f}$ is defined by $\mathrm{vol}_{f}(B^{M}_{R}\!(o)) = \int_{B^{M}_{R}\!(o)} \mathrm{d} M_{f}$. \begin{proposition}\label{prop-fparab-volume} Let $M_f$ be a complete weighted manifold with boundary $\partial M\neq\emptyset$. If, for some reference point $o\in M$, \[ \mathrm{vol}_f (B_{R}^{M}\!(o) ) =\mathcal{O}(R^2), \quad \mathrm{as}\quad R\rightarrow+\infty, \] then $M_f$ is $f$-parabolic. \end{proposition} \begin{proof} Set $\Omega_R:=B_{R}^{M}(o)\times \mathbb{T}$. Then, as a consequence of Fubini's Theorem, \[ \mathrm{vol}_f(B_{R}^{M} \!(o))=\int_{B_{R}^{M}\!(o)}\mathrm{d} M_f=\int_{\Omega_R}\mathrm{d} \hat{M}=\mathrm{vol}(\Omega_R). \] Denote by $B_R^{\hat{M}}\!(\hat{o})$ the geodesic ball in $\hat{M}$ with reference point $\hat{o}=(o,\hat{t})$. Given an arbitrary point $\hat{x}=(x,t)\in\hat{M}$, let $\hat{\alpha}=(\alpha,\beta):[0,1]\rightarrow \hat{M}$ be a curve in $\hat{M}$ such that $\hat{\alpha}(0)=\hat{o}$ and $\hat{\alpha}(1)=\hat{x}$. Then \begin{align*} \ell(\hat{\alpha})=&\int_{0}^{1}\|\hat{\alpha}^{\prime}(s)\|\,\mathrm{d} s\\ =&\int_{0}^{1}\sqrt{|{\alpha}^{\prime}(s)|^2+e^{-2 f(\alpha(s))}({\beta}^{\prime}(s))^2}\,\mathrm{d} s\\ \geq &\int_{0}^{1}|\alpha^{\prime}(s)|\,\mathrm{d} s\\ \geq & \,\mathrm{dist}_M(x,o). \end{align*} Hence, as a consequence of the previous chain of inequalities, if $\hat{x}\in B_R^{\hat{M}}\!(\hat{o})$, then $x\in B_R^{M}\!(o)$, which in turn implies that \[ B_R^{\hat{M}}\!(\hat{o})\subseteq \Omega_R. \] In particular, \[ \mathrm{vol}(B_R^{\hat{M}}\!(\hat{o}))\leq \mathrm{vol}(\Omega_R)\leq C R^2 \] for $R$ sufficiently large.
The conclusion now follows from the above mentioned result by Grigor'yan and from the fact that, as proven in Proposition \ref{prop_equivfpar}, the parabolicity of $\hat{M}$ is equivalent to the $f$-parabolicity of $M_f$. \end{proof} \section{The height estimates}\label{HeightEst} In this section we prove the main result of the paper, Theorem \ref{th_fheightestimate}, and show how the boundedness assumption can be dropped in dimension $2$; see Corollary \ref{coro_fheightestimate}. To this end, we need some preparation: first we derive some basic formulas concerning the weighted Laplacian of the height function and the angle function of the Killing graph; see Proposition \ref{prop-formulas}. Next we extend to the weighted setting and to Killing graphs a crucial volume estimate obtained in \cite{IPS-Crelle, LW}; see Lemma \ref{lemma_volume}. This estimate will allow us to use the global maximum principles introduced in Section \ref{PotTheory}. The proofs of Theorem \ref{th_fheightestimate} and of its Corollary \ref{coro_fheightestimate} will be provided in Section \ref{subsection-proof}. \subsection{Some basic formulas} This section aims to prove the following \begin{proposition}\label{prop-formulas} Let $( M, \langle \cdot, \cdot \rangle_{M})$ be a complete $(n+1)$-dimensional Riemannian manifold endowed with a complete Killing vector field $Y$ whose orthogonal distribution has constant rank $n$ and is integrable. Let $(P,\langle \cdot, \cdot \rangle_{P}) $ be an integral leaf of that distribution and let $\Sigma=\mathrm{Graph_{\Omega}}(u)$ be a Killing graph over a smooth domain $\Omega\subset P$, with upward unit normal $\mathcal{N}$. Set $\psi=-\log|Y|$.
Then, for any constant $C \in \mathbb{R}$, the following equations hold on $(\Sigma, \langle \cdot, \cdot \rangle)$: \begin{eqnarray} \label{eq-Deltafu} \Delta^{\Sigma}_{C\psi}u & = & \left( nH+(C-2)\langle {\nabla}^{M} \psi, \mathcal{N} \rangle_{M} \right) e^{2\psi}\langle Y,\mathcal{N} \rangle_{M},\\ \label{eq-DeltafAngle} \Delta^{\Sigma}_{C\psi}\langle Y, \mathcal{N} \rangle_{M} & = & -\langle Y, \mathcal{N}\rangle_{M} \left( |A|^2+\mathrm{Ric}^{M}_{C\psi}(\mathcal{N},\mathcal{N}) \right)- nY^T(H_{C\psi}), \end{eqnarray} where $\mathrm{Ric}^{M}_{C\psi}$ is the Bakry-\'Emery Ricci tensor of the weighted manifold $M_{C\psi}$, $|A|$ is the norm of the second fundamental form of $\Sigma$ and $H$, $H_{C\psi}$ denote respectively the mean curvature of $\Sigma$ and its $C\psi$-weighted modified version. \end{proposition} \begin{proof} Observe that \[ u=s|_{\Sigma}, \] where, we recall, $s$ is the flow parameter of $Y$. Using ${\nabla}s=e^{2\psi} Y$, we have \[ \nabla^{\Sigma} u = \nabla^{\Sigma} s = e^{2\psi} Y^T.
\] Thus, letting $\{e_i\}$ be an orthonormal basis of $T\Sigma$, and recalling that, since $Y$ is Killing, $\langle \nabla^{M}_V Y, V\rangle_{M} = 0$ for every vector $V$, we compute \begin{align*} \Delta^{\Sigma}u&=\sum_{i=1}^{n}\langle\nabla_{e_{i}}^{\Sigma}\nabla^{\Sigma}u,e_{i}\rangle\\ &= \sum_{i=1}^{n}\langle\nabla^{\Sigma}_{e_{i}} e^{2\psi}Y^{T},e_{i}\rangle\\ &= e^{2\psi}\sum_{i=1}^{n}\langle{\nabla}^{M}_{e_{i}}(Y-\langle Y,\mathcal{N} \rangle_{M} \mathcal{N}),e_{i}\rangle_{M} +2\sum_{i=1}^{n}\langle e_{i},{\nabla^{\Sigma}}\psi\rangle\langle e_{i},{\nabla^{\Sigma}}s\rangle\\ &= e^{2\psi}\sum_{i=1}^{n}\langle{\nabla}^{M}_{e_{i}}Y,e_{i}\rangle_{M} -\langle Y,\mathcal{N}\rangle_{M} e^{2\psi}\sum_{i=1}^{n}\langle{\nabla}^{M}_{e_{i}} \mathcal{N},e_{i}\rangle_{M} +2\sum_{i=1}^{n}\langle e_{i},\nabla^{\Sigma}\psi \rangle\langle e_{i},\nabla^{\Sigma}u\rangle\\ &=nHe^{2\psi}\langle Y,\mathcal{N} \rangle_{M}+2\langle\nabla^{\Sigma}\psi,\nabla^{\Sigma}u\rangle. \end{align*} Equation \eqref{eq-Deltafu} follows since, by definition, \[ \Delta_{C\psi}^{\Sigma}u= \Delta^\Sigma u- C\langle\nabla^\Sigma \psi, \nabla^\Sigma u\rangle.
\] As for equation \eqref{eq-DeltafAngle}, note that, since $\mathcal{N}$ is a unit normal and $Y$ is a Killing vector field, $\nabla^{M}_\mathcal{N} Y$ is tangent to $\Sigma$ and, for every vector $X$ tangent to $\Sigma$, \[ \langle \nabla^{M}_X \mathcal{N}, Y\rangle_{M} = \langle \nabla^{M}_X \mathcal{N}, Y^T + \langle Y, \mathcal{N} \rangle_{M} \mathcal{N} \rangle_{M} = \langle \nabla^{M}_X \mathcal{N}, Y^T\rangle_{M} = -\langle A_{\mathcal{N}}\,Y^T, X\rangle, \] so that \begin{equation*} \langle \nabla^\Sigma \langle Y,\mathcal{N} \rangle_{M}, X\rangle = X \langle Y,\mathcal{N}\rangle_{M} = -\langle \nabla^{M}_\mathcal{N} Y, X\rangle_{M} + \langle Y, \nabla^{M}_{X} \mathcal{N}\rangle_{M} = - \langle \nabla^{M}_\mathcal{N} Y + A_{\mathcal{N}} \, Y^T, X\rangle_{M} \end{equation*} and therefore \[ \nabla^{\Sigma}\langle Y, \mathcal{N} \rangle_{M}=-{\nabla}^{M}_\mathcal{N} Y- A_{\mathcal{N}} Y^T. \] Moreover, using the Codazzi equations, it is not difficult to prove that \[ \Delta^{\Sigma}\langle Y,\mathcal{N}\rangle_{M}=-\bigl(|A|^{2}+{\mathrm{Ric}^{M}}(\mathcal{N},\mathcal{N})\bigr)\langle Y, \mathcal{N} \rangle_{M}-nY^T(H), \] see, e.g., \cite[Prop. 1]{Fornari-Ripoll-Illinois}.
Using the definition of $H_{C\psi}= H+\frac 1n \langle \nabla^{M}( C\psi), \mathcal{N}\rangle_{M}$ we also have \begin{align*} -nY^T(H) &=-nY^T(H_{C\psi})+CY^T\langle {\nabla}^{M} {\psi}, \mathcal{N}\rangle_{M}\\ &=-nY^T(H_{C\psi})+C\langle{\nabla}^{M}_Y {\nabla}^{M} {\psi}, \mathcal{N} \rangle_{M} - C\langle Y,\mathcal{N}\rangle_{M} {\mathrm{Hess}^{M}}(\psi)(\mathcal{N},\mathcal{N})+ C\langle{\nabla}^{M} {\psi},{\nabla}^{M}_{Y^T} \mathcal{N}\rangle_{M}\\ &=-nY^T(H_{C\psi})+C\langle{\nabla}^{M}_{{\nabla} {\psi}} Y, \mathcal{N} \rangle_{M} -C\langle Y,\mathcal{N}\rangle_{M} {\mathrm{Hess}^{M}}(\psi)(\mathcal{N},\mathcal{N}) + C\langle{\nabla}^\Sigma {\psi},({\nabla}^{M}_{Y^T} \mathcal{N})^{T}\rangle\\ &=-nY^T(H_{C\psi})+C\langle\nabla^\Sigma {\psi},\nabla^\Sigma \langle Y, \mathcal{N} \rangle_{M} \rangle -C\langle Y,\mathcal{N}\rangle_{M} {\mathrm{Hess}}^{M}(\psi)(\mathcal{N},\mathcal{N}), \end{align*} where we have used that: \begin{itemize} \item [-] $\nabla^{M}_Y \nabla^{M}\psi = \nabla^{M}_{\nabla^{M} \psi}Y,$ since $\psi$ depends only on the $P$-variables and $Y=\partial_{s}$; \item [-] $\langle \nabla^{M}_{\nabla^{M} \psi} Y, \mathcal{N} \rangle_{M} = -\langle \nabla^{M}_\mathcal{N} Y,\nabla^{M} \psi\rangle_{M}$ and $\langle \nabla^{M}_\mathcal{N} Y, \mathcal{N} \rangle_{M} =0$, since $Y$ is Killing; \item [-] $\langle\nabla^{M} \psi, \nabla^{M}_{Y^T} \mathcal{N} \rangle_{M} = \langle \nabla^\Sigma \psi, (\nabla^{M}_{Y^T}\mathcal{N})^{T} \rangle$, since $\langle \mathcal{N}, \nabla^{M}_{Y^T}\mathcal{N} \rangle_{M} = \frac 12 Y^T\langle \mathcal{N}, \mathcal{N} \rangle_{M} = 0$; \item [-] the identity \begin{equation*} \begin{split} \nabla^\Sigma \psi\langle Y, \mathcal{N} \rangle_{M} &= \langle \nabla^{M}_{\nabla^\Sigma \psi} Y, \mathcal{N} \rangle_{M} + \langle Y, \nabla^{M}_{\nabla^\Sigma \psi} \mathcal{N} \rangle_{M}\\ &= \langle \nabla^{M}_{\nabla^{M} \psi} Y, \mathcal{N} \rangle_{M} - \langle
\mathcal{N} , \nabla^{M} \psi \rangle_{M} \langle \nabla^{M}_\mathcal{N} Y, \mathcal{N}\rangle_{M}\\ &\quad +\langle Y^T , (\nabla_{\nabla^\Sigma \psi} \mathcal{N})^{T}\rangle + \langle \mathcal{N}, Y \rangle_{M} \langle \mathcal{N}, \nabla^{M}_{\nabla^\Sigma \psi} \mathcal{N} \rangle_{M}\\ &= \langle \nabla^{M}_{\nabla^{M} \psi} Y, \mathcal{N} \rangle_{M} +\langle Y^T ,(\nabla^{M}_{\nabla^\Sigma \psi} \mathcal{N})^{T} \rangle\\ &=\langle \nabla^{M}_{\nabla^{M} \psi} Y, \mathcal{N} \rangle_{M} +\langle \nabla^\Sigma \psi ,(\nabla^{M}_{Y^T} \mathcal{N})^{T} \rangle. \end{split} \end{equation*} \end{itemize} Inserting the above identities into \[ \Delta^\Sigma_{C\psi}\langle Y, \mathcal{N} \rangle_{M} = \Delta^\Sigma \langle Y,\mathcal{N} \rangle_{M} - C\langle \nabla^\Sigma \psi, \nabla^\Sigma \langle Y, \mathcal{N}\rangle\rangle \] and recalling that \[ \mathrm{Ric}^{M}_{C\psi}= \mathrm{Ric}^{M}+C\, \mathrm{Hess}^{M}(\psi) \] yields the validity of \eqref{eq-DeltafAngle}. \end{proof} \subsection{Weighted volume estimates} In this section we extend to the weighted setting of Killing graphs, with prescribed (weighted) mean curvature, an estimate of the extrinsic volume originally obtained in \cite{LW} and later extended in \cite{IPS-Crelle}. \begin{lemma} \label{lemma_volume} Let $( M, \langle \cdot, \cdot \rangle_{M})$ be a complete $(n+1)$-dimensional Riemannian manifold endowed with a complete Killing vector field $Y$ whose orthogonal distribution has constant rank $n$ and is integrable. Let $(P,\langle \cdot, \cdot \rangle_{P}) $ be an integral leaf of that distribution, so that $M$ can be identified with $P \times_{e^{-\psi}}\mathbb{R}$, where $\psi=-\log|Y|$. Let $\Sigma=\mathrm{Graph_{ \Omega}}(u)$ be a Killing graph over a smooth domain $\Omega\subset P$, with mean curvature $H$ with respect to the upward unit normal $\mathcal{N}$, and let $\pi:\Sigma\to P$ be the projection map.
Then, for any $y_{0}=( x_{0},u(x_0)) \in \Sigma$ and for every $R>0$,
\[
\pi(B_R^\Sigma(y_0)) \subseteq \Omega_{R}\left(x_{0}\right),
\]
where we have set
\[
\Omega_{R}\left( x_{0}\right) =B_{R}^{P}(x_{0})\cap\Omega.
\]
Moreover, assume that, given $D\in \mathbb{R}$,
\begin{equation}
A:=\sup_{\Omega}\left\vert u (x) e^{-\psi (x) }\right\vert +\sup_{\Omega}\left\vert H_{D\psi} (x) \right\vert <+\infty.\label{volume-u+Hf}
\end{equation}
Then, there exists a constant $C>0$, depending on $n$ and $A$, such that, for every $\delta,R>0$, the corresponding $D\psi$-volume of the intrinsic ball of $\Sigma$ satisfies
\begin{equation}\label{eq_volume}
\mathrm{vol}_{D\psi}B_{R}^{\Sigma}\left( y_{0}\right) \leq C\left(1+\frac{1}{\delta R}\right) \mathrm{vol}_{D\psi}\left( \Omega_{\left( 1+\delta\right) R}\left( x_{0}\right) \right),
\end{equation}
where $y_{0}=(x_{0},u(x_{0}))\in\Sigma$ is a reference origin.
\end{lemma}
\begin{proof}
The Riemannian metric of $M$ is written as $\langle \cdot, \cdot \rangle_{M}=\langle \cdot, \cdot \rangle_{P}+e^{-2\psi} \mathrm{d} s \otimes \mathrm{d} s$. Let $y_{0}=(x_{0},u(x_{0}))$ and $y=(x,u(x))$ be points in $\Sigma$ connected by the curve $(\alpha(t), u(\alpha(t)))$, where $\alpha(t)$, $t\in\lbrack0,1]$, is an arbitrary path connecting $x_{0}$ and $x$ in $\Omega\subseteq P$. Writing $s(t)=u(\alpha(t))$ we have
\begin{align*}
\int_{0}^{1}\left\{|\alpha^{\prime}\left( t\right)|^{2}+e^{-2\psi(\alpha(t))}s^{\prime}(t)^{2}\right\}^{\frac{1}{2}} dt & \geq \int_{0}^{1}|\alpha^{\prime}(t)|dt \geq d_{P}(x_{0},x).
\end{align*}
Thus, if $y\in B_{R}^{\Sigma}(y_{0})$ we deduce that $x\in\Omega _{R}\left( x_{0}\right)$, proving the first half of the lemma. Now we compute the volume of $B_{R}^{\Sigma}(y_{0})$.
Since $\pi(B_{R}^{\Sigma}(y_{0}))\subset\Omega_{R}\left( x_{0}\right) $ we have
\begin{align}
\mathrm{vol}_{D\psi}(B_{R}^{\Sigma}(y_{0})) & =\int_{\pi(B_{R}^{\Sigma}(y_{0}))}\sqrt{e^{2\psi}+|\nabla^{P}u|^{2}}e^{-(D+1)\psi}\,dP\label{volume-1}\\
& \leq\int_{\Omega_{R}\left( x_{0}\right) } \sqrt{e^{2\psi}+|\nabla^{P}u|^{2}}e^{-(D+1)\psi}dP\nonumber\\
& =\int_{\Omega_{R}\left( x_{0}\right) } \frac{|\nabla^{P}u|^{2}}{\sqrt{e^{2\psi}+|\nabla^{P}u|^{2}}}e^{-(D+1)\psi}\,dP\nonumber\\
& +\int_{\Omega_{R}\left( x_{0}\right) }\frac {e^{2\psi}}{\sqrt{e^{2\psi}+|\nabla^{P}u|^{2}}}e^{-(D+1)\psi}dP.\nonumber
\end{align}
We then consider the vector field
\[
Z=\rho u\frac{e^{-\left(D+1\right) \psi}\nabla^{P}u}{\sqrt{e^{2\psi} +|\nabla^{P}u|^{2}}},
\]
where the function $\rho$ is given by
\[
\rho(x)=
\begin{cases}
1 & \mathrm{on}\quad B_{R}(x_{0})\\
\frac{\left( 1+\delta\right) R-r(x)}{\delta R} & \mathrm{on}\quad B_{\left( 1+\delta\right) R}(x_{0})\backslash B_{R}(x_{0})\\
0 & \mathrm{elsewhere},
\end{cases}
\]
with $r(x) = \mathrm{dist}_{P}(x,x_{0})$.
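The cutoff $\rho$ above is the only non-geometric ingredient of the argument; a minimal Python sketch (ours, not part of the proof, with illustrative parameter values) confirms its defining properties: $\rho\equiv 1$ on $B_R$, $\rho\equiv 0$ outside $B_{(1+\delta)R}$, and the slope on the intermediate annulus is $-1/(\delta R)$, so $|\nabla^P\rho|\le 1/(\delta R)$.

```python
# Illustrative sketch of the radial cutoff rho used in the volume estimate,
# written as a function of r = dist_P(x, x0).  R and delta are sample values.

def rho(r, R, delta):
    """Piecewise-linear cutoff: 1 on [0, R], 0 beyond (1+delta)R."""
    if r <= R:
        return 1.0
    if r <= (1.0 + delta) * R:
        return ((1.0 + delta) * R - r) / (delta * R)
    return 0.0

R, delta = 2.0, 0.5
# identically 1 on the inner ball, vanishing outside the enlarged ball
assert rho(0.0, R, delta) == 1.0 and rho(R, R, delta) == 1.0
assert rho((1 + delta) * R, R, delta) == 0.0
# finite-difference slope on the annulus equals -1/(delta*R)
h = 1e-6
slope = (rho(R + 0.5 + h, R, delta) - rho(R + 0.5, R, delta)) / h
assert abs(slope + 1.0 / (delta * R)) < 1e-5
```

This gradient bound is precisely what produces the factor $1/(\delta R)$ in \eqref{eq_volume} after the divergence theorem is applied.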
Recalling equation \eqref{capillary-2} in the Introduction, i.e.,
\[
\mathrm{div}^{P}\left\{ \frac{\nabla^{P}u}{\sqrt{e^{2\psi}+\left\vert \nabla^{P}u\right\vert ^{2}}}\right\}=nH+\frac{\left\langle \nabla^{P}u,\nabla^{P}\psi\right\rangle }{\sqrt{e^{2\psi}+\left\vert \nabla ^{P}u\right\vert ^{2}}},
\]
we compute
\begin{align*}
\mathrm{div}\, Z & =e^{-\left( D+1\right) \psi}\left\{ u\frac {\left\langle \nabla^{P}\rho,\nabla^{P}u\right\rangle }{\sqrt{e^{2\psi}+\left\vert \nabla ^{P}u\right\vert ^{2}}}+\rho\frac{\left\vert \nabla ^{P}u\right\vert ^{2}}{\sqrt{e^{2\psi}+\left\vert \nabla ^{P}u\right\vert ^{2}} }+n\rho uH_{D\psi}\right\} \\
& =e^{-D\psi}\left\{ u e^{-\psi}\frac{\left\langle \nabla^{P} \rho,\nabla^{P}u\right\rangle }{\sqrt{e^{2\psi}+\left\vert \nabla ^{P}u\right\vert ^{2}}}+\rho e^{-\psi}\frac{\left\vert \nabla^{P}u\right\vert ^{2} }{\sqrt{e^{2\psi}+\left\vert \nabla ^{P}u\right\vert ^{2}}}+n\rho u e^{-\psi}H_{D\psi}\right\} \\
& \geq e^{-D\psi}\left\{ -\left\vert u e^{-\psi}\right\vert \left\vert \nabla^{P}\rho\right\vert +\rho e^{-\psi} \frac{\left\vert \nabla^{P}u\right\vert ^{2}}{\sqrt{e^{2\psi}+\left\vert \nabla ^{P}u\right\vert ^{2}}}-n\rho\left\vert u e^{-\psi}\right\vert \left\vert H_{D\psi}\right\vert \right\} .
\end{align*}
Since $Z$ has compact support in $\Omega_{\left( 1+\delta\right) R}$, applying the divergence theorem and using the properties of $\rho$, from the above inequality we obtain
\begin{align*}
\int_{\Omega_{R}\left( x_{0}\right) }\frac{\left\vert \nabla^{P}u\right\vert ^{2}}{\sqrt{e^{2\psi}+\left\vert \nabla ^{P}u\right\vert ^{2}}}e^{-\psi}e^{-D\psi}\text{ }dP & \leq\frac{1}{\delta R}\int_{\Omega_{\left( 1+\delta\right) R}\left( x_{0}\right) }\left\vert u e^{-\psi}\right\vert e^{-D\psi}\text{ }dP\\
& +n\int_{\Omega_{\left( 1+\delta\right) R}\left( x_{0}\right) } \left\vert u e^{-\psi}\right\vert \left\vert H_{D\psi}\right\vert e^{-D\psi}\text{ }dP.
\end{align*}
Inserting the latter into (\ref{volume-1}), and using that $e^{2\psi}/\sqrt{e^{2\psi}+|\nabla^{P}u|^{2}}\leq e^{\psi}$ to bound the second integral there, we get
\begin{align*}
\mathrm{vol}_{D\psi}(B_{R}^{\Sigma}(y_{0})) & \leq \mathrm{vol}_{D\psi}\left(\Omega_{R}\left( x_{0}\right)\right)+\frac{1}{\delta R} \int_{\Omega_{\left( 1+\delta\right) R}\left( x_{0}\right) } \left\vert u e^{-\psi}\right\vert e^{-D\psi}\text{ }dP\\
& +n\int_{\Omega_{\left( 1+\delta\right) R}\left( x_{0}\right) } \left\vert u e^{-\psi}\right\vert \left\vert H_{D\psi}\right\vert e^{-D\psi}\text{ }dP.
\end{align*}
To conclude the desired volume estimate, we now recall that, by assumption,
\[
\sup_{\Omega}\left\vert u e^{-\psi}\right\vert +\sup_{\Omega }\left\vert H_{D\psi}\right\vert <+\infty.
\]
The proof of the Lemma is complete.
\end{proof}
\begin{remark}\label{rmk-fvolumes}
\rm{ We note for further use that if, in the previous Lemma, we assume that $\inf_P\psi>-\infty$, then the following more general inequality holds:
\[
\mathrm{vol}_{C\psi} B_{R}^{\Sigma} (y_0) \leq A\, \mathrm{vol}_{D\psi}(\Omega_R(x_0)),
\]
for any constant $C>D$ and any $R \gg 1$.
}
\end{remark}
\subsection{Proofs of Theorem \ref{th_fheightestimate} and Corollary \ref{coro_fheightestimate}}\label{subsection-proof}
We are now in a position to give the
\begin{proof}[Proof (of Theorem \ref{th_fheightestimate})]
Since $H_{\psi} \equiv \mathrm{const}$, it follows from equation \eqref{eq-Deltafu} that
\begin{align*}
\Delta^{\Sigma}_{\psi}(H_{\psi} u) &= nH_{\psi}He^{2\psi}\langle Y,\mathcal{N} \rangle_{M}- H_{\psi}e^{2\psi}\langle \nabla^{M} {\psi}, \mathcal{N} \rangle_{M} \langle Y, \mathcal{N} \rangle_{M}\\
&= nH^2e^{2\psi}\langle Y, \mathcal{N} \rangle_{M} + He^{2\psi} \langle \nabla^{M} {\psi}, \mathcal{N} \rangle_{M} \langle Y, \mathcal{N} \rangle_{M}\\
&\,\,\,\,\, -He^{2\psi}\langle \nabla^{M} {\psi}, \mathcal{N} \rangle_{M} \langle Y,\mathcal{N} \rangle_{M} -\frac1n e^{2\psi} \langle \nabla^{M} {\psi}, \mathcal{N} \rangle_{M}^{2} \langle Y, \mathcal{N} \rangle_{M}\\
&\leq nH^2e^{2\psi}\langle Y, \mathcal{N} \rangle_{M}.
\end{align*}
Combining this inequality with equation \eqref{eq-DeltafAngle}, it is straightforward to prove that, under our assumptions on $\mathrm{Ric}^{M}_{\psi}$ and $H_{\psi}$, the function
\[
\varphi (x) =H_{\psi}u(x) e^{-2\sup_\Omega \psi}+\langle Y_{x}, \mathcal{N}_{x} \rangle_{M}
\]
satisfies
\[
\Delta^{\Sigma}_{ \psi} \varphi \leq0\text{ on }\Sigma.
\]
On the other hand, using assumptions (b) and (e), we can apply Lemma \ref{lemma_volume} to deduce that
\[
\mathrm{vol}_{\psi}(B_{R}^{\Sigma})=\mathcal{O}(R^2) \text{, as } R\rightarrow+\infty.
\]
In particular, by Proposition \ref{prop-fparab-volume}, $\Sigma$ is parabolic with respect to the weighted Laplacian $\Delta^{\Sigma}_{\psi}$.
Since, again by (e), $\varphi$ is a bounded function and, according to (d), $u\equiv0$ on $\partial\Omega$, an application of the Ahlfors maximum principle stated in Theorem \ref{thm_fAhlfors} gives
\[
\inf_{\Omega}\left( H_{\psi}u e^{-2\sup_{\Omega} \psi}+\langle Y, \mathcal{N} \rangle_{M}\right) =\inf_{\partial\Omega} \langle Y, \mathcal{N} \rangle_{M} \geq0.
\]
Combining the latter with the fact that $\langle Y, \mathcal{N} \rangle_{M} \leq e^{-\psi}$, we get the desired upper estimate on $u$. Finally, note that equation \eqref{eq-Deltafu} can also be written in the form
\[
\Delta^{\Sigma}_{3\psi} u=H_{\psi} e^{2\psi}\langle Y, \mathcal{N} \rangle_{M}.
\]
Since, according to Remark \ref{rmk-fvolumes}, $\Sigma$ is also parabolic with respect to the weighted Laplacian $\Delta^{\Sigma}_{3\psi}$, and $u$ is a bounded $3\psi$-superharmonic function, the desired lower estimate on $u$ follows again as an application of Theorem \ref{thm_fAhlfors}.
\end{proof}
\begin{remark} \label{RmkHLR}
{ \rm It is clear from the proof that if we consider, as in \cite{HLR-TAMS}, the function $\varphi=cH_\psi e^{-2\psi} u +\langle Y,\mathcal{N}\rangle_M$, for some $0<c\leq 1$, it is possible to extend the height estimate to the case where $\mathrm{Ric}_\psi^M\geq -G^2(x)$ in a neighborhood of $\Omega\times \mathbb{R}$, provided $(1-c)H^2(x)/n\geq G^2(x)$. The resulting estimate becomes
\[
0\leq ue^{-\psi}\leq \frac{e^{2(\sup_\Omega \psi-\inf_\Omega\psi)}}{c|H_\psi|}.
\]
}
\end{remark}
To conclude this section, we show how the boundedness assumption (e) can be dropped in the case of $2$-dimensional graphs.
\begin{proof}[Proof (of Corollary \ref{coro_fheightestimate})]
Let $\Sigma$ be a $2$-dimensional Killing graph over a domain $\Omega\subset P$ of constant weighted mean curvature $H_\psi<0$.
Recall that the {\it Perelman scalar curvature} $R^{M}_\psi$ of $M=P\times_{e^{-\psi}}\mathbb{R} $ is defined by
\[
R^{M}_\psi=R+2 \Delta \psi-|\nabla \psi|^2=\mathrm{Tr}(\mathrm{Ric}^{M}_\psi)+\Delta_\psi\psi.
\]
Assume that $R^{M}_{\psi} \geq 0$. Then, as a consequence of a result due to Espinar \cite[Theorem 4.2]{Esp}, for every $y=(x,u(x))\in\Sigma$ we have
\[
|u(x)e^{-\psi}|\leq\mathrm{dist}(y,\partial \Sigma)\leq C(|H_\psi|)< +\infty.
\]
On the other hand, a straightforward calculation shows that
\[
\Delta_\psi \psi=e^{2\psi}\mathrm{Ric}^{M}_\psi(Y,Y).
\]
Hence, $R^{M}_{\psi} \geq 0$ provided $\mathrm{Ric}^{M}_{\psi} \geq 0$. Putting everything together, it follows that condition (e) in Theorem \ref{th_fheightestimate} is automatically satisfied.
\end{proof}
\section{Height estimates in model manifolds}\label{section-examples}
In this section we construct rotationally symmetric examples of Killing graphs with constant weighted mean curvature and exhibit explicit estimates on the maximum of their weighted height in terms of the weighted mean curvature. When the base space is $\mathbb{R} ^{n}$ and the Killing vector field has constant length $1$ (hence the ambient space is $\mathbb{R} ^{n}\times\mathbb{R} $), these graphs are standard half-spheres and the estimate on their maximal height is precisely the reciprocal of the mean curvature; see \cite{He, Se}. We shall assume that the induced metric $\langle \cdot, \cdot \rangle_{P}$ on $P$ is rotationally invariant.
More precisely, we suppose that $P$ is a model space with pole at $o$ and Gaussian coordinates $(r,\theta)\in\left( 0,R\right) \times \mathbb{S}^{n-1}$, $R\in(0,+\infty],$ in terms of which $g_{P}$ is expressed by
\[
g_{P}=dr^{2}+\xi^{2}(r)d\theta^{2},
\]
for some $\xi\in C^{\infty}([0,R))$ satisfying
\[
\begin{cases}
& \xi>0 \text{ on }\left( 0,R\right) \\
& \xi^{(2k)}\left( 0\right) =0, \,\, k\in \mathbb{N} \\
& \xi^{\prime}\left( 0\right) =1
\end{cases}
\]
and where $d\theta^{2}$ denotes the usual metric in $\mathbb{S}^{n-1}$. We also assume that the norm of the Killing field does not depend on $\theta$, so that
\[
|Y|^{2}=e^{-2\psi(r)}.
\]
In this case, the ambient metric $\langle \cdot, \cdot \rangle_{M}$ of $M=P\times_{e^{-\psi}}\mathbb{R} $ is written in terms of cylindrical coordinates $(s,r,\theta)$ as
\[
\langle \cdot, \cdot \rangle_{M}=e^{-2\psi(r)}ds^{2}+dr^{2}+\xi^{2}(r)d\theta^{2},
\]
and $M$ is a doubly-warped product with respect to warping functions of the coordinate $r$. The smoothness of $\psi$ implies that the pole $o$ is a critical point of $e^{-\psi}$, namely
\[
\frac{d\, e^{-\psi}}{dr}\left( 0\right)= -e^{-\psi(0)}\,\frac{d \psi}{dr}(0)=0.
\]
A rotationally invariant Killing graph $\Sigma_{0}\subset M$ is defined by a function $u$ that depends only on the radial coordinate $r$. In this case (\ref{capillary-3}) becomes
\begin{equation} \label{H-ode}
\bigg(\frac{u'(r)}{W}\bigg)' + \frac{u'}{W} \big(\Delta_P r - \langle \nabla^P \psi, \nabla^P r\rangle\big)= nH,
\end{equation}
where
\[
W = \sqrt{e^{2\psi(r)}+u'^2(r)}
\]
and $'$ denotes differentiation with respect to $r$.
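As a quick sanity check on equation \eqref{H-ode} (this example is ours and assumes the unweighted Euclidean model $\psi\equiv 0$, $\xi(r)=r$, $n=2$), the hemisphere profile $u(r)=\sqrt{r_0^2-r^2}$ should solve the equation with $H=-1/r_0$; indeed $u'/W=-r/r_0$, and a numerical finite-difference evaluation of the left-hand side returns $nH$:

```python
# Numerical check (assumption: Euclidean model psi = 0, xi(r) = r, n = 2):
# the hemisphere profile u(r) = sqrt(r0^2 - r^2) satisfies
#   (u'/W)' + (u'/W)*(n-1)/r = n*H   with   H = -1/r0.
import math

n, r0 = 2, 1.0
H = -1.0 / r0

def phi(r):
    """phi(r) = u'(r)/W(r) with psi = 0, so W = sqrt(1 + u'^2)."""
    up = -r / math.sqrt(r0**2 - r**2)   # u'(r) for the hemisphere
    return up / math.sqrt(1.0 + up**2)  # simplifies to -r/r0

h = 1e-6
for r in (0.2, 0.5, 0.8):
    lhs = (phi(r + h) - phi(r - h)) / (2 * h) + phi(r) * (n - 1) / r
    assert abs(lhs - n * H) < 1e-6
```

The same computation is what the proof of Theorem \ref{H-rot-existence} performs in closed form for a general model metric.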
Note that the weighted Laplacian of $r$ is given by
\[
\Delta_P r- \langle \nabla^P \psi, \nabla^P r\rangle = -\psi'(r) + (n-1) \frac{\xi'(r)}{\xi(r)} = \frac{|Y|'(r)}{|Y|(r)} + (n-1) \frac{\xi'(r)}{\xi(r)},
\]
which is, up to a factor $1/n$, the mean curvature $H_{{\rm cyl}}(r)$ of the cylinder over the geodesic sphere of radius $r$ centered at $o$ and ruled by the flow lines of $Y$ over that sphere. We also have
\begin{equation} \label{H-Hpsi}
nH_{\psi} = nH -\frac{u'(r)}{W}\langle\nabla^P \psi, \nabla^P r\rangle = nH -\psi'(r)\frac{u'(r)}{W} = {\rm div}^P_{2\psi}\bigg(\frac{u'(r)}{W}\nabla^P r\bigg)\cdot
\end{equation}
It follows from (\ref{H-ode}) that both $H$ and $H_{\psi}$ depend only on $r$. Integrating both sides of (\ref{H-Hpsi}) against the weighted measure ${\rm d}P_{2\psi} = e^{-2\psi}\, {\rm d}P$, one obtains in this particular setting a first-order equation involving $u(r)$, namely
\begin{equation} \label{H-rot-flux}
\frac{u'(r)}{W} e^{-2\psi(r)}\xi^{n-1}(r) = \int_0^r nH_\psi e^{-2\psi(\tau)} \xi^{n-1}(\tau)\,{\rm d}\tau.
\end{equation}
Denoting the right hand side of (\ref{H-rot-flux}) by $I(r)$ and solving for $u'(r)$ yields
\begin{equation} \label{u-prima}
u'^2= e^{2\psi} \frac{I^2 }{e^{-4\psi}\xi^{2(n-1)}-I^2}.
\end{equation}
We assume that $u'(r)\le 0$ and denote
\begin{equation}
V_\psi(r) = \frac{1}{|\mathbb{S}^{n-1}|}{\rm vol}_\psi (B_r(o)) =\int_0^r e^{-2\psi(\tau)}\xi^{n-1}(\tau) \, {\rm d}\tau
\end{equation}
and
\[
A_\psi(r) = \frac{1}{|\mathbb{S}^{n-1}|}{\rm vol}_\psi (\partial B_r(o)) = e^{-2\psi(r)}\xi^{n-1}(r).
\]
Having fixed this geometric setting, we prove the existence of compact rotationally symmetric Killing graphs with constant weighted mean curvature.
\begin{theorem} \label{H-rot-existence}
Suppose that the ratio $\frac{A_\psi(r)}{V_\psi(r)}$ is non-increasing for $r\in (0,R)$.
Let $H_0$ be a constant with
\begin{equation} \label{isop-ineq}
n|H_0| = \frac{A_\psi(r_0)}{V_\psi(r_0)}
\end{equation}
for some $r_0\in (0,R)$. Then there exists a compact rotationally symmetric Killing graph $\Sigma_0\subset P\times_{e^{-\psi}}\mathbb{R}$ of a radial function $u(r)$, $r\in [0,r_0]$, given by
\begin{equation} \label{u-solution}
u(r) = \int^r_{r_0} e^{\psi(\tau)} \frac{I(\tau)}{\sqrt{e^{-4\psi(\tau)}\xi^{2(n-1)}(\tau)-I^2(\tau)}} \, {\rm d}\tau,
\end{equation}
with constant weighted mean curvature $H_\psi=H_0$ and boundary $\partial B_{r_0}(o)\subset P$. The weighted height function of this graph is bounded as follows:
\begin{equation} \label{u-estimate}
e^{-\psi(r)} u(r) \le e^{\sup_{B_r(o)} \psi-\inf_{B_r(o)}\psi}\int_{0}^{r_{0}}\frac{-nH_0}{\sqrt{\frac{A^2_\psi(\tau)}{V^2_\psi(\tau)}-n^2 H^2_0}}d\tau.
\end{equation}
\end{theorem}
\begin{proof}
Since $A_\psi(r)/V_\psi(r)$ is non-increasing and $I(r)=nH_0 V_\psi(r)$, it follows from (\ref{isop-ineq}) that
\[
e^{-4\psi(r)}\xi^{2(n-1)}(r)-I^2(r) = A_\psi^2(r) - n^2 H^2_0 V_\psi^2(r)\ge 0
\]
for $r\in (0,r_0]$, which guarantees that the expression
\begin{equation} \label{u-rot}
u(r) = \int_{r_0}^{r} e^{\psi(\tau)} \frac{I(\tau)}{\sqrt{e^{-4\psi(\tau)}\xi^{2(n-1)}(\tau)-I^2(\tau)}} \, {\rm d}\tau
\end{equation}
is well-defined for $r\in [0,r_0]$ with $u'(r)\le 0$.
For further reference, we remark that an application of L'H\^opital's rule shows that
\begin{equation} \label{h-cyl}
\lim_{r\to 0} \frac{A_\psi(r)}{V_\psi(r)} =\lim_{r\to 0} \bigg(2\frac{|Y|'(r)}{|Y|(r)} + (n-1) \frac{\xi'(r)}{\xi(r)}\bigg)\cdot
\end{equation}
In order to get a more detailed analysis at $r=r_0$, we consider a parametrization of $\Sigma_0$ in terms of cylindrical coordinates as
\[
(t,\theta) \mapsto (r(t), s(t), \theta),
\]
where $t$ is the arc-length parameter defined by
\[
\dot r^2 (t)+ e^{-2\psi(r(t))} \dot s^2(t) =1
\]
and $\cdot$ denotes differentiation with respect to $t$. Since
\[
u'(r(t)) = \frac{\dot s(t)}{\dot r(t)},
\]
we impose $\dot s\ge 0$ and $\dot r\le 0$. Hence $W=-e^{\psi}/\dot r$ whenever $\dot r(t)\neq 0$, and (\ref{H-rot-flux}) is written as
\begin{equation} \label{flux-param}
\dot s e^{-3\psi(r)}\xi^{n-1}(r) = -\int_0^r nH_\psi e^{-2\psi(\tau)} \xi^{n-1}(\tau)\,{\rm d}\tau.
\end{equation}
Since $\xi(0)=0$, $\xi'(0)=1$ and $\psi'(0)=0$, applying L'H\^{o}pital's rule as above shows that
\[
\lim_{r\to 0}\frac{\int_0^r nH_\psi e^{-2\psi(\tau)} \xi^{n-1}(\tau)\,{\rm d}\tau}{e^{-3\psi(r)}\xi^{n-1}(r)} = \lim_{r\to 0}\frac{nH_\psi e^{\psi(r)}}{-3\psi'(r) + (n-1)\frac{\xi'(r)}{\xi(r)}} =0,
\]
and we conclude that $e^{-\psi(r)}\dot s\to 0$ as $r\to 0$. Therefore $\dot s \to 0$ as $r\to 0$, which implies that $\Sigma_0$ is smooth at its intersection with the vertical axis of revolution. Now, we have from (\ref{flux-param}) and (\ref{isop-ineq}) that $\dot r =0$ at $r=r_0$, since $e^{-\psi(r_0)}\dot s(r_0) =1$. Finally, let $\phi$ be the angle between a meridian $\theta=\mathrm{const}$ in $\Sigma_0$ and the radial vector field $\partial_r$. We have
\[
\frac{u'(r)}{W} = -\frac{\dot s}{e^{\psi(r)}} = -\sin\phi.
\]
Hence
\begin{eqnarray*}
\bigg(\frac{u'(r)}{W} \bigg)' = -\cos\phi \,\dot \phi (t)\frac{1}{\dot r(t)} = -\dot \phi(t).
\end{eqnarray*}
On the other hand,
\begin{eqnarray*}
\bigg(\frac{u'(r)}{W} \bigg)' &= & nH - \frac{u'(r)}{W} \bigg(\frac{|Y|'(r)}{|Y|(r)}+(n-1)\frac{\xi'(r)}{\xi(r)}\bigg)\\
& = & nH_\psi + \sin \phi \bigg(2\frac{|Y|'(r)}{|Y|(r)}+(n-1)\frac{\xi'(r)}{\xi(r)}\bigg).
\end{eqnarray*}
In sum, $\Sigma_0$ is parameterized by the solution of the first order system
\begin{equation}
\begin{cases}
\dot r & = \cos \phi \\
e^{-\psi (r)}\dot s & =\sin \phi\\
\dot \phi & = -nH_0 - \sin \phi \Big(2\frac{|Y|'(r)}{|Y|(r)}+(n-1)\frac{\xi'(r)}{\xi(r)}\Big),
\end{cases}
\end{equation}
with initial conditions $r(0) = r_0$, $s(0)=0$, $\phi(0)= \frac{\pi}{2}$. The height estimate follows directly from (\ref{u-solution}). This finishes the proof.
\end{proof}
It is worth pointing out that, in the classical situation of the Euclidean space, where $e^{-\psi}=|Y|\equiv1$ and $\xi\left( r\right) =r$, the Killing graph defined by $u(r)$ reduces to the standard sphere and (\ref{u-estimate}) gives rise to the expected sharp bound
\[
u(r) \leq \frac{1}{|H|}.
\]
Actually, a similar conclusion can be achieved if we choose $\psi\left( r\right) $ in such a way that
\[
e^{-2\psi\left( r\right)} \xi^{n-1}\left( r\right) =r^{n-1}.
\]
Note that this choice is possible and compatible with the requirement $d\psi/dr\left( 0\right) =0$ because $\xi\left( r\right) $ is odd at the origin. This choice corresponds to the case when
\[
\frac{A_\psi(r)}{V_\psi(r)} = \frac{n}{r},
\]
as in the Euclidean space. We have thus obtained the following height estimate.
\begin{corollary} \label{th_symm-example}
Let $P=[0,R)\times_{\xi}\mathbb{S}^{n-1}$ be an $n$-dimensional model manifold with warping function $\xi$ and let $\psi:\left[0,R\right) \rightarrow\mathbb{R}$ be the smooth, even function defined by
\[
\psi(r) =c\cdot
\begin{cases}
\frac{(n-1)}{2}\log\frac{\xi (r)}{r} & r\neq0\\
0 & r=0,
\end{cases}
\]
where $c>0$ is a given constant.
Fix $0<r_{0}<R$, let $H_{0}=-1/r_{0}$ and define
\[
u(r) =\int^{r_0}_{r}\frac{-H_{0}\tau}{\sqrt{1-H_0^2 \tau^{2}}}e^{\psi( \tau)} d\tau.
\]
Then, in the ambient manifold $M=P\times_{e^{-\psi}}\mathbb{R} $, the Killing graph of $u$ over $\Omega=B_{r_{0}}^{P}( o) \subset P$ has constant weighted mean curvature $H_\psi=H_0$ with respect to the upward pointing Gauss map. Moreover,
\[
0\leq e^{-\psi(r)}u(r) \leq\frac {\max_{\lbrack0,r_{0}]}e^{-\psi}}{\min_{[0,r_{0}]}e^{-\psi}}\cdot \frac{1}{|H_0|}=e^{\sup_\Omega \psi-\inf_\Omega\psi}\frac{1}{|H_0|}.
\]
\end{corollary}
The counterpart of Theorem \ref{H-rot-existence} and Corollary \ref{th_symm-example} in the case of constant mean curvature can be obtained along the same lines by integrating both sides of (\ref{capillary-3}) instead of (\ref{H-Hpsi}). Denoting
\[
V(r) = \frac{1}{|\mathbb{S}^{n-1}|}{\rm vol} (B_r(o)) =\int_0^r e^{-\psi(\tau)}\xi^{n-1}(\tau) \, {\rm d}\tau
\]
and
\[
A(r) = \frac{1}{|\mathbb{S}^{n-1}|}{\rm vol} (\partial B_r(o)) = e^{-\psi(r)}\xi^{n-1}(r),
\]
one obtains
\begin{theorem} \label{Hw-rot-existence}
Suppose that the ratio $\frac{A(r)}{V(r)}$ is non-increasing for $r\in (0,R)$. Let $H_0$ be a negative constant with
\begin{equation} \label{isop-ineq-cmc}
n|H_{0}| = \frac{A(r_0)}{V(r_0)}
\end{equation}
for some $r_0\in (0,R)$. Then there exists a compact rotationally symmetric Killing graph $\Sigma_0\subset P\times_{e^{-\psi}} \mathbb{R}$ of a radial function $u(r)$, $r\in [0,r_0]$, given by
\begin{equation} \label{u-solution-cmc}
u(r) = \int^r_{r_0} \frac{nH_{0}V(\tau)}{\sqrt{A^2(\tau)-n^2 H_{0}^2 V^2(\tau)}} \, {\rm d}\tau,
\end{equation}
with constant mean curvature $H=H_{0}$ and boundary $\partial B_{r_0}(o)\subset P$.
The height function of this graph is bounded as follows:
\begin{equation} \label{u-estimate-cmc}
e^{-\psi(r)}u(r) \le e^{\sup_{B_r(o)} \psi-\inf_{B_r(o)}\psi} \int_{0}^{r_{0}}\frac{-nH_{0}}{\sqrt{\frac{A^2(\tau)}{V^2(\tau)}-n^2 H_{0}^2}}d\tau.
\end{equation}
\end{theorem}
The analog of Corollary \ref{th_symm-example} in the case of constant mean curvature is
\begin{corollary} \label{th_symm-example-w}
Let $P=[0,R)\times_{\xi}\mathbb{S}^{n-1}$ be an $n$-dimensional model manifold with warping function $\xi$ and let $\psi:\left[0,R\right) \rightarrow\mathbb{R}$ be the smooth, even function defined by
\[
\psi(r) =c\cdot
\begin{cases}
(n-1)\log\frac{\xi (r)}{r} & r\neq0\\
0 & r=0,
\end{cases}
\]
where $c>0$ is a given constant. Fix $0<r_{0}<R$, let $H_{0}=-1/r_{0}$ and define
\[
u(r) =\int^{r}_{r_0}\frac{H_{0}\tau}{\sqrt{1-H_{0}^2 \tau^{2}}}d\tau.
\]
Then, in the ambient manifold $M=P\times_{e^{-\psi}}\mathbb{R} $, the Killing graph of $u$ over $\Omega=B_{r_{0}}^{P}( o) \subset P$ has constant mean curvature $H_{0}$ with respect to the upward pointing Gauss map. Moreover,
\[
0\leq e^{-\psi(r)} u(r) \leq \frac{\max_{\lbrack0,r_{0}]}e^{-\psi}}{\min_{[0,r_{0}]}e^{-\psi}}\cdot \frac{1}{|H_{0}|} =e^{\sup_\Omega \psi-\inf_\Omega\psi}\frac{1}{|H_0|}.
\]
\end{corollary}
\begin{remark}
\rm{Up to the factor $2$ in the exponential, this is the estimate obtained in Theorem \ref{th_fheightestimate} above. We suspect that the high rotational symmetry considered in the example prevents achieving the maximum height predicted by the theorem. On the other hand, we conjecture that the rotationally symmetric graphs can be used as barriers to obtain sharp estimates for general warped products $P\times_{e^{-\psi}}\mathbb{R}$ when the radial sectional curvatures of $P$ are bounded from above by some radial function.}
\end{remark}
\begin{thebibliography}{9999}
\bibitem{AEG-Illinois} J.A.
Aledo, J.M. Espinar, and J.A. G{\'a}lvez, \emph{Height estimates for surfaces with positive constant mean curvature in {$\Bbb M^2\times\Bbb R$}}, Illinois J. Math. \textbf{52} (2008), no.~1, 203--211.
\bibitem{AD-PEMS} L.~J. Al{\'{\i}}as and M.~Dajczer, \emph{Constant mean curvature hypersurfaces in warped product spaces}, Proc. Edinb. Math. Soc. (2) \textbf{50} (2007), no.~3, 511--526.
\bibitem{Ch} S.~S. Chern, \emph{On the curvatures of a piece of hypersurface in {E}uclidean space}, Abh. Math. Sem. Univ. Hamburg \textbf{29} (1965), 77--91.
\bibitem{CR-Asian} J.~B. Casteras and J.~B. Ripoll, \emph{On asymptotic Plateau's problem for CMC hypersurfaces on rank $1$ symmetric spaces of noncompact type}, Asian J. Math. \textbf{20} (2016), no.~4, 695--707.
\bibitem{DL-Poincare} M.~Dajczer and J.~H. de~Lira, \emph{Killing graphs with prescribed mean curvature and {R}iemannian submersions}, Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire \textbf{26} (2009), no.~3, 763--775.
\bibitem{DHL-CalcVar} M.~Dajczer, P.~A. Hinojosa and J.~H. de~Lira, \emph{Killing graphs with prescribed mean curvature}, Calc. Var. Partial Differential Equations \textbf{33} (2008), no.~2, 231--248.
\bibitem{DLR-JAM} M.~Dajczer, J.~H. de~Lira, and J.~B. Ripoll, \emph{An interior gradient estimate for the mean curvature equation of Killing graphs and applications}, J. Anal. Math. \textbf{129} (2016), 91--103.
\bibitem{Esp} J.~M. Espinar, \emph{Gradient {S}chr\"odinger operators, manifolds with density and applications}, arXiv:1209.6162v7.
\bibitem{Fl} H. Flanders, \emph{Remark on mean curvature}, J. London Math. Soc. \textbf{41} (1966), 364--366.
\bibitem{Fornari-Ripoll-Illinois} S.~Fornari and J.~B. Ripoll, \emph{Killing fields, mean curvature, translation maps}, Illinois J. Math. \textbf{48} (2004), no.~4, 1385--1403.
\bibitem{Gr1} A.~A. Grigor'yan, \emph{The existence of positive fundamental solutions of the {L}aplace equation on {R}iemannian manifolds}, Mat. Sb. (N.S.)
\textbf{128(170)} (1985), no.~3, 354--363, 446.
\bibitem{Gro} M.~Gromov, \emph{Isoperimetry of waists and concentration of maps}, Geom. Funct. Anal. \textbf{13} (2003), no.~1, 178--215.
\bibitem{He2} E. Heinz, \emph{\"Uber Fl\"achen mit eineindeutiger Projektion auf eine Ebene, deren Kr\"ummungen durch Ungleichungen eingeschr\"ankt sind}, Math. Ann. \textbf{129} (1955), 451--454.
\bibitem{He} E. Heinz, \emph{On the nonexistence of a surface of constant mean curvature with finite area and prescribed rectifiable boundary}, Arch. Rational Mech. Anal. \textbf{35} (1969), 249--252.
\bibitem{HLR-TAMS} D.~Hoffman, J.H.S. de~Lira, and H.~Rosenberg, \emph{Constant mean curvature surfaces in {$M^2\times\bold R$}}, Trans. Amer. Math. Soc. \textbf{358} (2006), no.~2, 491--507.
\bibitem{IPS-Crelle} D.~Impera, S.~Pigola, and A.~G. Setti, \emph{Potential theory on manifolds with boundary and applications to controlled mean curvature graphs}, to appear in J. Reine Angew. Math. DOI: 10.1515/crelle-2014-0137.
\bibitem{LW} P.~Li and J.~Wang, \emph{Finiteness of disjoint minimal graphs}, Math. Res. Lett. \textbf{8} (2001), no.~5-6, 771--777.
\bibitem{On} B.~O'Neill, \emph{Semi-{R}iemannian geometry}, Pure and Applied Mathematics, vol. 103, Academic Press Inc., New York, 1983, With applications to relativity.
\bibitem{PPS-L1Liouville} L.~F. Pessoa, S.~Pigola, and A.~G. Setti, \emph{Dirichlet parabolicity and $L^1$-{L}iouville property under localized geometric conditions}, J. Funct. Anal. \textbf{273} (2017), no.~2, 652--693.
\bibitem{Ri-preprint} J.~B. Ripoll, \emph{On the asymptotic Plateau's problem for CMC hypersurfaces in hyperbolic space}, Preprint (2013), available at \url{https://arxiv.org/pdf/1309.3644v1.pdf}.
\bibitem{RoRo-AJM} A.~Ros and H.~Rosenberg, \emph{Properly embedded surfaces with constant mean curvature}, Amer. J. Math. \textbf{132} (2010), no.~6, 1429--1443.
\bibitem{Salavessa-PAMS} I.~M. Salavessa, \emph{Graphs with parallel mean curvature}, Proc. Amer. Math. Soc.
\textbf{107} (1989), 449--458.
\bibitem{Se} J. Serrin, \emph{On surfaces of constant mean curvature which span a given space curve}, Math. Z. \textbf{112} (1969), 77--88.
\bibitem{S-PAMQ} J.~Spruck, \emph{Interior gradient estimates and existence theorems for constant mean curvature graphs in {$M^n\times\bold R$}}, Pure Appl. Math. Q. \textbf{3} (2007), no.~3, Special Issue: In honor of Leon Simon. Part 2, 785--800.
\end{thebibliography}
\end{document}
\begin{document}
\begin{abstract}
We consider layer potentials associated to elliptic operators $Lu=-{\rm div}\big(A \nabla u\big)$ acting in the upper half-space $\mathbb{R}^{n+1}_{+}$ for $n\geq 2$, or more generally, in a Lipschitz graph domain, where the coefficient matrix $A$ is $L^\infty$ and $t$-independent, and solutions of $Lu=0$ satisfy interior estimates of De Giorgi/Nash/Moser type. A ``Calder\'on-Zygmund'' theory is developed for the boundedness of layer potentials, whereby sharp $L^p$ and endpoint space bounds are deduced from $L^2$ bounds. Appropriate versions of the classical ``jump-relation'' formulae are also derived. The method of layer potentials is then used to establish well-posedness of boundary value problems for $L$ with data in $L^p$ and endpoint spaces.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
\setcounter{equation}{0}
Consider a second order, divergence form elliptic operator
\begin{equation}\label{L-def-1}
L=-{\rm div}\,A(x)\nabla\quad\mbox{in}\,\,\, \mathbb{R}^{n+1}:=\{X=(x,t):\,x\in\mathbb{R}^n,\,\,t\in\mathbb{R}\},
\end{equation}
where $A$ is an $(n+1)\times(n+1)$ matrix of $L^\infty$, $t$-independent, complex coefficients, satisfying the uniform ellipticity condition
\begin{equation}\label{eq1.1*}
\Lambda^{-1}|\xi|^{2}\leq\operatorname{Re}\,\langle A(x)\,\xi,\xi\rangle := \operatorname{Re}\sum_{i,j=1}^{n+1}A_{ij}(x)\,\xi_{j}\,\overline{\xi}_{i}, \quad \Vert A\Vert_{\infty}\leq\Lambda,
\end{equation}
for some $\Lambda\in(0,\infty)$, for all $\xi\in\mathbb{C}^{n+1}$, and for almost every $x\in\mathbb{R}^{n}$. The operator $L$ is interpreted in the usual weak sense via the accretive sesquilinear form associated with~\eqref{eq1.1*}.
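For a concrete feel for the ellipticity condition \eqref{eq1.1*}, the following small numerical illustration (our example, not from the paper) exhibits a constant, complex, non-self-adjoint matrix that is accretive with $\Lambda^{-1}=1/2$; the bound follows from $|{\rm Im}(\xi_1\bar\xi_0)|\le\tfrac12(|\xi_0|^2+|\xi_1|^2)$, and random sampling confirms it.

```python
# Illustrative check (hypothetical example): a constant complex coefficient
# matrix satisfying Re <A xi, xi> >= (1/2) |xi|^2 for all xi in C^2.
import random

A = [[2.0, 1j],
     [0.0, 1.0]]  # t-independent by fiat: A(x, t) = A

def re_form(xi):
    """Re sum_{i,j} A_ij * xi_j * conj(xi_i)."""
    return sum(A[i][j] * xi[j] * xi[i].conjugate()
               for i in range(2) for j in range(2)).real

random.seed(0)
for _ in range(1000):
    xi = [complex(random.uniform(-1, 1), random.uniform(-1, 1))
          for _ in range(2)]
    norm2 = sum(abs(z) ** 2 for z in xi)
    assert re_form(xi) >= 0.5 * norm2 - 1e-12
```

Note that $A$ here is neither real nor symmetric, which is exactly the level of generality the standard assumptions below are designed to accommodate.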
In particular, we say that $u$ is a \textbf{``solution''} of $Lu=0$, or simply $Lu=0$, in a domain $\Omega\subset \mathbb{R}^{n+1}$, if $u\in L^2_{1,\,\mathrm{loc}}(\Omega)$ and $\int_{\Omega} A\nabla u \cdot \nabla \Phi =0$ for all $\Phi\in C^\infty_0(\Omega)$. Throughout the paper, we shall impose the following ``{\bf standard assumptions}'':
\begin{enumerate}
\item[(1)] The operator $L =-\operatorname{div} A\nabla$ is of the type defined in \eqref{L-def-1} and \eqref{eq1.1*} above, with $t$-independent coefficient matrix $A(x,t)=A(x)$.
\smallskip
\item[(2)] Solutions of $Lu=0$ satisfy the interior De Giorgi/Nash/Moser (DG/N/M) type estimates defined in \eqref{eq2.DGN} and \eqref{eq2.M} below.
\end{enumerate}
The paper has two principal aims. First, we prove sharp $L^p$ and endpoint space bounds for layer potentials associated to any operator $L$ that, along with its Hermitian adjoint $L^*$, satisfies the standard assumptions. These results are of ``Calder\'on-Zygmund'' type, in the sense that the $L^p$ and endpoint space bounds are deduced from $L^2$ bounds. Second, we use the layer potential method to obtain well-posedness results for boundary value problems for certain such $L$. The precise definitions of the layer potentials, and a brief historical summary of previous work (including the known $L^2$ bounds), are given below. Let us now discuss certain preliminary matters needed to state our main theorems. For the sake of notational convenience, we will often use capital letters to denote points in $\mathbb{R}^{n+1}$, e.g., $X=(x,t),\, Y=(y,s)$. We let $B(X,r):= \{Y\in\mathbb{R}^{n+1}:\,|X-Y|<r\}$, and $\Delta(x,r):= \{y\in\mathbb{R}^n:\,|x-y|<r\}$ denote, respectively, balls of radius $r$ in $\mathbb{R}^{n+1}$ and in $\mathbb{R}^n$. We use the letter $Q$ to denote a generic cube in $\mathbb{R}^n$, with sides parallel to the co-ordinate axes, and we let $\ell(Q)$ denote its side length.
We adopt the convention whereby $C$ denotes a finite positive constant that may change from one line to the next but ultimately depends only on the relevant preceding hypotheses. We will often write $C_p$ to emphasize that such a constant depends on a specific parameter $p$. We may also write $a\lesssim b$ to denote $a\leq Cb$, and $a\approx b$ to denote $a\lesssim b \lesssim a$, for quantities $a,b\in\mathbb{R}$. \noindent{\bf De Giorgi/Nash/Moser (DG/N/M) estimates}. We say that a locally square integrable function $u$ is ``locally H\"{o}lder continuous", or equivalently, satisfies ``De Giorgi/Nash (DG/N) estimates", in a domain $\Omega\subset\mathbb{R}^{n+1}$, if there is a positive constant $C_0<\infty$, and an exponent $\alpha\in(0,1]$, such that for any ball $B=B(X,R)$ whose concentric double $2B := B(X,2R)$ is contained in $\Omega$, we have \begin{equation} |u(Y)-u(Z)|\leq C_0\left(\frac{|Y-Z|}{R}\right)^{\alpha}\left(\fint_{2B}|u|^{2}\right)^{1/2}, \label{eq2.DGN}\end{equation} whenever $Y,Z\in B$. Observe that any function $u$ satisfying \eqref{eq2.DGN} also satisfies Moser's ``local boundedness" estimate (see \cite{Mo}) \begin{equation} \sup_{Y\in B}|u(Y)|\leq C_0 \left(\fint_{2B}|u|^{2}\right)^{1/2}.\label{eq2.M}\end{equation} Moreover, as is well known, \eqref{eq2.M} self-improves to \begin{equation}\label{eq2.Mr} \sup_{Y\in B}|u(Y)|\leq C_r \left(\fint_{2B}|u|^{r}\right)^{1/r},\qquad \forall \,r\in(0,\infty). \end{equation} \begin{remark}\label{firstrem} It is well known (see \cite{DeG,Mo,Na}) that when the coefficient matrix $A$ is real, solutions of $Lu=0$ satisfy the DG/N/M estimates \eqref{eq2.DGN} and \eqref{eq2.M}, and the relevant constants depend quantitatively on ellipticity and dimension only (for this result, the matrix $A$ need not be $t$-independent).
Moreover, estimate \eqref{eq2.DGN}, which implies \eqref{eq2.M}, is stable under small complex perturbations of the coefficients in the $L^\infty$ norm (see, e.g., \cite[Chapter~VI]{Gi} or \cite{A}). Therefore, the standard assumption (2) above holds automatically for small complex perturbations of real symmetric elliptic coefficients. We also note that in the $t$-independent setting considered here, the DG/N/M estimates always hold when the ambient dimension $n+1$ is equal to 3 (see \cite[Section 11]{AAAHK}). \end{remark} We shall refer to the following quantities collectively as the ``{\bf standard constants}": the dimension $n$ in \eqref{L-def-1}, the ellipticity parameter $\Lambda$ in \eqref{eq1.1*}, and the constants $C_0$ and $\alpha$ in the DG/N/M estimates \eqref{eq2.DGN} and \eqref{eq2.M}. In the presence of DG/N/M estimates (for $L$ and $L^*$), by \cite{HK}, both $L$ and $L^*$ have fundamental solutions $E: \{(X,Y)\in\mathbb{R}^{n+1}\times \mathbb{R}^{n+1} : X\neq Y\} \to \mathbb{C}$ and $E^*(X,Y) := \overline {E(Y,X)}$, respectively, satisfying $E(X,\cdot),\,E(\cdot,X)\in L^2_{1,\,\mathrm{loc}}(\mathbb{R}^{n+1}\setminus\{X\})$ and \begin{equation}\label{eq7.4a} L_{x,t} \,E (x,t,y,s) = \delta_{(y,s)},\quad L^*_{y,s}\, E^*(y,s,x,t)= L^*_{y,s} \,\overline {E (x,t,y,s)} = \delta_{(x,t)}, \end{equation} where $\delta_X$ denotes the Dirac mass at the point $X$. In particular, this means that \begin{equation}\label{fundsolprop} \int_{\mathbb{R}^{n+1}} A(x) \nabla_{x,t} E(x,t,y,s) \cdot \nabla \Phi(x,t)\, dxdt= \Phi(y,s), \qquad (y,s)\in\mathbb{R}^{n+1}\,, \end{equation} for all $\Phi\in C^\infty_0(\mathbb{R}^{n+1})$. Moreover, by the $t$-independence of our coefficients, \begin{equation*} E(x,t,y,s)=E(x,t-s,y,0)\,.
\end{equation*} As is customary, we then define the single and double layer potential operators, associated to $L$, in the upper and lower half-spaces, by \begin{equation}\label{eq2.23}\begin{split} \mathcal{S}^{\pm} f(x,t)&:=\int\limits_{\mathbb{R}^n}E(x,t,y,0)\,f(y)\,dy, \qquad (x,t)\in\mathbb{R}^{n+1}_{\pm}\,, \\[4pt] \mathcal{D}^{\pm} f(x,t) & :=\int_{\mathbb{R}^{n}}\overline{\Big(\partial_{\nu^*} E^* (\cdot,\cdot,x,t)\Big)}(y,0)\,f(y)\,dy,\qquad (x,t)\in\mathbb{R}^{n+1}_{\pm}\,, \end{split} \end{equation} where $\partial_{\nu^*}$ denotes the outer co-normal derivative (with respect to $\mathbb{R}^{n+1}_+$) associated to the adjoint matrix $A^*$, i.e., \begin{equation}\label{eq2.23*} \Big(\partial_{\nu^*} E^*(\cdot,\cdot,x,t)\Big)(y,0):= -e_{n+1}\cdot A^*(y)\,\Big(\nabla_{y,s} E^*(y,s,x,t)\Big)\big|_{s=0} \,. \end{equation} Here, $e_{n+1}:= (0,\dots,0,1)$ is the standard unit basis vector in the $t$ direction. Similarly, using the notational convention that $t=x_{n+1}$, we define the outer co-normal derivative with respect to $A$ by $$\partial_{\nu} u:= -e_{n+1}\cdot A \nabla u=-\sum_{j=1}^{n+1}A_{n+1,j}\,\partial_{x_j} u\,.$$ When we are working in a particular half-space (usually the upper one, by convention), for simplicity of notation, we shall often drop the superscript and write simply, e.g., $\mathcal{S},\, \mathcal{D}$ in lieu of $\mathcal{S}^+,\,\mathcal{D}^+$. At times, it may be necessary to identify the operator $L$ to which the layer potentials are associated (when this is not clear from context), in which case we shall write $\mathcal{S}_L,\,\mathcal{D}_L$, and so on.
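\begin{remark} To fix ideas, we recall the model constant coefficient example: when $A=I$, so that $L=-\Delta$ in $\mathbb{R}^{n+1}$, $n\geq 2$, the fundamental solution is the Newtonian potential \[ E(X,Y)=\frac{1}{(n-1)\,\omega_n}\,|X-Y|^{1-n}\,, \] where $\omega_n$ denotes the surface measure of the unit sphere in $\mathbb{R}^{n+1}$, and the operators $\mathcal{S}^{\pm}$ and $\mathcal{D}^{\pm}$ defined in \eqref{eq2.23} reduce to the classical harmonic single and double layer potentials for the half-space. \end{remark}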
We note at this point that for each fixed $t>0$ (or for that matter, $t<0$), the operator $f\mapsto \mathcal{S}f(\cdot,t)$ is well-defined on $L^p(\mathbb{R}^n)$, $1\leq p<\infty$, by virtue of the estimate \[ \int_{\mathbb{R}^n}|E(x,t,y,0)| \, |f(y)| \, dy \,\lesssim \, I_1(|f|)(x)\,, \] which follows from~\eqref{G-est1} below, where $I_1$ denotes the classical Riesz potential. Also, the operator $f\mapsto \mathcal{D}f(\cdot,t)$ is well-defined on $L^p(\mathbb{R}^n)$, $2-\varepsilon<p<\infty$, by virtue of the estimate \[ \int_{\mathbb{R}^n}\big|\Big(\nabla E(x,t,\cdot,\cdot)\Big)(y,0)\big|^q \, dy \,\lesssim \, t^{n(1-q)}\,,\quad 1<q<2+\varepsilon\,, \] which follows from \cite[Lemmata 2.5 and 2.8]{AAAHK} (see also \cite[Proposition 2.1]{AAAHK}, which guarantees that $\nabla E$ makes sense on horizontal slices in the first place). We denote the boundary trace of the single layer potential by \begin{equation}\label{eq2.24} S\!f(x):=\int\limits_{\mathbb{R}^n}E(x,0,y,0)\,f(y)\,dy,\qquad x\in\mathbb{R}^n, \end{equation} which is well-defined on $L^p(\mathbb{R}^n)$, $1\leq p<\infty$, by~\eqref{G-est1} below. We shall also define, in Section~\ref{s2} below, boundary singular integrals \begin{equation} \begin{split}\label{eq1.6}Kf(x) & :=``\mathrm{p.v.}"\int_{\mathbb{R}^{n}}\overline{\Big(\partial_{\nu^*} E^* (\cdot,\cdot,x,0)\Big)}(y,0)\,f(y)\,dy\,,\\[4pt] \widetilde{K}f(x) & := ``\mathrm{p.v.}"\int_{\mathbb{R}^{n}}\Big(\partial_\nu E(x,0,\cdot,\cdot)\Big)(y,0)\,f(y)\,dy\,,\\[4pt] \mathbf{T} f(x) & := ``\mathrm{p.v.}\,"\int_{\mathbb{R}^n} \Big(\nabla E\Big)(x,0,y,0)f(y)\,dy\,, \end{split} \end{equation} where the ``principal value" is purely formal, since we do not actually establish convergence of a principal value. We shall give precise definitions and derive the jump relations for the layer potentials in Section \ref{s2}.
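\begin{remark} Let us spell out the simple computation behind the first of these estimates. By \eqref{G-est1} below, $|E(x,t,y,0)|\leq C(|t|+|x-y|)^{1-n}\leq C\,|x-y|^{1-n}$, whence \[ \int_{\mathbb{R}^n}|E(x,t,y,0)|\,|f(y)|\,dy\,\leq\, C\int_{\mathbb{R}^n}\frac{|f(y)|}{|x-y|^{n-1}}\,dy\,\approx\, I_1(|f|)(x)\,, \] where $I_1 g(x):= c_n\int_{\mathbb{R}^n}|x-y|^{1-n}\,g(y)\,dy$ denotes the Riesz potential of order one. We recall that, by the Hardy--Littlewood--Sobolev theorem, $I_1$ maps $L^p(\mathbb{R}^n)$ boundedly into $L^{np/(n-p)}(\mathbb{R}^n)$ whenever $1<p<n$. \end{remark}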
Classically, $\widetilde{K}$ is often denoted $K^{\ast}$, but we avoid this notation here, as $\widetilde{K}$ need not be the adjoint of $K$ unless $L$ is self-adjoint. In fact, using the notation $\mathop{\rm adj}\nolimits(T)$ to denote the Hermitian adjoint of an operator $T$ acting in $\mathbb{R}^n$, we have that $\widetilde{K}_L = \mathop{\rm adj}\nolimits (K_{L^*})$. Let us now recall the definitions of the non-tangential maximal operators $N_{\ast},\widetilde{N}_{\ast}$, and of the notion of ``non-tangential convergence". Given $x_0\in\mathbb{R}^{n}$, define the cone $\Gamma(x_0):=\{(x,t)\in\mathbb{R}_{+}^{n+1}:|x_0-x|<t\}$. Then for measurable functions $F:\mathbb{R}_{+}^{n+1}\rightarrow\mathbb{C}$, define \begin{equation*} \begin{split} N_{\ast} F(x_0)& :=\sup_{(x,t)\in\Gamma(x_0)}|F(x,t)|,\\[4pt] \widetilde{N}_{\ast} F(x_0) & :=\sup_{(x,t)\in \Gamma(x_0)}\left(\fint\!\!\fint_{|(x,t)-(y,s)|<t/4} |F(y,s)|^{2}dyds\right)^{1/2}\,, \end{split}\end{equation*} where $\fint_E f := |E|^{-1} \int_E f$ denotes the mean value. We shall say that $F$ ``converges non-tangentially" to a function $f:\mathbb{R}^n\rightarrow\mathbb{C}$, and write $F\overset{{\rm n.t.}}{\longrightarrow} f$, if for a.e. $x\in \mathbb{R}^n$, $$\lim\limits_{\Gamma(x)\ni(y,t)\to(x,0)}F(y,t)=f(x)\,.$$ These definitions have obvious analogues in the lower half-space $\mathbb{R}_{-}^{n+1}$, which we distinguish by writing $\Gamma^{\pm}, N^{\pm}_{\ast},\widetilde{N}^{\pm}_{\ast}$; e.g., the cone $\Gamma^-(x_0):=\{(x,t)\in\mathbb{R}_{-}^{n+1}:|x_0-x|<-t\}$.
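\begin{remark} We observe, for future reference, that for solutions the modified non-tangential maximal function controls the pointwise one. Indeed, if $Lu=0$ in $\mathbb{R}^{n+1}_+$ and $(x,t)\in\Gamma(x_0)$, then applying Moser's estimate \eqref{eq2.M} with $B=B((x,t),t/8)$, so that $2B=B((x,t),t/4)\subset\mathbb{R}^{n+1}_+$, we obtain \[ |u(x,t)|\,\leq\, C_0\left(\fint\!\!\fint_{|(x,t)-(y,s)|<t/4}|u(y,s)|^{2}\,dyds\right)^{1/2}\leq\, C_0\,\widetilde{N}_{\ast}u(x_0)\,, \] whence $N_{\ast}u\leq C_0\,\widetilde{N}_{\ast}u$ pointwise on $\mathbb{R}^n$. \end{remark}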
As usual, for $1<p<\infty$, let $\dot{L}_1^p(\mathbb{R}^n)$ denote the homogeneous Sobolev space of order one, which is defined as the completion of $C_0^\infty(\mathbb{R}^n)$, with respect to the norm $\|f\|_{\dot{L}^p_1}:=\|\nabla f\|_p$, realized as a subspace of the space $L^1_{\mathrm{loc}}(\mathbb{R}^n)/\mathbb{C}$ of locally integrable functions modulo constant functions. As usual, for $0<p\leq1$, let $H^p_{\mathrm{at}}(\mathbb{R}^n)$ denote the classical atomic Hardy space, which is a subspace of the space $\mathcal{S}'(\mathbb{R}^n)$ of tempered distributions (see, e.g., \cite[Chapter III]{St2} for a precise definition). Also, for $n/(n+1)<p\leq1$, let $\dot{H}^{1,p}_{\mathrm{at}}(\mathbb{R}^n)$ denote the homogeneous ``Hardy-Sobolev" space of order one, which is a subspace of $\mathcal{S}'(\mathbb{R}^n)/\mathbb{C}$ (see, e.g., \cite[Section~3]{MM} for further details). In particular, we call $a\in \dot{L}^2_1(\mathbb{R}^n)$ a {\it regular atom} if there exists a cube $Q\subset\mathbb{R}^n$ such that \begin{equation*} \operatorname{supp} a\subset Q,\qquad \|\nabla a\|_{L^{2}(Q)}\leq |Q|^{\frac{1}{2}-\frac{1}{p}}, \end{equation*} and we define the space \[ \dot{H}^{1,p}_{\mathrm{at}}(\mathbb{R}^n) := \Big\{f\in\mathcal{S}'(\mathbb{R}^n)/\mathbb{C} : \nabla f=\sum_{j=1}^\infty \lambda_j\nabla a_j,\,\,(\lambda_j)_j\in\ell^p,\,\, a_j\,\,\mbox{is a regular atom}\Big\}, \] where the series converges in $H^p_{\mathrm{at}}(\mathbb{R}^n)$, and the space is equipped with the quasi-norm $\|f\|_{\dot{H}^{1,p}_{\mathrm{at}}(\mathbb{R}^n)}:=\inf\big[\sum_j |\lambda_j|^p\big]^{1/p}$, where the infimum is taken over all such representations. We now define the scales \[ H^p(\mathbb{R}^n):=\left\{ \begin{array}{l} H^p_{\mathrm{at}}(\mathbb{R}^n)\,, \,\,0<p\leq 1, \\[6pt] L^p(\mathbb{R}^n)\,, \,\,1<p<\infty, \end{array} \right.
\, \quad \dot{H}^{1,p}(\mathbb{R}^n):=\left\{ \begin{array}{l} \dot{H}^{1,p}_{\mathrm{at}}(\mathbb{R}^n)\,, \,\,\frac{n}{n+1}<p\leq 1, \\[6pt] \dot{L}_1^p(\mathbb{R}^n)\,, \,\,1<p<\infty. \end{array}\right. \] We recall that, by the classical result of C. Fefferman (cf. \cite{FS}), the dual of $H^1(\mathbb{R}^n)$ is $BMO(\mathbb{R}^n)$. Moreover, $(H^p_{\mathrm{at}})^*=\dot{C}^{\alpha}(\mathbb{R}^n)$, if $\alpha:=n(1/p-1)\in (0,1)$, where $\dot{C}^{\alpha}(\mathbb{R}^n)$ denotes the homogeneous H\"older space of order $\alpha$. In general, for a measurable set $E$, and for $0<\alpha<1$, the H\"{o}lder space $\dot{C}^\alpha (E)$ is defined to be the set of $f\in C(E)/ \mathbb{C}$ satisfying \[ \|f\|_{\dot{C}^\alpha}:= \sup \frac{|f(x)-f(y)|}{|x-y|^\alpha}\,<\,\infty, \] where the supremum is taken over all pairs $(x,y)\in E\times E$ such that $x\neq y$. For $0\leq \alpha<1$, we define the scale \[ \Lambda^\alpha(\mathbb{R}^n):=\left\{ \begin{array}{l} \dot{C}^{\alpha}(\mathbb{R}^n)\,, \,\,0<\alpha < 1, \\[6pt] BMO(\mathbb{R}^n)\,,\,\,\alpha =0\,. \end{array} \right. \] As usual, we say that a function $F\in L^2_{\mathrm{loc}}(\mathbb{R}^{n+1}_+)$ belongs to the tent space $T^\infty_2(\mathbb{R}^{n+1}_+)$, if it satisfies the Carleson measure condition \[ \|F\|_{T^\infty_2(\mathbb{R}^{n+1}_+)}:= \left(\sup_Q\frac1{|Q|}\iint_{R_Q}|F(x,t)|^2\, \frac{dx dt}{t}\right)^{1/2}\,<\,\infty\,. \] Here, the supremum is taken over all cubes $Q\subset\mathbb{R}^n$, and $R_Q:=Q\times(0,\ell(Q))$ is the usual ``Carleson box" above $Q$. With these definitions and notational conventions in place, we are ready to state the first main result of this paper. \begin{theorem}\label{P-bdd1} Suppose that $L$ and $L^*$ satisfy the standard assumptions, let $\alpha$ denote the minimum of the De Giorgi/Nash exponents for $L$ and $L^*$ in \eqref{eq2.DGN}, and set ${p_{\alpha}:=n/(n+\alpha)}$.
Then there exists $p^+>2$, depending only on the standard constants, such that \begin{align} \sup_{t> 0}\|\nabla \mathcal{S} f(\cdot,t)\|_{L^p(\mathbb{R}^n,\mathbb{C}^{n+1})} & \leq C_p\|f\|_{H^p(\mathbb{R}^n)}, \qquad\forall\,p\in(p_{\alpha},p^+)\,, \label{LP-2} \\[4pt] \|\widetilde{N}_*\left(\nabla{\mathcal{S}}f\right)\|_{L^p(\mathbb{R}^n)} & \leq C_p\|f\|_{H^p(\mathbb{R}^n)}, \qquad\forall\,p\in(p_{\alpha},p^+)\,, \label{LP-1} \\[4pt] \|\nabla_xS\!f\|_{H^{p}(\mathbb{R}^n,\mathbb{C}^n)} & \leq C_p\|f\|_{H^p(\mathbb{R}^n)}, \qquad\forall\,p\in(p_{\alpha},p^+)\,, \label{LP-3a} \\[4pt] \|\widetilde{K}f\|_{H^p(\mathbb{R}^n)} & \leq C_p\|f\|_{H^p(\mathbb{R}^n)}, \qquad\forall\,p\in(p_{\alpha},p^+)\,, \label{LP-3} \\[4pt] \|N_*(\mathcal{D} f)\|_{L^p(\mathbb{R}^n)} & \leq C_p\|f\|_{L^p(\mathbb{R}^n)}, \qquad\forall\,p\in\left(\frac{p^+}{p^+-1},\infty\right)\,, \label{LP-4} \\[4pt] \|t\nabla \mathcal{D}f\|_{T^\infty_2(\mathbb{R}^{n+1}_+)}& \leq C\|f\|_{BMO(\mathbb{R}^n)}\,, \label{LP-5} \\[4pt] \|\mathcal{D} f\|_{\dot{C}^\beta(\overline{\mathbb{R}^{n+1}_+})}& \leq {C_\beta}\|f\|_{\dot{C}^\beta(\mathbb{R}^n)}\,, \qquad\forall\,\beta\in\left(0,\alpha\right)\,, \label{LP-6} \end{align} for an extension of $\mathcal{D} f$ to $\overline{\mathbb{R}^{n+1}_+}$, where $S$, $\mathcal{S}$, $\mathcal{D}$, and $\widetilde{K}$ may correspond to either $L$ or $L^*$, and the analogous bounds hold in the lower half-space. \end{theorem} To state our second main result, let us recall the definitions of the Neumann and Regularity problems, with (for now) $n/(n+1)<p<\infty$: \[ (\mbox{N})_p\,\,\left\{ \begin{array}{l} Lu=0\mbox{ in }\mathbb{R}^{n+1}_+,\\[6pt] \widetilde{N}_*(\nabla u)\in L^p(\mathbb{R}^n),\\[6pt] \partial_\nu u(\cdot,0)=g\in H^p(\mathbb{R}^n), \end{array}\right.
\qquad (\mbox{R})_p\,\,\left\{ \begin{array}{l} Lu=0\mbox{ in }\mathbb{R}^{n+1}_+,\\[6pt] \widetilde{N}_*(\nabla u)\in L^p(\mathbb{R}^n),\\[6pt] u(\cdot,0)=f\in \dot{H}^{1,p}(\mathbb{R}^n)\,, \end{array}\right. \] where we specify that the solution $u$, of either $(N)_p$ or $(R)_p$, will assume its boundary data in the following sense: \begin{itemize} \item $u(\cdot ,0)\in \dot{H}^{1,p}(\mathbb{R}^n)$, and $u\overset{{\rm n.t.}}{\longrightarrow} u(\cdot,0)$; \smallskip \item $\nabla_x u(\cdot,0)$ and $\partial_\nu u(\cdot,0)$ belong to $H^p(\mathbb{R}^n)$, and are the weak limits (in $L^p$, for $p>1$, and in the sense of tempered distributions, if $p\leq 1$), as $t\to 0$, of $\nabla_x u(\cdot,t)$ and of $-e_{n+1}\cdot A\nabla u(\cdot,t)$, respectively. \end{itemize} We also formulate the Dirichlet problem in $L^p$, with $1<p<\infty$: \[ (\mbox{D})_p\,\,\left\{ \begin{array}{l} Lu=0\mbox{ in }\mathbb{R}^{n+1}_+,\\[6pt] N_*(u)\in L^p(\mathbb{R}^n),\\[6pt] u(\cdot,0)=f\in L^p(\mathbb{R}^n)\,, \end{array}\right. \] and in $\Lambda^\alpha$, with $0\leq\alpha<1$: \[ (\mbox{D})_{\Lambda^\alpha}\,\,\left\{ \begin{array}{l} Lu=0\mbox{ in }\mathbb{R}^{n+1}_+,\\[6pt] t\nabla u \in T^\infty_2(\mathbb{R}^{n+1}_+)\,\,{\rm if}\,\, \alpha =0\,,\,\, {\rm or}\,\, u\in \dot{C}^\alpha(\overline{\mathbb{R}^{n+1}_+})\,\, {\rm if}\,\,0<\alpha<1\,,\\[6pt] u(\cdot,0)=f\in \Lambda^\alpha(\mathbb{R}^n)\,. \end{array}\right. \] The solution $u$ of $(D)_p$, with data $f$, satisfies \begin{itemize} \item $u\overset{{\rm n.t.}}{\longrightarrow} f$, and $u(\cdot,t)\to f$ as $t\to 0$ in $L^p(\mathbb{R}^n)$. \end{itemize} The solution $u$ of $(D)_{\Lambda^\alpha}$, with data $f$, satisfies \begin{itemize} \item $u(\cdot,t)\to f$ as $t\to 0$ in the weak* topology on $\Lambda^\alpha$, $0\leq \alpha <1$; \smallskip \item $u \in \dot{C}^\alpha(\overline{\mathbb{R}^{n+1}_+})$, and $u(\cdot,0) = f$ pointwise, when $0<\alpha<1$.
\end{itemize} \begin{theorem}\label{th2} Let $L=-\operatorname{div} A\nabla$ and $L_0=-\operatorname{div} A_0\nabla$ be as in \eqref{L-def-1} and \eqref{eq1.1*}, with $A=A(x)$ and $A_0=A_0(x)$ both $t$-independent, and suppose that $A_0$ is real symmetric. There exist $\varepsilon_0>0$ and $\varepsilon>0$, both depending only on dimension and the ellipticity of $A_0$, such that if \[ \|A-A_0\|_\infty < \varepsilon_0\,, \] then $(N)_p$, $(R)_p$, $(D)_q$ and $(D)_{\Lambda^\alpha}$ are uniquely solvable for $L$ and $L^*$ when $1-\varepsilon<p<2+\varepsilon$, $(2+\varepsilon)'<q<\infty$ and $0\leq\alpha <n\varepsilon/(1-\varepsilon)$, respectively. \end{theorem} \begin{remark} By Remark~\ref{firstrem}, both $L$ and $L^*$ satisfy the ``standard assumptions" under the hypotheses of Theorem \ref{th2}. \end{remark} \begin{remark} Theorems \ref{P-bdd1} and \ref{th2} continue to hold, with the half-space $\mathbb{R}^{n+1}_+$ replaced by a Lipschitz graph domain of the form $\Omega=\{(x,t)\in\mathbb{R}^{n+1}: \, t>\phi(x)\}$, where $\phi:\mathbb{R}^n \to \mathbb{R}$ is a Lipschitz function. Indeed, that case may be reduced to that of the half-space by a standard pull-back mechanism. We omit the details. \end{remark} \begin{remark} In the case of the Dirichlet problem with $\dot{C}^\alpha$ data, we answer in the affirmative a higher dimensional version of a question posed in $\mathbb{R}^2_+$ in \cite{SV}. We note that this particular result is new even in the case $A=A_0$ (i.e., in the case that $A$ is real symmetric). \end{remark} Let us briefly review some related history. We focus first on the question of boundedness of layer potentials. As we have just noted, our results extend immediately to the setting of a Lipschitz graph domain.
The prototypical result in that setting is the result of Coifman, McIntosh and Meyer~\cite{CMM} concerning the $L^{2}$ boundedness of the Cauchy integral operator on a Lipschitz curve, which implies $L^2$ bounds for the layer potentials associated to the Laplacian via the method of rotations. In turn, the corresponding $H^p/L^p$ bounds follow by classical Calder\'on-Zygmund theory. For the variable coefficient operators considered here, the $L^2$ boundedness theory (essentially, the case $p=2$ of Theorem \ref{P-bdd1}, along with $L^2$ square function estimates) was introduced in \cite{AAAHK}. In that paper, it was shown, first, that such $L^2$ bounds (along with $L^2$ invertibility for $\pm\frac12 I+K$) are stable under small complex $L^\infty$ perturbations of the coefficient matrix, and second, that these boundedness and invertibility results hold in the case that $A$ is real and symmetric (hence also for complex perturbations of real symmetric $A$). The case $p=2$ for $A$ real, but not necessarily symmetric, was treated in \cite{KR} in the case $n=1$ (i.e., in ambient dimension $n+1 = 2$), and in \cite{HKMP1}, in all dimensions. Moreover, in hindsight, in the special case that the matrix $A$ is of the ``block" form \[ \left[\begin{array}{c|c} & 0\\ B & \vdots\\ & 0\\ \hline 0\cdots0 & 1\end{array}\right], \] where $B=B(x)$ is an $n\times n$ matrix, $L^2$ bounds for layer potentials follow from the solution of the Kato problem \cite{AHLMcT}, since in the block case the single layer potential is given by $\mathcal{S} f(\cdot,t)= \frac12 J^{-1/2} e^{-t\sqrt{J}}f$, where $J:= -\operatorname{div}_x B(x)\nabla_x$.
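\begin{remark} The formula for $\mathcal{S}$ in the block case may be verified, at least formally, as follows. With $A$ of the block form above, $L=-\partial_t^2+J$, and for nice $f$ the function $u(\cdot,t):=\frac12 J^{-1/2}e^{-|t|\sqrt{J}}f$ satisfies \[ \partial_t u(\cdot,t) = \mp\,\tfrac12\, e^{-|t|\sqrt{J}}f\,,\qquad \pm t>0\,, \] so that $\partial_t u$ has a jump of size $-f$ across $t=0$, whence $\partial_t^2 u = Ju-\delta_{0}(t)f$ in the sense of distributions. Thus $Lu=-\partial_t^2u+Ju=\delta_{0}(t)f$, which, in view of \eqref{eq7.4a}, identifies $u$ with $\mathcal{S}f$. \end{remark}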
Quite recently, the case $p=2$ of Theorem \ref{P-bdd1} was shown to hold in general, for $L$ and $L^*$ satisfying the ``standard assumptions", in work of Rosen \cite{R}, in which $L^2$ bounds for layer potentials are obtained via results of \cite{AAM} concerning functional calculus of certain first order ``Dirac-type" operators\footnote{A direct proof of these $L^2$ bounds for layer potentials, bypassing the functional calculus results of \cite{AAM}, will appear in \cite{GH}.}. We note further that Rosen's $L^2$ estimates do not require the DG/N/M hypothesis (rather, just ellipticity and $t$-independence). On the other hand, specializing to the ``block" case mentioned above, we observe that counter-examples in \cite{MNP} and \cite{Frehse} (along with some observations in \cite{AuscherSurvey}) show that the full range of $L^p$ and Hardy space results treated in the present paper cannot be obtained without assuming DG/N/M. It seems very likely that $L^p$ boundedness for some restricted range of $p$ should still hold, even in the absence of DG/N/M, as is true in the special case of the ``block matrices" treated in \cite{AuscherSurvey}, \cite{BK}, \cite{HoMa}, and \cite{May}, but we have not considered this question here. We mention also that even in the presence of DG/N/M (in fact, even for $A$ real and symmetric), the upper bound on $p$ in \eqref{LP-2}-\eqref{LP-1} is optimal. To see this, consider the block case, so that $L=-\partial_t^2+J$, where $J:= -{\rm div}_x B(x)\nabla_x$ and $B=B(x)$ is an $n\times n$ uniformly elliptic matrix. Thus, $\mathcal{S} f(\cdot,t)= \frac12 J^{-1/2}e^{-t\sqrt{J}} f$, so that, considering only the tangential part of the gradient in \eqref{LP-2}, and letting $t\to 0$, we obtain as a consequence of \eqref{LP-2} that \begin{equation}\label{eq4.68} \|\nabla_xJ^{-1/2} f\|_p \lesssim \|f\|_p\,.
\end{equation} But by Kenig's examples (see \cite[pp.~119--120]{AT}), for each $p>2$, there is a $J$ as above for which the Riesz transform bound \eqref{eq4.68} fails; the matrix $B$ may even be taken to be real symmetric. Thus, our results are in the nature of best possible, in the sense that, first, the DG/N/M hypothesis is needed to treat $p$ near (or below) 1, and, second, that even with DG/N/M, the exponent $p^+$ is optimal. As regards the question of solvability, addressed here in Theorem \ref{th2}, we recall that in the special case of the Laplacian on a Lipschitz domain, solvability of the $L^p$ Dirichlet problem is due to Dahlberg \cite{D}, while the Neumann and Regularity problems were treated first, in the case $p=2$, by Jerison and Kenig \cite{JK2}, and then by Verchota \cite{V}, by an alternative proof using the method of layer potentials; second, in the case $1<p< 2+\varepsilon$, by Verchota \cite{V} (Regularity problem only), and in the case $1\leq p< 2+\varepsilon$ by Dahlberg and Kenig \cite{DK} (Neumann and Regularity); and finally, in the case $1-\varepsilon<p<1$, by Brown \cite{Br} (who then obtained $(D)_{\Lambda^\alpha}$ by duality). A conceptually different proof of the latter result was subsequently given by Kalton and Mitrea in \cite{KM}, using a general perturbation technique of functional analytic nature\footnote{Thus answering a question posed by E. Fabes.}. More generally, in the setting of variable coefficients, in the special case that $A=A_0$ (i.e., that $A$ is real symmetric), the $L^p$ results for the Dirichlet problem were obtained by Jerison and Kenig \cite{JK1}, and for the Neumann and Regularity problems by Kenig and Pipher in \cite{KP} (the latter authors also treated the analogous Hardy space theory in the case $p=1$). The case $p=2$ of Theorem \ref{th2} (allowing complex coefficients) was obtained first in \cite{AAAHK}, with an alternative proof given in \cite{AAH}.
The case $n=1$ (i.e., ambient dimension $n+1=2$) of Theorem \ref{th2} follows from the work of Barton \cite{Bar}. In the present work, we consider solvability of boundary value problems only for complex perturbations of real, symmetric operators, but we point out that there has also been some recent progress in the case of non-symmetric $t$-independent operators. For real, non-symmetric coefficients, the case $n=1$ has been treated by Kenig, Koch, Pipher and Toro \cite{KKPT} (Dirichlet problem), and by Kenig and Rule \cite{KR} (Neumann and Regularity). The work of Barton \cite{Bar} allows for complex perturbations of the results of \cite{KKPT} and \cite{KR}. The higher dimensional case $n>1$ has very recently been treated in \cite{HKMP1} (the Dirichlet problem for real, non-symmetric operators), and in \cite{HKMP2} (Dirichlet and Regularity, for complex perturbations of the real, non-symmetric case). In these results for non-symmetric operators, necessarily there are additional restrictions on the range of allowable $p$, as compared to the symmetric case (cf. \cite{KKPT}). We remark that in the non-symmetric setting, with $n>1$, the Neumann problem remains open. We mention that we have also obtained an analogue of Theorem \ref{th2} for the Transmission problem, which we plan to present in a forthcoming publication \cite{HMM}. Finally, let us discuss briefly the role of $t$-independence in our ``standard assumptions". Caffarelli, Fabes and Kenig \cite{CFK} have shown that some regularity, in a direction transverse to the boundary, is needed to obtain $L^p$ solvability for, say, the Dirichlet problem.
Motivated by their work, one may naturally split the theory of boundary value problems for elliptic operators in the half-space\footnote{There are analogues of the theory in a star-like Lipschitz domain.} into two parts: 1) solvability theory for $t$-independent operators, and 2) solvability results in which the discrepancy $|A(x,t) -A(x,0)|$, which measures regularity in $t$ at the boundary, is controlled by a Carleson measure estimate of the type considered in \cite{FKP}\footnote{The Carleson measure control of \cite{FKP} is essentially optimal, in view of \cite{CFK}.}, and in which one has some good solvability result for the operator with $t$-independent coefficients $A_0(x):=A(x,0)$. The present paper, and its companion article \cite{HMM}, fall into category 1). The paper \cite{HMaMo} falls into category 2), and uses our results here to obtain boundedness and solvability results for operators in that category, in which the Carleson measure estimate for the discrepancy is sufficiently small (in this connection, see also the previous work \cite{AA}, which treats the case $p=2$). \vskip 0.08in \noindent{\it Acknowledgments.} The first named author thanks S. Mayboroda for suggesting a simplified proof of estimate \eqref{LP-4}. The proof of item (vi) of Corollary \ref{cor4.47} arose in discussions between the first author and M. Mourgoglou. \section{Jump relations and definition of the boundary integrals}\label{s2} \setcounter{equation}{0} Throughout this section, we impose the ``standard assumptions" defined previously. The operators ${\rm div}$ and $\nabla$ are considered in all $n+1$ variables, and we write ${\rm div}_x$ and $\nabla_x$ when only the first $n$ variables are involved. Also, since we shall consider operators $T$ that may be viewed as acting either in $\mathbb{R}^{n+1}$, or in $\mathbb{R}^n$ with the $t$ variable frozen, we need to distinguish Hermitian adjoints in these two settings.
We therefore use $T^*$ to denote the $(n+1)$-dimensional adjoint of $T$, while $\mathop{\rm adj}\nolimits(T)$ denotes the adjoint of $T$ acting in $\mathbb{R}^n$. As usual, to apply the layer potential method, we shall need to understand the jump relations for the co-normal derivatives of $u^{\pm}= \mathcal{S}^{\pm} f$. To this end, let us begin by recording the fact that, by the main result of \cite{R}, \begin{equation}\label{eq4.prop46} \sup_{\pm t> 0}\|\nabla \mathcal{S}_L^{\pm}\,f(\cdot,t)\|_{L^2(\mathbb{R}^n,\mathbb{C}^{n+1})}+\sup_{\pm t> 0} \|\nabla \mathcal{S}_{L^*}^{\pm}\,f(\cdot,t)\|_{L^2(\mathbb{R}^n,\mathbb{C}^{n+1})}\leq C\|f\|_{L^2(\mathbb{R}^n)}\,. \end{equation} Combining the last estimate with \cite[Lemma 4.8]{AAAHK} (see Lemma~\ref{l4.8*} below), we obtain \begin{equation}\label{eq4.9} \|\widetilde{N}_*^{\pm}(\nabla \mathcal{S}^{\pm} f)\|_{L^2(\mathbb{R}^n)}\leq C\|f\|_{L^2(\mathbb{R}^n)}\,. \end{equation} Next, we recall the following fact proved in \cite{AAAHK}. Recall that $e_{n+1} :=(0,\dots,0,1)$ denotes the standard unit basis vector in the $t=x_{n+1}$ direction. \begin{lemma}[{\cite[Lemmata 4.1 and 4.3]{AAAHK}}]\label{l4.1} Suppose that $L$ and $L^*$ satisfy the standard assumptions.
If $Lu=0$ in $\mathbb{R}_{\pm}^{n+1}$ and $\widetilde{N}_*^{\pm}(\nabla u) \in L^2(\mathbb{R}^n)$, then the co-normal derivative $\partial_\nu u(\cdot,0)$ exists in the variational sense and belongs to $L^2(\mathbb{R}^n)$; that is, there exists a unique $g\in L^2(\mathbb{R}^n)$ (which we denote by $\partial_\nu u(\cdot,0)$), with $\|g\|_2\leq C\|\widetilde{N}_*^{\pm}(\nabla u)\|_2$, such that \begin{enumerate} \item[(i)] $\int_{\mathbb{R}_{\pm}^{n+1}}A\nabla u\cdot \nabla\Phi\, dX = \pm \int_{\mathbb{R}^n} g\, \Phi(\cdot,0) \, dx\,$ for all $\Phi \in C^\infty_0(\mathbb{R}^{n+1})$. \item[(ii)] $-\langle A\nabla u(\cdot,t),e_{n+1}\rangle \to g$ weakly in $L^2(\mathbb{R}^n)$ as $t\to 0^{\pm}$. \end{enumerate} Moreover, there exists a unique $f\in \dot{L}_1^2(\mathbb{R}^n)$, with $\|f\|_{\dot{L}^2_1(\mathbb{R}^n)}\leq C\|\widetilde{N}_*^{\pm}(\nabla u)\|_2$, such that \begin{enumerate} \item[(iii)] $u\to f$ non-tangentially. \item[(iv)] $\nabla_xu(\cdot,t)\to \nabla_x f$ weakly in $L^2(\mathbb{R}^n)$ as $t\to 0^{\pm}$. \end{enumerate} \end{lemma} For each $f\in L^2(\mathbb{R}^n)$, it follows from \eqref{eq7.4a} and \eqref{eq4.prop46} that $u:=\mathcal{S}^{\pm}f$ is a solution of $Lu=0$ in $\mathbb{R}^{n+1}_{\pm}$, and this solution has the properties listed in Lemma~\ref{l4.1}, because \eqref{eq4.9} holds. We then have the following result. \begin{lemma}\label{L-jump} Suppose that $L$ and $L^*$ satisfy the standard assumptions. If $f\in L^2(\mathbb{R}^n)$, then almost everywhere on $\mathbb{R}^n$ we have \begin{equation}\label{jump-1} \partial_\nu\mathcal{S}^+f(\cdot,0) - \partial_\nu\mathcal{S}^-f(\cdot,0) = f, \end{equation} where the co-normal derivatives are defined in the variational sense of Lemma \ref{l4.1}.
\end{lemma} \begin{proof} Let us first suppose that $f\in C^\infty_0(\mathbb{R}^n)$, and introduce \[ u:=\left\{ \begin{array}{l} {\mathcal{S}}^+f\,\,\,\mbox{in}\,\,\,\mathbb{R}^{n+1}_+, \\[4pt] {\mathcal{S}}^-f\,\,\,\mbox{in}\,\,\,\mathbb{R}^{n+1}_-, \end{array} \right. \] and pick some $\Phi\in C^\infty_0(\mathbb{R}^{n+1})$. By Lemma \ref{l4.1} (i) and the property of the fundamental solution in \eqref{fundsolprop}, we obtain \begin{align}\begin{split}\label{IBP-1} \int_{\mathbb{R}^n}\Big\{\partial_\nu\mathcal{S}^+f(x,0)& - \partial_\nu\mathcal{S}^-f(x,0) \Big\}\,\Phi(x,0)\,dx \\[4pt] & =\int_{\mathbb{R}^n}\partial_\nu\mathcal{S}^+f(x,0) \,\Phi(x,0)\,dx \,-\,\int_{\mathbb{R}^n} \partial_\nu\mathcal{S}^-f(x,0)\,\Phi(x,0)\,dx \\[4pt] & =\int_{\mathbb{R}^{n+1}_{+}}\langle A\nabla u,\nabla\Phi\rangle\,dX \,+\,\int_{\mathbb{R}^{n+1}_{-}}\langle A\nabla u,\nabla\Phi\rangle\,dX \\[4pt] & =\int_{\mathbb{R}^{n+1}}\Bigl\langle A(x)\left( \int_{\mathbb{R}^n}\nabla_{x,t}E(x,t,y,0)f(y)\,dy\right)\,,\,\nabla\Phi(x,t) \Bigr\rangle\,dxdt \\[4pt] & =\int_{\mathbb{R}^n}f(y)\left(\int_{\mathbb{R}^{n+1}}\langle A(x)\nabla_{x,t}E(x,t,y,0), \nabla\Phi(x,t)\rangle\,dxdt\right)\,dy \\[4pt] & =\int_{\mathbb{R}^n}f(y)\,\Phi(y,0)\,dy. \end{split}\end{align} The use of Fubini's theorem in the fifth line is justified by absolute convergence, since $\nabla E(\cdot,Y) \in L^p_{\mathrm{loc}}(\mathbb{R}^{n+1})$, $1\leq p<(n+1)/n$ (cf. \cite[Theorem 3.1]{HK}). Given an arbitrary $f\in L^2(\mathbb{R}^n)$, we may approximate $f$ by $f_k\in C^\infty_0(\mathbb{R}^n)$, and observe that both the first and last lines in \eqref{IBP-1} converge appropriately (for the first line, this follows from \eqref{eq4.9} and Lemma \ref{l4.1}).
Then, since $\Phi$ was arbitrary in $C^\infty_0(\mathbb{R}^{n+1})$, \eqref{jump-1} follows. \end{proof} In view of \eqref{jump-1}, we now define the bounded operators $K,\, \widetilde{K}:L^2(\mathbb{R}^n)\rightarrow L^2(\mathbb{R}^n)$ and $\mathbf{T}: L^2(\mathbb{R}^n)\to L^2(\mathbb{R}^n,\mathbb{C}^{n+1})$, as discussed in \eqref{eq1.6}, rigorously by \begin{align*} \widetilde{K}_Lf&:=-{\textstyle{\frac{1}{2}}}f + \partial_\nu\mathcal{S}^+_Lf(\cdot,0) ={\textstyle{\frac{1}{2}}}f + \partial_\nu\mathcal{S}^-_Lf(\cdot,0)\\[4pt] K_L f&:= \mathop{\rm adj}\nolimits(\widetilde{K}_{L^*}) f\\[4pt] \mathbf{T}_Lf &:= \Big(\nabla_x S_{\!L} f \,,\, \tfrac{-1}{A_{n+1,n+1}}\big(\widetilde{K}_{L}f + \textstyle{\sum_{j=1}^n A_{n+1,j}}\, \partial_{x_j}S_{\!L} f\big) \Big). \end{align*} We then have the following lemma, which we quote without proof from \cite{AAAHK}\footnote{\cite[Lemma 4.18]{AAAHK} assumes that \eqref{eq4.prop46} holds, but as noted above, it is now known that this is always the case, given our standard assumptions, by the result of \cite{R}.}, although part (i) below is just a rephrasing of Lemma~\ref{l4.1}(ii) and Lemma~\ref{L-jump}. \begin{lemma}[{\cite[Lemma 4.18]{AAAHK}}]\label{l4.ntjump} Suppose that $L$ and $L^*$ satisfy the standard assumptions. If $f\in L^2(\mathbb{R}^n)$, then \begin{enumerate} \item[(i)] $\partial_\nu(\mathcal{S}^{\pm}_Lf)(\cdot,0) = \left(\pm \frac{1}{2}I + \widetilde{K}_L\right)f$,\\ and $-\langle A\nabla{\mathcal{S}}_L^{\pm} f(\cdot, t),e_{n+1}\rangle \to \left(\pm \frac{1}{2}I + \widetilde{K}_L\right)f$ weakly in $L^2$ as $t\to 0^{\pm}$, \end{enumerate} where the co-normal derivative is defined in the variational sense of Lemma \ref{l4.1}.
\begin{enumerate} \item[(ii)] $\nabla \mathcal{S}^{\pm}_L f(\cdot,t) \to \left(\mp\frac{1}{2A_{n+1,n+1}}e_{n+1} + {\bf T}_L\right)f$ weakly in $L^2$ as $t\to 0^{\pm}$, \end{enumerate} where the tangential component of ${\bf T}_Lf$ equals $\nabla_x S_{\!L}f$. \begin{enumerate} \item[(iii)] $\mathcal{D}_L^{\pm}f(\cdot,t) \to \left(\mp\frac{1}{2}I + K_L \right)f$ weakly in $L^2$ as $t\to 0^{\pm}$. \end{enumerate} \end{lemma} \section{A ``Calder\'{o}n-Zygmund" Theory for the boundedness of layer potentials: Proof of Theorem~\ref{P-bdd1}} \setcounter{equation}{0} We continue to impose the ``standard assumptions" throughout this section. We shall work in the upper half-space, the proofs of the analogous bounds for the lower half-space being essentially identical. Our main goal in this section is to prove Theorem \ref{P-bdd1}. We begin with some observations concerning the kernels of the operators $f\mapsto \partial_t \mathcal{S}f(\cdot,t)$ and $f\mapsto \nabla_x \mathcal{S}f(\cdot,t)$, which we denote respectively by $$K_t(x,y):=\partial_t E(x,t,y,0)\quad {\rm and } \quad \vec{H}_t(x,y) := \nabla_x E(x,t,y,0).$$ By the DG/N/M estimates \eqref{eq2.DGN} and \eqref{eq2.M} (see \cite[Theorem~3.1]{HK} and \cite[Lemma~2.5]{AAAHK}), for all $t\in\mathbb{R}$ and $x,y\in\mathbb{R}^n$ such that $|t|+|x-y|>0$, we have \begin{align}\label{G-est1} |E(x,t,y,0)| & \leq \frac{C}{(|t|+|x-y|)^{n-1}}, \\[4pt] |\partial_tE(x,t,y,0)| & \leq \frac{C}{(|t|+|x-y|)^{n}}\,, \label{G-est2} \end{align} and, for each integer $m\geq 0$, whenever $2|h|\leq \max(|x-y|,|t|)$, \begin{align}\begin{split}\label{G-est3} &|(\partial_t)^m E(x+h,t,y,0)-(\partial_t)^mE(x,t,y,0)|\\ &\qquad+\,|(\partial_t)^mE(x,t,y+h,0)-(\partial_t)^mE(x,t,y,0)| \leq \,C_m\,\frac{|h|^\alpha}{(|t|+|x-y|)^{n+m-1+\alpha}}, \end{split}\end{align} where $\alpha\in (0,1]$ is the minimum of the
De Giorgi/Nash exponents for $L$ and $L^*$ in \eqref{eq2.DGN}. Thus, $K_t(x,y)$ is a standard Calder\'on-Zygmund kernel, uniformly in $t$, but $\vec{H}_t(x,y)$ is not, and for this reason the proof of Theorem \ref{P-bdd1} will be somewhat delicate. On the other hand, the lemma below shows that the kernel $\vec{H}_t(x,y)$ does satisfy a sort of weak ``1-sided" Calder\'on-Zygmund condition similar to those considered by Kurtz and Wheeden in \cite{KW}. In particular, the following lemma from \cite{AAAHK} is at the core of our proof of Theorem~\ref{P-bdd1}. \begin{lemma}[{\cite[Lemma 2.13, (4.15) and (2.7)]{AAAHK}}]\label{Lemma1} Suppose that $L$ and $L^*$ satisfy the standard assumptions. Consider a cube $Q\subset\mathbb{R}^n$ and fix any points $x,x'\in Q$ and $t,t'\in \mathbb{R}$ such that $|t-t'|<2\ell(Q)$. For all $(y,s)\in\mathbb{R}^{n+1}$, set \[ u(y,s):=E(x,t,y,s)-E(x',t',y,s). \] If $\alpha>0$ is the H\"{o}lder exponent in \eqref{G-est3}, then for all integers $k\geq 4$, we have \[ \sup_{s\in \mathbb{R}}\int_{2^{k+1}Q\setminus 2^kQ}|\nabla u(y,s)|^2\,dy \leq C2^{-2\alpha k}\Bigl(2^k\ell(Q)\Bigr)^{-n}. \] The analogous bound holds with $E^*$ in place of $E$. \end{lemma} We will also need the following lemma from \cite{AAAHK} to deduce \eqref{LP-1} from \eqref{LP-2} for $p\geq2$. \begin{lemma}[{\cite[Lemma 4.8]{AAAHK}}]\label{l4.8*} Suppose that $L$ and $L^*$ satisfy the standard assumptions. Let $\mathcal{S}_t$ denote the operator $f\mapsto \mathcal{S}_L f(\cdot,t)$. Then for $1<p<\infty$, \[ \|\widetilde{N}_*\left(\nabla \mathcal{S}_L f\right)\|_{L^p(\mathbb{R}^n)}\,\leq\, C_p \left(1+\sup_{t>0}\|\nabla \mathcal{S}_t\|_{p\to p}\right)\|f\|_{L^p(\mathbb{R}^n)}\,, \] where $\|\cdot\|_{p\to p}$ denotes the operator norm in $L^p$. The analogous bound holds for $L^*$ and in the lower half-space. \end{lemma} We are now ready to present the proof of Theorem~\ref{P-bdd1}.
\begin{proof}[Proof of Theorem~\ref{P-bdd1}] As noted above, we work in $\mathbb{R}^{n+1}_+$ and restrict our attention to the layer potentials for $L$, as the proofs in $\mathbb{R}^{n+1}_-$ and for $L^*$ are essentially the same. We first consider estimates \eqref{LP-2}-\eqref{LP-3}, and we separate their proofs into two parts, according to whether $p\leq 2$ or $p>2$. Afterwards, we prove estimates \eqref{LP-4}-\eqref{LP-6}. \smallskip \noindent{\bf Part 1: estimates \eqref{LP-2}-\eqref{LP-3} in the case} $p_\alpha<p\leq 2$. We set $\mathcal{S} := \mathcal{S}_L^+$ to simplify notation. We separate the proof into the following three parts. \smallskip \noindent{\bf Part 1(a): estimate \eqref{LP-2} in the case} $p_\alpha<p\leq 2$. Consider first the case $p\leq 1$. We claim that if $\frac{n}{n+1}<p\leq 1$ and $a$ is an $H^p$-atom in $\mathbb{R}^n$ with \begin{equation}\label{atom-X} \operatorname{supp} a\subset Q,\quad\int_{\mathbb{R}^n}a\,dx=0,\quad \|a\|_{L^2(\mathbb{R}^n)}\leq\ell(Q)^{n\bigl(\frac{1}{2}-\frac{1}{p}\bigr)}, \end{equation} then for $\alpha>0$ as in \eqref{G-est3}, and for each integer $k\geq 4$, we have \begin{equation}\label{eqSt-9} \sup_{t\geq0}\int_{2^{k+1}Q\setminus 2^kQ}|\nabla{\mathcal{S}}a(x,t)|^2\,dx \leq C2^{-(2\alpha+n)k}\ell(Q)^{n\bigl(1-\frac{2}{p}\bigr)}\,, \end{equation} where $\nabla\mathcal{S}a(\cdot,0)$ is defined on $2^{k+1}Q\setminus 2^kQ$, since $\operatorname{supp} a\subset Q$.
Indeed, using the vanishing moment condition of the atom, Minkowski's inequality, and Lemma~\ref{Lemma1} (with the roles of $(x,t)$ and $(y,s)$, or equivalently, the roles of $L$ and $L^*$, reversed), we obtain \begin{align*}\begin{split} \int_{2^{k+1}Q\setminus 2^kQ}|&\nabla{\mathcal{S}}a(x,t)|^2\,dx \\[4pt] &=\int_{2^{k+1}Q\setminus 2^kQ}\Bigl|\int_{\mathbb{R}^n} \Bigl[\nabla_{x,t}E(x,t,y,0)-\nabla_{x,t}E(x,t,y_Q,0)\Bigr]\,a(y)\,dy \Bigr|^2\,dx \\[4pt] &\leq \left(\int_{\mathbb{R}^n}|a(y)|\left(\int_{2^{k+1}Q\setminus 2^kQ} \Bigl|\nabla_{x,t}E(x,t,y,0)-\nabla_{x,t}E(x,t,y_Q,0)\Bigr|^2\,dx\right)^{1/2} dy\right)^{2} \\[4pt] &\leq C2^{-2\alpha k}\Bigl(2^{k}\ell(Q)\Bigr)^{-n}\|a\|_{L^1(\mathbb{R}^n)}^2 \leq C2^{-(2\alpha+n)k}\ell(Q)^{n\bigl(1-\frac{2}{p}\bigr)}, \end{split}\end{align*} since $\|a\|_{L^1(\mathbb{R}^n)}\leq |Q|^{1/2}\|a\|_{L^2(\mathbb{R}^n)}$. This proves \eqref{eqSt-9} and thus establishes the claim. With \eqref{eqSt-9} in hand, we can now prove \eqref{LP-2} by a standard argument. We write \[ \int_{\mathbb{R}^n}|\nabla \mathcal{S}a(x,t)|^p\,dx=\int_{16 Q}|\nabla \mathcal{S} a(x,t)|^p\,dx +\sum_{k=4}^\infty\int_{2^{k+1}Q\setminus 2^kQ}|\nabla \mathcal{S}a(x,t)|^p\,dx, \] where $a$ is an $H^p$-atom supported in $Q$ as in \eqref{atom-X}. Applying H\"{o}lder's inequality with exponent $2/p$, the $L^2$ estimate for $\nabla \mathcal{S}$ in~\eqref{eq4.prop46}, and estimate \eqref{eqSt-9} for $t>0$, we obtain \[ \sup_{t>0}\int_{\mathbb{R}^n}|\nabla \mathcal{S}a(x,t)|^p\,dx\leq C, \] since $\alpha>n(1/p-1)$ in the interval $p_\alpha<p\leq1$ with $p_\alpha:=n/(n+\alpha)$. This proves \eqref{LP-2} for $p_\alpha<p\leq1$, and so interpolation with \eqref{eq4.prop46} proves \eqref{LP-2} for $p_\alpha<p\leq2$. \smallskip \noindent{\bf Part 1(b): estimate \eqref{LP-1} in the case} $p_\alpha<p\leq 2$.
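Before proceeding, we record for the reader's convenience the elementary arithmetic behind the convergence in Part 1(a), which will recur below. By H\"{o}lder's inequality with exponent $2/p$ and \eqref{eqSt-9},
\[
\int_{2^{k+1}Q\setminus 2^kQ}|\nabla\mathcal{S}a(x,t)|^p\,dx
\leq \Bigl(2^{kn}\ell(Q)^n\Bigr)^{1-\frac{p}{2}}
\Bigl(C2^{-(2\alpha+n)k}\ell(Q)^{n\bigl(1-\frac{2}{p}\bigr)}\Bigr)^{p/2}
= C^{p/2}\, 2^{-kp\bigl(\alpha-n\bigl(\frac{1}{p}-1\bigr)\bigr)},
\]
and the exponent $\alpha-n(1/p-1)$ is positive if and only if $\frac1p<\frac{n+\alpha}{n}$, that is, if and only if $p>n/(n+\alpha)=p_\alpha$.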
We first note that as in Part 1(a), by using \eqref{eq4.9} instead of \eqref{eq4.prop46}, we may reduce matters to showing that for $p_\alpha<p\leq 1$ and for each integer $k\geq 10$, we have \[ \int_{2^{k+1}Q\setminus 2^k Q} |\widetilde{N}_*(\nabla \mathcal{S} a)|^p\leq C 2^{-(\alpha-n(1/p-1)) kp}, \] whenever $a$ is an $H^p$-atom supported in $Q$ as in \eqref{atom-X}, since $\alpha >n(1/p-1)$ in the interval $p_\alpha<p\leq 1$ with $p_\alpha:=n/(n+\alpha)$. In turn, using H\"{o}lder's inequality with exponent $1/p$ when $p<1$, we need only prove that for each integer $k\geq 10$, we have \begin{equation}\label{ms1} \int_{2^{k+1}Q\setminus 2^k Q} |\widetilde{N}_*(\nabla \mathcal{S} a)|\leq C 2^{-\alpha k}|Q|^{1-1/p}. \end{equation} To this end, set $u:={\mathcal{S}}a$, and suppose that $x\in 2^{k+1}Q\setminus 2^k Q$ for some integer $k\geq 10$. We begin with the estimate $\widetilde{N}_*\leq N_1+N_2$, where \begin{align*} N_1 (\nabla u)(x)&:=\sup_{|x-y|<t<2^{k-3}\ell(Q)} \left(\fint_{B((y,t),t/4)}|\nabla u|^2\right)^{1/2},\\[4pt] N_2 (\nabla u)(x)&:=\sup_{|x-y|<t,\,\,t>2^{k-3}\ell(Q)} \left(\fint_{B((y,t),t/4)} |\nabla u|^2\right)^{1/2}. \end{align*} Following \cite{KP}, by Caccioppoli's inequality we have \begin{align*}\begin{split} N_1 (\nabla u)(x) &\leq C\sup_{|x-y|< t< 2^{k-3}\ell(Q)} \left(\fint_{B((y,t),t/2)} \frac{|u-c_B|^2}{t^2}\right)^{1/2} \\[4pt] &\leq C \sup_{t<2^{k-3}\ell(Q)} \left\{\left(\fint_{t/2}^{3t/2} \fint_{|x-y|<3t/2}\frac{|u(y,s)-u(y,0)|^2}{t^2}\right)^{1/2}\right. \\[4pt] & \qquad\qquad\qquad\qquad+\,\,\left.\left(\fint_{|x-y|<3t/2}\frac{|u(y,0)-c_B|^2}{t^2} \right)^{1/2}\,\right\} =:I+II, \end{split}\end{align*} where the constant $c_B$ is at our disposal, and $u(y,0):=\mathcal{S}a(y,0):=S\!a(y)$.
By the vanishing moment property of $a$, if $z_Q$ denotes the center of $Q$, then for all $(y,s)$ and $t$ as in $I$, we have \begin{align*}\begin{split} \frac{1}{t}|u(y,s)-u(y,0)| &= \left|\frac{1}{t}\int_0^s\frac{\partial}{\partial\tau} {\mathcal{S}} a(y,\tau)\,d\tau\right| \\[4pt] &\leq \sup_{0<\tau<3t/2}\int_{\mathbb{R}^n}|\partial_\tau E(y,\tau,z,0) -\partial_\tau E(y,\tau,z_Q,0)||a(z)|\,dz \\[4pt] &\leq C\int_{\mathbb{R}^n}\frac{\ell(Q)^\alpha}{|y-z_Q|^{n+\alpha}}|a(z)|\,dz \\[4pt] &\leq C 2^{-\alpha k}(2^k\ell(Q))^{-n}|Q|^{1-1/p}, \end{split}\end{align*} where in the next-to-last step we have used \eqref{G-est3} with $m=1$, and in the last step we have used that $\|a\|_1 \leq C|Q|^{1-1/p}$. Thus, \[ \int_{2^{k+1}Q\setminus 2^kQ} I\,dx\leq 2^{-\alpha k}|Q|^{1-1/p}, \] as desired. By Sobolev's inequality, for an appropriate choice of $c_B$, we have \begin{align*}\begin{split} II&\leq C\sup_{0<t<2^{k-3}\ell(Q)} \left(\fint_{|x-y|<3t/2}|\nabla_{\rm{tan}}u(y,0)|^{2_*}\right)^{1/2_*} \\[4pt] &\leq C \left(M\big(|\nabla_{\rm{tan}}u(\cdot,0)|^{2_*} \chi_{2^{k+3}Q\setminus 2^{k-2}Q}\big)(x)\right)^{1/2_*}\,, \end{split}\end{align*} where $\nabla_{\rm tan} u(x,0) := \nabla_x u(x,0)$ is the tangential gradient, $2_*:= 2n/(n+2)$, and $M$, as usual, denotes the Hardy-Littlewood maximal operator. Consequently, we have \begin{align*}\begin{split} \int_{2^{k+1}Q\setminus 2^kQ}II &\leq C\left(2^k\ell(Q)\right)^{n/2} \left(\int_{2^{k+3}Q\setminus 2^{k-2}Q}|\nabla_{\rm{tan}}u(\cdot,0)|^2\right)^{1/2} \\[4pt] &\leq C \left(2^k\ell(Q)\right)^{n/2} 2^{-(\alpha+n/2)k}\ell(Q)^{n(1/2-1/p)} \leq C2^{-\alpha k}|Q|^{1-1/p}, \end{split}\end{align*} where in the second inequality we used estimate \eqref{eqSt-9} for $t=0$, since $u=\mathcal{S}a$ and $\operatorname{supp} a \subset Q$. We have therefore proved that $N_1$ satisfies \eqref{ms1}. It remains to treat $N_2$.
For each $x\in 2^{k+1}Q\setminus 2^kQ$, choose $(y_*,t_*)$ in the cone $\Gamma(x)\subset\mathbb{R}^{n+1}_+$ so that the supremum in the definition of $N_2$ is essentially attained, i.e., so that \[ N_2(\nabla u)(x)\leq 2\left(\fint_{B((y_*,t_*),\,t_*/4)} |\nabla u|^2\right)^{1/2}, \] with $|x-y_*|<t_*$ and $t_*\geq 2^{k-3}\ell(Q)$. By Caccioppoli's inequality, \[ N_2(\nabla u)(x)\leq C\frac{1}{t_*}\left( \fint_{B((y_*,t_*),\,t_*/2)}|u|^2\right)^{1/2}. \] Now for $(y,s)\in B((y_*,t_*),t_*/2)$, by \eqref{G-est3} with $m=0$, we have \begin{align*}\begin{split} |u(y,s)|&\leq \int_{\mathbb{R}^n}|E(y,s,z,0)-E(y,s,z_Q,0)||a(z)|\,dz \\[4pt] &\leq C\|a\|_{L^1(\mathbb{R}^n)}\frac{\ell(Q)^\alpha}{s^{n-1+\alpha}} \leq C\ell(Q)^\alpha t_*^{1-n-\alpha}|Q|^{1-1/p}. \end{split}\end{align*} Therefore, \[ N_2(\nabla u)(x)\leq C\ell(Q)^\alpha t_*^{-n-\alpha}|Q|^{1-1/p} \leq C 2^{-\alpha k}(2^k\ell(Q))^{-n}|Q|^{1-1/p}. \] Integrating over $2^{k+1}Q\setminus 2^kQ$, we obtain \eqref{ms1} for $N_2$, hence \eqref{LP-1} holds for $p_\alpha<p\leq2$. \smallskip \noindent{\bf Part 1(c): estimates \eqref{LP-3a}-\eqref{LP-3} in the case} $p_\alpha<p\leq 2$. We note that the case $p=2$ holds by \eqref{eq4.prop46} and Lemma \ref{l4.ntjump} (i) and (ii). Thus, by interpolation, it is again enough to treat the case $p_\alpha<p\leq1$, and in that setting, \eqref{LP-3a}-\eqref{LP-3} are an immediate consequence of the more general estimates in~\eqref{eq4.31**}-\eqref{eq4.31*} below, which we note for future reference. \begin{proposition}\label{r4.1} Suppose that $L$ and $L^*$ satisfy the standard assumptions, let $\alpha$ denote the minimum of the De Giorgi/Nash exponents for $L$ and $L^*$ in \eqref{eq2.DGN}, and set $p_\alpha:=n/(n+\alpha)$.
Then \begin{align}\label{eq4.31**} \|\nabla_x\, S\!f\|_{H^p(\mathbb{R}^n,\mathbb{C}^n)} + \sup_{t> 0}\|\nabla_x\, \mathcal{S} f(\cdot, t)\|_{H^p(\mathbb{R}^n,\mathbb{C}^n)}&\leq C_p \|f\|_{H^p(\mathbb{R}^n)},\quad\forall\,p\in(p_\alpha,1]\,,\\[4pt] \label{eq4.31*} \|\partial_\nu\mathcal{S}f(\cdot, 0)\|_{H^p(\mathbb{R}^n)} + \sup_{t>0}\|\langle A\nabla{\mathcal{S}} f(\cdot, t),e_{n+1}\rangle\|_{H^p(\mathbb{R}^n)} &\leq C_p \|f\|_{H^p(\mathbb{R}^n)},\quad\forall\,p\in(p_\alpha,1]\,, \end{align} where $\partial_\nu\mathcal{S}f(\cdot, 0) = ((1/2) I+\widetilde{K})f$ is defined in the variational sense of Lemma~\ref{l4.1}. The analogous results hold for $L^*$ and in the lower half-space. \end{proposition} \begin{proof} It suffices to show that if $a$ is an $H^p(\mathbb{R}^n)$-atom as in \eqref{atom-X}, and $t>0$, then \begin{align*} &\vec{m}_0:= C\nabla_x S\!a, &&\vec{m}_t:= C\nabla_x \mathcal{S} a(\cdot,t),\\[4pt] &m_0:=C((1/2)I+\widetilde{K})a, &&m_t:=C\langle A\nabla{\mathcal{S}}a(\cdot, t),e_{n+1}\rangle, \end{align*} are all molecules adapted to $Q$, for some harmless constant $C\in(0,\infty)$, depending only on the ``standard constants". Recall that, for $n/(n+1)<p\leq 1$, an $H^p$-molecule adapted to a cube $Q\subset\mathbb{R}^n$ is a function $m\in L^1(\mathbb{R}^n)\cap L^2(\mathbb{R}^n)$ satisfying \begin{equation}\label{eqSt-10} \begin{array}{l} (i)\quad \int_{\mathbb{R}^n}m(x)\,dx=0, \\[8pt] (ii)\quad \Bigl(\int_{16\,Q}|m(x)|^2\,dx\Bigr)^{1/2} \leq\ell(Q)^{n\bigl(\frac{1}{2}-\frac{1}{p}\bigr)}, \\[8pt] (iii)\quad\Bigl(\int_{2^{k+1}Q\setminus 2^kQ}|m(x)|^2\,dx\Bigr)^{1/2} \leq 2^{-\varepsilon k}\Bigl(2^k\ell(Q)\Bigr)^{n\bigl(\frac{1}{2}-\frac{1}{p}\bigr)},\qquad\forall\,k\geq 4, \end{array} \end{equation} for some $\varepsilon>0$ (see, e.g., \cite{CW}, \cite{TW}).
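We remark, for later use, that conditions (ii) and (iii) automatically place a molecule in $L^1(\mathbb{R}^n)$ with a quantitative bound: by the Cauchy-Schwarz inequality and (iii),
\[
\int_{2^{k+1}Q\setminus 2^kQ}|m|\,dx
\leq \bigl|2^{k+1}Q\bigr|^{1/2}\,2^{-\varepsilon k}\Bigl(2^k\ell(Q)\Bigr)^{n\bigl(\frac{1}{2}-\frac{1}{p}\bigr)}
\leq C\,2^{-\varepsilon k}\,2^{kn\bigl(1-\frac{1}{p}\bigr)}\,\ell(Q)^{n\bigl(1-\frac{1}{p}\bigr)},
\]
and since $p\leq 1$ forces $n(1-1/p)\leq 0$, summing over $k\geq 4$, and treating $16Q$ via (ii), yields $\|m\|_{L^1(\mathbb{R}^n)}\leq C\,\ell(Q)^{n(1-1/p)}$.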
Note that for $\vec{m}_t$ and $m_t$, when $t>0$, property (ii) follows from the $L^2$ estimate in \eqref{eq4.prop46}, and (iii) follows from \eqref{eqSt-9} with $\varepsilon:=\alpha-n(1/p-1)$, which is positive for $p_\alpha<p\leq 1$ with $p_\alpha:=n/(n+\alpha)$. Moreover, these estimates for $\vec{m}_t$ and $m_t$ hold uniformly in $t$, and since $a\in L^2(\mathbb{R}^n)$, we obtain (ii) and (iii) for $\vec{m}_0$ and $m_0$ by Lemma \ref{l4.ntjump}. Thus, it remains to show that $\vec{m}_t$ and $m_t$ have mean-value zero for all $t\geq0$. This is nearly trivial for $\vec{m}_t$. For any $R>1$, choose $\Phi_R \in C^\infty_0(\mathbb{R}^{n+1})$, with $0\leq \Phi_R\leq 1$, such that \begin{equation}\label{Phi-R} \Phi_R\equiv 1\mbox{ on }B(0,R), \quad \operatorname{supp} \Phi_R\subset B(0,2R),\quad \|\nabla\Phi_R\|_{L^\infty(\mathbb{R}^{n+1})}\leq C/R\,, \end{equation} and let $\phi_R:=\Phi_R(\cdot,0)$ denote its restriction to $\mathbb{R}^n\times\{0\}$. For $1\leq j\leq n$ and $R > C(\ell(Q)+|y_Q|)$ (where $y_Q$ is the center of $Q$), using that $a$ has mean value zero, we have \begin{align*} \left|\int_{\mathbb{R}^n} \partial_{x_j} \mathcal{S} a (\cdot,t) \,\phi_R \right| &= \left|\int_{\mathbb{R}^n} \mathcal{S} a(\cdot,t)\, \partial_{x_j} \phi_R \right| \\[4pt] &\lesssim \,\frac1R \int_{R\leq|x|\leq 2R} \int_{Q} |E(x,t,y,0)-E(x,t,y_Q,0)| \, |a(y)| \, dy\, dx\\[4pt] &\lesssim\, \frac1R \int_{R\leq|x|\leq 2R} \int_{Q} \frac{\ell(Q)^\alpha}{R^{n-1+\alpha}}\, |a(y)|\, dy\, dx\\[4pt] &\lesssim \, \left(\frac{\ell(Q)}{R}\right)^\alpha \|a\|_{L^1(\mathbb{R}^n)} \,\lesssim\, \left(\frac{\ell(Q)}{R}\right)^\alpha \ell(Q)^{n(1-1/p)}\,, \end{align*} where we used the DG/N bound \eqref{G-est3} with $m=0$, the Cauchy-Schwarz inequality and the definition of an atom \eqref{atom-X}.
Letting $R\to \infty$, we obtain $\int_{\mathbb{R}^n} \nabla_x \mathcal{S} a (\cdot,t) = 0$ for all~$t\geq0$. Next, let us show that $((1/2) I+\widetilde{K})a$ has mean-value zero. Set $u:={\mathcal{S}}a$ in $\mathbb{R}^{n+1}_+$, so that matters are reduced to proving that \[ \int_{\mathbb{R}^n} \partial_\nu u(x, 0) \,dx=0, \] where $\partial_\nu u(\cdot,0)$ is defined in the variational sense of Lemma~\ref{l4.1}. Choose $\Phi_R,\phi_R$ as above, and note that $\partial_\nu u(\cdot,0) \in L^1(\mathbb{R}^n)$, by the bounds \eqref{eqSt-10} (ii) and (iii) that we have just established. Then by Lemma~\ref{l4.1}~(i), we have \begin{align*}\begin{split} \left|\int_{\mathbb{R}^n}\partial_\nu u(\cdot,0)\,dx\right| \,&=\, \left|\lim_{R\to\infty}\int_{\mathbb{R}^n}\partial_\nu u(\cdot,0)\,\phi_R\,dx\right| \\[4pt] &= \, \left|\lim_{R\to\infty}\int_{\mathbb{R}^{n+1}_+}\langle A\nabla u,\nabla\Phi_R \rangle\,dX\right| \\[4pt] &\lesssim \, \overline{\lim_{R\to\infty}} \left(\int_{\{X\in{\mathbb{R}^{n+1}_+}:\,R<|X|<2R\}}|\nabla u|^{q}\,dX\right)^{1/q}\left(\int_{R<|X|<2R}|\nabla\Phi_R|^{q'}\,dX\right)^{1/q'}, \end{split}\end{align*} where $q:= p(n+1)/n$ and $q' = q/(q-1)$. Since $0<\alpha \leq 1$ and $p_\alpha:=n/(n+\alpha)$, we have $n/(n+1) <p\leq 1$, hence $1<q\leq (n+1)/n$ and $n+1\leq q'<\infty$. Consequently, the second factor above is bounded uniformly in $R$ as $R\to \infty$, whilst the first factor converges to zero by Lemma~\ref{L-improve} and the dominated convergence theorem, since we have already proven \eqref{LP-1} in the case $p_\alpha<p\leq 2$. This proves that $\int_{\mathbb{R}^n} ((1/2) I+\widetilde{K})a=0$. The proof that $\int_{\mathbb{R}^n}\langle A\nabla{\mathcal{S}} a(\cdot, t),e_{n+1}\rangle = 0$ for all $t>0$ follows in the same way, except we use \cite[(4.6)]{AAAHK} instead of Lemma~\ref{l4.1}~(i).
\end{proof} This concludes the proof of Part 1 of Theorem \ref{P-bdd1}. At this point, we note for future reference the following corollary of \eqref{LP-3} and Proposition~\ref{r4.1}. \begin{corollary}\label{r4.2} Suppose that $L$ and $L^*$ satisfy the standard assumptions, and let $\alpha$ denote the minimum of the De Giorgi/Nash exponents for $L$ and $L^*$ in \eqref{eq2.DGN}. Then \begin{equation}\label{eq4.33*} \sup_{t> 0}\|\mathcal{D}_L^{\pm} g(\cdot,t)\|_{\Lambda^\beta(\mathbb{R}^n)} \,+\, \|K_L g\|_{\Lambda^\beta(\mathbb{R}^n)} \,\leq \,C_\beta\, \|g\|_{\Lambda^\beta(\mathbb{R}^n)}\,, \quad \forall\,\beta\in\left[0,\alpha\right). \end{equation} Moreover, $\mathcal{D}_L^{\pm} 1$ is constant on $\mathbb{R}^{n+1}_{\pm}$, and $K_L1$ is constant on $\mathbb{R}^n$. The analogous results hold for $L^*$. \end{corollary} \begin{proof} Consider $\mathbb{R}^{n+1}_+$ and set $\partial_{\nu^*}\mathcal{S}_{t,L^*} f:= \partial_{\nu^*} \mathcal{S}_{L^*}f (\cdot,t)$, and $\mathcal{D}_{t,L} f:= \mathcal{D}_Lf(\cdot,t)$. We have $\partial_{\nu^*}\mathcal{S}_{-t,L^*} = \mathop{\rm adj}\nolimits(\mathcal{D}_{t,L})$, and by definition $\widetilde{K}_{L^*}=\mathop{\rm adj}\nolimits(K_L)$. Thus, estimates \eqref{LP-3} and \eqref{eq4.31*} imply \eqref{eq4.33*} by duality. The case $\beta=0$ of \eqref{eq4.33*} shows that $\mathcal{D}_L1(\cdot,t)$ and $K_L1$ exist in $BMO(\mathbb{R}^n)$, for each $t>0$.
The moment conditions obtained in the proof of Proposition~\ref{r4.1} show that for any atom $a$ as in \eqref{atom-X}, and for each $t>0$, we have \[ \langle \mathcal{D}_L 1(\cdot,t), a\rangle = \int_{\mathbb{R}^n} \partial_{\nu^*} \mathcal{S}_{L^*} a(\cdot,-t) =0\,, \quad \text{ and }\quad \langle K_L 1, a\rangle = \int_{\mathbb{R}^n} \widetilde{K}_{L^*} a =0\,, \] where $\langle\cdot,\cdot\rangle$ denotes the dual pairing between $BMO(\mathbb{R}^n)$ and $H^1_{\mathrm{at}}(\mathbb{R}^n)$. This shows, since $a$ was an arbitrary atom, that $\mathcal{D}_L 1(\cdot,t)$ and $K_L 1$ are zero in the sense of $BMO(\mathbb{R}^n)$, hence $\mathcal{D}_L 1(x,t)$ and $K_L1(x)$ are constant in $x\in\mathbb{R}^n$, for each fixed $t>0$. It remains to prove that $\mathcal{D}_L 1(x,t)$ is constant in $t>0$, for each fixed $x\in\mathbb{R}^n$. To this end, let $\phi_R$ denote the boundary trace of a smooth cut-off function $\Phi_R$ as in \eqref{Phi-R}. We observe that by the definition of $\mathcal{D}_L$ (cf. \eqref{eq2.23}-\eqref{eq2.23*}), and translation invariance in $t$, we have \begin{align*} \partial_t \mathcal{D}_L 1(x,t) &=\lim_{R\to \infty} \partial_t \mathcal{D}_L \phi_R(x,t) =\lim_{R\to \infty}\int_{\mathbb{R}^{n}}\overline{\Big(\partial_t \partial_{\nu^*} E^* (\cdot,\cdot,x,t)\Big)}(y,0)\,\phi_R(y)\,dy \\[4pt] &=\lim_{R\to \infty}\int_{\mathbb{R}^{n}}\overline{\partial_s e_{n+1}\cdot A^*(y)\,\Big(\nabla_{y,s} E^*(y,s,x,t)\Big)}\big|_{s=0}\,\phi_R(y)\,dy \\[4pt] &=\,\lim_{R\to \infty} \sum_{i=1}^n\sum_{j=1}^{n+1} \int_{\mathbb{R}^{n}}\overline{ A_{i,j}^*(y)\,\Big(\partial_{y_j} E^*(y,s,x,t)\Big)}\big|_{s=0}\,\partial_{y_i} \phi_R(y)\,dy\,, \end{align*} where in the last step we set $y_{n+1}:= s$, and used that $L^* E^* = 0$ away from the pole at $(x,t)$.
The limit above equals 0, since for $R> C|x|$ with $C>1$ sufficiently large, the term at level $R$ is bounded by \begin{align*} R^{-1} \int_{C^{-1}R< |x-y|<CR} & \big|\big(\nabla E(x,t,\cdot,\cdot)\big)(y,0)\big|\, dy\\[4pt] &\lesssim\, R^{-1+n/2} \left(\int_{C^{-1}R< |x-y|<CR}\big|\big(\nabla E(x,t,\cdot,\cdot)\big)(y,0)\big|^2\, dy\right)^{1/2} \lesssim \,R^{-1}\,, \end{align*} where in the last step we used the $L^2$ decay for $\nabla E$ from \cite[Lemma 2.8]{AAAHK}. This shows that $\mathcal{D}_L 1(x,t)$ is constant in $t>0$, for each fixed $x\in\mathbb{R}^n$, as required. \end{proof} \smallskip \noindent{\bf Part 2: estimates \eqref{LP-2}-\eqref{LP-3} in the case $2<p<p^+$.} We begin by stating without proof the following variant of Gehring's lemma, as established by Iwaniec~\cite{I}. \begin{lemma}\label{lemmaIwaniec} Suppose that $g$, $h\in L^p(\mathbb{R}^n)$, with $1<p<\infty$, and that for some $C_0>0$ and for all cubes $Q\subset \mathbb{R}^n$, \begin{equation}\label{ms13} \left(\fint_Qg^p\right)^{1/p} \leq C_0\fint_{4Q} g+ \left(\fint_{4Q}h^p\right)^{1/p}. \end{equation} Then there exist $s=s(n,p,C_0)>p$ and $C=C(n,p,C_0)>0$ such that \begin{equation*} \int_{\mathbb{R}^n}g^s\leq C\int_{\mathbb{R}^n}h^s. \end{equation*} \end{lemma} \begin{remark}\label{r4.3} If $1<r<p$, then, replacing $g$ by $\tilde{g}:=g^r$, $h$ by $\tilde{h}:=h^r$, and $p$ by $\tilde{p}:=\frac{p}{r}$, the conclusion of Lemma~\ref{lemmaIwaniec} still holds provided \eqref{ms13} is replaced by \begin{equation*} \left(\fint_Qg^p\right)^{1/p} \leq C_0\left(\fint_{4Q} g^r\right)^{1/r} + \left(\fint_{4Q}h^p\right)^{1/p}. \end{equation*} In this case $s$ also depends on $r$. \end{remark} For the sake of notational convenience, we set $$\mathcal{S}_t f(x):= \mathcal{S}^{\pm}_L\,f(x,t), \qquad (x,t)\in \mathbb{R}^{n+1},$$ so that when $t=0$ we have $\mathcal{S}_0:= S=S_{\!L}$ (cf. \eqref{eq2.24}).
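For completeness, we sketch the elementary reduction behind Remark~\ref{r4.3}. With $\tilde{g}:=g^r$, $\tilde{h}:=h^r$ and $\tilde{p}:=p/r>1$, the modified hypothesis raised to the power $r$ gives, up to harmless constants, \eqref{ms13} for $\tilde{g},\tilde{h},\tilde{p}$: indeed,
\[
\Bigl(\fint_Q \tilde{g}^{\,\tilde{p}}\Bigr)^{1/\tilde{p}}
=\left[\Bigl(\fint_Q g^{p}\Bigr)^{1/p}\right]^{r}
\leq \left(C_0\Bigl(\fint_{4Q}g^r\Bigr)^{1/r}+\Bigl(\fint_{4Q}h^p\Bigr)^{1/p}\right)^{r}
\leq 2^{r-1}\left(C_0^r\fint_{4Q}\tilde{g}+\Bigl(\fint_{4Q}\tilde{h}^{\,\tilde{p}}\Bigr)^{1/\tilde{p}}\right),
\]
using $(a+b)^r\leq 2^{r-1}(a^r+b^r)$ for $r\geq 1$. Lemma~\ref{lemmaIwaniec} then yields $\int \tilde{g}^{\,\tilde{s}}\leq C\int\tilde{h}^{\,\tilde{s}}$ for some $\tilde{s}>\tilde{p}$, which is the stated conclusion with $s:=r\tilde{s}>p$.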
We shall apply Remark~\ref{r4.3} with $g:=\nabla_x \mathcal{S}_{t_0}f$, with $t_0>0$ fixed, $p=2$, and $r=2_*:=2n/(n+2)$. To be precise, we shall prove that for each fixed $t_0>0$, and for every cube $Q\subset\mathbb{R}^n$, we have \begin{equation}\label{newSt-3} \left(\fint_{Q}|\nabla_x \mathcal{S}_{t_0}f|^2\,dx\right)^{1/2} \lesssim \left(\fint_{4Q}|\nabla_x \mathcal{S}_{t_0}f|^{2_*}\,dx\right)^{1/2_*} + \left(\fint_{4Q}\Bigl(|f|+N_{**}(\partial_t \mathcal{S}_tf)\Bigr)^2\right)^{1/2}\,, \end{equation} for all $f\in L^2\cap L^\infty$, where $N_{**}$ denotes the ``two-sided" nontangential maximal operator $$N_{**} (u)(x):= \sup_{\{(y,t)\in \mathbb{R}^{n+1}:\, |x-y|<|t|\}} |u(y,t)|.$$ We claim that the conclusion of Part 2 of Theorem \ref{P-bdd1} then follows. Indeed, for each fixed $t\in \mathbb{R}$, $\partial_t \mathcal{S}_t$ is a Calder\'on-Zygmund operator with a ``standard kernel'' $K_t(x,y):= \partial_tE(x,t,y,0)$, with Calder\'on-Zygmund constants that are uniform in $t$ (cf. \eqref{G-est2}-\eqref{G-est3}). Thus, by the $L^2$ bound \eqref{eq4.prop46}, we have from standard Calder\'on-Zygmund theory and a variant of the usual Cotlar inequality for maximal singular integrals that \begin{equation}\label{eq4.lpnt} \sup_{t>0}\|\partial_t \mathcal{S}_t f\|_{p} +\|N_{**}(\partial_t \mathcal{S}_t f)\|_p \leq C_p \|f\|_p,\qquad\forall\,p\in(1,\infty). \end{equation} Consequently, if \eqref{newSt-3} holds for arbitrary $t_0>0$, then there exists $p^+>2$ such that \[ \sup_{t>0}\|\nabla \mathcal{S}_t f\|_{L^p(\mathbb{R}^n)}\leq C \|f\|_{L^p(\mathbb{R}^n)},\qquad\forall\,p\in(2,p^+), \] by Lemma~\ref{lemmaIwaniec}, Remark~\ref{r4.3} and a density argument. This would prove~\eqref{LP-2}.
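Let us spell out the self-improvement step just invoked. One applies Remark~\ref{r4.3} with $g:=|\nabla_x \mathcal{S}_{t_0}f|$, $h:=C\bigl(|f|+N_{**}(\partial_t \mathcal{S}_tf)\bigr)$, $p=2$ and $r=2_*$, for which \eqref{newSt-3} is precisely the modified hypothesis; the conclusion furnishes some $s>2$, independent of $t_0$ and $f$, such that
\[
\int_{\mathbb{R}^n}|\nabla_x\mathcal{S}_{t_0}f|^s\,dx
\leq C\int_{\mathbb{R}^n}\Bigl(|f|+N_{**}(\partial_t\mathcal{S}_tf)\Bigr)^s\,dx
\leq C\|f\|_{L^s(\mathbb{R}^n)}^s\,,
\]
where the last step uses \eqref{eq4.lpnt}. Since \eqref{eq4.lpnt} also controls the $t$-derivative, this bounds the full gradient for $p=s$ (so that one may take $p^+:=s$), and the intermediate exponents $2<p<p^+$ follow by interpolation with \eqref{eq4.prop46}.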
We could then use Lemma \ref{l4.8*} to obtain \eqref{LP-1}, and \eqref{LP-3a}-\eqref{LP-3} would follow from Lemma \ref{l4.ntjump} (i) and (ii), and another density argument. Thus, it is enough to prove \eqref{newSt-3}. To this end, we fix a cube $Q\subset\mathbb{R}^n$ and split \[ f=f_1+f_2:=f \,1_{4Q}\,+\,f\, 1_{(4Q)^c},\qquad u=u_1+u_2:=\mathcal{S}_tf_1+\mathcal{S}_tf_2. \] Using \eqref{eq4.prop46} and the definition of $f_1$, we obtain \begin{equation}\label{aaahk} \sup_{t>0}\fint_{Q}|\nabla_x \mathcal{S}_tf_1(x)|^2\,dx \leq C\fint_{4Q}|f(x)|^2\,dx. \end{equation} Thus, to prove \eqref{newSt-3} it suffices to establish \begin{equation}\label{newSt-6} \left(\fint_{Q}|\nabla_x u_2(x,t_0)|^2\,dx\right)^{1/2} \lesssim \left(\fint_{4Q}|\nabla_x u(x,t_0)|^{2_*} \,dx\right)^{1/2_*} \, +\,\left(\fint_{4Q} \Bigl(|f| + N_{**}(\partial_t u)\Bigr)^2\right)^{1/2}. \end{equation} To do this, we shall use the following result from \cite{AAAHK}. \begin{proposition}[{\cite[Proposition 2.1]{AAAHK}}]\label{propslice} Let $L$ be as in \eqref{L-def-1}-\eqref{eq1.1*}. If $A$ is $t$-independent, then there exists $C_0>0$, depending only on dimension and ellipticity, such that for all cubes $Q\subset \mathbb{R}^n$ and all $s\in \mathbb{R}$ the following holds: if $Lu=0$ in $4Q\times (s-\ell(Q),s+\ell(Q))$, then \[ \frac{1}{|Q|}\int_Q |\nabla u(x,s)|^2\, dx \leq C_0\, \frac{1}{\ell(Q)^2}\, \frac{1}{|Q^{**}|}\iint_{Q^{**}} |u(x,t)|^2\, dx\, dt, \] where $Q^{**} := 3Q \times (s-\ell(Q)/2,s+\ell(Q)/2)$.
\end{proposition} Applying Proposition \ref{propslice} with $s:= t_0$ and $u:=\mathcal{S}_t(f1_{(4Q)^c})$, which is a solution of $Lu=0$ in the infinite strip $4Q\times(-\infty,\infty)$, we obtain \begin{align*} \fint_{Q}&|\nabla_x u_2(x,t_0)|^2\,dx \lesssim \frac{1}{\ell(Q)^{2}} \fint_{t_0-\ell(Q)/2}^{t_0+\ell(Q)/2} \!\fint_{3Q}|{u_2}(x,t)-c_Q|^2\, dx\, dt \\[4pt]& \lesssim \,\, \frac{1}{\ell(Q)^{2}} \fint_{t_0-\ell(Q)/2}^{t_0+\ell(Q)/2} \!\fint_{3Q}|{u_2}(x,t)-{u_2}(x,t_0)|^2\, dx\, dt + \frac{1}{\ell(Q)^2} \!\fint_{3Q}|{u_2}(x,t_0)-c_Q|^2\, dx \\[4pt]& \lesssim \,\, \frac{1}{\ell(Q)^{2}} \fint_{t_0-\ell(Q)/2}^{t_0+\ell(Q)/2} \!\fint_{3Q}\left|\int^{\max\{t_0,t\}}_{\min\{t_0,t\}} \partial_s {u_2}(x,s)\, ds\right|^2\, dx\, dt + \frac{1}{\ell(Q)^2} \!\fint_{3Q}|{u_2}(x,t_0)-c_Q|^2\, dx \\[4pt] & \lesssim \,\, \fint_{3Q}\fint_{t_0-\ell(Q)/2}^{t_0+\ell(Q)/2}|\partial_s{u_2}(x,s)|^2\, ds\, dx \,+\,\left(\fint_{3Q} |\nabla_x {u_2}(x,t_0)|^{2_*}\, dx\right)^{2/2_*} \\[4pt] & \lesssim \,\, \fint_{3Q}\big(N_{**}(\partial_t {u_2})(x)\big)^2\,dx + \left(\fint_{3Q} |\nabla_x {u_2}(x,t_0)|^{2_*}\, dx\right)^{2/2_*} \\[4pt] & \lesssim \,\, \fint_{3Q}\big(N_{**}(\partial_t {u})(x)\big)^2\,dx + \left(\fint_{3Q} |\nabla_x {u}(x,t_0)|^{2_*}\, dx\right)^{2/2_*} + \fint_{4Q} |f(x)|^2\,dx, \end{align*} where in the third-to-last estimate we have made an appropriate choice of $c_Q$ in order to use Sobolev's inequality, and in the last estimate we wrote $u_2=u-u_1$, and then used \eqref{eq4.lpnt} with $p=2$ to control $N_{**}(\partial_t u_1)$, and \eqref{aaahk} to control $\nabla_x u_1$. Estimate \eqref{newSt-6} follows, and so the proofs of estimates \eqref{LP-2}-\eqref{LP-3} are now complete. \smallskip \noindent{\bf Part 3: proof of estimate \eqref{LP-4}}\footnote{We are indebted to S. Mayboroda for suggesting this proof, which simplifies our original argument.}. We shall actually prove a more general result. It will be convenient to use the following notation.
For $f\in L^2(\mathbb{R}^n,\mathbb{C}^{n+1})$, set \begin{equation}\label{eq7.8a} \big(\mathcal{S}^{\pm}\nabla\big) f(x,t) := \int_{\mathbb{R}^n} \Big(\nabla E(x,t,\cdot,\cdot)\Big)(y,0)\, \cdot f(y)\,dy\,,\qquad (x,t)\in\mathbb{R}^{n+1}_{\pm}\,, \end{equation} where $\Big(\nabla E(x,t,\cdot,\cdot)\Big)(y,0):=\Big(\nabla_{y,s} E(x,t,y,s)\Big)\big|_{s=0}\,$. Our goal is to prove that \begin{equation}\label{LP-4a} \big\|N_*\big((\mathcal{S} \nabla) f\big)\big\|_{L^p(\mathbb{R}^n)} \, \leq \,C_p\|f\|_{L^p(\mathbb{R}^n,\mathbb{C}^{n+1})}, \qquad\forall\,p\in\left(\frac{p^+}{p^+-1},\infty\right)\,, \end{equation} which clearly implies \eqref{LP-4}, by the definition of $\mathcal{D}$. It is enough to prove \eqref{LP-4a} for all $f\in L^p\cap L^2$. Moreover, it is enough to work with $\widetilde{N}_*$ rather than $N_*$, since for solutions, the former controls the latter pointwise, for appropriate choices of aperture, by the Moser estimate \eqref{eq2.M}. In view of Lemma~\ref{l4.ntjump}, we extend definition \eqref{eq7.8a} to the boundary of $\mathbb{R}^{n+1}_{\pm}$ by setting \[ \big(\mathcal{S}^{\pm}\nabla\big) f(\cdot,0) := \mathop{\rm adj}\nolimits\left(\mp\frac{1}{2A_{n+1,n+1}}e_{n+1} + {\bf T}\right)f\,. \] By duality, since \eqref{LP-2} holds for $L^*$, as well as for $L$, and in both half-spaces, we have \begin{equation}\label{eq4.65} \sup_{\pm t\geq0}\|(\mathcal{S}^{\pm} \nabla) f(\cdot,t)\|_p \leq C_p \|f\|_p\,,\qquad\forall\, p\in\left(\frac{p^+}{p^+-1},\infty\right)\,,\end{equation} for all $f\in L^p\cap L^2$, where we used Lemma \ref{l4.ntjump} (ii) to obtain the bound for $t=0$. We now fix $p>p^+/(p^+-1)$ and choose $r$ so that $p^+/(p^+-1)<r<p$. Let $z\in \mathbb{R}^n$ and $(x,t)\in \Gamma(z)$.
For each integer $k\geq0$, set $\Delta_k:= \{y\in \mathbb{R}^n: |y-z|< 2^{k+2}t\}\,$ and write $$f=f\,1_{\Delta_0}+\sum_{k=1}^\infty f\,1_{\Delta_k\setminus\Delta_{k-1}} =: f_0+\sum_{k=1}^\infty f_k=: f_0 +f^{\star}\,.$$ Likewise, set $u:= (\mathcal{S} \nabla) f,\,u_k:= (\mathcal{S} \nabla) f_k\,$, and $ u^{\star}:= (\mathcal{S} \nabla) f^{\star}$. Also, let $B_{x,t}:=B((x,t),t/4)$, $\widetilde{B}_{x,t}:= B((x,t),t/2)$ and $B^k_{x,t}:=B((x,t),2^kt)$ denote the Euclidean balls in $\mathbb{R}^{n+1}$ centered at $(x,t)$ of radius $t/4$, $t/2$, and $2^kt$, respectively. Using the Moser estimate \eqref{eq2.Mr}, we have \begin{align*}\left(\fint\!\!\!\!\fint_{B_{x,t}}|u|^2\right)^{1/2} &\lesssim \left(\fint\!\!\!\!\fint_{\widetilde{B}_{x,t}}|u|^r\right)^{1/r} \leq \left(\fint\!\!\!\!\fint_{\widetilde{B}_{x,t}}|u_0|^r\right)^{1/r} +\left(\fint\!\!\!\!\fint_{\widetilde{B}_{x,t}}|u^{\star}|^r\right)^{1/r} \\[4pt] &\lesssim\, \left(\fint\!\!\!\!\fint_{\widetilde{B}_{x,t}}|u_0|^r\right)^{1/r} +\,\left(\fint\!\!\!\!\fint_{\widetilde{B}_{x,t}}|u^{\star}(y,s)-u^{\star}(y,0)|^r\,dyds\right)^{1/r} \\[4pt]&\qquad+ \left(\fint_{\Delta_0}|u(\cdot,0)|^r\right)^{1/r}+\left(\fint_{\Delta_0}|u_0(\cdot,0)|^r\right)^{1/r} \\[4pt] &=:I+II+III+IV\,. \end{align*} By \eqref{eq4.65}, we have \[ I +IV\lesssim \left(\fint_{\Delta_0} |f|^r\right)^{1/r}\,. \] To estimate $II$, note that for all $(y,s)\in B^k_{x,t}$, including when $s=0$, we must have \[ u_k(y,s) := (\mathcal{S} \nabla) f_k(y,s) = \int_{\mathbb{R}^n} \Big(\nabla E(y,s,\cdot,\cdot)\Big)(z,0)\, \cdot f_k(z)\,dz\,, \] since ${\rm supp}\, f_k \subset \Delta_k\setminus\Delta_{k-1}$ does not intersect $B^k_{x,t}$.
Thus, $L u_k = 0$ in $B^k_{x,t}$, and so by the De~Giorgi/Nash estimate~\eqref{eq2.DGN}, followed by \eqref{eq2.Mr} and \eqref{eq4.65}, we obtain $$II\leq \sum_{k=1}^\infty \sup_{B_{x,t}} |u_k(y,s)-u_k(y,0)| \lesssim \sum_{k=1}^\infty2^{-\alpha k} \left(\fint\!\!\!\!\fint_{B^k_{x,t}}|u_k|^r\,\right)^{1/r} \lesssim \sum_{k=1}^\infty2^{-\alpha k}\left(\fint_{\Delta_k}|f|^r\,\right)^{1/r}\,.$$ We also have $III \leq (M(|u(\cdot,0)|^r)(z))^{1/r}$, where $M$ denotes the Hardy-Littlewood maximal operator. Altogether, for all $z\in\mathbb{R}^n$, after taking the supremum over $(x,t)\in\Gamma(z)$, we obtain \begin{equation}\label{eq4.pwnm} N_*((\mathcal{S} \nabla) f)(z)\lesssim \widetilde{N}_*((\mathcal{S} \nabla) f)(z)\lesssim \big(M\big(|(\mathcal{S} \nabla) f(\cdot,0)|^r\big)(z)\big)^{1/r} +\big(M\big(|f|^r\big)(z)\big)^{1/r}\,.\end{equation} Estimate \eqref{LP-4a}, and thus also \eqref{LP-4}, then follow readily from \eqref{eq4.pwnm} and \eqref{eq4.65}, since $r<p$, so that $M$ is bounded on $L^{p/r}(\mathbb{R}^n)$. \smallskip \noindent{\bf Part 4: proof of estimate \eqref{LP-5}}. We first recall the following square function estimate, whose proof is given in \cite[Section 3]{HMaMo}: \begin{equation*} \iint_{\mathbb{R}_+^{n+1}}\left|t\,\nabla\left(\mathcal{S}\nabla\right)f(x,t)\right|^{2}\,\frac{dxdt}{t} \lesssim\,\Vert f\Vert_{L^2(\mathbb{R}^n)}^{2}\,. \end{equation*} By the definition of $\mathcal{D}$, this implies in particular that \begin{equation} \iint_{\mathbb{R}_+^{n+1}}\left|t\,\nabla\mathcal{D} f(x,t)\right|^{2}\,\frac{dxdt}{t} \lesssim\,\Vert f\Vert_{L^2(\mathbb{R}^n)}^{2}\,.\label{eqA1} \end{equation} We now proceed to prove \eqref{LP-5}. The proof follows a classical argument of \cite{FS}. Fix a cube $Q$, set $Q_0:=32Q$ and observe that, since $\nabla\mathcal{D} 1=0$ by Corollary~\ref{r4.2}, we may assume without loss of generality that $f_{Q_0}:=\fint_{Q_0} f=0$.
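Let us briefly justify this standard reduction: since $\nabla\mathcal{D}1=0$, and since the $BMO$ norm is invariant under subtraction of constants, we have
\[
t\,\nabla\mathcal{D} f \,=\, t\,\nabla\mathcal{D}\big(f-f_{Q_0}\big)\,,\qquad
\|f-f_{Q_0}\|_{BMO(\mathbb{R}^n)}\,=\,\|f\|_{BMO(\mathbb{R}^n)}\,,
\]
so, in proving \eqref{LP-5} for the fixed cube $Q$, we may replace $f$ by $f-f_{Q_0}$ throughout.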
We split $f=f_0+\sum_{k=4}^\infty f_k$, where $f_0:= f1_{Q_0}$, and $f_k:= f1_{2^{k+2}Q\setminus 2^{k+1} Q}$, $k\geq 4$. By \eqref{eqA1}, we have \begin{equation*} \int_0^{\ell(Q)}\!\!\!\int_Q \left|t\,\nabla\mathcal{D} f_0(x,t)\right|^{2}\,\frac{dxdt}{t} \lesssim\, \int_{Q_0} |f|^2\,= \int_{Q_0} |f-f_{Q_0}|^2\,\lesssim \, |Q|\, \|f\|^2_{BMO(\mathbb{R}^n)}\,. \end{equation*} Now suppose that $k\geq 4$. Since $u_k:=\mathcal{D} f_k$ solves $Lu_k=0$ in $4Q\times(-4\ell(Q),4\ell(Q))$, by Caccioppoli's inequality, we have $$\int_0^{\ell(Q)}\!\!\!\int_Q \left|t\,\nabla\mathcal{D} f_k(x,t)\right|^{2}\,\frac{dxdt}{t} \lesssim\,\fint_{-\ell(Q)}^{2\ell(Q)}\!\!\int_{2Q} \left|\mathcal{D} f_k(x,t)-c_{k,Q}\right|^{2}\,dxdt\,,$$ where the constant $c_{k,Q}$ is at our disposal. We now choose $c_{k,Q}:= \mathcal{D} f_k(x_Q,t_Q)$, where $x_Q$ denotes the center of $Q$, and $t_Q>0$ is chosen such that $|t-t_Q|<2\ell(Q)$ for all $t\in (-\ell(Q),2\ell(Q))$. We observe that, by the Cauchy-Schwarz inequality, \begin{align}\begin{split}\label{eq4.44} \big|\mathcal{D} f_k&(x,t)-c_{k,Q}\big|^{2}\\[4pt] &\leq\, \int_{2^{k+2}Q\setminus2^{k+1}Q}\left|\nabla_{y,s}\big(E(x,t,y,s) - E(x_Q,t_Q,y,s)\big)|_{s=0}\right|^2dy\,\int_{2^{k+2}Q\setminus2^{k+1}Q}|f|^2\\[4pt] &\lesssim \, 2^{-2k\alpha}\fint_{2^{k+2}Q}|f|^2\,=\, 2^{-2k\alpha}\fint_{2^{k+2}Q}|f-f_{Q_0}|^2\lesssim\, k^2 \,2^{-2k\alpha}\, \|f\|^2_{BMO}\,, \end{split}\end{align} where we have used Lemma \ref{Lemma1}, and then the telescoping argument of \cite{FS}, in the last two inequalities. Summing in $k$, we obtain \eqref{LP-5}. \smallskip \noindent{\bf Part 5: proof of estimate \eqref{LP-6}}. It suffices to prove that $\|\mathcal{D} f\|_{\dot{C}^\beta(\mathbb{R}^{n+1}_+)} \leq C_\beta \|f\|_{\dot{C}^\beta(\mathbb{R}^n)}\,$, since then $\mathcal{D} f$ has an extension in $\dot{C}^\beta(\overline{\mathbb{R}^{n+1}_+})$.
Moreover, by Corollary~\ref{cor4.47} below, this extension must then satisfy $\mathcal{D}_L^{\pm}g (\cdot,0) = (\mp\frac{1}{2}I + K_L)g$. For $a> 0$, let $I:= Q\times [a, a+\ell(Q)]\,$ denote any $(n+1)$-dimensional cube contained in $\mathbb{R}^{n+1}_+$, where as usual, $Q$ is a cube in $\mathbb{R}^n$. By the well-known criterion of N. Meyers \cite{Me}, it is enough to show that for every such $I$, there is a constant $c_I$ such that \begin{equation}\label{eq4.45} \frac1{|I|}\iint_I\, \frac{|\mathcal{D} f(x,t)- c_I|}{\ell(I)^{\,\beta}} \,dxdt\, \leq \,C_\beta\, \|f\|_{\dot{C}^\beta(\mathbb{R}^n)}\,, \end{equation} for some uniform constant $C_\beta$ depending only on $\beta$ and the standard constants. To do this, we set $c_I := \fint_Q \mathcal{D} f(\cdot, a +\ell(Q))$, and $Q_0:=32Q$. For this choice of $c_I$, we may suppose without loss of generality that $f_{Q_0}:=\fint_{Q_0} f =0$, since $\nabla\mathcal{D} 1=0$ by Corollary~\ref{r4.2}. We then make the same splitting $f = f_0 + \sum_{k=4}^\infty f_k$ as in Part 4 above, which in turn induces a corresponding splitting $c_I=\sum c_{k,I}$, with $c_{k,I} := \fint_Q \mathcal{D} f_k(\cdot,a +\ell(Q))$. By \eqref{LP-4}, we have \[ \sup_{t> 0}\|\mathcal{D} f(\cdot,t)\|_{L^2(\mathbb{R}^n)} \leq C \|f\|_{L^2(\mathbb{R}^n)}\,. \] Consequently, since $\ell(I)=\ell(Q)$, \begin{align*} \frac1{|I|}\iint_I\,&\frac{|\mathcal{D} f_0(x,t)- c_{0,I}|}{\ell(I)^{\,\beta}} \,dxdt\, \leq\, \frac1{\ell(Q)^{\,\beta}}\, \sup_{t>0} \fint_Q |\mathcal{D} f_0(x,t)| \, dx \\[4pt] &\leq \, C\ell(Q)^{-\beta} \left(\fint_{Q_0}|f|^2\right)^{1/2} \,=\, C\ell(Q)^{-\beta} \left(\fint_{Q_0}|f-f_{Q_0}|^2\right)^{1/2}\,\leq \,C\, \|f\|_{\dot{C}^\beta(\mathbb{R}^n)}\,.
\end{align*} For $k\geq 4$, we then observe that, exactly as in \eqref{eq4.44}, we have \[ \ell(I)^{-\beta}\left|\mathcal{D} f_k(x,t)-c_{k,I}\right| \,\lesssim \, 2^{-k\alpha}\ell(Q)^{-\beta}\left(\fint_{2^{k+2}Q\setminus2^{k+1}Q}|f-f_{Q_0}|^2\right)^{1/2}\,\lesssim\, k \,2^{-k(\alpha-\beta)}\, \|f\|_{\dot{C}^\beta(\mathbb{R}^n)}\,, \] where in the last step we have again used the telescoping argument of \cite{FS}. Summing in~$k$, we obtain \eqref{LP-6}. This completes the proof of Theorem \ref{P-bdd1}. \end{proof} We conclude this section with the following immediate corollary of Theorem \ref{P-bdd1}. We will obtain more refined versions of some of these convergence results in Section~\ref{s6}. \begin{corollary}\label{cor4.47} Suppose that $L$ and $L^*$ satisfy the standard assumptions, let $\alpha$ denote the minimum of the De Giorgi/Nash exponents for $L$ and $L^*$ in \eqref{eq2.DGN}, set $p_{\alpha}:=n/(n+\alpha)$ and let $p^+>2$ be as in Theorem \ref{P-bdd1}. \noindent If $1<p<p^+$ and $p^+/(p^+-1)<q<\infty$, then for all $f\in L^p(\mathbb{R}^n)$ and $g\in L^q(\mathbb{R}^n)$, one has \begin{enumerate} \item[(i)] $-\langle A\nabla{\mathcal{S}}_L^{\pm} f(\cdot, t),e_{n+1}\rangle \to \left(\pm \frac{1}{2}I + \widetilde{K}_L\right)f$ weakly in $L^p$ as $t\to 0^{\pm}$. \item[(ii)] $\nabla \mathcal{S}^{\pm}_L f(\cdot,t) \to \left(\mp\frac{1}{2A_{n+1,n+1}}e_{n+1} + {\bf T}_L\right)f$ weakly in $L^p$ as $t\to 0^{\pm}$. \item[(iii)] $\mathcal{D}_L^{\pm}g (\cdot,t) \to \left(\mp\frac{1}{2}I + K_L \right)g$ weakly in $L^q$ as $t\to 0^{\pm}$.
\end{enumerate} If $p_{\alpha}<p\leq 1$ and $0\leq \beta<\alpha$, then for all $f\in H^p(\mathbb{R}^n)$ and $g\in \Lambda^\beta(\mathbb{R}^n)$, one has \begin{enumerate} \item[(iv)] $-\langle A\nabla{\mathcal{S}}_L^{\pm} f(\cdot, t),e_{n+1}\rangle \to \left(\pm \frac{1}{2}I + \widetilde{K}_L\right)f$ in the sense of tempered distributions as $t\to 0^{\pm}$. \item[(v)] $\nabla_x \mathcal{S}^{\pm}_L f(\cdot,t) \to \nabla_x S_{\!L}f$ in the sense of tempered distributions as $t\to 0^{\pm}$. \item[(vi)] $\mathcal{D}_L^{\pm}g (\cdot,t)\to \left(\mp\frac{1}{2}I + K_L \right)g$ in the weak* topology on $\Lambda^\beta(\mathbb{R}^n)$, $0\leq\beta<\alpha$, as $t\to 0^{\pm}$. Moreover, if $0<\beta<\alpha$, then $\mathcal{D}_L^{\pm}g (\cdot,0) = \left(\mp\frac{1}{2}I + K_L \right)g$ in the sense of $\dot{C}^\beta(\mathbb{R}^n)$. \end{enumerate} \noindent If $p_{\alpha}<p<p^+$, then for all $f\in H^p(\mathbb{R}^n)$, one has \begin{enumerate} \item[(vii)] $\mathcal{S}^{\pm}_L f(\cdot,t) \to S_{\!L}f$ in the sense of tempered distributions as $t\to 0^{\pm}$. \end{enumerate} The analogous results hold for $L^*$. \end{corollary} \begin{proof} Items (i)-(v) of the corollary follow immediately from Theorem \ref{P-bdd1}, Proposition~\ref{r4.1}, Lemma \ref{l4.ntjump}, and the fact that $L^2\cap H^p$ is dense in $H^p$ for $0<p<\infty$. We omit the details. Item (vii) is ``elementary'' in the case $p>1$, by \eqref{G-est1}-\eqref{G-est3}. The case $p_{\alpha}<p\leq 1$ follows readily from the case $p=2$, the density of $L^2\cap H^p$ in $H^p$, estimate \eqref{LP-3a}, Proposition~\ref{r4.1}, and the Sobolev embedding (see, e.g., \cite[III.5.21]{St2})$$\|h\|_{L^q(\mathbb{R}^n)} \lesssim \|\nabla_x h\|_{H^p(\mathbb{R}^n)}\,,\qquad \frac1q=\frac1p -\frac1n,$$ since $p>n/(n+1)$ implies $q>1$. Again we omit the routine details.
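One way to organize those routine details is the following sketch: for $f\in L^2\cap H^p$, estimate \eqref{LP-3a} (together with Proposition~\ref{r4.1}) and the Sobolev embedding just cited yield, uniformly in $t$,
\[
\|\mathcal{S}^{\pm}_L f(\cdot,t)-S_{\!L}f\|_{L^q(\mathbb{R}^n)}\,\lesssim\,
\big\|\nabla_x\big(\mathcal{S}^{\pm}_L f(\cdot,t)-S_{\!L}f\big)\big\|_{H^p(\mathbb{R}^n)}\,\lesssim\,\|f\|_{H^p(\mathbb{R}^n)}\,.
\]
Testing against a Schwartz function, and using this uniform bound for an $H^p$-approximating sequence in $L^2\cap H^p$ together with the convergence already known in the case $p=2$, one obtains $\mathcal{S}^{\pm}_L f(\cdot,t)\to S_{\!L}f$ in the sense of tempered distributions for all $f\in H^p(\mathbb{R}^n)$.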
We prove item (vi) as follows, treating only layer potentials for $L$ in $\mathbb{R}^{n+1}_+$, as the proofs for $L^*$ and in $\mathbb{R}^{n+1}_-$ are the same. We recall that $\Lambda^\beta(\mathbb{R}^n) = (H^p_{\mathrm{at}}(\mathbb{R}^n))^*$, with $p=n/(n+\beta)$ (so that, in particular, $n/(n+1) <p\leq 1$). It is therefore enough to prove that \begin{equation}\label{7.dualitylimit} \left\langle a, \mathcal{D}_Lg(\cdot,t) \right\rangle \xrightarrow{t\to 0^+} \left\langle a,\Big(( - 1/2) I + K_L\Big)g \right\rangle, \end{equation} where $a$ is an $H^p(\mathbb{R}^n)$-atom supported in a cube $Q\subset\mathbb{R}^n$ as in \eqref{atom-X}, and $\langle\cdot,\cdot\rangle$ denotes the dual pairing between $H^p_{\mathrm{at}}(\mathbb{R}^n)$ and $\Lambda^\beta(\mathbb{R}^n)$. Using Corollary~\ref{r4.2} and dualizing, we have \begin{align*}\begin{split} \left\langle a, \mathcal{D}_Lg(\cdot,t) \right\rangle &=\, \left\langle a, \mathcal{D}_L(g-g_Q)(\cdot,t) \right\rangle =\, \left \langle \partial_{\nu^*} \mathcal{S}_{L^*}a (\cdot,-t), g-g_Q \right\rangle \\[4pt] &=\, \left \langle \partial_{\nu^*} \mathcal{S}_{L^*}a (\cdot,-t), (g-g_Q) 1_{\lambda Q} \right\rangle \,+\, \left \langle \partial_{\nu^*} \mathcal{S}_{L^*}a (\cdot,-t), (g-g_Q) 1_{(\lambda Q)^c} \right\rangle \\[4pt] &=: I_t(\lambda)+II_t(\lambda)\,, \end{split}\end{align*} where $g_Q:= \fint_Q g$, and $\lambda>0$ is at our disposal. Since $a \in L^2$ and $(g-g_Q) 1_{\lambda Q}\in L^2$, by Lemma \ref{l4.ntjump}, we have \[ I_t(\lambda) \xrightarrow{t \to 0^+}\left \langle \left((-1/2)I + \widetilde{K}_{L^*}\right)a, (g-g_Q) 1_{\lambda Q} \right\rangle =\left \langle a,\Big(( - 1/2) I + K_{L}\Big)(g-g_Q) 1_{\lambda Q} \right\rangle\,.
\] Setting $R_j:=2^{j+1} \lambda Q\setminus 2^j \lambda Q$, we have \begin{align*} |II_t(\lambda)| \, &\leq\, \sum\limits_{j=0}^\infty \| \partial_{\nu^*} \mathcal{S}_{L^*}a (\cdot,-t)\|_{L^2(R_j)} \,\|g-g_{Q}\|_{L^2(R_j)} \\[4pt] &\lesssim \sum_j (2^j\lambda)^{-\alpha-\frac{n}{2}}|Q|^{\frac12-\frac1p} \,\|g-g_{Q}\|_{L^2(R_j)} \,\lesssim\, \sum\limits_j (2^j\lambda)^{\,(\beta-\alpha)/2} \|g\|_{\Lambda^\beta} \approx \lambda^{\,(\beta-\alpha)/2}\|g\|_{\Lambda^\beta}, \end{align*} where in the second and third inequalities, we used \eqref{eqSt-9} and then a telescoping argument. We observe that the bound \eqref{eqSt-9} is uniform in $t$, so that $$\lim_{\lambda\to \infty} II_t(\lambda) = 0\,,\quad {\rm uniformly \,\,in}\,\, t\,.$$ Similarly, by \eqref{eqSt-10}~(iii) with $m:=((-1/2)I + \widetilde{K}_{L^*})a$, we have \begin{align*} \Big\langle a,\Big(( - 1/2) I + K_{L}\Big)(g-&g_Q) \textbf{1}_{(\lambda Q)^c} \Big\rangle\\[4pt] &=\,\Big\langle \left((-1/2)I + \widetilde{K}_{L^*}\right)a, (g-g_Q) \textbf{1}_{(\lambda Q)^c} \Big\rangle \lesssim \lambda^{\,(\beta-\alpha)/2}\|g\|_{\Lambda^\beta}\, \to\, 0\,, \end{align*} as $\lambda \to \infty$. Using Corollary~\ref{r4.2} again, we then obtain \eqref{7.dualitylimit}. \end{proof} \section{Solvability via the method of layer potentials: Proof of Theorem \ref{th2}} \setcounter{equation}{0} The case $p=2$ of Theorem~\ref{th2} was proved in \cite{AAAHK} via the method of layer potentials. We shall now use Theorem~\ref{P-bdd1} and perturbation techniques to extend that result to the full range of indices stated in Theorem~\ref{th2}.
\begin{proof}[Proof of Theorem \ref{th2}] Let $L:=-\operatorname{div} A\nabla$ and $L_0:=-\operatorname{div} A_0\nabla$ be as in \eqref{L-def-1} and \eqref{eq1.1*}, with $A$ and $A_0$ both $t$-independent, and suppose that $A_0$ is real symmetric. Let $\varepsilon_0>0$ and suppose that $\|A-A_0\|_\infty <\varepsilon_0$. We suppose henceforth that $\varepsilon_0>0$ is small enough (but not yet fixed) so that, by Remark~\ref{firstrem}, every operator $L_\sigma:= (1-\sigma)L_0 +\sigma L,\, 0\leq\sigma\leq 1$, along with its Hermitian adjoint, satisfies the standard assumptions, with uniform control of the ``standard constants''. For the remainder of this proof, we will let $\varepsilon$ denote an arbitrary small positive number, not necessarily the same at each occurrence, but ultimately depending only on the standard constants and the perturbation radius $\varepsilon_0$. Let us begin with the Neumann and Regularity problems. As mentioned in the introduction, for real symmetric coefficients (the case $A=A_0$), solvability of $(N)_p$ and $(R)_p$ was obtained in \cite{KP} in the range $1\leq p <2+\varepsilon$. Moreover, although not stated explicitly in \cite{KP}, the methods of that paper provide the analogous Hardy space results in the range $1-\varepsilon<p<1$, but we shall not use this fact here. We begin with two key observations. Our first observation is that, by Theorem \ref{P-bdd1} and analytic perturbation theory, \begin{align}\begin{split}\label{eq5.1a} \|\big(\widetilde{K}_{L} - \widetilde{K}_{L_0}\big) f\|_{H^p(\mathbb{R}^n)} \,+\, \|\nabla_x&\left(S_{\!L} - S_{\!L_0}\right) f\|_{H^p(\mathbb{R}^n)}\\[4pt] &\leq\, C_p\, \|A-A_0\|_{L^\infty(\mathbb{R}^n)}\,\|f\|_{H^p(\mathbb{R}^n)}\, ,\quad 1-\varepsilon < p<2+\varepsilon\,. \end{split}\end{align} We may verify~\eqref{eq5.1a} by following the arguments in \cite[Section 9]{HKMP2}.
Our second observation is that we have the pair of estimates \begin{equation}\label{eq5.2a} \|f\|_{H^p(\mathbb{R}^n)}\,\leq \,C_p\, \|\big((\pm1/2)I + \widetilde{K}_{L_0}\big) f\|_{H^p(\mathbb{R}^n)} \, ,\quad 1\leq p<2+\varepsilon\,, \end{equation} and \begin{equation}\label{eq5.3a} \|f\|_{H^p(\mathbb{R}^n)}\,\leq \,C_p\, \|\nabla_xS_{\!L_0} f\|_{H^p(\mathbb{R}^n)}\, ,\quad 1\leq p<2+\varepsilon\,. \end{equation} We verify \eqref{eq5.2a} and \eqref{eq5.3a} by using Verchota's argument in \cite{V} as follows. First, it is enough to establish these estimates for $f\in L^2(\mathbb{R}^n)\cap H^p(\mathbb{R}^n)$, which is dense in $H^p(\mathbb{R}^n)$. Then, by the triangle inequality, we have \begin{align}\begin{split}\label{eq5.4a} C_p \|f\|_{H^p(\mathbb{R}^n)}\,&\leq\, \|\big((1/2)I + \widetilde{K}_{L_0}\big) f\|_{H^p(\mathbb{R}^n)} \,+ \,\|\big((1/2)I - \widetilde{K}_{L_0}\big) f\|_{H^p(\mathbb{R}^n)} \\[4pt] &= \|\partial_\nu u_0^+\|_{H^p(\mathbb{R}^n)} + \|\partial_\nu u_0^-\|_{H^p(\mathbb{R}^n)}\,, \end{split}\end{align} where $u_0^{\pm} :=\mathcal{S}_{L_0}^{\pm} f$, and we have used the jump relation formula in Lemma~\ref{l4.ntjump}~(i). Moreover, by the solvability of $(N)_p$ and $(R)_p$ in \cite{KP}, which we apply in both the upper and lower half-spaces, and the fact that the tangential gradient of the single layer potential does not jump across the boundary, we have that \begin{align*}\begin{split} \|\partial_\nu u_0^+\|_{H^p(\mathbb{R}^n)} &\approx \|\nabla_x u^+_0(\cdot,0)\|_{H^p(\mathbb{R}^n)} \\[4pt] &=\,\|\nabla_x u^-_0(\cdot,0)\|_{H^p(\mathbb{R}^n)} \approx \|\partial_\nu u_0^-\|_{H^p(\mathbb{R}^n)}\,,\quad 1\leq p<2+\varepsilon\,, \end{split}\end{align*} where the implicit constants depend only on $p$, $n$ and ellipticity. Combining the latter estimate with \eqref{eq5.4a}, we obtain \eqref{eq5.2a} and \eqref{eq5.3a}.
With \eqref{eq5.2a} and \eqref{eq5.3a} in hand, we obtain invertibility of the mappings $$(\pm1/2)I + \widetilde{K}_{L_0}: H^p(\mathbb{R}^n)\to H^p(\mathbb{R}^n)\,,\,\, {\rm and} \,\, S_{\!L_0}: H^p(\mathbb{R}^n) \to \dot{H}^p_1(\mathbb{R}^n)\,,$$ by a method of continuity argument which connects $L_0$ to the Laplacian $-\Delta$ via the path $\tau\mapsto L_\tau := (1-\tau)(-\Delta) +\tau L_0,\, 0\leq\tau\leq1$. Indeed, the ``standard constants'' are uniform for every $L_\tau$ in the family, so we have the analogue of \eqref{eq5.1a}, with $L$ and $L_0$ replaced by $L_{\tau_1}$ and $L_{\tau_2}$, for any $\tau_1,\tau_2 \in [0,1]$. We omit the details. We now fix $\varepsilon_0>0$ small enough, depending on the constants in \eqref{eq5.1a}-\eqref{eq5.3a}, so that \eqref{eq5.2a} and \eqref{eq5.3a} hold with $L$ in place of $L_0$. Consequently, by another method of continuity argument, in which we now connect $L$ to $L_0$ via the path $\sigma\mapsto L_\sigma:= (1-\sigma)L_0 +\sigma L,\, 0\leq\sigma\leq 1$, and use the fact that \eqref{eq5.1a} holds not only for $L$ and $L_0$, but also uniformly for any intermediate pair $L_{\sigma_1}$ and $L_{\sigma_2}$, we obtain invertibility of the mappings \begin{equation}\label{eq5.5a} (\pm1/2)I + \widetilde{K}_{L}: H^p(\mathbb{R}^n)\to H^p(\mathbb{R}^n)\,,\,\, {\rm and} \,\, S_{\!L}: H^p(\mathbb{R}^n) \to \dot{H}^p_1(\mathbb{R}^n)\,, \end{equation} initially in the range $1\leq p<2+\varepsilon$. Again we omit the routine details. Moreover, since \eqref{LP-3a}-\eqref{LP-3} hold in the range $n/(n+\alpha)<p<p^+$, we may apply the extension of Sneiberg's theorem obtained in \cite{KM} to deduce that the operators in \eqref{eq5.5a} are invertible in the range $1-\varepsilon<p<2+\varepsilon$.
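For the reader's convenience, we record the elementary Neumann series fact underlying both continuity arguments (stated for a Banach space $X$; for $p\leq1$ one argues analogously with the $p$-th power of the quasi-norm): if $T_0$ is invertible on $X$ and $\|T_1-T_0\|_{X\to X}<\|T_0^{-1}\|_{X\to X}^{-1}$, then $T_1=T_0\big(I-T_0^{-1}(T_0-T_1)\big)$ is invertible, with
\[
T_1^{-1} \,=\, \sum_{j=0}^\infty \big(T_0^{-1}(T_0-T_1)\big)^j\, T_0^{-1}\,,\qquad
\|T_1^{-1}\|_{X\to X}\,\leq\, \frac{\|T_0^{-1}\|_{X\to X}}{1-\|T_0^{-1}\|_{X\to X}\,\|T_1-T_0\|_{X\to X}}\,.
\]
Since, by the uniform version of \eqref{eq5.1a}, the operator norm of the difference along either path is controlled by the distance between the parameter values, invertibility propagates from one endpoint of the path to the other in finitely many steps of a fixed size.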
We then apply the extension of the open mapping theorem obtained in \cite[Chapter~6]{MMMM}, which holds on quasi-Banach spaces, to deduce that the inverse operators are bounded. At this point, we may construct solutions of $(N)_p$ and $(R)_p$ as follows. Given Neumann data $g\in H^p(\mathbb{R}^n)$, or Dirichlet data $f\in \dot{H}^{1,p}(\mathbb{R}^n)$, with $1-\varepsilon<p<2+\varepsilon$, we set $$u_{N}:= \mathcal{S}_L \left((1/2)I +\widetilde{K}_L\right)^{-1} g\,, \qquad u_{R}:= \mathcal{S}_L(S_{\!L}^{-1} f)$$ and observe that $u_N$ then solves $(N)_p$, and $u_R$ solves $(R)_p$, by \eqref{LP-1}, the invertibility of $(1/2)I +\widetilde{K}_L$ and of $S_{\!L}$ (respectively), and Corollary \ref{cor4.47} (which guarantees weak convergence to the data; we defer momentarily the matter of non-tangential convergence). Next, we consider the Dirichlet problem. Since the previous analysis also applies to $L^*$, we dualize our estimates for $(\pm1/2)I +\widetilde{K}_{L^*}$ to obtain that $(\pm1/2)I+K_{L}$ is bounded and invertible from $L^q(\mathbb{R}^n)$ to $L^q(\mathbb{R}^n)$, $2-\varepsilon <q<\infty$, and from $\Lambda^\beta(\mathbb{R}^n)$ to $\Lambda^\beta(\mathbb{R}^n)$, $0\leq \beta<\varepsilon$. Given Dirichlet data $f$ in $L^q(\mathbb{R}^n)$ or $\Lambda^\beta(\mathbb{R}^n)$, in the stated ranges, we then construct the solution to the Dirichlet problem by setting $$u:= \mathcal{D}_L \Big((-1/2)I +K_{L}\Big)^{-1} f\,,$$ which solves $(D)_q$ (at least in the sense of weak convergence to the data) or $(D)_{\Lambda^\beta}$, by virtue of \eqref{LP-4}-\eqref{LP-6} and Corollary \ref{cor4.47}. We note that, at present, our solutions to $(N)_p$, $(R)_p$, and $(D)_q$ assume their boundary data in the weak sense of Corollary \ref{cor4.47}.
In the next section, however, we establish some results of Fatou type (see Lemmata \ref{lemma6.1}, \ref{lemma6.2a} and \ref{lemma6.2}), which allow us to immediately deduce the stronger non-tangential and norm convergence results required here. It remains to prove that our solutions to $(N)_p$ and $(R)_p$ are unique among the class of solutions satisfying $\widetilde{N}_*(\nabla u)\in L^p(\mathbb{R}^n)$, and that our solutions to $(D)_q$ (resp. $(D)_{\Lambda^\beta}$) are unique among the class of solutions satisfying $N_*(u)\in L^q(\mathbb{R}^n)$ (resp. $t\nabla u \in T^\infty_2(\mathbb{R}^{n+1}_+)$, if $\beta=0$, or $u \in \dot{C}^\beta(\mathbb{R}^n)$, if $\beta>0$). We refer the reader to \cite{HMaMo} for a proof of the uniqueness for $(N)_p$, $(R)_p$ and $(D)_q$ for a more general class of operators. Also, we refer the reader to~\cite{BM} for a proof of the uniqueness for $(D)_{\Lambda^\beta}$ in the case $\beta>0$. Finally, the uniqueness for $(D)_{\Lambda^0}=(D)_{BMO}$ follows by combining Theorem~\ref{P-bdd1} and Corollaries~\ref{r4.2} and~\ref{cor4.47} with the uniqueness result in Proposition~\ref{prop-BMO-unique} below (see also Remark~\ref{rem-BMO-Thm1.1}). \end{proof} We conclude this section by proving a uniqueness result for $(D)_{\Lambda^0}=(D)_{BMO}$. \begin{proposition}\label{prop-BMO-unique} Suppose that the hypotheses of Theorem \ref{th2} hold. If $Lu=0$ in~$\mathbb{R}^{n+1}_+$ with \begin{align}\label{eqb1} \sup_{t>0}\|u(\cdot,t)\|_{BMO(\mathbb{R}^n)} & <\infty,\\[4pt] \label{eq2} \|u\|_{BMO(\mathbb{R}^{n+1}_+)} & <\infty, \end{align} and $u(\cdot,t)\to 0$ in the weak* topology on $BMO(\mathbb{R}^n)$ as $t\to 0^+$, then $u= 0$ in~$\mathbb{R}^{n+1}_+$ in the sense of $BMO(\mathbb{R}^n)$. The analogous results hold for $L^*$ and in the lower half-space.
\end{proposition} \begin{remark}\label{rem-BMO-Thm1.1} We note that \eqref{eq2} follows from the Carleson measure estimate \begin{equation}\label{eqb3} \sup_Q\frac1{|Q|}\iint_{R_Q} |\nabla u(x,t)|^2\, t \,dx dt < \infty\,, \end{equation} by the Poincar\'e inequality of \cite{HuS}, but we will not make explicit use of \eqref{eqb3} in the proof of Proposition~\ref{prop-BMO-unique}. \end{remark} \begin{proof} For each $\varepsilon>0$, we set $u_\varepsilon(x,t):=u(x,t+\varepsilon)$ and $f_\varepsilon:= u_\varepsilon(\cdot,0)=u(\cdot,\varepsilon)\,$. First, note that $u_\varepsilon\in \dot{C}^\beta(\overline{\mathbb{R}^{n+1}_+})$ for $0<\beta\leq\alpha$, with a bound that depends on $\varepsilon$, where $\alpha$ is the De Giorgi/Nash exponent for $L$ in \eqref{eq2.DGN}. To see this, we use the ``mean oscillation'' characterization of $\dot{C}^\beta$ due to N. Meyers (see \eqref{eq4.45}). In particular, for $(n+1)$-dimensional boxes $I$ with side length $\ell(I)\leq \varepsilon/2$, we use the DG/N estimate \eqref{eq2.DGN}, and for boxes with $\ell(I) \geq \varepsilon/2$, we use \eqref{eq2}. We omit the routine details. By the uniqueness of $(D)_{\Lambda^\beta}$ for $\beta>0$ (see~\cite{BM}), we must then have \begin{equation}\label{epseqn} u_\varepsilon(\cdot,t) = P_t f_\varepsilon :=\mathcal{D}_L \big((-1/2)I +K_{L}\big)^{-1}f_\varepsilon,\qquad\forall\,\varepsilon>0\,. \end{equation} Next, by \eqref{eqb1}, we have $\sup_{\varepsilon>0} \|f_\varepsilon\|_{BMO(\mathbb{R}^n)} < \infty$, and so there exists a subsequence $f_{\varepsilon_k}$ converging in the weak$^*$ topology on $BMO(\mathbb{R}^n)$ to some $f$ in $BMO(\mathbb{R}^n)$. Let $g$ denote a finite linear combination of $H^1$-atoms, and for each $t>0$, set $g_t := \mathop{\rm adj}\nolimits(P_t) g$, where $\mathop{\rm adj}\nolimits$ denotes the $n$-dimensional Hermitian adjoint.
Let $\langle \cdot,\cdot \rangle$ denote the (complex-valued) dual pairing between $BMO(\mathbb{R}^n)$ and $H^1_{\mathrm{at}}(\mathbb{R}^n)$. Then, since $\mathop{\rm adj}\nolimits(P_t)$ is bounded on $H^1(\mathbb{R}^n)$, uniformly in $t>0$, by Theorem~\ref{P-bdd1}, we have \begin{align*} \int_{\mathbb{R}^n} (P_t f)\,\overline{g} &= \langle f\,,\, g_t \rangle = \lim_{k \to \infty} \langle f_{\varepsilon_k}\,,\, g_t \rangle\\ &= \lim_{k \to \infty} \int_{\mathbb{R}^n}(P_t f_{\varepsilon_k}) \,\overline{g} = \lim_{k \to \infty} \int_{\mathbb{R}^n} u(\cdot,t+\varepsilon_k)\,\overline{g} =\int_{\mathbb{R}^n} u(\cdot,t)\, \overline{g}\,, \end{align*} where in the next-to-last step we used~\eqref{epseqn}, and in the last step we used the DG/N estimate \eqref{eq2.DGN}, and the fact that $g$ is a finite linear combination of atoms. Since $g$ was an arbitrary element of a dense subset of $H^1(\mathbb{R}^n)$, this shows that $u(\cdot,t) = P_t f\,$. Now, since $u(\cdot,t) =P_t f$ for some $f$ in $BMO(\mathbb{R}^n)$, by Corollary~\ref{cor4.47}, we have $u(\cdot,t)\to f$ in the weak* topology as $t\rightarrow 0^+$. On the other hand, we assumed that $u(\cdot,t)\to 0$ in the weak* topology, so $f=0$, and thus $u(\cdot,t) = P_t f = 0$ in the sense of $BMO(\mathbb{R}^n)$. \end{proof} \section{Boundary behavior of solutions}\label{s6} \setcounter{equation}{0} In this section, we present some a priori convergence results of ``Fatou type'', which show that Theorem \ref{th2} is optimal, in the sense that the data must necessarily belong to the stated space in order to obtain the desired quantitative estimate for the solution or its gradient. The results also show that in some cases, our solutions enjoy convergence to the data in a stronger sense than that provided by Corollary \ref{cor4.47}. The results are contained in three lemmata.
The first two results below are for the Neumann and Regularity problems. \begin{lemma}\label{lemma6.1} Let $n/(n+1)<p<\infty$. Suppose that $L$ and $L^*$ satisfy the standard assumptions. If $Lu=0$ in $\mathbb{R}^{n+1}_{\pm}$ and $\widetilde{N}_*^{\pm}(\nabla u) \in L^p(\mathbb{R}^n)$, then the co-normal derivative $\partial_\nu u(\cdot,0)$ exists in the variational sense and belongs to $H^p(\mathbb{R}^n)$; i.e., there exists a unique $g\in H^p(\mathbb{R}^n)$, and we set $\partial_\nu u(\cdot,0):=g$, with \begin{equation}\label{eqb2} \|g\|_{H^p(\mathbb{R}^n)}\leq C\, \|\widetilde{N}_*^{\pm}(\nabla u)\|_{L^p(\mathbb{R}^n)}\,, \end{equation} such that \begin{equation}\label{eq5.2} \int_{\mathbb{R}^{n+1}_{\pm}} A\nabla u\cdot \nabla\Phi\, dX = \pm\langle g\,, \Phi (\cdot,0)\rangle\,,\qquad\forall\, \Phi \in C^\infty_0(\mathbb{R}^{n+1})\,, \end{equation} where $\langle g\,, \Phi (\cdot,0)\rangle :=\int_{\mathbb{R}^n} g(x)\, \Phi(x,0)\, dx$, if $p\geq 1$, and $\langle g\,, \Phi (\cdot,0)\rangle$ denotes the usual pairing of the distribution $g$ with the test function $\Phi(\cdot,0)$, if $p<1$. Moreover, there exists a unique $f\in \dot{H}^{1,p}(\mathbb{R}^n)$, and we set $u(\cdot,0):= f$, with \begin{equation}\label{eq6.3*} \|f\|_{\dot{H}^{1,p}(\mathbb{R}^n)}\leq C\|\widetilde{N}_*^{\pm}(\nabla u)\|_{L^p(\mathbb{R}^n)}\,, \end{equation} such that $u\to f$ non-tangentially. \end{lemma} \begin{lemma}\label{lemma6.2a} Suppose that $L$ and $L^*$ satisfy the standard assumptions. Suppose also that $Lu=0$ in $\mathbb{R}^{n+1}_{\pm}$ and $\widetilde{N}_*^{\pm}(\nabla u) \in L^p(\mathbb{R}^n)\,$ for some $n/(n+1)<p<\infty$.
There exists $\varepsilon>0$, depending only on the standard constants, such that in the case $1<p<2+\varepsilon$, one has \begin{equation}\label{eq6.4*} \sup_{\pm t>0} \|\nabla u(\cdot,t)\|_{L^p(\mathbb{R}^n)} \leq C_p \|\widetilde{N}_*^{\pm}(\nabla u)\|_{L^p(\mathbb{R}^n)}\,, \end{equation} \begin{equation}\label{eq6.5*} -e_{n+1} \cdot A\nabla u(\cdot,t) \to \partial_\nu u(\cdot,0) \textrm{ weakly in } L^p \textrm{ as } t\to 0^{\pm}\,, \end{equation} \begin{equation}\label{eq6.6*} \nabla_x u(\cdot,t) \to \nabla_x u(\cdot,0) \textrm{ weakly in } L^p \textrm{ as } t\to 0^{\pm}\,, \end{equation} where $\partial_\nu u(\cdot,0) \in L^p(\mathbb{R}^n)$ and $u(\cdot,0)\in \dot{L}^p_1(\mathbb{R}^n)$ denote the variational co-normal and non-tangential boundary trace, respectively, defined in Lemma \ref{lemma6.1}. Also, in the case $n/(n+1)<p\leq 1$, if there exists $h\in \dot{H}^{1,p}(\mathbb{R}^n)$ such that $\nabla_x u(\cdot,t) \to \nabla_x h$ in the sense of tempered distributions, then $u(\cdot,0)=h$ in the sense of $\dot{H}^{1,p}(\mathbb{R}^n)$, where $u(\cdot,0)\in \dot{H}^{1,p}(\mathbb{R}^n)$ denotes the non-tangential boundary trace defined in Lemma \ref{lemma6.1}. \end{lemma} The third and final result below is for the Dirichlet problem. \begin{lemma}\label{lemma6.2} Suppose that the hypotheses of Theorem \ref{th2} hold. Let $2-\varepsilon<p<\infty$ denote the range of well-posedness of $(D)_p$. If $Lu=0$ in $\mathbb{R}^{n+1}_{\pm}$ and \begin{equation}\label{eq6.3} \|N_*^{\pm}(u)\|_{L^p(\mathbb{R}^n)}<\infty\,, \end{equation} then there exists a unique $f\in L^p(\mathbb{R}^n)$, and we set $u(\cdot,0):=f$, such that \begin{equation}\label{eq6.4} u \to f \textrm{ non-tangentially, and } u(\cdot,t) \to f \textrm{ in } L^p(\mathbb{R}^n)\textrm{ as } t\to 0^{\pm}\,.
\end{equation} \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma6.1}] We suppose that $\widetilde{N}_*(\nabla u) \in L^p(\mathbb{R}^n)$, and we seek a variational co-normal $g\in H^p(\mathbb{R}^n)$, and a non-tangential limit $f \in \dot{H}^{1,p}(\mathbb{R}^n)$, satisfying the bounds \eqref{eqb2} and \eqref{eq6.3*}. The case $p>1$ may be obtained by following, {\it mutatis mutandis}, the proof of \cite[Theorem 3.1]{KP} (see also \cite[Lemma 4.3]{AAAHK}, stated in this paper as Lemma \ref{l4.1}, which treats the case $p=2$ by following \cite[Theorem 3.1]{KP}). We omit the details. The case $p\leq 1$, which is a bit more problematic, is treated below. First, we consider the existence of the non-tangential limit $f\in \dot{H}^{1,p}(\mathbb{R}^n)$, assuming now that $\widetilde{N}_*(\nabla u) \in L^p(\mathbb{R}^n)$ with $n/(n+1)<p\leq 1$. In fact, following the proof of \cite[Theorem~3.1, p.~462]{KP}, we see that the non-tangential limit $f(x)$ exists at every point $x\in\mathbb{R}^n$ for which $\widetilde{N}_*(\nabla u)(x)$ is finite (thus, a.e. in $\mathbb{R}^n$, no matter the value of $p$), and moreover, for any pair of points $x,y\in\mathbb{R}^n$ at which $\widetilde{N}_*(\nabla u)(x)$ and $\widetilde{N}_*(\nabla u)(y)$ are finite, we have the pointwise estimate \[ |f(x) -f(y)| \,\leq \,C\,|x-y| \left(\widetilde{N}_*(\nabla u)(x) \,+\,\widetilde{N}_*(\nabla u)(y)\right)\,, \] where $C$ depends only on the standard constants. Thus, by the criterion of \cite{KoS}, we obtain immediately that $f\in \dot{H}^{1,p}(\mathbb{R}^n)$, with $\|f\|_{\dot{H}^{1,p}(\mathbb{R}^n)} \lesssim \|\widetilde{N}_*(\nabla u)\|_{L^p(\mathbb{R}^n)}$. Next, we consider the existence of the co-normal derivative $g\in H^p(\mathbb{R}^n)$. We use $\langle \cdot,\cdot\rangle$ to denote the usual pairing of tempered distributions ${\bf S}'(\mathbb{R}^d)$ and Schwartz functions ${\bf S}(\mathbb{R}^d)$, where $d$ may be either $n$ or $n+1$ (the usage will be clear from the context).
By Lemma~\ref{L-improve}, for all $0<q \leq 2n/(n+1)$, we have \begin{equation}\label{eq3} \|\nabla u\|_{L^{q(n+1)/n}(\mathbb{R}^{n+1}_+)} \leq C(q,n)\, \|\widetilde{N}_*(\nabla u)\|_{L^q(\mathbb{R}^n)}\,, \end{equation} and since $\widetilde{N}_*(\nabla u)\in L^p(\mathbb{R}^n)$, this implies that $\nabla u \in L^r(\mathbb{R}^{n+1}_+)$, with $r:= p(n+1)/n >1.$ We may then define a linear functional $\Lambda=\Lambda_u \in \mathcal{S}'(\mathbb{R}^{n+1})$ by $$\langle\Lambda,\Phi\rangle:= \iint_{\mathbb{R}^{n+1}_+} A\nabla u\cdot \nabla\Phi\,,\qquad \forall\,\Phi \in \mathcal{S}(\mathbb{R}^{n+1})\,.$$ For $\varphi\in\mathcal{S}(\mathbb{R}^n)$, we say that $\Phi\in \mathcal{S}(\mathbb{R}^{n+1})$ is an extension of $\varphi$ if $\Phi(\cdot,0)=\varphi$. We now define a linear functional $g \in \mathcal{S}'(\mathbb{R}^n)$ by setting $$\langle g,\varphi\rangle:= \langle\Lambda,\Phi\rangle\,,\qquad \forall\,\varphi \in \mathcal{S}(\mathbb{R}^n)\,,$$ where $\Phi$ is any extension of $\varphi$. Since such an extension of $\varphi$ need not be unique, we must verify that $g$ is well-defined. To this end, fix $\varphi\in\mathcal{S}(\mathbb{R}^n)$, and let $\Phi_1,\Phi_2\in\mathcal{S}(\mathbb{R}^{n+1})$ denote any two extensions of $\varphi$. Then $\Psi:= \Phi_1-\Phi_2\in \mathcal{S}(\mathbb{R}^{n+1}),$ with $\Psi(\cdot,0)\equiv 0$, and so $\langle\Lambda,\Psi\rangle = 0$, by the definition of a (weak) solution. Thus, the linear functional $g$ is well-defined, and so $u$ has a variational co-normal $\partial_\nu u(\cdot,0):=g$ in $\mathcal{S}'(\mathbb{R}^n)$ satisfying~\eqref{eq5.2}. It remains to prove \eqref{eqb2}. For $\varphi\in\mathcal{S}(\mathbb{R}^n)$, we set $M_\varphi f:=\sup_{t>0}|\varphi_t * f|\,,$ where as usual $\varphi_t(x):=t^{-n}\varphi(x/t)$.
We recall that a tempered distribution $f$ belongs to $H^p(\mathbb{R}^n)$ if and only if $M_\varphi f \in L^p(\mathbb{R}^n)$, for some $\varphi\in\mathcal{S}(\mathbb{R}^n)$ with $\int_{\mathbb{R}^n}\varphi = 1$ (see, e.g., \cite[Theorem~1, p.~91]{St2}), and we have the equivalence $\|f\|_{H^p(\mathbb{R}^n)} \approx \|M_\varphi f\|_{L^p(\mathbb{R}^n)}\,$. We now fix $\varphi \in C_0^\infty(\mathbb{R}^n)$, with $\varphi\geq0$, $\int\varphi = 1$, and $\operatorname{supp}\varphi\subset \Delta(0,1):=\{x\in\mathbb{R}^n:|x|<1\}$, so we have $$\|\partial_\nu u\|_{H^p(\mathbb{R}^n)}\leq C\, \|M_\varphi(\partial_\nu u)\|_{L^p(\mathbb{R}^n)}\,,$$ and it suffices to show that \[ \|M_\varphi(\partial_\nu u)\|_{L^p(\mathbb{R}^n)} \leq C \|\widetilde{N}_*(\nabla u)\|_{L^p(\mathbb{R}^n)} \,. \] We claim that \begin{equation}\label{eq5} M_\varphi(\partial_\nu u)\leq C \left(M\left(\left(\widetilde{N}_*(\nabla u)\right)^{n/(n+1)}\right)\right)^{(n+1)/n}\,, \end{equation} pointwise, where $M$ denotes the usual Hardy--Littlewood maximal operator. Taking the claim for granted momentarily, we see that \begin{equation*} \int_{\mathbb{R}^n}M_\varphi(\partial_\nu u)^p \lesssim\int_{\mathbb{R}^n}\left(M\left(\left(\widetilde{N}_*(\nabla u)\right)^{n/(n+1)}\right)\right)^{p(n+1)/n} \lesssim \int_{\mathbb{R}^n}\left(\widetilde{N}_*(\nabla u)\right)^{p}\,, \end{equation*} as desired, since $p(n+1)/n>1$, so that $M$ is bounded on $L^{p(n+1)/n}(\mathbb{R}^n)$. It therefore remains to establish \eqref{eq5}. To this end, we fix $x\in \mathbb{R}^n$ and $t>0$, set $B:= B(x,t):=\{Y\in \mathbb{R}^{n+1}: |Y-x|<t\}$, and fix a smooth cut-off function $\eta_{B}\in C_0^\infty(2B)$, with $\eta_{B}\equiv 1$ on $B$, $0\leq\eta_{B}\leq 1,$ and $|\nabla \eta_{B}|\lesssim 1/t$.
Then $$\Phi_{x,t}(y,s):= \eta_B(y,s)\, \varphi_t(x-y)$$ is an extension of $\varphi_t(x-\cdot)$, with $\Phi_{x,t}\in C_0^\infty(2B),$ which satisfies $$0\leq \Phi_{x,t} \lesssim t^{-n}\,,\qquad|\nabla_Y \Phi_{x,t}(Y)|\lesssim t^{-n-1}\,.$$ We then have \begin{align*} |\left(\varphi_t*\partial_\nu u\right)&(x)|=|\langle \partial_\nu u,\varphi_t(x-\cdot)\rangle| =\left| \iint_{\mathbb{R}^{n+1}_+} A\nabla u\cdot \nabla\Phi_{x,t}\right|\\[4pt] &\lesssim \,t^{-n-1}\iint_{\mathbb{R}^{n+1}_+\cap 2B} |\nabla u|\, \lesssim \,t^{-n-1}\left(\int_{\mathbb{R}^n}\left(\widetilde{N}_*(|\nabla u| 1_{2B})(y)\right)^{n/(n+1)}dy\right)^{(n+1)/n}\,, \end{align*} where in the last step we have used \eqref{eq3} with $q = n/(n+1)$. For $C>0$ chosen sufficiently large, simple geometric considerations then imply that $$\widetilde{N}_*(|\nabla u| 1_{2B})(y) \leq \widetilde{N}_*(\nabla u)(y)\,1_{\Delta(x,Ct)}(y)\,,$$ where $\Delta(x,Ct):=\{y\in\mathbb{R}^n:|x-y|<Ct\}.$ Combining the last two estimates, we obtain $$|\left(\varphi_t*\partial_\nu u\right)(x)|\lesssim \left(t^{-n}\int_{|x-y|<Ct}\left(\widetilde{N}_*(\nabla u)(y)\right)^{n/(n+1)}dy\right)^{(n+1)/n}\,.$$ Taking the supremum over $t>0$, we obtain \eqref{eq5}, as required. \end{proof} \begin{proof}[Proof of Lemma \ref{lemma6.2a}] We begin with \eqref{eq6.4*} and follow the proof in the case $p=2$ from~\cite{AAAHK}. The desired bound for $\partial_t u$ follows readily from $t$-independence and the Moser local boundedness estimate \eqref{eq2.M}. Thus, we only need to consider $\nabla_x u$. Let $\vec{\psi} \in C_0^\infty (\mathbb{R}^n,\mathbb{C}^n)$, with $\|\vec{\psi}\|_{p'} =1$. For $t>0$, let $\mathbb{D}_t$ denote the grid of dyadic cubes $Q$ in $\mathbb{R}^n$ with side length satisfying $\ell(Q) \leq t < 2\ell(Q)$, and for $Q\in \mathbb{D}_t$, set $Q^*:= 2Q\times (t/2,3t/2)$.
Then, using the Caccioppoli-type estimate on horizontal slices in \cite[(2.2)]{AAAHK}, we obtain \begin{align*} \bigg|\int_{\mathbb{R}^n} \nabla_y u(y,t) \cdot\vec{\psi}(y) \, dy\bigg| & \leq \,\left(\int_{\mathbb{R}^n} |\nabla_y u(y,t)|^p \,dy\right)^{1/p} \|\vec{\psi}\|_{p'} \\[4pt] &= \,\left(\sum_{Q\in\mathbb{D}_t} \frac{1}{|Q|} \int_Q \int_{Q} |\nabla_y u(y,t)|^pdy \,dx\right)^{1/p} \\[4pt] &\lesssim\, \left(\sum_{Q\in\mathbb{D}_t}\int_Q\left( \frac1{|Q^*|}\iint_{Q^*} |\nabla_y u(y,s)|^2\, dyds\right)^{p/2}dx\right)^{1/p} \\[4pt] &\lesssim\,\left(\int_{\mathbb{R}^n}\Big(\widetilde{N}_*(\nabla u)\Big)^p\right)^{1/p}\,. \end{align*} This concludes the proof of \eqref{eq6.4*}. Next, we prove \eqref{eq6.5*}. By \eqref{eqb2} and \eqref{eq6.4*}, and the density of $C^\infty_0(\mathbb{R}^n)$ in $L^{p'}(\mathbb{R}^n)$, it is enough to prove that \[ \lim_{t\to 0}\int_{\mathbb{R}^n} \vec{N}\cdot A(x) \nabla u(x,t)\, \phi(x)\, dx \,=\, \int_{\mathbb{R}^n} \partial_\nu u(x,0)\, \phi(x)\, dx\,,\quad \forall\,\phi\in C^\infty_0(\mathbb{R}^n)\,, \] where $\vec{N}:=-e_{n+1}$. For $\phi\in C^\infty_0(\mathbb{R}^n)$, let $\Phi$ denote a $C_0^\infty(\mathbb{R}^{n+1})$ extension of $\phi$ to $\mathbb{R}^{n+1}$, so by \eqref{eq5.2}, it suffices to show that \begin{equation}\label{eq6.16a} \lim_{t\to 0}\int_{\mathbb{R}^n} \vec{N}\cdot A\nabla u(\cdot,t)\,\phi \,=\, \iint_{\mathbb{R}^{n+1}_+} A \nabla u \cdot \nabla \Phi \,. \end{equation} Let $P_\varepsilon$ be an approximate identity in $\mathbb{R}^n$ with a smooth, compactly supported convolution kernel.
Integrating by parts, we see that for each $\varepsilon > 0$, \begin{equation}\label{eq4.nt5}\int_{\mathbb{R}^n} \vec{N}\cdot P_\varepsilon(A\nabla u(\cdot,t))\,\phi = \iint_{\mathbb{R}^{n+1}_+}P_\varepsilon\left( A \nabla u(\cdot,t+s)\right)(x) \cdot \nabla \Phi (x,s)\, dx ds ,\end{equation} since $Lu=0$ and our coefficients are $t$-independent. By the dominated convergence theorem, we may pass to the limit as $\varepsilon \to 0$ in \eqref{eq4.nt5} to obtain \begin{align*} \int_{\mathbb{R}^n} \vec{N}\cdot A\nabla u(\cdot,t)\,\phi &= \iint_{\mathbb{R}^{n+1}_+} A(x) \nabla u(x,t+s) \cdot \nabla \Phi (x,s)\, dx ds\\[4pt] &=\int_t^\infty\!\!\!\int_{\mathbb{R}^{n}} A(x) \nabla u(x,s) \cdot \nabla \Big(\Phi (x,s-t)-\Phi(x,s) \Big) \, dx ds\\[4pt] &\qquad+\, \int_t^\infty\!\!\!\int_{\mathbb{R}^{n}} A(x) \nabla u(x,s) \cdot \nabla\Phi(x,s) \, dx ds\,=:I(t) + II(t)\,. \end{align*} By Lemma \ref{L-improve} and the dominated convergence theorem, we have $I(t) \to 0$, as $t\to 0$, and $II(t) \to \iint_{\mathbb{R}^{n+1}_+} A \nabla u \cdot \nabla \Phi $, as $t\to 0$, hence \eqref{eq6.16a} holds. Next, we prove \eqref{eq6.6*}. By \eqref{eq6.3*} and \eqref{eq6.4*}, and the density of $C^\infty_0(\mathbb{R}^n)$ in $L^{p'}(\mathbb{R}^n)$, it is enough to prove that \begin{equation}\label{eq6.17} \lim_{t\to 0}\int_{\mathbb{R}^n} u(x,t)\, \operatorname{div}_x \vec{\psi}(x)\, dx \,=\, \int_{\mathbb{R}^n} u(x,0)\, \operatorname{div}_x\vec{\psi}(x)\, dx\,,\quad \forall\,\vec{\psi}\in C^\infty_0(\mathbb{R}^n,\mathbb{C}^n)\,. \end{equation} Following the proof of \cite[Theorem 3.1, p.~462]{KP}, we obtain \begin{equation}\label{neednow} |u(x,t)-u(x,0)| \leq C t \widetilde{N}_*(\nabla u)(x)\,,\quad\text{ for a.e. }x\in \mathbb{R}^n\,, \end{equation} whence \eqref{eq6.17} follows.
Finally, we consider the case $n/(n+1)<p\leq 1$, and we assume there exists $h\in \dot{H}^{1,p}(\mathbb{R}^n)$ such that $\nabla_x u(\cdot,t) \to \nabla_x h$ in the sense of tempered distributions. By Sobolev embedding, $u(\cdot,0)$ and $u(\cdot,t)$ belong (uniformly in $t$) to $L^{q}(\mathbb{R}^n)$, with $1/q = 1/p - 1/n$. Note that $q>1$, since $p>n/(n+1)$. For all $\varepsilon\in (0,1)$, by combining the pointwise estimate~\eqref{neednow}, which still holds in this case, with the trivial bound $|u(\cdot,t)-u(\cdot,0)|\leq |u(\cdot,t)| +|u(\cdot,0)|$, we obtain $$|u(x,t)-u(x,0)| \leq C \left(t \widetilde{N}_*(\nabla u)(x)\right)^\varepsilon \big(|u(x,t)| +|u(x,0)|\big)^{1-\varepsilon}\,,\quad\text{ for a.e. }x\in \mathbb{R}^n\,.$$ For $p,q$ as above, set $r=q/(1-\varepsilon)$, $s=p/\varepsilon$, and choose $\varepsilon\in (0,1)$, depending on $p$ and $n$, so that $1/r+1/s=1$. Then by H\"older's inequality, for all $\psi\in\mathcal{S}(\mathbb{R}^n)$, we have \begin{align}\begin{split}\label{eqa} \bigg|\int_{\mathbb{R}^n} \big(u(x,t) -&u(x,0)\big)\psi(x)\, dx\bigg| \\[4pt] &\lesssim \,\|\psi\|_\infty \,t^\varepsilon \left(\int_{\mathbb{R}^n} \left(\widetilde{N}_*(\nabla u)\right)^p\right)^{1/s} \left(\|u(\cdot,0)\|_q + \sup_{t>0} \|u(\cdot,t)\|_q\right)^{1-\varepsilon}\,\to \,0\,, \end{split}\end{align} as $t\to 0$.
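For concreteness, we record the elementary computation that determines $\varepsilon$; it uses only the relations $1/q=1/p-1/n$, $r=q/(1-\varepsilon)$ and $s=p/\varepsilon$ introduced above. The condition $1/r+1/s=1$ reads \[ \frac{1-\varepsilon}{q}+\frac{\varepsilon}{p}\,=\,\frac1q+\varepsilon\Big(\frac1p-\frac1q\Big)\,=\,\frac1q+\frac{\varepsilon}{n}\,=\,1\,, \] so that \[ \varepsilon\,=\,n\Big(1-\frac1q\Big)\,=\,(n+1)-\frac{n}{p}\,. \] In particular, $\varepsilon>0$ precisely when $p>n/(n+1)$, and $\varepsilon<1$ precisely when $p<1$; in the borderline case $p=1$, one has $\varepsilon=1$, $s=1$ and $r=\infty$, and the preceding display follows directly from \eqref{neednow} and H\"older's inequality.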
On the other hand, for all $\vec{\phi}\in \mathcal{S}(\mathbb{R}^n,\mathbb{C}^n)$, we have $$\int_{\mathbb{R}^n} \big(u(x,t) -h(x)\big)\operatorname{div}_x\vec{\phi}(x)\, dx \to 0\,.$$ Combining the latter fact with \eqref{eqa}, applied with $\psi =\operatorname{div}_x \vec{\phi}$, we obtain $$\int_{\mathbb{R}^n} h(x)\operatorname{div}_x\vec{\phi}(x)\, dx = \int_{\mathbb{R}^n} u(x,0)\operatorname{div}_x\vec{\phi}(x)\, dx\,,\qquad \forall \vec{\phi} \in \mathcal{S}(\mathbb{R}^n,\mathbb{C}^n)\,,$$ thus $\nabla_x h = \nabla_x u(\cdot,0)$ as tempered distributions, and since each belongs to $H^p(\mathbb{R}^n)$, we also have $\nabla_x h = \nabla_x u(\cdot,0)$ in $H^p(\mathbb{R}^n)$, hence $u(\cdot,0)=h$ in the sense of $\dot{H}^{1,p}(\mathbb{R}^n)$. \end{proof} \begin{proof}[Proof of Lemma \ref{lemma6.2}] We first prove that~\eqref{eq6.4} holds in the case that $u=\mathcal{D}h$ for some $h\in L^p(\mathbb{R}^n)$. Indeed, in that scenario, the case $p=2$ has been treated in \cite[Lemma 4.23]{AAAHK}. To handle the remaining range of $p$, we observe that by Theorem~\ref{P-bdd1}, we have \[ \|N_*(\mathcal{D}h)\|_{L^p(\mathbb{R}^n)} \leq\, C_p\, \|h\|_{L^p(\mathbb{R}^n)}\,. \] We may therefore exploit the usual technique, whereby a.e. convergence on a dense class (in our case $L^2\cap L^p$), along with $L^p$ bounds on the controlling maximal operator, imply a.e. convergence for all $h\in L^p(\mathbb{R}^n)$. We omit the standard argument. Convergence in $L^p(\mathbb{R}^n)$ then follows by the dominated convergence theorem. Thus, it is enough to show that $u = \mathcal{D}h$ for some $h\in L^p(\mathbb{R}^n).$ We follow the corresponding argument for the case $p=2$ given in \cite{AAAHK}, which in turn follows \cite[pp.~199--200]{St1}, substituting $\mathcal{D}$ for the classical Poisson kernel.
For each $\varepsilon > 0$, set $f_\varepsilon := u(\cdot,\varepsilon)$, and let $u_\varepsilon := \mathcal{D}\big((-1/2)I + K \big)^{-1} f_\varepsilon$ denote the layer potential solution with data $f_\varepsilon$. We claim that $u_\varepsilon(x,t) = u(x,t+\varepsilon).$ To prove this, we set $U_\varepsilon(x,t) := u(x,t + \varepsilon) - u_\varepsilon(x,t)$, and observe that \begin{enumerate} \item[(i)] $LU_\varepsilon = 0$ in $\mathbb{R}^{n+1}_+$ (by $t$-independence of the coefficients). \item[(ii)] Estimate \eqref{eq6.3} holds for $U_\varepsilon$, uniformly in $\varepsilon > 0$. \item[(iii)] $U_\varepsilon (\cdot,0) = 0$ and $U_\varepsilon(\cdot,t) \to 0 $ non-tangentially and in $L^p$, as $t\to 0$. \end{enumerate} Item (iii) relies on interior continuity \eqref{eq2.DGN} and smoothness in $t$, along with the result for layer potentials noted above. The claim then follows by the uniqueness for $(D)_p$, which is proved in~\cite{HMaMo} for a more general class of operators. We now complete the proof of the lemma. For convenience of notation, for each $t>0$, we set $\mathcal{D}_t h:=\mathcal{D} h(\cdot,t)$. By \eqref{eq6.3}, $\sup_\varepsilon \|f_\varepsilon\|_{L^p(\mathbb{R}^n)} < \infty$, and so there exists a subsequence $f_{\varepsilon_k}$ converging in the weak$^*$ topology on $L^p(\mathbb{R}^n)$ to some $f \in L^p(\mathbb{R}^n)$.
For each $g \in L^{p'}(\mathbb{R}^n)$, we set $g_1 := \mathop{\rm adj}\nolimits\big((-1/2)I+ K \big)^{-1} \mathop{\rm adj}\nolimits (\mathcal{D}_t) g$, and observe that \begin{align*} \int_{\mathbb{R}^n} \left[\mathcal{D}_t \big((-1/2)I+ K \big)^{-1} f \right]\,\overline{g} & = \int_{\mathbb{R}^n} f\, \overline{g_1} = \lim_{k \to \infty} \int_{\mathbb{R}^n} f_{\varepsilon_k} \overline{g_1}\\ &= \lim_{k \to \infty} \int_{\mathbb{R}^n}\left[\mathcal{D}_t \big((-1/2)I+ K \big)^{-1} \!f_{\varepsilon_k}\right] \,\overline{g} \\ &=\lim_{k \to \infty} \int_{\mathbb{R}^n} u(\cdot,t+\varepsilon_k)\,\overline{g} =\int_{\mathbb{R}^n} u(\cdot,t)\, \overline{g}. \end{align*} It follows that $u=\mathcal{D} h$, with $h= \big((-1/2)I +K\big)^{-1} f$ in $L^p(\mathbb{R}^n)$, as required. \end{proof} \section{Appendix: Auxiliary lemmata} \setcounter{equation}{0} We now return to prove some technical results that were used to prove Proposition~\ref{r4.1} and Lemmata~\ref{lemma6.1}--\ref{lemma6.2a}. The results are stated in the more general setting of a Lipschitz graph domain of the form $\Omega:=\{(x,t)\in\mathbb{R}^{n+1}:\, t>\phi(x)\}$, where $\phi:\mathbb{R}^n\to \mathbb{R}$ is Lipschitz. We set $M:=\|\nabla\phi\|_{L^\infty(\mathbb{R}^n)}<\infty$, and consider constants \begin{equation}\label{eta-alpha} 0<\eta<\frac{1}{M},\qquad 0<\beta<\min\,\Bigl\{1,\frac{1}{M}\Bigr\}. \end{equation} We define the cone \[ \Gamma:=\{X=(x,t)\in\mathbb{R}^{n+1}:\,|x|<\eta t\}. \] For $X\in\Omega\subset\mathbb{R}^{n+1}$, we use the notation $\delta(X):=\operatorname{dist}(X,\partial\Omega)$.
For $u\in L^2_{\mathrm{loc}}(\Omega)$, we set \begin{equation}\label{eq2.3} \widetilde{N}_*(u)(Q):=\sup\limits_{X\in Q+\Gamma}\left( \fint_{B(X,\,\beta\delta(X))}|u(Y)|^2\,dY\right)^{1/2}, \qquad Q\in\partial\Omega\,, \end{equation} and \begin{equation}\label{eq2.3*} N_*(u)(Q):=\sup\limits_{X\in Q+\Gamma}\,|u(X)|\,, \qquad Q\in\partial\Omega. \end{equation} If we want to emphasize the dependence on $\eta$ and $\beta$, then we shall write $\Gamma_{\eta}$, $\widetilde{N}_{*,\eta,\,\beta}$, $N_{*,\eta}$. The lemma below shows that the choice of $\eta$ and $\beta$, within the permissible range in \eqref{eta-alpha}, is immaterial for $L^p(\partial\Omega)$ estimates of $\widetilde{N}_{*,\eta,\,\beta}$. \begin{lemma}\label{L-cones} Let $\Omega\subset\mathbb{R}^{n+1}$ denote a Lipschitz graph domain. For each $p\in(0,\infty)$ and \[ 0<\eta_1,\,\eta_2<\frac{1}{M},\qquad 0<\beta_1,\,\beta_2<\min\Bigl\{1\,,\,\frac{1}{M}\Bigr\}, \] there exist constants $C_j=C_j(M,p,\eta_1,\eta_2,\beta_1,\beta_2)\in(0,\infty)$, $j=1,2$, such that \begin{equation}\label{equiv-N} C_1\|\widetilde{N}_{*,\eta_2,\,\beta_2}u\|_{L^p(\partial\Omega)} \leq \|\widetilde{N}_{*,\eta_1,\,\beta_1}u\|_{L^p(\partial\Omega)} \leq C_2\|\widetilde{N}_{*,\eta_2,\,\beta_2}u\|_{L^p(\partial\Omega)} \end{equation} for all $u \in L^2_{\mathrm{loc}}(\Omega)$. \end{lemma} \begin{proof} First, a straightforward adaptation of the argument in \cite[p.~62]{St2} gives \begin{equation}\label{equiv-N2} \|\widetilde{N}_{*,\eta_2,\,\beta}u\|_{L^p(\partial\Omega)} \leq C\|\widetilde{N}_{*,\eta_1,\,\beta}u\|_{L^p(\partial\Omega)}, \end{equation} whenever $0<\eta_1<\eta_2<1/M$, $p\in(0,\infty)$ and $\beta\in(0,1)$. The opposite inequality is trivially true (with $C=1$).
Thus, since $\frac{1-M\eta}{M+\eta}\nearrow\frac{1}{M}$ as $\eta\searrow 0$, estimate (\ref{equiv-N}) will follow as soon as we prove that for any \begin{equation}\label{ineq-2} 0<\eta<\frac{1}{M},\qquad 0<\beta_1<\beta_2<\min\Bigl\{1\,,\,\frac{1-M\eta}{M+\eta}\Bigr\}, \end{equation} there exists a finite constant $C=C(M,p,\eta,\beta_1,\beta_2)>0$ such that \[ \|\widetilde{N}_{*,\eta,\,\beta_2}u\|_{L^p(\partial\Omega)} \leq C\|\widetilde{N}_{*,\eta,\,\beta_1}u\|_{L^p(\partial\Omega)} \] for all $u\in L^2_{\mathrm{loc}}(\Omega)$. To this end, let $\eta$, $\beta_1$, $\beta_2$ be as in (\ref{ineq-2}) and consider two arbitrary points, $Q\in\partial\Omega$ and $X\in Q+\Gamma_{\eta}$, as well as two parameters, $\beta'\in (0,\beta_2)$ and $\varepsilon>0$, to be chosen later. The parameter $\varepsilon>0$ and Euclidean geometry ensure that \begin{align*}\begin{split} |X-&Y| < \beta_2 \delta(X) \Longrightarrow \\[4pt] &|B(X,\beta'\delta(X))| \leq C(n,\beta_2,\beta',\varepsilon) \, |B(Y,(\beta_2-\beta'+\varepsilon)\delta(X)) \cap B(X,\,\beta'\delta(X))|. \end{split}\end{align*} We also have \begin{equation}\label{trivial-1} |X-Z|<\beta'\,\delta(X)\Longrightarrow \frac{1}{1+\beta'}\delta(Z)\leq\delta(X)\leq\frac{1}{1-\beta'}\delta(Z), \end{equation} and \[ B(X,\beta'\delta(X))\subset Q+\Gamma_\kappa,\quad\mbox{where }\, \kappa:=\frac{\eta+\beta'}{1-\beta'\eta}. \] Note that, due to our assumptions, $0<\kappa<{1}/{M}$.
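We record the elementary verification of the last assertion; it is a routine computation using only \eqref{ineq-2} and $\beta'<\beta_2$: \[ \frac{\eta+\beta'}{1-\beta'\eta}\,<\,\frac1M \;\Longleftrightarrow\; M(\eta+\beta')\,<\,1-\beta'\eta \;\Longleftrightarrow\; \beta'(M+\eta)\,<\,1-M\eta \;\Longleftrightarrow\; \beta'\,<\,\frac{1-M\eta}{M+\eta}\,, \] and the last condition holds since $\beta'<\beta_2<(1-M\eta)/(M+\eta)$. (The first equivalence implicitly uses $1-\beta'\eta>0$, which follows from the same upper bound on $\beta'$.)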
Using Fubini's Theorem and the preceding considerations, we may then write \begin{align*} \hspace{1cm}&\hspace{-1cm}\fint_{B(X,\,\beta_2\delta(X))} |u(Y)|^2\,dY = \fint_{B(X,\,\beta_2\delta(X))} \left(\fint_{B(Y,\,(\beta_2-\beta'+\varepsilon)\delta(X)) \cap B(X,\,\beta'\delta(X))}1\, dZ\right)|u(Y)|^2\,dY\\ &\leq C(n,\beta_2,\beta',\varepsilon)\fint_{B(X,\,\beta'\delta(X))} \left(\fint_{B(X,\,\beta_2\delta(X))} 1_{B(Z,\,(\beta_2-\beta'+\varepsilon)\delta(X))}(Y)\, |u(Y)|^2\,dY\right)dZ\\ &\leq C(n,\beta_2,\beta',\varepsilon)\fint_{B(X,\,\beta'\delta(X))} \left(\fint_{B(Z,\,(\beta_2-\beta'+\varepsilon)\delta(X))}|u(Y)|^2\,dY\right)dZ\\ &\leq C(n,\beta_2,\beta',\varepsilon)\fint_{B(X,\,\beta'\delta(X))} \left(\fint_{B(Z,\,\frac{\beta_2-\beta'+\varepsilon}{1-\beta'}\delta(Z))} |u(Y)|^2\,dY\right)dZ\\ &\leq C(n,\beta_2,\beta',\varepsilon) \left(\widetilde{N}_{*,\kappa,\frac{\beta_2-\beta'+\varepsilon}{1-\beta'}}(u)(Q)\right)^2. \end{align*} We now choose $\varepsilon \in (0,\beta_1(1-\beta_2))$ and set $\beta':=\frac{\beta_2-\beta_1+\varepsilon}{1-\beta_1}$ to ensure that $\beta'\in(0,\beta_2)$ and $\frac{\beta_2-\beta'+\varepsilon}{1-\beta'}=\beta_1$, so the inequality above further yields \begin{equation}\label{geo-2} \widetilde{N}_{*,\eta,\,\beta_2}(u)(Q)\leq C \widetilde{N}_{*,\kappa,\,\beta_1}(u)(Q)\quad \mbox{for some }\,\,\kappa=\kappa(\beta_1,\beta_2,\eta)\in(0,1/M). \end{equation} Consequently, by (\ref{geo-2}) and (\ref{equiv-N2}), we have \[ \|\widetilde{N}_{*,\eta,\,\beta_2}u\|_{L^p(\partial\Omega)} \leq C\|\widetilde{N}_{*,\kappa,\,\beta_1}u\|_{L^p(\partial\Omega)} \leq C\|\widetilde{N}_{*,\eta,\,\beta_1}u\|_{L^p(\partial\Omega)}. \] This finishes the proof of the lemma. \end{proof} We now prove a self-improvement property for $L^p(\Omega)$ estimates of solutions. \begin{lemma}\label{L-improve} Let $\Omega\subset\mathbb{R}^{n+1}$ denote a Lipschitz graph domain.
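For completeness, we record the elementary algebra behind this choice of $\beta'$, which is implicit in the argument above. With $\beta'=(\beta_2-\beta_1+\varepsilon)/(1-\beta_1)$, one has \[ 1-\beta' \,=\, \frac{1-\beta_2-\varepsilon}{1-\beta_1}\,,\qquad \beta_2-\beta'+\varepsilon \,=\, \frac{\beta_1(1-\beta_2-\varepsilon)}{1-\beta_1}\,, \] whence $\frac{\beta_2-\beta'+\varepsilon}{1-\beta'}=\beta_1$. Moreover, $\beta'>0$ since $\beta_2>\beta_1$, while the restriction $\varepsilon<\beta_1(1-\beta_2)$ gives \[ \beta' \,<\, \frac{\beta_2-\beta_1+\beta_1(1-\beta_2)}{1-\beta_1} \,=\, \frac{\beta_2(1-\beta_1)}{1-\beta_1}\,=\,\beta_2\,, \] and also $1-\beta_2-\varepsilon>0$, so all of the quantities above are positive, as required.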
Suppose that $w\in L^2_{\mathrm{loc}}(\Omega)$, and that $\widetilde{N}_*(w)\in L^p(\partial\Omega)$ for some $p\in(0,\infty)$. First, if $0 < p \leq 2n/(n+1)$, then \begin{equation}\label{impr-1} w\in L^{{p(n+1)}/{n}}(\Omega)\quad\mbox{and}\quad \|w\|_{L^{{p(n+1)}/{n}}(\Omega)} \leq C(\partial\Omega,p)\,\|\widetilde{N}_*(w)\|_{L^p(\partial\Omega)}. \end{equation} Second, if $0<p<\infty$, and if $Lw=0$ in $\Omega$, then \eqref{impr-1} holds. Finally, there exists $q=q(n,\Lambda)>2$ such that if $0<p<qn/(n+1)$, and if $w=\nabla u$ for some solution $u\in L^2_{1,\,\mathrm{loc}}(\Omega)$ of $Lu=0$ in $\Omega$, then \eqref{impr-1} holds. \end{lemma} \begin{proof} Fix $\eta$, $\beta$ as in (\ref{eta-alpha}). We observe that by Lemma \ref{L-cones}, the choice of $\beta$ within the permissible range is immaterial. We now choose $\beta'$ so that $0<\beta'<\beta/2<1/2$. Then \eqref{trivial-1} holds, and we have $\beta'/(1-\beta')<\beta$. \smallskip \noindent{\bf Case 1}. Suppose that $w\in L^2_{\mathrm{loc}}(\Omega)$, and that $\widetilde{N}_*(w)\in L^p(\partial\Omega)$ for some $0<p\leq2n/(n+1)$.
To prove \eqref{impr-1}, we set \[ F(Z):=\left(\fint_{B(Z,\,\beta\delta(Z))}|w(X)|^2\,dX\right)^{1/2}, \qquad Z\in\Omega, \] and observe that \begin{align*}\iint_{\Omega} |w(X)|^{p(n+1)/n}\, dX &=\iint_{\Omega} \left(\fint_{|X-Z|<\beta' \delta(X)}\,dZ\right)\, |w(X)|^{p +p/n}\, dX\\[4pt] &\leq C \iint_{\Omega} \left(\fint_{|X-Z|<\beta \delta(Z)}\, |w(X)|^{p +p/n}\,dX\right)\,dZ \\[4pt] &\leq C \iint_{\Omega} \left(\fint_{|X-Z|<\beta \delta(Z)}\, |w(X)|^2 dX\right)^{(p +p/n)/2}\, dZ\\[4pt] &= C \iint_{\Omega}F(Z)^{p+p/n}\, dZ \leq C\, \|\mu\|_{\mathcal{C}}\int_{\partial\Omega}N_*(F)^p, \end{align*} where we have used Fubini's Theorem, \eqref{trivial-1} and the fact that $\beta'/(1-\beta')<\beta$ in the first inequality, H\"older's inequality and the fact that $p(n+1)/n \leq 2$ in the second, and Carleson's lemma (which still holds in the present setting) in the third. In particular, we are using $\|\mu\|_{\mathcal{C}}$ to denote the Carleson norm of the measure $$d\mu(Z):= F(Z)^{p/n} \,1_\Omega(Z)\,dZ.$$ Also, by definition, $N_*(F) = \widetilde{N}_*(w)$ (cf. \eqref{eq2.3} and \eqref{eq2.3*}), and so $$\|N_*(F)\|_{L^p(\partial \Omega)}=\|\widetilde{N}_*(w)\|_{L^p(\partial \Omega)}<\infty.$$ Thus, to finish the proof of Case 1, it is enough to observe that for every ``surface ball" $\Delta(P,r):= B(P,r)\cap\partial\Omega\,$, where $P:=(x,\phi(x))\in \partial\Omega$ and $r>0$, we have \begin{align*}\frac{1}{|\Delta(P,r)|}\iint_{B(P,r)\cap\Omega} F(Z)^{p/n}\,dZ &\leq\, C r^{-n}\int_{|x-z|<r}\int_{\phi(z)}^{\phi(z)+2r}F(z,s)^{p/n}\, ds dz\\[4pt] &\leq \,C r\fint_{|x-z|<r}\Big(N_*(F)(z,\phi(z))\Big)^{p/n}\, dz\\[4pt] &\leq\, C \left(\int_{|x-z|<r} \Big(N_*(F)(z,\phi(z))\Big)^{p}\, dz\right)^{1/n}\,\leq \,C \|N_*(F)\|_{L^p(\partial\Omega)}^{p/n}\,, \end{align*} whence the bound \eqref{impr-1} follows, as required. \smallskip \noindent{\bf Case 2}.
Now suppose that $Lw=0$ in $\Omega$, and that $\widetilde{N}_*(w)\in L^p(\partial\Omega)$ for some $p\in(0,\infty)$. By Moser's sub-mean inequality \eqref{eq2.M}, we have $\widetilde{N}_*(w)(Q)\approx\|w\|_{L^\infty(Q+\Gamma)}=:N_*(w)(Q)$, uniformly for $Q\in\partial\Omega$, at least if $\beta>0$ is sufficiently small. Under this assumption, estimate (\ref{impr-1}) can then be proved as in Case 1, except that invoking H\"older's inequality, which was the source of the restriction $p\leq 2n/(n+1)$, is unnecessary. This completes the proof of Case 2, since the restriction on the size of $\beta$ is immaterial by Lemma~\ref{L-cones}. \smallskip \noindent{\bf Case 3}. Finally, suppose that $w=\nabla u$ for some solution $u\in L^2_{1,\,\mathrm{loc}}(\Omega)$ of $Lu=0$ in $\Omega$, and that $\widetilde{N}_*(w)\in L^p(\partial\Omega)$ for some $p\in(0,\infty)$. It is well-known (cf., e.g., \cite{Ke}) that there exists $q=q(n,\Lambda)>2$ such that \[ \left(\fint_{B(X,\,\beta\delta(X))}|w(Y)|^{q}\,dY\right)^{1/q} \leq C\left(\fint_{B(X,\,2\beta\delta(X))}|w(Y)|^2\,dY\right)^{1/2}. \] The proof of \eqref{impr-1} when $0<p<qn/(n+1)$ then proceeds as in Case 1, where Lemma~\ref{L-cones} is used once more to readjust the size of the balls. \end{proof} \noindent {\tt S. Hofmann}: Department of Mathematics, University of Missouri-Columbia, Columbia, MO 65211, USA, {\it e-mail: [email protected]} \vskip 0.10in \noindent {\tt M. Mitrea}: Department of Mathematics, University of Missouri-Columbia, Columbia, MO 65211, USA, {\it e-mail: [email protected]} \vskip 0.10in \noindent {\tt A.J.~Morris}: Mathematical Institute, University of Oxford, Oxford, OX2 6GG,~UK, {\it e-mail: [email protected]} \end{document}
\begin{document} \title [Non-minimal bridge position of $2$-cable links] {Non-minimal bridge position of $2$-cable links} \author[J. H. Lee]{Jung Hoon Lee} \address{Department of Mathematics and Institute of Pure and Applied Mathematics, Jeonbuk National University, Jeonju 54896, Korea} \email{[email protected]} \subjclass[2010]{Primary: 57M25} \keywords{} \begin{abstract} Suppose that every non-minimal bridge position of a knot $K$ is perturbed. We show that if $L$ is a $(2, 2q)$-cable link of $K$, then every non-minimal bridge position of $L$ is also perturbed. \end{abstract} \maketitle \section{Introduction}\label{sec1} A knot in $S^3$ is said to be in {\em bridge position} with respect to a bridge sphere, the original notion introduced by Schubert \cite{Schubert}, if the knot intersects each of the $3$-balls bounded by the bridge sphere in a collection of $\partial$-parallel arcs. It is generalized to knots (and links) in $3$-manifolds with the development of Heegaard splitting theory, and is related to many interesting problems concerning, e.g. bridge number, (Hempel) distance, and incompressible surfaces in $3$-manifolds. From any $n$-bridge position, we can always get an $(n+1)$-bridge position by creating a new local minimum point and a nearby local maximum point of the knot. A bridge position isotopic to one obtained in this way is said to be {\em perturbed}. (It is said to be {\em stabilized} in some context.) A bridge position of the unknot is unique in the sense that any $n$-bridge ($n > 1$) position of the unknot is perturbed \cite{Otal1}. The uniqueness also holds for $2$-bridge knots \cite{Otal2}. See also \cite{Scharlemann-Tomova}, where all bridge surfaces for $2$-bridge knots are considered. Ozawa \cite{Ozawa} showed that non-minimal bridge positions of torus knots are perturbed. Zupan \cite{Zupan} showed such property for iterated torus knots and iterated cables of $2$-bridge knots. 
More generally, he showed that if $K$ is an mp-small knot and every non-minimal bridge position of $K$ is perturbed, then every non-minimal bridge position of a $(p,q)$-cable of $K$ is also perturbed \cite{Zupan}. (Here, a knot is {\em mp-small} if its exterior contains no essential meridional planar surface.) We remark that there exist examples of a knot with a non-minimal bridge position that is not perturbed \cite{Ozawa-Takao} and furthermore, knots with arbitrarily high index bridge positions that are not perturbed \cite{JKOT}. In this paper, we consider non-minimal bridge positions of $2$-cable links of a knot $K$ without the assumption of mp-smallness of $K$. \begin{theorem} \label{thm1} Suppose that $K$ is a knot in $S^3$ such that every non-minimal bridge position of $K$ is perturbed. Let $L$ be a $(2,2q)$-cable link of $K$. Then every non-minimal bridge position of $L$ is also perturbed. \end{theorem} For the proof, we use the notion of t-incompressibility and t-$\partial$-incompressibility of \cite{Hayashi-Shimokawa}. We isotope an annulus $A$ whose boundary is $L$ to a good position so that it is t-incompressible and t-$\partial$-incompressible on one side, say $B_2$, of the bridge sphere. Then $A \cap B_2$ consists of bridge disks and (possibly) properly embedded disks. By using the idea of changing the order of t-$\partial$-compressions in \cite{Doll} or \cite{Hayashi-Shimokawa}, we show that in fact $A \cap B_2$ consists of bridge disks only. Then by a further argument, we find a cancelling pair of disks for the bridge position. \section{T-incompressible and t-$\partial$-incompressible surfaces in a $3$-ball}\label{sec2} A {\em trivial tangle} $T$ is a union of properly embedded arcs $b_1, \ldots, b_n$ in a $3$-ball $B$ such that each $b_i$ cobounds a disk $D_i$ with an arc $s_i$ in $\partial B$, and $D_i \cap (T - b_i) = \emptyset$. By a standard argument, the $D_i$'s can be taken to be pairwise disjoint.
Let $F$ denote a surface in $B$ satisfying $F \cap (\partial B \cup T) = \partial F$. A {\em t-compressing disk} for $F$ is a disk $D$ in $B - T$ such that $D \cap F = \partial D$ and $\partial D$ is essential in $F$, i.e. $\partial D$ does not bound a disk in $F$. A surface $F$ is {\em t-compressible} if there is a t-compressing disk for $F$, and $F$ is {\em t-incompressible} if it is not t-compressible. An arc $\alpha$ properly embedded in $F$ with its endpoints on $F \cap \partial B$ is {\em t-essential} if $\alpha$ does not cobound a disk in $F$ with a subarc of $F \cap \partial B$. In particular, an arc in $F$ parallel to a component of $T$ can be t-essential. See Figure \ref{fig1}. (Such an arc will be called {\em bridge-parallel} in Section \ref{sec3}.) A {\em t-$\partial$-compressing disk} for $F$ is a disk $\Delta$ in $B - T$ such that $\partial \Delta$ is an endpoint union of two arcs $\alpha$ and $\beta$, and $\alpha = \Delta \cap F$ is t-essential, and $\beta = \Delta \cap \partial B$. A surface $F$ is {\em t-$\partial$-compressible} if there is a t-$\partial$-compressing disk for $F$, and $F$ is {\em t-$\partial$-incompressible} if it is not t-$\partial$-compressible. \begin{figure} \caption{A t-essential arc $\alpha$.} \label{fig1} \end{figure} Hayashi and Shimokawa \cite{Hayashi-Shimokawa} classified t-incompressible and t-$\partial$-incompressible surfaces in a compression body in a more general setting. Here we give a simplified version of the theorem. \begin{lemma}[\cite{Hayashi-Shimokawa}]\label{lem1} Let $(B, T)$ be a pair of a $3$-ball and a trivial tangle in $B$, and let $F \subset B$ be a surface satisfying $F \cap (\partial B \cup T) = \partial F \ne \emptyset$. Suppose that $F$ is both t-incompressible and t-$\partial$-incompressible.
Then each component of $F$ is either \begin{enumerate} \item a disk $D_i$ cobounded by an arc $b_i$ of $T$ and an arc in $\partial B$ with $D_i \cap (T - b_i) = \emptyset$, or \item a disk $C$ properly embedded in $B$ with $C \cap T = \emptyset$. \end{enumerate} \end{lemma} \section{Bridge position}\label{sec3} Let $S$ be a $2$-sphere decomposing $S^3$ into two $3$-balls $B_1$ and $B_2$. Let $K$ denote a knot (or link) in $S^3$. Then $K$ is said to be in {\em bridge position} with respect to $S$ if $K \cap B_i$ $(i=1,2)$ is a trivial tangle. Each arc of $K \cap B_i$ is called a {\em bridge}. If the number of bridges of $K \cap B_i$ is $n$, we say that $K$ is in {\em $n$-bridge position}. The minimum such number $n$ among all bridge positions of $K$ is called the {\em bridge number} $b(K)$ of $K$. A bridge cobounds a {\em bridge disk} with an arc in $S$, whose interior is disjoint from $K$. We can take a collection of $n$ pairwise disjoint bridge disks by a standard argument, and it is called a {\em complete bridge disk system}. For a bridge disk $D$ in, say $B_1$, if there exists a bridge disk $E$ in $B_2$ such that $D \cap E$ is a single point of $K$, then $D$ is called a {\em cancelling disk}. We call $(D, E)$ a {\em cancelling pair}. A {\em perturbation} is an operation on an $n$-bridge position of $K$ that creates a new local minimum and local maximum in a small neighborhood of a point of $K \cap S$, resulting in an $(n+1)$-bridge position of $K$. A bridge position obtained by a perturbation admits a cancelling pair by the construction. Conversely, it is known that a bridge position admitting a cancelling pair is isotopic to one obtained from a lower index bridge position by a perturbation. (See, e.g., \cite[Lemma 3.1]{Scharlemann-Tomova}.) Hence, as a definition, we say that a bridge position is {\em perturbed} if it admits a cancelling pair.
Let $V_1$ be a standard solid torus in $S^3$ with core $\alpha$, and $V_2$ be a solid torus in $S^3$ whose core is a knot $C$. A meridian $m_1$ of $V_1$ is uniquely determined up to isotopy. Let $l_1 \subset \partial V_1$ be a longitude of $V_1$ such that the linking number $\textrm{lk} (l_1, \alpha) = 0$, called the {\em preferred longitude}. Similarly, let $m_2$ and $l_2$ be a meridian and a longitude of $V_2$, respectively, such that $\textrm{lk} (l_2, C) = 0$. Take a $(p,q)$-torus knot (or link) $T_{p,q}$ in $\partial V_1$ that wraps $V_1$ longitudinally $p$ times; more precisely, $|T_{p,q} \cap m_1| = p$ and $|T_{p,q} \cap l_1| = q$. Let $h: V_1 \to V_2$ be a homeomorphism sending $m_1$ to $m_2$ and $l_1$ to $l_2$. Then $K = h(T_{p,q}) \subset S^3$ is called a {\em $(p,q)$-cable} of $C$. Concerning the bridge number, it is known that $b(K) = p \cdot b(C)$ \cite{Schubert}, \cite{Schultens}. Let $K$ be a knot (or link) in $n$-bridge position with respect to a decomposition $S^3 = B_1 \cup_S B_2$, so $K \cap B_1$ is a union of bridges $b_1, \ldots, b_n$. Let $\mathcal{R} = R_1 \cup \cdots \cup R_n$ be a complete bridge disk system for $\bigcup b_i$, where $R_i$ is a bridge disk for $b_i$. Let $F$ be a surface bounded by $K$ and $F_1 = F \cap B_1$. When we move $K$ to an isotopic bridge position, $F_1$ moves together with it. We consider $\mathcal{R} \cap F_1$. By isotopy we assume that, in a small neighborhood of $b_i$, $\mathcal{R} \cap F_1 = b_i$. An arc $\gamma$ in $F_1$ is {\em bridge-parallel} ({\em b-parallel} for short) if $\gamma$ is parallel, in $F_1$, to some $b_i$ and cuts off a rectangle $P$ from $F_1$ whose four edges are $\gamma$, $b_i$, and two arcs in $S$. Let $\alpha$ ($\ne b_k$) denote an arc of $\mathcal{R} \cap F_1$ which is outermost in some $R_k$ and cuts off the corresponding outermost disk $\Delta$ disjoint from $b_k$. The following lemma will be used in Section \ref{sec6}.
\begin{lemma}\label{lem2} After possibly changing $K$ to an isotopic bridge position, there is no $\alpha$ that is b-parallel. \end{lemma} \begin{proof} We isotope $K$ and $F_1$ and take $\mathcal{R}$ so that $| \mathcal{R} \cap F_1 |$ is minimal. Suppose that there is such an arc $\alpha$ which is parallel in $F_1$ to $b_i$ (where possibly $b_i = b_k$). Isotope $b_i$ along $P$ to an arc parallel to $\alpha$ so that the changed surface $F'_1$ is disjoint from $\Delta$. See Figure \ref{fig2}. Take a new bridge disk $R'_i$ for $b_i$ to be a parallel copy of $\Delta$. The other bridge disks $R_j$ ($j \ne i$) remain unaltered, and they are all mutually disjoint. Hence $\mathcal{R}' = (\mathcal{R} - R_i) \cup R'_i$ is a new complete bridge disk system. We see that $| \mathcal{R}' \cap F'_1 | < | \mathcal{R} \cap F_1 |$, since at least $\alpha$ no longer belongs to the intersection $\mathcal{R}' \cap F'_1$. This contradicts the minimality of $| \mathcal{R} \cap F_1 |$. \end{proof} \begin{figure} \caption{Sliding $b_i$ along $P$.} \label{fig2} \end{figure} Now we consider a sufficient condition for a bridge position to be perturbed. \begin{lemma}\label{lem3} Suppose a separating arc $\gamma$ of $F \cap S$ cuts off a disk $\Gamma$ from $F$ such that \begin{enumerate} \item $\Gamma \cap B_1$ is a single disk $\Gamma_1$, and \item $\Gamma \cap B_2$ ($\ne \emptyset$) consists of bridge disks $D_1, \ldots, D_k$. \end{enumerate} Then the bridge position of $K$ is perturbed. \end{lemma} \begin{proof} Let $b_i$ ($i=1, \ldots, k$) denote the bridge for $D_i$ and $s_i = D_i \cap S$. Let $r_1, \ldots, r_{k+1}$ denote the bridges contained in $\Gamma_1$. We assume that $r_i$ is adjacent to $b_{i-1}$ and $b_i$. See Figure \ref{fig3}. Let $\mathcal{R} = R_1 \cup \cdots \cup R_{k+1}$ be a union of disjoint bridge disks, where $R_i$ is a bridge disk for $r_i$. In the following argument, we consider $\mathcal{R} \cap \Gamma_1$ except for $r_1 \cup \cdots \cup r_{k+1}$.
\begin{figure} \caption{The disk $\Gamma_1$ and bridge disks $D_i$'s.} \label{fig3} \end{figure} Suppose there is a circle component of $\mathcal{R} \cap \Gamma_1$. Let $\alpha$ ($\subset R_i \cap \Gamma_1$) be one which is innermost in $\Gamma_1$ and $\Delta$ be the innermost disk that $\alpha$ bounds. Let $\Delta'$ be the disk that $\alpha$ bounds in $R_i$. Then by replacing $\Delta'$ with $\Delta$, we can reduce $| \mathcal{R} \cap \Gamma_1 |$. So we assume that there is no circle component of $\mathcal{R} \cap \Gamma_1$. Suppose there is an arc component of $\mathcal{R} \cap \Gamma_1$ with both endpoints on the same arc of $\Gamma_1 \cap S$. Let $\alpha$ ($\subset R_i \cap \Gamma_1$) be one which is outermost in $\Gamma_1$ and $\Delta$ be the corresponding outermost disk in $\Gamma_1$ cut off by $\alpha$. The arc $\alpha$ cuts $R_i$ into two disks; let $\Delta'$ be the one that does not contain $r_i$. By replacing $\Delta'$ with $\Delta$, we can reduce $| \mathcal{R} \cap \Gamma_1 |$. So we assume that there is no arc component of $\mathcal{R} \cap \Gamma_1$ with both endpoints on the same arc of $\Gamma_1 \cap S$. If $\mathcal{R} \cap \Gamma_1 = \emptyset$, then $(R_i, D_i)$ is a desired cancelling pair and the bridge position of $K$ is perturbed. So we assume that $\mathcal{R} \cap \Gamma_1 \ne \emptyset$. Case $1$. Every arc of $\mathcal{R} \cap \Gamma_1$ is b-parallel. Let $\alpha$ be an arc of $\mathcal{R} \cap \Gamma_1$ which is outermost in some $R_j$ and $\Delta$ be the outermost disk that $\alpha$ cuts off from $R_j$. Say $\alpha$ is b-parallel to $r_i$ via a rectangle $P$. Then $P \cup \Delta$ is a new bridge disk for $r_i$, and $(P \cup \Delta, D_i)$ or $(P \cup \Delta, D_{i-1})$ is a cancelling pair. Case $2$. There is a non-b-parallel arc of $\mathcal{R} \cap \Gamma_1$. Consider only non-b-parallel arcs of $\mathcal{R} \cap \Gamma_1$.
Let $\beta$ denote one which is outermost in $\Gamma_1$ among them and $\Gamma_0$ denote the outermost disk cut off by $\beta$. Because there are at least two outermost disks, we take $\Gamma_0$ such that $\partial \Gamma_0$ contains some $s_i$. Let $r_l, \ldots, r_m$ be the bridges contained in $\Gamma_0$ and $\mathcal{R}' = R_l \cup \cdots \cup R_m$. In the following, we consider $\mathcal{R}' \cap \Gamma_0$ except for $r_l \cup \cdots \cup r_m$ and $\beta$. If $\mathcal{R}' \cap \Gamma_0 = \emptyset$, then there exists a cancelling pair $(R_i, D_i)$. Otherwise, by the outermost choice of $\beta$, every arc of $\mathcal{R}' \cap \Gamma_0$ is b-parallel. Let $\alpha$ be an arc of $\mathcal{R}' \cap \Gamma_0$ which is outermost in some $R_j$ and $\Delta$ be the outermost disk that $\alpha$ cuts off from $R_j$. Say $\alpha$ is b-parallel to $r_i$ via a rectangle $P$. Then $P \cup \Delta$ is a new bridge disk for $r_i$, and $(P \cup \Delta, D_i)$ or $(P \cup \Delta, D_{i-1})$ is a cancelling pair. \end{proof} \begin{figure} \caption{The disks $D_0$ and $D_1$ are cancelling disks, whereas $D_2$ and $D_3$ are not.} \label{fig4} \end{figure} \begin{remark} Let $K$ be an unknot in $n$-bridge position, with $K \cap B_1 = r_0 \cup r_1 \cup \cdots \cup r_{n-1}$ and $K \cap B_2 = b_0 \cup b_1 \cup \cdots \cup b_{n-1}$. We assume that the bridge $r_i$ ($i=0,1,\ldots,n-1$) is adjacent to $b_{i-1}$ and $b_i$, where we consider the index $i$ modulo $n$. Let $R_i$ and $D_i$ denote bridge disks for $r_i$ and $b_i$ respectively. Then $\mathcal{C} = R_0 \cup D_0 \cup \cdots \cup R_{n-1} \cup D_{n-1}$ is called a {\em complete cancelling disk system} if each $(R_i, D_{i-1})$ and each $(R_i, D_i)$ is a cancelling pair. Let $D$ denote a disk bounded by $K$. By following the argument of the proof of Theorem \ref{thm1}, we can assume that $D \cap B_2$ consists of bridge disks $D_0, \ldots, D_{n-1}$ and $D \cap B_1$ is a single disk, as in \cite{Hayashi-Shimokawa}.
Then if $n > 1$, the bridge position of $K$ admits a cancelling pair by Lemma \ref{lem3}, giving a proof of the uniqueness of the bridge position of the unknot. One may hope that $D_0 \cup \cdots \cup D_{n-1}$ extends to a complete cancelling disk system. But when $n \ge 4$, there exists an example such that $D_0 \cup \cdots \cup D_{n-1}$ does not extend to a complete cancelling disk system, as anticipated in \cite[Remark 1.2]{Hayashi-Shimokawa}. Some $D_i$ is not even a cancelling disk. This issue was one of the motivations for the present work. In Figure \ref{fig4}, $K$ is an unknot in $4$-bridge position bounding a disk $D$ and $D \cap B_2 = D_0 \cup D_1 \cup D_2 \cup D_3$. Each of the disks $D_0$ and $D_1$ is a cancelling disk. However, $D_2$ and $D_3$ are not cancelling disks because, say for $D_2$, an isotopy of $b_2$ along $D_2$ and then slightly into $B_1$ does not give a $3$-bridge position of $K$ (see \cite{Scharlemann-Tomova}, \cite{Lee}). \end{remark} \section{Proof of Theorem \ref{thm1}: First step} \label{sec4} Let $K$ be a knot such that every non-minimal bridge position of $K$ is perturbed. Let $L$ be a $(2,2q)$-cable link of $K$, with components $K_1$ and $K_2$. Suppose that $L$ is in non-minimal bridge position with respect to a bridge sphere $S$ bounding $3$-balls $B_1$ and $B_2$. Each $L \cap B_i$ $(i=1,2)$ is a trivial tangle. Since $L$ is a $2$-cable link, $L$ bounds an annulus, denoted by $A$. We take $A$ so that $|A \cap S|$ is minimal. \begin{claim}\label{claim1} One of the following holds. \begin{itemize} \item $L$ is the unlink in a non-minimal bridge position, hence perturbed. \item $A \cap B_2$ is t-incompressible in $B_2$. \end{itemize} \end{claim} \begin{proof} Suppose that $A \cap B_2$ is t-compressible. Let $\Delta$ be a t-compressing disk for $A \cap B_2$ and let $\alpha = \partial \Delta$. Let $F$ be the component of $A \cap B_2$ containing $\alpha$. Case $1$. $\alpha$ is essential in $A$.
A t-compression of $A$ along $\Delta$ gives two disjoint disks bounded by $K_1$ and $K_2$, respectively. Then $L$ is the unlink. Since the complement of an unlink has a reducing sphere, by \cite{Bachman-Schleimer} a bridge position of an unlink is a split union of bridge positions of its unknot components. Since a non-minimal bridge position of the unknot is perturbed, we see that $L$ is perturbed. Case $2$. $\alpha$ is inessential in $A$. Let $\Delta '$ be the disk that $\alpha$ bounds in $A$. Then $(\textrm{Int} \, \Delta ') \cap S \ne \emptyset$, since otherwise $\alpha$ would be inessential in $F$. By replacing $\Delta '$ of $A$ with $\Delta$, we get a new annulus $A'$ bounded by $L$ such that $| A' \cap S | < | A \cap S |$, contrary to the minimality of $| A \cap S |$. \end{proof} Since our goal is to show that the bridge position of $L$ is perturbed, from now on we assume that $L$ is not the unlink. By Claim \ref{claim1}, $A \cap B_2$ is t-incompressible in $B_2$. If $A \cap B_2$ is t-$\partial$-compressible in $B_2$, we do a t-$\partial$-compression. \begin{claim}\label{claim2} A t-$\partial$-compression preserves the t-incompressibility of $A \cap B_2$. \end{claim} \begin{proof} Let $\Delta$ be a t-$\partial$-compressing disk for $A \cap B_2$. Suppose that the surface after the t-$\partial$-compression along $\Delta$ is t-compressible. A t-compressing disk $D$ can be isotoped to be disjoint from two copies of $\Delta$ and the product region $\Delta \times I$. Then $D$ would be a t-compressing disk for $A \cap B_2$ before the t-$\partial$-compression, a contradiction. \end{proof} A t-$\partial$-compression simplifies a surface because it cuts the surface along a t-essential arc. So if we maximally t-$\partial$-compress $A \cap B_2$, we obtain a t-$\partial$-incompressible $A \cap B_2$.
Note that the effect on $A$ of a t-$\partial$-compression of $A \cap B_2$ is just pushing a neighborhood of an arc in $A$ into $B_1$, which is called {\em an isotopy of Type $A$} in \cite{Jaco}. After a maximal sequence of t-$\partial$-compressions, $A \cap B_2$ is both t-incompressible and t-$\partial$-incompressible by Claim \ref{claim2}. Then by applying Lemma \ref{lem1}, \begin{itemize} \item[($\ast$)] $A \cap B_2$ consists of bridge disks $D_i$'s and properly embedded disks $C_j$'s. \end{itemize} \section{Proof of Theorem \ref{thm1}: T-$\partial$-compression and its dual operation} \label{sec5} Take an annulus $A$ bounded by $L$ so that $(\ast )$ holds and the number $m$ of properly embedded disks $C_j$ is minimal. In this section, we will show that $m=0$, i.e. $A \cap B_2$ consists of bridge disks only. Suppose that $m > 0$. Then $A \cap B_1$ is homeomorphic to an $m$-punctured annulus. An argument similar to the proof of Claim \ref{claim1} shows that $A \cap B_1$ is t-incompressible. Since $A \cap B_1$ is not a union of disks, Lemma \ref{lem1} implies that $A \cap B_1$ is t-$\partial$-compressible. We can do a sequence of t-$\partial$-compressions on $A \cap B_1$ until it becomes t-$\partial$-incompressible. Note that the t-incompressibility of $A \cap B_1$ is preserved. Now we are going to define a t-$\partial$-compressing disk $\Delta_i$ ($i=0,1, \ldots, s$ for some $s$) and its {\em dual disk} $U_{i+1}$ inductively. Let $A_0 = A$. Let $\Delta_0$ be a t-$\partial$-compressing disk for $A_0 \cap B_1$ and $\alpha_0 = \Delta_0 \cap A_0$ and $\beta_0 = \Delta_0 \cap S$. By a t-$\partial$-compression along $\Delta_0$, a neighborhood of $\alpha_0$ is pushed along $\Delta_0$ into $B_2$ and thus a band $b_1$ is created in $B_2$. Let $A_1$ denote the resulting annulus bounded by $L$. Let $U_1$ be a dual disk for $\Delta_0$, that is, a disk such that an isotopy of Type A along $U_1$ recovers a surface isotopic to $A_0$. For the next step, let $\mathcal{U}_1 = U_1$.
Let $\Delta_1$ be a t-$\partial$-compressing disk for $A_1 \cap B_1$ and $\alpha_1 = \Delta_1 \cap A_1$ and $\beta_1 = \Delta_1 \cap S$. After a t-$\partial$-compression along $\Delta_1$, a band $b_2$ is created in $B_2$. Let $A_2$ denote the resulting annulus bounded by $L$. There are three cases to consider. Case $1$. $\beta_1$ intersects the arc $\mathcal{U}_1 \cap S$ more than once. The band $b_2$ cuts off small disks $U_{2,1}, U_{2,2}, \ldots, U_{2,k_2}$ from $\mathcal{U}_1$, which are mutually parallel along the band. We designate any one among the small disks, say $U_{2,1}$, as the dual disk $U_2$. Let $\mathcal{R}_2 = \bigcup^{k_2}_{j=2} U_{2,j}$ be the union of others. Case $2$. $\beta_1$ intersects $\mathcal{U}_1 \cap S$ once. We take the subdisk that $b_2$ cuts off from $\mathcal{U}_1$ as the dual disk $U_2$, and let $\mathcal{R}_2 = \emptyset$ in this case. Case $3$. $\beta_1$ does not intersect $\mathcal{U}_1 \cap S$. We take a dual disk $U_2$ freely, and let $\mathcal{R}_2 = \emptyset$ in this case. In any case, let $\mathcal{U}_2 = \mathcal{U}_1 \cup U_2 - \mathrm{int}\, \mathcal{R}_2$. In general, assume that $A_i$ and $\mathcal{U}_i$ are defined. Let $\Delta_i$ be a t-$\partial$-compressing disk for $A_i \cap B_1$ and $\alpha_i = \Delta_i \cap A_i$ and $\beta_i = \Delta_i \cap S$. After a t-$\partial$-compression along $\Delta_i$, a band $b_{i+1}$ is created in $B_2$. Let $A_{i+1}$ denote the resulting annulus bounded by $L$. {\bf Case a}. $\beta_i$ intersects the collection of arcs $\mathcal{U}_i \cap S$ more than once. The band $b_{i+1}$ cuts off small disks $U_{i+1,1}, U_{i+1,2}, \ldots, U_{i+1,k_{i+1}}$ from $\mathcal{U}_i$, which are mutually parallel along the band. We designate any one among the small disks, say $U_{i+1,1}$, as the dual disk $U_{i+1}$. Let $\mathcal{R}_{i+1} = \bigcup^{k_{i+1}}_{j=2} U_{i+1,j}$ be the union of others. {\bf Case b}. $\beta_i$ intersects $\mathcal{U}_i \cap S$ once. 
We take the subdisk that $b_{i+1}$ cuts off from $\mathcal{U}_i$ as the dual disk $U_{i+1}$, and let $\mathcal{R}_{i+1} = \emptyset$ in this case. {\bf Case c}. $\beta_i$ does not intersect $\mathcal{U}_i \cap S$. We take a dual disk $U_{i+1}$ freely, and let $\mathcal{R}_{i+1} = \emptyset$ in this case. In any case, let $\mathcal{U}_{i+1} = \mathcal{U}_i \cup U_{i+1} - \mathrm{int}\, \mathcal{R}_{i+1}$. Later, we do the isotopy of Type A, dual to the t-$\partial$-compression, in reverse order along $U_{i+1}, U_i, \ldots, U_1$. Let us call this the {\em dual operation} for convenience. Let $b_{i+1}$ be the band mentioned above, cutting off $U_{i+1,1}, U_{i+1,2}, \ldots, U_{i+1,k_{i+1}}$ from $\mathcal{U}_i$ (in {\bf Case a}). When the dual operation along $U_{i+1}$ is done, we modify every $U_j$ and $\mathcal{U}_j$ ($j \le i$) containing any $U_{i+1, s}$ ($s > 1$) of $\mathcal{R}_{i+1}$, by replacing each $U_{i+1,s}$ ($s > 1$) with the union of a subband of $b_{i+1}$ between $U_{i+1,s}$ and $U_{i+1}$ ($= U_{i+1,1}$) and a copy of $U_{i+1}$, and doing a slight isotopy. We remark that, although it is not illustrated in Figure \ref{fig5}, some $U_j$'s and $\mathcal{U}_j$'s temporarily become immersed when the subband passes through some removed region, say $U_{r,s}$ ($s > 1$). But the $U_{r,s}$ ($s > 1$) is also modified as we proceed with the dual operations, and the $U_j$'s and $\mathcal{U}_j$'s again become embedded. (In Figure \ref{fig5} and Figure \ref{fig6}, the dual operation along $U_{i+1}$ is done, and the dual operation along $U_i$ is not done yet.) Actually, before the sequence of dual operations, $U_j$'s and $\mathcal{U}_j$'s ($j \le k$) are modified in advance so that $\mathcal{U}_k$ is disjoint from the union of a certain band $b_{k+1}$ and a disk $C_l$ (which will be explained later). See Figure \ref{fig6}.
\begin{figure} \caption{Modifying $U_j$'s and $\mathcal{U}_j$'s.} \label{fig5} \end{figure} \begin{figure} \caption{Making $\mathcal{U}_k$ disjoint from $b_{k+1} \cup C_l$.} \label{fig6} \end{figure} The t-$\partial$-compressing disk $\Delta_i$ is taken to be disjoint from two copies of $\Delta_j$ ($j < i$). Moreover, for every $i$ we can draw $\alpha_i$ on $A$ ($= A_0$). Since the t-$\partial$-compression along $\Delta_i$ is equivalent to cutting along $\alpha_i$ and the sequence $\Delta_0, \Delta_1, \ldots, \Delta_s$ is maximal, every $c_j = \partial C_j$ is incident to some $\alpha_i$. \begin{claim}\label{claim3} For each $c_j$, there exists an $\alpha_i$ that connects $c_j$ to another component of $A \cap S$. \end{claim} \begin{proof} Suppose that there exists a $c_j$ which is not connected to any other component of $A \cap S$. That is, for such $c_j$, every $\alpha_i$ incident to $c_j$ connects $c_j$ to itself. Then after a maximal sequence of t-$\partial$-compressions on $A$, some non-disk components will remain. This contradicts Lemma \ref{lem1}. \end{proof} Let $k$ be the smallest index such that $\alpha_k$ connects some $c_j$, say $c_l$, to another component (another $c_j$ or some $D_i \cap S$). If $k = 0$, then by a t-$\partial$-compression along $\Delta_0$, either $C_l$ and another $C_j$ are merged into one properly embedded disk, or $C_l$ and a bridge disk are merged into a new bridge disk. This contradicts the minimality of $m$. So we assume that $k \ge 1$. Suppose that we performed t-$\partial$-compressions along $\Delta_0, \Delta_1, \ldots, \Delta_k$. Consider the small disks that the band $b_{k+1}$ cuts off from $\mathcal{U}_k$. They are parallel along $b_{k+1}$. We replace the small disks one by one, the nearest one to $C_l$ first, so that $\mathcal{U}_k$ is disjoint from $b_{k+1} \cup C_l$. Let $\Delta$ be the small disk nearest to $C_l$. Let $\Delta '$ be the union of a subband of $b_{k+1}$ and $C_l$ that $\Delta \cap b_{k+1}$ cuts off from $b_{k+1} \cup C_l$.
For every $U_j$ and $\mathcal{U}_j$ ($j \le k$) containing $\Delta$, we replace $\Delta$ with $\Delta '$. Then again let $\Delta$ be the (next) small disk nearest to $C_l$ and we repeat the above operation until $\mathcal{U}_k$ is disjoint from $b_{k+1} \cup C_l$. Now we do the dual operation on $A_k$ in reverse order along $U_k, U_{k-1}, \ldots, U_1$. Let $A'_{i-1}$ ($i=1, \ldots, k$) be the resulting annulus after the dual operation along $U_i$. The shape of the dual disk $U_k$ is possibly changed, but the numbers of circle components and arc components of $A'_{k-1} \cap S$ are the same as those of $A_{k-1} \cap S$. After the dual operation along $U_k$, it is necessary to modify some $U_j$'s and $\mathcal{U}_j$'s ($j \le k-1$) further as in Figure \ref{fig6} so that $U_{k-1}$ is disjoint from $b_{k+1} \cup C_l$. Continuing the sequence of dual operations in this way, the number of circle components of $A'_0 \cap S$ is also $m$. Then, because $b_{k+1}$ is disjoint from $U_1, \ldots, U_k$, we can do the t-$\partial$-compression of $A'_0$ along $\Delta_k$ first, and the number $m$ of properly embedded disks $C_j$ is reduced, contrary to our assumption. We have shown the following claim. \begin{claim}\label{claim4} $A \cap B_2$ consists of bridge disks. \end{claim} \section{Proof of Theorem \ref{thm1}: Finding a cancelling pair} \label{sec6} By Claim \ref{claim4}, $A \cap B_2$ consists of bridge disks. Let $d_0, \ldots, d_{k-1}$ and $e_0, \ldots, e_{l-1}$ be the bridges of $K_1 \cap B_2$ and $K_2 \cap B_2$ respectively, indexed consecutively along each component. Let $D_i \subset A \cap B_2$ ($i = 0, \ldots, k-1$) be the bridge disk for $d_i$ and $s_i = D_i \cap S$, and let $E_j \subset A \cap B_2$ ($j = 0, \ldots, l-1$) be the bridge disk for $e_j$ and $t_j = E_j \cap S$. Let $\mathcal{R} = R_1 \cup \cdots \cup R_{k+l}$ be a complete bridge disk system for $L \cap B_1$ and let $F = A \cap B_1$. We consider $\mathcal{R} \cap F$ except for the bridges $L \cap B_1$.
If there is an inessential circle component of $\mathcal{R} \cap F$ in $F$, it can be removed by a standard innermost disk argument. If there is an essential circle component of $\mathcal{R} \cap F$ in $F$, then $L$ would be the unlink as in Case $1$ of the proof of Claim \ref{claim1}. So we assume that there is no circle component of $\mathcal{R} \cap F$. If there is an inessential arc component of $\mathcal{R} \cap F$ in $F$ with both endpoints on the same $s_i$ (or $t_j$), then the arc can be removed by a standard outermost disk argument. So we assume that there is no arc component of $\mathcal{R} \cap F$ with both endpoints on the same $s_i$ (or $t_j$). If $\mathcal{R} \cap F = \emptyset$, we easily get a cancelling pair, say $(R_m, D_i)$ or $(R_m, E_j)$, so we assume that $\mathcal{R} \cap F \ne \emptyset$. Let $\alpha$ denote an arc of $\mathcal{R} \cap F$ which is outermost in some $R_m$ and let $\Delta$ denote the outermost disk that $\alpha$ cuts off from $R_m$. By Lemma \ref{lem2}, we may assume that $\alpha$ is not b-parallel. Suppose that one endpoint of $\alpha$ is in, say, $s_{i_1}$ and the other is in $s_{i_2}$ with the cyclic distance $d(i_1, i_2) = \min \{ |i_1 - i_2|,\, k - |i_1 - i_2| \}$ greater than $1$. Then after the t-$\partial$-compression along $\Delta$, we get a subdisk of $A$ satisfying the assumption of Lemma \ref{lem3}, hence $L$ is perturbed. So without loss of generality, we assume that one endpoint of $\alpha$ is in $s_0$ and the other is in $t_0$. Let $A_1$ be the annulus obtained from $A$ by the t-$\partial$-compression along $\Delta$ and $F_1 = A_1 \cap B_1$. The bridge disks $D_0$ and $E_0$ are connected by a band; let $P_1$ be the resulting rectangle whose four edges are $d_0$, $e_0$, and two arcs in $S$, say $p_1$ and $p_2$. Let $\alpha_1$ denote an arc of $\mathcal{R} \cap F_1$ which is outermost in some $R_m$ and let $\Delta_1$ denote the outermost disk cut off by $\alpha_1$.
If at least one endpoint of $\alpha_1$ is contained in $p_1$ or $p_2$, or one endpoint of $\alpha_1$ is in $s_{i_1}$ (respectively $t_{j_1}$) and the other is in $s_{i_2}$ (respectively $t_{j_2}$), then similarly as above, \begin{itemize} \item either $\alpha_1$ is inessential with both endpoints on the same component of $F_1 \cap S$, or \item $\alpha_1$ is b-parallel, or \item Lemma \ref{lem3} can be applied. \end{itemize} Hence we may assume that one endpoint of $\alpha_1$ is in $s_i$ ($i \ne 0$) and the other is in $t_j$ ($j \ne 0$). After the t-$\partial$-compression along $\Delta_1$, $D_i$ and $E_j$ are merged into a rectangle. Arguing in this way, each $D_i$ ($i=0, \ldots, k-1$) is merged with some $E_j$, since $\mathcal{R} \cap s_i = \emptyset$ would give us a cancelling pair. Moreover, we see that $k = l$. After $k$ successive t-$\partial$-compressions on $A$, the new annulus $A'$ intersects $B_1$ and $B_2$ alternately, in rectangles. Note that $b(L) = 2 b(K)$ and $L$ is in $2k$-bridge position. Since $L$ is in non-minimal bridge position, $k > b(K)$. So by the assumption of the theorem, the bridge position of $K_i$ ($i=1,2$) is perturbed. Let $(D, E)$ be a cancelling pair for $K_1$ with $D \subset B_1$ and $E \subset B_2$. However, $D$ and $E$ may intersect $K_2$. Let $P_i$ and $P_{i+1}$ be any adjacent rectangles of $A'$ in $B_1$ and $B_2$ respectively. We remove any unnecessary intersection of $D \cap P_i$ and $E \cap P_{i+1}$, the nearest one to $K_2$ first, by isotopies along subdisks of $P_i$ and $P_{i+1}$ respectively. See Figure \ref{fig7} for an example. Then $(D, E)$ becomes a cancelling pair for the bridge position of $L$, as desired. \begin{figure} \caption{Isotoping $D$ and $E$.} \label{fig7} \end{figure} {\noindent \bf Acknowledgments.} The author was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2018R1D1A1A09081849). \end{document}
\begin{document} \title{$2$- and $3$-modular Lattice Wiretap Codes in Small Dimensions } \author{Fuchun~Lin \and Fr\'ed\'erique~Oggier \and Patrick~Sol\'e } \institute{Fuchun Lin and Fr\'ed\'erique Oggier \at Division of Mathematical Sciences, School of Physical and Mathematical Sciences, Nanyang Technological University, 21 Nanyang Link, Singapore 637371 \\ \email{ [email protected] and [email protected]} \and Patrick Sol\'e \at Telecom ParisTech, CNRS, UMR 5141, Dept Comelec, 46 rue Barrault 75634 Paris cedex 13, France and Mathematics Department, King AbdulAziz University, Jeddah, Saudi Arabia.\\ \email{ [email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} A recent line of work on lattice codes for Gaussian wiretap channels introduced a new lattice invariant called secrecy gain as a code design criterion which captures the confusion that lattice coding produces at an eavesdropper. Following up on the study of unimodular lattice wiretap codes \cite{preprint}, this paper investigates $2$- and $3$-modular lattices and compares them with unimodular lattices. Most even $2$- and $3$-modular lattices are found to have better performance (that is, a higher secrecy gain) than the best unimodular lattices in dimension $n$, $2\leq n\leq 23$. Odd $2$-modular lattices are considered too, and three lattices are found to outperform the best unimodular lattices. \keywords{Wiretap codes \and Gaussian channel \and Lattice codes \and Secrecy gain \and Modular lattices\and Theta series } \end{abstract} \section{Introduction} \label{sec:introduction} In his seminal work, Wyner introduced the wiretap channel \cite{Wyner}, a discrete memoryless channel where the sender Alice transmits confidential messages to a legitimate receiver Bob, in the presence of an eavesdropper Eve, who has only partial access to what Bob sees.
Reliable and confidential communication between Alice and Bob is shown to be achievable simultaneously, by exploiting the physical difference between the channel to Bob and that to Eve, without the use of cryptographic means. Since then, many results of an information-theoretic nature have been found for various classes of wiretap channels, ranging from Gaussian point-to-point channels to relay networks (see e.g. \cite{theoretic security} for a survey), capturing the trade-off between reliability and secrecy and aiming at determining the highest information rate that can be achieved with perfect secrecy, the so-called \textit{secrecy capacity}. Coding results focusing on constructing concrete codes that can be implemented over a specific channel are much fewer (see \cite{OW-84,TDCMM-07} for wiretap codes dealing with channels with erasures, \cite{Polar} for Polar wiretap codes and \cite{wiretap Rayleigh fading} for wiretap codes over Rayleigh fading channels). In this paper, we will focus on Gaussian wiretap channels, whose secrecy capacity was established in \cite{IEEE Gaussian wiretap channel}. Examples of existing Gaussian wiretap codes were designed for binary inputs, as in \cite{LDPC Gaussian wiretap codes,nested codes}. A different approach was adopted in \cite{ISITA}, where lattice codes were proposed, using as design criterion a new lattice invariant called \textit{secrecy gain}, defined as the maximum of its \textit{secrecy function} (see Section \ref{sec:preliminaries}), which was shown to characterize the confusion at the eavesdropper. A recent study on a new design criterion called \textit{flatness factor} confirms that, to confuse Eve, the secrecy gain should be maximized \cite{flatness factor}. This suggests the study of the secrecy gain of lattices as a way to understand how to design a good Gaussian lattice wiretap code.
Belfiore and Sol\'{e} \cite{ITW} discovered a symmetry point, called \textit{weak secrecy gain}, in the secrecy function of \textit{unimodular} lattices (generalized to all \textit{$\ell$-modular} lattices \cite{Gaussian wiretap codes}) and conjectured that the weak secrecy gain is actually the secrecy gain. Anne-Maria Ernvall-Hyt\"{o}nen \cite{EH,EHKissingnumber} developed a method to prove or disprove the conjecture for unimodular lattices. To date, the secrecy gains of a special class of unimodular lattices called \textit{extremal unimodular} lattices, and of all unimodular lattices in dimensions up to $23$, have been computed \cite{Gaussian wiretap codes,preprint}. The asymptotic behavior of the average weak secrecy gain as a function of the dimension $n$ was investigated, and an achievable lower bound on the secrecy gain of even unimodular lattices was given \cite{Gaussian wiretap codes}. Numerical upper bounds on the secrecy gains of unimodular lattices in general, and of unimodular lattices constructed from self-dual binary codes, were given for comparison with the achievable lower bound \cite{ITW2012}. This paper studies the weak secrecy gain of $2$- and $3$-modular lattices. Preliminary work \cite{ISITversion} showed that most of the known even $2$- and $3$-modular lattices in dimensions up to $24$ have secrecy gains bigger than those of the best unimodular lattices. After recalling how to compute the weak secrecy gain of even $2$- and $3$-modular lattices using the theory of modular forms, we extend our study to a class of odd $2$-modular lattices constructed from self-dual codes. We propose two methods to compute their weak secrecy gains and find that three of these lattices have secrecy gains bigger than those of the best unimodular lattices. We then conclude that, at least in dimensions up to $23$, $2$- and $3$-modular lattices are a better option than unimodular lattices. The remainder of this paper is organized as follows.
In Section \ref{sec:preliminaries}, we first give a brief introduction to modular lattices and their \textit{theta series}, and recall the definition of the secrecy gain together with the previous results concerning this lattice invariant. The main results are given in Section \ref{sec:results}. Two approaches to compute the theta series of modular lattices are given, one making use of the theory of modular forms, the other utilizing the connection between the theta series and the weight enumerator of self-dual codes. The computed weak secrecy gains of several $2$- and $3$-modular lattices are then compared with those of the best unimodular lattices in Section \ref{sec:comparision}. In Section \ref{sec:conclu}, we summarize our results and discuss directions for future work. \section{Preliminaries and previous results}\label{sec:preliminaries} Consider a Gaussian wiretap channel, which is modeled as follows: Alice wants to send data to Bob over a Gaussian channel whose noise variance is given by $\sigma_b^2$. Eve is the eavesdropper trying to intercept data through another Gaussian channel with noise variance $\sigma_e^2$, where $\sigma_b^2< \sigma_e^2$ in order to have a positive secrecy capacity \cite{IEEE Gaussian wiretap channel}. More precisely, the model is \begin{equation}\label{channel model} \begin{array}{cc} \mathbf{y}&=\mathbf{x}+\mathbf{v_b}\\ \mathbf{z}&=\mathbf{x}+\mathbf{v_e},\\ \end{array} \end{equation} where $\mathbf{x}\in \mathbb{R}^n$ is the transmitted signal, $\mathbf{y}$ and $\mathbf{z}$ are the received signals at Bob's, respectively Eve's, side, and $\mathbf{v_b}$ and $\mathbf{v_e}$ denote the Gaussian noise vectors at Bob's, respectively Eve's, side; each component of these vectors has zero mean and variance $\sigma_b^2$, respectively $\sigma_e^2$. In this paper, we choose $\mathbf{x}$ to be a codeword coming from a specially designed lattice of dimension $n$, namely, we consider lattice coding.
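For readers who wish to experiment, the model \eqref{channel model} is straightforward to simulate numerically. The following sketch (the dimension $n$ and the noise levels $\sigma_b$, $\sigma_e$ are illustrative choices of ours, not values from the paper) generates Bob's and Eve's observations of one transmitted point:

```python
import numpy as np

# Sketch of the Gaussian wiretap channel model: y = x + v_b, z = x + v_e,
# with noise variances sigma_b^2 < sigma_e^2 (all values illustrative).
rng = np.random.default_rng(2024)

n = 8                                  # dimension of the codeword
sigma_b, sigma_e = 0.5, 1.5            # Bob's and Eve's noise standard deviations

x = rng.integers(-2, 3, size=n).astype(float)   # stand-in for a lattice codeword

v_b = rng.normal(0.0, sigma_b, size=n)          # Gaussian noise on Bob's channel
v_e = rng.normal(0.0, sigma_e, size=n)          # Gaussian noise on Eve's channel

y = x + v_b                                      # Bob's received signal
z = x + v_e                                      # Eve's received signal
```

On average, Eve's observation lies farther from the transmitted point than Bob's, which is the physical asymmetry the lattice code is designed to exploit.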
Let us thus start by recalling some concepts concerning lattices, in particular, \textit{modular lattices}. A \textit{lattice} $\Lambda$ is an additive subgroup of $\mathbb{R}^n$, which can be described in terms of its \textit{generator matrix} $M$ by $$ \Lambda=\{\mathbf{x}=\mathbf{u}M|\mathbf{u}\in \mathbb{Z}^m\}, $$ where $$ M=\left ( \begin{array}{cccc} v_{11}&v_{12}&\cdots&v_{1n}\\ v_{21}&v_{22}&\cdots&v_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ v_{m1}&v_{m2}&\cdots&v_{mn}\\ \end{array} \right ) $$ and the row vectors $\mathbf{v}_i=(v_{i1},\cdots,v_{in}),\ i=1,\ 2,\ \cdots,\ m$ form a basis of the lattice $\Lambda$. The matrix $$ G=MM^T, $$ where $M^T$ denotes the transpose of $M$, is called the \textit{Gram matrix} of the lattice. It is easy to see that the $(i,j)$th entry of $G$ is the inner product of the $i$th and $j$th row vectors of $M$, denoted by $$G_{(i,j)}=\mathbf{v}_i\cdot \mathbf{v}_j.$$ The \textit{determinant} $\mbox{det}(\Lambda)$ of a lattice $\Lambda$ is the determinant of the matrix $G$, which is independent of the choice of the matrix $M$. A \textit{fundamental region} for a lattice is a building block which, when repeated many times, fills the whole space with just one lattice point in each copy. There are many different ways of choosing a fundamental region for a lattice $\Lambda$, but the volume of the fundamental region is uniquely determined and called the \textit{volume} $\mbox{vol}(\Lambda)$ of $\Lambda$, which is exactly $\sqrt{\mbox{det}(\Lambda)}$. Let us see an example of a fundamental region of a lattice. A \textit{Voronoi cell} $\mathcal{V}_{\Lambda}(\mathbf{x})$ of a lattice point $\mathbf{x}$ in $\Lambda$ consists of the points in the space that are closer to $\mathbf{x}$ than to any other lattice point of $\Lambda$. The \textit{dual} of a lattice $\Lambda$ of dimension $n$ is defined to be $$ \Lambda^*=\{\mathbf{x}\in \mathbb{R}^n: \mathbf{x}\cdot \mathbf{\lambda}\in \mathbb{Z} , \mbox{ for all } \mathbf{\lambda}\in \Lambda\}.
$$ A lattice $\Lambda$ is called an \textit{integral lattice} if $\Lambda\subset \Lambda^*$. The norm of any lattice point in an integral lattice $\Lambda$ is always an integer. If the norm is even for every lattice point, then $\Lambda$ is called an \textit{even} lattice. Otherwise, it is called an \textit{odd} lattice. A lattice is said to be \textit{equivalent}, or geometrically similar, to its dual if it differs from its dual only by a rotation, a reflection and/or a change of scale. An integral lattice that is equivalent to its dual is called a \textit{modular} lattice. Alternatively, as first defined by H.-G. Quebbemann \cite{modular lattices}, an $n$-dimensional integral lattice $\Lambda$ is modular if there exists a similarity $\sigma$ of $\mathbb{R}^n$ such that $\sigma(\Lambda^{*})=\Lambda$. If $\sigma$ multiplies norms by $\ell$, $\Lambda$ is said to be $\ell$-\textit{modular}. The determinant of an $\ell$-modular lattice $\Lambda$ of dimension $n$ is given by \begin{equation}\label{determinant} \mbox{det}(\Lambda)=\ell^{\frac{n}{2}}. \end{equation} This is because, on the one hand, $\mbox{det}(\Lambda^{*})=\mbox{det}(\Lambda)^{-1}$ by definition and, on the other hand, $\ell^n\mbox{det}(\Lambda^{*})=\mbox{det}(\Lambda)$ since $\sigma(\Lambda^{*})=\Lambda$. When $\ell=1$, $\mbox{det}(\Lambda)=1$ and we recover the definition of a unimodular lattice as an integral lattice whose determinant is $1$. \begin{example} \begin{equation}\label{Z+lZ} C^{\ell}=\bigoplus_{d|\ell}\sqrt{d}\mathbb{Z},\ \ell=1,2,3,5,6,7,11,14,15,23 \end{equation} is an $\ell$-modular lattice \cite{N.J.A Sloane}. When $\ell$ is a prime number, $C^{\ell}=\mathbb{Z}\oplus\sqrt{\ell}\mathbb{Z}$ is a two-dimensional $\ell$-modular lattice with the similarity map $\sigma$ taking $(x,y)$ to $(\sqrt{\ell}y,\sqrt{\ell}x)$. \end{example} We will use some terminology from classical error correction codes in this paper. Unfamiliar readers can refer to \cite{error correction codes}.
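For the two-dimensional lattices $C^{\ell}=\mathbb{Z}\oplus\sqrt{\ell}\mathbb{Z}$ with $\ell$ prime, the defining property $\sigma(\Lambda^{*})=\Lambda$ and the determinant formula (\ref{determinant}) can be checked mechanically. The sketch below (an illustration, not part of the constructions in this paper) verifies that the basis change taking $\sigma(\Lambda^{*})$ to $\Lambda$ is unimodular, that $\sigma$ multiplies norms by $\ell$, and that $\mbox{det}(\Lambda)=\ell^{2/2}=\ell$.

```python
import numpy as np

def checks_for(ell):
    """Numerically check ell-modularity of Z + sqrt(ell)Z (ell prime):
    the similarity sigma(x, y) = (sqrt(ell) y, sqrt(ell) x) maps the dual
    lattice onto the lattice, and det(Lambda) = ell^{n/2} = ell for n = 2."""
    M = np.diag([1.0, np.sqrt(ell)])          # generator matrix of the lattice
    M_dual = np.linalg.inv(M).T               # generator matrix of the dual
    S = np.sqrt(ell) * np.array([[0.0, 1.0],
                                 [1.0, 0.0]])  # matrix of sigma (row-vector convention)
    # sigma(dual) equals the lattice iff the basis change U is integral with det +-1
    U = M_dual @ S @ np.linalg.inv(M)
    unimodular = bool(np.allclose(U, np.round(U))
                      and np.isclose(abs(np.linalg.det(U)), 1.0))
    det = float(np.linalg.det(M @ M.T))       # determinant of the Gram matrix
    # sigma multiplies norms by ell iff S S^T = ell * I
    norm_ok = bool(np.allclose(S @ S.T, ell * np.eye(2)))
    return unimodular, det, norm_ok

results = {ell: checks_for(ell) for ell in (2, 3, 5, 7)}
```

For each prime $\ell$ tested, the basis change works out to the coordinate swap $\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$, which is indeed unimodular.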
We will also assume basic knowledge of algebraic number theory \cite{algebraic number theory}. There is a classical way of constructing $\ell$-modular lattices from self-dual codes called Construction A. Let $K=\mathbb{Q}(\sqrt{\mu})$ be a quadratic imaginary extension of the rational field $\mathbb{Q}$ constructed by adjoining to it the square root of a square-free negative integer $\mu$. The ring of integers $\mathfrak{O}_K$ of $K$ is given by \begin{equation}\label{eq: ringofintegers} \mathfrak{O}_K=\mathbb{Z}[\theta],\ \theta=\left\{ \begin{array}{ll} \frac{1+\sqrt{\mu}}{2}, &\mu\equiv 1\ (\mbox{mod }4) \\ \sqrt{\mu}, &\mbox{otherwise.} \\ \end{array} \right. \end{equation} Let $p$ be a prime number. Then the quotient ring $R=\mathfrak{O}_K/p\mathfrak{O}_K$ is given by \begin{equation}\label{eq: quotientringR} R=\left\{ \begin{array}{ll} \mathbb{F}_p\times\mathbb{F}_p, &p \mbox{ is split in }K; \\ \mathbb{F}_p+u\mathbb{F}_p\mbox{ with }u^2=0, &p\mbox{ is ramified in }K; \\ \mathbb{F}_{p^2}, &p\mbox{ is inert in }K. \\ \end{array} \right. \end{equation} Let $k$ be a positive integer. Let $$\rho: \mathfrak{O}_K^k\rightarrow R^k$$ be the map of component-wise reduction modulo $p\mathfrak{O}_K$. Then the pre-image $\rho^{-1}(C)$ of a self-dual code $C$ over $R$ of length $k$ with carefully chosen $\mu$ and $p$ and possibly a re-scaling can give rise to a real $\ell$-modular lattice of dimension $2k$ \cite{CBachoc,PSole}. Examples will be specified in the sequel. \begin{definition} The theta series of a lattice $\Lambda$ is defined by $$\Theta_{\Lambda}(\tau)=\sum_{\mathbf{\lambda}\in \Lambda}q^{||\mathbf{\lambda}||^2},\ q=e^{\pi i \tau},\ \tau\in \mathcal{H},$$ where $||\mathbf{\lambda}||^2=\mathbf{\lambda}\cdot\mathbf{\lambda}$ is called the (squared) norm of $\mathbf{\lambda}$ and $\mathcal{H}=\{a+ib\in \mathbb{C}|b>0\}$ denotes the upper half plane. \end{definition} The theta series of an integral lattice has a neat representation.
Since the norms are all integers, we can combine the terms with the same norm and write \begin{equation}\label{equ:theta series} \Theta_{\Lambda}(\tau)=\sum_{m=0}^{\infty}A_mq^m, \end{equation} where $A_m$ counts the number of lattice points with norm $m$. These theta series are actually \textit{modular forms} \cite{modular forms}. We will also need the following functions and formulae from analytic number theory for our discussion, for which interested readers can refer to \cite{analytic number theory}. \begin{definition} The Jacobi theta functions are defined as follows: $$ \left\{ \begin{array}{ll} \vartheta_2(\tau)&=\sum_{m\in\mathbb{Z}}q^{(m+\frac{1}{2})^2},\\ \vartheta_3(\tau)&=\sum_{m\in\mathbb{Z}}q^{m^2},\\ \vartheta_4(\tau)&=\sum_{m\in\mathbb{Z}}(-q)^{m^2}.\\ \end{array} \right. $$ \end{definition} \begin{definition} The Dedekind eta function is defined by $$ \eta(\tau)=q^{\frac{1}{12}}\prod^\infty_{m=1}(1-q^{2m}). $$ \end{definition} The Jacobi theta functions and the Dedekind eta function are connected as follows \cite{analytic number theory}: \begin{equation}\label{theta and eta} \left\{ \begin{array}{ll} \vartheta_2(\tau)&=\frac{2\eta(2\tau)^2}{\eta(\tau)},\\ \vartheta_3(\tau)&=\frac{\eta(\tau)^5}{\eta(\frac{\tau}{2})^2\eta(2\tau)^2},\\ \vartheta_4(\tau)&=\frac{\eta(\frac{\tau}{2})^2}{\eta(\tau)}.\\ \end{array} \right. \end{equation} Lattice encoding for the wiretap channel (\ref{channel model}) is done via a generic coset coding strategy \cite{ISITA}: let $\Lambda_e\subset\Lambda_b$ be two nested lattices. A $k$-bit message is mapped to a coset in $\Lambda_b/\Lambda_e$, after which a vector is randomly chosen from the coset as the encoded word. The lattice $\Lambda_e$ can be interpreted as introducing confusion for Eve, while $\Lambda_b$ is intended to ensure reliability for Bob.
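The identities (\ref{theta and eta}) can be checked numerically on the imaginary axis $\tau=iy$, which is the only region needed later. The following sketch (truncation depths are arbitrary choices, more than sufficient for double precision) evaluates both sides by direct summation.

```python
import numpy as np

def theta2(y, terms=60):
    # Jacobi theta_2 at tau = i*y, so q = e^{pi i tau} = e^{-pi y}
    m = np.arange(-terms, terms + 1)
    return float(np.sum(np.exp(-np.pi * y * (m + 0.5) ** 2)))

def theta3(y, terms=60):
    m = np.arange(-terms, terms + 1)
    return float(np.sum(np.exp(-np.pi * y * m ** 2)))

def theta4(y, terms=60):
    # (-q)^{m^2} = (-1)^m q^{m^2} since m^2 and m have the same parity
    m = np.arange(-terms, terms + 1)
    return float(np.sum((-1.0) ** m * np.exp(-np.pi * y * m ** 2)))

def eta(y, terms=200):
    # Dedekind eta at tau = i*y: q^{1/12} * prod_m (1 - q^{2m})
    q = np.exp(-np.pi * y)
    m = np.arange(1, terms + 1)
    return float(q ** (1 / 12) * np.prod(1 - q ** (2 * m)))

# residuals of the three theta-eta identities at sample points tau = i*y
checks = []
for y in (0.5, 1.0, 1.7):
    checks.append(theta2(y) - 2 * eta(2 * y) ** 2 / eta(y))
    checks.append(theta3(y) - eta(y) ** 5 / (eta(y / 2) ** 2 * eta(2 * y) ** 2))
    checks.append(theta4(y) - eta(y / 2) ** 2 / eta(y))
```

All nine residuals vanish to machine precision, as expected for exact identities.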
Since a message now corresponds to a coset of codewords instead of a single codeword, the probability of correct decoding is obtained by summing over the whole coset (supposing that there is no power constraint and that the whole lattice is used for encoding). Here we are interested in computing $P_{c,e}$, Eve's probability of correct decision, and want to minimize this probability. It was shown in \cite{ISITA,Gaussian wiretap codes} that to minimize $P_{c,e}$ is to minimize \begin{equation}\label{secrecy theta series} \sum_{\mathbf{t}\in \Lambda_e}e^{-||\mathbf{t}||^2/2\sigma_e^2}, \end{equation} which is easily recognized as the theta series of $\Lambda_e$ at $\tau=\frac{i}{2\pi\sigma_e^2}$. We hence only care about values of $\tau$ such that $\tau=yi,\ y>0$. Motivated by the above argument, the confusion brought by the lattice $\Lambda_e$ with respect to no coding (namely, using a scaled version of the lattice $\mathbb{Z}^n$ with the same volume) is measured as follows: \begin{definition} \cite{ISITA} Let $\Lambda$ be an $n$-dimensional lattice of volume $v^n$. The secrecy function of $\Lambda$ is given by $$\Xi_{\Lambda}(\tau)=\frac{\Theta_{v\mathbb{Z}^n}(\tau)}{\Theta_{\Lambda}(\tau)}, \tau=yi, y>0 .$$ The \textit{secrecy gain} is then the maximal value of the secrecy function with respect to $\tau$ and is denoted by $\chi_{\Lambda}$. \end{definition} \begin{figure} \caption{The secrecy function of the $2$-modular lattice $BW_{16}$.} \label{fig:secrecy function of BW_16} \end{figure} $\ell$-modular lattices were shown to have a symmetry point, called the \textit{weak secrecy gain} $\chi^w_{\Lambda}$, at $\tau=\frac{i}{\sqrt{\ell}}$ in their secrecy function \cite{Gaussian wiretap codes}. See Fig. \ref{fig:secrecy function of BW_16} for an example, where $y$ is plotted in dB to transform the multiplicative symmetry point into an additive symmetry point. $BW_{16}$ is a $2$-modular lattice. One can see there is a symmetry point at $y=-\frac{3}{2}$ dB, which corresponds to $y=\frac{\sqrt{2}}{2}$.
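The symmetry can be observed concretely on the smallest $2$-modular lattice $C^2=\mathbb{Z}\oplus\sqrt{2}\mathbb{Z}$, whose secrecy function is $\vartheta_3^2(\sqrt{2}\tau)/\left(\vartheta_3(\tau)\vartheta_3(2\tau)\right)$. The sketch below (a toy example chosen for brevity; $C^2$ is not a good wiretap lattice, its weak secrecy gain being below $1$) verifies the multiplicative symmetry $\Xi(iy)=\Xi(i/(2y))$ about $y=\frac{1}{\sqrt{2}}$ numerically.

```python
import numpy as np

def theta3(y, terms=60):
    """theta_3 evaluated at tau = i*y, i.e. sum_m q^{m^2} with q = e^{-pi y}."""
    m = np.arange(-terms, terms + 1)
    return float(np.sum(np.exp(-np.pi * y * m ** 2)))

def secrecy_function(y, ell=2):
    """Secrecy function of the 2-dimensional ell-modular lattice Z + sqrt(ell)Z
    at tau = i*y: Theta_{v Z^2} / Theta_{Lambda} with v = ell^{1/4}."""
    return theta3(np.sqrt(ell) * y) ** 2 / (theta3(y) * theta3(ell * y))

# the secrecy function of a 2-modular lattice satisfies Xi(iy) = Xi(i/(2y)),
# i.e. it is symmetric about the point y = 1/sqrt(2)
sym_residuals = [secrecy_function(y) - secrecy_function(1 / (2 * y))
                 for y in (0.3, 0.9, 2.0)]

# value at the symmetry point: the weak secrecy gain of C^2
weak_gain = secrecy_function(1 / np.sqrt(2))
```

The symmetry follows from the Jacobi transformation $\vartheta_3(i/t)=\sqrt{t}\,\vartheta_3(it)$, so the residuals vanish to machine precision.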
This paper is devoted to computing the weak secrecy gain of $2$- and $3$-modular lattices in small dimensions. \section{The weak secrecy gain of $2$- and $3$-modular lattices in small dimensions}\label{sec:results} The key to the computation of secrecy gains is the theta series of the corresponding lattice. We present here two approaches to obtain a closed form expression of the theta series of $2$- and $3$-modular lattices: the modular form approach and the weight enumerator approach. The modular form approach relies on the fact that the theta series of an $\ell$-modular lattice belongs to the space of modular forms generated by some basic functions, which gives a decomposition formula. The formula for even $2$- and $3$-modular lattices is comparatively simple, while the formula for $\ell$-modular lattices in general, including the odd lattices, is rather complicated. A weight enumerator approach is added in the computation for odd $2$-modular lattices in the second subsection. This approach exploits the connection between the weight enumerator of a self-dual code and the theta series of a lattice constructed from this code. But calculating the weight enumerator of the code adds considerable workload. \subsection{Even $2$- and $3$-modular lattices} The theta series of modular lattices are modular forms, which, roughly speaking, are functions that stay ``invariant'' under transformations by certain subgroups of the group SL$_2(\mathbb{Z})$ \cite{modular forms}. The theory of modular forms shows that such a theta series can be expressed as a polynomial in two basic modular forms. We only need a few terms of a theta series to compute the coefficients of this expression and obtain a closed form expression of the theta series. The following lemma plays a crucial role in our calculation of the theta series of $2$- and $3$-modular lattices.
\begin{lemma}\cite{modular lattices}\label{lemma} The theta series of an even $\ell$-modular lattice of dimension $n=2k$ when $\ell=1,\ 2,\ 3$ belongs to a space of \textit{modular forms} of weight $k$ generated by the functions $\Theta_{2k_0}^{\lambda}(\tau)\Delta_{2k_1}^{\mu}(\tau)$ with integers $\lambda,\ \mu\geq0$ satisfying $k_0\lambda+k_1\mu=k$, where for $\ell=1,\ 2,\ 3$, $k_0=4,\ 2,\ 1$ respectively, $k_1=\frac{24}{1+\ell}$, $\Theta_{2k_0}(\tau)$ denotes the theta series of the modular lattices $E_8,\ D_4\mbox{ and }A_2$, respectively, and $ \Delta_{2k_1}(\tau)=\left(\eta(\tau)\eta(\ell \tau)\right)^{k_1}$. \end{lemma} \begin{example} \label{even unimodular} If $\ell=1$, we read from Lemma \ref{lemma} that $k_0=4$, $k_1=\frac{24}{2}=12$, $\Theta_{2k_0}(\tau)=\Theta_{E_8}(\tau)$ and $\Delta_{2k_1}(\tau)=\eta^{24}(\tau)$. We then deduce that if $\Lambda$ is an even unimodular lattice of dimension $n=2k$ then \begin{equation}\label{equ:even unimodular} \Theta_{\Lambda}(\tau)=\sum_{4\lambda+12\mu=k}a_{\mu}\Theta_{E_8}^{\lambda}(\tau)\Delta_{24}^{\mu}(\tau). \end{equation} \end{example} The formula (\ref{equ:even unimodular}) was adopted in \cite{ITW,Gaussian wiretap codes} to compute the secrecy gains of several even unimodular lattices. In order to write the secrecy function, we need to have the theta series of $\mathbb{Z}^n$ scaled to the right volume. Now it follows from (\ref{determinant}) that \begin{equation}\label{theta series of vZ^n} \Theta_{\ell^{\frac{1}{4}}\mathbb{Z}^n}(\tau)=\vartheta_3^n(\sqrt{\ell}\tau).
\end{equation} According to Lemma \ref{lemma}, the theta series of an even $2$-modular lattice $\Lambda$ of dimension $n=2k$ can be written as \begin{equation}\label{2-modular} \Theta_{\Lambda}(\tau)=\sum_{2\lambda+8\mu=k}a_{\mu}\Theta_{D_4}^{\lambda}(\tau)\Delta_{16}^{\mu}(\tau), \end{equation} where \begin{equation}\label{D_4} \begin{array}{ll} \Theta_{D_4}(\tau)&=\frac{1}{2}\left( \vartheta_3^4(\tau)+\vartheta_4^4(\tau) \right)\\ &=1+24q^2+24q^4+96q^6+\cdots\\ \end{array} \end{equation} and $$ \Delta_{16}(\tau)=\left(\eta(\tau)\eta(2\tau)\right)^{8}. $$ By (\ref{theta and eta}), we can write $\Delta_{16}(\tau)$ in terms of Jacobi theta functions and compute the first few terms: \begin{equation}\label{Delta_16} \begin{array}{ll} \Delta_{16}(\tau)&=\frac{1}{256}\vartheta_2^8(\tau)\vartheta_3^4(\tau)\vartheta_4^4(\tau)\\ &=q^2-8q^4+12q^6+\cdots.\\ \end{array} \end{equation} The secrecy function of an even $2$-modular lattice $\Lambda$ of dimension $n$ is then written as $$ \Xi_{\Lambda}(\tau)=\frac{\vartheta_3^n(\sqrt{2}\tau)}{\sum_{2\lambda+8\mu=k}a_{\mu}\Theta_{D_4}^{\lambda}(\tau)\Delta_{16}^{\mu}(\tau)}, $$ or more conveniently, $$ \begin{array}{ll} 1/\Xi_{\Lambda}(\tau)&=\sum_{2\lambda+8\mu=k}a_{\mu}\frac{\Theta_{D_4}^{\lambda}(\tau)\Delta_{16}^{\mu}(\tau)}{\vartheta_3^n(\sqrt{2}\tau)}\\ &=\sum_{2\lambda+8\mu=k}a_{\mu}\left(\frac{\Theta_{D_4}(\tau)}{\vartheta_3^4(\sqrt{2}\tau)}\right)^{\lambda}\left(\frac{\Delta_{16}(\tau)}{\vartheta_3^{16}(\sqrt{2}\tau)}\right)^{\mu}. \end{array} $$ Now we only need to know the coefficients $a_{\mu}$ in order to compute the weak secrecy gain of a $2$-modular lattice. Let us compute an example to show how the coefficients $a_\mu$'s in (\ref{2-modular}) are computed. By substituting (\ref{D_4}) and (\ref{Delta_16}) into (\ref{2-modular}), we have a formal sum with coefficients represented by the $a_{\mu}$'s. 
Then by comparing this formal sum with (\ref{equ:theta series}), we obtain a number of linear equations in the $a_{\mu}$'s. When we have enough equations, the $a_{\mu}$'s can be recovered by solving a linear system. \begin{example} $BW_{16}$ is an even lattice with minimum norm $4$. The theta series of $BW_{16}$ thus begins as $$\Theta_{BW_{16}}(\tau)=1+0q^2+A_4q^4+\cdots,\ A_4\neq 0.$$ On the other hand, by (\ref{2-modular}), (\ref{D_4}) and (\ref{Delta_16}), $$ \begin{array}{ll} \Theta_{BW_{16}}(\tau)&=a_0\Theta_{D_4}^{4}(\tau)+a_1\Delta_{16}(\tau)\\ &=a_0(1+24q^2+\cdots)^4+a_1(q^2+\cdots)\\ &=a_0(1+96q^2+\cdots)+a_1(q^2+\cdots)\\ &=a_0+(96a_0+a_1)q^2+\cdots.\\ \end{array} $$ We now have two linear equations in two unknowns $a_0$ and $a_1$ $$ \left \{ \begin{array}{cc} a_0&=1\\ 96a_0+a_1&=0\\ \end{array} \right. $$ which gives $a_0=1$ and $a_1=-96$, yielding the theta series \begin{equation} \Theta_{BW_{16}}=\Theta_{D_4}^4-96\Delta_{16}. \end{equation} \end{example} The weak secrecy gain of $BW_{16}$ can then be approximated using Mathematica \cite{Mathematica} (see Fig. \ref{fig:secrecy function of BW_16}): \begin{equation} \chi^w_{BW_{16}}=2.20564. \end{equation} Similarly, according to Lemma \ref{lemma}, the theta series of an even $3$-modular lattice $\Lambda$ of dimension $n=2k$ can be written as \begin{equation}\label{3-modular} \Theta_{\Lambda}(\tau)=\sum_{\lambda+6\mu=k}a_{\mu}\Theta_{A_2}^{\lambda}(\tau)\Delta_{12}^{\mu}(\tau), \end{equation} where \begin{equation}\label{A_2} \begin{array}{ll} \Theta_{A_2}(\tau)&=\vartheta_2(2\tau)\vartheta_2(6\tau)+\vartheta_3(2\tau)\vartheta_3(6\tau)\\ &=1+6q^2+0q^4+6q^6+\cdots\\ \end{array} \end{equation} and $$ \Delta_{12}(\tau)=\left(\eta(\tau)\eta(3\tau)\right)^{6}. $$ We can also compute the first few terms of $\Delta_{12}(\tau)$: \begin{equation}\label{Delta_12} \Delta_{12}(\tau)=q^2-6q^4+9q^6+\cdots.
\end{equation} The secrecy function of an even $3$-modular lattice $\Lambda$ of dimension $n$ is $$ \begin{array}{ll} 1/\Xi_{\Lambda}(\tau)&=\sum_{\lambda+6\mu=k}\frac{a_{\mu}\Theta_{A_2}^{\lambda}(\tau)\Delta_{12}^{\mu}(\tau)}{\vartheta_3^n(\sqrt{3}\tau)}\\ &=\sum_{\lambda+6\mu=k}a_{\mu}\left(\frac{\Theta_{A_2}(\tau)}{\vartheta_3^2(\sqrt{3}\tau)}\right)^{\lambda}\left(\frac{\Delta_{12}(\tau)}{\vartheta_3^{12}(\sqrt{3}\tau)}\right)^{\mu}. \end{array} $$ Table \ref{table:2,3-modular} summarizes the weak secrecy gains of the even $2$- and $3$-modular lattices computed. The basic information about these lattices, such as the minimum norm and kissing number, can be found in \cite{catalogue}. \begin{table} \centering \caption{\label{table:2,3-modular} Weak secrecy gains of the known even $2$- and $3$-modular lattices} \begin{tabular}{|c|c|c|c|c|} \hline dim & lattice &$\ell$ & theta series &$\chi^w_{\Lambda}$ \\ \hline \hline $2$ & $A_{2}$ &$3$ &$\Theta_{A_2}$ &$1.01789$\\ \hline \hline $4$ & $D_4$ &$2$ &$\Theta_{D_4}$ &$1.08356$\\ \hline \hline $12$ & $K_{12}$ &$3$ &$\Theta_{A_2}^6-36\Delta_{12}$ &$1.66839$\\ \hline \hline $14$ & $C^2\times G(2,3)$ &$3$ &$\Theta_{A_2}^7-42\Theta_{A_2}\Delta_{12}$ &$1.85262$\\ \hline \hline $16$ & $BW_{16}$ &$2$ &$\Theta_{D_4}^4-96\Delta_{16} $&$2.20564$\\ \hline \hline $20$ & $HS_{20}$ &$2$ &$\Theta_{D_4}^5-120\Theta_{D_4}\Delta_{16} $&$3.03551$\\ \hline \hline $22$ & $A_2\times A_{11}$ &$3$ &$\Theta_{A_2}^{11}-66\Theta_{A_2}^5\Delta_{12}$ &$3.12527$\\ \hline $24$ & $L_{24.2}$ &$3$ &$\Theta_{A_2}^{12}-72\Theta_{A_2}^{6}\Delta_{12}$ &$3.92969$\\ &&&$-216\Delta_{12}^2$&\\ \hline \end{tabular} \end{table} \subsection{Odd $2$-modular lattices} Odd $2$-modular lattices were constructed in \cite{CBachoc,PSole} via Construction A. They are, at the time of writing, the only known instances of odd $2$-modular lattices.
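Before turning to the odd case, note that the coefficient matching used for $BW_{16}$ above is a small linear-algebra computation. The following sketch (using the truncated expansions (\ref{D_4}) and (\ref{Delta_16}) quoted above) recovers $a_0=1$, $a_1=-96$ and, as a byproduct, the kissing number $4320$ of $BW_{16}$.

```python
import numpy as np

def mul(a, b, n):
    """Multiply two truncated power series in x = q^2 (coefficient arrays)."""
    return np.convolve(a, b)[:n]

N = 4
theta_D4 = np.array([1, 24, 24, 96])   # Theta_{D_4} = 1 + 24q^2 + 24q^4 + 96q^6 + ...
delta16 = np.array([0, 1, -8, 12])     # Delta_16   =      q^2 -  8q^4 + 12q^6 + ...

# basis of the decomposition Theta = sum a_mu Theta_{D_4}^lambda Delta_16^mu
# with 2*lambda + 8*mu = k = 8, i.e. Theta_{D_4}^4 and Delta_16
b0 = theta_D4
for _ in range(3):
    b0 = mul(b0, theta_D4, N)          # Theta_{D_4}^4, truncated

# BW_16 is even with minimum norm 4, so its theta series starts 1 + 0*q^2 + ...;
# match the constant and q^2 coefficients to solve for a0, a1
A = np.array([[b0[0], delta16[0]],
              [b0[1], delta16[1]]], dtype=float)
a0, a1 = np.linalg.solve(A, np.array([1.0, 0.0]))

theta_BW16 = a0 * b0 + a1 * delta16    # first four coefficients of Theta_{BW_16}
```

The resulting series $1 + 4320q^4 + 61440q^6 + \cdots$ confirms the closed form $\Theta_{BW_{16}}=\Theta_{D_4}^4-96\Delta_{16}$; the $q^4$ coefficient $4320$ is the kissing number of $BW_{16}$.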
There is a natural connection between the theta series of the lattice constructed from a code $C$ via Construction A and an appropriate weight enumerator of the code $C$. We will exploit this connection to obtain a closed form expression for the theta series of these lattices. For the rest of the paper, we will let $K=\mathbb{Q}(\sqrt{-2})$ and $R=\mathfrak{O}_K/3\mathfrak{O}_K$, where the notations are explained in Section \ref{sec:preliminaries}. According to (\ref{eq: ringofintegers}), since $-2\equiv 2\mbox{ mod }4$, the ring of integers $\mathfrak{O}_K$ of $K$ is $\mathfrak{O}_K=\mathbb{Z}[\sqrt{-2}]=\{a+b\sqrt{-2}|a,b\in\mathbb{Z}\}$. Now we consider the decomposition of the prime $3$ in $\mathfrak{O}_K$. Since $3=(1+\sqrt{-2})(1-\sqrt{-2})$ and $(1+\sqrt{-2})\mathfrak{O}_K\neq(1-\sqrt{-2})\mathfrak{O}_K$, the ideal $3\mathfrak{O}_K$ splits. According to (\ref{eq: quotientringR}), the quotient ring $R=\mathfrak{O}_K/3\mathfrak{O}_K=\mathbb{F}_3\times \mathbb{F}_3$. Note that the ring $\mathbb{F}_3+v\mathbb{F}_3$ with $v^2=1$ is isomorphic to the ring $\mathbb{F}_3\times \mathbb{F}_3$, through an isomorphism $\delta:\ a(v-1)+b(v+1)\mapsto (a,b)$. We will identify $R=\mathfrak{O}_K/3\mathfrak{O}_K$ with the ring $\mathbb{F}_3+v\mathbb{F}_3$ and use the two notations interchangeably. In particular, we will identify the coset $a+3\mathfrak{O}_K$ with $a\in \mathbb{F}_3$, and the coset $\sqrt{-2}+3\mathfrak{O}_K$ with $v$. Let $C$ be a code of length $k$ over $R=\mathbb{F}_3+v\mathbb{F}_3=\mathfrak{O}_K/3\mathfrak{O}_K$, which is by definition an $R$-submodule of $R^k$, giving rise to a real lattice of dimension $n=2k$. According to Construction A, $\rho^{-1}(C)$ is a lattice over $\mathfrak{O}_K$\footnote{A $k$-dimensional lattice can be defined in a more general setting as a free abelian group of rank $k$.}, say, with generator matrix $$ \left ( \begin{array}{ccc} \lambda_{11}&\cdots&\lambda_{1k}\\ \vdots&\ddots&\vdots\\ \lambda_{k1}&\cdots&\lambda_{kk}\\ \end{array} \right ).
$$ Let $\frac{1}{\sqrt{3}}\rho^{-1}(C)_{real}$ denote the real lattice defined by the generator matrix $$ \frac{1}{\sqrt{3}}\left ( \begin{array}{ccccc} \mbox{Re}(\lambda_{11})&\mbox{Im}(\lambda_{11})&\cdots&\mbox{Re}(\lambda_{1k})&\mbox{Im}(\lambda_{1k})\\ \mbox{Im}(\lambda_{11})&\mbox{Re}(\lambda_{11})&\cdots&\mbox{Im}(\lambda_{1k})&\mbox{Re}(\lambda_{1k})\\ &&\cdots&&\\ \mbox{Re}(\lambda_{k1})&\mbox{Im}(\lambda_{k1})&\cdots&\mbox{Re}(\lambda_{kk})&\mbox{Im}(\lambda_{kk})\\ \mbox{Im}(\lambda_{k1})&\mbox{Re}(\lambda_{k1})&\cdots&\mbox{Im}(\lambda_{kk})&\mbox{Re}(\lambda_{kk})\\ \end{array} \right ). $$ Now we look at the theta series of the lattice $\frac{1}{\sqrt{3}}\rho^{-1}(C)_{real}$ constructed from a code $C$ over $R$. \begin{definition}\cite{PSole} The \textit{length function} $l_K$ of an element $r$ in $R=\mathbb{F}_3+v\mathbb{F}_3=\mathfrak{O}_K/3\mathfrak{O}_K$ is defined by \begin{equation}\label{lengthfunction} l_K(r)=\inf\{x\bar{x}|x\in r\subset \mathfrak{O}_K\}, \end{equation} where $\bar{x}$ is the complex conjugation of $x$. \end{definition} One computes the length of the nine elements of $R$ as follows: \begin{equation}\label{length} \left \{ \begin{array}{ll} l_K(0)&=0\\ l_K(\pm 1)&=1\\ l_K(\pm v)&=2\\ l_K(\pm 1\pm v)&=3.\\ \end{array} \right. \end{equation} \begin{definition}\cite{PSole} The \textit{length composition} $n_l(\mathbf{x})$, $l=0,1,2,3$ of a vector $\mathbf{x}$ in $R^n$ counts the number of coordinates of length $l$. The \textit{length weight enumerator} of a code $C$ over $R$ is then defined by \begin{equation}\label{lengthweightenumerator} \mbox{lwe}_C(a,b,c,d)=\sum_{\mathbf{c}\in C}a^{n_0(\mathbf{c})}b^{n_1(\mathbf{c})}c^{n_2(\mathbf{c})}d^{n_3(\mathbf{c})}. 
\end{equation} \end{definition} Define four theta series $\theta_l$, $l=0,1,2,3$ corresponding to the four different lengths of elements of $R$: \begin{equation}\label{thetatolength} \left \{ \begin{array}{ll} \theta_0&=\sum_{x\in 3\mathfrak{O}_K}q^{\frac{x\bar{x}}{3}}\\ \theta_1&=\sum_{x\in 1+3\mathfrak{O}_K}q^{\frac{x\bar{x}}{3}}\\ \theta_2&=\sum_{x\in \sqrt{-2}+3\mathfrak{O}_K}q^{\frac{x\bar{x}}{3}}\\ \theta_3&=\sum_{x\in 1+\sqrt{-2}+3\mathfrak{O}_K}q^{\frac{x\bar{x}}{3}}.\\ \end{array} \right. \end{equation} Recalling that $\mathfrak{O}_K=\{a+b\sqrt{-2}|a,b\in\mathbb{Z}\}$, the theta series are written as double sums. \begin{equation}\label{infinitesum} \left \{ \begin{array}{ll} \theta_0&=\sum_{a\in\mathbb{Z}}\sum_{b\in\mathbb{Z}}q^{3a^2+6b^2}\\ \theta_1&=\sum_{a\in\mathbb{Z}}\sum_{b\in\mathbb{Z}}q^{3(a+\frac{1}{3})^2+6b^2}\\ \theta_2&=\sum_{a\in\mathbb{Z}}\sum_{b\in\mathbb{Z}}q^{3a^2+6(b+\frac{1}{3})^2}\\ \theta_3&=\sum_{a\in\mathbb{Z}}\sum_{b\in\mathbb{Z}}q^{3(a+\frac{1}{3})^2+6(b+\frac{1}{3})^2}.\\ \end{array} \right. \end{equation} We already know how to handle the $lm^2$ type of infinite sum, namely, $$ \sum_{m\in \mathbb{Z}}q^{lm^2}=\sum_{m\in \mathbb{Z}}(q^l)^{m^2}=\vartheta_3(l\tau). $$ For the $(3m+1)^2$ type of infinite sum, we first observe that, on one hand, $$ \sum_{m\in\mathbb{Z}}q^{m^2}=\sum_{m\in\mathbb{Z}}q^{(3m)^2}+\sum_{m\in\mathbb{Z}}q^{(3m+1)^2}+\sum_{m\in\mathbb{Z}}q^{(3m-1)^2} $$ and, on the other hand, $$ \sum_{m\in\mathbb{Z}}q^{(3m+1)^2}=\sum_{m\in\mathbb{Z}}q^{(3m-1)^2}. 
$$ We then conclude that $$ \begin{array}{ll} \sum_{m\in\mathbb{Z}}q^{(3m+1)^2}&=\frac{1}{2}\left( \sum_{m\in\mathbb{Z}}q^{m^2} - \sum_{m\in\mathbb{Z}}q^{(3m)^2}\right)\\ &=\frac{1}{2}\left(\vartheta_3(\tau)-\vartheta_3(9\tau)\right).\\ \end{array} $$ The four theta series defined above are then computed as \begin{equation}\label{thetacomputed} \left \{ \begin{array}{ll} \theta_0&=\vartheta_3(3\tau)\vartheta_3(6\tau)\\ \theta_1&=\frac{1}{2}\left(\vartheta_3(\frac{\tau}{3})-\vartheta_3(3\tau)\right)\vartheta_3(6\tau)\\ \theta_2&=\frac{1}{2}\vartheta_3(3\tau)\left(\vartheta_3(\frac{2\tau}{3})-\vartheta_3(6\tau)\right)\\ \theta_3&=\frac{1}{4}\left(\vartheta_3(\frac{\tau}{3})-\vartheta_3(3\tau)\right)\left(\vartheta_3(\frac{2\tau}{3})-\vartheta_3(6\tau)\right).\\ \end{array} \right. \end{equation} \begin{theorem} \begin{equation}\label{Macwillim} \Theta_{\frac{1}{\sqrt{3}}\rho^{-1}(C)}(q)=\mbox{lwe}_C(\theta_0,\theta_1,\theta_2,\theta_3). \end{equation} \end{theorem} \begin{proof}The theta series of the lattice $\frac{1}{\sqrt{3}}\rho^{-1}(C)$ is by definition $$ \begin{array}{ll} \Theta_{\frac{1}{\sqrt{3}}\rho^{-1}(C)}(\tau)&=\sum_{\mathbf{\lambda}\in\frac{1}{\sqrt{3}}\rho^{-1}(C)}q^{||\mathbf{\lambda}||^2}\\ &=\sum_{\mathbf{c}\in C}\sum_{\mathbf{x}\in\frac{1}{\sqrt{3}}(\mathbf{c}+3\mathfrak{O}_K^k)}q^{\mathbf{x}\cdot\bar{\mathbf{x}}}\\ &=\sum_{\mathbf{c}\in C}\theta_0^{n_0(\mathbf{c})}\theta_1^{n_1(\mathbf{c})}\theta_2^{n_2(\mathbf{c})}\theta_3^{n_3(\mathbf{c})}\\ &=\mbox{lwe}_C(\theta_0,\theta_1,\theta_2,\theta_3).\\ \end{array} $$ \end{proof} As was remarked in \cite{CBachoc} (Remark 3.8) and later proved in \cite{PSole}, if $C$ is a self-dual code over $R$ with respect to the Hermitian inner product, then $\frac{1}{\sqrt{3}}\rho^{-1}(C)_{real}$ is an odd $2$-modular lattice. \begin{example}\label{8-dimensional} A Hermitian self-dual code $C$ over $R$ of length $4$ was constructed in \cite{PSole}.
It is a linear code with a generator matrix \begin{equation}\label{generatormatrix} G^H=\left[ \begin{array}{cccc} 1&0&v&-1-v\\ 0&1&-1+v&v\\ \end{array} \right]. \end{equation} One can generate all the $81$ codewords and compute the length weight enumerator: $$ \begin{array}{ll} \mbox{lwe}_C(a,b,c,d)&=a^4+4a^2d^2+16abcd+8ad^3+8b^3d\\ &\ +4b^2c^2+24bcd^2+8c^3d+8d^4.\\ \end{array} $$ The theta series of the $8$-dimensional odd $2$-modular lattice is then computed by (\ref{Macwillim}) (using computer software, for example Mathematica \cite{Mathematica}, to output the first few terms). $$ \begin{array}{l} \Theta_{\frac{1}{\sqrt{3}}\rho^{-1}(C)}(\tau)\\ =\vartheta_3(3\tau)^4\vartheta_3(6\tau)^4\\ \ +\frac{1}{4}\vartheta_3(3\tau)^3\left(\vartheta_3(\frac{\tau}{3})-\vartheta_3(3\tau)\right)\left(\vartheta_3(\frac{2\tau}{3})-\vartheta_3(6\tau)\right)^4\\ \ +\frac{3}{2}\vartheta_3(3\tau)^2\vartheta_3(6\tau)^2\left(\vartheta_3(\frac{\tau}{3})-\vartheta_3(3\tau)\right)^2\left(\vartheta_3(\frac{2\tau}{3})-\vartheta_3(6\tau)\right)^2\\ \ +\frac{5}{8}\vartheta_3(3\tau)\vartheta_3(6\tau)\left(\vartheta_3(\frac{\tau}{3})-\vartheta_3(3\tau)\right)^3\left(\vartheta_3(\frac{2\tau}{3})-\vartheta_3(6\tau)\right)^3\\ \ +\frac{1}{4}\vartheta_3(6\tau)^3\left(\vartheta_3(\frac{\tau}{3})-\vartheta_3(3\tau)\right)^4\left(\vartheta_3(\frac{2\tau}{3})-\vartheta_3(6\tau)\right)\\ \ +\frac{1}{32}\left(\vartheta_3(\frac{\tau}{3})-\vartheta_3(3\tau)\right)^4\left(\vartheta_3(\frac{2\tau}{3})-\vartheta_3(6\tau)\right)^4\\ =1+32q^2+128q^3+240q^4+\cdots.\\ \end{array} $$ \end{example} This method has the advantage of being self-contained in its deduction. But the computation of the weight enumerator of the code $C$ is tedious and, worse still, as the dimension increases, it may become infeasible. Let us therefore go back to the first approach, adopted in the previous subsection. First we need a formula similar to Lemma \ref{lemma} which deals with the theta series of odd $2$-modular lattices.
There is indeed a formula which deals with the theta series of $\ell$-modular lattices, including the odd ones, for $\ell=1,2,3,5,6,7,11,14,15,23$, discovered by E. M. Rains and N. J. A. Sloane. \begin{lemma}\cite{N.J.A Sloane}\label{lemma2} Define $$ f_1(\tau)=\Theta_{C^{\ell}}(\tau), $$ where the lattice $C^{\ell}$ is as defined in (\ref{Z+lZ}). Let $\eta^{\ell}(\tau)=\Pi_{d|\ell}\eta(d\tau)$ and let $D_{\ell}=24,16,12,8,8,6,4,4,4,2$ corresponding to $\ell=1,2,3,5,6,7,11,14,15,23$. Define $$ f_2(\tau)=\left\{ \begin{array}{ll} \left (\frac{\eta^{\ell}(\frac{\tau}{2})\eta^{\ell}(2\tau)}{\eta^{\ell}(\tau)^2} \right)^{\frac{D_{\ell}}{\mbox{dim }C^{\ell}}},&\ell\mbox{ is odd;}\\ \left (\frac{\eta^{(\frac{\ell}{2})}(\frac{\tau}{2})\eta^{(\frac{\ell}{2})}(4\tau)}{\eta^{(\frac{\ell}{2})}(\tau)\eta^{(\frac{\ell}{2})}(2\tau)} \right)^{\frac{D_{\ell}}{\mbox{dim }C^{\ell}}} ,&\ell\mbox{ is even.}\\ \end{array} \right. $$ The theta series of an $\ell$-modular lattice $\Lambda$ of dimension $k\mbox{dim}(C^{(\ell)})$ can be written as \begin{equation}\label{l-modular decomposition} \Theta_{\Lambda}(\tau)=f_1(\tau)^k\sum_{i=0}^{\lfloor k\mbox{ ord}_1(f_1)\rfloor}a_if_2(\tau)^i, \end{equation} where $\mbox{ord}_1(f_1)$ is the \textit{divisor} of the modular form $f_1(\tau)$, which, in this case, is $\frac{1}{8}\sum_{d|\ell}d$ if $\ell$ is odd and $\frac{1}{6}\sum_{d|\ell}d$ if $\ell$ is even. \end{lemma} Let us now take $\ell=2$. Then $C^2=\mathbb{Z}\oplus\sqrt{2}\mathbb{Z}$, hence \begin{equation}\label{2modularoddf1} \begin{array}{ll} f_1(\tau)&=\Theta_{C^2}(\tau)\\ &=\vartheta_3(\tau)\vartheta_3(2\tau)\\ &=1+2q+2q^2+4q^3+\cdots.\\ \end{array} \end{equation} Next, $\mbox{ord}_1(f_1)$ is computed to be $\frac{1}{2}$, and $D_2=16$. Finally, since $2$ is even, $$ f_2(\tau)=\left (\frac{\eta(\frac{\tau}{2})\eta(4\tau)}{\eta(\tau)\eta(2\tau)} \right)^{\frac{16}{2}}=\frac{\vartheta_2^2(2\tau)\vartheta_4^2(\tau)}{4\vartheta_3^2(\tau)\vartheta_3^2(2\tau)}.
$$ We observe that the denominator of $f_2(\tau)$ is $4f_1^2(\tau)$. We then define a function \begin{equation}\label{2modularoddDelta4} \begin{array}{ll} \Delta_4(\tau)&\triangleq f_1^2(\tau)f_2(\tau)\\ &=\frac{1}{4}\vartheta_2^2(2\tau)\vartheta_4^2(\tau)\\ &=q-4q^2+4q^3+\cdots,\\ \end{array} \end{equation} and rewrite (\ref{l-modular decomposition}) in the form of (\ref{2-modular}): \begin{equation}\label{2-modularodd} \Theta_{\Lambda}(\tau)=\sum_{i=0}^{\lfloor\frac{k}{2}\rfloor}a_if_1^{k-2i}(\tau)\Delta_4^i(\tau). \end{equation} For lattices in small dimensions, the first few terms of the theta series can be computed numerically using computer software, for example Magma \cite{Magma}. \begin{example}\label{grammatrix} A generator matrix of the $8$-dimensional odd $2$-modular lattice in Example \ref{8-dimensional} can be computed from the generator matrix (\ref{generatormatrix}) of the code $C$: $$ M=\frac{1}{\sqrt{3}}\left[ \begin{array}{cccccccc} 1&0&0&0&0&\sqrt{2}&-1&-\sqrt{2}\\ 0&\sqrt{2}&0&0&1&0&-1&-\sqrt{2}\\ 0&0&1&0&-1&\sqrt{2}&0&\sqrt{2}\\ 0&0&0&\sqrt{2}&1&-\sqrt{2}&1&0\\ 0&0&0&0&3&0&0&0\\ 0&0&0&0&0&3\sqrt{2}&0&0\\ 0&0&0&0&0&0&3&0\\ 0&0&0&0&0&0&0&3\sqrt{2}\\ \end{array} \right]. $$ For convenience, we compute the Gram matrix $$ MM^{T}=\left[ \begin{array}{cccccccc} 2&1&0&-1&0&2&-1&-2\\ 1&2&-1&0&1&0&-1&-2\\ 0&-1&2&-1&-1&2&0&2\\ -1&0&-1&2&1&-2&1&0\\ 0&1&-1&1&3&0&0&0\\ 2&0&2&-2&0&6&0&0\\ -1&-1&0&1&0&0&3&0\\ -2&-2&2&0&0&0&0&6\\ \end{array} \right] $$ and input it to Magma to generate the lattice $\Lambda$. The first few terms of $\Theta_{\frac{1}{\sqrt{3}}\rho^{-1}(C)}(\tau)$ can be obtained (by the command ThetaSeries($\Lambda$,0,4);): $$ \Theta_{\frac{1}{\sqrt{3}}\rho^{-1}(C)}(q)=1+32q^2+128q^3+240q^4+\cdots.
$$ Now in dimension $8$, the theta series of a $2$-modular lattice can be written as \begin{equation}\label{2-modular8-dimensional} \begin{array}{l} a_0f_1(\tau)^4+a_1f_1(\tau)^2\Delta_4(\tau)+a_2\Delta_4(\tau)^2\\ =a_0(1+8q+32q^2+\cdots)+a_1(q+0q^2+\cdots)\\ \ +a_2(q^2+\cdots)\\ =a_0+(8a_0+a_1)q+(32a_0+0+a_2)q^2+\cdots.\\ \end{array} \end{equation} We then have three linear equations in three unknowns $a_0$, $a_1$ and $a_2$ $$ \left \{ \begin{array}{cc} a_0&=1\\ 8a_0+a_1&=0\\ 32a_0+a_2&=32,\\ \end{array} \right. $$ which gives $a_0=1$, $a_1=-8$ and $a_2=0$, yielding the theta series \begin{equation} \Theta_{\frac{1}{\sqrt{3}}\rho^{-1}(C)}(q)=f_1(\tau)^4-8f_1(\tau)^2\Delta_4(\tau). \end{equation} \end{example} Theta series of the twelve odd $2$-modular lattices constructed in \cite{PSole} are computed and shown in Table \ref{table:2-modular odd}, as polynomials in $f_1$ and $\Delta_4$ for simplicity. Their weak secrecy gains are approximated using Mathematica \cite{Mathematica}. \begin{table} \centering \caption{\label{table:2-modular odd} Weak secrecy gains of odd $2$-modular lattices constructed from self-dual codes} \begin{tabular}{|c|c|c|} \hline dim & theta series &$\chi^w_{\Lambda}$ \\ \hline \hline $8$ &$f_1^4-8f_1^2\Delta_4$ &$1.22672$\\ \hline \hline $12$ &$f_1^6-12f_1^4\Delta_4$ &$1.49049$\\ \hline \hline $16$ &$f_1^8-16f_1^6\Delta_4$ &$2.06968$\\ \hline \hline $18$ &$f_1^9-18f_1^7\Delta_4+18f_1^5\Delta_4^2$ &$2.35656$\\ \hline \hline $20$ &$f_1^{10}-20f_1^8\Delta_4+40f_1^6\Delta_4^2$&$2.70165$\\ \hline \hline $22$ &$f_1^{11}-22f_1^9\Delta_4+66f_1^7\Delta_4^2-4f_1^5\Delta_4^3$&$3.11161$\\ \hline \hline $24$ &$f_1^{12}-24f_1^{10}\Delta_4+96f_1^8\Delta_4^2-28f_1^6\Delta_4^3$ &$3.60867$\\ \hline \hline $26$ &$f_1^{13}-26f_1^{11}\Delta_4+130f_1^9\Delta_4^2-80f_1^7\Delta_4^3$ &$4.21349$\\ \hline \hline $28$ &$f_1^{14}-28f_1^{12}\Delta_4+168f_1^{10}\Delta_4^2$ &$4.98013$\\ &$-176f_1^8\Delta_4^3+32f_1^6\Delta_4^4$&\\ \hline \hline $30$
&$f_1^{15}-30f_1^{13}\Delta_4+210f_1^{11}\Delta_4^2$ &$5.72703$\\ &$-282f_1^9\Delta_4^3+112f_1^7\Delta_4^4$&\\ \hline \end{tabular} \end{table} \section{Best known lattices} \label{sec:comparision} Now that we have computed the weak secrecy gains of several $2$- and $3$-modular lattices, we want to compare them with the best unimodular lattices in their respective dimensions. Figure \ref{fig: 2n3vs1} compares the secrecy gains of the best unimodular lattices with the weak secrecy gains of the $2$- and $3$-modular lattices we have computed. We can see that most of these even $2$- and $3$-modular lattices, indicated by disconnected big dots, outperform the unimodular lattices except in dimension $22$, and that three of the odd $2$-modular lattices, indicated by disconnected small dots, outperform the unimodular lattices as well; in particular, in dimension $18$, the odd $2$-modular lattice has the best secrecy gain known to date. \begin{figure} \caption{Secrecy gains of the best unimodular lattices compared with the weak secrecy gains of the computed $2$- and $3$-modular lattices.}\label{fig: 2n3vs1} \end{figure} Table \ref{table:1,2,3-modular} gives a list of $2$- and $3$-modular lattices outperforming the best unimodular lattices. 
\begin{table} \centering \caption{\label{table:1,2,3-modular}List of $2$- and $3$-modular lattices outperforming the best unimodular lattices} \begin{tabular}{|c|c|c|c|} \hline dim & lattice & $\ell$ &$\chi_{\Lambda}$ \\ \hline \hline $2$ & $\mathbb{Z}^2$ &$1$ &$1$\\ \hline $2$ & $A_{2}$ &$3$ &$\geq1.01789$\\ \hline \hline $4$ & $\mathbb{Z}^4$ &$1$ &$1$\\ \hline $4$ & $D_4$ &$2$ &$\geq1.08356$\\ \hline \hline $12$ & $D_{12}^+$ &$1$ &$1.6$\\ \hline $12$ & $K_{12}$ &$3$ &$\geq1.66839$\\ \hline \hline $14$ & $(E_7^2)^+$ &$1$ &$1.77778$\\ \hline $14$ & $C_2\times G(2,3)$ &$3$ &$\geq1.85262$\\ \hline \hline $16$ & $(D_{8}^2)^+$ &$1$&$2$\\ \hline $16$ & $\frac{1}{\sqrt{3}}\rho^{-1}(C)_{real}$ &$2$&$\geq 2.06968$\\ \hline $16$ & $BW_{16}$ &$2$&$\geq2.20564$\\ \hline \hline $18$ & $(D_{6}^3)^+$ or $(A_9^2)^+$ &$1$&$2.28571$\\ \hline $18$ & $\frac{1}{\sqrt{3}}\rho^{-1}(C)_{real}$ &$2$&$\geq2.35656$\\ \hline \hline $20$ & $(A_{5}^4)^+$ &$1$&$2.66667$\\ \hline $20$ & $\frac{1}{\sqrt{3}}\rho^{-1}(C)_{real}$ &$2$&$\geq 2.70165$\\ \hline $20$ & $HS_{20}$ &$2$&$\geq3.03551$\\ \hline \hline $22$ & $(A_1^{22})^+$ &$1$ &$3.2$\\ \hline $22$ & $A_2\times A_{11}$ &$3$ &$\geq3.12527$\\ \hline \end{tabular} \end{table} \section{Conclusion and future work}\label{sec:conclu} This paper computes the weak secrecy gains of several known $2$- and $3$-modular lattices in small dimensions. Most of the even $2$- and $3$-modular lattices and three of the odd $2$-modular lattices have a higher secrecy gain than the best unimodular lattices. We then conclude that, at least in dimensions up to $23$, $2$- and $3$-modular lattices are a better option for the Gaussian wiretap channel. A natural line of future work is to investigate $\ell$-modular lattices for other values of $\ell$, to understand whether larger $\ell$ allows better modular lattices in terms of secrecy gain. 
Also, more examples of $2$- and $3$-modular lattices should be found to gain a better understanding of why they have a higher secrecy gain, since a classification of such lattices is currently unavailable even in small dimensions. \section*{Acknowledgment} The research of F. Lin and of F. Oggier for this work is supported by the Singapore National Research Foundation under the Research Grant NRF-RF2009-07. The research of P. Sol\'e for this work is supported by Merlion project 1.02.10. The authors would like to thank Christine Bachoc for helpful discussions. \end{document}
\begin{document} \title{Endpoint estimates for commutators of intrinsic square functions in the Morrey type spaces} \author{Hua Wang \footnote{E-mail address: [email protected].}\\ \footnotesize{College of Mathematics and Econometrics, Hunan University, Changsha 410082, P. R. China}} \date{} \maketitle \begin{abstract} In this paper, the boundedness properties of commutators generated by $b$ and intrinsic square functions in the endpoint case are discussed, where $b\in BMO(\mathbb R^n)$. We first establish the weighted weak $L\log L$-type estimates for these commutator operators. Furthermore, we will prove endpoint estimates of commutators generated by $BMO(\mathbb R^n)$ functions and intrinsic square functions in the weighted Morrey spaces $L^{1,\kappa}(w)$ for $0<\kappa<1$ and $w\in A_1$, and in the generalized Morrey spaces $L^{1,\Theta}$, where $\Theta$ is a growth function on $(0,+\infty)$ satisfying the doubling condition.\\ MSC(2010): 42B25; 42B35\\ Keywords: Intrinsic square functions; weighted Morrey spaces; generalized Morrey spaces; commutators; $A_p$ weights \end{abstract} \section{Introduction and main results} The intrinsic square functions were first introduced by Wilson in \cite{wilson1,wilson2}; they are defined as follows. For $0<\alpha\le1$, let ${\mathcal C}_\alpha$ be the family of functions $\varphi$ defined on $\mathbb R^n$ such that $\varphi$ has support contained in $\{x\in\mathbb R^n: |x|\le1\}$, $\int_{\mathbb R^n}\varphi(x)\,dx=0$, and for all $x, x'\in \mathbb R^n$, \begin{equation*} \big|\varphi(x)-\varphi(x')\big|\leq \big|x-x'\big|^{\alpha}. \end{equation*} For $(y,t)\in {\mathbb R}^{n+1}_{+}=\mathbb R^n\times(0,+\infty)$ and $f\in L^1_{{loc}}(\mathbb R^n)$, we set \begin{equation} A_\alpha(f)(y,t)=\sup_{\varphi\in{\mathcal C}_\alpha}\big|f*\varphi_t(y)\big|=\sup_{\varphi\in{\mathcal C}_\alpha}\bigg|\int_{\mathbb R^n}\varphi_t(y-z)f(z)\,dz\bigg|, \end{equation} where $\varphi_t(x)=t^{-n}\varphi(x/t)$. 
Then we define the intrinsic square function of $f$ (of order $\alpha$) by the formula \begin{equation} \mathcal S_{\alpha}(f)(x)=\left(\iint_{\Gamma(x)}\Big(A_\alpha(f)(y,t)\Big)^2\frac{dydt}{t^{n+1}}\right)^{1/2}, \end{equation} where $\Gamma(x)$ denotes the usual cone of aperture one: \begin{equation*} \Gamma(x)=\big\{(y,t)\in{\mathbb R}^{n+1}_+:|x-y|<t\big\}. \end{equation*} Similarly, we can define a cone of aperture $\beta$ for any $\beta>0$: \begin{equation*} \Gamma_\beta(x)=\big\{(y,t)\in{\mathbb R}^{n+1}_+:|x-y|<\beta\cdot t\big\}, \end{equation*} and the corresponding square function \begin{equation} \mathcal S_{\alpha,\beta}(f)(x)=\left(\iint_{\Gamma_\beta(x)}\Big(A_\alpha(f)(y,t)\Big)^2\frac{dydt}{t^{n+1}}\right)^{1/2}. \end{equation} The intrinsic Littlewood--Paley $\mathcal G$-function and the intrinsic $\mathcal G^*_\lambda$-function will be given respectively by \begin{equation} \mathcal G_\alpha(f)(x)=\left(\int_0^\infty\Big(A_\alpha(f)(x,t)\Big)^2\frac{dt}{t}\right)^{1/2} \end{equation} and \begin{equation} \mathcal G^*_{\lambda,\alpha}(f)(x)=\left(\iint_{{\mathbb R}^{n+1}_+}\left(\frac t{t+|x-y|}\right)^{\lambda n}\Big(A_\alpha(f)(y,t)\Big)^2\frac{dydt}{t^{n+1}}\right)^{1/2}, \quad \lambda>1. \end{equation} Let $b$ be a locally integrable function on $\mathbb R^n$. In this paper, we consider the commutators generated by $b$ and intrinsic square functions, which were defined in \cite{wang1} by the following expressions. 
\begin{equation*} \big[b,\mathcal S_\alpha\big](f)(x)=\left(\iint_{\Gamma(x)}\sup_{\varphi\in{\mathcal C}_\alpha}\bigg|\int_{\mathbb R^n}\big[b(x)-b(z)\big]\varphi_t(y-z)f(z)\,dz\bigg|^2\frac{dydt}{t^{n+1}}\right)^{1/2}, \end{equation*} \begin{equation*} \big[b,\mathcal G_\alpha\big](f)(x)=\left(\int_0^\infty\sup_{\varphi\in{\mathcal C}_\alpha}\bigg|\int_{\mathbb R^n}\big[b(x)-b(y)\big]\varphi_t(x-y)f(y)\,dy\bigg|^2\frac{dt}{t}\right)^{1/2}, \end{equation*} and \begin{equation*} \begin{split} &\big[b,\mathcal G^*_{\lambda,\alpha}\big](f)(x)\\ =&\left(\iint_{{\mathbb R}^{n+1}_+}\left(\frac t{t+|x-y|}\right)^{\lambda n}\sup_{\varphi\in{\mathcal C}_\alpha}\bigg|\int_{\mathbb R^n}\big[b(x)-b(z)\big]\varphi_t(y-z)f(z)\,dz\bigg|^2\frac{dydt}{t^{n+1}}\right)^{1/2}, \lambda>1. \end{split} \end{equation*} On the other hand, the classical Morrey spaces $\mathcal L^{p,\lambda}$ were originally introduced by Morrey in \cite{morrey} to study the local behavior of solutions to second order elliptic partial differential equations. Since then, these spaces have played an important role in studying the regularity of solutions to partial differential equations. For the boundedness of the Hardy--Littlewood maximal operator, the fractional integral operator and the Calder\'on--Zygmund singular integral operator on these spaces, we refer the reader to \cite{adams,chiarenza,peetre}. In \cite{mizuhara}, Mizuhara introduced the generalized Morrey space $L^{p,\Theta}$ which was later extended and studied by many authors (see \cite{guliyev1,guliyev2,guliyev3,lu,nakai}). In \cite{komori}, Komori and Shirai defined the weighted Morrey space $L^{p,\kappa}(w)$, which can be viewed as an extension of the weighted Lebesgue space, and then discussed the boundedness of the above classical operators of harmonic analysis on these weighted spaces. 
Recently, in \cite{wang1,wang2,wang3,wang4}, we have established the strong type and weak type estimates for intrinsic square functions and their commutators on $L^{p,\Theta}$ and $L^{p,\kappa}(w)$ with $1\leq p<\infty$. In order to simplify the notation, for any given $\sigma>0$, we set \begin{equation*} \Phi\left(\frac{|f(x)|}{\sigma}\right)=\frac{|f(x)|}{\sigma}\cdot\left(1+\log^+\frac{|f(x)|}{\sigma}\right), \end{equation*} where $\Phi(t)=t\cdot(1+\log^+t)$. The main results of this paper can be stated as follows. For the endpoint estimates for these commutator operators $\big[b,\mathcal S_\alpha\big],\big[b,\mathcal G_{\alpha}\big]$ and $\big[b,\mathcal G^*_{\lambda,\alpha}\big]$ in the weighted Lebesgue space $L^1_w$, when $b\in BMO(\mathbb R^n)$ and $w\in A_1$, we will obtain \newtheorem{theorem}{Theorem}[section] \begin{theorem}\label{mainthm:1} Let $0<\alpha\leq1$, $w\in A_1$ and $b\in BMO(\mathbb R^n)$. Then for any given $\sigma>0$, there exists a constant $C>0$ independent of $f$ and $\sigma$ such that \begin{equation*} w\big(\big\{x\in\mathbb R^n:\big|[b,\mathcal S_\alpha](f)(x)\big|>\sigma\big\}\big)\leq C\int_{\mathbb R^n}\Phi\left(\frac{|f(x)|}{\sigma}\right)\cdot w(x)\,dx, \end{equation*} where $\Phi(t)=t(1+\log^+t)$. \end{theorem} \begin{theorem}\label{mainthm:2} Let $0<\alpha\leq1$, $w\in A_1$ and $b\in BMO(\mathbb R^n)$. Then for any given $\sigma>0$, there exists a constant $C>0$ independent of $f$ and $\sigma$ such that \begin{equation*} w\big(\big\{x\in\mathbb R^n:\big|[b,\mathcal G_\alpha](f)(x)\big|>\sigma\big\}\big)\leq C\int_{\mathbb R^n}\Phi\left(\frac{|f(x)|}{\sigma}\right)\cdot w(x)\,dx, \end{equation*} where $\Phi(t)=t(1+\log^+t)$. \end{theorem} \begin{theorem}\label{mainthm:3} Let $0<\alpha\leq1$, $w\in A_1$ and $b\in BMO(\mathbb R^n)$. 
If $\lambda>{(3n+2\alpha)}/n$, then for any given $\sigma>0$, there exists a constant $C>0$ independent of $f$ and $\sigma$ such that \begin{equation*} w\big(\big\{x\in\mathbb R^n:\big|[b,\mathcal G^*_{\lambda,\alpha}](f)(x)\big|>\sigma\big\}\big)\leq C\int_{\mathbb R^n}\Phi\left(\frac{|f(x)|}{\sigma}\right)\cdot w(x)\,dx, \end{equation*} where $\Phi(t)=t(1+\log^+t)$. \end{theorem} In particular, if we take $w$ to be a constant function, then we immediately get the following: \newtheorem{corollary}[theorem]{Corollary} \begin{corollary}\label{maincor:1} Let $0<\alpha\leq1$ and $b\in BMO(\mathbb R^n)$. Then for any given $\sigma>0$, there exists a constant $C>0$ independent of $f$ and $\sigma$ such that \begin{equation*} \big|\big\{x\in\mathbb R^n:\big|[b,\mathcal S_\alpha](f)(x)\big|>\sigma\big\}\big|\leq C\int_{\mathbb R^n}\Phi\left(\frac{|f(x)|}{\sigma}\right)dx, \end{equation*} where $\Phi(t)=t(1+\log^+t)$. \end{corollary} \begin{corollary}\label{maincor:2} Let $0<\alpha\leq1$ and $b\in BMO(\mathbb R^n)$. Then for any given $\sigma>0$, there exists a constant $C>0$ independent of $f$ and $\sigma$ such that \begin{equation*} \big|\big\{x\in\mathbb R^n:\big|[b,\mathcal G_\alpha](f)(x)\big|>\sigma\big\}\big|\leq C\int_{\mathbb R^n}\Phi\left(\frac{|f(x)|}{\sigma}\right)dx, \end{equation*} where $\Phi(t)=t(1+\log^+t)$. \end{corollary} \begin{corollary}\label{maincor:3} Let $0<\alpha\leq1$ and $b\in BMO(\mathbb R^n)$. If $\lambda>{(3n+2\alpha)}/n$, then for any given $\sigma>0$, there exists a constant $C>0$ independent of $f$ and $\sigma$ such that \begin{equation*} \big|\big\{x\in\mathbb R^n:\big|[b,\mathcal G^*_{\lambda,\alpha}](f)(x)\big|>\sigma\big\}\big|\leq C\int_{\mathbb R^n}\Phi\left(\frac{|f(x)|}{\sigma}\right)dx, \end{equation*} where $\Phi(t)=t(1+\log^+t)$. 
\end{corollary} For the endpoint estimates of commutators generated by $BMO(\mathbb R^n)$ functions and intrinsic square functions in the weighted Morrey spaces $L^{1,\kappa}(w)$ for all $0<\kappa<1$ and $w\in A_1$, we will prove \begin{theorem}\label{mainthm:4} Let $0<\alpha\leq1$, $0<\kappa<1$, $w\in A_1$ and $b\in BMO(\mathbb R^n)$. Then for any given $\sigma>0$ and any ball $B$, there exists a constant $C>0$ independent of $f$, $B$ and $\sigma$ such that \begin{equation*} \frac{1}{w(B)^\kappa}\cdot w\big(\big\{x\in B:\big|[b,\mathcal S_\alpha](f)(x)\big|>\sigma\big\}\big)\leq C\cdot\sup_B\frac{1}{w(B)^\kappa} \int_{B}\Phi\left(\frac{|f(x)|}{\sigma}\right)\cdot w(x)dx, \end{equation*} where $\Phi(t)=t(1+\log^+t)$. \end{theorem} \begin{theorem}\label{mainthm:5} Let $0<\alpha\leq1$, $0<\kappa<1$, $w\in A_1$ and $b\in BMO(\mathbb R^n)$. Then for any given $\sigma>0$ and any ball $B$, there exists a constant $C>0$ independent of $f$, $B$ and $\sigma$ such that \begin{equation*} \frac{1}{w(B)^\kappa}\cdot w\big(\big\{x\in B:\big|[b,\mathcal G_\alpha](f)(x)\big|>\sigma\big\}\big)\leq C\cdot\sup_B\frac{1}{w(B)^\kappa} \int_{B}\Phi\left(\frac{|f(x)|}{\sigma}\right)\cdot w(x)dx, \end{equation*} where $\Phi(t)=t(1+\log^+t)$. \end{theorem} \begin{theorem}\label{mainthm:6} Let $0<\alpha\leq1$, $0<\kappa<1$, $w\in A_1$ and $b\in BMO(\mathbb R^n)$. If $\lambda>{(3n+2\alpha)}/n$, then for any given $\sigma>0$ and any ball $B$, there exists a constant $C>0$ independent of $f$, $B$ and $\sigma$ such that \begin{equation*} \frac{1}{w(B)^\kappa}\cdot w\big(\big\{x\in B:\big|[b,\mathcal G^*_{\lambda,\alpha}](f)(x)\big|>\sigma\big\}\big)\leq C\cdot\sup_B\frac{1}{w(B)^\kappa} \int_{B}\Phi\left(\frac{|f(x)|}{\sigma}\right)\cdot w(x)dx, \end{equation*} where $\Phi(t)=t(1+\log^+t)$. 
\end{theorem} For the endpoint estimates of commutators generated by $BMO(\mathbb R^n)$ functions and intrinsic square functions in the generalized Morrey spaces $L^{1,\Theta}$ when $\Theta$ satisfies the doubling condition, we will show that \begin{theorem}\label{mainthm:7} Let $0<\alpha\leq1$ and $b\in BMO(\mathbb R^n)$. Suppose that $\Theta$ satisfies $(\ref{doubling})$ and $1\le D(\Theta)<2^n$. Then for any given $\sigma>0$ and any ball $B(x_0,r)$, there exists a constant $C>0$ independent of $f$, $B(x_0,r)$ and $\sigma$ such that \begin{equation*} \frac{1}{\Theta(r)}\cdot\big|\big\{x\in B(x_0,r):\big|[b,\mathcal S_\alpha](f)(x)\big|>\sigma\big\}\big|\leq C\cdot\sup_{r>0}\frac{1}{\Theta(r)} \int_{B(x_0,r)}\Phi\left(\frac{|f(x)|}{\sigma}\right)dx, \end{equation*} where $\Phi(t)=t(1+\log^+t)$. \end{theorem} \begin{theorem}\label{mainthm:8} Let $0<\alpha\leq1$ and $b\in BMO(\mathbb R^n)$. Suppose that $\Theta$ satisfies $(\ref{doubling})$ and $1\le D(\Theta)<2^n$. Then for any given $\sigma>0$ and any ball $B(x_0,r)$, there exists a constant $C>0$ independent of $f$, $B(x_0,r)$ and $\sigma$ such that \begin{equation*} \frac{1}{\Theta(r)}\cdot\big|\big\{x\in B(x_0,r):\big|[b,\mathcal G_\alpha](f)(x)\big|>\sigma\big\}\big|\leq C\cdot\sup_{r>0}\frac{1}{\Theta(r)} \int_{B(x_0,r)}\Phi\left(\frac{|f(x)|}{\sigma}\right)dx, \end{equation*} where $\Phi(t)=t(1+\log^+t)$. \end{theorem} \begin{theorem}\label{mainthm:9} Let $0<\alpha\leq1$ and $b\in BMO(\mathbb R^n)$. 
Suppose that $\Theta$ satisfies $(\ref{doubling})$, $1\le D(\Theta)<2^n$ and $\lambda>{(3n+2\alpha)}/n$. Then for any given $\sigma>0$ and any ball $B(x_0,r)$, there exists a constant $C>0$ independent of $f$, $B(x_0,r)$ and $\sigma$ such that \begin{equation*} \frac{1}{\Theta(r)}\cdot\big|\big\{x\in B(x_0,r):\big|[b,\mathcal G^*_{\lambda,\alpha}](f)(x)\big|>\sigma\big\}\big|\leq C\cdot\sup_{r>0}\frac{1}{\Theta(r)} \int_{B(x_0,r)}\Phi\left(\frac{|f(x)|}{\sigma}\right)dx, \end{equation*} where $\Phi(t)=t(1+\log^+t)$. \end{theorem} \section{Notations and preliminaries} A weight $w$ will always mean a positive function which is locally integrable on $\mathbb R^n$, and $B=B(x_0,r_B)$ denotes the open ball with center $x_0$ and radius $r_B$. For $1<p<\infty$, a weight function $w$ is said to belong to the Muckenhoupt class $A_p$ if there is a constant $C>0$ such that for every ball $B\subseteq \mathbb R^n$ (see \cite{garcia,muckenhoupt}), \begin{equation*} \left(\frac1{|B|}\int_B w(x)\,dx\right)\left(\frac1{|B|}\int_B w(x)^{-1/{(p-1)}}\,dx\right)^{p-1}\le C. \end{equation*} For the case $p=1$, we say that $w\in A_1$ if there is a constant $C>0$ such that for every ball $B\subseteq \mathbb R^n$, \begin{equation*} \frac1{|B|}\int_B w(x)\,dx\le C\cdot\underset{x\in B}{\mbox{ess\,inf}}\;w(x). \end{equation*} We also define $A_\infty=\cup_{1\leq p<\infty}A_p$. It is well known that if $w\in A_p$ with $1\leq p<\infty$, then for any ball $B$, there exists an absolute constant $C>0$ such that \begin{equation}\label{weights} w(2B)\le C\,w(B). \end{equation} In general, for $w\in A_1$ and any $j\in\mathbb Z_+$, there exists an absolute constant $C>0$ such that (see \cite{garcia}) \begin{equation}\label{general weights} w\big(2^j B\big)\le C\cdot 2^{jn}w(B). 
\end{equation} Moreover, if $w\in A_\infty$, then for all balls $B$ and all measurable subsets $E$ of $B$, there exists a number $\delta>0$ independent of $E$ and $B$ such that (see \cite{garcia}) \begin{equation}\label{compare} \frac{w(E)}{w(B)}\le C\left(\frac{|E|}{|B|}\right)^\delta. \end{equation} A weight function $w$ is said to belong to the reverse H\"{o}lder class $RH_r$ for some $r>1$, if there exists a constant $C>0$ such that the following reverse H\"{o}lder inequality holds for every ball $B\subseteq \mathbb R^n$: \begin{equation*} \left(\frac{1}{|B|}\int_B w(x)^r\,dx\right)^{1/r}\le C\left(\frac{1}{|B|}\int_B w(x)\,dx\right). \end{equation*} Given a ball $B$ and $\lambda>0$, $\lambda B$ denotes the ball with the same center as $B$ whose radius is $\lambda$ times that of $B$. For a given weight function $w$ and a measurable set $E$, we also denote the Lebesgue measure of $E$ by $|E|$ and the weighted measure of $E$ by $w(E)$, where $w(E)=\int_E w(x)\,dx$. Equivalently, the above notions could be defined with cubes instead of balls, and we shall use whichever definition is more convenient for the calculation at hand. Given a weight function $w$ on $\mathbb R^n$, for $1\leq p<\infty$, the weighted Lebesgue space $L^p_w(\mathbb R^n)$ is defined as the set of all functions $f$ such that \begin{equation*} \big\|f\big\|_{L^p_w}=\bigg(\int_{\mathbb R^n}|f(x)|^pw(x)\,dx\bigg)^{1/p}<\infty. \end{equation*} In particular, when $w$ is a constant function, we will denote $L^p_w(\mathbb R^n)$ simply by $L^p(\mathbb R^n)$. Let $0<\kappa<1$ and $w$ be a weight function on $\mathbb R^n$. Then the weighted Morrey space $L^{1,\kappa}(w)$ is defined by (see \cite{komori}) \begin{equation*} L^{1,\kappa}(w)=\left\{f\in L^1_{loc}(w):\big\|f\big\|_{L^{1,\kappa}(w)}=\sup_B\frac{1}{w(B)^{\kappa}}\int_B|f(x)|w(x)\,dx<\infty\right\}, \end{equation*} where the supremum is taken over all balls $B$ in $\mathbb R^n$. 
Let $\Theta=\Theta(r)$, $r>0$, be a growth function, that is, a positive increasing function on $(0,+\infty)$ satisfying the following doubling condition: \begin{equation}\label{doubling} \Theta(2r)\leq D\cdot\Theta(r), \quad \mbox{for all }\,r>0, \end{equation} where $D=D(\Theta)\ge1$ is a doubling constant independent of $r$. The generalized Morrey space $L^{1,\Theta}(\mathbb R^n)$ is defined as the set of all locally integrable functions $f$ for which (see \cite{mizuhara}) \begin{equation*} \sup_{r>0;B(x_0,r)}\frac{1}{\Theta(r)}\int_{B(x_0,r)}|f(x)|\,dx<\infty, \end{equation*} where $B(x_0,r)=\{x\in\mathbb R^n:|x-x_0|<r\}$ is the open ball centered at $x_0$ and with radius $r>0$. We next recall some basic definitions and facts about Orlicz spaces needed for the proof of the main results. For more information on the subject, one can see \cite{rao}. A function $\Phi$ is called a Young function if it is continuous, nonnegative, convex and strictly increasing on $[0,+\infty)$ with $\Phi(0)=0$ and $\Phi(t)\to +\infty$ as $t\to +\infty$. We define the $\Phi$-average of a function $f$ over a ball $B$ by means of the following Luxemburg norm: \begin{equation*} \big\|f\big\|_{\Phi,B}=\inf\left\{\sigma>0:\frac{1}{|B|}\int_B\Phi\left(\frac{|f(x)|}{\sigma}\right)dx\leq1\right\}. \end{equation*} An equivalent norm that is often useful in calculations is as follows (see \cite{rao,perez1}): \begin{equation}\label{equiv norm} \big\|f\big\|_{\Phi,B}\leq \inf_{\eta>0}\left\{\eta+\frac{\eta}{|B|}\int_B\Phi\left(\frac{|f(x)|}{\eta}\right)dx\right\}\leq 2\big\|f\big\|_{\Phi,B}. \end{equation} Given a Young function $\Phi$, we use $\bar\Phi$ to denote the complementary Young function associated to $\Phi$. Then the following generalized H\"older's inequality holds for any given ball $B$ (see \cite{perez1,perez2}): \begin{equation*} \frac{1}{|B|}\int_B|f(x)\cdot g(x)|dx\leq 2\big\|f\big\|_{\Phi,B}\big\|g\big\|_{\bar\Phi,B}. 
\end{equation*} In order to deal with the weighted case, for $w\in A_\infty$, we also need to define the weighted $\Phi$-average of a function $f$ over a ball $B$ by means of the weighted Luxemburg norm: \begin{equation*} \big\|f\big\|_{\Phi(w),B}=\inf\left\{\sigma>0:\frac{1}{w(B)}\int_B\Phi\left(\frac{|f(x)|}{\sigma}\right)w(x)\,dx\leq1\right\}. \end{equation*} It can be shown that for $w\in A_\infty$ (see \cite{rao,zhang}), \begin{equation}\label{equiv norm with weight} \big\|f\big\|_{\Phi(w),B}\approx \inf_{\eta>0}\left\{\eta+\frac{\eta}{w(B)}\int_B\Phi\left(\frac{|f(x)|}{\eta}\right)w(x)\,dx\right\}, \end{equation} and \begin{equation*} \frac{1}{w(B)}\int_B|f(x)g(x)|w(x)\,dx\leq C\big\|f\big\|_{\Phi(w),B}\big\|g\big\|_{\bar\Phi(w),B}. \end{equation*} The Young function that we are going to use is $\Phi(t)=t(1+\log^+t)$, with its complementary Young function $\bar\Phi(t)\approx\exp(t)$. In the present situation, we denote \begin{equation*} \big\|f\big\|_{L\log L,B}=\big\|f\big\|_{\Phi,B}, \qquad \big\|g\big\|_{\exp L,B}=\big\|g\big\|_{\bar\Phi,B}; \end{equation*} and \begin{equation*} \big\|f\big\|_{L\log L(w),B}=\big\|f\big\|_{\Phi(w),B}, \qquad \big\|g\big\|_{\exp L(w),B}=\big\|g\big\|_{\bar\Phi(w),B}. \end{equation*} By the (weighted) generalized H\"older's inequality, we have (see \cite{perez1,zhang}) \begin{equation}\label{holder} \frac{1}{|B|}\int_B|f(x)\cdot g(x)|dx\leq 2\big\|f\big\|_{L\log L,B}\big\|g\big\|_{\exp L,B}, \end{equation} and \begin{equation}\label{weighted holder} \frac{1}{w(B)}\int_B|f(x)g(x)|w(x)\,dx\leq C\big\|f\big\|_{L\log L(w),B}\big\|g\big\|_{\exp L(w),B}. \end{equation} Let us now recall the definition of the space $BMO(\mathbb R^n)$ of functions of bounded mean oscillation (see \cite{duoand,john}). 
A locally integrable function $b$ is said to be in $BMO(\mathbb R^n)$, if \begin{equation*} \|b\|_*=\sup_{B}\frac{1}{|B|}\int_B|b(x)-b_B|\,dx<\infty, \end{equation*} where $b_B$ stands for the average of $b$ on $B$, i.e., $b_B=\frac{1}{|B|}\int_B b(y)\,dy$ and the supremum is taken over all balls $B$ in $\mathbb R^n$. Modulo constants, the space $BMO(\mathbb R^n)$ is a Banach space with respect to the norm $\|\cdot\|_*$. By the John--Nirenberg inequality, it is not difficult to see that for any given ball $B$ (see \cite{perez1,perez2}) \begin{equation}\label{exp} \big\|b-b_B\big\|_{\exp L,B}\leq C\|b\|_*. \end{equation} Furthermore, we can also prove that for any $w\in A_\infty$ and any given ball $B$ (see \cite{zhang}), \begin{equation}\label{weighted exp} \big\|b-b_B\big\|_{\exp L(w),B}\leq C\|b\|_*. \end{equation} Throughout this paper, the letter $C$ always denotes a positive constant independent of the main parameters involved, but it may be different from line to line. By $A\approx B$, we mean that there exists a constant $C>1$ such that $\frac1C\le\frac AB\le C$. \section{Proofs of Theorems \ref{mainthm:1} and \ref{mainthm:2}} Given a real-valued function $b\in BMO(\mathbb R^n)$, we shall follow the idea developed in \cite{alvarez,ding} and denote $F(\xi)=e^{\xi[b(x)-b(z)]}$, $\xi\in\mathbb C$. Then by the analyticity of $F(\xi)$ on $\mathbb C$ and the Cauchy integral formula, we get \begin{equation*} \begin{split} b(x)-b(z)=F'(0)&=\frac{1}{2\pi i}\int_{|\xi|=1}\frac{F(\xi)}{\xi^2}\,d\xi\\ &=\frac{1}{2\pi}\int_0^{2\pi}e^{e^{i\theta}[b(x)-b(z)]}\cdot e^{-i\theta}\,d\theta. 
\end{split} \end{equation*} Thus, for any $\varphi\in{\mathcal C}_\alpha$, $0<\alpha\le1$, we obtain \begin{equation*} \begin{split} &\bigg|\int_{\mathbb R^n}\big[b(x)-b(z)\big]\varphi_t(y-z)f(z)\,dz\bigg|\\ =& \bigg|\frac{1}{2\pi}\int_0^{2\pi}\bigg(\int_{\mathbb R^n}\varphi_t(y-z)e^{-e^{i\theta}b(z)}f(z)\,dz\bigg) e^{e^{i\theta}b(x)}\cdot e^{-i\theta}\,d\theta\bigg|\\ \leq&\frac{1}{2\pi}\int_0^{2\pi}\sup_{\varphi\in{\mathcal C}_\alpha}\bigg|\int_{\mathbb R^n}\varphi_t(y-z)e^{-e^{i\theta}b(z)}f(z)\,dz\bigg|e^{\cos\theta\cdot b(x)}\,d\theta\\ \leq&\frac{1}{2\pi}\int_0^{2\pi}A_\alpha\big(e^{-e^{i\theta}b}\cdot f\big)(y,t)\cdot e^{\cos\theta\cdot b(x)}\,d\theta. \end{split} \end{equation*} So we have \begin{equation*} \big|\big[b,\mathcal S_\alpha\big](f)(x)\big|\le\frac{1}{2\pi}\int_0^{2\pi} \mathcal S_\alpha\big(e^{-e^{i\theta}b}\cdot f\big)(x)\cdot e^{\cos\theta\cdot b(x)}\,d\theta, \end{equation*} and \begin{equation*} \big|\big[b,\mathcal G_\alpha\big](f)(x)\big|\le\frac{1}{2\pi}\int_0^{2\pi} \mathcal G_\alpha\big(e^{-e^{i\theta}b}\cdot f\big)(x)\cdot e^{\cos\theta\cdot b(x)}\,d\theta. \end{equation*} Then, by the $L^p_w$-boundedness of intrinsic square functions (see \cite{wilson2}), and using the same arguments as in \cite{ding}, we can also show the following: \begin{theorem}\label{commutator thm} Let $0<\alpha\le1$, $1<p<\infty$ and $w\in A_p$. Then the commutators $\big[b,\mathcal S_\alpha\big]$ and $\big[b,\mathcal G_{\alpha}\big]$ are all bounded from $L^p_w(\mathbb R^n)$ into itself whenever $b\in BMO(\mathbb R^n)$. \end{theorem} We are now ready to give the proofs of Theorems $\ref{mainthm:1}$ and $\ref{mainthm:2}$, which are based on the Calder\'on--Zygmund decomposition. \begin{proof}[Proofs of Theorems $\ref{mainthm:1}$ and $\ref{mainthm:2}$] We will only give the proof of Theorem $\ref{mainthm:1}$ here, since the proof of Theorem $\ref{mainthm:2}$ is similar and easier. 
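The Cauchy-formula identity $c=\frac{1}{2\pi}\int_0^{2\pi}e^{e^{i\theta}c}\,e^{-i\theta}\,d\theta$ with $c=b(x)-b(z)$, which drives the argument above, is easy to verify numerically. Below is a quick sketch in Python (the test value $c=0.7$ is arbitrary); the Riemann sum converges rapidly because the integrand is periodic and analytic.

```python
import cmath
from math import pi

def cauchy_mean(c, n=4096):
    """Approximate (1/2pi) * int_0^{2pi} exp(e^{i t} c) * e^{-i t} dt by a Riemann sum.

    By the Cauchy integral formula this equals F'(0) = c for F(xi) = exp(xi * c).
    """
    total = 0 + 0j
    for k in range(n):
        t = 2 * pi * k / n
        total += cmath.exp(cmath.exp(1j * t) * c) * cmath.exp(-1j * t)
    return total / n

c = 0.7
approx = cauchy_mean(c)
assert abs(approx - c) < 1e-10  # the mean recovers F'(0) = c
```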
Inspired by the work in \cite{perez2,perez3,zhang}, for any fixed $\sigma>0$, we apply the Calder\'on--Zygmund decomposition of $f$ at height $\sigma$ to obtain a sequence of non-overlapping dyadic cubes $\{Q_i\}$ such that the following property holds (see \cite{stein}): \begin{equation}\label{decomposition} \sigma<\frac{1}{|Q_i|}\int_{Q_i}|f(y)|\,dy< 2^n\cdot\sigma, \end{equation} where $Q_i=Q(c_i,\ell_i)$ denotes the cube centered at $c_i$ with side length $\ell_i$ and all cubes are assumed to have their sides parallel to the coordinate axes. Set $E=\bigcup_i Q_i$, and define two functions $g$ and $h$ as follows: \begin{equation*} g(x)= \begin{cases} f(x) & \mbox{if}\;\; x\in E^c,\\ \frac{1}{|Q_i|}\int_{Q_i}f(y)\,dy & \mbox{if}\;\; x\in Q_i, \end{cases} \end{equation*} and \begin{equation*} h(x)=f(x)-g(x)=\sum_i h_i(x), \end{equation*} where $h_i(x)=h(x)\chi_{Q_i}(x)$. Then we have \begin{equation}\label{pointwise estimate g} |g(x)|\le C\cdot\sigma, \quad \mbox{a.e. }\, x\in\mathbb R^n, \end{equation} and \begin{equation}\label{f=g+h} f(x)=g(x)+h(x). \end{equation} Obviously, supp\,$h_i\subseteq Q_i$, $\int_{Q_i}h_i(x)\,dx=0$ and $\|h_i\|_{L^1}\le 2\int_{Q_i}|f(x)|\,dx$ by the above decomposition. Since $\big|\big[b,\mathcal S_\alpha\big](f)(x)\big|\leq\big|\big[b,\mathcal S_\alpha\big](g)(x)\big|+\big|\big[b,\mathcal S_\alpha\big](h)(x)\big|$ by (\ref{f=g+h}), then we can write \begin{equation*} \begin{split} &w\big(\big\{x\in\mathbb R^n:\big|\big[b,\mathcal S_\alpha\big](f)(x)\big|>\sigma\big\}\big)\\ \leq& w\big(\big\{x\in\mathbb R^n:\big|\big[b,\mathcal S_\alpha\big](g)(x)\big|>\sigma/2\big\}\big) +w\big(\big\{x\in\mathbb R^n:\big|\big[b,\mathcal S_\alpha\big](h)(x)\big|>\sigma/2\big\}\big)\\ :=&I_1+I_2. \end{split} \end{equation*} Observe that $w\in A_1\subset A_2$. 
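The stopping-time selection of the cubes $Q_i$ can be illustrated in one dimension with dyadic subintervals of $[0,1)$. The sketch below (discretized samples of $|f|$; a simplified illustration, not the exact construction used in the proof) halves intervals until the average first exceeds the height $\sigma$, which yields the one-dimensional analogue of (\ref{decomposition}).

```python
def cz_select(samples, sigma):
    """Dyadic Calderon-Zygmund selection at height sigma.

    samples: values of |f| on 2^k equal subintervals of [0, 1).
    Returns maximal dyadic index ranges (lo, hi) on which the average of |f|
    first exceeds sigma; on each selected interval sigma < avg <= 2*sigma,
    provided the global average is at most sigma.
    """
    selected = []

    def visit(lo, hi):
        avg = sum(samples[lo:hi]) / (hi - lo)
        if avg > sigma:
            selected.append((lo, hi))  # stop: parent average was <= sigma
        elif hi - lo > 1:
            mid = (lo + hi) // 2
            visit(lo, mid)
            visit(mid, hi)

    visit(0, len(samples))
    return selected

cubes = cz_select([0, 0, 5, 0, 0, 0, 0, 0], sigma=1)
print(cubes)  # [(0, 4)]: halving stops as soon as the average exceeds sigma
```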
Applying Chebyshev's inequality and Theorem \ref{commutator thm}, we obtain \begin{equation*} I_1\leq \frac{4}{\sigma^2}\cdot\Big\|\big[b,\mathcal S_\alpha\big](g)\Big\|^2_{L^2_w}\leq \frac{C}{\sigma^2}\cdot\big\|g\big\|^2_{L^2_w}. \end{equation*} Moreover, by the inequality (\ref{pointwise estimate g}) and the $A_1$ condition, we deduce that \begin{align}\label{g} \big\|g\big\|^2_{L^2_w}&\le C\cdot\sigma\int_{\mathbb R^n}|g(x)|w(x)\,dx\notag\\ &\le C\cdot\sigma\left(\int_{E^c}|f(x)|w(x)\,dx+\int_{\bigcup_i Q_i}|g(x)|w(x)\,dx\right)\notag\\ &\le C\cdot\sigma\left(\int_{\mathbb R^n}|f(x)|w(x)\,dx+\sum_i\frac{w(Q_i)}{|Q_i|}\int_{Q_i}|f(y)|\,dy\right)\notag\\ &\le C\cdot\sigma\left(\int_{\mathbb R^n}|f(x)|w(x)\,dx+\sum_i\underset{y\in Q_i}{\mbox{ess\,inf}}\,w(y)\int_{Q_i}|f(y)|\,dy\right)\notag\\ &\le C\cdot\sigma\left(\int_{\mathbb R^n}|f(x)|w(x)\,dx+\int_{\bigcup_i Q_i}|f(y)|w(y)\,dy\right)\notag\\ &\le C\cdot\sigma\int_{\mathbb R^n}|f(x)|w(x)\,dx. \end{align} So we have \begin{equation*} I_1\leq C\int_{\mathbb R^n}\frac{|f(x)|}{\sigma}\cdot w(x)\,dx\leq C\int_{\mathbb R^n}\Phi\left(\frac{|f(x)|}{\sigma}\right)\cdot w(x)\,dx. \end{equation*} To deal with the other term $I_2$, let $Q_i^*=2\sqrt n Q_i$ be the cube concentric with $Q_i$ such that $\ell(Q_i^*)=(2\sqrt n)\ell(Q_i)$. Then we can further decompose $I_2$ as follows. \begin{equation*} \begin{split} I_2\le&\,w\Big(\Big\{x\in \bigcup_i Q_i^*:\Big|\big[b,\mathcal S_\alpha\big](h)(x)\Big|>\sigma/2\Big\}\Big)\\ &+w\Big(\Big\{x\notin \bigcup_i Q_i^*:\Big|\big[b,\mathcal S_\alpha\big](h)(x)\Big|>\sigma/2\Big\}\Big)\\ :=&\,I_3+I_4. \end{split} \end{equation*} Since $w\in A_1$, by the inequality (\ref{weights}) we get \begin{equation*} I_3\leq\sum_i w\big(Q_i^*\big)\le C\sum_i w(Q_i). 
\end{equation*} Furthermore, it follows from the inequality (\ref{decomposition}) and the $A_1$ condition that \begin{equation*} \begin{split} I_3&\leq C\sum_i\frac{\,1\,}{\sigma}\cdot\underset{y\in Q_i}{\mbox{ess\,inf}}\,w(y)\int_{Q_i}|f(y)|\,dy\\ &\leq \frac{C}{\sigma}\sum_i\int_{Q_i}|f(y)|w(y)\,dy\leq \frac{C}{\sigma}\int_{\bigcup_i Q_i}|f(y)|w(y)\,dy\\ &\leq C\int_{\mathbb R^n}\frac{|f(y)|}{\sigma}\cdot w(y)\,dy\leq C\int_{\mathbb R^n}\Phi\left(\frac{|f(y)|}{\sigma}\right)\cdot w(y)\,dy. \end{split} \end{equation*} For any given $x\in \mathbb R^n$ and $(y,t)\in\Gamma(x)$, we have \begin{align}\label{commutator estimate} \sup_{\varphi\in{\mathcal C}_\alpha}\bigg|\int_{\mathbb R^n}\big[b(x)-b(z)\big]\varphi_t(y-z)h_i(z)\,dz\bigg|&\leq \big|b(x)-b_{Q_i}\big|\cdot\sup_{\varphi\in{\mathcal C}_\alpha}\bigg|\int_{\mathbb R^n}\varphi_t(y-z)h_i(z)\,dz\bigg|\notag\\ &+\sup_{\varphi\in{\mathcal C}_\alpha}\bigg|\int_{\mathbb R^n}\big[b(z)-b_{Q_i}\big]\varphi_t(y-z)h_i(z)\,dz\bigg|. \end{align} Hence \begin{equation*} \begin{split} \big|\big[b,\mathcal S_\alpha\big](h)(x)\big|&\leq\sum_i\big|b(x)-b_{Q_i}\big|\cdot\mathcal S_\alpha(h_i)(x)\\ &+\left(\iint_{\Gamma(x)}\sup_{\varphi\in{\mathcal C}_\alpha}\bigg|\int_{\mathbb R^n}\big[b(z)-b_{Q_i}\big]\varphi_t(y-z)\cdot\sum_i h_i(z)\,dz\bigg|^2\frac{dydt}{t^{n+1}}\right)^{1/2}\\ &=\sum_i\big|b(x)-b_{Q_i}\big|\cdot\mathcal S_\alpha(h_i)(x)+\mathcal S_\alpha\bigg(\sum_i[b-b_{Q_i}]h_i\bigg)(x). \end{split} \end{equation*} Then we can write \begin{equation*} \begin{split} I_4\leq & w\bigg(\bigg\{x\notin \bigcup_i Q_i^*:\sum_i\big|b(x)-b_{Q_i}\big|\cdot\mathcal S_\alpha(h_i)(x)>\sigma/4\bigg\}\bigg)\\ &+ w\bigg(\bigg\{x\notin \bigcup_i Q_i^*:\mathcal S_\alpha\bigg(\sum_i[b-b_{Q_i}]h_i\bigg)(x)>\sigma/4\bigg\}\bigg)\\ :=&I_5+I_6. 
\end{split} \end{equation*} It follows directly from Chebyshev's inequality that \begin{equation*} \begin{split} I_5&\leq\frac{\,4\,}{\sigma}\int_{\mathbb R^n\backslash\bigcup_i Q_i^*}\bigg|\sum_i\big|b(x)-b_{Q_i}\big|\cdot\mathcal S_\alpha(h_i)(x)\bigg|w(x)\,dx\\ &\leq\frac{\,4\,}{\sigma}\sum_i\left(\int_{(Q_i^*)^c}\big|b(x)-b_{Q_i}\big|\cdot\mathcal S_\alpha(h_i)(x)\,w(x)\,dx\right).\\ \end{split} \end{equation*} Denote the center of $Q_i$ by $c_i$. For any $\varphi\in{\mathcal C}_\alpha$, $0<\alpha\le1$, by the cancellation condition of $h_i$, we obtain that for any $(y,t)\in\Gamma(x)$, \begin{align}\label{kernel estimate1} \big|(h_i*\varphi_t)(y)\big|&=\left|\int_{Q_i}\big[\varphi_t(y-z)-\varphi_t(y-c_i)\big]h_i(z)\,dz\right|\notag\\ &\le\int_{Q_i\cap\{z:|z-y|\le t\}}\frac{|z-c_i|^\alpha}{t^{n+\alpha}}|h_i(z)|\,dz\notag\\ &\le C\cdot\frac{\ell(Q_i)^{\alpha}}{t^{n+\alpha}}\int_{Q_i\cap\{z:|z-y|\le t\}}|h_i(z)|\,dz. \end{align} In addition, for any $z\in Q_i$ and $x\in (Q^*_i)^c$, we have $|z-c_i|<\frac{|x-c_i|}{2}$. Thus, for all $(y,t)\in\Gamma(x)$ and $|z-y|\le t$ with $z\in Q_i$, we can see that \begin{equation}\label{2t} t+t\ge|x-y|+|y-z|\ge|x-z|\ge|x-c_i|-|z-c_i|\ge\frac{|x-c_i|}{2}. \end{equation} Hence, for any $x\in (Q^*_i)^c$, by using the above inequalities (\ref{kernel estimate1}) and (\ref{2t}), we obtain \begin{equation*} \begin{split} \big|\mathcal S_{\alpha}(h_i)(x)\big|&=\left(\iint_{\Gamma(x)}\bigg(\sup_{\varphi\in{\mathcal C}_\alpha}\big|(\varphi_t*{h_i})(y)\big|\bigg)^2\frac{dydt}{t^{n+1}}\right)^{1/2}\\ &\leq C\cdot\ell(Q_i)^{\alpha}\bigg(\int_{Q_i}|h_i(z)|\,dz\bigg)\left(\int_{\frac{|x-c_i|}{4}}^\infty \int_{|y-x|<t}\frac{dydt}{t^{2(n+\alpha)+n+1}}\right)^{1/2}\\ &\leq C\cdot\ell(Q_i)^{\alpha}\bigg(\int_{Q_i}|h_i(z)|\,dz\bigg) \left(\int_{\frac{|x-c_i|}{4}}^\infty\frac{dt}{t^{2(n+\alpha)+1}}\right)^{1/2}\\ &\leq C\cdot\frac{\ell(Q_i)^{\alpha}}{|x-c_i|^{n+\alpha}}\bigg(\int_{Q_i}|f(z)|\,dz\bigg).
\end{split} \end{equation*} Since $Q_i^*=2\sqrt n Q_i\supset 2Q_i$, we have $(Q_i^*)^c\subset (2Q_i)^c$. This fact together with the above pointwise estimate yields \begin{equation*} \begin{split} I_5&\leq\frac{C}{\sigma}\sum_i \left(\ell(Q_i)^{\alpha}\int_{Q_i}|f(z)|\,dz\times\int_{(Q_i^*)^c}\big|b(x)-b_{Q_i}\big|\cdot\frac{w(x)}{|x-c_i|^{n+\alpha}}dx\right)\\ &\leq\frac{C}{\sigma}\sum_i \left(\ell(Q_i)^{\alpha}\int_{Q_i}|f(z)|\,dz\times\int_{(2Q_i)^c}\big|b(x)-b_{Q_i}\big|\cdot\frac{w(x)}{|x-c_i|^{n+\alpha}}dx\right)\\ &\leq\frac{C}{\sigma}\sum_i \left(\ell(Q_i)^{\alpha}\int_{Q_i}|f(z)|\,dz\times\sum_{j=1}^\infty\int_{2^{j+1}Q_i\backslash 2^{j}Q_i}\big|b(x)-b_{2^{j+1}Q_i}\big|\cdot\frac{w(x)}{|x-c_i|^{n+\alpha}}dx\right)\\ &+\frac{C}{\sigma}\sum_i \left(\ell(Q_i)^{\alpha}\int_{Q_i}|f(z)|\,dz\times\sum_{j=1}^\infty\int_{2^{j+1}Q_i\backslash 2^{j}Q_i}\big|b_{2^{j+1}Q_i}-b_{Q_i}\big|\cdot\frac{w(x)}{|x-c_i|^{n+\alpha}}dx\right)\\ &:=\mbox{\upshape I+II}. \end{split} \end{equation*} For the term I, \begin{equation*} \begin{split} \mbox{\upshape I}&\leq\frac{C}{\sigma}\sum_i \left(\ell(Q_i)^{\alpha}\int_{Q_i}|f(z)|\,dz\times\sum_{j=1}^\infty\frac{1}{[2^{j-1}\ell(Q_i)]^{n+\alpha}}\int_{2^{j+1}Q_i\backslash 2^{j}Q_i}\big|b(x)-b_{2^{j+1}Q_i}\big|\cdot w(x)\,dx\right). \end{split} \end{equation*} Since $w\in A_1$, we know that there exists a number $r>1$ such that $w\in RH_r$. It then follows from H\"older's inequality, the John--Nirenberg inequality (see \cite{john}) and (\ref{general weights}) that \begin{align} \int_{2^{j+1}Q_i}\big|b(x)-b_{2^{j+1}Q_i}\big|\cdot w(x)\,dx&\leq\left(\int_{2^{j+1}Q_i}\big|b(x)-b_{2^{j+1}Q_i}\big|^{r'}dx\right)^{1/{r'}}\left(\int_{2^{j+1}Q_i}w(x)^rdx\right)^{1/r}\notag\\ &\leq C\|b\|_*\cdot w\big(2^{j+1}Q_i\big)\notag\\ &\leq C\|b\|_*\cdot(2^{j+1})^nw\big(Q_i\big).
\end{align} Hence \begin{equation*} \begin{split} \mbox{\upshape I}&\leq \frac{C\cdot\|b\|_*}{\sigma}\sum_i\left(\int_{Q_i}|f(z)|\,dz\times\sum_{j=1}^\infty\frac{(2^{j+1})^nw\big(Q_i\big)}{(2^{j-1})^{n+\alpha}|Q_i|}\right)\\ &\leq\frac{C}{\sigma}\sum_i\left(\frac{w\big(Q_i\big)}{|Q_i|}\cdot\int_{Q_i}|f(z)|\,dz\times\sum_{j=1}^\infty\frac{1}{2^{j\alpha}}\right)\\ &\leq\frac{C}{\sigma}\sum_i\underset{z\in Q_i}{\mbox{ess\,inf}}\,w(z)\int_{Q_i}|f(z)|\,dz\\ &\leq \frac{C}{\sigma}\int_{\bigcup_i Q_i}|f(z)|w(z)\,dz\leq C\int_{\mathbb R^n}\frac{|f(z)|}{\sigma}\cdot w(z)\,dz\\ &\leq C\int_{\mathbb R^n}\Phi\left(\frac{|f(z)|}{\sigma}\right)\cdot w(z)\,dz. \end{split} \end{equation*} For the term II, since $b\in BMO(\mathbb R^n)$, a simple calculation shows that \begin{equation}\label{BMO} \big|b_{2^{j+1}Q_i}-b_{Q_i}\big|\leq C\cdot(j+1)\|b\|_*. \end{equation} This estimate (\ref{BMO}) together with the inequality (\ref{general weights}) implies that \begin{equation*} \begin{split} \mbox{\upshape II}&\leq\frac{C\cdot\|b\|_*}{\sigma}\sum_i\left(\ell(Q_i)^{\alpha}\int_{Q_i}|f(z)|\,dz\times\sum_{j=1}^\infty \big(j+1\big)\cdot\frac{w\big(2^{j+1}Q_i\big)}{[2^{j-1}\ell(Q_i)]^{n+\alpha}}\right)\\ &\leq\frac{C\cdot\|b\|_*}{\sigma}\sum_i\left(\int_{Q_i}|f(z)|\,dz\times\sum_{j=1}^\infty\big(j+1\big)\cdot\frac{(2^{j+1})^nw\big(Q_i\big)}{(2^{j-1})^{n+\alpha}|Q_i|}\right)\\ &\leq\frac{C}{\sigma}\sum_i\left(\frac{w\big(Q_i\big)}{|Q_i|}\cdot\int_{Q_i}|f(z)|\,dz\times\sum_{j=1}^\infty\frac{(j+1)}{2^{j\alpha}}\right)\\ &\leq\frac{C}{\sigma}\sum_i\left(\frac{w\big(Q_i\big)}{|Q_i|}\cdot\int_{Q_i}|f(z)|\,dz\right)\leq C\int_{\mathbb R^n}\Phi\left(\frac{|f(z)|}{\sigma}\right)\cdot w(z)\,dz.
\end{split} \end{equation*} On the other hand, by using the weighted weak-type (1,1) estimate of intrinsic square functions (see \cite{wilson2}), we have \begin{equation*} \begin{split} I_6&\leq\frac{C}{\sigma}\int_{\mathbb R^n}\sum_i\big|b(x)-b_{Q_i}\big|\cdot|h_i(x)|\,w(x)\,dx\\ &=\frac{C}{\sigma}\sum_i\int_{Q_i}\big|b(x)-b_{Q_i}\big|\cdot|h_i(x)|\,w(x)\,dx\\ &\leq\frac{C}{\sigma}\sum_i\int_{Q_i}\big|b(x)-b_{Q_i}\big|\cdot|f(x)|\,w(x)\,dx\\ &+\frac{C}{\sigma}\sum_i\frac{1}{|Q_i|}\int_{Q_i}|f(y)|\,dy\times\int_{Q_i}\big|b(x)-b_{Q_i}\big|w(x)\,dx\\ &:=\mbox{\upshape III+IV}. \end{split} \end{equation*} By the generalized H\"older's inequality with weight (\ref{weighted holder}), (\ref{weighted exp}) and (\ref{equiv norm with weight}), we can deduce that \begin{equation*} \begin{split} \mbox{\upshape III}\leq&\frac{C}{\sigma}\sum_i w(Q_i)\cdot\frac{1}{w(Q_i)}\int_{Q_i}\big|b(x)-b_{Q_i}\big|\cdot|f(x)|\,w(x)\,dx\\ \leq&\frac{C}{\sigma}\sum_i w(Q_i)\cdot\big\|b-b_{Q_i}\big\|_{\exp L(w),Q_i}\big\|f\big\|_{L\log L(w),Q_i}\\ \leq&\frac{C\cdot\|b\|_*}{\sigma}\sum_i w(Q_i)\cdot\big\|f\big\|_{L\log L(w),Q_i}\\ =&\frac{C\cdot\|b\|_*}{\sigma}\sum_i w(Q_i)\cdot \inf_{\eta>0}\left\{\eta+\frac{\eta}{w(Q_i)}\int_{Q_i}\Phi\left(\frac{|f(y)|}{\eta}\right)w(y)\,dy\right\}\\ \leq&\frac{C\cdot\|b\|_*}{\sigma}\sum_i w(Q_i)\cdot \left\{\sigma+\frac{\sigma}{w(Q_i)}\int_{Q_i}\Phi\left(\frac{|f(y)|}{\sigma}\right)w(y)\,dy\right\}\\ \leq& C\left\{\sum_i w(Q_i)+\sum_i\int_{Q_i}\Phi\left(\frac{|f(y)|}{\sigma}\right)w(y)\,dy\right\}\\ \leq& C\int_{\mathbb R^n}\Phi\left(\frac{|f(y)|}{\sigma}\right)\cdot w(y)\,dy. \end{split} \end{equation*} Arguing as in the proof of (3.8), we find that \begin{equation*} \begin{split} \int_{Q_i}\big|b(x)-b_{Q_i}\big|w(x)\,dx&\leq\left(\int_{Q_i}\big|b(x)-b_{Q_i}\big|^{r'}dx\right)^{1/{r'}}\left(\int_{Q_i}w(x)^rdx\right)^{1/r}\\ &\leq C\|b\|_*\cdot w(Q_i). 
\end{split} \end{equation*} Therefore \begin{equation*} \begin{split} \mbox{\upshape IV}\leq&\frac{C}{\sigma}\sum_i\frac{w(Q_i)}{|Q_i|}\int_{Q_i}|f(y)|\,dy\leq\frac{C}{\sigma}\sum_i\underset{y\in Q_i}{\mbox{ess\,inf}}\,w(y)\int_{Q_i}|f(y)|\,dy\\ \leq&\frac{C}{\sigma}\int_{\bigcup_i Q_i}|f(y)|w(y)\,dy\leq C\int_{\mathbb R^n}\frac{|f(y)|}{\sigma}\cdot w(y)\,dy\\ \leq& C\int_{\mathbb R^n}\Phi\left(\frac{|f(y)|}{\sigma}\right)\cdot w(y)\,dy. \end{split} \end{equation*} Summing up all the above estimates, we get the desired result. \end{proof} \section{Proof of Theorem \ref{mainthm:3}} In order to prove the main theorem of this section, we will need the following estimates which were established by the author in \cite{wang5}. \newtheorem{prop}[theorem]{Proposition} \begin{prop} Let $w\in A_1$ and $0<\alpha\le1$. Then for any $j\in\mathbb Z_+$, we have \begin{equation*} \big\|\mathcal S_{\alpha,2^j}(f)\big\|_{L^2_w}\leq C\cdot2^{jn/2}\big\|\mathcal S_\alpha(f)\big\|_{L^2_w}. \end{equation*} \end{prop} \begin{prop} Let $w\in A_1$, $0<\alpha\le1$ and $2<q<\infty$. Then for any $j\in\mathbb Z_+$, we have \begin{equation*} \big\|\mathcal S_{\alpha,2^j}(f)\big\|_{L^q_w}\leq C\cdot2^{jn/2}\big\|\mathcal S_\alpha(f)\big\|_{L^q_w}. \end{equation*} \end{prop} \begin{prop} Let $w\in A_1$, $0<\alpha\le1$ and $1<q<2$. Then for any $j\in\mathbb Z_+$, we have \begin{equation*} \big\|\mathcal S_{\alpha,2^j}(f)\big\|_{L^q_w}\leq C\cdot2^{jn/q}\big\|\mathcal S_\alpha(f)\big\|_{L^q_w}. 
\end{equation*} \end{prop} Moreover, from the definition of $\mathcal G^*_{\lambda,\alpha}$ ($\lambda>3$), we readily see that \begin{align}\label{g(lambda)} \left|\mathcal G^*_{\lambda,\alpha}(f)(x)\right|^2=&\iint_{\mathbb R^{n+1}_+}\left(\frac{t}{t+|x-y|}\right)^{\lambda n}\Big(A_\alpha(f)(y,t)\Big)^2\frac{dydt}{t^{n+1}}\notag\\ =&\int_0^\infty\int_{|x-y|<t}\left(\frac{t}{t+|x-y|}\right)^{\lambda n}\Big(A_\alpha(f)(y,t)\Big)^2\frac{dydt}{t^{n+1}}\notag\\ &+\sum_{j=1}^\infty\int_0^\infty\int_{2^{j-1}t\le|x-y|<2^jt}\left(\frac{t}{t+|x-y|}\right)^{\lambda n}\Big(A_\alpha(f)(y,t)\Big)^2\frac{dydt}{t^{n+1}}\notag\\ \le&\, C\bigg[\mathcal S_\alpha(f)(x)^2+\sum_{j=1}^\infty 2^{-j\lambda n}\mathcal S_{\alpha,2^j}(f)(x)^2\bigg]. \end{align} Thus, by applying Propositions 4.1--4.3, the $L^q_w$-boundedness of $\mathcal S_\alpha$ (see \cite{wilson2}) and the above inequality (\ref{g(lambda)}), we obtain that for $1<q<\infty$ and $w\in A_1$, \begin{align}\label{g(lambda) bounds} \big\|\mathcal G^*_{\lambda,\alpha}(f)\big\|_{L^q_{w}} &\le C\Bigg(\big\|\mathcal S_\alpha(f)\big\|_{L^q_{w}}+\sum_{j=1}^\infty 2^{-\frac{j\lambda n}{2}}\big\|\mathcal S_{\alpha,2^j}(f)\big\|_{L^q_{w}}\Bigg)\notag\\ &\le C\Bigg(\big\|\mathcal S_\alpha(f)\big\|_{L^q_{w}}+\sum_{j=1}^\infty 2^{-\frac{j\lambda n}{2}}\cdot\big[2^{\frac{jn}{2}}+2^{\frac{jn}{q}}\big]\big\|\mathcal S_{\alpha}(f)\big\|_{L^q_{w}}\Bigg)\notag\\ &\le C\big\|f\big\|_{L^q_{w}}\Bigg(1+\sum_{j=1}^\infty2^{-\frac{j\lambda n}{2}} \cdot\big[2^{\frac{jn}{2}}+2^{\frac{jn}{q}}\big]\Bigg)\notag\\ &\le C\big\|f\big\|_{L^q_{w}}, \end{align} where the last inequality holds under the assumption $\lambda>3>\max\{1,2/q\}$ when $1<q<\infty$.
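For completeness, we point out why the last series converges. Since $1<q<\infty$, we have $\max\{1,2/q\}<2<\lambda$, so
\begin{equation*}
\sum_{j=1}^\infty2^{-\frac{j\lambda n}{2}}\cdot\big[2^{\frac{jn}{2}}+2^{\frac{jn}{q}}\big]
=\sum_{j=1}^\infty2^{-\frac{jn}{2}(\lambda-1)}+\sum_{j=1}^\infty2^{-\frac{jn}{2}\left(\lambda-\frac{2}{q}\right)}<\infty,
\end{equation*}
both series being geometric with common ratio strictly less than one.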
In addition, for a given real-valued function $b\in BMO(\mathbb R^n)$, as before, we can also prove that \begin{equation}\label{4.3} \big|\big[b,\mathcal G^*_{\lambda,\alpha}\big](f)(x)\big|\le\frac{1}{2\pi}\int_0^{2\pi} \mathcal G^*_{\lambda,\alpha}\big(e^{-e^{i\theta}b}\cdot f\big)(x)\cdot e^{\cos\theta\cdot b(x)}\,d\theta. \end{equation} Taking into account the inequalities (\ref{g(lambda) bounds}) and (\ref{4.3}), and following along the same arguments used in \cite{ding}, we can also show the following: \begin{theorem}\label{commutator thm2} Let $0<\alpha\le1$, $1<q<\infty$ and $w\in A_1$. If $\lambda>3$, then the commutator $\big[b,\mathcal G^*_{\lambda,\alpha}\big]$ is bounded from $L^q_w(\mathbb R^n)$ into itself whenever $b\in BMO(\mathbb R^n)$. \end{theorem} In \cite{wang2}, we have established the weighted weak-type (1,1) estimate of $\mathcal G^*_{\lambda,\alpha}$ on $L^1_w(\mathbb R^n)$. More specifically, we obtained \begin{theorem}\label{g(lambda)weak estimate} Let $0<\alpha\le1$, $w\in A_1$ and $\lambda>{(3n+2\alpha)}/n$. Then for any given $\sigma>0$, there exists a constant $C>0$ independent of $f$ and $\sigma$ such that \begin{equation*} w\Big(\Big\{x\in\mathbb R^n:\big|\mathcal G^*_{\lambda,\alpha}(f)(x)\big|>\sigma\Big\}\Big)\le\frac{C}{\sigma}\int_{\mathbb R^n}|f(x)|w(x)\,dx. \end{equation*} \end{theorem} \begin{proof}[Proof of Theorem $\ref{mainthm:3}$] For any fixed $\sigma>0$, as before, we again perform the Calder\'on--Zygmund decomposition of $f$ at the level $\sigma$ to obtain a sequence of non-overlapping dyadic cubes $\{Q_i\}$ such that the following property holds (see \cite{stein}): \begin{equation}\label{decomposition2} \sigma<\frac{1}{|Q_i|}\int_{Q_i}|f(y)|\,dy< 2^n\sigma. \end{equation} Set $E=\bigcup_i Q_i$. Now we decompose $f(x)=g(x)+h(x)$, where $g(x)=f(x)$ when $x\in E^c$, and $g(x)=\frac{1}{|Q_i|}\int_{Q_i}|f(y)|\,dy$ when $x\in Q_i$.
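We note that, by (\ref{decomposition2}), the good part $g$ is bounded: for a.e. $x\in E^c$ the Lebesgue differentiation theorem gives $|f(x)|\le\sigma$ (such $x$ lies outside every selected dyadic cube), while for $x\in Q_i$,
\begin{equation*}
|g(x)|=\frac{1}{|Q_i|}\int_{Q_i}|f(y)|\,dy<2^n\sigma.
\end{equation*}
Hence $\|g\|_{L^\infty}\le 2^n\sigma$, which is precisely the pointwise bound behind the first step of (\ref{g}).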
Then \begin{equation*} h(x)=f(x)-g(x)=\sum_i h_i(x), \end{equation*} with $h_i(x)=h(x)\chi_{Q_i}(x)$. Clearly, by the above decomposition, we get supp\,$h_i\subseteq Q_i$, $\int_{Q_i}h_i(x)\,dx=0$ and $\|h_i\|_{L^1}\le 2\int_{Q_i}|f(x)|\,dx$. Since $\big|\big[b,\mathcal G^*_{\lambda,\alpha}\big](f)(x)\big|\leq\big|\big[b,\mathcal G^*_{\lambda,\alpha}\big](g)(x)\big|+\big|\big[b,\mathcal G^*_{\lambda,\alpha}\big](h)(x)\big|$, we have \begin{equation*} \begin{split} &w\big(\big\{x\in\mathbb R^n:\big|[b,\mathcal G^*_{\lambda,\alpha}](f)(x)\big|>\sigma\big\}\big)\\ \leq& w\big(\big\{x\in\mathbb R^n:\big|[b,\mathcal G^*_{\lambda,\alpha}](g)(x)\big|>\sigma/2\big\}\big) +w\big(\big\{x\in\mathbb R^n:\big|[b,\mathcal G^*_{\lambda,\alpha}](h)(x)\big|>\sigma/2\big\}\big)\\ :=&J_1+J_2. \end{split} \end{equation*} Let us start with the term $J_1$. By using Chebyshev's inequality, Theorem \ref{commutator thm2} and the inequality (\ref{g}), we obtain \begin{equation*} \begin{split} J_1&\leq \frac{4}{\sigma^2}\cdot\Big\|\big[b,\mathcal G^*_{\lambda,\alpha}\big](g)\Big\|^2_{L^2_w}\leq \frac{C}{\sigma^2}\cdot\big\|g\big\|^2_{L^2_w}\\ &\leq\frac{C}{\sigma^2}\cdot\sigma\int_{\mathbb R^n}|f(x)|w(x)\,dx\\ &\leq C\int_{\mathbb R^n}\Phi\left(\frac{|f(x)|}{\sigma}\right)\cdot w(x)\,dx. \end{split} \end{equation*} To estimate the other term $J_2$, as before, we also let $Q_i^*=2\sqrt n Q_i$ be the cube concentric with $Q_i$ such that $\ell(Q_i^*)=(2\sqrt n)\ell(Q_i)$. Then we can further split $J_2$ into two parts as follows. \begin{equation*} \begin{split} J_2\le&\,w\Big(\Big\{x\in \bigcup_i Q_i^*:\Big|\big[b,\mathcal G^*_{\lambda,\alpha}\big](h)(x)\Big|>\sigma/2\Big\}\Big)\\ &+w\Big(\Big\{x\notin \bigcup_i Q_i^*:\Big|\big[b,\mathcal G^*_{\lambda,\alpha}\big](h)(x)\Big|>\sigma/2\Big\}\Big)\\ :=&\,J_3+J_4.
\end{split} \end{equation*} The term $J_3$ can be estimated exactly as in the proof of Theorem \ref{mainthm:1}: \begin{equation*} J_3\leq\sum_i w\big(Q_i^*\big)\le C\sum_i w(Q_i)\leq C\int_{\mathbb R^n}\Phi\left(\frac{|f(x)|}{\sigma}\right)\cdot w(x)\,dx. \end{equation*} By the previous estimate (\ref{commutator estimate}), we thus obtain \begin{equation*} \begin{split} \big|\big[b,\mathcal G^*_{\lambda,\alpha}\big](h)(x)\big|&\leq\sum_i\big|b(x)-b_{Q_i}\big|\cdot\mathcal G^*_{\lambda,\alpha}(h_i)(x)\\ &+\left(\iint_{\Gamma(x)}\sup_{\varphi\in{\mathcal C}_\alpha}\bigg|\int_{\mathbb R^n}\big[b(z)-b_{Q_i}\big]\varphi_t(y-z)\cdot\sum_i h_i(z)\,dz\bigg|^2\frac{dydt}{t^{n+1}}\right)^{1/2}\\ &=\sum_i\big|b(x)-b_{Q_i}\big|\cdot\mathcal G^*_{\lambda,\alpha}(h_i)(x)+\mathcal G^*_{\lambda,\alpha}\bigg(\sum_i[b-b_{Q_i}]h_i\bigg)(x). \end{split} \end{equation*} Therefore \begin{equation*} \begin{split} J_4\leq & w\bigg(\bigg\{x\notin \bigcup_i Q_i^*:\sum_i\big|b(x)-b_{Q_i}\big|\cdot\mathcal G^*_{\lambda,\alpha}(h_i)(x)>\sigma/4\bigg\}\bigg)\\ &+ w\bigg(\bigg\{x\notin \bigcup_i Q_i^*:\mathcal G^*_{\lambda,\alpha}\bigg(\sum_i[b-b_{Q_i}]h_i\bigg)(x)>\sigma/4\bigg\}\bigg)\\ :=&J_5+J_6. \end{split} \end{equation*} It follows directly from Chebyshev's inequality that \begin{equation*} \begin{split} J_5&\leq\frac{\,4\,}{\sigma}\int_{\mathbb R^n\backslash\bigcup_i Q_i^*}\bigg|\sum_i\big|b(x)-b_{Q_i}\big|\cdot\mathcal G^*_{\lambda,\alpha}(h_i)(x)\bigg|w(x)\,dx\\ &\leq\frac{\,4\,}{\sigma}\sum_i\left(\int_{(Q_i^*)^c}\big|b(x)-b_{Q_i}\big|\cdot\mathcal G^*_{\lambda,\alpha}(h_i)(x)\,w(x)\,dx\right).\\ \end{split} \end{equation*} We also denote the center of $Q_i$ by $c_i$. In the proof of Theorem \ref{mainthm:1}, we have already shown that \begin{equation} \big|\mathcal S_{\alpha}(h_i)(x)\big|\leq C\cdot\frac{\ell(Q_i)^{\alpha}}{|x-c_i|^{n+\alpha}}\bigg(\int_{Q_i}|f(z)|\,dz\bigg).
\end{equation} Below we will give the pointwise estimates of $\big|\mathcal S_{\alpha,2^j}(h_i)(x)\big|$ for $j=1,2,\ldots$. Notice that for any $z\in Q_i$ and $x\in (Q^*_i)^c$, we get $|z-c_i|<\frac{|x-c_i|}{2}$. Thus, for all $(y,t)\in\Gamma_{2^j}(x)$ and $|z-y|\le t$ with $z\in Q_i$, we can deduce that \begin{equation}\label{2(j)t} t+2^jt\ge|x-y|+|y-z|\ge|x-z|\ge|x-c_i|-|z-c_i|\ge\frac{|x-c_i|}{2}. \end{equation} Hence, for any $x\in (Q^*_i)^c$, by the inequalities (\ref{kernel estimate1}) and (\ref{2(j)t}), we obtain that for $j=1,2,\ldots$, \begin{equation*} \begin{split} \big|\mathcal S_{\alpha,2^j}(h_i)(x)\big|&=\left(\iint_{\Gamma_{2^j}(x)}\bigg(\sup_{\varphi\in{\mathcal C}_\alpha}\big|(\varphi_t*{h_i})(y)\big|\bigg)^2\frac{dydt}{t^{n+1}}\right)^{1/2}\\ &\leq C\cdot\ell(Q_i)^{\alpha}\bigg(\int_{Q_i}|h_i(z)|\,dz\bigg)\left(\int_{\frac{|x-c_i|}{2^{j+2}}}^\infty \int_{|y-x|<2^jt}\frac{dydt}{t^{2(n+\alpha)+n+1}}\right)^{1/2}\\ &\leq C\cdot2^{{jn}/2}\ell(Q_i)^{\alpha}\bigg(\int_{Q_i}|h_i(z)|\,dz\bigg) \left(\int_{\frac{|x-c_i|}{2^{j+2}}}^\infty\frac{dt}{t^{2(n+\alpha)+1}}\right)^{1/2}\\ &\leq C\cdot2^{j(3n+2\alpha)/2}\frac{\ell(Q_i)^{\alpha}}{|x-c_i|^{n+\alpha}}\bigg(\int_{Q_i}|f(z)|\,dz\bigg). \end{split} \end{equation*} Therefore, by using the pointwise estimate we just derived above and the inequality (\ref{g(lambda)}), \begin{equation*} \begin{split} \left|\mathcal G^*_{\lambda,\alpha}(h_i)(x)\right|&\leq C\bigg[\big|\mathcal S_\alpha(h_i)(x)\big| +\sum_{j=1}^\infty 2^{{-j\lambda n}/2}\big|\mathcal S_{\alpha,2^j}(h_i)(x)\big|\bigg]\\ &\leq C\cdot\frac{\ell(Q_i)^{\alpha}}{|x-c_i|^{n+\alpha}}\bigg(\int_{Q_i}|f(z)|\,dz\bigg)\times\left(1+\sum_{j=1}^\infty 2^{{-j\lambda n}/2}\cdot2^{j(3n+2\alpha)/2}\right)\\ &\leq C\cdot\frac{\ell(Q_i)^{\alpha}}{|x-c_i|^{n+\alpha}}\bigg(\int_{Q_i}|f(z)|\,dz\bigg), \end{split} \end{equation*} where the last inequality is due to our assumption $\lambda>{(3n+2\alpha)}/n$. 
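To see this, note that the series converges precisely because of the assumption on $\lambda$: writing
\begin{equation*}
\sum_{j=1}^\infty 2^{-j\lambda n/2}\cdot2^{j(3n+2\alpha)/2}=\sum_{j=1}^\infty 2^{-\frac{j}{2}\left[\lambda n-(3n+2\alpha)\right]},
\end{equation*}
we see that this geometric series has common ratio less than one if and only if $\lambda n-(3n+2\alpha)>0$, that is, $\lambda>(3n+2\alpha)/n$.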
Consequently, \begin{equation*} J_5\leq\frac{C}{\sigma}\sum_i \left(\ell(Q_i)^{\alpha}\int_{Q_i}|f(z)|\,dz\times\int_{(Q_i^*)^c}\big|b(x)-b_{Q_i}\big|\cdot\frac{w(x)}{|x-c_i|^{n+\alpha}}dx\right). \end{equation*} Following along the same lines as in the proof of Theorem \ref{mainthm:1}, we can also show \begin{equation*} J_5\leq C\int_{\mathbb R^n}\Phi\left(\frac{|f(x)|}{\sigma}\right)\cdot w(x)\,dx. \end{equation*} On the other hand, by using the weighted weak-type (1,1) estimate of $\mathcal G^*_{\lambda,\alpha}$ (see Theorem \ref{g(lambda)weak estimate}), we have \begin{equation*} J_6\leq\frac{C}{\sigma}\int_{\mathbb R^n}\sum_i\big|b(x)-b_{Q_i}\big|\cdot|h_i(x)|\,w(x)\,dx. \end{equation*} The rest of the proof is exactly the same as that of Theorem \ref{mainthm:1}, and we finally obtain \begin{equation*} J_6\leq C\int_{\mathbb R^n}\Phi\left(\frac{|f(x)|}{\sigma}\right)\cdot w(x)\,dx. \end{equation*} Collecting all these estimates, we obtain the desired result. \end{proof} \section{Proofs of Theorems \ref{mainthm:4}, \ref{mainthm:5} and \ref{mainthm:6}} \begin{proof}[Proofs of Theorems $\ref{mainthm:4}$ and $\ref{mainthm:5}$] We will only give the proof of Theorem $\ref{mainthm:4}$ here, because the proof of Theorem $\ref{mainthm:5}$ is essentially the same. Fix a ball $B=B(x_0,r_B)\subseteq\mathbb R^n$ and decompose $f=f_1+f_2$, where $f_1=f\cdot\chi_{_{2B}}$ and $\chi_{_{2B}}$ denotes the characteristic function of $2B=B(x_0,2r_B)$. For any $0<\kappa<1$, $w\in A_1$ and any given $\sigma>0$, one writes \begin{equation*} \begin{split} &\frac{1}{w(B)^\kappa}\cdot w\big(\big\{x\in B:\big|[b,\mathcal S_\alpha](f)(x)\big|>\sigma\big\}\big)\\ \leq &\frac{1}{w(B)^\kappa}\cdot w\big(\big\{x\in B:\big|[b,\mathcal S_\alpha](f_1)(x)\big|>\sigma/2\big\}\big) +\frac{1}{w(B)^\kappa}\cdot w\big(\big\{x\in B:\big|[b,\mathcal S_\alpha](f_2)(x)\big|>\sigma/2\big\}\big)\\ :=&I_1+I_2.
\end{split} \end{equation*} Using Theorem \ref{mainthm:1} and the inequality (\ref{weights}), we get \begin{equation*} \begin{split} I_1&\leq C\cdot\frac{1}{w(B)^\kappa}\int_{\mathbb R^n}\Phi\left(\frac{|f_1(x)|}{\sigma}\right)\cdot w(x)\,dx\\ &= C\cdot\frac{1}{w(B)^\kappa} \int_{2B}\Phi\left(\frac{|f(x)|}{\sigma}\right)\cdot w(x)\,dx\\ &= C\cdot\frac{w(2B)^\kappa}{w(B)^\kappa}\cdot\frac{1}{w(2B)^\kappa} \int_{2B}\Phi\left(\frac{|f(x)|}{\sigma}\right)\cdot w(x)\,dx\\ &\leq C\cdot\sup_B\left\{\frac{1}{w(B)^\kappa} \int_{B}\Phi\left(\frac{|f(x)|}{\sigma}\right)\cdot w(x)\,dx\right\}. \end{split} \end{equation*} For any $x\in B$, we can easily check that \begin{equation*} \big|\big[b,\mathcal S_\alpha\big](f_2)(x)\big|\leq \big|b(x)-b_{B}\big|\cdot\mathcal S_\alpha(f_2)(x)+\mathcal S_\alpha\Big([b-b_{B}]f_2\Big)(x). \end{equation*} So we have \begin{equation*} \begin{split} I_2\leq&\frac{1}{w(B)^\kappa}\cdot w\big(\big\{x\in B:\big|b(x)-b_{B}\big|\cdot\mathcal S_\alpha(f_2)(x)>\sigma/4\big\}\big)\\ &+\frac{1}{w(B)^\kappa}\cdot w\big(\big\{x\in B:\big|\mathcal S_\alpha\Big([b-b_{B}]f_2\Big)(x)\big|>\sigma/4\big\}\big)\\ :=&I_3+I_4. \end{split} \end{equation*} For the term $I_3$, for all $0<\alpha\leq1$ and $x\in B$, it was proved by the author in \cite{wang1} that \begin{equation}\label{S(f2)} \big|\mathcal S_\alpha(f_2)(x)\big|\leq C\sum_{j=1}^\infty\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B}|f(z)|\,dz. \end{equation} Since $w\in A_1$, there exists a number $r>1$ such that $w\in RH_r$.
Hence, by using the above pointwise estimate (\ref{S(f2)}) and Chebyshev's inequality together with H\"older's inequality and the John--Nirenberg inequality (see \cite{john}), we conclude that \begin{equation*} \begin{split} I_3&\leq\frac{1}{w(B)^\kappa}\cdot\frac{\,4\,}{\sigma}\int_B\big|b(x)-b_{B}\big|\cdot\mathcal S_\alpha(f_2)(x)w(x)\,dx\\ &\leq C\sum_{j=1}^\infty\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B}\frac{|f(z)|}{\sigma}\,dz\\ &\times\frac{1}{w(B)^\kappa}\cdot\left(\int_{B}\big|b(x)-b_{B}\big|^{r'}dx\right)^{1/{r'}}\left(\int_{B}w(x)^rdx\right)^{1/r}\\ &\leq C\sum_{j=1}^\infty\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B}\frac{|f(z)|}{\sigma}\,dz\times w(B)^{1-\kappa}\\ &\leq C\sum_{j=1}^\infty\frac{1}{w(2^{j+1}B)}\int_{2^{j+1}B}\frac{|f(z)|}{\sigma}\cdot w(z)\,dz\times w(B)^{1-\kappa}\\ &\leq C\cdot\sup_B\left\{\frac{1}{w(B)^\kappa} \int_{B}\Phi\left(\frac{|f(z)|}{\sigma}\right)\cdot w(z)\,dz\right\}\times\sum_{j=1}^\infty\frac{w(B)^{1-\kappa}}{w(2^{j+1}B)^{1-\kappa}}. \end{split} \end{equation*} Since $w\in A_1\subset A_\infty$, by the inequality (\ref{compare}), we get \begin{align}\label{<C} \sum_{j=1}^\infty\frac{w(B)^{1-\kappa}}{w(2^{j+1}B)^{1-\kappa}}&\leq C\sum_{j=1}^\infty\left(\frac{|B|}{|2^{j+1}B|}\right)^{\delta(1-\kappa)}\notag\\ &\leq C\sum_{j=1}^\infty\left(\frac{1}{2^{jn}}\right)^{\delta(1-\kappa)}\leq C, \end{align} which in turn gives that \begin{equation*} I_3\leq C\cdot\sup_B\left\{\frac{1}{w(B)^\kappa} \int_{B}\Phi\left(\frac{|f(z)|}{\sigma}\right)\cdot w(z)\,dz\right\}. \end{equation*} Similar to the proof of (\ref{S(f2)}), for all $0<\alpha\leq1$ and all $x\in B$, we can show the following pointwise estimate as well. \begin{equation}\label{[b,S](f2)} \Big|\mathcal S_\alpha\Big([b-b_{B}]f_2\Big)(x)\Big|\leq C\sum_{j=1}^\infty\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B}\big|b(z)-b_{B}\big|\cdot\big|f(z)\big|\,dz.
\end{equation} Applying the above pointwise estimate (\ref{[b,S](f2)}) and Chebyshev's inequality, we have \begin{equation*} \begin{split} I_4&\leq\frac{1}{w(B)^\kappa}\cdot\frac{\,4\,}{\sigma}\int_B\Big|\mathcal S_\alpha\Big([b-b_{B}]f_2\Big)(x)\Big|w(x)\,dx\\ &\leq\frac{w(B)}{w(B)^\kappa}\cdot\frac{C}{\sigma}\sum_{j=1}^\infty\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B} \big|b(z)-b_{B}\big|\cdot\big|f(z)\big|\,dz\\ &\leq\frac{w(B)}{w(B)^\kappa}\cdot\frac{C}{\sigma}\sum_{j=1}^\infty\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B} \big|b(z)-b_{2^{j+1}B}\big|\cdot\big|f(z)\big|\,dz\\ &+\frac{w(B)}{w(B)^\kappa}\cdot\frac{C}{\sigma}\sum_{j=1}^\infty\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B} \big|b_{2^{j+1}B}-b_B\big|\cdot\big|f(z)\big|\,dz\\ &:=I_5+I_6. \end{split} \end{equation*} For the term $I_5$, observe that for any $a,b>0$, $\Phi(a\cdot b)\leq\Phi(a)\cdot\Phi(b)$ when $\Phi(t)=t(1+\log^+t)$. We then use the generalized H\"older's inequality with weight (\ref{weighted holder}), (\ref{weighted exp}) and (\ref{equiv norm with weight}) together with (\ref{<C}) to obtain \begin{equation*} \begin{split} I_5&\leq \frac{C}{\sigma}\cdot w(B)^{1-\kappa}\sum_{j=1}^\infty\frac{1}{w(2^{j+1}B)}\int_{2^{j+1}B} \big|b(z)-b_{2^{j+1}B}\big|\cdot\big|f(z)\big|w(z)\,dz\\ &\leq \frac{C}{\sigma}\cdot w(B)^{1-\kappa}\sum_{j=1}^\infty\big\|b-b_{2^{j+1}B}\big\|_{\exp L(w),2^{j+1}B}\big\|f\big\|_{L\log L(w),2^{j+1}B}\\ &\leq \frac{C\|b\|_*}{\sigma}\cdot w(B)^{1-\kappa}\sum_{j=1}^\infty \inf_{\eta>0}\left\{\eta+\frac{\eta}{w(2^{j+1}B)}\int_{2^{j+1}B}\Phi\left(\frac{|f(z)|}{\eta}\right)w(z)\,dz\right\}\\ &\leq \frac{C\|b\|_*}{\sigma}\cdot w(B)^{1-\kappa}\sum_{j=1}^\infty \left\{\frac{\sigma}{w(2^{j+1}B)^{1-\kappa}}+\frac{\sigma}{w(2^{j+1}B)}\int_{2^{j+1}B}\Phi\left(\frac{|f(z)|}{\sigma}\right)w(z)\,dz\right\}\\ &\leq C\|b\|_*\cdot\left[1+\sup_B\left\{\frac{1}{w(B)^\kappa} \int_{B}\Phi\left(\frac{|f(z)|}{\sigma}\right)\cdot
w(z)\,dz\right\}\right]\times\sum_{j=1}^\infty\frac{w(B)^{1-\kappa}}{w(2^{j+1}B)^{1-\kappa}}\\ &\leq C\cdot\sup_B\left\{\frac{1}{w(B)^\kappa} \int_{B}\Phi\left(\frac{|f(z)|}{\sigma}\right)\cdot w(z)\,dz\right\}. \end{split} \end{equation*} For the last term $I_6$ we proceed as follows. By the inequality (\ref{BMO}) again, we get \begin{equation*} \begin{split} I_6&\leq C\cdot w(B)^{1-\kappa}\sum_{j=1}^\infty(j+1)\|b\|_*\cdot\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B}\frac{|f(z)|}{\sigma}\,dz\\ &\leq C\cdot w(B)^{1-\kappa}\sum_{j=1}^\infty(j+1)\|b\|_*\cdot\frac{1}{w(2^{j+1}B)}\int_{2^{j+1}B}\frac{|f(z)|}{\sigma}\cdot w(z)\,dz\\ &\leq C\cdot\sup_B\left\{\frac{1}{w(B)^\kappa} \int_{B}\Phi\left(\frac{|f(z)|}{\sigma}\right)\cdot w(z)\,dz\right\}\times\sum_{j=1}^\infty(j+1)\cdot\frac{w(B)^{1-\kappa}}{w(2^{j+1}B)^{1-\kappa}}. \end{split} \end{equation*} Since $w\in A_1\subset A_\infty$, by using the inequality (\ref{compare}) again, we have \begin{align} \sum_{j=1}^\infty(j+1)\cdot\frac{w(B)^{1-\kappa}}{w(2^{j+1}B)^{1-\kappa}}&\leq C\sum_{j=1}^\infty(j+1)\cdot\left(\frac{|B|}{|2^{j+1}B|}\right)^{\delta(1-\kappa)}\notag\\ &\leq C\sum_{j=1}^\infty(j+1)\cdot\left(\frac{1}{2^{(j+1)n}}\right)^{\delta(1-\kappa)}\leq C, \end{align} which implies \begin{equation*} I_6\leq C\cdot\sup_B\left\{\frac{1}{w(B)^\kappa} \int_{B}\Phi\left(\frac{|f(z)|}{\sigma}\right)\cdot w(z)\,dz\right\}. \end{equation*} Summarizing the above discussions, we obtain the conclusion of the theorem. \end{proof} \begin{proof}[Proof of Theorem $\ref{mainthm:6}$] Fix a ball $B=B(x_0,r_B)\subseteq\mathbb R^n$ and decompose $f=f_1+f_2$, where $f_1=f\cdot\chi_{_{2B}}$. 
For any $0<\kappa<1$, $w\in A_1$ and any given $\sigma>0$, we then write \begin{equation*} \begin{split} &\frac{1}{w(B)^\kappa}\cdot w\big(\big\{x\in B:\big|[b,\mathcal G^*_{\lambda,\alpha}](f)(x)\big|>\sigma\big\}\big)\\ \leq &\frac{1}{w(B)^\kappa}\cdot w\big(\big\{x\in B:\big|[b,\mathcal G^*_{\lambda,\alpha}](f_1)(x)\big|>\sigma/2\big\}\big) +\frac{1}{w(B)^\kappa}\cdot w\big(\big\{x\in B:\big|[b,\mathcal G^*_{\lambda,\alpha}](f_2)(x)\big|>\sigma/2\big\}\big)\\ :=&J_1+J_2. \end{split} \end{equation*} Theorem \ref{mainthm:3} and the inequality (\ref{weights}) imply that \begin{equation*} \begin{split} J_1&\leq C\cdot\frac{1}{w(B)^\kappa}\int_{\mathbb R^n}\Phi\left(\frac{|f_1(x)|}{\sigma}\right)\cdot w(x)\,dx\\ &= C\cdot\frac{1}{w(B)^\kappa} \int_{2B}\Phi\left(\frac{|f(x)|}{\sigma}\right)\cdot w(x)\,dx\\ &= C\cdot\frac{w(2B)^\kappa}{w(B)^\kappa}\cdot\frac{1}{w(2B)^\kappa} \int_{2B}\Phi\left(\frac{|f(x)|}{\sigma}\right)\cdot w(x)\,dx\\ &\leq C\cdot\sup_B\left\{\frac{1}{w(B)^\kappa} \int_{B}\Phi\left(\frac{|f(x)|}{\sigma}\right)\cdot w(x)\,dx\right\}. \end{split} \end{equation*} For any $x\in B$, we are able to verify that \begin{equation*} \big|\big[b,\mathcal G^*_{\lambda,\alpha}\big](f_2)(x)\big|\leq \big|b(x)-b_{B}\big|\cdot\mathcal G^*_{\lambda,\alpha}(f_2)(x) +\mathcal G^*_{\lambda,\alpha}\Big([b-b_{B}]f_2\Big)(x). \end{equation*} So we have \begin{equation*} \begin{split} J_2\leq&\frac{1}{w(B)^\kappa}\cdot w\big(\big\{x\in B:\big|b(x)-b_{B}\big|\cdot\mathcal G^*_{\lambda,\alpha}(f_2)(x)>\sigma/4\big\}\big)\\ &+\frac{1}{w(B)^\kappa}\cdot w\big(\big\{x\in B:\big|\mathcal G^*_{\lambda,\alpha}\Big([b-b_{B}]f_2\Big)(x)\big|>\sigma/4\big\}\big)\\ :=&J_3+J_4. \end{split} \end{equation*} For the term $J_3$, for all $0<\alpha\leq1$, $x\in B$ and $j\in\mathbb Z_+$, it was also shown by the author in \cite{wang1} that \begin{equation}\label{Sj(f2)} \big|\mathcal S_{\alpha,2^j}(f_2)(x)\big|\leq C\cdot 2^{{3jn}/2}\sum_{k=1}^\infty\frac{1}{|2^{k+1}B|}\int_{2^{k+1}B}|f(z)|\,dz.
\end{equation} Hence, it follows from the inequalities (\ref{Sj(f2)}), (\ref{S(f2)}) and (\ref{g(lambda)}) that \begin{align}\label{g(lambda)(f2)} \left|\mathcal G^*_{\lambda,\alpha}(f_2)(x)\right|&\leq C\bigg[\big|\mathcal S_\alpha(f_2)(x)\big| +\sum_{j=1}^\infty 2^{{-j\lambda n}/2}\big|\mathcal S_{\alpha,2^j}(f_2)(x)\big|\bigg]\notag\\ &\leq C\cdot\sum_{j=1}^\infty\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B}|f(z)|\,dz\times\left(1+\sum_{j=1}^\infty 2^{{-j\lambda n}/2}\cdot2^{3jn/2}\right)\notag\\ &\leq C\cdot\sum_{j=1}^\infty\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B}|f(z)|\,dz, \end{align} where the last inequality is due to our assumption $\lambda>{(3n+2\alpha)}/n>3$. Hence, we can continue the estimate of $J_3$ in the same way as in Theorem \ref{mainthm:4}, and obtain \begin{equation*} \begin{split} J_3&\leq\frac{1}{w(B)^\kappa}\cdot\frac{\,4\,}{\sigma}\int_B\big|b(x)-b_{B}\big|\cdot\mathcal G^*_{\lambda,\alpha}(f_2)(x)w(x)\,dx\\ &\leq C\sum_{j=1}^\infty\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B}\frac{|f(z)|}{\sigma}\,dz \times\frac{1}{w(B)^\kappa}\cdot\int_B\big|b(x)-b_{B}\big|w(x)\,dx\\ &\leq C\sum_{j=1}^\infty\frac{1}{w(2^{j+1}B)}\int_{2^{j+1}B}\frac{|f(z)|}{\sigma}\cdot w(z)\,dz\times w(B)^{1-\kappa}\\ &\leq C\cdot\sup_B\left\{\frac{1}{w(B)^\kappa} \int_{B}\Phi\left(\frac{|f(z)|}{\sigma}\right)\cdot w(z)\,dz\right\}\times\sum_{j=1}^\infty\frac{w(B)^{1-\kappa}}{w(2^{j+1}B)^{1-\kappa}}\\ &\leq C\cdot\sup_B\left\{\frac{1}{w(B)^\kappa} \int_{B}\Phi\left(\frac{|f(z)|}{\sigma}\right)\cdot w(z)\,dz\right\}. \end{split} \end{equation*} For the term $J_4$, similar to the proof of (\ref{g(lambda)(f2)}), for all $0<\alpha\leq1$, all $x\in B$ and $\lambda>3$, we can show the following pointwise estimate as well. \begin{equation}\label{[b,g](f2)} \Big|\mathcal G^*_{\lambda,\alpha}\Big([b-b_{B}]f_2\Big)(x)\Big|\leq C\sum_{j=1}^\infty\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B}\big|b(z)-b_{B}\big|\cdot\big|f(z)\big|\,dz. 
\end{equation} Following the same arguments as in the proof of Theorem \ref{mainthm:4} and using the pointwise estimate (\ref{[b,g](f2)}) and Chebyshev's inequality, we obtain \begin{equation*} \begin{split} J_4&\leq\frac{1}{w(B)^\kappa}\cdot\frac{\,4\,}{\sigma}\int_B\Big|\mathcal G^*_{\lambda,\alpha}\Big([b-b_{B}]f_2\Big)(x)\Big|w(x)\,dx\\ &\leq\frac{w(B)}{w(B)^\kappa}\cdot\frac{C}{\sigma}\sum_{j=1}^\infty\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B} \big|b(z)-b_{B}\big|\cdot\big|f(z)\big|\,dz\\ &\leq C\cdot\sup_B\left\{\frac{1}{w(B)^\kappa} \int_{B}\Phi\left(\frac{|f(z)|}{\sigma}\right)\cdot w(z)\,dz\right\}. \end{split} \end{equation*} Combining all the above estimates completes the proof. \end{proof} \section{Proofs of Theorems \ref{mainthm:7}, \ref{mainthm:8} and \ref{mainthm:9}} \begin{proof}[Proofs of Theorems $\ref{mainthm:7}$ and $\ref{mainthm:8}$] Again we will only give the proof of Theorem $\ref{mainthm:7}$ here. Theorem $\ref{mainthm:8}$ can be dealt with similarly. For any ball $B=B(x_0,r)\subseteq\mathbb R^n$ with $x_0\in\mathbb R^n$ and $r>0$, we set $f=f_1+f_2$, where $f_1=f\cdot\chi_{_{2B}}$. Then for each fixed $\sigma>0$, we have \begin{equation*} \begin{split} &\frac{1}{\Theta(r)}\cdot\big|\big\{x\in B:\big|[b,\mathcal S_\alpha](f)(x)\big|>\sigma\big\}\big|\\ \leq &\frac{1}{\Theta(r)}\cdot \big|\big\{x\in B:\big|[b,\mathcal S_\alpha](f_1)(x)\big|>\sigma/2\big\}\big| +\frac{1}{\Theta(r)}\cdot \big|\big\{x\in B:\big|[b,\mathcal S_\alpha](f_2)(x)\big|>\sigma/2\big\}\big|\\ :=&K_1+K_2. \end{split} \end{equation*} We consider the term $K_1$ first.
Theorem \ref{maincor:1} and the inequality (\ref{doubling}) imply that \begin{equation*} \begin{split} K_1&\leq C\cdot\frac{1}{\Theta(r)}\int_{\mathbb R^n}\Phi\left(\frac{|f_1(x)|}{\sigma}\right)dx\\ &= C\cdot\frac{1}{\Theta(r)} \int_{2B}\Phi\left(\frac{|f(x)|}{\sigma}\right)dx\\ &= C\cdot\frac{\Theta(2r)}{\Theta(r)}\cdot\frac{1}{\Theta(2r)} \int_{B(x_0,2r)}\Phi\left(\frac{|f(x)|}{\sigma}\right)dx\\ &\leq C\cdot\sup_{r>0;B(x_0,r)}\left\{\frac{1}{\Theta(r)} \int_{B(x_0,r)}\Phi\left(\frac{|f(x)|}{\sigma}\right)dx\right\}. \end{split} \end{equation*} We now turn our attention to the estimate of $K_2$. Recall that the following estimate holds for any $x\in B$: \begin{equation*} \big|\big[b,\mathcal S_\alpha\big](f_2)(x)\big|\leq \big|b(x)-b_{B}\big|\cdot\mathcal S_\alpha(f_2)(x)+\mathcal S_\alpha\Big([b-b_{B}]f_2\Big)(x). \end{equation*} Thus, we have \begin{equation*} \begin{split} K_2\leq&\frac{1}{\Theta(r)}\cdot \big|\big\{x\in B:\big|b(x)-b_{B}\big|\cdot\mathcal S_\alpha(f_2)(x)>\sigma/4\big\}\big|\\ &+\frac{1}{\Theta(r)}\cdot \big|\big\{x\in B:\big|\mathcal S_\alpha\Big([b-b_{B}]f_2\Big)(x)\big|>\sigma/4\big\}\big|\\ :=&K_3+K_4.
\end{split} \end{equation*} By using the previous pointwise estimate (\ref{S(f2)}), Chebyshev's inequality and the definition of BMO, we conclude that \begin{equation*} \begin{split} K_3&\leq\frac{1}{\Theta(r)}\cdot\frac{\,4\,}{\sigma}\int_B\big|b(x)-b_{B}\big|\cdot\mathcal S_\alpha(f_2)(x)\,dx\\ &\leq C\sum_{j=1}^\infty\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B}\frac{|f(z)|}{\sigma}\,dz \times\left\{\frac{|B|}{\Theta(r)}\cdot\frac{1}{|B|}\int_B\big|b(x)-b_{B}\big|dx\right\}\\ &\leq C\|b\|_*\sum_{j=1}^\infty\frac{|B|}{|2^{j+1}B|}\cdot\frac{\Theta(2^{j+1}r)}{\Theta(r)}\cdot\frac{1}{\Theta(2^{j+1}r)}\int_{B(x_0,2^{j+1}r)}\frac{|f(z)|}{\sigma}\,dz\\ &\leq C\cdot\sup_{r>0;B(x_0,r)}\left\{\frac{1}{\Theta(r)} \int_{B(x_0,r)}\Phi\left(\frac{|f(z)|}{\sigma}\right)dz\right\}\times\sum_{j=1}^\infty\frac{|B|}{|2^{j+1}B|}\cdot\frac{\Theta(2^{j+1}r)}{\Theta(r)}. \end{split} \end{equation*} Since $1\leq D(\Theta)<2^n$, by using the doubling condition (\ref{doubling}) of $\Theta$, we know that \begin{equation}\label{theta<C} \sum_{j=1}^\infty\frac{|B|}{|2^{j+1}B|}\cdot\frac{\Theta(2^{j+1}r)}{\Theta(r)}\leq\sum_{j=1}^\infty\left(\frac{D(\Theta)}{2^n}\right)^{j+1}\leq C, \end{equation} which in turn gives \begin{equation*} K_3\leq C\cdot\sup_{r>0;B(x_0,r)}\left\{\frac{1}{\Theta(r)} \int_{B(x_0,r)}\Phi\left(\frac{|f(z)|}{\sigma}\right)dz\right\}.
\end{equation*} Applying the previous pointwise estimate (\ref{[b,S](f2)}) and Chebyshev's inequality, we have \begin{equation*} \begin{split} K_4&\leq\frac{1}{\Theta(r)}\cdot\frac{\,4\,}{\sigma}\int_B\Big|\mathcal S_\alpha\Big([b-b_{B}]f_2\Big)(x)\Big|\,dx\\ &\leq\frac{|B|}{\Theta(r)}\cdot\frac{C}{\sigma}\sum_{j=1}^\infty\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B} \big|b(z)-b_{B}\big|\cdot\big|f(z)\big|\,dz\\ &\leq\frac{|B|}{\Theta(r)}\cdot\frac{C}{\sigma}\sum_{j=1}^\infty\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B} \big|b(z)-b_{2^{j+1}B}\big|\cdot\big|f(z)\big|\,dz\\ &+\frac{|B|}{\Theta(r)}\cdot\frac{C}{\sigma}\sum_{j=1}^\infty\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B} \big|b_{2^{j+1}B}-b_B\big|\cdot\big|f(z)\big|\,dz\\ &:=K_5+K_6. \end{split} \end{equation*} For the term $K_5$, notice that the inequality $\Phi(a\cdot b)\leq\Phi(a)\cdot\Phi(b)$ holds for any $a,b>0$, when $\Phi(t)=t(1+\log^+t)$. We then use the generalized H\"older's inequality (\ref{holder}), (\ref{exp}) and (\ref{equiv norm}) together with (\ref{theta<C}) to obtain \begin{equation*} \begin{split} K_5&\leq \frac{|B|}{\Theta(r)}\cdot\frac{C}{\sigma}\sum_{j=1}^\infty\big\|b-b_{2^{j+1}B}\big\|_{\exp L,2^{j+1}B}\big\|f\big\|_{L\log L,2^{j+1}B}\\ &\leq \frac{C\|b\|_*}{\sigma}\cdot\frac{|B|}{\Theta(r)}\sum_{j=1}^\infty \inf_{\eta>0}\left\{\eta+\frac{\eta}{|2^{j+1}B|}\int_{2^{j+1}B}\Phi\left(\frac{|f(z)|}{\eta}\right)dz\right\}\\ &\leq \frac{C\|b\|_*}{\sigma}\cdot \frac{|B|}{\Theta(r)}\sum_{j=1}^\infty \left\{\frac{\sigma\cdot\Theta(2^{j+1}r)}{|2^{j+1}B|}+\frac{\sigma}{|2^{j+1}B|}\int_{2^{j+1}B}\Phi\left(\frac{|f(z)|}{\sigma}\right)dz\right\}\\ &\leq C\|b\|_*\cdot\left[1+\sup_{r>0;B(x_0,r)}\left\{\frac{1}{\Theta(r)} \int_{B(x_0,r)}\Phi\left(\frac{|f(z)|}{\sigma}\right)dz\right\}\right]\\ \end{split} \end{equation*} \begin{equation*} \begin{split} &\times\sum_{j=1}^\infty\frac{|B|}{|2^{j+1}B|}\cdot\frac{\Theta(2^{j+1}r)}{\Theta(r)}\\ &\leq C\cdot\sup_{r>0;B(x_0,r)}\left\{\frac{1}{\Theta(r)} 
\int_{B(x_0,r)}\Phi\left(\frac{|f(z)|}{\sigma}\right)dz\right\}. \end{split} \end{equation*} For the last term $K_6$, an application of the inequality (\ref{BMO}) yields \begin{equation*} \begin{split} K_6&\leq C\cdot\frac{|B|}{\Theta(r)}\sum_{j=1}^\infty(j+1)\|b\|_*\cdot\frac{1}{|2^{j+1}B|}\int_{2^{j+1}B}\frac{|f(z)|}{\sigma}\,dz\\ &\leq C\cdot\frac{|B|}{\Theta(r)}\sum_{j=1}^\infty(j+1)\|b\|_*\cdot\frac{\Theta(2^{j+1}r)}{|2^{j+1}B|}\cdot\frac{1}{\Theta(2^{j+1}r)}\int_{B(x_0,2^{j+1}r)}\frac{|f(z)|}{\sigma}\,dz\\ &\leq C\cdot\sup_{r>0;B(x_0,r)}\left\{\frac{1}{\Theta(r)} \int_{B(x_0,r)}\Phi\left(\frac{|f(z)|}{\sigma}\right)dz\right\}\\ &\times\sum_{j=1}^\infty(j+1)\cdot\frac{|B|}{|2^{j+1}B|}\cdot\frac{\Theta(2^{j+1}r)}{\Theta(r)}. \end{split} \end{equation*} Moreover, by using the doubling condition (\ref{doubling}) of $\Theta$ again and the fact that $1\leq D(\Theta)<2^n$, we find that \begin{equation}\label{final} \sum_{j=1}^\infty(j+1)\cdot\frac{|B|}{|2^{j+1}B|}\cdot\frac{\Theta(2^{j+1}r)}{\Theta(r)}\leq C\sum_{j=1}^\infty(j+1)\cdot\left(\frac{D(\Theta)}{2^n}\right)^{j+1}\leq C. \end{equation} Substituting the above inequality (\ref{final}) into the term $K_6$, we thus obtain \begin{equation*} K_6\leq C\cdot\sup_{r>0;B(x_0,r)}\left\{\frac{1}{\Theta(r)} \int_{B(x_0,r)}\Phi\left(\frac{|f(z)|}{\sigma}\right)dz\right\}. \end{equation*} Summing up all the above estimates, we therefore conclude the proof of the main theorem. \end{proof} Finally, we remark that by using the same arguments as in the proofs of Theorems \ref{mainthm:6} and \ref{mainthm:7}, we can also show the conclusion of Theorem \ref{mainthm:9}. The details are omitted here. \section{Acknowledgment} This work was supported by the Fundamental Research Funds for the Central Universities of Hunan University (Grant no. 531107040013). \end{document}
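The summability bounds (\ref{theta<C}) and (\ref{final}) rest on the elementary fact that, for $q=D(\Theta)/2^n<1$, the series $\sum_{j\geq1}q^{j+1}$ and $\sum_{j\geq1}(j+1)q^{j+1}$ converge, with closed forms $q^2/(1-q)$ and $q^2(2-q)/(1-q)^2$ respectively. A minimal numerical sketch (ours, for illustration only; the function names are invented):

```python
# Sketch: check the geometric-type series used in the bounds on K_3 and K_6,
# where q = D(Theta)/2^n < 1 by the standing assumption 1 <= D(Theta) < 2^n.

def tail_sums(q, terms=200):
    """Partial sums of sum_{j>=1} q^(j+1) and sum_{j>=1} (j+1) q^(j+1)."""
    s1 = sum(q ** (j + 1) for j in range(1, terms + 1))
    s2 = sum((j + 1) * q ** (j + 1) for j in range(1, terms + 1))
    return s1, s2

def closed_forms(q):
    # geometric series and its weighted variant, both summed from m = 2:
    # sum_{m>=2} q^m = q^2/(1-q),  sum_{m>=2} m q^m = q^2 (2-q)/(1-q)^2
    return q * q / (1 - q), q * q * (2 - q) / (1 - q) ** 2

q = 0.5  # e.g. D(Theta) = 4 and n = 3
s1, s2 = tail_sums(q)
c1, c2 = closed_forms(q)
```

For $q=1/2$ both pairs agree to machine precision ($s_1=c_1=1/2$ and $s_2=c_2=3/2$), confirming that the constants appearing in (\ref{theta<C}) and (\ref{final}) depend only on $D(\Theta)$ and $n$.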
\begin{document} \title[Weak convergence of random metric measure spaces]{{\large Convergence in distribution of random metric measure spaces}\\[1mm] ($\Lambda$-coalescent measure trees)} \thispagestyle{empty} \author{Andreas Greven} \address{Andreas Greven\\ Mathematisches Institut\\ University of Erlangen\\ Bismarckstr.\ 1$\tfrac 12$\\ D-91054 Erlangen \\ Germany} \email{[email protected]} \author{Peter Pfaffelhuber} \address{Peter Pfaffelhuber\\ Zoologisches Institut\\ Ludwig-Maximilian-University Munich \\ Gro\ss haderner Stra\ss e 2\\ D-82152 Planegg-Martinsried\\ Germany} \thanks{The research was supported by the DFG-Forschergruppe 498 via grant GR 876/13-1,2} \email{[email protected]} \author{Anita Winter} \address{Anita Winter\\ Mathematisches Institut\\ University of Erlangen\\ Bismarckstr.\ 1$\tfrac 12$\\ D-91054 Erlangen \\ Germany} \email{[email protected]} \thispagestyle{empty} \keywords{Metric measure spaces, Gromov metric triple, $\R$-trees, Gromov-Hausdorff topology, weak topology, Prohorov metric, Wasserstein metric, $\Lambda$-coalescent} \subjclass[2000]{Primary: 60B10, 05C80; Secondary: 60B05, 60G09} \begin{abstract} We consider the space of complete and separable metric spaces which are equipped with a probability measure. A notion of convergence is given based on the philosophy that a sequence of metric measure spaces converges if and only if all finite subspaces sampled from these spaces converge. This topology is metrized following Gromov's idea of embedding two metric spaces isometrically into a common metric space combined with the Prohorov metric between probability measures on a fixed metric space. We show that for this topology convergence in distribution follows - provided the sequence is tight - from convergence of all randomly sampled finite subspaces. We give a characterization of tightness based on quantities which are reasonably easy to calculate. 
Subspaces of particular interest are the space of real trees and of ultra-metric spaces equipped with a probability measure. As an example we characterize convergence in distribution for the (ultra-)metric measure spaces given by the random genealogies of the $\Lambda$-coalescents. We show that the $\Lambda$-coalescent defines an infinite (random) metric measure space if and only if the so-called ``dust-free'' property holds. \end{abstract} \maketitle \thispagestyle{empty} \section{Introduction and Motivation} \label{S:intro} In this paper we study random metric measure spaces which appear frequently in the form of random trees in probability theory. Prominent examples are random binary search trees as a special case of random recursive trees (\cite{DrmotaHwang2005}), ultra-metric structures in spin-glasses (see, for example, \cite{BolthausenKistler2006,MPV87}), and coalescent processes in population genetics (for example, \cite{Hudson1990, Evans2000}). Of special interest is the continuum random tree, introduced in \cite{Ald1993}, which is related to several objects, for example, Galton-Watson trees, spanning trees and Brownian excursions. Moreover, examples of Markov chains with values in finite trees are the Aldous-Broder Markov chain, which is related to spanning trees (\cite{Aldous1990}), growing Galton-Watson trees, and tree-bisection and reconnection, which is a method to search through tree space in phylogenetic reconstruction (see e.g., \cite{Felsenstein2003}). Because of the exponential growth of the state space with an increasing number of vertices, tree-valued Markov chains are, even though easy to construct by standard theory, hard to analyze for their qualitative properties. It therefore seems reasonable to pass to a continuum limit, to construct certain limit dynamics, and to study them with methods from stochastic analysis.
We will apply this approach in the forthcoming paper \cite{GrePfaWin2006} to trees encoding genealogical relationships in exchangeable models of populations of constant size. The result will be the tree-valued Fleming-Viot dynamics. For this purpose it is necessary to develop systematically the topological properties of the state space and the corresponding convergence in distribution. The present paper focuses on these topological properties. As one passes from finite trees to ``infinite'' trees, the necessity arises to equip the tree with a probability measure which allows one to sample typical finite subtrees. In \cite{Ald1993}, Aldous discusses a notion of convergence in distribution of a ``consistent'' family of finite random trees towards a certain limit: the continuum random tree. In order to define convergence, Aldous codes trees as separable and complete metric spaces, embedded into $\ell_1^+$ and equipped with a probability measure, whose metrics satisfy certain special properties characterizing them as trees. In this setting finite trees, i.e., trees with finitely many leaves, are always equipped with the uniform distribution on the set of leaves. The idea of convergence in distribution of a ``consistent'' family of finite random trees follows Kolmogorov's theorem, which gives the characterization of convergence of $\R$-indexed stochastic processes with regular paths. That is, a sequence has a unique limit provided that a tightness condition holds on path space and the ``finite-dimensional distributions'' converge. The analogs of finite-dimensional distributions are ``subtrees spanned by finitely many randomly chosen leaves'', and the tightness criterion is built on characterizations of compact sets in $\ell^+_1$. Aldous's notion of convergence has been successful for the purpose of rescaling a given family of trees and showing convergence in distribution towards a specific limit random tree.
For example, Aldous shows that suitably rescaled families of critical Galton-Watson trees with finite offspring variance, conditioned to have total population size $N$, converge as $N\to\infty$ to the Brownian continuum random tree, i.e., the $\R$-tree associated with a Brownian excursion. Furthermore, Aldous constructs the genealogical tree of a resampling population as a metric measure space associated with the Kingman coalescent, as the limit of $N$-coalescent trees with weight $1/N$ on each of their leaves. However, Aldous's ansatz to view trees as closed subsets of $\ell_1^+$, and thereby using a very particular embedding for the construction of the topology, proved neither easy nor elegant to work with once one wants to construct tree-valued limit dynamics (see, for example, \cite{EvaPitWin2006}, \cite{EvaWin2006} and \cite{GrePfaWin2006}). More recently, isometry classes of $\R$-trees, i.e., a particular class of metric spaces, were introduced, and a means of measuring the distance between two (isometry classes of) metric spaces was provided, based on an ``optimal matching'' of the two spaces, yielding the Gromov-Hausdorff metric (see, for example, Chapter 7 in \cite{BurBurIva01}). The main emphasis of the present paper is to exploit Aldous's philosophy of convergence without using Aldous's particular embedding. That is, we equip the space of separable and complete real trees which are equipped with a probability measure with the following topology: \begin{itemize} \item A sequence of trees (equipped with a probability measure) converges to a limit tree (equipped with a probability measure) if and only if all randomly sampled finite subtrees converge to the corresponding limit subtrees. The resulting topology is referred to as the {\em Gromov-weak topology} (compare Definition~\ref{D:00}).
\end{itemize} Since the construction of the topology works not only for tree-like metric spaces, but also for the space of (measure preserving isometry classes of) metric measure spaces, we formulate everything within this framework. \begin{itemize} \item We will see that the Gromov-weak topology on the space of metric measure spaces is Polish (Theorem~\ref{T:01}). \end{itemize} In fact, we metrize the space of metric measure spaces equipped with the Gromov-weak topology by the {\em Gromov-Prohorov metric}, which combines the two concepts of metrizing the space of metric spaces and the space of probability measures on a given metric space in a straightforward way. Moreover, we present a number of equivalent metrics which might be useful in different contexts. This then allows us to discuss convergence of random variables taking values in that space. \begin{itemize} \item We next characterize compact sets (Theorem \ref{T:Propprec} combined with Theorem~\ref{T:05}) and tightness (Theorem \ref{T:PropTight} combined with Theorem~\ref{T:05}) via quantities which are reasonably easy to compute. \item We then illustrate with the example of the $\Lambda$-coalescent tree (Theorem \ref{T:04}) how the tightness characterization can be applied. \end{itemize} We remark that topologies on metric measure spaces are considered in detail in Section $3\tfrac 12$ of \cite{Gromov2000}. We are aware that several of our results (in particular, Theorems~\ref{T:01}, \ref{T:Propprec} and~\ref{T:05}) are stated in \cite{Gromov2000} in a different set-up. While Gromov focuses on geometric aspects, we provide the tools necessary to do probability theory on the space of metric measure spaces. See Remark \ref{Rem:Gromov} for more details on the connection to Gromov's work. Further related topologies on particular subspaces of isometry classes of complete and separable metric spaces have already been considered in \cite{Stu2006} and \cite{EvaWin2006}.
Convergence in these two topologies implies convergence in the {Gromov-weak topology} but not vice versa. \subsection*{Outline} The rest of the paper is organized as follows. In the next two sections we formulate the main results. In Section~\ref{S:mms} we introduce the space of metric measure spaces equipped with the Gromov-weak topology and characterize its compact sets. In Section~\ref{S:tigght} we discuss convergence in distribution and characterize tightness. We then illustrate the main results introduced so far with the example of the metric measure tree associated with genealogies generated by the infinite $\Lambda$-coalescent in Section~\ref{S:example}. Sections~\ref{S:GPW} through~\ref{S:equivtop} are devoted to the proofs of the theorems. In Section~\ref{S:GPW} we introduce the Gromov-Prohorov metric as a candidate for a complete metric which generates the Gromov-weak topology and show that the generated topology is separable. As a technical preparation, we collect results on the modulus of mass distribution and the distance distribution (see Definition~\ref{D:modul}) in Section~\ref{S:massDist}. In Sections~\ref{S:compact} and~\ref{S:PropTight} we give characterizations of pre-compactness and tightness for the topology generated by the Gromov-Prohorov metric. In Section \ref{S:equivtop} we prove that the topology generated by the Gromov-Prohorov metric coincides with the Gromov-weak topology. Finally, in Section~\ref{S:equivMetrics} we provide several other metrics that generate the Gromov-weak topology. \section{Metric measure spaces} \label{S:mms} As usual, given a topological space $(X,{\mathcal O})$, we denote by ${\mathcal M}_1(X)$ the space of all probability measures on $X$ equipped with the Borel-$\sigma$-algebra ${\mathcal B}(X)$. Recall that the support of $\mu$, $\mathrm{supp}(\mu)$, is the smallest closed set $X_0\subseteq X$ such that $\mu(X\setminus X_0)=0$.
The push forward of $\mu$ under a measurable map $\varphi$ from $X$ into another metric space $(Z,r_Z)$ is the probability measure $\varphi_\ast\mu\in{\mathcal M}_1(Z)$ defined by \be{forw} \varphi_\ast\mu(A) := \mu\big(\varphi^{-1}(A)\big), \end{equation} for all $A\in{\mathcal B}(Z)$. Weak convergence in $\mathcal M_1(X)$ is denoted by $\Longrightarrow$. \sm In the following we focus on complete and separable metric spaces. \begin{definition}[Metric measure space] A {\em metric measure space} is a complete and separable metric space $(X,r)$ which is equipped with a probability measure $\mu\in{\mathcal M}_1(X)$. We write $\mathbb{M}$ \label{D:000} for the space of measure-preserving isometry classes of complete and separable metric measure spaces, where we say that $(X,r,\mu)$ and $(X^\prime,r^\prime,\mu^\prime)$ are {measure-preserving} isometric if there exists an isometry $\varphi$ between the supports of $\mu$ on $(X,r)$ and of $\mu'$ on $(X^\prime,r')$ such that $\mu'=\varphi_\ast\mu$. It is clear that the property of being measure-preserving isometric is an equivalence relation. We abbreviate $\mathcal X = (X,r,\mu)$ for a whole isometry class of metric spaces whenever no confusion seems to be possible.\sm \end{definition}\sm \begin{remark} \begin{enumerate} \item[{}] \item[(i)] Metric measure spaces, or short {\em mm-spaces}, are discussed in \cite{Gromov2000} in detail. Therefore they are sometimes also referred to as {\em Gromov metric triples} (see, for example, \cite{Ver1998}). \label{Rem:07} \item[(ii)] We have to be careful to deal with \emph{sets} in the sense of the Zermelo-Fraenkel axioms. The reason is that we will show in Theorem~\ref{T:01} that $\mathbb{M}$ can be metrized, say by $\mathrm{d}$, such that $(\mathbb{M},\mathrm{d})$ is complete and separable. Hence if $\mathbb{P}\in{\mathcal M}_1(\mathbb{M})$ then the measure preserving isometry class represented by $(\mathbb{M},\mathrm{d},\mathbb{P})$ yields an element in $\mathbb{M}$. 
The way out is to define $\mathbb{M}$ as the space of measure preserving isometry classes of those metric spaces equipped with a probability measure whose elements are not themselves metric spaces. Using this restriction we avoid the usual pitfalls which lead to Russell's antinomy. $\qed$ \end{enumerate} \end{remark}\sm To be in a position to formalize that, for a sequence of metric measure spaces, all finite subspaces sampled by the measures sitting on the corresponding metric spaces converge, we next introduce the algebra of polynomials on $\mathbb{M}$. \begin{definition}[Polynomials] A function $\Phi=\Phi^{n,\phi}:\mathbb{M}\to\R$ \label{D:01} is called a {\em polynomial (of degree $n$ with respect to the test function $\phi$)} on $\mathbb{M}$ if and only if $n\in\N$ is the minimal number such that there exists a bounded continuous function $\phi:\,[0,\infty)^{\binom{n}{2}}\to\R$ such that \be{pp3} \begin{aligned} \Phi\big((X,r,\mu)\big)= \int\mu^{\otimes n}(\mathrm{d}(x_1,...,x_n))\, \phi\big((r(x_{i},x_{j}))_{1\le i<j\le n}\big), \end{aligned} \end{equation} where $\mu^{\otimes n}$ is the $n$-fold product measure of $\mu$. Denote by $\Pi$ the algebra of all polynomials on $\mathbb M$. \end{definition}\sm \begin{example} In future work, we are particularly interested in tree-like metric spaces, i.e., ultra-metric spaces and $\R$-trees. In this setting, functions of the form \label{Exp:00} \eqref{pp3} can be, for example, the mean total length or the averaged diameter of the sub-tree spanned by $n$ points sampled independently according to $\mu$ from the underlying tree. $\qed$\end{example}\sm The next example illustrates that one cannot, of course, separate metric measure spaces by polynomials of degree $2$ only. \begin{example} Consider the following two metric measure spaces.
\label{Exp:01} \beginpicture \setcoordinatesystem units <1cm,1cm> \setplotarea x from -1 to 10, y from 1 to 3.7 \plot 1 3 2 2 3 3 2 2 / \put{$\frac{1}{2}$} [cC] at .6 3 \put{$\frac{1}{2}$} [cC] at 3.4 3 \put{$\bullet$} [cC] at 1 3 \put{$\bullet$} [cC] at 3 3 \put{$\mathcal X$} [cC] at 1.7 1.5 \plot 7 3 8 2 9 3 8 2 8 3 / \put{$\frac{2-\sqrt{3}}{6}$} [cC] at 6.4 3 \put{$\frac{2+\sqrt{3}}{6}$} [cC] at 9.6 3 \put{$\frac{1}{3}$} [cC] at 8.3 3 \put{$\bullet$} [cC] at 7 3 \put{$\bullet$} [cC] at 9 3 \put{$\bullet$} [cC] at 8 3 \put{$\mathcal Y$} [cC] at 7.7 1.5 \endpicture Assume that in both spaces the mutual distances between different points are $1$. In both cases, the distribution of the distance between two independently sampled points equals $\tfrac{1}{2}\delta_0+\tfrac{1}{2}\delta_{1}$, and hence all polynomials of degree $n=2$ agree. But obviously, $\mathcal X$ and $\mathcal Y$ are not measure preserving isometric. $\qed$\end{example}\sm The first key observation is that the algebra of polynomials is a rich enough subclass to determine a metric measure space. \begin{proposition}[Polynomials separate points] The algebra $\Pi$ of \label{P:00} poly\-nomials separates points in $\mathbb{M}$. \end{proposition}\sm We need the useful notion of the distance matrix distribution. \begin{definition}[Distance matrix distribution] \label{def:distMatDistr} Let $\mathcal X=(X,r,\mu)\in\mathbb M$, and consider the space of infinite (pseudo-)distance matrices \be{mutdis} \mathbb R^{\rm{met}} := \big\{(r_{ij})_{1\leq i<j<\infty}:\, r_{ij} + r_{jk}\geq r_{ik},\,\forall\,1\leq i<j<k<\infty\big\}. \end{equation} Define the map $\iota^{\mathcal X}:\,X^\N\to\mathbb R^{\rm{met}}$ by \be{e:iota} \iota^{\mathcal X}\big(x_1,x_2,...\big):=\big(r(x_i,x_j)\big)_{1\le i<j<\infty}, \end{equation} and the \emph{distance matrix distribution} of $\mathcal X$ by \be{distMat} \nu^{\mathcal X}:=(\iota^{\mathcal X})_\ast\mu^{\otimes \N}.
\end{equation} \end{definition} \sm Note that for $\mathcal X\in\mathbb M$ and $\Phi$ of the form \eqref{pp3}, we have that \be{ppp3} \Phi(\mathcal X) = \int \nu^{\mathcal X}\big({\rm d}(r_{ij})_{1\leq i<j}\big) \phi\big((r_{ij})_{1\leq i<j\leq n}\big). \end{equation} \begin{proof}[Proof of Proposition \ref{P:00}] Let $\mathcal X_\ell=(X_\ell,r_\ell,\mu_\ell)\in\mathbb{M}$, $\ell=1,2$, and assume that $\Phi(\mathcal X_1)=\Phi(\mathcal X_2)$, for all $\Phi\in\Pi$. The algebra $\{\phi\in \mathcal C_{\mathrm{b}}(\mathbb R^{\binom{n}{2}});\, n\in\mathbb N\}$ is separating in $\mathcal M_1(\mathbb R^{\text{met}})$ and so $\nu^{\mathcal X_1}=\nu^{\mathcal X_2}$ by \eqref{ppp3}. Applying Gromov's Reconstruction theorem for mm-spaces (see Paragraph~$3\tfrac12.5$ in \cite{Gromov2000}), we find that ${\mathcal X}_1={\mathcal X}_2$. \end{proof}\sm We are now in a position to define the Gromov-weak topology. \begin{definition}[Gromov-weak topology] A sequence $({\mathcal X}_n)_{n\in\N}$ is said to converge Gromov-weakly to ${\mathcal X}$ in \label{D:00} $\mathbb{M}$ if and only if $\Phi({\mathcal X}_n)$ converges to $\Phi({\mathcal X})$ in $\R$, for all polynomials $\Phi\in\Pi$. We call the corresponding topology ${\mathcal O}_{\mathbb{M}}$ on $\mathbb{M}$ the Gromov-weak topology. \end{definition}\sm The following result ensures that the state space is suitable for doing probability theory. \begin{theorem} The space $({\mathbb{M}},{\mathcal O}_{\mathrm{\mathbb{M}}})$ is Polish. \label{T:01} \end{theorem}\sm In order to later obtain tightness criteria for laws of random elements in $\mathbb{M}$, we need a characterization of the compact sets of $(\mathbb{M},{\mathcal O}_{\mathbb{M}})$. Informally, a subset of $\mathbb{M}$ will turn out to be pre-compact iff the corresponding probability measures put most of their mass on subspaces of uniformly bounded diameter, and if the contribution of points which do not carry much mass in their vicinity is small.
These two criteria lead to the following definitions. \begin{definition}[Distance distribution and Modulus of mass distribution] Let $\mathcal X=(X,r,\mu)\in\mathbb{M}$. \label{D:modul} \begin{enumerate} \item[(i)] The \emph{distance distribution}, which is an element in ${\mathcal M}_1([0,\infty))$, is given by $w_{\mathcal X} := r_\ast \mu^{\otimes 2}$, i.e., \be{distInt} \begin{aligned} w_{\mathcal X}(\cdot) := \mu^{\otimes 2}\big\{(x,x'): r(x,x') \in \boldsymbol{\cdot}\big\}. \end{aligned} \end{equation} \item[(ii)] For $\delta>0$, define the \emph{modulus of mass distribution} as \be{modul} v_\delta(\mathcal X) := \inf\Big\{\varepsilon>0:\, \mu\big\{x\in X:\, \mu(B_\varepsilon(x))\le\delta\big\}\le\varepsilon\Big\} \end{equation} where $B_\varepsilon(x)$ is the open ball with radius $\varepsilon$ and center $x$. \end{enumerate} \end{definition}\sm \begin{remark}\label{Rem:08} Observe that $w_{\mathcal X}$ and $v_\delta$ are well-defined because they are constant on isometry classes of a given metric measure space. \end{remark}\sm The next result characterizes pre-compactness in $(\mathbb{M},{\mathcal O}_{\mathbb{M}})$. \begin{theorem}[Characterization of pre-compactness] A set $\Gamma\subseteq\mathbb{M}$ is pre-compact in the Gromov-weak topology if and only if the following hold. \label{T:Propprec} \begin{itemize} \item[(i)] The family $\{w_{\mathcal X}:\,\mathcal X\in\Gamma\}$ is tight. \item [(ii)] For all $\varepsilon>0$ there exists a $\delta=\delta(\varepsilon)>0$ such that \be{e:cpm} \sup_{{\mathcal X}\in\Gamma} v_\delta(\mathcal X) < \varepsilon. \end{equation} \end{itemize} \end{theorem}\sm \begin{remark}\label{Rem:05} If $\Gamma=\{\mathcal X_1, \mathcal X_2,...\}$ then we can replace $\sup$ by $\limsup$ in \eqref{e:cpm}. $\qed$ \end{remark}\sm \begin{example} In the following we illustrate, by two counter-examples, the two requirements given in Theorem \ref{T:Propprec} for a family in $\mathbb{M}$ to be pre-compact.
\label{Exp:star0} \begin{itemize} \item[(i)] Consider the isometry classes of the metric measure spaces ${\mathcal X}_n:=(\{1,2\},r_n(1,2)=n,\mu_n\{1\}=\mu_n\{2\}=\tfrac12)$. A potential limit object would be a metric space with masses $\tfrac{1}{2}$ within distance infinity. This clearly does not exist. Indeed, the family $\{w_{\mathcal X_n}=\tfrac{1}{2}\delta_0+\tfrac{1}{2}\delta_n;\,n\in\N\}$ is not tight, and hence $\{{\mathcal X}_n;\,n\in\N\}$ is not pre-compact in $\mathbb{M}$ by Condition (i) of Theorem \ref{T:Propprec}. \item[(ii)] Consider the isometry classes of the metric measure spaces $\mathcal X_n=(X_n, r_n, \mu_n)$ given for $n\in\N$ by \be{eq:Exp:star:1} X_n:=\{1,...,2^n\}, \quad r_n(x,y):= \mathbf{1}\{x\neq y\}, \quad \mu_n:=2^{-n}\sum_{i=1}^{2^n} \delta_i, \end{equation} i.e., ${\mathcal X}_n$ consists of $2^n$ points of mutual distance $1$ and is equipped with a uniform measure on all points. \beginpicture \setcoordinatesystem units <.8cm,.8cm> \setplotarea x from -1 to 12, y from -0.5 to 5 \plot 1 1 1 4 / \put{$\bullet$}[cC] at 1 1 \put{$\bullet$}[cC] at 1 4 \put{$\frac 12$} [lC] at 1.2 1.1 \put{$\frac 12$} [lC] at 1.2 3.9 \put{$\mathcal X_1$} [lC] at 1 0.3 \plot 5 1 5 4 / \plot 3.5 2.5 6.5 2.5 / \put{$\bullet$}[cC] at 5 1 \put{$\bullet$}[cC] at 5 4 \put{$\bullet$}[cC] at 3.5 2.5 \put{$\bullet$}[cC] at 6.5 2.5 \put{$\tfrac 14$} [lC] at 5.1 1.1 \put{$\tfrac 14$} [lC] at 5.1 3.9 \put{$\tfrac 14$} [cC] at 3.2 2.5 \put{$\tfrac 14$} [cC] at 6.8 2.5 \put{$\mathcal X_2$} [lC] at 5 0.3 \plot 10 1 10 4 / \plot 8.5 2.5 11.5 2.5 / \plot 8.5 1 11.5 4 / \plot 8.5 4 11.5 1 / \put{$\bullet$}[cC] at 10 1 \put{$\bullet$}[cC] at 10 4 \put{$\bullet$}[cC] at 8.5 2.5 \put{$\bullet$}[cC] at 11.5 2.5 \put{$\bullet$}[cC] at 11.5 1 \put{$\bullet$}[cC] at 11.5 4 \put{$\bullet$}[cC] at 8.5 1 \put{$\bullet$}[cC] at 8.5 4 \put{$\tfrac 18$} [cC] at 10.25 1.1 \put{$\tfrac 18$} [cC] at 10.25 3.9 \put{$\tfrac 18$} [cC] at 8.2 2.3 \put{$\tfrac 18$} [cC] at 11.8 2.3 \put{$\tfrac 
18$} [cC] at 11.8 3.9 \put{$\tfrac 18$} [cC] at 11.8 1.1 \put{$\tfrac 18$} [cC] at 8.2 1.1 \put{$\tfrac 18$} [cC] at 8.2 3.9 \put{$\mathcal X_3$} [lC] at 10 0.3 \put{$\mathbf{\cdots}$} [lC] at 14 2.5 \endpicture A potential limit object would consist of infinitely many points of mutual distance $1$ with a uniform measure. Such a space does not exist. Indeed, notice that for $\delta>0$, \be{Exp:star:tight2} v_\delta(\mathcal X_n) = \begin{cases} 0, & \delta< 2^{-n}, \\ 1, & \delta \ge 2^{-n},\end{cases} \end{equation} so $\sup_{n\in\mathbb N} v_\delta(\mathcal X_n) = 1$, for all $\delta>0$. Hence $\{{\mathcal X}_n;\,n\in\N\}$ does not fulfil Condition (ii) of Theorem~\ref{T:Propprec}, and is therefore not pre-compact. $\qed$\end{itemize} \end{example}\sm \section{Distributions of random metric measure spaces} \label{S:tigght} From Theorem~\ref{T:01} and Definition \ref{D:00} we immediately conclude the characterization of weak convergence for a sequence of probability measures on $\mathbb{M}$. \begin{cor}[Characterization of weak convergence] A sequence $(\mathbb{P}_n)_{n\in\N}$ in ${\mathcal M}_1({\mathbb{M}})$ converges weakly w.r.t. the Gromov-weak topology if and only if \begin{itemize} \item[(i)] the family $\{\mathbb{P}_n;\,n\in\N\}$ is relatively compact in ${\mathcal M}_1(\mathbb{M})$, and \item[(ii)] for all polynomials $\Phi\in\Pi$, \label{C:02} $(\mathbb{P}_n\big[\Phi\big])_{n\in\N}$ converges in $\R$. \end{itemize} \end{cor}\sm \begin{proof} The ``only if'' direction is clear, as polynomials are bounded and continuous functions by definition. To see the converse, recall from Lemma 3.4.3 in \cite{EthierKurtz86} that given a relative compact sequence of probability measures, each separating family of bounded continuous functions is convergence determining. 
\end{proof}\sm While Condition (ii) of the characterization of convergence given in Corollary~\ref{C:02} can be checked in particular examples, we still need a manageable characterization of tightness on $\CM_1(\mathbb{M})$, which we can deduce from Theorem~\ref{T:Propprec}. It will be given in terms of the distance distribution and the modulus of mass distribution. \begin{theorem}[Characterization of tightness] A set $\mathbf{A}\subseteq{\mathcal M}_1(\mathbb{M})$ is tight if and only if the following holds: \label{T:PropTight} \begin{itemize} \item[(i)] The family $\{\mathbb P[w_\mathcal X]: \mathbb P\in \mathbf A\}$ is tight in ${\mathcal M}_1(\R)$. \item [(ii)] For all $\varepsilon>0$ there exists a $\delta=\delta(\varepsilon)>0$ such that \be{e:tight} \sup_{\mathbb{P}\in\mathbf{A}}\mathbb{P}\big[v_\delta(\mathcal X)\big] < \varepsilon. \end{equation} \end{itemize} \end{theorem}\sm \begin{remark}\label{Rem:06} \begin{enumerate} \item[{}] \item[(i)] Using the properties of $v_\delta$ from Lemmata \ref{l:usef} and \ref{l:dvelta}, it can be seen that \eqref{e:tight} can be replaced either by \begin{align}\label{e:tight1a} \sup_{\mathbb P\in\mathbf A} \mathbb P\{v_{\delta}(\mathcal X)\geq \varepsilon\}< \varepsilon \end{align} or \begin{align}\label{e:tight1b} \sup_{\mathbb P\in\mathbf A} \mathbb P[\mu\{x: \mu(B_{\varepsilon}(x)) \leq \delta\}]< \varepsilon. \end{align} \item[(ii)] If $\mathbf{A}=\{\mathbb{P}_1, \mathbb{P}_2,...\}$ then we can replace $\sup$ by $\limsup$ in \eqref{e:tight}, \eqref{e:tight1a} and \eqref{e:tight1b}. $\qed$ \end{enumerate} \end{remark}\sm The usage of Theorem \ref{T:PropTight} will be illustrated with the example of the {\em $\Lambda$-coalescent measure tree} constructed in the next section, and with examples of trees corresponding to spatially structured coalescents (\cite{GreLimWin2006}) and of evolving coalescents (\cite{GrePfaWin2006}) in forthcoming work.
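For finite metric measure spaces, the polynomials of Definition~\ref{D:01} and the modulus of mass distribution of Definition~\ref{D:modul} reduce to finite computations. The following sketch (our own illustration, not part of the paper; spaces are encoded as distance matrices with weight vectors) verifies Example~\ref{Exp:01}, where $\mathcal X$ and $\mathcal Y$ agree on all polynomials of degree $2$ but are separated in degree $3$, and reproduces \eqref{Exp:star:tight2} for $n=3$:

```python
import itertools
import math

def poly(mu, dist, n, phi):
    # Phi((X,r,mu)) = int mu^{(x) n} phi((r(x_i,x_j))_{i<j}): for a finite
    # space this is a weighted sum over all n-tuples of points (with repeats).
    total = 0.0
    for tup in itertools.product(range(len(mu)), repeat=n):
        weight = math.prod(mu[i] for i in tup)
        r = [dist[tup[i]][tup[j]] for i in range(n) for j in range(i + 1, n)]
        total += weight * phi(r)
    return total

def v_delta(mu, dist, delta, eps_grid):
    # modulus of mass distribution:
    # v_delta = inf{eps > 0 : mu{x : mu(B_eps(x)) <= delta} <= eps}
    for eps in eps_grid:  # ascending grid; return first admissible eps
        bad = sum(mu[i] for i in range(len(mu))
                  if sum(mu[j] for j in range(len(mu)) if dist[i][j] < eps) <= delta)
        if bad <= eps:
            return eps
    return float("inf")

# Example Exp:01: all mutual distances between different points equal 1.
muX, dX = [0.5, 0.5], [[0, 1], [1, 0]]
s3 = math.sqrt(3)
muY = [(2 - s3) / 6, (2 + s3) / 6, 1 / 3]
dY = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
deg2 = lambda r: r[0]                 # phi(r_12) = r_12
deg3 = lambda r: r[0] * r[1] * r[2]   # phi = r_12 * r_13 * r_23

p2X, p2Y = poly(muX, dX, 2, deg2), poly(muY, dY, 2, deg2)  # both 1/2
p3X, p3Y = poly(muX, dX, 3, deg3), poly(muY, dY, 3, deg3)  # 0 vs 1/18

# Example Exp:star0 (ii) with n = 3: 2^3 points of mutual distance 1 and
# uniform weights; v_delta jumps from (near) 0 to 1 at delta = 2^{-3}.
m = 8
mu3 = [1 / m] * m
d3 = [[0 if i == j else 1 for j in range(m)] for i in range(m)]
grid = [k / 100 for k in range(1, 201)]
v_small = v_delta(mu3, d3, 0.05, grid)   # delta < 1/8: smallest grid point
v_large = v_delta(mu3, d3, 0.2, grid)    # delta >= 1/8: equals 1
```

The degree-$3$ value for $\mathcal Y$ is $6\mu_1\mu_2\mu_3=\tfrac{1}{18}$, while it vanishes for $\mathcal X$: any three points sampled from a two-point space contain a repetition, so one of the three distances is $0$.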
\begin{remark} Starting with Theorem~\ref{T:PropTight} one easily characterizes \label{Rem:00} tightness for the stronger topology given in \cite{Stu2006} based on certain $L^2$-Wasserstein metrics if one requires in addition to (i) and (ii) uniform integrability of sampled mutual distances. Similarly, with Theorem~\ref{T:PropTight} one characterizes tightness in the space of measure preserving isometry classes of metric spaces equipped with a finite measure (rather than a probability measure) if one requires in addition tightness of the family of total masses (compare also with Remark \ref{Rem:01}(ii)). $\qed$\end{remark}\sm \section{Example: $\Lambda$-coalescent measure trees} \label{S:example} In this section we apply the theory of metric measure spaces to a class of genealogies which arise in population models. Often such genealogies are represented by coalescent processes, and we focus on $\Lambda$-coalescents introduced in \cite{Pit1999} (see also \cite{Sag1999}). The family of $\Lambda$-coalescents appears in the description of the genealogies of population models with evolution based on resampling and branching. Such coalescent processes have since been the subject of many papers (see, for example, \cite{MoehleSagitov2001}, \cite{BertoinLeGall2005}, \cite{SevenPeople2005}, \cite{LimicSturm2006}, \cite{BereBereSchweinsberg2006}). In resampling models where the offspring variance of an individual during a reproduction event is finite, the Kingman coalescent appears as a special $\Lambda$-coalescent. The fact that general $\Lambda$-coalescents allow for multiple collisions is reflected in an infinite variance of the offspring distribution. Furthermore, a $\Lambda$-coalescent is, up to a time change, dual to the process of relative frequencies of families of a Galton-Watson process with possibly infinite variance offspring distribution (compare \cite{SevenPeople2005}).
Our goal here is to decide for which $\Lambda$-coalescents the genealogies are described by a metric measure space. We start with a quick description of $\Lambda$-coalescents. Recall that a partition of a set $S$ is a collection $\{A_\lambda\}$ of pairwise disjoint subsets of $S$, also called {\em blocks}, such that $S=\cup_\lambda A_\lambda$. Denote by $\mathbb{S}_\infty$ the collection of partitions of $\N:=\{1,2,3,...\}$, and for all $n\in\N$, by $\mathbb{S}_n$ the collection of partitions of $\{1,2,3,...,n\}$. Each partition $\mathcal{P}\in\mathbb{S}_\infty$ defines an equivalence relation $\sim_{\mathcal{P}}$ by $i\sim_{\mathcal{P}}j$ if and only if there exists a partition element $\pi\in\mathcal{P}$ with $i,j\in\pi$. Write $\rho_n$ for the restriction map from $\mathbb{S}_\infty$ to $\mathbb{S}_n$. We say that a sequence $({\mathcal P}_k)_{k\in\N}$ converges in $\mathbb{S}_\infty$ if for all $n\in\N$, the sequence $(\rho_{n}{\mathcal P}_k)_{k\in\N}$ converges in $\mathbb{S}_n$ equipped with the discrete topology. We are looking for a strong Markov process $\xi$ starting in ${\mathcal P}_0\in\mathbb{S}_\infty$ such that for all $n\in\N$, the restricted process $\xi_n:=\rho_n\circ\xi$ is an $\mathbb{S}_n$-valued Markov chain which starts in $\rho_n\mathcal{P}_0\in\mathbb{S}_n$, and given that $\xi_n(t)$ has $b$ blocks, each $k$-tuple of blocks of $\xi_n(t)$ merges into a single block at rate $\lambda_{b,k}$. Pitman \cite{Pit1999} showed that such a process exists and is unique (in law) if and only if \be{e:lambda} \lambda_{b,k} := \int^1_0\Lambda(\mathrm{d}x)\,x^{k-2}(1-x)^{b-k} \end{equation} for some non-negative and finite measure $\Lambda$ on the Borel subsets of $[0,1]$. Let therefore $\Lambda$ be a non-negative finite measure on $\CB([0,1])$ and $\mathcal{P}\in\mathbb{S}_\infty$.
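As a quick sanity check on \eqref{e:lambda} (an illustration of ours, not part of the original text): for the Kingman case $\Lambda=\delta_0$, evaluating the integrand at $x=0$ gives $\lambda_{b,2}=1$ and $\lambda_{b,k}=0$ for $k>2$, while for the Bolthausen-Sznitman case $\Lambda(\mathrm{d}x)=\mathrm{d}x$ the integral is a Beta integral, $\lambda_{b,k}=B(k-1,b-k+1)=\frac{(k-2)!\,(b-k)!}{(b-1)!}$. The following Python sketch (hypothetical function names) compares this closed form with a midpoint Riemann sum approximation of the integral.

```python
import math

def rate_closed_form(b, k):
    # For Lambda(dx) = dx (Bolthausen-Sznitman), (e:lambda) is a Beta integral:
    # lambda_{b,k} = B(k-1, b-k+1) = (k-2)! (b-k)! / (b-1)!
    return math.factorial(k - 2) * math.factorial(b - k) / math.factorial(b - 1)

def rate_numeric(b, k, n=100000):
    # Midpoint Riemann sum of the integral in (e:lambda) with Lambda(dx) = dx
    h = 1.0 / n
    return sum(((i + 0.5) * h) ** (k - 2) * (1.0 - (i + 0.5) * h) ** (b - k)
               for i in range(n)) * h
```

For instance, $\lambda_{3,2}=\int_0^1(1-x)\,\mathrm{d}x=\tfrac12$ and $\lambda_{4,3}=\int_0^1 x(1-x)\,\mathrm{d}x=\tfrac16$, in agreement with the closed form.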
We denote by $\mathbb{P}^{\Lambda,\mathcal{P}}$ the probability distribution governing $\xi$ with $\xi(0)=\mathcal{P}$ on the space of cadlag paths with the Skorohod topology. \begin{example} If we choose \be{e:triv} \mathcal{P}^0 := \big\{\{1\},\{2\},...\big\}, \end{equation} and $\Lambda=\delta_0$ or $\Lambda(\mathrm{d}x)=\mathrm{d}x$, then $\mathbb{P}^{\Lambda,\mathcal{P}^0}$ is the {\em Kingman} and the {\em Bolthausen-Sznitman} coalescent, respectively. $\qed$\end{example}\sm For each non-negative and finite measure $\Lambda$, all initial partitions $\mathcal{P}\in\mathbb{S}_{\infty}$ and $\mathbb{P}^{\Lambda,\mathcal{P}}$-almost all $\xi$, there is a (random) metric $r^{\xi}$ on $\N$ defined by \be{e:lambda2} r^{\xi}\big(i,j\big) := \inf\big\{t\ge 0:\,i\sim_{\xi(t)}j\big\}. \end{equation} That is, for a realization $\xi$ of the $\Lambda$-coalescent, $r^{\xi}\big(i,j\big)$ is the time $i$ and $j$ need to coalesce. Notice that $r^{\xi}$ is an {\em ultra-metric} on $\N$, almost surely, i.e., for all $i,j,k\in\N$, \be{e:ultra} r^{\xi}(i,j) \le r^{\xi}(i,k)\vee r^{\xi}(k,j). \end{equation} Let $(L^\xi,r^{\xi})$ denote the completion of $(\N,r^{\xi})$. Clearly, the extension of $r^{\xi}$ to $L^\xi$ is also an ultra-metric. Recall that ultra-metric spaces are associated with tree-like structures. The main goal of this section is to introduce the {\em $\Lambda$-coalescent measure tree} as the metric space $(L^\xi,r^{\xi})$ equipped with the ``uniform distribution''. Notice that since the Kingman coalescent is known to ``come down immediately to finitely many partition elements'', the corresponding metric space is almost surely compact (\cite{Evans2000}). Even though there is no abstract concept of the ``uniform distribution'' on compact spaces, the reader may find it not surprising that in particular examples one can easily make sense out of this notion by approximation.
We will see that, for $\Lambda$-coalescents, under an additional assumption on $\Lambda$, one can extend the uniform distribution to locally compact metric spaces. Within this class falls, for example, the Bolthausen-Sznitman coalescent, which is known to have infinitely many partition elements for all times, and whose corresponding metric space is therefore not compact. Define $H_n$ to be the map which takes a realization of the $\mathbb{S}_\infty$-valued coalescent and maps it to (an isometry class of) a metric measure space as follows: \be{e:Hn} H_n:\,\xi\mapsto \Big(L^\xi,r^{\xi},\mu^{\xi}_n:=\tfrac{1}{n}\sum\nolimits_{i=1}^n\delta_i\Big). \end{equation} Put then for given ${\mathcal P}_0\in\mathbb{S}_\infty$, \be{e:Qn} \mathbb{Q}^{\Lambda,n} := \big(H_n\big)_\ast\mathbb{P}^{\Lambda,\mathcal{P}_0}. \end{equation} Next we give the characterization of existence and uniqueness of the $\Lambda$-coalescent measure tree. \begin{theorem}[The $\Lambda$-coalescent measure tree] The family \label{T:04} $\{\mathbb{Q}^{\Lambda,n};\,n\in\N\}$ converges in the weak topology with respect to the Gromov-weak topology if and only if \be{e:lam} \int^1_0 \Lambda(\mathrm{d}x)\,x^{-1} = \infty. \end{equation} \end{theorem}\sm \begin{remark}[``Dust-free'' property] Notice first that Condition (\ref{e:lam}) is equivalent to the total coalescence rate of a given $\{i\}\in\mathcal{P}_0$ being infinite (compare with the proof of Lemma 25 in \cite{Pit1999}). By exchangeability and the de Finetti Theorem, the family $\{\tilde{f}(\pi);\,\pi\in\xi(t)\}$ of frequencies \be{e:freq} \tilde{f}(\pi) := \lim_{n\to\infty}\frac{1}{n}\#\big\{j\in\{1,...,n\}:\,j\in\pi\big\} \end{equation} exists for $\mathbb{P}^{\Lambda,\mathcal{P}_0}$-almost all $\pi\in\xi(t)$ and all $t>0$. Define $f:=(f(\pi);\,\pi\in\xi(t))$ to be the ranked rearrangement of $\{\tilde{f}(\pi);\,\pi\in\xi(t)\}$, meaning that the entries of the vector $f$ are non-increasing.
Let $\mathbf{P}^{\Lambda,\mathcal{P}_0}$ denote the probability distribution of $f$. Call the frequencies $f$ {\em proper} if $\sum_{i\ge 1}f(\pi_i)=1$. By Theorem~8 in \cite{Pit1999}, the $\Lambda$-coalescent has in the limit $n\to\infty$ proper frequencies if and only if Condition (\ref{e:lam}) holds. According to {\em Kingman's correspondence} (see, for example, Theorem~14 in \cite{Pit1999}), the distributions $\mathbb{P}^{\Lambda,\mathcal{P}_0}$ and $\mathbf{P}^{\Lambda,\mathcal{P}_0}$ determine each other uniquely. For $\mathcal{P}\in\mathbb{S}_\infty$ and $i\in\N$, let $\mathcal{P}^i:=\{j\in\N:\,i\sim_\mathcal{P} j\}$ denote the partition element in $\mathcal{P}$ which contains $i$. Then Condition (\ref{e:lam}) holds if and only if for all $t>0$, \be{zorn7} \mathbb{P}^{\Lambda,\mathcal{P}_0}\big\{ \tilde{f}\big((\xi(t))^1\big)=0\big\} = 0. \end{equation} The latter is often referred to as the {\em ``dust''-free} property. $\qed$\end{remark}\sm \begin{proof}[Proof of Theorem~\ref{T:04}] For {\em existence} we will apply the characterization of tightness as given in Theorem~\ref{T:PropTight}, and verify the two conditions. \sm (i) By definition, for all $n\in\N$, $\mathbb{Q}^{\Lambda,n}[w_{{\mathcal X}}]$ is exponentially distributed with parameter $\lambda_{2,2}$. Hence the family $\{\mathbb{Q}^{\Lambda,n}[w_{{\mathcal X}}];\,n\in\N\}$ is tight. \sm (ii) Fix $t\in (0,1)$. Then for all $\delta>0$, by the uniform distribution and exchangeability, \be{e:v1} \begin{aligned} &\mathbb Q^{\Lambda,n}\big[\mu\{x:\, \mu(B_t(x))\leq \delta\}\big] \\ &= \mathbb P^{\Lambda,\mathcal P_0}\big[\mu^\xi_n\big\{x\in L^\xi:\, \mu^\xi_n({B}_t(x))\leq\delta \big|x=1\big\}\big] \\ &= \mathbb P^{\Lambda,\mathcal P_0} \big\{\mu^\xi_n(B_t(1))\leq\delta\big\}. \end{aligned} \end{equation} By the de Finetti theorem, $\mu^\xi_n(B_t(1))\xrightarrow {n\to\infty}\tilde f\big ( (\xi(t))^1\big)$, $\mathbb P^{\Lambda, \mathcal P_0}$-almost surely.
Hence, dominated convergence yields \be{e:w2} \begin{aligned} \lim_{\delta\to 0}\lim_{n\to\infty} \mathbb Q^{\Lambda,n}\big[\mu\{x:\, \mu(B_t(x))\leq \delta\}\big] &= \lim_{\delta\to 0} \mathbb P^{\Lambda,\mathcal P_0}\big\{\tilde f((\xi(t))^1) \leq\delta\big\} \\ &= \mathbb P^{\Lambda,\mathcal P_0}\big\{\tilde f((\xi(t))^1) = 0\big\}. \end{aligned} \end{equation}\sm We have shown that Condition~\eqref{e:lam} is equivalent to~\eqref{zorn7}, and therefore, using \eqref{e:tight1b}, a limit of $\mathbb Q^{\Lambda, n}$ exists if and only if the ``dust-free'' property holds.\sm {\em Uniqueness} of the limit points follows from the projective property, i.e.\ restricting the observation to a tagged subset of initial individuals is the same as starting in this restricted initial state. \end{proof} \section{A complete metric: The Gromov-Prohorov metric} \label{S:GPW} In this section we introduce the Gromov-Prohorov metric $d_{\mathrm{GPr}}$ on $\mathbb{M}$ and prove that the metric space $(\mathbb{M},d_{\mathrm{GPr}})$ is complete and separable. In Section~\ref{S:equivtop} we will see that the Gromov-Prohorov metric generates the Gromov-weak topology. Notice that a first naive approach to metrizing the Gromov-weak topology could be to fix a countable dense subset $\{\Phi_n;\,n\in\N\}$ of the algebra of all polynomials, and to put for ${\mathcal X},{\mathcal Y}\in\mathbb{M}$, \be{e:naiv} d_{\mathrm{naive}}\big({\mathcal X},{\mathcal Y}\big) := \sum_{n\in\N}2^{-n}\big|\Phi_n({\mathcal X})-\Phi_n({\mathcal Y})\big|. \end{equation} However, such a metric is not complete. Indeed, one can check that the sequence $\{{\mathcal X}_n;\,n\in\N\}$ given in Example~\ref{Exp:star0}(ii) is a Cauchy sequence w.r.t.\ $d_{\mathrm{naive}}$ which does not converge. Recall that metrics on the space of probability measures on a fixed complete and separable metric space are well-studied (see, for example, \cite{Rachev1991, GibbsSu2001}).
Some of them, like the Prohorov metric and the Wasserstein metric (on compact spaces), generate the weak topology. On the other hand, the space of all (isometry classes of compact) metric spaces, not carrying a measure, is complete and separable once equipped with the Gromov-Hausdorff metric (see \cite{EvaPitWin2006}). We recall the notions of the Prohorov and the Gromov-Hausdorff metric below. Metrics on metric measure spaces should take both components into account and compare the spaces and the measures simultaneously. This was, for example, done in \cite{EvaWin2006} and \cite{Stu2006}. We will follow along similar lines as in \cite{Stu2006}, but replace the Wasserstein metric with the Prohorov metric. ~ Recall that the {\em Prohorov metric} between two probability measures $\mu_1$ and $\mu_2$ on a common metric space $(Z,r_Z)$ is defined by \be{Proh} d_{\mathrm{Pr}}^{(Z,r_Z)}\big(\mu_1,\mu_2\big) := \inf\Big\{\varepsilon>0:\,\mu_1(F)\le\mu_2(F^\varepsilon)+ \varepsilon,\;\forall\,F\text{ closed}\Big\} \end{equation} where \be{eq:Feps} F^\varepsilon := \big\{z\in Z:\, r_Z(z,z')<\varepsilon,\text{ for some }z'\in F\big\}. \end{equation} Sometimes it is easier to work with the equivalent formulation based on \emph{couplings} of the measures $\mu_1$ and $\mu_2$, i.e., measures $\tilde\mu$ on $Z\times Z$ with $\tilde\mu(\boldsymbol{\cdot}\times Z) =\mu_1(\boldsymbol{\cdot})$ and $\tilde\mu(Z\times\boldsymbol{\cdot})=\mu_2(\boldsymbol{\cdot})$. Notice that the product measure $\mu_1\otimes\mu_2$ is a coupling, and so the set of all couplings of two measures is not empty. By Theorem 3.1.2 in \cite{EthierKurtz86}, \be{Proh2} \begin{aligned} &d_{\mathrm{Pr}}^{(Z,r_Z)}\big(\mu_1,\mu_2\big) \\ &= \inf_{\tilde\mu}\,\inf\Big\{\varepsilon>0:\; \tilde\mu\big\{(z,z')\in Z\times Z:\, r_Z(z,z')\ge\varepsilon\big\}\le\varepsilon\Big\}, \end{aligned} \end{equation} where the infimum is taken over all couplings $\tilde\mu$ of $\mu_1$ and $\mu_2$.
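On a finite metric space every subset is closed, and for each $F$ the defining condition in \eqref{Proh} is monotone in $\varepsilon$, so the Prohorov distance can be approximated by bisection combined with a brute-force search over all subsets $F$. The following Python sketch (ours, for illustration only; it checks the one-sided condition of \eqref{Proh}, which for probability measures is known to define a symmetric quantity) does exactly this.

```python
from itertools import combinations

def prohorov_finite(r, mu1, mu2, tol=1e-6):
    """Approximate Prohorov distance between probability vectors mu1, mu2
    on a common finite metric space with distance matrix r, by bisection
    on eps; checks mu1(F) <= mu2(F^eps) + eps over all subsets F."""
    n = len(mu1)

    def holds(eps):
        for k in range(n + 1):
            for F in combinations(range(n), k):
                # mass mu2 gives to the open eps-neighbourhood F^eps of F
                feps_mass = sum(mu2[i] for i in range(n)
                                if any(r[i][j] < eps for j in F))
                if sum(mu1[i] for i in F) > feps_mass + eps:
                    return False
        return True

    lo, hi = 0.0, 1.0  # the Prohorov distance never exceeds 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if holds(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

For two unit point masses $\mu_1=\delta_a$, $\mu_2=\delta_b$ at distance $0.3$ this returns approximately $\min(0.3,1)=0.3$, as one also checks directly from \eqref{Proh}.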
The metric $d_{\mathrm{Pr}}^{(Z,r_Z)}$ is complete and separable if $(Z,r_Z)$ is complete and separable (\cite{EthierKurtz86}, Theorem 3.1.7). ~ The Gromov-Hausdorff metric is a metric on the space $\mathbb X_c$ of (isometry classes of) compact metric spaces. For $(X,r_{X})$ and $(Y,r_{Y})$ in $\mathbb{X}_{\mathrm{c}}$ the \emph{Gromov-Hausdorff metric} is given by \be{eq:GH1} d_{{\mathrm{GH}}}\big((X,r_X),(Y,r_Y)\big) := \inf_{(\varphi_{X},\varphi_{Y},Z)} d_{\rm{H}}^Z\big(\varphi_X(X),\varphi_Y(Y)\big), \end{equation} where the infimum is taken over isometric embeddings $\varphi_X$ and $\varphi_Y$ from $X$ and $Y$, respectively, into some common metric space $(Z,r_Z)$, and the Hausdorff metric $d^{(Z,r_Z)}_{\mathrm{H}}$ for closed subsets of a metric space $(Z,r_Z)$ is given by \be{eq:HausDef} d_{\rm H}^{(Z,r_Z)}(X,Y) := \inf\big\{\varepsilon>0:\,X\subseteq Y^\varepsilon,Y\subseteq X^\varepsilon\big\}, \end{equation} where $X^\varepsilon$ and $Y^\varepsilon$ are given by \eqref{eq:Feps} (compare \cite{Gromov2000, BridsonHaefliger1999, BurBurIva01}). Sometimes, it is handy to use an equivalent formulation of the Gromov-Hausdorff metric based on \emph{correspondences}. Recall that a \emph{relation} $R$ between two compact metric spaces $(X,r_X)$ and $(Y,r_Y)$ is any subset of $X\times Y$. A relation $R\subseteq X\times Y$ is called a \emph{correspondence} iff for each $x\in X$ there exists at least one $y\in Y$ such that $(x,y)\in{R}$, and for each $y'\in Y$ there exists at least one $x'\in X$ such that $(x',y')\in{R}$. Define the \emph{distortion} of a (non-empty) relation as \be{distortion} \mathrm{dis}(R) := \sup\big\{|r_X(x,x') - r_Y(y,y')|:\, (x,y), (x',y')\in R\big\}. 
\end{equation} Then by Theorem~7.3.25 in \cite{BurBurIva01}, the Gromov-Hausdorff metric can be given in terms of a minimal distortion over all correspondences, i.e., \be{GH} d_{{\mathrm{GH}}}\big((X,r_X),(Y,r_Y)\big) = \frac{1}{2}\inf_{R}{\mathrm{dis}}(R), \end{equation} where the infimum is over all correspondences $R$ between $X$ and $Y$. ~ To define a metric between two metric measure spaces $\mathcal X=(X,r_{X},\mu_X)$ and $\mathcal Y=(Y,r_{Y},\mu_Y)$ in $\mathbb M$, we can use neither the Prohorov metric nor the Gromov-Hausdorff metric directly. However, we can use the idea due to Gromov and embed $(X,r_X)$ and $(Y,r_Y)$ isometrically into a common metric space and measure the distance of the image measures. \begin{definition}[Gromov-Prohorov metric] The Gromov-Prohorov distance between two metric measure spaces $\mathcal X= (X,r_X, \mu_{X})$ and $\mathcal Y=(Y,r_Y,\mu_{Y})$ in $\mathbb M$ is defined by \be{dGPr} d_{{\mathrm{GPr}}}\big(\mathcal X, \mathcal Y\big) := \inf_{(\varphi_{X},\varphi_{Y},Z)} d^{(Z,r_Z)}_{\mathrm{Pr}} \big((\varphi_{X})_\ast\mu_{X},(\varphi_{Y})_\ast\mu_{Y}\big), \end{equation} where the infimum is taken over all isometric embeddings $\varphi_{X}$ and $\varphi_Y$ from $X$ and $Y$, respectively, into some common metric space $(Z,r_Z)$. \end{definition}\sm \begin{remark} \begin{enumerate} \item[(i)]\sloppy To see that the Gromov-Prohorov metric is well-defined we have to check that the right hand side of \eqref{dGPr} does not depend on the elements of the isometry classes of $(X,r_X,\mu_X)$ and $(Y,r_Y,\mu_Y)$. We leave out the straightforward details. \item[(ii)] Notice that w.l.o.g.\ the common metric space $(Z,r_Z)$ and the isometric embeddings $\varphi_{X}$ and $\varphi_Y$ from $X$ and $Y$ can be chosen to be $X\sqcup Y$ and the canonical embeddings $\varphi_X$ and $\varphi_Y$ from $X$ and $Y$ to $X\sqcup Y$, respectively (compare, for example, Remark~3.3(iii) in \cite{Stu2006}).
We can therefore also write \be{metricdGPr} d_{{\mathrm{GPr}}}\big(\mathcal X, \mathcal Y\big) := \inf_{r^{X,Y}} d^{(X\sqcup Y,r^{X,Y})}_{\mathrm{Pr}} \big((\varphi_X)_\ast\mu_{X},(\varphi_Y)_\ast\mu_{Y}\big), \end{equation} where the infimum is here taken over all complete and separable metrics $r^{X,Y}$ which extend the metrics $r_X$ on $X$ and $r_Y$ on $Y$ to $X\sqcup Y$. \end{enumerate} \label{Rem:27} \end{remark} \begin{remark}[Gromov's $\underline{\square}_1$-metric] Even though the material presented in this paper was developed independently of Gromov's work, some of the most important ideas are already contained in Chapter~3$\tfrac{1}{2}$ in \cite{Gromov2000}. \label{Rem:Gromov} In more detail, one can also start with a Polish space $(X,{\mathcal O})$ which is equipped with a probability measure $\mu\in{\mathcal M}_1(X)$ on ${\mathcal B}(X)$, and then introduce a metric $r:X\times X\to\R_+$ as a measurable function satisfying the metric axioms. Polish measure spaces $(X,\mu)$ can be parameterized by the segment $[0,1)$, where the parametrization refers to a measure preserving map $\varphi:[0,1)\to X$. If $r$ is a metric on $X$, then $r$ can be pulled back to a metric $(\varphi^{-1})_\ast r$ on $[0,1)$ by letting \be{e:hatr} (\varphi^{-1})_\ast r(t,t') := r\big(\varphi(t),\varphi(t')\big).
\end{equation} Notice that such a measure-preserving parametrization is far from unique, and Gromov introduces his $\underline{\square}_1$-distance between $(X,r,\mu)$ and $(X',r',\mu')$ as the infimum of distances $\square_1$ between the two metric spaces $([0,1),(\varphi^{-1})_\ast r)$ and $([0,1),(\psi^{-1})_\ast r')$ defined as \be{square} \begin{aligned} &\square_1\big(d,d'\big) \\ &:= \inf\big\{\varepsilon>0:\,\exists X_\varepsilon\in{\mathcal B}([0,1)):\,\lambda(X_\varepsilon)\le\varepsilon,\mbox{ s.t.} \\ &\quad\qquad \quad\qquad \quad\qquad |d(t_1,t_2)-d'(t_1,t_2)|\le\varepsilon,\;\forall\,t_1,t_2\in [0,1)\setminus X_\varepsilon\big\}, \end{aligned} \end{equation} where the infimum is taken over all possible measure-preserving parameterizations and $\lambda$ denotes the Lebesgue measure. The interchange of first embedding in a measure-preserving way and then taking the distance between the pulled back metric spaces versus first embedding isometrically and then taking the distance between the pushed forward measures explains the similarities between Gromov's $\varepsilon$-partition lemma (Section~3$\tfrac{1}{2}$.8 in \cite{Gromov2000}), his union lemma (Section~3$\tfrac{1}{2}$.12 in \cite{Gromov2000}) and his pre-compactness criterion (Section~3$\tfrac{1}{2}$.D in \cite{Gromov2000}) on the one hand, and our Lemma~\ref{lZorn}, Lemma~\ref{l:Gpronespace} and Proposition~\ref{P:02}, respectively, on the other. We strongly conjecture that the Gromov-weak topology agrees with the topology generated by Gromov's $\underline{\square}_1$-metric, but a (straightforward) proof is not obvious to us. $\qed$ \end{remark} We first show that the Gromov-Prohorov distance is indeed a metric. \begin{lemma} $d_{{\mathrm{GPr}}}$ defines a metric on $\mathbb{M}$. \label{metricL:metric} \end{lemma}\sm In the following we refer to the topology generated by the Gromov-Prohorov metric as the {\em Gromov-Prohorov topology}.
\index{Gromov-Prohorov topology} In Theorem~\ref{T:05} of Section~\ref{S:equivtop} we will prove that the Gromov-Prohorov topology and the Gromov-weak topology coincide. \index{Gromov-weak topology} \sm \begin{remark}[Extension of metrics via relations] \label{rem:met1} The proof of the lemma and some of the following results are based on the extension of the metrics of two metric spaces $(X_1,r_{X_1})$ and $(X_2,r_{X_2})$ if a non-empty relation $R\subseteq X_1\times X_2$ is known. The result is a metric on $X_1\sqcup X_2$, where $\sqcup$ denotes the disjoint union. Recall the distortion of a relation from \eqref{distortion}. Define the metric space $(X_1\sqcup X_2, r_{X_1\sqcup X_2}^R)$ by letting $r_{{X_1\sqcup X_{2}}}^{R}(x,x'):=r_{X_i}(x,x')$ if $x,x'\in X_i$, $i=1,2$, and for $x_1\in X_1$ and $x_2\in X_2$, \be{def:rZR} \begin{aligned} &r_{{X_1\sqcup X_{2}}}^{R}(x_1,x_{2}) \\ &:= \inf\big\{r_{X_1}(x_1,x_1') +\tfrac 12\mathrm{dis}(R)+r_{X_{2}}(x_{2},x_{2}'):\,(x_1',x_{2}')\in R\big\}. \end{aligned} \end{equation} It is then easy to check that $r_{{X_1\sqcup X_{2}}}^{R}$ defines a (pseudo-)metric on $X_1\sqcup X_{2}$ which extends the metrics on $X_1$ and $X_{2}$. In particular, $r^R_{X_1\sqcup X_2}(x_1,x_2)=\tfrac 12\text{\rm dis}(R)$, for any pair $(x_1,x_2)\in R$, and \be{eq:rHZ} d_H^{(X_1\sqcup X_2,r_{X_1\sqcup X_2}^R)}(\pi_1 R, \pi_2 R) = \tfrac12\mathrm{dis}(R), \end{equation} where $\pi_1$ and $\pi_2$ are the projection operators on $X_1$ and $X_2$, respectively. $\qed$ \end{remark}\sm \begin{proof}[Proof of Lemma \ref{metricL:metric}] {\em Symmetry} is obvious and {\em positive definiteness} can be shown by standard arguments. To see the {\em triangle inequality}, let $\varepsilon,\delta> 0$ and ${\mathcal X}_i:=(X_i,r_{X_i},\mu_{X_i})\in\mathbb{M}$, $i=1,2,3$, be such that $d_{{\mathrm{GPr}}}\big({\mathcal X}_1,{\mathcal X}_2\big)<\varepsilon$ and $d_{{\mathrm{GPr}}}\big({\mathcal X}_2,{\mathcal X}_3\big)<\delta$.
Then, by the definition \eqref{dGPr} together with Remark~\ref{Rem:27}(ii), we can find metrics $r^{1,2}$ and $r^{2,3}$ on $X_1\sqcup X_2$ and $X_2\sqcup X_3$, respectively, such that \be{metric1q1} d^{(X_1\sqcup X_2,r^{1,2})}_{\mathrm{Pr}}\big((\tilde\varphi_1)_\ast\mu_{X_1},(\tilde\varphi_2)_\ast\mu_{X_2}\big)<\varepsilon, \end{equation} and \be{metric1q2} d^{(X_2\sqcup X_3,r^{2,3})}_{\mathrm{Pr}} \big((\tilde\varphi_2')_\ast\mu_{X_2},(\tilde\varphi_3)_\ast\mu_{X_3}\big)<\delta, \end{equation} where $\tilde\varphi_1, \tilde\varphi_2$ and $\tilde\varphi_2', \tilde\varphi_3$ are canonical embeddings from $X_1, X_2$ to $X_1\sqcup X_2$ and $X_2, X_3$ to $X_2\sqcup X_3$, respectively. Setting $Z := (X_1\sqcup X_2) \sqcup (X_2\sqcup X_3)$ we define the metric $r_Z^R$ on $Z$ using the relation \begin{align} \label{eq:rtriangle} R := \{(\tilde\varphi_2(x), \tilde\varphi_2'(x)): x\in X_2\} \subseteq (X_1\sqcup X_2)\times (X_2\sqcup X_3) \end{align} and Remark \ref{rem:met1}. Denote the canonical embeddings from $X_1$, the two copies of $X_2$ and $X_3$ to $Z$ by $\varphi_1, \varphi_2,\varphi_2'$ and $\varphi_3$, respectively. Since $\text{dis}(R)=0$ and \be{e:trian0} d_{\text{Pr}}^{(Z,r_Z^R)} \big( (\varphi_2)_\ast \mu_2, (\varphi_2')_\ast \mu_2\big) = 0, \end{equation} by the triangle inequality of the Prohorov metric, \be{metrice:trian} \begin{aligned} d_{{\mathrm{GPr}}}\big({\mathcal X}_1,{\mathcal X}_3\big) &\leq d_{{\mathrm{Pr}}}^{(Z,r_Z^R)}\big((\varphi_1)_\ast\mu_1,(\varphi_3)_\ast\mu_3\big) \\ &\le d_{{\mathrm{Pr}}}^{(Z,r_Z^R)}\big((\varphi_1)_\ast\mu_1,(\varphi_2)_\ast\mu_2\big)+ d_{{\mathrm{Pr}}}^{(Z,r_Z^R)}\big((\varphi_2)_\ast\mu_2,(\varphi_2')_\ast\mu_2\big) \\ & \qquad \qquad \qquad \qquad \qquad \qquad + d_{{\mathrm{Pr}}}^{(Z,r_Z^R)}\big((\varphi_2')_\ast\mu_2,(\varphi_3)_\ast\mu_3\big) \\ &< \varepsilon+\delta. \end{aligned} \end{equation} Hence the triangle inequality follows by taking the infimum over all $\varepsilon$ and~$\delta$. 
\end{proof}\sm \begin{proposition} \label{P:07} The metric space $(\mathbb{M},d_{\mathrm{GPr}})$ is complete and separable. \end{proposition}\sm We prepare the proof with a lemma. \begin{lemma}\label{Lemm} Fix $(\varepsilon_n)_{n\in\N}$ in $(0,1)$. A sequence $({\mathcal X}_n:=(X_n,r_{n},\mu_{n}))_{n\in\N}$ in $\mathbb{M}$ satisfies \be{Lemm1} d_{\mathrm{GPr}}\big({\mathcal X}_n,{\mathcal X}_{n+1}\big) < \varepsilon_n \end{equation} if and only if there exist a complete and separable metric space $(Z,r_Z)$ and isometric embeddings $\varphi_{1}$, $\varphi_{2}$, ... from $X_1$, $X_2$, ..., respectively, into $(Z,r_Z)$, such that \be{Lemm2} d_{\mathrm{Pr}}^{(Z,r_Z)} \big((\varphi_{n})_\ast\mu_{n},(\varphi_{{n+1}})_\ast\mu_{{n+1}} \big) < \varepsilon_n. \end{equation} \end{lemma}\sm \begin{proof} The ``if'' direction is clear. For the ``only if'' direction, take sequences $({\mathcal X}_n:=(X_n,r_{n},\mu_{n}))_{n\in\N}$ and $(\varepsilon_n)_{n\in\N}$ which satisfy (\ref{Lemm1}). By Remark \ref{Rem:27}, for $Y_n:=X_n\sqcup X_{n+1}$ and all $n\in\N$, there is a metric $r_{Y_n}$ on $Y_n$ such that \be{e:ein} d_{\rm{Pr}}^{(Y_n,r_{Y_n})} \big((\varphi_{n})_\ast \mu_{n}, (\varphi_{{n+1}})_\ast \mu_{{n+1}}\big) < \varepsilon_n \end{equation} where $\varphi_{n}$ and $\varphi_{{n+1}}$ are the canonical embeddings from $X_n$ and $X_{n+1}$ to $Y_n$. Put \be{e:ein2} R_n := \big\{(x,x')\in X_n\times X_{n+1}:\,r_{Y_n}(\varphi_{n}(x), \varphi_{{n+1}}(x'))< \varepsilon_n\big\}. \end{equation} Recall from (\ref{Proh2}) that \eqref{e:ein} implies the existence of a coupling $\tilde{\mu}_n$ of $(\varphi_{n})_\ast \mu_{n}$ and $(\varphi_{{n+1}})_\ast \mu_{{n+1}}$ such that \be{e:ein1} \tilde{\mu}_n\big\{(y,y'):\, r_{Y_n}(y,y')<\varepsilon_n\big\} > 1-\varepsilon_n. \end{equation} This implies that $R_n$ is not empty and \be{eq:rel} d_{\text{Pr}}^{(Y_n, r_{Y_n}^{R_n})} \big( (\varphi_{n})_\ast\mu_{n}, (\varphi_{{n+1}})_\ast \mu_{{n+1}}\big) \leq \varepsilon_n.
\end{equation} Using the metric spaces $(Y_n, r_{Y_n}^{R_n})$ we define recursively metrics $r_{Z_n}$ on $Z_n:=\bigsqcup_{k=1}^n X_k$. Starting with $n=1$, we set $(Z_1,r_{Z_1}):=({X}_1,r_1)$. Next, assume we are given a metric $r_{Z_n}$ on $Z_n$. Consider the isometric embeddings $\psi_k^n$ from ${X}_k$ to $Z_n$, for $k=1,...,n$, which arise from the canonical embedding of $X_k$ in $Z_n$. Define for all $n\in\N$, \be{e:ein3} \tilde R_n := \big\{(z,x)\in Z_n\times X_{n+1}: ((\psi_n^n)^{-1}(z),x)\in R_n\big\} \end{equation} which defines metrics $r_{Z_{n+1}}^{\tilde R_n}$ on $Z_{n+1}$ via \eqref{def:rZR}. By this procedure we obtain in the limit a separable metric space $(Z':=\bigsqcup_{n=1}^\infty X_n, r_{Z'})$. Denote its completion by $(Z, r_{Z})$ and the isometric embeddings from $X_n$ to $Z$ which arise from the canonical embeddings by $\psi_n$, $n\in\N$. Observe that the restriction of $r_{Z}$ to $X_n\sqcup X_{n+1}$ is isometric to $(Y_n, r_{Y_n}^{R_n})$ and thus \be{eq:rel2} d_{\text{Pr}}^{(Z,r_Z)}\big( (\psi_n)_\ast \mu_{X_n}, (\psi_{n+1})_\ast \mu_{X_{n+1}}\big) \leq \varepsilon_n \end{equation} by \eqref{eq:rel}. So the claim follows. \end{proof}\sm \begin{proof}[Proof of Proposition~\ref{P:07}] To get \emph{separability}, we partly follow the proof of Theorem 3.2.2 in \cite{EthierKurtz86}. Given ${\mathcal X}:=(X,r,\mu)\in\mathbb{M}$ and $\varepsilon>0$, we can find ${\mathcal X}^\varepsilon:=(X,r,\mu^\varepsilon)\in\mathbb{M}$ such that $\mu^\varepsilon$ is a finitely supported atomic measure on $X$ and $d_{\mathrm{Pr}}(\mu^\varepsilon,\mu)<\varepsilon$. Now $d_{\mathrm{GPr}}\big({\mathcal X}^\varepsilon,{\mathcal X}\big)<\varepsilon$, while ${\mathcal X}^\varepsilon$ is just a ``finite metric measure space'' and can clearly be approximated arbitrarily closely in the Gromov-Prohorov metric by finite metric spaces with rational mutual distances and weights.
The set of isometry classes of finite metric spaces with rational edge-lengths is countable, and so $(\mathbb{M},d_{{\mathrm{GPr}}})$ is separable.\sm \sloppy To get {\em completeness}, it suffices to show that every Cauchy sequence has a convergent subsequence. Take therefore a Cauchy sequence $({\mathcal X}_n)_{n\in\N}$ in $(\mathbb{M},d_{\mathrm{GPr}})$ and a subsequence $({\mathcal Y}_n)_{n\in\N}$, $\mathcal Y_n=(Y_n,r_n,\mu_n)$ with $d_{\text{GPr}}(\mathcal Y_n, \mathcal Y_{n+1}) \leq 2^{-n}$. By Lemma~\ref{Lemm} we can choose a complete and separable metric space $({Z},r_{{Z}})$ and, for each $n\in\N$, an isometric embedding $\varphi_{n}$ from $Y_n$ into $({Z},r_{{Z}})$ such that $((\varphi_{n})_\ast\mu_{n})_{n\in\N}$ is a Cauchy sequence in ${\mathcal M}_1(Z)$ equipped with the weak topology. By the completeness of $\mathcal M_1(Z)$, $((\varphi_{n})_\ast\mu_n)_{n\in\N}$ converges to some $\bar{\mu}\in\mathcal M_1(Z)$. Putting the arguments together yields that with $\mathcal Z:=(Z, r_Z,\bar{\mu})$, \be{nonum2} \begin{aligned} d_{\rm{GPr}}\big(\mathcal Y_n, \mathcal Z\big) \overset{n\to\infty}{\longrightarrow} 0, \end{aligned} \end{equation} so that $\CZ$ is the desired limit object, which finishes the proof. \end{proof}\sm We conclude this section with another lemma. \begin{lemma} Let $\mathcal X=(X,r,\mu)$, $\mathcal X_1=(X_1,r_{1},\mu_{1})$, $\mathcal X_2=(X_2,r_{2},\mu_{2}),...$ be in $\mathbb M$. Then, \label{l:Gpronespace} \be{sixxt} d_{\mathrm{GPr}}\big(\mathcal X_n,\mathcal X\big) \xrightarrow{n\to\infty} 0 \end{equation} if and only if there exists a complete and separable metric space $(Z,r_Z)$ and isometric embeddings $\varphi,\varphi_{1},\varphi_{2},...$ from $X, X_1, X_2,...$ into $(Z,r_Z)$, respectively, such that \be{twee} d_{\mathrm{Pr}}^{(Z,r_Z)}\big((\varphi_{n})_\ast\mu_n,\varphi_\ast\mu\big) \xrightarrow {n\to\infty} 0. \end{equation} \end{lemma}\sm \begin{proof} Again the ``if'' direction is clear by definition.
For the ``only if'' direction, assume that (\ref{sixxt}) holds. To conclude (\ref{twee}) we can follow the same line of argument as in the proof of Lemma~\ref{Lemm}, but with a metric extending the metrics $r$, $r_{1}$, $r_2,...$, built on correspondences between $X$ and $X_n$ (rather than $X_n$ and $X_{n+1}$). We leave out the details. \end{proof} \section{Distance distribution and Modulus of mass distribution} \label{S:massDist} In this section we provide results on the distance distribution and on the modulus of mass distribution. These will be heavily used in the following sections, where we present metrics which are equivalent to the Gromov-Prohorov metric and which are very helpful in proving the characterizations of compactness and tightness in the Gromov-Prohorov topology. We start by introducing the {\em random distance distribution} of a given metric measure space. \begin{definition}[Random distance distribution] Let $\mathcal X=(X,r,\mu)\in\mathbb M$. For each $x\in X$, define the map $r_x:X \to [0,\infty)$ by $r_x(x'):=r(x,x')$, and put \label{def:hatmu} $\mu^x:=(r_x)_\ast\mu\in \mathcal M_1([0,\infty))$, i.e., $\mu^x$ defines the distribution of distances to the point $x\in X$. Moreover, define the map $\hat{r}:X\to\mathcal M_1([0,\infty))$ by $\hat{r}(x):=\mu^x$, and let \be{hatmuuu} \hat\mu_{\mathcal X}:=\hat{r}_\ast\mu\in\mathcal M_1(\mathcal M_1([0,\infty))) \end{equation} be the \emph{random distance distribution} of ${\mathcal X}$. \end{definition}\sm Notice first that the random distance distribution does not characterize the metric measure space uniquely. We will illustrate this with an example.
\begin{example}\label{Exp:hatmu} Consider the following two metric measure spaces: \beginpicture \setcoordinatesystem units <1cm,1cm> \setplotarea x from -1 to 10, y from 0 to 5 \plot 1 4 2 2.5 3 2.5 4 4 / \plot 1 3 2 2.5 3 2.5 4 3 / \plot 1 2 2 2.5 3 2.5 4 2 / \plot 1 1 2 2.5 3 2.5 4 1 / \put{$\frac{1}{20}$} [cC] at .6 4 \put{$\frac{2}{20}$} [cC] at .6 3 \put{$\frac{3}{20}$} [cC] at .6 2 \put{$\frac{4}{20}$} [cC] at .6 1 \put{$\frac{1}{20}$} [cC] at 4.4 4 \put{$\frac{2}{20}$} [cC] at 4.4 3 \put{$\frac{3}{20}$} [cC] at 4.4 2 \put{$\frac{4}{20}$} [cC] at 4.4 1 \put{$\bullet$} [cC] at 1 4 \put{$\bullet$} [cC] at 1 3 \put{$\bullet$} [cC] at 1 2 \put{$\bullet$} [cC] at 1 1 \put{$\bullet$} [cC] at 4 4 \put{$\bullet$} [cC] at 4 3 \put{$\bullet$} [cC] at 4 2 \put{$\bullet$} [cC] at 4 1 \put{$\mathcal X$} [cC] at 2.5 .5 \plot 7 4 8 2.5 9 2.5 10 4 / \plot 7 3 8 2.5 9 2.5 10 3 / \plot 7 2 8 2.5 9 2.5 10 2 / \plot 7 1 8 2.5 9 2.5 10 1 / \put{$\frac{1}{20}$} [cC] at 6.6 4 \put{$\frac{1}{20}$} [cC] at 6.6 3 \put{$\frac{4}{20}$} [cC] at 6.6 2 \put{$\frac{4}{20}$} [cC] at 6.6 1 \put{$\frac{2}{20}$} [cC] at 10.4 4 \put{$\frac{2}{20}$} [cC] at 10.4 3 \put{$\frac{3}{20}$} [cC] at 10.4 2 \put{$\frac{3}{20}$} [cC] at 10.4 1 \put{$\bullet$} [cC] at 7 4 \put{$\bullet$} [cC] at 7 3 \put{$\bullet$} [cC] at 7 2 \put{$\bullet$} [cC] at 7 1 \put{$\bullet$} [cC] at 10 4 \put{$\bullet$} [cC] at 10 3 \put{$\bullet$} [cC] at 10 2 \put{$\bullet$} [cC] at 10 1 \put{$\mathcal Y$} [cC] at 8.5 .5 \endpicture That is, both spaces consist of 8 points. The distance between two points equals the minimal number of edges one has to cross to come from one point to the other. The measures $\mu_X$ and $\mu_Y$ are given by numbers in the figure. 
We find that \be{tthi} \begin{aligned} \hat\mu_{\mathcal X} = \hat\mu_{\mathcal Y} &= \tfrac 1{10}\delta_{\tfrac 1{20}\delta_0+\tfrac{9}{20}\delta_2+\tfrac12\delta_3}+\tfrac {1}{5}\delta_{\tfrac 1{10} \delta_0+\tfrac 25\delta_2+\tfrac 1{2}\delta_3} \\ &\quad+ \tfrac 3{10}\delta_{\tfrac 3{20}\delta_0+\tfrac{7}{20}\delta_2+\tfrac12\delta_3}+\tfrac {2}{5}\delta_{\tfrac 1{5} \delta_0+\tfrac 3{10}\delta_2+\tfrac 1{2}\delta_3}. \end{aligned} \end{equation} Hence, the random distance distributions agree. But obviously, $\mathcal X$ and $\mathcal Y$ are not measure preserving isometric. $\qed$\end{example}\sm Recall the distance distribution $w_{\boldsymbol{\cdot}}$ and the modulus of mass distribution $v_\delta(\boldsymbol{\cdot})$ from Definition~\ref{D:modul}. Both can be expressed through the random distance distribution $\hat{\mu}(\boldsymbol{\cdot})$. These facts follow directly from the definitions, so we omit the proof. \begin{lemma}[Reformulation of $w_{\boldsymbol{\cdot}}$ and $v_\delta(\boldsymbol{\cdot})$ in terms of $\hat{\mu}(\boldsymbol{\cdot})$] Let $\mathcal X\in\mathbb M$. \label{l:hatmu1} \begin{enumerate} \item[(i)] The distance distribution $w_{\mathcal X}$ satisfies \be{firs} w_{\mathcal X} = \int_{{\mathcal M}_1([0,\infty))}\hat\mu_{\mathcal X}(\mathrm{d}\nu)\,\nu. \end{equation} \item[(ii)] For all $\delta>0$, the modulus of mass distribution $v_\delta(\mathcal X)$ satisfies \be{secc} v_\delta(\mathcal X) = \inf\big\{\varepsilon>0: \hat\mu_{\mathcal X}\{\nu\in \mathcal M_1([0,\infty)):\,\nu([0,\varepsilon)) \leq \delta\}\leq\varepsilon\big\}. \end{equation} \end{enumerate} \end{lemma}\sm The next result will be used frequently. \begin{lemma} Let ${\mathcal X}=(X,r,\mu)\in\mathbb{M}$ and \label{l:usef} $\delta>0$. If $v_\delta({\mathcal X})<\varepsilon$, for some $\varepsilon>0$, then \be{twew} \mu\big\{x\in X:\,\mu(B_\varepsilon(x))\le\delta\big\} < \varepsilon. 
\end{equation} \end{lemma}\sm \begin{proof} By definition of $v_\delta(\boldsymbol{\cdot})$, there exists $\varepsilon'<\varepsilon$ for which $\mu\big\{x\in X:\, \mu(B_{\varepsilon'}(x))\le\delta\big\}\le\varepsilon'$. Consequently, since $\{x: \mu(B_\varepsilon(x))\leq\delta\} \subseteq \{x:\mu(B_{\varepsilon'}(x))\leq\delta\}$, \be{twone} \mu\{x: \mu(B_\varepsilon(x))\leq\delta\} \leq \mu\{x: \mu(B_{\varepsilon'}(x))\leq\delta\} \le \varepsilon' < \varepsilon, \end{equation} and we are done. \end{proof}\sm The next result states basic properties of the map $\delta\mapsto v_\delta$. \begin{lemma}[Properties of $v_\delta(\boldsymbol{\cdot})$] \label{l:dvelta} Fix $\mathcal X\in\mathbb{M}$. The map which sends $\delta\ge 0$ to $v_{\delta}({\mathcal X})$ is non-decreasing, right-continuous and bounded by $1$. Moreover, $v_\delta(\mathcal X) \overset{\delta\to 0}{\longrightarrow} 0$. \end{lemma}\sm \begin{proof} The first three properties are trivial. For the fourth, fix $\varepsilon>0$, and let ${\mathcal X}=(X,r,\mu)\in\mathbb{M}$. Since $X$ is complete and separable there exists a compact set $K_\varepsilon\subseteq X$ with $\mu(K_\varepsilon)> 1-\varepsilon$ (see \cite{EthierKurtz86}, Lemma 3.2.1). In particular, $K_\varepsilon$ can be covered by finitely many balls $A_1,...,A_{N_\varepsilon}$ of radius $\varepsilon/2$ and positive $\mu$-mass. Choose $\delta$ such that \be{p6w} 0< \delta < \min\big\{\mu(A_i):\, 1\leq i\leq N_\varepsilon\big\}. \end{equation} Then, since each $x\in\bigcup_{i=1}^{N_\varepsilon}A_i$ lies in some ball $A_i\subseteq B_\varepsilon(x)$, so that $\mu(B_\varepsilon(x))\geq\mu(A_i)>\delta$, \be{p6ww} \begin{aligned} \mu\big\{x\in X: \mu(B_\varepsilon(x))>\delta\big\} &\ge \mu\big(\bigcup\nolimits_{i=1}^{N_\varepsilon}A_i\big) \\ &\ge \mu(K_\varepsilon) \\ &> 1-\varepsilon. \end{aligned} \end{equation} Therefore, by definition, $v_\delta(\mathcal X)\leq\varepsilon$, and since $\varepsilon$ was chosen arbitrarily, the assertion follows.
\end{proof}\sm The following proposition states continuity properties of $\hat{\mu}(\boldsymbol{\cdot})$, $w_{\boldsymbol{\cdot}}$ and $v_\delta(\boldsymbol{\cdot})$. The reader should keep in mind that Theorem~\ref{T:05} in Section~\ref{S:equivtop} will show that the Gromov-weak and the Gromov-Prohorov topology coincide. \begin{proposition}[Continuity properties of $\hat{\mu}(\boldsymbol{\cdot})$, $w_{\boldsymbol{\cdot}}$ and $v_\delta(\boldsymbol{\cdot})$]\label{P:lIV2} \begin{itemize} \item[{}] \item[(i)] The map $\mathcal X\mapsto\hat\mu_{\mathcal X}$ is continuous with respect to the Gromov-weak topology on $\mathbb{M}$ and the weak topology on $\mathcal M_1(\mathcal M_1([0,\infty)))$. \item[(ii)] The map $\mathcal X\mapsto\hat\mu_{\mathcal X}$ is continuous with respect to the Gromov-Prohorov topology on $\mathbb{M}$ and the weak topology on $\mathcal M_1(\mathcal M_1([0,\infty)))$. \item[(iii)] The map $\mathcal X\mapsto w_{\mathcal X}$ is continuous with respect to both the Gromov-weak and the Gromov-Prohorov topology on $\mathbb{M}$ and the weak topology on $\mathcal M_1([0,\infty))$. \item[(iv)] Let $\mathcal X$, $\mathcal X_1$, $\mathcal X_2$, ... in $\mathbb M$ be such that $\hat\mu_{\mathcal X_n}\overset{n\to\infty}{\Longrightarrow} \hat\mu_{\mathcal X}$, and let $\delta>0$. Then \be{sevv} \limsup_{n\to\infty}v_{\delta}(\mathcal X_n) \leq v_\delta(\mathcal X). \end{equation} \end{itemize} \end{proposition}\sm The proofs of Parts~(i) and~(ii) of Proposition~\ref{P:lIV2} are based on the notion of moment measures. \begin{definition}[Moment measures of $\hat{\mu}_{\mathcal X}$]\label{D:02} For $\mathcal X=(X,r,\mu)\in\mathbb M$ and $k\in\N$, define the {\em $k^{\mathrm{th}}$ moment measure} $\hat\mu_{\mathcal X}^k\in\mathcal M_1([0,\infty)^k)$ of $\hat\mu_{\mathcal X}$ by \be{sixx} \hat\mu_{\mathcal X}^k(\mathrm{d}(r_1, ..., r_k)) := \int\hat\mu_{\mathcal X}(\mathrm{d}\nu)\, \nu^{\otimes k}(\mathrm{d}(r_1,...,r_k)).
\end{equation} \end{definition}\sm \begin{remark}[Moment measures determine $\hat{\mu}_{\mathcal X}$] Observe that for all $k\in\N$, \label{Rem:02} \be{foutt} \begin{aligned} &\hat\mu_{\mathcal X}^k(A_1\times ...\times A_k) \\ &= \mu^{\otimes k+1}\big\{(u_0,u_1,...,u_{k}):\,r(u_0, u_1)\in A_1,..., r(u_0,u_k)\in A_k\big\}. \end{aligned} \end{equation} By Theorem 16.16 of \cite{Kallenberg2002}, the moment measures $\hat\mu_{\mathcal X}^k, k=1,2,...$ determine $\hat\mu_{\mathcal X}$ uniquely. Moreover, weak convergence of random measures is equivalent to convergence of all moment measures. $\qed$ \end{remark}\sm \begin{proof}[Proof of Proposition~\ref{P:lIV2}] (i) Take $\mathcal X$, $\mathcal X_1$, $\mathcal X_2$, ... in $\mathbb M$ such that \be{conv} \Phi(\mathcal X_n) \xrightarrow{n\to\infty} \Phi(\mathcal X), \end{equation} for all $\Phi\in\Pi$. For $k\in\mathbb N$, consider all $\phi\in{\mathcal C}_{\mathrm{b}}([0,\infty)^{\binom{k+1}{2}})$ which depend on $(r_{ij})_{0\le i<j\le k }$ only through $(r_{0,1}, ..., r_{0,k})$, i.e., there exists $\tilde\phi\in{\mathcal C}_{\mathrm{b}}([0,\infty)^{k})$ with $\phi\big((r_{ij})_{0\le i<j\le k}\big) = \tilde\phi\big((r_{0,j})_{1\le j\le k}\big)$. Since for any $\mathcal Y=(Y,r,\mu)\in\mathbb M$, \be{fiff} \begin{aligned} &\int\hat\mu^k_{\mathcal Y}(\mathrm{d}(r_1,...,r_k))\, \tilde\phi\big(r_1,...,r_k\big) \\ &= \int\mu^{\otimes k+1}(\mathrm{d}(u_0,u_1,...,u_{k}))\, \tilde\phi\big(r(u_0,u_1),...,r(u_0,u_k)\big) \\ &= \int\mu^{\otimes k+1}(\mathrm{d}(u_0,u_1,...,u_{k}))\, \phi\big((r(u_i,u_j))_{0\leq i<j\leq k}\big) \end{aligned} \end{equation} it follows from \eqref{conv} that $\hat\mu^k_{\mathcal X_n}\overset{n\to\infty}{\Longrightarrow}\hat\mu_{\mathcal X}^k$ in the topology of weak convergence. Since $k$ was arbitrary the convergence $\hat\mu_{\mathcal X_n}\overset{n\to\infty}{\Longrightarrow}\hat\mu_{\mathcal X}$ follows by Remark~\ref{Rem:02}.\sm (ii) Once more it suffices to prove that all moment measures converge. 
Let $\mathcal X=(X,r_X,\mu_X)\in\mathbb{M}$ and $\varepsilon>0$ be given. Now consider a metric measure space $\mathcal Y = (Y,r_Y,\mu_Y) \in\mathbb{M}$ with $d_{\rm{GPr}}(\mathcal X, \mathcal Y) < \varepsilon$. We know that there exist a metric space $(Z,r_Z)$, isometric embeddings $\varphi_X$ and $\varphi_Y$ of $\mathrm{supp}(\mu_X)$ and $\mathrm{supp}(\mu_Y)$ into $Z$, respectively, and a coupling $\tilde\mu$ of $(\varphi_X)_\ast \mu_X$ and $(\varphi_Y)_\ast\mu_Y$ such that \be{eq:w1} \tilde\mu\big\{(z,z'): r_Z(z,z')\ge\varepsilon\big\} \le \varepsilon. \end{equation} Given $k\in\mathbb N$, define a coupling $\tilde{\hat\mu}^k$ of $\hat\mu^k_{\mathcal X}$ and $\hat\mu^k_{\mathcal Y}$ by \be{yy} \begin{aligned} &\tilde{\hat\mu}^k\big(A_1\times\cdots\times A_k\times B_1\times\cdots\times B_k\big) \\ &:= \tilde\mu^{\otimes (k+1)}\big\{(z_0,z_0'),...,(z_k,z_k'):\, r_Z(z_0,z_i)\in A_i, r_Z(z_0', z_i')\in B_i, i=1,...,k\big\} \end{aligned} \end{equation} for all $A_1\times\cdots\times A_k\times B_1\times\cdots\times B_k\in\mathcal B(\mathbb R_+^{2k})$. Then \be{7.10} \begin{aligned} &\tilde{\hat\mu}^k\big\{(r_1,...,r_k,r_1',..., r_k'):\, |r_i-r_i'|\ge 2\varepsilon \text{ for at least one }i\big\} \\ &\leq k\cdot\tilde{\hat\mu}^1\big\{(r_1,r_1'):\, |r_1-r_1'|\ge 2\varepsilon\big\} \\ &= k\cdot\tilde\mu^{\otimes 2} \big\{(z,z'),(\tilde z,\tilde z'):\, |r_Z(z,\tilde z)-r_Z(z',\tilde z')|\ge 2\varepsilon\big\} \\ &\leq k\cdot\tilde\mu^{\otimes 2}\big\{(z,z'),(\tilde z, \tilde z'):\, r_Z(z,z')\ge\varepsilon \text{ or } r_Z(\tilde z,\tilde z')\ge\varepsilon\big\} \\ &\le 2k\varepsilon, \end{aligned} \end{equation} which implies that $d_{\rm{Pr}}^{\mathbb R_+^k}(\hat\mu^k_{\mathcal X}, \hat\mu^k_{\mathcal Y})\le 2k\varepsilon$, and the claim follows.\sm (iii) By Part~(i) of Lemma~\ref{l:hatmu1}, for ${\mathcal X}\in\mathbb{M}$, $w_{\mathcal X}$ equals the first moment measure of $\hat{\mu}_{\mathcal X}$.
The continuity properties of ${\mathcal X}\mapsto w_{\mathcal X}$ are therefore a direct consequence of (i) and (ii). \sm (iv) Let $\mathcal X$, $\mathcal X_1$, $\mathcal X_2$, ... in $\mathbb M$ be such that $\hat\mu_{\mathcal X_n}\overset{n\to\infty}{\Longrightarrow} \hat\mu_{\mathcal X}$ and $\delta>0$. Assume that $\varepsilon>0$ is such that $\varepsilon>v_\delta(\mathcal X)$. Then by Lemmata~\ref{l:hatmu1}(ii) and~\ref{l:usef}, \be{eigg} \hat\mu_{\mathcal X}\big\{\nu\in\mathcal M_1([0,\infty)):\, \nu([0,\varepsilon))\leq\delta\big\} <\varepsilon. \end{equation} The set $\{\nu\in\mathcal M_1([0,\infty)):\, \nu([0,\varepsilon))\leq\delta\}$ is closed in $\mathcal M_1([0,\infty))$. Hence by the Portmanteau Theorem (see, for example, Theorem 3.3.1 in \cite{EthierKurtz86}), \be{neiii} \begin{aligned} \limsup_{n\to\infty}&\,\hat\mu_{\mathcal X_n}\big\{\nu\in\mathcal M_1([0,\infty)):\,\nu([0,\varepsilon))\leq\delta\big\} \\ &\leq \hat\mu_{\mathcal X}\big\{\nu\in\mathcal M_1([0,\infty)):\, \nu([0,\varepsilon)) \leq\delta\big\} < \varepsilon. \end{aligned} \end{equation} That is, by Lemma~\ref{l:hatmu1}(ii), we have $v_\delta(\mathcal X_n)\leq\varepsilon$, for all but finitely many $n$. Therefore $\limsup_{n\to\infty} v_\delta(\mathcal X_n) \leq\varepsilon$. This holds for every $\varepsilon>v_\delta(\mathcal X)$, and we are done. \end{proof}\sm The following estimate will be used in the proofs of the pre-compactness characterization given in Proposition~\ref{P:02} and of Part~(i) of Lemma~\ref{LL}. \begin{lemma}\label{lZorn} Let $\delta>0$, $\varepsilon\geq 0$, and $\mathcal X=(X,r,\mu)\in\mathbb{M}$. If $v_\delta(\mathcal X)< \varepsilon$, then there exist $N\leq \lfloor\frac{1}{\delta}\rfloor$ and points $x_1,..., x_N\in X$ such that the following hold. \begin{itemize} \item For $i=1,...,N$, $\mu\big(B_\varepsilon(x_i)\big)>\delta$. \item $\mu\big(\bigcup\limits_{i=1}^N B_{2\varepsilon}(x_i)\big)> 1-\varepsilon$.
\item For all $i,j=1,...,N$ with $i\not=j$, $r\big(x_i, x_j\big)>\varepsilon$. \end{itemize} \end{lemma}\sm \begin{proof} Consider the set $D:=\{x\in X:\,\mu(B_{\varepsilon}(x))> \delta\}$. Since $v_\delta(\mathcal X)<\varepsilon$, Lemma~\ref{l:usef} implies that $\mu(D)> 1-\varepsilon$. Take a maximal $2\varepsilon$-separated net $\{x_i: i \in I\}\subseteq D$, i.e., \be{eq:l12ca} D\subseteq \bigcup_{i\in I} B_{2\varepsilon}(x_i), \end{equation} and for all $i\not =j$, \be{eq:l12c} r(x_i, x_j)>2\varepsilon, \end{equation} while adding any further point of $D$ to the net would destroy \eqref{eq:l12c}. Such a net exists in every metric space (see, for example, \cite{BurBurIva01}, p.~278). Since the balls $B_{\varepsilon}(x_i)$, $i\in I$, are pairwise disjoint by \eqref{eq:l12c}, \be{eq:l13d} 1 \geq \mu\Big(\bigcup_{i\in I} B_{\varepsilon}(x_i)\Big) = \sum_{i\in I} \mu\big(B_{\varepsilon}(x_i)\big) \geq |I|\delta, \end{equation} and $|I|\leq \lfloor\frac{1}{\delta}\rfloor$ follows. \end{proof}\sm \section{Compact sets} \label{S:compact} By Prohorov's Theorem, in a complete and separable metric space, a set of probability measures is relatively compact iff it is tight. This implies that compact sets in $\mathbb{M}$ play a special role for convergence results. In this section we characterize the (pre-)compact sets in the Gromov-Prohorov topology. Recall the distance distribution $w_{\mathcal X}$ from \eqref{distInt} and the modulus of mass distribution $v_\delta({\mathcal X})$ from \eqref{modul}. Denote by $(\mathbb{X}_{\mathrm{c}},d_{\mathrm{GH}})$ the space of all isometry classes of compact metric spaces equipped with the Gromov-Hausdorff metric (see Section \ref{S:GPW} for basic definitions). The following characterizations, together with Theorem~\ref{T:05} in Section~\ref{S:equivtop}, which states the equivalence of the Gromov-Prohorov and the Gromov-weak topology, imply the result stated in Theorem~\ref{T:Propprec}. \begin{proposition}[Pre-compactness characterization] \label{P:02} Let ${\Gamma}$ be a family in $\mathbb{M}$.
The following four conditions are equivalent. \begin{itemize} \item[(a)] The family ${\Gamma}$ is pre-compact in the Gromov-Prohorov topology. \item[(b)] The family $\big\{w({\mathcal X});\,{\mathcal X}\in\Gamma\big\}$ is tight, and \be{eq:unifConvVd} \sup_{\mathcal X\in\Gamma}v_\delta({\mathcal X}) \xrightarrow{\delta \to 0} 0. \end{equation} \item[(c)] For all $\varepsilon>0$ there exists $N_\varepsilon\in\N$ such that for all ${\mathcal X}=(X,r,\mu)\in\Gamma$ there is a subset $X_{\varepsilon, {\mathcal X}}\subseteq X$ with \begin{itemize} \item $\mu\big(X_{\varepsilon,{\mathcal X}}\big)\ge 1-\varepsilon$, \item $X_{\varepsilon, {\mathcal X}}$ can be covered by at most $N_\varepsilon$ balls of radius $\varepsilon$, and \item $X_{\varepsilon,\mathcal X}$ has diameter at most $N_\varepsilon$. \end{itemize} \item[(d)] For all $\varepsilon>0$ and ${\mathcal X}=(X,r,\mu)\in\Gamma$ there exists a compact subset $K_{\varepsilon, {\mathcal X}}\subseteq X$ with \begin{itemize} \item $\mu\big(K_{\varepsilon,{\mathcal X}}\big)\ge 1-\varepsilon$, and \item the family $\mathcal{K}_\varepsilon :=\{K_{\varepsilon,{\mathcal X}};\, {\mathcal X}\in\Gamma\}$ is pre-compact in $(\mathbb{X}_{\rm{c}},d_{\mathrm{GH}})$. \end{itemize} \end{itemize} \end{proposition}\sm \begin{remark} \label{Rem:01} \begin{itemize} \item[{}] \item[(i)] In the space of compact metric spaces equipped with a probability measure with full support, Proposition 2.4 in \cite{EvaWin2006} states that Condition (d) is sufficient for pre-compactness. \item[(ii)] Proposition~\ref{P:02}(b) characterizes tightness for the stronger topology given in \cite{Stu2006} based on certain $L^2$-Wasserstein metrics if one requires in addition uniform integrability of sampled mutual distance. 
Similarly, (b) characterizes tightness in the space of measure preserving isometry classes of metric spaces equipped with a finite measure (rather than a probability measure) if one requires in addition tightness of the family of total masses. $\qed$ \end{itemize} \end{remark}\sm \begin{proof}[Proof of Proposition~\ref{P:02}] As before, we abbreviate ${\mathcal X}=(X,r_X,\mu_X)$. We prove four implications giving the statement. \sm ${(a)\Rightarrow (b).}$ Assume that $\Gamma\subseteq\mathbb{M}$ is pre-compact in the Gromov-Prohorov topology. To show that $\big\{w({\mathcal X});\,{\mathcal X}\in\Gamma\big\}$ is tight, consider a sequence $\mathcal X_1, \mathcal X_2,...$ in $\Gamma$. Since $\Gamma$ is relatively compact by assumption, there is a converging subsequence, i.e., we find $\mathcal X\in\mathbb{M}$ such that $d_{\rm{GPr}}(\mathcal X_{n_k}, \mathcal X) \overset{k\to\infty}{\longrightarrow} 0$ along a suitable subsequence $(n_k)_{k\in\N}$. By Part (iii) of Proposition~\ref{P:lIV2}, $w_{\mathcal X_{n_k}} \overset{k\to\infty}{\Longrightarrow} w_{\mathcal X}$. As the sequence was chosen arbitrarily it follows that $\big\{w({\mathcal X});\,{\mathcal X}\in\Gamma\big\}$ is tight. We prove the second part of the assertion in (b) by contradiction. Assume that $v_\delta({\mathcal X})$ does not converge to $0$ uniformly in ${\mathcal X}\in\Gamma$, as $\delta\to 0$. Then we find an $\varepsilon>0$, a sequence $(\delta_n)_{n\in\N}$ converging to $0$, and metric measure spaces ${\mathcal X}_n\in\Gamma$ with \be{eq:P02P1} v_{\delta_n}({\mathcal X}_n) \ge \varepsilon. \end{equation} By assumption, there are a subsequence $({\mathcal X}_{n_k})_{k\in\N}$ and a metric measure space ${\mathcal X}\in\mathbb{M}$ such that $d_{\mathrm{GPr}}\big({\mathcal X}_{n_k},{\mathcal X}\big) \overset{k\to\infty}{\longrightarrow} 0$. By Parts~(ii) and~(iv) of Proposition~\ref{P:lIV2}, together with Lemma~\ref{l:dvelta} and the monotonicity of $\delta\mapsto v_\delta(\boldsymbol{\cdot})$, we find that $\limsup_{k\to\infty} v_{\delta_{n_k}}({\mathcal X}_{n_k})=0$, which contradicts \eqref{eq:P02P1}.
\sm ${(b)\Rightarrow (c).}$ By assumption, for all $\varepsilon>0$ there are $C(\varepsilon)$ with \be{p9q} \sup_{\mathcal X\in\Gamma} w_{\mathcal X}\big([C(\varepsilon),\infty)\big) < \varepsilon, \end{equation} and $\delta(\varepsilon)$ such that \be{p9qq} \sup_{\mathcal X\in\Gamma} v_{\delta(\varepsilon)}({\mathcal X}) < {\varepsilon}. \end{equation} Set \be{Xprime} X_{\varepsilon,\mathcal X}' := \big\{x\in X:\, \mu_X\big(B_{C(\tfrac{\varepsilon^2}{4})}(x)\big) >1-\varepsilon/2\big\}. \end{equation} We claim that $\mu_X(X'_{\varepsilon,\mathcal X})>1-\varepsilon/2$ for all $\mathcal X\in\Gamma$. If this were not the case for some $\mathcal X\in\Gamma$, we would obtain \be{eq:IV3d} \begin{aligned} w_{\mathcal X}\big([C(\tfrac14{\varepsilon^2});\infty)\big) &= \mu_X^{\otimes 2}\big\{(x,x')\in X\times X:\, r_X(x,x')\ge C(\tfrac14{\varepsilon^2})\big\} \\ &\geq \mu_X^{\otimes 2}\big\{(x,x'): x\notin X_{\varepsilon,\mathcal X}', x'\notin B_{C(\tfrac{\varepsilon^2}{4})}(x)\big\} \\ &\geq \frac \varepsilon 2 \mu_X(\complement X_{\varepsilon,\mathcal X}') \\ &\geq \frac{\varepsilon^2}{4}, \end{aligned} \end{equation} which contradicts (\ref{p9q}). Furthermore, the diameter of $X'_{\varepsilon,\mathcal X}$ is bounded by $4C(\tfrac{\varepsilon^2}{4})$. Indeed, otherwise we would find points $x,x'\in X'_{\varepsilon,\mathcal X}$ with $B_{C(\tfrac{\varepsilon^2}{4})}(x) \cap B_{C(\tfrac{\varepsilon^2}{4})}(x') = \emptyset$, which contradicts that \be{eq:IV3e} \begin{aligned} \mu_X\big(B_{C(\tfrac{\varepsilon^2}{4})}(x)\cap B_{C(\tfrac{\varepsilon^2}{4})}(x')\big) &\geq 1-\mu_X\big(\complement B_{C(\tfrac{\varepsilon^2}{4})}(x)\big)- \mu_X\big(\complement B_{C(\tfrac{\varepsilon^2}{4})}(x')\big) \\ &\geq 1-\varepsilon.
\end{aligned} \end{equation}\sm By Lemma~\ref{lZorn}, for all $\mathcal X=(X,r_X,\mu_X) \in \Gamma$, we can choose points $x_1,...,x_{N_\varepsilon^{\mathcal X}}\in X$ with $N_\varepsilon^{\mathcal X} \leq N(\varepsilon) :=\lfloor \frac{1}{\delta(\varepsilon/2)}\rfloor$, $r_X(x_i,x_j)>\varepsilon/2$, $1\le i<j\le N_\varepsilon^{\mathcal X}$, and with $\mu_X\big(\bigcup_{i=1}^{N_\varepsilon^{\mathcal X}} B_{\varepsilon}(x_i)\big) > 1-\varepsilon/2$. Set \be{eq:IV3a} X_{\varepsilon,\mathcal X} := X_{\varepsilon,\mathcal X}'\cap \bigcup_{i=1}^{N_\varepsilon^{\mathcal X}}B_{\varepsilon}(x_i). \end{equation} Then $\mu_X(X_{\varepsilon,\mathcal X}) > 1-\varepsilon$. In addition, $X_{\varepsilon,\mathcal X}$ can be covered by at most $N(\varepsilon)$ balls of radius $\varepsilon$ and $X'_{\varepsilon, \mathcal X}$ has diameter at most $4C(\tfrac{\varepsilon^2}{4})$, so the same is true for $X_{\varepsilon,\mathcal X}$. \sm ${(c)\Rightarrow (d).}$ Fix $\varepsilon>0$, and set $\varepsilon_n :=\varepsilon 2^{-(n+1)}$, for all $n\in\N$. By assumption we may choose for each $n\in\N$, $N_{\varepsilon_n}\in\N$ such that for all ${\mathcal X}\in\Gamma$ there is a subset $X_{\varepsilon_n, {\mathcal X}}\subseteq X$ of diameter at most $N_{\varepsilon_n}$ with $\mu\big(X_{\varepsilon_n,{\mathcal X}}\big)\ge 1-\varepsilon_n$, and such that $X_{\varepsilon_n, {\mathcal X}}$ can be covered by at most $N_{\varepsilon_n}$ balls of radius $\varepsilon_n$. Without loss of generality we may assume that all $\{X_{\varepsilon_n,\mathcal X};\,n\in\N,{\mathcal X}\in\Gamma\}$ are closed. Otherwise we just take their closure. For every ${\mathcal X}\in\Gamma$ take compact sets $K_{\varepsilon_n, \mathcal X}\subseteq X$ with $\mu_X(K_{\varepsilon_n, \mathcal X})> 1-\varepsilon_n$. 
Then the set \be{KepsX} K_{\varepsilon,\mathcal X} := \bigcap_{n=1}^\infty \big(X_{\varepsilon_n, \mathcal X} \cap K_{\varepsilon_n, \mathcal X}\big) \end{equation} is compact since it is the intersection of a compact set with closed sets, and \be{KepsX2} \mu_X(K_{\varepsilon,\mathcal X}) \geq 1-\sum_{n=1}^\infty\big(\mu_X(\complement X_{\varepsilon_n,\mathcal X}) +\mu_X(\complement K_{\varepsilon_n,\mathcal X})\big) > 1-\varepsilon. \end{equation} Consider \be{eq:IV3f} \begin{aligned} \mathcal K_\varepsilon := \big\{K_{\varepsilon,\mathcal X};\,\mathcal X\in\Gamma\big\}. \end{aligned} \end{equation} To show that $\mathcal K_\varepsilon$ is pre-compact we use the pre-compactness criterion given in Theorem 7.4.15 in \cite{BurBurIva01}, i.e., we have to show that $\mathcal K_\varepsilon$ is uniformly totally bounded. This means that the elements of $\mathcal K_\varepsilon$ have uniformly bounded diameter and that for all $\varepsilon'>0$ there is a number $N_{\varepsilon'}$ such that all elements of $\mathcal K_\varepsilon$ can be covered by $N_{\varepsilon'}$ balls of radius $\varepsilon'$. By definition, $K_{\varepsilon,\mathcal X}\subseteq X_{\varepsilon_1,\mathcal X}$ and so $K_{\varepsilon,\mathcal X}$ has diameter at most $N_{\varepsilon_1}$. Now take $\varepsilon'<\varepsilon$ and $n$ large enough that $\varepsilon_n<\varepsilon'$. Then $X_{\varepsilon_n, \mathcal X}$ as well as $K_{\varepsilon,\mathcal X}$ can be covered by $N_{\varepsilon_n}$ balls of radius $\varepsilon'$. So $\mathcal K_{\varepsilon}$ is pre-compact in $(\mathbb{X}_{\mathrm{c}},d_{\mathrm{GH}})$. \sm ${(d)\Rightarrow (a).}$ The proof is in two steps. Assume first that all metric spaces $(X,r_X)$ with $(X,r_X,\mu_X)\in\Gamma$ are compact, and that the family $\{(X,r_X): (X,r_X,\mu_X) \in\Gamma\}$ is pre-compact in the Gromov-Hausdorff topology.
Under these assumptions we can choose for every sequence in $\Gamma$ a subsequence $(\mathcal X_m)_{m\in\N}$, $\mathcal X_m = (X_m, r_{X_m}, \mu_{X_m})$, and a metric space $(X,r_X)$, such that \be{gre5} d_{\rm{GH}}(X,X_m)\overset{m\to\infty}{\longrightarrow}0. \end{equation} By Lemma~\ref{l:met2}, there are a compact metric space $(Z,r_Z)$ and isometric embeddings $\varphi_X$, $\varphi_{X_1}$, $\varphi_{X_2}$, ... from $X$, $X_1$, $X_2$, ..., respectively, to $Z$, such that $d_{\mathrm{H}}\big(\varphi_X(X), \varphi_{X_m}(X_m)\big) \overset{m\to\infty}{\longrightarrow}0$. Since $Z$ is compact, the set $\{(\varphi_{X_m})_\ast\mu_{X_m}: m\in\mathbb N\}$ is pre-compact in ${\mathcal M}_1(Z)$ equipped with the weak topology. Therefore $(\varphi_{X_m})_\ast\mu_{X_m}$ has a converging subsequence, and ${(a)}$ follows in this case. In the second step we consider the general case. Let $\varepsilon_n:=2^{-n}$, and fix, for every ${\mathcal X}\in\Gamma$ and every $n\in\N$, a point $x=x_{n,\mathcal X}\in K_{\varepsilon_n,\mathcal X}$. Put \be{gre6} \mu_{X,n}(\boldsymbol{\cdot}) := \mu_X(\boldsymbol{\cdot}\cap K_{\varepsilon_n,\mathcal X}) +(1-\mu_X(K_{\varepsilon_n,\mathcal X}))\delta_x(\boldsymbol{\cdot}) \end{equation} and let $\mathcal X^n:=(X,r_X,\mu_{X,n})$. By construction, for all $\mathcal X\in\Gamma$, \be{gre7} d_{\rm{GPr}}\big(\mathcal X^n, \mathcal X\big) \leq \varepsilon_n, \end{equation} and $\mu_{X,n}$ is supported by the compact set $K_{\varepsilon_n, \mathcal X}$. Hence, for every $n\in\N$, the family $\Gamma_n:=\{\mathcal X^n;\,\mathcal X\in\Gamma\}$ satisfies the assumptions of the first step, since the supports $\{K_{\varepsilon_n,\mathcal X};\,\mathcal X\in\Gamma\}$ form a pre-compact family in $(\mathbb{X}_{\mathrm{c}},d_{\mathrm{GH}})$. We can therefore find a converging subsequence in $\Gamma_n$, for all $n$, by the first step. By a diagonal argument we find a subsequence $(\mathcal X_m)_{m\in\N}$ with $\mathcal X_m=(X_m,r_{X_m},\mu_{X_m})$ such that $(\mathcal X_m^n)_{m\in\N}$ converges for every $n\in\N$ to some metric measure space $\mathcal Z_n$.
Pick a subsequence such that for all $n\in\N$ and $m\ge n$, \be{gre8} d_{\rm{GPr}}\big(\mathcal X_m^n, \mathcal Z_n\big) \leq \varepsilon_m. \end{equation} Then \be{gre9} d_{\rm{GPr}}\big( \mathcal X_m^n, \mathcal X_{m'}^n\big) \leq 2\varepsilon_n, \end{equation} for all $m,m'\geq n$. We conclude that $(\mathcal X_n)_{n\in\N}$ is a Cauchy sequence in $(\mathbb{M}, d_{\rm{GPr}})$. Indeed, \be{ttoto} \begin{aligned} &d_{\rm{GPr}}\big(\mathcal X_n, \mathcal X_{n+1}\big) \\ &\leq d_{\rm{GPr}}\big(\mathcal X_n, \mathcal X_n^n\big) + d_{\rm{GPr}}\big(\mathcal X_n^n, \mathcal X_{n+1}^n\big) + d_{\rm{GPr}}\big(\mathcal X_{n+1}^n, \mathcal X_{n+1}\big) \\ &\leq 4 \varepsilon_n, \end{aligned} \end{equation} and $\sum_{n\ge 1}\varepsilon_n<\infty$. Since $(\mathbb{M}, d_{\rm{GPr}})$ is complete, this sequence converges and we are done. \end{proof}\sm \section{Tightness} \label{S:PropTight} In Proposition~\ref{P:02} we have given a characterization for relative compactness in $\mathbb{M}$ with respect to the Gromov-Prohorov topology. This characterization extends to the following tightness characterization in $\mathcal M_1(\mathbb{M})$, which is equivalent to Theorem \ref{T:PropTight}, once we have shown the equivalence of the Gromov-Prohorov and the Gromov-weak topology in Theorem~\ref{T:05} in Section~\ref{S:equivtop}. \begin{proposition}[Tightness with respect to the Gromov-Prohorov topology] A set $\mathbf{A}\subseteq{\mathcal M}_1(\mathbb{M})$ is tight with respect to the Gromov-Prohorov topology on $\mathbb{M}$ if and only if for all $\varepsilon>0$ there exist \label{PropTight} $\delta>0$ and $C>0$ such that \be{eq:PropTightPf1} \sup_{\mathbb P\in\mathbf A} \mathbb P\big[v_\delta(\mathcal X) + w_{\mathcal X}([C;\infty))\big] < \varepsilon. \end{equation} \end{proposition}\sm \begin{proof}[Proof of Proposition~\ref{PropTight}] For the ``only if'' direction assume that $\mathbf{A}$ is tight and fix $\varepsilon>0$.
By definition, we find a compact set $\Gamma_\varepsilon$ in $({\mathbb M},d_{\mathrm{GPr}})$ such that $\inf_{\mathbb{P}\in\mathbf{A}}\mathbb{P}(\Gamma_\varepsilon)>1-\varepsilon/4$. Since $\Gamma_\varepsilon$ is compact there are, by part (b) of Proposition~\ref{P:02}, $\delta=\delta(\varepsilon)>0$ and $C=C(\varepsilon)>0$ such that $v_\delta(\mathcal X)<\varepsilon/4$ and $w_{\mathcal X}([C,\infty))<\varepsilon/4$, for all $\mathcal X\in \Gamma_\varepsilon$. Furthermore both $v_\delta(\boldsymbol{\cdot})$ and $w_{\boldsymbol{\cdot}}([C,\infty))$ are bounded above by 1. Hence for all $\mathbb{P}\in\mathbf{A}$, \be{pppp} \begin{aligned} &\mathbb{P}\big[v_\delta(\mathcal X)+w_{\mathcal X}([C,\infty))\big] \\ &= \mathbb{P}\big[v_\delta(\mathcal X)+w_{\mathcal X}([C,\infty));\Gamma_\varepsilon\big] +\mathbb{P}\big[v_\delta(\mathcal X)+w_{\mathcal X}([C,\infty));\complement \Gamma_\varepsilon\big] \\ &< \frac{\varepsilon}{2}+\frac{\varepsilon}{2} \\ &= \varepsilon. \end{aligned} \end{equation} Therefore \eqref{eq:PropTightPf1} holds. \sm For the ``if'' direction assume \eqref{eq:PropTightPf1} is true and fix $\varepsilon>0$. For all $n\in\N$, there are $\delta_n>0$ and $C_n>0$ such that \be{e:tight1} \begin{aligned} \sup_{\mathbb{P}\in\mathbf{A}} \mathbb{P}\big[v_{\delta_n}(\mathcal X)+w_{\mathcal X}([C_n,\infty))\big] < 2^{-2n}\varepsilon^2. \end{aligned} \end{equation} By Tschebychev's inequality, we conclude that for all $n\in\N$, \be{e:tight2} \sup_{\mathbb{P}\in\mathbf{A}} \mathbb{P}\big\{\mathcal X:\, v_{\delta_n}(\mathcal X)+ w_{\mathcal X}([C_n,\infty))> 2^{-n}\varepsilon\big\} < 2^{-n}\varepsilon. \end{equation} By the equivalence of (a) and (b) in Proposition~\ref{P:02} the closure of \be{Gaama} \Gamma_\varepsilon := \bigcap_{n=1}^\infty\big\{\mathcal X:\, v_{\delta_n}(\mathcal X)+w_{\mathcal X}([C_n,\infty))\le 2^{-n}\varepsilon\big\} \end{equation} is compact. 
We conclude \be{twel} \begin{aligned} \mathbb{P}\big(\overline{\Gamma_\varepsilon}\big) &\ge \mathbb{P}\big(\Gamma_\varepsilon\big) \\ &\ge 1-\sum_{n=1}^\infty\mathbb{P}\big\{\mathcal X:\, v_{\delta_n}(\mathcal X)+w_{\mathcal X}([C_n,\infty))>\tfrac{\varepsilon}{2^n}\big\} \\ &> 1-\varepsilon. \end{aligned} \end{equation} Since $\varepsilon$ was arbitrary, $\mathbf{A}$ is tight. \end{proof}\sm \section{Gromov-Prohorov and Gromov-weak topology coincide} \label{S:equivtop} In this section we show that the topologies induced by convergence of polynomials and convergence in the Gromov-Prohorov metric coincide. This implies that the characterizations of compact subsets of $\mathbb{M}$ and tight families in ${\mathcal M}_1(\mathbb{M})$ in the Gromov-weak topology stated in Theorems \ref{T:Propprec} and \ref{T:PropTight} are covered by the corresponding characterizations with respect to the Gromov-Prohorov topology given in Propositions \ref{P:02} and \ref{PropTight}, respectively. Recall the distance matrix distribution from Definition \ref{def:distMatDistr}. \begin{theorem} Let $\mathcal X, \mathcal X_1,\mathcal X_2,...\in\mathbb{M}$. \label{T:05} The following are equivalent: \begin{itemize} \item[(a)] The sequence converges in the Gromov-Prohorov metric, i.e., \be{e:(b)} d_{\mathrm{GPr}}\big(\mathcal X_n,\mathcal X\big) \xrightarrow{n\to\infty} 0. \end{equation} \item[(b)] The distance matrix distributions converge weakly, i.e., \be{e:(0)} \nu^{\mathcal X_n} \Longrightarrow \nu^{\mathcal X} \text{ as }n\to\infty. \end{equation} \item[(c)] All polynomials converge, i.e., for all $\Phi\in\Pi$, \be{e:(a)} \Phi(\mathcal X_n) \xrightarrow{n\to\infty} \Phi(\mathcal X). \end{equation} \end{itemize} \end{theorem}\sm \begin{proof} ${(a)\Rightarrow (b).}$ Let $\mathcal X=(X,r_X, \mu_X)$, $\mathcal X_1=(X_1,r_1, \mu_1)$, $\mathcal X_2=(X_2,r_2,\mu_2)$, .... By Lemma~\ref{l:Gpronespace} there are a complete and separable metric space $(Z,r_Z)$ and isometric embeddings $\varphi$, $\varphi_1$, $\varphi_2$,...
from $(X,r_X)$, $(X_1,r_{1})$, $(X_2,r_{2})$, ..., respectively, to $(Z,r_Z)$ such that $(\varphi_n)_\ast\mu_n$ converges weakly to $\varphi_\ast\mu_X$ on $(Z,r_Z)$. Consequently, using \eqref{e:iota}, \be{eq:d1} \nu^{\mathcal X_n} = (\iota^{\mathcal X_n})_\ast \mu_n^{\N} = (\iota^{\mathcal Z})_\ast \big(((\varphi_n)_\ast \mu_n)^{\N}\big) \Longrightarrow (\iota^{\mathcal Z})_\ast \big((\varphi_\ast \mu_X)^{\N}\big) = (\iota^{\mathcal X})_\ast \mu_X^{\N} = \nu^{\mathcal X}. \end{equation} ${(b)\Rightarrow (c).}$ This is a consequence of the two different representations of polynomials from \eqref{pp3} and \eqref{ppp3}. ${(c)\Rightarrow (a).}$ Assume that for all $\Phi\in\Pi$, $\Phi({\mathcal X}_n)\xrightarrow{n\to\infty}\Phi({\mathcal X})$. It is enough to show that the sequence $({\mathcal X}_n)_{n\in\mathbb N}$ is pre-compact with respect to the Gromov-Prohorov topology, since then, by Proposition \ref{P:00} and the assumed convergence of polynomials, all limit points coincide and equal ${\mathcal X}$. We need to check the two conditions guaranteeing pre-compactness given by Part (b) of Proposition~\ref{P:02}. By Part~(iii) of Proposition~\ref{P:lIV2}, the map $\mathcal X\mapsto w_{\mathcal X}$ is continuous with respect to the Gromov-weak topology. Hence, the family $\{w_{\mathcal X_n};\,n\in\N\}$ is tight. In addition, by Parts~(i) and (iv) of Proposition~\ref{P:lIV2}, $\limsup_{n\to\infty} v_{\delta}(\mathcal X_n) \leq v_\delta(\mathcal X) \xrightarrow{\delta\to 0} 0$. By Remark \ref{Rem:05}, the latter implies \eqref{eq:unifConvVd}, and we are done. \end{proof}\sm \section{Equivalent metrics} \label{S:equivMetrics} In Section~\ref{S:GPW} we have seen that $\mathbb{M}$ equipped with the Gromov-Prohorov metric is separable and complete. In this section we conclude the paper by presenting further metrics (not necessarily complete) which are all equivalent to the Gromov-Prohorov metric and which may in some situations be easier to work with.
\subsection*{The Eurandom metric} $\!\!\!\!$\footnote{When we first discussed how to metrize the Gromov-weak topology the Eurandom metric came up. Since the discussion took place during a meeting at Eurandom, we decided to name the metric accordingly.} Recall from Definition~\ref{D:01} the algebra of polynomials, i.e., functions which evaluate distances of finitely many points sampled from a metric measure space. By Proposition~\ref{P:00}, polynomials separate points in $\mathbb{M}$. Consequently, two metric measure spaces are different if and only if the distributions of sampled finite subspaces are different. We therefore define \be{dGPr2} \begin{aligned} d_{\rm{Eur}}\big(\mathcal X, \mathcal Y\big) := \inf_{\tilde\mu}\,\inf\big\{\varepsilon>0:\, \tilde{\mu}^{\otimes 2}\{&(x,y),(x',y')\in (X\times Y)^2 :\; \\ & |r_X(x,x') - r_Y(y,y')|\ge\varepsilon\}<\varepsilon\big\}, \end{aligned} \end{equation} where the infimum is over all couplings $\tilde\mu$ of $\mu_X$ and $\mu_Y$. We will refer to $d_{\rm{Eur}}$ as the {\em Eurandom} metric. Not only is $d_{\mathrm{Eur}}$ a metric on $\mathbb{M}$, it also generates the Gromov-Prohorov topology. \begin{proposition}[Equivalent metrics]\label{propMet2} The distance $d_{\rm{Eur}}$ is a metric on $\mathbb{M}$. It is equivalent to $d_{\rm{GPr}}$, i.e., the generated topology is the Gromov-weak topology. \end{proposition}\sm Before we prove the proposition we give an example showing that the Eurandom metric is not complete. \begin{example}[Eurandom metric is not complete] \label{Exp:star} For all $n\in\N$, let ${\mathcal X}_n:=(X_n,r_n,\mu_n)$ be as in Example \ref{Exp:star0}(ii).
For all $n\in\N$, \be{p3qa} \begin{aligned} d_{\mathrm{Eur}}&\big({\mathcal X}_n,{\mathcal X}_{n+1}\big) \\ &\leq \inf\big\{\varepsilon>0:\,(\mu_n\otimes \mu_{n+1})^{\otimes 2}\{|\mathbf{1}\{x=x'\}-\mathbf{1}\{y=y'\}|\ge\varepsilon\}\le\varepsilon\big\} \\ &\leq 2^{-(n-1)}, \end{aligned} \end{equation} i.e., $(\mathcal X_n)_{n\in\N}$ is a Cauchy sequence for $d_{\rm{Eur}}$ which does not converge. Hence $(\mathbb{M}, d_{\rm{Eur}})$ is not complete. The Gromov-Prohorov metric was shown to be complete, and hence the above sequence is not Cauchy in this metric. Indeed, \be{p3q} d_{\mathrm{GPr}}\big({\mathcal X}_n,{\mathcal X}_{n+1}\big) = 2^{-1}\overset{n\to\infty}{\not\longrightarrow}0. \end{equation} $\qed$\end{example}\sm To prepare the proof of Proposition~\ref{propMet2}, we provide bounds on the introduced ``distances''. \begin{lemma}[Equivalence] Let ${\mathcal X},{\mathcal Y}\in\mathbb{M}$, and $\delta\in(0,\tfrac 12)$. \label{LL} \begin{itemize} \item[(i)] If $d_{\mathrm{Eur}}\big({\mathcal X},{\mathcal Y}\big)<\delta^4$ then $d_{\mathrm{GPr}}\big({\mathcal X},{\mathcal Y}\big) < 12(2v_\delta(\mathcal X)+\delta)$. \item[(ii)] \be{eq:eqivMet2} d_{\mathrm{Eur}}\big({\mathcal X},{\mathcal Y}\big) \leq 2d_{\mathrm{GPr}}\big({\mathcal X},{\mathcal Y}\big). \end{equation} \end{itemize} \end{lemma}\sm \begin{proof} (i) The Gromov-Prohorov metric relies on the Prohorov metric of embeddings of $\mu_X$ and $\mu_Y$ in ${\mathcal M}_1(Z)$ in a metric space $(Z,r_Z)$. This is in contrast to the Eurandom metric which is based on an optimal coupling of the two measures $\mu_X$ and $\mu_Y$ without referring to a space of measures over a third metric space. Since we want to bound the Gromov-Prohorov metric in terms of the Eurandom metric the main goal of the proof is to construct a suitable metric space $(Z,r_Z)$. The construction proceeds in three steps. 
We start in Step~1 with finding a suitable $\varepsilon$-net $\{x_1,...,x_N\}$ in $(X,r_X)$, and show that this net has a suitable corresponding net $\{y_1,...,y_N\}$ in $(Y,r_Y)$. In Step~2 we then verify that these nets have the property that $r_X(x_i, x_j)\approx r_Y(y_i, y_j)$ (where the '$\approx$' is made precise below) and that $\delta$-balls around these nets carry almost all $\mu_X$- and $\mu_Y$-mass. Finally, in Step~3 we use these nets to define a metric space $(Z,r_Z)$ containing both $(X,r_X)$ and $(Y,r_Y)$, and bound the Prohorov metric of the images of $\mu_X$ and $\mu_Y$. \subsubsection*{Step~1 (Construction of suitable $\varepsilon$-nets in $X$ and $Y$)} Fix $\delta\in(0,\tfrac{1}{2})$. Assume that ${\mathcal X},{\mathcal Y}\in\mathbb{M}$ are such that $d_{\mathrm{Eur}}\big({\mathcal X},{\mathcal Y}\big)<\delta^4$. By definition, we find a coupling $\tilde{\mu}$ of $\mu_X$ and $\mu_Y$ such that \be{eq:l11} \begin{aligned} \tilde\mu^{\otimes 2}\big\{(x_1,y_1),(x_2,y_2):\, |r_X(x_1,x_2)-r_Y(y_1,y_2)|>2\delta\big\}<\delta^4. \end{aligned} \end{equation} Set $\varepsilon:=4v_\delta(\mathcal X)\geq 0$. By Lemma~\ref{lZorn}, there are $N\leq\lfloor\frac{1}{\delta}\rfloor$ points $x_1,...,x_N\in X$ with pairwise distances at least $\varepsilon$, \be{eq:l12cb} \mu_X\big(B_\varepsilon(x_i)\big)>\delta, \end{equation} for all $i=1,...,N$, and \be{eq:l12cc} \mu_X\big(\bigcup\nolimits_{i=1}^N B_{\varepsilon}(x_i)\big) \geq 1-\varepsilon. \end{equation} Put $D:=\bigcup_{i=1}^N B_{\varepsilon}(x_i)$. We claim that for every $i=1,...,N$ there is $y_i\in Y$ with \be{eq:l11aa} \tilde\mu\big(B_{\varepsilon}(x_i)\times B_{2(\varepsilon+\delta)}(y_i)\big) \geq (1-\delta^2)\mu_X\big(B_{\varepsilon}(x_i)\big). \end{equation} Indeed, assume the assertion is not true for some $1\leq i\leq N$. Then, for all $y\in Y$, \be{eq:l11b} \tilde\mu\big(B_\varepsilon(x_i)\times \complement B_{2(\varepsilon+\delta)}(y)\big) \ge \delta^2\mu_X\big(B_\varepsilon(x_i)\big).
\end{equation} This implies that \be{eq:l11c} \begin{aligned} \tilde\mu^{\otimes 2}& \{(x',y'), (x'',y''): |r_X(x',x'') - r_Y(y',y'')|>2\delta\} \\& \geq \tilde\mu^{\otimes 2}\{(x',y'), (x'',y''): x',x''\in B_\varepsilon(x_i), y''\notin B_{2(\varepsilon+\delta)}(y')\} \\ &\geq \mu_X(B_\varepsilon(x_i))^2\delta^2 \\ &> \delta^4, \end{aligned} \end{equation} by \eqref{eq:l12cb} and (\ref{eq:l11b}), which contradicts \eqref{eq:l11}. \subsubsection*{Step~2 (Distortion of $\{x_1,...,x_N\}$ and $\{y_1,...,y_N\}$)} Assume that $\{x_1,...,x_N\}$ and $\{y_1,...,y_N\}$ are such that (\ref{eq:l12cb}) through (\ref{eq:l11aa}) hold. We claim that then \be{e:1} \big|r_X(x_i,x_j)-r_Y(y_i,y_j)\big| \le 6(\varepsilon+\delta), \end{equation} for all $i,j=1,...,N$. Assume that \eqref{e:1} is not true for some pair $(i,j)$. Then for all $x'\in B_\varepsilon(x_i)$, $x''\in B_\varepsilon(x_j)$, $y'\in B_{2(\varepsilon+\delta)}(y_i)$, and $y''\in B_{2(\varepsilon+\delta)}(y_j)$, \be{thenfor} \big|r_X(x',x'') - r_Y(y',y'')\big| > 6(\varepsilon+\delta)-2\varepsilon-4(\varepsilon+\delta) = 2\delta. \end{equation} Then \be{eq:l12d} \begin{aligned} \tilde\mu^{\otimes 2}&\big\{(x',y'), (x'',y''): |r_X(x',x'') - r_Y(y',y'')|>2\delta\big\} \\ &\geq \tilde\mu^{\otimes 2}\big\{(x',y'), (x'',y''): \\& \qquad \quad x'\in B_\varepsilon(x_i),x''\in B_\varepsilon(x_j), y'\in B_{2(\varepsilon+\delta)}(y_i), y''\in B_{2(\varepsilon+\delta)}(y_j)\big\} \\ &= \tilde\mu\big(B_\varepsilon(x_i)\times B_{2(\varepsilon+\delta)}(y_i)\big) \tilde\mu\big(B_\varepsilon(x_j)\times B_{2(\varepsilon+\delta)} (y_j)\big) \\ &> \delta^2(1-\delta)^2 \\ &> \delta^4, \end{aligned} \end{equation} where we used \eqref{eq:l11aa}, \eqref{eq:l12cb} and $\delta<\tfrac 12$. Since (\ref{eq:l12d}) contradicts \eqref{eq:l11}, we are done.
\subsubsection*{Step~3 (Definition of a suitable metric space $(Z,r_Z)$)} Define the relation $R:=\{(x_i,y_i): i=1,...,N\}$ between $X$ and $Y$ and consider the metric space $(Z,r_Z)$ defined by $Z:=X\sqcup Y$ and $r_Z:=r_{X\sqcup Y}^R$, given as in Remark~\ref{rem:met1}. Choose isometric embeddings $\varphi_X$ and $\varphi_Y$ from $(X,r_X)$ and $(Y,r_Y)$, respectively, into $(Z,r_Z)$. As $\mathrm{dis}(R)\leq 6(\varepsilon+\delta)$ (see (\ref{distortion}) for the definition), by Remark~\ref{rem:met1}, $r_Z(\varphi_X(x_i),\varphi_Y(y_i))\leq 3(\varepsilon+\delta)$, for all $i=1,...,N$. If $x\in X$ and $y\in Y$ are such that $r_Z(\varphi_X(x),\varphi_Y(y)) \geq 6(\varepsilon + \delta)$ and $r_X(x,x_i)<\varepsilon$ then \be{eq:l15a} \begin{aligned} r_Y(y,y_i) &\geq r_Z(\varphi_X(x),\varphi_Y(y)) - r_X(x,x_i) - r_Z(\varphi_X(x_i), \varphi_Y(y_i)) \\ &\geq 6(\varepsilon+\delta)-\varepsilon-3(\varepsilon+\delta) \\ &\ge 2(\varepsilon+\delta) \end{aligned} \end{equation} and so for all $x\in B_{\varepsilon}(x_i)$, \be{l16} \big\{y\in Y:\, r_Z(\varphi_X(x),\varphi_Y(y))\ge6(\varepsilon+\delta) \big\} \subseteq \complement B_{2(\varepsilon+\delta)}(y_i). \end{equation} Let $\hat{\mu}$ be the probability measure on $Z\times Z$ defined by $\hat{\mu}(A\times B):=\tilde{\mu}(\varphi^{-1}_{X}(A)\times\varphi^{-1}_{Y}(B))$, for all $A,B\in{\mathcal B}(Z)$. Then, by \eqref{e:1}, (\ref{l16}), (\ref{eq:l11aa}) and as $N\le\lfloor 1/\delta\rfloor$, \be{l13} \begin{aligned} \hat\mu\{&(z,z'): r_Z(z,z')\geq 6(\varepsilon+\delta)\} \\ &\leq \hat\mu\big( \varphi_X(\complement D)\times \varphi_Y(Y))+\hat\mu\Big(\bigcup\limits_{i=1}^N B_{\varepsilon}(\varphi_{X}(x_i))\times\complement B_{2(\varepsilon+\delta)}(\varphi_{Y}(y_i))\Big) \\ &\leq \varepsilon+\sum_{i=1}^N\mu_X(B_{\varepsilon} (x_i))\delta^2 \\ & \leq \varepsilon+\delta.
\end{aligned} \end{equation} Hence, using (\ref{Proh2}) and $\varepsilon=4v_\delta(\mathcal X)$, \be{l14} d_{\mathrm{Pr}}^{(Z,r_Z)}\big((\varphi_{X})_\ast\mu_X, (\varphi_{Y})_\ast\mu_Y\big) \le 6\big(4v_\delta({\mathcal X})+2\delta\big), \end{equation} and so $d_{\mathrm{GPr}}\big({\mathcal X},{\mathcal Y}\big)\le 12\big(2v_\delta({\mathcal X})+\delta\big)$, as claimed. \sm (ii) Assume that $d_{\mathrm{GPr}}\big({\mathcal X},{\mathcal Y}\big)<\delta$. Then, by definition, there exist a metric space $(Z,r_Z)$, isometric embeddings $\varphi_{X}$ and $\varphi_{Y}$ from $\mathrm{supp}(\mu_X)$ and $\mathrm{supp}(\mu_Y)$, respectively, into $Z$, and a coupling $\hat{\mu}$ of $(\varphi_{X})_\ast\mu_X$ and $(\varphi_{Y})_\ast\mu_Y$ such that \be{p6q} \hat{\mu}\big\{(z,z'):\,r_Z(z,z')\ge\delta\big\} < \delta. \end{equation} Hence, with the special choice of the coupling $\tilde\mu$ of $\mu_X$ and $\mu_Y$ defined by $\tilde{\mu}(A\times B)=\hat{\mu}\big(\varphi_{X}(A)\times \varphi_{Y}(B)\big)$, for all $A\in{\mathcal B}(X)$ and $B\in{\mathcal B}(Y)$, \be{l15} \begin{aligned} &\tilde\mu^{\otimes 2}\big\{(x,y),(x',y')\in (X\times Y)^2:\, |r_X(x,x') - r_Y(y,y')|\ge 2\delta\big\} \\ &\leq \tilde\mu^{\otimes 2}\big\{(x,y), (x',y')\in (X\times Y)^2:\, \\ &\qquad r_Z(\varphi_X(x), \varphi_Y(y))\ge\delta \text{ or } r_Z(\varphi_X(x'), \varphi_Y(y'))\ge\delta\big\} \\ &< 2\delta. \end{aligned} \end{equation} This implies that $d_{\mathrm{Eur}}\big({\mathcal X},{\mathcal Y}\big)<2\delta$. \end{proof}\sm \begin{proof}[Proof of Proposition~\ref{propMet2}] Observe that by Lemma~\ref{l:dvelta}, $v_\delta(\mathcal X)\overset{\delta\to 0}{\longrightarrow}0$. So Lemma~\ref{LL} implies the {\em equivalence} of $d_{\rm{GPr}}$ and $d_{\rm{Eur}}$ once we have shown that $d_{\rm{Eur}}$ is indeed a metric. The \emph{symmetry} is clear.
If ${\mathcal X}$, ${\mathcal Y}\in\mathbb{M}$ are such that $d_{\rm{Eur}}(\mathcal X, \mathcal Y)=0$, then $d_{\rm{GPr}}(\mathcal X,\mathcal Y)=0$ by Part (i) of Lemma~\ref{LL}, and hence $\mathcal X=\mathcal Y$. For the \emph{triangle inequality}, let $\mathcal X_i=(X_i,r_i,\mu_i)\in\mathbb{M}, i=1,2,3$, be such that $d_{\rm{Eur}}(\mathcal X_1,\mathcal X_2)<\varepsilon$ and $d_{\rm{Eur}}(\mathcal X_2,\mathcal X_3)<\delta$ for some $\varepsilon,\delta>0$. Then there exist couplings $\tilde\mu_{1,2}$ of $\mu_1$ and $\mu_2$ and $\tilde\mu_{2,3}$ of $\mu_2$ and $\mu_3$ with \be{cou12} \tilde\mu_{1,2}^{\otimes 2}\big\{(x_1,x_2),(x_1', x_2'):\, |r_1(x_1,x_1') - r_2(x_2,x_2')|\ge\varepsilon\big\} < \varepsilon \end{equation} and \be{cou23} \tilde\mu_{2,3}^{\otimes 2}\big\{(x_2,x_3), (x_2', x_3'):\, |r_2(x_2,x_2') - r_3(x_3,x_3')|\ge\delta\big\} < \delta. \end{equation} Introduce the transition kernel $K_{2,3}$ from $X_2$ to $X_3$ defined by \be{kernel} \tilde\mu_{2,3}({\rm d}(x_2,x_3)) = \mu_2({\rm d}x_2) K_{2,3}(x_2, {\rm d}x_3), \end{equation} which exists since $X_2$ and $X_3$ are Polish. Using this kernel, define a coupling $\tilde\mu_{1,3}$ of $\mu_1$ and $\mu_3$ by \begin{equation} \tilde\mu_{1,3} ({\rm d}(x_1,x_3)) := \int_{X_2} \tilde\mu_{1,2}({\rm d}(x_1,x_2)) K_{2,3}(x_2,{\rm d}x_3).
\end{equation} Then \begin{equation} \begin{aligned} \tilde\mu_{1,3}^{\otimes 2}&\big\{ (x_1,x_3),(x_1',x_3'):\, |r_1(x_1,x_1') - r_3(x_3,x_3')|\ge\varepsilon + \delta\big\} \\ & = \int_{X_1^2\times X_2^2\times X_3^2} \tilde\mu_{1,2}({\rm d}(x_1,x_2))\tilde\mu_{1,2}({\rm d}(x_1',x_2')) K_{2,3}(x_2,{\rm d}x_3) K_{2,3}(x_2',{\rm d}x_3') \\ &\qquad \qquad \qquad \qquad\qquad \qquad \qquad \quad \mathbf{1}\big\{|r_1(x_1,x_1') - r_3(x_3,x_3')|\ge\varepsilon + \delta\big\} \\ &\leq \int_{X_1^2\times X_2^2\times X_3^2} \tilde\mu_{1,2}({\rm d}(x_1,x_2))\tilde\mu_{1,2}({\rm d}(x_1',x_2')) K_{2,3}(x_2,{\rm d}x_3) K_{2,3}(x_2',{\rm d}x_3') \\& \qquad \big( \mathbf{1}\{|r_1(x_1,x_1') - r_2(x_2,x_2')|\ge\varepsilon\} + \mathbf{1}\{|r_2(x_2,x_2') - r_3(x_3,x_3')|\ge\delta\}\big) \\ &= \tilde\mu_{1,2}^{\otimes 2}\big\{(x_1,x_2), (x_1',x_2'):\, |r_1(x_1,x_1') - r_2(x_2,x_2')| \ge \varepsilon\big\} \\&\qquad \qquad \qquad + \tilde\mu_{2,3}^{\otimes 2} \big\{(x_2,x_3), (x_2',x_3'):\, |r_2(x_2,x_2') - r_3(x_3,x_3')|\ge \delta\big\} \\ &< \varepsilon + \delta, \end{aligned} \end{equation} which yields $d_{\rm{Eur}}(\mathcal X_1, \mathcal X_3)<\varepsilon+\delta$. \end{proof}\sm \subsection*{The Gromov-Wasserstein and the modified Eurandom metric} The topology of weak convergence for probability measures on a fixed metric space $(Z,r_Z)$ is generated not only by the Prohorov metric, but also by \be{Was} d^{(Z,r_Z)}_{\mathrm{W}}(\mu_1,\mu_2) := \inf_{\tilde\mu}\int_{Z\times Z}\tilde\mu({\rm d}(x,x'))\,\big(r_Z(x,x')\wedge 1\big), \end{equation} where the infimum is over all couplings $\tilde\mu$ of $\mu_1$ and $\mu_2$. This is a version of the {\em Wasserstein metric} (see, for example, \cite{Rachev1991}).
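For uniform measures on $n$ points each, embedded in a common space, the infimum in \eqref{Was} is a linear program over doubly stochastic couplings, so by the Birkhoff--von Neumann theorem it is attained at a permutation coupling. The following brute-force numerical sketch is our own illustration, not part of the text (the function name and matrix representation are assumptions), and is feasible only for small $n$:

```python
import itertools

def wasserstein_uniform(cross):
    """Truncated Wasserstein distance between uniform measures on n
    points each; cross[i][j] = r_Z(x_i, y_j) is the cross-distance in
    a common metric space (Z, r_Z).  For uniform marginals the optimal
    coupling may be taken to be a permutation coupling."""
    n = len(cross)
    return min(
        sum(min(cross[i][p[i]], 1.0) for i in range(n)) / n
        for p in itertools.permutations(range(n))
    )
```

For instance, for uniform measures on $\{0, 0.2\}$ and $\{0.1, 0.5\}$ on the real line, the identity coupling costs $(0.1+0.3)/2 = 0.2$, which beats the swapped coupling.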
Relying on the Wasserstein rather than the Prohorov metric results in two further metrics: the Gromov-Wasserstein metric, i.e., \be{GW} d_{\rm{GW}}(\mathcal X, \mathcal Y) := \inf_{(\varphi_X, \varphi_Y,Z)}d_{\rm{W}}^{(Z,r_Z)} \big((\varphi_X)_\ast\mu_X,(\varphi_Y)_\ast\mu_Y\big), \end{equation} where the infimum is over all isometric embeddings from $\mathrm{supp}(\mu_X)$ and $\mathrm{supp}(\mu_Y)$ into a common metric space $(Z,r_Z)$, and the modified Eurandom metric \begin{equation}\label{eq:p003} \begin{aligned} d_{\mathrm{Eur}}'\big(&\mathcal X, \mathcal Y\big):= \\ & \inf_{\tilde\mu} \int\tilde\mu(\mathrm{d}(x,y)) \tilde \mu(\mathrm{d}(x',y'))\big(|r_X(x,x')-r_Y(y,y')|\wedge 1\big), \end{aligned} \end{equation} where the infimum is over all couplings of $\mu_X$ and $\mu_Y$. \begin{remark} An $L^2$-version of $d_{\rm{GW}}$ on the set of compact metric measure spaces is already used in \cite{Stu2006}, where it is shown that this metric is complete and that the generated topology is separable. $\qed$\end{remark}\sm One may ask whether bounds similar to those given in Lemma~\ref{LL} can be achieved with the Gromov-Prohorov metric replaced by the Gromov-Wasserstein metric and the Eurandom metric replaced by the modified Eurandom metric. \begin{proposition}\label{P:PP} The distances $d_{\rm{GW}}$ and $d_{\rm{Eur}}'$ define metrics on $\mathbb{M}$. Both generate the Gromov-Prohorov topology. Bounds relating these two metrics to $d_{\rm{GPr}}$ and $d_{\rm{Eur}}$ are, for $\mathcal X, \mathcal Y\in\mathbb M$, \begin{equation} (d_{\rm{GPr}}(\mathcal X, \mathcal Y))^2 \leq d_{\rm{GW}}(\mathcal X, \mathcal Y) \leq d_{\rm{GPr}}(\mathcal X, \mathcal Y) \end{equation} and \begin{equation} (d_{\rm{Eur}}(\mathcal X, \mathcal Y))^2 \leq d_{\rm{Eur}}'(\mathcal X, \mathcal Y) \leq d_{\rm{Eur}}(\mathcal X, \mathcal Y). \end{equation} Consequently, the Gromov-Wasserstein metric is complete.
\end{proposition}\sm \begin{proof} The fact that $d_{\rm{GW}}$ and $d_{\rm{Eur}}'$ define metrics on $\mathbb{M}$ is proved analogously to the corresponding statements for the Gromov-Prohorov and the Eurandom metric. The Prohorov metric and the version of the Wasserstein metric used in \eqref{GW} and \eqref{eq:p003} on fixed metric spaces can be bounded uniformly against each other (see, for example, Theorem~3 in \cite{GibbsSu2001}). This immediately carries over to the present case. \end{proof}\sm \appendix \section{Additional facts on Gromov-Hausdorff convergence} Recall the notion of the Gromov-Hausdorff distance on the space $\mathbb X_{\rm{c}}$ of isometry classes of compact metric spaces given in \eqref{eq:GH1}. We give a statement concerning convergence in the Gromov-Hausdorff metric which is analogous to Lemma \ref{l:Gpronespace} for Gromov-Prohorov convergence. \begin{lemma}\label{l:met2} Let $(X,r_X)$, $(X_1,r_{X_1})$, $(X_2,r_{X_2})$, ... be in $\mathbb X_{\rm{c}}$. Then $$d_{\mathrm{GH}}(X_n,X)\overset{n\to\infty}{\longrightarrow} 0$$ if and only if there are a compact metric space $(Z,r_Z)$ and isometric embeddings $\varphi$, $\varphi_1$, $\varphi_2$, ... of $(X,r_X)$, $(X_1,r_{X_1})$, $(X_2,r_{X_2})$, ..., respectively, into $(Z,r_Z)$ such that \begin{align}\label{eq:A21}d^{(Z,r_Z)}_{\mathrm{H}}\big(\varphi_n(X_n),\varphi(X)\big) \overset{n\to\infty}{\longrightarrow} 0. \end{align} \end{lemma}\sm \begin{proof}The ``if''-direction is clear. So we come immediately to the ``only if''-direction. If $d_{\mathrm{GH}}\big(X_n,X\big)\overset{n\to\infty}{\longrightarrow} 0$, then by (\ref{GH}) we find correspondences $R_n$ between $X$ and $X_n$ such that $\mathrm{dis}(R_n)\overset{n\to\infty}{\longrightarrow} 0$. Using these and $X_0:=X$, we recursively define metrics $r_{Z_n}$ on $Z_n:= \bigsqcup\nolimits_{k=0}^n X_k$. First, set $Z_1:=X_0\sqcup X_1$ and $r_{Z_1} := r_{Z_1}^{R_1}$ (recall Remark \ref{rem:met1}). In the $n^{\mathrm{th}}$ step, we are given a metric on $Z_n$.
Consider the canonical isometric embedding $\varphi$ from $X$ to $Z_n$ and define the relation $\tilde R_{n+1}\subseteq Z_n\times X_{n+1}$ by \be{Rn1} \tilde R_{n+1} := \big\{(z,x)\in Z_n\times X_{n+1}:\,(\varphi^{-1}(z),x) \in R_{n+1}\big\}, \end{equation} and set $r_{Z_{n+1}}:=r_{Z_{n+1}}^{\tilde R_{n+1}}$. By this procedure we end up with a metric $r_{Z}$ on $Z:=\bigsqcup\nolimits_{n=0}^\infty X_n$ and isometric embeddings $\varphi_0$, $\varphi_1$, ... of $X_0$, $X_1$, ..., respectively, into $Z$ such that \be{e:009a} d^{(Z,r_Z)}_{\mathrm{H}}\big(\varphi_n(X_n),\varphi(X)\big) = \frac12 \mathrm{dis}(R_n) \overset{n\to\infty}{\longrightarrow} 0. \end{equation} W.l.o.g.\ we can assume that $Z$ is complete; otherwise we just embed everything into the completion of $Z$. To verify {\em compactness} of $(Z,r_Z)$ it is therefore sufficient to show that $Z$ is totally bounded (see, for example, Theorem 1.6.5 in \cite{BurBurIva01}). For that purpose fix $\varepsilon>0$. Since $X$ is compact, we can choose a finite $\varepsilon/2$-net $S$ in $X$. Then for all $x\in Z$ with $r_Z(x,X)<\varepsilon/2$ there exists $x'\in S$ such that $r_Z(x,x')<\varepsilon$. Moreover, $d_{\mathrm{H}}\big(\varphi_n(X_n),\varphi(X)\big)<\varepsilon$, for all but finitely many $n\in\N$. For the remaining $\varphi_n(X_n)$ choose finite $\varepsilon$-nets and denote their union by $\tilde S$. In this way, $S\cup \tilde S$ is a finite set, and $\{B_\varepsilon(s):\,s\in S\cup \tilde S\}$ is a covering of $Z$. \end{proof}\sm {\sc Acknowledgements. } The authors thank Anja Sturm and Theo Sturm for helpful discussions, and Reinhard Leipert for help on Remark~\ref{Rem:07}(ii). Our special thanks go to Steve Evans for suggesting to verify that the Gromov-weak topology is not weaker than the Gromov-Prohorov topology and to Vlada Limic for encouraging us to write a paper solely on the topological aspects of genealogies.
Finally, we thank an anonymous referee for several helpful comments that improved the presentation and correctness of the paper. \end{document}
\begin{document} \title{Data Reduction for Maximum Matching on Real-World Graphs: Theory and Experiments \thanks{This work was partially supported by the DFG project FPTinP (NI 369/16).}} \author{Tomohiro~Koana \and Viatcheslav~Korenwein \and André~Nichterlein \and Rolf~Niedermeier \and Philipp~Zschoche} \date{Institut f\"ur Softwaretechnik und Theoretische Informatik, TU~Berlin, Germany,\\ \texttt{\small \{tomohiro.koana,andre.nichterlein,rolf.niedermeier,zschoche\}@tu-berlin.de}} \maketitle \begin{abstract} Finding a maximum-cardinality or maximum-weight matching in (edge-weighted) undirected graphs is among the most prominent problems of algorithmic graph theory. For $n$-vertex and~$m$-edge graphs, the best known algorithms run in $\widetilde{O}(m\sqrt{n})$ time. We build on recent theoretical work focusing on linear-time data reduction rules for finding maximum-cardinality matchings and complement the theoretical results by presenting and analyzing (thereby employing the kernelization methodology of parameterized complexity analysis) new (near-)linear-time data reduction rules for both the unweighted and the positive-integer-weighted case. Moreover, we experimentally demonstrate that these data reduction rules provide significant speedups of state-of-the-art implementations for computing matchings in real-world graphs: the average speedup factor is~4.7 in the unweighted case and 12.72 in the weighted case. \end{abstract} \section{Introduction} In their book chapter on matching, \citet{KV18} write that ``matching theory is one of the classical and most important topics in combinatorial theory and optimization''. Correspondingly, the design and analysis of (weighted) matching algorithms plays a pivotal role in algorithm theory as well as in practical computing.
Complementing the rich literature on matching algorithms (see \citet{CDP19} and \citet{DPS18} for recent accounts, the latter also providing a literature overview), in this work we focus on efficient linear-time data reduction rules that may help to speed up superlinear-time matching algorithms. Notably, while recent breakthrough results on (weighted) matching (including linear-time approximation algorithms~\cite{DP14}) focus on the theory side, we study theory and practice, thereby contributing to both sides. To achieve our results, we follow and complement recent purely theoretical work~\cite{MNN20} presenting and analyzing linear-time data reductions for the unweighted case. More specifically, on the theoretical side we provide and analyze further data reduction rules for the unweighted as well as the weighted case. On the practical side, we demonstrate that these data reduction rules may serve to speed up various matching solvers (including state-of-the-art ones) due to \citet{HSt17}, \citet{KP98}, and \citet{Kol09}. Formally, we study the following two problems; note that we formulate them as decision problems since this better fits with presenting our theoretical part where we prove kernelization results (thereby employing the framework of parameterized complexity analysis). However, all our data reduction rules are ``parameter-oblivious'' and thus also work and are implemented for the optimization versions where the solution size is not known in advance.
\problemdef{\textsc{Maximum-Cardinality Matching}\xspace} {An undirected graph~$G=(V,E)$ and $s \in \mathds{N}$.} {Is there a size-$s$ subset~$M \subseteq E$ of nonoverlapping (that is, pairwise vertex-disjoint) edges?} \problemdef{\textsc{Maximum-Weight Matching}\xspace} {An undirected graph~$G=(V,E)$, non-negative edge weights~$\omega\colon E \rightarrow \mathds{N}$, and~$s \in \mathds{N}$.} {Is there a subset~$M \subseteq E$ of nonoverlapping edges of weight~$\sum_{e \in M} \omega(e) \ge s$?} We remark that all our results extend to the case of rational weights; however, natural numbers are easier to cope with. \paragraph{Related work.} \citet{MV80} were the first to announce an~$O(\sqrt{n}m)$-time algorithm for \textsc{Maximum-Cardinality Matching}\xspace on graphs with~$n$ vertices and~$m$ edges; see \citet{Vaz20} for the details of this algorithm. This time bound was previously achieved only for bipartite graphs~\cite{HK73}. While the classic matching algorithm of \citet{HK73} is simple, elegant, and also very efficient in practice (in fact we use it in one kernelization algorithm as subroutine), the algorithm of \citet{MV80} is rather complicated and not (yet) competitive in practice.\footnote{The only implementation of the algorithm of \citet{MV80} we are aware of is due to \citet{HSt17}. This solver was the slowest in our experiments.} In fact, the fastest solver for \textsc{Maximum-Cardinality Matching}\xspace seems to be still the one by \citet{KP98}, with a worst-case running time of~$O(nm \cdot \alpha(n,m))$ ($\alpha$ denotes the inverse of the Ackermann function). The (theoretically) fastest algorithm for \textsc{Maximum-Weight Matching}\xspace in sparse graphs is by \citet{DPS18} with a running time of~$O(\sqrt{n}m \log (nN))$ (here~$N$ denotes the largest integer weight). 
In practice, the fastest solver we found is due to \citet{Kol09}, which is an implementation of Edmonds' algorithm~\cite{Edm65,Edm65-2} for a perfect matching of minimum cost combined with many heuristic speedups. Providing parameterized algorithms or kernels for \textsc{Maximum-Cardinality Matching}\xspace has recently gained high interest~\cite{CDP19,KN18,HK19,MNN20,GMN17,FLSPW18,IOO17}. For \textsc{Maximum-Weight Matching}\xspace, however, we are only aware of the work by \citet{IOO17} who provided an algorithm with running time~$O(t (m + n \log n))$ (here~$t$ is the tree-depth of the input graph). In this work, we transfer some data reduction rules for \textsc{Vertex Cover} to \textsc{Maximum-Cardinality Matching}\xspace. To this end, we use the algorithm by \citet{IOY14} that in~$O(m\sqrt{n})$ time exhaustively applies an LP-based data reduction rule due to~\citet{NT75}. We refer to \citet{HLSS20} for a brief overview of practically relevant data reduction rules for \textsc{Vertex Cover}. Very recently, \citet{KLPU20} provided a fine-tuned implementation of degree-based data reduction rules for \textsc{Maximum-Cardinality Matching}\xspace which is on average three times faster than our implementation when considering the same data reduction rules (see \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices} in \Cref{ssec:low-degree-rules}). \paragraph{Our contributions.} We extend kernelization results~\cite{MNN20} for \textsc{Maximum-Cardinality Matching}\xspace and lift them to \textsc{Maximum-Weight Matching}\xspace. Our data reduction rules for \textsc{Maximum-Cardinality Matching}\xspace are well-known (as crown rule~\cite{JCF04} and LP-based rule~\cite{NT75}) for the NP-hard \textsc{Vertex Cover} problem. Our theoretical contribution here is to show that the crown rule is also correct for \textsc{Maximum-Cardinality Matching}\xspace. 
Moreover, we prove that the exhaustive application of the crown rule and the exhaustive application of the LP-based rule lead to the very same graph; thus these two known rules can be seen as equivalent. We provide algorithms to efficiently apply our data reduction rules (for the unweighted and the weighted case). Herein, we have a particular eye on exhaustively applying the data reduction rules in (near) linear time, which seems imperative in an effort to practically improve matching algorithms. Hence, our main theoretical contribution lies in developing efficient algorithms implementing the data reduction rules, thereby also showing a purely theoretical guarantee on the amount of data reduction that can be achieved in the worst case (this is also known as kernelization in parameterized algorithmics). We proceed by implementing and testing the data reduction algorithms for \textsc{Maximum-Cardinality Matching}\xspace and \textsc{Maximum-Weight Matching}\xspace, thereby demonstrating their practical effectiveness. More specifically, combining them in the form of preprocessing with various solvers~\cite{Kol09,HSt17,KP98} yields in part huge speedups on sparse real-world graphs (taken from the SNAP library~\cite{snap}). We refer to \cref{tab:results} for an overview of the various solvers (with the core algorithmic approach they implement) and the speedup factors obtained by applying our data reduction rules as a preprocessing. \begin{table} \caption{ Summary of the speedup factors gained by our kernelization on graphs from the SNAP library~\cite{snap} with various solvers. We refer to \cref{sec:experiments} for details.
} \centering \begin{tabular}{l l @{\hskip 1cm} r r} \toprule \multicolumn{2}{c}{solver} & \multicolumn{2}{c}{speedup}\\ implemented by & algorithmic approach by & average & median \\ \midrule \citet{Kol09} (unweighted) & \citet{Edm65,Edm65-2} & $157.30$ & $29.27$ \\ \citet{HSt17} & \citet{MV80} & $608.79$ & $28.87$ \\ \citet{KP98} & \citet{Edm65} & $4.70$ & $2.20$ \\ \midrule \citet{Kol09} (weighted) & \citet{Edm65,Edm65-2} & $12.72$ & $1.40$ \\ \bottomrule \end{tabular} \label{tab:results} \end{table} \paragraph{Notation.} We use standard notation from graph theory. All graphs considered in this work are simple and undirected. For a graph~$G = (V,E)$, we denote with~$E(G) = E$ the edge set. For a vertex subset~$V' \subseteq V$, we denote with~$G[V']$ the subgraph induced by~$V'$. We write $uv$ to denote the edge~$\{u,v\}$ and~$G-v$ to denote the graph obtained from~$G$ by removing~$v$ and all its incident edges. A \emph{feedback edge set} of a graph~$G$ is a set~$X$ of edges such that~$G-X = (V, E \setminus X)$ is a forest. The \emph{feedback edge number} denotes the size of a minimum feedback edge set. A \emph{vertex cover} in a graph is a set of vertices that has a nonempty intersection with each edge in the graph. A \emph{matching} in a graph is a set of pairwise vertex-disjoint edges. Let~$G$ be a graph and let~$M \subseteq E(G)$ be a matching in~$G$. We denote by~$\ensuremath{\text{mm}}(G)$ a maximum-cardinality or a maximum-weight matching in~$G$, depending on whether we have edge weights or not. If there are edge weights~$\omega\colon E \rightarrow \mathds{N}$, then for a matching~$M$ we denote by~$\omega(M) := \sum_{e\in M}\omega(e)$ the weight of~$M$. Moreover, we denote with~$\omega(G)$ the weight of a maximum-weight matching~$\ensuremath{\text{mm}}(G)$, i.\,e.~$\omega(G) := \omega( \ensuremath{\text{mm}}(G) )$.
A vertex~$v \in V$ is called \emph{matched} with respect to~$M$ if there is an edge in~$M$ containing~$v$, otherwise~$v$ is called \emph{free} with respect to~$M$. If the matching~$M$ is clear from the context, then we omit ``with respect to~$M$''. \paragraph{Kernelization.} A \emph{parameterized problem} is a set of instances~$(I,k)$ where~$I \in\Sigma^*$ for a finite alphabet $\Sigma$ and~$k\in \mathbb{N}$ is the \emph{parameter}. We say that two instances~$(I,k)$ and $(I',k')$ of parameterized problems~$P$ and~$P'$ are \emph{equivalent} if~$(I,k)$ is a yes-instance for~$P$ if and only if~$(I',k')$ is a yes-instance for~$P'$. A \emph{kernelization} is an algorithm that, given an instance~$(I,k)$ of a parameterized problem~$P$, computes in polynomial time an equivalent instance~$(I',k')$ of~$P$ (the \emph{kernel}) such that $|I'|+k'\leq f(k)$ for some computable function~$f$. We say that~$f$ measures the \emph{size} of the kernel, and if~$f(k)\in k^{O(1)}$, then we say that $P$~admits a polynomial kernel. Typically, a kernel is achieved by applying polynomial-time executable data reduction rules. We call a data reduction rule~$\mathcal{R}$ \emph{correct} if the new instance~$(I',k')$ that results from applying~$\mathcal{R}$ to~$(I,k)$ is equivalent to~$(I,k)$. An instance is called \emph{reduced} with respect to some data reduction rule if further application of this rule has no effect on the instance. \paragraph{Structure of this work.} In \Cref{sec:unweighted-kernel,sec:weigted-kernel}, we provide the kernelization results for \textsc{Maximum-Cardinality Matching}\xspace and \textsc{Maximum-Weight Matching}\xspace which we experimentally evaluate on real-world data sets in \Cref{sec:experiments}. In \Cref{sec:unweighted-kernel} we discuss the unweighted case by recalling old and presenting new data reduction rules. 
In \Cref{sec:weigted-kernel} we show how to extend some of the data reduction rules presented for \textsc{Maximum-Cardinality Matching}\xspace to \textsc{Maximum-Weight Matching}\xspace. In \Cref{sec:experiments}, we describe our experimental results, discuss the effect of our data reduction rules on state-of-the-art solvers, and evaluate the prediction quality of our theoretical kernelization results. We conclude in \Cref{sec:conclusion} with a glimpse of future research challenges. \section{Maximum-Cardinality Matching} \label{sec:unweighted-kernel} For \textsc{Maximum-Cardinality Matching}\xspace we first recall in \Cref{ssec:low-degree-rules} simple data reduction rules for low-degree vertices due to a classic result of \citet{KS81}. Then we improve the known kernel size for \textsc{Maximum-Cardinality Matching}\xspace parameterized by the feedback edge number when only these two data reduction rules are exhaustively applied~\cite{MNN20}. In \Cref{ssec:crowns-LP}, we discuss the crown data reduction rule (designed for \textsc{Vertex Cover}\xspace \cite{JCF04}) and show that it also works for \textsc{Maximum-Cardinality Matching}\xspace. To this end, we briefly describe a classic LP-based data reduction due to \citet{NT75}. It was known that this LP-based data reduction also removes all crowns from the input graph \cite{AI16,IOY14}. We note that this does not immediately imply that we can use the LP-based data reduction in the context of \textsc{Maximum-Cardinality Matching}\xspace (its correctness is only known for \textsc{Vertex Cover}\xspace). We prove that exhaustively applying the crown data reduction rule is equivalent to ``exhaustively'' applying the LP-based data reduction. This allows us to use the algorithm of \citet{IOY14} that exhaustively applies the LP-based data reduction in order to remove all crowns and nothing else. Finally, we show in \cref{ssec:relaxed-crowns} a generalization of the crown data reduction rule.
However, we leave it open how to apply this generalized crown data reduction rule efficiently. Note that we have implemented all of these data reduction rules (except the generalized crown data reduction rule); see \cref{sec:experiments} for an evaluation. \subsection{Removing low-degree vertices}\label{ssec:low-degree-rules} For \textsc{Maximum-Cardinality Matching}\xspace two simple data reduction rules are due to a classic result of \citet{KS81}. They deal with vertices of degree at most two. \begin{rrule}[\cite{KS81}]\label{rule:deg-zero-one-vertices} Let~$v \in V$. If~$\deg(v) = 0$, then delete~$v$. If~$\deg(v) = 1$, then delete~$v$ and its neighbor, and decrease the solution size~$s$ by one. \end{rrule} \begin{rrule}[\cite{KS81}]\label{rule:deg-two-vertices} Let~$v$ be a vertex of degree two and let~$u,w$ be its neighbors. Then remove~$v$, merge~$u$ and~$w$, and decrease the solution size~$s$ by one. \end{rrule} If the degree of the considered vertex~$v$ is zero, then the maximum matching size remains unchanged. Otherwise, we have $|\ensuremath{\text{mm}}(G)| = |\ensuremath{\text{mm}}(G')|+1$, where~$G'$ is the instance resulting from one application of either \cref{rule:deg-zero-one-vertices} or~\ref{rule:deg-two-vertices}: When applying \cref{rule:deg-zero-one-vertices}, $v$~is matched to its only neighbor~$u$. For \cref{rule:deg-two-vertices} the situation is not as clear, since~$v$ is matched to~$u$ or to~$w$ depending on what the maximum-cardinality matching in the rest of the graph looks like. Thus, one can only fix the matching edge with endpoint~$v$ (in the original graph) in a simple postprocessing step. Each of the above data reduction rules can be exhaustively applied in linear time. While for \cref{rule:deg-zero-one-vertices} this is easy to see, for \cref{rule:deg-two-vertices} the algorithm needs further ideas~\cite{BK09a}.
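To make the two rules concrete, the following is a minimal Python sketch (our own illustration, not the linear-time algorithm of \citet{BK09a}): it applies both rules exhaustively on an adjacency-set representation and reports how many matching edges the rules account for.

```python
def reduce_low_degree(adj):
    """Exhaustively apply the degree-0/1 rule and the degree-2 rule
    (merge the two neighbors) to a simple undirected graph given as
    {vertex: set(neighbors)}.  Returns (reduced_graph, gain), where
    gain counts the matching edges accounted for by the rules."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    gain = 0
    queue = [v for v in adj if len(adj[v]) <= 2]
    while queue:
        v = queue.pop()
        if v not in adj:
            continue
        deg = len(adj[v])
        if deg == 0:                      # isolated vertex: delete
            del adj[v]
        elif deg == 1:                    # match v to its sole neighbor u
            (u,) = adj[v]
            for x in adj[u] - {v}:        # u disappears together with v
                adj[x].discard(u)
                if len(adj[x]) <= 2:
                    queue.append(x)
            del adj[v], adj[u]
            gain += 1
        elif deg == 2:                    # merge the neighbors u and w
            u, w = adj[v]
            adj[u].discard(v)
            adj[w].discard(v)
            for x in adj[w]:              # redirect w's edges to u
                adj[x].discard(w)
                if x != u:                # avoid a self-loop
                    adj[x].add(u)
                    adj[u].add(x)
                if len(adj[x]) <= 2:
                    queue.append(x)
            del adj[v], adj[w]
            gain += 1
            if u in adj and len(adj[u]) <= 2:
                queue.append(u)
    return adj, gain
```

On a path on four vertices both rules fire and the graph vanishes with two matching edges accounted for; vertices of degree at least three are only removed once earlier reductions lower their degree.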
However, to exhaustively apply both data reduction rules together, only algorithms with superlinear running times are known \cite{BK20}. Using the above data reduction rules, one can obtain a kernel with respect to the parameter \paramEnv{feedback edge number}, that is, the size of a minimum feedback edge set. We refer to \cref{sec:eval-kernel} for a practical evaluation of a theoretical upper bound on the kernel size. \begin{theorem}[\cite{MNN20}]\label{thm:fes-lin-kernel} \textsc{Maximum-Cardinality Matching}\xspace{} admits a linear-time computable kernel with at most $2k-1$ vertices and at most $3k-2$ edges, when parameterized by the \paramEnv{feedback edge number}~$k$. \end{theorem} \noindent \citet{MNN20} originally proved a kernel with at most $12k$ vertices and $13k$ edges. However, we can tighten this upper bound in the following way. \begin{proof}[Proof of \cref{thm:fes-lin-kernel}] Our kernelization procedure consists of the following two steps: \begin{enumerate} \item Apply \cref{rule:deg-zero-one-vertices} exhaustively in linear time. \label{step:deg-one} \item Apply \cref{rule:deg-two-vertices} exhaustively in linear time \cite{BK09a}. \label{step:deg-two} \end{enumerate} Let~$G^{(1)} = (V^{(1)}, E^{(1)})$ and~$G^{(2)} = (V^{(2)}, E^{(2)})$ be the graphs obtained after Steps \eqref{step:deg-one} and \eqref{step:deg-two}, respectively. Thus, $G^{(2)}$ is the graph returned by the kernelization algorithm. Note that~$G^{(2)}$ might contain isolated vertices and degree-one vertices. To evaluate the size of~$G^{(2)}$, we first analyze the structure of~$G^{(1)}$. Since~$G^{(1)}$ is an induced subgraph of the input graph~$G$, it follows that~$G^{(1)}$ has a feedback edge set $F^{(1)} \subseteq E^{(1)}$ of size at most $k$. We show that~$G^{(1)}$ contains at most~$2k-1$ vertices of degree at least three. To this end, consider the graph~$G^{(1)}_F=(V^{(1)}_F,E^{(1)}_F)$ obtained from~$G^{(1)}$ as follows. Remove the edges in $F^{(1)}$.
For each removed edge $e = uw \in F^{(1)}$, we introduce two degree-one vertices $v_u^{e}$ and $v_w^{e}$ adjacent to $u$ and $w$, respectively. Formally, we have \[V^{(1)}_F := V^{(1)} \cup \left\{ v_u^{e},v_w^{e} \mid e=uw \in F^{(1)} \right\} \;\;\text{and}\;\; E^{(1)}_F := (E^{(1)} \setminus F^{(1)}) \cup \left\{ uv_u^{e}, wv_w^{e} \mid e=uw \in F^{(1)} \right\}.\] Observe that~$G^{(1)}_F$ is a forest where~$V^{(1)}_F \setminus V^{(1)}$ are the leaves and~$V^{(1)} \subseteq V^{(1)}_F$ are the internal vertices. Since~$|F^{(1)}| \le k$, we have at most~$2k$ leaves. Since~$G^{(1)}_F$ is a forest, it follows that~$G^{(1)}_F$ has at most~$2k-1$ vertices of degree at least three. We partition the vertex set of~$G^{(1)}$ into $V^{(1)} = V^{(1)}_2 \cup V^{(1)}_{\geq 3}$, where $V^{(1)}_2$ are the vertices of degree two and $V^{(1)}_{\geq 3}$ are the vertices of degree at least three. Since \cref{rule:deg-zero-one-vertices} is exhaustively applied, the minimum degree in~$G^{(1)}$ is at least two. Moreover, note that by construction of~$G^{(1)}_F$, the set of vertices with degree at least three is identical in~$G^{(1)}_F$ and~$G^{(1)}$. Thus, $|V^{(1)}_{\geq 3}| \le 2k-1$. Note that exhaustively applying \cref{rule:deg-two-vertices} in Step~\eqref{step:deg-two} on $G^{(1)}$ will remove all vertices in~$V^{(1)}_2$ (and possibly some of $V^{(1)}_{\geq 3}$). Thus, the resulting graph~$G^{(2)}$ of our kernelization procedure has at most $2k-1$ vertices and consequently at most $3k-2$ edges. \end{proof} Applying the~$O(m\sqrt{n})$-time algorithm for \textsc{Maximum-Cardinality Matching}\xspace~\cite{MV80} altogether yields an~$O(n + m + k^{1.5})$-time algorithm, where~$k$ is the feedback edge number.
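For reference, the parameter itself is cheap to compute: the feedback edge number equals $m - n + c$, where $c$ is the number of connected components, i.e., the number of edges outside a spanning forest. A small sketch (our own helper, not part of the kernelization) using union-find:

```python
def feedback_edge_number(n, edges):
    """Feedback edge number of a graph with vertices 0..n-1:
    m - n + (number of connected components), computed by counting
    the edges that close a cycle in a union-find spanning forest."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    components = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                        # tree edge: merge components
            parent[ru] = rv
            components -= 1
    return len(edges) - n + components
```

For an odd cycle this returns $k=1$, so the bound $2k-1$ predicts a single-vertex kernel, in line with the later discussion of $C_{2n+1}$.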
\subsection{Crown and LP-based data reduction}\label{ssec:crowns-LP} Crowns are a classic data reduction tool for \textsc{Vertex Cover}\xspace (given an undirected graph, find a smallest set of vertices that covers all edges) and can be seen as a generalization of \cref{rule:deg-zero-one-vertices} \cite{JCF04}. A crown satisfies the following properties (see \cref{fig:crown} for a visualization): \begin{definition}\label{def:crown} A \emph{crown} in a graph~$G = (V,E)$ is a pair $(H,I)$ such that \begin{enumerate} \item $I \subseteq V$ is an independent set in~$G$ (no two vertices in~$I$ are adjacent in~$G$), \label{crown:IS} \item $H = N(I) := \bigcup_{v \in I} N(v)$, and \label{crown:Neighbors} \item there is a matching~$M_{H,I}$ between~$H$ and~$I$ that matches all vertices in~$H$. \label{crown:Matching} \end{enumerate} \end{definition} \begin{figure} \caption{An example of a crown where the vertices in~$H$ but not the ones in~$I$ might have further neighbors in the graph.} \label{fig:crown} \end{figure} It is not hard to see that, given a crown~$(H,I)$, there is a minimum vertex cover containing all vertices in~$H$: The matching~$M_{H,I}$ implies that a minimum-size vertex cover in~$G[H \cup I]$ has size~$|M_{H,I}| = |H|$. Since~$I$ is an independent set and~$H = N(I)$, taking all vertices in~$H$ into a vertex cover is at least as good as taking some vertices of~$I$. Thus, we end up with the following data reduction rule. \begin{rrule}[\cite{JCF04}]\label{rule:crown} Let~$(H,I)$ be a crown. Then remove all vertices in~$H \cup I$ and decrease the solution size~$s$ by~$|H|$. \end{rrule} Luckily, \cref{rule:crown} not only works for \textsc{Vertex Cover}\xspace but also for \textsc{Maximum-Cardinality Matching}\xspace: Simply adding the matching~$M_{H,I}$ to any maximum-cardinality matching in the reduced graph results in a maximum-cardinality matching for~$G$, as proven in the next lemma.
\begin{lemma} \cref{rule:crown} is correct, that is, $|\ensuremath{\text{mm}}(G)| = |\ensuremath{\text{mm}}(G')| + |H|$. \end{lemma} \begin{proof} Let~$(H,I)$ be a crown in the input graph~$G$ and~$G' := G - (H \cup I)$. Observe that $\ensuremath{\text{mm}}(G') \cup M_{H,I}$ is a matching of cardinality~$|\ensuremath{\text{mm}}(G')| + |H|$ in~$G$, thus~$|\ensuremath{\text{mm}}(G)| \ge |\ensuremath{\text{mm}}(G')| + |H|$. Conversely, observe that~$|\ensuremath{\text{mm}}(G)| \le |\ensuremath{\text{mm}}(G - H)| + |H|$ as each vertex in~$H$ can be matched at most once. However, we have~$\ensuremath{\text{mm}}(G - H) = \ensuremath{\text{mm}}(G - (H \cup I)) = \ensuremath{\text{mm}}(G')$ as each vertex in~$I$ has degree zero in~$G-H$ and can thus be removed (see \cref{rule:deg-zero-one-vertices}). Thus, $|\ensuremath{\text{mm}}(G)| = |\ensuremath{\text{mm}}(G')| + |H|$. \end{proof} Now that we have established that \cref{rule:crown} can also be applied for \textsc{Maximum-Cardinality Matching}\xspace, it remains to do so as fast as possible. However, to find a crown~$(H,I)$ we also need to find a matching~$M_{H,I}$. Moreover, the size of~$M_{H,I}$ depends on the size of the crown; a crown can be quite large (consider for example a complete bipartite graph~$K_{n,n}$: there is only one crown, which is the whole graph). Thus, applying \cref{rule:crown} even once in linear time seems hard to do. Indeed, the best known algorithm to apply \cref{rule:crown} (even just once) runs in~$O(\sqrt{n} m)$ time~\cite{IOY14}. Since we can also compute a maximum-cardinality matching for the input graph in~$O(\sqrt{n} m)$ time, \cref{rule:crown} seems not to be useful (why decrease the size of the input when one can solve the problem in the same time?).
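Verifying that a given pair $(H,I)$ is a crown is straightforward; a small Python check of \cref{def:crown} (our own sketch, finding the $H$-saturating matching with plain augmenting paths rather than Hopcroft--Karp):

```python
def is_crown(adj, H, I):
    """Check the crown conditions for (H, I) in the graph adj
    ({vertex: set(neighbors)}): I independent, H = N(I), and a matching
    between H and I that saturates H (Kuhn's augmenting-path search)."""
    H, I = set(H), set(I)
    if any(adj[v] & I for v in I):                 # condition 1: I independent
        return False
    if set().union(*(adj[v] for v in I)) != H:     # condition 2: H = N(I)
        return False
    match = {}                                     # I-vertex -> matched H-vertex
    def augment(h, seen):
        for i in adj[h] & I:
            if i not in seen:
                seen.add(i)
                if i not in match or augment(match[i], seen):
                    match[i] = h
                    return True
        return False
    return all(augment(h, set()) for h in H)       # condition 3: saturate H
```

For a star, the center together with the set of leaves forms a crown; two vertices sharing a single common neighbor in $I$ do not, since $H$ cannot be saturated.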
However, the algorithm of \citet{IOY14} to exhaustively apply \cref{rule:crown} has only a single step that requires superlinear time: the computation of one maximum-cardinality matching in a \emph{bipartite} graph with~$O(n+m)$ vertices and edges. To find such a matching, we used a straightforward implementation of the classic algorithm of \citet{HK73}. It turned out in our experiments that even this implementation is faster than computing a maximum-cardinality matching in non-bipartite graphs with any of the implementations for \textsc{Maximum-Cardinality Matching}\xspace that we tested. \paragraph{Exhaustively removing crowns.} Subsequently, we briefly sketch the algorithm of \citet{IOY14} and prove that it exhaustively applies \cref{rule:crown} in an input graph~$G=(V,E)$. Note that their algorithm efficiently applies a classic linear programming (LP) based data reduction rule for \textsc{Vertex Cover}\xspace~\cite{NT75}. We first observe that a straightforward adaptation of this LP-based data reduction rule to \textsc{Maximum-Cardinality Matching}\xspace (working with the LP-relaxation for the \textsc{Maximum-Cardinality Matching}\xspace-ILP) does not seem to work (\cref{fig:lp-matching-counterexample} provides a counterexample). However, \citet{AI16} already observed that the LP-based data reduction rule of \citet{IOY14} (working with the LP-relaxation for the \textsc{Vertex Cover}\xspace-ILP) also removes all crowns from the input graph. It was hence already clear that the LP-based data reduction rule is at least as powerful as exhaustively applying the crown data reduction rule (in the context of \textsc{Vertex Cover}\xspace). Below, we show that exhaustively applying the crown data reduction rule is exactly as powerful as the LP-based data reduction rule.
To be precise, we prove that ``exhaustively'' applying the LP-based data reduction rule (as \citet{IOY14} did) and exhaustively applying the crown data reduction rule (\cref{rule:crown}) result in exactly the same graph. Thus, we can use the algorithm of \citet{IOY14} working with the LP-relaxation for the \textsc{Vertex Cover}\xspace-ILP to apply \cref{rule:crown} in the context of \textsc{Maximum-Cardinality Matching}\xspace. We start by explaining the LP-based kernelization for \textsc{Vertex Cover}\xspace (refer to \citet[Chapter 2]{CFK+15} for a more detailed description with all proofs). The standard \emph{integer} linear program (ILP) for \textsc{Vertex Cover}\xspace is as follows. \begin{align*} \text{Minimize} && \sum_{v \in V} x_v \\ \text{Subject to} && x_v + x_u & \ge 1 & \forall uv \in E \\ && x_v & \in \{0,1\} & \forall v \in V \end{align*} As usual, the LP relaxation (subsequently called VC-LP) is obtained by replacing the constraints~$x_v \in \{0,1\}$ by~$0 \le x_v \le 1$. This LP and its dual LP always have half-integral solutions, that is, there is always an optimal solution such that each variable is assigned a value in~$\{0,\nicefrac{1}{2},1\}$ (this is true for a more general class of LPs called BIP2~\cite{Hoc02}). Given a half-integral solution for the VC-LP, define the following three sets of vertices corresponding to variables set to~$1$, $\nicefrac{1}{2}$, and~$0$, respectively: \begin{itemize} \item $V_1 := \{v \mid x_v = 1\}$, \item $V_{\nicefrac{1}{2}} := \{v \mid x_v = \nicefrac{1}{2}\}$, and \item $V_{0} := \{v \mid x_v = 0\}$. \end{itemize} The following classic result of \citet{NT75} forms the basis for the LP-based kernelization for \textsc{Vertex Cover}\xspace. \begin{theorem}[\cite{NT75}]\label{thm:NT} There is a minimum vertex cover~$S$ for~$G$ such that~$V_1 \subseteq S \subseteq V_1 \cup V_{\nicefrac{1}{2}}$.
\end{theorem} \begin{remark} The dual of the VC-LP is the following relaxation of the ILP for \textsc{Maximum-Cardinality Matching}: \begin{align*} \text{Maximize} && \sum_{uv \in E} y_{uv} \\ \text{Subject to} && \sum_{u \in N(v)} y_{uv} & \le 1 & \forall v \in V \\ && 0 \le y_{uv} & \le 1 & \forall uv \in E \end{align*} Notably, a statement similar to \cref{thm:NT} does not hold for \textsc{Maximum-Cardinality Matching}; see \cref{fig:lp-matching-counterexample} for two counterexamples. \newcommand{\drawTikzClique}[4]{ \foreach \i in {1,...,#2} { \node[knoten] (#1-\i) at ({#3 * cos(#4 + 360 * \i / #2 - 360 / #2)},{#3 * sin(#4 + 360 * \i / #2 - 360 / #2)}) {}; } \foreach \i in {2,...,#2} { \pgfmathtruncatemacro{\runs}{\i - 1}; \foreach \j in {1,...,\runs} { \draw[majarr] (#1-\j) edge (#1-\i); } } } \newcommand{\drawTikzWeightedCycle}[5]{ \foreach \i in {1,...,#2} { \node[knoten] (#1-\i) at ({#3 * cos(#4 + 360 * \i / #2 - 360 / #2)},{#3 * sin(#4 + 360 * \i / #2 - 360 / #2)}) {}; } \foreach \i in {2,...,#2} { \pgfmathtruncatemacro{\ii}{\i - 1}; \draw[majarr] (#1-\i) edge node{#5} (#1-\ii); } \draw[majarr] (#1-1) edge node{#5} (#1-#2); } \begin{figure} \caption{Examples in which a solution for the matching-LP does not help in finding a maximum-cardinality matching. In both graphs, bold edges indicate a perfect matching and the numbers next to the edges give a valid optimal solution to the LP for \textsc{Maximum-Cardinality Matching}.} \label{fig:lp-matching-counterexample} \end{figure} \end{remark} The following data reduction rule is an immediate consequence of \cref{thm:NT}: \begin{rrule}\label{rule:LP} Compute a solution for the VC-LP, remove all vertices in~$V_0 \cup V_1$, and decrease the solution size by~$|V_1|$. \end{rrule} \citet{AFLS07,CC08} independently showed that \Cref{rule:LP} removes a crown (see \cref{def:crown} for its definition). We include the proof for the sake of completeness.
\begin{lemma}[\cite{AFLS07,CC08}] \label{lem:lpcrown} For any optimal solution $X$ for the VC-LP, $(V_1, V_0)$ is a crown. \end{lemma} \begin{proof} It is easy to see that~$V_0$ is an independent set since the constraint~$x_v + x_u \ge 1$ for all~$uv \in E$ forbids setting the variables of two adjacent vertices to zero. Thus, we have that~$N(V_0) \subseteq V_1$. It follows from the optimality of~$X$ that~$N(V_0) = V_1$ (the variable of any vertex~$v \in V_1 \setminus N(V_0)$ could be set to~$\nicefrac{1}{2}$ instead). To show that there is a matching between~$V_0$ and~$V_1$ that matches all vertices in~$V_1$, we use the optimality of~$X$ together with Hall's marriage theorem\footnote{ Hall's marriage theorem states that, in any bipartite graph $G=(X \uplus Y,E)$, there is an $X$-saturating matching if and only if for every subset $W \subseteq X$ we have $|W|\leq |N(W)|$. In other words: every subset $W \subseteq X$ has sufficiently many adjacent vertices in $Y$. }: Assuming that there is no such matching, it follows from Hall's marriage theorem that there is a subset~$W \subseteq V_1$ such that~$|N(W) \cap V_0| < |W|$. However, this implies that setting all variables corresponding to vertices in~$W \cup (N(W) \cap V_0)$ to $\nicefrac{1}{2}$ results in a better solution to the VC-LP, a contradiction to the optimality of~$X$. \end{proof} Subsequently, we first show how to efficiently compute a solution for the VC-LP. Then, we show that ``exhaustively'' applying \cref{rule:LP} and exhaustively applying \cref{rule:crown} leads to exactly the same kernel. A half-integral solution for the VC-LP can be obtained by computing a minimum vertex cover in a bipartite graph~$\overline{G}$: The vertices~$\overline{V}$ of~$\overline{G}$ consist of two copies of~$V$, called~$V_L$ and~$V_R$; for~$i \in \{L,R\}$ we set $V_i := \{v_i \mid v \in V\}$ and~$\overline{V} := V_L \cup V_R$. The edges~$\overline{E}$ of~$\overline{G}$ are as follows: $\overline{E} := \{v_Lu_R,v_Ru_L \mid uv \in E\}$.
Thus, $\overline{G}$ has~$2n$ vertices and~$2m$ edges. Then, a solution for the VC-LP can be constructed from a vertex cover~$\overline{S}$ for~$\overline{G}$ as follows~\cite{NT75}: $$ x_v = \begin{cases} 1, & \text{if } v_L \in \overline{S} \wedge v_R \in \overline{S}, \\ 0, & \text{if } v_L \notin \overline{S} \wedge v_R \notin \overline{S}, \\ \nicefrac{1}{2}, & \text{else.} \end{cases}$$ Hence, using Kőnig's theorem\footnote{Kőnig's theorem states that, in any bipartite graph, the number of edges in a maximum matching is equal to the number of vertices in a minimum vertex cover. Moreover, such a minimum vertex cover can be constructed in linear time given a maximum matching.}, we can compute a solution for the VC-LP in~$O(\sqrt{n}m)$ time with the algorithm of \citet{HK73}. Of course, to make \cref{rule:LP} as strong as possible, we would like to have a solution for the VC-LP with the maximum number of variables set to~$0$ and~$1$. Note that the above solution for the VC-LP does not necessarily fulfill this condition. However, \citet{IOY14} provided an algorithm that, given any solution for the VC-LP, computes in \emph{linear time} an optimal solution for the VC-LP that minimizes, over all half-integral optimal solutions, the number of variables set to~$\nicefrac{1}{2}$. Hence, using the algorithm of \citet{IOY14}, we can exhaustively apply \cref{rule:LP} in~$O(\sqrt{n} m)$ time. By \emph{exhaustively applying \cref{rule:LP}}, we mean that we apply \cref{rule:LP} until the solution setting all variables to~$\nicefrac{1}{2}$ is the unique optimal solution for the VC-LP. It remains to show that exhaustively applying \cref{rule:LP} results in the same instance as exhaustively applying \cref{rule:crown}. This extends the work of \citet{AFLS07} who showed that applying \cref{rule:LP} exhaustively removes all crowns.
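The double-cover construction just described can be sketched compactly. The following Python sketch (our own illustration) follows the same route: a maximum matching in $\overline{G}$ via simple augmenting paths (slower than Hopcroft--Karp, but the same idea), a minimum vertex cover via Kőnig's construction, and the half-integral $x_v$ read off from the two copies of each vertex.

```python
def vc_lp_half_integral(adj):
    """Half-integral optimal VC-LP solution for the graph adj
    ({vertex: set(neighbors)}) via the bipartite double cover:
    left copy vL, right copy vR, edges vL-uR for every uv in E."""
    V = list(adj)
    matchR = {}                            # right vertex -> matched left vertex
    def augment(u, seen):                  # Kuhn's augmenting-path search
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                if w not in matchR or augment(matchR[w], seen):
                    matchR[w] = u
                    return True
        return False
    for v in V:
        augment(v, set())
    matchL = {u: w for w, u in matchR.items()}
    # Kőnig: Z = free left vertices plus everything reachable by
    # alternating paths (unmatched edges L->R, matched edges R->L)
    Zl = {v for v in V if v not in matchL}
    Zr, stack = set(), list(Zl)
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in Zr:
                Zr.add(w)
                if w in matchR and matchR[w] not in Zl:
                    Zl.add(matchR[w])
                    stack.append(matchR[w])
    cover = {(v, 'L') for v in V if v not in Zl} | {(v, 'R') for v in Zr}
    x = {}
    for v in V:
        inL, inR = (v, 'L') in cover, (v, 'R') in cover
        x[v] = 1.0 if inL and inR else (0.0 if not inL and not inR else 0.5)
    return x
```

On a single edge this yields the all-$\nicefrac{1}{2}$ solution; on a star it sets the center to $1$ and the leaves to $0$, exposing the crown that \cref{rule:LP} removes.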
\begin{lemma} \label{lem:LP-equivalent-to-crown} Let~$G$ be a graph, let~$G_{\text{LP}}$ be the graph obtained from exhaustively applying \cref{rule:LP} on~$G$, and let~$G_{\text{crown}}$ be the graph obtained from exhaustively applying \cref{rule:crown} on~$G$. Then~$G_{\text{LP}} = G_{\text{crown}}$. \end{lemma} \begin{proof} First we show that there exists a set $V_{\nicefrac{1}{2}}^{\star}$ of vertices such that for every half-integral optimal solution for the VC-LP for~$G$ that minimizes the number of variables set to~$\nicefrac{1}{2}$, the set of vertices whose variables are set to $\nicefrac{1}{2}$ is $V_{\nicefrac{1}{2}}^{\star}$. Let $X'$ and $X''$ be two half-integral optimal solutions for the VC-LP with the minimum number of variables set to $\nicefrac{1}{2}$. For $i \in \{ 1, 0, \nicefrac{1}{2} \}$, let~$V_i'$ and $V_i''$ be, as defined above, the set of vertices whose variables are set to~$i$ in $X'$ and $X''$, respectively. Assume for the sake of contradiction that $V_{\nicefrac{1}{2}}' \neq V_{\nicefrac{1}{2}}''$. We claim that the following (call it $X$) is an optimal solution for the VC-LP: \begin{align*} x_v = \begin{cases} 1 & \text{if } v \in V_1' \cup (V_{\nicefrac{1}{2}}' \cap V_1''), \\ 0 & \text{if } v \in V_0' \cup (V_{\nicefrac{1}{2}}' \cap V_0''), \\ \nicefrac{1}{2} & \text{else } (\text{that is}, v \in V_{\nicefrac{1}{2}}' \setminus (V_1'' \cup V_0'')). \end{cases} \end{align*} We define $V_1$, $V_0$, and $V_{\nicefrac{1}{2}}$ analogously for~$X$. To prove the claim, we first verify that $X$ is a solution for the VC-LP. It suffices to show that $N(v) \subseteq V_1$ for each vertex $v \in V_0$. If $v \in V_0'$, then we have $N(v) \subseteq V_1' \subseteq V_1$.
Otherwise (that is, $v \in V_{\nicefrac{1}{2}}' \cap V_0''$), we have \( N(v) \subseteq N(V_{\nicefrac{1}{2}}' \cap V_0'') \subseteq N(V_{\nicefrac{1}{2}}') \cap N(V_0'') \subseteq (V_1' \cup V_{\nicefrac{1}{2}}') \cap V_1'' \subseteq V_1 \), because $N(V_{\nicefrac{1}{2}}') \subseteq V_1' \cup V_{\nicefrac{1}{2}}'$ and $N(V_0'') \subseteq V_1''$. Thus, we see that $X$ is a solution for the VC-LP. We then show that $X$ is optimal for the VC-LP. By \Cref{lem:lpcrown}, there is a matching $M$ in $G[V_1' \cup V_0']$ that matches all vertices of $V_1'$, and thereby, we have $\sum_{v \in V_1' \cup V_0'} x_v'' \ge \sum_{uv \in M} (x_u'' + x_v'') \ge |M| \ge |V_1'|$. It follows that \begin{align*} \sum_{v \in V} x_v'' = \sum_{v \in V_1' \cup V_0'} x_v'' + \sum_{v \in V_{\nicefrac{1}{2}}'} x_v'' &\ge |V_1'| + |V_{\nicefrac{1}{2}}' \cap V_1''| + \frac{1}{2} |V_{\nicefrac{1}{2}}' \setminus (V_0'' \cup V_1'')| \\ &= |V_1'| + \frac{1}{2} |V_{\nicefrac{1}{2}}'| + \frac{1}{2} (|V_{\nicefrac{1}{2}}' \cap V_1''| - |V_{\nicefrac{1}{2}}' \cap V_0''|). \end{align*} Note that $\sum_{v \in V} x_v'' = \sum_{v \in V} x_v' = |V_1'| + \frac{1}{2} |V_{\nicefrac{1}{2}}'|$ by the optimality of $X'$ and $X''$. Thus, we obtain $|V_{\nicefrac{1}{2}}' \cap V_1''| - |V_{\nicefrac{1}{2}}' \cap V_0''| \le 0$. Then the cost of $X$ is \begin{align*} \sum_{v \in V} x_v &= (|V_1'| + |V_{\nicefrac{1}{2}}' \cap V_1''|) + \frac{1}{2} |V_{\nicefrac{1}{2}}' \setminus (V_1'' \cup V_0'')| \\ &= |V_1'| + \frac{1}{2} |V_{\nicefrac{1}{2}}'| + \frac{1}{2} (|V_{\nicefrac{1}{2}}' \cap V_1''| - |V_{\nicefrac{1}{2}}' \cap V_0''|) \le |V_1'| + \frac{1}{2} |V_{\nicefrac{1}{2}}'|, \end{align*} and hence $X$ is an optimal solution for the VC-LP for $G$. Since the number of variables set to $\nicefrac{1}{2}$ in~$X$ is smaller than that of $X'$ and $X''$, this contradicts our assumption on $X'$ and $X''$. Consequently,~$G_{\text{LP}}$ is well-defined---$G_{\text{LP}} = G[V_{\nicefrac{1}{2}}^\star]$.
It remains to show that if \cref{rule:crown} is applied exhaustively, then the VC-LP for the resulting graph~$G_{\text{crown}}$ has a unique optimal solution, in which all variables are set to~$\nicefrac{1}{2}$. Assume towards a contradiction that the VC-LP for~$G_{\text{crown}}$ has a half-integral optimal solution~$X$ with some variables not set to~$\nicefrac{1}{2}$. Then by \Cref{lem:lpcrown}, $G_{\text{crown}}$ has a crown, which is a contradiction. Hence, in~$G_{\text{crown}}$ the VC-LP has a unique optimal solution: setting all variables to~$\nicefrac{1}{2}$. \end{proof} It is known that applying \cref{rule:LP} results in an instance with at most~$2 \tau$ vertices, where~$\tau$ is the vertex cover number (the size of a minimum vertex cover)~\cite{CFK+15}. Combining this, the algorithm of \citet{IOY14}, and \cref{lem:LP-equivalent-to-crown} gives the following theorem. \begin{theorem}\label{thm:crown-exhaustive-application} \cref{rule:crown} can be exhaustively applied in~$O(\sqrt{n} m)$ time and the resulting instance has at most~$2\tau$ vertices. \end{theorem} We refer to \cref{sec:eval-kernel} for a practical evaluation of the theoretical upper bound of \Cref{thm:crown-exhaustive-application} on the number of vertices in the reduced instance. However, note that from a theoretical point of view, the kernel given in \cref{thm:fes-lin-kernel} is incomparable to the upper bound of \Cref{thm:crown-exhaustive-application}: In a large odd cycle~$C_{2n+1}$, applying the kernelization behind \cref{thm:fes-lin-kernel} yields a constant-size kernel (as the feedback edge number is one), that is, \cref{thm:fes-lin-kernel} essentially solves this instance. \cref{rule:crown}, however, does not remove a single vertex in this case. Conversely, applying \cref{rule:crown} on a complete bipartite graph~$K_{3,n}$ (here~$\tau = 3$) solves the instance again, but the kernelization behind \cref{thm:fes-lin-kernel} does not remove a single vertex.
Note that the algorithm behind \cref{thm:crown-exhaustive-application} contains only one step requiring super-linear running time: the computation of a bipartite matching to compute an initial solution for the VC-LP. Since computing matchings in practice is much easier for bipartite graphs than for general graphs (even though the known theoretical worst-case running times are the same~\cite{Vaz20}), it is not surprising that our implementation that exhaustively applies \cref{rule:crown} is significantly faster than the state-of-the-art matching implementations on the respective input graph (see~\cref{sec:experiments}). \subsection{Relaxed Crowns}\label{ssec:relaxed-crowns} In this subsection, we provide a data reduction rule that generalizes \cref{rule:deg-two-vertices} (for degree-two vertices) in the same sense as \cref{rule:crown} (crown rule) generalizes \cref{rule:deg-zero-one-vertices} (for degree-one vertices). However, in contrast to \cref{rule:crown} we decided not to implement this rule as we do not have a sufficiently fast algorithm to apply it. Our new rule uses a relaxed crown concept, defined as follows: \begin{definition}\label{def:generalized-crown} A \emph{relaxed crown} in a graph~$G = (V,E)$ is a pair $(H,I)$ such that \begin{enumerate} \item $I \subseteq V$ is an independent set in~$G$ (no two vertices in~$I$ are adjacent in~$G$), \label{r-crown:IS} \item $H = N(I) := \bigcup_{v \in I} N(v)$, and \label{r-crown:Neighbors} \item for every~$v \in H$ there is a matching~$M_{H,I,v}$ between~$H\setminus\{v\}$ and~$I$ that matches all vertices in~$H\setminus\{v\}$. \label{r-crown:Matching} \end{enumerate} \end{definition} Compared to the crown concept, we relaxed Condition~\eqref{r-crown:Matching}. In particular, note that in a relaxed crown the set~$H$ can be larger (by one vertex) than~$I$! Our new data reduction rule is as follows (see \cref{fig:relaxed-crown} for an illustration).
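The relaxed condition can be verified directly by testing one saturating matching per vertex of $H$; a small Python sketch (our own illustration, using plain augmenting paths for each test):

```python
def is_relaxed_crown(adj, H, I):
    """Check the relaxed crown conditions for (H, I) in the graph adj
    ({vertex: set(neighbors)}): I independent, H = N(I), and for every
    v in H a matching between H \\ {v} and I that saturates H \\ {v}."""
    H, I = set(H), set(I)
    if any(adj[x] & I for x in I):                 # condition 1: I independent
        return False
    if set().union(*(adj[x] for x in I)) != H:     # condition 2: H = N(I)
        return False
    def saturates(side):                           # match all of `side` into I
        match = {}                                 # I-vertex -> H-vertex
        def augment(h, seen):
            for i in adj[h] & I:
                if i not in seen:
                    seen.add(i)
                    if i not in match or augment(match[i], seen):
                        match[i] = h
                        return True
            return False
        return all(augment(h, set()) for h in side)
    return all(saturates(H - {v}) for v in H)      # condition 3
```

Two vertices whose only neighbor is one common vertex in $I$ form a relaxed crown with $|H| = |I| + 1$, although they form no crown; every crown is trivially a relaxed crown as well.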
\begin{figure} \caption{Left-hand side: A graph with a relaxed crown~$(H,I)$ and a maximum matching (thick edges) of cardinality five. Right-hand side: The graph obtained by applying \cref{rule:relaxed-crown}.} \label{fig:relaxed-crown} \end{figure} \begin{rrule}\label{rule:relaxed-crown} Let~$G$ be a crown-free graph and let~$(H,I)$ be a relaxed crown in~$G$. Then remove all vertices in~$H \cup I$, add a new vertex~$w$ with~$N(w) = \bigcup_{u \in H} N(u) \setminus (H \cup I)$, and decrease the solution size~$s$ by~$|H|-1$. \end{rrule} \begin{lemma} \cref{rule:relaxed-crown} is correct. \end{lemma} \begin{proof} ($\Rightarrow$): Let~$M \subseteq E$ be a maximum-cardinality matching in the input graph~$G$ and let~$G'$ be the reduced graph. We split~$M$ into three parts~$M = M_0 \cup M_1 \cup M_2$ as follows (edges with zero, one, or two endpoints in the relaxed crown): $M_i := \{uv \in M \mid i = |uv \cap (H \cup I)|\}$. Since~$I$ is an independent set, it follows that~$|M_1|+|M_2| \le |H|$. We now make a case distinction on whether or not~$M_1$ is the empty set. \emph{Case 1.} $M_1 = \emptyset$: Note that~$|M_2| \le |H|$. By assumption, $G$ does not contain a crown and, thus, we can strengthen this to $|M_2| \le |H|-1$. Hence, $M_0$ is a matching for~$G'$ with~$|M_0| = |M| - |M_2| \ge |M| - (|H|-1)$. \emph{Case 2.} $M_1 \neq \emptyset$: Let~$uv \in M_1$ be an arbitrary edge with~$u \in H$. Then, $wv$~is an edge in~$G'$ and~$M_0 \cup \{wv\}$ is a matching for~$G'$ of size at least~$|M_0| + 1 = |M| - (|M_1| + |M_2|) + 1 \ge |M| - (|H|-1)$. ($\Leftarrow$): Let~$M'$ be a maximum-cardinality matching in~$G'$. If $w$ is not matched in $M'$, then $M' \cup M_{H, I, v}$ is a matching of size $|M'| + |H| - 1$ for an arbitrary vertex $v \in H$. So we may assume that $wu \in M'$ for some vertex~$u$. Let $v \in H$ be an arbitrary vertex adjacent to $u$ in~$G$. Then, $(M' \setminus \{ uw \}) \cup M_{H, I, v} \cup \{uv\}$ is a matching of size~$|M'| + |H| - 1$ for~$G$.
Hence, $|\ensuremath{\text{mm}}(G)| = |\ensuremath{\text{mm}}(G')| + |H| - 1$. \end{proof} \section{Maximum-Weight Matching} \label{sec:weigted-kernel} In this section, we show how to lift \cref{thm:fes-lin-kernel} (dealing with \textsc{Maximum-Cardinality Matching}\xspace{}) to the weighted case. \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices} are based on the simple observation that for every vertex~$v \in V$ of degree at least one, there exists a maximum-cardinality matching that matches~$v$: If~$v$ is not matched, then take an arbitrary neighbor~$u$ of~$v$, remove the edge containing~$u$ from a maximum-cardinality matching, and add the edge~$uv$. This observation does not hold in the weighted case---see, e.\,g., \Cref{fig:deg-one-rule-weighted} (left-hand side), where the only maximum-weight matching $\{au,bc\}$ leaves~$v$ free. Thus, we need new ideas to obtain efficient data reduction rules for the weighted case. \paragraph{Vertices of degree at most one.} We start with the simple case of dealing with vertices of degree at most one. Here, the following data reduction rule is obvious. \begin{rrule} \label{rule:deg-zero-weighted} If $\deg(v)=0$ for a vertex $v \in V$, then delete $v$. If $\omega(e) = 0$ for an edge $e \in E$, then delete $e$. \end{rrule} Next, we show how to deal with degree-one vertices; see \Cref{fig:deg-one-rule-weighted} for a visualization. \begin{figure} \caption{Left: Input graph. Right: The graph after applying \cref{rule:deg-one-weighted}.} \label{fig:deg-one-rule-weighted} \end{figure} \begin{rrule} \label{rule:deg-one-weighted} Let $G = (V,E)$ be a graph with non-negative edge weights~$\omega\colon E \rightarrow \mathds{N}$. Let~$v$ be a degree-one vertex and let $u$ be its neighbor. Then delete $v$, set the weight of every edge~$e$ incident to~$u$ to~$\max\{0,\omega(e)-\omega(uv)\}$, and decrease the solution value~$s$ by~$\omega(uv)$.
\end{rrule} While proving the correctness of this rule (see next lemma) is relatively straightforward, the naive algorithm to exhaustively apply \cref{rule:deg-one-weighted} is too slow for our purpose: If the edge weights are adjusted immediately after deleting~$v$, then exhaustively applying the rule to a star requires~$\Theta(n^2)$ time. However, as we subsequently show, \cref{rule:deg-one-weighted} can be exhaustively applied in linear time. \begin{lemma} \label[lemma]{lem:deg-one-weighted-correct} \cref{rule:deg-one-weighted} is correct. \end{lemma} \begin{proof} Let $v$ be a vertex of degree one and let $u$ be its neighbor. Let $M$ be a matching of weight at least $s$ for~$G$. We assume without loss of generality that $M$ is of maximum weight and, hence, $u$ is matched. If $uv \in M$, then the deletion of $v$ decreases the weight of the matching by~$\omega(uv)$. Hence, the resulting graph $G'$ (with adjusted weights) has a matching of weight at least $s-\omega(uv)$. If~$uv \not\in M$, then $M$~is also contained in the resulting graph $G'$. As~$v$ is not matched, $M$~contains exactly one edge~$e$ with~$u \in e$. Thus, $e$ has in~$G'$ weight~$\max\{0,\omega(e)-\omega(uv)\}$ and~$M$ has in~$G'$ weight at least~$s-\omega(uv)$. Conversely, let~$M'$ be a matching in the reduced graph~$G'$ with weight at least~$s-\omega(uv)$. We construct a matching~$M''$ for~$G$ as follows. First, consider the case that~$u$ is matched by an edge~$e$. If~$e$ has in~$G'$ weight more than zero, then set~$M'' := M'$. If~$e$ has in~$G'$ weight zero, then set~$M'':= (M' \setminus \{e\}) \cup \{uv\}$. Second, if~$u$ is free, then set~$M'':= M' \cup \{uv\}$. In all three cases~$M''$ is a matching in $G$ with weight at least~$s$. \end{proof} \begin{lemma} \label[lemma]{lem:deg-one-weighted-time} \cref{rule:deg-one-weighted} can be exhaustively applied in $O(n+m)$ time.
\end{lemma} \begin{proof} The basic idea of the algorithm exhaustively applying \cref{rule:deg-one-weighted} in linear time is as follows: We store in each vertex a number indicating the weight of the heaviest incident edge removed due to \cref{rule:deg-one-weighted}. Then, whenever we want to access the ``current'' weight of an edge~$e$, we subtract from~$\omega(e)$ the two numbers stored in its endpoints. Once \cref{rule:deg-one-weighted} is no longer applicable, we update the edge weights to get rid of the numbers in the vertices in order to create a \textsc{Maximum-Weight Matching}\xspace instance. The details of the algorithm are as follows. First, in~$O(n+m)$ time we collect all degree-one vertices in a list~$L$ and initialize for each vertex~$v$ a counter~$c(v) := 0$. Then, we process~$L$ one by one. For a degree-one vertex~$v \in L$, let~$u$ be its neighbor. We decrease~$s$ by~$\max\{0,\omega(uv) - c(u) - c(v)\}$, then set~$c(u) := c(u) + \max\{0, \omega(uv) - c(u) - c(v)\}$, and then delete~$v$. If after the deletion of~$v$ its neighbor~$u$ has degree one, then~$u$ is added to~$L$. Thus, after at most~$n$ steps, each doable in constant time, we have processed~$L$. When~$L$ is empty, in~$O(m)$ time we update for each edge~$uv$ its weight to~$\max\{0,\omega(uv) - c(u) - c(v)\}$. This finishes the description of the algorithm. Observe that we have the following invariant when processing the list~$L$: the current weight of an edge~$uv$ is~$\max\{0,\omega(uv) - c(u) - c(v)\}$, where $\omega(uv)$ is its original weight. With this invariant, it is easy to see that the algorithm indeed applies \cref{rule:deg-one-weighted} exhaustively. \end{proof} Note that after applying \cref{rule:deg-one-weighted} we can have weight-zero edges and thus \cref{rule:deg-zero-weighted} might become applicable. We do not know whether \cref{rule:deg-zero-weighted,rule:deg-one-weighted} \emph{together} can be applied exhaustively in linear time.
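To illustrate the counter-based bookkeeping in the proof above, the following is a minimal Python sketch (not the authors' C++ implementation; the graph representation and function name are our own assumptions). It applies \cref{rule:deg-one-weighted} exhaustively, deferring all weight updates to the counters $c(\cdot)$ and materializing them once at the end:

```python
from collections import defaultdict

def reduce_degree_one_weighted(adj, w):
    """Exhaustively apply the weighted degree-one rule in O(n+m).
    adj: dict vertex -> set of neighbours (modified in place),
    w:   dict frozenset({u, v}) -> non-negative weight.
    Returns (adj, new_w, delta): delta is the total amount by which the
    solution value s must be decreased; new_w holds the adjusted weights
    of the surviving edges."""
    c = defaultdict(int)                       # deferred weight reductions per vertex
    stack = [v for v in adj if len(adj[v]) == 1]
    delta = 0
    while stack:
        v = stack.pop()
        if len(adj.get(v, ())) != 1:           # already deleted or isolated meanwhile
            continue
        (u,) = adj[v]
        gain = max(0, w[frozenset((u, v))] - c[u] - c[v])
        delta += gain                          # decrease s by the *current* weight of uv
        c[u] += gain                           # defer updating u's remaining edges
        adj[u].discard(v)
        del adj[v]
        if len(adj[u]) == 1:
            stack.append(u)
        elif not adj[u]:                       # isolated: degree-zero rule
            del adj[u]
    # materialise the deferred weight updates once, in O(m)
    new_w = {e: max(0, wt - sum(c[x] for x in e))
             for e, wt in w.items() if all(x in adj for x in e)}
    return adj, new_w, delta
```

For instance, on the path $a$--$b$--$c$ with $\omega(ab)=5$ and $\omega(bc)=3$, the sketch removes the whole path and reports a decrease of $5$, the weight of a maximum-weight matching.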
However, for the kernel we present at the end of this section it is sufficient to apply \cref{rule:deg-one-weighted} exhaustively. \paragraph{Vertices of degree two.} Lifting \cref{rule:deg-two-vertices} to the weighted case is more delicate than lifting \cref{rule:deg-zero-one-vertices} to \cref{rule:deg-zero-weighted,rule:deg-one-weighted}. The reason is that the two incident edges might have different weights. As a consequence, we cannot decide locally what to do with a degree-two vertex. Instead, we process multiple degree-two vertices at once. To this end, we use the following notation. \begin{definition} \label[definition]{def:maxpath} Let~$G$ be a graph. A path~$P = v_0v_1\ldots v_\ell$ is a \emph{maximal path{}} in~$G$ if $\ell \ge 3$ and the inner vertices~$v_1, v_2, \ldots, v_{\ell-1}$ all have degree two in~$G$, but the endpoints~$v_0$ and~$v_\ell$ do not, that is, $\deg_G(v_1) = \ldots = \deg_G(v_{\ell-1}) = 2$, $\deg_G(v_0) \neq 2$, and $\deg_G(v_\ell) \neq 2$. \end{definition} \begin{definition} \label[definition]{def:pendcycle} Let~$G$ be a graph. A cycle~$C = v_0v_1\ldots v_\ell v_0$ is a \emph{pending cycle{}} in~$G$ if at most one vertex in~$C$ does \emph{not} have degree two in~$G$. \end{definition} The reason to study maximal paths{} and pending cycles{} is that we can compute a maximum-weight matching on path and cycle graphs in linear time, as stated next. This allows us to preprocess all vertices in a maximal path{} or a pending cycle{} at once. \begin{observation} \label[observation]{obs:path-cycle-lin-time} \textsc{Maximum-Weight Matching}\xspace can be solved in~$O(n)$ time on paths and cycles. \end{observation} \begin{proof} If the input graph~$G$ is a path, then by exhaustively applying \cref{rule:deg-zero-weighted,rule:deg-one-weighted}, we can compute a maximum-weight matching. Otherwise, if~$G$ is a cycle, then we take an arbitrary edge~$e$ and distinguish two cases.
First, we take~$e$ into a matching and remove both endpoints from the graph. In the resulting path, we compute in linear time a maximum-weight matching~$M$. Second, we delete~$e$ and obtain a path for which we compute in linear time a maximum-weight matching~$M'$. We then simply choose between~$M \cup\{e\}$ and~$M'$ the heavier matching as the result. \end{proof} Now, using \cref{obs:path-cycle-lin-time}, we introduce data reduction rules for maximal paths{} and pending cycles{}. Both rules are based on a similar idea which is easier to explain for a pending cycle{}. Let~$C$ be a pending cycle{} and~$u\in C$ be the vertex of degree at least three in~$C$. Then there are two cases: $u$ is matched with a vertex not in~$C$ or it is not. Let~$M$ be a maximum-weight matching for~$G$, and let~$M'$ be a maximum-weight matching under the constraint that~$u$ is matched to a vertex outside~$C$. Clearly, the weight of~$M \cap E(C)$ is at least the weight of~$M' \cap E(C)$. Looking only at~$C$, all that we need to know is the difference of the weights of these two matchings. This can be encoded with one vertex~$z$ which replaces the whole cycle~$C$ (see \Cref{fig:rrule cycle3} for an illustration). \begin{figure} \caption{Left: A pending cycle{}.} \label{fig:rrule cycle3} \end{figure} Then, matching~$z$ corresponds to taking the matching in~$C$ and not matching~$z$ corresponds to taking the matching in~$C-u$. Formalizing this idea, we arrive at the following data reduction rule. \begin{rrule} \label{rule:pend-cycle} Let $G$ be a graph with non-negative edge weights. Let~$C$ be a pending cycle{} in~$G$, where $u \in C$ has degree at least three in $G$. Then replace $C$ by an edge $uz$ with $\omega(uz)=\omega(C) - \omega(C-u)$ and decrease the solution value~$s$ by $\omega(C-u)$, where $z$ is a new vertex. \end{rrule} \begin{lemma} \label{thm:cycle correct} \cref{rule:pend-cycle} is correct.
\end{lemma} \begin{proof} Let~$C$ be a pending cycle{} in~$G$ where~$u \in C$ has degree at least three in~$G$ and let~$G'$ be the graph obtained by applying \cref{rule:pend-cycle} to~$C$. We show~$\omega(G') = \omega(G) - \omega(C-u)$. Let~$M$ be a maximum-weight matching in~$G$. Let~$M_{\overline{C}} := M \setminus E(C)$. Observe that~$\omega(M_{\overline{C}}) = \omega(M) - \omega(M \cap E(C)) \ge \omega(G) - \omega(C)$. If~$u$ is matched with respect to~$M_{\overline{C}}$, then we have~$M_{\overline{C}} = M \setminus E(C-u)$. Hence, $\omega(G') \ge \omega(M_{\overline{C}}) \ge \omega(G) - \omega(C-u)$. If~$u$ is free with respect to $M_{\overline{C}}$, then~$M_{\overline{C}} \cup \{uz\}$ is a matching in $G'$ with weight at least $(\omega(G)-\omega(C))+(\omega(C)-\omega(C-u))=\omega(G)-\omega(C-u)$. Hence, in both cases we have~$\omega(G') \ge \omega(G) - \omega(C-u)$. Conversely, let $M'$ be a maximum-weight matching in $G'$. Recall that, for an edge-weighted graph~$H$, $\ensuremath{\text{mm}}(H)$ denotes a maximum-weight matching in~$H$. If~$uz \in M'$, then $(M' \setminus \{uz\}) \cup \ensuremath{\text{mm}}(C)$ is a matching in~$G$ with weight at least~$\omega(G') - (\omega(C) - \omega(C-u)) + \omega(C) = \omega(G')+\omega(C-u)$. Hence, $\omega(G) \ge \omega(G') + \omega(C-u)$. If~$uz \not\in M'$, then~$M' \cup \ensuremath{\text{mm}}(C-u)$ is a matching in~$G$ with weight at least $\omega(G') + \omega(C-u)$. Again, in both cases we have~$\omega(G) \ge \omega(G') + \omega(C-u)$. Combined with~$\omega(G') \ge \omega(G) - \omega(C-u)$, we arrive at~$\omega(G') = \omega(G) - \omega(C-u)$. \end{proof} The basic idea for maximal paths{} is the same as for pending cycles{}. The difference is that we have to distinguish four cases depending on whether or not the two endpoints~$u$ and~$v$ of a maximal path{}~$P$ are matched by edges outside of~$P$. To avoid some trivial case distinctions, we assume that~$\omega(uv) = 0$ if the edge~$uv$ does not exist in~$G$.
We denote by $P-u-v$ the path obtained from~$P$ by removing the vertices $u$ and~$v$. \Cref{fig:rrule-deg2} visualizes the next data reduction rule. \begin{rrule} \label{rule:max-path} Let~$G = (V,E)$ be a graph with non-negative edge weights~$\omega\colon E \rightarrow \mathds{N}$. Let~$P$ be a maximal path{} in~$G$ with endpoints~$u$ and~$v$. Then remove all vertices in~$P$ except~$u$ and $v$, then add a new vertex~$z$ and, if not already existing, add the edge~$uv$. Furthermore, set~$\omega(uz):=\omega(P-v) - \omega(P-u-v)$, $\omega(vz):=\omega(P-u) - \omega(P-u-v)$, and~$\omega(uv):=\max\{\omega(uv),\omega(P) - \omega(P-u-v)\}$, and decrease the solution value~$s$ by~$\omega(P-u-v)$. \end{rrule} \begin{figure} \caption{Applying \cref{rule:max-path}.} \label{fig:rrule-deg2} \end{figure} \begin{lemma} \label[lemma]{thm:path correct} \cref{rule:max-path} is correct. \end{lemma} \begin{proof} Let~$G$ be the input graph with a maximal path{}~$P$ with endpoints~$u$ and~$v$. Furthermore, let~$G'$ be the reduced instance with~$z$ defined as in the data reduction rule. We show that~$\omega(G') = \omega(G) - \omega(P-u-v)$. Let~$M$ be a maximum-weight matching for~$G$. We define~$M_{\overline{P}} := M \setminus E(P)$. Observe that~$\omega(M_{\overline{P}}) = \omega(M) - \omega(M \cap E(P)) \ge \omega(G) - \omega(P)$. We consider four cases; by symmetry, it suffices to treat three. \begin{enumerate} \item If both~$u$ and~$v$ are matched with respect to~$M_{\overline{P}}$, then~$M_{\overline{P}} = M \setminus E(P - u - v)$ and hence \begin{align} \omega(M_{\overline{P}}) = \omega(M) - \omega(M \cap E(P - u - v)) \ge \omega(G) - \omega(P-u-v). \end{align} \item Let one vertex in $\{u,v\}$ be matched and the other free. Without loss of generality, we assume that~$u$ is matched and $v$ is free with respect to $M_{\overline{P}}$. Then, we have that $M_{\overline{P}} = M \setminus E(P - u)$ and hence~$\omega(M_{\overline{P}}) \ge \omega(G) - \omega(P-u)$.
Thus, $M_{\overline{P}} \cup \{vz\}$ is a matching in~$G'$ of weight at least \begin{align*} (\omega(G) - \omega(P-u)) + (\omega(P-u)-\omega(P-u-v)) = \omega(G) - \omega(P-u-v). \end{align*} \item Finally, if both~$u$ and~$v$ are free with respect to~$M_{\overline{P}}$, then~$M_{\overline{P}} \cup \{uv\}$ is a matching in~$G'$ of weight at least $(\omega(G) - \omega(P)) + (\omega(P)-\omega(P-u-v)) = \omega(G) - \omega(P-u-v)$. \end{enumerate} Thus in each case we have~$\omega(G') \ge \omega(G) - \omega(P-u-v)$. Conversely, let $M'$ be a maximum-weight matching for~$G'$. We define $\overline{M'} := M' \setminus \{uz, vz, uv \}$. Again, we distinguish four cases. \begin{enumerate} \item If both~$u$ and~$v$ are matched with respect to~$\overline{M'}$, then $\overline{M'}=M'$. Hence, $\overline{M'} \cup \ensuremath{\text{mm}}(P-u-v)$ is a matching in~$G$ with weight at least~$\omega(G') + \omega(P-u-v)$. \item If~$u$ is matched and~$v$ is free with respect to~$\overline{M'}$, then w.l.o.g.~$vz \in M'$. Hence, $\overline{M'} \cup \ensuremath{\text{mm}}(P-u)$ is a matching in $G$ with weight at least~$\omega(G') - (\omega(P-u) - \omega(P-u-v)) + \omega(P-u) = \omega(G') + \omega(P-u-v)$. \item If~$v$ is matched and~$u$ is free with respect to~$\overline{M'}$, then w.l.o.g.~$uz \in M'$. Hence, $\overline{M'} \cup \ensuremath{\text{mm}}(P-v)$ is a matching in $G$ with weight at least~$\omega(G') - (\omega(P-v) - \omega(P-u-v)) + \omega(P-v) = \omega(G') + \omega(P-u-v)$. \item Finally, if both~$u$ and~$v$ are free with respect to~$\overline{M'}$, then w.l.o.g.~$uv \in M'$ as~$\omega(uv) \ge \omega(uz)$ and~$\omega(uv) \ge \omega(vz)$. Now, we encounter two subcases. \begin{enumerate} \item If~$\omega(uv) > \omega(P) - \omega(P-u-v)$, then the edge~$uv$ is in~$G$ and in~$G'$, having the same weight in both graphs. Then, $M' \cup \ensuremath{\text{mm}}(P-u-v)$ is a matching in~$G$ with weight at least~$\omega(G') + \omega(P-u-v)$.
\item Otherwise, $\overline{M'} \cup \ensuremath{\text{mm}}(P)$ is a matching in~$G$ with weight at least~$\omega(G') - (\omega(P) - \omega(P-u-v)) + \omega(P) = \omega(G') + \omega(P-u-v)$. \end{enumerate} \end{enumerate} Hence, in all cases we have~$\omega(G) \ge \omega(G') + \omega(P-u-v)$. Combined with~$\omega(G') \ge \omega(G) - \omega(P-u-v)$, we can infer that~$\omega(G') = \omega(G) - \omega(P-u-v)$. \end{proof} \begin{lemma} \label[lemma]{lem:deg-two-lin-time-weighted} \cref{rule:max-path,rule:pend-cycle} can be exhaustively applied in $O(n+m)$ time. \end{lemma} \begin{proof} First, we collect in~$O(n+m)$ time all maximal paths{} and all pending cycles{} \cite[Lemma 3]{BDKNN20}. Given a maximal path{} or a pending cycle{} on~$\ell$ vertices, by \cref{obs:path-cycle-lin-time} one can compute the necessary maximum-weight matchings (at most four) in~$O(\ell)$ time. Moreover, replacing the maximal path{} or the pending cycle{} by the respective structure is doable in~$O(\ell)$ time. Applying \cref{rule:max-path,rule:pend-cycle} does not create new maximal paths{} (recall that a maximal path{} needs at least two vertices of degree two) or pending cycles{}. Thus, as all maximal paths{} and pending cycles{} combined contain at most~$n$ vertices, \cref{rule:max-path,rule:pend-cycle} can be exhaustively applied in $O(n+m)$ time. \end{proof} Each of \cref{rule:deg-zero-weighted,rule:deg-one-weighted,rule:max-path} can be exhaustively applied in linear time; however, we do not know whether all these data reduction rules together can be exhaustively applied in linear time. Note that after applying \cref{rule:pend-cycle}, \cref{rule:deg-one-weighted} might become applicable. For our problem kernel below it suffices to apply \cref{rule:deg-zero-weighted,rule:deg-one-weighted,rule:max-path} in a specific order (using \cref{lem:deg-one-weighted-time,lem:deg-two-lin-time-weighted}).
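The quantities $\omega(P)$, $\omega(P-u)$, $\omega(P-v)$, and $\omega(P-u-v)$ used by \cref{rule:max-path}, as well as $\omega(C)$ and $\omega(C-u)$ used by \cref{rule:pend-cycle}, are maximum matching weights of paths and cycles and can be obtained by the simple linear-time dynamic program behind \cref{obs:path-cycle-lin-time}. A Python sketch (function names are ours, for illustration only):

```python
def mwm_path(ws):
    """Weight of a maximum-weight matching on a path whose consecutive
    edge weights are ws: either skip the last edge or take it."""
    prev, cur = 0, 0
    for w in ws:
        prev, cur = cur, max(cur, prev + w)
    return cur

def mwm_cycle(ws):
    """Weight of a maximum-weight matching on a cycle, where ws[i] joins
    v_i and v_{i+1 mod k}: edge 0 is either dropped (leaving a path) or
    taken (removing v_0 and v_1), as in the proof of the observation."""
    return max(mwm_path(ws[1:]), ws[0] + mwm_path(ws[2:-1]))

def max_path_rule_values(ws):
    """For a maximal path u = v_0, ..., v_l = v with edge weights ws,
    return (w(uz), w(vz), candidate w(uv), decrease of s) exactly as
    prescribed by the maximal-path rule."""
    wP, wPu, wPv, wPuv = (mwm_path(ws), mwm_path(ws[1:]),
                          mwm_path(ws[:-1]), mwm_path(ws[1:-1]))
    return wPv - wPuv, wPu - wPuv, wP - wPuv, wPuv
```

For a pending cycle{} with edge weights `ws` listed starting at the branch vertex $u = v_0$, the rule's quantities are $\omega(C)=$ `mwm_cycle(ws)` and $\omega(C-u)=$ `mwm_path(ws[1:-1])`.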
Note that we might output a problem kernel in which \cref{rule:deg-zero-weighted,rule:deg-one-weighted} are still applicable. In our experiments it turned out that it is beneficial to apply the rules exhaustively (in superlinear time) to reduce the input graph as much as possible. \begin{theorem} \label{thm:w-kernel} \textsc{Maximum-Weight Matching}\xspace admits a linear-time computable $13k$-vertex and $17k$-edge kernel with respect to the parameter feedback edge number~$k$. \end{theorem} \begin{proof} Let~$G = (V,E)$ be the input instance and $F \subseteq E$ a feedback edge set of size at most $k$. Without loss of generality, one can assume that the input graph contains neither a cycle in which each vertex has degree two nor a path whose endpoints have degree one and whose internal vertices have degree two, because otherwise such a cycle or path can be solved independently in linear time (see~\cref{obs:path-cycle-lin-time}). The kernelization algorithm works as follows: First, exhaustively apply \cref{rule:deg-zero-weighted} in~$O(n+m)$ time. Second, exhaustively apply \cref{rule:deg-one-weighted} in $O(n+m)$ time (see~\cref{lem:deg-one-weighted-time}). Third, exhaustively apply \cref{rule:pend-cycle,rule:max-path} in $O(n+m)$ time (see \cref{lem:deg-two-lin-time-weighted}). Note that when applying the rules in this order, the resulting graph~$\widehat G = (\widehat V, \widehat E)$ does not contain any maximal paths{} or pending cycles{}. But for each pending cycle{} we introduced a new degree-one vertex. However, for each pending cycle{} there is at least one (distinct) edge in $F$. Hence, $\widehat G$ has at most $k$ degree-one vertices. Let $V_1$ be the set of degree-one vertices in $\widehat G$. Moreover, after the second step of our kernelization (\cref{rule:deg-one-weighted}) the graph contains at most~$3k$ maximal paths{}~\cite[Lemma 2]{BDKNN20}.
Thus, a feedback edge set~$\widehat F \subseteq \widehat E$ for~$\widehat G$ of minimum size contains at most $4k$ edges (each application of \cref{rule:max-path} increases the feedback edge set by one). To analyze the size of~$\widehat G$ in terms of $k$, we transform $\widehat G$ into a forest by cutting the edges in~$\widehat F$ such that for each edge in~$\widehat F$ two new vertices of degree one are introduced. Formally, we have graph~$G'=(V',E')$, where~$V' := \widehat V \cup \left\{ v_u^{uw},v_w^{uw} \mid uw \in \widehat F \right\}$ and $E' := (\widehat E \setminus \widehat F) \cup \left\{ uv_u^{uw},wv_w^{uw} \mid uw \in \widehat F \right\}$. Observe that $G'$ is a forest where $(V' \setminus \widehat V) \cup V_1$ are the leaves and~$\widehat V \setminus V_1$ are the internal vertices. Hence, we have at most $9k$ leaves and thus at most $9k-1$ internal vertices of degree at least three. Since there are at most $3k$ internal vertices of degree two (one for each application of \cref{rule:max-path}), we have $|\widehat V \setminus V_1| < 12k$ and $|\widehat V | \leq 13k$. Furthermore, $\widehat E \setminus \widehat F$ are edges of the forest $G'[\widehat V]$. Hence, we have~$|\widehat E| = |\widehat E \setminus \widehat F| + |\widehat F| < 17k$. \end{proof} \section{Experimental Evaluation} \label{sec:experiments} In this section, we provide an experimental evaluation of the presented data reduction rules on real-world graphs ranging from a few thousand vertices and edges to a few million vertices and edges. We analyze the effectiveness and efficiency of the kernelization as well as the effect on the subsequently used state-of-the-art solvers of \citet{HSt17,KP98}, and \citet{Kol09}. In \cref{sec:setup}, we give details about our test scenario. 
Then we first focus in \cref{sec:eval-kernel} on the evaluation of \cref{rule:crown,rule:deg-zero-one-vertices,rule:deg-two-vertices,rule:deg-one-weighted,rule:deg-zero-weighted,rule:pend-cycle,rule:max-path} in terms of the running time needed to apply them and the size of the resulting instances. In \cref{sec:eval-time-unweighted}, we then analyze the effect of applying \cref{rule:crown,rule:deg-zero-one-vertices,rule:deg-two-vertices} in combination with a solver for \textsc{Maximum-Cardinality Matching}\xspace. Afterwards, in \cref{sec:eval-time-weighted}, we analyze the effect of applying \cref{rule:deg-one-weighted,rule:deg-zero-weighted,rule:pend-cycle,rule:max-path} in combination with a solver for \textsc{Maximum-Weight Matching}\xspace. \subsection{Setup and Implementation Details} \label{sec:setup} Our program is written in C++14 and the source code is available from \url{https://git.tu-berlin.de/akt-public/matching-data-reductions.git}. One can replicate all experiments by following the manual provided with the source code. We ran all our experiments on an Intel(R) Xeon(R) CPU E5-1620 3.60\,GHz machine with 64\,GB main memory under the Debian GNU/Linux 7.0 operating system, where we compiled the program (including the solvers of \cite{Kol09,KP98}) with GCC~7.3.0. For the solver of \citet{HSt17} we used Python 2.7.15rc1. \paragraph{Data set.} All tested graphs are from the established SNAP~\cite{snap} data set; we used a time limit of one hour per instance. See \Cref{tab:short-list} for a sample list of graphs with their respective numbers of vertices and edges.
\begin{table} \caption{A selection of our test graphs from SNAP~\cite{snap} with their respective size.} \label{tab:short-list} \begin{center} \pgfplotstabletypeset[ columns={filename,misc_n,misc_m}, columns/filename/.style={string type,column name=Graph,column type = {r}}, columns/misc_n/.style={column name=$n$,precision=1,column type = {r}}, columns/misc_m/.style={column name=$m$,precision=1,column type = {r}}, every head row/.style ={before row=\toprule, after row=\midrule}, every last row/.style ={after row=\bottomrule}, row predicate/.code={ \pgfmathparse{int(mod(#1,8))} \ifnum\pgfmathresult=1\relax \else\pgfplotstableuserowfalse \fi }, col sep = comma] {data/results.csv} \pgfplotstabletypeset[ columns={filename,misc_n,misc_m}, columns/filename/.style={string type,column name=Graph,column type = {r}}, columns/misc_n/.style={column name=$n$,precision=1,column type = {r}}, columns/misc_m/.style={column name=$m$,precision=1,column type = {r}}, every head row/.style ={before row=\toprule, after row=\midrule}, every last row/.style ={after row=\bottomrule}, row predicate/.code={ \pgfmathparse{int(mod(#1,8))} \ifnum\pgfmathresult=3\relax \else\pgfplotstableuserowfalse \fi }, col sep = comma] {data/results.csv} \end{center} \end{table} The full list is given in \cref{tab:unweighted} in the Appendix. The weighted graphs are generated from the unweighted graphs by adding edge weights between~1 and~1000 chosen independently and uniformly at random. \paragraph{Implementation details of our kernelization algorithms.} We implemented kernelization algorithms for the unweighted and the weighted case. The first kernelization is for \textsc{Maximum-Cardinality Matching}\xspace{}; it exhaustively applies \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices}. Our implementation here is rather simplistic in the sense that it maintains a list of degree-one and degree-two vertices which are processed one after the other (in a straightforward manner).
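As an illustration of this list-based processing, the following Python sketch applies the degree-zero/one part of \cref{rule:deg-zero-one-vertices} exhaustively. It is our own simplified formulation, not the C++ implementation; the degree-two rule, stated earlier in the paper, is omitted:

```python
def reduce_degree_at_most_one(adj):
    """Exhaustively remove isolated vertices and match degree-one
    vertices: a degree-one vertex v with neighbour u can always be
    matched via uv, so both are removed and the matching size grows by 1.
    adj: dict vertex -> set of neighbours (modified in place).
    Returns (kernel adjacency, number of matching edges fixed)."""
    fixed = 0
    stack = [v for v in adj if len(adj[v]) <= 1]
    while stack:
        v = stack.pop()
        if v not in adj:
            continue                     # already removed meanwhile
        if not adj[v]:                   # degree zero: just delete
            del adj[v]
            continue
        (u,) = adj[v]                    # degree one: fix the edge uv
        fixed += 1
        for x in adj.pop(u):             # remove u, update its neighbours
            if x != v:
                adj[x].discard(u)
                if len(adj[x]) <= 1:
                    stack.append(x)
        del adj[v]
    return adj, fixed
```

On a path with four vertices this fixes two matching edges and empties the graph, while a cycle (no degree-one vertices) is left untouched, as expected for a kernel.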
We apply the rules exhaustively although this gives in theory a super-linear running time. Note that one can (theoretically) improve our implementation of \cref{rule:deg-two-vertices} by using a linear-time algorithm of \citet{BK09a}. Very recently, \citet{KLPU20} provided a fine-tuned algorithm that exhaustively applies \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices} on bipartite graphs roughly three times faster than our naive implementation; their general approach should also be applicable to general graphs. However, our naive (super-linear-time) implementation for exhaustively applying \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices} was at least two times faster than reading and parsing the input graph and at least three times faster than the fastest implementation for finding maximum-cardinality matchings. Thus, applying \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices} was not a bottleneck in our implementation and we did not optimize it further. The second kernelization is also for \textsc{Maximum-Cardinality Matching}\xspace{}; it exhaustively applies \cref{rule:crown}. To this end, we used the algorithm described by \citet{IOY14}. The main steps of this algorithm are: \begin{enumerate} \item compute a maximum-cardinality matching in a given bipartite graph~$\overline{G}$ (to compute the initial LP-solution; see \cref{ssec:crowns-LP}), and \label{step:bip-match} \item determine the topological ordering of the DAG formed by the strongly connected components of a given digraph~$\overline{D}$. \label{step:topOr-DAG-SCC} (The underlying undirected graph of~$\overline{D}$ is~$\overline{G}$; the matching computed in Step~\ref{step:bip-match} determines how the edges in~$\overline{G}$ are directed in~$\overline{D}$. Each crown in the input graph~$G$ corresponds to a strongly connected component in~$\overline{D}$; refer to \citet{IOY14} for details.) \end{enumerate} Both steps can be solved using classic algorithms.
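For Step~2, a generic textbook implementation of Kosaraju's algorithm suffices. The following is an illustrative Python sketch (two depth-first passes; not the authors' code):

```python
def kosaraju_scc(graph):
    """Kosaraju's algorithm: returns the strongly connected components
    of a digraph in topological order of the condensation DAG
    (reverse the list for reverse topological order).
    graph: dict node -> iterable of successors."""
    order, seen = [], set()

    def dfs_finish_order(v):
        # iterative DFS on the original graph, recording finish times
        stack = [(v, iter(graph.get(v, ())))]
        seen.add(v)
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, iter(graph.get(nxt, ()))))
                    break
            else:
                order.append(node)
                stack.pop()

    for v in graph:
        if v not in seen:
            dfs_finish_order(v)

    rev = {}                              # the reversed digraph
    for v, outs in graph.items():
        for u in outs:
            rev.setdefault(u, []).append(v)

    sccs, assigned = [], set()
    for v in reversed(order):             # decreasing finish time
        if v in assigned:
            continue
        comp, stack = [], [v]
        assigned.add(v)
        while stack:                      # DFS on the reversed graph
            node = stack.pop()
            comp.append(node)
            for u in rev.get(node, ()):
                if u not in assigned:
                    assigned.add(u)
                    stack.append(u)
        sccs.append(comp)
    return sccs
```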
We implemented for Step~\ref{step:bip-match} the classic $O(\sqrt{n}m)$-time algorithm of \citet{HK73} for finding a bipartite matching in~$\overline{G}$. For Step~\ref{step:topOr-DAG-SCC}, we implemented Kosaraju's algorithm~\cite{AHU83} for finding the strongly connected components of~$\overline{D}$ in reverse topological order. The third kernelization is for \textsc{Maximum-Weight Matching}\xspace{}. We use the algorithms described in \cref{lem:deg-one-weighted-time,lem:deg-two-lin-time-weighted} to apply \cref{rule:pend-cycle,rule:max-path,rule:deg-one-weighted}. Deviating from the algorithm described in \cref{thm:w-kernel}, our program, based on empirical observations, applies \cref{rule:pend-cycle,rule:max-path,rule:deg-one-weighted,rule:deg-zero-weighted} as long as possible. Hence, the kernelization does not run in linear time but further shrinks the input graph. \begin{table} \caption{Set of solvers we used in our experiments. Here, ``\textsc{MM}$\leadsto$\textsc{W-PM}'' is the folklore reduction from \textsc{Maximum-Cardinality Matching}\xspace to \textsc{Minimum Weighted Perfect Matching} and ``\textsc{W-M}$\leadsto$\textsc{W-PM}'' is the folklore reduction from \textsc{Maximum-Weight Matching}\xspace to \textsc{Minimum Weighted Perfect Matching}.
} \centering \begin{tabular}{l r r r} \toprule acronym & implementation by & core algorithm & language \\ \midrule \texttt{KP-Edm}\xspace&\citet{KP98}&\citet{Edm65} & C\\ \texttt{Kol-Edm}\xspace& \citet{Kol09} & \citet{Edm65,Edm65-2},\textsc{MM}$\leadsto$\textsc{W-PM} & C++\\ \texttt{Kol-Edm-W}\xspace& \citet{Kol09} & \citet{Edm65,Edm65-2},\textsc{W-M}$\leadsto$\textsc{W-PM} & C++\\ \texttt{HS-MV}\xspace& \citet{HSt17} & \citet{MV80} & Python 2.7\\ \bottomrule \end{tabular} \label{tab:solvers} \end{table} \paragraph{Used solvers.} To test the effect of our data reduction rules, we compare the running time of a solver on an input instance against the running time of our kernelization procedure plus the running time of the same solver on the output of our kernelization procedure. We refer to \cref{tab:solvers} for an overview of the tested solvers. For the data reduction rules for \textsc{Maximum-Weight Matching}\xspace we used the solver of \citet{Kol09} (implemented in C++), which is a fine-tuned version of Edmonds' algorithm for \textsc{Minimum-Weight Perfect Matching}~\cite{Edm65,Edm65-2}. Note that the solver of \citet{Kol09} finds perfect matchings of minimum weight. We thus use this solver on graphs obtained from applying the folklore reduction from \textsc{Maximum-Weight Matching}\xspace (and thus also from \textsc{Maximum-Cardinality Matching}\xspace) to \textsc{Minimum-Weight Perfect Matching}. Applied to a graph~$G$ with~$n$ vertices and~$m$ edges, the reduction adds a copy of~$G$ and adds a weight-zero edge between each vertex in~$G$ and its copy. Thus, the resulting graph can be computed in linear time and has~$2n$ vertices and~$2m + n$ edges. To the best of our knowledge, the solver of \citet{Kol09} plus the folklore reduction yields the currently fastest algorithm for \textsc{Maximum-Weight Matching}\xspace.
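The folklore reduction just described is easy to state in code. Below is a Python sketch in our own formulation (negating the weights is one standard way to turn a minimum-weight perfect matching solver into a maximum-weight matching solver; the actual implementation may differ in these details):

```python
def mwm_to_min_weight_perfect(n, edges):
    """Duplicate the graph and join vertex i to its copy i + n with a
    weight-zero edge; negate all edge weights so that a minimum-weight
    perfect matching of the result induces a maximum-weight matching
    of the input (unmatched vertices pair up with their copies for free).
    edges: list of (u, v, w) with vertices 0..n-1 and w >= 0.
    Returns the edge list of the new graph on 2n vertices."""
    out = []
    for u, v, w in edges:
        out.append((u, v, -w))           # original edge
        out.append((u + n, v + n, -w))   # its copy
    out.extend((i, i + n, 0) for i in range(n))  # vertex-to-copy links
    return out
```

As stated above, the output has $2n$ vertices and $2m + n$ edges and is computed in linear time.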
For the rest of this paper, \texttt{Kol-Edm-W}\xspace denotes the solver of \citet{Kol09} plus the folklore reduction for \textsc{Maximum-Weight Matching}\xspace. To test the data reduction rules for \textsc{Maximum-Cardinality Matching}\xspace, we used three different solvers. First, we used the solver (denoted by \texttt{Kol-Edm}\xspace) of \citet{Kol09} plus the folklore reduction for \textsc{Maximum-Cardinality Matching}\xspace, which we get basically for free from the weighted case. Second, we used the solver (denoted by \texttt{KP-Edm}\xspace) of \citet{KP98} (implemented in C), which is a fine-tuned version of Edmonds' algorithm~\cite{Edm65}. To the best of our knowledge, this is still the fastest algorithm in practice. In our experiments, \texttt{KP-Edm}\xspace was clearly the fastest solver. Third, we used the solver (denoted by \texttt{HS-MV}\xspace) of \citet{HSt17} (implemented in Python).\footnote{In a few cases \texttt{HS-MV}\xspace returned an edge set that is not a matching. However, a maximum-cardinality matching was easily recoverable from the returned edge set by removing one or two edges. The authors (Huang and Stein) are working on a fix for this issue.} This is the only implementation of the Micali--Vazirani algorithm~\cite{MV80} we are aware of. The Micali--Vazirani algorithm currently has the best asymptotic worst-case running time. However, in our tests \texttt{HS-MV}\xspace (Python) was clearly outperformed by \texttt{KP-Edm}\xspace (C). \subsection{Efficiency and Effectiveness of our Data Reduction Rules} \label{sec:eval-kernel} \paragraph{Effectiveness of our rules.} The effectiveness of our kernelization algorithms is displayed in \Cref{fig:kernel-size2}: \begin{figure} \caption{Kernel sizes (in \%; 100\,\% = $n+m$ of input graph) for several subsets of our data reduction rules.
We tested: all unweighted rules (\Cref{rule:crown,rule:deg-zero-one-vertices,rule:deg-two-vertices}).} \label{fig:kernel-size2} \end{figure} Few graphs remained almost unchanged while other graphs were essentially solved by the kernelization algorithm. For the unweighted case the situation is as follows: On the~44 tested graphs, on average 72\% of the vertices and edges are removed by the kernelization; the median is 82\%. The least amenable graph was \texttt{amazon0302} with a size reduction of only 7\%. In contrast, on 16 out of the~44~graphs the kernelization algorithm removes more than 99\% of the vertices and edges. This is mostly due to \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices}: if \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices} are exhaustively applied, then an application of the crown rule (\cref{rule:crown}) substantially reduces the kernel size further on only four instances. Moreover, a closer look at how often the degree-based rules \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices} are applied reveals that on the majority of our tested graphs \cref{rule:deg-zero-one-vertices} is applied twice as often as \cref{rule:deg-two-vertices}. While the data reduction rules are less effective in the weighted case (see \cref{fig:kernel-size2}), they still reduce the graphs on average by 51\%, with the median value being a bit lower at 48\%. The least amenable graph is again \texttt{amazon0302} with a size reduction of only 3\%. \paragraph{Efficiency of our rules.} In \cref{fig:crown-is-slow}, we compare the running times of the \texttt{KP-Edm}\xspace solver (the fastest state-of-the-art solver on our instances for the unweighted case) when applied directly on the input graph with the running times of our data reduction rules for the unweighted case (\Cref{rule:deg-zero-one-vertices,rule:deg-two-vertices,rule:crown}).
\pgfplotstablesort[sort key=k_degree_result_time]{\sortedTable}{\namesTable} \begin{figure} \caption{Running time of various data reduction rules and the solvers (unweighted case; without kernelization). To all values 1{} is added.} \label{fig:crown-is-slow} \end{figure} On some graphs it does take more time to apply \cref{rule:crown,rule:deg-zero-one-vertices,rule:deg-two-vertices} than executing the \texttt{KP-Edm}\xspace solver directly. But if only \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices} are applied, then the running time of the kernelization stays far below the running time of the \texttt{KP-Edm}\xspace solver while the resulting kernel size is mostly the same (see \cref{fig:kernel-size2}). Applying \cref{rule:crown} without \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices} is clearly not a good idea, as \texttt{KP-Edm}\xspace is faster in finding a maximum-cardinality matching. In \cref{fig:weighted-times}, we compare the running times of the \texttt{Kol-Edm-W}\xspace solver (the state-of-the-art solver for the weighted case) when applied directly on the input graph with the running times of our data reduction rules for the weighted case (\Cref{rule:max-path,rule:pend-cycle,rule:deg-one-weighted,rule:deg-zero-weighted}). \pgfplotstablesort[sort key=kerneltime,col sep = comma]{\sortedWeightedTable}{data/old-high-weighted-all-results-changed-permutation.csv} \begin{figure} \caption{Running time of applying \Cref{rule:max-path,rule:pend-cycle,rule:deg-one-weighted,rule:deg-zero-weighted}.} \label{fig:weighted-times} \end{figure} The picture is similar to the unweighted case: On most graphs the data reduction rules are applied much faster than the solver (\texttt{Kol-Edm-W}\xspace), but there are a few exceptions.
\paragraph{Advice on which reduction rules to apply.} If one has to find maximum-cardinality matchings on large real-world graphs, then we advise to always apply \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices} before feeding the graph to a solver. Whether or not to apply \cref{rule:crown}, however, depends on the specific type of real-world data at hand and should be tested on a few test cases: In most of our test cases the benefit was rather small compared to the higher running times. For the weighted case, we advise to apply our data reduction rules, but perhaps to invest time in a more efficient implementation of the rules (our implementation of the weighted rules could probably profit from further optimizations). \paragraph{Kernel size: theory versus practice.} In theory, we are used to measuring the effectiveness of data reduction rules in terms of provable upper bounds on the size of the resulting graph (the kernel) as a function of \emph{some} parameter. If \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices} are not applicable, then \cref{thm:fes-lin-kernel} states that the resulting graph has at most~$2k$~vertices and at most~$3k$~edges, where $k$ is the \paramEnv{feedback edge number}. In \cref{fig:kernel-vs-input-vs-fes}, we measure the gap between the actual size of the kernel and the proven upper bound from \cref{thm:fes-lin-kernel}. \begin{figure} \caption{Sizes and bounds (in terms of number of vertices plus edges) of various structures and kernels. To all values 1{} \label{fig:kernel-vs-input-vs-fes} \end{figure} As a matter of fact, we can clearly observe that the upper bound shown in \cref{thm:fes-lin-kernel} is not suitable to explain why \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices} perform so well on real-world graphs: in 41 of our 44 tested graphs the input size itself is already smaller than the guaranteed upper bound of \cref{thm:fes-lin-kernel}.
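The feedback edge number~$k$ used in the bound above is cheap to compute: it is the number of edges left after deleting a spanning forest, i.e. $k = m - n + c$ where $c$ is the number of connected components. A minimal sketch (function name and graph representation are our own, not from the paper):

```python
def feedback_edge_number(n_vertices, edges):
    """k = m - n + c: the number of edges outside a spanning forest,
    computed with a small union-find over an edge list."""
    parent = list(range(n_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    components = n_vertices
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    return len(edges) - n_vertices + components
```

With $k$ in hand, the theorem's guarantee of at most $2k$ vertices plus $3k$ edges can be compared directly against the input size $n + m$, as done in the figure.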
We exclude \cref{thm:w-kernel} from the discussion here, as the upper bounds given there are even weaker than the ones in \cref{thm:fes-lin-kernel}. \cref{rule:crown} and \cref{thm:crown-exhaustive-application} are better suited to explain the results: As can be seen in \Cref{fig:core-vs-size-percentages}, this $2\tau$-bound on the number of vertices is not optimal. \begin{figure} \caption{Relative sizes and theoretical upper bounds (in \%; 100\,\% = $n$) of various structures and kernels. The 2-core (3-core) is the graph resulting from iteratively removing all vertices of degree less than 2 (less than 3). The vertex cover number was computed with an ILP solver; for a very few graphs we could not compute a minimum vertex cover within a couple of hours, and these graphs are skipped in the corresponding plot. } \label{fig:core-vs-size-percentages} \end{figure} However, there is a clear similarity between the lines indicating the theoretical upper bound and the measured kernel size. Moreover, on roughly~$2/3$ of the instances the worst-case upper bound~$2\tau$ is smaller than~$n$, that is, the deletion of some vertices is guaranteed. Overall, \cref{thm:crown-exhaustive-application} seems to deliver the better theoretical explanation. However, \cref{thm:crown-exhaustive-application} is based on \cref{rule:crown} and, as can be seen in \cref{fig:kernel-size2}, just applying \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices} is almost always better than only applying \cref{rule:crown} (the exceptions are the graphs ``web-NotreDame'', ``web-Stanford'', ``web-Google'', and ``web-BerkStan''). Thus, while \cref{thm:crown-exhaustive-application} somewhat explains the effects of \cref{rule:crown}, no explanation is provided for the good performance of \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices}. Summarizing, the current theoretical upper bounds for the kernel size need improvement.
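The 2-core and 3-core used as comparison structures in \cref{fig:core-vs-size-percentages} are obtained by standard iterative peeling; the following sketch is our own illustration of that procedure, not the paper's code:

```python
from collections import deque

def k_core(adj, k):
    """The k-core: iteratively delete vertices of degree < k
    (degrees only decrease, so one pass with a work queue suffices)."""
    adj = {v: set(ns) for v, ns in adj.items()}   # defensive copy
    queue = deque(v for v in adj if len(adj[v]) < k)
    while queue:
        v = queue.popleft()
        if v not in adj:
            continue                      # already deleted
        for w in adj[v]:
            adj[w].discard(v)
            if len(adj[w]) < k:           # degree dropped below k
                queue.append(w)
        del adj[v]
    return adj
```

For a triangle with one pendant vertex, the 2-core is the triangle and the 3-core is empty.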
The most promising route seems to be a multivariate analysis in the sense that one should use more than one parameter in the analysis~\cite{Nie10}. Of course, the challenging part here is finding the ``correct'' parameters. \subsection{Running times for Maximum-Cardinality Matching} \label{sec:eval-time-unweighted} In this section we evaluate the effect on the running time of state-of-the-art solvers (see \cref{tab:solvers}) for \textsc{Maximum-Cardinality Matching}\xspace if \cref{rule:crown,rule:deg-zero-one-vertices,rule:deg-two-vertices} are applied in advance. Note that all reported running times involving \texttt{Kol-Edm}\xspace are averages over 100 runs in which we randomly permute the vertex indices in the input. Although this permutation yields an isomorphic graph, we empirically observed that in the unweighted case the running time of \texttt{Kol-Edm}\xspace heavily depends on the permutation. For example, choosing a ``good'' or a ``bad'' permutation for the same graph may yield a speedup of a factor of 20 or more. Precise data on the spectrum of running time variation (for graphs where the time limit was not reached) are shown in \cref{fig:blossom5runningtime}. \begin{figure} \caption{ The spectrum of the \texttt{Kol-Edm} \label{fig:blossom5runningtime} \end{figure} The running times of all other solvers and of our kernelization algorithm were only marginally affected by changing the permutation. We noticed that the different implementations vary greatly in the time they need for parsing the input graph (especially on the smaller graphs, \texttt{HS-MV}\xspace (Python) needs more time to parse the graph than \texttt{KP-Edm}\xspace (C) needs to find a maximum-cardinality matching). Moreover, we mainly care about the speedup of the respective algorithms (when run on the kernel instead of the original input) and not about the speedup of the graph parsing. Hence, we neglect the time to parse the input graph in all running-time measures and discussions.
Note that we do \emph{not} neglect the time the implementation needs for parsing the kernel, as this is something that only needs to be done with data reduction but not without. More precisely, for runs without data reduction, we report the time the particular implementation needs \emph{after} the graph was loaded; for runs with data reduction we report the time of our data reduction rules plus the total time of the implementation (including parsing the kernel). \begin{figure} \caption{Running time of the three state-of-the-art solvers with and without kernelization (each mark indicates one instance). The inclined solid/dashed/dash-dotted/dotted lines indicate a factor of 1/2/5/25 difference in the running time. To all values 1{} \label{fig:new-running-times} \end{figure} In \cref{fig:new-running-times}, we compare the running times of the solvers when they are directly applied on the input instances against the running time of the kernelization plus the running time of the solvers on the resulting kernel. The running time of the \texttt{Kol-Edm}\xspace solver is improved on average by a factor of $157.30$ (median: $29.27$) when the kernelization is applied (left side of \cref{fig:new-running-times}). The running time of the \texttt{HS-MV}\xspace solver is improved on average by a factor of $608.79$ (median: $28.87$) when the kernelization is applied (right side of \cref{fig:new-running-times}). Hence, it is safe to say that these two algorithms clearly benefit from the kernelization. However, on the instance \emph{amazon0302} the \texttt{HS-MV}\xspace solver is 25\% faster without the kernelization, and on the instance \emph{facebook-combined} the \texttt{Kol-Edm}\xspace solver is (on average over the hundred runs) 6\% faster without the kernelization. On all other instances the running times of both solvers were improved by the kernelization.
In contrast, the message conveyed by the results for the \texttt{KP-Edm}\xspace solver is less clear (right-hand side of \cref{fig:new-running-times}). The running time of the \texttt{KP-Edm}\xspace solver is impaired by the kernelization on 9 of our 44 instances. On average we \emph{still} get an improvement of the running time by a factor of $4.70$ (median: $2.20$). The reason for the unclear result for the \texttt{KP-Edm}\xspace solver is that most of the instances are too easy for it, that is, they are solved very quickly with or without kernelization. On harder instances (like our four largest graphs) we observed a more significant speedup due to the kernelization. \subsection{Running times for Maximum-Weight Matching} \label{sec:eval-time-weighted} In this section, we evaluate \cref{rule:max-path,rule:pend-cycle,rule:deg-one-weighted,rule:deg-zero-weighted} for \textsc{Maximum-Weight Matching}\xspace. The weighted graphs we used for our tests were generated from the unweighted graphs by adding edge weights between~1 and~1000 chosen independently and uniformly at random. In the weighted case we tested our kernelization only together with the \texttt{Kol-Edm}\xspaceW solver, since the \texttt{KP-Edm}\xspace and \texttt{HS-MV}\xspace solvers only work in the unweighted setting. In contrast to the unweighted case (see \cref{fig:blossom5runningtime}), we could not observe that the running time of \texttt{Kol-Edm}\xspaceW is affected when the vertices are permuted. For consistency, however, we take the average running times in the weighted case as well. Note that for different permutations the data reduction rules were applied in a different order, resulting in kernels slightly differing in size (see \cref{fig:not-confluent} for an example).
\begin{figure} \caption{In the middle we see two weighted graphs where both \cref{rule:deg-one-weighted} \label{fig:not-confluent} \end{figure} \begin{figure} \caption{Running time comparison with and without using our kernelization (\Cref{rule:max-path,rule:pend-cycle,rule:deg-one-weighted,rule:deg-zero-weighted} \label{fig:kernel-time} \end{figure} In \cref{fig:kernel-time}, we compare the running time of our kernelization with the weighted data reduction rules against that with various unweighted data reduction rules, as well as the running time of \texttt{Kol-Edm}\xspaceW when applied without kernelization (the four largest graphs are missing since we could not solve them without kernelization). Our weighted kernelization algorithm is slower than the one for the unweighted case (by which we mean just \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices}). This is not surprising, as our algorithm for \cref{rule:max-path,rule:pend-cycle,rule:deg-one-weighted,rule:deg-zero-weighted} is more involved than the one for \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices}. Furthermore, the solver of \citet{Kol09} is significantly faster in the weighted case (\texttt{Kol-Edm}\xspaceW) than in the unweighted case (\texttt{Kol-Edm}\xspace). On three graphs, \texttt{Kol-Edm}\xspaceW computes a maximum-weight matching faster than we can produce the kernel. However, on most graphs, our kernelization algorithm reduces the overall running time of \texttt{Kol-Edm}\xspaceW (on average by a factor of $12.72$; median: $1.40$). Note that in the weighted case, too, the kernelization is beneficial more often than not. \section{Conclusion}\label{sec:conclusion} Our work shows that it pays off in practice to use (linear-time) data reduction rules for computing maximum (unweighted and weighted) matchings. The current state of our theoretical analysis (kernel size upper bounds), however, is insufficient to fully explain this success.
Here, a multivariate approach in which more than one parameter is taken into consideration seems like the natural next step~\cite{Nie10}. Finding the \emph{right} parameters is the challenging part here. In fact, adding a new degree-one neighbor to each vertex of any graph~$G$ results in a graph~$G'$ in which \cref{rule:deg-zero-one-vertices} reduces everything, whereas the original graph~$G$ might not be amenable at all to the data reduction rules. Many graph parameters (including the feedback edge number) cannot differentiate between~$G$ and~$G'$ and, hence, are not suited to explain the practical effectiveness. \paragraph{Future research for unweighted matchings.} Through the connection between \textsc{Vertex Cover}\xspace and \textsc{Maximum-Cardinality Matching}\xspace one might be able to transfer further kernelization results from \textsc{Vertex Cover}\xspace to \textsc{Maximum-Cardinality Matching}\xspace. However, there are known limitations: obtaining a kernel for \textsc{Vertex Cover}\xspace with~$O(\tau^{2-\varepsilon})$ edges ($\tau$ is the vertex cover number) for any~$\varepsilon > 0$ is unlikely in the sense that the polynomial hierarchy would collapse~\cite{DM14}. Thus, obtaining a kernel with~$O(\tau^{2-\varepsilon})$ edges for \textsc{Maximum-Cardinality Matching}\xspace requires a new approach that should \emph{not} work for \textsc{Vertex Cover}\xspace. So far, the kernelization algorithms we discuss in this paper \emph{do} work for both \textsc{Maximum-Cardinality Matching}\xspace and \textsc{Vertex Cover}\xspace. However, as \textsc{Maximum-Cardinality Matching}\xspace is polynomial-time solvable, an $O(\tau^{2-\varepsilon})$-sized kernel trivially exists for \textsc{Maximum-Cardinality Matching}\xspace. The challenge here is to find such a kernel that is (near-)linear-time computable. In future research, one might also study the combination of data reduction with linear-time approximation algorithms for matching~\cite{DP14}.
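The pendant construction in this argument is easy to make concrete. The sketch below (names are our own) builds~$G'$; matching every original vertex with its fresh pendant neighbor covers all vertices of~$G'$, so it is a perfect and hence maximum matching, and it is exactly the matching the degree-one rule alone produces:

```python
def add_pendants(adj):
    """G': attach a fresh degree-one neighbor (v, 'pendant') to every
    vertex v of the input graph (adjacency dict of sets)."""
    gp = {v: set(ns) for v, ns in adj.items()}
    for v in list(adj):
        pend = (v, "pendant")          # fresh vertex, distinct from all of V(G)
        gp[v].add(pend)
        gp[pend] = {v}
    return gp
```

Note that this doubles the vertex count but changes the feedback edge number not at all, which is why such parameters cannot distinguish~$G$ from~$G'$.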
Furthermore, it would be interesting to know whether there is an efficient way of applying \cref{rule:relaxed-crown} or a variation of it. The solver of \citet{Kol09} is significantly faster on weighted graphs (\texttt{Kol-Edm}\xspaceW) than on unweighted graphs (\texttt{Kol-Edm}\xspace). We believe that the reason for this is that unweighted graphs contain a lot of symmetries, and unlucky tie-breaking seems to have a strong impact on the solver of \citet{Kol09}. In the weighted case, the performance of the solver of \citet{Kol09} was much more consistent under permutations of the vertices in the input graph. As a consequence, we believe that the following might speed up the algorithm: given an unweighted graph, introduce edge weights such that a maximum-weight matching in the resulting weighted graph is also a maximum-cardinality matching in the unweighted graph. Using the famous Isolation Lemma \cite{MVV87}, one might even enrich and support this with a theoretical analysis. \paragraph{Future research for weighted matchings.} While our naive implementation for the unweighted case proved to be quite fast, the algorithm for the weighted case could benefit from further tuning. Note that in the unweighted case \cref{rule:deg-zero-one-vertices,rule:deg-two-vertices} only make changes in the local neighborhood of the affected vertices. This is not so in the weighted case, where the application of \cref{rule:deg-one-weighted,rule:max-path,rule:pend-cycle} involves iterating over all edges; see \cref{lem:deg-one-weighted-time,lem:deg-two-lin-time-weighted}. Hence, applying the data reduction rules exhaustively requires a larger overhead. Although some improvements in the implementation might be possible, an improved algorithmic approach to exhaustively apply the data reduction rules is needed. Is there a (quasi-)linear-time algorithm to exhaustively apply \cref{rule:deg-zero-weighted,rule:deg-one-weighted,rule:max-path,rule:pend-cycle}?
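The symmetry-breaking weighting suggested above can be sketched as follows: setting $w(e) = C + r_e$ with a large constant $C$ and small random perturbations $r_e$ makes cardinality dominate the total weight, so every maximum-weight matching is also maximum-cardinality, while the random part breaks ties between same-size matchings. This concrete scheme (including the choice of $C$) is our own illustration, not a tested implementation:

```python
import itertools
import random

def cardinality_dominant_weights(edges, rng):
    """w(e) = C + r_e with C > 1000 * |E|: any matching with one more
    edge outweighs any smaller matching, so maximum-weight implies
    maximum-cardinality; r_e perturbs ties between equal-size matchings."""
    r = {e: rng.randint(1, 1000) for e in edges}
    C = (len(edges) + 1) * 1000
    return {e: C + r[e] for e in edges}

def is_matching(es):
    """True iff no two edges in es share an endpoint."""
    seen = set()
    for u, v in es:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True
```

A brute-force check on a tiny graph confirms that the heaviest matching under these weights has maximum cardinality.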
Furthermore, is there a variant of the crown data reduction (\cref{rule:crown}) for \textsc{Maximum-Weight Matching}\xspace? \textbf{Acknowledgement.} We are very grateful to anonymous reviewers of ESA~'18 and of ACM JEA for constructive and detailed feedback. TK was supported by DFG, project FPTinP (NI 369/16). \newpage \appendix \section{Appendix - Full Data Set} \pgfplotstableset{ create on use/maxMatch/.style={ create col/expr={ \thisrow{k_degree_result_matchings}+\thisrow{k_crown_result_matchings}+\thisrow{k_c_matchings} }, } } \begin{table}[h!]\small \caption{A full list of graphs from the SNAP \cite{snap} data set which we used in our test scenario; here~$|V^{=i}|$ is the number of degree-$i$ vertices, $\Delta$ the maximum degree, and ``MCM'' the cardinality of a maximum matching.} \label{tab:unweighted} \centering \pgfplotstabletypeset[columns={filename,misc_n,misc_m,misc_degree_1,misc_degree_2,misc_max_degree,degeneracy_degeneracy,maxMatch}, columns/filename/.style={string type,column name=Graph,column type = {r}}, columns/misc_n/.style={column name=$n$,precision=1,column type = {r}}, columns/misc_m/.style={column name=$m$,precision=1,column type = {r}}, columns/misc_degree_1/.style={column name=$|V^{=1}|$,precision=1,column type = {r}}, columns/misc_degree_2/.style={column name=$|V^{=2}|$,precision=1,column type = {r}}, columns/misc_max_degree/.style={column name=$\Delta$,precision=1,column type = {r}}, columns/degeneracy_degeneracy/.style={column name=degeneracy,precision=1,column type = {r}}, columns/maxMatch/.style={column name=MCM,precision=1,column type = {r}}, every head row/.style ={before row=\toprule, after row=\midrule}, every last row/.style ={after row=\bottomrule},col sep = comma] {data/results.csv} \end{table} \end{document}
\begin{document} \title{On the Separability of Stochastic Geometric Objects, with Applications} \begin{abstract} In this paper, we study the linear separability problem for stochastic geometric objects under the well-known unipoint/multipoint uncertainty models. Let $S=S_R \cup S_B$ be a given set of stochastic bichromatic points, and define $n = \min\{|S_R|, |S_B|\}$ and $N = \max\{|S_R|, |S_B|\}$. We show that the {\it separable-probability} (SP) of $S$ can be computed in $O(nN^{d-1})$ time for $d \geq 3$ and $O(\min\{nN \log N, N^2\})$ time for $d=2$, while the {\it expected separation-margin} (ESM) of $S$ can be computed in $O(nN^{d})$ time for $d \geq 2$. In addition, we give an $\Omega(nN^{d-1})$ {\it witness-based lower bound} for computing SP, which implies the optimality of our algorithm among all those in this category. Also, a hardness result for computing ESM is given to show the difficulty of further improving our algorithm. As an extension, we generalize the same problems from points to general geometric objects, i.e., polytopes and/or balls, and extend our algorithms to solve the generalized SP and ESM problems in $O(nN^{d})$ and $O(nN^{d+1})$ time, respectively. Finally, we present some applications of our algorithms to stochastic convex-hull related problems. \end{abstract} \section{Introduction}\label{sec:introduction} Linear separability describes the property that a set of $d$-dimensional bichromatic (red and blue) points can be separated by a hyperplane such that all the red points lie on one side of the hyperplane and all the blue points lie on the other side. This problem has been well studied for years in computational geometry, and is widely used in machine learning and data mining for data classification. However, existing linear-separation algorithms require that all the input points must have fixed locations, which is rarely true in reality due to imprecise sampling from GPS, robotics sensors, or some other probabilistic systems. 
It is therefore essential to study the conventional linear separability problem under uncertainty. In this paper, we study the linear separability problem under two different uncertainty models, i.e., the {\it unipoint} and {\it multipoint} models \cite{agarwal2014convex}. In the former, each stochastic data point has a fixed and known location, and has a positive probability to exist at that location; whereas in the latter, each stochastic data point occurs in one of discretely-many possible locations with a known probability, and the existence probabilities of each point sum up to at most 1 to allow for its absence. Our focus is to compute the {\it separable-probability} (SP) and the {\it expected separation-margin} (ESM) for a given set of bichromatic stochastic points (or general geometric objects) in $\mathbb{R}^d$ for $d \ge 2$, where the former is the probability that the existent points (or objects) are linearly separable, and the latter is the expectation of the separation-margin of the existent points (or objects). (See Section~\ref{subsec2.2.1} for a detailed and formal definition of the latter.) The brute-force approach of enumerating all possible instances takes exponential runtime, and therefore we propose, in this paper, novel algorithms that carefully compute SP and ESM in a much more efficient way. Furthermore, we show that our approach is highly extensible and can solve many other related problems defined on other types of objects or on multiple colors. To summarize, our main contributions are: \begin{enumerate} \item[(1)] We propose an $O(nN^{d-1})$-time algorithm, which uses linear space, for solving the SP problem when given a set of bichromatic stochastic points in $\mathbb{R}^d$, $d \ge 3$. (The runtime is $O(\min\{N^2, nN\log N\})$ for $d = 2$.) We also show an $\Omega(nN^{d - 1})$ lower bound for all witness-based algorithms, which implies the optimality of our algorithm among all witness-based methods for $d \ge 3$. 
(See Section~\ref{sec:separable-prob}.) \item[(2)] We show that the ESM of the above dataset can be computed in $O(nN^d)$ time for $d \geq 2$, using linear space. A hardness result is also given to show the total number of distinct possible separation-margins is $\Theta(nN^d)$, which implies that it may be difficult to achieve a better runtime. (See Section~\ref{sec:separation-margin}.) \item[(3)] We extend our algorithms to compute the SP and the ESM for datasets containing general stochastic geometric objects, such as polytopes and/or balls. Our generalized algorithms solve the former problem in $O(nN^d)$ time, and the latter in $O(nN^{d+1})$ time, using linear space. (See Section~\ref{sec:extension:shape}.) \item[(4)] We provide some applications of our algorithms to stochastic convex-hull (SCH) related problems. Specifically, by taking advantage of our SP algorithm, we give a method to compute the SCH membership probability, which matches the best known bound but is more direct. Also, we consider some generalized versions of this problem and show how to apply our separability algorithms to solve them efficiently. (See Section~\ref{sec:application}.) \end{enumerate} To provide a smooth flow for the paper, all proofs and some details are deferred to Appendix~\ref{append.proof}. \subsection{Related work}\label{sec:related_work} The study of computational geometry problems under uncertainty is a relatively new topic, and has attracted a lot of attention; see \cite{aggarwal2009survey} and \cite{dalvi2009probabilistic} for two surveys. Different uncertainty models and related problems have been investigated in recent years. See \cite{agarwal2013nearest, agarwal2009indexing, agarwal2012range, deBerg2014separability, evans2008guaranteed, loffler2009data, loffler2010largest, zhang2012nearest} for example. 
The unipoint/multipoint uncertainty model, which we use in this paper, was first defined in \cite{agarwal2014convex, suri2013most}, and has been applied in many recent papers. For instance, in \cite{kamousi2011stochastic}, Kamousi et al. studied the stochastic minimum spanning tree problem, and computed its expected length. Suri et al. investigated the most likely convex hull problem over uncertain data in \cite{suri2013most}; a similar topic was revisited by Agarwal et al. in \cite{agarwal2014convex} to compute the probability that a query point is inside the uncertain hull. In \cite{suri2014most}, Suri and Verbeek studied the most likely Voronoi Diagram (LVD) in $\mathbb{R}^1$ under the unipoint model, and the expected complexity of LVD was further improved by Li et al. in \cite{li2015arrangement}, who explored the stochastic line arrangement problem in $\mathbb{R}^2$. In \cite{agrawal2015skyline}, Agrawal et al. proposed efficient algorithms for the most likely skyline problem in $\mathbb{R}^2$ and gave NP-hardness results in higher dimensions. Recently, in \cite{deBerg2014separability}, de Berg et al. studied the separability problem for a set of bichromatic imprecise points in $\mathbb{R}^2$ in a setting where each point is drawn from an imprecision region. Very recently, in an unpublished manuscript \cite{martin2015seperability}, one of our proposed problems, the SP computing problem, was independently studied by Fink et al. under the same uncertainty model, and similar results were obtained, i.e., an $O((n+N)^d) = O(N^d)$-time and $O(n+N) = O(N)$-space algorithm for computing the SP of a given set of bichromatic stochastic points in $\mathbb{R}^d$. In fact, before the final step of the improvement, the time bound achieved was $O(N^d \log N)$, and then {\it duality} \cite{deBerg_CG_book} and {\it topological sweep} \cite{Edelsbrunner:1986:topological_sweep} techniques were applied to eliminate the $\log$ factor.
On the other hand, our algorithm runs initially in $O(nN^{d-1} \log N)$ time using linear space, and can be further improved to $O(nN^{d-1})$ runtime by using the same techniques. (A careful discussion will be given in Section~\ref{sec:separable-prob}.) Note that the algorithm in \cite{martin2015seperability} always runs in $\Theta(N^d)$ time (no matter how small $n$ is) and, more importantly, this runtime appears to be intrinsic: all possible $d$-tuples of (distinct) points have to be enumerated in order to correctly compute the SP. Our time bound matches the bound in \cite{martin2015seperability} when $n = \Theta(N)$. However, when $n \ll N$, our method is much faster, and, in fact, this is usually the case in many real classification problems in machine learning and data mining. In terms of how to solve the problem, Fink et al.'s method is very different from ours. Their computation of SP relies on an additional dummy anchor point, and based on this point, the probability is computed in an inclusion-exclusion manner. On the other hand, our method solves the problem more directly: it does not introduce any additional points and the SP is eventually computed using a simple addition principle. Furthermore, our algorithm can be easily extended to solve many generalized problems (e.g., multiple colors, general geometric objects, etc.) \subsection{Basic notations and preliminaries} \label{sec:basic-notations} Throughout this paper, the basic notations we use are the following. We use $S = S_R \cup S_B$ to denote the given stochastic bichromatic dataset, where $S_R$ (resp. $S_B$) is a set of red (resp. blue) stochastic points (or general geometric objects in Section~\ref{sec:extension:shape}). The notations $n$ and $N$ are used to denote the sizes of the smaller and larger classes of $S$ respectively, i.e., $n = \min\{|S_R|,|S_B|\}$ and $N = \max\{|S_R|,|S_B|\}$, and $d$ is used to denote the dimension. In this paper, we always assume that $d$ is a constant. 
When we need to denote a normal bichromatic dataset (without considering the existence probabilities), we usually use the notation $T = T_R \cup T_B$. The coordinates of a point $x \in \mathbb{R}^d$ are denoted as $x^{(1)}, x^{(2)}, \dots, x^{(d)}$. If $T$ is a dataset in $\mathbb{R}^d$ and $U$ is some linear subspace of $\mathbb{R}^d$, we use $T^U$ to denote a new dataset in the space $U$, which is obtained by orthogonally projecting $T$ from $\mathbb{R}^d$ onto $U$. We say a set of bichromatic points is {\it strongly separable} iff there exists a hyperplane, $h$, such that all the red points strictly lie on one side of $h$ while all the blue points strictly lie on the other side. The concept of {\it weakly separable} is defined similarly, except that we allow points to lie on the hyperplane. A hyperplane that strongly (resp., weakly) separates a set of bichromatic points is called a {\it strong} (resp., {\it weak}) {\it separator}. The following is a fundamental result we will use in various places of this paper. \begin{theorem} \label{th1} Suppose $T = T_R \cup T_B$ is a set of bichromatic points in $\mathbb{R}^d$. $T$ is strongly separable by a hyperplane iff $\mathcal{CH}(T_R) \cap \mathcal{CH}(T_B) = \emptyset$, where $\mathcal{CH}(\cdot)$ denotes the convex hull of the indicated point set. \end{theorem} \section{Separable-probability}\label{sec:separable-prob} The separable-probability (SP) of a stochastic bichromatic dataset $S = S_R \cup S_B$ in $\mathbb{R}^d$ is the probability that the existent points in $S$ (obtained by a random experiment) are (strongly) separable. For simplicity, we describe here the details of our algorithm under the unipoint model only. The generalization of our algorithm to the multipoint model is quite straightforward and we discuss it in Appendix~\ref{sec:multipoint}.
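In the plane, the criterion of Theorem~\ref{th1} can be tested directly: $\mathcal{CH}(T_R) \cap \mathcal{CH}(T_B) = \emptyset$ iff the origin lies outside the convex hull of the pairwise differences $\{r - b\}$, which in 2D reduces to checking whether all difference vectors fit in an open half-plane. The following sketch is our own illustration (exact comparisons; floating-point tolerances are ignored), not one of the paper's algorithms:

```python
import math

def strongly_separable_2d(red, blue):
    """conv(R) and conv(B) are disjoint iff the origin lies outside
    conv({r - b : r in R, b in B}).  In 2D the origin is strictly
    outside that hull iff some angular gap between the difference
    vectors (sorted by angle) exceeds pi."""
    diffs = [(rx - bx, ry - by) for rx, ry in red for bx, by in blue]
    if not diffs:
        return True                      # an empty class is always separable
    if any(x == 0 and y == 0 for x, y in diffs):
        return False                     # a shared point: the hulls intersect
    angles = sorted(math.atan2(y, x) for x, y in diffs)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(angles[0] + 2 * math.pi - angles[-1])   # wrap-around gap
    return max(gaps) > math.pi
```

A gap of exactly $\pi$ corresponds to the origin on the hull boundary, i.e. only weak separability, and is correctly rejected by the strict inequality.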
Trivially, given a dataset $S$, its SP can be computed by simply enumerating all the $2^{|S|}$ possible instances of $S$ and summing up the probabilities of the separable ones, which takes exponential time. In order to solve the problem more efficiently than by brute force, one has to categorize all the separable instances of $S$ into a reasonable number of groups such that the sum of the probabilities of the instances in each group can be easily computed. A natural approach is to charge each separable instance to a {\it unique} separator, and use that as the key to do the grouping. The uniqueness requirement here is to avoid over-counting. In addition, all these separators should be easy to enumerate and the sum of the probabilities of those separable instances charged to each separator should be efficiently computable. In $\mathbb{R}^1$ and $\mathbb{R}^2$, this is easy to achieve. For example, in $\mathbb{R}^1$, given a separable instance, all the possible separators form a segment, and we can choose the leftmost endpoint as the unique separator; in $\mathbb{R}^2$, all the possible separators of a separable instance form a double-fan, and we can choose the most counterclockwise one, which goes through exactly one red and one blue point, as the unique separator. (See Figure~\ref{fig:intro_to_extreme} for an illustration.) It is easy to see that, with the separators chosen above, the SP of the stochastic dataset can be easily computed by considering the sum of the probabilities of the instances charged to each such separator. However, defining such a separator for a separable instance beyond $\mathbb{R}^2$ turns out to be challenging. To solve this problem, we define an important concept called the {\it extreme separator} in Section~\ref{subsec2.1.1}, and apply this concept to compute the SP of $S$ in Section~\ref{subsec2.1.2}.
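For intuition, the brute-force computation mentioned at the beginning of this section is easy to write down in $\mathbb{R}^1$ under the unipoint model; this $O(2^{|S|})$ sketch is our own baseline for checking small examples, not one of the paper's algorithms:

```python
from itertools import product

def separable_1d(reds, blues):
    """An instance on the line is strongly separable iff one color lies
    strictly to the left of the other (empty classes are separable)."""
    if not reds or not blues:
        return True
    return max(reds) < min(blues) or max(blues) < min(reds)

def brute_force_sp(red_pts, blue_pts):
    """Sum the probabilities of all 2^|S| separable instances.
    Each point is a (coordinate, probability) pair (unipoint model)."""
    pts = ([(x, p, 'R') for x, p in red_pts]
           + [(x, p, 'B') for x, p in blue_pts])
    sp = 0.0
    for mask in product([False, True], repeat=len(pts)):
        prob = 1.0
        reds, blues = [], []
        for present, (x, p, color) in zip(mask, pts):
            prob *= p if present else 1 - p
            if present:
                (reds if color == 'R' else blues).append(x)
        if separable_1d(reds, blues):
            sp += prob
    return sp
```

For example, red points at $0$ and $2$ and a blue point at $1$, each existing with probability $1/2$, give SP $= 7/8$: only the instance with all three points present is inseparable.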
\begin{figure} \caption{Illustrating the unique separator we choose for separable instances in $\mathbb{R} \label{fig:intro_to_extreme} \end{figure} \begin{figure} \caption{Illustrating $U^*$ in $\mathbb{R} \label{fig:p0p1} \end{figure} \begin{figure} \caption{Illustrating the extreme separator in $\mathbb{R} \label{fig:extreme} \end{figure} For convenience, we assume that the points given in $S$ have the \textit{strong general position} property (SGPP), which is defined as follows. Let $I=\{i_1,i_2,\dots,i_{|I|}\}$ be any subset of the index set $\{1,2,\dots,d\}$ where $i_1<i_2<\dots<i_{|I|}$. We define a projection function $\phi_I:\mathbb{R}^d \rightarrow \mathbb{R}^{|I|}$ as $\phi_I(x) = (x^{(i_1)},x^{(i_2)},\dots,x^{(i_{|I|})})$. Also, for any $X \subseteq \mathbb{R}^d$, we define $\Phi_I(X)=\{\phi_I(x):x \in X\}$. Let $T$ be a set of points in $\mathbb{R}^d$. When $d \leq 2$, we say $T$ has SGPP iff it is in general (linear) position, i.e., affinely independent. When $d \geq 3$, we say $T$ has SGPP iff \\ \textbf{1)} $T$ is in general (linear) position; \\ \textbf{2)} $\Phi_J(T)$ has SGPP for $J = \{3,4,\dots,d\}$.\\ {\bf Remark.} Note that this assumption is actually not stronger than the conventional general position assumption, since one can easily apply linear transformations to make a set of points in general position have SGPP (the separability of a dataset is invariant under non-singular linear transformations). In fact, a randomly-generated non-singular $d \times d$ matrix $\mathcal{A}$ transforms a set of points in general position into a set having SGPP with probability almost 1. Though this method works well in practice, we give, in Appendix~\ref{append:compute_matrix_A}, a deterministic algorithm that is guaranteed to output a valid $\mathcal{A}$ in $O(N^{d-1})$ time (without increasing the overall runtime).
\subsection{Extreme separator} \label{subsec2.1.1} To solve the SP problem, we define an important concept called the \textit{extreme separator} through a sequence of steps. Suppose a separable bichromatic dataset $T = T_R \cup T_B$ with SGPP is given in $\mathbb{R}^d$ ($d \geq 2$). Assume that both $T_R$ and $T_B$ are nonempty. Let $\mathcal{V}$ be the collection of the $(d-1)$-dim linear subspaces of $\mathbb{R}^d$ whose equation is of the form $ax_1+bx_2=0$, where $a$ and $b$ are constants not both equal to 0. In other words, $\mathcal{V}$ contains all the $(d-1)$-dim linear subspaces that are perpendicular to the $x_1x_2$-plane and go through the origin. Then there is a natural one-to-one correspondence between $\mathcal{V}$ and $\mathbb{P}^1$ (the 1-dim projective space), \begin{equation*} \sigma:[ax_1+bx_2=0]\ \longleftrightarrow\ [a:b]. \end{equation*} For convenience, we use $\sigma$ to denote the maps in both directions in the rest of this paper. We now define a map $\pi_T:\mathcal{V} \rightarrow \{0,1\}$ as follows. For any $V \in \mathcal{V}$, we orthogonally project all the points in $T$ onto $V$ and use $T^V = T_R^V \cup T_B^V$ to denote the new dataset after projection. If $T^V$ is strongly separable, we set $\pi_T(V)=1$. Otherwise, we set $\pi_T(V)=0$. The map $\pi_T$ induces another map $\pi_T^*:\mathbb{P}^1 \rightarrow \{0,1\}$ by composing with the correspondence $\sigma$. Let $P_0$ and $P_1$ be the pre-images of $\{0\}$ and $\{1\}$ under $\pi_T^*$, respectively (see Figure~\ref{fig:p0p1}). By applying Theorem \ref{th1}, it is easy to prove the following. \begin{theorem} \label{th2} $P_0$ is a connected closed subspace of $\mathbb{P}^1$. $P_0 = \emptyset$ iff $\Phi_J(T)$ is strongly separable in $\mathbb{R}^{d-2}$ for $J = \{3,4,\dots,d\}$. \end{theorem} We now have two cases, i.e., $P_0 \neq \emptyset$ and $P_0 = \emptyset$. If $P_0 \neq \emptyset$, we define the extreme separator of $T$ as follows.
Since $P_0$ is a connected closed subspace of $\mathbb{P}^1$, it has a unique clockwise boundary point $u^*$ (i.e., $u^*$ is the last point of $P_0$ in the clockwise direction). Let $U^*=\sigma(u^*)$ be the linear subspace in $\mathcal{V}$ corresponding to $u^*$ (see Figure~\ref{fig:p0p1} again). The following theorem reveals the separability property of $T^{U^*}$. \begin{theorem} \label{th3} $T^{U^*}$ is weakly separable and there exists exactly one weak separator. Furthermore, the unique weak separator of $T^{U^*}$ goes through exactly $d$ points, of which at least one is in $T_R^{U^*}$ and one is in $T_B^{U^*}$. \end{theorem} \begin{definition} (\textit{derived separator}) Let $U$ be a $k$-dim linear subspace ($k<d$) of $\mathbb{R}^d$. Suppose $h$ is a strong (resp., weak) separator of $T^U$ in the space $U$. It is easy to see that the pre-image, $h'$, of $h$ under the orthogonal projection $\mathbb{R}^d \rightarrow U$ is a strong (resp., weak) separator of $T$ in $\mathbb{R}^d$. We call $h'$ the \textit{derived separator} of $h$ in $\mathbb{R}^d$. \end{definition} Let $h^*$ be the unique weak separator of $T^{U^*}$. We define the \textit{extreme separator} of $T$ as the derived separator of $h^*$ in $\mathbb{R}^d$. (See Figure~\ref{fig:extreme}.) At the same time, we call $U^*$ the \textit{auxiliary subspace} defining the extreme separator. Clearly, the extreme separator and the auxiliary subspace are perpendicular to each other. On the other hand, if $P_0 = \emptyset$, we recursively define the extreme separator of $T$ as the derived separator of the extreme separator of $\Phi_J(T)$, for $J = \{3,4,\dots,d\}$. Note that $P_0$ is nonempty when $d=2$. To complete this recursive definition, we define the extreme separator in $\mathbb{R}^1$ as the weak separator (which is a point) with the smallest coordinate. Note that the above definition of the extreme separator is only for the case that both $T_R$ and $T_B$ are nonempty.
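In $d=2$ the map $\pi_T$ is easy to visualize: each $V\in\mathcal{V}$ is a line through the origin, the projection images are one-dimensional, and strong separability of the images can be tested directly. The Python sketch below (on a hypothetical dataset) samples $\pi_T^*$ on a grid over $\mathbb{P}^1$ and checks that the sampled zero set $P_0$ forms a single arc, as Theorem~\ref{th2} predicts.

```python
import math

# Hypothetical bichromatic dataset in R^2 (d = 2, so each V in the family
# is a line through the origin and the projected images live in R^1).
reds  = [(0.0, 0.0), (1.0, 0.2)]
blues = [(0.1, 1.0), (0.9, 1.3)]

def pi_T(theta):
    """pi_T(V) for V: cos(theta)*x1 + sin(theta)*x2 = 0.
    Project T onto V and test strong separability of the 1-D images."""
    ux, uy = -math.sin(theta), math.cos(theta)   # unit direction along the line V
    r = [x * ux + y * uy for x, y in reds]
    b = [x * ux + y * uy for x, y in blues]
    return 1 if (max(r) < min(b) or max(b) < min(r)) else 0

# Sample pi_T^* over P^1: theta in [0, pi), and theta, theta + pi give the
# same subspace, so the grid is treated cyclically.
K = 360
vals = [pi_T(math.pi * k / K) for k in range(K)]
```

On this dataset both values occur, and the zeros form one contiguous cyclic run (at most one $1\to 0$ transition around the grid), which is the discrete counterpart of $P_0$ being connected.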
In the trivial case where $T_R$ and/or $T_B$ is empty, we simply define the extreme separator as $x_d = \infty$. To understand the intuition for the extreme separator, let us consider the case $d=3$. Imagine there is a plane rotating clockwise around the $x_3$-axis. We keep projecting the points in $T$ (orthogonally) to that plane and track the separability of the projection images. If the images are always separable, then the extreme separator is defined recursively. Otherwise, there is a closed period of time in which the images are inseparable, which is subsequently followed by an open period in which the images are separable. At the transition between the two periods (from the inseparable one to the separable one), the images are weakly separable, with a unique weak separator. The rotating plane at this moment is just the auxiliary subspace, and the extreme separator is obtained by orthogonally ``extending'' the unique weak separator to $\mathbb{R}^3$. \subsection{Computing the separable-probability} \label{subsec2.1.2} We remind the reader that $n = \min\{|S_R|,|S_B|\}$ and $N = \max\{|S_R|,|S_B|\}$. Set $J = \{3,4,\dots,d\}$. If the existent points in $S$ are separable, then there are two cases: 1) the extreme separator of the existent points is defined recursively (the case of $P_0 = \emptyset$) or equal to $x_d = \infty$ (the trivial case); 2) the extreme separator of the existent points is directly defined in $\mathbb{R}^d$ (the case of $P_0 \neq \emptyset$). These two cases are clearly disjoint so that the SP can be computed as the sum of their probabilities. By applying Theorem \ref{th2}, the probability of the first case is equal to the SP of $\Phi_J(S)$. In the second case, according to Theorem \ref{th3}, the extreme separator goes through exactly $d$ points (of which at least one is red and one is blue).
Thus, the SP of $S$ can be computed as \begin{equation*} \mathit{Sep}(S) = \mathit{Sep}(\Phi_J(S)) + \sum_{h \in H_S} \tau_S(h), \end{equation*} where $H_S$ is the set of the hyperplanes that go through exactly $d$ points (of which at least one is red and one is blue) in $S$ and, for $h \in H_S$, $\tau_S(h)$ is the probability that the extreme separator of the existent points is $h$. \begin{figure} \caption{Illustrating the location of $o$. The space in the figure is a 2-dim subspace of $\mathbb{R}^d$.} \label{fig:o} \end{figure} Clearly, for each $h \in H_S$, there is a unique element $U^* \in \mathcal{V}$ perpendicular to it ($h$ can never be parallel to the $x_1x_2$-plane due to the SGPP of $S$). If $h$ is indeed the extreme separator of the existent points, then $U^*$ must be the auxiliary subspace. Let $E = E_R \cup E_B$ be the set of the points on $h$. In order to compute $\tau_S(h)$, we investigate the conditions for $h$ to be the extreme separator of the existent points. First, as the $d$ points on $h$, the points in $E$ must exist. Second, because the existent points should be \textit{weakly} (but not strongly) separable after being projected onto $U^*$, there must exist $\hat{r} \in \mathcal{CH}(E_R)$ and $\hat{b} \in \mathcal{CH}(E_B)$ whose projection images on $U^*$ coincide, according to Theorem \ref{th1}. (Actually, such $\hat{r}$ and $\hat{b}$ are unique if they exist, due to the SGPP of $S$.) Finally, since the extreme separator should weakly separate the existent points, all the red existent points must lie on one side of $h$ while all the blue ones must lie on the other side, except the points in $E$. Also, which side is the red one and which is the blue one is determined, since $\sigma(U^*)$ must be the \textit{clockwise} boundary of $P_0$.
To distinguish the red/blue side of $h$, we define, based on $\hat{r}$ and $\hat{b}$, an indicator $o = (o^{(1)},o^{(2)},\dots,o^{(d)})$, where
\begin{align*}
o^{(1)} &= \hat{r}^{(1)}+(\hat{b}^{(2)}-\hat{r}^{(2)}), \\
o^{(2)} &= \hat{r}^{(2)}+(\hat{r}^{(1)}-\hat{b}^{(1)}), \\
o^{(i)} &= \hat{r}^{(i)} = \hat{b}^{(i)} \quad \text{for } i \in J.
\end{align*}
(See Figure~\ref{fig:o} for the location of $o$.) It is easy to see that, when all the red (resp., blue) points appear on the same (resp., opposite) side of $h$ w.r.t. $o$, $\sigma(U^*)$ is the \textit{clockwise} boundary of $P_0$. In sum, $h$ is the extreme separator of the existent points iff\\ \textbf{1)} all the points in $E$ exist; \\ \textbf{2)} there are $\hat{r} \in \mathcal{CH}(E_R)$ and $\hat{b} \in \mathcal{CH}(E_B)$ such that their projection images on $U^*$ coincide; \\ \textbf{3)} no red (resp., blue) point on the opposite (resp., same) side of $h$ w.r.t. $o$ exists. Among the three conditions, the second one has nothing to do with the existences of the stochastic points in $S$ and can be verified in constant time. If $h$ violates this condition, then $\tau_S(h)=0$. Otherwise, $\tau_S(h)$ is just equal to the product of the existence probabilities of the points in $E$ and the non-existence probabilities of the points which should not exist due to the third condition. The simplest way to compute it is to scan every point in $S$ once, which takes linear time. This then leads to an $O(nN^d)$ overall time for computing the SP of $S$, since $|H_S|$ is bounded by $O(nN^{d-1})$. To reduce the time complexity, we can apply the idea of \textit{radial-order sort} in \cite{agarwal2014convex}. Specifically, when enumerating the hyperplanes spanned by $d$ points, we first determine $(d-1)$ points and sort all the remaining points around the $(d-2)$-dim subspace spanned by those $(d-1)$ points (similar to polar-angle sorting around a point in $\mathbb{R}^2$).
Then we consider the last point in that sorted order and maintain a sliding window on the sorted list to record the points on one side of the current hyperplane. In this way, each $\tau_S(h)$ can be computed in amortized constant time by updating the previously computed result. The time complexity is then reduced to $O(nN^{d-1} \log N)$ if we spend $O(N \log N)$ time on sorting each time. Inspired by \cite{martin2015seperability}, we can further eliminate the log factor by taking advantage of {\it duality} \cite{deBerg_CG_book} and {\it topological sweep} \cite{Edelsbrunner:1986:topological_sweep} techniques. Thus, the time complexity is finally improved to $O(nN^{d-1})$ for any $d \geq 3$. See Appendix~\ref{append:improving_prob} for details. (In the special case of $d=2$, the runtime becomes $O(N^2)$ instead of $O(nN)$ by applying this improvement, so the final time bound is $O(\min\{nN \log N, N^2\})$.) \subsection{Witness-based lower bound for computing separable-probability} When solving the SP problem, the key idea of our algorithm is to group the probabilities of those separable instances which share the same extreme separator, so that the SP can be efficiently computed by considering the extreme separators instead of single instances. Actually, by extending and abstracting this idea, we are able to get a general framework for computing SP, which we call the \textit{witness-based framework}. Let $S$ be the given stochastic dataset and $\mathcal{I}_S$ be the set of all the separable instances of $S$. The witness-based framework for computing the SP of $S$ is the following. (Here, $\mathcal{P}(\cdot)$ denotes the powerset function.)
\begin{enumerate} \item Define a set $W = \{h_1,\dots,h_m\}$ of hyperplanes (called \textit{witness separators}) with specified weights $w_1,\dots,w_m$ and an implicitly specified witness rule $f:W \rightarrow \mathcal{P}(\mathcal{I}_S)$ such that \begin{itemize} \item the instances in $f(h_i)$ are (either strongly or weakly) separated by $h_i$; \item the witness probability (see Step \textbf{2} below) of each $h_i$ is efficiently computable; \item any instance $I \in \mathcal{I}_S$ satisfies $\sum_{i:\, I \in f(h_i)} w_i = 1$. \end{itemize} We say the witness separator $h_i$ \textit{witnesses} the instances in $f(h_i)$. \item Compute \textbf{efficiently} the \textit{witness probability} of each $h_i \in W$, which is defined as \begin{equation*} \mathit{witP}(h_i) = \sum_{I \in f(h_i)} \mathit{Pr}(I), \end{equation*} i.e., the sum of the probabilities of all the instances witnessed by $h_i$. \item Compute the SP of $S$ by linearly combining the witness probabilities with the specified weights, i.e., \begin{equation*} \mathit{Sep}(S) = \sum_{i=1}^{m}{ \left(w_i \cdot \mathit{witP}(h_i)\right) } = \sum_{I \in \mathcal{I}_S} \mathit{Pr}(I). \end{equation*} \end{enumerate} Note that the witness-based framework is very general. The ways of defining witness separators and specifying witness rules may vary a lot among different witness-based algorithms. Our algorithm and the one introduced in \cite{martin2015seperability}, which are the only two known algorithms for computing SP at this time, both belong to the witness-based framework. Similar frameworks are also used to solve other probability-computing problems. For example, the two algorithms in \cite{agarwal2014convex} for computing convex-hull membership probability are both implemented by defining witness edges/facets and summing up the witness probabilities.
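As a sanity check of the framework, the Python sketch below instantiates it in $\mathbb{R}^1$ on a hypothetical dataset: the witnesses are the dataset points under the leftmost-weak-separator rule plus one trivial witness for instances with an empty colour class, all with weight $w_i = 1$, and the partition condition on the weights is verified by direct enumeration.

```python
import itertools

# Hypothetical stochastic 1-D dataset: (coordinate, colour, existence probability).
pts = [(0.0, 'R', 0.5), (1.0, 'B', 0.7), (1.5, 'R', 0.6), (3.0, 'B', 0.4)]

def instances(pts):
    """Yield (instance, probability) for all 2^|S| instances."""
    for mask in itertools.product([0, 1], repeat=len(pts)):
        prob, inst = 1.0, []
        for bit, (x, c, p) in zip(mask, pts):
            prob *= p if bit else 1 - p
            if bit:
                inst.append((x, c))
        yield inst, prob

def separable(inst):
    r = [x for x, c in inst if c == 'R']
    b = [x for x, c in inst if c == 'B']
    return not r or not b or max(r) < min(b) or max(b) < min(r)

def witnessed_by(w, inst):
    """Witness rule f: 'trivial' witnesses instances with an empty colour
    class; the witness (x, c) witnesses separable instances whose leftmost
    weak separator is x (i.e. x is the rightmost point of the left cluster)."""
    if not separable(inst):
        return False
    r = [x for x, c in inst if c == 'R']
    b = [x for x, c in inst if c == 'B']
    if w == 'trivial':
        return not r or not b
    if not r or not b:
        return False
    x, c = w
    left = r if max(r) < min(b) else b
    side = 'R' if left is r else 'B'
    return c == side and x == max(left)

witnesses = ['trivial'] + [(x, c) for x, c, _ in pts]   # all weights w_i = 1
```

Every separable instance is witnessed by exactly one witness, so with unit weights the sum of the witness probabilities equals $\mathit{Sep}(S)$, as in Step 3 of the framework.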
To the best of our knowledge, up to now, most probability-computing problems under the unipoint/multipoint model are solved by applying ideas close to this framework. Now we show that any SP computing algorithm following the witness-based framework takes at least $\Omega(nN^{d-1})$ time in the worst case, and thus our algorithm is optimal among this category of algorithms for any $d \geq 3$. Clearly, the runtime of a witness-based algorithm is at least $|W| = m$, i.e., the number of the witness separators. Then a question naturally arises: how many witness separators do we need for computing SP? From the above framework, one restriction for $W$ is that each separable instance of $S$ must be witnessed by at least one witness separator $h_i \in W$, i.e., $\mathcal{I}_S = \bigcup_{i=1}^{m} f(h_i)$. Otherwise, the probabilities of the unwitnessed instances in $\mathcal{I}_S$ will not be counted when computing the SP of $S$. It then follows that each separable instance of $S$ must be separated by some $h_i \in W$. We prove that, in the worst case, we always need $\Omega (nN^{d-1})$ hyperplanes to separate all the separable instances of $S$, which implies an $\Omega (nN^{d-1})$ lower bound on the runtime of any witness-based SP computing algorithm. We say a hyperplane set $H$ \textit{covers} a bichromatic dataset $T = T_R \cup T_B$ iff for any non-trivial separable subset $V \subseteq T$ (i.e., $V$ contains at least one red point and one blue point), there exists $h \in H$ that separates $V$. The following theorem completes the discussion, and is also of independent interest. \begin{theorem} \label{lower} For a bichromatic dataset $T$, define $\chi(T)$ to be the size of the smallest hyperplane set that covers $T$. Let $\mathcal{T}_{n,N}^d$ be the collection of all the bichromatic datasets in $\mathbb{R}^d$ containing $n$ red points and $N$ blue points ($n \leq N$). Define \begin{equation*} \varGamma_d(n,N) = \sup_{T \in \mathcal{T}_{n,N}^d} \chi(T).
\end{equation*} Then for any constant $d$, we have $\varGamma_d(n,N) = \Omega(nN^{d-1})$. \end{theorem} \section{Expected separation-margin}\label{sec:separation-margin} In this section, we discuss how to compute the expected separation-margin (ESM) of a stochastic dataset $S = S_R \cup S_B$. Again, we only describe the details of our algorithm under the unipoint model. The generalization to the multipoint model is straightforward and is discussed in Appendix~\ref{sec:multipoint}. We assume that $S$ has the (conventional) general position property. \subsection{Definitions} \label{subsec2.2.1} Let $T = T_R \cup T_B$ be a separable bichromatic dataset and $h$ be a separator. We define the margin of $h$ w.r.t. $T$ as $M_h(T) = \min_{a \in T} \mathit{dist}(a,h).$ The separator which maximizes the margin is called the \textit{maximum-margin separator} and the corresponding margin is called the \textit{separation-margin} of $T$, denoted by $\mathit{Mar}(T)$. If $T$ is not separable or if $T_R = \emptyset$ or $T_B = \emptyset$, we define its separation-margin to be 0 for convenience. The ESM of a stochastic dataset $S = S_R \cup S_B$ is the expectation of the separation-margin of the existent points. \begin{theorem} \label{th4} For any separable dataset $T = T_R \cup T_B$ with $T_R \neq \emptyset$ and $T_B \neq \emptyset$, the maximum-margin separator of $T$ is unique. Furthermore, for any closest pair $(r,b)$ where $r \in \mathcal{CH}(T_R)$ and $b \in \mathcal{CH}(T_B)$, the maximum-margin separator of $T$ is the perpendicular bisector of the segment $\overline{rb}$. \end{theorem} \begin{figure} \caption{An example in $\mathbb{R}^2$.} \label{fig:margin} \end{figure} Let $h$ be the maximum-margin separator of $T$ and $M = \mathit{Mar}(T)$ be its separation-margin. Define $C_R = \{r \in T_R: \mathit{dist}(r,h) = M\}$ and $C_B = \{b \in T_B: \mathit{dist}(b,h) = M\}$. We call $C = C_R \cup C_B$ the \textit{support set} of $T$ and the points in it the \textit{support points}.
All the support points have the same distance to the maximum-margin separator. Thus, there exist two parallel hyperplanes $h_r$ and $h_b$ (both parallel to the maximum-margin separator) where $h_r$ goes through all the red support points and $h_b$ goes through all the blue ones. We call $h_r$ and $h_b$ the \textit{support planes} of $T$. Including the maximum-margin separator $h$, they form a group of three parallel and equidistant hyperplanes $(h_r,h,h_b)$. (See Figure~\ref{fig:margin}.) Since the maximum-margin separator is unique, the support set and support planes are also unique. We shall show that the maximum-margin separator can be uniquely determined via the support set. \begin{theorem} \label{th5} Suppose $C$ is the support set of $T$. Then $T$ and $C$ share the same maximum-margin separator (also the same separation-margin) and the support set of $C$ is just itself. \end{theorem} \subsection{Computing the expected separation-margin} \label{subsec2.2.2} According to Theorem \ref{th5}, the separation-margin of a separable dataset is equal to that of its support set. Thus, the ESM of $S$ can be computed as \begin{equation*} \mathit{Emar}(S) = \sum_{C \subseteq S}{(\xi_S(C) \cdot \mathit{Mar}(C))}, \end{equation*} where $\xi_S(C)$ is the probability that the existent points in $S$ are separable with the support set $C$. Since $S$ has the general position property, the size of the support set of the existent points can be at most $2d$ ($d$ red points and $d$ blue points at most). It follows that the total number of the possible $C$ to be considered is bounded by $O(n^d N^d)$. Indeed, we can further improve this bound. \begin{theorem} \label{th6} For a given stochastic dataset $S$ with the general position property, the total number of the possible support sets is bounded by $O(nN^d)$. As a result, the number of the (distinct) possible separation-margins is also bounded by $O(nN^d)$.
\end{theorem} By applying the previous formula for $\mathit{Emar}(S)$, we can enumerate all the $O(nN^d)$ possible support sets to compute the ESM of $S$. The $O(nN^d)$ possible support sets can be enumerated as follows. For the ones of sizes less than $(d+1)$, we enumerate them in the obvious way. For the ones of sizes larger than or equal to $(d+1)$, we first enumerate a tuple of $(d+1)$ points (of which at least one is red and one is blue), which would be the representation of the support sets (see the proof of Theorem \ref{th6} in Appendix \ref{proof:th6}). Via these $(d+1)$ points, we can determine two parallel hyperplanes $h_r$ and $h_b$ where $h_r$ goes through the red ones and $h_b$ goes through the blue ones. We then find all the points on $h_r$ and $h_b$, the number of which is at most $2d$ (including the original $(d+1)$ points). Once we have those points, we are able to enumerate all the possible support sets represented by the original $(d+1)$ points. For each such possible support set $C$, $\mathit{Mar}(C)$ can be straightforwardly computed in constant time since $|C| \leq 2d$. To compute $\xi_S(C)$, we observe that $C$ is the support set of the existent points iff\\ \textbf{1)} all the points in $C_R$ (resp., $C_B$) lie on $h_r$ (resp., $h_b$); \\ \textbf{2)} all the points in $C$ exist; \\ \textbf{3)} none of the red (resp., blue) points on the same side of $h_r$ (resp., $h_b$) w.r.t. $h$ exists; \\ \textbf{4)} except the points in $C$, none of the red (resp., blue) points on $h_r$ (resp., $h_b$) exists. \\ Among the four conditions, the first one has nothing to do with the existences of the stochastic points. If the enumerated set, $C$, violates this condition, then $\xi_S(C)=0$. Otherwise, $\xi_S(C)$ is just equal to the product of the existence probabilities of the points in $C$ (the second condition) and the non-existence probabilities of those points which should not exist (the last two conditions).
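In $\mathbb{R}^1$ the support set is simply a pair: the innermost red point and the innermost blue point of a separable instance, and the margin is half the gap between them. The following Python sketch (on a hypothetical dataset) computes $\mathit{Emar}(S)$ both by brute-force enumeration and by summing $\xi_S(C)\cdot\mathit{Mar}(C)$ over all red--blue support pairs.

```python
import itertools

# Hypothetical stochastic 1-D dataset: (coordinate, colour, existence probability).
pts = [(0.0, 'R', 0.5), (0.5, 'R', 0.8), (2.0, 'B', 0.7), (3.0, 'B', 0.4)]

def margin(inst):
    """Separation-margin of a 1-D instance (0 if inseparable or a colour is empty)."""
    r = [x for x, c in inst if c == 'R']
    b = [x for x, c in inst if c == 'B']
    if not r or not b:
        return 0.0
    if max(r) < min(b):
        return (min(b) - max(r)) / 2
    if max(b) < min(r):
        return (min(r) - max(b)) / 2
    return 0.0

def emar_bruteforce(pts):
    total = 0.0
    for mask in itertools.product([0, 1], repeat=len(pts)):
        prob, inst = 1.0, []
        for bit, (x, c, p) in zip(mask, pts):
            prob *= p if bit else 1 - p
            if bit:
                inst.append((x, c))
        total += prob * margin(inst)
    return total

def emar_by_support_pairs(pts):
    # In R^1 the support set of a separable instance with both colours
    # present is the pair (u, v): u is the max of the left cluster and v the
    # min of the right one, with Mar = (v - u)/2.  xi_S({u, v}) is the
    # probability that u and v exist while no u-coloured point right of u
    # and no v-coloured point left of v exists.
    total = 0.0
    for (u, cu, pu) in pts:
        for (v, cv, pv) in pts:
            if not (u < v and cu != cv):
                continue
            xi = pu * pv
            for (y, c, q) in pts:
                if c == cu and y > u:
                    xi *= 1 - q
                if c == cv and y < v:
                    xi *= 1 - q
            total += xi * (v - u) / 2
    return total
```

Each separable instance with both colours present has exactly one support pair, so the two computations agree; the pair-based version is the 1-D analogue of the $\xi_S(C)$ enumeration above.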
If we use the simplest way, i.e., scanning all the points in $S$, to find the points on $h_r$ and $h_b$ (for enumerating the possible support sets represented by a set of $(d+1)$ points) as well as to compute each $\xi_S(C)$, the total time for computing $\mathit{Emar}(S)$ is $O(nN^{d+1})$. In fact, the runtime can be improved to $O(nN^d)$ by applying tricks similar to the ones used previously for improving the efficiency of our SP computing algorithm. However, the way of applying them is somewhat different, and we present the details in Appendix~\ref{append:improving_margin}. \subsection{Hardness of computing expected separation-margin} We show that the bound achieved in Theorem~\ref{th6} is tight, which suggests that our algorithm for computing ESM may be difficult to improve further. \begin{theorem} \label{tight} For any stochastic dataset $S$, define $\kappa(S)$ to be the total number of its (distinct) possible separation-margins. Then for any constant $d$, there exists some dataset $S$ containing $n$ red points and $N$ blue points ($n \leq N$) in $\mathbb{R}^d$ with $\kappa(S) = \Theta(n N^d)$. \end{theorem} From the above theorem, we can conclude that any algorithm that explicitly considers every possible separation-margin of the stochastic dataset requires at least $\Omega(n N^d)$ time to compute the ESM. This then implies that our algorithm is optimal among this category of algorithms. To do better, the only hope is to avoid considering every possible separation-margin explicitly. However, this is fairly difficult (though perhaps not impossible) because of the lack of an explicit relationship among distinct separation-margins. \section{Extension to general geometric objects} \label{sec:extension:shape} In the previous sections, we considered the separability-related problems for stochastic points only. In fact, the two problems can be naturally generalized to the case of general stochastic geometric objects (see Figure~\ref{fig:objects}).
In this paper, the general objects to be considered include polytopes with a constant number of vertices, and/or $d$-dim closed balls with various radii. We show that, with some effort, our methods can be extended to solve the generalized versions of the SP and ESM problems. The stochastic model used is similar to the unipoint model: each object has a fixed location with an associated existence probability. For convenience, we still use $S = S_R \cup S_B$ to denote the given stochastic dataset, in which each element is either a polytope or a ball. \begin{figure} \caption{A separability problem for a set of bichromatic general objects in $\mathbb{R}^2$.} \label{fig:objects} \end{figure} \subsection{Reducing polytopes to points} Dealing with polytopes is easy, thanks to the fact that the entire polytope is on one side of a (hyperplane) separator iff all its vertices are. Thus, we can simply replace each polytope by its vertices and associate with each vertex an existence probability equal to that of the polytope. In this way, the polytopes in $S$ can be reduced to points. One thing that should be noted is that, once we reduce the polytopes to points, the existences of the vertices of each polytope are dependent upon each other. However, this issue can be easily handled without any increase in time complexity, because each polytope only has a constant number of vertices. \subsection{Handling balls}\label{sec:ball} Once we are able to use the vertices to replace the polytopes, it suffices to consider the separability problems for datasets containing only stochastic balls (points can be regarded as 0-radius balls). Before we discuss how to handle balls, we need a definition of general position for a ball-dataset. We say a set of balls in $\mathbb{R}^d$ is in general position (or has the general position property) if \\ \textbf{1)} the centers of the balls are in general position; \\ \textbf{2)} no $(d+1)$ balls have a common tangent hyperplane.
\\ Furthermore, we say a ball-dataset has the strong general position property (SGPP) if it satisfies the two conditions above and all of the 0-radius balls in it have SGPP (as defined in Section~\ref{sec:separable-prob}) when regarded as points. In Section~\ref{sp-ball}, the given ball-dataset $S$ is required to have SGPP. In Section~\ref{esm-ball}, we only need the assumption that $S$ has the (usual) general position property. \subsubsection{Separable-probability (ball-version)} \label{sp-ball} Let $T=T_R \cup T_B$ be a set of bichromatic balls with SGPP and set $J = \{3,4,\dots,d\}$. With similar proofs, Theorems~\ref{th1} and \ref{th2} can be directly generalized to ball-datasets (the meaning of $\mathcal{CH}(T_R)$/$\mathcal{CH}(T_B)$ should be modified as the convex hull of all the balls in $T_R$/$T_B$). The ball-version of Theorem \ref{th3} (and also its proof) is slightly different, which we present as follows (the proof can be found in Appendix~\ref{th3ball-proof}). \begin{theorem} \label{th3ball} $T^{U^*}$ is weakly separable and there exists exactly one weak separator. Furthermore, the unique weak separator of $T^{U^*}$ either goes through exactly $d$ 0-radius balls (of which at least one is in $T_R^{U^*}$ and one is in $T_B^{U^*}$) or is tangent to at least one ball with radius larger than 0. \end{theorem} Once we generalize those results, we are immediately able to generalize the concept of extreme separator to ball-datasets. As in Section~\ref{subsec2.1.1}, if $P_0 \neq \emptyset$, we define the extreme separator of $T$ as the derived separator of the unique weak separator of $T^{U^*}$. If $P_0 = \emptyset$, we recursively define the extreme separator of $T$ as the derived separator of the extreme separator of $\Phi_J(T)$. If the extreme separator is directly defined (i.e., the case of $P_0 \neq \emptyset$), we call the subset consisting of all the balls tangent to the extreme separator the \textit{critical set}.
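Testing whether a ball is tangent to a given separator (and hence a candidate member of the critical set), lies strictly on one side of it, or crosses it reduces to comparing the signed distance of its center with its radius. A minimal Python helper is given below; the unit normal and the numerical tolerance are assumptions of this sketch.

```python
def classify_ball(center, radius, w, b, eps=1e-12):
    """Position of the ball B(center, radius) relative to the hyperplane
    w . x + b = 0, where w is assumed to be a unit vector: 'tangent',
    'intersecting', or the side ('positive'/'negative') the whole ball
    lies strictly on."""
    s = sum(wi * ci for wi, ci in zip(w, center)) + b  # signed distance of center
    if abs(abs(s) - radius) < eps:
        return 'tangent'
    if abs(s) < radius:
        return 'intersecting'
    return 'positive' if s > 0 else 'negative'
```

These are exactly the primitive predicates needed later when checking the existence conditions for a candidate critical set against all balls in $S$.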
We shall use the following theorem later for solving the ball version of the SP problem. \begin{theorem} \label{th7} Let $T=T_R \cup T_B$ be a separable bichromatic ball-dataset whose extreme separator is directly defined and let $C$ be its critical set. Then the extreme separator of $C$ is also directly defined. Furthermore, $T$ and $C$ share the same extreme separator and auxiliary subspace. \end{theorem} Theorem~\ref{th7} implies that the extreme separator is uniquely determined by the critical set. This then gives us the basic idea to solve the problem: enumerating all the possibilities for the critical set. As in Section~\ref{subsec2.1.2}, we can compute the SP of $S$ as \begin{equation*} \mathit{Sep}(S) = \mathit{Sep}(\Phi_J(S)) + \sum_{C \subseteq S} \lambda_S(C), \end{equation*} where $\lambda_S(C)$ is the probability that the critical set of the existent balls is $C$. Since the balls in $S$ have SGPP, the size of the critical set of the existent balls can be at most $d$. Furthermore, the critical set should contain at least one red ball and one blue ball. Thus, it suffices to compute $\lambda_S(C)$ for all the subsets $C \subseteq S$ with $|C| \leq d$ which contain balls of both colors. We consider two cases separately. First, all the balls in $C$ have radius 0. Second, there is at least one ball in $C$ with radius larger than 0. In the first case, according to Theorem~\ref{th3ball}, $\lambda_S(C)>0$ only if $|C|=d$. Since the balls in $C$ are actually points, the situation here is almost the same as what we confronted in the point-version of the problem. We can uniquely determine a hyperplane $h$ which goes through the $d$ points in $C$, and a subspace $U^* \in \mathcal{V}$ perpendicular to $h$. Then $\lambda_S(C)$ is just equal to the probability that $h$ is the extreme separator of the existent balls.
Also, the conditions for $h$ to be the extreme separator are very similar to those in Section~\ref{subsec2.1.2}, which are \\ \textbf{1)} all the balls in $C$ exist; \\ \textbf{2)} there exist $r \in \mathcal{CH}(C_R)$ and $b \in \mathcal{CH}(C_B)$ such that their projection images on $U^*$ coincide; \\ \textbf{3)} no red (resp., blue) ball on the opposite (resp., same) side of $h$ w.r.t. the point $o$ exists, where the definition of $o$ is similar to that in Section~\ref{subsec2.1.2}; \\ \textbf{4)} no ball intersecting with $h$ exists, except the ones in $C$. \\ If $C$ violates the second condition, then $\lambda_S(C) = 0$. Otherwise, $\lambda_S(C)$ is just equal to the product of the existence probabilities of the balls in $C$ and the non-existence probabilities of the balls that should not exist. In the second case, however, the size of $C$ may be less than $d$. According to Theorem~\ref{th7}, if $C$ is the critical set of the existent points, then the extreme separator and auxiliary subspace of the existent points are the same as those of $C$. This implies that $\lambda_S(C)=0$ if $C$ is not separable or the extreme separator of $C$ is defined recursively. So we only need to consider the situation that the extreme separator of $C$ is directly defined. Assume that $C$ has the extreme separator $h$ (directly defined) with the auxiliary subspace $U^* \in \mathcal{V}$. Let $c$ be any ball in $C$ with radius larger than 0. Then it is easy to see that $C$ is the critical set of the existent balls iff \\ \textbf{1)} all the balls in $C$ exist; \\ \textbf{2)} all the balls in $C$ are tangent to $h$; \\ \textbf{3)} no ball with the same color as (resp., different color than) $c$ but on the opposite (resp., same) side of $h$ w.r.t. $c$ exists; \\ \textbf{4)} no ball intersecting with $h$ exists, except the ones in $C$. \\ Because of the constant size of $C$, $h$ and $U^*$ can be computed in constant time by brute force.
Similarly, if $C$ satisfies the second condition, $\lambda_S(C)$ is equal to the product of the existence probabilities of the balls in $C$ and the non-existence probabilities of the balls that should not exist. In both cases, $\lambda_S(C)$ can be computed in linear time by simply scanning all the balls in $S$. Thus, the SP can be finally computed in $O(nN^d)$ time, as the number of the subsets $C$ considered is bounded by $O(nN^{d-1})$. Unfortunately, the improvement techniques used in the point-version of the problem cannot be generalized to ball-datasets, so our eventual time bound for computing the SP of general geometric objects remains $O(nN^d)$. \subsubsection{Expected separation-margin (ball-version)} \label{esm-ball} Let $T = T_R \cup T_B$ be a set of bichromatic balls in general position. Clearly, the definitions given in Section~\ref{subsec2.2.1} (maximum-margin separator, separation-margin, support set/points/planes, etc.) can be directly generalized to ball-datasets. Also, with these definitions, the ball-versions of Theorem \ref{th4} and \ref{th5} can be easily verified (by using the same proofs). To extend the previous algorithm, we need to prove the ball version of Theorem \ref{th6}. The first step is the same as that in the original proof of Theorem \ref{th6}: we arbitrarily label the balls in $S$ and define the representation of $C$ as the $(d+1)$ balls in $C$ with the smallest labels, for any $C \subseteq S$ with $|C|>d$. We show that the number of possible support sets represented by any group of $(d+1)$ balls is $O(1)$. Let $a_1,a_2,\dots,a_{d+1}$ be any $(d+1)$ balls in $S$ where $a_1,\dots,a_k$ are red and $a_{k+1},\dots,a_{d+1}$ are blue, with $1 \leq k \leq d$ as before. Let each ball $a_i$ have center $c_i$ and radius $\delta_i$. If some possible support set $C$ is represented by these $(d+1)$ balls, then the support plane $h_r$ (resp., $h_b$) must be tangent to $a_1,\dots,a_k$ (resp., $a_{k+1},\dots,a_{d+1}$).
Furthermore, the balls $a_1,\dots,a_k$ (resp. $a_{k+1},\dots,a_{d+1}$) must be on the open side of $h_r$ (resp. $h_b$), i.e., the side different from the one containing the area between $h_r$ and $h_b$. Formally, suppose the equations of $h_r$ and $h_b$ are $\vec{\omega} \cdot x + b_1 = 0$ and $\vec{\omega} \cdot x + b_2 = 0$. We then have the following system of equations \begin{equation*} \left\{ \begin{array}{cl} \vec{\omega} \cdot c_i + b_1 = -r_i & \text{for } i \in \{1,\dots,k\}, \\ \vec{\omega} \cdot c_i + b_2 = r_i & \text{for } i \in \{(k+1),\dots,(d+1)\}, \\ | \vec{\omega} | = 1, \\ b_1<b_2. \end{array} \right. \end{equation*} The $(d+1)$ linear equations are linearly independent, as the centers are in general position. Thus, by limiting the norm of $\vec{\omega}$ to be 1, this system has at most two solutions. In other words, there are at most two possibilities for the support planes $(h_r,h_b)$. By following the logic in the proof of Theorem \ref{th6}, we then know the number of the possible support sets represented by these $(d+1)$ balls is constant, which immediately implies that the total number of all possible support sets is bounded by $O(nN^d)$. To enumerate these possible support sets, we can directly use the same method as in Section~\ref{subsec2.2.2}, i.e., first enumerate $(d+1)$ balls and then enumerate the possible support sets represented by them. Again, because the improvement techniques used in the point-version of the problem do not work for ball-datasets, we have to scan all the balls once for computing the corresponding probability of each possible support set, which makes the overall time $O(nN^{d+1})$ for computing the ESM of general geometric objects. \section{Applications}\label{sec:application} In this section, we present some applications of our algorithms to stochastic convex-hull (SCH) related problems. 
Given a stochastic (point) dataset $A$, the SCH of $A$ refers to the convex-hull of the existent points in $A$, which is an uncertain convex shape. \subsection{SCH membership probability problem} The SCH membership probability problem was first introduced in \cite{agarwal2014convex}. The problem can be described as follows: given a stochastic dataset $A = \{a_1,\dots,a_m\} \subset \mathbb{R}^d$ and a query point $q \in \mathbb{R}^d$, compute the probability that $q$ is inside the SCH of $A$, which we call the \textit{SCH membership probability} (SCHMP) of $q$ w.r.t. $A$. It is shown in \cite{martin2015seperability} that the SCHMP problem in $\mathbb{R}^d$ can be reduced to the SP problem in $\mathbb{R}^{d-1}$. Consequently, by plugging in our SP computing algorithm presented in Section \ref{sec:separable-prob}, we immediately obtain an $O(m^{d-1})$-time algorithm to compute SCHMP for $d \geq 3$, which matches the best known bound in \cite{martin2015seperability}. Indeed, this bound can be achieved by applying any SP computing algorithm with runtime bounded by $O(N^d)$. More interestingly, we show that our SP computing algorithm yields a more direct and natural method to solve the SCHMP problem in $O(m^{d-1})$ time for $d \geq 3$ and $O(m \log m)$ time for $d = 2$, which does not involve any non-trivial reduction between the two problems. Given an SCHMP problem instance $(A,q)$, clearly, the query point $q$ is outside the SCH of $A$ iff it can be separated from the existent points in $A$ by a hyperplane. Thus, we construct a stochastic bichromatic dataset $S = S_R \cup S_B$, where $S_R$ contains only one point, $q$, with existence probability 1 and $S_B = A$. Then the SCHMP of $q$ w.r.t. $A$ is simply $1-\mathit{Sep}(S)$. This can be computed in $O(m^{d-1})$ time for $d \geq 3$ and $O(m \log m)$ time for $d = 2$ by applying our SP computing algorithm, since $|S_R| = 1$ and $|S_B| = m$.
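The construction above ($S_R = \{q\}$ with existence probability 1, $S_B = A$) can be cross-checked exactly on tiny instances by enumerating all $2^m$ outcomes. The following minimal sketch does this in the plane; it is a brute-force illustration, not the paper's algorithm, and all helper names are ours:

```python
from itertools import combinations

def cross(o, a, b):
    """Signed area of the parallelogram spanned by oa and ob."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def on_segment(q, a, b):
    """True iff q lies on the closed segment ab."""
    return (cross(a, b, q) == 0
            and min(a[0], b[0]) <= q[0] <= max(a[0], b[0])
            and min(a[1], b[1]) <= q[1] <= max(a[1], b[1]))

def in_triangle(q, a, b, c):
    """True iff q lies in the closed triangle abc (degenerate triangles allowed)."""
    if cross(a, b, c) == 0:  # collinear vertices: the "triangle" is a segment
        return on_segment(q, a, b) or on_segment(q, b, c) or on_segment(q, a, c)
    d1, d2, d3 = cross(a, b, q), cross(b, c, q), cross(c, a, q)
    return not (min(d1, d2, d3) < 0 and max(d1, d2, d3) > 0)

def in_hull(q, pts):
    """Convex-hull membership in 2-D, via Caratheodory's theorem."""
    if len(pts) == 0:
        return False
    if len(pts) == 1:
        return q == pts[0]
    if len(pts) == 2:
        return on_segment(q, pts[0], pts[1])
    return any(in_triangle(q, a, b, c) for a, b, c in combinations(pts, 3))

def schmp(points, probs, q):
    """Pr[q lies inside the SCH]; equals 1 - Sep(S) for S_R = {q}, S_B = A."""
    m = len(points)
    total = 0.0
    for mask in range(1 << m):          # enumerate every instance of A
        p = 1.0
        existent = []
        for i in range(m):
            if mask >> i & 1:
                p *= probs[i]
                existent.append(points[i])
            else:
                p *= 1.0 - probs[i]
        if in_hull(q, existent):
            total += p
    return total
```

For example, with the three vertices of a triangle each existing with probability 0.5 and $q$ strictly inside, $q$ is covered only when all three points exist, so the SCHMP is $0.5^3 = 0.125$.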
Note that the $O(m^{d-1})$ runtime of this simple method relies on the $O(nN^{d-1})$ time bound of our SP computing algorithm for $d \geq 3$. If we plug in an $O(N^d)$-time SP computing algorithm, the time cost will become $O(m^d)$. Interestingly enough, this method for computing SCHMP is a generalization of the witness-edge method in \cite{agarwal2014convex} to the case $d>2$; the latter was the first known approach to this problem in $\mathbb{R}^2$ and was thought to be difficult to generalize to higher dimensions \cite{agarwal2014convex}. This can be seen as follows. When plugging in our SP computing algorithm, we enumerate all the possible extreme separators of $\{q\} \cup \varGamma$, where $\varGamma$ denotes the set of the existent points in $A$. If the extreme separator is finally defined in $\mathbb{R}^{d-2k}$, it goes through $(d-2k)$ points, of which one is $q$. These $(d-2k)$ points form a $(d-2k-1)$-dim face of $\mathcal{CH}(\{q\} \cup \varGamma)$ incident to the vertex $q$. It is evident that this face is uniquely determined by the convex polytope $\mathcal{CH}(\{q\} \cup \varGamma)$. We call it the \textit{witness-face} of $q$ in $\mathcal{CH}(\{q\} \cup \varGamma)$. Then enumerating the possible extreme separators is equivalent to enumerating the possible witness-faces of $q$ in $\mathcal{CH}(\{q\} \cup \varGamma)$. When $d = 2$, the concept of witness-face coincides with that of \textit{witness-edge} defined in \cite{agarwal2014convex}. Thus, in this case, our method is identical to the witness-edge method. \subsection{Other SCH-related problems} Our algorithms presented in the previous sections can also be applied to solve some new problems related to SCH. Here we propose three such problems and show how to solve them. \\ $\bullet$ \textbf{SCH intersection probability problem}. This problem is a natural generalization of the SCHMP problem.
Given a stochastic dataset $A = \{a_1,\dots,a_m\} \subset \mathbb{R}^d$ and a query object $Q$ which is a convex polytope with constant complexity (e.g., segment, simplex, etc.) in $\mathbb{R}^d$, the goal is to compute the probability that $Q$ has non-empty intersection with the SCH of $A$. When $Q$ is a single point, this is just the SCHMP problem. To solve this problem, we extend the method described in the preceding subsection. According to Theorem \ref{th1}, $Q$ has no intersection with the SCH iff its vertices can be separated from the existent points in $A$ by a hyperplane. Based on this, by constructing a stochastic bichromatic dataset $S = S_R \cup S_B$, where $S_R$ contains the vertices of $Q$ with existence probability 1 and $S_B = A$, we can apply our SP computing algorithm to compute the desired probability in $O(m^{d-1})$ time (note that $|S_R| = O(1)$ since $Q$ has constant complexity). \\ $\bullet$ \textbf{SCH $\varepsilon$-distant probability problem}. This problem is another natural generalization of the SCHMP problem. Given a stochastic dataset $A = \{a_1,\dots,a_m\} \subset \mathbb{R}^d$, a query point $q \in \mathbb{R}^d$, and a parameter $\varepsilon \geq 0$, the goal is to compute the probability that the distance from $q$ to the SCH of $A$ is greater than $\varepsilon$. When $\varepsilon = 0$, this is equivalent to the SCHMP problem. To solve this problem, we need to apply our generalized SP computing algorithm presented in Section \ref{sec:extension:shape}. Clearly, the distance from $q$ to the SCH of $A$ is greater than $\varepsilon$ iff the $\varepsilon$-ball centered at $q$ can be separated from the existent points in $A$ by a hyperplane. Thus, by constructing a generalized stochastic bichromatic dataset $S = S_R \cup S_B$, where $S_R$ contains the $\varepsilon$-ball centered at $q$ with existence probability 1 and $S_B = A$, we can apply our generalized SP computing algorithm to compute the desired probability in $O(m^d)$ time.
\\ $\bullet$ \textbf{Expected distance to a SCH}. Given a stochastic dataset $A = \{a_1,\dots,a_m\} \subset \mathbb{R}^d$ and a query point $q \in \mathbb{R}^d$, the goal of this problem is to compute the expected distance from $q$ to the SCH of $A$. To achieve this, we notice that the distance from $q$ to the SCH of $A$ is just equal to the separation-margin of $\{q\} \cup \varGamma$, where $\varGamma$ denotes the set of the existent points in $A$. Thus, we construct a stochastic bichromatic dataset $S = S_R \cup S_B$, where $S_R$ contains only one point $q$ with existence probability 1 and $S_B = A$. Then the problem can be solved in $O(m^d)$ time by plugging in our ESM computing algorithm presented in Section \ref{sec:separation-margin}. \appendix \section{Detailed proofs} \label{append.proof} \subsection{Proof of Theorem \ref{th1}} We first prove the ``only if'' part. Suppose we have a hyperplane $h$ which separates $T_R$ and $T_B$ into different sides. Let $H$ be the half-space which contains $T_R$ and $H'$ be the other one, which contains $T_B$. Since both $H$ and $H'$ are convex, we have $\mathcal{CH}(T_R) \subseteq H$ and $\mathcal{CH}(T_B) \subseteq H'$. It immediately follows that $\mathcal{CH}(T_R) \cap \mathcal{CH}(T_B) = \emptyset$. To prove the ``if'' part, we assume $\mathcal{CH}(T_R) \cap \mathcal{CH}(T_B) = \emptyset$. Let $(r,b)$ be the closest point-pair where $r \in \mathcal{CH}(T_R)$ and $b \in \mathcal{CH}(T_B)$. We denote the mid-point of the segment $\overline{rb}$ by $s$ and define a separator $h$ as the hyperplane that goes through $s$ and is perpendicular to $\overline{rb}$. We prove that $h$ separates $T_R$ and $T_B$ by contradiction. Assume $h$ does not separate $T_R$ and $T_B$. That means there are two points in $T_R$ (or $T_B$) on different sides of $h$. Without loss of generality, assume that these two points are in $T_R$. Thus, we can find a point $r^* \in \mathcal{CH}(T_R)$ which is on the hyperplane $h$.
If we observe the triangle $\triangle brr^*$, we find $\angle brr^* < \pi/2$, which means there exists a point $t \in \overline{rr^*}$ such that $\mathit{dist}(t,b)<\mathit{dist}(r,b)$. This contradicts the fact that $(r,b)$ is the closest point-pair (note that $t \in \mathcal{CH}(T_R)$). Thus, $h$ separates $T_R$ and $T_B$. \subsection{Proof of Theorem \ref{th2}} \label{th2-proof} We define a subset of $\mathcal{CH}(T_R) \times \mathcal{CH}(T_B)$ as \begin{equation*} D = \{(r,b) \in \mathcal{CH}(T_R) \times \mathcal{CH}(T_B):\phi_J(r)=\phi_J(b)\} \end{equation*} where $J = \{3,4,\dots,d\}$. Also, we define a continuous function $f:D \rightarrow \mathbb{P}^1$ as \begin{equation*} f: (r,b) \longmapsto [(r^{(1)}-b^{(1)}):(r^{(2)}-b^{(2)})]. \end{equation*} We shall first prove that $P_0$ is equal to the image of $f$, $\text{Im}f$. Let $u = [a:b]$ be a point in $\mathbb{P}^1$ and $U = \sigma(u)$. According to Theorem \ref{th1}, $u \in P_0$ iff $\mathcal{CH}(T_R^U) \cap \mathcal{CH}(T_B^U) \neq \emptyset$. It is clear that $\mathcal{CH}(T_R^U) \cap \mathcal{CH}(T_B^U) \neq \emptyset$ iff $u$ is in the image of $f$, which implies $P_0 = \text{Im}f$. Then it suffices to prove the theorem regarding $\text{Im}f$ instead of $P_0$. Because of the connectedness and compactness of $D$, $\text{Im}f$ is also connected and compact. Furthermore, since $\mathbb{P}^1$ is Hausdorff, $\text{Im}f$ is closed. Thus, the first statement of the theorem is proved. To prove the second one, we first assume $\text{Im}f = \emptyset$, which implies $D = \emptyset$. It then immediately follows that $\Phi_J(T)$ is strongly separable in $\mathbb{R}^{d-2}$ for $J = \{3,4,\dots,d\}$. On the other hand, if $\Phi_J(T)$ is strongly separable, $\mathcal{CH}(\Phi_J(T_R)) \cap \mathcal{CH}(\Phi_J(T_B)) = \emptyset$. In this situation, $D$ has to be empty and thus $\text{Im}f = \emptyset$. 
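The constructive step in the proof of Theorem~\ref{th1} — taking the perpendicular bisector of the closest pair between the two hulls — is easy to sketch numerically in the plane. In 2-D the hull-to-hull closest pair is attained at a point of one set and a point on a segment spanned by the other, so brute force over point–segment pairs suffices. A floating-point sketch assuming $\mathcal{CH}(T_R) \cap \mathcal{CH}(T_B) = \emptyset$ (all helper names are ours):

```python
from itertools import combinations

def closest_on_segment(p, a, b):
    """The point of the closed segment ab closest to p."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    L2 = dx * dx + dy * dy
    if L2 == 0.0:
        return a
    t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / L2
    t = max(0.0, min(1.0, t))
    return (a[0] + t * dx, a[1] + t * dy)

def closest_hull_pair(R, B):
    """Closest pair (r, b) with r in CH(R) and b in CH(B); 2-D brute force."""
    best = None
    for P, Q, p_is_red in ((R, B, True), (B, R, False)):
        segs = list(combinations(Q, 2)) or [(Q[0], Q[0])]
        for p in P:
            for a, b in segs:
                c = closest_on_segment(p, a, b)
                d2 = (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2
                pair = (p, c) if p_is_red else (c, p)
                if best is None or d2 < best[0]:
                    best = (d2, pair)
    return best[1]

def bisector_separates(R, B):
    """Build the perpendicular bisector of the closest pair and check separation."""
    r, b = closest_hull_pair(R, B)
    s = ((r[0] + b[0]) / 2.0, (r[1] + b[1]) / 2.0)   # midpoint of rb
    n = (b[0] - r[0], b[1] - r[1])                   # normal, pointing to the blue side
    side = lambda x: n[0] * (x[0] - s[0]) + n[1] * (x[1] - s[1])
    return all(side(x) < 0 for x in R) and all(side(x) > 0 for x in B)
```

For instance, with $R = \{(0,0),(0,10)\}$ and $B = \{(3,5),(6,0),(6,10)\}$ the closest pair is $((0,5),(3,5))$ and the resulting vertical bisector separates the sets, exactly as the proof predicts.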
\subsection{Proof of Theorem \ref{th3}} Suppose that $\alpha_{\vec{v}} = \min_{r \in T_R}\{\vec{v} \cdot r\}$, $\alpha'_{\vec{v}} = \max_{r \in T_R}\{\vec{v} \cdot r\}$, $\beta_{\vec{v}} = \min_{b \in T_B}\{\vec{v} \cdot b\}$, $\beta'_{\vec{v}} = \max_{b \in T_B}\{\vec{v} \cdot b\}$. We define a function $f:\mathbb{P}^1 \rightarrow \mathbb{R}$ as \begin{equation*} f(u) = \sup_{\vec{v} \in \bar{U}}\ \max\{(\alpha_{\vec{v}}-\beta'_{\vec{v}}),(\beta_{\vec{v}}-\alpha'_{\vec{v}})\}, \end{equation*} where $\bar{U} = \mathbb{S}^{d-1} \cap \sigma(u)$ ($\mathbb{S}^{d-1}$ is the unit sphere in $\mathbb{R}^d$). It is easy to see that $f$ is continuous. Furthermore, according to the definition of $P_0$, we know that $u \in P_0$ iff $f(u) \leq 0$. Since $u^*$ is a boundary point of $P_0$, we have $f(u^*)=0$. Thus, $T^{U^*}$ is weakly separable. To prove the remaining part of the theorem, we introduce a definition called \textit{degree}. Let $X$ be a polytope and $x$ be a point on the boundary of $X$. We define the \textit{degree} of $x$ in $X$, denoted by $\deg_Xx$, to be the minimum of the dimensions of all the simplices that are spanned by some vertices of $X$ and contain $x$. Since $T^{U^*}$ is not strongly separable, we can find a point $x^* \in \mathcal{CH}(T_R^{U^*}) \cap \mathcal{CH}(T_B^{U^*})$. To simplify the notation, we denote $\mathcal{CH}(T_R^{U^*})$ by $C_1$ and $\mathcal{CH}(T_B^{U^*})$ by $C_2$. We claim that $\deg_{C_1}x^* + \deg_{C_2}x^* \geq d-2$. Suppose $\deg_{C_1}x^* = k_1$ and $\deg_{C_2}x^* = k_2$. According to the definition of degree, we can find $(k_1+1)$ red (resp. $(k_2+1)$ blue) points in $T_R^{U^*}$ (resp. $T_B^{U^*}$) such that the simplex spanned by these points, $\bar{s}_R$ (resp. $\bar{s}_B$), contains $x^*$ in its interior. Let $\alpha:\mathbb{R}^d \rightarrow U^*$ and $\beta:U^* \rightarrow \mathbb{R}^{d-2}$ be the orthogonal projection functions. 
Clearly, we have \begin{equation*} \phi_J = \beta \circ \alpha, \end{equation*} for $J = \{3,4,\dots,d\}$. Then the convex hull of the $\beta$-images of the vertices of $\bar{s}_R$ (resp. $\bar{s}_B$) contains the point $\beta(x^*)$. The $\beta$-images of the points in $T^{U^*}$ are just the points in $\Phi_J(T)$. If $k_1 + k_2 < d-2$, we can always find two simplices with vertices in $\Phi_J(T)$ such that they intersect at $\beta(x^*)$ and the sum of their dimensions is less than $(d-2)$. This contradicts the fact that $\Phi_J(T)$ is in general position (as $T$ has SGPP). Thus, $\deg_{C_1}x^* + \deg_{C_2}x^* \geq d-2$. Now we go back to $U^*$. We know that $T^{U^*}$ is weakly separable. Let $h$ be a weak separator. Since $x^* \in C_1 \cap C_2$, $x^*$ must be on $h$. Note that $x^*$ is in the interiors of $\bar{s}_R$ and $\bar{s}_B$. This implies that $h$ must go through all of the $(k_1+k_2+2)$ vertices of $\bar{s}_R$ and $\bar{s}_B$. Since $k_1+k_2+2 \geq d$ and $T$ is in general position, the weak separator $h$ is unique and goes through exactly $d$ points in $T^{U^*}$ (of which at least one is in $T_R^{U^*}$ and one is in $T_B^{U^*}$). \subsection{Proof of Theorem \ref{lower}} To prove this theorem, it is more convenient to work with ``directed'' hyperplanes. A \textit{directed hyperplane} in $\mathbb{R}^d$ is a hyperplane with one side (half-space) specified to be red and the other side specified to be blue. It can be represented as a $(d+1)$-tuple $(a_0,a_1,\dots,a_d)$ of real numbers (not all zero) such that the inequality $a_0 + \sum_{i=1}^{d} a_i x_i < 0$ indicates the red side. We say the directed hyperplane $(a_0,a_1,\dots,a_d)$ \textit{separates} a set of bichromatic points iff there is no point located on the side of the other color, i.e., for each point $x = (x_1,\dots,x_d)$, we have \begin{equation*} a_0 + \sum_{i=1}^{d} a_i x_i \left\{ \begin{array}{ll} \leq 0 & \text{if } x \text{ is red}, \\ \geq 0 & \text{if } x \text{ is blue}.
\end{array} \right. \end{equation*} Since an (undirected) hyperplane can be replaced with two directed hyperplanes, the number of directed hyperplanes required to cover a dataset is at most twice the number of undirected ones. Thus, it suffices to prove the result with respect to directed hyperplanes. In the rest of the proof, the notation $\chi(T)$ is used to denote the size of the smallest directed-hyperplane set (instead of hyperplane set) which covers $T$. We show that, for any constant $d$, there exists some bichromatic dataset $T \in \mathcal{T}_{n,N}^d$ in general position such that $\chi(T)=\Omega(nN^{d-1})$. Specifically, we use induction on the dimension $d$. The base case $d=1$ is trivial. Assume that for any $d \leq k-1$, such a bichromatic dataset $T$ exists. We now construct $T$ in $\mathbb{R}^k$. Our first step is to construct a set $T'$ of one red point and $\Theta(N)$ blue points in $\mathbb{R}^k$ with $\chi(T') = \Omega (N^{k-1})$. Let $U = U_R \cup U_B$ be a set of $N$ red points and $N$ blue points (in general position) in $\mathbb{R}^{k-1}$ with $\chi(U) = \Omega (N^{k-1})$. Define two functions $f_R, f_B: \mathbb{R}^{k-1} \rightarrow \mathbb{R}^k$ as \begin{equation*} f_R: (x_1,\dots,x_{k-1}) \mapsto (-x_1,\dots,-x_{k-1},-1), \end{equation*} \begin{equation*} f_B: (x_1,\dots,x_{k-1}) \mapsto (x_1,\dots,x_{k-1},1). \end{equation*} Then we set the $2N$ blue points in $T'$ to be $f_R(U_R) \cup f_B(U_B)$ (ignoring their original colors in $U$) and the only red point in $T'$ to be the origin of $\mathbb{R}^k$. We claim that $\chi(T') = \Omega (\chi(U))$. For any non-trivial separable subset $V \subseteq U$ (i.e., $V$ contains at least one red point and one blue point), define $f(V)$ to be a subset of $T'$ containing the blue points $f_R(V_R) \cup f_B(V_B)$ and the only red point. It is easy to see that $V$ is separable iff $f(V)$ is.
Furthermore, if a non-horizontal (i.e., not parallel to the plane $x_k = 0$) directed hyperplane in $\mathbb{R}^k$, $(a_0,a_1,\dots,a_k)$, separates $f(V)$, then we have a corresponding directed hyperplane in $\mathbb{R}^{k-1}$, $(a_0+a_k,a_1,\dots,a_{k-1})$, which separates $V$. We call the latter the \textit{induced} plane of the former. Now let $H = \{h_1,\dots,h_{\chi(T')}\}$ be a set of directed hyperplanes in $\mathbb{R}^k$ which cover $T'$. Assume they are all non-horizontal (if any of them is horizontal, we can always slightly rotate it without changing the subsets of $T'$ it separates). Then let $H' = \{h'_1,\dots,h'_{\chi(T')}\}$ be a set of directed hyperplanes in $\mathbb{R}^{k-1}$ in which $h'_i$ is the induced plane of $h_i$. Clearly, $H'$ covers $U$, which implies that $\chi(U) \leq \chi(T')$. The next step is to extend $T'$ into another set $T$ of $\Theta(n)$ red points and $\Theta(N)$ blue points in $\mathbb{R}^k$ with $\chi(T) = \Omega(nN^{k-1})$. We denote by $r$ the only red point in $T'$ (whose location is the origin of $\mathbb{R}^k$) and by $b_1,\dots,b_{2N}$ the $2N$ blue points in $T'$. We first slightly perturb each $b_i$ without changing $\chi(T')$ so that the points $r,b_1,\dots,b_{2N}$ are in general position. For convenience, we now use $T'$ to denote the new set after the perturbation. Then we find an $\varepsilon$-ball centered at the origin of $\mathbb{R}^k$ with $\varepsilon$ small enough such that if the red point $r$ is perturbed inside that ball, $\chi(T')$ does not change. The value of $\varepsilon$ can be determined as follows. For each $(k-1)$-dim affine subspace spanned by $k$ blue points $b_{\pi_1},\dots,b_{\pi_k}$, we compute the distance from the origin to it. Then $\varepsilon$ is set to be a number less than the minimum of those distances. Inside this $\varepsilon$-ball, we pick $n$ red points $r_1,\dots,r_n$ such that all the points $r_1,\dots,r_n,b_1,\dots,b_{2N}$ are in general position.
Define the red points in $T$ to be $r_1,\dots,r_n$. Next, we find another small number $\varepsilon'$ such that for any hyperplane $h$ in $\mathbb{R}^k$, there are at most $k$ points among $r_1,\dots,r_n$ whose distances to $h$ are at most $\varepsilon'$. We can determine $\varepsilon'$ as follows. For each $(k+1)$-tuple $t = (r_{\pi_1},\dots,r_{\pi_{k+1}})$, we define \begin{equation*} \delta_t = \inf_h \max_{i=1}^{k+1} \mathit{dist}(h,r_{\pi_i}). \end{equation*} Clearly, $\varepsilon'$ can be any number less than the minimum of all $\delta_t$. Now, for each red point $r_i$, we add $(k+1)$ blue points $b'_{i,1},\dots,b'_{i,{k+1}}$ inside the $\varepsilon'$-ball centered at $r_i$ such that the simplex spanned by $b'_{i,1},\dots,b'_{i,{k+1}}$ contains $r_i$ in its interior. We carefully determine the locations of these additional points to guarantee general position. Define the blue points in $T$ to be these $(k+1)n$ additional points and the original $2N$ ones. We prove that $\chi(T) = \Omega(nN^{k-1})$. Let $H$ be any directed-hyperplane set which covers $T$. Also, let $H_i \subseteq H$ be the subset of the directed hyperplanes whose distances to the point $r_i$ are at most $\varepsilon'$. We claim that $|H_i| \geq \chi(T')$ for any $i \in \{1,\dots,n\}$. Set $T'' = T''_R \cup T''_B = \{r_i\} \cup \{b_1,\dots,b_{2N}\}$. Recall that $r_i$ is inside the $\varepsilon$-ball centered at the origin of $\mathbb{R}^k$, which implies $\chi(T'') = \chi(T')$. Assume that $|H_i| < \chi(T'')$. Then there exists a non-trivial separable subset $V \subseteq T''$ which is not separated by any $h \in H_i$. Let $h^*$ be a directed hyperplane which goes through $r_i$ and weakly separates $V$. Consider the blue points $b'_{i,1},\dots,b'_{i,k+1}$. Since $r_i$ is in the interior of the simplex spanned by $b'_{i,1},\dots,b'_{i,k+1}$, we can find at least one point $b'_{i,j}$ such that the subset of $T$, $V \cup \{b'_{i,j}\}$, is also separated by $h^*$ (and thus separable). 
We show that $V \cup \{b'_{i,j}\}$ is not separated by any $h \in H$, which contradicts the fact that $H$ covers $T$. We consider two cases: $h \in H_i$ and $h \in H \backslash H_i$. Any $h \in H_i$ is not a separator of $V \cup \{b'_{i,j}\}$ because it does not separate $V$. For any $h \in H \backslash H_i$, we notice that $\mathit{dist}(h,r_i) > \varepsilon'$. Thus, both $r_i$ and $b'_{i,j}$ are on the same side of $h$, which implies that $h$ is not a separator of $V \cup \{b'_{i,j}\}$. As a result, we have $|H_i| \geq \chi(T'') = \chi(T')$. Now recall that for any hyperplane $h$ in $\mathbb{R}^k$, there are at most $k$ points among $r_1,\dots,r_n$ whose distances to $h$ are at most $\varepsilon'$. This implies that \begin{equation*} |H| \geq \sum_{i=1}^{n}\frac{|H_i|}{k} \geq \frac{n\chi(T')}{k}. \end{equation*} Therefore, $\chi(T)$ is $\Omega(nN^{k-1})$. Note that the number of the blue points in $T$ is now $2N+(k+1)n$. To make it exactly $N$, we only need to choose $N_0 = N/(k+3)$, and use the same method to construct a dataset $T$ containing $n$ red points and $2N_0+(k+1)n$ blue points in general position with $\chi(T) = \Omega(nN_0^{k-1}) = \Omega(nN^{k-1})$. Then by adding some dummy blue points, we eventually have $T \in \mathcal{T}_{n,N}^k$ with $\chi(T) = \Omega(nN^{k-1})$, which completes the proof. \subsection{Proof of Theorem \ref{th4}} Let $(r,b)$ be any closest pair of points where $r \in \mathcal{CH}(T_R)$ and $b \in \mathcal{CH}(T_B)$. Also, let $h$ be the perpendicular bisector hyperplane of the segment $\overline{rb}$. Then $M_h(T)=\mathit{dist}(r,b)/2$. Consider any other separator $h' \neq h$ of $T$. We have that \begin{equation*} \min\{\mathit{dist}(r,h'),\mathit{dist}(b,h')\}<\mathit{dist}(r,b)/2. \end{equation*} Furthermore, since $r \in \mathcal{CH}(T_R)$ and $b \in \mathcal{CH}(T_B)$, $M_{h'}(T)$ must be less than or equal to $\min\{\mathit{dist}(r,h'),\allowbreak \mathit{dist}(b,h')\}$.
Thus, \begin{equation*} M_{h'}(T) \leq \min\{\mathit{dist}(r,h'),\mathit{dist}(b,h')\} < \mathit{dist}(r,b)/2 = M_h(T). \end{equation*} This implies that $h$ is the unique maximum-margin separator, though the closest pair $(r,b)$ may not be unique. \subsection{Proof of Theorem \ref{th5}} Let $h$ be the maximum-margin separator of $T$ and $M = \mathit{Mar}(T)$ be the separation-margin of $T$. Also, let $(r,b)$ be the closest pair where $r \in \mathcal{CH}(T_R)$ and $b \in \mathcal{CH}(T_B)$. From the proofs of Theorems \ref{th1} and \ref{th4}, we know that $\mathit{dist}(r,h)=\mathit{dist}(b,h)=M$. It immediately follows that $r \in \mathcal{CH}(C_R)$ and $b \in \mathcal{CH}(C_B)$. Since $\mathcal{CH}(C_R) \subseteq \mathcal{CH}(T_R)$ and $\mathcal{CH}(C_B) \subseteq \mathcal{CH}(T_B)$, $(r,b)$ is also the closest pair w.r.t. $\mathcal{CH}(C_R)$ and $\mathcal{CH}(C_B)$. Thus, both the maximum-margin separator and separation-margin of $C$ are the same as those of $T$. Furthermore, because all of the points in $C$ have the same distance to $h$, the support set of $C$ is just $C$ itself. \subsection{Proof of Theorem \ref{th6}} \label{proof:th6} The number of possible support sets of size smaller than or equal to $d$ is clearly bounded by $O(nN^d)$. So we only need to bound the number of those of size larger than $d$. We first arbitrarily label all the points in $S$ from 1 to $(n+N)$. For any $C \subseteq S$ with $|C|>d$, define the \textit{representation} of $C$ as the $(d+1)$ points in $C$ with the smallest labels (we say those $(d+1)$ points represent $C$). Let $a_1,a_2,\dots,a_{d+1}$ be a tuple of $(d+1)$ points in $S$ where $a_1,\dots,a_k$ are red and $a_{k+1},\dots,a_{d+1}$ are blue. If $k=0$ or $k=d+1$, there is no possible support set represented by these $(d+1)$ points because the number of the blue/red points in the support set can be at most $d$. Now consider the case that $1 \leq k \leq d$.
It is easy to see that there exists a unique pair of parallel hyperplanes $(h_r,h_b)$ such that $h_r$ goes through $a_1,\dots,a_k$ and $h_b$ goes through $a_{k+1},\dots,a_{d+1}$, as $S$ is in general position. If a possible support set $C$ is represented by $a_1,a_2,\dots,a_{d+1}$, then $h_r$ and $h_b$ must be the corresponding support planes. That means all the red/blue points in $C$ must lie on $h_r$/$h_b$. Note that there are at most $2d$ points on $h_r$ and $h_b$, which implies that the number of such $C$ is constant. Since the number of such $(d+1)$-tuples is $O(nN^d)$, $S$ can have at most $O(nN^d)$ possible support sets. Since the separation-margin is uniquely determined by the support set, the number of the possible separation-margins is also bounded by $O(nN^d)$. \subsection{Proof of Theorem \ref{tight}} First, we define $(d+1)$ probability distributions $D_0,D_1,\dots,D_d$ in $\mathbb{R}^d$, where $D_i$ is a uniform distribution over an $\varepsilon$-ball ($\varepsilon$ is a small enough positive constant) centered at $c_i$. Set \begin{equation*} c_0 = (0,\dots,0), \end{equation*} \begin{equation*} c_i = (\underbrace{0,\dots,0}_{i-1},1,\underbrace{0,\dots,0}_{d-i}),\ \text{for } i = 1,\dots,d. \end{equation*} Now we randomly generate a stochastic dataset $S^* = S_R^* \cup S_B^*$ with $n$ red points and $N$ ($N \geq n$) blue points as follows. The existence probabilities of all the bichromatic points are set to be 0.5. The location of each red point is drawn from the distribution $D_0$. For the blue points, we evenly partition them into $d$ groups, each of which contains $N/d$ points (for convenience, assume $N$ is a multiple of $d$). Then the location of each blue point in the $i$-th group is drawn from the distribution $D_i$. All the locations are drawn independently.
We claim that the randomly generated $S^*$ satisfies \begin{equation*} \Pr\left[\ \kappa(S^*) \geq n \left( \frac{N}{d} \right)^d\ \right] > \delta \end{equation*} for any $\delta < 1$, which establishes the existence of $S$ with $\kappa(S) = \Theta(n N^d)$. We denote the points in $S_R^*$ by $r_1,\dots,r_n$ and the points in $S_B^*$ by $b_1,\dots,b_N$. Also, we denote by $B_i$ the subset of $S_B^*$ which contains all the points drawn from $D_i$. Consider all the $(d+1)$-tuples $(r_j,b_{\pi_1},\dots,b_{\pi_d})$ where \begin{equation*} r_j \in S_R^*,\ \ b_{\pi_i} \in B_i. \end{equation*} Clearly, we have in total $n(N/d)^d = \Theta(nN^d)$ such tuples. For each tuple $\tau = (r_j,b_{\pi_1},\dots,b_{\pi_d})$, define $C_\tau = \{r_j,b_{\pi_1},\dots,b_{\pi_d}\}$. We show that $\xi_{S^*}(C_\tau) > 0$, i.e., $C_\tau$ is a possible support set. Let $h_b$ be the hyperplane that goes through $b_{\pi_1},\dots,b_{\pi_d}$. Since $\varepsilon$ is small enough, from the spatial locations of $D_0,D_1,\dots,D_d$, we observe that the point on $h_b$ closest to $r_j$ is always inside the simplex $(b_{\pi_1} \dots b_{\pi_d})$, no matter what the exact locations of $r_j,b_{\pi_1},\dots,b_{\pi_d}$ are. It follows that $C_\tau$ is the support set of the instance of $S^*$ in which the only existent points are those in $C_\tau$. Thus, $\xi_{S^*}(C_\tau) > 0$. Define $M_\tau = \mathit{Mar}(C_\tau)$, i.e., the separation-margin of $C_\tau$. It is easy to see that, for any two distinct tuples $\tau$ and $\tau'$, the probability that $M_\tau = M_{\tau'}$ is infinitesimal, i.e., \begin{equation*} \Pr [\ M_\tau = M_{\tau'}\ ] < c \end{equation*} for any small $c>0$. Therefore, we have \begin{equation*} \Pr [\ M_\tau \neq M_{\tau'} \text{ for all } \tau \neq \tau'\ ] \geq 1 - \sum_{\tau \neq \tau'} \Pr [\ M_\tau = M_{\tau'}\ ] > \delta \end{equation*} for any $\delta < 1$. Note that if all $M_\tau$ are distinct, $\kappa(S^*)$ is at least $n(N/d)^d$.
So we can conclude \begin{equation*} \Pr\left[\ \kappa(S^*) \geq n \left( \frac{N}{d} \right)^d\ \right] \geq \Pr [\ M_\tau \neq M_{\tau'} \text{ for all } \tau \neq \tau'\ ]. \end{equation*} As a result, there exists some stochastic dataset $S$ with $\kappa(S) = \Theta(n N^d)$. \subsection{Proof of Theorem \ref{th3ball}} \label{th3ball-proof} By applying the same method used in the proof of Theorem \ref{th3} (see Appendix~\ref{th2-proof}), we can directly show that $T^{U^*}$ is weakly separable. However, to prove the remaining part, we need to slightly change the method in the proof of Theorem \ref{th3}. First, we modify the definition of ``degree'' as follows. Let $X$ be the convex hull of a finite set of balls and $x$ be a point on the boundary of $X$. Also, let $Y$ be the union of those balls. We define the degree of $x$ in $X$, denoted by $\deg_X x$, to be the minimum of the dimensions of all the simplices that contain $x$ and use only the points in $Y$ as their vertices. We use $C_1$ and $C_2$ to denote the convex hulls of the balls in $T_R^{U^*}$ and $T_B^{U^*}$, respectively. Since $T^{U^*}$ is not strongly separable, there exists a point $x^* \in C_1 \cap C_2$. Suppose $\deg_{C_1}x^* = k_1$ and $\deg_{C_2}x^* = k_2$. Then we can find a $k_1$-dim (resp. $k_2$-dim) simplex $\bar{s}_R$ (resp. $\bar{s}_B$) satisfying \\ \textbf{1)} $\bar{s}_R$ (resp. $\bar{s}_B$) contains $x^*$ in its interior; \\ \textbf{2)} each vertex of $\bar{s}_R$ (resp. $\bar{s}_B$) is contained in at least one ball in $T_R^{U^*}$ (resp. $T_B^{U^*}$). \\ Consider the balls that contain the vertices of $\bar{s}_R$ and $\bar{s}_B$. We have two cases. First, all of those balls are 0-radius balls. Second, at least one of them has radius larger than 0. For the first case, the proof of Theorem \ref{th3} is sufficient to show that the weak separator of $T^{U^*}$ is unique and goes through $d$ points (0-radius balls).
In the second case, without loss of generality, we assume that some vertex $v$ of $\bar{s}_R$ is contained in a ball $B \in T_R^{U^*}$ with radius larger than 0. Since any weak separator of $T^{U^*}$ must go through $v$, $v$ must be on the boundary of $B$. Thus, $T^{U^*}$ has only one weak separator, which is the tangent hyperplane of $B$ at $v$ (so it is tangent to at least one ball with radius larger than 0). \subsection{Proof of Theorem \ref{th7}} Let $P_0$ and $P_1$ be the pre-images of $\{0\}$ and $\{1\}$ under the map $\pi_T^*$ respectively. Also, let $P_0'$ and $P_1'$ be the counterparts under the map $\pi_C^*$. Suppose $u^*$ is the clockwise boundary of $P_0$. Since $C \subseteq T$, we have $P_0' \subseteq P_0$. On the other hand, as $C$ is the critical set of $T$, it is easy to see that $\mathcal{CH}(C_R^{U^*}) \cap \mathcal{CH}(C_B^{U^*}) \neq \emptyset$, where $U^* = \sigma(u^*)$. This then implies $u^* \in P_0'$. Now, because $P_0'$ is nonempty, the extreme separator of $C$ is directly defined. Furthermore, from the fact that $u^* \in P_0' \subseteq P_0$, we know $u^*$ is also the clockwise boundary of $P_0'$, so that $U^*$ is the auxiliary subspace of both $T$ and $C$. To prove that $T$ and $C$ share the same extreme separator, we assume $h$ is the unique weak separator of $T^{U^*}$. Since $C^{U^*} \subseteq T^{U^*}$, $h$ is also a weak separator of $C^{U^*}$. More precisely, $h$ is the unique weak separator of $C^{U^*}$, as $C^{U^*}$ has only one weak separator (according to Theorem \ref{th3ball}). Consequently, the derived separator of $h$ in $\mathbb{R}^d$ is the extreme separator of both $T$ and $C$.
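Several of the proofs above (in particular those of Theorems~\ref{th3} and \ref{th3ball}) reduce separability questions to the one-directional margin maximized by the function $f$: for a fixed direction $\vec{v}$, the quantity $\max\{\alpha_{\vec{v}}-\beta'_{\vec{v}},\ \beta_{\vec{v}}-\alpha'_{\vec{v}}\}$ built from extreme projections. That inner quantity is trivial to compute; a minimal sketch (function names are ours, not from any implementation in the paper):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def directional_margin(v, R, B):
    """Best margin attainable by a hyperplane with normal v:
    max{ min_R <v,r> - max_B <v,b>,  min_B <v,b> - max_R <v,r> }.
    Positive => strongly separable along v; zero => only weakly separable."""
    pr = [dot(v, p) for p in R]  # projections of the red points
    pb = [dot(v, p) for p in B]  # projections of the blue points
    return max(min(pr) - max(pb), min(pb) - max(pr))
```

$T$ is separable by some hyperplane with normal $\vec{v}$ iff this value is nonnegative; taking the supremum over unit normals in a subspace recovers the function $f$ used in those proofs.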
\section{A deterministic algorithm to compute \texorpdfstring{$\mathcal{A}$}{A}} \label{append:compute_matrix_A} Given a set $S = \{s_1, s_2, \dots, s_n\}$ of $n$ points in general position, where $s_i \in \mathbb{R}^d$, we propose a deterministic algorithm that computes, in $O(n^{d-1})$ time, a $d \times d$ orthogonal matrix $\mathcal{A} = (\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_d)^T$ which linearly transforms $S$ into a new set $S' = \{\mathcal{A}s_1, \mathcal{A}s_2, \dots, \mathcal{A}s_n\}$ satisfying SGPP. According to the definition of SGPP, what we want is that, for any $k \in \{0,1,\dots, \lfloor (d - 1) / 2 \rfloor\}$, $\Phi_{\{2k+1, 2k+2, \dots, d\}}(S')$ is in general position in $\mathbb{R}^{d - 2k}$. For $k=0$, $\Phi_{\{2k+1, 2k+2, \dots, d\}}(S') = S'$ is certainly in general position if $\mathcal{A}$ is orthogonal. For $k \geq 1$, it is easy to see that $\Phi_{\{2k+1, 2k+2, \dots, d\}}(S')$ is in general position iff \begin{equation}\label{eqn:span_Rd} \text{dim}(\text{span}\{\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_{2k-1}, \mathbf{a}_{2k}, (\mathbf{s}_{i_1} -\mathbf{s}_{i_2}), (\mathbf{s}_{i_1} - \mathbf{s}_{i_3}), \dots, (\mathbf{s}_{i_1} - \mathbf{s}_{i_{d-2k+1}})\}) = d, \end{equation} for any distinct $(d - 2k + 1)$ points $\mathbf{s}_{i_1}, \dots, \mathbf{s}_{i_{d-2k+1}} \in S$. Based on this fact, we show how to compute $\mathbf{a}_{2k+1}$ and $\mathbf{a}_{2k+2}$, two at a time, as $k$ increases from 0. (This also implies that $\mathbf{a}_1, \dots, \mathbf{a}_{2k}$ are already available when computing $\mathbf{a}_{2k+1}$ and $\mathbf{a}_{2k+2}$.) For a particular $k$, we first find a candidate $\mathbf{a}_{2k+1}$ satisfying $$\text{dim}(\text{span}\{\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_{2k+1}, (\mathbf{s}_{i_1} - \mathbf{s}_{i_2}), (\mathbf{s}_{i_1} - \mathbf{s}_{i_3}), \dots, (\mathbf{s}_{i_1} - \mathbf{s}_{i_{d-2k-1}})\}) = d - 1,$$ for any distinct $\mathbf{s}_{i_1}, \dots, \mathbf{s}_{i_{d-2k-1}} \in S$.
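The span condition above (and Equation~\ref{eqn:span_Rd}) can be checked by brute force. A minimal sketch, with our own function name and input encoding (the text states the condition but gives no code), assuming points are given as NumPy arrays:

```python
import numpy as np
from itertools import combinations

def candidate_ok(rows, cand, S, d, k):
    """Brute-force check of the span condition for a candidate a_{2k+1}
    (our sketch; the text states the condition but gives no code).

    `rows` holds a_1, ..., a_{2k}; together with `cand` and the difference
    vectors of any (d - 2k - 1) distinct points of S, the span must be
    (d - 1)-dimensional.
    """
    base = [np.asarray(v, dtype=float) for v in rows[:2 * k]]
    base.append(np.asarray(cand, dtype=float))
    t = d - 2 * k - 1  # number of points per enumerated tuple
    for idx in combinations(range(len(S)), t):
        p0 = np.asarray(S[idx[0]], dtype=float)
        diffs = [p0 - np.asarray(S[j], dtype=float) for j in idx[1:]]
        M = np.vstack(base + diffs)
        if np.linalg.matrix_rank(M) != d - 1:
            return False
    return True
```

The enumeration over $\binom{n}{d-2k-1}$ tuples mirrors the enumeration cost used in the text.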
In other words, the candidate $\mathbf{a}_{2k+1}$ cannot lie in any $(d - 2)$-dim subspace, $V$, spanned by $\{\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_{2k}, (\mathbf{s}_{i_1} - \mathbf{s}_{i_2}), (\mathbf{s}_{i_1} - \mathbf{s}_{i_3}), \dots, (\mathbf{s}_{i_1} - \mathbf{s}_{i_{d-2k-1}})\}$. Based on this, we propose a simple method that is guaranteed to find such a candidate $\mathbf{a}_{2k+1}$, as follows. Initialize an open ball $B$ centered at $c = (1, 1, \dots, 1)$ with radius $r = 0.5$ in $\mathbb{R}^d$. We enumerate all possible $\mathbf{s}_{i_1}, \dots, \mathbf{s}_{i_{d-2k-1}}$, and gradually shrink this ball to a non-empty feasible region for the candidate $\mathbf{a}_{2k+1}$. Let $\mathbf{s}_{i_1}, \dots, \mathbf{s}_{i_{d-2k-1}}$ be the currently enumerated tuple, and $B = (c, r)$ be the ball maintained so far. Let the span $V$ be defined as above. Now, consider two cases: $c \not\in V$ and $c \in V$. In the first case, we simply reduce $r$ to $r' = \min\{r, \mathit{dist}(c, V)\}$; in the second case, we choose an arbitrary point $c' \in B - V$ as the new center of the ball, and then set the new radius $r' = \min\{\mathit{dist}(c', V), r - \mathit{dist}(c, c')\}$. After this shrinking, the new ball is contained in the previous one and does not intersect the subspace $V$. In this way, after $\binom{n}{d - 2k - 1} = O(n^{d-2k-1})$ steps of enumeration, the final ball is a non-empty feasible region for the candidate $\mathbf{a}_{2k+1}$. We then pick any point in it as our candidate $\mathbf{a}_{2k+1}$. \begin{figure} \caption{Case 1: $c \not\in V$.}\label{fig:case1} \caption{Case 2: $c \in V$.}\label{fig:case2} \caption{Illustrating the two cases in $\mathbb{R}^2$.} \end{figure} After the candidate $\mathbf{a}_{2k+1}$ is found, we modify it to guarantee the orthonormality of $\{\mathbf{a}_1,\dots,\mathbf{a}_{2k+1}\}$ without changing the span, and use the vector after modification as the $(2k+1)$-st row vector of $\mathcal{A}$, $\mathbf{a}_{2k+1}$.
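The ball-shrinking procedure just described can be sketched as follows. This is a hypothetical implementation of ours, assuming each subspace $V$ is given by a matrix whose columns span it; distances to $V$ are computed via least-squares projection:

```python
import numpy as np

def point_avoiding_subspaces(subspaces, d):
    """Ball-shrinking procedure from the text (our sketch, with our names).

    Each subspace V is given as a (d x k) matrix whose columns span it,
    with k <= d - 1.  Returns a point of R^d lying outside every V.
    """
    c = np.ones(d)   # initial center (1, 1, ..., 1)
    r = 0.5          # initial radius
    for M in subspaces:
        M = np.asarray(M, dtype=float)
        # Distance from c to V via least-squares projection onto span(M).
        x, *_ = np.linalg.lstsq(M, c, rcond=None)
        dist = np.linalg.norm(c - M @ x)
        if dist > 1e-12:
            # Case 1: c not in V -- shrink the radius only.
            r = min(r, dist)
        else:
            # Case 2: c in V -- move the center off V along a direction
            # orthogonal to V (a trailing left-singular vector of M).
            u = np.linalg.svd(M)[0][:, -1]
            step = 0.5 * r
            c = c + step * u
            x2, *_ = np.linalg.lstsq(M, c, rcond=None)
            r = min(np.linalg.norm(c - M @ x2), r - step)
    return c  # the center of the final (non-empty) ball
```

Each iteration keeps the new ball inside the old one and disjoint from the current $V$, exactly as in the two cases above.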
Next, the row vector $\mathbf{a}_{2k+2}$ can be computed similarly, i.e., first computing a candidate $\mathbf{a}_{2k+2}$ satisfying Equation~\ref{eqn:span_Rd} (note that, in this case, each infeasible region $V'$ is a $(d - 1)$-dim subspace instead) and then modifying it to guarantee the orthonormality. In this way, determining $\mathbf{a}_{2k+1}$ and $\mathbf{a}_{2k+2}$ takes $O(n^{d-2k-1})$ time, and the transformation matrix $\mathcal{A}$ can be computed in $O(n^{d-1} + n^{d-3} + n^{d-5} + \dots) = O(n^{d-1})$ time. \section{Computing the separable-probability in \texorpdfstring{$O(nN^{d-1})$}{O(nNd-1)} time}\label{append:improving_prob} Previously, we showed how to solve the problem by enumerating $d - 1$ points first, followed by a radial-order sort and a sliding windows technique on the remaining points. This method takes $O(nN^{d - 1} \log N)$ time. Inspired by \cite{martin2015seperability}, we show how to eliminate the log factor by the well-known techniques of {\it duality} \cite{deBerg_CG_book} and {\it topological sweep} \cite{Edelsbrunner:1986:topological_sweep} as follows. We first enumerate $d - 2$ points (of which at least one is red and at least one is blue), and these points span a $(d - 3)$-dim subspace $\mathcal{D}$, which corresponds to a 2D dual subspace, $\mathcal{D}^*$. By duality, each remaining point, $p$, maps to a $(d - 1)$-dim hyperplane, $p^*$, in the dual space, whose intersection with $\mathcal{D}^*$ is a line, $l$. (Since there is a clear one-to-one correspondence between $p^*$ and $l$, with a slight abuse of notation, we use $p^*$ to represent $l$ below.) It then follows that there are $n + N - d + 2 = O(N)$ lines in $\mathcal{D}^*$, forming a line arrangement, and the dual of each intersection point, $f^*$, formed by two lines $p^*_1$ and $p^*_2$ is the span $f$ of some $(d - 1)$-dim facet in the primal space. 
We define the \textit{statistic} of $f^*$ as a tuple of the form $(\mathcal{R}^-, \mathcal{R}^+, \mathcal{B}^-, \mathcal{B}^+, \mathcal{T})$, where $\mathcal{R}^-$ and $\mathcal{R}^+$ (resp., $\mathcal{B}^-$ and $\mathcal{B}^+$) denote the non-existence probabilities of the remaining red (resp., blue) points on either side of $f$, and $\mathcal{T}$ is the set consisting of all the points on $f$. Given the statistic for every $f^*$, the probability of each $f^*$, i.e., each enumerated facet in the primal space, can be computed in $O(1)$ time. Thus, it suffices to show how to compute the statistics for all $f^*$ efficiently. Assume the lines in $\mathcal{D}^*$ are $p^*_1, \dots, p^*_m$, and the intersection points on $p^*_1$ are $f^*_2, \dots, f^*_m$. W.l.o.g., assume $f^*_2, \dots, f^*_m$ are sorted from left to right in $\mathcal{D}^*$. We first compute the statistic for $f_2^*$ by brute force, which takes $O(N)$ time. Then, we iterate through $f^*_3, \dots, f^*_m$ from left to right on $p^*_1$. (See Figure~\ref{fig:dual} for an example.) By duality, the movement $f^*_{i - 1} \rightarrow f^*_i$ corresponds to rotating the hyperplane from $f_{i - 1}$ to $f_{i}$ about the dual of the line $p^*_1$, which is a $(d - 2)$-dim subspace in the primal space. More importantly, the rotation does not hit any points other than $p_{i-1}$ and $p_i$, which allows us to quickly compute, in $O(1)$ time, the statistic of $f_i^*$ from that of $f_{i-1}^*$. This way, the statistics of all the intersections along $p^*_1$ can be computed in $O(N)$ time, not counting the sorting. In fact, we cannot afford to sort the intersections on each line, since that would take $O(N^2 \log N)$ time. Instead, we compute the entire line arrangement using $O(N^2)$ time and space; then we can visit the intersections on each line in the correct order (though not necessarily consecutively). To further reduce the space from $O(N^2)$ to $O(N)$, one can perform a {\it topological sweep} on the arrangement.
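The brute-force base case (computing the statistic of a single facet, as done for $f_2^*$) can be sketched as follows; the function name and the input encoding are ours, not the paper's:

```python
import numpy as np

def facet_statistic(points, probs, colors, normal, offset, eps=1e-9):
    """Brute-force statistic of the hyperplane {x : normal . x = offset}
    (our sketch of the O(N) base case; names and encoding are ours).

    Returns the non-existence probabilities of the red/blue points
    strictly on each side -- the tuple (R-, R+, B-, B+) -- plus the set T
    of points lying on the hyperplane.
    """
    stat = {('R', -1): 1.0, ('R', 1): 1.0, ('B', -1): 1.0, ('B', 1): 1.0}
    on_facet = []
    for p, pr, col in zip(points, probs, colors):
        s = float(np.dot(normal, p)) - offset
        if abs(s) <= eps:
            on_facet.append(tuple(p))          # contributes to T
        else:
            side = -1 if s < 0 else 1
            stat[(col, side)] *= (1.0 - pr)    # point absent on that side
    return stat, on_facet
```

The incremental update along $p^*_1$ then only has to move $p_{i-1}$ and $p_i$ between the "on the facet" set and the side products.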
The topological sweep maintains a cut of size $O(N)$ and sweeps it from left to right over the entire line arrangement using $O(N^2)$ so-called elementary steps, each taking $O(1)$ amortized time. (See Figure~\ref{fig:dual1} for details.) Based on this, we find the leftmost intersection point, $f^*_l$, in $\mathcal{D}^*$, and compute its statistic by brute force. This step takes $O(N^2)$ time. Afterwards, when an elementary step is triggered, the statistic for the current intersection point can be reported, and we can compute, in $O(1)$ time, the statistics for two more intersection points (e.g., $f^*_{r_1}$ and $f^*_{r_2}$ in Figure~\ref{fig:dual1}) for future reporting. Thus, as we advance from the leftmost cut to the rightmost one, the statistics of all the intersection points are reported on the fly. Therefore, the runtime of our algorithm is improved to $O(nN^{d-3} \cdot N^2) = O(nN^{d-1})$, using linear space. {\bf Remark.} Note that, in $\mathbb{R}^2$ only, the above method actually runs in $O(N^2)$ instead of $O(nN)$ time. However, the runtime of our previous method based on radial-order sort still remains $O(nN \log N)$. \begin{figure} \caption{An example of the arrangement in the subspace $\mathcal{D}^*$.}\label{fig:dual} \caption{An elementary step in topological sweep.}\label{fig:dual1} \caption{Illustrating how to use duality and topological sweep to eliminate the log factor.} \end{figure} \section{Improving the time for computing the expected separation-margin} \label{append:improving_margin} It is easy to improve the time for computing the ESM to $O(nN^d \log N)$ by slightly modifying the sorting method we used for improving our SP computing algorithm. When enumerating $(d+1)$ points, we first determine $d$ points (of which at least one is red and one is blue). Let us denote by $r_1,\dots,r_k$ the red ones and by $b_1,\dots,b_{d-k}$ the blue ones.
We can uniquely determine two parallel $(d-2)$-dim linear subspaces $X_r$ and $X_b$ such that $r_1,\dots,r_k \in X_r$ and $b_1,\dots,b_{d-k} \in X_b$. We sort all the remaining red points around $X_r$ and the blue ones around $X_b$. Then we enumerate the last point in these sorted orders (say, the red points first and then the blue ones), meanwhile maintaining two sliding windows (for red and blue points, respectively). In this way, we spend amortized constant time on each tuple of $(d+1)$ points, i.e., on computing the probabilities of all the possible support sets represented by the $(d+1)$ points and adding the portions contributed by these possible support sets to the ESM. Thus, the computation of the ESM can be done in $O(nN^d \log N)$ time. To further improve the runtime to $O(nN^d)$ requires more effort. We can still apply the duality and topological sweep techniques, but the approach is somewhat different from that for the SP problem. For convenience, we define the \textit{red} (resp., \textit{blue}) \textit{statistics} of a hyperplane $h$ to be the set of the red (resp., blue) points on $h$ together with the product of the non-existence probabilities of all the red (resp., blue) points on each side of $h$. As we have seen, in the process of computing the SP, the object enumerated is one hyperplane spanned by $d$ points, and what we want to compute is the red and blue statistics of that hyperplane. In this situation, the idea of duality and topological sweep can be used directly to improve the efficiency of each computation. However, when computing the ESM, the situation is different. At each step, we have three parallel and equidistant hyperplanes $(h_r,h,h_b)$ determined by $(d+1)$ points, and what we want to compute is the red statistics of $h_r$ and the blue statistics of $h_b$. Thus, in order to apply the duality and topological sweep techniques, our idea is to transform the problem from the latter form to the former one.
We consider two different cases: $d \geq 3$ and $d = 2$. Suppose $d \geq 3$. In this case, when enumerating $(d+1)$ points, we first determine two of them, of which one is red and one is blue (say $r$ and $b$). Let $c$ be the midpoint of the segment $\overline{rb}$. Then for each $r_i \in S_R$, we construct a new red point $r'_i = r_i + \overrightarrow{rc}$ with the same existence probability as $r_i$, and for each $b_i \in S_B$, we construct a new blue point $b'_i = b_i + \overrightarrow{bc}$ with the same existence probability as $b_i$. Denote by $S'$ the new stochastic dataset of those constructed points. Now consider any tuple of $(d+1)$ points in $S$ including $r$ and $b$. Let $(h_r,h,h_b)$ be the three hyperplanes determined by these $(d+1)$ points. In order to complete the computation for this tuple, what we need to know is the red statistics of $h_r$ and the blue statistics of $h_b$. It is easy to see that: \\ $\bullet$ A red (resp., blue) point in $S$ is on $h_r$ (resp., $h_b$) iff its corresponding point in $S'$ is on $h$. (So each of the $(d+1)$ points corresponds to a point on $h$.) \\ $\bullet$ The red (resp., blue) points in $S$ on each side of $h_r$ (resp., $h_b$) correspond exactly to the red (resp., blue) points in $S'$ on each side of $h$. \\ Based on the above observations, the red statistics of $h_r$ and the blue statistics of $h_b$ w.r.t. $S$ correspond exactly to the red and blue statistics of $h$ w.r.t. $S'$. In other words, to consider all the possible support sets represented by this tuple, it suffices to know the red and blue statistics of $h$ w.r.t. $S'$. Now the problem we face is similar to that in the SP problem. We want to compute, for each hyperplane $h$ spanned by the point $c$ and $(d-1)$ other points in $S'$, the red and blue statistics of $h$. By applying the idea of duality and topological sweep, this can be done in $O(N^{d-1})$ time. This is the runtime for a fixed pair $(r,b)$.
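The translation step above can be sketched directly; this is an illustrative helper of ours, with names not taken from the paper:

```python
import numpy as np

def translated_dataset(red_pts, blue_pts, r, b):
    """Translation trick from the text (our illustrative helper): with c
    the midpoint of segment rb, shift every red point by the vector rc
    and every blue point by bc, so that lying on the outer hyperplanes
    h_r, h_b in S becomes lying on the middle hyperplane h in S'."""
    c = (np.asarray(r, dtype=float) + np.asarray(b, dtype=float)) / 2.0
    red_shift = c - np.asarray(r, dtype=float)    # vector rc
    blue_shift = c - np.asarray(b, dtype=float)   # vector bc
    reds = [np.asarray(p, dtype=float) + red_shift for p in red_pts]
    blues = [np.asarray(p, dtype=float) + blue_shift for p in blue_pts]
    return reds, blues
```

For instance, with $r=(0,0)$ and $b=(0,2)$ in the plane, $c=(0,1)$: a red point on the line $y=0$ through $r$ moves to the line $y=1$, and a blue point on $y=2$ also moves to $y=1$, as the two observations above require.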
To compute the ESM, we need to enumerate all $O(nN)$ such pairs, so the overall time is $O(nN^d)$. In the case of $d=2$, however, the above method does not work. Since we enumerate three points when $d=2$, if we first determine two of them ($r$ and $b$), we are not able to create the line arrangement in the dual space and use topological sweep to complete the computation for $(r,b)$ in $O(N)$ time. So we need to deal with the 2-dim problem separately. Without loss of generality, we only consider the case where the three points enumerated are one red point and two blue points (the two-red, one-blue case is symmetric). Let $n_r$ be the number of red points and $n_b$ the number of blue ones. When enumerating three points, we first determine the red point $r$ and sort all the other red points around $r$. Then, for all the blue points, we construct their dual lines to form a line arrangement. Each vertex (i.e., intersection point) of the arrangement corresponds to a pair of blue points $(b_i,b_j)$. We apply topological sweep on the arrangement and consider each 3-tuple $(r,b_i,b_j)$ at the time we visit the vertex $(b_i,b_j)$. Let $(r,b_i,b_j)$ be any such tuple and $(h_r,h,h_b)$ be the three hyperplanes determined by this tuple. In order to complete the computation for this tuple, we need to know the red statistics of $h_r$ and the blue statistics of $h_b$. We note that the hyperplane $h_b$ is actually determined by $b_i$ and $b_j$ only (independently of $r$). Thus, the blue statistics of $h_b$ can be computed directly in the process of topological sweep. The crucial part is to compute the red statistics of $h_r$. To this end, we maintain $n_b$ sliding windows $w_1,\dots,w_{n_b}$ on the sorted list of the red points, where $w_i$ corresponds to the blue point $b_i$.
During the topological sweep, the sliding window $w_i$ dynamically indicates the red points on one side of the hyperplane $h_r$ determined by the tuple $(r,b_i,b^*)$, where $(b_i,b^*)$ is the most recently visited vertex on the dual line of $b_i$. Each time a new vertex $(b_i,b_j)$ is visited, we update $w_i$ and $w_j$, and meanwhile compute the red statistics of the hyperplane $h_r$ determined by the tuple $(r,b_i,b_j)$. It is easy to see that both updating the sliding windows and computing the statistics can be done in amortized constant time. Therefore, for each red point $r$, the computations take $O(n_b^2)$ time. The total time for considering all the red points is then $O(n_r n_b^2)$, which is bounded by $O(nN^2)$. Symmetrically, the work for enumerating two red points and one blue point can also be done in $O(nN^2)$ time. As a result, for any $d \geq 2$, the ESM of a stochastic bichromatic dataset $S$ in $\mathbb{R}^d$ can be computed in $O(nN^d)$ time. \section{Extension to multipoint model}\label{sec:multipoint} All our algorithms in the paper can be generalized in a straightforward manner from the unipoint model to the multipoint model, with the same bounds. Let $S = S_R \cup S_B$ be a set of stochastic bichromatic points under the multipoint model, i.e., $S = \{A_1,A_2,\dots,A_{m}\}$, where each $A_i = \{(a_1^{(i)},p_1^{(i)}), (a_2^{(i)},p_2^{(i)}), \dots, (a_{n_i}^{(i)},p_{n_i}^{(i)})\}$ represents an uncertain point, in which $a_j^{(i)} \in \mathbb{R}^d$ is its $j$-th possible location and $p_j^{(i)} \in (0,1]$ is the corresponding probability of existing at $a_j^{(i)}$. With a slight abuse of notation, let $|S_R|$ (resp., $|S_B|$) be the total number of locations of all red (resp., blue) multipoints, and define $n = \min\{|S_R|, |S_B|\}$ and $N = \max\{|S_R|, |S_B|\}$. Then the total size of $S$ is $\sum_{i=1}^{m}{n_i} = n + N$.
Clearly, $S$ can be regarded as a unipoint-model stochastic dataset $\rule{0mm}{5mm} S' = \{(a_j^{(i)},p_j^{(i)}): i \in \{1,\dots,m\}, j \in \{1,\dots,n_i\}\},$ where the existences of $a_1^{(i)},\dots,a_{n_i}^{(i)}$ are dependent (i.e., at most one of them can exist) for each $i \in \{1,\dots,m\}$. Thus, when applying our algorithms under the multipoint model, the only issue is that we need to deal with such dependencies. Note that in our algorithms all the problems are eventually transformed into one form: computing the probability that some points definitely exist and some other points definitely do not. Let $\mathcal{X}$ (resp., $\bar{\mathcal{X}}$) be the set containing all the points that are definitely present (resp., absent). For any uncertain point $A_i$, consider the following three cases.\\ \textbf{1)} If $|A_i \cap \mathcal{X}| \ge 2$, the probability contributed by $A_i$ is 0, since an uncertain point cannot exist at two different places simultaneously.\\ \textbf{2)} If $|A_i \cap \mathcal{X}| = 1$, the probability contributed by $A_i$ is equal to the probability of the only element in $A_i \cap \mathcal{X}$.\\ \textbf{3)} If $|A_i \cap \mathcal{X}| = 0$, the probability contributed by $A_i$ is simply $1 - \sum_{(a,p) \in A_i \cap \bar{\mathcal{X}}} p$.\\ Finally, the probability for the scenario $(\mathcal{X}, \bar{\mathcal{X}})$ is equal to the product of the probabilities contributed by all the $A_i$. In this way, by only slightly modifying the previous computations, the dependencies among the existences of the points can be easily handled without any increase in the running time. \end{document}
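The three cases above can be sketched as a small routine (ours, not the paper's code); an uncertain point is a list of (location, probability) pairs, and `present`/`absent` encode the scenario $(\mathcal{X}, \bar{\mathcal{X}})$:

```python
def scenario_probability(uncertain_points, present, absent):
    """Probability of the scenario (X, X-bar) under the multipoint model,
    following the three cases in the text (our sketch; names are ours).

    Each uncertain point is a list of (location, probability) pairs;
    `present` / `absent` are sets of locations that must / must not exist.
    """
    total = 1.0
    for A in uncertain_points:
        hits = [p for (loc, p) in A if loc in present]
        if len(hits) >= 2:
            return 0.0                # case 1: two required locations
        if len(hits) == 1:
            total *= hits[0]          # case 2: exactly one required location
        else:
            # case 3: no required location; forbid the absent ones
            total *= 1.0 - sum(p for (loc, p) in A if loc in absent)
    return total
```
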
\begin{document} \baselineskip=15.5pt \title[On Abel-Jacobi maps of moduli of parabolic bundles]{On Abel-Jacobi maps of moduli of parabolic bundles over a curve} \author{Sujoy Chakraborty} \address{School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005, India.} \email{[email protected]} \thanks{E-mail : [email protected]} \thanks{Address : School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005, India.} \subjclass[2010]{14C15, 14D20, 14D22, 14H60} \keywords{Chow groups; Moduli space; Parabolic bundle.} \begin{abstract} Let $C$ be a nonsingular complex projective curve, and let $\mathcal{L}$ be a line bundle of degree 1 on $C$. Let $\Malpha := \mathcal{M}(r,\mathcal{L},\alpha)$ denote the moduli space of $S$-equivalence classes of Parabolic stable bundles of fixed rank $r$, determinant $\mathcal{L}$, full flags and generic weight $\alpha$. Let $n = \dim \Malpha$. We aim to study the Abel-Jacobi maps for $\Malpha$ in the cases $k=2,n-1$. When $k=n-1$, we prove that the Abel-Jacobi map is a split surjection. When $k=2$ and $r=2$, we show that the Abel-Jacobi map is an isomorphism. \end{abstract} \maketitle \section{Introduction} Let $X$ be a smooth projective variety over $\mathbb{C}$. The Abel-Jacobi map was defined by Griffiths as a generalization of the Jacobi map for curves. For each integer $k$, it is a map $AJ^k$ from the group $Z^k(X)_{hom}$ of codimension-$k$ cycles homologous to zero to the intermediate Jacobian $IJ^k(X) := \dfrac{H^{2k-1}(X,\C)}{F^kH^{2k-1}+ H^{2k-1}(X,\mathbb{Z})}$ (see Section 2 for the definitions), and the map $AJ^k$ actually factors through the group $\CH^k(X)_{hom} := \dfrac{Z^k(X)_{hom}}{Z^k(X)_{rat}}$, where the denominator is the subgroup of codimension-$k$ cycles rationally equivalent to zero (see e.g. \cite[Chapter 12]{Voi1}).
In general $IJ^k(X)$ is a complex torus, and it is in fact an Abelian variety for $k=2,n-1$, where $n=\dim(X)$. An interesting question is to study the \textit{weak representability} of the Abel-Jacobi map $AJ^k$ for $k=2, n-1$, and also to determine the Abelian variety in terms of the cycles on $X$. In \cite{JY}, the authors have studied, among other things, these two Abel-Jacobi maps when $X$ is the moduli space of semistable vector bundles of fixed rank and determinant on a curve. Our aim here is to study these two Abel-Jacobi maps in the following situations:\\ Let $C$ be a nonsingular projective curve of genus $g\geq 3$ over $\mathbb{C}$. Let $\mathcal{L}$ be a line bundle on $C$. Let us moreover fix a finite set of closed points $S$ on $C$, referred to as \textit{Parabolic points}, and Parabolic weights. We also assume that the weights are generic. Let $\mathcal{M}(r,\mathcal{L}, \alpha)$ denote the moduli space of $S$-equivalence classes of Parabolic stable vector bundles of rank $r$, determinant $\mathcal{L}$, full flags at each Parabolic point, and \textit{generic} weights $\alpha:= \{0\leq \alpha_{1,P} <\alpha_{2,P}<\cdots <\alpha_{r,P}< 1\}_{P\in S} $ along the Parabolic points. \\ For generic weights $\alpha$, $\mathcal{M}(r,\mathcal{L}, \alpha)$ is a smooth projective variety over $\mathbb{C}$ of dimension $n:= (r^2-1)(g-1)+\dfrac{mr(r-1)}{2}$, where $m=|S|$. Our aim is to study some specific Abel-Jacobi maps when the variety in consideration is the moduli space $\mathcal{M}(r,\mathcal{L}, \alpha)$.
We now outline the content of the paper and the main ideas of the proofs below.\\ In Section 2, we recall all the notions and definitions needed for our purpose, for example the notion of (Parabolic) stability and semistability of (Parabolic) vector bundles on a curve, their moduli spaces, and so on.\\ In Section 3, we fix a line bundle $\mathcal{L}$ of degree 1, and study the Abel-Jacobi map $AJ^{n-1}$ for the space $\Malpha := \mathcal{M}(r,\mathcal{L}, \alpha)$. Let us denote by $\mathcal{M} := \mathcal{M}(r,\mathcal{L})$ the moduli space of isomorphism classes of stable vector bundles of rank $r$ and determinant $\mathcal{L}$ on $C$. The main result of Section 3 is the following: \begin{theorem}[Theorem \ref{abeljacobiarbitwt}] Let $n = \dim \Malpha$. For any generic weight $\alpha$, \[AJ^{n-1}: \CH^{n-1}(\Malpha)_{hom}\otimes \Q \rightarrow IJ^{n-1}(\Malpha)\otimes\Q\] is a split surjection. \end{theorem} We first prove the above in the case when the weights $\alpha$ are sufficiently small, in the sense of \cite[Proposition 5.2]{BY}. In this case, there exists a morphism $\pi: \Malpha \rightarrow \mathcal{M}$ given by forgetting the Parabolic structure, under which $\Malpha$ actually becomes a Zariski locally trivial fibration with fibres products of flag varieties by \cite[Theorem 4.2]{BY}. Our approach is based on \cite{JY}, where the result has been proved for $\mathcal{M}$. Regarding the choice of determinant, a crucial input in the proof is \cite[Theorem 2.1]{JY}, where the determinant $\mathcal{O}(x)$ is necessary.
Next, to prove the result for arbitrary generic weights, we use \cite[Theorem 4.1]{BY}, which says that if $\alpha,\beta$ are generic weights separated by a single wall in the set of all compatible weights (see Section 2 for definitions), and if $\gamma$ is the weight at which the wall meets the line joining $\alpha$ and $\beta$, then there exist maps \[ \xymatrix{ \Malpha \ar[rd]_{\phi_{\alpha}} & &\Mbeta \ar[ld]^{\phi_{\beta}} \\ &\Mgamma } \] which act as resolutions of singularities for $\Mgamma$. The fibre product $\N := \Malpha \underset{\Mgamma}{\times}\Mbeta$ turns out to be a common blow-up of $\Malpha,\Mbeta$ along suitable subvarieties, which helps us to relate the Abel-Jacobi maps for $\Malpha$ and $\Mbeta$ via that of $\N$. This, together with the fact that the result was already shown for $\alpha$ sufficiently small, enables us to conclude Theorem 1.1 for any generic weight.\\ In Section 4, we fix rank $r=2$ and determinant a degree 1 line bundle $\mathcal{L}$. The main result of this section is the following: \begin{theorem}[Theorem \ref{abeljacobiiso}] For any generic weight $\alpha$, $AJ^2: \CH^2(\Malpha)_{hom}\otimes \Q \rightarrow IJ^{2}(\Malpha)\otimes\Q$ is an isomorphism. \end{theorem} Again, our approach is to first show the result for sufficiently small generic weights, and then for arbitrary weights. Let $\Malpha, \Mbeta, \Mgamma$ be as above. As for the reason for fixing the rank to be 2: the main step in proving the result for arbitrary weights is to find an explicit description of the singular locus $\Sigma_{\gamma}$ of $\Mgamma$, given in Proposition \ref{sigmagammadescription}, and this description actually uses that we are in the case $r=2$. \\ We would also like to point out that even though we study the Abel-Jacobi map with $\Q$-coefficients, most of the arguments hold with $\mathbb{Z}$-coefficients as well, except some of the results (e.g.
Proposition \ref{chow1iso}(b)), where we use dimension reasoning. \section{Preliminaries} \subsection{Semistability and stability of vector bundles} Let $C$ be a nonsingular projective curve over $\mathbb{C}$, and let $E$ be a holomorphic vector bundle of rank $r$ over $C$. \\ Here onwards, by a \textit{variety} we will always mean an irreducible quasi-projective variety. \begin{definition}[Degree and slope] The \textit{degree} of $E$, denoted $deg(E)$, is defined as the degree of the line bundle $det(E) := \wedge^r E$. The \textit{slope} of $E$, denoted $\mu(E)$, is defined as \[\mu(E) := \dfrac{deg(E)}{r}.\] \end{definition} \begin{definition}[Semistability and stability] $E$ is called \textit{semistable (resp. stable)} if for any sub-bundle $F\subset E$ with $0< rank(F) < r$, we have \[\mu(F) \, \underset{(resp. <)}{\leq}\,\mu(E).\] \end{definition} It is easy to check that if $gcd(r, deg(E))=1$, then the notions of semistability and stability coincide for a vector bundle $E$. \subsection{Moduli space of vector bundles}\label{sec-2.2} We briefly recall the notion of the moduli space of vector bundles over $C$. The collection of all semistable vector bundles over $C$ of fixed slope $\mu$ forms an abelian category, whose simple objects are exactly the stable vector bundles of slope $\mu$. If $E$ is a semistable bundle of slope $\mu$, then there exists a \textit{Jordan-H\"older filtration} for $E$ given by \[E = E_k \supset E_{k-1} \supset \cdots \supset E_1 \supset 0\] such that each $E_i/E_{i-1}$ is a stable bundle of the same slope $\mu$. The filtration is not unique, but the associated graded object $\textrm{gr}(E) := \bigoplus_{i=1}^{k} E_i/E_{i-1}$ is unique up to isomorphism. Two vector bundles $E$ and $E'$ are called \textit{S-equivalent} if $\textrm{gr}(E)\cong \textrm{gr}(E')$. When $E, E'$ are stable, being S-equivalent is the same as being isomorphic as vector bundles over $C$.
The moduli space of S-equivalence classes of semistable vector bundles of rank $r$ and determinant $\mathcal{L}$ on $C$, denoted $\mathcal{M}(r,\mathcal{L})$, is a normal projective variety of dimension $(r^2-1)(g-1)$; its singular locus is given by the strictly semistable bundles. In the case when $gcd(r,deg(\mathcal{L}))=1$, $\mathcal{M}(r,\mathcal{L})$ consists of the isomorphism classes of stable vector bundles on $C$. It is a nonsingular projective variety; moreover, it is a fine moduli space. When $r,\mathcal{L}$ are fixed, we shall denote the moduli space by $ \mathcal{M} $ when there is no scope for confusion. \subsection{Parabolic bundles and stability} \begin{definition}[Parabolic bundles, Parabolic data] Let us fix a set $S = \{p_1, \cdots , p_n\}$ of $n$ distinct closed points on $C$. A \textit{Parabolic vector bundle of rank r on C} is a holomorphic vector bundle $E$ on $C$ with a \textit{Parabolic structure along points of S}. By this, we mean a collection of weighted flags of the fibers of $E$ over each point $p\in S$: \begin{align} E_p &= E_{p,1} \supsetneq E_{p,2} \supsetneq ... \supsetneq E_{p,s_p} \supsetneq E_{p, s_p+1}= 0, \\ 0 &\leq \alpha_{p,1} < \alpha_{p,2} < ... \,< \alpha_{p,s_p}\, < 1, \end{align} where $s_p$ is an integer between $1$ and $r$. The real number $\alpha_{p,i}$ is called the \textit{weight attached to the subspace} $E_{p,i}$. The \textit{multiplicity} of the weight $\alpha_{p,i}$ is the integer $m_{p,i} := dim(E_{p,i}) - dim(E_{p,i+1})$. Thus $\sum_i m_{p,i} = r$. We call the flag \textit{full} if $s_p=r,$ or equivalently $m_{p,i} =1 \,\forall i$. Let $\alpha := \{(\alpha_{p,1}, \alpha_{p,2}, ..., \,\alpha_{p,s_p}\,)\,|\,p\in S\}$ and $m:= \{(m_{p,1},...,\,m_{p,s_p}\,)\,|\, p\in S\}$. We call the tuple $(r,\,\mathcal{L},\,m,\,\alpha)$ the \textit{Parabolic data} for the Parabolic bundle $E$, where $\mathcal{L} := det(E)$. Usually we denote the Parabolic bundle by $E_*$ to distinguish it from the underlying vector bundle $E$.
\end{definition} \begin{definition}[Parabolic degree and slope] The \textit{degree} of a Parabolic bundle $E_*$ is defined as the usual degree $deg(E)$ of the underlying vector bundle $E$. The \textit{Parabolic degree} of $E_*$ with respect to $\alpha$ is defined as \[Pardeg_{\alpha}(E_*):= deg(E) + \sum_{p\in S}\sum_{i=1}^{s_p}m_{p,i}\alpha_{p,i}.\] The \textit{Parabolic slope} of $E_*$ with respect to $\alpha$ is defined as \[Par\mu_{\alpha}(E_*) := \dfrac{Pardeg_{\alpha}(E_*)}{rank(E)}.\] \end{definition} \begin{definition}[Parabolic semistability and stability]\label{pardegslopedef} Any vector sub-bundle $F\hookrightarrow E$ obtains a Parabolic structure in a canonical way: for each $p\in S$, the flag at $F_p$ is obtained by intersecting $F_p$ with the flag at $E_p$, and the weight attached to the subspace $F_{p,j}$ is $\alpha_{p,k}$, where $k$ is the largest integer such that $F_{p,j}\subseteq E_{p,k}$ (for more details, see \cite[Definition 1.7]{MS}). We call the resulting Parabolic bundle a \textit{Parabolic sub-bundle}, and denote it by $F_*$. A Parabolic bundle $E_*$ is called $\alpha$-\textit{Parabolic semistable (resp. $\alpha$-Parabolic stable)} if for every proper sub-bundle $F\subset E$ with $0<rank(F)<rank(E)$, we have \[Par\mu_{\alpha}(F_*)\, \underset{(resp. <)}{\leq} \,Par\mu_{\alpha}(E_*).\] We also call them simply Parabolic semistable or Parabolic stable if the weight is clear from the context. \end{definition} \subsection{Generic weights and walls} We briefly recall the notion of \textit{generic weights} and \textit{walls}; for more details we refer to \cite{BH}. Fix a set $S$ of points in $C$, a positive integer $r$, a line bundle $\mathcal{L}$ on $C$, and multiplicities $m$ as defined above. Let $\Delta^r:= \{(a_1,..., a_r)\, | \, 0\leq a_1 \leq ... \leq a_r <1\}$, and define $W := \{\alpha : S \rightarrow \Delta^r\}$. Note that the elements of $W$ determine both weights and multiplicities at the Parabolic points, and hence a Parabolic structure.
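As a quick sanity check of the definitions of Parabolic degree and slope above (a toy computation of ours, not from the text): take $r = 2$, $deg(E) = 1$, and a single Parabolic point $p$ with a full flag and weights $\alpha_{p,1} = \tfrac14$, $\alpha_{p,2} = \tfrac12$ (so $m_{p,1} = m_{p,2} = 1$). Then

```latex
\[
Pardeg_{\alpha}(E_*) = 1 + \tfrac14 + \tfrac12 = \tfrac74,
\qquad
Par\mu_{\alpha}(E_*) = \dfrac{Pardeg_{\alpha}(E_*)}{rank(E)} = \tfrac78.
\]
```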
Conversely, given multiplicities $m$ at the Parabolic points, we can associate a map $S \rightarrow \Delta^r$ by repeating each weight $\alpha_{p,i}$ according to its multiplicity $m_{p,i}$. This leads to a natural notion of when a given weight $\alpha$ is \textit{compatible} with the multiplicity $m$. We denote by $V_m$ the set of all weights compatible with $m$; it is a product of $|S|$-many simplices. Let $\alpha \in V_m$. Let $E_*$ be a Parabolic bundle with data $(r, \mathcal{L}, m, \alpha)$ and Parabolic degree 0. Let $d= deg(\mathcal{L})$. If $E_*$ is Parabolic semistable but not Parabolic stable, then it contains a sub-bundle $F$ of rank $r'$ and degree $d'$ (say) such that, under the induced Parabolic structure on $F$, with induced weights $\alpha' := \{0\leq \alpha'_{p,1} \leq \cdots \leq \alpha'_{p,r'}< 1\}_{p\in S}$ where each $\alpha'_{p,i}=\alpha_{p,j}$ for some $j$, we get $Par\mu_{\alpha'}(F_*)=Par\mu_{\alpha}(E_*)$. This translates to \begin{align*} r(d'+\sum_{p\in S}\sum_{i=1}^{r'}\alpha'_{p,i}) = r'(d+\sum_{p\in S}\sum_{j=1}^{r}\alpha_{p,j}). \end{align*} This clearly defines a hyperplane meeting $V_m$. Such a hyperplane is determined by the data $\xi := (r',d',\alpha')$. Hence, it is easy to see that only finitely many such hyperplanes can intersect $V_m$ (see \cite{BH}); call them $H_1,\cdots,H_l$. \begin{definition}[Walls and generic weights] We call each of the intersections $H_i \cap V_m$ a \textit{wall} in $V_m$. There are only finitely many such walls. We call the connected components of $V_m \setminus \cup_{i=1}^{l}H_i$ \textit{chambers}, and weights belonging to these chambers are called \textit{generic}. \end{definition} Clearly, for generic weights, a Parabolic bundle is Parabolic semistable iff it is Parabolic stable. \subsection{Moduli of Parabolic bundles}\label{sub-2.5} Again, we briefly recall the notion of the moduli space of Parabolic semistable bundles over $C$.
The construction is analogous to Section \ref{sec-2.2}; for details we refer to \cite{MS}.\\ \begin{definition}[\text{\cite[Definition 1.5]{MS}}]\label{defparhomo} A \textit{morphism of Parabolic bundles} $f: E_*\rightarrow E'_*$ is a morphism of usual bundles $E\rightarrow E'$ such that at each Parabolic point $P$, denoting by $f_P$ the restriction of $f$ to the fibre at $P$, we have $f_P(E_{P,i})\subseteq E'_{P,j}$ whenever $\alpha_{P,i}>\alpha'_{P,j}$. \end{definition} The collection of Parabolic semistable bundles $E_*$ with fixed Parabolic slope forms an abelian category. For each such $E_*$ there exists a Jordan-H\"{o}lder filtration similar to Section \ref{sec-2.2}, with the obvious difference that each successive quotient of the filtration is Parabolic stable and of the same Parabolic slope as $E_*$, and we can define an associated graded object $gr_{\alpha}(E_*)$ analogous to Section \ref{sec-2.2}. Again, we call two Parabolic semistable bundles $S$-equivalent if their associated graded objects are isomorphic. \begin{definition} We denote by $\mathcal{M}(r,\mathcal{L},m,\alpha)$ the moduli space of $S$-equivalence classes of Parabolic semistable bundles over $C$ of rank $r$, determinant $\mathcal{L}$, multiplicities $m$ and weights $\alpha$. It is a normal projective variety, with singular locus given by the strictly semistable bundles. When $r,\mathcal{L},m$ are fixed, we will denote the moduli space by $\Malpha$ if no confusion occurs.\\ For a generic weight $\alpha$, $\mathcal{M}_\alpha$ is a nonsingular projective variety; moreover, it is a fine moduli space (\cite[Proposition 3.2]{BY}). \end{definition} \subsection{Chow groups} Let $X$ be a variety over $\mathbb{C}$ of dimension $n$. Let $Z^k(X)$ (equivalently, $Z_{n-k}(X)$) denote the free abelian group generated by the irreducible closed subvarieties of codimension $k$ (equivalently, dimension $n-k$) in $X$.
There are many interesting equivalence relations on this group; we require two of them for our purpose: namely, \textit{rational equivalence} and \textit{homological equivalence}. We refer to \cite[Section 9]{Voi2} and \cite{Ful} for further details. Let $Z^k(X)_{hom}$ and $Z^k(X)_{rat}$ denote the subgroups of $Z^k(X)$ consisting of elements homologically equivalent to zero and rationally equivalent to zero, respectively. In general the following chain of containments holds: $Z^k(X)_{rat}\subseteq Z^k(X)_{hom}\subseteq Z^k(X)$. We define \[\CH^k(X) := \dfrac{Z^k(X)}{Z^k(X)_{rat}},\, \CH^k(X)_{hom} := \dfrac{Z^k(X)_{hom}}{Z^k(X)_{rat}}.\] Clearly $\CH^k(X)_{hom}\subset \CH^k(X)$. We also denote them by $\CH_{n-k}(X)_{hom}$ and $\CH_{n-k}(X)$ respectively, if we want to focus on the dimension rather than the codimension of the subvarieties. \subsection{Intermediate Jacobians} Let $X$ be a smooth projective variety over $\mathbb{C}$. For each $k\geq 0$, we have the Hodge decomposition \[H^k(X,\mathbb{C})\cong \bigoplus_{i+j=k,\, i,j\geq 0} H^{i,j}(X),\] where $H^{i,j}(X)$ are the Dolbeault cohomology groups, with the property that $\overline{H^{i,j}(X)} = H^{j,i}(X) \,\,\forall i,j$. Let $k=2p-1$ be odd, and define $F^pH^{2p-1}(X,\mathbb{C}) := H^{2p-1,0}(X)\oplus H^{2p-2,1}(X)\oplus\cdots \oplus H^{p,p-1}(X)$. Then we have \[H^{2p-1}(X,\C) = F^pH^{2p-1} \oplus \overline{F^pH^{2p-1}}.\] Moreover, the image of the composition \[H^{2p-1}(X,\mathbb{Z})\rightarrow H^{2p-1}(X,\C) \twoheadrightarrow \overline{F^pH^{2p-1}}\] is a lattice.
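Before giving the general definition, we note the motivating special case $p=1$: for a smooth projective curve $C$, \[\dfrac{H^1(C,\C)}{H^{1,0}(C)+H^1(C,\mathbb{Z})} \cong Jac(C),\] the Jacobian of $C$, as follows from the exponential exact sequence; this is the identification used in the proof of Proposition \ref{iso2} below.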
\begin{definition}\label{defintjac} We define the \textit{$p$-th Intermediate Jacobian} as \[IJ^p(X) := \dfrac{H^{2p-1}(X,\C)}{F^pH^{2p-1}(X,\mathbb{C})+ H^{2p-1}(X,\mathbb{Z})} = \dfrac{\overline{F^pH^{2p-1}(X,\mathbb{C})}}{H^{2p-1}(X,\mathbb{Z})}.\] \end{definition} \section{Surjectivity of Abel-Jacobi map for 1-cycles} Let us fix an integer $r$ and a line bundle $\mathcal{L}$ of degree 1, let $\M$ denote the moduli space of semistable bundles of rank $r$ and determinant $\mathcal{L}$, and let the moduli space of Parabolic bundles be denoted by $\Malpha := \M(r,\mathcal{L},m,\alpha)$. \subsection{CASE OF SMALL WEIGHTS}\label{smallweight} First, let us assume that the Parabolic weights are small enough that \cite[Proposition 5.2]{BY} is applicable. Such small generic weights exist by \cite[Proposition 3.2]{BY}, since the bundles have degree 1 and there are only finitely many walls. In this case, by \cite[Proposition 5.2]{BY}, there exists a morphism \[\pi : \Malpha\rightarrow \M\] given by forgetting the Parabolic structure; moreover, $\pi$ is a locally trivial fibration (in the Zariski topology) whose fibers are products of flag varieties, by \cite[Theorem 4.2]{BY}. \begin{proposition}\label{specseq} $\pi^* : H^3(\M,\Q)\rightarrow H^3(\Malpha,\Q)$ is an isomorphism. \end{proposition} \begin{proof} (See \cite[Theorem 4.11]{Voi2} for the description of the Leray spectral sequence.) Consider the Leray spectral sequence of $\pi$ with rational coefficients, satisfying \[E_{2}^{p,q} = H^p(\M, R^q \pi_* \underline{\Q}_{\Malpha}) \implies H^{p+q}(\Malpha,\Q),\] where $\underline{\Q}_{\Malpha}$ denotes the constant sheaf with stalk $\Q$. Since $\pi$ is a fibration, the sheaves $R^q \pi_* \underline{\Q}_{\Malpha}$ turn out to be local systems (locally constant sheaves with fibers finite dimensional $\Q$-vector spaces); moreover, since by \cite[Theorem 1.2]{KS} $\M$ is rational, it is simply connected.
If $F$ denotes the fiber of $\pi$, by the equivalence of categories between local systems and representations of $\pi_1(\M)$, we conclude that all the $R^q \pi_* \underline{\Q}_{\Malpha}$ are constant sheaves with stalk $H^q(F,\Q)$, \begin{eqnarray*} \therefore H^p(\M,R^q \pi_* \underline{\Q}_{\Malpha}) = H^p(\M, H^q(F,\Q)) \overset{UCT}{\simeq} H^p(\M, \Q)\otimes_{\Q} H^q(F,\Q). \end{eqnarray*} By a theorem of Deligne (\cite[Theorem 4.15]{Voi2}), since $\pi$ is smooth and proper, the above spectral sequence degenerates at the $E_2$-page, i.e. $E_2^{p,q} = E_{\infty}^{p,q} \,\,\forall p,q.$ Now, $H^3(\Malpha,\Q)$ has a filtration \begin{eqnarray}\label{filtraion} 0\subseteq F^3H^3\subseteq F^2H^3\subseteq F^1H^3\subseteq F^0H^3 = H^3(\Malpha,\Q), \end{eqnarray} with $E_2^{p,3-p} = E_{\infty}^{p,3-p}\simeq \dfrac{F^pH^3}{F^{p+1}H^3}\,\,\forall p\leq3.$ Let us compute the various $E_2^{p,3-p}$'s: \begin{eqnarray*} E_2^{3,0} &=& H^3(\M,\Q),\\ E_2^{2,1} &=& H^2(\M, H^1(F,\Q)) = 0\,\,(\because \text{$F$ has cohomology only in even degrees}),\\ E_2^{1,2} &=& H^1(\M,H^2(F,\Q)) = 0\,\,(\because \M \text{ is simply connected}),\\ E_2^{0,3} &=& H^0(\M,H^3(F,\Q)) = 0 \,\,(\because \text{$F$ has cohomology only in even degrees}) \end{eqnarray*} $\therefore$ the filtration in \ref{filtraion} looks like \[0\subseteq F^3H^3 = ... = F^0H^3=H^3(\Malpha,\Q),\] from which we conclude that $H^3(\M,\Q) = E_2^{3,0}=E_{\infty}^{3,0}\simeq F^3H^3 = F^0H^3 = H^3(\Malpha,\Q)$. Moreover, the isomorphism is precisely the edge map $E_\infty^{3,0}\rightarrow H^3(\Malpha,\Q)$, which coincides with $\pi^*$. Hence we get our result. \end{proof} Fix a universal (Poincar\'e) bundle $U \rightarrow C\times \M$, and let $p_1: C\times\M \rightarrow C, p_2: C\times \M \rightarrow \M$ be the projections. Let $c_2(U)\in H^4(C\times\M,\Q)$ denote the second Chern class of $U$.
Consider the following homomorphisms: \begin{eqnarray} H^1(C,\Q)\overset{p_{1}^{*}}{\rightarrow} H^1(C\times \M,\Q) \overset{\cup c_2(U)}{\longrightarrow}H^5(C\times \M,\Q)\overset{p_{2*}}{\longrightarrow}H^3(\M,\Q) \label{3.2} \end{eqnarray} The last map, namely the ``pushforward'' $p_{2*}$, is defined by the composition \begin{equation*} \resizebox{1.0\hsize}{!}{$H^5(C\times \M,\Q)\xrightarrow[\simeq]{\overset{\text{Poincar\'e}}{\text{duality}}} H_{2n-5}(C\times\M,\Q) \xrightarrow[\simeq]{UCT} H^{2n-5}(C\times\M,\Q)\,\check{}\xrightarrow{(p_2^{*})\check{}}H^{2n-5}(\M,\Q)\check{}\xrightarrow[\simeq]{UCT}H_{2n-5}(\M,\Q) \xrightarrow[\simeq]{\overset{\text{Poincar\'e}}{\text{duality}}}H^3(\M,\Q)$} \end{equation*} where UCT denotes the isomorphism given by the Universal Coefficient Theorem. \\ The composition $\Gamma_{c_2(U)} := p_{2*}\circ \cup c_2(U) \circ p_1^*$ is called the \textit{correspondence} defined by $c_2(U)$ in the literature. \begin{proposition}\label{cohoiso} $\Gamma_{c_2(U)} : H^1(C,\Q) \rightarrow H^3(\M,\Q)$ is an isomorphism for $r\geq 2 , g\geq 3$. \end{proposition} \begin{proof} This follows from \cite[Theorem 2.1]{JY}, by observing that the argument only uses the fact that the determinant is a line bundle of degree 1. \end{proof} Let $U' := (Id\times\pi)^*(U)$ denote the pullback of $U$ onto $C\times\Malpha$, and define $\Gamma_{c_2(U')}$ in the same way as $\Gamma_{c_2(U)}$ in \eqref{3.2}, replacing $\M$ by $\Malpha$ and $U$ by $U'$. \begin{lemma} $\Gamma_{c_2(U')}: H^1(C,\Q) \rightarrow H^3(\Malpha,\Q)$ is an isomorphism as well.
\end{lemma} \begin{proof} Consider the following commutative diagram: \begin{align*} \xymatrix{H^1(C,\Q) \ar[r]^(.45){p_1^*} \ar[d]^{Id} & H^1(C\times\Malpha,\Q) \ar[r]^{\cup c_2(U')} \ar[d]^{(Id\times\pi)^*} & H^5(C\times\Malpha,\Q) \ar[r]^(.56){p_{2*}} \ar[d]^{(Id\times\pi)^*} & H^3(\Malpha,\Q) \ar[d]^{\pi^*} \\ H^1(C,\Q) \ar[r]^(.45){p_1^*} & H^1(C\times \M,\Q) \ar[r]^{\cup c_2(U)} & H^5(C\times\M,\Q) \ar[r]^(.56){p_{2*}} & H^3(\M,\Q) } \end{align*} The rightmost vertical map $\pi^*$ is an isomorphism by Proposition \ref{specseq}. The lower horizontal composition equals $\Gamma_{c_2(U)}$ by definition, which is also an isomorphism by Proposition \ref{cohoiso}. Hence we conclude that the upper horizontal composition, which equals $\Gamma_{c_2(U')}$, is an isomorphism as well. \end{proof} Next, let $\mathcal{O}(1)$ be a very ample line bundle on $\Malpha$, and let $H:=c_1(\mathcal{O}(1))$ be its first Chern class. If $n=$ dim$\Malpha$, then there is a Hard Lefschetz isomorphism \begin{align}\label{hardlefschetz} H^3(\Malpha,\Q)\xrightarrow[\simeq]{\cup H^{n-3}} H^{2n-3}(\Malpha,\Q) \end{align} Together with the preceding lemma, we obtain an isomorphism \begin{align}\label{iso1} \cup H^{n-3}\circ\Gamma_{c_2(U')} : H^1(C,\Q)\xrightarrow{\sim} H^{2n-3}(\Malpha,\Q) \end{align} \begin{proposition}\label{iso2} The isomorphism \ref{iso1} induces an isomorphism \begin{align} Jac(C)\otimes \Q \xrightarrow{\sim} IJ^{n-1}(\Malpha)\otimes \Q = \dfrac{H^{2n-3}(\Malpha,\C)}{F^{n-1}H^{2n-3}+H^{2n-3}(\Malpha,\Q)} \end{align} where $Jac(C)$ denotes the Jacobian of $C$, the group of isomorphism classes of degree 0 line bundles on $C$. \end{proposition} \begin{proof} We recall the well-known fact that $Jac(C) \cong IJ^1(C) = \dfrac{H^1(C,\C)}{H^{1,0}(C)+H^1(C,\mathbb{Z})}$, which can be seen from the exponential exact sequence.
The Hodge decomposition gives $H^{2n-3}(\Malpha,\C) = H^{n,n-3}\oplus \cdots\oplus H^{n-3,n}$; moreover, since the Hard Lefschetz isomorphism \ref{hardlefschetz} (after tensoring by $\C$) respects the Hodge decomposition, $H$ being a class of type $(1,1)$ (\cite[Proposition 11.27]{Voi1}), we have that under \ref{hardlefschetz}, $H^{3,0}(\Malpha)\cong H^{n,n-3}(\Malpha)$ and $H^{0,3}(\Malpha)\cong H^{n-3,n}(\Malpha)$. But since $\Malpha$ is a rational variety, $H^{3,0}(\Malpha)$ $=H^{0,3}(\Malpha) = 0$; hence \[H^{2n-3}(\Malpha) = H^{n-1,n-2}\oplus H^{n-2,n-1}.\] Since $p_1^*, \cup c_2(U'), p_{2*}, \cup H^{n-3}$ are morphisms of Hodge structures of types $(0,0), (2,2),$\\ $ (-1,-1)$ and $ (n-3,n-3)$ respectively, it is easy to see that the isomorphism \ref{iso1} takes $H^{1,0}(C)$ to $H^{n-1,n-2}(\Malpha)$ $= F^{n-1}H^{2n-3}(\Malpha,\mathbb{C})$. Hence we get our required isomorphism by going modulo the appropriate subgroups. \end{proof} Next we want to show that the isomorphism \ref{iso2} is compatible with the analogous ``correspondence map'' on the rational Chow groups, which is defined analogously to $\Gamma_{c_2(U)}$: consider the composition \begin{align*} \CH^1(C)\otimes{\Q}\overset{p_{1}^{*}}{\rightarrow} \CH^1(C\times \Malpha)\otimes{\Q} \overset{\cap c_2(U')}{\longrightarrow}\CH^3(C\times \Malpha)\otimes{\Q}\overset{p_{2*}}{\longrightarrow}\CH^2(\Malpha)\otimes{\Q}, \end{align*} where $U'$ is the pullback of $U$ as before, $c_2(U')\in \CH^2(C\times \Malpha)$ is the algebraic Chern class, $\cap$ denotes the intersection product and $p_1^*, p_{2*}$ denote the flat pullback and proper pushforward respectively.\\ Define $\Gamma_{c_2(U')}^{\CH} := p_{2*} \circ \cap c_2(U')\circ p_1^*$.
This restricts to the subgroups of cycles homologically equivalent to zero, giving \[\Gamma_{c_2(U')}^{\CH}: \CH^1(C)_{hom}\otimes\Q \rightarrow \CH^2(\Malpha)_{hom}\otimes \Q.\] Let $H := c_1(\mathcal{O}(1))$ denote the algebraic first Chern class of a very ample line bundle on $\Malpha$. \begin{proposition} The following diagram commutes: \begin{align} \xymatrixcolsep{9pc}\xymatrix{\CH^1(C)_{hom}\otimes\Q \ar[r]^{\cap H^{n-3}\circ\Gamma_{c_2(U')}^{\CH}} \ar[d]^{AJ^1}_{\cong} & \CH^{n-1}(\Malpha)_{hom}\otimes \Q \ar[d]^{AJ^{n-1}} \\ Jac(C)\otimes \Q \ar[r]^{\cong} & IJ^{n-1}(\Malpha)\otimes \Q } \end{align} where the lower horizontal isomorphism is given by Proposition \ref{iso2}. \end{proposition} \begin{proof} (\cite[Lemma 2.4]{JY}) By \cite[Page 44 (7.9)]{EV}, there exists a short exact sequence which relates the Intermediate Jacobian with the Deligne cohomology group: \[0\rightarrow IJ^{n-1}(\Malpha)\rightarrow H^{2n-2}_{\mathcal{D}}(\Malpha, \mathbb{Z}(n-1))\rightarrow Hg^{n-1}(\Malpha)\rightarrow 0\] Moreover, there exists a Deligne cycle class map \[\CH^{n-1}(\Malpha)\rightarrow H^{2n-2}_{\mathcal{D}}(\Malpha,\mathbb{Z}(n-1))\] which, when restricted to the subgroup $\CH^{n-1}(\Malpha)_{hom}$, factors through the subgroup $IJ^{n-1}(\Malpha)$ and coincides with the Abel-Jacobi map $AJ^{n-1}$. Furthermore, the Deligne cycle class map is compatible with the correspondence map and the intersection product on Chow groups; in other words, the diagram in question commutes. The left vertical Abel-Jacobi map is an isomorphism by \cite[Theorem 11.1.3]{BL}, since for a smooth curve $\CH^1(C)_{hom} = Pic^0(C)$. \end{proof} \begin{corollary}\label{abeljacobismallwt} The Abel-Jacobi map $AJ^{n-1}: \CH^{n-1}(\Malpha)_{hom}\otimes \Q \rightarrow IJ^{n-1}(\Malpha)\otimes\Q$ is a split surjection. \end{corollary} \begin{proof} Follows from the above proposition.
\end{proof} \subsection{CASE OF ARBITRARY WEIGHTS}\label{arbitraryweight} Let $\alpha, \beta$ be two generic weights lying in adjacent chambers in $V_m$ separated by a single wall. Let $H$ be the hyperplane separating them, and let $\gamma$ be the intersection of $H$ with the line joining $\alpha$ and $\beta$. Then $\Malpha$ and $\Mbeta$ are smooth projective varieties, while $\Mgamma$ is normal projective, with singular locus $\Sigma_{\gamma}$ given by the strictly semistable bundles.\\ By \cite[Theorem 3.1]{BH}, there exist projective morphisms \[ \xymatrix{ \Malpha \ar[rd]_{\phi_{\alpha}} & &\Mbeta \ar[ld]^{\phi_{\beta}} \\ &\Mgamma } \] such that (i) $ \phialpha $ and $ \phibeta $ are isomorphisms along $ \Mgamma \setminus \Sigma_{\gamma} $, \\ \hspace*{10.7ex}(ii) $\phi_{\alpha}^{-1}(\Sigma_{\gamma})$ (resp. $\phi_{\beta}^{-1}(\Sigma_{\gamma})$) is a $\Pnalpha$-bundle (resp. $\Pnbeta$-bundle) over $\Sigma_{\gamma}$, \\ \hspace*{5.5ex} and (iii) $n_{\alpha}+n_{\beta}+1 = codim \Sigma_{\gamma}$. Let $\N := \Malpha \underset{\Mgamma}{\times}\Mbeta$. It follows from the discussion at the end of Section 1 in \cite{BH} that $\N$ is the common blow-up of $\Malpha$ and $\Mbeta$ along $\phi_{\alpha}^{-1}(\Sigma_{\gamma})$ and $\phi_{\beta}^{-1}(\Sigma_{\gamma})$ respectively. Let $\psi_{\alpha} :\N \rightarrow \Malpha, \psi_{\beta}:\N \rightarrow \Mbeta$ denote the obvious maps.
Let $j: E \hookrightarrow \N$ be the exceptional divisor; then $E = \psi_{\alpha}^{-1}(\phi_{\alpha}^{-1}(\Sigma_{\gamma})) = \psi_{\beta}^{-1}(\phi_{\beta}^{-1}(\Sigma_{\gamma}))$, and hence we have the fibre diagram \[ \xymatrix{ &E \ar[ld]_{\psi_{\alpha}} \ar[rd]^{\psi_{\beta}} \\ \phi_{\alpha}^{-1}(\Sigma_{\gamma}) \ar[rd]_{\phialpha} & &\phi_{\beta}^{-1}(\Sigma_{\gamma}) \ar[ld]^{\phibeta} \\ &\Sigma_{\gamma} } \] from which we get that $E$ is a $\Pnbeta$-bundle over $\phialpha^{-1}(\Sigma_{\gamma})$ and a $\Pnalpha$-bundle over $\phibeta^{-1}(\Sigma_{\gamma})$, via $\psi_{\alpha}$ and $\psi_{\beta}$ respectively. \begin{lemma}\label{rational} $\phi_{\alpha}^{-1}(\Sigma_{\gamma})$ and $\phi_{\beta}^{-1}(\Sigma_{\gamma})$ are smooth rational varieties (i.e. birational to $\mathbb{P}^n$ for some $n$); hence $\CH_0(\phi_{\alpha}^{-1}(\Sigma_{\gamma}))\otimes{\Q} \cong \mathbb{Q}\cong \CH_0(\phi_{\beta}^{-1}(\Sigma_{\gamma}))\otimes{\Q}$. \end{lemma} \begin{proof} $\phi_{\alpha}^{-1}(\Sigma_{\gamma})$ is smooth since it is a projective bundle over the smooth variety $\Sigma_{\gamma}$. By equation (5) in \cite{BH}, $ \Sigma_{\gamma} $ is the product of two smaller dimensional moduli, which are rational (by \cite[Theorem 6.2]{BY}), so $ \Sigma_{\gamma} $ is itself rational. Since $ \phialpha^{-1}(\Sigma_{\gamma}) $ and $ \phibeta^{-1}(\Sigma_{\gamma}) $ are projective bundles over $ \Sigma_{\gamma} $, they are also rational. This proves the first assertion. Moreover, by \cite[Example 16.1.11]{Ful}, the Chow group of 0-cycles is a birational invariant, and $\CH_0(\mathbb{P}^n) \cong \mathbb{Z} \,\forall\,n$, so we get the second assertion as well. \end{proof} Also note that $\N$ is smooth, being the blow-up of the smooth variety $\Malpha$ along the smooth subvariety $\phi_{\alpha}^{-1}(\Sigma_{\gamma})$.
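For later use, we record the codimension count that follows immediately from (ii) and (iii) above: the fibres of $\phialpha$ over $\Sigma_{\gamma}$ have dimension $n_{\alpha}$, so \[codim\, \phi_{\alpha}^{-1}(\Sigma_{\gamma}) = codim\, \Sigma_{\gamma} - n_{\alpha} = (n_{\alpha}+n_{\beta}+1) - n_{\alpha} = n_{\beta}+1,\] and similarly $codim\, \phi_{\beta}^{-1}(\Sigma_{\gamma}) = n_{\alpha}+1$.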
\begin{proposition}\label{chow1iso} $\psi_{\alpha}:\mathcal{N} \rightarrow \Malpha$ induces the following isomorphisms: \begin{eqnarray*} &(a)& \CH_1(\Malpha)_{hom}\otimes \Q \xrightarrow[\simeq]{\psi_{\alpha}^*} \CH_1(\N)_{hom}\otimes\Q. \\ &(b)& H^{2n-3}(\Malpha ,\Q) \xrightarrow[\simeq]{\psi_{\alpha}^*} H^{2n-3}(\N,\Q), \,\text{and hence}\, IJ^{n-1}(\Malpha)\xrightarrow[\simeq]{\psi_{\alpha}^*} IJ^{n-1}(\N). \end{eqnarray*} \end{proposition} \begin{proof} (a) By \cite[Theorem 9.27]{Voi2}, there exists an isomorphism of Chow groups \begin{align} \CH_0(\phialpha^{-1}(\Sigma_{\gamma}))\otimes{\Q} &\bigoplus \CH_1(\Malpha)\otimes{\Q} \overset{g_{\alpha}}{\longrightarrow} \CH_1(\N)\otimes{\Q} \label{chowiso}\\ \textnormal{given\,\,by}\quad\quad (W_0\,\,&, \,\,W_1) \longmapsto j_*\big(c_1(\mathcal{O}(1))^{n_{\beta}-1}\cap (\psi_{\alpha}|_E)^*(W_0)\big) + \psi_{\alpha}^*(W_1), \end{align} where as before $\mathcal{O}(1)$ is a very ample line bundle on $E$. Of course, a similar isomorphism $g_{\beta}$ exists for the blow-up $\psi_{\beta}: \N\rightarrow\Mbeta$ as well. Now, since $\gamma$ lies on only one hyperplane, $\Sigma_{\gamma}$ is nonsingular by \cite[Section 3.1]{BH}. Hence $\phialpha^{-1}(\Sigma_{\gamma})$ and $\phibeta^{-1}(\Sigma_{\gamma})$ are also nonsingular, being projective bundles over $\Sigma_{\gamma}$. They are rational as well, since $\Sigma_{\gamma}$ is rational, being a product of two smaller dimensional moduli \cite[Section 3.1]{BH}. It is clear that $g_{\alpha}$ restricts to an isomorphism on the cycles homologically equivalent to zero, since the cycle class map commutes with pullback and pushforward maps on cohomology.
\\ In general, if $X$ is a nonsingular variety of dimension $n$ and $Z$ is a nonsingular closed subvariety of codimension $r$, then by \cite[Remark 11.16]{Voi1}, under the cycle class map $\CH^r(X) \rightarrow H^{2r}(X,\mathbb{Z})$ followed by Poincar\'e duality $H^{2r}(X,\mathbb{Z})\cong H_{2n-2r}(X,\mathbb{Z})$, the cycle class $[Z]$ maps to the homology class of the oriented submanifold $Z$. Moreover, if in addition $X$ is rational, then $\CH_0(X)\otimes \Q \simeq \Q \simeq H_0(X,\Q)$. Hence the composition \[\CH_0(X)\otimes \Q \xrightarrow{\overset{\text{cycle class}}{\text{map}}} H^{2n}(X,\Q)\overset{\overset{\text{Poincar\'e}}{\text{duality}}}{\cong} H_0(X,\Q)\] is an isomorphism. But the elements of the subgroup $\CH_0(X)_{hom}\otimes \Q \subset \CH_0(X)\otimes \Q$ go to zero under this composition by definition, so $\CH_0(X)_{hom}\otimes \Q = 0$. Hence we get our claim.\\ (b) We also have the following blow-up formula for cohomology groups, for all $k\geq 0$: \begin{align} \bigoplus_{q=1}^{n_{\beta}} H^{k-2q}(\phi_{\alpha}^{-1}(\Sigma_{\gamma}),\Q) &\oplus H^k(\Malpha ,\Q) \xrightarrow{\sim} H^k(\N,\Q), \label{cohomologyiso}\\ (\sigma_1,\cdots,\sigma_{n_{\beta}}, \,&\,\tau) \mapsto \sum_{q=1}^{n_{\beta}}j_*(c_1(\mathcal{O}_E(1))^{q-1} \cup (\psi_{\alpha}|_E)^*(\sigma_q)) + \psi_{\alpha}^*(\tau) \end{align} Put $k=2n-3$, where $n=dim \Malpha$. Note that $q\leq n_{\beta}-1 \implies 2n-3-2q \geq 2n-2n_{\beta}-1$. Since codim $\phi_{\alpha}^{-1}(\Sigma_{\gamma}) = n_{\beta}+1$, the real dimension of $\phi_{\alpha}^{-1}(\Sigma_{\gamma})$ equals $2n-2n_{\beta}-2$, hence on the LHS of \ref{cohomologyiso} all the $H^{2n-3-2q}(\phi_{\alpha}^{-1}(\Sigma_{\gamma}),\Q)$ are zero except possibly for $q= n_{\beta}$. But by Poincar\'e duality,\\ $H^{2n-3-2n_{\beta}}(\phi_{\alpha}^{-1}(\Sigma_{\gamma})) \simeq H_1(\phi_{\alpha}^{-1}(\Sigma_{\gamma}))$, which is zero as well, since $\phi_{\alpha}^{-1}(\Sigma_{\gamma})$ is rational.
\\ $\therefore$ We get our claim, since $\psi_{\alpha}^*$ respects the Hodge decomposition. \end{proof} \begin{theorem}\label{abeljacobiarbitwt} For any generic weight $\alpha$, $AJ^{n-1}: \CH^{n-1}(\Malpha)_{hom}\otimes \Q \rightarrow IJ^{n-1}(\Malpha)\otimes\Q$ is a split surjection. \end{theorem} \begin{proof} The result has already been proven for small weights in Corollary \ref{abeljacobismallwt}. First let $\alpha$ and $\beta$ belong to adjacent chambers separated by a single wall, and moreover assume $\alpha$ to be small. Consider the following commutative diagram: \begin{align*} \xymatrix{ \CH_1(\Malpha)_{hom}\otimes \Q \ar[r]_{\simeq}^{\psi_{\alpha}^*} \ar[d]_{AJ^{n-1}} & \CH_1(\N)_{hom}\otimes \Q \ar[d]^{AJ^{n-1}} \\ IJ^{n-1}(\Malpha)\otimes\Q \ar[r]_{\simeq}^{\psi_{\alpha}^*} & IJ^{n-1}(\N)\otimes\Q } \end{align*} where the two horizontal maps are isomorphisms by Proposition \ref{chow1iso}. Since $\alpha$ is small, the left vertical Abel-Jacobi map is a split surjection by Corollary \ref{abeljacobismallwt}, from which we get that the right vertical arrow is also a split surjection. Replacing $\alpha$ by $\beta$ in the diagram above, we conclude that the Abel-Jacobi map is a split surjection for $\Mbeta$ as well.\\ Recall that there are only finitely many walls in the set $V_m$ of all admissible weights, and that the moduli spaces corresponding to weights in the same chamber are isomorphic. If $\alpha$ is any arbitrary generic weight, we can choose a small weight $\beta$ and a sequence of generic weights starting at $\alpha$ and ending at $\beta$, such that each pair of successive weights is separated by a single wall; in that case, by the argument above, since the statement holds for $\beta$, it holds for $\alpha$ as well. \end{proof} \section{Abel-Jacobi isomorphism for codimension 2 cycles in the rank 2 case} We keep the same notation as in Section 3, with the only modification being that we fix the rank $r=2$ here.
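Before proceeding, we record what the definitions of Section 2 give in this rank 2 setting (assuming, purely for illustration, full flags at all Parabolic points, so that all multiplicities equal 1): since $deg(E) = deg(\mathcal{L}) = 1$, \[Par\mu_{\alpha}(E_*) = \frac{1+\sum_{p\in S}(\alpha_{p,1}+\alpha_{p,2})}{2},\] and Parabolic (semi)stability of $E_*$ is a condition on its line sub-bundles only.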
\subsection{CASE OF SMALL WEIGHTS} Let $\alpha$ be small, as in Section \ref{smallweight}. In this case, the canonical morphism $\pi: \Malpha \rightarrow \M$ makes $\Malpha$ into a $(\mathbb{P}^1)^m$-bundle over $\mathcal{M}$, where $m$ is the number of Parabolic points, by \cite[Theorem 4.2]{BY}. \begin{lemma}\label{chowiso1} (a) $\CH^1(\Malpha)_{hom}\otimes \Q = 0.$\\ \hspace*{11.0ex} (b) $\pi^*$ induces an isomorphism $\CH^2(\M)_{hom}\otimes \Q \simeq \CH^2(\Malpha)_{hom}\otimes \Q.$ \end{lemma} \begin{proof} (a) First, let there be only one Parabolic point. In that case, $\pi: \Malpha\rightarrow \M$ is a $\mathbb{P}^1$-bundle, since $\alpha$ is small. Therefore, by the projective bundle formula for Chow groups (\cite[Theorem 9.25]{Voi2}), putting $n = dim\Malpha$, \begin{align*} &\CH_{n-1}(\Malpha) \simeq \CH_{n-2}(\M) \oplus \CH_{n-1}(\M) \\ \implies & \CH^1(\Malpha) \simeq \CH^1(\M) \oplus \CH^0(\M) \hspace{10ex} (\because dim \M = n-1)\\ \implies &\CH^1(\Malpha)_{hom} \simeq \CH^1(\M)_{hom}\oplus \CH^0(\M)_{hom} \end{align*} But clearly $\CH^0(\M)\otimes\Q = \Q,$ and $\CH^1(\M)\otimes\Q = \Q$ by \cite[Th\'eor\`eme B]{DN}, so \\ $\CH^0(\M)_{hom}\otimes \Q=\CH^1(\M)_{hom}\otimes \Q =0$, and hence \begin{align*} \CH^1(\Malpha)_{hom}\otimes\Q = 0. \end{align*} In general, $\Malpha$ has the following description: if there are $m$ distinct Parabolic points $S=\{p_1,\cdots, p_m\}$, for each $1\leq i\leq m$ let $ \mathcal{E}_i $ denote the restriction of the universal bundle over $ C\times \mathcal{M}$ to $ \{p_i\}\times \mathcal{M} $. Then an argument analogous to the one above shows that $\mathcal{M}_{\alpha}$ is isomorphic to the fiber product of the $\mathbb{P}(\mathcal{E}_i)$'s over $\mathcal{M}$, i.e. \begin{align*} \mathcal{M}_\alpha \cong \mathbb{P}(\mathcal{E}_1)\times_{\mathcal{M}} \mathbb{P}(\mathcal{E}_2)\times_{\mathcal{M}} ...\times_{\mathcal{M}}\mathbb{P}(\mathcal{E}_m).
\end{align*} For each $1\leq i\leq m$, let $ \mathcal{F}_i:= \mathbb{P}(\mathcal{E}_1)\times_{\mathcal{M}} \mathbb{P}(\mathcal{E}_2)\times_{\mathcal{M}} ...\times_{\mathcal{M}} \mathbb{P}(\mathcal{E}_i) $.\newline Here $\Malpha \cong \mathcal{F}_m$, so we have the following fiber diagram: \begin{align}\label{diagram} \xymatrix{\Malpha \ar[r] \ar[d] & \mathbb{P}(\mathcal{E}_m) \ar[d] \\ \mathcal{F}_{m-1} \ar[r] & \mathcal{M} } \end{align} By induction on $m$, we have $\CH^1(\mathcal{F}_{m-1})_{hom}\otimes\Q = 0$; moreover, the left vertical arrow above is a $\mathbb{P}^1$-bundle, and so by the same argument as above, \begin{align} \CH^1(\Malpha)_{hom}\otimes \Q \simeq \CH^1(\mathcal{F}_{m-1})_{hom}\otimes \Q =0. \end{align} (b) Again, first let there be only one Parabolic point $p$; by the projective bundle formula for Chow groups (\cite[Theorem 9.25]{Voi2}), \begin{align*} &\CH_{n-2}(\Malpha) \simeq \CH_{n-3}(\M) \oplus \CH_{n-2}(\M) \hspace{15ex}(n= dim \Malpha) \nonumber\\ \implies & \CH^2(\Malpha) \simeq \CH^2(\M) \oplus \CH^1(\M) \hspace{17ex} (\because dim \M = n-1)\\ \implies &\CH^2(\Malpha)_{hom} \simeq \CH^2(\M)_{hom}\oplus \CH^1(\M)_{hom} \end{align*} Again, since $\CH^1(\M)\otimes\Q = \Q$, we get \begin{align*} \CH^2(\Malpha)_{hom}\otimes \Q \simeq \CH^2(\M)_{hom}\otimes \Q. \end{align*} In general, for $m$ Parabolic points, using the left vertical arrow in \ref{diagram}, which is a $\mathbb{P}^1$-bundle, we get \begin{align*} \CH^2(\Malpha)_{hom}\otimes\Q \simeq \CH^2(\mathcal{F}_{m-1})_{hom}\otimes \Q \,\oplus\, \CH^1(\mathcal{F}_{m-1})_{hom}\otimes \Q \end{align*} By (a), we have $\CH^1(\mathcal{F}_{m-1})_{hom}\otimes \Q = 0$, and by induction on $m$, we have $\CH^2(\mathcal{F}_{m-1})_{hom}\otimes \Q \simeq \CH^2(\M)_{hom}\otimes \Q$, and hence we get our result. \end{proof} \begin{proposition}\label{ajsmallwt} The Abel-Jacobi map $AJ^2: \CH^2(\Malpha)_{hom}\otimes\Q \rightarrow IJ^2(\Malpha)\otimes\Q$ is an isomorphism.
\end{proposition} \begin{proof} By Proposition \ref{specseq}, $\pi^* : H^3(\M,\Q) \rightarrow H^3(\Malpha,\Q)$ is an isomorphism. Of course, the isomorphism is also valid if we consider cohomology with $\mathbb{C}$-coefficients, in which case it is an isomorphism of Hodge structures; hence, going modulo the appropriate subgroups on both sides, we get an induced isomorphism on the intermediate Jacobians: \begin{align*} \pi^*: IJ^2(\M)\otimes \Q \simeq IJ^2(\Malpha)\otimes \Q. \end{align*} Now, consider the commutative diagram: \begin{align} \xymatrix{ \CH^2(\M)_{hom}\otimes \Q \ar[r]_{\simeq}^{\pi^*} \ar[d]_{AJ^2} & \CH^2(\Malpha)_{hom}\otimes \Q \ar[d]^{AJ^2} \\ IJ^2(\M)\otimes\Q \ar[r]_{\simeq}^{\pi^*} & IJ^2(\Malpha)\otimes\Q } \end{align} By Lemma \ref{chowiso1} (b) the top horizontal map is an isomorphism. The left vertical map is an isomorphism as well by \cite[Lemma 3.10 (a)]{JY} together with \cite[Proposition 3.11]{JY}, noting that even though the Proposition in \cite{JY} is only proved in the case when the determinant $\mathcal{L}\simeq \mathcal{O}(-x)$ for a point $x\in C$, the argument goes through for any line bundle of degree 1, since the proof only relies on the fact that $\mathcal{M}$ is rational. Therefore we conclude that the right vertical map is an isomorphism as well. \end{proof} \subsection{CASE OF ARBITRARY WEIGHTS} Recall the discussion at the beginning of Section \ref{arbitraryweight}. Let $\alpha$ and $\beta$ be two generic weights lying in adjacent chambers in $V_m$ separated by a single wall. Recall that $\N$ is a common blow-up of $\Malpha$ and $\Mbeta$ along $\phi_{\alpha}^{-1}(\Sigma_{\gamma})$ and $\phi_{\beta}^{-1}(\Sigma_{\gamma})$ respectively. Let $\psi_{\alpha}, \psi_{\beta}$ denote the usual projections from $\N$ to $\Malpha, \Mbeta$ respectively. We work with $\psi_{\alpha}: \N \rightarrow \Malpha$, the case of $\psi_{\beta}$ being identical.
\begin{lemma} $\psi_{\alpha}^*$ induces an isomorphism $IJ^2(\Malpha)\otimes\Q \simeq IJ^2(\N)\otimes \Q.$ \end{lemma} \begin{proof} By the blow-up formula for cohomology, we get \begin{align} H^3(\N,\Q) \simeq H^3(\Malpha,\Q) \oplus H^1(\phialpha^{-1}(\Sigma_{\gamma}),\Q) \end{align} Also, by Lemma \ref{rational}, $\phialpha^{-1}(\Sigma_{\gamma})$ is rational, hence $H^1(\phialpha^{-1}(\Sigma_{\gamma}),\Q) =0 $. Moreover, $\psi_{\alpha}^*$ is a map of Hodge structures, so we get our claim. \end{proof} \begin{proposition}\label{sigmagammadescription} Let $\mathcal{M}(1,d,\sigma)$ denote the moduli space of Parabolic line bundles of degree $d$ and weight $\sigma$. Then $\Sigma_{\gamma}\simeq \coprod_{i=1}^k \mathcal{M}(1,d_i,\alpha^{i})$ for some non-negative integers $d_i$ and weights $\alpha^i$. Here the integers $d_i$ are not necessarily distinct. \end{proposition} \begin{proof} Let $[E_*]\in\Sigma_{\gamma}$ be the $S$-equivalence class of a $\gamma$-semistable bundle $E_*$. Since $E_*$ is not $\gamma$-stable and $rank(E) =2$, there exists a line sub-bundle $L$ of $E$ such that, with the induced Parabolic structure, the Parabolic sub-bundle $L_*$ has the same Parabolic slope as $E_*$. \\ Now, $\mathcal{L} = det(E) \simeq L\otimes (E/L) \implies (E/L)\simeq L^{-1}\otimes \mathcal{L}$. If $(L^{-1}\otimes \mathcal{L})_*$ denotes the quotient Parabolic structure induced from $E_*$, then consider the short exact sequence of Parabolic bundles \[0\rightarrow L_*\rightarrow E_* \rightarrow (L^{-1}\otimes \mathcal{L})_*\rightarrow 0\] If $\mu := Par\mu(L_*) = Par\mu(E_*)$ and $\mu':= Par\mu((L^{-1}\otimes \mathcal{L})_*)$, then $\mu = \dfrac{\mu+\mu'}{2} \implies \mu =\mu'$. Recalling the discussion from Section \ref{sub-2.5}, we conclude that \[gr_{\gamma}(E_*) = L_*\oplus (L^{-1}\otimes \mathcal{L})_* \implies [E_*] = [L_*\oplus (L^{-1}\otimes \mathcal{L})_*]. \] Let $d:= deg(L)$. Then $deg(L^{-1}\otimes \mathcal{L}) = 1-d$, and $d\geq 1$ (resp. $d\leq 0$) $\iff 1-d\leq 0$ (resp. $1-d\geq 1$), so exactly one of $L$ and $L^{-1}\otimes \mathcal{L}$ has positive degree. Clearly, the condition $Par\mu(L_*) = Par\mu(E_*)$ allows only finitely many possible choices for $deg(L)$. Therefore, we get a map \begin{align*} \Sigma_{\gamma} &\overset{\varphi}{\longrightarrow} \coprod_{i=1}^{k}\mathcal{M}(1,d_i,\alpha^{i})\\ [E_*] &\longmapsto \begin{cases} [L_*],\,\,if\,\,d=deg(L)\geq 1 \\ [(L^{-1}\otimes \mathcal{L})_*], \,\,if\,\,d\leq 0 \end{cases} \end{align*} where the $\alpha^{i}$'s are the weights induced from $\alpha$, and the $d_i$'s belong to the set of all possible positive degrees arising this way. We note that it is possible for the same line bundle to acquire different Parabolic structures by being a sub-bundle of different Parabolic bundles in $\Sigma_{\gamma}$, so the $d_i$'s may not be distinct, but the $\alpha^{i}$'s will necessarily be distinct. \\ To see that $\varphi$ is surjective, let $[L_*]\in \mathcal{M}(1,d_i,\alpha^{i})$ for some $i$. We define a Parabolic structure on $L^{-1}\otimes \mathcal{L}$ as follows: at each Parabolic point $P$, we have the chain $0\leq \alpha_{P,1}<\alpha_{P,2}<1$, and $\alpha^i_P\in\{\alpha_{P,1},\alpha_{P,2}\}$. Say $\alpha^i_P = \alpha_{P,1}$; then we attach to $L^{-1}\otimes \mathcal{L}$ the weight $\alpha_{P,2}$ at $P$ (and vice versa if $\alpha^i_P = \alpha_{P,2}$). We do this at every Parabolic point to get $(L^{-1}\otimes \mathcal{L})_*$. By the choice of the $d_i$'s, the condition $Par\mu(L_*) = Par\mu((L^{-1}\otimes \mathcal{L})_*)$ is satisfied, so that $E_* :=L_*\oplus (L^{-1}\otimes \mathcal{L})_*$ belongs to $\Sigma_{\gamma}$ and maps to $[L_*]$.
To see $\varphi$ is injective, let $\varphi([E_*]) = \varphi([E'_*])$, and suppose $gr_{\gamma}(E_*) = L_*\oplus (L^{-1}\otimes \mathcal{L})_*$ and $gr_{\gamma}(E'_*) = L'_*\oplus (L'^{-1}\otimes \mathcal{L})_* $, where $L\subset E, L'\subset E'$ are line sub-bundles, and let $d:=deg(L),d':=deg(L')$. Without loss of generality assume that $d\geq 0$; if $d'\geq 0$ as well, by the definition of $\varphi$ we get $L_*\simeq L'_*$. But notice that an isomorphism of Parabolic line bundles forces the weights of $L_*$ and $L'_*$ at each Parabolic point to be equal; otherwise, by the definition of a morphism of Parabolic bundles (see Definition \ref{defparhomo}), if the weight of $L_*$ at $P$ is bigger than the weight of $L'_*$ at $P$, then the morphism restricted to the fibers at $P$ must be zero, contradicting that it is an isomorphism. This also forces $L\simeq L'$ as usual line bundles, which induces an isomorphism $L^{-1}\otimes \mathcal{L}\simeq L'^{-1}\otimes \mathcal{L}$, and since the Parabolic weights of $L_*$ and $L'_*$ agree at each $P$, the same must be true for the induced Parabolic structures on $L^{-1}\otimes \mathcal{L}$ and $L'^{-1}\otimes \mathcal{L}$ as well. Hence we get $(L^{-1}\otimes \mathcal{L})_*\simeq (L'^{-1}\otimes \mathcal{L})_*$. Finally, combining the two isomorphisms, we get $L_* \oplus (L^{-1}\otimes \mathcal{L})_* \simeq L'_*\oplus (L'^{-1}\otimes \mathcal{L})_* \implies [E_*]=[E'_*]$.\\ If $d'<0$, we will get $L_*\simeq (L'^{-1}\otimes \mathcal{L})_*$, and the same argument goes through.\\ Therefore we get our claim. \end{proof} \begin{lemma} Let $n=dim\N = dim\Malpha$.
$\psi_{\alpha}:\N \rightarrow \Malpha$ induces an isomorphism \[\psi_{\alpha}^*: \CH_{n-2}(\Malpha)_{hom}\otimes\Q \simeq \CH_{n-2}(\N)_{hom}\otimes\Q\] \end{lemma} \begin{proof} We observe that since codim$\Sigma_{\gamma} = 1+ n_{\alpha}+n_{\beta}$ and $\phi_{\alpha}^{-1}(\Sigma_{\gamma})$ is a $\mathbb{P}^{n_{\alpha}}$-bundle over $\Sigma_{\gamma}$, we get codim$(\phi_{\alpha}^{-1}(\Sigma_{\gamma})) = 1+ n_{\beta}$, hence by the blow-up formula for Chow groups (\cite[Theorem 9.27]{Voi2}), \begin{align} \CH_{n-2}(\N) &\simeq \CH_{n-2}(\Malpha)\oplus \underset{0\leq k\leq n_{\beta}-1}{\bigoplus} \CH_{n-2-n_{\beta}+k}(\phi_{\alpha}^{-1}(\Sigma_{\gamma}))\nonumber \\ &= \CH_{n-2}(\Malpha)\oplus \CH_{n-n_{\beta}-2}(\phialpha^{-1}(\Sigma_{\gamma}))\oplus \CH_{n-n_{\beta}-1}(\phialpha^{-1}(\Sigma_{\gamma})) \nonumber\\ \implies \CH^2(\N) & \simeq \CH^2(\Malpha)\oplus \CH^1(\phialpha^{-1}(\Sigma_{\gamma})) \oplus \CH^0(\phialpha^{-1}(\Sigma_{\gamma}))\label{eqn} \end{align} (The summands with $k\geq 2$ vanish, since they correspond to Chow groups of negative codimension on $\phialpha^{-1}(\Sigma_{\gamma})$.) Since $\CH^0(\phi_{\alpha}^{-1}(\Sigma_{\gamma})) = \mathbb{Z}$, clearly $\CH^0(\phialpha^{-1}(\Sigma_{\gamma}))_{hom}\otimes\Q=0$. We need to show that $\CH^1(\phialpha^{-1}(\Sigma_{\gamma}))_{hom}\otimes \Q=0$ as well. Recall that $\phialpha^{-1}(\Sigma_{\gamma})$ is a $\mathbb{P}^{n_{\alpha}}$-bundle over $\Sigma_{\gamma}$, and dim$(\phialpha^{-1}(\Sigma_{\gamma}))=n-n_{\beta}-1$. Using the projective bundle formula for Chow groups (\cite[Theorem 9.25]{Voi2}) we get \begin{align} &\CH^1(\phialpha^{-1}(\Sigma_{\gamma})) \simeq \CH^1(\Sigma_{\gamma}) \oplus \CH^0(\Sigma_{\gamma})\nonumber\\ \implies &\CH^1(\phialpha^{-1}(\Sigma_{\gamma}))_{hom}\otimes\Q \simeq \CH^1(\Sigma_{\gamma})_{hom}\otimes\Q \,\,\oplus\,\, \CH^0(\Sigma_{\gamma})_{hom}\otimes \Q \nonumber \end{align} Since $\CH^0(\Sigma_{\gamma})=\mathbb{Z}$, clearly $\CH^0(\Sigma_{\gamma})_{hom}\otimes\Q=0$.
Moreover, by Proposition \ref{sigmagammadescription}, $\CH^1(\Sigma_{\gamma}) =\CH^1(\coprod_{i=1}^{k} Pic^{d_i}(C)) =\oplus_{i=1}^{k}\CH^1(Pic^{d_i}(C))$, and since $\CH^1(Pic^{d_i}(C)) = \mathbb{Z}$, we get $\CH^1(Pic^{d_i}(C))_{hom}\otimes \Q = 0$ for all $i$, and hence $\CH^1(\Sigma_{\gamma})_{hom}\otimes\Q=0$ as well. The claim now follows from \ref{eqn}. \end{proof} \begin{theorem}\label{abeljacobiiso} For any generic weight $\alpha$, the Abel--Jacobi map $AJ^2: \CH_{n-2}(\Malpha)_{hom}\otimes\Q \rightarrow IJ^2(\Malpha)\otimes\Q$ is an isomorphism. \end{theorem} \begin{proof} This follows from essentially the same argument as in Theorem \ref{abeljacobiarbitwt}, except that in place of the commutative diagram used there, we use the following diagram, which is valid for all weights $\alpha$: \begin{align*} \xymatrix{ \CH_{n-2}(\Malpha)_{hom}\otimes\Q \ar[r]^{\simeq} \ar[d]^{AJ^2} & \CH_{n-2}(\N)_{hom}\otimes \Q \ar[d]_{AJ^2} \\ IJ^2(\Malpha)\otimes \Q \ar[r]^{\simeq} & IJ^2(\N)\otimes\Q } \end{align*} \end{proof} \end{document}
\begin{document} \begin{frontmatter} \title{Efficient Structural Descriptor Sequence to Identify Graph Isomorphism and Graph Automorphism} \author[siva]{Sivakumar Karunakaran} \ead{sivakumar\[email protected]} \author[lavanya]{Lavanya Selvaganesh\corref{cor1}} \ead{[email protected]} \cortext[cor1]{Corresponding author. This work was presented as an invited talk at ICDM 2019, 4-6 Jan 2019 at Amrita University, Coimbatore, India} \address[siva]{SRM Research Institute, S R M Institute of Science and Technology Kattankulathur, Chennai - 603203, INDIA} \address[lavanya]{Department of Mathematical Sciences, Indian Institute of Technology (BHU), Varanasi-221005, INDIA} \begin{abstract} In this paper, we study the graph isomorphism and graph automorphism problems and propose a novel technique to analyze them. Further, we test our technique on some strongly regular graph datasets to demonstrate its efficiency. The neighbourhood matrix $ \mathcal{NM}(G) $ was proposed in \cite{ALPaper} as a novel representation of graphs and was defined using the neighbourhood sets of the vertices. It was also shown that the matrix can be obtained from the product of two well-known graph matrices, namely the adjacency matrix and the Laplacian matrix. Further, in a recent work \cite{NM_SPath}, we introduced the sequence of matrices representing the powers of $\mathcal{NM}(G)$, denoted by $ \mathcal{NM}^{\{l\}}, 1\leq l \leq k(G)$, where $ k(G) $ is called the \textbf{iteration number}, $k(G)=\ceil*{\log_{2}diameter(G)} $. In this article, we introduce a structural descriptor sequence and a clique sequence for undirected, unweighted simple graphs with the help of the sequence of matrices $ \mathcal{NM}^{\{l\}} $. The $ i^{th} $ element of the structural descriptor sequence encodes the complete structural information of the graph from the vertex $ i\in V(G) $. The $ i^{th} $ element of the clique sequence encodes the maximal cliques on $ i $ vertices.
The above sequences are shown to be graph invariants and are used to study the graph isomorphism and automorphism problems. \end{abstract} \begin{keyword} Graph Matrices, Graph Isomorphism, Maximal clique, Graph Automorphism, Product of Matrices, Structural descriptors. \MSC{05C50, 05C60, 05C62, 05C85} \end{keyword} \end{frontmatter} \section{Introduction} One of the classical and long-standing problems in graph theory is the graph isomorphism problem. The study of graph isomorphism is not only of theoretical interest, but has also found applications in diversified fields such as cryptography, image processing, chemical structure analysis, biometric analysis, mathematical chemistry, bioinformatics, and gene network analysis, to name a few. The challenging task in solving the graph isomorphism problem is finding the permutation matrix, if it exists, that relates the adjacency matrices of the given graphs. There are $ n! $ possible permutation matrices, where $ n $ is the number of vertices, and finding one suitable choice is time consuming. There are various graph invariants that can be used to identify two non-isomorphic graphs. That is, for any two graphs, if a graph invariant differs, then it immediately follows that the graphs are not isomorphic. For example, the following invariants have been well studied for this purpose: the number of vertices, the number of edges, the degree sequence, the diameter, the colouring number, the eigenvalues of the associated graph matrices, etc. However, equality of any such invariant does not allow one to declare the graphs isomorphic. For example, there are non-isomorphic graphs which have the same spectrum; such graphs are referred to as isospectral graphs. Computationally, many algorithms have been proposed to solve the graph isomorphism problem.
Ullman \cite{Sub_Iso_Ullman} proposed the first algorithm for a more general problem known as subgraph isomorphism, whose running time is $ \mathcal{O}(n! n^{3}) $. During the last decade, many algorithms such as VF2 \cite{VF2_Luigi}, VF2++ \cite{Alpar_VF2++}, QuickSI \cite{QuickSI} and GADDI \cite{GADDI_Algori} have been proposed to improve the efficiency of Ullman's algorithm. A recent development was made in 2016 by Babai \cite{Quasipolynomial_Babai}, who proposed a more theoretical algorithm based on a divide-and-conquer strategy to solve the graph isomorphism problem in quasi-polynomial time ($ 2^{(\log n)^{\mathcal{O}(1)}} $), which has certain implementation difficulties. Due to the non-availability of a polynomial-time algorithm, many researchers have also studied this problem for special classes of graphs such as trees, planar graphs, interval graphs, and graphs with bounded parameters such as genus, degree and treewidth \cite{Pol_Fixed_Genus,Poly_tree,Poly_Interval,Poly_Permutation,Poly_Planar}. The maximal clique and graph automorphism problems are also interesting, long-standing problems in graph theory. A great deal of research has been done on the graph automorphism problem \cite{REF_1,REF_2,REF_3,REF_4,REF_8,REF_9} and the clique problem \cite{REF_10,REF_11,REF_12,REF_13}. In this paper, we study the graph isomorphism and graph automorphism problems using our newly proposed structural descriptor sequence and clique sequence, which are constructed entirely from our novel techniques. These sequences are used to reduce the running time for classifying non-isomorphic graphs and for finding the automorphism groups. We use the recently proposed matrix representation of graphs using neighbourhood sets, called the neighbourhood matrix $ \mathcal{NM}(G) $ \cite{ALPaper}. Further, we also iteratively construct a finite sequence of powers of the given graph and their corresponding neighbourhood matrices \cite{NM_SPath}.
This sequence of graph matrices helps us to define a collection of measures capturing the structural information of graphs. The paper is organized as follows: Section 2 presents all required basic definitions and results. Section 3 is the main section, which defines the collection of measures, the structural descriptor, and the main result for testing graph isomorphism. In Section 4, we discuss the clique sequence and its time complexity. Section 5 demonstrates the efficiency of our structural descriptor sequence and clique sequence. Section 6 describes how to find the automorphism groups of a given graph, together with the time complexity. We conclude the paper in Section 7. \section{Sequence of powers of $ \mathcal{NM }(G)$ matrix} Throughout this paper, we consider only undirected, unweighted simple graphs. For all basic notations and definitions of graph theory, we follow the books by J.A. Bondy and U.S.R. Murty \cite{Graphtheory} and D.B. West \cite{GraphTheoryWest}. In this section, we present all the required notations and definitions. A graph $ G $ is an ordered pair $ (V(G),E(G)) $ consisting of a set $ V(G) $ of vertices and a set $ E(G) $, disjoint from $ V(G) $, of edges, together with an incidence function $ \psi_{G} $ that associates with each edge of $ G $ an unordered pair of vertices of $ G $. As we work with matrices, the vertex set $ V(G) $ is considered to be labelled with the integers $ \{1,2,...,n\} $. For a vertex $v\in V(G)$, let $N_{G}(v)$ denote the set of all neighbours of $ v $. The degree of a vertex $v$ is given by $ d_{G}(v) $ or $|N_{G}(v)|$. The diameter of the graph is denoted by $ diameter(G)$ and the shortest path distance between two vertices $i$ and $j$ in $G$ is denoted by $ d_{G}(i,j), i,j\in V(G)$. Let $ A_{G} \text{ or } A(G), D(G) $ and $ C(G)(:=D(G)-A(G)) $ denote the adjacency matrix, the degree matrix and the Laplacian/admittance matrix of the graph $ G $ respectively.
Two graphs $ G $ and $ H $ are isomorphic, written $ G \cong H $, if there are bijections $ \theta :V(G)\rightarrow V(H) $ and $ \phi: E(G)\rightarrow E(H) $ such that $ \psi_{G}(e)=uv $ if and only if $ \psi_{H}(\phi(e))=\theta(u)\theta(v) $ (that is, a bijection of vertices preserving adjacency); such a pair of mappings is called an isomorphism between $ G $ and $ H $. An automorphism of a graph is an isomorphism of the graph to itself. In the case of a simple graph, an automorphism is just a permutation $ \alpha $ of its vertex set which preserves adjacency: if $ uv $ is an edge then so is $ \alpha (u) \alpha(v) $. The automorphism group of $ G $, denoted by $ Aut(G) $, is the set of all automorphisms of the graph $ G $. Graphs in which no two vertices are similar are called asymmetric; these are the graphs which have only the identity permutation as an automorphism. The graph isomorphism problem tests whether two given graphs are isomorphic or not. In other words, it asks whether there is a one-to-one mapping between the vertices of the graphs preserving adjacency. \begin{Defn}\label{d3} {\normalfont{ \cite{ALPaper}}} Given a graph $ G $, the neighbourhood matrix, denoted by $ \mathcal{NM}(G) =(\eta_{ij})$, is defined as $$\eta_{ij}=\begin{cases} -|N_{G}(i)|, & \text{ if } i=j\\ |N_{G}(j)-N_{G}(i)|, & \text{ if } (i,j)\in E(G)\\ -|N_{G}(i) \cap N_{G}(j)|, & \text{ if } (i,j)\notin E(G) \\ \end{cases} $$ \end{Defn} The following lemma provides a way of computing the neighbourhood matrix. Proofs of the following results are given in the appendix for immediate reference, as they are awaiting publication elsewhere. \begin{Lemma}\label{prop.2}{\normalfont{ \cite{ALPaper}}} Given a graph $ G $, the entries of any row of $ \mathcal{NM}(G) $ correspond to the subgraph with vertices from the first two levels of the level decomposition of the graph rooted at the given vertex, with edges connecting the vertices in different levels.
$ \square $ \end{Lemma} \begin{Remark}\label{Remark.2}{\normalfont{ \cite{ALPaper}}} The above lemma also reveals the following information about the neighbourhood matrix, which justifies its terminology. For any $ i^{th} $ row of $ \mathcal{NM}(G) $, \begin{enumerate} \item The diagonal entries are either negative or zero. In particular, if $ \eta_{ii}=-c $, then the degree of the vertex is $ c $ and there will be exactly $ c $ positive entries in that row. If $ \eta_{ii}=0 $ then the vertex $ i $ is isolated. \item For some positive integer $ c $, if $ \eta_{ij}=c $ then $ j\in N_{G}(i) $ and there exist $ (c-1) $ vertices adjacent to $ j $ at distance $ 2 $ from $ i $ through $ j $. \item If $ \eta_{ij}=-c $, then $ d_{G}(i,j)=2 $ and there exist $ c $ paths of length two from vertex $ i $ to $ j $. In other words, there exist $ c $ common neighbours between the vertices $ i $ and $ j $. \item If an entry $ \eta_{ij}=0 $, then the distance between vertices $ i $ and $ j $ is at least $ 3 $ or the vertices $ i $ and $ j $ lie in different components.$ \square $ \end{enumerate} \end{Remark} \begin{Defn}\label{Def.1} \normalfont{\cite{NM_SPath} } Given a graph $ G $, let $ G^{\{1\}}=G$ and $ \mathcal{NM}^{\{1\}} = \mathcal{NM}(G^{\{1\}}) $. For $l>1$, let $G^{\{l\}}$ be the graph constructed from the graph $G^{\{l-1\}}$ as below: $ V(G^{\{l\}})=V(G^{\{l-1\}}) $ and $ E(G^{\{l\}})=E(G^{\{l-1\}})\cup \{(i,j): \eta_{ij}^{\{l-1\}}<0, i\neq j\} $. The sequence of matrices, denoted by $ \mathcal{NM}^{\{l\}} $, is defined iteratively as $ \mathcal{NM}^{\{l\}}=\mathcal{NM}(G^{\{l\}}) $ and can be constructed by definition \ref{d3}. We refer to this finite sequence of matrices as \textbf{\textit{Sequence of Powers of $ \mathcal{NM} $}}.
\end{Defn} \begin{Remark} \normalfont{\cite{NM_SPath} } The adjacency matrix $ A(G^{\{l\}}) $ is given by: $$ A(G^{\{l\}}) = (a^{\{l\}}_{ij}) =\begin{cases} 1,& \text{ if } \eta_{ij}^{\{l-1\}}\neq 0, i\neq j\\ 0,& \text{ otherwise } \end{cases} $$ $ \square $ \end{Remark} \begin{Defn}\label{Def.2} \normalfont{\cite{NM_SPath} } Let $k$ be the smallest possible integer such that $\mathcal{NM}^{\{k\}}$ either has no zero entry or the number of non-zero entries of $\mathcal{NM}^{\{k\}} $ and $\mathcal{NM}^{\{k+1\}} $ is the same. The number $k(G):=k$ is called the \textbf{\textit{Iteration Number}} of $G$. \end{Defn} \begin{Remark}\label{Rem.2} \normalfont{\cite{NM_SPath} } When $G = K_n$, the complete graph, $\mathcal{NM}(K_n)$ has no zero entries. Hence for $K_n$, the iteration number is $k(K_n) = 1$. Further, for a graph $G$ with diameter 2, $\mathcal{NM}(G)$ has no zero entries, hence the iteration number $k(G) = 1$. $ \square $ \end{Remark} Let the number of non-zero entries in $\mathcal{NM}^{\{k\}} $ be denoted by $z$. Note that $z\leq n^2$. \begin{Theorem}\label{Thm.4} \normalfont{\cite{NM_SPath} } A graph $ G $ is connected if and only if $ z = n^{2} $. In addition, the iteration number $k(G)$ of a graph $G\neq K_{n}$ is given by $ k(G)=\ceil*{\log_{2}(diameter(G))} $. (For $ G=K_{n}, k(G)=1 $ and $ \mathcal{NM}^{\{1\}}=\mathcal{NM}(G)=-C(G)$.) $ \square $ \end{Theorem} \begin{Corollary}\label{Cor.1} \normalfont{\cite{NM_SPath} } A graph $G$ is disconnected if and only if $ z<n^{2} $. Further, the iteration number $ k(G) $ of $G$ is given by $k(G)=\ceil*{\log_{2}S} $, where $ S $ is the maximum over the diameters of the components of $G$.
$ \square $ \end{Corollary} \begin{Defn}\label{IsoDef1} Let $ N({G^{\{l\}}},i)$ denote the neighbourhood set of a vertex $ i $ in $ G^{\{l\}} $, that is, \begin{equation}\label{Isoeq.1} N({G^{\{l\}}},i) =\{x : \eta_{ix}^{\{l\}}>0\} \end{equation} Let $ X(G^{\{l\}},i) $ be the set of vertices given by \begin{equation}\label{Isoeq.2} X(G^{\{l\}},i)=\{y:\eta_{iy}^{\{l\}}<0, i\neq y \} \end{equation} \end{Defn} Note that, when $ l=1 $, $ N(G^{\{1\}},i)=N_{G}(i)$. For any $ l>1 $, $ N({G^{\{l\}}},i)= N({G^{\{l-1\}}},i) \cup X({G^{\{l-1\}}},i)$, for any given $ i\in V(G). $ \begin{Theorem}\label{Thm.1}\normalfont{\cite{NM_SPath} } Let $G$ be a graph on $n$ vertices and $k(G)$ be the iteration number of $G$. For $1 \leq l \leq k(G)$, the off-diagonal elements of $\mathcal{NM}^{\{l\}}$ can be characterized as follows: For $ 1\leq i\neq j\leq n $ \begin{enumerate} \item {$\eta^{\{l\}}_{ij}=0$ if and only if $ d_{G}(i,j)> 2^{l}$} \item {$\eta^{\{l\}}_{ij}>0$ if and only if $0 < d_{G}(i,j)\leq 2^{l-1}$} \item {$\eta^{\{l\}}_{ij}<0$ if and only if $2^{l-1}<d_{G}(i,j) \leq 2^l$} $ \square $ \end{enumerate} \end{Theorem} By combining definition \ref{d3}, definition \ref{Def.1} and Theorem \ref{Thm.1} we state the following corollaries without proof. \begin{Corollary}\label{RRM1} For a given graph $ G $, if $ \eta^{\{l\}}_{ij} >0$, for some $ l\leq k(G) $, then the following conditions are equivalent: \begin{enumerate} \item $ (i,j)\in E(G^{\{p\}}) $, where $ p\geq l $. \item $ \eta^{\{l\}}_{ij}=|N(G^{\{l\}},j) -N(G^{\{l\}},i)| $. 
\item $ d_{G}(i,j)\leq 2^{l-1} $ $ \square $ \end{enumerate} \end{Corollary} \begin{Corollary}\label{RRM2} For a graph $ G $, if $ \eta^{\{l\}}_{ij}=0, i\neq j $, then the following conditions are equivalent: \begin{enumerate} \item$ (i,j)\notin E(G^{\{p\}}) $, where $ 1\leq p\leq l+1 $ \item $ d_{G}(i,j)>2^{l} $ $ \square $ \end{enumerate} \end{Corollary} \begin{Corollary}\label{RRM3} For a graph $ G $, if $ \eta^{\{l\}}_{ij}<0, i\neq j$, for some $ l,1\leq l\leq k(G) $, then the following conditions are equivalent: \begin{enumerate} \item $ (i,j)\in E(G^{\{p\}}) $, where $ p\geq l+1 $ \item $ \eta^{\{l\}}_{ij}=-|N(G^{\{l\}},i)\cap N(G^{\{l\}},j)|$ \item $ 2^{l-1}<d_{G}(i,j)\leq 2^{l} $ $ \square $ \end{enumerate} \end{Corollary} \section{Structural descriptor sequence of a graph} Let $ \mathcal{NM}^{\{l\}}(G)$ be the sequence of powers of $ \mathcal{NM} $ corresponding to a graph $ G $, where $ 1\leq l\leq k(G), k(G)=\ceil*{\log_{2}diameter(G)} $. We now define a novel collection of measures to quantify the structure of a graph: \begin{Defn} \label{IsoDef2} Let $ w_{1}, w_{2}, w_{3}, w_{4}, w_{5}$ and $ w_{6} $ be six distinct irrational numbers. For $ x\in N(G^{\{l\}},i) $, let \begin{equation}\label{Isoeq.3} M_{1}(G^{\{l\}},i,x)={\Bigg(\dfrac{\eta_{ix}^{\{l\}}}{w_{1}} \Bigg)} + {\Bigg(\dfrac{|\eta_{xx}^{\{l\}}|- \eta_{ix}^{\{l\}} +w_{3}}{w_{2}} \Bigg).} \end{equation} Consider an ordering of the elements $\langle x_{1}, x_{2},...\rangle $ of $ N(G^{\{l\}},i) $, such that $ M_{1}(G^{\{l\}},i,x_{1})\leq M_{1}(G^{\{l\}},i,x_{2})\leq...\leq M_{1}(G^{\{l\}},i,x_{|N({G^{\{l\}}},i)|}) $.
For $ y\in X(G^{\{l\}},i) $, let \begin{equation}\label{Isoeq.4} \begin{aligned} M_{2}(G^{\{l\}},i,y) {} = & {} {\Bigg(\dfrac{|\eta_{iy}^{\{l\}}|}{w_{4}} \Bigg)}+ {\Bigg(\dfrac{|N({G^{\{l\}}},y)\cap X({G^{\{l\}}},i)| +w_{3}}{w_{5}} \Bigg)}\\ & + {\Bigg(\dfrac{|\eta_{yy}^{\{l\}}|-|N({G^{\{l\}}},y)\cap X({G^{\{l\}}},i)|-|\eta_{iy}^{\{l\}}|+w_{3}}{w_{6}} \Bigg).} \end{aligned} \end{equation}\normalsize Consider an ordering of the elements $ \langle y_{1}, y_{2},...\rangle $ of $ X(G^{\{l\}},i) $, such that $ M_{2}(G^{\{l\}},i,y_{1})\leq M_{2}(G^{\{l\}},i,y_{2})\leq...\leq M_{2}(G^{\{l\}},i,y_{|X({G^{\{l\}}},i)|}) $. \end{Defn} Note that by Lemma \ref{prop.2}, the induced subgraph obtained from the two-level decomposition of $ G^{\{l\}} $ with root $ i $ is given by the vertex set $ [i\cup N(G^{\{l\}},i)\cup X(G^{\{l\}},i) ] $. In the above definition, $ M_{1} $ is the measure computed as a weighted sum counting the number of edges that connect vertices in level 1 to level 2 and the number of edges that connect vertices within level 1. That is, for $x\in N(G^{\{l\}},i)$, $\eta^{\{l\}}_{ix} $ represents the number of vertices connected with $ x $ and not belonging to the same level as $ x $, while $ |\eta^{\{l\}}_{xx}|-\eta^{\{l\}}_{ix} $ counts the number of vertices connected to $ x $ and belonging to the same level as $ x $. We use different weights, namely $ \dfrac{1}{w_{1}}, \dfrac{1}{w_{2}}, \ldots $, to distinguish these counts. Similarly, $ M_{2} $ is the weighted sum counting the number of edges that connect vertices in level $ 2 $ with vertices in level $ 1 $, vertices within level $ 2 $, and vertices in level $ 3 $. That is, for $ y\in X(G^{\{l\}},i)$, $|\eta^{\{l\}}_{iy}| $ represents the number of vertices adjacent to $ y $ from level $ 1 $, and $ |N(G^{\{l\}},y)\cap X(G^{\{l\}},i)| $ counts the number of vertices in level $ 2 $ adjacent to $ y $. Note that the matrix does not immediately reveal the edges present within this level.
One needs extra effort to find these. The next term, $ |\eta_{yy}^{\{l\}}|-|N({G^{\{l\}}},y)\cap X({G^{\{l\}}},i)|-|\eta_{iy}^{\{l\}}| $, counts the number of vertices adjacent to $ y $ and not belonging to level 1 or level 2. Here again, we use different weights, $ \dfrac{1}{w_{4}},\dfrac{1}{w_{5}}, \dfrac{1}{w_{6}}$, to keep track of these values. The irrational number $ w_{3} $ is used to keep a count of the number of vertices which are isolated within their level in the level decomposition. By Corollaries \ref{RRM1}, \ref{RRM2} and \ref{RRM3}, the integer coefficients in $ M_{1}(G^{\{l\}},i,x) $ and $ M_{2}(G^{\{l\}},i,y) $ give the complete information about the induced subgraph $ [i\cup N(G^{\{l\}},i)\cup X(G^{\{l\}},i)] $ along with the number of edges connecting it to the outside, for any given $ l $. Let us now define a finite sequence in which each element is associated with a vertex. We call this sequence the \textbf{\textit{Structural Descriptor Sequence}}. \begin{Defn}\label{IsoDef3} A finite sequence, known as the \textbf{\textit{Structural Descriptor Sequence}}, $ R_{G} (i)$, for $ i\in V(G) $, is defined as the weighted sum of the ordered sets of measures $ M_{1}(G^{\{l\}},i,x_{j}) $ and $ M_{2}(G^{\{l\}},i,y_{j}) $ given by \begin{equation} \label{Isoeq.5} R_{G}(i)=\sum\limits_{l=1}^{k} \dfrac{1}{Irr(l)} \Bigg({{\sum\limits_{j=1}^{|N({G^{\{l\}}},i)|}\dfrac{M_{1}({G^{\{l\}}},i,x_{j})}{Irr(j)}}+ {\sum\limits_{j=1}^{|X({G^{\{l\}}},i)|}\dfrac{M_{2}({G^{\{l\}}},i, y_{j})}{Irr(j)}}} \Bigg ) \end{equation} where $ Irr $ is a finite sequence of irrational numbers of the form $ \langle \sqrt{2}, \sqrt{3}, \sqrt{5},...\rangle $. \end{Defn} In the above definition, note that for every $ i\in V(G) $, $ R_{G}(i) $ captures the complete structural information about the node $ i $, and hence the finite sequence $ \{R_{G}(i)\} $ is a complete descriptor of the given graph.
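To make Definitions \ref{IsoDef2} and \ref{IsoDef3} concrete, here is a small Python sketch (ours, not the paper's pseudocode) that evaluates $R_{G}$ in the special case $k(G)=1$, i.e. for graphs of diameter at most $2$, where the single matrix $\mathcal{NM}^{\{1\}}=\mathcal{NM}(G)$ suffices. The weights are the ones fixed later in the implementation section; the function name \texttt{descriptor} is an assumption of this sketch.

```python
import math

# Weights as chosen in the implementation section:
# w1..w6 = sqrt(7), sqrt(11), sqrt(3), sqrt(13), sqrt(17), sqrt(19)
W1, W2, W3, W4, W5, W6 = (math.sqrt(p) for p in (7, 11, 3, 13, 17, 19))
# Irr = <sqrt 2, sqrt 3, sqrt 5, ...> (extend the prime list for larger graphs)
IRR = [math.sqrt(p) for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29)]

def descriptor(NM):
    """R_G for a graph of diameter <= 2, where k(G) = 1 and the single
    matrix NM^{1} = NM(G) suffices (equations (1)-(5))."""
    n = len(NM)
    R = []
    for i in range(n):
        N_i = [x for x in range(n) if NM[i][x] > 0]               # eq. (1)
        X_i = [y for y in range(n) if y != i and NM[i][y] < 0]    # eq. (2)
        # M1 over level-1 vertices, ordered increasingly, eq. (3)
        m1 = sorted(NM[i][x] / W1 + (abs(NM[x][x]) - NM[i][x] + W3) / W2
                    for x in N_i)
        # M2 over level-2 vertices, ordered increasingly, eq. (4)
        m2 = []
        for y in X_i:
            c = sum(1 for x in X_i if NM[y][x] > 0)   # |N(y) ∩ X(i)|
            m2.append(abs(NM[i][y]) / W4 + (c + W3) / W5
                      + (abs(NM[y][y]) - c - abs(NM[i][y]) + W3) / W6)
        m2.sort()
        # eq. (5): single outer factor 1/Irr(1) since k(G) = 1
        R.append((sum(v / IRR[j] for j, v in enumerate(m1))
                  + sum(v / IRR[j] for j, v in enumerate(m2))) / IRR[0])
    return R
```

For instance, for the path $P_3$ (diameter 2, $\mathcal{NM}=\big[\begin{smallmatrix}-1&2&-1\\1&-2&1\\-1&2&-1\end{smallmatrix}\big]$) the two end vertices receive equal descriptor values, while the centre vertex receives a different one.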
We use the \textbf{\textit{Structural Descriptor Sequence}} explicitly to study the graph isomorphism problem. The rest of this section discusses this problem and a feasible solution. \begin{Remark} The following inequalities are immediate from the definition. For any given $ i\in V(G),1\leq l\leq k(G) $ and $ x\in N(G^{\{l\}},i), y\in X(G^{\{l\}},i) $, we have \begin{enumerate} \item $ 1\leq \eta_{ix_{j}}^{\{l\}}\leq (n-1) $ \item $ 0\leq |\eta_{x_{j}x_{j}}^{\{l\}}|- \eta_{ix_{j}}^{\{l\}}\leq (n-2) $ \item $ 1\leq |\eta_{iy_{j}}^{\{l\}}|\leq (n-2)$ \item $ 0\leq |N({G^{\{l\}}},y_{j})\cap X({G^{\{l\}}},i)|\leq (n-3) $ \item $ 0\leq \Big(|\eta_{y_{j}y_{j}}^{\{l\}}|-|N({G^{\{l\}}},y_{j})\cap X({G^{\{l\}}},i)|-|\eta_{iy_{j}}^{\{l\}}|\Big)\leq (n-3) $ $ \square $ \end{enumerate} \end{Remark} It is well known that two graphs with different numbers of vertices or with non-identical degree sequences are non-isomorphic. Hence, we next prove a theorem to test isomorphism between two graphs having the same number of vertices and identical degree sequences. \begin{Theorem} \label{Poly_Iso_Thm_1} If two graphs $ G $ and $ H $ are isomorphic, then the corresponding sorted structural descriptor sequences $ R_{G} $ and $ R_{H} $ are identical. \end{Theorem} \begin{proof} Suppose $ G $ and $ H $ are isomorphic. Then $ k(G)=k(H) $, and there exists an adjacency-preserving bijection $\phi :V(G)\rightarrow V(H) $ such that $ (i,j)\in E(G) $ if and only if $ (\phi(i),\phi(j))\in E(H) $, where $ 1\leq i,j\leq n $. It is well known that, for any $ i\in V(G) $ and for given $ l, 1\leq l\leq k(G) $, the subgraph induced by $ \{x\in V(G) : d_{G}(i,x)\leq 2^{l-1}\}$ is isomorphic to the subgraph induced by $\{\phi(x)\in V(H) : d_{H}(\phi(i),\phi(x))\leq 2^{l-1}\} $. Similarly, the subgraph induced by the set of vertices $ \{y\in V(G) : 2^{l-1}<d_{G}(i,y)\leq 2^{l}\}$ is isomorphic to that induced by $\{\phi(y)\in V(H) :2^{l-1}< d_{H}(\phi(i),\phi(y))\leq 2^{l}\} $.
By Corollaries \ref{RRM1} and \ref{RRM3} and Definition \ref{IsoDef1}, $\phi$ maps $ N(G^{\{l\}},i) $ onto $ N(H^{\{l\}},\phi(i)) $ and $X(G^{\{l\}},i) $ onto $ X(H^{\{l\}},\phi(i)) $. By equations (\ref{Isoeq.3}) and (\ref{Isoeq.4}), for every $ i\in V(G) $ and every $ x_{j}\in N(G^{\{l\}},i)$ we have $M_{1}(G^{\{l\}},i,x_{j})=M_{1} (H^{\{l\}},\phi(i),\phi(x_{j})) $, and for every $ i\in V(G) $ and every $ y_{j}\in X(G^{\{l\}},i) $ we have $ M_{2}(G^{\{l\}},i,y_{j})=M_{2} (H^{\{l\}},\phi(i),\phi(y_{j})) $. Since each of the quantities $ M_{1}(G^{\{l\}},i,x_{j}), M_{1}(H^{\{l\}},\phi(i),\phi(x_{j})), M_{2}(G^{\{l\}},i,y_{j}) $ and $ M_{2}(H^{\{l\}},\phi(i),\phi(y_{j}) ) $ is a linear combination of irrational numbers, by equating the coefficients of like terms we further get the following five equalities between the entries of $ \mathcal{NM}^{\{l\}}(G) $ and $ \mathcal{NM}^{\{l\}}(H) $. \begin{eqnarray}\label{ee1} \eta^{\{l\}}_{ix_{j}} & = & \eta^{\{l\}}_{\phi(i)\phi(x_{j})} \\ \label{ee2} \eta^{\{l\}}_{iy_{j}} & = & \eta^{\{l\}}_{\phi(i)\phi(y_{j})} \\ \label{ee3} \big(|\eta^{\{l\}}_{x_{j}x_{j}}|-\eta^{\{l\}}_{ix_{j}}\big )& = &\big ( |\eta^{\{l\}}_{\phi (x_{j})\phi (x_{j})}|-\eta^{\{l\}}_{\phi(i) \phi (x_{j})}\big ) \\ \label{ee4} \big |N({G^{\{l\}}},y_{j})\cap X({G^{\{l\}}},i)\big |& = &\big |N({H^{\{l\}}},\phi (y_{j}))\cap X({H^{\{l\}}},\phi(i))\big | \end{eqnarray} \begin{equation} \label{ee5} \begin{aligned} \big(|\eta_{y_{j}y_{j}}^{\{l\}}|-|\eta_{iy_{j}}^{\{l\}}|-|N({G^{\{l\}}},y_{j})\cap X({G^{\{l\}}},i)|\big) {} & = \big(|\eta_{\phi (y_{j})\phi(y_{j})}^{\{l\}}| -|\eta_{\phi(i)\phi(y_{j})}^{\{l\}}| \\ & -|N({H^{\{l\}}},\phi(y_{j}))\cap X({H^{\{l\}}},\phi(i))|\big) \end{aligned} \end{equation} \normalsize where $ x_{j}\in N(G^{\{l\}},i) $ and $ y_{j}\in X(G^{\{l\}},i) $, for every $ i\in V(G) $ and for every $ l, 1\leq l\leq k(G) $. Therefore there exists a bijection from $ R_{G}$ to $ R_{H} $ satisfying $ R_{G}(i)=R_{H}(\phi(i)) $ for all $ i\in V(G)$. \end{proof} \begin{Remark}\label{REMK3} The converse of Theorem \ref{Poly_Iso_Thm_1} need not hold. For example, in the case of strongly regular graphs with parameters $ (n,r,\lambda,\mu) $, where $ n $ is the number of vertices, $ r $ the regularity, $ \lambda $ the number of common neighbours of adjacent vertices and $ \mu $ the number of common neighbours of non-adjacent vertices, the sequences are always identical for any two graphs with the same parameters. \end{Remark} \subsection{Algorithm to compute the structural descriptor sequence} We now present a brief description of the proposed algorithm to find the structural descriptor sequence for any given graph $ G $. The pseudocode of the algorithms is given in the Appendix. The objective of this work is to get the unique structural sequence given by a descriptor of the graph $ G $ using Algorithms \ref{ALO.1} - \ref{ISOALO.3}. The algorithm is based on the construction of the sequence of graphs and their corresponding matrices $ \mathcal{NM}^{\{l\}},1\leq l\leq k(G) $. The structural descriptor sequence is a one-time calculation for any graph. If the sorted sequences are not identical, then we conclude that the corresponding graphs are non-isomorphic. Moreover, polynomial time suffices to construct the structural descriptor sequence. Given a graph $ G $, we use Algorithm \ref{ALO.1} from \cite{NM_SPath} to compute the sequence of powers of the neighbourhood matrix $ \mathcal{NM}^{\{l\}}$. In this module (Algorithm \ref{ALO.1}), we compute the $ \mathcal{NM}(G) $ matrix and the sequence of $ k(G) $ matrices, namely the powers of $ \mathcal{NM} (G)$ [denoted by $ SPG(:,:,:) $] associated with $ G $. The algorithm is referred to as \textsc{SP-$\mathcal{NM}(A)$}, where $ A $ is the adjacency matrix of $G$.
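For illustration, the following Python sketch (a minimal stand-in for Algorithm \ref{ALO.1}, not its pseudocode; the function names are ours) builds $\mathcal{NM}(G)$ directly from Definition \ref{d3} and checks it against the product $A(G)\,C(G)$ of the adjacency and Laplacian matrices used by \textsc{SP-$\mathcal{NM}$}.

```python
def neighbourhood_matrix(A):
    """NM(G) from Definition (d3): eta_ii = -|N(i)|;
    eta_ij = |N(j) \\ N(i)| if (i,j) is an edge;
    eta_ij = -|N(i) ∩ N(j)| if (i,j) is a non-edge, i != j."""
    n = len(A)
    nbr = [{j for j in range(n) if A[i][j]} for i in range(n)]
    NM = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                NM[i][j] = -len(nbr[i])
            elif A[i][j]:
                NM[i][j] = len(nbr[j] - nbr[i])
            else:
                NM[i][j] = -len(nbr[i] & nbr[j])
    return NM

def adj_times_laplacian(A):
    """A(G) * C(G) with C = D - A; agrees with NM(G) entrywise."""
    n = len(A)
    deg = [sum(row) for row in A]
    C = [[(deg[i] if i == j else 0) - A[i][j] for j in range(n)]
         for i in range(n)]
    return [[sum(A[i][k] * C[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# 4-cycle C4: the two constructions coincide, with -deg on the diagonal.
A = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
assert neighbourhood_matrix(A) == adj_times_laplacian(A)
```

The agreement of the two functions on any $0/1$ adjacency matrix mirrors the remark in the time-complexity subsection that $\mathcal{NM}(G)$ is obtained by multiplying $A(G)$ and $C(G)$.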
Next, we compute the sequences of real numbers $ M_{1}(G^{\{l\}},i,x_{j})$ and $ M_{2}(G^{\{l\}},i,y_{j})$, for all $ i $ and for given $l,1\leq l\leq k(G) $, using equations (\ref{Isoeq.1}), (\ref{Isoeq.2}), (\ref{Isoeq.3}), (\ref{Isoeq.4}) and (\ref{Isoeq.5}), which correspond to the entries of $ \mathcal{NM} $ describing the structure of the two-level decomposition rooted at each vertex $i\in V(G^{\{l\}})$, for any $l, 1\leq l\leq k(G) $. For evaluating equations (\ref{Isoeq.3}) and (\ref{Isoeq.4}), we choose the weights $ w_{1} $ to $ w_{6} $ as: $ w_{1}=\sqrt{7}, w_{2}=\sqrt{11}, w_{3}=\sqrt{3}, w_{4}=\sqrt{13}, w_{5}=\sqrt{17}, w_{6}=\sqrt{19} $; this is done by Algorithm \ref{ISOALO.3}. Finally, we compute the sequence of $ n $ real numbers given by equation (\ref{Isoeq.5}) for every vertex of the given graph $ G $, denoted by $ R_{G} $. The sequence $ R_{G} $ is constructed from the structural information of the two-level decomposition of $ G^{\{l\}}, 1\leq l\leq k$, where $k=\ceil*{\log_{2}diameter(G)} $, using the entries of the sequence of powers of $ \mathcal{NM}^{\{l\}} $; this is done by Algorithm \ref{ISOALO.2}. \subsection{Time Complexity} As stated before, the above described algorithm has been designed in three modules and is presented in the Appendix. In this module (Algorithm \ref{ISOALO.2}), we compute the structural descriptor sequence $ R_{G} $ corresponding to the given graph $ G $. The algorithm is named \textsc{$S_{-}D _{-} S(A,Irr)$}, where $ A $ is the adjacency matrix of $ G $ and $ Irr $ is the sequence of square roots of the first $ (n-1) $ primes. We use Algorithm \ref{ALO.1} within Algorithm \ref{ISOALO.2} to compute the $ \mathcal{NM}(G) $ matrix and the sequence of $ k(G) $ matrices, namely the powers of $ \mathcal{NM} (G)$ [denoted by $ SPG(:,:,:) $] associated with $ G $.
The algorithm is named \textsc{SP-$\mathcal{NM}(A)$}: we take the product of two matrices, the adjacency matrix $A(G)$ and the Laplacian $C(G)$, to obtain $\mathcal{NM}(G)$, and we repeat this $k(G)$ times, where $ k(G)= \ceil*{\log_{2}(diameter(G))}$, when $G$ is connected, and $ k(G) + 1$ times when $G $ is disconnected. We use the Coppersmith--Winograd algorithm \cite{C.W} for matrix multiplication, so the running time of Algorithm \ref{ALO.1} is $ \mathcal{O}(k\cdot n^{2.3737}) $. Finally, within Algorithm \ref{ISOALO.2} we use Algorithm \ref{ISOALO.3} to compute the structural descriptor for each matrix in the sequence of powers of $ \mathcal{NM}(G) $. This algorithm is named \textsc{Structural$ _{-} $Descriptor}($ \mathcal{NM},n,Irr $), where $ \mathcal{NM} $ is a matrix from the sequence of powers corresponding to the given $ G $ and $ n $ is the number of vertices. The running time of Algorithm \ref{ISOALO.3} is $ \mathcal{O}(n^{3}) $; therefore the total running time of Algorithm \ref{ISOALO.2} is $ \mathcal{O}(n^{3}\log n) $. Note that the contrapositive of Theorem \ref{Poly_Iso_Thm_1} states that if the sequences $ R_{G} $ and $ R_{H} $ differ in at least one position, then the graphs are non-isomorphic; we exploit this to study the graph isomorphism problem. In Algorithms \ref{ALO.1}--\ref{ISOALO.3}, we implement the computation of the structural descriptor sequence and use the theorem to decide whether the graphs are non-isomorphic. However, in view of Remark \ref{REMK3}, when the sequences are identical for two graphs, we cannot immediately conclude whether the graphs are isomorphic. Hence we extract more information about the structure of the graphs: using the first matrix of the sequence, namely $ \mathcal{NM}(G) $, we find the maximal cliques of all possible sizes and use this information to further discriminate the given graphs.
\section{Clique sequence of a graph} We first enumerate all maximal cliques of size $ i$, $1\leq i\leq w(G)$ (the clique number), and store them in a matrix of size $ t_{i}\times i $, where $ t_{i} $ is the number of maximal cliques of size $ i $; we denote this matrix by $ LK_{t_{i},i} $. Note that for each $ i $ we obtain one matrix, that is, $ w(G) $ such matrices in total. Secondly, for each $ i $, we count the number of size-$i$ maximal cliques to which each vertex $j$ belongs, and store these counts as a vector $(C_{j}^{i})$ of size $ n $, $1\leq j\leq n$. In this fashion we get $ w(G) $ such vectors. Consider the sequence obtained by computing, for each $ j $, $ R=\Bigg\{\sum\limits_{i=1}^{w(G)}\dfrac{C_{j}^{i}}{Irr(i)}\Bigg\}_{j=1}^{n}$. Finally, we construct the clique sequence $ CS $, defined as $ CS=\Bigg\{\sum_{j=1}^{n} C_{j}^{i}\cdot R(j) \Bigg\}_{i=1}^{w(G)} $. \subsection{Description of the Maximal Cliques Algorithm} In this section we present a brief description of the proposed algorithm for finding the maximal cliques of all possible sizes of a given graph $ G $. The algorithm is based on the neighbourhood matrix corresponding to $ G $; the enumeration is carried out by Algorithms \ref{Clique_Main} and \ref{Clique_Sub}. Algorithm \ref{Clique_Sub} is run iteratively inside Algorithm \ref{Clique_Main} with the following inputs: $ A $ is the adjacency matrix of the given graph; $ W \subset V $ is a set of vertices inducing a complete subgraph on $ 3d $ vertices, $ d = 1,2,\dots $; $X\subset V$ with $ X+W\subseteq G $, where the vertex labels in $ X $ are greater than those in $ W $; and $Y\subset V$ with $ Y+W\subset G $, where the vertex labels in $ X $ are greater than those in $ Y $. Initially $W=\emptyset$, $X=\{1,2,...,n\} $ and $ Y=\emptyset $. First we find the adjacency matrix $ C $ corresponding to the induced subgraph $ [X] $ and the neighbourhood matrix $ CL $ corresponding to $ C $. The algorithm then runs over each vertex of $ [X] $.
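The construction of $CS$ defined above can be sketched in a few lines of Python. Here a textbook Bron--Kerbosch enumeration stands in for Algorithms \ref{Clique_Main}--\ref{Clique_Sub}, which produce the same collection of maximal cliques via $\mathcal{NM}$; the weights $Irr$ are passed in explicitly.

```python
def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of all maximal cliques (stand-in for
    the NM-based Algorithms Clique_Main / Clique_Sub)."""
    n = len(adj)
    nbrs = [set(j for j in range(n) if adj[i][j]) for i in range(n)]
    out = []
    def bk(R, P, X):
        if not P and not X:
            out.append(sorted(R))          # R is maximal
            return
        for v in list(P):
            bk(R | {v}, P & nbrs[v], X & nbrs[v])
            P.remove(v)
            X.add(v)
    bk(set(), set(range(n)), set())
    return out

def clique_sequence(adj, irr):
    """CS as defined above: C[i][j] counts the maximal cliques of size
    i+1 containing vertex j; R mixes the rows with irrational weights;
    CS(i) = sum_j C(i,j) * R(j)."""
    cliques = maximal_cliques(adj)
    n = len(adj)
    w = max(len(c) for c in cliques)       # clique number w(G)
    C = [[0] * n for _ in range(w)]
    for c in cliques:
        for v in c:
            C[len(c) - 1][v] += 1
    R = [sum(C[i][j] / irr[i] for i in range(w)) for j in range(n)]
    return [sum(C[i][j] * R[j] for j in range(n)) for i in range(w)]
```

For a triangle with a pendant vertex, the maximal cliques are $\{0,1,2\}$ and $\{2,3\}$, so $CS$ has $w(G)=3$ entries and its first entry (no maximal $K_{1}$) is zero.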
Let $ g_{1} $ be the set of neighbours of $ i $. If $ |g_{1}| >0$ then $ i $ is not an isolated vertex, and the neighbours of $ i $ may be contained, together with $ i $, in some $ K_{2},K_{3},\dots$ We redefine $ g_{1} $ as $ g_{1}=\{q:q>i,q\in g_{1}\} $; this elimination reduces the running time and keeps the collection of maximal cliques free of duplicates. Next we construct the sequence of the numbers of triangles passing through $i$ and each element of $ g_{1} $, stored in $ p $; this is read off from the entries of the neighbourhood matrix in step 7. We then find the set of vertices not contained in any $ K_{3} $ with $i$, namely $ r=\{f: p(f)=0,f\in g_{1}\} $. If no vertex of $ Y $ is adjacent to $ \{i,r(f)\}$, $f=1,2,...,|r| $, then the pair $ \{i,r(f)\} $ is added as a distinct maximal $ K_{2} $; this is done in steps 8--12. We again redefine $ g_{1}=\{q: p(q)>0\} $ and collect the edges in the upper triangular part of $ C(g_{1},g_{1}) $, stored in $ a_{4} $. Each edge in $ a_{4} $, together with $ i $, is contained in some $ K_{3},K_{4},\dots$ Let $ s $ be the number of common neighbours of $ a_{4}(u,1) $ and $ a_{4}(u,2)$, $u=1,2,...,|a_{4}|$. If $ s>1 $ then we find the common neighbours of $ i,a_{4}(u,1), a_{4}(u,2) $, stored in $ a_{8} $, and set $ a_{9}=\{q: q>a_{4}(u,2), q\in a_{8}\} $; again this set reduces the running time and avoids duplicate maximal cliques. If $ |a_{9}|=1 $ then $\{i,a_{4}(u,1),a_{4}(u,2),a_{9}\}$ can be added as a distinct $ K_{4} $, unless there exists an edge between $ a_{8} $ and $ a_{9} $ and at least one vertex in $ Y $ is adjacent to $ \{i,a_{4}(u,1),a_{4}(u,2), a_{9}\} $; this is done in steps 19--23. If $ |a_{9}|=0 $ and $ |a_{8}| =0$ then $\{i,a_{4}(u,1),a_{4}(u,2)\}$ can be added as a distinct $ K_{3} $, unless at least one vertex in $ Y $ is adjacent to $ \{i,a_{4}(u,1),a_{4}(u,2)\} $; this is done in steps 25--29.
Finally, the set $a_{9} $ together with $ \{i,a_{4}(u,1),a_{4}(u,2)\} $ may be contained in some $ K_{4},K_{5},\dots$ Upon repeated application of this iterative process to $ W$, $X $ and $ Y $ we obtain all maximal cliques. The outputs are: $ U $, a cell array containing the number of maximal cliques containing each $ i\in V $, and $ Z $, a cell array containing all maximal cliques. Our aim is to construct a unique sequence from the outputs $ U $ and $ Z $. Let $ C $ be the matrix of size $ t\times n $ constructed from $ U $. Let $ R=\sum_{i=1}^{t}\dfrac{row_{i}(C)}{Irr(i)}$, where $ Irr=\sqrt{2},\sqrt{3},\sqrt{5},\dots$ Finally, the clique sequence $ CS $ is defined as $ CS=\Big\{\sum_{j=1}^{n} {C(i,j)}\cdot{R(j)} \Big\}_{i=1}^{t} $. \subsection{Time Complexity} As stated before, the algorithm described above is designed in three modules and is presented in the Appendix. In the first module (Algorithm \ref{Clique_Sequence}), we compute the clique sequence and its subsequences corresponding to the given graph $ G $. The algorithm is named \textsc{Clique$ _{-} $Sequence}($ A,Irr $), where $ A $ is the adjacency matrix of $ G $. In this module we run Algorithm \ref{Clique_Main}, named \textsc{Complete$ _{-} $Cliques}($ A $). Algorithm \ref{Clique_Sub}, named \textsc{Cliques}($ A,W,X,Y $), is run iteratively inside Algorithm \ref{Clique_Main}; here $ A $ is the adjacency matrix of the given graph, $ W \subset V $ is a set of vertices inducing a complete subgraph on $ 3d $ vertices, $ d = 1,2,\dots $, $X\subset V$ with $ X+W\subseteq G $, where the vertex labels in $ X $ are greater than those in $ W $, and $Y\subset V$ with $ Y+W\subset G $, where the vertex labels in $ X $ are greater than those in $ Y $.
The worst-case running time of Algorithm \ref{Clique_Sub} is $ \mathcal{O}(|X|^{5}) $, and that of Algorithm \ref{Clique_Main} is $ \mathcal{O}(n^{5}) + \sum_{i=1}^{ct}\sum_{j=3i}^{n-2}(n-j)^{5}{j-1 \choose 3i-1} $, where $ct=\round{\dfrac{n}{4}}$. A closed form for this sum appears hard to obtain; in the worst case the running time of Algorithm \ref{Clique_Sequence} is exponential. \section{Computation and Analysis} Identifying non-isomorphic graphs, in combination with an existing isomorphism algorithm, proceeds as follows. Given a collection of graphs $ \mathcal{G} $: \begin{enumerate} \item We first compute the structural descriptor sequence using Algorithms \ref{ALO.1}--\ref{ISOALO.3}, giving an $ n $-element sequence for each graph in $ \mathcal{G} $. All graphs with distinct sequences are non-isomorphic; call this set $ \mathcal{G}_{1} $. The remaining graphs $ \mathcal{G}_{2} =\mathcal{G}-\mathcal{G}_{1} $ are the input for the next step. \item We compute the clique sequences for the graphs in $ \mathcal{G}_{2} $, again an $ n $-element sequence for each graph. All graphs with distinct sequences are non-isomorphic; call this collection $ \mathcal{G}_{3} $, and let $ \mathcal{G}_{4}=\mathcal{G}_{2}-\mathcal{G}_{3} $. \item On the collection $ \mathcal{G}_{4} $ we define a relation $ G_{1}\backsim G_{2} $ if and only if $ CS(G_{1})=CS(G_{2})$, for $G_{1},G_{2}\in \mathcal{G}_{4} $. This relation is an equivalence relation and partitions $ \mathcal{G}_{4} $ into equivalence classes, with graphs having identical $ CS $ sequences in the same class. \item We run the existing isomorphism algorithm only on pairs of graphs within each equivalence class. \end{enumerate} Such preprocessing reduces the computational effort involved in identifying non-isomorphic graphs in a given collection, compared to running the existing algorithm on all possible pairs in $ \mathcal{G} $.
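The pipeline above amounts to partitioning the collection by sorted invariant sequences and comparing only within classes. A minimal sketch follows; here \texttt{seq\_fn} is a hypothetical placeholder for whichever invariant ($R_{G}$ or $CS$) is in use, and rounding guards against floating-point noise.

```python
from collections import defaultdict

def partition_by_sequence(graphs, seq_fn, digits=9):
    """Group graphs whose (sorted, rounded) invariant sequences agree.
    Graphs in singleton classes are pairwise non-isomorphic to the rest;
    only graphs inside the same class still need a full isomorphism test."""
    classes = defaultdict(list)
    for g in graphs:
        key = tuple(round(x, digits) for x in sorted(seq_fn(g)))
        classes[key].append(g)
    singletons = [c[0] for c in classes.values() if len(c) == 1]
    ambiguous = [c for c in classes.values() if len(c) > 1]
    return singletons, ambiguous
```

The quadratic all-pairs comparison is thus replaced by $\sum_i \binom{v_i}{2}$ comparisons over the class sizes $v_i$, which is exactly the saving reported on the datasets below.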
The above procedure was implemented on relevant existing benchmark datasets to verify our claims. \underline{\textbf{$ DS_{1} $:}} $ 3854 $ pairwise non-isomorphic strongly regular graphs with parameters $ (n,r,\lambda,\mu)=(35,18,9,9) $. \underline{\textbf{$ DS_{2} $:}} $ 180 $ pairwise non-isomorphic strongly regular graphs with parameters $ (36,14,6,4) $. \underline{\textbf{$ DS_{3} $:}} $ 28 $ pairwise non-isomorphic strongly regular graphs with parameters $ (40,12,2,4) $. \underline{\textbf{$ DS_{4} $:}} $ 78 $ pairwise non-isomorphic strongly regular graphs with parameters $ (45,12,3,3) $. \underline{\textbf{$ DS_{5} $:}} $ 18 $ pairwise non-isomorphic strongly regular graphs with parameters $ (50,21,8,9) $. \underline{\textbf{$ DS_{6} $:}} $ 167 $ pairwise non-isomorphic strongly regular graphs with parameters $ (64,18,2,6) $. \underline{\textbf{$ DS_{7} $:}} $ 1000 $ regular graphs with parameters $ (n,r)=(35,18) $, of which $ 794 $ are non-isomorphic. \\ \underline{\textbf{On the dataset $ DS_{1} $:}} Among the $ 3854 $ graphs, we identified $ 3838 $ as non-isomorphic by their distinct sequences; the remaining $ 16 $ graphs fell into $ 8 $ equivalence classes with sizes $v=\{ 2,2,2,2,2,2,2,2 \}$.
Running the existing algorithm on the ${ v_{i} \choose 2} $ pairs within each class, we can completely distinguish the collection in only $ 2376.5268 $ seconds in total, instead of running the existing isomorphism algorithm on all $ {3854\choose 2} $ pairs, which takes at least 2 days. \underline{\textbf{On the dataset $ DS_{2} $:}} Among the $180 $ graphs, we identified $ 81 $ as non-isomorphic by their distinct sequences; the remaining $ 99 $ graphs fell into $ 19 $ equivalence classes with sizes\\ $v=\{2,2,2,2,2,3,4,4,4,4,5,6,6,7,7,7,7,9,16 \}$. Running the existing algorithm on the ${ v_{i} \choose 2} $ pairs within each class, we can completely distinguish the collection in only $ 18.5414 $ seconds in total; the existing isomorphism algorithm on all $ {180 \choose 2} $ pairs takes $ 62.0438 $ seconds. \underline{\textbf{On the dataset $ DS_{3} $:}} Among the $28 $ graphs, we identified $ 20 $ as non-isomorphic by their distinct sequences; the remaining $ 8 $ graphs fell into $ 4 $ equivalence classes with sizes $v=\{ 2,2,2,2 \}$. Running the existing algorithm on the ${ v_{i} \choose 2} $ pairs within each class, we can completely distinguish the collection in only $ 2.0855 $ seconds in total; the existing isomorphism algorithm on all $ {28 \choose 2} $ pairs takes $39.9169 $ seconds. \underline{\textbf{On the dataset $ DS_{4} $:}} All $ 78 $ graphs were identified as non-isomorphic by their distinct sequences, taking only $ 5.8620 $ seconds in total; the existing isomorphism algorithm on all $ {78 \choose 2} $ pairs takes $ 22.6069$ seconds.
\underline{\textbf{On the dataset $ DS_{5} $:}} All $ 18 $ graphs were identified as non-isomorphic by their distinct sequences, taking only $ 8.4866 $ seconds in total; the existing isomorphism algorithm on all $ { 18\choose 2} $ pairs takes $ 8.3194$ seconds. \underline{\textbf{On the dataset $ DS_{6} $:}} Among the $167 $ graphs, we identified $ 146 $ as non-isomorphic by their distinct sequences; the remaining $ 21 $ graphs fell into $ 8 $ equivalence classes with sizes $v=\{ 2,2,2,2,2,3,3,5 \}$. Running the existing algorithm on the ${ v_{i} \choose 2} $ pairs within each class, we can completely distinguish the collection in only $ 23.8966 $ seconds in total; the existing isomorphism algorithm on all $ { 167\choose 2} $ pairs takes $ 12597.2899$ seconds. \underline{\textbf{On the dataset $ DS_{7} $:}} Among the $1000 $ graphs, we identified $ 583 $ as non-isomorphic by their distinct sequences; the remaining $ 417 $ graphs fell into $ 206 $ equivalence classes with sizes $v=\{2,2,2,...\;(201 \text{ times}), 3,3,3,3,3 \}$. Running the existing algorithm on the ${ v_{i} \choose 2} $ pairs within each class, we can completely distinguish the collection in only $ 508.6248 $ seconds in total; the existing isomorphism algorithm on all $ {1000 \choose 2} $ pairs takes $ 108567.0245$ seconds. \section{Automorphism Groups of a Graph} In this section we find the automorphism group of a given graph $ G $ using the structural descriptor sequence. This sequence carries complete structural information for each vertex $ i\in V $. Our aim is to obtain an optimal set of candidates for the automorphisms. From the sequence we can conclude that if any two values are not identical, then the corresponding vertices can never be symmetric.
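This pruning idea, that a vertex can only map to a vertex with an identical descriptor value, can be sketched as follows; this is a simplified stand-in for Algorithms \ref{AutoAl_2}--\ref{AutoAl_3}, enumerating permutations only within equal-value classes and keeping those that preserve adjacency.

```python
from itertools import permutations, product

def automorphisms(A, R, tol=1e-9):
    """Enumerate automorphisms of the graph with adjacency matrix A,
    restricting candidate images of each vertex to vertices whose
    descriptor value in R is (numerically) equal to its own."""
    n = len(R)
    classes, seen = [], [False] * n
    for i in range(n):
        if not seen[i]:
            cls = [j for j in range(n) if abs(R[j] - R[i]) < tol]
            for j in cls:
                seen[j] = True
            classes.append(cls)
    autos = []
    # candidate maps: one permutation per equal-value class
    for choice in product(*(permutations(c) for c in classes)):
        phi = [0] * n
        for cls, img in zip(classes, choice):
            for v, w in zip(cls, img):
                phi[v] = w
        # keep phi only if it preserves adjacency
        if all(A[i][j] == A[phi[i]][phi[j]] for i in range(n) for j in range(n)):
            autos.append(tuple(phi))
    return autos
```

On a path with three vertices, whose endpoints share a descriptor value, only the identity and the endpoint swap survive the adjacency check, recovering $|Aut(P_{3})|=2$.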
Based on this idea we proceed as follows. \begin{enumerate} \item Let $ R_{G} $ be the structural descriptor sequence of the given graph. \item Let $ fx_{i} $ be the set of vertices in the $ i^{th} $ component of the given graph, where $ i=1,2,...,w $ and $ w $ is the number of components. \item Let $ h $ be the set of distinct values in $ R_{G}(fx_{i}) $. \item Let $v_{q}= \big\{\alpha : R_{G}(\alpha)=h(q), \alpha\in fx_{i} \big\}$, $q=1,2,...,|h| $. \item Let $ P_{q} $ be the set of all permutations of $ v_{q} $. \item Let $ X $ be the set of all possible combinations of the $ P_{q}$, $q=1,2,...,|h| $; $ X $ is the optimal set of candidate automorphisms. \item Finally, we check each candidate permutation in $ X $ against the given graph to obtain the automorphism group. \end{enumerate} \begin{figure} \caption{A graph $ G $ and its structural descriptor sequence $ R_{G} $.\label{autoF.a}\label{autoF.1b}\label{AutoFig}} \end{figure} \begin{Corollary}\label{Auto_Cor} If the structural descriptor sequence $ R_{G} $ contains $ n $ distinct elements, then $|Aut( G)|=1 $; that is, $ G $ is an asymmetric graph. \end{Corollary} \begin{proof} Suppose the structural descriptor sequence $ R_{G} $ contains $ n $ distinct elements, that is, $ R_{G}(i)\neq R_{G}(j)$ for all $i\neq j$, $i,j\in V(G) $. By the definition of $ R_{G} $, it is immediate that for any two vertices $ i $ and $ j $, the two-level decompositions of $ G^{\{l\}} $ rooted at $ i $ and $ j $ are non-isomorphic for some $ l$, $1\leq l\leq k(G) $. Since this holds for any two vertices, there exists no non-trivial automorphism of $ G $; hence $ Aut(G)$ contains only the identity mapping $ e $. \end{proof} \normalsize \begin{Remark} Note that the converse of the above corollary need not hold. For the graph in \normalfont{Figure \ref{autoF.a}}, the structural descriptor sequence $ R_{G} $ is given in Figure \ref{autoF.1b}.
Here, $ R_{G} (4)=R_{G}(7)$, but the neighbourhoods of vertices 4 and 7 differ: they have different degree sequences and their $ R_{G}$-values do not coincide. Hence no automorphism other than the identity exists. \end{Remark} \subsection{Description of the Automorphism Group Algorithm} In this section we present a brief description of the proposed algorithm for finding all automorphisms of a given graph $ G $; this is done by Algorithms \ref{AutoAl_2} and \ref{AutoAl_3}, whose pseudocode is given in the Appendix. The objective is to find the set of all permutations forming the automorphism group. The method is based on the structural descriptor sequence of the given graph $ G $, computed by Algorithm \ref{ISOALO.2}. First we find the vertex sets of the connected components of the given graph; the algorithm runs for each connected component. Let $ fx $ be the vertex set of the $ i^{th} $ component and $ h $ the set of distinct values in $ R_{G}(fx) $. For each value in $ h $ we find the vertex set $ v $ of vertices with that value in $ R_{G}(fx) $, and then all possible permutations of $ v $. We then form the combinations of all the sets $ v $ corresponding to the elements of $ h $ and store them in $ X $. Finally, we check each permutation in $ X $ against the given graph to obtain the automorphism group. \subsection{Time Complexity} \begin{enumerate} \item In the first module (Algorithm \ref{AutoAl_2}), we compute the automorphism group of the given graph $ G $. The algorithm is named \textsc{$A_{-}M_{-}G(A)$}, where $ A $ is the adjacency matrix of $ G $. Algorithm \ref{AutoAl_3} is run iteratively in this module.
In Algorithm \ref{AutoAl_3}, we compute the optimal candidate automorphisms for a given set of vertices of $ G $. This algorithm is named \textsc{$ M_{-}O_{-}A_{-}G(R_{G},h,fx) $}, where $ R_{G} $ is the structural descriptor sequence, $ fx $ the set of vertices of the connected component and $ h $ the set of distinct values in $ R_{G}(fx) $. The worst-case running time of this algorithm is $ \mathcal{O}(n!) $, attained when the graph is connected and all values of the sequence are identical. Even in this case the running time can be reduced as follows. Let $ M $ be the set of multiplicities of the values of $ h $ in $ R_{G}(fx) $ which are greater than 1, $ M=\{m_{1},m_{2},...,m_{d}\}$, $d\leq |h| $. Let $ y_{1}=(m_{1}! -1) $ and $ y_{j}=(y_{j-1}\times (m_{j}!-1))+(y_{j-1}+ (m_{j}!-1)) $. Then the complexity of Algorithm \ref{AutoAl_3} is $\sum_{j=1}^{d}y_{j}\leq |fx|! \leq n! $, and therefore the worst-case running time of Algorithm \ref{AutoAl_2} is $ \mathcal{O}(n^{2}\cdot n!) $. \end{enumerate} \section{Conclusion} In this paper we have studied the graph isomorphism and graph automorphism problems, determining the classes of non-isomorphic graphs in a given collection. We have proposed a novel technique to compute a new feature-vector graph invariant, called the \textit{Structural Descriptor Sequence}, which encodes complete structural information of the graph with respect to each vertex; the sequence is also unique in the way it is computed. The proposed \textit{Structural Descriptor Sequence} has been shown to be useful in identifying non-isomorphic graphs in a given collection. We have proved that if two sorted \textit{Structural Descriptor Sequences} are not identical then the graphs are not isomorphic. Further, we propose a polynomial-time algorithm for this sufficient condition for non-isomorphism between two graphs, which runs in $ \mathcal{O}(kn^{3}) $, where $ k=\ceil*{\log_{2}diameter(G)} \leq \log n$.
In this paper we have also proposed an algorithm for finding the automorphism group of a given graph, using the structural descriptor sequence. \section*{References} \appendix \section{Algorithms} In this section, we present MATLAB pseudocode in algorithmic form for testing isomorphism between two graphs, and discuss the running times. \begin{algorithm}[H] \caption{\textsc{Structural Descriptor Sequence of $ G $.\label{ISOALO.2}}} \textbf{Objective:} To find the {\textit{Structural Descriptor Sequence}} corresponding to the given undirected unweighted simple graph $ G $ on $ n $ vertices using the sequence of powers of $ \mathcal{NM} (G)$. \\ \textbf{Input:} $ A $, the adjacency matrix of the graph $ G $, and $Irr$, a set of $ (n-1)$ irrational numbers, where $Irr= \langle \sqrt{2},\sqrt{3},\sqrt{5},...\rangle ; $\\ \textbf{Output:} $ R_{G} $, a sequence of $ n $ real numbers, one for each vertex of the graph $ G $. \begin{algorithmic}[1] \Procedure{$ R_{G} = S_{-}D_{-}S $ }{$A,Irr$} \State [$ n $ \text{, } $ k $ \text{, } $ SPG $ ] $ \leftarrow $ \textsc{$ SP$-$\mathcal{NM}$}({$A$}) \For {$ l \leftarrow 1 \text{ to } k $} \State $ \mathcal{NM}\leftarrow SPG(:,:,l) $ \State $ E $\textsc {$ \leftarrow $ $Structural_{-}Descriptor $ ($\mathcal{NM},n,Irr$)} \State $ S(l,:)\leftarrow \dfrac{E}{Irr(l)} $ \EndFor \State \textbf{end} \If {$ k==1 $} $ R_{G}\leftarrow S $ \Else { $ R_{G} \leftarrow \sum S $ } \EndIf \State \textbf{end} \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \cite{NM_SPath} \caption{\textsc{Sequence of powers of $ \mathcal{NM } $.\label{ALO.1}}} \textbf{Objective:} To find the iteration number $k $ and construct the sequence of powers of the $\mathcal{NM} $ matrix of a given graph $G$, that is $ \mathcal{NM}^{\{l\}} $, for every $l,1 \leq l\leq k $. \textbf{Input:} Adjacency matrix $ A $ of an undirected unweighted simple graph $ G $.
\textbf{Output:} The number of vertices $ n $, iteration number $k $ and $SPG-$ A three dimensional matrix of size $(n\times n\times k )$, that is for each $l$, $ 1 \leq l \leq k $, the $n\times n-\mathcal{NM}^{\{l\}} $ matrix is given by $ SPG(:,:,l) $. \begin{algorithmic}[1] \Procedure{$ [n \text{, } k \text{, } SPG ]= $ SP$-\mathcal{NM}$}{$A$} \State $ l\leftarrow 1; $ \State $ n \leftarrow $ Number of rows or columns of matrix $ A $ \State $ z \leftarrow 0$ \Comment{Initialize } \While{(True)} \State $ L\leftarrow $ Laplacian matrix of $ A $ \State $ \mathcal{NM} \leftarrow A \times L$ \Comment{Construct the $ \mathcal{NM} $ matrix from $ A $}. \State $ SPG(:,:,l)\leftarrow \mathcal{NM } $ \Comment{$ SPG $-sequence of powers of $ \mathcal{NM } $ matrix.} \State $ u\leftarrow nnz(\mathcal{NM } ) $ \Comment{$ nnz $-Number of non zero entries } \If {$u==n^{2}$} $k \leftarrow l$ \textbf{break} \ElsIf {$ isequal(u,z)==1 $} $k \leftarrow l-1 $ \textbf{break} \Else { $ z\leftarrow u $ } \State $ A \leftarrow \mathcal{NM} ; A (A \neq 0)\leftarrow 1;$ \For{$ p\leftarrow 1 \text{ to } n $} $A (p,p)\leftarrow 0$ \EndFor \State \textbf{end} \State $l\leftarrow l+1$ \EndIf \State \textbf{end} \EndWhile \State \textbf{end} \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{\textsc{Structural descriptor of $ G $.\label{ISOALO.3}}} \textbf{Objective:} To find the sequence of $ n $ real numbers corresponding to given $ \mathcal{NM} $ which is constructed from the information of two level decomposition of each vertex $ i $. \\ \textbf{Input:} $ \mathcal{NM}, n $ and $ Irr- $ Square root of first $ (n-1) $ primes. \\ \textbf{Output:} $ E- $ Sequence of real numbers corresponding to $ i\in V $. 
\begin{algorithmic}[1] \Procedure{$ E = Structural_{-}Descriptor $ }{$\mathcal{NM},n,Irr$} \State $ Deg\leftarrow |diag(\mathcal{NM})| $ \For{$ i\leftarrow 1:n$} $ X\leftarrow \mathcal{NM}(i,:); \text{ } X(i)\leftarrow 0 ;$ \State $a\leftarrow \big(find(X>0)\big ); \text{ } b\leftarrow\big(find(X<0)\big ) $ \State $ M_{1} \leftarrow sort\Bigg(\dfrac{X(a)}{\sqrt{7}}+\dfrac{Deg(a)-\mathcal{NM}(i,a)+\sqrt{3}}{\sqrt{11}}\Bigg)$ \Comment{$ sort $: sort in increasing order} \For{$ f\leftarrow 1 \text{ to } |a| $} $ S_{1}\leftarrow \sum\Big( \dfrac{M_{1}(f)}{Irr(f)}\Big) $ \textbf{ end } \EndFor \If{$ |b|== 0 $} $ S_{2}\leftarrow 0 $ \Else { $ p\leftarrow \mathcal{NM}(b,b); $ } $ p(p<0)\leftarrow 0; \text{ } p(p>0)\leftarrow 1 ;\text{ } p\leftarrow \sum(p) $ \Comment{Column sum of $ p $} \State $ M_{2} \leftarrow sort\Bigg(\dfrac{|X(b)|}{\sqrt{13}}+\dfrac{p+\sqrt{3}}{\sqrt{17}}+\dfrac{Deg(b)-|X(b)|-p+\sqrt{3}}{\sqrt{19}}\Bigg)$ \Comment{$ sort $: sort in increasing order} \For{$ f\leftarrow 1 \text{ to } |b| $} $ S_{2}\leftarrow \sum\Big( \dfrac{M_{2}(f)}{Irr(f)}\Big) $ \textbf{ end } \EndFor \EndIf \State \textbf{end} \State $ E(i)\leftarrow S_{1}+S_{2} $ \EndFor \State \textbf{end} \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{\textsc{Clique$ _{-} $Sequence of graph $ G $}} \label{Clique_Sequence} \textbf{Objective:} To get a unique sequence for differentiating the graphs using Algorithm \ref{Clique_Main}. \\ \textbf{Input:} Adjacency matrix $ A $ corresponding to the given graph $ G $.\\ \textbf{Output:} $ CS $, a unique sequence of irrational numbers of size $ (1\times t) $, where $t$ is the maximum clique number.
\begin{algorithmic}[1] \Procedure {$ CS =$Clique$ _{-} $Sequence}{$ A,Irr $} \State $ n\leftarrow $ number of rows in $ A $; $ Z\leftarrow zeros(1,n)$; $ R\leftarrow Z $; $ C\leftarrow Z $ \State \textsc{$ [U,Z] \leftarrow $ Complete$ _{-} $Cliques}$ (A) $; \For {$ i\leftarrow 1 $ to $ |U| $} $ C\leftarrow [C;U\{i\}] $ \textbf{end}\EndFor \For {$ i\leftarrow 1 $ to $ |C| $} $ R\leftarrow R+\dfrac{C(i,:)}{Irr(i)} $; \textbf{end} \EndFor \For {$ i\leftarrow 1$ to $ |C| $} $ CS(i)\leftarrow \sum_{j=1}^{n} {C(i,j)}\cdot{R(j)} $ \textbf{end}; \EndFor \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{\textsc{All possible Maximal cliques of given graph $ G $}} \label{Clique_Main} \textbf{Objective:} To find all possible distinct complete subgraph of given graph $ G $. \\ \textbf{Input:} $ A- $ Adjacency matrix corresponding to given graph $ G $.\\ \textbf{Output:} $ U- $ cell array which contains the number of Maximal cliques contained in each $ i\in V $, $ Z- $ cell array which contains all possible Maximal cliques. 
\begin{algorithmic}[1] \Procedure{$ [U,Z] $= Complete$ _{-} $Cliques}{$ A $} \State $ n\leftarrow $Number of rows in $ A $; $ X\leftarrow 1,2,...,n $; $ Y\leftarrow \emptyset $ ; $ W\leftarrow \emptyset $ \State $ [a,b,c,d,LK_{1},LK_{2},LK_{3},LK_{4},J] \leftarrow$ \textsc{Cliques}$ (A,W,X,Y) $; $ ct\leftarrow 1 $ \State $ U\{ct\}\leftarrow [a;b;c;d] $; $ Z\{ct,1\}\leftarrow LK_{1} $; $ Z\{ct,2\}\leftarrow LK_{2} $; $ Z\{ct,3\}\leftarrow LK_{3} $; $ Z\{ct,4\}\leftarrow LK_{4}$ \If {$ |J|==\emptyset $} \textbf{return}; \textbf{end} \EndIf \State $ ct\leftarrow 2 $ \While{$ (1) $} \If {$ J==\emptyset $}; \textbf{break}; \textbf{end} \EndIf \State $ a\leftarrow zeros(1,n) $; $ b\leftarrow a $; $ c\leftarrow a $; $ d\leftarrow a $; $ LK_{1}\leftarrow \emptyset $; \State $ LK_{2}\leftarrow \emptyset $; $ LK_{3}\leftarrow \emptyset $; $ LK_{4}\leftarrow \emptyset $; $ F\leftarrow \emptyset $ \For {$ i\leftarrow 1 $ to number of rows in $ J $} $ W\leftarrow J\{i,1\} $; $ X\leftarrow J\{i,2\} $; $ Y\leftarrow J\{i,3\}-X $ \State $ [ac,bc,cc,dc,LK_{1}c,LK_{2}c,LK_{3}c,LK_{4}c,Jc]\leftarrow $ \textsc{Cliques}$ (A,W,X,Y) $; \State $ a(e)\leftarrow a(e)+ac $; $ b(e)\leftarrow b(e)+bc $; $ c(e)\leftarrow c(e)+cc $; $ d(e)\leftarrow d(e)+dc $; \State $ a(W)\leftarrow a(W)$+Number of rows in $ LK_{1}c $; $ b(W)\leftarrow b(W)$+Number of rows in $ LK_{2}c$ ; \State $ c(W)\leftarrow c(W)$+Number of rows in $ LK_{3}c $ ; $ d(W)\leftarrow d(W)$+Number of rows in $ LK_{4}c $; \State $ LK_{1}\leftarrow [LK_{1}; W \text{ concatenation with } LK_{1}c] $; $ LK_{2}\leftarrow [LK_{2}; W \text{concatenation with } LK_{2}c] $; \State $ LK_{3}\leftarrow [LK_{3}; W \text{ concatenation with } LK_{3}c] $; $ LK_{4}\leftarrow [LK_{4}; W \text{concatenation with } LK_{4}c] $; \For {$ t\leftarrow 1 $ to Number of rows in $ Jc $} \State $ Jc\{t,1\}\leftarrow [W\text{ concatenation with } Jc\{t,1\} ] $; \State $ Jc\{t,2\}\leftarrow [Jc\{t,2\} \text{ concatenation with } X] $; $ Q\leftarrow \{k: 
k+Jc\{t,1\},k\in Y\} $ \State $ Jc\{t,3\}\leftarrow [Q,Jc\{t,3\} \text{ concatenation with } X] $ \EndFor \State \textbf{end} \State $ F\leftarrow [F; Jc] $ \EndFor \State \textbf{end} \State $ J\leftarrow F $ \State $ a\leftarrow a+U\{ct-1\}(4,:); U\{ct-1\}(4,:)\leftarrow \emptyset $; \State $ U\{ct\}\leftarrow [a;b;c;d] $; $ Z\{ct,1\}\leftarrow [Z\{ct-1,4\};LK_{1}] $; $ Z\{ct,2\}\leftarrow LK_{2} $; \State $ Z\{ct,3\}\leftarrow LK_{3} $; $ Z\{ct,4\}\leftarrow LK_{4} $; $ Z\{ct-1,4\}\leftarrow \emptyset $; \State $ ct\leftarrow ct+1 $; \EndWhile \State \textbf{end} \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{\textsc{Distinct $ K_{1}, K_{2},K_{3} $ and $ K_{4} $ of $[X] $ on given $ G $}} \label{Clique_Sub} \textbf{Objective:} To find distinct $ K_{1}, K_{2}, K_{3} $ and $ K_{4} $ of $ [X] $ in $ A $. \\ \textbf{Input:} $ A $ is an adjacency matrix corresponding to given graph, $ W \subset V $, set of vertices of $ W $ is complete subgraph on $ 3d $ vertices where $ d\leftarrow 1,2,... $, $X\subset V, X+W\subseteq G $ and vertex labels of vertices in $ X $ is greater than vertex labels in $ W $ and $Y\subset V, Y+W\subset G $ and vertex labels of vertices in $ X $ is greater than vertex labels in $ Y $. \textbf{Output:} $ a,b,c,d\in (\mathbb{W})^{1\times |X|} $. $ a,b,c,d $ - Number of distinct $ K_{1}, K_{2},K_{3}, K_{4} $ contained in $ [X] $ respectively. $ LK_{1}, LK_{2},LK_{3},LK_{4} $ - Collections of distinct $ K_{1}, K_{2},K_{3},K_{4} $ in $ [X] $ respectively and $ J $ - Collection of possible distinct $ K_{4}, K_{5},... $ \begin{algorithmic}[1] \small \Procedure{$ [a,b,c,d,LK_{1},LK_{2},LK_{3},LK_{4},J]= $ Cliques}{$ A,W,X,Y $} \State $ C $ is an adjacency matrix corresponding to induced subgraph of $ X $. 
\State $ CL $ is an neighbourhood matrix corresponding to $ C $, $ n\leftarrow |X| $; \State $ at\leftarrow 1 $; $ bt\leftarrow 1 $; $ ct\leftarrow 1 $; $ dt\leftarrow 1 $; $ vt\leftarrow 1 $ \For {$ i\leftarrow 1 \text{ to } n $} $ g_{1}\leftarrow N_{[X]}(i) $ \If {$ |g_{1}|>0 $} $ g_{1}\leftarrow g_{1}(g_{1}>i) $ \If {$ |g_{1}|>0 $} $p\leftarrow Degree(g1)-CL(i,g1) $ \State $ r\leftarrow g_{1}(p==0) $ \For {$ q\leftarrow 1 \text{ to } |r| $} \If {for every $ Y_{w}, w= 1,2,...,|Y| $ is not adjacent with $ \{i,r(f)\} $} \State $ LK_{2}(bt,:)\leftarrow [r(f),i] $; $ bt\leftarrow bt+1 $ \EndIf \State \textbf{end} \EndFor \State \textbf{end} $ g1\leftarrow g1(p>0) $ \If {$ |g_{1}|>1 $} $ a_{4} \leftarrow $ distinct edges in upper triangular matrix of $ C(g1,g1) $. \For {$ u\leftarrow 1 $ to $ |a_{4}| $} $ s\leftarrow |CL(a_{4}(u,2),a_{4}(u,2))|-CL(a_{4}(u,1),a_{4}(u,2)) $ \If {$ s>1 $} $ a_{8}\leftarrow \{N_{[X]}(i)\cap N_{[X]}(a_{4}(u,1)) \cap N_{[X]}(a_{4}(u,2))\} $ \State $ a_{9}\leftarrow a_{8}(a_{8}>a_{4}(u,2)) $; $ tr\leftarrow |a_{9}| $ \If {$ tr==1 $} \If {there exist no edge between $ a_{8} $ and $ a_{9} $} \If { for every $ Y_{w}, w= 1,2,...,|Y| $ is not adjacent with $\{ i,a_{4}(u,1),a_{4}(u,2),a_{9} \}$ } \State $ LK_{4}(dt,:)\leftarrow [i,a_{4}(u,1),a_{4}(u,2),a_{9}] $; $ dt\leftarrow dt+1 $ \EndIf \State \textbf{end} \EndIf \State \textbf{end} \ElsIf {$ tr==0 $} \If {$ |a_{8}|==0 $} \If {for every $ Y_{w}, w= 1,2,...,|Y| $ is not adjacent with $\{ i,a_{4}(u,1),a_{4}(u,2) \}$ } \State $ LK_{3}(ct,:)\leftarrow [i,a_{4}(u,1),a_{4}(u,2)] $; $ ct\leftarrow ct+1 $ \EndIf \State \textbf{end} \EndIf \State \textbf{end} \Else { $ J\{vt,1\}\leftarrow \{i,a_{4}(u,1),a_{4}(u,2)\} $; $ J\{vt,2\}\leftarrow a_{9} $; $ J\{vt,3\}\leftarrow a_{8} $} \EndIf \State \textbf{end} \Else { } \If { for every $ Y_{w}, w= 1,2,...,|Y| $ is not adjacent with $\{ i,a_{4}(u,1),a_{4}(u,2) \}$ } \State $ LK_{3}(ct,:)\leftarrow [i,a_{4}(u,1),a_{4}(u,2)] $; $ ct\leftarrow ct+1 $ \EndIf 
\State \textbf{end} \EndIf \State \textbf{end} \EndFor \State \textbf{end} \EndIf \State \textbf{end} \EndIf \State \textbf{end} \Else { } \If { no $ Y_{w}, w= 1,2,...,|Y| $, is adjacent to $\{ i\}$ } \State $ LK_{1}(at,:)\leftarrow [i] $; $ at\leftarrow at+1 $ \EndIf \State \textbf{end} \EndIf \State \textbf{end} \EndFor \State \textbf{end} \EndProcedure \end{algorithmic} \end{algorithm} \normalsize \begin{algorithm}[H] \caption{\textsc{Automorphism Groups of given graph $ G $}} \label{AutoAl_2} \textbf{Objective:} To find the automorphism groups of a given undirected, unweighted, simple graph. \textbf{Input:} Graph $ G $, with adjacency matrix $ A $. \textbf{Output:} $ A_{-}G $- all the automorphisms of $ G $. \begin{algorithmic}[1] \Procedure{$ A_{-}G= A_{-}M_{-}G $}{$ A $} \State $ Irr\leftarrow $ square roots of the first $ (n-1) $ primes. \State $ R_{G} \leftarrow S_{-}D_{-}S(A,Irr) $; \textsc{$ [CS,R] \leftarrow $Clique$ _{-} $Sequence}($ A,Irr $) \State $ R_{G}\leftarrow R_{G}+R $; $ Tb\leftarrow $ sets of vertices of the connected components \For {$ i\leftarrow 1 \text{ to } |Tb| $} \State $ fx\leftarrow $ $ Tb\{i\} $; $Id\leftarrow fx $; \State $ h\leftarrow$ unique values in $ (R_{G}(fx)) $; $ asd\leftarrow 1 $ \If {$ |h|\neq |fx| $} \State { $eL\leftarrow \text{Sorted edge list corresponding to } $} $ [fx] $ \State $ X \leftarrow M_{-}O_{-}A_{-}G(R_{G},h,fx)$ ; \For {$ u\leftarrow 1$ to $ |X| $} $ f\leftarrow X\{u\} $; \Comment{ \scriptsize $ f $ is a candidate automorphism group of size $ (zk\times 2), zk\leq 2n $\normalsize } \State $ tL\leftarrow $ sorted edge list corresponding to given $ f $ \If {$ eL $ is identical to $ tL $} \State $ AMor\{asd\}\leftarrow f$ ; $asd\leftarrow asd+1 $;\Comment{$ AMor\{asd\} $ is a cell array which stores permutations of different sizes} \EndIf \State \textbf{end} \EndFor \State \textbf{end} \EndIf \State \textbf{end} \State $ AMor\{asd\}\leftarrow [Id' \text{ } Id'] ; A_{-}G\{i\}\leftarrow AMor$; \EndFor \State
\textbf{end} \State \textbf{return} \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{\textsc{Optimal Options of Automorphism groups of given vertex set of $ G $}} \label{AutoAl_3} \textbf{Objective:} To find the optimal options of the automorphism groups for the given graph. \textbf{Input:} $ R_{G}- $ Structural descriptor sequence corresponding to the given graph, $ fx- $ Set of vertices in the connected component and $ h- $ unique values of $ R_{G}(fx) $. \textbf{Output:} $ X- $ cell array which contains the optimal options of automorphisms for the given graph. \begin{algorithmic}[1] \Procedure{$ X= M_{-}O_{-}A_{-}G $}{$ R_{G},h,fx $} \State $ jt\leftarrow 1 $; \For {$ i\leftarrow 1:|h| $} $ v\leftarrow $ Set of vertices which have identical values in $ R_{G}(fx) $; \If {$ |v|>1 $} $ fz\leftarrow $ All possible permutations of $ v $ ; \State $ U\{jt\}\leftarrow fz $; $ jt\leftarrow jt+1 $; \EndIf \State \textbf{end} \EndFor \State \textbf{end} \If {$ |U|==1 $} $ X\leftarrow U\{:\} $; \Else { $ ct\leftarrow 1 $ }; $ X\leftarrow U\{1\} $; $ Y\leftarrow U\{2\} $; \While{(1)} $ bt\leftarrow 1 $; \For {$ i\leftarrow 1\text{ to } |X| $} \For {$ j\leftarrow 1\text{ to }|Y| $ } \State $ RX\{bt\}\leftarrow [X\{i\}\text{ ; } Y\{j\}] $; $ bt\leftarrow bt+1 $ \EndFor \State \textbf{end} \EndFor \State \textbf{end} \State $ X\leftarrow [X \text{, } Y \text{, } RX] $; $ ct\leftarrow ct+1 $; \If {$ ct==|U| $} \textbf{ break}; \textbf{ end} \EndIf \State $ Y\leftarrow U\{ct+1\} $; \EndWhile \State \textbf{end} \EndIf \State \textbf{end} \State \textbf{return} \EndProcedure \end{algorithmic} \end{algorithm} \end{document}
\begin{document} \setlength{\baselineskip}{4.5mm} \maketitle \begin{abstract} We consider parametric estimation of the continuous part of a class of ergodic diffusions with jumps based on high-frequency samples. Various papers previously proposed threshold-based methods, which enable us to decide whether or not each observed increment over a small-time interval contains a jump, hence to estimate the unknown parameters separately. However, a data-adapted and quantitative choice of the threshold parameter is known to be a subtle and sensitive problem. In this paper, we present a simple alternative based on the Jarque-Bera normality test for the Euler residuals. Unlike the threshold-based methods, the proposed method does not require any sensitive fine tuning, hence is of practical value. It is shown that under suitable conditions the proposed estimator is asymptotically equivalent to an estimator constructed from the unobserved fluctuation of the continuous part of the solution process, hence is asymptotically efficient. Some numerical experiments are conducted to observe the finite-sample performance of the proposed method. \end{abstract} \section{Introduction} Consider the following one-dimensional stochastic differential equation with jumps: \begin{equation} dX_t=\left( \sum_{l=1}^{p_\alpha} \alpha^{(l)} a^{(l)}(X_t) \right)^{1/2}dw_t+\sum_{k=1}^{p_\beta} \beta^{(k)} b^{(k)}(X_t)dt+c(X_{t-})dJ_t, \label{hm:sde} \end{equation} defined on a complete filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\geq0},P)$.
The ingredients are as follows: \begin{itemize} \item The coefficients $\{a^{(l)}(x)\}_{l=1}^{p_\alpha}$ and $\{b^{(k)}(x)\}_{k=1}^{p_\beta}$ are known measurable functions; \item The statistical parameter \begin{equation} \theta:=(\alpha,\beta)\in \Theta_{\alpha} \times \Theta_{\beta}=\Theta \nonumber \end{equation} is unknown, where $\Theta_\alpha$ and $\Theta_\beta$ are bounded convex domains, subsets of $\mathbb{R}^{p_\alpha}$ and $\mathbb{R}^{p_\beta}$, respectively; \item $w$ is a standard {Wiener process} and $J$ a {compound Poisson process} with intensity parameter $\lambda\in[0,\infty)$ and i.i.d.\ jump-size random variables $\{\xi_i\}_{i\in\mathbb{N}}$, that is, \begin{equation*} J_t=\sum_{i=1}^{N_t} \xi_i; \end{equation*} \item $(w,J)$ is $\mathcal{F}_t$-adapted, and the initial variable $X_0$ is $\mathcal{F}_0$-adapted and independent of $(w,J)$. \end{itemize} Throughout this paper, we assume that there exists a true value $\theta_{0}:=(\alpha_0,\beta_0)\in\Theta$. We want to estimate $\theta_{0}$ based on a discrete-time but high-frequency observation $(X_{t^{n}_{j}})_{j=0}^{n}$ from a solution to \eqref{hm:sde}, where the sampling times are supposed to be equally spaced: \begin{equation} t^{n}_{j} =jh_{n} \nonumber \end{equation} for a positive sequence $(h_n)$ such that $h_n \to 0$ and the terminal sampling time $T_{n}:=t^{n}_{n}=nh_n\to\infty$. Throughout we suppose that $\lambda>0$; for diffusion models, many estimators of $\theta$ have been proposed, such as the Gaussian quasi-likelihood estimator \cite{Kes97}, the adaptive estimator \cite{UchYos12}, and the multi-step estimator \cite{KamUch15}, to mention a few. The special forms of the coefficients of \eqref{hm:sde} may seem restrictive. However, we are particularly interested in models which can be estimated without heavy computational effort.
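For concreteness, data from a model of the form \eqref{hm:sde} can be generated by a standard Euler scheme. The following is a minimal sketch; the specific coefficient choices and the Gaussian jump-size law are illustrative assumptions of ours, not part of the model specification above.

```python
import numpy as np

def simulate_jump_sde(n, h, alpha, beta, lam, jump_std, x0=0.0, seed=1):
    """Euler scheme for dX = (A(X)^T alpha)^{1/2} dw + B(X)^T beta dt + c(X-) dJ.

    Illustrative choices (assumptions of this sketch): A(x) = (1, x^2/(1+x^2)),
    B(x) = (-x,), c(x) = 1, and N(0, jump_std^2) jump sizes.
    """
    rng = np.random.default_rng(seed)
    X = np.empty(n + 1)
    X[0] = x0
    for j in range(n):
        x = X[j]
        diff2 = alpha[0] + alpha[1] * x**2 / (1.0 + x**2)  # A(x)^T alpha, kept positive
        drift = -beta[0] * x                               # B(x)^T beta
        m = rng.poisson(lam * h)                           # jumps of J over (t_j, t_{j+1}]
        dJ = rng.normal(0.0, jump_std, m).sum() if m else 0.0
        X[j + 1] = x + drift * h + np.sqrt(diff2 * h) * rng.standard_normal() + dJ
    return X

X = simulate_jump_sde(n=2000, h=0.01, alpha=(1.0, 0.5), beta=(0.5,), lam=2.0, jump_std=2.0)
```

Since $\lambda h_n$ is small, most intervals contain no jump, which is what makes the increment-wise classification below meaningful.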
As will be mentioned in Section \ref{Asymptotic Results}, we do not need any numerical search of a maximizer to estimate $\theta$ as well as in the virtual situation where we know every jump instance over $(0,T_n]$. In the presence of the jump component, elimination of the effect of $J$ is crucial for a reliable estimation of $\theta$. A well-known approach is the threshold-based method independently proposed in \cite{Man04} and \cite{ShiYos06}; see also \cite{OgiYos11} for subsequent developments. In the method, we look at sizes of the increments \begin{equation} \Delta^n_{j}X=\Delta_{j}X:=X_{t^n_{j}}-X_{t^n_{j-1}} \nonumber \end{equation} for $j=1,\dots,n$ in absolute value: we assume that one jump has occurred over $(t^n_{j-1},t^n_j]$ if $|\Delta_{j}X|>r_{n}$ for a pre-specified \textit{jump-detection threshold} $r_{n}>0$, and then estimate $\theta$ after removing such increments. For a suitably chosen $r_{n}>0$, it is shown that the estimator of $\theta$ is asymptotically normally distributed at the same rate as for diffusion models, while the finite-sample performance of the threshold method strongly depends on the value of $r_{n}$. A data-adaptive quantitative choice of $r_n$ is a subtle and sensitive problem in practice; see \cite{Shi08}, \cite{Shi09}, as well as the references therein. Obviously, if the model may have ``small'' jumps with positive probability, joint estimation of diffusion and jump components can exhibit a rather bad finite-sample performance; for example, some increments may simultaneously contain small jumps and a large fluctuation caused by the continuous component. This practical issue can also be seen in other jump-detection methods such as \cite{AitJac09}.
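The threshold-based classification just described is a single pass over the increments. A minimal sketch, with a hypothetical power-law threshold $r_n = C h_n^{\varpi}$ (the constants $C$ and $\varpi$ are illustrative tuning choices of ours; calibrating them is precisely the delicate issue discussed above):

```python
import numpy as np

def threshold_filter(X, h, C=3.0, varpi=0.49):
    """Flag increment j as containing a jump iff |Delta_j X| > r_n = C * h**varpi.

    C and varpi are hypothetical tuning constants, not values from the text.
    Returns the retained (no-jump) increments and the flagged 0-based indices.
    """
    dX = np.diff(X)
    r_n = C * h**varpi
    jump_idx = np.flatnonzero(np.abs(dX) > r_n)
    return dX[np.abs(dX) <= r_n], jump_idx

path = np.array([0.0, 0.05, 0.02, 1.50, 1.52])  # one visibly large increment
kept, jumps = threshold_filter(path, h=0.01)    # flags the third increment only
```

The parameters of the continuous part would then be estimated from `kept` alone, which is exactly where a poorly chosen $r_n$ either lets small jumps through or discards genuine diffusive fluctuations.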
Recently, for estimating the volatility parameter in the non-ergodic framework, i.e., for a fixed $T>0$, $h_n=T/n$ and $T_n\equiv T$, \cite{InaYos18} proposed an alternative estimation procedure called a global jump-detection filter based on the theory of order statistics constructed from the whole increments; there, it is shown that the global filtering can work better, both theoretically and numerically, than the previously studied local one (\cite{Man04}, \cite{ShiYos06}, and \cite{OgiYos11}). Nevertheless, as will be seen later, the required conditions on the distribution of jump sizes and the decay rate of $h_n\to 0$ may be more stringent in the case where $T_n\to\infty$. Hence it is not quite clear whether or not and how the global filtering of \cite{InaYos18} is directly applicable to our ergodic setting. The primary objective of this paper is to formulate an intuitively easy-to-understand strategy, which can simultaneously estimate $\theta$ and detect jumps without any precise calibration of a jump-detection threshold. For this purpose, we utilize the approximate self-normalized residuals \cite{Mas13-2}, which adapt the classical Jarque-Bera test \cite{JarBer87} to our model. More specifically, the hypothesis test with a given significance level is constructed in the following manner: let the null hypothesis be ``no jump component'' against the alternative ``non-trivial jump component'': \begin{equation} \mathcal{H}_0: {\lambda=0} \quad \text{vs} \quad \mathcal{H}_1: {\lambda>0}. \nonumber \end{equation} Then, if the Jarque-Bera type statistic introduced later is larger than a given percentile of the chi-square distribution with $2$ degrees of freedom, we reject the null hypothesis $\mathcal{H}_0$; and otherwise, we accept $\mathcal{H}_0$. For such a test, we can intuitively regard that the largest increment contains at least one jump when the null hypothesis is rejected.
Following this intuition, our proposed method will go as follows: we iteratively conduct the test, removing the largest increment among the retained samples, until $\mathcal{H}_0$ is no longer rejected; after that, we construct the modified estimator of $\theta$ from the remaining samples. Our method enables us not only to ``pre-clean'' the diffusion-like data sequence by removing large jumps which break the approximate Gaussianity of the self-normalized residuals, but also to approximately quantify jumps relative to continuous fluctuations in a natural way; see Remark \ref{hm:rem_F.esti}. This paper is organized as follows: in Section \ref{Preliminaries}, we give a brief summary of the approximate self-normalized residuals, and the Jarque-Bera type test for general jump-diffusion models. Section \ref{Proposed strategy} provides our strategy and some remarks on its practical use. In Section \ref{Asymptotic Results}, we will propose a least-squares type estimator and its one-step version for \eqref{hm:sde}. In the calculation of our estimator we can sidestep optimization, and thus it is numerically tractable, while retaining the high representational power of the nonlinearity in the state variable. Moreover, we will prove that our estimator is asymptotically equivalent to the ``oracle'' estimator which is constructed as if we observe the unobserved continuous part of $X$. We show some numerical results in Section \ref{Numerical experiments}. Finally, Appendix \ref{hm:sec_proofs} presents the proofs of the results given in Section \ref{Asymptotic Results}. Here are some notations and conventions used throughout this paper. We largely omit ``$n$'' from the notation like $t_{j}=t^{n}_{j}$ and $h=h_{n}$. For any vector variable $x=(x^{(i)})$, we write $\partial_x=\left(\frac{\partial}{\partial x^{(i)}}\right)_i$. For any process $Y$, $\Delta_j Y$ denotes the $j$-th increment $Y_{t_{j}}-Y_{t_{j-1}}$.
$C$ denotes a universal positive constant which may vary at each appearance. $\top$ stands for the transpose operator, and $v^{\otimes2}:= vv^\top$ for any matrix $v$. The convergences in probability and in distribution are denoted by $\xrightarrow{p}$ and $\xrightarrow{\mcl}$, respectively. All limits appearing below are taken for $n\to\infty$ unless otherwise mentioned. For two nonnegative real sequences $(a_n)$ and $(b_n)$, we write $a_n \lesssim b_n$ if $\limsup_n(a_n/b_n)<\infty$. For any $x\in\mathbb{R}$, $\lfloor x \rfloor$ denotes the maximum integer which does not exceed $x$. \section{Preliminaries} \label{Preliminaries} To see whether a working model fits the data well, and/or whether the data at hand contain outliers, diagnosis based on residual analysis is often conducted. For jump-diffusion models, \cite{Mas13-2} formulated a Jarque-Bera normality test based on self-normalized residuals for the driving noise process. In this section, we briefly review the construction of the self-normalized residual, and the Jarque-Bera statistic with its asymptotic behavior for the general ergodic jump-diffusion model described as: \begin{equation} dX_{t} = a(X_{t},\alpha)dw_{t} + b(X_{t},\beta)dt + c(X_{t-})dJ_{t}. \label{yu:sde} \end{equation} Given any function $f$ on $\mathbb{R}\times\Theta$ and $s\geq0$, we hereafter write \begin{equation*} f_s(\theta)=f(X_s,\theta), \end{equation*} and in particular, for all $j\in\{0,\dots,n\}$ we denote \begin{equation} f_{j}(\theta) = f(X_{t_{j}},\theta). \nonumber \end{equation} For each $j\in\{1,\dots,n\}$, let \begin{equation} \epsilon_j(\alpha)=\epsilon_{n,j}(\alpha):=\frac{\Delta_{j}X}{\sqrt{a^{2}_{j-1}(\alpha)h_n}}.
\label{hm:ep_def} \end{equation} Then, following \cite{Mas13-2} we introduce the self-normalized residual and the Jarque-Bera type statistic: \begin{align*} &\hat{N}_j=\hat{S}_n^{-1/2}(\epsilon_j(\hat{\alpha}_{n})-\bar{\hat{\epsilon}}_n),\\ &\mathrm{JB}_n= \frac{1}{6n}\left(\sum_{j=1}^{n} (\hat{N}_j)^3-3\sqrt{h_n}\sum_{j=1}^{n} \partial_x a_{j-1}(\hat{\alpha}_{n})\right)^2+\frac{1}{24n}\left(\sum_{j=1}^{n}((\hat{N}_j)^4-3)\right)^2, \end{align*} where \begin{equation*} \bar{\hat{\epsilon}}_n:=\frac{1}{n}\sum_{j=1}^{n} \epsilon_j(\hat{\alpha}_{n}), \quad \hat{S}_n:=\frac{1}{n}\sum_{j=1}^{n}(\epsilon_j(\hat{\alpha}_{n})-\bar{\hat{\epsilon}}_n)^2. \end{equation*} The following theorem gives the asymptotic behavior of $\mathrm{JB}_n$, which ensures theoretical validity of the Jarque-Bera type test based on $\mathrm{JB}_n$. \begin{Thm}(\cite[Theorems 3.1 and 4.1]{Mas13-2}) \label{Achi&p} \begin{enumerate} \item Under $\mathcal{H}_0: {\lambda=0}$ and suitable regularity conditions, for any estimator $\hat{\alpha}_{n}$ of $\alpha$ satisfying \begin{equation}\label{sqn} \sqrt{n}(\hat{\alpha}_{n}-\alpha_0)=O_p(1), \end{equation} we have \begin{equation*} \mathrm{JB}_{n}\xrightarrow{\mcl} \chi^2(2). \end{equation*} \item Under $\mathcal{H}_1: {\lambda>0}$ and suitable regularity conditions, we have \begin{equation*} \mathrm{JB}_{n}\overset{P}\rightarrow \infty, \end{equation*} that is, $P(\mathrm{JB}_{n}>K) \to 1$ for any $K>0$. \end{enumerate} \end{Thm} \begin{Rem} The residual defined by \eqref{hm:ep_def} is of the Euler type, ignoring the drift fluctuation; under the sampling conditions in Assumption \ref{Sampling} given later, we can ignore the presence of the drift term in the construction of the residuals. Indeed, instead of \eqref{hm:ep_def} we could consider \begin{equation} \epsilon_j(\theta)=\epsilon_{n,j}(\theta):=\frac{\Delta_{j}X - h_n b_{j-1}(\beta) }{\sqrt{a^{2}_{j-1}(\alpha)h_n}}.
\nonumber \end{equation} Also, we could define $\mathrm{JB}_{n}$ only by the skewness or kurtosis part; this only changes the asymptotic degrees of freedom $2$ in Theorem \ref{Achi&p}-(1) to $1$. See \cite{Mas11} for the technical details. This may require more computation time, but we would then have a more stable performance under $\mathcal{H}_0$ compared with the case of \eqref{hm:ep_def}. \end{Rem} \begin{Rem} The results of \cite{Mas13-2} apply not only when the jump component is driven by a compound Poisson process, but possibly also when it is driven by a much broader class of finite-activity processes. It is therefore expected that we may relax the structural assumption, although the theoretical results in Section \ref{Asymptotic Results} would then require a large number of modifications. \end{Rem} In the rest of this section, suppose that the null hypothesis $\mathcal{H}_0$ is true, so that the underlying model is a diffusion process. Among choices of $\hat{\alpha}_{n}$, the Gaussian quasi-maximum likelihood estimator (GQMLE) is one of the most important candidates because it is asymptotically efficient in the H\'{a}jek--Le Cam sense (cf. \cite{Gob02}). The GQMLE is defined as any maximizer of the Gaussian quasi-likelihood (GQL) \begin{equation*} \mathbb{H}_{n}(\theta) := \sum_{j=1}^{n} \log\left\{\frac{1}{\sqrt{2\pi a^{2}_{j-1}(\alpha)h_n}}\phi\left(\frac{\Delta_{j}X - b_{j-1}(\beta)h_n}{\sqrt{a^{2}_{j-1}(\alpha)h_n}}\right)\right\}, \end{equation*} where $\phi$ denotes the standard normal density. This quasi-likelihood is constructed based on the local-Gauss approximation of the transition probability $\mathcal{L}(X_{t_{j}}|X_{t_{j-1}})$ by $N(b_{j-1}(\beta)h_n, a^{2}_{j-1}(\alpha)h_n)$.
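The residuals $\epsilon_j(\hat{\alpha}_{n})$, their self-normalized versions $\hat{N}_j$, and the statistic $\mathrm{JB}_n$ defined above are directly computable from the sample. A minimal sketch, assuming user-supplied callables for $a(x,\alpha)$ and $\partial_x a(x,\alpha)$ (these callables and their names are assumptions of the sketch):

```python
import numpy as np

def jarque_bera_stat(X, h, a, dxa, alpha_hat):
    """Jarque-Bera type statistic JB_n built from self-normalized Euler residuals.

    a(x, alpha): diffusion coefficient; dxa(x, alpha): its x-derivative.
    Follows the displayed definitions, with the 3*sqrt(h)*sum(dx a) bias correction
    in the skewness part.
    """
    dX = np.diff(X)
    n = dX.size
    eps = dX / np.sqrt(a(X[:-1], alpha_hat)**2 * h)            # Euler-type residuals
    N = (eps - eps.mean()) / np.sqrt(((eps - eps.mean())**2).mean())
    skew = (N**3).sum() - 3.0 * np.sqrt(h) * dxa(X[:-1], alpha_hat).sum()
    kurt = (N**4 - 3.0).sum()
    return skew**2 / (6.0 * n) + kurt**2 / (24.0 * n)

rng = np.random.default_rng(0)
bm = np.concatenate([[0.0], np.cumsum(np.sqrt(0.01) * rng.standard_normal(500))])
jb = jarque_bera_stat(bm, 0.01,
                      a=lambda x, al: np.ones_like(x),
                      dxa=lambda x, al: np.zeros_like(x),
                      alpha_hat=1.0)
```

For the pure Brownian path above (the null $\mathcal{H}_0$), $\mathrm{JB}_n$ is approximately $\chi^2(2)$-distributed, so it is typically small; a path with jumps would inflate the kurtosis part.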
It is well known that the asymptotic normality holds true under suitable regularity conditions \cite{Kes97}: For the GQMLE $\tilde{\theta}_n=(\tilde{\alpha}_n,\tilde{\beta}_n)$, we have \begin{equation} \left( \sqrt{n}(\tilde{\alpha}_n-\alpha_0),\, \sqrt{T_n}(\tilde{\beta}_n-\beta_0) \right) \xrightarrow{\mcl} N\left(0, \,\mathop{\rm diag}(I_{1}^{-1}(\theta_{0}), I_{2}^{-1}(\theta_{0}))\right), \nonumber \end{equation} where \begin{align*} &I_{1}(\theta_{0})=\frac{1}{2}\int \left(\frac{\partial_\alpha a^2}{a^2}(x,\alpha_0)\right)^{\otimes2}\pi_0(dx),\\ &I_{2}(\theta_{0})=\int\left(\frac{\partial_\beta b}{a}(x,\beta_0)\right)^{\otimes2}\pi_0(dx), \end{align*} both assumed to be positive definite. Here $\pi_0$ denotes the invariant measure of $X$. The strategy we will describe in Section \ref{Proposed strategy} is in principle valid even when the drift and diffusion coefficients are nonlinear in the parameters. However, if the coefficients $a$ and $b$ are highly nonlinear and/or the number of the parameters is large, then the calculation of the GQMLE can be quite time-consuming. To deal with such a problem, it is effective to separate optimizations of $\alpha$ and $\beta$ by utilizing the difference of the small-time stochastic orders of the $dt$- and $dw_t$-terms. To be specific, we introduce the following stepwise version of the GQMLE $\check{\theta}_n:=(\check{\alpha}_n,\check{\beta}_n)$: \begin{align*} &\check{\alpha}_n\in\mathop{\rm argmax}_{\alpha\in\bar{\Theta}_\alpha} \sum_{j=1}^{n} \log\left\{\frac{1}{\sqrt{2\pi a^{2}_{j-1}(\alpha)h_n}}\phi\left(\frac{\Delta_{j}X}{\sqrt{a^{2}_{j-1}(\alpha)h_n}}\right)\right\},\\ &\check{\beta}_n\in\mathop{\rm argmax}_{\beta\in\bar{\Theta}_\beta} \mathbb{H}_{n}(\check{\alpha}_n,\beta). \end{align*} Under suitable regularity conditions, it is shown that the stepwise GQMLE has the same asymptotic distribution as the original GQMLE $\tilde{\theta}_n$ (cf. \cite{UchYos12}).
Hence $\check{\theta}_n$ is asymptotically efficient, and the claims in Theorem \ref{Achi&p} with $\hat{\alpha}_{n}$ replaced by $\check{\alpha}_n$ hold true. Although in general we have to conduct two optimizations for the stepwise estimation scheme, it lessens the number of the parameters to be simultaneously optimized, thus reducing the computational time. \section{Proposed strategy}\label{Proposed strategy} In this section, still looking at \eqref{yu:sde}, we propose an iterative jump-detection procedure based on the Jarque-Bera type test introduced in the previous section. Let $q\in(0,1)$ be a small number, which will later serve as the significance level. Suppose that we are given an estimator $\hat{\theta}_{n}$ of $\theta=(\alpha,\beta)$ defined to be any element $\hat{\theta}_{n} \in \mathop{\rm argmax} M_n$ for some contrast function $M_n$ of the form \begin{equation} M_n(\theta) := \sum_{j=1}^{n} m_{h_{n}}\left( X_{t_{j-1}},\Delta_j X;\,\theta \right). \nonumber \end{equation} Denote by $\chi^2_q(2)$ the upper $q$-percentile of the chi-squared distribution with $2$ degrees of freedom. Then, our procedure is as follows; we implicitly assume that there is no tie among the values $|\Delta_{1}X|,\dots,|\Delta_{n}X|$. \begin{itemize} \item[{\it Step 0.}] Set $k=k_n=0$, and let $\hat{\mathcal{J}}_{n}^0:=\emptyset$.
\item[{\it Step 1.}] Calculate the modified estimator $\hat{\theta}_{n}^k$ defined by \begin{equation} \hat{\theta}_{n}^k \in \mathop{\rm argmax}_{\theta\in\Theta} \sum_{j\notin\hat{\mathcal{J}}_n^k} m_{h_{n}}\left( X_{t_{j-1}},\Delta_j X;\,\theta \right), \nonumber \end{equation} then let \begin{equation*} \bar{\hat{\epsilon}}_n^k:=\frac{1}{n-k}\sum_{j\notin\hat{\mathcal{J}}_{n}^k} \epsilon_j(\hat{\alpha}_{n}^k), \qquad \hat{S}_n^k:=\frac{1}{n-k}\sum_{j\notin\hat{\mathcal{J}}_{n}^k}(\epsilon_j(\hat{\alpha}_{n}^k)-\bar{\hat{\epsilon}}_n^k)^2, \end{equation*} and (re-)construct the following modified self-normalized residuals $(\hat{N}_j^k)_{j=1}^n$ and Jarque-Bera type statistic $\mathrm{JB}_{n}^k$: \begin{align}\label{yu:msnr} \hat{N}_j^k &:=(\hat{S}_n^k)^{-1/2}(\epsilon_j(\hat{\alpha}_{n}^k)-\bar{\hat{\epsilon}}_n^k), \nonumber\\ \mathrm{JB}_{n}^k &:= \frac{1}{6(n-k)}\left(\sum_{j\notin\hat{\mathcal{J}}_{n}^k} (\hat{N}_j^k)^3-3\sqrt{h_n}\sum_{j\notin\hat{\mathcal{J}}_{n}^k}\partial_x a_{j-1}(\hat{\alpha}_{n}^k)\right)^2 \\ &{}\qquad +\frac{1}{24(n-k)}\left(\sum_{j\notin\hat{\mathcal{J}}_{n}^k}((\hat{N}_j^k)^4-3)\right)^2.\nonumber \end{align} \item[{\it Step 2.}] If $\mathrm{JB}_{n}^k>\chi^2_q(2)$, then pick out the interval number \begin{equation} j(k+1):=\mathop{\rm argmax}_{j\in\{1,\dots, n\}\setminus\hat{\mathcal{J}}_{n}^k} |\Delta_{j}X|, \nonumber \end{equation} add it to the set $\hat{\mathcal{J}}_{n}^k$: \begin{equation} \hat{\mathcal{J}}_{n}^{k+1} := \hat{\mathcal{J}}_{n}^{k} \cup \{j(k+1)\}, \nonumber \end{equation} and then return to {\it Step 1}. If $\mathrm{JB}_{n}^k \le \chi^2_q(2)$, then set the estimated number of jumps to be \begin{equation} k^\star=k_n^\star(\omega) := \min\left\{ k\le n;~ \mathrm{JB}_{n}^k \le \chi^2_q(2) \right\} \nonumber \end{equation} and go to {\it Step 3}. \item[{\it Step 3.}] If $k^\star=0$, we regard that there is no jump; otherwise, we regard that each of $\Delta_{j(1)}X, \dots, \Delta_{j(k^\star)} X$ contains one jump.
Finally, set $\hat{\theta}_{n}^{k^\star}$ to be an estimator of $\theta$. \end{itemize} In practice, the above-described method enables us to divide the set of the whole increments $(\Delta_{j}X)_{j=1}^{n}$ into the following two categories: \begin{itemize} \item ``One-jump'' group $(\Delta_j X)_{j\in\hat{\mathcal{J}}_n^{k^{\star}}}=\{ \Delta_{j(1)}X,\dots,\Delta_{j(k^\star)}X\}$, and \item ``No-jump'' group $(\Delta_j X)_{j\notin\hat{\mathcal{J}}_n^{k^{\star}}}=(\Delta_{j}X)_{j=1}^{n} \setminus \{ \Delta_{j(1)}X,\dots,\Delta_{j(k^\star)}X\}$. \end{itemize} Once the jump removals stop, we automatically obtain the estimator $\hat{\theta}_{n}^{k^{\star}}$ of the drift and diffusion parts of $X$, namely the maximizer of the \textit{modified Gaussian quasi-likelihood} defined by \begin{equation*} \theta \mapsto \sum_{j\notin\hat{\mathcal{J}}_{n}^{k^\star}} \log\left\{\frac{1}{\sqrt{2\pi a^{2}_{j-1}(\alpha)h_n}}\phi\left(\frac{\Delta_{j}X - b_{j-1}(\beta)h_n}{\sqrt{a^{2}_{j-1}(\alpha)h_n}}\right)\right\}. \end{equation*} As is demonstrated in Section \ref{Asymptotic Results}, our primary setting \eqref{hm:sde} is designed not to require any optimization using a numerical search such as the quasi-Newton method. We should note that, due to the nature of the testing, there may remain a positive probability of spurious detection of jumps no matter how large the sample size is. Nevertheless, as long as the underlying model is correct, the number of removals is much smaller than the total sample size, so that spurious removals are not serious here. \begin{Rem}\label{shift} In the above-described procedure we simply remove the largest increment at each step, keeping the positions of the remaining data.
Note that in the construction of the modified estimator $\hat{\theta}_{n}^k$ it is incorrect to use the ``shifted'' samples $(Y_{t_j})_{j\notin\hat{\mathcal{J}}_n^{k_n}}$ defined by \begin{equation*} Y_{t_j}=X_{t_j}-\sum_{i\in\hat{\mathcal{J}}_n^{k_n}\cap\{1,\dots,j\}}\Delta_i X. \end{equation*} This is because the one-step transition density of the original process $X$ is spatially different from that of $Y$, so that the estimation result would not suitably reflect the information in the data. \end{Rem} \begin{Rem}\label{ite} At the $k$-th iteration, it can be regarded that we conduct the Jarque-Bera type test for the trimmed data $(X_{t_{j-1}},\Delta_j X)_{j\notin\hat{\mathcal{J}}_n^k}$. Hence the null hypothesis $\mathcal{H}^k_0$ and alternative hypothesis $\mathcal{H}^k_1$ of the test are formally written as follows: \begin{align*} &\mathcal{H}^k_0: \sharp\left\{j\in\{1,\dots,n\} \ \middle| \ \Delta_j N\geq 1 \right\}\leq k,\\ &\mathcal{H}^k_1: \sharp\left\{j\in\{1,\dots,n\} \ \middle| \ \Delta_j N\geq 1 \right\}>k, \end{align*} where $\sharp A$ denotes the cardinality of any set $A$. From this formulation, we have the inclusion relation \begin{equation*} \mathcal{H}_0\subset \mathcal{H}_0^1\subset \mathcal{H}_0^2 \subset \dots \subset \mathcal{H}_0^k\subset \cdots, \end{equation*} which implicitly suggests that we can extract more than one increment at \textit{Step 2} when several jumps seemingly exist: indeed, in view of the expectation of Poisson processes, it seems reasonable to remove at the first rejection of $\mathcal{H}_0$ not only $|\Delta_{j(1)}X|$ but also the $O(T_n)$ largest increments, thereby accelerating the termination of the procedure. \end{Rem} \begin{Rem} In practice, the size of the ``last-removed'' increment: \begin{equation} r_{n}(k^{\star}):=|\Delta_{j(k^{\star})}X| \nonumber \end{equation} would be used as a threshold for detecting jumps for future increments.
\end{Rem} \begin{Rem} \label{hm:rem_F.esti} When the jump coefficient is parameterized as $c(x,\gamma)$ and a model of the common jump distribution, say $F_J$, of the compound Poisson process $J$ is given, we may consider estimation of $\gamma$ and $F_J$ based on the sequence $\{\Delta_{j(k)}X/c_{j(k)-1}(\gamma)\}_{k=1}^{k^\star}$, supposing that they are i.i.d.\ random variables with common distribution $F_J$; note that the number of jumps tends to increase for larger $T_n$. This is beyond the scope of this paper, and we leave it as a future study. \end{Rem} \section{Asymptotic results}\label{Asymptotic Results} We now return to the model \eqref{hm:sde}. As was mentioned in the previous section, we have a choice of an estimator of $\theta$. As a matter of course, for each estimator $\hat{\theta}_{n}$, we need to study the asymptotic behavior of its modified version $\hat{\theta}_{n}^{k^\star}$. In this section, we will derive asymptotic results for a numerically tractable least-squares type estimator and the corresponding one-step improved version. For simplicity, we write \begin{align*} &\mathbb{A}(x)=(a^{(1)}(x),\dots,a^{(p_\alpha)}(x))^\top, \quad \mathbb{B}(x)=(b^{(1)}(x),\dots,b^{(p_\beta)}(x))^\top. \end{align*} \begin{Assumption}[Regularity of coefficients]\label{Ascoef} The following conditions hold: \begin{enumerate} \item $\ds{0< \inf_{x,\alpha}\mathbb{A}(x)^{\top}\alpha \wedge \inf_x |c(x)|}$ \ and \ $\ds{\sup_{x,\alpha}\mathbb{A}(x)^{\top}\alpha \vee \sup_x |c(x)| <\infty}$; \item $\ds{\left|\sqrt{\mathbb{A}(x)^\top\alpha_0}-\sqrt{\mathbb{A}(y)^\top\alpha_0}\right|+\left|\mathbb{B}(x)-\mathbb{B}(y)\right|+\left|c(x)-c(y)\right|\lesssim |x-y|, \quad x,y\in\mathbb{R}}$; \item There exists a constant $C'\ge 0$ for which $\ds{|\partial_x \mathbb{A}(x)| + |\partial_x^{2} \mathbb{A}(x)| \lesssim 1+|x|^{C'}, \quad x\in\mathbb{R}}$.
\end{enumerate} \end{Assumption} Here the supremum with respect to $\alpha$ is taken over the compact set $\bar{\Theta}_\alpha$. The basic scenario to construct an estimator of $\theta$ if $X$ had no jumps is as follows: \begin{itemize} \item We first estimate the diffusion parameter by the least-squares estimator (LSE): \begin{align} \tilde{\alpha}_n &:= \operatornamewithlimits {argmin}_\alpha \sum_{j=1}^{n} \left\{(\Delta_j X)^2-h_n \mathbb{A}_{j-1}^\top\alpha\right\}^2 \nonumber\\ &=\frac{1}{h_n} \left(\sum_{j=1}^{n} \mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top\right)^{-1} \sum_{j=1}^{n} (\Delta_j X)^2\mathbb{A}_{j-1}. \nonumber \end{align} \item We then improve the LSE through scoring with the GQL: \begin{equation}\label{ose} \hat{\alpha}_{n}:= \tilde{\alpha}_n-\left(\sum_{j=1}^{n}\frac{\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top}{(\mathbb{A}_{j-1}^\top\tilde{\alpha}_n)^2}\right)^{-1}\sum_{j=1}^{n} \left(\frac{1}{\mathbb{A}_{j-1}^\top\tilde{\alpha}_n}-\frac{(\Delta_j X)^2}{h_n(\mathbb{A}_{j-1}^\top \tilde{\alpha}_n)^2}\right)\mathbb{A}_{j-1}. \end{equation} \item Finally we estimate the drift parameter by the plug-in LSE: \begin{align} \hat{\beta}_{n}&:=\operatornamewithlimits {argmin}_\beta \sum_{j=1}^{n} \frac{(\Delta_j X-h_n \mathbb{B}_{j-1}^\top\beta)^2}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}} \nonumber\\ &=\frac{1}{h_n}\left(\sum_{j=1}^{n} \frac{\mathbb{B}_{j-1}\mathbb{B}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}}\right)^{-1} \sum_{j=1}^{n} \frac{\Delta_j X}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}}\mathbb{B}_{j-1}.
\nonumber \end{align} \end{itemize} It is known that $\tilde{\alpha}_n$ is not asymptotically efficient, while $\hat{\beta}_{n}$ is in the case where the underlying process is a diffusion, which is why we additionally consider the improved version $\hat{\alpha}_{n}$ based on the stepwise GQL: \begin{equation*} \mathbb{H}_{1,n}(\alpha):=-\frac{1}{2}\sum_{j=1}^{n}\left\{\log\left( 2\pi h_n \mathbb{A}_{j-1}^\top\alpha \right)+\frac{(\Delta_j X)^2}{h_n\mathbb{A}_{j-1}^\top\alpha}\right\}. \end{equation*} Then $\hat{\alpha}_{n}$ is asymptotically efficient under appropriate regularity conditions. The form of the second term on the right-hand side of \eqref{ose} comes from the quasi-score associated with $\mathbb{H}_{1,n}(\alpha)$ and the expression of the Fisher information matrix corresponding to $\alpha$, where the latter equals the upper left part of $\Sigma_0$ in Theorem \ref{osan}. Now, in the presence of jumps, in view of Section \ref{Proposed strategy} we introduce the modified estimators \begin{align} &\tilde{\alpha}_n^{k_n}=\frac{1}{h_n} \left(\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}} \mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top\right)^{-1} \sum_{j\notin\hat{\mathcal{J}}_n^{k_n}} (\Delta_j X)^2\mathbb{A}_{j-1}, \nonumber\\ &\hat{\beta}_{n}^{k_n}=\frac{1}{h_n}\left(\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}} \frac{\mathbb{B}_{j-1}\mathbb{B}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\right)^{-1} \sum_{j\notin\hat{\mathcal{J}}_n^{k_n}} \frac{\Delta_j X}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\mathbb{B}_{j-1}, \nonumber \end{align} where $\hat{\alpha}_{n}^{k_n}$ is the improved estimator defined by \begin{equation} \hat{\alpha}_{n}^{k_n}=\tilde{\alpha}_n^{k_n}-\left(\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}}\frac{\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top}{(\mathbb{A}_{j-1}^\top\tilde{\alpha}_n^{k_n})^2}\right)^{-1}\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}} \left(\frac{1}{\mathbb{A}_{j-1}^\top\tilde{\alpha}_n^{k_n}}-\frac{(\Delta_j X)^2}{h_n(\mathbb{A}_{j-1}^\top
\tilde{\alpha}_n^{k_n})^2}\right)\mathbb{A}_{j-1}. \langlebel{hm:ose-a} \end{equation} The inverse matrices appearing in the above definitions asymptotically exist under the forthcoming conditions, hence their existence is implicitly assumed here for brevity. What is important about these expressions is that we can calculate the modified estimators $\tilde{\alpha}_n^{k_n}$, $\hat{\beta}_{n}^{k_n}$, and $\hat{\alpha}_{n}^{k_n}$ simply by removing the indices in $\hat{\mathcal{J}}_n^{k_n}$ when computing the sums, without repeated numerical optimizations, thus reducing the computational time to a large extent. Further, it should also be noted that we may proceed with $\tilde{\alpha}_n^{k_n}$ alone, without the improved version $\hat{\alpha}_n^{k_n}$, if asymptotic efficiency is not the primary concern and a quick-to-compute estimator is preferred. To state our main result, we introduce further assumptions below. \begin{Assumption}[Stability]\langlebel{Moments}$\ $ \begin{enumerate} \item There exists a unique invariant probability measure $\partiali_0$, and for any function $f\in L_1(\partiali_0)$, we have \begin{equation*} \frac{1}{T}\int_0^T f(X_t)dt\xrightarrow{p} \int_\mathbb{R} f(x)\partiali_0(dx), \quad \mathrm{as} \ T\to\infty. \end{equation*} \item $\ds{\sup_{t\in\mathbb{R}^+} E[|X_t|^q]<\infty}$ for any $q>0$. \end{enumerate} \end{Assumption} \begin{Assumption}[Sampling design]\langlebel{Sampling} There exist constants $\kappa',\kappa \in (1/2,1)$ such that \begin{equation} n^{-\kappa'} \lesssim h_n \lesssim n^{-\kappa}. \nonumber \end{equation} \end{Assumption} Recall that the driving noise $J$ can be expressed as \begin{equation*} J_t=\sum_{i=1}^{N_t} \xi_i, \end{equation*} for a Poisson process $N$ and i.i.d.\ random variables $(\xi_i)$ independent of $N$. \begin{Assumption}[Jump size]\langlebel{Asjsize}$\ $ \begin{enumerate} \item $E[|\xi_1|^q]<\infty$ for any $q>0$.
\item In addition to Assumption \ref{Sampling}, \begin{equation} \limsup_{x\downarrow 0} x^{-s}P\left( |\xi_1| \le x \right) < \infty, \langlebel{hm:add-3} \end{equation} for some constant $s$ satisfying \begin{equation} s > \frac{4(1-\kappa)}{2\kappa -1}. \nonumber \end{equation} \end{enumerate} \end{Assumption} Here are some technical remarks on each assumption. Assumption \ref{Ascoef} ensures the existence of a {c\`adl\`ag} solution of \eqref{hm:sde}, and its Markovian property (cf. \cite[chapter 6]{App09}). Assumption \ref{Moments} is essential to derive our theoretical results. In our Markovian framework, it suffices for Assumption \ref{Moments}-(1) to have \begin{equation} \left\| P_t(x,\cdot) -\partiali_0(\cdot) \right\|_{TV} \to 0, \quad t\to\infty, \quad x\in\mathbb{R}, \langlebel{hm:ergodicity} \end{equation} for some probability measure $\partiali_0$, where $\| \mathfrak{m}(\cdot) \|_{TV}$ denotes the total variation norm of a signed measure $\mathfrak{m}$ and $\{P_t(x,dy)\}$ denotes the family of transition probabilities of $X$; then, $\partiali_0$ is the unique invariant measure of $X$, and Assumption \ref{Moments}-(1) holds for any $f\in L_1(\partiali_0)$ and any initial distribution $\mathcal{L}(X_0)$; see \cite{Bha82} for details. Further, \eqref{hm:ergodicity} with Assumption \ref{Moments}-(2) implies that \begin{equation} \int_\mathbb{R}|x|^q\partiali_0(dx)<\infty \nonumber \end{equation} for any $q>0$; this can be seen in a standard manner using Fatou's lemma and the monotone convergence theorem through a smooth truncation of the mapping $x\mapsto|x|^q$ into a compact set. We refer to \cite{Kul09}, \cite{Mas08}, and \cite{Mas13-1} for an easy-to-check sufficient condition for \eqref{hm:ergodicity} and Assumption \ref{Moments}-(2). Assumptions \ref{Sampling} and \ref{Asjsize} describe a tradeoff between the sampling frequency and the probability of small jump sizes (quicker decay of $h_n$ allows for more frequent small jumps of $J$).
We have formulated them giving preference to simplicity over generality. See Section \ref{hm:sec_pre-rems} for the technical consequences which we will actually require in the proofs. \begin{Rem} We are focusing on estimation of both drift and diffusion coefficients under ergodicity. Nevertheless, we may consistently estimate the diffusion coefficient even when the terminal sampling time is fixed, such as $T_n\equiv 1$, without ergodicity; see \cite{GenJac93}, and also \cite{InaYos18} as well as the references therein. Since \cite{Mas13-2} can handle the non-ergodic case as well, it is expected that our estimation strategy in Section \ref{Proposed strategy} would remain in place and the theoretical results in this section would have straightforward non-ergodic counterparts, valid under much weaker assumptions; in particular, we would only require \eqref{hm:add-3} for some $s>0$. \end{Rem} To investigate the asymptotic properties of our estimators, we introduce the unobserved continuous part of $X$ defined by \begin{align*} X^{\mathrm{cont}}_t=X_t-X_0-\int_0^t c(X_{s-})dJ_s=\int_0^t a(X_s,\alpha_0)dw_s+\int_0^t b(X_s,\beta_0)ds. \end{align*} Let $(\check{\alpha}_n)$ be any random sequence such that \begin{equation}\langlebel{ines} \sqrt{n}(\check{\alpha}_n-\alpha_0)=O_p(1). \end{equation} As in \eqref{hm:ose-a}, we define the random sequence $\hat{\alpha}_{n}^{\mathrm{cont}}$ by \begin{equation*} \hat{\alpha}_{n}^{\mathrm{cont}}=\check{\alpha}_n-\left(\sum_{j=1}^{n}\frac{\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top}{(\mathbb{A}_{j-1}^\top\check{\alpha}_n)^2}\right)^{-1}\sum_{j=1}^{n}\left(\frac{1}{\mathbb{A}_{j-1}^\top\check{\alpha}_n}-\frac{(\Delta_j X^{\mathrm{cont}})^2}{h_n(\mathbb{A}_{j-1}^\top \check{\alpha}_n)^2}\right)\mathbb{A}_{j-1}.
\end{equation*} Correspondingly, we also define \begin{equation*} \hat{\beta}_{n}^{\mathrm{cont}}:=\frac{1}{h_n}\left(\sum_{j=1}^{n}\frac{\mathbb{B}_{j-1}\mathbb{B}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{\mathrm{cont}}}\right)^{-1} \sum_{j=1}^{n} \frac{\Delta_j X^{\mathrm{cont}}}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{\mathrm{cont}}}\mathbb{B}_{j-1}. \end{equation*} As is expected, $(\hat{\alpha}_{n}^{\mathrm{cont}},\hat{\beta}_{n}^{\mathrm{cont}})$ would serve as a good estimator if it could be computed: \begin{Thm}\langlebel{osan} Suppose that Assumptions \ref{Ascoef} to \ref{Sampling}, and Assumption \ref{Asjsize}-(1) hold, and that both $\int \mathbb{A}(x)^{\otimes 2}\partiali_0(dx)$ and $\int \mathbb{B}(x)^{\otimes 2}\partiali_0(dx)$ are positive definite. Then we have \begin{align*} \left(\sqrt{n}(\hat{\alpha}_{n}^{\mathrm{cont}}-\alpha_0), \sqrt{T_n}(\hat{\beta}_{n}^{\mathrm{cont}}-\beta_0)\right)\xrightarrow{\mcl} N\left(0, \Sigma_0\right), \end{align*} where \begin{equation*} \Sigma_0:=\begin{pmatrix} \displaystyle2\left\{\int\left(\frac{\mathbb{A}(x)}{(\mathbb{A}(x))^\top\alpha_0}\right)^{\otimes2}\partiali_0(dx)\right\}^{-1} & \displaystyle O\\ \displaystyle O &\displaystyle \left\{\int\frac{\mathbb{B}^{\otimes2}(x)}{\mathbb{A}(x)^\top\alpha_0}\partiali_0(dx)\right\}^{-1} \end{pmatrix}. \end{equation*} \end{Thm} \begin{Rem} \langlebel{hm:rem_asymp.eff} The asymptotic covariance matrix of $\hat{\beta}^{\mathrm{cont}}_n$ is formally the efficient one; see \cite[Theorem 2.2]{KohNuaTra17}. Moreover, that of $\hat{\alpha}^{\mathrm{cont}}_n$ is the same as that of the estimators in \cite{ShiYos06} and \cite{OgiYos11} based on a jump-detection filter. \end{Rem} The next theorem states that, asymptotically, on the set $ \left\{ \mathrm{JB}_n^{k_n}\leq\chi^2_q(2)\right\}$, the number of jumps is less than $k_n$, and thus the modified LSE-type diffusion estimator $\tilde{\alpha}_n^{k_n}$ is computed over the (true) ``no-jump'' group of increments and is $\sqrt{n}$-consistent.
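In practical terms, under the linear parametrization $a^2(x,\alpha)=\mathbb{A}(x)^\top\alpha$, both $\tilde{\alpha}_n^{k_n}$ and its one-step improvement $\hat{\alpha}_n^{k_n}$ reduce to plain linear-algebra operations over the kept indices $j\notin\hat{\mathcal{J}}_n^{k_n}$. The following is a minimal numerical sketch; the function and variable names are ours, for illustration only.

```python
import numpy as np

def modified_lse_alpha(dX, A, h, removed):
    """Modified LSE-type diffusion estimator (the trimmed tilde-alpha),
    computed by dropping the indices in the removed set.

    dX      : (n,) array of increments Delta_j X
    A       : (n, p_alpha) array whose j-th row is A(X_{t_{j-1}})
    h       : sampling mesh h_n
    removed : (n,) boolean mask, True for j in the removed index set
    """
    keep = ~removed
    Ak, dXk = A[keep], dX[keep]
    G = Ak.T @ Ak                        # sum over kept j of A A^T
    rhs = Ak.T @ (dXk ** 2)              # sum over kept j of (Delta X)^2 A
    return np.linalg.solve(G, rhs) / h

def one_step_alpha(alpha_tilde, dX, A, h, removed):
    """One Newton-scoring step toward the GQL-based improved estimator."""
    keep = ~removed
    Ak, dXk = A[keep], dX[keep]
    v = Ak @ alpha_tilde                 # A_{j-1}^T alpha for kept j
    H = (Ak / v[:, None] ** 2).T @ Ak    # sum of A A^T / (A^T alpha)^2
    score = Ak.T @ (1.0 / v - dXk ** 2 / (h * v ** 2))
    return alpha_tilde - np.linalg.solve(H, score)
```

As a quick sanity check of the formulas: if every squared increment equals $h_n\mathbb{A}_{j-1}^\top\alpha_0$ exactly, both functions return $\alpha_0$, and the correction term in the one-step map vanishes.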
\begin{Thm}\langlebel{Consistency} Suppose that Assumptions \ref{Ascoef} to \ref{Asjsize} hold, and that both $\int \mathbb{A}(x)^{\otimes 2}\partiali_0(dx)$ and $\int \mathbb{B}(x)^{\otimes 2}\partiali_0(dx)$ are positive definite. Then, for any $\epsilon>0$, we can find a sufficiently large $M>0$ and $N\in\mathbb{N}$ such that \begin{equation} \sup_{n\ge N}P\left(\left\{|\sqrt{n}(\tilde{\alpha}_n^{k_n}-\alpha_0)|>M\right\}\cap\left\{\mathrm{JB}_n^{k_n}\leq\chi^2_q(2)\right\}\right)<\epsilon. \nonumber \end{equation} \langlebel{thm_consis1} \end{Thm} By re-defining $(\tilde{\alpha}_n^{k_n})$ as \begin{equation} \tilde{\alpha}_n^{k_n}=\begin{cases}\tilde{\alpha}_n^{k_n} & \quad \mathrm{on} \ \ \left\{\mathrm{JB}_n^{k_n}\leq\chi^2_q(2)\right\}, \\ \alpha_0 & \quad \mathrm{on} \ \ \left\{\mathrm{JB}_n^{k_n}>\chi^2_q(2)\right\}, \end{cases} \end{equation} $(\tilde{\alpha}_n^{k_n})$ enjoys the property \eqref{ines}: $\sqrt{n}(\tilde{\alpha}_n^{k_n}-\alpha_0)=O_p(1)$ from Theorem \ref{thm_consis1}, so that by Theorem \ref{osan}, we have \begin{align*} \left(\sqrt{n}( \hat{\alpha}_{n}^{k_n, \mathrm{cont}}-\alpha_0), \sqrt{T_n}(\hat{\beta}_{n}^{k_n, \mathrm{cont}}-\beta_0)\right)\xrightarrow{\mcl} N\left(0, \Sigma_0\right), \end{align*} where \begin{align*} &\hat{\alpha}_{n}^{k_n, \mathrm{cont}}:=\tilde{\alpha}_n^{k_n}-\left(\sum_{j=1}^{n}\frac{\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top}{(\mathbb{A}_{j-1}^\top\tilde{\alpha}_n^{k_n})^2}\right)^{-1}\sum_{j=1}^{n}\left(\frac{1}{\mathbb{A}_{j-1}^\top\tilde{\alpha}_n^{k_n}}-\frac{(\Delta_j X^{\mathrm{cont}})^2}{h_n(\mathbb{A}_{j-1}^\top \tilde{\alpha}_n^{k_n})^2}\right)\mathbb{A}_{j-1},\\ &\hat{\beta}_{n}^{k_n, \mathrm{cont}}:=\frac{1}{h_n}\left(\sum_{j=1}^{n}\frac{\mathbb{B}_{j-1}\mathbb{B}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\right)^{-1} \sum_{j=1}^{n} \frac{\Delta_j X^{\mathrm{cont}}}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\mathbb{B}_{j-1}. 
\end{align*} Recall that we finish our procedure once we have $\mathrm{JB}_n^{k_n}\leq\chi^2_q(2)$. The following theorem is the main claim of this section. \begin{Thm}\langlebel{Ae} Suppose that Assumptions \ref{Ascoef} to \ref{Asjsize} hold and that $\Sigma_0$ in Theorem \ref{osan} is positive definite. Then, for any $\epsilon>0$ and $q\in(0,1)$, we have \begin{align}\langlebel{ae} &P\left(\left\{\left|\sqrt{n}(\hat{\alpha}_{n}^{k_n}-\hat{\alpha}_{n}^{k_n,\mathrm{cont}})\right|\vee\left|\sqrt{T_n}(\hat{\beta}_{n}^{k_n}-\hat{\beta}_{n}^{k_n,\mathrm{cont}})\right|>\epsilon\right\}\cap \left\{ \mathrm{JB}_n^{k_n}\leq\chi^2_q(2)\right\}\right)\to0. \end{align} \end{Thm} \begin{Rem} Since each phase of our method is conducted under the null hypothesis, we never need to identify the true value appearing in the re-defined $\tilde{\alpha}_n^{k_n}$ in practice. \end{Rem} \begin{Rem} We should note that the number of jump removals is automatically determined by the iterative Jarque-Bera type test, and thus there is no need to choose $(k_n)$ in practice. \end{Rem} \section{Numerical experiments}\langlebel{Numerical experiments} \begin{table}[t] \caption{The performance of our estimators is given in case (i). The mean is given with the standard deviation in parentheses.
In this table, $k_n^\star$ denotes the number of jumps.} \langlebel{res3-1} \begin{center} \begin{tabular}{cccccccccc} \hline &&&&&&&&&\\[-3.5mm] $T_n$ & $n$ & $h_n$ & $k_n^\star$ & \multicolumn{6}{c}{(i)$\text{Gamma distribution}$} \\ &&&& $\hat{\alpha}_{n}^{0}$ & $\hat{\beta}_{n}^{0}$ &$\hat{\alpha}_{n}^{k_n}$ &$\hat{\beta}_{n}^{k_n}$ & $\hat{\alpha}_{n}^{k_n^\star}$&$\hat{\beta}_{n}^{k_n^\star}$ \\ \hline 28.8& 1000 & 0.03&15&18.80&0.62&3.38&0.99&3.38&1.00 \\ &&&&(4.31)&(0.13)&(0.20)&(0.09)&(0.20)&(0.09)\\ 62.1&10000&0.006&30&17.7&0.63&3.07&1.00&3.08&1.00\\ &&&&(2.91)&(0.09)&(0.05)&(0.06)&(0.04)&(0.06)\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[t] \caption{The performance of our estimators is given in case (ii). The mean is given with the standard deviation in parentheses. In this table, $k_n^\star$ denotes the number of jumps.} \langlebel{res3-2} \begin{center} \begin{tabular}{cccccccccc} \hline &&&&&&&&&\\[-3.5mm] $T_n$ & $n$ & $h_n$ & $k_n^\star$ & \multicolumn{6}{c}{(ii)$\text{Bilateral inverse Gaussian distribution}$} \\ &&&& $\hat{\alpha}_{n}^{0}$ & $\hat{\beta}_{n}^{0}$ &$\hat{\alpha}_{n}^{k_n}$ &$\hat{\beta}_{n}^{k_n}$ & $\hat{\alpha}_{n}^{k_n^\star}$&$\hat{\beta}_{n}^{k_n^\star}$ \\ \hline 28.8& 1000 & 0.03&15&10.83&0.82&3.19&0.99&3.15&1.00 \\ &&&&(3.70)&(0.22)&(0.17)&(0.14)&(0.16)&(0.14)\\ 62.1&10000&0.006&30&10.22&0.82&3.04&1.01&3.04&1.01\\ &&&&(2.46)&(0.15)&(0.06)&(0.09)&(0.05)&(0.09)\\ \hline \end{tabular} \end{center} \end{table} In this section, we conduct a Monte Carlo simulation to examine the performance of our method. First, we consider the following statistical model: \begin{equation}\langlebel{yu:simmodel} dX_t=\sqrt{\frac{\alpha}{1+\sin^2 X_t}}dw_t-\beta X_tdt+dJ_t,\quad X_0=0, \end{equation} with the true value $\theta_{0}:=(\alpha_0,\beta_0)=(3,1)$.
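Paths from \eqref{yu:simmodel} can be generated with a simple Euler-type scheme. The sketch below is ours, for illustration only: in particular, the jump intensity \texttt{lam} is an assumed value (not specified in the text), and the jump sizes are drawn from a Gamma$(4,1)$ distribution as in case (i) specified next.

```python
import numpy as np

def simulate_path(n, h, alpha=3.0, beta=1.0, lam=0.5, seed=None):
    """Euler-type scheme for dX = sqrt(alpha/(1+sin^2 X)) dw - beta*X dt + dJ,
    X_0 = 0, where J is a compound Poisson process with Gamma(4,1) jump sizes.
    The intensity `lam` is an assumed illustration value, not from the text."""
    rng = np.random.default_rng(seed)
    X = np.zeros(n + 1)
    n_jumps = rng.poisson(lam * h, size=n)   # jump counts per sampling interval
    for j in range(n):
        x = X[j]
        drift = -beta * x * h
        diff = np.sqrt(alpha / (1.0 + np.sin(x) ** 2) * h) * rng.standard_normal()
        dJ = rng.gamma(4.0, 1.0, size=n_jumps[j]).sum() if n_jumps[j] else 0.0
        X[j + 1] = x + drift + diff + dJ
    return X
```

Estimation is then carried out on the increments $\Delta_j X$ of the returned path.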
As the jump size distributions, we set: \begin{itemize} \item[(i)] Gamma distribution $\Gamma(4,1)$ (one-sided positive jumps); \item[(ii)] Bilateral inverse Gaussian distribution $bIG(2,1,4,1)$ (two-sided jumps). \end{itemize} The bilateral inverse Gaussian random variable $X\sim bIG(\delta_1,\gamma_1,\delta_2,\gamma_2)$ is defined as the difference of two independent inverse Gaussian random variables $X_1\sim IG(\delta_1,\gamma_1)$ and $X_2\sim IG(\delta_2,\gamma_2)$. In the trials, we set the significance level $q=10^{-3}$ and fixed the number of jumps just for purposes of comparison. Based on 1000 independently simulated sample paths, the mean and standard deviation of our estimator $(\hat{\alpha}_{n}^{k_n},\hat{\beta}_{n}^{k_n})$ are tabulated in Table \ref{res3-1} and Table \ref{res3-2} with the estimators $(\hat{\alpha}_{n}^0,\hat{\beta}_{n}^0)$ and $(\hat{\alpha}_{n}^{k_n^\star},\hat{\beta}_{n}^{k_n^\star})$. The first estimator $(\hat{\alpha}_{n}^0,\hat{\beta}_{n}^0)$ is constructed from the whole data, and the latter estimator $(\hat{\alpha}_{n}^{k_n^\star},\hat{\beta}_{n}^{k_n^\star})$ is constructed from the true no-jump group. From these tables, the following observations can be made: \begin{itemize} \item In both cases, the modified estimators get closer and closer to the true value as jump removals proceed. \item Since the performances of $(\hat{\alpha}_{n}^{k_n},\hat{\beta}_{n}^{k_n})$ and $(\hat{\alpha}_{n}^{k_n^\star},\hat{\beta}_{n}^{k_n^\star})$ are almost the same, the jump detection by our method works well. \item Concerning the drift estimator, the degree of improvement is not large for (ii) relative to (i). This may be due to the two-sided jump structure of $bIG(2,1,4,1)$; thus the amount of improvement is generally expected to be much more significant when the jump distribution is skewed. \item In the estimator $(\hat{\alpha}_{n}^0,\hat{\beta}_{n}^0)$, the performance of $\hat{\alpha}_{n}^0$ is worse than $\hat{\beta}_{n}^0$. This is because the diffusion estimator is based on the square of the increments $(\Delta_j X)_j$, thus being heavily affected by jumps. \item Overall, the diffusion parameter is overestimated even by $\hat{\alpha}_{n}^{k_n^\star}$. Taking into consideration the fact that the mean-reverting level of $X$ is 0, the magnitude of the increments should be larger right after a jump occurs. Thus, such overestimation can be seen even though the jumps are correctly picked. \end{itemize} \noindent \textbf{Acknowledgement.} This work was supported by JST, CREST Grant Number JPMJCR14D7, Japan. \appendix \section{Proofs of the results in Section \ref{Asymptotic Results}} \langlebel{hm:sec_proofs} Throughout this section, Assumptions \ref{Ascoef} to \ref{Asjsize} are in force. \subsection{Technical remarks} \langlebel{hm:sec_pre-rems} We summarize a few consequences of Assumptions \ref{Sampling} and \ref{Asjsize}. All of them will be used later on. \begin{itemize} \item Under Assumption \ref{Sampling}, $T_n\to\infty$ and there exists a positive constant $\delta=\delta(\kappa,\kappa')\in(0,1)$ such that \begin{equation} \frac{\log n}{T_n} \vee \left(n^{1+\delta}h_n^{2+\delta}(\log n)^2\right) \to 0. \langlebel{hm:add-4} \end{equation} This entails $nh_n^2\log n=o\left(T_n^{-\delta}/\log n\right)=o(1)$, stronger than the so-called ``rapidly increasing design'': $nh_n\to\infty$ and $nh_n^2\to0$, which is one of the standard conditions in the literature of statistical inference for ergodic processes based on high-frequency data. Assumption \ref{Sampling} will be required for handling the extreme values of the solution process $X$, and for asymptotically allowing the number of jump-removal operations to exceed the expected number of jump times. \item Introduce the positive sequence \begin{equation} a_n = a_n(\eta):=T_n^{\eta} \langlebel{hm:an_def} \end{equation} for a constant $\eta>0$.
We see that, with a sufficiently small $\eta$, the following statements follow from Assumptions \ref{Sampling} and \ref{Asjsize}: \begin{align} & a_n^3\sqrt{h_n\log n} \to 0, \langlebel{hm:conseq-1}\\ & T_n P\left( |\xi_1|\leq M \sqrt{n}h_n \right) \to 0, \langlebel{hm:conseq-3} \\ & \max_{1\leq j \leq \lfloor T_n\rfloor} |\xi_j| = O_p(a_n). \langlebel{hm:conseq-2} \end{align} Under Assumption \ref{Sampling}, we can pick an $\eta>0$ small enough to ensure \eqref{hm:conseq-1}; in the sequel, we fix this $\eta$. Also, we note \begin{align} & T_n P\left( |\xi_1|\leq M \sqrt{n}h_n \right) \lesssim n^{-s\kappa/2 + (1-\kappa)(1+s/2)}\to0, \nonumber \end{align} from Assumption \ref{Asjsize}-(2). As for \eqref{hm:conseq-2}, observe that for any $\epsilon>0$ \begin{equation*} P\left(a_n^{-1}\max_{1\leq j \leq \lfloor T_n\rfloor} |\xi_j| > \epsilon\right)=1-\left\{1-P(|\xi_1|> a_n\epsilon)\right\}^{\lfloor T_n\rfloor}. \end{equation*} The right-hand side tends to $0$ if the upper bound (due to Markov's inequality) of \begin{equation} T_n P(|\xi_1|> a_n\epsilon) \lesssim n^{1-\kappa-q\eta(1-\kappa')} \nonumber \end{equation} tends to $0$. This holds true under Assumption \ref{Asjsize}-(1) by taking a large constant $q$. \end{itemize} For abbreviation, we will use the following notations: \begin{itemize} \item For any matrix $S=\{S_{kl}\}$, we denote by $|S|:=(\sum_{k,l}S_{kl}^2)^{1/2}$ its Frobenius norm. \item $I_p$ represents the $p$-dimensional identity matrix. \item $R(x)$ denotes a differentiable matrix-valued function on $\mathbb{R}$ for which there exists a constant $C>0$ such that $|R(x)|+|\partial_x R(x)|\lesssim (1+|x|)^C$, $x\in\mathbb{R}$. \item We often write $a(x,\alpha)$ and $b(x,\beta)$ instead of $\sqrt{(\mathbb{A}(x))^\top\alpha}$ and $(\mathbb{B}(x))^\top\beta$. \item $E^{j-1}[\cdot]$ denotes the conditional expectation with respect to $\mathcal{F}_{t_{j-1}}$. 
\item We often omit the true value $\theta_{0}$; for instance, $a_s$ and $a_{j}$ denote $a(X_s,\alpha_0)$ and $a(X_{t_j},\alpha_0)$, respectively. \end{itemize} \subsection{Proof of Theorem \ref{osan}} Let us recall that \begin{align*} X^{\mathrm{cont}}_t:=X_t - X_0 -\int_0^t c(X_{s-})dJ_s=\int_0^t a(X_s,\alpha_0)dw_s+\int_0^t b(X_s,\beta_0)ds. \end{align*} First we prove a preliminary lemma. \begin{Lem}\langlebel{ana} The $(p_\alpha+p_\beta)$-dimensional random sequence \begin{align*} \left(\frac{1}{\sqrt{n}}\sum_{j=1}^{n}\left(\frac{1}{\mathbb{A}_{j-1}^\top\alpha_0}-\frac{(\Delta_j X^{\mathrm{cont}})^2}{h_n(\mathbb{A}_{j-1}^\top \alpha_0)^2}\right)\mathbb{A}_{j-1}, \, \frac{1}{\sqrt{T_n}}\sum_{j=1}^{n} \frac{\Delta_j X^{\mathrm{cont}}-h_n\mathbb{B}_{j-1}^\top\beta_0}{\mathbb{A}_{j-1}^\top\alpha_0}\mathbb{B}_{j-1}\right) \end{align*} weakly converges to the centered normal distribution with covariance matrix \begin{equation*} \begin{pmatrix}\displaystyle2\int \left(\frac{\mathbb{A}(x)}{(\mathbb{A}(x))^\top\alpha_0}\right)^{\otimes2}\partiali_0(dx) &\displaystyle O \\\displaystyle O &\displaystyle \int \frac{\mathbb{B}^{\otimes2}(x)}{\mathbb{A}(x)^\top\alpha_0}\partiali_0(dx)\end{pmatrix}. \end{equation*} \end{Lem} \begin{proof} By the Cram\'{e}r-Wold device, it is enough to show the case where $p_\alpha=p_\beta=1$.
From the martingale central limit theorem, the desired result follows if we show \begin{align} &\langlebel{ford}\frac{1}{\sqrt{n}}\sum_{j=1}^{n} E^{j-1}\left[\left(\frac{1}{a_{j-1}^2}-\frac{(\Delta_j X^{\mathrm{cont}})^2}{h_na_{j-1}^4}\right)\mathbb{A}_{j-1}\right]\xrightarrow{p}0,\\ &\langlebel{sord}\frac{1}{n}\sum_{j=1}^{n} E^{j-1}\left[\left\{\left(\frac{1}{a_{j-1}^2}-\frac{(\Delta_j X^{\mathrm{cont}})^2}{h_na_{j-1}^4}\right)\mathbb{A}_{j-1}\right\}^2\right]\xrightarrow{p} 2\int \left(\frac{\mathbb{A}(x)}{a^2(x,\alpha_0)}\right)^2\partiali_0(dx),\\ &\langlebel{foord}\frac{1}{n^2}\sum_{j=1}^{n} E^{j-1}\left[\left\{\left(\frac{1}{a_{j-1}^2}-\frac{(\Delta_j X^{\mathrm{cont}})^2}{h_na_{j-1}^4}\right)\mathbb{A}_{j-1}\right\}^4\right]\xrightarrow{p}0,\\ &\langlebel{bford}\frac{1}{\sqrt{T_n}}\sum_{j=1}^{n} E^{j-1}\left[\frac{\Delta_j X^{\mathrm{cont}}-h_nb_{j-1}}{a^2_{j-1}}\mathbb{B}_{j-1}\right]\xrightarrow{p}0,\\ &\langlebel{bsord}\frac{1}{T_n}\sum_{j=1}^{n} E^{j-1} \left[\left(\frac{\Delta_j X^{\mathrm{cont}}-h_nb_{j-1}}{a^2_{j-1}}\mathbb{B}_{j-1}\right)^2\right]\xrightarrow{p} \int \frac{\mathbb{B}^2(x)}{a^2(x,\alpha_0)}\partiali_0(dx),\\ &\langlebel{bfoord}\frac{1}{(T_n)^2}\sum_{j=1}^{n} E^{j-1} \left[\left(\frac{\Delta_j X^{\mathrm{cont}}-h_nb_{j-1}}{a^2_{j-1}}\mathbb{B}_{j-1}\right)^4\right]\xrightarrow{p}0\\ &\langlebel{cross}\frac{1}{n\sqrt{h_n}}\sum_{j=1}^{n} E^{j-1} \left[\left(\frac{1}{a_{j-1}^2}-\frac{(\Delta_j X^{\mathrm{cont}})^2}{h_na_{j-1}^4}\right)\frac{\Delta_j X^{\mathrm{cont}}-h_nb_{j-1}}{a^2_{j-1}}\mathbb{A}_{j-1}\mathbb{B}_{j-1}\right]\xrightarrow{p}0. \end{align} By using the martingale property of the stochastic integral, Jensen's inequality, the Lipschitz continuity of $b$, and \cite[Lemma 4.5]{Mas13-1}, we have \begin{align}\langlebel{fice} E^{j-1}[\Delta_j X^{\mathrm{cont}}]=h_n b_{j-1}+\int_{t_{j-1}}^{t_j} E^{j-1}[b_s-b_{j-1}]ds=h_nb_{j-1}+h_n^{\frac{3}{2}}R_{j-1}. 
\end{align} It\^{o}'s formula and Fubini's theorem for conditional expectation yield that \begin{align*} &E^{j-1}[(\Delta_j X^{\mathrm{cont}})^2]\\ &=E^{j-1}\left[2\int_{t_{j-1}}^{t_j} (X^{\mathrm{cont}}_s-X^{\mathrm{cont}}_{j-1})dX^{\mathrm{cont}}_s+\int_{t_{j-1}}^{t_j} (a_s^2-a^2_{j-1})ds+a^2_{j-1}h_n\right]\\ &=a^2_{j-1}h_n+2\int_{t_{j-1}}^{t_j} \left(\int_{t_{j-1}}^sE^{j-1}\left[b_ub_s\right]du\right)ds+\int_{t_{j-1}}^{t_j} E^{j-1}[a_s^2-a^2_{j-1}]ds. \end{align*} Again making use of the Lipschitz continuity of $b(x,\beta_0)$ and \cite[Lemma 4.5]{Mas13-1}, we get \begin{align*} &\left|\int_{t_{j-1}}^{t_j} \left(\int_{t_{j-1}}^sE^{j-1}\left[b_ub_s\right]du\right)ds\right|\\ &\lesssim \int_{t_{j-1}}^{t_j} \left(\int_{t_{j-1}}^sE^{j-1}\left[1+|X_u|+|X_s|+|X_u||X_s|\right]du\right)ds\\ &\lesssim h_n^2(1+|X_{j-1}|^2). \end{align*} Since $\partial_x a^2(x,\alpha)$ and $\partial_x^2 a^2(x,\alpha)$ are of at most polynomial growth with respect to $x$ uniformly in $\alpha$, we can similarly deduce that \begin{align*} &\left|\int_{t_{j-1}}^{t_j} E^{j-1}[a_s^2-a^2_{j-1}]ds\right|\\ &\lesssim \bigg|\int_{t_{j-1}}^{t_j} E^{j-1}\bigg[\partial_x a^2_{j-1}(X_s-X_{j-1}) \nonumber\\ &{}\qquad +\frac{1}{2}\int_0^1\int_0^1 \partial_x^2 a^2(X_{j-1}+uv(X_s-X_{j-1}),\alpha_0)dudv(X_s-X_{j-1})^2\bigg]ds\bigg| \\ &\lesssim \left|\int_{t_{j-1}}^{t_j} \left(\int_{t_{j-1}}^sE^{j-1}[b_u]du\right)ds\partial_x a^2_{j-1}\right| \nonumber\\ &{}\qquad + \int_{t_{j-1}}^{t_j} E^{j-1}\left[\left(1+|X_{j-1}|^C+|X_s-X_{j-1}|^C\right)(X_s-X_{j-1})^2\right]ds\\ &\lesssim h_n^2 R_{j-1}. \end{align*} Hence \begin{equation}\langlebel{sce} E^{j-1}[(\Delta_j X^{\mathrm{cont}})^2]=h_na^2_{j-1}+h_n^2R_{j-1}. \end{equation} For any $q\geq 2$, Burkholder's inequality for conditional expectation gives \begin{equation}\langlebel{ece} E^{j-1}\left[ |\Delta_j X^{\mathrm{cont}}|^q \right]\lesssim h_n^{\frac{q}{2}} R_{j-1}.
\end{equation} Repeatedly using It\^{o}'s formula and \eqref{ece}, we have \begin{align}\langlebel{fce} &E^{j-1}[(\Delta_j X^{\mathrm{cont}})^4]\nonumber\\ &=E^{j-1}\left[4\int_{t_{j-1}}^{t_j}(X^{\mathrm{cont}}_s-X^{\mathrm{cont}}_{j-1})^3dX^{\mathrm{cont}}_s+6\int_{t_{j-1}}^{t_j} (X^{\mathrm{cont}}_s-X^{\mathrm{cont}}_{j-1})^2a_s^2ds\right]\nonumber\\ &=6E^{j-1}\Bigg[\int_{t_{j-1}}^{t_j} \left\{2\int_{t_{j-1}}^s (X^{\mathrm{cont}}_u-X^{\mathrm{cont}}_{j-1})dX^{\mathrm{cont}}_u+\int_{t_{j-1}}^s (a_u^2-a^2_{j-1})du+(s-t_{j-1})a^2_{j-1}\right\} dsa^2_{j-1}\nonumber\\ &\quad \quad+\int_{t_{j-1}}^{t_j} \left\{2\int_{t_{j-1}}^s (X^{\mathrm{cont}}_u-X^{\mathrm{cont}}_{j-1})dX^{\mathrm{cont}}_u +\int_{t_{j-1}}^s a_u^2du \right\} (a_s^2-a_{j-1}^2)ds\Bigg] \nonumber\\ &{}\qquad +h_n^{\frac{5}{2}}R_{j-1}\nonumber\\ &=3h_n^2a_{j-1}^4+h_n^{\frac{5}{2}}R_{j-1}. \end{align} In particular, it follows from \eqref{sce} and \eqref{fce} that \begin{align}\langlebel{hm:add-1} E^{j-1}\left[\left\{\left(\frac{1}{a_{j-1}^2}-\frac{(\Delta_j X^{\mathrm{cont}})^2}{h_na_{j-1}^4}\right)\mathbb{A}_{j-1}\right\}^2\right] =\frac{2}{a_{j-1}^4} \mathbb{A}_{j-1}^{\otimes 2}+h_n^{\frac{1}{2}}R_{j-1}. \end{align} Now, the convergences \eqref{ford}-\eqref{cross} follow from the expressions \eqref{fice}-\eqref{ece} and \eqref{hm:add-1} together with the ergodic theorem (Assumption \ref{Moments}-(1)). Thus we obtain the desired result. \end{proof} Applying Taylor's expansion, we have, with the obvious notation, \begin{align*} \frac{1}{n}\sum_{j=1}^{n}\frac{\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top}{(\mathbb{A}_{j-1}^\top\check{\alpha}_n)^2}=\frac{1}{n}\sum_{j=1}^{n}\frac{\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top}{(\mathbb{A}_{j-1}^\top\alpha_0)^2}+\left\{\frac{1}{n}\int_0^1\sum_{j=1}^{n}\partial_\alpha\left(\frac{\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top}{\left[\mathbb{A}_{j-1}^\top(\alpha_0+u(\check{\alpha}_n-\alpha_0))\right]^2}\right)du\right\}[\check{\alpha}_n-\alpha_0]. 
\end{align*} The ergodic theorem implies that \begin{equation} \frac{1}{n}\sum_{j=1}^{n}\frac{\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top}{(\mathbb{A}_{j-1}^\top\alpha_0)^2} \xrightarrow{p} \int \left(\frac{\mathbb{A}(x)}{\mathbb{A}(x)^\top\alpha_0}\right)^{\otimes2}\partiali_0(dx), \nonumber \end{equation} the limit being positive definite. From Assumption \ref{Ascoef} and the condition $\sqrt{n}(\check{\alpha}_n-\alpha_0)=O_p(1)$, the second term on the right-hand side is $o_p(1)$. We also have \begin{align*} &\frac{1}{\sqrt{n}}\sum_{j=1}^{n}\left(\frac{1}{\mathbb{A}_{j-1}^\top\check{\alpha}_n}-\frac{(\Delta_j X^{\mathrm{cont}})^2}{h_n(\mathbb{A}_{j-1}^\top \check{\alpha}_n)^2}\right)\mathbb{A}_{j-1}\\ &=\frac{1}{\sqrt{n}}\sum_{j=1}^{n}\left(\frac{1}{\mathbb{A}_{j-1}^\top\alpha_0}-\frac{(\Delta_j X^{\mathrm{cont}})^2}{h_n(\mathbb{A}_{j-1}^\top \alpha_0)^2}\right)\mathbb{A}_{j-1}\\ &+\left\{\frac{1}{n}\sum_{j=1}^{n}\left(-\frac{1}{(\mathbb{A}_{j-1}^\top\alpha_0)^2}+2\frac{(\Delta_j X^{\mathrm{cont}})^2}{h_n(\mathbb{A}_{j-1}^\top \alpha_0)^3}\right)\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top\right\}[\sqrt{n}(\check{\alpha}_n-\alpha_0)]+o_p(1). \end{align*} By \eqref{sce}, \eqref{fce}, and \cite[Lemma 9]{GenJac93}, it follows that \begin{equation*} \frac{1}{n}\sum_{j=1}^{n}\left(-\frac{1}{(\mathbb{A}_{j-1}^\top\alpha_0)^2}+2\frac{(\Delta_j X^{\mathrm{cont}})^2}{h_n(\mathbb{A}_{j-1}^\top \alpha_0)^3}\right)\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top\xrightarrow{p}\int \left(\frac{\mathbb{A}(x)}{(\mathbb{A}(x))^\top\alpha_0}\right)^{\otimes2}\partiali_0(dx).
\end{equation*} Hence we obtain \begin{align*} \sqrt{n}(\hat{\alpha}^{\mathrm{cont}}_n-\alpha_0)=-\left\{\int \left(\frac{\mathbb{A}(x)}{(\mathbb{A}(x))^\top\alpha_0}\right)^{\otimes2}\partiali_0(dx)\right\}^{-1}\frac{1}{\sqrt{n}}\sum_{j=1}^{n}\left(\frac{1}{\mathbb{A}_{j-1}^\top\alpha_0}-\frac{(\Delta_j X^{\mathrm{cont}})^2}{h_n(\mathbb{A}_{j-1}^\top \alpha_0)^2}\right)\mathbb{A}_{j-1}+o_p(1), \end{align*} and similarly we have \begin{equation*} \sqrt{T_n}(\hat{\beta}_{n}^{\mathrm{cont}}-\beta_0)=\left(\int \frac{\mathbb{B}^{\otimes2}(x)}{\mathbb{A}(x)^\top\alpha_0}\partiali_0(dx)\right)^{-1}\frac{1}{\sqrt{T_n}}\sum_{j=1}^{n} \frac{\Delta_j X^{\mathrm{cont}}-h_nb_{j-1}}{\mathbb{A}_{j-1}^\top\alpha_0}\mathbb{B}_{j-1}+o_p(1). \end{equation*} Theorem \ref{osan} now follows from applying Slutsky's lemma and Lemma \ref{ana} to these two equations. Before we proceed to the proof of Theorem \ref{Consistency} and Theorem \ref{Ae}, we comment on an upper bound for $k_n$ and $N_{T_n}$ which may be assumed without loss of generality. By the Lindeberg-Feller theorem we have \begin{equation*} \frac{N_{T_n}-\langlembda T_n}{\sqrt{\langlembda T_n}} =\sum_{j=1}^{n}\frac{\Delta_j N - \langlembda h_n}{\sqrt{\langlembda T_n}} \xrightarrow{\mcl} N(0,1), \end{equation*} so that for any positive nondecreasing sequence $(l_n)$ satisfying $\frac{l_n-\langlembda T_n}{\sqrt{\langlembda T_n}}\to\infty$, we have \begin{equation} P\left(N_{T_n}\geq l_n\right)=P\left(\frac{N_{T_n}-\langlembda T_n}{\sqrt{\langlembda T_n}}\geq\frac{l_n-\langlembda T_n}{\sqrt{\langlembda T_n}}\right)\to0; \nonumber \end{equation} in particular, this implies that the probability of the event $\{N_{T_n}\geq (\langlembda+1)T_n\}$ is asymptotically negligible. Thus, we hereafter set $k_n\leq (\langlembda+1)T_n-1=O(T_n)$, and replace the event $\{N_{T_n}\geq k_n+1\}$ by $\{ k_n+1\leq N_{T_n}\leq (\langlembda+1)T_n\}$ without further mention.
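The asymptotic negligibility of $\{N_{T_n}\geq(\langlembda+1)T_n\}$ is also easy to check numerically. The following is a small Monte Carlo sketch; the function name and simulation sizes are ours, for illustration only.

```python
import numpy as np

def prob_exceed(lam, T, n_sim=100_000, seed=0):
    """Monte Carlo estimate of P(N_T >= (lam + 1) * T) for a Poisson
    process N with intensity lam, so that N_T ~ Poisson(lam * T)."""
    rng = np.random.default_rng(seed)
    N_T = rng.poisson(lam * T, size=n_sim)
    return float(np.mean(N_T >= (lam + 1.0) * T))
```

For $\langlembda=1$ the estimate is roughly $3.5\times10^{-3}$ at $T=10$ and numerically $0$ at $T=100$, in line with the normal approximation, since $(l_n-\langlembda T_n)/\sqrt{\langlembda T_n}=\sqrt{T_n/\langlembda}\to\infty$ for $l_n=(\langlembda+1)T_n$.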
\subsection{Proof of Theorem \ref{Consistency}} Since it is easy to deduce that for a fixed $M>0$, \begin{align*} &P\left(\left\{|\sqrt{n}(\tilde{\alpha}_n^{k_n}-\alpha_0)|>M\right\}\cap\left\{\mathrm{JB}_n^{k_n}\leq\chi^2_q(2)\right\}\right)\\ &\leq P\left(\left\{|\sqrt{n}(\tilde{\alpha}_n^{k_n}-\alpha_0)|>M\right\}\cap\left\{1\leq N_{T_n}\leq k_n\right\}\right) + P\left(\left\{k_n+1\leq N_{T_n}\leq (\langlembda+1)T_n\right\}\cap\left\{\mathrm{JB}_n^{k_n}\leq\chi^2_q(2)\right\}\right)+o(1), \end{align*} the desired result follows if we show that for any $\epsilon>0$, there exist positive constants $M$ and $N\in\mathbb{N}$ satisfying \begin{align} &\sup_{n\ge N}P\left(\left\{|\sqrt{n}(\tilde{\alpha}_n^{k_n}-\alpha_0)|>M\right\}\cap\left\{1\leq N_{T_n}\leq k_n\right\}\right)<\epsilon, \langlebel{yu:C1}\\ &\sup_{n\ge N}P\left(\left\{k_n+1\leq N_{T_n}\leq (\langlembda+1)T_n\right\}\cap\left\{\mathrm{JB}_n^{k_n}\leq\chi^2_q(2)\right\}\right)<\epsilon. \langlebel{yu:C2} \end{align} We now prove these two bounds separately, introducing some fundamental lemmas along the way. \subsubsection{Proof of \eqref{yu:C1}} We will write $\{\tau_i\}_{i\in\mathbb{N}}$ for the jump times of $N$, and $B_n$ for the event that the Poisson process $N$ has at most one jump in each of the intervals $[t_{j-1},t_j)$, $j=1,\dots,n$: \begin{align} B_{n} &:= \left\{^\exists i\in\mathbb{N}, ^\exists j\in\{1,\dots,n\} \ \ \mbox{s.t.} \ \ \tau_i, \tau_{i+1}\in[t_{j-1},t_j)\right\}^c. \nonumber \end{align} \begin{Lem} \langlebel{ojump} $P(B_n) = 1 - O(nh_n^2)$. \end{Lem} \begin{proof} For each $i\ge 2$, the conditional distribution of $(\tau_1/T_n,\dots,\tau_i/T_n)$ given the event $\{N_{T_n}=i\}$ equals that of the order statistics $U_{(1)}\le\dots\le U_{(i)}$ of $i$ i.i.d. $(0,1)$-uniformly distributed random variables \cite[Proposition 3.4]{Sat99}. Moreover, each spacing $U_{(l+1)}-U_{(l)}$ admits the density $s\mapsto i(1-s)^{i-1}$, $0<s<1$; see, e.g., \cite{Pyk65}.
Then, \begin{align} P(B_n^c) &= \sum_{i=2}^\infty P(N_{T_n}=i) P( B_n^c \, |\, N_{T_n}=i ) \nonumber\\ &\le \sum_{i=2}^\infty P(N_{T_n}=i) P\big( {}^\exists j\in\{2,\dots, i\} \ \ \mbox{s.t.} \ \ \tau_{j}-\tau_{j-1}<h_n \, \big| \, N_{T_n}=i \big) \nonumber\\ &\le \sum_{i=2}^\infty P(N_{T_n}=i) \times (i-1) \int_0^{1/n} i(1-s)^{i-1}ds \nonumber\\ &\lesssim \sum_{i=2}^\infty e^{-\langlembda T_n} \frac{(\langlembda T_n)^{i}}{(i-2)!}\times \frac1n \lesssim \frac{T_n^2}{n} =nh_n^2. \nonumber \end{align} \end{proof} Let \begin{align} C_{k,n} &:=\left\{ {}^\exists i\in\mathbb{N},\, {}^\exists j\in\{1,\dots,n\} \ \ \mbox{s.t.} \ \ \tau_i \in [t_{j-1},t_j) \ \ \mbox{and} \ \ j\notin\hat{\mathcal{J}}_n^{k} \right\}^c \nonumber \end{align} denote the event that all jumps up to time $T_n$ are correctly removed. The next lemma shows the asymptotic negligibility of the failure-to-detect probability on the event $\left\{1\leq N_{T_n}\leq k_n \right\}\cap B_n$. \begin{Lem} \langlebel{cpick} $P(C_{k_n,n}^{\,c}\cap \left\{1\leq N_{T_n}\leq k_n \right\}\cap B_n) \to 0$. \langlebel{proof_lem2} \end{Lem} \begin{proof} Hereafter we use the following notation: \begin{align*} \mathcal{D}_n &= \{j\le n:\, ^\exists i, \text{ s.t. } \tau_i \in[t_{j-1},t_j)\}, \nonumber\\ \mathcal{C}_n &= \{1,\dots,n\}\setminus \mathcal{D}_n. \end{align*} Write \begin{equation} \eta_j=\frac{\Delta_j w}{\sqrt{h_n}}, \qquad j\le n.
\nonumber \end{equation} Recalling that the set $\hat{\mathcal{J}}_n^{k_{n}}$ of removed indices is constructed by picking out the $k_n$ largest increments in magnitude, we have \begin{align} & P\Big( C_{k_n,n}^{\,c}\cap \left\{1\leq N_{T_n}\leq k_n \right\}\cap B_n \Big) \nonumber\\ &\leq P(\{^\exists j'\in\mathcal{D}_n, j''\in\mathcal{C}_n \ \ \mbox{s.t.} \ \ |\Delta_{j'} X|<|\Delta_{j''} X|\}\cap\{1\leq N_{T_n}\leq k_n\}\cap B_n) \nonumber\\ &\leq P\bigg(\bigg\{ {}^\exists j'\in\mathcal{D}_n, j''\in\mathcal{C}_n \ \ \mbox{s.t.} \ \ \inf_x|c(x)|\min_{1\leq j\leq N_{T_n}}|\xi_j| \nonumber\\ &{}\qquad<\bigg|\int_{t_{j'-1}}^{t_{j'}}b_sds+\int_{t_{j'-1}}^{t_{j'}} a_sdw_s\bigg|+\bigg|\int_{t_{{j''}-1}}^{t_{j''}}b_sds+\int_{t_{{j''}-1}}^{t_{j''}} a_sdw_s\bigg|\bigg\} \cap\{1\leq N_{T_n}\leq k_n\}\cap B_n\Bigg) \nonumber\\ &\leq P\bigg( \bigg\{\inf_x|c(x)|\min_{1\leq j\leq N_{T_n}}|\xi_j| < 2\sqrt{h_n}\sup_x |a(x,\alpha_0)| \max_{1\leq j\leq n}|\eta_j| \nonumber\\ &{}\qquad +2\max_{1\leq j\leq n}\bigg(\bigg|\int_{t_{j-1}}^{t_j} b_sds\bigg|+\bigg|\int_{t_{j-1}}^{t_j}(a_s-a_{j-1})dw_s\bigg|\bigg)\bigg\} \cap \{1\leq N_{T_n}\leq k_n\}\cap B_n\Bigg) \nonumber\\ &\leq P\Bigg(\left\{ \inf_x|c(x)|^2\min_{1\leq j\leq N_{T_n}}|\xi_j|^2<r_{1,n}+r_{2,n}\right\} \cap\{1\leq N_{T_n}\leq k_n\}\cap B_n\Bigg), \label{A3-1} \end{align} where \begin{align*} &r_{1,n}:=8h_n\sup_x a^2(x,\alpha_0) \max_{1\leq j\leq n}|\eta_j|^2, \\ &r_{2,n}:=8\sum_{j=1}^{n}\left\{\left(\int_{t_{j-1}}^{t_j} b_sds\right)^2+\left(\int_{t_{j-1}}^{t_j} (a_s-a_{j-1})dw_s\right)^2\right\}. \end{align*} From extreme value theory (cf. \cite[Table 3.4.4]{EmbKluMik97}), we have \begin{equation} \nonumber \max_{1\leq j\leq n } |\eta_j|^2-2\left(\log n- \frac{1}{2}\log\log n-\log\Gamma\left(\frac{1}{2}\right)\right)=O_p(1). \end{equation} This together with Assumption \ref{Ascoef} and \eqref{hm:add-4} leads to \begin{equation*} r_{1,n}=O_p(h_n\log n) =O_p(nh_n^2).
\end{equation*} Jensen's and Burkholder's inequalities together with \cite[Lemma 4.5]{Mas13-1} give $E[r_{2,n}]\lesssim nh_n^2$, so that \begin{equation*} r_{2,n}=O_p(nh_n^2). \end{equation*} Hence, for any $\epsilon\in(0,1)$, we can pick sufficiently large $N$ and $K$ such that for all $n\geq N$, \begin{equation*} P\left( r_{1,n} + r_{2,n} > K nh_n^2\right) <\epsilon. \end{equation*} Building on these estimates, $E[N_{T_n}]=\lambda T_n$, and the independence between $N$ and $(\xi_j)$, we see that the upper bound in \eqref{A3-1} can be further bounded by \begin{align} & P\Bigg(\left\{\min_{1\leq j\leq N_{T_n}}|\xi_j|^2<\frac{K}{\inf_x|c(x)|^2}nh_n^2\right\}\cap\{1\leq N_{T_n}\leq k_n\}\cap B_n\Bigg) + \epsilon \nonumber\\ &\le \sum_{i=1}^{k_{n}}P\Bigg(\left\{\min_{1\leq j\leq i }|\xi_j|^2<\frac{K}{\inf_x|c(x)|^2}nh_n^2\right\}\cap\{N_{T_n}=i\}\Bigg) + \epsilon \nonumber\\ &\le \sum_{i=1}^{k_{n}} i P\left(|\xi_1|^2<\frac{K}{\inf_x|c(x)|^2}nh_n^2\right) P\left( N_{T_n}=i\right) + \epsilon \nonumber\\ &\lesssim T_n P\left(|\xi_1|<\frac{\sqrt{K}}{\inf_x|c(x)|}\sqrt{n}h_n\right) + \epsilon. \label{A3-2} \end{align} Since the choice of $\epsilon$ is arbitrary, \eqref{hm:conseq-3} implies the desired result. \end{proof} Let us introduce the event \begin{equation} G_{k_{n},n} := \left\{1\leq N_{T_n}\leq k_n\right\}\cap B_n\cap C_{k_n,n}. \nonumber \end{equation} Thanks to Lemmas \ref{ojump} and \ref{cpick}, it follows that \begin{align*} &P\left(\left\{|\sqrt{n}(\tilde{\alpha}_n^{k_n}-\alpha_0)|>M\right\}\cap\left\{1\leq N_{T_n}\leq k_n\right\}\right)\\ &\leq P\left(\left\{|\sqrt{n}(\tilde{\alpha}_n^{k_n}-\alpha_0)|>M\right\}\cap G_{k_{n},n} \right)+P(C_{k_n,n}^{\,c}\cap \left\{1\leq N_{T_n}\leq k_n \right\}\cap B_n)+P(B_n^c)\\ &=P\left(\left\{|\sqrt{n}(\tilde{\alpha}_n^{k_n}-\alpha_0)|>M\right\}\cap G_{k_{n},n} \right)+o(1).
\end{align*} Hence, in order to prove \eqref{yu:C1}, it suffices to show that for any $\epsilon>0$ there exist sufficiently large $M>0$ and $N\in\mathbb{N}$ such that \begin{equation}\label{clse} \sup_{n\ge N}P\left(\left\{|\sqrt{n}(\tilde{\alpha}_n^{k_n}-\alpha_0)|>M\right\}\cap G_{k_{n},n} \right)<\epsilon. \end{equation} Since $\Delta_j X=\Delta_j X^{\mathrm{cont}}$ for each $j\notin \hat{\mathcal{J}}_n^{k_n}$ on $G_{k_{n},n}$, we have \begin{equation} |\tilde{\alpha}_n^{k_n}-\alpha_0|\mathbbm{1}_{G_{k_{n},n}} \leq(|\kappa_{1,n}|+|\kappa_{2,n}|+|\kappa_{3,n}|)\mathbbm{1}_{G_{k_{n},n}}, \label{thm.4.7-p1} \end{equation} where \begin{align*} &\kappa_{1,n}:=\frac{1}{h_n}\left\{\left(\sum_{j\notin\hat{\mathcal{J}}^{k_n}_n}\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top\right)^{-1}-\left(\sum_{j=1}^{n}\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top\right)^{-1}\right\}\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}}\mathbb{A}_{j-1} (\Delta_j X^{\mathrm{cont}})^2,\\ &\kappa_{2,n}:=-\frac{1}{h_n}\left(\sum_{j=1}^{n}\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top\right)^{-1}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}}\mathbb{A}_{j-1} (\Delta_j X^{\mathrm{cont}})^2,\\ &\kappa_{3,n}:=\frac{1}{h_n}\left(\sum_{j=1}^{n}\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top\right)^{-1}\left\{\sum_{j=1}^{n}\mathbb{A}_{j-1} (\Delta_j X^{\mathrm{cont}})^2-h_n\left(\sum_{j=1}^{n}\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top\right)\alpha_0\right\}. \end{align*} Indeed, a direct computation shows that $\tilde{\alpha}_n^{k_n}-\alpha_0=\kappa_{1,n}+\kappa_{2,n}+\kappa_{3,n}$ on $G_{k_{n},n}$, from which \eqref{thm.4.7-p1} follows. Below we look at these three terms separately. 1. Evaluation of $\kappa_{1,n}$: From the ergodic theorem, we have \begin{equation*} \frac{1}{n}\sum_{j=1}^{n}\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top\xrightarrow{p} \int \mathbb{A}(x)(\mathbb{A}(x))^\top\pi_0(dx)>0, \end{equation*} so that $(\frac{1}{n}\sum_{j=1}^{n}\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top)^{-1}=O_p(1)$ as a random sequence of matrices.
Since $\mathbb{A}(x)$ is bounded, we can also obtain \begin{align*} &\frac{1}{n}\sum_{j\notin\hat{\mathcal{J}}^{k_n}_n} \mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top =\frac{1}{n}\sum_{j=1}^{n}\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top-\frac{1}{n}\sum_{j\in\hat{\mathcal{J}}^{k_n}_n} \mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top=\frac{1}{n}\sum_{j=1}^{n}\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top+O_p\left(\frac{k_n}{n}\right),\\ &\left|\frac{1}{T_n}\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}}\mathbb{A}_{j-1} (\Delta_j X^{\mathrm{cont}})^2\right|\lesssim \frac{1}{T_n}\sum_{j=1}^{n}(\Delta_j X^{\mathrm{cont}})^2=O_p(1), \end{align*} where the second estimate follows from \eqref{ece}. Hence it follows that \begin{align} &|\sqrt{n}\kappa_{1,n}|\mathbbm{1}_{G_{k_{n},n}} \nonumber\\ &\lesssim \left| \left( \frac{1}{n}\sum_{j=1}^{n}\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top\right)^{-1} \right| \cdot \left|\sqrt{n}\left\{\left(\frac{1}{n}\sum_{j=1}^{n}\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top\right)\left(\frac{1}{n}\sum_{j\notin\hat{\mathcal{J}}^{k_n}_n}\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top\right)^{-1}-I_{p_\alpha}\right\}\right| \nonumber\\ &{}\qquad\times \left(\frac{1}{T_n}\sum_{j=1}^{n} (\Delta_j X^{\mathrm{cont}})^2\right) \nonumber\\ &=O_p\left(\frac{k_n}{\sqrt{n}}\right)=o_p(1). \label{thm.4.7-p2} \end{align} 2. Evaluation of $\kappa_{2,n}$: Recall that $\eta_j:=\frac{\Delta_j w}{\sqrt{h_n}}$. Under Assumption \ref{Ascoef}, we can derive from the estimates of $r_{1,n}$ and $r_{2,n}$ in the proof of Lemma \ref{cpick} that, on $G_{k_{n},n}$, \begin{align} &\left|\frac{1}{T_n}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}}\mathbb{A}_{j-1} (\Delta_j X^{\mathrm{cont}})^2\right| \nonumber\\ &\lesssim \frac{1}{T_n} \left\{ \sum_{j=1}^{n} \left[\left(\int_{t_{j-1}}^{t_j} (a_s-a_{j-1})dw_s\right)^2+\left(\int_{t_{j-1}}^{t_j} b_sds\right)^2\right]+k_nh_n\max_{1\leq j\leq n}|\eta_j|^2\right\} \nonumber\\ & =O_p\left(\frac{1}{T_n}(nh_n^2 \vee k_n h_n \log n)\right)=O_p\left( h_n \vee \frac{k_n \log n}{n}\right).
\label{smoj} \end{align} Thus we get \begin{align} |\sqrt{n}\kappa_{2,n}|\mathbbm{1}_{G_{k_{n},n}} =O_p\left( \sqrt{n h_n^2} \vee \frac{k_n \log n}{\sqrt{n}}\right) = o_p(1). \label{thm.4.7-p3} \end{align} 3. Evaluation of $\kappa_{3,n}$: From \eqref{sce}, \eqref{fce}, \eqref{ece}, and the martingale central limit theorem (see the proof of Lemma \ref{ana}), it follows that \begin{align} \sqrt{n}\kappa_{3,n} &= \left(\frac{1}{n}\sum_{j=1}^{n}\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top\right)^{-1} \frac{1}{\sqrt{n}}\sum_{j=1}^{n}\mathbb{A}_{j-1} \left\{\left(\frac{\Delta_j X^{\mathrm{cont}}}{\sqrt{h_n}}\right)^2 - \mathbb{A}_{j-1}^\top\alpha_0\right\} = O_p(1). \label{thm.4.7-p4} \end{align} Substituting \eqref{thm.4.7-p2}, \eqref{thm.4.7-p3} and \eqref{thm.4.7-p4} into \eqref{thm.4.7-p1} now yields that \begin{equation} \left| \sqrt{n}\left(\tilde{\alpha}_n^{k_n}-\alpha_0\right)\right|\mathbbm{1}_{G_{k_{n},n}}=O_p(1), \nonumber \end{equation} from which \eqref{clse} follows. \subsubsection{Proof of \eqref{yu:C2}} Let \begin{equation*} D_{k_n,n}:= \left\{\mathcal{C}_n\cap\hat{\mathcal{J}}_n^{k_n}=\emptyset\right\}. \end{equation*} Recall that the probability of the event $\{N_{T_n}\geq (\lambda+1)T_n\}$ is asymptotically negligible and that, without loss of generality, we can assume $k_n\leq (\lambda+1)T_n-1$. Then we get the following lemma. \begin{Lem} \label{hm:lem-1} \begin{equation*} P\left( D_{k_n,n}^c \cap\{k_n+1\leq N_{T_n}\leq (\lambda+1)T_n\}\cap B_n\right)\to0. \end{equation*} \end{Lem} \begin{proof} The lemma can be shown in a way quite similar to Lemma \ref{proof_lem2}.
Letting $\epsilon$, $N$, and $K$ be the same as in the proof of Lemma \ref{proof_lem2}, as in \eqref{A3-1} and \eqref{A3-2}, we have for any $n\geq N$, \begin{align*} &P\left( D_{k_n,n}^c \cap\{k_n+1\leq N_{T_n}\leq (\lambda+1)T_n\}\cap B_n\right)\\ &\le P\left(\left\{ {}^\exists j'\in\mathcal{D}_n,\, j''\in\mathcal{C}_n \ \ \mbox{s.t.} \ \ |\Delta_{j'} X|<|\Delta_{j''} X| \right\}\cap \{k_n+1\leq N_{T_n}\leq (\lambda+1)T_n\}\cap B_n\right) \nonumber\\ &\leq P\left(\left\{\min_{1\leq j\leq N_{T_n}}|\xi_j|^2<\frac{K}{\inf_x|c(x)|^2}nh_n^2\right\}\cap\{k_n+1\leq N_{T_n}\leq (\lambda+1)T_n\}\cap B_n\right)\\ &\lesssim T_n P\left(|\xi_1|<\frac{\sqrt{K}}{\inf_x|c(x)|}\sqrt{n}h_n\right)+\epsilon = o(1) +\epsilon. \end{align*} Hence the proof is complete. \end{proof} Let \begin{equation} H_{k_{n},n}:=\left\{k_n+1\leq N_{T_n}\leq (\lambda+1)T_n\right\}\cap B_n\cap D_{k_n,n}. \nonumber \end{equation} Thanks to Lemma \ref{hm:lem-1}, \eqref{yu:C2} follows from \begin{equation} P\left( \left\{ \mathrm{JB}_n^{k_n}\leq\chi^2_q(2)\right\}\cap H_{k_n,n}\right) = o(1). \label{YU:prob2} \end{equation} In view of the definition \eqref{yu:msnr}, \eqref{YU:prob2} follows on showing that for any $M>0$, \begin{equation} \label{YU:prob2+1} P\left(\left\{\frac{1}{\sqrt{n}}\sum_{j\notin\hat{\mathcal{J}}_{n}^{k_n}}\left(\left(\hat{N}_j^k\right)^4-3\right)<M\right\}\cap H_{k_n,n}\right)=o(1); \end{equation} recall the notation $\hat{N}_j^k =(\hat{S}_n^k)^{-1/2}(\epsilon_j(\hat{\alpha}_{n}^k)-\bar{\hat{\epsilon}}_n^k)$; for this purpose, we need to clarify the asymptotic behavior of $\bar{\hat{\epsilon}}_{n}^{k_n} \mathbbm{1}_{H_{k_{n},n}}$ and $\hat{S}_n^{k_n}\mathbbm{1}_{H_{k_{n},n}}$. Define the diverging real sequence $a_n$ by \eqref{hm:an_def}: \begin{equation} a_n=T_n^\eta \uparrow\infty \nonumber \end{equation} with $\eta>0$ being small enough to ensure \eqref{hm:conseq-1} to \eqref{hm:conseq-2}.
Making $\eta>0$ smaller if necessary so that $\eta <1/4$, we may and do further suppose that \begin{equation} a_n\sqrt{h_n} \vee \frac{a_n^4 h_n}{nh_n^2} \to 0. \nonumber \end{equation} First we will prove \begin{equation} \bar{\hat{\epsilon}}_{n}^{k_n} \mathbbm{1}_{H_{k_{n},n}} =O_p\left(a_n \sqrt{h_n} \right) = o_p(1). \label{hm:4.8.p++1} \end{equation} Decompose $\bar{\hat{\epsilon}}_{n}^{k_n}$ as \begin{equation*} \bar{\hat{\epsilon}}_{n}^{k_n}= \frac{1}{n-k_n}\sum_{j=1}^{n} \epsilon_j(\hat{\alpha}_{n}^{k_n}) - \frac{1}{n-k_n}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}}\epsilon_j(\hat{\alpha}_{n}^{k_n}). \end{equation*} Below we will look at the terms on the right-hand side separately. Observe that \begin{align} \frac{1}{\sqrt{h_n}} \left|\frac{1}{n-k_n}\sum_{j=1}^{n} \epsilon_j(\hat{\alpha}_{n}^{k_n})\right| &\lesssim \left|\frac{1}{n} \sum_{j=1}^{n}\frac{1}{a_{j-1} (\hat{\alpha}_{n}^{k_n})} \frac{1}{h_n}\int_{t_{j-1}}^{t_j} (b_s+c_s\lambda E[\xi_1])ds \right| \nonumber\\ &{}\qquad +\left| \sum_{j=1}^{n} \frac{1}{T_n} \frac{1}{a_{j-1} (\hat{\alpha}_{n}^{k_n})} \left(\int_{t_{j-1}}^{t_j} a_sdw_s \right)\right| \nonumber\\ &{}\qquad +\left| \sum_{j=1}^{n} \frac{1}{T_n} \frac{1}{a_{j-1} (\hat{\alpha}_{n}^{k_n})} \left(\int_{t_{j-1}}^{t_j} c_{s-}d\tilde{J}_s \right) \right|, \label{hm:jbh-1} \end{align} where $\tilde{J}_t:=J_t-\lambda E[\xi_1]t$. Obviously the first term in \eqref{hm:jbh-1} is $O_p(1)$, and we are going to show that the remaining two terms are $o_p(1)$.
To achieve this, under the present assumptions it suffices to prove the following claim: let $\pi(x,\alpha)$ be a bounded real-valued function on $\mathbb{R}\times\Theta_\alpha$ such that $|\pi(x,\alpha)-\pi(x,\alpha')| \lesssim |\alpha-\alpha'|$ for each $x\in\mathbb{R}$ and $\alpha,\alpha'\in\bar{\Theta}_\alpha$, and consider the random continuous functions on $\bar{\Theta}_\alpha$ \begin{align*} &F_{1,n}(\alpha):= \sum_{j=1}^{n} \frac{1}{T_n} \pi_{j-1}(\alpha) \int_{t_{j-1}}^{t_j} a_{s}dw_s,\\ &F_{2,n}(\alpha):= \sum_{j=1}^{n} \frac{1}{T_n} \pi_{j-1}(\alpha) \int_{t_{j-1}}^{t_j} c_{s-}d\tilde{J}_s. \nonumber \end{align*} Then we claim that \begin{equation} \sup_{\alpha\in\bar{\Theta}_\alpha}|F_{1,n}(\alpha)| = o_p(1), \quad \sup_{\alpha\in\bar{\Theta}_\alpha}|F_{2,n}(\alpha)|=o_p(1). \label{hm:4.8.p++2.5} \end{equation} To show \eqref{hm:4.8.p++2.5}, we note that $F_{1,n}(\alpha)\xrightarrow{p} 0$ and $F_{2,n}(\alpha)\xrightarrow{p}0$ for each $\alpha$, which follows on applying \cite[Lemma 9]{GenJac93}. Concerning $F_{1,n}$, by making use of Burkholder's inequality and Jensen's inequality, it is easy to deduce that for each integer $q>(p_\alpha\vee 2)$, $E\left[|F_{1,n}(\alpha)|^q\right]\lesssim 1$ and $E\left[ |F_{1,n}(\alpha)-F_{1,n}(\alpha')|^q\right]\lesssim |\alpha-\alpha'|^q$.
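To spell out the first of these moment bounds: since $\pi$ is bounded, Burkholder's inequality applied to the stochastic integral followed by Jensen's inequality applied to the normalized time integral give, for $q\ge 2$,
\begin{align*}
E\left[|F_{1,n}(\alpha)|^q\right] &\lesssim T_n^{-q}\, E\left[\left(\sum_{j=1}^{n} \pi_{j-1}^2(\alpha)\int_{t_{j-1}}^{t_j} a_s^2\,ds\right)^{q/2}\right] \lesssim T_n^{-q}\cdot T_n^{q/2}\, \frac{1}{T_n}\int_0^{T_n} E\left[|a_s|^{q}\right]ds \lesssim T_n^{-q/2}\lesssim 1,
\end{align*}
and replacing $\pi_{j-1}(\alpha)$ by $\pi_{j-1}(\alpha)-\pi_{j-1}(\alpha')$ in the same computation yields the second bound, the Lipschitz property of $\pi$ producing the factor $|\alpha-\alpha'|^q$.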
As for $F_{2,n}$, proceeding as in \cite[Eq.(4.14)]{Mas13-1} we can verify that for each integer $q>(p_\alpha \vee 2)$, letting \begin{align*} \chi_j(t):= \begin{cases} 1 & t\in(t_{j-1},t_j],\\ 0 & \text{otherwise}, \end{cases} \end{align*} we have \begin{align*} E\left[|F_{2,n}(\alpha)|^q\right]&\lesssim T_n^{-q} E\left\{\left| \int_0^{T_n} \left(\sum_{j=1}^{n} \chi_j(s) \pi_{j-1}(\alpha)c_{s-} \right) d\tilde{J}_s \right|^q \right\} \nonumber\\ &\lesssim T_n^{-q/2} \frac{1}{T_n}\int_0^{T_n} \sum_{j=1}^{n} \chi_j(s) E\left[ |c_{s-}|^q \right] ds \nonumber\\ &\lesssim 1, \end{align*} and \begin{align} E\left[ |F_{2,n}(\alpha)-F_{2,n}(\alpha')|^q\right] &\lesssim T_n^{-q} E\left\{\left| \int_0^{T_n} \left(\sum_{j=1}^{n} \chi_j(s) (\pi_{j-1}(\alpha)-\pi_{j-1}(\alpha'))c_{s-} \right) d\tilde{J}_s \right|^q \right\} \nonumber\\ &\lesssim T_n^{-q/2} \frac{1}{T_n}\int_0^{T_n} \sum_{j=1}^{n} \chi_j(s) E\left\{ |\pi_{j-1}(\alpha)-\pi_{j-1}(\alpha')|^q |c_{s-}|^q \right\} ds \nonumber\\ &\lesssim T_n^{-q/2} |\alpha-\alpha'|^q \lesssim |\alpha-\alpha'|^q. \nonumber \end{align} Hence the Kolmogorov criterion (cf. \cite[Theorem 1.4.7]{Kun97}) yields the tightness of $\{F_{1,n}(\cdot)\}_n$ and $\{F_{2,n}(\cdot)\}_n$ in the space $\mathcal{C}(\bar{\Theta}_\alpha)$ (equipped with the uniform metric), from which \eqref{hm:4.8.p++2.5} follows. We thus conclude \begin{equation} \frac{1}{n-k_n}\sum_{j=1}^{n} \epsilon_j(\hat{\alpha}_{n}^{k_n}) = O_p\left(\sqrt{h_n}\right).
\label{hm:4.8.p++2} \end{equation} Next, it follows from Assumptions \ref{Ascoef} and \ref{Sampling} that \begin{align}\label{esresi} \max_{1\leq j\leq n}\epsilon_j^2(\hat{\alpha}_{n}^{k_n})\mathbbm{1}_{H_{k_{n},n}} &\lesssim \frac{1}{h_n}\max_{1\leq j\leq n}(\Delta_j X)^2\mathbbm{1}_{H_{k_{n},n}}\nonumber\\ &\lesssim \frac{1}{h_n}\sum_{j=1}^{n} \left\{\left(\int_{t_{j-1}}^{t_j} b_sds\right)^2+\left(\int_{t_{j-1}}^{t_j} (a_s-a_{j-1})dw_s\right)^2\right\} \nonumber\\ &{}\qquad+\max_{1\leq j\leq n} \eta_j^2+\frac{1}{h_n}\max_{1\leq j\leq \lfloor(\lambda+1)T_n\rfloor} \xi_j^2\nonumber\\ & \lesssim O_p(T_n) + O_p(\log n) + O_p\left(\frac{a_n^2}{h_n}\right) \nonumber\\ & =O_p\left(\frac{a_n^2}{h_n}\left(\frac{T_n h_n}{a_n^2} +1\right)\right)=O_p\left(\frac{a_n^2}{h_n}\right). \end{align} This gives \begin{align} \left|\frac{1}{n-k_n}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}} \epsilon_j(\hat{\alpha}_{n}^{k_n})\right|\mathbbm{1}_{H_{k_{n},n}} &\lesssim \frac{k_n}{n}\sqrt{\max_{1\leq j\leq n}\epsilon_j^2(\hat{\alpha}_{n}^{k_n})}\,\mathbbm{1}_{H_{k_{n},n}} \nonumber\\ &= O_p\left(\frac{k_n a_n}{n\sqrt{h_n}}\right) = O_p\left(a_n\sqrt{h_n} \right), \label{hm:4.8.p++3} \end{align} and \eqref{hm:4.8.p++1} follows from \eqref{hm:4.8.p++2} and \eqref{hm:4.8.p++3}. Next we look at $\hat{S}_n^{k_n}\mathbbm{1}_{H_{k_{n},n}}$. Note that \eqref{hm:4.8.p++1} under Assumption \ref{Asjsize} entails \begin{align} \hat{S}_n^{k_n}\mathbbm{1}_{H_{k_{n},n}} &=\frac{1}{n-k_n}\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}} \epsilon^2_j(\hat{\alpha}_{n}^{k_n})\mathbbm{1}_{H_{k_{n},n}} + o_p(1). \label{hm:4.8.p++4} \end{align} From Assumption \ref{Ascoef}, the following relation holds: \begin{equation}\label{rela} \frac{1}{T_n}\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}}(\Delta_jX)^2\lesssim\frac{1}{n-k_n}\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}} \epsilon^2_j(\hat{\alpha}_{n}^{k_n})\lesssim \frac{1}{T_n}\sum_{j=1}^{n} (\Delta_jX)^2.
\end{equation} From the Cauchy--Schwarz and Burkholder inequalities and \cite[Lemma 4.5]{Mas13-1}, we derive \begin{align*} E\left[\left(\Delta_j X\right)^2\right] &=E\Bigg[\bigg(\int_{t_{j-1}}^{t_j}(a_s-a_{j-1})dw_s+\int_{t_{j-1}}^{t_j} (b_s+\lambda E[\xi_1] c_s)ds \nonumber\\ &{}\qquad +\int_{t_{j-1}}^{t_j}(c_{s-}-c_{j-1})d\tilde{J}_s+a_{j-1}\Delta_jw+c_{j-1}\Delta_j\tilde{J}\bigg)^2\Bigg]\\ &=E\left[\left(a_{j-1}\Delta_j w+c_{j-1}\Delta_j \tilde{J}\right)^2\right]+O\left(h_n^{\frac{3}{2}}\right)\\ &\lesssim h_n. \end{align*} Hence the rightmost side in \eqref{rela} is $O_p(1)$. In a similar manner, using the Cauchy--Schwarz and Burkholder inequalities, we have \begin{align*} \frac{1}{T_n}\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}}(\Delta_jX)^2 &=\frac{1}{T_n}\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}}\left(a_{j-1}\Delta_j w+c_{j-1}\Delta_j \tilde{J}\right)^2+O_p\left(\sqrt{h_n}\right)\\ &=\frac{1}{T_n}\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}} \left(c_{j-1}\Delta_j \tilde{J}\right)^2 +\frac{1}{T_n}\sum_{j=1}^{n}\left\{\left(a_{j-1}\Delta_j w\right)^2+2a_{j-1}c_{j-1}\Delta_j w\Delta_j \tilde{J}\right\}\\ &{}\qquad-\frac{1}{T_n}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}} \left\{\left(a_{j-1}\Delta_j w\right)^2+2a_{j-1}c_{j-1}\Delta_j w\Delta_j \tilde{J}\right\} + o_p(1) \nonumber\\ &\ge \frac{1}{T_n}\sum_{j=1}^{n}\left\{\left(a_{j-1}\Delta_j w\right)^2+2a_{j-1}c_{j-1}\Delta_j w\Delta_j \tilde{J}\right\}\\ &{}\qquad-\frac{1}{T_n}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}} \left\{\left(a_{j-1}\Delta_j w\right)^2+2a_{j-1}c_{j-1}\Delta_j w\Delta_j \tilde{J}\right\} + o_p(1) \nonumber\\ &=: L_{n} - \hat{L}^{k_n}_n + o_p(1). \end{align*} The independence between $w$ and $J$, \cite[Lemma 9]{GenJac93}, and the ergodic theorem yield that \begin{equation*} L_n \xrightarrow{p} \int a^2(x,\alpha_0)\pi_0(dx)>0.
\end{equation*} In a similar manner to \eqref{esresi}, Assumption \ref{Asjsize} implies that \begin{align*} | \hat{L}^{k_n}_n | \mathbbm{1}_{H_{k_{n},n}} \le O_p\left(\frac{k_n \log n}{n}\right) + O_p\left(a_n\sqrt{h_n\log n}\right) = o_p(1). \end{align*} Summarizing the last three displays leads to \begin{equation} \frac{1}{T_n}\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}}(\Delta_jX)^2 \ge \int a^2(x,\alpha_0)\pi_0(dx) + o_p(1) - \mathbbm{1}_{H_{k_{n},n}^c}| \hat{L}^{k_n}_n |. \nonumber \end{equation} Fix an arbitrary $\epsilon>0$. By the last display combined with \eqref{hm:4.8.p++1}, \eqref{hm:4.8.p++4} and \eqref{rela}, we can pick a positive constant $K=K(\epsilon)>1$ and a positive integer $N=N(\epsilon)$ such that \begin{equation*} \sup_{n\ge N}P\left[\left(\left\{\hat{S}_n^{k_n}<\frac{1}{K}\right\}\cup\left\{\hat{S}_n^{k_n}>K\right\} \cup \left\{|\bar{\hat{\epsilon}}_n^{k_n}|>K a_n \sqrt{h_n} \right\} \right) \cap H_{k_{n},n} \right] < \epsilon. \end{equation*} Therefore, to conclude \eqref{YU:prob2+1} we may and do focus on the event \begin{equation*} F_{k_n,n,\epsilon} := \left\{ \frac{1}{K} \le \hat{S}_n^{k_n} \le K\right\} \cap\left\{|\bar{\hat{\epsilon}}_n^{k_n}|\leq K a_n \sqrt{h_n} \right\} \cap H_{k_{n},n}; \end{equation*} we note that on $F_{k_n,n,\epsilon}$ at least one jump remains that has not been removed. From Assumption \ref{Ascoef}, the bounds \begin{align} |h_n^{-1/2}\Delta_j X|^k &\lesssim |h_n^{-1/2}\Delta_j X^{\mathrm{cont}}|^k + |h_n^{-1/2}\Delta_j J|^k, \nonumber\\ |h_n^{-1/2}\Delta_j X|^k &\gtrsim |h_n^{-1/2}\Delta_j J|^k - |h_n^{-1/2}\Delta_j X^{\mathrm{cont}}|^k \nonumber \end{align} hold on $B_n$ for every $k>0$.
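For completeness we record the elementary bounds behind the last two displays: on $B_n$ we have $h_n^{-1/2}\Delta_j X=h_n^{-1/2}\Delta_j X^{\mathrm{cont}}+h_n^{-1/2}\Delta_j J$, and for any reals $x,y$ and any $k\ge 1$, convexity yields
\begin{equation*}
|x+y|^k\le 2^{k-1}\left(|x|^k+|y|^k\right), \qquad |x+y|^k\ge 2^{1-k}|y|^k-|x|^k,
\end{equation*}
while for $0<k\le 1$ the subadditivity of $t\mapsto t^k$ gives the same bounds with constant $1$.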
With these together with \eqref{ece}, where $C=C(a,c)$ denotes a positive constant that may change from line to line, we have \begin{align} &\frac{1}{\sqrt{n}}\sum_{j\notin\hat{\mathcal{J}}_{n}^{k_n}}\left(\left(\hat{N}_j^k\right)^4-3\right)\mathbbm{1}_{F_{k_n,n,\epsilon}} \nonumber\\ &\gtrsim\sqrt{n}\left\{\frac{1}{n}\sum_{j\notin\hat{\mathcal{J}}_{n}^{k_n}}\left(\left|\frac{\Delta_j X}{\sqrt{h_n}}\right|^4 -C a_n\sqrt{h_n} \left|\frac{\Delta_j X}{\sqrt{h_n}}\right|^3\right) + O_p(1)\right\} \mathbbm{1}_{F_{k_n,n,\epsilon}}\nonumber\\ &\gtrsim \sqrt{n}\left\{ \frac{1}{nh_n^2}\sum_{j\notin\hat{\mathcal{J}}_{n}^{k_n}} \left( \left(\Delta_j J\right)^4 -C|\Delta_j J|^3 a_n h_n \right) +O_p(1) \right\} \mathbbm{1}_{F_{k_n,n,\epsilon}} \nonumber\\ &\gtrsim \sqrt{n}\left\{ \frac{1}{nh_n^2}\sum_{j\notin\hat{\mathcal{J}}_{n}^{k_n}} \left( \min_{1\leq j\leq \lfloor(\lambda+1)T_n\rfloor} |\xi_j|^4 - C h_na_n \max_{1\leq j\leq \lfloor(\lambda+1)T_n\rfloor}|\xi_j|^3 \right) +O_p(1) \right\} \mathbbm{1}_{F_{k_n,n,\epsilon}}. \label{hm:addadd-1} \end{align} Since $a_n^4 h_n /(nh_n^2) = n^{-\delta''}\to 0$ for some $\delta''>0$, we have for every $M,M'>0$ \begin{align} & P\left( \min_{1\leq j\leq \lfloor(\lambda+1)T_n\rfloor} |\xi_j|^4 - M h_na_n \max_{1\leq j\leq \lfloor(\lambda+1)T_n\rfloor}|\xi_j|^3 \ge M' nh_n^2 \right) \nonumber\\ &= P\left( \min_{1\leq j\leq \lfloor(\lambda+1)T_n\rfloor} |\xi_j|^4 \gtrsim O_p(a_n^4 h_n) + nh_n^2 \right) \nonumber\\ &= P\left( \frac{\min_{1\leq j\leq \lfloor(\lambda+1)T_n\rfloor} |\xi_j|}{(nh_n^2)^{1/4}} \gtrsim 1 \right) + o(1) \nonumber\\ &= \left\{ 1-P\left(|\xi_1| \lesssim (nh_n^2)^{1/4}\right)\right\}^{\lfloor(\lambda+1)T_n\rfloor} + o(1). \nonumber \end{align} The last probability tends to $1$ since \begin{align} T_n P\left(|\xi_1| \lesssim (nh_n^2)^{1/4}\right) &\lesssim n^{1-\kappa + (1-2\kappa)s/4} \to 0, \nonumber \end{align} by Assumption \ref{Asjsize}.
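The exponent in the last display can be traced as follows: recall from Assumptions \ref{Sampling} and \ref{Asjsize} that $h_n\asymp n^{-\kappa}$ and that $P(|\xi_1|\le x)\lesssim x^{s}$ for small $x>0$; hence $T_n\asymp n^{1-\kappa}$, $(nh_n^2)^{1/4}\asymp n^{(1-2\kappa)/4}$, and
\begin{equation*}
T_n P\left(|\xi_1| \lesssim (nh_n^2)^{1/4}\right) \lesssim n^{1-\kappa}\cdot n^{(1-2\kappa)s/4} = n^{1-\kappa+(1-2\kappa)s/4}.
\end{equation*}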
Recalling that on the event $F_{k_n,n,\epsilon}$ there is at least one jump over $[0,T_n]$ which is yet to be removed, we can continue \eqref{hm:addadd-1} as follows: on an event whose probability gets arbitrarily close to $1$ as $n\to\infty$, \begin{align} &\gtrsim \sqrt{n}\bigg\{ \frac{1}{nh_n^2}\bigg(\min_{1\leq j\leq \lfloor(\lambda+1)T_n\rfloor} |\xi_j|^4 -C h_n a_n \max_{1\leq j\leq \lfloor(\lambda+1)T_n\rfloor}|\xi_j|^3 \bigg) +O_p(1) \bigg\}\mathbbm{1}_{F_{k_n,n,\epsilon}} \nonumber\\ &\gtrsim \sqrt{n}\left( M' + O_p(1) \right) \mathbbm{1}_{F_{k_n,n,\epsilon}} \nonumber\\ &\gtrsim \sqrt{n} \mathbbm{1}_{F_{k_n,n,\epsilon}}. \nonumber \end{align} This entails \eqref{YU:prob2+1}, hence \eqref{YU:prob2} as well. \subsection{Proof of Theorem \ref{Ae}} We will complete the proof of Theorem \ref{Ae} by showing \begin{align} & P\left(\left\{\left|\sqrt{n}(\hat{\alpha}_{n}^{k_n}-\hat{\alpha}_{n}^{k_n, \mathrm{cont}})\right|\vee\left|\sqrt{T_n}(\hat{\beta}_{n}^{k_n}-\hat{\beta}_{n}^{k_n,\mathrm{cont}})\right|>\epsilon\right\}\cap G_{k_{n},n} \right) = o(1). \label{YU:prob1} \end{align} Indeed, we can deduce from Lemmas \ref{ojump}, \ref{cpick}, and \ref{hm:lem-1} that for any $\epsilon>0$, the probability \begin{align} &P\left(\left\{\left|\sqrt{n}(\hat{\alpha}_{n}^{k_n}- \hat{\alpha}_{n}^{k_n, \mathrm{cont}} )\right|\vee\left|\sqrt{T_n}(\hat{\beta}_{n}^{k_n}-\hat{\beta}_{n}^{k_n,\mathrm{cont}})\right|>\epsilon\right\}\cap \left\{ \mathrm{JB}_n^{k_n}\leq\chi^2_q(2)\right\}\right) \nonumber \end{align} can be bounded from above by the sum of the two probabilities given in \eqref{YU:prob1} and \eqref{YU:prob2}, plus an $o(1)$ term. Recall that for any $j\notin \hat{\mathcal{J}}_n^{k_n}$, $\Delta_j X=\Delta_j X^{\mathrm{cont}}$ on $G_{k_{n},n}$.
Making use of Assumption \ref{Ascoef}, \eqref{smoj}, and a similar argument to the proof of Theorem \ref{Consistency}, we get \begin{align*} &\left|\sqrt{n}(\hat{\alpha}_{n}^{k_n}-\hat{\alpha}_{n}^{k_n, \mathrm{cont}})\mathbbm{1}_{G_{k_{n},n}}\right|\\ &\leq \left|\left(\frac{1}{n}\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}}\frac{\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\tilde{\alpha}_n^{k_n}}\right)^{-1}\left\{\frac{1}{\sqrt{n}}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}} \left(\frac{1}{\mathbb{A}_{j-1}^\top\tilde{\alpha}_n^{k_n}}-\frac{(\Delta_j X^{\mathrm{cont}})^2}{h_n(\mathbb{A}_{j-1}^\top \tilde{\alpha}_n^{k_n})^2}\right)\mathbb{A}_{j-1}\right\}\right| \mathbbm{1}_{G_{k_{n},n}}\\ &{}\quad+\left|\left(\frac{1}{n}\sum_{j=1}^{n}\frac{\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\tilde{\alpha}_n^{k_n}}\right)^{-1}\right|\left|\left(\frac{1}{n}\sum_{j=1}^{n}\frac{\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\tilde{\alpha}_n^{k_n}}\right)\left(\frac{1}{n}\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}}\frac{\mathbb{A}_{j-1}\mathbb{A}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\tilde{\alpha}_n^{k_n}}\right)^{-1}-I_{p_\alpha}\right|\\ &{}\quad\times\left|\frac{1}{\sqrt{n}}\sum_{j=1}^{n}\left(\frac{1}{\mathbb{A}_{j-1}^\top\tilde{\alpha}_n^{k_n}}-\frac{(\Delta_j X^{\mathrm{cont}})^2}{h_n(\mathbb{A}_{j-1}^\top \tilde{\alpha}_n^{k_n})^2}\right)\mathbb{A}_{j-1}\right|\mathbbm{1}_{G_{k_{n},n}}\\ &= O_p(1) \cdot \left\{O_p\left(\frac{k_n}{\sqrt{n}}\right) + O_p\left( \sqrt{n h_n^2} \vee \frac{k_n \log n}{\sqrt{n}}\right)\right\} + O_p(1) \cdot o_p(1) \cdot O_p(1) \nonumber\\ &=o_p(1). 
\end{align*} Next, to deduce $\sqrt{T_n}(\hat{\beta}_{n}^{k_n}-\hat{\beta}_{n}^{k_n,\mathrm{cont}}) \mathbbm{1}_{G_{k_{n},n}} = o_p(1)$, it suffices to prove \begin{align} & \frac{1}{n}\sum_{j=1}^{n} \frac{\mathbb{B}_{j-1}\mathbb{B}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\mathbbm{1}_{G_{k_{n},n}} =\frac{1}{n}\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}} \frac{\mathbb{B}_{j-1}\mathbb{B}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}} \mathbbm{1}_{G_{k_{n},n}}+o_p(1), \label{YU:exp1} \\ &\left|\frac{1}{\sqrt{T_n}}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}} \frac{\Delta_j X^{\mathrm{cont}}-h_nb_{j-1}}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\mathbb{B}_{j-1}\right|\nonumber \\ &=\left|\frac{1}{\sqrt{T_n}}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}} \frac{\int_{t_{j-1}}^{t_j} (b_s-b_{j-1})ds+\int_{t_{j-1}}^{t_j} (a_s-a_{j-1})dw_s+a_{j-1}\Delta_j w}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\mathbb{B}_{j-1}\right|=o_p(1). \label{YU:exp2} \end{align} Then, as in the proof of Theorem \ref{osan} we see that \begin{align*} &\left|\sqrt{T_n}(\hat{\beta}_{n}^{k_n}-\hat{\beta}_{n}^{k_n,\mathrm{cont}})\mathbbm{1}_{G_{k_n,n}}\right|\\ &\le \left|\left(\frac{1}{n}\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}} \frac{\mathbb{B}_{j-1}\mathbb{B}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\right)^{-1} \frac{1}{\sqrt{T_n}}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}} \frac{\Delta_j X^{\mathrm{cont}}-h_nb_{j-1}}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\mathbb{B}_{j-1}\right| \mathbbm{1}_{G_{k_n,n}}\\ &{}\qquad+\left|\left(\frac{1}{n}\sum_{j=1}^{n} \frac{\mathbb{B}_{j-1}\mathbb{B}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\right)^{-1}\right|\left|\left(\frac{1}{n}\sum_{j=1}^{n} \frac{\mathbb{B}_{j-1}\mathbb{B}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\right)\left(\frac{1}{n}\sum_{j\notin\hat{\mathcal{J}}_n^{k_n}} \frac{\mathbb{B}_{j-1}\mathbb{B}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\right)^{-1}-I_{p_\beta}\right|\\
&{}\qquad\times\left|\frac{1}{\sqrt{T_n}}\sum_{j=1}^{n} \frac{\Delta_j X^{\mathrm{cont}}-h_nb_{j-1}}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\mathbb{B}_{j-1}\right| \mathbbm{1}_{G_{k_n,n}}\\ &=o_p(1). \end{align*} By It\^{o}'s formula, \begin{align}\label{YU:supesti} &\left|\frac{1}{n}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}} \frac{\mathbb{B}_{j-1}\mathbb{B}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\right| \mathbbm{1}_{G_{k_{n},n}}\nonumber\\ &\lesssim \frac{k_n}{n}\left(1+\sup_{0\leq t\leq T_n} X_t^2\right)\nonumber\\ &=\frac{k_n}{n}\sup_{0\leq t\leq T_n} \left\{1+X_0^2+2\int_0^t X_{s-}dX_s+\int_0^t a_s^2ds+\sum_{0<s\leq t} (\Delta_s X)^2\right\}\nonumber\\ &\lesssim\frac{k_n}{n}\Bigg\{1+X_0^2+\int_0^{T_n}\left(a_s^2+|X_sb_s|+c_s^2 +\left|X_{s}c_s\lambda E[\xi_1]\right|\right)ds\nonumber\\ &{}\qquad+\sup_{0\leq t\leq T_n}\left|\int_0^tX_sa_sdw_s+\int_0^t\int_\mathbb{R} \left(c_{s-}^2z^2+X_{s-}c_{s-}z\right)\tilde{N}(ds,dz)\right|\Bigg\}, \end{align} where $\tilde{N}(\cdot,\cdot)$ denotes the compensated Poisson random measure associated with $J$. Applying Assumption \ref{Moments} and Burkholder's inequality to the last term, we get \begin{equation*} \frac{1}{n}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}} \frac{\mathbb{B}_{j-1}\mathbb{B}_{j-1}^\top}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\mathbbm{1}_{G_{k_{n},n}} =O_p(h_n k_n)=o_p\left(\frac{\sqrt{nh_n^2}}{\log n} \right)=o_p(1), \end{equation*} hence \eqref{YU:exp1}. Utilizing the Lipschitz continuity of $b$ and \cite[Lemma 4.5]{Mas13-1}, we have \begin{align*} E\left[\left|\frac{1}{\sqrt{T_n}}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}} \frac{\int_{t_{j-1}}^{t_j} (b_s-b_{j-1})ds}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\mathbb{B}_{j-1}\right|\right]&\lesssim \frac{1}{\sqrt{T_n}} \sum_{j=1}^{n}\int_{t_{j-1}}^{t_j} E\left[|(b_s-b_{j-1})\mathbb{B}_{j-1}|\right]ds\\ &=O\left(\sqrt{nh_n^2}\right)=o(1).
\end{align*} From the elementary inequality \begin{equation}\label{YU:ineq} |x|\leq \frac12 \left(C+\frac{|x|^2}{C}\right), \end{equation} valid for any positive constant $C$ and real number $x$, we get \begin{align*} &\left|\frac{1}{\sqrt{T_n}}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}}\frac{\int_{t_{j-1}}^{t_j} (a_s-a_{j-1})dw_s}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\mathbb{B}_{j-1}\right|\\ &\lesssim \frac{1}{\sqrt{T_n}}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}} \left|\int_{t_{j-1}}^{t_j} (a_s-a_{j-1})dw_s\mathbb{B}_{j-1}\right| \nonumber\\ &\lesssim\frac{1}{\sqrt{T_n}}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}}\left\{\frac{\sqrt{T_n}}{k_n(\log n)^2}+\frac{k_n(\log n)^2}{\sqrt{T_n}}\left|\int_{t_{j-1}}^{t_j} (a_s-a_{j-1})dw_s\mathbb{B}_{j-1}\right|^2\right\}\\ &\lesssim \frac{1}{(\log n)^2} +\frac{1}{n}\sum_{j=1}^{n} \left|\frac{1}{h_n}\int_{t_{j-1}}^{t_j} (a_s-a_{j-1})dw_s\mathbb{B}_{j-1}\right|^2 nh_n^2(\log n)^2 \\ &\lesssim \frac{1}{(\log n)^2}+O_p\left(nh_n^2(\log n)^2\right)=o_p(1). \end{align*} Here we used the condition $k_n\le (\lambda+1)T_n-1$ and Burkholder's inequality. By means of It\^{o}'s formula as in \eqref{YU:supesti} under Assumption \ref{Moments}, we obtain for any $q>2$ \begin{equation*} E\left[\sup_{0\leq t\leq T_n} |X_t|^q\right]=O\left(T_n\right). \end{equation*} This combined with Jensen's inequality shows that \begin{equation}\label{YU:supesti2} E\left[\sup_{0\leq t\leq T_n} |X_t|^r \right]=O(T_n^\epsilon) \end{equation} for any $\epsilon>0$ and $r>0$. Now, with the $\delta\in(0,1)$ given in \eqref{hm:add-4} (a consequence of Assumption \ref{Sampling}), we set $\epsilon=\frac{\delta}{3}$ and $\delta'=\frac{4}{3}\delta$.
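With these choices the exponents appearing in the estimate below are readily checked:
\begin{equation*}
-\frac{\delta'}{2}+\epsilon=-\frac{2\delta}{3}+\frac{\delta}{3}=-\frac{\delta}{3}, \qquad 1+\epsilon+\frac{\delta'}{2}=1+\delta,
\end{equation*}
so that $T_n^{1+\epsilon+\delta'/2}h_n\log n=(nh_n)^{1+\delta}h_n\log n=n^{1+\delta}h_n^{2+\delta}\log n$.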
Then, making use of \eqref{YU:supesti2} with an application of \eqref{YU:ineq}, we have \begin{align*} &\left|\frac{1}{\sqrt{T_n}}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}} \frac{a_{j-1}\Delta_j w}{\mathbb{A}_{j-1}^\top\hat{\alpha}_{n}^{k_n}}\mathbb{B}_{j-1}\right|\\ &\lesssim\frac{\max_{1\leq j \leq n}\left|\mathbb{B}_{j-1}\right|}{\sqrt{T_n}}\sum_{j\in\hat{\mathcal{J}}_n^{k_n}}\left\{\frac{T_n^{\frac{1-\delta'}{2}}}{k_n}+\frac{k_n}{T_n^{\frac{1-\delta'}{2}}}(\Delta_j w)^2\right\}\\ &\lesssim O_p\left(T_n^{-\frac{\delta'}{2}+\epsilon} \vee T_n^{1+\epsilon+\frac{\delta'}{2}}h_n\log n\right)\\ &=O_p\left(T_n^{-\frac{\delta}{3}}\vee n^{1+\delta}h_n^{2+\delta}\log n\right)=o_p(1), \end{align*} thus concluding \eqref{YU:exp2}. \end{document}
\begin{document} \title{Homotopy versus isotopy:\ spheres with duals in 4--manifolds} \begin{abstract} Dave Gabai recently proved a smooth $4$-dimensional ``Light Bulb Theorem'' in the absence of 2-torsion in the fundamental group. We extend his result to 4--manifolds with arbitrary fundamental group by showing that an invariant of Mike Freedman and Frank Quinn gives the complete obstruction to ``homotopy implies isotopy'' for embedded 2--spheres which have a common geometric dual. The invariant takes values in an $\mathbb{F}_2$-vector space generated by elements of order 2 in the fundamental group and has applications to unknotting numbers and pseudo-isotopy classes of self-diffeomorphisms. Our methods also give an alternative approach to Gabai's theorem using various maneuvers with Whitney disks and a fundamental isotopy between surgeries along dual circles in an orientable surface. \end{abstract} \section{Introduction and results} We work in the category of smooth manifolds. Our starting point is Gabai's $4$-dimensional LBT \cite[Thm.1.2]{Gab}: \begin{LBT} Let $M$ be an orientable $4$--manifold such that $\pi_1M$ has no elements of order~$2$. If $R, R':S^2 \hookrightarrow M$ are embedded spheres in $M$ which are homotopic and have the same geometric dual, then $R$ is isotopic to $R'$. \end{LBT} Here a \emph{geometric dual} to a map $R:S^2\to M$ is an embedded sphere with trivial normal bundle which intersects $R$ transversely and in a single point. The necessity of the $\pi_1$-condition was shown by Hannah Schwartz in \cite{Schwartz} and also follows from Theorem~\ref{thm:4d-light-bulb}. In this paper we extend the above LBT to a version for arbitrary fundamental groups. Fix a connected orientable $4$--manifold $M$ and a map $f:S^2\to M$ with geometric dual $G:S^2\hookrightarrow M$.
Consider the following set, measuring ``homotopy modulo isotopy'': \[ \mathcal{R}^G_{[f]}:= \{ R: S^2 \hookrightarrow M \mid R \text{ is homotopic to } f \text{ and has } G \text{ as geometric dual} \}\slash \text{isotopies of }R. \] Let $\mathbb{F}_2T_M$ be the $\mathbb{F}_2$-vector space with basis $T_M:=\{g\in\pi_1M \mid g^2=1\neq g\}$, the elements of order two (2-torsion). It turns out that the self-intersection invariant for maps $S^3 \looparrowright M^4\times \mathbb{R}^2$ with transverse double points gives a homomorphism $\mu_3:\pi_3M\to \mathbb{F}_2T_M$ (see Lemma~\ref{lem:Wall}). \begin{thm}\label{thm:4d-light-bulb} The abelian group $\mathbb{F}_2T_M$ acts transitively on $\mathcal{R}^G_{[f]}$, and $\mathcal{R}^G_{[f]}\neq\emptyset$ if and only if Wall's reduced self-intersection invariant $\widetilde\mu(f)$ vanishes. The stabilizer of $R\in\mathcal{R}^G_{[f]}$ is the subgroup $\mu_3(\pi_3M)\leq \mathbb{F}_2T_M$. If $R,R':S^2\hookrightarrow M$ represent the same element in $\mathcal{R}^G_{[f]}$ and agree near $G$ then they are isotopic by an isotopy supported away from $G$. \end{thm} Note that the group action applied to $R\in\mathcal{R}^G_{[f]}$ gives a bijection $\mathcal{R}^G_{[f]}\longleftrightarrow \mathbb{F}_2T_M/\mu_3(\pi_3M)$. As a consequence Gabai's LBT follows: If $\pi_1M$ contains no 2-torsion then $\mathbb{F}_2T_M=\{0\}$ and hence $ \mathcal{R}^G_{[f]}$ consists of a single isotopy class. In fact, in the second version of his paper Gabai strengthens his result to a ``normal form'' \cite[Thm.1.3]{Gab} which in our language translates to saying that there is a surjection $\mathbb{F}_2T_M \twoheadrightarrow \mathcal{R}^G_{[f]}$. Examples where this projection is not injective were given in \cite{Schwartz}, providing 4-manifolds $M$ for which $\mu_3$ is non-trivial.
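To illustrate the coefficient group: only 2-torsion contributes to $\mathbb{F}_2T_M$, so for instance
\[
\pi_1M\cong\mathbb{Z} \;\Longrightarrow\; T_M=\emptyset,\ \mathbb{F}_2T_M=0,
\qquad
\pi_1M\cong\mathbb{Z}/4=\langle g\rangle \;\Longrightarrow\; T_M=\{g^2\},\ \mathbb{F}_2T_M\cong\mathbb{F}_2,
\]
while for the infinite dihedral group $\mathbb{Z}/2\ast\mathbb{Z}/2$ every reflection has order two, so $T_M$ is infinite.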
\begin{rem}\label{rem:common} Hannah Schwartz pointed out examples showing that the geometric dual $G$ needs to be common to both spheres. Consider closed, 1-connected 4-manifolds $M_0, M_1$ such that \[ \varphi: M_0 \# S^2 \times S^2\overset{\cong}{\longrightarrow} M_1\# S^2 \times S^2 \] is a diffeomorphism which preserves the $S^2 \times S^2$-summands homotopically. See for instance the examples in \cite{AKMRS}. Consider the spheres $R:= S^2 \times p$ and $R':=\varphi(S^2 \times p)$ in $M_1\# S^2 \times S^2$ with geometric duals $p \times S^2$ and $\varphi(p \times S^2)$. Then $R$ and $R'$ are homotopic but cannot be isotopic, since otherwise the ambient isotopy theorem would lead to a diffeomorphism $M_0\cong M_1$. \end{rem} \subsection{Consequences of Theorem~\ref{thm:4d-light-bulb} and its proof}\label{subsec:corollaries-of-LBT} \begin{cor}\label{cor:infinitely-many} There exist $4$--manifolds $M$ and $f:S^2\looparrowright M$ with infinitely many free isotopy classes of embedded spheres homotopic to $f$ (and with common geometric dual). These manifolds also admit infinitely many distinct pseudo-isotopy classes of self-diffeomorphisms. \end{cor} These self-diffeomorphisms (carrying one sphere to the other) will be constructed in Lemma~\ref{lem:pseudo-isotopy}. For example, let $M'$ be any $4$--manifold obtained by attaching 2-handles to a boundary connected sum of copies of $S^1\times D^3$ such that $\mathbb{Z}/2\ast\mathbb{Z}/2\leq\pi_1M'$. This infinite dihedral group contains infinitely many distinct reflections (which are of order~$2$). It follows from Theorem~\ref{thm:4d-light-bulb} that there exist infinitely many isotopy classes of spheres homotopic to $p \times S^2$ in $M:=M' \#(S^2\times S^2)$, all with the same geometric dual $S^2 \times p$ and all related by diffeomorphisms.
Under the hypotheses of Theorem~\ref{thm:4d-light-bulb} we also have the following results. \begin{cor}\label{cor:concordance} Concordance implies isotopy for spheres with a common geometric dual. \end{cor} \begin{cor}\label{cor:5d} If $R, R':S^2\hookrightarrow M^4$ have a common geometric dual in $M$ then $R$ and $R'$ are isotopic in $M \times \mathbb{R}$ if and only if they are isotopic in $M$. \end{cor} The two results are actually ``scholia'', i.e.\ corollaries to our proof of Theorem~\ref{thm:4d-light-bulb}. Namely, we show that the bijections in our theorem are induced by a based concordance invariant $\operatorname{fq}(R,R')\in\mathbb{F}_2T_M/\mu_3(\pi_3M) $ used by Mike Freedman and Frank Quinn in \cite[Thm.10.5(2)]{FQ} and later named $\operatorname{fq}$ by Richard Stong \cite[p.2]{Stong}. As explained in Section~\ref{sec:FQ}, Freedman--Quinn actually use the self-intersection invariant $\mu_3(H)\in\mathbb{F}_2T_M$ of a map $S^2 \times I\looparrowright M \times \mathbb{R} \times I$ with transverse double points obtained by perturbing the track of a based homotopy $H$ between $R$ and $R'$ in $M \times \mathbb{R}$, explaining Corollary~\ref{cor:5d}. Stong states that in the quotient $\mathbb{F}_2T_M/\mu_3(\pi_3M)$, the choice of $H$ disappears and gives $\operatorname{fq}(R,R')$. This will be proven in Section~\ref{sec:homotopies} for any two spheres $R,R'$ in $M$ that are based homotopic, as is the case for $R,R' \in \mathcal{R}^G_{[f]}$. \begin{cor}\label{cor:fq} If $R, R':S^2\hookrightarrow M^4$ have a common geometric dual and are homotopic, then they are isotopic if and only if $\operatorname{fq}(R,R')=0$.
\end{cor} This follows from the relation $\operatorname{fq}(t\cdot R,R)=t$ for all $t\in\mathbb{F}_2T_M/\mu_3(\pi_3M)$, between our action and the Freedman--Quinn invariant, see Section~\ref{subsec:geo-action-def}. For the next scholium we consider a ``relative unknotting number'' $\operatorname{u}(R,R')$ for homotopic spheres $R,R': S^2\hookrightarrow M$: By assumption, there is a sequence of finger moves and Whitney moves that lead from $R$ to $R'$, compare Section~\ref{subsec:regular-homotopy}. Let $\operatorname{u}(R,R')\in \mathbb{N}_0$ denote the minimal number of finger moves required in any such homotopy. In general, this is an extremely difficult invariant to compute, even though we'll see in Lemma~\ref{lem:compute-mu-by-crossing-changes} that one always has the estimate $\operatorname{u}(R,R') \geq |\operatorname{fq}(R,R')|$. Here the {\em support} $|t|$ is the number of non-zero coefficients in $t\in \mathbb{F}_2T_M$, and for an equivalence class $[t]\in \mathbb{F}_2T_M/\mu_3(\pi_3M)$ we let $|[t]|$ be the minimum support of all representatives. Michael Klug pointed out that in the presence of a common geometric dual this estimate becomes an equality (see the last part of Section~\ref{sec:infinitely-many}): \begin{cor}\label{cor:unknotting number} For $R,R'\in \mathcal{R}^G_{[f]}$, the relative unknotting number equals the support of the Freedman--Quinn invariant: $\operatorname{u}(R,R') = |\operatorname{fq}(R,R')|$. \end{cor} Using the 4-manifold $M$ introduced below Corollary~\ref{cor:infinitely-many}, we see that any (arbitrarily large) number is realized as the relative unknotting number between spheres in $M$. See \cite{JKRS} for results on unknotting numbers of $2$-spheres in $S^4$ relative to the unknot.
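As a small illustration of the minimum support, suppose $t_1,t_2\in T_M$ are distinct basis elements and, hypothetically, $\mu_3(\pi_3M)=\{0,\,t_1+t_2\}$. Working with $\mathbb{F}_2$-coefficients, the two representatives of each class give
\[
|[t_1]|=\min\{|t_1|,\,|t_1+(t_1+t_2)|\}=\min\{1,\,|t_2|\}=1,
\qquad
|[t_1+t_2]|=\min\{|t_1+t_2|,\,|0|\}=0.
\]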
\subsection{An isotopy invariant statement}\label{subsec:more-general-LBT} Even though the original LBT's in $S^2 \times S^1$ and $S^2 \times S^2$ are extremely well motivated, see \cite{Gab}, readers may find it confusing that our set $\mathcal{R}^G_{[f]}$ is {\em not isotopy invariant}: If we do a finger move on $R\in \mathcal{R}^G_{[f]}$ that introduces two additional intersection points with the dual $G$, the resulting embedded sphere is isotopic to $R$ but not in $\mathcal{R}^G_{[f]}$ any more. In other words, if one wants isotopy invariant statements, one should not fix a sphere $G$ as in the LBT. This problem can be addressed as follows. \begin{defn}\label{def:dual-pairs} For fixed $R:S^2\hookrightarrow M^4$ with fixed geometric dual $G$, consider pairs of embeddings $R', G':S^2\hookrightarrow M$ such that: \begin{itemize} \item $R'$ is {\em homotopic} to $R$ via $R_s:S^2\to M$, \item $G'$ is {\em isotopic} to $G$ via $G_s:S^2\hookrightarrow M$, and \item $G_s$ is a geometric dual to $R_s$ for each $s\in I$. \end{itemize} Denote by $\mathcal{R}_{R,G}$ the set of isotopy classes of such pairs $(R',G')$, where an \emph{isotopy of a pair} is a pair $(R_s,G_s)$ as above where $R_s$ is in addition an isotopy. \end{defn} Then an isotopy $R_s$ can be embedded into an ambient isotopy $\varphi_s:M\overset{\cong}{\longrightarrow} M$ and hence leads to pairs $(R_s, \varphi_s(G))$ that are all equal in $\mathcal{R}_{R,G}$. \begin{thm}\label{thm:4d-light-bulb-isotopy} The group $\mathbb{F}_2T_M$ acts transitively on $ \mathcal{R}_{R,G}$, with stabilizers $\mu_3(\pi_3M) $. The action on the basepoint $(R,G)$ hence gives a bijection $\mathbb{F}_2T_M/\mu_3(\pi_3M)\longrightarrow \mathcal{R}_{R,G}$. The inverse of this bijection is given by the Freedman--Quinn invariant $\operatorname{fq}(R,\cdot):\mathcal{R}_{R,G}\longrightarrow \mathbb{F}_2T_M/\mu_3(\pi_3M)$.
\end{thm} It turns out that this result is equivalent to Theorem~\ref{thm:4d-light-bulb} above, and as a consequence we won't follow up on it in this paper. For example, to derive Theorem~\ref{thm:4d-light-bulb} we can use Lemma~\ref{lem:based} to turn any homotopy $R_s$ into one that satisfies the last condition in the above definition (with $G_s=G$ a constant isotopy). \subsection{Outline of the proof of Theorem~\ref{thm:4d-light-bulb}}\label{sec:outline-proof} \begin{figure}[ht!] \centerline{\includegraphics[scale=.32]{sheet-change-w-disks-1.pdf}} \caption{The Whitney disks $W_i$ and $W_i^t$ pair the same self-intersections $p_i^{\pm}$ of $f^t$. On the boundary, $\partial W_i^t$ differs from $\partial W_i$ in that it departs from and approaches the negative self-intersection $p_i^-$ in different local sheets of $f^t$. } \label{fig:sheet-change-w-disks-1} \end{figure} Our action of $t=t_1+\cdots +t_n \in {\mathbb{F}_2T_M}$ on $R\in \mathcal{R}^G_{[f]}$ will be defined as follows. First create a generic map $f^t:S^2\looparrowright M$ by doing $n$ finger moves on $R$ along arcs representing $t_i\in T_M$. There is a collection of $n$ Whitney disks $\mathcal{W}\subset M\smallsetminus G$ for $f^t$ which are ``inverse'' to the finger moves, i.e.\ the result of doing the Whitney moves along the Whitney disks in $\mathcal{W}$ is isotopic to $R$. Since $t_i^2=1$, we can use $G$ to find a different collection $\mathcal{W}^t\subset M\smallsetminus G$ of $n$ Whitney disks on $f^t$ which induce the same pairings of self-intersections of $f^t$ as $\mathcal{W}$ but which induce different sheet choices for the preimages of the self-intersections, see Figure~\ref{fig:sheet-change-w-disks-1} and Lemma~\ref{lem:choice-of-disks-exists}.
The result of doing the Whitney moves on the Whitney disks in $\mathcal{W}^t$ is an embedded sphere denoted by $t\cdot R$, which by construction is homotopic to $R$ and has geometric dual $G$. We'll show in Section~\ref{sec:proof-main} that $t\cdot R$ is isotopic to $R$ if and only if $t\in{\mu_3(\pi_3M) }$. The isotopy class of $t\cdot R$ can also be described explicitly without knowing the Whitney disk collection $\mathcal{W}^t$ by the following {\it Norman sphere}, built from $f^t$ and $G$ (see Section~\ref{sec:norman}). Instead of doing Whitney moves on $\mathcal{W}^t$, the \emph{Norman trick} \cite{Nor} can be applied to eliminate the self-intersections of $f^t$ by tubing into the dual sphere $G$ along arcs in $f^t$. This operation also involves a choice of local sheets for each self-intersection, and we will show in Section~\ref{sec:choices-of-arcs-and-pairings} that $t\cdot R$ is isotopic to the result of applying the Norman trick using the \emph{opposite} local sheets at each negative self-intersection compared to the original finger moves. Gabai's proof of his LBT in \cite{Gab} introduces a notion of ``shadowing a homotopy by a tubed surface'', which uses careful manipulations of several types of tubes and their guiding arcs to control the isotopy class of the result of a homotopy between embeddings. In addition to using the Norman trick, Gabai also works with tubes along framed arcs that extend into the ambient $4$--manifold, including the guiding arcs for finger moves. Our proof of Theorem~\ref{thm:4d-light-bulb}, which implies Gabai's LBT, takes a different viewpoint by focusing on the generic sphere $f$ which is the middle level of a homotopy between embeddings $R$ and $R'$, given by finger moves and then Whitney moves. By reversing the finger moves, we see that both these embeddings are obtained from $f$ by sequences of Whitney moves along two collections of Whitney disks.
We analyze all choices involved in such collections of Whitney disks and show how they are related to the Freedman--Quinn invariant $\operatorname{fq}(R,R')$. \begin{figure}[ht!] \centerline{\includegraphics[scale=.3]{tube-caps-1A.pdf}} \caption{Left: $W$ pairing self-intersections $p^\pm$ of $f$. Center: The surface $F$ obtained by tubing $f$ to itself admits a cap $c_W$. Right: The result $f_W$ of doing the $W$-move is isotopic to the result $F_{c_W}$ of surgery on $c_W$.} \label{fig:tube-caps-0} \end{figure} Our key tool is the relationship between Whitney moves and surgeries on surfaces as shown in Figure~\ref{fig:tube-caps-0}. Note that the dual curve to $\partial c_W$ on $F$ is a meridional circle to $F$ which bounds a meridional disk. This meridional disk $d$ is usually not of much use since it intersects $F$ (exactly once). However, in the presence of the dual $G$ to $F$, we can tube $d$ into $G$, removing this intersection and obtaining a cap $c_G$ which is disjoint from $F$ and $c_W$, see Figure~\ref{fig:tube-caps-2}. Now we can apply the fundamental isotopy between surgeries along dual curves in an orientable surface showing that surgery on $c_W$ is isotopic to surgery on $c_G$ (see Section~\ref{sec:capped-surface-w-move}). And if $W'$ is any other Whitney disk for $f$ having the same Whitney circle $\partial W'=\partial W$, then we see that surgery on $c_G$ is also isotopic to surgery on $c_{W'}$. Altogether, this implies that the Whitney moves on $f$ along $W$, respectively $W'$, give isotopic results $R$, respectively $R'$! This outline finishes our proof in the very simple case that our collections contain only one Whitney disk and the Whitney circles agree. Multiple Whitney disks in our collections correspond to higher genus capped surfaces and the remaining steps in the argument are ``only'' about showing independence of Whitney circles.
Section~\ref{sec:choices-of-arcs-and-pairings} consists of a sequence of lemmas that reduce this dependence only to the choices of sheets at self-intersections whose group elements are of order 2. Fortunately, these are exactly detected by the Freedman--Quinn invariant, finishing our proof. \subsection{Isotopy of disks with duals}\label{subsec:isotopy-of-disks} Using the last sentence of Theorem~\ref{thm:4d-light-bulb} one sees that the bijection described by the theorem is equivalent to the following result. In one direction one works in the complement of a tubular neighborhood of the dual sphere $G$, and in the other direction one attaches $D^2 \times S^2$ to the component $S^1 \times S^2\subseteq \partial M$. \begin{thm}\label{thm:disks} Let $M$ be a connected oriented manifold with $S^1 \times S^2\subseteq \partial M$ and let $R:(D^2,S^1) \hookrightarrow (M,\partial M)$ be an embedding with $\partial R = S^1 \times p\subset S^1 \times S^2$. Then the set \[ \mathcal{R}_{R}:= \{ R': D^2 \hookrightarrow M \mid R' \text{ is homotopic rel boundary to } R\} \slash \text{isotopy of $R'$ rel boundary} \] is in bijection with $\mathbb{F}_2T_M/\mu_3(\pi_3M)$ via the Freedman--Quinn invariant $\operatorname{fq}(R,\cdot)$. \end{thm} Note that $G=q \times S^2\subset\partial M$ is a geometric dual sphere to any $R'\in \mathcal{R}_{R}$ in the sense that it has trivial normal bundle and intersects $\partial R'$ transversely in the single point $q\times p$. Hence the question arises whether such a theorem can continue to hold for embedded disks (rel boundary) with geometric dual $G:S^2\hookrightarrow\partial M$ that is \emph{not} assumed to lie in a component $S^1 \times S^2\subseteq \partial M$. It's important to point out that our key Lemma~\ref{lem:choice-of-disks-exists} still works in this setting.
The first version of our paper contained a flawed argument that would have shown exactly such a generalization of our theorem. We are thankful to Dave Gabai for pointing out that our technique of sliding Whitney disks over each other may fail to preserve the relevant isotopy class in the case of ``self-slides'' in the setting of disks $R:D^2\hookrightarrow M$; see the last paragraph in Section~\ref{sec:w-disk-slides}. In fact, Gabai recently posted a paper \cite{G2} constructing a generalization of the Freedman--Quinn invariant using work that goes back to Dax \cite{D}. He showed that a ``self-feeding'' construction gives homotopic neat embeddings $R,R':D^2\hookrightarrow M$ with common dual in $\partial M$ and vanishing Freedman--Quinn invariant but which are {\it not} isotopic rel boundary, as detected by the Dax invariant. Here $M$ is the boundary connected sum of $S^1 \times D^3$ and $S^2 \times D^2$, so $\partial M$ does not contain $S^1 \times S^2$. It turns out that our Whitney disk self-sliding does indeed preserve the relevant isotopy class in the setting of spheres $R:S^2\hookrightarrow M$ with a geometric dual, as is carefully discussed in this current version of our paper; see Lemmas~\ref{lem:slide-w-disk-across-D-z} and~\ref{lem:w-disk-self-slide}. Looking at the proof of Lemma~\ref{lem:slide-w-disk-across-D-z}, we see that it also applies to disks $R:D^2\hookrightarrow M$ with geometric dual $G:S^2\hookrightarrow\partial M$ exactly if there is an $S^1$-family of dual spheres running along the boundary of the disk. In other words, the component of $\partial M$ that contains the geometric dual $G$ must be a copy of $S^1 \times S^2$ with $G=q \times S^2$. This is consistent with Theorem~\ref{thm:disks} above.
Danica Kosanovi\'c and the second author recently showed that the Dax invariant classifies isotopy classes of embeddings $D^2\hookrightarrow M$ with fixed boundary and dual sphere $G:S^2 \hookrightarrow \partial M$. In particular, this contains the complete classification of Gabai's new examples. The case where the dual sphere $G$ is not contained in the boundary of $M$ is very interesting and completely open. \subsection{Other proofs}\label{subsec:intro-other-proofs} In Section~\ref{sec:ambient-morse-pi1-embedding-thm} we provide an alternative proof of Corollary~\ref{cor:fq}, sufficient to conclude Gabai's LBT. We start with the uniqueness part of Freedman and Quinn's Theorem~10.5 \cite{FQ,Stong} which gives a concordance $C: S^2 \times I \hookrightarrow M\times I$ from $R$ to $R'$ if and only if $\operatorname{fq}(R,R')=0$. By analyzing the handle decomposition on $S^2 \times I$ related to the composition $p_2\circ C$, we show directly that in the presence of a geometric dual ``ambient handles can be cancelled'' and $C$ can be replaced by an isotopy. This argument gives a third proof of Gabai's LBT, and a second proof of our generalization. It's interesting to note that Gabai also provided a different argument for the special case of $M=S^2 \times S^2$, which was then exposited by Bob Edwards in \cite{Edwards}. An important step is ``codimension~2 embedded Morse theory'', sometimes also referred to as ``ambient Morse theory'' applied to a map of a surface to a 4-manifold. This is one dimension below the same technique for 3-manifolds in 5-manifolds used in our Section~\ref{sec:ambient-morse-pi1-embedding-thm}. \noindent {\em Acknowledgements:} It is a pleasure to thank David Gabai and Daniel Kasprowski for helpful discussions. Thanks also to the referees for careful readings and helpful comments.
The first author was supported by a Simons Foundation \emph{Collaboration Grant for Mathematicians}, and both authors thank the Max Planck Institute for Mathematics in Bonn, where this work was carried out. \tableofcontents \section{Preliminaries on surfaces in 4-manifolds}\label{sec:preliminaries} We work in the smooth category throughout. Smoothing of corners will be assumed without mention during cut-and-paste operations on surfaces. Orientations will usually be assumed and suppressed, as will choices of basepoints and whiskers. In the smooth category, a {\em generic} map, written $f:\Sigma^2\looparrowright M^4$, is a smooth map which is an embedding, except for a finite number of transverse double points. This is the same as a generic immersion and means that there are coordinates on $\Sigma$ and $M$ such that $f$ looks locally like the inclusion $\mathbb{R}^2 \times \{0\}\subset \mathbb{R}^4$ or like a transverse double point $\mathbb{R}^2 \times \{0\} \cup \{0\} \times \mathbb{R}^2 \subset \mathbb{R}^4$. \subsection{Homotopy classes of surfaces}\label{subsec:regular-homotopy} We will use the following fact (see \cite[Sec.3]{PRT}) about homotopy classes $ [S^2,M]$ of maps $f:S^2\to M$ when $M$ is oriented: The inclusion of generic maps into all smooth (or even all continuous) maps induces a bijection \[ \{f: S^2 \looparrowright M \mid \mu_1(f) = 0\} / \{\text{isotopies, finger moves, Whitney moves}\} \longleftrightarrow [S^2,M] \] where $\mu_1(f)\in\mathbb{Z}$ denotes the coefficient at the trivial group element in the self-intersection invariant $\mu(f)$, see Section~\ref{sec:mu}. Note that $\mu_1(f)$ can be changed arbitrarily by (non-regular) cusp homotopies and in the following, we'll always tacitly assume that this has been done such that $\mu_1(f)=0$. An isotopy of generic maps is a map $H:S^2 \times I \to M$ such that $H(\cdot,t)$ is generic for all $t\in I$.
Note that in a finger move or Whitney move this is true for all but one time $t$. In the setting of the LBT, finger moves in a generic homotopy from $R$ to $R'$ having common geometric dual $G$ may be assumed to be disjoint from $G$ since finger moves are supported near their guiding arcs. By the following lemma, the Whitney moves in such a homotopy may also be assumed to be disjoint from $G$ because one easily finds a preliminary isotopy that makes $R$ and $R'$ agree near $G$. This is also \cite[Lem.6.1]{Gab} where the 3D-LBT is used in the proof. For the convenience of the reader, and for completeness, we give an elementary argument. \begin{lem}\label{lem:based} If $R,R':S^2\hookrightarrow M$ agree near a common geometric dual $G$ and are homotopic in $M$ then there exists a finite sequence of isotopies, finger moves and Whitney moves in $M\smallsetminus G$ leading from $R$ to $R'$. \end{lem} \begin{proof} We first show that $R,R'$ are base point preserving homotopic, noting that they both send a base-point $z_0\in S^2$ to $z=R\cap G=R'\cap G$ and hence represent elements $[R],[R']\in\pi_2(M,z)$. Any free homotopy $H$ from $R$ to $R'$ identifies $[R']$ with $g\cdot [R]$, where the loop $H(z_0\times I)$ represents $g\in \pi_1(M,z)$ and we use the $\pi_1$-action on $\pi_2$. Now take a free homotopy $H$ that is transverse to $G\subset M$ and consider the submanifold $L:=H^{-1}(G)\subset S^2 \times I$. $L$ is a 1-manifold with boundary $z_0 \times \{0,1\}$ since $R$ and $R'$ intersect $G$ exactly in $z\in M$. This implies that $L$ has a component $L_0$ which is homotopic (in $S^2 \times I$) to $z_0 \times I$ rel endpoints. As a consequence, the above group element $g$ is also represented by $H(L_0)\subset G \cong S^2$ and hence $[R]=[R']$. Removing an open normal bundle $\nu G$ of $G$ leads to a 4-manifold $W:= M \smallsetminus \nu G$ with a new boundary component $\partial_0 W\cong S^2 \times S^1$.
$W$ contains two embedded disks $D$ and $D'$ with the same boundary in $\partial_0 W$. These disks complete to the spheres $R$ and $R'$ when adding $\nu G$ back into the 4-manifold. We claim that $D$ and $D'$ are homotopic rel boundary in $W$ by the homological argument below. Granted this fact, we see from the above discussion that there is also a regular homotopy rel boundary from $D$ to $D'$ in $W$. Approximating it by a generic map we obtain the desired type of homotopy in $M\smallsetminus \nu G$. To show that $D$ and $D'$ are homotopic rel boundary in $W$, it suffices to show that the glued up sphere $S:=D\cup_{\partial} D'$ is null homotopic in $W$. Since $R$ intersects $G$ in a single point, it follows from Seifert--van Kampen that the inclusion induces an isomorphism $\pi_1W \cong \pi_1M$, with base-points taken on $\partial_0 W$. The long exact sequence of the pair $(M,W)$ for homology with coefficients in $\mathbb{Z}\pi_1W$ gives exactness for \[ H_3(M,W;\mathbb{Z}\pi_1W) \longrightarrow H_2(W;\mathbb{Z}\pi_1W) \longrightarrow H_2(M;\mathbb{Z}\pi_1M). \] The Hurewicz isomorphism identifies the map on the right hand side with $\pi_2W\longrightarrow \pi_2M$ which sends $S$ to zero by our conclusion on $R,R'$ being based homotopic. By excision and Lefschetz duality, \[ H_3(M,W;\mathbb{Z}\pi_1W) \cong H_3(S^2 \times D^2, S^2 \times S^1;\mathbb{Z}\pi_1W)\cong H^1(S^2 \times D^2;\mathbb{Z}\pi_1W) =0 \] which implies that $[S]=0$. \end{proof} We note that Lemma~\ref{lem:based} is the reason why free (versus based) homotopy and isotopy agree in the presence of a common dual, and in particular, why we don't have to divide out by the conjugation action of $\pi_1M$ in Theorem~\ref{thm:4d-light-bulb}. In the rest of the paper, we will turn a sequence of finger moves and Whitney moves as in Lemma~\ref{lem:based} into an isotopy, provided the Freedman--Quinn invariant vanishes.
If $f:S^2\looparrowright M$ is the {\em middle level} of such a sequence, i.e.\ the result of all finger moves on $R$, then there are two {\em clean collections of Whitney disks} for $f$ in $M$: One collection $\mathcal{W}$ reverses all the finger moves and leads back to $R=f_\mathcal{W}$, and the other collection $\mathcal{W}'$ does the interesting Whitney moves to arrive at $R'=f_{\mathcal{W}'}$. Thus the triple $(f,\mathcal{W},\mathcal{W}')$ represents the entire homotopy from $R$ to $R'$ up to isotopy. By construction, each of the two collections of Whitney disks is {\em clean} in the sense of Definition~\ref{def:clean} which formalizes the above discussion. In particular, since the result of Lemma~\ref{lem:based} is a homotopy in the complement of $G$, the notion of clean Whitney disk will include disjointness from $G$. Then all our maneuvers will stay in the complement of $G$, explaining the last sentence in Theorem~\ref{thm:4d-light-bulb}. \subsection{Self-intersection invariants}\label{sec:mu} Let $M$ be a smooth oriented $4$--manifold and let $f:S^2\looparrowright M$ be a generic sphere with a whisker from the base point of $M$ to $f$. A loop in $f(S^2)$ that changes sheets exactly at one self-intersection $p$ is called a {\it double point loop} at $p$. After choosing an orientation of the double point loop, it determines an element $g\in\pi_1M$ associated to $p$. The orientation of a double point loop corresponds to a \emph{choice of sheets} at $p$, i.e.~a choice of a point $x\in f^{-1}(p)$ that is the starting point of the preimage of the loop. The self-intersection invariant $\mu(f)\in \mathbb{Z}[\pi_1M]/ \langle g-g^{-1} \rangle $ is defined by summing the group elements represented by double point loops of $f$, with the coefficients coming from the usual signs determined by the orientation of $M$.
The relations $g-g^{-1}=0$ in the integral group ring account for the above choices of sheets. Then $\mu(f)$ is invariant under regular homotopies of $f$ and changes by $\pm 1$ under a cusp homotopy. Therefore, taking $\mu(f)$ in a further quotient that also sets the identity element $1\in\pi_1M$ equal to $0$ makes the resulting {\it reduced} self-intersection invariant $\widetilde\mu(f)$ invariant under arbitrary based homotopies of $f$. The vanishing of $\widetilde \mu(f)$ in fact only depends on the unbased homotopy class of $[f]$. The analogous reduced self-intersection invariant defined for generic $3$-spheres in $6$--manifolds will be relevant in Section~\ref{sec:FQ}. \subsection{Whitney disks and Whitney moves}\label{sec:clean-w-disks-moves} Suppose that a pair $p^\pm$ of oppositely-signed self-intersection points of $f:S^2\looparrowright M$ have equal group elements for some choices of sheets at $p^+$ and $p^-$. Then the pair $p^\pm$ admits an embedded null-homotopic \emph{Whitney circle} $\alpha\cup\beta=f(a)\cup f(b)$ for disjointly embedded arcs $a$ and $b$ joining the preimages $x^+,y^+$ and $x^-,y^-$ of $p^+$ and $p^-$, as in Figure~\ref{fig:w-arcs-disk-move-group-element-1}. Such $\alpha$ and $\beta$ are called \emph{Whitney arcs}. \begin{figure}[ht!] \centerline{\includegraphics[scale=.3]{w-arcs-disk-move-group-element-1.pdf}} \caption{Left: In the domain of $f$. Center: The horizontal sheet of $f$ appears in the `present' as does the Whitney disk $W$, and the other sheet of $f$ appears as an arc which is understood to extend into `past and future', with the dashed part indicating where $f$ extends outside the pictured $4$--ball neighborhood of $W$ in $M$.
Right: After the Whitney move guided by $W$.}
\label{fig:w-arcs-disk-move-group-element-1}
\end{figure}

The center of Figure~\ref{fig:w-arcs-disk-move-group-element-1} also shows a \emph{Whitney disk} $W$ with boundary $\partial W=\alpha\cup\beta$ pairing self-intersections $p^\pm$ with group element $g\in\pi_1M$. The right side of Figure~\ref{fig:w-arcs-disk-move-group-element-1} shows the result $f_W$ of doing a \emph{Whitney move} on $f$ guided by $W$, which is an isotopy of one sheet of $f$, supported in a regular neighborhood of $W$, that eliminates the pair $p^\pm$. Combinatorially, $f_W$ is constructed from $f$ by replacing a regular neighborhood of one arc of $\partial W$ with a \emph{Whitney bubble} over that arc. This Whitney bubble is formed from two parallel copies of $W$ connected by a curved strip which is normal to a neighborhood in $f$ of the other arc. Figure~\ref{fig:w-arcs-disk-move-group-element-1} shows $f_W$ using a Whitney bubble over $\alpha$. Although both these descriptions of $f_W$ involve a choice of arc of $\partial W$, up to isotopy $f_W$ is independent of this choice. The construction of an embedded Whitney bubble requires that $W$ is \emph{framed} (so that the two parallel copies used above do not intersect each other), and Whitney disks which do not satisfy the framing condition are called \emph{twisted} (see e.g.~\cite[Sec.7A]{ST1}).

\subsection{Sliding Whitney disks}\label{sec:w-disk-slides}

We describe here an operation that ``slides'' Whitney disks over each other. This maneuver changes the Whitney arcs while preserving the isotopy class of the results of the Whitney moves, and will be used in the proof of the key Proposition~\ref{prop:common-arcs-w-moves}.

\begin{figure}[ht!]
\centerline{\includegraphics[scale=.24]{w-disk-slide-1-solid.pdf}}
\caption{Left: A path $\gamma$ guiding a slide of $W_i$ over $W_j$. Right: The result $W_i'$ of sliding $W_i$ contains the (blue) Whitney bubble $B_{\alpha_j}$ over $\alpha_j$.}
\label{fig:w-disk-slide-1}
\end{figure}

Let $W_i$ and $W_j$ be two Whitney disks on $f$, and let $\gamma$ be an embedded path in $f$ from $\alpha_i\subset \partial W_i$ to $\alpha_j\subset \partial W_j$ such that the interior of $\gamma$ is disjoint from all self-intersections of $f$ and all Whitney arcs on $f$. Denote by $W_i'$ the result of boundary-band-summing $W_i$ into a Whitney bubble $B_{\alpha_j}$ over $\alpha_j$ by a half-tube along $\gamma$ as in Figure~\ref{fig:w-disk-slide-1}. We say that $W_i'$ is the result of \emph{sliding $W_i$ over $W_j$}. To see that $f_{\{W_i',W_j\}}$ is isotopic to $f_{\{W_i,W_j\}}$, just observe that $W_i'$ becomes isotopic to $W_i$ after doing the $W_j$-Whitney move. To see this in the coordinates of Figure~\ref{fig:w-disk-slide-1}, note that doing the $W_j$-Whitney move would either replace a horizontal disk of $f$ inside $B_{\alpha_j}\subset W_i'$ by a smaller Whitney bubble over $\alpha_j$, or would leave the same horizontal disk free of intersections by adding a Whitney bubble over $\beta_j$ to the other sheet of $f$. So $W'_i$ isotopes back to $W_i$ across the smaller bubble or the horizontal disk. Either of $\alpha_i$ and $\beta_i$ can be slid over either of $\alpha_j$ or $\beta_j$, and the isotopy class of the results of the Whitney moves is preserved as long as $i\neq j$. This sliding operation can be iterated:

\begin{lem}\label{lem:w-disk-slide-isotopy}
If a collection $\mathcal{W}'$ of Whitney disks on $f$ is the result of performing finitely many slides ($i\neq j$) on a collection $\mathcal{W}$, then $f_{\mathcal{W}'}$ is isotopic to $f_\mathcal{W}$.
$\square$
\end{lem}

Regarding the $i=j$ case, one can indeed slide $W_i$ over itself using a band from $\alpha_i\subset\partial W_i$ to the boundary of a Whitney bubble $B_{\beta_i}$ over $\beta_i$, and the result will still be a clean Whitney disk. We do not believe that such a {\em self-slide} preserves the isotopy class of $f_\mathcal{W}$ in general (as it does in Lemma~\ref{lem:w-disk-slide-isotopy}). However, it will follow from Lemma~\ref{lem:w-disk-self-slide} that this self-sliding does indeed preserve the isotopy class of the result of the Whitney move in our current setting, where $f$ is a sphere with a geometric dual.

\subsection{Tubing into the dual sphere}\label{sec:tubing-into-G}

For $G$ a geometric dual to $f$, a transverse intersection point $r$ between $f$ and a surface $D$ can be eliminated by tubing $D$ into $G$. This is known as the \emph{Norman trick} \cite{Nor} and is the main reason why dual spheres are so useful. Here ``tubing $D$ into $G$'' means taking an ambient connected sum of $D$ with a parallel copy $G'$ of $G$ via a tube (an annulus) of normal circles over an embedded arc in $f$ that joins $r$ with an intersection point between $f$ and $G'$, see Figure~\ref{fig:tube-into-G-1}. Note that in the case $D=f$ this operation involves a choice of which local sheet of $r$ to connect into.

\begin{figure}[ht!]
\centerline{\includegraphics[scale=.25]{tube-into-G-1.pdf}}
\caption{Two views of the `tubing into $G$' operation to eliminate $r\in f\pitchfork D$, guided by a (blue dashed) path from $r$ to $z=f\cap G$.}
\label{fig:tube-into-G-1}
\end{figure}

There are infinitely many pairwise disjoint copies of $G$ intersecting a small neighborhood around $z=f\cap G$ in $f$, so this procedure can be applied to eliminate any number of such intersections without creating new ones, as long as appropriate guiding arcs for the tubes can be found.
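On the level of homology, the cost of each such tube can be recorded as follows (our own bookkeeping, under the assumption that all surfaces are oriented and classes are taken in $H_2(M;\mathbb{Z})$):

```latex
% Tubing D into an oriented parallel copy G' of G along a guiding arc in f
% removes the intersection point r without creating new intersections with f,
% while changing the homology class of D by that of G. Eliminating k
% intersection points this way gives
[D'] \;=\; [D] \,+\, \sum_{i=1}^{k} \epsilon_i\,[G] \;\in\; H_2(M;\mathbb{Z}),
\qquad \epsilon_i\in\{\pm 1\},
% where each sign \epsilon_i records the orientation of the copy of G
% used for the i-th tube.
```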
If a guiding arc intersects the boundary of a Whitney disk on $f$, then the corresponding tube around the arc will have an interior intersection with the Whitney disk, so we will always need to find guiding arcs that are disjoint from existing Whitney disk boundaries. By varying the radii of the tubes, the guiding arcs can be allowed to intersect one another while keeping the tubes disjointly embedded.

\subsection{Clean collections of Whitney disks}\label{subsec:existence-of-disks}

Recall that for $f:S^2\looparrowright M^4$, the vanishing of the self-intersection invariant
\[
\mu(f)=0\in\mathbb{Z}[\pi_1M]/\langle g-g^{-1}\rangle
\]
is equivalent to the existence of choices of sheets so that all double points of $f$ can be arranged in pairs admitting null-homotopic Whitney circles (this statement is independent of the chosen whisker for $f$).

\begin{defn}\label{def:clean}
A {\em clean collection of Whitney disks} for $f:S^2\looparrowright M$ is a collection of Whitney disks that pair all double points of $f$ and are framed and disjointly embedded, with interiors disjoint from $f$. In the presence of a dual sphere $G$ for $f$, this notion of a clean collection also includes the disjointness of the Whitney disks from $G$.
\end{defn}

Each Whitney disk in a clean collection is called a \emph{clean Whitney disk}.

\begin{lem}\label{lem:choice-of-disks-exists}
If $f:S^2\looparrowright M$ admits a geometric dual $G$, then any collection of disjointly embedded Whitney circles that are null-homotopic in $M$ extends to a clean collection of Whitney disks.
\end{lem}

\begin{proof}
Start with a collection of generic disks $W_i$ bounded by the given null-homotopic Whitney circles; these disks may intersect $G$, may be twisted, and may have interior intersections with $f$ and each other.
Note that the complement in $S^2$ of the union of the preimages $\{a_i,b_i\}$ of the Whitney circles is connected, and that there exist disjointly embedded tube-guiding paths in the complement of the Whitney circles between any number of isolated points and points near~$z$. We describe how to modify the $W_i$ relative to their boundaries, without renaming them as changes are made: First of all, each $W_i$ can be made disjoint from $G$ by tubing $W_i$ into parallel copies of $f$ along disjoint arcs in $G$. Since $f$ is immersed with possibly non-trivial normal bundle, this tubing operation is in general more traumatic than the ``tubing into $G$'' operation described in Section~\ref{sec:tubing-into-G}, and creates interior intersections between the $W_i$ and $f$, as well as intersections among the $W_i$. Next, the intersections and self-intersections among the $W_i$ can be eliminated by pushing each such point down into $f$ by a finger move, and boundary-twists make the $W_i$ framed \cite[Chap.1.3]{FQ}, both at the cost of only creating more interior intersections between Whitney disks and $f$. Finally, the interiors of the $W_i$ can be made disjoint from $f$ by tubing the $W_i$ into $G$ along disjoint paths in $f$. Since $G$ is embedded and has trivial normal bundle, the $W_i$ are still framed and disjoint from $G$, i.e.\ they form a clean collection of Whitney disks $\mathcal{W}$ bounded by the original Whitney circles.
\end{proof}

\begin{rem}\label{rem:partial-collection}
The proof of Lemma~\ref{lem:choice-of-disks-exists} shows that if any subcollection of the Whitney circles bounds clean Whitney disks, then these same Whitney disks can be extended to a clean collection of Whitney disks by applying the construction to the remaining Whitney circles.
\end{rem}

For any given collection $\mathcal{W}$ of clean Whitney disks we denote by $D_z\subset f(S^2)$ a small embedded disk around $z=f(S^2)\cap G$ such that each point in $D_z$ intersects a parallel copy of $G$ disjoint from $\mathcal{W}$. The radius of $D_z$ is less than the minimum of the radii of the finitely many normal tubes around arcs in $G$ used in the first step of the proof of Lemma~\ref{lem:choice-of-disks-exists}, but our modifications of Whitney disk collections will only use the existence of $D_z$, not its diameter. The minimum of the radii of the finitely many normal tubes around arcs in $f$ used in the last step of the proof of Lemma~\ref{lem:choice-of-disks-exists} gives a uniform lower bound on the distance between $f$ and the complements of small boundary collars of all Whitney disks in $\mathcal{W}$. Subsequent modifications of $\mathcal{W}$ by tubing into $G$ along $f$ will always be assumed to use tubes of radius less than this bound, so as long as the tubes stay away from Whitney disk boundaries, the tubes' interiors will be disjoint from Whitney disk interiors.

\begin{figure}[ht!]
\centerline{\includegraphics[scale=.28]{capped-surface-surgery-1.pdf}}
\caption{A small neighborhood in $\mathbb{R}^3$ of $F\cup c\cup c^\ast$ on the left is diffeomorphic to $D^2 \times I$ in such a way that $D^2 \times \{0\}\cong F_c$ and $D^2 \times \{1\} \cong F_{c^\ast}$. Hence the two surgeries in the center and on the right are isotopic.}
\label{fig:capped-surface-surgery-1}
\end{figure}

\subsection{Capped surfaces and Whitney moves}\label{sec:capped-surface-w-move}

A \emph{cap} on a generic orientable surface $F$ in $M$ is a $0$-framed embedded disk $c$ such that the boundary $\partial c$ is the image of a non-separating simple closed curve in the domain of $F$, and the interior of $c$ is disjoint from $F$.
Here ``$0$-framed'' means that a parallel copy $\partial c'\subset F$ of $\partial c$ bounds a cap $c'$ such that $c'\cap c=\emptyset$. Two caps on $F$ are \emph{dual} if their boundaries intersect in a single point and their interiors are disjoint. A collection $\mathcal{C}$ of pairwise disjoint caps on $F$ is \emph{dual} to another collection $\mathcal{C}^\ast$ of pairwise disjoint caps on $F$ if the interiors of all caps in $\mathcal{C}\cup \mathcal{C}^\ast$ are pairwise disjoint, and the set of all cap boundaries forms a geometric symplectic basis for the first homology of $F$ (the boundary of the $i$-th cap of $\mathcal{C}$ intersects the boundary of the $j$-th cap of $\mathcal{C}^\ast$ geometrically $\delta_{ij}$ times in $F$). For a collection $\mathcal{C}$ of disjoint caps on $F$, we denote by $F_\mathcal{C}$ the result of surgering $F$ using all the caps of $\mathcal{C}$. The following lemma can be proved by considering an isotopy of a standard model in $3$-space that passes through the symmetric surgery on both sets of caps (see Figure~\ref{fig:capped-surface-surgery-1} and \cite[Sec.2.3]{FQ}):

\begin{lem}\label{lem:capped-surface-isotopy}
If $\mathcal{C}$ and $\mathcal{C}^\ast$ are dual collections of caps on $F$, then $F_\mathcal{C}$ is isotopic to $F_{\mathcal{C}^\ast}$ by an isotopy supported near the union $F\cup\mathcal{C}\cup\mathcal{C}^\ast$.
\end{lem}

Lemma~\ref{lem:capped-surface-isotopy}, together with the presence of the geometric dual $G$, yields the following simple but useful correspondence between Whitney moves and surgeries: Let $W$ be a clean Whitney disk on $f$ with $\partial W=\alpha\cup\beta$ (possibly one of a collection $\mathcal{W}$ of Whitney disks on $f$), and let $F:T^2 \looparrowright M$ be the result of tubing $f$ to itself along $\beta$. Observe that a cap $c_W$ on $F$ can be constructed from $W$ by deleting a small boundary collar near $\beta$, and $F_{c_W}$ is isotopic to $f_W$ (Figure~\ref{fig:tube-caps-1}).

\begin{figure}[ht!]
\centerline{\includegraphics[scale=.31]{tube-caps-1.pdf}}
\caption{}
\label{fig:tube-caps-1}
\end{figure}

\begin{figure}[ht!]
\centerline{\includegraphics[scale=.31]{tube-caps-2.pdf}}
\caption{}
\label{fig:tube-caps-2}
\end{figure}

Now we construct a cap $c_G$ on $F$ which is dual to $c_W$. Start with a meridional disk $d$ to $F$ which has a single transverse intersection $r=d\pitchfork F\in\beta$ and $\partial d\subset F$ (Figure~\ref{fig:tube-caps-2}, left). Note that $G$ is a geometric dual to $F$. Then $c_G$ is the result of eliminating $r$ by tubing $d$ into $G$ along an embedded arc in $F$, disjoint from $c_W$ and $\partial d$ (and any other Whitney disks), running from $r$ to a point where a parallel copy of $G$ intersects $F$; see the right side of Figure~\ref{fig:tube-caps-2}. Such an embedded arc exists since the complement of $\partial W$ is connected (as is the complement of $\partial \mathcal{W}$). Since $c_G$ and $c_W$ are dual caps, Lemma~\ref{lem:capped-surface-isotopy} gives:

\begin{lem}\label{lem:W-move-equals-surgery-on-tube-cap}
If $F$ is the result of tubing $f$ to itself along one Whitney arc of a clean Whitney disk $W$, and $c_G$ is a cap on $F$ gotten by tubing a meridional disk dual to the Whitney arc into $G$ as above, then $f_W$ is isotopic to $F_{c_G}$. $\square$
\end{lem}

So if two Whitney disks $W$ and $W'$ on $f$ have equal Whitney circles $\partial W=\partial W'$, then $f_W$ is isotopic to $f_{W'}$, since each is isotopic to the surgery $F_{c_G}$ on a common dual cap $c_G$ to both of the caps $c_W$ and $c_{W'}$ as in Lemma~\ref{lem:W-move-equals-surgery-on-tube-cap}.
And since the complement in $f$ of the Whitney circles of a clean collection of Whitney disks is connected, we have:

\begin{lem}\label{lem:choice-of-disks}
If $\mathcal{W}$ and $\mathcal{W}'$ are clean collections of Whitney disks for the self-intersections of $f$ such that $\partial\mathcal{W}=\partial\mathcal{W}'$, then $f_\mathcal{W}$ is isotopic to $f_{\mathcal{W}'}$. $\square$
\end{lem}

\begin{lem}\label{lem:slide-w-disk-across-D-z}
For the Whitney circles $\mathcal{A}=\partial\mathcal{W}$ of a clean collection $\mathcal{W}=\cup_iW_i$ of Whitney disks as in Definition~\ref{def:clean}, consider the collection $\mathcal{A}'$ which is the result of band-summing a Whitney arc $\alpha_i\subset\partial W_i$ into a parallel copy of $\partial D_z$ along an arc $\gamma$ with interior disjoint from $\mathcal{A}$, as in the left-most and right-most pictures in Figure~\ref{fig:slide-w-over-z}. Then there exists a clean collection of Whitney disks $\mathcal{W}'$ with $\partial\mathcal{W}'=\mathcal{A}'$ such that $f_\mathcal{W}$ is isotopic to $f_{\mathcal{W}'}$.
\end{lem}

\begin{proof}
We break up the band sum operation into the three steps illustrated in Figure~\ref{fig:slide-w-over-z}: Guided by $\gamma$, modify $\alpha_i$ by pushing a subarc slightly across $\partial D_z$, and extend this isotopy to a collar of $W_i$. The isotopy class of $f_\mathcal{W}$ is unchanged, since the collection $\mathcal{W}$ changes by isotopy due to the disjointness of $\gamma$ from $\mathcal{A}$. Now delete from $\alpha_i$ the small (dashed) arc which is the intersection of $\alpha_i$ with the interior of $D_z$, and eliminate the oppositely signed self-intersections of $f$ that were paired by $W_i$ by tubing $f$ along the resulting pair of arcs into two oppositely oriented copies of $G$ which intersect $\partial D_z$ at the arcs' endpoints.
See the second picture from the left in Figure~\ref{fig:slide-w-over-z}.

\begin{figure}[ht!]
\centerline{\includegraphics[scale=.25]{slide-w-over-z.pdf}}
\caption{}
\label{fig:slide-w-over-z}
\end{figure}

This yields an immersed sphere $f^G_\gamma$ which admits the clean collection of Whitney disks $\mathcal{V}:=\mathcal{W} \smallsetminus W_i$. Note that by construction $f^G_\gamma$ is also the result of tubing $f$ to itself along the arc $\alpha_i$ that had been pushed into $D_z$ and then surgering the tube along a cap formed from a parallel copy of $G$ near where $\gamma$ meets $\partial D_z$. It follows from Lemma~\ref{lem:W-move-equals-surgery-on-tube-cap} that $f_{W_i}$ is isotopic to $f^G_\gamma$. Hence, $f_\mathcal{W}$ is isotopic to $(f^G_\gamma)_{\mathcal{V}}$. Next, change $f^G_\gamma$ by an isotopy which moves the two tubes and the two parallel copies of $G$ contained in $f^G_\gamma$ in opposite directions around $\partial D_z$, as shown in the third picture from the left in Figure~\ref{fig:slide-w-over-z}. Since we may assume that the two tubes have been chosen to have radii smaller than any previous tubes used to construct Whitney disks in $\mathcal{V}$, this isotopy does not create any new intersections (see the second paragraph after Remark~\ref{rem:partial-collection}). After this isotopy $f^G_\gamma$ still admits $\mathcal{V}$, and the isotopy class of $(f^G_\gamma)_{\mathcal{V}}$ is unchanged. Now (re)connect the endpoints of the two guiding arcs of the tubes near the short subarc of $\partial D_z$ between the endpoints to get a single arc $\alpha'_i:=\alpha_i+_\gamma\partial D_z$, which is isotopic to the result of taking the band sum of $\alpha_i$ with $\partial D_z$ along $\gamma$ (see the right-most picture in Figure~\ref{fig:slide-w-over-z}).
The resulting embedded Whitney circle $\alpha'_i\cup\beta_i$ is null-homotopic and disjoint from $\partial \mathcal{V}$, so by Lemma~\ref{lem:choice-of-disks-exists} there exists a collection $\mathcal{W}'$ of Whitney disks with $\partial \mathcal{W}'=\alpha'_i\cup\beta_i\cup\partial \mathcal{V}$. As per Remark~\ref{rem:partial-collection}, the proof of Lemma~\ref{lem:choice-of-disks-exists} fixes $\mathcal{V}$ while constructing a clean Whitney disk $W_i'$ bounded by $\alpha'_i\cup\beta_i$ in the complement of $\mathcal{V}$, so we have $\mathcal{W}'=W'_i\cup\mathcal{V}$. It follows again by Lemma~\ref{lem:W-move-equals-surgery-on-tube-cap} that $f^G_\gamma$ is isotopic to $f_{W'_i}$, since $f^G_\gamma$ is isotopic to the result of tubing $f$ to itself along $\alpha'_i$ and then surgering a cap formed from a copy of $G$ near where the guiding arcs were reconnected. Hence $f_{\mathcal{W}'}$ is isotopic to $(f^G_\gamma)_{\mathcal{V}}$, and we see that $f_\mathcal{W}$ and $f_{\mathcal{W}'}$ are isotopic.
\end{proof}

\begin{figure}[ht!]
\centerline{\includegraphics[scale=.2]{self-w-slide-dashed-arcs.pdf}}
\caption{}
\label{fig:self-w-slide-dashed-arcs}
\end{figure}

\begin{lem}\label{lem:w-disk-self-slide}
For a clean Whitney disk collection $\mathcal{W}$ on $f:S^2\looparrowright M^4$ with geometric dual $G$, if $\mathcal{W}'$ is gotten from $\mathcal{W}$ by sliding a Whitney disk over itself, then $f_{\mathcal{W}'}$ is isotopic to $f_\mathcal{W}$.
\end{lem}

\begin{proof}
Let $\alpha_i$ be the Whitney arc of $\partial W_i=\alpha_i\cup\beta_i$ that is slid over $\beta_i$ to become $\alpha'_i\subset \partial W'_i=\alpha'_i\cup\beta_i$.
Referring to Figure~\ref{fig:self-w-slide-dashed-arcs}, consider the following five steps (indicated by the arrows in the figure) describing, in the domain, an isotopy of $\alpha_i=f(a_i)$ to $\alpha'_i=f(a'_i)$: Step~1 and Step~2 isotope $\alpha_i$ towards and then across $D_z=f(D_{z_0})$, as in Lemma~\ref{lem:slide-w-disk-across-D-z}. After these first two steps of the isotopy, the union of the resulting new arc $\alpha^2_i$ with the original $\beta_i$ admits a clean Whitney disk $W^2_i$, and replacing $W_i$ by $W^2_i$ in $\mathcal{W}$ yields a clean collection $\mathcal{W}^2$ such that $f_{\mathcal{W}^2}$ is isotopic to $f_\mathcal{W}$ by Lemma~\ref{lem:slide-w-disk-across-D-z}. Step~3 then uses the Whitney disk sliding operation of Section~\ref{sec:w-disk-slides} to push $\alpha^2_i$ across all the $\alpha_j$ and $\beta_j$ Whitney arcs of the Whitney disks $W_j$ for $j\neq i$ by sliding $W^2_i$ twice over each of these Whitney disks (once each for $\alpha_j$ and $\beta_j$). Taking the resulting Whitney disk $W_i^3$ as a replacement for $W^2_i$ in $\mathcal{W}^2$ yields $\mathcal{W}^3$, with $f_{\mathcal{W}^3}$ isotopic to $f_{\mathcal{W}^2}$ by Lemma~\ref{lem:w-disk-slide-isotopy}. Finally, Steps~4 and~5 isotope a collar of $W_i^3$ around the $2$-sphere until the Whitney disk boundary arc ends up as the band sum $\alpha'_i$ of the original $\alpha_i$ with the boundary of a Whitney bubble over $\beta_i$. This 5-step construction yields $\mathcal{W}^5$ with $W^5_i\in\mathcal{W}^5$ having boundary $\alpha'_i\cup\beta_i$ and $f_\mathcal{W}$ isotopic to $f_{\mathcal{W}^5}$. Now form $\mathcal{W}'$ from $\mathcal{W}^5$ by replacing the Whitney disk $W^5_i$ resulting from this construction with the Whitney disk $W'_i$ gotten by sliding $\alpha_i$ across $\beta_i$, which has the same boundary. By Lemma~\ref{lem:choice-of-disks} we get that $f_\mathcal{W}$ is isotopic to $f_{\mathcal{W}'}$.
\end{proof}

We come to our most useful geometric result for $f:S^2\looparrowright M$ with geometric dual $G$:

\begin{prop}\label{prop:common-arcs-w-moves}
If $\mathcal{W}$ and $\mathcal{W}'$ are clean collections of Whitney disks on $f$ such that for each $i$, $W_i\in\mathcal{W}$ and $W_i'\in\mathcal{W}'$ share at least one common Whitney arc $\beta_i = \beta_i'$, then $f_\mathcal{W}$ is isotopic to~$f_{\mathcal{W}'}$.
\end{prop}

\begin{proof}
We first prove the simplest case of the statement: If $W$ and $W'$ are Whitney disks on $f$ which share a common Whitney arc $\beta=\beta'$, then $f_W$ is isotopic to $f_{W'}$. The proof will proceed as in the setting of Lemma~\ref{lem:W-move-equals-surgery-on-tube-cap}, but because here we have two Whitney disks with possibly $\alpha\neq\alpha'$, we may need to apply the sliding maneuver of Section~\ref{sec:w-disk-slides} to create a tube-guiding arc to $z$ for cleaning up the meridional cap. Let $F$ be the surface resulting from tubing $f$ to itself along the common Whitney arc $\beta=\beta'$ of $\partial W$ and $\partial W'$. Deleting small boundary collars of $W$ and $W'$ near $\beta$ yields caps $c_W$ and $c_{W'}$ for $F$ as in Figure~\ref{fig:tube-caps-1}, but with $\partial c_{W'}$ wandering off into the ``horizontal'' part of $F$ corresponding to $\alpha'\neq\alpha$. By Lemma~\ref{lem:W-move-equals-surgery-on-tube-cap}, $F_{c_W}$ is isotopic to $f_W$, and $F_{c_{W'}}$ is isotopic to $f_{W'}$. As in the setting of Lemma~\ref{lem:W-move-equals-surgery-on-tube-cap}, we want to construct a cap $c_G$ for $F$ such that $c_G$ is dual to both $c_W$ and $c_{W'}$. Then by Lemma~\ref{lem:W-move-equals-surgery-on-tube-cap} it will follow that each of $f_W$ and $f_{W'}$ is isotopic to $F_{c_G}$.
The construction of $c_G$ starts as in Figure~\ref{fig:tube-caps-2}: We want to clean up a meridional disk $d$ to $F$, which has a single transverse intersection $r=d\pitchfork F\in\beta$ and $\partial d\subset F$, by tubing $d$ into $G$. But now we have to find an embedded path from $r$ to $z=G\cap F$ that is disjoint from \emph{both} $\partial c_W$ and $\partial c_{W'}$. If $r$ and $z$ lie in the same connected component of $F\smallsetminus(\partial c_W\cup \partial c_{W'})$, then there is no problem. We can eliminate $r$ by tubing $d$ into $G$ along an embedded path in $F$ running from $r$ to a point near $z$ where a parallel copy of $G$ intersects $F$, and the resulting cap $c_G$ for $F$ is dual to both $c_W$ and $c_{W'}$.

\begin{figure}[ht!]
\centerline{\includegraphics[scale=.19]{alpha-slides-horizontal.pdf}}
\caption{In the domain $S^2$: The case of one pair of Whitney disks, with $f(z_0)=z=f\cap G$. Slides are done in the order starting closest to $f(b)=\beta=\beta'=f(b')$. Here $a$ and $a'$ are the preimages of $\alpha$ and $\alpha'$, respectively.}
\label{fig:alpha-slides}
\end{figure}

Now consider the case that $r$ and $z=G\cap F$ do not lie in the same connected component of $F\smallsetminus(\partial c_W\cup \partial c_{W'})$, and observe that this means that $\beta$ and $z$ do not lie in the same component of the complement in $f$ of the immersed loop $\alpha\cup \alpha'$ (see the left side of Figure~\ref{fig:alpha-slides} for the preimage).
In this case we can modify the original Whitney disk $W'$, before constructing $F$, using the sliding maneuver of Section~\ref{sec:w-disk-slides} to arrange that $\beta$ and $z$ do lie in the same component of $f\smallsetminus(\alpha\cup \alpha')$: Since $f \smallsetminus \partial W$ is connected, there is an embedded arc $\gamma$ from $z$ to $r\in\beta'=\beta$ such that $\gamma$ is disjoint from $\alpha$ (the preimage of $\gamma$ is the dashed blue arc in Figure~\ref{fig:alpha-slides}). Eliminate the intersections between $\gamma$ and $\alpha'$ by sliding $W'$ over itself from $\alpha'$ to $\beta'$ guided by $\gamma$, as in Section~\ref{sec:w-disk-slides} (right side of Figure~\ref{fig:alpha-slides}). By Lemma~\ref{lem:w-disk-self-slide} this does not change the isotopy class of $f_{W'}$, and now the construction of the cap $c_G$ for $F$ goes through as desired.

\begin{figure}[ht!]
\centerline{\includegraphics[scale=.2]{multiple-alpha-slides-horizontal.pdf}}
\caption{In the domain $S^2$: The general multiple Whitney disk case of Figure~\ref{fig:alpha-slides}. Again the arcs $\gamma_i$ (dashed blue preimages) guide our slides of the $\alpha'_i$ (preimages $a'_i$) algorithmically, starting closest to the $\beta_i$-arcs (preimages $b_i=b_i'$).}
\label{fig:multiple-alpha-slides}
\end{figure}

For the general statement, apply the same construction to each of the pairs of Whitney disks $W_i$ and $W_i'$ in $\mathcal{W}$ and $\mathcal{W}'$. Start with disjointly embedded arcs $\gamma_i$ in $f \smallsetminus \partial \mathcal{W}$ from the common arcs $\beta_i$ to $z$. The only new complication is that making these arcs disjoint from $\partial\mathcal{W}'$ may involve more Whitney disk slides, as shown in Figure~\ref{fig:multiple-alpha-slides}.
By Lemma~\ref{lem:w-disk-slide-isotopy} and Lemma~\ref{lem:w-disk-self-slide} these slides preserve the isotopy class of $f_{\mathcal{W}'}$, and by applying Lemma~\ref{lem:W-move-equals-surgery-on-tube-cap} to each pair $W_i, W'_i$ we have that $f_\mathcal{W}$ is isotopic to $f_{\mathcal{W}'}$.
\end{proof}

\section{New Proof of Gabai's LBT}\label{sec:proof-of-theorem}

Let $M$ be a smooth orientable $4$--manifold and $f:S^2\looparrowright M$ a generic smooth map with $0=\mu(f)\in\mathbb{Z}[\pi_1M]/\langle g-g^{-1}\rangle$ and with geometric dual $G$. Recall that $\mathcal{R}^G_{[f]}$ denotes the set of isotopy classes of embedded spheres which are homotopic to $f$ and have $G$ as a geometric dual.

\textbf{Outline of our proof of Gabai's LBT:} We will show that $\mathcal{R}^G_{[f]}$ contains a unique element if $\pi_1M$ does not contain 2-torsion. As explained in Section~\ref{subsec:regular-homotopy}, any two embedded spheres in $\mathcal{R}^G_{[f]}$ are related via a finite sequence of isotopies, finger moves and Whitney moves, all away from $G$. By general position it can be arranged that the finger moves occur before the Whitney moves (see e.g.~\cite[Lem.8]{BT}). Denoting the result of the finger moves by $f$, we will consider all possible collections of Whitney disks on $f$ in $M\smallsetminus G$ and show that all the resulting embeddings are isotopic. As a first step, Section~\ref{sec:choices} describes precisely the various types of choices involved in constructing a collection $\mathcal{W}$ of clean Whitney disks on $f$ such that the result $f_\mathcal{W}$ of doing the Whitney moves in $\mathcal{W}$ on $f$ is an embedding. In Sections~\ref{subsec:existence-and-choices-of-disks}--\ref{sec:choice-of-sheets-trivial-element} we prove that the isotopy class of $f_\mathcal{W}$ does not depend on any of these choices.
\subsection{Choices of sheets, pairings, W-arcs and W-disks}\label{sec:choices}

We'll discuss the four types of choices $\textsf{C}_{\textrm{sheets}}$, $\textsf{C}_{\textrm{pairings}}$, $\textsf{C}_{\textrm{W-arcs}}$ and $\textsf{C}_{\textrm{W-disks}}$ that determine a clean collection $\mathcal{W}$ of Whitney disks on $f:S^2\looparrowright M$ and hence a generic homotopy from $f$ to an embedding $f_\mathcal{W}$ (with geometric dual $G$). In the following, each step will depend on having made all previous choices. Moreover, each later choice lets us reconstruct the previous choices. Denote the set of transverse self-intersections of $f$ by $\{p_1,\dots,p_{2n}\}\subset f(S^2)$, where the ordering of the $p_i$ is an artifact of the notation and will never be used; and fix a whisker for $f$ from the basepoint of $M$. (The condition $0=\mu(f)\in\mathbb{Z}[\pi_1M]/\langle g-g^{-1}\rangle$ implies that $f$ has an even number of self-intersections.)

\begin{enumerate}
\item[$\textsf{C}_{\textrm{sheets}}$:] \label{item:choice-of-sheets} A \emph{choice of sheets} $\{x_1,\dots, x_{2n}\}\in\textsf{C}_{\textrm{sheets}}$ consists of choices $x_i\in f^{-1}(p_i)\subset S^2$, subject to the following requirement: By Section~\ref{sec:clean-w-disks-moves}, each $x_i$ orients a double point loop at $p_i$ by the convention that the loop is the image of a path {\it starting from} $x_i$. Via the whisker for $f$ we get a well-defined group element $g(x_i)\in\pi_1M$. Then our choice of sheets is required to satisfy
\begin{equation}\tag{$*$}
0=\sum_{i=1}^{2n}\,\epsilon_i\cdot g(x_i)\in\mathbb{Z}[\pi_1M], \text{ where $\epsilon_i\in\{\pm 1\}$ is the sign of $p_i$.}
\end{equation}
A different choice of whisker for $f$ would change each $g(x_i)$ to a conjugate $g(x_i)^h$ for some fixed $h\in \pi_1M$, hence our requirement $(*)$ is independent of the whisker.
Moreover, switching the preimage choice $x_i$ at $p_i$ has the effect of inverting the group element $g(x_i)$, so choices of sheets exist since $0=\mu(f)\in\mathbb{Z}[\pi_1M]/\langle g-g^{-1} \rangle$. \item[$\textsf{C}_{\textrm{pairings}}$:] \label{item:choice-of-pairings} For $\{x_1,\dots, x_{2n}\}\in\textsf{C}_{\textrm{sheets}}$, a \emph{compatible choice of pairings} $\{x_1^\pm,\dots, x_n^\pm\}\in\textsf{C}_{\textrm{pairings}}$ consists of $n$ distinct pairs $x_i^\pm:=(x^+_i,x^-_i)=(x_{j_i}, x_{k_i})$ with $\epsilon_{j_i}=+1=-\epsilon_{k_i}$ and $g(x_{j_i})=g(x_{k_i})$. A choice of pairings exists by our requirement $(*)$ on $\{x_1,\dots, x_{2n}\}$ and it induces pairings $(p_i^+, p_i^-)$ of the self-intersections of $f$. \item[$\textsf{C}_{\textrm{W-arcs}}$:] \label{item:choice-of-arcs} For $\{x_1^\pm,\dots, x_n^\pm\}\in\textsf{C}_{\textrm{pairings}}$, a \emph{compatible choice of Whitney arcs} $\{\alpha_1,\beta_1, \dots, \alpha_n, \beta_n\}\in\textsf{C}_{\textrm{W-arcs}}$ consists of the images under $f$ of disjointly embedded arcs $a_i \subset S^2$ joining $x^+_i$ and $x^-_i$, and arcs $b_i \subset S^2$ joining $y^+_i$ and $y^-_i$ for $i=1,\dots, n$, where $f^{-1}(p^\pm_i) = \{x^\pm_i, y^\pm_i\}$. Here $\alpha_i:= f(a_i)$ and $\beta_i:=f(b_i)$ are disjoint, except that $\partial \alpha_i = \{p_i^+, p_i^-\} = \partial \beta_i$. Note that $\alpha_i\subset f(S^2)$ determines $a_i\subset S^2$ and hence the original choice of pairings is determined by $\{\alpha_1,\dots,\alpha_n\}$ alone. A choice of Whitney arcs always exists since the complement in $S^2$ of finitely many disjointly embedded arcs and points is connected.
\item[$\textsf{C}_{\textrm{W-disks}}$:] \label{item:choice-of-disks} Given a choice of Whitney arcs $\{\alpha_1,\beta_1, \dots, \alpha_n, \beta_n\}\in\textsf{C}_{\textrm{W-arcs}}$, a \emph{compatible choice of Whitney disks} $\{W_1,\dots, W_n\}\in \textsf{C}_{\textrm{W-disks}}$ is a clean collection of Whitney disks $W_i$ whose boundaries are equal to the circles $\alpha_i\cup \beta_i\subset M$. Recall that \emph{clean} means the $W_i$ are framed, disjointly embedded, have interiors disjoint from $f$, and are disjoint from $G$. The existence of a choice of Whitney disks for any choice of Whitney arcs follows from Lemma~\ref{lem:choice-of-disks-exists}. To reconstruct $\alpha_i$ from $W_i:D^2\hookrightarrow M$, we also require that $\alpha_i = W_i(S^1_-)$, where $S^1_-\subset S^1=\partial D^2\subset D^2\subset \mathbb{R}^2$ is the lower semi-circle. \end{enumerate} In the following, we will abbreviate our choices by \[ {\sf{x}}:=\{x_1,\dots, x_{2n}\}, \,\,{\sf{x}}^\pm:=\{x^\pm_1,\dots, x^\pm_{n}\},\,\, \mathcal{A}:=\{\alpha_1,\beta_1, \dots, \alpha_n, \beta_n\} \text{ and } \mathcal{W}:=\{W_1,\dots, W_{n}\}. \] The meaning of $\partial \mathcal{W} = \mathcal{A}$ should be clear from our conventions. The embedded sphere obtained from $f$ by doing Whitney moves guided by the Whitney disks in $\mathcal{W}$ is denoted $f_\mathcal{W}$.
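To make the requirement $(*)$ and the notion of a compatible pairing concrete, here is a minimal worked example (our own illustration, with hypothetical group elements $g,h\in\pi_1M$, not taken from the text):

```latex
% A map f with 2n = 4 self-intersections, with signs and group elements
%   (\epsilon_1, g(x_1)) = (+1, g),  (\epsilon_2, g(x_2)) = (-1, g),
%   (\epsilon_3, g(x_3)) = (+1, h),  (\epsilon_4, g(x_4)) = (-1, h),
% satisfies the requirement (*):
\[
\sum_{i=1}^{4} \epsilon_i \cdot g(x_i) \;=\; g - g + h - h \;=\; 0
\;\in\; \mathbb{Z}[\pi_1 M].
\]
% A compatible choice of pairings matches equal group elements with
% opposite signs:
\[
x_1^\pm := (x_1, x_2), \qquad x_2^\pm := (x_3, x_4).
\]
% Note that switching only the single preimage x_1 to y_1 would replace the
% first term by g^{-1}, so (*) would fail whenever g^2 \neq 1: such switches
% must be accompanied by compensating switches at other double points.
```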
\subsection{Existence and choices of Whitney disks}\label{subsec:existence-and-choices-of-disks} For future reference we observe here that the existence of a compatible $\mathcal{W}\in\textsf{C}_{\textrm{W-disks}}$ for any given $\mathcal{A}\in\textsf{C}_{\textrm{W-arcs}}$ guaranteed by Lemma~\ref{lem:choice-of-disks-exists}, together with the definitions of pairing choices and sheet choices in Section~\ref{sec:choices}, implies the following: \begin{lem}\label{lem:disks-exist-for-all-choices} Given ${\sf{x}}^\pm\in\textsf{C}_{\textrm{pairings}}$, there exists $\mathcal{W}\in\textsf{C}_{\textrm{W-disks}}$ compatible with ${\sf{x}}^\pm$. Given ${\sf{x}}\in\textsf{C}_{\textrm{sheets}}$, there exists $\mathcal{W}\in\textsf{C}_{\textrm{W-disks}}$ compatible with ${\sf{x}}$. $\square$ \end{lem} By Lemma~\ref{lem:choice-of-disks}, the isotopy class of $f_\mathcal{W}$ is independent of the interiors of the Whitney disks in $\mathcal{W}$, i.e.\ $f_\mathcal{W}$ only depends on $\mathcal{A}$. We next introduce Norman spheres, which will play a key role in showing that the isotopy class of $f_\mathcal{W}$ is also independent of the choices of arcs and pairings for any given sheet choice. \subsection{Norman spheres}\label{sec:norman} Fix a choice of sheets ${\sf{x}}=\{x_1,x_2,\ldots,x_{2n}\}\in\textsf{C}_{\textrm{sheets}}$ for $f$. We need yet another type of choice to define a Norman sphere (whose isotopy class will ultimately only depend on ${\sf{x}}$). Recall that $D_z\subset f$ denotes a small disk around $z=f\cap G$ such that each point in $D_z$ intersects a parallel copy of $G$ which is geometrically dual to $f$.
\begin{enumerate} \item[$\textsf{C}_{\textrm{N-arcs}}$:] \label{item:choice-of-N-arcs} A {\it compatible choice of Norman arcs} $\mathcal{Z}:=\{\sigma_1,\dots,\sigma_{2n}\} \in \textsf{C}_{\textrm{N-arcs}}$ for ${\sf{x}}\in\textsf{C}_{\textrm{sheets}}$ consists of the images under $f$ of disjointly embedded arcs $s_i\subset S^2$ starting at $x_i$ and ending in $f^{-1}(\partial D_z)$. Then $\sigma_i:=f(s_i)\subset f(S^2)$ are disjointly embedded arcs starting at $p_i$ and ending in $\partial D_z$; they determine the arcs $s_i$ uniquely. \end{enumerate} \begin{defn} \label{def:Norman} The \emph{Norman sphere} $f^G_\mathcal{Z} :S^2\hookrightarrow M$ is obtained from $f$, $G$ and $\mathcal{Z}$ by eliminating all the self-intersections $p_i\in f\pitchfork f$ by tubing $f$ into parallel copies of $G$ along the $\sigma_i$. Precisely, these tubing operations replace the image of a small disk around each $y_i\in S^2$ by a normal tube along $\sigma_i$ together with a parallel copy $G_i$ of $G$ with a small normal disk to $f$ removed at $G_i\cap f$. Here $f^{-1}(p_i)=\{x_i,y_i\}$ with $x_i\in{\sf{x}}$, and the $y_i$-sheet of $f$ at $p_i$ is deleted by the tubing operation since the $y_i$-sheet is normal to $\sigma_i$ at $p_i$. \end{defn} By construction, the Norman sphere $f^G_\mathcal{Z}$ is embedded and has $G$ as a geometric dual. Also, $f^G_\mathcal{Z}$ is homotopic to $f$ since the copies of $G$ in the connected sum with $f$ come in oppositely oriented pairs having the same group element, by our requirement $(*)$ in Section~\ref{sec:choices} on the sheet choice ${\sf{x}}$. Hence $f^G_\mathcal{Z}\in\mathcal{R}^G_{[f]}$. Surprisingly, we will show in Lemma~\ref{lem:Norman-independence-of-arcs} that the isotopy class of $f^G_\mathcal{Z}$ only depends on ${\sf{x}}$ and not at all on $\mathcal{Z}$.
We remark that the $\sigma_i$ are as in \cite{Gab}, where they are the simplest of the three types of arcs used by Gabai. The $\sigma_i$ in \cite{Gab} are allowed to intersect, but here we require them to be disjointly embedded. \begin{lem}\label{lem:Norman-for-every-Whitney} For any given choice of sheets ${\sf{x}}$, if $\mathcal{W}$ is an ${\sf{x}}$-compatible choice of Whitney disks then there is an ${\sf{x}}$-compatible choice of Norman arcs $\mathcal{Z}$ such that $f^G_\mathcal{Z}$ is isotopic to $f_\mathcal{W}$. \end{lem} \begin{proof} We apply the first step in the proof of Lemma~\ref{lem:slide-w-disk-across-D-z} simultaneously to all $\alpha_i$: Let $\mathcal{A}:=\partial\mathcal{W}$ be the Whitney arcs, and let ${\sf{x}}^\pm$ be the choice of pairings determined by $\mathcal{A}$. To construct the Norman arcs $\mathcal{Z}$, isotope the Whitney arcs $\alpha_i$ just across $\partial D_z$ and extend this isotopy to an isotopy of $W_i$ in a collar on $\alpha_i$; see Figure~\ref{fig:Norman-arcs-1}, where $D_{z_0}:=f^{-1}(D_z)$. This can be done keeping the $\alpha_i$ disjoint from each other and from all $\beta_j$. Deleting the part of the new $\alpha_i$ that lies in the interior of $D_z$ gives two arcs $\sigma_i^\pm$ which start at $x_i^\pm$ and end in $\partial D_z$. \begin{figure}[ht!] \centerline{\includegraphics[scale=.175]{Norman-arcs-1.pdf}} \caption{The preimages $a_i$ and $a_j$ of arcs $\alpha_i$ and $\alpha_j$ after the isotopy.} \label{fig:Norman-arcs-1} \end{figure} Define $\mathcal{Z}:=\{\sigma_1^-,\sigma_1^+,\dots, \sigma_n^-,\sigma_n^+\}$ and observe that since the corresponding copies $G_i^\pm$ of $G$ are oppositely oriented, the Norman sphere $f^G_\mathcal{Z}$ is isotopic to the result of tubing $f$ to itself along each $\alpha_i\subset\partial W_i$ and then surgering a meridional cap dual to $\alpha_i$ that has been tubed into $G$ as in Figure~\ref{fig:tube-caps-2}.
So $f^G_\mathcal{Z}$ is isotopic to $f_\mathcal{W}$ by Lemma~\ref{lem:W-move-equals-surgery-on-tube-cap}. \end{proof} In the proofs of the next two lemmas we describe isotopies of Norman spheres using homotopies of Norman arcs, by requiring that the radii of the tubes are not equal at any temporarily-created intersection between Norman arcs during a homotopy. Following Gabai, we indicate the tube of smaller radius as an under-crossing of the corresponding Norman arc. \begin{lem}[Lemma~5.11(ii) of \cite{Gab}]\label{lem:Norman-cyclic-ordering} Given any $\mathcal{Z}'\in\textsf{C}_{\textrm{N-arcs}}$ and points $z_1,\dots,z_{2n} \in \partial D_z$, there is a choice of Norman arcs $\mathcal{Z}=\{\sigma_1,\dots,\sigma_{2n}\}$, compatible with the same ${\sf{x}}\in\textsf{C}_{\textrm{sheets}}$ as $\mathcal{Z}'$, such that $\sigma_i$ ends in $z_i$ and the Norman spheres $f^G_{\mathcal{Z}'}$ and $f^G_\mathcal{Z}$ are isotopic. \end{lem} \begin{proof} It suffices to observe that neighboring $z_i$ and $z_j$ in $\partial D_z$ can be exchanged by pushing the tube around $\sigma_j$ across (and inside) the tube around $\sigma_i$, as in Figure~\ref{fig:Norman-arcs-re-order-1} and Figure~\ref{fig:Norman-arcs-re-order-2}. \end{proof} \begin{figure}[ht!] \centerline{\includegraphics[scale=.15]{Norman-arcs-re-order-1.pdf}} \caption{The indicated homotopy of $s_i$ and $s_j$ corresponds to an isotopy of Norman spheres which slides the tube around $\sigma_j$ inside of the tube around $\sigma_i$. See Figure~\ref{fig:Norman-arcs-re-order-2}.} \label{fig:Norman-arcs-re-order-1} \end{figure} \begin{figure}[ht!] \centerline{\includegraphics[scale=.52]{Norman-arcs-re-order-2-small.pdf}} \caption{The image of the third-from-left picture in Figure~\ref{fig:Norman-arcs-re-order-1}.
Here the smaller radius of the tube around $\sigma_j$ compared to the tube around $\sigma_i$ corresponds to $s_j$ crossing under $s_i$ in Figure~\ref{fig:Norman-arcs-re-order-1}.} \label{fig:Norman-arcs-re-order-2} \end{figure} \begin{lem}\label{lem:Norman-independence-of-arcs} If two choices of Norman arcs $\mathcal{Z}, \mathcal{Z}'\in\textsf{C}_{\textrm{N-arcs}}$ are compatible with the same ${\sf{x}}\in\textsf{C}_{\textrm{sheets}}$ then the Norman spheres $f^G_\mathcal{Z}$ and $f^G_{\mathcal{Z}'}$ are isotopic. \end{lem} As a consequence, we get a Norman sphere $f^G_\mathcal{Z} =: f^G_{\sf{x}}\in \mathcal{R}^G_{[f]}$ for a given choice of sheets ${\sf{x}}$. \begin{proof} Let ${\sf{x}}^\pm$ be any compatible choice of pairings for ${\sf{x}}$. By Lemma~\ref{lem:Norman-cyclic-ordering} we may assume that $\mathcal{Z}=\{\sigma_1^-,\sigma_1^+,\dots, \sigma_n^-,\sigma_n^+\}$ induces the cyclic ordering $(z_1^-,z_1^+,z_2^-,z_2^+,\ldots,z_n^-,z_n^+)$ in $\partial D_z$, where $z_i^\pm$ is the end-point of $\sigma^\pm_i$. We will first construct a choice of Whitney disks $\mathcal{W}$ for $f$ such that $f^G_\mathcal{Z}$ is isotopic to $f_\mathcal{W}$, by performing essentially the inverse of the steps in the proof of Lemma~\ref{lem:Norman-for-every-Whitney}. For each $i$, denote by $\alpha_i$ the union of the embedded arcs $\sigma_i^-$ and $\sigma_i^+$ together with a short arc in $\partial D_z$ that runs between $z_i^+$ and $z_i^-$. These $\alpha_i$ then form one half of a collection of Whitney arcs for the choice of pairings ${\sf{x}}^\pm$. Choose appropriate arcs $\beta_i$ to complete the half collection $\alpha_i$ to an ${\sf{x}}^\pm$-compatible choice of Whitney arcs $\mathcal{A}=\{\alpha_1,\beta_1,\dots,\alpha_n,\beta_n\}$. By Lemma~\ref{lem:choice-of-disks-exists} there exists a collection $\mathcal{W}\in\textsf{C}_{\textrm{W-disks}}$ with boundary $\mathcal{A}$.
It follows that $f^G_\mathcal{Z}$ is isotopic to $f_\mathcal{W}$ by Lemma~\ref{lem:W-move-equals-surgery-on-tube-cap}, since $f^G_\mathcal{Z}$ is isotopic to the result of surgering the capped surface formed by tubing $f$ along the $\alpha_i$ arcs, as observed in the proof of Lemma~\ref{lem:Norman-for-every-Whitney}. By Lemma~\ref{lem:Norman-cyclic-ordering} we may assume that $\mathcal{Z}'=\{{\sigma'_1}^-,{\sigma'_1}^+,\dots, {\sigma'_n}^-,{\sigma'_n}^+\}$ induces the same cyclically ordered points $(z_1^-,z_1^+,z_2^-,z_2^+,\ldots,z_n^-,z_n^+)$ in $\partial D_z$ as $\mathcal{Z}$, with $z_i^\pm$ the end-point of ${\sigma'_i}^\pm$. For each $i$, denote by $\alpha'_i$ the union of the embedded arcs ${\sigma'_i}^-$ and ${\sigma'_i}^+$ together with a short arc in $\partial D_z$ that runs between $z_i^+$ and $z_i^-$. These $\alpha'_i$ form a half collection of Whitney arcs for the choice of pairings ${\sf{x}}^\pm$. Now pause to observe that if each $\alpha'_i$ is disjoint from all the previously chosen $\beta_j$, then the unions $\alpha'_i\cup\beta_i$ are Whitney circles for a clean collection $\mathcal{W}'$ of Whitney disks on $f$ by Lemma~\ref{lem:choice-of-disks-exists}, and $f^G_\mathcal{Z}$ is isotopic to $f^G_{\mathcal{Z}'}$. This is because the collections $\mathcal{W}$ and $\mathcal{W}'$ share the common $\beta_i$-arcs, so $f_\mathcal{W}$ would be isotopic to $f_{\mathcal{W}'}$ by Proposition~\ref{prop:common-arcs-w-moves}. Then, analogously to the above argument that $f^G_{\mathcal{Z}}$ is isotopic to $f_{\mathcal{W}}$, we have that $f^G_{\mathcal{Z}'}$ is isotopic to $f_{\mathcal{W}'}$, and hence $f^G_{\mathcal{Z}'} =f_{\mathcal{W}'}=f_\mathcal{W}=f^G_{\mathcal{Z}}\in\mathcal{R}^G_{[f]}$, completing the proof. So it just remains to arrange that $\alpha'_i\cap\beta_j=\emptyset$ for all $i,j$.
Since the $\alpha'_i$ are constructed from ${\sigma'_i}^\pm$ by adding short arcs in $\partial D_z$, it suffices to show that we may push all the ${\sigma'_i}^\pm$ off all the $\beta_j$ in a way that corresponds to an isotopy of the Norman sphere $f^G_{\mathcal{Z}'}$. It will be convenient to describe this pushing-off construction in the domain of $f$, so we want to get ${s'_i}^\pm\cap b_j=\emptyset$, where $b_j\subset S^2$ is an embedded arc from $y_j^-$ to $y_j^+$ with $f(b_j)=\beta_j$, and ${s'_i}^\pm\subset S^2$ goes from $x_i^\pm$ to $f^{-1}(z_i^\pm)$ with $f({s'_i}^\pm)={\sigma'_i}^\pm$. Our construction will work with one $b_j$ at a time, removing intersections with all ${s'_i}^\pm$ in a way that does not create new intersections in any previously cleaned-up $b_k$. This will be accomplished by describing an isotopy of the Norman sphere tubes induced by pushing (as needed) each ${s'_i}^\pm$ across the endpoints $y_j^\pm$ of $b_j$, using the fact that a disk around $y_j^\pm$ maps to a disk in the Norman sphere consisting of a tube along $\sigma_j^\pm$ into $G^\pm_j$. As observed by Gabai \cite[Rem.~5.10]{Gab}, in the case $i=j$ we are \emph{not} able to push ${s'_j}^\pm$ across $y_j^\pm$, but we \emph{are} able to push ${s'_j}^\pm$ across the opposite-signed $y_j^\mp$. This is similar to the fact that a handle cannot be slid over itself. Consider first the case where some $b_j$ only has intersections with a single ${s'_i}^\pm$ (Figure~\ref{fig:beta-push-off-1} left). If $i\neq j$ then these intersections can all be eliminated by an isotopy of ${s'_i}^\pm$ across $y_j^+$ (Figure~\ref{fig:beta-push-off-1} right). If $i= j$ then $b_j\cap {s'_j}^\pm$ can be eliminated by an isotopy of ${s'_j}^\pm$ across the oppositely-signed $y_j^\mp$. These isotopies pushing ${s'_j}^\pm$ off $b_j$ can be done without creating any intersections among the parallel strands of ${s'_j}^\pm$. \begin{figure}[ht!]
\centerline{\includegraphics[scale=.22]{beta-push-off-1-prime.pdf}} \caption{For $i\neq j$ all strands of ${s'_i}^\pm$ can be pushed off $b_j$ across $y_j^+$.} \label{fig:beta-push-off-1} \end{figure} Next consider the case where $b_j$ intersects only the two arcs ${s'_j}^+$ and ${s'_j}^-$, each in a single point $r^+=b_j\cap {s'_j}^+$ and $r^-=b_j\cap {s'_j}^-$. If $r^\pm$ is adjacent to $y_j^\mp$ in $b_j$, then each $r^\pm$ can be eliminated as in the previous case by pushing ${s'_j}^\pm$ across $y_j^\mp$. If $r^\pm$ is adjacent to $y_j^\pm$ in $b_j$, then first eliminate $r^-$ by pushing ${s'_j}^-$ across $y_j^+$ and under ${s'_j}^+$, as in Figure~\ref{fig:beta-push-off-2} left. Then eliminate $r^+$ by pushing ${s'_j}^+$ across $y_j^-$ and over ${s'_j}^-$, as in Figure~\ref{fig:beta-push-off-2} center. At this point we have $b_j\cap {s'_j}^\pm=\emptyset$, but ${s'_j}^+$ intersects ${s'_j}^-$ in two points $q$ and $q'$. Each of $q$ and $q'$ can be eliminated by pushing ${s'_j}^-$ along ${s'_j}^+$ and across (under) $x_j^+$ as in Figure~\ref{fig:beta-push-off-2} right, since the tube around $\sigma_j^-$ has a smaller radius. Note that the pushing of ${s'_j}^-$ along ${s'_j}^+$ will create new intersections between ${s'_j}^-$ and any other $b_k$ with $k\neq j$ that intersected ${s'_j}^+$ along the strand of the original ${s'_j}^+$ between $x_j^+$ and $r^+$. But such new intersections are only created in a $b_k$ that has yet to be cleaned up. \begin{figure}[ht!] \centerline{\includegraphics[scale=.2]{beta-push-off-2-prime.pdf}} \caption{} \label{fig:beta-push-off-2} \end{figure} The construction of the previous paragraph can be adapted to the general case where $b_j$ intersects arbitrary strands of ${s'_i}^\pm$ for arbitrary $i$ as follows. (Picture the ${s'_j}^\pm$-arcs in Figure~\ref{fig:beta-push-off-2} as two among several parallel collections of strands.)
First simultaneously push all strands of ${s'_j}^-$ and all strands of any other ${s'_i}^\pm$ with $i\neq j$ under any and all strands of ${s'_j}^+$ and across $y_j^+$. This can be done in parallel, without creating any intersections among the strands that are being isotoped. Then simultaneously push any and all strands of ${s'_j}^+$ over all other strands and across $y_j^-$. This can be done in parallel, so that the only resulting intersections between $s'$-arcs are where ${s'_j}^+$ passes over other strands. At this point $b_j$ is disjoint from all ${s'_i}^\pm$, and the intersections among $s'$-arcs can all be eliminated by pushing the under-crossing arcs along ${s'_j}^+$ across (under) $x_j^+$. \end{proof} \subsection{Independence of pairings and Whitney arcs}\label{sec:choices-of-arcs-and-pairings} From Lemmas~\ref{lem:Norman-for-every-Whitney} and~\ref{lem:Norman-independence-of-arcs} we get: \begin{cor}\label{cor:w-disk-independence-of-arcs-and-pairings} If two choices of Whitney disks $\mathcal{W}, \mathcal{W}'\in\textsf{C}_{\textrm{W-disks}}$ are each compatible with the same choice of sheets ${\sf{x}}\in\textsf{C}_{\textrm{sheets}}$, then $f_\mathcal{W}$ is isotopic to $f_{\mathcal{W}'}$. In particular, $f_\mathcal{W}\in \mathcal{R}^G_{[f]}$ is independent of ${\sf{x}}$-compatible choices of pairings, Whitney arcs and Whitney disks. $\square$ \end{cor} As a consequence, $f_\mathcal{W}\in \mathcal{R}^G_{[f]}$ only depends on ${\sf{x}}$ and it is safe to write $f_\mathcal{W}=:f_{\sf{x}}\in \mathcal{R}^G_{[f]}$, where the existence of an ${\sf{x}}$-compatible $\mathcal{W}$ is guaranteed by Lemma~\ref{lem:disks-exist-for-all-choices}. By the same lemmas we also see that $f_{\sf{x}}$ is isotopic to the Norman sphere $f^G_{\sf{x}}$, whose isotopy class therefore only depends on ${\sf{x}}$ but not on $G$.
To complete the proof of Gabai's LBT it remains to consider the ${\sf{x}}$-dependence of $f_{\sf{x}}$. \subsection{Double sheet changes}\label{sec:double-sheet-change} Let ${\sf{x}}=\{x_1,\dots,x_{2n}\}\in\textsf{C}_{\textrm{sheets}}$ and recall that $g(x_i)\in \pi_1M$ is represented by a double point loop through $p_i$ which is the image of an oriented arc from $x_i$ to $y_i$, where $f^{-1}(p_i)=\{x_i, y_i\}$. Switching the choice $x_i$ to $y_i$ changes $g(x_i)$ to $g(y_i)=g(x_i)^{-1}$ while keeping the sign $\epsilon_i$ of $p_i$. Changing the whisker for $f$ changes all $g(x_i)$ by a fixed conjugation and also keeps the signs. Assume that for two indices $i,j$ we have $\epsilon_j=-\epsilon_i$ and $g(x_j)=g(x_i)=:g$. Then a different choice of sheets ${\sf{x}}'\in\textsf{C}_{\textrm{sheets}}$ can be defined by replacing $x_i$ by $y_i$ and replacing $x_j$ by $y_j$, since it satisfies our requirement $(*)$ in Section~\ref{sec:choices} with the canceling terms $\epsilon_i\cdot g+\epsilon_j\cdot g=0$ replaced by $\epsilon_i\cdot g^{-1}+\epsilon_j\cdot g^{-1}=0$. We will refer to such a change of sheet choice as a \emph{double sheet change}. \begin{lem}\label{lem:double-sheet-change} If ${\sf{x}},{\sf{x}}'\in\textsf{C}_{\textrm{sheets}}$ differ by a double sheet change, then $f_{{\sf{x}}}=f_{{\sf{x}}'}\in \mathcal{R}^G_{[f]}$. \end{lem} \begin{proof} Let $\{x_i,x_j\}\subset{\sf{x}}$ be the local sheets involved in the double sheet change. There is a choice of pairings ${\sf{x}}^\pm$ compatible with ${\sf{x}}$ such that $x_i=x_1^+$ and $x_j=x_1^-$ (or vice versa). Moreover, by Lemma~\ref{lem:disks-exist-for-all-choices} there is a choice of Whitney disks $\mathcal{W}=\{W_1,\dots,W_n\}$ compatible with ${\sf{x}}^\pm$, i.e.\ $p_i$ and $p_j$ are paired by $W_1$.
Let $\mathcal{W}':=\{W'_1, W_2,\dots,W_n\}$ be the choice of Whitney disks where $W_1'$ differs from $W_1$ only by precomposing with a reflection of the domain $D^2$ across the horizontal diameter. This exchanges the two boundary arcs of $W_1$ but does not change the effect of doing a Whitney move, since $W_1$ and $W_1'$ have the same image in $M$. Now observe that $\mathcal{W}'$ is compatible with ${\sf{x}}'$, and it follows from Corollary~\ref{cor:w-disk-independence-of-arcs-and-pairings} that $f_{\sf{x}}=f_{\mathcal{W}}=f_{\mathcal{W}'}=f_{{\sf{x}}'}\in \mathcal{R}^G_{[f]}$. \end{proof} \subsection{Choice of sheets for double point loops not of order $2$} \label{sec:choice-of-sheets-non-trivial-not-in-T} Consider a sheet choice ${\sf{x}}=\{x_1,\dots,x_{2n}\}\in\textsf{C}_{\textrm{sheets}}$ such that for some $i$ we have $g_i:=g(x_i)\in\pi_1M$ with $g_i^2\neq 1$. If ${\sf{x}}'$ is a different choice of sheets that takes $y_i$ as the preferred preimage instead of $x_i$, then this has the effect of inverting $g_i$. Since $g_i\neq g_i^{-1}$, in order for ${\sf{x}}'$ to satisfy the requirement $(*)$ in Section~\ref{sec:choices} of a choice of sheets, it follows that ${\sf{x}}'$ must also switch some oppositely-signed $x_j$ to $y_j$, where $g(x_j)=g(x_i)$. So ${\sf{x}}'$ differs from ${\sf{x}}$ by at least one double sheet change, and Lemma~\ref{lem:double-sheet-change} applied finitely many times gives: \begin{lem}\label{lem:choice-of-sheets-non-trivial-not-in-T} If choices of sheets ${\sf{x}},{\sf{x}}'\in\textsf{C}_{\textrm{sheets}}$ only differ at self-intersections $p_i$ where the double point loops $g_i$ satisfy $g_i^2\neq 1$, then $f_{\sf{x}}=f_{{\sf{x}}'}\in \mathcal{R}^G_{[f]}$. $\square$ \end{lem} Note that the assumption does not depend on the whisker for $f$.
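The constraint used in the discussion above can be written out explicitly; the following is our own sketch of the routine computation, with $I$ denoting the set of indices where the two sheet choices ${\sf{x}}$ and ${\sf{x}}'$ differ.

```latex
% Both x and x' satisfy (*), and they differ exactly at the indices i \in I,
% where the switch replaces g(x_i) by g(x_i)^{-1}. Subtracting the two
% instances of (*), the unswitched terms cancel, leaving
\[
0 \;=\; \sum_{i \in I} \epsilon_i \left( g(x_i) - g(x_i)^{-1} \right)
\;\in\; \mathbb{Z}[\pi_1 M].
\]
% If g_i^2 \neq 1 then g(x_i) - g(x_i)^{-1} \neq 0, so the switched terms
% must cancel against each other; in particular a switch at a single index
% i with g_i^2 \neq 1 is impossible on its own, which is exactly why every
% such switch comes with a compensating switch as in the text.
```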
\subsection{Choice of sheets for trivial double point loops}\label{sec:choice-of-sheets-trivial-element} Let $p_i$ be a self-intersection of $f$ with trivial group element $1\in\pi_1M$. By the same construction as in the proof of Lemma~\ref{lem:choice-of-disks-exists}, $p_i$ admits a clean \emph{accessory disk} $A_i$, i.e.\ $A_i$ is a framed embedded disk with interior disjoint from $f$ such that the boundary circle $\partial A_i\subset f$ changes sheets just at $p_i$. See \cite[Sec.~7]{ST1} for details on accessory disks. If $p_i^+$ and $p_i^-$ are oppositely signed with trivial group element, then clean Whitney disks for $p_i^\pm$ can be constructed by banding together two clean accessory disks $A_i^\pm$ as in Figure~\ref{fig:accessory-disk-bands-1}, which shows two choices of bands resulting in Whitney disks $W_i$ and $W'_i$ which induce the two possible different sheet choices. These Whitney disks are supported in a neighborhood of the union of the two accessory disks together with a generic disk in $f$ containing the accessory circles $\partial A_i^\pm$. We will show that $W_i$ and $W_i'$ are isotopic via an ambient isotopy supported near one of the accessory disks. Hence $f_{W_i}$ is isotopic to $f_{W_i'}$. \begin{figure}[ht!] \centerline{\includegraphics[scale=.175]{accessory-disk-bands-1.pdf}} \caption{Preimages of Whitney circles for $W_i$ (left) and $W_i'$ (right), formed by banding together accessory disks $A_i^\pm$ in two different ways, with $W_i$ satisfying the sheet choice $\{x_i^-,x_i^+\}$ and $W'_i$ satisfying the sheet choice $\{x_i^-,y_i^+\}$. Applying the rotation isotopy of Lemma~\ref{lem:rotate-B4} to $A_i^+$ interchanges $x_i^+$ and $y_i^+$.} \label{fig:accessory-disk-bands-1} \end{figure} A regular neighborhood of a clean accessory disk is diffeomorphic to a standard model in $4$--space, so we work locally, dropping superscripts and subscripts.
Let $(\Delta,\partial\Delta)\looparrowright (B^4,S^3)$ be a generic $2$--disk with a single self-intersection $p$ which is the result of applying a cusp homotopy \cite[1.6]{FQ} to a standard $(D^2,S^1)\subset (B^4,S^3)$. Then $p$ admits a clean accessory disk $A$, and the following lemma will be proved: \begin{lem}\label{lem:rotate-B4} There is an ambient isotopy $h_s$ of $B^4$ such that \begin{enumerate} \item $h_0$ is the identity, \item $h_1(\Delta\cup A)=\Delta\cup A$, \item $h_1|_A$ is a reflection of $A$ that fixes the double point of $\Delta$ in $\partial A$, and \item $h_1|_{\partial\Delta}$ is a rotation by $180$ degrees. \end{enumerate} \end{lem} Applying Lemma~\ref{lem:rotate-B4} to a $B^4$-neighborhood of $A_i^+$ we see that the two Whitney disks $W_i$ and $W'_i$ in Figure~\ref{fig:accessory-disk-bands-1} are isotopic: Rotating the right accessory arc $\partial A_i^+$ by $180$ degrees drags one band to the other, and hence one Whitney disk to the other. \begin{proof} To prove Lemma~\ref{lem:rotate-B4}, consider $\Delta$ as the trace of a null-homotopy of the Whitehead double of the unknot in $S^3 \times \{0\} =\partial B^4$ which pulls apart the clasp in a collar $S^3\times I\subset B^4$, creating the self-intersection $p$ admitting a clean accessory disk $A$, as in Figure~\ref{fig:accessory-disk-rotation-1}. Define the homotopy $h_s$ of $\Delta$ in the coordinates of Figure~\ref{fig:accessory-disk-rotation-1} to be rotation around the vertical axis by $180\cdot s$ degrees in each slice $S^3\times \{t\}$: $h_s(x,t) = (h_s(x),t)$, with $s$ and $t$ each going from $0$ to $1$.
Extend $h_s$ to $B^4$ by tapering the rotation back to zero inside the collar $(x,u)$, $u\in [1,0]$, of a smaller 4-ball that is the complement of the $S^3 \times I$ already used: $h_s(x,u):=h_{u\cdot s}(x)$, with $s$ and $u$ each going from $1$ to $0$ (and $h_s=$ identity outside this second collar). \begin{figure}[ht!] \centerline{\includegraphics[scale=.2]{accessory-disk-rotation-1.pdf}} \caption{Left: The Whitehead double of the unknot in $S^3$ is the boundary of $\Delta$. Center: The clean accessory disk $A$ for the self-intersection $p$ of $\Delta$ which corresponds to the clasp singularity. Both $\Delta$ and $A$ have $180$ degree rotational symmetry (top views of left and center on upper and lower right).} \label{fig:accessory-disk-rotation-1} \end{figure} \end{proof} By Corollary~\ref{cor:w-disk-independence-of-arcs-and-pairings} we can compute $f_{\sf{x}}=f_\mathcal{W}$ using a collection $\mathcal{W}\in\textsf{C}_{\textrm{W-disks}}$ whose Whitney disks pairing self-intersections with trivial group elements are formed by banding together accessory disks as above. So in combination with Lemma~\ref{lem:choice-of-sheets-non-trivial-not-in-T} we have: \begin{cor}\label{cor:choice-of-sheets-non-involution} If choices of sheets ${\sf{x}},{\sf{x}}'$ only differ at self-intersections $p_i$ whose double point loops don't have order~$2$, then $f_{\sf{x}}=f_{{\sf{x}}'}\in \mathcal{R}^G_{[f]}$. \end{cor} This completes the proof of Gabai's LBT. To prove our main Theorem~\ref{thm:4d-light-bulb} it remains to understand the ${\sf{x}}$-dependence of $f_{\sf{x}}$ in the presence of self-intersections with group elements of order~$2$. In Section~\ref{sec:FQ} and Section~\ref{sec:proof-main} we will show that this dependence is completely controlled by the Freedman--Quinn invariant.
\section{The Freedman--Quinn invariant}\label{sec:FQ} In Section~\ref{sec:3-in-6} we review some relevant aspects of the intersection form on $\pi_3$ of a 6--manifold. In Section~\ref{sec:homotopies} the Freedman--Quinn invariant is defined using the self-intersection invariant applied to the track of a homotopy between spheres in $M^4\times \mathbb{R}$, which is a map of a 3--manifold to a 6--manifold rel boundary. \subsection{3--manifolds in 6--manifolds}\label{sec:3-in-6} Recall that for a smooth oriented 6--manifold $P^6$, the intersection and self-intersection invariants give maps \[ \lambda_3: \pi_3P \times \pi_3P \to \mathbb{Z}\pi_1P \quad \text{ and } \quad \mu_3: \pi_3P \to \mathbb{Z}\pi_1P / \langle g+g^{-1}, 1 \rangle. \] The intersection invariant $\lambda_3$ can be computed geometrically by representing the two homotopy classes by transverse based maps $S^3 \to P$ and counting their intersection points with signs and group elements. Similarly, for the self-intersection invariant $\mu_3$ one represents the homotopy class by a generic based map $a:S^3\looparrowright P$ and counts transverse self-intersections, again with signs and group elements: \[ \mu_3(a):= \Big[\sum_p \epsilon_p \cdot g_p\Big] \in \mathbb{Z}\pi_1P / \langle g+g^{-1}, 1 \rangle. \] We note that in this dimension, switching the choice of sheets at a double point $p$ changes $g_p\in \pi_1P$ to $g_p^{-1}$ (as in dimension 4), but the sign changes from $\epsilon_p$ to $-\epsilon_p$, explaining the relation $g+g^{-1} =0$ in the range of $\mu_3$ (as opposed to $g-g^{-1} =0$ in the range of $\mu_2$ in dimension 4). The relation $1=0$ is important to make $\mu_3(a)$ depend only on the homotopy class of $a$, since a cusp homotopy introduces a double point with arbitrary sign and trivial group element (as in dimension 4).
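The dimension-6 sign rule just described can be condensed into a one-line check (our paraphrase of the preceding sentence):

```latex
% Switching the sheet choice at a double point p of a : S^3 -> P^6 replaces
% the contribution of p to \mu_3(a) by
\[
\epsilon_p \cdot g_p \;\longmapsto\; -\epsilon_p \cdot g_p^{-1},
\]
% so well-definedness of \mu_3 forces
%   \epsilon_p g_p \equiv -\epsilon_p g_p^{-1},
% i.e. the relation g + g^{-1} = 0 in the target. In dimension 4 the sign
% is kept under a sheet switch, which forces g - g^{-1} = 0 instead.
```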
In this dimension we will not be using the ``tilde'' notation for this reduced self-intersection invariant, in the interest of streamlining notation, and the relation $1=0$ will always be assumed in the target of $\mu_3$. Changing the whisker for $a$ changes $\mu_3(a)$ by a conjugation with the corresponding group element. The homotopy invariance of $\mu_3$ follows from the fact that a generic homotopy is isotopic to a sequence of cusps, finger moves and Whitney moves, none of which changes the invariant. Using the involution $\bar g:= g^{-1}$ on $\mathbb{Z}\pi_1P$, the ``quadratic form'' $(\lambda_3, \mu_3)$ satisfies the formulas \begin{equation}\tag{$**$} \mu_3(a+b)= \mu_3(a) + \mu_3(b) +[\lambda_3(a,b)] \quad\text{ and } \quad\lambda_3(a,a) = \mu_3(a) -\overline{\mu_3(a)} \in \mathbb{Z}\pi_1P \end{equation} where the second formula has no content for the coefficient at the trivial element in $\pi_1P$: Since $\lambda_3$ is skew-hermitian, this coefficient vanishes on the left-hand side, whereas it is automatically zero on the right-hand side, which is defined by picking a representative of $\mu_3(a)\in\mathbb{Z}\pi_1P$ and then applying the involution to that specific choice. The case $N=M\times \mathbb{R}$ of the following lemma describes the homomorphism used in Theorem~\ref{thm:4d-light-bulb} and will be used in the definition of the Freedman--Quinn invariant given in Section~\ref{sec:homotopies}. Recall that $T_N$ denotes the 2-torsion in $\pi_1N$. \begin{lem}\label{lem:Wall} If $P^6 = N^5 \times I$, then $\mu_3:\pi_3N \to \mathbb{F}_2T_N\leq \mathbb{Z}\pi_1N / \langle g+g^{-1}, 1 \rangle$ is a homomorphism.
\end{lem}
\begin{proof}
First note that the intersection pairing $\lambda_3$ vanishes identically, since one can represent $a,b\in \pi_3(N \times I)$ disjointly (and hence transversely without intersections) in $N \times 0$ and $N \times 1$ respectively. So from the second formula in $(**)$ above, together with the observation that $\mathbb{F}_2T_N \leq \mathbb{Z}\pi_1N/ \langle g+g^{-1} , 1 \rangle$ is the subgroup generated by $\{ \beta\in \pi_1N \mid \beta = \bar \beta \neq 1\}$, we see that $\mu_3(a)$ lies in $\mathbb{F}_2T_N$. And from the first formula in $(**)$ it follows that $\mu_3:\pi_3N \to \mathbb{F}_2T_N$ is a homomorphism.
\end{proof}
The next lemma will be used in the proof of Corollary~\ref{cor:infinitely-many} given in Section~\ref{sec:infinitely-many}.
\begin{lem}\label{lem:Hurewicz}
$\mu_3$ factors through the Hurewicz homomorphism $\pi_3P\twoheadrightarrow H_3 \widetilde P$.
\end{lem}
\begin{proof}
We will use Whitehead's exact sequence $\Gamma(\pi_2P) \to \pi_3P\twoheadrightarrow H_3 \widetilde P$ from \cite{Wh}, where the first map is induced by the quadratic map $\eta: \pi_2P \to \pi_3P$ given by pre-composition with the Hopf map $h:S^3\to S^2$. The second map is surjective by the Hurewicz theorem applied to the 1-connected space $\widetilde P$. We need to show that
\[
\mu_3(a_3+ \eta(a_2))= \mu_3(a_3) \quad \text{for all } a_i\in\pi_iP.
\]
By the quadratic property of $\mu_3$ given by the first formula in $(**)$, we get
\[
\mu_3(a_3+ \eta(a_2))= \mu_3(a_3) + \mu_3(\eta(a_2)) + [\lambda_3(a_3,\eta(a_2))]
\]
and so we want to show that the last two terms on the right vanish. Representing $a_2$ by an embedding $b_2:S^2 \hookrightarrow P^6$, we see that $\eta(a_2)=b_2 \circ h$ is supported in the image of $b_2$.
As a consequence of working in a 6--manifold, we can find a representative of $a_3$ in the complement of this 2--manifold, and hence the intersection invariant $\lambda_3(a_3,\eta(a_2))$ vanishes. Similarly, there is a generic representative of $\eta(a_2)$ which has support in the normal bundle of $b_2$, a simply-connected 6--manifold. Therefore, $\mu_3(\eta(a_2))=0$ since the trivial group element is divided out in the range of $\mu_3$.
\end{proof}
\begin{rem}\label{rem:homological}
Even though we obtain a map $\mu_3: H_3 \widetilde P\to \mathbb{Z}\pi_1P / \langle g+g^{-1}, 1 \rangle$, it is not clear to us whether $\mu_3$ can be computed in a ``homological way'', i.e.\ without representing homology classes by generic maps and counting double points. This can be done for $\lambda_3$, but the second formula in $(**)$ shows that $\lambda_3(a,a)$ does {\em not} determine $\mu_3(a)$ at group elements of order~$2$.
\end{rem}
\subsection{The self-intersection invariant for homotopies of 2--spheres in 5--manifolds}\label{sec:homotopies}
The above description of $\mu_3$ can also be applied to define self-intersection invariants of properly immersed simply-connected $3$--manifolds in a $6$--manifold. In this setting $\mu_3$ is computed just as above, by summing signed double point group elements, and is invariant under homotopies that restrict to isotopies on the boundary. Now fix a smooth oriented 5--manifold $N$. For any homotopy $H:S^2 \times I \to N^5$ between embedded spheres in $N$ we define the self-intersection invariant of $H$
\[
\mu_3(H)\in\mathbb{Z}\pi_1N / \langle g+g^{-1}, 1 \rangle
\]
to be the self-intersection invariant $\mu_3$ of a generic track $S^2 \times I \looparrowright N^5 \times I$ for $H$ (with fixed boundary and based at the sphere $H_0$).
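The dimension counts underlying this definition are the usual general position ones, recorded here for convenience: for a generic map of a 3--manifold into a 6--manifold,
\[
3+3-6 = 0 \qquad\text{and}\qquad 3+3+3-2\cdot 6 = -3 < 0,
\]
so the double points of a generic track are isolated, there are generically no triple points, and each double point contributes exactly one signed group element to $\mu_3$.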
The invariant $\mu_3(H)$ is independent of the choice of generic track, since any two choices of perturbations making $S^2 \times I \looparrowright N^5 \times I$ generic differ at most by a homotopy rel boundary. Definition~\ref{def:fq} of the Freedman--Quinn invariant below involves the case where $N^5=M^4 \times \mathbb{R}$ and $H_0, H_1$ are embeddings $S^2\hookrightarrow M \times 0$. In this case one has $\mu_3(H)\in \mathbb{F}_2 T_M$, as in Lemma~\ref{lem:Wall}. The next lemma describes, in this case, the dependence of $\mu_3(H)$ on the choice of homotopy $H$.
\begin{lem}\label{lem:self-homotopies}
If $J:S^2 \times I \looparrowright M \times \mathbb{R} \times I$ is a generic track of a based self-homotopy of $R:S^2\hookrightarrow M \times 0$, then $\mu_3(J) \in \mathbb{F}_2 T_M$ lies in the image of the homomorphism $\mu_3: \pi_3M \to \mathbb{F}_2T_M$.
\end{lem}
It follows that for any two based homotopies $H, H': S^2 \times I\to M^4 \times \mathbb{R}$ between embedded spheres $H_0=H'_0$ and $H_1=H'_1$ in $M \times 0$, the difference $\mu_3(H) - \mu_3(H')\in \mathbb{F}_2 T_M$ lies in the image of $\mu_3: \pi_3M \to \mathbb{F}_2T_M$, since stacking the two homotopies gives a based self-homotopy $J=H\cup -H'$ such that $\mu_3(J) = \mu_3(H) - \mu_3(H')$.
\begin{proof}
By assumption, $J$ agrees with the track $R \times I$ of the product self-homotopy on the 2-skeleton $S^2 \times \{0,1\} \cup z_0 \times I$ of $S^2 \times I$. So they only differ on the 3-cell, where $R\times I$ is represented by $R(D^2)\times I$ (here $D^2$ is the complement in $S^2$ of a small disk around $z_0$) and $J$ is represented by a generic $3$--ball $B:D^3\looparrowright (M\times \mathbb{R} \times I) \smallsetminus \nu(z \times I)$.
Here $z$ denotes the image of the basepoint $z_0\in S^2$, and by construction the boundaries of these $3$--balls are parallel copies of an embedded 2--sphere in the boundary of a small neighborhood of $R \times \{0,1\}\cup(z \times I)$. Gluing $B$ and $R(D^2)\times I$ together along a small cylinder $S^2 \times I$ between their boundaries yields a map of a $3$--sphere $b:= B\cup R(D^2) \times I:S^3 \to M \times \mathbb{R} \times I$. To prove the lemma we will show that $\mu_3(J)=\mu_3(b)\in\mu_3(\pi_3(M))$. First note that all contributions to $\mu_3(J)$ come from double point loops in $B$. There are two types of self-intersections that contribute to $\mu_3(b)$, namely the self-intersections of the immersed $3$--ball $B$ and the intersections between $B$ and the embedded $3$--ball $R(D^2)\times I$. Observe that $B\pitchfork R(D^2)\times I=J\pitchfork R\times I$, with the corresponding loops based at $z\in R\times 0$ determining the same group elements contributing to both $\mu_3(b)$ and $\lambda_3(J,R\times I)$. Now note that $\lambda_3(J,R\times I)=0$, since $R\times I\subset M\times 0\times I$ can be made disjoint from a homotopic (rel boundary) copy of $J$ in $M\times 1\times I$. So $B\pitchfork R(D^2)\times I$ contributes trivially to $\mu_3(b)$, and it follows that $\mu_3(b)=\mu_3(J)$ since both are determined by double point loops in $B$.
\end{proof}
\begin{defn}\label{def:fq}
Given embeddings $R,R':S^2\hookrightarrow M^4$ which are based homotopic, their Freedman--Quinn invariant is given by:
\[
\operatorname{fq}(R,R'):= [\mu_3(H)] \in \mathbb{F}_2T_M/\mu_3(\pi_3M)
\]
for any choice of based homotopy $H$ from $R\times 0$ to $R'\times 0$ in $M \times \mathbb{R}$.
\end{defn}
Recall from the beginning of the proof of Lemma~\ref{lem:based} that a common dual for $R$ and $R'$ forces any given homotopy in $M$ to be based, and hence $\operatorname{fq}(R,R')$ is defined for any pair $R,R'\in \mathcal{R}^G_{[f]}$. This definition of $\operatorname{fq}(R,R')$ is independent of the choice of $H$ by Lemma~\ref{lem:self-homotopies}.
\subsection{Computing the Freedman--Quinn invariant}\label{subsec:compute-fq}
We show how to compute $\operatorname{fq}(R,R')$ as a ``difference of sheet choices'' for embedded $2$-spheres $R\times 0$ and $R'\times 0$ in $M \times \mathbb{R}$. Consider a homotopy $H$ given by finger moves on $R$ leading to a middle level $f:S^2\looparrowright M$, followed by Whitney moves on $f$ leading to $R'$. The collection of Whitney disks $\mathcal{W}$ on $f$, inverse to the finger moves, gives $f_\mathcal{W}=R$ and determines a choice of sheets ${\sf{x}}=(x_1,\ldots,x_{2n})$, and the collection of Whitney disks $\mathcal{W}'$ such that $f_{\mathcal{W}'}=R'$ determines a choice of sheets ${\sf{x}}'=(x'_1,\ldots,x'_{2n})$. We will describe an isotopy in $M \times \mathbb{R}$ from $R \times 0$ to $f \times b$, where $b:S^2 \to \mathbb{R}$ will be a sum of bump functions that ``resolves'' the double points in $f$. For simplicity of notation, we'll assume that $f$ is the result of just a single finger move, with ${\sf{x}}=(x_1,x_2)$. First define for each $x\in S^2$ a smooth family of non-negative bump functions $b^x_s:S^2\to \mathbb{R}$ which are supported in a small neighborhood of $x$ and have maximum $b^x_s(x)=s$. There is a homotopy $R_s$, $s\in [0,1]$, describing how the finger grows from $R$ to the self-tangency which introduces an identification of $x,y\in S^2$, where $y$ gives the ``finger tip'' $R_s(y)$ while $R_s(x)$ is fixed for all $s$.
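For concreteness, one possible choice of such a family (the construction only uses the stated properties, so any choice will do) is
\[
b^x_s(u) := s\,\varphi\Big(\frac{d(u,x)}{\varepsilon}\Big),
\qquad
\varphi(r)=
\begin{cases}
\exp\big(1-\tfrac{1}{1-r^2}\big) & r<1,\\
0 & r\geq 1,
\end{cases}
\]
where $d$ is the round metric on $S^2$ and $\varepsilon>0$ is smaller than the distances between the points at which bumps will be needed; then $b^x_s$ is smooth, non-negative, supported in the $\varepsilon$--ball around $x$, depends smoothly on $s$, and has maximum $b^x_s(x)=s$.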
It gives an isotopy $R_s \times b^x_s$ from $R \times 0$ to $R_1 \times b^x_1$, with the self-tangency avoided by the bump $b^x_1$ having lifted the image of the $x$-sheet above what was the tangency point (see Figure~\ref{fig:2d-finger-move-sheet-resolution} left).
\begin{figure}[ht!]
\centerline{\includegraphics[scale=.23]{2d-finger-move-sheet-resolution-1.pdf}}
\caption{A single bump splitting into two, along a finger move.}
\label{fig:2d-finger-move-sheet-resolution}
\end{figure}
We extend this to an isotopy in $M \times \mathbb{R}$ from $R \times 0$ to an embedding $f \times b$: As $R_s$ continues to move towards $f$, the self-tangency splits into two transverse intersection points, and we arrange the single bump $b^x_1$ to split into a sum of two bumps, which finally arrives at $b:=b^{x_1}_1 + b^{x_2}_1$ when the finger move is done, see Figure~\ref{fig:2d-finger-move-sheet-resolution}. Note that in this convention, the chosen sheets $x_i\in S^2$ represent ``over-crossings'' of the embedding $f \times b$. The isotopy class of this embedding does not depend on the particulars of $b$ but only on the choice of sheets ${\sf{x}}$. In the general case of $n$ finger moves, such a $b$ can be defined simultaneously to get a corresponding isotopy. Turning the homotopy $H$ upside down, we can also consider finger moves leading from $R'$ to $f$ which are inverse to the Whitney moves along Whitney disks in $\mathcal{W}'$. Apply the same procedure using the choice of sheets ${\sf{x}}'=(x'_1,\ldots,x'_{2n})$ to get an isotopy in $M\times \mathbb{R}$ from $R' \times 0$ to $f \times b'$. If $x_i=x_i'$ we have $b =b'$ near $x_i$, so these two isotopies can be glued together in that neighborhood.
If $x_i\neq x_i'$ there is a local homotopy $H_i(s) := f \times (b^{x^\prime_i}_{1-s} + b^{x_i}_s)$ that moves $f \times b'$ to locally coincide with $f \times b$ by a ``crossing change'' (see Figure~\ref{fig:2d-sheet-resolution}). $H_i$ has a single double point, where it identifies $(x_i,1/2)$ with $(x_i',1/2)$. The associated group element $g(x_i)\in\pi_1M$ is the one determined by the sheet choice $x_i$ of the double point $f(x_i)$.
\begin{figure}[ht!]
\centerline{\includegraphics[scale=.23]{2d-sheet-resolution-homotopy-1.pdf}}
\caption{Two bumps crossing in a single point during a local homotopy $H_i$.}
\label{fig:2d-sheet-resolution}
\end{figure}
Assembling such local homotopies $H_i$ around all $x_i\neq x_i'$, and then composing with the above isotopies from $R \times 0$ to $f \times b$ and from $f \times b'$ to $R' \times 0$, yields a based homotopy $H_{\mathcal{W},\mathcal{W}'}$. Its isotopy class rel boundary only depends on the sheet choices ${\sf{x}},{\sf{x}}'$ and not on the particulars of the bump functions in the construction.
\begin{lem}\label{lem:compute-mu-by-crossing-changes}
$\mu_3(H_{\mathcal{W},\mathcal{W}'} ) = \sum_i g(x_i) \in \mathbb{F}_2T_M$, where the sum is over those double points $p_i$ of $f$ for which $x_i\neq x_i'$. This sum is therefore a representative for $\operatorname{fq}(R,R')\in\mathbb{F}_2T_M/\mu_3(\pi_3(M))$.
\end{lem}
Recall from Lemma~\ref{lem:Wall} and Section~\ref{sec:homotopies} that the target of $\mu_3(H_{{\sf{x}},{\sf{x}}'} )$ is indeed the subgroup $\mathbb{F}_2T_M$ of $\mathbb{Z}\pi_1M / \langle g+g^{-1}, 1 \rangle$, i.e.\ any $g(x_i)$ with $g(x_i)^2\neq 1$ must contribute trivially (and we don't have to worry about signs).
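As a hypothetical illustration of this count: suppose $H$ consists of a single finger move along an element $t\in T_M$ of order $2$, creating double points $p_1,p_2$ with $g(x_1)=g(x_2)=t$, followed by a Whitney move whose sheet choice differs from that of the inverse Whitney disk only at $p_1$. Then
\[
\mu_3(H_{\mathcal{W},\mathcal{W}'}) = g(x_1) = t,
\qquad\text{so}\qquad
\operatorname{fq}(R,R')=[t]\in \mathbb{F}_2T_M/\mu_3(\pi_3M),
\]
which obstructs an isotopy from $R$ to $R'$ precisely when $t\notin\mu_3(\pi_3M)$.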
\subsection{Singular circles: The origin of the $\operatorname{fq}$ invariant}
\label{sec:singular circles}
The $\operatorname{fq}$-invariant originally appeared in the more general setting of \cite[Chap.10.9]{FQ} as the obstruction to eliminating circles of intersections between the cores of $3$-handles in a $5$-manifold. For the interested reader we briefly explain the connection with singular circles in our setting. The results of this section will not be used in our paper. The singular set of a generic track $S^2\times I\looparrowright M\times I$ of a regular homotopy from $R$ to $R'$ consists of circles which are double-covered by circles in $S^2\times I$. The group element associated to a singular circle is determined by a double point loop in the image of $S^2\times I$ that changes sheets exactly at one point on the singular circle, with a choice of first sheet orienting the loop. The group element $g(\gamma)$ associated to a circle $\gamma$ with connected double cover satisfies $g(\gamma)^2=1$, since $\gamma$ itself represents $g(\gamma)$, while its double cover represents $g(\gamma)^2$ and bounds a disk in the domain. The singular arcs that appear in \cite[Chap.10.9]{FQ} and start/end at cusps do not occur in our setting since we work with a regular homotopy.
\begin{lem}\label{lem:mu3-equals-circle-count}
$ \operatorname{fq}(R,R')=[\sum_\gamma g(\gamma)]\in \mathbb{F}_2T_M/\mu_3(\pi_3(M)), $ where the sum is over all singular circles $\gamma$ that have connected double covers in $S^2 \times I$.
\end{lem}
\begin{proof}[Sketch of Proof:]
The idea is to resolve the singular circles of a track $H:S^2\times I\looparrowright M\times I$ to (at worst) self-intersection points of $S^2\times I\looparrowright M\times \mathbb{R}\times I$, and compute $\mu_3$.
Using the extra $\mathbb{R}$-factor, the singular circles with disconnected double covers can be eliminated by perturbing one sheet into the $\mathbb{R}$-direction. By perturbing the sheets that intersect in a circle $\gamma$ with connected double cover partially into the positive $\mathbb{R}$-direction and partially into the negative $\mathbb{R}$-direction, $\gamma$ can be eliminated except for a single transverse self-intersection with group element $g(\gamma)$.
\end{proof}
It is interesting to note that these singular circles in $M \times I$ project to the middle level $f:S^2\times 1/2\looparrowright M\times 1/2$ as follows: They map to the union of the boundary arcs of Whitney disks $W_i$ (inverse to finger moves on $R$) and the boundary arcs of Whitney disks $W_i'$ (guiding Whitney moves towards $R'$). These arcs meet at the self-intersections of $f$, so the union $\cup_i\partial W_i\cup\partial W'_i$ is a map of circles into $f(S^2)$. The number of circles will not in general equal the number of self-intersection pairs, because the $W_i$ and $W'_i$ may induce different pairings. To see that these Whitney disk boundaries are projections of the singular circles to the middle level $f$, consider first the track of the $i$th finger move: As the finger first touches the sheets and then pushes through, a single tangential self-intersection is created which then splits into two self-intersections that move apart until coming to rest at the end of the finger's motion. So in each sheet the motion of a single point splitting into two traces out one arc in the boundary of the Whitney disk $W_i$ (inverse to the finger move). In the domain $S^2\times I$ of the homotopy we see neighborhoods of two minima of singular circles, see Figure~\ref{fig:homotopy-track}. Turning the homotopy upside down, the same observations explain neighborhoods of the maxima.
\begin{figure}[ht!]
\centerline{\includegraphics[scale=.28]{homotopy-trace-4-with-sheet-change.pdf}}
\caption{Singular circles in $S^2\times I$: A connected double cover.}
\label{fig:homotopy-track}
\end{figure}
Singular circles with connected double covers arise when there are differences in the sheet choices determined by the $W_i$ and $W'_i$, as shown in Figure~\ref{fig:homotopy-track}. This is consistent with our two computations of the Freedman--Quinn invariant in Lemmas~\ref{lem:compute-mu-by-crossing-changes} and~\ref{lem:mu3-equals-circle-count}: Each singular circle with double point loop $g$ corresponds to $n$ finger moves along the same $g$ and $n$ Whitney moves resolving the resulting double points. The number $n$ is the number of minima (and maxima) of the projection $M \times I\to I$ when restricted to the singular circle. The double cover is connected if and only if $g^2=1$ and there is an odd number of sheet changes from the sheet choice determined by the finger moves to the sheet choice of the Whitney moves.
\section{Proof of Theorem~\ref{thm:4d-light-bulb}}\label{sec:proof-main}
The last sentence of Theorem~\ref{thm:4d-light-bulb} follows from the fact that all our constructions, including those throughout this section, are supported away from $G$. That $\mathcal{R}^G_{[f]}\neq\emptyset$ if and only if Wall's reduced self-intersection invariant $\widetilde\mu(f)$ vanishes follows from Lemma~\ref{lem:choice-of-disks-exists}, since the vanishing of $\widetilde\mu(f)$ is a sufficient condition for the existence of null-homotopic Whitney circles for all self-intersections of $f$, and is a necessary condition for $f$ to be homotopic to an embedding. For the rest of Theorem~\ref{thm:4d-light-bulb}, we will proceed with the following steps:
\begin{enumerate}
\item [A.]
Define the geometric action of $\mathbb{F}_2T_M$ on $\mathcal{R}^G_{[f]}$ and show that
\[
\operatorname{fq}(t\cdot R,R) = [t]\in \mathbb{F}_2T_M/\mu_3(\pi_3M)\quad \forall R\in \mathcal{R}^G_{[f]}, t\in \mathbb{F}_2T_M.
\]
\item[B.] Show that the stabilizers are $\mu_3(\pi_3M)$.
\item[C.] Prove that $R'$ is isotopic to $\operatorname{fq}(R,R')\cdot R$ for all $R,R'\in \mathcal{R}^G_{[f]}$.
\end{enumerate}
The last item implies the transitivity of the action, so these steps complete the proof of Theorem~\ref{thm:4d-light-bulb}: For a fixed $R\in\mathcal{R}^G_{[f]}$ the Freedman--Quinn invariant $\operatorname{fq}(R,\cdot)\in\mathbb{F}_2T_M/\mu_3(\pi_3M)$ inverts the $\mathbb{F}_2T_M$-action. $\square$
\subsection{The geometric action on $\mathcal{R}^G_{[f]}$}\label{subsec:geo-action-def}
An outline of this construction was given in Section~\ref{sec:outline-proof}. Given $t=t_1+\cdots +t_n\in\mathbb{F}_2T_M$ and $R\in \mathcal{R}^G_{[f]}$, we first do $n$ finger moves on $R$, along arcs starting and ending near the base-point in $R$, representing $t_i \in T_M$. The isotopy class of the resulting generic map $f^t:S^2\looparrowright M$ only depends on $R$ and $t$ because $\pi_1(M \smallsetminus R)\cong \pi_1M$ and homotopy implies isotopy for arcs in 4--manifolds. The second step in the definition of our action is to do Whitney moves on $f^t$ along a collection $\mathcal{W}^t$ of $n$ Whitney disks to arrive at an embedding denoted by $t\cdot R$, where $\mathcal{W}^t$ satisfies the following sheet choice condition: Let ${\sf{x}}=(x_1^+,x_1^-,\dots, x_n^+,x_n^-)$ be a sheet choice such that the collection $\mathcal{W}$ of Whitney disks $W_i$ which are inverse to the finger moves is ${\sf{x}}$-compatible and each $W_i$ pairs $f(x_i^\pm)$, i.e.~$\mathcal{W}$ is also compatible with the pairing choice ${\sf{x}}^\pm=(x_1^\pm,\ldots,x_n^\pm)$.
Then we take $\mathcal{W}^t$ to be any choice of Whitney disks that is compatible with the sheet choice ${\sf{x}}^t:=(x_1^+,y_1^-,\ldots,x_n^+,y_n^-)$, which has the sheets of $f^t$ switched at each negative self-intersection $f^t(x^-_i)=f^t(y^-_i)$. Such an ${\sf{x}}^t$-compatible $\mathcal{W}^t$ exists by Lemma~\ref{lem:disks-exist-for-all-choices}, and by Corollary~\ref{cor:w-disk-independence-of-arcs-and-pairings} the isotopy class of $f^t_{\mathcal{W}^t}$ is determined by ${\sf{x}}^t$, so $t\cdot R:=f^t_{\mathcal{W}^t}\in\mathcal{R}^G_{[f]}$ is well defined. Lemma~\ref{lem:compute-mu-by-crossing-changes} implies by construction:
\begin{lem}\label{lem:fq-tdotR-R-equals-t}
$\operatorname{fq}(t\cdot R, R) = [t]$ for all $R\in \mathcal{R}^G_{[f]}$ and $t=t_1 +\cdots + t_n\in \mathbb{F}_2T_M$. $\square$
\end{lem}
By Corollary~\ref{cor:choice-of-sheets-non-involution}, sheet choices ${\sf{x}}$ don't affect the isotopy class of $f_{\sf{x}}$ at double points whose group element is not 2-torsion. This implies that $t\cdot R$ is unchanged if we perform additional finger moves on $R$ along non-2-torsion elements (and then appropriate Whitney moves to arrive at an embedding). In Lemma~\ref{lem:double-sheet-change} we showed that making double sheet changes doesn't change the isotopy class of $f_{\sf{x}}$, so only the mod 2 number of finger moves along 2-torsion elements matters for the isotopy class of $t\cdot R$. This leads to the following result:
\begin{lem}\label{lem:sheets give action}
For $R\in \mathcal{R}^G_{[f]}$ and $t=t_1 +\cdots + t_n\in \mathbb{F}_2T_M$, $t\cdot R=R'\in \mathcal{R}^G_{[f]}$ for any $R'$ that is obtained from $R$ by a sequence of finger moves and Whitney moves, as long as $\mu_3(H_{\mathcal{W},\mathcal{W}'})=t$.
$\square$
\end{lem}
Recall that by Lemma~\ref{lem:compute-mu-by-crossing-changes}, $\mu_3(H_{\mathcal{W},\mathcal{W}'})= \sum_{x_i\neq x_i'} g(x_i)$ only depends on the middle level of the homotopy and the two sheet choices ${\sf{x}}$ and ${\sf{x}}'$ (and only at double points whose group elements are 2-torsion and which are counted mod 2).
\subsection{The stabilizer equals $\mu_3(\pi_3M)$}\label{subsec:stabilizer}
\begin{lem}\label{lem:stabilizer-contains-mu-pi3}
If $t\cdot R$ is isotopic to $R$, then $t\in \mu_3(\pi_3M)$, i.e.\ the stabilizer of $R\in \mathcal{R}^G_{[f]}$ is contained in $\mu_3(\pi_3M)$.
\end{lem}
\begin{proof}
The union of a based homotopy $H^t$ from $R$ to $t\cdot R$ with $\mu_3(H^t)=t$ and a based isotopy $H^0$ from $t\cdot R$ to $R$ forms a based self-homotopy $J:=H^t\cup H^0$ of $R$. So by Lemma~\ref{lem:self-homotopies}, we have $t=\mu_3(H^t)=\mu_3(J)\in\mu_3(\pi_3M)$.
\end{proof}
\begin{lem}\label{lem:stabilizer}
If $t\in \mu_3(\pi_3M)$ then $t\cdot R$ is isotopic to $R$, i.e.\ $\mu_3(\pi_3M)$ is contained in the stabilizer of any $R\in \mathcal{R}^G_{[f]}$.
\end{lem}
\begin{proof}
We first use that a closed tubular neighborhood $\nu(R\cup G)$ has boundary $S^3$ and is homotopy equivalent to $S^2\vee S^2$ (in fact, capping it off with $B^4$ leads to an $S^2$-bundle over $S^2$ with Euler number $R\cdot R$). If $M_0\subset M$ is the closure of the complement of $\nu(R\cup G)$ then the corresponding Mayer--Vietoris sequence (for universal covering spaces) reads as follows:
\[
H_3(\nu(R\cup G);\mathbb{Z}\pi_1M) \oplus H_3(\widetilde M_0) \longrightarrow H_3(\widetilde M) \longrightarrow H_2(S^3;\mathbb{Z}\pi_1M)
\]
Since the first summand of the first term and the last term are both $0$, we see that the inclusion induces an epimorphism $H_3(\widetilde M_0) \twoheadrightarrow H_3(\widetilde M)$.
By the surjectivity of Hurewicz maps, this implies that we may assume that $t=\mu_3(a)$ for some $a\in \pi_3M_0$. Now represent $a$ by a based generic regular homotopy $F_s:S^2 \times I\to M_0$ from the trivial sphere $F_0=F_1$ in $M_0$ to itself. By construction, $F_s$ lies in the complement of $R$ at each $s$-level, so we can take a smooth family of ambient connected sums of $F_s$ with $R\times s$ to get a homotopy $H:S^2\times I\to M$ from $R$ to itself with $\mu_3(H)=t$. By Lemma~\ref{lem:sheets give action}, this shows that $F_1\# R$ is an admissible representative of our action $t\cdot R$, and therefore $t\cdot R$ is isotopic to $R$.
\end{proof}
\subsection{The action is transitive}\label{sec:transitive}
This follows directly from:
\begin{lem}\label{lem:fq-inverts-action}
For any $R,R'\in \mathcal{R}^G_{[f]}$, we have $\operatorname{fq}(R,R')\cdot R = R'$.
\end{lem}
\begin{proof}
This is a simple consequence of Lemmas~\ref{lem:compute-mu-by-crossing-changes} and~\ref{lem:sheets give action}.
\end{proof}
\section{Proofs of Corollaries~\ref{cor:infinitely-many} and \ref{cor:unknotting number}}\label{sec:infinitely-many}
Recall the statement of Corollary~\ref{cor:infinitely-many}: There exist $4$--manifolds $M$ and $f:S^2\looparrowright M$ with infinitely many free isotopy classes of embedded spheres homotopic to $f$ (and with common geometric dual). These manifolds also admit infinitely many distinct pseudo-isotopy classes of self-diffeomorphisms.
\begin{proof}[Proof of Corollary~\ref{cor:infinitely-many}:]
We first note that in the example given below Corollary~\ref{cor:infinitely-many}, $\mu_3(\pi_3M) =0$ since $M$ (and hence its universal covering $\widetilde M$) has no 3-handles, and $\mu_3$ factors through the Hurewicz homomorphism $\pi_3(M) \twoheadrightarrow H_3(\widetilde M)=0$ by Lemma~\ref{lem:Hurewicz}. So $|\mathbb{F}_2T_M/\mu_3(\pi_3M)|=|\mathbb{F}_2T_M|=\infty$. The pseudo-isotopy statement of Corollary~\ref{cor:infinitely-many} follows from Lemma~\ref{lem:pseudo-isotopy} below, because a diffeomorphism $\varphi:M \times I \stackrel{\cong}{\to} M \times I$ with $\varphi_0=\operatorname{id}$ (the pseudo-isotopy condition) and $\varphi_1(R)=R'$ leads to the concordance $\varphi\circ (R\times\operatorname{id}):S^2 \times I\hookrightarrow M \times I$ from $R$ to $R'$. This contradicts $\operatorname{fq}(R,R')\neq 0$ by Corollary~\ref{cor:concordance}.
\end{proof}
\begin{lem}\label{lem:pseudo-isotopy}
Let $G:S^2 \hookrightarrow M$ be framed and fix $n\in\mathbb{Z}$. Then the diffeomorphism group of $M$ acts transitively on embedded spheres $R:S^2 \hookrightarrow M$ with $G$ as a geometric dual and normal Euler number $e(\nu R)=n$.
\end{lem}
\begin{proof}
Given $G,R$ as above, consider a closed regular neighborhood $\nu(R\cup G)\subset M$. It is diffeomorphic to the $4$-manifold $M_n$ with one 0-handle and two 2-handles attached to the Hopf link, one 0-framed and the other $n$-framed. In particular, the boundary $\partial M_n$ is a 3-sphere, which leads to a decomposition
\[
M \cong M_n \cup_{S^3} M_R,
\]
where $M_R$ is the closure of the complement of $M_n$ in $M$. Note that $G:S^2 \hookrightarrow M_n\subset M$ is the union of the (core of the) 0-framed 2-handle and a disk bounding the $0$-framed component of the Hopf link.
As a consequence, surgery on $G$ in $M_n$ leads to the 4-manifold where that 0-framed 2-handle is replaced by a 1-handle. This 1-handle then cancels the $n$-framed 2-handle, showing that surgery on $G$ leads from $M_n$ to $D^4$. It follows that surgery on $G$ also leads from $M$ to $M_R\cup_{S^3}D^4$. If $R':S^2 \hookrightarrow M$ with $e(\nu R')=e(\nu R)$ also has $G$ as a geometric dual, then repeating the same constructions for $R'$ in place of $R$, we get a second decomposition
\[
M \cong M_n \cup_{S^3} M_{R'},
\]
where $M_{R'}\cup_{S^3}D^4$ is diffeomorphic to surgery on $G$ in $M$. But $G$ is a {\em common} dual, so we get an orientation preserving diffeomorphism $M_R \cong M_{R'}$. Since orientation preserving diffeomorphisms of $S^3$ are isotopic to the identity, we can extend this to a self-diffeomorphism of $M$ which carries $R$ to $R'$ and fixes $G$: This just requires lining up the 2-handles of $M_n$ in the obvious way.
\end{proof}
Corollary~\ref{cor:unknotting number} states that for $R,R'\in \mathcal{R}^G_{[f]}$, the relative unknotting number equals the support of the Freedman--Quinn invariant: $\operatorname{u}(R,R') = |\operatorname{fq}(R,R')|$. Here $\operatorname{u}(R,R')\in \mathbb{N}_0$ denotes the minimal number of finger moves required in any regular homotopy between $R$ and $R'$, and $|\operatorname{fq}(R,R')|$ is the minimum number of non-zero coefficients in any representative of $\operatorname{fq}(R,R')\in \mathbb{F}_2T_M/\mu_3(\pi_3M)$.
\begin{proof}
For $t\in\mathbb{F}_2T_M$, the relative unknotting number satisfies $\operatorname{u}(t\cdot R,R) \leq |t|$ because $t\cdot R$ is constructed from $R$ by using $|t|$ finger moves. Moreover, any $R'\in \mathcal{R}^G_{[f]}$ is isotopic to some $t\cdot R$, so it suffices to understand those particular numbers. If $[t] = [s]$ then $t\cdot R=[t]\cdot R$ is isotopic to $s\cdot R$, so $\operatorname{u}(t\cdot R,R) \leq |s|$ holds as well.
If $u:=\operatorname{u}(t\cdot R,R)$ then there are $u$ finger moves and then $u$ Whitney moves that lead from $R$ to $t\cdot R$. By general position, we may assume that the finger moves are disjoint from $G$ and run along group elements $g_i\in \pi_1M$, $i=1,\dots, u$. By Lemma~\ref{lem:choice-of-disks-exists} we find Whitney disks with the same sheet choices in the complement of $G$, and by Lemma~\ref{lem:sheets give action} they also lead to $t\cdot R$. This implies that $u$ is at least as large as the number of 2-torsion elements $s_j$ among the $g_i$, which equals $|s|$ for $s:=\sum_j s_j$. So we get $u\geq|s|$, and together $u=|[t]|$ as claimed.
\end{proof}
\section{Ambient Morse theory and the $\pi_1$-negligible embedding Theorem}\label{sec:ambient-morse-pi1-embedding-thm}
A third proof of Gabai's LBT arises from ambient Morse theory and the uniqueness part of the $\pi_1$-negligible embedding theorem \cite[Thm.10.5A(2)]{FQ, Stong}. We only state it in the orientable, non-$s$-characteristic case that we are going to use, because our dual $G$ is framed. Recall that an embedding $h:V\hookrightarrow W$ is \emph{$\pi_1$-negligible} if the inclusion induces an isomorphism $\pi_1(W \smallsetminus h(V)) \cong \pi_1W$, which is guaranteed by a dual sphere.
\begin{Thm10}\label{thm:FQ-10.5-uniqueness}
Let $(V; \partial_0 V, \partial_1 V)$ be a compact $4$--manifold triad so that $\pi_1(V, \partial_0 V)=\{1\}=\pi_1(V,\partial_1V)$ (all basepoints), each component has nonempty intersection with $\partial_1 V$, and components disjoint from $\partial_0 V$ are 1-connected. Suppose $W$ is an oriented $4$--manifold, $h, h':(V,\partial_0 V) \hookrightarrow (W,\partial W)$ are $\pi_1$-negligible embeddings, neither of which is $s$-characteristic, and $H$ is a homotopy rel $\partial_0 V$ from $h$ to $h'$.
Then there is an obstruction $\operatorname{fq}(H)\in H^2 (V,\partial_0 V; \mathbb{F}_2T_W)$ which vanishes if and only if $H$ is homologous (with $\mathbb{Z}[\pi_1W]$-coefficients) to a $\pi_1$-negligible concordance $V \times I \hookrightarrow W \times I$ from $h$ to $h'$. $\square$
\end{Thm10}
Stong extends this theorem to the $s$-characteristic case in \cite[p.2]{Stong} by showing that there is a secondary obstruction, the {\it Kervaire--Milnor invariant}, to finding a concordance. Stong also observes at the bottom of \cite[p.2]{Stong} that $\operatorname{fq}$ can be strengthened to be independent of $H$ by taking $\operatorname{fq}(h,h')$ in the quotient of $H^2 (V,\partial_0 V; \mathbb{F}_2T_W)$ by the self-intersection invariant on $\pi_3W$. Note that this is a 5-dimensional result, so it holds in the smooth category. We apply this theorem for $W$ defined to be the manifold $M$ with an open neighborhood of $G$ removed, and $V:= D^2 \times D^2$ with $\partial_0 V= S^1 \times D^2$ and $\partial_1 V = D^2 \times S^1$. Then $R, R'$ can be turned into embeddings $h, h': (V,\partial_0 V) \hookrightarrow (W,\partial W)$ by using the normal bundles of $R,R'$ and removing a neighborhood of their intersection point with $G$. Note that $R$ may have non-trivial normal bundle (necessarily isomorphic to that of $R'$), but after removing the neighborhood of $G$, it turns into a $D^2$-bundle over $D^2$ which must be trivial. By Lemma~\ref{lem:based}, the resulting embeddings $h, h'$ are homotopic rel $\partial_0 V$ and the theorem applies. Note that $(V,\partial_0 V)\simeq (D^2, S^1)$ and hence the invariant $\operatorname{fq}(R,R')=\operatorname{fq}(h, h')$ lies in $\mathbb{F}_2T_W/\mu_3(\pi_3W)$. Note also that Seifert--van Kampen shows that in this case, every concordance is $\pi_1$-negligible (as long as it is $\pi_1$-negligible on one boundary). If $\operatorname{fq}(h,h')=0$ then $h$ and $h'$ are concordant by the above theorem.
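The identification of the coefficient group used here is the standard computation for the good pair $(D^2,S^1)$, which we record explicitly:
\[
H^2(V,\partial_0 V;\mathbb{F}_2T_W)\cong H^2(D^2,S^1;\mathbb{F}_2T_W)\cong \widetilde H^2(D^2/S^1;\mathbb{F}_2T_W)\cong \widetilde H^2(S^2;\mathbb{F}_2T_W)\cong \mathbb{F}_2T_W.
\]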
We now reverse the above steps of thickening spheres and disks to 4--manifolds with boundary to arrive at a concordance $C: S^2 \times I \hookrightarrow M \times I$ between $R$ and $R'$, exactly as in the corollary below. Note that Stong's additional Kervaire--Milnor invariant vanishes in our setting since $R$ is not $s$-characteristic: the dual sphere $G$ is framed, so that \[ R\cdot G \equiv 1 \neq 0 \equiv G\cdot G \mod 2. \] \begin{cor}\label{cor:concordance} Given embedded spheres $R,R'\in \mathcal{R}^G_{[f]}$ as in Theorem~\ref{thm:4d-light-bulb}, the obstruction $\operatorname{fq}(R,R')\in \mathbb{F}_2T_M/\mu_3(\pi_3M)$ vanishes if and only if there is a concordance $C: S^2 \times I \hookrightarrow M \times I$ between $R$ and $R'$ that has $G$ as a geometric dual in every level: $C^{-1}(G \times \{t\}) = (z_0,t)$. $\square$ \end{cor} By the following result, which will be proven using Morse theory for the 3-manifold $S^2 \times I$ and only basic lemmas from this paper, the Freedman--Quinn invariant completely detects isotopy in this setting: \begin{thm}\label{thm:concordance implies isotopy} Given a concordance $C: S^2 \times I \hookrightarrow M \times I$ between $R$ and $R'$ which has $G$ as a geometric dual in every level $t\in I$, as in the above corollary, it follows that $R$ and $R'$ are isotopic. \end{thm} \begin{proof} We now show how to directly turn the concordance $C$ into an isotopy using the geometric duals. By general position, we may assume that the composition $p_2\circ C: S^2 \times I \to I$ is a Morse function. If it has no critical points then $C$ is the track of an isotopy, so we'll study the critical points of $p_2\circ C$ by Morse theory. Extend a gradient-like vector field on $S^2 \times I$ to $M \times I$ that has no additional critical points and flows downwards in the $I$-direction away from the image of $C$.
Then the relative local models for $(M \times I, C(S^2 \times I))$ at the critical points, corresponding to the $k$-handles of the 3-manifold $S^2 \times I$, $k=0,1,2,3$, are well known. In \cite[Lem.~8]{BT} it is proven, by a dimension count for ascending and descending manifolds of the vector field, that by an ambient isotopy of $M \times I$ one can order the critical points according to their index. Moreover, one can also re-order critical points of the {\it same} index arbitrarily, which can be seen as follows, say in the case of 1-handles: the core of a 1-handle (in the 3-manifold $S^2 \times I$) is an arc, whereas the cocore is a 2-disk. If we have two adjacent 1-handles, one just below a level $M:=M \times \{t\}$ and the other just above $M$, then we can push the cocore up and the core down into that ``middle'' level $M$. By general position, this 1-manifold and 2-manifold will not intersect in the ambient 4-manifold $M$, and hence we can push the upper 1-handle below the lower one. As a consequence, we can assume that our Morse function on $S^2 \times I$ first has $n$ minima (0-handles) which are then abstractly cancelled by $n$ 1-handles: each 0-handle must be abstractly cancelled eventually, and we can slide those cancelling 1-handles below the other 1-handles. Looking at the top, $m$ maxima arise that are abstractly cancelled by $m$ 2-handles. The remaining 1-handles have both their feet on $R$ by construction, and similarly for the 2-handles read in the other direction. The remaining 1- and 2-handles form a third cobordism, which must be diffeomorphic to $S^2\times I$ since gluing $S^2 \times I$ to its top and bottom gives the entire cobordism $S^2 \times I$. More precisely, we can find two non-critical levels $t_1 < t_2$ in $(0,1)$ such that $C^{-1}(M \times \{t_i\})$ are spheres which separate the domain $S^2 \times I$ of $C$ into three product cobordisms: \[ V_i := C^{-1}(M \times [t_i, t_{i+1}]), \ i=0,1,2 \text{ and } t_0:= 0, t_3:=1.
\] Here $V_i\cong S^2 \times I$ consists of the $i$- and $(i+1)$-handles discussed above. The proof of Theorem~\ref{thm:concordance implies isotopy} will be completed by the subsequent lemmas, which show that each of the three restrictions of $C$ to $V_i$ can be turned into an isotopy using the geometric dual $G$. \end{proof} For $V_0$, the $t$-parameter gives a movie in $M$ that starts with $R$ and then shows $n$ trivial spheres $S_1,\dots,S_n$ being born in $M$, one for each $0$-handle. Then $n$ tubes form, one for each $1$-handle, that connect $R$ to each $S_i$, making the result a new sphere $R_w$ in $M$. Here $w$ is a collection of $n$ words in the free product $\pi_1M \ast F_n\cong\pi_1(M\setminus\cup_iS_i)$, where $F_n$ is the free group generated by the meridians $m_i$ to $S_i$, and the words $w_k$ in $w$ measure how the core arcs $a_k$ of the $1$-handles hit the cocore $3$--balls $B_i$ of the $0$-handles bounded by the $S_i$ in $M$: each intersection point with $B_i$ reads out the letter $m_i$, whereas the letters from $\pi_1M$ arise from the arcs in between such intersection points. The arcs can be turned into based loops after picking whiskers from each $S_i$ to the base-point of $M$. The argument below does not depend on those choices. These cocores and cores originally lie in $M \times [0,t_1]$, but we pushed the cocores up and the cores down into a common middle level $M=M \times \{t_1/2\}$. By the above reordering argument, the collection $\mathcal{C}$ of cocores is embedded disjointly into $M$ and similarly, the collection $\mathcal{C}'$ of cores is also embedded disjointly. However, these 3-- and 1--manifolds can intersect each other in the 4-dimensional middle level $M$, so the abstract handle cancellation can a priori not be done ambiently in $M$. \begin{lem}\label{lem:0-1-handles} The sphere $R_w$ is isotopic to $R$ in any neighborhood of $R\cup \mathcal{C}\cup\mathcal{C}'\cup G$ in $M$.
\end{lem} \begin{figure}[ht!] \centerline{\includegraphics[scale=.24]{pullout-tube.pdf}} \caption{Pushing core arcs out of cocore 3-balls} \label{fig:pullout-tube} \end{figure} \begin{proof} Figure~\ref{fig:pullout-tube} shows how we can reduce the number of occurrences of the meridian $m_i$ in $w$. This is a finger move and then a Whitney move on $R_w$, and as usual we see two Whitney disks: $W$ going back to $R_w$ by the inverse of the finger move and $W'$ going forward. These Whitney disks share a boundary arc $\beta$, and by Proposition~\ref{prop:common-arcs-w-moves} it follows that $R_w$ is isotopic to the result $R_{w'}$ of the Whitney move along $W'$, with $w'$ containing one letter $m_i$ fewer than $w$. Iterating this procedure, we see that $R_w$ is isotopic to $R_{w_0}$ where $w_0\in\pi_1M$. This means that the 1-handles for $R_{w_0}$ do not intersect the cocore $3$--balls for the $0$-handles. These $3$--balls and copies of $D^2\times D^1$ inside the tubes then provide the final isotopy from $R_{w_0}$ to $R$. \end{proof} Applying the same arguments as in Lemma~\ref{lem:0-1-handles} to $V_2$ turned upside down shows that the restriction of $C$ to $V_2$ can be replaced by an isotopy. So it just remains to show that the restriction of $C$ to $V_1$ can be replaced by an isotopy. The $t$-parameter movie for $V_1$ starts with the sphere $R$ at $t=t_1$; then $g$ tubes form, one for each remaining 1-handle in $V_1$. We then see a surface $F$ of genus $g$ in the middle level $M$, in which the collection $\mathcal{C}$ of cocores is also embedded. These are 2--disks, or better, a collection of $g$ disjoint caps (section~\ref{sec:capped-surface-w-move}) attached to a half-basis of disjointly embedded simple closed curves in $F$.
The movie continues with $g$ 2-handles being attached to $F$, whose cores form a second collection of caps $\mathcal{C}'$, again embedded disjointly into the middle level $M$. \begin{lem}\label{lem:1-2-handles} The sphere $R$ is isotopic to $R'$ in any neighborhood of $F\cup\mathcal{C}\cup\mathcal{C}'\cup G$ in $M$. \end{lem} \begin{proof} By construction, we have a genus $g$ surface $F\subset M$, together with a collection $\mathcal{C}$ of $g$ caps such that surgery leads to $R$, and another collection $\mathcal{C}'$ of $g$ caps for $F$ that surger it to $R'$. The caps in each collection are embedded in $M$ and disjoint from all other caps in the same collection, but caps of different collections may intersect on their boundary (in $F$) as well as in their interiors. There are two handlebodies $Y$ and $Y'$ formed from $F \times [-\epsilon,\epsilon]$ by (abstractly) attaching thickened caps from $\mathcal{C}$ to $F \times \{-\epsilon\}$, respectively $\mathcal{C}'$ to $F \times \{\epsilon\}$, and then filling the resulting boundary with two $3$--balls. This is a Heegaard decomposition of $S^3$, to which we will next apply some classical $3$--manifold results to simplify the intersection pattern in $F$ between the boundaries of the caps in $\mathcal{C}$ and those in $\mathcal{C}'$. Waldhausen's uniqueness theorem for Heegaard decompositions of $S^3$ \cite{Waldhausen} gives a diffeomorphism of triples (isotopic to the identity -- but we won't use this here) \[ (S^3; Y, Y') \cong (S^3; Y_0, Y'_0) \] where the subscript $0$ refers to the standard Heegaard decomposition, stabilized to be of the same genus as $Y$. In the following, we'll need the usual notion of {\it minimal systems of disks}, which are disjointly embedded disks that cut a handlebody into a $3$--ball. For $Y$, respectively $Y'$, such minimal systems are given by the caps in $\mathcal{C}$, respectively $\mathcal{C}'$.
On the $(Y_0, Y_0')$-side these are standard disks in the sense that their boundaries intersect $\delta_{ij}$ geometrically. By applying Waldhausen's diffeomorphism, we see that $Y$ and $Y'$ admit minimal systems of disks that also intersect $\delta_{ij}$ geometrically on the boundary. A result of Reidemeister \cite{Reidemeister} and Singer \cite{Singer} from 1933 asserts that any two minimal systems of disks in a handlebody are slide equivalent. This implies that after finitely many handle slides among the abstract caps in $\mathcal{C}$, respectively $\mathcal{C}'$, we may assume that the collections of caps $\mathcal{C}$ and $\mathcal{C}'$ intersect $\delta_{ij}$ geometrically on the boundary. These handle slides can be achieved ambiently in $M$, and we'll assume from now on that this has been done. This has the consequence that the complement in $F$ of the boundaries of the caps in $\mathcal{C}$ and $\mathcal{C}'$ is connected. In particular, in the following arguments we may always find (disjoint) arcs in $F$ from any point in this complement to the intersection point of $F$ and $G$. If the interiors of all caps happen to be disjoint, then Lemma~\ref{lem:capped-surface-isotopy} shows that the two surgeries $R$ and $R'$ are isotopic in $M$. We will complete our proof of Lemma~\ref{lem:1-2-handles} by showing the following general result. \end{proof} \begin{lem}\label{lem:disjoint-caps} Let $F$ be a surface in a 4--manifold admitting a collection $\mathcal{C}$ of disjoint caps $c_i$, and also admitting another collection $\mathcal{C}'$ of disjoint caps $c_i'$, such that the $\partial c_i$ intersect the $\partial c'_j$ geometrically $\delta_{ij}$ in $F$.
If $F$ has a geometric dual $G$ which is disjoint from $\mathcal{C}\cup \mathcal{C}'$, then there exists a collection $\mathcal{C}''$ with the same boundaries as $\mathcal{C}$ which has no interior intersections with $\mathcal{C}'$, and such that surgery on $\mathcal{C}''$ is isotopic to surgery on $\mathcal{C}$. \end{lem} Recall that by definition (section~\ref{sec:capped-surface-w-move}) the interiors of all caps are embedded in the complement of $F$. And in our current setting of the proof of Lemma~\ref{lem:1-2-handles}, the geometric dual $G$ to $F$ is indeed disjoint from $\mathcal{C}\cup \mathcal{C}'$. Note that Lemma~\ref{lem:capped-surface-isotopy} then implies that surgery on $\mathcal{C}$ is also isotopic to surgery on $\mathcal{C}'$, which is what we wanted to prove. \begin{proof} Our construction will eliminate each intersection point $p\in c_i\pitchfork c'_j$ for $c_i\in\mathcal{C}$ and $c'_j\in\mathcal{C}'$ by tubing $c_i$ into a dual sphere $S_j$ to $c'_j$. This does not change $F_{\mathcal{C}'}$ since $\mathcal{C}'$ is fixed, and it will be checked that the tubing of the $c_i$ into the $S_j$ does not change $F_\mathcal{C}$ up to isotopy. \begin{figure}[ht!] \centerline{\includegraphics[scale=.275]{cap-clean-up-1-solid.pdf}} \caption{Left: An intersection $p\in c_i\pitchfork c'_j$. Right: A torus $T_j$ of normal circles over $\partial c_j$ with $T_j\cap c'_j=\{q\}$.} \label{fig:cap-clean-up-1} \end{figure} We first describe the easiest case, where $\mathcal{C}\pitchfork \mathcal{C}'$ is a single interior intersection $p\in c_i\pitchfork c'_j$ for some $c_i\in\mathcal{C}$ and $c'_j\in\mathcal{C}'$ with $i\neq j$ (Figure~\ref{fig:cap-clean-up-1}, left). By assumption there exists a cap $c_j\in\mathcal{C}$ whose boundary $\partial c_j$ intersects $\partial c'_j$ in a single point.
A torus $T_j$ of normal circles to $F$ over $\partial c_j$ intersects the interior of $c'_j$ in a single point $q$ (Figure~\ref{fig:cap-clean-up-1}, right). Let $d$ be a meridional disk to $F$ bounded by a circle in $T_j$, and denote by $d_G$ the result of tubing $d$ into $G$ to eliminate the intersection between $d$ and $\partial c_j$ (as in Figure~\ref{fig:tube-caps-2}, but here $\partial d\subset T_j$). Then surgering $T_j$ along $d_G$ yields a $0$-framed embedded sphere $S_j$ with $q=S_j\cap c'_j$, such that $S_j$ is disjoint from all other caps in $\mathcal{C}'$, and $S_j$ is disjoint from all caps in $\mathcal{C}$ (Figure~\ref{fig:cap-clean-up-2}, left). So the intersection $p$ can be eliminated by tubing $c_i$ into $S_j$ along a path between $p$ and $q$ in $c'_j$ (Figure~\ref{fig:cap-clean-up-2}, right). \begin{figure}[ht!] \centerline{\includegraphics[scale=.275]{cap-clean-up-2-solid.pdf}} \caption{Left: The sphere $S_j$ with $S_j\cap F=\{q\}$. Right: The result of tubing $c_i$ into $S_j$ to eliminate $p$ and $q$.} \label{fig:cap-clean-up-2} \end{figure} At this point we have eliminated $p\in c_i\pitchfork c'_j$ by replacing $c_i$ with the connected sum $c''_i:=c_i\# S_j$ of $c_i$ with $S_j$, to get a new collection of caps $\mathcal{C}''$ with the same boundaries as $\mathcal{C}$ but with interiors disjoint from $\mathcal{C}'$. We want to check that $F_\mathcal{C}$ is isotopic to $F_{\mathcal{C}''}$. Note that $T_j$ also admits a cap $\gamma_j$ formed from $c_j$ by deleting a small collar. (The boundary of $\gamma_j$ is visible in the right side of Figure~\ref{fig:cap-clean-up-1} as the ``inner longitude'' of $T_j$.)
This cap $\gamma_j$ is disjoint from $F$ and is dual to $d_G$, so it follows from the capped surface isotopy lemma (Lemma~\ref{lem:capped-surface-isotopy}) that the sphere $S_j^\gamma$ formed by surgering $T_j$ along $\gamma_j$ is isotopic to $S_j$ in the complement of $F$. So it suffices to check that $F_\mathcal{C}$ is isotopic to $F_{\mathcal{C}^\gamma}$, where the collection of caps $\mathcal{C}^\gamma$ differs from the original $\mathcal{C}$ by replacing $c_i$ with $c_i\# S_j^\gamma$. The sphere $S_j^\gamma$ is contained in the boundary of a tubular neighborhood $\nu_{c_j}\cong D^2\times D^2$ of $c_j$, and $S_j^\gamma$ bounds an embedded $3$--ball $B_j^\gamma\subset\nu_{c_j}$ which is the union of the solid torus $\partial c_j\times D^2$ with a $1$-dimensional sub-bundle over the interior of $c_j$. Observe that the only intersection between $B_j^\gamma$ and $F$ is the circle $\partial c_j$. Now surger $F$ along $\mathcal{C}^\gamma$ to get $F_{\mathcal{C}^\gamma}$. Since surgery has deleted a regular $\epsilon$-neighborhood of $\partial c_j$ from $F$, the $3$--ball $B^\gamma_j$ is now disjoint from $F_{\mathcal{C}^\gamma}$. So there exists an isotopy from $F_{\mathcal{C}^\gamma}$ to $F_\mathcal{C}$, supported near $B^\gamma_j$, which isotopes the two parallel copies of $c_i\# S_j^\gamma$ in $F_{\mathcal{C}^\gamma}$ to the two parallel copies of $c_i$ in $F_\mathcal{C}$ by shrinking the parallels of $S_j^\gamma$ in $B^\gamma_j$.
The description of how this construction can be carried out in the general case, to simultaneously eliminate any number of intersections $p\in c_i\pitchfork c'_j$ among all the $c_i\in\mathcal{C}$ and $c'_j\in\mathcal{C}'$, is straightforward: consider some $c'_j$ which has multiple interior intersections with multiple $c_i$ (in the left of Figure~\ref{fig:cap-clean-up-1} imagine more $p$-intersections). We will not introduce sub-index notation to enumerate the interior intersections in each $c'_j$, nor for the subsequent tori and spheres created for each intersection. Take a torus $T_j$ as in the right of Figure~\ref{fig:cap-clean-up-1} around a parallel copy of $\partial c_j$ for each interior intersection. (Note that these parallels of $\partial c_j$ and their corresponding disjoint normal tori can be assumed to be supported arbitrarily close to $\partial c_j$, i.e.~in the part of $F$ that will be deleted by surgery -- this observation is key to why the general case presents no new difficulties.) Just as above, these tori can be surgered to spheres $S_j$ disjoint from $F$ which are dual to $c'_j$, using caps $d_G$ on the $T_j$ in the complement of $F$ created by tubing meridional disks into $G$ along disjointly embedded arcs in $F$. These $S_j$ are all disjointly embedded by construction. Now all intersections between $c'_j$ and the $c_i$ can be eliminated by tubing the $c_i$ into the $S_j$ along disjointly embedded arcs in $c'_j$ between pairs of intersection points in $c_i\pitchfork c'_j$ and $S_j\cap c'_j$ (as in the right of Figure~\ref{fig:cap-clean-up-2}). Note that the case $i=j$ is allowed in this construction, since the tori are supported near the parallel copies of $\partial c_j$ and the $S_j$ are disjoint from all $c_i$, so changing the interior of $c_j$ by tubing into an $S_j$ can be carried out just as for $c_i$ with $i\neq j$.
Carrying out this construction for all $c'_j$ replaces $\mathcal{C}$ with $\mathcal{C}''$ such that $\mathcal{C}''$ and $\mathcal{C}'$ have disjoint interiors (with boundaries unchanged). It remains to check that the argument from the easy case also applies to show that this construction, which has changed the $c_i$ by multiple connected sums, has not changed the result of surgery. As before, we can surger each of the $T_j$-tori along a cap $\gamma_j$ formed from a parallel of $c_j$ to get a sphere $S^\gamma_j$ which is isotopic in the complement of $F$ to the corresponding $S_j$. Here we are using parallels of the new $c''_j$, which may have been tubed into some $S_k$'s, but the key properties of being framed, with interiors disjointly embedded in the complement of $F$, have been preserved. Since the $\gamma_j$-caps are dual to the $d_G$-caps, the $S_j^\gamma$-spheres are isotopic to the $S_j$-spheres in the complement of $F$, again by the capped surface isotopy lemma (Lemma~\ref{lem:capped-surface-isotopy}). So again it suffices to check that $F_\mathcal{C}$ is isotopic to $F_{\mathcal{C}^\gamma}$, where the collection of caps $\mathcal{C}^\gamma$ differs from the original $\mathcal{C}$ by taking connected sums of the $c_i$ with multiple $S_j^\gamma$. Similarly to before, the $S_j^\gamma$ are contained in the boundaries of disjoint tubular $D^2\times D^2$-neighborhoods of parallels of $c_j$, with each of these neighborhoods containing an embedded $3$--ball $B_j^\gamma$ bounded by $S_j^\gamma$ such that $B_j^\gamma$ and $F$ only intersect in the corresponding parallel copy of $\partial c_j$.
Surgering $F$ along $\mathcal{C}^\gamma$ to get $F_{\mathcal{C}^\gamma}$ deletes regular $\epsilon$-neighborhoods of all the $\partial c_j$ from $F$, and since we may assume that all the $T_j$-tori in the construction were supported near parallels of the $\partial c_j$ that lie inside these deleted $\epsilon$-neighborhoods, all the $B^\gamma_j$-balls are disjoint from $F_{\mathcal{C}^\gamma}$. So there exists an isotopy from $F_{\mathcal{C}^\gamma}$ to $F_\mathcal{C}$, supported near the $B^\gamma_j$, which isotopes the pairs of parallel copies of $c_i\# S_j^\gamma$ in $F_{\mathcal{C}^\gamma}$ to the pairs of parallel copies of $c_i$ in $F_\mathcal{C}$ by shrinking the parallels of $S_j^\gamma$ in $B^\gamma_j$. \end{proof} \begin{thebibliography}{blah} \bibitem{AKMRS}{\bf D\ Auckly}, {\bf H\ J\ Kim}, {\bf P\ Melvin}, {\bf D\ Ruberman}, {\bf H\ Schwartz}, {\it Isotopy of surfaces in 4-manifolds after a single stabilization}, Adv. Math. 341 (2019), 609--615. \bibitem{BT}{\bf A\ Bartels}, {\bf P\ Teichner}, {\it All 2-dimensional links are null homotopic}, \\ Geom. Topol. 3 (1999), 235--252. \bibitem{D} {\bf J\ P\ Dax}, {\it \'Etude homotopique des espaces de plongements}, Ann. Sci. \'Ecole Norm. Sup. (4) 5 (1972), 303--377. \bibitem{Edwards} {\bf R\ Edwards}, {\it The 4-dimensional Light Bulb Theorem (after David Gabai)}, Preprint arXiv:1709.04306 [math.GT] (2017). \bibitem{FQ} {\bf M\ Freedman}, {\bf F\ Quinn}, {\it The topology of $4$-manifolds}, Princeton Math. Series 39 (1990). \bibitem{Gab} {\bf D\ Gabai}, {\it The 4-dimensional Light Bulb Theorem}, Preprint arXiv:1705.09989v2 (2017). \bibitem{G2} {\bf D\ Gabai}, {\it Self-referential discs and the light bulb lemma}, Preprint arXiv:2006.15450v1 (2020).
\bibitem{JKRS} {\bf J\ Joseph}, {\bf M\ Klug}, {\bf B\ Ruppik}, {\bf H\ Schwartz}, {\it Unknotting numbers of 2-spheres in the 4-sphere}, Preprint arXiv:2007.13244 [math.GT] (2020). \bibitem{Lau} {\bf F\ Laudenbach}, {\it Sur les 2-sph\`eres d'une vari\'et\'e de dimension 3}, \\ Annals of Math. 97 (1973), no. 1, 57--81. \bibitem{Nor} {\bf R\ Norman}, {\it Dehn's lemma for certain $4$--manifolds}, Invent. Math. 7 (1969), 143--147. \bibitem{PRT} {\bf M\ Powell}, {\bf A\ Ray}, {\bf P\ Teichner}, {\it The 4-dimensional disc embedding theorem and dual spheres}, Preprint arXiv:2006.05209 [math.GT] (2020). \bibitem{Reidemeister} {\bf K\ Reidemeister}, {\it Zur 3-dimensionalen Topologie}, \\ Abh. Math. Sem. Univ. Hamburg 11 (1933), 189--194. \bibitem{ST1} {\bf R\ Schneiderman}, {\bf P\ Teichner}, {\it The group of disjoint 2--spheres in 4--space},\\ Ann. of Math. 190 (2019), no. 3, and arXiv:1708.00358v2 [math.GT] (2017). \bibitem{Schwartz} {\bf H\ Schwartz}, {\it Equivalent non-isotopic spheres in $4$--manifolds}, to appear in Journal of Topology. \bibitem{Singer} {\bf J\ Singer}, {\it Three-dimensional manifolds and their Heegaard diagrams}, Trans. A.M.S. 35 (1933), 88--111. \bibitem{Stong} {\bf R\ Stong}, {\it Uniqueness of $\pi_1$-negligible embeddings in $4$--manifolds: A correction to Theorem 10.5 of Freedman and Quinn}, Topology 32 (1993), no. 4, 677--699. \bibitem{Waldhausen} {\bf F\ Waldhausen}, {\it Heegaard-Zerlegungen der 3-Sph\"are}, Topology 7 (1968), 195--203. \bibitem{Wh} {\bf J\ H\ C\ Whitehead}, {\it A certain exact sequence}, Annals of Math. 52 (1950), 51--110. \end{thebibliography} \end{document}
\begin{document} \title{Unlocking the frequency domain for high-dimensional quantum information processing} \author{Meritxell Cabrejo Ponce} \email{[email protected]} \affiliation{Friedrich Schiller University Jena, Fürstengraben 1, 07743 Jena} \affiliation{Fraunhofer Institute for Applied Optics and Precision Engineering, Albert-Einstein-Strasse 7, 07745 Jena} \author{André Luiz Marques Muniz} \affiliation{Fraunhofer Institute for Applied Optics and Precision Engineering, Albert-Einstein-Strasse 7, 07745 Jena} \author{Marcus Huber} \email{[email protected]} \affiliation{Atominstitut, Technische Universit{\"a}t Wien, Stadionallee 2, 1020 Vienna, Austria} \affiliation{Institute for Quantum Optics and Quantum Information (IQOQI), Austrian Academy of Sciences, Vienna, Austria} \author{Fabian Steinlechner} \email{[email protected]} \affiliation{Fraunhofer Institute for Applied Optics and Precision Engineering, Albert-Einstein-Strasse 7, 07745 Jena} \affiliation{Abbe Center of Photonics, Friedrich Schiller University Jena, Albert-Einstein-Str. 6, 07745 Jena, Germany} \begin{abstract} High-dimensional photonic entanglement is a promising candidate for error-protected quantum information processing with improved capacity. Encoding high-dimensional qudits in the carrier frequency of photons combines ease of generation, universal single-photon gates via electro-optic modulation and wave shaping, and compatibility with fiber transmission for high-capacity quantum communication. Recent landmark experiments have impressively demonstrated quantum interference of a few frequency modes, but the certification of massive-dimensional frequency entanglement has remained an open challenge. Here we report a record-dimensional certification of discretized frequency entanglement, obtained with a novel certification approach that is both highly efficient and nonlocally implementable, opening the possibility of utilizing this encoding in quantum communication.
\end{abstract} \maketitle \section{Introduction} Entanglement is a unique and powerful quantum feature with a multitude of applications in quantum information processing. The nonlocal correlations of entangled states may be used in quantum communications, imaging, metrology, and quantum processors. In the case of photons, polarization-entangled states have traditionally been used to demonstrate a multitude of quantum gates and quantum information protocols \cite{kokLinearOpticalQuantum2007} and the principles of rapidly developing quantum networks \cite{panMultiphotonEntanglementInterferometry2012,anwarEntangledPhotonpairSources2021}. These qubit states are easy to manipulate with linear optics and can be distributed in fiber or free-space links because of their low interaction with the environment. Nevertheless, larger alphabets in quantum communication are actively pursued, and not only to increase the capacity of the quantum channel: high-dimensional encoding also provides a stronger tolerance to noise \cite{eckerOvercomingNoiseEntanglement2019}, an essential asset for overcoming the transmission limits of polarization qubits. On the other hand, an increase in dimensionality can also boost the computational power of quantum computers \cite{lanyonSimplifyingQuantumLogic2009}. In this context, a recent landmark experiment \cite{vigliarErrorprotectedQubitsSilicon2021} used high-dimensional entanglement of propagation paths on a silicon photonic chip to realise error-protected logical qubits and thus improve the performance of a quantum phase estimation algorithm. Harnessing the full state space of photonic degrees of freedom (DOF), such as transverse spatial mode, time, and frequency, will be key to future generations of photonic quantum information processors.
Although spatial \cite{mairEntanglementOrbitalAngular2001,krennGenerationConfirmation1002014,fontaineLaguerreGaussianModeSorter2019} and temporal modes \cite{fransonBellInequalityPosition1989,brendelPulsedEnergyTimeEntangled1999,richartExperimentalImplementationHigher2012} have proven straightforward to operate in very high-dimensional spaces, the frequency DOF has lagged behind in these advances. The frequency domain at optical scales is of particular interest because of its parallelization capabilities, notably its compatibility with telecom multiplexing and frequency modulation techniques. Earlier experiments exploiting electro-optic modulation were able to demonstrate two-dimensional frequency entanglement \cite{olislagerImplementingTwophotonInterference2012}, while the use of pulse shapers and up-conversion processes allowed the characterization of up to four-dimensional states \cite{peerTemporalShapingEntangled2005,bernhardShapingFrequencyentangledQudits2013}. The combination of both components was used later to coherently control the quantum states emerging from integrated resonators \cite{kuesOnchipGenerationHighdimensional2017}. These important building blocks opened the possibility to measure in the superposition basis at optical scales and were used to demonstrate discretized frequency entanglement in few dimensions (up to 4). Since then, approaches have been developed to perform arbitrary manipulations of single-frequency qubits \cite{luFullyArbitraryControl2020}, and the first steps have been taken towards full control of single qudits, their high-dimensional counterpart \cite{luQuantumInterferenceCorrelation2018,luControlledNOTGateFrequencybin2019}. While Hong-Ou-Mandel interference has also been used to verify frequency entanglement \cite{chenVerificationHighdimensionalEntanglement2020,chenTemporalDistinguishabilityHongOuMandel2021}, the requirement of local measurements limits the utility of these methods in quantum networks.
\begin{figure*} \caption{Schematic of our frequency-entangled photon pair source and the state analysis parts. We generate broadband frequency entanglement at telecom wavelength via spontaneous parametric down-conversion (SPDC) in a periodically poled lithium niobate waveguide (ppLN). Afterwards, the photon pairs are coherently manipulated with a pulse shaper and an electro-optic modulator (EOM) to analyze the correlations in a superposition basis. CW: continuous-wave; DEMUX: demultiplexer.} \label{fig:Setup} \end{figure*} Entangled frequency states can also be generated in combination with entanglement in other DOFs, like polarization or even the time domain, whenever the relevant time and frequency properties can be manipulated independently, e.g., because they refer to vastly different time scales. Such hyperentangled states can also be used to enlarge the dimensionality of the system, to generate cluster states \cite{reimerHighdimensionalOnewayQuantum2019} or to perform more advanced quantum gates \cite{imanyHighdimensionalOpticalQuantum2019}. Yet all these approaches can manipulate only a small set of frequency modes of the typically far larger underlying quantum states. Certifying real high-dimensional entanglement is not trivial, particularly in the frequency domain, and its immense potential is still unexploited. In this Letter, we show that it is not always necessary to design quantum sources with a discretized frequency space, such as those built in cavities, and that continuous spectra can therefore also provide access to a massive-dimensional, well-controlled Hilbert space. We show quantum interference with up to 7 modes and $>98\%$ visibility, providing tools for complete state control in a 7-mode state space.
Subsequently, based on previous work characterizing time-bin qudits \cite{martinQuantifyingPhotonicHighDimensional2017,tiranovQuantificationMultidimensionalEntanglement2017}, we certify genuine high-dimensional frequency entanglement without prior assumptions regarding, e.g., the purity of the quantum state. Exploiting a novel bucket detection approach that requires very few measurement settings, resembling the compressed measurements used to characterize spatial correlations \cite{schneelochQuantifyingHighdimensionalEntanglement2018,schneelochQuantifyingEntanglement68billiondimensional2019}, we alleviate the harsh requirements on the number of measurements necessary to characterize the relevant correlations of the state, which grows with dimensionality. Finally, by recovering the information from high-quality quantum interference of two-dimensional (2D) subspaces, we are able to certify a minimum of 33 entangled frequency modes, the highest dimensionality of entanglement reported in the time and frequency degrees of freedom. \section{Setup} In our work, we analyze the frequency content of two-photon states generated in standard $\chi^{(2)}$ non-linear crystals: periodically poled lithium niobate (ppLN) waveguides (see Methods). They efficiently provide continuous and broadband photon pairs via spontaneous parametric down-conversion (SPDC), with 60 nm of bandwidth. The SPDC process is temperature-tuned to a degeneracy wavelength of 1548 nm to cover the entire C band with high uniformity (see inset in Fig. \ref{fig:Setup}). At this point, the frequency space of the generated photon pairs would typically be discretized, e.g., with etalon cavities to carve the spectrum \cite{xieHarnessingHighdimensionalHyperentanglement2015,imanyCharacterizationCoherentQuantum2018}. This approach is useful for the isolation of frequency modes and for performing sideband modulation.
However, a considerable contribution of the photon spectrum is directly rejected, reducing the total throughput. For this reason, we avoid this step and employ the maximum bandwidth per frequency mode. Due to the energy conservation of the SPDC process, the emitted photon pairs are strongly anticorrelated in frequency, and the state can be described as: \begin{equation} |j\rangle_s |j\rangle_i = \int \Pi(\Omega -j\Delta\omega, \Omega +j\Delta\omega)|\omega_0 + \Omega\rangle_s |\omega_0 - \Omega\rangle_i \,d\Omega \end{equation} where $|j\rangle_{s,i}$ labels the $j^{th}$ frequency mode of the signal or idler photon, $\omega_0$ is the degeneracy frequency of the SPDC process, $\Delta\omega$ is the free spectral range (FSR) between modes, and $\Pi$ is the spectral shape of each mode. We discretize the system in bins of 25 GHz bandwidth and the same FSR over the whole C-band, yielding the state: \begin{equation} |\psi_d\rangle = \sum_{j=1}^{d} \alpha_j |j\rangle_s |j\rangle_i \end{equation} The term $\alpha_j$ refers to the phase and amplitude of the mode and is determined by the spectral characteristics of the source. For a very broad and uniform spectrum as here, $\alpha_j\approx1$. After propagation, however, each photon pair corresponding to a mode $j$ accumulates a different phase due to material dispersion. In a quantum key distribution (QKD) scenario, the frequency-entangled photon pairs emerging from a single optical fiber can be distributed into different paths, e.g., via wavelength or polarization demultiplexing. In our experiment, we use only one device simultaneously for both photons, but in principle, manipulation would be just as easily possible at two separate locations.
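As an illustration of the discretized state, the following minimal sketch builds the ideal amplitude grid under the flat-spectrum assumption $\alpha_j\approx1$ (the function names are ours, introduced only for this illustration):

```python
def discretized_state(d):
    """Amplitude grid psi[j][k] = (<j|_s <k|_i) |psi_d> for the ideal
    frequency-anticorrelated state with a flat spectrum (alpha_j = 1)."""
    amp = 1.0 / d ** 0.5
    return [[amp if j == k else 0.0 for k in range(d)] for j in range(d)]

def norm_squared(psi):
    """Total probability; 1 for a normalized state."""
    return sum(abs(a) ** 2 for row in psi for a in row)
```

For $d=102$ only the diagonal entries $|j,j\rangle$ are populated, reflecting the energy anticorrelation, and all 102 Schmidt terms carry equal weight.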
To reveal the entanglement content, we use a commercial pulse shaper to control the phase and amplitude of each of the frequency modes \cite{kuesOnchipGenerationHighdimensional2017}, and electro-optic modulation to achieve mode superpositions \cite{olislagerImplementingTwophotonInterference2012} (see Methods). \section{The superposition basis} For quantum state characterization via full state tomography (FST), evaluation of Bell-type tests, or implementation of QKD protocols, measurements in superposition bases are fundamental to uncover the quantum correlations and statistics of the state. The eigenvectors of these bases may be the superposition of some or all elements of the computational basis, here the frequency basis, with certain phases for each mode. Here, we show that with standard levels of RF signal amplification $(P_\text{max}=26\,\text{dBm})$, it is possible to perform a full superposition of up to 7 modes with a low contribution of accidental coincidence detection events. To prove this, we performed high-dimensional Bell-type tests, also known as CGLMP tests \cite{collinsBellInequalitiesArbitrarily2002}, based on the CHSH inequality for two-photon qubits \cite{clauserProposedExperimentTest1969}. Instead of measuring quantum correlations only for fixed phase settings, we performed a phase scan for all contributing modes \cite{thewBellTypeTestEnergyTime2004}. The measurement projector we use for each photon is: \begin{equation} |\Psi_{proj}\rangle = \frac{1}{\sqrt{d}} \sum_{j=1}^{d} \left(e^{ij\theta_{s,i}}|j\rangle_{s,i}\right) \end{equation} where $\theta_{s,i}$ is the phase applied to the signal or the idler photon, and we use $\theta_s = \theta_i$. The phase of the interference depends on the sum of the signal and idler phases. By scanning their phase, we obtain the quantum interferences shown in Fig. \ref{fig:BellTest} for dimensions $d=2,3,5,7$.
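The expected fringe shape can be sketched numerically: projecting the ideal state onto $|\Psi_{proj}\rangle$ for both photons gives a coincidence probability proportional to $|\sum_{j=1}^{d} e^{ij(\theta_s+\theta_i)}|^2$, a $d$-mode interference pattern with unit visibility in the ideal, loss- and noise-free case (a sketch under our own normalization, not the measured data):

```python
import cmath
import math

def fringe(d, phi):
    """Normalized d-mode two-photon fringe |sum_j exp(i*j*phi)|^2 / d^2,
    with phi = theta_s + theta_i and the peak normalized to 1
    (ideal flat state, alpha_j = 1)."""
    amp = sum(cmath.exp(1j * j * phi) for j in range(1, d + 1)) / d
    return abs(amp) ** 2

def visibility(d, steps=420):
    """Visibility (max - min) / (max + min) of the sampled fringe.
    420 samples hit the exact zeros at phi = 2*pi/d for d = 2, 3, 5, 7."""
    vals = [fringe(d, 2 * math.pi * k / steps) for k in range(steps)]
    vmax, vmin = max(vals), min(vals)
    return (vmax - vmin) / (vmax + vmin)
```

The measured visibilities quoted below are bounded by this ideal value of 1 and are reduced mainly by accidental coincidences and imperfect modulation.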
The visibilities are 96.7\%, 97.7\%, 98.1\% and 98.2\%, respectively, without fitting or subtraction of accidental coincidences, and well above the thresholds of 70.7\%, 77.5\%, 84.6\% and 88.3\% required to rule out local hidden variable theories \cite{thewBellTypeTestEnergyTime2004}. To perform these measurements, we selected states centered on the $6^{th}$ mode. Electro-optic modulation shifted photons from the neighboring modes into the $6^{th}$ mode, and demultiplexing filters (DEMUX) postselected the superposition state. The limited power of our RF signal constrained the efficiency of the photon frequency modulation; loss added with the pulse shaper therefore equalized the contributions of the modes. Even dimensions can also be evaluated using the same parameters. \begin{figure} \caption{Bell-type tests in the frequency domain for dimensions $d = 2, 3, 5, 7$. We chose the modes centered on $|6\rangle$ for the signal and the idler photons. } \label{fig:BellTest} \end{figure} \section{Huge dimensionality certification} While these CGLMP tests provide a (partially) device-independent certification of entanglement and demonstrate the reliability of our devices, they do not easily test for the actual entanglement dimensionality, i.e. the dimension of entanglement needed to reproduce the correlations. Generically, to demonstrate even higher levels of entanglement, one would need to project distant spectral modes into superposition states, which may be limited or physically impossible to perform. In our large frequency space, this would imply unreachable RF power levels for electro-optic modulation, but similar practical limitations are to be expected for any type of encoding. On the other hand, common entanglement certification techniques, such as FST, are expensive procedures that require measurements in at least $(d+1)^2$ bases for bipartite systems, an equivalent of $d^2(d+1)^2$ single-outcome measurement settings.
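To make the scaling concrete, the settings counts can be tallied as follows (a minimal sketch; the function names are ours, and `bucket_settings` anticipates the bucket-detection scheme introduced below, which replaces the $d^2$ frequency-basis settings by $2d$):

```python
def fst_settings(d):
    """Single-outcome settings for full state tomography of a bipartite
    d-dimensional system: at least (d+1)^2 bases with d^2 outcomes each."""
    return d ** 2 * (d + 1) ** 2

def bucket_settings(d):
    """Settings for the bucket approach: d narrowband measurements of the
    correlated pairs |j,j> plus d broadband bucket complements."""
    return 2 * d
```

For our space of $d = 102$ modes this is a reduction from more than $10^8$ single-outcome settings to 204.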
The obvious consequence is that the total number of measurements increases rapidly with dimension. Adapting the techniques developed for the time-bin domain, we show that it is sufficient to characterize the quantum coherence of a few 2D subspaces to certify a high dimensionality of entanglement in the frequency domain. The certification process is structured into two main blocks: the measurement of some elements of the density matrix $\rho$ and the posterior lower bounding of the remaining unknowns. Measurements in the computational basis, that is, the frequency basis, can be performed with standard filters and are, in fact, directly related to the diagonal elements of the density matrix $\langle j, k| \rho |j, k\rangle$. This characterization step alone would usually require $d^2 = 10404$ filter settings, which can take arbitrarily long times. Here, we propose making good use of frequency parallelization and bucket detection of all uncorrelated frequencies with a single, broadband filter setting $\sum_{k\ne j} \langle j, k| \rho |j,k\rangle$ (i.e. 101 modes $\times$ 25 GHz), while still measuring the correlated set $|j,j\rangle$ with narrowband filter settings (25 GHz). This method reduces the high number of measurements required to only $2d$. The results of the maximum frequency correlation are shown in Fig. \ref{fig:Performance}a. The background noise measured for uncorrelated frequencies originates from accidental coincidence detection events due to the high number of single counts and imperfect filters. The detected coincidence-to-accidental ratio (CAR) averaged throughout the space amounts to $1.4\times10^3$. \begin{figure} \caption{(a) Intensity probabilities and (b) visibilities of the two-photon interferences for the 2D subspaces of modes $|j,j+1\rangle$, $|j,j+2\rangle$ and $|j,j+6\rangle$. (c) Comparison of measured data with the expected lower bound with our method.
Note that, from all possible values of the elements that produce a positive density matrix $\rho$, we take the worst-case scenario.} \label{fig:Performance} \end{figure} Outside of the diagonal of $\rho$ we find two types of elements: those close to zero due to energy conservation, as observed from the computational basis measurements and upper limited by the accidentals in our system, and those non-zero elements $\langle j,j | \rho | k, k \rangle$ that indicate the coherence of the entangled state. We now measure some of the coherence elements to which we have access with our measurement system. They can be estimated from the mode amplitudes in the computational basis and the strength of the neighboring interference. That is, we only need to measure the quantum interference between any two modes of the whole system. We measure the interferences for the first, second and sixth neighboring modes, corresponding to the terms $\langle j,j|\rho|j+i,j+i\rangle$ for $i=1,2,6$. The dispersion shifts the interference patterns proportionally to the frequency distance with respect to the center wavelength \cite{imanyCharacterizationCoherentQuantum2018}. Knowledge of the exact dispersion values would allow us to directly measure the maxima and minima of the interference patterns. When the exact value is unknown, as in our case, we can perform a dispersion calibration: a full phase scan of the interference for different frequency modes. Finally, to collect statistics, we record 60 samples of the maxima and minima of the expected interference for each subspace and calculate the visibility. The total number of filter settings is now $2(d-1) + 2(d-2) + 2(d-6) = 6(d-3)$ and the results of all recorded visibilities are presented in Fig. \ref{fig:Performance}b. The average visibilities for the 102 modes with 1st, 2nd, and 6th neighbors are, respectively, 96.85(7)\%, 97.94(5)\% and 96.8(1)\%.
The fact that this high-contrast interference is preserved over the whole investigated space indicates that the quantum state at hand is very close to a maximally entangled state. However, in potentially adversarial scenarios, such as QKD, we do not want to make any assumption on the distributed state. Thus, we proceed with a rigorous analysis to demonstrate entanglement. Indeed, these strong coherences allow us to finally certify a large amount of entanglement without further measurements. Similarly to the methods proposed for entanglement certification in the time domain \cite{tiranovQuantificationMultidimensionalEntanglement2017,martinQuantifyingPhotonicHighDimensional2017}, we lower bound the remaining unknown elements $\langle j,j | \rho | j+i, j+i \rangle$ by using the fact that the density matrix must be positive semidefinite to represent a valid quantum state. Thus, every principal submatrix of $\rho$ must itself be positive semidefinite, and therefore every principal subdeterminant must be non-negative. The unknown elements can be lower-bounded iteratively by solving $3\times3$ subdeterminants, composed of measured parameters and one unknown, and we keep the largest bound extracted from all combinations of submatrices. Notice that this type of bound causes a rather fast loss of information, yet it is sufficient to certify high-dimensional entanglement. To visualize this loss, we compare in Fig. \ref{fig:Performance}c the measured quantities for $\langle j,j|\rho|j+2, j+2\rangle$ and the calculated bound if we did not use those measurements. The resulting submatrix $\langle j,j| \rho | k,k \rangle$ is shown in Fig. \ref{fig:results}a. \begin{figure} \caption{(a) Lower bound of the density submatrix $\langle j,j | \rho | k,k \rangle $ and (b) minimum certified dimensionality of our system, according to the measured data.
Notice that for low space dimension up to $d = 11$, we can certify the maximum dimensionality with very few measurement settings.} \label{fig:results} \end{figure} We can now compare the lower-bounded density matrix with a target state $|\Phi\rangle$ by computing the fidelity $F(\rho,\Phi) = \text{Tr}\left( \sqrt{\sqrt{\Phi}\rho\sqrt{\Phi}} \right)^2$. There exists an upper bound for the fidelity of any state of Schmidt rank $k\leq d$ \cite{ficklerInterfacePathOrbital2014,bavarescoMeasurementsTwoBases2018}. Fidelities with the maximally entangled state above the threshold $B_k(\Phi)=k/d$ indicate a dimensionality of at least $k+1$. We thus choose a maximally entangled state as the target state $|\Phi\rangle=1/\sqrt{d} \sum_{j=1}^{d}|j,j\rangle$ and iteratively calculate the fidelity for $d = 2$ to $d = 102$ and compare it with the threshold values for varying Schmidt rank. The final certification is plotted in Fig. \ref{fig:results}b, where we find at least 33 entangled modes in a space of 101 to 102. It is worth emphasizing that with only these very few measurement settings, $2d$ on the computational basis and $\sim 6d$ on 2D subspaces, we are able to certify 11 dimensions in a space of 11 modes. Higher visibility would directly increase the amount of certified entanglement per number of modes, and further measurements that could fill more elements of the density matrix would also improve certification. \section{Discussion and outlook} To date, although high-dimensional frequency entanglement has been widely accepted to exist in photon pairs generated spontaneously from nonlinear processes, it has never been completely certified beyond a few dimensions. In our work, we have shown methods on how to characterize these quantum states that unquestionably possess a huge dimensionality. Similarly as in the time domain, the limits on the dimensionality for continuously pumped processes depend on the resolution of our devices, here the spectral filters. 
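Two steps of the certification described above can be sketched numerically (the code and its input values are illustrative, not the measured data): the $3\times3$-subdeterminant lower bound on one unknown coherence, and the conversion of a fidelity with the maximally entangled state into a certified dimensionality via the threshold $B_k = k/d$:

```python
import math

def coherence_lower_bound(a, b, c, x, z):
    """Smallest y compatible with det([[a, x, y], [x, b, z], [y, z, c]]) >= 0:
    a, b, c are measured diagonal elements, x and z are measured coherences,
    and y is the unknown coherence (all assumed real, as appropriate for a
    near-uniform state). Solves the quadratic b*y^2 - 2*x*z*y - (abc - az^2 - cx^2) = 0."""
    disc = (x * z) ** 2 + b * (a * b * c - a * z * z - c * x * x)
    if disc < 0:
        raise ValueError("measured entries incompatible with rho >= 0")
    return (x * z - math.sqrt(disc)) / b

def certified_dimension(fidelity, d):
    """Minimum entanglement dimensionality certified by a fidelity with the
    d-dimensional maximally entangled state: F > B_k = k/d implies Schmidt
    rank at least k + 1. Returns the largest certifiable rank."""
    k = 0
    while k < d and fidelity > k / d:
        k += 1
    return k
```

For instance, with unit-normalized diagonal entries and nearest-neighbor coherences of $0.97$, the bound on the next coherence already drops to $0.8818$, illustrating the fast loss of information mentioned above; and any fidelity above $32/102$ certifies at least 33 modes in our full space.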
In this work, we have shown full superpositions for $d=2,3,5,7$ with high interference visibilities, exceeding the highest values reported in the literature (previously available only up to $d=4$). These subspaces are easily exploitable with few fiber-integrated optical components. We have also certified 11-dimensional entanglement in a subspace of 11 modes with only $\sim 8d$ measurement settings, and at least 33-dimensional entanglement in our frequency space of 102 frequency modes. Entanglement in the frequency domain can also be used in combination with entanglement in the time and path DOFs, leading to huge state spaces that can encode vast amounts of information. We hope that these results will motivate further photonic technology development, in particular low-loss electro-optic modulation and wave-shaping technologies, which would ultimately render frequency-coded quantum information encoding a compelling choice for near-term photonic quantum information processing with massive bandwidth. \section*{Funding} This work was supported by the Fraunhofer Internal Programs under Grant No. Attract 066-604178 and by the Federal Ministry of Education and Research of Germany (BMBF) through the project Quantum Photonics Labs (QPL). \appendix \section{Methods} To generate our frequency-entangled state, we use a commercial second harmonic generation (SHG) module, a 40 mm type-0 ppLN waveguide (Covesion). To pump the nonlinear process, we use a standard continuous-wave telecom laser and upconvert it with a second SHG module to better align the SPDC wavelength to the ITU channels. To process the frequency entanglement, we use a telecom pulse shaper (Waveshaper 16000A) to select specific frequency subspaces for the signal and the idler photons and to tune their relative phases. This device limits the operational frequency range of our source to 40 nm of the telecom C-band only.
We divide this space into 102 frequency modes for the signal and the idler photon, with a standard free spectral range (FSR) of 25 GHz and the same bandwidth. This leads to a state space of $102\times102$ dimensions. Note that the subspace selection is an actual frequency discretization procedure, and it can be employed as a resource for flexible bandwidth allocation in reconfigurable QKD networks. The photon frequencies are then modulated with an electro-optic modulator, driven by a radio-frequency sine wave at the same frequency as the FSR. This sideband modulation technique allows us to distribute photons into neighboring modes according to Bessel function amplitudes \cite{capmanyQuantumModelElectrooptical2010}. Choosing a low FSR and bandwidth allows slower modulation signals than in earlier work, where FSR values of 200 GHz \cite{kuesOnchipGenerationHighdimensional2017} down to 50 GHz \cite{imany50GHzspacedCombHighdimensional2018} were used. A lower FSR would be better suited to increase the dimensionality, but would eventually be limited by the resolution of the available optical filters and the number of photons per fraction of spectral bandwidth. Lastly, due to photon scattering into distant spectral modes, postselection is necessary to measure the right superposition states. To reduce the noise that imperfect filters may introduce, we use a narrow DEMUX of 20 GHz bandwidth centered at the corresponding signal and idler frequencies prior to the coincidence measurement. The colorful illustrations at the bottom of Fig. \ref{fig:Setup} depict the sideband modulation and photon scattering. \end{document}
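The sideband picture admits a compact numerical sketch: under pure phase modulation with modulation index $\beta$, the amplitude for a photon to be shifted by $n$ modes is the Bessel function $J_n(\beta)$, so the shift probabilities sum to one. A standard-library series implementation (the value of $\beta$ used below is illustrative, not a measured parameter):

```python
import math

def bessel_j(n, x, terms=30):
    """Bessel function of the first kind, J_n(x), via its power series
    J_n(x) = sum_m (-1)^m / (m! (m+n)!) (x/2)^(2m+n); the truncation is
    ample for the small modulation indices relevant here."""
    s = 0.0
    for m in range(terms):
        s += (-1) ** m / (math.factorial(m) * math.factorial(m + n)) * (x / 2) ** (2 * m + n)
    return s

def sideband_prob(n, beta):
    """Probability of a shift by n modes under phase modulation index beta."""
    return bessel_j(abs(n), beta) ** 2
```

Most of the probability stays within a few sidebands for moderate $\beta$, which is why postselection on the target mode is sufficient.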
\begin{document} \begin{abstract} We provide a rather simple proof of a homogenization result for the bidomain model of cardiac electrophysiology. Departing from a microscopic cellular model, we apply the theory of two-scale convergence to derive the bidomain model. To allow for some relevant nonlinear membrane models, we make essential use of the boundary unfolding operator. There are several complications preventing the application of standard homogenization results, including the degenerate temporal structure of the bidomain equations and a nonlinear dynamic boundary condition on an oscillating surface. \end{abstract} \maketitle \tableofcontents \section{Introduction}\label{sec:Introduction} The bidomain model \cite{Tung78,ColliBook,Sundnes} is widely used as a quantitative description of the electric activity in cardiac tissue. The relevant unknowns are the intracellular ($u_i$) and extracellular ($u_e$) potentials, along with the so-called transmembrane potential ($v:=u_i-u_e$). In this model, the intra- and extracellular spaces are considered as two separate homogeneous domains superimposed on the cardiac domain. The two domains are separated by the cell membrane, creating a discontinuity surface for the cardiac potential. Conduction of electrical signals in cardiac tissue relies on the flow of ions through channels in the cell membrane. In the bidomain model, the celebrated Hodgkin-Huxley \cite{Hodgkin} framework is used to dynamically couple the intra- and extracellular potentials through voltage-gated ionic channels. The bidomain model can be viewed as a PDE system consisting of two degenerate reaction-diffusion equations involving the unknowns $u_i,u_e,v$ and two conductivity tensors $\sigma_i,\sigma_e$. These equations are supplemented by a nonlinear ODE system for the dynamics of the ion channels.
The bidomain model is often derived heuristically by interpreting $\sigma_i,\sigma_e$ as some sort of ``average" conductivities, applying Ohm's electrical conduction law and the continuity equation (conservation of electrical charge) to the intracellular and extracellular domains \cite{ColliBook,Sundnes}. Starting from a more accurate microscopic (cell-level) model of cardiac tissue, with the heterogeneity of the underlying cellular geometry represented in great detail, it is possible to heuristically derive the bidomain model (tissue-level) using the multiple scales method of homogenization. This derivation was first carried out in \cite{Neu:1993aa}. It should be noted that the microscopic model is in general too complex to allow for full organ simulations, although there have been some very recent efforts in that direction \cite{Tveito17}. The complexity of cell-level models, which themselves can be heuristically derived from the Poisson-Nernst-Planck equations \cite{Richardson:2009aa}, motivates the search for simpler homogenized (macroscopic) models. The work \cite{Neu:1993aa} assumes, as we do herein, that cardiac tissue can be viewed as a uniformly oriented periodic assembly of cells (see also \cite{Colli,Henriquez}). There have been some attempts to remove this assumption. We refer to \cite{Keener:1998ab,Keener:1996aa,Richardson:2011aa} for extensions to somewhat more realistic tissue geometries. Despite the widespread use of the bidomain model, there are few mathematically rigorous derivations of the model from a microscopic description of cardiac tissue. From a mathematical point of view, rigorous homogenization is often linked to the study of the asymptotic behavior (convergence) of solutions to PDEs with oscillating coefficients. In the literature several approaches have been developed to handle this type of problem, like Tartar's method of oscillating test functions, $\Gamma$-convergence, two-scale convergence, and the unfolding method.
We refer to \cite{Donato} for an accessible introduction to the mathematics of homogenization and for an overview of the different homogenization methods. We are aware of two earlier works \cite{Amar2013,Pennacchio2005} containing rigorous homogenization results for the bidomain model (but see \cite{Donato:2015aa,Donato:2011aa,Yang:2014aa} for examples of elliptic and parabolic equations on ``two-component" domains). With a fairly advanced proof involving $\Gamma$-convergence, the De Giorgi ``minimizing movement" approach, time-discretization, variational problems, and two limit procedures, the homogenization result in \cite{Pennacchio2005} covers the generalized FitzHugh-Nagumo ionic model \cite{FitzHugh1955}. The proof of the result in \cite{Amar2013} is more basic in the sense that it employs only two-scale convergence arguments, but it handles only a restricted class of ionic models. We mention that there are several complications preventing the application of standard homogenization results (for elliptic/parabolic equations) to the bidomain equations, including its degenerate structure (seen at the tissue-level), resulting from differing anisotropies of the intra- and extracellular spaces, and the highly nonlinear, oscillating dynamic boundary condition (seen at the cell-level). The main contribution of our paper is to provide a simple homogenization proof that can handle some relevant nonlinear membrane models (the generalized FitzHugh-Nagumo model), relying only on basic two-scale convergence techniques. We now explain our contribution in more detail.
The point of departure is the following microscopic model \cite{ColliBook,Colli,Henriquez,Veneroni} for the electric activity in cardiac tissue: \begin{equation}\label{micro} \begin{split} & -\Div \left( \sigma_i^\varepsilon \nabla u_i^{\varepsilon}\right) = s_i^{\varepsilon} \quad \mbox{in} \; (0,T)\times \Omega_i^{\varepsilon}, \\ & -\Div \left( \sigma_e^\varepsilon \nabla u_e^{\varepsilon}\right) = s_e^{\varepsilon} \quad \mbox{in} \; (0,T)\times \Omega_e^{\varepsilon}, \\ & \varepsilon \left( \partial_t {v^{\varepsilon}} + I(v^{\varepsilon},w^{\varepsilon})\right) =-\nu \cdot \sigma_{i}^{\varepsilon} \nabla u_i^{\varepsilon} \quad \mbox{on} \; (0,T)\times \Gamma^{\varepsilon}, \\ & \varepsilon \left( \partial_t {v^{\varepsilon}} + I(v^{\varepsilon},w^{\varepsilon})\right) =-\nu \cdot \sigma_{e}^{\varepsilon} \nabla u_e^{\varepsilon} \quad \mbox{on} \; (0,T)\times \Gamma^{\varepsilon}, \\ & \partial_t w^{\varepsilon} = H(v^{\varepsilon},w^{\varepsilon}) \quad \text{on} \; (0,T)\times\Gamma^{\varepsilon}, \end{split} \end{equation} where $\nu$ denotes the unit normal pointing out of $\Omega_i^{\varepsilon}$ (and into $\Omega_e^{\varepsilon}$). Cardiac tissue consists of an assembly of elongated cylindrical-shaped cells coupled together (end-to-end and side-to-side) to provide intercellular communication. The entire cardiac domain $\Omega\subset \mathbb{R}^3$ is viewed as a ``two-component" domain and split into two $\varepsilon$-periodic open sets $\Omega_i^{\varepsilon}$, $\Omega_e^{\varepsilon}$ corresponding to the intra- and extracellular spaces. The sets $\Omega_i^{\varepsilon},\Omega_e^{\varepsilon}$, which are assumed to be disjoint and connected, are separated by an $\varepsilon$-periodic surface $\Gamma^{\varepsilon}$ representing the cell membrane, so that $\Omega=\Omega_i^{\varepsilon}\cup \Omega_e^{\varepsilon}\cup \Gamma^{\varepsilon}$.
The main geometrical assumption is that the intra- and extracellular domains are $\varepsilon$-dilations of some reference cells $Y_i,Y_e\subset Y:=[0,1]^3$, periodically repeated over $\mathbb{R}^3$. Although our results are valid for general Lipschitz domains, for simplicity of presentation, we assume that the entire cardiac domain $\Omega$ is the open cube \begin{equation}\label{def:domain} \Omega=(0,1)\times(0,1)\times (0,1). \end{equation} In \eqref{micro}, $\sigma_j^{\varepsilon}$ is the conductivity tensor and $s_j^{\varepsilon}$ is the stimulation current, relative to $\Omega_j^{\varepsilon}$ for $j=i,e$. The functions $s_i^\varepsilon,s_e^\varepsilon$ are assumed to be at least bounded in $L^2$, independently of $\varepsilon$. As usual in homogenization theory, the conductivity tensors $\sigma_i^{\varepsilon}, \sigma_e^{\varepsilon}$ are assumed to have the form $$ \sigma_j^\varepsilon(x)=\sigma_j\left(x,\frac{x}{\varepsilon}\right), \qquad j=i,e, $$ where $\sigma_j=\sigma_j(x,y)$ satisfies the usual conditions of uniform ellipticity and periodicity (in $y$). Despite the fact that the inhomogeneities of the domains impose $\varepsilon$-oscillations in the conductivity tensors (via gap junctions), the main source of inhomogeneity in the microscopic model is not the conductivities $\sigma_i^\varepsilon$ and $\sigma_e^\varepsilon$, but the domains $\Omega_i^{\varepsilon}$ and $\Omega_e^{\varepsilon}$ themselves. We allow for inhomogeneous and oscillating conductivities for the sake of generality. We denote by $u_j^{\varepsilon}$ the electric potential in $\Omega_j^{\varepsilon}$ ($j=i,e$). On $\Gamma^{\varepsilon}$, $v^{\varepsilon} := u_i^{\varepsilon}-u_e^{\varepsilon}$ is the transmembrane potential and $I(v^{\varepsilon},w^{\varepsilon})$ is the ionic current depending on $v^{\varepsilon}$ and a gating variable $w^{\varepsilon}$.
The left-hand side of the third and fourth equations in \eqref{micro} describes the current across the membrane as having a capacitive component, depending on the time derivative of the transmembrane potential, and a nonlinear ionic component $I$ corresponding to the chosen membrane model. In this article we consider the generalized FitzHugh-Nagumo model \cite{FitzHugh1955}. We choose to focus on this membrane model for definiteness, but our arguments can be adapted to many other models satisfying reasonable technical assumptions \cite{Boulakia2008,Bourgault,ColliBook,Sundnes,Veneroni,Veneroni:2009aa}. For each fixed $\varepsilon>0$, the functions $\sigma_j^{\varepsilon}, s_j^{\varepsilon}, I,H$ in \eqref{micro} are given and we wish to solve for $(u_i^\varepsilon,u_e^\varepsilon,v^\varepsilon,w^\varepsilon)$. To this end, we must augment the system \eqref{micro} with initial conditions for $v^\varepsilon, w^\varepsilon$ and Neumann-type boundary conditions for $u_i^\varepsilon, u_e^\varepsilon$ (ensuring no current flow out of the heart): \begin{equation}\label{eq:ib-cond} \begin{split} &v^{\varepsilon}|_{t=0}=v_0^\varepsilon \;\; \text{in $\Omega$}, \quad w^{\varepsilon}|_{t=0}=w_0^\varepsilon\;\; \text{in $\Omega$}, \\ & n \cdot\sigma_j \nabla u_j^{\varepsilon} = 0 \;\; \text{on $(0,T)\times \left(\partial \Omega \cap \partial \Omega_j^{\varepsilon}\right)$}, \; j=i,e, \end{split} \end{equation} where $n$ is the outward unit normal to $\Omega$. It is proved in \cite{Colli,Veneroni} that the microscopic bidomain model \eqref{micro}, \eqref{eq:ib-cond} possesses a unique weak solution. This solution satisfies a series of a priori estimates. For us it is essential to know how these estimates depend on the parameter $\varepsilon$. We will therefore outline a proof of these estimates.
The dimensionless number $\varepsilon$ is a small positive number representing the ratio of the microscopic and macroscopic scales; that is, considering $\Omega$ as fixed, it is proportional to the cell diameter. The goal of homogenization is to investigate the limit of a sequence of solutions $\seq{\left(u^{\varepsilon}_i,u_e^{\varepsilon},v^{\varepsilon},w^{\varepsilon}\right)}_{\varepsilon>0}$ to \eqref{micro}, \eqref{eq:ib-cond}. By the multiple scales method \cite{Donato,Colli,Henriquez}, the electric potentials $u^{\varepsilon}_i,u_e^{\varepsilon},v^{\varepsilon}$ and the state variable $w^\varepsilon$ exhibit the following asymptotic expansions in powers of the parameter $\varepsilon$: \begin{align*} u_j^\varepsilon(t,x,y) & =u_j(t,x,y) + \varepsilon u_j^{(1)}(t,x,y) +\varepsilon^2 u_j^{(2)}(t,x,y) +\cdots \quad (j=i,e), \\ v^\varepsilon(t,x,y) & =v(t,x,y) + \varepsilon v^{(1)}(t,x,y) +\varepsilon^2 v^{(2)}(t,x,y) +\cdots, \\ w^\varepsilon(t,x,y) & =w(t,x,y) + \varepsilon w^{(1)}(t,x,y) +\varepsilon^2 w^{(2)}(t,x,y) +\cdots, \end{align*} where $y= x/\varepsilon$ denotes the microscopic variable, and each term in the expansions is a function of both the slow (macroscopic) variable $x$ and the fast (microscopic) variable $y$, periodic in $y$.
Substituting the above expansions into \eqref{micro}, and equating all terms of the same orders in powers of $\varepsilon$, we obtain after some routine arguments that the zero order terms $u_i,u_e,v,w$ are independent of the fast variable $y$ and satisfy the (macroscopic) bidomain model \cite{Colli,Henriquez,Pennacchio2005} \begin{equation}\label{macro} \begin{cases} |\Gamma|\partial_t v -\Div \left( M_i \nabla u_i \right)+ |\Gamma|I(v,w) = |Y_i|s_{i}, & \quad \text{in $(0,T)\times \Omega$}, \\ |\Gamma|\partial_t v + \Div \left( M_e \nabla u_e \right) + |\Gamma|I(v,w) = - |Y_e|s_{e}, & \quad \text{in $(0,T)\times \Omega$},\\ \partial_t w = H(v,w), & \quad \text{in $(0,T)\times \Omega$}, \end{cases} \end{equation} where the homogenized conductivity tensors $M_i(x), M_e(x)$ are given by \begin{equation}\label{Mj} M_j(x)=\int_{Y_j} \sigma_j(x,y)\left( I + \nabla_y \chi_j(x,y) \right) \, dy, \qquad j=i,e, \end{equation} and the $y$-periodic (vector-valued) function $\chi_j=\chi_j(x,y)$ solves the cell problem \begin{equation}\label{chi} \begin{cases} -\Div_y \left( \sigma_j \nabla_y \chi_j \right) = -\Div_y \sigma_j, \quad &\mbox{in } \Omega \times Y_j, \\ \nu \cdot \sigma_j \nabla_y \chi_j = \nu \cdot \sigma_j, \quad &\mbox{on } \Omega\times \Gamma. \end{cases} \end{equation} Note that the effective potentials $u_i, u_e$ in \eqref{macro} are defined at every point of $\Omega$, while in the microscopic model they live on disjoint sets $\Omega_i^{\varepsilon}, \Omega_e^{\varepsilon}$. In \eqref{macro}, \eqref{Mj}, \eqref{chi} the sets $Y_i,Y_e$ are the intra- and extracellular spaces within the reference unit cell $Y$, separated by the cell membrane $\Gamma$ (see Section \ref{sec:preliminaries} for details).
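For orientation, we record two routine consequences of the above (a sketch only; full details follow the multiple-scales computations in \cite{Donato,Colli,Henriquez}). First, since $\nabla = \nabla_x + \frac{1}{\varepsilon}\nabla_y$ along $y=x/\varepsilon$, the $\varepsilon^{-2}$ order terms of the first two equations in \eqref{micro} give
\begin{equation*}
-\Div_y \left( \sigma_j(x,y) \nabla_y u_j(t,x,y) \right) = 0 \quad \text{in } Y_j, \qquad j=i,e,
\end{equation*}
which, combined with $y$-periodicity (and the corresponding no-flux condition on $\Gamma$), forces the zero order terms $u_j$ to be independent of $y$. Second, writing $\chi_j=(\chi_j^1,\chi_j^2,\chi_j^3)$, letting $e_k$ denote the $k$th standard basis vector, and adopting the convention that the $k$th column of $\nabla_y\chi_j$ is $\nabla_y\chi_j^k$, the compact cell problem \eqref{chi} reads componentwise
\begin{equation*}
\Div_y \left( \sigma_j \left( \nabla_y \chi_j^k - e_k \right) \right) = 0 \;\; \text{in } \Omega\times Y_j, \qquad \nu \cdot \sigma_j \left( \nabla_y \chi_j^k - e_k \right) = 0 \;\; \text{on } \Omega\times\Gamma,
\end{equation*}
so that \eqref{Mj} becomes $M_j(x) e_k = \int_{Y_j} \sigma_j(x,y) \left( e_k + \nabla_y \chi_j^k(x,y) \right) dy$.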
It is worth noting that the bidomain model is often stated in terms of the ``geometric" parameter $\chi=\frac{\abs{\Gamma}}{\abs{Y}}=\abs{\Gamma}$ representing the surface-to-volume ratio of the cardiac cells. Regarding the existence and uniqueness of properly defined solutions, standard theory for parabolic-elliptic systems does not apply naturally to the bidomain model \eqref{macro}. A number of works \cite{Karlsen,Boulakia2008,Bourgault,Colli,Veneroni:2009aa} have recently provided well-posedness results for \eqref{macro}, applying differing solution concepts and technical frameworks. As alluded to earlier, we will provide a rigorous derivation of the homogenized system \eqref{macro}, \eqref{Mj}, \eqref{chi} based on the theory of two-scale convergence (see \cite{Nguetseng} and \cite{Allaire,Allaire2,Lukkassen:2002aa}). This result is not covered by standard parabolic homogenization theory. A complication is the nonlinear dynamic boundary condition (posed on an underlying oscillating surface), which makes it difficult to pass to the limit in \eqref{micro} as $\varepsilon\to 0$. The aim is to prove that a sequence $\seq{u_i^{\varepsilon},u_e^{\varepsilon},v^{\varepsilon},w^{\varepsilon}}_{\varepsilon>0}$ of solutions to the microscopic problem two-scale converges to the solution of the bidomain model \eqref{macro}. However, two-scale convergence is not ``strong enough" to justify passing to the limit in the nonlinear boundary condition. To handle this difficulty we use the boundary unfolding operator $\mathcal{T}_{\varepsilon}^b$ \cite{Cioranescu}, establishing strong convergence of $\mathcal{T}_{\varepsilon}^b(v^{\varepsilon})$ in $L^2((0,T)\times\Omega\times \Gamma)$. The boundary unfolding operator makes our proof flexible enough to handle a range of membrane models, exemplified by the generalized FitzHugh-Nagumo model.
Unfolding operators, presented and carefully analyzed in \cite{Cioranescu:2008aa,Cioranescu}, facilitate elementary proofs of classical homogenization results on fixed as well as perforated domains/surfaces. An unfolding operator $\mathcal{T}_\varepsilon$ maps a function $v(x)$ defined on an oscillating domain/surface to a higher dimensional function $\mathcal{T}_{\varepsilon}(v)(x,y)$ on a fixed domain, to which one can apply standard convergence theorems in fixed $L^p$ spaces. Reflecting the ``two-component" nature of the cardiac domain, it makes sense to use two unfolding operators $\mathcal{T}_\varepsilon^{i,e}$, linked to the intra- and extracellular domains $\Omega^\varepsilon_{i,e}$. In this paper, however, we mainly unfold functions defined on the cell membrane, utilizing the boundary unfolding operator $\mathcal{T}_\varepsilon^b$. For somewhat similar unfolding of ``two-component" domains separated by a periodic boundary, see \cite{Donato:2015aa,Donato:2011aa,Yang:2014aa}. For other relevant works that combine two-scale convergence and unfolding methods, we refer to \cite{Neuss,Gahn:2016aa,Graf:2014aa,Marciniak-Czochra:2008aa,Neuss-Radu:2007aa}. Among these, our work borrows ideas mostly from \cite{Neuss,Gahn:2016aa,Neuss-Radu:2007aa}. The remaining part of the paper is organized as follows: In Section \ref{sec:preliminaries}, we collect relevant functional spaces and analysis results. Moreover, we gather definitions and tools linked to two-scale convergence and unfolding operators. In Section \ref{sec:wellposedness}, we define precisely what is meant by a weak solution of the microscopic problem \eqref{micro}, state a well-posedness result, and establish several ``$\varepsilon$-independent" a priori estimates. The main homogenization result is stated and proved in Section \ref{sec:convergence}.
\section{Preliminaries}\label{sec:preliminaries}

\subsection{Some functional spaces and tools}\label{sec:notation}
For a general review of integer and fractional order Sobolev spaces (on Lipschitz domains) and relevant analysis tools, see \cite[Chaps.~2 \& 3]{Boyer:2012aa} and \cite[Chap.~3]{McLean:2000aa}. For relevant background material on mathematical homogenization, we refer to \cite{Donato}. Let $\Omega \subset \mathbb{R}^3$ be a bounded open set with Lipschitz boundary. We denote by $C_0^{\infty}(\Omega)$ the (infinitely) smooth functions with compact support in $\Omega$. The space of smooth $Y$-periodic functions is denoted by $C^{\infty}_{\mathrm{per}}(Y)$. The closure of this space under the norm $\|\nabla(\cdot)\|_{L^2(Y)}$ is denoted by $H_{\mathrm{per}}^1(Y)$. We write $H^s$ for the $L^2$-based Sobolev spaces $W^{s,2}$ ($s\in (0,1]$). We make use of Sobolev spaces on surfaces, as defined for example in \cite[p.~34]{Lions:1972aa} and \cite[p.~96]{McLean:2000aa}. Specifically, we use the (Hilbert) space $H^{1/2}(\Gamma)$, for a two-dimensional Lipschitz surface $\Gamma \subset \Omega$, equipped with the norm
\begin{equation*}
\norm{u}^2_{H^{1/2}(\Gamma)} =\norm{u}_{L^2(\Gamma)}^2+\abs{u}_{H_0^{1/2}(\Gamma)}^2,
\end{equation*}
where
$$
\abs{u}_{H_0^{1/2}(\Gamma)}^2=\int_{\Gamma}\int_{\Gamma} \frac{|u(x)-u(x')|^2}{|x-x'|^3}\, dS(x)\, dS(x'),
$$
and $dS$ is the two-dimensional surface measure. We define the dual space of $H^{1/2}(\Gamma)$ as $H^{-1/2}(\Gamma):=(H^{1/2}(\Gamma))^*$, equipped with the dual norm
$$
\norm{u}_{H^{-1/2}(\Gamma)} :=\sup_{\substack{\phi\in H^{1/2}(\Gamma)\\ \norm{\phi}_{H^{1/2}(\Gamma)}=1}} \left \langle u, \phi \right\rangle_{H^{-1/2}(\Gamma),H^{1/2}(\Gamma)}.
$$
The following trace inequality holds:
\begin{equation}\label{trace1}
\norm{u|_{\Gamma}}_{H^{1/2}(\Gamma)} \leq C \norm{u}_{H^1(\Omega)}, \quad u \in H^1(\Omega).
\end{equation}
Any function in $H^{1/2}(\Gamma)$ can be characterized as the trace of a function in $H^1(\Omega)$. The trace map has a continuous right inverse $\mathcal{I}:H^{1/2}(\Gamma)\rightarrow H^1(\Omega)$, satisfying
\begin{equation}\label{inverseTrace}
\norm{\mathcal{I}(u)}_{H^1(\Omega)} \leq C\norm{u}_{H^{1/2}(\Gamma)}, \quad \forall u \in H^{1/2}(\Gamma),
\end{equation}
where the constant $C$ depends only on $\Gamma$. We need the Sobolev inequality
\begin{equation}\label{SobolevInequality}
\norm{u}_{L^4(\Gamma)} \leq C\norm{u}_{H^{1/2}(\Gamma)}.
\end{equation}
Indeed, $H^{1/2}(\Gamma)$ is continuously embedded in $L^p(\Gamma)$ for $p\in [1,4]$. This embedding is compact for $p\in [1,4)$. In particular, $H^{1/2}(\Gamma)$ is compactly embedded in $L^2(\Gamma)$. Let $X$ be a separable Banach space and $p\in [1,\infty]$. We routinely make use of Lebesgue-Bochner spaces such as $L^p(\Omega;X)$ and $L^p(0,T;X)$. We also use the spaces of continuous functions from $\Omega$ to $X$ and from $(0,T)$ to $X$, denoted by $C(\Omega;X)$ and $C(0,T;X)$, respectively, and the similar spaces with $C$ replaced by $C^p$ or $C^p_0$. If $X$ is a Banach space, then $X/\mathbb{R}$ denotes the space consisting of classes of functions in $X$ that are equal up to an additive constant. Recall that $H^{1/2}(\Gamma)$ is a Hilbert space embedded in a continuous and dense way in $L^2(\Gamma)$.
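Since this embedding is dense, $L^2(\Gamma)$ may in turn be identified with a subspace of $H^{-1/2}(\Gamma)$, and the duality pairing extends the $L^2(\Gamma)$ inner product. We record, for later convenience, the resulting elementary bound (a standard consequence of the Gelfand triple $H^{1/2}(\Gamma)\subset L^2(\Gamma)\subset H^{-1/2}(\Gamma)$):
\begin{equation*}
\norm{u}_{L^2(\Gamma)}^2
= \left( u, u \right)_{L^2(\Gamma)}
= \left\langle u, u \right\rangle_{H^{-1/2}(\Gamma),H^{1/2}(\Gamma)}
\le \norm{u}_{H^{-1/2}(\Gamma)}\norm{u}_{H^{1/2}(\Gamma)},
\qquad u\in H^{1/2}(\Gamma).
\end{equation*}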
The (Lions-Magenes) integration-by-parts formula holds for functions $u_1,u_2$ that belong to the Banach space
\begin{align*}
\mathcal{V}_{\Gamma,T} & = \Bigl\{ u \in L^2(0,T; H^{1/2}(\Gamma))\cap L^4((0,T)\times \Gamma) \, \big| \, \\
& \qquad\qquad \partial_t u \in L^2(0,T;H^{-1/2}(\Gamma))+L^{4/3}((0,T)\times \Gamma) \Bigr\},
\end{align*}
equipped with the norm
$$
\norm{u}_{\mathcal{V}_{\Gamma,T}}
=\norm{u}_{L^2(0,T;H^{1/2}(\Gamma))\cap L^4((0,T)\times \Gamma)}
+\norm{\partial_t u}_{L^2(0,T;H^{-1/2}(\Gamma))+L^{4/3}((0,T)\times \Gamma)},
$$
where $\norm{u}_{X_1\cap X_2}= \max \left(\norm{u}_{X_1},\norm{u}_{X_2} \right)$ and $\norm{u}_{X_1+X_2}=\inf\limits_{u=u_1+u_2} \left(\norm{u_1}_{X_1} +\norm{u_2}_{X_2}\right)$. Indeed, for $u_1,u_2 \in \mathcal{V}_{\Gamma,T}$, the map $[0,T]\ni t\mapsto \left ( u_1(t), u_2(t) \right)_{L^2(\Gamma)}$ is continuous and
\begin{equation}\label{eq:int-by-parts-general}
\begin{split}
& \int_{t_1}^{t_2} \left \langle \partial_t u_1, u_2 \right\rangle\, dt
+\int_{t_1}^{t_2} \left \langle \partial_t u_2,u_1 \right\rangle\, dt \\
& \qquad = \left ( u_1(t_2), u_2(t_2) \right)_{L^2(\Gamma)}
-\left ( u_1(t_1), u_2(t_1) \right)_{L^2(\Gamma)},
\end{split}
\end{equation}
for all $t_1,t_2\in [0,T]$, $t_1<t_2$, where $\left\langle \cdot, \cdot \right \rangle$ denotes the duality pairing between $H^{-1/2}(\Gamma)+L^{4/3}(\Gamma)$ and $H^{1/2}(\Gamma)\cap L^4(\Gamma)$. For a proof of \eqref{eq:int-by-parts-general} that can be adapted to our situation, see e.g.~\cite[p.~99]{Boyer:2012aa}. Taking $u_1=u_2=u\in \mathcal{V}_{\Gamma,T}$, we obtain the chain rule
\begin{equation}\label{eq:chain-rule}
\begin{split}
\int_{t_1}^{t_2} \left \langle \partial_t u, u \right \rangle \, dt
= \frac12 \norm{u(t_2)}_{L^2(\Gamma)}^2
-\frac12 \norm{u(t_1)}_{L^2(\Gamma)}^2.
\end{split}
\end{equation}
Adapting standard arguments (see e.g.~\cite[p.~101]{Boyer:2012aa}), the embedding
\begin{equation}\label{cont-embeding}
\mathcal{V}_{\Gamma,T}\hookrightarrow C(0,T; L^2(\Gamma))
\end{equation}
is continuous. Indeed, this result follows from the continuity of the squared norm $t\mapsto \norm{u(t)}_{L^2(\Gamma)}^2$ (see above) and the weak continuity of $u$ in $L^2(\Gamma)$. The latter results from an easily obtained bound on $u$ in $L^\infty(0,T;L^2(\Gamma))$ and the continuity of $u$ in $H^{-1/2}(\Gamma)$, both facts being deducible from \eqref{eq:int-by-parts-general}. Let us dwell a bit further on the time continuity of functions in $\mathcal{V}_{\Gamma,T}$. By \eqref{SobolevInequality}, $H^{1/2}(\Gamma)\subset L^4(\Gamma)$ and so $L^{4/3}(\Gamma)\subset H^{-1/2}(\Gamma)$. Therefore,
\begin{equation*}
L^2(0,T;H^{-1/2}(\Gamma))+L^{4/3}((0,T)\times \Gamma)
\subset L^{4/3}(0,T;H^{-1/2}(\Gamma)).
\end{equation*}
With $u_1=u\in \mathcal{V}_{\Gamma,T}$ and $u_2=\phi\in H^{1/2}(\Gamma)$ in \eqref{eq:int-by-parts-general}, it follows that
\begin{align*}
\left ( u(t_2)-u(t_1), \phi \right)_{L^2(\Gamma)}
& =\int_{t_1}^{t_2} \left \langle \partial_t u, \phi \right \rangle \, dt \\
& \le \norm{\phi}_{H^{1/2}(\Gamma)} \int_{t_1}^{t_2} \norm{\partial_t u}_{H^{-1/2}(\Gamma)}\, dt \\
& \le \norm{\phi}_{H^{1/2}(\Gamma)} \norm{\partial_t u}_{L^{4/3}(t_1,t_2;H^{-1/2}(\Gamma))} (t_2-t_1)^{1/4}.
\end{align*}
Fix a small shift $\Delta_t>0$. Specifying $t_1=t\in (0,T-\Delta_t)$, $t_2=t+\Delta_t$, and $\phi=u(t+\Delta_t,\cdot)-u(t,\cdot)$ gives
\begin{align*}
&\norm{u(t+\Delta_t,\cdot)-u(t,\cdot)}_{L^2(\Gamma)}^2 \\
& \qquad \le \norm{u(t+\Delta_t,\cdot)-u(t,\cdot)}_{H^{1/2}(\Gamma)}
\norm{\partial_t u}_{L^{4/3}(0,T;H^{-1/2}(\Gamma))} \Delta_t^{1/4}.
\end{align*}
Integrating this inequality over $t\in (0,T-\Delta_t)$, accompanied by a few elementary manipulations, results in the temporal translation estimate
\begin{equation*}
\begin{split}
& \norm{u(\cdot+\Delta_t,\cdot) -u(\cdot,\cdot)}_{L^2(0,T-\Delta_t;L^2(\Gamma))} \\
& \quad \le C_T\norm{u}_{L^2(0,T;H^{1/2}(\Gamma))}^{1/2}
\norm{\partial_t u}_{L^{4/3}(0,T;H^{-1/2}(\Gamma))}^{1/2} \Delta_t^{1/8},
\quad u\in \mathcal{V}_{\Gamma,T},
\end{split}
\end{equation*}
where $C_T=2^{1/2}T^{1/4}$. A similar estimate holds for negative $\Delta_t$. There is a compact embedding of $\mathcal{V}_{\Gamma,T}$ in $L^2(0,T;L^2(\Gamma))$. As pointed out above, $\mathcal{V}_{\Gamma,T}$ is a subset of $\seq{u\in L^2(0,T;H^{1/2}(\Gamma)):\partial_t u \in L^{4/3}(0,T;H^{-1/2}(\Gamma))}$, which is compactly embedded in $L^2(0,T;L^2(\Gamma))$ by the Aubin-Lions theorem. We need a generalization of this result due to Simon \cite{Simon:1987vn}. Given two Banach spaces $X_1\subset X_0$, with $X_1$ compactly embedded in $X_0$, let $\mathcal{K}$ be a collection of functions in $L^p(0,T;X_0)$, $p\in [1,\infty]$. The work \cite{Simon:1987vn} supplies several results ensuring the compactness of $\mathcal{K}$ in $L^p(0,T;X_0)$ (in $C([0,T];X_0)$ if $p=\infty$). For example, we can assume that $\mathcal{K}$ is bounded in $L^1_{\mathrm{loc}}(0,T;X_1)$ and
$$
\norm{u(\cdot+\Delta_t)-u}_{L^p(0,T-\Delta_t;X_0)}\to 0
\quad \text{as $\Delta_t\to 0$, uniformly for $u\in \mathcal{K}$},
$$
cf.~\cite[Theorem 3]{Simon:1987vn}. We apply this result with $p=2$, $X_1=H^{1/2}(\Gamma^{\varepsilon})$, $X_0=L^2(\Gamma^{\varepsilon})$. Another result involves a third Banach space $X_{-1}$ (e.g.~$X_{-1}=H^{-1/2}(\Gamma^{\varepsilon})$), such that $X_1\subset X_0\subset X_{-1}$ and $X_1$ is compactly embedded in $X_0$.
Compactness of $\mathcal{K}$ in $L^p(0,T;X_0)$ follows if the set $\mathcal{K}$ is bounded in $L^p(0,T;X_1)$ and, as $\Delta_t\to 0$, $\norm{u(\cdot+\Delta_t)-u}_{L^p(0,T-\Delta_t;X_{-1})}\to 0$, uniformly for $u\in \mathcal{K}$ \cite[Theorem 5]{Simon:1987vn}.

\subsection{Two-scale convergence}
\label{sec:twoScale}
Recall that $\Omega\subset \mathbb{R}^3$ denotes the entire (connected, bounded, open) cardiac domain, assumed to be of the form \eqref{def:domain}. The assumption \eqref{def:domain} simplifies the presentation. With mild modifications of the upcoming proofs, the results remain valid for general domains with Lipschitz boundary. Let $Y$ be a reference unit cell in $\mathbb{R}^3$, which we fix to be the unit cube $Y := [0,1]^3$. Let $Y_i$ and $Y_e$ be the (disjoint, connected, open) intra and extracellular spaces within $Y$, separated by the cell membrane $\Gamma$:
$$
\overline{Y_i} \cup \overline{Y_e} = Y, \quad \Gamma = \partial Y_i \setminus \partial Y.
$$
Denote by $K^{\varepsilon}$ the set of indices $k\in \mathbb{Z}^3$ for which $\bigcup_{k\in K^\varepsilon} \varepsilon\left(k+Y \right)=\overline{\Omega}$. We define the intracellular domain $\Omega_i^{\varepsilon}$, the extracellular domain $\Omega_e^{\varepsilon}$, and the cell membrane $\Gamma^{\varepsilon}$ as
\begin{equation}\label{domains-escaled}
\Omega_j^{\varepsilon} = \bigcup_{k\in K^{\varepsilon}}\varepsilon(k+Y_j), \quad j=i,e,
\qquad \Gamma^{\varepsilon} = \bigcup_{k\in K^\varepsilon} \varepsilon(k+\Gamma).
\end{equation}
Both sets $\Omega_i^{\varepsilon},\Omega_e^{\varepsilon}$ are connected Lipschitz domains, see Figure 1.1. Note, however, that it is impossible to have both $\Omega_i^{\varepsilon}$ and $\Omega_e^{\varepsilon}$ connected in a two-dimensional picture.
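For later orientation, and to motivate the normalizing factors of $\varepsilon$ appearing in surface integrals below, we record an elementary scaling count, valid under the exact tiling assumption above (recall $|Y|=1$, so the number of cells is $|K^{\varepsilon}|=|\Omega|\,\varepsilon^{-3}$):
\begin{equation*}
|\Omega_j^{\varepsilon}| = |K^{\varepsilon}|\,\varepsilon^3 |Y_j| = |Y_j|\,|\Omega|,
\qquad
|\Gamma^{\varepsilon}| = |K^{\varepsilon}|\,\varepsilon^2 |\Gamma|
= |\Gamma|\,|\Omega|\,\varepsilon^{-1},
\end{equation*}
since each rescaled cell $\varepsilon(k+Y)$ has volume $\varepsilon^3$ and each membrane piece $\varepsilon(k+\Gamma)$ has area $\varepsilon^2|\Gamma|$. The volumes of the intra- and extracellular domains thus stay of order one, while the total membrane area blows up like $\varepsilon^{-1}$; this is why integrals over $\Gamma^{\varepsilon}$ are weighted by $\varepsilon$ throughout.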
\begin{figure}
% TikZ source omitted: the figure depicts the periodic tiling of $\Omega$ by
% $\varepsilon$-cells, with the subdomains $Y_i$, $Y_e$ and the membrane
% $\Gamma$ marked in the reference cell, and the relation $y=x/\varepsilon$
% between the slow and fast variables.
\caption{The rescaled sets $\Omega_i^{\varepsilon}$, $\Omega_e^{\varepsilon}$, $\Gamma^{\varepsilon}$ (left) and the unit cell $Y$ (right).}
\end{figure}
To derive estimates for the microscopic model, we employ the following trace inequality for $\varepsilon$-periodic hypersurfaces:
\begin{equation}\label{trace}
\varepsilon \norm{u|_{\Gamma^{\varepsilon}}}^2_{L^2(\Gamma^{\varepsilon})}
\leq C \left( \norm{u}^2_{L^2(\Omega_j^{\varepsilon})}
+ \varepsilon^2 \norm{\nabla u}^2_{L^2(\Omega_j^{\varepsilon})} \right),
\quad u\in H^1(\Omega_j^{\varepsilon}), \; j=i,e,
\end{equation}
for some constant $C$ independent of $\varepsilon$, cf.~\cite[Lemma 3]{Hornung:1991aa} or \cite[Lemma 4.2]{Marciniak-Czochra:2008aa}. We need a uniform Poincar\'{e} inequality for perforated domains \cite{Cioranescu}.
\begin{lemma}
There exists a constant $C$, independent of $\varepsilon>0$, such that
\begin{equation}\label{Poincare}
\norm{u -\frac{1}{\abs{\Omega_j^{\varepsilon}}} \int_{\Omega_j^{\varepsilon}} u \,dx }_{L^2\left(\Omega_j^{\varepsilon}\right)}
\leq C \norm{\nabla u}_{L^2\left(\Omega_j^{\varepsilon}\right)},
\end{equation}
for all $u \in H^1\left(\Omega_j^{\varepsilon}\right)$, $j=i,e$.
\end{lemma}
Estimate \eqref{Poincare} holds under mild regularity assumptions on the perforated domains; a Lipschitz boundary is more than sufficient (but connectedness is essential).
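The scaling in \eqref{trace} can be traced back to the fixed-cell trace inequality \eqref{trace1}. The following single-cell sketch (constants not optimized) shows where the factors of $\varepsilon$ come from. On the cell $\varepsilon(k+Y_j)$, set $u_k(y):=u(\varepsilon(k+y))$; changing variables ($dS\mapsto \varepsilon^2\,dS(y)$, $dx\mapsto \varepsilon^3\,dy$, $\nabla_y u_k = \varepsilon (\nabla u)(\varepsilon(k+y))$) and applying the trace inequality on the reference cell $Y_j$,
\begin{align*}
\int_{\varepsilon(k+\Gamma)} |u|^2\, dS
&= \varepsilon^2 \int_{\Gamma} |u_k|^2\, dS(y)
\le C\varepsilon^2 \left( \int_{Y_j} |u_k|^2\, dy
+ \int_{Y_j} |\nabla_y u_k|^2\, dy \right) \\
&= C \varepsilon^{-1} \left( \int_{\varepsilon(k+Y_j)} |u|^2\, dx
+ \varepsilon^2 \int_{\varepsilon(k+Y_j)} |\nabla u|^2\, dx \right).
\end{align*}
Multiplying by $\varepsilon$ and summing over $k\in K^{\varepsilon}$ yields \eqref{trace}.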
Recall that a sequence $\seq{u^{\varepsilon}}_{\varepsilon>0} \subset L^2((0,T)\times \Omega)$ \textit{two-scale converges} to $u$ in $L^2((0,T)\times \Omega ;L^2_{\mathrm{per}}(Y))$ if
\begin{equation}\label{def:two-scale}
\int_0^T\int_{\Omega} u^{\varepsilon}(t,x)\phi \left(t,x,\frac{x}{\varepsilon}\right) \, dx \, dt
\overset{\varepsilon\to 0}{\to}
\int_0^T \int_{\Omega} \int_Y u(t,x,y) \phi(t,x,y) \, dy\, dx\, dt,
\end{equation}
for all $\phi\in C([0,T]\times \overline{\Omega};C_{\mathrm{per}}(Y))$. We express this symbolically as
$$
u^{\varepsilon}\overset{2}{\rightharpoonup} u.
$$
By density properties, the convergence \eqref{def:two-scale} also holds for test functions $\phi$ from $L^2((0,T)\times \Omega;C_{\mathrm{per}}(Y))$ \cite[p.~176]{Donato}. The two-scale compactness theorem \cite{Allaire,Donato} is of fundamental importance.
\begin{theorem}[two-scale compactness]\label{twoL2}
Let $\{u^{\varepsilon}\}_{\varepsilon>0}$ be a bounded sequence in $L^2((0,T)\times \Omega)$, $\norm{u^{\varepsilon}}_{L^2((0,T)\times \Omega)} \leq C$ $\forall \varepsilon>0$. Then there exist a subsequence $\varepsilon_n\to 0$ and a function $u\in L^2( (0,T)\times \Omega;L^2(Y))$ such that $u^{\varepsilon_n}$ two-scale converges to $u$ as $n \rightarrow \infty$.
\end{theorem}
Consider a sequence $\{u_j^{\varepsilon}\}_{\varepsilon>0}$ of functions defined on the perforated domain $(0,T)\times \Omega_j^{\varepsilon}$, $j=i,e$. We write $\widetilde{u_j^\varepsilon}$ for the zero-extension of $u_j^\varepsilon$ to $(0,T)\times\Omega$:
\begin{equation*}
\widetilde{u_j^\varepsilon}(t,x) :=
\begin{cases}
u_j^\varepsilon(t,x) & \text{if} \, (t,x)\in (0,T)\times \Omega_j^{\varepsilon}, \\
0 & \text{if} \, (t,x) \in (0,T)\times\left(\Omega \setminus \Omega_j^{\varepsilon}\right).
\end{cases}
\end{equation*}
By Theorem \ref{twoL2}, $\seq{\widetilde{u_j^\varepsilon}}_{\varepsilon>0}$ has a two-scale convergent subsequence, provided we know that $\|u_j^{\varepsilon}\|_{L^2((0,T)\times \Omega_j^{\varepsilon})}\leq C$ $\forall \varepsilon>0$. However, this is not true in general for the gradient of $\widetilde{u_j^{\varepsilon}}$, even if $\norm{u_j^\varepsilon}_{L^2(0,T;H^1(\Omega_j^{\varepsilon}))}\leq C$, since the extension by zero creates a discontinuity across $\Gamma^{\varepsilon}$. Instead, the following statement holds true for the gradient:
\begin{lemma}\label{twoH1perf}
Fix $j\in\{i,e\}$ and suppose $u^{\varepsilon}=u_j^\varepsilon$ satisfies $\norm{u^{\varepsilon}}_{L^2(0,T;H^1(\Omega_j^{\varepsilon}))} \leq C$ $\forall \varepsilon>0$. Then there exist a subsequence $\varepsilon_n\to 0$ and functions $u\in L^2(0,T;H^1(\Omega))$, $u_1\in L^2((0,T)\times \Omega;H_{\mathrm{per}}^1(Y_j))$ such that as $n\to \infty$,
\begin{equation*}
\begin{split}
\widetilde{u^{\varepsilon_n}} & \overset{2}{\rightharpoonup}\mathbbm{1}_{Y_j}(y)u(t,x) \; \text{in} \; L^2((0,T)\times \Omega;L_{\mathrm{per}}^2(Y)), \\
\widetilde{\nabla u^{\varepsilon_n} } & \overset{2}{\rightharpoonup} \mathbbm{1}_{Y_j}(y) \big( \nabla_x u(t,x) + \nabla_y u_1(t,x,y) \big) \; \text{in} \; L^2((0,T)\times\Omega;L_{\mathrm{per}}^2(Y)).
\end{split}
\end{equation*}
Here, $\mathbbm{1}_{Y_j}(y)$ denotes the characteristic function of $Y_j$,
\begin{equation*}
\mathbbm{1}_{Y_j}(y) =
\begin{cases}
1 \quad \text{if} \;\; y\in Y_j, \\
0 \quad \text{if} \;\; y\notin Y_j.
\end{cases}
\end{equation*}
\end{lemma}
For a proof of this lemma in the time independent case, see \cite[Theorem 2.9]{Allaire}. The extension to time dependent functions is straightforward. There is an extension of two-scale convergence to periodic surfaces \cite{Allaire2}.
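Before turning to surfaces, we record the standard example showing what the two-scale limit retains that the weak limit loses (this example is included for orientation only). Take $u^{\varepsilon}(x)=\psi(x/\varepsilon)$ with $\psi\in C_{\mathrm{per}}(Y)$. Testing against $\phi(t,x,x/\varepsilon)$ as in \eqref{def:two-scale} gives
\begin{equation*}
u^{\varepsilon} \overset{2}{\rightharpoonup} \psi(y),
\qquad\text{whereas}\qquad
u^{\varepsilon} \rightharpoonup \int_Y \psi(y)\, dy
\quad \text{weakly in } L^2(\Omega).
\end{equation*}
The weak limit only sees the cell average of the oscillation, while the two-scale limit keeps the full oscillation profile in the fast variable $y$.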
Recall that a periodic surface $\Gamma^{\varepsilon}$ is given by
\begin{equation*}
\Gamma^{\varepsilon}:= \left\{ x\in \Omega \;\Big|\; \frac{x}{\varepsilon} \in k+\Gamma \; \text{for some } k\in \mathbb{Z}^3 \right\},
\end{equation*}
where $\Gamma \subset Y$ is a surface in the unit cell. Since $|\Gamma^{\varepsilon}| \sim \varepsilon^{-1}$, it is necessary to introduce a normalizing factor in the definition of two-scale convergence on surfaces. A sequence $\{v^{\varepsilon}\}_{\varepsilon>0}$ of functions in $L^2((0,T)\times \Gamma^{\varepsilon})$ \textit{two-scale converges} to $v$ in $L^2((0,T)\times \Omega; L^2(\Gamma))$, written
$$
v^{\varepsilon}\overset{2-\mathrm{S}}{\rightharpoonup} v,
$$
if, for all $\phi \in C([0,T]\times \overline{\Omega};C_{\mathrm{per}}(\Gamma))$,
\begin{align*}
&\lim_{\varepsilon\to 0}\varepsilon \int_0^T\int_{\Gamma^{\varepsilon}} v^{\varepsilon}(t,x) \phi\left(t,x, \frac{x}{\varepsilon} \right) \,dS(x) \, dt \\
& \qquad \quad = \int_0^T \int_{\Omega} \int_{\Gamma} v(t,x,y)\phi(t,x,y) \, dS(y)\, dx\, dt.
\end{align*}
As with \eqref{def:two-scale}, this convergence continues to hold for test functions $\phi$ that belong to $L^2((0,T)\times\Omega;C_{\mathrm{per}}(\Gamma))$. There is a version of Theorem \ref{twoL2} for functions on periodic surfaces \cite{Allaire2}.
\begin{theorem}[two-scale compactness on surfaces]\label{twoS}
Suppose $\{v^{\varepsilon}\}_{\varepsilon>0}$ is a sequence of functions in $L^2((0,T)\times \Gamma^{\varepsilon})$ satisfying
\begin{equation}\label{two-scale-cond}
\varepsilon \int_0^T\int_{\Gamma^{\varepsilon}} \abs{v^{\varepsilon}}^2\, dS\, dt \leq C,
\end{equation}
for some constant $C$ independent of $\varepsilon>0$. Then there exist a subsequence $\varepsilon_n\to 0$ and a function $v \in L^2((0,T)\times \Omega ;L^2(\Gamma))$ such that as $n\to \infty$,
$$
v^{\varepsilon_n}\overset{2-\mathrm{S}}{\rightharpoonup} v.
$$
\end{theorem}
One can characterize the two-scale limit of traces of bounded sequences in $L^2(0,T;H^1(\Omega_j^{\varepsilon}))$ as the trace of the two-scale limit \cite{Allaire2}.
\begin{lemma}\label{relation trace}
Fix $j\in\{i,e\}$. Suppose $u^{\varepsilon}=u_j^\varepsilon$ satisfies $\norm{u^{\varepsilon}}_{L^2(0,T;H^1(\Omega_j^{\varepsilon}))} \leq C$ $\forall \varepsilon>0$ and, cf.~Lemma \ref{twoH1perf}, $\widetilde{u^{\varepsilon}} \overset{2}{\rightharpoonup} \mathbbm{1}_{Y_j}(y)u(t,x)$ in $L^2((0,T)\times\Omega;L^2(Y))$. Let
$$
g^{\varepsilon}:= u^{\varepsilon}\big|_{\Gamma^{\varepsilon}} \in L^2((0,T)\times \Gamma^{\varepsilon} )
$$
be the trace of $u^{\varepsilon}$ on $\Gamma^{\varepsilon} = \partial \Omega_j^{\varepsilon} \setminus \partial \Omega$. Then, up to a subsequence,
$$
g^{\varepsilon} \overset{2-\mathrm{S}}{\rightharpoonup} g := \mathbbm{1}_{\Gamma}(y)u(t,x).
$$
\end{lemma}
\begin{remark}
In view of Lemma \ref{relation trace}, we have (in the sense of measures)
$$
\varepsilon \, u^{\varepsilon}\big|_{\Gamma^{\varepsilon}} \,dS\, dt \overset{\star}{\rightharpoonup} \abs{\Gamma}\, u\, dx\, dt,
\qquad
\widetilde{u^{\varepsilon}}\,dx\, dt \overset{\star}{\rightharpoonup} \abs{Y_j}\, u \,dx\, dt.
$$
\end{remark}

\subsection{Unfolding operators}
\label{sec:unfolding}
An alternative approach to studying convergence on oscillating surfaces $\Gamma^{\varepsilon}$ is provided by the boundary unfolding operator \cite{Cioranescu}. For any $x\in \mathbb{R}^3$, we have the decomposition $x = \lfloor x\rfloor + \{ x\}$, where $\lfloor x \rfloor\in \mathbb{Z}^3$ and $\{x\}\in [0,1]^3$ denote the integer and fractional parts of $x$, respectively.
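As a concrete illustration of this componentwise decomposition (with the convention $\lfloor -1.3\rfloor=-2$), for $x=(2.7,\,-1.3,\,0.4)$ we have
\begin{equation*}
\lfloor x \rfloor = (2,\,-2,\,0), \qquad
\{x\} = (0.7,\,0.7,\,0.4), \qquad
x = \lfloor x \rfloor + \{x\}.
\end{equation*}
In the rescaled decomposition below, $\varepsilon\lfloor x/\varepsilon\rfloor$ locates the corner of the cell containing $x$, while $\{x/\varepsilon\}$ gives the position within the reference cell.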
For later use, note the following simple properties, which hold for any $x,\bar x\in \mathbb{R}^3$, $n\in \mathbb{Z}^3$:
$$
\lfloor x+n\rfloor=\lfloor x \rfloor+n, \quad \{ x+n\} = \{ x\}, \quad \lfloor x+\bar x\rfloor\le \lfloor x\rfloor +\lfloor \bar x\rfloor+(1,1,1).
$$
Applying the above decomposition to $x/\varepsilon$ gives
$$
x = \varepsilon \left( \lfloor x/\varepsilon \rfloor +\{ x / \varepsilon\}\right),
$$
where $\lfloor x/\varepsilon \rfloor\in \mathbb{Z}^3$, $\{ x / \varepsilon\}\in [0,1]^3$. The \textit{boundary unfolding} operator $\mathcal{T}_{\varepsilon}^b$ is defined by
\begin{equation}\label{unfolding}
\begin{split}
& \mathcal{T}_{\varepsilon}^b: L^2(0,T;L^2(\Gamma^{\varepsilon}))\to L^2(0,T;L^2(\Omega\times \Gamma)),\\
& \mathcal{T}_{\varepsilon}^b (v)(t,x,y) = v\left(t,\varepsilon \left\lfloor \frac{x}{\varepsilon}\right\rfloor + \varepsilon y\right), \qquad (t,x,y) \in (0,T)\times \Omega\times \Gamma.
\end{split}
\end{equation}
The advantage of the unfolding operator is that we can formulate questions of convergence in a fixed space $L^2(0,T;L^2(\Omega\times \Gamma))$. All definitions and results in this section are formulated in $L^2$ spaces. Everything remains the same, however, if we replace $L^2$ by $L^p$ for any $p\in [1,\infty)$. We refer to \cite{Cioranescu} for the definition of the boundary unfolding operator and proofs of the properties listed next. The boundary unfolding operator $\mathcal{T}_{\varepsilon}^b$ is bounded, linear, and satisfies
\begin{equation}\label{Tb-product}
\mathcal{T}_{\varepsilon}^b (v_1 v_2) = \mathcal{T}_{\varepsilon}^b (v_1) \mathcal{T}_{\varepsilon}^b(v_2), \qquad v_1,v_2 \in L^2(0,T;L^2(\Gamma^{\varepsilon})).
\end{equation}
For any $Y$-periodic function $\psi\in L^2(\Gamma)$, set $\psi_\varepsilon(x):=\psi(x/\varepsilon)$. Then
\begin{equation*}
\mathcal{T}_{\varepsilon}^b (\psi_\varepsilon)(x,y)=\psi(y), \qquad x\in \Omega, \, y\in \Gamma.
\end{equation*}
For $v\in L^2(0,T;L^2(\Gamma^{\varepsilon}))$, we have the integration formula
\begin{equation}\label{IntegralTransform}
\varepsilon \int_{\Gamma^{\varepsilon}} v(t,x) \,dS(x) = \int_{\Omega} \int_{\Gamma} \mathcal{T}_{\varepsilon}^b(v)(t,x,y) \,dS(y) \, dx,
\end{equation}
for a.e.~$t\in (0,T)$, thereby converting an integral over the oscillating set $\Gamma^{\varepsilon}$ to an integral over the fixed set $\Omega \times \Gamma$. For $v\in L^2(0,T;L^2(\Gamma^{\varepsilon}))$,
\begin{equation}\label{Tb-L2Bound}
\norm{\mathcal{T}_{\varepsilon}^b(v)}_{L^2(\Omega \times \Gamma)} = \varepsilon^{1/2} \norm{v}_{L^2(\Gamma^{\varepsilon})},
\end{equation}
for a.e.~$t\in (0,T)$. For any $v\in L^2(0,T;L^2(\Gamma^{\varepsilon}))$,
\begin{equation}\label{Tb-strongconv}
\mathcal{T}_{\varepsilon}^b(v) \overset{\varepsilon\downarrow 0}{\to} v \quad \text{in $L^2( \Omega \times \Gamma)$},
\end{equation}
for a.e.~$t\in (0,T)$, and also in $L^2(0,T;L^2(\Omega \times \Gamma))$. Suppose $\{v^{\varepsilon}\}_{\varepsilon>0}$ is a sequence of functions in $L^2((0,T)\times \Gamma^{\varepsilon})$ satisfying \eqref{two-scale-cond}. Then
\begin{equation}\label{weak-vs-twoscale}
v^{\varepsilon}\overset{2-\mathrm{S}}{\rightharpoonup} v \; \Longleftrightarrow \; \mathcal{T}_{\varepsilon}^b(v^\varepsilon) \rightharpoonup v \quad \text{in $L^2((0,T)\times \Omega\times \Gamma)$}.
\end{equation}
We also need the \textit{unfolding} operators linked to the domains $\Omega_i^{\varepsilon}, \Omega_e^{\varepsilon}$ \cite{Cioranescu}:
\begin{align*}
&\mathcal{T}^j_{\varepsilon}:L^2\left((0,T)\times \Omega_j^{\varepsilon}\right) \to L^2(0,T;L^2(\Omega \times Y_j)), \quad j=i,e, \\
& \mathcal{T}^j_{\varepsilon} (u)(t,x,y) = u\left(t,\varepsilon \left\lfloor \frac{x}{\varepsilon}\right\rfloor + \varepsilon y\right), \quad (t,x,y) \in (0,T)\times \Omega\times Y_j.
\end{align*}
The unfolding operator $\mathcal{T}^j_{\varepsilon}$ maps functions defined on the oscillating set $(0,T)\times \Omega_j^{\varepsilon}$ into functions defined on the fixed domain $(0,T)\times \Omega \times Y_j$. The operator $\mathcal{T}_{\varepsilon}^j$ is bounded, linear and satisfies
\begin{equation*}
\mathcal{T}_{\varepsilon}^j (uv) = \mathcal{T}_{\varepsilon}^j (u) \mathcal{T}_{\varepsilon}^j(v), \qquad u,v \in L^2\left( (0,T)\times \Omega_j^{\varepsilon} \right).
\end{equation*}
For any $Y$-periodic function $\psi\in L^2(Y_j)$, set $\psi_\varepsilon(x):=\psi(x/\varepsilon)$. Then
\begin{equation*}
\mathcal{T}_{\varepsilon}^j (\psi_\varepsilon)(x,y)=\psi(y), \qquad x\in \Omega, \, y\in Y_j.
\end{equation*}
For $u \in L^2(0,T;L^2(\Omega_j^{\varepsilon}))$, we have the integration formula
\begin{equation*}
\int_{\Omega_j^{\varepsilon}} u(t,x) \,dx =\int_{\Omega} \int_{Y_j} \mathcal{T}_{\varepsilon}^j(u)(t,x,y)\,dy \, dx,
\end{equation*}
for a.e.~$t\in (0,T)$. The integration formula implies
\begin{equation}\label{LpBound2}
\norm{\mathcal{T}_{\varepsilon}^j(u)}_{L^2(\Omega \times Y_j)} =\norm{u}_{L^2(\Omega_j^{\varepsilon})},
\end{equation}
for a.e.~$t\in (0,T)$. Let $u \in L^2(0,T;H^1(\Omega_j^{\varepsilon}))$. Then
\begin{equation*}
\nabla_y \mathcal{T}_{\varepsilon}^j (u) =\varepsilon \mathcal{T}_{\varepsilon}^j(\nabla u), \quad \text{a.e.~in $(0,T)\times \Omega\times Y_j$},
\end{equation*}
and hence $\mathcal{T}_{\varepsilon}^j (u)\in L^2(\Omega;H^1(Y_j))$, for a.e.~$t\in (0,T)$:
\begin{equation}\label{unfoldingH1}
\norm{\nabla_y \mathcal{T}_{\varepsilon}^j(u)}_{L^2(\Omega \times Y_j)} =\varepsilon \norm{\nabla u}_{L^2(\Omega_j^{\varepsilon})}.
\end{equation}
The unfolding operators $\mathcal{T}_{\varepsilon}^b$ and $\mathcal{T}_{\varepsilon}^j$ are related in the following sense:
\begin{equation}\label{unfoldingTrace}
\mathcal{T}_{\varepsilon}^b ( u|_{\Gamma^{\varepsilon}}) = \mathcal{T}_{\varepsilon}^j(u)|_{\Gamma}, \qquad u \in L^2(0,T;H^1(\Omega_j^{\varepsilon})),\quad j=i,e,
\end{equation}
for a.e.~$t\in (0,T)$. Combining \eqref{unfoldingTrace}, \eqref{LpBound2}, \eqref{unfoldingH1}, and the trace inequality \eqref{trace1} in $H^1(\Omega_j^{\varepsilon})$, we obtain
\begin{equation}\label{unfoldingH1/2}
\begin{split}
& \norm{\mathcal{T}_{\varepsilon}^b(u|_{\Gamma^{\varepsilon}})}_{L^2(\Omega;H^{1/2}(\Gamma))}^2 \\
& \qquad \le C \left(\norm{\mathcal{T}_{\varepsilon}^j(u)}_{L^2(\Omega;L^2(Y_j))}^2 + \norm{\nabla_y \mathcal{T}_{\varepsilon}^j(u)}_{L^2(\Omega;L^2(Y_j))}^2\right) \\
& \qquad = C\left(\norm{u}_{L^2(\Omega_j^{\varepsilon})}^2 +\varepsilon^2 \norm{\nabla u}_{L^2(\Omega_j^{\varepsilon})}^2\right), \quad j=i,e,
\end{split}
\end{equation}
for a.e.~$t\in (0,T)$, where the constant $C$ is independent of $\varepsilon$ and $t$. Whenever it is convenient, we will write $\mathcal{T}_{\varepsilon}^b (u)$ instead of $\mathcal{T}_{\varepsilon}^b(u|_{\Gamma^{\varepsilon}})$. Next, we consider the local average (mean in the cells) operator
\begin{equation*}
\mathcal{M}^j_{\varepsilon}(u)(t,x) = \int_{Y_j} \mathcal{T}_{\varepsilon}^{j}(u)(t,x,y)\,dy, \quad (t,x)\in (0,T)\times \Omega, \qquad j=i,e,
\end{equation*}
and the piecewise linear interpolation operator \cite[Definition 2.5]{Cioranescu}
\begin{equation}\label{Qdefinition}
\begin{split}
& Q_{\varepsilon}^{j}:L^2(0,T;H^1(\Omega_j^{\varepsilon})) \to L^2(0,T;H^1(\Omega)), \quad j=i,e, \\
& Q_{\varepsilon}^{j}(u) \;\; \text{is the $Q_1$-interpolation (in $x$) of $\mathcal{M}^j_{\varepsilon}(u)$}.
\end{split} \end{equation} Given the Lipschitz regularity of $Y_j$, the interpolation operator $Q_\varepsilon^j$ satisfies the following estimates \cite[Propositions 2.7 and 2.8]{Cioranescu}: \begin{equation}\label{Qestimates} \begin{split} & \norm{Q_{\varepsilon}^j(u)-u}_{L^2((0,T)\times \Omega_j^{\varepsilon})} \leq C \varepsilon \norm{\nabla u}_{L^2((0,T)\times \Omega_j^{\varepsilon})}, \\ & \norm{\nabla Q_{\varepsilon}^j(u)}_{L^2((0,T)\times \Omega_j^{\varepsilon})} \leq C \norm{\nabla u}_{L^2((0,T)\times \Omega_j^{\varepsilon})}, \end{split} \end{equation} where $C$ is a constant that is independent of $\varepsilon$. \section{Microscopic bidomain model}\label{sec:wellposedness} In this section we present a relevant notion of (weak) solution for the microscopic problem \eqref{micro}, \eqref{eq:ib-cond}, along with an accompanying existence theorem. We also derive some ``$\varepsilon$-independent'' a priori estimates, which are used later to extract two-scale convergent subsequences. \subsection{Assumptions on the data} \label{sec:assumptions} We impose the following set of assumptions on the ``membrane'' functions $I,H$: $\bullet$ Generalized FitzHugh-Nagumo model: For $v,w\in \mathbb{R}$, \begin{equation}\label{GFHN} \begin{split} & I(v, w) = I_1 (v) + I_2(v)w, \quad H(v, w) = h(v) + c_{H,1}w,\\ & \text{where } I_1 , I_2, h \in C^1(\mathbb{R}),\, c_{H,1}\in \mathbb{R}, \text{ and} \\ &\abs{I_1(v)} \leq c_{I,1}\left(1+\abs{v}^3\right), \quad I_1(v)v \geq c_I \abs{v}^4-c_{I,2}\abs{v}^2, \\ & I_2(v) = c_{I,3}+ c_{I,4}v, \quad \abs{h(v)}\leq c_{H,2}\left(1+\abs{v}^2\right), \end{split} \tag{\textbf{GFHN}} \end{equation} for some constants $c_I > 0$ and $c_{I,1}, c_{I,2}, c_{I,3}, c_{I,4}, c_{H,2}\ge 0$. 
The classical FitzHugh-Nagumo model corresponds to \begin{equation}\label{FHN} I(v,w)=v(v-a)(v-1)+w, \qquad H(v,w)=\epsilon (k v -w), \end{equation} where $a\in (0,1)$ and $k,\epsilon >0$ are constants. Repeated applications of Cauchy's inequality yield \begin{equation}\label{IH-conseq} vI(v,w)-wH(v,w) \ge \gamma \abs{v}^4-\beta \left(\abs{v}^2+\abs{w}^2\right), \end{equation} for some constants $\gamma>0$ and $\beta\ge 0$. This inequality will be used to bound the transmembrane potential in the $L^4$ norm. Consider a square matrix $A$, which can always be written as the sum of its symmetric part $\frac12 (A+A^\top)$ and its skew-symmetric part $\frac12 (A-A^\top)$. Recall that the skew-symmetric part does not contribute to the quadratic form $z\mapsto Az\cdot z$. Therefore, letting $\lambda_{\text{min}}$ and $\lambda_{\text{max}}$ denote respectively the minimum and maximum eigenvalues of the symmetric part of $A$, we have $$ \lambda_{\text{min}} \abs{z}^2\le Az\cdot z \le \lambda_{\text{max}} \abs{z}^2, \qquad \forall z. $$ For $\mu>0$, consider the function $F^\mu:\mathbb{R}^2\to \mathbb{R}^2$ defined by $$F^\mu(z) = \begin{pmatrix} \mu I(z)\\ -H(z) \end{pmatrix}.$$ Denote by $\lambda^\mu_{\text{min}}(z)$, $\lambda^\mu_{\text{max}}(z)$ the minimum, maximum eigenvalues of the symmetric part of the matrix $\nabla F^\mu(z)$. To ensure that weak solutions are unique, we need an additional assumption on $I, H$ expressed via $F^\mu$ \cite[p.~479]{Bourgault}: $\exists \mu,\lambda>0$ such that \begin{equation}\label{uniq-cond1} \lambda^\mu_{\text{min}}(z)\ge -\lambda, \quad \forall z\in \mathbb{R}^2. \end{equation} One can verify that the FitzHugh-Nagumo model \eqref{FHN} obeys \eqref{uniq-cond1} (with $\mu=\epsilon k$). A consequence of \eqref{uniq-cond1} is that $\nabla F^\mu(\bar z) z \cdot z \ge -\lambda \abs{z}^2$ for all $\bar z, z\in \mathbb{R}^2$. 
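For the reader's convenience we sketch the verification of \eqref{uniq-cond1} for the model \eqref{FHN}; the short computation below is ours and is only an outline. With $\mu=\epsilon k$ and $z=(v,w)$, $$ \nabla F^{\mu}(z) = \begin{pmatrix} \epsilon k\, \partial_v I(v,w) & \epsilon k \\ -\epsilon k & \epsilon \end{pmatrix}, $$ whose symmetric part is the diagonal matrix $\operatorname{diag}\left(\epsilon k\, \partial_v I(v,w),\, \epsilon\right)$. Since $\partial_v I(v,w)=3v^2-2(1+a)v+a \ge a-\frac{(1+a)^2}{3}$, both eigenvalues of the symmetric part are bounded below by $-\epsilon k\left(\frac{(1+a)^2}{3}-a\right)=:-\lambda$, so $$ \lambda^{\mu}_{\text{min}}(z)\ge -\lambda, \qquad \forall z\in \mathbb{R}^2, $$ with $\lambda>0$ because $(1+a)^2-3a=1-a+a^2>0$ for $a\in (0,1)$.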
Therefore, writing $$ F^\mu(z_2)-F^\mu(z_1) =\int_0^1 \nabla F^\mu(\theta z_2+(1-\theta)z_1)(z_2-z_1)\,d\theta, $$ it follows that $$ \left(F^\mu(z_2)-F^\mu(z_1)\right) \cdot (z_2-z_1)\geq -\lambda\abs{z_2-z_1}^2. $$ More explicitly, assumption \eqref{uniq-cond1} implies the following ``dissipative structure'' on a suitable linear combination of $I$ and $H$: \begin{align*} &\mu \left(I(v_2,w_2)-I(v_1,w_1)\right)(v_2-v_1) -\left(H(v_2,w_2)-H(v_1,w_1)\right)(w_2-w_1) \\ & \qquad \geq -\lambda\left(\abs{v_2-v_1}^2+\abs{w_2-w_1}^2\right), \qquad \forall v_1,v_2,w_1,w_2 \in \mathbb{R}. \end{align*} This inequality implies the $L^2$ stability (and thus uniqueness) of weak solutions. \begin{remark} There are many membrane models of cardiac cells \cite{ColliBook,Sundnes}. We utilize the FitzHugh-Nagumo model \cite{FitzHugh1955}, which is a simplification of the Hodgkin-Huxley model of voltage-gated ion channels. It is possible to treat other membrane models by blending the arguments used herein with those found in \cite{Boulakia2008,Bourgault,ColliBook,Sundnes,Veneroni,Veneroni:2009aa}. \end{remark} As a natural assumption for homogenization, we assume that the $\varepsilon$-dependence of the conductivities $\sigma_j^\varepsilon$ ($j=i,e$), the applied currents $s^{\varepsilon}_j$ ($j=i,e$), and the initial data $v_0^{\varepsilon}, w_0^{\varepsilon}$ decouples into a ``fast'' and a ``slow'' variable: \begin{equation}\label{fastSlow} \begin{split} &\sigma_j^{\varepsilon}(x) = \sigma_j\left(x,\frac{x}{\varepsilon}\right), \quad s^{\varepsilon}_j(x) = s_j\left(x,\frac{x}{\varepsilon}\right), \\ & v_0^{\varepsilon}(x) = v_0\left(x,\frac{x}{\varepsilon}\right), \quad w_0^{\varepsilon}(x) = w_0\left(x,\frac{x}{\varepsilon}\right), \end{split} \end{equation} for some fixed functions $\sigma_j(x,y),s_j(x,y),v_0(x,y),w_0(x,y)$ that are $Y$-periodic in the second argument. 
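As a simple illustration of \eqref{fastSlow} (the specific choice below is ours, for orientation only), one may take data in separated-variables form, $$ \sigma_j(x,y) = a_j(x)\, b_j(y)\, \mathrm{Id}, \qquad s_j(x,y) = \theta_j(x)\, \chi_j(y), \qquad j=i,e, $$ with $a_j\in C(\overline{\Omega})$ bounded between positive constants, $b_j\in C_{\mathrm{per}}(Y_j)$ positive, $\theta_j\in L^2(\Omega)$, and $\chi_j\in C_{\mathrm{per}}(Y_j)$. Then $\sigma_j^{\varepsilon}(x)=a_j(x)\,b_j(x/\varepsilon)\,\mathrm{Id}$ describes a medium whose conductivity oscillates on the cell scale $\varepsilon$ while being modulated on the macroscopic scale.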
The conductivity tensors are assumed to be bounded and continuous, \begin{equation}\label{eq:matrix1} \sigma_j(x,y) \in L^\infty(\Omega\times Y_j)\cap C\left(\Omega; C_{\mathrm{per}}(Y_j)\right), \end{equation} and satisfy the usual ellipticity condition, i.e., there exists $\alpha >0$ such that \begin{equation}\label{eq:matrix2} \eta \cdot \sigma_j(x,y) \eta \geq \alpha |\eta|^2, \quad \forall \eta \in \mathbb{R}^3, \; \forall (x,y)\in \Omega\times Y_j, \end{equation} for $j=i,e$. Finally, we assume that each $\sigma_j$ is symmetric: $\sigma_j^T= \sigma_j$. The regularity assumption \eqref{eq:matrix1} implies that $\sigma_j$ is an admissible test function for two-scale convergence \cite{Allaire}, which means that $$ \lim_{\varepsilon \rightarrow 0}\int_{\Omega_j^{\varepsilon}} \sigma^{\varepsilon}_j(x) \varphi^{\varepsilon}(x) \, dx = \int_{\Omega \times Y_j} \sigma_j(x,y)\varphi(x,y) \, dx\, dy, \qquad j=i,e, $$ for every two-scale convergent sequence $\{\varphi^{\varepsilon}\}_{\varepsilon>0}$, $\varphi^{\varepsilon}\overset{2}{\rightharpoonup} \varphi$. This convergence still holds if the second part of \eqref{eq:matrix1} is replaced by $\sigma_j \in L^2\left(\Omega; C_{\mathrm{per}}(Y_j)\right)$. For the stimulation currents we assume the compatibility condition \begin{equation}\label{assumptionCompatibility} \sum_{j=i,e} \int_{\Omega_j^{\varepsilon}} s_j^\varepsilon\,dx =0, \end{equation} and the boundedness in $L^2$: \begin{equation}\label{assumptionStimulation} \norm{s_j^{\varepsilon}}_{L^2((0,T)\times \Omega_j^{\varepsilon})}\leq C, \qquad j=i,e, \end{equation} which, in view of \eqref{fastSlow}, is guaranteed if we take \cite[p.~174]{Donato} \begin{equation}\label{sj-ass} s_j \in L^2(\Omega ; C_{\text{per}}(Y_j)). 
\end{equation} Similarly, we assume that \begin{equation}\label{assumptionInitial} v_0, w_0 \in L^2(\Omega ; C_{\text{per}}(\Gamma)). \end{equation} Throughout this paper we denote by $C$ a generic constant, not depending on the parameter $\varepsilon$. The actual value of $C$ may change from one line to the next. \subsection{Weak solutions} Testing \eqref{micro} against appropriate functions $\varphi_i,\varphi_e,\varphi_w$ we obtain the weak formulation of the microscopic bidomain model \eqref{micro}, \eqref{eq:ib-cond}, cf.~\cite{Colli,Veneroni} for details. We note that the terms involving the boundary $\partial \Omega$ vanish due to the Neumann boundary condition \eqref{eq:ib-cond}. \begin{definition}[weak formulation of microscopic system]\label{weak} A weak solution to \eqref{micro}, \eqref{eq:ib-cond} is a collection $(u_i^{\varepsilon},u_e^{\varepsilon},v^{\varepsilon},w^{\varepsilon})$ of functions satisfying the following conditions: \begin{enumerate}[i] \item (algebraic relation). \begin{equation}\label{ve-def} v^{\varepsilon}= u_i^{\varepsilon}\big|_{\Gamma^{\varepsilon}} - u_e^{\varepsilon}\big|_{\Gamma^{\varepsilon}} \quad \text{a.e.~on $(0,T)\times\Gamma^{\varepsilon}$}. \end{equation} \item (regularity). \begin{align*} & u_j^{\varepsilon} \in L^2(0,T;H^1(\Omega_j^{\varepsilon})), \quad j=i,e, \\ & \int_{\Omega_e^{\varepsilon}} u_e^\varepsilon(t,x)\, dx = 0, \quad t\in (0,T), \\ & v^{\varepsilon} \in L^2(0,T; H^{1/2}(\Gamma^{\varepsilon}))\cap L^4((0,T)\times \Gamma^{\varepsilon}), \\ & \partial_t v^{\varepsilon} \in L^2(0,T;H^{-1/2}(\Gamma^{\varepsilon}))+L^{4/3}((0,T)\times \Gamma^{\varepsilon}), \\ & w^{\varepsilon} \in H^1(0,T;L^2(\Gamma^{\varepsilon})). \end{align*} \item (initial conditions). \begin{equation}\label{eq:init} v^{\varepsilon}(0)=v_0^{\varepsilon}, \quad w^{\varepsilon}(0) = w_0^{\varepsilon}. 
\end{equation} \item (differential equations). \begin{equation}\label{weak_i} \begin{split} & \varepsilon \int_0^T \left\langle \partial_t v^{\varepsilon}, \varphi_i\right\rangle \, dt + \int_0^T \int_{\Omega_i^{\varepsilon}}\sigma_i^{\varepsilon} \nabla u_i^{\varepsilon} \cdot \nabla \varphi_i\, dx \, dt \\ & \qquad\quad + \varepsilon \int_0^T \int_{\Gamma^{\varepsilon}} I(v^{\varepsilon},w^{\varepsilon})\varphi_i \,dS \, dt = \int_0^T \int_{\Omega_i^{\varepsilon}} s^{\varepsilon}_{i} \varphi_i \, dx \, dt, \end{split} \end{equation} \begin{equation}\label{weak_e} \begin{split} & \varepsilon \int_0^T \left\langle\partial_t v^{\varepsilon}, \varphi_e\right\rangle \, dt - \int_0^T\int_{\Omega_e^{\varepsilon}}\sigma_e^{\varepsilon} \nabla u_e^{\varepsilon} \cdot \nabla \varphi_e \, dx \, dt \\ & \qquad\quad + \varepsilon \int_0^T \int_{\Gamma^{\varepsilon}} I(v^{\varepsilon},w^{\varepsilon})\varphi_e \,dS \, dt = -\int_0^T \int_{\Omega_e^{\varepsilon}} s^{\varepsilon}_{e} \varphi_e \, dx \, dt, \end{split} \end{equation} \begin{equation}\label{weak_w} \int_0^T \int_{\Gamma^{\varepsilon}} \partial_t w^{\varepsilon}\, \varphi_w\, dS \, dt = \int_0^T \int_{\Gamma^{\varepsilon}} H(v^{\varepsilon},w^{\varepsilon})\, \varphi_w \,dS \, dt, \end{equation} \end{enumerate} for all $\varphi_j\in L^2(0,T;H^1(\Omega_j^{\varepsilon}))$ with $\varphi_j\in L^4((0,T)\times \Gamma^{\varepsilon})$ ($j=i,e$), and for all $\varphi_w\in L^2(0,T;L^2(\Gamma^{\varepsilon}))$. \end{definition} \begin{remark}\label{rem:well-defined} In \eqref{weak_i}, \eqref{weak_e} we use $\left\langle \cdot, \cdot \right \rangle$ to denote the duality pairing between $H^{-1/2}(\Gamma^{\varepsilon})+L^{4/3}(\Gamma^{\varepsilon})$ and $H^{1/2}(\Gamma^{\varepsilon})\cap L^4(\Gamma^{\varepsilon})$. 
For a motivation of the regularity conditions in Definition \ref{weak}, see Remark \ref{rem:motivate-regularity} below. By the embedding \eqref{cont-embeding}, $v^{\varepsilon},w^{\varepsilon}\in C\left([0,T];L^2(\Gamma^{\varepsilon}) \right)$ and therefore the pointwise evaluations $v^{\varepsilon}(0)$, $w^{\varepsilon}(0)$ in \eqref{eq:init} are well defined. The time derivative $\partial_t v^\varepsilon$ is a distribution belonging to $L^2(0,T;H^{-1/2}(\Gamma^{\varepsilon}))+L^{4/3}((0,T)\times \Gamma^{\varepsilon})$ with initial values $v^\varepsilon_{0}$, so that the integration-by-parts formula \eqref{eq:int-by-parts-general} holds. Consequently, we may replace $$ \varepsilon \int_0^T \left\langle \partial_t v^{\varepsilon}, \varphi_j\right\rangle \, dt, \quad j=i,e, $$ in \eqref{weak_i} and \eqref{weak_e} by \begin{equation}\label{weak-form-ibp} -\varepsilon \int_{0}^{T} \int_{\Gamma^{\varepsilon}} v^\varepsilon \, \partial_{t} \varphi_j \, dS(x)\,dt - \varepsilon\int_{\Gamma^{\varepsilon}}v^\varepsilon_0\, \varphi_j(0) \, dS(x), \end{equation} for all test functions $\varphi_j \in C^\infty_0([0,T)\times \Omega_j^{\varepsilon})$, or for all $\varphi_j\in L^2(0,T;H^1(\Omega_j^{\varepsilon}))$ such that $\partial_t \varphi_j\in L^2((0,T)\times \Gamma^{\varepsilon})$, $\varphi_j\in L^4((0,T)\times \Gamma^{\varepsilon})$, and $\varphi_j(T)=0$. Later, when passing to the limit $\varepsilon\to 0$ in \eqref{weak_i} and \eqref{weak_e}, we make use of the form \eqref{weak-form-ibp}. \end{remark} \begin{remark}\label{weak:welldef} Consider a weak solution $(u_i^\varepsilon,u_e^\varepsilon,v^{\varepsilon},w^{\varepsilon})$ according to Definition \ref{weak}. Thanks to \eqref{trace1}, the trace of $\varphi_j \in L^2(0,T;H^1(\Omega_j^{\varepsilon}))$ belongs to $L^2(0,T;H^{1/2}(\Gamma^{\varepsilon}))$. 
By the Sobolev inequality \eqref{SobolevInequality}, the trace of $\varphi_j$ belongs also to $L^2(0,T;L^4(\Gamma^{\varepsilon}))$. In Definition \ref{weak} we ask additionally that the trace of $\varphi_j$ belongs to $L^4((0,T)\times \Gamma^{\varepsilon})$ to ensure that the surface terms in \eqref{weak_i}, \eqref{weak_e} are well defined. Indeed, $\int_0^T \left\langle \partial_t v^{\varepsilon}, \varphi_j\right\rangle \, dt$ is well defined for such $\varphi_j$. Moreover, \begin{align*} J:=\abs{\int_0^T\int_{\Gamma^{\varepsilon}} I(v^{\varepsilon},w^{\varepsilon}) \varphi_j\, dS\, dt} & \leq \norm{I(v^{\varepsilon},w^{\varepsilon})}_{L^{4/3}((0,T)\times \Gamma^{\varepsilon})} \norm{\varphi_j}_{L^4((0,T)\times \Gamma^{\varepsilon})}. \end{align*} For the membrane model \eqref{GFHN}, the growth condition on $I$ implies \begin{equation}\label{I43} \abs{I(v,w)}^{\frac43} \le C_I\left(1+\abs{v}^4+ \abs{w}^2\right), \end{equation} and therefore $\norm{I(v^{\varepsilon},w^{\varepsilon})}_{L^{4/3}((0,T)\times \Gamma^{\varepsilon})}^{4/3}<\infty$. Consequently, $J<\infty$. We actually have a more precise bound. As $H^{1/2}(\Gamma^{\varepsilon})\subset L^4(\Gamma^{\varepsilon})$, we have, by duality, $L^{4/3}(\Gamma^{\varepsilon})\subset H^{-1/2}(\Gamma^{\varepsilon})$ and $$ \norm{I(v^{\varepsilon},w^{\varepsilon})}_{H^{-1/2}(\Gamma^{\varepsilon})}^{4/3} \le \tilde C \norm{I(v^{\varepsilon},w^{\varepsilon})}_{L^{4/3}(\Gamma^{\varepsilon})}^{4/3} \le C \left(1+\norm{v^{\varepsilon}}_{L^4(\Gamma^{\varepsilon})}^4 +\norm{w^{\varepsilon}}_{L^2(\Gamma^{\varepsilon})}^2\right). $$ Integrating this over $t\in (0,T)$ yields $I(v^{\varepsilon},w^{\varepsilon})\in L^{4/3}(0,T;H^{-1/2}(\Gamma^{\varepsilon}))$. The integral on the right-hand side of \eqref{weak_w} can be treated similarly, since $\abs{H(v,w)}^2 \le C\left(1+\abs{v}^4+\abs{w}^2\right)$. 
The remaining integrals are trivially well defined. \end{remark} \subsection{Existence of solution and a priori estimates} Existence and uniqueness results for certain classes of membrane models have been established in \cite{Colli,Veneroni}. These works employ the variable $v=u_i-u_e$ to convert \eqref{micro} into a non-degenerate ``abstract'' parabolic equation. The authors in \cite{Colli} then appeal to the theory of variational inequalities, whereas in \cite{Veneroni} the Schauder fixed point theorem is applied to conclude the existence of a solution. The following theorem can be proved by adapting arguments found in \cite{Colli,Veneroni} (see also \cite{Grandelius:2017aa}), or those utilized in \cite{Bourgault,Karlsen,Andreianov:2010uq} for the (macroscopic) bidomain model. \begin{theorem}[existence of weak solution for microscopic system]\label{thm:micro} Fix any $\varepsilon>0$, and suppose \eqref{GFHN}, \eqref{def:domain}, \eqref{fastSlow}, \eqref{eq:matrix1}, \eqref{eq:matrix2}, \eqref{assumptionCompatibility}, \eqref{sj-ass}, and \eqref{assumptionInitial} hold. Then the microscopic bidomain model \eqref{micro}, \eqref{eq:ib-cond} possesses a weak solution $(u_i^{\varepsilon},u_e^{\varepsilon},v^{\varepsilon},w^{\varepsilon})$ (in the sense of Definition \ref{weak}). This weak solution satisfies the a priori estimates collected in Lemma \ref{lemma:energy} below. \end{theorem} \begin{remark}\label{rem:uniq} The unknowns $u_i^\varepsilon,u_e^\varepsilon$ are determined uniquely only up to an additive constant. We can fix this constant by imposing the normalization condition $\int_{\Omega_e^{\varepsilon}} u_e^\varepsilon\, dx = 0$, as is done in Definition \ref{weak}. Weak solutions are unique (and $L^2$ stable) provided we impose the structural condition \eqref{uniq-cond1} on the membrane functions $I,H$. 
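In outline, the stability argument runs as follows (we only sketch it, with a test-function scaling of our choosing; cf.~\cite{Bourgault} for the macroscopic case). Given two weak solutions with the same data, write $\delta u_j$, $\delta v$, $\delta w$ for the differences of the respective components. Subtracting the weak formulations and specifying the test functions $(\mu\,\delta u_i, -\mu\,\delta u_e, \varepsilon\,\delta w)$ yields, with the help of the ellipticity condition and the dissipative structure implied by \eqref{uniq-cond1}, \begin{align*} & \frac{d}{dt}\left(\frac{\varepsilon\mu}{2}\norm{\delta v}_{L^2(\Gamma^{\varepsilon})}^2 +\frac{\varepsilon}{2}\norm{\delta w}_{L^2(\Gamma^{\varepsilon})}^2\right) +\mu\alpha \sum_{j=i,e}\norm{\nabla \delta u_j}_{L^2(\Omega_j^{\varepsilon})}^2 \\ & \qquad \le \varepsilon\lambda\left(\norm{\delta v}_{L^2(\Gamma^{\varepsilon})}^2 +\norm{\delta w}_{L^2(\Gamma^{\varepsilon})}^2\right). \end{align*} Gr\"{o}nwall's inequality and $\delta v(0)=\delta w(0)=0$ give $\delta v\equiv \delta w\equiv 0$; then $\nabla \delta u_j\equiv 0$, and the normalization $\int_{\Omega_e^{\varepsilon}} u_e^\varepsilon\, dx =0$ together with \eqref{ve-def} forces $\delta u_i\equiv \delta u_e\equiv 0$.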
\end{remark} The next lemma, which is utilized below to derive some a priori estimates, is a consequence of the uniform Poincar\'{e} inequality \eqref{Poincare} and the trace inequality for $\varepsilon$-periodic surfaces \eqref{trace}. A similar result is used in \cite{Pennacchio2005}. \begin{lemma}\label{L2-uie-est} Let $u_j\in H^1(\Omega_j^{\varepsilon})$, $j=i,e$, $\int_{\Omega_e^{\varepsilon}} u_e \,dx =0$, and set $v:=u_i\big|_{\Gamma^{\varepsilon}}-u_e\big|_{\Gamma^{\varepsilon}}$. There is a positive constant $C$, independent of $\varepsilon$, such that \begin{align*} \norm{u_i}_{L^2(\Omega_i^{\varepsilon})}^2 \leq C \left(\varepsilon\norm{v}_{L^2(\Gamma^{\varepsilon})}^2 +\sum_{j=i,e}\int_{\Omega_j^{\varepsilon}} \abs{\nabla u_j}^2\,dx \right). \end{align*} \end{lemma} \begin{proof} First, since $\int_{\Omega_e^{\varepsilon}} u_e\,dx =0$, \begin{equation}\label{poincare-ue} \norm{u_e}_{L^2(\Omega_e^{\varepsilon})}^2 \overset{\eqref{Poincare}}{\le} C_1\int_{\Omega_e^{\varepsilon}} \abs{\nabla u_e}^2\, dx. \end{equation} To estimate the $L^2$ norm of $u_i$, write $u_i=\bar{u}_i+\tilde{u}_i$, where $\bar{u}_i:=\frac{1}{\abs{\Omega_i^{\varepsilon}}}\int_{\Omega_i^{\varepsilon}} u_i\, dx$ is constant in $\Omega_i^{\varepsilon}$ and $\tilde{u}_i:=u_i-\bar{u}_i$ has zero mean in $\Omega_i^{\varepsilon}$. Clearly, $$ \norm{u_i}_{L^2(\Omega_i^{\varepsilon})}^2 =\norm{\bar{u}_i}_{L^2(\Omega_i^{\varepsilon})}^2 +\norm{\tilde{u}_i}_{L^2(\Omega_i^{\varepsilon})}^2. $$ In view of the Poincar\'{e} inequality \eqref{Poincare}, \begin{equation}\label{poincare-tui} \norm{\tilde{u}_i}_{L^2(\Omega_i^{\varepsilon})}^2 \le \tilde C_1 \int_{\Omega_i^{\varepsilon}} \abs{\nabla \tilde u_i}^2\, dx =\tilde C_1 \int_{\Omega_i^{\varepsilon}} \abs{\nabla u_i}^2\, dx. 
\end{equation} Let us bound $\norm{\bar{u}_i}_{L^2(\Omega_i^{\varepsilon})}^2 =\frac{\abs{\Omega_i^{\varepsilon}}}{\abs{\Gamma^{\varepsilon}}} \norm{\bar{u}_i}_{L^2(\Gamma^{\varepsilon})}^2$. Since $\abs{\Gamma^{\varepsilon}}=\abs{K^{\varepsilon}} \int_{\varepsilon\Gamma} \,dS = \varepsilon^{-3}\varepsilon^2 \int_{\Gamma}\,dS$ (recall \eqref{domains-escaled}), we have $\varepsilon \left|\Gamma^{\varepsilon}\right| \geq c >0$. Because of this and $\abs{\Omega_i^{\varepsilon}}\le \abs{\Omega}$, $$ \norm{\bar{u}_i}_{L^2(\Omega_i^{\varepsilon})}^2 \le \bar C_1 \varepsilon \norm{\bar{u}_i}_{L^2(\Gamma^{\varepsilon})}^2. $$ Noting that $$ \abs{\bar{u}_i}^2 \le \bar C_2\left(\abs{u_i - u_e}^2 + \abs{\tilde u_i}^2 + \abs{u_e}^2\right), $$ we obtain \begin{align*} \norm{\bar{u}_i}_{L^2(\Omega_i^{\varepsilon})}^2 & \le \bar C_3 \left( \varepsilon \norm{v}_{L^2(\Gamma^{\varepsilon})}^2 + \varepsilon \norm{\tilde u_i}_{L^2(\Gamma^{\varepsilon})}^2 + \varepsilon \norm{u_e}_{L^2(\Gamma^{\varepsilon})}^2 \right) \\ & \overset{\eqref{trace}}{\le} \bar C_3 \varepsilon\norm{v}_{L^2(\Gamma^{\varepsilon})}^2 \\ & \qquad \quad + \bar C_4 \left( \norm{\tilde u_i}_{L^2(\Omega_i^{\varepsilon})}^2+ \varepsilon^2 \norm{\nabla \tilde u_i}_{L^2(\Omega_i^{\varepsilon})}^2\right) \\ & \qquad \quad \quad + \bar C_4 \left( \norm{u_e}_{L^2(\Omega_e^{\varepsilon})}^2+ \varepsilon^2 \norm{\nabla u_e}_{L^2(\Omega_e^{\varepsilon})}^2\right) \\ & \le \bar C_3 \varepsilon\norm{v}_{L^2(\Gamma^{\varepsilon})}^2 + \bar C_5\norm{\nabla u_i}_{L^2(\Omega_i^{\varepsilon})}^2 + \bar C_5 \norm{\nabla u_e}_{L^2(\Omega_e^{\varepsilon})}^2, \end{align*} where the final inequality is a result of \eqref{poincare-ue} and \eqref{poincare-tui}. \end{proof} For the sake of the upcoming homogenization result, we now list some precise ($\varepsilon$-independent) a priori estimates. 
\begin{lemma}[basic estimates for microscopic system]\label{lemma:energy} Referring to Theorem \ref{thm:micro}, the weak solution $(u_i^{\varepsilon},u_e^{\varepsilon},v^{\varepsilon},w^{\varepsilon})$ of \eqref{micro}, \eqref{eq:ib-cond} satisfies the a priori estimates \begin{equation}\label{eq:main-est} \begin{split} & (a)\; \norm{\nabla u_j^\varepsilon}_{L^2\left((0,T)\times \Omega_j^{\varepsilon}\right)} \le C, \quad j=i,e, \\ & (b)\; \norm{u_j^\varepsilon}_{L^2\left((0,T)\times \Omega_j^{\varepsilon}\right)} \le C, \quad j=i,e, \\ & (c)\; \varepsilon^{1/2} \norm{v^\varepsilon}_{L^{\infty}(0,T; L^2(\Gamma^{\varepsilon}))} \le C, \\ & (d)\; \varepsilon^{1/4}\norm{v^\varepsilon}_{L^{4}((0,T)\times \Gamma^{\varepsilon})}\le C, \\ & (e)\; \varepsilon^{1/2}\norm{w^\varepsilon}_{L^{\infty}(0,T;L^2(\Gamma^{\varepsilon}))}\le C, \\ & (f)\; \varepsilon^{1/2}\norm{\partial_t w^\varepsilon}_{L^2((0,T)\times \Gamma^{\varepsilon})}\le C, \end{split} \end{equation} where $C$ is a positive constant independent of $\varepsilon$. \end{lemma} \begin{proof} We only \textit{outline} a proof of these (mostly standard) estimates. 
Specifying $(\varphi_i, \varphi_e,\varphi_w) = (u_i^{\varepsilon},-u_e^{\varepsilon},w^{\varepsilon})$ as test functions in \eqref{weak_i}, \eqref{weak_e}, and \eqref{weak_w}, adding the resulting equations, applying the chain rule \eqref{eq:chain-rule}, and using \eqref{eq:matrix2}, we obtain \begin{equation}\label{vL2} \begin{split} &\frac{\varepsilon}{2}\frac{d}{dt} \left( \norm{v^{\varepsilon}}_{L^2(\Gamma^{\varepsilon})}^2 +\norm{w^{\varepsilon}}_{L^2(\Gamma^{\varepsilon})}^2 \right) +\alpha \sum_{j=i,e} \int_{\Omega_j^{\varepsilon}} \abs{\nabla u_j^{\varepsilon}}^2\, dx \\ &\qquad \quad + \varepsilon \int_{\Gamma^{\varepsilon}} \left(I(v^{\varepsilon},w^{\varepsilon}) v^{\varepsilon} -H(v^{\varepsilon},w^{\varepsilon})w^{\varepsilon} \right)\,dS \le \sum_{j=i,e}\int_{\Omega_j^{\varepsilon}} s_{j}^\varepsilon u_j^{\varepsilon} \, dx. \end{split} \end{equation} Thanks to \eqref{IH-conseq}, \begin{equation}\label{ionL2} \begin{split} & \int_{\Gamma^{\varepsilon}} \left(I(v^{\varepsilon},w^{\varepsilon}) v^{\varepsilon} - H(v^{\varepsilon},w^{\varepsilon})w^{\varepsilon} \right)\,dS \\ & \qquad\quad \geq \gamma \norm{v^{\varepsilon}}_{L^4(\Gamma^{\varepsilon})}^4 - \beta\left( \norm{v^{\varepsilon}}_{L^2(\Gamma^{\varepsilon})}^2 +\norm{w^{\varepsilon}}_{L^2(\Gamma^{\varepsilon})}^2 \right), \end{split} \end{equation} for some $\varepsilon$-independent constants $\gamma>0$ and $\beta\ge 0$. 
Using Cauchy's inequality (``with $\delta$''), the source term can be bounded as \begin{equation}\label{source} \begin{split} \abs{\sum_{j=i,e}\int_{\Omega_j^{\varepsilon}} s_{j}^{\varepsilon} u_j^{\varepsilon} \, dx} & \leq \sum_{j=i,e} \left(\norm{u_j^{\varepsilon}}_{L^2(\Omega_j^{\varepsilon})} \norm{s_j^{\varepsilon}}_{L^2(\Omega_j^{\varepsilon})}\right) \\ & \leq \delta \sum_{j=i,e} \norm{u_j^{\varepsilon}}_{L^2(\Omega_j^{\varepsilon})}^2 + C_{\delta} \sum_{j=i,e}\norm{s_j^{\varepsilon}}^2_{L^2(\Omega_j^{\varepsilon})}, \end{split} \end{equation} with $\delta>0$ small and $C_\delta$ independent of $\varepsilon$. Lemma \ref{L2-uie-est} and \eqref{poincare-ue} ensure that \begin{equation}\label{source-part} \sum_{j=i,e} \norm{u_j^{\varepsilon}}_{L^2(\Omega_j^{\varepsilon})}^2 \leq C_1\left(\varepsilon \norm{v^{\varepsilon}}_{L^2(\Gamma^{\varepsilon})}^2 +\sum_{j=i,e} \int_{\Omega_j^{\varepsilon}} \abs{\nabla u_j^{\varepsilon}}^2\, dx\right). \end{equation} Insert \eqref{ionL2}, \eqref{source}, \eqref{source-part} into \eqref{vL2}, use \eqref{eq:matrix2}, and choose $\delta$ (appropriately) small. 
Integrating the result over the time interval $(0,\tau)$, for $\tau\in (0,T)$, yields \begin{align*} & \varepsilon \left( \norm{v^{\varepsilon}(\tau)}^2_{L^2(\Gamma^{\varepsilon})} + \norm{w^{\varepsilon}(\tau)}^2_{L^2(\Gamma^{\varepsilon})}\right) + \frac{\alpha}{2} \sum_{j=i,e}\int_0^\tau \norm{\nabla u_j^{\varepsilon}}^2_{L^2(\Omega_j^{\varepsilon})} \,dt \\ & \quad \quad + \gamma \varepsilon \int_0^\tau \norm{v^{\varepsilon}(t)}^4_{L^4(\Gamma^{\varepsilon})} \,dt \leq \varepsilon \left( \norm{v^{\varepsilon}(0)}^2_{L^2(\Gamma^{\varepsilon})} + \norm{w^{\varepsilon}(0)}^2_{L^2(\Gamma^{\varepsilon})}\right) \\ & \quad\quad\quad + C_2 \varepsilon \int_0^\tau \left( \norm{v^{\varepsilon}(t)}^2_{L^2(\Gamma^{\varepsilon})} + \norm{w^{\varepsilon}(t)}^2_{L^2(\Gamma^{\varepsilon})}\right) \, dt + C_\delta \sum_{j=i,e}\norm{s_j^{\varepsilon}}^2_{L^2((0,\tau)\times \Omega_j^{\varepsilon})}, \end{align*} for some positive constant $C_2$ independent of $\varepsilon$. Applying Gr\"{o}nwall's inequality, recalling \eqref{assumptionInitial} and \eqref{assumptionStimulation}, we obtain estimates (a), (c), (d), (e) in \eqref{eq:main-est}. Estimate (b) follows from \eqref{source-part} and (a), (c). Finally, note that \eqref{GFHN} implies the bound $$ \abs{\partial_t w^\varepsilon}^2 \le C \left(1+\abs{v^\varepsilon}^4 + \abs{w^\varepsilon}^2\right). $$ Estimate (f) follows by integrating this over $(0,T)\times \Gamma^{\varepsilon}$ and using (d), (e). \end{proof} \begin{remark}\label{rem:motivate-regularity} Let us motivate the regularity requirements in Definition \ref{weak} that are not covered by Lemma \ref{lemma:energy}. 
First, due to \eqref{ve-def}, \eqref{trace1} and \eqref{eq:main-est}, $$ \norm{v^{\varepsilon}}_{L^2(0,T;H^{1/2}(\Gamma^{\varepsilon}))} \leq \tilde C_{\varepsilon} \sum_{j=i,e}\norm{u_j^{\varepsilon}}_{L^2(0,T;H^1(\Omega_j^{\varepsilon}))} \le C_\varepsilon, $$ where the constants $\tilde C_\varepsilon, C_{\varepsilon}$ may depend on $\varepsilon$ (via \eqref{trace1} with $\Gamma$ replaced by $\Gamma^{\varepsilon}$). Next, we claim that $$ \varepsilon\norm{\partial_t v^\varepsilon}_{L^2(0,T;H^{-1/2}(\Gamma^{\varepsilon}))+L^{4/3}((0,T)\times \Gamma^{\varepsilon})}\le C_\varepsilon, $$ for some constant that may depend on $\varepsilon$. To see this, use a version of \eqref{weak_i} or \eqref{weak_e} (with time-independent test functions) to write $$ \varepsilon \left\langle \partial_t v^\varepsilon , \psi\right\rangle_{H^{-1/2}(\Gamma^{\varepsilon}),H^{1/2}(\Gamma^{\varepsilon})} =J_1(\psi) +J_2(\psi)+J_3(\psi), $$ for a.e.~$t\in (0,T)$ and for any $\psi\in H^{1/2}(\Gamma^{\varepsilon})$ with $\norm{\psi}_{H^{1/2}(\Gamma^{\varepsilon})}=1$, where \begin{align*} &\abs{J_1(\psi)}=\abs{\int_{\Omega_j^{\varepsilon}}\sigma_j^{\varepsilon} \nabla u_j^{\varepsilon} \cdot \nabla \mathcal{I}_j(\psi)\, dx}, \qquad \abs{J_2(\psi)} = \abs{\int_{\Omega_j^{\varepsilon}}s^{\varepsilon}_j\, \mathcal{I}_j(\psi) \, dx}, \\ & \abs{J_3(\psi)} =\abs{\varepsilon \int_{\Gamma^{\varepsilon}} I(v^{\varepsilon},w^{\varepsilon})\psi \,dS}, \end{align*} and $\mathcal{I}_j(\cdot)$ is the right inverse of the trace operator relative to $\Omega_j^{\varepsilon}$, for $j=i$ or $j=e$. 
Clearly, using the Cauchy-Schwarz inequality and \eqref{inverseTrace}, \begin{align*} & \abs{J_1(\psi)}\le \tilde C_1 \norm{\nabla u_j^{\varepsilon}}_{L^2(\Omega_j^{\varepsilon})} \norm{\psi}_{H^{1/2}(\Gamma^{\varepsilon})}, \quad \abs{J_2(\psi)}\le \tilde C_2 \norm{s_j^{\varepsilon}}_{L^2(\Omega_j^{\varepsilon})} \norm{\psi}_{H^{1/2}(\Gamma^{\varepsilon})}, \\ & \text{and} \quad \abs{J_3(\psi)}\le \varepsilon \norm{I(v^{\varepsilon},w^{\varepsilon})}_{H^{-1/2}(\Gamma^{\varepsilon})} \norm{\psi}_{H^{1/2}(\Gamma^{\varepsilon})}, \end{align*} where the constants $\tilde C_1, \tilde C_2$ may depend on $\varepsilon$ (via the inverse trace inequality \eqref{inverseTrace} with $\Gamma$ replaced by $\Gamma^{\varepsilon}$). As a result, for a.e.~$t\in (0,T)$, $$ \norm{J_1}_{H^{-1/2}(\Gamma^{\varepsilon})}^2 \le \tilde C_1^2\norm{\nabla u_j^{\varepsilon}}_{L^2(\Omega_j^{\varepsilon})}^2, \quad \norm{J_2}_{H^{-1/2}(\Gamma^{\varepsilon})}^2 \le \tilde C_2^2\norm{s_j^{\varepsilon}}_{L^2(\Omega_j^{\varepsilon})}^2. $$ Integrating this over $t\in (0,T)$ and using \eqref{eq:main-est}-(a), \eqref{assumptionStimulation} it follows that $$ \norm{J_1}_{L^2(0,T;H^{-1/2}(\Gamma^{\varepsilon}))}\le C_1, \qquad \norm{J_2}_{L^2(0,T;H^{-1/2}(\Gamma^{\varepsilon}))}\le C_2, $$ for some constants $C_1, C_2$ (that may depend on $\varepsilon$). As in Remark \ref{weak:welldef}, $$ \norm{I(v^{\varepsilon},w^{\varepsilon})}_{H^{-1/2}(\Gamma^{\varepsilon})} \le C_I \left(1+\norm{v^{\varepsilon}}_{L^4(\Gamma^{\varepsilon})}^4 +\norm{w^{\varepsilon}}_{L^2(\Gamma^{\varepsilon})}^2\right)^{3/4}, $$ and hence, for a.e.~$t\in (0,T)$, $$ \norm{J_3}_{H^{-1/2}(\Gamma^{\varepsilon})}^{4/3} \le \tilde C_3 \varepsilon^{4/3}\left(1+\norm{v^{\varepsilon}}_{L^4(\Gamma^{\varepsilon})}^4 +\norm{w^{\varepsilon}}_{L^2(\Gamma^{\varepsilon})}^2\right). 
$$ Integrating over $t\in (0,T)$ and using estimates \eqref{eq:main-est}-(d,e), we conclude $$ \norm{J_3}_{L^{4/3}(0,T;H^{-1/2}(\Gamma^{\varepsilon}))}\le C_3, $$ for some constant $C_3$ (independent of $\varepsilon$). \end{remark} For the upcoming convergence analysis, we need a temporal translation estimate for the membrane potential $v^\varepsilon$. \begin{lemma}[temporal translation estimate in $L^2$]\label{lemma:temp-shift} Referring back to Theorem \ref{thm:micro}, let $(u_i^{\varepsilon},u_e^{\varepsilon},v^{\varepsilon},w^{\varepsilon})$ be the weak solution of \eqref{micro}, \eqref{eq:ib-cond}. Then $v^\varepsilon$ satisfies the following $L^2$ temporal translation estimate for some $\varepsilon$-independent constant $C>0$, for any sufficiently small $\Delta_t>0$: \begin{equation}\label{eq:ve-time-trans} \varepsilon \int_0^{T-\Delta_t}\int_{\Gamma^{\varepsilon}} \abs{v^\varepsilon(t+\Delta_t,x)-v^\varepsilon(t,x)}^2 \,dS \,dt \le C \Delta_t. \end{equation} \end{lemma} \begin{proof} The translated functions $(u_i^{\varepsilon},u_e^{\varepsilon},v^{\varepsilon},w^{\varepsilon})(t+\Delta_t)$ constitute a weak solution of \eqref{micro} on $(0,T-\Delta_t)$ with initial data $(v^\varepsilon,w^\varepsilon)(\Delta_t)$ and stimulation currents $(s^\varepsilon_i,s^\varepsilon_e)(t+\Delta_t)$. We subtract the equations (linked to $\Omega_i^{\varepsilon}$ and $\Omega_e^{\varepsilon}$) for the original weak solution from the equations satisfied by the translated one, and add the resulting equations. 
The result is \begin{equation*}\label{eq-translation_1} \begin{split} &-\varepsilon \int_0^{T-\Delta_t} \int_{\Gamma^{\varepsilon}} \left(v^{\varepsilon}(t+\Delta_t)-v^{\varepsilon}(t) \right) \partial_t (\varphi_i-\varphi_e) \, dS\, dt \\ & \quad \quad +\varepsilon \int_{\Gamma^{\varepsilon}} \left(v^{\varepsilon}(T)-v^{\varepsilon}(T-\Delta_t) \right) (\varphi_i(T-\Delta_t)-\varphi_e(T-\Delta_t)) \,dS \\ & \quad \quad -\varepsilon \int_{\Gamma^{\varepsilon}} \left(v^{\varepsilon}(\Delta_t)-v^{\varepsilon}_0 \right) (\varphi_i(0)-\varphi_e(0)) \,dS \\ & \quad\quad + \sum_{j=i,e} \int_0^{T-\Delta_t} \int_{\Omega_j^{\varepsilon}} \sigma_j^{\varepsilon} \nabla \left(u_j^{\varepsilon}(t+\Delta_t)-u_j^{\varepsilon}(t)\right) \cdot \nabla \varphi_j \, dx\, dt \\ & \quad\quad +\varepsilon\int_0^{T-\Delta_t} \int_{\Gamma^{\varepsilon}} \left( I(v^{\varepsilon}(t+\Delta_t),w^{\varepsilon}(t+\Delta_t)) - I(v^{\varepsilon}(t),w^{\varepsilon}(t))\right) (\varphi_i-\varphi_e) \,dS\, dt \\ & \quad\quad -\varepsilon \int_0^{T-\Delta_t}\int_{\Gamma^{\varepsilon}} \left(H(v^{\varepsilon}(t+\Delta_t),w^{\varepsilon}(t+\Delta_t)) -H(v^{\varepsilon}(t),w^{\varepsilon}(t))\right) \varphi_w \,dS\, dt \\ & \quad \quad\qquad =\sum_{j=i,e}\int_0^{T-\Delta_t} \int_{\Omega_j^{\varepsilon}} \left(s_j^{\varepsilon}(t+\Delta_t)-s_j^{\varepsilon}(t) \right) \varphi_j \, dx\, dt.
\end{split} \end{equation*} Specifying the test functions $\varphi_i, \varphi_e,\varphi_w$ as $$ \varphi_j(t)= -\int_t^{t+\Delta_t} u_j^\varepsilon(s) \,ds, \quad j=i,e, \qquad \varphi_w(t)= -\int_t^{t+\Delta_t} w^\varepsilon(s) \,ds, $$ we obtain \begin{equation*} \begin{split} & \varepsilon \int_0^{T-\Delta_t} \int_{\Gamma^{\varepsilon}} \abs{v^{\varepsilon}(t+\Delta_t)-v^{\varepsilon}(t)}^2\, dS \, dt \\ & \quad\quad + \varepsilon \int_{\Gamma^{\varepsilon}} \left(v^{\varepsilon}(T)-v^{\varepsilon}(T-\Delta_t) \right) \left(-\int_{T-\Delta_t}^T v^{\varepsilon}(s)\,ds \right)\, dS \\ & \quad\quad - \varepsilon \int_{\Gamma^{\varepsilon}} \left(v^{\varepsilon}(\Delta_t)-v^{\varepsilon}_0 \right) \left(-\int_0^{\Delta_t} v^{\varepsilon}(s)\,ds \right)\, dS \\ & \quad \quad +\sum_{j=i,e}\int_0^{T-\Delta_t} \int_{\Omega_j^{\varepsilon}} \sigma_j^{\varepsilon} \nabla \left(u_j^{\varepsilon}(t+\Delta_t)-u_j^{\varepsilon}(t)\right) \\& \quad \qquad \qquad\qquad\qquad\qquad\quad \cdot \nabla \left(-\int_{t}^{t+\Delta_t}u_j^{\varepsilon}(s)\,ds\right) \, dx\, dt \\ & \quad\quad + \varepsilon \int_0^{T-\Delta_t}\int_{\Gamma^{\varepsilon}} \left( I(v^{\varepsilon}(t+\Delta_t),w^{\varepsilon}(t+\Delta_t)) - I(v^{\varepsilon}(t),w^{\varepsilon}(t)) \right) \\ & \quad \qquad \qquad\qquad\qquad\qquad\quad \times \left(-\int_{t}^{t+\Delta_t}v^{\varepsilon}(s)\,ds\right) \,dS \, dt \\ & \quad\quad + \varepsilon \int_0^{T-\Delta_t}\int_{\Gamma^{\varepsilon}} \left( H(v^{\varepsilon}(t+\Delta_t),w^{\varepsilon}(t+\Delta_t)) - H(v^{\varepsilon}(t),w^{\varepsilon}(t))\right) \\ & \quad \qquad \qquad\qquad\qquad\qquad\quad \times \left(\int_{t}^{t+\Delta_t}w^{\varepsilon}(s)\,ds\right) \,dS \, dt \\ & \qquad \quad \qquad = \sum_{j=i,e}\int_0^{T-\Delta_t} \int_{\Omega_j^{\varepsilon}} s_j^\varepsilon
\left(-\int_{t}^{t+\Delta_t}u_j^{\varepsilon}(s)\,ds\right) \, dx \, dt. \end{split} \end{equation*} Let us write this equation as $$ \varepsilon \int_0^{T-\Delta_t} \int_{\Gamma^{\varepsilon}} \abs{v^{\varepsilon}(t+\Delta_t)-v^{\varepsilon}(t)}^2\, dS\, dt +J_1+J_2+J_3+J_4+J_5=J_6. $$ By the Cauchy-Schwarz and Minkowski integral inequalities, \begin{equation*} \begin{split} &\abs{J_1} \leq \varepsilon\norm{v^{\varepsilon}(T)-v^{\varepsilon}(T-\Delta_t)}_{L^2(\Gamma^{\varepsilon})} \norm{\int_{T-\Delta_t}^T v^{\varepsilon}(s)\,ds}_{L^2(\Gamma^{\varepsilon})} \\ & \qquad \leq 2 \varepsilon \norm{v^{\varepsilon}}_{L^\infty(0,T;L^2(\Gamma^{\varepsilon}))} \int_{T-\Delta_t}^T\norm{v^{\varepsilon}}_{L^2(\Gamma^{\varepsilon})}\,ds \\ & \qquad \leq 2 \varepsilon \norm{v^{\varepsilon}}_{L^{\infty}(0,T;L^2(\Gamma^{\varepsilon}))}^2\Delta_t \overset{\eqref{eq:main-est}-(c)}{\le} C_1 \Delta_t. \end{split} \end{equation*} Similarly, $\abs{J_2}\leq C_2\Delta_t$. We need the following facts involving a real-valued function $f\in L^p$ ($p>1$): \begin{equation} \label{convolution} \begin{split} & \int_t^{t+\Delta_t} f(s)\,ds = \left(\mathbbm{1}_{[-\Delta_t,0]} \star f\right)(t), \\ & \norm{\mathbbm{1}_{[-\Delta_t,0]} \star f}_{L^p} \leq \norm{\mathbbm{1}_{[-\Delta_t,0]}}_{L^1} \norm{f}_{L^p}=\Delta_t \norm{f}_{L^p}, \end{split} \end{equation} where the second line follows from Young's convolution inequality. By \eqref{eq:matrix1} and the Cauchy-Schwarz inequality, \begin{align*} & \abs{J_3} \le \sum_{j=i,e} 2 \norm{\sigma_j^\varepsilon}_{L^\infty} \norm{\nabla u_j^{\varepsilon}}_{L^2((0,T)\times \Omega_j^{\varepsilon})} \norm{\int_{t}^{t+\Delta_t} \nabla u_j^{\varepsilon}(s)\,ds}_{L^2((0,T-\Delta_t)\times \Omega_j^{\varepsilon})}.
\end{align*} As a result of the Minkowski integral inequality, \begin{align*} & \norm{\int_{t}^{t+\Delta_t} \nabla u_j^{\varepsilon}(s) \,ds}_{L^2((0,T-\Delta_t)\times \Omega_j^{\varepsilon})}^2 \\ & \qquad = \int_0^{T-\Delta_t}\int_{\Omega_j^{\varepsilon}} \abs{\int_t^{t+\Delta_t}\nabla u_j^\varepsilon(s) \, ds}^2\, dx \,dt \\ & \qquad \le \int_0^{T-\Delta_t} \left(\int_t^{t+\Delta_t} \left(\int_{\Omega_j^{\varepsilon}} \abs{\nabla u_j^\varepsilon(s)}^2 \,dx\right)^{1/2} \, ds\right)^2 \,dt \\ & \qquad = \int_0^{T-\Delta_t}\left(\int_t^{t+\Delta_t} \norm{\nabla u_j^\varepsilon(s)}_{L^2(\Omega_j^{\varepsilon})} \, ds\right)^2 \,dt \\ & \qquad \overset{\eqref{convolution}}{\le} \Delta_t^2 \int_0^{T-\Delta_t} \norm{\nabla u_j^\varepsilon}_{L^2(\Omega_j^{\varepsilon})}^2 \, dt \le \Delta_t^2 \norm{\nabla u_j^\varepsilon}_{L^2((0,T)\times \Omega_j^{\varepsilon})}^2. \end{align*} Therefore, $$ \abs{J_3}\le \sum_{j=i,e} 2 \norm{\sigma_j^\varepsilon}_{L^\infty} \norm{\nabla u_j^\varepsilon}_{L^2((0,T)\times \Omega_j^{\varepsilon})}^2\Delta_t \overset{\eqref{eq:main-est}-(a)}{\le} C_3\Delta_t.
$$ An application of H\"{o}lder's inequality yields \begin{equation*} \begin{split} \abs{J_4} & \leq 2 \varepsilon \norm{I^{\varepsilon}}_{L^{4/3}((0,T)\times \Gamma^{\varepsilon})} \norm{\int_{t}^{t+\Delta_t}v^{\varepsilon}(s) \,ds}_{L^4((0,T-\Delta_t)\times \Gamma^{\varepsilon})} \\ & \overset{\eqref{I43}}{\le} C_4 \varepsilon \left(\norm{v^{\varepsilon}}^4_{L^4((0,T)\times \Gamma^{\varepsilon})} + \norm{w^{\varepsilon}}_{L^2((0,T)\times \Gamma^{\varepsilon})}^2\right)^{3/4} \Delta_t \norm{v^{\varepsilon}}_{L^4((0,T)\times \Gamma^{\varepsilon})} \\& \le C_5 \Delta_t \varepsilon \left( \norm{v^{\varepsilon}}_{L^4((0,T)\times \Gamma^{\varepsilon})}^4 + \norm{w^{\varepsilon}}_{L^2((0,T)\times \Gamma^{\varepsilon})}^2 \right) \overset{\eqref{eq:main-est}-(d,e)}{\le} C_6 \Delta_t, \end{split} \end{equation*} where we have repeated the argument for $J_1$ involving \eqref{convolution}, which implies that $\norm{\int_{t}^{t+\Delta_t}v^{\varepsilon}(s)\,ds}_{L^4_{t,x}}$ is bounded by $\Delta_t \norm{v^{\varepsilon}}_{L^4_{t,x}}$. Moreover, we have used the basic inequality $ab\le \frac34 a^{4/3} + \frac14 b^4$ for positive numbers $a,b$.
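For the reader's convenience, the basic inequality just invoked is nothing but the two-term Young inequality with conjugate exponents $p=4/3$ and $q=4$ (so that $1/p+1/q=1$): for $a,b\ge 0$,
$$
ab \le \frac{a^{p}}{p} + \frac{b^{q}}{q} = \frac34\, a^{4/3} + \frac14\, b^{4},
$$
applied here with $a=\left(\norm{v^{\varepsilon}}_{L^4((0,T)\times \Gamma^{\varepsilon})}^4 +\norm{w^{\varepsilon}}_{L^2((0,T)\times \Gamma^{\varepsilon})}^2\right)^{3/4}$ and $b=\norm{v^{\varepsilon}}_{L^4((0,T)\times \Gamma^{\varepsilon})}$, so that $a^{4/3}$ and $b^4$ are both controlled by \eqref{eq:main-est}-(d,e).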
Similarly, \begin{equation*} \begin{split} \abs{J_5} & \overset{\eqref{GFHN}}{\le} C_7\Delta_t \varepsilon \left( \norm{v^{\varepsilon}}_{L^4((0,T)\times \Gamma^{\varepsilon})}^4 +\norm{w^{\varepsilon}}_{L^2((0,T)\times \Gamma^{\varepsilon})}^2\right)^{1/2} \norm{v^\varepsilon}_{L^2((0,T)\times \Gamma^{\varepsilon})} \\ & \overset{\eqref{eq:main-est}-(d,e)}{\le} C_8 \Delta_t, \end{split} \end{equation*} and \begin{equation*} \abs{J_6}\leq 2 \Delta_t \sum_{j=i,e}\norm{s_j^{\varepsilon}}_{L^2((0,T)\times \Omega_j^{\varepsilon})} \norm{u_j^{\varepsilon}}_{L^2((0,T)\times \Omega_j^{\varepsilon})} \overset{\eqref{assumptionStimulation}, \eqref{eq:main-est}-(b)}{\le} C_9 \Delta_t. \end{equation*} Summarizing our findings, we conclude that \eqref{eq:ve-time-trans} holds. \end{proof} \begin{remark} Due to the degenerate structure of the microscopic system \eqref{micro}, temporal estimates are not available for the intra- and extracellular potentials $u_i^\varepsilon, u_e^\varepsilon$. \end{remark} \section{The homogenization result} \label{sec:convergence} This section contains the main result of the paper. We start by recalling the weak formulation of the macroscopic bidomain system \eqref{macro}, which is augmented with the following initial and boundary conditions: \begin{equation}\label{eq:ib-cond2} \begin{split} & v|_{t=0}=v_0, \quad w|_{t=0}=w_0 \;\; \text{in $\Omega$}, \\ & n \cdot M_j \nabla u_j = 0 \;\; \text{on $(0,T)\times \partial \Omega$, $j=i,e$.} \end{split} \end{equation} \begin{definition}[weak formulation of bidomain system]\label{weakMacro} A weak solution to \eqref{macro}, \eqref{eq:ib-cond2} is a collection $(u_i,u_e,v,w)$ of functions satisfying the following conditions: \begin{enumerate}[i] \item (algebraic relation).
$$ v= u_i - u_e \quad \text{a.e.~in $(0,T)\times\Omega$}. $$ \item (regularity). \begin{align*} &u_j \in L^2(0,T;H^1(\Omega)), \quad j=i,e, \\ & \int_{\Omega} u_e(t,x)\, dx = 0, \quad t\in (0,T), \\ &v \in L^2(0,T; H^1(\Omega)) \cap L^4((0,T)\times \Omega), \\ & \partial_t v \in L^2(0,T; H^{-1}(\Omega)) + L^{4/3}((0,T)\times \Omega), \\ & w \in H^1(0,T;L^2(\Omega)). \end{align*} \item (initial conditions). \begin{align*} v(0)=v_0, \quad w(0) = w_0. \end{align*} \item (differential equations). \begin{equation*} \begin{split} &|\Gamma|\int_0^T \left\langle \partial_t v, \varphi_i \right\rangle \,dt + \int_0^T \int_{\Omega} M_i \nabla u_i \cdot \nabla \varphi_i\, dx \, dt \\ & \qquad\qquad + |\Gamma| \int_0^T \int_{\Omega} I(v,w)\varphi_i \, dx \, dt = |Y_i|\int_0^T\int_{\Omega} s_{i} \varphi_i \, dx \, dt, \end{split} \end{equation*} \begin{equation*} \begin{split} & |\Gamma| \int_0^T \left\langle \partial_t v,\varphi_e \right\rangle \,dt - \int_0^T \int_{\Omega} M_e \nabla u_e\cdot \nabla \varphi_e \, dx \, dt \\ & \qquad\qquad + |\Gamma|\int_0^T \int_{\Omega} I(v,w)\varphi_e \, dx \, dt = -|Y_e|\int_0^T \int_{\Omega} s_{e} \varphi_e \, dx \, dt, \end{split} \end{equation*} \begin{equation*} \int_0^T \int_{\Omega} \partial_t w\, \varphi_w \, dx \, dt = \int_0^T \int_{\Omega} H(v,w) \varphi_w \, dx \, dt, \end{equation*} \end{enumerate} for all test functions $\varphi_i,\varphi_e\in L^2(0,T; H^1(\Omega))\cap L^4((0,T)\times \Omega)$, $\varphi_w\in L^2(0,T;L^2(\Omega))$. We denote by $\left\langle \cdot, \cdot \right\rangle$ the duality pairing between $L^2(0,T; H^{-1}(\Omega)) + L^{4/3}((0,T)\times \Omega)$ and $L^2(0,T; H^1(\Omega))\cap L^4((0,T)\times \Omega)$. \end{definition} The macroscopic bidomain system is well studied for a variety of cellular models \cite{Andreianov:2010uq,Karlsen,Boulakia2008,Bourgault,Colli,Veneroni:2009aa}.
For the following result, see \cite{Andreianov:2010uq,Karlsen,Boulakia2008}. \begin{theorem}[well-posedness of bidomain system] Suppose \eqref{def:domain} holds, $I$ and $H$ satisfy the conditions in \eqref{GFHN} and \eqref{uniq-cond1} (for uniqueness), $M_i(x)$ and $M_e(x)$ are bounded, positive definite matrices, $s_i,s_e\in L^2((0,T)\times \Omega)$, and $v_0,w_0 \in L^2(\Omega)$. Then there exists a unique weak solution to the bidomain system \eqref{macro}, \eqref{eq:ib-cond2}. \end{theorem} We are now in a position to state the main result, which should be compared to Theorem 1.3 in \cite{Pennacchio2005}. \begin{theorem}[convergence to the bidomain system]\label{theorem:homo} Suppose conditions \eqref{GFHN}, \eqref{def:domain}, \eqref{uniq-cond1}, \eqref{fastSlow}, \eqref{eq:matrix1}, \eqref{eq:matrix2}, \eqref{assumptionCompatibility}, \eqref{sj-ass}, and \eqref{assumptionInitial} hold. Let $\varepsilon$ take values in a sequence tending to zero (e.g.~$\varepsilon^{-1}\in \mathbb{N}$). Then the sequence $\seq{u_i^{\varepsilon},u_e^{\varepsilon},v^{\varepsilon},w^{\varepsilon}}_{\varepsilon>0}$ of weak solutions to the microscopic system \eqref{micro}, \eqref{eq:ib-cond} two-scale converges (in the sense of \eqref{convergence} below) to the weak solution $(u_i,u_e,v,w)$ of the macroscopic bidomain system \eqref{macro}, \eqref{eq:ib-cond2}. Moreover, $\seq{v^{\varepsilon}}_{\varepsilon>0}$ converges strongly in the sense that \begin{equation}\label{strongConvergence} \varepsilon^{1/2} \norm{v^{\varepsilon}-v}_{L^2\left( \Gamma^{\varepsilon} \right)} \to 0, \quad \text{as $\varepsilon \rightarrow 0$.} \end{equation} \end{theorem} \begin{remark} The strong convergence \eqref{strongConvergence} is a corrector-type result.
In the current setting it allows us to pass to the limit in the nonlinear ionic terms. By employing standard techniques \cite{Donato} one can also show that the energies $$ \seq{\int_0^T\int_{\Omega_j^{\varepsilon}} \sigma_j^{\varepsilon} \nabla u_j^{\varepsilon} \cdot \nabla u_j^{\varepsilon} \, dx \, dt}_{\varepsilon>0} $$ converge to the homogenized energy $$ \int_0^T\int_{\Omega} M_j \nabla u_j \cdot \nabla u_j \, dx \, dt, $$ where $M_j$ is defined in \eqref{Mj}, $j=i,e$. \end{remark} The rest of this section is devoted to the proof of Theorem \ref{theorem:homo}. Homogenization of the linear terms in \eqref{micro} is handled with standard techniques, cf.~Subsection \ref{subsec:two-scale}. Passing to the limit in the nonlinear terms $I,H$ is more challenging. Although the intracellular/extracellular functions $u_i^{\varepsilon}$ (defined on $\Omega_i^\varepsilon$) and $u_e^{\varepsilon}$ (defined on $\Omega_e^\varepsilon$) do not converge strongly, some kind of strong compactness is expected for the $\varepsilon$-scaled version of the transmembrane potential $v^{\varepsilon}= u_i^{\varepsilon}\big|_{\Gamma^{\varepsilon}}-u_e^{\varepsilon}\big|_{\Gamma^{\varepsilon}}$ (defined on $\Gamma^{\varepsilon}$), since we control both the temporal \eqref{eq:ve-time-trans} and the spatial (fractional) derivatives \eqref{eq:main-est}. Wild oscillations of the underlying domain do, however, pose difficulties. For this reason, we use the boundary unfolding operator $\mathcal{T}_{\varepsilon}^b$, cf.~\eqref{unfolding}, to transform the problem of convergence on the oscillating set $\Gamma^{\varepsilon}$ to the fixed set $\Omega \times \Gamma$. Roughly speaking, \eqref{eq:main-est} is used to conclude that $\mathcal{T}_{\varepsilon}^b(v^\varepsilon)$ is uniformly bounded in $L^2_tL^2_xH^{1/2}_y$.
In addition, in view of \eqref{eq:ve-time-trans}, $\mathcal{T}_{\varepsilon}^b(v^\varepsilon)$ possesses an $\varepsilon$-uniform temporal translation estimate with respect to the $L^2_{t,x,y}$ norm. As a result, the Simon compactness result (cf.~Subsection \ref{sec:notation}) implies that $\seq{(t,y)\mapsto \mathcal{T}_{\varepsilon}^b(v^\varepsilon)(t,x,y)}_{\varepsilon>0}$ is precompact in $L^2((0,T)\times \Gamma)$, for fixed $x$. Next we demonstrate that $\mathcal{T}_{\varepsilon}^b(v^\varepsilon)$ is equicontinuous in $x$ (with values in $L^2((0,T)\times \Gamma)$). Applying the Simon-type compactness criterion found in \cite{Neuss} (cf.~Theorem \ref{compactSurface0} below), it follows that $\seq{\mathcal{T}_{\varepsilon}^b(v^{\varepsilon})}_{\varepsilon>0}$ converges along a subsequence. Owing to the uniqueness of solutions to the bidomain system (Definition \ref{weakMacro}), the entire sequence $\seq{\mathcal{T}_{\varepsilon}^b(v^{\varepsilon})}_{\varepsilon>0}$ converges (not just a subsequence). We refer to Subsection \ref{subsec:strongconv} for details. For inspirational works deriving macroscopic models by combining two-scale and unfolding techniques, we refer to \cite{Neuss,Gahn:2016aa,Graf:2014aa,Marciniak-Czochra:2008aa,Neuss-Radu:2007aa}. \subsection{Extracting two-scale limits}\label{subsec:two-scale} Recall that $\tilde{\cdot}$ denotes the extension to $\Omega$ by zero, and that $\mathbbm{1}_{Y_j}$ is the indicator function of $Y_j$ ($j=i,e$).
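Before extracting limits, we briefly recall (in the notation of Theorem \ref{twoS}; see also \cite{Allaire2}) what is meant by the two-scale convergences appearing below. A bounded sequence $\seq{u^\varepsilon}_{\varepsilon>0}$ in $L^2((0,T)\times \Omega)$ two-scale converges to $u_0\in L^2((0,T)\times \Omega\times Y)$, written $u^\varepsilon \overset{2}{\rightharpoonup} u_0$, provided
$$
\lim_{\varepsilon\to 0} \int_0^T\!\!\int_{\Omega} u^\varepsilon(t,x)\, \varphi\left(t,x,\frac{x}{\varepsilon}\right) dx\, dt = \int_0^T\!\!\int_{\Omega}\int_Y u_0(t,x,y)\, \varphi(t,x,y) \,dy\, dx\, dt,
$$
for all $\varphi \in C_0^{\infty}((0,T)\times \Omega; C_{\mathrm{per}}(Y))$. The surface variant $\overset{2-\mathrm{S}}{\rightharpoonup}$ is defined analogously, with $\varepsilon\int_{\Gamma^{\varepsilon}} \cdot \,dS$ on the left-hand side and $\int_{\Omega}\int_{\Gamma} \cdot \,dS(y)\,dx$ on the right-hand side.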
Using the a priori estimates provided by Lemma \ref{lemma:energy}, we can apply Lemma \ref{twoH1perf}, Theorem \ref{twoS}, and Lemma \ref{relation trace} to extract subsequences (not relabelled) such that \begin{equation}\label{convergence} \begin{split} &\widetilde{u_j^{\varepsilon}} \overset{2}{\rightharpoonup} \mathbbm{1}_{Y_j}(y)u_j(t,x), \quad j=i,e, \\ & \widetilde{\nabla u_j^{\varepsilon}} \overset{2}{\rightharpoonup} \mathbbm{1}_{Y_j}(y)\left( \nabla_x u_j(t,x) + \nabla_y u_j^1(t,x,y) \right), \quad j=i,e, \\ & v^{\varepsilon}\overset{2-\mathrm{S}}{\rightharpoonup} v = u_i-u_e, \\ & w^{\varepsilon}\overset{2-\mathrm{S}}{\rightharpoonup} w, \quad \partial_t w^{\varepsilon} \overset{2-\mathrm{S}}{\rightharpoonup} \partial_t w, \\ & I(v^{\varepsilon},w^{\varepsilon}) \overset{2-\mathrm{S}}{\rightharpoonup} \overline{I}, \quad H(v^{\varepsilon},w^{\varepsilon}) \overset{2-\mathrm{S}}{\rightharpoonup} \overline{H}, \end{split} \end{equation} for some limits $u_i,u_e \in L^2(0,T;H^1(\Omega))$, $u_i^1,u_e^1 \in L^2((0,T)\times \Omega ;H_{\mathrm{per}}^1(Y))$, and $w \in L^2((0,T)\times\Omega \times \Gamma )$. Here we identify $v=u_i-u_e$ as an element of $L^2(0,T;H^1(\Omega))$. It is easily verified that the two-scale limit $u_e$ satisfies $\int_{\Omega} u_e \, dx=0$. Nonlinear functions are not continuous with respect to weak convergence, which prevents us from immediately making the identifications $$ \overline{I} = I(v,w), \qquad \overline{H} = H(v,w).
$$ Using the two-scale convergences in \eqref{convergence}, Remark \ref{rem:well-defined}, and the choice of test function \begin{align*} & \varphi(t,x)+\varepsilon\varphi_1\left(t,x,\frac{x}{\varepsilon}\right), \quad \text{with} \\ & \varphi \in C^{\infty}_0((0,T)\times \Omega), \; \varphi_1 \in C^\infty_0((0,T)\times\Omega;C^\infty_{\mathrm{per}}(Y)), \end{align*} in \eqref{weak_i}, \eqref{weak_e}, and \eqref{weak_w}, standard manipulations \cite{Donato,Allaire2} reveal that the two-scale limit $(u_i,u_e,v,w)$ satisfies the following equations: \begin{equation}\label{u_iLim} \begin{split} &|\Gamma|\int_0^T \left\langle \partial_t v, \varphi \right\rangle \, dt +\lim_{\varepsilon\rightarrow 0}\varepsilon \int_0^T \int_{\Gamma^{\varepsilon}} I(v^{\varepsilon},w^{\varepsilon}) \varphi \,dS \,dt \\ & \qquad\qquad\qquad +\int_0^T\int_{\Omega} M_i \nabla u_i\cdot \nabla \varphi \, dx \, dt =|Y_i|\int_0^T\int_{\Omega} s_i \varphi \, dx \, dt, \end{split} \end{equation} \begin{equation}\label{u_eLim} \begin{split} & |\Gamma|\int_0^T \left\langle \partial_t v, \varphi \right\rangle \, dt +\lim_{\varepsilon\rightarrow 0} \varepsilon \int_0^T \int_{\Gamma^{\varepsilon}} I(v^{\varepsilon},w^{\varepsilon}) \varphi \,dS \,dt \\ & \qquad\qquad\qquad -\int_0^T\int_{\Omega} M_e \nabla u_e\cdot \nabla \varphi \, dx \, dt = -|Y_e|\int_0^T\int_{\Omega} s_e \varphi\, dx \, dt, \end{split} \end{equation} and \begin{equation}\label{wLim} |\Gamma|\int_0^T \int_{\Omega} \partial_t w\, \varphi \, dx \, dt = \lim_{\varepsilon\rightarrow 0} \varepsilon \int_0^T \int_{\Gamma^{\varepsilon}} H(v^{\varepsilon},w^{\varepsilon}) \varphi \,dS \,dt, \end{equation} for all $\varphi \in C^{\infty}_0((0,T)\times \Omega)$. In \eqref{u_iLim} and \eqref{u_eLim}, $M_i$ and $M_e$ are the homogenized conductivity tensors \eqref{Mj}.
Let us briefly recall how one arrives at the homogenized conductivities. Setting $\varphi\equiv 0$ and considering $$ \Phi(t,x,y) := \sigma_j(x,y)\nabla_y \varphi_1(t,x,y) $$ as a test function for two-scale convergence, we have by \eqref{convergence} that \begin{align*} &\lim_{\varepsilon\to 0} \int_0^T \int_{\Omega_j^{\varepsilon}} \Phi\left(t,x,\frac{x}{\varepsilon}\right) \cdot \nabla u_j^{\varepsilon} \, dx\,dt \\ & \qquad =\int_0^T \int_{\Omega} \int_{Y_j} \sigma_j \left[ \nabla_x u_j + \nabla_y u^1_j \right] \cdot \nabla_y \varphi_1 \,dy\, dx\, dt. \end{align*} Note that the oscillating $\varphi_1$ term is suppressed in the limit of the weak formulation \eqref{weak} by the $\varepsilon$-factor, except in the term where a gradient hits the test function. Thus, the two-scale limit $(u_j,u_j^1)$ satisfies the equation $$ \int_0^T \int_{\Omega}\int_{Y_j} \sigma_j \left[ \nabla_x u_j + \nabla_y u^1_j \right] \cdot \nabla_y \varphi_1 \,dy \, dx\,dt =0, $$ for all $\varphi_1 \in C_0^{\infty}((0,T)\times\Omega;C_{\mathrm{per}}^{\infty}(Y))$. This equation is satisfied by $u_j^1 = \chi_j \cdot \nabla_x u_j$, where $\chi_j$ is the first order corrector \eqref{chi}. Hence, for any $y$-independent function $\varphi_1(t,x,y)\equiv \psi(t,x) \in C_0^{\infty}((0,T)\times \Omega)$, \begin{align*} &\int_0^T \int_{\Omega}\int_{Y_j} \sigma_j \left[ \nabla_x u_j + \nabla_y u^1_j \right] \cdot \nabla_x \psi \,dy \, dx\,dt \\ & \qquad = \int_0^T\int_{\Omega} \left(\int_{Y_j} \sigma_j \left(I + \nabla_y \chi_j\right) \, dy \right) \nabla_x u_j \cdot \nabla_x \psi \,dx\, dt, \end{align*} so \eqref{Mj} is indeed the homogenized conductivity tensor.
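For completeness (and modulo the precise normalization adopted in \eqref{chi}), the components $\chi_j^k$ of the first order corrector solve the standard periodic cell problems on the perforated cell: for $k=1,2,3$,
$$
-\nabla_y \cdot \left(\sigma_j(x,y)\left(e_k+\nabla_y \chi_j^k\right)\right)=0 \;\;\text{in } Y_j, \qquad \sigma_j\left(e_k+\nabla_y \chi_j^k\right)\cdot n =0 \;\;\text{on } \Gamma,
$$
with $\chi_j^k$ $Y$-periodic, which is exactly the strong form of the $\nabla_y$-tested equation above with $u_j^1=\chi_j\cdot\nabla_x u_j$.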
Up to the convergences of the nonlinear terms, it is now clear that \eqref{u_iLim}, \eqref{u_eLim}, and \eqref{wLim} constitute the weak formulation of the bidomain system \eqref{macro}, \eqref{eq:ib-cond2} (in the sense of Definition \ref{weakMacro}). \subsection{The nonlinear terms and strong convergence}\label{subsec:strongconv} To finalize the proof of Theorem \ref{theorem:homo}, it remains to identify the limits \begin{equation}\label{limitIntegrals} \begin{split} \lim_{\varepsilon\rightarrow 0} \varepsilon \int_0^T \int_{\Gamma^{\varepsilon}} I(v^{\varepsilon},w^{\varepsilon}) \varphi \,dS \,dt & = |\Gamma| \int_0^T \int_{\Omega} I(v,w) \varphi \, dx \, dt, \\ \lim_{\varepsilon\rightarrow 0} \varepsilon \int_0^T \int_{\Gamma^{\varepsilon}} H(v^{\varepsilon},w^{\varepsilon}) \varphi \,dS \,dt & = |\Gamma| \int_0^T \int_{\Omega} H(v,w) \varphi \, dx \, dt, \end{split} \end{equation} for all $\varphi \in C^{\infty}_0((0,T)\times\Omega)$. We note that the unfolding operator \eqref{IntegralTransform} allows us to transform the oscillating surface integral $$ \varepsilon \int_0^T \int_{\Gamma^{\varepsilon}} I(v^{\varepsilon},w^{\varepsilon})\varphi \,dS \, dt $$ into $$ \int_0^T \int_{\Omega}\int_{\Gamma} I\left(\mathcal{T}_{\varepsilon}^b(v^{\varepsilon}),\mathcal{T}_{\varepsilon}^b(w^{\varepsilon})\right) \mathcal{T}_{\varepsilon}^b(\varphi) \,dS(y) \, dx\, dt, $$ by the integration formula \eqref{IntegralTransform} for $\mathcal{T}_{\varepsilon}^b$. Additionally, we have used \eqref{Tb-product} and the fact that $\mathcal{T}_{\varepsilon}^b \left(I(v^{\varepsilon},w^{\varepsilon})\right) =I\left(\mathcal{T}_{\varepsilon}^b(v^{\varepsilon}),\mathcal{T}_{\varepsilon}^b(w^{\varepsilon})\right)$.
The smoothness of $\varphi$ implies that $\mathcal{T}_{\varepsilon}^b(\varphi)\to \varphi$ in $L^2((0,T)\times\Omega \times \Gamma )$ as $\varepsilon\to 0$, cf.~\eqref{Tb-strongconv}, so to identify the limits \eqref{limitIntegrals} it suffices to show that $$ I\left(\mathcal{T}_{\varepsilon}^b(v^{\varepsilon}),\mathcal{T}_{\varepsilon}^b(w^{\varepsilon})\right) \to I(v,w), \quad \text{weakly in $L^2((0,T)\times\Omega \times \Gamma)$}, $$ where $v,w$ are identified in \eqref{convergence}. Moreover, since $w^{\varepsilon}$ appears linearly in $I$ and $H$, cf.~\eqref{GFHN}, the weak convergence of $\seq{\mathcal{T}_{\varepsilon}^b(w^{\varepsilon})}_{\varepsilon>0}$ in $L^2((0,T)\times\Omega\times\Gamma)$ is enough to pass to the limit in \eqref{limitIntegrals}, provided we establish strong convergence of $\seq{\mathcal{T}_{\varepsilon}^b(v^{\varepsilon})}_{\varepsilon>0}$. As a step towards verifying the required strong convergence, we need to show that $\seq{(t,y)\mapsto \mathcal{T}_{\varepsilon}^b (v^\varepsilon)}_{\varepsilon>0}$ is strongly precompact in $L^2_{t,y}$, for fixed $x\in \Omega$. As a result of Lemma \ref{lemma:energy}, $\mathcal{T}_{\varepsilon}^b (v^\varepsilon)$ is bounded in $L^2_tL^2_xH^{1/2}_y$, uniformly in $\varepsilon$. However, according to Lemma \ref{lemma:energy}, the time derivative $\partial_t v^\varepsilon$ is merely of order $1/\varepsilon$ in the $L^2_tH^{-1/2}_x$ norm. Therefore, we cannot expect $\partial_t \mathcal{T}_{\varepsilon}^b(v^\varepsilon)$ (assuming this object is meaningful) to be bounded in $L^2_tL^2_xH^{-1/2}_y$, uniformly in $\varepsilon$. As a consequence, strong $L^2_{t,y}$ compactness of $\seq{\mathcal{T}_{\varepsilon}^b (v^\varepsilon)}_{\varepsilon>0}$ is not deducible from the classical Aubin-Lions theorem.
Instead of attempting to control (in a negative space) the whole derivative $\partial_t \mathcal{T}_{\varepsilon}^b(v^\varepsilon)$, we will make use of a temporal translation estimate with respect to the $L^2$ norm (cf.~the lemma below). The $L^2_{t,y}$ compactness will then be a consequence of the Simon lemma (cf.~Subsection \ref{sec:notation}). The rest of this section is devoted to the detailed proof that $\seq{\mathcal{T}_{\varepsilon}^b(v^{\varepsilon})}_{\varepsilon>0}$ is strongly precompact in the fixed space $L^2((0,T)\times \Omega\times\Gamma)$. We begin with \begin{lemma}\label{lem:Tb-temporal-spatial} There exists a constant $C$, independent of $\varepsilon$, such that \begin{equation}\label{eq:TbH12} \norm{\mathcal{T}_{\varepsilon}^b (v^{\varepsilon})}_{L^2(0,T;L^2(\Omega;H^{1/2}(\Gamma)))}\le C \end{equation} and \begin{equation}\label{eq:Tb-temporal} \norm{\mathcal{T}_{\varepsilon}^b(v^\varepsilon)(\cdot+\Delta_t,\cdot,\cdot) -\mathcal{T}_{\varepsilon}^b(v^\varepsilon)(\cdot,\cdot,\cdot)}_{L^2(0,T-\Delta_t;L^2(\Omega\times \Gamma))} \le C \Delta_t^{\frac12}, \end{equation} for sufficiently small temporal shifts $\Delta_t>0$. \end{lemma} \begin{proof} By \eqref{ve-def} and the linearity of $\mathcal{T}_\varepsilon^b(\cdot)$, we have $\mathcal{T}_{\varepsilon}^b (v^{\varepsilon})=\mathcal{T}_{\varepsilon}^b(u_i^{\varepsilon}|_{\Gamma^{\varepsilon}}) - \mathcal{T}_{\varepsilon}^b(u_e^{\varepsilon}|_{\Gamma^{\varepsilon}})$. Thus, \eqref{eq:TbH12} follows by integrating \eqref{unfoldingH1/2} over $(0,T)$ and using estimates (a), (b) in \eqref{eq:main-est}.
Regarding \eqref{eq:Tb-temporal}, we use the linearity of $\mathcal{T}_{\varepsilon}^b$, \eqref{Tb-L2Bound}, and \eqref{eq:ve-time-trans} to obtain \begin{align*} & \int_0^{T-\Delta_t} \int_\Omega \int_\Gamma \abs{\mathcal{T}_{\varepsilon}^b(v^\varepsilon)(t+\Delta_t,x,y) -\mathcal{T}_{\varepsilon}^b(v^\varepsilon)(t,x,y)}^2\, dS(y) \,dx\,dt \\ & \quad = \varepsilon \int_0^{T-\Delta_t}\int_{\Gamma^{\varepsilon}} \abs{v^\varepsilon(t+\Delta_t,x)-v^\varepsilon(t,x)}^2 \,dS(x) \,dt \le C \Delta_t. \end{align*} \end{proof} Let us think of $\mathcal{T}_{\varepsilon}^b(v^\varepsilon)$ as a function of $x\in \Omega$, with values in $L^2(0,T;L^2(\Gamma))$. Fixing $x$, in view of Lemma \ref{lem:Tb-temporal-spatial} and Simon's compactness criterion, the sequence $\seq{(t,y)\mapsto \mathcal{T}_{\varepsilon}^b(v^{\varepsilon})(t,x,y)}_{\varepsilon>0}$ is precompact in $L^2((0,T)\times \Gamma)$. The $x$-variable is more difficult. As a matter of fact, since $\mathcal{T}_{\varepsilon}^b(v^{\varepsilon})$ is piecewise constant as a function of $x$ and thus does not belong to any Sobolev space, strong compactness in $x$ is not immediately clear. We address this issue by deriving a translation estimate in the $x$-variable. To be more precise, we make use of a convenient Simon-type compactness criterion ($x$ playing the role of time) established in \cite[Corollary 2.5]{Neuss} (see also \cite[Section 5]{Amann:2000aa}), which is recalled next. For $Q\subset \mathbb{R}^n$ and $\xi\in \mathbb{R}^n$ ($n\ge 1$), we set $Q_\xi:=Q \cap (Q-\xi)=\seq{x\in Q: x+\xi\in Q}$, $\Sigma:=\seq{-1,1}^n$ ($\abs{\Sigma}=2^n$), and $\xi_\sigma:=(\xi_1 \sigma_1,\ldots,\xi_n\sigma_n)\in \mathbb{R}^n$ for $\sigma\in \Sigma$.
If $Q=(a,b)$ ($:=\prod_{\ell=1}^n (a_\ell,b_\ell)$ for $a,b\in \mathbb{R}^n$, $a<b$) is an open rectangle, then \begin{equation}\label{Qxi-decomp} Q=\bigcup_{\sigma\in \Sigma} Q_{\xi_\sigma}, \qquad \text{for any $\xi \in \mathbb{R}^n$ such that $Q_\xi \neq \emptyset$}. \end{equation} Let $B$ be a Banach space. For $f:Q\to B$ and $\Delta\in \mathbb{R}^n$, we define the translation operator $\tau_\Delta:(Q-\Delta)\to B$ by $$ \tau_\Delta f(x)=f(x+\Delta). $$ The following theorem, due to Gahn and Neuss-Radu \cite{Neuss}, is a multi-dimensional generalization of Simon's main result \cite[Theorem 1]{Simon:1987vn}. \begin{theorem}[\cite{Neuss}]\label{compactSurface0} Let $\mathcal{F} \subset L^p(Q;B)$ for some Banach space $B$, open rectangle $Q=(a,b)\subset \mathbb{R}^n$, and $p\in [1,\infty)$. Then $\mathcal{F}$ is precompact in $L^p(Q;B)$ if and only if \begin{enumerate}[i.] \item $\seq{\int_A f \,dx \, \big | \, f \in \mathcal{F}}$ is precompact in $B$, for every open rectangle $A\subset Q$; \item for each $z\in \mathbb{R}^n$ with $0<z<b-a$, \begin{equation}\label{shift-var-domain} \sup_{f \in \mathcal{F}} \norm{\tau_z f - f }_{L^p(Q_z;B)} \to 0, \qquad \text{as $z \to 0$.} \end{equation} \end{enumerate} \end{theorem} Recall \eqref{Qxi-decomp}, this time specifying $Q=(a,b)\subset \mathbb{R}^n$ and $\xi=(b-a)/2\in \mathbb{R}^n$.
Condition \eqref{shift-var-domain} in Theorem \ref{compactSurface0} is equivalent to \cite{Neuss} \begin{equation}\label{shift-fixed-domain} \sup_{f \in \mathcal{F}} \norm{\tau_{z_\sigma} f - f }_{L^p(Q_{\xi_\sigma};B)} \overset{z\to 0}{\to} 0, \qquad \text{$z\in \mathbb{R}^n$, $z\ge 0$, $\forall \sigma\in \Sigma$.} \end{equation} The difference between \eqref{shift-var-domain} and \eqref{shift-fixed-domain} is the fixed domain utilized in the latter (it does not depend on the shift $z$). We make use of \eqref{shift-fixed-domain} in the proof of Lemma \ref{lem:ver-ii} below. We now verify that the sequence $\seq{\mathcal{T}_{\varepsilon}^b(v^{\varepsilon})}_{\varepsilon>0}$ of unfolded membrane potentials satisfies the assumptions of Theorem \ref{compactSurface0}, with $B= L^2((0,T)\times \Gamma)$, $Q = \Omega$, $p=2$. \begin{lemma}[verification of $i$]\label{lem:ver-i} Given an arbitrary open rectangle $A\subset \Omega$, define the function $v^{\varepsilon}_A(t,y)$ by $$ v^{\varepsilon}_A(t,y) =\int_A \mathcal{T}_{\varepsilon}^b(v^\varepsilon)(t,x,y)\,dx, \qquad (t,y)\in (0,T)\times \Gamma. $$ Then the sequence $\seq{v^\varepsilon_A}_{\varepsilon>0}$ is precompact in $L^2((0,T)\times \Gamma)$. \end{lemma} \begin{proof} In view of Jensen's inequality, it follows that \begin{align*} \norm{v_A^{\varepsilon}}_{L^2(0,T;H^{1/2}(\Gamma))}^2 & = \int_0^T \norm{\int_A \mathcal{T}_{\varepsilon}^b(v^{\varepsilon})(t,x,\cdot) \,dx}_{H^{1/2}(\Gamma)}^2 \, dt \\ & \leq \abs{A}\int_0^T\int_A \norm{\mathcal{T}_{\varepsilon}^b (v^{\varepsilon})(t,x,\cdot)}^2_{H^{1/2}(\Gamma)} \,dx\, dt \\ & \le \abs{A}\norm{\mathcal{T}_{\varepsilon}^b (v^\varepsilon)}_{L^2(0,T;L^2(\Omega;H^{1/2}(\Gamma)))}^2 \overset{\eqref{eq:TbH12}}{\le} C. \end{align*} Let $\Delta_t>0$ be a small temporal shift.
Again using Jensen's inequality, \begin{align*} & \norm{v_A^{\varepsilon}(\cdot+\Delta_t,\cdot) -v_A^{\varepsilon}(\cdot,\cdot)}_{L^2(0,T-\Delta_t;L^2(\Gamma))}^2 \\ & \quad \leq \abs{A}\int_0^{T-\Delta_t}\int_A \norm{\mathcal{T}_{\varepsilon}^b (v^{\varepsilon})(t+\Delta_t,x,\cdot)- \mathcal{T}_{\varepsilon}^b (v^{\varepsilon})(t,x,\cdot)}^2_{L^2(\Gamma)} \,dx\, dt \\ & \quad \le \abs{A}\norm{\mathcal{T}_{\varepsilon}^b (v^{\varepsilon})(\cdot+\Delta_t,\cdot,\cdot)- \mathcal{T}_{\varepsilon}^b (v^{\varepsilon})(\cdot,\cdot,\cdot)}^2_{L^2(0,T-\Delta_t;L^2(\Omega\times\Gamma))} \overset{\eqref{eq:Tb-temporal}}{\le} C \Delta_t. \end{align*} Summarizing, there exists an $\varepsilon$-independent constant $C$ such that $$ \norm{v_A^{\varepsilon}}_{L^2(0,T;H^{1/2}(\Gamma))}\le C, \quad \norm{v_A^{\varepsilon}(\cdot+\Delta_t,\cdot) -v_A^{\varepsilon}(\cdot,\cdot)}_{L^2(0,T-\Delta_t;L^2(\Gamma))} \le C \Delta_t^{1/2}. $$ The lemma follows from these estimates and Simon's compactness criterion. \end{proof} In the next lemma we verify \eqref{shift-fixed-domain} with $\xi=(1/2,1/2,1/2)\in \mathbb{R}^3$, which is equivalent to condition $ii$ in Theorem \ref{compactSurface0}. \begin{lemma}[verification of $ii$]\label{lem:ver-ii} Given any $\delta>0$, there exists $h>0$ such that for any $\Delta_x\in \mathbb{R}^3$ with $\abs{\Delta_x}<h$ and for all $\varepsilon\in (0,1]$, \begin{equation}\label{eq:Tb-xtranslate} \norm{\mathcal{T}_{\varepsilon}^b(v^\varepsilon)(\cdot,\cdot+(\Delta_x)_\sigma,\cdot) -\mathcal{T}_{\varepsilon}^b(v^\varepsilon)(\cdot,\cdot,\cdot)}_{L^2(\Omega_{\xi_\sigma}; L^2((0,T)\times \Gamma))} < \delta, \quad \forall \sigma\in \Sigma.
\end{equation} \end{lemma} \begin{proof} We wish to estimate the quantity \begin{align*} & J(\Delta_x;\varepsilon) := \norm{\mathcal{T}_{\varepsilon}^b(v^\varepsilon)(\cdot,\cdot+(\Delta_x)_\sigma,\cdot) -\mathcal{T}_{\varepsilon}^b(v^\varepsilon)(\cdot,\cdot,\cdot)}_{L^2(\Omega_{\xi_\sigma}; L^2((0,T)\times \Gamma))}^2 \\ & \quad = \int_0^T\!\!\int_{\Omega_{\xi_\sigma}}\int_\Gamma \abs{v^\varepsilon\left(t,\varepsilon\left\lfloor \frac{x+(\Delta_x)_\sigma}{\varepsilon} \right\rfloor +\varepsilon y\right) -v^\varepsilon\left(t,\varepsilon\left\lfloor \frac{x}{\varepsilon} \right\rfloor +\varepsilon y\right)}^2\, dS(y)\, dx\, dt. \end{align*} Recall that $\varepsilon$ takes values in a sequence $\subset (0,1]$ tending to zero. Fix any $\varepsilon_0>0$. Since the translation operation is continuous in $L^2$, there exists $h_0=h_0(\varepsilon_0)>0$ such that $J(\Delta_x;\varepsilon)<\delta^2$ for any $\abs{\Delta_x}<h_0$, for all $\varepsilon\in [\varepsilon_0,1]$. The rest of the proof is devoted to arguing that this holds also for $\varepsilon\in (0,\varepsilon_0)$, thereby proving \eqref{eq:Tb-xtranslate}. Choose $K^\varepsilon_{\xi_\sigma}\subset \mathbb{Z}^3$ such that $\Omega_{\xi_\sigma} =\text{interior}\left(\bigcup_{k\in K^\varepsilon_{\xi_\sigma}} \overline{\varepsilon Y^k}\right), \qquad Y^k := k+Y$. Then $J(\Delta_x;\varepsilon)$ becomes $$ \sum_{k\in K^\varepsilon_{\xi_\sigma}} \int_0^T\!\!\int_{\varepsilon Y^k}\int_\Gamma \abs{v^\varepsilon\left(t,\varepsilon\left\lfloor \frac{x+(\Delta_x)_\sigma}{\varepsilon} \right\rfloor +\varepsilon y\right) -v^\varepsilon\left(t,\varepsilon\left\lfloor \frac{x}{\varepsilon} \right\rfloor +\varepsilon y\right)}^2 \, dS(y)\, dx\, dt.
$$ If $x\in \varepsilon Y^k$, then $\left\lfloor \frac{x}{\varepsilon}\right\rfloor=k$, but we have no useful information about $\left\lfloor \frac{x+(\Delta_x)_\sigma}{\varepsilon}\right\rfloor$. To address this issue, we make use of a favorable decomposition of the cells $\varepsilon Y^k$ proposed in \cite{Neuss-Radu:2007aa} (and also utilized in e.g.~\cite{Neuss,Gahn:2016aa}). We decompose each cell $\varepsilon Y^k$ as $$ \varepsilon Y^k=\bigcup_{m\in \seq{0,1}^3}\varepsilon Y^{k,m}_\sigma, \quad \varepsilon Y^{k,m}_\sigma := \seq{x\in \varepsilon Y^k: \varepsilon\left\lfloor \frac{x+\varepsilon\seq{\frac{(\Delta_x)_\sigma}{\varepsilon}}}{\varepsilon}\right\rfloor =\varepsilon (k+m_\sigma)}, $$ for $k\in K^\varepsilon_{\xi_\sigma}$ and $\sigma\in\Sigma$. Regarding the translation, for $x\in \varepsilon Y^{k,m}_\sigma$, we write $(\Delta_x)_\sigma=\varepsilon\left\lfloor \frac{(\Delta_x)_\sigma}{\varepsilon}\right\rfloor +\varepsilon \seq{\frac{(\Delta_x)_\sigma}{\varepsilon}}$, and note that \begin{align*} \varepsilon\left\lfloor \frac{x+(\Delta_x)_\sigma}{\varepsilon}\right\rfloor & =\varepsilon\left\lfloor \frac{x+\varepsilon \seq{\frac{(\Delta_x)_\sigma}{\varepsilon}}}{\varepsilon} +\left\lfloor \frac{(\Delta_x)_\sigma}{\varepsilon}\right\rfloor\right\rfloor \\ & =\varepsilon\left\lfloor \frac{x+\varepsilon \seq{\frac{(\Delta_x)_\sigma}{\varepsilon}}}{\varepsilon}\right\rfloor +\varepsilon\left\lfloor \frac{(\Delta_x)_\sigma}{\varepsilon}\right\rfloor \\ & = \varepsilon (k+m_\sigma) +\varepsilon\left\lfloor \frac{(\Delta_x)_\sigma}{\varepsilon}\right\rfloor.
\end{align*} As a result of this, \begin{align*} J & =\sum_{k\in K^\varepsilon_{\xi_\sigma}} \sum_{m\in \seq{0,1}^3} \int_0^T \int_{\varepsilon Y^{k,m}_\sigma}\int_\Gamma \\ & \qquad\qquad \times \abs{v^\varepsilon\left(t,\varepsilon k+\varepsilon m_\sigma +\varepsilon\left\lfloor \frac{(\Delta_x)_\sigma}{\varepsilon}\right\rfloor +\varepsilon y\right) -v^\varepsilon(t,\varepsilon k +\varepsilon y)}^2 \, dS(y)\, dx\, dt \\ & \overset{\substack{x:=\varepsilon k + \varepsilon y\\ dS(x)=\varepsilon^2\, dS(y)}}{\le} \varepsilon \sum_{k\in K^\varepsilon_{\xi_\sigma}} \sum_{m\in \seq{0,1}^3}\int_0^T\int_{\varepsilon(k+\Gamma)} \\ & \qquad\qquad\qquad\qquad\qquad \times \abs{v^\varepsilon\left(t,x+\varepsilon\left(m_\sigma +\left\lfloor \frac{(\Delta_x)_\sigma}{\varepsilon}\right\rfloor\right)\right) -v^\varepsilon(t,x)}^2 \, dS(x) \, dt, \end{align*} where we have also used $\int_{\varepsilon Y^{k,m}_\sigma}\, dx \le \int_{\varepsilon Y^k} \,dx=\varepsilon^3$ to arrive at the final line. Since $\sum_{k\in K^\varepsilon_{\xi_\sigma}}\int_{\varepsilon(k+\Gamma)} =\int_{(\Gamma^{\varepsilon})_{\xi_\sigma}}$, we conclude that $$ J\le \varepsilon \sum_{m\in \seq{0,1}^3}\int_0^T\int_{(\Gamma^{\varepsilon})_{\xi_\sigma}} \abs{v^\varepsilon(t,x+z)-v^\varepsilon(t,x)}^2 \, dS(x) \, dt, $$ where the shift $z=z(\Delta_x,\varepsilon,m)$ is $\varepsilon\left(m_\sigma +\left\lfloor \frac{(\Delta_x)_\sigma}{\varepsilon}\right\rfloor\right)$, i.e., $z$ is an integer multiple of $\varepsilon$. Note that $x+z\in (\Gamma^{\varepsilon})_{\xi_\sigma}$ whenever $\Delta_x$ and $\varepsilon$ are sufficiently small.
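For the reader's convenience, the second equality in the computation above rests on the elementary identity
$$ \lfloor s+n\rfloor = \lfloor s\rfloor + n, \qquad s\in\mathbb{R},\ n\in\mathbb{Z}, $$
applied componentwise with $n=\left\lfloor \frac{(\Delta_x)_\sigma}{\varepsilon}\right\rfloor$, while the third equality is precisely the defining property of the set $\varepsilon Y^{k,m}_\sigma$.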
Recalling the definition \eqref{ve-def} of $v^\varepsilon$, the trace inequality \eqref{trace} implies \begin{align*} &\varepsilon \int_0^T\int_{(\Gamma^{\varepsilon})_{\xi_\sigma}} \abs{v^\varepsilon(t,x+z)-v^\varepsilon(t,x)}^2 \, dS(x) \, dt \\ & \qquad \leq C \sum_{j=i,e} \int_0^T \int_{(\Omega_j^{\varepsilon})_{\xi_\sigma}} \abs{u_j^\varepsilon(t,x+z) -u_j^\varepsilon(t,x)}^2 \,dx \, dt \\ & \qquad \qquad + C \varepsilon^2 \sum_{j=i,e} \int_0^T\int_{(\Omega_j^{\varepsilon})_{\xi_\sigma}} \abs{\nabla u_j^\varepsilon(t,x+z)-\nabla u_j^\varepsilon(t,x)}^2 \,dx\, dt, \end{align*} where the last term is bounded by a constant times $\varepsilon^2$ because of \eqref{eq:main-est}-(a). It remains to estimate the term on the second line, which will be done utilizing the well-known characterization of Sobolev spaces by means of translation (difference) operators. Recalling the standard proof of this characterization, a problem that arises (due to the geometry of $\Omega_j^{\varepsilon}$) is that parts of the line segment between $x$ and $x+z$ may leave $\Omega_j^{\varepsilon}$. To avoid this problem we make use of the interpolation operators \eqref{Qdefinition} to obtain functions $Q_\varepsilon^j(u_j^\varepsilon)$ defined on the whole of $\Omega$.
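The translation estimate we have in mind is the following standard fact: for $w\in H^1(\mathbb{R}^3)$ and any shift $z\in\mathbb{R}^3$,
$$ \norm{w(\cdot+z)-w}_{L^2(\mathbb{R}^3)} \le \abs{z}\, \norm{\nabla w}_{L^2(\mathbb{R}^3)}, $$
which follows from writing $w(x+z)-w(x)=\int_0^1 \nabla w(x+sz)\cdot z\, ds$ and applying the Cauchy-Schwarz inequality together with Fubini's theorem. It is in order to apply this estimate along the whole segment from $x$ to $x+z$ that functions defined on all of $\Omega$ are needed.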
Using the triangle inequality and recalling the estimates \eqref{Qestimates}, we obtain \begin{align*} &\int_0^T \int_{(\Omega_j^{\varepsilon})_{\xi_\sigma}} \abs{u_j^\varepsilon(t,x+z) -u_j^\varepsilon(t,x)}^2 \,dx \, dt \\ & \quad \leq \int_0^T \int_{(\Omega_j^{\varepsilon})_{\xi_\sigma}} \abs{u_j^\varepsilon(t,x+z)-Q_\varepsilon^j(u_j^\varepsilon)(t,x+z)}^2 \,dx \, dt \\ & \quad\qquad\qquad +\int_0^T \int_{(\Omega_j^{\varepsilon})_{\xi_\sigma}} \abs{Q_\varepsilon^j(u_j^\varepsilon)(t,x+z)-Q_\varepsilon^j(u_j^\varepsilon)(t,x)}^2 \,dx \, dt \\ & \quad\qquad\quad\qquad +\int_0^T \int_{(\Omega_j^{\varepsilon})_{\xi_\sigma}} \abs{Q_\varepsilon^j(u_j^\varepsilon)(t,x)-u_j^\varepsilon(t,x)}^2 \,dx\, dt \\ & \quad \leq C_1 \varepsilon \int_0^T \int_{(\Omega_j^{\varepsilon})_{\xi_\sigma}}\abs{\nabla u_j^\varepsilon}^2 \,dx \, dt + C_2 \abs{z}\int_0^T \int_{(\Omega_j^{\varepsilon})_{\xi_\sigma}}\abs{\nabla Q_\varepsilon^j(u_j^\varepsilon)}^2 \,dx \, dt \\ & \quad \leq C_3\bigl( \varepsilon + \abs{z}\bigr) \norm{\nabla u_j^\varepsilon}^2_{L^2(0,T;L^2(\Omega_j^{\varepsilon}))} \overset{\eqref{eq:main-est}-(a)}{\le} C_4 \bigl( \varepsilon + \abs{z}\bigr). \end{align*} Hence, $$ J \le C_5 \varepsilon + C_6 \abs{\Delta_x}, $$ where the constants $C_5,C_6$ are independent of $\Delta_x$ and $\varepsilon$. We now take the $\varepsilon_0$ introduced earlier sufficiently small that the first term on the right-hand side is $<\delta^2/2$ for all $\varepsilon<\varepsilon_0$, and we pick $h_1>0$ such that the second term is $<\delta^2/2$ for all $\abs{\Delta_x}<h_1$ (for any $\varepsilon\in (0,1]$). Specifying $h:=\min(h_0,h_1)$, the claim \eqref{eq:Tb-xtranslate} now follows.
\end{proof} \subsection{Concluding the proof of Theorem \ref{theorem:homo}}\label{sec:concl} Summarizing, we know that $$ \mathcal{T}_{\varepsilon}^b(w^{\varepsilon})\overset{\varepsilon\downarrow 0}{\rightharpoonup} w \quad \text{in $L^2((0,T)\times \Omega\times\Gamma)$}, $$ because $w^{\varepsilon}\overset{2-\mathrm{S}}{\rightharpoonup} w$, cf.~\eqref{convergence} and \eqref{weak-vs-twoscale}. Besides, $w^{\varepsilon}$ appears linearly in $I$ and $H$, cf.~\eqref{GFHN}. We have shown that $\seq{\mathcal{T}_{\varepsilon}^b(v^{\varepsilon})}_{\varepsilon>0}$ is strongly precompact in $L^2((0,T)\times \Omega\times\Gamma)$. It then follows that $\mathcal{T}_{\varepsilon}^b(v^{\varepsilon})\to v$ in $L^2((0,T)\times \Omega\times\Gamma)$ and a.e.~in $(0,T)\times \Omega\times\Gamma$, along a subsequence as $\varepsilon\to 0$ (not relabelled), where $v$ is the two-scale limit of $\seq{v^{\varepsilon}}_{\varepsilon>0}$ identified in \eqref{convergence}. By way of estimate (d) in \eqref{eq:main-est} and (the $L^p$ version of) \eqref{Tb-L2Bound}, $$ \norm{\mathcal{T}_{\varepsilon}^b (v^{\varepsilon})}_{L^4((0,T)\times\Omega\times \Gamma)} = \varepsilon^{1/4}\norm{v^{\varepsilon}}_{L^4((0,T)\times\Gamma^{\varepsilon})}\le C, $$ where $C$ is independent of $\varepsilon$. In view of this estimate and the Vitali convergence theorem, we conclude the validity of \eqref{limitIntegrals}. This finishes the proof of Theorem \ref{theorem:homo}. \begin{thebibliography}{10} \bibitem{Allaire} G.~Allaire. \newblock Homogenization and two-scale convergence. \newblock {\em SIAM J. Math. Anal.}, 23(6):1482--1518, 1992. \bibitem{Allaire2} G.~Allaire, A.~Damlamian, and U.~Hornung. \newblock Two-scale convergence on periodic surfaces and applications. \newblock In {A.
Bourgeat et al.}, editor, {\em Proceedings of the International Conference on Mathematical Modelling of Flow through Porous Media (May 1995)}, pages 15--25. World Scientific Pub., Singapore, 1996. \bibitem{Amann:2000aa} H.~Amann. \newblock Compact embeddings of vector-valued {S}obolev and {B}esov spaces. \newblock {\em Glas. Mat. Ser. III}, 35(55)(1):161--177, 2000. \bibitem{Amar2013} M.~Amar, D.~Andreucci, P.~Bisegna, and R.~Gianni. \newblock A hierarchy of models for the electrical conduction in biological tissues via two-scale convergence: the nonlinear case. \newblock {\em Differential Integral Equations}, 26(9-10):885--912, 2013. \bibitem{Andreianov:2010uq} B.~Andreianov, M.~Bendahmane, K.~H. Karlsen, and C.~Pierre. \newblock Convergence of discrete duality finite volume schemes for the cardiac bidomain model. \newblock {\em Netw. Heterog. Media}, 6(2):195--240, 2011. \bibitem{Karlsen} M.~Bendahmane and K.~H. Karlsen. \newblock Analysis of a class of degenerate reaction-diffusion systems and the bidomain model of cardiac tissue. \newblock {\em Netw. Heterog. Media}, 1(1):185--218, 2006. \bibitem{Boulakia2008} M.~Boulakia, M.~A. Fern\'andez, J.-F. Gerbeau, and N.~Zemzemi. \newblock A coupled system of {PDE}s and {ODE}s arising in electrocardiograms modeling. \newblock {\em Appl. Math. Res. Express. AMRX}, (2):Art. ID abn002, 28, 2008. \bibitem{Bourgault} Y.~Bourgault, Y.~Coudi{\`e}re, and C.~Pierre. \newblock {Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology}. \newblock {\em {Nonlinear Analysis: Real World Applications}}, 10(1):458--482, 2009. \bibitem{Boyer:2012aa} F.~Boyer and P.~Fabrie. \newblock {\em Mathematical Tools for the Study of the Incompressible Navier-Stokes Equations and Related Models}. \newblock Applied Mathematical Sciences. Springer New York, 2012. \bibitem{Cioranescu:2008aa} D.~Cioranescu, A.~Damlamian, and G.~Griso.
\newblock The periodic unfolding method in homogenization. \newblock {\em SIAM J. Math. Anal.}, 40(4):1585--1620, 2008. \bibitem{Cioranescu} D.~Cioranescu, A.~Damlamian, P.~Donato, G.~Griso, and R.~Zaki. \newblock The periodic unfolding method in domains with holes. \newblock {\em SIAM J. Math. Anal.}, 44(2):718--760, 2012. \bibitem{Donato} D.~Cioranescu and P.~Donato. \newblock {\em An introduction to homogenization}, volume~17 of {\em Oxford Lecture Series in Mathematics and its Applications}. \newblock The Clarendon Press, Oxford University Press, New York, 1999. \bibitem{ColliBook} P.~Colli~Franzone, L.~F. Pavarino, and S.~Scacchi. \newblock {\em Mathematical cardiac electrophysiology}, volume~13 of {\em MS\&A. Modeling, Simulation and Applications}. \newblock Springer, Cham, 2014. \bibitem{Colli} P.~Colli~Franzone and G.~Savar\'e. \newblock Degenerate evolution systems modeling the cardiac electric field at micro- and macroscopic level. \newblock In {\em Evolution equations, semigroups and functional analysis ({M}ilano, 2000)}, volume~50 of {\em Progr. Nonlinear Differential Equations Appl.}, pages 49--78. Birkh\"auser, Basel, 2002. \bibitem{Donato:2015aa} P.~Donato and K.~H. Le~Nguyen. \newblock Homogenization of diffusion problems with a nonlinear interfacial resistance. \newblock {\em NoDEA Nonlinear Differential Equations Appl.}, 22(5):1345--1380, 2015. \bibitem{Donato:2011aa} P.~Donato, K.~H. Le~Nguyen, and R.~Tardieu. \newblock The periodic unfolding method for a class of imperfect transmission problems. \newblock {\em J. Math. Sci. (N.Y.)}, 176(6):891--927, 2011. \newblock Problems in mathematical analysis. No. 58. \bibitem{FitzHugh1955} R.~FitzHugh. \newblock Mathematical models of threshold phenomena in the nerve membrane. \newblock {\em The bulletin of mathematical biophysics}, 17(4):257--278, Dec 1955.
\bibitem{Gahn:2016aa} M.~Gahn, M.~Neuss-Radu, and P.~Knabner. \newblock Homogenization of reaction--diffusion processes in a two-component porous medium with nonlinear flux conditions at the interface. \newblock {\em SIAM Journal on Applied Mathematics}, 76(5):1819--1843, 2016. \bibitem{Neuss} M.~Gahn and M.~Neuss-Radu. \newblock A characterization of relatively compact sets in {$L^p(\Omega, B)$}. \newblock {\em Stud. Univ. Babe\c s-Bolyai Math.}, 61(3):279--290, 2016. \bibitem{Graf:2014aa} I.~Graf and M.~A. Peter. \newblock Diffusion on surfaces and the boundary periodic unfolding operator with an application to carcinogenesis in human cells. \newblock {\em SIAM J. Math. Anal.}, 46(4):3025--3049, 2014. \bibitem{Grandelius:2017aa} E.~Grandelius. \newblock The bidomain equations of cardiac electrophysiology. \newblock Master's thesis, University of Oslo, 2017. \bibitem{Henriquez} C.~S. Henriquez and W.~Ying. \newblock {\em The Bidomain Model of Cardiac Tissue: From Microscale to Macroscale}. \newblock Springer US, Boston, MA, 2009. \bibitem{Hodgkin} A.~L. Hodgkin and A.~F. Huxley. \newblock A quantitative description of membrane current and its application to conduction and excitation in nerve. \newblock {\em J. Physiol.}, 117(4):500--544, 1952. \bibitem{Hornung:1991aa} U.~Hornung and W.~J\"ager. \newblock Diffusion, convection, adsorption, and reaction of chemicals in porous media. \newblock {\em J. Differential Equations}, 92(2):199--225, 1991. \bibitem{Keener:1996aa} J.~P. Keener and A.~V. Panfilov. \newblock A biophysical model for defibrillation of cardiac tissue. \newblock {\em Biophysical Journal}, 71(3):1335--1345, 1996. \bibitem{Keener:1998ab} J.~P. Keener. \newblock The effect of gap junctional distribution on defibrillation. \newblock {\em Chaos}, 8(1):175--187, 1998. \bibitem{Lions:1972aa} J.-L.~Lions and E.~Magenes.
\newblock {\em Non-homogeneous boundary value problems and applications}. \newblock Number v. 3 in Non-homogeneous Boundary Value Problems and Applications. Springer-Verlag, 1972. \bibitem{Lukkassen:2002aa} D.~{Lukkassen}, G.~{Nguetseng}, and P.~{Wall}. \newblock {Two-scale convergence.} \newblock {\em {Int. J. Pure Appl. Math.}}, 2(1):35--86, 2002. \bibitem{Marciniak-Czochra:2008aa} A.~Marciniak-Czochra and M.~Ptashnyk. \newblock Derivation of a macroscopic receptor-based model using homogenization techniques. \newblock {\em SIAM J. Math. Anal.}, 40(1):215--237, 2008. \bibitem{McLean:2000aa} W.~McLean. \newblock {\em Strongly elliptic systems and boundary integral equations}. \newblock Cambridge University Press, 2000. \bibitem{Neu:1993aa} J.~C. Neu and W.~Krassowska. \newblock Homogenization of syncytial tissues. \newblock {\em Crit. Rev. Biomed. Eng.}, 21(2):137--199, 1993. \bibitem{Neuss-Radu:2007aa} M.~Neuss-Radu and W.~J\"ager. \newblock Effective transmission conditions for reaction-diffusion processes in domains separated by an interface. \newblock {\em SIAM J. Math. Anal.}, 39(3):687--720, 2007. \bibitem{Nguetseng} G.~Nguetseng. \newblock A general convergence result for a functional related to the theory of homogenization. \newblock {\em SIAM J. Math. Anal.}, 20(3):608--623, May 1989. \bibitem{Pennacchio2005} M.~Pennacchio, G.~Savar\'e, and P.~Colli~Franzone. \newblock Multiscale modeling for the bioelectric activity of the heart. \newblock {\em SIAM J. Math. Anal.}, 37(4):1333--1370, 2005. \bibitem{Richardson:2009aa} G.~Richardson. \newblock A multiscale approach to modelling electrochemical processes occurring across the cell membrane with application to transmission of action potentials. \newblock {\em Mathematical Medicine and Biology: A Journal of the IMA}, 26(3):201--224, 2009. \bibitem{Richardson:2011aa} G.~Richardson and S.~J. Chapman.
\newblock Derivation of the bidomain equations for a beating heart with a general microstructure. \newblock {\em SIAM J. Appl. Math.}, 71(3):657--675, 2011. \bibitem{Simon:1987vn} J.~Simon. \newblock Compact sets in the space {$L^p(0,T;B)$}. \newblock {\em Ann. Mat. Pura Appl. (4)}, 146:65--96, 1987. \bibitem{Sundnes} J.~Sundnes, G.~T. Lines, X.~Cai, B.~F. Nielsen, K.-A. Mardal, and A.~Tveito. \newblock {\em Computing the electrical activity in the heart}, volume~1 of {\em Monographs in Computational Science and Engineering}. \newblock Springer-Verlag, Berlin, 2006. \bibitem{Tung78} L.~Tung. \newblock {\em A bi-domain model for describing ischemic myocardial D-C potentials}. \newblock PhD thesis, MIT, Cambridge, MA, 1978. \bibitem{Tveito17} A.~Tveito, K.~H. J{\ae}ger, M.~Kuchta, K.-A. Mardal, and M.~E. Rognes. \newblock A cell-based framework for numerical modeling of electrical conduction in cardiac tissue. \newblock {\em Frontiers in Physics}, 5:48, 2017. \bibitem{Veneroni} M.~Veneroni. \newblock Reaction-diffusion systems for the microscopic cellular model of the cardiac electric field. \newblock {\em Math. Methods Appl. Sci.}, 29(14):1631--1661, 2006. \bibitem{Veneroni:2009aa} M.~Veneroni. \newblock Reaction-diffusion systems for the macroscopic bidomain model of the cardiac electric field. \newblock {\em Nonlinear Anal. Real World Appl.}, 10(2):849--868, 2009. \bibitem{Yang:2014aa} Z.~Yang. \newblock The periodic unfolding method for a class of parabolic problems with imperfect interfaces. \newblock {\em ESAIM Math. Model. Numer. Anal.}, 48(5):1279--1302, 2014. \end{thebibliography} \end{document}
\begin{document} \title{Global Solution for Gas-Liquid Flow of 1-D van der Waals Equation of State with Large Initial Data} \author{ Qiaolin He$^1$, Ming Mei$^{2,3}$, Xiaoding Shi$^{4}$\thanks{\scriptsize{Corresponding author, [email protected]}}, Xiaoping Wang$^5$ \\ \scriptsize{$^1$School of Mathematics, Sichuan University, Chengdu, 610064, China}\\ \scriptsize{$^{2}$Department of Mathematics, Champlain College St.-Lambert, St.-Lambert, Quebec, J4P 3P2, Canada}\\ \scriptsize{$^{3}$Department of Mathematics and Statistics, McGill University, Montreal, Quebec, H3A 2K6, Canada}\\ \scriptsize{$^{4}$Department of Mathematics, School of Science, Beijing University of Chemical Technology, Beijing, 100029, China}\\ \scriptsize{$^{5}$Department of Mathematics, Hong Kong University of Science and Technology, Hong Kong, China}} \date{} \maketitle \noindent\textbf{Abstract}. This paper is concerned with a diffuse interface model for the gas-liquid phase transition. The model consists of the compressible Navier-Stokes equations with the van der Waals equation of state and a modified Allen-Cahn equation. The global existence and uniqueness of the strong solution with the periodic boundary condition (or the mixed boundary condition) in one space dimension is proved for large initial data. Furthermore, the phase variable and the density of the gas-liquid mixture are proved to stay in the physically reasonable intervals. The proofs are based on the elementary energy method and the maximum principle, but with new developments: some techniques are introduced to establish the uniform bounds of the density and to treat the non-convexity of the pressure function.
\ \noindent{\bf Keywords:} global solution, Navier-Stokes equations, Allen-Cahn equation, gas-liquid flow, van der Waals equation of state \ \noindent{\bf MSC:} 35M10, 35Q30 \section {\normalsize Introduction and Main Result} \setcounter{equation}{0} In the last few decades, there has been much progress on the modelling and analysis of multiphase and phase transition problems, in particular on phase field models of these phenomena, see \cite{AC1979}-\cite{CH1958}, \cite{HMR2012} \cite{LT-1998} \cite{V1894} \cite{QWS} \cite{WQS} and the references therein. In this paper, we investigate the Navier-Stokes-Allen-Cahn system proposed by Blesgen \cite{B1999}, which describes compressible two-phase flow with a diffuse interface. The system consists of the compressible Navier-Stokes equations and a modified Allen-Cahn equation, and it is especially useful for analyzing the phase transition properties of gas-liquid flow. It allows phases to shrink or grow due to changes of density in the fluid and incorporates their transport with the current. The Navier-Stokes-Allen-Cahn system is commonly expressed as follows (see \cite{B1999}, \cite{FPRS2008}, \cite{DLL2013}, \cite{CG2017}, \cite{STY} etc.) \begin{equation}\label{3dNSAC} \left\{\begin{array}{llll} \displaystyle \partial_t\rho+\mathrm{div}(\rho \mathbf{u})=0, \\ \displaystyle \partial_t(\rho \mathbf{u})+\mathrm{div}(\rho \mathbf{u}\otimes \mathbf{u})+\nabla p-(\nu\Delta\mathbf{u}+\eta\nabla\mathrm{div}\,\mathbf{u})=-\epsilon\mathrm{div}\big(\nabla\chi\otimes\nabla\chi-\frac{|\nabla\chi|^2}{2}\mathbb{I}\big),\\ \displaystyle\partial_t\big(\rho\chi\big)+\mathrm{div}(\rho \chi \mathbf{u})=-\frac{1}{\epsilon}\frac{\partial f(\rho,\chi)}{\partial\chi}+\frac{\epsilon}{\rho} \Delta\chi, \end{array}\right. \end{equation} where $\rho=\rho(\mathbf{x},t)$, $\mathbf{u}=\mathbf{u}(\mathbf{x},t)$ and $\chi=\chi(\mathbf{x},t)$ are the density, the velocity and the concentration difference of the gas-liquid mixture, respectively.
The constants $\nu>0, \ \eta\geq0$ are viscosity coefficients, and the constant $\epsilon>0$ represents the thickness of the diffuse interface of the gas-liquid mixture. The potential energy density $f=f(\rho,\chi)$ takes the Ginzburg-Landau double-well form (see \cite{HW}, \cite{DLL2013}, \cite{CG2017} and the references therein): \begin{equation}\label{potential energy density} f(\rho,\chi)=-3\rho+\frac{8\Theta}{3}\ln\frac{\rho}{3-\rho}+\frac{1}{4}\big(\chi^2-1\big)^2, \end{equation} where $\Theta>0$ is a positive constant related to the ratio of the actual temperature to the critical temperature. The pressure $p$ is given by the following van der Waals equation of state (see \cite{V1894}, \cite{HK}, \cite{HW}, \cite{MLW1}, \cite{MLW2}, \cite{HSWZ}, \cite{HLS} and the references therein) \begin{equation}\label{the formula of pressure} p(\rho)=\left\{\begin{array}{llll} \displaystyle\rho^2\frac{\partial f}{\partial\rho}=-3\rho^2+\frac{8\Theta\rho}{3-\rho}\quad & \mathrm{if}\ 0\leq\rho<3,\\ \displaystyle+\infty,\quad & \mathrm{if}\ \rho\geq3. \end{array}\right. \end{equation} We have the following properties of the pressure $p$: \begin{enumerate} \item[(i)] $p(\rho)>0$ for $\rho>0$, $p(0)=0$; \item[(ii)] When $\Theta\geq1$, $p(\rho)$ is a monotone increasing function. When $0<\Theta<1$, there exist two positive densities $3>\beta>\alpha>0$ such that $p(\rho)$ is increasing on $[0,\alpha]$ and on $[\beta,3)$, and decreasing on $(\alpha,\beta)$; \item[(iii)] $p'(\rho)=\frac{-6(\rho^3-6\rho^2+9\rho-4\Theta)}{(3-\rho)^2}$. When $0<\Theta<1$, there exists a positive density $\gamma$ such that $p(\gamma)=p(\beta)$, $p(\rho)>p(\gamma)$ for $\rho>\gamma$, and $p$ is increasing on $[0,\gamma]$. \end{enumerate} \begin{rmk}The van der Waals equation of state \eqref{the formula of pressure} was proposed by the Dutch physicist J. D. van der Waals \cite{V1894}.
It is a thermodynamic equation of state based on the theory that fluids are composed of particles with non-zero volumes, subject to an inter-particle attractive force. Above the critical temperature (i.e. $\Theta\geq1$ in \eqref{the formula of pressure}), this equation of state is an improvement over the ideal gas law. Moreover, below the critical temperature (i.e. $0<\Theta<1$ in \eqref{the formula of pressure}), this equation is also qualitatively reasonable for the low-pressure gas-liquid states. \end{rmk} \begin{rmk} The concentration difference $\chi$ of the gas-liquid mixture can be understood as $\chi=\chi_1-\chi_2$, where $\chi_i=\frac{M_i}{M}$ is the mass concentration of the fluid $i~(i=1,2)$ and $M_i$ is the mass of component $i$ in the representative material volume $V$. The term $\epsilon\Big(\nabla\chi\otimes\nabla\chi-\frac{|\nabla\chi|^2}{2}\mathbb{I}\Big)$ in the momentum equation \eqref{3dNSAC} can be seen as an additional stress contribution in the stress tensor. It describes the capillary effect associated with the free energy $E_{\mathrm{free}}(\rho,\chi)=\int_\Omega\Big(\frac{\rho}{\epsilon} f(\rho,\chi)+\frac{\epsilon}{2}|\nabla\chi|^2\Big)d\mathbf{x}$ (see \cite{AF2008}, \cite{FPRS2008}, \cite{DLL2013}, \cite{CG2017}, \cite{CHMS-2018} and the references therein). \end{rmk} There is a large literature on the well-posedness of solutions to the compressible Navier-Stokes system. We refer to the work of Matsumura-Nishida \cite{MN1980}, Matsumura-Nishihara \cite{MN1985}-\cite{MN1986}, Lions \cite{Lions1998}, Huang-Li-Xin \cite{HLX2012}, Mei \cite{M1997}-\cite{M1999}, Huang-Li-Matsumura \cite{HLM2010}, Huang-Matsumura-Xin \cite{HMX2006}, Huang-Wang-Wang-Yang \cite{HWWY2015}, Shi-Yong-Zhang \cite{SYZ2016} and the references therein. The study of interfacial phase change in mixed fluids can be traced back to the work of van der Waals (1894).
van der Waals described the interface between two immiscible fluids as a layer in the pioneering paper \cite{V1894}. His idea was successfully applied by Cahn-Hilliard \cite{CH1958} and Allen-Cahn \cite{AC1979} to describe, respectively, the complicated phase separation and coarsening phenomena and the motion of anti-phase boundaries in the mixture. Lowengrub-Truskinovsky \cite{LT-1998} added the effect of the motion of the particles and the interaction with the diffusion into the Cahn-Hilliard equation, thereby putting forward the Navier-Stokes-Cahn-Hilliard system. Blesgen \cite{B1999} then combined the compressible Navier-Stokes system with the modified Allen-Cahn equation to describe the behavior of cavitation in a flowing liquid, which became known as the Navier-Stokes-Allen-Cahn system. The difference between the Navier-Stokes-Allen-Cahn system and the Navier-Stokes-Cahn-Hilliard system is that the former neglects the diffusion fluxes and focuses on the constitutive equation for mass conversion of the considered phases. As a consequence, the latter conserves the volume fractions while the former does not. Nowadays, the Navier-Stokes-Allen-Cahn system and the Navier-Stokes-Cahn-Hilliard system are widely used in the interfacial diffusion problems of fluid mechanics and materials science. Comparatively speaking, the numerical treatment of the former is simpler than that of the latter, which involves fourth-order differential operators. However, because the concentration difference $\chi$ in \eqref{3dNSAC} does not preserve the overall volume fraction, a Lagrange multiplier is usually introduced in \eqref{3dNSAC}$_3$ as a constraint to conserve the volume, see Yang-Feng-Liu-Shen \cite{YFLS2006}, Zhang-Wang-Mi \cite{ZWM} and the references therein.
Feireisl-Petzeltov\'a-Rocca-Schimperna \cite{FPRS2008} obtained the global existence of weak solutions in the isentropic case, using the framework introduced by Lions \cite{Lions1998}. Following the approach of Feireisl \textit{et al.}, Ding-Li-Luo \cite{DLL2013} proved the global existence of the one-dimensional strong solution in a bounded domain for initial density without vacuum states. Chen-Guo \cite{CG2017} generalized Ding-Li-Luo's result to the case where initial vacuum is allowed. However, all the results above are for the ideal fluid. In order to study the gas-liquid phase transition, we need to consider a non-ideal viscous fluid for which there is an interval of the density $\rho$ where the pressure $p$ decreases as $\rho$ increases and the phase transition takes place. The equation of state \eqref{the formula of pressure} proposed by van der Waals is quite satisfactory in describing this phenomenon. Hsieh-Wang \cite{HW} solved the isentropic compressible Navier-Stokes system modeled by the van der Waals equation of state numerically, by a pseudo-spectral method with a form of artificial viscosity. They showed that the phase transition depends on the selection of the initial density. He-Liu-Shi \cite{HLS} investigated the large time behavior of the van der Waals fluid in 1-D by using a second-order TVD Runge-Kutta splitting scheme combined with the Jin-Xin relaxation scheme. Mei-Liu-Wong \cite{MLW1,MLW2} studied the Navier-Stokes system with additional artificial viscosity and $p(\rho)=\rho^{-3}-\rho^{-1}$. By using the Liapunov functional method, they proved the existence, uniqueness, regularity and uniform boundedness of the periodic solution in 1-D. Hoff and Khodja \cite{HK} considered the dynamic stability of certain steady-state weak solutions of system (1.1) for compressible van der Waals fluids on the 1-D whole space with small initial disturbances.
In this paper, we study the global existence of solutions for the system \eqref{3dNSAC} with the van der Waals equation of state \eqref{the formula of pressure} in one dimension. More precisely, for general initial conditions without vacuum states, our purpose is to establish the existence and uniqueness of the global strong solution for the isentropic Navier-Stokes-Allen-Cahn system \eqref{3dNSAC}, even with large initial data. Moreover, we show that the phase variable $\chi$ stays in the physical interval $[-1,1]$. Some new techniques are developed to establish the upper and lower bounds of the density $\rho$ and to treat the non-convexity of the pressure $p(\rho)$, both of which are crucial steps in the proof. We now present our main result. The 1-D isentropic Navier-Stokes-Allen-Cahn system in the Eulerian coordinates reads as follows: \begin{equation}\label{NSAC} \left\{\begin{array}{llll} \displaystyle \rho_t+(\rho u)_x=0, \ \ & x\in\mathbb{R},t>0, \\ \displaystyle \rho u_t+\rho uu_x+p_x=\nu u_{xx}-\frac{\epsilon}{2}\big(\chi_x^2\big)_x,\ \ & x\in\mathbb{R},t>0,\\ \displaystyle\rho\chi_t+\rho u\chi_x=-\frac{1}{\epsilon}(\chi^3-\chi)+\frac{\epsilon}{\rho}\chi_{xx},\ \ & x\in\mathbb{R},t>0, \end{array}\right. \end{equation} with the $L$-periodic boundary value condition: \begin{equation}\label{periodic boundary for Euler} \left\{\begin{array}{llll} (\rho,u,\chi)(x,t)=(\rho,u,\chi)(x+L,t),\ \ & x\in\mathbb{R},t>0,\\ (\rho,u,\chi)\big|_{t=0}=(\rho_0,u_0,\chi_0),\ \ & x\in\mathbb{R}. \end{array}\right. \end{equation} We introduce the Hilbert space $L^2_{\mathrm{per}}$ of square integrable functions with the period $L$: \begin{equation}\label{periodic function sobolev space} L^2_{\mathrm{per}}=\Big\{g(x)\big|g(x+L)=g(x)\ \mathrm{for\ all}\ x\in\mathbb{R},\ {\mathrm{and}\ } g(x)\in L^2(0,L) \Big\}, \end{equation} with the norm, also denoted by $\|\cdot\|$ (without confusion), given by $\|g\|=(\int_0^L|g(x)|^2dx )^{\frac{1}{2}}$.
$H_{\mathrm{per}}^l \ (l\geq0)$ denotes the space of $L_{\mathrm{per}}^2$-functions $g$ on $\mathbb{R}$ whose derivatives $\partial^j_x g$, $j=1,\cdots,l$, are $L_{\mathrm{per}}^2$ functions, with the norm $ \|g\|_l=(\sum_{j=0}^l\|\partial^j_x g\|^2)^{\frac{1}{2}}$. The initial data for the density, velocity and concentration difference of the two components are assumed to satisfy: \begin{equation}\label{initial data of v} (\rho_0,u_0)\in H_{\mathrm{per}}^1,\ \ \chi_0\in H_{\mathrm{per}}^2;\quad 0<\rho_0<3, \quad-1\leq\chi_0\leq1; \end{equation} \begin{eqnarray}\label{Compatibility condition of chi} \chi_t(x,0)=-u_0\chi_{0x}+\frac{\epsilon}{\rho_0^2}\chi_{0xx}-\frac{1}{\epsilon\rho_0}\Big(\chi_0^3-\chi_0\Big). \end{eqnarray} \begin{thm} \label{main thm-1} Assume that $(\rho_0,u_0,\chi_0)$ satisfies \eqref{initial data of v}-\eqref{Compatibility condition of chi}. Then there exists a unique global strong solution $(\rho,u,\chi)$ of the system \eqref{NSAC}-\eqref{periodic boundary for Euler} such that for any $T>0$, \begin{eqnarray}\label{global solution for periodic boundary} &&\rho\in L^\infty(0,T;H_{\mathrm{per}}^1)\cap L^2(0,T;H_{\mathrm{per}}^1),\notag\\ &&u\in L^\infty(0,T;H_{\mathrm{per}}^1)\cap L^2(0,T;H_{\mathrm{per}}^2),\notag\\ &&\chi\in L^\infty(0,T;H_{\mathrm{per}}^2)\cap L^2(0,T;H_{\mathrm{per}}^3), \\ && -1\leq\chi\leq1,\ 0< \rho<3,\ \mathrm{for \ all}\ (x,t)\in\mathbb{R}\times[0,T],\notag \end{eqnarray} and \begin{eqnarray}\label{energy estimate for periodic boundary problem} \left.\begin{array}{llll} \displaystyle\sup_{t\in [0,T]}\big\{\|(\rho,u)(t)\|^2_1+\|\chi\|_2^2\big\}+\int_0^T\big(\|\rho\|_1^2+\|u\|_2^2+\|\chi\|_3^2\big)dt \leq C, \end{array}\right. \end{eqnarray} where $C$ is a positive constant depending only on the initial data and $T$. \end{thm} \noindent\begin{rmk} There are two difficulties to overcome in proving Theorem 1.1: one is to obtain the upper and lower bounds of the density $\rho$, and the other is to handle the non-convexity of the pressure.
For the former, we use the singularity of the pressure and an energy estimate for $\|\frac{1}{\rho}\|_{L^{\infty}([0,L]\times[0,T])}$. For the latter, we decompose the pressure according to its convexity. The results of Theorem 1.1 are valid even for large initial data. They also match well with the existing numerical studies in \cite{HW} and \cite{HLS}. \end{rmk} Moreover, we consider the following mixed boundary value problem: \begin{equation}\label{mixed boundary problem} \left\{\begin{array}{llll} \displaystyle \rho_t+(\rho u)_x=0, \\ \displaystyle \rho u_t+\rho uu_x+p_x=\nu u_{xx}-\frac{\epsilon}{2}\big(\chi_x^2\big)_x,\\ \displaystyle\rho\chi_t+\rho u\chi_x=-\frac{1}{\epsilon}(\chi^3-\chi)+\frac{\epsilon}{\rho}\chi_{xx},\\ \displaystyle(u,\chi_x)\big|_{x=0,L}=(0,0),\\ \displaystyle(\rho,u,\chi)\big|_{t=0}=(\rho_0,u_0,\chi_0). \end{array}\right. \end{equation} Similarly, we have the following existence theorem for the mixed boundary value problem \eqref{mixed boundary problem}; its proof is similar to that of Theorem 1.1 and will be omitted.
\begin{thm} \label{main thm-2} Assume that $(\rho_0,u_0,\chi_0)$ satisfies \begin{equation}\label{initial data of rho u and chi} (\rho_0,u_0)\in H^1,\ \ \chi_0\in H^2,\quad 0<\rho_0<3,\quad-1\leq\chi_0\leq1, \end{equation} \begin{eqnarray}\label{Compatibility condition of rho u and chi} \chi_t(x,0)=-u_0\chi_{0x}+\frac{\epsilon}{\rho_0^2}\chi_{0xx}-\frac{1}{\epsilon\rho_0}\Big(\chi_0^3-\chi_0\Big). \end{eqnarray} Then there exists a unique global strong solution $(\rho,u,\chi)$ of the system \eqref{mixed boundary problem}, such that for any $T>0$, \begin{eqnarray}\label{global solution for mixed boundary} &&\rho\in L^\infty(0,T;H^1)\cap L^2(0,T;H^1),\notag\\ &&u\in L^\infty(0,T;H^1)\cap L^2(0,T;H^2),\notag\\ &&\chi\in L^\infty(0,T;H^2)\cap L^2(0,T;H^3), \\ && -1\leq\chi\leq1,\ 0< \rho<3,\ \mathrm{for \ all}\ (x,t)\in[0,L]\times[0,T],\notag \end{eqnarray} and \begin{eqnarray}\label{energy estimate for mixed boundary problem} \left.\begin{array}{llll} \displaystyle\sup_{t\in [0,T]}\big\{\|(\rho,u)(t)\|^2_1+\|\chi\|_2^2\big\}+\int_0^T\big(\|\rho\|_1^2+\|u\|_2^2+\|\chi\|_3^2\big)dt \leq C, \end{array}\right. \end{eqnarray} where $C$ is a positive constant depending only on the initial data and $T$. \end{thm} The outline of this paper is as follows. In Section 2, we first give the local existence of the solution for the system \eqref{NSAC}-\eqref{periodic boundary for Euler}. Then, we give a series of lemmas which lead us to the desired a priori estimates. Finally, Theorem 1.1 is proved by the well-known alternative result and the maximum principle for parabolic equations. \section{Proofs of the main theorem} \setcounter{equation}{0} In this section, we will present the global existence of the strong solution for the periodic problem \eqref{NSAC}-\eqref{periodic boundary for Euler}.
Firstly, for any $m>0$, $M>0$ and $T>0$, we define the periodic solution space: \begin{eqnarray}\label{periodic function space} &&X_{\mathrm{per},m,M}([0,T])\equiv\Big\{(\rho,u,\chi)\Big|(\rho,u)\in C^0([0,T];H_{\mathrm{per}}^1),\chi\in C^0([0,T];H_{\mathrm{per}}^2),\qquad\qquad\qquad\notag\\ &&\qquad\quad\qquad\qquad\qquad\rho\in L^2([0,T];H_{\mathrm{per}}^1), u\in L^2([0,T];H_{\mathrm{per}}^2),\chi\in L^2([0,T];H_{\mathrm{per}}^3),\\ &&\qquad\quad\qquad\qquad\qquad\qquad\inf_{x\in\mathbb{R},t\in [0,T]}\rho(x,t)\geq m,\sup_{t\in [0,T]}\{\|(\rho,u)\|_1^2,\|\chi\|_2^2\}\leq M \Big\}.\notag \end{eqnarray} \begin{prop}[Local existence]\label{local existence and uniqueness for approximate periodic solution} For any $m>0$, $M>0$, if $\inf_{x\in\mathbb{R}}\rho_0(x)\geq m$ and $\|(\rho_0,u_0)\|_1^2$, $\|\chi_0\|_2^2\leq M$, then there exists a small time $T_*=T_*(\rho_0,u_0,\chi_0)>0$ such that the periodic boundary problem \eqref{NSAC}-\eqref{periodic boundary for Euler} admits a unique solution $(\rho,u,\chi)$ satisfying $(\rho,u,\chi)\in X_{\mathrm{per},\frac{m}{2},2M}([0,T_*])$. \end{prop} \begin{proof} Taking $0<T<+\infty$, we construct an iterative sequence $(\rho^{(n)},u^{(n)},\chi^{(n)})$, $n=1,2,\cdots$, satisfying $(\rho^{(0)},u^{(0)},\chi^{(0)})=(\rho_0,u_0,\chi_0)$ and the following iterative scheme \begin{equation}\label{iterative-NSAH} \left\{\begin{array}{llll} \displaystyle \rho^{(n)}_t+(\rho^{(n)} u^{(n-1)})_x=0, \\ \displaystyle \rho^{(n)} u^{(n)}_t+\rho^{(n)} u^{(n-1)}u^{(n)}_x+(p(\rho^{(n)}))_x=\nu u^{(n)}_{xx}-\frac{\epsilon}{2}\big((\chi^{(n)})_x^2\big)_x,\\ \displaystyle\rho^{(n)}\chi^{(n)}_t+\rho^{(n)} u^{(n-1)}\chi^{(n)}_x=-\frac{1}{\epsilon}((\chi^{(n-1)})^3-\chi^{(n-1)})+\frac{\epsilon}{\rho^{(n)}}\chi^{(n)}_{xx},\\ \displaystyle(\rho^{(n)},u^{(n)},\chi^{(n)})(x,t)=(\rho^{(n)},u^{(n)},\chi^{(n)})(x+L,t),\\ (\rho^{(n)},u^{(n)},\chi^{(n)})(x,0)=\big(\rho_0,u_0,\chi_0\big)(x), \end{array}\right.
\end{equation} By using the usual iterative approach (c.f. \cite{CHMS-2018}), we can obtain the local existence of the solution for the periodic boundary problem \eqref{NSAC}-\eqref{periodic boundary for Euler}; the details are omitted. \end{proof} Now we will prove the global existence and uniqueness of the solution for the periodic boundary problem \eqref{NSAC}-\eqref{periodic boundary for Euler}. Setting \begin{equation}\label{mu} \mu=\frac{1}{\epsilon}(\chi^3-\chi)-\frac{\epsilon}{\rho}\chi_{xx}. \end{equation} From the physical point of view, the functional $\mu$ in \eqref{mu} can be understood as the chemical potential. The basic energy equality is presented below. From the definition of the pressure $p$ in \eqref{the formula of pressure}, we fix a positive reference density $\tilde \rho$ satisfying (see the properties of $p$) \begin{equation}\label{reference density} 0<\tilde\rho<\gamma<3, \end{equation} and define \begin{equation}\label{Phi} \Phi(\rho)=\rho\int_{\tilde{\rho}}^{\rho}\frac{p(s)-p(\tilde{\rho})}{s^2}ds. \end{equation} Noting that \begin{equation*} \Phi'(\rho)=\frac{\Phi(\rho)+p(\rho)-p(\tilde\rho)}{\rho},\qquad \mathrm{and}\qquad \Phi''(\rho)=\frac{p'(\rho)}{\rho}, \end{equation*} then $\Phi(\tilde \rho)=\Phi'(\tilde\rho)=0$, and hence there exist positive constants $c_1,c_2>0$ such that \begin{equation}\label{positive definite} c_1(\rho-\tilde\rho)^2\leq\Phi(\rho)\leq c_2(\rho-\tilde\rho)^2. \end{equation} Moreover, combining with the mass conservation equation \eqref{NSAC}$_1$, one gets \begin{equation}\label{renormalization mass conservation equation} \Phi(\rho)_t+\big(\Phi(\rho)u\big)_x+\big(p(\rho)-p(\tilde{\rho})\big)u_x=0. \end{equation} Taking advantage of the local existence result Proposition \ref{local existence and uniqueness for approximate periodic solution}, we know that there exists a unique strong solution of the system \eqref{NSAC}-\eqref{periodic boundary for Euler} for $T$ small enough.
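For later use, we record the short computation behind the renormalized equation \eqref{renormalization mass conservation equation}: by the mass equation \eqref{NSAC}$_1$, $\rho_t+u\rho_x=-\rho u_x$, and by the identity $\rho\Phi'(\rho)=\Phi(\rho)+p(\rho)-p(\tilde\rho)$,
\begin{equation*}
\Phi(\rho)_t+\big(\Phi(\rho)u\big)_x
=\Phi'(\rho)\big(\rho_t+u\rho_x\big)+\Phi(\rho)u_x
=\big(\Phi(\rho)-\rho\Phi'(\rho)\big)u_x
=-\big(p(\rho)-p(\tilde{\rho})\big)u_x.
\end{equation*}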
By using the well-known alternative result and the maximum principle for parabolic equations (see \cite{P2005}), it suffices to show the following a priori estimate. \begin{prop}[A priori estimate]\label{a priori estimate proposition for periodic boundary problem} Assume that $(\rho_0,u_0,\chi_0)$ satisfies \eqref{initial data of v}-\eqref{Compatibility condition of chi}, and let $(\rho,u,\chi)\in X_{\mathrm{per},m,M}([0,T])$ be a local solution for a given $T>0$. Then there exists a positive constant $C$ such that \begin{eqnarray}\label{a priori estimate} \left.\begin{array}{llll} \displaystyle\sup_{t\in [0,T]}\big\{\|(\rho,u)(t)\|^2_1+\|\chi\|_2^2\big\}+\int_0^T\big(\|\rho\|_1^2+\|u\|_2^2+\|\chi\|_3^2\big)dt \leq C. \end{array}\right. \end{eqnarray} \end{prop} Proposition 2.2 can be obtained by the following series of lemmas. \begin{lem}\label{lem of lower estimate} Under the assumption of Proposition \ref{a priori estimate proposition for periodic boundary problem}, for any $T>0$, it holds that \begin{eqnarray}\label{the first energy inequality} &&\sup_{t\in[0,T]}\int_0^L\Big(\rho u^2+\Phi(\rho)+\chi_x^2+\rho(\chi^2-1)^2\Big)dx+\int_0^T\int_0^L\Big(\mu^2+u_x^2\Big)dxdt\leq C, \end{eqnarray} where $\mu$ is defined in \eqref{mu}. \end{lem} \begin{proof} Multiplying Eq.\eqref{NSAC}$_2$ by $u$ and Eq.\eqref{NSAC}$_3$ by $\mu$, integrating the resultant equations over $[0,L]$ and adding them up, one has \begin{equation}\label{basic energy equality} \frac{d}{dt}\int_0^L\big(\frac{\rho u^2}{2}+\frac{\epsilon\chi_x^2}{2}+\frac{\rho(\chi^2-1)^2}{4\epsilon}\big)dx+\int_0^L\Big(\mu^2+\nu u_x^2+u\, p_x\Big)dx=0. \end{equation} Integrating \eqref{renormalization mass conservation equation} over $[0,L]$ and adding the result to \eqref{basic energy equality}, one then gets \begin{equation}\label{the basic energy inequality} \frac{d}{dt}\int_0^L\Big(\frac{\rho u^2}{2}+\frac{\epsilon}{2}\chi_x^2+\Phi(\rho)+\frac{\rho(\chi^2-1)^2}{4\epsilon}\Big)dx+\int_0^L\Big(\mu^2+\nu u^2_x\Big)dx=0.
\end{equation} Integrating \eqref{the basic energy inequality} over $[0,T] $, one has \begin{equation}\label{the basic energy inequality for density velocity and concentration difference} \sup_{t\in[0,T]}\int_0^L\Big(\frac{\rho u^2}{2}+\frac{\epsilon}{2}\chi_x^2+\Phi(\rho)+\frac{\rho(\chi^2-1)^2}{4\epsilon}\Big)dx+\int_0^T\int_0^L\Big(\mu^2+\nu u^2_x\Big)dxd\tau=E_0, \end{equation} where $E_0=\int_0^L\big(\frac{1}{2}\rho_0 u_0^2+\frac{\epsilon}{2}\chi_{0x}^2+\Phi(\rho_0)+\frac{\rho_0}{4\epsilon}(\chi_0^2-1)^2\big)dx$. The proof is completed. \end{proof} \begin{lem}\label{lem of lower estimate for chi} Under the assumption of Proposition \ref{a priori estimate proposition for periodic boundary problem}, for any $T>0$, it holds that \begin{eqnarray}\label{sup of concentration} \|\chi\|_{L_{\mathrm{per}}^{\infty}}\leq C. \end{eqnarray} \end{lem} \begin{proof} Integrating the mass equation \eqref{NSAC}$_1$ over $[0,L]\times[0,t]$, one has \begin{equation}\label{mass conservation} \int_0^L\rho(x,t)dx=\int_0^L\rho_0(x)dx. \end{equation} By Lemma 2.1, we then have \begin{equation}\label{inequality of chi} \int_0^L\rho\chi^4dx\leq2\int_0^L\rho\chi^2dx-\int_0^L\rho dx+C_1\leq\frac{1}{2}\int_0^L\rho\chi^4dx+C. \end{equation} Therefore \begin{equation}\label{L4 L1 of chi} \int_0^L\rho\chi^4dx\leq C,\ \ \ \int_0^L\rho|\chi| dx \leq \int_0^L\rho\chi^4dx+\int_0^L\rho dx\leq C. \end{equation} From \eqref{the first energy inequality}, one has \begin{eqnarray}\label{sup of chi} |\chi(x,t)|&=&\frac{1}{\int_0^L\rho_0dx}\Big|\chi(x,t)\int_0^L\rho(y,t)dy\Big|\notag\\ &\leq&\frac{1}{\int_0^L\rho_0dx}\Big(\big|\int_0^L\big(\chi(x,t)-\chi(y,t)\big)\rho(y,t)dy\big|+\big|\int_0^L\chi(y,t)\rho(y,t)dy\big|\Big)\notag\\ &\leq&\frac{1}{\int_0^L\rho_0dx}\Big(\big|\int_0^L\rho(y,t)\big(\int_y^x\chi_s(s,t)ds\big)dy\big|+\big|\int_0^L\chi(y,t)\rho(y,t)dy\big|\Big)\notag\\ &\leq&\frac{1}{\int_0^L\rho_0dx}\int_0^L|\chi_x|dx\int_0^L\rho(y,t)dy+C_1\leq C. \end{eqnarray} The proof is completed.
\end{proof} \begin{lem}\label{lem of sup estimate for rho} Under the assumption of Proposition \ref{a priori estimate proposition for periodic boundary problem}, for any $T>0$, it holds that \begin{eqnarray}\label{sup of density} \|\rho\|_{L_{\mathrm{per}}^{\infty}([0,L]\times[0,T])}<3,\ \ \ \int_0^T\int_0^L\chi_{xx}^2dxdt\leq C. \end{eqnarray} \end{lem} \begin{proof} From Lemma 2.1, one has \begin{eqnarray}\label{first energy inequality} \sup_{t\in[0,T]}\int_0^L\Phi(\rho)dx\leq E_0=\int_0^L\big(\frac{1}{2}\rho_0 u_0^2+\frac{\epsilon}{2}\chi_{0x}^2+\Phi(\rho_0)+\frac{\rho_0}{4\epsilon}(\chi_0^2-1)^2\big)dx. \end{eqnarray} From the definition \eqref{Phi} of $\Phi$ and the formula \eqref{the formula of pressure} for the pressure, one gets \begin{equation}\label{delta limit} \lim_{\delta\rightarrow0}\mathrm{mes}\big\{(x,t)\in[0,L]\times[0,T]\big|\rho(x,t)\geq3-\delta\big\}=0, \end{equation} thus \begin{equation}\label{upper bound of density} \|\rho(x,t)\|_{L^{\infty}([0,L]\times[0,T])}<3. \end{equation} Moreover, from the equation \eqref{mu} and the estimates \eqref{L4 L1 of chi}, \eqref{sup of chi}, one obtains $$\int_0^T\int_0^L\chi_{xx}^2dxdt=\int_0^T\int_0^L\Big(\frac{\rho}{\epsilon^2}(\chi^3-\chi)-\frac{\rho}{\epsilon}\mu\Big)^2dxdt\leq C.$$ The proof is completed. \end{proof} \begin{lem}\label{lem of inf estimate for rho} Under the assumption of Proposition \ref{a priori estimate proposition for periodic boundary problem}, for any $T>0$, it holds that \begin{eqnarray}\label{inf of density} \sup_{t\in[0,T]}\|\rho_x\|_{L^2_{\mathrm{per}}}\leq C, \ \ \ \|\frac{1}{\rho}\|_{L_{\mathrm{per}}^{\infty}([0,L]\times[0,T])}\leq C.
\end{eqnarray} \end{lem} \begin{proof} From the mass conservation equation \eqref{NSAC}$_1$, one has \begin{eqnarray}\label{The relation between density and velocity} u_{xx}&=&-\big[\frac{1}{\rho}\big(\rho_t+\rho_x u\big)\big]_x= \big[(-\ln\rho)_t+\rho u(\frac{1}{\rho})_x\big]_x=[-(\ln\rho)_x]_t+[\rho u(\frac{1}{\rho})_x\big]_x\notag\\ &=&\big[\rho(\frac{1}{\rho})_x]_t+[\rho u(\frac{1}{\rho})_x\big]_x= \rho(\frac{1}{\rho})_{xt}+\rho u(\frac{1}{\rho})_{xx}+[\rho_x(\frac1\rho)_t+(\rho u)_x(\frac1\rho)_x]\notag\\ &=&\rho(\frac{1}{\rho})_{xt}+\rho u(\frac{1}{\rho})_{xx}-\frac{\rho_x}{\rho^2}\big[\rho_t+(\rho u)_x\big]=\rho(\frac{1}{\rho})_{xt}+\rho u(\frac{1}{\rho})_{xx}. \end{eqnarray} Substituting \eqref{The relation between density and velocity} into the momentum equation \eqref{NSAC}$_2$, one gets \begin{equation}\label{the other form for NSAC-2} (\rho u)_t+(\rho u^2)_x+p'(\rho)\rho_x=\nu\big[\rho\frac{d}{dt}(\frac{1}{\rho})_x+\rho u(\frac{1}{\rho})_{xx}\big]-\frac{\epsilon}{2}\big(\chi_x^2\big)_x, \end{equation} Multiplying \eqref{the other form for NSAC-2} by $(\frac{1}{\rho})_x$, and integrating over $[0,L]$, further \begin{eqnarray}\label{the basic energy equality-2 for density} &&\frac{d}{dt}\int_0^L\big(\frac\nu2\rho\big|\big(\frac{1}{\rho}\big)_x\big|^2-\rho u(\frac{1}{\rho})_x \big)dx+\int_0^L \frac{p'(\rho)}{\rho^2}\rho_x^2dx\notag\\ &&=-\int_0^L\rho u(\frac1\rho)_{xt}dx+\int_0^L(\rho u^2)_x(\frac1\rho)_xdx+\frac{\epsilon}{2}\int_0^L\big(\chi_x^2\big)_x(\frac{1}{\rho})_xdx\notag\\ &&=\int_0^L\Big((\rho u)_x(-\frac{\rho_t}{\rho^2})+(\rho u^2)_x(-\frac{\rho_x}{\rho^2})\Big)dx+\epsilon\int_0^L\chi_x\chi_{xx}(\frac1\rho)_xdx\\ &&=\int_0^L u_x^2dx+\epsilon\int_0^L\chi_x\chi_{xx}(\frac1\rho)_xdx\notag\\ &&\leq\int_0^L u_x^2dx+\epsilon\Big(\|\frac1\rho\|_{L_{\mathrm{per}}^{\infty}}+\int_0^L\rho\big|(\frac1\rho)_x\big|^2dx\Big)\|\chi_{xx}\|_{L_{\mathrm{per}}^2}^2.\notag \end{eqnarray} In view of the mean value theorem, there exists $a(t)\in [0,L]$ satisfying 
$\rho(a(t),t)=\frac{1}{L}\int_0^L\rho_0dx$, so that \begin{eqnarray} \frac{1}{\rho(x,t)}&=&\frac{1}{\rho(x,t)}-\frac{1}{\rho(a(t),t)}+\frac{1}{\rho(a(t),t)}\notag\\ &=&\int_{a(t)}^x\big(\frac{1}{\rho(y,t)}\big)_ydy+\frac{L}{\int_0^L\rho_0dx}\notag\\ &\leq&\int_0^L\big|\frac{\rho_x(x,t)}{\rho^2(x,t)}\big|dx+\frac{L}{\int_0^L\rho_0dx}\\ &\leq&\big(\int_0^L\frac1\rho dx\big)^{\frac12}\Big(\int_0^L\frac{\rho^2_x(x,t)}{\rho^3(x,t)}dx\Big)^{\frac12}+\frac{L}{\int_0^L\rho_0dx}\notag\\ &\leq&\frac{1}{2}\big\|\frac{1}{\rho}\big\|_{L_{\mathrm{per}}^{\infty}}+ \frac{L}{2}\int_{0}^L\rho\big|\big(\frac{1}{\rho}\big)_x\big|^2dx+\frac{L}{\int_0^L\rho_0dx},\notag \end{eqnarray} which yields the following Sobolev-type inequality for $\frac{1}{\rho}$: \begin{equation}\label{the sobolev inequality of 1/rho} \big\|\frac{1}{\rho}\big\|_{L_{\mathrm{per}}^{\infty}}\leq L\int_{0}^L\rho\big|\big(\frac{1}{\rho}\big)_x\big|^2dx+\frac{2 L}{\int_0^L\rho_0dx}. \end{equation} Substituting the above expression into the inequality \eqref{the basic energy equality-2 for density}, one gets \begin{eqnarray}\label{the energy equality for derivative of density} &&\frac{d}{dt}\int_0^L\big(\frac\nu2\rho\big|\big(\frac{1}{\rho}\big)_x\big|^2-\rho u(\frac{1}{\rho})_x \big)dx+\int_0^L \frac{p'(\rho)}{\rho^2}\rho_x^2dx\notag\\ &&\leq\int_0^L u_x^2dx+\epsilon\Big((L+1)\int_{0}^L\rho\big|\big(\frac{1}{\rho}\big)_x\big|^2dx+\frac{2 L}{\int_0^L\rho_0dx}\Big)\|\chi_{xx}\|_{L_{\mathrm{per}}^2}^2.
\end{eqnarray} Setting \begin{eqnarray}\label{the set of derivative for p} &&A_{\mathrm{increase}}(t)=\big\{x\in[0,L]\big|0\leq\rho(x,t)<\alpha\big\}\cup\big\{x\in[0,L]\big|\beta<\rho\leq M\big\},\\ &&A_{\mathrm{decrease}}(t)=\big\{x\in[0,L]\big|\alpha\leq\rho(x,t)\leq\beta\big\}, \end{eqnarray} then multiplying \eqref{the energy equality for derivative of density} by $\frac{\nu}{2}$ and adding it to \eqref{the basic energy inequality}, one gets \begin{eqnarray}\label{key inequality-2 for density} &&\frac{d}{dt}\int_0^L\big(\frac{\nu^2\rho}{4}\big|\big(\frac{1}{\rho}\big)_x\big|^2-\frac{\nu\rho u}{2}(\frac{1}{\rho})_x +\frac{\rho u^2}{2}+\Phi(\rho)+\frac{\rho(\chi^2-1)^2}{4\epsilon}+\frac{\epsilon\chi_x^2}{2}\big)dx\notag\\ && \ +\int_{A_{\mathrm{increase}}(t)}\rho p'(\rho)(\frac1\rho)_x^2dx+\int_0^L\big(\mu^2+\frac{\nu}{2}u_x^2\big)dx\\ &&\leq\frac{\epsilon\nu}{2}\Big((L+1)\int_{0}^L\rho\big|\big(\frac{1}{\rho}\big)_x\big|^2dx+\frac{2 L}{\int_0^L\rho_0dx}\Big)\|\chi_{xx}\|_{L_{\mathrm{per}}^2}^2-\int_{A_{\mathrm{decrease}}(t)}\rho p'(\rho)(\frac1\rho)_x^2dx\notag\\ &&\leq\frac{\epsilon\nu}{2}\Big((L+1)\int_{0}^L\rho\big|\big(\frac{1}{\rho}\big)_x\big|^2dx+\frac{2 L}{\int_0^L\rho_0dx}\Big)\|\chi_{xx}\|_{L_{\mathrm{per}}^2}^2+\frac{6(27-4\Theta)}{(3-\beta)^2}\int_0^L\rho\big|(\frac1\rho)_x\big|^2dx.\notag \end{eqnarray} Integrating the inequality \eqref{key inequality-2 for density} over $[0,T]$, applying Lemma 2.2-2.3 and combining with Gronwall's inequality, one obtains \begin{eqnarray} &&\sup_{t\in[0,T]}\int_0^L\big(\rho\big|\big(\frac{1}{\rho}\big)_x\big|^2+\rho u^2+(\rho-\tilde\rho)^2+\rho(\chi^2-1)^2+\chi_x^2\big)dx+\int_0^T\int_0^L\big(\mu^2+u_x^2\big)dxdt\leq C.\notag \end{eqnarray} In view of \eqref{the sobolev inequality of 1/rho}, combining with $\int_0^L\rho\big|\big(\frac{1}{\rho}\big)_x\big|^2dx\geq\frac{1}{\|\rho\|_{L_{\mathrm{per}}^\infty}^3}\int_0^L\rho_x^2dx$, the proof of Lemma 2.4 is completed.
\end{proof} The estimates of the higher order derivatives for the phase parameter $\chi$ and the velocity $u$ can be obtained in a simpler way than in Lemma 2.1-Lemma 2.4. \begin{lem}\label{higher order derivatives for chi and u} Under the assumption of Proposition \ref{a priori estimate proposition for periodic boundary problem}, for any $T>0$, it holds that \begin{equation}\label{twice order derivatives for chi} \sup_{t\in[0,T]}\big(\|\chi_{t}\|^2_{L^2_{\mathrm{per}}}+\|\chi_{xx}\|^2_{L^2_{\mathrm{per}}}\big)+ \int_0^T\int_0^L\big(\chi_{xt}^2+\chi^2_{t}+\chi_{xxx}^2\big)dxdt\leq C, \end{equation} \begin{equation}\label{twice order derivatives for u} \sup_{t\in[0,T]}\|u_{x}\|^2_{L^2_{\mathrm{per}}}+\int_0^T\int_0^L\big(u_t^2+u_{xx}^2\big)dxdt\leq C. \end{equation} \end{lem} \begin{proof} For the sake of convenience, we introduce the Lagrange coordinate system below: \begin{equation}\label{lagrange coordinate} y=\int_0^x\rho(s,t) ds,\ \ t=t;\qquad v=\frac{1}{\rho}. \end{equation} Integrating \eqref{NSAC}$_1$ over $[0,L]\times[0,t]$ and using the boundary condition \eqref{periodic boundary for Euler}, we have \begin{equation}\label{conservation} \frac{1}{L}\int_0^L\rho dx=\frac1L\int_0^L\rho_0dx:=\bar\rho. \end{equation} Setting \begin{equation}\label{length of period} \tilde{L}:=\bar\rho L, \end{equation} then the system \eqref{NSAC} can be reduced into \begin{equation}\label{Lagrange-NSAC-modified} \left\{\begin{array}{llll} \displaystyle v_t-u_y=0, \ \ & y\in\mathbb{R},t>0,\\ \displaystyle u_t+p_y=\nu \big(\frac{u_{y}}{v}\big)_y-\frac{\epsilon}{2}\big(\frac{\chi_y^2}{v^2}\big)_y,\ \ & y\in\mathbb{R},t>0,\\ \displaystyle \chi_t=-\frac{v}{\epsilon}(\chi^3-\chi)+\epsilon v\big(\frac{\chi_y}{v}\big)_y,\ \ & y\in\mathbb{R},t>0,\\ (v,u,\chi)(y,t)=(v,u,\chi)(y+\tilde{L},t),\ \ & y\in\mathbb{R},t>0,\\ (v,u,\chi)\big|_{t=0}=(v_0,u_0,\chi_0),\ \ & y\in\mathbb{R}. \end{array}\right.
\end{equation} From \eqref{Lagrange-NSAC-modified}$_3$, one has \begin{equation}\label{the time dereviative of chi} \chi_t=-\frac{v}{\epsilon}(\chi^3-\chi)+\epsilon\chi_{yy}-\frac{\epsilon\chi_yv_y}{v}, \end{equation} then \eqref{the time dereviative of chi} and Lemma 2.1-2.4 imply that \begin{equation}\label{the first energy inequality in Lagrange coordinate} \displaystyle\sup_{t\in[0,T]}\int_0^{\tilde{L}}\Big(u^2+v^2_y+v^2+\chi_y^2+(\chi^2-1)^2\Big)dy+\int_0^T\int_0^{\tilde{L}}\Big(\mu^2+u_y^2+\chi_t^2+\chi^2_{yy}\Big)dydt\leq C,\end{equation} \begin{equation}\label{upper and lower bound in Lagrange coordinate} \displaystyle 0<c\leq v\leq C<+\infty\ \ \mathrm{and} \ \ |\chi|\leq C, \end{equation} and \begin{equation}\label{the relationship between chi-yy with chi-t} \int_0^{\tilde{L}}\chi_{yy}^2dy\leq C\Big(\int_0^{\tilde{L}}\chi_t^2dy+1\Big).\end{equation} Differentiating \eqref{Lagrange-NSAC-modified}$_3$ with respect to $t$, one gets \begin{equation}\label{time derivative of equation 3} \chi_{tt}=-\frac{v_t}{\epsilon}(\chi^3-\chi)-\frac{v}{\epsilon}(3\chi^2-1)\chi_t+\epsilon v_t\Big(\frac{\chi_y}{v}\Big)_y+\epsilon v\Big(\frac{\chi_y}{v}\Big)_{yt}. \end{equation} Multiplying \eqref{time derivative of equation 3} by $\chi_t$, integrating it over $[0,\tilde{L}]$ with respect to $y$ and integrating by parts, one obtains \begin{eqnarray}\label{bound of chi-t} &&\frac{1}{2}\frac{d}{dt}\int_0^{\tilde{L}}\chi_t^2dy+ \epsilon\int_0^{\tilde{L}}\chi_{yt}^2dy\notag\\ &&=-\frac{1}{\epsilon}\int_0^{\tilde{L}}\big((\chi^3-\chi)u_y\chi_t+v(3\chi^2-1)\chi_t^2\big)dy+\epsilon\int_0^{\tilde{L}}u_y\Big(\frac{\chi_y}{v}\Big)_y\chi_tdy\notag\\ &&\ \ \ \ \ -\epsilon\int_0^{\tilde{L}}v_y\Big(\frac{\chi_y}{v}\Big)_{t}\chi_tdy+\epsilon\int_0^{\tilde{L}}\frac{1}{v}\chi_yu_y\chi_{yt}dy\notag\\ &&=I_1+I_2+I_3+I_4.
\end{eqnarray} From the Sobolev inequality, Lemma 2.1-2.4 and \eqref{the first energy inequality in Lagrange coordinate}-\eqref{upper and lower bound in Lagrange coordinate}, one deduces \begin{equation}\label{I1} \big|I_1\big|\leq C_1\big(\|u_y\|^2+\|\chi_t\|^2\big), \end{equation} \begin{eqnarray}\label{I2} \big|I_2\big|&\leq& C\Big(\int_0^{\tilde{L}}\big|u_y\chi_{yy}\chi_t\big|dy+\int_0^{\tilde{L}}\big|u_y\chi_yv_y\chi_t\big|dy\Big)\notag\\ &\leq& C\Big(\|\chi_t\|_{L^{\infty}}\|u_y\|\|\chi_{yy}\|+\|\chi_t\|_{L^{\infty}}\|\chi_y\|_{L^{\infty}}\|u_y\|\|v_y\|\Big)\notag\\ &\leq& C\Big(\|\chi_t\|^2\|u_y\|+\|\chi_t\|^2\|u_y\|^{\frac43}+\|\chi_t\|^{\frac32}\|u_y\|+\|\chi_t\|^{\frac43}\|u_y\|^{\frac43}+\|\chi_t\|\|u_y\|+\|\chi_t\|^{\frac23}\|u_y\|^{\frac43}\Big)\notag\\ & &+\frac{\epsilon}{4}\|\chi_{yt}\|^2, \end{eqnarray} and \begin{eqnarray}\label{I3} \big|I_3\big|+\big|I_4\big|&\leq& C\int_0^{\tilde{L}}\big(|v_y\chi_{yt}\chi_t|+|v_y\chi_yu_y\chi_t|+|\chi_yu_y\chi_{yt}|\Big)dy\notag\\ &\leq&C\Big(\|\chi_t\|\|u_y\|^2+\|\chi_t\|^2\Big)+\frac{\epsilon}{4}\|\chi_{yt}\|^2. \end{eqnarray} Substituting \eqref{I1}--\eqref{I3} into \eqref{bound of chi-t} and applying Gronwall's inequality, one derives \begin{equation}\label{bound of chi-t and chi-yt} \sup_{t\in[0,T]}\int_0^{\tilde{L}}\chi_t^2dy+\int_0^T\int_0^{\tilde{L}}\chi_{yt}^2dydt\leq C. \end{equation} Combining with \eqref{the relationship between chi-yy with chi-t}, one gets \begin{equation}\label{the bound of the second derivative of concentration} \sup_{t\in[0,T]}\int_0^{\tilde{L}}\chi_{yy}^2dy\leq C. \end{equation} Hence it holds that \begin{equation}\label{twice order derivatives for chi in lagrange coordinates} \sup_{t\in[0,T]}\int_0^{\tilde{L}}(\chi_t^2+\chi_{yy}^2)dy+\int_0^T\int_0^{\tilde{L}}\big(\chi_{yt}^2+\chi^2_{t}+\chi_{yy}^2\big)dydt\leq C.
\end{equation} Multiplying \eqref{Lagrange-NSAC-modified}$_2$ by $-u_{yy}$, integrating over $[0,{\tilde{L}}]$ by parts, and using the Sobolev inequality, Lemma 2.1-2.4 and \eqref{twice order derivatives for chi in lagrange coordinates}, one obtains \begin{eqnarray}\label{the second derivative of velocity} &&\big(\frac{1}{2}\int_0^{\tilde{L}}u^2_ydy\big)_t+\nu\int_0^{\tilde{L}}\frac{u_{yy}^2}{v}dy\notag\\ &&=\int_0^{\tilde{L}} u_{yy}\,p_v\,v_ydy+\nu\int_0^{\tilde{L}}\frac{ u_{yy}u_yv_y}{v^{2}}dy+\epsilon\int_0^{\tilde{L}}\frac{\chi_y\chi_{yy}u_{yy}}{v^2}dy-\epsilon\int_0^{\tilde{L}}\frac{\chi_y^2v_y u_{yy}}{v^3}dy\notag\\ &&\leq C\big(\|u_y\|^2+1\big)+\frac{\nu}{2}\int_0^{\tilde{L}}\frac{u_{yy}^2}{v}dy. \end{eqnarray} Thus it holds that \begin{equation}\label{twice order derivatives for u in lagrange coordinates} \sup_{t\in[0,T]}\int_0^{\tilde{L}}u^2_ydy+\int_0^T\int_0^{\tilde{L}}u_{yy}^2dydt\leq C. \end{equation} Returning to the Eulerian coordinates, by using \eqref{twice order derivatives for chi in lagrange coordinates}, \eqref{twice order derivatives for u in lagrange coordinates}, combining with $\chi_{xxx}=\frac{1}{\epsilon}\big(2\rho\rho_x\chi_t+\rho^2\chi_{xt}+2\rho\rho_xu\chi_x+\rho^2u_x\chi_x+\rho^2u\chi_{xx}\big)+\frac{1}{\epsilon^2}\big(\rho_x(\chi^3-\chi)+\rho(3\chi^2-1)\chi_x\big)$, one has \begin{equation} \sup_{t\in[0,T]}\big(\|\chi_{t}\|^2_{L^2_{\mathrm{per}}}+\|\chi_{xx}\|^2_{L^2_{\mathrm{per}}}\big)+ \int_0^T\int_0^L\big(\chi_{xt}^2+\chi^2_{t}\big)dxdt\leq C, \end{equation} \begin{equation} \sup_{t\in[0,T]}\|u_{x}\|^2_{L^2_{\mathrm{per}}}+\int_0^T\int_0^Lu_{xx}^2dxdt\leq C, \end{equation} and \begin{equation}\label{third derivative for chi} \int_0^T\int_0^L\chi_{xxx}^2dxdt\leq C. \end{equation} Furthermore, by using $u_t=-p_y+\nu \big(\frac{u_{y}}{v}\big)_y-\frac{\epsilon}{2}\big(\frac{\chi_y^2}{v^2}\big)_y$, one obtains \begin{equation}\label{t derivative for u} \int_0^T\int_0^Lu_t^2dxdt\leq C. \end{equation} The proof of Lemma 2.5 is completed.
\end{proof} From Lemma 2.1-Lemma 2.5, Proposition 2.2 is obtained, and the proof of Theorem 1.1 is completed. \noindent {\textbf{Acknowledgments:} The research of M.Mei was supported in part by NSERC 354724-2016 and FRQNT grant 256440. The research of X. Shi was partially supported by National Natural Sciences Foundation of China No. 11671027 and 11471321. The work of X.P. Wang was supported in part by the Hong Kong Research Grants Council (GRF grants 16324416, 16303318 and NSFC-RGC joint research grant N-HKUST620/15).} \end{document}
\begin{document} \begin{abstract} We present sharp conditions on divergence-free drifts in Lebesgue spaces for the passive scalar advection-diffusion equation \[ \partial_t \theta - \Delta \theta + b \cdot \nabla \theta = 0 \] to satisfy local boundedness, a single-scale Harnack inequality, and upper bounds on fundamental solutions. We demonstrate these properties for drifts $b$ belonging to $L^q_t L^p_x$, where $\frac{2}{q} + \frac{n}{p} < 2$, or $L^p_x L^q_t$, where $\frac{3}{q} + \frac{n-1}{p} < 2$. For steady drifts, the condition reduces to $b \in L^{\frac{n-1}{2}+}$. The space $L^1_t L^\infty_x$ of drifts with `bounded total speed' is a borderline case and plays a special role in the theory. To demonstrate sharpness, we construct counterexamples whose goal is to transport anomalous singularities into the domain `before' they can be dissipated. \end{abstract} \title{Regularity properties of passive scalars with rough divergence-free drifts} \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} We consider the linear advection-diffusion equation \begin{equation} \label{eq:ADE} \tag{A-D} \partial_t \theta - \Delta \theta + b \cdot \nabla \theta = 0. \end{equation} The solution $\theta = \theta(x,t)$ is known as a \emph{passive scalar}, and the prescribed divergence-free velocity field $b = b(x,t)$ is known as the \emph{drift}. Our goal is to understand, in detail, the regularity properties of~\eqref{eq:ADE} when the drift $b$ is \emph{rough}. Rough divergence-free drifts arise naturally in the context of nonlinear PDEs in fluid dynamics. To understand what is `rough', we recall the scaling symmetry \begin{equation} \label{eq:scalingsymmetry} u \to u(\lambda x, \lambda^2 t), \quad b \to \lambda b(\lambda x,\lambda^2 t), \quad \lambda >0. \end{equation} In dimensional analysis, one writes $[x] = L$, $[t] = L^2$, and $[b] = L^{-1}$.
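Concretely, writing $b_\lambda(x,t) := \lambda b(\lambda x, \lambda^2 t)$, a change of variables gives
\begin{equation*}
\| b_\lambda \|_{L^q_t L^p_x} = \lambda^{1 - \frac{2}{q} - \frac{n}{p}} \, \| b \|_{L^q_t L^p_x},
\end{equation*}
so the norm is invariant precisely when $\frac{2}{q} + \frac{n}{p} = 1$, and it does not grow upon zooming in (that is, as $\lambda \to 0^+$) when $\frac{2}{q} + \frac{n}{p} \leq 1$.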
The scaling~\eqref{eq:scalingsymmetry} identifies the Lebesgue spaces $L^q_t L^p_x$, where $\frac{2}{q} + \frac{n}{p} \leq 1$, as \emph{(sub)critical} spaces for the drift, meaning spaces whose norms do not grow upon `zooming in' with the scaling symmetry. For example, \begin{equation} X = L^\infty_t L^n_x,\, L^2_t L^\infty_x,\, L^{n+2}_{t,x} \end{equation} are \emph{critical spaces}, whose norms are dimensionless, i.e., invariant under the symmetry~\eqref{eq:scalingsymmetry}. Here, $n \geq 2$ is the spatial dimension. When $b$ belongs to one of the above critical Lebesgue spaces, it is not difficult to adapt the work of De Giorgi, Nash, and Moser~\cite{de1957sulla,nash1958continuity,moserharnack} to demonstrate that weak solutions of~\eqref{eq:ADE} are H{\"o}lder continuous and satisfy Harnack's inequality. The above threshold is known to be \emph{sharp} for H{\"o}lder continuity within the scale of Lebesgue spaces, see the illuminating counterexamples in~\cite{silvestre2013loss,wu2021supercritical}. The divergence-free condition moreover allows access to drifts in the critical spaces $X = L^\infty_t W^{-1,\infty}_x, L^\infty_t {\rm BMO}^{-1}_x$, considered by~\cite{osada1987,SSSZ,qianxisingular2019}. In these spaces, it is furthermore possible to prove Gaussian upper and lower bounds on fundamental solutions in the spirit of Aronson~\cite{aronson}. In this paper, we are concerned with \emph{supercritical} drifts, for which continuity may fail. Nonetheless, much of the regularity theory may be salvaged. The divergence-free structure plays a crucial role here,\footnote{The divergence-free structure plays a more subtle role in the critical case.
Without this structure, the drift is required to be small in a critical Lebesgue space or Kato class, and local boundedness may depend on the `profile' of the drift.} as is already visible from the computation \begin{equation} \int (b \cdot \nabla \theta) \theta \phi^2 \, dx \, dt = \int b \cdot \nabla \left( \frac{\theta^2}{2} \right) \phi^2 \, dx \, dt = - \int \theta^2 (b \cdot \nabla \phi) \phi \, dx \, dt, \end{equation} where the last equality integrates by parts and uses $\div b = 0$. With this well-known observation, one may apply Moser's iteration scheme to demonstrate that, when $b \in L^q_t L^p_x$ and $\frac{2}{q} + \frac{n}{p} < 2$, solutions are \emph{locally bounded}, see~\cite{nazarov2012harnack}. Typical examples are \begin{equation} X = L^\infty_t L^{\frac{n}{2}+}_x, L^{\frac{n+2}{2}+}_{t,x}. \end{equation} Under these conditions (and a weak background assumption), the Harnack inequality persists as a \emph{single-scale Harnack inequality}~\cite{ignatovaelliptic,ignatovakukavicaryzhik}: In the steady case $\theta = \theta(x)$, \begin{equation} \sup_{B_R} \theta \leq C_R \inf_{B_R} \theta, \end{equation} where $C_R$ may become unbounded as $R \to 0^+$. Whereas a scale-invariant Harnack inequality implies H{\"o}lder continuity, it is perhaps less well known that a single-scale Harnack inequality may hold in the absence of H{\"o}lder continuity. Finally, in this setting, pointwise upper bounds on fundamental solutions continue to hold, although they have `fat tails' compared to their Gaussian counterparts~\cite{zhang2004strong,qianxilelllq2019}. Our first contribution is to understand the sharpness of the condition $b \in L^q_t L^p_x$, $\frac{2}{q} + \frac{n}{p} < 2$. We find that $L^1_t L^\infty_x$, the space of drifts with `bounded total speed' (in the terminology of~\cite{taolocalizationcompactness}), plays a special role and informs the counterexamples we construct in Section~\ref{sec:counters}. We summarize the results pertaining to this condition in Theorem~\ref{thm:lqlp}.
\\ In the steady case, there is an additional subtle feature, which is not well known and which we find surprising: Local boundedness continues to hold when $b \in L^{\frac{n-1}{2}+}$. To our knowledge, this `dimension reduction' was first observed in this context by Kontovourkis~\cite{kontovourkis2007elliptic} in his thesis.\footnote{ This `dimension reduction' itself goes back at least to work of Frehse and R\r{u}\v{z}i\v{c}ka~\cite{frehse1996existence} on the steady Navier-Stokes equations in $n=6$. The `slicing' is also exploited by Struwe in~\cite{struwenavierstokes}.} Heuristically, Kontovourkis' key observation is as follows. Consider the basic $L^2$ energy estimate in a ball $B_r$, without a smooth cut-off. The drift contributes the boundary term \begin{equation} \int_{B_r} (b \cdot \nabla \theta) \theta \, dx = \int_{\partial B_r} \frac{\theta^2}{2} b \cdot n \, d\sigma, \end{equation} where $d\sigma$ is the surface area measure. Since $\nabla \theta \in L^2(B_R)$, on `many slices' $r \in (R/2,R)$, we have $\nabla \theta \in L^2(\partial B_r)$, with a quantitative bound. Similarly, $b$ belongs to $L^{\frac{n-1}{2}+}(\partial B_r)$ on `many slices'. Thus, one may exploit Sobolev embedding on the sphere $\partial B_r$ to estimate the boundary term. This dimension reduction was recently rediscovered by Bella and Sch{\"a}ffner~\cite{belleschaffnerscpam}, who proved local boundedness and a single-scale Harnack inequality in the context of certain degenerate elliptic PDEs, which we review below. Since the work of Kontovourkis, it has been an interesting question which dimension reduction holds in the parabolic setting. In particular, is $b \in L^{\frac{n+1}{2}+}_{t,x}$ enough for local boundedness?
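The `many slices' assertion is quantified by a standard Chebyshev argument: if $f \in L^1(B_R)$, then, integrating in polar coordinates,
\begin{equation*}
\int_{R/2}^{R} \left( \int_{\partial B_r} |f| \, d\sigma \right) dr \leq \| f \|_{L^1(B_R)},
\end{equation*}
so the set of radii $r \in (R/2,R)$ with $\int_{\partial B_r} |f| \, d\sigma > \frac{8}{R} \| f \|_{L^1(B_R)}$ has measure at most $R/8$. Applying this to both $f = |\nabla \theta|^2$ and $f = |b|^{\frac{n-1}{2}+}$ yields a common set of good slices of measure at least $R/4$, on which the boundary term above may be estimated via Sobolev embedding on $\partial B_r$.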
Very recently, X.~Zhang~\cite{zhang2020maximum} generalized the work~\cite{belleschaffnerscpam} of Bella and Sch{\"a}ffner to the parabolic setting, and, among other things, demonstrated local boundedness under the condition $b \in L^p_x L^q_t$, $\frac{3}{q} + \frac{n-1}{p} < 2$, $p \leq q$, see Corollary~1.5 therein. Crucially, the order of integration is reversed. The condition $b \in L^{\frac{n-1}{2}+}_x L^\infty_t$ recovers the elliptic case of Kontovourkis. From this condition, we see that, perhaps, one dimension is not `reduced', but rather hidden in the time variable. In Theorem~\ref{thm:dimreductionversion}, we present our second contribution, namely, (i) the parabolic Harnack inequality and pointwise upper bounds on fundamental solutions in this setting, and (ii) counterexamples which illustrate the meaning and sharpness of the `dimension reduction'. \\ We now present the main results, which constitute a detailed picture of the local regularity theory for the passive scalar advection-diffusion equation~\eqref{eq:ADE} with supercritical drifts. Let $n \geq 2$, $\Omega \subset \mathbb{R}^n$ be a bounded domain, and $\Omega' \subset\subset \Omega$ be a subdomain. Let $I = (S,T]$ and $I'=(S',T'] \subset I$ be finite intervals such that $S<S'$. Let $Q_I = \Omega \times I$ and $Q_I' = \Omega' \times I'$. Let $p,q \in [1,+\infty]$.
\begin{theorem}[$b \in L^q_t L^p_x$] \label{thm:lqlp} \emph{(Local boundedness)}~\cite{nazarov2012harnack} If \begin{equation} \zeta_0 := \frac{2}{q} + \frac{n}{p} < 2, \end{equation} then we have the following \emph{quantitative local boundedness} property: If $\theta \in L^1(Q_I) \cap C^\infty(Q_I)$ satisfies the drift-diffusion equation~\eqref{eq:ADE} in $Q_I$ with divergence-free drift $b \in C^\infty(Q_I)$ and \begin{equation} b \in L^q_t L^p_x(Q_I), \end{equation} then \begin{equation} \sup_{Q_I'} |\theta| \lesssim \norm{\theta}_{L^1(Q_I)}, \end{equation} where the implied constant depends on $n$, $\Omega$, $\Omega'$, $I$, $I'$, $p$, $q$, and $\norm{b}_{L^q_t L^p_x(Q_I)}$. \\ \emph{(Single-scale Harnack)}~\cite{ignatovakukavicaryzhik} If, additionally, $b \in L^2_t H^{-1}_x(Q_I)$ and $\theta > 0$, then we have the following \emph{quantitative Harnack inequality}: If $I_1, I_2 \subset\subset I$ are intervals satisfying $\sup I_1 < \inf I_2$, then \begin{equation} \sup_{\Omega' \times I_1} \theta \lesssim \inf_{\Omega' \times I_2} \theta, \end{equation} where the implied constant depends on $n$, $\Omega$, $\Omega'$, $I$, $I_1$, $I_2$, $p$, $q$, $\norm{b}_{L^q_t L^p_x(Q_I)}$, and $\norm{b}_{L^2_t H^{-1}_x(Q_I)}$. \\ \emph{(Bounded total speed)} If $(p,q) = (\infty,1)$, then the above quantitative local boundedness property holds with constants depending on $b$ itself rather than $\| b \|_{L^1_t L^\infty_x(Q_I)}$. (The property is false without this adjustment.) \\ \emph{(Sharpness)} Let $Q = B_1 \times (0,1)$. There exists a smooth divergence-free drift $b \in C^\infty(Q)$ belonging to $L^q_t L^p_x(Q)$ for all $(p,q) \in [1,+\infty]^2$ with $2/q+n/p = 2$, $(p,q) \neq (\infty,1)$, and satisfying the following property. There exists a smooth solution $\theta \in L^\infty_t L^1_x \cap C^\infty(Q)$ to the advection-diffusion equation~\eqref{eq:ADE} in $Q$ with \begin{equation*} \sup_{B_{1/2} \times (0,T)} |\theta| \to +\infty \text{ as } T \to 1^-.
\end{equation*} In particular, the above quantitative local boundedness property fails when $2/q+n/p=2$ and $q > 1$. \\ \emph{(Upper bounds on fundamental solutions)}~\cite{qianxilelllq2019} If the divergence-free drift $b \in C^\infty_0(\mathbb{R}^n \times [0,+\infty))$ belongs to $L^q_t L^p_x(\mathbb{R}^n \times \mathbb{R}_+)$ and $1 \leq \zeta_0 < 2$, then the fundamental solution $\Gamma = \Gamma(x,t;y,s)$ to the parabolic operator $L=\partial_t-\Delta+b \cdot \nabla$ satisfies \begin{equation} \label{eq:upperboundslqlp} \begin{aligned} \Gamma(x,t;y,s) &\leq C (t-s) \left( \frac{1}{t-s} + \frac{M_0}{(t-s)^{1+\frac{\alpha_0}{2}}} \right)^{\frac{n+2}{2}}\\ &\quad \times \left[ \exp \left( - \frac{c |x-y|^2}{t-s} \right) + \exp\left( - \frac{ c |x-y|^{1+\frac{1}{1+\alpha_0}}}{((t-s) M_0)^{\frac{1}{1+\alpha_0}}} \right) \right] \end{aligned} \end{equation} for all $x,y \in \mathbb{R}^n$ and $0 \leq s < t < +\infty$. Here, \begin{equation} \alpha_0 = \frac{\zeta_0-1}{\theta_2}, \quad M_0 = C \| b \|_{L^q_t L^p_x(\mathbb{R}^n \times \mathbb{R}_+)}^{\frac{1}{\theta_2}} + \frac{1}{4}, \quad \theta_2 = 1-\frac{\zeta_0}{2}. \end{equation} \end{theorem} See Figure~\ref{fig:dim3} for an illustration of Theorem~\ref{thm:lqlp}. \begin{figure} \caption{Divergence-free drift $b \in L^q_t L^p_x$ in dimension $n \geq 2$ (dimension $n=3$ illustrated above).
\emph{Region~A}}\label{fig:dim3} \end{figure} \begin{theorem}[$b \in L^p_x L^q_t$] \label{thm:dimreductionversion} \emph{(Local boundedness)}~\cite{zhang2020maximum} If \begin{equation} \zeta_0 := \frac{3}{q} + \frac{n-1}{p} < 2,\quad p\le q, \end{equation} then we have the following \emph{quantitative local boundedness} property: If $\theta \in L^1(Q_I) \cap C^\infty(Q_I)$ satisfies the drift-diffusion equation~\eqref{eq:ADE} in $Q_I$ with divergence-free drift $b \in C^\infty(Q_I)$ and \begin{equation} b \in L^p_x L^q_t(Q_I), \end{equation} then \begin{equation} \sup_{Q_I'} |\theta| \lesssim \norm{\theta}_{L^1(Q_I)}, \end{equation} where the implied constant depends on $n$, $\Omega$, $\Omega'$, $I$, $I'$, $p$, $q$, and $\norm{b}_{L^p_x L^q_t(Q_I)}$. \\ \emph{(Single-scale Harnack)} If, additionally, $b \in L^2_t H^{-1}_x(Q_I)$ and $\theta > 0$, then we have the following \emph{quantitative Harnack inequality}: If $I_1, I_2 \subset\subset I$ are intervals satisfying $\sup I_1 < \inf I_2$, then \begin{equation} \sup_{\Omega' \times I_1} \theta \lesssim \inf_{\Omega' \times I_2} \theta, \end{equation} where the implied constant depends on $n$, $\Omega$, $\Omega'$, $I$, $I_1$, $I_2$, $p$, $q$, $\norm{b}_{L^p_x L^q_t(Q_I)}$, and $\norm{b}_{L^2_t H^{-1}_x(Q_I)}$. \\ \emph{(Sharpness, steady case)} Let $n \geq 3$. The quantitative local boundedness property fails for steady drifts $b \in L^{\frac{n-1}{2}}(B)$ and steady solutions $\theta$ in the ball $B$. \\ \emph{(Sharpness, time-dependent case)} Let $n \geq 2$ and $Q = B_1 \times (0,1)$. There exists a smooth divergence-free drift $b \in C^\infty(Q)$ belonging to $L^p_x L^q_t(Q)$ for all $p,q \in [1,+\infty]$ with $p \leq q$ and $3/q+(n-1)/p > 2$ and satisfying the following property. There exists a smooth solution $\theta \in L^\infty_t L^1_x \cap C^\infty(Q)$ to the advection-diffusion equation~\eqref{eq:ADE} in $Q$ with \begin{equation*} \sup_{B_{1/2} \times (0,T)} |\theta| \to +\infty \text{ as } T \to 1^-.
\end{equation*} In particular, the above quantitative local boundedness property fails when $3/q+(n-1)/p > 2$ and $p \leq q$. Finally, the drift additionally belongs to $L^q_t L^p_x(Q)$ for all $(p,q) \in [1,+\infty]^2$ with $2/q+n/p > 2$. \\ \emph{(Upper bounds on fundamental solutions)} If the divergence-free drift $b \in C^\infty_0(\mathbb{R}^n \times [0,+\infty))$ belongs to $L^p_x L^q_t(\mathbb{R}^n \times \mathbb{R}_+)$ and $1 \leq \zeta_0 < 2$, then the fundamental solution $\Gamma = \Gamma(x,t;y,s)$ to the parabolic operator $L=\partial_t-\Delta+b \cdot \nabla$ satisfies \begin{equation} \label{eq:upperboundsdimred} \begin{aligned} \Gamma(x,t;y,s) &\leq C (t-s) \left( \frac{1}{t-s} + \frac{M_0}{(t-s)^{1+\frac{\alpha_0}{2}}} \right)^{\frac{n+2}{2}}\\ &\quad\times\left[ \exp \left( - \frac{c |x-y|^2}{t-s} \right) + \exp\left( - \frac{ c |x-y|^{1+\frac{1}{1+\alpha_0}}}{((t-s) M_0)^{\frac{1}{1+\alpha_0}}} \right) \right] \end{aligned} \end{equation} for all $x,y \in \mathbb{R}^n$ and $0 \leq s < t < +\infty$. Here, \begin{equation} \alpha_0 = \frac{\zeta_0-1}{\theta_2}, \quad M_0 = C \| b \|_{L^p_x L^q_t(\mathbb{R}^n \times \mathbb{R}_+)}^{\frac{1}{\theta_2}} + \frac{1}{4}, \quad \theta_2 = 1-\frac{\zeta_0}{2}. \end{equation} \end{theorem} See Figures~\ref{fig:dim4} and~\ref{fig:dim2} for an illustration of Theorem~\ref{thm:dimreductionversion}. \begin{figure} \caption{Divergence-free drift $b \in L^p_x L^q_t$, dimension $n \geq 3$ (dimension $n=4$ illustrated above). Local boundedness and single-scale Harnack inequality.}\label{fig:dim4} \end{figure} \begin{figure} \caption{Divergence-free drift $b \in L^p_x L^q_t$, dimension $n = 2$. Local boundedness and single-scale Harnack inequality.}\label{fig:dim2} \end{figure} We prove Theorems~\ref{thm:lqlp} and~\ref{thm:dimreductionversion}, including the known results, in a self-contained way below. \begin{remark} The local boundedness property and Harnack inequality in Theorems~\ref{thm:lqlp} and~\ref{thm:dimreductionversion} can be easily extended to accommodate drifts satisfying $\div b \leq 0$ (with the background assumption $b \in L^2_{t,x}(Q_I)$ in the Harnack inequality). These properties and the fundamental solution estimates can also be extended to divergence-form elliptic operators $\div a \nabla \cdot$ with bounded, uniformly elliptic~$a$. \end{remark} \begin{remark} Our condition $b \in L^2_t H^{-1}_x(Q_I)$ appears to be new and is used to connect the forward- and backward-in-time regions in the Harnack inequality, see Lemma~\ref{lem:universal}. In contrast, the background condition in the single-scale Harnack inequality in~\cite{ignatovakukavicaryzhik} is $b \in L^\infty_t L^2_x(Q_I)$. \end{remark} \subsection*{Discussion of dimension reduction principle} The `slicing' described above in the steady setting is more subtle in the time-dependent setting because the anisotropic condition $\theta \in L^\infty_t L^2_x$ does not restrict well to slices in the radial variable $r$; compare this to the isotropic condition $\nabla \theta \in L^2_{t,x}$. Indeed, to `slice' in a variable, it seems necessary for that variable to be summed `last' (that is, on the outside) in the norm.
The condition $b \in L^p_x L^q_t$, $p \leq q$, $\frac{3}{q} + \frac{n-1}{p} < 2$, in Theorem~\ref{thm:dimreductionversion} comes, roughly speaking, from interpolating between the isotropic condition $b \in L^{\frac{n+2}{2}+}_{t,x}$, in which the order of integration may be changed freely, and the dimensionally reduced condition $b \in L^{\frac{n-1}{2}+}_x L^\infty_t$, which implies that $b \in L^\infty_t L^{\frac{n-1}{2}+}_\sigma(\partial B_r \times I)$ on `many slices', say, a set of $r \in A \subset (1/2,1)$ with measure $|A| > 1/4$. Local boundedness under this condition was already observed by X. Zhang in~\cite[Corollary 1.5]{zhang2020maximum}, and the counterexamples we construct answer an open question in Remark~1.6 therein. Our proof of local boundedness and the Harnack inequality is built on a certain \emph{form boundedness condition}~\eqref{eq:fbc}, see Section~\ref{sec:localbdd}, which subsumes a wide variety of possible assumptions on $b$. For example, in Proposition~\ref{pro:stuffissatisfied}, we verify~\eqref{eq:fbc} not only in the context of Theorems~\ref{thm:lqlp} and~\ref{thm:dimreductionversion} but also under the more general conditions \begin{equation} b \in L^q_t L^\beta_r L^\gamma_\sigma((B_R \setminus B_{R/2}) \times I), \quad \beta \geq \frac{n}{2}, \quad \frac{2}{q} + \frac{1}{\beta} + \frac{n-1}{\gamma} < 2 \end{equation} and \begin{equation} b \in L^\kappa_r L^q_t L^p_\sigma((B_R \setminus B_{R/2}) \times I), \quad p \leq q, \quad \frac{3}{q} + \frac{n-1}{p} < 2. \end{equation} Furthermore, we allow arbitrarily low integrability $\kappa > 0$ in the radial variable; the slicing method does not require high integrability. Our proof of upper bounds on fundamental solutions is centered on a variant~\eqref{eq12.33} of the form boundedness condition, see Section~\ref{sec:upperboundsonfundamentalsolutions}, partially inspired by the work of Qi S. Zhang~\cite{zhang2004strong}. 
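For orientation, the endpoints of the scaling line $\frac{3}{q} + \frac{n-1}{p} = 2$ with $p \leq q$ are recovered by a direct computation:
\begin{equation*}
p = q: \quad \frac{3}{p} + \frac{n-1}{p} = \frac{n+2}{p} = 2 \iff p = \frac{n+2}{2}, \qquad q = \infty: \quad \frac{n-1}{p} = 2 \iff p = \frac{n-1}{2},
\end{equation*}
matching the isotropic space $L^{\frac{n+2}{2}}_{t,x}$ and the dimensionally reduced space $L^{\frac{n-1}{2}}_x L^\infty_t$, respectively.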
We now describe the work~\cite{belleschaffnerscpam}, which was generalized to the parabolic setting in~\cite{zhang2020maximum}. The conditions in~\cite{belleschaffnerscpam} are on the ellipticity matrix $a$, which is allowed to be degenerate: Define \begin{equation} \lambda(x) := \inf_{|\xi| = 1} \xi \cdot a(x) \xi, \quad \mu(x) := \sup_{|\xi| = 1} \frac{|a(x) \xi|^2}{\xi \cdot a(x)\xi}. \end{equation} If $n \geq 2$, $p,q \in (1,+\infty]$, and \begin{equation} \label{eq:bellacondition} \lambda^{-1} \in L^q(B), \quad \mu \in L^p(B), \quad \frac{1}{p} + \frac{1}{q} < \frac{2}{n-1}, \end{equation} then weak solutions of $- \div a \nabla u = 0$ are locally bounded and satisfy a single-scale Harnack inequality. The analogous condition with $\frac{2}{n}$ on the right-hand side is due to Trudinger in~\cite{trudinger}. By examples in~\cite{franchiseapioniserra}, the right-hand side cannot be improved to $\frac{2}{n-1} + \varepsilon$. Divergence-free drifts $b$ belong to the above framework: Under general conditions, it is possible to realize $b$ as the divergence of an antisymmetric \emph{stream matrix}: $b_i = d_{ij,j}$. Then we have $- \Delta \theta + b \cdot \nabla \theta = - \div [ (I+d) \nabla \theta ]$, and $\mu$ captures the antisymmetric part $d$. Indeed, with the summation convention, \begin{equation} \div (d \nabla \theta) = \partial_i ( d_{ij} \partial_j \theta ) = (\partial_i d_{ij}) \partial_j \theta + d_{ij} \partial_i \partial_j \theta = - b_j \partial_j \theta, \end{equation} since $d_{ij} \partial_i \partial_j \theta = 0$ by antisymmetry and $\partial_i d_{ij} = - \partial_i d_{ji} = -b_j$. The steady examples we construct in Section~\ref{sec:counters} handle the equality case in~\eqref{eq:bellacondition}. We mention also the works~\cite{bellaschaffnerpqgrowth,bella2020non}. Earlier, it was hoped that the dimension reduction could be further adapted to treat the case $b \in L^{\frac{n+1}{2}+}_{t,x}$ in the parabolic setting by estimating a half-derivative in time: $|\partial_t|^{1/2} \theta \in L^2_{t,x}$, since this condition is better adapted to slicing than $\theta \in L^\infty_t L^2_x$. On the other hand, our counterexamples rule out this possibility.
Half time derivatives in parabolic PDE go back, at least, to~\cite[Chapter III, Section 4]{ladyzhenskaya}, see~\cite{simonbortz} for further discussion. \subsection*{Discussion of counterexamples and `bounded total speed'} Solutions of~\eqref{eq:ADE} in the whole space and evolving from initial data $\theta_0 \in L^1(\mathbb{R}^n)$ become bounded instantaneously. This is captured by the famous Nash estimate~\cite{nash1958continuity}: \begin{equation} \label{eq:nashestimate} \| \theta(\cdot,t) \|_{L^\infty(\mathbb{R}^n)} \lesssim t^{-\frac{n}{2}} \| \theta_0 \|_{L^1(\mathbb{R}^n)}, \end{equation} where the implied constant is \emph{independent} of the divergence-free drift $b$. The Nash estimate indicates that a divergence-free drift does not impede smoothing, in the sense of boundedness, of a density, even if the density is initially a Dirac mass. Therefore, for rough drifts, local boundedness must be violated in a different way: The danger is that the drift can `drag' an anomalous singularity into the domain of observation from outside. There is a competition between the drift, which transports the singularity with some speed, and the diffusion, which smooths the singularity at some rate. Will the singularity, entering from outside, be smoothed before it can be observed inside the domain? Consider a Dirac mass $\delta_{x = - \vec{e}_1}$, which we seek to transport inside the domain. If one can transport the Dirac mass inside $B_{1/2}$ \emph{instantaneously}, one can violate local boundedness. This can be done easily via the drift $b(x,t) = \delta_{t = 0} \vec{e}_1$, which is singular in time. This example already demonstrates the importance of the space $L^1_t L^\infty_x$, whose drifts cannot transport the mass inside arbitrarily quickly. To improve this example, we seek the most efficient way to transport the Dirac mass. Heuristically, the evolution of the Dirac mass is mostly supported in a ball of radius $R(t) \sim \sqrt{t}$.
Therefore, we define our drift $b$ to be $S(t) \vec{e}_1$ restricted to this support. That is, the drift lives on a ball of radius $R(t)$ moving in the $x_1$-direction at speed $S(t)$. Since we wish to move the Dirac mass instantaneously, we guess that $S(t) \sim 1/t$. A back-of-the-envelope calculation gives \begin{equation} \| b \|_{L^q_t L^p_x}^q \sim \int_0^1 S(t)^q R(t)^{\frac{nq}{p}} \, dt \sim \int_0^1 t^{-q+\frac{nq}{2p}} \, dt. \end{equation} The above quantity is finite precisely when the exponent exceeds $-1$, that is, when $2/q+n/p > 2$; more care is required to get the borderline cases in Theorem~\ref{thm:lqlp}, see Section~\ref{sec:counters}. This heuristic is the basis for our time-dependent counterexamples in Section~\ref{sec:counters}, except that we use appropriate subsolutions to keep the compact support property, we glue together many of these Dirac masses, and $S(t)$ must be chosen more carefully. The above transport-vs.-smoothing phenomenon is also responsible for the `fat tails' in the upper bounds~\eqref{eq:upperboundslqlp} and~\eqref{eq:upperboundsdimred} on the fundamental solutions. These upper bounds do not align with the Nash estimate~\eqref{eq:nashestimate} because the Nash estimate does not effectively capture spatial localization. The elliptic counterexample with $b \in L^{\frac{n-1}{2}}$ is achieved by introducing an ansatz which reduces the problem to counterexamples for the steady Schr{\"o}dinger equation $-\Delta u + Vu = 0$ in dimension $n-1$. These steady counterexamples are singular on a line through the domain, as they must be to respect the maximum principle. The time-dependent counterexamples in $L^p_x L^q_t$ seem to be more subtle, and we only exhibit them in the non-borderline cases $\zeta_0 = \frac{3}{q} + \frac{n-1}{p} > 2$ and $p \leq q$. When $\zeta_0 = 2$, we have counterexamples in the cases $p = q = \frac{n+2}{2}$ and $(p,q) = (\frac{n-1}{2},+\infty)$ (the steady example).
We believe that local boundedness fails also between these two points, but the counterexamples are yet to be exhibited, see Remark~\ref{rmk:anopenquestion}. \subsection*{Review of existing literature} Following the seminal works of De Giorgi~\cite{de1957sulla} and Nash~\cite{nash1958continuity}, Moser introduced his parabolic Harnack inequality~\cite{moserharnack,mosererrata} (see~\cite{moserelliptic} for the elliptic case), whose original proof relied on a parabolic generalization of the John-Nirenberg theorem concerning exponential integrability of ${\rm BMO}$ functions. Later, Moser published a simplified proof~\cite{moserpointwise}, whose basic methods we follow. In~\cite{SSSZ}, Seregin, Silvestre, {\v S}ver{\'a}k, and Zlato{\v s} generalized Moser's methods to accommodate drifts in $L^\infty_t {\rm BMO}^{-1}_x$. For recent work on boundary behavior in this setting, see~\cite{linhanjill,Hofmann2021}. Generalizations to critical Morrey spaces and the supercritical Lebesgue spaces are due to~\cite{nazarov2012harnack,ignatovaelliptic,ignatovakukavicaryzhik,ignatovaslightlysupercritical}. The Gaussian estimates on fundamental solutions were discovered by Aronson~\cite{aronson} and were generalized to divergence-free drifts by Osada in~\cite{osada1987} ($L^\infty_t L^{-1,\infty}_x$) and Qian and Xi ($L^\infty_t {\rm BMO}^{-1}_x$) in~\cite{qianxisingular2019,qianxilelllq2019}. Important contributions are due to~\cite{zhang2004strong}, who developed Gaussian-like upper bounds in the supercritical case $b \in L^{\frac{n}{2}+}(\mathbb{R}^n)$, $n \geq 4$, and~\cite{liskevichzhang,milmansemenov,semenovregularitytheorems}, among others. For recent progress on Green's function estimates with sharp conditions on lower order terms, see~\cite{MR3787362,MR3941634, sakellaris2020scale,mourgoglou2019regularity}. The primary examples concerning the regularity of solutions to~\eqref{eq:ADE} can be found in~\cite{SSSZ,silvestre2013loss,wu2021supercritical}.
Counterexamples to continuity with time-dependent drifts can be constructed by colliding two discs of $+1$ (subsolution) and $-1$ (supersolution) with radii $R(t) \sim \sqrt{1-t}$ and speeds $S(t) \sim 1/\sqrt{1-t}$. The parabolic counterexamples with \emph{steady} velocity fields constructed therein are more challenging. See~\cite{Filonov2013,FilonovShilkin} for examples in the elliptic setting. We also mention Zhikov's counterexamples~\cite{zhikovuniqueness} to uniqueness when $b$ does not belong to $L^2$, whereas weak solutions with zero Dirichlet conditions are known to be unique when $b \in L^2$~\cite{zhang2004strong}. For recent counterexamples in the regularity theory of parabolic systems based on self-similarity, see~\cite{mooney}. \\ \begin{remark} At a technical level, there is a small but, perhaps, non-trivial gap in the proof of the weak Harnack inequality in~\cite{ignatovakukavicaryzhik}, see (3.22) therein, where it is claimed that $\log_+ (\theta/\mathbf{K})$ is a supersolution. This seems related to a step in the proof of Lemma 6.20, p. 124, in Lieberman's book~\cite{liebermanbook}, which we had difficulty following, see the first inequality therein. Both of these are related to improving the weak $L^1$ inequality. We opt to follow Moser's proof in~\cite{moserpointwise} more directly and skip the weak Harnack inequality. In principle, one could directly apply the parabolic John-Nirenberg inequality in~\cite{moserharnack,fabesgarofalo} to obtain the weak Harnack inequality. \end{remark} \section{Local boundedness and Harnack's inequality} \langlebel{sec:localbdd} Let $b$ be a smooth, divergence-free vector field defined on $B_{R_0} \times I_0$, where $R_0 > 0$ and $I_0$ is an open interval. In the sequel, we will use a \emph{form boundedness condition}, which we denote by~\eqref{eq:fbc}: \begin{quote} There exist constants $M, N, \alpha> 0$, $\varepsilon \in [0,1/2)$, and $\delta \in (0,1]$ satisfying the following property. 
For every $\varrho$ and $R$ with $R_0/2 \leq \varrho < R \leq R_0$, every subinterval $I \subset I_0$, and every Lipschitz $u \in W^{1,\infty}(B_R \times I)$, there exists a measurable set $A = A(\varrho,R,I,u) \subset (\varrho,R)$ with $|A| \geq \delta (R-\varrho)$ and satisfying \begin{equation} \label{eq:fbc} \tag{FBC} \begin{aligned} &- \frac{1}{|A|} \iint_{B_A \times I} \frac{|u|^2}{2} (b \cdot n) \, dx \, dt \leq \frac{MR_0^\alpha }{\delta^\alpha R_0^2 (R-\varrho)^\alpha} \iint_{(B_R \setminus B_\varrho) \times I} |u|^2 \, dx \, dt \\ &\quad\quad +N \iint_{(B_R \setminus B_\varrho) \times I} |\nabla u|^2 \, dx \, dt + \varepsilon \sup_{t \in I} \int_{B_R} |u(x,t)|^2 \, dx, \end{aligned} \end{equation} where $B_A = \cup_{r \in A} \partial B_r$ and $n$ is the outer unit normal. \end{quote} The LHS of~\eqref{eq:fbc} appears on the RHS of the energy estimates. In the situations we consider, $M$ may depend on $R_0$, and we can predict its dependence based on dimensional analysis. For example, since $b$ has dimensions of $L^{-1}$, the quantity $$ R_0^{1-\frac{2}{q}-\frac{n}{p}} \| b \|_{L^q_t L^p_x(B_{R_0} \times R_0^2 I)} $$ is dimensionless. In Proposition~\ref{pro:stuffissatisfied}, we show that~\eqref{eq:fbc} is satisfied under the hypotheses of Theorems~\ref{thm:lqlp} and~\ref{thm:dimreductionversion}. \\ \emph{Notation}. In this section, $R_0/2 \leq \varrho < R \leq R_0$ and $-\infty < T < \tau < 0$. Let us introduce the backward parabolic cylinders $Q_{R,T} = B_R \times (T,0)$. Our working assumptions are that $\theta$ is a non-negative Lipschitz function and $b$ is a smooth, divergence-free vector field. To give precise constants, we will frequently use the notation \begin{equation} \label{eq:Cdef} \mathbf{C}(\varrho,\tau,R,T,M,\delta,\alpha) = \frac{1}{\delta^2 (R-\varrho)^2} + \frac{MR_0^\alpha}{\delta^\alpha R_0^2 (R-\varrho)^\alpha} + \frac{1}{\tau-T} \end{equation} involving the various parameters from~\eqref{eq:fbc}.
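The same dimensional analysis explains the shape of the constant $\mathbf{C}$: since $[x] = L$ and $[t] = L^2$, each of its three summands has dimensions of $L^{-2}$,
\begin{equation*}
\left[ \frac{1}{\delta^2 (R-\varrho)^2} \right] = \left[ \frac{M R_0^\alpha}{\delta^\alpha R_0^2 (R-\varrho)^\alpha} \right] = \left[ \frac{1}{\tau - T} \right] = L^{-2},
\end{equation*}
provided that $M$, $\delta$, and $\alpha$ are treated as dimensionless, consistent with the choices of $M$ in the proposition below.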
Our convention throughout the paper is that all implied constants may depend on $n$. \begin{theorem}[Local boundedness] \label{thm:localboundedness} Let $\theta$ be a non-negative Lipschitz subsolution and $b$ satisfy~\eqref{eq:fbc} on $Q_{R,T}$. Then, for all $\gamma \in (0,2]$, \begin{equation} \sup_{Q_{\varrho,\tau}} \theta \lesssim_{N,\alpha,\varepsilon,\gamma} \mathbf{C}^{\frac{n+2}{2\gamma}} \norm{\theta}_{L^\gamma(Q_{R,T} \setminus Q_{\varrho,\tau})}. \end{equation} \end{theorem} \begin{theorem}[Harnack inequality] \label{thm:Harnack} Let $\theta$ be a non-negative Lipschitz solution on $Q^* = B \times (-T^*,T^*)$. Let $b \in L^2_t H^{-1}_x(Q^*;\mathbb{R}^n)$ satisfy~\eqref{eq:fbc} on $Q^*$. Let $0 < \ell < T^*$ be the time lag. Then \begin{equation} \sup_{B_{1/2} \times (-T^*+\ell,0)} \theta \lesssim_{N,M,A,T^*,\delta,\alpha,\varepsilon,\ell} \inf_{B_{1/2} \times (\ell,T^*)} \theta, \end{equation} where $A = \norm{b}_{L^2_t H^{-1}_x(Q^*;\mathbb{R}^n)}^2$. \end{theorem} \subsection{Verifying~\eqref{eq:fbc}} We verify that~\eqref{eq:fbc} is satisfied in the setting of the main theorems: \begin{proposition}[Verifying FBC] \label{pro:stuffissatisfied} Let $p,q,\beta,\gamma \in [1,+\infty]$, $\kappa \in (0,+\infty]$, and $b$ be a smooth, divergence-free vector field defined on $B_{R_0} \times I_0$. 1. If \begin{equation} \label{eq:condition1} b \in L^q_t L^\beta_r L^\gamma_\sigma((B_{R_0} \setminus B_{R_0/2}) \times I_0), \quad \beta \geq \frac{n}{2}, \quad \zeta_0 := \frac{2}{q} + \frac{1}{\beta} + \frac{n-1}{\gamma} < 2, \end{equation} then $b$ satisfies~\eqref{eq:fbc} with $\alpha = 1/\theta_2$, $A = (\varrho,R)$, $\delta = 1$, $N = \varepsilon = 1/4$, and \begin{equation} M = C \left( R_0^{1-\zeta_0} \| b \|_{L^q_t L^\beta_r L^\gamma_\sigma((B_{R_0} \setminus B_{R_0/2}) \times I_0)} \right)^{\frac{1}{\theta_2}} + \frac{1}{4}, \end{equation} where $\theta_2 = 1-\zeta_0/2$. 2.
If \begin{equation} \langlebel{eq:condition2} b \in L^\kappa_r L^q_t L^p_\sigma((B_{R_0} \setminus B_{R_0/2}) \times I_0), \quad p \leq q, \quad \zeta_0 := \frac{3}{q} + \frac{n-1}{p} < 2, \end{equation} then $b$ satisfies~\eqref{eq:fbc} with $\alpha = (1/\kappa + (q-1)/q)/\theta_2$, $\delta = 1/2$, $N = \varepsilon = 1/4$, and \begin{equation} M = C \left( R_0^{1-\zeta_0} \| b \|_{L^\kappa_r L^q_t L^p_\sigma((B_{R_0} \setminus B_{R_0/2}) \times I_0)}\right)^{\frac{1}{\theta_2}} + \frac{1}{4}, \end{equation} where $\theta_2 = 1-\zeta_0/2$. \end{proposition} \begin{corollary}[FBC in $L^p_x L^q_t$] Let $p,q \in [1,+\infty]$ and $b$ be as above. If \begin{equation} \langlebel{eq:condition3} b \in L^p_x L^q_t((B_{R_0} \setminus B_{R_0/2}) \times I_0), \quad p \leq q, \quad \zeta_0 := \frac{3}{q} + \frac{n-1}{p} < 2, \end{equation} then $b$ satisfies~\eqref{eq:fbc} with $\alpha = (1/p + (q-1)/q)/\theta_2$, $\delta = 1/2$, $N = \varepsilon = 1/4$, and \begin{equation} M = C \left( R_0^{1-\zeta_0} \| b \|_{L^p_x L^q_t((B_{R_0} \setminus B_{R_0/2}) \times I_0)}\right)^{\frac{1}{\theta_2}} + \frac{1}{4}, \end{equation} where $\theta_2 = 1-\zeta_0/2$. \end{corollary} By Minkowski's inequality,~\eqref{eq:condition3} is a special case of~\eqref{eq:condition2} with $\kappa=p$. \begin{remark} \langlebel{rmk:remarkaboutendpoints} Condition~\eqref{eq:condition1} automatically enforces $q \in (1,+\infty]$ and $\gamma \in \left( \frac{n-1}{2},+\infty \right]$. Condition~\eqref{eq:condition2} automatically enforces $q \in \left( \frac{n+2}{2} ,+\infty \right]$ and $p \in \left( \frac{n-1}{2}, +\infty \right]$. \end{remark} \begin{proof}[Proof of Proposition~\ref{pro:stuffissatisfied}] First, we rescale so that $R_0 = 1$. Let $1/2 \leq \varrho < R \leq 1$, $I \subset I_0$. We restrict to $n \geq 3$ and summarize the modifications for $n=2$ afterward. Unless stated otherwise, the norms below are on $(B_R \setminus B_\varrho) \times I$. \textit{1. Summary of embeddings for $u$}.
By the Sobolev embedding theorem, we have \begin{equation} \| u \|_{L^2_t L^{2^*_n}_x} \lesssim \| u \|_{L^2} + \| \nabla u \|_{L^2},\quad 2^*_n=\frac{2n}{n-2}. \end{equation} After interpolating with $\| u \|_{L^\infty_t L^2_x}$ and $\| u \|_{L^2_{t,x}}$, we have \begin{equation} \langlebel{eq:firstinterpolation} \| u \|_{L^{q_1}_t L^{p_1}_x} \lesssim \| u \|_{L^2}^{\theta_1} (\| u \|_{L^2} + \| \nabla u \|_{L^2} + \| u \|_{L^\infty_t L^2_x})^{1-\theta_1} \end{equation} for suitable $\theta_1 = \theta_1(q_1,p_1) \in (0,1)$ whenever \begin{equation} \frac{2}{q_1} + \frac{n}{p_1} > \frac{n}{2}, \quad q_1 \in [2,+\infty), \quad p_1 \in [2,2^*_n). \end{equation} Next, we employ the Sobolev embedding theorem on the spheres $\partial B_r$, $r \in (\varrho,R)$: \begin{equation} \langlebel{eq:sobolevembeddingspheres} \| u \|_{L^2_t L^2_r L^{2^*_{n-1}}_\sigma} \lesssim \| u \|_{L^2} + \| \nabla u \|_{L^2}. \end{equation} Interpolating with~\eqref{eq:firstinterpolation}, we have \begin{equation} \langlebel{eq:uinterpineq} \| u \|_{L^{q_2}_t L^{\beta_2}_r L^{\gamma_2}_\sigma} \lesssim \| u \|_{L^2}^{\theta_2} (\| u \|_{L^2} + \| \nabla u \|_{L^2} + \| u \|_{L^\infty_t L^2_x})^{1-\theta_2} \end{equation} for suitable $\theta_2 = \theta_2(q_2,\beta_2,\gamma_2) \in (0,1)$ whenever \begin{equation} \langlebel{eq:q2belonging} \frac{2}{q_2} + \frac{1}{\beta_2} + \frac{n-1}{\gamma_2} > \frac{n}{2}, \quad q_2 \in [2,+\infty), \quad \beta_2 \in [2,2^*_n), \quad \gamma_2 \in [2,2^*_{n-1}). \end{equation} The condition~\eqref{eq:q2belonging} describes a region in the parameter space $\{ (1/q_2,1/\beta_2,1/\gamma_2) \} \subset \mathbb{R}^3$ containing the tetrahedron with vertices $(1/2,1/2,1/2)$, $(1/2,1/2^*_n,1/2^*_n)$, $(1/2,1/2,1/2^*_{n-1})$, and $(0,1/2,1/2)$; these vertices correspond to the norms $\| u \|_{L^2_{t,x}}$, $\| u \|_{L^2_t L^{2^*_n}_x}$, $\| u \|_{L^2_t L^2_r L^{2^*_{n-1}}_\sigma}$, and $\| u \|_{L^\infty_t L^2_x}$ appearing above. We compute $\theta_2$ according to \begin{equation} \langlebel{eq:computetheta2} \theta_2 \left( \frac{n+2}{2} \right) + (1-\theta_2) \left( \frac{n+2}{2} - 1 \right) = \frac{2}{q_2} + \frac{1}{\beta_2} + \frac{n-1}{\gamma_2}.
\end{equation} \textit{2. Verifying~\eqref{eq:fbc} for condition~\eqref{eq:condition1}}. Choose $q_2/2 = q'$ and $\beta_2/2 = \beta'$, where $'$ denotes H{\"o}lder conjugates. This is admissible according to the restrictions on $q$, $\beta$, $q_2$, and $\beta_2$ described in~\eqref{eq:condition1}, Remark~\ref{rmk:remarkaboutendpoints}, and~\eqref{eq:q2belonging}. We choose $\gamma_2$ to satisfy \begin{equation} \langlebel{eq:thingtounfold} \left( \frac{2}{q} + \frac{1}{\beta} + \frac{n-1}{\gamma} \right) + 2 \left( \frac{2}{q_2} + \frac{1}{\beta_2} + \frac{n-1}{\gamma_2} \right) = n + 2. \end{equation} This is possible according to the numerology in~\eqref{eq:condition1} and~\eqref{eq:q2belonging}. Unfolding~\eqref{eq:thingtounfold}, we find $\gamma_2/2 = \gamma'$. Finally, we choose $A = (\varrho,R)$ and compute $\theta_2$ according to~\eqref{eq:computetheta2}. Then \begin{equation} \langlebel{eq:computingforcondition1} \begin{aligned} & \left| \frac{1}{|A|} \int_{}\kern-.34em \int_{B_A \times I} \frac{|u|^2}{2} b \cdot n \, dx \, dt \right| \leq \frac{1}{R - \varrho} \| b \|_{L^q_t L^{\beta}_r L^{\gamma}_\sigma} \| u \|_{L^{q_2}_t L^{\beta_2}_r L^{\gamma_2}_\sigma}^2 \\ &\quad\overset{\eqref{eq:uinterpineq}}{\leq} \frac{C}{R - \varrho} \| b \|_{L^q_t L^{\beta}_r L^{\gamma}_\sigma} \| u \|_{L^2}^{2\theta_2} (\| u \|_{L^2} + \| \nabla u \|_{L^2} + \| u \|_{L^\infty_t L^2_x})^{2-2\theta_2} \\ &\quad\quad\leq C \frac{\| b \|_{L^q_t L^{\beta}_r L^{\gamma}_\sigma}^{\frac{1}{\theta_2}} }{(R-\varrho)^{\frac{1}{\theta_2}}} \| u \|_{L^2}^2 + \frac{1}{4} ( \| u \|_{L^2}^2 + \| \nabla u \|_{L^2}^2 + \| u \|_{L^\infty_t L^2_x}^2 ), \end{aligned} \end{equation} where we used Young's inequality in the last step. This implies~\eqref{eq:fbc}. \textit{3. Verifying~\eqref{eq:fbc} for condition~\eqref{eq:condition2}}. First, we identify \emph{good slices} for $b$. 
Specifically, we apply Chebyshev's inequality in $r$ to the integrable function \begin{equation} r \mapsto \big\lVert b|_{I \times \partial B_r} \big\rVert_{L^q_t L^p_\sigma (I \times \partial B_r)}^\kappa \end{equation} to obtain that, on a set $A = A(\varrho,R,I)$ of measure $|A| \geq (R-\varrho)/2$, we have \begin{equation} \langlebel{eq:postcheby} \norm{b}_{L^\infty_r L^q_t L^p_\sigma(B_{A} \times I)} \lesssim \frac{1}{(R-\varrho)^{\frac{1}{\kappa}}} \norm{b}_{L^\kappa_r L^q_t L^p_\sigma((B_R \setminus B_\varrho) \times I)} \end{equation} where $B_A = \cup_{r \in A} \partial B_r$. Now we choose $q_2/2 = q'$, $\beta_2 = q_2$, and $\gamma_2 \in [2,2^*_{n-1})$ satisfying~\eqref{eq:q2belonging} and \begin{equation} \langlebel{eq:thingtounfold2} \left( \frac{3}{q} + \frac{n-1}{p} \right) + 2 \left( \frac{3}{q_2} + \frac{n-1}{\gamma_2} \right) = n + 2. \end{equation} Unfolding~\eqref{eq:thingtounfold2}, we discover $\gamma_2/2 = p'$. This allows us to compute $\theta_2$ according to~\eqref{eq:computetheta2}. Moreover, since $q_2 = \beta_2$ in~\eqref{eq:q2belonging}, we have $L^{q_2}_t L^{\beta_2}_r L^{\gamma_2}_\sigma = L^{\beta_2}_r L^{q_2}_t L^{\gamma_2}_\sigma$. 
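For completeness, the Chebyshev step behind~\eqref{eq:postcheby} can be spelled out as follows (a standard argument): the exceptional set \begin{equation} E := \Big\{ r \in (\varrho,R) : \big\lVert b|_{I \times \partial B_r} \big\rVert_{L^q_t L^p_\sigma}^\kappa > \frac{2}{R-\varrho} \int_\varrho^R \big\lVert b|_{I \times \partial B_s} \big\rVert_{L^q_t L^p_\sigma}^\kappa \, ds \Big\} \end{equation} has measure $|E| < (R-\varrho)/2$, so $A := (\varrho,R) \setminus E$ satisfies $|A| \geq (R-\varrho)/2$, and raising the bound on the complement of $E$ to the power $1/\kappa$ gives~\eqref{eq:postcheby} with implied constant $2^{1/\kappa}$.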
Then \begin{equation} \langlebel{eq:computingforcondition2} \begin{aligned} & \left| \frac{1}{|A|} \int_{}\kern-.34em \int_{B_A \times I} \frac{|u|^2}{2} b \cdot n \, dx \, dt \right| \leq \frac{C}{R-\varrho} \| b \|_{L^\infty_r L^q_t L^{p}_\sigma} (R - \varrho)^{1 - \frac{2}{\beta_2}} \| u \|_{L^{\beta_2}_r L^{q_2}_t L^{\gamma_2}_\sigma}^2 \\ &\quad \overset{\eqref{eq:uinterpineq}}{\leq} \frac{C}{(R-\varrho)^{\frac{1}{\kappa} + \frac{2}{\beta_2}} }\norm{b}_{L^\kappa_r L^q_t L^p_\sigma((B_R \setminus B_\varrho) \times I)} \| u \|_{L^2}^{2 \theta_2} (\| u \|_{L^2} + \| \nabla u \|_{L^2} + \| u \|_{L^\infty_t L^2_x})^{2(1-\theta_2)} \\ &\quad\quad \leq C \frac{\norm{b}_{L^\kappa_r L^q_t L^p_\sigma((B_1 \setminus B_{1/2}) \times I)}^{\frac{1}{\theta_2}}}{(R-\varrho)^{\frac{1}{\theta_2} (\frac{1}{\kappa} + \frac{2}{\beta_2})}} \| u \|_{L^2}^2 + \frac{1}{4} ( \| u \|_{L^2}^2 + \| \nabla u \|_{L^2}^2 + \| u \|_{L^\infty_t L^2_x}^2 ). \end{aligned} \end{equation} This completes the proof of~\eqref{eq:fbc}. \textit{4. Dimension $n=2$}. The Sobolev embedding~\eqref{eq:sobolevembeddingspheres} on the sphere bounds instead \begin{equation*} \| u \|_{L^2_t L^2_r C^{1/2}_\sigma} \lesssim \| u \|_{L^2} + \| \nabla u \|_{L^2}, \end{equation*} so we must adjust the proof of the interpolation~\eqref{eq:uinterpineq}. After the initial interpolation step between $L^2_t L^{2^*_n}_x$, $L^\infty_t L^2_x$ and $L^2_{t,x}$ (where $2^*_n$ is now any large finite number), we apply the following Gagliardo-Nirenberg inequality on spheres: \begin{equation} \langlebel{eq:gagnironsphere} \| u \|_{L^q(\partial B_r)} \lesssim \| u \|_{L^p(\partial B_r)}^\theta \| u \|_{C^{1/2}(\partial B_r)}^{1-\theta}, \quad - \frac{n-1}{q} = \theta \left( - \frac{n-1}{p} \right) + \frac{1-\theta }{2}, \end{equation} where $p,q \in [1,+\infty]$, $\theta \in (0,1]$, and $r \in [1/2,1]$. This allows us to recover~\eqref{eq:uinterpineq}, where now $\gamma_2 = +\infty$ is also allowed, and complete the proof.
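For orientation, a quick check of the dimensional relation in~\eqref{eq:gagnironsphere}: taking $p = 2$ and $q = +\infty$ (recall that the spheres are $(n-1)$-dimensional, with $n = 2$ here), the relation reads $0 = -\frac{\theta}{2} + \frac{1-\theta}{2}$, so $\theta = \frac{1}{2}$ and~\eqref{eq:gagnironsphere} reduces to the familiar Agmon-type bound \begin{equation} \| u \|_{L^\infty} \lesssim \| u \|_{L^2}^{1/2} \| u \|_{C^{1/2}}^{1/2}. \end{equation}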
To prove~\eqref{eq:gagnironsphere}, we first use local coordinates on the sphere and a partition of unity\footnote{Alternatively, since $n=2$, we could argue on the flat torus without a partition of unity. The argument we present here is more general.} to reduce to functions $f$ on $\mathbb{R}^{n-1}$. Next, we use that $L^p \subset B^{-\frac{n-1}{p}}_{\infty,\infty}$, $C^{1/2} = B^{1/2}_{\infty,\infty}$, and real interpolation \begin{equation} [B^{-\frac{n-1}{p}}_{\infty,\infty},B^{1/2}_{\infty,\infty}]_{\theta,1} = B^0_{q,1} \subset L^q \end{equation} to demonstrate \begin{equation} \| f \|_{L^q} \lesssim \| f \|_{L^p}^\theta \| f \|_{C^{1/2}}^{1-\theta} \lesssim \varepsilon^{\frac{1}{\theta}} \| f \|_{L^p} + \varepsilon^{-\frac{1}{1-\theta}} \| f \|_{C^{1/2}}. \end{equation} We piece together $u$ from the functions $f$ and optimize in $\varepsilon$ to obtain~\eqref{eq:gagnironsphere}. \end{proof} \begin{remark}[On $\tilde{\text{FBC}}$] \langlebel{rmk:trackingparametervarepsilon0} For pointwise upper bounds on fundamental solutions in Section~\ref{sec:upperboundsonfundamentalsolutions}, we require a global, revised form boundedness condition~\eqref{eq12.33}, in which we allow $\varepsilon$ to vary in $(0,+\infty)$. We refine~\eqref{eq:computingforcondition1} by keeping track of the $\varepsilon$ dependence in Young's inequality: \begin{equation} \begin{aligned} & \left| \frac{1}{|A|} \int_{}\kern-.34em \int_{B_A \times I} \frac{|u|^2}{2} b \cdot n \, dx \, dt \right| \leq \frac{C \| b \|_{L^q_t L^{\beta}_r L^{\gamma}_\sigma((B_R \setminus B_\varrho) \times I)}^{\frac{1}{\theta_2}} }{\varepsilon^{\frac{1-\theta_2}{\theta_2}} (R-\varrho)^{\frac{1}{\theta_2}}} \| u \|_{L^2}^2 + \varepsilon \left( \| u \|_{L^2}^2 + \| \nabla u \|_{L^2}^2 + \| u \|_{L^\infty_t L^2_x}^2 \right). \end{aligned} \end{equation} A similar refinement holds in~\eqref{eq:computingforcondition2}.
We require these refinements in the justification of Lemma~\ref{lem:verifyingtildefbc}. \end{remark} \subsection{Proof of local boundedness} To begin, we prove Caccioppoli's inequality: \begin{lemma}[Caccioppoli inequality] \langlebel{lem:cacciopoli} Under the hypotheses of Theorem~\ref{thm:localboundedness}, \begin{equation} \langlebel{eq:cacciopoliinequality} \sup_{t \in (\tau,0)} \int_{B_\varrho} |\theta(x,t)|^2 \, dx + \int_{}\kern-.34em \int_{Q_{\varrho,\tau}} |\nabla \theta|^2 \, dx\, dt \lesssim_{N,\alpha,\varepsilon} \mathbf{C} \int_{}\kern-.34em \int_{Q_{R,T} \setminus Q_{\varrho,\tau}} |\theta|^2 \, dx\, dt. \end{equation} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:cacciopoli}] Let $\eta \in C^\infty_0(T,+\infty)$ satisfy $0 \leq \eta \leq 1$ on $\mathbb{R}$, $\eta \equiv 1$ on $(\tau,+\infty)$ and $0 \leq d\eta/dt \lesssim 1/(\tau-T)$. Let $r \in (\varrho,R)$ and $t \in (\tau,0)$. To begin, we multiply the subsolution inequality by $\theta \eta^2$ and integrate over $B_r \times (T,t)$: \begin{equation} \begin{aligned} &\frac{1}{2} \int_{B_r} |\theta(x,t)|^2 \, dx + \int_{}\kern-.34em \int_{B_r \times (\tau,t)} |\nabla \theta|^2 \, dx\, ds \\ &\quad \leq \int_{}\kern-.34em \int_{B_R \times (T,t)} |\theta|^2 \frac{d\eta}{ds} \eta \, dx\, ds + \int_{}\kern-.34em \int_{\partial B_r \times (T,t)} \Big(\frac{d\theta}{dn} \theta \eta^2 - \frac{\theta^2}{2} (b \cdot n) \eta^2 \Big)\, d\sigma \, ds.
\end{aligned} \end{equation} Next, we average in the $r$ variable over the set of `good slices', $A = A(\varrho,R,(T,t),\theta \eta)$, which was introduced in the statement of~\eqref{eq:fbc}: \begin{equation} \begin{aligned} &\frac{1}{2} \int_{B_\varrho} |\theta(x,t)|^2 \, dx + \int_{}\kern-.34em \int_{B_\varrho \times (\tau,t)} |\nabla \theta|^2 \, dx\, ds \\ &\quad \leq \frac{C}{\tau-T} \int_{}\kern-.34em \int_{B_R \times (T,\tau)} |\theta|^2 \, dx\, ds + \frac{1}{|A|} \int_{}\kern-.34em \int_{B_A \times (T,t)}\Big( \frac{d\theta}{dn} \theta \eta^2 - \frac{\theta^2}{2} (b \cdot n) \eta^2\Big) \, dx\, ds, \end{aligned} \end{equation} where $B_A = \cup_{r \in A} \partial B_{r}$. Let us estimate the term containing $d\theta/dn$: \begin{equation} \begin{aligned} &\frac{1}{|A|} \int_{}\kern-.34em \int_{B_A \times (T,t)} \frac{d\theta}{dn} \theta \eta^2 \, dx\, ds \\ &\quad \leq \frac{1}{\delta^2 (R-\varrho)^2} \int_{}\kern-.34em \int_{Q_{R,T} \setminus Q_{\varrho,\tau}} |\theta|^2 \, dx\, ds + \int_{}\kern-.34em \int_{Q_{R,T} \setminus Q_{\varrho,\tau}} |\nabla \theta|^2 \, dx\, ds. \end{aligned} \end{equation} To estimate the term containing $b$, we use~\eqref{eq:fbc} with $u=\theta \eta$: \begin{equation} \begin{aligned} &- \frac{1}{|A|} \int_{}\kern-.34em \int_{B_A \times (T,t)} \frac{\theta^2}{2} (b \cdot n) \eta^2 \, dx\, ds \leq \frac{MR_0^\alpha}{\delta^\alpha R_0^2 (R-\varrho)^\alpha} \int_{}\kern-.34em \int_{Q_{R,T} \setminus Q_{\varrho,\tau}} |\theta|^2 \, dx\, ds\\ &\quad + N \int_{}\kern-.34em \int_{Q_{R,T} \setminus Q_{\varrho,\tau}} |\nabla \theta|^2 \, dx\, ds + \varepsilon \sup_{s\in(T,t)} \int_{B_R} |\theta(x,s)|^2 \, dx.
\end{aligned} \end{equation} Combining everything and taking the supremum over $t \in (\tau,0)$, we obtain \begin{equation} \langlebel{eq:combiningeverything} \begin{aligned} &\frac{1}{2} \sup_{t \in (\tau,0)} \int_{B_\varrho} |\theta(x,t)|^2 \, dx + \int_{}\kern-.34em \int_{Q_{\varrho,\tau}} |\nabla \theta|^2 \, dx\, dt \leq C \times \mathbf{C} \int_{}\kern-.34em \int_{Q_{R,T} \setminus Q_{\varrho,\tau} } |\theta|^2 \, dx\, dt \\ &\quad\quad + (1 + N) \int_{}\kern-.34em \int_{Q_{R,T} \setminus Q_{\varrho,\tau}} |\nabla \theta|^2 \, dx\, dt + \varepsilon \sup_{t \in (T,0)} \int_{B_R} |\theta(x,t)|^2 \, dx. \end{aligned} \end{equation} By Widman's hole-filling trick (add $(1 + N) \int_{}\kern-.34em \int_{Q_{\varrho,\tau}} |\nabla \theta|^2 \, dx\, dt$ to both sides of~\eqref{eq:combiningeverything} and divide by $N+2$), the constant $\gamma := \max \{ (N+1)/(N+2), 2\varepsilon \} \in (0,1)$ satisfies \begin{equation} \begin{aligned} &\frac{1}{2(N+2)} \sup_{t \in (\tau,0)} \int_{B_\varrho} |\theta(x,t)|^2 \, dx + \int_{}\kern-.34em \int_{Q_{\varrho,\tau}} |\nabla \theta|^2 \, dx\, dt \leq C(N) \times \mathbf{C} \int_{}\kern-.34em \int_{Q_{R,T} \setminus Q_{\varrho,\tau} } |\theta|^2 \, dx\, dt \\ &\quad\quad + \gamma \int_{}\kern-.34em \int_{Q_{R,T}} |\nabla \theta|^2 \, dx\, dt + \frac{\gamma}{2(N+2)} \sup_{t \in (T,0)} \int_{B_R} |\theta(x,t)|^2 \, dx. \end{aligned} \end{equation} To remove the extra terms on the RHS, we use a standard iteration argument on a sequence of scales (progressing `outward') $\varrho_0 = \varrho$, $\varrho_{k+1} = \varrho + (1-\langlembda^{k+1}) (R-\varrho)$, $R_k =\varrho_{k+1}$, $\tau_0 = \tau$, $\tau_{k+1} = \tau + (1-\langlembda^{2k+2}) (T-\tau)$, $T_k = \tau_{k+1}$, $k=0,1,2,\hdots$, where $0 < \langlembda < 1$ is defined by the relation $\langlembda^{\max(\alpha,2)} = \gamma/2$. See~\cite[p. 191, Lemma 6.1]{giustibook}, for example. This gives the desired Caccioppoli inequality. \end{proof} Next, we require a simple corollary: \begin{corollary}[Interpolation inequality] \langlebel{cor:interpcor} Let $\chi = 1+2/n$.
Then \begin{equation} \langlebel{eq:interpinequality} \left( \int_{}\kern-.34em \int_{Q_{\varrho,\tau}} |\theta|^{2\chi} \, dx\, dt \right)^{\frac{1}{\chi}} \lesssim_{N,\alpha,\varepsilon} \mathbf{C} \int_{}\kern-.34em \int_{Q_{R,T} \setminus Q_{\varrho,\tau}} |\theta|^2 \, dx\, dt. \end{equation} \end{corollary} \begin{proof} Let $0 \leq \varphi \in C^\infty_0(B_{(R+\varrho)/2})$ satisfy $\varphi \equiv 1$ on $B_\varrho$ and $|\nabla \varphi| \lesssim 1/(R-\varrho)$. Using~\eqref{eq:cacciopoliinequality} at an intermediate scale, we find \begin{equation} \langlebel{eq:cacciopoliinequalitydos} \begin{aligned} \sup_{t \in (\tau,0)} \int_{B_\varrho} |\theta(x,t)\varphi|^2 \, dx + \int_{}\kern-.34em \int_{Q_{\varrho,\tau}} |\nabla (\theta\varphi)|^2 \, dx\, dt \lesssim_{N,\alpha,\varepsilon} \mathbf{C} \int_{}\kern-.34em \int_{Q_{R,T} \setminus Q_{\varrho,\tau}} |\theta|^2 \, dx\, dt. \end{aligned} \end{equation} Then~\eqref{eq:interpinequality} follows from the Gagliardo-Nirenberg inequality on the whole space. \end{proof} We are now ready to use Moser's iteration: \begin{proof}[Proof of local boundedness] Let $\beta_k := \chi^k$, where $k=0,1,2,\hdots$. A standard computation implies that $\theta^{\beta_k}$ is also a non-negative Lipschitz subsolution. Hence, it satisfies the interpolation inequality~\eqref{eq:interpinequality} with $R_k = \varrho+2^{-k}(R-\varrho)$, $\varrho_k = R_{k+1}$, $T_k = \tau-2^{-2k}(\tau-T)$, $\tau_k=T_{k+1}$, $k=0,1,2, \hdots$ (iterating inward). In other words, \begin{equation} \langlebel{eq:iterationineqforfk} \norm{\theta^{2\beta_{k+1}}}_{L^1(Q_{\varrho_k,\tau_k})}^{\frac{1}{\chi}} \lesssim_{N,\alpha,\varepsilon} \mathbf{C}(\varrho_k,\tau_k,R_k,T_k,M,\delta,\alpha) \norm{\theta^{2\beta_k}}_{L^1(Q_{R_k,T_k} \setminus Q_{\varrho_k,\tau_k})}. \end{equation} We may expand the domain of integration on the RHS as necessary.
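The following elementary scale comparison will be used when iterating: since $R_k - \varrho_k = 2^{-k-1}(R-\varrho)$ and $\tau_k - T_k = 3 \cdot 2^{-2k-2}(\tau-T)$, the definition~\eqref{eq:Cdef} gives \begin{equation} \mathbf{C}(\varrho_k,\tau_k,R_k,T_k,M,\delta,\alpha) \lesssim_\alpha 2^{\max(2,\alpha)k}\, \mathbf{C}(\varrho,\tau,R,T,M,\delta,\alpha). \end{equation}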
Define \begin{equation} M_0 := \norm{\theta^2}_{L^1(Q_{R_0,T_0} \setminus Q_{\varrho_0,\tau_0})} \end{equation} and \begin{equation} M_{k+1} := \norm{\theta^2}_{L^{\beta_{k+1}}(Q_{\varrho_k,\tau_k})} = \norm{\theta^{2\beta_{k+1}}}_{L^1(Q_{\varrho_k,\tau_k})}^{\frac{1}{\beta_{k+1}}}, \quad k=0,1,2,\hdots \end{equation} Raising~\eqref{eq:iterationineqforfk} to $1/\beta_k$ and using the definition~\eqref{eq:Cdef} of $\mathbf{C}$, we obtain \begin{equation} M_{k+1} \leq C(N,\alpha,\varepsilon)^{\frac{1}{\beta_k}} 2^{\frac{\max(2,\alpha) k}{\beta_k}} \mathbf{C}(\varrho,\tau,R,T,M,\delta,\alpha)^{\frac{1}{\beta_k}} M_k. \end{equation} Iterating, we have \begin{equation} M_{k+1} \leq C(N,\alpha,\varepsilon)^{\sum_{j=0}^k \frac{1}{\chi^j}} 2^{\sum_{j=0}^k \frac{\max(2,\alpha) j}{\chi^j}} \mathbf{C}(\varrho,\tau,R,T,M,\delta,\alpha)^{\sum_{j=0}^k \frac{1}{\chi^j}} M_0. \end{equation} Finally, we send $k \to +\infty$ and substitute $\sum_{j \geq 0} 1/\chi^j = (n+2)/2$ to obtain \begin{equation} \langlebel{eq:intermediatelocalbddness} \norm{\theta}_{L^\infty(Q_{\varrho,\tau})} \lesssim_{N,\alpha,\varepsilon} \mathbf{C}^{\frac{n+2}{4}} \norm{\theta}_{L^2(Q_{R,T} \setminus Q_{\varrho,\tau})}. \end{equation} We now demonstrate how to replace $L^2$ on the RHS of~\eqref{eq:intermediatelocalbddness} with $L^\gamma$ ($0 < \gamma < 2$). To begin, use the interpolation inequality $\norm{\theta}_{L^2} \leq \norm{\theta}_{L^\gamma}^{\gamma/2} \norm{\theta}_{L^\infty}^{1-\gamma/2}$ in~\eqref{eq:intermediatelocalbddness} and split the product using Young's inequality. This gives \begin{equation} \langlebel{eq:loweringthenorm} \norm{\theta}_{L^\infty(Q_{\varrho,\tau})} \leq C(N,\alpha,\varepsilon,\gamma) \mathbf{C}^{\frac{n+2}{2\gamma}} \norm{\theta}_{L^\gamma(Q_{R,T} \setminus Q_{\varrho,\tau})} + \frac{1}{2} \norm{\theta}_{L^\infty(Q_{R,T})}.
\end{equation} The second term on the RHS is removed by iterating outward along a sequence of scales, as in the proof of the Caccioppoli inequality in Lemma~\ref{lem:cacciopoli}. \end{proof} \begin{remark}[Elliptic case] The analogous elliptic result is \begin{equation} \sup_{B_\varrho} \theta \lesssim_{N,\alpha,\varepsilon,\gamma} \mathbf{C}^{\frac{n}{2\gamma}} \norm{\theta}_{L^\gamma(B_R \setminus B_\varrho)}, \end{equation} where $\mathbf{C}(\varrho,R,M,\delta,\alpha) = 1/[\delta^2 (R-\varrho)^2] + MR_0^{\alpha-2}/[\delta^\alpha (R-\varrho)^\alpha]$. The proof is the same except that $\chi = n/(n-2)$ and $\sum 1/\chi^j = n/2$. \end{remark} \subsection{Proof of Harnack inequality} In this subsection, $\theta$ is a strictly positive Lipschitz solution.\footnote{There is no loss of generality if we replace $\theta$ by $\theta + \kappa$ and let $\kappa \to 0^+$.} Then $\log \theta$ is well defined. Let $0 \leq \partialsi \in C^\infty_0(B)$ be a radially decreasing function satisfying $\partialsi \equiv 1$ on $B_{3/4}$. We use the notation \begin{equation} f_{\rm avg} = \frac{1}{{\rm Vol}} \int_{\mathbb{R}^n} f \partialsi^2 \, dx, \quad {\rm Vol} = \int_{\mathbb{R}^n} \partialsi^2 \, dx, \end{equation} whenever $f \in L^1_{\rm loc}(B)$. Let \begin{equation} \langlebel{eq:Kdef} \mathbf{K} = \exp \left( \left( \log \theta(\cdot,0) \right)_{\rm avg} \right), \end{equation} whose importance will be made clear in the proof of Lemma~\ref{lem:universal}. Define \begin{equation} v = \log \left( \frac{\theta}{\mathbf{K}} \right). \end{equation} Then $v(\cdot,0)_{\rm avg} = 0$. A simple computation yields \begin{equation} \langlebel{eq:supsol} |\nabla v|^2 = \partial_t v - \Delta v + b \cdot \nabla v. \end{equation} That is, $v$ is itself a supersolution, though it may not be positive. We crucially exploit that $|\nabla v|^2$ appears on the LHS of~\eqref{eq:supsol}.
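For the reader's convenience, here is the computation behind~\eqref{eq:supsol}: since $\partial_t \theta - \Delta \theta + b \cdot \nabla \theta = 0$ and $v = \log \theta - \log \mathbf{K}$, \begin{equation} \partial_t v - \Delta v + b \cdot \nabla v = \frac{\partial_t \theta - \Delta \theta + b \cdot \nabla \theta}{\theta} + \frac{|\nabla \theta|^2}{\theta^2} = |\nabla v|^2, \end{equation} where we used $\Delta v = \frac{\Delta \theta}{\theta} - \frac{|\nabla \theta|^2}{\theta^2}$.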
First, we require the following decomposition of the drift: \begin{lemma}[Decomposition of drift] \langlebel{lem:decompositionofdrift} We have the following decomposition on $B_{3/4} \times (-T^*,T^*)$: \begin{equation} b = b_1 + b_2, \quad b_1 = - \div a, \quad \div b_2 = 0, \end{equation} where $a : B_{3/4} \times (-T^*,T^*) \to \mathbb{R}^{n\times n}_{\rm anti}$ is antisymmetric and \begin{equation} \langlebel{eq:decompest} \| a \|_{L^2(B_{3/4} \times (-T^*,T^*))} + \| b_2 \|_{L^2(B_{3/4} \times (-T^*,T^*))} \lesssim \| b \|_{L^2_t H^{-1}_x(Q^*)}, \end{equation} where $Q^* = B_1 \times (-T^*,T^*)$. \end{lemma} \begin{proof} Let $\partialhi \in C^\infty_0(B_1)$ with $\partialhi \equiv 1$ on $B_{15/16}$. Let $\tilde{b}=\partialhi b$. Hence, $\| \tilde{b} \|_{L^2_t H^{-1}_x(Q^*)} \lesssim \| b \|_{L^2_t H^{-1}_x(Q^*)}$. We may decompose $\tilde{b}(\cdot,t) \in H^{-1}(\mathbb{R}^n)$ into $\tilde{b}_1(\cdot,t) \in \dot H^{-1}(\mathbb{R}^n)$, whose Fourier transform is supported outside of $B_2$, and $\tilde{b}_2(\cdot,t) \in L^2(\mathbb{R}^n)$. Define \begin{equation} a_{ij} = \Delta^{-1} ( - \partial_j \tilde{b}_{1i} + \partial_i \tilde{b}_{1j} ), \quad g = \Delta^{-1} ( - \div \tilde{b}_1 ). \end{equation} This amounts to performing the Hodge decomposition in $\mathbb{R}^n$ `by hand'.\footnote{We are simply exploiting the identity $\Delta = dd^* + d^*d$ on differential $k$-forms, up to a sign convention, for differential $1$-forms $\cong$ vector fields.} Clearly, $a$ is antisymmetric, and we have the decomposition \begin{equation} - \tilde{b}_1 = \div a + \nabla g \end{equation} and the estimates \begin{equation} \| a(\cdot,t) \|_{L^2(\mathbb{R}^n)} + \| g(\cdot,t) \|_{L^2(\mathbb{R}^n)} \lesssim \| \tilde{b}_1(\cdot,t) \|_{\dot H^{-1}(\mathbb{R}^n)}. 
\end{equation} Similarly, we decompose \begin{equation} \tilde{b}_2 = \mathbb{P} \tilde{b}_2 + \mathbb{Q} \tilde{b}_2, \end{equation} where $\mathbb{P}$ is the Leray (orthogonal) projector onto divergence-free fields, and $\mathbb{Q} = I - \mathbb{P}$ is the orthogonal projector onto gradient fields. We denote $\mathbb{Q} \tilde{b}_2 = \nabla f$. Since $\tilde{b} = \tilde{b}_1 + \tilde{b}_2 = - \div a + \mathbb{P} \tilde{b}_2 + \nabla (f - g)$ is divergence free in $B_{7/8}$ on time slices, we have \begin{equation} \Delta (f- g)(\cdot,t) = 0 \text{ in } B_{7/8}, \end{equation} and by elliptic regularity, for all $k \geq 0$, \begin{equation} \| \nabla (f - g)(\cdot,t) \|_{H^k(B_{3/4})} \lesssim_k \| \tilde{b}_1(\cdot,t) \|_{\dot H^{-1}(B_{7/8})} + \| \tilde{b}_2(\cdot,t) \|_{L^2(B_{7/8})}. \end{equation} Finally, we define \begin{equation} b_2 = \mathbb{P} \tilde{b}_2 + \nabla (f - g) \in L^2_{t,x}(B_{3/4} \times (-T^*,T^*)), \end{equation} which satisfies the claimed estimates and is divergence free in $B_{7/8} \times (-T^*,T^*)$. \end{proof} We now proceed with the proof of Harnack's inequality: \begin{lemma} \langlebel{lem:universal} For all non-zero $t \in [-T^*,T^*]$, we write $I_t = [0,t]$ if $t > 0$ and $I_t = [t,0]$ if $t < 0$. Then \begin{equation} \langlebel{eq:universalest} -\sgn(t) v(\cdot,t)_{\rm avg} + \int_{I_t} \left(|\nabla v|^2(\cdot,s) \right)_{\rm avg} \, ds \lesssim |t| + \norm{b}_{L^2_t H^{-1}_x(B \times I_t)}^2. \end{equation} \end{lemma} \begin{proof} We multiply~\eqref{eq:supsol} by $\partialsi^2$ and integrate over $B \times I_t$: \begin{equation} \langlebel{eq:harnack1} \sgn(t) \int_B \left( v(x,0) - v(x,t) \right) \partialsi^2 \, dx + \int_{}\kern-.34em \int_{B \times I_t} |\nabla v|^2 \partialsi^2 \, dx\, ds \leq \int_{}\kern-.34em \int_{B \times I_t} 2 \partialsi \nabla \partialsi \cdot \nabla v + (b \cdot \nabla v) \partialsi^2 \, dx\, ds. \end{equation} By~\eqref{eq:Kdef}, $\int_{\mathbb{R}^n} v(x,0) \partialsi^2 \, dx = 0$. 
The first term on the RHS is easily estimated: \begin{equation} \int_{}\kern-.34em \int_{B \times I_t} 2 \partialsi \nabla \partialsi \cdot \nabla v \, dx\, ds \leq \frac{1}{4} \int_{}\kern-.34em \int_{B \times I_t} |\nabla v|^2 \partialsi^2 \, dx\, ds + C|t|. \end{equation} To estimate the term containing $b$, we require the drift decomposition $b = b_1 + b_2$ from Lemma~\ref{lem:decompositionofdrift}. Then \begin{equation} \langlebel{eq:harnack3} \int_{}\kern-.34em \int_{B \times I_t} ( b_1 \cdot \nabla v) \partialsi^2 \, dx \,ds = \int_{}\kern-.34em \int_{B \times I_t} 2\partialsi a(\nabla \partialsi,\nabla v) \, dx\, ds \leq \frac{1}{4} \int_{}\kern-.34em \int_{B \times I_t} |\nabla v|^2 \partialsi^2 \, dx\, ds + C \norm{a}_{L^2(B \times I_t)}^2 \end{equation} and \begin{equation} \langlebel{eq:harnack4} \int_{}\kern-.34em \int_{B \times I_t} ( b_2 \cdot \nabla v) \partialsi^2 \, dx \,ds \leq \frac{1}{4} \int_{}\kern-.34em \int_{B \times I_t} |\nabla v|^2 \partialsi^2 \, dx\, ds + C \norm{b_2}_{L^2(B \times I_t)}^2. \end{equation} Recall the estimate~\eqref{eq:decompest} from the decomposition. Combining~\eqref{eq:harnack1}--\eqref{eq:harnack4} and dividing by ${\rm Vol}$ gives~\eqref{eq:universalest}. \end{proof} In the following, we write $v = v_+ - v_-$, where $v_+, v_- \geq 0$. We also use the notation \begin{equation} A^+ = \int_0^{T^*} \| b(\cdot,t) \|_{H^{-1}(B)}^2 \, dt, \quad A^- = \int_{-T^*}^0 \| b(\cdot,t) \|_{H^{-1}(B)}^2 \, dt. \end{equation} \begin{lemma}[Weak--$L^1$ estimates] \langlebel{lem:weakl1est} With the above notation, we have \begin{equation} \norm{v_+}_{L^{1,\infty}_{t,x}(B_{3/4} \times (-T^*,0))} \lesssim 1+ T^*(T^* + A^-) \end{equation} and \begin{equation} \norm{v_-}_{L^{1,\infty}_{t,x}(B_{3/4} \times (0,T^*))} \lesssim 1+T^*(T^* + A^+). \end{equation} \end{lemma} \begin{proof} By~\eqref{eq:universalest} and a weighted Poincar{\'e} inequality \cite[Lemma 3, p.
120]{moserharnack}, \begin{equation} \langlebel{eq:wewishtorealize} -\sgn(t) v_{\rm avg}(t) + \frac{1}{C_1} \int_{I_t} \left( |v-v_{\rm avg}|^2 \right)_{\rm avg} \, ds \leq C_0 \int_{I_t} \left( 1 + \| b(\cdot,s) \|_{H^{-1}(B)}^2 \right) ds, \end{equation} where $C_0 > 0$ is the implied constant in~\eqref{eq:universalest}. In the following, we focus on the case $t \in [-T^*,0]$. We use~\eqref{eq:wewishtorealize} to obtain a sub/supersolution inequality corresponding to a quadratic ODE. First, we remove the forcing in the ODE by defining \begin{equation} \langlebel{eq:pdef} p(x,t) := v(x,t) - C_0 \underbrace{\int_{I_t} \left( 1 + \| b(\cdot,s) \|_{H^{-1}(B)}^2 \right) ds}_{\leq T^* + A^-}. \end{equation} Then~\eqref{eq:wewishtorealize} becomes \begin{equation} p_{\rm avg}(t) + \frac{1}{C_1} \int_{I_t} \left( |p-p_{\rm avg}|^2 \right)_{\rm avg} \, ds \leq 0. \end{equation} Let us introduce the super-level sets, whose measures $\eta$ appear as a coefficient in the ODE: \begin{equation} \langlebel{eq:etadef} \eta(\mu,t) := \left|\{ x\in B_{3/4} : p(x,t) > \mu\}\right|, \quad \mu > 0. \end{equation} Since $p_{\rm avg} \leq 0$, we have that $p(x,t) - p_{\rm avg}(t) > \mu - p_{\rm avg}(t) > 0$ whenever $p(x,t) > \mu$. Then \begin{equation} \langlebel{eq:rephrase} p_{\rm avg}(t) + \frac{1}{C_1 {\rm Vol}} \int_{I_t} \eta(\mu,s) (\mu-p_{\rm avg})^2 \, ds \leq 0. \end{equation} It is convenient to rephrase~\eqref{eq:rephrase} in terms of a positive function evolving forward in time: $\bar{p}(t) = -p_{\rm avg}(-t)$ with $t \in [0,T^*]$. Then~\eqref{eq:rephrase} becomes \begin{equation} \bar{p}(t) \geq \frac{1}{C_1 {\rm Vol}} \int_0^t \eta(\mu,-s) (\mu+\bar{p}(s))^2 \, ds. \end{equation} The above inequality means that $\bar{p}$ is a supersolution of the quadratic ODE \begin{equation} \langlebel{eq:qode} \dot q = \frac{1}{C_1 {\rm Vol}} \times \eta(\mu,-t) (\mu+q)^2 \end{equation} with $q(0) = 0$. The above scalar ODE has a comparison principle.
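In fact,~\eqref{eq:qode} can be integrated explicitly (a one-line computation, recorded here since we use its consequences below): for a continuous coefficient $h \geq 0$, the solution of $\dot q = h(t)(\mu+q)^2$ with $q(0) = 0$ is \begin{equation} q(t) = \frac{\mu}{1 - \mu H(t)} - \mu, \quad H(t) = \int_0^t h(s) \, ds, \end{equation} which exists as long as $\mu H(t) < 1$ and blows up as $\mu H(t) \uparrow 1$.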
\emph{A priori}, since~\eqref{eq:qode} is quadratic, its solutions may quickly blow up depending on the size of $\eta(\mu,\cdot)$ and $\mu$. However, because $\bar{p}$ lies above the solution $q$, $q$ does not blow up, and we obtain a bound for the density $\eta(\mu,\cdot)$ in the following way. After separating variables in~\eqref{eq:qode}, we obtain \begin{equation} \frac{1}{C_1 {\rm Vol}} \int_0^{T^*} \eta(\mu,-s) \, ds = \frac{1}{\mu} - \frac{1}{\mu+q(T^*)} \leq \frac{1}{\mu}, \end{equation} since $q \geq 0$. That is, \begin{equation} \norm{p_+}_{L^{1,\infty}(B_{3/4} \times (-T^*,0))} \lesssim 1. \end{equation} Finally, since $\norm{\cdot}_{L^{1,\infty}(B_{3/4} \times (-T^*,0))}$ is a quasi-norm and $v \leq p + C_0 (T^* + A^-)$ pointwise due to \eqref{eq:pdef}, we have \begin{equation} \begin{aligned} \norm{v_+}_{L^{1,\infty}(B_{3/4} \times (-T^*,0))} &\leq 2\norm{p_+}_{L^{1,\infty}(B_{3/4} \times (-T^*,0))} + 2 C_0 \norm{T^* + A^-}_{L^{1,\infty}(B_{3/4} \times (-T^*,0))} \\ &\lesssim 1 + T^*(T^* + A^-). \end{aligned} \end{equation} The proof for $t \in [0,T^*]$ is similar except that one uses sub-level sets in~\eqref{eq:etadef} with $\mu < 0$. \end{proof} We now require the following lemma of Moser~\cite{moserpointwise}, which we quote almost directly, and in which we denote by $Q(\varrho)$, $\varrho > 0$ any family of domains satisfying $Q(\varrho) \subset Q(r)$ for $0 < \varrho < r$. \begin{lemma}[Lemma~3 in~\cite{moserpointwise}] \langlebel{lem:moserlem} Let $m, \zeta, c_0$, $1/2 \leq \theta_0 < 1$ be positive constants, and let $w > 0$ be a continuous function defined in a neighborhood of $Q(1)$ for which \begin{equation} \langlebel{eq:moserlocalboundednessreq} \sup_{Q(\varrho)} w^p < \frac{c_0}{(r-\varrho)^m {\rm meas}(Q(1))} \int_{}\kern-.34em \int_{Q(r)} w^p \, dt \, dx \end{equation} for all $\varrho, r, p$ satisfying \begin{equation} \frac{1}{2} \leq \theta_0 \leq \varrho < r \leq 1, \quad 0 < p < \zeta^{-1}.
\end{equation} Moreover, let \begin{equation} \langlebel{eq:moserweakl1req} {\rm meas} \{ (x,t) \in Q(1) : \log w > \mu \} < \frac{c_0 \zeta}{\mu} {\rm meas}(Q(1)) \end{equation} for all $\mu > 0$. Then there exists a constant $\gamma = \gamma(\theta_0,m,c_0)$ such that \begin{equation} \langlebel{eq:moserlemconclusion} \sup_{Q(\theta_0)} w < \gamma^\zeta. \end{equation} \end{lemma} \begin{proof}[Proof of Harnack inequality] We apply Lemma~\ref{lem:moserlem} to $w = \theta / \mathbf{K}$ with $Q(\varrho) = B_\varrho \times (-T^* + 2\ell (1-\varrho),0)$ and $\theta_0 = 1/2$.\footnote{Technically, to satisfy the conditions in Lemma~\ref{lem:moserlem}, $w = \theta / \mathbf{K}$ should be extended arbitrarily to be continuous in a neighborhood of $Q(1)$.} Indeed, we recognize the requirement~\eqref{eq:moserlocalboundednessreq} as the local boundedness guaranteed by Theorem~\ref{thm:localboundedness} and~\eqref{eq:moserweakl1req} as the weak $L^1$ estimate from Lemma~\ref{lem:weakl1est}. This gives \begin{equation} \langlebel{eq:estimateone} \sup_{B_{1/2} \times (-T^* + \ell,0)} \frac{\theta}{\mathbf{K}} \lesssim 1. \end{equation} Here, we also suppress the dependence on the time lag $\ell$. Meanwhile, $v_- = \log_+ (\mathbf{K}/\theta)$ is a subsolution. Hence, \begin{equation} \norm{v_-}_{L^\infty(B_{1/2} \times (\ell,T^*))} \lesssim \norm{v_-}_{L^{1,\infty}(B_{3/4} \times (0,T^*))}. \end{equation} On the other hand, \begin{equation} \langlebel{eq:estimatetwo} \frac{\mathbf{K}}{\inf \theta} = \sup \frac{\mathbf{K}}{\theta} = \exp \left( \sup \log \frac{\mathbf{K}}{\theta} \right) \leq \exp \left( \sup v_- \right) \lesssim \exp \left( \norm{v_-}_{L^{1,\infty}} \right) \lesssim 1, \end{equation} where the $\inf$ and $\sup$ are taken on $B_{1/2} \times (\ell,T^*)$.
Combining~\eqref{eq:estimateone} and~\eqref{eq:estimatetwo}, we arrive at \begin{equation} \sup_{B_{1/2} \times (-T^*+\ell,0)} \theta \lesssim \mathbf{K} \lesssim \inf_{B_{1/2} \times (\ell,T^*)} \theta, \end{equation} as desired. \end{proof} \section{Bounded total speed} In this section, we prove the statements in Theorem~\ref{thm:lqlp} concerning the space $L^1_t L^\infty_x$. \begin{proposition}[Local boundedness] \label{pro:localbddbddtotal} Let $T \in (-\infty,0)$ and $\tau \in (T,0)$. Let $b \colon Q_{1,T} \to \mathbb{R}^n$ be a smooth divergence-free drift satisfying \begin{equation} \label{eq1.40} \norm{b}_{L^1_t L^\infty_x(Q_{1,T})} \leq 1/8. \end{equation} Let $\theta$ be a non-negative Lipschitz subsolution on $Q_{1,T}$. Then, for all $\gamma \in (0,2]$, we have \begin{equation} \label{eq:localbddnessforbddtotalspeed} \| \theta \|_{L^\infty(Q_{1/2,\tau})} \lesssim_{\gamma} \left(1 + \frac{1}{\tau-T} \right)^{\frac{n+2}{2\gamma}} \norm{\theta}_{L^\gamma(Q_{1,T})}. \end{equation} \end{proposition} \begin{proof} For smooth $\lambda : [T,0] \to (0,+\infty)$, we define $x = \lambda y$ and \begin{equation} \tilde{\theta}(y,t) = \theta(\lambda y,t). \end{equation} That is, $\tilde{\theta}$ is obtained by dynamically rescaling $\theta$ in space. The new PDE is \begin{equation} \label{eq:resultingeqnis} \partial_t \tilde{\theta} - \frac{1}{\lambda^2} \Delta_y \tilde{\theta} + \frac{1}{\lambda} \tilde{b} \cdot \nabla_y \tilde{\theta} \leq 0 \end{equation} where \begin{equation} \label{eq:tildebdef} \tilde{b}(y,t) = b(\lambda y,t) - \dot \lambda y. \end{equation} Choose $\lambda(T) = 1$, $\dot \lambda = - 2\| b(\cdot,t) \|_{L^\infty}$ when $t \in [T,0]$. Clearly, $3/4 \leq \lambda \leq 1$. Our picture is that $\tilde{\theta}$ dynamically `zooms in' on $\theta$.
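As a quick numerical sanity check on the claim $3/4 \le \lambda \le 1$ (a sketch, not part of the proof; the speed profile below is a hypothetical stand-in for $\|b(\cdot,t)\|_{L^\infty}$):

```python
# Sanity check (hypothetical profile): with ||b||_{L^1_t L^infty_x} <= 1/8,
# integrating lambda' = -2 ||b(.,t)||_{L^infty}, lambda(T) = 1, keeps lambda
# within [3/4, 1], since lambda only decreases by at most 2 * (1/8) = 1/4.
import numpy as np

T = -1.0
t = np.linspace(T, 0.0, 10_001)
speed = np.sin(np.pi * (t - T)) ** 2              # placeholder for ||b(.,t)||_inf
mass = np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t))
speed *= (1.0 / 8.0) / mass                       # normalize L^1_t norm to 1/8

increments = -2.0 * 0.5 * (speed[1:] + speed[:-1]) * np.diff(t)
lam = 1.0 + np.concatenate([[0.0], np.cumsum(increments)])
assert lam.max() <= 1.0 + 1e-12 and lam.min() >= 0.75 - 1e-9
```

The final value $\lambda(0) = 1 - 2\cdot(1/8) = 3/4$ is attained exactly when the full budget $\norm{b}_{L^1_t L^\infty_x} = 1/8$ is spent.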
In particular, using \eqref{eq1.40} and~\eqref{eq:tildebdef}, \begin{equation} \label{eq:tildebpointsoutward} \tilde{b}(\cdot,t) \cdot \frac{y}{|y|} \geq - \| b(\cdot,t) \|_{L^\infty} + 2 \| b(\cdot,t) \|_{L^\infty}|y| \geq 0 \text{ when } y \in B_{1} \setminus B_{1/2}, \end{equation} and \begin{equation} \div \tilde{b} = 2n \| b(\cdot,t) \|_{L^\infty} \geq 0. \end{equation} We now demonstrate Caccioppoli's inequality in the new variables. Let $3/4 \leq \varrho < R \leq 1$. Let $\varphi \in C^\infty_0(B_{R})$ be a radially symmetric and decreasing function satisfying $0 \leq \varphi \leq 1$ on $\mathbb{R}^n$, $\varphi \equiv 1$ on $B_\varrho$, and $|\nabla \varphi| \lesssim 1/(R - \varrho)$. Let $\eta \in C^\infty_0(T,+\infty)$ satisfy $0 \leq \eta \leq 1$ on $\mathbb{R}$, $\eta \equiv 1$ on $(\tau,0)$, and $|d\eta/dt| \lesssim 1/(\tau-T)$. Let $\Phi = \varphi^2 \eta$. We integrate Eq.~\eqref{eq:resultingeqnis} against $\tilde{\theta} \Phi$ on $B_{R} \times (T,t)$ for $t\in (\tau,0)$. Then \begin{equation} \begin{aligned} &\frac{1}{2} \int |\tilde{\theta}|^2(y,t) \Phi \, dy + \int_{}\kern-.34em \int \lambda^{-2} |\nabla \tilde{\theta}|^2 \Phi \, dy\, ds \\ &\quad \leq \frac{1}{2} \int_{}\kern-.34em \int |\tilde{\theta}|^2 \partial_t \Phi \, dy\, ds - \int_{}\kern-.34em \int \lambda^{-2} \tilde{\theta} \nabla \tilde{\theta} \cdot \nabla \Phi \, dy\, ds \\ &\quad\quad + \frac{1}{2} \int_{}\kern-.34em \int \lambda^{-1} \underbrace{\tilde{b} \cdot \nabla \Phi}_{\leq 0 \text{ by }\eqref{eq:tildebpointsoutward}} |\tilde{\theta}|^2 \, dy\, ds + \frac{1}{2} \int_{}\kern-.34em \int \lambda^{-1} \div \tilde{b} |\tilde{\theta}|^2 \Phi \, dy\, ds. \end{aligned} \end{equation} While $\div \tilde{b}$ has a disadvantageous sign, it simply acts as a bounded potential.
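The divergence identity for the rescaled drift can be checked symbolically; here is a small sketch in dimension $n = 2$ with a hypothetical stream-function drift (any divergence-free $b$ behaves the same way):

```python
# Symbolic check: for divergence-free b, the rescaled drift
# b_tilde(y, t) = b(lam * y, t) - lamdot * y satisfies
# div_y b_tilde = -n * lamdot, which equals 2n ||b(.,t)||_inf
# for the choice lamdot = -2 ||b(.,t)||_inf made in the proof.
import sympy as sp

y1, y2, lam, lamdot = sp.symbols('y1 y2 lam lamdot', real=True)
psi = sp.sin(y1) * sp.cos(2 * y2)                      # hypothetical stream function
b = sp.Matrix([sp.diff(psi, y2), -sp.diff(psi, y1)])   # div b = 0 in 2D
b_tilde = b.subs({y1: lam * y1, y2: lam * y2}, simultaneous=True) \
    - lamdot * sp.Matrix([y1, y2])
div_tilde = sp.diff(b_tilde[0], y1) + sp.diff(b_tilde[1], y2)
assert sp.simplify(div_tilde + 2 * lamdot) == 0        # div b_tilde = -2 * lamdot
```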
Simple manipulations give the Caccioppoli inequality: \begin{equation} \begin{aligned} &\sup_{t \in (\tau,0)} \int_{B_\varrho} |\tilde{\theta}|^2(y,t) \, dy + \int_{}\kern-.34em \int_{Q_{\varrho,\tau}} |\nabla \tilde{\theta}|^2 \varphi^2 \, dy\, ds \\ &\quad \lesssim \left( \frac{1}{\tau-T} + \frac{1}{(R-\varrho)^2} \right) \int_{}\kern-.34em \int_{Q_{R,T} \setminus Q_{\varrho,\tau}} |\tilde{\theta}|^2 \, dy\, ds. \end{aligned} \end{equation} The remainder of the proof proceeds as in Theorem~\ref{thm:localboundedness} except in the $(y,t)$ variables. Namely, we have the interpolation inequality as in Corollary~\ref{cor:interpcor}, and $\tilde{\theta}^\beta$ is a subsolution of~\eqref{eq:resultingeqnis} whenever $\beta \geq 1$. Therefore, we may perform Moser's iteration verbatim. As in~\eqref{eq:loweringthenorm}, the $L^2$ norm on the RHS may be replaced by the $L^\gamma$ norm. Finally, undoing the transformation yields the inequality~\eqref{eq:localbddnessforbddtotalspeed} in the $(x,t)$ variables, since $(y,t) \in B_R \times \{ t \}$ corresponds to $(x,t) \in B_{\lambda(t)R} \times \{ t \}$. \end{proof} The quantitative local boundedness property in Theorem~\ref{thm:lqlp} follows from applying Proposition~\ref{pro:localbddbddtotal} and its rescalings on finitely many small time intervals. In Remark~\ref{rmk:boundedtotalspeedassertion}, we justify that the constant depends on the `profile' of $b$ and not just its norm. \section{Counterexamples} \label{sec:counters} \subsection{Elliptic counterexamples} Let $n \geq 3$. Our counterexamples will be axisymmetric in `slab' domains $B_R \times (0,1)$, where $R>0$ is arbitrary and $B_R$ is a ball in $\mathbb{R}^{n-1}$. We use the notation $x = (x',z)$, where $x' \in \mathbb{R}^{n-1}$, $r = |x'|$, and $z \in (0,1)$. Let \begin{equation} \ubar{\theta}(x) = u(r) z, \end{equation} \begin{equation} b(x) = V(r) e_z. \end{equation} Since $b$ is a shear flow in the $e_z$ direction, it is divergence free.
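As a symbolic sanity check (a sketch with generic radial $u$, $V$ in the case $n = 3$), one can verify both that the shear drift $b = V(r) e_z$ is divergence-free and the identity $-\Delta(u(r)z) + b \cdot \nabla(u(r)z) = -z \Delta_{x'} u + V u$ used below:

```python
# Symbolic check (n = 3): for the ansatz theta = u(r) z with shear drift
# b = (0, 0, V(r)), we have div b = 0 and
# -Delta theta + b . grad theta = -z * Delta_{x'} u + V u.
import sympy as sp

x1, x2, z = sp.symbols('x1 x2 z', real=True)
r = sp.sqrt(x1**2 + x2**2)
u, V = sp.Function('u'), sp.Function('V')

theta = u(r) * z
div_b = sp.diff(V(r), z)                              # b = V(r) e_z
lap = sum(sp.diff(theta, v, 2) for v in (x1, x2, z))  # full Laplacian
lhs = -lap + V(r) * sp.diff(theta, z)                 # -Delta theta + b . grad theta
lap_xprime = sp.diff(u(r), x1, 2) + sp.diff(u(r), x2, 2)
rhs = -z * lap_xprime + V(r) * u(r)

assert div_b == 0
assert sp.simplify(lhs - rhs) == 0
```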
Then \begin{equation} \label{eq:incylindrical} - \Delta \ubar{\theta} + b \cdot \nabla \ubar{\theta} = - z \Delta_{x'} u + V u. \end{equation} We will construct a subsolution $\ubar{\theta}$ and a supersolution $\bar{\theta}$ using the \emph{steady Schr{\"o}dinger equation} \begin{equation} \label{eq:steadyschrodinger} -\Delta u + V u = 0 \end{equation} in dimension $n-1$, where additionally $u \geq 0$ and $V \leq 0$. By the scale invariance in $x'$, it will suffice to construct a solution at a single fixed length scale $R = R_0$. The way to proceed is well known. We define \begin{equation} u = \log \log \frac{1}{r}, \end{equation} \begin{equation} V = \frac{\Delta u}{u}, \end{equation} for $r \leq R_0 \ll 1$ so that $u$ is well defined. A simple calculation verifies that $0 \leq u \in H^1_{\rm loc}$, $\Delta u, V \le 0$, and $\Delta u, V \in L^{(n-1)/2}_{\rm loc}$.\footnote{Since Schr{\"o}dinger solutions with critical potentials $V$ belong to $L^p_{\rm loc}$ for all $p < \infty$ (see Han and Lin~\cite{hanlinbook}, Theorem 4.4), it is natural to choose $u$ with a $\log$. The double $\log$ ensures that $u$ has finite energy when $n=3$. Notice also that $\Delta u = - (r \log r^{-1})^{-2}$ when $n = 3$.} Therefore, $Vu = \Delta u \in L^1_{\rm loc}$, and the PDE~\eqref{eq:steadyschrodinger} is satisfied in the sense of distributions. Using~\eqref{eq:incylindrical}, we verify that $\ubar{\theta}$ is a distributional subsolution: \begin{equation} - \Delta \ubar{\theta} + b \cdot \nabla \ubar{\theta} = - z \Delta_{x'} u + V u = (1-z) Vu \leq 0 \text{ in } B_{R_0} \times (0,1), \end{equation} with equality at $\{ z = 1 \}$. We also wish to control solutions from above. Since $\Delta u \leq 0$, we define \begin{equation} \bar{\theta}(x',z) = u(r).
\end{equation} Clearly, $\ubar{\theta} \leq \bar{\theta}$, and $\bar{\theta}$ is a distributional supersolution: \begin{equation} -\Delta \bar{\theta} + b \cdot \nabla \bar{\theta} = - \Delta_{x'} u \geq 0 \text{ in } B_{R_0} \times (0,1). \end{equation} We now construct smooth subsolutions and supersolutions approximating $\ubar{\theta}$ and $\bar{\theta}$ according to the above procedure. Let $\varphi$ be a standard mollifier and \begin{equation} \varphi_\varepsilon = \frac{1}{\varepsilon^{n-1}} \varphi \left( \frac{\cdot}{\varepsilon} \right). \end{equation} Define $u_\varepsilon = \varphi_\varepsilon \ast u$, $V_\varepsilon = \Delta u_\varepsilon/u_\varepsilon$, $b_\varepsilon = V_\varepsilon(r) e_z$, $\ubar{\theta}_{\varepsilon} = z u_\varepsilon(r)$, and $\bar{\theta}_\varepsilon = u_\varepsilon(r)$. Then $(\ubar{\theta}_\varepsilon)$ and $(\bar{\theta}_\varepsilon)$ trap a family $(\theta_\varepsilon)$ of smooth solutions to the PDEs \begin{equation} - \Delta \theta_\varepsilon + b_\varepsilon \cdot \nabla \theta_\varepsilon = 0 \text{ on } B_{R_0/2} \times (0,1) \end{equation} when $\varepsilon \in (0,R_0/2)$. Moreover, we have the desired estimates \begin{equation} \sup_{\varepsilon \in (0,R_0/2)} \| \theta_\varepsilon \|_{L^p(B_{R_0/2} \times (0,1))} \leq \| \bar{\theta} \|_{L^p(B_{R_0} \times (0,1))} < +\infty, \quad p \in [1,+\infty), \end{equation} \begin{equation} \sup_{\varepsilon \in (0,R_0/2)} \| V_\varepsilon \|_{L^{\frac{n-1}{2}}(B_{R_0/2})} \lesssim \sup_\varepsilon \| \Delta u_\varepsilon \|_{L^{\frac{n-1}{2}}(B_{R_0/2})} \lesssim \| \Delta u \|_{L^{\frac{n-1}{2}}(B_{R_0})} < +\infty, \end{equation} and the singularity as $\varepsilon \to 0^+$: \begin{equation} \sup_{B_{\frac{R_0}{4}} \times (\frac{1}{4},\frac{3}{4})} \theta_\varepsilon \geq \sup_{B_{\frac{R_0}{4}} \times (\frac{1}{4},\frac{3}{4})} \ubar{\theta}_\varepsilon \to +\infty \text{ as } \varepsilon \to 0^+.
\end{equation} \begin{remark}[Line singularity] The solutions constructed above are singular on the $z$-axis, as the maximum principle demands. \end{remark} \begin{remark}[Time-dependent examples] \label{rmk:timedependentexamples} The above analysis of unbounded solutions for the steady Schr{\"o}dinger equation with critical potential is readily adapted to the parabolic PDE $\partial_t u - \Delta u + V u = f$ in $B_{R_0} \times (-T_0,0) \subset \mathbb{R}^{n+1}$, $n \geq 2$, (i) with potential $V$ belonging to $L^q_t L^p_x$, $2/q + n/p = 2$, $q > 1$, and zero force, or (ii) with force $f$ belonging to the same space and zero potential. For example, one can define $u = \log \log \frac{1}{-t + r^2}$, $V = - (\partial_t u - \Delta u)/u$, and $f=0$. The case $q=1$ is an endpoint case in which solutions remain bounded. These examples are presumably well known, although we do not know a suitable reference. \end{remark} \subsection{Parabolic counterexamples} \begin{proof}[Proof of borderline cases: $L^q_t L^p_x$, $\frac{2}{q} + \frac{n}{p} = 2$, $q > 1$] \textit{1. A heat subsolution}. Let \begin{equation} \Gamma(x,t) = (4\pi t)^{-n/2} e^{-\frac{|x|^2}{4t}} \end{equation} be the heat kernel. Let \begin{equation} E(x,t) = (\Gamma - c_n)_+, \end{equation} where $c_n = (8\pi)^{-n/2}$. Then $E$ is globally Lipschitz away from $t=0$, and $E(\cdot,t)$ is supported in the ball $B_{R(t)}$, where \begin{equation} \label{eq:radiusdef} R(t)^2 = 2nt \log \frac{2}{t}, \quad t < 2, \end{equation} and $E$ vanishes in $t \geq 2$. \textit{2. A steady, compactly supported drift}. There exists a divergence-free vector field $U \in C^\infty_0(B_{4})$ satisfying \begin{equation} U \equiv \vec{e}_1 \text{ when } |x| \leq 2. \end{equation} Here is a construction: Let $\phi \in C^\infty_0(B_{4})$ be a radially symmetric cut-off function such that $\phi \equiv 1$ on $B_3$.
By applying Bogovskii's operator in the annulus $B_{4} \setminus \overline{B_{2}}$, see~\cite[Theorem III.3.3, p.~179]{galdi}, there exists $W \in C^\infty_0(B_{4} \setminus \overline{B_{2}})$ solving \begin{equation} \div W = - \div (\phi \vec{e}_1) \in C^\infty_0(B_{4} \setminus \overline{B_{2}}). \end{equation} Notably, the property of compact support is preserved. Finally, we define \begin{equation} U = \phi \vec{e}_1 + W. \end{equation} \textit{3. Building blocks}. Let $0 \leq S \in C^\infty_0(0,1)$ and $X \colon \mathbb{R} \to \mathbb{R}^n$ be the solution of the ODE \begin{equation} \dot X(t) = S(t) \vec{e}_1, \quad X(0) = -10n \vec{e}_1. \end{equation} Define \begin{equation} b_{S}(x,t) = S(t) U \left( \frac{x - X(t)}{R(t)} \right), \end{equation} where $R(t)$ was defined in~\eqref{eq:radiusdef} above, and \begin{equation} E_{S}(x,t) = E(x-X(t), t). \end{equation} Then $E_{S}$ is a subsolution: \begin{equation} \left( \partial_t - \Delta + b_{S} \cdot \nabla \right) E_{S} \leq 0 \quad \text{ on } \mathbb{R}^n \times (0,1). \end{equation} If $[a,a'] \subset (0,1)$, $S \in C^\infty_0(a,a')$, and $\int S \, dt \geq 20n$, then we have $E_{S}(\cdot,t)|_{B_3} \equiv 0$ when $t \leq a$ or $t \geq a'$. Additionally, $E_{S}(\cdot,\tilde{t}) = E(\cdot,\tilde{t})$ for some $\tilde{t} \in (a,a')$. We also consider the solution $\Phi_S$ to the PDE: \begin{equation} \begin{aligned} \left( \partial_t - \Delta + b_{S} \cdot \nabla \right) \Phi_S &= 0\quad \text{ on } \mathbb{R}^{n+1} \setminus \{ (X(0),0) \} \\ \Phi_S|_{t = 0} &= \delta_{x = X(0)}, \end{aligned} \end{equation} which for short times $|t| \ll 1$ and negative times is equal to the heat kernel $\Gamma(x-X(0),t)$. By the comparison principle, \begin{equation} E_{S} \leq \Phi_S.
\end{equation} We have the following bounds on the size of the drift: \begin{equation} \| b_S \|_{L^q_t L^p_x(\mathbb{R}^{n+1})}^q \lesssim \int_{\mathbb{R}} S(t)^q R(t)^{\frac{nq}{p}} \, dt, \end{equation} where $1 \leq p,q < +\infty$, and \begin{equation} \| b_S \|_{L^1_t L^\infty_x(\mathbb{R}^{n+1})} = \|U\|_{L^\infty(\mathbb{R}^n)} \int_{\mathbb{R}} S(t) \, dt. \end{equation} \textit{4. Large displacement}. For $S_k \in C^\infty_0(t_k,t_k')$, $k \geq 1$, with $t_k = o_{k \to +\infty}(1)$ and $[t_k,t_k'] \subset (0,1)$ disjoint-in-$k$, we consider the drifts $b_{S_k}$. Let $M > 0$. We claim that it is possible to choose $S_k$ satisfying \begin{equation} \norm{b_{S_k}}_{L^1_t L^\infty_x(\mathbb{R}^{n+1})} = M \end{equation} and \begin{equation} \left \| \sum_{k \geq 1} b_{S_k} \right \|_{L^q_t L^p_x(\mathbb{R}^{n+1})} < +\infty \end{equation} for all $p,q \in [1,+\infty]$ satisfying $2/q+n/p = 2$ and $q>1$. Indeed, consider \begin{equation} \label{eq7.12} \bar{S}(t) = \left( t \log t^{-1} \log \log t^{-1} \right)^{-1} \end{equation} when $t \leq c_0$ so that the above expression is well defined, and extended smoothly on $[c_0,1]$. We also ask that $t_1 \leq c_0$. Since \begin{equation} \label{eq:theintegral} \int_{a}^{a'} \bar{S}(t) \, dt = \log \log \log t^{-1} \big|_{t=a'}^{t=a} \end{equation} when $a' \leq c_0$, we have \begin{equation} \int_{t=0}^1 \bar{S}(t) \, dt = +\infty, \end{equation} whereas \begin{equation} \begin{aligned} \int_{t=0}^1 \bar{S}(t)^q R(t)^{\frac{nq}{p}} \, dt &\leq O(1) + C_n \int_{t=0}^{c_0} (t \log t^{-1})^{-q+\frac{nq}{2p}} ( \log \log t^{-1} )^{-q} \, dt \\ &\leq O(1) + C_n \int_{t=0}^{c_0} (t \log t^{-1})^{-1} ( \log \log t^{-1} )^{-q} \, dt < +\infty \end{aligned} \end{equation} when $q \in (1,+\infty)$. The case $q = +\infty$ is similar. We choose $S_k = \bar{S}(t) \varphi_k$ with suitable smooth cut-offs $\varphi_k$ to complete the proof of the claim. \textit{5. Unbounded solution}.
We choose $M = 20n$ and a suitable sequence of $S_k$ as above. We reorder the building blocks we defined above so that the $k$th subsolution and $k$th drift are `activated' on times $(1-t_k',1-t_k)$. Define \begin{equation} b_k(\cdot, t) = b_{S_k}(\cdot,t - (1-t_k') + t_k), \quad b = \sum_{k \geq 1} b_k \end{equation} and, for size parameters $A_k \geq 0$, \begin{equation} E_k(\cdot,t ) = E_{S_k}(\cdot, t- (1-t_k') + t_k) \mathbf{1}_{(1-t_k',1-t_k)}, \quad E = \sum_{k \geq 1} A_k E_k. \end{equation} Then $E$ is a subsolution of the PDE \begin{equation} (\partial_t - \Delta + b \cdot \nabla) E \leq 0\quad \text{ on } B_3 \times (-\infty,1). \end{equation} We further define \begin{equation} \Phi_k(\cdot,t) = \Phi_{S_k}(\cdot, t- (1-t_k') + t_k), \quad \theta = \sum_{k \geq 1} A_k \Phi_k, \end{equation} which is a solution of the PDE \begin{equation} (\partial_t - \Delta + b \cdot \nabla) \theta = 0\quad \text{ on } B_2 \times (-\infty,1). \end{equation} Since $E_k \leq \Phi_k$, we have that $E \leq \theta$ on $B_2 \times (-\infty,1)$. Additionally, we have \begin{equation} \label{eq:lowerboundonslices} \sup_t A_k \| E_{k}(\cdot,t) \|_{L^\infty(B_1)} \geq A_k \| E_{k}(\cdot,1-\tilde t_k) \|_{L^\infty(B_1)} \gtrsim A_k {t_k'}^{-n/2}, \end{equation} where $\tilde t_k\in (t_k,t_k')$ satisfies $X_{S_k}(\tilde{t_k})=0$. Therefore, by the comparison principle and~\eqref{eq:lowerboundonslices}, we have \begin{equation} \label{eq:limsup} \limsup_{t \to 1_-} \| \theta(\cdot,t) \|_{L^\infty(B_1)} \gtrsim \limsup_{k \to +\infty} A_k {t_k'}^{-n/2}. \end{equation} To control the solution from above, we use \begin{equation} \label{eq:thingtoprune} \| \theta \|_{L^\infty_t L^1_x(\mathbb{R}^{n+1})} \leq \sum A_k. \end{equation} Therefore, it is possible to choose $A_k \to 0$ as $k \to +\infty$ while keeping the $\limsup$ in~\eqref{eq:limsup} infinite.
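To illustrate the last point with a concrete (hypothetical) choice of parameters, take $n = 3$, $t_k' = 2^{-k}$, and $A_k = (t_k')^{n/4}$: the amplitudes vanish and their (geometric) sum is finite, yet $A_k (t_k')^{-n/2} = (t_k')^{-n/4} \to +\infty$, so the $\limsup$ above is infinite:

```python
# Hypothetical choice (n = 3): t_k' = 2^{-k}, A_k = (t_k')^{n/4}.
# Then A_k -> 0 with sum(A_k) finite (geometric series), while the
# lower bound A_k * (t_k')^{-n/2} = (t_k')^{-n/4} increases to infinity.
n = 3
tk_prime = [2.0 ** (-k) for k in range(1, 41)]
A = [t ** (n / 4) for t in tk_prime]
growth = [a * t ** (-n / 2) for a, t in zip(A, tk_prime)]

assert A[-1] < 1e-8                              # amplitudes vanish
assert sum(A) < 2.0                              # L^infty_t L^1_x stays bounded
assert all(g2 > g1 for g1, g2 in zip(growth, growth[1:]))
assert growth[-1] > 1e8                          # sup on slices blows up
```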
Hence, by `pruning' the sequence of $A_k$ (meaning we pass to a subsequence, without relabeling), we can always ensure that $\| \theta \|_{L^\infty_t L^1_x(\mathbb{R}^{n+1})} < +\infty$.\end{proof} \begin{remark} \label{rmk:boundedtotalspeedassertion} The sequence of solutions $\{\theta_k\}$ above demonstrates that the constant in the quantitative local boundedness property in Theorem~\ref{thm:lqlp} for drifts $b \in L^1_t L^\infty_x$ depends on the `profile' of $b$ rather than just its norm. \end{remark} \begin{proof}[Proof of non-borderline cases: $L^p_x L^q_t$, $\frac{3}{q} + \frac{n-1}{p} > 2$, $p \leq q$] This construction exploits rescaled copies of $E$ and is, in a certain sense, self-similar. \emph{1. Building blocks}. Let $(t_k) \subset (0,1)$ be an increasing sequence, with $t_k \to 1$ as $k \to +\infty$. Define $I_k = (t_k,t_{k+1})$, $R_k^2 = |I_k|$. Let $0 \leq S \in C^\infty_0(0,1)$ satisfying $\int_0^1 S(t) \, dt = M$ with $M = 20n$. Define $X_k \colon \mathbb{R} \to \mathbb{R}^n$ to be the solution of the ODE \begin{equation} \dot X_k(t) = \underbrace{\frac{1}{|I_k|} S\left(\frac{t-t_k}{|I_k|} \right)}_{=: S_k(t)} \vec{e}_1, \quad X_k(t_k) = -10n \vec{e}_1. \end{equation} The `total speed' has been normalized: $\int |\dot X_k| \, dt = \int S \, dt = M$. Define also \begin{equation} \label{eq:scalingansatz} b_{k}(x,t) = S_k(t) U\left( \frac{x - X_k(t)}{R_k} \right) \end{equation} and \begin{equation} E_k(x,t) = \frac{1}{R_k^n} E\left( \frac{x-X_k(t)}{R_k}, \frac{t-t_{k}}{|I_k|} \right).
\end{equation} Then $E_{k}$ is a subsolution: \begin{equation} \left( \partial_t - \Delta + b_{k} \cdot \nabla \right) E_{k} \leq 0 \text{ on } \mathbb{R}^{n+1} \setminus \{ (X_k(t_k),t_k) \} \end{equation} and satisfies many of the same properties as $E_{S}$ in the previous construction, among which is \begin{equation} E_{k}(\cdot,\tilde{t_k}) = \frac{1}{R_k^n} E\left(\frac{\cdot}{R_k},\tilde{t} \right) \end{equation} for some $\tilde{t_k} \in (t_k,t_{k+1})$ and $\tilde{t} \in (0,1)$. We define the solution $\theta_k$ to the PDE: \begin{equation} \begin{aligned} \left( \partial_t - \Delta + b_{k} \cdot \nabla \right) \theta_k &= 0 \quad \text{ on } \mathbb{R}^{n+1} \setminus \{ (X_k(t_k),t_k) \}\\ \theta_k|_{t = t_k} &= \delta_{x = X_k(t_k)}, \end{aligned} \end{equation} which for short times $|t - t_k| \ll_k 1$ and times $t < t_k$ is equal to the heat kernel $\Gamma(x-X_k(t_k),t-t_k)$. The comparison principle implies \begin{equation} E_{k} \leq \theta_k. \end{equation} \textit{2. Estimating the drift}. We now estimate the size of $b_k$. To begin, we estimate the $L^q_t L^p_x$ norms, $\frac{2}{q} + \frac{n}{p} > 2$. Using the scalings from~\eqref{eq:scalingansatz}, we have \begin{equation} \label{eq:maxb} \max |b_k| \leq \| U \|_{L^\infty} \| S \|_{L^\infty} |I_k|^{-1} \end{equation} and \begin{equation} \label{eq:interpguy2} \| b_k \|_{L^q_t L^p_x(\mathbb{R}^{n+1})} \lesssim \| U \|_{L^\infty} \| S \|_{L^\infty} |I_k|^{\frac{1}{q}-1} R_k^{\frac{n}{p}} \lesssim R_k^{\varepsilon(p,q)} = o_{k\to+\infty}(1), \end{equation} since $|I_k| = R_k^2$. Next, we estimate the $L^p_{x'} L^\infty_{x_n} L^{\tilde{q}}_t$ norm, where $\frac{2}{\tilde{q}} + \frac{n-1}{p} > 2$. We are most interested in the case $\tilde{q} = +\infty$ and $p = \frac{n-1}{2}-$, but the general case requires no more effort. Importantly, we have \begin{equation} {\rm supp } \; b_k \subset B^{\mathbb{R}^{n-1}}_{C_n R_k} \times (-C_n,C_n) \times I_k.
\end{equation} Using this and~\eqref{eq:maxb}, we have \begin{equation} \label{eq:interpguy1} \| b_k \|_{L^p_{x'} L^\infty_{x_n} L^{\tilde{q}}_t(\mathbb{R}^{n+1})} \lesssim \| U \|_{L^\infty} \| S \|_{L^\infty} R_k^{\frac{n-1}{p}} |I_k|^{\frac{1}{\tilde{q}}-1} \lesssim R_k^{\varepsilon(p,\tilde{q})} = o_{k \to +\infty}(1). \end{equation} Interpolating between~\eqref{eq:interpguy2} and~\eqref{eq:interpguy1} with $(p,\tilde{q}) = (\frac{n-1}{2}-,\infty)$, we thus obtain \begin{equation} \| b_k \|_{L^p_x L^q_t(\mathbb{R}^{n+1})} = o_{k \to +\infty}(1) \end{equation} when $\frac{3}{q} + \frac{n-1}{p} > 2$ and $p \leq q$. After `pruning' the sequence in $k$ (meaning we pass to a subsequence, without relabeling), we have \begin{equation} \| b \|_{L^q_t L^p_x(\mathbb{R}^{n+1})} \leq \sum\| b_k \|_{L^q_t L^p_x(\mathbb{R}^{n+1})} < +\infty, \quad \frac{2}{q} + \frac{n}{p} > 2 \end{equation} and \begin{equation} \| b \|_{L^p_x L^q_t(\mathbb{R}^{n+1})} \leq \sum \| b_k \|_{L^p_x L^q_t(\mathbb{R}^{n+1})} < +\infty, \quad \frac{3}{q} + \frac{n-1}{p} > 2, \quad p \leq q. \end{equation} \emph{3. Concluding}. The remainder of the proof proceeds as before, with the notable difference that we do not need to reorder the blocks in time. To summarize, we have \begin{equation} \label{eq:lowerboundonslices2} \sup_t A_k \| E_{k}(\cdot,t) \|_{L^\infty(B_1)} \geq A_k \| E_{k}(\cdot,\tilde{t_k}) \|_{L^\infty(B_1)} \gtrsim A_k {R_k}^{-n}, \end{equation} where $\tilde t_k\in (t_k,t_{k+1})$ satisfies $X_{k}(\tilde{t_k})=0$, and hence, \begin{equation} \label{eq:limsup2} \limsup_{t \to 1_-} \| \theta(\cdot,t) \|_{L^\infty(B_1)} \gtrsim \limsup_{k \to +\infty} A_k {R_k}^{-n}. \end{equation} To control the solution from above, we again use~\eqref{eq:thingtoprune} and choose $A_k \to 0$ as $k \to +\infty$ while maintaining that the RHS of~\eqref{eq:limsup2} is infinite. By again `pruning' the sequence in $k$, we have $\| \theta \|_{L^\infty_t L^1_x(\mathbb{R}^{n+1})} < +\infty$.
This completes the proof. \end{proof} \begin{remark}[An open question] \label{rmk:anopenquestion} As mentioned in the introduction, we do not construct counterexamples in the endpoint cases $L^p_x L^q_t$, $\frac{3}{q} + \frac{n-1}{p} = 2$, except when $p = q = \frac{n+2}{2}$ or $(p,q) = (\frac{n-1}{2},+\infty)$ (steady example constructed above). This seems to suggest that local boundedness should also fail on the line between these two points, but that the counterexamples may be more subtle. It would be interesting to construct these examples. Since each `block' above is uniformly bounded in the desired spaces, we can say that, if local boundedness were to hold there, it must depend on the `profile' of $b$ and not just its norm, as in Remark~\ref{rmk:boundedtotalspeedassertion}. \end{remark} \section{Upper bounds on fundamental solutions} \label{sec:upperboundsonfundamentalsolutions} For the Gaussian-like upper bounds on fundamental solutions, we consider a divergence-free vector field $b \in C^\infty_0(\mathbb{R}^n \times [0,+\infty))$ and $\alpha_0 \geq 0$, $C_0, M_0 > 0$ satisfying the following two properties: \textbf{I}. \emph{Local boundedness}: For each $x_0 \in \mathbb{R}^n$, $R > 0$, $t_0 \in \mathbb{R}_+$, parabolic cylinder $Q_R(x_0,t_0) = B_R(x_0) \times (t_0-R^2,t_0) \subset \mathbb{R}^n \times \mathbb{R}_+$, and Lipschitz solution $u \in W^{1,\infty}(Q_R(x_0,t_0))$, we have \begin{equation} \label{eq:localboundednessforfundsol} \| u \|_{L^\infty(Q_{R/2}(x_0,t_0))} \leq C_0 \left( \frac{M_0}{R^{\alpha_0+2}} + \frac{1}{R^2} \right)^{\frac{n+2}{4}} \| u \|_{L^2(Q_R(x_0,t_0))}. \end{equation} \textbf{II}.
\emph{Global, revised form boundedness condition}, which we denote~\eqref{eq12.33}: \begin{quote} For each $x_0 \in \mathbb{R}^n$, $R > 0$, open interval $I \subset \mathbb{R}_+$, and Lipschitz function $u \in W^{1,\infty}(B_R \times I)$, there exists a measurable set $A = A(x_0,R,b) \subset (R/2,R)$ with $|A| \geq R/4$ and satisfying, for every $\varepsilon > 0$, \begin{equation} \label{eq12.33} \tag{$\tilde{\text{FBC}}$} \begin{aligned} & \frac{1}{|A|} \int_{}\kern-.34em \int_{B_A \times I} \frac{|u|^2}{2} (b \cdot n) \, dx \, dt \leq \frac{M_0}{\varepsilon^{\alpha_0+1} R^{\alpha_0+2}} \int_{}\kern-.34em \int_{B_R \times I} |u|^2 \, dx \, dt \\ &\quad + \varepsilon \left( \frac{1}{R^2} \int_{}\kern-.34em \int_{B_R \times I} |u|^2 \, dx \, dt + \int_{}\kern-.34em \int_{B_R \times I} |\nabla u|^2 \, dx \, dt + \sup_{t \in I} \int_{B_R} |u(x,t)|^2 \, dx \right), \end{aligned} \end{equation} where $n$ is the unit outer normal direction and $B_A = \cup_{r \in A} \partial B_r$. \end{quote} Under the above conditions, we have \begin{theorem}[Upper bounds on fundamental solutions] \label{thm:upperbounds} If $b \in C^\infty_0(\mathbb{R}^n \times [0,+\infty))$ is a divergence-free vector field satisfying \textbf{I} and \textbf{II}, then the fundamental solution $\Gamma = \Gamma(x,t;y,s)$ to the parabolic operator $L=\partial_t-\Delta + b \cdot \nabla$ satisfies the following upper bounds: \begin{align} \label{eq:upperboundsendofpaper} &\Gamma(x,t;y,s) \leq C (t-s) \left( \frac{1}{t-s} + \frac{M_0}{(t-s)^{1+\frac{\alpha_0}{2}}} \right)^{\frac{n+2}{2}}\\ &\quad\times \left[ \exp \left( - \frac{c |x-y|^2}{t-s} \right) + \exp\left( - \frac{ c |x-y|^{1+\frac{1}{1+\alpha_0}}}{((t-s) M_0)^{\frac{1}{1+\alpha_0}}} \right) \right] \end{align} for all $x,y \in \mathbb{R}^n$ and $0 \leq s < t < +\infty$, where $C,c > 0$ may depend on $C_0$, $\alpha_0$, and $n$.
\end{theorem} \begin{remark}[Comments on \textbf{I} and \textbf{II}] Notice that~\eqref{eq12.33} in~\textbf{II} bounds the \emph{outflux} of the scalar out of the domain. The \emph{influx}, which was bounded explicitly in~\eqref{eq:fbc}, is handled implicitly by~\textbf{I}. The parameter $\alpha_0$ in~\eqref{eq12.33} does not track the same information as the parameter $\alpha$ in~\eqref{eq:fbc}. The constant $M_0$ in~\eqref{eq12.33} has dimensions $L^{\alpha_0}$, whereas the constant $M$ in~\eqref{eq:fbc} is dimensionless. The upper bound in~\eqref{eq:upperboundsendofpaper} is dimensionally correct. Upon optimizing in $\varepsilon$, one discovers that~\eqref{eq12.33} is equivalent to an interpolation inequality; see the derivation in Remark~\ref{rmk:trackingparametervarepsilon0}. The above form is reflective of how the estimate is utilized below. It would be possible to incorporate a parameter $\delta_0 \in (0,1/2)$ such that $|A| \geq \delta_0 R$, as in~\eqref{eq:fbc}. \end{remark} We verify that the above conditions are satisfied in the setting of the main theorems: \begin{lemma}[Verifying $\tilde{\text{FBC}}$] \label{lem:verifyingtildefbc} Let $p,q \in [1,+\infty]$ and $b \in C^\infty_0(\mathbb{R}^n \times [0,+\infty))$ be a divergence-free vector field. 1. If \begin{equation} b \in L^q_t L^p_x(\mathbb{R}^n \times \mathbb{R}_+), \quad 1 \leq \zeta_0 := \frac{2}{q} + \frac{n}{p} < 2, \end{equation} then $b$ satisfies I and II with $\alpha_0 = \frac{\zeta_0-1}{\theta_2} (=\frac 1 {\theta_2}-2)$, $M_0 = C \| b \|_{L^q_t L^p_x(\mathbb{R}^n \times \mathbb{R}_+)}^{\frac{1}{\theta_2}} + \frac{1}{4}$, $C_0 = C$, and $\theta_2 = 1-\frac{\zeta_0}{2}$. 2.
If \begin{equation} b \in L^p_x L^q_t(\mathbb{R}^n \times \mathbb{R}_+), \quad p \leq q, \quad 1 \leq \zeta_0 := \frac{3}{q} + \frac{n-1}{p} < 2, \end{equation} then $b$ satisfies I and II with $\alpha_0 = \frac{\zeta_0-1}{\theta_2} (=\frac 1 {\theta_2}-2)$, $M_0 = C \| b \|_{L^p_x L^q_t(\mathbb{R}^n \times \mathbb{R}_+)}^{\frac{1}{\theta_2}} + \frac{1}{4}$, $C_0 = C$, and $\theta_2 = 1-\frac{\zeta_0}{2}$. \end{lemma} \begin{proof} \textbf{I} follows from Proposition~\ref{pro:stuffissatisfied}, Theorem~\ref{thm:localboundedness}, and the translation and scaling symmetries. \textbf{II} follows from rescaling $R_0$ in Remark~\ref{rmk:trackingparametervarepsilon0}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:upperbounds}] By a translation of the coordinates, we need only consider $\Gamma(x_0,t_0;0,0)$ for $t_0>0$ and $x_0\in \mathbb{R}^n$. We adapt a method due to E. B. Davies~\cite{daviesexplicit}. Let $\psi=\psi(|x|)=\psi(r)$ be a bounded radial Lipschitz function to be specified such that $\psi=0$ when $r<|x_0|/2$ and $\psi$ is a constant when $r\ge |x_0|$. For now, we record the property $|\nabla \psi| \leq \gamma$, where $\gamma > 0$ will be specified later. \emph{1. Weighted energy estimates}. Let $f\in C^\infty_0(\mathbb{R}^n)$ and $u$ be the solution to the equation $Lu=0$ in $\mathbb{R}^n \times \mathbb{R}^+$ with initial condition $u(0,\cdot)=e^{-\psi}f$. For $t\ge 0$, denote \begin{equation} J(t)=\frac 1 2\int_{\mathbb{R}^n}e^{2\psi(y)}u^2(y,t)\,dy.
\end{equation} Then, by integration by parts, for $t>0$, we have \begin{align*} \dot J(t)&=\int_{\mathbb{R}^n}e^{2\psi} u \partial_t u \,dy =\int_{\mathbb{R}^n}e^{2\psi}u(\Delta u - b\cdot \nabla u)\,dy\\ &=-\int_{\mathbb{R}^n}e^{2\psi}|\nabla u|^2\,dy - 2\int_{\mathbb{R}^n} e^{2\psi} u \nabla u \cdot \nabla \psi \, dy - \int_{\mathbb{R}^n}e^{2\psi}b\cdot \nabla \frac {u^2} 2\,dy\\ &\leq -\frac{1}{2} \int_{\mathbb{R}^n}e^{2\psi}|\nabla u|^2\,dy + C \gamma^2 J(t) + \int_{\mathbb{R}^n}e^{2\psi}b\cdot \nabla \psi u^2\,dy, \end{align*} where we applied Young's inequality. Hence, \begin{equation} \begin{aligned} &J(t)+ \frac{1}{2} \int_0^t\int_{\mathbb{R}^n}e^{2\psi}|\nabla u|^2\,dy\,ds \\ &\quad \leq J(0) + C \gamma^2 \int_0^t J(s) \, ds + \int_0^t\int_{\mathbb{R}^n}e^{2\psi}b\cdot \nabla \psi u^2\,dy\,ds. \end{aligned} \end{equation} Now for each $t>0$, we choose $\psi'(r)=\gamma \mathbf{1}_{A}$, where $A=A(x_0,|x_0|,b)$ is the set of `good slices' from~\eqref{eq12.33} in \textbf{II}. By~\eqref{eq12.33} applied to $e^\psi u$, we have \begin{align*} &J(t)+ \frac{1}{2} \int_0^t\int_{\mathbb{R}^n}e^{2\psi}|\nabla u|^2\,dy\,ds\\ &\leq J(0) + C \gamma^2 \int_0^t J(s) \, ds + \gamma \int_0^t\int_{B_A}e^{2\psi}b\cdot \frac{y}{|y|} u^2\,dy\,ds\\ &\le J(0)+ C \gamma^2 \int_0^t J(s) \, ds + \gamma \varepsilon |A| \left (\sup_{s\in [0,t]}J(s)+\int_0^t \int_{B_{|x_0|}}e^{2\psi}|\nabla u|^2\,dy\,ds\right)\\ &\quad+\gamma \varepsilon |A| \int_0^t \int_{B_{|x_0|}}e^{2\psi}|\nabla \psi|^2 u^2\,dy\,ds + \left( \frac{\gamma M_0|A|}{\varepsilon^{\alpha_0+1}(|x_0|/2)^{\alpha_0+2}} + \frac{\gamma \varepsilon|A|}{|x_0|^2} \right) \int_0^t\int_{B_{|x_0|}} e^{2\psi}u^2\,dy\,ds. \end{align*} Recall that $|A|\le |x_0|/2$.
We set $\varepsilon=(100 |x_0| \gamma)^{-1}$ and take the supremum in $t$ in order to absorb the third term on the right-hand side, which has coefficient $\gamma \varepsilon |A|$: \begin{align*} &\frac 1 2 \sup_{s\in [0,t]}J(s) + \frac{1}{4} \int_0^t\int_{\mathbb{R}^n}e^{2\psi}|\nabla u|^2\,dy\,ds\\ &\quad \le J(0) +\left( C \gamma^2+ C M_0 \gamma^{2+\alpha_0} + |x_0|^{-2} \right)\int_0^t J(s)\,ds. \end{align*} Now, by the Gronwall inequality, \begin{equation} \label{eq1.23} J(t)\le CJ(0)e^{( C \gamma^2+ C M_0 \gamma^{2+\alpha_0} + |x_0|^{-2} ) t} . \end{equation} \emph{2. Duality argument}. For $s<t$, we define an operator \begin{equation} \label{eq12.17} P^\psi_{s\to t} f(x)=e^{\psi(x)}\int_{\mathbb{R}^n} \Gamma(x,t;y,s)e^{-\psi(y)}f(y)\,dy. \end{equation} By the local boundedness estimate~\eqref{eq:localboundednessforfundsol} in \textbf{I}, for any $x\in \mathbb{R}^n$ and $R = \sqrt{t}$, \begin{align*} &e^{-2\psi(x)}|P_{0\to t}^\psi f(x)|^2=|u(x,t)|^2 \leq C_0 \left( \frac{1}{t} + \frac{M_0}{t^{1+\frac{\alpha_0}{2}}} \right)^{\frac{n+2}{2}} \int_0^t\int_{B_{\sqrt{t}}(x)} u^2(y,s) \,dy\,ds. \end{align*} Thus, allowing $C$ to depend on $C_0$, from~\eqref{eq1.23} we have \begin{align*} &|P_{0\to t}^\psi f(x)|^2\le C \left( \frac{1}{t} + \frac{M_0}{t^{1+\frac{\alpha_0}{2}}} \right)^{\frac{n+2}{2}} \int_0^t \int_{B_{\sqrt{t}}(x)} e^{2\psi(x)}u^2(y,s)\,dy\,ds\\ &\le C \left( \frac{1}{t} + \frac{M_0}{t^{1+\frac{\alpha_0}{2}}} \right)^{\frac{n+2}{2}} \sup_{y\in B_{\sqrt{t}}(x)}e^{2(\psi(x)-\psi(y))}\int_0^t J(s)\,ds\\ &\overset{\eqref{eq1.23}}{\le} C t \left( \frac{1}{t} + \frac{M_0}{t^{1+\frac{\alpha_0}{2}}} \right)^{\frac{n+2}{2}} e^{2\gamma\min\{ \sqrt{t},|x_0|/2\}} \times e^{( C \gamma^2+ C M_0 \gamma^{2+\alpha_0} + |x_0|^{-2} ) t} J(0).
\end{align*} Since $J(0)=\|f\|_{L^2}^2/2$, the above inequality together with a translation in time implies that \begin{align*} &\|P_{s\to t}^\psi\|_{L^2\to L^\infty}^2 \le C(t-s) \left( \frac{1}{t-s} + \frac{M_0}{(t-s)^{1+\frac{\alpha_0}{2}}} \right)^{\frac{n+2}{2}} \\ &\quad\cdot e^{2\gamma\min\{\sqrt{t-s},|x_0|/2\}} e^{( C \gamma^2+ C M_0 \gamma^{2+\alpha_0} + |x_0|^{-2} ) (t-s)} . \end{align*} By duality, we also have \begin{align*} &\|P_{s\to t}^\psi\|_{L^1\to L^2}^2 \le C(t-s) \left( \frac{1}{t-s} + \frac{M_0}{(t-s)^{1+\frac{\alpha_0}{2}}} \right)^{\frac{n+2}{2}} \\ &\quad \times e^{2\gamma\min\{ \sqrt{t-s} ,|x_0|/2\}} e^{( C \gamma^2+ C M_0 \gamma^{2+\alpha_0} + |x_0|^{-2} ) (t-s)} . \end{align*} Therefore, \begin{align*} &\|P_{0\to t}^\psi\|_{L^1\to L^\infty} \le \|P_{0\to t/2}^\psi\|_{L^1\to L^2}\|P_{t/2\to t}^\psi\|_{L^2\to L^\infty}\\ &\le Ct \left( \frac{1}{t} + \frac{M_0}{t^{1+\frac{\alpha_0}{2}}} \right)^{\frac{n+2}{2}} e^{2\gamma\min\{\sqrt{t},|x_0|/2\}} e^{( C \gamma^2+ C M_0 \gamma^{2+\alpha_0} + |x_0|^{-2} ) t} . \end{align*} From \eqref{eq12.17}, the above inequality implies that for any $x,y\in \mathbb{R}^n$, \begin{align*} &e^{\psi(x)-\psi(y)}\Gamma(x,t;y,0)\\ &\le Ct \left( \frac{1}{t} + \frac{M_0}{t^{1+\frac{\alpha_0}{2}}} \right)^{\frac{n+2}{2}} e^{2\gamma\min\{\sqrt{t},|x_0|/2\}} e^{( C \gamma^2+ C M_0 \gamma^{2+\alpha_0} + |x_0|^{-2} ) t}.
\end{align*} In particular, we have \begin{align*} &\Gamma(x_0,t;0,0)\\ &\le Ce^{-\psi(x_0)} t \left( \frac{1}{t} + \frac{M_0}{t^{1+\frac{\alpha_0}{2}}} \right)^{\frac{n+2}{2}} e^{2\gamma\min\{\sqrt{t},|x_0|/2\}} e^{( C \gamma^2+ C M_0 \gamma^{2+\alpha_0} + |x_0|^{-2} ) t} \\ &\le Ct \left( \frac{1}{t} + \frac{M_0}{t^{1+\frac{\alpha_0}{2}}} \right)^{\frac{n+2}{2}} \exp \left\lbrace 2\gamma \min \Big( \sqrt{t},\frac{|x_0|}{2} \Big) - \frac{\gamma |x_0|}{4} + C \gamma^2t + C M_0 \gamma^{2+\alpha_0} t + |x_0|^{-2} t \right\rbrace \end{align*} since $\psi(0)=0$ and $\psi(x_0)\ge \gamma |x_0|/4$. \emph{3. Optimizing $\gamma$}. In order to have a negative term in the exponential, we first consider \begin{equation} \label{eq:sillyassumption} |x_0| \geq 16 \sqrt{t}. \end{equation} Then \begin{equation} 2\min \Big( \sqrt{t},\frac{|x_0|}{2} \Big) - \frac{|x_0|}{4} \leq - \frac{|x_0|}{8}. \end{equation} Notice that the final term in the exponential is already controlled: \begin{equation} t |x_0|^{-2} \lesssim 1. \end{equation} Hence, \begin{equation} \Gamma(x_0,t;0,0) \lesssim t \left( \frac{1}{t} + \frac{M_0}{t^{1+\frac{\alpha_0}{2}}} \right)^{\frac{n+2}{2}} \exp \left( - \frac{1}{8} \gamma |x_0| + C \gamma^2 t + C M_0 \gamma^{2+\alpha_0} t \right). \end{equation} The new expression inside the exponential is \begin{equation} - \frac{1}{8} \gamma |x_0| + \underbrace{C \gamma^2 t}_{A} + \underbrace{C M_0 \gamma^{2+\alpha_0} t}_{B}. \end{equation} We divide space into an `inner region' and `outer region', and we anticipate that the rate of decay may be modified in the outer region, where $B$ dominates. \emph{3a. Outer region}. We consider scalings of $\gamma$ in which $- \frac{1}{8} \gamma |x_0|$ overtakes $B$. Consider \begin{equation} M_0 \gamma^{2+\alpha_0} t = \varepsilon \gamma |x_0|. \end{equation} Then $\frac{1}{8} \gamma |x_0| \gg B$ when \begin{equation} \gamma^{1+\alpha_0} = \varepsilon M_0^{-1} \frac{|x_0|}{t} \end{equation} and $\varepsilon \ll 1$.
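As a quick sanity check (not part of the proof), one can verify numerically that this outer-region scaling, $\gamma=(\varepsilon |x_0|/(M_0 t))^{1/(1+\alpha_0)}$, turns the main negative term into the sub-Gaussian exponent that appears below: $\gamma|x_0| = \varepsilon^{1/(1+\alpha_0)}\,|x_0|^{1+\frac{1}{1+\alpha_0}}/(t M_0)^{1/(1+\alpha_0)}$, while $B$ is exactly $\varepsilon$ times it. A minimal sketch with illustrative values:

```python
# Outer-region scaling: gamma^{1+a} = eps * |x0| / (M0 * t). Check that
# gamma*|x0| equals eps^{1/(1+a)} * |x0|^{1+1/(1+a)} / (t*M0)^{1/(1+a)},
# and that B = M0 * gamma^{2+a} * t equals eps * gamma * |x0|.
def outer_gamma(x0, t, M0, a, eps):
    return (eps * x0 / (M0 * t)) ** (1.0 / (1.0 + a))

for a in (0.5, 1.0, 2.0):
    for eps in (1e-2, 1e-1):
        x0, t, M0 = 100.0, 2.0, 3.0
        g = outer_gamma(x0, t, M0, a, eps)
        main = g * x0
        predicted = eps ** (1 / (1 + a)) * x0 ** (1 + 1 / (1 + a)) / (t * M0) ** (1 / (1 + a))
        B = M0 * g ** (2 + a) * t
        assert abs(main - predicted) < 1e-9 * main   # the sub-Gaussian exponent
        assert abs(B - eps * main) < 1e-9 * main     # B is a small multiple of it
```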
In this scaling, we have that $B \geq A$ ($CM_0 \gamma^{2+\alpha_0} \geq C \gamma^2$) when \begin{equation} M_0 \gamma^{\alpha_0} \geq 1, \end{equation} or \begin{equation} \label{eq:region1} \frac{M_0^{\frac{1}{\alpha_0}} |x_0|}{t} \geq \varepsilon^{-1}. \end{equation} In this region, under the additional assumption~\eqref{eq:sillyassumption}, we have the exponential bound \begin{equation} \label{eq:exponentialbound1} \Gamma(x_0,t;0,0) \lesssim t \left( \frac{1}{t} + \frac{M_0}{t^{1+\frac{\alpha_0}{2}}} \right)^{\frac{n+2}{2}} \exp\left( - \frac{ c |x_0|^{1+\frac{1}{1+\alpha_0}}}{(t M_0)^{\frac{1}{1+\alpha_0}}} \right). \end{equation} \emph{3b. Inner region}. We now consider scalings of $\gamma$ in which $- \frac{1}{8} \gamma |x_0|$ overtakes the term $A$. Consider \begin{equation} \gamma^2 t = \varepsilon \gamma |x_0|. \end{equation} Then $\frac{1}{8} \gamma |x_0| \gg A$ when \begin{equation} \gamma = \varepsilon |x_0|/t \end{equation} and $\varepsilon \ll 1$. In this scaling, we have that $A \geq B$ ($C \gamma^2 \geq CM_0 \gamma^{2+\alpha_0}$) when \begin{equation} M_0 \gamma^{\alpha_0} \leq 1, \end{equation} or \begin{equation} \label{eq:region2} M_0^{\frac{1}{\alpha_0}} \frac{|x_0|}{t} \leq \varepsilon^{-1}. \end{equation} In this region, under the additional assumption~\eqref{eq:sillyassumption}, we have the exponential bound \begin{equation} \label{eq:exponentialbound2} \Gamma(x_0,t;0,0) \lesssim t \left( \frac{1}{t} + \frac{M_0}{t^{1+\frac{\alpha_0}{2}}} \right)^{\frac{n+2}{2}} \exp \left( - \frac{c |x_0|^2}{t} \right). \end{equation} \emph{3c. Patching}. Combining~\eqref{eq:exponentialbound1} and~\eqref{eq:exponentialbound2}, we have \begin{equation} \label{eq:patchedestimate} \Gamma(x_0,t;0,0) \lesssim t \left( \frac{1}{t} + \frac{M_0}{t^{1+\frac{\alpha_0}{2}}} \right)^{\frac{n+2}{2}} \left[ \exp \left( - \frac{c |x_0|^2}{t} \right) + \exp\left( - \frac{ c |x_0|^{1+\frac{1}{1+\alpha_0}}}{(t M_0)^{\frac{1}{1+\alpha_0}}} \right) \right]
\end{equation} under the assumption~\eqref{eq:sillyassumption}. On the other hand, when $|x_0| \leq 16 \sqrt{t}$, the fundamental solution is controlled by the Nash estimate~\cite{nash1958continuity}: \begin{equation} \| \Gamma(\cdot,t;0,0) \|_{L^\infty(\mathbb{R}^n)} \lesssim t^{-\frac{n}{2}}, \end{equation} which is independent of the divergence-free drift. This contributes to the prefactor in~\eqref{eq:patchedestimate}. The proof is complete. \end{proof} {\small \subsubsection*{Acknowledgments} DA thanks Vladim{\'i}r {\v S}ver{\'a}k for encouraging him to answer this question and helpful discussions. DA also thanks Tobias Barker and Simon Bortz for helpful discussions, especially concerning Remark~\ref{rmk:timedependentexamples}, and their patience. DA was supported by the NDSEG Fellowship and NSF Postdoctoral Fellowship Grant No.~2002023. HD was partially supported by the Simons Foundation, Grant No.~709545, a Simons Fellowship, and the NSF under agreement DMS-2055244. } \end{document}
\begin{document} \title{Comparing the effects of nuclear and electron spins on the formation of neutral hydrogen molecule} \author{Miao Hui-hui} \affiliation{Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, Vorobyovy Gory, Moscow, 119991, Russia} \author{Ozhigov Yuri Igorevich} \email[Email address: ]{[email protected]} \affiliation{Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, Vorobyovy Gory, Moscow, 119991, Russia\\K. A. Valiev Institute of physics and technology, Russian Academy of Sciences, Nakhimovsky Prospekt 36, Moscow, 117218, Russia} \date{\today} \begin{abstract} We introduce the association-dissociation model of the neutral hydrogen molecule, which is a finite-dimensional cavity quantum electrodynamics model of chemistry with two two-level artificial atoms on quantum dots placed in optical cavities, based on the Tavis-Cummings-Hubbard model. The motion of the nuclei can be represented in quantum form. Electron spin transition and spin-spin interaction between electron and nucleus are both considered. Consideration is also given to the effects of nuclear and electron spins on the formation of the neutral hydrogen molecule. \end{abstract} \keywords{neutral hydrogen molecule, artificial atom, finite-dimensional QED, nuclear spin, electron spin} \maketitle \section{Introduction} \label{sec:Intro} The modelling of hydrogen chemical processes has attracted increasing interest and become one of the primary tasks in recent years, including chemical reactions involving the cation $\mathrm{H}_2^+$ \cite{Zhu2020, Afanasyev2021} and the neutral hydrogen molecule $\mathrm{H}_2$ \cite{Miao2023}. Quantum chemistry is usually understood as a technique for calculating the numerical characteristics of stationary atoms or molecules: binding energies, spectra, etc.
This paper is devoted to a different direction: the dynamics of chemical reactions and the influence of the electromagnetic field and the thermal properties of the environment on them. The task of describing dynamic reaction scenarios is very demanding in terms of computational resources, and therefore incompatible with the exact calculation of the characteristics of stationary structures. We assume that the exact values of the binding energies, electron tunnelling and their interaction with the field are test parameters that can be determined not only by standard computational methods (Hartree-Fock, Monte Carlo and density functional), but also selected from observing the outcomes of dynamic association-dissociation scenarios, the mechanisms of which we are building. This paper provides a method for extending the cavity quantum electrodynamics (QED) model to complex chemical and even biological models by studying the association-dissociation reaction of the hydrogen molecule. This is significant because this model can be modified in the future for use with more intricate chemical and biological models, which necessitate an understanding of hydrogen chemical processes. In this paper, the association-dissociation model of the neutral hydrogen molecule is introduced in detail, and the effects of nuclear and electron spins on the formation of the neutral hydrogen molecule are compared. The most commonly used cavity QED models are the Jaynes-Cummings model (JCM) \cite{Jaynes1963} and the Tavis-Cummings model (TCM) \cite{Tavis1968}, describing the dynamics of one or a group of two-level atoms in an optical cavity, which are the fundamental models for strong coupling (SC). JCM and TCM have been generalized to several cavities coupled by an optical fiber --- the Jaynes-Cummings-Hubbard model (JCHM) and the Tavis-Cummings-Hubbard model (TCHM) \cite{Angelakis2007}.
The value of these models and their modifications is that they allow us to describe a very complex interaction of light and matter in the framework of finite-dimensional QED models. Recently, a lot of research on SC models and their modifications has been done \cite{Afanasyev2021, Miao2023, Wei2021, Prasad2018, Guo2019, Smith2021, Kulagin2022, Dull2021}. We adapted these SC models in this paper to fulfil the requirements of modelling hydrogen chemical reactions. This paper is organized as follows. In Chapter \ref{sec:Model}, we introduce the theoretical model, considering both nuclear and electron spins. We also consider the effects of photonic modes $\Omega^s$ and $\Omega^n$ on the quantum evolution and the formation of the neutral hydrogen molecule. Some technical details of the density matrix and Hamiltonian are included in Chapter \ref{sec:HamilDensity}. Then we get some results from simulations in Chapter \ref{sec:Simulations}. Some brief comments on our results and extension to future work in Chapter \ref{sec:ConcluFuture} close out the paper. \section{Theoretical Model} \label{sec:Model} \begin{figure} \caption{(online color) The hybridization of orbitals of two hydrogen atoms and the formation of bonding orbital and antibonding orbital.} \label{fig:Hybridization} \end{figure} The theoretical model, called the association-dissociation model of neutral hydrogen molecule, is detailed in our earlier work \cite{Miao2023}. Each energy level in this model, including atomic and molecular, is divided into two levels: spin up $\uparrow$ and spin down $\downarrow$. According to the Pauli exclusion principle \cite{Pauli1925}, there can only be one electron per level. The excited states of the electron with the spins for the first nucleus are denoted by $|0_1^{\uparrow}\rangle_e$ and $|0_1^{\downarrow}\rangle_e$ (usually simply written as $|0_1\rangle_e$, which can denote both $|0_1^{\uparrow}\rangle_e$ and $|0_1^{\downarrow}\rangle_e$).
Then, $|-1_1\rangle_e$ --- ground electron states for the first nucleus. For the second nucleus --- $|0_2\rangle_e$ and $|-1_2\rangle_e$. Hybridization of orbitals is possible only for atomic excited states $|0_{1,2}\rangle_e$. Hybridization of atomic orbitals (AO) and formation of molecular orbitals (MO) are shown in Fig. \ref{fig:Hybridization}, where the antibonding orbital and bonding orbital take the following forms, respectively: \begin{subequations} \label{eq:MolStates} \begin{align} &|\Phi_1\rangle_e=\frac{1}{\sqrt{2}}\left(|0_1\rangle_e-|0_2\rangle_e\right)\label{eq:MolStatePhi1}\\ &|\Phi_0\rangle_e=\frac{1}{\sqrt{2}}\left(|0_1\rangle_e+|0_2\rangle_e\right)\label{eq:MolStatePhi0} \end{align} \end{subequations} where $|\Phi_1\rangle_e$ are also called molecular excited states, and $|\Phi_0\rangle_e$ --- molecular ground states. Each nucleus will form a potential well around itself, and the electrons will be bound in these potential wells. The association reaction of $\mathrm{H}_2$ is described as follows: two electrons in the atomic ground orbital $-1$ with a large distance between the nuclei, corresponding to different directions of the spin, respectively absorb a photon with mode $\Omega^{\uparrow}$ or $\Omega^{\downarrow}$, then they rise to the atomic excited orbital $0$. When the nuclei gather together in one cavity from different cavities through the quantum tunnelling effect, the potential barrier between the two potential wells decreases, and since the two electrons are in atomic excited orbitals, the atomic orbitals are hybridized into molecular orbitals, and the electrons are released on the molecular excited orbital $\Phi_1$. Then the two electrons quickly release respectively a photon with mode $\omega^{\uparrow}$ or $\omega^{\downarrow}$, and fall to the molecular ground orbital $\Phi_0$; a stable molecule is formed. The dissociation reaction of $\mathrm{H}_2$ is the reverse process of the association reaction, and finally the decomposition of hydrogen molecules is obtained.
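The hybridization in Eq.~(\ref{eq:MolStates}) is a unitary change of basis, so the molecular orbitals inherit orthonormality from the atomic ones. A quick numerical check (representing $|0_1\rangle_e$, $|0_2\rangle_e$ as standard basis vectors is an assumption made only for this illustration):

```python
import numpy as np

# Represent the atomic excited states as orthonormal basis vectors.
ket01 = np.array([1.0, 0.0])   # |0_1>_e
ket02 = np.array([0.0, 1.0])   # |0_2>_e

phi1 = (ket01 - ket02) / np.sqrt(2)  # antibonding |Phi_1>_e
phi0 = (ket01 + ket02) / np.sqrt(2)  # bonding     |Phi_0>_e

# The molecular states are again orthonormal.
assert abs(np.dot(phi0, phi0) - 1.0) < 1e-12
assert abs(np.dot(phi1, phi1) - 1.0) < 1e-12
assert abs(np.dot(phi0, phi1)) < 1e-12
```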
In this paper we adopt the second quantization \cite{Dirac1927, Fock1932}. The entire system's Hilbert space for quantum states is $\mathcal{C}$ and takes the following form: \begin{equation} \label{eq:SpaceC} |\Psi\rangle_{\mathcal{C}}=|photon\rangle|electron\rangle|nucleus\rangle \end{equation} where the quantum state consists of three parts: \begin{widetext} \begin{subequations} \label{eq:Subsystems} \begin{align} |photon\rangle&=|p_1\rangle_{\omega^{\uparrow}}|p_2\rangle_{\omega^{\downarrow}}|p_3\rangle_{\Omega^{\uparrow}}|p_4\rangle_{\Omega^{\downarrow}}|p_5\rangle_{\Omega^s}\label{eq:PhotonState}\\ |electron\rangle&=|l_1\rangle_{\substack{at_1\\or_0}}^{\uparrow}|l_2\rangle_{\substack{at_1\\or_0}}^{\downarrow}|l_3\rangle_{\substack{at_1\\or_{-1}}}^{\uparrow}|l_4\rangle_{\substack{at_1\\or_{-1}}}^{\downarrow}|l_5\rangle_{\substack{at_2\\or_0}}^{\uparrow}|l_6\rangle_{\substack{at_2\\or_0}}^{\downarrow}|l_7\rangle_{\substack{at_2\\or_{-1}}}^{\uparrow}|l_8\rangle_{\substack{at_2\\or_{-1}}}^{\downarrow}\label{eq:ElectronState}\\ |nucleus\rangle&=|k\rangle_n\label{eq:NucleusState} \end{align} \end{subequations} \end{widetext} where the numbers of molecule photons with the modes $\omega^{\uparrow}$, $\omega^{\downarrow}$ are $p_1$, $p_2$, respectively; $p_3$, $p_4$ are the numbers of atomic photons with modes $\Omega^{\uparrow}$, $\Omega^{\downarrow}$, respectively; $p_5$ is the number of photons with mode $\Omega^s$, which can excite the electron spin from $\uparrow$ to $\downarrow$ in the atom. $l_{i,i\in\left\{1,2,\cdots,8\right\}}$ describes the atom state: $l_i=1$ --- the orbital is occupied by one electron, $l_i=0$ --- the orbital is free. The state of the nuclei is denoted by $|k\rangle_n$: $k=0$ --- state of the nuclei gathered together in one cavity, $k=1$ --- state of the nuclei scattered in different cavities.
\subsection{Nuclear and electron spins} \label{subsec:NuclearElectronSpin} We introduce spin photons with mode $\Omega^s$ in our model, thus the transition between $\uparrow$ and $\downarrow$ is allowed. Electron spins must strictly satisfy the Pauli exclusion principle. We stipulate that independent electron spin transition is allowed if and only if the electrons are in the atomic excited state, since this transition would otherwise obscure the spin-spin interaction between electron and nucleus (this interaction can only occur when the electron is in the ground state, and is very weak compared to the independent electron spin transition). Electron spin transition is also forbidden when the electrons are in the molecular state corresponding to $|0\rangle_n$, as it would contravene the Pauli exclusion principle. The stable formation of $\mathrm{H}_2$ is only realized through the state where two electrons with different spins are situated in orbital $\Phi_0$. Only when the electrons reach the atomic ground state does the nuclear spin interact with them. When an electron is in the ground state of an atom and its spin is different from that of the nucleus, they can exchange spins. The symbol for this interaction, known as the spin-spin interaction, is $\sigma_{en,i}$, where $i$ is the index of atoms. With the aid of this interaction, the electron with spin $\downarrow$ absorbs a photon with mode $\Omega^s$, and the nucleus with spin $\uparrow$ emits a photon with mode $\Omega^n$; the electron now has spin up and the nucleus spin down. In contrast, the nucleus with spin $\downarrow$ can also absorb a photon with mode $\Omega^n$, and the electron with spin $\uparrow$ can also release a photon with mode $\Omega^s$. The initial state $|\Psi_{initial}\rangle$ for the association process is shown in Fig. \ref{fig:Spin-spinInteraction}, where two electrons with $\downarrow$ are in different atoms, and two nuclei with $\uparrow$ can interact with the electrons and exchange spins.
We put three photons with different modes $\Omega^{\uparrow}$, $\Omega^{\downarrow}$ and $\Omega^s$ at the start. This means that only one of the electrons can complete the spin exchange with the nucleus, because at the initial moment we only have one spin photon with mode $\Omega^s$. Thus, we have two situations of formation of $\mathrm{H}_2$: \begin{itemize} \item the first nucleus with $\uparrow$ and the second nucleus with $\downarrow$, denoted by $|\Psi_{final}\rangle$; \item the first nucleus with $\downarrow$ and the second nucleus with $\uparrow$, denoted by $|\Psi_{final}'\rangle$. \end{itemize} The stable hydrogen molecule is defined as follows: \begin{equation} \label{eq:HydrogenState} |\mathrm{H}_2\rangle=c_0|\Psi_{final}\rangle+c_1|\Psi_{final}'\rangle \end{equation} where $c_0$, $c_1$ are normalization factors. Due to the introduction of nuclear spin, we need to introduce the nuclear spin photon with mode $\Omega^n$ and consider the spin state of the two nuclei. Thus, the definition of the quantum state space must be rewritten. First, Eq. (\ref{eq:PhotonState}) is extended as follows: \begin{equation} \label{eq:NewPhotonState} |photon\rangle=|p_1\rangle_{\omega^{\uparrow}}|p_2\rangle_{\omega^{\downarrow}}|p_3\rangle_{\Omega^{\uparrow}}|p_4\rangle_{\Omega^{\downarrow}}|p_5\rangle_{\Omega^s}|p_6\rangle_{\Omega^n} \end{equation} where $p_6$ is the number of photons with mode $\Omega^n$, which can excite the nuclear spin from $\downarrow$ to $\uparrow$ in the atom. Analogously, Eq. (\ref{eq:NucleusState}) is extended as follows: \begin{equation} \label{eq:NewNucleusState} |nucleus\rangle=|k\rangle_n|k_1\rangle_{n_1}|k_2\rangle_{n_2} \end{equation} where $k_{i,i\in\left\{1,2\right\}}$ describes the nuclear spin of the first or second atom. $k_i=1$ --- nucleus with $\uparrow$, $k_i=0$ --- nucleus with $\downarrow$. Spin-spin interaction between nucleus and electron with slight intensity $g_{en}$ is usually ignored.
However, experiments indicate that when we introduce spin-spin interaction, molecular hydrogen occurs in two isomeric forms: one with its two proton nuclear spins aligned parallel --- orthohydrogen, the other with its two proton spins aligned antiparallel --- parahydrogen. The spin-spin interaction is also called the hyperfine interaction. \begin{figure} \caption{(online color) The situation with consideration of spin-spin interaction between nucleus and electron. The formation of the neutral hydrogen molecule is possible when we put three photons with different modes $\Omega^{\uparrow}$, $\Omega^{\downarrow}$ and $\Omega^s$ at the start.} \label{fig:Spin-spinInteraction} \end{figure} \subsection{Thermally stationary state} \label{subsec:Thermally} We define the stationary state of a field with temperature $T$ as a mixed state with a Gibbs distribution of Fock components: \begin{equation} \label{eq:Photon_gibbs} {\cal G}\left(T\right)_f=c\sum\limits_{p=0}^\infty \exp\left(-\frac{\hbar\omega_c p}{KT}\right)|p\rangle\langle p| \end{equation} where $K$ is the Boltzmann constant, $c$ is the normalization factor, $p$ is the number of photons, $\omega_c$ is the photonic mode. We introduce the notation $\gamma_{k'}/\gamma_{k}=\mu$, where $\gamma_{k}$ denotes the total spontaneous emission rate for photon from cavity to external environment and $\gamma_{k'}$ denotes the total spontaneous influx rate for photon from external environment into cavity. The state ${\cal G}\left(T\right)_f$ will then exist only for $\mu<1$, because otherwise the temperature will be infinitely large and the state ${\cal G}\left(T\right)_f$ will be non-normalizable. The population of the photonic Fock state $|p\rangle$ at temperature $T$ is proportional to $\exp\left(-\frac{\hbar\omega_c p}{KT}\right)$. In our model, we assume: \begin{equation} \label{eq:PopulationFock} \mu=\exp\left(-\frac{\hbar\omega_c}{KT}\right) \end{equation} from where $T=\frac{\hbar\omega_c}{K\ln\left(1/\mu\right)}$.
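With the identification $\mu=\exp(-\hbar\omega_c/(KT))$, the Gibbs state of Eq.~(\ref{eq:Photon_gibbs}) is a geometric distribution over Fock states: the normalization is $c=1-\mu$ and the mean photon number is $\mu/(1-\mu)$, the familiar Bose-Einstein occupation. A small numerical sketch (the finite cutoff stands in for the infinite sum):

```python
def gibbs_populations(mu, cutoff=200):
    """Populations of Fock states |p> for mu = exp(-hbar*omega_c/(K*T)) < 1."""
    c = 1.0 - mu                       # closed-form normalization of the full series
    return [c * mu ** p for p in range(cutoff)]

mu = 0.3
pops = gibbs_populations(mu)
assert abs(sum(pops) - 1.0) < 1e-9                 # normalized (cutoff error is tiny)
mean_n = sum(p * pr for p, pr in enumerate(pops))
assert abs(mean_n - mu / (1.0 - mu)) < 1e-9        # Bose-Einstein mean photon number
```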
The following theorem holds \cite{Kulagin2018}: The thermally stationary state of atoms and fields at temperature $T$ has the form $\rho_{state}=\rho_{ph}\otimes\rho_{at}$, where $\rho_{ph}$ is the state of the photon and $\rho_{at}$ is the state of the atom. \section{Hamiltonian and density matrix} \label{sec:HamilDensity} The quantum master equation (QME) in the Markovian approximation for the density operator $\rho$ of the system takes the following form: \begin{equation} \label{eq:QME} i\hbar\dot{\rho}=\left[H,\rho\right]+iL\left(\rho\right) \end{equation} where $L\left(\rho\right)$ is as follows: \begin{equation} \label{eq:LindbladOperator} \begin{aligned} L\left(\rho\right)&=\sum_{k\in \mathcal{K}}L_k\left(\rho\right)+\sum_{k'\in \mathcal{K}'}L_{k'}\left(\rho\right)\\ &=\sum_{k\in \mathcal{K}}\gamma_k\left(A_k\rho A_k^{\dag}-\frac{1}{2}\left\{\rho, A_k^{\dag}A_k\right\}\right)\\ &+\sum_{k'\in \mathcal{K}'}\gamma_{k'}\left(A_{k'}^{\dag}\rho A_{k'}-\frac{1}{2}\left\{\rho, A_{k'}A_{k'}^{\dag}\right\}\right) \end{aligned} \end{equation} where $\mathcal{K}$ is a graph of the potential photon dissipations between the states that are permitted. The edges and vertices of $\mathcal{K}$ represent the permitted dissipations and the states, respectively. $\mathcal{K}'$ is a graph of the potential photon influxes between the states that are permitted. $L_k\left(\rho\right)$ ($L_{k'}\left(\rho\right)$) is the standard dissipation (influx) superoperator corresponding to the jump operator $A_k$ ($A_{k'}$), and the term $\gamma_k$ ($\gamma_{k'}$) refers to the overall spontaneous emission (influx) rate for photons for $k\in \mathcal{K}$ ($k'\in\mathcal{K}'$). The coupled-system Hamiltonian in Eq.
\eqref{eq:QME} is expressed by the total energy operator: \begin{equation} \label{eq:Hamil} H=H_{\mathcal{A}}+H_{\mathcal{D}}+H_{tun}+H_{spin-flip}+H_{spin-spin} \end{equation} where $H_{tun}$ denotes the quantum tunnelling effect between $H_{\mathcal{A}}$ and $H_{\mathcal{D}}$, which are the associative and dissociative Hamiltonians, respectively. $H_{spin-flip}$ describes the electron spin transition (spin-flip) and $H_{spin-spin}$ denotes the spin-spin interaction between nucleus and electron. The rotating wave approximation (RWA) \cite{Wu2007} is taken into account: \begin{equation} \label{eq:RWACondition} \frac{g}{\hbar\omega_c}\approx\frac{g}{\hbar\omega_n}\ll 1 \end{equation} where $\omega_c$ stands for the cavity frequency and $\omega_n$ for the transition frequency. We presume that $\omega_c=\omega_n$. We will directly quote and transform the definitions of $H_{\mathcal{A}},\ H_{\mathcal{D}},\ H_{tun},\ H_{spin-flip}$ from our earlier paper \cite{Miao2023}. Thus, $H_{\mathcal{A}}$ has the following form: \begin{equation} \label{eq:HamilA} H_{\mathcal{A}}=\left(H_{\mathcal{A},field}+H_{\mathcal{A},mol}+H_{\mathcal{A},int}\right)\sigma_n\sigma_n^{\dag} \end{equation} where $\sigma_n\sigma_n^{\dag}$ verifies that the nuclei are close.
And, \begin{subequations} \label{eq:HamilADetail} \begin{align} &H_{\mathcal{A},field}=\hbar\omega^{\uparrow}a_{\omega^{\uparrow}}^{\dag}a_{\omega^{\uparrow}}+\hbar\omega^{\downarrow}a_{\omega^{\downarrow}}^{\dag}a_{\omega^{\downarrow}}\label{eq:HamilAField}\\ &H_{\mathcal{A},mol}=\hbar\omega^{\uparrow}\sigma_{\omega^{\uparrow}}^{\dag}\sigma_{\omega^{\uparrow}}+\hbar\omega^{\downarrow}\sigma_{\omega^{\downarrow}}^{\dag}\sigma_{\omega^{\downarrow}}\label{eq:HamilAMol}\\ &H_{\mathcal{A},int}=g_{\omega^{\uparrow}}\left(a_{\omega^{\uparrow}}^{\dag}\sigma_{\omega^{\uparrow}}+a_{\omega^{\uparrow}}\sigma_{\omega^{\uparrow}}^{\dag}\right)\\ &+g_{\omega^{\downarrow}}\left(a_{\omega^{\downarrow}}^{\dag}\sigma_{\omega^{\downarrow}}+a_{\omega^{\downarrow}}\sigma_{\omega^{\downarrow}}^{\dag}\right)\label{eq:HamilAInt} \end{align} \end{subequations} where $\hbar$ is the reduced Planck constant or Dirac constant. $H_{\mathcal{A},field}$ is the photon energy operator, $H_{\mathcal{A},mol}$ is the molecule energy operator, $H_{\mathcal{A},int}$ is the molecule-photon interaction operator. $g_{\omega}$ is the coupling strength between the photon mode $\omega$ (with annihilation and creation operators $a_{\omega}$ and $a_{\omega}^{\dag}$, respectively) and the electrons (with excitation and relaxation operators $\sigma_{\omega}^{\dag}$ and $\sigma_{\omega}$, respectively). Then $H_{\mathcal{D}}$ is described in the following form: \begin{equation} \label{eq:HamilD} H_{\mathcal{D}}=\left(H_{\mathcal{D},field}+H_{\mathcal{D},at}+H_{\mathcal{D},int}\right)\sigma_n^{\dag}\sigma_n \end{equation} where $\sigma_n^{\dag}\sigma_n$ verifies that the nuclei are far away.
And, \begin{subequations} \label{eq:HamilDDetail} \begin{align} &H_{\mathcal{D},field}=\hbar\Omega^{\uparrow}a_{\Omega^{\uparrow}}^{\dag}a_{\Omega^{\uparrow}}+\hbar\Omega^{\downarrow}a_{\Omega^{\downarrow}}^{\dag}a_{\Omega^{\downarrow}}\label{eq:HamilDField}\\ &H_{\mathcal{D},at}=\sum_{i=1,2}\left(\hbar\Omega^{\uparrow}\sigma_{\Omega^{\uparrow},i}^{\dag}\sigma_{\Omega^{\uparrow},i}+\hbar\Omega^{\downarrow}\sigma_{\Omega^{\downarrow},i}^{\dag}\sigma_{\Omega^{\downarrow},i}\right)\label{eq:HamilDAt}\\ &H_{\mathcal{D},int}=\sum_{i=1,2}\left\{g_{\Omega^{\uparrow}}\left(a_{\Omega^{\uparrow}}^{\dag}\sigma_{\Omega^{\uparrow},i}+a_{\Omega^{\uparrow}}\sigma_{\Omega^{\uparrow},i}^{\dag}\right)\right.\\ &\left.+g_{\Omega^{\downarrow}}\left(a_{\Omega^{\downarrow}}^{\dag}\sigma_{\Omega^{\downarrow},i}+a_{\Omega^{\downarrow}}\sigma_{\Omega^{\downarrow},i}^{\dag}\right)\right\}\label{eq:HamilDInt} \end{align} \end{subequations} where $H_{\mathcal{D},field}$ is the photon energy operator, $H_{\mathcal{D},at}$ is the atom energy operator, $H_{\mathcal{D},int}$ is atom-photon interaction operator. $g_{\Omega}$ is the coupling strength between the photon mode $\Omega$ (with annihilation and creation operators $a_{\Omega}$ and $a_{\Omega}^{\dag}$, respectively) and the electrons in the atom (with excitation and relaxation operators $\sigma_{\Omega,i}^{\dag}$ and $\sigma_{\Omega,i}$, respectively, here $i$ denotes index of atoms). 
$H_{tun}$ describes the hybridization and de-hybridization, realized by the quantum tunnelling effect; it takes the form: \begin{equation} \label{eq:HamilTDetail} \begin{aligned} H_{tun}&=\zeta_2\sigma_{\omega^{\uparrow}}^{\dag}\sigma_{\omega^{\uparrow}}\sigma_{\omega^{\downarrow}}^{\dag}\sigma_{\omega^{\downarrow}}\left(\sigma_n^{\dag}+\sigma_n\right)\\ &+\zeta_1\sigma_{\omega^{\uparrow}}\sigma_{\omega^{\uparrow}}^{\dag}\sigma_{\omega^{\downarrow}}^{\dag}\sigma_{\omega^{\downarrow}}\left(\sigma_n^{\dag}+\sigma_n\right)\\ &+\zeta_1\sigma_{\omega^{\uparrow}}^{\dag}\sigma_{\omega^{\uparrow}}\sigma_{\omega^{\downarrow}}\sigma_{\omega^{\downarrow}}^{\dag}\left(\sigma_n^{\dag}+\sigma_n\right)\\ &+\zeta_0\sigma_{\omega^{\uparrow}}\sigma_{\omega^{\uparrow}}^{\dag}\sigma_{\omega^{\downarrow}}\sigma_{\omega^{\downarrow}}^{\dag}\left(\sigma_n^{\dag}+\sigma_n\right) \end{aligned} \end{equation} where $\sigma_{\omega^{\uparrow}}^{\dag}\sigma_{\omega^{\uparrow}}\sigma_{\omega^{\downarrow}}^{\dag}\sigma_{\omega^{\downarrow}}$ verifies that two electrons with different spins are at orbital $\Phi_1$ with large tunnelling intensity $\zeta_2$; $\sigma_{\omega^{\uparrow}}\sigma_{\omega^{\uparrow}}^{\dag}\sigma_{\omega^{\downarrow}}^{\dag}\sigma_{\omega^{\downarrow}}$ verifies that the electron with $\uparrow$ is at orbital $\Phi_0$ and the electron with $\downarrow$ is at orbital $\Phi_1$, with low tunnelling intensity $\zeta_1$; $\sigma_{\omega^{\uparrow}}^{\dag}\sigma_{\omega^{\uparrow}}\sigma_{\omega^{\downarrow}}\sigma_{\omega^{\downarrow}}^{\dag}$ verifies that the electron with $\uparrow$ is at orbital $\Phi_1$ and the electron with $\downarrow$ is at orbital $\Phi_0$, with low tunnelling intensity $\zeta_1$; $\sigma_{\omega^{\uparrow}}\sigma_{\omega^{\uparrow}}^{\dag}\sigma_{\omega^{\downarrow}}\sigma_{\omega^{\downarrow}}^{\dag}$ verifies that two electrons with different spins are at orbital $\Phi_0$ with tunnelling intensity $\zeta_0$, which equals $0$.
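The products $\sigma\sigma^{\dag}$ and $\sigma^{\dag}\sigma$ used throughout these Hamiltonians act as projectors onto the two configurations of a two-level degree of freedom, which is how each term is switched on and off. A minimal sketch for the nuclear operator $\sigma_n$ (the matrix representation is our own illustrative convention, with $\sigma_n|1\rangle_n=|0\rangle_n$ as in the model):

```python
import numpy as np

# Two-level lowering operator: sigma|1> = |0>, sigma|0> = 0.
sigma = np.array([[0.0, 1.0],
                  [0.0, 0.0]])

# sigma sigma^† projects onto |0> (nuclei gathered in one cavity, k=0),
# sigma^† sigma projects onto |1> (nuclei in different cavities, k=1).
P_close = sigma @ sigma.T
P_far = sigma.T @ sigma

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

assert np.allclose(P_close @ ket0, ket0) and np.allclose(P_close @ ket1, 0.0)
assert np.allclose(P_far @ ket1, ket1) and np.allclose(P_far @ ket0, 0.0)
assert np.allclose(P_close + P_far, np.eye(2))  # the two projectors resolve the identity
```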
We assume that the electron spin transition only occurs when the electron is in the atomic excited state, and we only consider the spin-spin interaction of the electron with the nucleus when the electron is in the atomic ground state. Thus, $H_{spin-flip}$ takes the form: \begin{widetext} \begin{equation} \label{eq:HamilSpinflip} H_{spin-flip}=\sum_{i=1,2}\left\{\left(\sigma_{\Omega^{\uparrow},i}^{\dag}\sigma_{\Omega^{\uparrow},i}+\sigma_{\Omega^{\downarrow},i}^{\dag}\sigma_{\Omega^{\downarrow},i}\right)\left[\hbar\Omega^sa_{\Omega^s}^{\dag}a_{\Omega^s}+\hbar\Omega^s\sigma_{\Omega^s,i}^{\dag}\sigma_{\Omega^s,i}+g_{\Omega^s}\left(a_{\Omega^s}^{\dag}\sigma_{\Omega^s,i}+a_{\Omega^s}\sigma_{\Omega^s,i}^{\dag}\right)\right]\right\} \end{equation} \end{widetext} where $\sigma_{\Omega^{\uparrow},i}^{\dag}\sigma_{\Omega^{\uparrow},i}+\sigma_{\Omega^{\downarrow},i}^{\dag}\sigma_{\Omega^{\downarrow},i}$ verifies that the electron is in the atomic excited state. $i$ denotes the index of atoms. $H_{spin-spin}$ takes the form: \begin{widetext} \begin{equation} \label{eq:HamilSpinSpin} \begin{aligned} H_{spin-spin}&=\sum_{i=1,2}\left\{\left(\sigma_{\Omega^{\uparrow},i}\sigma_{\Omega^{\uparrow},i}^{\dag}+\sigma_{\Omega^{\downarrow},i}\sigma_{\Omega^{\downarrow},i}^{\dag}\right)\left[\hbar\Omega^sa_{\Omega^s}^{\dag}a_{\Omega^s}+\hbar\Omega^s\sigma_{\Omega^s,i}^{\dag}\sigma_{\Omega^s,i}\right.\right.\\ &\left.\left.+\hbar\Omega^na_{\Omega^n}^{\dag}a_{\Omega^n}+\hbar\Omega^n\sigma_{\Omega^n,i}^{\dag}\sigma_{\Omega^n,i}+g_{en}\left(\sigma_{en,i}+\sigma_{en,i}^{\dag}\right)\right]\right\} \end{aligned} \end{equation} \end{widetext} where $\sigma_{\Omega^{\uparrow},i}\sigma_{\Omega^{\uparrow},i}^{\dag}+\sigma_{\Omega^{\downarrow},i}\sigma_{\Omega^{\downarrow},i}^{\dag}$ verifies that the electron is in the atomic ground state.
And $\sigma_{en,i}$ takes the form: \begin{equation} \label{eq:OperatorElectronNucleus} \sigma_{en,i} = a_{\Omega^s}\sigma_{\Omega^s,i}^{\dag}a_{\Omega^n}^{\dag}\sigma_{\Omega^n,i} \end{equation} and $\sigma_{en,i}^{\dag}$ is its Hermitian conjugate operator. On a $p$-photon state, the photon annihilation and creation operators $a$ and $a^{\dag}$ are described as: \begin{equation} \label{eq:PhotonOperators} \begin{aligned} &\text{if}\ p>0,\ \left\{ \begin{aligned} &a|p\rangle=\sqrt{p}|p-1\rangle,\\ &a^{\dag}|p\rangle=\sqrt{p+1}|p+1\rangle, \end{aligned} \right .\\ &\text{if}\ p=0,\ \left \{ \begin{aligned} &a|0\rangle=0,\\ &a^{\dag}|0\rangle=|1\rangle. \end{aligned} \right .\\ \end{aligned} \end{equation} Operators $a_{\omega^{\uparrow}}$, $a_{\omega^{\downarrow}}$, $a_{\Omega^{\uparrow}}$, $a_{\Omega^{\downarrow}}$, $a_{\Omega^s}$, $a_{\Omega^n}$ and their Hermitian conjugate operators all obey the rules in \eqref{eq:PhotonOperators}. The interaction of the molecule with the electromagnetic field of the cavity, emitting or absorbing a photon with mode $\omega^{\uparrow,\downarrow}$, is described as: \begin{equation} \label{eq:InteractionMolecule} \begin{aligned} &\sigma_{\omega^{\uparrow}}|1\rangle_{\Phi_1}^{\uparrow}|0\rangle_{\Phi_0}^{\uparrow}=|0\rangle_{\Phi_1}^{\uparrow}|1\rangle_{\Phi_0}^{\uparrow},\\ &\sigma_{\omega^{\uparrow}}^{\dag}|0\rangle_{\Phi_1}^{\uparrow}|1\rangle_{\Phi_0}^{\uparrow}=|1\rangle_{\Phi_1}^{\uparrow}|0\rangle_{\Phi_0}^{\uparrow},\\ &\sigma_{\omega^{\downarrow}}|1\rangle_{\Phi_1}^{\downarrow}|0\rangle_{\Phi_0}^{\downarrow}=|0\rangle_{\Phi_1}^{\downarrow}|1\rangle_{\Phi_0}^{\downarrow},\\ &\sigma_{\omega^{\downarrow}}^{\dag}|0\rangle_{\Phi_1}^{\downarrow}|1\rangle_{\Phi_0}^{\downarrow}=|1\rangle_{\Phi_1}^{\downarrow}|0\rangle_{\Phi_0}^{\downarrow}.
\end{aligned} \end{equation} The interaction of an atom with the electromagnetic field of the cavity, emitting or absorbing a photon of mode $\Omega^{\uparrow,\downarrow}$, is described as: \begin{equation} \label{eq:InteractionAtom} \begin{aligned} &\sigma_{\Omega^{\uparrow},i}|1\rangle_{\substack{at_i\\or_0}}^{\uparrow}|0\rangle_{\substack{at_i\\or_{-1}}}^{\uparrow}=|0\rangle_{\substack{at_i\\or_0}}^{\uparrow}|1\rangle_{\substack{at_i\\or_{-1}}}^{\uparrow},\\ &\sigma_{\Omega^{\uparrow},i}^{\dag}|0\rangle_{\substack{at_i\\or_0}}^{\uparrow}|1\rangle_{\substack{at_i\\or_{-1}}}^{\uparrow}=|1\rangle_{\substack{at_i\\or_0}}^{\uparrow}|0\rangle_{\substack{at_i\\or_{-1}}}^{\uparrow},\\ &\sigma_{\Omega^{\downarrow},i}|1\rangle_{\substack{at_i\\or_0}}^{\downarrow}|0\rangle_{\substack{at_i\\or_{-1}}}^{\downarrow}=|0\rangle_{\substack{at_i\\or_0}}^{\downarrow}|1\rangle_{\substack{at_i\\or_{-1}}}^{\downarrow},\\ &\sigma_{\Omega^{\downarrow},i}^{\dag}|0\rangle_{\substack{at_i\\or_0}}^{\downarrow}|1\rangle_{\substack{at_i\\or_{-1}}}^{\downarrow}=|1\rangle_{\substack{at_i\\or_0}}^{\downarrow}|0\rangle_{\substack{at_i\\or_{-1}}}^{\downarrow}. \end{aligned} \end{equation} The nuclear tunnelling operators have the following form: \begin{equation} \label{eq:TunnellingOperators} \begin{aligned} &\sigma_n|1\rangle_n=|0\rangle_n,\\ &\sigma_n^{\dag}|0\rangle_n=|1\rangle_n.
\end{aligned} \end{equation} The interaction of an atom with the electromagnetic field of the cavity, emitting or absorbing a photon of mode $\Omega^s$ and causing an electron spin-flip, is described as: \begin{equation} \label{eq:InteractionAtomSpin} \begin{aligned} &\sigma_{\Omega^s,i}|1\rangle_{\substack{at_i,\\or_0}}^{\uparrow}|0\rangle_{\substack{at_i,\\or_0}}^{\downarrow}=|0\rangle_{\substack{at_i,\\or_0}}^{\uparrow}|1\rangle_{\substack{at_i,\\or_0}}^{\downarrow},\\ &\sigma_{\Omega^s,i}^{\dag}|0\rangle_{\substack{at_i,\\or_0}}^{\uparrow}|1\rangle_{\substack{at_i,\\or_0}}^{\downarrow}=|1\rangle_{\substack{at_i,\\or_0}}^{\uparrow}|0\rangle_{\substack{at_i,\\or_0}}^{\downarrow},\\ &\sigma_{\Omega^s,i}|1\rangle_{\substack{at_i,\\or_{-1}}}^{\uparrow}|0\rangle_{\substack{at_i,\\or_{-1}}}^{\downarrow}=|0\rangle_{\substack{at_i,\\or_{-1}}}^{\uparrow}|1\rangle_{\substack{at_i,\\or_{-1}}}^{\downarrow},\\ &\sigma_{\Omega^s,i}^{\dag}|0\rangle_{\substack{at_i,\\or_{-1}}}^{\uparrow}|1\rangle_{\substack{at_i,\\or_{-1}}}^{\downarrow}=|1\rangle_{\substack{at_i,\\or_{-1}}}^{\uparrow}|0\rangle_{\substack{at_i,\\or_{-1}}}^{\downarrow}. \end{aligned} \end{equation} The interaction of a nucleus with the electromagnetic field of the cavity, emitting or absorbing a photon of mode $\Omega^n$ and causing a nuclear spin-flip, is described as: \begin{equation} \label{eq:InteractionNucleusSpin} \begin{aligned} &\sigma_{\Omega^n,i}|1\rangle_{n_i}=|0\rangle_{n_i},\\ &\sigma_{\Omega^n,i}^{\dag}|0\rangle_{n_i}=|1\rangle_{n_i}. \end{aligned} \end{equation} \section{Simulations and results} \label{sec:Simulations} \begin{figure} \caption{(online color) The evolution with consideration of electron spin transition and spin-spin interaction between electrons and nuclei. {\bf (a)} \label{fig:Formation} \end{figure} We now introduce the numerical method used to simulate the evolution of the system. The solution $\rho\left(t\right)$ of Eq. \eqref{eq:QME} may be found approximately as a sequence of two steps.
In the first step we make one step in the solution of the unitary part of Eq. \eqref{eq:QME}: \begin{equation} \label{eq:UnitaryPart} \tilde{\rho}\left(t+dt\right)=\exp\left(-\frac{i}{\hbar}Hdt\right)\rho\left(t\right)\exp\left(\frac{i}{\hbar}Hdt\right) \end{equation} and in the second step, we make one step in the solution of Eq. \eqref{eq:QME} with the commutator removed: \begin{equation} \label{eq:Solution} \rho\left(t+dt\right)=\tilde{\rho}\left(t+dt\right)+\frac{1}{\hbar}L\left(\tilde{\rho}\left(t+dt\right)\right)dt \end{equation} In the simulations we set: $\hbar=1$, $\Omega^{\uparrow}=\Omega^{\downarrow}=10^{10}$, $\omega^{\uparrow}=\omega^{\downarrow}=5\times10^9$, $\Omega^s=10^9$, $\Omega^n=10^8$; $g_{\Omega^{\uparrow}}=g_{\Omega^{\downarrow}}=10^8$, $g_{\omega^{\uparrow}}=g_{\omega^{\downarrow}}=5\times10^7$, $g_{\Omega^s}=10^7$, $g_{en}=10^6$, $\zeta_2=10^9$, $\zeta_1=10^7$, $\zeta_0=0$. We consider the leakage of all photon types in the Markovian open system, with all dissipation rates equal: $\gamma_{\Omega^{\uparrow}}=\gamma_{\Omega^{\downarrow}}=\gamma_{\omega^{\uparrow}}=\gamma_{\omega^{\downarrow}}=\gamma_{\Omega^s}=\gamma_{\Omega^n}=10^7$. \subsection{Formation of neutral hydrogen molecule} \label{subsec:Formation} In Subchapter \ref{subsec:NuclearElectronSpin}, we introduced a weak spin-spin interaction between electrons and nuclei. The initial state is $|\Psi_{initial}\rangle$, described in Fig. \ref{fig:Spin-spinInteraction}, where both electrons have spin $\downarrow$ and occupy the atomic ground orbitals as above, and both nuclei have spin $\uparrow$. The spin-spin interaction is allowed only when the electron is in the atomic ground orbital, close to the nucleus. Compared with the independent electron spin-transition strength $g_{\Omega^s}$, the strength $g_{en}$ of the spin-spin interaction between nucleus and electron is extremely small.
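The two-step splitting of Eqs. \eqref{eq:UnitaryPart} and \eqref{eq:Solution} can be sketched on a toy two-level system with a single photon-leakage channel; all parameter values below are illustrative placeholders, not the simulation parameters listed above:

```python
import numpy as np

# Toy system: H = (w/2) sigma_z, one Lindblad decay channel L = sqrt(gamma) sigma_-.
hbar, w, gamma, dt, nsteps = 1.0, 1.0, 0.1, 1e-2, 1000

H = 0.5 * w * np.diag([1.0, -1.0])             # basis ordered (|e>, |g>)
Lop = np.sqrt(gamma) * np.array([[0.0, 0.0],
                                 [1.0, 0.0]])  # sigma_- = |g><e|

# H is diagonal here, so exp(-i H dt / hbar) is just a diagonal of phases
U = np.diag(np.exp(-1j * np.diag(H) * dt / hbar))

def dissipator(rho, L):
    """Lindblad dissipator L rho L^dag - (1/2){L^dag L, rho}."""
    return L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)

rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)  # start in |e><e|
for _ in range(nsteps):
    rho = U @ rho @ U.conj().T             # step 1: unitary part, Eq. (UnitaryPart)
    rho = rho + dt * dissipator(rho, Lop)  # step 2: dissipative part, Eq. (Solution)

t = nsteps * dt
# Excited-state population follows exp(-gamma t); the trace stays 1
assert abs(rho[0, 0].real - np.exp(-gamma * t)) < 1e-2
assert abs(np.trace(rho).real - 1.0) < 1e-8
```

For this diagonal toy Hamiltonian the populations are unaffected by the unitary step, so the excited-state population decays at the Lindblad rate, giving a simple consistency check on the splitting.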
Thus, we provisionally neglect the independent electron spin transition in the ground orbitals in order to study the effect of the spin-spin interaction on the formation of the neutral hydrogen molecule. We assume that $\mu_{\Omega^{\uparrow}}=\mu_{\Omega^{\downarrow}}=\mu_{\Omega^s}=\mu_{\Omega^n}=0.5$ and $\mu_{\omega^{\uparrow}}=\mu_{\omega^{\downarrow}}=0$. This means that photons with modes $\Omega^{\uparrow}$, $\Omega^{\downarrow}$, $\Omega^s$, $\Omega^n$ will be continuously injected into the system, while photons with modes $\omega^{\uparrow}$ and $\omega^{\downarrow}$ will not be replenished. \subsection{Effect of electron spin} \label{subsec:EffectElectronSpin} \begin{figure*} \caption{(online color) Effect of photonic mode $\Omega^s$. In {\bf (a)} \label{fig:EffectElectronSpin} \end{figure*} \begin{figure*} \caption{(online color) Effect of photonic mode $\Omega^n$. In {\bf (a)} \label{fig:EffectNuclearSpin} \end{figure*} We now investigate the effect of photonic mode $\Omega^s$ on the evolution and the formation of the neutral hydrogen molecule. We assume that $\mu_{\Omega^{\uparrow}}=\mu_{\Omega^{\downarrow}}=\mu_{\Omega^n}=0.5$ and $\mu_{\omega^{\uparrow}}=\mu_{\omega^{\downarrow}}=0$. In Fig. \ref{fig:EffectElectronSpin} \textbf{(a)}, we consider four cases with different values of $\mu_{\Omega^s}$: $\mu_{\Omega^s}^1=0$, $\mu_{\Omega^s}^2=0.1$, $\mu_{\Omega^s}^3=0.3$, $\mu_{\Omega^s}^4=0.5$. We find that the neutral hydrogen molecule forms more quickly for larger $\mu_{\Omega^s}$. Formation is slowest for $\mu_{\Omega^s}^1=0$ (in this case, $T_{\Omega^s}^1=0\,K$), indicated by the red solid curve, and fastest for $\mu_{\Omega^s}^4=0.5$, indicated by the green dash-dotted curve. The probability of $|\mathrm{H}_2\rangle$ never approaches $1$ when $\mu_{\Omega^s}=0$.
However, once $\mu_{\Omega^s}>0$, the probability of $|\mathrm{H}_2\rangle$ reaches $1$ provided the evolution lasts long enough. Atomic photons are continuously reinjected into the system, while molecular photons are not regenerated. As a result, the whole system gradually relaxes to a stable molecular state. We now raise $\mu_{\Omega^s}$ from $0$ to $0.5$. In each case we record the probability of $|\mathrm{H}_2\rangle$ at evolution time $0.0012\,s$. The trend of $|\mathrm{H}_2\rangle$ with growing $\mu_{\Omega^s}$ can be seen in Fig. \ref{fig:EffectElectronSpin} \textbf{(b)}. The probability of $|\mathrm{H}_2\rangle$ is close to $0$ when $\mu_{\Omega^s}$ is near $0$; it increases as $\mu_{\Omega^s}$ rises and then plateaus close to $1$. From the inset of Fig. \ref{fig:EffectElectronSpin} \textbf{(b)}, we see that the $T$-dependent probability curve follows the same trend as the $\mu$-dependent curve, but with a hysteresis near $0\,K$. \subsection{Effect of nuclear spin} \label{subsec:EffectNuclearSpin} Next we investigate the influence of photonic mode $\Omega^n$ on the evolution and the formation of the neutral hydrogen molecule. We assume that $\mu_{\Omega^{\uparrow}}=\mu_{\Omega^{\downarrow}}=\mu_{\Omega^s}=0.5$ and $\mu_{\omega^{\uparrow}}=\mu_{\omega^{\downarrow}}=0$. In Fig. \ref{fig:EffectNuclearSpin} \textbf{(a)}, we consider four cases with different values of $\mu_{\Omega^n}$: $\mu_{\Omega^n}^1=0$, $\mu_{\Omega^n}^2=0.1$, $\mu_{\Omega^n}^3=0.3$, $\mu_{\Omega^n}^4=0.5$. We find that the neutral hydrogen molecule forms more quickly for larger $\mu_{\Omega^n}$. However, compared with $\Omega^s$, the promoting effect of $\Omega^n$ on the formation of the neutral hydrogen molecule is weaker, owing to the weaker spin-spin interaction.
Formation is slowest for $\mu_{\Omega^n}^1=0$ (in this case, $T_{\Omega^n}^1=0\,K$), indicated by the red solid curve. Even so, the probability of $|\mathrm{H}_2\rangle$ is much higher than in the corresponding case where $\mu_{\Omega^s}=0$. This is because in the initial state we have only one electron spin photon, but there are two nuclei, both with $\uparrow$, which means that at most two nuclear spin photons can be released. The fastest formation occurs for $\mu_{\Omega^n}^4=0.5$, indicated by the green dash-dotted curve. The probability of $|\mathrm{H}_2\rangle$ never approaches $1$ when $\mu_{\Omega^n}=0$. However, once $\mu_{\Omega^n}>0$, the probability of $|\mathrm{H}_2\rangle$ reaches $1$ provided the evolution lasts long enough. We now raise $\mu_{\Omega^n}$ from $0$ to $0.5$. In each case we record the probability of $|\mathrm{H}_2\rangle$ at evolution time $0.0012\,s$. The trend of $|\mathrm{H}_2\rangle$ with growing $\mu_{\Omega^n}$ can be seen in Fig. \ref{fig:EffectNuclearSpin} \textbf{(b)}. The probability of $|\mathrm{H}_2\rangle$ is close to $0.5$ when $\mu_{\Omega^n}$ is near $0$; it increases as $\mu_{\Omega^n}$ rises and then plateaus close to $1$. For photonic mode $\Omega^n$, the $T$-dependent probability curve also exhibits a hysteresis. \section{Concluding discussion and future work} \label{sec:ConcluFuture} In this paper, we simulate the formation of the neutral hydrogen molecule in the association-dissociation model. We introduce a spin-spin interaction into the system and obtain the following results. First, we studied the spin-spin interaction between electrons and nuclei in Subchapter \ref{subsec:Formation}, where we investigated the formation of the hydrogen molecule.
Then the effects of temperature variation of $\Omega^s$ and $\Omega^n$ on the formation of the neutral hydrogen molecule are presented in Subchapter \ref{subsec:EffectElectronSpin} and Subchapter \ref{subsec:EffectNuclearSpin}: the higher the temperature, the faster the formation of the neutral hydrogen molecule. We have established the adequacy of our model for describing chemical scenarios, taking into account the effects of photons of various modes. In particular, the effect of the nuclear spin photon is present, but it is much weaker than that of the electron spin photon. Comparing Fig. \ref{fig:EffectElectronSpin} and Fig. \ref{fig:EffectNuclearSpin} obtained above, we see that the mode $\Omega^s$ affects the association reaction much more than the mode $\Omega^n$. Our model is still coarse, but its advantages are simplicity and scalability. In the future it can be generalized in many directions, laying the foundation for more complex chemical and biological models. \begin{acknowledgments} The reported study was funded by China Scholarship Council, project number 202108090483. The authors acknowledge Center for Collective Usage of Ultra HPC Resources (https://www.parallel.ru/) at Lomonosov Moscow State University for providing supercomputer resources that have contributed to the research results reported within this paper. \end{acknowledgments} \end{document}
\begin{document} \title{ An equilibrated fluxes approach to the certified descent algorithm for shape optimization using conforming finite element and discontinuous Galerkin discretizations } \renewcommand{\thefootnote}{\fnsymbol{footnote}} \footnotetext[1]{ CMAP, Inria, Ecole polytechnique, CNRS, Universit\'e Paris-Saclay, 91128 Palaiseau, France. \texttt{[email protected]} } \footnotetext[2]{ DRI Institut Polytechnique des Sciences Avanc\'ees, 63 Boulevard de Brandebourg, 94200 Ivry-sur-Seine, France. } \footnotetext[3]{ \textit{Current affiliation:} Laboratori de C\`alcul Num\`eric, E.T.S. de Ingenieros de Caminos, Universitat Polit\`ecnica de Catalunya -- BarcelonaTech, Jordi Girona 1, E-08034 Barcelona, Spain. } \footnotetext{ \textit{ M. Giacomini is member of the DeFI team at Inria Saclay \^Ile-de-France. } } \renewcommand{\thefootnote}{\arabic{footnote}} \begin{abstract} The certified descent algorithm (CDA) is a gradient-based method for shape optimization which certifies that the direction computed using the shape gradient is a genuine descent direction for the objective functional under analysis. It relies on the computation of an upper bound of the error introduced by the finite element approximation of the shape gradient. In this paper, we present a goal-oriented error estimator which depends solely on local quantities and is fully-computable. By means of the equilibrated fluxes approach, we construct a unified strategy valid for both conforming finite element approximations and discontinuous Galerkin discretizations. The new variant of the CDA is tested on the inverse identification problem of electrical impedance tomography: both its ability to identify a genuine descent direction at each iteration and its reliable stopping criterion are confirmed.
\end{abstract} \keywords{ Shape optimization; Certified Descent Algorithm; A posteriori error estimator; Equilibrated fluxes; Conforming Finite Element; Discontinuous Galerkin; Electrical Impedance Tomography } \section{Introduction} \label{ref:intro} Shape optimization problems - that is, optimization problems featuring shape-dependent functionals - have been successfully tackled in the literature by means of gradient-based methods. The major difficulty with the existing strategies for solving shape optimization problems is the choice of the stopping criterion when moving from the continuous framework to its discrete counterpart, e.g. by means of a finite element approximation. As a matter of fact, stopping criteria based on the norm of the shape gradient may never be fulfilled if the tolerance is chosen too small with respect to the discretization. In order to circumvent this issue, in \cite{giacomini:hal-01201914} we proposed a strategy to solve shape optimization problems based on a certification procedure. The basic idea relies on the derivation of an upper bound of the error due to the approximation of the shape gradient, used to verify at each iteration that the direction computed using the discretized shape gradient is a genuine descent direction for the objective functional. The resulting algorithm, obtained by coupling a descent method based on the shape gradient with an \emph{a posteriori} error estimator, proved to stop automatically after generating a sequence of shapes that improved the value of the objective functional at each iteration. Several works on the adaptive finite element method (AFEM) for shape optimization may be found in the literature \cite{AFEM-Maute, Alauzet, AFEM-Kikuchi}. As a matter of fact, the idea of coupling shape optimization and \emph{a posteriori} error estimators is not new.
It can be traced back to the work of Banichuk \etal \cite{MeshRef-Banichuk} and has been later extended by Morin \etal \cite{COV:8787109}: in these works, the authors split the error into a component due to the approximation of the geometry and another one related to the discretization of the boundary value problem. Concerning the latter, a key aspect of a \emph{good} estimator is the low computational cost associated to its derivation whence the great interest in estimators constructed using solely local quantities. The construction of \emph{a posteriori} error estimators in the context of finite element approximations is another extensively investigated subject and we refer the reader to \cite{ainsworth2000posteriori, nochetto2009, verfurth2013} for a complete introduction to the field. In this work we consider a strategy - known as equilibrated fluxes approach - to derive fully-computable guaranteed \emph{a posteriori} error estimators for both conforming and discontinuous Galerkin discretizations. Within the framework of conforming finite element, local $\Hdiv$-reconstructions leading to fully-computable upper bounds have been studied by several authors \cite{DBLP:journals/moc/DestuynderM99, doi:10.1137/S0036142903433790, Nicaise01042008, MR2373174, Vohralik2011}. Moreover, we refer to \cite{doi:10.1137/060665993, MR2261011, MR2427189, MR2601287} for the main results obtained in recent years on equilibrated fluxes for discontinuous Galerkin formulations. The present paper starts from the framework introduced in the aforementioned work \cite{giacomini:hal-01201914}, where we neglect the error due to the approximation of the geometry in order to focus on the component arising from the discretization of the governing equation. The novelty of our approach resides in the certification procedure for the descent direction, for which fully-computable \emph{a posteriori} error estimators are required. 
As a matter of fact, to the best of our knowledge all the works in the literature on AFEM for shape optimization focus on the qualitative information provided by the error estimators to drive mesh adaptation and do not exploit the quantitative information they carry to improve and automatize the overall optimization strategy. \\ To construct the required fully-computable \emph{a posteriori} error estimators, we follow the approach proposed by Ern and Vohral\'ik in \cite{doi:10.1137/130950100}: after presenting a thorough review of the recent developments in the field, the authors depict a unified framework for the construction and the analysis of \emph{a posteriori} error estimators based on equilibrated fluxes. Thus, on the one hand, for the conforming finite element approximation we construct the equilibrated fluxes by introducing the mixed finite element formulation of a local boundary value problem with Neumann boundary conditions over patches of elements. On the other hand, for the discontinuous Galerkin discretization we reconstruct the fluxes element-wise in the Raviart-Thomas finite element space by specifying the values of the degrees of freedom using an average of the gradient of the solution at the interfaces. We especially highlight the interest of this latter approach for the study of the problem of electrical impedance tomography (EIT). As a matter of fact, there has been a growing interest in recent years for a particular class of discontinuous Galerkin methods - known as symmetric weighted interior penalty discontinuous Galerkin - for problems featuring an inhomogeneous diffusion tensor \cite{doi:10.1137/050634736, MR2491426} as the one appearing in the EIT. The rest of the paper is organized as follows. In section \ref{ref:ShapePB}, we recall the general formulation of a shape optimization problem, the so-called boundary variation algorithm and its improved version known as certified descent algorithm. 
Then, we discuss the strategy to construct the \emph{a posteriori} estimator of the error in the shape gradient (Section \ref{ref:errorQoI}) and we introduce the problem of electrical impedance tomography (Section \ref{ref:InversePB}). In sections \ref{ref:ConformingFE} and \ref{ref:DiscontinuousGalerkin} we provide the details of the discretized formulations and the equilibrated fluxes estimators for the conforming finite element and the discontinuous Galerkin approximations respectively. Finally, in section \ref{ref:numerics} we present some numerical tests of the application of the CDA featuring the equilibrated fluxes estimators to the EIT problem and section \ref{ref:conclusion} summarizes our results. \section{Gradient-based methods for shape optimization} \label{ref:ShapePB} In this section, we recall the abstract formulation of a shape optimization problem and a gradient-based method to solve it. Let $\Omega \subset \mathbb{R}^d$ ($d \geq 2$) be an open domain with Lipschitz boundary $\partial\Omega$. We introduce a separable Hilbert space $V_\Omega$ depending on $\Omega$, a continuous bilinear form $a_\Omega(\cdot,\cdot):V_\Omega \times V_\Omega \rightarrow \mathbb{R}$ and a continuous linear form $F_\Omega(\cdot)$ on $V_\Omega$. We define the following state problem in $\Omega$: we seek $u_\Omega \in V_\Omega$ such that \begin{equation} a_\Omega(u_\Omega,\delta u) = F_\Omega(\delta u) \quad \forall \delta u \in V_\Omega . \label{eq:state} \end{equation} Under the assumption that the bilinear form satisfies the inf-sup condition $$ \adjustlimits\inf_{w \in V_\Omega} \sup_{v \in V_\Omega} \frac{a_\Omega(v,w)}{\|v\| \|w\|} = \adjustlimits\inf_{v \in V_\Omega} \sup_{w \in V_\Omega} \frac{a_\Omega(v,w)}{\|v\| \|w\|} > 0 , $$ problem \eqref{eq:state} has a unique solution $u_\Omega$. Let us consider a cost functional $J(\Omega)=j(\Omega,u_\Omega)$ which depends on the domain $\Omega$ itself and on the solution $u_\Omega$ of the state equation.
We denote the set of admissible domains in $\mathbb{R}^d$ with $\mathcal{U}_{\text{ad}}$ and we introduce the following problem for the minimization of the functional $J(\Omega)$: \begin{equation} \min_{\Omega \in \mathcal{U}_{\text{ad}}} J(\Omega) . \label{eq:shapeOpt} \end{equation} Hence, we seek a domain $\Omega$ that minimizes the functional $j(\Omega,u)$ under the constraint that $u$ is solution of the state equation \eqref{eq:state}. In the literature, \eqref{eq:shapeOpt} is called a shape optimization problem, that is a PDE-constrained optimization problem of a shape-dependent functional. \subsection{Optimize-then-discretize: the boundary variation algorithm} \label{ref:BVA} This work exploits a gradient-based method for the numerical approximation of problem \eqref{eq:shapeOpt}. In particular, two main approaches have been proposed in the literature: the discretize-then-optimize strategy and the optimize-then-discretize one. The former relies on the idea of computing a discretized version of the objective functional and subsequently constructing its gradient to run the optimization procedure. The latter works the other way around, by first computing the gradient of the cost functional and then discretizing it for the optimization loop. The discretize-then-optimize strategy has two main drawbacks: on the one hand, the discretized functional may not be differentiable, thus limiting the possibility of using a gradient method; on the other hand, this approach may suffer from severe mesh dependency. Hence, we consider an optimize-then-discretize approach for problem \eqref{eq:shapeOpt} by studying a variant of the boundary variation algorithm (BVA) discussed in \cite{smo-AP}: this method relies on the computation of the so-called shape gradient which arises from the differentiation of the functional with respect to the shape (cf. section \ref{ref:shapeDifferentiation}). 
The key aspect of the BVA is the computation of a descent direction for $J(\Omega)$, that is we seek a direction $\boldsymbol\theta$ along which the objective functional decreases. Once a descent direction has been identified, the domain is deformed by means of a perturbation of the identity map $(\Id + \boldsymbol\theta)\Omega$. \subsubsection{Differentiation with respect to the shape} \label{ref:shapeDifferentiation} Let $X \subset W^{1,\infty}(\Omega;\mathbb{R}^d)$ be a Banach space and $\boldsymbol\theta \in X$ be an admissible smooth deformation of $\Omega$. The cost functional $J(\Omega)$ is said to be $X$-differentiable at $\Omega \in \mathcal{U}_{\text{ad}}$ if there exists a continuous linear form $dJ(\Omega)$ on $X$ such that $\forall \boldsymbol\theta \in X$ $$ J((\Id+\boldsymbol\theta)\Omega) = J(\Omega) + \langle dJ(\Omega) , \boldsymbol\theta \rangle + o(\boldsymbol\theta). $$ Several approaches are feasible to compute the shape gradient and we refer to \cite{giacomini:hal-01201914} for a brief review of the existing techniques. In this work, we consider the material derivative approach \cite{sokolowski1992introduction}. Let us define a diffeomorphism $\varphi: \mathbb{R}^d \rightarrow \mathbb{R}^d$ such that every admissible set in $\mathcal{U}_{\text{ad}}$ may be written as $\Omega_{\varphi} \coloneqq \varphi(\Omega)$. \\ We introduce the Lagrangian functional, defined for every admissible open set $\Omega$ and every $u, \ p \in V_\Omega$ by \begin{equation} \mathcal{L}(\Omega,u,p) = j(\Omega,u) + a_\Omega(u,p) - F_\Omega(p). \label{eq:Lagrangian} \end{equation} We define the following adjoint problem, in which we seek $p_\Omega \in V_\Omega$ such that \begin{equation} a_\Omega(\delta p,p_\Omega) + \left\langle \frac{\partial j}{\partial u} (\Omega,u_\Omega), \delta p \right\rangle = 0 \quad \forall \delta p \in V_\Omega . 
\label{eq:adjoint} \end{equation} Moreover, all functions $u_\varphi,\ p_\varphi \in V_{\Omega_\varphi}$ defined on the deformed domain $\Omega_{\varphi}$ may be mapped to the fixed reference domain $\Omega$ as follows: \begin{align*} & u_\varphi \coloneqq u\circ\varphi^{-1} \quad , \quad u \in V_\Omega , \\ & p_\varphi\coloneqq p\circ\varphi^{-1} \quad , \quad p \in V_\Omega . \end{align*} We admit that $u \mapsto u_\varphi$ is a one-to-one map between $V_\Omega$ and $V_{\Omega_\varphi}$. The Lagrangian \eqref{eq:Lagrangian} is said to admit a material derivative if there exists a linear form $\partial \mathcal{L}/\partial \varphi$ such that $$ \mathcal{L}(\Omega_{\varphi},u_{\varphi},p_{\varphi}) = \mathcal{L}(\Omega,u,p) + \left\langle \frac{\partial \mathcal{L}}{\partial \varphi}(\Omega,u,p) , \boldsymbol\theta \right\rangle + o(\boldsymbol\theta) $$ where $\varphi=\Id+\boldsymbol\theta$. Provided that $u_\varphi$ is differentiable with respect to $\varphi$ at $\varphi=\Id$ in $V_{\Omega_\varphi}$, from the fast derivation method of C\'ea \cite{Cea1986} we obtain the following expression for the shape derivative: \begin{equation} \langle dJ(\Omega),\boldsymbol\theta \rangle = \left\langle \frac{\partial \mathcal{L}}{\partial \varphi}(\Omega,u_\Omega,p_\Omega) , \boldsymbol\theta \right\rangle. \label{eq:shapeDerDiff} \end{equation} \begin{rmrk} The application of the fast derivation method of C\'ea requires some caution. In \cite{PANTZ-sauts}, Pantz proves that if the solution of the state problem lacks regularity (e.g. in the case of interface problems), the aforementioned method may lead to inaccurate expressions of the shape derivative. As a matter of fact, within the context of interface problems - such as the electrical impedance tomography problem discussed in section \ref{ref:InversePB} - the discontinuity of the conductivity parameter causes the solution of the state problem to be non-differentiable in the classical sense.
Hence, a modified Lagrangian functional has to be introduced to correctly compute the expression of the shape derivative by exploiting the regularity of the restrictions of the solution of the state problem to the subdomains excluding the interface. For a detailed description of this procedure and possible alternative approaches, we refer to \cite{PANTZ-sauts, LaurainSturm2015}. \end{rmrk} We remark that the previously introduced shape derivative belongs to the dual space $X^*$. The corresponding shape gradient is obtained as its Riesz representative by introducing an appropriate scalar product on the space $X$. For the sake of simplicity, henceforth we consider $X$ to be a Hilbert space. Under this assumption, we may define the following variational problem to compute a descent direction for $J(\Omega)$: we seek $\boldsymbol\theta \in X$ such that \begin{equation} ( \boldsymbol\theta, \boldsymbol\delta\boldsymbol\theta )_X + \langle dJ(\Omega),\boldsymbol\delta\boldsymbol\theta \rangle = 0 \quad \forall \boldsymbol\delta\boldsymbol\theta \in X . \label{eq:variationalP} \end{equation} A key aspect in \eqref{eq:variationalP} is the choice of an appropriate scalar product. Several approaches have been proposed in the literature, spanning from the traction method inspired by the linear elasticity equation \cite{tractionMethod} to the classical $L^2$ and $H^1$ scalar products \cite{Dogan20073898}. A thorough discussion on this subject is presented in \cite{MR3582826} where the authors employ a Steklov-Poincar\'e-type metric to devise novel efficient algorithms based on shape derivatives. For the rest of this paper, the classical $H^1$ scalar product is employed to construct the descent direction $\boldsymbol\theta$. 
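The computation of the descent direction from \eqref{eq:variationalP} amounts to solving a linear system with the Gram matrix of the chosen scalar product. The following sketch does this in a deliberately simplified setting: a 1D interval with P1 elements and a placeholder load vector standing in for the discretized shape derivative (neither the geometry nor the load corresponds to the EIT problem of this paper):

```python
import numpy as np

n = 50
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# P1 stiffness and mass matrices; A = M + K realises the H^1 inner product
K = (2.0 * np.eye(n + 1) - np.eye(n + 1, k=1) - np.eye(n + 1, k=-1)) / h
K[0, 0] = K[-1, -1] = 1.0 / h
M = h / 6.0 * (4.0 * np.eye(n + 1) + np.eye(n + 1, k=1) + np.eye(n + 1, k=-1))
M[0, 0] = M[-1, -1] = h / 3.0
A = M + K

# Placeholder nodal values of the shape derivative <d_h J, .> (illustrative only)
g = np.sin(np.pi * x) * h

# (theta, v)_{H^1} + <dJ, v> = 0 for all v  ->  A theta = -g
theta = np.linalg.solve(A, -g)

# By construction <dJ, theta> = -theta^T A theta < 0: a descent direction
assert g @ theta < 0.0
```

Because the $H^1$ Gram matrix is symmetric positive-definite, the computed $\boldsymbol\theta$ automatically satisfies $\langle dJ,\boldsymbol\theta\rangle=-\|\boldsymbol\theta\|_{H^1}^2<0$ at the discrete level.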
This choice is motivated on the one hand by its simplicity from the point of view of the implementation and on the other hand by the additional regularity it introduces with respect to the $L^2$ scalar product, thus leading to smoother deformations of the shape as we will observe in the numerical simulations in subsections \ref{ref:1incl} and \ref{ref:2incl}. \subsection{The certified descent algorithm} \label{ref:CDA} In this section, we introduce the finite element approximations of the state problem \eqref{eq:state} and adjoint problem \eqref{eq:adjoint} and we present the conditions that the discretized direction $\boldsymbol\theta^h$ has to fulfill in order to be a genuine descent direction for the functional $J(\Omega)$. \\ First, we define the bilinear form $a_\Omega^h(\cdot,\cdot)$ and the linear form $F_\Omega^h(\cdot)$ associated with the following discrete state problem: we seek $u_\Omega^h \in V_\Omega^{h,\ell}$ such that \begin{equation} a_\Omega^h(u_\Omega^h,\delta u^h) = F_\Omega^h(\delta u^h) \quad \forall \delta u^h \in V_\Omega^{h,\ell} \label{eq:stateDiscrete} \end{equation} where $V_\Omega^{h,\ell}$ is an appropriate finite element or discontinuous Galerkin approximation space featuring basis functions of degree $\ell$. Following the same procedure, we introduce the discrete adjoint problem which consists in seeking $p_\Omega^h \in V_\Omega^{h,\ell}$ such that \begin{equation*} a_\Omega^h(\delta p^h, p_\Omega^h) + \left\langle \frac{\partial j}{\partial u} (\Omega,u_\Omega^h), \delta p^h \right\rangle = 0 \quad \forall \delta p^h \in V_\Omega^{h,\ell} . \label{eq:adjointDiscreteDJ} \end{equation*} The details on the approximation space $V_\Omega^{h,\ell}$ will be discussed in section \ref{ref:ConformingFE} for the case of conforming finite element and in section \ref{ref:DiscontinuousGalerkin} for the discontinuous Galerkin approximation. 
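At the algebraic level, the discrete state and adjoint problems reduce to one linear solve with the system matrix and one with its transpose. The following is a minimal sketch with placeholder data (the matrix and the quadratic cost are illustrative assumptions, not the discretized EIT operators):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
# Stand-in for the discrete bilinear form a_h(.,.); nonsymmetric in general
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
f = rng.standard_normal(n)

u = np.linalg.solve(A, f)        # state:  a_h(u, v) = F_h(v)   ->  A u = f

# Illustrative cost j(u) = 0.5 ||u - u_t||^2, so dj/du = u - u_t
u_t = np.zeros(n)
dj_du = u - u_t

# adjoint: a_h(v, p) + <dj/du, v> = 0 for all v  ->  A^T p = -dj/du
p = np.linalg.solve(A.T, -dj_du)

assert np.allclose(A.T @ p, -dj_du)
```

The transpose in the adjoint solve reflects that the test and trial slots of $a_\Omega^h(\cdot,\cdot)$ are swapped between the state and adjoint problems.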
\\ We may now introduce the discretized shape derivative $\langle d_h J(\Omega),\boldsymbol\delta\boldsymbol\theta \rangle$ defined as \begin{equation} \langle d_h J(\Omega),\boldsymbol\delta\boldsymbol\theta \rangle \coloneqq \left\langle \frac{\partial \mathcal{L}}{\partial \varphi}(\Omega,u_\Omega^h,p_\Omega^h), \boldsymbol\delta\boldsymbol\theta \right\rangle. \label{eq:discreteShapeGrad} \end{equation} The discretized direction $\boldsymbol\theta^h \in X$ is computed as the solution of problem \eqref{eq:variationalP} substituting $d J(\Omega)$ by $d_h J(\Omega)$. We recall the following definition: \begin{defin} A direction $\boldsymbol\theta$ is said to be a genuine descent direction for the functional $J(\Omega)$ if \begin{equation} \langle d J(\Omega),\boldsymbol\theta \rangle < 0 . \label{eq:descentDirection} \end{equation} \end{defin} \noindent It is straightforward to observe that a direction $\boldsymbol\theta$ fulfilling \eqref{eq:descentDirection} is such that $J(\Omega)$ decreases along $\boldsymbol\theta$, that is $J((\Id + \boldsymbol\theta)\Omega) < J(\Omega)$. \\ Nevertheless, due to the numerical error introduced by the finite element discretization, even though $\langle d_h J(\Omega),\boldsymbol\theta^h \rangle < 0$, \ $\boldsymbol\theta^h$ is not necessarily a genuine descent direction for the functional $J(\Omega)$. In the following subsection, we introduce the notion of certified descent direction and we describe a procedure that allows us to identify a genuine descent direction for $J(\Omega)$ by solving problem \eqref{eq:variationalP} with the discretized shape derivative $d_h J(\Omega)$ and subsequently accounting for the error in the shape gradient. \subsubsection{Certification of a genuine descent direction} \label{ref:certification} Let us define the error $E^h$ due to the approximation of the shape gradient as follows: \begin{equation} E^h \coloneqq \langle dJ(\Omega) - d_h J(\Omega),\boldsymbol\theta^h \rangle.
\label{eq:errorH} \end{equation} From \eqref{eq:errorH}, it follows that \begin{equation} \langle d J(\Omega),\boldsymbol\theta^h \rangle = \langle d_h J(\Omega),\boldsymbol\theta^h \rangle + E^h . \label{eq:gradH+err} \end{equation} As stated before, $\boldsymbol\theta^h$ is constructed starting from \eqref{eq:variationalP} and replacing the expression of the shape derivative with its discrete counterpart \eqref{eq:discreteShapeGrad}. This results in a discretized direction $\boldsymbol\theta^h$ such that $\langle d_h J(\Omega),\boldsymbol\theta^h \rangle < 0$. Nevertheless, in order for $\boldsymbol\theta^h$ to be a descent direction for the objective functional, condition \eqref{eq:descentDirection} has to be fulfilled; thus the quantity $E^h$ in \eqref{eq:gradH+err} has to be accounted for. Within this framework, we obtain the following condition on $\boldsymbol\theta^h$: \begin{equation} \langle d_h J(\Omega),\boldsymbol\theta^h \rangle + E^h < 0 . \label{eq:condition-with-err} \end{equation} This condition cannot be verified directly since the quantity $E^h$ is unknown and may be either positive or negative. In order to derive a relationship that holds independently of the sign of $E^h$, and since no a priori information on this sign is available, we modify \eqref{eq:condition-with-err} by introducing the absolute value of the error in the shape gradient: \begin{equation*} \langle d_h J(\Omega),\boldsymbol\theta^h \rangle + E^h \leq \langle d_h J(\Omega),\boldsymbol\theta^h \rangle + | E^h | < 0. \label{eq:Grad+AbsErr} \end{equation*} We may now introduce the following definition: \begin{defin} Let $\overline{E}$ be an upper bound of the error $| E^h |$ in the shape gradient. A direction $\boldsymbol\theta^h$ is said to be a certified descent direction for the functional $J(\Omega)$ if \begin{equation} \langle d_h J(\Omega),\boldsymbol\theta^h \rangle + \overline{E} < 0.
\label{eq:certified} \end{equation} \end{defin} \noindent The expression \emph{certified} is due to the fact that a direction constructed within this framework is guaranteed to be a genuine descent direction for the functional $J(\Omega)$. As a matter of fact, it is straightforward to observe that if $\boldsymbol\theta^h$ fulfills \eqref{eq:certified}, then it verifies \eqref{eq:descentDirection} as well. \begin{rmrk} It is important to observe that a direction fulfilling \eqref{eq:certified} is a genuine descent direction for $J(\Omega)$, whether it is the solution of equation \eqref{eq:variationalP} or not. This is extremely important since the computation of the descent direction is done through the discretization of the aforementioned variational problem, that is $\boldsymbol\theta^h$ is only an approximation of the direction $\boldsymbol\theta$ that solves \eqref{eq:variationalP}. \end{rmrk} In \cite{1742-6596-657-1-012004,giacomini:hal-01201914}, we coupled the certification procedure to the boundary variation algorithm and we derived a new gradient-based method for shape optimization named certified descent algorithm (CDA, algorithm \ref{scpt:shape-opt-adaptive}). On the one hand, the computation of the upper bound of the numerical error in the shape gradient provides useful information to identify a certified descent direction, thus improving the objective functional at each iteration of the optimization strategy. On the other hand, owing to the quantitative information encapsulated in $\overline{E}$, the CDA features a guaranteed stopping criterion for the overall optimization procedure. \\ The key aspect of this algorithm is the derivation of a fully-computable guaranteed estimator of the error in the shape gradient. In particular, in section \ref{ref:errorQoI} we introduce an approach based on the equilibrated fluxes method which relies solely on the computation of local quantities.
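The certification test itself is a one-line check. The following sketch (the function and variable names are ours, not the paper's) makes explicit that a negative discretized slope is not conclusive on its own: the guaranteed bound $\overline{E}$ must also be small enough, which the algorithm achieves by mesh refinement.

```python
# Hedged sketch of the certification test (eq:certified): accept theta^h
# only if the discretized slope plus the guaranteed error bound E_bar
# is still negative. Names are illustrative.

def is_certified(dhJ_theta, E_bar):
    """True iff <d_h J, theta^h> + E_bar < 0, i.e. theta^h is a
    certified (hence genuine) descent direction."""
    return dhJ_theta + E_bar < 0.0

# A negative discrete slope alone is not conclusive when the bound
# dominates it; refining the mesh shrinks E_bar until descent is certified.
assert not is_certified(-0.8, 1.0)   # bound too large: refine the mesh
assert is_certified(-0.8, 0.5)       # bound small enough: descent certified
```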
\subsubsection{The CDA workflow} At each iteration, the algorithm solves the state and adjoint problems and computes a descent direction $\boldsymbol\theta^h$. Then, an upper bound of the numerical error in the shape derivative along the direction $\boldsymbol\theta^h$ is derived. If condition \eqref{eq:certified} is not fulfilled, the mesh is adapted in order to improve the error estimate. This procedure is iterated until the direction $\boldsymbol\theta^h$ is a certified descent direction for $J(\Omega)$. Once a certified descent direction has been identified, we compute a step $\mu$ via an Armijo rule and the shape of the domain is updated according to the computed perturbation of the identity $\Id + \mu \boldsymbol\theta^h$. Finally, a novel stopping criterion is proposed in order to use the information embedded in the error bound $\overline{E}$ to derive a reliable condition to end the evolution of the algorithm. A key aspect of the sketched procedure is the mesh adaptation routine that is performed if condition \eqref{eq:certified} is not fulfilled. In order to reduce the quantity $\overline{E}$ at each iteration, we construct an indicator based on the error in the shape gradient and we drive the mesh adaptation using the information carried by the indicator itself. This strategy is known as goal-oriented mesh adaptation (cf. \cite{Oden2001735}): it aims to identify the areas of the domain that are mainly responsible for the error in the target quantity and to reduce this error by means of local refinement, so as to limit the insertion of new degrees of freedom. \begin{lstlisting}[language=pseudo, escapeinside={/*@}{@*/}, caption={The certified descent algorithm (CDA)}, label=scpt:shape-opt-adaptive] Given the domain $\Omega_0$, set $\texttt{tol}>0$, $j=0$ and iterate: 1. Solve the state problem in $\Omega_j$; 2. Solve the adjoint problem in $\Omega_j$; 3. Compute a descent direction $\boldsymbol\theta^h_j$; 4.
Compute an upper bound $\overline{E}$ of the numerical error $E^h$; 5. If $\langle d_hJ(\Omega_j),\boldsymbol\theta^h_j \rangle + \overline{E} \geq 0$, refine the mesh and go to 1; 6. Identify an admissible step $\mu_j$; 7. Update the shape $\Omega_{j+1} = (\Id + \mu_j \boldsymbol\theta^h_j) \Omega_j$; 8. While $| \langle d_hJ(\Omega_j),\boldsymbol\theta^h_j \rangle | + \overline{E} > \texttt{tol}$, $j=j+1$ and repeat. \end{lstlisting} \section{Approximation error for the shape gradient} \label{ref:errorQoI} In this section, we present a strategy to construct a fully-computable guaranteed upper bound $\overline{E}$ of the error $E^h$ in the approximation of the shape gradient in order to practically implement the certification procedure described in section \ref{ref:certification}. \subsection{Bound for the approximation error of a linear functional} Let us recall the framework described in \cite{Oden2001735} for the derivation of an estimate of the error in a bounded linear functional. We consider a quantity of interest $Q: V_\Omega \rightarrow \mathbb{R}$ which we aim to evaluate for the function $u_\Omega$, solution of the state problem \eqref{eq:state}. We introduce the solution $u_\Omega^h$ of the corresponding discretized problem \eqref{eq:stateDiscrete} and we seek an estimate of the error in the target functional $Q$: \begin{equation*} E_Q \coloneqq Q(u_\Omega) - Q(u_\Omega^h) = Q(u_\Omega-u_\Omega^h) \label{eq:linFunc} \end{equation*} where the first equality follows from the linearity of $Q$. We introduce an adjoint problem featuring the quantity $Q$ as right-hand side, that is we seek an influence function $r_\Omega \in V_\Omega$ such that \begin{equation} a_\Omega(\delta r,r_\Omega) = Q(\delta r) \quad \forall \delta r \in V_\Omega . \label{eq:adjointGoalOriented} \end{equation} We remark that problem \eqref{eq:adjointGoalOriented} is well-posed owing to the Lax-Milgram theorem. 
The approximation of \eqref{eq:adjointGoalOriented} is obtained following the framework introduced in section \ref{ref:CDA} for the state problem. It is well-known in the literature (cf. e.g. \cite{MR2899560}) that in order to retrieve a sharp upper bound of the error in a quantity of interest, a higher-order approximation has to be employed for the solution of the adjoint problem. Let $a_\Omega^h(\cdot,\cdot)$ be the discrete bilinear form and $V_\Omega^{h,m}$ the space of finite element (respectively discontinuous Galerkin) functions of degree $m, \ m > \ell$. We seek $r_\Omega^h \in V_\Omega^{h,m}$ such that \begin{equation} a_\Omega^h(\delta r^h, r_\Omega^h) = Q(\delta r^h) \quad \forall \delta r^h \in V_\Omega^{h,m} . \label{eq:adjointDiscrete} \end{equation} From \eqref{eq:adjointGoalOriented} and \eqref{eq:state} it is straightforward to observe that \begin{equation*} F_\Omega(r_\Omega) = a_\Omega(u_\Omega,r_\Omega) = Q(u_\Omega) . \label{eq:primalDual} \end{equation*} Thus, the following relationship between the error in the approximation of the target functional and the solutions of the state and adjoint problems is derived: \begin{equation} E_Q = Q(u_\Omega) - Q(u_\Omega^h) = F_\Omega(r_\Omega) - a_\Omega^h(u_\Omega^h,r_\Omega^h) = F_\Omega(r_\Omega) - a_\Omega^h(u_\Omega^h,r_\Omega) \label{eq:errGOAL} \end{equation} where the first equality follows from \eqref{eq:primalDual} and the approximation \eqref{eq:adjointDiscrete} of the adjoint problem \eqref{eq:adjointGoalOriented}, whereas the justification of the last one exploits different properties when dealing with conforming finite element or discontinuous Galerkin approximations. \\ For the case of conforming finite element approximations, we have that $a_\Omega^h(\cdot,\cdot) = a_\Omega(\cdot,\cdot)$ and $a_\Omega(\delta r^h,r_\Omega) = Q(\delta r^h)$ holds for all $\delta r^h \in V_\Omega^{h,m}$ owing to \eqref{eq:adjointGoalOriented} and the fact that the discretization space $V_\Omega^{h,m}$ is a subspace of $V_\Omega$.
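In a purely algebraic setting, identity \eqref{eq:errGOAL} can be verified directly: for a linear system $A u = f$ with a linear quantity of interest $Q(u) = q \cdot u$, the error in $Q$ equals the residual of the state equation weighted by the influence function $r$. The following sketch uses illustrative data and a symmetric matrix $A$ (so that the adjoint solve reuses $A$); it is an analogy under simplified assumptions, not the finite element implementation.

```python
# Hedged algebraic analogue of eq. (errGOAL): E_Q = F(r) - a(u_h, r),
# i.e. the residual of the state equation weighted by the influence
# function. All data below are illustrative.

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric "stiffness" matrix
f = [1.0, 2.0]                 # load
q = [1.0, -1.0]                # quantity of interest Q(u) = q . u

u = solve2(A, f)               # exact state
uh = [0.1, 0.6]                # a deliberately inexact approximation
r = solve2(A, q)               # influence function (A symmetric: A^T r = q)

E_Q = dot(q, u) - dot(q, uh)                       # true error in Q
resid = [f[i] - dot(A[i], uh) for i in range(2)]   # state residual at u_h
E_dwr = dot(resid, r)                              # F(r) - a(u_h, r)
assert abs(E_Q - E_dwr) < 1e-12
```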
Thus the last equality in \eqref{eq:errGOAL} reduces to the classical expression of the residual of the state equation applied to the function $r_\Omega$: $$ R_\Omega^u(r_\Omega) \coloneqq F_\Omega(r_\Omega) - a_\Omega(u_\Omega^h,r_\Omega) . $$ On the contrary, when dealing with discontinuous Galerkin formulations, the expression of the discrete bilinear form also features the terms associated with the jumps of the discontinuous functions and it cannot be identified with its continuous version. Within this framework, the last equality in \eqref{eq:errGOAL} holds if the numerical method used to discretize the adjoint problem is consistent, that is if $a_\Omega^h(\delta r^h, r_\Omega) = Q(\delta r^h) \ \forall \delta r^h \in V_\Omega^{h,m}$. The adjoint consistency plays the role of the usual Galerkin orthogonality property of finite element methods, and we refer to \cite{MR2361907} for more details on its role in the construction of discretizations of optimal order in terms of target functionals. In section \ref{ref:DiscontinuousGalerkin}, we provide some details on the discontinuous Galerkin strategy chosen for the test case of electrical impedance tomography and we refer to \cite{MR2882148} for a complete analysis of the numerical scheme. \subsection{Goal-oriented estimate of the error in the shape gradient} \label{ref:GOAL} In order to apply the technique described in the previous section to the estimate of the error in the shape gradient, we need to extend the previously introduced framework to the case of non-linear quantities of interest.
We rely on the approach in \cite{2006-nonlin}, by performing a linearization of the target functional that leads to the introduction of the following linearized error $\widetilde{E}^h$: \begin{equation} \begin{aligned} E^h = & \left\langle \frac{\partial \mathcal{L}}{\partial \varphi} (\Omega,u_\Omega,p_\Omega) - \frac{\partial\mathcal{L}}{\partial\varphi} (\Omega,u_\Omega^h,p_\Omega^h), \boldsymbol\theta^h \right\rangle \\ & \simeq \frac{\partial^2 \mathcal{L}}{\partial \varphi \partial u} (\Omega,u_\Omega^h,p_\Omega^h)[\boldsymbol\theta^h,u_\Omega-u_\Omega^h] + \frac{\partial^2 \mathcal{L}}{\partial \varphi \partial p} (\Omega,u_\Omega^h,p_\Omega^h)[\boldsymbol\theta^h,p_\Omega-p_\Omega^h] \eqqcolon \widetilde{E}^h. \end{aligned} \label{eq:errDJ} \end{equation} For the rest of this paper, we neglect the linearization error introduced in \eqref{eq:errDJ} and we construct an estimator to evaluate the error in the shape gradient by considering solely the information in $\widetilde{E}^h$. More precisely, the quantities on the second line of \eqref{eq:errDJ} will serve as linear functionals to develop the goal-oriented estimate sketched in the previous subsection. In particular, we introduce two adjoint problems associated with the terms on the right-hand side of \eqref{eq:errDJ} and we seek $r_\Omega,s_\Omega \in V_\Omega$ such that \begin{equation} \begin{aligned} & a_\Omega(\delta r,r_\Omega) = \frac{\partial^2 \mathcal{L}} {\partial\varphi \partial u}(\Omega,u_\Omega^h,p_\Omega^h)[\boldsymbol\theta^h,\delta r] \quad \forall \delta r \in V_\Omega , \\ & a_\Omega(\delta s,s_\Omega) = \frac{\partial^2 \mathcal{L}} {\partial\varphi \partial p}(\Omega,u_\Omega^h,p_\Omega^h)[\boldsymbol\theta^h,\delta s] \quad \forall \delta s \in V_\Omega . \end{aligned} \label{eq:adjointDJ} \end{equation} The problems \eqref{eq:adjointDJ} are well-posed since their right-hand sides are linear and continuous forms on $V_\Omega$. 
For the corresponding conforming finite element (respectively discontinuous Galerkin) discretizations of problems \eqref{eq:adjointDJ}, we seek $r_\Omega^h,s_\Omega^h \in V_\Omega^{h,m}$ such that \begin{equation*} \begin{aligned} & a_\Omega^h(\delta r^h,r_\Omega^h) = \frac{\partial^2 \mathcal{L}} {\partial\varphi \partial u}(\Omega,u_\Omega^h,p_\Omega^h)[\boldsymbol\theta^h,\delta r^h] \quad \forall \delta r^h \in V_\Omega^{h,m} , \\ & a_\Omega^h(\delta s^h,s_\Omega^h) = \frac{\partial^2 \mathcal{L}} {\partial\varphi \partial p}(\Omega,u_\Omega^h,p_\Omega^h)[\boldsymbol\theta^h,\delta s^h] \quad \forall \delta s^h \in V_\Omega^{h,m} . \end{aligned} \label{eq:adjointDJdiscretized} \end{equation*} Let us now define two quantities associated respectively with the contribution of the state error $u_\Omega-u_\Omega^h$ and the adjoint error $p_\Omega-p_\Omega^h$ to the linearized error in the shape gradient $\widetilde{E}^h$: \begin{align} & \widetilde{E}^h_u= \frac{\partial^2 \mathcal{L}}{\partial \varphi \partial u} (\Omega,u_\Omega^h,p_\Omega^h)[\boldsymbol\theta^h,u_\Omega-u_\Omega^h] = F_\Omega(r_\Omega) - a_\Omega^h(u_\Omega^h,r_\Omega) , \label{eq:Eu} \\ & \widetilde{E}^h_p = \frac{\partial^2 \mathcal{L}}{\partial \varphi \partial p} (\Omega,u_\Omega^h,p_\Omega^h)[\boldsymbol\theta^h,p_\Omega-p_\Omega^h] = - \left\langle \frac{\partial j}{\partial u} (\Omega,u_\Omega), s_\Omega \right\rangle - a_\Omega^h(p_\Omega^h,s_\Omega) . \label{eq:Ep} \end{align} It is straightforward to observe that $\widetilde{E}^h =\widetilde{E}^h_u + \widetilde{E}^h_p$. In order to derive a practical strategy to perform the certification procedure in section \ref{ref:certification} by verifying condition \eqref{eq:certified}, we have to compute an upper bound $\overline{E}$ of the error $| E^h |$, that is $| \widetilde{E}^h |$ under the aforementioned assumption of a negligible linearization error. 
\\ In this paper, we propose an implicit error estimator based on the equilibrated fluxes method \cite{ainsworth2000posteriori}. This technique provides a fully-computable guaranteed upper bound of the error and relies solely on the computation of local quantities, namely the equilibrated fluxes. A detailed description of the construction of the equilibrated fluxes for the state and adjoint problems and the derivation of the estimator of the error in the shape gradient starting from the quantities $\widetilde{E}^h_u$ and $\widetilde{E}^h_p$ is presented in the following sections. Since the expressions of both the equilibrated fluxes and the error estimator depend on the nature of the problem under analysis, in the next section we introduce the formulation of the inverse identification problem of electrical impedance tomography that will act as proof of concept for the discussed method. \section{Electrical impedance tomography} \label{ref:InversePB} We consider the inverse identification problem of electrical impedance tomography in the most classical Point Electrode Model. We consider a shape optimization formulation for this inverse problem and we solve it by means of a version of the certified descent algorithm (cf. algorithm \ref{scpt:shape-opt-adaptive}) featuring an equilibrated fluxes strategy for the certification procedure. \\ It is well-known in the literature that the problem of electrical impedance tomography is severely ill-posed. Moreover, classical shape optimization methods proved to be highly unsatisfactory by remaining trapped in local minima and consequently providing fairly poor reconstructions of the inclusions. The certified descent algorithm does not aim to solve the known issues of gradient-based strategies when dealing with ill-posed problems. Nevertheless, the interest in the EIT problem is twofold. 
On the one hand, the EIT is a non-trivial scalar problem that may help establish some properties of this new version of the certified descent algorithm. On the other hand, the CDA confirms the aforementioned limitations of gradient-based methods applied to inverse problems. In particular, the rapidly increasing number of degrees of freedom required for the certification procedure highlights the severe ill-posedness of the EIT problem. We refer the reader interested in an overview of the methods investigated in the literature for the EIT problem to \cite{LaurainSturm2015, MR2211069, MR2407028} for shape optimization approaches, to \cite{Hintermuller2008, MR2886190, MR2966180} for topology optimization strategies and to \cite{MR3019471, MR2132313, holder2004electrical} for regularization techniques. We are now ready to introduce the formulation of the electrical impedance tomography problem. Let us consider an open domain $\mathcal{D} \subset \mathbb{R}^2$ featuring an open subdomain $\Omega \subset\subset \mathcal{D}$ such that the electrical conductivity is piecewise constant in $\Omega$ and $\mathcal{D} \setminus \Omega$: \begin{equation*} k_\Omega \coloneqq k_I \chi_\Omega + k_E (1-\chi_\Omega) \label{eq:conductivity} \end{equation*} where $k_I$ and $k_E$ are two positive parameters and $\chi_\Omega$ is the characteristic function of $\Omega$. The goal is to identify the location and the shape of the inclusion $\Omega$ by fitting non-invasive measurements $g$ and $U_D$ respectively of the flux and the potential taken on the external boundary $\partial\mathcal{D}$. In order to solve this problem, we introduce two boundary value problems (Section \ref{ref:state}) and, following \cite{CPA:CPA3160400605}, we define a minimization problem for an objective functional inspired by the Kohn-Vogelius one (Section \ref{ref:shapeOpt}).
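As a minimal illustration of the conductivity model, the following sketch evaluates $k_\Omega$ pointwise; the disc-shaped inclusion and the parameter values are our assumptions, chosen only for the example.

```python
# Hedged sketch: evaluating the piecewise-constant conductivity
# k_Omega = k_I * chi_Omega + k_E * (1 - chi_Omega) for an illustrative
# disc-shaped inclusion Omega (the geometry is our assumption).

def conductivity(x, y, inside, k_I=2.0, k_E=1.0):
    """Return k_I inside the inclusion Omega and k_E in D \\ Omega."""
    return k_I if inside(x, y) else k_E

# Disc of radius 0.2 centred at (0.5, 0.5) inside the unit square D.
inside_disc = lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.2 ** 2

assert conductivity(0.5, 0.5, inside_disc) == 2.0   # inside Omega
assert conductivity(0.1, 0.1, inside_disc) == 1.0   # in D \ Omega
```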
The inverse problem of electrical impedance tomography - also known as Calder\'on's problem - has been extensively studied over the years. We refer the interested reader to \cite{Calderon1980, Cheney99electricalimpedance, 0266-5611-18-6-201} for a detailed exposition of the physical problem, its mathematical formulation and its numerical approximation. \subsection{State problems} \label{ref:state} We consider $g \in L^2(\partial\mathcal{D})$ and $U_D \in H^{\frac{1}{2}}(\partial\mathcal{D})$ as boundary data for the flux and the potential. Let $i=N,D$ be the subscripts associated respectively with Neumann and Dirichlet boundary conditions. We introduce the boundary value problems \begin{align} \left\{ \begin{aligned} & -k_\Omega \Delta u_{\Omega,i} + u_{\Omega,i} = 0 \quad & \text{in} \ \mathcal{D} \setminus \partial\Omega \\ & \llbracket u_{\Omega,i} \rrbracket = 0 \quad & \text{on} \ \partial\Omega \\ & \llbracket k_\Omega \nabla u_{\Omega,i} \cdot \mathbf{n} \rrbracket = 0 \quad & \text{on} \ \partial\Omega \end{aligned} \right. \label{eq:statePB} \end{align} with the following sets of boundary conditions on $\partial\mathcal{D}$: \begin{gather} k_E \nabla u_{\Omega,N} \cdot \mathbf{n} = g , \label{eq:NeumannBC} \\ u_{\Omega,D} = U_D . \label{eq:DirichletBC} \end{gather} \subsection{Shape derivative of the Kohn-Vogelius functional} \label{ref:shapeOpt} Let us consider the following objective functional inspired by the work of Kohn and Vogelius \cite{CPA:CPA3160400605}: \begin{equation} J(\Omega) = \frac{1}{2}\int_\mathcal{D}{\Big( k_\Omega \left| \nabla(u_{\Omega,N} - u_{\Omega,D})\right|^2 + |u_{\Omega,N} - u_{\Omega,D}|^2 \Big) d\mathbf{x}}.
\label{eq:kohn-vogelius} \end{equation} The problem of retrieving the inclusion $\Omega$ starting from the boundary measurements $g$ and $U_D$ may be viewed as an optimization problem in which we seek the open subset that minimizes \eqref{eq:kohn-vogelius}, $u_{\Omega,N}$ and $u_{\Omega,D}$ being the solutions of the state problems \eqref{eq:statePB} with boundary conditions given respectively by the Neumann \eqref{eq:NeumannBC} and the Dirichlet \eqref{eq:DirichletBC} data. As stated in section \ref{ref:shapeDifferentiation}, in order to differentiate a functional with respect to the shape, we introduce an adjoint problem for each state variable. Owing to the fact that the Kohn-Vogelius problem is self-adjoint, we get that $p_{\Omega,N}=u_{\Omega,N} - u_{\Omega,D}$ and $p_{\Omega,D}=0$. In this work, we consider a volumetric formulation of the shape derivative. As a matter of fact, it has been recently proved by Hiptmair and co-workers (cf. \cite{HPS_bit}) that the volumetric expression provides better numerical accuracy than its corresponding surface representation\footnote{ We refer to \cite{HPS_bit} for a detailed comparison of the volumetric and surface expressions of the shape gradient for elliptic state problems. In particular, in this work the authors prove that within the framework of finite element discretizations, a better numerical accuracy is achieved when using the volumetric formulation of the shape gradient. Similar results for the case of interface problems are available in \cite{MR3467382}.}. \\ Let $\boldsymbol\theta \in W^{1,\infty}(\mathcal{D};\mathbb{R}^2)$ be an admissible deformation of the domain such that $\boldsymbol\theta=\mathbf{0} \ \text{on} \ \partial\mathcal{D}$. We define $\mathbf{M}(\boldsymbol\theta) = \nabla \boldsymbol\theta + \nabla \boldsymbol\theta^T - (\nabla \cdot \boldsymbol\theta) \Id$. 
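The tensor $\mathbf{M}(\boldsymbol\theta)$ is purely algebraic and can be evaluated pointwise from the Jacobian of $\boldsymbol\theta$; note that in two dimensions it is symmetric and trace-free, since $\operatorname{tr}(\nabla\boldsymbol\theta + \nabla\boldsymbol\theta^T) = 2\,\nabla\cdot\boldsymbol\theta$. A minimal sketch with illustrative data:

```python
# Hedged sketch: the tensor M(theta) = grad theta + grad theta^T
# - (div theta) Id, evaluated from a 2x2 Jacobian of theta.

def M(g):
    """g[i][j] = d theta_i / d x_j; returns M(theta) as a 2x2 list."""
    div = g[0][0] + g[1][1]
    return [[2.0 * g[0][0] - div, g[0][1] + g[1][0]],
            [g[1][0] + g[0][1], 2.0 * g[1][1] - div]]

m = M([[1.0, 2.0], [3.0, 4.0]])
assert m == [[-3.0, 5.0], [5.0, 3.0]]
# In 2D, M(theta) is symmetric and trace-free: tr(M) = 2 div - 2 div = 0.
assert m[0][1] == m[1][0] and m[0][0] + m[1][1] == 0.0
```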
By introducing the following operator \begin{equation*} \langle G(\Omega,u) , \boldsymbol\theta \rangle = \frac{1}{2}\int_\mathcal{D}{ \Big(k_\Omega \mathbf{M}(\boldsymbol\theta) \nabla u \cdot \nabla u - \nabla \cdot \boldsymbol\theta \ u^2 \Big) d\mathbf{x}} , \label{eq:operatorG} \end{equation*} the volumetric expression of the shape derivative of \eqref{eq:kohn-vogelius} reads as \begin{equation} \langle dJ(\Omega),\boldsymbol\theta \rangle = \langle G(\Omega,u_{\Omega,N}) - G(\Omega,u_{\Omega,D}) , \boldsymbol\theta \rangle. \label{eq:volumetricKV} \end{equation} The interested reader may refer to \cite{PANTZ-sauts} for more details on the differentiation of the Kohn-Vogelius functional with respect to the shape and its application to the identification of discontinuities of the conductivity parameter. \section{Conforming finite element approximation} \label{ref:ConformingFE} In this section, we introduce a discretization of the EIT problem based on conforming finite element functions. Let $\{\mathcal{T}_h\}_{h>0}$ be a family of triangulations of the domain $\mathcal{D}$ with no hanging nodes. Since $d=2$, we consider a mesh such that each element $T \in \mathcal{T}_h$ is a triangle and for each pair $T, T' \in \mathcal{T}_h$ such that $T \neq T'$, the intersection of the two elements is either empty, a vertex or an edge. An edge $e$ is said to be an interior edge of the triangulation $\mathcal{T}_h$ if there exist two elements $T^-(e), \ T^+(e) \in \mathcal{T}_h$ such that $e = T^-(e) \cap T^+(e)$, whereas it is a boundary edge if there exists $T(e) \in \mathcal{T}_h$ such that $e = T(e) \cap \partial\mathcal{D}$. In the former case, the unit normal vector to $e$ is denoted by $\mathbf{n}_e$ and goes from $T^-(e)$ towards $T^+(e)$. In the latter one, $\mathbf{n}$ is the classical outward normal to $\partial\mathcal{D}$.
The set of internal edges is denoted by $\mathcal{E}_h^\mathcal{I}$, the boundary edges are collected into $\mathcal{E}_h^\mathcal{B}$ and we set $\mathcal{E}_h \coloneqq \mathcal{E}_h^\mathcal{I} \cup \mathcal{E}_h^\mathcal{B}$. \\ The state and adjoint problems are solved using the following Lagrangian finite element space $$ V_\Omega^{h,\kappa} \coloneqq \{ u^h \in \mathcal{C}^0(\overline{\mathcal{D}}) \ : \ u^h |_T \in \mathbb{P}^\kappa(T) \ \forall T \in \mathcal{T}_h \} $$ where $\mathbb{P}^\kappa(T)$ is the set of polynomials of degree at most $\kappa$ on an element $T$, with $\kappa=\ell$ for the state equations and $\kappa=m$ for the adjoint equations. The equilibrated fluxes are constructed via the solution of local subproblems defined on patches of elements using mixed finite element formulations. A key aspect of this approach - which will be precisely detailed - is the choice of the degree of the approximating functions for both the solution of the problems and the equilibrated fluxes. \subsection{The state problems} Let $a_\Omega(\cdot,\cdot)$ be the bilinear form associated with problems \eqref{eq:statePB} and $F_{\Omega,i}(\cdot), \ i=N,D$ the linear forms respectively for the Neumann and the Dirichlet problem: \begin{gather} a_\Omega(u_{\Omega,i},\delta u) = \int_\mathcal{D}{\Big(k_\Omega \nabla u_{\Omega,i} \cdot \nabla \delta u + u_{\Omega,i} \delta u \Big) d\mathbf{x}} , \label{eq:a} \\ F_{\Omega,N}(\delta u) = \int_{\partial\mathcal{D}}{g \delta u \ ds} \quad \text{and} \quad F_{\Omega,D}(\delta u) = 0. \label{eq:F} \end{gather} We consider $u_{\Omega,N},\ u_{\Omega,D} \in H^1(\mathcal{D})$ such that $u_{\Omega,D}=U_D \ \text{on} \ \partial \mathcal{D}$, solutions of the following Neumann and Dirichlet variational problems $\forall \delta u_N \in H^1(\mathcal{D})$ and $\forall \delta u_D \in H^1_0(\mathcal{D})$: \begin{equation} a_\Omega(u_{\Omega,i},\delta u_i) = F_{\Omega,i}(\delta u_i) \quad , \quad i=N,D.
\label{eq:stateEIT} \end{equation} We remark that within the framework of conforming finite element discretizations, the continuous and discrete bilinear (respectively linear) forms have the same expressions. Hence, the corresponding discretized formulations of the state problems \eqref{eq:stateEIT} may be derived by replacing the analytical solutions $u_{\Omega,N}$ and $u_{\Omega,D}$ with their approximations $u_{\Omega,N}^h$ and $u_{\Omega,D}^h$, which belong to the space $V_\Omega^{h,\ell}$ of Lagrangian finite element functions of degree $\ell$. In a similar fashion, $\boldsymbol\theta^h$ is the solution of equation \eqref{eq:variationalP} computed using a vector-valued Lagrangian finite element space and substituting the expression of the discrete shape derivative \eqref{eq:discreteShapeGrad} for its analytical counterpart \eqref{eq:shapeDerDiff}. For both the state problems and the computation of the descent direction, we consider a low-order approximation respectively using $\mathbb{P}^1$ and $\mathbb{P}^1 \times \mathbb{P}^1$ Lagrangian finite element functions.
\subsection{The adjoint problems} Let $r_{\Omega,N}$ and $r_{\Omega,D}$ be the solutions of the adjoint problems \eqref{eq:adjointDJ} introduced to evaluate the contributions of the Neumann and Dirichlet state problems to the error in the quantity of interest: we seek $r_{\Omega,N} \in H^1(\mathcal{D})$ and $r_{\Omega,D} \in H^1_0(\mathcal{D})$ such that respectively $\forall \delta r_N \in H^1(\mathcal{D})$ and $\forall \delta r_D \in H^1_0(\mathcal{D})$ \begin{equation} a_\Omega(\delta r_i,r_{\Omega,i}) = H_{\Omega,i}(\delta r_i) \quad , \quad i=N,D \label{eq:adjointEIT} \end{equation} where for $i=N,D$ the linear forms $H_{\Omega,i}(\delta r_i)$ read as \begin{equation*} \begin{aligned} H_{\Omega,i}(\delta r) & \coloneqq \frac{\partial G}{\partial u}(\Omega,u_{\Omega,i}^h)[ \boldsymbol\theta^h,\delta r] \\ & = \int_\mathcal{D}{ \Big(k_\Omega \mathbf{M}(\boldsymbol\theta^h) \nabla u_{\Omega,i}^h \cdot \nabla \delta r - \nabla \cdot \boldsymbol\theta^h \ u_{\Omega,i}^h \delta r \Big) d\mathbf{x}} . \end{aligned} \label{eq:Fadj} \end{equation*} As for the state problems, the discretized solutions $r_{\Omega,N}^h$ and $r_{\Omega,D}^h$ are obtained solving the adjoint equations \eqref{eq:adjointEIT} within an appropriate space of Lagrangian finite element functions, that is the space $V_\Omega^{h,m}$ of degree $m$. According to the requirement of higher-order methods to solve the adjoint problems, we consider a $\mathbb{P}^2$ Lagrangian finite element space for the discretization of \eqref{eq:adjointEIT}. \subsection{Estimate of the error in the shape gradient via the equilibrated fluxes} Starting from the framework described in section \ref{ref:GOAL}, we construct a goal-oriented estimator of the error in the shape gradient by evaluating the quantities $\widetilde{E}^h_u$ and $\widetilde{E}^h_p$ in \eqref{eq:Eu}-\eqref{eq:Ep}. 
First of all, we observe that owing to the Kohn-Vogelius problem being self-adjoint, this reduces to estimating the quantity $\widetilde{E}^h_u$ for the Neumann and the Dirichlet cases. By recalling the expression \eqref{eq:volumetricKV} of the shape derivative for the Kohn-Vogelius functional, we may rewrite the error in the shape gradient as follows: \begin{equation} \begin{aligned} E^h & = \langle dJ(\Omega) - d_hJ(\Omega),\boldsymbol\theta^h \rangle \\ & = \langle G(\Omega,u_{\Omega,N}) - G(\Omega,u_{\Omega,N}^h) , \boldsymbol\theta^h \rangle - \langle G(\Omega,u_{\Omega,D}) - G(\Omega,u_{\Omega,D}^h), \boldsymbol\theta^h \rangle \\ & \simeq H_{\Omega,N}(u_{\Omega,N} - u_{\Omega,N}^h) - H_{\Omega,D}(u_{\Omega,D} - u_{\Omega,D}^h) \eqqcolon \widetilde{E}^h_{u,_N} - \widetilde{E}^h_{u,_D} . \end{aligned} \label{eq:ERRvolumetricKV} \end{equation} Before constructing the components of the estimator of the error in the shape gradient in \eqref{eq:ERRvolumetricKV}, we recall the notion of equilibrated fluxes. In order to do so, we introduce the space of vector-valued functions $\Hdiv = \{ \boldsymbol\tau \in L^2(\mathcal{D}; \mathbb{R}^d) \ : \ \nabla \cdot \boldsymbol\tau \in L^2(\mathcal{D}) \}$ and the discrete space $W_\Omega^{h,\kappa}$ of the functions that restricted to a single element of the triangulation are Raviart-Thomas finite element functions of degree $\kappa$: $$ W_\Omega^{h,\kappa} \coloneqq \{ \boldsymbol\tau^h \in \Hdiv \ : \ \boldsymbol\tau^h |_T \in [\mathbb{P}^\kappa(T)]^d + \mathbf{x} \mathbb{P}^\kappa(T) \ \forall T \in \mathcal{T}_h \} . $$ \begin{rmrk} A function $\boldsymbol\tau^h \in W_\Omega^{h,\kappa}$ is such that $\nabla \cdot \boldsymbol\tau^h \in \mathbb{P}^\kappa(T) \ \forall T \in \mathcal{T}_h$ , $\boldsymbol\tau^h \cdot \mathbf{n}_e \in \mathbb{P}^\kappa(e) \ \forall e \subset \partial T \ , \ \forall T \in \mathcal{T}_h$ and its normal trace is continuous across all edges $e \subset \partial T$ (cf. \cite{BoffiBrezziFortin}). 
\end{rmrk} \subsubsection{Equilibrated fluxes for the state equations} The discretized solutions $u_{\Omega,i}^h$'s of the state problems are usually such that $- k_\Omega \nabla u_{\Omega,i}^h$ $\notin \Hdiv$ or $\nabla \cdot (- k_\Omega \nabla u_{\Omega,i}^h) + u_{\Omega,i}^h \neq 0$. On the contrary, the weak solutions $u_{\Omega,i}$'s - and their fluxes $\boldsymbol\sigma_{\Omega,i} \coloneqq - k_\Omega \nabla u_{\Omega,i}$ - fulfill $\boldsymbol\sigma_{\Omega,i} \in \Hdiv$ and $\nabla \cdot \boldsymbol\sigma_{\Omega,i} + u_{\Omega,i} = 0$. In order to retrieve the aforementioned properties, we construct the discrete quantities known as equilibrated fluxes (cf. \cite{MR2373174}): \begin{defin} Let $u_{\Omega,i}^h \in V_\Omega^{h,\ell}$ be the solution of a state problem \eqref{eq:stateEIT} computed using Lagrangian finite element functions of degree $\ell$. Let $\kappa = \max \{ 0 , \ell-1 \}$ and let $\pi_Z^\kappa : L^2(\mathcal{D}) \rightarrow Z_\Omega^{h,\kappa}$ be the $L^2$-orthogonal projection operator onto the space $Z_\Omega^{h,\kappa}$ of the piecewise discontinuous finite element functions of degree $\kappa$. A function $\boldsymbol\sigma_{\Omega,i}^h \in W_\Omega^{h,\kappa}$ is said to be an equilibrated flux for the problem \eqref{eq:stateEIT} if \begin{equation} \nabla \cdot \boldsymbol\sigma_{\Omega,i}^h + \pi_Z^\kappa u_{\Omega,i}^h = 0 . \label{eq:eqFLuxState} \end{equation} \label{def:fluxState} \end{defin} \noindent Under the previously introduced assumptions on the degree of the discretization spaces, we get that $\ell=1$ and $\kappa=0$, that is the equilibrated flux is sought in the lowest-order Raviart-Thomas space $RT_0$ and the projection operator returns $\mathbb{P}^0$ piecewise constant functions.
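A one-dimensional analogue makes condition \eqref{eq:eqFLuxState} concrete: in 1D the lowest-order Raviart-Thomas space reduces to continuous piecewise-linear functions, so an equilibrated flux can be obtained by integrating $\sigma' = -\pi^0 u^h$ cell by cell. The following sketch (our own simplified setting, with illustrative data) checks the equilibration residual elementwise.

```python
# Hedged 1D analogue of the equilibration condition (eq:eqFLuxState):
# a continuous, piecewise-linear flux sigma with sigma' + pi^0 u_h = 0
# on every cell. Data are illustrative.

def equilibrated_flux_1d(u_means, h, sigma_left=0.0):
    """Nodal flux values obtained by integrating sigma' = -mean(u_h)."""
    sigma = [sigma_left]
    for um in u_means:
        sigma.append(sigma[-1] - h * um)
    return sigma

u_means = [1.0, 0.5, 0.25]          # cell averages pi^0 u_h
h = 0.1
sigma = equilibrated_flux_1d(u_means, h)
# Check the equilibration residual div(sigma) + pi^0 u_h cell by cell.
for k, um in enumerate(u_means):
    assert abs((sigma[k + 1] - sigma[k]) / h + um) < 1e-12
```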
To practically reconstruct the equilibrated fluxes $\boldsymbol\sigma_{\Omega,i}^h$'s, we follow the approach proposed by Ern and Vohral\'ik in \cite{doi:10.1137/130950100}, which is based on the work by Braess and Sch\"{o}berl \cite{MR2373174}. In particular, we consider a procedure that, starting from the finite element functions $u_{\Omega,i}^h \ , \ i=N,D$, constructs the equilibrated fluxes locally on subpatches of elements. More precisely, for each vertex $x_v \ , \ v=1,\ldots,N_v$ of the elements in the computational mesh we introduce a linear shape function $\psi_v$ such that $\psi_v(x_w) = \delta_{vw}$, $\delta$ being the classical Kronecker delta. The support of $\psi_v$ is the subpatch centered at $x_v$ and is denoted by $\omega_v$. We remark that the family of functions $\psi_v$'s fulfills the condition known as partition of unity, that is $$ \sum_{v=1}^{N_v} \psi_v = 1 . $$ In order to retrieve a precise approximation of the fluxes, we consider a dual mixed finite element formulation of the aforementioned local problems. First, let us denote by $W_{\omega_v}^{h,\kappa}$ (respectively $Z_{\omega_v}^{h,\kappa}$) the restriction to $\omega_v$ of the previously defined space $W_\Omega^{h,\kappa}$ (respectively $Z_\Omega^{h,\kappa}$). Moreover, we introduce the following finite element spaces: \begin{gather*} W_{v,0}^{h,\kappa} \coloneqq \{ \boldsymbol\tau^h \in W_{\omega_v}^{h,\kappa} \ : \ \boldsymbol\tau^h \cdot \mathbf{n}_e = 0 \ \text{on} \ e \in \partial\omega_v \} , \label{eq:Wloc-all} \\ W_{v,1}^{h,\kappa} \coloneqq \{ \boldsymbol\tau^h \in W_{\omega_v}^{h,\kappa} \ : \ \boldsymbol\tau^h \cdot \mathbf{n}_e = 0 \ \text{on} \ e \in \partial\omega_v \setminus \mathcal{E}_h^\mathcal{B} \} .
\label{eq:Wloc-part} \end{gather*} For each vertex $x_v \ , \ v=1,\ldots,N_v$ and for $i=N,D$, we prescribe $(\boldsymbol\sigma_{i,v}^h,t_{i,v}^h) \in W_{i,v}^{h,\kappa} \times Z_{\omega_v}^{h,\kappa}$ such that $\forall (\boldsymbol\delta \boldsymbol\sigma_i^h,\delta t_i^h) \in W_v^{h,\kappa} \times Z_{\omega_v}^{h,\kappa}$ \begin{equation} \begin{aligned} & \int_{\omega_v}{\nabla \cdot \boldsymbol\sigma_{i,v}^h \delta t_i^h \ d\mathbf{x}} + \int_{\omega_v}{t_{i,v}^h \delta t_i^h \ d\mathbf{x}} = - \int_{\omega_v}{\Big(k_\Omega \nabla u_{\Omega,i}^h \cdot \nabla \psi_v + u_{\Omega,i}^h \psi_v \Big) \delta t_i^h \ d\mathbf{x}} , \\ & \int_{\omega_v}{\boldsymbol\sigma_{i,v}^h \cdot \boldsymbol\delta \boldsymbol\sigma_i^h \ d\mathbf{x}} - \int_{\omega_v}{k_\Omega t_{i,v}^h \nabla \cdot \boldsymbol\delta \boldsymbol\sigma_i^h \ d\mathbf{x}} = - \int_{\omega_v}{k_\Omega \psi_v \nabla u_{\Omega,i}^h \cdot \boldsymbol\delta \boldsymbol\sigma_i^h \ d\mathbf{x}} . \end{aligned} \label{eq:mixedPBstate} \end{equation} The spaces in which the trial and the test functions are sought are detailed below. It is important to highlight the different nature of problem \eqref{eq:mixedPBstate} when the patch $\omega_v$ is centered on a vertex belonging to the interior of $\mathcal{D}$ or to its boundary $\partial\mathcal{D}$. As Braess and Sch\"{o}berl remark in \cite{MR2373174}, some caution has to be used when dealing with the corresponding boundary conditions: in particular, a flux-free condition is imposed on the whole boundary $\partial\omega_v$ of the patch for interior vertices, whereas it is limited to the edges in $\partial\omega_v \setminus \mathcal{E}_h^\mathcal{B}$ for points which belong to the external boundary of the global domain. 
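The partition-of-unity property of the hat functions $\psi_v$'s recalled above is what allows the local contributions to be summed into a consistent global flux. As an illustration, the following Python sketch (a hypothetical one-dimensional analogue of the two-dimensional hat functions, with made-up mesh nodes) builds the $\mathbb{P}^1$ hat functions on a small mesh and checks that they sum to one everywhere.

```python
import numpy as np

# Hypothetical 1D mesh nodes x_v; psi_v is the P1 hat function of node v.
nodes = np.array([0.0, 0.25, 0.6, 1.0])

def hat(v, x):
    """P1 hat function centered at node v: equals 1 at x_v, 0 at the other
    nodes, and is linear on each element (its support is the patch omega_v)."""
    left = nodes[v - 1] if v > 0 else None
    right = nodes[v + 1] if v < len(nodes) - 1 else None
    y = np.zeros_like(x)
    if left is not None:
        m = (x >= left) & (x <= nodes[v])
        y[m] = (x[m] - left) / (nodes[v] - left)
    if right is not None:
        m = (x >= nodes[v]) & (x <= right)
        y[m] = (right - x[m]) / (right - nodes[v])
    y[x == nodes[v]] = 1.0
    return y

# Partition of unity: the hat functions sum to 1 at every sample point.
x = np.linspace(0.0, 1.0, 201)
total = sum(hat(v, x) for v in range(len(nodes)))
print(np.allclose(total, 1.0))  # True
```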
\\ To construct the equilibrated fluxes for the Neumann state problem on $\omega_v$ centered at a vertex $x_v \in \partial\mathcal{D}$, equation \eqref{eq:mixedPBstate} is solved using the spaces \begin{gather} \begin{aligned} W_{N,v}^{h,\kappa} \coloneqq \{ \boldsymbol\tau^h \in W_{\omega_v}^{h,\kappa} \ : & \ \boldsymbol\tau^h \cdot \mathbf{n}_e = 0 \ \text{on} \ e \in \partial\omega_v \setminus \mathcal{E}_h^\mathcal{B} \\ & \text{and} \ \boldsymbol\tau^h \cdot \mathbf{n}_e = \pi_{W \cdot \mathbf{n}}^\kappa(\psi_v g) \ \text{on} \ e \in \partial\omega_v \cap \mathcal{E}_h^\mathcal{B} \} , \end{aligned} \label{eq:trialExtNeu} \\ W_v^{h,\kappa} = W_{v,0}^{h,\kappa} . \label{eq:testExtNeu} \end{gather} When considering the Dirichlet state problem on $\omega_v$ centered at $x_v \in \partial\mathcal{D}$, the trial and test spaces read as follows: \begin{equation} W_{D,v}^{h,\kappa} = W_v^{h,\kappa} = W_{v,1}^{h,\kappa} . \label{eq:trial-testExtDir} \end{equation} Finally, for the vertices $x_v \in \mathrm{int}(\mathcal{D})$, we solve \eqref{eq:mixedPBstate} using the spaces \begin{equation} W_{N,v}^{h,\kappa} = W_{D,v}^{h,\kappa} = W_v^{h,\kappa} = W_{v,0}^{h,\kappa} . \label{eq:trial-testInt} \end{equation} In \eqref{eq:trialExtNeu}, $\pi_{W \cdot \mathbf{n}}^\kappa$ stands for the $L^2$-orthogonal projection operator from $L^2(\partial\mathcal{D})$ to the space $W_\Omega^{h,\kappa} \cdot \mathbf{n}$ of polynomial functions of degree at most $\kappa$ on the external boundary. For additional details on the procedure to construct the equilibrated fluxes and on the properties of the resulting \emph{a posteriori} error estimators, we refer to \cite{doi:10.1137/130950100}. We now extend all the $\boldsymbol\sigma_{i,v}^h$'s by zero in $\mathcal{D} \setminus \omega_v$.
By combining the above information arising from all the subpatches, we may retrieve the global equilibrated fluxes for the state problems: \begin{equation*} \boldsymbol\sigma_{\Omega,i}^h = \sum_{v=1}^{N_v} \boldsymbol\sigma_{i,v}^h \quad , \quad i=N,D . \label{eq:sumFluxState} \end{equation*} \begin{lemma} For the case of the Neumann state problem, it holds \begin{equation*} \boldsymbol\sigma_{\Omega,N}^h \cdot \mathbf{n} = \pi_{W \cdot \mathbf{n}}^\kappa(g) \quad \text{on} \quad \partial\mathcal{D} . \label{eq:fluxProj} \end{equation*} \label{theo:lemmaFlux} \end{lemma} \begin{proof} Let $\chi_v^e$ be equal to $1$ if a given edge $e \in \mathcal{E}_h^\mathcal{B}$ belongs to the subpatch $\omega_v$ centered at $x_v$ and $0$ otherwise. Hence, $$ \boldsymbol\sigma_{\Omega,N}^h|_e = \sum_{v=1}^{N_v}{\chi_v^e \boldsymbol\sigma_{N,v}^h} . $$ Let $\delta u^h \in (W_\Omega^{h,\kappa} \cdot \mathbf{n})|_e$ be a polynomial function of degree at most $\kappa$ on the edge $e \in \mathcal{E}_h^\mathcal{B}$. Owing to the condition on the normal trace $\boldsymbol\sigma_{N,v}^h \cdot \mathbf{n}_e$ in \eqref{eq:trialExtNeu}, we get $$ \langle \boldsymbol\sigma_{\Omega,N}^h \cdot \mathbf{n}_e, \delta u^h \rangle_e = \sum_{v=1}^{N_v}{\chi_v^e \langle \boldsymbol\sigma_{N,v}^h \cdot \mathbf{n}_e, \delta u^h \rangle_e} = \sum_{v=1}^{N_v}{\chi_v^e \langle \psi_v g, \delta u^h \rangle_e} = \langle g, \delta u^h \rangle_e , $$ where the last equality follows from the partition of unity property fulfilled by the functions $\psi_v$'s. The result is inferred by observing that the previous chain of equalities holds $\forall \delta u^h \in (W_\Omega^{h,\kappa} \cdot \mathbf{n})|_e \ , \ \forall e \in \mathcal{E}_h^\mathcal{B}$.
\end{proof} \subsubsection{Equilibrated fluxes for the adjoint equations} Following the same approach discussed above for the state problems, we define the equilibrated fluxes for the adjoint problems: \begin{defin} Let $r_{\Omega,i}^h \in V_\Omega^{h,m}$ be the solution of an adjoint problem \eqref{eq:adjointEIT} computed using Lagrangian finite element functions of degree $m$. Let $\kappa = \max \{ 0 , m-1 \}$ and let $\pi_Z^\kappa : L^2(\mathcal{D}) \rightarrow Z_\Omega^{h,\kappa}$ be the $L^2$-orthogonal projection operator onto the space $Z_\Omega^{h,\kappa}$ defined in the previous section. A function $\boldsymbol\xi_{\Omega,i}^h \in W_\Omega^{h,\kappa}$ is said to be an equilibrated flux for the problem \eqref{eq:adjointEIT} if \begin{equation} \nabla \cdot \boldsymbol\xi_{\Omega,i}^h + \pi_Z^\kappa r_{\Omega,i}^h = - \pi_Z^\kappa \Big( \nabla \cdot (k_\Omega \mathbf{M}(\boldsymbol\theta^h) \nabla u_{\Omega,i}^h) + \nabla \cdot \boldsymbol\theta^h \ u_{\Omega,i}^h \Big) . \label{eq:eqFLuxAdjoint} \end{equation} \label{def:fluxAdjoint} \end{defin} \noindent Bearing in mind that the adjoint equations are solved using $\mathbb{P}^2$ Lagrangian finite element functions - that is, $m=2$ - it follows that the equilibrated fluxes $\boldsymbol\xi_{\Omega,i}^h$'s are constructed via $RT_1$ Raviart-Thomas functions of degree $1$ and the operator $\pi_Z^\kappa$ projects functions from $L^2(\mathcal{D})$ to the discrete space of piecewise discontinuous finite elements of degree $1$. The computation of the equilibrated fluxes for the adjoint problems is again performed via the solution of a mixed finite element problem. We consider the same discrete spaces introduced in \eqref{eq:testExtNeu} to \eqref{eq:trial-testInt}, whereas the space $W_{N,v}^{h,\kappa}$ associated with the Neumann adjoint problem featuring a patch centered on a boundary node is $W_{v,0}^{h,\kappa}$.
Thus, for each subpatch $\omega_v \ , \ v=1,\ldots,N_v$ and for $i=N,D$, we seek $(\boldsymbol\xi_{i,v}^h,q_{i,v}^h) \in W_{i,v}^{h,\kappa} \times Z_{\omega_v}^{h,\kappa}$ such that $\forall (\boldsymbol\delta \boldsymbol\xi_i^h,\delta q_i^h) \in W_v^{h,\kappa} \times Z_{\omega_v}^{h,\kappa}$ \begin{equation} \begin{aligned} & \begin{aligned} \int_{\omega_v}\nabla \cdot \boldsymbol\xi_{i,v}^h \delta & q_i^h \ d\mathbf{x} + \int_{\omega_v}{q_{i,v}^h \delta q_i^h \ d\mathbf{x}} = \int_{\omega_v}{k_\Omega \mathbf{M}(\boldsymbol\theta^h) \nabla u_{\Omega,i}^h \cdot \nabla \psi_v \delta q_i^h \ d\mathbf{x}} \\ & - \int_{\omega_v}{\nabla \cdot \boldsymbol\theta^h u_{\Omega,i}^h \psi_v \delta q_i^h \ d\mathbf{x}} - \int_{\omega_v}{\Big(k_\Omega \nabla r_{\Omega,i}^h \cdot \nabla \psi_v + r_{\Omega,i}^h \psi_v \Big) \delta q_i^h \ d\mathbf{x}} , \end{aligned} \\ & \begin{aligned} \int_{\omega_v}{\boldsymbol\xi_{i,v}^h \cdot \boldsymbol\delta \boldsymbol\xi_i^h \ d\mathbf{x}} - \int_{\omega_v}{k_\Omega q_{i,v}^h \nabla \cdot \boldsymbol\delta \boldsymbol\xi_i^h \ d\mathbf{x}} = \int_{\omega_v}k_\Omega \psi_v & \mathbf{M}(\boldsymbol\theta^h) \nabla u_{\Omega,i}^h \cdot \boldsymbol\delta \boldsymbol\xi_i^h \ d\mathbf{x} \\ & - \int_{\omega_v}{k_\Omega \psi_v \nabla r_{\Omega,i}^h \cdot \boldsymbol\delta \boldsymbol\xi_i^h \ d\mathbf{x}} . \end{aligned} \end{aligned} \label{eq:mixedPBadjoint} \end{equation} The corresponding equilibrated fluxes $\boldsymbol\xi_{\Omega,i}^h$'s are obtained by extending the functions $\boldsymbol\xi_{i,v}^h$'s by zero in $\mathcal{D} \setminus \omega_v$ and by combining the previously computed local information: $$ \boldsymbol\xi_{\Omega,i}^h = \sum_{v=1}^{N_v} \boldsymbol\xi_{i,v}^h \quad , \quad i=N,D . $$ \begin{rmrk} A key aspect of the discussed procedure is represented by the local nature of the problems to be solved for the construction of the equilibrated fluxes. The advantage of this approach is twofold.
On the one hand, solving the local problems \eqref{eq:mixedPBstate}-\eqref{eq:mixedPBadjoint} is computationally inexpensive owing to the small size of the subpatches. On the other hand, every problem set on a subpatch is independent of the remaining ones; thus, it is straightforward to implement a version of the procedure that can efficiently exploit modern parallel architectures. \end{rmrk} \subsubsection{Goal-oriented equilibrated fluxes error estimator} \label{ref:goalFE} As previously stated, the construction of the error estimator for the shape gradient for the case of electrical impedance tomography reduces to the evaluation of \eqref{eq:Eu} for the Neumann and Dirichlet problems. For this purpose, we introduce respectively the quantities $\widetilde{E}^h_{u,_N}$ and $\widetilde{E}^h_{u,_D}$ and two parameters $\zeta_i$'s such that $\zeta_N \coloneqq 1$ and $\zeta_D \coloneqq 0$. By exploiting the formulation of the bilinear and linear forms \eqref{eq:a}-\eqref{eq:F} and adding the expression of the equilibrated fluxes \eqref{eq:eqFLuxState}, $\widetilde{E}^h_{u,_i}$ reads as: \begin{equation*} \begin{aligned} \widetilde{E}^h_{u,_i} \coloneqq F_{\Omega,i}(r_{\Omega,i}) - a_\Omega(& u_{\Omega,i}^h,r_{\Omega,i}) = \ \zeta_i \int_{\partial\mathcal{D}}{g r_{\Omega,i} \ ds} - \int_\mathcal{D}{k_\Omega \nabla u_{\Omega,i}^h \cdot \nabla r_{\Omega,i} d\mathbf{x}} \\ & \hspace{12pt} - \int_\mathcal{D}{u_{\Omega,i}^h r_{\Omega,i} d\mathbf{x}} + \int_\mathcal{D}{\Big( \nabla \cdot \boldsymbol\sigma_{\Omega,i}^h + \pi_Z^\kappa u_{\Omega,i}^h \Big) r_{\Omega,i} \ d\mathbf{x}} .
\end{aligned} \label{eq:GOAL1step} \end{equation*} Integrating by parts the last integral and owing to lemma \ref{theo:lemmaFlux} and to $r_{\Omega,D}=0 \ \text{on} \ \partial\mathcal{D}$, we obtain \begin{equation*} \begin{aligned} \widetilde{E}^h_{u,_i} = \ \zeta_i & \int_{\partial\mathcal{D}}{\Big( g - \pi_{W \cdot \mathbf{n}}^\kappa(g) \Big) r_{\Omega,i} \ ds} + \int_\mathcal{D}{\Big( \pi_Z^\kappa u_{\Omega,i}^h - u_{\Omega,i}^h \Big) r_{\Omega,i} \ d\mathbf{x}} \\ & - \int_\mathcal{D}{\Big( \boldsymbol\sigma_{\Omega,i}^h + k_\Omega \nabla u_{\Omega,i}^h \Big) \cdot \nabla r_{\Omega,i} \ d\mathbf{x}} . \end{aligned} \label{eq:GOAL2step} \end{equation*} By adding and subtracting the corresponding terms featuring the finite element counterparts $r_{\Omega,i}^h$'s of the adjoint solutions and owing to definition \ref{def:fluxAdjoint} of the equilibrated fluxes $\boldsymbol\xi_{\Omega,i}^h$'s, we are finally able to derive the expression of the errors $\widetilde{E}^h_{u,_i}$'s: \begin{equation} \begin{aligned} \widetilde{E}^h_{u,_i} = & \ \zeta_i \int_{\partial\mathcal{D}}{\Big( g - \pi_{W \cdot \mathbf{n}}^\kappa(g) \Big) r_{\Omega,i}^h \ ds} + \zeta_i \int_{\partial\mathcal{D}}{\Big( g - \pi_{W \cdot \mathbf{n}}^\kappa(g) \Big)(r_{\Omega,i} - r_{\Omega,i}^h) ds} \\ & + \int_\mathcal{D}{\Big( \pi_Z^\kappa u_{\Omega,i}^h - u_{\Omega,i}^h \Big) r_{\Omega,i}^h \ d\mathbf{x}} + \int_\mathcal{D}{\Big( \pi_Z^\kappa u_{\Omega,i}^h - u_{\Omega,i}^h \Big)(r_{\Omega,i} - r_{\Omega,i}^h) d\mathbf{x}} \\ & + \int_\mathcal{D}{\Big( \boldsymbol\sigma_{\Omega,i}^h + k_\Omega \nabla u_{\Omega,i}^h \Big) \cdot k_\Omega^{-1} \boldsymbol\xi_{\Omega,i}^h \ d\mathbf{x}} \\ & - \int_\mathcal{D}{\Big( \boldsymbol\sigma_{\Omega,i}^h + k_\Omega \nabla u_{\Omega,i}^h \Big) \cdot \Big(\nabla r_{\Omega,i} + k_\Omega^{-1} \boldsymbol\xi_{\Omega,i}^h\Big) d\mathbf{x}} . 
\end{aligned} \label{eq:GOALconformingFE} \end{equation} We remark that in \eqref{eq:GOALconformingFE} both the exact and the discretized solutions of the adjoint problems appear. In practice, in order to fully compute the quantity \eqref{eq:GOALconformingFE}, we substitute the exact solutions with their finite element counterparts $r_{\Omega,i}^h \in V_\Omega^{h,m}$ obtained by the high-order approximation of \eqref{eq:adjointEIT}. These approximate solutions are in turn replaced by the projection of the high-order approximations onto the space $V_\Omega^{h,\ell}$ of the low-order finite element functions used for the discretization of the state problems. Let $I_m^\ell: V_\Omega^{h,m} \rightarrow V_\Omega^{h,\ell}$ be the projection operator from the space of high-order approximations to the low-order one. The fully-computable version of the estimator of the quantity $\widetilde{E}^h_{u,_i}$ is obtained by substituting $r_{\Omega,i} - r_{\Omega,i}^h$ with $r_{\Omega,i}^h - I_m^\ell r_{\Omega,i}^h$ and $\nabla r_{\Omega,i}$ with $\nabla r_{\Omega,i}^h$ in \eqref{eq:GOALconformingFE}. By plugging the expressions of $\widetilde{E}^h_{u,_N}$ and $\widetilde{E}^h_{u,_D}$ arising from \eqref{eq:GOALconformingFE} into \eqref{eq:ERRvolumetricKV}, we obtain a computable expression of the error in the shape gradient and the bound $\overline{E}$ follows by considering its absolute value. \begin{rmrk} The goal-oriented error estimators constructed using the equilibrated fluxes approach are known to be asymptotically exact (cf. \cite{MR2899560}). Owing to the aforementioned asymptotic exactness, the term $\overline{E}$ tends to zero as the mesh size tends to zero. This property plays a crucial role since it guarantees that the mesh adaptation routine performed to certify the descent direction (cf. algorithm \ref{scpt:shape-opt-adaptive} - step 5) eventually leads to the fulfillment of condition \eqref{eq:certified}.
\end{rmrk} \section{Discontinuous Galerkin approximation} \label{ref:DiscontinuousGalerkin} In this section, we present an alternative strategy for the approximation of the EIT problem based on the symmetric weighted interior penalty discontinuous Galerkin (SWIP-dG) formulation. \\ We retain the notation introduced in section \ref{ref:ConformingFE} for the triangulation $\mathcal{T}_h$. The discontinuous Galerkin (dG) problems are solved within the space $$ V_\Omega^{h,\kappa} \coloneqq \{ u^h \in L^2(\mathcal{D}) \ : \ u^h|_T \in \mathbb{P}^\kappa(T) \ \forall T \in \mathcal{T}_h \} $$ of the discontinuous functions whose restriction to each element is a polynomial of degree at most $\kappa$. When dealing with dG formulations, discontinuous functions, such as those of the aforementioned space $V_\Omega^{h,\kappa}$, which are double-valued on $\mathcal{E}_h^\mathcal{I}$ and single-valued on $\mathcal{E}_h^\mathcal{B}$, have to be properly handled. We define the jump of $u^h$ across the edge $e$ shared by the elements $T^\pm(e)$ as \begin{equation*} \llbracket u^h \rrbracket_e \coloneqq u^h|_{T^-(e)} - u^h|_{T^+(e)} . \label{eq:jumpDG} \end{equation*} In a similar fashion, the weighted average of $u^h$ on $e \in \mathcal{E}_h^\mathcal{I}$ reads as follows: \begin{equation} \llbrace u^h \rrbrace_\alpha \coloneqq \alpha_{T^-(e),e} u^h|_{T^-(e)} + \alpha_{T^+(e),e} u^h|_{T^+(e)} , \label{eq:weightedAvDG} \end{equation} where the weights are non-negative quantities such that $\alpha_{T^-(e),e} + \alpha_{T^+(e),e} = 1$. On boundary edges, we set $\llbracket u^h \rrbracket_e = u^h |_e$, $\alpha_{T^-(e),e} = 1$ and $\llbrace u^h \rrbrace_\alpha = u^h$. Classical discontinuous Galerkin methods use arithmetic averages in \eqref{eq:weightedAvDG}, that is, the weights are constant and equal to $\alpha_{T^-(e),e} = \alpha_{T^+(e),e} = 1/2$ on all edges.
As stated in the introduction, in recent years there has been a growing interest in the so-called symmetric weighted interior penalty dG methods, especially when dealing with problems featuring inhomogeneous coefficients for the diffusion term (cf. \cite{doi:10.1137/050634736, MR2491426}). In particular, these methods rely on the definition of weights based on the information carried by the diffusion tensor. For the electrical impedance tomography problem under analysis, this results in the following weights based on the different values of the electrical conductivity: $$ \alpha_{T^-(e),e} \coloneqq \frac{k_\Omega|_{T^+(e)}}{k_\Omega|_{T^+(e)} + k_\Omega|_{T^-(e)}} \quad , \quad \alpha_{T^+(e),e} \coloneqq \frac{k_\Omega|_{T^-(e)}}{k_\Omega|_{T^+(e)} + k_\Omega|_{T^-(e)}} . $$ It is well-known in the literature \cite{MR2882148} that the bilinear form associated with discontinuous Galerkin methods may suffer from a lack of coercivity, thus preventing the discrete problem from having a unique solution. A widespread workaround (cf. \cite{Shahbazi2005401}) is represented by the interior penalty approach that introduces a \emph{sufficiently large} penalization in order to retrieve the coercivity of the discrete bilinear form. Owing to the idea of exploiting the information carried by the diffusion tensor to construct the weights for the jump term, we define the stabilization parameter in a similar way \cite{MR2427189}: $$ \gamma_e \coloneqq \beta_e \frac{k_\Omega|_{T^+(e)} k_\Omega|_{T^-(e)}}{k_\Omega|_{T^+(e)} + k_\Omega|_{T^-(e)}} $$ where $\beta_e > 0$ is a user-defined parameter. As for the conforming finite element approximation described in the previous section, first we introduce the discrete state and adjoint problems and then we construct the equilibrated fluxes via a procedure relying solely on local quantities.
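As an illustration of these definitions, the following Python sketch (with hypothetical conductivity values, not data from the paper) computes the diffusion-dependent weights and the stabilization parameter $\gamma_e$ for an edge shared by two elements. Note that the weights sum to one by construction and that $\gamma_e/\beta_e$ is half the harmonic mean of the two conductivities, so the penalty remains bounded when one conductivity is much larger than the other.

```python
# Hypothetical conductivities on the two elements sharing an edge e:
# k_Omega|_{T^-(e)} and k_Omega|_{T^+(e)}.
k_minus, k_plus = 1.0, 10.0

# Diffusion-dependent SWIP weights; each weight uses the conductivity
# from the *opposite* side, and the two weights sum to one.
alpha_minus = k_plus / (k_plus + k_minus)
alpha_plus = k_minus / (k_plus + k_minus)

# Weighted average of an edge-wise double-valued quantity.
def swip_average(u_minus, u_plus):
    return alpha_minus * u_minus + alpha_plus * u_plus

# Stabilization parameter gamma_e: beta_e times k+ k- / (k+ + k-),
# i.e. beta_e times half the harmonic mean of the two conductivities.
beta_e = 10.0  # hypothetical user-defined penalty factor
gamma_e = beta_e * (k_plus * k_minus) / (k_plus + k_minus)

print(alpha_minus, alpha_plus, gamma_e)
```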
As previously stated, a key aspect of this approach is represented by the choice of the degree of the approximating functions for both the solution of the problems and the equilibrated fluxes. The details of this choice will be discussed in the following subsections. For the sake of readability, from now on we will omit the subscript $e$ associated with jumps, weights and averages if there is no risk of ambiguity. \subsection{The state problems} In order to appropriately handle the terms involving the effect of the boundary data in the estimator of the error in the shape gradient, the boundary conditions have to be imposed using the same strategy in both the weak and the discrete formulation. Since the essential boundary conditions are classically imposed in a weak sense in discontinuous Galerkin methods, we consider an alternative formulation of \eqref{eq:a}-\eqref{eq:F} to weakly impose the Dirichlet boundary condition on $\partial\mathcal{D}$. Let $\zeta_N \coloneqq 1$ and $\zeta_D \coloneqq 0$. The bilinear forms $a_{\Omega,i}(\cdot,\cdot)$ and the linear ones $F_{\Omega,i}(\cdot)$ associated with problems \eqref{eq:statePB} coupled with the boundary conditions \eqref{eq:NeumannBC} and \eqref{eq:DirichletBC} respectively read as: \begin{equation} \begin{aligned} a_{\Omega,i}(u_{\Omega,i},\delta u) = & \int_\mathcal{D}{\Big(k_\Omega \nabla u_{\Omega,i} \cdot \nabla \delta u + u_{\Omega,i} \delta u \Big) d\mathbf{x}} \\ & - (1-\zeta_i) \int_{\partial\mathcal{D}}{\Big( k_\Omega \nabla u_{\Omega,i} \cdot \mathbf{n} \delta u + u_{\Omega,i} k_\Omega \nabla \delta u \cdot \mathbf{n} \Big) ds} \\ & + (1-\zeta_i) \int_{\partial\mathcal{D}}{\gamma u_{\Omega,i} \delta u \ ds} , \end{aligned} \label{eq:aWeak} \end{equation} \begin{equation} F_{\Omega,N}(\delta u) = \int_{\partial\mathcal{D}}{g \delta u \ ds} \quad , \quad F_{\Omega,D}(\delta u) = \int_{\partial\mathcal{D}} { U_D (\gamma \delta u - k_\Omega \nabla \delta u \cdot \mathbf{n}) ds} .
\label{eq:Fweak} \end{equation} We refer to appendix \ref{ref:essential} for the formal derivation of \eqref{eq:aWeak}-\eqref{eq:Fweak} in the Dirichlet case. The variational formulation of the state equations \eqref{eq:statePB} reads as follows: for $i=N,D$ we seek $u_{\Omega,i} \in H^1(\mathcal{D})$ such that \begin{equation*} a_{\Omega,i}(u_{\Omega,i},\delta u_i) = F_{\Omega,i}(\delta u_i) \quad \forall \delta u_i \in H^1(\mathcal{D}) . \label{eq:stateNitsche} \end{equation*} The corresponding discrete bilinear and linear forms arising from the interior penalty discontinuous Galerkin method have the following expressions: \begin{gather} \begin{aligned} a_{\Omega,i}^h(u_{\Omega,i}^h,\delta u^h) = & \sum_{T \in \mathcal{T}_h}{\int_T{\Big( k_{\Omega} \nabla u_{\Omega,i}^h \cdot \nabla \delta u^h + u_{\Omega,i}^h \delta u^h \Big) d\mathbf{x}}} \\ - & \sum_{e \in \mathcal{E}_h^\mathcal{I}}{\int_{e}{\Big( \mathbf{n}_e \cdot \llbrace k_{\Omega} \nabla u_{\Omega,i}^h \rrbrace_\alpha \llbracket \delta u^h \rrbracket + \llbracket u_{\Omega,i}^h \rrbracket \mathbf{n}_e \cdot \llbrace k_{\Omega} \nabla \delta u^h \rrbrace_\alpha \Big) ds}} \\ - & (1-\zeta_i) \sum_{e \in \mathcal{E}_h^\mathcal{B}}{ \int_{e}{\mathbf{n}_e \cdot \llbrace k_{\Omega} \nabla u_{\Omega,i}^h \rrbrace_\alpha \llbracket \delta u^h \rrbracket ds}} \\ - & (1-\zeta_i) \sum_{e \in \mathcal{E}_h^\mathcal{B}}{ \int_{e}{\llbracket u_{\Omega,i}^h \rrbracket \mathbf{n}_e \cdot \llbrace k_{\Omega} \nabla \delta u^h \rrbrace_\alpha ds}} \\ & + \sum_{e \in \mathcal{E}_h^\mathcal{I}}{ \int_{e}{\frac{\gamma_e}{|e|} \llbracket u_{\Omega,i}^h \rrbracket \llbracket \delta u^h \rrbracket ds}} + (1-\zeta_i) \sum_{e \in \mathcal{E}_h^\mathcal{B}}{ \int_{e}{\frac{\gamma_e}{|e|} \llbracket u_{\Omega,i}^h \rrbracket \llbracket \delta u^h \rrbracket ds}} , \end{aligned} \label{eq:aDG} \\ \begin{aligned} F_{\Omega,N}^h(\delta u^h) & = \int_{\partial\mathcal{D}}{g \delta u^h \ ds} , \\ F_{\Omega,D}^h(\delta u^h) = \sum_{e \in 
\mathcal{E}_h^\mathcal{B}} & { \int_{e}{U_D \Big(\frac{\gamma_e}{|e|} \delta u^h - k_\Omega \nabla \delta u^h \cdot \mathbf{n}_e \Big) ds}} . \end{aligned} \notag \end{gather} Thus, the SWIP-dG problem reads: we seek $u_{\Omega,N}^h, u_{\Omega,D}^h \in V_\Omega^{h,\ell}$ such that \begin{equation} a_{\Omega,i}^h(u_{\Omega,i}^h,\delta u_i^h) = F_{\Omega,i}^h(\delta u_i^h) \quad \forall \delta u_i^h \in V_\Omega^{h,\ell} . \label{eq:stateSWIP} \end{equation} Concerning the degree of the discontinuous Galerkin approximating functions, we maintain the same choice previously presented for the conforming finite element discretization, that is, a low-order approximation based on piecewise linear polynomials ($\ell=1$). In a similar fashion, the computation of the descent direction $\boldsymbol\theta^h$ is performed by means of the conforming discretization using the space of $\mathbb{P}^1 \times \mathbb{P}^1$ Lagrangian finite element functions discussed in section \ref{ref:ConformingFE}. \subsection{The adjoint problems} The symmetric weighted interior penalty discontinuous Galerkin formulation of the adjoint problems may be derived following the same procedure used for the state problems. In particular, the bilinear forms in \eqref{eq:aDG} are also valid for the Neumann and Dirichlet adjoint problems.
The corresponding linear forms for $i=N,D$ read as \begin{equation*} \begin{aligned} H_{\Omega,i}^h(\delta r^h) = & \sum_{T \in \mathcal{T}_h}{\int_T{\Big(k_\Omega \mathbf{M}(\boldsymbol\theta^h) \nabla u_{\Omega,i}^h \cdot \nabla \delta r^h - \nabla \cdot \boldsymbol\theta^h \ u_{\Omega,i}^h \delta r^h \Big) d\mathbf{x}}} \\ & - \sum_{e \in \mathcal{E}_h^\mathcal{I}}{\int_{e}{\mathbf{n}_e \cdot \llbrace k_{\Omega} \mathbf{M}(\boldsymbol\theta^h) \nabla u_{\Omega,i}^h \rrbrace_\alpha \llbracket \delta r^h \rrbracket ds}} \\ & - \sum_{e \in \mathcal{E}_h^\mathcal{I}}{\int_{e}{\llbracket k_{\Omega} \mathbf{M}(\boldsymbol\theta^h) \nabla u_{\Omega,i}^h \rrbracket \mathbf{n}_e \cdot \llbrace \delta r^h \rrbrace_\alpha ds}} \\ & - (1-\zeta_i) \int_{\partial\mathcal{D}}{ k_{\Omega} \mathbf{M}(\boldsymbol\theta^h) \nabla u_{\Omega,i}^h \cdot \mathbf{n} \ \delta r^h \ ds} . \end{aligned} \label{eq:FadjDG} \end{equation*} The discretized solutions of the adjoint problems are the functions $r_{\Omega,i}^h \in V_\Omega^{h,m}$ such that $\forall \delta r_i^h \in V_\Omega^{h,m}$ \begin{equation} a_{\Omega,i}^h(\delta r_i^h,r_{\Omega,i}^h) = H_{\Omega,i}^h(\delta r_i^h) \quad , \quad i=N,D . \label{eq:adjointSWIP} \end{equation} It is straightforward to verify that the SWIP-dG formulation of the adjoint problems is consistent, that is, \eqref{eq:adjointSWIP} still holds when the analytical solutions $r_{\Omega,i}$'s are substituted for their discretized counterparts $r_{\Omega,i}^h$'s (cf. \cite{MR2882148}). As previously stated, this property plays a crucial role in the construction of discretizations of optimal order in terms of target functionals and we refer to \cite{MR2361907} for a detailed presentation of this subject. In order to obtain a higher-order approximation of the adjoint problems, we consider $m=2$, as for the case of the conforming finite element approximation in section \ref{ref:ConformingFE}.
\subsection{Estimate of the error in the shape gradient via the equilibrated fluxes} In this section we construct the equilibrated fluxes associated with the discontinuous Galerkin approximations \eqref{eq:stateSWIP} and \eqref{eq:adjointSWIP} and we derive the corresponding goal-oriented estimator of the error in the shape gradient. Following the procedure introduced for the case of the conforming finite element discretization, this problem reduces to estimating the quantity \eqref{eq:ERRvolumetricKV}. \subsubsection{Equilibrated fluxes for the state equations} We introduced the notion of equilibrated fluxes for the state problems in definition \ref{def:fluxState}. In particular, for each problem we aim to construct an $\Hdiv$-conforming flux $\boldsymbol\sigma_{\Omega,i}^h \in W_\Omega^{h,\kappa}$ such that \eqref{eq:eqFLuxState} holds. We recall that the state problems are approximated using discontinuous Galerkin functions of degree $\ell=1$; thus, the fluxes are reconstructed using $RT_0$ finite element functions ($\kappa=0$).
Owing to the nature of the degrees of freedom of the lowest-order Raviart-Thomas finite element functions, the construction of the equilibrated fluxes is straightforward via the prescription of the normal fluxes on all the edges: \begin{align} & \begin{aligned} \int_e{\boldsymbol\sigma_{\Omega,i}^h \cdot \mathbf{n}_e \ \delta t^h \ ds} = \int_e \Big( \frac{\gamma_e}{|e|} \llbracket u_{\Omega,i}^h \rrbracket - \mathbf{n}_e \cdot \llbrace & k_\Omega \nabla u_{\Omega,i}^h \rrbrace_\alpha \Big) \delta t^h \ ds \quad , \\ & \ \forall \delta t^h \in \mathbb{P}^\kappa(e) \ \ \forall e \in \mathcal{E}_h^\mathcal{I} \end{aligned} \label{eq:internalFluxState} \\ & \begin{aligned} \int_e{\boldsymbol\sigma_{\Omega,i}^h \cdot \mathbf{n}_e \ \delta t^h \ ds} = (1-\zeta_i & ) \int_e{\Big( \frac{\gamma_e}{|e|} (u_{\Omega,i}^h - U_D) - k_\Omega \nabla u_{\Omega,i}^h \cdot \mathbf{n}_e \Big) \delta t^h \ ds} \\ & - \zeta_i \int_e{g \ \delta t^h \ ds} \quad , \quad \forall \delta t^h \in \mathbb{P}^\kappa(e) \ \ \forall e \in \mathcal{E}_h^\mathcal{B} . \end{aligned} \label{eq:boundaryFluxState} \end{align} \subsubsection{Equilibrated fluxes for the adjoint equations} In an analogous way, we may construct the equilibrated fluxes for the adjoint problems. We remark that owing to the higher-order approximation of \eqref{eq:adjointSWIP} with respect to \eqref{eq:stateSWIP} - i.e. $m=2$ -, the equilibrated fluxes $\boldsymbol\xi_{\Omega,i}^h$'s in definition \ref{def:fluxAdjoint} are sought in the space $W_\Omega^{h,\kappa} \ , \ \kappa=1$. 
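Returning for a moment to the state fluxes, in the lowest-order case $\kappa=0$ the prescription \eqref{eq:internalFluxState} on an internal edge reduces to a single scalar evaluation per edge. The following Python sketch (with hypothetical edge data, not data from the paper) makes this explicit.

```python
# Hypothetical data on an internal edge e (kappa = 0: one normal-flux
# degree of freedom per edge).
u_minus, u_plus = 1.2, 0.8       # traces of u^h on T^-(e) and T^+(e)
gn_minus, gn_plus = -0.5, -0.3   # normal components n_e . (k_Omega grad u^h)
k_minus, k_plus = 1.0, 10.0      # conductivities on the two elements
gamma_e, edge_len = 9.0, 0.1     # penalty gamma_e and edge length |e|

# SWIP weights and weighted average of the normal diffusive flux.
alpha_minus = k_plus / (k_plus + k_minus)
alpha_plus = k_minus / (k_plus + k_minus)
avg_flux = alpha_minus * gn_minus + alpha_plus * gn_plus

# Jump of u^h across e.
jump_u = u_minus - u_plus

# Prescribed RT_0 normal trace of the equilibrated flux on e
# (eq:internalFluxState): gamma_e/|e| * [u^h] - n_e . {k grad u^h}_alpha.
sigma_n = gamma_e / edge_len * jump_u - avg_flux
print(sigma_n)
```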
The $RT_1$ reconstructed fluxes are such that \begin{align*} & \begin{aligned} \int_e{\boldsymbol\xi_{\Omega,i}^h \cdot \mathbf{n}_e \ \delta q_1^h \ ds} = \int_e \Big( \frac{\gamma_e}{|e|} \llbracket r_{\Omega,i}^h \rrbracket - \mathbf{n}_e \cdot \llbrace & k_\Omega \nabla r_{\Omega,i}^h \rrbrace_\alpha \Big) \delta q_1^h \ ds \quad , \\ & \ \forall \delta q_1^h \in \mathbb{P}^\kappa(e) \ \ \forall e \in \mathcal{E}_h^\mathcal{I} \end{aligned} \\ & \begin{aligned} \int_e{\boldsymbol\xi_{\Omega,i}^h \cdot \mathbf{n}_e \ \delta q_1^h \ ds} = \ (1-\zeta_i) & \int_e{\Big( \frac{\gamma_e}{|e|} r_{\Omega,i}^h - k_\Omega \nabla r_{\Omega,i}^h \cdot \mathbf{n}_e \Big) \delta q_1^h \ ds} \\ - & \zeta_i \int_e{k_\Omega \mathbf{M}(\boldsymbol\theta^h) \nabla u_{\Omega,i}^h \cdot \mathbf{n}_e \ \delta q_1^h \ ds} \quad , \\ & \hspace{55pt} \forall \delta q_1^h \in \mathbb{P}^\kappa(e) \ \ \forall e \in \mathcal{E}_h^\mathcal{B} & \end{aligned} \\ & \begin{aligned} \int_T \boldsymbol\xi_{\Omega,i}^h \cdot \boldsymbol\delta \mathbf{q}_2^h \ d\mathbf{x} = & - \int_T{ k_\Omega \nabla r_{\Omega,i}^h \cdot \boldsymbol\delta \mathbf{q}_2^h \ d\mathbf{x}} \\ & + \sum_{e \subset \partial T \setminus \mathcal{E}_h^\mathcal{B}}{\alpha_{T(e),e} \int_e{k_\Omega \llbracket r_{\Omega,i}^h \rrbracket \boldsymbol\delta \mathbf{q}_2^h \cdot \mathbf{n}_e \ ds}} \\ & + (1-\zeta_i) \sum_{e \subset \partial T \cap \mathcal{E}_h^\mathcal{B}}{\int_e{k_\Omega r_{\Omega,i}^h \boldsymbol\delta \mathbf{q}_2^h \cdot \mathbf{n}_e \ ds}} \quad , \\ & \hspace{60pt} \forall \boldsymbol\delta \mathbf{q}_2^h \in [\mathbb{P}^{\kappa-1}(T)]^d \ \ \forall T \in \mathcal{T}_h . \end{aligned} \end{align*} \begin{rmrk} The flux reconstruction procedure presented for both the state and adjoint equations relies solely on the computation of local quantities and is computationally inexpensive. 
A great advantage of the discontinuous Galerkin framework is represented by the cheap algorithms to construct the equilibrated fluxes on an element-wise level as discussed by several authors, e.g. in \cite{MR2376644,MR2261011,MR3018142,MR3249368}. As previously remarked for the construction of the equilibrated fluxes in the case of conforming finite element discretizations, the local nature of the procedure allows the parallelization of the algorithm and the exploitation of modern parallel architectures. \end{rmrk} \subsubsection{Goal-oriented equilibrated fluxes error estimator} \label{ref:goalDG} We may now evaluate the term \eqref{eq:Eu} for the Neumann and Dirichlet problems by exploiting the information carried by \eqref{eq:aDG} and \eqref{eq:Fweak}. We recall that the symmetric weighted interior penalty discontinuous Galerkin method under analysis is adjoint consistent (cf. \cite{MR2882148}). Owing to the continuity of $r_{\Omega,i}$ and $k_\Omega \nabla r_{\Omega,i} \cdot \mathbf{n}_e$ on all the edges $e$'s and adding the expression of the equilibrated fluxes \eqref{eq:eqFLuxState}, we obtain: \begin{equation*} \begin{aligned} \widetilde{E}^h_{u,_i} \coloneqq & F_{\Omega,i}(r_{\Omega,i}) - a_{\Omega,i}^h(u_{\Omega,i}^h,r_{\Omega,i}) \\ = & \ \zeta_i \int_{\partial\mathcal{D}}{g r_{\Omega,i} \ ds} + (1-\zeta_i) \int_{\partial\mathcal{D}}{U_D (\gamma r_{\Omega,i} - k_\Omega \nabla r_{\Omega,i} \cdot \mathbf{n} ) ds} \\ & - \sum_{T \in \mathcal{T}_h}{\int_T{\Big( k_{\Omega} \nabla u_{\Omega,i}^h \cdot \nabla r_{\Omega,i} + u_{\Omega,i}^h r_{\Omega,i} \Big) d\mathbf{x}}} \\ & + \sum_{e \in \mathcal{E}_h^\mathcal{I}}{\int_{e}{ \llbracket u_{\Omega,i}^h \rrbracket k_{\Omega} \nabla r_{\Omega,i} \cdot \mathbf{n}_e \ ds}} \\ & + (1-\zeta_i) \sum_{e \in \mathcal{E}_h^\mathcal{B}}{ \int_{e}{ \llbracket u_{\Omega,i}^h \rrbracket k_{\Omega} \nabla r_{\Omega,i} \cdot \mathbf{n}_e \ ds}} \\ & + \sum_{T \in \mathcal{T}_h}{\int_T{\Big( \nabla \cdot 
\boldsymbol\sigma_{\Omega,i}^h + \pi_Z^\kappa u_{\Omega,i}^h \Big) r_{\Omega,i} \ d\mathbf{x}}} . \end{aligned} \label{eq:GOAL1dg} \end{equation*} We integrate by parts the last integral and we plug in the expressions \eqref{eq:internalFluxState}-\eqref{eq:boundaryFluxState} of the equilibrated fluxes for the state problems. It follows from the homogeneous Dirichlet condition fulfilled by the adjoint solution $r_{\Omega,D}$ on $\partial\mathcal{D}$ that \begin{equation*} \begin{aligned} \widetilde{E}^h_{u,_i} = & \zeta_i \int_{\partial\mathcal{D}}{\Big( g - \pi_{W \cdot \mathbf{n}}^\kappa(g) \Big) r_{\Omega,i} \ ds} + (1-\zeta_i) \int_{\partial\mathcal{D}}{ (u_{\Omega,i}^h - U_D) k_{\Omega} \nabla r_{\Omega,i} \cdot \mathbf{n} \ ds} \\ & + \sum_{e \in \mathcal{E}_h^\mathcal{I}}{\int_{e}{ \Big( \llbracket u_{\Omega,i}^h \rrbracket k_{\Omega} \nabla r_{\Omega,i} \cdot \mathbf{n}_e + \llbracket \boldsymbol\sigma_{\Omega,i}^h \cdot \mathbf{n}_e \rrbracket r_{\Omega,i} \Big) \ ds}} \\ & + \sum_{T \in \mathcal{T}_h}{\int_T{\Big( \pi_Z^\kappa u_{\Omega,i}^h - u_{\Omega,i}^h \Big) r_{\Omega,i} \ d\mathbf{x}}} \\ & - \sum_{T \in \mathcal{T}_h}{\int_T{\Big( \boldsymbol\sigma_{\Omega,i}^h + k_\Omega \nabla u_{\Omega,i}^h \Big) \cdot \nabla r_{\Omega,i} \ d\mathbf{x}}} . \end{aligned} \label{eq:GOAL2dg} \end{equation*} We remark that owing to the continuity of the normal traces of the fluxes, $\llbracket \boldsymbol\sigma_{\Omega,i}^h \cdot \mathbf{n}_e \rrbracket = 0$ for all the internal edges. 
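To fix ideas about the error representation $\widetilde{E}^h_{u,i} = F_{\Omega,i}(r_{\Omega,i}) - a_{\Omega,i}^h(u_{\Omega,i}^h,r_{\Omega,i})$ used in this derivation, the following minimal one-dimensional sketch (our own illustration, not part of the paper's FreeFem++ setup) reproduces the analogous identity for $-u''=f$ on $(0,1)$ with homogeneous Dirichlet conditions and the goal $J(u)=\int_0^1 u\,dx$: the adjoint solves $-r''=1$, i.e. $r(x)=x(1-x)/2$, and the adjoint-weighted residual $F(r)-a(u_h,r)$ equals the error in the goal exactly.

```python
import numpy as np

# 1D analogue of the goal-oriented error identity E = F(r) - a(u_h, r):
# -u'' = f on (0,1), u(0)=u(1)=0, goal J(u) = \int_0^1 u dx.
# Exact: u(x) = sin(pi x) for f(x) = pi^2 sin(pi x); adjoint r(x) = x(1-x)/2.
f = lambda x: np.pi**2 * np.sin(np.pi * x)
u = lambda x: np.sin(np.pi * x)
r = lambda x: 0.5 * x * (1.0 - x)

nodes = np.linspace(0.0, 1.0, 11)   # coarse P1 mesh
uh = u(nodes)                        # P1 interpolant as surrogate for u_h

# J(u_h): the trapezoid rule integrates the piecewise-linear u_h exactly
J_uh = np.trapz(uh, nodes)
J_u = 2.0 / np.pi                    # \int_0^1 sin(pi x) dx
exact_goal_error = J_u - J_uh

# adjoint-weighted residual: F(r) - a(u_h, r) = \int f r - \int u_h' r'
xq = np.linspace(0.0, 1.0, 200001)   # fine quadrature grid for \int f r
F_r = np.trapz(f(xq) * r(xq), xq)
# \int u_h' r' element-wise: u_h' is constant, \int_e r' = r(x_{i+1}) - r(x_i)
slopes = np.diff(uh) / np.diff(nodes)
a_uh_r = np.sum(slopes * np.diff(r(nodes)))
estimated_error = F_r - a_uh_r

print(exact_goal_error, estimated_error)
```

Since $a(u-u_h,r)=\int_0^1 (u-u_h)\,dx = J(u)-J(u_h)$ by integration by parts, the two printed numbers coincide up to quadrature error; in the paper, the unknown adjoint is likewise the only non-computable ingredient of the representation.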
By adding and subtracting the terms $r_{\Omega,i}^h$'s featuring the discontinuous Galerkin approximations of the adjoint solutions and taking into account their equilibrated fluxes $\boldsymbol\xi_{\Omega,i}^h$'s, the expressions of the errors $\widetilde{E}^h_{u,_i}$'s read as: \begin{equation} \begin{aligned} \widetilde{E}^h_{u,_i} = & \ \zeta_i \int_{\partial\mathcal{D}}{\Big( g - \pi_{W \cdot \mathbf{n}}^\kappa(g) \Big) r_{\Omega,i}^h \ ds} + \zeta_i \int_{\partial\mathcal{D}}{\Big( g - \pi_{W \cdot \mathbf{n}}^\kappa(g) \Big)(r_{\Omega,i} - r_{\Omega,i}^h) ds} \\ & - (1-\zeta_i) \int_{\partial\mathcal{D}}{ (u_{\Omega,i}^h - U_D) \boldsymbol\xi_{\Omega,i}^h \cdot \mathbf{n} \ ds} \\ & + (1-\zeta_i) \int_{\partial\mathcal{D}}{ (u_{\Omega,i}^h - U_D) \Big(k_\Omega \nabla r_{\Omega,i} + \boldsymbol\xi_{\Omega,i}^h \Big) \cdot \mathbf{n} \ ds} \\ & - \sum_{e \in \mathcal{E}_h^\mathcal{I}}{\int_{e}{ \llbracket u_{\Omega,i}^h \rrbracket \boldsymbol\xi_{\Omega,i}^h \cdot \mathbf{n}_e \ ds}} + \sum_{e \in \mathcal{E}_h^\mathcal{I}}{\int_{e}{ \llbracket u_{\Omega,i}^h \rrbracket \Big(k_\Omega \nabla r_{\Omega,i} + \boldsymbol\xi_{\Omega,i}^h \Big) \cdot \mathbf{n}_e \ ds}} \\ & + \sum_{T \in \mathcal{T}_h}{\int_T{\Big( \pi_Z^\kappa u_{\Omega,i}^h - u_{\Omega,i}^h \Big) r_{\Omega,i}^h \ d\mathbf{x}}} \\ & + \sum_{T \in \mathcal{T}_h}{\int_T{\Big( \pi_Z^\kappa u_{\Omega,i}^h - u_{\Omega,i}^h \Big)(r_{\Omega,i} - r_{\Omega,i}^h) d\mathbf{x}}} \\ & + \sum_{T \in \mathcal{T}_h}{\int_T{\Big( \boldsymbol\sigma_{\Omega,i}^h + k_\Omega \nabla u_{\Omega,i}^h \Big) \cdot k_\Omega^{-1} \boldsymbol\xi_{\Omega,i}^h \ d\mathbf{x}}} \\ & - \sum_{T \in \mathcal{T}_h}{\int_T{\Big( \boldsymbol\sigma_{\Omega,i}^h + k_\Omega \nabla u_{\Omega,i}^h \Big) \cdot \Big(\nabla r_{\Omega,i} + k_\Omega^{-1} \boldsymbol\xi_{\Omega,i}^h\Big) d\mathbf{x}}} . 
\end{aligned} \label{eq:GOAL3dg} \end{equation} As already remarked for the estimator derived in the case of the conforming finite element discretization, both the unknown exact solutions of the adjoint problems and their numerical counterparts appear in \eqref{eq:GOAL3dg}. Let $I_m^\ell: V_\Omega^{h,m} \rightarrow V_\Omega^{h,\ell}$ be the projection operator from the space of high-order discontinuous Galerkin approximations to the low-order one. The fully-computable version of the estimator of the quantity $\widetilde{E}^h_{u,_i}$ is obtained by substituting $r_{\Omega,i} - r_{\Omega,i}^h$ with $r_{\Omega,i}^h - I_m^\ell r_{\Omega,i}^h$ and $\nabla r_{\Omega,i}$ with $\nabla r_{\Omega,i}^h$. \\ Finally, the upper bound $\overline{E}$ of the error in the shape gradient is obtained by plugging the expressions of $\widetilde{E}^h_{u,_N}$ and $\widetilde{E}^h_{u,_D}$ arising from \eqref{eq:GOAL3dg} into \eqref{eq:ERRvolumetricKV} and by taking its absolute value. \begin{rmrk} In \cite{MR3327021}, the authors prove that the contribution of the terms in \eqref{eq:GOAL3dg} featuring the exact solution of the adjoint problems is negligible and that the goal-oriented error estimator constructed via the previously described equilibrated fluxes approach is asymptotically exact. This property guarantees that the bound $\overline{E}$ of the error in the shape gradient tends to zero as the mesh size is reduced. Hence, the mesh adaptation procedure performed by the certified descent algorithm eventually leads to the fulfillment of condition \eqref{eq:certified}. \end{rmrk} \section{Numerical results} \label{ref:numerics} In this section we present some numerical results obtained by applying the certified descent algorithm based on the equilibrated fluxes approach for the estimation of the error in the shape gradient.
We consider the problem of electrical impedance tomography (EIT) as a proof of concept to establish some properties of this variant of the certified descent algorithm on a non-trivial scalar test case. Shape optimization methods are known to provide poor reconstructions for ill-posed inverse problems such as EIT. Within this framework, the certified descent algorithm does not aim to remedy the issue of local minima, but may act as a counterexample confirming the limitations of gradient-based strategies when dealing with ill-posed problems. The current work presents an improvement of the original certified descent algorithm introduced in \cite{giacomini:hal-01201914}, in particular because it uses solely local quantities to compute the error in the shape gradient required by the certification procedure. The numerical results in this section focus on the quantitative bound $\overline{E}$ obtained using the equilibrated fluxes approach for both conforming finite element and discontinuous Galerkin discretizations. The simulations are obtained using FreeFem++ \cite{MR3043640} and are based on a mesh moving approach for the deformation of the domain. It is well-known in the literature (cf. e.g. \cite{smo-AP}) that numerical shape optimization may result in poor optimal shapes featuring high-frequency oscillations of the boundaries, whose length scale is comparable with the mesh size \cite{giacomini:hal-01201914}. In order to remedy these mesh-regularity issues, several strategies have been proposed. A possible workaround relies on introducing a regularizing term in the cost functional through a perimeter penalization \cite{MR2270119}. Nonetheless, this strategy strongly depends on the weight of the penalty parameter, which may be difficult to tune.
In order to construct a more general and automatic optimization algorithm, we resort to a two-mesh strategy \cite{smo-AP} which explicitly smoothes the boundary of the shape at each iteration: after solving the state problems on a fine mesh, in order to properly capture all the important features of the solutions, a coarser mesh is extracted and a descent direction is computed; then, the nodes of the coarser mesh are displaced and a new fine mesh is obtained from the coarser one via a mesh adaptation procedure. This latter step may be performed either through a uniform mesh refinement or by means of an adaptation routine that exploits the information of the last computed solution of the state problem. We remark that deforming the coarse mesh prevents oscillations of the new boundary, so that a regularization is directly introduced into the problem. Since this strategy depends solely on the projection of the finite element solution of the state problems from the fine mesh to the coarse one, the overall approach results in numerically more stable computations and more regular optimal shapes, without introducing any additional parameter \cite{smo-AP}. \\ Within the previously described framework, changes in the topology of the shape are not allowed: the correct number of inclusions has to be set at the beginning of the algorithm and remains the same throughout its evolution. Techniques based on both topological and shape derivatives to account for topological changes inside the domain have been investigated e.g. in \cite{Hintermuller2008}. \subsection{Numerical assessment of the goal-oriented estimator} \label{ref:validation} In order to evaluate the goal-oriented error estimators derived in sections \ref{ref:goalFE} and \ref{ref:goalDG}, we consider a configuration for which the analytical solution of the state problems is known.
We introduce the polar coordinate system $(\rho,\vartheta)$ and we set $\mathcal{D} \coloneqq \{ \mathbf{x}=(x,y) \ | \ x^2+y^2 \leq \rho_E^2 \}$ and $\Omega \coloneqq \{ \mathbf{x}=(x,y) \ | \ x^2+y^2 \leq \rho_I^2 \}$ with $\rho_I=4$ and $\rho_E=5$. The value of the conductivity parameter is $k_I=10$ inside $\Omega$ and $k_E=1$ in $\mathcal{D} \setminus \Omega$. We consider the Neumann boundary condition $g=\cos(M \vartheta)$ with $M=5$, and the Dirichlet datum $U_D$ is the trace of the following function, which is the analytical solution of problem \eqref{eq:statePB}: $$ u_{\Omega,N} = \begin{cases} C_0 J_M \left(-i \rho k_I^{-\frac{1}{2}}\right)\cos(M \vartheta) \ & , \ \rho \in [0,\rho_I]\\ \left[ C_1 J_M \left(-i \rho k_E^{-\frac{1}{2}}\right) + C_2 Y_M \left(-i \rho k_E^{-\frac{1}{2}}\right) \right] \cos(M \vartheta) \ & , \ \rho \in (\rho_I,\rho_E] \end{cases} $$ where $J_M(\cdot)$ and $Y_M(\cdot)$ represent the first- and second-kind Bessel functions of order $M$, respectively. The constants $C_0,\ldots,C_2$ are detailed in table \ref{tab:constants}. \begin{table} \centering \begin{tabular}[hbt]{| c || l | l |} \hline Constant & $\mathrm{Re}[C_i]$ & $\mathrm{Im}[C_i]$ \\ \hline & & \\ [-1em] \hline $C_0$ & $-6.3 \cdot 10^{-9}$ & $+40.39491005$ \\ \hline $C_1$ & $+1.30145994$ & $+0.325482825$ \\ \hline $C_2$ & $+1.5 \cdot 10^{-11}$ & $-1.301459935$ \\ \hline \end{tabular} \caption{Constants for the analytical solution.} \label{tab:constants} \end{table} We recall that for both the conforming finite element and the discontinuous Galerkin discretizations we have $\ell=1$ and $m=2$, that is, the state problems are solved using functions of degree $1$, whereas the adjoint solutions are approximated using functions of degree $2$. The corresponding equilibrated fluxes are sought in the spaces of $RT_0$ and $RT_1$ finite element functions, respectively.
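The analytical solution above can be evaluated numerically; the following sketch (our own illustration, using the SciPy Bessel functions $J_M$ and $Y_M$, which accept complex arguments, and the constants reported in table \ref{tab:constants}) checks only the structural property that $u_{\Omega,N}$ vanishes wherever $\cos(M\vartheta)=0$, e.g. at $\vartheta=\pi/2$ for $M=5$.

```python
import numpy as np
from scipy.special import jv, yv  # Bessel J_M, Y_M; both accept complex arguments

M, k_I, k_E, rho_I = 5, 10.0, 1.0, 4.0
C0 = -6.3e-9 + 40.39491005j
C1 = +1.30145994 + 0.325482825j
C2 = +1.5e-11 - 1.301459935j

def u_exact(rho, theta):
    """Analytical state solution u_{Omega,N} in polar coordinates."""
    if rho <= rho_I:
        radial = C0 * jv(M, -1j * rho / np.sqrt(k_I))
    else:
        z = -1j * rho / np.sqrt(k_E)
        radial = C1 * jv(M, z) + C2 * yv(M, z)
    return radial * np.cos(M * theta)

# The angular factor cos(M*theta) vanishes at odd multiples of pi/(2M),
# so u_exact is (numerically) zero at theta = pi/2 in both subdomains.
print(u_exact(3.0, np.pi / 2), u_exact(4.5, np.pi / 2))
```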
\\ Figure \ref{fig:QoIrateFE} presents the convergence history of the discretization error in the shape gradient and of the goal-oriented estimator $\overline{E}$ for the case of conforming finite element. The corresponding quantities for the case of discontinuous Galerkin are depicted in figure \ref{fig:QoIrateDG}. The analytical error is computed by substituting in \eqref{eq:errorH} the expression \eqref{eq:volumetricKV} evaluated using respectively the analytical solution introduced at the beginning of this subsection and the numerical solutions arising from the conforming finite element and the discontinuous Galerkin discretizations. \\ Finally, in figure \ref{fig:effIdx} we present the effectivity indices for the discussed discretizations. The effectivity index $\eta$ is defined as the ratio between the estimator and the exact error, that is $\eta \coloneqq \overline{E}/E^h$. If the effectivity index is bigger (respectively smaller) than $1$, one is overestimating (respectively underestimating) the error. The evolution of the effectivity indices in figure \ref{fig:effIdx} confirms that the constructed estimators are guaranteed - that is, they provide an upper bound of the error since $\eta > 1$ - and asymptotically exact, since $\eta \searrow 1$ as the mesh size tends to $0$. \begin{figure} \caption{Convergence rates and effectivity indices of the estimators of the error in the shape gradient with respect to the number of degrees of freedom. Analytical error in the shape gradient (solid black); goal-oriented estimator of the error based on the equilibrated fluxes (dashed gray squares) for the discretizations based on (a) conforming finite element and (b) discontinuous Galerkin. (c) Effectivity indices for the conforming finite element (dark gray squares) and discontinuous Galerkin (light gray circles).
} \label{fig:QoIrateFE} \label{fig:QoIrateDG} \label{fig:effIdx} \label{fig:convergences} \end{figure} \subsection{Reconstruction of a single inclusion} \label{ref:1incl} We consider the problem of reconstructing the inclusion $\Omega$ defined in the previous section by means of a pair $(g,U_D)$ of measurements on the external boundary $\partial\mathcal{D}$. In the following simulations, we consider a stopping criterion that combines the condition in step 8 of algorithm \ref{scpt:shape-opt-adaptive} and a bound on the number of admissible mesh elements - i.e. the size of the state and the adjoint problems. This choice is due to the ill-posed nature of the electrical impedance tomography problem that we chose as test case for the certified descent algorithm. As we will highlight throughout this section, the ill-posedness of the problem represents an issue that prevents gradient-based strategies from efficiently solving the EIT problem, since very high precision is demanded after a few iterations of the optimization procedure. First, we consider the configuration described in figure \ref{fig:circleInterface}. The initial guess for the inclusion is represented by the circle of radius $\rho_{\text{ini}}=2$. The certified descent algorithm is able to correctly identify the interface along which the conductivity parameter $k_\Omega$ is discontinuous (Fig. \ref{fig:circleInterface}). Moreover, figure \ref{fig:circleObj} shows that the objective functional $J(\Omega)$ is monotonically decreasing, meaning that a genuine descent direction is computed at each iteration of the algorithm. In tables \ref{tab:circleConvFE}-\ref{tab:circleConvDG} we present the details of the meshes used to certify the descent direction at several iterations of the CDA, whereas in figure \ref{fig:meshCircle} different meshes generated by the algorithm for conforming finite element and discontinuous Galerkin discretizations are proposed.
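The certification logic combined with the stopping criterion above can be sketched as follows; this is a toy model of our own (the function name, the quadratic error model $\overline{E}(h)=h^2$ and all numbers are invented for illustration, not taken from the EIT code): a direction is accepted once $\langle d_h J,\boldsymbol\theta^h\rangle + \overline{E} < 0$, i.e. once the error bound cannot flip the sign of the approximated shape derivative, and the mesh is refined otherwise.

```python
# Toy sketch of the certification step (hypothetical model, not the paper's code):
# accept the direction when dJ_h + E_bar < 0; otherwise refine and re-certify.
# We mimic a second-order bound E_bar(h) = h^2 and a discrete derivative
# dJ_h(h) = dJ + 0.5 * h^2 converging to the exact (negative) derivative dJ.
def certify_direction(dJ, h0=1.0, tol=1e-6, max_halvings=60):
    h = h0
    for _ in range(max_halvings):
        dJ_h = dJ + 0.5 * h**2      # approximated shape derivative
        E_bar = h**2                # guaranteed error bound (toy model)
        if dJ_h + E_bar < 0:        # certified descent direction
            return h, dJ_h, E_bar
        if E_bar < tol:             # stopping criterion: bound below tolerance
            return None
        h *= 0.5                    # refine the mesh and try again
    return None

result = certify_direction(dJ=-1e-5)
print(result)
```

The toy run reproduces the behaviour observed in the tables: the smaller the exact derivative, the finer the mesh needed before the bound certifies the sign.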
In particular, we observe that coarse meshes are reliable during the initial iterations of the algorithm to identify a genuine descent direction, and that the number of degrees of freedom increases when approaching a minimum of the functional $J(\Omega)$. This is also well illustrated by figure \ref{fig:circleDOF}, in which the evolution of the number of degrees of freedom is depicted. Concerning the quality of the computational meshes, they are mainly uniform during the initial iterations and are refined in the regions where the numerical error in the shape gradient is more important. In particular, finer - and possibly anisotropic - elements tend to concentrate near the external boundary where the measurements are located and near the internal interface where the conductivity parameter is discontinuous (Fig. \ref{fig:meshCircle20FE}-\ref{fig:meshCircle20DG}). Finally, we remark that figure \ref{fig:circleDOF} also highlights the ill-posed nature of the problem, since a huge amount of degrees of freedom is rapidly required by the CDA to certify the descent direction, testifying to the difficulties of gradient-based methods in handling inverse problems such as electrical impedance tomography. By comparing the approximations arising from conforming finite element and discontinuous Galerkin formulations, we remark that the latter provides sharper bounds of the error in the shape gradient, thus allowing the algorithm to automatically stop for a given tolerance $\texttt{tol}=10^{-6}$ (cf. table \ref{tab:circleConvDG}). On the contrary, the certification in the case of conforming finite element is still able to identify a genuine descent direction at each iteration but rapidly requires a huge number of mesh elements, making the computational cost explode. \begin{figure} \caption{Certified descent algorithm for the identification of one inclusion. (a) Initial configuration (dotted black), target inclusion (solid black) and reconstructed interface.
(b) Evolution of the objective functional. (c) Number of degrees of freedom. Inversion performed using conforming finite element (dark gray squares) and discontinuous Galerkin (light gray circles).} \label{fig:circleInterface} \label{fig:circleObj} \label{fig:circleDOF} \label{fig:circle} \end{figure} \begin{figure} \caption{Meshes generated by the certified descent algorithm for the test case in figure \ref{fig:circleInterface} \label{fig:meshCircle10FE} \label{fig:meshCircle20FE} \label{fig:meshCircle10DG} \label{fig:meshCircle20DG} \label{fig:meshCircle} \end{figure} \begin{table}[htb] \centering \subfloat[Conforming finite element.] { \centering \begin{tabular}[hbt]{| c | c || l | l |} \hline Iteration & $\# \mathcal{T}_h$ & $\langle d_h J(\Omega),\boldsymbol\theta^h \rangle$ & $\overline{E}$ \\ \hline & & & \\ [-1em] \hline 1 & 8863 & $-1.45 \cdot 10^{-6}$ & $1.12 \cdot 10^{-6}$ \\ \hline 5 & 8582 & $-4.36 \cdot 10^{-6}$ & $3.31 \cdot 10 ^{-6}$ \\ \hline 10 & 8650 & $-1.37 \cdot 10^{-5}$ & $9.17 \cdot 10 ^{-6}$ \\ \hline 15 & 9335 & $-2.83 \cdot 10^{-5}$ & $1.80 \cdot 10^{-5}$ \\ \hline 20 & 19683 & $-1.53 \cdot 10^{-5}$ & $1.07 \cdot 10^{-5}$ \\ \hline 22 & 864808 & $-1.18 \cdot 10^{-6}$ & $1.16 \cdot 10^{-6}$ \\ \hline \end{tabular} \label{tab:circleConvFE} } \hfil \subfloat[Discontinuous Galerkin.] 
{ \centering \begin{tabular}[hbt]{| c | c || l | l |} \hline Iteration & $\# \mathcal{T}_h$ & $\langle d_h J(\Omega),\boldsymbol\theta^h \rangle$ & $\overline{E}$ \\ \hline & & & \\ [-1em] \hline 1 & 5454 & $-2.02 \cdot 10^{-6}$ & $1.86 \cdot 10^{-6}$ \\ \hline 5 & 7307 & $-6.63 \cdot 10^{-6}$ & $6.24 \cdot 10^{-6}$ \\ \hline 10 & 7099 & $-1.73 \cdot 10^{-5}$ & $9.20 \cdot 10^{-6}$ \\ \hline 15 & 7307 & $-3.06 \cdot 10^{-5}$ & $7.11 \cdot 10^{-6}$ \\ \hline 20 & 10123 & $-1.38 \cdot 10^{-5}$ & $8.98 \cdot 10^{-6}$ \\ \hline 24 & 51406 & $-4.60 \cdot 10^{-7}$ & $4.55 \cdot 10^{-7}$ \\ \hline \end{tabular} \label{tab:circleConvDG} } \caption{Test case in figure \ref{fig:circleInterface} using (a) conforming finite element and (b) discontinuous Galerkin. Approximated shape gradient and goal-oriented estimator for different meshes.} \label{tab:circleConv} \end{table} The aforementioned remarks are confirmed and highlighted by the test case in figure \ref{fig:ellipseInterface}. It is straightforward to observe that the certified descent algorithm is able to identify a genuine descent direction at each iteration (Fig. \ref{fig:ellipseObj}). Nevertheless, as extensively discussed in \cite{giacomini:hal-01201914}, the well-known ill-posedness of the electrical impedance tomography problem prevents the certified descent algorithm - and, in general, gradient-based strategies - from accurately identifying the inclusion in the whole domain. In figure \ref{fig:ellipseInterface} we observe that the portion of the interface closest to $\partial\mathcal{D}$ is well identified, but the precision of the reconstruction decreases moving away from the external boundary and towards the inner part of the domain. These observations match the results in \cite{161105272}, where the authors remark that a good approximation of the boundary is obtained for the upwind part of the shape whereas a loss of accuracy is observed in its downwind part.
We remark that the perimeter regularization exploited in the aforementioned work is replaced in the present work by the two-mesh strategy discussed at the beginning of section \ref{ref:numerics}. A possible workaround to the low resolution of the reconstruction in the center of the computational domain is represented by the emerging field of hybrid imaging, in which classical tomography techniques are coupled with acoustic or elastic waves \cite{doi:10.1137/120863654}. Nevertheless, these limitations are related to the nature of the electrical impedance tomography problem, especially to the decreasing influence of the boundary conditions when moving away from $\partial\mathcal{D}$, and we cannot expect gradient-based strategies to successfully overcome this issue. These remarks are confirmed again by the rapidly exploding number of degrees of freedom required by the algorithm to certify the descent direction (Fig. \ref{fig:ellipseDOF}). The non-feasibility of gradient-based approaches for severely ill-posed problems is testified by the very high precision required by the algorithm, which only leads to a negligible improvement of the solution. \begin{figure} \caption{Certified descent algorithm for the identification of one inclusion. (a) Initial configuration (dotted black), target inclusion (solid black) and reconstructed interface. (b) Evolution of the objective functional. (c) Number of degrees of freedom.
Inversion performed using conforming finite element (dark gray squares) and discontinuous Galerkin (light gray circles).} \label{fig:ellipseInterface} \label{fig:ellipseObj} \label{fig:ellipseDOF} \label{fig:ellipse} \end{figure} \begin{figure} \caption{Meshes generated by the certified descent algorithm for the test case in figure \ref{fig:ellipseInterface} \label{fig:meshEllipse10FE} \label{fig:meshEllipse30FE} \label{fig:meshEllipse10DG} \label{fig:meshEllipse30DG} \label{fig:meshEllipse} \end{figure} \begin{table}[htb] \centering \subfloat[Conforming finite element.] { \centering \begin{tabular}[hbt]{| c | c || l | l |} \hline Iteration & $\# \mathcal{T}_h$ & $\langle d_h J(\Omega),\boldsymbol\theta^h \rangle$ & $\overline{E}$ \\ \hline & & & \\ [-1em] \hline 1 & 3366 & $-2.29 \cdot 10^{-4}$ & $1.79 \cdot 10^{-4}$ \\ \hline 10 & 8312 & $-7.63 \cdot 10^{-5}$ & $5.49 \cdot 10 ^{-5}$ \\ \hline 20 & 7893 & $-1.29 \cdot 10^{-4}$ & $1.03 \cdot 10 ^{-4}$ \\ \hline 30 & 227847 & $-6.16 \cdot 10^{-6}$ & $6.15 \cdot 10^{-6}$ \\ \hline 31 & 980555 & $-3.62 \cdot 10^{-6}$ & $3.60 \cdot 10^{-6}$ \\ \hline \end{tabular} \label{tab:ellipseConvFE} } \hfil \subfloat[Discontinuous Galerkin.] { \centering \begin{tabular}[hbt]{| c | c || l | l |} \hline Iteration & $\# \mathcal{T}_h$ & $\langle d_h J(\Omega),\boldsymbol\theta^h \rangle$ & $\overline{E}$ \\ \hline & & & \\ [-1em] \hline 1 & 6189 & $-2.21 \cdot 10^{-4}$ & $1.34 \cdot 10^{-4}$ \\ \hline 10 & 4868 & $-7.80 \cdot 10^{-5}$ & $4.64 \cdot 10^{-5}$ \\ \hline 20 & 4842 & $-1.62 \cdot 10^{-4}$ & $1.30 \cdot 10^{-4}$ \\ \hline 30 & 52595 & $-4.71 \cdot 10^{-6}$ & $4.60 \cdot 10^{-6}$ \\ \hline 33 & 320137 & $-2.54 \cdot 10^{-7}$ & $2.25 \cdot 10^{-7}$ \\ \hline \end{tabular} \label{tab:ellipseConvDG} } \caption{Test case in figure \ref{fig:ellipseInterface} using (a) conforming finite element and (b) discontinuous Galerkin. 
Approximated shape gradient and goal-oriented estimator for different meshes.} \label{tab:ellipseConv} \end{table} \\ Though both the version of the CDA based on conforming finite element and the one relying on discontinuous Galerkin are able to certify the descent direction at the beginning of the algorithm, the situation changes after a few tens of iterations. In particular, the SWIP-dG formulation allows the computation of inexpensive and precise bounds of the error in the shape gradient, whereas using conforming finite element the computational cost rapidly becomes enormous, making the certification procedure infeasible (Table \ref{tab:ellipseConv}). The previous observation is also qualitatively confirmed by the meshes in figure \ref{fig:meshEllipse}. As a matter of fact, despite being similar near the interface on the left-hand side of the domain, the two meshes greatly differ in the right-hand portion and inside the inclusion: within these regions, the mesh generated by the CDA using the SWIP-dG approximation features a lower number of elements, mainly concentrated near the interface where the discontinuity of $k_\Omega$ is located. \subsection{The case of two inclusions featuring multiple boundary measurements} \label{ref:2incl} In this section, we present a more involved test case in which the domain $\mathcal{D}$ features two non-connected inclusions. As previously stated, we assume that the number of inclusions is set \emph{a priori} and we restrict ourselves to the case of a two-valued conductivity parameter, that is, we distinguish a value $k_E$ for the background and a value $k_I$ valid inside both inclusions. It is well-known in the literature that multiple boundary measurements are required to retrieve a correct approximation of the inclusion in electrical impedance tomography.
In this section, we consider $D=10$ measurements such that $\forall j=0,\ldots,D-1$ $$ g_j(x,y) = (x+ a_j y)^{b_j} a_j^{c_j} \quad , \quad a_j=1+0.1j \quad , \quad b_j=\frac{j+1}{2} \quad , \quad c_j=j - 2 \left\lfloor \frac{j}{2} \right\rfloor . $$ \begin{figure} \caption{Certified descent algorithm for the identification of two inclusions. (a) Initial configuration (dotted black), target inclusion (solid black) and reconstructed interface. (b) Evolution of the objective functional. (c) Number of degrees of freedom. Inversion performed using conforming finite element (dark gray squares) and discontinuous Galerkin (light gray circles).} \label{fig:discInterface} \label{fig:discObj} \label{fig:discDOF} \label{fig:disc} \end{figure} \begin{figure} \caption{Meshes generated by the certified descent algorithm for the test case in figure \ref{fig:discInterface} \label{fig:meshDisc10FE} \label{fig:meshDisc14FE} \label{fig:meshDisc10DG} \label{fig:meshDisc30DG} \label{fig:meshDisc} \end{figure} \begin{table}[htb] \centering \subfloat[Conforming finite element.] { \centering \begin{tabular}[hbt]{| c | c || l | l |} \hline Iteration & $\# \mathcal{T}_h$ & $\langle d_h J(\Omega),\boldsymbol\theta^h \rangle$ & $\overline{E}$ \\ \hline & & & \\ [-1em] \hline 1 & 2221 & $-1.62 \cdot 10^{-3}$ & $1.59 \cdot 10^{-3}$ \\ \hline 10 & 56487 & $-1.09 \cdot 10^{-5}$ & $1.04 \cdot 10^{-5}$ \\ \hline 14 & 852782 & $-3.36 \cdot 10^{-6}$ & $3.21 \cdot 10^{-6}$ \\ \hline \end{tabular} \label{tab:discConvFE} } \hfil \subfloat[Discontinuous Galerkin.] 
{ \centering \begin{tabular}[hbt]{| c | c || l | l |} \hline Iteration & $\# \mathcal{T}_h$ & $\langle d_h J(\Omega),\boldsymbol\theta^h \rangle$ & $\overline{E}$ \\ \hline & & & \\ [-1em] \hline 1 & 7282 & $-1.59 \cdot 10^{-3}$ & $1.20 \cdot 10^{-3}$ \\ \hline 10 & 42564 & $-1.10 \cdot 10^{-5}$ & $7.27 \cdot 10^{-6}$ \\ \hline 20 & 282718 & $-3.82 \cdot 10^{-6}$ & $2.61 \cdot 10^{-6}$ \\ \hline 30 & 389571 & $-1.40 \cdot 10^{-6}$ & $8.74 \cdot 10^{-7}$ \\ \hline 36 & 568548 & $-1.92 \cdot 10^{-7}$ & $1.87 \cdot 10^{-7}$ \\ \hline \end{tabular} \label{tab:discConvDG} } \caption{Test case in figure \ref{fig:discInterface} using (a) conforming finite element and (b) discontinuous Galerkin. Approximated shape gradient and goal-oriented estimator for different meshes.} \label{tab:discConv} \end{table} \\ As previously remarked, the certified descent algorithm is able to identify the portions of the interfaces that lie near the external boundary $\partial\mathcal{D}$, whereas the inner parts suffer from a poor reconstruction (Fig. \ref{fig:discInterface}). Moreover, also in this case, after a few tens of iterations the certification procedure requires a huge number of degrees of freedom to identify a genuine descent direction for the objective functional $J(\Omega)$ (Fig. \ref{fig:discDOF}). Both the inability of the method to reconstruct the interface far from the external boundary and the rapidly increasing number of degrees of freedom required to certify the descent direction clearly testify to the limitations of classical gradient-based approaches when dealing with electrical impedance tomography. \\ Nevertheless, this new variant of the certified descent algorithm proves to be able to certify the descent direction in order to construct a minimizing sequence of shapes for which the objective functional is monotonically decreasing (Fig. \ref{fig:discObj}).
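The family of boundary measurements $g_j$ introduced at the beginning of this subsection can be sketched directly; this illustration is our own (note that $c_j = j - 2\lfloor j/2 \rfloor$ reduces to $j \bmod 2$, and that the fractional exponents $b_j$ require a nonnegative base $x + a_j y$).

```python
# Sketch of the D = 10 boundary measurements defined above:
# g_j(x, y) = (x + a_j y)^{b_j} * a_j^{c_j}, with a_j = 1 + 0.1 j,
# b_j = (j + 1)/2 and c_j = j - 2*floor(j/2) = j mod 2.
def g(j, x, y):
    a = 1.0 + 0.1 * j
    b = 0.5 * (j + 1)
    c = j % 2
    return (x + a * y) ** b * a ** c

# One callable per measurement, j = 0, ..., D-1 with D = 10
measurements = [lambda x, y, j=j: g(j, x, y) for j in range(10)]
print(g(0, 3.0, 1.0))   # (3 + 1)^{1/2} * 1^0 = 2.0
```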
Moreover, the quantitative information carried by the error bound $\overline{E}$ makes it possible to derive a reliable stopping criterion that automatizes the overall optimization procedure. \begin{rmrk} The tables presented in this section show that the strategy based on a conforming finite element discretization rapidly requires a huge number of mesh elements to perform the certification of the descent direction. In figure \ref{fig:meshDisc14FE}, we report the mesh after $14$ iterations of the CDA and we remark that the algorithm is stopped due to its excessive computational cost. On the contrary, the version of the CDA relying on the discontinuous Galerkin approximation is able to certify the descent direction using a coarser mesh, refining it near the external boundary and in the region where the inclusion is located (Fig. \ref{fig:meshDisc10DG}-\ref{fig:meshDisc30DG}). However, it is important to recall that the discontinuous Galerkin formulations feature a higher number of degrees of freedom per mesh element, making the overall dimensions of the optimization problems comparable. Nevertheless, from a practical point of view, the computation of the error bound $\overline{E}$ in the framework of conforming finite element relies on the solution of a number of local subproblems on patches of elements equal to the number of vertices of the triangulation $\mathcal{T}_h$. On the contrary, the discontinuous Galerkin discretization is locally conservative and yields a straightforward technique to construct the equilibrated fluxes, based on an inexpensive local post-process of the solutions of the state and adjoint problems. Thus, both approaches are valid and represent an improvement of the original certified descent algorithm introduced in \cite{giacomini:hal-01201914}, which required the solution of additional global problems to perform the certification procedure.
Nevertheless, the computational cost of the version based on the discontinuous Galerkin formulation appears more competitive, especially in view of future developments focusing on vectorial and three-dimensional problems. \end{rmrk} \section{Conclusion} \label{ref:conclusion} As already pointed out in \cite{giacomini:hal-01201914}, the certified descent algorithm (CDA) for shape optimization uses the quantitative information of the goal-oriented estimator to construct on the one hand a sequence of shapes leading to a monotonically decreasing evolution of the objective functional and on the other hand a novel stopping criterion for the overall optimization procedure. The main drawback of the aforementioned strategy was the high computational cost due to the solution of additional global variational problems to estimate the error in the shape gradient via the complementary energy principle. \\ In this work, we proposed an improved version of the CDA which uses solely local quantities to certify that the computed direction is a genuine descent direction for the functional under analysis. In particular, we derived a goal-oriented estimator of the error in the shape gradient via the construction of equilibrated fluxes. This approach has been developed for both conforming finite element and discontinuous Galerkin discretizations and has been tested on the scalar inverse problem of electrical impedance tomography. On the one hand, using a conforming finite element discretization, the number of degrees of freedom required by the approximation of the state and adjoint problems is small but the construction of the equilibrated fluxes for the estimator of the error in the shape gradient requires the solution of local subproblems defined on patches of elements whose number is equal to the number of vertices of the triangulation. 
On the other hand, though the discontinuous Galerkin formulation of the problems features a higher number of degrees of freedom per mesh element, the computation of the error estimator based on the equilibrated fluxes approach is straightforward via a post-process which involves solely local quantities. Both strategies proved to be valid, but the bounds provided by the discontinuous Galerkin approach appeared more precise and computationally less expensive. Ongoing investigations focus on the application of the certified descent algorithm to the vectorial problem of shape optimization in linear elasticity. \appendix \section{Weak imposition of the essential boundary conditions} \label{ref:essential} We present a formal derivation of the variational formulation of an elliptic problem featuring weakly-imposed Dirichlet boundary conditions. The idea of this approach dates back to the classical paper by Nitsche \cite{Nitsche1971} and has been extensively studied in recent years by several authors (cf. e.g. \cite{NME:NME4815} and references therein). We recall that the solution of a boundary value problem may be interpreted as an optimization problem. Let us introduce the Lagrangian functional associated with the state problem \eqref{eq:statePB} featuring Dirichlet boundary conditions: \begin{equation} \Lambda(w,\lambda) = \frac{1}{2} \int_\mathcal{D}{\Big(k_\Omega | \nabla w |^2 + |w|^2 \Big) d\mathbf{x}} - \int_{\partial\mathcal{D}}{\lambda(w-U_D) ds} . \label{eq:lagrangianWeak} \end{equation} The solution of the aforementioned boundary value problem is equivalent to the following min-max problem: $$ \min_{w \in H^1(\mathcal{D})} \max_{\lambda \in H^{-\frac{1}{2}}(\partial\mathcal{D})} \Lambda(w,\lambda) . 
$$ The first-order optimality conditions for \eqref{eq:lagrangianWeak} read as \begin{align*} \left\{ \begin{aligned} & \int_\mathcal{D}{\Big( k_\Omega \nabla w \cdot \nabla \delta w + w \delta w \Big) d\mathbf{x}} - \int_{\partial\mathcal{D}}{\lambda \delta w \ ds} = 0 ,\\ & \int_{\partial\mathcal{D}}{(w-U_D) \delta\lambda \ ds} = 0 . \end{aligned} \right. \label{eq:OptimalityEssential} \end{align*} From the second condition, we retrieve the Dirichlet boundary condition on $\partial\mathcal{D}$. Integrating the first condition by parts and exploiting the strong form of the problem, we obtain \begin{equation*} \int_{\partial\mathcal{D}}{(k_\Omega \nabla w \cdot \mathbf{n} - \lambda) \delta w \ ds} = 0 . \label{eq:lagrangeMult} \end{equation*} By plugging $\lambda = k_\Omega \nabla w \cdot \mathbf{n} \ \text{on} \ \partial\mathcal{D}$ into \eqref{eq:lagrangianWeak} we may now derive the following dual variational problem by seeking $w \in H^1(\mathcal{D})$ such that $\forall \delta w \in H^1(\mathcal{D})$ \begin{equation} \begin{aligned} \int_\mathcal{D}{\Big( k_\Omega \nabla w \cdot \nabla \delta w + w \delta w \Big) d\mathbf{x}} & - \int_{\partial\mathcal{D}}{\Big( k_\Omega \nabla w \cdot \mathbf{n} \delta w + w k_\Omega \nabla \delta w \cdot \mathbf{n} \Big) ds} \\ & \hspace{48pt} = - \int_{\partial\mathcal{D}}{U_D k_\Omega \nabla \delta w \cdot \mathbf{n} \ ds} . \end{aligned} \label{eq:weakInstable} \end{equation} We remark that the bilinear form on the left-hand side of \eqref{eq:weakInstable} is not coercive, hence we cannot establish the well-posedness of this problem. To bypass this issue, we consider the following augmented Lagrangian functional and we construct the corresponding dual variational formulation for the problem under analysis: \begin{equation*} \Upsilon(w,\lambda,\gamma) = \Lambda(w,\lambda) + \frac{1}{2} \int_{\partial\mathcal{D}}{\gamma(w-U_D)^2 ds} . 
\label{eq:augLagrangianWeak} \end{equation*} Following the same procedure used to derive \eqref{eq:weakInstable}, we seek $w \in H^1(\mathcal{D})$ such that $\forall \delta w \in H^1(\mathcal{D})$ \begin{equation} \begin{aligned} \int_\mathcal{D}{\Big( k_\Omega \nabla w \cdot \nabla \delta w + w \delta w \Big) d\mathbf{x}} - & \int_{\partial\mathcal{D}}{\Big( k_\Omega \nabla w \cdot \mathbf{n} \delta w + w k_\Omega \nabla \delta w \cdot \mathbf{n} \Big) ds} + \int_{\partial\mathcal{D}}{\gamma w \delta w \ ds} \\ & \hspace{73pt} = \int_{\partial\mathcal{D}}{U_D \Big(\gamma \delta w - k_\Omega \nabla \delta w \cdot \mathbf{n} \Big) ds} . \end{aligned} \label{eq:weakSable} \end{equation} It is straightforward to observe that the bilinear form on the left-hand side of \eqref{eq:weakSable} is coercive provided that a \emph{sufficiently large} value of $\gamma$ is chosen. \end{document}
\begin{document} \title{A note on injective envelopes and von Neumann algebras} \author{U. Haag} \date{\today \\ \texttt{\hfil Contact:[email protected]}} \maketitle \par\noindent \begin{abstract} The article exhibits certain relations between the injective envelope $\, I ( A )\, $ of a $C^*$-algebra $\, A\, $ and the von Neumann algebra generated by a representation $\,\lambda\, $ of $\, A\, $, provided the latter is injective. More specifically, we show that there exist positive retractions $\, \sigma : \lambda ( A )'' \twoheadrightarrow I ( A )\, $ which are close to being $*$-homomorphisms in the sense that they are Jordan homomorphisms of the underlying Jordan algebras, and the kernel of $\,\sigma\, $ is given by a twosided ideal. \end{abstract} \par \noindent If $\, A\, $ is a $C^*$-algebra its injective envelope is denoted $\, I ( A )\, $. \par \par\noindent {\bf Theorem.}\quad (i) Let $\, \lambda : A\rightarrow \mathcal B ( \mathcal H )\, $ be a faithful $*$-representation of the unital $C^*$-algebra $\, A\, $ with strong closure $\, A'' \, $ acting on the separable Hilbert space $\,\mathcal H\, $. If $\, A''\, $ is injective then there is a canonical $*$-ideal $\, \mathcal J \vartriangleleft A''\, $ such that the kernel of every completely positive projection $\, \Phi : A'' \rightarrow A''\, $ extending the identity map of $\, A\, $ and with range completely isometric to the injective envelope of $\, A\, $ contains $\, \mathcal J\, $. The quotient $\, A'' / \mathcal J\, $ contains a canonical monotone complete subspace (complete sublattice) completely isometric with $\, I ( A )\, $ which is a Jordan subalgebra of the quotient algebra. If $\, J ( A )\, $ denotes the preimage of $\, I ( A )\, $ in $\, A''\, $ modulo $\, \mathcal J\, $ then $\, J ( A )\, $ is a Jordan subalgebra. The von Neumann algebra $\, A''\, $ is an injective envelope for $\, A\, $ if and only if $\,\mathcal J\, $ is trivial. 
Given an arbitrary completely isometric inclusion $\, \iota : I ( A ) \hookrightarrow A''\, $ extending the identity map of $\, A\, $ there exists a (nonunique) positive retraction $\, \sigma : A'' \twoheadrightarrow I ( A )\, $ for the inclusion $\,\iota\, $ which satisfies the Schwarz equality $\, \sigma ( x x^* ) = \sigma ( x ) \sigma ( x )^*\, $ for every normal element $\, x\in A''\, $, in particular $\, \sigma\, $ is a Jordan homomorphism, i.e. $\, \sigma ( x y + y x ) = \sigma ( x ) \sigma ( y ) + \sigma ( y ) \sigma ( x )\, $, and maps projections to projections and unitaries to unitaries. Moreover every selfadjoint element $\, x\in I ( A )^{sa}\, $ is the monotone decreasing limit of a net $\, ( y_{\mu } ) \searrow x\, $ such that each $\, y_{\mu }\in I ( A )^{sa}\, $ is the monotone increasing limit $\, ( a_{\mu\nu } ) \nearrow y_{\mu }\, $ of elements $\, a_{\mu\nu }\in A^{sa}\, $ (Up-Down-Theorem for $\, I ( A )\, $). \par \noindent (ii) If $\,\mathfrak X\, $ is an operator system contained in an abelian $C^*$-algebra then each selfadjoint element $\, x\in I ( \mathfrak X )^{sa}\, $ is the least upper bound of the subset $\, \{ a_{\lambda }\in {\mathfrak X}^{sa}\,\vert\, a_{\lambda } \leq x \}\, $, and the greatest lower bound of the subset $\, \{ a_{\mu }\in {\mathfrak X}^{sa}\,\vert\, x\leq a_{\mu } \} \, $. In particular any monotone complete abelian $C^*$-algebra is injective. \par \noindent {\it Proof.}\quad We begin with the last statement for an operator subsystem of an abelian $C^*$-algebra. The assumption implies that also $\, I ( \mathfrak X )\, $ is abelian. Let $\, x\in I ( \mathfrak X )^{sa}\, $ be given and $\, \{ a_{\lambda }\,\vert\, a_{\lambda }\in \mathfrak X\, ,\, a_{\lambda } \leq x \}\, $ be the subset of elements in $\, {\mathfrak X}^{sa}\, $ which are smaller than $\, x\, $ (or equal to if $\, x\in \mathfrak X\, $). 
Let $\, \overline x\, $ be the least upper bound of this set in $\, I ( \mathfrak X )\, $ which exists by monotone completeness of the injective envelope (Theorem 6.1.3 of \cite{E-R}). Then $\,\overline x \leq x\, $. Consider the subspaces $\, A_x = \mathfrak X + \mathbb C\, x \subseteq I ( \mathfrak X )\, $ and $\, A_{\overline x} = \mathfrak X + \mathbb C\, \overline x\subseteq I ( \mathfrak X )\, $ and define a map $\, \nu : A_x \rightarrow A_{\overline x}\, $ extending the identity map of $\, \mathfrak X\, $ in the obvious way by sending $\, x\, $ to $\,\overline x\, $. We claim that $\,\nu\, $ is positive (and hence completely contractive since unital with $\, I ( \mathfrak X )\, $ abelian). To see this let a positive element in $\, A_x\, $ be given which can be written as $$\, y\, =\, a\, +\, \gamma \, x \geq 0 $$ with $\, \gamma \in \mathbb R\, $ and $\, a\in {\mathfrak X}^{sa}\, $. Suppose that $\, \gamma < 0\, $. Then since $\,\overline x \leq x\, $ one has $\, \nu ( y ) \geq y \geq 0\, $. We may therefore assume $\, \gamma > 0\, $. Then $\, \nu ( y )\, $ is equal to the least upper bound of the set $\, \{ \gamma a_{\lambda } + a\,\vert\, a_{\lambda }\in \mathfrak X\, ,\, a_{\lambda } \leq x \}\, $ which equals the least upper bound of the set $\, \{ b_{\lambda }\in\mathfrak X\,\vert\, b_{\lambda } \leq a + \gamma x \}\, $, hence $\, \nu ( y ) \geq 0\, $ as desired. Extending $\,\nu\, $ to a completely positive map of $\, I ( \mathfrak X )\, $ into $\, I ( \mathfrak X )\, $ and using rigidity gives that $\, x = \overline x\, $. The case of $\, x\, $ being equal to the greatest lower bound of elements in $\, {\mathfrak X}^{sa}\, $ which are larger follows by symmetry. This proves the special Up/Down-property of $\, I ( \mathfrak X )\, $. If $\, A\, $ is a monotone complete $C^*$-algebra then by the foregoing argument each element $\, x\in I ( A )^{sa}\, $ is the least upper bound of all elements $\, \{ a\in A\,\vert\, a\leq x \}\, $. 
But this set also has a least upper bound $\, \overline x\, $ in $\, A\, $ with $\, \overline x \geq x\, $ whereas the set $\, \{ b\in A^{sa}\,\vert\, b\geq x \}\, $ has a greatest lower bound $\, \underline x\, $ in $\, A\, $, so that $\, \overline x \leq \underline x \leq x\leq \overline x\, $ and equality follows in each instance, i.e. $\, A = I ( A )\, $ so $\, A\, $ must be injective. This proves (ii). \par \noindent Let $\, \lambda : A \rightarrow \mathcal B ( \mathcal H )\, $ be a representation of $\, A\, $ as in part (i) of the theorem with strong closure given by the injective von Neumann algebra $\, A''\, $. From injectivity there exists a completely positive projection $\, \Phi : A'' \rightarrow A''\, $ with range completely isometric to $\, I ( A )\, $. The map $\, \Phi\, $ factors as the product of a completely positive retraction $\, \rho : A'' \twoheadrightarrow I ( A )\, $ and a completely isometric inclusion $\, \iota : I ( A ) \hookrightarrow A''\, $. To prove the first statement choose a dense $*$-linear subspace $\, \mathfrak Y\, $ of $\, A''\, $ together with a basis $\, \{ c_{\omega } {\}}_{\omega\in\Omega }\, $ consisting of selfadjoint elements (of norm one say) which is assumed to be well ordered by a corresponding index set such that for each fixed $\, \omega \in \Omega\, $ the element $\, c_{\omega }\, $ is linearly independent of the closure of the linear span of the set $\, \{ c_{\kappa }\,\vert\, \kappa < \omega \}\, $ and of norm one in the corresponding quotient space. The existence of such a dense subspace is guaranteed by Zorn's Lemma. Let $\, J_{\iota } ( A )\, $ denote the canonical subspace of $\, A''\, $ consisting of elements having a unique image under every positive projection $\, \Psi : A'' \rightarrow A''\, $ with range equal to $\, \Phi ( A'' )\, $. 
Let $\, J_+\, $ denote the subset of those elements $\, b\in (A'')^{sa}\, $ which arise as limits $\, ( a_{\nu } ) \nearrow b\, $ of some monotone increasing net of elements $\, a_{\nu }\in A^{sa}\, $, and $\, J_- = - J_+\, $. Then the subspace $\, J = J_+ + J_-\, $ is contained in $\, J_{\iota } ( A )\, $. Indeed if $\, \overline b\, $ is the supremum of the same net $\, ( a_{\nu } )\, $ in $\, I ( A )\, $ (which exists by monotone completeness of $\, I ( A )\, $), then $\, \overline b \geq b\, $ and $\, \Psi ( b ) = \iota ( \overline b )\, $ for every positive projection $\,\Psi\, $ with range $\, \Phi ( A'' )\, $. Also $\, J_{\pm }\, $ contains $\, A\, $ by choosing nets consisting of a single element. One may now assume that the subset $\,\{ c_{\omega } \} \cap J\, $ generates a dense subspace $\, \mathfrak J \subseteq J\, $ and exhausts the leading half-open interval of all indices $\, 1 \leq \omega < {\omega }_0\, $ beginning with the first element and bounded above by a least index $\, {\omega }_0\, $. To save notation we let $\, {\omega }_0 = 0\, $ and start the numbering with this index omitting the indication of any basis element in $\,\mathfrak J\, $. For each positive element $\, b\in J_+\, ,\, b\geq 0\, $ one checks the following reverse Schwarz inequality $$ \rho ( b^2 )\> =\> \rho ( \sup\, \{ a^2\,\vert\, a \leq b\, ,\, a\in A \} )\> =\> \sup\, \{ a^2\,\vert\, a\leq b\, ,\, a\in A \}\> \leq\> \rho ( b )^2 $$ where on the right side the supremum in $\, I ( A )\, $ is understood. Since $\,\rho\, $ is completely positive the ordinary Schwarz inequality gives an equality. If $\, b\in J_+\, $ is arbitrary then $$ \rho ( b^2 )\> =\> \rho ( ( \Vert b \Vert + b )^2 ) - \Vert b {\Vert }^2 - 2 \Vert b \Vert \rho ( b )\> =\> \rho ( b )^2 \> . 
$$ For $\, a\, ,\, b\in J_+\, $ one gets $$ \rho ( a b + b a )\> =\> \rho ( ( a + b )^2 ) - \rho ( a^2 ) - \rho ( b^2 )\> =\> \rho ( a ) \rho ( b ) + \rho ( b ) \rho ( a ) $$ and hence $$ \rho ( ( a - b )^2 )\> =\> \rho ( a - b )^2 \> , $$ i.e. the Schwarz equality holds for arbitrary $\, b\in J\, $. One inductively constructs a positive projection $\, \overline\Psi : A'' \rightarrow A''\, $ over the identity map of $\, A\, $ depending on the chosen basis as follows. From the Up-Down Theorem (cf. \cite{Pe}, Theorem 2.4.3) every element $\, c\in (A'')^{sa}\, $ is the infimum of the set $\, \{ b\,\vert\, b \geq c\, ,\, b\in J_+ \}\, $ and correspondingly the supremum of the set $\,\{ a\,\vert\, a \leq c\, ,\, a \in J_- \}\, $ by the symmetry $\, c \mapsto - c\, $. For $\, c_0\, $ define $$ \overline\Psi ( c_0 )\> =\> \inf\, \{ \Phi ( b )\,\vert\, b \geq c_0\, ,\, b\in J \} \> $$ where the infimum in $\, \Phi ( A'' )\, $ is understood. On decomposing $\, b = b_+ + b_-\, $ with $\, b_+\in J_+\, ,\, b_-\in J_-\, $ and since $\, b_-\, $ is the infimum of elements in $\, A \subseteq J_+\, $ one has $$ \overline\Psi ( c_0 )\> =\> \inf\, \{ \Phi ( a )\,\vert\, a\geq c_0\, ,\, a\in J_+ \} \> . $$ One checks positivity of $\,\overline\Psi\, $. Let $\, x = a + \gamma c_0 \geq 0\, $ be given with $\, a\in J\, $. If $\, \gamma > 0\, $ then $\, \overline\Psi ( x ) = \inf\, \{ \Phi ( b )\,\vert\, b \geq x\, ,\, b\in J \} \geq 0\, $. On the other hand if $\,\gamma < 0\, $ then $\, \overline\Psi ( x ) = \sup\, \{ \Phi ( a )\,\vert\, 0 \leq a \leq x\, ,\, a \in J \} \geq 0\, $. To specify the restriction of $\, \overline\Psi\, $ to the domain $\, J_0 = J + \mathbb R\, c_0\, $ this map will also be denoted by $\, {\overline\Psi }_0\, $. Put $\, J_{0 , -} = J - {\mathbb R}_+\, c_0\, $. 
For the successor $\, c_1\, $ of $\, c_0\, $ one defines $$ \overline\Psi ( c_1 )\> =\> \inf\, \{ {\overline\Psi }_0 ( b )\,\vert\, b \geq c_1\, ,\, b\in J_0 \}\> =\> \inf\, \{ {\overline\Psi }_0 ( b )\, \vert\, b\geq c_1\, ,\, b\in J_{0 , -} \} \> . $$ Indeed if $\, b = b_- + \alpha c_0\, $ with $\, \alpha \geq 0\, $ then $\, {\overline\Psi }_0 ( \alpha c_0 ) = \inf\, \{ \alpha \Phi ( a )\,\vert\, a\geq c_0\, ,\, a\in J_+ \}\, $ so one may reduce to considering only elements in $\, J_{0 , -}\, $. Then one similarly checks positivity of the induced extended linear map $\, {\overline\Psi }_1\, $ with domain $\, J_1 = J + \mathbb R\, c_0 + \mathbb R\, c_1\, $. One proceeds by induction. Assume that given $\,\omega\in\Omega\, $ one has already constructed the positive map $\, {\overline\Psi }_{\kappa < \omega }\, $ with domain $\, J_{\kappa < \omega }\, $ generated by $\, J\, $ and all basis elements $\, \{ c_{\kappa }\,\vert\, \kappa < \omega \}\, $. Let $\, J_{\kappa < \omega , - }\, $ be the subcone generated by $\, J\, $ and arbitrary linear combinations $\, \sum_{\kappa < \omega }\, {\alpha }_{\kappa } c_{\kappa }\, $ with negative coefficients $\, {\alpha }_{\kappa } \leq 0\, $. Define $$ \overline\Psi ( c_{\omega } )\> =\> \inf\, \{ {\overline\Psi }_{\kappa < \omega } ( b )\,\vert\, b\geq c_{\omega }\, ,\, b\in J_{\kappa < \omega } \}\> =\> \inf\, \{ {\overline\Psi }_{\kappa < \omega } ( b )\,\vert\, b\geq c_{\omega }\, ,\, b\in J_{\kappa < \omega , -} \} $$ and check as above that this gives a positive extension of $\, {\overline\Psi }_{\kappa < \omega }\, $ to the subspace $\, J_{\omega } = J_{\kappa < \omega } + \mathbb R\, c_{\omega }\, $. If one has exhausted all basis elements by this procedure one only needs to extend $\, \overline\Psi\, $ to a positive projection on $\, A''\, $ by continuity. 
The map $\,\overline\Psi\, $ then factors as a product of a positive retraction $\, \overline\sigma : A'' \twoheadrightarrow I ( A )\, $ and the completely isometric inclusion $\, \iota\, $ as above. It is clear that $\, \overline\Psi ( c_0 )\, $ gives the maximal value for the image of $\, c_0\, $ under any positive projection $\, \Psi : A'' \rightarrow A''\, $ with range $\, \Phi ( A'' )\, $, and that $\, \overline\Psi ( c_{\omega } )\, $ gives the maximal value for $\, \Psi ( c_{\omega } )\, $ subject to the condition that $\, \Psi ( c_{\kappa } ) = \overline\Psi ( c_{\kappa } )\, $ for $\, \kappa < \omega\, $. The construction above gives $$ \overline\Psi ( c_0 )\> =\> \inf\, \{ \Phi ( b )\,\vert\, b\geq c_0\, ,\, b\in J_+ \}\> \geq\> \inf\, \{ b\,\vert\, b\geq c_0\, ,\, b\in \Phi ( A'' ) \} $$ where the infimum in $\, \Phi ( A'' )\, $ is understood, since $\, b\geq c_0\, $ implies $\, \Phi ( b ) \geq c_0\, $ if $\, b\in J_+\, $. On the other hand the value of $\,\Psi ( c_0 )\, $ certainly must be smaller than the value to the right of the inequality so equality follows. Correspondingly there is for each chosen basis as above a positive projection $\,\underline\Psi : A'' \rightarrow A''\, $ with same range such that the values of $\, \underline\Psi ( c_{\omega } )\, $ are conditionally minimal, and in particular the value of $\,\underline\Psi ( c_0 )\, $ is the absolutely minimal value any positive projection can take in $\, c_0\, $, which follows from the symmetry $\, c_{\omega } \mapsto - c_{\omega }\, $ plus the above construction. 
If $\, b \in J_{0 , +} = J + {\mathbb R}_+\, c_0\, $ with $\, b \geq 0\, $ one checks the following reverse Schwarz inequality $$ \overline\sigma ( b^2 )\> \leq\> \inf\,\{ \rho ( a^2 )\,\vert\, a\geq b\, ,\, a\in J_+ \}\> =\> \inf\, \{ \rho ( a )^2\,\vert\, a\geq b\, ,\, a\in J_+ \} $$ $$ \> =\> \inf\, \{ \rho ( \Phi ( a )^2 )\,\vert\, a\geq b\, ,\, a\in J_+ \}\> =\> \> \tau ( \inf\, \{ \Phi ( a )^2\,\vert\, a\geq b\, ,\, a\in J_+ \} ) $$ $$ \> =\> \tau ( ( \inf\, \{ \Phi ( a )\, ,\, a\geq b\, ,\, a\in J_+ \} )^2 )\> =\> \tau ( \overline\Psi ( b )^2 ) =\> \overline\sigma ( b )^2 \> . $$ This needs some explanation. The first inequality follows from the general scheme that the image of an infimum of a decreasing net of elements under a positive map must be smaller than the infimum of its images, then the second is the Schwarz equality for $\, a\in J\, $ proved above, the third is given by definition of the $C^*$-product in $\, I ( A )\, $ which can be retrieved (cf. \cite{E-R}, Theorem 6.1.3) from the completely positive projection $\,\Phi\, $ by the formula $$ x\circ y\> =\> \rho ( \iota ( x ) \iota ( y ) )\> . $$ One finds that if $\, \tau : A'' \twoheadrightarrow I ( A )\, $ is any positive retraction for $\,\iota\, $ then $$ x^2\> =\> \tau ( \iota ( x ) ) \tau ( \iota ( x ) )\> \leq\> \tau ( \iota ( x ) \iota ( x ) )\> \leq\> \rho ( \iota ( x ) \iota ( x ) )\> = x^2 $$ which follows from the Schwarz inequality for the completely positive map $\,\iota\, $ and the Kadison-Schwarz inequality for the positive map $\,\tau\, $. The second equality in the second line follows by choosing a suitable positive retraction $\,\tau\, $ taking precisely the chosen value for the element $\, z = \inf\, \{ \Phi ( a )^2\,\vert\, a\geq b\, ,\, a\in J_+ \}\, $, namely $\, \tau ( z ) = \inf\,\{ \rho ( \Phi ( a )^2 )\,\vert\, a\geq b\, ,\, a\in J_+ \}\, $. Clearly no positive retraction may take a larger value in $\, z\, $. 
To see that $\, \tau\, $ exists one makes use of the above construction choosing $\, c_0 = z\, $ unless $\, z\in J\, $ in which latter case one may take $\,\tau = \rho\, $. Indeed since $\, z \geq 0\, $ one has $$ z\> =\> \inf\, \{ c\,\vert\, c\geq z\, ,\, c = d^2\, ,\, c , d\in J_+ \} $$ so there exists a positive retraction $\,\tau : A'' \twoheadrightarrow I ( A )\, $ with $$ \> \tau ( z )\> =\> \inf\, \{ \rho ( c )\, \vert\, c\geq z\, ,\, c\in J_+ \}\> =\> \inf\, \{ \rho ( \Phi ( d )^2 )\,\vert\, d^2 \geq z\, ,\, d\in J_+ \} $$ $$ \geq\> \inf\,\{ \rho ( \Phi ( d )^2)\,\vert\, d \geq b\, ,\, d\in J_+ \} \qquad\qquad\qquad\qquad\qquad \>\> $$ since $\, z \geq b^2\, $ by definition ($\, a\in J_+\, $ implies $\, \Phi ( a ) \geq a\, $). Then equality follows from the maximality argument above. Then the first two equalities in the third line follow from weak continuity of the $C^*$-product in $\, A''\, $ and the definition of $\, \overline\Psi ( b )\, $. The last inequality is implied by the general consideration above since $\, \tau ( \overline\Psi ( b )^2 ) = \tau ( \iota ( \overline\sigma ( b ) ) \iota ( \overline\sigma ( b ) ) ) = \rho ( \overline\Psi ( b )^2 )\, $. Since $\, \overline\sigma\, $ is positive the ordinary Kadison-Schwarz inequality applies to get the converse statement so that $$ \overline\sigma ( b^2 )\> =\> \overline\sigma ( b )^2 \> . $$ Then if $\, b = b_1 + b_2\, $ with $\, b_1\, ,\, b_2\, $ as above one has $$ {\overline\Psi }_0 ( b_1 b_2 + b_2 b_1 )\> =\> {\overline\Psi }_0 ( ( b_1 + b_2 )^2 ) - {\overline\Psi }_0 ( b_1^2 ) - {\overline\Psi }_0 ( b_2^2 )\> =\> {\overline\Psi }_0 ( b_1 ) {\overline\Psi }_0 ( b_2 ) + {\overline\Psi }_0 ( b_2 ) {\overline\Psi }_0 ( b_1 )\> . 
$$ Then if $\, b = b_1 - b_2\, $ with $\, b_1\, ,\, b_2\, $ as above one gets $$ \overline\sigma ( b^2 )\> =\> \overline\sigma ( b_1^2 ) - \overline\sigma ( b_1 b_2 + b_2 b_1 ) + \overline\sigma ( b_2^2 )\> =\> \overline\sigma ( b )^2 $$ and the Schwarz equality continues to hold for such differences. Let now $\, b\in J_0 + {\mathbb R}_+ c_1\, ,\, b\geq 0\, $. As above one gets $$ \overline\sigma ( b^2 )\> \leq\> \inf\, \{ \overline\sigma ( a^2 )\,\vert\, a\geq b\, ,\, a\in J_0 \} \> =\> \inf\, \{ \overline\sigma ( a )^2\,\vert\, a\geq b\, ,\, a\in J_0 \} $$ $$ \> =\> \inf \{ \rho ( \overline\Psi ( a )^2 )\,\vert\, a\geq b\, ,\, a\in J_0 \}\> =\> \tau ( \inf\, \{ \overline\Psi ( a )^2\,\vert\, a\geq b\, ,\, a\in J_0 \} ) $$ $$\> =\> \tau ( ( \inf\, \{ \overline\Psi ( a )\,\vert\, a\geq b\, ,\, a\in J_0 \} )^2 )\> =\> \tau ( \overline\Psi ( b )^2 )\> = \overline\sigma ( b )^2\> . $$ The argument is completely analogous to the previous case, if slightly more involved. One notes first that each element of the form $\, \Phi ( a )^2\, $ is in $\, J_{\iota } ( A )\, $ since $\, \tau ( \Phi ( a )^2 ) = \rho ( \Phi ( a )^2 )\, $ as shown above. Define $\, z = \inf\, \{ \overline\Psi ( a )^2\,\vert\, a\geq b\, ,\, a\in J_0 \}\, $ and considering $\, z\, $ as the first element of a suitable well ordered basis there exists a positive retraction $\, \tau : A'' \twoheadrightarrow I ( A )\, $ taking the maximal possible value in $\, z\, $. This value is given by $$ \tau ( z )\> =\> \inf\, \{ a\,\vert\, a\geq z\, ,\, a\in \Phi ( A'' ) \}\> \geq\> \inf\, \{ d^2\,\vert\, d^2\geq z\, ,\, d\in \Phi ( A'' ) \} $$ where the infimum is in $\,\Phi ( A'' )\, $, i.e. in the second case it is the supremum of all elements in $\, \Phi ( A'' )\, $ which are smaller than each element $\, d^2\geq z\, $ with $\, d\in \Phi ( A'' )\, $. 
The inequality follows since $\, d^2 \geq z\, $ implies $\, \Phi ( d^2 )\geq d^2\geq z\, $ and $$ \inf\, \{ d^2\,\vert\, d^2\geq z\, ,\, d\in \Phi ( A'' ) \}\> =\> \inf\, \{ \Phi ( d^2 )\,\vert\, d^2\geq z\, ,\, d\in \Phi ( A'' ) \} \> . $$ Indeed, $\, \Phi ( d^2 )\, $ is the smallest element in $\, \Phi ( A'' )\, $ larger than $\, d^2\, $. This accounts for the second equation in the middle line, and the rest of the argument is much the same as before. One proceeds by induction to prove the Schwarz equality $\, \overline\sigma ( x^2 ) = \overline\sigma ( x )^2\, $ for all selfadjoint elements in $\, \mathfrak Y\, $, and by continuity for all elements in $\, (A'')^{sa}\, $. Then the Schwarz equality holds for arbitrary normal elements. Indeed, for $\, x = a + i b\, $ a normal element the ordinary Schwarz inequality applies to give $$ \overline\sigma ( x x^* )\> =\> \overline\sigma ( a )^2 + \overline\sigma ( b )^2\> \geq\> \overline\sigma ( x ) \overline\sigma ( x )^*\> =\> \overline\sigma ( a )^2 + \overline\sigma ( b )^2 + i ( \overline\sigma ( b ) \overline\sigma ( a ) - \overline\sigma ( a ) \overline\sigma ( b ) ) $$ which implies that $\, i ( \overline\sigma ( b ) \overline\sigma ( a ) - \overline\sigma ( a ) \overline\sigma ( b ) ) \leq 0\, $. By symmetry one also gets $\, -i ( \overline\sigma ( b ) \overline\sigma ( a ) - \overline\sigma ( a ) \overline\sigma ( b ) ) \leq 0\, $ and hence $\, \overline\sigma ( a ) \overline\sigma ( b ) = \overline\sigma ( b ) \overline\sigma ( a )\, $ proving the Schwarz equality $\, \overline\sigma ( x x^* ) = \overline\sigma ( x ) \overline\sigma ( x )^*\, $ for any normal element. 
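To illustrate this equality (a short verification which we record here, though it is not needed for the argument), it already yields the preservation of unitaries and projections: for a unitary $\, u\in A''\, $, since $\,\overline\sigma\, $ is unital and $*$-preserving, $$ \overline\sigma ( u ) \overline\sigma ( u )^*\> =\> \overline\sigma ( u u^* )\> =\> \overline\sigma ( 1 )\> =\> 1 $$ and likewise $\, \overline\sigma ( u )^* \overline\sigma ( u ) = 1\, $, while for a projection $\, p = p^* = p^2\, $ one has $$ \overline\sigma ( p )^2\> =\> \overline\sigma ( p^2 )\> =\> \overline\sigma ( p )\> =\> \overline\sigma ( p )^* \> , $$ so that $\, \overline\sigma ( p )\, $ is again a projection. 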
From this one easily deduces for $\, x = x_+ - x_-\, $ selfadjoint with $\, x_+ = x\vee 0\, $ and $\, - x_- = x\wedge 0\, $ that $\overline\sigma\, $ sends $\, x_+\, $ to $\, \overline\sigma ( x )_+\, $ and $\, x_-\, $ to $\,\overline\sigma ( x )_-\, $ (since $\,\overline\sigma ( x_+ ) \overline\sigma ( x_- ) = \overline\sigma ( x_- )\overline\sigma ( x_+ ) = 0\, $). In particular if $\, x = x_+ - x_-\in (A'')^{sa} \, $ is an element of the kernel of $\,\overline\sigma\, $ with $\, x_+ x_- = x_- x_+ = 0\, $ then both $\, x_+\, $ and $\, x_-\, $ are contained in the kernel, i.e. the kernel is linearly generated by positive elements. From this one gets that the kernel of $\,\overline\sigma\, $ is a twosided ideal, for if $\, a\geq 0\, $ is contained in the kernel then so is $\,\sqrt a\, $, which follows directly from the Schwarz equality $$ \overline\sigma ( \sqrt a )^2\> =\> \overline\sigma ( a )\> =\> 0 \> . $$ Let $\, b\, ,\, c \in A''\, $ be arbitrary elements. Then $$ \overline\sigma ( ba ) + \overline\sigma ( \sqrt a b \sqrt a )\> =\> \overline\sigma ( b\sqrt a ) \overline\sigma ( \sqrt a ) + \overline\sigma ( \sqrt a ) \overline\sigma ( b \sqrt a )\> =\> 0 $$ and since obviously $\, \overline\sigma ( \sqrt a b \sqrt a ) \leq \Vert b \Vert \overline\sigma ( a ) = 0\, $ one gets $\, \overline\sigma ( b a ) = 0\, $. By the same line of argument $$ \overline\sigma ( b a c ) + \overline\sigma ( c b a )\> =\> \overline\sigma ( b a ) \overline\sigma ( c ) + \overline\sigma ( c ) \overline\sigma ( b a )\> =\> 0 $$ and since $\, \overline\sigma ( c b a ) = 0\, $ from the previous argument one concludes that $\, \overline\sigma ( b a c ) = 0\, $, which implies the assertion. It is not unlikely that $\,\overline\sigma\, $ is a $*$-homomorphism in general. Indeed, one may define an associative Banach algebra product on $\, I ( A )\, $ by the formula $$ x * y\> =\> \overline\sigma ( \iota ( x ) \iota ( y ) ) \> . 
$$ Associativity is readily checked from the fact that the kernel of $\,\overline\sigma\, $ is an ideal. If this product should also have the $C^*$-property $\, \Vert x * x^* \Vert = \Vert x {\Vert }^2\, $ then from uniqueness of the $C^*$-product on $\, I ( A )\, $ the product must be the usual one and $\,\overline\sigma\, $ must be a $*$-homomorphism. However it does not seem very easy to prove the $C^*$-property for a general (nonnormal) element $\, x\, $. For a normal element the result follows of course immediately from the Schwarz equality. The property of $\,\overline\sigma\, $ being a $*$-homomorphism is in fact equivalent to it being $2$-positive. In this case one gets for selfadjoint elements $\, a\, $ and $\, b\, $ $$ {\overline\sigma }_2 \left( {\begin{pmatrix} a & b \\ b & 0 \end{pmatrix} }^2\right)\> \geq\> {\overline\sigma }_2 \left( \begin{pmatrix} a & b \\ b & 0 \end{pmatrix} \right)^2\> $$ and hence, since $\, {\begin{pmatrix} a & b \\ b & 0 \end{pmatrix} }^2 = \begin{pmatrix} a^2 + b^2 & a b \\ b a & b^2 \end{pmatrix}\, $ and the diagonal entries cancel by the Schwarz equality, $$ \begin{pmatrix} 0 & \overline\sigma ( a b ) - \overline\sigma ( a )\overline\sigma ( b ) \\ \overline\sigma ( b a ) - \overline\sigma ( b ) \overline\sigma ( a ) & 0 \end{pmatrix}\> \geq\> 0 $$ implying $\, \overline\sigma ( a b ) = \overline\sigma ( a ) \overline\sigma ( b )\, $. Let $\, {\mathcal J}_{\iota }\, $ be the canonical subspace consisting of those elements which are in the kernel of every positive retraction $\,\sigma : A'' \twoheadrightarrow I ( A )\, $ giving a left inverse for $\,\iota\, $. This space is just the intersection of all kernels of positive retractions of the above type, since there is for every selfadjoint element $\, x\, $ a maximal and a minimal possible value which are taken by retractions of the form considered above, so if these are both zero then $\, x\, $ is contained in $\, {\mathcal J}_{\iota }\, $. Being the intersection of a given class of ideals, $\,{\mathcal J}_{\iota }\, $ is a twosided ideal itself. 
Also from the Schwarz equality for normal elements $\,\overline\sigma\, $ maps projections to projections and unitaries to unitaries. A similar argument shows that the subspace $\, J_{\iota } ( A )\, $ of elements with unique image in $\, I ( A )\, $ is a Jordan subalgebra, so that its image in the quotient $\, A'' / \mathcal J\, $ is also a Jordan subalgebra and canonically completely isometric with $\, I ( A )\, $. We claim that it is (relatively) monotone complete. Let an increasing net $\, ( x_{\lambda } )_{\lambda }\, $ be given with $\, \{ x_{\lambda } \} \subseteq I ( A )\, $ and $\, x\in I ( A )\, $ its least upper bound. Suppose that $\, x\, $ is not the least upper bound of the same net in $\, A'' / \mathcal J\, $. Then there exists an element $\, y\in A'' / \mathcal J\, ,\, y < x\, $ such that $\, y\, $ is an upper bound for the net $\, ( x_{\lambda } )_{\lambda }\, $. The positive projection $\, \Psi\, $ with range $\,\iota ( I ( A ) )\, $ induces a positive projection $\,\widetilde\Psi : A'' / \mathcal J \rightarrow A'' / \mathcal J\, $ with range equal to $\, I ( A )\, $. As in the argument above the value of $\, \widetilde\Psi ( y )\, $ is necessarily given by $\, x\, $ no matter the choice of $\,\Psi\, $. Therefore any preimage of $\, y\, $ is contained in $\, J ( A )\, $ which implies $\, y\in I ( A )\, $, hence $\, y = x\, $. To prove the Up-Down Theorem for $\, I ( A )\, $ choose for given $\, x\in I ( A )^{sa}\, $ a preimage $\,\overline x \in J_{\iota } ( A )^{sa}\, $. From the Up-Down Theorem in $\, A''\, $ one gets a monotone decreasing net $\, ( {\overline b}_{\mu } ) \searrow \overline x\, $ with each $\, {\overline b}_{\mu }\, $ the limit of a monotone increasing net $\, ( a_{\mu \nu } ) \nearrow {\overline b}_{\mu }\, $ of elements $\, a_{\mu \nu }\in A^{sa}\, $. 
Then each positive retraction $\, \sigma : A'' \twoheadrightarrow I ( A )\, $ sends each element $\, {\overline b}_{\mu }\, $ to a corresponding element $\, b_{\mu }\, $ which is the monotone increasing limit of the net $\, ( a_{\mu \nu } )\, $ in $\, I ( A )\, $ and $\, x\, $ is the limit of the monotone decreasing net $\, ( b_{\mu } )\, $. If $\, {\mathcal J}_{\iota }\, $ is trivial then the subspace $\, J\, $ generated by limits of monotone increasing (or decreasing) nets of elements in $\, A^{sa}\, $ is contained in $\, \Phi ( A'' )\, $ because the difference between the supremum of such a net in $\, A''\, $ and its supremum in $\,\Phi ( A'' )\, $ is in the kernel of any positive retraction and hence in $\, {\mathcal J}_{\iota }\, $. Then again by the same argument since every element in $\, (A'')^{sa}\, $ can be represented as the infimum of a monotone decreasing net of elements in $\, \Phi ( A'' )^{sa}\, $ it must be contained in $\, \Phi ( A'' )\, $ itself so $\,\Phi\, $ is the identity map and $\, A'' \simeq I ( A )\, $ follows. Put $\, \mathcal J = \sum_{\iota }\, {\mathcal J}_{\iota }\, $ which is a twosided ideal of $\, A''\, $. Each element $\, x\in\mathcal J\, $ is contained in the kernel of every completely positive projection $\,\Phi : A'' \rightarrow A''\, $ extending the identity map of $\, A\, $ and with range completely isometric to $\, I ( A )\, $. To see this let two different completely positive projections $\, \Phi\, ,\, {\widetilde\Phi} : A'' \rightarrow A''\, $ as above be given. Then it is easy to see that $\, \Phi \circ\widetilde\Phi\, $ is also a projection (since $\, \Phi\circ\widetilde\Phi\circ\Phi = \Phi\, $ from rigidity) with range $\, \Phi ( A'' )\, $ and kernel equal to the kernel of $\,\widetilde\Phi\, $. Thus for each different choice of kernel there is a corresponding projection onto a fixed copy of $\, I ( A ) \subseteq A''\, $. 
This implies that if $\, x\, $ is contained in the kernel of every completely positive projection with fixed range $\, \iota ( I ( A ) )\, $, it is necessarily also in the kernel of every such projection having a different range $\, {\widetilde\iota} ( I ( A ) )\, $. In particular this accounts for all elements in $\,\mathcal J\, $. Since the embedding of $\, I ( A )\, $ into $\, A'' / {\mathcal J}_{\iota }\, $ is completely canonical, the same is true for the embedding into $\, A'' / \mathcal J\, $, and in particular any two completely isometric inclusions $\, \iota\, ,\, {\widetilde\iota }\, $ will agree modulo $\,\mathcal J\, $. This implies that the preimage $\, J ( A )\, $ of $\, I ( A )\, $ modulo $\,\mathcal J\, $ is a Jordan subalgebra of $\, A''\, $. \qed \par \noindent The statement of the theorem extends to the case where $\, A\, $ is separable but represented on a nonseparable Hilbert space $\,\mathcal H\, $, for in this case the representation decomposes into a direct sum of separable representations, so the Up-Down Theorem applies. Note also that the proof uses the injectivity of $\, A''\, $ only to obtain a completely isometric inclusion $\, I ( A )\subseteq A''\, $ extending the identity of $\, A\, $, so that it also applies in case such an inclusion exists without $\, A''\, $ being injective (it is conceivable that it is always possible to embed $\, I ( A )\, $ into $\, A^{**}\, $ and hence into the strong closure of $\, A\, $ in any representation, but this needs a proof). \par \noindent {\it Remark.}\quad Since the map $\,\overline\sigma\, $ constructed in the proof of the theorem maps projections to projections, it is natural to ask whether it induces a map of the $K_0$-groups of the respective $C^*$-algebras. Clearly it sends a pair of homotopic projections $\, p {\sim }_h q\, $ in $\, A''\, $ to homotopic projections in $\, I ( A )\, $, but more is true.
It is always possible to extend the map $\, \overline\sigma\, $ to a Jordan homomorphism $$ {\overline\sigma }_n : M_n ( A'' ) \twoheadrightarrow M_n ( I ( A ) ) $$ for any $\, n\, $ such that this map reduces to $\,\overline\sigma\, $ when restricted to the upper left corner. This implies that $\,\overline\sigma\, $ maps stably homotopic projections to stably homotopic projections and hence induces a homomorphism of the subgroup of $\, K_0 ( A'' )\, $ generated by the projections in $\, A''\, $ into $\, K_0 ( I ( A ) )\, $. Then choosing any ascending sequence of natural numbers $\, 1= n_1 \leq n_2 \leq \cdots \leq n_k\leq\cdots\, $ and compatible Jordan homomorphisms $$ {\overline\sigma }_{n_k} : M_{n_k} ( A'' ) \twoheadrightarrow M_{n_k} ( I ( A ) ) $$ in the sense above gives a well-defined map $$ K_0 ( A'' ) \longrightarrow K_0 ( I ( A ) )\> . $$ \par \noindent {\it Example.}\quad If $\, A\, $ is commutative, then any positive retraction $\, \sigma : \lambda ( A )'' \twoheadrightarrow I ( A )\, $ as constructed in the proof of the theorem is a $*$-homomorphism. The homomorphism $\, \sigma \, $ cannot be normal in general. To see this, let $\, A = C ( X )\, $ be the algebra of continuous functions on the interval $\, X = [ 0 , 1 ]\, $ and, choosing some countable dense subset $\, X_S\subseteq X\, $, consider the representation $\, {\lambda }_S\, $ of $\, A\, $ on the Hilbert space $\, l^2 ( X_S )\, $ by pointwise multiplication, so that $\, A'' = l^\infty ( X_S )\, $. If $\, {\sigma }_{{\lambda }_S}\, $ were normal, then $\, I ( A )\, $ would be $*$-isomorphic to the von Neumann algebra $\, l^\infty ( Y_S )\, $ for some dense subset $\, Y_S \subseteq X_S\, $ by \cite{Pe}, Corollary 2.5.5. But then, for any given point $\, x_0\in Y_S\, $, the set $\, Y_S\backslash \{ x_0 \}\, $ is dense in $\, X\, $, so that the natural projection $\, l^\infty ( Y_S ) \twoheadrightarrow l^\infty ( Y_S \backslash \{ x_0 \} )\, $ is faithful on $\, A\, $.
From rigidity it would also have to be faithful on $\, I ( A )\, $ (there always exists an extension of the identity map of $\, A\, $ in the reverse direction), which gives a contradiction. The same example serves to show that $\,\rho\, $ is not unique in this case. Divide $\, X_S = Y_S \cup Z_S\, $ into a disjoint union of dense subsets, so that $\, l^\infty ( X_S ) \simeq l^\infty ( Y_S ) \oplus l^\infty ( Z_S )\, $. Then for each summand there is a corresponding surjection $\, {\rho }_{Y , Z} : l^\infty ( X_S ) \twoheadrightarrow I ( C ( X ) )\, $ which annihilates the complementary summand, so these must be different. From part (ii) of the theorem one obtains for commutative $\, A\, $ a simple characterization of the elements in $\, J ( A )\, $, which in this case is a canonical $C^*$-subalgebra of $\, A''\, $. For a given selfadjoint element $\, f\, $ in $\, A''\, $ define $\, A_f = \{ a\in A\,\vert\, a\geq f \}\, $ and $\, B_f = \{ b\in A\,\vert\, b \leq f \}\, $. Also define for any subset $\, \mathcal A \subset A^{sa}\, $ which is bounded from below, resp. for any subset $\, \mathcal B \subset A^{sa}\, $ which is bounded from above, its complement by $\, {\mathcal A}^c = \{ c\in A^{sa}\,\vert\, c \leq \inf\, \mathcal A \}\, $, resp. $\, {\mathcal B}^c = \{ d\in A^{sa}\,\vert\, d\geq \sup\, \mathcal B \}\, $, so that $\, \mathcal A \subseteq ({\mathcal A}^c)^c\, $ and $\, {\mathcal A}^c = (({\mathcal A}^c)^c)^c\, $. The proof of the theorem shows that for any selfadjoint element $\, f\, $ in $\, A''\, $ there exists a $*$-homomorphism $\, \overline\sigma : A'' \twoheadrightarrow I ( A )\, $ extending the identity map of $\, A\, $ such that $\, \overline\sigma ( f ) = \inf\, \{ x\,\vert\, x\in I ( A )\, ,\, \iota ( x ) \geq f \}\, $ with respect to some positive inclusion $\,\iota : I ( A ) \hookrightarrow A''\, $ extending the identity map of $\, A\, $.
If $\, f\in J ( A )\, $, then the image of $\, f\, $ in $\, I ( A )\, $ is the same for any choice of positive retraction $\, \sigma : A'' \twoheadrightarrow I ( A )\, $ (whatever the choice of $\, \iota\, $ is). This implies by part (ii) of the theorem that the value of $\, \sigma ( f )\, $ must be equal to $\, \inf\, A_f\, $ (the infimum taken in $\, I ( A )\, $) and also equal to $\, \sup\, B_f\, $. This then implies the identities $\, (A_f)^c = ((B_f)^c)^c\, $, or equivalently $\, (B_f)^c = ((A_f)^c)^c\, $. On the other hand it is easy to see that these identities can be satisfied only for $\, f\in J ( A )\, $. Since there is a unique positive retraction $\, \sigma : J ( A ) \twoheadrightarrow I ( A )\, $ extending the identity map of $\, A\, $, which in fact is a $*$-homomorphism, this map induces by duality a canonical embedding $\, s : Spec ( I ( A ) ) \hookrightarrow Spec ( J ( A ) )\, $ of the spectrum (or the state space) of $\, I ( A )\, $ as a closed subspace of the spectrum (resp. state space) of $\, J ( A )\, $, the image of which may be called the {\it rigid states with respect to $\, A\, $}. They have the following rigidity property: given any positive map $\, j : J ( A ) \rightarrow J ( A )\, $ extending the identity map of $\, A\, $ and $\, \phi\, $ a state in the image of $\, s\, $, one gets $\, \phi ( j ( f ) ) = \phi ( f )\, $ for all $\, f\in J ( A )\, $. Note also that $\, J ( A )\, $ has the following injectivity property relative to $\, A\, $: given any subspace $\, E\subseteq J ( A )\, $ containing $\, A\, $ and any positive map $\, \rho : E \rightarrow J ( A )\, $ which reduces to the identity map on $\, A\, $, there is a positive extension of $\, \rho\, $ to all of $\, J ( A )\, $. This follows since by injectivity of $\, A''\, $ there exists a positive extension of $\,\rho\, $ to a map $\, \overline\rho : J ( A ) \rightarrow A''\, $, which must necessarily send $\, J ( A )\, $ into itself. \end{document}
\begin{document} \title{Sofic Mean Dimension} \author{Hanfeng Li} \thanks{Partially supported by NSF Grants DMS-0701414 and DMS-1001625.} \address{\hskip-\parindent Department of Mathematics, Chongqing University, Chongqing 401331, China. Department of Mathematics, SUNY at Buffalo, Buffalo NY 14260-2900, U.S.A.} \email{[email protected]} \keywords{Sofic group, amenable group, mean dimension, topological entropy, small-boundary property} \date{May 8, 2013} \begin{abstract} We introduce mean dimensions for continuous actions of countable sofic groups on compact metrizable spaces. These generalize the Gromov-Lindenstrauss-Weiss mean dimensions for actions of countable amenable groups, and are useful for distinguishing continuous actions of countable sofic groups with infinite entropy. \end{abstract} \maketitle \section{Introduction} \label{S-introduction} Mean dimension was introduced by Gromov \cite{Gromov} about a decade ago, as an analogue of dimension for dynamical systems, and was studied systematically by Lindenstrauss and Weiss \cite{LW} for continuous actions of countable amenable groups on compact metrizable spaces. Among the many beautiful results they obtained, it is especially notable that they used mean dimension to show that there exists a minimal action of ${\mathbb Z}$ on some compact metrizable space which cannot be equivariantly embedded into $[0, 1]^{{\mathbb Z}}$ equipped with the shift ${\mathbb Z}$-action. Mean dimension is explored further in \cite{Coornaert, CK, Gutman, Krieger06, Krieger09, Lindenstrauss}. The notion of sofic groups was also introduced by Gromov \cite{Gromov99} around the same time. The class of sofic groups includes all discrete amenable groups and all residually finite groups, and it is still an open question whether every group is sofic. For some nice expositions on sofic groups, see \cite{CC, ES05, ES06, Pestov, Weiss00}.
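As a toy numerical illustration of soficity (a sketch that is not part of the paper's development; the choice of the group ${\mathbb Z}$, the cyclic-shift maps, and all numerical parameters are made here purely for illustration), the residually finite group ${\mathbb Z}$ is approximated by its translation action on ${\mathbb Z}/d{\mathbb Z}$, and the approximate-multiplicativity and approximate-freeness conditions recalled in the definition below can be checked directly:

```python
import itertools

# Illustrative sofic approximation of G = Z: sigma(d, s) is the cyclic
# translation a -> a + s (mod d) on [d] = {0, ..., d-1}.
def sigma(d, s):
    return lambda a: (a + s) % d

d = 1000
window = range(-5, 6)  # a finite window of group elements, playing the role of F

# Approximate multiplicativity (condition (1)): for cyclic translations it
# is exact, since sigma(d, s) o sigma(d, t) = sigma(d, s + t) on the nose.
for s, t in itertools.product(window, repeat=2):
    agree = sum(1 for a in range(d)
                if sigma(d, s)(sigma(d, t)(a)) == sigma(d, s + t)(a))
    assert agree == d

# Approximate freeness (condition (2)): distinct s, t with |s - t| < d move
# every point differently, so the proportion of coincidences is exactly 0.
for s, t in itertools.product(window, repeat=2):
    if s != t:
        coincide = sum(1 for a in range(d) if sigma(d, s)(a) == sigma(d, t)(a))
        assert coincide == 0
```

Since $d$ can be taken arbitrarily large, condition (3) is witnessed as well; for sofic groups that are not residually finite, the two proportions only tend to $1$ and $0$ along the approximation sequence rather than being exact.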
Using the idea of counting sofic approximations, in \cite{Bowen10} Bowen defined entropy for measure-preserving actions of countable sofic groups on probability measure spaces, when there exists a countable generating partition with finite Shannon entropy. Together with David Kerr, in \cite{KL11, KerLi10amenable} we extended Bowen's measure entropy to all measure-preserving actions of countable sofic groups on standard probability measure spaces, and defined topological entropy for continuous actions of countable sofic groups on compact metrizable spaces. The sofic measure entropy and sofic topological entropy are related by the variational principle \cite{KL11}. Furthermore, the sofic entropies coincide with the classical entropies when the group is amenable \cite{Bowen10a, KerLi10amenable}. The goal of this article is to extend mean dimension to continuous actions of countable sofic groups $G$ on compact metrizable spaces $X$. In order to define sofic mean dimension, we use some approximate actions of $G$ on finite sets as models, and replace $X$ by certain spaces of approximately $G$-equivariant maps from the finite sets to $X$, which appeared first in the definition of sofic topological entropy \cite{KerLi10amenable}. A novelty here is that we replace open covers of $X$ by certain open covers on these map spaces. Lindenstrauss and Weiss studied two kinds of mean dimensions for actions of countable amenable groups in \cite{LW}, one is topological, as the analogue of the covering dimension, and the other is metric, as the analogue of the lower box dimension. We define sofic mean topological dimension and establish some basic properties in Section~\ref{S-sofic mean topological dim}, and show that it coincides with the Gromov-Lindenstrauss-Weiss mean topological dimension when the group is amenable in Section~\ref{S-top amenable}.
Similarly, we discuss sofic metric mean dimension in Sections~\ref{S-sofic metric mean dim} and \ref{S-metric amenable}. It is shown in Section~\ref{S-comparison} that sofic mean topological dimension is always bounded above by sofic metric mean dimension. We calculate sofic mean dimensions for some Bernoulli shifts and show that every non-trivial factor of the shift action of $G$ on $[0, 1]^G$ has positive sofic mean dimensions in Section~\ref{S-shifts}. In the last section, we show that actions with the small-boundary property have zero or $-\infty$ sofic mean topological dimension. To round off this section, we fix some notation. \begin{definition} \label{D-sofic group} For $d\in {\mathbb N}$, we write $[d]$ for the set $\{1, \dots, d\}$ and ${\rm Sym}(d)$ for the permutation group of $[d]$. A countable group $G$ is called {\it sofic} if there is a {\it sofic approximation sequence} $\Sigma= \{ \sigma_i : G \to {\rm Sym} (d_i ) \}_{i=1}^\infty$ for $G$, namely the following three conditions are satisfied: \begin{enumerate} \item for any $s, t\in G$, one has $\lim_{i\to \infty}\frac{|\{a\in [d_i]: \sigma_i(s)\sigma_i(t)(a)=\sigma_i(st)(a)\}|}{d_i}=1$; \item for any distinct $s, t\in G$, one has $\lim_{i\to \infty}\frac{|\{a\in [d_i]: \sigma_i(s)(a)=\sigma_i(t)(a)\}|}{d_i}=0$; \item $\lim_{i\to \infty}d_i=+\infty$. \end{enumerate} For a map $\sigma$ from $G$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$, we write $\sigma(s)(a)$ as $\sigma_s(a)$ or $sa$ when there is no confusion. We say that $\sigma$ is a {\it good enough sofic approximation} for $G$ if, for some large finite subset $F$ of $G$ which will be clear from the context, one has $\frac{|\{a\in [d]: \sigma(s)\sigma(t)(a)=\sigma(st)(a)\}|}{d}$ very close to $1$ for all $s, t \in F$ and $\frac{|\{a\in [d]: \sigma(s)(a)=\sigma(t)(a)\}|}{d}$ very close to $0$ for all distinct $s, t\in F$.
\end{definition} Throughout this paper, $G$ will be a countable sofic group with identity element $e_G$, and we fix a sofic approximation sequence $\Sigma$ for $G$. {\it Acknowledgements.} I thank Yonatan Gutman and David Kerr for very helpful discussions, and Lewis Bowen and Elon Lindenstrauss for comments. I am grateful to the referee for extremely helpful comments, which improved the paper greatly. \section{Sofic Mean Topological Dimension} \label{S-sofic mean topological dim} In this section we define the sofic mean topological dimension and establish some basic properties. We start by recalling the definitions of the covering dimension of compact metrizable spaces and of the mean topological dimension for actions of countable amenable groups. For a compact space $Y$ and two finite open covers ${\mathcal U}$ and ${\mathcal V}$ of $Y$, we say that ${\mathcal V}$ {\it refines} ${\mathcal U}$, and write ${\mathcal V}\succ {\mathcal U}$, if every element of ${\mathcal V}$ is contained in some element of ${\mathcal U}$. \begin{definition} \label{D-order} Let $Y$ be a compact space and ${\mathcal U}$ a finite open cover of $Y$. We denote $${\rm ord}({\mathcal U})=\max_{y\in Y}\sum_{U\in {\mathcal U}}1_U(y)-1, \mbox{ and } {\mathcal D}({\mathcal U})=\min_{{\mathcal V}\succ {\mathcal U}}{\rm ord}({\mathcal V}),$$ where ${\mathcal V}$ ranges over finite open covers of $Y$ refining ${\mathcal U}$. \end{definition} For a compact metrizable space $X$, its {\it (covering) dimension} $\dim(X)$ is defined as $\sup_{{\mathcal U}}{\mathcal D}({\mathcal U})$ for ${\mathcal U}$ ranging over finite open covers of $X$. \begin{definition} \label{D-amenable} A countable group $G$ is called {\it amenable} if for any finite subset $K$ of $G$ and any $\varepsilon>0$ there exists a finite subset $F$ of $G$ with $|KF\setminus F|<\varepsilon |F|$.
Equivalently, $G$ has a {\it left F{\o}lner sequence} $\{F_n\}_{n\in {\mathbb N}}$, i.e. each $F_n$ is a nonempty finite subset of $G$ and $\frac{|sF_n\setminus F_n|}{|F_n|}\to 0$ as $n\to \infty$ for every $s\in G$. \end{definition} Let a countable amenable group $G$ act continuously on a compact metrizable space $X$. Let ${\mathcal U}$ be a finite open cover of $X$. For a nonempty finite subset $F$ of $G$, we set ${\mathcal U}^F=\bigvee_{s\in F}s^{-1}{\mathcal U}$. The function $F\mapsto {\mathcal D}({\mathcal U}^F)$ defined on the set of nonempty finite subsets of $G$ satisfies the conditions of the Ornstein-Weiss lemma \cite{OW} \cite[Theorem 6.1]{LW}, thus $\frac{{\mathcal D}({\mathcal U}^F)}{|F|}$ converges to some real number, denoted by ${\rm mdim}({\mathcal U})$, as $F$ becomes more and more left invariant. That is, for any $\varepsilon>0$, there exist a nonempty finite subset $K$ of $G$ and $\delta>0$ such that $|\frac{{\mathcal D}({\mathcal U}^F)}{|F|}-{\rm mdim}({\mathcal U})|<\varepsilon$ for every nonempty finite subset $F$ of $G$ satisfying $|KF\setminus F|<\delta |F|$. In terms of any left F{\o}lner sequence $\{F_n\}_{n\in {\mathbb N}}$ of $G$, one has $$ {\rm mdim}({\mathcal U})=\lim_{n\to \infty}\frac{{\mathcal D}({\mathcal U}^{F_n})}{|F_n|}.$$ The {\it mean topological dimension} of $X$ \cite[page 13]{LW} is defined as $${\rm mdim}(X)=\sup_{{\mathcal U}}{\rm mdim}({\mathcal U}),$$ where ${\mathcal U}$ ranges over finite open covers of $X$. Throughout the rest of this section, we fix a countable sofic group $G$ and a sofic approximation sequence $\Sigma = \{ \sigma_i : G \to {\rm Sym} (d_i ) \}_{i=1}^\infty$ for $G$, as defined in Section~\ref{S-introduction}. Let $\alpha$ be a continuous action of $G$ on a compact metrizable space $X$. Let $\rho$ be a continuous pseudometric on $X$.
For a given $d\in{\mathbb N}$, we define on the set of all maps from $[d]$ to $X$ the pseudometrics \begin{align*} \rho_2 (\varphi , \psi ) &= \bigg( \frac{1}{d} \sum_{a\in [d]} (\rho (\varphi (a),\psi (a)))^2 \bigg)^{1/2} , \\ \rho_\infty (\varphi ,\psi ) &= \max_{a\in [d]} \rho (\varphi (a),\psi (a)) . \end{align*} \begin{definition}\label{D-map top} Let $F$ be a nonempty finite subset of $G$ and $\delta > 0$. Let $\sigma$ be a map from $G$ to ${\rm Sym} (d)$ for some $d\in{\mathbb N}$. We define $Map (\rho ,F,\delta ,\sigma )$ to be the set of all maps $\varphi : [d] \to X$ such that $\rho_2 (\varphi\circ\sigma_s , \alpha_s \circ\varphi ) \le \delta$ for all $s\in F$. We consider $Map (\rho ,F,\delta ,\sigma )$ to be a topological space with the topology inherited from $X^d$. \end{definition} The space $Map (\rho ,F,\delta ,\sigma )$ appeared first in \cite[Section 2]{KerLi10amenable}, and was used to define the topological entropy of the action $\alpha$. Eventually we shall take $\sigma$ to be $\sigma_i$ for large $i$. Then the condition (1) in the definition of $\Sigma$ says that $\sigma$ is approximately a group homomorphism of $G$ into ${\rm Sym}(d)$, and therefore we can think of $\sigma$ as an approximate action of $G$ on $[d]$. The space $Map (\rho ,F,\delta ,\sigma )$ is the set of approximately $G$-equivariant maps from $[d]$ into $X$. For a finite open cover ${\mathcal U}$ of $X$, we denote by ${\mathcal U}^d$ the finite open cover of $X^{[d]}$ consisting of $U_1\times U_2\times \cdots \times U_d$ for $U_1, \dots, U_d\in {\mathcal U}$. Note that $Map(\rho, F, \delta, \sigma)$ is a closed subset of $X^{[d]}$. Consider the restriction ${\mathcal U}^d|_{Map(\rho, F, \delta, \sigma)}={\mathcal U}^d\cap Map(\rho, F, \delta, \sigma)$ of ${\mathcal U}^d$ to $Map(\rho, F, \delta, \sigma)$.
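For intuition, the two pseudometrics defined above can be compared numerically (a sketch under the illustrative assumption $X=[0,1]$ with $\rho(x,y)=|x-y|$; the perturbation scheme and all parameters below are invented for this illustration): $\rho_2$ is an averaged distance, hence always dominated by $\rho_\infty$, and a bound $\rho_2(\varphi,\psi)\le\delta$ forces $\rho(\varphi(a),\psi(a))\le\sqrt{\delta}$ outside a set of at most $\delta d$ indices, which is the Chebyshev-type observation used repeatedly in this section.

```python
import math
import random

# Illustrative assumption: X = [0, 1] with rho(x, y) = |x - y|.
def rho(x, y):
    return abs(x - y)

def rho_2(phi, psi):
    """The averaged (l^2-type) pseudometric on maps [d] -> X."""
    d = len(phi)
    return math.sqrt(sum(rho(p, q) ** 2 for p, q in zip(phi, psi)) / d)

def rho_inf(phi, psi):
    """The uniform pseudometric on maps [d] -> X."""
    return max(rho(p, q) for p, q in zip(phi, psi))

random.seed(1)
d = 10000
phi = [random.random() for _ in range(d)]
# Perturb roughly 1% of the coordinates by a large amount: rho_2 stays small
# even though rho_inf is large.
psi = [p if random.random() > 0.01 else (p + 0.3) % 1.0 for p in phi]

# Averaging can only decrease the distance: rho_2 <= rho_inf.
assert rho_2(phi, psi) <= rho_inf(phi, psi) + 1e-12

# Chebyshev-type estimate: if rho_2(phi, psi) <= delta, then
# rho(phi(a), psi(a)) <= sqrt(delta) for all but at most delta * d indices a.
delta = rho_2(phi, psi)
bad = sum(1 for p, q in zip(phi, psi) if rho(p, q) > math.sqrt(delta))
assert bad <= delta * d
```

The final assertion is exactly the counting argument proved as a lemma later in this section, applied with $\delta=\rho_2(\varphi,\psi)$.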
Denote ${\mathcal D}({\mathcal U}^d|_{Map(\rho, F, \delta, \sigma)})$ by ${\mathcal D}({\mathcal U}, \rho, F, \delta, \sigma)$. The set $[d]$ is the analogue of an approximately left invariant finite subset $H$ of $G$ in the amenable group case, $Map(\rho, F, \delta, \sigma)$ is the analogue of the subset $\{(sx)_{s\in H}: x\in X\}$ of $X^H$, which can be identified with $X$ naturally (this will be made clear in the proof of Theorem~\ref{T-top mean dim} below), ${\mathcal U}^d|_{Map(\rho, F, \delta, \sigma)}$ is then the analogue of ${\mathcal U}^H$, and ${\mathcal D}({\mathcal U}, \rho, F, \delta, \sigma)$ is the analogue of ${\mathcal D}({\mathcal U}^H)$. \begin{definition} \label{D-sofic top mean dim} Let $\rho$ be a compatible metric on $X$. Let $F$ be a nonempty finite subset of $G$ and $\delta > 0$. For a finite open cover ${\mathcal U}$ of $X$ we define \begin{align*} {\mathcal D}_\Sigma({\mathcal U}, \rho ,F, \delta ) &= \varlimsup_{i\to\infty} \frac{{\mathcal D}({\mathcal U}, \rho, F, \delta, \sigma_i)}{d_i},\\ {\mathcal D}_\Sigma({\mathcal U}, \rho ,F ) &= \inf_{\delta > 0} {\mathcal D}_\Sigma({\mathcal U}, \rho ,F, \delta ),\\ {\mathcal D}_\Sigma({\mathcal U}, \rho)&= \inf_{F} {\mathcal D}_\Sigma({\mathcal U}, \rho ,F), \end{align*} where $F$ in the third line ranges over the nonempty finite subsets of $G$. If $Map (\rho ,F,\delta ,\sigma_i )$ is empty for all sufficiently large $i$, we set ${\mathcal D}_\Sigma({\mathcal U}, \rho ,F, \delta ) = -\infty$. We define the {\it sofic mean topological dimension} of $\alpha$ as $$ {\rm mdim}_\Sigma (X, \rho ) = \sup_{{\mathcal U}} {\mathcal D}_\Sigma({\mathcal U}, \rho)$$ for ${\mathcal U}$ ranging over finite open covers of $X$.
As shown by Lemma~\ref{L-sofic top mean dim independent of metric} below, the quantities ${\mathcal D}_\Sigma({\mathcal U}, \rho ,F ), {\mathcal D}_\Sigma({\mathcal U}, \rho)$ and ${\rm mdim}_\Sigma(X, \rho)$ do not depend on the choice of $\rho$, and we shall write them as ${\mathcal D}_\Sigma({\mathcal U}, F ), {\mathcal D}_\Sigma({\mathcal U})$ and ${\rm mdim}_\Sigma(X)$ respectively. In particular, ${\rm mdim}_\Sigma(\cdot)$ is an invariant of topological dynamical systems. \end{definition} \begin{remark} \label{R-mean top dim1} Note that ${\mathcal D}_\Sigma({\mathcal U}, \rho ,F, \delta )$ decreases when $\delta$ decreases and $F$ increases. Thus in the definitions of ${\mathcal D}_\Sigma({\mathcal U}, \rho ,F )$ and ${\mathcal D}_\Sigma({\mathcal U}, \rho)$ one can also replace $\inf_{\delta>0}$ and $\inf_F$ by $\lim_{\delta\to 0}$ and $\lim_{F\to \infty}$ respectively, where $F_1\le F_2$ means $F_1\subseteq F_2$. If we partially order the set of all such pairs $(F, \delta)$ by declaring $(F, \delta)\ge (F', \delta')$ when $F\supseteq F'$ and $\delta\le \delta'$, then $$ {\mathcal D}_\Sigma({\mathcal U}, \rho)= \lim_{(F, \delta)\to \infty} {\mathcal D}_\Sigma({\mathcal U}, \rho ,F, \delta).$$ \end{remark} \begin{remark} \label{R-mean top dim2} From Definition~\ref{D-sofic top mean dim} and \cite[Proposition 2.4]{KerLi10amenable} one gets that the following conditions are equivalent: \begin{enumerate} \item ${\rm mdim}_\Sigma(X)\ge 0$; \item For any finite subset $F$ of $G$, any $\delta>0$, and any $N\in {\mathbb N}$, there is some $i\ge N$ such that $Map(\rho, F, \delta, \sigma_i)$ is nonempty; \item The sofic topological entropy $h_\Sigma(X)\ge 0$.
\end{enumerate} Also, by the variational principle \cite[Theorem 6.1]{KL11}, these conditions imply the following: \begin{enumerate} \item[(4)] $X$ has a $G$-invariant Borel probability measure. \end{enumerate} Every non-amenable countable group has a continuous affine action on a compact metrizable convex set (in some locally convex Hausdorff topological vector space) admitting no fixed point, equivalently, admitting no invariant Borel probability measure \cite[Corollary 4.10.2]{CC}. Thus every non-amenable sofic group has a continuous action on some compact metrizable space with sofic mean topological dimension $-\infty$. However, we do not know whether there is any such example with a unique invariant Borel probability measure. \end{remark} \begin{remark} \label{R-sequence} The definitions of ${\rm mdim}_\Sigma(X), {\rm mdim}_{\Sigma, {\rm M}}(X, \rho)$ in the sequel and ${\rm mdim}_{\Sigma, {\rm M}}(X)$ depend on the choice of the sofic approximation sequence $\Sigma$. However, we do not know any examples for which different choices of $\Sigma$ lead to different values of these invariants, though Lewis Bowen \cite{Bowen09} showed that the sofic measure entropy of the trivial action of ${\rm SL}(n, {\mathbb Z})$ for $n\ge 2$ (more generally, of groups with property ($\tau$)) on the two-point set equipped with the uniform distribution does depend on the choice of $\Sigma$. \end{remark} We need the following simple observation several times. \begin{lemma} \label{L-almost} Let $\rho$ be a continuous pseudometric on $X$, $F$ a nonempty finite subset of $G$, $\delta>0$, and $\sigma$ a map from $G$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$. For any $\varphi\in Map(\rho, F, \delta, \sigma)$ and $s\in F$, setting ${\mathcal W}=\{a\in [d]: \rho(s\varphi(a), \varphi(sa))\le \sqrt{\delta}\}$, one has $|{\mathcal W}|\ge (1-\delta)d$.
\end{lemma} \begin{proof} This follows from $$ \delta^2\ge (\rho_2(\alpha_s\circ \varphi, \varphi\circ \sigma_s))^2\ge \frac{1}{d}|[d]\setminus {\mathcal W}|\delta=(1-\frac{|{\mathcal W}|}{d})\delta.$$ \end{proof} \begin{lemma} \label{L-sofic top mean dim independent of metric} Let $\rho$ and $\rho'$ be compatible metrics on $X$. For any nonempty finite subset $F$ of $G$ and any finite open cover ${\mathcal U}$ of $X$, one has ${\mathcal D}_\Sigma({\mathcal U}, \rho ,F )={\mathcal D}_\Sigma({\mathcal U}, \rho' ,F )$. \end{lemma} \begin{proof} By symmetry it suffices to show ${\mathcal D}_\Sigma({\mathcal U}, \rho ,F )\le {\mathcal D}_\Sigma({\mathcal U}, \rho' ,F )$. Let $\delta>0$. Take $\delta'>0$ to be a small positive number which we shall determine in a moment. We claim that for any map $\sigma$ from $G$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$ one has $Map(\rho, F, \delta', \sigma)\subseteq Map(\rho', F, \delta, \sigma)$. Let $\varphi\in Map(\rho, F, \delta', \sigma)$. For each $s\in F$, set \[ {\mathcal W}_s=\{a\in [d]: \rho(s\varphi(a), \varphi(sa))\le \sqrt{\delta'}\}.\] By Lemma~\ref{L-almost} one has $|{\mathcal W}_s|\ge (1-\delta')d$. Taking $\delta'$ small enough, we may assume that for any $x, y\in X$ with $\rho(x, y)\le \sqrt{\delta'}$, one has $\rho'(x, y)\le \delta/2$. Then one has \begin{align*} (\rho'_2(\alpha_s\circ \varphi, \varphi\circ \sigma_s))^2&\le \frac{|{\mathcal W}_s|}{d}\cdot \frac{\delta^2}{4}+(1-\frac{|{\mathcal W}_s|}{d})({\rm diam}(X, \rho'))^2\\ &\le \frac{\delta^2}{4}+\delta'({\rm diam}(X, \rho'))^2\le \delta^2, \end{align*} granted that $\delta'$ is small enough. Therefore $\varphi\in Map(\rho', F, \delta, \sigma)$. This proves the claim.
Since $Map(\rho, F, \delta', \sigma)\subseteq Map(\rho', F, \delta, \sigma)$, clearly ${\mathcal D}({\mathcal U}, \rho, F, \delta', \sigma)\le {\mathcal D}({\mathcal U}, \rho', F, \delta, \sigma)$. Thus ${\mathcal D}_\Sigma({\mathcal U}, \rho, F)\le {\mathcal D}_\Sigma({\mathcal U}, \rho, F, \delta')\le {\mathcal D}_\Sigma({\mathcal U}, \rho', F, \delta)$. Letting $\delta\to 0$, we get ${\mathcal D}_\Sigma({\mathcal U}, \rho, F)\le {\mathcal D}_\Sigma({\mathcal U}, \rho', F)$ as desired. \end{proof} We need the following lemma several times. \begin{lemma} \label{L-factor1} Let $\alpha^X$ and $\alpha^Y$ be continuous actions of $G$ on compact metrizable spaces $X$ and $Y$ respectively, and let $\pi: X\rightarrow Y$ be an equivariant continuous map. Let $\rho^X$ and $\rho^Y$ be compatible metrics on $X$ and $Y$ respectively. Let $F$ be a nonempty finite subset of $G$ and $\delta>0$. Then there exists $\delta'>0$ such that for every map $\sigma$ from $G$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$ and every $\varphi\in Map(\rho^X, F, \delta', \sigma)$, one has $\pi\circ \varphi\in Map(\rho^Y, F, \delta, \sigma)$. \end{lemma} \begin{proof} Since $\pi$ is continuous and $X$ is compact, we can find $\delta'>0$ small enough such that $|F|\delta'({\rm diam}(Y, \rho^Y))^2\le \delta^2/2$ and for any $x, x'\in X$ with $\rho^X(x, x')\le \sqrt{\delta'}$ one has $\rho^Y(\pi(x), \pi(x'))\le \delta/2$. Denote by ${\mathcal W}$ the set of all $a\in [d]$ satisfying $\rho^X(s\varphi(a),\varphi(sa))\le \sqrt{\delta'}$ for all $s\in F$. By Lemma~\ref{L-almost}, applied to each $s\in F$, one has $|{\mathcal W}|\ge (1-|F|\delta')d$.
For each $a\in {\mathcal W}$ and $s\in F$, by the choice of $\delta'$ one has $$\rho^Y(s\pi(\varphi(a)), \pi(\varphi(sa)))=\rho^Y(\pi(s\varphi(a)), \pi(\varphi(sa)))\le \delta/2.$$ Thus \begin{align*} (\rho^Y_2(\alpha^Y_s\circ \pi\circ \varphi, \pi\circ \varphi\circ \sigma_s))^2&\le \frac{|{\mathcal W}|}{d}(\frac{\delta}{2})^2+\frac{|[d]\setminus {\mathcal W}|}{d}({\rm diam}(Y, \rho^Y))^2\\ &\le \delta^2/4+|F|\delta'({\rm diam}(Y, \rho^Y))^2\le \delta^2/4+\delta^2/2<\delta^2, \end{align*} and hence $\pi\circ \varphi\in Map(\rho^Y, F, \delta, \sigma)$. \end{proof} Lindenstrauss and Weiss established the next two propositions in the case where $G$ is amenable \cite[page 5, Proposition 2.8]{LW}. \begin{proposition} \label{P-subspace top} Let $G$ act continuously on a compact metrizable space $X$. Let $Y$ be a closed $G$-invariant subset of $X$. Then ${\rm mdim}_\Sigma(Y)\le {\rm mdim}_\Sigma(X)$. \end{proposition} \begin{proof} Let $\rho$ be a compatible metric on $X$. Then $\rho$ restricts to a compatible metric $\rho'$ on $Y$. Let ${\mathcal U}$ be a finite open cover of $Y$. Then we can find a finite open cover ${\mathcal V}$ of $X$ such that ${\mathcal U}$ is the restriction of ${\mathcal V}$ to $Y$. Note that $Map(\rho', F, \delta, \sigma)\subseteq Map(\rho, F, \delta, \sigma)$ for any nonempty finite subset $F$ of $G$, any $\delta>0$, and any map $\sigma$ from $G$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$. Furthermore, the restriction of ${\mathcal V}^d|_{Map(\rho, F, \delta, \sigma)}$ to $Map(\rho', F, \delta, \sigma)$ is exactly ${\mathcal U}^d|_{Map(\rho', F, \delta, \sigma)}$. Thus ${\mathcal D}({\mathcal U}, \rho', F, \delta, \sigma)\le {\mathcal D}({\mathcal V}, \rho, F, \delta, \sigma)$. It follows that ${\mathcal D}_\Sigma({\mathcal U}, \rho')\le {\mathcal D}_\Sigma({\mathcal V}, \rho)\le {\rm mdim}_\Sigma(X)$.
Since ${\mathcal U}$ is an arbitrary finite open cover of $Y$, we get ${\rm mdim}_\Sigma(Y)\le {\rm mdim}_\Sigma(X)$. \end{proof} \begin{proposition} \label{P-product} Let $G$ act continuously on a compact metrizable space $X_n$ for each $1\le n< R$, where $R\in {\mathbb N}\cup \{\infty\}$. Consider the product action of $G$ on $X:=\prod_{1\le n<R}X_n$. Then ${\rm mdim}_\Sigma(X)\le \sum_{1\le n<R}{\rm mdim}_\Sigma(X_n)$. \end{proposition} \begin{proof} Let $\rho$ and $\rho^{(n)}$ be compatible metrics on $X$ and $X_n$ respectively. Denote by $\pi_n$ the projection of $X$ onto $X_n$. Let ${\mathcal U}$ be a finite open cover of $X$. Then there are $N\in {\mathbb N}$ with $N<R$ and finite open covers ${\mathcal V}_n$ of $X_n$ for $1\le n\le N$ such that $${\mathcal V}:=\bigvee^N_{n=1}\pi_n^{-1}({\mathcal V}_n)\succ {\mathcal U}.$$ Let $F$ be a nonempty finite subset of $G$ and $\delta>0$. By Lemma~\ref{L-factor1} we can find $\delta'>0$ such that for any map $\sigma$ from $G$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$ and any $\varphi\in Map(\rho, F, \delta', \sigma)$ one has $\pi_n\circ \varphi\in Map(\rho^{(n)}, F, \delta, \sigma)$ for all $1\le n\le N$. It follows that we have a continuous map $\Phi_n: Map(\rho, F, \delta', \sigma)\rightarrow Map(\rho^{(n)}, F, \delta, \sigma)$ sending $\varphi$ to $\pi_n\circ \varphi$ for each $1\le n\le N$. Note that $${\mathcal V}^d|_{Map(\rho, F, \delta', \sigma)}=\bigvee_{n=1}^N\Phi_n^{-1}({\mathcal V}_n^d|_{Map(\rho^{(n)}, F, \delta, \sigma)}).$$ For any finite open covers ${\mathcal U}_1$ and ${\mathcal U}_2$ of a compact metrizable space $Y$ one has ${\mathcal D}({\mathcal U}_1\vee {\mathcal U}_2)\le {\mathcal D}({\mathcal U}_1)+{\mathcal D}({\mathcal U}_2)$ \cite[Corollary 2.5]{LW}.
Thus \begin{align*} {\mathcal D}({\mathcal U}, \rho, F, \delta', \sigma)\le {\mathcal D}({\mathcal V}, \rho, F, \delta', \sigma)\le \sum_{n=1}^N {\mathcal D}({\mathcal V}_n, \rho^{(n)}, F, \delta, \sigma), \end{align*} and hence ${\mathcal D}_\Sigma({\mathcal U}, \rho)\le {\mathcal D}_\Sigma({\mathcal U}, \rho, F, \delta')\le \sum_{n=1}^N{\mathcal D}_\Sigma({\mathcal V}_n, \rho^{(n)}, F, \delta)$. Since $F$ and $\delta$ are arbitrary, we get $${\mathcal D}_\Sigma({\mathcal U}, \rho)\le \sum_{n=1}^N{\mathcal D}_\Sigma({\mathcal V}_n, \rho^{(n)})\le \sum_{n=1}^N{\rm mdim}_\Sigma(X_n)\le \sum_{1\le n <R}{\rm mdim}_\Sigma(X_n).$$ Therefore ${\rm mdim}_\Sigma(X)\le \sum_{1\le n<R}{\rm mdim}_\Sigma(X_n)$ as desired. \end{proof} If a property P for continuous $G$-actions on compact Hausdorff spaces is preserved by products, subsystems, and isomorphisms, and the trivial action of $G$ on the one-point set $\bullet$ has property P, then for any continuous $G$-action on a compact Hausdorff space $X$, there is a largest factor $Y$ of $X$ with property P \cite[Proposition 2.9.1]{GM}. We prove a similar fact for the category of actions on compact metrizable spaces, the proof of which is implicit in the proof of \cite[Proposition 6.12]{Lindenstrauss}. \begin{lemma} \label{L-existence of largest factor} Let $\Gamma$ be a topological group. Let P be a property for continuous $\Gamma$-actions on compact metrizable spaces. Suppose that P is preserved by countable products, subsystems, and isomorphisms, and that the trivial action of $\Gamma$ on the one-point set $\bullet$ has property P. Then any continuous $\Gamma$-action on a compact metrizable space $X$ has a largest factor $Y$ with property P, i.e.
for any factor $Z$ of $X$ with property P there is a unique ($\Gamma$-equivariant continuous surjective) map $Y\rightarrow Z$ making the following diagram \begin{eqnarray*} \xymatrix{ X \ar[rd] \ar[r] &Y \ar[d]\\ &Z} \end{eqnarray*} commute. \end{lemma} \begin{proof} For each factor $Z$ of $X$ with factor map $\pi_Z: X\rightarrow Z$, denote by $R_Z$ the closed subset $\{(x, y)\in X^2: \pi_Z(x)=\pi_Z(y)\}$ of $X^2$. Denote by $R$ the set $\bigcap_Z R_Z$ for $Z$ ranging over the factors of $X$ with property P. Since $X^2$ is compact metrizable, it has a countable base. Thus every subset $W$ of $X^2$, with the topology inherited from $X^2$, is a Lindel\"{o}f space in the sense that every open cover of $W$ has a countable subcover \cite[page 49]{Kelley}. Taking $W=X^2\setminus R$ and considering the open cover of $X^2\setminus R$ consisting of $X^2\setminus R_Z$ for all factors $Z$ of $X$ with property P, we find factors $Z_1, Z_2, \dots$ of $X$ with property P such that $\bigcap_{n=1}^\infty R_{Z_n}=R$. Consider the map $\pi: X\rightarrow \prod_{n=1}^\infty Z_n$ sending $x$ to $(\pi_{Z_n}(x))_{n=1}^\infty$. Then $Y:=\pi(X)$ is a closed $\Gamma$-invariant subset of $\prod_{n=1}^\infty Z_n$ and is a factor of $X$. Furthermore, $R_Y=R$. By the assumption on P and the choice of the $Z_n$, the $\Gamma$-action on $Y$ has property P. Since $R_Z\supseteq R=R_Y$ for every factor $Z$ of $X$ with property P, clearly $Y$ is the largest factor of $X$ with property P. \end{proof} By Remark~\ref{R-mean top dim2} and Lemma~\ref{L-factor1}, if $G$ acts continuously on a compact metrizable space $X$ with ${\rm mdim}_\Sigma(X)\ge 0$, then ${\rm mdim}_\Sigma(Y)\ge 0$ for every factor $Y$ of $X$.
From Lemma~\ref{L-existence of largest factor} and Propositions~\ref{P-subspace top} and \ref{P-product}, taking property P to be having sofic mean topological dimension at most $0$ and observing that ${\rm mdim}_\Sigma(\bullet)=0$ for the trivial action of $G$ on the one-point set $\bullet$, we obtain the following result, which was established by Lindenstrauss for countable amenable groups \cite[Proposition 6.12]{Lindenstrauss}. \begin{proposition} Let $G$ act continuously on a compact metrizable space $X$ with ${\rm mdim}_\Sigma(X)\ge 0$. Then $X$ has a largest factor $Y$ satisfying ${\rm mdim}_\Sigma(Y)=0$. \end{proposition} \section{Sofic Mean Topological Dimension for Amenable Groups} \label{S-top amenable} In this section we show that the sofic mean topological dimension extends the mean topological dimension for actions of countably infinite amenable groups: \begin{theorem} \label{T-top mean dim} Let a countably infinite (discrete) amenable group $G$ act continuously on a compact metrizable space $X$. Let $\Sigma$ be a sofic approximation sequence of $G$. Then $${\rm mdim}_\Sigma(X)={\rm mdim}(X).$$ \end{theorem} Theorem~\ref{T-top mean dim} follows directly from Lemmas~\ref{L-top mean dim lower bound} and \ref{L-top mean dim upper bound} below. We need the following Rokhlin lemma several times. Though for ergodic measure-preserving actions of ${\mathbb Z}$ one needs only one Rokhlin tower, for actions of general countable amenable groups one needs several Rokhlin towers. Here one should think of $[d]$ as equipped with the uniform distribution. The assumption about $\sigma$ says that it is approximately a free action of $G$ on $[d]$ (preserving the uniform distribution), while the conclusion says that ${\mathcal C}_1, \dots, {\mathcal C}_\ell$ are the bases for Rokhlin towers and that for each $k=1, \dots, \ell$, $\{\sigma(s){\mathcal C}_k\}_{s\in F_k}$ is a Rokhlin tower.
\begin{lemma}\cite[Lemma 4.6]{KerLi10amenable} \label{L-Rokhlin} Let $G$ be a countable amenable group. Let $0\le \tau<1$, $0<\eta<1$, $\delta>0$, and let $K$ be a nonempty finite subset of $G$. Then there are an $\ell\in {\mathbb N}$, nonempty finite subsets $F_1, \dots, F_\ell$ of $G$ with $|KF_k \setminus F_k|<\delta |F_k|$ and $|F_kK\setminus F_k|<\delta|F_k|$ for all $k=1, \dots, \ell$, a finite set $F\subseteq G$ containing $e_G$, and an $\eta'>0$ such that, for every $d\in {\mathbb N}$, every map $\sigma: G\rightarrow {\rm Sym}(d)$ for which there is a set ${\mathcal B}\subseteq [d]$ satisfying $|{\mathcal B}|\ge (1-\eta')d$ and \[ \sigma_{st}(a)=\sigma_s\sigma_t(a), \quad \sigma_s(a)\neq \sigma_{s'}(a), \quad \sigma_{e_G}(a)=a \] for all $a\in {\mathcal B}$ and $s, t, s'\in F$ with $s\neq s'$, and every set ${\mathcal W}\subseteq [d]$ with $|{\mathcal W}|\ge (1-\tau)d$, there exist ${\mathcal C}_1, \dots, {\mathcal C}_\ell\subseteq {\mathcal W}$ such that \begin{enumerate} \item for every $k=1, \dots, \ell$, the map $(s, c)\mapsto \sigma_s(c)$ from $F_k\times {\mathcal C}_k$ to $\sigma(F_k){\mathcal C}_k$ is bijective, \item the sets $\sigma(F_1){\mathcal C}_1, \dots, \sigma(F_\ell){\mathcal C}_\ell$ are pairwise disjoint and $|\bigcup_{k=1}^\ell \sigma(F_k){\mathcal C}_k|\ge (1-\tau-\eta)d$. \end{enumerate} \end{lemma} \begin{remark} \label{R-Rokhlin} Let $G$ be a finite group. Note that the only nonempty finite subset $F$ of $G$ satisfying $|GF\setminus F|<\frac{1}{|G|}|F|$ is $G$.
From Lemma~\ref{L-Rokhlin} one deduces the following: Let $0\le \tau<1$ and $0<\eta<1$. Then there is an $\eta'>0$ such that, for every $d\in {\mathbb N}$, every map $\sigma: G\rightarrow {\rm Sym}(d)$ for which there is a set ${\mathcal B}\subseteq [d]$ satisfying $|{\mathcal B}|\ge (1-\eta')d$ and \[ \sigma_{st}(a)=\sigma_s\sigma_t(a), \quad \sigma_s(a)\neq \sigma_{s'}(a), \quad \sigma_{e_G}(a)=a \] for all $a\in {\mathcal B}$ and $s, t, s'\in G$ with $s\neq s'$, and every set ${\mathcal W}\subseteq [d]$ with $|{\mathcal W}|\ge (1-\tau)d$, there exists ${\mathcal C}\subseteq {\mathcal W}$ such that the map $(s, c)\mapsto \sigma_s(c)$ from $G\times {\mathcal C}$ to $\sigma(G){\mathcal C}$ is bijective and $|\sigma(G){\mathcal C}|\ge (1-\tau-\eta)d$. \end{remark} Combined with Lemma~\ref{L-Rokhlin}, the following lemma tells us how to construct elements in $Map(\rho, F, \delta, \sigma)$. \begin{lemma} \label{L-construct map} Let $\alpha$ be a continuous action of a countable group $G$ on a compact metrizable space $X$. Let $\rho$ be a continuous pseudometric on $X$. Let $\delta, \delta'>0$ with $\sqrt{\delta'}\,{\rm diam}(X, \rho)<\delta/2$. Let $\ell\in {\mathbb N}$ and let $F, F_1, \dots, F_\ell$ be nonempty finite subsets of $G$ with $|FF_k\setminus F_k|<\delta'|F_k|$ for all $k=1, \dots, \ell$. Let $\sigma$ be a map $G\rightarrow {\rm Sym}(d)$ for some $d\in {\mathbb N}$. Denote by ${\mathcal W}$ the set of elements $a$ in $[d]$ satisfying $\sigma_t\sigma_s(a)=\sigma_{ts}(a)$ for all $t\in F$ and $s\in \bigcup_{k=1}^\ell F_k$.
Suppose that there are ${\mathcal C}_1, \dots, {\mathcal C}_\ell\subseteq {\mathcal W}$ satisfying the following: \begin{enumerate} \item for every $k=1, \dots, \ell$, the map $(s, c)\mapsto \sigma_s(c)$ from $F_k\times {\mathcal C}_k$ to $\sigma(F_k){\mathcal C}_k$ is bijective, \item the sets $\sigma(F_1){\mathcal C}_1, \dots, \sigma(F_\ell){\mathcal C}_\ell$ are pairwise disjoint and $|\bigcup_{k=1}^\ell\sigma(F_k){\mathcal C}_k|\ge (1-\delta')d$. \end{enumerate} For any $h=(h_k)_{k=1}^\ell\in \prod_{k=1}^\ell X^{{\mathcal C}_k}$, if $\varphi: [d]\rightarrow X$ satisfies $$ \varphi(sc)=s(h_k(c))$$ for all $k\in \{1, \dots, \ell\}$, $c\in {\mathcal C}_k$, and $s\in F_k$, then $\varphi\in Map(\rho, F, \delta, \sigma)$. \end{lemma} \begin{proof} Note that if $t\in F$, $k\in \{1, \dots, \ell\}$, $s\in F_k$, $c\in {\mathcal C}_k$, and $ts\in F_k$, then $\sigma_t\sigma_s(c)=\sigma_{ts}(c)$, and hence $\alpha_t\circ\varphi(sc)=\varphi\circ \sigma_t(sc)$. For every $t\in F$, one has \begin{align*} (\rho_2(\alpha_t\circ \varphi, \varphi\circ \sigma_t))^2&\le \frac{d-|\bigcup_{k=1}^\ell\sigma(F_k\cap t^{-1}F_k){\mathcal C}_k|}{d}({\rm diam}(X, \rho))^2\\ &= \frac{d-|\bigcup_{k=1}^\ell\sigma(F_k){\mathcal C}_k|+|\bigcup_{k=1}^\ell\sigma(F_k\setminus t^{-1}F_k){\mathcal C}_k|}{d}({\rm diam}(X, \rho))^2\\ &\le \frac{\delta'd+\delta'|\bigcup_{k=1}^\ell\sigma(F_k){\mathcal C}_k|}{d}({\rm diam}(X, \rho))^2\\ &\le 2\delta'({\rm diam}(X, \rho))^2<\delta^2. \end{align*} Thus $\varphi \in Map(\rho, F, \delta, \sigma)$. \end{proof} We first show that ${\rm mdim}_\Sigma (X)\ge {\rm mdim}(X)$ for infinite $G$. This amounts to showing that $Map(\rho, F, \delta, \sigma)$ is large enough.
When $[d]=H$ for some approximately left invariant finite subset $H$ of $G$ and $\sigma_s\in {\rm Sym}(d)$ is essentially left multiplication by $s$ for all $s\in F$, one can embed $X$ into $Map(\rho, F, \delta, \sigma)$ by sending $x$ to $(sx)_{s\in H}$. In general, $[d]$ may fail to be of the form $H$, but Lemma~\ref{L-Rokhlin} tells us that $[d]$ is roughly the disjoint union of some such sets $H_j$ for $j\in J$. Then we perform such an embedding on each $H_j$ and hence embed $X^J$ into $Map(\rho, F, \delta, \sigma)$. \begin{lemma} \label{L-top mean dim lower bound} Let a countably infinite amenable group $G$ act continuously on a compact metrizable space $X$. Then for any finite open cover ${\mathcal U}$ of $X$ we have ${\mathcal D}_\Sigma({\mathcal U})\ge {\rm mdim}({\mathcal U})$. In particular, ${\rm mdim}_\Sigma (X)\ge {\rm mdim}(X)$. \end{lemma} \begin{proof} It suffices to show that ${\mathcal D}_\Sigma({\mathcal U})\ge {\rm mdim}({\mathcal U})-2\theta$ for every $\theta>0$. Fix a compatible metric $\rho$ on $X$. Let $F$ be a nonempty finite subset of $G$ and $\delta > 0$. Let $\sigma$ be a map from $G$ to ${\rm Sym}(d)$ for some $d\in{\mathbb N}$. Now it suffices to show that if $\sigma$ is a good enough sofic approximation then \begin{align*} \frac{{\mathcal D}({\mathcal U}, \rho, F, \delta, \sigma)}{d} \ge {\rm mdim}({\mathcal U}) - 2\theta . \end{align*} Take a finite subset $K$ of $G$ containing $F$ and $\varepsilon>0$ such that for any nonempty finite subset $F'$ of $G$ with $|KF'\setminus F'|<\varepsilon |F'|$ one has $$ \frac{{\mathcal D}({\mathcal U}^{F'})}{|F'|}\ge {\rm mdim}({\mathcal U})-\theta.$$ Take $0<\delta'<1$ small enough that $({\rm mdim}({\mathcal U})-\theta)(1-\delta')\ge {\rm mdim}({\mathcal U}) - 2\theta$ and $\sqrt{\delta'}\,{\rm diam}(X, \rho)<\delta/2$.
By Lemma~\ref{L-Rokhlin} there are an $\ell\in {\mathbb N}$ and nonempty finite subsets $F_1, \dots, F_\ell$ of $G$ satisfying $|KF_k\setminus F_k|<\min(\varepsilon, \delta') |F_k|$ for all $k=1, \dots, \ell$ such that for every map $\sigma : G\to{\rm Sym}(d)$ for some $d\in {\mathbb N}$ which is a good enough sofic approximation for $G$ and every ${\mathcal W}\subseteq [d]$ with $|{\mathcal W}|\ge (1-\delta'/2)d$ there exist ${\mathcal C}_1, \dots, {\mathcal C}_\ell\subseteq {\mathcal W}$ satisfying the following: \begin{enumerate} \item for every $k=1, \dots, \ell$, the map $(s, c)\mapsto \sigma_s(c)$ from $F_k\times {\mathcal C}_k$ to $\sigma(F_k){\mathcal C}_k$ is bijective, \item the sets $\sigma(F_1){\mathcal C}_1, \dots, \sigma(F_\ell){\mathcal C}_\ell$ are pairwise disjoint and $|\bigcup_{k=1}^\ell\sigma(F_k){\mathcal C}_k|\ge (1-\delta')d$. \end{enumerate} Let $\sigma: G\rightarrow {\rm Sym}(d)$ for some $d\in {\mathbb N}$ be a good enough sofic approximation for $G$ such that $|{\mathcal W}|\ge (1-\delta'/2)d$ for $$ {\mathcal W}:=\{a\in [d]: \sigma_t\sigma_s(a)=\sigma_{ts}(a) \mbox{ for all } t\in F, s\in \bigcup_{k=1}^\ell F_k\}.$$ Then we have ${\mathcal C}_1, \dots, {\mathcal C}_\ell$ as above. Since $G$ is infinite, there exist maps $\psi_k: {\mathcal C}_k\rightarrow G$ for $k=1, \dots, \ell$ such that the map $\Psi$ from $\bigsqcup_{k=1}^\ell F_k\times {\mathcal C}_k$ to $G$ sending $(s, c)\in F_k\times {\mathcal C}_k$ to $s\psi_k(c)$ is injective. Denote by $\tilde{F}$ the range of $\Psi$. Note that $|K\tilde{F}\setminus \tilde{F}|<\varepsilon |\tilde{F}|$ because for every $k$, $|KF_k\setminus F_k|<\varepsilon |F_k|$. Thus $$ \frac{{\mathcal D}({\mathcal U}^{\tilde{F}})}{|\tilde{F}|}\ge {\rm mdim}({\mathcal U})-\theta.$$ Pick $x_0\in X$.
For each $x\in X$ define a map $\varphi_x: [d]\rightarrow X$ by $\varphi_x(a)=x_0$ for all $a\in [d]\setminus \bigcup_{k=1}^\ell\sigma(F_k){\mathcal C}_k$, and $$ \varphi_x(sc)=s\psi_k(c)x$$ for all $k\in \{1, \dots, \ell\}$, $c\in {\mathcal C}_k$, and $s\in F_k$. By Lemma~\ref{L-construct map} one has $\varphi_x \in Map(\rho, F, \delta, \sigma)$. Note that the map $\Phi$ from $X$ to $Map(\rho, F, \delta, \sigma)$ sending $x$ to $\varphi_x$ is continuous, and $\Phi^{-1}({\mathcal U}^d|_{Map(\rho, F, \delta, \sigma)})={\mathcal U}^{\tilde{F}}$. Thus ${\mathcal D}({\mathcal U}, \rho, F, \delta, \sigma)\ge {\mathcal D}({\mathcal U}^{\tilde{F}})$. Therefore \begin{align*} \frac{{\mathcal D}({\mathcal U}, \rho, F, \delta, \sigma)}{d} \ge \frac{{\mathcal D}({\mathcal U}^{\tilde{F}})}{|\tilde{F}|}\cdot \frac{|\tilde{F}|}{d}\ge ({\rm mdim}({\mathcal U})-\theta)(1-\delta')\ge {\rm mdim}({\mathcal U}) - 2\theta, \end{align*} as desired. \end{proof} Let ${\mathcal U}$ be a finite open cover of a compact metrizable space $X$. A continuous map $f$ from $X$ into another compact metrizable space $Y$ is said to be {\it ${\mathcal U}$-compatible} if for each $y\in Y$, the set $f^{-1}(y)$ is contained in some $U\in {\mathcal U}$ \cite[Definition 2.2 and Proposition 2.3]{LW}. We need the following fact: \begin{lemma}\cite[Proposition 2.4]{LW} \label{L-compatible map} Let ${\mathcal U}$ be a finite open cover of a compact metrizable space $X$, and $k\ge 0$. Then ${\mathcal D}({\mathcal U})\le k$ if and only if there is a ${\mathcal U}$-compatible continuous map $f: X\rightarrow Y$ for some compact metrizable space $Y$ with dimension $k$. \end{lemma} Next we show ${\rm mdim}_\Sigma(X)\le {\rm mdim}(X)$.
We first use Lemma~\ref{L-Rokhlin} to decompose $[d]$ into the disjoint union of some approximately left invariant finite subsets $\{H_j\}_{j\in J}$ of $G$ and a small portion $[d]\setminus \bigcup_jH_j$. Take a continuous ${\mathcal U}^{H_j}$-compatible map $X\rightarrow Y_j$ with $\dim(Y_j)\le {\mathcal D}({\mathcal U}^{H_j})$ for each $j\in J$. Anticipating that each element of $Map(\rho, F, \delta, \sigma)$ is essentially of the form $(sx_j)_{s\in H_j}$ on $H_j$ for some $x_j\in X$, for each $j\in J$, we map $Map(\rho, F, \delta, \sigma)$ to $\prod_jY_j$. To take care of the coordinates on $[d]\setminus \bigcup_jH_j$, we also take a continuous ${\mathcal U}$-compatible map $X\rightarrow Z$ with $\dim(Z)\le {\mathcal D}({\mathcal U})$, and map $Map(\rho, F, \delta, \sigma)$ to $\prod_{a\in [d]\setminus \bigcup_jH_j}Z$. These two maps combined are ${\mathcal U}^d|_{Map(\rho, F, \delta, \sigma)}$-compatible on nice elements, namely those that are almost of the form $(sx_j)_{s\in H_j}$ on $H_j$ for some $x_j\in X$, for each $j\in J$. The points of $Map(\rho, F, \delta, \sigma)$ that are not so nice can be bad only on a small portion of $[d]$. We then use an auxiliary map $h$ to shrink $X$ to one point at all the good places, making the bad parts live in a low-dimensional space (relative to $d$). \begin{lemma} \label{L-top mean dim upper bound} Let a countable amenable group $G$ act continuously on a compact metrizable space $X$. Then ${\rm mdim}_\Sigma(X)\le {\rm mdim}(X)$. \end{lemma} \begin{proof} Fix a compatible metric $\rho$ on $X$. Let ${\mathcal U}$ be a finite open cover of $X$. It suffices to show that ${\mathcal D}_\Sigma({\mathcal U})\le {\rm mdim}(X)$. Take a finite open cover ${\mathcal V}$ of $X$ such that for every $V\in {\mathcal V}$, one has $\overline{V}\subseteq U$ for some $U\in {\mathcal U}$.
Then it suffices to show ${\mathcal D}_\Sigma({\mathcal U})\le {\rm mdim}({\mathcal V})+3\theta$ for every $\theta>0$. We can find $\eta>0$ such that for every $V\in {\mathcal V}$, one has $B(V, \eta)=\{x\in X: \rho(x, V)<\eta\}\subseteq U$ for some $U\in {\mathcal U}$. Take a nonempty finite subset $K$ of $G$ and $\varepsilon>0$ such that for any nonempty finite subset $F'$ of $G$ with $|KF'\setminus F'|<\varepsilon |F'|$ one has $$ \frac{{\mathcal D}({\mathcal V}^{F'})}{|F'|}\le {\rm mdim}({\mathcal V})+\theta.$$ Take $\tau>0$ with $\tau {\mathcal D}({\mathcal U})\le \theta$. By Lemma~\ref{L-Rokhlin} there are an $\ell\in {\mathbb N}$ and nonempty finite subsets $F_1, \dots, F_\ell$ of $G$ satisfying $|KF_k\setminus F_k|<\varepsilon |F_k|$ for all $k=1, \dots, \ell$ such that for every map $\sigma : G\to{\rm Sym}(d)$ for some $d\in {\mathbb N}$ which is a good enough sofic approximation for $G$ and every ${\mathcal W}\subseteq [d]$ with $|{\mathcal W}|\ge (1-\tau/2)d$ there exist ${\mathcal C}_1, \dots, {\mathcal C}_\ell\subseteq {\mathcal W}$ satisfying the following: \begin{enumerate} \item for every $k=1, \dots, \ell$, the map $(s, c)\mapsto \sigma_s(c)$ from $F_k\times {\mathcal C}_k$ to $\sigma(F_k){\mathcal C}_k$ is bijective, \item the sets $\sigma(F_1){\mathcal C}_1, \dots, \sigma(F_\ell){\mathcal C}_\ell$ are pairwise disjoint and $|\bigcup_{k=1}^\ell \sigma(F_k){\mathcal C}_k|\ge (1-\tau)d$. \end{enumerate} Set $F=\bigcup_{k=1}^\ell F_k^{-1}$. Take $\kappa>0$ such that for any $x, y\in X$ with $\rho(x, y)\le \kappa$ one has $\rho(s^{-1}x, s^{-1}y)<\eta$ for all $s\in F$. Take $\delta>0$ with $\delta^{1/2}<\kappa$ and $\delta |{\mathcal U}| |F|\le \theta$. Let $\sigma$ be a map from $G$ to ${\rm Sym}(d)$ for some $d\in{\mathbb N}$.
Now it suffices to show that if $\sigma$ is a good enough sofic approximation then \begin{align*} \frac{{\mathcal D}({\mathcal U}, \rho, F, \delta, \sigma)}{d} \le {\rm mdim}({\mathcal V}) + 3\theta . \end{align*} Denote by ${\mathcal W}$ the subset of $[d]$ consisting of $a$ satisfying $\sigma_s\sigma_{s^{-1}}(a)=\sigma_{e_G}(a)=a$ for all $s\in F$. Assuming that $\sigma$ is a good enough sofic approximation, we have $|{\mathcal W}|\ge (1-\tau/2)d$ and can find ${\mathcal C}_1, \dots, {\mathcal C}_\ell$ as above. Set ${\mathcal Z}=[d]\setminus \bigcup_{k=1}^\ell \sigma(F_k){\mathcal C}_k$. Then $|{\mathcal Z}|\le \tau d$. For every $\varphi\in Map(\rho, F, \delta, \sigma)$, by Lemma~\ref{L-almost} the set $\Lambda_{\varphi}$ of all $a\in [d]$ satisfying \[ \rho(\varphi (sa), s\varphi(a)) \le \delta^{1/2} \] for all $s\in F$ has cardinality at least $(1-|F|\delta) d$. Take a partition of unity $\{\zeta_U\}_{U\in {\mathcal U}}$ for $X$ subordinate to ${\mathcal U}$. That is, each $\zeta_U$ is a continuous function $X\rightarrow [0, 1]$ with support contained in $U$, and $$ \sum_{U\in {\mathcal U}}\zeta_U=1.$$ Define a continuous map $\overrightarrow{\xi}: X \rightarrow [0, 1]^{{\mathcal U}}$ by $\overrightarrow{\xi}(x)_U=\zeta_U(x)$ for $x\in X$ and $U \in {\mathcal U}$. Consider the continuous map $\overrightarrow{h}: Map(\rho, F, \delta, \sigma) \rightarrow ([0, 1]^{{\mathcal U}})^{[d]}$ defined by $$\overrightarrow{h}(\varphi)_a=\overrightarrow{\xi}(\varphi(a))\max(\max_{s\in F}\rho(s \varphi(a), \varphi(sa))-\kappa, 0)$$ for $\varphi \in Map(\rho, F, \delta, \sigma)$ and $a\in [d]$. Denote by $\nu$ the point of $[0, 1]^{{\mathcal U}}$ having all coordinates $0$. Set $X_0$ to be the subset of $([0, 1]^{{\mathcal U}})^{[d]}$ consisting of elements whose coordinates are equal to $\nu$ at at least $(1-|F|\delta) d$ elements of $[d]$.
For each $\varphi\in Map(\rho, F, \delta, \sigma)$, note that $\overrightarrow{h}(\varphi)_a=\nu$ for all $a\in \Lambda_\varphi$ by our choice of $\delta$, and hence $\overrightarrow{h}(\varphi)\in X_0$. Thus we may think of $\overrightarrow{h}$ as a map from $Map(\rho, F, \delta, \sigma)$ into $X_0$. Since the union of finitely many closed subsets of dimension at most $m$ has dimension at most $m$ \cite[page 30 and Theorem V.8]{HW} and $\delta>0$ was chosen such that $|{\mathcal U}| |F| \delta\le \theta$, we get $\dim(X_0)\le |{\mathcal U}| |F|\delta d\le \theta d$. For each $1\le k\le \ell$, by Lemma~\ref{L-compatible map} we can find a compact metrizable space $Y_k$ with $\dim(Y_k)\le {\mathcal D}({\mathcal V}^{F_k})$ and a ${\mathcal V}^{F_k}$-compatible continuous map $f_k:X\rightarrow Y_k$. By Lemma~\ref{L-compatible map} we can also find a compact metrizable space $Z$ with $\dim(Z)\le {\mathcal D}({\mathcal U})$ and a ${\mathcal U}$-compatible continuous map $g:X\rightarrow Z$. Now define a continuous map $\Psi: Map(\rho, F, \delta, \sigma)\rightarrow X_0\times (\prod_{k=1}^\ell\prod_{c\in {\mathcal C}_k}Y_k)\times (\prod_{a\in {\mathcal Z}}Z)$ as follows. For $\varphi\in Map(\rho, F, \delta, \sigma)$, the coordinate of $\Psi(\varphi)$ in $X_0$ is $\overrightarrow{h}(\varphi)$, the coordinate in $Y_k$ for $1\le k\le \ell$ and $c\in {\mathcal C}_k$ is $f_k(\varphi(c))$, and the coordinate in $Z$ for $a\in {\mathcal Z}$ is $g(\varphi(a))$. We claim that $\Psi$ is ${\mathcal U}^d|_{Map(\rho, F, \delta, \sigma)}$-compatible. Let $w\in X_0\times (\prod_{k=1}^\ell\prod_{c\in {\mathcal C}_k}Y_k)\times (\prod_{a\in {\mathcal Z}}Z)$. We need to show that for each $a\in [d]$ there is some $U\in {\mathcal U}$, depending only on $w$ and $a$, such that $\varphi(a)\in U$ for every $\varphi \in \Psi^{-1}(w)$.
We write the coordinates of $w$ in $X_0$, $\prod_{k=1}^\ell\prod_{c\in {\mathcal C}_k}Y_k$, and $\prod_{a\in {\mathcal Z}}Z$ as $w^1$, $w^2$, and $w^3$ respectively. For each $a\in {\mathcal Z}$, since $g$ is ${\mathcal U}$-compatible, one has $g^{-1}(w^3_a)\subseteq U_{w^3_a}$ for some $U_{w^3_a}\in {\mathcal U}$. Then $\varphi(a)\in U_{w^3_a}$ for every $\varphi \in \Psi^{-1}(w)$ and $a\in {\mathcal Z}$. For every $1\le k\le \ell$ and $c\in {\mathcal C}_k$, since $f_k$ is ${\mathcal V}^{F_k}$-compatible, one has $f_k^{-1}(w^2_{k, c})\subseteq \bigcap_{s^{-1}\in F_k}sV_{k, c, s}$ for some $V_{k, c, s}\in {\mathcal V}$ for every $s^{-1}\in F_k$. By the choice of $\eta$, $B(V_{k, c, s}, \eta)$ is contained in some $U_{k, c, s}\in {\mathcal U}$. For every $a\in [d]\setminus {\mathcal Z}$, we distinguish the two cases $w^1_a\neq \nu$ and $w^1_a=\nu$. If $w^1_a\neq \nu$, then $(w^1_a)_U\neq 0$ for some $U\in {\mathcal U}$, and then for $\varphi \in \Psi^{-1}(w)$ one has $\zeta_U(\varphi(a))>0$ and hence $\varphi(a)\in U$. Suppose that $w^1_a=\nu$. Say $a=\sigma(s^{-1})c$ for some $1\le k\le \ell$, $s^{-1}\in F_k$, and $c\in {\mathcal C}_k$. Let $\varphi \in \Psi^{-1}(w)$. Since $c\in {\mathcal C}_k\subseteq {\mathcal W}$ and $s\in F$, one has $sa=\sigma_{s}\sigma_{s^{-1}}(c)=c$. As $\{\zeta_U\}_{U\in {\mathcal U}}$ is a partition of unity of $X$, $\overrightarrow{\xi}(\varphi(a))\neq \nu$. But $\overrightarrow{h}(\varphi)_a=w^1_a=\nu$. Thus $\max_{s'\in F}\rho(s' \varphi(a), \varphi(s'a))\le \kappa$. In particular, one has $\rho(s\varphi(a), \varphi(c))=\rho(s\varphi(a), \varphi(sa))\le \kappa$. From our choice of $\kappa$, one gets $\rho(\varphi(a), s^{-1}\varphi(c))<\eta$. Since $f_k(\varphi(c))=w^2_{k, c}$, we have $\varphi(c)\in sV_{k, c, s}$ and hence $s^{-1}\varphi(c)\in V_{k, c, s}$. Therefore $\varphi(a)\in U_{k, c, s}$. This proves the claim.
From Lemma~\ref{L-compatible map} we get $$ {\mathcal D}({\mathcal U}, \rho, F, \delta, \sigma) \le \dim\big(X_0\times (\prod_{k=1}^\ell\prod_{c\in {\mathcal C}_k}Y_k)\times (\prod_{a\in {\mathcal Z}}Z)\big).$$ Since the dimension of the product of two compact metrizable spaces is at most the sum of the dimensions of the factors \cite[page 33 and Theorem V.8]{HW}, we have \begin{align*} \dim\big(X_0\times (\prod_{k=1}^\ell\prod_{c\in {\mathcal C}_k}Y_k)\times (\prod_{a\in {\mathcal Z}}Z)\big) &\le \dim(X_0)+\sum_{k=1}^\ell |{\mathcal C}_k|\dim(Y_k)+|{\mathcal Z}|\dim(Z)\\ &\le \theta d+\sum_{k=1}^\ell |{\mathcal C}_k|{\mathcal D}({\mathcal V}^{F_k}) +|{\mathcal Z}|{\mathcal D}({\mathcal U})\\ &\le \theta d+\sum_{k=1}^\ell |{\mathcal C}_k| |F_k|({\rm mdim}({\mathcal V})+\theta) +\tau d{\mathcal D}({\mathcal U})\\ &\le \theta d+d({\rm mdim}({\mathcal V})+\theta) +\theta d\\ &= d({\rm mdim}({\mathcal V})+3\theta). \end{align*} Therefore ${\mathcal D}({\mathcal U}, \rho, F, \delta, \sigma) \le d({\rm mdim}({\mathcal V})+3\theta)$ as desired. \end{proof} \begin{remark} \label{R-sofic top not equal to top for finite} Theorem~\ref{T-top mean dim} fails when $G$ is finite. Indeed, when a finite group $G$ acts continuously on a compact metrizable space $X$, one has ${\rm mdim}(X)=\frac{1}{|G|}\dim(X)$. There are compact metrizable finite-dimensional spaces $X$ satisfying $\dim(X^2)<2\dim(X)$ (see \cite{Boltyanskii}). For such $X$, Lemma~\ref{L-top mean dim upper bound finite} below implies that ${\rm mdim}_\Sigma(X)<{\rm mdim}(X)$. \end{remark} If $X$ is a compact metrizable space with finite dimension, then for any $n, m\in {\mathbb N}$ one has $\dim(X^n\times X^m)\le \dim(X^n)+\dim(X^m)$ \cite[page 33 and Theorem V.8]{HW}, and hence $\frac{\dim(X^n)}{n}\to \inf_{m\in {\mathbb N}}\frac{\dim(X^m)}{m}$ as $n\to \infty$.
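This convergence is an instance of Fekete's subadditivity lemma applied to $a_n:=\dim(X^n)$ (with $a_0:=0$); a sketch of the standard argument:

```latex
% Fekete's lemma: if $(a_n)_{n\ge 0}$ is nonnegative and satisfies
% $a_{n+m}\le a_n+a_m$, then $\lim_n a_n/n$ exists and equals $\inf_m a_m/m$.
Fix $m\in {\mathbb N}$ and write $n=qm+r$ with $0\le r<m$. Subadditivity gives
$a_n\le q\,a_m+a_r$, and since $n\ge qm$,
\[
\frac{a_n}{n}\le \frac{q\,a_m}{qm}+\frac{a_r}{n}\longrightarrow \frac{a_m}{m}
\quad \text{as } n\to \infty.
\]
Hence $\varlimsup_n \frac{a_n}{n}\le \inf_m \frac{a_m}{m}\le \varliminf_n \frac{a_n}{n}$,
so the limit exists and equals $\inf_m \frac{a_m}{m}$.
```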
\begin{lemma} \label{L-top mean dim upper bound finite} Let a finite group $G$ act continuously on a compact metrizable finite-dimensional space $X$. Then ${\rm mdim}_\Sigma(X)\le \frac{1}{|G|}\inf_{m\in {\mathbb N}}\frac{\dim(X^m)}{m}$. \end{lemma} \begin{proof} The proof is similar to that of Lemma~\ref{L-top mean dim upper bound}. Fix a compatible metric $\rho$ on $X$. Set $\lambda=\frac{1}{|G|}\inf_{m\in {\mathbb N}}\frac{\dim(X^m)}{m}$. Let ${\mathcal U}$ be a finite open cover of $X$ and $\theta>0$. It suffices to show that ${\mathcal D}_\Sigma({\mathcal U})\le \lambda+3\theta$. We can find $\eta>0$ such that for every $y\in X$, one has $\{x\in X: \rho(x, y)<\eta\}\subseteq U$ for some $U\in {\mathcal U}$. Take $M>0$ such that $\frac{\dim(X^m)}{m}\le (\lambda+\theta) |G|$ for all $m\ge M$. Take $\tau>0$ with $\tau \dim(X)\le \theta$. By Remark~\ref{R-Rokhlin}, for every map $\sigma : G\to{\rm Sym}(d)$ for some $d\in {\mathbb N}$ which is a good enough sofic approximation for $G$ and every ${\mathcal W}\subseteq [d]$ with $|{\mathcal W}|\ge (1-\tau/2)d$ there exists ${\mathcal C}\subseteq {\mathcal W}$ such that the map $(s, c)\mapsto \sigma_s(c)$ from $G\times {\mathcal C}$ to $\sigma(G){\mathcal C}$ is bijective and $|\sigma(G){\mathcal C}|\ge (1-\tau)d$. Take $\kappa>0$ such that for any $x, y\in X$ with $\rho(x, y)\le \kappa$ one has $\rho(sx, sy)<\eta$ for all $s\in G$. Take $\delta>0$ with $\delta^{1/2}<\kappa$ and $\delta |{\mathcal U}| |G|\le \theta$. Let $\sigma$ be a map from $G$ to ${\rm Sym}(d)$ for some $d\in{\mathbb N}$. Now it suffices to show that if $\sigma$ is a good enough sofic approximation and $d$ is sufficiently large then \begin{align*} \frac{{\mathcal D}({\mathcal U}, \rho, G, \delta, \sigma)}{d} \le \lambda + 3\theta .
\end{align*} Denote by ${\mathcal W}$ the subset of $[d]$ consisting of $a$ satisfying $\sigma_s\sigma_{s^{-1}}(a)=\sigma_{e_G}(a)=a$ for all $s\in G$. Assuming that $\sigma$ is a good enough sofic approximation, we have $|{\mathcal W}|\ge (1-\tau/2)d$ and can find ${\mathcal C}$ as above. Set ${\mathcal Z}=[d]\setminus \sigma(G){\mathcal C}$. Then $|{\mathcal Z}|\le \tau d$. For every $\varphi\in Map(\rho, G, \delta, \sigma)$, by Lemma~\ref{L-almost} the set $\Lambda_{\varphi}$ of all $a\in [d]$ satisfying \[ \rho(\varphi (sa), s\varphi(a)) \le \delta^{1/2} \] for all $s\in G$ has cardinality at least $(1-|G|\delta) d$. Take a partition of unity $\{\zeta_U\}_{U\in {\mathcal U}}$ for $X$ subordinate to ${\mathcal U}$. That is, each $\zeta_U$ is a continuous function $X\rightarrow [0, 1]$ with support contained in $U$, and $$ \sum_{U\in {\mathcal U}}\zeta_U=1.$$ Define a continuous map $\overrightarrow{\xi}: X \rightarrow [0, 1]^{{\mathcal U}}$ by $\overrightarrow{\xi}(x)_U=\zeta_U(x)$ for $x\in X$ and $U \in {\mathcal U}$. Consider the continuous map $\overrightarrow{h}: Map(\rho, G, \delta, \sigma) \rightarrow ([0, 1]^{{\mathcal U}})^{[d]}$ defined by $$\overrightarrow{h}(\varphi)_a=\overrightarrow{\xi}(\varphi(a))\max(\max_{s\in G}\rho(s \varphi(a), \varphi(sa))-\kappa, 0)$$ for $\varphi \in Map(\rho, G, \delta, \sigma)$ and $a\in [d]$. Denote by $\nu$ the point of $[0, 1]^{{\mathcal U}}$ having all coordinates $0$. Set $X_0$ to be the subset of $([0, 1]^{{\mathcal U}})^{[d]}$ consisting of elements whose coordinates are equal to $\nu$ at at least $(1-|G|\delta) d$ elements of $[d]$. For each $\varphi\in Map(\rho, G, \delta, \sigma)$, note that $\overrightarrow{h}(\varphi)_a=\nu$ for all $a\in \Lambda_\varphi$ by our choice of $\delta$, and hence $\overrightarrow{h}(\varphi)\in X_0$.
Thus we may think of $\overrightarrow{h}$ as a map from $Map(\rho, G, \delta, \sigma)$ into $X_0$. Since the union of finitely many closed subsets of dimension at most $m$ has dimension at most $m$ \cite[page 30]{HW} and $\delta>0$ was chosen such that $|{\mathcal U}| |G|\delta\le \theta$, we get $\dim(X_0)\le |{\mathcal U}| |G|\delta d\le \theta d$. Now define a continuous map $\Psi: Map(\rho, G, \delta, \sigma)\rightarrow X_0\times (\prod_{c\in {\mathcal C}}X)\times (\prod_{a\in {\mathcal Z}}X)$ as follows. For $\varphi\in Map(\rho, G, \delta, \sigma)$, the coordinate of $\Psi(\varphi)$ in $X_0$ is $\overrightarrow{h}(\varphi)$, the coordinate in $X$ for $c\in {\mathcal C}$ is $\varphi(c)$, and the coordinate in $X$ for $a\in {\mathcal Z}$ is $\varphi(a)$. We claim that $\Psi$ is ${\mathcal U}^d|_{Map(\rho, G, \delta, \sigma)}$-compatible. Let $w\in X_0\times (\prod_{c\in {\mathcal C}}X)\times (\prod_{a\in {\mathcal Z}}X)$. We need to show that for each $a\in [d]$ there is some $U\in {\mathcal U}$, depending only on $w$ and $a$, such that $\varphi(a)\in U$ for every $\varphi \in \Psi^{-1}(w)$. We write the coordinates of $w$ in $X_0$, $\prod_{c\in {\mathcal C}}X$, and $\prod_{a\in {\mathcal Z}}X$ as $w^1$, $w^2$, and $w^3$ respectively. For each $a\in {\mathcal Z}$, one has $w^3_a\in U_{w^3_a}$ for some $U_{w^3_a}\in {\mathcal U}$. Then $\varphi(a)\in U_{w^3_a}$ for every $\varphi \in \Psi^{-1}(w)$ and $a\in {\mathcal Z}$. For every $a\in [d]\setminus {\mathcal Z}$, we distinguish the two cases $w^1_a\neq \nu$ and $w^1_a=\nu$. If $w^1_a\neq \nu$, then $(w^1_a)_U\neq 0$ for some $U\in {\mathcal U}$, and then for $\varphi \in \Psi^{-1}(w)$ one has $\zeta_U(\varphi(a))>0$ and hence $\varphi(a)\in U$. Suppose that $w^1_a=\nu$. Say $a=\sigma(s^{-1})c$ for some $s^{-1}\in G$ and $c\in {\mathcal C}$. Then $\{x\in X: \rho(x, s^{-1}w^2_c)<\eta\}\subseteq U$ for some $U\in {\mathcal U}$, and $U$ depends only on $w$ and $a$.
Let $\varphi \in \Psi^{-1}(w)$. Since $c\in {\mathcal C}\subseteq {\mathcal W}$, one has $sa=\sigma_{s}\sigma_{s^{-1}}(c)=c$. As $\{\zeta_U\}_{U\in {\mathcal U}}$ is a partition of unity of $X$, $\overrightarrow{\xi}(\varphi(a))\neq \nu$. But $\overrightarrow{h}(\varphi)_a=w^1_a=\nu$. Thus $\max_{s'\in G}\rho(s' \varphi(a), \varphi(s'a))\le \kappa$. In particular, one has $\rho(s\varphi(a), \varphi(c))=\rho(s\varphi(a), \varphi(sa))\le \kappa$. From our choice of $\kappa$, one gets $\rho(\varphi(a), s^{-1}\varphi(c))<\eta$. Thus $\varphi(a)\in U$. This proves the claim. From Lemma~\ref{L-compatible map} we get $$ {\mathcal D}({\mathcal U}, \rho, G, \delta, \sigma) \le \dim\big(X_0\times (\prod_{c\in {\mathcal C}}X)\times (\prod_{a\in {\mathcal Z}}X)\big).$$ Taking $d$ to be sufficiently large, we have $|{\mathcal C}|\ge M$ and hence $\dim(X^{|{\mathcal C}|})\le |{\mathcal C}||G|(\lambda +\theta)$. Since the dimension of the product of two compact metrizable spaces is at most the sum of the dimensions of the factors \cite[page 33 and Theorem V.8]{HW}, we have \begin{align*} \dim\big(X_0\times (\prod_{c\in {\mathcal C}}X)\times (\prod_{a\in {\mathcal Z}}X)\big) &\le \dim(X_0)+\dim(X^{|{\mathcal C}|})+|{\mathcal Z}|\dim(X)\\ &\le \theta d+|{\mathcal C}||G|(\lambda +\theta) +\tau d \dim(X)\\ &\le \theta d+d(\lambda+\theta) +\theta d\\ &= d(\lambda+3\theta). \end{align*} Therefore ${\mathcal D}({\mathcal U}, \rho, G, \delta, \sigma) \le d(\lambda+3\theta)$ as desired. \end{proof} \section{Sofic Metric Mean Dimension} \label{S-sofic metric mean dim} In this section we define the sofic metric mean dimension and establish some basic properties for it. We start by recalling the definitions of the lower box dimension of a compact metric space and of the metric mean dimension for actions of countable amenable groups.
For a pseudometric space $(Y, \rho)$ and $\varepsilon>0$ we say a subset $Z$ of $Y$ is {\it $(\rho, \varepsilon)$-separated} if $\rho(y, z)\ge \varepsilon$ for all distinct $y, z\in Z$. Denote by $N_\varepsilon(Y, \rho)$ the maximal cardinality of $(\rho, \varepsilon)$-separated subsets of $Y$. The {\it lower box dimension} of a compact metric space $(Y, \rho)$ is defined as $$ \underline{\dim}_B(Y, \rho):=\varliminf_{\varepsilon\to 0}\frac{\log N_{\varepsilon}(Y, \rho)}{|\log \varepsilon|}.$$ Let a countable (discrete) amenable group $G$ act continuously on a compact metrizable space $X$, and let $\rho$ be a continuous pseudometric on $X$. For a finite open cover ${\mathcal U}$ of $X$, we define the mesh of ${\mathcal U}$ under $\rho$ by $$ {\rm mesh}({\mathcal U}, \rho)=\max_{U\in {\mathcal U}}{\rm diam}(U, \rho).$$ For a nonempty finite subset $F$ of $G$, we define a pseudometric $\rho_F$ on $X$ by $$ \rho_F(x, y)=\max_{s\in F}\rho(sx, sy)$$ for $x, y\in X$. The function $F\mapsto \log \min_{{\rm mesh}({\mathcal U}, \rho_F)<\varepsilon} |{\mathcal U}|$, defined on the set of nonempty finite subsets of $G$, satisfies the conditions of the Ornstein--Weiss lemma \cite{OW} \cite[Theorem 6.1]{LW}; thus $\min_{{\rm mesh}({\mathcal U}, \rho_F)<\varepsilon} \frac{\log |{\mathcal U}|}{|F|}$ converges to some real number, denoted by $S(X, \varepsilon, \rho)$, as $F$ becomes more and more left invariant.
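For instance (an illustrative example, recorded here for concreteness), consider the unit interval $Y=[0,1]$ with the Euclidean metric $\rho$. Any $(\rho, \varepsilon)$-separated subset of $Y$ has at most $\lfloor 1/\varepsilon\rfloor +1$ points, and the arithmetic progression $0, \varepsilon, 2\varepsilon, \dots$ realizes this bound, so $N_\varepsilon([0,1], \rho)=\lfloor 1/\varepsilon\rfloor +1$ and $$ \underline{\dim}_B([0,1], \rho)=\varliminf_{\varepsilon\to 0}\frac{\log (\lfloor 1/\varepsilon\rfloor +1)}{|\log \varepsilon|}=1.$$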
The {\it metric mean dimension of $X$ with respect to $\rho$} \cite[page 13]{LW} is defined as $${\rm mdim}_{\rm M}(X, \rho)=\varliminf_{\varepsilon\to 0}\frac{S(X, \varepsilon, \rho)}{|\log \varepsilon|}.$$ For any nonempty finite subset $F$ of $G$ and $\varepsilon>0$, it is easy to check that \begin{align} \label{E-separated vs spanning} \min_{{\rm mesh}({\mathcal U}, \rho_F)<\varepsilon}|{\mathcal U}|\ge N_\varepsilon(X, \rho_F)\ge \min_{{\rm mesh}({\mathcal U}, \rho_F)<2\varepsilon}|{\mathcal U}|. \end{align} As discussed on page 14 of \cite{LW}, using \eqref{E-separated vs spanning} one can also write ${\rm mdim}_{\rm M}(X, \rho)$ in a form similar to $\underline{\dim}_B(Y, \rho)$: $$ {\rm mdim}_{\rm M}(X, \rho)=\varliminf_{\varepsilon\to 0}\frac{1}{|\log \varepsilon|}\varlimsup_{n\to \infty}\frac{\log N_\varepsilon(X, \rho_{F_n})}{|F_n|}$$ for any left F{\o}lner sequence $\{F_n\}_{n\in {\mathbb N}}$ of $G$ (see Definition~\ref{D-amenable}). Also, define $$ {\rm mdim}_{\rm M}(X)=\inf_{\rho}{\rm mdim}_{\rm M}(X, \rho)$$ for $\rho$ ranging over compatible metrics on $X$. In the rest of this section we fix a countable sofic group $G$ and a sofic approximation sequence $\Sigma = \{ \sigma_i : G \to {\rm Sym} (d_i ) \}_{i=1}^\infty$ for $G$. We also fix a continuous action $\alpha$ of $G$ on a compact metrizable space $X$. As we discussed in Section~\ref{S-sofic mean topological dim}, when defining sofic invariants, we replace $X$ by $Map(\rho, F, \delta, \sigma)$. We also replace $(\rho_F, \varepsilon)$-separated subsets of $X$ by $(\rho_\infty, \varepsilon)$-separated subsets of $Map(\rho, F, \delta, \sigma)$. Though we are mainly interested in the sofic metric mean dimension with respect to a metric, sometimes it can be calculated using a suitable continuous pseudometric, as Lemma~\ref{L-allow pseudometric} and Section~\ref{S-shifts} indicate.
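The last formula can be illustrated by a standard computation using a continuous pseudometric (a sketch, in the spirit of the example on page 14 of \cite{LW}): for the shift action of $G={\mathbb Z}$ on $X=[0,1]^{\mathbb Z}$, take the dynamically generating continuous pseudometric $\rho(x, y)=|x_0-y_0|$ and the F{\o}lner sequence $F_n=\{0, 1, \dots, n-1\}$. Then $\rho_{F_n}(x, y)=\max_{0\le j<n}|x_j-y_j|$, so $N_\varepsilon(X, \rho_{F_n})=(\lfloor 1/\varepsilon\rfloor+1)^n$ and $$ \varliminf_{\varepsilon\to 0}\frac{1}{|\log \varepsilon|}\varlimsup_{n\to \infty}\frac{\log N_\varepsilon(X, \rho_{F_n})}{|F_n|}=\varliminf_{\varepsilon\to 0}\frac{\log (\lfloor 1/\varepsilon\rfloor+1)}{|\log \varepsilon|}=1.$$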
Thus we state the various definitions for continuous pseudometrics. \begin{definition} \label{D-sofic metric mean dim} Let $F$ be a nonempty finite subset of $G$ and $\delta > 0$. For $\varepsilon > 0$ and a continuous pseudometric $\rho$ on $X$ we define \begin{align*} h_{\Sigma ,\infty}^\varepsilon (\rho ,F, \delta ) &= \varlimsup_{i\to\infty} \frac{1}{d_i} \log N_\varepsilon (Map (\rho ,F,\delta ,\sigma_i ),\rho_\infty ) ,\\ h_{\Sigma ,\infty}^\varepsilon (\rho ,F) &= \inf_{\delta > 0} h_{\Sigma ,\infty}^\varepsilon (\rho ,F,\delta ) ,\\ h_{\Sigma ,\infty}^\varepsilon (\rho ) &= \inf_{F} h_{\Sigma ,\infty}^\varepsilon (\rho ,F) , \end{align*} where $F$ in the third line ranges over the nonempty finite subsets of $G$. If $Map (\rho ,F,\delta ,\sigma_i )$ is empty for all sufficiently large $i$, we set $h_{\Sigma ,\infty}^\varepsilon (\rho ,F, \delta ) = -\infty$. We define the {\it sofic metric mean dimension of $\alpha$ with respect to $\rho$} as $${\rm mdim}_{\Sigma, {\rm M}} (X, \rho ) = \varliminf_{\varepsilon\to 0} \frac{1}{|\log \varepsilon|}h_{\Sigma ,\infty}^\varepsilon (\rho ).$$ We also define $$ {\rm mdim}_{\Sigma, {\rm M}}(X) =\inf_{\rho} {\rm mdim}_{\Sigma, {\rm M}} (X, \rho ),$$ for $\rho$ ranging over compatible metrics on $X$. Note that ${\rm mdim}_{\Sigma, {\rm M}}(\cdot, \cdot)$ is an invariant of metric dynamical systems, while ${\rm mdim}_{\Sigma, {\rm M}}(\cdot)$ is an invariant of topological dynamical systems. \end{definition} \begin{remark} \label{R-mean metric dim1} Note that $h_{\Sigma ,\infty}^\varepsilon (\rho ,F, \delta )$ decreases as $\delta$ decreases and $F$ increases. Thus in the definitions of $h_{\Sigma ,\infty}^\varepsilon (\rho ,F)$ and $h_{\Sigma ,\infty}^\varepsilon (\rho )$ one can also replace $\inf_{\delta>0}$ and $\inf_F$ by $\lim_{\delta\to 0}$ and $\lim_{F\to \infty}$ respectively, where $F_1\le F_2$ means $F_1\subseteq F_2$.
\end{remark} The {\it sofic topological entropy} $h_\Sigma(X)$ of $\alpha$ was defined in \cite[Definition 4.6]{KL11}. It was shown in Proposition 2.4 of \cite{KerLi10amenable} that $$ h_\Sigma(X)=\lim_{\varepsilon \to 0}h^\varepsilon_{\Sigma, \infty}(\rho)$$ for every compatible metric $\rho$ on $X$. Thus we have: \begin{proposition} \label{P-entropy vs dim} If $h_\Sigma(X)<+\infty$, then ${\rm mdim}_{\Sigma, {\rm M}}(X, \rho)\le 0$ for every compatible metric $\rho$ on $X$. \end{proposition} The amenable group case of Proposition~\ref{P-entropy vs dim} was observed by Lindenstrauss and Weiss \cite[page 14]{LW}. We say that a continuous pseudometric $\rho$ on $X$ is {\it dynamically generating} \cite[Sect.\ 4]{Li} if for any distinct points $x,y\in X$ one has $\rho(sx, sy)>0$ for some $s\in G$. The next lemma says that for any dynamically generating continuous pseudometric $\rho$, one can construct a compatible metric $\rho'$ on $X$ such that ${\rm mdim}_{\Sigma, {\rm M}} (X, \rho )={\rm mdim}_{\Sigma, {\rm M}}(X, \rho')$. This is the analogue of the Kolmogorov--Sinai theorem that, for measure-preserving actions of ${\mathbb Z}$, the entropy of any generating finite partition is equal to the entropy of the action. In some examples $X$ is naturally a closed invariant subset of $Y^G$, equipped with the shift action of $G$, for some compact metrizable space $Y$ which is much simpler than $X$. In such a case one can take a compatible metric $\rho$ on $Y$ and think of it as a dynamically generating continuous pseudometric on $X$ via the coordinate map $X\rightarrow Y$ at $e_G$. Then the calculation using $\rho$ is much simpler than that using $\rho'$ (see for example Section~\ref{S-shifts} and \cite{BL, KL11, KerLi10amenable}). \begin{lemma} \label{L-allow pseudometric} Let $\rho$ be a dynamically generating continuous pseudometric on $X$. Enumerate the elements of $G$ as $s_1, s_2, \dots$.
Define $\rho'$ by $\rho'(x, y)=\sum_{n=1}^{\infty}\frac{1}{2^n}\rho(s_nx, s_ny)$ for all $x, y\in X$. Then $\rho'$ is a compatible metric on $X$. Furthermore, if $e_G=s_m$, then for any $\varepsilon>0$ one has $$ h_{\Sigma ,\infty}^{4\varepsilon} (\rho' )\le h_{\Sigma ,\infty}^\varepsilon (\rho )\le h_{\Sigma ,\infty}^{\varepsilon/2^m} (\rho' ).$$ In particular, ${\rm mdim}_{\Sigma, {\rm M}} (X, \rho )={\rm mdim}_{\Sigma, {\rm M}}(X, \rho')$. \end{lemma} \begin{proof} Clearly $\rho'$ is a continuous pseudometric on $X$. Since $\rho$ is dynamically generating, $\rho'$ separates the points of $X$. Thus $\rho'$ is a compatible metric on $X$. Let $\varepsilon>0$. We show first that $h_{\Sigma ,\infty}^\varepsilon (\rho )\le h_{\Sigma ,\infty}^{\varepsilon/2^m} (\rho' )$. Let $F$ be a finite subset of $G$ containing $e_G$ and let $\delta>0$. Take $k\in {\mathbb N}$ with $2^{-k}{\rm diam}(X, \rho)<\delta/2$. Set $F'=\bigcup_{n=1}^ks_nF$ and take $1>\delta'>0$ small, to be fixed in a moment. Let $\sigma$ be a map from $G$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$ which is a good enough sofic approximation for $G$. We claim that $Map (\rho ,F',\delta' ,\sigma )\subseteq Map (\rho' ,F,\delta ,\sigma)$. Let $\varphi\in Map (\rho ,F',\delta' ,\sigma )$. By Lemma~\ref{L-almost} one has $$|{\mathcal W}|\ge (1-\delta'|F'|)d$$ for $${\mathcal W}:=\{a\in [d]: \max_{s\in F'}\rho(\varphi\circ \sigma_{s}(a), \alpha_s\circ \varphi(a))\le \sqrt{\delta'}\}.$$ Set ${\mathcal R}={\mathcal W}\cap \bigcap_{t\in F}\sigma_{t}^{-1}({\mathcal W})$. Then $|{\mathcal R}|\ge (1-\delta'|F'|(1+|F|))d$.
Also set $${\mathcal Q}=\{a\in [d]: \sigma_{s_n}\circ \sigma_{t}(a)=\sigma_{s_nt}(a) \mbox{ for all } 1\le n \le k \mbox{ and } t\in F \}.$$ For any $a\in {\mathcal R}\cap {\mathcal Q}$ and $t\in F$, since $a, \sigma_{t}(a)\in {\mathcal W}$ and $s_n, s_nt\in F'$ for all $1\le n\le k$, we have \begin{eqnarray*} & &\rho'(\varphi \circ \sigma_{t}(a), \alpha_t\circ \varphi(a))\\ &\le& 2^{-k}{\rm diam}(X, \rho)+\sum_{n=1}^k\frac{1}{2^n}\rho(\alpha_{s_n}\circ \varphi \circ \sigma_{t}(a), \alpha_{s_n}\circ \alpha_t\circ \varphi(a))\\ &\le& \delta/2+\sum_{n=1}^k \frac{1}{2^n}\bigg(\rho(\alpha_{s_n}\circ \varphi \circ \sigma_{t}(a), \varphi\circ \sigma_{s_n}\circ \sigma_{t}(a))+\rho(\varphi\circ \sigma_{s_nt}(a), \alpha_{s_nt}\circ \varphi(a))\bigg)\\ &\le& \delta/2+\sum_{n=1}^k \frac{1}{2^n}\cdot 2\sqrt{\delta'}\le \delta/2+2\sqrt{\delta'}. \end{eqnarray*} When $\sigma$ is a good enough sofic approximation for $G$, one has $|{\mathcal Q}|\ge (1-\delta'|F'|)d$, and hence for any $t\in F$, \begin{eqnarray*} (\rho'_2(\varphi \circ \sigma_{t}, \alpha_t\circ \varphi))^2&\le& \frac{1}{d}(|{\mathcal R}\cap {\mathcal Q}|(\delta/2+2\sqrt{\delta'})^2+(d-|{\mathcal R}\cap {\mathcal Q}|)({\rm diam}(X, \rho'))^2)\\ &\le& (\delta/2+2\sqrt{\delta'})^2+\delta'|F'|(2+|F|)({\rm diam}(X, \rho'))^2<\delta^2, \end{eqnarray*} when $\delta'$ is small enough, independently of $\sigma$ and $\varphi$. Therefore $\varphi\in Map (\rho' ,F,\delta ,\sigma)$. This proves the claim. Note that $\frac{1}{2^m}\rho_\infty\le \rho'_\infty$ on $Map (\rho ,F',\delta' ,\sigma )$. Thus $$N_\varepsilon(Map (\rho ,F',\delta' ,\sigma ), \rho_\infty)\le N_{\varepsilon/2^m}(Map (\rho ,F',\delta' ,\sigma ), \rho'_\infty) \le N_{\varepsilon/2^m}(Map (\rho' ,F,\delta ,\sigma ), \rho'_\infty),$$ when $\sigma$ is a good enough sofic approximation for $G$.
It follows that $h_{\Sigma ,\infty}^\varepsilon (\rho ,F', \delta' )\le h_{\Sigma ,\infty}^{\varepsilon/2^m} (\rho' ,F, \delta )$, and hence $ h_{\Sigma ,\infty}^\varepsilon (\rho )\le h_{\Sigma ,\infty}^{\varepsilon/2^m} (\rho' )$ as desired. Next we show $h_{\Sigma ,\infty}^{4\varepsilon} (\rho' )\le h_{\Sigma ,\infty}^\varepsilon (\rho )$. It suffices to show $h_{\Sigma ,\infty}^{4\varepsilon} (\rho' )\le h_{\Sigma ,\infty}^\varepsilon (\rho )+\theta$ for every $\theta>0$. Take $k\in {\mathbb N}$ with $2^{-k}{\rm diam}(X, \rho)<\varepsilon/2$. Let $F$ be a finite subset of $G$ containing $\{s_1, \dots, s_k\}$ and let $\delta>0$ be sufficiently small; we shall specify it in a moment. Set $\delta'=\delta/2^m$. Let $\sigma$ be a map from $G$ to ${\rm Sym}(d)$ for some sufficiently large $d\in {\mathbb N}$. Note that $\frac{1}{2^m}\rho_2(\varphi, \psi)\le \rho'_2(\varphi, \psi)$ for all maps $\varphi, \psi: [d]\rightarrow X$. Thus $Map (\rho ,F,\delta ,\sigma )\supseteq Map (\rho' ,F,\delta' ,\sigma )$. Let ${\mathscr E}$ be a $(\rho'_\infty, 4\varepsilon)$-separated subset of $Map (\rho' ,F,\delta' ,\sigma )$ with $|{\mathscr E}|=N_{4\varepsilon}(Map (\rho' ,F,\delta' ,\sigma), \rho'_\infty)$. For each $\varphi\in {\mathscr E}$ denote by ${\mathcal W}_\varphi$ the set of $a\in [d]$ satisfying $\rho(\alpha_s\circ \varphi(a), \varphi\circ \sigma_s(a))\le \sqrt{\delta}$ for all $s\in F$. By Lemma~\ref{L-almost} one has $|{\mathcal W}_\varphi|\ge (1-|F|\delta)d$. Take $\delta$ to be small enough so that $|F|\delta<1/2$. Stirling's approximation says that $\frac{t!}{(2\pi t)^{1/2}(t/e)^t}\to 1$ as $t\to +\infty$.
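The precise consequence of Stirling's approximation used below is the standard entropy bound for binomial sums (recorded for completeness): for $0<c\le 1/2$, $$ \sum_{j=0}^{\lfloor cd\rfloor}\binom{d}{j}\le \exp\big(H(c)d\big), \qquad H(c)=-c\log c-(1-c)\log(1-c),$$ and $H(c)\to 0$ as $c\to 0$. Applied with $c=|F|\delta$, this yields an admissible choice of the constant $\beta$ appearing in the counting argument below.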
The number of subsets of $[d]$ of cardinality at most $|F|\delta d$ is equal to $\sum_{j=0}^{\lfloor |F|\delta d\rfloor}\binom{d}{j}$, which is at most $|F|\delta d\binom{d}{|F|\delta d}$ (here we use $|F|\delta<1/2$), which by Stirling's approximation is less than $\exp(\beta d)$ for some $\beta > 0$ depending on $\delta$ and $|F|$ but not on $d$ when $d$ is sufficiently large, with $\beta\to 0$ as $\delta\to 0$ for fixed $|F|$. Take $\delta$ to be small enough such that $\beta<\theta/2$. Then, when $d$ is sufficiently large, we can find a subset ${\mathscr F}$ of ${\mathscr E}$ with $|{\mathscr F}|\exp(\beta d)\ge |{\mathscr E}|$ such that ${\mathcal W}_\varphi$ is the same, say ${\mathcal W}$, for every $\varphi \in {\mathscr F}$. Let $\varphi \in {\mathscr F}$. Let us estimate how many elements of ${\mathscr F}$ lie in the open ball $B_\varphi:=\{\psi\in X^{[d]}: \rho_\infty(\varphi, \psi)<\varepsilon\}$. Let $\psi\in {\mathscr F}\cap B_\varphi$. For any $a\in {\mathcal W}$ and $s\in F$, we have \begin{eqnarray*} \rho(s\varphi(a), s\psi(a))&\le& \rho(s\varphi(a), \varphi(sa))+\rho(\varphi(sa), \psi(sa))+\rho(\psi(sa), s\psi(a))\\ &\le & \sqrt{\delta}+\varepsilon+\sqrt{\delta}\le \frac{3}{2}\varepsilon, \end{eqnarray*} when $\delta$ is taken to be small enough. It follows that for any $a\in {\mathcal W}$ we have \begin{eqnarray*} \rho'(\varphi(a), \psi(a))&\le& 2^{-k}{\rm diam}(X, \rho)+\sum_{n=1}^k2^{-n}\rho(s_n\varphi(a), s_n\psi(a))\\ &<& \frac{1}{2}\varepsilon+\frac{3}{2}\varepsilon=2\varepsilon. \end{eqnarray*} Then $\rho'_\infty(\varphi|_{{\mathcal W}}, \psi|_{{\mathcal W}})< 2\varepsilon$. Let $Y$ be a maximal $(\rho', 2\varepsilon)$-separated subset of $X$. Set ${\mathcal W}^c=[d]\setminus {\mathcal W}$.
For each $\psi\in {\mathscr F}\cap B_\varphi$, there exists some $f_\psi\in Y^{{\mathcal W}^c}$ with $\rho'_\infty(\psi|_{{\mathcal W}^c}, f_\psi)<2\varepsilon$. Then we can find a subset ${\mathscr A}$ of ${\mathscr F}\cap B_\varphi$ with $|Y|^{|{\mathcal W}^c|}|{\mathscr A}|\ge |{\mathscr F}\cap B_\varphi|$ such that $f_\psi$ is the same, say $f$, for every $\psi\in {\mathscr A}$. For any $\psi, \psi'\in {\mathscr A}$, we have $$ \rho'_{\infty}(\psi|_{{\mathcal W}^c}, \psi'|_{{\mathcal W}^c})\le \rho'_\infty(\psi|_{{\mathcal W}^c}, f)+\rho'_\infty(f, \psi'|_{{\mathcal W}^c})<4\varepsilon$$ and $$ \rho'_{\infty}(\psi|_{{\mathcal W}}, \psi'|_{{\mathcal W}})\le \rho'_\infty(\psi|_{{\mathcal W}}, \varphi|_{{\mathcal W}})+\rho'_\infty(\varphi|_{{\mathcal W}}, \psi'|_{{\mathcal W}})<4\varepsilon,$$ and hence $\rho'_\infty(\psi, \psi')< 4\varepsilon$. Since ${\mathscr A}$ is $(\rho'_\infty, 4\varepsilon)$-separated, we get $\psi=\psi'$. Therefore $|{\mathscr A}|\le 1$, and hence \begin{align*} |{\mathscr F}\cap B_\varphi|\le |Y|^{|{\mathcal W}^c|}|{\mathscr A}|\le |Y|^{|F|\delta d}. \end{align*} Let ${\mathscr B}$ be a maximal $(\rho_\infty, \varepsilon)$-separated subset of ${\mathscr F}$. Then ${\mathscr F}=\bigcup_{\varphi \in {\mathscr B}}({\mathscr F}\cap B_\varphi)$.
Thus \begin{align*} N_{4\varepsilon}(Map (\rho' ,F,\delta' ,\sigma ), \rho'_\infty)&=|{\mathscr E}|\le \exp(\beta d)|{\mathscr F}|\le \exp(\theta d/2)|{\mathscr B}||Y|^{|F|\delta d}\\ &\le \exp(\theta d/2)|Y|^{|F|\delta d} N_\varepsilon(Map (\rho, F, \delta, \sigma), \rho_\infty)\\ &\le \exp(\theta d)N_\varepsilon(Map (\rho, F, \delta, \sigma), \rho_\infty) \end{align*} when we take $\delta$ small enough, where in the third inequality we used ${\mathscr B}\subseteq Map (\rho' ,F,\delta' ,\sigma )\subseteq Map (\rho, F, \delta, \sigma)$ to conclude that $|{\mathscr B}|\le N_\varepsilon(Map (\rho, F, \delta, \sigma), \rho_\infty)$. Therefore $h_{\Sigma ,\infty}^{4\varepsilon} (\rho' ,F, \delta' )\le h_{\Sigma ,\infty}^\varepsilon (\rho ,F, \delta )+\theta$. It follows that $h_{\Sigma ,\infty}^{4\varepsilon} (\rho' )\le h_{\Sigma ,\infty}^\varepsilon (\rho )+\theta$, as desired. \end{proof} Note that ${\rm mdim}_{\Sigma, {\rm M}} (X)$ was defined as the infimum of ${\rm mdim}_{\Sigma, {\rm M}}(X, \rho)$ over all compatible metrics $\rho$. However, by Lemma~\ref{L-allow pseudometric} we get: \begin{proposition} \label{P-allow pseudometric} One has \[ {\rm mdim}_{\Sigma, {\rm M}} (X) = \inf_{\rho}{\rm mdim}_{\Sigma, {\rm M}}(X, \rho) \] for $\rho$ ranging over dynamically generating continuous pseudometrics on $X$. \end{proposition} The following is the analogue of Proposition~\ref{P-subspace top}. \begin{proposition} \label{P-subspace metric} Let $G$ act continuously on a compact metrizable space $X$, and let $Y$ be a closed $G$-invariant subset of $X$. Then ${\rm mdim}_{\Sigma, {\rm M}}(Y)\le {\rm mdim}_{\Sigma, {\rm M}}(X)$. \end{proposition} \begin{proof} Let $\rho$ be a compatible metric on $X$. Then $\rho$ restricts to a compatible metric $\rho'$ on $Y$.
Note that $Map(\rho', F, \delta, \sigma)\subseteq Map(\rho, F, \delta, \sigma)$ for any nonempty finite subset $F$ of $G$, any $\delta>0$, and any map $\sigma$ from $G$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$. Furthermore, the restriction of $\rho_\infty$ to $Map(\rho', F, \delta, \sigma)$ is exactly $\rho'_\infty$. Thus $N_\varepsilon(Map(\rho', F, \delta, \sigma), \rho'_\infty)\le N_\varepsilon(Map(\rho, F, \delta, \sigma), \rho_\infty)$ for any $\varepsilon>0$. It follows that ${\rm mdim}_{\Sigma, {\rm M}}(Y)\le {\rm mdim}_{\Sigma, {\rm M}}(Y, \rho')\le {\rm mdim}_{\Sigma, {\rm M}}(X, \rho)$. Since $\rho$ is an arbitrary compatible metric on $X$, we get ${\rm mdim}_{\Sigma, {\rm M}}(Y)\le {\rm mdim}_{\Sigma, {\rm M}}(X)$. \end{proof} \section{Sofic Metric Mean Dimension for Amenable Groups} \label{S-metric amenable} In this section we show that the sofic metric mean dimension extends the metric mean dimension for actions of countable amenable groups: \begin{theorem}\label{T-amenable} Let a countable (discrete) amenable group $G$ act continuously on a compact metrizable space $X$, and let $\Sigma$ be a sofic approximation sequence for $G$. Then \[ {\rm mdim}_{\Sigma, {\rm M}} (X,\rho) = {\rm mdim}_{\rm M} (X,\rho) \] for every continuous pseudometric $\rho$ on $X$. In particular, \[ {\rm mdim}_{\Sigma, {\rm M}}(X)={\rm mdim}_{\rm M} (X). \] \end{theorem} Theorem~\ref{T-amenable} follows directly from Lemmas~\ref{L-amenable lower bound} and \ref{L-amenable upper bound} below. The proof of Lemma~\ref{L-amenable lower bound} is similar to that of Lemma~\ref{L-top mean dim lower bound}. \begin{lemma}\label{L-amenable lower bound} Let a countable amenable group $G$ act continuously on a compact metrizable space $X$, and let $\rho$ be a continuous pseudometric on $X$.
Then for any $\varepsilon>0$ we have $ h^{\varepsilon}_{\Sigma ,\infty} (\rho)\ge S(X, 2\varepsilon, \rho)$. In particular, ${\rm mdim}_{\Sigma, {\rm M}} (X, \rho)\ge {\rm mdim}_{\rm M}(X, \rho)$. \end{lemma} \begin{proof} It suffices to show that $h^\varepsilon_{\Sigma ,\infty} (\rho ) \geq S(X, 2\varepsilon, \rho) - 2\theta$ for every $\theta>0$. Take a nonempty finite subset $K$ of $G$ and $\varepsilon'>0$ such that for any nonempty finite subset $F'$ of $G$ satisfying $|KF'\setminus F'|<\varepsilon'|F'|$, one has \begin{align} \label{E-amenable lower bound} \frac{1}{|F'|}\log N_{\varepsilon}(X, \rho_{F'})\overset{\eqref{E-separated vs spanning}}\ge \frac{1}{|F'|}\log \min_{{\rm mesh}({\mathcal U}, \rho_{F'})<2\varepsilon} |{\mathcal U}|\ge S(X, 2\varepsilon, \rho)-\theta. \end{align} Let $F$ be a nonempty finite subset of $G$ and $\delta > 0$. Let $\sigma$ be a map from $G$ to ${\rm Sym} (d)$ for some $d\in{\mathbb N}$. Now it suffices to show that if $\sigma$ is a good enough sofic approximation then \begin{align*} \frac{1}{d} \log N_{\varepsilon} (Map (\rho ,F,\delta ,\sigma ),\rho_{\infty} )\ge S(X, 2\varepsilon, \rho) - 2\theta . \end{align*} Take $\delta'>0$ such that $\sqrt{\delta'}{\rm diam}(X, \rho)<\delta/2$ and $(1-\delta' )(S(X, 2\varepsilon, \rho) - \theta ) \ge S(X, 2\varepsilon, \rho) - 2\theta$.
By Lemma~\ref{L-Rokhlin} there are an $\ell\in {\mathbb N}$ and nonempty finite subsets $F_1, \dots, F_\ell$ of $G$ satisfying $|(K\cup F)F_k\setminus F_k|<\min(\varepsilon', \delta') |F_k|$ for all $k=1, \dots, \ell$ such that, for every map $\sigma : G\to{\rm Sym}(d)$ for some $d\in {\mathbb N}$ which is a good enough sofic approximation for $G$ and every ${\mathcal W}\subseteq [d]$ with $|{\mathcal W}|\ge (1-\delta'/2)d$, there exist ${\mathcal C}_1, \dots, {\mathcal C}_\ell\subseteq {\mathcal W}$ satisfying the following: \begin{enumerate} \item for every $k=1, \dots, \ell$, the map $(s, c)\mapsto \sigma_s(c)$ from $F_k\times {\mathcal C}_k$ to $\sigma(F_k){\mathcal C}_k$ is bijective, \item the sets $\sigma(F_1){\mathcal C}_1, \dots, \sigma(F_\ell){\mathcal C}_\ell $ are pairwise disjoint and $|\bigcup_{k=1}^\ell\sigma(F_k){\mathcal C}_k|\ge (1-\delta')d$. \end{enumerate} Let $\sigma: G\rightarrow {\rm Sym}(d)$ for some $d\in {\mathbb N}$ be a good enough sofic approximation for $G$ such that $|{\mathcal W}|\ge (1-\delta'/2)d$ for $$ {\mathcal W}:=\{a\in [d]: \sigma_t\sigma_s(a)=\sigma_{ts}(a) \mbox{ for all } t\in F, s\in \bigcup_{k=1}^\ell F_k\}.$$ Then we have ${\mathcal C}_1, \dots, {\mathcal C}_\ell$ as above. For every $k\in\{ 1,\dots, \ell\}$ pick an $\varepsilon$-separated set $E_k \subseteq X$ with respect to $\rho_{F_k}$ of maximal cardinality. Then $$ \frac{1}{|F_k|}\log |E_k|=\frac{1}{|F_k|}\log N_\varepsilon(X, \rho_{F_k})\overset{\eqref{E-amenable lower bound}}\ge S(X, 2\varepsilon, \rho)-\theta.$$ For each $h = (h_k )_{k=1}^\ell \in\prod_{k=1}^\ell (E_k )^{{\mathcal C}_k}$ take a map $\varphi_h : [d]\rightarrow X$ such that \[ \varphi_h(s c) = s(h_k (c)) \] for all $k\in \{ 1,\dots ,\ell \}$, $c\in {\mathcal C}_k$, and $s\in F_k$.
By Lemma~\ref{L-construct map} one has $\varphi_h \in Map (\rho , F ,\delta , \sigma )$. Now if $h = (h_k )_{k=1}^\ell$ and $h' = (h_k' )_{k=1}^\ell$ are distinct elements of $\prod_{k=1}^\ell (E_k )^{{\mathcal C}_k}$, then $h_k (c) \neq h_k' (c)$ for some $k\in \{ 1,\dots ,\ell \}$ and $c\in {\mathcal C}_k$. Since $h_k (c)$ and $h_k' (c)$ are distinct points of $E_k$, which is $\varepsilon$-separated with respect to $\rho_{F_k}$, they are $\varepsilon$-separated with respect to $\rho_{F_k}$, and thus we have $\rho_{\infty} (\varphi_h ,\varphi_{h'} ) \ge \varepsilon$. Therefore \begin{align*} \frac1d \log N_{\varepsilon} (Map (\rho ,F,\delta ,\sigma ),\rho_{\infty} ) &\geq \frac1d \sum_{k=1}^\ell |{\mathcal C}_k | \log |E_k | \\ &\geq \frac1d \sum_{k=1}^\ell |{\mathcal C}_k | |F_k | (S(X, 2\varepsilon, \rho) - \theta ) \\ &\geq (1-\delta' )(S(X, 2\varepsilon, \rho) - \theta ) \\ &\geq S(X, 2\varepsilon, \rho) - 2\theta , \end{align*} as desired. \end{proof} The proof of Lemma~\ref{L-amenable upper bound} is similar to that of Lemma~\ref{L-top mean dim upper bound}, but considerably easier. \begin{lemma}\label{L-amenable upper bound} Let a countable amenable group $G$ act continuously on a compact metrizable space $X$, and let $\rho$ be a continuous pseudometric on $X$. Then for any $\varepsilon>0$ we have $ h^{\varepsilon}_{\Sigma ,\infty} (\rho)\le S(X, \varepsilon/4, \rho)$. In particular, ${\rm mdim}_{\Sigma, {\rm M}} (X, \rho)\le {\rm mdim}_{\rm M}(X, \rho)$. \end{lemma} \begin{proof} Let $\varepsilon> 0$. It suffices to show that $h^{\varepsilon}_{\Sigma ,\infty} (\rho)\le S(X, \varepsilon/4, \rho) + 3\theta$ for every $\theta >0$.
Take a nonempty finite subset $K$ of $G$ and $\delta'>0$ such that $\min_{{\rm mesh}({\mathcal U}, \rho_{F'})<\varepsilon/4} |{\mathcal U}|<\exp((S(X, \varepsilon/4, \rho) + \theta)|F'|)$ for every nonempty finite subset $F'$ of $G$ satisfying $|KF'\setminus F'|<\delta'|F'|$. Take an $\eta\in (0,1)$ such that $(N_{\varepsilon/2}(X, \rho ))^{2\eta} \leq \exp(\theta)$. By Lemma~\ref{L-Rokhlin} there are an $\ell\in {\mathbb N}$ and nonempty finite subsets $F_1, \dots, F_\ell$ of $G$ satisfying $|KF_k\setminus F_k|<\delta' |F_k|$ for all $k=1, \dots, \ell$ such that, for every map $\sigma : G\to{\rm Sym}(d)$ for some $d\in {\mathbb N}$ which is a good enough sofic approximation for $G$ and every ${\mathcal W}\subseteq [d]$ with $|{\mathcal W}|\ge (1-\eta)d$, there exist ${\mathcal C}_1, \dots, {\mathcal C}_\ell\subseteq {\mathcal W}$ satisfying the following: \begin{enumerate} \item for every $k=1, \dots, \ell$, the map $(s, c)\mapsto \sigma_s(c)$ from $F_k\times {\mathcal C}_k$ to $\sigma(F_k){\mathcal C}_k$ is bijective, \item the sets $\sigma(F_1){\mathcal C}_1, \dots, \sigma(F_\ell){\mathcal C}_\ell $ are pairwise disjoint and $|\bigcup_{k=1}^\ell\sigma(F_k){\mathcal C}_k|\ge (1-2\eta)d$. \end{enumerate} Then \begin{align} \label{E-top upper} N_{\varepsilon/4}(X, \rho_{F_k})\overset{\eqref{E-separated vs spanning}}\le \min_{{\rm mesh}({\mathcal U}, \rho_{F_k})<\varepsilon/4} |{\mathcal U}|<\exp((S(X, \varepsilon/4, \rho) + \theta)|F_k|) \end{align} for every $k=1, \dots, \ell$. Set $F=\bigcup_{k=1}^\ell F_k$. Let $\delta > 0$ be a small positive number which we will determine in a moment. Let $\sigma$ be a map from $G$ to ${\rm Sym} (d)$ for some sufficiently large $d\in{\mathbb N}$ which is a good enough sofic approximation for $G$.
We will show that $N_{\varepsilon}(Map(\rho, F, \delta, \sigma), \rho_{\infty})\le \exp((S(X, \varepsilon/4, \rho) + 3\theta)d)$, which will complete the proof, since we can then conclude that $h^{\varepsilon}_{\Sigma ,\infty}(\rho, F, \delta)\le S(X, \varepsilon/4, \rho) + 3\theta$ and hence $h^{\varepsilon}_{\Sigma ,\infty}(\rho)\le S(X, \varepsilon/4, \rho) + 3\theta$. For every $\varphi\in Map (\rho, F, \delta, \sigma)$, by Lemma~\ref{L-almost} the set ${\mathcal W}_{\varphi}$ of all $a\in [d]$ satisfying \[ \rho(\varphi (sa), s\varphi(a)) \le \sqrt{\delta} \] for all $s\in F$ has cardinality at least $(1-|F|\delta) d$. For each ${\mathcal W}\subseteq [d]$ we define on the set of maps from $[d]$ to $X$ the pseudometric \[ \rho_{{\mathcal W},\infty} (\varphi , \psi ) = \rho_{\infty} (\varphi|_{{\mathcal W}}, \psi|_{{\mathcal W}}). \] Take a $(\rho_\infty, \varepsilon )$-separated subset ${\mathscr E}$ of $Map (\rho, F, \delta, \sigma)$ of maximal cardinality. Set $n = |F|$. Stirling's approximation says that $\frac{t!}{(2\pi t)^{1/2}(t/e)^t}\to 1$ as $t\to +\infty$. When $n\delta <1/2$, the number of subsets of $[d]$ of cardinality no greater than $n\delta d$ is equal to $\sum_{j=0}^{\lfloor n\delta d \rfloor} \binom{d}{j}$, which is at most $n\delta d \binom{d}{n\delta d}$, which by Stirling's approximation is less than $\exp(\beta d)$ for some $\beta > 0$ depending on $\delta$ and $n$ but not on $d$ when $d$ is sufficiently large, with $\beta\to 0$ as $\delta\to 0$ for fixed $n$. Thus when $\delta$ is small enough and $d$ is large enough, there is a subset ${\mathscr F}$ of ${\mathscr E}$ with $\exp(\theta d)|{\mathscr F}|\ge |{\mathscr E}|$ such that the set ${\mathcal W}_{\varphi}$ is the same, say ${\mathcal W}$, for every $\varphi\in {\mathscr F}$, and $|{\mathcal W}|/d>1-\eta$.
Then we have ${\mathcal C}_1, \dots, {\mathcal C}_\ell\subseteq {\mathcal W}$ as above. Let $1\le k\le \ell$ and $c\in {\mathcal C}_k$. Let ${\mathscr D}_{k, c}$ be a maximal $(\varepsilon/2)$-separated subset of ${\mathscr F}$ with respect to $\rho_{\sigma(F_k)c ,\infty}$. Then ${\mathscr D}_{k, c}$ is a $(\rho_{\sigma(F_k)c ,\infty}, \varepsilon/2)$-spanning subset of ${\mathscr F}$ in the sense that for any $\varphi\in {\mathscr F}$ there exists some $\psi\in {\mathscr D}_{k, c}$ with $\rho_{\sigma(F_k)c ,\infty}(\varphi, \psi)<\varepsilon/2$. We will show that $|{\mathscr D}_{k, c}|\le \exp((S(X, \varepsilon/4, \rho) + \theta)|F_k|)$ when $\delta$ is small enough. For any two distinct elements $\varphi$ and $\psi$ of ${\mathscr D}_{k, c}$ we have, for every $s\in F_k$, since $c\in {\mathcal W}_{\varphi}\cap {\mathcal W}_{\psi}$, \begin{align*} \rho(s\varphi(c), s\psi(c))&\ge \rho(\varphi(sc), \psi(sc))-\rho(s\varphi(c), \varphi(sc))-\rho(s\psi(c), \psi(sc))\\ &\ge \rho(\varphi(sc), \psi(sc))-2\sqrt{\delta}, \end{align*} and hence \begin{align*} \rho_{F_k}(\varphi(c), \psi(c))&=\max_{s\in F_k}\rho(s\varphi(c), s\psi(c))\ge \max_{s\in F_k}\rho(\varphi(sc), \psi(sc))-2\sqrt{\delta}>\varepsilon/2-\varepsilon/4=\varepsilon/4, \end{align*} granted that $\delta$ is taken small enough. Thus $\{\varphi(c) : \varphi\in {\mathscr D}_{k, c} \}$ is a $(\rho_{F_k}, \varepsilon/4 )$-separated subset of $X$ of cardinality $|{\mathscr D}_{k, c}|$, so that \begin{align*} |{\mathscr D}_{k, c}|\le N_{\varepsilon/4}(X, \rho_{F_k}) \overset{\eqref{E-top upper}}\le \exp((S(X, \varepsilon/4, \rho) + \theta)|F_k|), \end{align*} as we wished to show. Set \[ {\mathcal Z} = [d] \setminus \bigcup_{k=1}^\ell\sigma(F_k){\mathcal C}_k, \] and take a maximal $(\rho, \varepsilon/2)$-separated subset $Y$ of $X$.
Then $Y^{{\mathcal Z}}$ is a $(\rho_\infty, \varepsilon/2)$-spanning subset of $X^{{\mathcal Z}}$ in the sense that for any $\varphi\in X^{{\mathcal Z}}$ there is some $\psi\in Y^{{\mathcal Z}}$ with $\rho_\infty(\varphi, \psi)<\varepsilon/2$. We have \[ |Y^{{\mathcal Z}} | \leq (N_{\varepsilon/2}(X, \rho ))^{|{\mathcal Z}|} \leq (N_{\varepsilon/2}(X, \rho ))^{2\eta d}. \] Write ${\mathscr A}$ for the set of all maps $\varphi : [d]\rightarrow X$ such that $\varphi|_{{\mathcal Z}}\in Y^{{\mathcal Z}}$ and $\varphi|_{\sigma (F_k)c}\in {\mathscr D}_{k,c}|_{\sigma (F_k)c}$ for all $1\le k\le \ell$ and $c\in {\mathcal C}_k$. Then, by our choice of $\eta$, \begin{align*} |{\mathscr A} | &= |Y^{{\mathcal Z}} | \prod_{k=1}^\ell \prod_{c\in {\mathcal C}_k} |{\mathscr D}_{k,c} | \le (N_{\varepsilon/2}(X, \rho ))^{2\eta d} \exp\bigg(\sum_{k=1}^\ell\sum_{c\in {\mathcal C}_k}(S(X, \varepsilon/4, \rho) + \theta)|F_k|\bigg)\\ &= (N_{\varepsilon/2}(X, \rho ))^{2\eta d}\exp\bigg((S(X, \varepsilon/4, \rho) + \theta)\sum_{k=1}^\ell|F_k| |{\mathcal C}_k|\bigg)\\ &\le \exp(\theta d) \exp((S(X, \varepsilon/4, \rho) + \theta)d)=\exp((S(X, \varepsilon/4, \rho) + 2\theta)d). \end{align*} Now since every element of ${\mathscr F}$ lies within $\rho_{\infty}$-distance $\varepsilon/2$ of an element of ${\mathscr A}$ and ${\mathscr F}$ is $\varepsilon$-separated with respect to $\rho_{\infty}$, the cardinality of ${\mathscr F}$ is at most that of ${\mathscr A}$. Therefore \begin{align*} N_{\varepsilon}(Map (\rho, F, \delta, \sigma), \rho_\infty)&=|{\mathscr E}|\le \exp(\theta d)|{\mathscr F}|\le \exp(\theta d)|{\mathscr A}|\\ &\le \exp(\theta d)\exp((S(X, \varepsilon/4, \rho) + 2\theta)d)\\ &=\exp((S(X, \varepsilon/4, \rho) + 3\theta)d), \end{align*} as desired.
\end{proof} \section{Comparison of Sofic Mean Dimensions} \label{S-comparison} In this section we prove the following relation between sofic mean dimensions: \begin{theorem} \label{T-top vs metric} Let a countable sofic group $G$ act continuously on a compact metrizable space $X$. Let $\Sigma$ be a sofic approximation sequence of $G$. Then $$ {\rm mdim}_\Sigma(X)\le {\rm mdim}_{\Sigma, {\rm M}}(X).$$ \end{theorem} The amenable group case of Theorem~\ref{T-top vs metric} was proved by Lindenstrauss and Weiss \cite[Theorem 4.2]{LW}. We adapt their proof to our situation, replacing the set $\{(sx)_{s\in H}: x\in X\}\subseteq X^H$ for an approximately left invariant finite subset $H$ of $G$ in their proof (in their case $G={\mathbb Z}$ and $H=\{0, \dots, N-1\}$) with the set $Map(\rho, F, \delta, \sigma)$. \begin{lemma} \label{L-Lipschitz partion of unity} Let ${\mathcal U}$ be a finite open cover of $X$ and let $\rho$ be a compatible metric on $X$. Then for each $U\in {\mathcal U}$ there exists a Lipschitz function $f_U: X\rightarrow [0,1]$ vanishing on $X\setminus U$, such that $\max_{U\in {\mathcal U}}f_U(x)=1$ for every $x\in X$. \end{lemma} \begin{proof} If some elements of ${\mathcal U}$ are equal to the whole space $X$, we may set $f_U$ to be the constant function $1$ for the elements $U$ equal to $X$ and the constant function $0$ for those elements $U$ not equal to $X$. Thus we may assume that no element of ${\mathcal U}$ equals $X$. Note that for each $U\in {\mathcal U}$, the function $\rho(\cdot, X\setminus U): X\rightarrow [0, +\infty)$ is Lipschitz and vanishes on $X\setminus U$. Furthermore, $ \sum_{V\in {\mathcal U}} \rho(x, X\setminus V)>0$ for every $x\in X$.
Define $g_U, f_U: X\rightarrow [0, 1]$ by $$g_U(x)=\frac{\rho(x, X\setminus U)}{\sum_{V\in {\mathcal U}} \rho(x, X\setminus V)}$$ and $$ f_U(x)=|{\mathcal U}|\min(g_U(x), 1/|{\mathcal U}|).$$ Then $g_U$ is Lipschitz and vanishes on $X\setminus U$, and hence so is $f_U$. Furthermore, for each $x\in X$ one has $\sum_{U\in {\mathcal U}}g_U(x)=1$, and thus $g_U(x)\ge 1/|{\mathcal U}|$ for some $U\in {\mathcal U}$. It follows that $f_U(x)=1$ for some $U\in {\mathcal U}$. \end{proof} Let $\rho, {\mathcal U}$ and $f_U$ be as in Lemma~\ref{L-Lipschitz partion of unity}. Let $\sigma$ be a map from $G$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$. We define a continuous map $\Phi_d: X^{[d]}\rightarrow [0, 1]^{[d]\times {\mathcal U}}$ by $\Phi_d(\varphi)_{a, U}=f_U(\varphi(a))$ for $a\in [d]$ and $U\in {\mathcal U}$. \begin{lemma} \label{L-small image} Let $\theta>0$, and set $D={\rm mdim}_{\Sigma, {\rm M}}(X, \rho)$. Then there exist a nonempty finite subset $F$ of $G$, $\delta>0$ and $M>0$ such that for any $i\in {\mathbb N}$ with $i\ge M$, there exists $\xi\in (0, 1)^{[d_i]\times {\mathcal U}}$ such that for any $S\subseteq [d_i]\times {\mathcal U}$ with $|S|\ge (D+\theta)d_i$, one has $$ \xi|_S\not \in \Phi_{d_i}(Map(\rho, F, \delta, \sigma_i))|_S.$$ \end{lemma} \begin{proof} Denote by $C$ the maximum of the Lipschitz constants of the $f_U$ for $U\in {\mathcal U}$. Then $$ \|\Phi_d(\varphi)-\Phi_d(\psi)\|_\infty\le C \rho_\infty(\varphi, \psi)$$ for all $d\in {\mathbb N}$ and $\varphi, \psi\in X^{[d]}$.
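As an aside, the construction of the functions $g_U$ and $f_U$ in Lemma~\ref{L-Lipschitz partion of unity} is easy to check on a finite model. The following sketch is purely our illustration (the points, cover, and names are not from the paper): it builds $g_U$ and $f_U$ exactly as above over a finite subset of $[0,1]$ with a two-element cover.

```python
def dist_to_complement(x, points, U):
    """rho(x, X \\ U) within the finite model X = points."""
    comp = [y for y in points if y not in U]
    return min(abs(x - y) for y in comp)

def make_f(points, cover):
    """Return f(i, x) = f_{U_i}(x), built exactly as g_U and f_U above."""
    n = len(cover)
    def f(i, x):
        d = [dist_to_complement(x, points, U) for U in cover]
        g = d[i] / sum(d)            # g_U(x), so sum_U g_U(x) = 1
        return n * min(g, 1.0 / n)   # f_U(x) = |U| * min(g_U(x), 1/|U|)
    return f

points = [i / 10 for i in range(11)]        # finite model of X = [0, 1]
cover = [set(points[:7]), set(points[4:])]  # two overlapping "open" sets
f = make_f(points, cover)
for x in points:
    # some g_U(x) >= 1/|cover|, hence max_U f_U(x) = 1 (up to float rounding)
    assert max(f(0, x), f(1, x)) > 0.999
```

Since $\sum_U g_U(x)=1$, the largest $g_U(x)$ is at least $1/|{\mathcal U}|$, which forces $\max_U f_U(x)=1$; the assertion confirms this at every model point.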
Take $\varepsilon>0$ small enough, to be determined in a moment, satisfying $$ \frac{h^\varepsilon_{\Sigma, \infty}(\rho)}{|\log \varepsilon|}<D+\theta/3.$$ Then we can find a nonempty finite subset $F$ of $G$, $\delta>0$, and $M>0$ such that for every $i\in {\mathbb N}$ with $i>M$ we have $$ \frac{1}{d_i|\log \varepsilon|}\log N_\varepsilon(Map(\rho, F, \delta, \sigma_i), \rho_\infty)\le D+\theta/2, $$ that is, $$ N_\varepsilon(Map(\rho, F, \delta, \sigma_i), \rho_\infty)\le \varepsilon^{-(D+\theta/2)d_i}.$$ It follows that we can cover $\Phi_{d_i}(Map(\rho, F, \delta, \sigma_i))$ by $\varepsilon^{-(D+\theta/2)d_i}$ open balls in the $\|\cdot \|_\infty$ norm with radius $\varepsilon C$. Denote by $\mu$ the Lebesgue measure on $[0, 1]^{[d_i]\times {\mathcal U}}$. For each $S\subseteq [d_i]\times {\mathcal U}$, the set $\Phi_{d_i}(Map(\rho, F, \delta, \sigma_i))|_S\subseteq [0, 1]^S$ can be covered by $\varepsilon^{-(D+\theta/2)d_i}$ open balls in the $\|\cdot \|_\infty$ norm with radius $\varepsilon C$, and hence has Lebesgue measure at most $\varepsilon^{-(D+\theta/2)d_i}(2\varepsilon C)^{|S|}$. Thus $$\mu(\{\xi\in [0, 1]^{[d_i]\times {\mathcal U}}: \xi|_S\in \Phi_{d_i}(Map(\rho, F, \delta, \sigma_i))|_S\})\le \varepsilon^{-(D+\theta/2)d_i}(2\varepsilon C)^{|S|}.$$ Therefore the set of $\xi\in [0,1]^{[d_i]\times {\mathcal U}}$ satisfying $\xi|_S\in \Phi_{d_i}(Map(\rho, F, \delta, \sigma_i))|_S$ for some $S\subseteq [d_i]\times {\mathcal U}$ with $|S|\ge (D+\theta)d_i$ has Lebesgue measure at most \begin{align*} 2^{|[d_i]\times {\mathcal U}|}\varepsilon^{-(D+\theta/2)d_i}(2\varepsilon C)^{(D+\theta)d_i}= (2^{|{\mathcal U}|}\varepsilon^{\theta/2}(2C)^{D+\theta})^{d_i}<1 \end{align*} provided that $\varepsilon$ is taken small enough. Thus we can find $\xi$ with the desired property.
\end{proof} \begin{lemma} \label{L-to lower dimension} Let $F$ be a nonempty finite subset of $G$ and $\delta>0$. Let $\sigma$ be a map from $G$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$. Let $\Psi$ be a continuous map from $\Phi_d(Map(\rho, F, \delta, \sigma))$ to $[0, 1]^{[d]\times {\mathcal U}}$. Suppose that for any $\varphi \in Map(\rho, F, \delta, \sigma)$ and any $(a, U)\in [d]\times {\mathcal U}$, if $\Phi_d(\varphi)_{a, U}=0$ or $1$, then $\Psi(\Phi_d(\varphi))_{a, U}=0$ or $1$ accordingly. Then $\Psi\circ \Phi_d$ is ${\mathcal U}^d|_{Map(\rho, F, \delta, \sigma)}$-compatible. \end{lemma} \begin{proof} Let $\varphi \in Map(\rho, F, \delta, \sigma)$ and $a\in [d]$. By the choice of $\{f_U\}_{U\in {\mathcal U}}$ there exists some $U\in {\mathcal U}$ such that $\Phi_d(\varphi)_{a, U}=f_U(\varphi(a))=1$. By our assumption on $\Psi$ we then have $\Psi(\Phi_d(\varphi))_{a, U}=1$. Let $\zeta\in \Psi(\Phi_d(Map(\rho, F, \delta, \sigma)))$ and $a\in [d]$. It suffices to show that there exists some $U\in {\mathcal U}$ such that one has $\varphi(a)\in U$ for every $\varphi \in Map(\rho, F, \delta, \sigma)$ satisfying $\Psi(\Phi_d(\varphi))=\zeta$. By the above paragraph we can find some $U\in {\mathcal U}$ such that $\zeta_{a, U}=1$. Let $\varphi\in Map(\rho, F, \delta, \sigma)$ with $\Psi(\Phi_d(\varphi))=\zeta$. By our assumption on $\Psi$ we have $f_U(\varphi(a))=\Phi_d(\varphi)_{a, U}>0$. Since $f_U$ vanishes on $X\setminus U$, we get $\varphi(a)\in U$. \end{proof} \begin{lemma} \label{L-maps down} Let $W$ be a finite set and $Z$ a closed subset of $[0, 1]^W$. Let $m\in {\mathbb N}$ and $\xi\in (0, 1)^W$ be such that for every $S\subseteq W$ with $|S|\ge m$ one has $\xi|_S\not \in Z|_S$. Then there exists a continuous map $\Psi$ from $Z$ into $[0, 1]^W$ such that $\dim(\Psi(Z))\le m$ and, for any $z \in Z$ and any $w\in W$, if $z_w=0$ or $1$, then $\Psi(z)_w=0$ or $1$ accordingly.
\end{lemma} \begin{proof} This can be proved as in the proof of \cite[Theorem 4.2]{LW}; we give a slightly different argument. For each $S\subseteq W$, denote by $Y_S$ the subset of $[0, 1]^W$ consisting of the elements whose coordinate at every $w\in W\setminus S$ is either $0$ or $1$. Then $Y_S$ is a closed subset of $[0, 1]^W$ of dimension $|S|$. Set $Y=\bigcup_{|S|\le m}Y_S$. Since a union of finitely many closed subsets of dimension at most $m$ has dimension at most $m$ \cite[page 30 and Theorem V.8]{HW}, $Y$ has dimension at most $m$. As $Z$ is compact, by the assumption on $\xi$ we can find $\delta>0$ such that $0<\xi_w-\delta<\xi_w+\delta<1$ for every $w\in W$ and the set of $w\in W$ satisfying $\xi_w-\delta\le z_w\le \xi_w+\delta$ has cardinality at most $m$ for every $z\in Z$. For each $w\in W$, take a continuous map $f_w:[0, 1]\rightarrow [0, 1]$ sending $[0, \xi_w-\delta]$ to $0$ and $[\xi_w+\delta, 1]$ to $1$. Define a continuous map $\Psi$ from $[0, 1]^W$ into itself by setting the coordinate of $\Psi(x)$ at $w\in W$ to be $f_w(x_w)$. For any $x\in [0, 1]^W$ and $w\in W$, if $x_w=0$ or $1$, then clearly $\Psi(x)_w=0$ or $1$ accordingly. From the choice of $\delta$ and the $f_w$ it is also clear that $\Psi(Z)\subseteq Y$. Thus $\dim(\Psi(Z))\le \dim(Y)\le m$. \end{proof} We are ready to prove Theorem~\ref{T-top vs metric}. \begin{proof}[Proof of Theorem~\ref{T-top vs metric}] Let ${\mathcal U}$ be a finite open cover of $X$, let $\rho$ be a compatible metric on $X$, and let $\theta>0$. Then we have the functions $f_U$ as in Lemma~\ref{L-Lipschitz partion of unity}. For each $i\in {\mathbb N}$, define a continuous map $\Phi_{d_i}: X^{[d_i]}\rightarrow [0, 1]^{[d_i]\times {\mathcal U}}$ by $\Phi_{d_i}(\varphi)_{a, U}=f_U(\varphi(a))$ for $a\in [d_i]$ and $U\in {\mathcal U}$. Take $F, \delta, M, i$ and $\xi$ as in Lemma~\ref{L-small image}.
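The coordinate maps $f_w$ in the proof of Lemma~\ref{L-maps down} can be realized concretely as piecewise-linear functions. The following sketch is our illustration only (with sample values $\xi_w=0.5$ and $\delta=0.1$, which are ours): the map fixes $0$ and $1$, pushes every coordinate outside the window $(\xi_w-\delta,\xi_w+\delta)$ to $\{0,1\}$, and so leaves at most $m$ coordinates of $\Psi(z)$ in the open interval $(0,1)$.

```python
def push(xi, delta):
    """Piecewise-linear f_w: [0,1] -> [0,1] sending [0, xi - delta] to 0
    and [xi + delta, 1] to 1, linear in between."""
    def f(t):
        if t <= xi - delta:
            return 0.0
        if t >= xi + delta:
            return 1.0
        return (t - (xi - delta)) / (2 * delta)
    return f

f = push(0.5, 0.1)
assert f(0.0) == 0.0 and f(1.0) == 1.0   # 0 and 1 are fixed, as the lemma requires
assert f(0.3) == 0.0 and f(0.7) == 1.0   # coordinates far from xi land in {0, 1}
assert 0.0 < f(0.5) < 1.0                # only the small window stays interior
```

Applying such a map in each coordinate sends $Z$ into the set $Y$ of points with at most $m$ non-extreme coordinates, which has dimension at most $m$.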
By Lemmas~\ref{L-maps down} and \ref{L-to lower dimension} we can find a continuous map $\Psi: \Phi_{d_i}(Map(\rho, F, \delta, \sigma_i))\rightarrow [0, 1]^{[d_i]\times {\mathcal U}}$ such that $\Psi\circ \Phi_{d_i}$ is ${\mathcal U}^{d_i}|_{Map(\rho, F, \delta, \sigma_i)}$-compatible and $\dim(\Psi(\Phi_{d_i}(Map(\rho, F, \delta, \sigma_i))))\le ({\rm mdim}_{\Sigma, {\rm M}}(X, \rho)+\theta)d_i$. From Lemma~\ref{L-compatible map} we get ${\mathcal D}({\mathcal U}, \rho, F, \delta, \sigma_i)\le ({\rm mdim}_{\Sigma, {\rm M}}(X, \rho)+\theta)d_i$. It follows that $$ {\mathcal D}_\Sigma({\mathcal U})={\mathcal D}_\Sigma({\mathcal U}, \rho)\le {\mathcal D}_\Sigma({\mathcal U}, \rho, F, \delta)\le {\rm mdim}_{\Sigma, {\rm M}}(X, \rho)+\theta.$$ Since ${\mathcal U}$ is an arbitrary finite open cover of $X$ and $\theta$ is an arbitrary positive number, we get ${\rm mdim}_\Sigma(X)\le {\rm mdim}_{\Sigma, {\rm M}}(X, \rho)$. As $\rho$ is an arbitrary compatible metric on $X$, we get ${\rm mdim}_\Sigma(X)\le {\rm mdim}_{\Sigma, {\rm M}}(X)$. \end{proof} The Pontrjagin-Schnirelmann theorem \cite{PS} \cite[page 80]{Nagata} says that for any compact metrizable space $Z$, the dimension $\dim Z$ of $Z$ is equal to the minimal value of $\underline{\dim}_B(Z, \rho)$ as $\rho$ ranges over the compatible metrics on $Z$. Since ${\rm mdim}_\Sigma(X)$ and ${\rm mdim}_{\Sigma, {\rm M}}(X, \rho)$ are dynamical analogues of $\dim(X)$ and $\underline{\dim}_B(X, \rho)$ respectively, it is natural to ask: \begin{question} \label{Q-top vs metric} Let a countably infinite sofic group $G$ act continuously on a compact metrizable space $X$.
Does there exist a compatible metric $\rho$ on $X$ satisfying $$ {\rm mdim}_\Sigma(X)={\rm mdim}_{\Sigma, {\rm M}}(X, \rho)?$$ \end{question} Question~\ref{Q-top vs metric} was answered affirmatively by Lindenstrauss in the case where $G={\mathbb Z}$ and $X$ has a nontrivial minimal factor \cite[Theorem 4.3]{Lindenstrauss}. \section{Bernoulli Shifts} \label{S-shifts} In this section we discuss the sofic mean dimension of Bernoulli shifts and their factors, proving Theorems~\ref{T-cube} and \ref{T-factor has positive dim}. Throughout this section $G$ will be a countable sofic group, and $\Sigma$ a fixed sofic approximation sequence of $G$. For any compact metrizable space $Z$, we have the left shift action $\alpha$ of $G$ on $Z^G$ given by $(sx)_t=x_{s^{-1}t}$ for all $x\in Z^G$ and $s, t\in G$. \begin{theorem} \label{T-cube} Let $Z$ be a compact metrizable space, and consider the left shift action of a countable sofic group $G$ on $X=Z^G$. Then $$ {\rm mdim}_\Sigma(X)\le {\rm mdim}_{\Sigma, {\rm M}}(X)\le \dim(Z).$$ If furthermore $Z$ contains a copy of $[0, 1]^n$ for every natural number $n\le \dim(Z)$ (for example, $Z$ could be any polyhedron or the Hilbert cube), then $${\rm mdim}_\Sigma(X)={\rm mdim}_{\Sigma, {\rm M}}(X)=\dim(Z).$$ \end{theorem} The amenable group case of Theorem~\ref{T-cube} was proved by Lindenstrauss and Weiss \cite[Propositions 3.1 and 3.3]{LW} and by Coornaert and Krieger \cite[Corollaries 4.2 and 5.5]{CK}. \begin{remark} \label{R-how to construct map} In general it is not easy to construct elements of $Map(\rho, F, \delta, \sigma)$ for sofic non-amenable groups. Theorem~\ref{T-cube} implies that there are plenty of such elements for Bernoulli shifts.
When $G$ is residually finite (for example, a free group) and $\Sigma$ comes from a sequence of finite quotient groups of $G$, each point of $X$ that is periodic with period the corresponding finite-index normal subgroup of $G$ gives rise to an element of $Map(\rho, F, \delta, \sigma)$ \cite[Theorem 7.1]{KL11} \cite[Theorem 1.3]{BL}. \end{remark} Recall the lower box dimension defined at the beginning of Section~\ref{S-sofic metric mean dim}. We need the following lemma. \begin{lemma} \label{L-shift upper bound} Let $Z$ be a compact metrizable space, and consider the left shift action $\alpha$ of $G$ on $X=Z^G$. Let $\rho$ be a compatible metric on $Z$, and define $\rho'$ by $\rho'(x, y)=\rho(x_{e_G}, y_{e_G})$ for $x, y\in X$. Then $\rho'$ is a dynamically generating continuous pseudometric on $X$. Furthermore, for any $\varepsilon>0$ one has $$ \log N_\varepsilon(Z, \rho)\le h_{\Sigma ,\infty}^\varepsilon (\rho' )\le \log N_{\varepsilon/2}(Z, \rho).$$ In particular, $ {\rm mdim}_{\Sigma, {\rm M}}(X, \rho')=\underline{\dim}_B(Z, \rho)$. \end{lemma} \begin{proof} Clearly $\rho'$ is a dynamically generating continuous pseudometric on $X$. Let $\varepsilon>0$. We show first that $h_{\Sigma ,\infty}^\varepsilon (\rho' )\ge \log N_\varepsilon(Z, \rho)$. It suffices to show $h_{\Sigma ,\infty}^\varepsilon (\rho' ,F, \delta ) \ge \log N_\varepsilon(Z, \rho)$ for every nonempty finite subset $F$ of $G$ and every $\delta>0$. Take a $(\rho, \varepsilon)$-separated subset $Y$ of $Z$ with $|Y|=N_\varepsilon(Z, \rho)$.
Let $\sigma$ be a map from $G$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$ which is a good enough sofic approximation for $G$ that $\sqrt{1-|{\mathcal Q}|/d}\cdot {\rm diam}(Z, \rho)\le \delta $, where $${\mathcal Q}=\{a\in [d]: \sigma_{e_G}\circ\sigma_s(a)=\sigma_s(a) \mbox{ for all } s\in F\}.$$ For each map $f: [d]\rightarrow Y$, we define a map $\varphi_f: [d]\rightarrow X$ by $$ (\varphi_f(a))_t=f(\sigma_{t^{-1}}(a))$$ for all $a\in [d]$ and $t\in G$. Let $s\in F$. For any $a\in {\mathcal Q}$, one has \begin{align*} (\alpha_s\circ \varphi_f(a))_{e_G}=(\varphi_f(a))_{s^{-1}}=f(\sigma_{s}(a))=f(\sigma_{e_G}\circ \sigma_s(a))=(\varphi_f\circ \sigma_s(a))_{e_G}, \end{align*} and hence $\rho'(\alpha_s\circ \varphi_f(a), \varphi_f\circ \sigma_s(a))=0$. Thus \begin{align*} \rho'_2(\alpha_s\circ \varphi_f, \varphi_f\circ \sigma_s)\le \sqrt{(1-|{\mathcal Q}|/d)({\rm diam}(Z, \rho))^2}\le \delta. \end{align*} Therefore $\varphi_f\in Map (\rho' ,F,\delta ,\sigma )$. For any distinct maps $f, g: [d]\rightarrow Y$, say with $f(a)\neq g(a)$ for some $a\in [d]$, one has \begin{align*} \rho'_\infty(\varphi_f, \varphi_g)&\ge \rho'(\varphi_f(\sigma_{e_G}^{-1}(a)), \varphi_g(\sigma_{e_G}^{-1}(a)))\\ &=\rho((\varphi_f(\sigma_{e_G}^{-1}(a)))_{e_G}, (\varphi_g(\sigma_{e_G}^{-1}(a)))_{e_G})\\ &=\rho(f(a), g(a))\ge \varepsilon. \end{align*} Thus the set $\{\varphi_f: f\in Y^{[d]}\}$ is $(\rho'_\infty, \varepsilon)$-separated. Therefore $$N_\varepsilon(Map (\rho' ,F,\delta ,\sigma ), \rho'_\infty)\ge |Y|^d=(N_\varepsilon(Z, \rho))^d.$$ It follows that $h_{\Sigma ,\infty}^\varepsilon (\rho' ,F, \delta ) \ge \log N_\varepsilon(Z, \rho)$ as desired. Next we show that $h_{\Sigma ,\infty}^\varepsilon (\rho' )\le \log N_{\varepsilon/2}(Z, \rho)$. It suffices to show $h_{\Sigma ,\infty}^\varepsilon (\rho' ,\{e_G\}, 1 )\le \log N_{\varepsilon/2}(Z, \rho)$.
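To get a concrete feel for the maps $\varphi_f$ just constructed, here is a minimal sketch of our own (not from the paper) with $G={\mathbb Z}$ and the sofic approximation $\sigma_n(a)=a+n \bmod d$, which is an honest homomorphism onto ${\mathbb Z}/d{\mathbb Z}$; in this case ${\mathcal Q}=[d]$ and the almost-equivariance of $\varphi_f$ becomes exact equivariance on every coordinate.

```python
d = 12                      # size of the sofic approximation

def sigma(n):
    """sigma_n in Sym(d); for G = Z this can be taken multiplicative."""
    return lambda a: (a + n) % d

def phi(f, a, t):
    """(phi_f(a))_t = f(sigma_{t^{-1}}(a)) = f(sigma_{-t}(a))."""
    return f[sigma(-t)(a)]

f = [a % 3 for a in range(d)]   # an arbitrary map f: [d] -> Y = {0, 1, 2}

# (alpha_s phi_f(a))_t = (phi_f(a))_{s^{-1} t} = (phi_f(sigma_s(a)))_t exactly,
# and the e_G-coordinate of phi_f(a) is f(a), so distinct f give separated phi_f.
for s in range(-4, 5):
    for a in range(d):
        assert phi(f, a, 0 - s) == phi(f, sigma(s)(a), 0)
        assert phi(f, a, 0) == f[a]
```

For a general sofic group the identity only holds on the set ${\mathcal Q}$ of good indices, which is why the $\rho'_2$-estimate above carries the error term $\sqrt{1-|{\mathcal Q}|/d}$.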
Take a maximal $(\rho, \varepsilon/2)$-separated subset $Y$ of $Z$; then $|Y|\le N_{\varepsilon/2}(Z, \rho)$. Let $\sigma$ be a map from $G$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$. Let ${\mathscr E}$ be a $(\rho'_\infty, \varepsilon)$-separated subset of $Map (\rho' ,\{e_G\},1 ,\sigma )$ with $|{\mathscr E}|=N_\varepsilon(Map (\rho' ,\{e_G\},1 ,\sigma ), \rho'_\infty)$. For each $\varphi\in {\mathscr E}$, we find some map $f_\varphi: [d]\rightarrow Y$ such that $$ \max_{a\in [d]}\rho((\varphi(a))_{e_G}, f_\varphi(a))<\varepsilon/2.$$ If $\varphi, \psi\in {\mathscr E}$ and $f_\varphi=f_\psi$, then $$ \rho'_\infty(\varphi, \psi)=\max_{a\in [d]}\rho((\varphi(a))_{e_G}, (\psi(a))_{e_G})<\varepsilon,$$ and hence $\varphi=\psi$, since ${\mathscr E}$ is $(\rho'_\infty, \varepsilon)$-separated. Therefore $$ N_\varepsilon(Map (\rho' ,\{e_G\},1 ,\sigma ), \rho'_\infty)=|{\mathscr E}|\le |Y|^d\le (N_{\varepsilon/2}(Z, \rho))^d.$$ It follows that $h_{\Sigma ,\infty}^\varepsilon (\rho' ,\{e_G\}, 1 )\le \log N_{\varepsilon/2}(Z, \rho)$ as desired. \end{proof} We also need the following version of Lebesgue's covering theorem. \begin{lemma}\cite[Lemma 3.2]{LW} \cite[Theorem IV.2]{HW} \label{L-cover of cube} Let $W$ be a finite set. Let ${\mathcal U}$ be a finite open cover of $[0, 1]^W$ such that no element of ${\mathcal U}$ intersects two opposing sides of $[0, 1]^W$. Then $$ {\mathcal D}({\mathcal U})\ge |W|.$$ \end{lemma} We are ready to prove Theorem~\ref{T-cube}. \begin{proof}[Proof of Theorem~\ref{T-cube}] By Theorem~\ref{T-top vs metric}, Lemma~\ref{L-shift upper bound}, Proposition~\ref{P-allow pseudometric} and the Pontrjagin-Schnirelmann theorem recalled at the end of Section~\ref{S-comparison}, we have ${\rm mdim}_\Sigma(X)\le {\rm mdim}_{\Sigma, {\rm M}}(X)\le \dim(Z)$.
Now assume that $Z$ contains a copy of $[0, 1]^n$ for every natural number $n\le \dim(Z)$. It suffices to show ${\rm mdim}_\Sigma(X)\ge \dim(Z)$, and in turn it suffices to show ${\rm mdim}_\Sigma(X)\ge n$ for every natural number $n\le \dim(Z)$. Since $([0, 1]^n)^G$ is a closed $G$-invariant subset of $Z^G$, by Proposition~\ref{P-subspace metric} one has ${\rm mdim}_\Sigma(X)\ge {\rm mdim}_\Sigma(([0, 1]^n)^G)$. Therefore it suffices to show ${\rm mdim}_\Sigma(([0, 1]^n)^G)\ge n$. Take a finite open cover ${\mathcal U}$ of $[0, 1]^n$ such that no element of ${\mathcal U}$ intersects two opposing sides of $[0, 1]^n$. For each $d\in {\mathbb N}$, note that no element of ${\mathcal U}^d$ intersects two opposing sides of $([0, 1]^n)^{[d]}=[0, 1]^{dn}$, and hence by Lemma~\ref{L-cover of cube} one has ${\mathcal D}({\mathcal U}^d)\ge dn$. Denote by $\pi$ the map $([0,1]^n)^G\rightarrow [0, 1]^n$ sending $x$ to $x_{e_G}$. Then $\tilde{{\mathcal U}}:=\pi^{-1}({\mathcal U})$ is a finite open cover of $([0,1]^n)^G$. Let $\rho$ be a compatible metric on $([0,1]^n)^G$. It suffices to show ${\mathcal D}_\Sigma(\tilde{{\mathcal U}}, \rho, F, \delta)\ge n$ for every nonempty finite subset $F$ of $G$ and every $\delta>0$. Take a nonempty finite subset $K$ of $G$ such that if $x, y\in ([0,1]^n)^G$ agree on $K$, then $\rho(x, y)<\delta/2$.
Let $\sigma$ be a map from $G$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$ which is a good enough sofic approximation for $G$ that $\delta^2/4+(1-|{\mathcal Q}|/d)(|F|+1)({\rm diam}(([0,1]^n)^G, \rho))^2\le \delta^2$, where $${\mathcal Q}=\{a\in [d]: \sigma_{t^{-1}}\circ\sigma_s(a)=\sigma_{t^{-1}s}(a) \mbox{ for all } s\in F, t\in K, \mbox{ and } \sigma_{e_G}(a)=a\}.$$ For each map $f: [d]\rightarrow [0, 1]^n$, we define a map $\varphi_f: [d]\rightarrow ([0, 1]^n)^G$ by \begin{align} \label{E-cube} (\varphi_f(a))_t=f(\sigma_{t^{-1}}(a)) \end{align} for all $a\in [d]$ and $t\in G\setminus \{e_G\}$, and \begin{align} \label{E-cube2} (\varphi_f(a))_{e_G}=f(a) \end{align} for all $a\in [d]$. Note that \eqref{E-cube} holds for all $a\in {\mathcal Q}$ and $t\in G$. Set ${\mathcal R}={\mathcal Q}\cap \bigcap_{s\in F}\sigma_s^{-1}({\mathcal Q})$. For any $a\in {\mathcal R}$, $s\in F$, and $t\in K$, since $a, \sigma_s(a)\in {\mathcal Q}$, one has \begin{align*} (\alpha_s\circ \varphi_f(a))_t=(\varphi_f(a))_{s^{-1}t}=f(\sigma_{t^{-1}s}(a))=f(\sigma_{t^{-1}}\circ \sigma_s(a))=(\varphi_f\circ \sigma_s(a))_t, \end{align*} and hence $\rho(\alpha_s\circ \varphi_f(a), \varphi_f\circ \sigma_s(a))<\delta/2$ by the choice of $K$. Thus \begin{align*} \rho_2(\alpha_s\circ \varphi_f, \varphi_f\circ \sigma_s)&\le \sqrt{\delta^2/4+(1-|{\mathcal R}|/d)({\rm diam}(([0, 1]^n)^G, \rho))^2}\\ &\le \sqrt{\delta^2/4+(1-|{\mathcal Q}|/d)(|F|+1)({\rm diam}(([0, 1]^n)^G, \rho))^2} \le \delta. \end{align*} Therefore $\varphi_f\in Map (\rho ,F,\delta ,\sigma )$. The map $\Phi: ([0, 1]^n)^{[d]}\rightarrow Map (\rho ,F,\delta ,\sigma )$ sending $f$ to $\varphi_f$ is clearly continuous. Note that $\Phi^{-1}(\tilde{{\mathcal U}}^d|_{Map(\rho, F, \delta, \sigma)})={\mathcal U}^d$ because of equation \eqref{E-cube2}.
Therefore ${\mathcal D}(\tilde{{\mathcal U}}, \rho, F, \delta,\sigma)\ge {\mathcal D}({\mathcal U}^d)\ge dn$. It follows that ${\mathcal D}_\Sigma(\tilde{{\mathcal U}}, \rho, F, \delta)\ge n$ as desired. \end{proof} \begin{theorem} \label{T-factor has positive dim} Let $Z$ be a path-connected compact metrizable space, and consider the left shift action $\alpha$ of a countable sofic group $G$ on $X=Z^G$. For any nontrivial factor $Y$ of $X$, one has $$ {\rm mdim}_\Sigma(Y)>0.$$ \end{theorem} The amenable group case of Theorem~\ref{T-factor has positive dim} was proved by Lindenstrauss and Weiss \cite[Theorem 3.6]{LW}. We adapt their argument to our situation. First we prove a lemma: \begin{lemma} \label{L-factor} Let $G$ act continuously on a compact metrizable space $X$. Let $Y$ be a factor of $X$ with factor map $\pi: X\rightarrow Y$, and let ${\mathcal U}$ be a finite open cover of $Y$. Then $${\mathcal D}_\Sigma(\pi^{-1}({\mathcal U}))\le {\mathcal D}_\Sigma({\mathcal U}).$$ \end{lemma} \begin{proof} Let $\rho$ and $\rho'$ be compatible metrics on $X$ and $Y$ respectively. Replacing $\rho$ by $\rho+\pi^{-1}(\rho')$ if necessary, we may assume that $\rho(x, y)\ge \rho'(\pi(x), \pi(y))$ for all $x, y\in X$. Let $F$ be a nonempty finite subset of $G$ and $\delta>0$. Let $\sigma$ be a map from $G$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$. For any $\varphi \in Map(\rho, F, \delta, \sigma)$, one has $\pi\circ \varphi \in Map(\rho', F, \delta, \sigma)$. Thus we have a continuous map $\Phi: Map(\rho, F, \delta, \sigma)\rightarrow Map(\rho', F, \delta, \sigma)$ sending $\varphi$ to $\pi\circ \varphi$. Furthermore, $\Phi^{-1}({\mathcal U}^d|_{Map(\rho', F, \delta, \sigma)})=(\pi^{-1}({\mathcal U}))^d|_{Map(\rho, F, \delta, \sigma)}$.
Thus ${\mathcal D}(\pi^{-1}({\mathcal U}), \rho, F, \delta, \sigma)\le {\mathcal D}({\mathcal U}, \rho', F, \delta, \sigma)$. It follows that ${\mathcal D}_\Sigma(\pi^{-1}({\mathcal U}), \rho, F, \delta)\le {\mathcal D}_\Sigma({\mathcal U}, \rho', F, \delta)$. Since $F$ is an arbitrary nonempty finite subset of $G$ and $\delta$ an arbitrary positive number, we get $${\mathcal D}_\Sigma(\pi^{-1}({\mathcal U}))={\mathcal D}_\Sigma(\pi^{-1}({\mathcal U}), \rho)\le {\mathcal D}_\Sigma({\mathcal U}, \rho')={\mathcal D}_\Sigma({\mathcal U}).$$ \end{proof} We are ready to prove Theorem~\ref{T-factor has positive dim}. \begin{proof}[Proof of Theorem~\ref{T-factor has positive dim}] Denote by $\pi$ the factor map $X\rightarrow Y$. Let ${\mathcal U}=\{U, V\}$ be an open cover of $Y$ such that neither $U$ nor $V$ is dense in $Y$. By Lemma~\ref{L-factor} it suffices to show that ${\mathcal D}_\Sigma(\pi^{-1}({\mathcal U}))>0$. Take compatible metrics $\rho$ and $\rho'$ on $Z$ and $X$ respectively. Note that neither $\pi^{-1}(U)$ nor $\pi^{-1}(V)$ is dense in $X$. Take $x_U\in X\setminus \overline{\pi^{-1}(U)}$ and $x_V\in X\setminus \overline{\pi^{-1}(V)}$. Then there exists a finite symmetric subset $K$ of $G$ containing $e_G$ such that if $x\in X$ coincides with $x_U$ (resp.\ $x_V$) on $K$, then $x\not \in \pi^{-1}(U)$ (resp.\ $x\not \in \pi^{-1}(V)$). Since $Z$ is path-connected, for each $s\in K$ we can take a continuous map $\gamma_s: [0, 1]\rightarrow Z$ such that $\gamma_s(0)=(x_U)_s$ and $\gamma_s(1)=(x_V)_s$. Now it suffices to show ${\mathcal D}_\Sigma(\pi^{-1}({\mathcal U}), \rho', F, \delta)\ge 1/(2|K^2|)$ for every finite subset $F$ of $G$ containing $K$ and every $\delta>0$.
Take a finite symmetric subset $K_1$ of $G$ such that if two points $x$ and $y$ of $X$ coincide on $K_1$, then $\rho'(x, y)<\delta/2$. Let $\sigma$ be a map from $G$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$ which is a good enough sofic approximation for $G$ that $\sqrt{\delta^2/4+(1-|{\mathcal Q}|/d)({\rm diam}(X, \rho'))^2}\le \delta$ and $|{\mathcal Q}|/d\ge 1/2$, where \begin{align*} {\mathcal Q}=\{a\in [d]: \sigma_s\sigma_t(a)&=\sigma_{st}(a) \mbox{ for all } s, t\in F\cup K_1, \mbox{ and } \sigma_{e_G}(a)=a, \\ \mbox{ and } \sigma_s(a)&\neq \sigma_t(a) \mbox{ for all distinct } s, t\in K\}. \end{align*} Take a maximal subset ${\mathcal W}$ of ${\mathcal Q}$ subject to the condition that the sets $\sigma(K)a$ for $a\in {\mathcal W}$ are pairwise disjoint. Then ${\mathcal Q}\subseteq \sigma(K^2){\mathcal W}$, and thus $|{\mathcal W}|/d\ge |{\mathcal Q}|/(|K^2|d)\ge 1/(2|K^2|)$. Now we define a map $\Phi: [0, 1]^{{\mathcal W}}\rightarrow Map (\rho' ,F,\delta ,\sigma )$. Fix $z_0\in Z$. Let $f\in [0, 1]^{{\mathcal W}}$. We define $\tilde{f}\in Z^{[d]}$ by $$ \tilde{f}_{\sigma(s)a}=\gamma_{s^{-1}}(f(a))$$ for $a\in {\mathcal W}$, $s\in K$, and $$ \tilde{f}_b=z_0$$ for $b\in [d]\setminus \sigma(K){\mathcal W}$. Then we define $\varphi_{\tilde{f}}\in X^{[d]}$ by $$ (\varphi_{\tilde{f}}(a))_s=\tilde{f}(\sigma_{s^{-1}}(a))$$ for $a\in [d]$ and $s\in G$. Let $s\in F$. For any $a\in {\mathcal Q}$ and $t\in K_1$, one has \begin{align*} (\alpha_s\circ \varphi_{\tilde{f}}(a))_t=(\varphi_{\tilde{f}}(a))_{s^{-1}t}=\tilde{f}(\sigma_{t^{-1}s}(a))=\tilde{f}(\sigma_{t^{-1}}\circ \sigma_s(a))=(\varphi_{\tilde{f}}\circ \sigma_s(a))_t, \end{align*} and hence $\rho'(\alpha_s\circ \varphi_{\tilde{f}}(a), \varphi_{\tilde{f}}\circ \sigma_s(a))<\delta/2$ by the choice of $K_1$.
Thus \begin{align*} \rho'_2(\alpha_s\circ \varphi_{\tilde{f}}, \varphi_{\tilde{f}}\circ \sigma_s)\le \sqrt{\delta^2/4+(1-|{\mathcal Q}|/d)({\rm diam}(X, \rho'))^2}\le \delta. \end{align*} Therefore $\varphi_{\tilde{f}}\in Map (\rho' ,F,\delta ,\sigma )$. Set $\Phi(f)=\varphi_{\tilde{f}}$; clearly $\Phi$ is continuous. Set ${\mathcal V}=\Phi^{-1}((\pi^{-1}({\mathcal U}))^d|_{Map (\rho' ,F,\delta ,\sigma )})$. Let $f\in [0, 1]^{{\mathcal W}}$ and $a\in {\mathcal W}$. If $f(a)=0$, then $(\varphi_{\tilde{f}}(a))_s=\tilde{f}_{\sigma(s^{-1})a}=\gamma_s(0)=(x_U)_s$ for all $s\in K$, and hence $\varphi_{\tilde{f}}(a)\not \in \pi^{-1}(U)$ by the choice of $K$. Similarly, if $f(a)=1$, then $\varphi_{\tilde{f}}(a)\not \in \pi^{-1}(V)$. Thus no element of ${\mathcal V}$ intersects two opposing sides of $[0, 1]^{{\mathcal W}}$. By Lemma~\ref{L-cover of cube} we conclude that ${\mathcal D}(\pi^{-1}({\mathcal U}), \rho', F, \delta, \sigma)\ge {\mathcal D}({\mathcal V})\ge |{\mathcal W}|\ge d/(2|K^2|)$. It follows that ${\mathcal D}_\Sigma(\pi^{-1}({\mathcal U}), \rho', F, \delta)\ge 1/(2|K^2|)$ as desired. \end{proof} \section{Small-boundary Property} In this section we discuss the relation between the small-boundary property and nonpositive sofic mean topological dimension. We begin by recalling the definitions of compact metrizable spaces of inductive dimension zero and of actions with the small-boundary property. A compact metrizable space $Y$ is said to have {\it inductive dimension $0$} if for every $y\in Y$ and every neighborhood $U$ of $y$ there exists a neighborhood $V$ of $y$ contained in $U$ such that the boundary $\partial V$ of $V$ is empty \cite[Definition II.1]{HW}. A compact metrizable space has inductive dimension $0$ if and only if it has covering dimension $0$ \cite[Theorem V.8]{HW}.
\begin{definition} \label{D-small} Let a countable group $\Gamma$ act continuously on a compact metrizable space $X$. We denote by $M(X, \Gamma)$ the set of $\Gamma$-invariant Borel probability measures on $X$. We say that a closed subset $Z$ of $X$ is {\it small} if $\mu(Z)=0$ for all $\mu \in M(X, \Gamma)$; in particular, when $M(X, \Gamma)$ is empty, every closed subset of $X$ is small. We say that the action has the {\it small-boundary property} (SBP) if for every point $x\in X$ and every neighborhood $U$ of $x$, there is an open neighborhood $V\subseteq U$ of $x$ with small boundary. \end{definition} When $\Gamma$ is amenable, for any subset $Z$ of $X$ it is easy to check that the function $F\mapsto \max_{x\in X}\sum_{s\in F}1_Z(sx)$, defined on the set of nonempty finite subsets of $\Gamma$, satisfies the conditions of the Ornstein-Weiss lemma \cite[Theorem 6.1]{LW}. Thus $\frac{1}{|F|}\max_{x\in X}\sum_{s\in F}1_Z(sx)$ converges to a limit as $F$ becomes more and more left invariant. Shub and Weiss defined $Z$ to be {\it small} if this limit is $0$ \cite{SW}. It is proved on page 538 of \cite{SW} that when $\Gamma$ is amenable and $Z$ is closed, the definition of Shub and Weiss coincides with Definition~\ref{D-small}. The notion of the SBP was introduced in \cite{Lindenstrauss} and \cite{LW}. If $X$ has fewer than $2^{\aleph_0}$ ergodic $\Gamma$-invariant Borel probability measures, then the action has the SBP \cite{SW} \cite[page 18]{LW}. When $\Gamma$ is amenable, Lindenstrauss and Weiss showed that actions with the SBP have zero mean topological dimension \cite[Theorem 5.4]{LW}. We extend their result to the sofic case: \begin{theorem} \label{T-SBP to zero mean dim} Let a countable sofic group $G$ act continuously on a compact metrizable space $X$, and suppose that the action has the SBP. Let $\Sigma$ be a sofic approximation sequence for $G$. Then ${\rm mdim}_\Sigma(X)\le 0$.
\end{theorem} Lindenstrauss showed that if a continuous action of ${\mathbb Z}$ on a compact metrizable space has a nontrivial minimal factor and has zero mean topological dimension, then it has the SBP \cite[Theorem 6.2]{Lindenstrauss}. Gutman showed that if a continuous action of ${\mathbb Z}^d$ for $d\in {\mathbb N}$ on a compact metrizable space has a free zero-dimensional factor and has zero mean topological dimension, then it has the SBP \cite[Theorem 1.11.1]{Gutman}. It is not clear in what generality the converse of Theorem~\ref{T-SBP to zero mean dim} is true (see the discussion on page $20$ of \cite{LW}). We shall adapt the argument of Lindenstrauss and Weiss to our case, replacing the set $\{(sx)_{s\in H}: x\in X\}\subseteq X^H$ in their proof, where $H$ is an approximately left invariant finite subset of $G$, with the set $Map(\rho, F, \delta, \sigma)$. First we need the following three lemmas. \begin{lemma} \label{L-measure} Let a countable group $\Gamma$ act continuously on a compact metrizable space $X$. Let $Z$ be a closed subset of $X$. Then for any $\varepsilon>0$ there is an open neighborhood $U$ of $Z$ with $\sup_{\mu \in M(X, \Gamma)}\mu(U)<\varepsilon+\sup_{\mu \in M(X, \Gamma)}\mu(Z)$. \end{lemma} \begin{proof} Suppose that this is not true. Denote by ${\mathcal S}$ the set of all open neighborhoods of $Z$, partially ordered by reversed inclusion. Then $M(X, \Gamma)$ is nonempty, and for each $U\in {\mathcal S}$ there is some $\mu_U\in M(X, \Gamma)$ with $\mu_U(U)\ge \varepsilon+\sup_{\mu \in M(X, \Gamma)}\mu(Z)$. Take a limit point $\nu$ of the net $\{\mu_U\}_{U\in {\mathcal S}}$ in the compact space $M(X, \Gamma)$. We claim that $\nu(Z)\ge \varepsilon+\sup_{\mu \in M(X, \Gamma)}\mu(Z)$, which is a contradiction.
To prove the claim, by the regularity of $\nu$, it suffices to show $\nu(V)\ge \varepsilon+\sup_{\mu \in M(X, \Gamma)}\mu(Z)$ for every $V\in {\mathcal S}$. Take $U'\in {\mathcal S}$ with $\overline{U'}\subseteq V$. Take a continuous function $f: X\rightarrow [0, 1]$ such that $f=1$ on $\overline{U'}$ and $f=0$ on $X\setminus V$. For any $U\in {\mathcal S}$ satisfying $U\subseteq U'$, one has $\int_X f\, d\mu_U\ge \mu_U(U)\ge \varepsilon+\sup_{\mu \in M(X, \Gamma)}\mu(Z)$. It follows that $\nu(V)\ge \int_X f\, d\nu\ge \varepsilon+\sup_{\mu \in M(X, \Gamma)}\mu(Z)$, as desired. \end{proof} \begin{lemma} \label{L-small to function} Consider a continuous action of a countable group $\Gamma$ on a compact metrizable space $X$ with the SBP. Then for any finite open cover ${\mathcal U}$ of $X$ and any $\varepsilon>0$ there is a partition of unity $\phi_j: X\rightarrow [0, 1]$ for $j=1, \dots, |{\mathcal U}|$ subordinate to ${\mathcal U}$ such that $\sup_{\mu\in M(X, \Gamma)}\mu(\overline{\bigcup_{j=1}^{|{\mathcal U}|}\phi_j^{-1}(0, 1)})<\varepsilon$. \end{lemma} \begin{proof} For each $x\in X$, take an open neighborhood $V_x$ of $x$ with small boundary such that $\overline{V_x}$ is contained in some element of ${\mathcal U}$. Since $X$ is compact, we can cover $X$ by finitely many such $V_x$'s. Note that if two subsets $Y_1$ and $Y_2$ of $X$ have small boundaries $\partial Y_1$ and $\partial Y_2$ respectively, then $\partial (Y_1\cup Y_2)\subseteq \partial Y_1\cup \partial Y_2$ is also small. Thus we may take the union of those chosen $V_x$'s whose closures are contained in one element of ${\mathcal U}$ to obtain an open cover ${\mathcal U}'$ of $X$ such that each element of ${\mathcal U}'$ has small boundary and there is a bijection $\varphi: {\mathcal U}'\rightarrow {\mathcal U}$ with $\overline{U'}\subseteq \varphi(U')$ for every $U'\in {\mathcal U}'$.
By Lemma~\ref{L-measure}, for each $U'\in {\mathcal U}'$ we can find an open neighborhood $U''$ of the boundary $\partial U'$ of $U'$ such that $\sup_{\mu \in M(X, \Gamma)}\mu(U'')<\varepsilon/|{\mathcal U}|$. Replacing $U''$ by $U''\cap \varphi(U')$ if necessary, we may assume that $U''\subseteq \varphi(U')$. Take an open neighborhood $U'''$ of $\partial U'$ such that $\overline{U'''}\subseteq U''$. List the elements of ${\mathcal U}'$ as $U_1', \dots, U_{|{\mathcal U}|}'$. For each $1\le j\le |{\mathcal U}|$, take a continuous function $\psi_j: X\rightarrow [0, 1]$ such that $\psi_j=1$ on $\overline{U_j'}$ and $\psi_j=0$ on $X\setminus (U_j'\cup U_j''')$. Now define $\phi_j$ for $1\le j\le |{\mathcal U}|$ inductively by $\phi_1=\psi_1$, and $\phi_j=\min(\psi_j, 1-\sum_{i=1}^{j-1}\phi_i)$ for $2\le j\le |{\mathcal U}|$. We claim that $\phi_1, \dots, \phi_{|{\mathcal U}|}$ is a partition of unity subordinate to ${\mathcal U}$. One has $0\le \phi_1=\psi_1\le 1$. For each $2\le j\le |{\mathcal U}|$, one also has $\phi_j\le 1-\sum_{i=1}^{j-1}\phi_i$. Thus $\sum_{i=1}^j\phi_i\le 1$ for every $1\le j\le |{\mathcal U}|$. In particular, one gets $\phi_j=\min(\psi_j, 1-\sum_{i=1}^{j-1}\phi_i)\ge 0$ for every $2\le j\le |{\mathcal U}|$. For any $x\in X$, if $x\in U'_1$ then $\phi_1(x)=\psi_1(x)=1$; otherwise $x\in U'_j$ for some $2\le j\le |{\mathcal U}|$, and thus $\phi_j(x)=\min(\psi_j(x), 1-\sum^{j-1}_{i=1}\phi_i(x))=1-\sum^{j-1}_{i=1}\phi_i(x)$. It follows that $\sum^{|{\mathcal U}|}_{j=1}\phi_j=1$. Note that for each $1\le j\le |{\mathcal U}|$, the support of $\phi_j$ is contained in $\overline{U'_j}\cup \overline{U'''_j}$, which in turn is contained in $\varphi(U'_j)$. This proves our claim.
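The inductive construction above ($\phi_1=\psi_1$ and $\phi_j=\min(\psi_j,\,1-\sum_{i<j}\phi_i)$) is elementary and easy to check numerically. The following sketch is our illustration only, not part of the proof: it builds ad hoc piecewise-linear bumps $\psi_j$ on $[0,1]$ whose ``cores'' cover the space, runs the induction, and verifies that the resulting $\phi_j$ form a partition of unity vanishing wherever the corresponding $\psi_j$ vanish.

```python
import numpy as np

# Illustration of the inductive construction phi_1 = psi_1,
# phi_j = min(psi_j, 1 - sum_{i<j} phi_i) from the proof above.
x = np.linspace(0.0, 1.0, 2001)  # sample grid standing in for the space X

def bump(lo, hi, width=0.05):
    """[0,1]-valued, equal to 1 on [lo, hi], supported in (lo-width, hi+width)."""
    return np.clip(np.minimum(x - lo + width, hi + width - x) / width, 0.0, 1.0)

# Three overlapping bumps whose cores [lo, hi] already cover [0, 1],
# mimicking psi_j = 1 on the closure of U'_j and psi_j = 0 outside U'_j and U'''_j.
psi = [bump(0.0, 0.4), bump(0.3, 0.7), bump(0.6, 1.0)]

phi = [psi[0]]
for j in range(1, len(psi)):
    phi.append(np.minimum(psi[j], 1.0 - sum(phi)))

total = sum(phi)
assert np.allclose(total, 1.0)                # the phi_j sum to one
assert all(np.all(p >= -1e-9) for p in phi)   # each phi_j >= 0 (up to rounding)
# subordination: phi_j vanishes wherever psi_j vanishes
assert all(np.all(np.abs(p[ps == 0.0]) < 1e-9) for p, ps in zip(phi, psi))
```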
Now one has $$\overline{\bigcup_{j=1}^{|{\mathcal U}|}\phi_j^{-1}(0, 1)}\subseteq \overline{\bigcup_{j=1}^{|{\mathcal U}|}\psi_j^{-1}(0, 1)}\subseteq \overline{\bigcup_{j=1}^{|{\mathcal U}|}U_j'''}\subseteq \bigcup_{j=1}^{|{\mathcal U}|}U_j'',$$ and hence $\mu(\overline{\bigcup_{j=1}^{|{\mathcal U}|}\phi_j^{-1}(0, 1)})\le \mu(\bigcup_{j=1}^{|{\mathcal U}|}U_j'')<\varepsilon$ for every $\mu \in M(X, \Gamma)$. \end{proof} \begin{lemma} \label{L-measure to orbit} Let a countable group $\Gamma$ act continuously on a compact metrizable space $X$. Let $\rho$ be a compatible metric on $X$. Let $Z$ be a closed subset of $X$, and $\varepsilon>0$. Then there exist a nonempty finite subset $F$ of $\Gamma$ and $\delta>0$ such that, for any map $\sigma$ from $\Gamma$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$, one has $$ \frac{1}{d}\max_{\varphi \in Map(\rho, F, \delta, \sigma)}\sum_{a\in [d]}1_Z(\varphi(a))<\varepsilon+\sup_{\mu\in M(X, \Gamma)}\mu(Z).$$ \end{lemma} \begin{proof} Suppose that for any nonempty finite subset $F$ of $\Gamma$ and any $\delta>0$, there are a map $\sigma_{F, \delta}$ from $\Gamma$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$ and a $\varphi_{F, \delta}\in Map(\rho, F, \delta, \sigma_{F, \delta})$ with $$ \frac{1}{d}\sum_{a\in [d]}1_Z(\varphi_{F, \delta}(a))\ge \varepsilon+\sup_{\mu\in M(X, \Gamma)}\mu(Z).$$ Denote by $\mu_{F, \delta}$ the probability measure $\frac{1}{d}\sum_{a\in [d]}\delta_{\varphi_{F, \delta}(a)}$ on $X$. The set of all such pairs $(F, \delta)$ is partially ordered by declaring $(F, \delta)\ge (F', \delta')$ when $F\supseteq F'$ and $\delta\le \delta'$. Take a limit point $\nu$ of $\{\mu_{F, \delta}\}_{F, \delta}$ in the compact set of Borel probability measures on $X$.
Then there is a net $\{(F_j, \delta_j)\}_{j\in J}$ of pairs such that $\mu_{F_j, \delta_j}\to \nu$ as $j\to \infty$ and for any $(F, \delta)$ one has $(F_j, \delta_j)\ge (F, \delta)$ for all sufficiently large $j\in J$. We claim that $\nu$ is $\Gamma$-invariant. Let $g$ be a continuous ${\mathbb R}$-valued function on $X$. For each $\delta>0$, set $C_\delta=\max_{x, y\in X, \rho(x, y)\le \delta}|g(x)-g(y)|$. For any $j\in J$ and any $s\in F_j$, since $\varphi_{F_j, \delta_j}\in Map(\rho, F_j, \delta_j, \sigma_{F_j, \delta_j})$, we have $|{\mathcal W}_s|\ge d(1-\delta_j)$, where $${\mathcal W}_s=\{a\in [d]: \rho(\varphi_{F_j, \delta_j}(sa), s\varphi_{F_j, \delta_j}(a))\le \sqrt{\delta_j}\}.$$ Thus, for any $j\in J$ and $s\in F_j$, one has \begin{align*} |\mu_{F_j, \delta_j}(g)-(s\mu_{F_j, \delta_j})(g)|&=\frac{1}{d}|\sum_{a\in [d]}g(\varphi_{F_j, \delta_j}(a))-\sum_{a\in [d]}g(s\varphi_{F_j, \delta_j}(a))|\\ &=\frac{1}{d}|\sum_{a\in [d]}g(\varphi_{F_j, \delta_j}(sa))-\sum_{a\in [d]}g(s\varphi_{F_j, \delta_j}(a))|\\ &\le \frac{1}{d}\sum_{a\in [d]}|g(\varphi_{F_j, \delta_j}(sa))-g(s\varphi_{F_j, \delta_j}(a))|\\ &\le \frac{1}{d} (|{\mathcal W}_s|C_{\sqrt{\delta_j}}+(d-|{\mathcal W}_s|)2\|g\|_\infty)\\ &\le C_{\sqrt{\delta_j}}+2\delta_j\|g\|_\infty. \end{align*} Letting $j\to \infty$, we have $\mu_{F_j, \delta_j}(g) \to \nu(g)$ and $(s\mu_{F_j, \delta_j})(g)\to (s\nu)(g)$ for every $s\in \Gamma$, and $\delta_j, C_{\sqrt{\delta_j}}\to 0$. It follows that $\nu(g)=(s\nu)(g)$. This proves the claim. Next we claim that $\nu(Z)\ge \varepsilon+\sup_{\mu\in M(X, \Gamma)}\mu(Z)$, which is a contradiction. To prove the claim, by the regularity of $\nu$, it suffices to show $\nu(U)\ge \varepsilon+\sup_{\mu\in M(X, \Gamma)}\mu(Z)$ for every open neighborhood $U$ of $Z$. Take a continuous function $f: X\rightarrow [0, 1]$ such that $f=1$ on $Z$ and $f=0$ on $X\setminus U$.
For any $j\in J$ one has \begin{align*} \mu_{F_j, \delta_j}(f)\ge \frac{1}{d}\sum_{a\in [d]}1_Z(\varphi_{F_j, \delta_j}(a))\ge \varepsilon+\sup_{\mu\in M(X, \Gamma)}\mu(Z). \end{align*} Letting $j\to \infty$, we get $\nu(U)\ge \nu(f)\ge \varepsilon+\sup_{\mu\in M(X, \Gamma)}\mu(Z)$ as desired. \end{proof} We are ready to prove Theorem~\ref{T-SBP to zero mean dim}. \begin{proof}[Proof of Theorem~\ref{T-SBP to zero mean dim}] Fix a compatible metric $\rho$ on $X$. Let ${\mathcal U}$ be a finite open cover of $X$. Set $k=|{\mathcal U}|$. Let $\varepsilon>0$. Take $\phi_1, \dots, \phi_k$ as in Lemma~\ref{L-small to function} for ${\mathcal U}$ and $\varepsilon$. Set $Z=\overline{\bigcup_{j=1}^k\phi_j^{-1}(0, 1)}$. Then $\sup_{\mu\in M(X, G)}\mu(Z)\le \varepsilon$. By Lemma~\ref{L-measure to orbit} we can find a nonempty finite subset $F$ of $G$ and $\delta>0$ such that, for any map $\sigma$ from $G$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$, one has $$ \frac{1}{d}\max_{\varphi \in Map(\rho, F, \delta, \sigma)}\sum_{a\in [d]}1_Z(\varphi(a))<\varepsilon+\sup_{\mu\in M(X, G)}\mu(Z)\le 2\varepsilon.$$ Define $\Phi: X\rightarrow {\mathbb R}^k$ by $\Phi(x)=(\phi_1(x), \dots, \phi_k(x))$. Define $\Phi_{F, \delta, \sigma}: Map(\rho, F, \delta, \sigma)\rightarrow {\mathbb R}^{kd}$ by $$ \Phi_{F, \delta, \sigma}(\varphi)=(\Phi(\varphi(1)), \dots, \Phi(\varphi(d))).$$ Let $e^i_j$, $i=1, \dots, d$, $j=1, \dots, k$, be the standard basis of ${\mathbb R}^{kd}$.
For every $I\subseteq [d]$ with $|I|\le 2\varepsilon d$ and every $\xi\in \{0, 1\}^{kd}$, define $$C(I, \xi)={\rm span} \{e^i_j: i\in I, 1\le j\le k\}+\xi.$$ Then $$ \Phi_{F, \delta, \sigma}(Map(\rho, F, \delta, \sigma))\subseteq \bigcup_{|I|\le 2\varepsilon d, \xi\in \{0, 1\}^{kd}}C(I, \xi).$$ Note that $\Phi_{F, \delta, \sigma}$ is ${\mathcal U}^d|_{Map(\rho, F, \delta, \sigma)}$-compatible, and $\bigcup_{|I|\le 2\varepsilon d, \xi \in \{0, 1\}^{kd}}C(I, \xi)$ is a finite union of affine subspaces of ${\mathbb R}^{kd}$ of dimension at most $2\varepsilon kd$. Since the union of finitely many closed subsets of dimension at most $2\varepsilon kd$ has dimension at most $2\varepsilon kd$ \cite[page 30 and Theorem V.8]{HW}, from Lemma~\ref{L-compatible map} we get $$ {\mathcal D}({\mathcal U}, \rho, F, \delta, \sigma)\le 2\varepsilon kd.$$ It follows that ${\mathcal D}_\Sigma({\mathcal U})\le 0$, and hence ${\rm mdim}_\Sigma(X)\le 0$. \end{proof} We conclude with the following question. \begin{question} \label{Q-SBP and zero dimension} Could one strengthen Theorem~\ref{T-SBP to zero mean dim} to get ${\rm mdim}_{\Sigma, {\rm M}}(X)=0$? \end{question} \end{document}
\begin{document} \title{Height zeta functions of projective bundles} \author{Takuya Maruyama} \address[Takuya Maruyama]{Graduate School of Mathematical Sciences, the University of Tokyo} \email{[email protected]} \maketitle \begin{abstract} We introduce a new approach to the study of height zeta functions of projective spaces and projective bundles. To study the height zeta function $Z(\mathbb{P}^n, H_{\mathcal{O}(1)}; s)$ of projective space, we apply the Riemann-Roch theorem for Arakelov vector bundles due to van der Geer and Schoof to the integrand of an integral expression of $Z(\mathbb{P}^n, H_{\mathcal{O}(1)}; s)$. We give a proof of the analytic continuation and functional equations of height zeta functions of projective spaces with respect to various height functions. Motivic analogues of these results are also proved. We also study height zeta functions of Hirzebruch surfaces. \end{abstract} \section{Introduction} \label{sect:Introduction} Let $F$ be a number field. Consider a projective variety $X$ over $F$ and a function $H \colon X(F) \longrightarrow \mathbb{R}$ such that the number \[ n(X,H;B) = \#\bigl\{P \in X(F) \bigm | H(P) \leq B \bigr\} \] is finite for each $B \in \mathbb{R}$. We call a function with this property a height function on $X$. Typically, one can construct a height function from a generically ample Arakelov line bundle on some model of $X$. By definition, such a bundle consists of (1) a line bundle $\mathcal{L}$ on some proper model $\mathfrak{X} \longrightarrow \Spec O_F$ of $X$ such that $L = \mathcal{L}|_X$ is ample, and (2) a Hermitian metric on the line bundle $L \otimes_F F_v$ on $X \otimes_F F_v$ for each complex embedding $v$ of $F$. Then for a point $P \in X(F)$, the $1$-dimensional $F$-vector space $P^*L$ has a natural structure of an Arakelov line bundle over $\Spec O_F$. An $F$-vector space equipped with a structure of an Arakelov vector bundle over $\Spec O_F$ is called an Arakelov vector bundle over $F$ for short.
There is a notion of the norm $N(L)$ of an Arakelov line bundle $L$ over $F$ (see Definition \ref{defn:chern class of line bundle}). The height $H_L(P)$ of $P$ with respect to $L$ is defined as \[ H_L(P) = N(P^*L). \] Let $H_L$ be a height function on $X$ constructed as above. We are interested in the asymptotic behaviour of the counting function $n(X,H_L;B)$ as $B \to \infty$. It represents one aspect of the distribution of rational points of $X$ with respect to $H_L$. A useful tool to study such problems is a Dirichlet series called the height zeta function defined by \[ Z(X,H_L;s) = \sum_{P \in X(F)}H_L(P)^{-s}. \] By a Tauberian-type theorem, one derives the asymptotic behaviour of $n(X,H_L;B)$ as $B \to \infty$ from the behaviour of $Z(X,H_L;s)$ around its abscissa of convergence. See Theorem \ref{thm:Tauberian} for the precise statement of the Tauberian theorem used in this paper. In this paper, we study analytic properties of height zeta functions of projective spaces and Hirzebruch surfaces. We first state our results for projective spaces. Let $n$ be a non-negative integer and set $r = n+1$. Let $V$ be an Arakelov vector bundle over $F$ of rank $r$. Consider the projective space $\mathbb{P}(V)$ of all lines in $V$. Then the line bundle $\mathcal{O}_{\mathbb{P}(V)}(1)$ carries a natural structure of an Arakelov line bundle, and therefore defines a height function $H_{\mathcal{O}(1)}$. \begin{thm} \label{thm:intro main theorem} In the above situation, let $Z\bigl(\mathbb{P}(V),s\bigr)$ be the height zeta function of $\mathbb{P}(V)$ with respect to $H_{\mathcal{O}(1)}$. \begin{enumerate} \item The Dirichlet series $Z\bigl(\mathbb{P}(V),s\bigr)$ defines a holomorphic function on the domain $\bigl\{s \in \mathbb{C} \bigm| \Re(s) > r\bigr\}$, and it is analytically continued to a meromorphic function on the whole complex plane.
\item In the domain $\{s \in \mathbb{C} \mid \Re(s) > 1\}$, the function $Z\bigl(\mathbb{P}(V),s\bigr)$ has a unique simple pole at $s = r$ with residue \[ \Res_{s=r}Z\bigl(\mathbb{P}(V),s\bigr)=\frac{\alpha N(V)}{w|\Delta|^{\frac{r}{2}}\xi(r)}. \] Here, $N(V)$ is the norm of the Arakelov line bundle $\det(V)$, $w$ is the number of roots of unity in $F$, $\alpha = Rh$ is the product of the regulator $R$ and the class number $h$ of $F$, $\Delta$ is the discriminant of $F$, and $\xi(s)$ is the Dedekind zeta function of $F$ multiplied by a gamma factor defined in \S\ref{subs:Topologies and measures}. \item We have the following functional equations: \[ N(V)^{-\frac{1}{2}}|\Delta|^{\frac{s}{2}}\xi(s)Z\bigl(\mathbb{P}(V),s\bigr) = N(V^{\vee})^{-\frac{1}{2}}|\Delta|^{\frac{r-s}{2}}\xi(r-s)Z\bigl(\mathbb{P}(V^{\vee}),r-s\bigr). \] \end{enumerate} \end{thm} When $V$ is a trivial Arakelov vector bundle $O_F^{\oplus r}$, the $F$-variety $\mathbb{P}(V)$ is canonically isomorphic to $\mathbb{P}^n$, and the height used in Theorem \ref{thm:intro main theorem} is equal to the classical height \[ H(P) = \prod_{v\colon\text{infinite places}}\bigl(\sum_{0\leq i \leq n}|x_i|^2_v\bigr)^{\frac{n_v}{2}}\prod_{v\colon\text{finite places}}\sup_{0 \leq i \leq n}|x_i|_v^{n_v} \] for a point $P \in \mathbb{P}^n(F)$ with homogeneous coordinates $[x_0:\cdots:x_n]$. See \S\ref{ssect:Number fields} for notation about absolute values and $n_v$'s. In this case, the statements of Theorem \ref{thm:intro main theorem} are known (for example, this is a very special case of results in \cite{FrankeManinTschinkel}, where height zeta functions of generalized flag varieties are studied). In this paper, we generalize these results to arbitrary Arakelov vector bundles using the Riemann-Roch theorem due to van der Geer and Schoof \cite{vanderGeerSchoof}. By using this generalization, we can also handle height zeta functions of Hirzebruch surfaces. This will be explained below.
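Over $F=\mathbb{Q}$ the classical height is easy to experiment with: in coprime integer coordinates all finite-place factors equal $1$, so $H([x_0:\cdots:x_n])=(\sum_i x_i^2)^{1/2}$. The following brute-force count for $\mathbb{P}^1(\mathbb{Q})$ is our illustration, not code from the paper; its quadratic growth with constant $3/\pi$ is consistent with the residue $\pi/\zeta(2)$ at $s=2$ obtained from Theorem \ref{thm:intro main theorem} with $F=\mathbb{Q}$ and $V=O_F^{\oplus 2}$ (so that, via the Tauberian theorem, $n(B)\sim \pi B^2/(2\zeta(2)) = 3B^2/\pi$, Schanuel's classical asymptotic).

```python
from math import gcd, isqrt, pi

# Count points of P^1(Q) with classical height H([a:b]) = sqrt(a^2 + b^2) <= B,
# where (a, b) are coprime integers (all finite-place factors are then 1).
def points_up_to(B):
    """Rational points of P^1 of height <= B, as normalized coprime pairs."""
    A = isqrt(int(B * B)) + 1
    pts = set()
    for a in range(-A, A + 1):
        for b in range(-A, A + 1):
            if (a, b) == (0, 0) or gcd(a, b) != 1:
                continue
            if a * a + b * b <= B * B:
                # identify (a, b) with (-a, -b): keep the lexicographically larger pair
                pts.add(max((a, b), (-a, -b)))
    return pts

assert len(points_up_to(1.0)) == 2     # [1:0] and [0:1]
assert len(points_up_to(1.5)) == 4     # adds [1:1] and [1:-1]
# growth check against n(B) ~ (3/pi) B^2:
assert abs(len(points_up_to(30)) / 900 - 3 / pi) < 0.1
```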
For Hirzebruch surfaces, we prove the following. We consider only Hirzebruch surfaces of degree greater than one. Fix an Arakelov vector bundle $V$ of rank $2$ and let $\mathbb{P}^1$ denote the projective space $\mathbb{P}(V)$. For an integer $e \geq 2$, consider the Hirzebruch surface $\pi \colon F_e = \mathbb{P}\bigl(\mathcal{O}_{\mathbb{P}^1}\oplus\mathcal{O}_{\mathbb{P}^1}(e)\bigr) \longrightarrow \mathbb{P}^1$ of degree $e$. For a pair of integers $(a,b) \in \mathbb{Z}^{2}$, let $\mathcal{O}(a,b)$ denote the line bundle $\mathcal{O}_{F_e}(a)\otimes\pi^*\mathcal{O}_{\mathbb{P}^1}(b)$ on $F_e$. Here, $\mathcal{O}_{F_e}(1)$ is Serre's twisting sheaf associated to the projective bundle construction of $F_e$. The line bundle $\mathcal{O}(a,b)$ is ample if and only if $a>0$ and $b>ae$. Assume these inequalities. The line bundle $\mathcal{O}(a,b)$ carries a natural structure of an Arakelov line bundle, and therefore defines a height function on $F_e$. \begin{thm} \label{thm:intro Hirzebruch main theorem} In the above situation, let $Z(F_e,s)$ be the height zeta function of $F_e$ with respect to $H_{\mathcal{O}(a,b)}$. Then it becomes a meromorphic function in the domain $D = \bigl\{s \in \mathbb{C} \bigm| \Re(s) > \max\{1/a,(e+2)/b\} \bigr\}$. There are the following two contributions to the poles of $Z(F_e,s)$ in $D$: \begin{enumerate} \item $\displaystyle \frac{\alpha Z(\mathbb{P}^1,2b/a-e)}{w |\Delta| \xi(2)}\cdot\frac{1}{as-2}\quad$ around $s = 2/a$. \item $\displaystyle \frac{\alpha N(V)}{w|\Delta|\xi(2)}\cdot\frac{1}{(b-ae)s-2}\quad$ around $s = 2/(b-ae)$ if it is in $D$. \end{enumerate} \end{thm} In particular, there is a decomposition of the ample cone $\Lambda_{\text{ample}}(F_e)$ into two cones such that the asymptotic behaviour of the counting function $n(F_e,H_{\mathcal{O}(a,b)};B)$ depends on which cone the line bundle $\mathcal{O}(a,b)$ belongs to. For further discussion of this result, see \S\ref{ssect:Discussions}.
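The decomposition can be made concrete with a little bookkeeping. The snippet below is our illustration only; treating the larger of the two candidate poles $s=2/a$ and $s=2/(b-ae)$ as the one governing the counting function is the behaviour suggested by the theorem, not a statement proved at this point. It exhibits the wall $b=a(e+1)$ separating the two subcones of the ample cone.

```python
from fractions import Fraction

def leading_pole(a, b, e):
    """Larger of the two candidate poles s = 2/a and s = 2/(b - a*e) for O(a,b) on F_e."""
    assert a > 0 and b > a * e, "O(a,b) must be ample"
    return max(Fraction(2, a), Fraction(2, b - a * e))

# For e = 2 the wall is b = 3a.
assert leading_pole(1, 4, 2) == 2   # b - ae >= a: the pole s = 2/a dominates
assert leading_pole(2, 5, 2) == 2   # b - ae <  a: the pole s = 2/(b-ae) dominates
assert leading_pole(1, 3, 2) == 2   # on the wall b = a(e+1): both candidates coincide
assert leading_pole(3, 8, 2) == 1   # 2/(b-ae) = 1 exceeds 2/a = 2/3
```

Exact rational arithmetic (`fractions.Fraction`) avoids any floating-point ambiguity when the two candidates coincide on the wall.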
In \S\ref{sect:Motivic case}, we treat other versions of height zeta functions. To define these functions, fix a proper smooth curve $C$ over a field $k$. Consider a proper scheme $X \longrightarrow C$ and a relatively ample line bundle $L$ on $X$. Then for a section $P \in X(C)$, we define the logarithmic height $h_L(P)$ as \[ h_L(P) = \deg (P^*L). \] If $k$ is finite, then the number of points $P \in X(C)$ with bounded height is finite. Therefore we can define the geometric height zeta function of $X$ with respect to $h_L$ as the power series \[ Z_{\text{geom}}(X,h_L;t) = \sum_{P\in X(C)}t^{h_L(P)} \in \mathbb{Z}((t)). \] For a general field $k$, we can define the motivic height zeta function $Z_{\text{mot}}(X,h_L;t)$ of $X$ as a power series with coefficients in the Grothendieck ring $K$ of varieties over $k$. When $k$ is finite, there is a ring homomorphism $\mu \colon K \longrightarrow \mathbb{Z}$ such that $\mu\bigl([X]\bigr) = \#X(k)$ for any $k$-variety $X$. We also write $\mu$ for the ring homomorphism $K((t)) \longrightarrow \mathbb{Z}((t))$ induced by $\mu$. Then we have $\mu\bigl(Z_{\text{mot}}(X,h_L;t)\bigr) = Z_{\text{geom}}(X,h_L;t)$. In \cite{Wan}, Daqing Wan studies the geometric height zeta function of trivial projective bundles $X = \mathbb{P}^n \times C \longrightarrow C$ with respect to $L = \mathcal{O}_{X/C}(1)$. For example, he proves that $Z_{\text{geom}}\bigl(\mathbb{P}^n \times C,h_{\mathcal{O}(1)};t\bigr)$ is rational in $t$ and describes its denominator using the Riemann-Roch theorem. In this paper, we study motivic height zeta functions when $X \longrightarrow C$ is a projective bundle which is not necessarily trivial. To state our results, let $\widetilde{K}$ be the quotient of $K$ by the ideal generated by $[X]-[Y]$ for all radicial surjective morphisms $f \colon X \longrightarrow Y$. Let $\mathbb{L}$ denote the image of $\mathbb{A}^1$ in $\widetilde{K}$, and let $\mathcal{M} = \widetilde{K}[\mathbb{L}^{-1}]$ be the localization.
Let $\zeta(t) \in K[[t]]$ denote Kapranov's motivic zeta function of $C$. It is known that $\zeta(t)$ is a rational function in $\mathcal{M}(t)$. \begin{thm} \label{thm:intro motivic main theorem} Let $V$ be a vector bundle of rank $r$ over $C$. We denote the motivic height zeta function of $\mathbb{P}(V) \longrightarrow C$ with respect to $\mathcal{L} = \mathcal{O}_{\mathbb{P}(V)}(1)$ by $Z\bigl(\mathbb{P}(V),t\bigr)$. \begin{enumerate} \item The product $\zeta(t)Z\bigl(\mathbb{P}(V),t\bigr)$ is a rational function in $\mathcal{M}(t)$ with the denominator $(t-1)(t-\mathbb{L}^{-r})$. Since $\zeta(t)$ is rational in $\mathcal{M}((t))$, we also have the rationality of $Z\bigl(\mathbb{P}(V),t\bigr)$ in $\mathcal{M}((t))$. \item The value of $(t-1)(t-\mathbb{L}^{-r})\zeta(t)Z\bigl(\mathbb{P}(V),t\bigr)$ at $t = \mathbb{L}^{-r}$ is $\mathbb{L}^{-rg-1+\deg V}\bigl(1-[\mathbb{P}^{-r}]\bigr)[J]$. Here, $J$ denotes the Jacobian variety of $C$, and the negative dimensional projective space $[\mathbb{P}^{-r}]$ is defined by \[ [\mathbb{P}^{-r}] = -\mathbb{L}^{-(r-1)} - \mathbb{L}^{-(r-2)} - \cdots - \mathbb{L}^{-1}. \] \item We have the following functional equations: \[ \zeta(t)Z\bigl(\mathbb{P}(V),t\bigr) = \mathbb{L}^{\deg V+r(g-1)}t^{2g-2}\zeta(\mathbb{L}^{-r}t^{-1})Z\bigl(\mathbb{P}(V^{\vee}),\mathbb{L}^{-r}t^{-1}\bigr). \] \end{enumerate} \end{thm} This is a precise motivic analogue of Theorem \ref{thm:intro main theorem}. Our approach in the proof of Theorem \ref{thm:intro main theorem} is an Arakelov theoretic analogue of (a refined version of) Wan's arguments in \cite{Wan}. The Riemann-Roch theorem in his arguments is replaced by van der Geer and Schoof's version of Tate's arithmetic Riemann-Roch theorem given in \cite{vanderGeerSchoof}. In \S2, we recall van der Geer and Schoof's theory from \cite{vanderGeerSchoof}, which also plays an important role in the proof of \ref{thm:intro Hirzebruch main theorem}.
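When $k=\mathbb{F}_q$ is finite, applying $\mu$ to $\zeta(t)$ gives the Hasse--Weil zeta function of $C$, an Euler product over the closed points of $C$. As a quick sanity check (our illustration, not code from the paper): for $C=\mathbb{P}^1$ this product equals $1/((1-t)(1-qt))$, i.e.\ the number of effective divisors of degree $d$ is $1+q+\cdots+q^d=\#\mathbb{P}^d(\mathbb{F}_q)$. The snippet verifies this by expanding the Euler product, counting closed points via Gauss's formula for monic irreducible polynomials.

```python
def mobius(n):
    """Mobius function, by trial division (n >= 1)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

def closed_points(q, n):
    """Number of closed points of degree n on P^1 over F_q."""
    # degree-n closed points of A^1 = monic irreducibles of degree n (Gauss);
    # the point at infinity contributes one extra point of degree 1.
    irr = sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0) // n
    return irr + (1 if n == 1 else 0)

def zeta_coeffs(q, D):
    """Coefficients up to t^D of the Euler product prod_n (1 - t^n)^(-a_n)."""
    c = [1] + [0] * D
    for n in range(1, D + 1):
        for _ in range(closed_points(q, n)):   # multiply by 1/(1 - t^n)
            for i in range(n, D + 1):
                c[i] += c[i - n]
    return c

q, D = 3, 6
assert zeta_coeffs(q, D) == [(q ** (d + 1) - 1) // (q - 1) for d in range(D + 1)]
```

The resulting denominator $(1-t)(1-qt)$ is the point-counting shadow of the motivic denominator $(t-1)(t-\mathbb{L}^{-r})$ appearing in the theorem (here $r=1$, up to the normalization of the variable).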
\section{Arakelov theory} In this section, we introduce notation in the Arakelov theory of arithmetic curves and recall results from \cite{vanderGeerSchoof}. \subsection{Number fields}\label{ssect:Number fields} Throughout this paper, we fix a number field $F$ and use the following notation: $O_F$ is the ring of integers, $\mu_F$ is the group of roots of unity, $w = \#\mu_F$, $\Delta$ is the discriminant, $R$ is the regulator, $h$ is the class number, $\alpha = Rh$, and $\zeta(s)$ is the Dedekind zeta function. The symbol $v$ always denotes a finite or infinite place of $F$. The number of real (resp. complex) places is denoted as $r_1$ (resp. $r_2$). For a place $v$, the function $|\cdot|_v \colon F \longrightarrow \mathbb{R}$ is the absolute value on $F$ which extends the standard $p$-adic or archimedean absolute value on $\mathbb{Q}$. $F_v$ is the completion of $F$ along $|\cdot|_v$, and $n_v$ is the local degree at $v$ (i.e., $[F_v:\mathbb{Q}_p]$ for the place $p$ of $\mathbb{Q}$ such that $v|p$). For a finite place $v$, the norm $N(v)$ is the cardinality of the residue field at $v$. \subsection{Arakelov bundles and divisors} \begin{defn} \begin{enumerate} \item A Hermitian module is a finitely generated projective $O_F$-module $M$ equipped with a Hermitian metric $\langle,\rangle_{\sigma}$ on $M \otimes_{\sigma} \mathbb{C}$ for each embedding $\sigma \colon F \hooklongrightarrow \mathbb{C}$. A Hermitian module $M$ is said to be of real type if it satisfies $\overline{\langle x \otimes_{\sigma} 1, y \otimes_{\sigma} 1 \rangle_{\sigma}} = \langle x \otimes_{\overline{\sigma}} 1, y \otimes_{\overline{\sigma}} 1 \rangle_{\overline{\sigma}}$ for each $x,y \in M$ and $\sigma$. \item An Arakelov vector bundle $V$ is a vector space over $F$, equipped with an $O_F$-lattice $\Gamma(V) \subset V$ which is a Hermitian module of real type. \item An Arakelov vector bundle of rank $1$ is called an Arakelov line bundle.
The Arakelov Picard group $\Pic(F)$ is defined as the group of isometry classes of Arakelov line bundles. \item Let $V$ be an Arakelov vector bundle. Choose an embedding $\sigma_v \colon F_v \hooklongrightarrow \mathbb{C}$ for each place $v|\infty$. Then the Hermitian metric defines an $F_v$-norm $\|\cdot\|_v$ on $V \otimes_F F_v$ by $\|x\|_v^2 = \langle x \otimes_{\sigma_v} 1,x \otimes_{\sigma_v} 1\rangle_{\sigma_v}$. This does not depend on the choice of $\sigma_v$ because $\Gamma(V)$ is of real type. \end{enumerate} \end{defn} Various constructions on finite-dimensional vector spaces, such as direct sums, duals, tensor products and determinants, are also defined for Arakelov bundles. We call an element of the underlying vector space $V$ a rational section. \begin{defn} \begin{enumerate} \item An Arakelov divisor is a formal finite sum $\sum_{v|\infty}x_v[v] + \sum_{v\colon\fin}x_v[v]$, where $x_v \in \mathbb{R}$ for $v|\infty$ and $x_v \in \mathbb{Z}$ for finite $v$. We denote the group of Arakelov divisors by $\Div(F)$. \item For a rational function $f \in F^{\times}$, we define the principal divisor associated to $f$ as \[ (f) = \sum_{v|\infty}\bigl(-\log|f|^{n_v}_v\bigr)[v] + \sum_{v:\fin}\ord_v(f)[v]. \] The Arakelov class group $\Cl(F)$ is the quotient of $\Div(F)$ by the group of principal divisors. \item The degree of an Arakelov divisor $D$ is defined as \[ \deg(D) = \sum_{v|\infty}x_v + \sum_{v:\fin}\log N(v) x_v. \] The norm of $D$ is defined as $N(D) = \exp\bigl(\deg(D)\bigr)$. \end{enumerate} \end{defn} Note that the degree and the norm are well-defined for a class in $\Cl(F)$ by the product formula. \begin{defn} \label{defn:chern class of line bundle} \begin{enumerate} \item For an Arakelov line bundle $L$ and a rational section $x$ of $L$, we define the divisor of zeros and poles of $x$ as \[ \div(x) = \sum_{v|\infty}(-\log \|x\|^{n_v}_v)[v] + \sum_{v:\fin}\ord_v(x)[v].
\] Then the class $[\div(x)] \in \Cl(F)$ does not depend on $x$, and we denote it by $c_1(L)$. \item For an Arakelov divisor $D$, we define an Arakelov line bundle $O_F(D)$ as a line bundle such that the vector space of rational sections is $F$ and that $\div(1) = D$. \item We define $\deg L = \deg c_1(L)$. We also define the degree of an Arakelov vector bundle $V$ as $\deg(V) = \deg \bigl(\det(V)\bigr)$. We define the norm of $V$ as $N(V) = \exp\bigl(\deg(V)\bigr)$. \end{enumerate} \end{defn} It is easily checked that the map $L \longmapsto c_1(L)$ is a bijection from the Picard group $\Pic(F)$ to the class group $\Cl(F)$. \subsection{The Riemann-Roch theorem and a vanishing result}\label{subs:Riemann-Roch} The notion of Arakelov vector bundles is an arithmetic analogue of vector bundles on algebraic curves over finite fields. The underlying $F$-vector space $V$ is considered as the space of rational sections, and the module $\Gamma(V) \subset V$ is considered as the set of rational sections which are regular at finite places. One usual definition of ``global sections'' is as follows. For an infinite place $v$, a rational section $x \in V$ is defined to be ``regular at $v$'' if and only if $\|x\|_v \leq 1$. So one defines the set of global sections $H^0(V)$ as $\bigl\{x \in \Gamma(V)\bigm| \|x\|_v \leq 1 \text{ for all }v|\infty\bigr\}$, and the ``dimension'' $h^0(V)$ of $H^0(V)$ as $\log \# H^0(V)$. Another, beautiful definition of $h^0(V)$ is given in \cite{vanderGeerSchoof}. For a place $v|\infty$ and a section $x \in \Gamma(V)$, we consider the quantity $\exp(-n_v\pi\|x\|_v^2)$ as ``the probability of $x$ to be regular at $v$''. Then the probability of $x$ to be a global section is equal to $\prod_{v|\infty}\exp(-n_v\pi\|x\|_v^2)$. Therefore we make the following definitions. \begin{defn} \label{defn:beautiful definitions} Let $V$ be an Arakelov bundle.
\begin{enumerate} \item For an element $x \in \Gamma(V)$, we define a positive real number $e_V(x)$ by \[ e_V(x) = \prod_{v|\infty}\exp(-n_v\pi\|x\|_v^2). \] \item We define ``the number of global sections'' $\#H^0(V)$ by \[ \#H^0(V) = \sum_{x \in \Gamma(V)}e_V(x) \] and ``the dimension of $H^0(V)$'' by $h^0(V) = \log \#H^0(V)$. We also use $\varphi(V) = \#H^0(V) - 1$, which is considered as ``the number of nonzero global sections''. \end{enumerate} Now we state the Riemann-Roch theorem and a vanishing result for Arakelov vector bundles. The canonical bundle $\omega$ is defined to be an Arakelov bundle such that $\Gamma(\omega)$ is the inverse of the different ideal of $F$ and $\|1\|_v = 1$ for all infinite places $v$. Then $\deg \omega = \log |\Delta|$. \begin{prop}\label{prop:R-R and vanishing} Let $V$ be an Arakelov vector bundle of rank $r$. \begin{enumerate} \item (Riemann-Roch) $h^0(V)-h^0(V^{\vee}\otimes\omega)=\deg V - \frac{r}{2}\log|\Delta|$. \item (Vanishing) For any $C \in \mathbb{R}$, there are constants $C_1,C_2 > 0$ such that \[ \varphi(V\otimes L) \leq C_1 \exp \Bigl(-C_2 \exp \Bigl(-\frac{2}{[F:\mathbb{Q}]} \deg L\Bigr) \Bigr) \] for all $L \in \Pic(F)$ with $\deg L \leq C$. In other words, the function $L \longmapsto \varphi(V \otimes L)$ on $\Pic(F)$ tends to zero doubly exponentially fast and uniformly as $\deg L \to -\infty$. \item There is a constant $C > 0$ such that $\varphi(L) \leq CN(L)$ for any $L \in \Pic(F)$. \end{enumerate} \end{prop} \begin{proof} As explained in \cite{vanderGeerSchoof} (where they prove (i) when $r=1$), the proof of (i) is an easy application of the Poisson summation formula. Since $\Pic^0(F)$ is compact (see \S\ref{subs:Topologies and measures}) and $L \longmapsto \varphi(V\otimes L)$ is continuous, it suffices to show the statement (ii) for one choice of $C$. Any Arakelov vector bundle $V$ can be embedded into a direct sum $\oplus_{1 \leq i \leq n}L_i$ of Arakelov line bundles $L_i$.
Then the proof of (ii) is reduced to the case $r=1$ and $C = \frac{1}{2}\log|\Delta|$, which is proved in \cite{vanderGeerSchoof}. The statement (iii) follows from (i), (ii) and the compactness of $\Pic^0(F)$. \end{proof} \subsection{Topologies and measures}\label{subs:Topologies and measures} We endow the group $\Div(F) = \prod_{v|\infty}\mathbb{R} \times \bigoplus_{v:\fin}\mathbb{Z}$ with a natural topology and a Haar measure, i.e., the product of Euclidean and discrete topologies, and the product of Lebesgue and counting measures. Then the group $\Pic(F) \cong \Cl(F) = \Div(F) / (F^{\times}/\mu_F)$ is endowed with the quotient topology and the quotient measure. Let $\Pic^0(F) \subset \Pic(F)$ be the kernel of the degree map: \[ 0 \longrightarrow \Pic^0(F) \longrightarrow \Pic(F) \stackrel{\deg}{\longrightarrow} \mathbb{R} \longrightarrow 0. \] The group $\Pic^0(F)$ fits into the exact sequence \[ 0 \longrightarrow H/\phi(O_F^{\times}) \longrightarrow \Pic^0(F) \longrightarrow \Pic(O_F) \longrightarrow 0, \] where $\Pic(O_F)$ is the group of isomorphism classes of finitely generated projective $O_F$-modules of rank $1$, $H =\bigl \{ (x_v) \in \prod_{v|\infty}\mathbb{R} \bigm | \sum_{v} x_v = 0 \bigr \}$, and $\phi(f) = \bigl( \log|f|_v^{n_v} \bigr)_v$. Then Dirichlet's unit theorem (i.e., the compactness of $H/\phi(O_F^{\times})$) and the finiteness of the ideal class group (i.e., the finiteness of $\Pic(O_F)$) imply that $\Pic^0(F)$ is compact. The volume of $\deg^{-1}\bigl([0,1]\bigr) \subset \Pic(F)$ is equal to $\alpha = Rh$. For a function $f \in L^1(\mathbb{R})$ we have $\int_{\Pic(F)}f(\deg L)dL = \alpha\int_{\mathbb{R}}f(t)dt$. The completed Dedekind zeta function of $F$ can be expressed as an integral over $\Div(F)$. To explain this, we define the effectivity $e(D)$ (i.e., ``the probability of $D$ to be effective'') of an Arakelov divisor $D$.
\begin{defn} For an Arakelov divisor $D$, we define \[ e(D) = \begin{cases} e_{O_F(D)}(1) & \text{if the finite part of }D\text{ is effective, and}\\ 0 & \text{otherwise.} \end{cases} \] Here, $e_{O_F(D)}(1)$ denotes ``the probability of $1$ to be a global section of $O_F(D)$'' in the sense of Definition \ref{defn:beautiful definitions}. Explicitly, if the finite component of $D$ is effective and the infinite component is $\sum_{v|\infty}x_v[v]$, then the effectivity $e(D)$ is given by $\prod_{v|\infty}\exp(-n_v\pi\exp(-\frac{2}{n_v}x_v))$. \end{defn} Now we define a function $\xi(s)$ as an integral \[ \xi(s) = \int_{\Div(F)}N(D)^{-s}e(D)dD. \] By the decomposition $\Div(F) = \bigoplus_{v\colon\fin}\mathbb{Z} \times \prod_{v|\infty}\mathbb{R}$, we have \begin{align*} \xi(s) &= \sum_{I \subset \mathcal{O}_F} N(I)^{-s} \prod_{v|\infty}\int_{\mathbb{R}}e^{-xs}e^{-n_v\pi\exp(-\frac{2}{n_v}x)}dx \\ &= 2^{-r_1}\Bigl(\pi^{-\frac{s}{2}}\Gamma\Bigl(\frac{s}{2}\Bigr)\Bigr)^{r_1}\bigl((2\pi)^{-s}\Gamma(s)\bigr)^{r_2}\zeta(s). \end{align*} (The computation in \cite{vanderGeerSchoof} is off by a scalar multiple.) We can also express $\xi(s)$ as an integral over $\Pic(F)$. Note that the effectivity $e\bigl(D+(f)\bigr)$ is equal to $e_{O_F(D)}(f)$. Therefore, \[ \xi(s) = \int_{\Pic(F)}N\bigl([D]\bigr)^{-s}\sum_{f \in F^{\times}/\mu_F}e\bigl(D+(f)\bigr)d[D] = w^{-1} \int_{\Pic(F)}N(L)^{-s}\varphi(L)dL. \] \section{Projective spaces} \label{sec:Height zeta functions} In this section, we prove the analytic continuation and functional equations of height zeta functions of projective spaces. \subsection{Heights} \label{ssec:Height} Let $V$ be an Arakelov vector bundle of rank $r = n+1$, and $\mathbb{P}(V)$ be the associated projective space of lines in $V$. A $1$-dimensional subspace $L \subset V$ corresponds to a rank $1$ projective $O_F$-submodule $\Gamma(V) \cap L$ of $\Gamma(V)$ with a projective cokernel.
So we have an Arakelov subbundle of $V$ of rank $1$ for each $F$-rational point $P$ of $\mathbb{P}(V)$, which is denoted by $P^*\mathcal{O}(-1)$. The dual of $P^*\mathcal{O}(-1)$ is denoted by $P^*\mathcal{O}(1)$. We define the height $H(P)$ of $P$ as the norm $N\bigl(P^*\mathcal{O}(1)\bigr)$. \subsection{A generalization of Wan's formula} The height zeta function is defined as the series \begin{align}\label{eq:def of height zeta} Z\bigl(\mathbb{P}(V),s\bigr) = \sum_{P \in \mathbb{P}(V)(F)}H(P)^{-s} \quad\text{for }s \in \mathbb{C}. \end{align} We are going to show that the series $Z\bigl(\mathbb{P}(V),s\bigr)$ converges absolutely for $\Re s > r$, and that it is analytically continued to a meromorphic function on the whole complex plane. Let $\xi(s)$ be the function defined in \S\ref{subs:Topologies and measures}. Then for a point $P \in \mathbb{P}(V)(F)$, we have \begin{align*} w\xi(s)H(P)^{-s} &= \int_{\Pic(F)}N\bigl(L\otimes P^*\mathcal{O}(1)\bigr)^{-s} \varphi(L)dL \\ &= \int_{\Pic(F)}N(L)^{-s} \varphi\bigl(L\otimes P^*\mathcal{O}(-1)\bigr)dL. \end{align*} If we fix $L \in \Pic(F)$ and let $P$ vary over $\mathbb{P}(V)(F)$, then the line bundle $L \otimes P^*\mathcal{O}(-1)$ runs through all line subbundles of $L \otimes V$. Note that, for a global section $x \in \Gamma\bigl(L \otimes P^*\mathcal{O}(-1)\bigr)$, the quantity $e_{L \otimes P^*\mathcal{O}(-1)}(x)$ is equal to $e_{L \otimes V}(x)$. Therefore, by the definition of the height zeta function (\ref{eq:def of height zeta}), we have \begin{align} \begin{aligned}\label{eq:Mobius inversion} w \xi(s)Z\bigl(\mathbb{P}(V),s\bigr) &= \int_{\Pic(F)}N(L)^{-s}\sum_{P \in \mathbb{P}(V)(F)}\varphi\bigl(L \otimes P^*\mathcal{O}(-1)\bigr)dL \\ &= \int_{\Pic(F)}N(L)^{-s}\varphi(L \otimes V)dL.
\end{aligned} \end{align} \begin{rem} Consider a new function $\xi^{(k)}(s)$ defined by \[ \xi^{(k)}(s) = \int_{\Div(F)}\biggl(\frac{\varphi\bigl(O_F(D)\bigr)}{w}\biggr)^k N(D)^{-s}e(D)dD = w^{-(k+1)}\int_{\Pic(F)}\varphi(L)^{k+1}N(L)^{-s}dL \] for $k \geq 0$. This is an arithmetic analogue of the $k$-th zeta function in \cite{Wan}. Suppose that $V=F^{\oplus r}$ is a trivial bundle of rank $r$. In this case, since $\varphi(L^{\oplus r})+1 = \bigl(\varphi(L)+1\bigr)^r = \sum_{0 \leq l \leq r}\binom{r}{l}\varphi(L)^l$, the formula (\ref{eq:Mobius inversion}) implies \[ Z\bigl(\mathbb{P}^n,s\bigr) = \frac{1}{\xi(s)}\sum_{0 \leq k \leq n}\binom{n+1}{k+1}w^{k}\xi^{(k)}(s). \] This is our version of Wan's formula in Theorem 3.1 of \cite{Wan}. \end{rem} \subsection{An Application of the Riemann-Roch} \label{ssect:Application of Riemann-Roch} Let $E_{+} = \{L \in \Pic(F) \mid N(L) \geq \sqrt{|\Delta|}\}$ and $E_{-} = \{L \in \Pic(F) \mid N(L) \leq \sqrt{|\Delta|}\}$. Divide the right hand side of (\ref{eq:Mobius inversion}) into the sum of two integrals: \[ \int_{\Pic(F)}N(L)^{-s}\varphi(L \otimes V)dL = F_+(V,s) + F_-(V,s) \] where \[ F_{\pm}(V,s) = \int_{E_{\pm}}N(L)^{-s}\varphi(L\otimes V)dL. \] By (ii) of Proposition \ref{prop:R-R and vanishing}, the integral $F_{-}(V,s)$ converges to a holomorphic function on the whole complex plane. As for the integral $F_{+}(V,s)$, by substituting $L^{\vee} \otimes \omega$ for $L$, we have the following expression over $E_{-}$: \[ F_{+}(V,s) = \int_{E_{-}}|\Delta|^{-s}N(L)^s\varphi(L^{\vee} \otimes V \otimes \omega)dL. \] By the Riemann-Roch theorem ((i) of Proposition \ref{prop:R-R and vanishing}) applied to $L \otimes V^\vee$, we have \begin{align*} \varphi(L^{\vee} \otimes V \otimes \omega) + 1 &= |\Delta|^{\frac{r}{2}}N(L^{\vee} \otimes V) \bigl(\varphi(L \otimes V^{\vee})+1\bigr) \\ \iff \varphi(L^{\vee} \otimes V \otimes \omega) &= |\Delta|^{\frac{r}{2}}N(V)N(L)^{-r} \bigl(\varphi(L \otimes V^{\vee})+1\bigr) - 1. 
\end{align*} Therefore, \begin{align*} N(V)^{-\frac{1}{2}}|\Delta|^{\frac{s}{2}}F_{+}(V,s) &= \int_{E_{-}}N(V)^{\frac{1}{2}}|\Delta|^{\frac{r-s}{2}}N(L)^{s-r}\varphi(L \otimes V^{\vee})dL \\ &\quad + \int_{E_{-}}N(V)^{\frac{1}{2}}|\Delta|^{\frac{r-s}{2}}N(L)^{s-r}dL - \int_{E_{-}}N(V)^{-\frac{1}{2}}|\Delta|^{-\frac{s}{2}}N(L)^{s}dL. \end{align*} The first term on the right-hand side converges to a holomorphic function on the whole complex plane by (ii) of Proposition \ref{prop:R-R and vanishing}. Moreover, by the formula $\int_{\Pic(F)}f(\deg L)dL = \alpha \int_{\mathbb{R}}f(t)dt$ in \S\ref{subs:Topologies and measures}, the second and third terms are computed explicitly as follows: \begin{align*} \int_{E_{-}}N(V)^{\frac{1}{2}}|\Delta|^{\frac{r-s}{2}}N(L)^{s-r}dL &= \frac{N(V)^{\frac{1}{2}}\alpha}{s-r} \quad (\Re(s) > r),\\ \int_{E_{-}}N(V)^{-\frac{1}{2}}|\Delta|^{-\frac{s}{2}}N(L)^{s}dL &= \frac{N(V)^{-\frac{1}{2}}\alpha}{s} \quad (\Re(s) > 0). \end{align*} Summarizing, we obtain the following expression of the right hand side of (\ref{eq:Mobius inversion}) multiplied by $N(V)^{-\frac{1}{2}}|\Delta|^{\frac{s}{2}}$, valid for $s$ with $\Re(s) > r$: \begin{align} \begin{aligned}\label{eq:star} N(V)^{-\frac{1}{2}}|\Delta|^\frac{s}{2}\bigl(F_{-}(V,s) + F_{+}(V,s)\bigr) &= \int_{E_{-}}N(V)^{-\frac{1}{2}}|\Delta|^{\frac{s}{2}}N(L)^{-s}\varphi(L\otimes V)dL \\ &+ \int_{E_{-}}N(V)^{\frac{1}{2}}|\Delta|^{\frac{r-s}{2}}N(L)^{s-r}\varphi(L \otimes V^{\vee})dL \\ &+ \frac{N(V)^{\frac{1}{2}}\alpha}{s-r} -\frac{N(V)^{-\frac{1}{2}}\alpha}{s}. \end{aligned} \end{align} Note that the right hand side is invariant under the replacement $(V,s) \leftrightarrow (V^{\vee},r-s)$. Now we have proved the following theorem. \begin{thm} \label{thm:main theorem} Let $V$ be an Arakelov vector bundle of rank $r = n + 1$.
Then the height zeta function $Z\bigl(\mathbb{P}(V),s\bigr)$ converges to a holomorphic function on the domain $\{s \in \mathbb{C} \mid \Re(s) > r\}$, and it is analytically continued to a meromorphic function on the whole complex plane. In the domain $\{s \in \mathbb{C} \mid \Re(s) > 1\}$, it has a unique simple pole at $s = r$ with residue \[ \Res_{s=r}Z\bigl(\mathbb{P}(V),s\bigr)=\frac{\alpha N(V)}{w|\Delta|^{\frac{r}{2}}\xi(r)}. \] Moreover, these functions satisfy the following functional equation: \[ N(V)^{-\frac{1}{2}}|\Delta|^{\frac{s}{2}}\xi(s)Z\bigl(\mathbb{P}(V),s\bigr) = N(V^{\vee})^{-\frac{1}{2}}|\Delta|^{\frac{r-s}{2}}\xi(r-s)Z\bigl(\mathbb{P}(V^{\vee}),r-s\bigr). \] \end{thm} \section{Motivic case} \label{sect:Motivic case} In this section, we prove the analytic continuation and functional equations of motivic height zeta functions of projective bundles. \subsection{Zeta functions} Fix a field $k$ and let $K$ be the Grothendieck ring of quasi-projective varieties over $k$. Thus $K$ is the ring generated by isomorphism classes of varieties over $k$, with multiplication defined by products of varieties, modulo relations of the form $[X] = [Y] + [X \setminus Y]$ for each closed subvariety $Y$ of $X$. Consider the ideal $I \subset K$ generated by elements of the form $[X]-[Y]$ for each radicial surjective morphism $f \colon X \longrightarrow Y$, and let $\widetilde{K} = K/I$. We denote the image of $\mathbb{A}^1$ in $\widetilde{K}$ by $\mathbb{L}$. Finally, let $\mathcal{M} = \widetilde{K}[\mathbb{L}^{-1}]$ be the localization of $\widetilde{K}$ by $\mathbb{L}$. Let $C$ be a smooth projective curve over $k$. For simplicity we assume that $C$ has a $k$-rational point $p_0$. Kapranov's motivic zeta function of $C$ is defined as a series in $K[[t]]$ whose $n$-th coefficient is the class of the $n$-th symmetric product $C_n$ of $C$: \[ \zeta(t) = \sum_{n \geq 0}[C_n]t^n. \] Note that $C_0 = \Spec k$, so the constant term of $\zeta(t)$ is $1$.
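For instance, for $C = \mathbb{P}^1$ (where $g = 0$ and $C_n \cong \mathbb{P}^n$), we have $[C_n] = 1 + \mathbb{L} + \cdots + \mathbb{L}^n$, and hence $\zeta(t) = 1/\bigl((1-t)(1-\mathbb{L}t)\bigr)$. The following sketch, an informal check that is not part of the argument, verifies this on a truncated series; it assumes sympy is available and models $\mathbb{L}$ as a formal variable, which is harmless for such polynomial identities.

```python
# Informal check: for C = P^1 one has C_n = P^n, so [C_n] = 1 + L + ... + L^n
# and zeta(t) = 1/((1-t)(1-L t)).  Here L stands for the class of A^1,
# modelled as a formal symbol.
import sympy as sp

L, t = sp.symbols('L t')
N = 12  # truncation order

def P(n):
    # the class [P^n] = 1 + L + ... + L^n, for n >= 0
    return sum(L**i for i in range(n + 1))

series_lhs = sum(P(n) * t**n for n in range(N))
series_rhs = sp.series(1 / ((1 - t) * (1 - L * t)), t, 0, N).removeO()

assert sp.simplify(sp.expand(series_lhs - series_rhs)) == 0
print("zeta_{P^1}(t) = 1/((1-t)(1-L t)) checked up to order", N)
```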
For later convenience, we define $C_n = \emptyset$ for negative $n$. Kapranov has shown that $\zeta(t)$ is a rational function in $\mathcal{M}(t)$ of the form \[ \zeta(t) = \frac{f(t)}{(1-t)(1-\mathbb{L} t)}, \] where $f(t)$ is a polynomial in $\mathcal{M}[t]$ of degree $\leq 2g$. Moreover, the motivic zeta function of $C$ satisfies the following functional equation \[ \zeta(t) = \mathbb{L}^{g-1}t^{2g-2}\zeta\bigl(\mathbb{L}^{-1}t^{-1}\bigr) \] in $\mathcal{M}(t)$. Let $V$ be a vector bundle over $C$ of rank $r$, and $\pi \colon \mathbb{P}(V) \longrightarrow C$ be the associated projective bundle of line-subbundles of $V$. Let $\mathcal{O}(-1)$ denote the universal subbundle of $\pi^*V$, and $\mathcal{O}(1)$ its dual. For each $d \in \mathbb{Z}$, let $\Sect_d\bigl(\mathbb{P}(V)\bigr)$ be the scheme of sections of $\mathbb{P}(V) \longrightarrow C$ of degree $d$. By definition, it is the scheme which represents the functor \[ \Sch_k \longrightarrow \Sets; T \longmapsto \bigl\{\text{sections } s \text{ of }\pi \times \id_T \colon \mathbb{P}(V) \times T \longrightarrow C \times T \bigm | \forall t \in T, \deg\bigl(s_t^*\mathcal{O}(1)\bigr) = d\bigr\}. \] This functor is represented by an open subscheme of the Hilbert scheme of closed subschemes of $\mathbb{P}(V)$ of dimension $1$ and degree $d$ with respect to $\mathcal{O}(1)$. In particular, $\Sect_d\bigl(\mathbb{P}(V)\bigr)$ is a quasi-projective variety. \begin{lem} \label{lem:Sect_d empty for small d} For all sufficiently small $d$, the scheme $\Sect_d\bigl(\mathbb{P}(V)\bigr)$ is empty. \end{lem} \begin{proof} Fix an ample line bundle $L$ on $C$. There exists an integer $m$ such that $V^{\vee} \otimes L^{\otimes m}$ is ample.
Then for any field $k'$ containing $k$ and any section $s \colon C_{k'} \longrightarrow \mathbb{P}(V_{k'})$ of degree $d$, the line bundle $s^*\bigl(\mathcal{O}_{\mathbb{P}(V\otimes L^{\otimes(-m)})}(1)\bigr) = s^*\mathcal{O}_{\mathbb{P}(V)}(1) \otimes L^{\otimes m}$ is ample, and therefore its degree $d + m\deg L$ is positive. This implies that $\Sect_d\bigl(\mathbb{P}(V)\bigr) = \emptyset$ if $d \leq - m\deg L$. \end{proof} Now we define the motivic height zeta function of $\mathbb{P}(V)$ as the power series \[ Z\bigl(\mathbb{P}(V),t\bigr) = \sum_{d \in \mathbb{Z}}\bigl[\Sect_d\bigl(\mathbb{P}(V)\bigr)\bigr]t^d, \] which belongs to $K((t))$ by Lemma \ref{lem:Sect_d empty for small d}. \subsection{The variety $X_n(V)$} Consider the product of the zeta function of $C$ and the height zeta function: \[ \zeta(t)Z\bigl(\mathbb{P}(V),t\bigr) = \sum_{n \in \mathbb{Z}} \sum_{d \in \mathbb{Z}} \bigl[\Sect_d\bigl(\mathbb{P}(V)\bigr)\bigr][C_{n-d}]t^n. \] In this subsection, we define a variety $X_n(V)$ and prove the equality $\sum_{d \in \mathbb{Z}} \bigl[\Sect_d\bigl(\mathbb{P}(V)\bigr)\bigr][C_{n-d}] = \bigl[X_n(V)\bigr]$ in $\widetilde{K}$. For a coherent sheaf $\mathcal{F}$, let $\mathbb{P}^{\vee}(\mathcal{F})$ denote the $\Proj$-construction applied to the symmetric algebra of $\mathcal{F}$. For a surjective morphism $\mathcal{F} \longrightarrow \mathcal{G}$ of coherent sheaves there is an associated closed embedding $\mathbb{P}^{\vee}(\mathcal{G}) \longrightarrow \mathbb{P}^{\vee}(\mathcal{F})$. If $\mathcal{F}$ is the sheaf of sections of a vector bundle $V$, then $\mathbb{P}^{\vee}(\mathcal{F}^{\vee})$ is our projective bundle $\mathbb{P}(V)$ of line-subbundles of $V$. Fix a point $p_0 \in C(k)$. Let $J_n = \Pic^n(C)$ be the Picard scheme of line bundles of degree $n$. We write simply $J$ for $J_0$.
Let $\mathcal{P}_n$ be the Poincar\'e sheaf on $C \times J_n$ normalized such that $\mathcal{P}_n|_{\{p_0\} \times J_n} \cong \mathcal{O}_{J_n}$ and $\mathcal{P}_n|_{C \times \{\mathcal{O}(np_0)\}} \cong \mathcal{O}_C(np_0)$. Schwarzenberger \cite{Schwarzenberger} showed that the natural morphism $C_n \longrightarrow J_n$ is isomorphic to $\mathbb{P}^{\vee}\bigl(R^1p_{2*}(\mathcal{P}_n^{\vee} \otimes p_1^*\omega_C)\bigr) \longrightarrow J_n$ (where $p_i$ ($i = 1,2$) denote the projection morphisms from $C \times J_n$ to $C$ and $J_n$, respectively). We define $X_n(V) = \mathbb{P}^{\vee}\bigl(R^1p_{2*}(\mathcal{P}_n^{\vee} \otimes p_1^*V^{\vee} \otimes p_1^*\omega_C)\bigr)$ (here we simply write $V$ for the sheaf of sections of $V$). The following lemma is a generalization of Propositions 7 and 8 in \cite{Schwarzenberger}. \begin{lem} \label{lem:coh and base chg} For any scheme $S$ and morphism $h\colon S \longrightarrow J_n$, there is an isomorphism $X_n(V) \times_{J_n} S \cong \mathbb{P}^{\vee}\bigl(R^1p_{2*}((\id_C \times h)^*(\mathcal{P}_n^{\vee} \otimes p_1^{*}V^{\vee} \otimes p_1^*\omega_C))\bigr)$. In particular, if $S = \Spec k'$ for a field $k'$ and $h$ corresponds to a sheaf $\mathcal{L} \in \Pic^n(C_{k'})$, then the fiber of $X_n(V) \longrightarrow J_n$ at $\mathcal{L} \in J_n(k')$ is isomorphic to $\mathbb{P}^{\vee}\bigl(H^1(C_{k'},\mathcal{L}^{\vee} \otimes V^{\vee}_{k'} \otimes \omega_{C_{k'}})\bigr) \cong \mathbb{P}\bigl(H^0(C_{k'},\mathcal{L} \otimes V_{k'})\bigr)$. \end{lem} \begin{proof} Schwarzenberger proves this lemma when $V = \mathcal{O}_C$, and his proof also applies to our case. Here we give a shorter proof. Since the top higher direct image always commutes with base change, there is a canonical isomorphism \[ h^*R^1p_{2*}\mathcal{G} \conglra R^1p_{2*}\bigl((\id_C \times h)^*\mathcal{G}\bigr) \] for any coherent sheaf $\mathcal{G}$ on $C \times J_n$ and $n \in \mathbb{Z}$. The construction $\mathbb{P}^{\vee}$ is compatible with pull-back of coherent sheaves.
Now the lemma is clear. \end{proof} \begin{prop} \label{prop:computation of prod of sect and C} \begin{enumerate} \item There is a natural morphism $\Sect_d\bigl(\mathbb{P}(V)\bigr) \times C_{n-d} \longrightarrow X_n(V)$. \item The morphism $m\colon\coprod_{d \in \mathbb{Z}}\Sect_d\bigl(\mathbb{P}(V)\bigr) \times C_{n-d} \longrightarrow X_n(V)$ induced by (i) is radicial and surjective. \end{enumerate} \end{prop} \begin{proof} We recommend that the reader first look at the proof of (ii) for the geometric description of the morphism in (i), and then return to the proof of (i) for the scheme-theoretic construction of the morphism. (i) We construct a morphism between the functors represented by $\Sect_d\bigl(\mathbb{P}(V)\bigr) \times C_{n-d}$ and $X_n(V)$. Let $T$ be a $k$-scheme and pick an element $(s,f)$ of $\Sect_d\bigl(\mathbb{P}(V)\bigr)(T) \times C_{n-d}(T)$. Let $\overline{f}$ denote the composition $T \stackrel{f}{\longrightarrow} C_{n-d} \longrightarrow J_{n-d}$, and $\mathcal{O}(f)$ denote the sheaf $(\id_C \times \overline{f})^*\mathcal{P}_{n-d}$ on $C \times T$. The point $s$ corresponds to a $C$-morphism $C \times T \longrightarrow \mathbb{P}(V)$, which is also denoted by $s$. The sheaf $s^*\mathcal{O}(1)$ on $C \times T$ has constant degree $d$ along each slice $C \times \{t\}$, so it gives a morphism $\overline{s} \colon T \longrightarrow J_{d}$. Let $\overline{s}\otimes\overline{f} \colon T \longrightarrow J_n$ be the composition $T \stackrel{\overline{s} \times \overline{f}}{\longrightarrow} J_d \times J_{n-d} \stackrel{\otimes}{\longrightarrow} J_n$. Then the sheaf $\bigl(\id_C \times \overline{s}\otimes\overline{f}\bigr)^*\mathcal{P}_n$ on $C \times T$ is isomorphic to $s^*\mathcal{O}(1) \otimes \mathcal{O}(f)$ modulo an element of $\Pic(T)$.
The canonical map $p_1^*V^{\vee} \longrightarrow s^*\mathcal{O}(1)$ on $C \times T$ gives rise to a quotient map \[ s^*\mathcal{O}(-1) \otimes \mathcal{O}(f)^{\vee} \otimes p_1^*(V^{\vee} \otimes \omega_C) \longrightarrow \mathcal{O}(f)^{\vee} \otimes p_1^*\omega_C \] on $C \times T$. Applying the right-exact functor $R^1p_{2*}$ and the construction $\mathbb{P}^{\vee}$, we obtain an embedding \begin{align*} C_{n-d} \times_{J_{n-d},\overline{f}} T &= \mathbb{P}^{\vee}\bigl(R^1p_{2*}(\mathcal{O}(f)^{\vee}\otimes p_1^*\omega_C)\bigr)\\ &\hooklongrightarrow \mathbb{P}^{\vee}\bigl(R^1p_{2*}\bigl(s^*\mathcal{O}(-1) \otimes \mathcal{O}(f)^{\vee} \otimes p_1^*(V^{\vee} \otimes \omega_C)\bigr)\bigr) = X_n(V) \times_{J_n,\overline{s}\otimes\overline{f}} T, \end{align*} by Lemma \ref{lem:coh and base chg}. Note also that for any coherent sheaf $\mathcal{F}$ on $C \times T$ and $\mathcal{L} \in \Pic(T)$, there is a canonical isomorphism $\mathbb{P}^{\vee}\bigl(R^1p_{2*}(\mathcal{F} \otimes p_2^*\mathcal{L})\bigr) \cong \mathbb{P}^{\vee}(R^1p_{2*}\mathcal{F})$. Now we associate a $T$-valued point of $X_n(V)$ defined by the composition \[ T \stackrel{(f,\id_T)}{\longrightarrow} C_{n-d} \times_{J_{n-d},\overline{f}} T \longrightarrow X_n(V) \times_{J_n,\overline{s}\otimes\overline{f}} T \longrightarrow X_{n}(V) \] to the given pair $(s,f) \in \Sect_d\bigl(\mathbb{P}(V)\bigr)(T) \times C_{n-d}(T)$. The construction is clearly natural in $T$ and therefore defines a morphism $\Sect_d\bigl(\mathbb{P}(V)\bigr) \times C_{n-d} \longrightarrow X_n(V)$. (ii) We must prove that the morphism $m$ in the statement induces a bijection on the sets of $k'$-valued points for any field $k'$ containing $k$. As explained in the proof of (i), there is a morphism $\Sect_d\bigl(\mathbb{P}(V)\bigr) \longrightarrow J_d$ defined by $s \longmapsto s^*\mathcal{O}(1)$. 
By the construction in (i), the following diagram commutes: \[ \xymatrix{ \coprod_{d\in\mathbb{Z}}\Sect_d\bigl(\mathbb{P}(V)\bigr)\times C_{n-d} \ar[r]^(0.7){m} \ar[d] & X_n(V) \ar[d]\\ \coprod_{d\in\mathbb{Z}}J_d \times J_{n-d} \ar[r]^(0.65){\otimes} & J_n. } \] Fix a point $\mathcal{L} \in J_n(k')$ and consider the fiber of the above diagram at $\mathcal{L}$. Then the fiber of the upper horizontal morphism $m$ is described as follows: \begin{itemize} \item The source is isomorphic to \begin{align*} &\coprod_{d \in \mathbb{Z}} \bigl\{ (s,D) \in \Sect_d\bigl(\mathbb{P}(V)\bigr)(k') \times C_{n-d}(k') \bigm | s^*\mathcal{O}(1) \otimes \mathcal{O}(D) \cong \mathcal{L} \bigr\} \\ \cong & \coprod_{d \in \mathbb{Z}} \Bigl[\text{ the set of pairs } (s,l) \text{ where } s \in \Sect_d\bigl(\mathbb{P}(V)\bigr)(k') \text{ and } l \in \mathbb{P}\bigl(H^0(C_{k'},\mathcal{L} \otimes s^*\mathcal{O}(-1))\bigr)\Bigr]. \end{align*} \item The target is isomorphic to $\mathbb{P}\bigl(H^0(C_{k'}, \mathcal{L} \otimes V_{k'})\bigr)$ by Lemma \ref{lem:coh and base chg}. \item An element $(s,l) \in \Sect_d\bigl(\mathbb{P}(V)\bigr)(k') \times \mathbb{P}\bigl(H^0(C_{k'},\mathcal{L} \otimes s^*\mathcal{O}(-1))\bigr)$ is mapped to the image of $l$ under the embedding \[ \mathbb{P}\bigl(H^0(C_{k'},\mathcal{L} \otimes s^*\mathcal{O}(-1))\bigr) \hooklongrightarrow \mathbb{P}\bigl(H^0(C_{k'}, \mathcal{L} \otimes V_{k'})\bigr) \] which is induced by the canonical inclusion $s^*\mathcal{O}(-1) \hooklongrightarrow V_{k'}$. \end{itemize} By the last description of the map, one can easily check that it is bijective. The inverse map is given as follows: Represent an element of $\mathbb{P}\bigl(H^0(C_{k'}, \mathcal{L} \otimes V_{k'})\bigr)$ by a nonzero global section $t$ of $\mathcal{L} \otimes V_{k'}$. 
Consider the divisor of zeros $\div(t)$ (i.e., the coefficient of $\div(t)$ at a point $p \in C_{k'}$ is the largest integer $m$ such that $t_p \in \pi_p^m(\mathcal{L} \otimes V_{k'})_p$, where $\pi_p$ is a uniformizer of $C_{k'}$ at $p$). Then the injective morphism $\mathcal{O}(\div(t)) \hooklongrightarrow \mathcal{L} \otimes V_{k'}$ of $\mathcal{O}_{C_{k'}}$-modules defined by $1 \longmapsto t$ is locally split. Define $d = n - \deg(\div(t))$, let $s \in \Sect_d\bigl(\mathbb{P}(V)\bigr)(k')$ be the section such that $s^*\mathcal{O}(-1) = \mathcal{L}^{\vee} \otimes \mathcal{O}(\div(t)) \subset V_{k'}$, and let $l$ be the line in $H^0\bigl(C_{k'},\mathcal{O}(\div(t))\bigr)$ generated by $1$. Then the pair $(s,l)$ gives a lift of $t$. \end{proof} \begin{cor} \[ \sum_{d \in \mathbb{Z}} \bigl[\Sect_d\bigl(\mathbb{P}(V)\bigr)\bigr][C_{n-d}] = \bigl[X_n(V)\bigr] \text{ in }\widetilde{K}\text{, and} \] \[ \zeta(t)Z\bigl(\mathbb{P}(V),t\bigr) = \sum_{n \in \mathbb{Z}}[X_n(V)]t^n \text{ in }\widetilde{K}[[t]]. \] \end{cor} \begin{proof} This follows from Proposition \ref{prop:computation of prod of sect and C} and the definition of $\widetilde{K}$. \end{proof} From now on we mimic Kapranov's arguments to prove the rationality and the functional equation of $\zeta(t)Z\bigl(\mathbb{P}(V),t\bigr)$. \subsection{The Riemann-Roch theorem and a vanishing result} In the ring $K$, $[\mathbb{P}^n] = 1 + \mathbb{L} + \cdots + \mathbb{L}^n$ for $n \geq 0$. From this we have the equality $[\mathbb{P}^m] - \mathbb{L}^{m-n}[\mathbb{P}^n] = [\mathbb{P}^{m-n-1}]$ for all $m,n \in \mathbb{Z}$ such that $m > n \geq 0$. In particular, there is a recursive relation $[\mathbb{P}^{n+1}] - \mathbb{L}[\mathbb{P}^n] = 1$. We also define $[\mathbb{P}^n]$ for negative $n$ in the ring $\mathcal{M}$ by this relation. Then the formula $[\mathbb{P}^m] - \mathbb{L}^{m-n}[\mathbb{P}^n] = [\mathbb{P}^{m-n-1}]$ holds for all $m,n \in \mathbb{Z}$ in $\mathcal{M}$.
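All the classes $[\mathbb{P}^n]$, $n \in \mathbb{Z}$, are Laurent polynomials in $\mathbb{L}$, so identities among them may be verified after specializing $\mathbb{L}$ to a transcendental variable, where the recursion gives the closed form $[\mathbb{P}^n] = (\mathbb{L}^{n+1}-1)/(\mathbb{L}-1)$. The following sketch, an informal check assuming sympy rather than part of the argument, verifies the recursion, the displayed formula, and an identity of the same kind that reappears in the next subsection.

```python
# Informal check (L a formal variable): the recursion [P^{n+1}] = L*[P^n] + 1
# with [P^0] = 1 gives the closed form [P^n] = (L^{n+1} - 1)/(L - 1) for all
# integers n, and the identity [P^m] - L^{m-n} [P^n] = [P^{m-n-1}] then holds
# for all m, n.  Checking in Q(L) is harmless: all classes involved are
# Laurent polynomials in L.
import sympy as sp

L = sp.symbols('L')

def P(n):
    # closed form of the extended class [P^n], n in Z
    return sp.cancel((L**(n + 1) - 1) / (L - 1))

# the defining recursion, including negative n
for n in range(-6, 6):
    assert sp.simplify(P(n + 1) - (L * P(n) + 1)) == 0

# the identity [P^m] - L^{m-n}[P^n] = [P^{m-n-1}] on a test range
for m in range(-5, 6):
    for n in range(-5, 6):
        assert sp.simplify(P(m) - L**(m - n) * P(n) - P(m - n - 1)) == 0

# a related identity, (1 - [P^{-r}])/(L^{-r} - 1) = -L/(L - 1), used later
for r in range(1, 8):
    assert sp.simplify((1 - P(-r)) / (L**(-r) - 1) + L / (L - 1)) == 0

print("identities for the extended classes [P^n] verified on the test range")
```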
\begin{prop} \label{prop:motivic R-R} Let $V$ be a vector bundle over $C$ of rank $r$. \begin{enumerate} \item (Riemann-Roch) For all $n \in \mathbb{Z}$, \[ [X_n(V)] - \mathbb{L}^{r(n+1-g)+\deg V}[X_{2g-2-n}(V^{\vee})] = [\mathbb{P}^{r(n+1-g)+\deg V-1}][J]. \] \item (Vanishing) $X_n(V) = \emptyset$ for all sufficiently small $n$. \end{enumerate} \end{prop} \begin{proof} (i) We identify $J_{n}$ and $J_{2g-2-n}$ via the map $\mathcal{L} \longmapsto \mathcal{L}^{\vee} \otimes \omega_C$. Let $J_n = \coprod_{i}Z_i$ be a finite decomposition into locally closed subsets such that both of the sheaves $R^1p_{2*}(\mathcal{P}^{\vee}_n \otimes p_1^*V^{\vee} \otimes p_1^*\omega_C)$ on $J_n$ and $R^1p_{2*}(\mathcal{P}^{\vee}_{2g-2-n} \otimes p_1^*V \otimes p_1^*\omega_C)$ on $J_{2g-2-n}$ are free on each $Z_i$. By the Riemann-Roch theorem, the difference of the ranks of these sheaves is $r(n+1-g)+\deg V$. So we have \[ [X_n(V)|_{Z_i}] - \mathbb{L}^{r(n+1-g)+\deg V}[X_{2g-2-n}(V^{\vee})|_{Z_i}] = [\mathbb{P}^{r(n+1-g)+\deg V-1}][Z_i] \] for each $i$. Summing up these equalities gives the desired result. (ii) This is clear from Lemma \ref{lem:coh and base chg} and the fact that there is a constant $n(V)$ depending only on $V$ such that $H^0(C_{k'},\mathcal{L} \otimes V_{k'}) = 0$ for $\deg \mathcal{L} \leq n(V)$. \end{proof} \subsection{Rationality and functional equations} \begin{lem} \label{lem:lem on power series with coef P^n} \begin{enumerate} \item Let $a,b$ be integers. The power series \[ g(t) = (t-1)(t-\mathbb{L}^{-a})\sum_{n \geq 0}[\mathbb{P}^{an+b}]t^n \in \mathcal{M}[[t]] \] is a polynomial. The value of $g(t)$ at $t=\mathbb{L}^{-a}$ is $\mathbb{L}^{b-a}\bigl(1-[\mathbb{P}^{-a}]\bigr)$. \item The series $\sum_{n < 0}[\mathbb{P}^{an+b}]t^n$ is also a rational function of $t^{-1}$, and as an element of $\mathcal{M}(t^{-1}) = \mathcal{M}(t)$ we have \[ \sum_{n < 0}[\mathbb{P}^{an+b}]t^n + \sum_{n \geq 0}[\mathbb{P}^{an+b}]t^n = 0.
\] \end{enumerate} \end{lem} \begin{proof} By the recursive relation $[\mathbb{P}^{an+b}] = \mathbb{L}[\mathbb{P}^{an+b-1}]+1$, we easily reduce the proof to the case $b = 0$. Then, by a direct computation, one can show that \[ (t-1)(t-\mathbb{L}^{-a})\sum_{n \geq 0}[\mathbb{P}^{an}]t^n = \mathbb{L}^{-a}\bigl(1+\mathbb{L}[\mathbb{P}^{a-2}]t\bigr), \] and the statement of the lemma follows from this easily. \end{proof} Our main motivic result is the following theorem. \begin{thm} \label{thm:motivic main theorem} The product $\zeta(t)Z\bigl(\mathbb{P}(V),t\bigr)$ is rational as a series in $\mathcal{M}((t))$ with denominator $(t-1)(t-\mathbb{L}^{-r})$. The value of $(t-1)(t-\mathbb{L}^{-r})\zeta(t)Z\bigl(\mathbb{P}(V),t\bigr)$ at $t = \mathbb{L}^{-r}$ is $\mathbb{L}^{-rg-1+\deg V}\bigl(1-[\mathbb{P}^{-r}]\bigr)[J]$. Moreover, these functions satisfy the following functional equation: \[ \zeta(t)Z\bigl(\mathbb{P}(V),t\bigr) = \mathbb{L}^{\deg V+r(g-1)}t^{2g-2}\zeta(\mathbb{L}^{-r}t^{-1})Z\bigl(\mathbb{P}(V^{\vee}),\mathbb{L}^{-r}t^{-1}\bigr) \] in $\mathcal{M}((t))$. \end{thm} \begin{proof} By Proposition \ref{prop:motivic R-R}, it follows that \[ \zeta(t)Z\bigl(\mathbb{P}(V),t\bigr) = (\text{polynomial}) + \sum_{n \geq 0}[\mathbb{P}^{r(n+1-g)+\deg V - 1}][J]t^n. \] Then the rationality and the result on the behaviour at $t = \mathbb{L}^{-r}$ follow from Lemma \ref{lem:lem on power series with coef P^n} (i). Moreover, by Proposition \ref{prop:motivic R-R} (i), we have \[ \zeta(t)Z\bigl(\mathbb{P}(V),t\bigr) = \sum_{n \in \mathbb{Z}}\mathbb{L}^{r(n+1-g)+\deg V}\bigl[X_{2g-2-n}(V^{\vee})\bigr]t^n + \sum_{n \in \mathbb{Z}}[\mathbb{P}^{r(n+1-g)+\deg V-1}][J]t^n. \] The second summation is zero by Lemma \ref{lem:lem on power series with coef P^n} (ii).
Then, by substituting $m$ for $2g-2-n$, we obtain \begin{align*} \zeta(t)Z\bigl(\mathbb{P}(V),t\bigr) &= \mathbb{L}^{r(1-g)+\deg V}\sum_{m \in \mathbb{Z}}\mathbb{L}^{r(2g-2-m)}\bigl[X_m(V^{\vee})\bigr]t^{2g-2-m} \\ &= \mathbb{L}^{r(g-1)+\deg V}t^{2g-2}\sum_{m \in \mathbb{Z}}\bigl[X_m(V^{\vee})\bigr](\mathbb{L}^{-r}t^{-1})^{m} \\ &= \mathbb{L}^{r(g-1)+\deg V}t^{2g-2}\zeta(\mathbb{L}^{-r}t^{-1})Z\bigl(\mathbb{P}(V^{\vee}),\mathbb{L}^{-r}t^{-1}\bigr). \end{align*} \end{proof} In the situation where $[\mathbb{G}_m]$ is invertible, the residue of $Z\bigl(\mathbb{P}(V),t\bigr)$ at $t = \mathbb{L}^{-r}$ takes a form very similar to that in Theorem \ref{thm:main theorem}. \begin{cor} Let $R$ be a field and $\mu \colon \mathcal{M} \longrightarrow R$ be a ring homomorphism such that $\mu(\mathbb{L}) \neq 1$. Let $\zeta_{\mu}(t) \in R[[t]]$ (resp. $Z_{\mu}\bigl(\mathbb{P}(V),t\bigr) \in R((t))$) be the power series obtained by evaluating each coefficient of $\zeta(t)$ (resp. $Z\bigl(\mathbb{P}(V),t\bigr)$) by $\mu$. Assume that $\zeta_{\mu}\bigl(\mu(\mathbb{L})^{-r}\bigr) \neq 0$. Then $Z_{\mu}\bigl(\mathbb{P}(V),t\bigr)$ is a rational function of $t$ with a unique simple pole at $t = \mu(\mathbb{L})^{-r}$ and \[ \Res_{t = \mu(\mathbb{L})^{-r}}Z_{\mu}\bigl(\mathbb{P}(V),t\bigr) = -\frac{\mu([J])\mu(\mathbb{L})^{\deg V}}{\mu([\mathbb{G}_m])\mu(\mathbb{L})^{rg}\zeta_{\mu}\bigl(\mu(\mathbb{L})^{-r}\bigr)}. \] \end{cor} \begin{proof} This follows from Theorem \ref{thm:motivic main theorem} and the fact that \[ \frac{1-\mu([\mathbb{P}^{-r}])}{\mu(\mathbb{L})^{-r}-1} = - \frac{\mu(\mathbb{L})}{\mu(\mathbb{L})-1} = - \frac{\mu(\mathbb{L})}{\mu([\mathbb{G}_m])}. \] \end{proof} \section{Hirzebruch surfaces} The methods in \S\ref{sec:Height zeta functions} are also useful for studying height zeta functions of projective bundles over a variety $X$ when the height zeta function $Z(X,s)$ of $X$ is well understood. To demonstrate this, we study height zeta functions of Hirzebruch surfaces.
\subsection{Setting} \label{ssect:Setting} Fix an Arakelov vector bundle $V$ of rank $2$ over $F$ and let $\mathbb{P}^1 = \mathbb{P}(V)$ denote the associated projective line. For an integer $e \geq 0$, let $F_e = \mathbb{P}\bigl(\mathcal{O}_{\mathbb{P}^1} \oplus \mathcal{O}_{\mathbb{P}^1}(e)\bigr)$ be the Hirzebruch surface and $\pi \colon F_e \longrightarrow \mathbb{P}^1$ be the canonical projection. Let $\mathcal{O}(a,b)$ denote the sheaf $\mathcal{O}_{F_e}(a) \otimes \pi^*\mathcal{O}_{\mathbb{P}^1}(b)$ on $F_e$, where $\mathcal{O}_{F_e}(1)$ is the relative Serre twisting sheaf of the projective bundle $\pi \colon F_e \longrightarrow \mathbb{P}^1$. It is known that any invertible sheaf on $F_e$ is isomorphic to the sheaf $\mathcal{O}(a,b)$ for some $(a,b) \in \mathbb{Z}^2$, and that $\mathcal{O}(a,b)$ is ample if and only if $a > 0$ and $b > ae$ (see the section on ruled surfaces in \cite{Hartshorne}). Fix a pair $(a,b)$ satisfying this condition. For a technical reason we assume that $e \geq 2$. For a rational point $P \in F_e(F)$, the $F$-vector space $P^*\mathcal{O}(a,b)$ is naturally endowed with the structure of an Arakelov line bundle. We define the height $H(P) = H_{a,b}(P)$ as the norm $N\bigl(P^*\mathcal{O}(a,b)\bigr)$. If $Q = \pi(P)$, then $H_{a,b}(P)$ is equal to $H_{\pi^{-1}(Q)}(P)^a H_{\mathbb{P}^1}(Q)^b$ (here we regard $\pi^{-1}(Q)$ as the projective space associated to the Arakelov vector bundle $Q^*\bigl(\mathcal{O}_{\mathbb{P}^1}\oplus\mathcal{O}_{\mathbb{P}^1}(e)\bigr)$ and consider the height function on it defined in \S\ref{ssec:Height}). \begin{lem} The height zeta function $Z(F_e,s) = \sum_{P \in F_e(F)}H_{a,b}(P)^{-s}$ satisfies \[ Z(F_e,s) = \sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-bs}Z\bigl(\pi^{-1}(Q),as\bigr). \] \end{lem} \begin{proof} This is clear from the above description of the height.
\end{proof} \subsection{Strategy} We are going to show that $Z(F_e,s)$ defines a meromorphic function on the domain $D = \bigl\{s \in \mathbb{C} \bigm| \Re(s) > \max\{1/a,(e+2)/b\} \bigr\}$, and that its largest pole is at $s = \max\{2/a,2/(b-ae)\}$. Note that $(e+2)/b < (e+2)/(ae) = 1/a + 2/(ae) \leq 2/a$ since $e \geq 2$, so $s = 2/a$ is always in $D$. By the formula (\ref{eq:star}) in \S\ref{ssect:Application of Riemann-Roch}, the function $w\xi(as)Z\bigl(\pi^{-1}(Q),as\bigr)$ is the sum of the following four functions: \begin{align*} G_{1,Q}(s) &= - \frac{\alpha|\Delta|^{-\frac{as}{2}}}{as},\\ G_{2,Q}(s) &= \frac{\alpha N\bigl(Q^*(\mathcal{O}_{\mathbb{P}^1}\oplus\mathcal{O}_{\mathbb{P}^1}(e))\bigr)|\Delta|^{-\frac{as}{2}}}{as-2} = \frac{\alpha H_{\mathbb{P}^1}(Q)^e|\Delta|^{-\frac{as}{2}}}{as-2}, \\ G_{3,Q}(s) &= |\Delta|^{1-as}H_{\mathbb{P}^1}(Q)^e\int_{E_{-}}N(L)^{as-2}\varphi\bigl(L\oplus(L\otimes Q^*\mathcal{O}_{\mathbb{P}^1}(-e))\bigr)dL, \\ G_{4,Q}(s) &= \int_{E_{-}}N(L)^{-as}\varphi\bigl(L\oplus(L\otimes Q^*\mathcal{O}_{\mathbb{P}^1}(e))\bigr)dL. \end{align*} We now locate the largest pole in the domain $D$ of the series $F_i(s) = \sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-bs}G_{i,Q}(s)$ for each $i \in \{1,2,3,4\}$. \subsection{First three parts} The first part $F_1(s)$ is \[ F_1(s) = - \frac{\alpha|\Delta|^{-\frac{as}{2}}}{as}\sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-bs} = - \frac{\alpha|\Delta|^{-\frac{as}{2}}}{as}Z\bigl(\mathbb{P}^1,bs\bigr). \] By Theorem \ref{thm:main theorem}, the series $Z(\mathbb{P}^1,bs)$ is holomorphic for $\Re(s) > 2/b$. Then $F_1(s)$ is holomorphic in $D$ because we have $2/b < 1/a$ from $b > ae \geq 2a$.
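The elementary inequalities invoked here ($2/b < 1/a$, and $(e+2)/b < 2/a$, so that $s = 2/a \in D$) follow from $a \geq 1$, $e \geq 2$ and $b > ae$ as displayed; as an informal cross-check, they can also be scanned by exact rational arithmetic over a test range (Python standard library only):

```python
# Brute-force check of the elementary inequalities behind the domain D:
# for a >= 1, e >= 2 and b > a*e (the ampleness condition on O(a,b)),
# we should have (e+2)/b < 2/a and 2/b < 1/a.
from fractions import Fraction

for a in range(1, 15):
    for e in range(2, 15):
        for b in range(a * e + 1, a * e + 30):
            assert Fraction(e + 2, b) < Fraction(2, a)
            assert Fraction(2, b) < Fraction(1, a)

print("inequalities (e+2)/b < 2/a and 2/b < 1/a hold on the test range")
```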
The second part is \[ F_2(s) = \frac{\alpha |\Delta|^{-\frac{as}{2}}}{as-2}\sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-(bs-e)} = \frac{\alpha |\Delta|^{-\frac{as}{2}}}{as-2}Z\bigl(\mathbb{P}^1,bs-e\bigr). \] Therefore $F_2(s)$ has its largest pole at $s=2/a$ with order $1$ and residue $\alpha a^{-1} |\Delta|^{-1} Z\bigl(\mathbb{P}^1, 2b/a - e\bigr)$. The third part is \[ F_3(s) = |\Delta|^{1-as}\sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-(bs-e)}\int_{E_{-}}N(L)^{as-2}\varphi\bigl(L\oplus(L\otimes Q^*\mathcal{O}_{\mathbb{P}^1}(-e))\bigr)dL. \] Since the line bundle $Q^*\mathcal{O}_{\mathbb{P}^1}(-e)$ is embedded into $V^{\otimes e}$, we have an estimate \[ \int_{E_{-}}\bigl|N(L)^{as-2}\bigr|\varphi\bigl(L\oplus(L\otimes Q^*\mathcal{O}_{\mathbb{P}^1}(-e))\bigr)dL \leq \int_{E_{-}}\bigl|N(L)^{as-2}\bigr|\varphi\bigl(L\oplus(L\otimes V^{\otimes e})\bigr)dL, \] where the latter integral converges for any $s \in \mathbb{C}$ by Proposition \ref{prop:R-R and vanishing} (ii) and is independent of $Q$. Therefore the third part has no pole in the domain $D$. \subsection{The fourth part} In general, for two Arakelov vector bundles $V,W$ we have $\varphi(V\oplus W) = \varphi(V) + \varphi(W) + \varphi(V)\varphi(W)$, which follows from $\#H^0(V\oplus W) = \#H^0(V)\#H^0(W)$. Then the fourth part $F_4(s)$ is divided into the sum $F_4^{(1)}(s) + F_4^{(2)}(s) + F_4^{(3)}(s)$ where \begin{align*} F_4^{(1)}(s) &= \sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-bs}\int_{E_{-}}N(L)^{-as}\varphi(L)dL, \\ F_4^{(2)}(s) &= \sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-bs}\int_{E_{-}}N(L)^{-as}\varphi\bigl(L\otimes Q^*\mathcal{O}_{\mathbb{P}^1}(e)\bigr)dL, \\ F_4^{(3)}(s) &= \sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-bs}\int_{E_{-}}N(L)^{-as}\varphi(L)\varphi\bigl(L\otimes Q^*\mathcal{O}_{\mathbb{P}^1}(e)\bigr)dL.
\end{align*} \subsubsection*{I} The series $F^{(1)}_4(s)$ is the product of $Z\bigl(\mathbb{P}^1,bs\bigr)$ with a holomorphic function. In particular, it has no pole in the domain $D$. \subsubsection*{II} In the series $F^{(2)}_4(s)$, replacing $L \otimes Q^*\mathcal{O}_{\mathbb{P}^1}(e)$ by $L$ yields \begin{align*} F_4^{(2)}(s) &= \sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-(b-ae)s}\int_{N(L)\leq\sqrt{|\Delta|}H_{\mathbb{P}^1}(Q)^e}N(L)^{-as}\varphi(L)dL \\ &= wZ\bigl(\mathbb{P}^1,(b-ae)s\bigr)\xi(as) - F^{(2)'}_4(s) \end{align*} where \[ F^{(2)'}_4(s) = \sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-(b-ae)s}\int_{N(L)\geq\sqrt{|\Delta|}H_{\mathbb{P}^1}(Q)^e}N(L)^{-as}\varphi(L)dL. \] By the Riemann-Roch theorem, $\varphi(L) = \varphi(L^{\vee}\otimes\omega)N(L)|\Delta|^{-\frac{1}{2}} + N(L)|\Delta|^{-\frac{1}{2}} - 1$. Then $F^{(2)'}_4$ is divided as follows: \begin{align*} F^{(2)'}_4(s) &= |\Delta|^{-\frac{1}{2}}\sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-(b-ae)s}\int_{N(L)\geq\sqrt{|\Delta|}H_{\mathbb{P}^1}(Q)^e}N(L)^{-as+1}\varphi(L^{\vee}\otimes\omega)dL \\ &+ |\Delta|^{-\frac{1}{2}}\sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-(b-ae)s}\int_{N(L)\geq\sqrt{|\Delta|}H_{\mathbb{P}^1}(Q)^e}N(L)^{-as+1}dL \\ &- \sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-(b-ae)s}\int_{N(L)\geq\sqrt{|\Delta|}H_{\mathbb{P}^1}(Q)^e}N(L)^{-as}dL. \end{align*} The integrals in the second and third terms can be explicitly computed by the formula $\int_{\Pic(F)}f(\deg L)dL = \alpha \int_{\mathbb{R}}f(t)dt$ in \S\ref{subs:Topologies and measures}. The results are \begin{align*} \int_{N(L)\geq\sqrt{|\Delta|}H_{\mathbb{P}^1}(Q)^e}N(L)^{-as+1}dL &= \frac{\alpha \sqrt{|\Delta|}^{1-as}H_{\mathbb{P}^1}(Q)^{e(1-as)}}{as-1}\quad\text{for }\Re(s)>\frac{1}{a}, \\ \int_{N(L)\geq\sqrt{|\Delta|}H_{\mathbb{P}^1}(Q)^e}N(L)^{-as}dL &= \frac{\alpha\sqrt{|\Delta|}^{-as}H_{\mathbb{P}^1}(Q)^{-eas}}{as}\quad\text{for }\Re(s)>0. 
\end{align*} From this it is easily checked that the largest pole of the second (resp. third) term is at $s = (e+2)/b$ (resp. $s = 2/b$). The first term in the above expression of $F^{(2)'}_4(s)$ is holomorphic for $\Re(s) > 1/a$. In fact, since $N(L^{\vee}\otimes\omega) \leq \sqrt{|\Delta|}H_{\mathbb{P}^1}(Q)^{-e}$, we have $\varphi(L^{\vee}\otimes\omega) \leq C_1 \exp \bigl(-C_2 H_{\mathbb{P}^1}(Q)^{C_3}\bigr)$ for some $C_1,C_2,C_3 > 0$ by Proposition \ref{prop:R-R and vanishing} (ii). Note also that the value of the integral $\int_{N(L)\geq\sqrt{|\Delta|}H_{\mathbb{P}^1}(Q)^e}N(L)^{-as+1}dL$ is bounded with respect to $Q$ (for $\Re(s) > 1/a$). From these facts one can easily see that the first term converges absolutely for $\Re(s) > 1/a$. We have seen that the pole of the series $F^{(2)}_4(s)$ in $D$ arises from the term $wZ\bigl(\mathbb{P}^1,(b-ae)s\bigr)\xi(as)$. Its largest pole is at $s = 2/(b-ea)$ (when $2/(b-ea) \in D$). It is simple and the residue is given by \[ \lim_{s \to 2/(b-ae)}w\biggl(s-\frac{2}{b-ae}\biggr)Z\bigl(\mathbb{P}^1,(b-ae)s\bigr)\xi(as) = \frac{\xi(\frac{2a}{b-ae})}{b-ae} \cdot \frac{\alpha N(V)}{|\Delta| \xi(2)}. \] \subsubsection*{III} Now we consider the last part $F_4^{(3)}(s)$. Divide $E_{-}$ into $E'_{-}(Q) \cup E''_{-}(Q)$ where $E_{-}'(Q) = \{L \in E_{-} \mid N(L)H(Q)^{e/2} \leq \sqrt{|\Delta|}\}$ and $E_{-}''(Q) = \{L \in E_{-} \mid N(L)H(Q)^{e/2} \geq \sqrt{|\Delta|} \}$. Associated to this, we divide the series $F^{(3)}_4(s)$ into $F^{(3)'}_4(s) + F^{(3)''}_4(s)$ where \begin{align*} F_4^{(3)'}(s) &= \sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-bs}\int_{E'_{-}(Q)}N(L)^{-as}\varphi(L)\varphi\bigl(L\otimes Q^*\mathcal{O}_{\mathbb{P}^1}(e)\bigr)dL, \\ F_4^{(3)''}(s) &= \sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-bs}\int_{E''_{-}(Q)}N(L)^{-as}\varphi(L)\varphi\bigl(L\otimes Q^*\mathcal{O}_{\mathbb{P}^1}(e)\bigr)dL. \end{align*} First we consider $F^{(3)'}_4(s)$, so assume that $L \in E_{-}'(Q)$. 
Then $N(L) \leq \sqrt{|\Delta|} H(Q)^{-e/2}$, and by Proposition \ref{prop:R-R and vanishing} (ii), $\sqrt{\varphi(L)} \leq C_1\exp(-C_2H(Q)^{C_3})$ for some $C_1,C_2,C_3 > 0$. Moreover, we have $N\bigl(L \otimes Q^*\mathcal{O}_{\mathbb{P}^1}(e)\bigr) \leq \sqrt{|\Delta|}H(Q)^{e/2}$ and $\varphi\bigl(L \otimes Q^*\mathcal{O}_{\mathbb{P}^1}(e)\bigr) \leq C_4H(Q)^{e/2}$ for some $C_4 > 0$ by Proposition \ref{prop:R-R and vanishing} (iii). Therefore, we have an estimate \begin{align*} &\Bigl|H_{\mathbb{P}^1}(Q)^{-bs}N(L)^{-as}\varphi(L)\varphi\bigl(L\otimes Q^*\mathcal{O}_{\mathbb{P}^1}(e)\bigr)\Bigr| \\ &\leq C_5\Bigl|H_{\mathbb{P}^1}(Q)^{-bs+e/2}\exp\bigl(-C_2H_{\mathbb{P}^1}(Q)^{C_3}\bigr)\Bigr|\bigl|N(L)^{-as}\sqrt{\varphi(L)}\bigr| \end{align*} for some $C_5 > 0$. Since $E_{-}'(Q) \subset E_{-}$, the integral $\int_{E_{-}'(Q)}N(L)^{-as}\sqrt{\varphi(L)}dL$ converges for any $s \in \mathbb{C}$ and its value is bounded with respect to $Q$. So the series $F^{(3)'}_4(s)$ converges for any $s \in \mathbb{C}$. Next we work with $F^{(3)''}_4(s)$. By the Riemann-Roch theorem, \begin{align*} F^{(3)''}_4(s) &= |\Delta|^{-\frac{1}{2}}\sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-(bs-e)}\int_{E''_{-}(Q)}N(L)^{-(as-1)}\varphi(L)\varphi\bigl(L^{\vee}\otimes Q^*\mathcal{O}_{\mathbb{P}^1}(-e)\otimes\omega\bigr)dL \\ &+ |\Delta|^{-\frac{1}{2}}\sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-(bs-e)}\int_{E''_{-}(Q)}N(L)^{-(as-1)}\varphi(L)dL \\ &- \sum_{Q \in \mathbb{P}^1(F)}H_{\mathbb{P}^1}(Q)^{-bs}\int_{E''_{-}(Q)}N(L)^{-as}\varphi(L)dL. \end{align*} All three integrals in the above expression converge for any $s \in \mathbb{C}$, and their values are bounded with respect to $Q$. So the series $F^{(3)''}_4(s)$ is holomorphic in the domain $D$. \subsection{Conclusion} We have proved the following theorem. 
\begin{thm} \label{thm:Hirzebruch main theorem} The height zeta function $Z(F_e, s) = \sum_{P \in F_e(F)}H_{a,b}(P)^{-s}$ defines a meromorphic function in the domain $D = \bigl\{s \in \mathbb{C} \bigm| \Re(s) > \max\{1/a,(e+2)/b\} \bigr\}$. It has a simple pole at $s = 2/a$ with residue \[ \frac{\alpha Z(\mathbb{P}^1,2b/a-e)}{a w |\Delta| \xi(2)}, \] and, when $2/(b-ea) \in D$, another simple pole at $s = 2/(b-ea)$ with residue \[ \frac{\alpha N(V)}{(b-ae)w|\Delta|\xi(2)}. \] There are no other poles in $D$. \end{thm} \subsection{Discussions} \label{ssect:Discussions} Consider the minimal section of $\pi \colon F_e = \mathbb{P}\bigl(\mathcal{O}\oplus\mathcal{O}(e)\bigr) \longrightarrow \mathbb{P}^1$ which corresponds to the line subbundle $\mathcal{O}(e) \subset \mathcal{O}\oplus\mathcal{O}(e)$. Let $C \subset F_e$ be its image. Then the above result shows that the points on $C$ are dominant in the distribution of rational points of $F_e$ when $2/(b-ea) > 2/a$, i.e., when $b < (e+1)a$. In fact, the restriction $\mathcal{O}(a,b)|_C$ is isomorphic to $\mathcal{O}_{\mathbb{P}^1}(b-ea)$ if we identify $C$ and $\mathbb{P}^1$ via the projection. Therefore by Theorem \ref{thm:main theorem} the height zeta function of $C$ with respect to $\mathcal{O}(a,b)|_C$ is equal to $Z\bigl(\mathbb{P}^1,(b-ea)s\bigr)$ and the principal part of its largest pole is equal to that of $Z(F_e,s)$ when $b<(e+1)a$. Before considering the case $b \geq (e+1)a$, we recall the conjecture of Batyrev and Manin (Conjecture B of \cite{BatyrevManin}) on the abscissa of convergence of height zeta functions of Fano varieties: Let $X$ be a Fano variety over $F$, and $L$ be an ample line bundle on $X$. Let $\alpha(L) = \inf\{A \in \mathbb{R} \mid AL + K_X \in \Lambda_{\text{eff}}\}$ where $\Lambda_{\text{eff}} \subset \Pic(X)_{\mathbb{R}}$ denotes the effective cone of $X$. 
Then for any sufficiently big finite extension $F' \supset F$ and for any sufficiently small Zariski open subset $U \subset X$, the abscissa of convergence of $Z(U_{F'},H_{L};s)$ is equal to $\alpha(L)$. Now return to our case $X=F_e$ and $b \geq (e+1)a$. It is known that the canonical sheaf $K_{F_e}$ is isomorphic to $\mathcal{O}(-2,-2-e)$ (see the section on ruled surfaces in \cite{Hartshorne}). Also recall that the effective cone $\Lambda_{\text{eff}} \subset \Pic(F_e)_{\mathbb{R}} \cong \mathbb{R}^2$ is $\{(a,b) \in \mathbb{R}^2 \mid a \geq 0, b \geq ae \}$. Therefore the number $2/a$ (i.e., the abscissa of convergence of the height zeta function $Z(F_e,s)$) is equal to $\inf\{A \in \mathbb{R} \mid A\mathcal{O}(a,b) + K_{F_e} \in \Lambda_{\text{eff}}\}$. In summary, Batyrev and Manin's description of the abscissa of convergence is valid for Hirzebruch surfaces $F_e \ (e \geq 2)$ with respect to ample line bundles $L=\mathcal{O}(a,b) \ \bigl(b \geq (e+1)a\bigr)$ without shrinking $F_e$ to a small open subset. \section{Tauberian theorem} We include here the statement of the Tauberian theorem used to deduce the asymptotic behaviour of the counting function $n(X,H_L;B)$ from properties of the height zeta function $Z(X,H_L;s)$. \begin{thm}[Tauberian theorem] \label{thm:Tauberian} Let $\{\lambda_n\}_{n \in \mathbb{N}}$ be a non-decreasing sequence of positive real numbers. Let $a \in \mathbb{R}$ be a positive real number and $b \in \mathbb{N}$ a positive integer. Assume that the Dirichlet series $Z(s) = \sum_{n \in \mathbb{N}}\lambda_n^{-s}$ admits a representation \[ Z(s) = \frac{g(s)}{(s-a)^b} + h(s) \] on some open neighbourhood of $\bigl\{ s \in \mathbb{C} \bigm| \Re(s) \geq a \bigr\}$, where $g(s)$ and $h(s)$ are holomorphic functions such that $g(a) \neq 0$. Then \[ N(B) = \#\{n \in \mathbb{N} \mid \lambda_n \leq B\} = \frac{g(a)}{a(b-1)!}B^{a}(\log B)^{b-1}\bigl(1+o(1)\bigr) \quad \text{as } B \to \infty. 
\] \end{thm} \begin{proof} Apply Theorem III in \cite{Delange} to $\alpha(t) = N(e^t)$. Note that $f(s)$ in \cite{Delange} is equal to $Z(s)/s$. \end{proof} \end{document}
\begin{document} \begin{center}{\Large\bf Plurisubharmonic geodesics and interpolating sets} \end{center} \begin{center}{\large Dario Cordero-Erausquin and Alexander Rashkovskii} \end{center} \begin{abstract} We apply a notion of geodesics of plurisubharmonic functions to interpolation of compact subsets of $\mathbb{C}^n$. Namely, two non-pluripolar, polynomially closed, compact subsets of $\mathbb{C}^n$ are interpolated as level sets $L_t=\{z: u_t(z)=-1\}$ for the geodesic $u_t$ between their relative extremal functions with respect to any ambient bounded domain. The sets $L_t$ are described in terms of certain holomorphic hulls. In the toric case, it is shown that the relative Monge-Amp\`ere capacities of $L_t$ satisfy a dual Brunn-Minkowski inequality. \end{abstract} \section{Introduction} In the classical complex interpolation theory of Banach spaces, originated by Calder\'{o}n \cite{Ca} (see \cite{BL} and, for more recent developments, \cite{CEK} and references therein), a given family of Banach spaces $X_\xi$ parameterized by boundary points $\xi$ of a domain $C\subset\mathbb C^N$ gives rise to a family of Banach spaces $X_\zeta$ for all $\zeta\in C$. A basic setting is interpolation of two spaces, $X_0$ and $X_1$, for a partition $\{C_0, C_1\}$ of $\partial C$. More specifically, one can take $C$ to be the strip $0<\Re\, \zeta <1$ in the complex plane and $C_0, C_1$ the corresponding boundary lines; then the interpolated norms depend only on $t=\Re\,\zeta$. In the finite dimensional case $X_j=(\mathbb{C}^n,\|\cdot\|_j)$, $j=0,1$, they are defined in terms of the family of mappings $C\to\mathbb{C}^n$, bounded and analytic in the strip, continuous up to the boundary and tending to zero as ${\operatorname{Im}}\, \zeta\to\infty$, see details in \cite{BL}. In this setting, the volume of the unit ball $B_t$ of $(\mathbb{C}^n,\|\cdot\|_t)$, $0<t<1$, was proved in \cite{CE} to be a logarithmically concave function of $t$. 
When the given norms $\|\cdot\|_j$ on $\mathbb{C}^n$ are toric, i.e., satisfy $\|(z_1,\ldots,z_n)\|_j=\|(|z_1|,\ldots,|z_n|)\|_j$, the interpolated norms are toric as well and the balls $B_t$ are Reinhardt domains of $\mathbb{C}^n$ obtained as the multiplicative combinations (geometric means) of the balls $B_0$ and $B_1$. The logarithmic concavity implies that volumes of the multiplicative combinations \begin{equation}\label{geommean} K_t^\times = K_0^{1-t}\,K_1^t\subset\mathbb{R}^n \end{equation} of any two convex bounded neighbourhoods $K_0$ and $K_1$ of the origin in $\mathbb{R}^n$ satisfy the Brunn-Minkowski inequality \begin{equation}\label{volBM} {\operatorname{Vol}}(K_t^\times)\ge {\operatorname{Vol}}(K_0)^{1-t} {\operatorname{Vol}}(K_1)^t,\quad 0<t<1.\end{equation} Note also that in \cite{S2}--\cite{S4}, the interpolated spaces were related to convex hulls and complex geodesics with convex fibers. In particular, this put the interpolation in the context of analytic multifunctions. In this note, we develop a slightly different -- albeit close -- approach to the interpolation of compact, polynomially convex subsets of $\mathbb{C}^n$ by sets arising from a notion of plurisubharmonic geodesics. The technique originates from results on geodesics in the spaces of metrics on compact K\"{a}hler manifolds due to Mabuchi, Semmes, Donaldson, Berndtsson and others (see \cite{G12} and the bibliography therein). Its local counterpart for plurisubharmonic functions from Cegrell classes on domains of $\mathbb{C}^n$ was introduced in \cite{BB} and \cite{R17a}. We will need here a special case when the geodesics can be described as follows. 
Let $$A=\{\zeta\in{\mathbb C}:\:0< \log|\zeta| < 1\}$$ be the annulus bounded by the circles $$A_j=\{\zeta:\: \log|\zeta|=j\},\quad j=0,1.$$ Given two plurisubharmonic functions $u_0$ and $u_1$ in a bounded hyperconvex domain $\Omega\subset\mathbb{C}^n$, equal to zero on $\partial \Omega$, we consider the class $W(u_0,u_1)$ of all plurisubharmonic functions $u(z,\zeta)$ in the product domain $\Omega\times A$, such that $\limsup u(z,\zeta)\le u_j(z)$ for all $z\in\Omega$ as $\zeta\to A_j$. The function \begin{equation}\label{upenv} \widehat u(z,\zeta)=\sup\{u(z,\zeta):\: u\in W(u_0,u_1)\} \end{equation} belongs to the class and satisfies $\widehat u(z,\zeta)=\widehat u(z,|\zeta|)$, which gives rise to the functions $u_t(z):=\widehat u(z,e^t)$, $0<t<1$, the {\it geodesic} between $u_0$ and $u_1$. When the functions $u_j$ are bounded, the geodesic $u_t$ tends to $u_j$ as $t\to j$, uniformly on $\Omega$. One of the main properties of the geodesics is that they linearize the energy functional \begin{equation}\label{enfunc} {\mathcal E}(u)=\int_\Omega u(dd^c u)^n, \end{equation} see \cite{BB}, \cite{R17a} (where actually more general classes of plurisubharmonic functions are considered). Given two non-pluripolar compact sets $K_0,K_1\subset\mathbb{C}^n$, let $u_j$ denote the relative extremal functions of $K_j$, $j=0,1$, with respect to a bounded hyperconvex neighbourhood $\Omega$ of $K_0\cup K_1$, i.e., \begin{equation}\label{initial} u_j(z)=\omega_{K_j,\Omega}(z)=\limsup_{y\to z}\, \sup\{u(y):\: u\in {\operatorname{PSH}}_-(\Omega),\ u|_{K_j}\le -1\}, \end{equation} where ${\operatorname{PSH}}_-(\Omega)$ is the collection of all nonpositive plurisubharmonic functions in $\Omega$. The functions $u_j$ belong to ${\operatorname{PSH}}_-(\Omega)$ and satisfy $(dd^cu_j)^n=0$ on $\Omega\setminus K_j$, see \cite{Kl}. Assume, in addition, each $K_j$ to be polynomially convex (in the sense that it coincides with its polynomial hull). 
This implies $\omega_{K_j,\Omega'}=-1$ on $K_j$ for some (and thus any) bounded hyperconvex neighborhood $\Omega'$ of $K_j$, and that $\omega_{K_j,\Omega'}\in C(\overline{\Omega'})$. In particular, the functions $u_j$ equal $-1$ on $K_j$ and are continuous on $\overline\Omega$. The geodesics $u_t$ converge to $u_j$ uniformly as $t\to j$ \cite{R17a} and so, by the Walsh theorem, the upper envelope $\widehat u(z,\zeta)$ (\ref{upenv}) is continuous on $\Omega\times A$, which, in turn, implies $u_t\in C(\overline\Omega\times [0,1])$. As was shown in \cite{R17b}, the functions $u_t$ are in general different from the relative extremal functions of any subsets of $\Omega$. Consider nevertheless the sets where they attain their minimal value, $-1$: \begin{equation}\label{L_t} L_t=\{z\in \Omega:\: u_t(z)=-1\},\quad 0< t< 1.\end{equation} By the continuity of the geodesic at the endpoints, the sets $L_t$ converge (say, in the Hausdorff metric) to $K_j$ when $t\to j\in\{0,1\}$, and so they can be viewed as interpolations between $K_0$ and $K_1$. The curve $t\mapsto L_t$ can be in a natural way identified with the multifunction $\zeta\mapsto K_\zeta:=L_{\log|\zeta|}$. Note however that it is not an {\it analytic multifunction} (for the definition, see, e.g., \cite{O}, \cite{S1}, \cite{Po}) because its graph $\{(z,\zeta)\in \Omega\times A:\: \widehat u(z,\zeta)=-1\}$ is not pseudoconcave. In Section~2, we show that the interpolating sets $L_t$ can be represented as sections $K_t=\{z:\: (z,e^t)\in \widehat K\}$ of the holomorphic hull $\widehat K$ of the set \begin{equation}\label{hull}K^A:=(K_0\times A_0) \cup (K_1\times A_1)\subset {\mathbb C}^{n+1}\end{equation} with respect to functions holomorphic in $\mathbb{C}^n\times ({\mathbb C}\setminus\{0\})$. 
In Section~3, we study the relative Monge-Amp\`ere capacities $\operatorname{Cap}(L_t,\Omega)$ of the sets $L_t$; recall that for $K\Subset\Omega$, $$\operatorname{Cap}(K,\Omega)= \sup\{(dd^c u)^n(K):\: u\in{\operatorname{PSH}}_-(\Omega),\ u|_K\le -1\}=(dd^c\omega_{K,\Omega})^n(\Omega), $$ see \cite{Kl}. It was shown in \cite{R17a} that the function $t\mapsto\operatorname{Cap}(L_t,\Omega)$ is convex, which was achieved by using linearity of the energy functional (\ref{enfunc}) along the geodesics. In the case when $\Omega$ is the unit polydisk ${\mathbb D}^n$ and the $K_j$ are Reinhardt sets, the convexity of the Monge-Amp\`ere capacities was rewritten in \cite{R17b} as convexity of covolumes of certain unbounded convex subsets $P_t$ of the positive orthant $\mathbb{R}^n_+$ (that is, volumes of their complements in $\mathbb{R}^n_+$). Here, we use a convex geometry technique to prove Proposition~\ref{BMcovol} stating that actually the covolumes of the sets $P_t$ are {\sl logarithmically convex}. Since in this case the sets $L_t$ are exactly the geometric means $K_t^\times$ of $K_0$ and $K_1$, this implies the dual Brunn-Minkowski inequality for their Monge-Amp\`ere capacities, \begin{equation}\label{logconvcap}\operatorname{Cap}(K_t^\times,{\mathbb D}^n)\le \operatorname{Cap}(K_0,{\mathbb D}^n)^{1-t} \operatorname{Cap}(K_1,{\mathbb D}^n)^t,\quad 0<t<1.\end{equation} In addition, an equality here occurs for some $t\in (0,1)$ if and only if $K_0=K_1$. It is quite interesting that the volume of $K_t^\times$ satisfies the opposite Brunn-Minkowski inequality (\ref{volBM}), i.e., it is logarithmically {\sl concave}. Furthermore, so are the standard logarithmic capacity in the complex plane and the Newtonian capacity in $\mathbb{R}^n$ with respect to the Minkowski addition \cite{B1}, \cite{B2}, \cite{Ran}. 
The difference here is that the relative Monge-Amp\`ere capacity is, contrary to the logarithmic or Newtonian capacities, a local notion, which leads to the dual Brunn-Minkowski inequality (\ref{logconvcap}), exactly like for the covolumes of coconvex bodies \cite{KT}. A natural question that remains open is whether the logarithmic convexity of the relative Monge-Amp\`ere capacities is also true in the general, non-toric case. No non-trivial examples of (\ref{logconvcap}) in this setting are known so far. \section{Level sets as holomorphic hulls}\label{sect:holhulls} Let $K_0,K_1$ be two non-pluripolar compact subsets of a bounded hyperconvex domain $\Omega\subset\mathbb{C}^n$, and let $L_t=L_{t,\Omega}$ be the interpolating sets defined by (\ref{L_t}) for the geodesic $u_t=u_{t,\Omega}$ with the endpoints $u_j=\omega_{K_j,\Omega}$. We start with an observation that if the sets $K_j$ are polynomially convex, then the sets $L_t$ are actually independent of the choice of the domain $\Omega$ containing $K_0\cup K_1$. \begin{lemma}\label{indep} If $\Omega'$ and $\Omega''$ are bounded hyperconvex neighborhoods of non-pluripolar, polynomially convex, compact sets $K_0$ and $K_1$, then $L_{t,\Omega'}=L_{t,\Omega''}$. \end{lemma} \begin{proof} By the monotonicity of $\Omega\mapsto u_{t,\Omega}$, it suffices to show the equality for $\Omega'\Subset\Omega''$. Since $u_{t,\Omega''}\le u_{t,\Omega'}$, the inclusion $L_{t,\Omega'}\subset L_{t,\Omega''}$ is evident. Denote now $$\delta=-\inf\{u_{j,\Omega''}(z):\: z\in\partial\Omega',\ j=0,1\}\in(0,1).$$ Recall that the geodesics $u_{t,\Omega}$ come from the maximal plurisubharmonic functions $\widehat u_{\Omega}$ in $\Omega\times A$ for the annulus $A$ bounded by the circles $A_j$ where $\log|\zeta|=j$. 
Then the function $$\hat v:=\frac1{1-\delta}(\widehat u_{\Omega''}+\delta)\in{\operatorname{PSH}}(\Omega'\times A)\cap C(\overline{\Omega'\times A})$$ satisfies $(dd^c\hat v)^{n+1}=0$ in $\Omega'\times A$ and \begin{equation}\label{bdv}\lim \hat v(z,\zeta)= -1\ {\rm \ as\ } (z,\zeta)\to K_j\times A_j.\end{equation} Moreover, since $\hat v\ge 0$ on $\partial \Omega'\times A$ and its restriction to each $\Omega'\times A_j$ satisfies $(dd^c\hat v)^{n}=0$ on $\Omega'\setminus K_j$, the boundary conditions (\ref{bdv}) imply $$ \lim \hat v(z,\zeta)\ge u_{j,\Omega'}\ {\rm as\ } \zeta\to A_j.$$ Therefore, we have $\hat v\ge \widehat u_{\Omega'}$ in the whole $\Omega'\times A$. If $z\in L_{t,\Omega''}$, this gives us $-1=\hat v(z,e^t)\ge u_{t,\Omega'}(z)$ and so $z\in L_{t,\Omega'}$, which completes the proof. \end{proof} The next step is to compare the sets $L_t$ with other interpolating sets, $K_t$, defined as follows. Set \begin{equation}\label{hatK} \widehat K=\widehat K(\Omega)=\{(z,\zeta)\in \Omega\times A:\: u(z,\zeta)\le M(u)\ \forall u\in {\operatorname{PSH}}_-(\Omega\times A)\},\end{equation} where $M(u)=\max_j M_j(u)$ and $$ M_j(u)=\limsup\,u(z,\zeta)\ {\rm as \ }(z,\zeta)\to K_j\times A_j, \quad j=0,1.$$ Note that the set $\widehat K$ will not change if one replaces ${\operatorname{PSH}}_-(\Omega\times A)$ by the collection of all bounded from above (or just bounded) plurisubharmonic functions on $\Omega\times A$. Denote by $\widehat K_\zeta$ the section of $\widehat K$ over $\zeta\in A$: $$\widehat K_\zeta=\widehat K_\zeta(\Omega) = \{z\in\Omega:\: (z,\zeta)\in \widehat K\},\quad \zeta\in A.$$ The set $\widehat K$ is invariant under rotation of the $\zeta$-variable, so $\widehat K_\zeta$ depends only on $|\zeta|$. We set then $$K_t=\widehat K_{e^t},\quad 0<t<1.$$ \begin{theorem}\label{prop1}If $K_j$ are non-pluripolar, polynomially convex compact subsets of $\Omega$, then $L_t=K_t$ for all $0<t<1$. 
\end{theorem} \begin{proof} First, we prove the inclusion \begin{equation} \label{incl1} L_t \subset K_t, \end{equation} that is, \begin{equation} \label{Mbound} u(z,e^t)\le M(u)\quad \forall z\in L_t\end{equation} for all $u\in{\operatorname{PSH}}_-(\Omega\times A)$. By the scalings $u\mapsto c\, u$, we may assume that $u-\min_j M_j(u)\le 1$ on $\Omega\times A$. Then the function $$\phi(z,\zeta)=u(z,\zeta)- (1-\log|\zeta|)M_0(u) -(\log|\zeta|) M_1(u)-1$$ belongs to ${\operatorname{PSH}}_-(\Omega\times A)$ and $$\limsup\phi(z,\zeta)\le -1 {\rm\ as \ }(z,\zeta)\to K_j\times A_j.$$ In other words, $\phi_t(z):=\phi(z,e^t)$ is a subgeodesic for $u_0$ and $u_1$, so $\phi_t\le u_t$. Therefore, $\phi_t\le -1$ on $L_t$, which gives us (\ref{Mbound}). To get the reverse inclusion, assume $z\in K_t$. Then, by definition of $\widehat K$, we get $u_t(z)\le M(u_t)=-1$ and, since $u_t\ge -1$ everywhere, $u_t(z)=-1$. \end{proof} The set $\widehat K$ can actually be represented as a holomorphic hull of the set $$K^A=(K_0\times A_0) \cup (K_1\times A_1),$$ which is similar to what one gets in the classical interpolation theory. This can be concluded by standard arguments relating plurisubharmonic and holomorphic hulls (see, e.g., \cite{Range}). \begin{proposition}\label{loc} Let $K_0,K_1$ be two non-pluripolar, polynomially convex compact subsets of a bounded hyperconvex domain $\Omega\subset\mathbb{C}^n$. Then the set $\widehat K$ defined by (\ref{hatK}) is the holomorphic hull of the set $K^A$ with respect to the collection of all functions holomorphic on $\Omega\times{\mathbb C}_*$ (here ${\mathbb C}_*={\mathbb C}\setminus\{0\}$). \end{proposition} \begin{proof} The domain $\Omega\times{\mathbb C}_*$ is pseudoconvex, so it suffices to show that $\widehat K$ is the ${\mathcal F}$-hull $\widehat K_{\mathcal F}$ of the set (\ref{hull}) with respect to ${\mathcal F}={\operatorname{PSH}}(\Omega\times{\mathbb C}_*)$. 
Take any hyperconvex domain $\Omega'$ such that $K_0\cup K_1\subset \Omega'\Subset\Omega$. Since ${\mathcal F}$ is contained in the collection of all bounded from above psh functions on $\Omega'\times A$, we have $\widehat K':=\widehat K(\Omega')\subset \widehat K_{\mathcal F}$. Moreover, by Lemma~\ref{indep} and Theorem~\ref{prop1}, we have $\widehat K'=\widehat K$, which implies $\widehat K\subset \widehat K_{\mathcal F}$. Let $u_t$ be the geodesic of $u_0,u_1$ in $\Omega$. Then its psh image $\hat u(z,\zeta)$ can be extended to $\Omega\times {\mathbb C}_*$ as $\hat U(z,\zeta)=u_0(z)-\log|\zeta|$ for $-\infty<\log|\zeta|\le 0$ and $\hat U(z,\zeta)=u_1(z)+\log|\zeta|-1$ for $1\le\log|\zeta|< \infty$. Indeed, the function $$\hat v(z,\zeta) =\max\{u_0(z)-\log|\zeta|, u_1(z)+\log|\zeta|-1\}$$ is psh on $\Omega\times A$, continuous on $\Omega\times \overline A$, and equal to $u_j$ on $\Omega\times A_j$. Therefore, it coincides with $\hat U$ on $\Omega\times({\mathbb C}_*\setminus A)$. Since $\hat v\le \hat u$ on $\Omega\times A$ and $\hat v=\hat u$ on $\Omega\times \partial A$, the claim is proved. Let $(z^*,\zeta^*)\in \widehat K_{\mathcal F}$. By the definition of $\widehat K_{\mathcal F}$, since $\hat U\in{\operatorname{PSH}}(\Omega\times{\mathbb C}_*)$, $$\hat u(z^*,\zeta^*)=\hat U(z^*,\zeta^*)\le \sup \{\hat U(z,\zeta):\: (z,\zeta)\in K^A\}=-1,$$ so $z^*\in K_t$ with $t=\log|\zeta^*|$. \end{proof} Finally, since the sets $L_t$ are independent of the choice of $\Omega$, we get the following description of the interpolated sets $K_t$. \begin{corollary}\label{cor} Let $K_0,K_1$ be two non-pluripolar, polynomially convex compact subsets of $\mathbb{C}^n$ and let $\Omega$ be a bounded hyperconvex domain containing $K_0\cup K_1$. Denote by $u_t$ the geodesic of the functions $u_j=\omega_{K_j,\Omega}$, $j=0,1$. 
Then for any $\zeta\in A$, $$K_t=\{z\in\Omega:\: u_{t}(z)=-1\}=\{z\in\mathbb{C}^n: |f(z,\zeta)|\le\|f\|_{K^A}\ \forall f\in{\mathcal O}(\mathbb{C}^n\times {\mathbb C}_*)\}$$ with $t=\log|\zeta|$. \end{corollary} {\it Remark.} Note that the considered hulls are taken with respect to functions holomorphic in $\mathbb{C}^n\times {\mathbb C}_*$ and not in ${\mathbb C}^{n+1}$ (that is, not the polynomial hulls). This reflects the fact that in the definition of $K^A$, the circles $A_0$ and $A_1$ may be interchanged. Since for any polynomial $P(z,\zeta)$ and any $\zeta$ inside the disc $|\zeta|<e$, we have $|P(z,\zeta)|\le \max \{|P(z,\xi)|: \: |\xi|=e\}$, each section of the {\sl polynomial} hull of $K^A$ must contain $K_1$, so such a hull does not provide any interpolation between $K_0$ and $K_1$. \section{Log-convexity of Monge-Amp\`ere capacities} Let, as before, $K_0$ and $K_1$ be non-pluripolar, polynomially convex compact subsets of a bounded hyperconvex domain $\Omega\Subset\mathbb{C}^n$, $u_t$ be the geodesic between $u_j= \omega_{K_j,\Omega}$, and let $K_t$ be the corresponding interpolating sets as described in Section~\ref{sect:holhulls}. As was mentioned, their relative Monge-Amp\`ere capacities satisfy the inequality $$\operatorname{Cap}(K_t,\Omega)\le(1-t)\,\operatorname{Cap}(K_0,\Omega) +t\,\operatorname{Cap}(K_1,\Omega).$$ Let now $\Omega={\mathbb D}^n$ and assume the sets $K_j$ to be Reinhardt. The polynomial convexity of $K_j$ means then that their logarithmic images $$Q_j={\operatorname{Log}\,} K_j=\{s\in\mathbb{R}^n_-:\: (e^{s_1},\ldots,e^{s_n})\in K_j\}$$ are complete convex subsets of $\mathbb{R}^n_-$, i.e., $Q_j+\mathbb{R}^n_-\subset Q_j$; when this is the case, we will also say that $K_j$ is complete logarithmically convex. In this situation, the sets $K_t$ are, as in the classical interpolation theory, the geometric means of the $K_j$. Note however that our approach extends the classical -- convex -- setting to a wider one. 
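As a simple illustration (with hypothetical radii $r_0,r_1\in(0,1)$, and using for $n=1$ the capacity formula recalled below, in the normalization where $\operatorname{Cap}(\{|z|\le r\},\mathbb{D})=1/\log(1/r)$), take $K_j=\{|z|\le r_j\}\subset\mathbb{D}$:

```latex
\[
  Q_j=(-\infty,\log r_j], \qquad
  K_t^\times=\{|z|\le r_0^{1-t}r_1^{t}\}, \qquad
  \operatorname{Cap}(K_t^\times,\mathbb{D})
  =\frac{1}{(1-t)\log(1/r_0)+t\log(1/r_1)}.
\]
```

Indeed, here $Q_j^\circ=[1/\log(1/r_j),\infty)$ and $\operatorname{Covol}(Q_j^\circ)=1/\log(1/r_j)$, and the right-hand side above, being the reciprocal of a positive affine function of $t$, is logarithmically convex, in agreement with (\ref{logconvcap}).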
\begin{proposition} The interpolating sets $K_t$ of two non-pluripolar, complete logarithmically convex, compact Reinhardt sets $K_0, K_1\subset {\mathbb D}^n$ coincide with $$K_t^\times:=K_0^{1-t}K_1^t=\{z:\: |z_l|=|\eta_l|^{1-t} |\xi_l|^{t}, \ 1\le l\le n,\ \eta\in K_0,\ \xi\in K_1\}.$$ \end{proposition} \begin{proof} We prove this by using the representation of the sets $K_t$ as $L_t=\{z:\: u_t(z)=-1\}$ and a formula for the geodesics in terms of the Legendre transform \cite{Gu}, \cite{R17a}. By and large, this is Calder\'{o}n's method. As was noted in \cite[Thm. 4.3]{R17a}, the inclusion $K_t^\times\subset L_t$ follows from convexity of the function $\check u_t(s)=u_t (e^{s_1},\ldots,e^{s_n})$ in $(s,t)\in \mathbb{R}^n_-\times (0,1)$ since $s\in {\operatorname{Log}\,} K_t^\times$ implies $\check u_t(s)\le -1$. To prove the reverse inclusion, we use a representation for $\check u_t$ given by \cite[Thm. 6.1]{R17b}: $$ \check u_t={\mathcal L}[(1-t)\max\{h_{Q_0}+1,0\} + t \max\{h_{Q_1}+1,0\}],\quad 0<t<1,$$ where $$ {\mathcal L}[f](y)=\sup_{x\in\mathbb{R}^n}\{\langle x,y\rangle -f(x)\}$$ is the {\it Legendre transform} of $f$, $$h_Q(a)=\sup_{s\in Q} \langle a,s\rangle,\quad a\in\mathbb{R}^n_+ $$ is the support function of a convex set $Q\subset\mathbb{R}^n_-$, and $Q_j={\operatorname{Log}\,} K_j$. Let $z\notin K_t^\times$; then one can assume that none of its coordinates equals zero, so the corresponding point $\xi=(\log|z_1|,\ldots,\log|z_n|)\in\mathbb{R}^n_-$ does not belong to $Q_t=(1-t)Q_0+tQ_1$. Therefore there exists $b\in\mathbb{R}^n_+$ such that $$ \langle b,\xi\rangle > h_{Q_t}(b)=(1-t)h_{Q_0}(b)+ t\, h_{Q_1}(b);$$ by the homogeneity, one can assume $h_{Q_0}(b),h_{Q_1}(b)>-1$ as well. 
Then \begin{eqnarray*} \check u_t(\xi) &=& \sup_{a\in\mathbb{R}^n_+}[\langle a,\xi\rangle - (1-t)\max\{h_{Q_0}(a)+1,0\} - t \max\{h_{Q_1}(a)+1,0\}]\\ &>& (1-t)[h_{Q_0}(b)-(h_{Q_0}(b)+1)] + t [h_{Q_1}(b)-(h_{Q_1}(b)+1)]=-1, \end{eqnarray*} so $\xi$ does not belong to ${\operatorname{Log}\,} L_t$ and consequently $z\notin L_t$. \end{proof} The crucial point for the Reinhardt case is a formula from \cite[Thm. 7]{ARZ} (see also \cite{R17b}) for the Monge-Amp\`ere capacity of complete logarithmically convex compact sets $K\subset{\mathbb D}^n$: $$\operatorname{Cap} (K,{\mathbb D}^n)=n!\,\operatorname{Covol}(Q^\circ):=n!\, {\operatorname{Vol}}(\mathbb{R}^n_+\setminus Q^\circ),$$ where $$Q^\circ=\{x\in\mathbb{R}^n_+: \langle x,y\rangle \le -1 \ \forall y\in Q\}$$ is the {\it copolar} to the set $Q={\operatorname{Log}\,} K$. In particular, \begin{equation}\label{capakt}\operatorname{Cap}(K_t)=n!\,\operatorname{Covol}(Q_t^\circ)\end{equation} for the copolar $Q_t^\circ$ of the set $Q_t=(1-t)Q_0+t\,Q_1$. \begin{proposition}\label{BMcovol} We have \begin{equation}\label{BMQ}\operatorname{Covol}(Q_t^\circ)\le \operatorname{Covol}(Q_0^\circ)^{1-t}\,\operatorname{Covol}(Q_1^\circ)^t,\quad 0<t<1.\end{equation} If an equality here occurs for some $t\in (0,1)$, then $Q_0=Q_1$. \end{proposition} \begin{proof} Let, as before, $h_Q$ be the restriction of the support function of a convex set $Q\subset\mathbb{R}^n_-$ to $\mathbb{R}^n_+$: $$ h_Q(x)=\sup\{\langle x,y\rangle:\: y\in Q\},\ x\in\mathbb{R}^n_+.$$ We have then \begin{eqnarray*} \int_{\mathbb{R}^n_+} e^{h_{Q_t}(x)}\,dx &=& \int_{\mathbb{R}^n_+} dx \int_{-h_{Q_t}(x)}^\infty e^{-s}\,ds = \int_0^\infty e^{-s}\,ds \int_{h_{Q_t}(x)\ge -s} dx \\ &=& \int_0^\infty e^{-s} {\operatorname{Vol}}(\{h_{Q_t}(x)\ge -s\})\,ds \\ &=& {\operatorname{Vol}}(\{h_{Q_t}(x)\ge -1\}) \int_0^\infty e^{-s}s^n \,ds\\ &=& n!\, \operatorname{Covol}({Q_t^\circ}). \end{eqnarray*} Note that $h_{Q_t}=(1-t) h_{Q_0}+ t \, h_{Q_1}$. 
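This additivity is the standard behaviour of support functions under Minkowski combination; explicitly, for any $x\in\mathbb{R}^n_+$,

```latex
\[
  h_{Q_t}(x)
  = \sup_{y_0\in Q_0,\ y_1\in Q_1}
      \bigl\langle x,\,(1-t)y_0+t\,y_1 \bigr\rangle
  = (1-t)\sup_{y_0\in Q_0}\langle x,y_0\rangle
    + t\sup_{y_1\in Q_1}\langle x,y_1\rangle
  = (1-t)\,h_{Q_0}(x)+t\,h_{Q_1}(x),
\]
```

since the two suprema can be taken independently; this is what makes the integral identity above amenable to H\"older's inequality.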
Therefore, by H\"{o}lder's inequality with $p=(1-t)^{-1}$ and $q=t^{-1}$, we have \begin{equation}\label{Hold} \int_{\mathbb{R}^n_+} e^{h_{Q_t}(x)}\,dx\le\left(\int_{\mathbb{R}^n_+} e^{h_{Q_0}(x)}dx\right)^{1-t}\left(\int_{\mathbb{R}^n_+} e^{h_{Q_1}(x)}dx\right)^{t}, \end{equation} which proves (\ref{BMQ}). An equality in (\ref{BMQ}) implies the equality case in H\"{o}lder's inequality (\ref{Hold}), which means the functions $e^{h_{Q_0}}$ and $e^{h_{Q_1}}$ are proportional, so $h_{Q_0}(x)=h_{Q_1}(x)+C$ for all $x\in\mathbb{R}^n_+$. Since both $h_{Q_0}(x)$ and $h_{Q_1}(x)$ converge to $0$ as $x\to 0$ along $\mathbb{R}^n_+$, we get $C=0$, which completes the proof. \end{proof} Finally, by (\ref{capakt}), we get \begin{theorem}\label{CapBM} For polynomially convex, non-pluripolar compact Reinhardt sets $K_j\Subset{\mathbb D}^n$, the Monge-Amp\`ere capacity $\operatorname{Cap}(K_t,{\mathbb D}^n)$ is a logarithmically convex function of $\,t$; in other words, the Brunn-Minkowski inequality (\ref{logconvcap}) holds. An equality in (\ref{logconvcap}) occurs for some $t\in (0,1)$ if and only if $K_0=K_1$. \end{theorem} {\it Remark.} The general situation of compact, polynomially convex Reinhardt sets reduces to the case $K_0,K_1\subset{\mathbb D}^n$ because for $K$ in the polydisk ${\mathbb D}_R^n$ of radius $R$, we have $\operatorname{Cap}(K,{\mathbb D}_R^n)=\operatorname{Cap}(\frac1R K,{\mathbb D}^n)$ and $(\frac1R K)_t = \frac1R K_t$. {\bf Acknowledgement.} Part of the work was done while the second named author was visiting Universit\'e Pierre et Marie Curie in March 2017; he thanks Institut de Math\'ematiques de Jussieu for the hospitality. The authors are grateful to the anonymous referee for careful reading of the text. 
\noindent {\sc Dario Cordero-Erausquin} \noindent Institut de Math\'ematiques de Jussieu, Sorbonne Universit\'e, 4 place Jussieu, 75252 Paris Cedex 05, France \noindent \emph{e-mail:} [email protected] \noindent {\sc Alexander Rashkovskii} \noindent Tek/Nat, University of Stavanger, 4036 Stavanger, Norway \noindent \emph{e-mail:} [email protected] \end{document}
\begin{document} \title{On the Principal Permanent Rank Characteristic Sequences of Graphs and Digraphs\thanks{Research supported in part by NSF grant DMS-1427526, ``The Rocky Mountain - Great Plains Graduate Research Workshop in Combinatorics''.}} \begin{abstract} The principal permanent rank characteristic sequence is a binary sequence $r_0 r_1 \ldots r_n$ where $r_k = 1$ if there exists a principal square submatrix of size $k$ with nonzero permanent and $r_k = 0$ otherwise, and $r_0 = 1$ if there is a zero diagonal entry. A characterization is provided for all principal permanent rank sequences obtainable by the family of nonnegative matrices as well as the family of nonnegative symmetric matrices. Constructions for all realizable sequences are provided. Results for skew-symmetric matrices are also included. \end{abstract} \noindent MSC: 15A15; 15A03; 15B57; 05C50\\ Keywords: Symmetric matrix, Skew-symmetric matrix, Permanent rank, Principal permanent rank characteristic sequence, generalized cycle, matching, minor \section{Introduction} The \emph{principal rank characteristic sequence problem}, introduced by Brualdi, Deaett, Olesky and van den Driessche \cite{brualdi12}, asks: \begin{quotation} \noindent Given a binary sequence $r_0 r_1 \ldots r_n$ of length $n+1$, is there an $n \times n$ matrix $A$ such that $r_k = 1$ if and only if there is a principal submatrix of rank $k$? \end{quotation} This problem is a simplified form of the more general \emph{principal assignment problem} (see for example \cite{holtz02}). Recently, several groups have studied the principal rank characteristic sequence problem with different variations. For real matrices, Brualdi et al.\ characterize all realizable sequences with $n \le 6$ and all realizable sequences beginning $010\ldots$ for $7 \le n \le 10$. They also provide several forbidden subsequences \cite{brualdi12}. Barrett et al. 
characterize all allowable sequences over fields with characteristic 2 and also provide additional results for other fields \cite{barrett14}. Additionally, in \cite{butler}, the authors study a variation, the \emph{enhanced principal rank sequence}, which differentiates whether ``some'' or ``all'' of the principal minors of order $k$ have rank $k$; they characterize all such realizable sequences for real matrices of order $n\le 5$. Our focus will be the permanent, ${\rm{per}}(A)$, instead of the rank or determinant. Recall the definition of the permanent: \begin{defi}[From \cite{minc78}] The \emph{\textbf{permanent}} of an $n\times n$ matrix $A$ is defined to be the sum of all diagonal products of $A$. That is, \[\displaystyle {\rm{per}}(A)=\sum_{\sigma\in S_n} \left( \prod_{i=1}^{n} a_{i\,\sigma(i)}\right).\] \end{defi} By comparison, \[\displaystyle \det(A)=\sum_{\sigma\in S_n} \left({\rm{sgn}}(\sigma) \prod_{i=1}^{n} a_{i\,\sigma(i)}\right),\] where ${\rm{sgn}}(\cdot)$ is the sign of the permutation. Hence, in some sense, the permanent can be viewed as a variation of the determinant. Note that determining whether there is a principal submatrix of rank $k$ is equivalent to determining whether there is a principal submatrix of size $k$ with nonzero determinant (see \cite{brualdi12}). Therefore, in a similar fashion, one can define the \emph{permanent rank}: \begin{defi}[From \cite{yu99}] The \emph{\textbf{principal permanent rank}} of a matrix $A$, denoted ${\rm{per}}rank(A)$, is defined to be the size of the largest square submatrix with nonzero permanent. \end{defi} We study the principal permanent rank characteristic sequence defined as follows. 
\begin{defi} Given an $n\times n$ matrix $A$, the \emph{\textbf{principal permanent rank characteristic sequence}} of $A$ (abbreviated ppr-sequence of $A$ or ${\rm{ppr}}(A)$) is defined as $r_0 r_1 r_2 \ldots r_n$ where for $1 \leq k \leq n$ \[ r_k= \begin{cases} 1 & \text{if $A$ has a principal submatrix of size $k$ with nonzero permanent, and }\\ 0 & \text{otherwise,} \end{cases}\] while $r_0 = 1$ if and only if $A$ has a zero on its main diagonal. \end{defi} Naturally, in this paper, we introduce the \emph{principal permanent rank sequence problem}: \begin{quotation} \noindent Given a binary sequence $r_0 r_1 \ldots r_n$, when is there an $n \times n$ matrix $A$ such that ${\rm{ppr}}(A) = r_0 r_1 \ldots r_n$? \end{quotation} Our contribution is to answer this question and to fully characterize which sequences can be realized for various families of real matrices, including \begin{itemize} \item nonnegative matrices (Section \ref{sectiondirected}, Theorem \ref{onenonsymmetric}), \item symmetric nonnegative matrices (Section \ref{sectionundericted}, Theorem \ref{fuzzyegg}), and \item skew-symmetric matrices whose underlying graph is a tree (Section \ref{skew}, Theorem \ref{thm.skewtree}). \end{itemize} Additionally, for each characterization, we provide a construction that produces a realization for any realizable sequence. \section{Preliminaries} Our main approach is to exploit the duality between matrices and graphs. Throughout, we will consider graphs, both directed and undirected, and with or without loops. However, we will not consider graphs with multiple edges (see Proposition \ref{zerononzero}). Let $[n] = \{1, \ldots, n\}$. For a (directed) graph $G$ on $n$ vertices, $V(G) = [n]$, and $\alpha \subseteq [n]$, the graph $G[\alpha]$ is the induced subgraph of $G$ on the vertices in $\alpha$. For an $n\times n$ matrix $A$ and $\alpha \subseteq [n]$, $A[\alpha]$ denotes the principal submatrix of $A$ with rows and columns indexed by $\alpha$. 
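Both definitions above are directly computable by brute force for small matrices. The following Python sketch (the helper names \texttt{per} and \texttt{ppr} are ours, not notation from the paper) sums diagonal products over all permutations and then scans every principal submatrix:

```python
from itertools import combinations, permutations
from math import prod

def per(A):
    """Permanent of a square matrix: the sum of all diagonal products."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def ppr(A):
    """Principal permanent rank characteristic sequence r_0 r_1 ... r_n,
    returned as a list of 0/1 values of length n + 1."""
    n = len(A)
    r = [0] * (n + 1)
    # r_0 = 1 if and only if A has a zero on its main diagonal.
    r[0] = int(any(A[i][i] == 0 for i in range(n)))
    for k in range(1, n + 1):
        # r_k = 1 iff some principal k x k submatrix has nonzero permanent.
        for alpha in combinations(range(n), k):
            sub = [[A[i][j] for j in alpha] for i in alpha]
            if per(sub) != 0:
                r[k] = 1
                break
    return r
```

For the identity matrix $I_3$ this returns $[0,1,1,1]$, and comparing any nonnegative matrix with its zero--nonzero pattern spot-checks Proposition~\ref{zerononzero}. The search is exponential in $n$, so it is only practical for small examples.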
The zero--nonzero pattern of $A$ is a $(0,1)$-matrix $B$ of the same order where $B_{ij}=1$ if and only if $A_{ij}\neq 0$. Also, the \emph{underlying graph} of a matrix $A$ is the graph $G$ whose adjacency matrix is the zero--nonzero pattern of $A$. Note that $G$ is undirected if and only if the zero--nonzero pattern of $A$ is symmetric. The following proposition shows that the ppr-sequence of a nonnegative matrix and that of its zero--nonzero pattern are one and the same. Thus, for a nonnegative matrix, we will focus our attention on its underlying graph. \begin{prop} \label{zerononzero} Let $B$ be the zero--nonzero pattern of an $n\times n$ nonnegative matrix $A$. Then ${\rm{ppr}}(A) = {\rm{ppr}}(B)$. \end{prop} \begin{proof} Let ${\rm{ppr}}(B) = q_0 q_1 \ldots q_{n}$ and ${\rm{ppr}}(A) = r_0 r_1 \dots r_{n}$. First, by definition, $a_{ii} = 0$ if and only if $b_{ii} = 0$ for $i \in [n]$. Thus, $r_0 = q_0$. Now fix $k \in [n]$ and let $\alpha = \{i_1,i_2, \ldots, i_k\} \subseteq [n]$. Note that every term in both ${\rm{per}}(A[\alpha])$ and ${\rm{per}}(B[\alpha])$ is nonnegative. Next let $S_\alpha$ be the set of all permutations on $\alpha$. Then for $\sigma \in S_\alpha$, \[ \prod_{j = 1}^k a_{i_j, \sigma(i_j)} > 0 \hspace{1mm} \text{ if and only if } \hspace{1mm} \prod_{j = 1}^k b_{i_j, \sigma(i_j)} > 0. \] Thus, \[ {\rm{per}}(A[\alpha]) = \sum_{\pi \in S_\alpha}\prod_{j = 1}^k a_{i_j,\pi(i_j)} > 0\] if and only if \[ {\rm{per}}(B[\alpha]) = \sum_{\pi \in S_\alpha}\prod_{j = 1}^k b_{i_j,\pi(i_j)} > 0. \] Hence, $q_k = r_k$ for $k=1,2,\ldots, n$. \end{proof} It is well known that various graph properties are captured by the permanent rank of matrices describing the graph. Such properties include the size of a largest \textit{generalized cycle} and the size of a maximum matching in the graph (see \cite{monfared12} and the references therein). Let us formally define a generalized cycle. 
\begin{defi} A \emph{\textbf{generalized cycle}} of size $k$ is a permutation, $\pi_C$, on a subset of $k$ vertices, $C$, such that $i\, \pi_C(i)$ is a directed edge (or a loop if $i = \pi_C(i)$) for all $i \in C$. \end{defi} Observe that for a (directed) graph $G$, a set $C \subset V(G)$ supports a generalized cycle if there is a collection of edges within $G[C]$ such that every component of the subgraph induced on those edges has a Hamiltonian cycle. A generalized cycle can thus be viewed as either a permutation or a subset of edges. Here, a bi-directed edge (or undirected edge) can be seen as a 2-cycle. With this clear bijection, we will refer to such a collection of cycles also as a ``generalized cycle.'' Further, a generalized cycle of order $|G|$ is said to be \emph{spanning}. Next, recall that a \emph{matching} is a collection of disjoint edges. Since a matching in an undirected graph can be viewed as a disjoint collection of directed 2-cycles, every matching forms a generalized cycle. The set of all generalized cycles of order $k$ of a (directed) graph $G$ is denoted by ${\rm{cyc}}_k(G)$. The connection between generalized cycles and permanent ranks is made formal by the following proposition. \begin{prop}\label{gencycle} Let $G$ be the underlying (directed) graph of the nonnegative matrix $A$ and let ${\rm{ppr}}(A) = r_0 r_1 \ldots r_{n}$. For $k \ge 1$, $r_k = 1$ if and only if $G$ has a (directed) generalized cycle of order $k$. \end{prop} \begin{proof} Let $\alpha \subseteq [n]$ with $|\alpha| = k$ and write $\alpha = \{i_1, \ldots, i_k\}$. Then \begin{align*} {\rm{per}}(A[\alpha]) &= \sum_{\pi \in S_\alpha}\prod_{j = 1}^k a_{i_j,\pi(i_j)}. \end{align*} A term of the sum above is nonzero (and positive) if and only if $\pi \in {\rm{cyc}}_k(G)$. \end{proof} We say a binary sequence $r_0 r_1 \ldots r_n$ is \emph{realizable} if there is a matrix whose ppr-sequence is $r_0 r_1 \ldots r_n$. 
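Proposition~\ref{gencycle} turns the computation of $r_k$ into a search for a generalized cycle: a permutation of some $k$-subset all of whose arcs are present. A brute-force Python sketch (the function name is ours, not from the paper):

```python
from itertools import combinations, permutations

def has_generalized_cycle(adj, k):
    """True iff the digraph with 0/1 adjacency matrix `adj` (diagonal
    entries encode loops) has a generalized cycle of order k, i.e. a
    permutation pi of some k-subset C with an arc i -> pi(i) for all i in C."""
    n = len(adj)
    for C in combinations(range(n), k):
        for pi in permutations(C):
            # pi maps C[t] to pi[t]; require every arc (or loop) to exist.
            if all(adj[i][j] for i, j in zip(C, pi)):
                return True
    return False
```

For the directed triangle $1\to 2\to 3\to 1$ the only generalized cycle has order $3$, so by Proposition~\ref{gencycle} the ppr-sequence of its adjacency matrix is $1001$.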
\section{General Nonnegative Matrices}\label{sectiondirected} In this section, we characterize the principal permanent rank sequences of nonnegative matrices. We prove: \begin{thm}\label{zerononsymmetric}\label{onenonsymmetric} The binary sequence $r_0 r_1 \ldots r_n$ is realizable as a ppr-sequence of a nonnegative matrix if and only if \begin{itemize} \item $r_0 = 0$ and $r_i = 1$ for $i=1,2,\ldots, n$, or \item $r_0 = 1$. \end{itemize} \end{thm} First, let us prove the following lemma. \begin{lem}\label{lem:zeroout} Let $A$ be a nonnegative $n \times n$ matrix with ${\rm{ppr}}$-sequence $r_0 r_1 \ldots r_n$. If $r_0 = 0$, then $r_i = 1$ for all $i=1, 2, \ldots, n$. \end{lem} \begin{proof} Recall that $r_0 = 0$ if and only if there is a loop on every vertex in the underlying graph $G$. Thus, for all $k \in [n]$, $G$ has a generalized cycle of order $k$ consisting of $k$ loops. Therefore, by Proposition \ref{gencycle}, $r_k = 1$ for all $k \in [n]$. Lastly, any sequence of the form $r_0 r_1 \ldots r_n = 0 1 1 \ldots 1$ is realized by $I_n$, the identity matrix of order $n$. \end{proof} \begin{proof}[Proof of Theorem \ref{zerononsymmetric}] The case when $r_0 = 0$ is covered by Lemma \ref{lem:zeroout}. Hence, we can assume $r_0=1$. We will construct a directed graph, $G$, as follows. Start with the directed path $v_1 \rightarrow v_2 \rightarrow \cdots \rightarrow v_n$. Next, for each $k \in [n]$, add a directed edge from $v_k$ to $v_1$ if and only if $r_k = 1$ (see Figure \ref{fig.backtrack}). Let $A$ be the adjacency matrix of $G$ and ${\rm{ppr}}(A) = q_0 q_1 \ldots q_n$. We claim that $q_i = r_i$ for each $i \in [n]$. First note that $q_0 = 1 = r_0$, because $a_{22} = 0$. Now consider $r_k$, for $k \geq 1$. If $r_k = 1$, then there is an edge from $v_k$ to $v_1$. Hence $C = (v_1, \ldots, v_k)$ is a directed generalized cycle of order $k$ in $G$. Thus, by Proposition \ref{gencycle}, $q_k = 1$. Now suppose that $r_k = 0$ and consider a subset $S$ of $k$ vertices. 
If $v_1 \notin S$, then $G[S]$ is a disjoint union of directed paths and thus has no spanning generalized cycle. Now assume that $v_1 \in S$. If $v_j \in S$ for some $j >k$, then $v_i \notin S$ for some $1 < i \leq k$. Thus, $G[S]$ has no generalized cycle containing $v_j$. Finally, if $S = \{v_1, v_2, \ldots, v_k\}$, then $G[S]$ is a graph on $k$ vertices with a pendent vertex $v_k$. Therefore, $G[S]$ has no spanning generalized cycle. Hence, by Proposition \ref{gencycle}, $q_k = 0$. \end{proof} \begin{figure} \caption{An illustration of the construction in Theorem \ref{onenonsymmetric}} \label{fig.backtrack} \end{figure} \section{Nonnegative Symmetric Matrices}\label{sectionundericted} In this section, we consider the principal permanent rank characteristic sequences of nonnegative symmetric matrices. In contrast to general nonnegative matrices, the set of realizable sequences is more restrictive. The key difference between the symmetric and general case is that in the symmetric case a single graph edge always counts as a $2$-cycle. For example, $r_2 = 1$ if the underlying graph has an edge. Moreover, since every even cycle contains a perfect matching, we may always choose a generalized cycle in which all even cycles are $2$-cycles. That is, if $G$ contains a generalized cycle of order $k$, then one realization consists of a (possibly empty) matching and a (possibly empty) collection of odd cycles. Ultimately, in Theorem \ref{fuzzyegg}, we fully characterize which ${\rm{ppr}}$-sequences are realizable by nonnegative symmetric matrices. First, we provide some necessary conditions for a binary sequence to be realizable in Lemmas \ref{lem:zeroout}--\ref{lem:minmaxodd}. The following lemma shows that if there is no generalized cycle of an even order $2k$, then every generalized cycle of the graph is smaller than $2k$. \begin{lem}\label{lem:even} Suppose $A$ is a nonnegative $n \times n$ symmetric matrix, and let ${\rm{ppr}}(A) = r_0 r_1 r_2 \ldots r_{n}$. 
If $r_{2k} = 0$ for some $k > 0$, then $r_{j} = 0$ for all $j \geq 2k$. \end{lem} \begin{proof} Recall that Proposition \ref{gencycle} asserts that $r_{j} = 1$ if and only if the underlying graph, $G$, has a generalized cycle on $j$ vertices. First, suppose $r_{j} = 1$ for some odd $j = 2t+1$. Then there exists a generalized cycle of $G$ consisting of at least one odd cycle, along with (possibly) a matching. Discarding an arbitrary vertex from this odd cycle results in a path on an even number of vertices. This path contains a spanning matching. When this matching is considered with the other components of the original odd generalized cycle, we have a generalized cycle on $j-1$ vertices. Thus, $r_{j-1} = 1$. Now, suppose that $r_{j}=1$ for some even $j=2t$. Then there is a generalized cycle $C$ of order $j$ consisting of a (possibly empty) collection of odd cycles plus a (possibly empty) matching. If $C$ contains a matching edge, then discarding it yields a generalized cycle on $2t-2$ vertices. Hence, $r_{j-2} = r_{2t-2} = 1$. Otherwise, $C$ consists solely of an even number of odd cycles. Discarding one vertex each from two different odd cycles, and noting again that the remaining even paths contain a spanning matching, yields a generalized cycle on $2t-2$ vertices. That is, $r_{2t-2} = 1$. Therefore, if $r_{2t+2} = 1$ or $r_{2t+1} = 1$, we have that $r_{2t} = 1$, and this implies the lemma. \end{proof} The proof of Lemma \ref{lem:even} further demonstrates that if an even generalized cycle exists, then there is a generalized cycle of every smaller even order. Thus, we are left to study the restriction that odd cycles impose on the ${\rm{ppr}}$-sequence. In Lemma \ref{lem:oddspan}, we show that the odd indices $i$ for which $r_i = 1$ must be consecutive; however, first we make a few structural observations. 
\begin{fact} \label{lem.oddgirth} Let $2\ell + 1$ be the length of the shortest odd cycle of $G$. Then $2 \ell + 1$ is the smallest odd integer $k$ with $r_{k} = 1$. \end{fact} \begin{lem}\label{lem:struct} Suppose $r_{2k-1} = 0$ and $r_{2k+1} = 1$. Then every generalized cycle on $2k+1$ vertices is a $(2k+1)$-cycle, and the vertex set of that generalized cycle induces a cycle with no chords. \end{lem} \begin{proof} Consider a generalized cycle on $2k+1$ vertices. As noted before, we can choose a generalized cycle consisting of a collection of odd cycles plus a matching. If there is an edge in the matching, however, discarding it yields a generalized cycle on $2k-1$ vertices, that is, $r_{2k-1} = 1$, a contradiction. Thus, the generalized cycle is a collection of odd cycles. If there is more than one odd cycle in the collection, one vertex can be discarded from two different cycles, and a perfect matching can be taken from the resulting even paths to find a generalized cycle on $2k-1$ vertices. Thus, assuming $r_{2k-1}=0$, the generalized cycle is a single cycle. Now let $V$ be the vertex set of some generalized cycle of order $2k+1$. The set $V$ induces a $(2k+1)$-cycle, along with potentially some chords. If there is a chord, however, $G[V]$ also contains a smaller odd cycle (containing the chord) plus an even path on the remaining vertices containing at least one edge. Converting this path to a matching and discarding an edge would yield a generalized cycle on $2k-1$ vertices, completing the proof of the lemma. \end{proof} \begin{lem}\label{lem:oddspan} Suppose $r_{2i+1} = r_{2k+1} = 1$ for some integers $i < k$. Then $r_{t} = 1$ for all $2i+1 \leq t \leq 2k+1$. \end{lem} \begin{proof} By Lemma \ref{lem:even} it suffices to consider $r_t$ for odd $t$. It also suffices to show that $r_{2k-1} = 1$. Suppose to the contrary that $r_{2k-1} = 0$. By Lemma \ref{lem:struct}, every generalized cycle of size $2k+1$ is an induced cycle. 
Fix such a generalized cycle on vertex set $V$. Suppose $j < k$ is minimal with the property that $r_{2j-1} = 1$. Again, fix a generalized cycle of size $2j-1$. This is also an (induced) cycle on a vertex set $V'$. If $V' \cap V = \emptyset$, then we are done: discarding a vertex from $V$, we have a path on $2k$ vertices and a cycle on $2j - 1$ vertices. This path contains a perfect matching, and discarding sufficiently many edges of the matching yields a generalized cycle of size ${2k-1}$. Otherwise, we may assume that the cycle on $V'$ follows along the cycle on $V$ on $s$ contiguous segments sharing a total of $\ell$ vertices. Immediately following each segment there must be at least one vertex in $V' \setminus V$, so $s + \ell \leq 2j-1$. The vertices in $V$ {\it not} in $V'$ lie on $s$ segments with total length $2k+1 - \ell$. For parity reasons, we may have to delete one vertex from each segment, but we can then obtain a matching on at least $$2k+1-\ell - s$$ vertices. Combining this matching with the cycle on $2j-1$ vertices, we have a generalized cycle on $$2k+1 + (2j-1) - \ell - s \geq 2k + 1$$ vertices consisting of a cycle on $2j-1$ vertices plus a matching. Discarding sufficiently many edges of the matching, we again obtain a generalized cycle of size $2k-1$, a contradiction. \end{proof} Finally, the following lemma shows that the smallest and largest odd generalized cycles of a graph strongly constrain the largest even generalized cycle. \begin{lem}\label{lem:minmaxodd} Suppose $m$ and $M$ are respectively the smallest and largest odd integers so that $r_m = r_{M} = 1$. If $m+M+2 \le n$, then $r_{m+M+2} = 0$. \end{lem} \begin{proof} Let $t$ be the largest even number so that $r_t = 1$, and consider a generalized cycle $C_1$ on $t$ vertices. We claim that if $t > M+1$ then there is a matching on $t$ vertices. 
Indeed, if the generalized cycle on $t$ vertices contains an odd cycle, then a single vertex can be discarded from one odd cycle to obtain a generalized cycle on $t-1$ vertices, and hence $t-1 \leq M$. (Note that it is possible that $t-1 < M$, if the largest generalized cycle in the graph has odd order.) Thus, we may assume that the generalized cycle $C_1$ consists of $\frac{t}{2}$ disjoint edges. Consider a generalized cycle $C_2$ on $m$ vertices, which may include vertices from at most $m$ of the disjoint edges of $C_1$. Taking $C_2$ along with the edges of $C_1$ not containing a vertex from it yields an odd generalized cycle on at least $m + 2(\frac{t}{2} - m) = t - m$ vertices. Therefore $M \geq t-m$. Rearranging, we get $t \leq M + m$, which proves the lemma. \end{proof} The following theorem shows that the above necessary conditions on the ppr-sequence of a nonnegative symmetric matrix are indeed sufficient. That is, if a binary sequence $r_0 r_1 \ldots r_n$ satisfies the conditions of Lemmas \ref{lem:even}--\ref{lem:minmaxodd}, then there is a nonnegative symmetric matrix whose ppr-sequence is $r_0 r_1 \ldots r_n$. \begin{figure} \caption{An illustration of the construction of Case 4 in the proof of Theorem \ref{fuzzyegg}} \label{fig.fuzzyegg} \end{figure} \begin{thm} \label{fuzzyegg} Any binary sequence not excluded by Lemmas \ref{lem:zeroout}--\ref{lem:minmaxodd} is realizable by a symmetric nonnegative matrix. 
That is, $r_0 r_1 \ldots r_n$ is realizable as a ppr-sequence of a nonnegative symmetric matrix if and only if \begin{itemize} \item $r_0 = 0$ and $r_i = 1$ for $i=1,2,\ldots, n$; or \item there are nonnegative integers $\ell, k, m$ with $\ell \le k \leq m \le \ell + k + 1$ where \begin{enumerate}[a)] \item $r_{2j+1} = 0$ for any $j < \ell$, \item $r_{2j+1} = 1$ for any $j$ with $ \ell \le j \le k$, \item $r_{2j+1} = 0$ for any $j$ with $k < j \leq \frac{n-1}{2}$, \item $r_{2j} = 1$ for any $0 \leq j \leq m$, and \item $r_{2j} = 0$ for any $j$ with $m < j \leq \frac{n}{2}$; \end{enumerate} or \item $r_0 = 1$, $r_{i} = 0$ for all odd $i \le n$, $r_{i} = 1$ for all even $i \le 2m$ and $r_{i} = 0$ for all even $i > 2m$ for some nonnegative $m \le \lfloor \frac{n-1}{2} \rfloor.$\end{itemize} \end{thm} \begin{proof} First note that Lemma \ref{lem:minmaxodd} implies $m \leq k + \ell + 1 $. Also, Fact \ref{lem.oddgirth} implies that $r_t = 0$ for any odd $t$ less than the length of the shortest odd cycle of the graph; this gives item \textit{a}. Lemma \ref{lem:oddspan} implies that $r_t = 1$ for any odd $t$ between the length of the shortest odd cycle and the size of the largest odd generalized cycle of the graph. That shows the necessity of item \textit{b}. Lemma \ref{lem:zeroout} gives $r_0 = 1$ outside the first case, and Lemma \ref{lem:even} implies that $r_t = 1$ for all even $t$ up to some threshold and $r_t = 0$ for all even $t$ beyond it. This implies items \textit{d} and \textit{e}. Now, since Lemmas \ref{lem:zeroout}--\ref{lem:minmaxodd} show the necessity of the conditions above, it suffices to construct a matrix for the various cases. \vspace*{1mm} \noindent \textbf{Case 1}: $r_0 = 0$ and $r_i = 1$ for $i=1,2,\ldots, n$. \noindent This case is covered by Lemma \ref{lem:zeroout}, where the identity matrix, $I_n$, realizes the sequence. 
\vspace*{1mm} \noindent \textbf{Case 2:} $0 = \ell = k = m$ (i.e., $r_0 = r_1 = 1$ and $r_i = 0$ for $i = 2, 3, \ldots, n$). \noindent Consider the graph with $n$ isolated vertices where one vertex has a loop. Its adjacency matrix has $r_0 = r_1 = 1$ and $r_i = 0$ otherwise. \vspace*{1mm} \noindent \textbf{Case 3:} $r_0 = 1$ and $0 < \ell = k = m$. \noindent Consider a cycle on $2 \ell + 1$ vertices and $n- 2\ell - 1$ isolated vertices. The only odd generalized cycle is on $2\ell +1$ vertices, and the cycle contains a matching on $2j$ vertices for every $j \le \ell$. \vspace*{1mm} \noindent \textbf{Case 4:} $r_0 = 1$ and $\ell, k, m$ are not all equal. \noindent We construct a graph as follows. Construct an odd cycle on vertices $1,2,\ldots, 2\ell+1$ (if $\ell = 0$, take the odd cycle to be a loop on a single vertex), and a path on vertices $2\ell+1, 2\ell+2, \ldots, 2k+1$. Add $2(m-k)-1$ vertices and connect each of them to one of the vertices $1,2,\ldots, 2(m-k)-1$ such that no pair of them is connected to the same vertex. Finally, add $n-2m$ vertices and connect all of them to vertex 1. See Figure \ref{fig.fuzzyegg}. We now verify that items \textit{a}--\textit{e} hold. Items \textit{a}--\textit{c} assert that the smallest and the largest odd generalized cycles of the graph must have sizes $2\ell+1$ and $2k+1$, respectively. Also, \textit{d} and \textit{e} assert that the graph must have a maximum matching of size $m$. The smallest odd cycle of $G$ is of size $2\ell+1$, hence \begin{itemize} \item[\textit{a)}] $r_{2j+1} = 0$ for $j<\ell$. \end{itemize} Now, consider the $(2\ell+1)$-cycle joined with (possibly zero) disjoint edges from the path. This shows there are generalized cycles of order $2j+1$ for $\ell \leq j \leq k$. That is, \begin{itemize} \item[\textit{b)}] $r_{2j+1} = 1$ for $\ell \leq j \leq k$, and \item[\textit{c)}] $r_{2j+1} = 0$ for any $j$ with $k < j$. \end{itemize} Note that the graph has a maximum matching of size $m$. 
This is obtained by taking the edges that connect each of the vertices $1,2,\ldots, 2(m-k) - 1$ to the pendent vertex adjacent to them ($2(m-k)-1$ edges), every other edge in the rest of the $(2\ell+1)$-cycle ($\frac{2\ell+1 - (2(m-k)-1)}{2}$ edges), and a maximum matching from the path ($k-\ell$ edges). Thus, \begin{itemize} \item[\textit{d)}] $r_{2j} = 1$ for any $1 \leq j \leq m$, and \item[\textit{e)}] $r_{2j} = 0$ for any $m < j$. \end{itemize} \vspace*{1mm} \noindent \textbf{Case 5:} $r_0 = 1$, $r_{i} = 0$ for all odd $i \le n$, $r_{i} = 1$ for all even $i \le 2m$ and $r_{i} = 0$ for all even $i > 2m$ for some nonnegative $m \le \lfloor \frac{n-1}{2} \rfloor$. \noindent This case is realized by a graph with $m$ disjoint edges and $n-2m$ isolated vertices. \end{proof} \section{Skew-symmetric Matrices}\label{skew} Previously, we only considered nonnegative matrices. This benefited the analysis, as every contribution to the permanent was necessarily nonnegative. In this section, we consider \emph{skew-symmetric} matrices. Recall that a real matrix $A$ is \emph{skew-symmetric} if $A_{ji} = -A_{ij}$. First note that the odd positions in the {\rm{ppr}}-sequence of a skew-symmetric matrix must all be zero, as shown in the following lemma. \begin{lem}\label{lem.skewnoodd} Let $A$ be a skew-symmetric matrix with ${\rm{ppr}}(A) =r_0 r_1 \cdots r_n$. Then $r_{2i+1} = 0$ for all integers $i$ with $0 \leq i \leq \lfloor \frac{n-1}{2} \rfloor$. \end{lem} \begin{proof} Choose $k \le n$ odd. Let $B = (b_{ij})$ be a principal submatrix of $A$ of size $k$. We will show that ${\rm{per}}(B) = 0$. \begin{eqnarray*} {\rm{per}}(B) &=& \sum_{\sigma\in S_k} \left( \prod_{i=1}^{k} b_{i\,\sigma(i)}\right) \\ &=& \sum_{\sigma\in D_k} \left( \prod_{i=1}^{k} b_{i\,\sigma(i)}\right) + \sum_{\sigma\in S_k\setminus D_k} \left( \prod_{i=1}^{k} b_{i\,\sigma(i)}\right) \end{eqnarray*} where $D_k$ is the set of all derangements on $[k]$ (i.e., permutations without a fixed point). 
For $\sigma \in S_k \setminus D_k$, $\sigma(i) = i$ for some $i$, and so $b_{i\,\sigma(i)}=0$ by skew-symmetry. Further, since $k$ is odd, no $\sigma \in D_k$ is its own inverse. Therefore, continuing from above, we have \begin{eqnarray*} {\rm{per}}(B) &=& \sum_{\sigma\in D_k} \left( \prod_{i=1}^{k} b_{i\,\sigma(i)}\right) \\ &=& \sum_{\sigma\in D'_k} \left( \prod_{i=1}^{k} b_{i\,\sigma(i)}+ \prod_{i=1}^{k} b_{i\,\sigma^{-1}(i)}\right) \end{eqnarray*} where $D'_k \subset D_k$ is a maximal subset of derangements in which no element is the inverse of another. However, by skew-symmetry, \begin{eqnarray*} {\rm{per}}(B) &=& \sum_{\sigma\in D'_k} \left( \prod_{i=1}^{k} b_{i\,\sigma(i)}+ \prod_{i=1}^{k} -b_{\sigma^{-1}(i)\,i}\right) \\ &=& \sum_{\sigma\in D'_k} \left( \prod_{i=1}^{k} b_{i\,\sigma(i)}+\prod_{i=1}^{k} -b_{i\, \sigma(i)}\right) \\ &=& \sum_{\sigma\in D'_k} \left( \prod_{i=1}^{k} b_{i\,\sigma(i)} + (-1)^k \prod_{i=1}^{k} b_{i\, \sigma(i)}\right) \\ &=& \sum_{\sigma\in D'_k} \left( \prod_{i=1}^{k} b_{i\,\sigma(i)} - \prod_{i=1}^{k} b_{i\, \sigma(i)}\right) \\ &=& 0. \end{eqnarray*} \end{proof} The question now is to characterize the patterns of zeros and ones in the even positions of the sequence. Examining several small examples suggests that there are no gaps between the ones in the even positions. It is easy to see that this property holds for trees. In the following theorem we characterize ${\rm{ppr}}(A)$ for all skew-symmetric matrices whose underlying graph is a tree. \begin{thm}\label{thm.skewtree} Let $A$ be a skew-symmetric matrix whose underlying graph is a tree $G$ with a maximum matching of size $\mu(G)$. Then the principal permanent rank sequence ${\rm{ppr}}(A) = r_0 r_1 \cdots r_n$ has $r_{k} = 1$ if and only if $k$ is even and $k \le 2\mu(G)$. 
Furthermore, any such sequence is realizable by a skew-symmetric matrix whose underlying graph is a tree.\end{thm} \begin{proof} If $k\le n$ is odd, then $r_k = 0$ by Lemma \ref{lem.skewnoodd}. Choose $k$ even and $\alpha \subset [n]$ with $|\alpha|=k$. Let $B = A[\alpha] = (b_{ij})$. We will show that ${\rm{per}}(B)$ is nonzero if and only if $G[\alpha]$ has a perfect matching; since such an $\alpha$ exists exactly when $k \le 2\mu(G)$, this proves the first claim. \begin{eqnarray*} {\rm{per}}(B) &=& \sum_{\sigma\in S_k} \left( \prod_{i=1}^{k} b_{i\,\sigma(i)}\right)\\ &=& \sum_{\sigma\in M_{k/2}} \left( \prod_{i=1}^{k} b_{i\,\sigma(i)}\right)+ \sum_{\sigma\in D_k \setminus M_{k/2}} \left( \prod_{i=1}^{k} b_{i\,\sigma(i)}\right) + \sum_{\sigma\in S_k\setminus (D_k \cup M_{k/2})} \left( \prod_{i=1}^{k} b_{i\,\sigma(i)}\right) \end{eqnarray*} where $M_{k/2}$ is the set of fixed-point-free involutions on $\alpha$ (i.e., disjoint products of $k/2$ transpositions) and $D_k$ is the set of all derangements on $\alpha$. Observe that each $\sigma \in D_k \setminus M_{k/2}$ must have a cycle of size 3 or more; since $G$ is a tree, no such $\sigma$ contributes to the sum. Similarly, as in the proof of Lemma \ref{lem.skewnoodd}, any permutation $\sigma \not\in D_k$ also contributes 0. Therefore, we have \begin{eqnarray*} {\rm{per}}(B) &=& \sum_{\sigma\in M_{k/2}} \left( \prod_{i=1}^{k} b_{i\,\sigma(i)}\right)\\ &=&(-1)^{k/2} \sum_{m \in M_{k/2}} \left( \prod_{\{i,j\} \in m} b_{ij}^2 \right), \end{eqnarray*} where the final line views each involution as a collection of edges. A term of the final sum is nonzero if and only if every pair $\{i,j\}$ in it is an edge of $G[\alpha]$, that is, if and only if the involution is a perfect matching of $G[\alpha]$; each such term is positive. Hence ${\rm{per}}(B) \neq 0$ if and only if $G[\alpha]$ has a perfect matching. Now, we construct a skew-symmetric matrix $A$ whose underlying graph is a tree $T$ and ${\rm{ppr}}(A) = r_0 r_1 \ldots r_n$, where $r_j = 1$ if and only if $j$ is even and $0 \leq j \leq 2m$, for some $m\leq n/2$. Consider a path on the $2m$ vertices $1,2,\ldots, 2m$. 
Add $n-2m$ vertices and connect all of them to vertex $2m-1$. Let $B$ be the adjacency matrix of this graph, and let $A$ be the matrix obtained from $B$ by negating all entries below the diagonal. Since $T$ does not have any cycles, all the nonzero terms in the permanent of a principal submatrix of $A$ come from a matching of $T$. Hence ${\rm{ppr}}(A) = r_0 r_1 \ldots r_n$, with $r_j = 1$ if and only if $j$ is even and $0 \leq j \leq 2m$. \end{proof} \end{document}
\begin{document} \author{MLE Slone} \year=2008 \title{HOMOLOGICAL COMBINATORICS AND EXTENSIONS OF THE CD-INDEX} \abstract{ Many combinatorial proofs rely on induction. When these proofs are formulated in traditional language, they can be bulky and unmanageable. Coalgebras provide a language which can reduce many inductive proofs in graded poset theory to comprehensible size. As a bonus, the visual form of the resulting recursive proofs suggests combinatorial interpretations for constants appearing in the longer arguments. We use the techniques of coalgebras to compute invariants of toric and affine arrangements as well as of poset products. In additional chapters we prove structure theorems for acyclic orientations and critical groups of graphs. } \keywords{cd-index, polytopes, coalgebras, posets, spanning trees} \advisor{Richard Ehrenborg} \frontmatter \maketitle \begin{acknowledgments} \noindent I have many people to thank. I would like to single out Dora~Ahmadi, Jimmy~Booth, Tom~Chapman, Vivian~Cyrus, Scott~Davison, Richard~Ehrenborg, Jennifer~Eli,\newline Edgar~Enochs, Charles~P.~Fairchild, Claire~A.~Foley, Brauch~Fugate, Scott~Godefroy, Trish~Hall, Brad~Hamlin, Mike~Hammond, David~Johnson, Eric~Kahn, Daniel~Kiteck, Carl~Lee, David~Little, Kathryn~Lybarger, Penny~Pajel~McCollum, Neil~Moore, Mark~Motley, Mary~Motley, Tricia~Muldoon, Sunil~Nanwani, Carlos~M.~Nicol\'as, Rebecca~Novak, Wendell~O'Brien, Sonja~Petrovi\'c, Pat~Quillen, Margaret~Readdy, Josh~Roberts, Robert~D.~Royar, Jack~Schmidt, Yuho~Shin, Aekyoung~Shin~Kim, R.~Duane~Skaggs, Bethany~Slone, Cephas~Slone, Donald~J.~Spickler, Erik~Stokes, Brett~Strassner, Jack~Weir, and Yu~Xiang. The chapter ``Affine and toric arrangements'' is based on joint work with\newline Richard~Ehrenborg and Margaret~Readdy. Ehrenborg and Slone were partially supported by National Security Agency grant H98230-06-1-0072. 
The chapter ``A geometric approach to acyclic orientations'' is based on joint work with Richard~Ehrenborg. \end{acknowledgments} \tableofcontents* \listoffigures \mainmatter \setcounter{chapter}{-1} \chapter{Introduction} For any collection of mathematical objects, two questions have fundamental importance. \begin{enumerate} \item Can we \emph{enumerate} the objects in the collection? \item Can we \emph{classify} the objects in the collection? \end{enumerate} This dissertation deals primarily with the question of enumeration in the field of algebraic combinatorics. Here ``enumerate'' is intended in both its common senses: counting objects and listing objects. We should be able to count objects so we have a rough idea of the complexity of the task of organizing them. But we should also be able to give representative examples of the objects. In particular, if we can construct representative examples in a recursive way, then we can teach a computer to perform operations on the objects. Moreover, in spending the time to find appropriate recursively-defined representations of objects, we generally discover properties of the objects which will be useful when we turn to the question of classification. The chapters of this dissertation can be read independently. However, there are strong connections between some of the chapters. Here we indicate some of the connections and briefly explain the topics to be discussed. Chapters~\ref{chap:affinetoric} and~\ref{chap:mixing} deal with the~$\cv\dv$-index, which is a polynomial invariant encoding the flag structure of polytopes and similar objects. With the $\cv\dv$-index of a polytope available, one can quickly answer questions such as: \begin{itemize} \item How many vertices does this polytope have? or \item How many ways can one select a connected chain of a vertex, an edge, and a face in this polytope? \end{itemize} The $\cv\dv$-index is not fully understood.
In particular, even in cases where the coefficients are known to be nonnegative it is not always known what they count. In Chapter~\ref{chap:affinetoric} we examine the behavior of the $\cv\dv$-index (and more generally, the $\av\bv$-index) on non-spherical manifolds. This viewpoint allows combinatorial questions for polytopes, which are spheres, to be transported to other manifolds. We start by handling the simplest possible case, that of the $n$-dimensional torus, via the notion of toric hyperplane arrangement. In Chapter~\ref{chap:mixing} we streamline computation of and proofs regarding the $\cv\dv$-index. Recursive formulas are already known for the effects of some natural geometric operations on the $\cv\dv$-index. However, some of these rely on delicate chain-counting arguments, since their proofs are expressed in poset-theoretic rather than $\cv\dv$-theoretic terms. By importing the arguments into the $\cv\dv$-language, we are able to simplify many of them. We are also able to interpret the coefficients of the $\cv\dv$-index in a special case as counting lattice paths. Several results in this chapter were discovered with the assistance of GAP~\cite{GAP4}. Chapters~\ref{chap:acyclic} and~\ref{chap:critical} deal, in one way or another, with chip-firing games on graphs. Chip-firing games arise out of statistical mechanics, where they are called abelian sandpile models. There are also connections to Kirchhoff's fundamental work in circuit theory. In Chapter~\ref{chap:acyclic} we use chip-firing games as a tool to give a geometric proof of the result of Propp that acyclic orientations of a graph with a fixed sink have the structure of a distributive lattice. Finally, in Chapter~\ref{chap:critical} we study the critical group, which is the group of configurations of a chip-firing game. It is known that the order of this group is equal to the number of spanning trees of the graph.
However, the structure of the critical group is only known for a few classes of graphs. We can shed a little light on the structure of the critical group of uniformly cleft graphs, which are introduced in this dissertation. We can also count the spanning trees of non-uniformly cleft trees. Some work in this dissertation is jointly authored. In particular, Chapter~\ref{chap:affinetoric} is joint work with Richard~Ehrenborg and Margaret~Readdy, while Chapter~\ref{chap:acyclic} is joint work with Richard~Ehrenborg. We have submitted Chapter~\ref{chap:affinetoric} to the journal \textit{Discrete and Computational Geometry}. It has been refereed, and we are preparing a new version for resubmission. The chapter is based on a snapshot of that new version. None of the other chapters have yet been submitted for publication. \begin{center} Copyright \copyright\ MLE Slone 2008 \end{center} \setcounter{chapter}{0} \chapter{Affine and toric arrangements}\label{chap:affinetoric} \section{Introduction} \label{section_introduction} Traditionally combinatorialists have studied topological objects that are spherical, such as polytopes, or which are homeomorphic to a wedge of spheres, such as those obtained from shellable complexes. In this chapter we break from this practice and study hyperplane arrangements on the $n$-dimensional torus. It is classical that the convex hull of a finite collection of points in Euclidean space is a polytope and its boundary is a sphere. The key ingredient in this construction is convexity. At the moment there is no natural analogue of this process to obtain a complex whose geometric realization is a torus. In this chapter we are taking a zonotopal approach to working with arrangements on the torus. Recall that a zonotope can be defined without the notion of convexity, that is, it is a Minkowski sum of line segments. Dually, a central hyperplane arrangement gives rise to a spherical cell complex. 
By considering an arrangement on the torus, we are able to obtain a subdivision whose geometric realization is indeed the torus. We will see later in Section~\ref{section_toric} that this amounts to restricting ourselves to arrangements whose subspaces in the Euclidean space $\mathbb{R}^{n}$ have coefficient matrices with rational entries. Under the quotient map $\mathbb{R}^{n} \longrightarrow \mathbb{R}^{n}/\mathbb{Z}^{n} = T^{n}$ these subspaces are sent to subtori of the $n$-dimensional torus $T^{n}$. Zaslavsky initiated the modern study of hyperplane arrangements in his fundamental treatise~\cite{Zaslavsky}. For early work in the field, see the references given in Gr\"unbaum's text~\cite[Chapter 18]{Grunbaum}. Zaslavsky showed that evaluating the characteristic polynomial of a central hyperplane arrangement at $-1$ gives the number of regions in the complement of the arrangement. For central hyperplane arrangements, Bayer and Sturmfels~\cite{Bayer_Sturmfels} proved the flag $f$-vector of the arrangement can be determined from the intersection lattice; see Theorem~\ref{theorem_Bayer_Sturmfels}. However, their result is stated as a sum of chains in the intersection lattice and hence it is hard to apply. Billera, Ehrenborg, and Readdy improved the Bayer--Sturmfels result by showing that it is enough to know the flag $f$-vector of the intersection lattice to compute the flag~$f$-vector of a central arrangement. Recall that the $\cv\dv$-index of a regular cell complex is an efficient tool to encode its flag $f$-vector without linear redundancies~\cite{Bayer_Klapper}. The Billera--Ehrenborg--Readdy theorem gives an explicit way to compute the $\cv\dv$-index of the arrangement, and hence its flag $f$-vector~\cite{Billera_Ehrenborg_Readdy_om}. We generalize Zaslavsky's theorem on the number of regions of a hyperplane arrangement to the toric case. Although there is no intersection lattice per se, one works with the intersection poset.
From the Zaslavsky result we obtain a toric version of the Bayer--Sturmfels result for hyperplane arrangements, that is, there is a natural poset map from the face poset to the intersection poset, and furthermore, the cardinality of the inverse image of a chain under this map is described. As in the case of a central hyperplane arrangement, our toric version of the Bayer--Sturmfels result determines the flag $f$-vector of the face poset of a toric arrangement in terms of its intersection poset. However, this is far from being explicit. Using the coalgebraic techniques from~\cite{Ehrenborg_Readdy_c}, we are able to determine the flag $f$-vector explicitly in terms of the flag $f$-vector of the intersection poset. Moreover, the answer is given by a $\cv\dv$ type of polynomial. The flag $f$-vector of a regular spherical complex is encoded by the $\cv\dv$-index, a non-commutative polynomial in the variables~$\mathbf{c}$ and~$\mathbf{d}$, whereas the $n$-dimensional toric analogue is a $\cv\dv$-polynomial plus the $\av\bv$-polynomial~$(\mathbf{a}-\mathbf{b})^{n+1}$. Zaslavsky also showed that evaluating the characteristic polynomial of an affine arrangement at $1$ gives the number of bounded regions in the complement of the arrangement. Thus we return to affine arrangements in Euclidean space with the twist that we study the {\em unbounded} regions. The unbounded regions form a spherical complex. In the case of central arrangements, this complex is exactly what was studied previously by Billera, Ehrenborg, and Readdy~\cite{Billera_Ehrenborg_Readdy_om}. For non-central arrangements, we determine the $\cv\dv$-index of this complex in terms of the lattice of unbounded intersections of the arrangement. Interestingly, the techniques for studying toric arrangements and the unbounded complex of non-central arrangements are similar. Hence, we present these results in the same chapter.
For example, the toric and non-central analogues of the Bayer--Sturmfels theorem only differ by which Zaslavsky invariant is used. The coalgebraic translations of the two analogues involve exactly the same argument, and the resulting underlying maps $\varphi_{t}$ (in the toric case) and $\varphi_{ub}$ (in the non-central case) differ only slightly in their definitions. We end with many open questions about subdivisions of manifolds. \section{Preliminaries} \label{section_preliminaries} All the posets we will work with are graded, that is, posets having a unique minimal element $\hat{0}$, a unique maximal element $\hat{1}$, and a rank function $\rho$. For two elements $x$ and $z$ in a graded poset $P$ such that $x \leq z$, let $[x,z]$ denote the interval $\{y \in P \: : \: x \leq y \leq z\}$. Observe that the interval $[x,z]$ is itself a graded poset. Given a graded poset $P$ of rank $n+1$ and $S \subseteq \{1, \ldots, n\}$, the {\em $S$-rank-selected poset $P(S)$} is the poset consisting of the elements $P(S) = \{x \in P \: : \: \rho(x) \in S\} \cup \{\hat{0}, \hat{1}\}$. The partial orders of~$[x,z]$ and $P(S)$ are each inherited from that of $P$. The \define{dual poset} of $P$, written $P^*$, is the poset having the same underlying set as $P$ but with the order relation reversed: $x <_{P^*} y$ if and only if $y <_P x$. For standard poset terminology, we refer the reader to Stanley's work~\cite{Stanley_EC_I}. The M\"obius function $\mu(x,y)$ on a poset $P$ is defined recursively by $\mu(x,x) = 1$ and for elements $x, y \in P$ with $x < y$ by $\mu(x,y) = - \sum_{x \leq z < y} \mu(x,z)$; see Section~3.7 in~\cite{Stanley_EC_I}. For a graded poset~$P$ with minimal element $\hat{0}$ and maximal element $\hat{1}$ we write $\mu(P) = \mu_{P}(\hat{0},\hat{1})$. We now review important results about hyperplane arrangements, the $\cv\dv$-index, and coalgebraic techniques. All are essential for proving the main results of this chapter.
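The recursive definition of the M\"obius function translates directly into code. The following is a minimal sketch of our own (not from the dissertation), evaluated on the Boolean lattice $B_2$, where $\mu(\hat{0},x) = (-1)^{\rho(x)}$:

```python
# Mobius function of a finite poset, straight from the recursive definition:
# mu(x, x) = 1 and mu(x, y) = -sum_{x <= z < y} mu(x, z).
# Example poset: the Boolean lattice B_2, subsets of {1, 2} under inclusion.

elements = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]

def leq(x, y):
    return x <= y  # the order relation: subset inclusion

def mobius(x, y):
    if x == y:
        return 1
    return -sum(mobius(x, z) for z in elements
                if leq(x, z) and leq(z, y) and z != y)

# On a Boolean lattice, mu(0^, x) = (-1)^{|x|}:
zero = frozenset()
print([mobius(zero, x) for x in elements])  # [1, -1, -1, 1]
```

The same two functions work for any finite poset once `elements` and `leq` are swapped out.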
\subsection{Hyperplane arrangements} Let $\mathcal{H} = \{H_{1}, \ldots, H_{m}\}$ be a hyperplane arrangement in $\mathbb{R}^{n}$, that is, a finite collection of affine hyperplanes in $n$-dimensional Euclidean space. For brevity, throughout this chapter we will often refer to a hyperplane arrangement as an arrangement. We call an arrangement {\em essential} if the normal vectors to the hyperplanes in $\mathcal{H}$ span $\mathbb{R}^n$. In this chapter we are only interested in essential arrangements. Observe that the intersection $\bigcap_{i=1}^{m} H_{i}$ of all of the hyperplanes in an essential arrangement is either the empty set $\emptyset$ or a singleton point. We call an arrangement {\em central} if the intersection of all the hyperplanes is one point. We may assume that this point is the origin ${\mathbf 0}$ and hence all of the hyperplanes are subspaces of codimension $1$. If the intersection is the empty set, we call the arrangement {\em non-central}. The {\em intersection lattice} $\mathcal{L}$ is the lattice formed by ordering all the intersections of hyperplanes in~$\mathcal{H}$ by reverse inclusion. If the intersection of all the hyperplanes in a given arrangement is empty, then we include the empty set $\emptyset$ as the maximal element in the intersection lattice. If the arrangement is central, the maximal element is $\{{\mathbf 0}\}$. In all cases, the minimal element of $\mathcal{L}$ will be all of $\mathbb{R}^{n}$. For a hyperplane arrangement $\mathcal{H}$ with intersection lattice $\mathcal{L}$, the {\em characteristic polynomial} is defined by $$ \chi(\mathcal{H}; t) = \sum_{\substack{x \in \mathcal{L} \\ x \neq \emptyset}} \mu(\hat{0},x) \cdot t^{\dim(x)} , $$ where $\mu$ denotes the M\"obius function. The characteristic polynomial is a combinatorial invariant of the arrangement.
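The characteristic polynomial is easy to evaluate by hand in small cases. A sketch of ours (a standard example, not taken from the chapter): for the coordinate arrangement $\{x_i = 0\}$ in $\mathbb{R}^n$, the intersection lattice is Boolean, so an intersection is indexed by the set $S$ of hyperplanes used, with $\dim = n - |S|$ and $\mu(\hat{0},S) = (-1)^{|S|}$, giving $\chi(t) = (t-1)^n$.

```python
from math import comb

# chi(t) = sum_k (-1)^k * binom(n, k) * t^(n - k) = (t - 1)^n
# for the coordinate arrangement {x_i = 0} in R^n.
def chi(n, t):
    return sum((-1) ** k * comb(n, k) * t ** (n - k) for k in range(n + 1))

n = 3
print((-1) ** n * chi(n, -1))  # 8 regions: the octants of R^3
print((-1) ** n * chi(n, 1))   # 0 bounded regions, as expected for a central arrangement
```

The two printed values anticipate Zaslavsky's theorem below: evaluation at $-1$ counts all regions, evaluation at $1$ counts the bounded ones.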
The fundamental result of Zaslavsky~\cite{Zaslavsky} is that this invariant determines the number and type of regions in the complement of the arrangement. \begin{theorem}[Zaslavsky] For a hyperplane arrangement $\mathcal{H}$ in $\mathbb{R}^{n}$ the number of regions in the complement of the arrangement is given by $(-1)^{n} \cdot \chi(\mathcal{H}; -1)$. Furthermore, the number of bounded regions is given by $(-1)^{n} \cdot \chi(\mathcal{H}; 1)$. \label{theorem_Zaslavsky} \end{theorem} For a graded poset $P$, define the two Zaslavsky invariants $Z$ and $Z_{b}$ by \begin{eqnarray*} Z(P) & = & \sum_{\hat{0} \leq x \leq \hat{1}} (-1)^{\rho(x)} \cdot \mu(\hat{0},x) ,\\ Z_{b}(P) & = & (-1)^{\rho(P)} \cdot \mu(P) . \end{eqnarray*} In order to work with Zaslavsky's result, we need the following reformulation of Theorem~\ref{theorem_Zaslavsky}. \begin{theorem} \begin{enumerate} \item[(i)] For a central hyperplane arrangement the number of regions is given by $Z(\mathcal{L})$, where $\mathcal{L}$ is the intersection lattice of the arrangement. \item[(ii)] For a non-central hyperplane arrangement the number of regions is given by $Z(\mathcal{L}) - Z_{b}(\mathcal{L})$, where $\mathcal{L}$ is the intersection lattice of the arrangement. The number of bounded regions is given by~$Z_{b}(\mathcal{L})$. \end{enumerate} \label{theorem_Zaslavsky_poset} \end{theorem} Given a central hyperplane arrangement $\mathcal{H}$ there are two associated lattices, namely, the intersection lattice $\mathcal{L}$ and the lattice $T$ of faces of the arrangement. The minimal element of $T$ is the empty set $\emptyset$ and the maximal element is the whole space~$\mathbb{R}^{n}$. The lattice of faces can be seen as the face poset of the cell complex obtained by intersecting the arrangement $\mathcal{H}$ with a small sphere centered at the origin. Each hyperplane corresponds to a great circle on the sphere.
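As a quick check of the reformulation (a standard example of ours, not from the text), take two distinct lines $H_1, H_2$ through the origin in $\mathbb{R}^2$, so that $\mathcal{L}$ consists of $\mathbb{R}^2$, $H_1$, $H_2$, and $\{\mathbf{0}\}$, with ranks $0$, $1$, $1$, and $2$:

```latex
% Mobius values from the minimal element \hat{0} = \mathbb{R}^{2}:
\mu(\hat{0},\mathbb{R}^{2}) = 1, \qquad
\mu(\hat{0},H_{i}) = -1, \qquad
\mu(\hat{0},\{\mathbf{0}\}) = -(1 - 1 - 1) = 1 ,
% so that
Z(\mathcal{L})
  = \sum_{x} (-1)^{\rho(x)} \cdot \mu(\hat{0},x)
  = 1 + 2 \cdot (-1) \cdot (-1) + 1
  = 4 ,
```

which is indeed the number of sectors cut out by the two lines.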
An alternative way to view the lattice of faces $T$ is that the dual lattice $T^{*}$ is the face lattice of the zonotope corresponding to $\mathcal{H}$. Let $\mathcal{L} \cup \{\hat{0}\}$ denote the intersection lattice with a new minimal element $\hat{0}$ adjoined. Define an order- and rank-preserving map $z$ from the dual lattice $T^{*}$ to the augmented lattice $\mathcal{L} \cup \{\hat{0}\}$ by sending a face of the arrangement, that is, a cone in $\mathbb{R}^{n}$, to its affine hull. Note that under the map $z$ the minimal element of $T^{*}$ is mapped to the minimal element of $\mathcal{L} \cup \{\hat{0}\}$. Observe that $z$ maps chains to chains. Hence we view~$z$ as a map from the set of chains of $T^{*}$ to the set of chains of $\mathcal{L} \cup \{\hat{0}\}$. Bayer and Sturmfels~\cite{Bayer_Sturmfels} proved the following result about the inverse image of a chain under the map $z$. \begin{theorem}[Bayer--Sturmfels] Let $\mathcal{H}$ be a central hyperplane arrangement with intersection lattice $\mathcal{L}$. Let $c = \{\hat{0} = x_{0} < x_{1} < \cdots < x_{k} = \hat{1}\}$ be a chain in $\mathcal{L} \cup \{\hat{0}\}$. Then the cardinality of the inverse image of the chain $c$ under the map $z : T^{*} \longrightarrow \mathcal{L} \cup \{\hat{0}\}$ is given by the product $$ |z^{-1}(c)| = \prod_{i=2}^{k} Z([x_{i-1},x_{i}]) . $$ \label{theorem_Bayer_Sturmfels} \end{theorem} \subsection{The cd-index} Let $P$ be a graded poset of rank $n+1$ with rank function $\rho$. For $S = \{s_{1} < \cdots < s_{k-1}\}$ a subset of $\{1, \ldots, n\}$ define $f_{S}$ to be the number of chains $c = \{\hat{0} = x_{0} < x_{1} < \cdots < x_{k} = \hat{1}\}$ that have elements with ranks in the set $S$, that is, $$ f_{S} = |\{ c \:\: : \:\: \rho(x_{1}) = s_{1}, \ldots, \rho(x_{k-1}) = s_{k-1} \}| . $$ Observe that $f_{S}$ is the number of maximal chains in the rank-selected poset $P(S)$.
The flag $h$-vector is obtained by the relation (here we also present its inverse) \[ h_{S} = \sum_{T \subseteq S} (-1)^{|S-T|} \cdot f_{T} \:\:\:\: \mbox{ and } \:\:\:\: f_{S} = \sum_{T \subseteq S} h_{T} . \] Recall that by Philip~Hall's theorem, the M\"obius function of $P(S)$ is $\mu(P(S)) = (-1)^{|S|-1} \cdot h_{S}$. Let $\mathbf{a}$ and $\mathbf{b}$ be two non-commutative variables of degree $1$. For $S$ a subset of $\{1, \ldots, n\}$ let $u_{S}$ be the monomial $u_{S} = u_{1} \cdots u_{n}$ where $u_{i} = \mathbf{b}$ if $i \in S$ and~$u_{i} = \mathbf{a}$ if $i \not\in S$. Then the {\em $\av\bv$-index} is the noncommutative polynomial defined by \[ \Psi(P) = \sum_{S} h_{S} \cdot u_{S} , \] where the sum is over all subsets $S \subseteq \{1, \ldots, n\}$. The $\av\bv$-index of a poset $P$ of rank $n+1$ is a homogeneous polynomial of degree $n$. A poset $P$ is {\em Eulerian} if every interval $[x,y]$, where $x < y$, satisfies the Euler--Poincar\'e relation, that is, there are the same number of elements of odd as even rank. Equivalently, the M\"obius function of $P$ is given by $\mu(x,y) = (-1)^{\rho(x,y)}$ for all~$x \leq y$ in $P$. The quintessential result is that the $\av\bv$-index of an Eulerian poset has the following form. \begin{theorem} The $\av\bv$-index of an Eulerian poset $P$ can be expressed in terms of the noncommutative variables $\mathbf{c} = \mathbf{a} + \mathbf{b}$ and $\mathbf{d} = \mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a}$. \end{theorem} This theorem was originally conjectured by Fine and proved by Bayer and Klapper~\cite{Bayer_Klapper}. Stanley provided an alternative proof for Eulerian posets~\cite{Stanley_d}. There are proofs which have both used and revealed the underlying algebraic structure. See for instance~\cite{Ehrenborg_k-Eulerian,Ehrenborg_Readdy_homology}.
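The flag $f$- and $h$-vectors can be computed by brute force on small posets. A sketch of our own (not from the dissertation) for the Boolean lattice $B_3$, whose $\av\bv$-index is known to be $\mathbf{c}^2 + \mathbf{d}$:

```python
from itertools import chain, combinations

# The Boolean lattice B_3: subsets of {1,2,3} ordered by inclusion,
# a graded poset of rank 3 with rank(x) = |x|.  S ranges over subsets of {1,2}.
ground = [1, 2, 3]
elements = [frozenset(s) for s in chain.from_iterable(
    combinations(ground, k) for k in range(4))]

def f(S):
    """Number of chains 0^ < x_1 < ... < 1^ hitting exactly the ranks in S."""
    ranks = sorted(S)
    count = 0
    def extend(prev, remaining):
        nonlocal count
        if not remaining:
            count += 1
            return
        for x in elements:
            if len(x) == remaining[0] and prev < x:  # proper superset at next rank
                extend(x, remaining[1:])
    extend(frozenset(), ranks)
    return count

def h(S):
    """Flag h-vector: h_S = sum_{T subseteq S} (-1)^{|S - T|} f_T."""
    return sum((-1) ** len(set(S) - set(T)) * f(T)
               for T in chain.from_iterable(combinations(sorted(S), k)
                                            for k in range(len(S) + 1)))

# f = (1, 3, 3, 6) and h = (1, 2, 2, 1), so
# Psi(B_3) = aa + 2ab + 2ba + bb = cc + d.
print([f(S) for S in ([], [1], [2], [1, 2])])
print([h(S) for S in ([], [1], [2], [1, 2])])
```

The $h$-values $(1,2,2,1)$ assemble into $\Psi(B_3) = \mathbf{a}\mathbf{a} + 2\mathbf{a}\mathbf{b} + 2\mathbf{b}\mathbf{a} + \mathbf{b}\mathbf{b} = \mathbf{c}^2 + \mathbf{d}$, in line with the Bayer--Klapper theorem above.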
When the $\av\bv$-index $\Psi(P)$ is written in terms of $\mathbf{c}$ and $\mathbf{d}$, the resulting polynomial is called the {\em $\cv\dv$-index}. There are linear relations among the entries of the flag $f$-vector of an Eulerian poset, known as the generalized Dehn--Sommerville relations; see~\cite{Bayer_Billera}. The importance of the $\cv\dv$-index is that it removes all of these linear redundancies among the flag $f$-vector entries. Observe that the variables $\mathbf{c}$ and $\mathbf{d}$ have degrees $1$ and $2$, respectively. Thus the $\cv\dv$-index of a poset of rank $n+1$ is a homogeneous polynomial of degree $n$ in the noncommutative variables $\mathbf{c}$ and $\mathbf{d}$. Define the reverse of an $\av\bv$-monomial $u = u_{1} u_{2} \cdots u_{n}$ to be $u^{*} = u_{n} \cdots u_{2} u_{1}$ and extend by linearity to an involution on $\mathbb{Z}\fab$. Since $\mathbf{c}^{*} = \mathbf{c}$ and $\mathbf{d}^{*} = \mathbf{d}$, this involution applied to a $\cv\dv$-monomial simply reverses the $\cv\dv$-monomial. Finally, the $\av\bv$-index respects this involution. For any graded poset $P$ we have $\Psi(P)^{*} = \Psi(P^{*})$. A direct approach to describe the $\av\bv$-index of a poset $P$ is to give each chain a weight and then sum over all chains. For a chain $c = \{\hat{0} = x_{0} < x_{1} < \cdots < x_{k} = \hat{1}\}$ in the poset $P$, define its {\em weight} to be \begin{equation} \wt(c) = (\mathbf{a}-\mathbf{b})^{\rho(x_{0},x_{1})-1} \cdot \mathbf{b} \cdot (\mathbf{a}-\mathbf{b})^{\rho(x_{1},x_{2})-1} \cdot \mathbf{b} \cdots \mathbf{b} \cdot (\mathbf{a}-\mathbf{b})^{\rho(x_{k-1},x_{k})-1} , \label{equation_weight} \end{equation} where $\rho(x,y)$ denotes the rank difference $\rho(y) - \rho(x)$. Then the $\av\bv$-index of $P$ is the polynomial $$ \Psi(P) = \sum_{c} \wt(c), $$ where the sum is over all chains $c$ in the poset $P$.
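For instance (a standard computation of ours, not taken from the text), the chain-weight formula recovers the $\av\bv$-index of the Boolean lattice $B_2$, which has rank $2$ and two atoms:

```latex
% Chains of B_2 and their weights from the definition above:
%   \{\hat{0} < \hat{1}\}:      \wt = (\mathbf{a}-\mathbf{b})^{2-1} = \mathbf{a}-\mathbf{b},
%   \{\hat{0} < x < \hat{1}\}:  \wt = \mathbf{b}, one chain for each of the two atoms x.
\Psi(B_{2}) = (\mathbf{a}-\mathbf{b}) + 2\,\mathbf{b}
            = \mathbf{a}+\mathbf{b} = \mathbf{c} .
```

This agrees with computing $\Psi(B_2)$ from the flag $h$-vector $(h_{\emptyset}, h_{\{1\}}) = (1,1)$.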
Finally, a third description of the $\av\bv$-index is Stanley's recursion for the $\av\bv$-index of a graded poset~\cite[Equation~(7)]{Stanley_d}. It is: \begin{equation} \label{equation_Stanley_recursion} \Psi(P) = (\mathbf{a}-\mathbf{b})^{\rho(P)-1} + \sum_{\hat{0} < x < \hat{1}} (\mathbf{a}-\mathbf{b})^{\rho(x)-1} \cdot \mathbf{b} \cdot \Psi([x,\hat{1}]). \end{equation} The initial condition for this recursion is the unique poset of rank $1$, $B_{1}$, where $\Psi(B_{1}) = 1$. \subsection{Coalgebraic techniques} A coproduct $\Delta$ on a free $\mathbb{Z}$-module $C$ is a linear map $\Delta : C \longrightarrow C \otimes C$. In order to be explicit, we use the Heyneman--Sweedler sigma notation~\cite{Heyneman_Sweedler} for writing the coproduct. To explain this notation, notice that $\Delta(w)$ is an element of $C \otimes C$ and thus has the form $$ \Delta(w) = \sum_{i=1}^{k} w_{1}^{i} \otimes w_{2}^{i} , $$ where $k$ is the number of terms and $w_{1}^{i}$ and $w_{2}^{i}$ belong to $C$. Since all the maps that are applied to $\Delta(w)$ treat each term the same, the sigma notation drops the index $i$ and instead one writes $$ \Delta(w) = \sum_{w} w_{(1)} \otimes w_{(2)} . $$ Informally, this sum should be thought of as all the ways of breaking the element $w$ in two pieces, where the first piece is denoted by $w_{(1)}$ and the second by $w_{(2)}$. The Sweedler notation for the expression $(\Delta \otimes \id) \circ \Delta$, where $\id$ denotes the identity map, is the following $$ ((\Delta \otimes \id) \circ \Delta)(w) = \sum_{w} \sum_{w_{(1)}} w_{(1,1)} \otimes w_{(1,2)} \otimes w_{(2)} . $$ The right-hand side should be thought of as first breaking $w$ into the two pieces $w_{(1)}$ and $w_{(2)}$ and then breaking $w_{(1)}$ into the two pieces $w_{(1,1)}$ and $w_{(1,2)}$. See Joni and Rota for a more detailed explanation~\cite{Joni_Rota}.
The coproduct $\Delta$ is coassociative if $(\Delta \otimes \id) \circ \Delta = (\id \otimes \Delta) \circ \Delta$. The sigma notation expresses coassociativity as $$ \sum_{w} \sum_{w_{(1)}} w_{(1,1)} \otimes w_{(1,2)} \otimes w_{(2)} = \sum_{w} \sum_{w_{(2)}} w_{(1)} \otimes w_{(2,1)} \otimes w_{(2,2)} . $$ Informally coassociativity states that breaking $w$ into two pieces and then breaking the first piece into two pieces, in all possible ways, is equivalent to breaking $w$ into two pieces and then breaking the second piece into two pieces. Compare coassociativity with associativity of a multiplication map $m: A \otimes A \longrightarrow A$ on an algebra $A$. Assuming coassociativity, the sigma notation simplifies to $$ \Delta^{2}(w) = \sum_{w} w_{(1)} \otimes w_{(2)} \otimes w_{(3)} , $$ where $\Delta^{2}$ is defined as $(\Delta \otimes \id) \circ \Delta = (\id \otimes \Delta) \circ \Delta$, and the three pieces have been renamed as $w_{(1)}$, $w_{(2)}$ and $w_{(3)}$. Coassociativity allows one to define the $k$-ary coproduct $\Delta^{k-1} : C \longrightarrow C^{\otimes k}$ by the recursion $\Delta^{0} = \id$ and $\Delta^{k} = (\Delta^{k-1} \otimes \id) \circ \Delta$. The sigma notation for the $k$-ary coproduct is $$ \Delta^{k-1}(w) = \sum_{w} w_{(1)} \otimes w_{(2)} \otimes \cdots \otimes w_{(k)} . $$ Let $\mathbb{Z}\fab$ denote the polynomial ring in the non-commutative variables $\mathbf{a}$ and~$\mathbf{b}$. We define a coproduct~$\Delta$ on the algebra $\mathbb{Z}\fab$ by letting $\Delta$ satisfy the following identities: $\Delta(1) = 0$, $\Delta(\mathbf{a}) = \Delta(\mathbf{b}) = 1 \otimes 1$ and the Leibniz condition \begin{equation} \Delta(u \cdot v) = \sum_{u} u_{(1)} \otimes u_{(2)} \cdot v + \sum_{v} u \cdot v_{(1)} \otimes v_{(2)} . \label{equation_Newtonian} \end{equation} For an $\av\bv$-monomial $u = u_{1} u_{2} \cdots u_{n}$ we have that $$ \Delta(u) = \sum_{i=1}^{n} u_{1} \cdots u_{i-1} \otimes u_{i+1} \cdots u_{n} .
$$ The fundamental result for this coproduct is that the $\av\bv$-index is a coalgebra homomorphism~\cite{Ehrenborg_Readdy_c}. We express this result as the following identity. \begin{theorem}[Ehrenborg--Readdy] For a graded poset $P$ with $\av\bv$-index $w = \Psi(P)$ and for any $k$-multilinear map~$M$ on $\mathbb{Z}\fab$, the following coproduct identity holds: $$ \sum_{c} M(\Psi([x_{0},x_{1}]), \Psi([x_{1},x_{2}]), \ldots, \Psi([x_{k-1},x_{k}])) = \sum_{w} M(w_{(1)}, w_{(2)}, \ldots, w_{(k)}) , $$ where the first sum is over all chains $c = \{\hat{0} = x_{0} < x_{1} < \cdots < x_{k} = \hat{1}\}$ of length $k$ and the second sum is over the $k$-ary coproduct of $w$, that is, over~$\Delta^{k-1}$. \label{theorem_Ehrenborg_Readdy} \end{theorem} \subsection{The cd-index of the face poset of a central arrangement} We recall the definition of the omega map~\cite{Billera_Ehrenborg_Readdy_om}. \begin{definition} The linear map $\omega$ from $\mathbb{Z}\fab$ to $\mathbb{Z}\fcd$ is formed by first replacing every occurrence of $\mathbf{a}\mathbf{b}$ in a given $\av\bv$-monomial by $2\mathbf{d}$ and then replacing the remaining letters by $\mathbf{c}$. \end{definition} For a central hyperplane arrangement~$\mathcal{H}$ the $\cv\dv$-index of the face poset is computed as follows~\cite{Billera_Ehrenborg_Readdy_om}. \begin{theorem}[Billera--Ehrenborg--Readdy] Let $\mathcal{H}$ be a central hyperplane arrangement with intersection lattice $\mathcal{L}$ and face lattice $T$. Then the $\cv\dv$-index of the face lattice $T$ is given by $$ \Psi(T) = \omega(\mathbf{a} \cdot \Psi(\mathcal{L}))^{*} . $$ \label{theorem_Billera_Ehrenborg_Readdy} \end{theorem} We review the basic ideas behind the proof of this theorem. We will refer back to them when we prove similar results for toric and affine arrangements in Sections~\ref{section_toric} and~\ref{section_affine}.
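Both the deconcatenation coproduct and the $\omega$ map are mechanical enough to sketch in code. The encoding below (monomials as strings over $\{a,b\}$, with $\omega$ returning a coefficient and a $cd$-word) is our own illustration, not the chapter's:

```python
# Coproduct on ab-monomials: Delta(u) = sum_i u_1...u_{i-1} (x) u_{i+1}...u_n,
# i.e. delete one letter and record the two remaining factors.
def coproduct(u):
    return [(u[:i], u[i + 1:]) for i in range(len(u))]

# Omega map: replace each occurrence of "ab" by 2d (scanning left to right),
# then replace the remaining letters by c.
def omega(u):
    coeff, word, i = 1, "", 0
    while i < len(u):
        if u[i:i + 2] == "ab":
            coeff, word, i = 2 * coeff, word + "d", i + 2
        else:
            word, i = word + "c", i + 1
    return coeff, word

print(coproduct("ab"))   # [('', 'b'), ('a', '')]
print(omega("aab"))      # (2, 'cd')
```

As a cross-check against the worked example earlier: for two lines through the origin $\Psi(\mathcal{L}) = \mathbf{c} = \mathbf{a} + \mathbf{b}$, and $\omega(\mathbf{a}\cdot(\mathbf{a}+\mathbf{b})) = \mathbf{c}^2 + 2\mathbf{d}$, which (being its own reverse) is the $\cv\dv$-index of the face lattice of a quadrilateral.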
Define three linear operators $\kappa$, $\beta$ and $\eta$ on $\mathbb{Z}\fab$ by \[ \kappa(v) = \begin{cases} (\mathbf{a}-\mathbf{b})^{m} & \text{if $v=\mathbf{a}^{m}$ for some $m \geq 0$,} \\ 0 & \text{otherwise,} \end{cases} \] \[ \beta(v) = \begin{cases} (\mathbf{a}-\mathbf{b})^{m} & \text{if $v=\mathbf{b}^{m}$ for some $m \geq 0$,} \\ 0 & \text{otherwise,} \end{cases} \] and \[ \eta(v) = \begin{cases} 2 \cdot (\mathbf{a}-\mathbf{b})^{m+k} & \text{if $v = \mathbf{b}^{m} \mathbf{a}^{k}$ for some $m, k \geq 0$,} \\ 0 & \text{otherwise.} \end{cases} \] Observe that $\kappa$ and $\beta$ are both algebra maps. The following relations hold for a poset $P$. See~\cite[Section 5]{Billera_Ehrenborg_Readdy_om}. \begin{eqnarray} \kappa(\Psi(P)) & = & (\mathbf{a}-\mathbf{b})^{\rho(P)-1} , \label{equation_kappa} \\ \beta(\Psi(P)) & = & Z_{b}(P) \cdot (\mathbf{a}-\mathbf{b})^{\rho(P)-1} , \label{equation_beta} \\ \eta(\Psi(P)) & = & Z(P) \cdot (\mathbf{a}-\mathbf{b})^{\rho(P)-1} . \label{equation_eta} \end{eqnarray} For $k \geq 1$ the operator $\varphi_{k}$ is defined by the coalgebra expression $$ \varphi_{k}(v) = \sum_{v} \kappa(v_{(1)}) \cdot \mathbf{b} \cdot \eta(v_{(2)}) \cdot \mathbf{b} \cdots \mathbf{b} \cdot \eta(v_{(k)}) , $$ where the coproduct splits $v$ into $k$ parts. Finally $\varphi$ is defined as the sum $$ \varphi(v) = \sum_{k \geq 1} \varphi_{k}(v) . $$ Note that in this expression only a finite number of terms are nontrivial. The connection with hyperplane arrangements is given by the following proposition. \begin{proposition} The $\av\bv$-index of the lattice of faces of a central hyperplane arrangement is given by $$ \Psi(T) = \varphi( \Psi( \mathcal{L} \cup \{\hat{0}\} ) )^{*} . $$ \label{proposition_varphi} \end{proposition} The function $\varphi$ satisfies the functional equation $$ \varphi(v) = \kappa(v) + \sum_{v} \varphi(v_{(1)}) \cdot \mathbf{b} \cdot \eta(v_{(2)}) .
$$ From this functional equation it follows that the function $\varphi$ satisfies the initial conditions $\varphi(1) = 1$ and $\varphi(\mathbf{b}) = 2 \cdot \mathbf{b}$ and the recurrence relations: \begin{eqnarray} \varphi(v \cdot \mathbf{a}) & = & \varphi(v) \cdot \mathbf{c} , \label{equation_varphi_a} \\ \varphi(v \cdot \mathbf{b} \mathbf{b}) & = & \varphi(v \cdot \mathbf{b}) \cdot \mathbf{c} , \label{equation_varphi_b_b} \\ \varphi(v \cdot \mathbf{a} \mathbf{b}) & = & \varphi(v) \cdot 2\mathbf{d} , \label{equation_varphi_a_b} \end{eqnarray} for an $\av\bv$-monomial $v$; see~\cite[Section~5]{Billera_Ehrenborg_Readdy_om}. These recursions culminate in the following result. \begin{proposition} \label{proposition_culminate} The maps $\varphi$ and $\omega$ agree on $\av\bv$-monomials that begin with $\mathbf{a}$, that is, if $w = \mathbf{a} \cdot v$, then $\varphi(w) = \omega(w)$. \end{proposition} Theorem~\ref{theorem_Billera_Ehrenborg_Readdy} follows from the fact that $\Psi(\mathcal{L} \cup \{\hat{0}\}) = \mathbf{a} \cdot \Psi(\mathcal{L})$ by applying Proposition~\ref{proposition_culminate}. \subsection{Regular subdivisions of manifolds} The face poset $P(\Omega)$ of a cell complex $\Omega$ is the set of all cells in $\Omega$ together with a minimal element~$\hat{0}$ and a maximal element $\hat{1}$. One partially orders two cells $\tau$ and $\sigma$ by requiring that $\tau < \sigma$ if the cell $\tau$ is contained in $\overline{\sigma}$, the closure of $\sigma$. In order to define a regular cell complex, consider the cell complex $\Omega$ embedded in Euclidean space $\mathbb{R}^{n}$. This condition is compatible with toric cell complexes since the~$n$-dimensional torus can be embedded in $2n$-dimensional Euclidean space.
Let~$B^{n}$ denote the ball $\{x \in \mathbb{R}^{n} \: : \: x_{1}^{2} + \cdots + x_{n}^{2} \leq 1\}$ and let $S^{n-1}$ denote the sphere $\{x \in \mathbb{R}^{n} \: : \: x_{1}^{2} + \cdots + x_{n}^{2} = 1\}$. A cell complex $\Omega$ is {\em regular} if (i) $\Omega$ consists of a finite number of cells, (ii) for every cell $\sigma$ of $\Omega$ the pair $(\overline{\sigma}, \overline{\sigma} - \sigma)$ is homeomorphic to a pair $(B^{k},S^{k-1})$ for some integer $k$, and (iii) the boundary $\overline{\sigma} - \sigma$ is the disjoint union of smaller cells in $\Omega$. See Section~3.8 in~\cite{Stanley_EC_I} for more details. For a discussion of regular cell complexes not embedded in $\mathbb{R}^n$, see~\cite{Bjorner_topological_methods}. The face poset of a regular subdivision of the sphere is an Eulerian poset and hence has a $\cv\dv$-index. For regular subdivisions of compact manifolds, a similar result holds. This was independently observed by Swartz~\cite{Swartz}. \begin{theorem} Let $\Omega$ be a regular cell complex whose geometric realization is a compact $n$-dimensional manifold $\mathcal{M}$. Let $\chi(\mathcal{M})$ denote the Euler characteristic of~$\mathcal{M}$. Then the $\av\bv$-index of the face poset $P$ of $\Omega$ has the following form. \begin{enumerate} \item[(i)] If $n$ is odd then $P$ is an Eulerian poset and hence $\Psi(P)$ can be written in terms of $\mathbf{c}$ and $\mathbf{d}$. \item[(ii)] If $n$ is even then $\Psi(P)$ has the form $$ \Psi(P) = \left( 1-\frac{\chi(\mathcal{M})}{2} \right) \cdot (\mathbf{a}-\mathbf{b})^{n+1} + \frac{\chi(\mathcal{M})}{2} \cdot \mathbf{c}^{n+1} + \Phi , $$ where $\Phi$ is a homogeneous $\cv\dv$-polynomial of degree $n+1$ and $\Phi$ does not contain the term $\mathbf{c}^{n+1}$. \end{enumerate} \label{theorem_manifold} \end{theorem} \begin{proof} Observe that the poset $P$ has rank $n+2$.
By~\cite[Theorem~3.8.9]{Stanley_EC_I} we know that every interval~$[x,y]$ strictly contained in $P$ is Eulerian. When the rank of $P$ is odd this implies that $P$ is also Eulerian; see~\cite[Exercise 69c]{Stanley_EC_I}. Hence in this case the $\mathbf{ab}$-index of $P$ can be expressed as a $\mathbf{cd}$-index. When $n$ is even, we use~\cite[Theorem~4.2]{Ehrenborg_k-Eulerian} to conclude that the $\mathbf{ab}$-index of $P$ belongs to $\mathbb{R}\langle\mathbf{c},\mathbf{d},(\mathbf{a}-\mathbf{b})^{n+1}\rangle$. Since $\Psi(P)$ has degree $n+1$, the $\mathbf{ab}$-index $\Psi(P)$ can be written in the form \[ \Psi(P) = c_{1} \cdot (\mathbf{a}-\mathbf{b})^{n+1} + c_{2} \cdot \mathbf{c}^{n+1} + \Phi , \] where $\Phi$ is a homogeneous $\mathbf{cd}$-polynomial of degree $n+1$ that does not contain any~$\mathbf{c}^{n+1}$ terms. By looking at the coefficients of $\mathbf{a}^{n+1}$ and $\mathbf{b}^{n+1}$, we have $c_{1} + c_{2} = 1$ and $c_{2} - c_{1} = \mu(P) = \chi(\mathcal{M}) - 1$, where the last identity is again~\cite[Theorem~3.8.9]{Stanley_EC_I}. Solving for $c_{1}$ and $c_{2}$ proves the result. \end{proof} For the $n$-dimensional torus Theorem~\ref{theorem_manifold} can be expressed as follows. \begin{corollary} Let $\Omega$ be a regular cell complex whose geometric realization is the $n$-dimensional torus~$T^{n}$. Then the $\mathbf{ab}$-index of the face poset $P$ of $\Omega$ has the following form: $$ \Psi(P) = (\mathbf{a}-\mathbf{b})^{n+1} + \Phi , $$ where $\Phi$ is a homogeneous $\mathbf{cd}$-polynomial of degree $n+1$ and $\Phi$ does not contain the term $\mathbf{c}^{n+1}$. \end{corollary} \begin{proof} When $n$ is even this is Theorem~\ref{theorem_manifold}. When $n$ is odd this is Theorem~\ref{theorem_manifold} together with the two facts that $\chi(T^{n}) = 0$ and $(\mathbf{a}-\mathbf{b})^{n+1} = (\mathbf{c}^{2} - 2 \mathbf{d})^{(n+1)/2}$.
\end{proof} \section{Toric arrangements} \label{section_toric} \subsection{Toric subspaces and arrangements} \begin{figure} \caption{A toric line arrangement which subdivides the torus $T^{2}$.} \label{figure_toric_one} \end{figure} The $n$-dimensional torus $T^{n}$ is defined as the quotient $\mathbb{R}^{n}/\mathbb{Z}^{n}$. Recall that the torus~$T^{n}$ is an abelian group. When identifying the torus $T^{n}$ with the set $[0,1)^{n}$, the group structure is componentwise addition modulo $1$. \begin{lemma} Let $V$ be a $k$-dimensional affine subspace in $\mathbb{R}^{n}$ with rational coefficients. That is, $V$ has the form $$ V = \{ \vec{v} \in \mathbb{R}^{n} \:\: : \:\: A \vec{v} = \vec{b}\} , $$ where the matrix $A$ has rational entries and the vector $\vec{b}$ is allowed to have real entries. Then the image of $V$ under the quotient map $\mathbb{R}^{n} \to \mathbb{R}^{n}/\mathbb{Z}^{n}$, denoted by $\overline{V}$, is a $k$-dimensional torus. \end{lemma} \begin{proof} By translating $V$, we may assume that the vector $\vec{b}$ is the zero vector, and therefore $V$ is a subspace. In this case, the intersection of $V$ with the integer lattice~$\mathbb{Z}^{n}$ is a subgroup of the free abelian group $\mathbb{Z}^{n}$. Since the matrix $A$ has all rational entries, the rank of this subgroup is $k$, that is, the subgroup is isomorphic to $\mathbb{Z}^{k}$. Hence the image $\overline{V}$ is the quotient $V/(V \cap \mathbb{Z}^{n})$, which is isomorphic to the quotient~$\mathbb{R}^{k}/\mathbb{Z}^{k}$, that is, a $k$-dimensional torus. \end{proof} We call the image $\overline{V}$ a {\em toric subspace} of the torus $T^{n}$ because it is homeomorphic to some $k$-dimensional torus. When we remove the condition that the matrix $A$ is rational, the image is not necessarily homeomorphic to a torus. The intersection of two toric subspaces is in general not a toric subspace, but instead is the disjoint union of a finite number of toric subspaces.
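This phenomenon already occurs for two toric lines on $T^{2}$. The following sketch (Python; the brute-force enumeration over a common denominator is our illustration, not the paper's method) intersects the toric lines $y \equiv 2x$ and $x \equiv 2y$, which meet in a single point in $\mathbb{R}^{2}$ but in three isolated points on the torus.

```python
from fractions import Fraction

def on_line(a1, a2, b, x, y):
    # the point (x, y) lies on the toric line a1*x + a2*y = b (mod 1)
    return (a1 * x + a2 * y - b) % 1 == 0

# enumerate the torus points whose coordinates have denominator dividing 12
q = 12
pts = [(Fraction(i, q), Fraction(j, q)) for i in range(q) for j in range(q)]

# intersection of the lines 2x - y = 0 and -x + 2y = 0 on T^2:
both = [p for p in pts if on_line(2, -1, 0, *p) and on_line(-1, 2, 0, *p)]
# three isolated points: (0,0), (1/3,2/3), (2/3,1/3) -- a disjoint union
# of translates of the zero-dimensional toric subspace {(0,0)}
```

Here the intersection is the union of three translates of the point $(0,0)$ by the $3$-torsion points $(1/3,2/3)$ and $(2/3,1/3)$.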
For two affine subspaces $V$ and $W$ with rational coefficients, we have that $\overline{V \cap W} \subseteq \overline{V} \cap \overline{W}$. In general, this containment is strict. Define the translate of a toric subspace $U$ by a point $x$ on the torus to be the toric subspace $U + x = \{ u+x : u \in U \}$. Alternatively, one may lift the toric subspace to an affine subspace in Euclidean space, translate it and then map back to the torus. Then for two toric subspaces $V$ and $W$, their intersection has the form $$ V \cap W = \bigcup_{p=1}^{r} (U + x_{p}) , $$ where $U$ is a toric subspace, $r$ is a non-negative integer and $x_{1}, \ldots, x_{r}$ are points on the torus $T^{n}$. \begin{figure} \caption{A toric line arrangement and its intersection poset.} \label{figure_toric_two} \end{figure} A toric subspace of dimension $n-1$ is called a {\em toric hyperplane}. A {\em toric hyperplane arrangement} $\mathcal{H} = \{H_{1}, \ldots, H_{m}\}$ is a finite collection of toric hyperplanes. Define the {\em intersection poset} $\mathcal{P}$ of a toric arrangement to be the set of all connected components arising from all possible intersections of the toric hyperplanes, that is, all connected components of $\bigcap_{i \in S} H_{i}$ where $S \subseteq \{1, \ldots, m\}$, together with the empty set. Order the elements of the intersection poset $\mathcal{P}$ by reverse inclusion, that is, the torus $T^{n}$ is the minimal element of $\mathcal{P}$ corresponding to the empty intersection, and the empty set is the maximal element. A toric subspace $V$ is contained in the intersection poset $\mathcal{P}$ if there are toric hyperplanes $H_{i_{1}}, \ldots, H_{i_{k}}$ in the arrangement such that $V \subseteq H_{i_{1}} \cap \cdots \cap H_{i_{k}}$ and there is no toric subspace $W$ satisfying $V \subset W \subseteq H_{i_{1}} \cap \cdots \cap H_{i_{k}}$. In other words, $V$ has to be a maximal toric subspace in some intersection of toric hyperplanes from the arrangement.
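For the two-line arrangement of Figure~\ref{figure_toric_one}, the intersection poset consists of the torus, the two lines, the three intersection points, and the empty set. A short sketch (Python; the explicit order relation and the generic M\"obius recursion are ours) computes the M\"obius values $\mu(\hat{0},x)$, which are exactly the numbers that enter the toric characteristic polynomial defined below.

```python
# intersection poset of the toric lines y = 2x and x = 2y on T^2
# (reverse inclusion: T^2 is minimal, the three points sit above both lines);
# the maximal element, the empty set, is omitted since it carries no dimension
elements = ['T2', 'L1', 'L2', 'p1', 'p2', 'p3']
dim = {'T2': 2, 'L1': 1, 'L2': 1, 'p1': 0, 'p2': 0, 'p3': 0}
strictly_below = {
    'T2': ['L1', 'L2', 'p1', 'p2', 'p3'],
    'L1': ['p1', 'p2', 'p3'],
    'L2': ['p1', 'p2', 'p3'],
    'p1': [], 'p2': [], 'p3': [],
}

def leq(x, y):
    return x == y or y in strictly_below[x]

def mobius(x, y):
    # recursive definition of the Mobius function on the interval [x, y]
    if x == y:
        return 1
    return -sum(mobius(x, z) for z in elements
                if leq(x, z) and leq(z, y) and z != y)

# mu(T^2, L_i) = -1 and mu(T^2, p) = 1, so the characteristic
# polynomial below evaluates to t^2 - 2t + 3 for this arrangement
coeffs = {2: 0, 1: 0, 0: 0}
for x in elements:
    coeffs[dim[x]] += mobius('T2', x)
```

The resulting coefficients $\{2\colon 1,\; 1\colon -2,\; 0\colon 3\}$ match the polynomial $t^{2} - 2t + 3$ stated in the first example below.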
The notion of using the intersection poset can be found in work of Zaslavsky, where he considers topological dissections~\cite{Zaslavsky_paper}. In this setting there is not an intersection lattice, but rather an intersection poset. To every toric hyperplane arrangement $\mathcal{H} = \{H_{1}, \ldots, H_{m}\}$ there is an associated periodic hyperplane arrangement $\widetilde{\mathcal{H}}$ in the Euclidean space $\mathbb{R}^{n}$. Namely, the inverse image of the toric hyperplane~$H_{i}$ under the quotient map $\mathbb{R}^{n} \to \mathbb{R}^{n}/\mathbb{Z}^{n}$ is the union of parallel integer translates of a real hyperplane. Let~$\widetilde{\mathcal{H}}$ be the collection of all these integer translates. Observe that every face of the toric arrangement~$\mathcal{H}$ can be lifted to a parallel class of faces in the periodic real arrangement $\widetilde{\mathcal{H}}$. As in the case of real arrangements, a toric arrangement subdivides the torus into a number of regions. Let $T_{t}$ denote the poset of regions in the induced subdivision of the torus. For a toric hyperplane arrangement $\mathcal{H}$ define the {\em toric characteristic polynomial} to be $$ \chi(\mathcal{H}; t) = \sum_{\substack{x \in \mathcal{P} \\ x \neq \emptyset}} \mu(\hat{0},x) \cdot t^{\dim(x)} . $$ \begin{example} Consider the line arrangement consisting of the two lines $y = 2 \cdot x$ and $x = 2 \cdot y$ in the plane $\mathbb{R}^{2}$. In $\mathbb{R}^{2}$ they intersect in one point, namely the origin, whereas on the torus $T^{2}$ they intersect in three points, namely $(0,0)$, $(2/3,1/3)$, and~$(1/3,2/3)$. The characteristic polynomial is given by $\chi(\mathcal{H};t) = t^{2} - 2 \cdot t + 3$. However, this arrangement is not regular, since the induced subdivision of $T^{2}$ is not regular. The boundary of each region is a wedge of two circles. See Figure~\ref{figure_toric_one}.
\label{example_toric_one} \end{example} \begin{example} Consider the line arrangement consisting of the three lines $y = 3 \cdot x$, $x = 2 \cdot y$, and $y = 1/5$. It subdivides the torus into a regular cell complex. The subdivision and the associated intersection poset are shown in Figure~\ref{figure_toric_two}. The characteristic polynomial is given by $\chi(\mathcal{H};t) = t^{2} - 3 \cdot t + 8$. Furthermore, the $\mathbf{ab}$-index of the subdivision of the torus is given by $\Psi(T_{t}) = (\mathbf{a}-\mathbf{b})^{3} + 7 \cdot \mathbf{dc} + 8 \cdot \mathbf{cd}$, as the following calculation shows. $$ \begin{array}{c r r c r c c} S & f_{S} & h_{S} & u_{S} & (\mathbf{a}-\mathbf{b})^{3} & 7 \cdot \mathbf{dc} & 8 \cdot \mathbf{cd} \\ \hline \emptyset & 1 & 1 & \mathbf{aaa} & 1 \:\:\:\:\:\: & 0 & 0 \\ \{1\} & 7 & 6 & \mathbf{baa} & -1 \:\:\:\:\:\: & 7 & 0 \\ \{2\} & 15 & 14 & \mathbf{aba} & -1 \:\:\:\:\:\: & 7 & 8 \\ \{3\} & 8 & 7 & \mathbf{aab} & -1 \:\:\:\:\:\: & 0 & 8 \\ \{1,2\} & 30 & 9 & \mathbf{bba} & 1 \:\:\:\:\:\: & 0 & 8 \\ \{1,3\} & 30 & 16 & \mathbf{bab} & 1 \:\:\:\:\:\: & 7 & 8 \\ \{2,3\} & 30 & 8 & \mathbf{abb} & 1 \:\:\:\:\:\: & 7 & 0 \\ \{1,2,3\} & 60 & -1 & \mathbf{bbb} & -1 \:\:\:\:\:\: & 0 & 0 \\ \end{array} $$ Recall that $\mathbf{dc} = \mathbf{aba} + \mathbf{abb} + \mathbf{baa} + \mathbf{bab}$ and $\mathbf{cd} = \mathbf{aab} + \mathbf{aba} + \mathbf{bab} + \mathbf{bba}$. Here in the last three columns we indicate the contribution of a given term to each $\mathbf{ab}$-monomial. Observe that the sum of the last three columns gives the flag $h$-vector entries. \label{example_toric_two} \end{example} We now give a natural interpretation of the toric characteristic polynomial. Recall that the intersection of toric subspaces is the disjoint union of toric subspaces that are translates of each other.
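Before continuing, the table in the second example can be verified mechanically. The sketch below (Python; words are encoded as strings, with the extra letter `e` standing for $\mathbf{a}-\mathbf{b}$, an encoding of our own) expands $(\mathbf{a}-\mathbf{b})^{3} + 7 \cdot \mathbf{dc} + 8 \cdot \mathbf{cd}$ into $\mathbf{ab}$-words and compares the coefficients with the flag $h$-vector column.

```python
from itertools import product
from collections import Counter

# expansion of each letter into signed ab-words:
# c = a + b, d = ab + ba, and e stands for (a - b)
LETTERS = {'c': [(1, 'a'), (1, 'b')],
           'd': [(1, 'ab'), (1, 'ba')],
           'e': [(1, 'a'), (-1, 'b')]}

def expand(poly):
    """Expand a polynomial {word: coeff} into a Counter of ab-words."""
    out = Counter()
    for word, coeff in poly.items():
        for terms in product(*(LETTERS[ch] for ch in word)):
            sign, s = 1, ''
            for sgn, w in terms:
                sign *= sgn
                s += w
            out[s] += coeff * sign
    return out

# Psi(T_t) = (a-b)^3 + 7*dc + 8*cd
psi = expand({'eee': 1, 'dc': 7, 'cd': 8})
# flag h-vector entries h_S from the table, indexed by the ab-word u_S
table = {'aaa': 1, 'baa': 6, 'aba': 14, 'aab': 7,
         'bba': 9, 'bab': 16, 'abb': 8, 'bbb': -1}
assert dict(psi) == table
```

The assertion confirms that the three columns of the table sum, word by word, to the flag $h$-vector entries.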
Let~$G$ be the collection of finite intersections of toric subspaces of the $n$-dimensional torus $T^{n}$, that is, $G$ consists of sets of the form $V = W_{1} \cap \cdots \cap W_{q}$, where $W_{1}, \ldots, W_{q}$ are toric subspaces. Such a set~$V$ can be written as a union $V = \bigcup_{p=1}^{r} (U + x_{p})$, where $U$ is a toric subspace, $r$ a non-negative integer, and $x_{1}, \ldots, x_{r}$ are points on the torus. Observe that the empty set $\emptyset$ and the torus $T^{n}$ belong to $G$. Furthermore, $G$ is closed under finite intersections. Let $L$ be the distributive lattice consisting of all subsets of the torus $T^{n}$ that are obtained from the collection~$G$ by finite intersections, finite unions and complements. The set~$G$ is the generating set for the lattice~$L$. A {\em valuation} $v$ on the lattice $L$ is a function from $L$ to an abelian group satisfying $v(\emptyset) = 0$ and $v(A) + v(B) = v(A \cap B) + v(A \cup B)$ for all sets~$A, B \in L$. The next theorem is analogous to Theorem~2.1 in~\cite{Ehrenborg_Readdy_valuation_1}. The proof here is more involved due to the fact that the collection of toric subspaces is not closed under intersections. \begin{theorem} There is a valuation $v$ on the distributive lattice~$L$ to integer polynomials in the variable $t$ such that for a $k$-dimensional toric subspace $V$ its valuation is $v(V) = t^{k}$. \end{theorem} \begin{proof} Define the function $v$ on the generating set $G$ by \[ v\left(\bigcup_{p=1}^{r} (U + x_{p})\right) = r \cdot t^{k} , \] where we assume that $U$ is a $k$-dimensional toric subspace and the $r$ translates $U + x_{1}, \ldots, U + x_{r}$ are pairwise disjoint. Observe that the function $v$ is additive with respect to disjoint unions, that is, for elements $V_{1}, \ldots, V_{m}$ in $G$ which are pairwise disjoint and whose union $V_{1} \cup \cdots \cup V_{m}$ belongs to $G$, we have $v(V_{1} \cup \cdots \cup V_{m}) = v(V_{1}) + \cdots + v(V_{m})$.
In this case, each $V_{i}$ is a disjoint union of translates of the same toric subspace $U$ and both sides of the identity $v(V_{1}) + \cdots + v(V_{m}) = v(V_{1} \cup \cdots \cup V_{m})$ count the number of translates of $U$ times $t^{\dim(U)}$. Groemer's integral theorem~\cite{Groemer} (see also~\cite[Theorem~2.2.1]{Klain_Rota}) states that a function $v$ defined on a generating set $G$ extends to a valuation on the distributive lattice generated by $G$ if for all $V_{1}, \ldots, V_{m}$ in $G$ such that $V_{1} \cup \cdots \cup V_{m} \in G$, the inclusion--exclusion formula holds: \begin{equation} v(V_{1} \cup \cdots \cup V_{m}) = \sum_{i} v(V_{i}) - \sum_{i<j} v(V_{i} \cap V_{j}) + \cdots . \label{equation_inclusion_exclusion} \end{equation} To verify this relation for our generating set $G$, first consider the case when the union $V_{1} \cup \cdots \cup V_{m}$ is a toric subspace. This implies that $V_{1} \cup \cdots \cup V_{m} = V_{i}$ for some index $i$, and it then follows that the inclusion--exclusion formula~(\ref{equation_inclusion_exclusion}) holds trivially. Before considering the general case, we introduce some notation. For $S$ a non-empty subset of the index set $\{1, \ldots, m\}$, let $V_{S} = \bigcap_{i \in S} V_{i}$. Equation~(\ref{equation_inclusion_exclusion}) can then be written as $$ v(V_{1} \cup \cdots \cup V_{m}) = \sum_{S} (-1)^{|S|-1} \cdot v(V_{S}) , $$ where the sum ranges over non-empty subsets $S$ of $\{1, \ldots, m\}$. Now assume that $V_{1} \cup \cdots \cup V_{m}$ is the disjoint union $(U + x_{1}) \cup \cdots \cup (U + x_{r})$. Let $V_{S,p}$ denote the intersection $V_{S} \cap (U + x_{p})$. Observe that $U + x_{p} = \bigcup_{i=1}^{m} V_{\{i\},p}$ and since $U + x_{p}$ is itself a toric subspace, we have already proved that the inclusion--exclusion formula~(\ref{equation_inclusion_exclusion}) holds for this union.
Hence we have \begin{eqnarray*} v(V_{1} \cup \cdots \cup V_{m}) & = & \sum_{p=1}^{r} v(U + x_{p}) \\ & = & \sum_{p=1}^{r} \sum_{S} (-1)^{|S|-1} \cdot v(V_{S,p}) \\ & = & \sum_{S} (-1)^{|S|-1} \cdot \sum_{p=1}^{r} v(V_{S,p}) \\ & = & \sum_{S} (-1)^{|S|-1} \cdot v(V_{S}) , \end{eqnarray*} where $S$ ranges over all non-empty subsets of $\{1, \ldots, m\}$. The last step follows since the terms in the union $V_{S} = \bigcup_{p=1}^{r} V_{S,p}$ are pairwise disjoint. \end{proof} By M\"obius inversion we directly have the following theorem. The proof is standard; see the references~\cite{Athanasiadis,Chen,Ehrenborg_Readdy_valuation_1,Jozefiak_Sagan}. \begin{theorem} The characteristic polynomial of a toric arrangement is given by $$ \chi(\mathcal{H}; t) = v\left( T^{n} - \bigcup_{i=1}^{m} H_{i} \right) . $$ \label{theorem_characteristic} \end{theorem} When each region is an open ball we can now determine the number of regions in a toric arrangement. The proof is analogous to the proofs in~\cite{Ehrenborg_Readdy_valuation_1,Ehrenborg_Readdy_valuation_2}. Recall that the Euler characteristic can be viewed as a valuation. Here we use the notation $\varepsilon$ to indicate that we are viewing the Euler characteristic as a valuation. \begin{theorem} Let $\mathcal{H}$ be a toric hyperplane arrangement on the $n$-dimensional torus $T^{n}$ that subdivides the torus into regions that are open $n$-dimensional balls. Then the complement of the arrangement has $(-1)^{n} \cdot \chi(\mathcal{H}; 0)$ regions. \label{theorem_toric_Zaslavsky_version_1} \end{theorem} \begin{proof} Observe that the Euler valuation~$\varepsilon$ of a $k$-dimensional torus is given by the Kronecker delta~$\delta_{k,0}$. Hence for a toric subspace $V$ of the $n$-dimensional torus, the Euler valuation of $V$ is obtained by setting $t = 0$ in the valuation $v$, that is, $\varepsilon(V) = v(V)|_{t = 0}$.
Since the two valuations $\varepsilon$ and $v|_{t=0}$ are additive with respect to disjoint unions, they agree on every member of the generating set $G$. Hence they also agree on every member of the distributive lattice $L$. In particular, \begin{equation} \varepsilon\left( T^{n} - \bigcup_{i=1}^{m} H_{i} \right) = \left. v\left( T^{n} - \bigcup_{i=1}^{m} H_{i} \right)\right|_{t = 0} . \label{equation_Euler_and_v} \end{equation} Since the Euler valuation of an open ball is $(-1)^{n}$ and $T^{n} - \bigcup_{i=1}^{m} H_{i}$ is a disjoint union of open balls, the left-hand side of~(\ref{equation_Euler_and_v}) is $(-1)^{n}$ times the number of regions. The right-hand side is $\chi(\mathcal{H}; 0)$ by Theorem~\ref{theorem_characteristic}. \end{proof} \addtocounter{theorem}{-5} \begin{continuation} \textrm{ Setting $t=0$ in the characteristic polynomial in Example~\ref{example_toric_one} we obtain $3$, which is indeed the number of regions of this arrangement. } \end{continuation} \addtocounter{theorem}{4} We call a toric hyperplane arrangement $\mathcal{H} = \{H_{1}, \ldots, H_{m}\}$ {\em rational} if each hyperplane $H_{i}$ is of the form $\vec{a}_{i} \cdot \vec{x} = b_{i}$, where the vector $\vec{a}_{i}$ has integer entries and $b_{i}$ is an integer for $1 \leq i \leq m$. This is equivalent to assuming every constant $b_{i}$ is rational, since every vector $\vec{a}_{i}$ was already assumed to be rational. In what follows it will be convenient to assume every coefficient is integral in a given rational arrangement. Define $N(\mathcal{H})$ to be the least common multiple of all the $n \times n$ minors of the~$n \times m$ matrix $(\vec{a}_{1}, \ldots, \vec{a}_{m})$. We can now give a different interpretation of the toric characteristic polynomial by counting lattice points.
\begin{theorem} For a rational hyperplane arrangement $\mathcal{H}$ there exists a constant~$k$ such that for every $q > k$ where $q$ is a multiple of $N(\mathcal{H})$, the toric characteristic polynomial evaluated at $q$ is given by the number of lattice points in $\left( \frac{1}{q} \mathbb{Z} \right)^{n}/\mathbb{Z}^{n}$ that do not lie on any of the toric hyperplanes $H_{i}$, that is, \[ \chi(\mathcal{H}; q) = \left| \left( \frac{1}{q} \mathbb{Z} \right)^{n}/\mathbb{Z}^{n} - \bigcup_{i=1}^{m} H_{i} \right| . \] \label{theorem_lattice_points} \end{theorem} The condition that $q$ is a multiple of $N(\mathcal{H})$ implies that every subspace $x$ in the intersection poset~$\mathcal{P}$ intersects the toric lattice $\left(\frac{1}{q} \mathbb{Z} \right)^{n}/\mathbb{Z}^{n}$ in exactly $q^{\dim(x)}$ points. Theorem~\ref{theorem_lattice_points} now follows by M\"obius inversion. This theorem is the toric analogue of the finite field method of Athanasiadis; see~\cite[Theorem~2.1]{Athanasiadis_II} in particular. In the case when $N(\mathcal{H}) = 1$, the toric arrangement $\mathcal{H}$ is called {\em unimodular}. Novik, Postnikov, and Sturmfels~\cite{Novik_Postnikov_Sturmfels} state Theorem~\ref{theorem_toric_Zaslavsky_version_1} in the special case of unimodular arrangements. Their first proof is based upon Zaslavsky's result on the number of bounded regions in an affine arrangement. The second proof, due to Reiner, is equivalent to our proof for arbitrary toric arrangements. See also the paper~\cite{Zaslavsky_paper} by Zaslavsky, where more general arrangements are considered. \subsection{Graphical arrangements} We digress in this subsection to discuss an application to graphical arrangements, which are hyperplane arrangements arising from graphs. For a graph $G$ on the vertex set $\{1, \ldots, n\}$ define the {\em graphical arrangement} $\mathcal{H}_{G}$ to be the collection of hyperplanes of the form $x_{i} = x_{j}$ for each edge $ij$ in the graph $G$.
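The lattice-point interpretation above can be checked directly on the arrangement of the first example, $2x - y \equiv 0$ and $-x + 2y \equiv 0$ on $T^{2}$, where $N(\mathcal{H}) = \det\binom{2\ -1}{-1\ \ 2} = 3$ and $\chi(\mathcal{H};t) = t^{2} - 2t + 3$. A minimal sketch (Python; the brute-force count is our illustration of the theorem, not a proof):

```python
# the toric lines of the two-line arrangement on T^2, as integer normals
lines = [(2, -1), (-1, 2)]

def chi_by_counting(q):
    """Count the points of ((1/q)Z)^2 / Z^2 avoiding every toric line."""
    count = 0
    for i in range(q):
        for j in range(q):
            # (i/q, j/q) lies on a1*x + a2*y = 0 (mod 1) iff q | a1*i + a2*j
            if all((a1 * i + a2 * j) % q != 0 for a1, a2 in lines):
                count += 1
    return count

# N(H) = 3 here, so take q a multiple of 3 and compare with q^2 - 2q + 3
for q in (3, 6, 9):
    assert chi_by_counting(q) == q * q - 2 * q + 3
```

For instance $q = 3$ gives $9$ lattice points, of which $3$ lie on each line and $3$ lie on both, leaving $9 - 3 - 3 + 3 = 6 = \chi(\mathcal{H};3)$ points off the arrangement.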
\begin{corollary} For a connected graph $G$ on $n$ vertices the regions in the complement of the graphical arrangement $\mathcal{H}_{G}$ on the torus $T^{n}$ are each homotopy equivalent to the $1$-dimensional torus $T^{1}$. Furthermore, the number of regions is given by $(-1)^{n-1}$ times the linear coefficient of the chromatic polynomial of $G$. \end{corollary} \begin{proof} The chromatic polynomial of the graph $G$ is equal to the characteristic polynomial of the graphical arrangement $\mathcal{H}_{G}$. Furthermore, the intersection lattice of the real arrangement $\mathcal{H}_{G}$ is the same as the intersection poset of the toric arrangement~$\mathcal{H}_{G}$. Translating the graphical arrangement in the direction $(1, \ldots, 1)$ leaves the arrangement on the torus invariant. Since $G$ is connected this is the only direction that leaves the arrangement invariant. Hence each region is homotopy equivalent to $T^{1}$. By adding the hyperplane $x_{1} = 0$ to the arrangement we obtain a new arrangement $\mathcal{H}^{\prime}$ with the same number of regions, but with each region homeomorphic to a ball. Since the intersection lattice of $\mathcal{H}^{\prime}$ is just the Cartesian product of the two-element poset with the intersection lattice of $\mathcal{H}_{G}$, we have $$ \chi(\mathcal{H}^{\prime}; t) = (t-1) \cdot \chi(\mathcal{H}_{G}; t)/t . $$ The number of regions is obtained by setting $t=0$ in this equality. \end{proof} A similar statement holds for graphs that are disconnected. The result follows from the fact that the complement of the graphical arrangement is the product of the complements of each connected component. \begin{corollary} For a graph $G$ on $n$ vertices consisting of $k$ components, the regions in the complement of the graphical arrangement $\mathcal{H}_{G}$ on the torus $T^{n}$ are each homotopy equivalent to the $k$-dimensional torus $T^{k}$.
The number of regions is given by $(-1)^{n-k}$ times the coefficient of $t^{k}$ in the chromatic polynomial of $G$. \end{corollary} Stanley~\cite{Stanley_acyclic} proved the celebrated result that the chromatic polynomial of a graph evaluated at $t=-1$ is $(-1)^{n}$ times the number of acyclic orientations of the graph. A similar interpretation for the linear coefficient of the chromatic polynomial is due to Greene and Zaslavsky~\cite{Greene_Zaslavsky}: \begin{theorem}[Greene--Zaslavsky] Let $G$ be a connected graph and $v$ a given vertex of the graph. The linear coefficient of the chromatic polynomial is $(-1)^{n-1}$ times the number of acyclic orientations of the graph such that the only sink is the vertex $v$. \end{theorem} \begin{proof} It is enough to give a bijection between regions in the complement of the graphical arrangement on the torus $T^{n}$ and acyclic orientations with the vertex $v$ as the unique sink. For a region $R$ of the arrangement, intersect it with the hyperplane~$x_{v} = 0$ to obtain the face $S$. Let $\mathcal{H}^{\prime}$ be the arrangement~$\mathcal{H}_{G}$ together with the hyperplane $x_{v} = 0$. Lift $S$ to a face $\widetilde{S}$ in the periodic arrangement~$\widetilde{\mathcal{H}^{\prime}}$ in~$\mathbb{R}^{n}$. Observe that $\widetilde{S}$ is the interior of a polytope. When minimizing the linear functional $L(x) = x_{1} + \cdots + x_{n}$ on the closure of the face $\widetilde{S}$, the optimum is a lattice point $k = (k_{1}, \ldots, k_{n})$. Pick a point $x = (x_{1}, \ldots, x_{n})$ in $\widetilde{S}$ close to the optimum, that is, such that each coordinate $x_{i}$ lies in the interval $[k_{i},k_{i}+\epsilon)$ for some small $\epsilon > 0$. Let $y = (y_{1}, \ldots, y_{n})$ be the image of the point $x$ on the torus $T^{n}$, that is, $y_{i} = x_{i} \bmod 1$. Note that each entry $y_{i}$ lies in the half-open interval $[0,1)$ and that $y_{v} = 0$.
Construct an orientation of the graph $G$ by letting the edge $ij$ be oriented $i \rightarrow j$ if~$y_{i} > y_{j}$. Note that this orientation is acyclic and has the vertex $v$ as a sink. To show that the vertex $v$ is the unique sink, assume that the vertex $i$ is also a sink, where $i \neq v$. In other words, for all neighbors $j$ of the vertex $i$ we have that~$y_{i} < y_{j}$. We can continuously move the point $x$ in $\widetilde{S}$ by decreasing the value of the $i$th coordinate $x_{i}$. Observe that there is no hyperplane in the periodic arrangement blocking the coordinate $x_{i}$ from passing through the integer value $k_{i}$ and continuing down to $k_{i}-1+\epsilon$. This contradicts the fact that we chose the original point~$x$ close to the optimum of the linear functional $L$. Hence the vertex $i$ cannot be a sink. It is straightforward to verify that this map from regions to the set of acyclic orientations with the unique sink at $v$ is a bijection. \end{proof} The technique of assigning a point to every region of a toric arrangement using a linear functional was used by Novik, Postnikov and Sturmfels in their paper~\cite{Novik_Postnikov_Sturmfels}; see their first proof of the number of regions of a toric arrangement. \subsection{The toric Bayer--Sturmfels result} Define the {\em toric Zaslavsky invariant} of a graded poset $P$ having $\hat{0}$ and $\hat{1}$ by $$ Z_{t}(P) = \sum_{x \ \mbox{\tiny coatom of}\ P} (-1)^{\rho(\hat{0},x)} \cdot \mu(\hat{0},x) = (-1)^{\rho(P) - 1} \cdot \sum_{x \ \mbox{\tiny coatom of}\ P} \mu(\hat{0},x) . $$ We reformulate Theorem~\ref{theorem_toric_Zaslavsky_version_1} as follows. \begin{theorem} For a toric hyperplane arrangement $\mathcal{H}$ on the torus $T^n$ that subdivides the torus into open $n$-dimensional balls, the number of regions is given by $Z_{t}(\mathcal{P})$, where $\mathcal{P}$ is the intersection poset of the arrangement~$\mathcal{H}$.
\label{theorem_toric_Zaslavsky_version_2} \end{theorem} As a corollary of Theorem~\ref{theorem_toric_Zaslavsky_version_2}, we can describe the $f$-vector of the subdivision $T_{t}$ of the torus. For similar results for more general manifolds see~\cite[Section~3]{Zaslavsky_paper}. \begin{corollary} The number of $i$-dimensional regions in the subdivision $T_{t}$ of the $n$-dimensional torus is given by the sum $$ f_{i+1}(T_{t}) = (-1)^{i} \cdot \sum_{\substack{x \leq y \\ \dim(x) = i \\ \dim(y) = 0}} \mu(x,y) , $$ where $\mu(x,y)$ denotes the M\"obius function of the interval $[x,y]$ in the intersection poset $\mathcal{P}$. \label{corollary_f_vector_I} \end{corollary} \begin{proof} Each $i$-dimensional region is contained in a unique $i$-dimensional toric subspace $x$. By restricting the arrangement to the subspace $x$ and applying Theorem~\ref{theorem_toric_Zaslavsky_version_1}, we have that the number of $i$-dimensional regions in $x$ is given by \[ (-1)^{i} \cdot \sum_{x \leq y, \, \dim(y) = 0} \mu(x,y) . \] Summing over all $x$, the result follows. \end{proof} For the remainder of this section we will assume that the induced subdivision of the torus is a regular cell complex. Let $T_{t}$ be the face poset of the subdivision of the torus induced by the toric arrangement. Define the map $z_{t} : T_{t}^{*} \longrightarrow \mathcal{P} \cup \{\hat{0}\}$ by sending each face to the smallest toric subspace in the intersection poset that contains the face and sending the minimal element in $T_{t}^{*}$ to $\hat{0}$. Observe that the map $z_{t}$ is order- and rank-preserving, as well as being surjective. As in the central hyperplane arrangement case, we view the map $z_{t}$ as a map from the set of chains of $T_{t}^{*}$ to the set of chains of $\mathcal{P} \cup \{\hat{0}\}$. Let $x$ be an element in the intersection poset $\mathcal{P}$ of a toric hyperplane arrangement~$\mathcal{H}$.
Then the interval $[x,\hat{1}]$ is the intersection poset of a toric arrangement in the toric subspace $x$. The atoms of the interval $[x,\hat{1}]$ are the toric hyperplanes in this smaller toric arrangement. More interesting is the geometric interpretation of the interval $[\hat{0},x]$. It is the intersection lattice of a central hyperplane arrangement in $\mathbb{R}^{n - \dim(x)}$. Without loss of generality we may assume that $x$ contains the zero point $(0, \ldots, 0)$, that is, when we lift the toric subspace $x$ to an affine subspace $V$ in~$\mathbb{R}^{n}$ we may assume that $V$ is a subspace of $\mathbb{R}^{n}$. Any toric subspace $y$ in the interval $[\hat{0},x]$, that is, a toric subspace containing $x$, can be lifted to a subspace $W$ containing the subspace $V$. In particular, the toric hyperplanes in $[\hat{0},x]$ lift to hyperplanes in $\mathbb{R}^{n}$ containing $V$. This lifting is a poset isomorphism and we obtain an essential central arrangement of dimension $n - \dim(x)$ by quotienting out by the subspace~$V$. We conclude by noticing that an interval $[x,y]$ in $\mathcal{P}$, where $y < \hat{1}$, is the intersection lattice of a central hyperplane arrangement. The toric analogue of Theorem~\ref{theorem_Bayer_Sturmfels} is as follows. \begin{theorem} Let $\mathcal{P}$ be the intersection poset of a toric hyperplane arrangement whose induced subdivision is regular. Let $c = \{\hat{0} = x_{0} < x_{1} < \cdots < x_{k} = \hat{1}\}$ be a chain in $\mathcal{P} \cup \{\hat{0}\}$ with $k \geq 2$. Then the cardinality of the inverse image of the chain~$c$ is given by the product \[ |z_{t}^{-1}(c)| = \prod_{i=2}^{k-1} Z([x_{i-1},x_{i}]) \cdot Z_{t}([x_{k-1},x_{k}]) . \] \label{theorem_Bayer_Sturmfels_toric} \end{theorem} \begin{proof} We need to count the number of ways we can select a chain $d = \{\hat{0} = y_0 < y_1 < \cdots < y_k = \hat{1}\}$ in $T_{t}^{*}$ such that $z_{t}(y_{i}) = x_{i}$.
The number of ways to select the element $y_{k-1}$ in $T_{t}^{*}$ is the number of regions in the arrangement restricted to the toric subspace $x_{k-1}$. By Theorem~\ref{theorem_toric_Zaslavsky_version_2} this can be done in $Z_{t}([x_{k-1},x_{k}])$ ways. Observe now that all other elements in the chain $d$ contain the face~$y_{k-1}$. To count the number of ways to select the element $y_{k-2}$, we follow the original argument of Bayer--Sturmfels. We would like to pick a face $y_{k-2}$ that contains the face $y_{k-1}$ and is a region in the toric subspace $x_{k-2}$. This is equal to the number of regions in the central arrangement having the intersection lattice $[x_{k-2},x_{k-1}]$, which is given by $Z([x_{k-2},x_{k-1}])$. By iterating this procedure until we reach the element $y_{1}$, the result follows. \end{proof} \begin{corollary} The flag $f$-vector entry $f_{S}(T_{t})$ of the face poset $T_{t}$ of a toric arrangement whose induced subdivision is a regular subdivision of $T^{n}$ is divisible by~$2^{|S|-1}$ for $S \subseteq \{1, \ldots, n+1\}$ with $S \neq \emptyset$. \label{corollary_evenness} \end{corollary} \begin{proof} The result follows from the fact that the Zaslavsky invariant $Z$ is an even integer and that a given flag $f$-vector entry is the appropriate sum of products appearing in Theorem~\ref{theorem_Bayer_Sturmfels_toric}.
\end{proof} \subsection{The connection between posets and coalgebras} For an $\mathbf{ab}$-monomial $v$ define the linear map $\lambda_{t}$ by letting $$ \lambda_{t}(v) = \begin{cases} (\mathbf{a}-\mathbf{b})^{m} & \text{if $v=\mathbf{b}^{m}$ for some $m \geq 0$,} \\ (\mathbf{a}-\mathbf{b})^{m+1} & \text{if $v=\mathbf{b}^{m}\mathbf{a}$ for some $m \geq 0$,} \\ 0 & \text{otherwise.} \end{cases} $$ Define the linear operator $H^{\prime}$ on $\mathbb{Z}\langle\mathbf{a},\mathbf{b}\rangle$ to be the one which removes the last letter in each $\mathbf{ab}$-monomial, that is, $H^{\prime}(w \cdot \mathbf{a}) = H^{\prime}(w \cdot \mathbf{b}) = w$ and $H^{\prime}(1) = 0$. We use the prime in the notation to distinguish it from the~$H$ map defined in~\cite[Section~8]{Billera_Ehrenborg_Readdy_om}, which instead removes the first letter in each $\mathbf{ab}$-monomial. From~\cite{Billera_Ehrenborg_Readdy_om} we have the following lemma. \begin{lemma} For a graded poset $P$ with $\hat{1}$ of rank greater than or equal to $2$, the following identity holds: $$ H^{\prime}(\Psi(P)) = \sum_{x \ \mbox{\tiny coatom of}\ P} \Psi([\hat{0},x]) . $$ \end{lemma} The next lemma gives the relation between the toric Zaslavsky invariant $Z_{t}$ and the map~$\lambda_{t}$. \begin{lemma} For a graded poset $P$ with $\hat{1}$ of rank greater than or equal to $1$, the following identity holds: $$ \lambda_{t}(\Psi(P)) = Z_{t}(P) \cdot (\mathbf{a}-\mathbf{b})^{\rho(P)-1}. $$ \end{lemma} \begin{proof} When $P$ has rank $1$, both sides are equal to $1$. For an $\mathbf{ab}$-monomial $v$ different from $1$, we have that $\lambda_{t}(v) = \beta(H^{\prime}(v)) \cdot (\mathbf{a}-\mathbf{b})$.
Hence \begin{eqnarray*} \lambda_{t}(\Psi(P)) & = & \beta(H^{\prime}(\Psi(P))) \cdot (\mathbf{a}-\mathbf{b}) \\ & = & \sum_{x \ \mbox{\tiny coatom of}\ P} \beta(\Psi([\hat{0},x])) \cdot (\mathbf{a}-\mathbf{b}) \\ & = & (-1)^{\rho(P)} \cdot \sum_{x \ \mbox{\tiny coatom of}\ P} \mu(\hat{0},x) \cdot (\mathbf{a}-\mathbf{b})^{\rho(P)-1} , \end{eqnarray*} which concludes the proof. \end{proof} Define a sequence of functions $\varphi_{t,k}\colon\mathbb{Z}\langle\mathbf{a},\mathbf{b}\rangle\to\mathbb{Z}\langle\mathbf{a},\mathbf{b}\rangle$ by $\varphi_{t,1}=\kappa$, and for $k \geq 2$, $$ \varphi_{t,k}(v) = \sum_{v} \kappa(v_{(1)}) \cdot \mathbf{b} \cdot \eta(v_{(2)}) \cdot \mathbf{b} \cdot \eta(v_{(3)}) \cdot \mathbf{b} \cdots \mathbf{b} \cdot \eta(v_{(k-1)}) \cdot \mathbf{b} \cdot \lambda_{t}(v_{(k)}) . $$ Finally, let $\varphi_{t}(v)$ be the sum $\varphi_{t}(v) = \sum_{k\geq 1}\varphi_{t,k}(v)$. \begin{theorem} The $\mathbf{ab}$-index of the face poset $T_{t}$ of a toric arrangement is given by $$ \Psi(T_{t})^{*} = \varphi_{t}(\Psi(\mathcal{P} \cup \{\hat{0}\})). $$ \label{theorem_poset_varphi_toric} \end{theorem} \begin{proof} The $\mathbf{ab}$-index of the poset $T_{t}$ is given by the sum $\Psi(T_{t}) = \sum_{c} |z_{t}^{-1}(c)| \cdot \wt(c)$. Fix $k \geq 2$ and sum over all chains $c = \{\hat{0} = x_{0} < x_{1} < \cdots < x_{k} = \hat{1}\}$ of length $k$.
We then have
\begin{eqnarray*}
& & \sum_{c} |z_{t}^{-1}(c)| \cdot \wt(c) \\
& = & \sum_{c} \prod_{i=2}^{k-1} Z([x_{i-1}, x_{i}]) \cdot Z_{t}([x_{k-1}, x_{k}]) \cdot (\mathbf{a}-\mathbf{b})^{\rho(x_{0},x_{1})-1} \cdot \mathbf{b} \cdots \mathbf{b} \cdot (\mathbf{a}-\mathbf{b})^{\rho(x_{k-1},x_{k})-1} \\
& = & \sum_{c} \kappa(\Psi([x_{0},x_{1}])) \cdot \prod_{i=2}^{k-1} \left(\mathbf{b} \cdot \eta(\Psi([x_{i-1}, x_{i}]))\right) \cdot \mathbf{b} \cdot \lambda_{t}(\Psi([x_{k-1},x_{k}])) \\
& = & \sum_{w} \kappa(w_{(1)}) \cdot \prod_{i=2}^{k-1} \left(\mathbf{b} \cdot \eta(w_{(i)})\right) \cdot \mathbf{b} \cdot \lambda_{t}(w_{(k)}) \\
& = & \varphi_{t,k}(w) ,
\end{eqnarray*}
where we let $w$ denote the $\av\bv$-index of the augmented intersection poset $\mathcal{P} \cup \{\hz\}$. For $k=1$ we have that $(\mathbf{a}-\mathbf{b})^{\rho(T_{t})-1} = \varphi_{t,1}(\Psi(\mathcal{P} \cup \{\hz\}))$. Summing over all $k \geq 1$, we obtain the result.
\end{proof}

\subsection{Evaluating the function $\varphi_{t}$}

\begin{proposition}
For an $\av\bv$-monomial $v$, the following identity holds:
$$ \varphi_{t}(v) = \kappa(v) + \sum_{v} \varphi(v_{(1)}) \cdot \mathbf{b} \cdot \lambda_{t}(v_{(2)}) . $$
\label{proposition_varphi_t}
\end{proposition}
\begin{proof}
Using the coassociative identity $\Delta^{k-1} = (\Delta^{k-2} \otimes \id) \circ \Delta$, for $k \geq 2$ we have that
\begin{eqnarray*}
\varphi_{t,k}(v) & = & \sum_{v} \kappa(v_{(1)}) \cdot \mathbf{b} \cdot \eta(v_{(2)}) \cdot \mathbf{b} \cdots \mathbf{b} \cdot \eta(v_{(k-1)}) \cdot \mathbf{b} \cdot \lambda_{t}(v_{(k)}) \\
& = & \sum_{v} \sum_{v_{(1)}} \kappa(v_{(1,1)}) \cdot \mathbf{b} \cdot \eta(v_{(1,2)}) \cdot \mathbf{b} \cdots \mathbf{b} \cdot \eta(v_{(1,k-1)}) \cdot \mathbf{b} \cdot \lambda_{t}(v_{(2)}) \\
& = & \sum_{v} \varphi_{k-1}(v_{(1)}) \cdot \mathbf{b} \cdot \lambda_{t}(v_{(2)}) .
\end{eqnarray*}
Summing over all $k \geq 2$ and adding $\varphi_{t,1}(v) = \kappa(v)$, the result follows.
\end{proof}
\begin{lemma}
Let $v$ be an $\av\bv$-monomial that begins with $\mathbf{a}$ and let $x$ be either $\mathbf{a}$ or $\mathbf{b}$. Then
$$ \varphi_{t}(v \cdot \mathbf{a} \cdot x) = \kappa(v \cdot \mathbf{a} \cdot x) + {1}/{2} \cdot \omega(v \cdot \mathbf{a}\mathbf{b}) . $$
\label{lemma_toric_varphi_I}
\end{lemma}
\begin{proof}
Using Proposition~\ref{proposition_varphi_t} we have
\begin{eqnarray*}
\varphi_{t}(v \cdot \mathbf{a} \cdot x) & = & \kappa(v \cdot \mathbf{a} \cdot x) + \varphi(v \cdot \mathbf{a}) \cdot \mathbf{b} \cdot \lambda_{t}(1) + \varphi(v) \cdot \mathbf{b} \cdot \lambda_{t}(x) \\
& & + \sum_{v} \varphi(v_{(1)}) \cdot \mathbf{b} \cdot \lambda_{t}(v_{(2)} \cdot \mathbf{b} \cdot x) \\
& = & \kappa(v \cdot \mathbf{a} \cdot x) + \varphi(v) \cdot \mathbf{c} \cdot \mathbf{b} + \varphi(v) \cdot \mathbf{b} \cdot (\mathbf{a}-\mathbf{b}) \\
& = & \kappa(v \cdot \mathbf{a} \cdot x) + \omega(v) \cdot \mathbf{d} \\
& = & \kappa(v \cdot \mathbf{a} \cdot x) + 1/2 \cdot \omega(v \cdot \mathbf{a}\mathbf{b}) ,
\end{eqnarray*}
since $\lambda_{t}(v_{(2)} \cdot \mathbf{b} \cdot x) = 0$.
\end{proof}
\begin{lemma}
Let $v$ be an $\av\bv$-monomial that begins with $\mathbf{a}$, let $k$ be a positive integer, and let $x$ be either $\mathbf{a}$ or $\mathbf{b}$. Then
\[ \varphi_{t}(v \cdot \mathbf{a} \mathbf{b}^{k} \cdot x) = \kappa(v \cdot \mathbf{a} \mathbf{b}^{k} \cdot x) + {1}/{2} \cdot \omega(v \cdot \mathbf{a}\mathbf{b}^{k+1}) .
\]
\label{lemma_toric_varphi_II}
\end{lemma}
\begin{proof_special}
Using Proposition~\ref{proposition_varphi_t} we have
\begin{eqnarray}
(\varphi_{t} - \kappa)(v \cdot \mathbf{a} \mathbf{b}^{k}\cdot x) & = & \varphi(v \cdot \mathbf{a} \mathbf{b}^{k}) \cdot \mathbf{b} \cdot \lambda_{t}(1) + \varphi(v \cdot \mathbf{a}) \cdot \mathbf{b} \cdot \lambda_{t}(\mathbf{b}^{k-1} \cdot x) \nonumber \\
& & {}+ \varphi(v) \cdot \mathbf{b} \cdot \lambda_{t}(\mathbf{b}^{k} \cdot x) \nonumber \\
& & {}+ \sum_{i+j=k-2} \varphi(v \cdot \mathbf{a} \mathbf{b}^{i+1}) \cdot \mathbf{b} \cdot \lambda_{t}(\mathbf{b}^{j} \cdot x) \nonumber \\
& = & \varphi(v) \cdot \left( \vphantom{\sum_{i+j=k-2}} 2 \mathbf{d} \mathbf{c}^{k-1} \cdot \mathbf{b} + \mathbf{c} \cdot \mathbf{b} \cdot (\mathbf{a}-\mathbf{b})^{k} + \mathbf{b} \cdot (\mathbf{a}-\mathbf{b})^{k+1} \right. \nonumber \\
& & \left. + \sum_{i+j=k-2} 2 \mathbf{d} \mathbf{c}^{i} \cdot \mathbf{b} \cdot (\mathbf{a}-\mathbf{b})^{j+1} \right) . \label{equation_messy}
\end{eqnarray}
In order to simplify this expression, consider the butterfly poset of rank $k$. This is the poset consisting of two rank $i$ elements, for $i = 1, \ldots, k-1$, adjoined with a minimal and maximal element. Each of the rank $i$ elements covers the rank $i-1$ element(s) for $i = 1, \ldots, k-1$. The butterfly poset is the unique poset having the $\cv\dv$-index $\mathbf{c}^{k-1}$. It is also Eulerian. Applying~(\ref{equation_Stanley_recursion}) to the butterfly poset, we have
$$ \mathbf{c}^{k-1} = (\mathbf{a}-\mathbf{b})^{k-1} + 2 \cdot \sum_{i+j=k-2} \mathbf{c}^{i} \cdot \mathbf{b} \cdot (\mathbf{a}-\mathbf{b})^{j} .
$$
Using this relation to simplify equation~(\ref{equation_messy}), we obtain
\begin{eqnarray*}
\hspace*{35 mm} \varphi_{t}(v \cdot \mathbf{a} \mathbf{b}^{k}\cdot x) - \kappa(v \cdot \mathbf{a} \mathbf{b}^{k} \cdot x) & = & \varphi(v) \cdot \mathbf{d} \cdot \mathbf{c}^{k} \\
& = & 1/2 \cdot \omega(v \cdot \mathbf{a} \mathbf{b}^{k+1}) . \hspace*{35 mm}
\end{eqnarray*}
This completes the proof. \qed
\end{proof_special}
By combining Lemmas~\ref{lemma_toric_varphi_I} and~\ref{lemma_toric_varphi_II}, we have the following proposition.
\begin{proposition}
For an $\av\bv$-monomial $v$ that begins with the letter $\mathbf{a}$,
$$ \varphi_{t}(v) = \kappa(v) + 1/2 \cdot \omega(H^{\prime}(v) \cdot \mathbf{b}) . $$
\label{proposition_toric_varphi}
\end{proposition}
We now obtain the main result for computing the $\av\bv$-index of the face poset of a toric arrangement.
\begin{theorem}
Let $\mathcal{H}$ be a toric hyperplane arrangement on the $n$-dimensional torus $T^{n}$ that subdivides the torus into a regular cell complex. Then the $\av\bv$-index of the face poset $T_{t}$ can be computed from the $\av\bv$-index of the intersection poset $\mathcal{P}$ as follows:
$$ \Psi(T_{t}) = (\mathbf{a}-\mathbf{b})^{n+1} + \frac{1}{2} \cdot \omega(\mathbf{a} \cdot H^{\prime}(\Psi(\mathcal{P})) \cdot \mathbf{b})^{*} . $$
\label{theorem_toric}
\end{theorem}
Observe that in Lemmas~\ref{lemma_toric_varphi_I} and~\ref{lemma_toric_varphi_II}, Proposition~\ref{proposition_toric_varphi} and Theorem~\ref{theorem_toric} no rational coefficients are actually introduced: only the $\av\bv$-monomial $\mathbf{a}^{n}$ is mapped to a $\cv\dv$-polynomial with an odd coefficient, hence $1/2 \cdot \omega(v \cdot \mathbf{b})$ has all integer coefficients.
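Theorem~\ref{theorem_toric} is easy to check mechanically on small examples. The following Python sketch is an illustration, not part of the text: it models $\av\bv$-polynomials as dictionaries from words to integer coefficients, and it assumes the description of the $\omega$ map used earlier in this chapter, namely that each occurrence of $\mathbf{a}\mathbf{b}$ is replaced by $2\mathbf{d}$ and every remaining letter by $\mathbf{c}$. The input is the $\av\bv$-index of the intersection poset from Example~\ref{example_toric_two}.

```python
# Sketch only: ab-polynomials as {word-tuple: integer coefficient}.
# Assumes omega replaces each occurrence of ab by 2d and every
# remaining letter by c (the description used earlier in the chapter).
from collections import defaultdict

def poly(*terms):
    """Build a polynomial from (coefficient, word) pairs, e.g. (2, 'ba')."""
    p = defaultdict(int)
    for coeff, word in terms:
        p[tuple(word)] += coeff
    return dict(p)

def add(p, q):
    r = defaultdict(int, p)
    for w, c in q.items():
        r[w] += c
    return {w: c for w, c in r.items() if c}

def mul(p, q):
    """Noncommutative product: concatenate words, multiply coefficients."""
    r = defaultdict(int)
    for u, cu in p.items():
        for v, cv in q.items():
            r[u + v] += cu * cv
    return dict(r)

def half(p):
    return {w: c // 2 for w, c in p.items()}

def Hprime(p):
    """Remove the last letter of each monomial; H'(1) = 0."""
    r = defaultdict(int)
    for w, c in p.items():
        if w:
            r[w[:-1]] += c
    return dict(r)

def omega(p):
    """Replace each occurrence of ab by 2d, all remaining letters by c."""
    r = defaultdict(int)
    for w, c in p.items():
        out, i = [], 0
        while i < len(w):
            if w[i:i + 2] == ('a', 'b'):
                out.append('d'); c *= 2; i += 2
            else:
                out.append('c'); i += 1
        r[tuple(out)] += c
    return dict(r)

def star(p):
    """Reverse every word (the * involution)."""
    return {w[::-1]: c for w, c in p.items()}

a, b = poly((1, 'a')), poly((1, 'b'))
amb = add(a, poly((-1, 'b')))                  # a - b
amb3 = mul(mul(amb, amb), amb)                 # (a - b)^3

# ab-index of the intersection poset in Example example_toric_two:
psiP = poly((1, 'aa'), (2, 'ba'), (6, 'ab'), (6, 'bb'))

# Theorem theorem_toric with n = 2:
psiT = add(amb3, half(star(omega(mul(mul(a, Hprime(psiP)), b)))))
expected = add(amb3, poly((7, 'dc'), (8, 'cd')))
print(psiT == expected)  # -> True
```

Running the sketch reproduces the hand computation of the continuation example: $H^{\prime}$ sends the $\av\bv$-index to $7\mathbf{a}+8\mathbf{b}$, and the toric formula returns $(\mathbf{a}-\mathbf{b})^{3} + 7\mathbf{d}\mathbf{c} + 8\mathbf{c}\mathbf{d}$.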
\addtocounter{theorem}{-20}
\begin{continuation}
\textrm{ The flag $f$-vector of the intersection poset $\mathcal{P}$ in Example~\ref{example_toric_two} is given by $(f_{\emptyset},f_{1},f_{2},f_{12}) = (1,3,7,15)$, the flag $h$-vector by $(h_{\emptyset},h_{1},h_{2},h_{12}) = (1,2,6,6)$, and so the $\av\bv$-index is $\Psi(\mathcal{P}) = \ava + 2 \cdot \bva + 6 \cdot \avb + 6 \cdot \bvb$. Thus
\begin{eqnarray*}
\Psi(T_{t}) & = & (\mathbf{a}-\mathbf{b})^{3} + 1/2 \cdot \omega(\mathbf{a} \cdot H^{\prime}(\ava + 2 \cdot \bva + 6 \cdot \avb + 6 \cdot \bvb) \cdot \mathbf{b})^{*} \\
& = & (\mathbf{a}-\mathbf{b})^{3} + 1/2 \cdot \omega(\mathbf{a} \cdot (7 \cdot \av + 8 \cdot \bv) \cdot \mathbf{b})^{*} \\
& = & (\mathbf{a}-\mathbf{b})^{3} + 1/2 \cdot \omega(7 \cdot \avab + 8 \cdot \avbb)^{*} \\
& = & (\mathbf{a}-\mathbf{b})^{3} + 7 \cdot \dvc + 8 \cdot \cvd ,
\end{eqnarray*}
which agrees with the calculation in Example~\ref{example_toric_two}. }
\end{continuation}
\addtocounter{theorem}{19}
Theorem~\ref{theorem_toric} gives a different approach from Corollary~\ref{corollary_f_vector_I} for determining the $f$-vector of $T_{t}$. For notational ease, for positive integers $i$ and $j$, let $[i,j] = \{ i, i+1, \ldots, j\}$ and $[j] = \{1, \ldots, j\}$.
\begin{corollary}
The number of $i$-dimensional regions in the subdivision $T_{t}$ of the $n$-dimensional torus is given by the following sum of flag $h$-vector entries of the intersection poset $\mathcal{P}$:
$$ f_{i+1}(T_{t}) = h_{[n-i,n]}(\mathcal{P}) + h_{[n-i,n-1]}(\mathcal{P}) + h_{[n-i+1,n]}(\mathcal{P}) + h_{[n-i+1,n-1]}(\mathcal{P}) , $$
for $1 \leq i \leq n-1$. The number of vertices is given by $f_{1}(T_{t}) = 1 + h_{n}(\mathcal{P})$ and the number of maximal regions by $f_{n+1}(T_{t}) = h_{[n-1]}(\mathcal{P}) + h_{[n]}(\mathcal{P})$.
\label{corollary_f_vector_II}
\end{corollary}
\begin{proof}
Let $\pair{\cdot}{\cdot}$ denote the inner product on $\mathbb{Z}\fab$ defined by $\pair{u}{v} = \delta_{u,v}$ for two $\av\bv$-monomials $u$ and $v$. For $1 \leq i \leq n-1$ we have
\begin{eqnarray*}
f_{i+1}(T_{t}) & = & 1 + h_{i+1}(T_{t}) \\
& = & 1 + \pair{\mathbf{a}^{i} \mathbf{b} \mathbf{a}^{n-i}}{\Psi(T_{t})} \\
& = & \frac{1}{2} \cdot \pair{\mathbf{a}^{i} \mathbf{b} \mathbf{a}^{n-i}} {\omega(\mathbf{a} \cdot H^{\prime}(\Psi(\mathcal{P})) \cdot \mathbf{b})^{*}} \\
& = & \frac{1}{2} \cdot [\mathbf{c}^{i-1} \mathbf{d} \mathbf{c}^{n-i}] \omega(\mathbf{a} \cdot H^{\prime}(\Psi(\mathcal{P})) \cdot \mathbf{b})^{*} + \frac{1}{2} \cdot [\mathbf{c}^{i} \mathbf{d} \mathbf{c}^{n-i-1}] \omega(\mathbf{a} \cdot H^{\prime}(\Psi(\mathcal{P})) \cdot \mathbf{b})^{*} \\
& = & \pair{\mathbf{a}^{n-i} \cdot \mathbf{a}\mathbf{b} \cdot \mathbf{b}^{i-1} + \mathbf{a}^{n-i-1} \cdot \mathbf{a}\mathbf{b} \cdot \mathbf{b}^{i}} {\mathbf{a} \cdot H^{\prime}(\Psi(\mathcal{P})) \cdot \mathbf{b}} \\
& = & \pair{\mathbf{a}^{n-i-1} \cdot (\mathbf{a}+\mathbf{b}) \cdot \mathbf{b}^{i-1}} {H^{\prime}(\Psi(\mathcal{P}))} \\
& = & \pair{\mathbf{a}^{n-i-1} \cdot (\mathbf{a}+\mathbf{b}) \cdot \mathbf{b}^{i-1} \cdot (\mathbf{a}+\mathbf{b})} {\Psi(\mathcal{P})} .
\end{eqnarray*}
Expanding in terms of the flag $h$-vector, the result follows. The expressions for $f_{1}$ and $f_{n+1}$ are obtained by similar calculations.
\end{proof}
The fact that Corollaries~\ref{corollary_f_vector_I} and~\ref{corollary_f_vector_II} are equivalent follows from the coalgebra techniques in Theorem~\ref{theorem_Ehrenborg_Readdy}.

\section{The complex of unbounded regions}
\label{section_affine}

\subsection{Zaslavsky and Bayer--Sturmfels}

The {\em unbounded Zaslavsky invariant} is defined by
$$ Z_{ub}(P) = Z(P) - 2 \cdot Z_{b}(P) .
$$
As the name suggests, the number of unbounded regions in a non-central arrangement is given by this invariant. By taking the difference of the two statements in Theorem~\ref{theorem_Zaslavsky_poset} part~(ii), we immediately obtain the following result.
\begin{lemma}
For a non-central hyperplane arrangement $\mathcal{H}$ the number of unbounded regions is given by $Z_{ub}(\mathcal{L})$, where $\mathcal{L}$ is the intersection lattice of the arrangement $\mathcal{H}$.
\label{lemma_Zaslavsky_unbounded}
\end{lemma}
\begin{figure}
\caption{The non-central arrangement $x,y,z=0,1$.}
\label{figure_affine}
\end{figure}
Let $\mathcal{H}$ be a non-central hyperplane arrangement in $\mathbb{R}^n$ with intersection lattice~$\mathcal{L}$ having the empty set $\emptyset$ as the maximal element. Let $\mathcal{L}_{ub}$ denote the {\em unbounded intersection lattice}, that is, the subposet of the intersection lattice consisting of all affine subspaces with the points (dimension zero affine subspaces) omitted but with the empty set $\emptyset$ continuing to be the maximal element. Equivalently, the poset $\mathcal{L}_{ub}$ is the rank-selected poset $\mathcal{L}([1, n-1])$, that is, the poset $\mathcal{L}$ with the coatoms removed. Let $T$ be the face lattice of the arrangement $\mathcal{H}$ with the minimal element $\hat{0}$ denoting the empty face and the maximal element denoted by $\hat{1}$. Similarly, let $T_{ub}$ denote the set of all faces in the face lattice~$T$ which are not bounded. Observe that~$T_{ub}$ includes the minimal and maximal elements of $T$ and that $T_{ub}$ is the face poset of an $(n-1)$-dimensional sphere. Pick~$R$ large enough so that all of the bounded faces are strictly inside a ball of radius~$R$. Intersect the arrangement~$\mathcal{H}$ with a sphere of radius $R$. The resulting cell complex has face poset $T_{ub}$. Our goal is to compute the $\cv\dv$-index of $T_{ub}$ in terms of the $\av\bv$-index of $\mathcal{L}_{ub}$.
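To make the construction of $\mathcal{L}_{ub}$ concrete, the following Python sketch (an illustration, not part of the text) builds the unbounded intersection poset of the arrangement $x,y,z=0,1$ of Figure~\ref{figure_affine} directly from the rank-selected description above: the rank-one elements are the six hyperplanes, the rank-two elements are the twelve lines, and the coatoms (the eight points) are omitted. The flag $h$-vector is then obtained from the flag $f$-vector by inclusion--exclusion.

```python
# Sketch only: the unbounded intersection poset L_ub of the
# arrangement x, y, z = 0, 1 (coatoms, i.e. points, removed).
from itertools import combinations

# Rank-one elements: the six hyperplanes x_i = v, encoded as (axis, value).
planes = [(axis, v) for axis in range(3) for v in (0, 1)]

# Rank-two elements: lines, intersections of two non-parallel hyperplanes.
lines = [frozenset(pair) for pair in combinations(planes, 2)
         if pair[0][0] != pair[1][0]]

# Flags hyperplane > line: a line lies on the two hyperplanes defining it.
flags = [(p, l) for p in planes for l in lines if p in l]

f1, f2, f12 = len(planes), len(lines), len(flags)
# Flag h-vector by inclusion-exclusion over the subsets of {1, 2}.
h1, h2, h12 = f1 - 1, f2 - 1, f12 - f1 - f2 + 1
print((1, f1, f2, f12), (1, h1, h2, h12))  # -> (1, 6, 12, 24) (1, 5, 11, 7)
```

These are exactly the flag vectors quoted for this arrangement in the example at the end of the section, giving $\Psi(\mathcal{L}_{ub}) = \ava + 5 \cdot \bva + 11 \cdot \avb + 7 \cdot \bvb$.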
The collection of unbounded faces of the arrangement $\mathcal{H}$ forms a lower order ideal in the poset $T^{*}$. Let~$Q$ be the subposet of $T^{*}$ consisting of this ideal with a maximal element $\hat{1}$ adjoined. We define the rank of an element in $Q$ to be its rank in the original poset $T^{*}$, that is, for $x \in Q$ let $\rho_{Q}(x) = \rho_{T^{*}}(x)$. This rank convention will simplify the later arguments. As posets, $T_{ub}^{*}$ and $Q$ are isomorphic. However, since their rank functions differ, their $\av\bv$-indexes satisfy $\Psi(T_{ub})^{*} \cdot (\mathbf{a}-\mathbf{b}) = \Psi(Q)$. Restrict the zero map $z : T^{*} \longrightarrow \mathcal{L} \cup \{\hz\}$ to form the map $z_{ub} : Q \longrightarrow \mathcal{L} \cup \{\hz\}$. The map $z_{ub}$ is order- and rank-preserving. However, it is not necessarily surjective. As before, we view the map $z_{ub}$ as a map from the set of chains of $Q$ to the set of chains of $\mathcal{L} \cup \{\hz\}$. The following theorem is an analogue of Theorem~\ref{theorem_Bayer_Sturmfels} for the unbounded setting.
\begin{theorem}
Let $\mathcal{H}$ be a non-central hyperplane arrangement with intersection lattice $\mathcal{L}$. Let $c=\{\hat{0} = x_0 < x_1 < \dots < x_k = \hat{1}\}$ be a chain in $\mathcal{L} \cup \{\hz\}$ with $k \geq 2$. Then the cardinality of the inverse image of the chain $c$ under $z_{ub}$ is given by
$$ |z_{ub}^{-1}(c)| = \prod_{i=2}^{k-1} Z([x_{i-1},x_{i}]) \cdot Z_{ub}([x_{k-1},x_{k}]) . $$
\label{theorem_Bayer_Sturmfels_unbounded}
\end{theorem}
\begin{proof}
We need to count the number of ways we can select a chain $d = \{\hat{0} = y_0 < y_1 < \dots < y_k = \hat{1}\}$ in the poset of unbounded regions $Q$ such that $z_{ub}(y_{i}) = x_{i}$. The number of ways to select the element~$y_{k-1}$ in $Q$ is the number of unbounded regions in the arrangement restricted to the subspace $x_{k-1}$. By Lemma~\ref{lemma_Zaslavsky_unbounded} this can be done in $Z_{ub}([x_{k-1},x_{k}])$ ways.
Since $y_{k-1}$ is an unbounded face of the arrangement and all other elements in the chain $d$ contain the face $y_{k-1}$, the other elements must be unbounded. The remainder of the proof is the same as that of Theorem~\ref{theorem_Bayer_Sturmfels_toric}.
\end{proof}
\begin{corollary}
The flag $f$-vector entry $f_{S}(T_{ub})$ is divisible by $2^{|S|}$ for any index set $S \subseteq \{1, \ldots, n\}$.
\end{corollary}
\begin{proof}
The proof is the same as Corollary~\ref{corollary_evenness} with the extra observation that the Zaslavsky invariant $Z_{ub}$ is even.
\end{proof}

\subsection{The connection between posets and coalgebras}

Define $\lambda_{ub}$ by $\lambda_{ub} = \eta - 2 \cdot \beta$. By equations~(\ref{equation_beta}) and~(\ref{equation_eta}), for a graded poset $P$ we have
$$ \lambda_{ub}(\Psi(P)) = Z_{ub}(P) \cdot (\mathbf{a}-\mathbf{b})^{\rho(P)-1} . $$
Define a sequence of functions $\varphi_{ub,k}\colon\mathbb{Z}\fab\to\mathbb{Z}\fab$ by $\varphi_{ub,1}=\kappa$ and for $k>1$,
$$ \varphi_{ub,k}(v) = \sum_{v} \kappa(v_{(1)}) \cdot \mathbf{b} \cdot \eta(v_{(2)}) \cdot \mathbf{b} \cdot \eta(v_{(3)}) \cdot \mathbf{b} \cdots \mathbf{b} \cdot \eta(v_{(k-1)}) \cdot \mathbf{b} \cdot \lambda_{ub}(v_{(k)}). $$
Finally, let $\varphi_{ub}(v)$ be the sum $\varphi_{ub}(v) = \sum_{k\geq 1}\varphi_{ub,k}(v)$. Similar to Theorem~\ref{theorem_poset_varphi_toric} we have the next result. The proof only differs in replacing the map $z_{t} : T_{t}^{*} \longrightarrow \mathcal{P} \cup \{\hz\}$ with $z_{ub} : Q \longrightarrow \mathcal{L} \cup \{\hz\}$ and the invariant $Z_{t}$ by $Z_{ub}$.
\begin{theorem}
The $\av\bv$-index of the poset $Q$ of unbounded regions of a non-central arrangement is given by
$$ \Psi(Q) = \varphi_{ub}(\Psi(\mathcal{L} \cup \{\hz\})).
$$
\label{theorem_poset_varphi_unbounded}
\end{theorem}

\subsection{Evaluating the function $\varphi_{ub}$}

In this subsection we analyze the behavior of $\varphi_{ub}$.
\begin{lemma}
For any $\av\bv$-monomial $v$,
$$ \varphi_{ub}(v) = \varphi(v) - 2 \cdot \sum_v \varphi(v_{(1)}) \cdot \mathbf{b} \cdot \beta(v_{(2)}) . $$
\label{lemma_phi_ub_recursion}
\end{lemma}
\begin{proof}
Using the coassociative identity $\Delta^{k-1} = (\Delta^{k-2} \otimes \id) \circ \Delta$, we have for $k \geq 2$
\begin{eqnarray*}
\varphi_{ub,k}(v) & = & \varphi_{k}(v) - 2 \cdot \sum_{v} \kappa(v_{(1)}) \cdot \mathbf{b} \cdot \eta(v_{(2)}) \cdot \mathbf{b} \cdots \mathbf{b} \cdot \eta(v_{(k-1)}) \cdot \mathbf{b} \cdot \beta(v_{(k)}) \\
& = & \varphi_{k}(v) - 2 \cdot \sum_{v} \sum_{v_{(1)}} \kappa(v_{(1,1)}) \cdot \mathbf{b} \cdot \eta(v_{(1,2)}) \cdot \mathbf{b} \cdots \mathbf{b} \cdot \eta(v_{(1,k-1)}) \cdot \mathbf{b} \cdot \beta(v_{(2)}) \\
& = & \varphi_{k}(v) - 2 \cdot \sum_{v} \varphi_{k-1}(v_{(1)}) \cdot \mathbf{b} \cdot \beta(v_{(2)}) .
\end{eqnarray*}
The result then follows by summing over all $k \geq 2$ and adding $\varphi_{ub,1}(v) = \kappa(v) = \varphi_{1}(v)$.
\end{proof}
\begin{lemma}
Let $v$ be an $\av\bv$-monomial. Then
$$ \varphi_{ub}(v \cdot \mathbf{a}) = \varphi(v) \cdot (\mathbf{a} - \mathbf{b}) . $$
\label{lemma_varphi_ub_I}
\end{lemma}
\begin{proof}
By Lemma~\ref{lemma_phi_ub_recursion} and the Leibniz relation~(\ref{equation_Newtonian}) we have
$$ \varphi_{ub}(v \cdot \mathbf{a}) = \varphi(v \cdot \mathbf{a}) - 2 \cdot \varphi(v) \cdot \mathbf{b} \cdot \beta(1) - 2 \cdot \sum_v \varphi(v_{(1)}) \cdot \mathbf{b} \cdot \beta(v_{(2)} \cdot \mathbf{a}) . $$
By equation~(\ref{equation_varphi_a}), $\varphi(v \cdot \mathbf{a}) = \varphi(v) \cdot \mathbf{c}$. The summation above is zero because $\beta(v_{(2)} \cdot \mathbf{a})$ is always zero.
Hence $\varphi_{ub}(v \cdot \mathbf{a}) = \varphi(v) \cdot (\mathbf{c} - 2\mathbf{b}) = \varphi(v) \cdot (\mathbf{a} - \mathbf{b})$.
\end{proof}
\begin{lemma}
Let $v$ be an $\av\bv$-monomial. Then
\[ \varphi_{ub}(v \cdot \mathbf{b}\mathbf{b}) = \varphi_{ub}(v \cdot \mathbf{b}) \cdot (\mathbf{a} - \mathbf{b}). \]
\label{lemma_varphi_ub_II}
\end{lemma}
\begin{proof}
Let $u = v \cdot \mathbf{b}$. Applying Lemma~\ref{lemma_phi_ub_recursion} and the Leibniz relation~(\ref{equation_Newtonian}) to $u$ gives
\begin{eqnarray*}
\varphi_{ub}(u \cdot \mathbf{b}) & = & \varphi(u \cdot \mathbf{b}) - 2 \cdot \varphi(u) \cdot \mathbf{b} \cdot \beta(1) - 2 \cdot \sum_{u} \varphi(u_{(1)}) \cdot \mathbf{b} \cdot \beta(u_{(2)} \cdot \mathbf{b}) \\
& = & \varphi(u) \cdot (\mathbf{c} - 2\mathbf{b}) - 2 \cdot \sum_{u} \varphi(u_{(1)}) \cdot \mathbf{b} \cdot \beta(u_{(2)}) \cdot (\mathbf{a} - \mathbf{b}) \\
& = & \left(\varphi(u) - 2 \cdot \sum_{u} \varphi(u_{(1)}) \cdot \mathbf{b} \cdot \beta(u_{(2)}) \right) \cdot (\mathbf{a} - \mathbf{b}) \\
& = & \varphi_{ub}(u) \cdot (\mathbf{a} - \mathbf{b}).
\end{eqnarray*}
Here we have used the facts that $\varphi(u \cdot \mathbf{b}) = \varphi(u) \cdot \mathbf{c}$ and $\beta(u_{(2)} \cdot \mathbf{b}) = \beta(u_{(2)}) \cdot (\mathbf{a}-\mathbf{b})$.
\end{proof}
\begin{lemma}
Let $v$ be an $\av\bv$-monomial. Then $\varphi_{ub}(v \cdot \av\bv) = 0$.
\label{lemma_varphi_ub_III}
\end{lemma}
\begin{proof}
Directly we have
\begin{align*}
\varphi_{ub}(v \cdot \av\bv) & = \varphi(v \cdot \av\bv) - 2 \cdot \varphi(v) \cdot \mathbf{b} \cdot \beta(\mathbf{b}) \\
&\vphantom{=} - 2 \cdot \varphi(v \cdot \mathbf{a}) \cdot \mathbf{b} \cdot \beta(1) \\
&\vphantom{=} - 2 \cdot \sum_v \varphi(v_{(1)}) \cdot \mathbf{b} \cdot \beta(v_{(2)} \cdot \mathbf{a} \mathbf{b}) \\
& = \varphi(v) \cdot 2\mathbf{d} - 2 \cdot \varphi(v) \cdot \mathbf{b} \cdot (\mathbf{a} - \mathbf{b}) - 2 \cdot \varphi(v) \cdot \mathbf{c} \mathbf{b} \\
& = 2 \cdot \varphi(v) \cdot (\mathbf{d} - \mathbf{b}(\mathbf{a} - \mathbf{b}) - \mathbf{c}\mathbf{b}) \\
& = 0 ,
\end{align*}
where we have used the facts that $\varphi(v \cdot \mathbf{a}\mathbf{b}) = \varphi(v) \cdot 2\mathbf{d}$ and $\beta(v_{(2)} \cdot \mathbf{a}\mathbf{b}) = 0$.
\end{proof}
The previous three lemmas enable us to determine $\varphi_{ub}$. In order to obtain more compact notation, define a map $r\colon\mathbb{Z}\fab\to\mathbb{Z}\fab$ by $r(1)=0$, $r(v \cdot \mathbf{a})=v$, and $r(v \cdot \mathbf{b})=0$. By using the chain definition of the $\av\bv$-index, it is straightforward to see that $\Psi(\mathcal{L}_{ub}) = r(\Psi(\mathcal{L}))$.
\begin{proposition}
Let $w$ be an $\av\bv$-polynomial homogeneous of degree greater than zero. Then
$$ \varphi_{ub}(\mathbf{a} \cdot w) = \omega(\mathbf{a} \cdot r(w)) \cdot (\mathbf{a} - \mathbf{b}) . $$
\end{proposition}
\begin{proof}
The case $w = v \cdot \mathbf{a}$ follows from Lemma~\ref{lemma_varphi_ub_I}. The remaining case is $w = v \cdot \mathbf{b}$. Note that $\mathbf{a} \cdot v \cdot \mathbf{b}$ can be factored as $u \cdot \mathbf{a}\mathbf{b} \cdot \mathbf{b}^{k}$ for a monomial $u$.
Hence $\varphi_{ub}(u \cdot \mathbf{a}\mathbf{b} \cdot \mathbf{b}^{k}) = \varphi_{ub}(u \cdot \mathbf{a}\mathbf{b}) \cdot (\mathbf{a}-\mathbf{b})^{k} = 0$ by Lemmas~\ref{lemma_varphi_ub_II} and~\ref{lemma_varphi_ub_III}.
\end{proof}
We combine all of these results to conclude that the $\cv\dv$-index of the poset of unbounded regions~$T_{ub}$ can be computed in terms of the $\av\bv$-index of the unbounded intersection lattice $\mathcal{L}_{ub}$.
\begin{theorem}
Let $\mathcal{H}$ be a non-central hyperplane arrangement with the unbounded intersection lattice $\mathcal{L}_{ub}$ and poset of unbounded regions $T_{ub}$. Then the $\av\bv$-index of $T_{ub}$ is given by
$$ \Psi(T_{ub}) = \omega(\mathbf{a} \cdot \Psi(\mathcal{L}_{ub}))^{*} . $$
\label{theorem_unbounded}
\end{theorem}
\begin{proof}
We have that
\begin{eqnarray*}
\Psi(T_{ub})^{*} \cdot (\mathbf{a}-\mathbf{b}) & = & \Psi(Q) \\
& = & \varphi_{ub}(\mathbf{a} \cdot \Psi(\mathcal{L})) \\
& = & \omega(\mathbf{a} \cdot r(\Psi(\mathcal{L}))) \cdot (\mathbf{a}-\mathbf{b}) \\
& = & \omega(\mathbf{a} \cdot \Psi(\mathcal{L}_{ub})) \cdot (\mathbf{a}-\mathbf{b}) .
\end{eqnarray*}
The result follows by cancelling $\mathbf{a} - \mathbf{b}$ from both sides of the identity.
\end{proof}
\begin{figure}
\caption{The spherical subdivision obtained from the non-central arrangement $x,y,z=0,1$.}
\label{figure_spherical}
\end{figure}
\begin{example}
Consider the non-central hyperplane arrangement consisting of the six hyperplanes $x = 0,1$, $y = 0,1$ and $z = 0,1$. See Figure~\ref{figure_affine}. After intersecting this arrangement with a sphere of large enough radius we obtain the cell complex in Figure~\ref{figure_spherical}. The polytopal realization of this complex is known as the rhombicuboctahedron. The dual of the face lattice of this spherical complex is not realized by a zonotope.
However, one can view the dual lattice as the face lattice of a $2 \times 2 \times 2$ pile of cubes. The intersection lattice $\mathcal{L}$ is the face lattice of the three-dimensional crosspolytope, in other words, the octahedron. Hence the lattice of unbounded intersections $\mathcal{L}_{ub}$ has the flag $f$-vector $(f_{\emptyset},f_{1},f_{2},f_{12}) = (1,6,12,24)$ and the flag $h$-vector $(h_{\emptyset},h_{1},h_{2},h_{12}) = (1,5,11,7)$. The $\av\bv$-index is given by $\Psi(\mathcal{L}_{ub}) = \ava + 5 \cdot \bva + 11 \cdot \avb + 7 \cdot \bvb$. Hence the $\cv\dv$-index of $T_{ub}$ is
\begin{eqnarray*}
\Psi(T_{ub}) & = & \omega(\avaa + 5 \cdot \avba + 11 \cdot \avab + 7 \cdot \avbb)^{*} \\
& = & \cvcc + 22 \cdot \dvc + 24 \cdot \cvd .
\end{eqnarray*}
\end{example}

\section{Concluding remarks}

For regular subdivisions of manifolds questions abound.
\begin{itemize}
\item[(i)] What is the right analogue of a regular subdivision in order that it be polytopal? Can flag $f$-vectors be classified for polytopal subdivisions?
\item[(ii)] Is there a Kalai convolution for manifolds that will generate more inequalities for flag $f$-vectors? \cite{Kalai}
\item[(iii)] Is there a lifting technique that will yield more inequalities for higher-dimensional manifolds? \cite{Ehrenborg_lifting}
\item[(iv)] Are there minimization inequalities for the $\cv\dv$-coefficients in the polynomial~$\Psi$? As a first step, can one prove the non-negativity of $\Psi$? \cite{Billera_Ehrenborg,Ehrenborg_Karu}
\item[(v)] Is there an extension of the toric $g$-inequalities to manifolds? \cite{Bayer_Ehrenborg,Kalai_g,Karu,Stanley_h}
\item[(vi)] Can the coefficients for $\Psi$ be minimized for regular toric arrangements as was done in the case of central hyperplane arrangements? \cite{Billera_Ehrenborg_Readdy_om}
\end{itemize}
The most straightforward manifold to study is $n$-dimensional projective space~$P^{n}$.
We offer the following result in obtaining the $\av\bv$-index of subdivisions of $P^{n}$.
\begin{theorem}
Let $\Omega$ be a centrally symmetric regular subdivision of the $n$-dimensional sphere $S^{n}$. Assume that when antipodal points of the sphere are identified, a regular subdivision $\Omega^{\prime}$ of the projective space $P^{n}$ is obtained. Then the $\av\bv$-index of $\Omega^{\prime}$ is given by
$$ \Psi(\Omega^{\prime}) = \frac{\mathbf{c}^{n+1} + (\mathbf{a}-\mathbf{b})^{n+1}}{2} + \frac{\Phi}{2} , $$
where the $\cv\dv$-index of $\Omega$ is $\Psi(\Omega) = \mathbf{c}^{n+1} + \Phi$.
\end{theorem}
\begin{proof}
Each chain $c = \{\hat{0} = x_{0} < x_{1} < \cdots < x_{k} = \hat{1}\}$ with $k \geq 2$ in $\Omega^{\prime}$ corresponds to two chains in~$\Omega$ with the same weight $\wt(c)$. The chain $c = \{\hat{0} = x_{0} < x_{1} = \hat{1}\}$ corresponds to exactly one chain in~$\Omega$ and has weight $(\mathbf{a}-\mathbf{b})^{n+1}$. Hence $\Psi(\Omega) = 2 \cdot \Psi(\Omega^{\prime}) - (\mathbf{a}-\mathbf{b})^{n+1}$, proving the result.
\end{proof}
The results in this chapter have been stated for hyperplane arrangements. In true generality one could work with the underlying oriented matroid, especially since there are nonrealizable ones such as the non-Pappus oriented matroid. All of these can be represented as pseudo-hyperplane arrangements. We chose to work with hyperplane arrangements to preserve the geometric intuition. Poset transformations related to the $\omega$ map have been considered in~\cite{Ehrenborg_r_Birkhoff,Ehrenborg_Readdy_Tchebyshev,Hsiao}. Are there toric or affine analogues of these poset transforms? Another way to encode the flag $f$-vector data of a poset is to use the quasisymmetric function of a poset~\cite{Ehrenborg_Hopf}.
In this language the $\omega$ map is translated to Stembridge's~$\vartheta$ map; see~\cite{Billera_Hsiao_van_Willigenburg,Stembridge}. Would the results of Theorems~\ref{theorem_toric} and~\ref{theorem_unbounded} be appealing from the quasisymmetric function viewpoint? Richard Stanley has asked whether the coefficients of the toric characteristic polynomial are alternating. If so, is there any combinatorial interpretation of the absolute values of the coefficients? A far-reaching generalization of Zaslavsky's results for hyperplane arrangements is due to Goresky and MacPherson~\cite{Goresky_MacPherson}. Their results determine the cohomology groups of the complement of a complex hyperplane arrangement. For a toric analogue of the Goresky--MacPherson results, see the work of De~Concini and Procesi~\cite{De_Concini_Procesi}. For algebraic considerations of toric arrangements, see~\cite{Douglass,Macmeikan_I, Macmeikan_II, Macmeikan_III}. In Section~\ref{section_toric} we restricted ourselves to studying arrangements that cut the torus into regular cell complexes. In a future paper~\cite{Ehrenborg_Slone}, two of the authors are developing the notion of a $\cv\dv$-index for non-regular cell complexes.
\begin{center} Copyright \copyright\ MLE Slone 2008 \end{center}
\vanish{
\newcommand{\journal}[6]{{\sc #1,} #2, {\it #3} {\bf #4} (#5), #6.}
\newcommand{\books}[6]{{\sc #1,} ``#2,'' #3, #4, #5, #6.}
\newcommand{\collection}[6]{{\sc #1,} #2, #3, in {\it #4}, #5, #6.}
\newcommand{\thesis}[4]{{\sc #1,} ``#2,'' Doctoral dissertation, #3, #4.}
\newcommand{\springer}[4]{{\sc #1,} ``#2,'' Lecture Notes in Math., Vol.\ #3, Springer-Verlag, Berlin, #4.}
\newcommand{\preprint}[3]{{\sc #1,} #2, preprint #3.}
\newcommand{\preparation}[2]{{\sc #1,} #2, in preparation.}
\newcommand{\appear}[3]{{\sc #1,} #2, to appear in {\it #3}.}
\newcommand{\submitted}[3]{{\sc #1,} #2, submitted to {\it #3}.}
\newcommand{\communication}[1]{{\sc #1,} personal communication.}
{\small
\begin{thebibliography}{99}
\bibitem{Athanasiadis} \journal{C.\ A.\ Athanasiadis} {Characteristic polynomials of subspace arrangements and finite fields} {Adv.\ Math.} {122}{1996}{193--233}
\bibitem{Athanasiadis_II} \journal{C.\ A.\ Athanasiadis} {Extended Linial hyperplane arrangements for root systems and a conjecture of Postnikov and Stanley} {J.\ Algebraic Combin.} {10}{1999}{207--225}
\bibitem{Bayer_Billera} \journal{M.\ Bayer and L.\ Billera} {Generalized Dehn--Sommerville relations for polytopes, spheres and Eulerian partially ordered sets} {Invent.\ Math.} {79}{1985}{143--157}
\bibitem{Bayer_Ehrenborg} \journal{M.\ Bayer and R.\ Ehrenborg} {The toric $h$-vectors of partially ordered sets} {Trans.\ Amer.\ Math.\ Soc.} {352}{2000}{4515--4531}
\bibitem{Bayer_Klapper} \journal{M.\ Bayer and A.\ Klapper} {A new index for polytopes} {Discrete Comput.\ Geom.} {6}{1991}{33--47}
\bibitem{Bayer_Sturmfels} \journal{M.\ Bayer and B.\ Sturmfels} {Lawrence polytopes} {Canad.\ J.\ Math.} {42}{1990}{62--79}
\bibitem{Billera_Ehrenborg} \journal{L.\ J.\ Billera and
R.\ Ehrenborg} {Monotonicity of the cd-index for polytopes} {Math.\ Z.} {233}{2000}{421--441}
\bibitem{Billera_Ehrenborg_Readdy_om} \journal{L.\ J.\ Billera, R.\ Ehrenborg, and M.\ Readdy} {The {\cv\mbox{-}2\dv}-index of oriented matroids} {J.\ Combin.\ Theory Ser.\ A} {80}{1997}{79--105}
\bibitem{Billera_Hsiao_van_Willigenburg} \journal{L.\ J.\ Billera, S.\ K.\ Hsiao and S.\ van Willigenburg} {Peak quasisymmetric functions and Eulerian enumeration} {Adv.\ Math.} {176}{2003}{248--276}
\bibitem{Bjorner_topological_methods} {\sc A.\ Bj\"orner,} Topological methods. Handbook of combinatorics, Vol.\ 2, 1819--1872, Elsevier, Amsterdam, 1995.
\bibitem{Chen} \journal{B.\ Chen} {On characteristic polynomials of subspace arrangements} {J.\ Combin.\ Theory Ser.\ A} {90}{2000}{347--352}
\bibitem{De_Concini_Procesi} \journal{C.\ De Concini and C.\ Procesi} {On the geometry of toric arrangements} {Transform.\ Groups} {10}{2005}{387--422}
\bibitem{Douglass} \journal{J.\ M.\ Douglass} {Toral arrangements and hyperplane arrangements} {Rocky Mountain J.\ Math.} {28}{1998}{939--956}
\bibitem{Ehrenborg_Hopf} \journal{R.\ Ehrenborg} {On posets and Hopf algebras} {Adv.\ Math.} {119}{1996}{1--25}
\bibitem{Ehrenborg_k-Eulerian} \journal{R.\ Ehrenborg} {$k$-Eulerian posets} {Order} {18}{2001}{227--236}
\bibitem{Ehrenborg_r_Birkhoff} \submitted{R.\ Ehrenborg} {The $r$-signed Birkhoff transform} {Trans.\ Amer.\ Math.\ Soc.}
\bibitem{Ehrenborg_lifting} \journal{R.\ Ehrenborg} {Lifting inequalities for polytopes} {Adv.\ Math.} {193}{2005}{205--222}
\bibitem{Ehrenborg_Karu} \journal{R.\ Ehrenborg and K.\ Karu} {Decomposition theorem for the $\cv\dv$-index of Gorenstein* posets} {J.\ Algebraic Combin.} {26}{2007}{225--251}
\bibitem{Ehrenborg_Readdy_c} \journal{R.\ Ehrenborg and M.\ Readdy} {Coproducts and the $\cv\dv$-index} {J.\ Algebraic Combin.} {8}{1998}{273--299}
\bibitem{Ehrenborg_Readdy_valuation_1} \journal{R.\ Ehrenborg and M.\ Readdy} {On valuations, the characteristic polynomial and
complex subspace arrangements} {Adv.\ Math.} {134}{1998}{32--42} \bibitem{Ehrenborg_Readdy_valuation_2} \journal{R.\ Ehrenborg and M.\ Readdy} {The Dowling transform of subspace arrangements} {J.\ Combin.\ Theory Ser.\ A} {91}{2000}{322--333} \bibitem{Ehrenborg_Readdy_homology} \journal{R.\ Ehrenborg and M.\ Readdy} {Homology of Newtonian coalgebras} {European J.\ Combin.} {23}{2002}{919--927} \bibitem{Ehrenborg_Readdy_Tchebyshev} \appear{R.\ Ehrenborg and M.\ Readdy} {The Tchebyshev transforms of the first and second kind} {Ann.\ Comb.} \bibitem{Ehrenborg_Slone} \preparation{R.\ Ehrenborg and MLE\ Slone} {The $\cv\dv$-index of non-regular cell complexes} \bibitem{Goresky_MacPherson} {\sc M.\ Goresky and R.\ MacPherson,} ``Stratified Morse theory'' Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)], 14.\ Springer-Verlag, Berlin, 1988. \bibitem{Greene_Zaslavsky} \journal{C.\ Greene and T.\ Zaslavsky} {On the interpretation of Whitney numbers through arrangements of hyperplanes, zonotopes, non-Radon partitions, and orientations of graphs} {Trans.\ Amer.\ Math.\ Soc.} {280}{1983}{97--126} \bibitem{Groemer} \journal{H.\ Groemer} {On the extension of additive functionals on classes of convex sets} {Pacific J.\ Math.} {75}{1978}{397--410} \bibitem{Grunbaum} \books{B.\ Gr\"unbaum} {Convex polytopes} {second edition} {Springer-Verlag}{New York}{2003} \bibitem{Hsiao} \journal{S.\ K.\ Hsiao} {A signed analog of the Birkhoff transform} {J.\ Combin.\ Theory Ser.\ A} {113}{2006}{251--272} \bibitem{Joni_Rota} \journal{S.\ A.\ Joni and G.-C.\ Rota} {Coalgebras and bialgebras in combinatorics} {Stud.\ Appl.\ Math.} {61}{1979}{93--139} \bibitem{Jozefiak_Sagan} \journal{T.\ J\'ozefiak and B.\ Sagan} {Basic derivations for subarrangements of Coxeter arrangements} {J.\ Algebraic Combin.} {2}{1993}{291--320} \bibitem{Kalai_g} \journal{G.\ Kalai} {Rigidity and the lower bound theorem. 
I} {Invent.\ Math.} {88}{1987}{125--151} \bibitem{Kalai} \journal{G.\ Kalai} {A new basis of polytopes} {J.\ Combin.\ Theory Ser.\ A} {49}{1988}{191--209} \bibitem{Karu} \journal{K.\ Karu} {Hard Lefschetz theorem for nonrational polytopes} {Invent.\ Math.} {157}{2004}{419--447} \bibitem{Klain_Rota} \book{D.\ Klain and G.-C.\ Rota} {Introduction to geometric probability} {Lezioni Lincee, Cambridge University Press, Cambridge} {1997} \bibitem{Macmeikan_I} \journal{C.\ Macmeikan} {The Poincar\'e polynomial of an mp arrangement} {Proc.\ Amer.\ Math.\ Soc.} {132}{2004}{1575--1580} \bibitem{Macmeikan_II} \journal{C.\ Macmeikan} {Modules of derivations for toral arrangements} {Indag.\ Math. (N.S.)} {15}{2004}{257--267} \bibitem{Macmeikan_III} \collection{C.\ Macmeikan} {Toral arrangements. The COE Seminar on Mathematical Sciences 2004} {37--54} {Sem.\ Math.\ Sci., 31} {Keio Univ., Yokohama} {2004} \bibitem{Novik_Postnikov_Sturmfels} \journal{I.\ Novik, A.\ Postnikov and B.\ Sturmfels} {Syzygies of oriented matroids} {Duke Math.\ J.} {111}{2002}{287--317} \bibitem{Stanley_acyclic} \journal{R.\ Stanley} {Acyclic orientations of graphs} {Discrete Math.} {5}{1973}{171--178} \bibitem{Stanley_EC_I} \book{R.\ P.\ Stanley} {Enumerative Combinatorics, Vol. I} {Wadsworth and Brooks/Cole, Pacific Grove} {1986} \bibitem{Stanley_h} {\sc R.\ P.\ Stanley,} Generalized $h$-vectors, intersection cohomology of toric varieties, and related results. Commutative algebra and combinatorics (Kyoto, 1985), 187--213, Adv. Stud. Pure Math., 11, North-Holland, Amsterdam, 1987. 
\bibitem{Stanley_d} \journal{R.\ P.\ Stanley} {Flag $f$-vectors and the $cd$-index} {Math.\ Z.} {216}{1994}{483--499} \bibitem{Stembridge} \journal{J.\ Stembridge} {Enriched $P$-partitions} {Trans.\ Amer.\ Math.\ Soc.} {349}{1997}{763--788} \bibitem{Swartz} \preprint{E.\ Swartz} {Face enumeration -- from spheres to manifolds} {2007} \bibitem{Sweedler} \book{M.\ Sweedler} {Hopf Algebras} {Benjamin, New York} {1969} \bibitem{Zaslavsky} {\sc T.\ Zaslavsky,} {Facing up to arrangements: face count formulas for partitions of space by hyperplanes,} {\it Mem.\ Amer.\ Math.\ Soc.} {\bf 154} {(1975).} \bibitem{Zaslavsky_paper} \journal{T.\ Zaslavsky} {A combinatorial analysis of topological dissections} {Adv.\ Math.} {25}{1977}{267--285} \end{thebibliography} } {\em R.\ Ehrenborg, M.\ Readdy and MLE\ Slone, Department of Mathematics, University of Kentucky, Lexington, KY 40506-0027,} \{{\tt jrge},{\tt readdy},{\tt mslone}\}{\tt @ms.uky.edu} \end{document} } \setcounter{chapter}{1} \chapter{Mixing operators} \label{chap:mixing} \section{Introduction} Kalai~\cite{Kalai} showed that a basis for flag $f$-vectors of polytopes is given by the flag $f$-vectors of polytopes constructed from simplices by repeatedly taking joins or products. Ehrenborg and Readdy~\cite{Ehrenborg_Readdy_c} studied how the $\cv\dv$-index changes under these operations. They discovered bilinear operators on the Newtonian coalgebra $\mathbb{Z}\free{\cv, \dv}$ which they called the mixing operator (for joins of polytopes) and the diamond operator (for products of polytopes). Later, Ehrenborg and Fox~\cite{Ehrenborg_Fox} analyzed these operators further, obtaining recursive coalgebraic formulas for the $\cv\dv$-indices of joins and products of polytopes. Using these formulas, they obtained a $\cv\dv$-index inequality relating the product of a join with the join of a product, providing evidence for Stanley's Gorenstein${}^*$ conjecture, which was only settled later~\cite{Ehrenborg_Karu}.
It is difficult to use the join and product operations to study non-spherical manifolds, such as tori, since both preserve Eulerianness and take spheres to spheres. To remedy this difficulty, we introduce the manifold product. This is defined on manifolds as the Cartesian product of the underlying cell complexes, and yields a bilinear operator on $\av\bv$-indices. A manifold product of Eulerian manifolds is not globally Eulerian, but it is locally Eulerian. We extend inequalities proved by Ehrenborg and Fox to the case of manifold products. The mixing and diamond operators are nonnegative operators on $\cv\dv$-indices. It is therefore natural to ask what the coefficients count. We prove that the coefficients of the $\cv\dv$-index of the diamond product of two butterfly posets, which have pure $\mathbf{c}$-power $\cv\dv$-indices, can be interpreted as a weighted sum of restricted lattice paths. This also extends to a lattice-path interpretation for the coefficients of the mixing operator applied to pure $\mathbf{c}$-power terms. We also extend this interpretation to the manifold operator in the situation where it yields a nonnegative near $\cv\dv$-index. \section{Preliminaries} For any cell complex $X$, let $\lat{X}$ denote its face poset. The empty face $\hat{0}$ and the total complex $\hat{1}$ are faces in $\lat{X}$. If $X$ is a polytope, then $\lat{X}$ is a lattice. A \define{graded poset} is a poset $P$ with distinct minimum and maximum elements $\hat{0}$ and $\hat{1}$ which is equipped with a \define{rank function} $\rho\colon P\to\mathbb{N}$. The rank function must preserve covers and send the minimum element of $P$ to $0$. In other words, $\rho(\hat{0})=0$, and if $x<:y$ in $P$, then $\rho(x)+1 = \rho(y)$. The face poset of a finite regular cell complex, such as a polytope, is graded by dimension.
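For instance, the face poset of a line segment is graded by $\rho(F) = \dim F + 1$:
\[
\hat{0} = \emptyset \;<\; v_1,\, v_2 \;<\; \hat{1},
\qquad
\rho(\emptyset) = 0, \quad \rho(v_1) = \rho(v_2) = 1, \quad \rho(\hat{1}) = 2.
\]
This poset is the Boolean algebra $B_2$, which will reappear below in the description of the prism operation.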
Fix once and for all a collection $\mathcal{G}$ which has exactly one representative of each isomorphism class of finite graded posets. From now on, we identify each graded poset with its isomorphic representative in $\mathcal{G}$. Fix a ground ring $k$. All modules, algebras, and coalgebras we discuss will be over this ground ring. An \define{algebra} is a module $A$ together with linear structure maps $\nabla\colon A\otimes A\to A$, called the product, and $\eta\colon k\to A$, called the unit, such that the diagrams \[\xymatrix{ A\otimes A\otimes A\ar[r]^{~~\nabla\otimes\id}\ar[d]_{\id\otimes\nabla} & A\otimes A\ar[d]^{\nabla} \\ A\otimes A\ar[r]_{\nabla} & A } \text{\quad and\quad} \xymatrix{ k\otimes A\ar[r]^{\eta\otimes\id}\ar[dr]_{\cong} & A\otimes A\ar[d]^{\nabla} & A\otimes k\ar[l]_{\id\otimes\eta}\ar[dl]^{\cong} \\ & A & }\] are commutative. If $A$ and $B$ are algebras, then an algebra morphism from $A$ to $B$ is a linear map $\varphi\colon A\to B$ which respects the product and unit. That is, $\nabla_B\circ(\varphi\otimes\varphi) = \varphi\circ \nabla_A$ and $\varphi\circ\eta_A=\eta_B$. Dually, a \define{coalgebra} is a module $C$ together with linear structure maps $\Delta\colon C\to C\otimes C$, called the coproduct, and $\varepsilon\colon C\to k$, called the counit, such that the diagrams \[\xymatrix{ C\otimes C\otimes C & C\otimes C\ar[l]_{~~~~\Delta\otimes\id} \\ C\otimes C\ar[u]^{\id\otimes\Delta} & C\ar[l]^{\Delta}\ar[u]_{\Delta} } \text{\quad and\quad} \xymatrix{ k\otimes C & C\otimes C\ar[l]_{\varepsilon\otimes\id}\ar[r]^{\id\otimes\varepsilon} & C\otimes k \\ & C\ar[ul]^{\cong}\ar[u]^{\Delta}\ar[ur]_{\cong} & }\] are commutative. Coalgebras will generally \emph{not} be assumed to have a counit.
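A small example of a coalgebra without a counit is the polynomial module $k[x]$ with the coassociative coproduct
\[
\Delta(x^n) = \sum_{i=0}^{n-1} x^{i}\otimes x^{n-1-i}.
\]
A counit would have to satisfy $x^n = \sum_{i} \varepsilon(x^{i})\,x^{n-1-i}$, which is impossible for degree reasons. This is the one-variable instance of the coproduct on $\av\bv$-polynomials introduced below.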
If~$C$ and $D$ are coalgebras, then a coalgebra morphism from $C$ to $D$ is a linear map~$\varphi\colon C\to D$ which respects the coproduct and counit, that is, the equations $(\varphi\otimes\varphi)\circ\Delta_C=\Delta_D\circ\varphi$ and $\varepsilon_D\circ\varphi = \varepsilon_C$ hold. Just as taking a product can be thought of as assembling something out of smaller pieces, taking a coproduct can be thought of as disassembling something into its constituent pieces. Following this analogy, we define a \define{piece} of $c$ to be any term $c_{(1)}$ or $c_{(2)}$ which appears in the expansion $\Delta(c) = \sweedle{}{c_{(1)}\otimes c_{(2)}}$. We will generally suppress the notation $\nabla$ for product, writing $ab$ or $a\cdot b$ instead of $\nabla(a\otimes b)$. The sigma notation for coproducts was introduced by Heyneman and Sweedler~\cite{Heyneman_Sweedler} and is now widely used. We adopt a variant of sigma notation, writing the coproduct of $c$ as \[ \Delta(c) = \sweedle{\Delta}{c_{(1)}\otimes c_{(2)}}. \] If the coproduct is understood, we will generally suppress $\Delta$, writing \[ \Delta(c) = \sweedle{}{c_{(1)}\otimes c_{(2)}}. \] Using sigma notation, the coassociativity condition can be written as the equation \[ \sweedle{}{(c_{(1,1)}\otimes c_{(1,2)})\otimes c_{(2)}} = \sweedle{}{c_{(1)}\otimes (c_{(2,1)}\otimes c_{(2,2)})} = \sweedle{}{c_{(1)}\otimes c_{(2)}\otimes c_{(3)}}, \] while the counital condition can be written as the equation \[ c = \sweedle{}{\varepsilon(c_{(1)})c_{(2)}} = \sweedle{}{c_{(1)}\varepsilon(c_{(2)})}. \] A \define{bialgebra} is a module with compatible algebra and coalgebra structure maps. In other words, the algebra structure maps are coalgebra morphisms, while the coalgebra structure maps are algebra morphisms. If $B$ is a bialgebra with product $\nabla$ and coproduct $\Delta$, then $\Hom_k(B, B)$ is an algebra with the convolution product, defined by $f*g = \nabla\circ(f\otimes g)\circ\Delta$.
Using sigma notation, the convolution of linear maps $f$ and $g$ is written \[ (f*g)(b) = \sweedle{\Delta} f(b_{(1)})g(b_{(2)}). \] Observe that the composition $\eta\circ\varepsilon\colon B\to B$ of the unit and counit (if there is one) is the identity under convolution. A \define{Hopf algebra} is a bialgebra $H$ for which the identity map $\id\colon H\to H$ has a convolution inverse $S\colon H\to H$, that is, such that \[ (\eta\circ\varepsilon)(h) = \sweedle{\Delta}{S(h_{(1)})h_{(2)}} = \sweedle{\Delta}{h_{(1)}S(h_{(2)})} \] for all $h$ in $H$. The map $S$, which is always an antihomomorphism, is called the \define{antipode} of $H$. A \define{Newtonian coalgebra} is a module $N$ with both algebra and coalgebra structure maps such that the Leibniz condition \[ \Delta(u\cdot v) = \Delta(u)\cdot v + u\cdot\Delta(v) \] holds for all $u$ and $v$. In other words, the coproduct is a derivation over the product. Newtonian coalgebras were introduced by Joni and Rota~\cite{Joni_Rota}, who called them infinitesimal coalgebras. A Newtonian coalgebra can have a unit or a counit, but not both. Now we indicate the algebras of interest and briefly describe each. \begin{itemize} \item $\mathbf{G}$, the Newtonian coalgebra of graded posets; \item $\mathbf{A}$, the Newtonian coalgebra of $\av\bv$-polynomials; \item $\mathbf{G}^{\bullet}$, the Hopf algebra of graded posets; and \item $\mathbf{A}^{\bullet}$, the nonassociative bialgebra of $\av\bv$-polynomials. \end{itemize} \subsection{The Newtonian coalgebra of graded posets} Let $\mathbf{G}=k\mathcal{G}$ be the free module generated by $\mathcal{G}$. The \define{star product} of two posets $P$, $Q$ in $\mathcal{G}$, denoted by $P\star Q$, is the poset with ground set $(P\setminus\{\hat{1}_P\})\cup(Q\setminus\{\hat{0}_Q\})$ and order relation \[ x \le_{P\star Q} y \text{\ if and only if\ } \begin{cases} x \le_P y, & \text{if } x, y \in P, \\ x \le_Q y, & \text{if } x, y \in Q, \\ x \in P \text{\ and\ } y\in Q & \text{otherwise.}
\end{cases} \] The star product $\star$ makes $\mathbf{G}$ into an algebra with the Boolean algebra on a one-element set as the unit. Ehrenborg and Hetyei showed in unpublished work that $\mathbf{G}$ is a Newtonian coalgebra. The coproduct of a poset $P$ is defined by the formula \[ \Delta(P) = \sum_{\hat{0} < x < \hat{1}} [\hat{0}, x]\otimes[x,\hat{1}]. \] It is straightforward to verify the Leibniz condition: \[ \Delta(P \star Q) = \Delta(P) \star Q + P \star \Delta(Q). \] Since $\Delta$ is a derivation over the unital product $\star$, there is no counit. \subsection{The Newtonian coalgebra of ab-polynomials} The noncommutative polynomial algebra $\mathbf{A}=k\free{\av, \bv}$ also has the structure of a Newtonian coalgebra. The coproduct is defined on a monomial $u_1 \cdots u_n$ by the formula \[ \Delta(u_1 \cdots u_n) = \sum_{i=1}^n u_1\cdots u_{i-1} \otimes u_{i+1}\cdots u_n. \] The $\av\bv$-index $\Psi(P)$ of a graded poset $P$ is an invariant of the poset. Ehrenborg and Readdy~\cite{Ehrenborg_Readdy_c} showed that $\Psi$ can be viewed as a morphism $\Psi\colon \mathbf{G}\to\mathbf{A}$ of Newtonian coalgebras. Moreover, $\Psi$ is surjective. Stanley~\cite{Stanley_d} developed a recursive formula for the $\av\bv$-index of a poset which is amenable to computation and best expressed using coalgebraic notation. Define an algebra endomorphism $\kappa$ on $\mathbf{A}$ by setting $\kappa(\mathbf{a}) = \mathbf{a} - \mathbf{b}$ and $\kappa(\mathbf{b}) = 0$. Stanley proved that the $\av\bv$-index satisfies the recursive formula \begin{align*} \Psi(P) &= \kappa(\Psi(P)) + \sweedle{}{\kappa(\Psi(P_{(1)}))\cdot\mathbf{b}\cdot\Psi(P_{(2)})} \\ &= \kappa(\Psi(P)) + \sweedle{}{\Psi(P_{(1)})\cdot\mathbf{b}\cdot\kappa(\Psi(P_{(2)}))}.
\end{align*} Applying the surjectivity of $\Psi$, the same recursive formula holds for every $\av\bv$-polynomial: \[ u = \kappa(u) + \sweedle{}{\kappa(u_{(1)})\cdot\mathbf{b}\cdot u_{(2)}} = \kappa(u) + \sweedle{}{u_{(1)}\cdot\mathbf{b}\cdot\kappa(u_{(2)})}. \] This can also be proved inductively for $\av\bv$-polynomials, or be viewed as a consequence of the Ehrenborg--Readdy theorem that $\Psi$ is a morphism of Newtonian coalgebras. In any case, the $\kappa$ morphism is fundamental for the study of the $\av\bv$-index. The map $\kappa$ preserves $\mathbf{a}$ and kills $\mathbf{b}$. In a similar way we can define a map $\lambda$ which preserves $\mathbf{b}$ and kills $\mathbf{a}$. Let $\swapm{\,\cdot\,}\colon k\free{\av, \bv}\to k\free{\av, \bv}$ denote the map which swaps $\mathbf{a}$ and~$\mathbf{b}$. Define an algebra endomorphism $\lambda$ on $k\free{\av, \bv}$ by $\lambda(u) = \swapm{\kappa(\swapm{u})}$. Then for any $\av\bv$-polynomial $u$, \[ u = \lambda(u) + \sweedle{}{\lambda(u_{(1)})\cdot\mathbf{a}\cdot u_{(2)}} = \lambda(u) + \sweedle{}{u_{(1)}\cdot\mathbf{a}\cdot\lambda(u_{(2)})}. \] Note that the maps $\kappa$ and $\lambda$ act as near-counits in $\mathbf{A}$. The Newtonian coalgebra $\mathbf{A}$ has an important Newtonian subcoalgebra $\mathbf{C}=k\free{\cv, \dv}$, which is spanned by the monomials in $\mathbf{c} = \mathbf{a} + \mathbf{b}$ and $\mathbf{d} = \av\bv + \bv\av$. If a poset is Eulerian, its $\av\bv$-index lives in the subcoalgebra $\mathbf{C}$. In general, if $\Psi(P)$ is in $k\free{\cv, \dv}$, then we say that $P$ \define{has a $\cv\dv$-index}. The existence of the $\cv\dv$-index was conjectured by Fine.
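As a quick check that $\mathbf{C}$ is closed under the coproduct, compute it on the generators:
\[
\Delta(\mathbf{c}) = \Delta(\mathbf{a}) + \Delta(\mathbf{b}) = 2\,(1\otimes 1),
\qquad
\Delta(\mathbf{d}) = \Delta(\av\bv) + \Delta(\bv\av) = 1\otimes\mathbf{c} + \mathbf{c}\otimes 1,
\]
so $\Delta$ maps $\mathbf{C}$ into $\mathbf{C}\otimes\mathbf{C}$. The recursive formula can also be verified directly at $u = \mathbf{c}$: the pieces of $\mathbf{c}$ are $u_{(1)} = u_{(2)} = 1$ with multiplicity two, and
\[
\kappa(\mathbf{c}) + \sweedle{}{\kappa(u_{(1)})\cdot\mathbf{b}\cdot u_{(2)}}
= (\mathbf{a}-\mathbf{b}) + 2\,\mathbf{b} = \mathbf{c}.
\]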
Bayer and Klapper~\cite{Bayer_Klapper} showed that a poset has a $\cv\dv$-index if and only if it satisfies the generalized Dehn--Sommerville relations, while Stanley~\cite{Stanley_d} provided an alternative proof for Eulerian posets and established that the $\cv\dv$-index of a polytope has nonnegative coefficients. Several proofs of the existence of the $\cv\dv$-index have been given~\cite{Bayer_Klapper, Ehrenborg_k-Eulerian, Ehrenborg_Readdy_homology, Stanley_d}. \subsection{The Hopf algebra of graded posets} We will also need to make use of the Hopf algebra structure on graded posets. Let $\overline{\mathcal{G}} = \mathcal{G}\cup\{\bullet\}$, where $\bullet$ is the one-point poset. Then the module $\mathbf{G}^{\bullet} = k\overline{\mathcal{G}}$ has the structure of a Hopf algebra, with product coming from the Cartesian product and coproduct defined by \[ \Delta^*(P) = \sweedle{\Delta^*}P_{(1)}\otimes P_{(2)} = \sum_{x\in P} [\hat{0}, x]\otimes [x,\hat{1}]. \] Schmitt~\cite{Schmitt} derived an explicit formula for the antipode. Ehrenborg showed~\cite{Ehrenborg_Hopf} that the antipode plays the role of the M\"obius function, since if we define $\varphi\colon \mathbf{G}^{\bullet}\to k$ by $\varphi(P) = 1$ for each poset $P$, then $\mu(P) = \varphi(S(P))$. \subsection{The nonassociative bialgebra of ab-polynomials} In a similar way, we can extend the Newtonian coalgebra $\mathbf{A}$ to a nonassociative bialgebra $\mathbf{A}^{\bullet}=k\free{\av, \bv}\oplus k\xi$ via the formulas \[ \mathbf{a}\xi = \mathbf{b}\xi = \xi\mathbf{a} = \xi\mathbf{b} = 1 \text{\quad and\quad} \xi^2 = 0. \] Note that $\xi$ does not usually associate, so one must exercise care with its use. If~$\xi$ is flanked by two copies of $\mathbf{a}$ or $\mathbf{b}$, then it does associate, yielding the identities $\mathbf{a}\xi\mathbf{a} = \mathbf{a}$ and $\mathbf{b}\xi\mathbf{b}=\mathbf{b}$.
However, $(\mathbf{d}\xi)\mathbf{d} = \mathbf{c}\mathbf{d}$ while $\mathbf{d}(\xi\mathbf{d}) = \mathbf{d}\mathbf{c}$. This bialgebra was introduced by Ehrenborg and Fox~\cite{Ehrenborg_Fox}. The Stanley recursion for the $\av\bv$-index may be expressed more briefly in this bialgebra: \begin{align*} u &= \sweedle{\Delta^*}{\kappa(u_{(1)})\cdot\mathbf{b}\cdot u_{(2)}} = \sweedle{\Delta^*}{u_{(1)}\cdot\mathbf{b}\cdot\kappa(u_{(2)})} \\ &= \sweedle{\Delta^*}{\lambda(u_{(1)})\cdot\mathbf{a}\cdot u_{(2)}} = \sweedle{\Delta^*}{u_{(1)}\cdot\mathbf{a}\cdot\lambda(u_{(2)})}. \end{align*} Observe that $\kappa$ and $\lambda$ act as near-counits in $\mathbf{A}^{\bullet}$. Just as $\mathbf{A}$ has a subcoalgebra $\mathbf{C}$ of $\cv\dv$-polynomials, $\mathbf{A}^{\bullet}$ has a sub-bialgebra $\mathbf{C}^{\bullet}$ of $\cv\dv$-polynomials with $\xi$. \section{Binary operations on posets} Kalai~\cite{Kalai} constructed a basis for the flag $f$-vectors of polytopes, using polytopes obtained from simplices by repeatedly taking joins and products. He showed that the face lattice of a join of polytopes is the Cartesian product of the respective face lattices, and the face lattice of a Cartesian product is the diamond product of the face lattices. That is, \begin{align*} \lat{X\: \mbox{$\vee \hspace{-9.5pt} \bigcirc$} \: Y} &= \lat{X} \times \lat{Y} \\ \lat{X\times Y} &= \lat{X} \diamond \lat{Y}, \end{align*} where $\: \mbox{$\vee \hspace{-9.5pt} \bigcirc$} \:$ denotes the join operation and $\diamond$ denotes the diamond product. Recall that the \define{diamond product} (or lower truncated product) of posets $P$ and $Q$ is defined by \[ P \diamond Q = (P\setminus\{\hat{0}\}) \times (Q\setminus\{\hat{0}\}) \cup \{\hat{0}\}. \] There is also a \define{dual diamond product} (or upper truncated product), which we denote by $\diamond^*$: \[ P \diamond^* Q = (P\setminus\{\hat{1}\}) \times (Q\setminus\{\hat{1}\}) \cup \{\hat{1}\}.
\] The diamond product and dual diamond product are related by the identity \[ P \diamond^* Q = (P^* \diamond Q^*)^*, \] where $P^*$ denotes the dual of the poset $P$. The geometric operations of pyramid and prism arise from $\times$ and $\diamond$ on the poset level, since \[ \lat{\Pyr(P)} = \lat{P}\times B_1 \text{\ and\ } \lat{\Pri(P)} = \lat{P}\diamond B_2, \] where $B_i$ denotes the Boolean algebra of rank $i$. Since the $\av\bv$-index encodes the flag $f$-vector, it is of interest to study the effects of~$\times$ and $\diamond$ on the $\av\bv$-index. Ehrenborg~\cite{Ehrenborg_Hopf} used quasisymmetric functions to show that $\Psi(P\times Q)$ is a function of $\Psi(P)$ and $\Psi(Q)$. Ehrenborg and Readdy~\cite{Ehrenborg_Readdy_c} derived recursive formulas for $\Psi(P\times Q)$ which were improved by Ehrenborg and Fox~\cite{Ehrenborg_Fox}. In preparation for the study of the manifold product, we present a completely coalgebraic derivation of the recursive formulas for $\Psi(P\times Q)$ and $\Psi(P\diamond Q)$. We need two basic facts. First, we need the Stanley recursion discussed above. Second, we need to know the coproduct of a Cartesian product of posets. Since the Cartesian product is the product in the Hopf algebra of graded posets, \[ \Delta^*(P\times Q) = \Delta^*(P)\times\Delta^*(Q) =\sweedle{\Delta^*}{ (P_{(1)}\times Q_{(1)})\otimes (P_{(2)}\times Q_{(2)}) }. \] Hence the coproduct of a Cartesian product is \[ \Delta^*(u \times v) = \sweedle{\Delta^*}{(u_{(1)}\times v_{(1)}) \otimes (u_{(2)}\times v_{(2)})}.
\] Using Stanley's recursion for the $\av\bv$-index, we obtain the following recursive formula for the mixing operator $\times$ applied to the $\av\bv$-polynomials $u$ and $v$: \begin{align*} u \times v &= \sweedle{\Delta^*}{\kappa(u_{(1)}\times v_{(1)})\cdot\mathbf{b}\cdot (u_{(2)}\times v_{(2)})} \\ &= \kappa(u\times v) + \kappa(u)\cdot\mathbf{b}\cdot v + \kappa(v)\cdot\mathbf{b}\cdot u \\ &\phantom{= } + \sweedle{}{\kappa(u_{(1)})\cdot\mathbf{b}\cdot(u_{(2)}\times v)} + \sweedle{}{\kappa(v_{(1)})\cdot\mathbf{b}\cdot(u\times v_{(2)})} \\ &\phantom{= } + \sweedle{}{\kappa(u\times v_{(1)})\cdot \mathbf{b}\cdot v_{(2)}} + \sweedle{}{\kappa(u_{(1)}\times v)\cdot\mathbf{b}\cdot u_{(2)}} \\ &\phantom{= } + \sweedle{}{\kappa(u_{(1)}\times v_{(1)}) \cdot\mathbf{b}\cdot (u_{(2)}\times v_{(2)})}. \end{align*} \subsection{Computing the cd-index of a Cartesian product} For any graded poset $P$, the coefficient of the pure $\mathbf{a}$ term of $\Psi(P)$ is always $1$. Hence $\kappa(\Psi(P))$ depends only on the rank of $P$, that is, $\kappa(\Psi(P)) = (\mathbf{a} - \mathbf{b})^{\rho(P) - 1}$. If $P$ and $Q$ are graded posets, their Cartesian product has rank $\rho(P) + \rho(Q)$. So \begin{align*} \kappa(\Psi(P\times Q)) &= \kappa(\Psi(P)\cdot\mathbf{a}\cdot\Psi(Q)) = \kappa(\Psi(P))\cdot(\mathbf{a}-\mathbf{b})\cdot\kappa(\Psi(Q)). \end{align*} Hence for any $\av\bv$-polynomials $u$ and $v$, \begin{align*} \kappa(u\times v) &= \kappa(u)\cdot\kappa(v)\cdot(\mathbf{a}-\mathbf{b}). \end{align*} Analogously, \begin{align*} \lambda(u\times v) &= \lambda(u)\cdot\lambda(v)\cdot(\mathbf{b}-\mathbf{a}). \end{align*} We use these facts to prove the following lemma.
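As a sanity check, the formula for $\kappa$ can be verified in the smallest cases. Since $1\times 1 = \mathbf{c}$,
\[
\kappa(1\times 1) = \kappa(\mathbf{c}) = \mathbf{a}-\mathbf{b}
= \kappa(1)\cdot\kappa(1)\cdot(\mathbf{a}-\mathbf{b}),
\]
and, using the value $1\times\mathbf{a} = \mathbf{c}^2-\mathbf{b}^2$ computed in the proof of Lemma~\ref{lemma:timesabrecursion} below,
\[
\kappa(1\times\mathbf{a}) = \kappa(\mathbf{c})^2 - \kappa(\mathbf{b})^2
= (\mathbf{a}-\mathbf{b})^2
= \kappa(1)\cdot\kappa(\mathbf{a})\cdot(\mathbf{a}-\mathbf{b}).
\]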
\begin{lemma}[Ehrenborg--Readdy~{\cite[Proposition 4.2]{Ehrenborg_Readdy_c}}]\label{lemma:pyramid} For any $\av\bv$-polynomial~$u$, \begin{align} u\times 1 &= \sweedle{\Delta^*}{u_{(1)}\cdot\bv\av\cdot u_{(2)}} = \mathbf{a}\cdot u + u\cdot\mathbf{b} + \sweedle{}{u_{(1)}\cdot\bv\av\cdot u_{(2)}}\label{eqn:up1ba} \\ &= \sweedle{\Delta^*}{u_{(1)}\cdot\av\bv\cdot u_{(2)}} = \mathbf{b}\cdot u + u\cdot\mathbf{a} + \sweedle{}{u_{(1)}\cdot\av\bv\cdot u_{(2)}}.\label{eqn:up1ab} \end{align} Since the formula for $u\times 1$ is invariant under the action of the involution $\swapm{\,\cdot\,}$ which swaps $\mathbf{a}$ and $\mathbf{b}$, if $u$ is a $\cv\dv$-polynomial, then so is $u\times 1$. \end{lemma} \begin{proof} Since $1$ is the $\av\bv$-index of the Boolean algebra $B_1$, the expression $1\times 1$ is the $\av\bv$-index of the product $B_1\times B_1 = B_2$, that is, $1\times 1 = \Psi(B_2) = \mathbf{c}$. Equations~(\ref{eqn:up1ba}) and~(\ref{eqn:up1ab}) both hold when $u = 1$. To complete the proof, assume for induction that Equation~(\ref{eqn:up1ba}) holds for all pieces of $u$, that is, for any polynomial $u_{(1)}$ or $u_{(2)}$ appearing in the coproduct of $u$.
Since $\Delta^*(1) = 1\otimes\xi + \xi\otimes 1$, \begin{align*} u\times 1 &= \sweedle{\Delta^*}{\kappa(u_{(1)}\times 1)\cdot\mathbf{b}\cdot u_{(2)}} + \sweedle{\Delta^*}{\kappa(u_{(1)})\cdot\mathbf{b}\cdot (u_{(2)}\times 1)} \\ &= \mathbf{b}\cdot u + \kappa(u\times 1) + \kappa(u)\cdot\mathbf{b} \\ &\phantom{=} + \sweedle{\Delta}{(\mathbf{a}-\mathbf{b})\cdot\kappa(u_{(1)})\cdot\mathbf{b}\cdot u_{(2)}} + \sweedle{\Delta}{\kappa(u_{(1)})\cdot\mathbf{b}\cdot u_{(2)}\cdot\mathbf{b}} \\ &\phantom{=} + \sweedle{\Delta}{\kappa(u_{(1)})\cdot\bv\av\cdot u_{(2)}} + \sweedle{\Delta}{\kappa(u_{(1)})\cdot\mathbf{b}\cdot u_{(2)}\cdot\bv\av\cdot u_{(3)}}. \end{align*} Use the identity $\kappa(u\times 1) = \kappa(u)\cdot (\mathbf{a} - \mathbf{b})$ to combine two of the isolated terms, and apply the induction hypothesis to expand the second summation. The Stanley recursion permits the terms above to be expressed in a much simpler way. \begin{align*} u\times 1 &= \mathbf{b}\cdot u + \kappa(u)\cdot\mathbf{a} \\ &\phantom{=} + (\mathbf{a} - \mathbf{b})\cdot(u - \kappa(u)) + (u - \kappa(u))\cdot\mathbf{b} \\ &\phantom{=} + \sweedle{\Delta}{u_{(1)}\cdot\bv\av\cdot u_{(2)}} \\ &= \mathbf{a}\cdot u + u\cdot\mathbf{b} + \sweedle{\Delta}{u_{(1)}\cdot\bv\av\cdot u_{(2)}}. \end{align*} Equation~(\ref{eqn:up1ab}) could be proved by imitating the argument just given, replacing $\kappa$ with $\lambda$ and making other appropriate changes. However, it is more direct to apply the fact that the star involution is a Newtonian coalgebra anti-isomorphism. Hence \begin{align*} u\times 1 &= (u^* \times 1^*)^* \\ &= \left[ \mathbf{a}\cdot u^* + u^*\cdot\mathbf{b} + \sweedle{\Delta}{u_{(2)}^*\cdot\bv\av\cdot u_{(1)}^*} \right]^* \\ &= \mathbf{b}\cdot u + u\cdot\mathbf{a} + \sweedle{\Delta}{u_{(1)}\cdot\av\bv\cdot u_{(2)}}, \end{align*} which completes the proof.
\end{proof} \begin{lemma}[Ehrenborg--Fox~{\cite[Proposition 5.8]{Ehrenborg_Fox}}]\label{lemma:timesabrecursion} For any $\av\bv$-polynomials $u$ and $v$, the identities \begin{align} u\times (v\cdot\mathbf{a}) &= \sweedle{\Delta^*}{(u_{(1)}\times v)\cdot\av\bv\cdot u_{(2)}} \notag \\ &= v\cdot\av\bv\cdot u + (u\times v)\cdot\mathbf{a} + \sweedle{}{(u_{(1)}\times v)\cdot\av\bv\cdot u_{(2)}} \label{eqn:upva} \\ u\times (v\cdot\mathbf{b}) &= \sweedle{\Delta^*}{(u_{(1)}\times v)\cdot\bv\av\cdot u_{(2)}} \notag \\ &= v\cdot\bv\av\cdot u + (u\times v)\cdot\mathbf{b} + \sweedle{}{(u_{(1)}\times v)\cdot\bv\av\cdot u_{(2)}} \label{eqn:upvb} \end{align} hold. \end{lemma} \begin{proof} The proof is a double induction on the lengths of $u$ and $v$. By explicitly constructing appropriate posets, one can compute that \[ 1\times\mathbf{a} = \mathbf{c}^2 - \mathbf{b}^2 \text{\quad and\quad} 1\times\mathbf{b} = (1\times\mathbf{c}) -(1\times\mathbf{a}) = (\mathbf{c}^2+\mathbf{d}) - (\mathbf{c}^2 - \mathbf{b}^2) = \mathbf{c}^2 - \mathbf{a}^2. \] Thus Equations~(\ref{eqn:upva}) and~(\ref{eqn:upvb}) are both satisfied if $u = v = 1$. Now assume for induction that Equation~(\ref{eqn:upvb}) holds for $v = 1$ and any piece of $u$. Expand $u\times\mathbf{b}$ via the general recursion for products, keeping in mind that $\kappa(w\times\mathbf{b}) = 0$ for any $w$. \begin{align*} u\times\mathbf{b} &= \sweedle{\Delta^*}{\kappa(u_{(1)}\times 1)\cdot\mathbf{b}\cdot(u_{(2)}\times 1)} + \sweedle{\Delta^*}{\kappa(u_{(1)})\cdot\mathbf{b}\cdot(u_{(2)}\times\mathbf{b})}. \end{align*} Apply Lemma~\ref{lemma:pyramid} to the first summation and the induction hypothesis to the second summation.
\begin{align*} u\times\mathbf{b} &= \sweedle{\Delta^*}{\kappa(u_{(1)}\times 1)\cdot\mathbf{b}\cdot(u_{(2)}\times \xi)\cdot\bv\av\cdot u_{(3)}} \\ &\phantom{=} + \sweedle{\Delta^*}{\kappa(u_{(1)}\times\xi)\cdot\mathbf{b}\cdot(u_{(2)}\times 1)\cdot\bv\av\cdot u_{(3)}}. \end{align*} The part of the above expression preceding $\bv\av$ is recognizable as an expansion of the product $u_{(1)}\times 1$. \[ u\times\mathbf{b} = \sweedle{\Delta^*}{(u_{(1)}\times 1)\cdot\bv\av\cdot u_{(2)}}, \] which is what we needed to show. To complete the double induction, assume that Equation~(\ref{eqn:upvb}) holds for any piece of $u$ or $v$. Since $\Delta^*(v\cdot\mathbf{b}) = \Delta^*(v)\cdot\mathbf{b} + v\cdot\mathbf{b}\otimes\xi$, \begin{align*} u\times(v\cdot\mathbf{b}) &= \sweedle{\Delta^*}{\kappa(u_{(1)}\times v_{(1)})\cdot\mathbf{b}\cdot(u_{(2)}\times (v_{(2)}\cdot\mathbf{b}))} \\ &\phantom{=} + \sweedle{\Delta^*}{\kappa(u_{(1)}\times (v\cdot\mathbf{b}))\cdot\mathbf{b}\cdot(u_{(2)}\times\xi)}. \end{align*} The second summation vanishes because $\kappa$ kills $\mathbf{b}$. Apply the induction hypothesis to expand the first summation. As in the case $v=1$, this results in a recognizable expansion of a product. No parentheses are needed below because $u_{(2)}\times v_{(2)}$, the only expression which could be $\xi$, is flanked by copies of~$\mathbf{b}$. \begin{align*} u\times (v\cdot\mathbf{b}) &= \sweedle{\Delta^*}{\kappa(u_{(1)}\times v_{(1)})\cdot\mathbf{b}\cdot(u_{(2)}\times v_{(2)})\cdot\bv\av\cdot u_{(3)}} \\ &= \sweedle{\Delta^*}{(u_{(1)}\times v)\cdot\bv\av\cdot u_{(2)}}. \end{align*} This completes the proof of Equation~(\ref{eqn:upvb}). Equation~(\ref{eqn:upva}) can be proved in a similar way, replacing $\kappa$ with $\lambda$ and making other appropriate changes.
\end{proof} In the previous lemmas identities appeared in pairs differing only by the action of the involution $\swapm{\,\cdot\,}$. This suggests that $\times$ respects the action of $\swapm{\,\cdot\,}$. This is a consequence of the identities proved in Lemma~\ref{lemma:timesabrecursion}, but it is more fundamentally a consequence of the existence of the paired recursive formulas \[ u = \sweedle{\Delta^*}{\kappa(u_{(1)})\cdot\mathbf{b}\cdot u_{(2)}} = \sweedle{\Delta^*}{\lambda(u_{(1)})\cdot\mathbf{a}\cdot u_{(2)}}. \] Ehrenborg and Fox proved that $\times$ respects the involution $\swapm{\,\cdot\,}$. We offer the following alternative proof. \begin{proposition}[Ehrenborg--Fox~{\cite[Lemma 5.5]{Ehrenborg_Fox}}]\label{prop:timesrespectsswapm} For any $\av\bv$-polynomials $u$ and $v$, the identity \[ \swapm{u\times v} = \swapm{u}\times\swapm{v} \] holds. \end{proposition} \begin{proof} If $u = v = 1$, there is nothing to prove. Suppose the claim holds for pieces of~$u$ and $v$. By the recursive formula for the product $u\times v$, \begin{align*} \swapm{u\times v} &= \sweedle{\Delta^*}{\swapm{\kappa(u_{(1)}\times v_{(1)})}\cdot\mathbf{a}\cdot\swapm{u_{(2)}\times v_{(2)}}} \\ &= \sweedle{\Delta^*}{\lambda(\swapm{u_{(1)}\times v_{(1)}})\cdot\mathbf{a}\cdot\swapm{u_{(2)}\times v_{(2)}}}. \end{align*} Now apply the induction hypothesis and the fact that $\swapm{\,\cdot\,}$ is a coalgebra morphism. \begin{align*} \swapm{u\times v} &= \sweedle{\Delta^*}{\lambda(\swapm{u_{(1)}}\times\swapm{v_{(1)}})\cdot\mathbf{a}\cdot(\swapm{u_{(2)}}\times\swapm{v_{(2)}})} \\ &= \swapm{u}\times\swapm{v}. \end{align*} This completes the proof.
\end{proof} \begin{corollary}[Ehrenborg--Fox~{\cite[Theorem 5.1]{Ehrenborg_Fox}}]\label{cor:timescdrecursion} For any $\cv\dv$-polynomials $u$ and~$v$, the identities \begin{align*} u\times (v\cdot\mathbf{c}) &= \sweedle{\Delta^*}{(u_{(1)}\times v)\cdot\mathbf{d}\cdot u_{(2)}} \\ u\times (v\cdot\mathbf{d}) &= \sweedle{\Delta^*}{(u_{(1)}\times v)\cdot\mathbf{d}\cdot \Pyr(u_{(2)})} \end{align*} hold. \end{corollary} \begin{proof} Expand $\mathbf{c}$ and $\mathbf{d}$, then apply Lemma~\ref{lemma:timesabrecursion}. Thus \begin{align*} u\times (v\cdot\mathbf{c}) &= u\times (v\cdot(\mathbf{a}+\mathbf{b})) \\ &= \sweedle{\Delta^*}{ (u_{(1)}\times v)\cdot(\av\bv + \bv\av)\cdot u_{(2)}} \\ &= \sweedle{\Delta^*}{(u_{(1)}\times v)\cdot\mathbf{d}\cdot u_{(2)}}. \end{align*} To compute $u\times (v\cdot\mathbf{d})$, the lemma must be invoked twice. We have \begin{align*} u\times (v\cdot\av\bv) &= \sweedle{\Delta^*}{(u_{(1)}\times(v\cdot\mathbf{a}))\cdot\bv\av\cdot u_{(2)}} \\ &= \sweedle{\Delta^*}{(u_{(1)}\times v)\cdot\av\bv\cdot u_{(2)}\cdot\bv\av\cdot u_{(3)}}. \end{align*} By Lemma~\ref{lemma:pyramid}, we can collapse $u_{(2)}\cdot\bv\av\cdot u_{(3)}$ into $\Pyr(u_{(2)})$. Similarly, \[ u \times (v\cdot\bv\av) = \sweedle{\Delta^*}{(u_{(1)}\times v)\cdot\bv\av\cdot\Pyr(u_{(2)})}, \] from which the recursive formula for $u\times (v\cdot\mathbf{d})$ follows. \end{proof} \subsection{Computing the cd-index of a diamond product} Just as with the Cartesian product, the algebra maps $\kappa$ and $\lambda$ interact nicely with the $\av\bv$-index of a diamond product of posets. If $P$ is a graded poset, then $\kappa(\Psi(P))$ is given by $\kappa(\Psi(P)) = (\mathbf{a} - \mathbf{b})^{\rho(P)-1}$. If $P$ and $Q$ are graded posets, their diamond product has rank $\rho(P) + \rho(Q) - 1$.
Thus \begin{align*} \kappa(\Psi(P\diamond Q)) &= \kappa(\Psi(P)\cdot\Psi(Q)) \\ &= \kappa(\Psi(P))\cdot\kappa(\Psi(Q)). \end{align*} Hence for any $\av\bv$-polynomials $u$ and $v$, \begin{align*} \kappa(u\diamond v) &= \kappa(u)\cdot\kappa(v). \end{align*} Analogously, \begin{align*} \lambda(u\diamond v) &= \lambda(u)\cdot\lambda(v). \end{align*} These formulas describe the situation in the algebra $\mathbf{A}$. For simplicity, we require that the above formulas hold in $\mathbf{A}^{\bullet}$, even if $u$ or $v$ is $\xi$, subject to the constraint that~$\kappa(\xi) = 0$. In particular, $\kappa(u\diamond\xi) = 0$ for any $u$, which leads us to set $u \diamond \xi = 0$ for any $u$. This may conflict with the intuition that \[ P \diamond \bullet = (P \setminus\{\hat{0}\})\times \emptyset\cup \{\hat{0}\} = \emptyset\cup\{\hat{0}\} = \bullet, \] but has the advantage of maintaining homogeneity of degree in the recursive formulas that follow. Since the $\av\bv$-index of a poset is always homogeneous in degree, we accept failure of intuition in exchange for correctness of formulas. We summarize the basic properties of the diamond product with the following result from Ehrenborg and Fox. \begin{proposition}[Ehrenborg--Fox~{\cite[Corollary 6.3]{Ehrenborg_Fox}}]\label{cor:diamondfacts} The diamond product $\diamond$ makes $\mathbf{A}$ into an abelian monoid with unit $1$ and makes $\mathbf{A}^{\bullet}$ into a commutative semigroup satisfying the rules \begin{align*} u\diamond 1 &= u \text{\ for any $u$ in $\mathbf{A}$} \\ u\diamond\xi &= 0 \text{\ for any $u$ in $\mathbf{A}^{\bullet}$}. \end{align*} \end{proposition} \noindent The diamond product obeys the coalgebraic recursive formula \[ u \diamond v = \sweedle{\Delta^*}{\kappa(u_{(1)}\diamond v_{(1)})\cdot\mathbf{b}\cdot(u_{(2)}\times v_{(2)})} \] as well as the analogous formulas obtained by moving $\kappa$ or replacing $\kappa$ and $\mathbf{b}$ with~$\lambda$ and $\mathbf{a}$.
The following lemma is the diamond version of Lemma~\ref{lemma:timesabrecursion}. \begin{lemma}\label{lemma:diamondabrecursion} For any $\av\bv$-polynomials $u$ and $v$, the identities \begin{align} u\diamond (v\cdot\mathbf{a}) &= \sweedle{\Delta^*}{(u_{(1)}\diamond v)\cdot\av\bv\cdot u_{(2)}} \notag \\ &= (u\diamond v)\cdot\mathbf{a} + \sweedle{}{(u_{(1)}\diamond v)\cdot\av\bv\cdot u_{(2)}}\label{eqn:udva} \\ u\diamond (v\cdot\mathbf{b}) &= \sweedle{\Delta^*}{(u_{(1)}\diamond v)\cdot\bv\av\cdot u_{(2)}} \notag \\ &= (u\diamond v)\cdot\mathbf{b} + \sweedle{}{(u_{(1)}\diamond v)\cdot\bv\av\cdot u_{(2)}}\label{eqn:udvb} \end{align} hold. \end{lemma} \begin{proof} This lemma is essentially a corollary of Lemma~\ref{lemma:timesabrecursion}. Here we demonstrate Equation~(\ref{eqn:udvb}). Since $\Delta^*(v\cdot\mathbf{b}) = \Delta^*(v)\cdot\mathbf{b} + v\cdot\mathbf{b}\otimes\xi$, \begin{align*} u\diamond (v\cdot\mathbf{b}) &= \sweedle{\Delta^*}{\kappa(u_{(1)}\diamond v_{(1)})\cdot\mathbf{b}\cdot(u_{(2)}\times (v_{(2)}\cdot\mathbf{b}))} \\ &\phantom{=} + \sweedle{\Delta^*}{\kappa(u_{(1)}\diamond (v\cdot\mathbf{b}))\cdot\mathbf{b}\cdot(u_{(2)}\times \xi)}. \end{align*} Expand the first summation using the recursion for $\times$, and notice that the second summation vanishes. Finally, recognize the left factor of the expression as an expansion of the diamond product. \begin{align*} u\diamond (v\cdot\mathbf{b}) &= \sweedle{\Delta^*}{\kappa(u_{(1)}\diamond v_{(1)})\cdot\mathbf{b}\cdot(u_{(2)}\times v_{(2)})\cdot\bv\av\cdot u_{(3)}} \\ &= \sweedle{\Delta^*}{(u_{(1)}\diamond v)\cdot\bv\av\cdot u_{(2)}}. \end{align*} The proof of Equation~(\ref{eqn:udva}) is similar. \end{proof} The diamond product also respects the involution $\swapm{\,\cdot\,}$.
Combining the recursive formulas for $u\diamond(v\cdot\mathbf{a})$ and $u\diamond(v\cdot\mathbf{b})$ produces recursive formulas for $u\diamond(v\cdot\mathbf{c})$ and~$u\diamond (v\cdot\mathbf{d})$. \begin{corollary}[Ehrenborg--Fox~{\cite[Theorem 7.1]{Ehrenborg_Fox}}]\label{cor:diamondcdrecursion} For any $\cv\dv$-polynomials $u$ and~$v$, the identities \begin{align} u\diamond (v\cdot\mathbf{c}) &= \sweedle{\Delta^*}{(u_{(1)}\diamond v)\cdot\mathbf{d}\cdot u_{(2)}} \notag \\ &= (u\diamond v)\cdot\mathbf{c} + \sweedle{\Delta}{(u_{(1)}\diamond v)\cdot\mathbf{d}\cdot u_{(2)}} \label{eqn:udvc} \\ u\diamond(v\cdot\mathbf{d}) &= \sweedle{\Delta^*}{(u_{(1)}\diamond v)\cdot\mathbf{d}\cdot \Pyr(u_{(2)})} \notag \\ &= (u\diamond v)\cdot\mathbf{d} + \sweedle{\Delta}{(u_{(1)}\diamond v)\cdot\mathbf{d}\cdot \Pyr(u_{(2)})} \label{eqn:udvd} \end{align} hold. \end{corollary} \section{Lattice-path interpretation of mixing operators} Equation~(\ref{eqn:udvc}) can be used to give an explicit recursive formula for $\mathbf{c}^p\diamond\mathbf{c}^q$. In this section we display this formula and show how to interpret its coefficients as counting weighted lattice paths. First we define the algebra of lattice paths. Consider the noncommutative polynomial algebra on the generators $\mathbf{D}$, $\mathbf{R}$, and $\mathbf{U}$, where $\mathbf{D}$ has degree~2 and $\mathbf{R}$ and $\mathbf{U}$ have degree~1. The generators correspond to the steps \begin{align*} \text{\textbf{D}iagonal} &= (1,1) \\ \text{\textbf{R}ight} &= (1,0) \\ \text{\textbf{U}p} &= (0,1). \end{align*} This algebra admits a bigrading into homogeneous parts indexed by $p$ and $q$ and generated by monomials with $p$ occurrences of $\mathbf{D}$ or $\mathbf{R}$ and $q$ occurrences of $\mathbf{D}$ or $\mathbf{U}$. Note that $\mathbf{D}$, which represents a diagonal step, counts toward both $p$ and $q$.
The~$(p,q)$ summand of this algebra represents lattice paths in $\mathbb{N}\times\mathbb{N}$ from the origin to $(p,q)$ which use only $\mathbf{D}$, $\mathbf{R}$, and $\mathbf{U}$ steps. To avoid overcounting in what follows, we need to restrict to a submodule. Let~$\Lambda$ denote the submodule generated by monomials which do not contain $\mathbf{UR}$ as a contiguous subword. It inherits a grading $\Lambda = \bigoplus_{p,q} \Lambda_{p,q}$ from the grading of the polynomial algebra. \begin{example} By direct computation, one can verify that \begin{align*} \mathbf{c}^3\diamond \mathbf{c}^2 &= \mathbf{c}^5 + 2\mathbf{c}^3\mathbf{d}+ 4\mathbf{c}^2\mathbf{d}\mathbf{c}+ 4\mathbf{c}\mathbf{d}\mathbf{c}^2 + 2\mathbf{d}\mathbf{c}^3 + 4\mathbf{c}\mathbf{d}^2 + 4\mathbf{d}\mathbf{c}\mathbf{d} + 4\mathbf{d}^2\mathbf{c}. \end{align*} Compare this polynomial with Figure~\ref{fig:pathsample}, which displays each $\mathbf{D}\mathbf{R}\mathbf{U}$-word in $\Lambda_{3,2}$ together with its associated path. The coefficients of the terms in $\mathbf{c}^3\diamond\mathbf{c}^2$ can be obtained by weighting $\mathbf{R}$ and $\mathbf{U}$ steps by $\mathbf{c}$ and weighting $\mathbf{D}$ steps by $2\mathbf{d}$. Note that the pair of words $\mathbf{R}\mathbf{R}\mathbf{D}\mathbf{U}$ and $\mathbf{R}\mathbf{U}\mathbf{D}\mathbf{R}$ contribute to the same term of $\mathbf{c}^3\diamond\mathbf{c}^2$, as do the pair $\mathbf{R}\mathbf{D}\mathbf{R}\mathbf{U}$ and $\mathbf{U}\mathbf{D}\mathbf{R}\mathbf{R}$. Hence $\mathbf{c}^3\diamond\mathbf{c}^2$ has only eight terms, even though there are ten $\mathbf{D}\mathbf{R}\mathbf{U}$-words in $\Lambda_{3,2}$. \end{example} \begin{figure} \caption{Paths in $\Lambda_{3,2}$.} \label{fig:pathsample} \end{figure} The following proposition shows that this situation is general.
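The example can also be checked mechanically. The sketch below (the function names are ours, not from the text) enumerates the $\mathbf{UR}$-avoiding paths in $\Lambda_{3,2}$ and accumulates the weights $\mathbf{D}\mapsto 2\mathbf{d}$ and $\mathbf{R},\mathbf{U}\mapsto\mathbf{c}$:

```python
def paths(p, q):
    """All UR-avoiding lattice paths from (0, 0) to (p, q), written as
    words over the steps D = (1, 1), R = (1, 0), U = (0, 1)."""
    found = []
    def extend(word, x, y):
        if (x, y) == (p, q):
            found.append(word)
            return
        for step, dx, dy in (("D", 1, 1), ("R", 1, 0), ("U", 0, 1)):
            # forbid the contiguous subword UR
            if x + dx <= p and y + dy <= q and not (word.endswith("U") and step == "R"):
                extend(word + step, x + dx, y + dy)
    extend("", 0, 0)
    return found

def diamond_cc(p, q):
    """c^p diamond c^q as a dict from cd-words to coefficients,
    obtained by weighting D -> 2d and R, U -> c."""
    poly = {}
    for word in paths(p, q):
        cd = "".join("d" if step == "D" else "c" for step in word)
        poly[cd] = poly.get(cd, 0) + 2 ** word.count("D")
    return poly
```

For $(p,q)=(3,2)$ this recovers the eight terms of the example from the ten words of $\Lambda_{3,2}$.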
\begin{proposition} Let $\wt\colon\Lambda\to\mathbf{C}$ be the linear map determined by \[ \wt(\mathbf{D}) = 2\mathbf{d}\text{\quad and\quad} \wt(\mathbf{R}) = \wt(\mathbf{U}) = \mathbf{c}. \] Then for any natural numbers $p$ and $q$, the $\cv\dv$-index $\mathbf{c}^p\diamond\mathbf{c}^q$ is given by the formula \[ \mathbf{c}^p\diamond\mathbf{c}^q = \sum_{P\in\Lambda_{p,q}} \wt(P). \] \end{proposition} \begin{proof} The proof proceeds by induction on $p$ and $q$. For $p=q=0$ there is nothing to show. Suppose the weighted lattice path interpretation is correct for any pair $(p', q')$ which is at most $(p,q)$ in each coordinate and strictly smaller in at least one. As a consequence of Corollary~\ref{cor:diamondcdrecursion}, \begin{align*} \mathbf{c}^p\diamond(\mathbf{c}^{q-1}\cdot\mathbf{c}) &= (\mathbf{c}^p\diamond\mathbf{c}^{q-1})\cdot\mathbf{c} \\ &\vphantom{=} + (\mathbf{c}^{p-1}\diamond\mathbf{c}^{q-1})\cdot 2\mathbf{d} \\ &\vphantom{=} + \sum_{k=0}^{p-2} (\mathbf{c}^k\diamond\mathbf{c}^{q-1})\cdot 2\mathbf{d}\cdot \mathbf{c}^{p-1-k}. \end{align*} Applying the induction assumption, the first summand is \[ (\mathbf{c}^p\diamond\mathbf{c}^{q-1})\cdot\mathbf{c} = \sum_{P\in\Lambda_{p,q-1}}\wt(P\cdot\mathbf{U}), \] and the second summand is \[ (\mathbf{c}^{p-1}\diamond\mathbf{c}^{q-1})\cdot 2\mathbf{d} = \sum_{P\in\Lambda_{p-1,q-1}}\wt(P\cdot\mathbf{D}). \] The remaining summation corresponds to lattice paths to which an $\mathbf{R}$ can be appended, that is, \[ \sum_{k=0}^{p-2} (\mathbf{c}^k\diamond\mathbf{c}^{q-1})\cdot 2\mathbf{d}\cdot \mathbf{c}^{p-1-k} = \sum_{\substack{P\in\Lambda_{p-1,q} \\ \text{$P$ does not end in $\mathbf{U}$}}} \wt(P\cdot\mathbf{R}).
\] But the module $\Lambda_{p,q}$ decomposes as \begin{align*} \Lambda_{p, q} &= \{ P \cdot \mathbf{D} \colon P \in \Lambda_{p-1,q-1} \} \\ &\vphantom{=} \oplus \{ P \cdot \mathbf{U} \colon P \in \Lambda_{p,q-1} \} \\ &\vphantom{=} \oplus \{ P \cdot \mathbf{R} \colon P \in \Lambda_{p-1,q} \text{\ and $P$ does not end in $\mathbf{U}$\ }\}. \end{align*} This completes the proof. \end{proof} As an application, we prove that the $\cv\dv$-polynomial $\mathbf{c}^p\diamond\mathbf{c}^q$ is always symmetric. \begin{proposition}\label{prop:diamondlatpath} For any natural numbers $p$ and $q$, \[ (\mathbf{c}^p\diamond\mathbf{c}^q)^* = \mathbf{c}^p\diamond\mathbf{c}^q. \] \end{proposition} \begin{proof} We prove the claim by constructing an involution on the lattice paths in $\Lambda_{p,q}$. Suppose $\alpha = \alpha_1\dots\alpha_n$ is a $\mathbf{UR}$-avoiding path from $(0,0)$ to $(p,q)$. Following the steps of $\alpha$ in reverse order yields the path $\alpha^* = \alpha_n\dots\alpha_1$. Now, $\alpha^*$ is a path from $(0,0)$ to $(p,q)$, but it could contain $\mathbf{UR}$ as a contiguous subword. Adjust $\alpha^*$ to $\varphi(\alpha)$ by replacing each instance of $\mathbf{U}^k\mathbf{R}^{\ell}$ with $\mathbf{R}^{\ell}\mathbf{U}^k$. In other words, we push in any ``bumps'' we find in the path. The map $\alpha\mapsto\varphi(\alpha)$ is an involution, and since $\mathbf{U}$ and~$\mathbf{R}$ have the same weight, \[ \wt(\varphi(\alpha)) = \wt(\alpha)^*. \] This completes the proof. \end{proof} Since we can interpret the coefficients of $\mathbf{c}^p\diamond\mathbf{c}^q$ as counting lattice paths, it is natural to ask whether we can interpret the coefficients of $\mathbf{c}^p\times\mathbf{c}^q$ in a similar way.
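Before turning to the Cartesian product, the symmetry just proved admits a quick numerical check: for small $p$ and $q$ the weighted path sum is invariant under reading every $\cv\dv$-word backwards. A self-contained sketch (the names are ours):

```python
def paths(p, q):
    """UR-avoiding lattice paths from (0, 0) to (p, q) over D, R, U."""
    found = []
    def extend(word, x, y):
        if (x, y) == (p, q):
            found.append(word)
            return
        for step, dx, dy in (("D", 1, 1), ("R", 1, 0), ("U", 0, 1)):
            if x + dx <= p and y + dy <= q and not (word.endswith("U") and step == "R"):
                extend(word + step, x + dx, y + dy)
    extend("", 0, 0)
    return found

def diamond_cc(p, q):
    """c^p diamond c^q via the weighting D -> 2d, R -> c, U -> c."""
    poly = {}
    for word in paths(p, q):
        cd = "".join("d" if step == "D" else "c" for step in word)
        poly[cd] = poly.get(cd, 0) + 2 ** word.count("D")
    return poly

def star(poly):
    """The star involution: read each cd-word backwards."""
    return {word[::-1]: coeff for word, coeff in poly.items()}
```

For instance, `star(diamond_cc(3, 2)) == diamond_cc(3, 2)` holds, matching the proposition.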
First, recall the recursive formula for $\mathbf{c}^p\times\mathbf{c}^q$: \[ \mathbf{c}^p\times\mathbf{c}^q = (\mathbf{c}^p \times \mathbf{c}^{q-1})\cdot\mathbf{c} + \mathbf{c}^q\cdot\mathbf{d}\cdot\mathbf{c}^p + \sum_{j+k=p-1}(\mathbf{c}^j\times\mathbf{c}^{q-1})\cdot 2\mathbf{d}\cdot\mathbf{c}^k. \] If the coefficients are to represent lattice paths in a straightforward way, then it seems natural that the term $(\mathbf{c}^p\times \mathbf{c}^{q-1})\cdot\mathbf{c}$ represents lattice paths which end in $\mathbf{U}$, so that they pass through $(p,q-1)$, while a term of the form $(\mathbf{c}^j\times\mathbf{c}^{q-1})\cdot 2\mathbf{d}\cdot\mathbf{c}^k$ represents lattice paths which pass through $(j, q-1)$ and end in~$\mathbf{D}\mathbf{R}^k$. But how are we to interpret the term $\mathbf{c}^q\cdot\mathbf{d}\cdot\mathbf{c}^p$? It seems to require a lattice path of the form $\mathbf{U}^q\cdot\mathbf{R}^p$, which contains the forbidden subpath $\mathbf{U}\mathbf{R}$. We can avoid forbidden subpaths by introducing another step $\mathbf{S} = (0,0)$. Thus $\mathbf{S}$ represents standing still for a moment to avoid $\mathbf{U}\mathbf{R}$. It can also be thought of as marking a particular point on a lattice path. Now we develop our argument more formally. Consider the noncommutative polynomial algebra on the generators $\mathbf{D}$, $\mathbf{R}$, $\mathbf{U}$, and $\mathbf{S}$, where $\mathbf{D}$ has degree~2 and the other generators have degree~1. The generators correspond to the steps \begin{align*} \text{\textbf{D}iagonal} &= (1,1) \\ \text{\textbf{R}ight} &= (1,0) \\ \text{\textbf{U}p} &= (0,1) \\ \text{\textbf{S}tand} &= (0,0). \end{align*} For natural numbers $p$ and $q$, let $\Lambda'_{p,q}$ be the module generated by monomials of degree $p+q+1$ with $p$ occurrences of $\mathbf{D}$ or $\mathbf{R}$ and $q$ occurrences of $\mathbf{D}$ or $\mathbf{U}$ which do not contain $\mathbf{UR}$ as a contiguous subword.
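Since a monomial in $\Lambda'_{p,q}$ has degree $p+q+\#\mathbf{S}$, requiring degree $p+q+1$ forces exactly one $\mathbf{S}$ step in each word. A quick enumeration confirms this (a sketch under our own naming, not from the text):

```python
def marked_paths(p, q):
    """Words over D=(1,1), R=(1,0), U=(0,1), S=(0,0) of degree p+q+1
    (deg D = 2, deg R = deg U = deg S = 1), with p steps in {D, R} and
    q steps in {D, U}, avoiding the contiguous subword UR."""
    limit = p + q + 1
    found = []
    def extend(word, x, y, deg):
        if (x, y) == (p, q) and deg == limit:
            found.append(word)
        if deg >= limit:
            return
        for step, dx, dy, d in (("D", 1, 1, 2), ("R", 1, 0, 1),
                                ("U", 0, 1, 1), ("S", 0, 0, 1)):
            if (x + dx <= p and y + dy <= q and deg + d <= limit
                    and not (word.endswith("U") and step == "R")):
                extend(word + step, x + dx, y + dy, deg + d)
    extend("", 0, 0, 0)
    return found
```

For $(p,q)=(1,1)$ this yields the six words `DS`, `SD`, `RUS`, `RSU`, `SRU`, and `USR`; note that `USR` realizes the problematic shape $\mathbf{U}^q\mathbf{R}^p$ with the $\mathbf{S}$ standing in between.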
In this context, we can prove a proposition analogous to Proposition~\ref{prop:diamondlatpath} for the Cartesian product. We can also prove that the diamond product is unimodal in the following sense. \begin{proposition}[Unimodality of the diamond product] The sequence \[ 1\diamond\mathbf{c}^{2n}, \mathbf{c}\diamond\mathbf{c}^{2n-1}, \dots, \mathbf{c}^n\diamond\mathbf{c}^n, \mathbf{c}^{n+1}\diamond\mathbf{c}^{n-1}, \dots, \mathbf{c}^{2n}\diamond 1 \] is unimodal. \end{proposition} \section{Concluding remarks} \label{section_mixing_remarks} In addition to the mixing operators studied above, there is also the \define{manifold product} (or doubly-truncated product), denoted by $P\Box Q$ and defined by \[ P \Box Q = (P\setminus\{\hat{0}_P, \hat{1}_P\})\times (Q\setminus\{\hat{0}_Q,\hat{1}_Q\})\cup\{\hat{0},\hat{1}\}. \] The name comes from its relation with manifolds. For example, if $P$ and $Q$ are face lattices of polytopes, then $P\Box Q$ is the face poset of the torus which is the Cartesian product of the boundary complexes of the polytopes. While the Cartesian product $\times$ increases degree by 1 and the diamond product $\diamond$ preserves degree, the manifold product $\Box$ decreases degree by 1. For any posets $P$ and $Q$ with rank at least $2$, \[ \kappa(\Psi(P\Box Q)) = \kappa(\Psi(P))\cdot\kappa(\Psi(Q))/(\mathbf{a}-\mathbf{b}). \] Hence \[ \kappa(u\Box v) = \kappa(u\cdot v)/(\mathbf{a} - \mathbf{b}) + \sweedle{\Delta^*}{ (u_{(1)}\diamond v_{(1)})\cdot\mathbf{b}\cdot\kappa(u_{(2)}\diamond^* v_{(2)})} \] whenever $u$ and $v$ have sufficiently large degree. While the operations $\times$ and $\diamond$ have the $\cv\dv$-polynomials $\xi$ and $1$ respectively as units, the unit of $\Box$ is $\mathbf{a}$. Hence the manifold product does not preserve the $\cv\dv$-index. There are still recursive rules for computing $u\Box v$.
In particular, \begin{align*} u\Box(v\cdot(\mathbf{a} - \mathbf{b})) &= (u\Box v)\cdot(\mathbf{a} - \mathbf{b}) \text{\ and} \\ u\Box(v\cdot\mathbf{d}) &= \sweedle{\Delta}{(u_{(1)}\diamond v)\cdot 2\mathbf{d}\cdot u_{(2)}}. \end{align*} Although the manifold product does not in general yield a $\cv\dv$-polynomial or preserve nonnegativity, there are some special cases where it does. In particular, $\mathbf{c}^p\Box\mathbf{c}^q$ is a $\cv\dv$-polynomial if $p+q$ is odd, and $\mathbf{c}^n\Box\mathbf{c}^{n+1}$ is a nonnegative $\cv\dv$-polynomial for any $n$. Increasing the difference in degree between the arguments rapidly introduces negative terms. Since these expressions denote $\av\bv$-indices of products of spheres of different dimensions, we would like to give conditions which guarantee nonnegativity of the coefficients. \begin{center} Copyright \copyright\ MLE Slone 2008 \end{center} \setcounter{chapter}{2} \chapter{A geometric approach to acyclic orientations} \label{chap:acyclic} The set of acyclic orientations of a connected graph with a given sink has a natural poset structure. We give a geometric proof of a result of Propp: this poset is the disjoint union of distributive lattices. Let $G$ be a connected graph on the vertex set $[\underline{n}] = \{0\}\cup[n]$, where $[n]$ denotes the set $\{1, \ldots, n\}$. Let $P$ denote the collection of acyclic orientations of $G$, and let $P_0$ denote the collection of acyclic orientations of $G$ with $0$ as a sink. If $\Omega$ is an orientation in $P$ with the vertex $i$ as a source, we can obtain a new orientation $\Omega'$ with $i$ as a sink by \emph{firing} the vertex $i$, that is, reorienting all the edges adjacent to $i$ towards $i$. The orientations $\Omega$ and~$\Omega'$ agree away from $i$.
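The firing operation is easy to experiment with. The sketch below is our own illustration on the $4$-cycle $0$--$1$--$2$--$3$--$0$ (the representation and names are ours): an orientation is recorded as the head of each edge, a source is fired, and the fired vertex becomes a sink while all other edges are untouched.

```python
# Illustration on the 4-cycle 0-1-2-3-0; orient[(a, b)] is the head of
# the undirected edge (a, b), i.e. the vertex the edge points to.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0)]

def is_source(orient, v):
    return all(orient[e] != v for e in EDGES if v in e)

def is_sink(orient, v):
    return all(orient[e] == v for e in EDGES if v in e)

def fire(orient, v):
    """Fire the source v: reorient every edge adjacent to v toward v."""
    assert is_source(orient, v), "only a source may fire"
    return {e: (v if v in e else head) for e, head in orient.items()}

# An acyclic orientation with 0 as a sink and 2 as a source.
omega = {(0, 1): 0, (1, 2): 1, (2, 3): 3, (3, 0): 0}
```

After `fire(omega, 2)`, vertex $2$ is a sink, $0$ is still a sink, and the only remaining sources are the neighbors $1$ and $3$ of $0$, which cannot fire without unseating the distinguished sink; this is the phenomenon behind the length bound proved below.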
A \emph{firing sequence} from $\Omega$ to $\Omega'$ in $P$ consists of a sequence $\Omega=\Omega_1,\dots,\Omega_{m+1}=\Omega'$ of orientations and a function $F\colon [m]\to [\underline{n}]$ such that for each $i\in [m]$, the orientation~$\Omega_{i+1}$ is obtained from $\Omega_i$ by firing the vertex $F(i)$. We will abuse language by calling $F$ itself a firing sequence. We make $P$ into a preorder by writing $\Omega\le\Omega'$ if and only if there is a firing sequence from $\Omega$ to $\Omega'$. From the definition it is clear that $P$ is reflexive and transitive. While $P$ is only a preorder, $P_0$ is a poset. By finiteness, antisymmetry can be verified by showing that firing sequences in $P_0$ cannot be arbitrarily long. This is a consequence of the fact that neighbors of the distinguished sink $0$ cannot fire. The proof depends on the following lemma. \begin{lemma} Let $F\colon [m]\to [n]$ be a firing sequence for the graph $G$. If $i$ and $j$ are adjacent vertices in $G$, then \[ |F^{-1}(i)| \le |F^{-1}(j)| + 1. \] \end{lemma} \begin{proof} A vertex can fire only if it is a source. Firing $i$ reverses the orientation of its edge to~$j$. Hence $i$ cannot fire again until the orientation is again reversed, which can only happen by firing $j$. \end{proof} As a corollary, firing sequences have bounded length, implying that $P_0$ is a poset. \begin{corollary} The preorder $P_0$ of acyclic orientations with a distinguished sink is a poset. \end{corollary} \begin{proof} Let $F\colon [m]\to [n]$ be a firing sequence in $P_0$. Since neighbors of $0$ cannot fire, iterating the lemma along a shortest path to $0$ gives $|F^{-1}(i)| \le d(0, i) - 1$, where $d(0,i)$ denotes the graph distance from $0$ to $i$. Hence \[ m = \sum_{i\in [n]} |F^{-1}(i)| \le \sum_{i\in [n]} \bigl(d(0, i) - 1\bigr). \] Thus firing sequences cannot be arbitrarily long, implying that $P_0$ is antisymmetric. \end{proof} For a real number $a$ let $\lfloor a \rfloor$ denote the largest integer less than or equal to $a$.
Similarly, let $\lceil a \rceil$ denote the least integer greater than or equal to $a$. Finally, let $\fracp{a}$ denote the fractional part of the real number $a$, that is, $\fracp{a} = a - \lfloor a \rfloor$. Observe that the range of the function $x \longmapsto \fracp{x}$ is the half-open interval $[0,1)$. In this chapter we use $\fracp{a}$ only to denote the fractional part and never to denote a singleton set. Let $\widetilde{\mathcal{H}} = \widetilde{\mathcal{H}}(G)$ be the \emph{periodic graphic arrangement} of the graph $G$, that is, $\widetilde{\mathcal{H}}$ is the collection of all hyperplanes of the form \[ x_{i} = x_{j} + k , \] where $ij$ is an edge in the graph $G$ and $k$ is an integer. This hyperplane arrangement cuts~$\mathbb{R}^{n+1}$ into open regions. Note that each region is translation-invariant in the direction $(1, \ldots, 1)$. Let $C$ denote the complement of $\widetilde{\mathcal{H}}$, that is, \[ C = \mathbb{R}^{n+1} \setminus \bigcup_{H \in \widetilde{\mathcal{H}}} H . \] Define a map $\varphi\colon C\to P$ from the complement of the periodic graphic arrangement to the preorder of acyclic orientations as follows. For a point $x = (x_{0}, \ldots, x_{n})$ and an edge $ij$ observe that $\fracp{x_{i}} \neq \fracp{x_{j}}$ since the point does not lie on any hyperplane of the form $x_{i} = x_{j} + k$. Hence orient the edge $ij$ towards $i$ if $\fracp{x_{i}} < \fracp{x_{j}}$ and towards~$j$ if the inequality is reversed. This defines the orientation $\varphi(x)$. Also note that $\varphi(x)$ is an acyclic orientation: every edge points towards the endpoint with the smaller fractional part, so a directed cycle would force the values $\fracp{x_{i}}$ to decrease strictly around the cycle, which is impossible. \vanish{ The poset of orientations consists of several connected components, each of which is self-dual. To get a better intuition for the structure of $P$, we will work with the \emph{periodic graphic arrangement} $\mathcal{A}=\mathcal{A}(G)$ in $\mathbb{R}^{n+1}$. For each edge $ij$ in $G$ and each integer $k$, the periodic graphic arrangement includes the hyperplane $x_i = x_j + k$.
} \vanish{ Let $S$ denote the set obtained by intersecting the complement $\mathbb{R}^{n+1}\setminus\bigcup\mathcal{A}$ with the hyperplane $x_0 = 0$. We now define a map $\varphi\colon S\to P$. Let $x\in S$. Since $x$ is not on any hyperplane in $\mathcal{A}$, its coordinates are all distinct modulo 1. Hence $x$ corresponds to a permutation of $[\underline{n}]$, inducing an orientation $\Omega$ of the graph $G$. Since $x_0 = 0$ but the other coordinates of $x$ are strictly positive modulo $1$, this orientation makes vertex $0$ a sink. Hence~$\Omega$ is in $P$, and we can define the map $\varphi$ by sending $x$ to $\Omega$. } Let $H_{0}$ be the coordinate hyperplane $\{x \in \mathbb{R}^{n+1} \: : \: x_{0} = 0\}$. The map $\varphi$ sends points of the intersection $C_0 = C \cap H_{0}$ to acyclic orientations in $P_0$. \vanish{ Especially, if we restrict our attention to the complement $C$ intersected with the hyperplane $H_{0}$, we obtain that $\varphi$ maps points to acyclic orientations with a sink at the vertex $0$. } The real line $\mathbb{R}$ is a distributive lattice; meet is minimum and join is maximum. Since~$\mathbb{R}^{n+1}$ is a product of copies of $\mathbb{R}$, it is also a distributive lattice, with meet and join given by componentwise minimum and maximum. That is, given two points in~$\mathbb{R}^{n+1}$, say $x = (x_{0}, \ldots, x_{n})$ and $y = (y_{0}, \ldots, y_{n})$, their meet and join are given by \[ x \wedge y = (\min(x_{0},y_{0}), \ldots, \min(x_{n},y_{n})) \] and \[ x \vee y = (\max(x_{0},y_{0}), \ldots, \max(x_{n},y_{n})) \] respectively. \vanish{ Recall that $\mathbb{R}^{n+1}$ is a finite product of chains, hence a distributive lattice. In this lattice, the meet of two points is the coordinatewise minimum and the join of two points is the coordinatewise maximum. The set $S$ inherits the poset structure of $\mathbb{R}^{n+1}$, and each component is a distributive sublattice.
} \begin{lemma} Each region $R$ in the complement $C$ of the periodic graphic arrangement $\widetilde{\mathcal{H}}$ is a distributive sublattice of $\mathbb{R}^{n+1}$. Hence the intersection $R \cap H_{0}$, which is a region in $C_{0}$, is also a distributive sublattice of $\mathbb{R}^{n+1}$. \end{lemma} \begin{proof} Since each region $R$ is the intersection of slices of the form \[ T = \{ x \in \mathbb{R}^{n+1} \:\: : \:\: x_i + k < x_j < x_i + k + 1 \}, \] it is enough to prove that each slice is a sublattice of $\mathbb{R}^{n+1}$. Let $x$ and $y$ be two points in the slice $T$. Then $\min(x_i,y_i) + k = \min(x_i + k, y_i + k) < \min(x_j, y_j) < \min(x_i + k+1, y_i + k+1) = \min(x_i,y_i) + k+1$, implying that $x \wedge y$ also lies in the slice $T$. A dual argument shows that the slice $T$ is closed under the join operation. Thus the region $R$ is a sublattice. Since distributivity is preserved under taking sublattices, it follows that $R$ is a distributive sublattice of $\mathbb{R}^{n+1}$. \end{proof} In the remainder of this chapter we let $R$ be a region in $C_{0}$. \begin{lemma} Consider the restriction $\varphi|_{R}$ of the map $\varphi$ to the region~$R$. The inverse image of an acyclic orientation in $P_{0}$ is of the form \[ R \cap \biggl(\{0\} \times \prod_{i=1}^{n} [a_{i},a_{i}+1) \biggr), \] where each $a_{i}$ is an integer. That is, the inverse image of an orientation is the intersection of the region $R$ with a half-open lattice cube. Hence the inverse image is a sublattice of~$\mathbb{R}^{n+1}$. \label{lemma_inverse_image} \end{lemma} \begin{proof} Assume that $x$ and $y$ lie in the region $R$. Define the integers $a_{i}$ and $b_{i}$ by $a_{i} = \lfloor x_{i} \rfloor$ and $b_{i} = \lfloor y_{i} \rfloor$. Hence the coordinate $x_{i}$ lies in the half-open interval $[a_{i},a_{i}+1)$ and the coordinate $y_{i}$ lies in the half-open interval $[b_{i},b_{i}+1)$. Lastly, assume that $\varphi|_{R}$ maps $x$ and $y$ to the same acyclic orientation.
The last condition implies that, for every edge $ij$, the inequality $0 \leq x_{i} - a_{i} < x_{j} - a_{j} < 1$ holds if and only if $0 \leq y_{i} - b_{i} < y_{j} - b_{j} < 1$ holds. Consider an edge that is directed from $j$ to $i$. Since $x$ and~$y$ both lie in the region~$R$, there exists an integer $k$ such that $x_{i} + k < x_{j} < x_{i} + k + 1$ and $y_{i} + k < y_{j} < y_{i} + k + 1$. Now we have that $a_{j} - a_{i} < x_{j} - x_{i} < k+1$. Furthermore, observe that $x_{j} - a_{j} - 1 < 0 \leq x_{i} - a_{i}$. Hence $a_{j} - a_{i} > x_{j} - x_{i} - 1 > k-1$. Since $a_{j} - a_{i}$ is an integer, the two bounds imply that $a_{j} - a_{i} = k$. By similar reasoning we obtain that $b_{j} - b_{i} = k$. Hence for every edge $ij$ we know that $a_{j} - a_{i} = b_{j} - b_{i}$. Since $a_{0} = b_{0} = 0$ and the graph $G$ is connected, we obtain that $a_{i} = b_{i}$ for all vertices $i$. \end{proof} \begin{lemma} The restriction $\varphi|_{R} \colon R \to P_0$ is a poset map. \end{lemma} \begin{proof} Assume that $y$ and $z$ belong to the region $R$ and that $y \leq z$. Since the region~$R$ is convex, the line segment from $y$ to $z$ is contained in~$R$. Let a point $x$ move continuously from $y$ to $z$ along this line segment and consider what happens with the associated acyclic orientations $\varphi(x)$. Note that each coordinate $x_{i}$ is non-decreasing. When the point $x$ crosses a hyperplane of the form $x_{i} = p$ where $p$ is an integer, observe that the value $\fracp{x_{i}}$ approaches $1$ and then jumps down to $0$. Hence the vertex~$i$ switches from being a source to being a sink, that is, the vertex~$i$ fires. Observe that two adjacent vertices $i$ and $j$ cannot fire at the same time, since the intersection of the two hyperplanes $x_{i} = p$ and $x_{j} = q$ is contained in the hyperplane $x_{i} = x_{j} + (p-q)$, which is not in the region $R$. Hence we obtain a firing sequence from the acyclic orientation $\varphi(y)$ to $\varphi(z)$, proving that $\varphi(y) \leq \varphi(z)$.
\end{proof} \vanish{ Along any monotonic path in $C_0$ from $x$ to $y$, the value of $\varphi$ changes only finitely many times, and we may assume it changes exactly once. For each $i$ in $[n]$, let $\alpha_i$ be the straight-line path in $C_0$ from $(y_1, \ldots, y_{i-1}, x_i, x_{i+1}, \ldots, x_n)$ to $(y_1, \ldots, y_{i-1}, y_i, x_{i+1}, \ldots, x_n)$. By following these paths in sequence we get a monotonic path from $x$ to $y$. The value of $\varphi$ changes in precisely one path $\alpha_{i+1}$. The only change that occurs in this path is the monotonic increase of the $i$ coordinate from $x_i$ to $y_i$. Thus there must be an integer between $x_i$ and $y_i$. Since $\alpha_{i+1}$ is a path in $C_0$, the $i$th coordinate must be maximal in $x$ and minimal in $y$. In other words, $\varphi(y)$ is obtained from $\varphi(x)$ by firing the vertex $i$. Hence $\varphi(x) \le \varphi(y)$. } \vanish{ \begin{lemma} Let $R$ be a region of the complement $C$ and $x$ a point in the region $R$. Let $\Omega'$ be an acyclic orientation comparable to the acyclic orientation $\Omega = \varphi(x)$. Then there exists a point $y$ in the region $R$ such that $\varphi(y) = \Omega'$. \end{lemma} } \begin{lemma} Let $x$ be a point in the region $R$. Let $\Omega'$ be an acyclic orientation comparable to $\Omega = \varphi(x)$ in the poset $P_0$. Then there exists a point $z$ in the same region $R$ as $x$ such that $\varphi(z) = \Omega'$. \label{lemma_lifting} \end{lemma} \begin{proof} It is enough to prove this for cover relations in the poset $P$. We begin by considering the case when $\Omega'$ covers $\Omega$ in $P$. Thus $\Omega'$ is obtained from $\Omega$ by firing a vertex $i$. First pick a positive real number $\lambda$ such that $\fracp{x_{j}} < 1 - \lambda$ for each nonzero vertex~$j$. Let $y$ be the point $y = x + \lambda \cdot (0,1, \ldots, 1)$.
Observe that $y$ belongs to the same region $R$ and that $\varphi$ maps $y$ to the same acyclic orientation as the point $x$. Since $i$ is a source in $\Omega$, the value $\fracp{y_{i}}$ is larger than any other value $\fracp{y_{j}}$ for vertices~$j$ adjacent to the vertex $i$. Let $z$ be the point with coordinates $z_{j} = y_{j}$ for $j \neq i$ and $z_{i} = \lceil y_{i} \rceil + \lambda/2$. Observe that in moving from $y$ to the point $z$ we do not cross any hyperplane of the form $x_{i} = x_{j} + k$. Hence the point $z$ also belongs to the region $R$. However, we did cross a hyperplane of the form $x_{i} = p$, corresponding to firing the vertex~$i$. Hence we have that $\varphi(z) = \Omega'$. Now we can iterate this construction to extend to the general case when $\Omega < \Omega'$. The case when $\Omega'$ is covered by $\Omega$ is done similarly. However, this case is easier since one can skip the middle step of defining the point $y$. Hence this case is omitted. \end{proof} A connected component of a finite poset is a weakly connected component of its associated comparability graph. That is, a finite poset is the disjoint union of its connected components. \begin{lemma} Let $Q$ be a connected component of the poset of acyclic orientations~$P_{0}$. Then there exists a region $R$ in $C_{0}$ such that the map $\varphi$ maps $R$ onto the component $Q$. \label{lemma_lifting_components} \end{lemma} \begin{proof} Let $\Omega$ be an orientation in the component $Q$. Since $\varphi$ is surjective, we can lift~$\Omega$ to a point $x$ in $C_{0}$. Say that the point $x$ lies in the region $R$. It is enough to show that every orientation $\Omega^{\prime}$ in $Q$ can be lifted to a point in $R$.
The two orientations $\Omega$ and $\Omega^{\prime}$ are related by a sequence in $Q$ of orientations $\Omega = \Omega_{1}, \Omega_{2}, \ldots, \Omega_{k} = \Omega^{\prime}$ such that $\Omega_{i}$ and $\Omega_{i+1}$ are comparable. By iterating Lemma~\ref{lemma_lifting} we obtain points~$x_{i}$ in~$R$ such that $\varphi(x_{i}) = \Omega_{i}$. In particular, $\varphi(x_{k}) = \Omega^{\prime}$. \end{proof} \begin{proposition} Let $Q$ be a connected component of the poset of acyclic orientations $P_{0}$. Then the component $Q$ as a poset is a lattice. Moreover, let $R$ be a region of $C_{0}$ that maps onto $Q$ by $\varphi$. Then the poset map $\varphi|_{R} : R \longrightarrow Q$ is a lattice homomorphism. \end{proposition} \begin{proof} The previous discussion showed that we can lift the component $Q$ to a region~$R$. Consider two acyclic orientations $\Omega$ and $\Omega'$. We can lift them to two points~$x$ and~$y$ in~$R$, that is, $\varphi(x) = \Omega$ and $\varphi(y) = \Omega'$. Since $\varphi|_{R}$ is a poset map we obtain that $\varphi(x \wedge y)$ is a lower bound for $\Omega$ and $\Omega'$. It remains to show that it is the greatest lower bound. Assume that $\Omega''$ is a lower bound of $\Omega$ and $\Omega'$. By Lemma~\ref{lemma_lifting} we can lift $\Omega''$ to an element $z$ in $R$ such that $z \leq x$. Similarly, we can lift $\Omega''$ to an element $w$ in $R$ such that $w \leq y$. That is, we have $\varphi(z) = \varphi(w) = \Omega''$. Now by Lemma~\ref{lemma_inverse_image} we have that $\varphi(z \wedge w) = \Omega''$. But since $z \wedge w$ is a lower bound of both $x$ and $y$ we have that $z \wedge w \leq x \wedge y$.
Now applying $\varphi$ we obtain that $\varphi(x \wedge y)$ is the greatest lower bound, proving that the meet is well-defined. A dual argument shows that the join is well-defined, hence $Q$ is a lattice. Finally, we have to show that $\varphi|_{R}$ is a lattice homomorphism. Let $x$ and $y$ be two points in the region $R$. By Lemma~\ref{lemma_lifting} we can lift the inequality $\varphi(x) \wedge \varphi(y) \leq \varphi(x)$ to obtain a point $z$ in $R$ such that $z \leq x$ and $\varphi(z) = \varphi(x) \wedge \varphi(y)$. Similarly, we can lift the inequality $\varphi(x) \wedge \varphi(y) \leq \varphi(y)$ to obtain a point $w$ in $R$ such that $w \leq y$ and $\varphi(w) = \varphi(x) \wedge \varphi(y)$. By Lemma~\ref{lemma_inverse_image} we know that $\varphi(z \wedge w) = \varphi(x) \wedge \varphi(y)$. But~$z \wedge w$ is a lower bound of both $x$ and $y$, so $\varphi(x) \wedge \varphi(y) = \varphi(z \wedge w) \leq \varphi(x \wedge y)$. But since $\varphi(x \wedge y)$ is a lower bound of both $\varphi(x)$ and $\varphi(y)$ we have $\varphi(x \wedge y) \leq \varphi(x) \wedge \varphi(y)$. Thus the map $\varphi|_{R}$ preserves the meet operation. The dual argument proves that $\varphi|_{R}$ preserves the join operation, proving that it is a lattice homomorphism. \end{proof} Combining these results we can now prove the result of Propp~\cite{Propp}. \begin{theorem} Each connected component of the poset of acyclic orientations $P_{0}$ is a distributive lattice. \end{theorem} \begin{proof} It is enough to recall that $\mathbb{R}^{n+1}$ is a distributive lattice and each region $R$ is a sublattice. Furthermore, the image under a lattice morphism of a distributive lattice is also distributive. \end{proof} Observe that the minimal element in each connected component $Q$ is an acyclic orientation with the unique sink at the vertex $0$.
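Such orientations can be counted by brute force on a small example. The following sketch (our own illustration, not part of the development above) enumerates the $2^3$ orientations of the triangle $K_3$, keeps the acyclic ones, and counts those whose unique sink is the vertex $0$; the chromatic polynomial of $K_3$ is $k(k-1)(k-2) = k^3 - 3k^2 + 2k$, with linear coefficient $2$.

```python
from itertools import product

# Brute-force check on the triangle K3.  An orientation of K3 is a
# tournament on three vertices; it is acyclic iff some vertex has
# out-degree 2 (otherwise the out-degrees are (1,1,1), a directed cycle).
edges = [(0, 1), (0, 2), (1, 2)]

def is_acyclic(orientation):
    out = [0, 0, 0]
    for s, t in orientation:
        out[s] += 1
    return 2 in out

acyclic = []
for bits in product([0, 1], repeat=3):
    o = tuple((u, v) if b == 0 else (v, u) for (u, v), b in zip(edges, bits))
    if is_acyclic(o):
        acyclic.append(o)

def sinks(orientation):
    tails = {s for s, t in orientation}
    return [v for v in (0, 1, 2) if v not in tails]

unique_sink_at_0 = [o for o in acyclic if sinks(o) == [0]]
print(len(acyclic), len(unique_sink_at_0))  # 6 acyclic, 2 with sink at 0
```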
Greene and Zaslavsky~\cite{Greene_Zaslavsky} proved that the number of such orientations is $(-1)^{n-1}$ times the linear coefficient of the chromatic polynomial of the graph $G$, where $n$ is the number of vertices. Gebhard and Sagan gave several proofs of this result~\cite{Gebhard_Sagan}. A geometric proof of this result can be found in Chapter~\ref{chap:affinetoric} of this dissertation. That the connected components are confluent, that is, that each pair of elements has a lower and an upper bound, can also be shown to follow from a special case of chip-firing games~\cite{Bjorner_Lovasz_Shor}. Is there a geometric way to prove the confluency of chip-firing? More discussion relating these distributive lattices with chip-firing can be found in~\cite{Latapy_Magnien,Latapy_Phan}. \begin{center} Copyright \copyright\ MLE Slone 2008 \end{center} \setcounter{chapter}{3} \chapter{Critical groups of cleft graphs} \label{chap:critical} \section{Introduction} The number of spanning trees of an undirected graph is an important invariant of the graph. The matrix tree theorem reduces the problem of determining the tree number to linear algebra. (The problem of listing all spanning trees for a specific graph was solved by Feussner~\cite{Feussner,Feussner_two} using what is essentially deletion-contraction.) \begin{theorem}[Kirchhoff's matrix tree theorem~\cite{Kirchhoff}]\label{thm:matrixtree} Let $X$ be a graph on $n$ vertices with Laplacian $L$. Suppose $\lambda_1\le\dots\le\lambda_n$ are the eigenvalues of $L$. Then the tree number of $X$ is \[ \tree{X} = \frac{1}{n}\prod_{i=2}^n \lambda_i. \] Equivalently, $\tree{X}$ is the value of any cofactor of $L$. \end{theorem} \noindent Kirchhoff developed this theorem with the theory of electrical networks in mind.
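For a concrete instance of Theorem~\ref{thm:matrixtree}, the following sketch (our own; the naive cofactor-expansion determinant is for illustration only) computes a cofactor of the Laplacian of the complete graph $K_4$ and recovers Cayley's count $4^{4-2} = 16$.

```python
def det(m):
    # determinant by cofactor expansion along the first row;
    # adequate for the small matrices used here
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# Laplacian of K4: degree 3 on the diagonal, -1 off the diagonal.
L = [[3 if i == j else -1 for j in range(4)] for i in range(4)]

# The tree number is any cofactor: delete row 0 and column 0.
cofactor = det([row[1:] for row in L[1:]])
print(cofactor)  # 16
```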
More than a hundred years later, the physicists Bak, Tang, and Wiesenfeld~\cite{Bak_Tang_Wiesenfeld} developed the apparently unrelated abelian sandpile model in an attempt to explain flicker noise, an effect which appears in widely varying physical systems. In the abelian sandpile model, grains of sand are added one at a time to small piles of sand. Since this is inherently unstable, eventually a pile will collapse, distributing grains to neighboring piles. They called a configuration critical if it is stable but becomes unstable if a single grain is added anywhere. The problem of characterizing critical configurations was studied by graph theorists and other combinatorialists in the 1990s under the guise of chip-firing games. A chip-firing game, in the sense of Bj\"orner, Lov\'asz, and Shor~\cite{Bjorner_Lovasz_Shor}, involves firing vertices in a finite graph~$G$ with a nonnegative number of chips on each vertex. A vertex fires by distributing a chip to each of its neighbors, and cannot fire unless it has at least as many chips as its degree. Only one vertex can fire at a time, so it might be expected that the decision of which vertex to fire at a particular step is of major importance. However, Bj\"orner, Lov\'asz, and Shor showed that a chip-firing game on a graph is a confluent system. Hence if the game starting from an initial configuration terminates, the terminal stable configuration of chips does not depend on the order in which vertices are fired. Biggs~\cite{Biggs_dollar} developed a variant of this called the dollar game, which includes one vertex which can fire if and only if no other vertex can fire, even if it would have a negative number of chips after doing so. Biggs proved that the number of critical configurations of a graph is equal to the order of the critical group, which is the torsion part of the cokernel of the Laplacian. Thus the problem of counting spanning trees is subsumed by the problem of understanding the critical group of a graph.
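The confluence phenomenon is easy to observe experimentally. The following sketch (our own illustration of the firing rule just described) stabilizes one starting configuration on the $4$-cycle under two opposite firing policies and obtains the same stable configuration.

```python
# Chip-firing on the 4-cycle: a vertex may fire when it holds at least
# deg(v) chips, sending one chip to each neighbor.  We stabilize the
# same start under two opposite firing policies and compare.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}

def stabilize(chips, pick):
    chips = list(chips)
    while True:
        fireable = [v for v in adj if chips[v] >= len(adj[v])]
        if not fireable:
            return chips
        v = pick(fireable)
        chips[v] -= len(adj[v])
        for u in adj[v]:
            chips[u] += 1

start = [2, 2, 0, 0]
low = stabilize(start, min)   # always fire the lowest-index fireable vertex
high = stabilize(start, max)  # always fire the highest-index fireable vertex
print(low, high)  # both [1, 1, 1, 1]
```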
The critical group is only known for a few classes of graphs. In this chapter, we study cleft graphs, which are graphs obtained from a base graph by replacing each vertex with an anticlique, that is, a collection of nonadjacent vertices. This construction is the vertex analogue of that of Lorenzini~\cite{Lorenzini}, who studied the effect of replacing all edges in a graph with paths of uniform length. We derive an exact sequence relating the critical group of a uniformly cleft graph with that of its base graph. Moreover, we also have results in the non-uniform case. By studying the spectrum of the Laplacian we are able to determine the tree number of a non-uniformly cleft tree. \section{Preliminaries} All graphs we consider are simple, loopless, undirected graphs with no parallel edges. Our discussion will be greatly simplified if we imagine a graph as being endowed with an orientation. None of our results depend on which orientation is used. With this in mind, we define an \define{oriented graph} to be a structure $X = (EX, VX)$ consisting of a set of edges $EX$ and a set of vertices $VX$ which are related by a pair of structure maps from edges to vertices, called $s$ for source and $t$ for target. If $X = (EX, VX)$ and~$B = (EB, VB)$ are oriented graphs, then a morphism $\varphi\colon X\to B$ consists of two functions $\varphi\colon EX\to EB$ and $\varphi\colon VX\to VB$ such that an oriented edge with source~$u$ and target $v$ is mapped to an oriented edge with source $\varphi(u)$ and target $\varphi(v)$. We let~$\delta(v)$ denote the neighborhood of a vertex $v$ in the unoriented graph. The degree of $v$ is denoted by $\deg(v)$ and is the size of the neighborhood, that is, $\deg(v) = |\delta(v)|$. 
In an oriented graph, the neighborhood of a vertex $v$ decomposes as $\delta(v) = \delta^+(v)\sqcup \delta^-(v)$, where~$\delta^+(v)$ is the out-neighborhood of $v$, the set of vertices reachable from $v$ in one step, and~$\delta^-(v)$ is the in-neighborhood of $v$, the set of vertices from which $v$ can be reached in one step. An oriented graph $X$ can be viewed as an oriented 1-dimensional cell complex. Hence $X$ comes equipped with a chain complex $C_\bullet(X)$, where \[ C_1(X) = \bigoplus_{e\in EX}\mathbb{Z}e \text{\quad and\quad} C_0(X) = \bigoplus_{v\in VX}\mathbb{Z}v. \] The boundary map $\partial\colon C_1(X)\to C_0(X)$ is defined on an edge $e$ by $\partial(e) = t(e) - s(e)$. Hence the boundary map is the same as the incidence matrix of the graph. The \define{Laplacian} of $X$ is the map $L = \partial\trans{\partial}$, where $\trans{\partial}$ is the transpose of $\partial$. Thus $\trans{\partial}$ represents the coboundary of the graph. If $X$ has $n$ vertices, we can view $L$ as an~$n\times n$ matrix. For vertices $u$ and $v$, one can compute that the $(u, v)$ entry of $L$ is \[ L(u, v) = \begin{cases} \deg(u), & u = v \\ -\#[u, v], & u \ne v, \end{cases} \] where the notation $[u, v]$ indicates the set of edges with endpoints $u$ and $v$ in either orientation. Some authors use this as the definition of the Laplacian matrix. Thus~$L = D - A$, where $D$ is the diagonal matrix whose diagonal gives the degree sequence of $X$ and $A$ is the adjacency matrix of $X$. The \define{critical group} of $X$ is the torsion part of the cokernel of $L$. The cokernel can be found by reducing $L$ to its Smith normal form, which can be done using row and column operations which are invertible over the integers.
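For a small example, this reduction can be carried out mechanically. The following sketch (our own simplified Smith-style reduction; it does not enforce the divisibility condition on the diagonal, which is not needed to read off the cokernel) computes the critical group of the $4$-cycle, using the standard fact that for a connected graph the cokernel of the Laplacian with one row and column deleted is the critical group; the result is $K(C_4)\cong\mathbb{Z}_4$.

```python
def diagonalize(mat):
    """Diagonalize an integer matrix by row and column operations that
    are invertible over the integers; returns the diagonal entries."""
    A = [row[:] for row in mat]
    n, m = len(A), len(A[0])
    t = 0
    while t < min(n, m):
        # move a nonzero entry of minimal absolute value to the pivot
        best = None
        for i in range(t, n):
            for j in range(t, m):
                if A[i][j] and (best is None
                                or abs(A[i][j]) < abs(A[best[0]][best[1]])):
                    best = (i, j)
        if best is None:
            break  # the remaining block is zero
        bi, bj = best
        A[t], A[bi] = A[bi], A[t]
        for row in A:
            row[t], row[bj] = row[bj], row[t]
        p = A[t][t]
        dirty = False
        for i in range(t + 1, n):       # clear column t
            q = A[i][t] // p
            if q:
                A[i] = [a - q * b for a, b in zip(A[i], A[t])]
            dirty = dirty or A[i][t] != 0
        for j in range(t + 1, m):       # clear row t
            q = A[t][j] // p
            if q:
                for i in range(n):
                    A[i][j] -= q * A[i][t]
            dirty = dirty or A[t][j] != 0
        if not dirty:
            t += 1                       # pivot position is finished
    return [A[i][i] for i in range(min(n, m))]

# Reduced Laplacian of the 4-cycle (one row and column deleted).
Lred = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
d = diagonalize(Lred)
# |d| is [1, 1, 4] up to order: the critical group of C4 is Z_4,
# and the product 4 recovers the tree number of the 4-cycle.
```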
\begin{figure} \caption{Cleaving the vertex $v$ replaces it with an anticlique.} \label{fig:cleftexplain} \end{figure} Cleft graphs are similar to graph fibrations, but they obey a weaker unique lifting condition. Hence we will adopt some of the language, including the notions of total graph and base graph. Before presenting the technical definition of a cleft graph we offer the following way to visualize cleaving a single vertex in two. Suspend the graph by the vertex to be cleft, so that the edges which connect it to the rest of the graph are hanging downwards. Carefully drape these edges and the vertex on a chopping block. Then take a very sharp (and infinitely thin) cleaver and cut through the vertex and its incident edges. The vertex is thus cleft into two vertices, and each of the edges incident with the vertex is cleft into two edges, one for each half of the cleft vertex. In this way the vertex to be cleft has been replaced with two nonadjacent vertices, each of which has the same neighborhood as the original vertex. See Figure~\ref{fig:cleftexplain}. In a similar way, we can cleave a vertex $m$-fold, replacing the vertex with an anticlique of $m$ vertices, each with the same neighborhood as the cleft vertex. The structure of a graph after multiple vertices have been cleft does not depend on the order in which the cleavings were performed. So given a graph $B$ and a weight vector on the vertex set of $B$, there is a unique graph $X$ which is obtained from~$B$ by cleaving each vertex of $B$ according to its weight. Moreover, there is a natural projection morphism $p\colon X\to B$ which assigns each vertex in~$X$ to the vertex in $B$ from which it was cleft.
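The cleaving construction is straightforward to carry out programmatically. In the following sketch (our own; the vertex names $(v, i)$ are an illustrative convention) each base vertex $v$ becomes an anticlique of $m_v$ lifts, and each base edge $uv$ becomes $m_u m_v$ edges.

```python
# Cleaving a base graph: each vertex v of B is replaced by the
# anticlique {(v, 0), ..., (v, m_v - 1)}, and every base edge uv is
# replaced by all m_u * m_v pairs of lifts.
def cleave(base_edges, weights):
    edges = []
    for u, v in base_edges:
        for i in range(weights[u]):
            for j in range(weights[v]):
                edges.append(((u, i), (v, j)))
    return edges

# Cleave the path a-b-c with weights 1, 2, 1: the result is a 4-cycle.
X = cleave([("a", "b"), ("b", "c")], {"a": 1, "b": 2, "c": 1})
print(len(X))  # 1*2 + 2*1 = 4 edges
```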
Hence we can define a \define{cleft graph} to be an oriented graph morphism $p\colon X\to B$ which satisfies the following two properties: \begin{itemize} \item (weak unique lifting) For any vertices $\widetilde{u}$, $\widetilde{v}\in VX$, if $e\in EB$ is an edge from~$p(\widetilde{u})$ to~$p(\widetilde{v})$, then the edge $e$ has a unique lift $\widetilde{e}\in EX$ with source $\widetilde{u} = s(\widetilde{e})$ and target~$\widetilde{v} = t(\widetilde{e})$. \item (cleaving) Each fibre of $p$ is a nonempty anticlique. \end{itemize} Observe that since $p$ is a graph morphism, the edge $\widetilde{e}$ mentioned above is a lift of~$e$. We say that the cleft graph is induced by the weight vector~$(m_v)_{v\in VB}$, where the weight of a vertex is the size of its fibre, that is,~$m_v = |p^{-1}(v)|$. A cleft graph is~\define{$m$-uniform} if every fibre has the same size $m$. An example of a non-uniform cleft graph appears in Figure~\ref{fig:cleftpair}. \begin{figure} \caption{A bipartite graph viewed as a cleft path.} \label{fig:cleftpair} \end{figure} Suppose we know the tree number or critical group of a base graph $B$. It is natural to ask how much we can deduce about the tree number or critical group of the total graph of a cleft graph over $B$. It turns out that this is not difficult if the cleaving is uniform or if the base graph is a tree. In Section~\ref{exact-induced}, we determine the tree number of a uniformly-cleft graph. In Section~\ref{cleft-tree}, we determine the tree number of a non-uniformly-cleft tree. \section{The exact sequence of a uniformly-cleft graph}\label{exact-induced} Let $p\colon X\to B$ be a cleft graph. Since the projection $p$ is a graph morphism, it commutes with the boundary map, that is, $\partial p=p\partial$. The interaction between the coboundary map $\trans{\partial}$ and the projection is slightly more complex, and is described by the following lemma.
\begin{lemma}\label{lemma:compressedmakessense} Let $p\colon X\to B$ be a cleft graph with weight vector $\mathbf{m}$. Define a linear map $\varphi\colon C_0(B)\to C_1(B)$ by $\varphi(v) = \sum_e \varphi(e, v)\cdot e$, where \[ \varphi(e, v) = \begin{cases} m_{s(e)} & t(e) = v \\ -m_{t(e)} & s(e) = v \\ 0 & \text{otherwise.} \end{cases} \] Then the diagram \[\xymatrix{ C_0(X)\ar[r]^{\trans{\partial}}\ar[d]_{p} & C_1(X)\ar[d]^{p} \\ C_0(B)\ar[r]_{\varphi} & C_1(B) }\] is commutative. Moreover, if $X$ is an $m$-uniformly cleft graph, then $\varphi = m\trans{\partial}$. \end{lemma} \begin{proof} The composite map $\varphi p$ is given by \[ \varphi p(e, \widetilde{v}) = \sum_{u \in VB} \varphi(e, u) p(u, \widetilde{v}) = \varphi(e, p(\widetilde{v})). \] On the other hand, the composite map $p\trans{\partial}$ is given by \[ p\trans{\partial}(e, \widetilde{v}) = \sum_{\widetilde{f}\in EX} p(e, \widetilde{f}) \trans{\partial}(\widetilde{f}, \widetilde{v}) = \sum_{\widetilde{e} \in p^{-1}(e)} \trans{\partial}(\widetilde{e}, \widetilde{v}). \] The sum vanishes unless $e$ is incident with $p(\widetilde{v})$. The number of lifts of $e$ which have a given endpoint is given by the weight of the vertex at the other endpoint, and the sign of the term $\trans{\partial}(\widetilde{e}, \widetilde{v})$ is determined by whether $p(\widetilde{v})$ is the source or target of $e$. Hence $p\trans{\partial} = \varphi p$, as claimed. \end{proof} Combining Lemma~\ref{lemma:compressedmakessense} with the commutativity relation $\partial p=p\partial$, we can define the \define{compressed Laplacian} of a cleft graph $X$ with respect to its base graph $B$ as the composite map $C = \partial\varphi$. If we define a vector $(M_v)_{v\in VB}$ by \[ M_v = \sum_{u\in\delta(v)} m_u, \] then it follows directly that \[ C(u,v) = \begin{cases} M_u, & u = v \\ -m_u\cdot\#[u,v], & u \ne v \end{cases} \] for any vertices $u$, $v\in VB$.
\begin{corollary}\label{cor:uniformlycompressed} Let $p\colon X\to B$ be an $m$-uniformly cleft graph. Then the compressed Laplacian of $X$ is $m\cdot L(B)$. \end{corollary} For the rest of this section we will specialize to the case of an $m$-uniformly cleft graph $p\colon X\to B$. Let~$(M_v)$ be the vector defined above. Thus $M_v$ is the degree in $X$ of any lift~$\widetilde{v}$ of $v$. Let $S$ denote the transpose of $p$. The map $S$ sends a vertex $v$ to the sum of its lifts, that is, $S(v) = \sum_{\widetilde{v}\in p^{-1}(v)} \widetilde{v}$. Since both $L(X)$ and $C=m\cdot L(B)$ are symmetric matrices, it follows from Lemma~\ref{lemma:compressedmakessense} that the diagram \[\xymatrix{ C_0(B)\ar[r]^{C}\ar[d]_{S} & C_0(B)\ar[d]^{S} \\ C_0(X)\ar[r]_{L} & C_0(X) }\] is commutative. Since $SC = LS$, there is an injection $\coker C\to\coker L$. We can use the fact that~$X$ is a uniformly cleft graph to determine the factor by which splitting increases the tree number. But first we need a lemma. \begin{lemma}\label{lemma:split} Let $p\colon X\to B$ be an $m$-uniformly cleft graph, and define a vector~$(M_v)_{v\in VB}$ by $M_v = m\cdot|\delta(v)|$. If $B$ is connected, then there is an exact sequence \[ 0 \to \coker C \to K(X)\oplus\mathbb{Z} \to \bigoplus_{v\in VB} \mathbb{Z}_{M_v}^{m-1} \to 0 \] of abelian groups. \end{lemma} \begin{proof} Since $B$ is connected, so is $X$. Thus $\coker L = K(X)\oplus\mathbb{Z}$. Applying the snake lemma to the commutative diagram \[\xymatrix{ 0 \ar[r] & C_0(B) \ar[r]^{S}\ar[d]_{C} & C_0(X) \ar[r] \ar[d]_{L} & \coker S\ar[r] \ar[d]_{\overline{L}} & 0 \\ 0 \ar[r] & C_0(B) \ar[r]_{S} & C_0(X) \ar[r] & \coker S\ar[r] & 0 }\] with exact rows yields the exact sequence \[ \ker{\overline{L}} \to \coker C\to K(X)\oplus\mathbb{Z} \to \coker{\overline{L}}\to 0, \] where the map $\overline{L}\colon\coker S\to\coker S$ is induced by $L$.
For any vertex $v$ in $B$, the sum of the lifts of $v$ in $X$ is a representative of zero in $\coker S$, but there are no other relations among the vertices of $X$. The Laplacian sends a lift $\widetilde{v}$ of $v$ to \[ L(\widetilde{v}) = M_v\widetilde{v} - \sum_{u\in\delta(v)}S(u), \] which by our observation represents $M_v\widetilde{v}$ in $\coker S$. Hence we can represent $\overline{L}$ by the block matrix $\bigoplus_{v\in VB} M_v I_{m - 1}$, which is injective and has the desired cokernel. \end{proof} To make this exact sequence useful for enumeration, we need to kill the infinite factors in $\coker C$ and $K(X)\oplus\mathbb{Z}$. The following observation allows us to do this. \begin{lemma}\label{lemma:generator} Let $M$ be an $n\times n$ integer matrix with corank $1$. Let $H$ be the submodule of $\mathbb{Z}^n$ generated by all vectors whose coordinates sum to $0$. If $\im M\subseteq H$, then each standard basis vector $e_i$ represents an infinite generator of $\coker M$, possibly with nonzero torsion part. \end{lemma} \begin{proof} First observe that $\mathbb{Z}^n/H$ is isomorphic to $\mathbb{Z}$ and is generated by the image of any standard basis vector $e_i$. The class of $e_i$ in $\coker M$ has the form $k\cdot \gamma + r$, where~$\gamma$ is the infinite generator of $\coker M$ and $r$ is a torsion element. But this implies that~$\gamma$ is mapped to $k^{-1}$ times the generator of $\mathbb{Z}^n/H$ under the canonical surjection. Hence~$k$ is a unit. \end{proof} \begin{proposition}\label{prop:uniformcase} Let $p\colon X\to B$ be an $m$-uniformly cleft graph with Laplacian $L$ and compressed Laplacian $C$. If $B$ has $n$ vertices, then the tree number of $X$ is given by the formula \[ \kappa(X) = \kappa(B)\cdot m^{n-2}\cdot\prod_{v\in VB} (m\cdot\deg(v))^{m-1}.
\] Moreover, if the Smith normal form of $L(B)$ has the form $\diag(d_1,\dots, d_{n-1}, 0)$, then the critical group of $X$ fits into the exact sequence \[\xymatrix{ 0\ar[r] & \bigoplus_{i=1}^{n-1}\mathbb{Z}_{m\cdot d_i}\ar[r] & K(X)\ar[r] & \biggl(\bigoplus_{u}\mathbb{Z}_{m\cdot\deg(u)}^{m-1}\biggr)/\mathbb{Z}_m\ar[r] & 0. }\] \end{proposition} \begin{proof} We may assume $B$ is connected. The map $\coker C\to\coker L\cong K(X)\oplus\mathbb{Z}$ sends the element $v + \im C$ to $S(v) + \im L$. By Lemma~\ref{lemma:generator} this element can be rewritten as $m\cdot \widetilde{v} + \im L$ plus a torsion element, where $\widetilde{v}$ is a lift of $v$ in $X$. Hence the map must send the infinite generator of $\coker C$ to $m$ times the infinite generator of~$\coker L$. This allows us to embed $K(X)\oplus\mathbb{Z}$ in the commutative diagram \[\xymatrix{ & 0\ar[d] & 0\ar[d] & 0\ar[d] & & \\ 0\ar[r] & \mathbb{Z}\ar[r]^m\ar[d] & \mathbb{Z}\ar[r]\ar[d] & \mathbb{Z}_m\ar[r]\ar[d] & 0 \\ 0\ar[r] & \coker C\ar[r]\ar[d] & K(X)\oplus\mathbb{Z}\ar[r]\ar[d] & \coker\overline{L}\ar[r]\ar[d] & 0 \\ 0\ar[r] & \coker C/\mathbb{Z}\ar[r]\ar[d] & K(X)\ar[r]\ar[d] & \coker \overline{L}/\mathbb{Z}_m\ar[r]\ar[d] & 0 \\ & 0 & 0 & 0 & & }\] with exact rows and columns. Since $X$ is $m$-uniformly cleft, its compressed Laplacian is $C = mL(B)$. Hence $C$ has Smith normal form $\diag(m\cdot d_1,\dots,m\cdot d_{n-1},0)$. This completes the proof. \end{proof} The above proposition measures the growth in tree number produced by uniform splitting. We get the following corollary in the case where the base graph is a tree. \begin{corollary}\label{cor:uniformtree} Let $p\colon X\to T$ be an $m$-uniformly cleft graph whose base graph $T$ is a tree on $n$ vertices. Then the tree number of $X$ is \[ \kappa(X) = m^{n-2}\cdot\prod_{v\in VT}(m\cdot\deg(v))^{m-1}. \] \end{corollary} We would like to extend this method to the case of non-uniformly cleft graphs.
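Before doing so, the uniform formula can be checked numerically. In the following sketch (our own; the vertex numbering is an illustrative choice) we cleave the path on three vertices $2$-uniformly and compare a direct cofactor computation with the formula of Corollary~\ref{cor:uniformtree}, obtaining $\kappa(X) = 2^{1}\cdot 2\cdot 4\cdot 2 = 32$ both ways.

```python
# The path a-b-c cleft 2-uniformly: lifts 0,1 of a; 2,3 of b; 4,5 of c.
# Every lift of b is adjacent to every lift of a and of c.
def det(mat):
    # determinant by cofactor expansion; fine for a 5x5 matrix
    if len(mat) == 1:
        return mat[0][0]
    return sum((-1) ** j * mat[0][j] *
               det([row[:j] + row[j + 1:] for row in mat[1:]])
               for j in range(len(mat)))

edges = [(a, b) for a in (0, 1) for b in (2, 3)] + \
        [(b, c) for b in (2, 3) for c in (4, 5)]
L = [[0] * 6 for _ in range(6)]
for u, v in edges:
    L[u][u] += 1; L[v][v] += 1
    L[u][v] -= 1; L[v][u] -= 1

cofactor = det([row[1:] for row in L[1:]])
# m^(n-2) * prod over base vertices of (m * deg)^(m-1), with m = 2, n = 3
formula = 2 ** (3 - 2) * (2 * 1) ** 1 * (2 * 2) ** 1 * (2 * 1) ** 1
print(cofactor, formula)  # both 32
```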
Since the compressed Laplacian need not be symmetric, it is unclear how to proceed. In the next section, we will extend Corollary~\ref{cor:uniformtree} to the case of non-uniformly cleft trees. However, the proof we give makes essential use of the fact that the base graph is a tree, and it is unclear how to generalize it. \section{Tree numbers of cleft trees}\label{cleft-tree} In this section we count spanning trees of a cleft tree using a weighted analogue of the following classical theorem. \begin{theorem}[Poincar\'e~\cite{Poincare}, Chuard~\cite{Chuard}]\label{Poincare_Chuard} Let $X$ be a graph on $n$ vertices with incidence matrix $A$, and let $A'$ be an $(n-1)\times(n-1)$ submatrix of $A$. The matrix $A'$ is nonsingular (in fact, $\det(A') = \pm 1$) if and only if the columns of $A'$ represent the edges of a spanning tree of $X$. \end{theorem} To motivate the main ideas behind our argument, we study a recursive function on a special class of trees we call weighted marked trees. A \define{weighted marked tree} is a tree $T$ together with a weight vector $\mathbf{m} = (m_v)_{v\in VT}$ and two special vertices, the root $r$ and a marked vertex $q$, which could also be the root. We define a function~$F(T, \mathbf{m}, r, q)$ according to the following recursive procedure. \begin{enumerate} \item If $T$ has no edges, then $F(T, \mathbf{m}, r, q) = 1$. \item Otherwise: \begin{enumerate} \item Let $v$ be a leaf of $T$. Do not select the marked vertex $q$ unless it is the only leaf. \item Let $w$ be the parent of $v$. \item Let $T'$ be the tree obtained from $T$ by collapsing the edge connecting $v$ and $w$ to $w$. Let $\mathbf{m}'$ be the restriction of the weight vector of $T$ to the vertices of $T'$. \item Define a tuple $(q', w')$ by \[ (q', w') = \begin{cases} (q, w), & v \ne q \\ (w, q), & v = q. \end{cases} \] \item With the above notation, $F(T, \mathbf{m}, r, q) = m_{w'}\cdot F(T', \mathbf{m}', r, q')$.
\end{enumerate} \end{enumerate} We illustrate this algorithm by applying it to the tree in Figure~\ref{fig:algtree1}. \begin{figure} \caption{A weighted tree $T$ with root $r$ and marked vertex $b$.} \label{fig:algtree1} \end{figure} \noindent In order, we select the vertices $c$, $f$, $d$, and $a$, collapsing the edges $ac$, $bf$, $ad$, and~$ra$, and picking up the weights $m_a$, $m_b$, $m_a$, and $m_r$. After these collapses, the tree has been reduced to the tree $T'$ displayed in Figure~\ref{fig:algtree2}. \begin{figure} \caption{The tree $T'$ obtained from $T$ by collapsing several edges.} \label{fig:algtree2} \end{figure} Now the marked vertex $b$ is the only leaf, so we must select it. Thus we collapse $rb$ to $r$ and move the marker from~$b$ to $r$. Since $b$ was marked, we pick up its weight, $m_b$, rather than the weight of its parent. The collapsed tree has no more edges, so there are no more steps to perform. The value of $F$ on the tree~$T$ is $m_a^2\cdot m_b^2\cdot m_r$. Notice that for each non-marked vertex~$v$, the factor $m_v$ appears in~$F$ a total of $\deg(v) - 1$ times. The factor $m_b$ appears twice. This property holds for any weighted marked tree, as we now show. \begin{lemma}\label{lemma:weightedmarkedtree} Let $(T, \mathbf{m}, r, q)$ be a weighted marked tree, and let $F$ be the function defined above. Then \[ F(T, \mathbf{m}, r, q) = m_q \cdot \prod_{v\in VT} m_v^{\deg(v) - 1}. \] \end{lemma} \begin{proof} Let $v$ be a vertex of $T$. There are three cases, depending on the position of the marked vertex. \textbf{Case 1.} Neither $v$ nor any of its children is marked. Each child of $v$ contributes a factor of $m_v$ to the value of $F$. Since $v$ is not marked, it contributes the weight of its parent to the value of $F$ when selected as a leaf. Hence $v$ contributes a total of~$m_v^{\deg(v) - 1}$ to the value of $F$. \textbf{Case 2.} The vertex $v$ is marked.
If $v$ is marked, there is a contribution of $m_v$ for each of its children as well as a contribution of $m_v$ when it is selected as a leaf. Hence $v$ contributes a total of~$m_v^{\deg(v)}$ to the value of $F$. \textbf{Case 3.} The vertex $v$ has a marked descendant. Hereditarily unmarked children of $v$ behave as in Case 1. Hence we may assume that $v$ has the marked vertex as its unique child. When the child of $v$ is selected, it contributes nothing to the exponent of $m_v$, but then the mark is passed from the child to $v$. So when $v$ is selected as a leaf, it contributes a weight of $m_v$ to the value of $F$. Hence $v$ contributes a total of~$m_v^{\deg(v) - 1}$ to the value of $F$. \end{proof} The next step is to observe that the function~$F$ is, up to a sign, the result of computing a determinant by cofactor expansion. Recall that the compressed Laplacian $C$ of a cleft graph factors as $C = \partial\varphi$, where $\partial\colon C_1(B)\to C_0(B)$ is the boundary map and $\varphi\colon C_0(B)\to C_1(B)$ is the map defined in Lemma~\ref{lemma:compressedmakessense}. \begin{lemma}\label{lemma:algtreeisdeterminant} Let $p\colon X\to T$ be a cleft tree with weight vector $\mathbf{m}$. Select a root $r$ for $T$ and orient all edges away from the root. Let $M$ be a matrix representing $\varphi$, and let $K$ be a matrix representing $\partial$. Then the determinant of $MK$ is \[ \det(MK) = \left(\sum_{q\in VT} m_q\right)\cdot\prod_{v\in VT} m_v^{\deg(v) - 1}. \] \end{lemma} \begin{proof} By the Binet--Cauchy theorem, the determinant of $MK$ is given by the sum \[ \det(MK) = \sum_{q\in VT} \det(M_q)\cdot\det(K_q), \] where $M_q$ is obtained from $M$ by striking the column corresponding to $q$, and $K_q$ is obtained from $K$ by striking the row corresponding to $q$. It follows from Theorem~\ref{Poincare_Chuard} that $\det(K_q) = \pm 1$. To evaluate~$\det(M_q)$, select a leaf $v$ of the tree $T$, let $w$ be the parent of $v$, and let $e$ be the edge from $w$ to $v$.
If $v\ne q$, then by cofactor expansion about the $(v, e)$ entry of $M_q$, \[ \det(M_q) = \pm m_w\cdot \det(M'_q), \] where $M'_q$ is the submatrix of $M_q$ obtained by striking the column corresponding to $v$ and the row corresponding to its unique incident edge. If $v=q$, then by cofactor expansion about the $(w, e)$ entry of $M_q$, \[ \det(M_q) = \pm m_v\cdot \det(M'_q). \] Up to a sign, this recursive computation of $\det(M_q)$ agrees with the recursive computation of the function~$F(T, \mathbf{m}, r, q)$. By computing the determinant of $K_q$ in the same way we see that $\det(K_q)$ is equal to the sign of $\det(M_q)$. Applying Lemma~\ref{lemma:weightedmarkedtree}, we conclude that \[ \det(M_q)\cdot\det(K_q) = m_q \cdot \prod_{v\in VT} m_v^{\deg(v)-1}. \] Summing over all $q\in VT$ completes the proof. \end{proof} We need the following technical lemma. \begin{lemma}[Horn--Johnson~{\cite[Theorem 1.3.20]{Horn_Johnson}}]\label{lemma:eigen} Suppose $r\le n$. Let $P$ be an $n\times r$ matrix and $Q$ be an $r\times n$ matrix. Then the eigenvalues of $QP$ are also eigenvalues of $PQ$, with (at least) the same multiplicity. All other eigenvalues of $PQ$ are $0$. \end{lemma} Now we use the above results to count the spanning trees of a cleft graph. \begin{theorem}\label{thm:clefttree} Let $p\colon X\to T$ be a cleft graph with weight vector $(m_v)_{v\in VT}$, and define a vector~$(M_v)_{v\in VT}$ by $M_v = \sum_{u\in\delta(v)} m_u$. If $T$ is a tree, then the tree number of~$X$ is \[ \kappa(X) = \prod_{v\in VT} (M_v^{m_v - 1}\cdot m_v^{\deg(v) - 1}). \] \end{theorem} \begin{proof} The graph $X$ has Laplacian matrix $L$ and compressed Laplacian~$C=\partial\varphi$. Suppose $T$ has $n$ vertices, and let $N$ denote the sum \[ N = \sum_{v\in VT} m_v, \] that is, $N$ is the number of vertices of $X$.
By Theorem~\ref{thm:matrixtree}, the tree number of $X$ is \[ \kappa(X) = \frac{1}{N}\prod_{i=2}^{N}\lambda_i, \] where $\lambda_1\le\dots\le\lambda_{N}$ are the eigenvalues of~$L$. The diagonal entries of~$L$ have the form~$M_v$, each such entry occurring~$m_v$ times. For each~$v\in VT$, the Laplacian of $X$ has eigenvalue~$M_v$ occurring with multiplicity $m_v - 1$; indeed, any two lifts of $v$ have identical neighborhoods, so their difference is an eigenvector of $L$ with eigenvalue $M_v$. This leaves $n$ eigenvalues to be determined. Since the rows and columns of~$L$ sum to zero, one of these eigenvalues is~$\lambda_1 = 0$. From the fact that $L\trans{p} = \trans{p}\trans{C}$ we conclude that every eigenvalue of $\trans{C}$ (hence also $C$) is an eigenvalue of $L$. Since $T$ is a tree, it has one more vertex than it has edges, so while $C = \partial\varphi$ is an $n\times n$ matrix, its companion $\varphi\partial$ is an $(n-1)\times(n-1)$ matrix. Applying Lemma~\ref{lemma:eigen}, we conclude that the product of the remaining eigenvalues of~$L$ is $\det(\varphi\partial)$. But it follows from Lemma~\ref{lemma:algtreeisdeterminant} that \[ \det(\varphi\partial) = \left(\sum_{q\in VT} m_q\right)\cdot\prod_{v\in VT} m_v^{\deg(v) - 1} = N\cdot\prod_{v\in VT} m_v^{\deg(v) - 1}. \] Hence \[ \kappa(X) = \prod_{v\in VT} M_v^{m_v - 1}\cdot \prod_{v\in VT} m_v^{\deg(v) - 1}, \] which is what we wanted to show. \end{proof} \section{Concluding remarks} \label{section_critical_remarks} The arguments used to study uniformly-cleft graphs and non-uniformly-cleft trees are different enough that it is unclear what form a possible common generalization would take. We can compute the critical group explicitly in some simple cases, such as a uniformly-cleft path. However, the available techniques for working with these structures do not yet generalize even to the case of uniformly-cleft trees. We would like to have a leaf-cutting procedure, similar to the weighted analogue of the Poincar\'e--Chuard theorem, which operates on the critical group level.
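The formula of Theorem~\ref{thm:clefttree} can also be checked mechanically on a small non-uniform example (our own illustration): cleaving the path $a$--$b$--$c$ with weights $(1, 2, 1)$ yields the $4$-cycle, for which the formula predicts $(2^0\cdot 1^0)(2^1\cdot 2^1)(2^0\cdot 1^0) = 4$, the tree number of $C_4$.

```python
# Tree number of the 4-cycle (the path a-b-c cleft with weights 1,2,1,
# with vertex order a, b1, c, b2) via a Laplacian cofactor.
def det(mat):
    if len(mat) == 1:
        return mat[0][0]
    return sum((-1) ** j * mat[0][j] *
               det([row[:j] + row[j + 1:] for row in mat[1:]])
               for j in range(len(mat)))

L = [[2, -1, 0, -1],
     [-1, 2, -1, 0],
     [0, -1, 2, -1],
     [-1, 0, -1, 2]]
cofactor = det([row[1:] for row in L[1:]])
print(cofactor)  # 4, matching the formula's prediction
```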
\begin{center} Copyright \copyright\ MLE Slone 2008 \end{center} \vanish{ \section{Critical groups of uniformly-cleft spiders} \label{section_spider_groups} \begin{proposition} \label{proposition_even_spider} Let $T$ be a spider on $n$ vertices with $s$ legs, the $i$th leg of which is of even length $n_i=2k_i$, and let $T_k$ be the $k$-regular splitting of $T$. Then \[ K(T_k)\cong \mathbb{Z}_k^{s(k-2)}\oplus\mathbb{Z}_{2k}^{(n-1)(k-2)}\oplus \mathbb{Z}_{2k^2}^{n-1}\oplus\mathbb{Z}_{sk^2}\oplus\mathbb{Z}_{k^2}^{s-2}\oplus\mathbb{Z}_{sk}^{k-2}. \] \end{proposition} \begin{proof} Pivot, pivot, pivot. \end{proof} } \setcounter{secnumdepth}{-1} \chapter{Vita} \begin{itemize} \item Education: \begin{itemize} \item 2008: Ph.D. (expected), University of Kentucky \item 2003: MA, University of Kentucky \item 2001: BA, Morehead State University \end{itemize} \item Professional positions held: \begin{itemize} \item 2001--2008: Teaching assistant, University of Kentucky \item 2000: Markup editor, Institute for Regional Analysis and Public Policy \item 1997--2001: Technical editor, Lexmark-MSU Writing Project \end{itemize} \item Scholastic and professional honors: \begin{itemize} \item Presidential Graduate Fellowship \item Edgar Enochs Scholarship in Algebra \item Daniel Reedy Quality Fellowship \end{itemize} \end{itemize} \end{document}
\begin{document} \begin{center} {\fontsize{18}{22}\selectfont \bf Regularization of the restricted $(n+1)$--body problem on curved spaces} \end{center} \begin{center} {\bf Ernesto P\'erez-Chavela$^1$, and Juan Manuel S\'anchez-Cerritos$^2$}\\ $^1$Departamento de Matem\'aticas\\ Instituto Tecnol\'ogico Aut\'onomo de M\'exico, Mexico City, Mexico\\ $^2$Department of Mathematics\\ Sichuan University, Chengdu, People's Republic of China\\ [email protected], [email protected] \end{center} \abstract{ We consider $(n+1)$ bodies moving under their mutual gravitational attraction in spaces with constant Gaussian curvature $\kappa$. In this system, $n$ primary bodies with equal masses form a relative equilibrium solution with a regular polygon configuration, and the remaining body of negligible mass does not affect the motion of the others. We show that the singularity due to binary collision between the negligible mass and the primaries can be regularized both locally and globally through suitable changes of coordinates (Levi-Civita and Birkhoff type transformations). } \ {\bf Keywords:} Curved $n$-body problem, Regularization \section{Introduction} We consider the generalization of the gravitational $n$--body problem to spaces of constant curvature proposed by Diacu, P\'erez-Chavela and Santoprete \cite{Diacu,Diacu2}. The problem has its roots in the ideas about non-Euclidean geometries proposed by Lobachevsky and Bolyai in the 19th century \cite{Bolyai,Lovachevski}. For more details about the history of this fascinating problem we refer the interested reader to \cite{Diacu4}. The restricted $(n+1)$--body problem on curved spaces refers to the study of the motion of a particle $q$, with negligible mass, moving under the gravitational attraction of $n$ other particles, called primaries. The mass of the particle $q$ is so small that the motion of the primaries is not affected by it. 
In a personal communication, Carles Sim\'o pointed out to us that, through a suitable change of coordinates followed by a rescaling of time, we can restrict our analysis to the cases of curvature $\kappa = 1$ and $\kappa = -1$ (see \cite{Diacu4} for details). In this way, we consider the sphere of radius $1$ embedded in $\mathbb{R}^3$, denoted by $\mathbb{S}^2$, as the model for positive curvature, and the sphere (with Lorentzian metric) of imaginary radius $i$, denoted by $\mathbb{H}^2$, as the model for negative curvature. In the classical case (zero curvature), the restricted three-body problem was first proposed by Euler in the 18th century. One of the main obstacles in the study of restricted problems is the presence of singularities due to collision. We start with a fixed configuration for the primaries, avoiding collisions among them, but the possibility of collision of the massless particle with one or more of the primaries still persists. This problem was first tackled by Levi-Civita \cite{Levi} and some years later by Birkhoff \cite{Birkhoff}. The first technique is useful for regularizing just one singularity; we call this local regularization. There are several generalizations of this technique applied to the restricted three-body problem; see for instance \cite{Roman}. Levi-Civita regularization and its generalizations are very helpful for analyzing the dynamics of a satellite in an orbit close to a massive body, and in general in space missions exploring a single planet. However, sometimes it is necessary to have a global picture of the solutions, for instance to study escapes of particles or possible connections among the equilibrium points; in that case one needs a global regularization of all singularities due to collision, and for this we use Birkhoff's technique. Both techniques consist of a suitable change of coordinates followed by a reparametrization of time. 
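For intuition, the flat-space Levi-Civita recipe can be sketched in a few lines. This is our own illustration of the classical Kepler case (not the curved setting of this paper), with the normalization $z=w^2$, $Z=W/(2\overline{w})$, $dt/ds=|w|^2$:

```python
# Sketch (our illustration, classical flat Kepler problem): under the
# Levi-Civita map z = w^2, Z = W/(2 conj(w)), dt/ds = |w|^2, the regularized
# Hamiltonian Hhat = |w|^2 * (H - h) on the energy level H = h extends
# smoothly to the collision w = 0, where it tends to |W|^2/8 - mu.
def H(z, Z, mu=1.0):                  # Kepler Hamiltonian, singular at z = 0
    return abs(Z) ** 2 / 2 - mu / abs(z)

def Hhat(w, W, h, mu=1.0):            # regularized Hamiltonian in (w, W)
    z = w * w
    Z = W / (2 * w.conjugate())
    return abs(w) ** 2 * (H(z, Z, mu) - h)

h, W = -0.5, 2.0 + 0.0j
limit = abs(W) ** 2 / 8 - 1.0         # expected value of Hhat at w = 0
gap = max(abs(Hhat(complex(10.0 ** -k, 0.0), W, h) - limit)
          for k in range(1, 8))
print(gap < 1e-2)                     # prints True
```

While $H$ blows up like $1/|w|^2$ at collision, $\hat{H}$ stays bounded as $w\to 0$, which is the whole point of the regularization; the curved analogues below follow the same pattern.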
Some years ago this classical (zero curvature) problem was extended to more bodies, and the problem of regularizing the binary collisions was also tackled \cite{Vidal2}. Recently a restricted problem was also proposed on curved spaces: the restricted three-body problem in which the two primaries of different masses move on different planes around the $z$-axis \cite{Vidal3}. In this paper we consider the restricted $(n+1)$--body problem on $\mathbb{S}^2$ and $\mathbb{H}^2$, where $n$ particles with equal masses move on a plane parallel to the $xy$-plane forming a regular polygon configuration. We find the regularizing transformations that allow us to avoid the singularities due to collisions between these particles and the remaining body of negligible mass. It turns out that the transformations are similar to those of the classical Newtonian problem. We believe that these problems can have important applications in the dynamics of the components of an atom and in new astronomical discoveries, usually studied only through quantum mechanics and relativity; nevertheless, we leave the analysis of possible applications for a forthcoming paper and concentrate here on the theoretical aspects of the problem. After this introduction the paper is organized as follows. In Section 2 we show the existence of polygonal relative equilibria formed by the $n$ primary bodies with equal masses. In Section 3 we set up the problem and present the equations in a convenient form (a suitable rotating frame). In Section 4 we present the main results of the work; there we use the stereographic projection to remove the constraints and work with complex variables, and then obtain the local and global regularization of the binary collisions. 
\section{Relative Equilibria} We consider the motion of the $n$ primary bodies, where each particle, denoted by $q_i$, moves on the space $\mathbb{S}^2$ or $\mathbb{H}^2$. In this section we show the existence of solutions given by relative equilibria. The motion of the particles is governed by the cotangent potential \[ U(q)=\sum_{i<j}m_im_j\,\mathrm{cotn}(d(q_i,q_j)), \] where $d$ is the geodesic distance on the corresponding space $\mathbb{S}^2$ or $\mathbb{H}^2$, and $\mathrm{cotn}$ denotes the usual cotangent function or the hyperbolic cotangent function, respectively. The kinetic energy is defined as \[ T=\dfrac{1}{2}\sum_{i}m_i\dot{q}_i \odot\dot{q}_i, \] where the symbol $\odot$ represents the usual inner product if we consider $\mathbb{S}^2$, or the Lorentzian inner product if $\mathbb{H}^2$ is considered (in this case, for $a,b \in \mathbb{R}^3$ we have $a \odot b = a_xb_x + a_yb_y - a_zb_z$). From the Euler--Lagrange equations, the equations of motion take the form \begin{equation}\label{systemS2} \ddot{q}_i=\sum_{j=1,j\neq i}^n\dfrac{q_j-\sigma(q_i\odot q_j)q_i}{[\sigma-\sigma(q_i\odot q_j)^2]^{3/2}}-\sigma(\dot{q}_i\odot \dot{q}_i)q_i, \ \ \ i=1,\cdots, n, \end{equation} where $\sigma$ stands for $1$ if we analyze $\mathbb{S}^2$ or $-1$ for $\mathbb{H}^2$. \subsection{Relative Equilibria on $\mathbb{S}^2$} Consider the group of isometries $SO(3)$ acting on $\mathbb{R}^3$; it is well known that it consists of all uniform rotations. The relative equilibria are solutions of the equations of motion that are invariant under the group $SO(3)$. Now, since the principal axis theorem states that any $A \in SO(3)$ can be written, in some orthonormal basis, as a rotation about a fixed axis, we take this axis to be the $z$-axis. 
Hence we can characterize this result as follows. \begin{prop} A solution $q_i, i=1,\cdots,n$, of the equations of motion on $\mathbb{S}^2$ is a relative equilibrium if and only if $q_i=(x_i,y_i,z_i)$, with $x_i=r_i \cos(\Omega t + \alpha_i),$ $y_i=r_i \sin(\Omega t + \alpha_i),$ and $z_i= \sqrt{1-r_i^2}$, where $\Omega, \alpha_i$ and $r_i, i=1,\cdots,n$, are constants. \end{prop} \begin{proof} The result follows directly by straightforward computations; we omit the details here. \end{proof} Now we will show that if, in the above proposition, $z_i=z_j \,\, \forall i,j$, then it is possible to find relative equilibria. \begin{theorem} For $n$ equal masses on $\mathbb{S}^2$ with a regular polygon initial configuration with the bodies at a height $z=$ constant $\neq 0$, there exist a positive and a negative value of the angular velocity such that the solution is a relative equilibrium. \end{theorem} \begin{proof} Consider $n$ particles with equal masses $m=1$ in a regular polygon initial configuration. The position of the $i$-th body at a given time $t$ is $q_i(t)=(x_i(t),y_i(t),z(t))$, where \[x_i(t)=r \cos\left[ \Omega t +(i-1)\dfrac{2 \pi}{n}\right], \ \ \ y_i(t)=r \sin\left[ \Omega t +(i-1)\dfrac{2 \pi}{n}\right], \ \ z(t) \neq 0.\] We will show that there exist values of $\Omega$ such that the above functions satisfy (\ref{systemS2}). By symmetry we can consider only the equations of motion for the $x_i(t)$ coordinates. We have \begin{equation} \begin{split} x_{i+k}=&r \cos\left[ \Omega t +(i+k-1)\dfrac{2 \pi}{n}\right],\\ x_{i-k}=&r \cos\left[ \Omega t +(i-k-1)\dfrac{2 \pi}{n}\right],\\ \dot{x}_i(t)=&-\Omega r\sin\left[ \Omega t +(i-1)\dfrac{2 \pi}{n}\right],\\ \ddot{x}_i(t)=&-\Omega^2 r\cos\left[ \Omega t +(i-1)\dfrac{2 \pi}{n}\right]. \end{split} \end{equation} Let $A=\Omega t+(i-1)\dfrac{2\pi}{n}$. 
We have for $n$ odd \begin{equation} \begin{split} \ddot{x}_i=&\sum_{j=1,j\neq i}^n\dfrac{x_j-(q_i\cdot q_j)x_i}{[1-(q_i\cdot q_j)^2]^{3/2}}-(\dot{q}_i\cdot \dot{q}_i)x_i. \end{split} \end{equation} For each $i$ we enumerate the particles as $(-\frac{n+1}{2}+i+1,\cdots,i-1,i,i+1,\cdots, \frac{n+1}{2}+i-1)$. Hence \begin{equation} \begin{split} \ddot{x}_i =&\sum_{j=i+1}^{\frac{n+1}{2}+i-1}\dfrac{x_j-(q_i\cdot q_j)x_i}{[1-(q_i\cdot q_j)^2]^{3/2}}+\sum_{j=i-1}^{-\frac{n+1}{2}+i+1}\dfrac{x_j-(q_i\cdot q_j)x_i}{[1-(q_i\cdot q_j)^2]^{3/2}}-(\dot{q}_i\cdot \dot{q}_i)x_i\\ =&\sum_{j=1}^{\frac{n+1}{2}-1}\dfrac{x_{j+i}-(q_i\cdot q_{j+i})x_i}{[1-(q_i\cdot q_{j+i})^2]^{3/2}}+\sum_{j=-1}^{-\frac{n+1}{2}+1}\dfrac{x_{j+i}-(q_i\cdot q_{j+i})x_i}{[1-(q_i\cdot q_{j+i})^2]^{3/2}}-(\dot{q}_i\cdot \dot{q}_i)x_i\\ =&\sum_{j=1}^{\frac{n+1}{2}-1}\dfrac{x_{j+i}-(q_i\cdot q_{j+i})x_i}{[1-(q_i\cdot q_{j+i})^2]^{3/2}}+\sum_{j=1}^{\frac{n+1}{2}-1}\dfrac{x_{i-j}-(q_i\cdot q_{i-j})x_i}{[1-(q_i\cdot q_{i-j})^2]^{3/2}}-(\dot{q}_i\cdot \dot{q}_i)x_i.\\ \end{split} \end{equation} Notice that $\ddot{x}_i=-\Omega^2 x_i$, \ $x_{i\pm j}=x_i\cos(2j\pi /n)\mp y_i\sin(2j\pi /n)$, \ $q_i\cdot q_{i-j}=q_i\cdot q_{i+j}=r^2 \cos(2j\pi/n)+1-r^2$ \ and \ $\dot{q}_i\cdot \dot{q}_i=r^2 \Omega^2$. With these facts we obtain \begin{equation}\label{w1} \Omega^2=2\sum_{j=1}^{\frac{n+1}{2}-1}\dfrac{1-\cos(2j\pi/n)}{[1-(r^2 \cos(2j\pi/n)+1-r^2)^2]^{3/2}}. \end{equation} For $n$ even we have \begin{equation} \begin{split} \ddot{x}_i=&\sum_{j=1,j\neq i}^n\dfrac{x_j-(q_i\cdot q_j)x_i}{[1-(q_i\cdot q_j)^2]^{3/2}}-(\dot{q}_i\cdot \dot{q}_i)x_i. \end{split} \end{equation} For each $i$ we enumerate the particles as $(-\frac{n}{2}+i+1,\cdots,i-1,i,i+1,\cdots, \frac{n}{2}+i-1)$. 
Hence \begin{equation} \begin{split} \ddot{x}_i=&\sum_{j=i+1}^{\frac{n}{2}+i-1}\dfrac{x_j-(q_i\cdot q_j)x_i}{[1-(q_i\cdot q_j)^2]^{3/2}}+\sum_{j=i-1}^{-\frac{n}{2}+i+1}\dfrac{x_j-(q_i\cdot q_j)x_i}{[1-(q_i\cdot q_j)^2]^{3/2}}\\ & +\dfrac{x_{\frac{n}{2}+i}-(q_i\cdot q_{\frac{n}{2}+i})x_i}{[1-(q_i\cdot q_{\frac{n}{2}+i})^2]^{3/2}}-(\dot{q}_i\cdot \dot{q}_i)x_i\\ =&\sum_{j=1}^{\frac{n}{2}-1}\dfrac{x_{j+i}-(q_i\cdot q_{j+i})x_i}{[1-(q_i\cdot q_{j+i})^2]^{3/2}}+\sum_{j=1}^{\frac{n}{2}-1}\dfrac{x_{i-j}-(q_i\cdot q_{i-j})x_i}{[1-(q_i\cdot q_{i-j})^2]^{3/2}}\\ &+ \dfrac{-x_{i}-(q_i\cdot q_{\frac{n}{2}+i})x_i}{[1-(q_i\cdot q_{\frac{n}{2}+i})^2]^{3/2}}-(\dot{q}_i\cdot \dot{q}_i)x_i. \end{split} \end{equation} And we have \begin{equation}\label{w2} \Omega^2=2\sum_{j=1}^{\frac{n}{2}-1}\dfrac{1-\cos(2j\pi/n)}{[1-(r^2 \cos(2j\pi/n)+1-r^2)^2]^{3/2}}+\dfrac{1}{4r^3(1-r^2)^{3/2}}. \end{equation} The right-hand sides of (\ref{w1}) and (\ref{w2}) are positive; hence we conclude that there exist a positive and a negative value of $\Omega$ that generate relative equilibria. \end{proof} If the particles are on the equator of $\mathbb{S}^2$, then we have the following \cite{paper1}: if $n$ is odd and the particles have equal masses, forming a regular polygon configuration moving with constant angular velocity $\Omega$, then the positions and velocities form a solution of relative equilibrium for any $\Omega \in \mathbb{R}$. \subsection{Relative Equilibria on $\mathbb{H}^2$} Consider the orthogonal transformations of determinant $\pm 1$ that leave $\mathbb{H}^2$ invariant; they form a closed group called the \textit{Lorentz group}, $Lor(\mathbb{R}^{2,1},\odot)$. 
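Before turning to the hyperbolic case, we remark that the positivity of the right-hand sides of (\ref{w1}) and (\ref{w2}) is easy to confirm numerically. The following sketch is our own illustration, sampling a few values of $n$ and $r\in(0,1)$:

```python
import math

# Our numerical illustration: evaluate the right-hand sides of (w1) (n odd)
# and (w2) (n even) for sample radii 0 < r < 1, confirming Omega^2 > 0.
def omega_squared(n, r):
    s = 2 * sum(
        (1 - math.cos(2 * j * math.pi / n))
        / (1 - (r**2 * math.cos(2 * j * math.pi / n) + 1 - r**2) ** 2) ** 1.5
        for j in range(1, (n + 1) // 2))     # j = 1, ..., ceil(n/2) - 1
    if n % 2 == 0:                           # extra antipodal term in (w2)
        s += 1 / (4 * r**3 * (1 - r**2) ** 1.5)
    return s

all_positive = all(omega_squared(n, r) > 0
                   for n in (3, 4, 5, 6) for r in (0.3, 0.6, 0.9))
print(all_positive)                          # prints True
```

Every summand is positive, so each sampled $\Omega^2$ is positive, in agreement with the argument above.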
The principal axis theorem in this case states that every $G \in Lor(\mathbb{R}^{2,1},\odot)$ has one of the following canonical forms: \[A=P \left( \begin{array}{ccc} \cos \theta & -\sin \theta & 0 \\ \sin \theta & \cos \theta & 0 \\ 0 & 0 & 1 \end{array} \right) P^{-1},\] \[B=P \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cosh s & \sinh s \\ 0 & \sinh s & \cosh s \end{array} \right) P^{-1},\] or \[C= P \left( \begin{array}{ccc} 1 & -t & t \\ t & 1-\frac{t^2}{2} & \frac{t^2}{2} \\ t & -\frac{t^2}{2} & 1+\frac{t^2}{2} \end{array} \right) P^{-1},\] where $\theta \in [0,2 \pi),s,t \in \mathbb{R}$, and $P \in Lor(\mathbb{R}^{2,1},\odot)$. The above transformations are called elliptic, hyperbolic, and parabolic, respectively. It is well known that there are no solutions generated by parabolic transformations \cite{Diacu}. Those solutions generated by elliptic or hyperbolic transformations are called elliptic or hyperbolic relative equilibria. In this work we are interested in solutions where the $n$ primaries form relative equilibria with a regular polygon configuration. In a recent paper the authors proved the nonexistence of this kind of hyperbolic relative equilibria, in particular the nonexistence of Lagrangian hyperbolic relative equilibria \cite{paperh2}. Hence we will focus only on elliptic relative equilibria. Analogously to the case of $\mathbb{S}^2$ we have the following result. \begin{theorem} For $n$ equal masses on $\mathbb{H}^2$ with a regular polygon initial configuration with the bodies at a height $z=$ constant $\neq 0$, there exist a positive and a negative value of the angular velocity such that the solution is a relative equilibrium. \end{theorem} \begin{proof} The proof is similar, noticing that $q_i \odot q_{i-j}=q_i \odot q_{i+j}=r^2\cos(\frac{2\pi j}{n})-1-r^2$. 
For $n$ odd the angular velocity satisfies \begin{equation}\label{w3} \Omega^2=2\sum_{j=1}^{\frac{n+1}{2}-1}\dfrac{1-\cos(2j\pi/n)}{[-1+(r^2 \cos(2j\pi/n)-1-r^2)^2]^{3/2}}. \end{equation} For $n$ even, \begin{equation}\label{w4} \Omega^2=2\sum_{j=1}^{\frac{n}{2}-1}\dfrac{1-\cos(2j\pi/n)}{[-1+(r^2 \cos(2j\pi/n)-1-r^2)^2]^{3/2}}+\dfrac{1}{4r^3(1+r^2)^{3/2}}. \end{equation} Since the right-hand sides of (\ref{w3}) and (\ref{w4}) are positive, we conclude that there exist a positive and a negative value of the angular velocity that lead to elliptic relative equilibria. \end{proof} \section{The restricted curved $(n+1)$--body problem} In order to unify our analysis we introduce the notation $\mathbb{M}^2$ to denote either $\mathbb{S}^2$ or $\mathbb{H}^2$. The restricted curved $(n+1)$--body problem refers to the study of a system of $n+1$ particles moving under their mutual attraction on $\mathbb{M}^2$, where $n$ bodies of equal masses (the primaries), with positions $q_i$, rotate on a circle parallel to the $xy$-plane, with angular velocity given by (\ref{w1}) or (\ref{w2}), in a regular polygon configuration. The remaining body, located at position $q$, has negligible mass and its motion is given by the following equation \begin{equation} \ddot{q}=\sum_{i=1}^n\dfrac{q_i-\sigma(q_i\odot q)q}{[\sigma-\sigma(q\odot q_i)^2]^{3/2}}-\sigma(\dot{q} \odot \dot{q})q. \end{equation} As in the classical case, we introduce rotating coordinates. Let $q=RQ$ with $Q=(\xi,\eta,\vartheta)^T$, where $R$ is the rotation matrix \[R= \left( \begin{array}{ccc} \cos \Omega t & -\sin \Omega t & 0 \\ \sin \Omega t & \cos \Omega t & 0 \\ 0 & 0 & 1 \end{array} \right).\] After a straightforward computation, the equations of motion in the new variables are \begin{equation}\label{eqn} \begin{split} \ddot{Q}-2\Omega J \dot{Q}+ [\sigma(\dot{\xi}-\Omega \eta)^2+\sigma(\dot{\eta}+\Omega \xi)^2+\dot{\vartheta}^2]Q =\nabla_{Q}\left(\dfrac{\Omega^2}{2}(\xi^2+\eta^2)\right. \\ \left. 
+\sum_{i=1}^n\dfrac{Q_i\odot Q}{\left(\sigma-\sigma(Q_i \odot Q)^2\right)^{1/2}}\right), \end{split} \end{equation} with \[J= \left( \begin{array}{ccc} 0 & 1 & 0 \\ -1& 0 & 0 \\ 0 & 0 & 0 \end{array} \right). \] In these coordinates the positions of the primaries take the form \[ Q_i=\left[r\cos \left(\frac{2 \pi}{n}(i-1)\right),r\sin \left(\frac{2 \pi}{n}(i-1)\right),z\right]. \] \textbf{Stereographic projection } In our analysis we consider the stereographic projection from the point $(0,0,-1)$ to $\mathbb{R}^2$, \ $\Pi: \mathbb{M}^2 \rightarrow \mathbb{R}^2$. This function maps $Q\longmapsto (u,v)$, with \[ u=\dfrac{\xi}{1+\vartheta}, \ \ v=\dfrac{\eta}{1+\vartheta}. \] The inverse function $\Pi^{-1}$ maps $(u,v) \longmapsto Q$, where \[ \xi=\dfrac{2u}{1+\sigma(u^2+v^2)}, \ \ \eta=\dfrac{2v}{1+\sigma(u^2+v^2)}, \ \ \vartheta=\dfrac{1-\sigma(u^2+v^2)}{1+\sigma(u^2+v^2)}. \] It is known that $\Pi$ maps $\mathbb{S}^2$ (minus the projection point) onto the whole plane $\mathbb{R}^2$, endowed with the metric $ds^2=\dfrac{4(du^2+dv^2)}{(1+u^2+v^2)^2}$; this plane with this metric is known by some authors as the curved plane. In the case of $\mathbb{H}^2$, the space is projected onto the open unit disk with the metric $ds^2=\dfrac{4(du^2+dv^2)}{(1-u^2-v^2)^2}$; this is the well-known model of hyperbolic geometry called the Poincar\'e disk. Under $\Pi$, the primaries, originally on $\mathbb{M}^2$, are now located at $w_i=\dfrac{1}{1+z}(k_i,h_i)$, with $k_i=r\cos \left(\frac{2 \pi}{n}(i-1)\right)$ and $h_i=r\sin \left(\frac{2 \pi}{n}(i-1)\right)$. The right-hand side of (\ref{eqn}), the so-called effective potential, can be written as \begin{equation} \begin{split} &\dfrac{\Omega^2}{2}(\xi^2+\eta^2)+\sum_{i=1}^n\dfrac{Q_i\odot Q}{\left(\sigma-\sigma(Q_i \odot Q)^2\right)^{1/2}}\\ =& \dfrac{2 \Omega^2(u^2+v^2)}{(1+\sigma(u^2+v^2))^2}+\sum_{i=1}^n\dfrac{2k_iu+2h_iv+\sigma z(1-\sigma(u^2+v^2))}{[\sigma(1+\sigma(u^2+v^2))^2-\sigma(2k_iu+2h_iv+\sigma z(1-\sigma(u^2+v^2)))^2]^{1/2}}\\ =:&\Psi(u,v)+U(u,v). 
\end{split} \end{equation} \section{Regularization} We first write the problem as a Hamiltonian system, with Hamiltonian function given by \begin{equation}\label{ham} H(u,v,p_u,p_v)=\dfrac{(1+\sigma(u^2+v^2))^2}{8}(p_u^2+p_v^2)+\Omega(vp_u-up_v)-U(u,v), \end{equation} where $U$ is the corresponding potential obtained from the stereographic projection of $\mathbb{S}^2$ or $\mathbb{H}^2$ onto $\mathbb{R}^2$. If no confusion arises, we will keep denoting the positions of the primaries on $\mathbb{C}$ as $w_i$. In order to analyze the regularization of the binary collisions between the negligible mass and the primaries, we consider complex variables through the following change of coordinates \[ {\bf z}=u+iv, \ \ {\bf Z}=p_u+ip_v.\] Then, up to a constant factor of $2$, (\ref{ham}) takes the form \begin{equation} \label{H} H=\dfrac{(1+\sigma|{\bf z}|^2)^2}{4}|{\bf Z}|^2+2\Omega Im({\bf z}\overline{{\bf Z}})-2V({\bf z},\overline{{\bf z}}), \end{equation} with \[ V({\bf z},\bar{{\bf z}})=\sum_{j=1}^n\dfrac{k_j({\bf z}+\overline{{\bf z}})-ih_j({\bf z}-\overline{{\bf z}})+\sigma z(1-\sigma |{\bf z}|^2)}{r|{\bf z}-w_j||{\bf z}-\widehat{w}_j|},\] where $\hat{w}_i=-\dfrac{1}{1-z}(k_i,h_i)$. If we consider $\mathbb{S}^2$, then $\Pi^{-1}(\hat{w}_i)$ corresponds to the antipodal point of the primary $Q_i$; however, if we consider $\mathbb{H}^2$, then $\hat{w}_i$ is a point such that $\Pi^{-1}(\hat{w}_i)$ does not belong to $\mathbb{H}^2$. \subsection{Local Regularization} In this section we state the first main theorem of this paper. \begin{theorem} The transformation ${\bf z}=g(w)=w\overline{w}+w_k$, $(k=1,\cdots, n)$, with the time transformation $\dfrac{dt}{ds}=|w|^2$, regularizes the singularity of (\ref{H}) due to collision between the negligible mass and the $k$-th primary body. 
\end{theorem} \begin{proof} First, let us consider the space transformation ${\bf z}=g(w)$ with ${\bf Z}=W/\overline{g'(w)}$, and the new time $s$ such that $\dfrac{dt}{ds}=|g'(w)|^2.$ Take a constant energy level $H=-\dfrac{C}{2}$, and define a new Hamiltonian $\hat{H}=|g'(w)|^2\left(H+\dfrac{C}{2}\right)$. The flows generated by the two Hamiltonian functions are equivalent on this level, hence we will consider the flow generated by $\hat{H}$ at the zero energy level. Performing the change of variables we obtain that the Hamiltonian takes the form \begin{eqnarray}\label{H2} \hat{H} &=& \dfrac{(1+\sigma |g(w)|^2)^2}{4}|W|^2 \ + \ 2|g'(w)|^2\Omega Im\left(g(w)\overline{W}/g'(w)\right) \nonumber \\ && - \ 2|g'(w)|^2V(w,\overline{w}) + |g'(w)|^2 \dfrac{C}{2}. \end{eqnarray} Take ${\bf z}=g(w)=w\overline{w}+w_k$; then $g'(w)=\overline{w}$ and $|g'(w)|^2=|\overline{w}|^2=|w|^2=w\overline{w}=|w\overline{w}|$. We will check that this transformation avoids the singularity due to collision of the negligible mass and the primary $k$. On $\mathbb{M}^2$: {\footnotesize \begin{equation} \begin{split} |g'(w)|^2V(w,\overline{w})=&|w|^2\left[\sum_{j=1}^n\dfrac{\left( k_j(g(w)+\overline{g(w)})-ih_j(g(w)-\overline{g(w)})+\sigma z(1-\sigma|g(w)|^2)\right)}{r|w\overline{w}+w_k-w_j||w\overline{w}+w_k-\widehat{w}_j|}\right]\\ =&|w|^2\left[\sum_{j=1,j\neq k}^n\dfrac{\left( k_j(g(w)+\overline{g(w)})-ih_j(g(w)-\overline{g(w)})+\sigma z(1-\sigma|g(w)|^2)\right)}{r|w\overline{w}+w_k-w_j||w\overline{w}+w_k-\widehat{w}_j|}\right]\\ &+|w|^2\dfrac{\left( k_k(g(w)+\overline{g(w)})-ih_k(g(w)-\overline{g(w)})+\sigma z(1-\sigma|g(w)|^2)\right)}{r|w\overline{w}||w\overline{w}+w_k-\widehat{w}_k|}\\ =&|w|^2\left[\sum_{j=1,j\neq k}^n\dfrac{\left( k_j(g(w)+\overline{g(w)})-ih_j(g(w)-\overline{g(w)})+\sigma z(1-\sigma|g(w)|^2)\right)}{r|w\overline{w}+w_k-w_j||w\overline{w}+w_k-\widehat{w}_j|}\right]\\ &+\dfrac{\left( k_k(g(w)+\overline{g(w)})-ih_k(g(w)-\overline{g(w)})+\sigma z(1-\sigma|g(w)|^2)\right)}{r|w\overline{w}+w_k-\widehat{w}_k|}. 
\end{split} \end{equation} } We can see that the singularities of equation (\ref{H2}) are avoided; hence we conclude the proof. \end{proof} \subsection{Global Regularization} The second main statement of this work is the following. \begin{theorem} The transformation ${\bf z}=g(w)=\dfrac{n-1}{n}w+\dfrac{w_1^n}{nw^{n-1}}$ and the time transformation $\dfrac{dt}{ds}=\dfrac{(n-1)^2}{n^2}\dfrac{|w-w_1|^2|w-w_2|^2\cdots |w-w_n|^2}{|w|^{2n}}$ regularize the $n$ binary-collision singularities of (\ref{H}) between the negligible mass and each of the primaries. \end{theorem} \begin{proof} Consider the transformation ${\bf z}=g(w):=\alpha w +\dfrac{\beta}{w^{n-1}}$, ${\bf Z}=W/\overline{g'(w)}$, and the time $s$ such that $\dfrac{dt}{ds}=|g'(w)|^2.$ As we did before, we consider a fixed energy level $-\dfrac{C}{2}$ and a new Hamiltonian defined as $\hat{H}=|g'(w)|^2\left(H+\dfrac{C}{2}\right)$. We will work with this new Hamiltonian at the zero energy level. Performing the change of variables we obtain that the Hamiltonian takes the form \begin{eqnarray}\label{H1} \hat{H} &=& \dfrac{(1+\sigma|g(w)|^2)^2}{4}|W|^2 \ + \ 2|g'(w)|^2\Omega Im\left(g(w)\overline{W}/g'(w)\right) \nonumber \\ && - 2|g'(w)|^2V(w,\overline{w}) + |g'(w)|^2 \dfrac{C}{2}. \end{eqnarray} In order to remove the singularities we will find $\alpha$ and $\beta$ with the following properties: first, the primaries remain fixed, that is, $$\text{On} \ \ \mathbb{S}^2: \ \ \ g(w_i)=w_i, \ \ g(\hat{w}_i)=\hat{w}_i; \ \ \ \text{On} \ \ \mathbb{H}^2: g(w_i)=w_i;$$ and second, the function $g(w)$ allows us to remove all the collision singularities, that is, $g'(w_i)=0$, or $g'(w)=\alpha-\dfrac{\beta (n-1)}{w^n}=\dfrac{\alpha}{w^n}\left( w^n-\dfrac{\beta}{\alpha} (n-1) \right)=\dfrac{\alpha}{w^n}(w-w_1)(w-w_2)\cdots (w-w_n)$. 
The last equality is satisfied if the following conditions hold: \begin{equation} \begin{split} 0=&\sum_{j=1}^nw_j,\\ 0=&w_1\sum_{j=2}^nw_j+w_2\sum_{j=3}^nw_j+\cdots +w_{n-2}\sum_{j=n-1}^nw_j+w_{n-1}w_n,\\ 0=&w_1w_2\sum_{j=3}^nw_j+w_1w_3\sum_{j=4}^nw_j+ \cdots +w_2w_3\sum_{j=4}^nw_j +\cdots +w_2w_4\sum_{j=5}^nw_j+\\ &+w_{n-3}w_{n-2}\sum_{j=n-1}^nw_j +w_{n-2}w_{n-1}w_{n},\\ \vdots\\ 0=&\sum_{i=1}^n \prod_{j=1, j\neq i}^nw_j,\\ (-1)^n&\prod_{j=1}^nw_j=-\dfrac{\beta}{\alpha}(n-1). \end{split} \end{equation} The first $(n-1)$ conditions state that the elementary symmetric functions $e_1,\dots,e_{n-1}$ of $w_1,\dots,w_n$ vanish; this holds because the $w_j$ are the $n$ distinct $n$th roots of $w_1^n$. Since $w_{j+1}=e^{i2\pi j/n}w_1$, the last condition gives $\dfrac{w_1^n}{n-1}=\dfrac{\beta}{\alpha}$. The property $g(w_1)=w_1$ also implies $\dfrac{\beta}{\alpha}=\dfrac{w_1^n}{n-1}$. With these facts we conclude $\alpha=\dfrac{n-1}{n}$ and $\beta=\dfrac{w_1^n}{n}$. Notice that \[ g(w)-w_i=\dfrac{n-1}{n}w+\dfrac{w_1^n}{nw^{n-1}}-w_i=\dfrac{(w-w_i)^2}{w^{n-1}}G(w), \] where \[ G(w)=\sum_{k=0}^{n-2}\dfrac{n-k-1}{n}w^{n-2-k}w_i^{k}. \] Moreover, $G(w_i)\neq 0$ for $i=1,\cdots, n$. We now check that the singularities of (\ref{H1}) due to collisions are removed. {\footnotesize \begin{equation} \begin{split} |g'(w)|^2V(w,\overline{w})=&\left[\sum_{j=1}^n\dfrac{\left( k_j(g(w)+\overline{g(w)})-ih_j(g(w)-\overline{g(w)})+\sigma z(1-\sigma|g(w)|^2)\right)}{r|g(w)-w_j||g(w)-\widehat{w}_j|}\right] \cdot\\ &\left[ \dfrac{(n-1)^2}{n^2w^{2n}}(w-w_1)^2\cdots (w-w_n)^2 \right]\\ =&\left[\sum_{j=1}^n\dfrac{w^{n-1}\left( k_j(g(w)+\overline{g(w)})-ih_j(g(w)-\overline{g(w)})+\sigma z(1-\sigma|g(w)|^2)\right)}{r(w-w_j)^2G(w)|g(w)-\widehat{w}_j|}\right] \cdot\\ &\left[ \dfrac{(n-1)^2}{n^2w^{2n}}(w-w_1)^2\cdots (w-w_n)^2 \right]\\ =&\sum_{j=1}^n\left[ \dfrac{\left( k_j(g(w)+\overline{g(w)})-ih_j(g(w)-\overline{g(w)})+\sigma z(1-\sigma|g(w)|^2)\right)}{rG(w)|g(w)-\widehat{w}_j|}\right. \cdot\\ &\left. 
\dfrac{(n-1)^2}{n^2w^{n+1}}(w-w_1)^2\cdots (w-w_{j-1})^2(w-w_{j+1})^2\cdots (w-w_n)^2 \right]. \end{split} \end{equation} } Notice that the only singularity is the point $w=0$ which corresponds to $|{\bf z}|\rightarrow \infty$. With this we finish the proof. \end{proof} \end{document}
\begin{document} \title{Privacy Games} \ifx \fullversion \undefined \fi \begin{abstract} The problem of analyzing the effect of privacy concerns on the behavior of selfish utility-maximizing agents has received much attention lately. Privacy concerns are often modeled by altering the utility functions of agents to also take into account their privacy loss~\cite{Xiao13,GhoshR11,NissimOS12,ChenCKMV13}. Such privacy-aware agents prefer to take a randomized strategy even in very simple games in which non-privacy-aware agents play pure strategies. In some cases, the behavior of privacy-aware agents follows the framework of Randomized Response, a well-known mechanism that preserves differential privacy. Our work is aimed at better understanding the behavior of agents in settings where their privacy concerns are explicitly given. We consider a toy setting where agent $A$, in an attempt to discover the secret type of agent $B$, offers $B$ a gift that one type of $B$ agent likes and the other type dislikes. As opposed to previous works, $B$'s incentive to keep her type a secret is not the result of ``hardwiring'' $B$'s utility function to consider privacy, but rather takes the form of a payment between $B$ and $A$. We investigate three different types of payment functions and analyze $B$'s behavior in each of the resulting games. As we show, under some payments $B$'s behavior is very different from the behavior of agents with hardwired privacy concerns and might even be deterministic. Under a different payment we show that $B$'s BNE strategy does fall into the framework of Randomized Response. \end{abstract} \ifx \fullversion \undefined \fi \ifx \fullversion\undefined \else \setcounter{tocdepth}{2}\tableofcontents \fi \section{Introduction} \label{sec:intro} \ifx \fullversion \undefined \fi In recent years, as the subject of privacy has become an increasing concern, many works have discussed the potential privacy concerns of economic utility-maximizing agents. 
Obviously, utility-maximizing agents are worried about the effect of revealing personal information in the current game on future transactions, and wish to minimize potential future losses. In addition, some agents may simply care about what some outside observer, who takes no part in the current game, believes about them. Such agents would like to optimize the effect of their behavior in the current game on the beliefs of that outside observer. Yet specifying the exact way in which information might affect the agents' future payment or an outside observer's beliefs is a complicated and intricate task. Differential privacy (DP), a mathematical model for privacy developed for statistical data analysis~\cite{DworkMNS06,DworkKMMN06}, avoids the need for such intricate modeling by providing a worst-case bound on an agent's exposure to privacy loss. Specifically, by using an $\epsilon$-differentially private mechanism, agents can guarantee that the belief of \emph{any} observer about them changes by no more than a multiplicative factor of $e^\epsilon\approx 1+\epsilon$ once this observer sees the outcome of the mechanism~\cite{Dwork06}. Furthermore, as pointed out in~\cite{GhoshR11,NissimOS12}, by using an $\epsilon$-differentially private mechanism the agents guarantee that, in expectation, \emph{any} future loss increases by no more than a factor of $e^\epsilon-1\approx \epsilon$. A recent line of work~\cite{Xiao13,GhoshR11,NissimOS12,ChenCKMV13} has used ideas from differential privacy to model and analyze the behavior of privacy-aware agents in game-theoretic settings. The aforementioned features of DP allow these works to bypass the need to model future transactions. Instead, they model privacy-aware agents as selfish agents with utility functions that are ``hardwired'' to trade off between two components: a (positive) reward from the outcome of the mechanism versus a (negative) loss from their non-private exposure. 
This loss can be upper-bounded using DP, and hence in some cases can be shown to be dominated by the reward (of carefully designed mechanisms), showing that privacy concerns do not affect an agent's behavior. However, in other cases, the behavior of privacy-aware agents may differ drastically from the behavior of classical, non-privacy-aware agents. For example, consider a toy game in which $B$ tells $A$ which of the two free gifts that $A$ offers (or \emph{coupons}, as we call them, for reasons to be explained later) $B$ would like to receive. We characterize $B$ using one of two types, $0$ or $1$, where type $0$ prefers the first gift and type $1$ prefers the second one. (This is a rephrasing of the ``Rye or Wholewheat'' game discussed in~\cite{NissimOS12}.) It is simple to see that a non-privacy-aware agent always (deterministically) asks for the gift that matches her type. In contrast, if we model the privacy loss of a privacy-aware agent using DP as in the work of Ghosh and Roth~\cite{GhoshR11} (and the value of the coupon is large enough), a privacy-aware agent takes a randomized strategy. (See Section~\ref{subsubsec:privacy_aware_agents}.) Specifically, the agent plays \emph{Randomized Response}, a standard differentially private mechanism that outputs a random choice slightly biased towards the agent's favorable action. However, it was argued~\cite{NissimOS12,ChenCKMV13} that it is not realistic to use the worst-case model of DP to quantify the agent's privacy loss and predict her behavior. Differential privacy should only serve as an \emph{upper bound} on the privacy loss, whereas the agent's expected privacy loss can (and in fact should) be much smaller --- depending on the agent's predictions regarding future events, the adversary's prior belief about her, the types and strategies of other agents, and the random choices of the mechanism and of other agents. 
As discussed above, these can be hard to model, so it is tempting to use a worst-case model like differential privacy. But what happens if we can formulate the agent's future transactions? What if we know that the agent is concerned with the belief of a specific adversary, and we can quantify the effects of changes to that belief? Is the behavior of a classical selfish agent in that case well-modeled by such a ``DP-hardwired'' privacy-aware agent? Will she even randomize her strategy? In other words, we ask: \begin{center} \begin{minipage}[c]{0.9\textwidth} \centering \textit{What is the behavior of a selfish utility-maximizing agent in a setting with clear privacy costs?} \end{minipage} \end{center} More specifically, we ask whether we can take the above-mentioned toy-game and alter it by introducing payments between $A$ and $B$ such that the behavior of a privacy-aware agent in the toy-game matches the behavior of a classical (non-privacy-aware) agent in the altered game. In particular, in case $B$ takes a randomized strategy --- does her behavior preserve $\epsilon$-differential privacy, and for what value of $\epsilon$? The study of these questions may also provide insights relevant for traditional, non-game-theoretic uses of differential privacy --- helping us understand how tightly differential privacy addresses the concerns of data subjects, and thus providing guidance in the setting of the privacy parameter $\epsilon$ or the use of alternative, non-worst-case variants of differential privacy (such as~\cite{BassilyGKS13}). \paragraph{Our model.} In this work we consider multiple games that model an interaction between an agent who has a secret type and an adversary whose goal is to discover this type. Though the games vary in the resulting behavior of the agents, they all follow a common outline which is similar to the toy game mentioned above. Agent $A$ offers $B$ a free coupon, which comes in one of two types $\{0,1\}$.
Agent $B$ has a secret type $\ensuremath{t}\in \{0,1\}$ chosen from a known prior $(D_0,D_1)$, such that a type-$\ensuremath{t}$ agent has positive utility $\rho_t$ for a type-$\ensuremath{t}$ coupon and zero utility for a type-$(1-\ensuremath{t})$ coupon. And so the game starts with $B$ sending $A$ a signal $\hat\ensuremath{t}$ indicating the requested type of coupon. (Formally, $B$'s utility for the coupon is $\rho_{\ensuremath{t}} \mathds{1}_{[\hat\ensuremath{t}=\ensuremath{t}]}$ for some parameters $\rho_0, \rho_1$.) Following this interaction, agent $C$, who viewed the signal $\hat\ensuremath{t}$ that $B$ sent, challenges $B$ into a game --- with $C$ taking action $\tilde\ensuremath{t}$ and incurring a payment from $B$ of $P(\tilde\ensuremath{t},\ensuremath{t})$. To avoid the need to introduce a third party into the game, we identify $C$ with $A$.\footnote{Hence the reason for the name ``The Coupon Game''. We think of $A$ as an ``evil'' car-insurance company that offers its client a coupon either for an eyewear store or for a car race, thereby increasing the client's insurance premium based on either the client's bad eyesight or the client's fondness for speedy and reckless driving.} Figure~\ref{fig:game_outline} gives a schematic representation of the game's outline. We make a few observations about the above interaction. We aim to model a scenario where $B$ has the most incentive to hide her true type whereas $A$ has the most incentive to discover $B$'s type. Therefore, all of the payments we consider have the property that if $B$'s type is $\ensuremath{t}^*$ then $\ensuremath{t}^* = \arg\max_{\tilde \ensuremath{t}}P(\tilde\ensuremath{t},\ensuremath{t}^*)$. Furthermore, the game is modeled so that the payments are transferred from $B$ to $A$, which makes $A$'s and $B$'s goals as opposite as possible. (In fact, past the stage where $B$ sends a signal $\hat\ensuremath{t}$, we have that $A$ and $B$ play a zero-sum game.)
We also note that $A$ and $B$ play a Bayesian game (in extensive form), as $A$ doesn't know the private type of $B$, only its prior distribution. We characterize Bayesian Nash Equilibria (BNE) in this paper and will show that in each game, the BNE is unique except when the parameters of the game satisfy certain equality constraints. It is not difficult to show that the strategies at every BNE of our games are part of a Perfect Bayesian Equilibrium (PBE), i.e. a subgame-perfect refinement of the BNE. However, we focus on BNE in this paper as the equilibrium refinement doesn't bring any additional insight to our problem. \begin{figure} \caption{A schematic view of the privacy game we model.\label{fig:game_outline}} \end{figure} \paragraph{Our results and paper organization.} First, in Section~\ref{sec:preliminaries}, following the preliminaries, we discuss the DP-hardwired privacy-aware agent as defined by Ghosh and Roth~\cite{GhoshR11} and analyze her behavior in our toy game. Our analysis shows that given sufficiently large coupon valuations $\rho_t$, both types of $B$ agent indeed play Randomized Response. We also discuss conditions under which other models of DP-hardwired privacy-aware agents play a randomized strategy. Following the preliminaries, we consider three different games.
These games follow the general coupon-game outline, yet they vary in their payment function. The discussion of each of the games follows a similar outline. We introduce the game, then analyze the two agents' BNE strategies and determine whether the strategy of the $B$ agent is randomized or pure (and in case it is randomized --- whether or not it follows Randomized Response for some value of $\epsilon$). We also compare the coupon game to a ``benchmark game'' where $B$ takes no action and $A$ guesses $B$'s type without any signal from $B$. Investigating whether it is even worthwhile for $A$ to offer such a coupon, we compare $A$'s profit between the two games.\footnote{The benchmark game is not to be confused with the toy-game we discussed earlier in this introduction. In the toy game, $A$ takes no action and $B$ decides on a signal. In the benchmark game, $B$ takes no action and $A$ decides which action to take based on the specific payment function we consider in each game.} The payment functions we consider are the following. \begin{enumerate} \item In Section~\ref{sec:scoring_rule} we consider the case where the payment function is given by a \emph{proper scoring rule}. Proper scoring rules allow us to quantify $B$'s cost for any change in $A$'s belief about her type. We show that in the case of symmetric scoring rules (scoring rules that are invariant to relabeling of event outcomes) both types of $B$ agent follow a randomized strategy that causes $A$'s posterior belief on the types to resemble Randomized Response. That is, initially $A$'s belief on $B$ being of type-$0$ (resp. type-$1$) is $D_0$ (resp. $D_1$); but $B$ plays in a way such that after viewing the $\hat\ensuremath{t}=0$ signal, $A$'s belief that $B$ is of type-$0$ (resp. type-$1$) is $\tfrac {1 + \epsilon} 2$ (resp. $\tfrac {1-\epsilon}2$) for some value of $\epsilon$ (and vice-versa in the case of the $\hat\ensuremath{t}=1$ signal with the same $\epsilon$).
\item In Section~\ref{sec:matching_pennies} we consider the case where the payments between $A$ and $B$ are the result of $A$ correctly guessing $B$'s type. $A$ views the signal $\hat\ensuremath{t}$ and then guesses a type $\tilde\ensuremath{t}\in\bits$ and receives a payment of $\mathds{1}_{[\tilde\ensuremath{t}=\ensuremath{t}]}$ from $B$. This payment models the following viewpoint of $B$'s future losses: there is a constant gap (of one ``unit of utility'') between interacting with an agent that knows $B$'s type and interacting with one that does not. We show that in this case, if the coupon valuations are fixed as $\rho_0$ and $\rho_1$, then at least one type of $B$ agent plays deterministically. However, if $B$'s valuation for the coupon is sampled from a continuous distribution, then $A$'s strategy effectively dictates a threshold with the following property: any $B$ agent whose valuation for the coupon is below the threshold lies and signals $\hat\ensuremath{t} = 1-\ensuremath{t}$, and any agent whose valuation is above the threshold signals truthfully $\hat\ensuremath{t} = \ensuremath{t}$. Hence, an $A$ agent who does not know $B$'s valuation thinks of $B$ as following a randomized strategy. \item In Section~\ref{sec:opt-out-possible} we consider a variation of the previous game where $A$ also has the option to opt out and not challenge $B$ into a payment game --- to report~$\bot$ and in return get no payment (i.e., $P(\bot,\ensuremath{t})=0$). We show that in such a game, under a very specific setting of parameters, the only BNE is one in which both types of $B$ agent take a randomized strategy. Under alternative settings of the game's parameters, the strategy of $B$ is such that at least one of the two types plays deterministically. \end{enumerate} \ifx \cameraready \undefined Conclusions and future directions appear in Section~\ref{sec:conclusions}, where we provide a discussion of our results.
\else Future directions are deferred to the full version of the paper, due to space limitations. \fi We find it surprising to see how minor changes to the privacy payments lead to diametrically different behaviors. In particular, we see the existence of a threshold phenomenon. Under certain parameter settings in the game we consider in item 3 above, we have that if the value of the coupon is above a certain threshold then at least one of the two types of $B$ agent plays deterministically; and if the value of the coupon is below this threshold, $B$ randomizes her behavior s.t. $\hat\ensuremath{t}=\ensuremath{t}$ w.p. close to $\tfrac 1 2$. \ifx \fullversion \undefined \fi \subsection{Related Work} \label{subsec:related_work} \ifx \fullversion \undefined \fi The study of the intersection between mechanism design and differential privacy began with the seminal work of McSherry and Talwar~\cite{McsherryT07}, who showed that an $\epsilon$-differentially private mechanism is also $\epsilon$-truthful. The first attempt at defining a privacy-aware agent was that of Ghosh and Roth~\cite{GhoshR11}, who quantified the privacy loss using a linear approximation $v_i\cdot \epsilon$, where $v_i$ is an individual parameter and $\epsilon$ is the level of differential privacy that a mechanism preserves. Other applications of differentially private mechanisms in game-theoretic settings were studied by Nissim et al~\cite{NissimST12}. The work of Xiao~\cite{Xiao13} initiated the study of mechanisms that are truthful even when the privacy loss is incorporated into the agents' utility functions. Xiao's original privacy-loss measure was the mutual information between the mechanism's output and the agent's type. Nissim et al~\cite{NissimOS12} (who effectively proposed a preliminary version of our coupon game called ``Rye or Wholewheat'') generalized the models of privacy loss to only assume that it is \emph{upper bounded} by $v_i \cdot \epsilon$.
Chen et al~\cite{ChenCKMV13} proposed a refinement where the privacy loss is measured with respect to the given input and output. Fleischer and Lyu~\cite{FleischerL12} considered the original model of agents as in Ghosh and Roth~\cite{GhoshR11}, but under the assumption that $v_i$, the value of the privacy parameter of each agent, is sampled from a known distribution. Several papers in economics look at the potential loss of agents from having their personal data revealed. In fact, one folklore objection to the Vickrey auction is that in a repeated setting, by providing the sellers with the bidders' true valuations for the item, the bidders subject themselves to future loss should the seller prefer to run a reserved-price mechanism in the future. In the context of repeated interaction between an agent and a company, there have been works~\cite{ConitzerTW12,BergemannBM13} studying the effect of price differentiation based on an agent allowing the company to remember whether she purchased the same item in the past. Interestingly, strategic agents realize this effect and so they might ``haggle'' --- reject a price below their valuation for the item in round $1$ so that they'd be able to get an even lower price in round $2$. In that sense, the fact that the agents publish their past interaction with the company actually helps the agents. Other work~\cite{CalzolariP06} discusses a setting where a buyer sequentially interacts with two different sellers, and characterizes the conditions under which the first seller prefers not to give the buyer's information to the second seller. Concurrently with our work, Gradwohl and Smorodinsky~\cite{GradwohlS14}, whose motivation is to analyze the effect of privacy concerns, introduce a framework of games in which an agent's utility is affected by both her actions and how her actions are perceived by a third party.
The privacy games that we propose and analyze in this paper fall into the class of signaling games~\cite{MWG95}, where a sender ($B$ in our game) with a private type sends a message (i.e. a signal) to a receiver ($A$ in our game) who then takes an action. The payoffs of both players depend on the sender's message, the receiver's action, and the sender's type. Signaling games have been widely used in modeling behavior in economics and biology. The focus is typically on understanding when signaling is informative, i.e. when the message of the sender allows the receiver to infer the sender's private type with certainty, especially in settings when signaling is costly (e.g. Spence's job market signaling game~\cite{spence73}). In our setting, however, informative signaling violates privacy. We are interested in characterizing when the sender plays in a way such that the receiver cannot infer her type deterministically. \ifx \fullversion \undefined \fi \ifx \fullversion \undefined \else \fi \section{Preliminaries} \label{sec:preliminaries} \ifx \fullversion \undefined \fi \subsection{Equilibrium Concept} \ifx \fullversion \undefined \fi We model the games between $A$ and $B$ as Bayesian extensive-form games. However, instead of using the standard Perfect Bayesian Equilibrium (PBE), which is a refinement of Bayesian Nash Equilibrium (BNE) for extensive-form games, as our solution concept, we analyze BNE for our games. It can be shown that all of the BNEs considered in our paper can be ``extended'' to PBEs (by appropriately defining the beliefs of agent A about agent B at all points in the game). We thus avoid defining the more subtle concept of PBE as the refinement doesn't provide additional insights for our problem. Below we define BNE. 
A \emph{Bayesian} game between two agents $A$ and $B$ is specified by their type spaces $(\Gamma_A, \Gamma_B)$, a prior distribution $\Pi$ over the type spaces (according to which nature draws the private types of the agents), sets of available actions $(C_A, C_B)$, and utility functions, $u_i : \Gamma_A \times \Gamma_B \times C_A \times C_B \to \mathbb{R}$, $i \in \{A, B\}$. A \emph{mixed} or \emph{randomized} strategy of agent $i$ maps a type of agent $i$ to a distribution over her available actions, i.e. $\sigma_i: \Gamma_i \to \Delta(C_i)$, where $\Delta(C_i)$ is the probability simplex over $C_i$. \ifx \cameraready \undefined When $\sigma_i$ deterministically maps a type to an action, it is called a \emph{pure strategy}. The Bayesian Nash Equilibrium (BNE) of the two-player game is defined as follows. \fi \begin{definition} \label{def:NE} A strategy profile $(\sigma_A, \sigma_B)$ is a \emph{Bayesian Nash Equilibrium} if \[ \E [ u_i (T_i, T_{-i}, \sigma_i(T_i), \sigma_{-i}(T_{-i})) | T_i=t_i] \geq \E [u_i (T_i, T_{-i}, \sigma'_i(T_i), \sigma_{-i}(T_{-i})) | T_i=t_i] \] for all $i \in \{A, B\}$, all types $t_i \in \Gamma_i$ occurring with positive probability, and all strategies $\sigma'_i$, where $\sigma_{-i}$ and $T_{-i}$ denote the strategy and type of the other agent respectively and the expectation is taken over the randomness of agent type $T_{-i}$ and the randomness of the strategies, $\sigma_i$, $\sigma_{-i}$ and $\sigma'_i$. \end{definition} \ifx \cameraready \undefined In other words, a strategy profile $(\sigma_A, \sigma_B)$ is a BNE if both agents maximize their expected utility by playing $\sigma_i$ in responding to the other player's strategy $\sigma_{-i}$, i.e. they both play \emph{best response}. As mentioned in Section \ref{subsec:related_work}, our games between $A$ and $B$ belong to the class of signaling games. 
For signaling games, the terms {\em separating equilibrium} and {\em pooling equilibrium} are often used to characterize when signaling is fully informative. At a separating equilibrium, a player's strategy allows the other player to deterministically infer her private type, while at a pooling equilibrium multiple types of a player may take the same action, preventing the other player from inferring her type with certainty. \fi \ifx \fullversion \undefined \fi \subsection{Differential Privacy} \label{subsec:DP} \ifx \fullversion \undefined \fi In order to define differential privacy, we first need to define the notion of neighboring inputs. Inputs are elements in $\mathcal{X}^n$ for some set $\mathcal{X}$, and two inputs $\mathcal{I},\mathcal{I}'\in \mathcal{X}^n$ are called neighbors if the two are identical on the details of all individuals (all coordinates) except for at most one. \begin{definition}[\cite{DworkMNS06}] \label{def:privacy} An algorithm $\textsf{ALG}$ which maps inputs into some range $\mathcal{R}$ satisfies \emph{$\epsilon$-differential privacy} if for all pairs of neighboring inputs $\mathcal{I},\mathcal{I}'$ and for all subsets $\mathcal{S}\subset\mathcal{R}$ it holds that ${\bf Pr}[\mathsf{ALG}(\mathcal{I}) \in \mathcal{S}] \leq e^\epsilon {\bf Pr}[\mathsf{ALG}(\mathcal{I'}) \in \mathcal{S}]$. \end{definition} One of the simplest algorithms that achieve $\epsilon$-differential privacy is called \emph{Randomized Response}~\cite{KasiviswanathanLNRS08,dwork2010differential}, which dates back to the 60s~\cite{Warner65}. The algorithm is best illustrated over a binary input, where each individual is represented by a single binary bit (so a neighboring instance is one in which a single individual is represented by a different bit). Randomized Response works by perturbing the input: for each individual $i$ represented by the bit $b_i$, the algorithm randomly and independently picks a bit $\hat b_i$ s.t.
${\bf Pr}[\hat b_i = b_i] = \tfrac {1+\epsilon} 2$ for some $\epsilon \in [0,1)$. It follows from the definition of the algorithm that it satisfies $\ln(\tfrac{1+\epsilon}{1-\epsilon}) \approx 2\epsilon$-differential privacy. Randomized Response is sometimes presented as a distributed algorithm, where each individual randomly picks $\hat b_i$ locally, and reports $\hat b_i$ publicly. Therefore, it is possible to view this work as an investigation of the type of games in which selfish utility-maximizing agents truthfully follow Randomized Response, rather than sending some arbitrary bit as $\hat b_i$. In this work, we define certain games and analyze the behavior of the two types of $B$ agent in the BNE of these games. And so, denoting $B$'s strategy as $\sigma_B$, we consider the implicit algorithm $\sigma_B(\ensuremath{t})$ that tells a type-$\ensuremath{t}$ agent what probability mass to put on the $0$-signal and on the $1$-signal. Knowing $B$'s strategy $\sigma_B$, we say that $B$ satisfies $\ln(X_{\rm game})$-differential privacy where\footnote{We use the convention $\tfrac 0 0 =1$.} \[ X_{\rm game}\stackrel{\rm def}{=} X_{\rm game}(\sigma_B)=\max_{\ensuremath{t},\hat\ensuremath{t}\in \{0,1\}} \left( \frac{{\bf Pr}[\sigma_B(\ensuremath{t})=\hat \ensuremath{t}]} {{\bf Pr}[\sigma_B(1-\ensuremath{t})= \hat \ensuremath{t} ]} \right). \] We are interested in finding settings where $X_{\rm game}(\sigma_B^*)$ is finite, where $\sigma_B^*$ denotes $B$'s BNE strategy. We say $B$ \emph{plays a Randomized Response strategy} in a game whenever her BNE strategy $\sigma_B^*$ satisfies ${\bf Pr}[\sigma_B^*(0)=0] = {\bf Pr}[\sigma_B^*(1)=1]=p$ for some $p\in [1/2,1)$. \subsubsection{Privacy-Aware Agents.} \label{subsubsec:privacy_aware_agents} The notion of privacy-aware agents has been developed through a series of works~\cite{Xiao13,GhoshR11,NissimOS12,ChenCKMV13}. The utility function of our privacy-aware agent $B$ is of the form $u_B = u_B^{out}-u_B^{priv}$.
The first term, $u_B^{out}$, is the utility of agent $B$ from the mechanism. The second term, $u_B^{priv}$, represents the agent's privacy loss. The exact definition of $u_B^{priv}$ (and even the set of variables on which $u_B^{priv}$ depends) varies between the different works mentioned above, but all works bound the privacy loss of an agent that interacts with a mechanism that satisfies $\epsilon$-differential privacy by $u_B^{priv} \leq v\cdot \epsilon$ for some $v>0$. Here we argue about the behavior of a privacy-aware agent with the maximal privacy loss function, which is the type of agent considered by Ghosh and Roth~\cite{GhoshR11} (i.e., the agent's privacy loss when interacting with a mechanism that satisfies $\epsilon$-differential privacy is exactly $v\cdot\epsilon$ for some $v>0$). Recall our toy game: $B$ sends a signal $\hat\ensuremath{t}$ and gets a coupon of type $\hat\ensuremath{t}$. Therefore, the outcome of this simple game is $\hat\ensuremath{t}$, precisely the action that $B$ takes. $B$'s type is picked randomly to be $0$ w.p. $D_0$ and $1$ w.p. $D_1$, and a $B$ agent of type $\ensuremath{t}$ has a valuation of $\rho_t$ for a coupon of type $\ensuremath{t}$. Therefore, in this game $u_B^{out} = \rho_t\mathds{1}_{[\hat \ensuremath{t} = \ensuremath{t}]}$. The mechanism we consider is $\sigma_B^*$, $B$'s utility-maximizing strategy, which we think of as the implicit algorithm that tells a type-$\ensuremath{t}$ agent what probability mass to put on sending the $\hat\ensuremath{t}=0$ signal and what mass to put on the $\hat\ensuremath{t}=1$ signal. As noted above, this strategy satisfies $\ln(X_{\rm game})$-differential privacy, and so $u_B^{priv}(\sigma_B^*) = v\cdot\ln(X_{\rm game})$ for some parameter $v>0$.
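To make these quantities concrete, the following sketch (our own illustration, with hypothetical values for $p$, $D_0$ and $D_1$; it is not part of the formal model) computes $X_{\rm game}$ for a Randomized Response strategy and checks the belief guarantee mentioned in the introduction: an observer's posterior odds on $B$'s type change by at most a multiplicative factor of $e^\epsilon$, where $\epsilon=\ln(X_{\rm game})$.

```python
import math

def x_game(p, q):
    # X_game for a strategy (p, q) = (Pr[sigma(0)=0], Pr[sigma(1)=1]):
    # the largest likelihood ratio over the four (type, signal) pairs.
    return max(p / (1 - q), (1 - q) / p, q / (1 - p), (1 - p) / q)

# Randomized Response: both types report truthfully with probability p.
p = 0.75                       # hypothetical truth-telling probability
eps = math.log(x_game(p, p))   # B satisfies ln(X_game)-differential privacy

# Bayes rule: A's posterior belief that B has type 0 after seeing signal 0.
D0, D1 = 0.3, 0.7              # hypothetical prior on B's type
post0 = D0 * p / (D0 * p + D1 * (1 - p))

# The posterior odds exceed the prior odds by a factor of p/(1-p) = e^eps,
# so the multiplicative change in belief is bounded by e^eps, as DP promises.
odds_ratio = (post0 / (1 - post0)) / (D0 / D1)
assert odds_ratio <= math.exp(eps) + 1e-12
```

For $p=\tfrac34$ this gives $X_{\rm game}=3$, and the observer's belief in the true type moves from the prior $0.3$ to a posterior of $0.5625$.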
Assuming $D_0\rho_0\neq D_1\rho_1$, our proof shows that this privacy-aware agent chooses essentially between two alternatives in our toy game: either both types take the same deterministic strategy and send the same signal (${\bf Pr}[\sigma_B^*(0)=b]={\bf Pr}[\sigma_B^*(1)=b]=1$ for some $b\in\bits$); or the agent randomizes her behavior and plays using Randomized Response: ${\bf Pr}[\sigma_B^*(0)=0] = {\bf Pr}[\sigma_B^*(1)=1] \in [\tfrac 1 2,1)$. We show that for sufficiently large values of the coupon the latter alternative is better than the former. \newcommand{\behaviorPrivacyAware}{ Let $B$ be a privacy-aware agent, whose privacy loss is given by $v\ln(X_{\rm game})$ for some $v>0$. Assume that $\min\{\rho_0,\rho_1\} \geq \alpha\cdot(\rho_0+\rho_1)$ for some fixed $\alpha>0$, and that $\rho_0, \rho_1$ are sufficiently large. Then, the unique strategy $\sigma_B^*$ that maximizes $B$'s utility is randomized and satisfies: ${\bf Pr}[\sigma_B^*(0)=0] = {\bf Pr}[\sigma_B^*(1)=1]=p^*$ for some $p^*\in[\tfrac 1 2,1)$. } \begin{theorem} \label{thm:behavior_privacy_aware} \behaviorPrivacyAware \end{theorem} \newcommand{\PAA}{ \begin{proof} Recall that the type of $B$ is chosen randomly to be $0$ w.p. $D_0$ and $1$ w.p. $D_1$. Given a strategy $\sigma_B$ for $B$, we denote $p={\bf Pr}[\sigma_B(0)=0]$ and $q={\bf Pr}[\sigma_B(1)=1]$ (so ${\bf Pr}[\sigma_B(0)=1] = 1-p$ and ${\bf Pr}[\sigma_B(1) = 0] = 1-q$). Therefore, \[X_{\rm game}(\sigma_B) = X_{\rm game}(p,q) = \max \left\{ \frac {p}{1-q}, \frac {1-q}{p}, \frac {q}{1-p}, \frac {1-p}{q} \right\}.\] Note that $X_{\rm game}(p,q) \geq 1$, with equality iff $p=1-q$ (which means $\sigma_B(t)$ is independent of $t$ and $B$ reveals no information about her type). And so $B$ aims to maximize the following utility function: $u_B = D_0\rho_0 p + D_1\rho_1 q - v\ln(X_{\rm game})$.
When the strategy that optimizes $B$'s utility, denoted $(p^*,q^*)$, satisfies $p^*=q^* = 1/2 +\epsilon$ for some $\epsilon \in [0,\tfrac 1 2)$, we say that $B$ plays using Randomized Response. First, observe that if $p+q<1$ then $X_{\rm game} > 1$ and the utility of $B$ is $D_0\rho_0 p + D_1\rho_1 q - v\ln(X_{\rm game}) \leq D_0\rho_0p+D_1\rho_1 q$, so $B$ can always improve her utility by instead playing either $(p,q)=(1,0)$ or $(p,q) = (0,1)$. The same argument holds for any $(p,q)$ with $p+q=1$ other than $(1,0)$ and $(0,1)$. (If $D_0\rho_0=D_1\rho_1$ then the agent is indifferent between any $(p,q)$ satisfying $p=1-q$.) Secondly, observe that the maximum cannot be obtained at $(p=1,q>0)$ or $(p>0,q=1)$, because in that case $X_{\rm game}$ shoots to infinity, so the privacy loss is infinite. Therefore, if there exists a strategy $(p,q)$ s.t. $p>1-q$ and $p,q\in(0,1)$ whose utility is strictly greater than $\max\{ D_0\rho_0, D_1\rho_1\}$, then it is a utility-maximizing strategy. (Otherwise, one of the two strategies $(0,1)$ or $(1,0)$ maximizes $B$'s utility.) Suppose that the maximum is obtained at some $(p^*,q^*)$ with $p^*+q^*>1$ and $q^*>p^*$. This means that $X_{\rm game} = \tfrac {p^*} {1-q^*}>1$. For any $(p,q)$ in a small enough neighborhood of $(p^*,q^*)$ we can differentiate $u_B$, and at the maximum it holds that \[ D_0\rho_0 - \frac v {X_{\rm game}(p^*,q^*)} \left(\tfrac {\partial}{\partial p} X_{\rm game}(p^*,q^*)\right) = 0~,~~D_1\rho_1 - \frac v {X_{\rm game}(p^*,q^*)} \left(\tfrac {\partial}{\partial q} X_{\rm game}(p^*,q^*)\right) = 0.\] With $\tfrac {\partial}{\partial p} X_{\rm game} = \tfrac 1 {1-q}$ and $\tfrac {\partial}{\partial q} X_{\rm game} = \tfrac {p}{(1-q)^2}$, we have $\tfrac {D_1\rho_1}{D_0\rho_0} = \tfrac {p^*}{1-q^*}$, and so $D_0\rho_0 p^* + D_1 \rho_1 q^* = D_1\rho_1$.
Denote $X^*_{\rm game} \stackrel{\rm def}{=} X_{\rm game}(p^*,q^*) = \frac {p^*} {1-q^*}$, and deduce that in this case the maximal utility is $D_0\rho_0 p^* + D_1 \rho_1 q^*-v\ln(X^*_{\rm game})=D_1\rho_1 - v\ln\left(\tfrac{D_1\rho_1}{D_0\rho_0}\right) < D_1\rho_1 \leq \max\{D_0\rho_0, D_1\rho_1\}$. Hence $B$ is still better off playing either $(0,1)$ or $(1,0)$. The case with $p^*>q^*$ (or equivalently $q^*(1-q^*)>p^*(1-p^*)$) is symmetric, and so $B$ prefers playing $(0,1)$ or $(1,0)$. It remains to check the case of $p^*=q^*$, with $p^*+q^*=2p^*>1$. In this case we have $X_{\rm game} = \tfrac {p^*}{1-p^*}$, and the utility function of $B$ is the univariate function $(D_0\rho_0+D_1\rho_1)p - v\ln\left(\tfrac p {1-p}\right)$. Setting the derivative $u_B'(p^*) = 0$ we have $D_0\rho_0 + D_1\rho_1 = \tfrac {v}{p^*(1-p^*)}$, or $p^* = \tfrac 1 2 \left(1 + \sqrt{1-\tfrac {4v}{D_0\rho_0 + D_1\rho_1}}\right)$. Denoting $Y = D_0\rho_0+D_1\rho_1$, we now use the assumption that $\rho_0,\rho_1 = \Omega(\rho_0+\rho_1)$ and observe that $\max\{D_0\rho_0,D_1\rho_1\} \leq (1-c)Y$ for some constant $c>0$. Therefore, $B$ prefers playing this randomized strategy if \[u_B(p^*) = \tfrac 12 Y \left(1 + \sqrt{1-\tfrac{4v}Y}\right) - v \ln\left( \frac {1+\sqrt{1-\frac {4v}Y}} {1-\sqrt{1-\frac {4v}Y}} \right) > (1-c)Y.\] Since $\lim_{Y\to\infty} \tfrac {u_B(p^*)}{Y} = 1$, for a large enough value of $Y$ the above inequality holds. \end{proof} As an immediate corollary of the proof, consider any alternative definition of a privacy-aware agent in which the privacy valuation $u_B^{priv}$ (i) depends only on the strategy $\sigma_B$, (ii) is non-negative, (iii) is upper bounded by $v\ln(X_{\rm game})$ for some $v>0$, and (iv) satisfies $u_B^{priv}=\infty$ whenever $X_{\rm game} = \infty$. We argue that the utility-maximizing strategy of such an agent is also randomized.
(Observe that we no longer guarantee that $B$'s optimal strategy $\sigma_B^*$ satisfies ${\bf Pr}[\sigma_B^*(0)=0]={\bf Pr}[\sigma_B^*(1)=1]$.) To see that, observe that whenever $p=1-q$ we have that $X_{\rm game}=1$, so the privacy loss of the agent is $0$. Therefore, playing either $(p,q)=(1,0)$ or $(0,1)$, the agent can guarantee a utility of $\max\{D_0\rho_0,D_1\rho_1\}$. In contrast, should the agent play any $(p,q)$ with $p<1-q$, then her utility is upper bounded by $D_0\rho_0 p+D_1\rho_1 q \leq \max\{D_0\rho_0,D_1\rho_1\}$, because the privacy loss is non-negative. Therefore, the agent prefers playing $(p,q)=(1,0)$ or $(0,1)$ to any $(p,q)$ with $p<1-q$. Secondly, since we assume infinite privacy loss whenever $X_{\rm game}=\infty$, $B$'s utility-maximizing strategy cannot satisfy $p=1$ and $q > 0$ (or vice-versa). Lastly, the proof of Theorem~\ref{thm:behavior_privacy_aware} gives a strategy $(p,q)$ with $p>1-q$ for which the lower bound on $B$'s utility is greater than $\max\{D_0\rho_0,D_1\rho_1\}$. It follows that $B$ strictly prefers playing some strategy $(p,q)$ with $p,q\in (0,1)$ over playing $(p,q)=(1,0)$ or $(p,q)=(0,1)$. \ifx \fullversion\undefined \paragraph{Two types of $B$ agent as different players.} \else \subsubsection{The two types of $B$ agent as different players.} \fi The above analysis assumed that $B$ decides on a strategy before the realization of her type, and sticks to that strategy even after her type is revealed to her. It is possible, though, to think of the two types of $B$ agents as two different agents ex-post -- after each agent learns her own type. As we show, the analysis in this case is slightly different. Observe that in this case we discuss a straightforward Nash equilibrium, as both agents know their respective types.
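Before moving to the two-player view, the single-agent conclusion above can be sanity-checked numerically. The sketch below (our own check, with hypothetical values $D_0=D_1=\tfrac12$, $\rho_0=\rho_1=100$ and $v=1$) grid-searches the utility $u_B = D_0\rho_0 p + D_1\rho_1 q - v\ln(X_{\rm game})$ and recovers a symmetric randomized maximizer $p=q>\tfrac12$, as Theorem~\ref{thm:behavior_privacy_aware} predicts.

```python
import math

D0 = D1 = 0.5
rho0 = rho1 = 100.0   # hypothetical, sufficiently large coupon values
v = 1.0

def x_game(p, q):
    return max(p / (1 - q), (1 - q) / p, q / (1 - p), (1 - p) / q)

def u_B(p, q):
    # B's utility: expected coupon value minus the worst-case privacy loss.
    return D0 * rho0 * p + D1 * rho1 * q - v * math.log(x_game(p, q))

# Grid search over the open square (0,1)^2; the boundary is excluded since,
# e.g., (p=1, q>0) drives X_game (and hence the privacy loss) to infinity.
grid = [i / 100 for i in range(1, 100)]
best = max(((p, q) for p in grid for q in grid), key=lambda s: u_B(*s))

assert best[0] == best[1] and best[0] > 0.5    # Randomized Response: p = q > 1/2
assert u_B(*best) > max(D0 * rho0, D1 * rho1)  # beats both deterministic corners
```

On this grid the maximizer is $p=q=0.99$, close to the closed form $p^*=\tfrac12\left(1+\sqrt{1-4v/Y}\right)\approx 0.990$ for $Y=D_0\rho_0+D_1\rho_1=100$.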
In the following, we continue using our notation from earlier, where ${\bf Pr}[\sigma(i) = \hat\ensuremath{t}]$ denotes the probability that a $B$ agent of type $\ensuremath{t}=i$ sends the signal $\hat\ensuremath{t}$ according to strategy $\sigma$. \begin{theorem} \label{thm:two_players_types_of_B_agents} Consider the $2$-player game where player $i\in\{0,1\}$ is a type $\ensuremath{t}=i$ $B$ agent. Assume $\rho_0=\rho_1=\rho$ and that $\rho$ is sufficiently large. Then there exists some $z^* \in (1-\tfrac v \rho,1)$ s.t. any NE of the game falls into one of three categories: \begin{itemize} \item ${\bf Pr}[\sigma(0)=0]={\bf Pr}[\sigma(1)=0]=1-z$ for some $z\in [0,1-z^*]$. (Both agents take the same strategy and send the signal $\hat\ensuremath{t}=0$ with high probability $1-z$.) \item ${\bf Pr}[\sigma(0)=1]={\bf Pr}[\sigma(1)=1]=1-z$ for some $z\in [0,1-z^*]$. (Both agents take the same strategy and send the signal $\hat\ensuremath{t}=1$ with high probability $1-z$.) \item ${\bf Pr}[\sigma(0)=0]={\bf Pr}[\sigma(1)=1]=z$ for some $z\in [1-\tfrac v \rho, z^*]$. (Both agents play Randomized Response and report truthfully $\hat\ensuremath{t}=\ensuremath{t}$ with the same probability $z$.) \end{itemize} \end{theorem} \begin{proof} We continue using the same notation as in Theorem~\ref{thm:behavior_privacy_aware}: $p={\bf Pr}[\sigma(0)=0]$ and $q={\bf Pr}[\sigma(1)=1]$, and so $X_{\rm game}=X_{\rm game}(p,q)$ as defined in the proof of Theorem~\ref{thm:behavior_privacy_aware}. In particular, when $p+q\geq 1$ it holds that $X_{\rm game}(p,q) = \tfrac p {1-q}$ when $p\leq q$, and $X_{\rm game}(p,q) = \tfrac q {1-p}$ when $p \geq q$. First of all, observe that the utilities of both agents are symmetric: $u_{B,0}(p,q) = \rho p - v\ln(X_{\rm game}(p,q))$ and $u_{B,1}(p,q) = \rho q - v\ln(X_{\rm game}(p,q))$.
Secondly, observe that if one agent plays deterministically, ${\bf Pr}[\sigma(\ensuremath{t})=\hat\ensuremath{t}]=1$, then unless the other type deterministically sends the same signal, $X_{\rm game}(p,q)=\infty$, causing both agents to have utility of $-\infty$. It is therefore clear that the strategies $(p,q)=(1,0)$ and $(0,1)$ are both NEs. To find the remaining NEs of the game, we fix a certain strategy for the $\ensuremath{t}=1$ agent, denoted $z = {\bf Pr}[\sigma(1)=1]$, and see what is the strategy $x={\bf Pr}[\sigma(0)=0]$ that the type $\ensuremath{t}=0$ agent prefers deviating to. Since both agents are symmetric, our analysis also translates to an analysis of type $\ensuremath{t}=1$. Before continuing with our analysis, we point out the following two functions. \begin{itemize} \item Fix $z$ and denote $f(x)=\rho x - v\ln(\tfrac {z} {1-x})$. Since $f'(x) = \rho - \tfrac v {1-x}$ is decreasing on the interval $[0,1)$, we have that $f$ is maximized at $x = 1-\tfrac v \rho$. In particular, $f$ is strictly increasing on the interval $[0,1-\tfrac v \rho]$ and strictly decreasing on the interval $[1-\tfrac v \rho, 1)$. \item Fix $z$ and denote $g(x) = \rho x -v\ln(\tfrac x {1-z})$. Since $g'(x) = \rho - \tfrac v x$ is an increasing function on the interval $(0,1]$, $g(x)$ is strictly decreasing on the $(0, \tfrac v \rho)$ interval and strictly increasing on the $[\tfrac v \rho,1]$ interval. \end{itemize} We return to our NE analysis. First, for any $z$, it is evident that the $\ensuremath{t}=0$ agent has an incentive to deviate if $x < 1-z$. (In response to $z$, the $\ensuremath{t}=0$ agent increases her utility by deviating to playing ${\bf Pr}[\sigma(0)=0]=1-z$, since $X_{\rm game}(1-z,z) = 1$ and the privacy loss is $0$.) Assume $z < 1/2$ for now. Therefore $x \in [1-z,1]$, otherwise the $\ensuremath{t}=0$ agent has an incentive to deviate. Since $x\geq 1-z\geq1/2 > z$, we have $X_{\rm game} = \tfrac {z} {1-x}$, so the type $\ensuremath{t}=0$ agent's utility is $f(x)$.
Since $f$ is strictly decreasing on the interval $[1-\tfrac v \rho,1)$, we have that if $1-z \geq 1-\tfrac v \rho$ then the type $\ensuremath{t}=0$ agent has no incentive to deviate when $x=1-z$. That is, the type $\ensuremath{t}=0$ agent does not deviate from any strategy $(p,q) =(1-z,z)$ with $z \leq \tfrac v \rho$. In addition, the type $\ensuremath{t}=0$ agent doesn't deviate from $(p,q) = (1-\frac v \rho, z)$ for $\tfrac v\rho <z <1/2$. Now assume $z \geq 1/2$. Again, we only need to consider $x\in [1-z,1]$, so either $x \in [1-z,z)$ or $x\in [z,1]$. In the former case, the utility of the type $\ensuremath{t}=0$ agent is $g(x)$, and in the latter her utility is $f(x)$. Therefore: \begin{itemize} \item when $z < 1-\tfrac v \rho$ she considers only two possible strategies: $x=1-\tfrac v \rho$ (which maximizes $f(x)$ on the interval $[z,1]$), or $x=z$ (which maximizes $g(x)$ on the interval $[1-z,z]$). As $g(z) = f(z) < f(1-\tfrac v \rho)$ we deduce that in this case, the type $\ensuremath{t}=0$ agent does not deviate only from the strategy $(1-\tfrac v \rho, z)$. \item when $z \geq 1-\tfrac v \rho$ she considers only two possible strategies: $x=z$ (which maximizes $f(x)$ on the interval $[z,1]$), or $x=1-z$ (which might maximize $g(x)$ on the interval $[1-z,z]$). As $g(1-z) = \rho(1-z)$ and $f(z) = \rho z - v\ln(\tfrac {z}{1-z})$, we have that $f(z)-g(1-z) \leq 0$ for any $z>z^*$, where $z^*$ is the largest solution of $f(z)=g(1-z)$, i.e., of $\rho(2z-1) = v\ln(\tfrac z {1-z})$. Observe that $1-\tfrac v \rho < z^*$. We deduce that for any $z\in [1-\tfrac v \rho, z^*]$ the type $\ensuremath{t}=0$ agent doesn't deviate from the strategy $(p,q)=(z,z)$; and for $z\in [z^*,1]$ the type $\ensuremath{t}=0$ agent does not deviate from $(p,q) = (1-z,z)$. \end{itemize} Recall that the type $\ensuremath{t}=1$ agent is symmetric to the type $\ensuremath{t}=0$ agent, with the same utility function. This implies that any $(1-\tfrac v \rho,q)$ cannot be a NE, since the type $\ensuremath{t}=1$ agent prefers to deviate, unless $q=1-\tfrac v \rho$.
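The case analysis above can be sanity-checked numerically. The sketch below (illustrative parameters, not values from the paper) fixes the type-$1$ player at $q=z$ slightly above $1-\tfrac v \rho$ and verifies by grid search that the type-$0$ player's utility over $x\in[1-z,1)$ is indeed maximized at $x=z$, as the randomized-response equilibrium predicts.

```python
import math

# Illustrative parameters: rho large, v = 1, so 1 - v/rho = 0.95.
rho, v = 20.0, 1.0
z = 0.96                          # slightly above 1 - v/rho

def u0(x):
    """Type-0 player's utility against the type-1 player fixing Pr[sigma(1)=1] = z."""
    if abs(x - (1 - z)) < 1e-12:
        return rho * x            # boundary x = 1 - z: zero privacy loss
    x_game = x / (1 - z) if x <= z else z / (1 - x)
    return rho * x - v * math.log(x_game)

# Grid search over [1-z, 1); deviations to x < 1-z are ruled out by the argument above.
grid = [1 - z + k * (0.9999 - (1 - z)) / 20000 for k in range(20001)]
best = max(u0(x) for x in grid)
print(round(u0(z), 4), round(best, 4))   # playing x = z is (numerically) optimal
```

By the symmetry of the two players, the same check applies to the type-$1$ player, so $(z,z)$ is confirmed as a NE for this choice of $z$.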
Therefore, we have characterized the NEs of the game, as specified in the theorem statement. \end{proof} } \newcommand{\privacyAwareAgentInsteadAPX}{ \ifx \cameraready \undefined Due to space limitations, the proof is deferred to Appendix~\ref{apx_sec:privacy_aware_agent}. \else The proof is deferred to the full version of the paper. \fi The proof of Theorem~\ref{thm:behavior_privacy_aware} also applies to some alternative models of a privacy-aware agent. In addition to Theorem~\ref{thm:behavior_privacy_aware}, we also analyze, for completeness, an alternative scenario where type $0$ and type $1$ are two competing agents. Observe that this is no longer a Bayesian game with a single player but rather a standard complete-information game with two players. We show that this game also has NEs where both types play randomized strategies that follow Randomized Response (i.e., ${\bf Pr}[\sigma_B^*(0) = 0] = {\bf Pr}[\sigma_B^*(1)=1] > \tfrac 1 2$). } \ifx \fullversion \undefined \privacyAwareAgentInsteadAPX \else \PAA \fi \ifx \fullversion \undefined \fi \ifx \fullversion \undefined \else \fi \section{The Coupon Game with Scoring Rules Payments} \label{sec:scoring_rule} \ifx \fullversion \undefined \fi In this section, we model the payments between $A$ and $B$ using a proper scoring rule (see below). This model is a good ``first-attempt'' model for the following two reasons. (i) Proper scoring rules assign profit to $A$ based on the accuracy of her belief, so $A$ has incentives to improve her prior belief on $B$'s type. (ii) As we show, in this model it is possible to quantify $B$'s trade-off between an $\epsilon$-change in the belief and the cost that $B$ pays $A$. In that aspect, this model gives a clear quantifiable trade-off that explains what each additional unit of $\epsilon$-differential privacy buys $B$. Interestingly, proper scoring rules were recently applied in the context of differential privacy~\cite{GhoshLRS14} (yet in a very different capacity).
\newcommand{\properScoringRulesWithAPX}{ Proper scoring rules (see surveys~\cite{Winkler96,Gneiting:07}) were devised as a method to elicit experts to report their true prediction about some random variable. For a $\bits$-valued random variable $X$, an expert is asked to report a prediction $x\in[0,1]$ about the probability that $X=1$. We pay her $f_1(x)$ if indeed $X=1$ and $f_0(x)$ otherwise. A \emph{proper scoring rule} is a pair of functions $(f_0, f_1)$ such that $\arg\max_x \E_{\ensuremath{t}\leftarrow X}[f_\ensuremath{t}(x)] = {\bf Pr}[X=1]$. Hence a risk-neutral agent's best strategy is to report $x={\bf Pr}[X=1]$. Most frequently used proper scoring rules are \emph{symmetric} (or label-invariant) rules, where $\forall x, f_1(x) = f_0(1-x)$ (also referred to as neutral scoring rules in~\cite{ChenDPV14}). With symmetric proper scoring rules, the payment to an expert reporting $x$ as the probability of a random variable $X$ to be $1$, is identical to the payment of an expert reporting $(1-x)$ as the probability of the random variable $(1-X)$ to be $1$. Additional background regarding proper scoring rules is deferred to \ifx \cameraready \undefined Appendix~\ref{apx_sec:proper_scoring_rules}. \else the full version of this paper. \fi } \properScoringRulesWithAPX \newcommand{\backgroundProperScoringRules}{ \subsection{Background: Proper Scoring Rules} \label{subsec:proper_scoring_rules} Proper scoring rules (see surveys~\cite{Winkler96,Gneiting:07}) were devised as a method to elicit experts to report their true prediction as to the probability of an event happening. That is, given a Bernoulli random variable $X$, we ask an expert to report her estimation of $\mu={\bf Pr}[X=1]$. Given that the expert reports $x$ we pay her $f_1(x)$ if indeed $X=1$ and pay her $f_0(x)$ otherwise. 
A \emph{proper scoring rule} is a pair of functions $(f_0, f_1)$ such that $\arg\max_x \E_{\ensuremath{t}\leftarrow X}[f_\ensuremath{t}(x)] = \mu$ where the maximum is obtained for a unique report. That is, it is in the expert's best interest to report the true prior. It was shown~\cite{Savage:71,Gneiting:07} that a pair of twice-differentiable functions $(f_0,f_1)$ gives a proper scoring rule iff there exists a convex function $g$ (i.e. $g'' > 0$ on the $[0,1]$ interval) s.t. $f_0(x) = g(x) - xg'(x)$, $f_1(x) = g(x) + (1-x) g'(x)$. Using the derivatives of both functions ($f_0'(x) = -xg''(x)$ and $f_1'(x) = (1-x)g''(x)$), we deduce that $f_0$ is a strictly decreasing function and $f_1$ is a strictly increasing function on the $[0,1]$ interval. And so, given that $X=1$ w.p. $\mu$, we have that the expected payment for an expert predicting $x$ is \begin{equation} F_\mu(x) = (1-\mu)f_0(x) + \mu f_1(x) = g(x) - (x-\mu)g'(x) \label{eq:F_mu} \end{equation} which is maximized at $x=\mu$, where $F_\mu(\mu) = g(\mu)$. The most commonly discussed proper scoring rules are \emph{symmetric} (or label-invariant) proper scoring rules, which are oblivious to the labeling of the outcomes of $X$ (also referred to as neutral scoring rules in~\cite{ChenDPV14}). That is, symmetric scoring rules have the property that for any two Bernoulli random variables $X$ and $X'$ s.t. ${\bf Pr}[X=1]={\bf Pr}[X'=0]$, the expected payment for an expert predicting $x$ for $X$ is identical to the payment for an expert predicting $1-x$ for $X'$. Such symmetric scoring rules are derived from a convex function $g$ that is symmetric around $\tfrac 1 2$. I.e.: $g(x)=g(1-x)$, and so $g'(x) = -g'(1-x)$ and $g''(x)=g''(1-x)$. Concrete examples of proper scoring rules, such as the quadratic scoring rule, the spherical scoring rule and the logarithmic scoring rule, are discussed in Section~\ref{subsec:specific_scoring_rules}.
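Propriety and symmetry can be checked numerically for a concrete rule. The sketch below uses the quadratic rule $(f_0,f_1)=(2-2x^2,\,4x-2x^2)$, generated by the convex $g(x)=2-2x+2x^2$, and verifies that the expected payment $F_\mu$ peaks at the true prior $\mu$ and that $f_1(x)=f_0(1-x)$.

```python
# Quadratic (Brier-type) rule: f0(x) = 2 - 2x^2, f1(x) = 4x - 2x^2.
f0 = lambda x: 2 - 2 * x * x
f1 = lambda x: 4 * x - 2 * x * x

def best_report(mu, n=10000):
    """Grid-maximize the expected payment F_mu(x) = (1-mu) f0(x) + mu f1(x)."""
    grid = [i / n for i in range(n + 1)]
    return max(grid, key=lambda x: (1 - mu) * f0(x) + mu * f1(x))

for mu in (0.1, 0.25, 0.5, 0.9):
    assert abs(best_report(mu) - mu) < 1e-3          # propriety: report the true prior
assert all(abs(f1(x) - f0(1 - x)) < 1e-12 for x in (0.0, 0.3, 0.7, 1.0))  # symmetry
print("quadratic rule is symmetric and proper on the grid")
```

Here $F_\mu'(x) = -(x-\mu)g''(x) = -4(x-\mu)$, so the grid argmax sits exactly at $\mu$, matching Equation~\eqref{eq:F_mu}.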
} \ifx \fullversion \undefined \fi \subsection{The Game with Scoring Rule Payments} \label{subsec:scoring_rule_game} \ifx \fullversion \undefined \fi We now describe the game, and analyze its BNE. In this game $A$ interacts with a random $B$ from a population that has $D_0$ fraction of type $0$ agents and $D_1$ fraction of type $1$ agents. Wlog we assume throughout Sections~\ref{sec:scoring_rule}, \ref{sec:matching_pennies} and \ref{sec:opt-out-possible} that $D_0 \geq D_1$. $A$ aims to discover $B$'s secret type. Her utility is directly linked to her posterior belief on $B$'s type: $A$ reports her belief that $B$ is of type $1$. $A$'s payments are given by a proper scoring rule, composed of two functions $(f_0,f_1)$, so that after reporting a belief of $x$, a $B$ agent of type $\ensuremath{t}$ pays $f_\ensuremath{t}(x)$ to $A$. \paragraph{A benchmark game.} First consider the following straightforward (and more boring) game where $B$ does nothing, $A$ merely reports $x$ -- her belief that $B$ is of type $1$. In this game $A$ gets paid according to a proper scoring rule --- i.e., $A$ gets a payment of $F_{D_1}(x) \stackrel{\rm def}{=} D_0 f_0(x) + D_1 f_1(x)$ in expectation. Since $(f_0, f_1)$ is a proper scoring rule, $A$ maximizes her expected payment by reporting $x=D_1$. So, in this game $A$ gets paid $g(D_1) \stackrel{\rm def}{=} F_{D_1}(D_1)$ in expectation, whereas $B$'s expected cost is $g(D_1)$. (Alternatively, a $B$ agent of type $0$ pays $f_0(D_1)$ and a $B$ agent of type $1$ pays $f_1(D_1)$.) \paragraph{The full game.} We now turn our attention to a more involved game. Here $A$, aiming to have a more accurate posterior belief on $B$'s type, offers $B$ a coupon. Agents of type $\ensuremath{t}$ prefer a coupon of type $\ensuremath{t}$. And so, $B$ chooses which type to report to $A$, who then gives $B$ the coupon and afterwards makes a prediction about $B$'s probability of being of type $1$. The formal stages of the game are as follows.
\begin{enumerate} \addtocounter{enumi}{-1} \item $B$'s type, $\ensuremath{t}$, is drawn randomly with ${\bf Pr}[\ensuremath{t}=0]=D_0$ and ${\bf Pr}[\ensuremath{t}=1] =D_1$. \item $B$ reports to $A$ a type $\hat \ensuremath{t}=\sigma_B(t)$ and receives utility of $\rho_\ensuremath{t}$ if indeed $\hat \ensuremath{t} = \ensuremath{t}$. We assume throughout this section that $\rho_0=\rho_1=\rho$. \item $A$ reports a prediction $x$, representing ${\bf Pr}[\ensuremath{t}=1 ~|~ \sigma_B(\ensuremath{t})=\hat\ensuremath{t}]$, and receives a payment from $B$ of $f_\ensuremath{t}(x)$. \end{enumerate} \newcommand{\BNEScoringRules}{ Consider the coupon game with payments in the form of a symmetric proper scoring rule and with the following added assumption about the value of the coupon: $f_1(D_0)-f_1(D_1) < \rho < f_1(1)-f_1(0)=f_0(0)-f_0(1)$. The unique BNE strategy of $B$ in this game, denoted $\sigma_B^*$, satisfies that ${\bf Pr}[\ensuremath{t} = 0~|~\sigma_B^*(\ensuremath{t})=0] = {\bf Pr}[\ensuremath{t} = 1~|~\sigma_B^*(\ensuremath{t})=1]$. } \begin{theorem} \label{thm:BNE_scoring_rules} \BNEScoringRules \end{theorem} Note that a Randomized Response strategy $\sigma_B$ for $B$ would instead have ${\bf Pr}[\sigma_B(0)=0]={\bf Pr}[\sigma_B(1)=1]$. This condition is different from the condition in Theorem~\ref{thm:BNE_scoring_rules} when ${\bf Pr}[t=0]\neq{\bf Pr}[t=1]$ (i.e., $D_0\neq D_1$). \newcommand{\proofTheoremProperScoringRule}{ \begin{proof} We first analyze both agents' utilities and strategies. The utility of $A$ is solely based on the payments of the proper scoring rule: $E_{\ensuremath{t}\leftarrow \{D_0,D_1\}} [ f_\ensuremath{t}(x) ]$. $A$ has to decide on two potential reports: $x_0$ and $x_1$, where for $b\in\bits$, $x_b$ represents $A$'s belief about ${\bf Pr}[\ensuremath{t}=1 ~|~ \hat \ensuremath{t} = b]$. Therefore, a strategy $\sigma_A$ of $A$ maps a signal $\hat\ensuremath{t}$ into a report. 
The utility of $B$ has two components~--- $B$ gains a certain amount of utility $\rho_\ensuremath{t}$ from reporting $A$ the true type, but then has to pay $A$ her scoring rule payments. Therefore a strategy $\sigma_B$ maps each of $B$'s types to a signal. Given a strategy $\sigma_B$ we use the following notation: \begin{eqnarray*} p= {\bf Pr}[\sigma_B(0)=0], && q = {\bf Pr}[\sigma_B(1)=1] \end{eqnarray*} This way, $B$'s utility function takes the form \begin{eqnarray*} &u_B & = D_0 u_{B,0} + D_1 u_{B,1} \cr \textrm{where} & u_{B,0} & = p\left( \rho - f_0(x_{0})\right) + (1-p)\left(-f_0(x_{1})\right) \cr & u_{B,1} & = q\left( \rho - f_1(x_{1})\right) + (1-q)\left(-f_1(x_{0})\right) \cr \end{eqnarray*} When $A$ sees the signal $\hat\ensuremath{t}$ the probability over $B$'s type is given by Bayes Rule: \begin{eqnarray} y_0 = y_0(p,q) \stackrel{\rm def}= {\bf Pr}[ \ensuremath{t} = 1 ~|~ \hat\ensuremath{t} = 0] = \frac {D_1(1-q)}{D_0p + D_1(1-q)} = \frac 1 {1 + \frac {D_0p}{D_1(1-q)}} \label{eq:y_0} \\ y_1 = y_1(p,q) \stackrel{\rm def}= {\bf Pr}[ \ensuremath{t} = 1 ~|~ \hat\ensuremath{t} = 1] = \frac {D_1q}{D_0(1-p) + D_1q} = \frac 1 {1 + \frac {D_0(1-p)}{D_1q}} \label{eq:y_1}\end{eqnarray} and since $A$'s payments come from a proper scoring rule it follows that $A$ reports $x_0 = \sigma_A(0)=y_0$ and $x_1=\sigma_A(1)=y_1$. In other words, given that $B$'s BNE strategy is $(p^*,q^*)$, then $A$ plays best-response of $x_0^* = y_0(p^*,q^*), x_1^* = y_1(p^*,q^*)$. We now turn to analyze $B$'s utility. Denote the strategy that $A$ plays as $x_0$ and $x_1$. Then agent $B$ decides on $p$ and $q$ that maximize the utility function \[ u_B = D_0 \cdot \left( p(\rho - f_0(x_0)) - (1-p)f_0(x_1) \right) + D_1\cdot\left( q(\rho - f_1(x_1))-(1-q)f_1(x_0) \right) \] It is simple to characterize $B$'s best response to $A$'s strategy of $(x_0,x_1)$. 
\begin{flalign} & \textrm{If }\rho > f_0(x_0)-f_0(x_1) \textrm{ then } p =1\cr & \textrm{If }\rho < f_0(x_0)-f_0(x_1) \textrm{ then } p =0 \cr & \textrm{If }\rho = f_0(x_0)-f_0(x_1) \textrm{ then $B$ may play any } p\in[0,1] &\cr & \textrm{If }\rho > f_1(x_1)-f_1(x_0) \textrm{ then } q =1 \cr & \textrm{If }\rho < f_1(x_1)-f_1(x_0) \textrm{ then } q =0 \cr &\textrm{If }\rho = f_1(x_1)-f_1(x_0) \textrm{ then $B$ may play any } q \in [0,1] \label{eq:Bs_strategies} \end{flalign} We now wish to characterize the game's BNEs. First, we claim that in a BNE, with $B$ playing $\sigma_B^* = (p^*,q^*)$, it cannot be that $p^*<1-q^*$. This follows from the fact that $y_0(p,q) > y_1(p,q) \Leftrightarrow p < 1-q$. It means that $A$'s best response to such $(p^*,q^*)$ is to answer some $(x_0,x_1)$ s.t. $x_0 > x_1$. But since $f_0$ is a decreasing function, $f_1$ is an increasing function and $\rho > 0$, then $B$'s best response to such $(x_0,x_1)$ is to deviate to $(1,1)$. Similarly, should $(p^*,q^*)$ be such that $p^*=1-q^*$ \emph{and both} $p^*,q^*\in(0,1)$, then $A$'s best response $(x_0,x_1)$ is $(D_1,D_1)$, which implies again that $B$ prefers to deviate to $(1,1)$. It follows that, with the exception of $(1,0)$ and $(0,1)$, any BNE strategy of $B$ satisfies $p^*>1-q^*$, and so any BNE strategy of $A$ satisfies $x_0 < x_1$. Before continuing with the proof we would like to make two observations, which we will repeatedly use. Let $X$ be a uniform Bernoulli random variable. We examine the expected payment to an expert reporting a belief of $z$ as to the probability of the event $X=1$, which we denote as $F_{1/2}(z)= \tfrac 1 2 (f_0(z)+f_1(z))$. The function $F_{1/2}$ is a concave function with a unique maximum at $z=\tfrac 1 2$, and it is strictly increasing on the $[0,\tfrac 1 2]$ interval and strictly decreasing on the $[\tfrac 1 2,1]$ interval.
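The two observations that follow from these properties of $F_{1/2}$ can be checked concretely. A small sketch, using the quadratic rule for concreteness (for which $F_{1/2}(z) = 1 + 2z(1-z)$):

```python
# F_{1/2} for the quadratic rule (f0, f1) = (2 - 2x^2, 4x - 2x^2):
# F_{1/2}(z) = ((2 - 2z^2) + (4z - 2z^2)) / 2 = 1 + 2z(1 - z).
F_half = lambda z: 0.5 * ((2 - 2 * z * z) + (4 * z - 2 * z * z))

# Observation 1: equal F_{1/2}-values come in mirror pairs z and 1 - z.
for z in (0.0, 0.2, 0.45):
    assert abs(F_half(z) - F_half(1 - z)) < 1e-12

# Observation 2: F_{1/2}(z) >= F_{1/2}(z') forces |z - 1/2| <= |z' - 1/2|.
pts = [i / 1000 for i in range(1001)]
for z in pts[::50]:
    for zp in pts[::50]:
        if F_half(z) >= F_half(zp):
            assert abs(z - 0.5) <= abs(zp - 0.5) + 1e-12
print("both observations verified for the quadratic rule")
```

Since $F_{1/2}(z) = g(z) - (z-\tfrac12)g'(z)$ is strictly unimodal with peak at $\tfrac12$ for any symmetric rule, the same checks go through for the spherical and logarithmic rules as well.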
Therefore, for any $a$ there exist at most two distinct preimages $z_1 \leq \tfrac 1 2 \leq z_2$ satisfying $F_{1/2}(z_1)=F_{1/2}(z_2) = a$. Recall that we assume $(f_0,f_1)$ is a symmetric proper scoring rule (so $f_1(z)=f_0(1-z)$ for any $z\in[0,1]$). So our first observation is: for any $z_1,z_2$ satisfying $ F_{1/2}(z_1)=F_{1/2}(z_2)$ and $z_2 > z_1$, we have that $z_2 = 1-z_1$ with $z_1\in[0,1/2)$ and $z_2 \in (1/2,1]$. Using again the fact that $(f_0,f_1)$ is a symmetric proper scoring rule and the fact that $F_{1/2}$ is maximized at $z=\tfrac 1 2$, we make our second observation: for any $z,z'$ satisfying $F_{1/2}(z)\geq F_{1/2}(z')$ it must hold that $|z-\tfrac 1 2| \leq |z'-\tfrac 1 2|$, which implies that $z \in [z',1-z']$ if $z' \leq 1/2$. We now return to the proof of the theorem using case analysis as to the potential BNE strategies of $B$. We will rely also on our assumption that $D_0 \geq D_1$. \begin{itemize} \item $(p^*,q^*)=(1,1)$, i.e. $B$ always plays $\hat\ensuremath{t}=\ensuremath{t}$. This means that $A$ sets $x_0=0$ and $x_1=1$. (I.e., $A$ always predicts $\ensuremath{t} = b$ given the signal $\hat\ensuremath{t} = b$.) \\$(\ast)$ We deduce that if $\rho \geq f_0(0)-f_0(1)$ and $\rho \geq f_1(1)-f_1(0)$, then the game has a BNE of \[(x_0^*,x_1^*) = (0,1), ~~~ (p^*,q^*)=(1,1)\] We comment that since $(f_0,f_1)$ is a symmetric proper scoring rule, then we have that $f_0(0)-f_0(1) = f_1(1)-f_1(0)$. \item $(p^*,q^*) = (1,0)$, i.e. $B$ only sends the $\hat\ensuremath{t}=0$ signal. So when $A$ sees the $\hat\ensuremath{t}=0$ signal she sets $x_0=D_1$ just as in the benchmark game. But $A$ is indifferent as to the choice of $x_1$ since the $\hat\ensuremath{t}=1$ signal is never sent. In order for this to be a BNE it must hold that $f_1(x_1)-f_1(D_1)\geq \rho \geq f_0(D_1)-f_0(x_1)$ so that both types of $B$ agent would keep sending the $\hat\ensuremath{t}=0$ signal.
So $x_1$ satisfies that $F_{1/2}(x_1)=\tfrac 1 2\left(f_0(x_1)+f_1(x_1)\right) \geq F_{1/2}(D_1) = \tfrac 1 2 \left( f_0(D_1) + f_1(D_1) \right)$. Based on our second observation, we have that $x_1\in[D_1, D_0]$.\\ $(\ast)$ We deduce that if the parameters of the game are set such that there exists $v\in [D_1,D_0]$ satisfying both $f_0(v)\geq f_0(D_1)- \rho$ and $f_1(v) \geq f_1(D_1) + \rho$ then the game has a BNE of \[(x_0^*,x_1^*) = (D_1,v), ~~~ (p^*,q^*)=(1,0)\] As $f_1$ is an increasing function, it must hold that $\rho \leq f_1(D_0)-f_1(D_1)$. In other words, when $\rho > f_1(D_0)-f_1(D_1)$ then this cannot be a BNE. \item $(p^*,q^*)=(0,1)$. This means that $B$ only sends the $\hat\ensuremath{t}=1$ signal. So now $A$ sets $x_1=D_1$ but $A$ is indifferent regarding the value of $x_0$. In order for $B$ not to deviate from $(0,1)$, $x_0$ should satisfy both $\rho \leq f_0(x_0)-f_0(D_1)$ and $\rho \geq f_1(D_1)-f_1(x_0)$. This implies that $F_{1/2}(x_0) \geq F_{1/2}(D_1)$ and our second observation gives that $x_0 \in [D_1,D_0]$. But observe that $f_0(x_0) \geq \rho+f_0(D_1) > f_0(D_1)$. This contradicts the fact that $f_0$ is a strictly decreasing function (as $x_0 \geq D_1$). \item $p^*=1$ while $q^* \in (0,1)$. This means $A$ sets $x_1 = 1$ (because only type $1$ agents can send $\hat\ensuremath{t}=1$), while setting $x_0=y_0(p^*,q^*)>0$. To keep $B$ from deviating, $x_0$ should satisfy that $\rho \geq f_0(x_0) - f_0(1)$ and $\rho = f_1(1)-f_1(x_0)$. Therefore $F_{1/2}(1) \geq F_{1/2}(x_0)$, so our observation yields the contradiction $1\in [x_0,1-x_0]$. \item $q^*=1$ while $p^*\in (0,1)$. This case is symmetric to the previous case, and we get a similar contradiction using $F_{1/2}(0) \geq F_{1/2}(x_1)$. \item $p^*,q^* \in (0,1)$ with $p^* > 1-q^*$. We know that $A$'s best response is setting $x_0^*=y_0(p^*,q^*)$ and $x_1^*=y_1(p^*,q^*)$ and we have already shown that $y_0 < y_1$.
In order for $B$ to play best response against $(y_0,y_1)$ we must have that $\rho = f_0(y_0) - f_0(y_1) = f_1(y_1)-f_1(y_0)$ so $F_{1/2}(y_0)=F_{1/2}(y_1)$. Based on our first observation from before we have that $y_1 = 1-y_0$. In other words, $B$ picks $p^*$ and $q^*$ s.t. the signals $\hat\ensuremath{t} =0$ and $\hat\ensuremath{t}=1$ are symmetric: \begin{eqnarray*} & {\bf Pr}[\ensuremath{t} = 1 ~|~ \hat\ensuremath{t} = 1] & = y_1 = 1- y_0\cr && = 1 - {\bf Pr} [\ensuremath{t}=1 ~|~ \hat\ensuremath{t} = 0] = {\bf Pr}[\ensuremath{t} = 0 ~|~ \hat\ensuremath{t}=0] \end{eqnarray*} so regardless of the value of $b$, the expression ${\bf Pr}[\ensuremath{t} = \hat\ensuremath{t} ~|~ \hat \ensuremath{t} = b]$ is the same.\\ Observe that we have $\rho = f_0(y_0)-f_0(y_1) = f_0(y_0) - f_0(1-y_0) =-g'(y_0)$ or $\rho=g'(y_1)$. (Recall, $(f_0,f_1)$ are derived using a convex function $g$ as detailed in Section~\ref{subsec:proper_scoring_rules}.) In other words, $B$ sets $(p^*,q^*)$ by first finding $y_1\in(\tfrac 1 2,1]$ s.t. $g'(y_1)=\rho$, then finding $(p^*,q^*)$ that satisfy Equation~\eqref{eq:p_q_and_ratios} and yield $y_1$. Formally, $B$ finds $(p^*,q^*)$ that satisfy \begin{equation} \rho = g'( \frac{D_1q^*} {D_0(1-p^*)+D_1q^*}) = -g'( \frac{D_1(1-q^*)} {D_0p^*+D_1(1-q^*)}) \label{eq:derivative_by_p} \end{equation} Recall that $g$ is convex and $g''>0$ on the $[0,1]$ interval. This implies that as $\rho$ increases, the point $y_1(p^*,q^*)$ gets further away from $\tfrac 1 2$ and closer to $1$. \end{itemize} \end{proof} Recall that in order for $B$ to play according to Randomized Response, $B$ should set $p^*=q^*$. Yet, in this game, a rational agent $B$ plays s.t. $A$'s posterior on $B$'s type is symmetric. Indeed, $1-y_0 = y_1$~implies \begin{equation} \frac {D_0p^*}{D_0p^*+D_1(1-q^*)} = \frac{D_1q^*} {D_0(1-p^*)+D_1q^*} ~~~\Rightarrow~~~ D_0^2p^*(1-p^*) = D_1^2q^*(1-q^*) \label{eq:p_q_and_ratios} \end{equation} and so, unless $D_0=D_1$, we have that $p^*\neq q^*$. 
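To make the interior BNE concrete, the following sketch (illustrative parameters; the logarithmic rule $g(x)=x\ln x+(1-x)\ln(1-x)$ is used for concreteness, so $g'(y_1)=\rho$ gives $y_1/(1-y_1)=e^\rho$) solves the symmetric-posterior conditions $D_0p^* = Z D_1(1-q^*)$ and $D_1q^* = Z D_0(1-p^*)$ with $Z=e^\rho$, and verifies Equation~\eqref{eq:p_q_and_ratios}:

```python
import math

# Illustrative parameters (not from the paper): D0 > D1 and a moderate coupon value.
rho, d0, d1 = 1.0, 0.6, 0.4
Z = math.exp(rho)             # = y1 / (1 - y1), from g'(y1) = rho for the log rule

# Closed-form solution of  D0 p = Z D1 (1-q)  and  D1 q = Z D0 (1-p):
p = (Z * Z - Z * d1 / d0) / (Z * Z - 1)
q = (Z * Z - Z * d0 / d1) / (Z * Z - 1)

y0 = d1 * (1 - q) / (d0 * p + d1 * (1 - q))       # Pr[t = 1 | signal 0]
y1 = d1 * q / (d0 * (1 - p) + d1 * q)             # Pr[t = 1 | signal 1]
assert abs(y1 - (1 - y0)) < 1e-9                  # A's posterior is symmetric
assert abs(d0**2 * p * (1 - p) - d1**2 * q * (1 - q)) < 1e-9   # eq:p_q_and_ratios
assert abs(p - q) > 1e-3                          # D0 != D1  =>  p* != q*
print(round(p, 4), round(q, 4))
```

Note that $p^*>q^*$ here: the more common type $0$ reports truthfully more often, precisely so that the induced posteriors, rather than the reporting probabilities, become symmetric.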
Lastly, we comment about $A$'s payment. Using the notation of Equation~\eqref{eq:F_mu}, when $\hat\ensuremath{t}=0$ then $A$ gets an expected payment of $F_{y_0}(y_0) = g(y_0)$, and when $\hat\ensuremath{t}=1$ then $A$ gets $F_{y_1}(y_1) = g(y_1)$. But as $y_0=1-y_1$ and the scoring rule is symmetric, we have that $A$ gets the same payment regardless of the signal, so $A$'s payment is $g(y_1)$. Recall that $y_1$ is the point where $\rho=g'(y_1)$. So, is this game worthwhile for $A$? Imagine that $A$ could choose between either this coupon game, or the ``benchmark game'' in which $A$ guesses $B$'s type without viewing any signal from $B$. Recall, in the benchmark game, $A$ gets an expected profit of $g(D_1) = g(D_0)$. Recall that $g$ is a convex function that is minimized at $x=\tfrac 1 2$. Therefore, $g(y_1) > g(D_0)$ if $\tfrac 1 2 \leq D_0 < y_1$ which also implies $g'(D_0) < g'(y_1) = \rho$. In other words, $A$ gains more money than in the benchmark game only if $A$ offers a coupon of high value. \paragraph{The case with $\rho_0 \neq \rho_1$.} We briefly discuss the case where $\rho_0$ and $\rho_1$ are not equal. First of all, observe that now there could be situations in which the BNE is of the form $(1,q^*)$ with a non-integral $q^*$, or the symmetric $(p^*,1)$. This is because the previous contradiction no longer holds. More interestingly, in the BNE we get, $(y_0,y_1)$ and $(p^*,q^*)$ still satisfy Equations~\eqref{eq:y_0} and~\eqref{eq:y_1}, and also \begin{eqnarray*} \rho_0 = f_0(y_0)-f_0(y_1) &,& \rho_1 = f_1(y_1)-f_1(y_0) \end{eqnarray*} which, by equating the two resulting expressions for the product $\rho_0\rho_1$, can be manipulated into \[ \frac {\rho_1}{\rho_0 + \rho_1} f_0(y_0) + \frac{\rho_0}{\rho_0 + \rho_1}f_1(y_0) = \frac{\rho_1}{\rho_0 + \rho_1} f_0(y_1) + \frac {\rho_0}{\rho_0 + \rho_1} f_1(y_1) \] In other words, setting $\mu= \tfrac {\rho_0}{\rho_0+\rho_1}$, we have $F_\mu(y_0) = F_\mu(y_1)$.
Alternatively, it is possible to subtract the two equalities and deduce: \[ \tfrac 1 2(\rho_0-\rho_1)= \tfrac 1 2 (f_0(y_0) + f_1(y_0)) - \tfrac1 2 (f_0(y_1) + f_1(y_1)) = F_{1/2}(y_0)-F_{1/2}(y_1)\] These two conditions (along with $y_0 < y_1$) dictate the value of $y_0,y_1$, and thus the values of $(p^*,q^*)$. Sadly, it is no longer the case that $y_1=1-y_0$. } \newcommand{\insteadOfProofForScoringRulesAPX}{ \ifx \cameraready \undefined The proof of Theorem~\ref{thm:BNE_scoring_rules} is deferred to Appendix~\ref{apx_sec:proper_scoring_rules}, where we also compare $A$'s profit in the benchmark game to her profit from her BNE strategy in the full game. \else The proof of Theorem~\ref{thm:BNE_scoring_rules} is in the full version of this paper, where we also compare $A$'s profit in the benchmark game to her profit from her BNE strategy in the full game. \fi } \ifx \fullversion \undefined \insteadOfProofForScoringRulesAPX \else \proofTheoremProperScoringRule \fi In Appendix~\ref{subsec:specific_scoring_rules} we discuss the implications of using specific scoring rules. \newcommand{\specificScoringRules}{ \subsection{Strategies Under Specific Scoring Rules} \label{subsec:specific_scoring_rules} We now plug in different types of proper and symmetric scoring rules, and find what $p^*$ and $q^*$ are in each case. We analyze the game for a value of $\rho$ s.t. the BNE is obtained where neither $p^*$ nor $q^*$ are integral. We also characterize the $\epsilon$ in $A$'s posterior probability --- the value of $\max_b\left\{\ln\left(\frac {{\bf Pr}[\hat \ensuremath{t}=\ensuremath{t} ~|~ \ensuremath{t}=b] } {{\bf Pr}[\hat\ensuremath{t}=1-\ensuremath{t} ~|~ \ensuremath{t} = b]} \right)\right\}$. There exist three canonical rules often used in the literature: Quadratic, Spherical and Logarithmic. \paragraph{Quadratic Scoring Rule.} The quadratic scoring rule is defined by the functions $(f_0(x),f_1(x)) = (2-2x^2,4x-2x^2)$.
The quadratic scoring rule is generated by the convex function $g(x) = x^2+(1-x)^2+1 = 2-2x+2x^2$. (So, $g'(x) = -2+4x$ and $g''(x)=4$.) Therefore $(f_0'(x),f_1'(x)) = ( -4x, 4(1-x) )$. Observe that since $g' \in [-2,2]$, Equation~\eqref{eq:derivative_by_p} gives that $\rho \in (0,2]$. Hence, Equation~\eqref{eq:derivative_by_p} takes the form \begin{eqnarray*} & \rho = 2 - \frac 4 {1+ \frac {D_0p}{D_1(1-q)}} &\Rightarrow~~ \frac {D_0p}{D_1(1-q)} =\frac {2+\rho} {2-\rho} \Rightarrow~~ p = \frac {D_1}{D_0} \frac {2+\rho} {2-\rho} (1-q)\cr & \rho = -2 + \frac 4 {1+ \frac {D_0(1-p)}{D_1q}} &\Rightarrow~~ \frac {D_0(1-p)}{D_1q} = \frac {2-\rho}{2+\rho} \Rightarrow~~ q = \frac {D_0}{D_1} \frac {2+\rho} {2-\rho} (1-p)\cr \end{eqnarray*} So we have \[ p = \frac {D_1}{D_0} \frac {2+\rho} {2-\rho} - \left(\frac {2+\rho} {2-\rho}\right)^2(1-p) ~\Rightarrow ~~ p = \left(\frac {2+\rho} {2-\rho}\right) \left(\frac {D_1}{D_0}-\frac {2+\rho} {2-\rho}\right) / \left(1-\left(\frac {2+\rho} {2-\rho}\right)^2 \right)\] which boils down to \[ p = \frac {2+\rho} 4 \left(\frac{\frac {2+\rho} {2-\rho}-\frac {D_1}{D_0}} {\frac {2+\rho} {2-\rho}-1}\right) = \frac {2+\rho} 4 \left( \frac { 2(1-\frac {D_1}{D_0})+\rho(1+\frac {D_1}{D_0}) } {2\rho} \right) = \frac {2+\rho} 4 \left(\frac 1 2(1+\frac {D_1}{D_0}) + \frac {D_0-D_1}{\rho D_0} \right) \] And similarly, \[ q = \frac {D_0}{D_1} \frac {2+\rho} {2-\rho} - \left(\frac {2+\rho} {2-\rho}\right)^2(1-q) ~\Rightarrow ~~ q = \left(\frac {2+\rho} {2-\rho}\right) \left(\frac {D_0}{D_1}-\frac {2+\rho} {2-\rho}\right) / \left(1-\left(\frac {2+\rho} {2-\rho}\right)^2 \right)\] which gives \[ q = \frac {2+\rho} 4 \left(\frac{\frac {2+\rho} {2-\rho}-\frac {D_0}{D_1}} {\frac {2+\rho} {2-\rho}-1}\right) = \frac {2+\rho} 4 \left( \frac { 2(1-\frac {D_0}{D_1})+\rho(1+\frac {D_0}{D_1}) } {2\rho} \right) = \frac {2+\rho} 4 \left(\frac 1 2(1+\frac {D_0}{D_1}) - \frac {D_0-D_1}{\rho D_1} \right) \] More importantly, under these $p$ and $q$ values,
$y_0 = \frac {2-\rho} 4$ and $y_1 = \frac {2+\rho} 4$. So from $A$'s perspective, there is a Randomized Response move here with $e^\epsilon = y_1/y_0$, hence \[\epsilon = \ln(\frac {2+\rho}{2-\rho} )\] The expected utility of $A$ is $u_A = g(y_0) = y_0^2+y_1^2+1 = \frac {8+2\rho^2}{16}+1 = \frac 3 2 + \frac {\rho^2}8$. This is in comparison to $g(D_1) = 1+D_0^2+D_1^2 = 2-2D_1+2D_1^2 = 2-2D_1(1-D_1) = 2-2D_1D_0$. It follows that $A$ prefers the second game (with the coupon) to the first only if $\tfrac 1 2 - \tfrac {\rho^2}8 < 2 D_0D_1$ or $\rho^2 > 4-16D_0D_1$. Clearly, with $D_0=D_1=\tfrac 1 2$ we have that $A$ prefers the coupon game over the benchmark game. \paragraph{Spherical Scoring Rule.} The spherical scoring rule is defined by the functions \[ (f_0(x),f_1(x) ) = ( \frac {1-x} {\sqrt{x^2+(1-x)^2}}, \frac x {\sqrt{x^2+(1-x)^2}})\] which are generated using $g(x) = \sqrt{x^2+(1-x)^2}$. (So, $g'(x) = \frac {2x-1} {\sqrt{x^2+(1-x)^2}}$ and $g''(x) = (x^2+(1-x)^2)^{-\tfrac 3 2}$.) Therefore $(f_0'(x),f_1'(x)) = ( -x(1-2x+2x^2)^{-\tfrac 3 2} , (1-x)(1-2x+2x^2)^{-\tfrac 3 2} )$. Using the definition of $g'(x)$, Equation~\eqref{eq:derivative_by_p} now yields \begin{eqnarray*} & \rho \sqrt{y_0^2 + 1-2y_0+y_0^2} = -2y_0+1 &\Rightarrow (4-2\rho^2)y_0^2-(4-2\rho^2)y_0 + (1-\rho^2)=0 \cr & \rho \sqrt{y_1^2 + 1-2y_1+y_1^2} = 2y_1-1 & \Rightarrow (4-2\rho^2)y_1^2-(4-2\rho^2)y_1 + (1-\rho^2)=0 \end{eqnarray*} So $y_0$ and $y_1$ are the two different roots of the equation $x^2-x+\frac{1-\rho^2} {4-2\rho^2} = 0$, namely $\tfrac 1 2 \pm \tfrac1 2 \sqrt{\frac {\rho^2}{2-\rho^2}}$. Plugging in the values of $y_0$ and $y_1$ we have \begin{eqnarray*} &&\frac{ D_1q - D_0(1-p)}{D_1q + D_0(1-p)} = \sqrt{\frac {\rho^2}{2-\rho^2}} \cr && \frac{ D_0p - D_1(1-q)}{D_0p + D_1(1-q)} = \sqrt{\frac {\rho^2}{2-\rho^2}} \end{eqnarray*} and this is because we assume $D_0 p > D_1(1-q)$ and $D_1q > D_0(1-p)$.
(That is, when we see the signal $\hat\ensuremath{t} = 0$ it is more likely to come from a $\ensuremath{t}=0$-type agent than a $\ensuremath{t}=1$-type agent, and similarly with the $\hat\ensuremath{t}=1$ signal.) After arithmetic manipulations, we have \begin{eqnarray*} & (1-\rho^2)(D_0^2p^2 + D_1^2(1-q)^2) = 2 D_0D_1p(1-q) & \Rightarrow (1-\rho^2)D_0p = D_1(1-q)\left( 1 \pm \rho\sqrt{2-\rho^2}\right) \cr & (1-\rho^2)(D_0^2(1-p)^2+D_1^2q^2) = 2D_0D_1(1-p)q &\Rightarrow (1-\rho^2)D_1q = D_0(1-p)\left(1 \pm \rho \sqrt{2-\rho^2}\right) \end{eqnarray*} Using the fact that $\rho \leq 1$ and that $D_0p>D_1(1-q)$ and $D_1q>D_0(1-p)$, we get \begin{eqnarray*} && D_0p = D_1(1-q)\frac{ 1 + \rho\sqrt{2-\rho^2}}{ 1-\rho^2} \stackrel{\rm def}= {Z_\rho} D_1(1-q)\cr && D_1q = D_0(1-p)\frac{1 + \rho \sqrt{2-\rho^2}}{1-\rho^2} \stackrel{\rm def}= Z_\rho D_0(1-p) \end{eqnarray*} (because $1-\rho\sqrt{2-\rho^2} \leq 1-\rho \leq 1-\rho^2$.) We have that \begin{eqnarray*} && D_1 = D_1q+D_1(1-q) = Z_\rho D_0(1-p) + \frac 1 {Z_\rho} D_0p \cr && D_0 = D_0p+D_0(1-p) = {Z_\rho} D_1(1-q) + \frac 1{Z_\rho} D_1q \end{eqnarray*} We deduce \[ p = \frac {Z_\rho^2 - Z_\rho \frac{D_1}{D_0}}{Z_\rho^2-1}, \qquad q = \frac {Z_\rho^2 - Z_\rho \frac{D_0}{D_1}}{Z_\rho^2-1}\] More importantly, from $A$'s perspective, the signal is like a Randomized Response with parameter $e^\epsilon = y_1/y_0$ so \[\epsilon = \ln ( \left(1 + \sqrt{\frac {\rho^2}{2-\rho^2}}\right) / \left(1 - \sqrt{\frac {\rho^2}{2-\rho^2}}\right))\] The utility of $A$ from the game is now $g(y_0)$, which boils down to $\frac 1 {\sqrt{2-\rho^2}}$. This is in contrast to $\sqrt{D_0^2 + D_1^2}$, so $A$ prefers the game with the coupon over the baseline when $\frac 1 {2-\rho^2} > D_0^2+D_1^2$, i.e., when $\rho^2 > 2 - \frac 1 {D_0^2+D_1^2} = \frac {(D_0-D_1)^2} {D_0^2+D_1^2}$.
Complementarily, $B$'s expected payment is \[\rho(D_0p+D_1q) - g(y_0) = \rho \left( \frac{D_0Z_\rho^2-D_1 Z_\rho +D_1Z_\rho^2-D_0Z_\rho} {Z_\rho^2-1}\right) -\frac 1 {2-\rho^2} = \frac {\rho Z_\rho}{Z_\rho + 1} - \frac 1 {2-\rho^2}\] \paragraph{Logarithmic Scoring Rule.} The logarithmic scoring rule is defined by the functions $ (f_0(x),f_1(x)) = (\ln(1-x), \ln(x))$ which are generated by $g(x) = - H(x) = x\ln(x)+(1-x)\ln(1-x)$. (So, $g'(x) = \ln(x) -\ln(1-x)$ and $g''(x) = \tfrac 1 x +\tfrac 1 {1-x}$.) Therefore $(f_0'(x),f_1'(x)) = ( -\frac 1 {1-x} , \frac 1 x ) $. Observe that the logarithmic scoring rule has \emph{negative} costs, and furthermore, we may charge an infinite cost to an expert reporting $x=0$ or $x=1$. Using $g'(x)$, Equation~\eqref{eq:derivative_by_p} takes the form \begin{eqnarray*} & \rho = \ln(\frac{1-y_0}{y_0}) = \ln(\frac{y_1}{1-y_1}) &\Rightarrow y_0 = \frac 1 {1+e^\rho},~~y_1 = \frac 1 {1+e^{-\rho}} \end{eqnarray*} This implies that \[ \frac {D_0 p}{D_1(1-q)} = \frac {D_1q}{D_0(1-p)}=e^{\rho} ~~\Rightarrow~~ p = \frac {e^{2\rho}-e^\rho \tfrac{D_1}{D_0}}{e^{2\rho}-1},~~~ q = \frac {e^{2\rho}-e^\rho \tfrac{D_0}{D_1}}{e^{2\rho}-1}\] The Randomized Response behavior that $A$ observes is for $e^\epsilon = y_1/y_0$, which means that simply $\epsilon = \rho$. The utility for $A$ is now $g(y_0) = -\frac {\ln(1+e^{\rho})}{1+e^\rho} - \frac {\ln(1+e^{-\rho})}{1+e^{-\rho}}$. And the utility for $B$ is $u_B = \rho(D_0p+D_1q)-g(y_0) = \rho \frac {e^{2\rho}-e^\rho}{e^{2\rho}-1} - g(y_0) = \frac {\rho e^\rho}{e^\rho+1} - g(y_0)$. } \cut{ \subsection{Skewing the Scoring-Rule Payments to Compensate for the Prior} \label{subsec:skewed_scoring_rule} As observed in Section~\ref{subsec:scoring_rule_game}, the way $B$ plays causes the signal $\hat\ensuremath{t}$ to obscure the prior $\{D_0,D_1\}$ over its original type. Here, we show that this phenomenon is a result of the payments to $A$ being symmetric.
In this section we suggest using a shifted scoring rule. Observe, given a proper scoring rule $(f_0,f_1)$ and any two positive constants $w_0, w_1$, we can define $\tilde f_0(x) = w_0 f_0(x)$ and $\tilde f_1(x) = w_1 f_1(x)$. These are no longer a proper scoring rule. In particular, given a r.v. $X$ s.t. ${\bf Pr}[X=1]=x$ it is most beneficial for an expert to report $x'=\frac {w_1 x}{w_0(1-x)+w_1 x} = \frac 1 {1+\frac {w_0(1-x)}{w_1x}}$. (Alternatively, to report $x'$ s.t. $\frac {x'}{1-x'} = \frac {w_1}{w_0}\cdot \frac x {1-x}$, or $\frac {w_0}{w_1} \frac {x'}{1-x'} = \frac x {1-x}$.) Plugging this into the game, we have that if agent $B$ follows strategy $(p,q)$, then $A$'s best response is as follows: given the signal $\hat\ensuremath{t} = 0$ she predicts $x_0$ s.t. $\frac {w_0}{w_1} \frac{x_0}{1-x_0} = \frac {D_1(1-q)}{D_0p}$; given the signal $\hat\ensuremath{t}=1$ she predicts $x_1$ s.t. $\frac{w_0}{w_1} \frac {x_1}{1-x_1} = \frac {D_1q}{D_0(1-p)}$. It is now simple to see that by setting $w_0 = D_1$ and $w_1 = D_0$ we ``neutralize'' the effect of the prior. This sets $A$'s best response for $(p,q)$ as $(x_0,x_1) = \left(\frac {1-q} {p+1-q},\frac q {1-p+q}\right)$. $B$'s best response strategies are just the same as given in Eq~\eqref{eq:Bs_strategies}, only w.r.t.\ $(\tilde f_0,\tilde f_1)$. In particular, similar conclusions hold for the different equilibria. \begin{itemize} \item If $\rho \geq \max\{D_0(f_1(1)-f_1(0)),D_1(f_0(0)-f_0(1))\}$, then the game has a BNE of \[(x_0^*,x_1^*) = (0,1), ~~~ (p^*,q^*)=(1,1)\] \item If $\exists v>D_1, D_1(f_0(D_1)-f_0(v)) < \rho < D_0(f_1(v)-f_1(D_1))$ then the game has a BNE of \[(x_0^*,x_1^*) = (D_1,v), ~~~ (p^*,q^*)=(1,0)\] \item If $\exists v>D_1, D_0(f_1(D_1)-f_1(v)) < \rho < D_1(f_0(v)-f_0(D_1))$ then the game has a BNE of \[(x_0^*,x_1^*) = (v,D_1), ~~~ (p^*,q^*)=(0,1)\] \item Similarly to before, we can rule out any other case where at least one of $\{p^*,q^*\}$ is integral.
Most cases follow the exact same line of reasoning. The case where $p^*=1$ while $q^* \in (0,1)$ or the symmetric case where $q^*=1$ and $p^*\in(0,1)$ can be ruled out by observing that for a r.v. $X$ s.t. ${\bf Pr}[X=1]=D_0$ predicting (using the proper scoring rule) any fractional $x'$ yields better utility than predicting $0$ or $1$. \end{itemize} We are now left with the case of $p^*,q^*\in(0,1)$. In this case, Equation~\eqref{eq:tradeoff_of_rho} takes the form \[ D_1f_0(x_0) + D_0 f_1(x_0) = D_1f_0(x_1) + D_0 f_1(x_1)\] I.e., given a r.v. $X$ s.t. ${\bf Pr}[X=1]=D_0$, then $x_0$ and $x_1$ yield the same expected utility: $\E[f_X(x_0)] = \E[f_X(x_1)]$. The function $F(x) = D_1 f_0(x) + D_0 f_1(x) = D_0 f_0(1-x) + D_1 f_1(1-x)$ has derivative of $F'(x) = (D_0-x)g''(x)$, and so it is strictly increasing on the $(0,D_0)$-interval and strictly decreasing on the $(D_0,1)$-interval. So one solution is for $x_0=x_1$, which we can immediately rule out as it leads to $\rho=0$. We deduce that $x_0 < D_0 < x_1$ (if $x_1<x_0$ then $\rho$ has to be negative). Unfortunately, there is no closed-form solution to this problem, unless we plug-in the functions $f_0, f_1$. But observe that there is no reason to have $x_0=1-x_1$ (which Randomized Response, with $p=q$ should give). } \ifx \fullversion \undefined \fi \ifx \fullversion \undefined \else \fi \section{The Coupon Game with the Identity Payments} \label{sec:matching_pennies} \ifx \fullversion \undefined \fi In this section, we examine a different variation of our initial game. As always, we assume that $B$ has a type sampled randomly from $\{0,1\}$ w.p. $D_0$ and $D_1$ respectively, and wlog $D_0 \geq D_1$. Yet this time, the payments between $A$ and $B$ are given in the form of a $2\times 2$ matrix we denote as $M$. This payment matrix specifies the payment from $B$ to $A$ in case $A$ ``accuses'' $B$ of being of type $\tilde\ensuremath{t}\in\{0,1\}$ and $B$ is of type $\ensuremath{t}$. 
In general we assume that $A$ strictly gains from finding out $B$'s true type and potentially loses otherwise (or conversely, that a $B$ agent of type $\ensuremath{t}$ strictly loses utility if $A$ accuses $B$ of being of type $\tilde\ensuremath{t}=\ensuremath{t}$ and potentially gains money if $A$ accuses $B$ of being of type $\tilde\ensuremath{t}=1-\ensuremath{t}$). In this section specifically, we consider one simple matrix $M$ -- the identity matrix $I_{2\times 2}$. Thus, $A$ gets utility of $1$ from correctly guessing $B$'s type (the same utility regardless of $B$'s type being $0$ or $1$) and $0$ utility if she errs. \ifx \fullversion \undefined \fi \subsection{The Game and Its Analysis} \label{subsec:coupon_game} \ifx \fullversion \undefined \fi \paragraph{The benchmark game.} The benchmark for this work is therefore a very simple ``game'' where $B$ does nothing, $A$ guesses a type and $B$ pays $A$ according to $M$. It is clear that $A$ maximizes utility by guessing $\tilde\ensuremath{t}=0$ (since $D_0 \geq D_1$) and so $A$ gains in expectation $D_0$; where an agent $B$ of type $\ensuremath{t} =0$ pays $1$ to $A$, and an agent $B$ of type $\ensuremath{t}=1$ pays $0$ to $A$. \paragraph{The full game.} Aiming to get a better guess for the actual type of $B$, we now assume $A$ first offers $B$ a coupon. As before, $B$ gets a utility of $\rho_\ensuremath{t}$ from a coupon of the right type and $0$ utility from a coupon of the wrong type. And so, the game takes the following form now. \begin{enumerate} \addtocounter{enumi}{-1} \item $B$'s type, denoted $\ensuremath{t}$, is chosen randomly, with ${\bf Pr}[\ensuremath{t}=0]=D_0$ and ${\bf Pr}[\ensuremath{t}=1]=D_1$. \item $B$ reports a type $\hat\ensuremath{t}=\sigma_B(t)$ to $A$. $A$ in return gives $B$ a coupon of type $\hat\ensuremath{t}$. \item $A$ accuses $B$ of being of type $\tilde\ensuremath{t}=\sigma_A(\hat\ensuremath{t})$ and $B$ pays $1$ to $A$ if indeed $\tilde\ensuremath{t}=\ensuremath{t}$. 
\end{enumerate} And so, the utility of agent $A$ is $u_A = \mathds{1}_{[\tilde\ensuremath{t}=\ensuremath{t}]}$. The utility of agent $B$ is a summation of two factors -- reporting the true type to get the right coupon and the loss of paying $A$ for finding $B$'s true type. So $u_B = \rho_\ensuremath{t}\mathds{1}_{[\hat\ensuremath{t}=\ensuremath{t}]} - \mathds{1}_{[\tilde\ensuremath{t}=\ensuremath{t}]}$. \newcommand{\couponMatchingPennies}{ In the coupon game with payments given by the identity matrix and $\rho_0\neq\rho_1$, any BNE strategy of $B$ is pure for at least one of the two types of $B$ agent. Formally, for any BNE strategy of $B$, denoted $\sigma_B^*$, there exist $\ensuremath{t},\hat\ensuremath{t}\in\bits$ s.t. ${\bf Pr}[\sigma_B^*(\ensuremath{t})=\hat\ensuremath{t}]=1$. } \begin{theorem} \label{thm:coupon_matching_pennies} \couponMatchingPennies \end{theorem} In the case where $\rho_0=\rho_1$, $B$ has infinitely many randomized BNE strategies, including a BNE strategy $\sigma_B^*$ s.t. $\tfrac 1 2 \leq {\bf Pr}[\sigma_B^*(0)=0] = {\bf Pr}[\sigma_B^*(1)=1] < 1$ (Randomized Response). \newcommand{\proofTheoremIdentityMatrix}{ \begin{proof} First, we denote the strategies of agents $A$ and $B$. We denote \begin{flalign*} \textrm{For $B$: } & ~~p = {\bf Pr}[\sigma_B(0)=0] \textrm{, and }~ q = {\bf Pr}[\sigma_B(1)=1] \cr \textrm{For $A$: } & ~~x ={\bf Pr}[\sigma_A(0)=0] \textrm{, and }~ y={\bf Pr}[\sigma_A(1)=1] \end{flalign*} Using these four parameters, we analyze the utility functions of the agents of the game. We start with the utility function of $A$: \begin{align*} u_A = D_0 p x + D_0 (1-p)(1-y) + D_1 q y + D_1 (1-q)(1-x) \end{align*} This function characterizes $A$'s best response strategy as follows.
$A$ determines $x ={\bf Pr}[\sigma_A(0)=0]$ based on the relation between $D_0p$ ($= {\bf Pr}[\ensuremath{t}=0 \wedge \hat\ensuremath{t}=0]$) and $D_1(1-q)$ ($= {\bf Pr}[\ensuremath{t}=1 \wedge \hat\ensuremath{t}=0] $) --- if $D_0p$ is the larger term, then $x=1$; if $D_1(1-q)$ is the larger term, then $x=0$; and if both are equal then $A$ is free to set any $x\in [0,1]$. Similarly, the relationship between $D_1q = {\bf Pr}[\ensuremath{t}=1 \wedge \hat\ensuremath{t}=1]$ and $D_0(1-p) = {\bf Pr}[\ensuremath{t}=0\wedge\hat\ensuremath{t}=1]$ determines the value of $y = {\bf Pr}[\sigma_A(1)=1]$. We therefore denote the following two lines on the $[0,1]\times [0,1]$ square of possible choices for $p$ and $q$ \begin{eqnarray*} &\ell_1 : & q = 1- \tfrac {D_0}{D_1} p \textrm{, (i.e., $D_0p=D_1(1-q)$)}\cr &\ell_2 : & q = \tfrac {D_0}{D_1} (1-p)\textrm{, (i.e., $D_0(1-p)=D_1q$)} \end{eqnarray*} These are $A$'s ``lines of indifference'': when $B$ plays $(p,q)\in \ell_1$ then $A$ is indifferent to any value of $x$ in the range $[0,1]$, and when $B$ plays $(p,q)\in\ell_2$ then $A$ is indifferent between any value of $y$. Observe that $\ell_1$ and $\ell_2$ have the same slope, and so they are parallel, and that the point $(p,q)=(1,0)$ is above $\ell_1$ yet on $\ell_2$. It follows that $\ell_2$ is above $\ell_1$ (unless $D_0=D_1=\tfrac 1 2$ in which case the two lines coincide). The two lines are shown in Figure~\ref{fig:strategy_space_for_B}. \cut{ \begin{flalign*} \textrm{For $A$:} &\cr & \textrm{ if } D_0 p > D_1 (1-q), ~\textrm{ i.e. } {\bf Pr}[t=0 \wedge \hat\ensuremath{t}=0] > {\bf Pr}[\ensuremath{t}=1 \wedge \hat\ensuremath{t}=0] \textrm{, then } x = 1\cr & \textrm{ if } D_0 p < D_1 (1-q), ~\textrm{ i.e. } {\bf Pr}[t=0 \wedge \hat\ensuremath{t}=0] < {\bf Pr}[\ensuremath{t}=1 \wedge \hat\ensuremath{t}=0] \textrm{, then } x = 0\cr & \textrm{ if } D_0 p = D_1 (1-q), ~\textrm{ i.e. 
} {\bf Pr}[t=0 \wedge \hat\ensuremath{t}=0] = {\bf Pr}[\ensuremath{t}=1 \wedge \hat\ensuremath{t}=0] \textrm{, then } A \textrm{ can play any } x\in[0,1]\cr & \textrm{ if } D_1 q > D_0 (1-p), ~\textrm{ i.e. } {\bf Pr}[t=1 \wedge \hat\ensuremath{t}=1] > {\bf Pr}[\ensuremath{t}=0 \wedge \hat\ensuremath{t}=1] \textrm{, then } y = 1\cr & \textrm{ if } D_1 q < D_0 (1-p), ~\textrm{ i.e. } {\bf Pr}[t=1 \wedge \hat\ensuremath{t}=1] < {\bf Pr}[\ensuremath{t}=0 \wedge \hat\ensuremath{t}=1] \textrm{, then } y = 0\cr & \textrm{ if } D_1 q = D_0 (1-p), ~\textrm{ i.e. } {\bf Pr}[t=1 \wedge \hat\ensuremath{t}=1] = {\bf Pr}[\ensuremath{t}=0 \wedge \hat\ensuremath{t}=1] \textrm{, then } A \textrm{ can play any } y\in[0,1]\cr \textrm{For $B$:} &\cr & \textrm{ if } \rho_0 > x+y-1 \textrm{ then } p = 1\cr & \textrm{ if } \rho_0 < x+y-1 \textrm{ then } p=0\cr & \textrm{ if } \rho_0 = x+y-1 \textrm{ then } B \textrm{ can play any } p\in[0,1]\cr & \textrm{ if } \rho_1 > x+y-1 \textrm{ then } q = 1\cr & \textrm{ if } \rho_1 < x+y-1 \textrm{ then } q=0\cr & \textrm{ if } \rho_1 = x+y-1 \textrm{ then } B \textrm{ can play any } q\in[0,1] \end{flalign*} Using $A$'s best response strategies, we look at the $[0,1]\times [0,1]$ square of possible choices for $p$ and $q$, and denote two lines on this square. \begin{eqnarray*} &\ell_1 : & q = 1- \tfrac {D_0}{D_1} p \textrm{, (i.e., $D_0p=D_1(1-q)$)}\cr &\ell_2 : & q = \tfrac {D_0}{D_1} (1-p)\textrm{, (i.e., $D_0(1-p)=D_1q$)} \end{eqnarray*} The two lines are parallel (if $D_0 > D_1$) or collapse into a single line when $D_0=D_1 = \tfrac 1 2$. The best response analysis for $A$ means that if $(p,q)$ is a point above the $\ell_1$-line then $A$ sets $x=1$, and if $(p,q)$ is below the $\ell_1$-line then $A$ sets $x=0$. Similarly, if $(p,q)$ is above the $\ell_2$-line then $y=1$, and if $(p,q)$ is below the $\ell_2$-line then $y=0$. The strategy space for $B$ is shown in Figure~\ref{fig:strategy_space_for_B}. 
} \begin{figure} \caption{The strategy space $[0,1]\times[0,1]$ of $(p,q)$ for agent $B$, partitioned by the parallel lines $\ell_1$ and $\ell_2$.} \label{fig:strategy_space_for_B} \end{figure} We now turn our attention to the utility functions of $B$. The utility of $B$ of type $\ensuremath{t}=0$ is \[u_{B,0} = p\cdot(\rho_0 - x) + (1-p)(-1+y)\] and the utility of $B$ of type $\ensuremath{t}=1$ is \[u_{B,1} = q\cdot(\rho_1 - y) + (1-q)(-1+x)\] which means that $B$'s best response strategies are: \begin{flalign*} & \textrm{ if } \rho_0 > x+y-1 \textrm{ then } p = 1\cr & \textrm{ if } \rho_0 < x+y-1 \textrm{ then } p=0\cr & \textrm{ if } \rho_0 = x+y-1 \textrm{ then } B \textrm{ can play any } p\in[0,1]\cr & \textrm{ if } \rho_1 > x+y-1 \textrm{ then } q = 1\cr & \textrm{ if } \rho_1 < x+y-1 \textrm{ then } q=0\cr & \textrm{ if } \rho_1 = x+y-1 \textrm{ then } B \textrm{ can play any } q\in[0,1] \end{flalign*} Using the best response strategies of both $A$ and $B$, we can analyze the game's potential BNEs. First we cover the simple case. If $\max \{\rho_0,\rho_1\} > 1$: then at least one coupon has value strictly greater than $1$ and so one of the two types of $B$ agents strictly prefers deviating to playing deterministically. Wlog this type is the $t=0$ type, and so in any BNE of the game we have that ${\bf Pr}[\sigma^*_B(0)=0]=1$. The interesting case is when $\max\{\rho_0,\rho_1\} \leq 1$ and since we assume $\rho_0\neq\rho_1$ then for some type of $B$ agent the value of the coupon is strictly $<1$. (This intuitively makes sense~--- the coupon game becomes interesting only when $B$'s value for the coupon is below the max-payment from $B$ to $A$, hence $B$ has incentive to hide her true type.) We continue with a case analysis as to the potential BNE strategies of $B$. \begin{itemize} \item Strictly above the $\ell_2$ line, where $D_1q>D_0(1-p)$.\\ This means $B$ plays s.t.
$D_1{\bf Pr}[\sigma^*_B(1)=1] > D_0 {\bf Pr}[\sigma^*_B(0)=1]$, and as a result \[D_1{\bf Pr}[\sigma^*_B(1)=0] = D_1 - D_1q < D_0 - D_0(1-p) = D_0{\bf Pr}[\sigma^*_B(0)=0]\] Therefore, given that $A$ observes any signal $\hat\ensuremath{t}\in\{0,1\}$, it is more likely that a $B$ agent of type $\hat\ensuremath{t}$ sent this signal. So $A$ responds to such a strategy by playing deterministically ${\bf Pr}[\sigma^*_A(\hat\ensuremath{t})=\hat\ensuremath{t}]=1$ for any signal $\hat\ensuremath{t}\in\bits$. As $A$ prefers to play $x=y=1$ and some type of $B$ agent has coupon valuation $<1$, that type deviates (so either $p=0$ or $q=0$), and so the BNE strategy of $B$ cannot be above the $\ell_2$ line. \item Strictly below the $\ell_2$ line, where $D_1q < D_0(1-p)$.\\ This means $B$ plays s.t. $D_1{\bf Pr}[\sigma^*_B(1)=1] < D_0 {\bf Pr}[\sigma^*_B(0)=1]$. So the $\hat\ensuremath{t}=1$ signal is more likely to come from a $\ensuremath{t}=0$ type agent, and so $A$'s best response is to set $(1-y)={\bf Pr}[\sigma^*_A(1)=0]=1$. We thus have that $x+y-1 \leq 0$ whereas $\rho_0,\rho_1 > 0$. Hence $B$ deviates to playing $(p,q)=(1,1)$, and so the BNE of $B$ cannot be below the $\ell_2$ line. \item On the $\ell_2$ line, where $D_1q = D_0(1-p)$.\\ This means $B$ plays s.t. $D_1{\bf Pr}[\sigma^*_B(1)=1] = D_0 {\bf Pr}[\sigma^*_B(0)=1]$, and as a result \[D_1{\bf Pr}[\sigma^*_B(1)=0] = D_1 - D_1q < D_0 - D_0(1-p) = D_0{\bf Pr}[\sigma^*_B(0)=0]\] assuming $D_1<D_0$ (the special case where $D_0=D_1=\tfrac 1 2$ will be discussed later).
And so when $A$ views the $\hat\ensuremath{t}=0$ signal it is more likely that the type of $B$ agent is $t=0$, so $x={\bf Pr}[\sigma^*_A(0)=0]=1$; whereas when $A$ views the $\hat\ensuremath{t}=1$ signal both types of $B$ agents are equally likely to have sent the signal, so $A$ is indifferent as to the value of $y = {\bf Pr}[\sigma^*_A(1)=1]$.\\ Since $\rho_0\neq \rho_1$, then by setting the single parameter $y$, $A$ can make at most one of the two types of $B$ agent indifferent, while the other type plays a pure strategy. In other words, $B$'s BNE strategy can only be one of the two extreme points: $(p,q)=(1,0)$ or $(p,q) = (\tfrac {D_0-D_1}{D_0}, 1)$. The pure strategy of the non-indifferent type is determined by the relation $\rho_0 \gtrless \rho_1$. So the possible BNEs are: \begin{eqnarray*} \textrm{If } \rho_0 > \rho_1, &&\sigma_A^* = (1,y) \textrm{ with } y\in [\rho_1,\rho_0],~ \sigma_B^*=(1,0) \cr \textrm{If } \rho_0 = \rho_1, && \sigma_A^* = (1,\rho_0), ~\sigma_B^*=(p,q) \textrm{ with } (p,q) \in \ell_2 \cr \textrm{If } \rho_0 < \rho_1, && \sigma_A^* = (1, \rho_0), ~ \sigma_B^*= (\tfrac {D_0-D_1} {D_0}, 1) \end{eqnarray*} In the special case where $D_0=D_1$ (the two lines are the same one), $(p=1,q=0)$ and $(p=0,q=1)$ are both BNEs, with $A$'s BNE strategy being any $(x,y)$ satisfying $\min\{\rho_0,\rho_1\} \leq x+y-1 \leq \max\{\rho_0,\rho_1\}$. \end{itemize} Observe that in the case with $\rho_0=\rho_1<1$, in a BNE, $B$ may play any strategy on the $\ell_2$-line and $A$ makes both types of $B$ agent indifferent to the value of $p,q$ by setting $y=\rho_0=\rho_1$. Since the line $p=q$ (i.e., agent $B$ plays Randomized Response) does intersect the $\ell_2$ line at $(D_0,D_0)$, it is possible that $B$ plays Randomized Response (with $\epsilon = \ln(D_0/D_1)$). (And if $\rho_0=\rho_1=1$ then $B$ may play any $(p,q)$ on $\ell_2$ or above it whereas $A$ plays $x=y=1$.)
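To make the case analysis concrete, consider an illustrative instance (the numbers are ours, chosen only for demonstration): $D_0=0.6$, $D_1=0.4$, $\rho_0=0.5$ and $\rho_1=0.3$. The $\ell_2$ line is $0.4q = 0.6(1-p)$, whose extreme points in the unit square are $(p,q)=(1,0)$ and $(p,q)=(\tfrac 1 3,1)$. Since $\rho_0>\rho_1$, the BNE has $\sigma_B^*=(1,0)$ and $\sigma_A^*=(1,y)$ for any $y\in[0.3,0.5]$: against $x=1$, the type-$0$ agent has $\rho_0 = 0.5 \geq x+y-1 = y$, so $p=1$ is a best response, while the type-$1$ agent has $\rho_1 = 0.3 \leq y$, so $q=0$ is a best response; conversely, against $(p,q)=(1,0)$ we have $D_0p = 0.6 > D_1(1-q) = 0.4$, forcing $x=1$, and $D_1q = D_0(1-p) = 0$, leaving $A$ indifferent in $y$. $A$'s expected utility is then \[ D_0px + D_0(1-p)(1-y) + D_1qy + D_1(1-q)(1-x) = 0.6 = D_0 . \]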
\end{proof} At the BNE, $B$ plays in a way where the $\hat\ensuremath{t}=0$ signal leads $A$ to play the same way she plays in the benchmark game (with no coupon) -- to always play $\tilde\ensuremath{t}=0$, because ${\bf Pr}[\ensuremath{t} = 0 ~|~ \hat\ensuremath{t}=0] > {\bf Pr}[\ensuremath{t}=1 ~|~ \hat\ensuremath{t}=0]$. However, given the signal $\hat\ensuremath{t}=1$, it holds that ${\bf Pr}[\ensuremath{t}=0 ~|~ \hat\ensuremath{t} =1] = {\bf Pr}[\ensuremath{t}=1 ~|~ \hat\ensuremath{t}=1]$ (since $B$ BNE strategy is on the $\ell_2$ line). In other words, after viewing the $\hat\ensuremath{t}=1$ signal, $A$ has posterior belief on $B$'s type of $(\tfrac 1 2,\tfrac 1 2)$. We comment that if $B$ plays the strategy $(1,0)$, then this last statement is vacuous since the $\hat\ensuremath{t}=1$ signal is never sent. Observe, when $B$ plays any strategy on the $\ell_2$ line, the utility that $A$ gets from using the Nash-strategy of $(1, y)$ is $D_0p +D_0(1-p) = D_0$. In other words, moving from the benchmark game to this more complicated coupon game gives $A$ no additional revenue. In fact, the only agent that gains anything is $B$. In the benchmark game $B$'s utility is $-D_0$. In the coupon game, $B$'s utility is $D_0(\rho_0-1)$ when $\rho_0\geq \rho_1$, or $D_0(\rho_0-1) + D_1(\rho_1-\rho_0)$ when $\rho_1>\rho_0$. } \ifx \fullversion\undefined \else \proofTheoremIdentityMatrix \fi \ifx \fullversion \undefined \fi \subsection{Continuous Coupon Valuations} \label{subsec:diff_continuous_rho} \ifx \fullversion \undefined \fi We now consider the same game with the same payments, but under a different setting. Whereas before we assumed the valuations that the two types of $B$ agents have for the coupon are fixed (and known in advance), we now assume they are not fixed. 
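Before turning to the continuous model, the utility comparison above can be checked on concrete numbers (illustrative only): take $D_0=0.6$ and $\rho_0 = 0.5 \geq \rho_1$. Then \[ u_B^{\rm benchmark} = -D_0 = -0.6, \qquad u_B^{\rm coupon} = D_0(\rho_0-1) = 0.6\cdot(-0.5) = -0.3, \qquad u_A = D_0 = 0.6 \textrm{ in both games,}\] so the entire surplus created by the coupon indeed goes to $B$.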
In this section we assume the existence of a continuous prior over $\rho$, where each type $\ensuremath{t} \in \{0,1\}$ has its own prior, so $\ensuremath{\mathsf{CDF}}_0(x) \stackrel {\rm def} = {\bf Pr}[\rho < x ~|~ t=0]$ with an analogous definition of $\ensuremath{\mathsf{CDF}}_1(x)$. We use $\ensuremath{\mathsf{CDF}}_B$ to denote the cumulative distribution function of the prior over $\rho$ (i.e., $\ensuremath{\mathsf{CDF}}_B(x) = {\bf Pr}[\rho < x] = D_0\ensuremath{\mathsf{CDF}}_0(x)+D_1\ensuremath{\mathsf{CDF}}_1(x)$). We assume the $\ensuremath{\mathsf{CDF}}$ is continuous and so ${\bf Pr}[\rho=y]=0$ for any $y$. Given any $z \geq 0$ we denote by $\ensuremath{\mathsf{CDF}}_B^{-1}(z)$ the set $\{y :~ \ensuremath{\mathsf{CDF}}_B(y)=z\}$. \newcommand{\proofDefferedContinuousValuationsAPX}{ \ifx \fullversion\undefined Due to space constraints, this analysis is deferred to Appendix~\ref{apx_sec:matching_pennies}. \else \ifx \cameraready \undefined \else the full version of the paper. \fi \fi } \newcommand{\matchingPenniesContinuousValuations}{ In every BNE $(\sigma_A^*,\sigma_B^*)$ of the coupon game with identity payments, where $D_0 \neq D_1$ and the valuations of the $B$ agents for the coupon are taken from a continuous distribution over $[0,\infty)$, the BNE-strategies are as follows. \begin{itemize} \item Agent $A$ always plays $\tilde\ensuremath{t}=0$ after viewing the $\hat\ensuremath{t}=0$ signal (i.e., ${\bf Pr}[\sigma_A^*(0) =0]=1$); and plays $\tilde\ensuremath{t}=1$ after viewing the $\hat\ensuremath{t}=1$ signal with probability $y^*$ (i.e., ${\bf Pr}[\sigma_A^*(1)=1] = y^*$), where $y^*$ is any value in $\ensuremath{\mathsf{CDF}}_B^{-1}(D_1)$ when ${\bf Pr}[\rho<1]\geq D_1$ and $y^*=1$ when ${\bf Pr}[\rho<1]<D_1$. \item Agent $B$ reports truthfully (sends the signal $\hat\ensuremath{t}=\ensuremath{t}$) whenever her valuation for the coupon is greater than $y^*$, and lies (sends the signal $\hat\ensuremath{t}=1-\ensuremath{t}$) otherwise.
That is, for every $t \in \{0, 1\}$ and $\rho \in [0,\infty)$, we have that if $\rho > y^*$ then ${\bf Pr}[\sigma_B^*(\ensuremath{t}) = \ensuremath{t}]=1$ and if $\rho < y^*$ then ${\bf Pr}[\sigma_B^*(\ensuremath{t})= \ensuremath{t}]=0$. \end{itemize}} \begin{theorem} \label{thm:matching-pennies-continuous-valuations} \matchingPenniesContinuousValuations \end{theorem} \proofDefferedContinuousValuationsAPX \newcommand{\continuousCouponValuations}{ \begin{proof} We assume $B$'s parameters are sampled as follows. First, we pick a type $\ensuremath{t}$ s.t. ${\bf Pr}[\ensuremath{t}=1]=D_1$ and ${\bf Pr}[\ensuremath{t}=0] = D_0$. Then, given $\ensuremath{t}$ we sample $\rho\leftarrow \ensuremath{\mathsf{PDF}}_\ensuremath{t}$, where ${\bf Pr}[\rho \leq 0]=0$ for both types. And while $A$ knows $D_0,D_1, \ensuremath{\mathsf{PDF}}_0$ and $\ensuremath{\mathsf{PDF}}_1$, $A$ does not know $B$'s realized type and valuation. We apply the same notation from before, denoting a strategy $\sigma_B$ of $B$ using $p$ and $q$ (where $p = {\bf Pr}[\sigma_B(0)=0]$ and $q={\bf Pr}[\sigma_B(1)=1]$), and denoting a strategy $\sigma_A$ of $A$ using $x$ and $y$ (where $x={\bf Pr}[\sigma_A(0)=0]$ and $y={\bf Pr}[\sigma_A(1)=1]$). 
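Throughout the proof it may help to keep a concrete instance in mind (ours, purely illustrative): suppose $\rho$ is uniform on $[0,1]$ independently of the type, so $\ensuremath{\mathsf{CDF}}_0(x)=\ensuremath{\mathsf{CDF}}_1(x)=\ensuremath{\mathsf{CDF}}_B(x)=x$ on $[0,1]$, and take $D_1 = 0.4$. Then ${\bf Pr}[\rho<1] = 1 \geq D_1$, so the theorem asserts that $y^*$ is the unique point of $\ensuremath{\mathsf{CDF}}_B^{-1}(D_1)$, namely $y^*=0.4$: agent $A$ plays $\tilde\ensuremath{t}=0$ on the $\hat\ensuremath{t}=0$ signal and plays $\tilde\ensuremath{t}=1$ with probability $0.4$ on the $\hat\ensuremath{t}=1$ signal, while a $B$ agent lies exactly when her realized valuation satisfies $\rho < 0.4$.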
The utility function of $B$ remains the same: \[ u_{B,0,\rho} = p(\rho - x)+(1-p)(-1+y)~,~\qquad u_{B,1,\rho} = q(\rho-y) + (1-q)(-1+x) \] So $B$'s best response to any strategy of $(x,y)$ of $A$ is given by \[ \sigma_B^{br}(\rho,\ensuremath{t}) = \begin{cases} \ensuremath{t}, & \textrm{ if } \rho > x+y-1 \cr 1-\ensuremath{t}, & \textrm{ if } \rho < x+y-1 \end{cases}\] We call such a strategy a \emph{threshold strategy} characterized by a parameter $T$, where any agent whose $\rho < T$ plays $\hat\ensuremath{t}=1-\ensuremath{t}$ and any agent with $\rho >T$ plays $\hat\ensuremath{t}=\ensuremath{t}$.\footnote{Since $\rho$ is sampled from a continuous distribution, then the probability of the event $\rho=T$ is $0$.} Clearly, in any BNE, $B$ follows a threshold strategy for some value of $T$. Therefore, since $A$'s BNE strategy is best response to $B$'s BNE strategy, it suffices to consider $A$'s best response against a threshold strategy. Given that $B$ follows a threshold strategy with threshold $T$ we have that $A$'s utility function is \begin{eqnarray*} & u_A= & x D_0 (1-\ensuremath{\mathsf{CDF}}_0(T)) + (1-x)D_1 \ensuremath{\mathsf{CDF}}_1(T) \cr && + yD_1 (1-\ensuremath{\mathsf{CDF}}_1(T)) + (1-y)D_0 \ensuremath{\mathsf{CDF}}_0(T)\cr && = \ensuremath{\mathsf{CDF}}_B(T) + x(D_0 - \ensuremath{\mathsf{CDF}}_B(T)) + y(D_1-\ensuremath{\mathsf{CDF}}_B(T)) \end{eqnarray*} where we use the notation $\ensuremath{\mathsf{CDF}}_B = D_0 \ensuremath{\mathsf{CDF}}_0 + D_1\ensuremath{\mathsf{CDF}}_1$. As $A$ maximizes her strategy, we have that $A$ sets $x>0$ only if $\ensuremath{\mathsf{CDF}}_B(T)\leq D_0$. Similarly, $y>0$ only if $\ensuremath{\mathsf{CDF}}_B(T)\leq D_1$. Since $D_1\leq D_0$ we only have three cases to consider. \begin{itemize} \item If $\ensuremath{\mathsf{CDF}}_B(T) < D_1$: In this case $A$'s best response is to set $x=y=1$ and $B$'s best-response to $(1,1)$ is to set the threshold parameter $T = x+y-1 = 1$. 
(So every $B$ agent with $\rho<1$ deterministically sends the signal $\hat\ensuremath{t} = 1-\ensuremath{t}$ and any $B$ agent with $\rho > 1$ sends the signal $\hat\ensuremath{t}=\ensuremath{t}$.) Clearly, if it holds that $\ensuremath{\mathsf{CDF}}_B(1) < D_1$, i.e., that the probability that a random $B$ agent has $\rho\leq 1$ is less than $D_1$, then we have a BNE. \item If $\ensuremath{\mathsf{CDF}}_B(T) > D_1$: In this case $A$'s best response sets $y=0$ and $x\in [0,1]$. As $B$ is playing best response to $A$'s strategy, the threshold parameter is set to $T=x-1 \leq 0$. (And since all $B$ agents have coupon valuation $\rho>0$, they all deterministically send the signal $\hat\ensuremath{t}=\ensuremath{t}$.) But for such $T$ we have that $\ensuremath{\mathsf{CDF}}_B(T) \leq \ensuremath{\mathsf{CDF}}_B(0) = 0 < D_1$, so we get an immediate contradiction. \item If $\ensuremath{\mathsf{CDF}}_B(T) = D_1$: In this case $A$ sets $x = 1$ and is indifferent to the choice of $y$. Observe that $B$'s best response to $A$'s strategy of $(1,y)$ is to set the threshold parameter to $T=y$. We have that this is indeed a BNE if $y \in \ensuremath{\mathsf{CDF}}_B^{-1}(D_1)$. Assuming the inverse of $\ensuremath{\mathsf{CDF}}_B$ is unique, $\sigma_A^*=(1,y^*)$ with $y^*=\ensuremath{\mathsf{CDF}}_B^{-1}(D_1)$ is $A$'s BNE strategy, and $B$'s BNE strategy is a threshold strategy with the threshold parameter set to $T=y^*$. We comment that in the case where $D_0 = D_1$ and $A$ is indifferent to the choice of $x$ as well, the BNE strategy of $A$ is defined using any $x^*,y^* \in [0,1]$ that satisfy $x^*+y^*-1 \in \ensuremath{\mathsf{CDF}}_B^{-1}(D_1)$.
\end{itemize} \cut{ So for a fixed $x,y$, we have that ${\bf Pr}[\ensuremath{t} = 0 \textrm{ and } \rho < x+y-1] = D_0 \int_0^{x+y-1} \ensuremath{\mathsf{PDF}}_0(t) dt = D_0 \ensuremath{\mathsf{CDF}}_0(x+y-1)$ and similarly ${\bf Pr}[\ensuremath{t}=1 \textrm{ and } \rho<x+y-1] = D_1 \ensuremath{\mathsf{CDF}}_1(x+y-1)$. Since $\rho$ is sampled from a continuous distribution, we have that ${\bf Pr}[\rho=x+y-1] = 0$. Therefore, for a given $(x,y)$, the $A$ agent can quantify the probability of getting a $\hat\ensuremath{t}=0$ signal and a $\hat\ensuremath{t}=1$ signal. Hence, the utility of $A$ is \begin{eqnarray*} & u_A= & x D_0 (1-\ensuremath{\mathsf{CDF}}_0(x+y-1)) + (1-x)D_1 \ensuremath{\mathsf{CDF}}_1(x+y-1) \cr && + yD_1 (1-\ensuremath{\mathsf{CDF}}_1(x+y-1)) + (1-y)D_0 \ensuremath{\mathsf{CDF}}_0(x+y-1)\cr && = xD_0+yD_1 + D_0(-x+1-y)\ensuremath{\mathsf{CDF}}_0(x+y-1) + D_1(-y+1-x)\ensuremath{\mathsf{CDF}}_1(x+y-1) \cr && = xD_0+yD_1 - (x+y-1)\ensuremath{\mathsf{CDF}}_B(x+y-1) \cr && = 1- D_1x - D_0y + (x+y-1) \left(1-\ensuremath{\mathsf{CDF}}_B(x+y-1)\right) \end{eqnarray*} with $\ensuremath{\mathsf{CDF}}_B = D_0 \ensuremath{\mathsf{CDF}}_0 + D_1\ensuremath{\mathsf{CDF}}_1$. To analyze what it $A$'s utility-maximizing strategy, we need to solve the following problem. \begin{eqnarray*} & \textrm{maximize} & 1- D_1x - D_0y + z(1-\ensuremath{\mathsf{CDF}}_B(z)) \cr & \textrm{s.t.} & x,y \in [0,1] \cr && z = x+y-1 \end{eqnarray*} Given that $(x^*,y^*,z^*)$ maximizes the above quantity, we use the fact that $D_0 \geq D_1$ to deduce that: if $1+z^*<1$ then it must be that $x^*=1+z^*$ and $y^*=0$, and if $1+z^*\geq 1$ then it must hold that $x^*=1$ and $y = z^*$. Therefore, this maximization problem is equivalent to maximizing a univariate function \begin{eqnarray*} & \textrm{maximize} & \begin{cases} &1- D_1(1+z) + z(1-\ensuremath{\mathsf{CDF}}_B(z)) \cr &\textrm{s.t. 
} z\in[-1,0] \end{cases} \textrm{ or } \begin{cases} &1- D_1 - D_0z + z(1-\ensuremath{\mathsf{CDF}}_B(z)) \cr & \textrm{s.t. } z\in [0,1]\end{cases} \end{eqnarray*} Since we assume that no agent has negative valuation for the coupon, we have that $\ensuremath{\mathsf{CDF}}_B(z) = 0$ if $z<0$. So the first optimization problem turns out to maximize $D_0 - D_1 z + z = D_0 + D_0z$ on the range $z \in [-1,0]$. As this function increases with $z$ it is easy to see that the maximum is $D_0$. Therefore it suffices to consider the latter optimization problem \begin{eqnarray*} & \textrm{maximize} & D_0 - D_0z + z(1-\ensuremath{\mathsf{CDF}}_B(z)) = D_0 + z(D_1 - \ensuremath{\mathsf{CDF}}_B(z))\cr & \textrm{s.t.} & z \in [0,1] \end{eqnarray*} The maximum here is $\geq D_0$ as $z=0$ gives a value of $D_0$. This makes intuitive sense as $A$ can always ignore the signal $\hat\ensuremath{t}$ and gain $D_0$ by always playing $\tilde\ensuremath{t}=0$. Given that $z^*$ maximized this function, $A$ sets $(x^*,y^*) = (1,z^*)$. (Unless $D_0=D_1=\tfrac 1 2$ in which case $A$ can set any $(x^*,y^*)$ s.t. $x^*+y^*=1+z^*$.) Any $B$ agent whose valuation for the coupon satisfies $\rho > 1+z^*$ signals truthfully $\hat\ensuremath{t}=\ensuremath{t}$ and any agent whose valuation is $\rho < 1+z^*$ signals $\hat\ensuremath{t}=1-\ensuremath{t}$. The function $z(D_1-\ensuremath{\mathsf{CDF}}_B(z))$ is a differentiable function, whose derivative is $D_1 - \ensuremath{\mathsf{CDF}}_B(z) - z\ensuremath{\mathsf{PDF}}_B(z)$. The derivative for $z=0$ is positive and it is a decreasing function of $z$. So we have that if it exists, then the point $z^*\in[0,1]$ s.t. $\ensuremath{\mathsf{CDF}}_B(z^*)+z^*\ensuremath{\mathsf{PDF}}_B(z^*) = D_1$ maximizes $A$'s utility, otherwise we set $z^*=1$. In any case, $z^*>0$ so $A$ gains strictly more than $D_0$, which is her gain in the straw-man game. I.e., in the continuous valuations model, $A$ is better off playing the coupon game. 
} \end{proof} Observe that in this game, from $A$'s perspective, without knowing the realized value of $\rho$, it appears that $B$ is playing a randomized strategy. Furthermore, should the coupon valuation and the type be chosen independently (i.e. $\ensuremath{\mathsf{PDF}}_0=\ensuremath{\mathsf{PDF}}_1$) then $A$ views $B$'s strategy as Randomized Response --- since for a randomly chosen $\rho$ it holds that ${\bf Pr}[\sigma_B(\rho,0) = 0] = {\bf Pr}[\sigma_B(\rho,1) = 1]$. In that case the behavior of $B$ preserves $\epsilon$-differential privacy for \[ \epsilon = \ln \left( \frac {1-\ensuremath{\mathsf{CDF}}_B(y^*)}{\ensuremath{\mathsf{CDF}}_B(y^*)}\right) = \ln\left(\frac {D_0}{D_1}\right) \] \cut{ One can plug in various functions as $\ensuremath{\mathsf{PDF}}$s. In particular, when $\rho$ is sampled uniformly from the $[0,1]$-interval (regardless of the value of $\ensuremath{t}$) then $z^* = \tfrac 1 2 D_1$, so $\epsilon = \ln( \tfrac{2-D_1}{D_1})$. When $\rho$ is sampled from the exponential distribution, i.e. $\ensuremath{\mathsf{PDF}}(z) = e^{-z}$, then $z^*$ is the value satisfying $(1-z)e^{-z} = D_0$. } } \continuousCouponValuations \cut{ \subsection{The Game with a General $2\times 2$ Payment Matrix} \label{subsec:arbitrary_matrix} \os{Other than the fact that this is a good preview for the next section, don't really see why I should put it here. The results are nice, but uninteresting.} Finally, we consider an extension of the game, where we replace the identity-matrix payments with general payments. Indeed, there is no intuitive reason why, from $A$'s perspective, realizing that $B$ has type $\ensuremath{t}=0$ is worth just as much as finding $B$ has type $\ensuremath{t}=1$. After all, it could be that type $\ensuremath{t}=0$ are the healthy people and $\ensuremath{t}=1$ represents having some medical condition, so finding a person of $\ensuremath{t}=1$ should be more worthwhile for $A$.
In our new payment matrix, we still require that $A$ gains utility if she correctly guesses $B$'s type, and loses if she accuses $B$ of being of the wrong type. In other words, we consider payments of the form \[ M = \left[ \begin{array}{c|c} M_{0,0} & -M_{0,1}\cr \hline -M_{1,0} & M_{1,1} \end{array}\right] \] where $A$ is the row player and $B$ is the column player. We assume $M_{0,0}, M_{0,1}, M_{1,0}, M_{1,1}$ are all non-negative. \cut{ For example, we can imagine that being of type-$1$ is rather embarrassing. Therefore, $M_{1,1}$ can be much larger than $M_{0,0}$, but similarly $M_{1,0}$ is also probably larger than $M_{0,1}$. (Falsely accusing $B$ of being of the embarrassing type is costlier than falsely accusing a $B$ of type $1$ of belonging to the non-embarrassing majority.) And so, as a leading example, we pick some parameter $a>1$ and think of $M$ as the payment matrix $\left[ \begin{array}{c | c} 1 & -1\cr \hline -a & a \end{array}\right]$ } \paragraph{The benchmark game.} First, consider the game where $B$ has no moves, and $A$ just gets to accuse $B$ of belonging to type $\tilde\ensuremath{t}$. $A$ knows that $B$ plays the $0$-column w.p. $D_0$ and the $1$-column w.p. $D_1$, and so she plays the row \[ \tilde\ensuremath{t}=\arg\max\{ D_0 M_{0,0} - D_1 M_{0,1}, D_1 M_{1,1} - D_0 M_{1,0} \} \] We denote this row as $r^*$. An equivalent formulation is that $r^*=0$ if $D_0(M_{0,0}+M_{1,0}) > D_1(M_{0,1}+M_{1,1})$ and $r^*=1$ otherwise. \paragraph{The coupon game.} The game itself remains the same. \begin{flalign*} \textrm{For $B$: } & ~~p = {\bf Pr}[\hat\ensuremath{t}= 0 ~|~ \ensuremath{t} =0] \textrm{, and }~ q = {\bf Pr}[\hat\ensuremath{t}=1 ~|~ \ensuremath{t} = 1] \cr \textrm{For $A$: } & ~~x ={\bf Pr}[\tilde\ensuremath{t} = 0 ~|~ \hat\ensuremath{t} = 0] \textrm{, and }~ y={\bf Pr}[\tilde\ensuremath{t}=1~|~\hat\ensuremath{t}=1] \end{flalign*} The new utility functions are generalizations of the utility functions from before.
\begin{flalign*} u_A =& D_0\left[ M_{0,0}p x - M_{1,0} p(1-x) + M_{0,0} (1-p) (1-y) - M_{1,0} (1-p) y \right] \cr &+ D_1\left[ M_{1,1}q y - M_{0,1} q(1-y) + M_{1,1} (1-q) (1-x) - M_{0,1} (1-q) x \right] \cr =& x \left( M_{0,0} D_0 p -M_{0,1}D_1(1-q)\right) + (1-x) \left(-M_{1,0}D_0p + M_{1,1} D_1 (1-q)\right) \cr &+ y \left( -M_{1,0}D_0(1-p) + M_{1,1}D_1q \right) + (1-y) \left( M_{0,0}D_0(1-p)-M_{0,1}D_1q \right) \cr u_B =& D_0 u_{B,0} + D_1 u_{B,1} \textrm{, where we define }\cr u_{B,0} =& p ( \rho_0 - M_{0,0}x + M_{1,0}(1-x) ) + (1-p) ( -M_{0,0}(1-y) + M_{1,0} y ) \cr u_{B,1} =& q ( \rho_1 - M_{1,1}y + M_{0,1}(1-y) ) + (1-q) ( -M_{1,1}(1-x) + M_{0,1} x ) \end{flalign*} And so, $x$ is determined by the point $(p,q)$ being above or below the line \[\ell_0: (M_{0,0} + M_{1,0})D_0p = (M_{0,1}+M_{1,1})D_1(1-q)\] and $y$ is determined by the point $(p,q)$ being above or below the line \[\ell_1: (M_{0,0} + M_{1,0})D_0(1-p) = (M_{0,1}+M_{1,1})D_1q\] Again, the two lines are parallel (or identical), with $(0,1)\in \ell_0$ and $(1,0)\in \ell_1$. Yet, this time, it is not clear which line is above the other. Looking at $p=1$, we have that the point $(1, 1- \frac {D_0(M_{0,0}+M_{1,0})} {D_1(M_{0,1}+M_{1,1})})$ is on the $\ell_0$-line, and the point $(1,0)$ is on the $\ell_1$-line. So, when $r^* = 0$, the $\ell_0$-line is below the $\ell_1$-line, and vice versa. In other words, the $\ell_{r^*}$-line is below the $\ell_{1-r^*}$-line. The two parallel lines partition the $[0,1]\times[0,1]$ square into three regions: below both lines (where $A$ sets $x+y=0$), between the two lines (where $A$ sets $x+y=1$), and above both lines (where $A$ sets $x+y=2$).
On the lower of the two lines $A$ may set $x+y$ to be any number in $[0,1]$, and on the higher of the two lines $A$ may set $x+y$ to be any number in $[1,2]$.\footnote{Here we use the assumption that all of $\{M_{0,0}, M_{0,1}, M_{1,0}, M_{1,1}\}$ are positive.} Similarly to before, $B$'s best response is determined by the relationship between $\rho_t$ and $x+y-1$. The relationship $\rho_0 \gtrless (M_{0,0}+M_{1,0})(x+y-1)$ determines whether $p=1$ or $p=0$; the relationship $\rho_1 \gtrless (M_{0,1}+M_{1,1})(x+y-1)$ determines whether $q=1$ or $q=0$. Observe that when $x+y \leq 1$ both agents prefer to take the coupon (we assume the valuation for the coupon is positive), and when $x+y = 2$ only agents with a large enough valuation for the coupon take it. \paragraph{Analyzing the Game in the case of fixed valuations.} We first consider the case where the valuations of the two types of $B$ agent for the coupon are fixed and known in advance. In this case, the analysis mimics the analysis in Section~\ref{subsec:coupon_game}. First consider the simple case where both $\rho_0 \geq M_{0,0}+M_{1,0}$ and $\rho_1 \geq M_{0,1}+M_{1,1}$. In this case, it is clear that $p=q=1$ and $x=y=1$ is the only BNE (if $\rho_0 = M_{0,0}+M_{1,0}$ or $\rho_1 = M_{0,1}+M_{1,1}$ then $B$ may play any $p$ or any $q$, as long as $x=y=1$ is a best response to $(p,q)$; that is, as long as $(p,q)$ lies above both lines). Now suppose that at least one type of agent $b\in \{0,1\}$ has valuation $0 < \rho_b < M_{0,b}+M_{1,b}$. In such a case, and just like before, we claim that the BNE strategy of $B$, $(p^*,q^*)$, must lie on the highest line. Indeed, everywhere else, $A$'s best response is to set $(x,y)$ s.t. either $x+y=2$ or $x+y \leq 1$; $B$'s best response to $x+y=2$ is to set either $p$ or $q$ to $0$, and for any $x+y\leq 1$ it is to set both $p=1$ and $q=1$. Therefore, $A$'s strategy is to set $x+y$ s.t. either $B$ of type-$0$ is indifferent, or $B$ of type-$1$ is indifferent.
In other words, either \[ x+y = \frac {\rho_0}{M_{0,0}+M_{1,0}} + 1 ~\textrm{, or }~~x+y = \frac {\rho_1}{M_{0,1}+M_{1,1}} + 1\] $A$ makes the type-$b$ agent indifferent by setting one of $x$ or $y$ to be $1$ (the one corresponding to~$r^*$) and the other to $\frac {\rho_b}{M_{0,b}+M_{1,b}}$. That is, $A$'s strategy sets the $B$ agent of type-$b$ to be indifferent, whereas type-$(1-b)$ plays $\begin{cases} 1&\textrm{, if } \frac {\rho_{1-b}}{\rho_b} > \frac {M_{0,1-b}+M_{1,1-b}}{M_{0,b}+M_{1,b}} \cr \textrm{Anything on }[0,1]&\textrm{, if } \frac {\rho_{1-b}}{\rho_b}= \frac {M_{0,1-b}+M_{1,1-b}}{M_{0,b}+M_{1,b}} \cr 0&\textrm{, if } \frac {\rho_{1-b}}{\rho_b}< \frac {M_{0,1-b}+M_{1,1-b}}{M_{0,b}+M_{1,b}} \end{cases}$.\\ If the middle condition (the equality) holds, then any $(p,q)$ on the highest line is a BNE strategy for $B$. In that case, it is possible that $B$ plays Randomized Response --- with $(p,q)$ being on the intersection of the line $p=q$ with $\ell_{1-r^*}$. When the equality does not hold, the BNE is characterized as follows. If $\ell_1$ is the highest line, then \[ \begin{cases} (x=1, y=\frac {\rho_1}{M_{0,1}+M_{1,1}}), (p=1,q=0), &\textrm{ if } \frac {\rho_{0}}{\rho_1} > \frac {M_{0,0}+M_{1,0}}{M_{0,1}+M_{1,1}} \cr (x=1, y=\frac {\rho_0}{M_{0,0}+M_{1,0}}), (p=1 - \frac {D_1}{D_0} \frac{M_{0,1}+M_{1,1}}{M_{0,0}+M_{1,0}},q=1), &\textrm{ o/w } \end{cases}\] If $\ell_0$ is the highest line, then \[ \begin{cases} (x=\frac {\rho_1}{M_{0,1}+M_{1,1}}, y=1), (p=1,q=1- \frac {D_0(M_{0,0}+M_{1,0})} {D_1(M_{0,1}+M_{1,1})}), &\textrm{ if } \frac {\rho_{0}}{\rho_1} > \frac {M_{0,0}+M_{1,0}}{M_{0,1}+M_{1,1}} \cr (x=\frac {\rho_0}{M_{0,0}+M_{1,0}},y=1), (p=0,q=1), &\textrm{ o/w } \end{cases}\] \paragraph{Analyzing the game in the case of continuous valuations.} As before, we continue the analysis with coupon valuations sampled from a distribution.
Same as in Section~\ref{subsec:diff_continuous_rho}, we denote by $\ensuremath{\mathsf{PDF}}_0$ and $\ensuremath{\mathsf{PDF}}_1$ the PDFs of the distributions from which the valuations of type-$0$ and type-$1$ agents are sampled. We assume of course that $\ensuremath{\mathsf{PDF}}_0(x)=\ensuremath{\mathsf{PDF}}_1(x)=0$ for any $x<0$. As before, $x = {\bf Pr}[\tilde\ensuremath{t}=0 | ~\hat\ensuremath{t}=0]$ and $y = {\bf Pr}[\tilde\ensuremath{t} = 1 |~ \hat\ensuremath{t}=1]$. Again, given a $B$ agent of type either $0$ or $1$ and coupon valuation $\rho$, its utility function is \begin{eqnarray*} u_{B,0} =& p ( \rho - M_{0,0}x + M_{1,0}(1-x) ) + (1-p) ( -M_{0,0}(1-y) + M_{1,0} y ) \cr u_{B,1} =& q ( \rho - M_{1,1}y + M_{0,1}(1-y) ) + (1-q) ( -M_{1,1}(1-x) + M_{0,1} x ) \end{eqnarray*} So $B$ chooses whether or not to give the signal $\hat\ensuremath{t}=\ensuremath{t}$ based on $\rho \stackrel {?} \gtrless (M_{0,\ensuremath{t}}+M_{1,\ensuremath{t}})(x+y-1)$. We thus denote for $\ensuremath{t}\in\{0,1\}$ the threshold value $T_\ensuremath{t} = (M_{0,\ensuremath{t}}+M_{1,\ensuremath{t}})(x+y-1)$, and we have \begin{eqnarray*} {\bf Pr}[\hat\ensuremath{t} = 1 | \ensuremath{t} = 0] & = \ensuremath{\mathsf{CDF}}_0(T_0)\cr {\bf Pr}[\hat\ensuremath{t} = 0 | \ensuremath{t} = 1] & = \ensuremath{\mathsf{CDF}}_1(T_1)\end{eqnarray*} We can now formulate $A$'s utility from this game.
\begin{eqnarray*} & u_A & = D_0 \left(\ensuremath{\mathsf{CDF}}_0(T_0) (1-y) + (1-\ensuremath{\mathsf{CDF}}_0(T_0))x\right) + D_1\left( \ensuremath{\mathsf{CDF}}_1(T_1)(1-x) + (1-\ensuremath{\mathsf{CDF}}_1(T_1))y \right) \cr && = x D_0 + yD_1 - (x+y-1) \left(D_0\ensuremath{\mathsf{CDF}}_0(T_0) + D_1\ensuremath{\mathsf{CDF}}_1(T_1) \right) \cr && = 1 - y D_0 - x D_1 + (x+y-1) \left( D_0(1-\ensuremath{\mathsf{CDF}}_0(T_0)) + D_1(1-\ensuremath{\mathsf{CDF}}_1(T_1)) \right) \end{eqnarray*} Therefore, as $A$ maximizes her own utility, the optimization objective depends on the variable $z = x+y-1$. \begin{eqnarray*} &\textrm{maximize} & 1 - y D_0 - x D_1 + z \left( D_0(1-\ensuremath{\mathsf{CDF}}_0(T_0)) + D_1(1-\ensuremath{\mathsf{CDF}}_1(T_1)) \right) \cr &\textrm{s.t.} & z = x+y-1\cr && x,y\in[0,1] \end{eqnarray*} The same analysis from before gives that we can consider just the case where $z\in [0,1]$ and $x=1, y=z$. This leaves us with \begin{eqnarray*} &\textrm{maximize}_{z\in[0,1]} & u_A(z) = D_0 - z D_0 + z \left( D_0(1-\ensuremath{\mathsf{CDF}}_0(T_0)) + D_1(1-\ensuremath{\mathsf{CDF}}_1(T_1)) \right) \cr &\Leftrightarrow \textrm{maximize}_{z\in[0,1]} & u_A(z) = D_0 + z D_1 - z \left( D_0\ensuremath{\mathsf{CDF}}_0(T_0) + D_1\ensuremath{\mathsf{CDF}}_1(T_1) \right) \cr \end{eqnarray*} We therefore define the function $F_B(x) = D_0 \ensuremath{\mathsf{CDF}}_0\big( (M_{0,0}+M_{1,0})\cdot x \big) + D_1 \ensuremath{\mathsf{CDF}}_1\big( (M_{0,1}+M_{1,1})\cdot x \big)$. At this point, we have the same analysis as before, except that the function $\ensuremath{\mathsf{CDF}}_B$ is replaced with $F_B$. In particular, $A$ finds some value $z$ s.t. from $A$'s perspective $B$ plays Randomized Response with the same $\epsilon$ as before.
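Concretely, $A$'s problem $\max_{z\in[0,1]} D_0 + zD_1 - zF_B(z)$ can be solved by a simple grid search (a sketch under our own illustrative choices: both valuation distributions Uniform$[0,1]$ and all entries of $M$ equal to $1$):

```python
# Sketch: grid-search A's objective u_A(z) = D0 + z*D1 - z*F_B(z), where
# F_B(x) = D0*CDF_0((M00+M10)*x) + D1*CDF_1((M01+M11)*x).
# Illustrative (our own) parameters: Uniform[0,1] CDFs, all M entries 1.
def cdf_uniform(t):
    return min(max(t, 0.0), 1.0)

D0, D1 = 0.6, 0.4
M00, M01, M10, M11 = 1.0, 1.0, 1.0, 1.0

def F_B(x):
    return D0 * cdf_uniform((M00 + M10) * x) + D1 * cdf_uniform((M01 + M11) * x)

def u_A(z):
    return D0 + z * D1 - z * F_B(z)

z_best = max((i / 10000 for i in range(10001)), key=u_A)

assert abs(z_best - 0.1) < 1e-3   # here u_A(z) = D0 + z*(D1 - 2z) on [0, 1/2]
assert u_A(z_best) > D0           # A again strictly improves on D0
```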
\cut{ Differentiating: \begin{flalign*} u_A' & = D_1 -D_0\ensuremath{\mathsf{CDF}}_0(T_0)-D_1\ensuremath{\mathsf{CDF}}_1(T_1) - D_0(M_{0,0}+M_{1,0})z\ensuremath{\mathsf{PDF}}_0(T_0) - D_1(M_{0,1}+M_{1,1})z\ensuremath{\mathsf{PDF}}_1(T_1)\cr & = D_1(1 - \ensuremath{\mathsf{CDF}}_1(T_1) - T_1\ensuremath{\mathsf{PDF}}_1(T_1)) -D_0(\ensuremath{\mathsf{CDF}}_0(T_0)+ T_0 \ensuremath{\mathsf{PDF}}_0(T_0))\cr & = D_1(1 - \ensuremath{\mathsf{CDF}}_1(T_1) - T_1\ensuremath{\mathsf{PDF}}_1(T_1)) +D_0(1-\ensuremath{\mathsf{CDF}}_0(T_0)-T_0 \ensuremath{\mathsf{PDF}}_0(T_0)) -D_0 \end{flalign*} Solving for $u_A' = 0$ we have that \[ (1-\ensuremath{\mathsf{CDF}}_0(T_0) - T_0\ensuremath{\mathsf{PDF}}_0(T_0))(1-\ensuremath{\mathsf{CDF}}_1(T_1) - T_1\ensuremath{\mathsf{PDF}}_1(T_1))=1\] } } \ifx \fullversion \undefined \fi \ifx \fullversion \undefined \else \fi \section{The Coupon Game with an Opt Out Strategy} \label{sec:opt-out-possible} \ifx \fullversion \undefined \fi In this section, we consider a version of the game of Section~\ref{sec:matching_pennies}. The revised game is very similar to the original, except for $A$'s ability to ``opt out'' and not guess $B$'s type. In this section, we consider the most general form of matrix payments. We replace the identity-matrix payments with a general payment matrix $M$ of the form $M = \left[ \begin{array}{c|c} M_{0,0} & -M_{0,1}\cr \hline -M_{1,0} & M_{1,1} \end{array}\right]$ where the $(i,j)$ entry of $M$ corresponds to $A$ guessing $\tilde\ensuremath{t}=i$ when $B$'s true type is $\ensuremath{t}=j$, and so $B$ pays $A$ the amount detailed in the $(i,j)$-entry. We assume $M_{0,0}, M_{0,1}, M_{1,0}, M_{1,1}$ are all non-negative. Indeed, when previously considering the identity-matrix payments, we assumed that, for $A$, realizing that $B$ has type $\ensuremath{t}=0$ is worth just as much as finding that $B$ has type $\ensuremath{t}=1$.
But it might be the case that finding a person of type $\ensuremath{t}=1$ should be more worthwhile for $A$. For example, type $\ensuremath{t}=1$ (the minority, since we always assume $D_0\geq D_1$) may represent having some embarrassing medical condition while type $\ensuremath{t}=0$ represents not having it. Therefore, $M_{1,1}$ can be much larger than $M_{0,0}$, but similarly $M_{1,0}$ is probably larger than $M_{0,1}$. (Falsely accusing $B$ of being of the embarrassing type is costlier than falsely accusing a $B$ of type $1$ of belonging to the non-embarrassing majority.) Our new payment matrix still motivates $A$ to find out $B$'s true type --- $A$ gains utility by correctly guessing $B$'s type, and loses utility by accusing $B$ of being of the wrong type. \paragraph{The ``strawman'' game.} First, consider a simple game where $B$ makes no move ($A$ offers no coupon) and $A$ tries to guess $B$'s type without getting any signal from $B$. Then $A$ has three possible pure strategies: (i) guess that $B$ is of type $0$; (ii) guess that $B$ is of type $1$; and (iii) guess nothing. In expectation, the outcome of option (i) is $D_0 M_{0,0}-D_1 M_{0,1}$ and the outcome of option (ii) is $D_1 M_{1,1} - D_0 M_{1,0}$. If the parameters of $M$ are set such that both options are negative then $A$'s preferred strategy is to opt out and gain $0$. We assume throughout this section that the above indeed holds. (Intuitively, this assumption reflects the fact that we do not make assumptions about people's type without first getting some information about them.) So we have \ifx \fullversion \undefined \fi \begin{eqnarray} &&\frac {M_{0,0}}{M_{0,1}} < \frac {D_1}{D_0} \textrm{ , }\qquad \textrm{ and }~~~ \frac {M_{1,1}}{M_{1,0}} < \frac {D_0}{D_1} \label{eq:strawman_game_assumptions} \end{eqnarray} A direct (and repeatedly used) corollary of Equation~\eqref{eq:strawman_game_assumptions} is that $\tfrac {M_{0,0}}{M_{0,1}} < \tfrac {M_{1,0}}{M_{1,1}}$.
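For instance (a parameter choice of our own), the following checks that both accusation options indeed lose in expectation, together with the two equivalent ratio conditions and the corollary:

```python
# Our own example satisfying the strawman-game assumptions:
# both outright accusations have negative expected payoff for A.
D0, D1 = 0.6, 0.4
M00, M01, M10, M11 = 1.0, 2.0, 2.0, 1.0

assert D0 * M00 - D1 * M01 < 0          # option (i): accuse type 0, loses
assert D1 * M11 - D0 * M10 < 0          # option (ii): accuse type 1, loses
# Equivalent ratio form of the assumptions, and the corollary used repeatedly:
assert M00 / M01 < D1 / D0 and M11 / M10 < D0 / D1
assert M00 / M01 < M10 / M11            # follows by multiplying the two ratios
```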
\paragraph{The full game.} We now give the formal description of the game. \begin{enumerate} \addtocounter{enumi}{-1} \item $B$'s type, denoted $\ensuremath{t}$, is chosen randomly, with ${\bf Pr}[\ensuremath{t}=0]=D_0$ and ${\bf Pr}[\ensuremath{t}=1]=D_1$. \item $B$ reports a type $\hat\ensuremath{t}$ to $A$. $A$ in return gives $B$ a coupon of type $\hat\ensuremath{t}$. \item $A$ chooses whether to accuse $B$ of being of a certain type, or opting out. \begin{itemize} \item If $A$ opts out (denoted as $\tilde\ensuremath{t}=\bot$), then $B$ pays $A$ nothing. \item If $A$ accuses $B$ of being of type $\tilde\ensuremath{t}$ then: if $\tilde\ensuremath{t}=\ensuremath{t}$ then $B$ pays $M_{\ensuremath{t},\ensuremath{t}}$ to $A$, and if $\tilde\ensuremath{t}=1-\ensuremath{t}$ then $B$ pays $-M_{1-\ensuremath{t},\ensuremath{t}}$ to $A$ (or $A$ pays $M_{1-\ensuremath{t},\ensuremath{t}}$ to $B$). \end{itemize} \end{enumerate} Introducing the option to opt out indeed changes significantly the BNE strategies of $A$ and $B$. \newcommand{\NEWhichIsRR}{ If we have that $D_0^2 M_{0,0}M_{1,0} = D_1^2M_{0,1}M_{1,1}$ and the parameters of the game satisfy the following condition: \begin{align} &0 < \rho_1M_{1,0} - \rho_0M_{1,1} < M_{0,1}M_{1,0}-M_{0,0}M_{1,1} \cr &0 < \rho_0M_{0,1} - \rho_1M_{0,0} < M_{0,1}M_{1,0}-M_{0,0}M_{1,1} \label{eq:condition_of_randomized_BNE} \end{align} then the unique BNE strategy of $B$, denote $\sigma_B^*$, is such that $B$ plays Randomized Response: $\tfrac 1 2 \leq {\bf Pr}[\sigma_B^*(0)=0]={\bf Pr}[\sigma_B^*(1)=1] < 1$.} \begin{theorem} \label{thm:NE_which_is_RR} \NEWhichIsRR \end{theorem} \noindent Proving Theorem~\ref{thm:NE_which_is_RR} is the goal of this section. \newcommand{\optoutGameDeferredToAPX}{ \ifx \cameraready \undefined The proof itself is deferred to the next section. We give here, in Table~\ref{tab:Conditions_for_NE} a summary of the various BNEs of this game. 
The six cases detailed in Table~\ref{tab:Conditions_for_NE} cover all possible settings of the game and they are also mutually exclusive (unless some inequality holds as an equality). The notation in Table~\ref{tab:Conditions_for_NE} is consistent with our notation in the analysis of the game. A strategy $\sigma_B$ of agent $B$ is denoted as $(p,q)$ and a strategy $\sigma_A$ of agent $A$ is denoted as $(x_0,x_1,y_0,y_1)$. Formally, we denote $p={\bf Pr}[\sigma_B(0)=0]$ and $q = {\bf Pr}[\sigma_B(1)=1]$; and $x_b = {\bf Pr}[\sigma_A(0)=b]$ and $y_b = {\bf Pr}[\sigma_A(1)=b]$ for $b\in\bits$. (So $A$'s opting out probabilities are $x_\bot = {\bf Pr}[\sigma_A(0)=\bot] = 1-x_0-x_1$ and $y_\bot = {\bf Pr}[\sigma_A(1)=\bot]=1-y_0-y_1$.) \begin{table}[t] \centering \begin{tabular}{ | c | c | c | c |} \hline Case & Condition & $A$'s Strategy & $B$'s strategy \cr No.& & (always: $x_1=y_0=0$) &\cr \hline $1$& $\rho_0 \geq M_{0,0}+M_{1,0}$ and~~~ $\rho_1\geq M_{0,1}+M_{1,1}$ & $(x_0,y_1)=(1,1)$ & $(1,1)$\cr\hline $2$& $\rho_0 \leq M_{0,0}$ and~~~ $\tfrac{\rho_0}{\rho_1}\leq \tfrac{M_{0,0}}{M_{0,1}}$ & $(x_0,y_1) = (\tfrac {\rho_0}{M_{0,0}},0)$ & $P_1=(0,1)$ \cr\hline $3$& $0 \leq \rho_0-M_{0,0}\leq M_{1,0}$ & $(x_0,y_1) = (1,\tfrac {\rho_0-M_{0,0}}{M_{1,0}})$ & $P_2$ \cr & $\rho_1M_{1,0}-\rho_0M_{1,1} \geq M_{0,1}M_{1,0} -M_{0,0}{M_{1,1}}$ & &\cr\hline $4$& $\rho_1 \leq M_{1,1}$ and~~~ $\tfrac{\rho_0}{\rho_1} \geq \tfrac{M_{1,0}}{M_{1,1}}$ & $(x_0,y_1)=(0,\tfrac{\rho_1}{M_{1,1}})$ & $P_3=(1,0)$ \cr\hline $5$& $0 \leq \rho_1 - M_{1,1} \leq M_{0,1}$ & $(x_0,y_1)=(\tfrac {\rho_1-M_{1,1}}{M_{0,1}},1)$ & $P_4$ \cr & $\rho_0M_{0,1}-\rho_1M_{0,0} \geq M_{0,1}M_{1,0}-M_{0,0}M_{1,1}$ & &\cr\hline $6$& $0 \leq \rho_1M_{1,0} - \rho_0M_{1,1} \leq M_{0,1}M_{1,0}-M_{0,0}M_{1,1}$ & See below & $P_5$ \cr & $0 \leq \rho_0M_{0,1} - \rho_1M_{0,0} \leq M_{0,1}M_{1,0}-M_{0,0}M_{1,1}$ & & \cr\hline \end{tabular} \caption{\label{tab:Conditions_for_NE}\small The various conditions under which we 
characterize the BNEs of the Game. We use the notation $P_2 = (1 - \frac {D_1M_{1,1}}{D_0 M_{1,0}} , 1)$, $P_4 = (1, 1-\frac {D_0 M_{0,0}}{D_1M_{0,1}})$, and $P_5 = \left( \frac {D_0D_1M_{0,1}M_{1,0} - D_1^2 M_{0,1}M_{1,1}} {D_0D_1 M_{0,1}M_{1,0} - D_0 D_1 M_{0,0}M_{1,1}} , \frac {D_0D_1M_{0,1}M_{1,0} - D_0^2 M_{0,0}M_{1,0}} {D_0D_1 M_{0,1}M_{1,0} - D_0 D_1 M_{0,0}M_{1,1}} \right)$. The point $P_5$ lies at the intersection between two specific lines, and the points $P_2$ and $P_4$ are the intersection points of each of those lines with the $(q=1)$-line and the $(p=1)$-line resp. In case $6$, the strategy of $A$ is given by $(x_0,y_1) = {\frac 1 {M_{1,0}M_{0,1} - M_{0,0}M_{1,1}}} (-M_{1,1}\rho_0 + M_{1,0}\rho_1, M_{0,1} \rho_0 - M_{0,0}\rho_1)$.} \end{table} The various conditions given in Table~\ref{tab:Conditions_for_NE} are \emph{feasibility} conditions. They guarantee that $A$ is able to find a strategy $(x_0,x_1,y_0,y_1)\in [0,1]^4$ that causes at least one of the two types of $B$ agent to be indifferent as to the signal she sends. Case $6$, which is the case relevant to Theorem~\ref{thm:NE_which_is_RR}, can be realized starting with any matrix $M$ satisfying $M_{0,0}M_{1,1} < M_{0,1}M_{1,0}$ (which is a necessary condition derived from Equation~\eqref{eq:strawman_game_assumptions}), and which intuitively can be interpreted as a wrong ``accusation'' being costlier than the gain from a correct ``accusation'' (on average and in absolute terms). Given such $M$, one can set $D_0$ and $D_1$ s.t. $\tfrac {D_0}{D_1} =\sqrt{\tfrac {M_{0,1}}{M_{0,0}}\cdot \tfrac {M_{1,1}} {M_{1,0}}}$ so as to satisfy Equation~\eqref{eq:strawman_game_assumptions}. This can be interpreted as balancing the ``significance'' of type $0$ (i.e. $M_{0,0}M_{0,1}$) with the ``significance'' of type $1$ (i.e. $M_{1,0}M_{1,1}$), setting the more significant type as the less probable (i.e. if type $1$ is more significant than type $0$, then $D_1 < D_0$).
We then pick $\rho_0,\rho_1$ that satisfy $\tfrac {M_{1,1}}{M_{1,0}}< \tfrac {\rho_1}{\rho_0}< \tfrac {M_{0,1}}{M_{0,0}} $ and scale both by a sufficiently small multiplicative factor so as to satisfy the other inequality of case $6$. (In particular, setting $\tfrac {\rho_1}{\rho_0}=\tfrac {D_0}{D_1}$ is a feasible solution.) Here, $\rho_0$ and $\rho_1$ are set such that the ratio $\tfrac {\rho_1}{\rho_0}$ balances the significance ratio w.r.t.\ type-$1$ accusations (i.e. $\tfrac {\rho_1}{\rho_0} > \tfrac {M_{1,1}}{M_{1,0}}$) and the ratio $\tfrac {\rho_0}{\rho_1}$ balances the significance ratio w.r.t.\ type-$0$ accusations (i.e. $\tfrac {\rho_0}{\rho_1} > \tfrac {M_{0,0}}{M_{0,1}}$). More concretely, for any matrix $M = \left( \begin{array}{c c} 1 & c \cr c & d \end{array} \right)$ with parameters $c,d$ satisfying $d < c^2$, we can set $\tfrac {D_0}{D_1} = \sqrt{d}$ and pick any sufficiently small $\rho_0,\rho_1$ satisfying $\tfrac {\rho_1}{\rho_0} \in (\tfrac d c, c)$ to satisfy the requirements of Theorem~\ref{thm:NE_which_is_RR}. \else The proof itself is deferred to the full version of this paper, where we also give a complete summary of the various BNEs of this game. We detail $6$ different cases that cover all possible settings of the game. Each of these $6$ cases is defined by a different \emph{feasibility} condition. These conditions guarantee that $A$ is able to find a strategy that causes at least one of the two types of $B$ agent to be indifferent as to the signal she sends. The feasibility condition detailed in Equation~\eqref{eq:condition_of_randomized_BNE} can be realized starting with any matrix $M$ satisfying $M_{0,0}M_{1,1} < M_{0,1}M_{1,0}$ (which is a necessary condition derived from Equation~\eqref{eq:strawman_game_assumptions}), and which intuitively can be interpreted as a wrong ``accusation'' being costlier than the gain from a correct ``accusation'' (on average and in absolute terms). Given such $M$, one can set $D_0$ and $D_1$ s.t.
$\tfrac {D_0}{D_1} =\sqrt{\tfrac {M_{0,1}}{M_{0,0}}\cdot \tfrac {M_{1,1}} {M_{1,0}}}$ so as to satisfy Equation~\eqref{eq:strawman_game_assumptions}. This can be interpreted as balancing the ``significance'' of type $0$ (i.e. $M_{0,0}M_{0,1}$) with the ``significance'' of type $1$ (i.e. $M_{1,0}M_{1,1}$), setting the more significant type as the less probable (i.e. if type $1$ is more significant than type $0$, then $D_1 < D_0$). We then pick $\rho_0,\rho_1$ that satisfy $\tfrac {M_{1,1}}{M_{1,0}}< \tfrac {\rho_1}{\rho_0}< \tfrac {M_{0,1}}{M_{0,0}} $ and scale both by a sufficiently small multiplicative factor so as to satisfy the other inequality in Equation~\eqref{eq:condition_of_randomized_BNE}. (In particular, setting $\tfrac {\rho_1}{\rho_0}=\tfrac {D_0}{D_1}$ is a feasible solution.) Here, $\rho_0$ and $\rho_1$ are set such that the ratio $\tfrac {\rho_1}{\rho_0}$ balances the significance ratio w.r.t.\ type-$1$ accusations (i.e. $\tfrac {\rho_1}{\rho_0} > \tfrac {M_{1,1}}{M_{1,0}}$) and the ratio $\tfrac {\rho_0}{\rho_1}$ balances the significance ratio w.r.t.\ type-$0$ accusations (i.e. $\tfrac {\rho_0}{\rho_1} > \tfrac {M_{0,0}}{M_{0,1}}$). More concretely, for any matrix $M = \left( \begin{array}{c c} 1 & c \cr c & d \end{array} \right)$ with parameters $c,d$ satisfying $d < c^2$, we can set $\tfrac {D_0}{D_1} = \sqrt{d}$ and pick any sufficiently small $\rho_0,\rho_1$ satisfying $\tfrac {\rho_1}{\rho_0} \in (\tfrac d c, c)$ to satisfy the requirements of Theorem~\ref{thm:NE_which_is_RR}. \fi } \optoutGameDeferredToAPX \newcommand{\apxOptOut}{ Recall, we assume ${\bf Pr}[t=0]=D_0$ and ${\bf Pr}[t=1]=D_1$ where wlog $D_0\geq D_1$. As we did before, we denote $B$'s strategy $\sigma_B$ using $p = {\bf Pr}[\sigma_B(0)=0]$ and $q = {\bf Pr}[\sigma_B(1)=1]$. In contrast to the previous analysis, now $A$ has to decide between $3$ alternatives per $\hat\ensuremath{t}$ signal, so $A$ has $6$ options.
However, seeing as $A$'s choice to opt out always gives $A$ a utility of $0$, we just denote $4$ alternatives: \begin{eqnarray*} x_0 = {\bf Pr}[\sigma_A(0)=0] , && x_1={\bf Pr}[\sigma_A(0)=1] \cr y_0 = {\bf Pr}[\sigma_A(1)=0] , && y_1={\bf Pr}[\sigma_A(1)=1] \end{eqnarray*} and we constrain $x_0+x_1 \leq 1$ and $y_0 + y_1 \leq 1$.\footnote{Whereas in the previous section we constrained $x_0+x_1=1$ and $y_0+y_1=1$.} Now, given that $A$ views a signal $\hat\ensuremath{t}$, she has $3$ alternatives: \begin{itemize} \item Accuse $B$ of being of type $0$ and get an expected revenue of \begin{align*} & M_{0,0} {\bf Pr}[t=0 ~|~ \hat\ensuremath{t}] - M_{0,1} {\bf Pr}[\ensuremath{t}=1 ~|~ \hat\ensuremath{t}]\cr & = \frac 1 {{\bf Pr}[\hat\ensuremath{t}]} \left( M_{0,0} {\bf Pr}[t=0]{\bf Pr}[\sigma_B(0) = \hat\ensuremath{t}] - M_{0,1} {\bf Pr}[\ensuremath{t}=1]{\bf Pr}[\sigma_B(1)= \hat\ensuremath{t}] \right)\cr & = \frac 1 {{\bf Pr}[\hat\ensuremath{t}]} \left( M_{0,0} D_0 {\bf Pr}[\sigma_B(0) = \hat\ensuremath{t}] - M_{0,1} D_1 {\bf Pr}[\sigma_B(1) = \hat\ensuremath{t}]\right) \end{align*} \item Accuse $B$ of being of type $1$ and get an expected revenue of \begin{align*} &M_{1,1} {\bf Pr}[t=1 ~|~ \hat\ensuremath{t}] - M_{1,0} {\bf Pr}[\ensuremath{t}=0 ~|~ \hat\ensuremath{t}] \cr &=\frac 1 {{\bf Pr}[\hat\ensuremath{t}]} \left( M_{1,1} {\bf Pr}[t=1]{\bf Pr}[\sigma_B(1) = \hat\ensuremath{t}] - M_{1,0} {\bf Pr}[\ensuremath{t}=0]{\bf Pr}[\sigma_B(0)= \hat\ensuremath{t}] \right)\cr & = \frac 1 {{\bf Pr}[\hat\ensuremath{t}]} \left( M_{1,1} D_1 {\bf Pr}[\sigma_B(1) = \hat\ensuremath{t}] - M_{1,0} D_0 {\bf Pr}[\sigma_B(0) = \hat\ensuremath{t}]\right) \end{align*} \item Opt out and get revenue of $0 = \frac 0 {{\bf Pr}[\hat\ensuremath{t}]}$.
\end{itemize} This means that $A$ prefers accusing $B$ of being of type $0$ to opting out when \[ {\bf Pr}[\sigma_B(0) = \hat\ensuremath{t}] > \frac{M_{0,1} D_1}{M_{0,0} D_0}{\bf Pr}[\sigma_B(1) = \hat\ensuremath{t}] \] Similarly, $A$ prefers accusing $B$ of being of type $1$ to opting out when \[ {\bf Pr}[\sigma_B(0) = \hat\ensuremath{t}] < \frac{M_{1,1} D_1}{M_{1,0} D_0}{\bf Pr}[\sigma_B(1) = \hat\ensuremath{t}] \] From Equation~\eqref{eq:strawman_game_assumptions} we have that $\tfrac {M_{1,1}D_1}{M_{1,0}D_0} < 1 < \tfrac {M_{0,1}D_1}{M_{0,0}D_0}$. Therefore, \emph{given that ${\bf Pr}[\hat\ensuremath{t}]>0$}, $A$'s best response is determined by the ratio: \begin{equation*} \frac {{\bf Pr}[\sigma_B(0)=\hat\ensuremath{t}]}{{\bf Pr}[\sigma_B(1)=\hat\ensuremath{t}]} \begin{cases} < \frac {M_{1,1}D_1}{M_{1,0}D_0}, & A \textrm{ plays } {\bf Pr}[\sigma_A(\hat\ensuremath{t}) = 1] = 1 \cr = \frac {M_{1,1}D_1}{M_{1,0}D_0}, & A \textrm{ is indifferent between }\bot\textrm{ and playing } \tilde\ensuremath{t}=1\cr \in (\frac {M_{1,1}D_1}{M_{1,0}D_0}, \frac {M_{0,1}D_1}{M_{0,0}D_0}) & A \textrm{ plays } {\bf Pr}[\sigma_A(\hat\ensuremath{t})=\bot]=1\cr = \frac {M_{0,1}D_1}{M_{0,0}D_0}, & A \textrm{ is indifferent between }\bot\textrm{ and playing } \tilde\ensuremath{t}=0\cr > \frac {M_{0,1}D_1}{M_{0,0}D_0} & A \textrm{ plays } {\bf Pr}[\sigma_A(\hat\ensuremath{t})=0]=1 \end{cases} \end{equation*} Therefore $A$'s BNE strategy when viewing the signal $\hat\ensuremath{t}$ (which is best response to $B$'s BNE strategy) is such that $A$ never plays both $\tilde\ensuremath{t} = \hat\ensuremath{t}$ and $\tilde\ensuremath{t}=1-\hat\ensuremath{t}$ with non-zero probability.
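The case analysis above can be sketched as follows (the function name and the ratio argument $r$ are ours; for simplicity the boundary ties are folded into the opt-out branch):

```python
# Sketch of A's best response to a signal, as a function of the likelihood
# ratio r = Pr[sigma_B(0) = t_hat] / Pr[sigma_B(1) = t_hat].
def a_best_response(r, D0, D1, M00, M01, M10, M11):
    low = (M11 * D1) / (M10 * D0)    # below this: accuse type 1
    high = (M01 * D1) / (M00 * D0)   # above this: accuse type 0
    if r < low:
        return 1
    if r > high:
        return 0
    return None                      # opt out (ties treated as indifference)

D0, D1 = 0.6, 0.4
M00, M01, M10, M11 = 1.0, 2.0, 2.0, 1.0   # satisfies the strawman assumptions
# Here low = 1/3 < 1 < high = 4/3, matching the displayed chain of thresholds.
assert a_best_response(0.2, D0, D1, M00, M01, M10, M11) == 1
assert a_best_response(1.0, D0, D1, M00, M01, M10, M11) is None
assert a_best_response(2.0, D0, D1, M00, M01, M10, M11) == 0
```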
Switching to the $B$ agent, the utility functions of $B$ are similar to before: \begin{align*} \textrm{For type } \ensuremath{t}=0:~~~ & U_{B,0} = p(\rho_0 - x_0 M_{0,0} + x_1 M_{1,0}) + (1-p) (-y_0 M_{0,0} + y_1 M_{1,0}) \cr \textrm{For type } \ensuremath{t}=1:~~~ & U_{B,1} = q(\rho_1 - y_1 M_{1,1} + y_0 M_{0,1}) + (1-q) (-x_1 M_{1,1} + x_0 M_{0,1}) \end{align*} and so $p =1$ if $\rho_0 > M_{0,0}(x_0-y_0) - M_{1,0}(x_1-y_1)$ and $p=0$ if $\rho_0 < M_{0,0}(x_0-y_0) - M_{1,0}(x_1-y_1)$; similarly $q=1$ if $\rho_1 > M_{1,1}(y_1-x_1)-M_{0,1}(y_0-x_0)$ and $q=0$ if $\rho_1 < M_{1,1}(y_1-x_1)-M_{0,1}(y_0-x_0)$. We can now make our first claim about the BNE of the game. \begin{claim} In any BNE strategy of $B$ we have that either \[ \frac p {1-q} = \frac {{\bf Pr}[\sigma_B^*(0)=0]}{{\bf Pr}[\sigma^*_B(1)=0]} \geq \frac {M_{0,1}D_1}{M_{0,0}D_0} ~~\textrm{ or }~~ \frac {1-p} {q} = \frac {{\bf Pr}[\sigma_B^*(0)=1]}{{\bf Pr}[\sigma^*_B(1)=1]} \leq \frac {M_{1,1}D_1}{M_{1,0}D_0}\] \end{claim} \begin{proof} Assume for the sake of contradiction that both conditions do not hold. Then given the $\hat\ensuremath{t}=0$ signal it holds that $x_0 = {\bf Pr}[\sigma^*_A(0)=0] = 0$, and given the $\hat\ensuremath{t}=1$ signal it holds that $y_1 = {\bf Pr}[\sigma^*_A(1)=1]=0$. Thus, $B$'s best response to $A$'s strategy is to switch to $(p,q)=(1,1)$ (since $\rho_0,\rho_1>0$), and then both conditions do hold. \end{proof} \begin{claim} In any BNE strategy of $B$ we have that both \[ \frac p {1-q} = \frac {{\bf Pr}[\sigma_B^*(0)=0]}{{\bf Pr}[\sigma^*_B(1)=0]} > \frac {M_{1,1}D_1}{M_{1,0}D_0} ~~\textrm{ and }~~ \frac {1-p} {q} = \frac {{\bf Pr}[\sigma_B^*(0)=1]}{{\bf Pr}[\sigma^*_B(1)=1]} < \frac {M_{0,1}D_1}{M_{0,0}D_0}\] \end{claim} \begin{proof} Based on the previous claim, one of the two inequalities is immediate.
Assume we have $\frac {p}{1-q} \geq \frac {M_{0,1}D_1}{M_{0,0}D_0} > 1 > \frac {M_{1,1}D_1}{M_{1,0}D_0}$; we now show that $\frac {1-p}q < \frac {M_{0,1}D_1}{M_{0,0}D_0}$ must also hold. If, for contradiction, we have that $\frac {1-p}q \geq \frac {M_{0,1}D_1}{M_{0,0}D_0}$ then \[ 1 = p + (1-p) \geq \frac {M_{0,1}D_1}{M_{0,0}D_0} \big(q + (1-q)\big) = \frac {M_{0,1}D_1}{M_{0,0}D_0}\] which contradicts Equation~\eqref{eq:strawman_game_assumptions}. The argument for the case $\frac {1-p}q \leq \frac {M_{1,1}D_1}{M_{1,0}D_0}$ is symmetric. \end{proof} Based on the last claim and on $A$'s best-response analysis, we have that in any BNE strategy of $A$ it holds that $x_1 = {\bf Pr}[\sigma^*_A(0)=1] = 0$ and $y_0 = {\bf Pr}[\sigma^*_A(1)=0]=0$. (I.e., given the signal $\hat\ensuremath{t}$, $A$ never plays $\tilde\ensuremath{t} = 1-\hat\ensuremath{t}$.) As a result, $B$'s best-response analysis simplifies to: $p =1$ if $\rho_0 > M_{0,0}x_0 + M_{1,0}y_1$ and $p=0$ if $\rho_0 < M_{0,0}x_0 + M_{1,0}y_1$; similarly $q=1$ if $\rho_1 > M_{1,1}y_1+M_{0,1}x_0$ and $q=0$ if $\rho_1 < M_{1,1}y_1+M_{0,1}x_0$. We are now able to prove the existence of a BNE as specified in Theorem~\ref{thm:NE_which_is_RR}. \begin{claim} Assume that $0 \leq \rho_1M_{1,0} - \rho_0M_{1,1} \leq M_{0,1}M_{1,0}-M_{0,0}M_{1,1}$ and $0 \leq \rho_0M_{0,1} - \rho_1M_{0,0} \leq M_{0,1}M_{1,0}-M_{0,0}M_{1,1}$. The strategies $\sigma_A^*$ and $\sigma_B^*$ denoted below are BNE strategies.
\begin{align*} \textrm{For } A: & x_0^* &= {\bf Pr}[\sigma^*_A(0)=0] &= \frac {M_{1,0}\rho_1-M_{1,1}\rho_0} {M_{1,0}M_{0,1} - M_{0,0}M_{1,1}} \cr & x_1^* &= {\bf Pr}[\sigma^*_A(0)=1] &= 0 \cr & y_0^* &= {\bf Pr}[\sigma^*_A(1) = 0] &=0 \cr & y_1^* &= {\bf Pr}[\sigma^*_A(1)= 1] &= \frac {M_{0,1} \rho_0 - M_{0,0}\rho_1} {M_{1,0}M_{0,1} - M_{0,0}M_{1,1}} \cr \textrm{For } B: & p^* &= {\bf Pr}[\sigma^*_B(0) = 0] &= \frac {D_1M_{0,1}(D_0M_{1,0} - D_1M_{1,1})} {D_0D_1 M_{0,1}M_{1,0} - D_0 D_1 M_{0,0}M_{1,1}}\cr &1-p^* &= {\bf Pr}[\sigma^*_B(0)=1] &= \frac {D_1M_{1,1}(D_1M_{0,1}- D_0 M_{0,0})} {D_0D_1 M_{0,1}M_{1,0} - D_0 D_1 M_{0,0}M_{1,1}}\cr &1-q^* &= {\bf Pr}[\sigma^*_B(1)=0] &= \frac {D_0M_{0,0}(D_0M_{1,0}- D_1 M_{1,1})} {D_0D_1 M_{0,1}M_{1,0} - D_0 D_1 M_{0,0}M_{1,1}}\cr &q^* &= {\bf Pr}[\sigma^*_B(1)=1] &= \frac {D_0M_{1,0}(D_1M_{0,1} - D_0 M_{0,0})} {D_0D_1 M_{0,1}M_{1,0} - D_0 D_1 M_{0,0}M_{1,1}} \end{align*} \end{claim} \begin{proof} First, observe that under the given assumptions in the claim it holds that $x_0^*,y_1^* \in [0,1]$, and due to Equation~\eqref{eq:strawman_game_assumptions} it holds that $p^*,q^*,1-p^*,1-q^*$ are all strictly positive (so $p^*,q^* \in (0,1)$). Now observe that when $B$ follows $\sigma^*_B$ then $A$ has no incentive to deviate since $\frac {p^*} {1-q^*} = \frac {M_{0,1}D_1}{M_{0,0}D_0}$ and $\frac {1-p^*}{q^*}=\frac {M_{1,1}D_1}{M_{1,0}D_0}$. When $A$ follows $\sigma^*_A$ then $B$ has no incentive to deviate since \begin{align*} M_{0,0} x_0^* + M_{1,0}y_1^* & = \frac{M_{1,0}M_{0,1}\rho_0 - M_{0,0}M_{1,1}\rho_0 } {M_{1,0}M_{0,1}-M_{0,0}M_{1,1}} &=\rho_0 \cr M_{0,1} x_0^* + M_{1,1}y_1^* & = \frac{M_{0,1}M_{1,0}\rho_1 - M_{1,1}M_{0,0}\rho_1 } {M_{1,0}M_{0,1}-M_{0,0}M_{1,1}} &=\rho_1 \end{align*} \end{proof} Observe that when $D_0^2M_{0,0}M_{1,0} = D_1^2M_{0,1}M_{1,1}$ then $p^*=q^*$. 
Furthermore, in this case we have that $p^*> \tfrac 1 2$ because \begin{align*} &2D_0D_1M_{0,1}M_{1,0} - 2D_1^2M_{0,1}M_{1,1} > D_0D_1M_{0,1}M_{1,0}-D_0D_1M_{0,0}M_{1,1}\cr & \Leftrightarrow D_0D_1M_{0,1}M_{1,0} - D_1^2M_{0,1}M_{1,1} > D_0^2M_{0,0}M_{1,0} -D_0D_1M_{0,0}M_{1,1} \cr &\Leftrightarrow D_1M_{0,1}(D_0M_{1,0}-D_1M_{1,1}) > D_0M_{0,0}(D_0M_{1,0}-D_1M_{1,1})\cr &\Leftrightarrow 0 > -D_1M_{0,1} + D_0M_{0,0} \end{align*} where the last derivation and the last inequality are both true because of Equation~\eqref{eq:strawman_game_assumptions}. This concludes the existence part of Theorem~\ref{thm:NE_which_is_RR}. The more complicated part is to show that $B$'s BNE strategy is \emph{unique}. We make the following argument. \begin{theorem} \label{thm:mixed_BNE_under_condition} Assume that $0 < \rho_1M_{1,0} - \rho_0M_{1,1} < M_{0,1}M_{1,0}-M_{0,0}M_{1,1}$ and $0 < \rho_0M_{0,1} - \rho_1M_{0,0} < M_{0,1}M_{1,0}-M_{0,0}M_{1,1}$. Then in all BNEs of the game both types of $B$ agent play a mixed strategy. \end{theorem} Assuming Theorem~\ref{thm:mixed_BNE_under_condition} holds and using our above best-response analysis, the uniqueness of the BNE of Theorem~\ref{thm:NE_which_is_RR} is immediate. If both $p$ and $q$ are non-integral then $A$'s strategy must be given by $x_0^*$ and $y_1^*$, since this is the unique solution to the system of two linear equations in two variables that makes both types of $B$ agent indifferent. Under the assumption of Theorem~\ref{thm:mixed_BNE_under_condition} we have that both $x_0^*$ and $y_1^*$ are non-integral as well. This means that $B$ plays $(p,q)$ s.t. $A$ is indifferent to the value of $x_0^*,y_1^*$; i.e. $p = (1-q) \frac{M_{0,1}D_1}{M_{0,0}D_0}$ and $1-p = q \frac {M_{1,1}D_1}{M_{1,0}D_0}$. Again, since this is a linear system in two variables, there exists a unique $(p,q)$ pair that satisfies these conditions, which is given by $(p^*,q^*)$.
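As a sanity check on the claim above, the following Python sketch plugs illustrative parameter values (chosen by us to satisfy the assumptions; they are not taken from the text) into the closed-form strategies and verifies both indifference conditions numerically.

```python
# Numerical sanity check of the closed-form BNE; parameter values are
# illustrative only, chosen so that M01*D1 > M00*D0, M10*D0 > M11*D1,
# and 0 < rho1*M10 - rho0*M11 < det, 0 < rho0*M01 - rho1*M00 < det.
D0, D1 = 0.6, 0.4
M00, M01, M10, M11 = 1.0, 2.0, 2.0, 1.0
rho0, rho1 = 1.5, 1.5

det = M01 * M10 - M00 * M11
assert M01 * D1 > M00 * D0 and M10 * D0 > M11 * D1
assert 0 < rho1 * M10 - rho0 * M11 < det
assert 0 < rho0 * M01 - rho1 * M00 < det

# A's strategy (x0*, y1*): makes both types of B indifferent.
x0 = (M10 * rho1 - M11 * rho0) / det
y1 = (M01 * rho0 - M00 * rho1) / det

# B's strategy (p*, q*): makes A indifferent for each signal.
denom = D0 * D1 * det
p = D1 * M01 * (D0 * M10 - D1 * M11) / denom
q = D0 * M10 * (D1 * M01 - D0 * M00) / denom

eps = 1e-9
assert 0 <= x0 <= 1 and 0 <= y1 <= 1 and 0 < p < 1 and 0 < q < 1
# B's indifference: expected payment equals rho_t for each type t.
assert abs(M00 * x0 + M10 * y1 - rho0) < eps
assert abs(M01 * x0 + M11 * y1 - rho1) < eps
# A's indifference: the two ratio conditions from the best-response analysis.
assert abs(p / (1 - q) - (M01 * D1) / (M00 * D0)) < eps
assert abs((1 - p) / q - (M11 * D1) / (M10 * D0)) < eps
print("BNE verified:", round(x0, 3), round(y1, 3), round(p, 3), round(q, 3))
```

With these values the check yields $x_0^*=y_1^*=\tfrac12$, $p^*=\tfrac89$ and $q^*=\tfrac13$, so both indifference systems are satisfied simultaneously.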
In the rest of this section, our goal is to prove Theorem~\ref{thm:mixed_BNE_under_condition}. In fact, we give a full analysis of all the points $(p,q)$ that \emph{may} be $B$'s BNE strategy, and for each such possible $(p,q)$ we analyze the conditions over the parameters of the game under which it is a BNE strategy for $B$. The analysis is fairly long and tedious, as it involves checking feasibility constraints over the $6$ parameters of the game: $\rho_0$, $\rho_1$, $M_{0,0}$, $M_{0,1}$, $M_{1,0}$ and $M_{1,1}$. Furthermore, after deriving the suitable feasibility constraints, we show that they cover all settings of the parameters of the game and are mutually exclusive (when inequalities are strict). \cut{ \subsection{Utilities and Best-Response Analysis} \label{subsec:NE_analysis} Recall that we assume ${\bf Pr}[t=0]=D_0$ and ${\bf Pr}[t=1]=D_1$ where wlog $D_0\geq D_1$. As we did before, we denote $B$'s strategy $\sigma_B$ using $p = {\bf Pr}[\sigma_B(0)=0]$ and $q = {\bf Pr}[\sigma_B(1)=1]$. In contrast to the previous analysis, now $A$ has to decide between $3$ alternatives per $\hat\ensuremath{t}$ signal, so $A$ has $6$ options.
However, seeing as $A$'s choice to opt-out always gives $A$ a utility of $0$, we just denote $4$ alternatives: \begin{eqnarray*} x_0 = {\bf Pr}[\sigma_A(0)=0] , && x_1={\bf Pr}[\sigma_A(0)=1] \cr y_0 = {\bf Pr}[\sigma_A(1)=0] , && y_1={\bf Pr}[\sigma_A(1)=1] \end{eqnarray*} and we constrain $x_0+x_1 \leq 1$ and $y_0 + y_1 \leq 1$.\footnote{Whereas in the previous section we constrained $x_0+x_1=1$ and $y_0+y_1=1$.} The utility functions of $B$ are much like they were before: \begin{align*} \textrm{For type } \ensuremath{t}=0:~~~ & U_{B,0} = p(\rho_0 - x_0 M_{0,0} + x_1 M_{1,0}) + (1-p) (-y_0 M_{0,0} + y_1 M_{1,0}) \cr \textrm{For type } \ensuremath{t}=1:~~~ & U_{B,1} = q(\rho_1 - y_1 M_{1,1} + y_0 M_{0,1}) + (1-q) (-x_1 M_{1,1} + x_0 M_{0,1}) \end{align*} The utility function of $A$ too is very similar to before: \begin{align*} u_A & = D_0 \big( p(x_0M_{0,0} - x_1 M_{1,0}) + (1-p)(y_0 M_{0,0} - y_1 M_{1,0}) \big)\cr & ~~~+ D_1\big( q(y_1M_{1,1}-y_0M_{0,1}) + (1-q)( x_1M_{1,1}-x_0M_{0,1} ) \big) \cr & = x_0 \left( D_0 p M_{0,0} - D_1(1-q)M_{0,1}\right) + x_1 \left( -D_0 p M_{1,0} + D_1(1-q)M_{1,1}\right) \cr & ~~~+ y_0 \left( D_0 (1-p) M_{0,0} - D_1qM_{0,1}\right) + y_1 \left(- D_0(1- p) M_{1,0} + D_1q M_{1,1}\right) \end{align*} In order to determine $A$'s best response to a strategy $(p,q)$ of $B$, we compare the utility from $x_0$, $x_1$ and opting out, and similarly for $y_0$, $y_1$ and opting out. \begin{itemize} \item When $\E[ u_A \textrm{ from setting } \sigma_A(0)=0 ~|~ \hat\ensuremath{t}=0] > \E[ u_A \textrm{ from setting } \sigma_A(0)=1 ~|~ \hat\ensuremath{t}=0]$ (i.e. when $D_0pM_{0,0} -D_1(1-q)M_{0,1} > D_1(1-q)M_{1,1}-D_0pM_{1,0}$), then $A$ prefers playing $\sigma_A(0)=0$ to $\sigma_A(0)=1$ (i.e., $x_1=0$).\\ When the inequality holds in the opposite direction, i.e. $D_0pM_{0,0} -D_1(1-q)M_{0,1} < D_1(1-q)M_{1,1}-D_0pM_{1,0}$, then $x_0=0$. \item When $\E[ u_A \textrm{ from setting } \sigma_A(0)=0 ~|~ \hat\ensuremath{t}=0] < 0$ (i.e.
when $D_0pM_{0,0} -D_1(1-q)M_{0,1} < 0$), then $A$ prefers opting-out to playing $\sigma_A(0)=0$ (i.e. $x_0=0$). \item When $\E[ u_A \textrm{ from setting } \sigma_A(0)=1 ~|~ \hat\ensuremath{t}=0] < 0$ (i.e. when $-D_0pM_{1,0} +D_1(1-q)M_{1,1} < 0$), then $A$ prefers opting-out to playing $\sigma_A(0)=1$ (i.e. $x_1=0$). \end{itemize} \begin{itemize} \item When $\E[ u_A \textrm{ from setting } \sigma_A(1)=1 ~|~ \hat\ensuremath{t}=1] > \E[ u_A \textrm{ from setting } \sigma_A(1)=0 ~|~ \hat\ensuremath{t}=1]$ (i.e. when $D_0 (1-p) M_{0,0} - D_1qM_{0,1} < - D_0(1- p) M_{1,0} + D_1q M_{1,1}$), then $A$ prefers playing $\sigma_A(1)=1$ to $\sigma_A(1)=0$ (i.e., $y_0=0$).\\ When the inequality holds in the opposite direction, i.e. $D_0 (1-p) M_{0,0} - D_1qM_{0,1} > - D_0(1- p) M_{1,0} + D_1q M_{1,1}$, then $y_1=0$. \item When $\E[ u_A \textrm{ from setting } \sigma_A(1)=1 ~|~ \hat\ensuremath{t}=1] < 0$ (i.e. when $- D_0(1- p) M_{1,0} + D_1q M_{1,1} < 0$), then $A$ prefers opting-out to playing $\sigma_A(1)=1$ (i.e. $y_1=0$). \item When $\E[ u_A \textrm{ from setting } \sigma_A(1)=0 ~|~ \hat\ensuremath{t}=1] < 0$ (i.e. when $D_0 (1-p) M_{0,0} - D_1qM_{0,1} < 0$), then $A$ prefers opting-out to playing $\sigma_A(1)=0$ (i.e. $y_0=0$). \end{itemize} \begin{align*} & \textrm{When } D_0pM_{0,0} -D_1(1-q)M_{0,1} > D_1(1-q)M_{1,1}-D_0pM_{1,0} & \textrm{ then } x_1 = 0, \cr & \textrm{ and when } D_0pM_{0,0} -D_1(1-q)M_{0,1} < D_1(1-q)M_{1,1}-D_0pM_{1,0} & \textrm{ then } x_0 = 0; \cr & \textrm{When } D_0pM_{0,0} -D_1(1-q)M_{0,1} < 0 & \textrm{ then } x_0 = 0; \cr & \textrm{When } -D_0pM_{1,0} +D_1(1-q)M_{1,1} < 0 & \textrm{ then } x_1 = 0.
\cr &&\cr & \textrm{When } D_0 (1-p) M_{0,0} - D_1qM_{0,1} > - D_0(1- p) M_{1,0} + D_1q M_{1,1} & \textrm{ then } y_1 = 0, \cr & \textrm{ and when } D_0 (1-p) M_{0,0} - D_1qM_{0,1} < - D_0(1- p) M_{1,0} + D_1q M_{1,1} & \textrm{ then } y_0 = 0; \cr & \textrm{When } D_0 (1-p) M_{0,0} - D_1qM_{0,1} < 0 & \textrm{ then } y_0 = 0; \cr & \textrm{When } - D_0(1- p) M_{1,0} + D_1q M_{1,1} < 0 & \textrm{ then } y_1 = 0. \cr \end{align*} We turn to examining the ``lines of indifference'' -- the set of strategies $(p,q)$ that cause $A$ to be indifferent between at least two strategies. \begin{align*} l_1 :~~~ & D_0pM_{0,0} -D_1(1-q)M_{0,1}=0 \cr l_2 :~~~ & D_0p(M_{0,0}+M_{1,0}) -D_1(1-q)(M_{0,1}+M_{1,1}) = 0 \cr l_3 :~~~ & -D_0pM_{1,0} +D_1(1-q)M_{1,1} = 0\cr \cr l_4 :~~~ & - D_0(1- p) M_{1,0} + D_1q M_{1,1} = 0\cr l_5 :~~~ & D_0(1- p) (M_{0,0}+M_{1,0}) - D_1q (M_{0,1}+M_{1,1}) = 0 \cr l_6 :~~~ & D_0 (1-p) M_{0,0} - D_1qM_{0,1} = 0 \cr \end{align*} Clearly, the point $(0,1)$ lies on all $3$ lines $l_1,l_2,l_3$. Some arithmetic shows that the following points reside on each of these lines: $(\tfrac {D_1}{D_0}\tfrac{M_{0,1}}{M_{0,0}} ,0) \in l_1$, $(\tfrac {D_1}{D_0}\tfrac{M_{0,1}+M_{1,1}}{M_{0,0}+M_{1,0}},0)\in l_2$, $(\tfrac {D_1}{D_0}\tfrac{M_{1,1}}{M_{1,0}},0)\in l_3$. Using the assumptions given in~\eqref{eq:strawman_game_assumptions}, we have that $\tfrac {D_1}{D_0}\tfrac{M_{0,1}}{M_{0,0}} > 1 > \tfrac {D_1}{D_0}\tfrac{M_{1,1}}{M_{1,0}}$, so the line $l_1$ lies above the line $l_3$. Next we use the fact that \[\max\{\tfrac{M_{0,1}}{M_{0,0}},\tfrac{M_{1,1}}{M_{1,0}} \} \geq \tfrac{M_{0,1}+M_{1,1}}{M_{0,0}+M_{1,0}} \geq \min\{\tfrac{M_{0,1}}{M_{0,0}},\tfrac{M_{1,1}}{M_{1,0}} \}\footnote{Proof: for any positive $a,b,c,d$ we have $a+b = c\tfrac a c + d\tfrac b d \leq \max\{\tfrac a c,\tfrac b d \}(c+d)$.} \] to deduce that the line $l_2$ is ``sandwiched'' in between $l_1$ and $l_3$. A schematic description of the three lines is given in Figure~\ref{fig:indifference_lines}.
\begin{figure} \caption{A schematic description of the indifference lines $l_1,\ldots,l_6$.} \label{fig:indifference_lines} \end{figure} Following $A$'s best-response strategies we deduce that whenever $B$ plays a strategy $(p,q)$ such that: \begin{itemize} \item $(p,q)$ lies below the $l_3$ line, then $A$ plays $x_1=1$, $x_0=0$. I.e., in response to the signal $\hat\ensuremath{t}=0$, $A$ prefers to accuse $B$ of being of type $\tilde\ensuremath{t}=1$. \item $(p,q)$ lies between the $l_1$ and $l_3$ lines, then $A$ plays $x_0=x_1=0$. I.e., in response to the signal $\hat\ensuremath{t}=0$, $A$ prefers to opt out. \item $(p,q)$ lies above the $l_1$ line, then $A$ plays $x_0=1$ and $x_1=0$. I.e., in response to the signal $\hat\ensuremath{t}=0$, $A$ prefers to accuse $B$ of being of type $\tilde\ensuremath{t}=0$. \end{itemize} Lines $l_4,l_5,l_6$ are the symmetric analogue of lines $l_1,l_2,l_3$. Following the same line of reasoning, $l_4$ lies above the other two lines. As best response to any $(p,q)$ above line $l_4$ we have that $A$ sets $y_1=1$; and for any $(p,q)$ below line $l_4$, $A$ sets $y_1=0$. For the sake of completeness we give additional details below, but the reader can skip to Section~\ref{subsec:NE_locations}. Analogously, the point $(1,0)$ lies at the intersection of lines $l_4, l_5, l_6$. And again, the following points lie on the following lines: $(0, \tfrac {D_0}{D_1}\tfrac{M_{1,0}}{M_{1,1}} )\in l_4$, $(0,\tfrac {D_0}{D_1}\tfrac{M_{0,0}+M_{1,0}}{M_{0,1}+M_{1,1}})\in l_5$, $(0, \tfrac {D_0}{D_1}\tfrac{M_{0,0}}{M_{0,1}} ) \in l_6$. We use the assumptions given in~\eqref{eq:strawman_game_assumptions} to deduce that $\tfrac {D_0}{D_1}\tfrac{M_{1,0}}{M_{1,1}}>1>\tfrac {D_0}{D_1}\tfrac{M_{0,0}}{M_{0,1}}$, and the fact that \[ \max\{\tfrac{M_{0,0}}{M_{0,1}},\tfrac{M_{1,0}}{M_{1,1}} \} \geq \tfrac{M_{0,0}+M_{1,0}}{M_{0,1}+M_{1,1}} \geq \min\{\tfrac{M_{0,0}}{M_{0,1}},\tfrac{M_{1,0}}{M_{1,1}}\} \] to deduce that $l_4$ is above $l_6$ and $l_5$ is ``sandwiched'' between them.
This too is shown in Figure~\ref{fig:indifference_lines}. Similar best-response strategies hold for a strategy $(p,q)$ of $B$ such that \begin{itemize} \item $(p,q)$ lies below the $l_6$ line, then $A$ plays $y_0=1$, $y_1=0$. I.e., in response to the signal $\hat\ensuremath{t}=1$, $A$ prefers to accuse $B$ of being of type $\tilde\ensuremath{t}=0$. \item $(p,q)$ lies between the $l_4$ and $l_6$ lines, then $A$ plays $y_0=y_1=0$. I.e., in response to the signal $\hat\ensuremath{t}=1$, $A$ prefers to opt out. \item $(p,q)$ lies above the $l_4$ line, then $A$ plays $y_1=1$ and $y_0=0$. I.e., in response to the signal $\hat\ensuremath{t}=1$, $A$ prefers to accuse $B$ of being of type $\tilde\ensuremath{t}=1$. \end{itemize} } \subsection{Proof of Theorem~\ref{thm:NE_which_is_RR}: Characterizing All Potential BNEs of the Game} \label{subsec:NE_locations} Consider the space $[0,1]\times[0,1]$ of all possible strategies $(p,q)$ for the two types of $B$ agents. We denote the two ``lines of indifference'' for $A$ on this square: \begin{eqnarray*} l_1 : &p = \frac {M_{0,1}D_1}{M_{0,0}D_0} (1-q) \cr l_2 : &1-p = \frac {M_{1,1}D_1}{M_{1,0}D_0} q \end{eqnarray*} where $(p,q)=(0,1) \in l_1$ and $(p,q)=(1,0) \in l_2$. These lines partition the $[0,1]\times[0,1]$ square into multiple different regions, as shown in Figure~\ref{fig:non_NE_areas}. \begin{figure} \caption{\small The 4 regions created by the lines $l_1$ and $l_2$, and the $5$ intersection points of the two lines and the borders of the square.} \label{fig:non_NE_areas} \end{figure} First, we argue that any $(p,q)$ in the lower region, below $l_1$ and $l_2$ (the shaded blue region in Figure~\ref{fig:non_NE_areas}) cannot be $B$'s strategy in a BNE. We have in fact already shown this: for any $(p,q)$ in this blue region we have that $\frac p {1-q} < \frac {M_{0,1}D_1}{M_{0,0}D_0}$ and $\frac {1-p} q < \frac {M_{1,1}D_1}{M_{1,0}D_0}$, contradicting our earlier claim.
Secondly, we argue that a point $(p,q)$ above both lines can be $B$'s strategy in a BNE only if the valuation of $B$ for the coupons is high. Observe, for any $(p,q)$ above both lines, $A$'s best response is to set $x_0=y_1=1$, which means $B$'s utility is $p(\rho_0-M_{0,0})+(1-p)M_{1,0}$ for agents of type $\ensuremath{t}=0$, and $q(\rho_1-M_{1,1})+(1-q)M_{0,1}$ for type $\ensuremath{t}=1$. Therefore, $B$ has no incentive to deviate from $(p,q)$ only if $\rho_0 \geq M_{0,0}+M_{1,0}$ and $\rho_1 \geq M_{0,1}+M_{1,1}$. In particular, when both inequalities are strict, we have that $(1,1)$ is $B$'s BNE; when both are equalities, any point above both $l_1$ and $l_2$ is a BNE; and when one is an equality and the other is a strict inequality, we have that the BNE strategy is on the border of the $[0,1]\times[0,1]$ square. We now turn our attention to the points strictly between the lines $l_1$ and $l_2$ (excluding all points on these lines). To any $(p,q)$ in these regions, $A$'s best response is to play $\tilde\ensuremath{t}=\hat\ensuremath{t}$ when seeing one signal and to opt-out when seeing the other signal. E.g., for any $(p,q)$ above $l_1$ but below $l_2$ (the top-left region), $A$ opts out when seeing the $\hat\ensuremath{t}=1$ signal, but plays $\tilde\ensuremath{t}=0$ when seeing the $\hat\ensuremath{t}=0$ signal. In that case, $B$'s utility function is $p(\rho_0 - M_{0,0})$ for type $\ensuremath{t}=0$, and $q\rho_1 + (1-q)M_{0,1}$ for type $\ensuremath{t}=1$. Therefore, unless $\rho_0=M_{0,0}$, agents of type $\ensuremath{t}=0$ have an incentive to deviate (either to playing $p=0$ or $p=1$). In addition, it must also hold that $\rho_1\geq M_{0,1}$. (If this inequality is strict, then the BNE strategy lies on the border of the square.) Analogously, should the BNE strategy lie above the $l_2$ line but below the $l_1$ line (lower-right area), then it must be the case that $\rho_1=M_{1,1}$ and $\rho_0 \geq M_{1,0}$.
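The first observation of this paragraph can be checked numerically. The following Python sketch (with illustrative parameter values of our own choosing, not taken from the text) evaluates $B$'s utilities when $A$ responds with $x_0=y_1=1$; since each utility is linear in $B$'s own probability, the endpoint $(p,q)=(1,1)$ maximizes both exactly when $\rho_0 \geq M_{0,0}+M_{1,0}$ and $\rho_1 \geq M_{0,1}+M_{1,1}$.

```python
# B's utilities in the region above both indifference lines, where A's best
# response is x0 = y1 = 1 (and x1 = y0 = 0); illustrative parameters only.
M00, M01, M10, M11 = 1.0, 2.0, 2.0, 1.0

def u_B0(p, rho0):  # type t = 0
    return p * (rho0 - M00) + (1 - p) * M10

def u_B1(q, rho1):  # type t = 1
    return q * (rho1 - M11) + (1 - q) * M01

grid = [i / 100 for i in range(101)]

# High valuations: rho0 > M00 + M10 and rho1 > M01 + M11, so each utility
# is strictly increasing and (p, q) = (1, 1) is the unique maximizer.
rho0, rho1 = 3.5, 3.5
assert max(grid, key=lambda p: u_B0(p, rho0)) == 1.0
assert max(grid, key=lambda q: u_B1(q, rho1)) == 1.0

# A lower valuation violates the condition: type t = 0 now prefers p = 0.
assert max(grid, key=lambda p: u_B0(p, 2.0)) == 0.0
print("(p,q)=(1,1) is a best response exactly when the valuations are high")
```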
We now consider points on the lines $l_1$ and $l_2$, excluding the $5$ points where the two lines intersect each other, the $p=1$ line or the $q=1$ line. \begin{itemize} \item For any $(p,q)$ on the $l_1$ line below the $l_2$ line (top-left side of $l_1$ bordering the blue region): $A$'s best response to any such $(p,q)$ is to set $y_0=y_1=0$ and $x_1=0$. It follows that $B$'s utility is $p(\rho_0 - x_0M_{0,0})$ and $q\rho_1 + (1-q) x_0 M_{0,1}$ for types $\ensuremath{t}=0$ and $\ensuremath{t}=1$ resp. Hence, if $0 \leq \tfrac {\rho_0}{M_{0,0}} = \tfrac {\rho_1}{M_{0,1}} \leq 1$ then such strategies can be a BNE. \item For any $(p,q)$ on the $l_1$ line above the $l_2$ line (bottom-right side of $l_1$ bordering the red region): $A$'s best response to such $(p,q)$ is to set $x_1=y_0=0$, $y_1=1$; $B$'s utility functions are $p(\rho_0-x_0M_{0,0}) + (1-p) M_{1,0}$ and $q(\rho_1 - M_{1,1})+ (1-q)x_0M_{0,1}$ for types $\ensuremath{t}=0$ and $\ensuremath{t}=1$ resp. It follows that if $0 \leq \tfrac{ \rho_0-M_{1,0} } {M_{0,0}} = \tfrac {\rho_1-M_{1,1}}{M_{0,1}}\leq 1$, then such points give a BNE. \item For any $(p,q)$ on the $l_2$ line below the $l_1$ line (bottom-right side of $l_2$ bordering the blue region): As a response to any $(p,q)$ here, $A$ sets $x_0=x_1=0$ and $y_0=0$. $B$'s utility function is therefore $p\rho_0+(1-p) (y_1M_{1,0})$ and $q(\rho_1-y_1M_{1,1})$. Hence, in case $0\leq \tfrac {\rho_0}{M_{1,0}} = \tfrac {\rho_1}{M_{1,1}} \leq 1$, we have a BNE with such $(p,q)$. \item For any $(p,q)$ on the $l_2$ line above the $l_1$ line (top-left side of $l_2$ bordering the red region): As best response to such $(p,q)$, $A$ sets $x_1=y_0=0$ and $x_0=1$. And so $B$'s utility functions are $p(\rho_0 - M_{0,0})+(1-p)y_1M_{1,0}$ and $q(\rho_1 - y_1 M_{1,1}) +(1-q)M_{0,1}$. Therefore, if we have $0\leq \tfrac {\rho_0 - M_{0,0}}{M_{1,0}} = \tfrac {\rho_1-M_{0,1}}{M_{1,1}} \leq 1$ then we have a BNE with such $(p,q)$.
\end{itemize} Thus far (with the exception of the potential BNE at the point $(p,q)=(1,1)$) we have considered only BNEs that may arise when the parameters of the game ($\rho_0,\rho_1$ and the entries of $M$) satisfy some equality constraints. Assuming we perturb the values of $\rho_0$ and $\rho_1$ a little such that none of the above-mentioned equalities hold, we are left with $5$ points at which the BNE can occur: \begin{align*} & P_1= (0,1) ~,~~~ P_2 = (1 - \frac {D_1M_{1,1}}{D_0 M_{1,0}} , 1) ~,~~~ P_3 = (1,0) ~,~~~ P_4 = (1, 1-\frac {D_0 M_{0,0}}{D_1M_{0,1}}) \cr & P_5 = \left( \frac {D_0D_1M_{0,1}M_{1,0} - D_1^2 M_{0,1}M_{1,1}} {D_0D_1 M_{0,1}M_{1,0} - D_0 D_1 M_{0,0}M_{1,1}} , \frac {D_0D_1M_{0,1}M_{1,0} - D_0^2 M_{0,0}M_{1,0}} {D_0D_1 M_{0,1}M_{1,0} - D_0 D_1 M_{0,0}M_{1,1}} \right) \end{align*} We traverse them one by one. We remind the reader that since $\tfrac {M_{0,1}}{M_{0,0}} > \tfrac {D_0}{D_1} > \tfrac {M_{1,1}}{M_{1,0}}$, it follows that $M_{0,0}M_{1,1} < M_{0,1}M_{1,0}$. We repeatedly use this inequality in the analysis below. \ifx \fullversion\undefined \paragraph{Conditions under which $B$'s BNE strategy is $P_1$.} \else \subsubsection{Conditions under which $B$'s BNE strategy is $P_1$.} \fi Observe that in this case $B$ never sends the $\hat\ensuremath{t}=0$ signal. $A$'s best response is naturally to opt-out, but $A$ still commits to certain values of $x_0$ and $x_1$ (to prevent $B$ from deviating from the $(0,1)$ strategy). $B$'s utility functions are \begin{eqnarray*} \textrm{For } \ensuremath{t}=0 : && p(\rho_0 - x_0M_{0,0}+x_1 M_{1,0}) \cr \textrm{For } \ensuremath{t}=1 : && q\rho_1+(1-q)(x_0M_{0,1}-x_1 M_{1,1}) \end{eqnarray*} So $A$ should set $x_0$ and $x_1$ s.t. $\rho_0 \leq x_0M_{0,0}-x_1M_{1,0}$ and $\rho_1 \geq x_0 M_{0,1} - x_1 M_{1,1}$.
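This feasibility question can be probed by brute force. The Python sketch below (a hypothetical grid search over illustrative parameter values, not part of the paper's argument) checks whether the two constraints on $x_0,x_1$ can be met, and compares the answer with the closed-form condition $\rho_0 \leq M_{0,0}$ and $\rho_0 M_{0,1} \leq \rho_1 M_{0,0}$ established in the proposition below.

```python
# Grid-search check of feasibility for the BNE at P1 = (0,1): do there exist
# x0, x1 in [0,1] with rho0 <= x0*M00 - x1*M10 and rho1 >= x0*M01 - x1*M11?
# Illustrative parameters satisfying M01*M10 > M00*M11.
M00, M01, M10, M11 = 1.0, 2.0, 2.0, 1.0

def feasible(rho0, rho1, n=201, tol=1e-9):
    pts = [i / (n - 1) for i in range(n)]
    if 0 <= rho0 / M00 <= 1:
        pts.append(rho0 / M00)  # the analytic witness (x0, x1) = (rho0/M00, 0)
    return any(
        rho0 <= x0 * M00 - x1 * M10 + tol and rho1 >= x0 * M01 - x1 * M11 - tol
        for x0 in pts for x1 in pts
    )

def condition(rho0, rho1):
    # The closed-form characterization: rho0 <= M00 and rho0*M01 <= rho1*M00.
    return rho0 <= M00 and rho0 * M01 <= rho1 * M00

# The brute-force answer agrees with the closed-form condition on all samples.
assert feasible(0.5, 1.5) and condition(0.5, 1.5)          # both hold
assert not feasible(1.5, 4.0) and not condition(1.5, 4.0)  # rho0 > M00
assert not feasible(0.9, 1.0) and not condition(0.9, 1.0)  # rho0*M01 > rho1*M00
print("grid search matches the closed-form condition on all three samples")
```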
\begin{proposition} \label{clm:conditions_for_P1} There exist $x_0, x_1\in[0,1]$ satisfying both $\rho_0 \leq x_0M_{0,0}-x_1M_{1,0}$ and $\rho_1 \geq x_0 M_{0,1} - x_1 M_{1,1}$ iff $\rho_0 \leq M_{0,0}$ and $\rho_0 M_{0,1} \leq \rho_1 M_{0,0}$. \end{proposition} \begin{proof} To see that these conditions are sufficient, assume that $\rho_0 \leq M_{0,0}$ and $\rho_0 M_{0,1} \leq \rho_1 M_{0,0}$. Then we can set $x_0 = \tfrac {\rho_0}{M_{0,0}}$ and $x_1=0$. Clearly, both lie on the $[0,1]$-interval. We can check and see that indeed $\rho_0 \leq \tfrac {\rho_0} {M_{0,0}} M_{0,0} - 0 \cdot M_{1,0}$ and $\rho_1 \geq \tfrac {\rho_0}{M_{0,0}} M_{0,1} - 0\cdot M_{1,1}$. We now show these conditions are necessary. Suppose that $\rho_0 > M_{0,0}$, then observe that any $x_0, x_1$ satisfying the two constraints must satisfy $0\leq x_1M_{1,0} \leq x_0 M_{0,0} - \rho_0$, so $x_0 \geq \tfrac{\rho_0}{M_{0,0}}$. As a result of our assumption, we have that $x_0 > 1$. Contradiction. So assume now that $\rho_0 \leq M_{0,0}$ yet $\rho_0 M_{0,1} > \rho_1M_{0,0}$. Any $x_0, x_1$ satisfying the two constraints must also satisfy \[ x_0 \tfrac {M_{0,1}}{M_{1,1}} -\tfrac {\rho_1}{M_{1,1}} \leq x_1 \leq x_0 \tfrac {M_{0,0}}{M_{1,0}} - \tfrac {\rho_0}{M_{1,0}}\] which, using our assumption, yields \[ x_0 \tfrac {M_{0,1}M_{1,0} - M_{0,0}M_{1,1}}{M_{1,1}M_{1,0}} \leq \tfrac {\rho_1}{M_{1,1}} - \tfrac {\rho_0}{M_{1,0}} < \rho_0 \left( \tfrac {M_{0,1}} {M_{0,0}M_{1,1}} - \tfrac 1 {M_{1,0}} \right) = \rho_0 \tfrac {M_{1,0}M_{0,1}-M_{0,0}M_{1,1}} {M_{0,0}M_{1,0}M_{1,1}} \] so we have $x_0 < \tfrac {\rho_0}{M_{0,0}}$. Contradiction. \end{proof} \ifx\fullversion\undefined \paragraph{Conditions under which $B$'s BNE strategy is $P_2$.} \else \subsubsection{Conditions under which $B$'s BNE strategy is $P_2$.} \fi As a response to this strategy, $A$'s best response is to set $x_1=y_0=0$ and $x_0=1$, while $y_1$ is to be determined. 
So $B$'s utility functions are \begin{eqnarray*} \textrm{For } \ensuremath{t}=0 : && p(\rho_0 - M_{0,0})+(1-p)y_1M_{1,0} \cr \textrm{For } \ensuremath{t}=1 : && q(\rho_1-y_1M_{1,1})+(1-q) M_{0,1} \end{eqnarray*} Therefore, in order for $B$ to not have any incentive to deviate, $A$ should set $y_1$ s.t. $\rho_0-M_{0,0}= y_1 M_{1,0}$ and $\rho_1-y_1M_{1,1} \geq M_{0,1}$. \begin{proposition} \label{clm:conditions_for_P2} There exists a $y_1\in [0,1]$ satisfying both $\rho_0-M_{0,0}= y_1 M_{1,0}$ and $\rho_1-y_1M_{1,1} \geq M_{0,1}$ iff $M_{0,0}\leq \rho_0 \leq M_{0,0}+M_{1,0}$ and $(\rho_0 -M_{0,0}){M_{1,1}} \leq \left(\rho_1-M_{0,1}\right){M_{1,0}}$. \end{proposition} \begin{proof} Clearly, the only $y_1$ that can satisfy both constraints is $y_1 = \tfrac {\rho_0-M_{0,0}}{M_{1,0}}$, and we therefore must have that $M_{0,0} \leq \rho_0 \leq M_{0,0}+M_{1,0}$. We also need to verify that indeed the inequality holds in the right direction. I.e., to have $(\rho_0 -M_{0,0})\tfrac {M_{1,1}}{M_{1,0}} \leq \rho_1-M_{0,1}$. Clearly, if those two conditions hold then $y_1$ defined as above satisfies the requirements. \end{proof} \newtheorem{observation}[theorem]{Observation} \begin{observation} If we have that $(\rho_0 -M_{0,0}){M_{1,1}} \leq \left(\rho_1-M_{0,1}\right){M_{1,0}}$ then also $\tfrac {\rho_0}{\rho_1} < \tfrac {M_{1,0}} {M_{1,1}}$. \end{observation} \begin{proof} $\rho_0M_{1,1} - M_{0,0}M_{1,1} \leq \rho_1M_{1,0}-M_{0,1}M_{1,0} < \rho_1M_{1,0} - M_{0,0}M_{1,1} \Rightarrow \rho_0 M_{1,1} < \rho_1M_{1,0}$. \end{proof} \ifx\fullversion\undefined \paragraph{Conditions under which $B$'s BNE strategy is $P_3$.} \else \subsubsection{Conditions under which $B$'s BNE strategy is $P_3$.} \fi Should $B$ play $(1,0)$, then we have that $A$ only sees the $\hat\ensuremath{t}=0$ signal and always opts out (i.e. $x_0=x_1=0$). However, in order to prevent $B$ from deviating, $A$ needs to commit to $y_0, y_1$ that leave $B$ preferring not to deviate from $(1,0)$.
$B$'s utility functions are \begin{eqnarray*} \textrm{For } \ensuremath{t}=0 : && p\rho_0 + (1-p)(-y_0M_{0,0}+y_1M_{1,0}) \cr \textrm{For } \ensuremath{t}=1 : && q(\rho_1 +y_0 M_{0,1}- y_1M_{1,1}) \end{eqnarray*} Therefore, in order for $B$ to not have any incentive to deviate, $A$ should set $y_0,y_1$ s.t. $\rho_0\geq -y_0M_{0,0} + y_1 M_{1,0}$ and $\rho_1+y_0M_{0,1}-y_1M_{1,1}\leq 0$. \begin{proposition} \label{clm:conditions_for_P3} There exist $y_0, y_1\in[0,1]$ satisfying both $\rho_0\geq -y_0M_{0,0} + y_1 M_{1,0}$ and $\rho_1\leq -y_0M_{0,1}+y_1M_{1,1}$ iff $\rho_1 \leq M_{1,1}$ and $\rho_0 M_{1,1} \geq \rho_1 M_{1,0}$. \end{proposition} The proof is completely analogous to the proof of Proposition~\ref{clm:conditions_for_P1}. \cut{ \begin{proof} To see that these conditions are sufficient, assume that $\rho_1 \leq M_{1,1}$ and $\rho_0 M_{1,1} \geq \rho_1 M_{1,0}$. Then we can set $y_1 = \tfrac {\rho_1}{M_{1,1}}$ and $y_0=0$. Clearly, both $y_0,y_1 \in [0,1]$. We can check and see that indeed $\rho_1 \leq \tfrac {\rho_1} {M_{1,1}} M_{1,1} - 0 \cdot M_{0,1}$ and $\rho_0 \geq \tfrac {\rho_1}{M_{1,1}} M_{1,0} - 0\cdot M_{0,0}$. We now show these conditions are necessary. Suppose that $\rho_1 > M_{1,1}$, then observe that any $y_0, y_1$ satisfying the two constraints must satisfy $0\leq y_0M_{0,1} \leq y_1 M_{1,1} - \rho_1$, so $y_1 \geq \tfrac{\rho_1}{M_{1,1}}$. As a result of our assumption, we have that $y_1 > 1$. Contradiction. So assume now that $\rho_1 \leq M_{1,1}$ yet $\rho_0 M_{1,1} < \rho_1M_{1,0}$.
Any $y_0, y_1$ satisfying the two constraints must also satisfy \[ y_1 \tfrac {M_{1,0}}{M_{0,0}} -\tfrac {\rho_0}{M_{0,0}} \leq y_0 \leq y_1 \tfrac {M_{1,1}}{M_{0,1}} - \tfrac {\rho_1}{M_{0,1}}\] which, using our assumption, yields \[ y_1 \tfrac {M_{0,1}M_{1,0} - M_{0,0}M_{1,1}}{M_{0,0}M_{0,1}} \leq \tfrac {\rho_0}{M_{0,0}} - \tfrac {\rho_1}{M_{0,1}} < \rho_1 \left( \tfrac {M_{1,0}} {M_{0,0}M_{1,1}} - \tfrac 1 {M_{0,1}} \right) = \rho_1 \tfrac {M_{1,0}M_{0,1}-M_{0,0}M_{1,1}} {M_{0,0}M_{0,1}M_{1,1}} \] so we have $y_1 < \tfrac {\rho_1}{M_{1,1}}$. Contradiction. \end{proof} } \ifx\fullversion\undefined \paragraph{Conditions under which $B$'s BNE strategy is $P_4$.} \else \subsubsection{Conditions under which $B$'s BNE strategy is $P_4$.} \fi As a response to this strategy, $A$'s best response is to set $x_1=y_0=0$ and $y_1=1$, while $x_0$ is to be determined. So $B$'s utility functions are \begin{eqnarray*} \textrm{For } \ensuremath{t}=0 : && p(\rho_0 - x_0M_{0,0})+(1-p)M_{1,0} \cr \textrm{For } \ensuremath{t}=1 : && q(\rho_1-M_{1,1})+(1-q)x_0 M_{0,1} \end{eqnarray*} Therefore, in order for $B$ to not have any incentive to deviate, $A$ should set $x_0$ s.t. $\rho_0-x_0M_{0,0}\geq M_{1,0}$ and $\rho_1-M_{1,1} = x_0M_{0,1}$. \begin{proposition} \label{clm:conditions_for_P4} There exists an $x_0\in [0,1]$ satisfying both $\rho_0-x_0M_{0,0}\geq M_{1,0}$ and $\rho_1-M_{1,1} = x_0M_{0,1}$ iff $M_{1,1}\leq \rho_1 \leq M_{1,1}+M_{0,1}$ and $\left(\rho_1-M_{1,1}\right)M_{0,0} \leq \left(\rho_0-M_{1,0}\right)M_{0,1}$. \end{proposition} The proof is analogous to the proof of Proposition~\ref{clm:conditions_for_P2}. \cut{ \begin{proof} Clearly, the only $x_0$ that can satisfy both constraints is $x_0 = \tfrac {\rho_1-M_{1,1}}{M_{0,1}}$, and we therefore must have that $M_{1,1} \leq \rho_1 \leq M_{1,1}+M_{0,1}$. We also need to verify that indeed the inequality holds in the right direction. I.e., to have $(\rho_1 -M_{1,1})\tfrac {M_{0,0}}{M_{0,1}} \leq \rho_0-M_{1,0}$.
These two conditions are equivalent to having the above-defined $x_0$ lie in the $[0,1]$-interval and satisfy both constraints. \end{proof} \begin{observation} If we have $\left(\rho_1-M_{1,1}\right)M_{0,0} \leq \left(\rho_0-M_{1,0}\right)M_{0,1}$ then also $\rho_1 M_{0,0} < \rho_0 M_{0,1}$. \end{observation} \begin{proof} $\rho_1 M_{0,0} - M_{0,0}M_{1,1} \leq \rho_0 M_{0,1} - M_{0,1}M_{1,0} < \rho_0 M_{0,1}-M_{0,0}M_{1,1} \Rightarrow \rho_1 M_{0,0} < \rho_0 M_{0,1}$. \end{proof} } \ifx\fullversion\undefined \paragraph{Conditions under which $B$'s BNE strategy is $P_5$.} \else \subsubsection{Conditions under which $B$'s BNE strategy is $P_5$.} \fi As this point lies on the intersection of $l_1$ and $l_2$, $A$'s best response to this strategy is to set $x_1=y_0=0$. Thus, $B$'s utility functions are \begin{eqnarray*} \textrm{For } \ensuremath{t}=0 : && p(\rho_0 - x_0M_{0,0})+(1-p)y_1M_{1,0} \cr \textrm{For } \ensuremath{t}=1 : && q(\rho_1-y_1M_{1,1})+(1-q)x_0 M_{0,1} \end{eqnarray*} It is therefore up to $A$ to pick $x_0$ and $y_1$ that satisfy both equalities $\begin{pmatrix} \rho_0 \cr \rho_1\end{pmatrix} = \begin{pmatrix} M_{0,0} & M_{1,0} \cr M_{0,1} & M_{1,1} \end{pmatrix} \begin{pmatrix}x_0 \cr y_1 \end{pmatrix}$ Cramer's rule gives that the solution to this system is \begin{equation} \begin{pmatrix} x_0 \cr y_1 \end{pmatrix} = {\displaystyle \frac 1 {M_{1,0}M_{0,1} - M_{0,0}M_{1,1}}} \begin{pmatrix} -M_{1,1} &M_{1,0} \cr M_{0,1} &-M_{0,0} \end{pmatrix}\begin{pmatrix} \rho_0 \cr \rho_1 \end{pmatrix}\label{eq:A_NE_strategy_P5}\end{equation} In order for $x_0,y_1$ to be in the range $[0,1]$, we therefore must have that: (i) $\rho_0 M_{1,1} \leq \rho_1 M_{1,0}$, (ii) $(\rho_1-M_{0,1})M_{1,0} \leq (\rho_0-M_{0,0})M_{1,1} $, (iii) $\rho_1M_{0,0} \leq \rho_0M_{0,1}$, and (iv) $(\rho_0-M_{1,0})M_{0,1} \leq (\rho_1-M_{1,1})M_{0,0}$.
In other words: \begin{align} \label{eq:conditions_for_P5} & 0 \leq \rho_1M_{1,0} - \rho_0M_{1,1} \leq M_{0,1}M_{1,0}-M_{0,0}M_{1,1} \cr & 0 \leq \rho_0M_{0,1} - \rho_1M_{0,0} \leq M_{0,1}M_{1,0}-M_{0,0}M_{1,1} \end{align} \ifx\fullversion\undefined \paragraph{Summarizing.} Below we give the various conditions under which each of the points may be a BNE: \begin{table}[hbt] \centering \begin{tabular}{ | c | c | c | c |} \hline Case & Condition & $A$'s Strategy & $B$'s strategy \cr No.& & (always: $x_1=y_0=0$) &\cr \hline $1$& $\rho_0 \geq M_{0,0}+M_{1,0}$ and~~~ $\rho_1\geq M_{0,1}+M_{1,1}$ & $(x_0,y_1)=(1,1)$ & $(1,1)$\cr\hline $2$& $\rho_0 \leq M_{0,0}$ and~~~ $\tfrac{\rho_0}{\rho_1}\leq \tfrac{M_{0,0}}{M_{0,1}}$ & $(x_0,y_1) = (\tfrac {\rho_0}{M_{0,0}},0)$ & $(0,1)$ \cr\hline $3$& $\rho_1 \leq M_{1,1}$ and~~~ $\tfrac{\rho_0}{\rho_1} \geq \tfrac{M_{1,0}}{M_{1,1}}$ & $(x_0,y_1)=(0,\tfrac{\rho_1}{M_{1,1}})$ & $(1,0)$ \cr\hline $4$& $0 \leq \rho_0-M_{0,0}\leq M_{1,0}$ & $(x_0,y_1) = (1,\tfrac {\rho_0-M_{0,0}}{M_{1,0}})$ & $P_2$ \cr & $\rho_1M_{1,0}-\rho_0M_{1,1} \geq M_{0,1}M_{1,0} -M_{0,0}{M_{1,1}}$ & &\cr\hline $5$& $0 \leq \rho_1 - M_{1,1} \leq M_{0,1}$ & $(x_0,y_1)=(\tfrac {\rho_1-M_{1,1}}{M_{0,1}},1)$ & $P_4$ \cr & $\rho_0M_{0,1}-\rho_1M_{0,0} \geq M_{0,1}M_{1,0}-M_{0,0}M_{1,1}$ & &\cr\hline $6$& $0 \leq \rho_1M_{1,0} - \rho_0M_{1,1} \leq M_{0,1}M_{1,0}-M_{0,0}M_{1,1}$ & See Eq.~\eqref{eq:A_NE_strategy_P5}& $P_5$ \cr & $0 \leq \rho_0M_{0,1} - \rho_1M_{0,0} \leq M_{0,1}M_{1,0}-M_{0,0}M_{1,1}$ & & \cr\hline \end{tabular} \caption{\label{tab:Conditions_for_NE}The various conditions under which any of the isolated points are BNE.
Recall: $P_2 = (1 - \frac {D_1M_{1,1}}{D_0 M_{1,0}} , 1)$ is the intersection point of the $l_2$-line and the $q=1$ line, $P_4 = (1, 1-\frac {D_0 M_{0,0}}{D_1M_{0,1}})$ is the intersection point of the $l_1$ line and the $p=1$ line, and $P_5 = \left( \frac {D_0D_1M_{0,1}M_{1,0} - D_1^2 M_{0,1}M_{1,1}} {D_0D_1 M_{0,1}M_{1,0} - D_0 D_1 M_{0,0}M_{1,1}} , \frac {D_0D_1M_{0,1}M_{1,0} - D_0^2 M_{0,0}M_{1,0}} {D_0D_1 M_{0,1}M_{1,0} - D_0 D_1 M_{0,0}M_{1,1}} \right)$ is at the intersection of the $l_1$- and $l_2$-lines. Recall also that $M_{0,1}M_{1,0}>M_{0,0}M_{1,1}$.} \end{table} \else \subsubsection{Summarizing.} Table~\ref{tab:Conditions_for_NE} summarizes the $6$ conditions under which each point is a BNE. \fi It is easy to verify that the condition of Theorem~\ref{thm:mixed_BNE_under_condition} is precisely condition $6$, under which $B$'s BNE strategy is unique (the point $P_5$) and mixed. \subsection{Proof of Theorem~\ref{thm:NE_which_is_RR}: The Uniqueness of $B$'s BNE Strategy} \label{subsec:characterizing_NE} So far we have introduced conditions for the existence of various BNEs. In this section, our goal is to show that the above analysis gives a complete description of the game. That is, to show that the cases detailed in Table~\ref{tab:Conditions_for_NE} span all potential values the parameters of the game may take, and furthermore (modulo cases of equality between parameters) they are also mutually exclusive. \begin{lemma} \label{lem:conditions_are_exclusive} Assume that the parameters of the game (i.e. $\rho_0,\rho_1$ and the entries of $M$) satisfy one of the $6$ conditions detailed in Table~\ref{tab:Conditions_for_NE} with strict inequalities. Then no other condition in Table~\ref{tab:Conditions_for_NE} holds simultaneously. In other words, the conditions in Table~\ref{tab:Conditions_for_NE} are mutually exclusive (excluding equalities).
As the conditions are mutually exclusive it means that under the condition specified in Theorem~\ref{thm:mixed_BNE_under_condition}, the game has a unique BNE -- as specified by case $6$ in Table~\ref{tab:Conditions_for_NE}. \end{lemma} \begin{proof} We traverse the $6$ cases, showing that if case $i$ holds with strict inequalities then no other case $j>i$ can hold. \begin{description} \item [Case $1$.] Clearly, if the conditions of case $1$ hold, then the conditions of cases $2,3,4$ and $5$ cannot hold. To see that the conditions of case $6$ cannot hold, we argue that the condition $\max\{\rho_1M_{1,0} - \rho_0M_{1,1}, \rho_0M_{0,1} - \rho_1M_{0,0}\} \leq M_{0,1}M_{1,0}-M_{0,0}M_{1,1}$ implies that both $\rho_0 \leq M_{0,0}+M_{1,0}$ and $\rho_1\leq M_{0,1}+M_{1,1}$. This claim follows from the inequalities \begin{eqnarray*} \rho_0 (M_{0,1}M_{1,0}-M_{0,0}M_{1,1} ) &&= M_{0,0} (\rho_1M_{1,0} - \rho_0M_{1,1}) + M_{1,0}(\rho_0M_{0,1} - \rho_1M_{0,0}) \cr &&\leq (M_{0,0}+M_{1,0})(M_{0,1}M_{1,0}-M_{0,0}M_{1,1}) \cr \rho_1 (M_{0,1}M_{1,0}-M_{0,0}M_{1,1} ) &&= M_{0,1} (\rho_1M_{1,0} - \rho_0M_{1,1}) + M_{1,1}(\rho_0M_{0,1} - \rho_1M_{0,0}) \cr &&\leq (M_{0,1}+M_{1,1})(M_{0,1}M_{1,0}-M_{0,0}M_{1,1}) \end{eqnarray*} \item [Case $2$.] Clearly, the conditions of case $2$ cannot hold simultaneously with the conditions of cases $4$ and $6$. To exclude the other cases, observe that using our favorite inequality $\tfrac {M_{0,0}}{M_{0,1}} < \tfrac {M_{1,0}}{M_{1,1}}$, we have that the condition $\tfrac{\rho_0}{\rho_1} \leq \tfrac {M_{0,0}}{M_{0,1}}$ implies that $\tfrac {\rho_0}{\rho_1} < \tfrac {M_{1,0}}{M_{1,1}}$. Hence case $3$ cannot hold, and neither can case $5$ (using again the fact that $M_{0,1}M_{1,0}>M_{0,0}M_{1,1}$). \item [Case $3$.] This case is symmetric to case $2$ --- since $M_{0,1}M_{1,0} - M_{0,0}M_{1,1}>0$ then case $3$ rules out case $4$ (and the fact it cannot hold simultaneously with cases $5$ and $6$ is obvious). \item [Case $4$.]
Clearly, case $6$ cannot hold together with case $4$. To show that case $5$ cannot hold either, we claim that if both $\rho_1M_{1,0}-\rho_0M_{1,1} \geq M_{0,1}M_{1,0} -M_{0,0}M_{1,1}$ and $\rho_0M_{0,1}-\rho_1M_{0,0} \geq M_{0,1}M_{1,0}-M_{0,0}M_{1,1}$ hold, then $\rho_0 \geq M_{0,0}+M_{1,0}$. This holds because the two inequalities imply \begin{eqnarray*} && \rho_1 \leq ( \rho_0-M_{1,0} )\tfrac {M_{0,1}}{M_{0,0}} + M_{1,1} \textrm{ and } \rho_1 \geq (\rho_0-M_{0,0})\tfrac{M_{1,1}}{M_{1,0}} + M_{0,1} \cr & \Rightarrow &\rho_0 \left( \tfrac{M_{0,1}}{M_{0,0}} - \tfrac {M_{1,1}}{M_{1,0}} \right) \geq M_{0,1}-M_{1,1} + \tfrac {M_{1,0}M_{0,1}}{M_{0,0}} - \tfrac { M_{0,0}M_{1,1} }{M_{1,0}} \cr & \Rightarrow & \rho_0 \geq \frac { M_{0,0}M_{0,1}M_{1,0} - M_{0,0}M_{1,0}M_{1,1} + M_{0,1}M_{1,0}^2 - M_{0,0}^2M_{1,1} }{ M_{0,1}M_{1,0} - M_{0,0}M_{1,1}} = M_{0,0}+M_{1,0} \end{eqnarray*} \item [Case $5$.] Clearly, cases $5$ and $6$ cannot hold simultaneously. \end{description} \end{proof} \begin{lemma} \label{lem:conditions_are covering} Any choice of parameters for $\rho_0,\rho_1$ and the entries of $M$ satisfies at least one of the $6$ cases detailed in Table~\ref{tab:Conditions_for_NE}. \end{lemma} \begin{proof} First, suppose $\rho_0 \geq M_{0,0}+M_{1,0}$. We claim that in this case, the value of $\rho_1$ determines which case holds. \begin{itemize} \item If $\rho_1 \leq M_{1,1}$ then case $3$ holds, since obviously $ M_{1,1}\tfrac{\rho_0}{M_{1,0}} > M_{1,1} \geq \rho_1$. \item If $M_{1,1} < \rho_1 \leq M_{0,1}+M_{1,1}$ then case $5$ holds since \[\rho_0 M_{0,1} - \rho_1 M_{0,0} \geq (M_{0,0}+M_{1,0})M_{0,1}-\rho_1 M_{0,0} = M_{0,1}M_{1,0} + M_{0,0}(M_{0,1}-\rho_1) \geq M_{0,1}M_{1,0}-M_{0,0}M_{1,1}\] \item If $\rho_1 > M_{0,1}+M_{1,1}$ then clearly case $1$ holds. \end{itemize} Similarly, if we have that $\rho_1 \geq M_{0,1}+M_{1,1}$, then the value of $\rho_0$ determines whether case $2$, $4$, or $1$ holds.
We therefore assume from now on that $\rho_0 < M_{0,0}+M_{1,0}$ and $\rho_1 < M_{0,1}+M_{1,1}$. Suppose that $\tfrac {\rho_0}{\rho_1} \leq \tfrac{M_{0,0}}{M_{0,1}}$. \begin{itemize} \item If $\rho_0 \leq M_{0,0}$ then clearly case $2$ holds. \item If $\rho_0 \geq M_{0,0}$ then we show that case $4$ holds. Observe that $\tfrac {\rho_1}{\rho_0} - \tfrac{M_{1,1}}{M_{1,0}} \geq \tfrac {M_{0,1}}{M_{0,0}} - \tfrac{M_{1,1}}{M_{1,0}}$, so $\tfrac{ \rho_1M_{1,0} - \rho_0M_{1,1}}{\rho_0 M_{1,0}} \geq \tfrac{M_{0,1}M_{1,0} - M_{0,0}M_{1,1}}{M_{0,0}M_{1,0}}$. We conclude that $\rho_1M_{1,0} - \rho_0M_{1,1} \geq \tfrac {\rho_0}{M_{0,0}} (M_{0,1}M_{1,0}-M_{0,0}M_{1,1})$. So the fact that $\rho_0 \geq M_{0,0}$ implies that the conditions of case $4$ hold. \end{itemize} Analogously, if we assume that $\tfrac {\rho_0}{\rho_1} \geq \tfrac {M_{1,0}}{M_{1,1}}$, then the same line of argument shows that either case $3$ or case $5$ holds. So now, we assume that $\rho_0 < M_{0,0}+M_{1,0}$, that $\rho_1 < M_{0,1}+M_{1,1}$, and that $\tfrac {M_{0,0}}{M_{0,1}} < \tfrac {\rho_0}{\rho_1} < \tfrac {M_{1,0}}{M_{1,1}}$. \begin{itemize} \item If $\rho_1 M_{1,0} -\rho_0M_{1,1} \geq M_{0,1}M_{1,0} - M_{0,0}M_{1,1}$, we argue that case $4$ holds. This is because we have both that $\rho_1 < \rho_0 \tfrac{M_{0,1}}{M_{0,0}}$ and that $\rho_1 \geq \rho_0 \tfrac{M_{1,1}}{M_{1,0}} +M_{0,1} - \tfrac {M_{0,0}M_{1,1}}{M_{1,0}}$. Combining the two we get \[ \rho_0\left( \tfrac {M_{0,1}}{M_{0,0}} - \tfrac {M_{1,1}}{M_{1,0}}\right) > \tfrac {M_{0,1}M_{1,0}-M_{0,0}M_{1,1}}{M_{1,0}} ~~\Rightarrow ~~ \rho_0 > M_{0,0}\] \item If $\rho_0 M_{0,1} - \rho_1M_{0,0} \geq M_{0,1}M_{1,0} - M_{0,0}M_{1,1}$ then we are in the analogous case, and we can show, using the inequality $\tfrac {\rho_0}{\rho_1} < \tfrac {M_{1,0}}{M_{1,1}}$, that $\rho_1 > M_{1,1}$.
\end{itemize} This leaves us with the case that $\rho_0 < M_{0,0}+M_{1,0}$, $\rho_1 < M_{0,1}+M_{1,1}$, $\tfrac {M_{0,0}}{M_{0,1}} < \tfrac {\rho_0}{\rho_1} < \tfrac {M_{1,0}}{M_{1,1}}$ and also $\rho_1 M_{1,0} -\rho_0M_{1,1} < M_{0,1}M_{1,0} - M_{0,0}M_{1,1}$ and $\rho_0 M_{0,1} - \rho_1M_{0,0} < M_{0,1}M_{1,0} - M_{0,0}M_{1,1}$. This is precisely case $6$. \end{proof} } \ifx \cameraready \undefined Recall that, in addition to the conditions specifically stated in Case $6$ in Table~\ref{tab:Conditions_for_NE}, we also require that $D_0^2M_{0,0}M_{1,0} = D_1^2M_{0,1}M_{1,1}$ in order for the two types of agent $B$ to play Randomized Response. In other words, this condition implies that $B$'s BNE strategy, represented by the point \[P_5=\big( \frac {D_0D_1M_{0,1}M_{1,0} - D_1^2 M_{0,1}M_{1,1}} {D_0D_1 M_{0,1}M_{1,0} - D_0 D_1 M_{0,0}M_{1,1}},\frac {D_0D_1M_{0,1}M_{1,0} - D_0^2 M_{0,0}M_{1,0}} {D_0D_1 M_{0,1}M_{1,0} - D_0 D_1 M_{0,0}M_{1,1}} \big)\] lies on the $p=q$ line. \cut{ \begin{proposition} \label{clm:conditions_for_RR} In a BNE of Case $6$, where $B$ plays a strictly randomized strategy (i.e. $p,q \in (0,1)$ ), we have that $p=q$ iff $\tfrac {D_0 M_{1,0}}{D_1M_{1,1}} = \tfrac {D_1 M_{0,1}}{D_0 M_{0,0}}$. \end{proposition} \begin{proof} The coordinates of $P_5$ are $\left( \frac {D_0D_1M_{0,1}M_{1,0} - D_1^2 M_{0,1}M_{1,1}} {D_0D_1 M_{0,1}M_{1,0} - D_0 D_1 M_{0,0}M_{1,1}} , \frac {D_0D_1M_{0,1}M_{1,0} - D_0^2 M_{0,0}M_{1,0}} {D_0D_1 M_{0,1}M_{1,0} - D_0 D_1 M_{0,0}M_{1,1}} \right)$, so the proof follows immediately. \end{proof} } And so, in this case the $B$ agent plays a Randomized Response strategy that preserves $\epsilon$-differential privacy for $\epsilon = \ln(\tfrac p {1-q}) =\ln\left( \tfrac {D_1M_{0,1}} {D_0M_{0,0}}\right)$. Observe that this value of $\epsilon$ is \emph{independent} of the value of the coupon (i.e., of $\rho_0$ and $\rho_1$).
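As a quick numerical sanity check of the closed-form expressions above, the following sketch computes $P_5$ and the induced privacy parameter; the concrete parameter values are ours, chosen only so that $M_{0,1}M_{1,0}>M_{0,0}M_{1,1}$ and $D_0^2M_{0,0}M_{1,0}=D_1^2M_{0,1}M_{1,1}$ hold.

```python
import math

def bne_point_P5(D0, D1, M00, M01, M10, M11):
    """Coordinates (p, q) of P5 from the closed-form expression above."""
    den = D0 * D1 * M01 * M10 - D0 * D1 * M00 * M11
    p = (D0 * D1 * M01 * M10 - D1 ** 2 * M01 * M11) / den
    q = (D0 * D1 * M01 * M10 - D0 ** 2 * M00 * M10) / den
    return p, q

# Illustrative parameters with M01*M10 > M00*M11 and D0^2*M00*M10 = D1^2*M01*M11.
D0 = D1 = 0.5
M00, M01, M10, M11 = 1.0, 2.0, 4.0, 2.0
p, q = bne_point_P5(D0, D1, M00, M01, M10, M11)
assert abs(p - q) < 1e-12                        # Randomized Response: p = q
eps = math.log(p / (1 - q))                      # eps = ln(p / (1 - q))
assert abs(eps - math.log(D1 * M01 / (D0 * M00))) < 1e-12
```

With these values one gets $p=q=\tfrac{2}{3}$ and $\epsilon=\ln 2$, matching $\epsilon=\ln(D_1M_{0,1}/(D_0M_{0,0}))$.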
This is due to the nature of BNE, in which an agent plays her Nash strategy in order to make her opponent indifferent between various strategies, rather than to maximize her own utility. Therefore, the coordinates of $P_5$ are such that they make agent $A$ indifferent between opting out and playing $x_0=1$ (or opting out and $y_1=1$). Since the utility function of $A$ is independent of $\rho_0,\rho_1$, we have that perturbing the values of $\rho_0,\rho_1$ does not affect the coordinates of $P_5$. (Yet, perturbing the values of $\rho_0,\rho_1$ does affect the various relations between the parameters of the game, and so it may determine which of the $6$ cases in Table~\ref{tab:Conditions_for_NE} holds.) \else Recall that, in addition to the conditions specifically stated in Equation~\eqref{eq:condition_of_randomized_BNE}, we also require that $D_0^2M_{0,0}M_{1,0} = D_1^2M_{0,1}M_{1,1}$ in order for the two types of agent $B$ to play Randomized Response. In other words, the feasibility condition in Equation~\eqref{eq:condition_of_randomized_BNE} implies that $B$'s BNE strategy, denoted by $p^* = {\bf Pr}[\sigma_B^*(0)=0]$ and $q^* = {\bf Pr}[\sigma_B^*(1)=1]$, is given by \[(p^*,q^*)=\big( \frac {D_0D_1M_{0,1}M_{1,0} - D_1^2 M_{0,1}M_{1,1}} {D_0D_1 M_{0,1}M_{1,0} - D_0 D_1 M_{0,0}M_{1,1}},\frac {D_0D_1M_{0,1}M_{1,0} - D_0^2 M_{0,0}M_{1,0}} {D_0D_1 M_{0,1}M_{1,0} - D_0 D_1 M_{0,0}M_{1,1}} \big)\] The additional condition $D_0^2M_{0,0}M_{1,0} = D_1^2M_{0,1}M_{1,1}$ therefore implies that $p^*=q^*$. And so, in this case the $B$ agent plays a Randomized Response strategy that preserves $\epsilon$-differential privacy for $\epsilon = \ln(\tfrac {p^*} {1-{q^*}}) =\ln\left( \tfrac {D_1M_{0,1}} {D_0M_{0,0}}\right)$. Observe that this value of $\epsilon$ is \emph{independent} of the value of the coupon (i.e., of $\rho_0$ and $\rho_1$).
This is due to the nature of BNE, in which an agent plays her Nash strategy in order to make her opponent indifferent between various strategies, rather than to maximize her own utility. Therefore, the coordinates $(p^*,q^*)$ are such that they make agent $A$ indifferent between several pure strategies. And since the utility function of $A$ is independent of $\rho_0,\rho_1$, we have that perturbing the values of $\rho_0,\rho_1$ does not affect the coordinates $(p^*,q^*)$. (Yet, perturbing the values of $\rho_0,\rho_1$ does affect the various relations between the parameters of the game, and so it may determine which of the $6$ feasibility conditions in fact holds.) \fi \subsection{Proof of Theorem~\ref{thm:NE_which_is_RR}: Finding a BNE Strategy for $B$} \apxOptOut \ifx \cameraready \undefined \section{Conclusions and Future Directions} \label{sec:conclusions} Our work is a first attempt at exposing and reconciling the competing conclusions of two different approaches to the same challenge: the theory of privacy-aware agents (where privacy loss is modeled using differential privacy), and the behavior of standard utility-maximizing agents once they explicitly assess future losses from having their behavior in the current game publicly exposed. While the canonical privacy-aware agent randomizes her strategy, we show that different explicit privacy losses cause very different behavior among agents. This is best illustrated with the game studied in Section~\ref{sec:matching_pennies} (Theorem~\ref{thm:matching-pennies-continuous-valuations}). In that game, agents assess their future loss and their behavior is therefore quite simple: if the current gain is greater than the future loss, they truthfully report their type; otherwise they lie and report the opposite type. We believe this simple rule explains real-life phenomena, such as people trying to hide their medical condition from the general public while truthfully answering a doctor's questions.
\footnote{I am likely to gain little and potentially lose a lot from revealing my medical history to a random person, whereas I am likely to gain a lot from truthfully reporting my medical history to a doctor.} Observe however that in all the games we analyzed, we still have not pinned down a game in which the behavior of a non-privacy-aware agent \emph{fully} mimics the behavior of a privacy-aware agent. Privacy-aware agents' behavior is, after a fashion, quite reasonable. They trade off between the value of the coupon they get and the amount of privacy (or change in belief) they are willing to risk. Naturally, the higher the value of the coupon, the more privacy they are willing to risk. In contrast, in the game discussed in Section~\ref{sec:opt-out-possible}, even under settings where $B$'s BNE strategy $\sigma_B^*$ is randomized and satisfies ${\bf Pr}[\sigma_B^*(0)=0]={\bf Pr}[\sigma_B^*(1)=1]$, we do not see a continuous change in $B$'s behavior based on the value of the coupon. Changing solely the value of the coupon while keeping all other parameters the same, we see that $B$ plays the same BNE strategy, whereas $A$'s BNE strategy continuously changes. It would be interesting to pursue this line of work further, by studying more complex games. In particular, we propose the following scenario, which resembles the standard narrative in the differential-privacy literature and should provide a complementary approach to the ``sensitive surveyor'' problem~\cite{GhoshR11,NissimOS12,RothS12,NissimVX14,GhoshLRS14}. Suppose that the signal that $B$ sends is not for a type of coupon that gives $B$ an immediate and fixed reward, but rather a response of $B$ to a survey question. That is, suppose $B$ interacts with a benevolent data curator that wishes to learn the distribution of type-$0$ and type-$1$ agents in the population, and $B$ may benefit from the curator's analysis.
(For example, the data curator may ask people with a certain disease about their exposure to some substance.) In such a case, $B$'s utility is a function of the curator's ability to approximate the true answer well. In addition to the potential gain, there is also potential loss, based on $B$'s concerns about her private information being publicly exposed. What formulation of this privacy loss results in $B$ playing according to a Randomized Response strategy? What explicit formulation of privacy loss causes $B$ to truthfully report her type knowing that $A$ will publish the data using an $\epsilon$-differentially private mechanism? \fi \ifx \fullversion \undefined \fi \section*{Acknowledgments} \ifx \fullversion \undefined \fi We would like to thank Kobbi Nissim for many helpful discussions and for helping us initiate this line of work. \ifx \fullversion \undefined \fi \ifx \fullversion \undefined {\small } \else \fi \ifx \cameraready \undefined \appendix \ifx \fullversion \undefined \spnewtheorem*{apxthm}{Theorem}{\bfseries}{\itshape} \else \newtheorem*{apxthm}{Theorem}{\bfseries}{\itshape} \fi \ifx \fullversion \undefined \section{Privacy Aware Agents.} \label{apx_sec:privacy_aware_agent} \begin{apxthm}[Theorem~\ref{thm:behavior_privacy_aware} restated] \behaviorPrivacyAware \end{apxthm} \PAA \fi \section{Missing Proofs -- Coupon Game with Proper Scoring Rules} \label{apx_sec:proper_scoring_rules} \backgroundProperScoringRules \ifx\fullversion\undefined \subsection{Proof of Theorem~\ref{thm:BNE_scoring_rules}} \begin{apxthm}[Theorem~\ref{thm:BNE_scoring_rules} restated] \BNEScoringRules \end{apxthm} \proofTheoremProperScoringRule \else \specificScoringRules \ifx \fullversion \undefined \section{Missing Proofs -- Coupon Game with Identity Matrix Payments} \label{apx_sec:matching_pennies} \subsection{Proof of Theorem~\ref{thm:coupon_matching_pennies}} \begin{apxthm}[Theorem~\ref{thm:coupon_matching_pennies} restated] \couponMatchingPennies \end{apxthm}
\proofTheoremIdentityMatrix \subsection{Coupon Game with Continuous Coupon Valuations} \label{apx_subsec:continuous_valuations} We now consider the same coupon game with payments given in the form of the identity matrix, but under a different setting. Whereas before we assumed the valuations that the two types of $B$ agents have for the coupon are fixed (and known in advance), we now assume they are not fixed. In this section we assume the existence of a continuous prior over $\rho$, where each type $\ensuremath{t} \in \{0,1\}$ has its own prior, so $\ensuremath{\mathsf{CDF}}_0(x) \stackrel{\mathrm{def}}{=} {\bf Pr}[\rho < x ~|~ t=0]$ with an analogous definition of $\ensuremath{\mathsf{CDF}}_1(x)$. We use $\ensuremath{\mathsf{CDF}}_B$ to denote the cumulative distribution function of the prior over $\rho$ (i.e., $\ensuremath{\mathsf{CDF}}_B(x) = {\bf Pr}[\rho < x] = D_0\ensuremath{\mathsf{CDF}}_0(x)+D_1\ensuremath{\mathsf{CDF}}_1(x)$). We assume that $\ensuremath{\mathsf{CDF}}_B$ is continuous, and so ${\bf Pr}[\rho=y]=0$ for any $y$. Given any $z \geq 0$ we denote by $\ensuremath{\mathsf{CDF}}_B^{-1}(z)$ the set $\{y :~ \ensuremath{\mathsf{CDF}}_B(y)=z\}$. We proceed by proving Theorem~\ref{thm:matching-pennies-continuous-valuations}. \begin{apxthm}[Theorem~\ref{thm:matching-pennies-continuous-valuations} restated] \matchingPenniesContinuousValuations \end{apxthm} \continuousCouponValuations \fi \ifx\fullversion\undefined \section{Missing Proofs -- Coupon Game with an Opt-Out Strategy} \label{apx_sec:opt-out} \begin{apxthm}[Theorem~\ref{thm:NE_which_is_RR} restated] \NEWhichIsRR \end{apxthm} \apxOptOut \fi \fi \end{document}
\begin{document} \title{ itl} \begin{abstract} Learning rates for least-squares regression are typically expressed in terms of $L_2$-norms. In this paper we extend these rates to norms stronger than the $L_2$-norm without requiring the regression function to be contained in the hypothesis space. In the special case of Sobolev reproducing kernel Hilbert spaces used as hypothesis spaces, these stronger norms coincide with fractional Sobolev norms between the used Sobolev space and $L_2$. As a consequence, not only the target function but also some of its derivatives can be estimated without changing the algorithm. From a technical point of view, we combine the well-known integral operator techniques with an embedding property, which so far has only been used in combination with empirical process arguments. This combination results in new finite sample bounds with respect to the stronger norms. From these finite sample bounds our rates easily follow. Finally, we prove the asymptotic optimality of our results in many cases. \end{abstract} \paragraph{Keywords} statistical learning theory, regularized kernel methods, least-squares regression, interpolation norms, uniform convergence, learning rates \section{Introduction}\label{sec:intro} Given a data set $D=\{(x_i,y_i)\}_{i=1}^n$ independently sampled from an unknown distribution $P$ on $X\times Y$, the goal of non-parametric least-squares regression is to estimate the conditional mean function $\optFZ{P}:X\to Y$ given by $\optFZ{P}(x) \coloneqq {\mathbb E}(Y|X=x)$. The function $\optFZ{P}$ is also known as the regression function; we refer to \citet{GyKoKrWa2002} for basic information as well as various algorithms for this problem. In this work, we focus on kernel-based regularized least-squares algorithms, which are also known as least-squares support vector machines (LS-SVMs), see e.g.\ \citet{StCh2008}.
Recall that LS-SVMs construct a predictor $\optRegFD$ by solving the convex optimization problem \begin{equation}\label{eq:intro:optimization_problem} \optRegFD = \argmin_{f\in H} \Bigl\{\lambda\|f\|_H^2 + \frac{1}{n}\sum_{i=1}^n(y_i - f(x_i))^2\Bigr\}\;\;, \end{equation} where a reproducing kernel Hilbert space (RKHS) $H$ over $X$ is used as hypothesis space and $\lambda>0$ is the so-called regularization parameter. For a definition and basic properties of RKHSs see e.g.\ \cites[Chapter~4]{StCh2008}. Probably the most interesting theoretical challenge for this problem is to establish learning rates, either in expectation or in probability, for the generalization error \begin{equation}\label{eq:intro:generalization_error} \|\optRegFD -\optFZ{P}\|\;\;. \end{equation} In this paper, we investigate \eqref{eq:intro:generalization_error} with respect to the norms of a continuous scale of suitable Hilbert spaces $[H]^\gamma$ with $H \subseteq [H]^\gamma \subseteq L_2$ in the \emph{hard learning} scenario $\optFZ{P}\not\in H$. For the sake of simplicity, we assume $[H]^0 = L_2$ and $[H]^1 = H$ for this introduction, see Section~\ref{sec:pre} for an exact definition. Let us briefly compare the two main techniques previously used in the literature to establish learning rates for \eqref{eq:intro:generalization_error}: the \emph{integral operator} technique \citep[see e.g.,][and references therein]{DeCaRo2005,DeRoCaDeOd2005,DeRoCa2006,BaPeRo2007,SmZh2007,CaDe2007,BlM2017,DiFoHs2017, LiRuRoCe2018,LiCe2018} and the \emph{empirical process} technique \citep[see e.g.,][and references therein]{MeNe2010,StCh2008,StHuSc2009}. An advantage of the integral operator technique is that it can provide learning rates for \eqref{eq:intro:generalization_error} with respect to a continuous scale of $\gamma$, including the $L_2$-norm case $\gamma = 0$ \citep[see e.g.,][]{BlM2017,LiRuRoCe2018}. 
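For readers who want to experiment, \eqref{eq:intro:optimization_problem} admits the standard closed-form solution $\optRegFD=\sum_{j=1}^n\alpha_j k(\cdot,x_j)$ with $\alpha=(K+n\lambda I)^{-1}y$, where $K$ is the kernel Gram matrix. A minimal sketch; the Gaussian kernel and the toy data are our illustrative choices, not taken from the paper:

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, lam, sigma=1.0):
    """Minimize lam*||f||_H^2 + (1/n) sum_i (y_i - f(x_i))^2 over the RKHS."""
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)  # alpha = (K + n*lam*I)^{-1} y

def lssvm_predict(X_train, alpha, X_test, sigma=1.0):
    return gaussian_kernel(X_test, X_train, sigma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha = lssvm_fit(X, y, lam=1e-3)
y_hat = lssvm_predict(X, alpha, X)
```

The factor $n\lambda$ in the linear system reflects the $\tfrac{1}{n}$ in front of the empirical squared loss in \eqref{eq:intro:optimization_problem}.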
In addition, it can be used to establish learning rates for \emph{spectral regularization algorithms} \citep[see e.g.,][]{BaPeRo2007,BlM2017,LiRuRoCe2018} and further kernel-based learning algorithms \citep[see e.g.,][]{M2019,LiCe2018a,PiRuBa2018a,MBl2018,MNeRo2019}. On the other hand, the empirical process technique can so far only handle the $L_2$-norm in \eqref{eq:intro:generalization_error}, but in the hard learning scenario $\optFZ{P}\not\in H$, which is rarely investigated by the integral operator technique, it provides the fastest, and in many cases minimax optimal, $L_2$-learning rates for \eqref{eq:intro:generalization_error}, see \cite{StHuSc2009}. This advantage of the empirical process technique in the hard learning scenario is based on the additional consideration of some \emph{embedding property} of the RKHS, which has hardly been considered in combination with the integral operator technique so far. In a nutshell, this embedding property allows for an improved bound on the $L_\infty$-norm of the regularized population predictor. In addition, the empirical process technique can be easily applied to learning algorithms \eqref{eq:intro:optimization_problem} in which the least-squares loss function is replaced by other convex loss functions, see e.g.\ \cite{FaSt2018} for expectile regression and \cite{EbSt2013} for quantile regression. In the present manuscript, which is an improvement of its first version \cite{FiSt2017}, we apply the integral operator technique in combination with some embedding property, see \eqref{eq:res:embedding_property} in Section~\ref{sec:res} below for details, to learning scenarios including the case $\optFZ{P}\not\in H$. Recall that such embedding properties---as far as we know---have only been used by \citet{StHuSc2009}, \citet{DiFoHs2017}, and \citet{PiRuBa2018a}. By doing so, we extend and improve the results of \citet{BlM2017} and \citet{LiRuRoCe2018}.
To be more precise, we extend the results of \cite{BlM2017}, who only considered the case $\optFZ{P}\in H$, to the hard learning case and the largest possible scale of $\gamma$. Moreover, compared to \cite{LiRuRoCe2018} we obtain faster rates of convergence for \eqref{eq:intro:generalization_error}, if the RKHS enjoys a certain embedding property. In the hard learning scenario, we obtain, as a byproduct, the $L_2$-learning rates of \cite{StHuSc2009}, as well as the very first $L_\infty$-norm learning rates in the hard learning scenario. For a more detailed comparison with the literature see Section~\ref{sec:comparison} and in particular Table~\ref{tab:comparison:rates} and Figure~\ref{fig:comparison:L2rates}. Finally, we prove the minimax optimality of our $[H]^\gamma$-norm learning rates for all combinations of $H$ and $P$, for which the optimal $L_2$-norm learning rates are known. The rest of this work is organized as follows: We start in Section~\ref{sec:pre} with an introduction of notations and general assumptions. In Section~\ref{sec:res} we present our learning rates. The consequences of our results for the special case of a Sobolev/Besov RKHS $H$ can be found in Section~\ref{sec:besov}. Note that in this case $[H]^\gamma$ coincide with the classical Besov spaces and the corresponding norms have a nice interpretation in terms of derivatives. Finally, we compare our result with other contributions in Section~\ref{sec:comparison}. All proofs can be found in Section~\ref{sec:proof}. \subsection*{Acknowledgment} The authors are especially grateful to Nicole Mücke for pointing them to the article of \citet*{LiRuRoCe2018}. Moreover, the authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Simon Fischer. 
\section{Preliminaries}\label{sec:pre} Let $(X,\mathcal{B})$ be a measurable space used as \emph{input space}, $Y={\mathbb R}$ be the \emph{output space}, and $P$ be an \emph{unknown} probability distribution on $X\times {\mathbb R}$ with \begin{equation}\label{eq:pre:second_moment} |P|_2^2 \coloneqq \int_{X\times {\mathbb R}} y^2\ {\mathrm d} P(x,y)<\infty\;\;. \end{equation} Moreover, we denote the marginal distribution of $P$ on $X$ by $\nu \coloneqq P_X$. In the following, we fix a (regular) conditional probability $P(\,\cdot\,|x)$ of $P$ given $x\in X$. Since the conditional mean function $\optFZ{P}$ is only $\nu$-almost everywhere uniquely determined, we use the symbol $\optFZ{P}$ for both the $\nu$-equivalence class and the representative \begin{equation}\label{eq:pre:regression_function} \optFZ{P}(x)=\int_{\mathbb R} y\ P({\mathrm d} y|x)\;\;. \end{equation} If we use another representative we will explicitly point this out. In the following, we fix a separable RKHS $H$ on $X$ with respect to a measurable and bounded kernel $k$. Let us recall some facts about the interplay between $H$ and $L_2(\nu)$. Some of the following results have already been shown by \citet{SmZh2004,SmZh2005} and \citet{DeRoCa2006,DeRoCaDeOd2005}, but we follow the more recent contribution of \citet{StSc2012} because of its more general applicability. According to \cites[Lemma~2.2, Lemma~2.3]{StSc2012} and \cites[Theorem~4.27]{StCh2008} the---not necessarily injective---embedding $I_\nu:H\to L_2(\nu)$, mapping a function $f\in H$ to its $\nu$-equivalence class $[f]_\nu$, is well-defined, Hilbert-Schmidt, and the Hilbert-Schmidt norm satisfies \begin{equation*} \|I_\nu\|_{\mathcal{L}_2(H,L_2(\nu))} = \|k\|_{L_2(\nu)} \coloneqq \biggl(\int_X k(x,x)\ {\mathrm d}\nu(x)\biggr)^{\sfrac{1}{2}} < \infty\;\;. \end{equation*} Moreover, the adjoint operator $S_\nu \coloneqq I_\nu^\ast: L_2(\nu)\to H$ is an integral operator with respect to the kernel $k$, i.e.
for $f\in L_2(\nu)$ and $x\in X$ we have \begin{equation}\label{eq:pre:integral_operator} (S_\nu f)(x) = \int_{X} k(x,x')f(x')\ {\mathrm d}\nu(x')\;\;. \end{equation} Next, we define the self-adjoint and positive semi-definite integral operators \[ T_\nu \coloneqq I_\nu S_\nu:L_2(\nu)\to L_2(\nu)\qquad\text{ and }\qquad C_\nu \coloneqq S_\nu I_\nu:H\to H\;\;. \] These operators are trace class and their trace norms satisfy \begin{equation*} \|T_\nu\|_{\mathcal{L}_1(L_2(\nu))} = \|C_\nu\|_{\mathcal{L}_1(H)} = \|I_\nu\|_{\mathcal{L}_2(H, L_2(\nu))}^2 = \|S_\nu\|_{\mathcal{L}_2(L_2(\nu),H)}^2\;\;. \end{equation*} If there is no danger of confusion we write $\|\cdot\|$ for the operator norm, $\|\cdot\|_2$ for the Hilbert-Schmidt norm, and $\|\cdot\|_1$ for the trace norm. The spectral theorem for self-adjoint compact operators yields an at most countable index set $I$, a non-increasing summable sequence $(\mu_i)_{i\in I}\subseteq (0,\infty)$, and a family $(e_i)_{i\in I}\subseteq H$, such that $([e_i]_\nu)_{i\in I}$ is an orthonormal basis (ONB) of $\overline{\operatorname{ran} I_\nu}\subseteq L_2(\nu)$ and $(\mu_i^{\sfrac{1}{2}}\,e_i)_{i\in I}$ is an ONB of $(\ker I_\nu)^\perp \subseteq H$ with \begin{equation}\label{eq:pre:spectral} T_\nu = \sum_{i\in I}\mu_i\,\langle\,\cdot\,,[e_i]_\nu\rangle_{L_2(\nu)} [e_i]_\nu \qquad\text{ and }\qquad C_\nu=\sum_{i\in I}\mu_i\,\langle\,\cdot\,,\mu_i^{\sfrac{1}{2}}\,e_i\rangle_H\, \mu_i^{\sfrac{1}{2}}\,e_i\;\;, \end{equation} see \cites[Lemma~2.12]{StSc2012} for details. Since we are mainly interested in the hard learning scenario $\optFZ{P}\not\in H$ we exclude finite $I$ and assume $I={\mathbb N}$ in the following. Let us recall some intermediate spaces introduced by \citet[Equation~\eqnr{36}]{StSc2012}. We call them \emph{power spaces}.
For $\alpha \geq 0$, the \emph{$\alpha$-power space} is defined by \[ [H]_\nu^\alpha \coloneqq \biggl\{\sum_{i \geq 1}a_i\mu_i^{\sfrac{\alpha}{2}}[e_i]_\nu:\ (a_i)_{i \geq 1}\in\ell_2({\mathbb N})\biggr\} \subseteq L_2(\nu) \] and, equipped with the \emph{$\alpha$-power norm} \[ \biggl\|\sum_{i \geq 1}a_i\mu_i^{\sfrac{\alpha}{2}} [e_i]_\nu\biggr\|_{[H]_\nu^\alpha} \coloneqq \bigl\|(a_i)_{i \geq 1}\bigr\|_{\ell_2({\mathbb N})} = \biggl(\sum_{i \geq 1} a_i^2\biggr)^{\sfrac{1}{2}}\;\;, \] for $(a_i)_{i \geq 1}\in\ell_2({\mathbb N})$, it becomes a Hilbert space. Moreover, $(\mu_i^{\sfrac{\alpha}{2}}[e_i]_\nu)_{i \geq 1}$ forms an ONB of $[H]_\nu^\alpha$ and consequently $[H]_\nu^\alpha$ is a separable Hilbert space. If there is no danger of confusion we use the abbreviation $\|\cdot\|_\alpha \coloneqq \|\cdot\|_{[H]_\nu^\alpha}$. Furthermore, in the case of $\alpha=1$ we introduce the notation $[H]_\nu \coloneqq [H]_\nu^1$. Recall that for $\alpha=0$ we have $[H]_\nu^0 = \overline{\operatorname{ran} I_\nu}\subseteq L_2(\nu)$ with $\|\cdot\|_0 = \|\cdot\|_{L_2(\nu)}$. Moreover, for $\alpha = 1$ we have $[H]_\nu^1 = \operatorname{ran} I_\nu$ and $[H]_\nu^1$ is isometrically isomorphic to the closed subspace $(\ker I_\nu)^\perp$ of $H$ via $I_\nu$, i.e.\ $\|[f]_\nu\|_{1} = \|f\|_H$ for $f\in(\ker I_\nu)^\perp$. For $0<\beta<\alpha$, the embeddings \begin{equation}\label{eq:pre:embeddings} [H]_\nu^\alpha \hookrightarrow[H]_\nu^\beta \hookrightarrow [H]_\nu^0 = \overline{\operatorname{ran} I_\nu}\subseteq L_2(\nu) \end{equation} exist and they are compact. For $\alpha>0$, the $\alpha$-power space is given by the image of the fractional integral operator, namely \[ [H]_\nu^\alpha=\operatorname{ran} T_\nu^{\sfrac{\alpha}{2}} \qquad\text{and}\qquad \|T_\nu^{\sfrac{\alpha}{2}}f\|_\alpha = \|f\|_{L_2(\nu)} \] for $f\in \overline{\operatorname{ran} I_\nu}$. In addition, for $0<\alpha<1$, the $\alpha$-power space is characterized in terms of interpolation spaces of the real method, see e.g.\ \cites[Section~1.3.2]{Tr1978} for a definition.
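In coordinates, the power norms are just weighted $\ell_2$-norms, and the boundedness of the embeddings in \eqref{eq:pre:embeddings} follows immediately from the boundedness of the $\mu_i$. A small finite-dimensional sketch; the toy spectrum $\mu_i=i^{-2}$ and the coefficients are our illustrative choices:

```python
import math

mu = [i ** -2.0 for i in range(1, 101)]   # toy eigenvalues, non-increasing and summable
a = [1.0 / i for i in range(1, 101)]      # coefficients of f in the ONB of [H]^alpha

def power_norm(a, mu, alpha, beta):
    """||f||_beta for f = sum_i a_i mu_i^{alpha/2} [e_i]_nu, expressed in [H]^beta."""
    # w.r.t. the ONB (mu_i^{beta/2} [e_i]_nu) the coefficients are a_i * mu_i^{(alpha-beta)/2}
    return math.sqrt(sum((ai * m ** ((alpha - beta) / 2.0)) ** 2 for ai, m in zip(a, mu)))

alpha, beta = 1.0, 0.5
norm_alpha = power_norm(a, mu, alpha, alpha)   # the [H]^alpha-norm, i.e. ||(a_i)||_{l_2}
norm_beta = power_norm(a, mu, alpha, beta)
# the embedding [H]^alpha -> [H]^beta is bounded by mu_1^{(alpha-beta)/2}:
assert norm_beta <= mu[0] ** ((alpha - beta) / 2.0) * norm_alpha + 1e-12
```

For $\beta=0$ the same formula returns the $L_2(\nu)$-norm of $f$, consistent with $\|\cdot\|_0=\|\cdot\|_{L_2(\nu)}$.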
To be more precise, \citet[Theorem~4.6]{StSc2012} proved \begin{equation}\label{eq:pre:interpolation_spaces} [H]_\nu^\alpha \cong \bigl[L_2(\nu),[H]_\nu\bigr]_{\alpha,2}\;\;, \end{equation} where the symbol $\cong$ in \eqref{eq:pre:interpolation_spaces} means that these spaces are isomorphic, i.e.\ the sets coincide and the corresponding norms are equivalent. Note that for Sobolev/Besov RKHSs and marginal distributions that are essentially the uniform distribution, the interpolation space $\bigl[L_2(\nu),[H]_\nu\bigr]_{\alpha,2}$ is well-known from the literature, see Section~\ref{sec:besov} for details. \section{Main Results}\label{sec:res} Before we state the results we introduce the main assumptions. For $0<p\leq 1$ we assume that the \emph{eigenvalue decay} satisfies a polynomial upper bound of order $\sfrac{1}{p}$: There is a constant $C>0$ such that the eigenvalues $(\mu_i)_{i \geq 1}$ of the integral operator satisfy \begin{gather}\label{eq:res:eigenvalue_decay}\tag{EVD} \mu_i \leq C\, i^{-\sfrac{1}{p}} \end{gather} for all $i \geq 1$. In order to establish the optimality of our results we need to assume an exact polynomial asymptotic behavior of order $\sfrac{1}{p}$: There are constants $C_{\mathrm{lb}},C>0$ such that \begin{gather}\label{eq:res:eigenvalue_decay_exact}\tag{EVD+} C_{\mathrm{lb}}\, i^{-\sfrac{1}{p}}\leq \mu_i \leq C\, i^{-\sfrac{1}{p}} \end{gather} is satisfied for all $i \geq 1$. Our next assumption is the \emph{embedding property}, for $0<\alpha\leq 1$: There is a constant $A>0$ with \begin{gather}\label{eq:res:embedding_property}\tag{EMB} \bigl\|[H]_\nu^\alpha\hookrightarrow L_\infty(\nu)\bigr\| \leq A\;\;. \end{gather} This means that $[H]_\nu^\alpha$ is continuously embedded into $L_\infty(\nu)$ and the operator norm of the embedding is bounded by $A$. Because of \eqref{eq:pre:embeddings}, the larger $\alpha$ is, the weaker the embedding property is. Since our kernel $k$ is bounded, \eqref{eq:res:embedding_property} is always satisfied for $\alpha=1$.
Moreover, Part~\ref{it:proof:embedding:eigenvalue_decay:iii} of Lemma~\ref{lem:proof:embedding:eigenvalue_decay} in Section~\ref{sec:proof} shows that \eqref{eq:res:embedding_property} implies a polynomial eigenvalue decay of order $\sfrac{1}{\alpha}$ and hence we assume $p\leq\alpha$ in the following. Observe that the converse does not hold in general and consequently it is possible that we even have the strict inequality $p < \alpha$. Note that the Conditions~\eqref{eq:res:embedding_property} and \eqref{eq:res:eigenvalue_decay}/\eqref{eq:res:eigenvalue_decay_exact} just describe the interplay between the marginal distribution $\nu=P_X$ and the RKHS $H$. Consequently, they are independent of the conditional distribution $P(\,\cdot\,|x)$ and especially independent of the regression function $\optFZ{P}$. In the following, we use a \emph{source condition}, for $0<\beta\leq 2$, to measure the smoothness of the regression function: There is a constant $B>0$ such that $\optFZ{P}\in [H]_\nu^\beta$ and \begin{gather}\label{eq:res:source_condition}\tag{SRC} \|\optFZ{P}\|_\beta\leq B\;\;. \end{gather} Note that $|P|_2<\infty$, defined in \eqref{eq:pre:second_moment}, already implies $\optFZ{P}\in L_2(\nu)$. Moreover, \eqref{eq:res:source_condition} with $\beta \geq 1$ implies that $\optFZ{P}$ has a representative from $H$---in short $\optFZ{P}\in H$---and hence $\beta \geq 1$ excludes the hard learning scenario we are mainly interested in. Nonetheless, we included the case $1\leq\beta\leq 2$ because it is no extra effort in the proof. Since we want to estimate $\| [\optRegFD]_\nu - \optFZ{P}\|_\gamma$ and this expression is well-defined if and only if $\optFZ{P}\in[H]_\nu^\gamma$, we naturally have to assume $\beta \geq \gamma$ in the following.
Finally, we introduce a \emph{moment condition} to control the noise of the observations: There are constants $\sigma,L>0$ such that \begin{gather}\label{eq:res:moment_condition}\tag{MOM} \int_{\mathbb R} |y - \optFZ{P}(x)|^m\ P({\mathrm d} y|x) \leq \frac{1}{2}m!\,\sigma^2\,L^{m-2} \end{gather} is satisfied for $\nu$-almost all $x\in X$ and all $m \geq 2$. Note that \eqref{eq:res:moment_condition} is satisfied for Gaussian noise with bounded variance, i.e.\ $P(\,\cdot\,|x) = \mathcal{N}(\optFZ{P}(x),\sigma_x^2)$, where $x\mapsto\sigma_x \in (0,\infty)$ is a measurable and $\nu$-almost surely bounded function. Another sufficient condition is that $P$ is concentrated on $X\times[-M,M]$ for some constant $M>0$, i.e.\ $P(X\times[-M,M]) = 1$. The Conditions~\eqref{eq:res:eigenvalue_decay} and \eqref{eq:res:source_condition} are well-recognized in the statistical analysis of regularized least-squares algorithms \citep[see e.g.,][]{CaDe2007,BlM2017,LiCe2018,LiRuRoCe2018}. However, there is a whole zoo of moment conditions. We use \eqref{eq:res:moment_condition} because \eqref{eq:res:moment_condition} only constrains the discrepancy of the observation $y$ to the \emph{true} value $\optFZ{P}(x)$ and hence does \emph{not} imply additional constraints, such as boundedness, on $\optFZ{P}$. An embedding property slightly weaker than \eqref{eq:res:embedding_property} was used by \citet{StHuSc2009} in combination with empirical process arguments. \citet{DiFoHs2017} used \eqref{eq:res:embedding_property} to investigate benign scenarios with exponentially decreasing eigenvalues and $\optFZ{P}\in H$, and \citet{PiRuBa2018a} used \eqref{eq:res:embedding_property} to investigate stochastic gradient methods. But embedding properties are new in combination with the integral operator technique in the hard learning scenario for the learning scheme \eqref{eq:intro:optimization_problem} and enable us to prove the following result.
\begin{thm}[\boldmath$\gamma$-Learning Rates]\label{thm:res:upper_rates} Let $(X,\mathcal{B})$ be a measurable space, $H$ be a separable RKHS on $X$ with respect to a bounded and measurable kernel $k$, $P$ be a probability distribution on $X\times {\mathbb R}$ with $|P|_2<\infty$, and $\nu \coloneqq P_X$ be the marginal distribution on $X$. Furthermore, let $B_\infty>0$ be a constant with $\|\optFZ{P}\|_{L_\infty(\nu)}\leq B_\infty$ and the Conditions~\eqref{eq:res:embedding_property}, \eqref{eq:res:eigenvalue_decay}, \eqref{eq:res:source_condition}, and \eqref{eq:res:moment_condition} be satisfied for some $0<p\leq\alpha\leq 1$ and $0<\beta\leq 2$. Then, for $0\leq\gamma\leq 1$ with $\gamma<\beta$ and a regularization parameter sequence $(\lambda_n)_{n\geq 1}$, the LS-SVM $D\mapsto\optRegFD[\lambda_n]$ with respect to $H$ defined by \eqref{eq:intro:optimization_problem} satisfies the following statements: \begin{enumerate} \item\label{it:res:upper_rates:i} In the case of $\beta + p \leq \alpha$ and $\lambda_n\asymp(\sfrac{n}{\log^r(n)})^{-\sfrac{1}{\alpha}}$ for some $r>1$ there is a constant $K>0$ independent of $n\geq 1$ and $\tau\geq 1$ such that \begin{equation}\label{eq:res:upper_rate:i} \bigl\|[\optRegFD[\lambda_n]]_\nu - \optFZ{P}\bigr\|_{\gamma}^2 \leq \tau^2 K \biggl(\frac{\log^r(n)}{n}\biggr)^{\frac{\beta-\gamma}{\alpha}} \end{equation} is satisfied for sufficiently large $n\geq 1$ with $P^n$-probability not less than $1 - 4 e^{-\tau}$. \item\label{it:res:upper_rates:ii} In the case of $\beta + p>\alpha$ and $\lambda_n\asymp n^{-\sfrac{1}{(\beta + p)}}$ there is a constant $K>0$ independent of $n\geq 1$ and $\tau\geq 1$ such that \begin{equation}\label{eq:res:upper_rate:ii} \bigl\|[\optRegFD[\lambda_n]]_\nu - \optFZ{P}\bigr\|_{\gamma}^2 \leq \tau^2 K \biggl(\frac{1}{n}\biggr)^{\frac{\beta-\gamma}{\beta + p}} \end{equation} is satisfied for sufficiently large $n\geq 1$ with $P^n$-probability not less than $1 - 4 e^{-\tau}$. 
\end{enumerate} \end{thm} Theorem~\ref{thm:res:upper_rates} is mainly based on a finite sample bound given in Section~\ref{sec:proof}, see Theorem~\ref{thm:proof:upper:oi}. We think that the statement of Theorem~\ref{thm:res:upper_rates} can be proved for general regularization methods if one combines our technique, especially Lemma~\ref{lem:proof:upper:oi_part_i} and Lemma~\ref{lem:proof:upper:oi_part_ii} from Section~\ref{sec:upper}, with the results of \citet{LiRuRoCe2018} and \citet{LiCe2018}. However, we stick to the learning scheme \eqref{eq:intro:optimization_problem} for simplicity. The proof of Theorem~\ref{thm:res:upper_rates} reveals that the constants $K>0$ depend only on the parameters and constants from \eqref{eq:res:embedding_property}, \eqref{eq:res:eigenvalue_decay}, \eqref{eq:res:source_condition}, and \eqref{eq:res:moment_condition}, on the considered norm, i.e.\ on $\gamma$, on $B_\infty$, and on the regularization parameter sequence $(\lambda_n)_{n\geq 1}$. Moreover, the index bound hidden in the phrase \emph{for sufficiently large $n\geq 1$} depends only on the parameters and constants from \eqref{eq:res:embedding_property} and \eqref{eq:res:eigenvalue_decay}, on $\tau$, on a lower bound $0<c\leq 1$ for the operator norm, i.e.\ $c\leq\|Cmas\|$, and on the regularization parameter sequence $(\lambda_n)_{n\geq 1}$. The asymptotic behavior in $n$ of the right-hand side in \eqref{eq:res:upper_rate:i} and \eqref{eq:res:upper_rate:ii}, respectively, is called the \emph{learning rate} with respect to the $\gamma$-power norm, or $\gamma$-learning rate for short. Recall that, for $\gamma=0$, the norms on the left-hand sides of \eqref{eq:res:upper_rate:i} and \eqref{eq:res:upper_rate:ii} coincide with the $L_2(\nu)$-norm. 
Note that, for $\beta\geq\alpha$, the conditional mean function $\optFZ{P}$ is automatically $\nu$-almost surely bounded, since we have $\optFZ{P}\in[H]_\nu^\beta\hookrightarrow[H]_\nu^\alpha\hookrightarrow L_\infty(\nu)$, and in this case Situation~\eqref{eq:res:upper_rate:ii} always applies. Moreover, in the case of $\alpha=p$, which was also considered by \citet[Corollary~6]{StHuSc2009}, we are always in Situation~\eqref{eq:res:upper_rate:ii}, too. If we ignore the $\log$-term in the obtained $\gamma$-learning rates then in both cases, $\beta+p\leq\alpha$ and $\beta+p>\alpha$, the $\gamma$-learning rate coincides with \[ n^{-\frac{\beta-\gamma}{\max\{\beta+p,\alpha\}}}\;\;. \] Finally, note that the asymptotic behavior of the regularization parameter sequence \emph{does not depend} on the considered $\gamma$-power norm. Consequently, we get convergence with respect to \emph{all} $\gamma$-power norms $0\leq\gamma<\beta$ \emph{simultaneously}. In order to investigate the optimality of our $\gamma$-learning rates, the next theorem yields $\gamma$-lower rates. In doing so, we have to assume \eqref{eq:res:eigenvalue_decay_exact} to make sure that the eigenvalues do not decay faster than \eqref{eq:res:eigenvalue_decay} guarantees. \begin{thm}[\boldmath$\gamma$-Lower Rates]\label{thm:res:lower_rate} Let $(X,\mathcal{B})$ be a measurable space, $H$ be a separable RKHS on $X$ with respect to a bounded and measurable kernel $k$, and $\nu$ be a probability distribution on $X$ such that \eqref{eq:res:embedding_property} and \eqref{eq:res:eigenvalue_decay_exact} are satisfied for some $0<p\leq\alpha\leq 1$. 
Then, for all parameters $0<\beta\leq 2$, $0\leq\gamma\leq 1$ with $\gamma<\beta$ and all constants $\sigma,L,B,B_\infty>0$, there exist $K_0,K,r>0$ such that for all learning methods $D\mapsto f_D$, all $\tau>0$, and all sufficiently large $n\geq 1$ there is a distribution $P$ on $X\times {\mathbb R}$ with $P_X=\nu$ satisfying $\|\optFZ{P}\|_{L_\infty(\nu)}\leq B_\infty$, \eqref{eq:res:source_condition} with respect to $\beta,B$, \eqref{eq:res:moment_condition} with respect to $\sigma,L$, and with $P^n$-probability not less than $1 - K_0\tau^{\sfrac{1}{r}}$ \begin{equation}\label{eq:res:lower_rate} \bigl\|[f_{D}]_\nu - \optFZ{P}\bigr\|_{\gamma}^2 \geq \tau^2 K \biggl(\frac{1}{n}\biggr)^{\frac{\max\{\alpha,\beta\} - \gamma}{\max\{\alpha,\beta\} + p}}\;\;. \end{equation} \end{thm} In short, Theorem~\ref{thm:res:lower_rate} states that there is no learning method satisfying a faster decaying $\gamma$-learning rate than \[ n^{-\frac{\max\{\alpha,\beta\} - \gamma}{\max\{\alpha,\beta\} + p}} \] under the assumptions of Theorem~\ref{thm:res:upper_rates} and \eqref{eq:res:eigenvalue_decay_exact}. The asymptotic behavior in $n$ of the right-hand side in \eqref{eq:res:lower_rate} is called the \emph{(minimax) lower rate} with respect to the $\gamma$-power norm, or $\gamma$-lower rate for short. Theorem~\ref{thm:res:lower_rate} extends the lower bounds previously obtained by \citet{CaDe2007}, \citet{StHuSc2009}, and \citet{BlM2017}. To be more precise, \citet[Theorem~2]{CaDe2007} considered only the case $\optFZ{P}\in H$ and $\gamma=0$, \citet[Theorem~9]{StHuSc2009} considered only the case $\beta\geq\alpha$ and $\gamma=0$, and \citet[Theorem~3.5]{BlM2017} restricted their considerations to $\optFZ{P}\in H$. In the case of $\alpha\leq\beta$, which implies the boundedness of $\optFZ{P}$, the $\gamma$-learning rate of LS-SVMs stated in Theorem~\ref{thm:res:upper_rates} coincides with the $\gamma$-lower rate from Theorem~\ref{thm:res:lower_rate} and hence is optimal. 
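The matching of the upper and lower rates for $\beta\geq\alpha$ can be checked mechanically. The following sketch is illustrative only; the helper names are ours and the parameter values are arbitrary admissible choices of $0<p\leq\alpha\leq 1$, $0<\beta\leq 2$, $0\leq\gamma<\beta$.

```python
def upper_exponent(alpha, beta, gamma, p):
    # exponent of the gamma-learning rate n^{-(beta-gamma)/max(beta+p, alpha)}
    # from Theorem (gamma-Learning Rates), log-terms ignored
    return (beta - gamma) / max(beta + p, alpha)

def lower_exponent(alpha, beta, gamma, p):
    # exponent of the gamma-lower rate n^{-(max(alpha,beta)-gamma)/(max(alpha,beta)+p)}
    # from Theorem (gamma-Lower Rates)
    return (max(alpha, beta) - gamma) / (max(alpha, beta) + p)

# For beta >= alpha the two exponents coincide, i.e. the LS-SVM rate is optimal.
assert upper_exponent(0.5, 0.7, 0.0, 0.25) == lower_exponent(0.5, 0.7, 0.0, 0.25)
assert upper_exponent(0.5, 0.7, 0.3, 0.25) == lower_exponent(0.5, 0.7, 0.3, 0.25)
# For beta < alpha a gap between the upper and the lower exponent remains.
assert upper_exponent(0.9, 0.4, 0.0, 0.25) < lower_exponent(0.9, 0.4, 0.0, 0.25)
```

The last assertion reflects the open case $\alpha>\beta$ discussed next: there the obtained learning rate is slower than what the lower bound would still permit.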
The optimal rate in the case of $\alpha>\beta$, which does \emph{not} imply the boundedness of $\optFZ{P}$, has been, \emph{even for the $L_2$-norm}, an open problem for several decades, and we cannot resolve it here either. \begin{rem}[Optimality and Boundedness] Under the assumptions of Theorem~\ref{thm:res:lower_rate}, \emph{but without} requiring the uniform boundedness of $\optFZ{P}$ by some constant $B_\infty$, we can \emph{improve} the $\gamma$-lower rate of Theorem~\ref{thm:res:lower_rate}. More precisely, a straightforward modification of Lemma~\ref{lem:proof:lower:valid_strings} in Section~\ref{sec:proof} gives, in the case of not uniformly bounded $\optFZ{P}$, the $\gamma$-lower rate \begin{equation*}\label{eq:res:lower_rate_improved} n^{-\frac{\beta - \gamma}{\beta + p}}\;\;. \end{equation*} Moreover, if we were able to prove the $\gamma$-learning rates of Theorem~\ref{thm:res:upper_rates} with a constant $K>0$ independent of $\|\optFZ{P}\|_{L_\infty(\nu)}$ then we would have optimality of our $\gamma$-learning rates in the case of $\beta > \alpha-p$ instead of $\beta\geq\alpha$. \end{rem} Because of \eqref{eq:res:embedding_property}, the next remark is a direct consequence of Theorem~\ref{thm:res:upper_rates} for $\gamma = \alpha$. \begin{rem}[\boldmath$L_\infty$-Learning Rates]\label{rem:res:Linfinity_rate} Under the assumptions of Theorem~\ref{thm:res:upper_rates}, in the case of $\beta>\alpha$ the following statement is true. 
For all regularization parameter sequences $(\lambda_n)_{n\geq 1}$ with $\lambda_n\asymp n^{-\sfrac{1}{(\beta + p)}}$ there is a constant $K>0$ independent of $n\geq 1$ and $\tau\geq 1$ such that the LS-SVM $D\mapsto\optRegFD[\lambda_n]$ with respect to $H$ defined by \eqref{eq:intro:optimization_problem} satisfies \[ \bigl\|[\optRegFD[\lambda_n]]_\nu - \optFZ{P}\bigr\|_{L_\infty(\nu)}^2 \leq \tau^2 K \biggl(\frac{1}{n}\biggr)^{\frac{\beta-\alpha}{\beta + p}} \] for sufficiently large $n\geq 1$ with $P^n$-probability not less than $1 - 4 e^{-\tau}$. \end{rem} Note that all previous efforts to get $L_\infty$-learning rates for the learning scheme \eqref{eq:intro:optimization_problem} needed to assume $\optFZ{P}\in H$. Consequently, Remark~\ref{rem:res:Linfinity_rate} establishes the very first $L_\infty$-learning rates in the hard learning scenario. \section{Example: Besov RKHSs}\label{sec:besov} In this section we illustrate our main results in the case of Besov RKHSs. To this end, we assume that $X$ is a benign domain: Let $X\subseteq{\mathbb R}^d$ be a non-empty, open, connected, and bounded set with a \begin{gather}\label{eq:besov:domain}\tag{DOM} C_\infty\text{-boundary} \end{gather} and be equipped with the Lebesgue-Borel $\sigma$-algebra $\mathcal{B}$. Furthermore, $L_2(X) \coloneqq L_2(\mu)$ denotes the corresponding $L_2$-space. Let us briefly introduce Sobolev and Besov Hilbert spaces. For a more detailed introduction see e.g.\ \cites{AdFo2003}. For $m\in{\mathbb N}$ we denote the \emph{Sobolev space} of smoothness $m$ by $W_m(X) \coloneqq W_{m,2}(X)$, see e.g.\ \cites[Definition~3.2]{AdFo2003} for a definition. For $r>0$ the \emph{Besov space} $B^{r}_{2,2}(X)$ is defined by means of the real interpolation method, namely $B^{r}_{2,2}(X) \coloneqq \bigl[L_2(X), W_{m}(X)\bigr]_{\sfrac{r}{m},2}$, where $m \coloneqq \min\{k\in{\mathbb N}:\ k>r\}$, see e.g.\ \cites[Section~7.30]{AdFo2003} for details. For $r=0$ we define $B^{0}_{2,2}(X) \coloneqq L_2(X)$. 
It is well-known that the Besov spaces $B^{r}_{2,2}(X)$ are separable Hilbert spaces and that they satisfy \begin{equation}\label{eq:besov:reiteration} B^{r}_{2,2}(X) \cong \bigl[L_2(X),B^t_{2,2}(X)\bigr]_{\sfrac{r}{t},2} \end{equation} for all $t>r>0$, see e.g.\ \cites[Section~7.32]{AdFo2003} for details. Moreover, an extension of the Sobolev embedding theorem to Besov spaces guarantees that, for $r>\sfrac{d}{2}$, each $\mu$-equivalence class in $B^{r}_{2,2}(X)$ has a unique continuous and bounded representative, see e.g.\ \cites[Part~\eqnr{c} of Theorem~7.24]{AdFo2003}. In fact, for $r > j + \sfrac{d}{2}$, this representative is from the space $C_j(X)$ of $j$-times continuously differentiable and bounded functions with bounded derivatives. More precisely, the mapping of a $\mu$-equivalence class to its (unique) continuous representative is linear and continuous, in short, for $r > j + \sfrac{d}{2}$, \begin{equation}\label{eq:besov:embedding} B^{r}_{2,2}(X) \hookrightarrow C_j(X)\;\;. \end{equation} Consequently, we define, for $r>\sfrac{d}{2}$, the \emph{Besov RKHS} as the set of continuous representatives $H_r(X) \coloneqq \{f\in C_0(X):\ [f]_\mu\in B^{r}_{2,2}(X)\}$ and equip this space with the norm $\|f\|_{H_r(X)} \coloneqq \|[f]_\mu\|_{B^{r}_{2,2}(X)}$. The Besov RKHS $H_r(X)$ is a separable RKHS with respect to a kernel $k_r$. Moreover, $k_r$ is bounded and measurable, see e.g.\ \cites[Lemma~4.28 and Lemma~4.25]{StCh2008}. In the following, we fix a Besov RKHS $H_r(X)$ for some $r>\sfrac{d}{2}$ and a probability measure $P$ on $X\times {\mathbb R}$ such that the marginal distribution $\nu = P_X$ on $X$ satisfies the following condition: The probability measure $\nu$ is equivalent to the Lebesgue measure $\mu$ on $X$, i.e.\ $\mu\ll\nu$ and $\nu\ll\mu$, and there are constants $g,G>0$ such that \begin{gather}\label{eq:besov:marginal_distribution}\tag{LEB} g \leq \frac{{\mathrm d}\nu}{{\mathrm d}\mu}\leq G \end{gather} is $\mu$-almost surely satisfied. 
For marginal distributions $\nu$ satisfying \eqref{eq:besov:marginal_distribution} we have $L_2(\nu)\cong L_2(X)$ and we can describe the power spaces of $H_r(X)$, according to \eqref{eq:pre:interpolation_spaces}, the interpolation property, and \eqref{eq:besov:reiteration}, by \begin{equation}\label{eq:besov:power_spaces} [H_r(X)]_\nu^{\sfrac{u}{r}} \cong \bigl[L_2(\nu),[H_r(X)]_\nu\bigr]_{\sfrac{u}{r},2} \cong \bigl[L_2(X),[H_r(X)]_\mu\bigr]_{\sfrac{u}{r},2} \cong B^{u}_{2,2}(X) \end{equation} for $0<u<r$. As a consequence of \eqref{eq:besov:power_spaces}, we have $\optFZ{P}\in B^{s}_{2,2}(X)$ for some $0<s<r$ if and only if \eqref{eq:res:source_condition} is satisfied for $\beta=\sfrac{s}{r}$. Next, if we combine \eqref{eq:besov:power_spaces} and \eqref{eq:besov:embedding} then we get \eqref{eq:res:embedding_property} for all $\alpha$ with $\frac{d}{2r}<\alpha<1$: \[ [H_r(X)]_\nu^\alpha \cong B^{\alpha r}_{2,2}(X) \hookrightarrow C_0(X) \hookrightarrow L_\infty(\nu)\;\;. \] Finally, we consider the asymptotic behavior of the eigenvalues $(\mu_i)_{i\geq 1}$ of the integral operator $Tmas$. \citet[Equation~\eqnr{4.4.12}]{CaSt1990} show that the eigenvalue $\mu_i$ of $Tmas$ equals the square of the $i$-th approximation number $a_i(Imas)$ of the embedding $Imas:H_r(X)\to L_2(\nu)$. Since $L_2(\nu)\cong L_2(X)$, these approximation numbers are described by \citet[Equation~\eqnr{4} on p.~119]{EdTr1996}, namely \[ \mu_i=a_i^2(Imas) \asymp i^{-\sfrac{2r}{d}}\;\;. \] To sum up, the eigenvalues satisfy \eqref{eq:res:eigenvalue_decay_exact} for $p = \frac{d}{2r}$. The following corollaries are direct consequences of Part~\ref{it:res:upper_rates:ii} of Theorem~\ref{thm:res:upper_rates} and Theorem~\ref{thm:res:lower_rate} with $p=\frac{d}{2r}$, $\beta=\sfrac{s}{r}$, $\gamma = \sfrac{t}{r}$, and an $\alpha>p$ that is chosen sufficiently close to $p$. 
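The translation between the Besov parameters $(d,r,s,t)$ and the abstract parameters $(p,\beta,\gamma)$ can be checked mechanically: with $p=\sfrac{d}{2r}$, $\beta=\sfrac{s}{r}$, and $\gamma=\sfrac{t}{r}$, the abstract exponent $\sfrac{(\beta-\gamma)}{(\beta+p)}$ reduces to $\sfrac{(s-t)}{(s+\sfrac{d}{2})}$. The following sketch is illustrative only; the helper names are ours and the concrete values of $d$, $r$, $s$, $t$ are arbitrary admissible choices with $r>\max\{s,\sfrac{d}{2}\}$ and $0\leq t<s$.

```python
def abstract_exponent(p, beta, gamma):
    # gamma-learning rate exponent from the main theorem (case beta + p > alpha)
    return (beta - gamma) / (beta + p)

def besov_exponent(d, s, t):
    # Besov-learning rate exponent (s - t) / (s + d/2)
    return (s - t) / (s + d / 2)

# dimension d, RKHS smoothness r, target smoothness s, Besov norm index t
for (d, r, s, t) in [(1, 2.0, 1.0, 0.0), (3, 4.0, 2.5, 1.0), (2, 1.5, 0.8, 0.3)]:
    p, beta, gamma = d / (2 * r), s / r, t / r
    assert abs(abstract_exponent(p, beta, gamma) - besov_exponent(d, s, t)) < 1e-12
```

In particular, the Besov exponent no longer involves $r$, which is the independence of the rate from the chosen hypothesis space noted after the corollary below.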
\begin{cor}[Besov-Learning Rates]\label{cor:besov:upper_rates} Let $X\subseteq{\mathbb R}^d$ be a set satisfying \eqref{eq:besov:domain}, $H_r(X)$ be a Besov RKHS on $X$ with $r>\sfrac{d}{2}$, $P$ be a probability distribution on $X\times {\mathbb R}$ with $|P|_2<\infty$, and $\nu \coloneqq P_X$ be the marginal distribution on $X$ such that \eqref{eq:besov:marginal_distribution} is satisfied. Furthermore, let $B,B_\infty>0$ be constants with $\|\optFZ{P}\|_{L_\infty(\mu)}\leq B_\infty$ and $\|\optFZ{P}\|_{B^{s}_{2,2}(X)}\leq B$ for some $0<s < r$, and the Condition~\eqref{eq:res:moment_condition} be satisfied. Then, for $0\leq t<s$ and a regularization parameter sequence $(\lambda_n)_{n\geq 1}$ with $\lambda_n\asymp n^{-\sfrac{r}{(s+\sfrac{d}{2})}}$, there is a constant $K>0$ independent of $n\geq 1$ and $\tau\geq 1$ such that the LS-SVM $D\mapsto\optRegFD[\lambda_n]$ with respect to the Besov RKHS $H_r(X)$ defined by \eqref{eq:intro:optimization_problem} satisfies \[ \bigl\|[\optRegFD[\lambda_n]]_\mu - \optFZ{P}\bigr\|_{B^{t}_{2,2}(X)}^2 \leq \tau^2 K \biggl(\frac{1}{n}\biggr)^{\frac{s - t}{s + \sfrac{d}{2}}} \] for sufficiently large $n\geq 1$ with $P^n$-probability not less than $1- 4 e^{-\tau}$. \end{cor} Note that the $B^{t}_{2,2}$-learning rate is independent of the chosen Besov RKHS $H_r(X)$. Besides $r>\sfrac{d}{2}$, the only requirement on the choice of $H_r(X)$ a user has to take care of is $r>s$, i.e.\ to pick a sufficiently small space $H_r(X)$. Recall that the case $t=0$ corresponds to $L_2$-norm learning rates. \begin{cor}[Besov-Lower Rates]\label{cor:besov:lower_rates} Let $X\subseteq{\mathbb R}^d$ be a set satisfying \eqref{eq:besov:domain}, $H_r(X)$ be a Besov RKHS on $X$ with $r>\sfrac{d}{2}$, and $\nu$ be a probability distribution on $X$ satisfying \eqref{eq:besov:marginal_distribution}. 
Then, for all parameters $0\leq t<s<r$ with $s>\sfrac{d}{2}$ and all constants $\sigma,L,B,B_\infty>0$, there exist $K_0,K,\varrho>0$ such that for all learning methods $D\mapsto f_D$, all $\tau>0$, and all sufficiently large $n\geq 1$ there is a distribution $P$ on $X\times {\mathbb R}$ with $P_X=\nu$ satisfying $\|\optFZ{P}\|_{L_\infty(\nu)}\leq B_\infty$, $\|\optFZ{P}\|_{B^{s}_{2,2}(X)}\leq B$, \eqref{eq:res:moment_condition} with respect to $\sigma,L$, and with $P^n$-probability not less than $1 - K_0\tau^{\sfrac{1}{\varrho}}$ \[ \bigl\|[f_D]_\mu - \optFZ{P}\bigr\|_{B^{t}_{2,2}(X)}^2 \geq \tau^2 K \biggl(\frac{1}{n}\biggr)^{\frac{s - t}{s + \sfrac{d}{2}}}\;\;. \] \end{cor} In short, Corollary~\ref{cor:besov:lower_rates} states that the rates from Corollary~\ref{cor:besov:upper_rates} are optimal for $s>\sfrac{d}{2}$. \begin{rem}\label{rem:besov:lower_rate} Under the assumptions of Corollary~\ref{cor:besov:lower_rates}, in the case of $s\leq \sfrac{d}{2}$ the following lower bound is satisfied for all sufficiently small $\varepsilon>0$ \[ \bigl\|[f_D]_\mu - \optFZ{P}\bigr\|_{B^{t}_{2,2}(X)}^2 \geq \tau^2 K \biggl(\frac{1}{n}\biggr)^{\sfrac{1}{2} - \sfrac{t}{d} + \varepsilon}\;\;. \] \end{rem} Finally, if we have $s > j + \sfrac{d}{2}$, for some integer $j\geq 0$, then the combination of Corollary~\ref{cor:besov:upper_rates} and \eqref{eq:besov:embedding} yields $C_j(X)$-norm learning rates. To this end, we denote, with a slight abuse of notation, by $\optFZ{P}$ also the unique continuous representative of the $\nu$-equivalence class $\optFZ{P}$ and apply Corollary~\ref{cor:besov:upper_rates} with a sufficiently small $t>j+\sfrac{d}{2}$. \begin{rem}[\boldmath$C_j(X)$-Learning Rates]\label{rem:besov:Cj_rate} Under the assumptions of Corollary~\ref{cor:besov:upper_rates}, in the case of $s > j + \sfrac{d}{2}$ for some integer $j\geq 0$ the following statement is true. 
For all $0<\varepsilon<\frac{s - (j + \sfrac{d}{2})}{s + \sfrac{d}{2}}$ and each regularization parameter sequence $(\lambda_n)_{n\geq 1}$ with $\lambda_n\asymp n^{-\sfrac{r}{(s + \sfrac{d}{2})}}$ there is a constant $K>0$ independent of $n\geq 1$ and $\tau\geq 1$ such that the LS-SVM $D\mapsto\optRegFD[\lambda_n]$ with respect to the Besov RKHS $H_r(X)$ defined by \eqref{eq:intro:optimization_problem} satisfies \[ \bigl\|\optRegFD[\lambda_n] - \optFZ{P}\bigr\|_{C_j(X)}^2 \leq \tau^2 K \biggl(\frac{1}{n}\biggr)^{\frac{s - (j + \sfrac{d}{2})}{s + \sfrac{d}{2}} - \varepsilon} \] for sufficiently large $n\geq 1$ with $P^n$-probability not less than $1- 4 e^{-\tau}$. \end{rem} Remark~\ref{rem:besov:Cj_rate} suggests that $D\mapsto \partial^\alpha\optRegFD$, for some multi-index $\alpha=(\alpha_1,\ldots,\alpha_d)\in{\mathbb N}_0^d$, is a reasonable estimator for the $\alpha$-th derivative $\partial^\alpha\optFZ{P}$ of the regression function if $\optFZ{P}\in B^{s}_{2,2}(X)$ with some $s>|\alpha|+\sfrac{d}{2}=\alpha_1+\ldots+\alpha_d +\sfrac{d}{2}$. Note that the $\varepsilon>0$ appears in the rates of Remark~\ref{rem:besov:lower_rate} and Remark~\ref{rem:besov:Cj_rate} because we have to choose $\alpha>p$ and $t > j + \sfrac{d}{2}$, respectively. \section{Comparison}\label{sec:comparison} In this section we compare our results with learning rates previously obtained in the literature. Since in the case of $\optFZ{P}\in[H]_\nu^\beta$ with $1\leq\beta\leq 2$ we just recover the well-known optimal rates obtained by many authors, see e.g.\ \cite{CaDe2007, LiCe2018} for $L_2$-rates and \cite{BlM2017, LiRuRoCe2018} for general $\gamma$-rates, we focus on the hard learning scenario $0<\beta<1$. 
Furthermore, due to the large amount of results in the literature we limit our considerations to the best known results for the learning scheme \eqref{eq:intro:optimization_problem}, namely \cite{StCh2008,StHuSc2009}, which use empirical process techniques, and \cite{LiCe2018,LiRuRoCe2018}, which use integral operator techniques. Moreover, we assume that $P$ is concentrated on $X\times[-M,M]$ for some $M>0$ and that $k$ is a bounded measurable kernel with separable RKHS $H$. Note that these assumptions form the largest common ground under which all the considered contributions achieve $L_2$-learning rates. In addition, the article of \citet{LiRuRoCe2018} is the only one of the four articles listed above that considers general $\gamma$-learning rates. Finally, in order to keep the comparison clear we ignore $\log$-terms in the learning rates. In Table~\ref{tab:comparison:rates} we give a short overview of the learning rates and in Figure~\ref{fig:comparison:L2rates} we plot the exponent $r$ of the polynomial $L_2$-learning rates $n^{-r}$ over the smoothness $0<\beta<1$ of $\optFZ{P}\in[H]_\nu^\beta$ for some fixed $0<p\leq\alpha\leq 1$. \input{L2rates_tab.tex} \input{L2rates_fig.tex} \textbf{Integral operator techniques.} The article of \citet{LiCe2018} is an extended version of the conference paper \cite{LiCe2018a}. \citet{LiCe2018} investigate distributed gradient descent methods and spectral regularization algorithms. In Corollary~6 they provide the $L_2$-learning rate $n^{-\sfrac{\beta}{\max\{\beta+p,1\}}}$ in expectation for spectral regularization algorithms, containing the learning scheme \eqref{eq:intro:optimization_problem} as a special case. \citet{LiRuRoCe2018} establish the $\gamma$-learning rate $n^{-\sfrac{(\beta-\gamma)}{\max\{\beta+p,1\}}}$ in probability for spectral regularization algorithms under more general source conditions, see \cites[Equation~\eqnr{18}]{LiRuRoCe2018} for a definition. 
Neither article takes an embedding property into account, and hence we get at least the same rates; in the case of \eqref{eq:res:embedding_property} with $\alpha<1$ we actually improve their rates iff $\beta + p < 1$. Let us illustrate this improvement in the case of a Besov RKHS $H_r(X)$ with smoothness $r$. To this end, we assume $\optFZ{P}\in B^{s}_{2,2}(X)$ for some $s>0$. Besides the condition $r>\sfrac{d}{2}$, which ensures that $H_r(X)$ is a RKHS, the only requirement for our Corollary~\ref{cor:besov:upper_rates} is $r>s$ in order to achieve the fastest known $L_2$-learning rate $n^{-\sfrac{s}{(s+\sfrac{d}{2})}}$. Recall that this rate is independent of the smoothness $r$ of the hypothesis space and is known to be optimal for $s>\sfrac{d}{2}$, see e.g.\ Corollary~\ref{cor:besov:lower_rates}. In order to get the same $L_2$-learning rate by the results of \citet{LiCe2018} or \citet{LiRuRoCe2018} the \emph{additional} constraint $r\leq s+\sfrac{d}{2}$ has to be satisfied. Otherwise, they only yield the $L_2$-rate $n^{-\sfrac{s}{r}}$, which gets worse with increasing smoothness $r$. Consequently, taking \eqref{eq:res:embedding_property} into account facilitates the choice of $r$. Moreover, for learning rates with respect to Besov norms our results improve those of \citet{LiRuRoCe2018} in a similar way, i.e.\ to get our Besov-learning rates with the help of the results of \citet{LiRuRoCe2018} the \emph{additional} constraint $r\leq s+\sfrac{d}{2}$ has to be satisfied. \textbf{Empirical process techniques.} \citet{StCh2008} provide an oracle inequality in Theorem~7.23 under a slightly weaker assumption than \eqref{eq:res:eigenvalue_decay}. As already mentioned there \citep[Equation~\eqnr{7.54}]{StCh2008}, this oracle inequality leads, under a slightly weaker assumption than \eqref{eq:res:source_condition}, to the $L_2$-rate $n^{-\sfrac{\beta}{\max\{\beta+p,1\}}}$. 
This rate coincides with the results of \citet{LiCe2018} and \citet{LiRuRoCe2018}, and is even better by a logarithmic factor. Inspired by \citet[Lemma~5.1]{MeNe2010}, \citet{StHuSc2009} were the first to use an embedding property, slightly weaker than \eqref{eq:res:embedding_property}, to derive finite sample bounds, see \cites[Theorem~1]{StHuSc2009}. Moreover, Theorem~1 of \citet{StHuSc2009} was used in Corollary~6 of that article to establish, in the case of $p=\alpha$, the $L_2$-rate $n^{-\sfrac{\beta}{(\beta+\alpha)}}$. But the proof remains valid in the general case $p\leq \alpha$ and hence \citet[Theorem~1]{StHuSc2009} yields the $L_2$-rate $n^{-\sfrac{\beta}{\max\{\beta +p,\beta +\alpha(1-\beta)\}}}$. This rate is never better than ours and is worse than ours iff $\alpha<1$ and $\beta<1-\sfrac{p}{\alpha}$. If we combine the oracle inequality of \citet[Theorem~7.23]{StCh2008} with \eqref{eq:res:embedding_property} then we recover our $L_2$-rate from Theorem~\ref{thm:res:upper_rates} even without the logarithmic factor. However, recall that the empirical process technique is not yet able to provide general $\gamma$-learning rates. Finally, it is worth mentioning that both contributions, \cite{StCh2008} and \cite{StHuSc2009}, consider the \emph{clipped} predictor. The influence of this clipping is not clear, but it could be the reason for avoiding the logarithmic factors appearing in some learning rates obtained by integral operator techniques. To sum up, we use the integral operator technique to recover the best known, and in many cases optimal, $L_2$-learning rates previously only obtained by the empirical process technique. In addition, we improve the best known $\gamma$-learning rates from \cite{LiRuRoCe2018} for the learning scheme \eqref{eq:intro:optimization_problem} whenever \eqref{eq:res:embedding_property} is satisfied for some $0<\alpha<1$ and \eqref{eq:res:source_condition} and \eqref{eq:res:eigenvalue_decay} are satisfied with $\beta+p < 1$. 
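The claimed improvement over the integral-operator rates can be checked numerically. The following sketch is illustrative only; the helper names are ours, the exponent formulas are those reported in this section (log-terms ignored), and the parameter values are arbitrary choices with $0<p\leq\alpha\leq 1$ and $0<\beta<1$.

```python
def ours(alpha, beta, gamma, p):
    # exponent of our gamma-learning rate n^{-(beta-gamma)/max(beta+p, alpha)}
    return (beta - gamma) / max(beta + p, alpha)

def li_et_al(beta, gamma, p):
    # exponent of the gamma-learning rate of Li et al.: (beta-gamma)/max(beta+p, 1)
    return (beta - gamma) / max(beta + p, 1.0)

# alpha < 1 and beta + p < 1: our exponent is strictly larger (faster rate).
assert ours(0.6, 0.3, 0.0, 0.2) > li_et_al(0.3, 0.0, 0.2)
# beta + p >= 1: both exponents coincide.
assert ours(0.6, 0.9, 0.0, 0.2) == li_et_al(0.9, 0.0, 0.2)
```

In the first case the exponents are $0.5$ versus $0.3$, which is exactly the gap visible in Figure~\ref{fig:comparison:L2rates} for small $\beta$.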
Finally, we show that our $\gamma$-learning rates are optimal in all cases in which the optimal $L_2$-norm learning rate is known. \section{Proofs}\label{sec:proof} First, we summarize some well-known facts that we need for the proofs of our main results. To this end, we use the notation and general assumptions from Section~\ref{sec:pre}. \input{Proofs_Intro.tex} \input{Proofs_Bounds.tex} \input{Proofs_Upper_Bounds.tex} \input{Proofs_Lower_Bounds.tex} \appendix \section{Auxiliary Results and Concentration Inequalities}\label{sec:apx} \begin{lem}\label{lem:apx:estimate} Let, for $\lambda>0$ and $0\leq\alpha\leq1$, the function $f_{\lambda,\alpha}:[0,\infty)\to{\mathbb R}$ be defined by $f_{\lambda,\alpha}(t) \coloneqq \sfrac{t^\alpha}{(\lambda + t)}$. In the case of $\alpha=0$ the function $f_{\lambda,\alpha}$ is decreasing and in the case of $\alpha=1$ the function $f_{\lambda,\alpha}$ is increasing. Furthermore, the supremum of $f_{\lambda,\alpha}$ satisfies the following bound \[ \sfrac{\lambda^{\alpha-1}}{2} \leq \sup_{t\geq 0} f_{\lambda,\alpha}(t) \leq \lambda^{\alpha-1}\;\;. \] In the case of $0<\alpha<1$ the function $f_{\lambda,\alpha}$ attains its supremum at $t^\ast \coloneqq \sfrac{\lambda\alpha}{(1-\alpha)}$. \end{lem} \begin{proof} In order to prove this statement we use the derivative of $f_{\lambda,\alpha}$, which is given by \[ f_{\lambda,\alpha}' (t) = \frac{\alpha t^{\alpha -1}(\lambda + t) - t^\alpha}{(\lambda + t)^2}\;\;. \] For $\alpha=0$ we have $f_{\lambda,\alpha}' (t) = -(\lambda + t)^{-2} < 0$ and hence $\sup_{t\geq 0}f_{\lambda,\alpha}(t) = f_{\lambda,\alpha}(0) = \lambda^{\alpha-1}$. For $\alpha=1$ we have $f_{\lambda,\alpha}' (t) = \lambda (\lambda + t)^{-2} > 0$ and hence $\sup_{t\geq 0}f_{\lambda,\alpha}(t) = \lim_{t\to\infty}f_{\lambda,\alpha}(t) = 1=\lambda^{\alpha-1}$. For $0<\alpha<1$ the derivative $f_{\lambda,\alpha}'$ has a unique root at $t^\ast = \sfrac{\alpha\lambda}{(1-\alpha)}$. 
Since $f_{\lambda,\alpha}(0)=0$ and $\lim_{t\to\infty}f_{\lambda,\alpha}(t)=0$ holds, $f_{\lambda,\alpha}$ attains its global maximum at $t^\ast$ and \[ \sup_{t\geq 0}f_{\lambda,\alpha}(t) = f_{\lambda,\alpha}(t^\ast) = \lambda^{\alpha-1} \alpha^\alpha(1-\alpha)^{1-\alpha}\;\;. \] Since $g(\alpha) \coloneqq \alpha^\alpha(1-\alpha)^{1-\alpha}$ is bounded by $1$, the upper bound follows. The derivative \[ g'(\alpha) = g(\alpha) \log\biggl(\frac{\alpha}{1-\alpha}\biggr) \] of $g$ has a unique root at $\alpha=\sfrac{1}{2}$ and hence the lower bound follows from $g(\alpha)\geq g(\sfrac{1}{2}) = \sfrac{1}{2}$ for all $0<\alpha< 1$. \end{proof} The following Bernstein type inequality for Hilbert space valued random variables is due to \citet{PiSa1986}. However, we use a version from \cites[Proposition~2]{CaDe2007}. \begin{thm}[Bernstein's Inequality]\label{thm:apx:bernstein} Let $(\Omega,\mathcal{B},P)$ be a probability space, $H$ be a separable Hilbert space, and $\xi:\Omega\to H$ be a random variable with \[ {\mathbb E}_P \|\xi\|_H^m \leq\frac{1}{2}m!\,\sigma^2 L^{m-2} \] for all $m\geq 2$. Then, for $\tau\geq 1$ and $n\geq 1$, the following concentration inequality is satisfied \[ P^n\biggl((\omega_1,\ldots,\omega_n)\in\Omega^n:\ \Bigl\|\frac{1}{n}\sum_{i=1}^n \xi(\omega_i) - {\mathbb E}_P \xi \Bigr\|_H^2 \geq 32\frac{\tau^2}{n}\biggl(\sigma^2+ \frac{L^2}{n}\biggr) \biggr) \leq 2 e^{-\tau}\;\;. \] \end{thm} \begin{proof} The $m$-th moment of the centered random variable $\xi - {\mathbb E}_P\xi$ is bounded by \begin{equation*} {\mathbb E}_P \|\xi - {\mathbb E}_P\xi\|_H^m \leq 2^{m-1} \bigl({\mathbb E}_P\|\xi\|_H^m + \|{\mathbb E}_P\xi\|_H^m\bigr) \leq 2^m{\mathbb E}_P\|\xi\|_H^m \leq \frac{1}{2}m! (2L)^{m-2} 4\sigma^2\;\;. \end{equation*} Since we consider the squared norm, the assertion is a direct consequence of \cites[Proposition~2]{CaDe2007} with $\eta = 2 e^{-\tau}$ and with $2L$ and $4\sigma^2$ in place of $L$ and $\sigma^2$. 
\end{proof} The following Bernstein type inequality for Hilbert-Schmidt operator valued random variables is due to \citet{Mi2017}. However, we use a version from \cites[Lemma~26]{LiCe2018}; see also \citet{Tr2015} for an introduction to this topic. \begin{thm}\label{thm:apx:bernstein_operator} Let $(\Omega,\mathcal{B},P)$ be a probability space, $H$ be a separable Hilbert space, and $\xi:\Omega\to\mathcal{L}_2(H)$ be a random variable with values in the set of self-adjoint Hilbert-Schmidt operators. Furthermore, let the operator norm be $P$-a.s.\ bounded, i.e.\ $\|\xi\|\leq B$ $P$-a.s., and let $V$ be a self-adjoint positive semi-definite trace class operator with ${\mathbb E}_P(\xi^2) \preccurlyeq V$, i.e.\ $V - {\mathbb E}_P(\xi^2)$ is positive semi-definite. Then, for $g(V) \coloneqq \log\bigl(\sfrac{2 e \tr(V)}{\|V\|}\bigr)$, $\tau\geq 1$, and $n\geq 1$, the following concentration inequality is satisfied \[ P^n\biggl((\omega_1,\ldots,\omega_n)\in\Omega^n:\ \Bigl\|\frac{1}{n}\sum_{i=1}^n \xi(\omega_i) - {\mathbb E}_P \xi \Bigr\|\geq \frac{4 \tau B g(V)}{3 n}+ \sqrt{\frac{2\tau\|V\|\,g(V)}{n}} \biggr) \leq 2 e^{-\tau}\;\;. \] \end{thm} Recall that $\|V\|$ denotes the operator norm and $\tr$ the trace. \begin{proof} This is a direct consequence of Lemma~26 from \cite{LiCe2018} with $\delta = 2 e^{-\tau}$ applied to the centered random variable $\xi - {\mathbb E}_P\xi$. Furthermore, we used $\|\xi - {\mathbb E}_P\xi\| \leq 2 B$ and ${\mathbb E}_P(\xi - {\mathbb E}_P\xi)^2 \preccurlyeq {\mathbb E}_P (\xi^2)\preccurlyeq V$. Finally, $\beta$ defined in \cites[Lemma~26]{LiCe2018} can be bounded by \[ \beta \coloneqq \log\biggl(\frac{4\tr(V)}{\|V\|\delta}\biggr) = \log\biggl(\frac{2\tr(V)}{\|V\|}\biggr) + \tau \leq \tau g(V) \] because of $\tau\geq 1$ and $\log\bigl(\sfrac{2\tr(V)}{\|V\|}\bigr)>0$. \end{proof} \end{document}
\begin{document} \title{``Active-set complexity'' of proximal gradient \thanks{We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) [Discovery Grant, reference numbers \texttt{\#}355571-2013, \texttt{\#}2015-06068].\\ Cette recherche a \'et\'e financ\'ee par le Conseil de recherches en sciences naturelles et en g\'enie du Canada (CRSNG) [Discovery Grant, num\'eros de r\'ef\'erence \texttt{\#}355571-2013, \texttt{\#}2015-06068].} } \subtitle{How long does it take to find the sparsity pattern?} \author{Julie~Nutini \and Mark~Schmidt \and Warren~Hare} \institute{Julie Nutini \at Department of Computer Science, The University of British Columbia \\ 201-2366 Main Mall, Vancouver BC, V6T 1Z4, Canada \\ \email{[email protected]} \and Mark Schmidt \at Department of Computer Science, The University of British Columbia \\ \email{[email protected]} \and Warren Hare \at Department of Mathematics, The University of British Columbia Okanagan \\ \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} Proximal gradient methods have been found to be highly effective for solving minimization problems with non-negative constraints or $\ell_1$-regularization. Under suitable nondegeneracy conditions, it is known that these algorithms identify the optimal sparsity pattern for these types of problems in a finite number of iterations. However, it is not known how many iterations this may take. We introduce the notion of the ``active-set complexity'', which in these cases is the number of iterations before an algorithm is guaranteed to have identified the final sparsity pattern. We further give a bound on the active-set complexity of proximal gradient methods in the common case of minimizing the sum of a strongly-convex smooth function and a separable convex non-smooth function. 
\keywords{convex optimization \and non-smooth optimization \and proximal gradient method \and active-set identification \and active-set complexity} \end{abstract} \section{Motivation}\label{sec:motivation} We consider the problem \begin{equation}\label{eq:problem} \minimize{x \in {\rm I\!R}^n} \quad f(x) + g(x), \end{equation} where $f$ is $\mu$-strongly convex and the gradient $\nabla f$ is $L$-Lipschitz continuous. We assume that $g$ is a separable function, \[ g(x) = \sum_{i=1}^n g_i(x_i), \] and each $g_i$ only needs to be a proper convex and lower semi-continuous function (it may be non-smooth or infinite at some $x_i$). In machine learning, a common choice of $f$ is the squared error $f(x) = \frac 1 2\norm{Ax-b}^2$ (or an $\ell_2$-regularized variant to guarantee strong-convexity). The squared error is often paired with a scaled absolute value function $g_i(x_i) = \lambda|x_i|$ to yield a sparsity-encouraging $\ell_1$-regularization term. This is commonly known as the LASSO problem~\cite{tibshirani1996}. The $g_i$ can alternatively enforce bound constraints, such as requiring the $x_i$ to be non-negative (e.g., in the dual problem of support vector machine optimization~\cite{cortes1995}), by defining $g_i(x_i)$ to be an indicator function that is zero if the constraints are satisfied and $\infty$ otherwise. One of the most widely-used methods for minimizing functions of this form is the proximal gradient (PG) method~\cite{levitin1966constrained,beck2009,nesterov2013,bertsekas2015convex}, which uses an iteration update given by \[ x^{k+1} = \prox{\frac{1}{L} g} \left(x^k - \frac{1}{L} \nabla f(x^k) \right), \] where the proximal operator is defined as \[ \prox{\frac{1}{L} g}(x) = \argmin{y} \frac{1}{2} \| y - x\|^2 + \frac{1}{L}g(y).
\] When the proximal gradient method is applied with non-negative constraints or $\ell_1$-regularization, an interesting property of the method is that the iterates $x^k$ will match the sparsity pattern of the solution $x^*$ for all sufficiently large $k$ (under a mild technical condition). Thus, after a finite number of iterations the algorithm ``identifies'' the final set of non-zero variables. This is useful if we are only using the algorithm to find the sparsity pattern, since it means we do not need to run the algorithm to convergence. It is also useful in designing faster algorithms (for example, see~\cite{krishnan2007,curtis2015,buchheim2016} for non-negativity constrained problems and~\cite{wen2010,byrd2015,santis2015} for $\ell_1$-regularized problems). After we have identified the set of non-zero variables we could switch to a more sophisticated solver like Newton's method applied to the non-zero variables. In any case, we should expect the algorithm to converge faster after identifying the final sparsity pattern, since it will effectively be optimizing over a lower-dimensional subspace. The idea of finitely identifying the set of non-zero variables dates back at least 40 years to the work of Bertsekas~\cite{bertsekas1976goldstein}, who showed that the projected gradient method identifies the sparsity pattern in a finite number of iterations when using non-negative constraints (and suggested we could then switch to a superlinearly convergent unconstrained optimizer). Subsequent works have shown that finite identification occurs in much more general settings, including cases where $g$ is non-separable, where $f$ may not be convex, and even where the constraints may not be convex~\cite{burke1988identification,wright1993identifiable,hare2004,hare2011}. The active-set identification property has also been shown for other algorithms like certain coordinate descent and stochastic gradient methods~\cite{mifflin2002,wright2012,lee2012manifold}.
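As a concrete illustration of the identification phenomenon, the following sketch (not the paper's code; all problem data is synthetic and purely illustrative) runs the proximal gradient iteration on a small LASSO instance and records the sparsity pattern at each iteration. The pattern typically stops changing long before the iterates themselves converge.

```python
# Minimal sketch of the proximal gradient (ISTA) iteration for the LASSO
# problem f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1, with synthetic data.
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t*|.|, applied componentwise (g is separable).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, iters):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    supports = []
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
        supports.append(tuple(np.flatnonzero(x)))   # current sparsity pattern
    return x, supports

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([1.0, -2.0, 0.0, 0.0, 0.0]) + 0.01 * rng.standard_normal(20)
x, supports = ista(A, b, lam=1.0, iters=200)
# The recorded support typically stabilizes long before x converges.
print(supports[-1])
```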
Although these prior works show that the active-set identification must happen after some finite number of iterations, they only show that this happens asymptotically. In this work, we introduce the notion of the ``active-set complexity'' of an algorithm, which we define as the number of iterations required before an algorithm is guaranteed to have identified the active-set. We further give bounds, under the assumptions above and the standard nondegeneracy condition, on the active-set complexity of the proximal gradient method. We are only aware of one previous work giving such bounds, the work of Liang et al.\ who included a bound on the active-set complexity of the proximal gradient method~\cite[Proposition~3.6]{liang2017activity}. Unlike this work, their result does not invoke strong-convexity. Instead, their work imposes an inclusion condition on the local subdifferential of the regularization term that ours does not require. By focusing on the strongly-convex case (which is common in machine learning due to the use of regularization), we obtain a simpler analysis and a much tighter bound than in this previous work. Specifically, both rates depend on the ``distance to the subdifferential boundary'', but in our analysis this term only appears inside of a logarithm rather than outside of it. \section{Notation and assumptions} We assume that $f$ is $\mu$-strongly convex so that for some $\mu > 0$, we have \[ f(y) \ge f(x) + \langle \nabla f(x), y - x \rangle + \frac{\mu}{2} \| y - x\|^2, \quad \text{for all $x,y \in {\rm I\!R}^n$}. \] Further, we assume that its gradient $\nabla f$ is $L$-Lipschitz continuous, meaning that \begin{equation}\label{eq:lipschitz} \| \nabla f(y) - \nabla f(x) \| \le L \| y - x\|, \quad \text{for all $x,y \in {\rm I\!R}^n$}. \end{equation} By our separability assumption on $g$, the subdifferential of $g$ is simply the concatenation of the subdifferentials of each $g_i$.
Further, the subdifferential of each individual $g_i$ at any $x_i \in {\rm I\!R}$ is defined by \[ \partial g_i(x_i) = \{ v \in {\rm I\!R} : g_i(y) \ge g_i(x_i) + v \cdot (y-x_i), \text{ for all $y \in \dom{g_i}$}\}, \] which implies that the subdifferential of each $g_i$ is just an interval on the real line. In particular, the interior of the subdifferential of each $g_i$ at a non-differentiable point $x_i$ can be written as an open interval, \begin{equation} \label{eq:interior} \mathop{\hbox{int}} \partial g_i(x_i) \equiv (l_i, u_i ), \end{equation} where $l_i \in {\rm I\!R} \cup \{-\infty\}$ and $u_i \in {\rm I\!R} \cup \{\infty\}$ (the $\infty$ values occur if $x_i$ is at its lower or upper bound, respectively). As in existing literature on active-set identification~\cite{hare2004}, we require the {\em nondegeneracy} condition that $-\nabla f(x^*)$ must be in the ``relative interior'' of the subdifferential of $g$ at the solution $x^*$. For simplicity, we present the nondegeneracy condition for the special case of \eqref{eq:problem}. \begin{assumption}\label{assump:nondegeneracy} We assume that $x^*$ is a nondegenerate solution for problem \eqref{eq:problem}, where $x^*$ is {\em nondegenerate} if and only if \[ \begin{cases} -\nabla_i f(x^*) \!=\! \nabla_i g(x^*_i) &\!\!\!\!\text{if $\partial g_i(x_i^*)$ \!is a singleton \!($g_i$\! smooth at $x_i^*$)} \\ -\nabla_i f(x^*) \!\in\! \mathop{\hbox{int}} \partial g_i(x^*_i) &\!\!\!\!\text{if $\partial g_i(x_i^*)$ \!is not a singleton \!($g_i$\! non-smooth at $x_i^*$)}. \end{cases} \] \end{assumption} Under this assumption, we ensure that $-\nabla f(x^*)$ is in the ``relative interior" (see \cite[Section 2.1.3]{boyd2004convex}) of the subdifferential of $g$ at the solution $x^*$. In the case of non-negative constraints, this requires that $\nabla_i f(x^*) > 0$ for all variables $i$ that are zero at the solution ($x_i^* = 0$). 
For $\ell_1$-regularization, this requires that $|\nabla_i f(x^*)| < \lambda$ for all variables $i$ that are zero at the solution, which is again a strict complementarity condition~\cite{santis2015}.\footnote{Note that $|\nabla_i f(x^*)| \leq \lambda$ for all $i$ with $x_i^* = 0$ follows from the optimality conditions, so this assumption simply rules out the case where $|\nabla_i f(x^*)|=\lambda$.} \begin{definition} The {\em active-set} $\mathcal{Z}$ for a separable $g$ is defined as \[ \mathcal{Z} = \{ i : \partial g_i(x_i^*) \text{ is not a singleton} \}. \] \end{definition} By the above definition and recalling the interior of the subdifferential of $g_i$ as defined in \eqref{eq:interior}, the set $\mathcal{Z}$ includes indices $i$ where $x_i^*$ is equal to the lower bound on $x_i$, is equal to the upper bound on $x_i$, or occurs at a non-smooth value of $g_i$. In the case of non-negative constraints and $\ell_1$-regularization under Assumption 1, $\mathcal{Z}$ is the set of variables that are zero at the solution. Formally, the {\em active-set identification property} for this problem is that for all sufficiently large $k$ we have that $x_i^k = x_i^*$ for all $i \in \mathcal{Z}$. An important quantity in our analysis is the minimum distance to the nearest boundary of the subdifferential \eqref{eq:interior} among indices $i \in \mathcal{Z}$. This quantity is given by \begin{equation} \label{eq:Delta} \delta = \min_{i\in\mathcal{Z}}\left\{ \min\{ -\nabla_i f(x^*) - l_i, u_i + \nabla_i f(x^*) \}\right\}. \end{equation} \section{Finite-time active-set identification} In this section we show that the PG method identifies the active-set of \eqref{eq:problem} in a finite number of iterations. Although this result follows from the more general results in the literature, by focusing on~\eqref{eq:problem} and the case of strong-convexity we give a substantially simpler proof that will allow us to easily bound the active-set iteration complexity of the method.
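To make the quantity in \eqref{eq:Delta} concrete, the following sketch evaluates $\delta$ for the $\ell_1$ case, where $\partial g_i(0) = [-\lambda,\lambda]$, so $l_i = -\lambda$ and $u_i = \lambda$ for $i \in \mathcal{Z}$. The gradient values below are hypothetical placeholders, not data from the paper.

```python
# Sketch: evaluating delta from eq. (eq:Delta) for ell_1-regularization,
# where l_i = -lam and u_i = lam at a zero coordinate.  All values made up.
lam = 1.0
grad_star = [0.3, -0.7, 1.0]   # entries of nabla f(x*), hypothetical
Z = [0, 1]                     # indices with x_i^* = 0 (non-singleton subdifferential)

# delta_i = min(-grad_i - l_i, u_i + grad_i) = min(lam - grad_i, lam + grad_i)
delta = min(min(lam - grad_star[i], lam + grad_star[i]) for i in Z)

# Equivalently, delta = lam - max_{i in Z} |nabla_i f(x*)|.
assert abs(delta - (lam - max(abs(grad_star[i]) for i in Z))) < 1e-12
print(delta)
```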
Before proceeding to our main contributions, we state the linear convergence rate of the proximal gradient method to the (unique) solution $x^*$. \begin{theorem}\cite[Prop. 3]{schmidt2011convergence}\label{thm:convergence} Consider problem \eqref{eq:problem}, where $f$ is $\mu$-strongly convex with $L$-Lipschitz continuous gradient, and the $g_i$ are proper convex and lower semi-continuous. Then for every iteration $k \ge 1$ of the proximal gradient method, we have \begin{equation} \label{eq:linConv} \norm{x^k - x^*} \leq \left(1-\frac{1}{\kappa}\right)^k\norm{x^0 - x^*}, \end{equation} where $\kappa := L/\mu$ is the condition number of $f$. \end{theorem} Next, we state the finite active-set identification result. Our argument essentially states that $\norm{x^k - x^*}$ is eventually always less than $\delta/2L$, where $\delta$ is defined as in \eqref{eq:Delta}, and at this point the algorithm always sets $x_i^k$ to $x_i^*$ for all $i \in \mathcal{Z}$. \begin{lemma}\label{lem:activeset} Consider problem \eqref{eq:problem}, where $f$ is $\mu$-strongly convex with $L$-Lipschitz continuous gradient, and the $g_i$ are proper convex and lower semi-continuous. Let Assumption \ref{assump:nondegeneracy} hold for the solution $x^*$. Then for any proximal gradient method with a step-size of $1/L$, there exists a $\bar{k}$ such that for all $k > \bar{k}$ we have $x_i^k = x_i^*$ for all $i \in \mathcal{Z}$. \end{lemma} \begin{proof} By the definition of the proximal gradient step and the separability of $g$, for all $i$ we have \[ x_i^{k+1} \in \argmin{y}\left\{ \frac{1}{2}\left| y - \left(x^k_i - \frac{1}{L}\nabla_i f(x^k)\right) \right|^2 + \frac{1}{L}g_i(y)\right\}. \] This problem is strongly-convex, and its unique solution satisfies \[ 0 \in y - x_i^k + \frac{1}{L} \nabla_i f(x^k) + \frac{1}{L}\partial g_i(y), \] or equivalently that \begin{equation} \label{eq:proxSol} L(x_i^k - y) - \nabla_i f(x^k) \in \partial g_i(y).
\end{equation} By Theorem~\ref{thm:convergence}, there exists a minimum finite iterate $\bar{k}$ such that $\norm{x^{\bar{k}}~-~x^*} \leq \delta/2L$. Since $|x_i^k - x_i^*| \leq \norm{x^k - x^*}$, this implies that for all $k \geq \bar{k}$ we have \begin{equation} \label{eq:itBound} -\delta/2L \leq x_i^k - x_i^* \leq \delta/2L, \quad \text{for all $i$.} \end{equation} Further, the Lipschitz continuity of $\nabla f$ in \eqref{eq:lipschitz} implies that we also have \begin{align*} |\nabla_i f(x^k) - \nabla_i f(x^*)| & \leq \norm{\nabla f(x^k) - \nabla f(x^*)}\\ & \leq L\norm{x^k - x^*}\\ & \leq \delta/2, \end{align*} which implies that \begin{equation} \label{eq:gBound} -\delta/2 - \nabla_i f(x^*) \leq - \nabla_i f(x^k) \leq \delta/2 - \nabla_i f(x^*). \end{equation} To complete the proof it is sufficient to show that for any $k \geq \bar{k}$ and $i \in \mathcal{Z}$ that $y = x_i^*$ satisfies \eqref{eq:proxSol}. Since the solution to~\eqref{eq:proxSol} is unique, this will imply the desired result. We first show that the left-side is less than the upper limit $u_i$ of the interval $\partial g_i(x_i^*)$, \begin{align*} L(x_i^k - x_i^*) - \nabla_i f(x^k) & \leq \delta/2 - \nabla_i f(x^k) & \text{(right-side of~\eqref{eq:itBound})}\\ & \leq \delta - \nabla_i f(x^*) & \text{(right-side of~\eqref{eq:gBound})}\\ & \leq (u_i + \nabla_i f(x^*)) - \nabla_i f (x^*) & \text{(definition of $\delta$,~\eqref{eq:Delta})}\\ & \leq u_i. \end{align*} We can use the left-sides of~\eqref{eq:itBound} and~\eqref{eq:gBound} and an analogous sequence of inequalities to show that $L(x_i^k~-~x_i^*)~-~\nabla_i f(x^k) \geq l_i$, implying that $x_i^*$ solves~\eqref{eq:proxSol}. \qed \end{proof} \section{Active-set complexity} \label{sec:maniRate} The active-set identification property shown in the previous section could also be shown using the more sophisticated tools used in related works~\cite{burke1988identification,hare2004}. 
However, an appealing aspect of the simple argument above is that it is clear how to bound the active-set complexity of the method. We formalize this in the following result. \begin{corollary}\label{cor:complexity} Consider problem \eqref{eq:problem}, where $f$ is $\mu$-strongly convex with $L$-Lipschitz continuous gradient, and the $g_i$ are proper convex and lower semi-continuous. Let Assumption \ref{assump:nondegeneracy} hold for the solution $x^*$. Then the proximal gradient method with a step-size of $1/L$ identifies the active-set after at most $\kappa \log(2L\norm{x^0 - x^*}/\delta)$ iterations. \end{corollary} \begin{proof} Using Theorem~\ref{thm:convergence} and $(1 - 1/\kappa)^k \leq \exp(-k/\kappa)$, we have \[ \norm{x^k - x^*} \leq \exp(-k/\kappa)\norm{x^0 - x^*}. \] The proof of Lemma \ref{lem:activeset} shows that the active-set identification occurs whenever the inequality $\norm{x^k - x^*} \leq \delta/2L$ is satisfied. For this to be satisfied, it is sufficient to have \[ \exp(-k/\kappa)\norm{x^0 - x^*} \le \frac{\delta}{2L}. \] Taking the $\log$ of both sides and solving for $k$ gives the result. \qed \end{proof} It is interesting to note that this bound only depends logarithmically on $1/\delta$, and that if $\delta$ is quite large we can expect to identify the active-set very quickly. This $O(\log(1/\delta))$ dependence is in contrast to the previous result of Liang et al.\ who give a bound of the form $O(1/\sum_{i=1}^n \delta_i^2)$ where $\delta_i$ is the distance of $\nabla_i f$ to the boundary of the subdifferential $\partial g_i$ at $x^*$~\cite[Proposition~3.6]{liang2017activity}. Thus, our bound will typically be tighter as it only depends logarithmically on the single smallest $\delta_i$ (though we make the extra assumption of strong-convexity). 
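The logarithmic dependence on $1/\delta$ can be seen numerically. The sketch below evaluates the bound $\kappa \log(2L\norm{x^0-x^*}/\delta)$ of Corollary~\ref{cor:complexity} for illustrative constants ($L$, $\mu$, and the initial distance are made up):

```python
# Sketch: evaluating the active-set complexity bound
# kappa * log(2 * L * ||x0 - x*|| / delta) for illustrative constants.
import math

L, mu = 10.0, 1.0
kappa = L / mu
dist0 = 5.0                    # ||x^0 - x^*||, hypothetical
for delta in (1.0, 1e-3, 1e-6):
    bound = kappa * math.log(2 * L * dist0 / delta)
    print(delta, math.ceil(bound))
# Shrinking delta by six orders of magnitude only adds a modest number of
# iterations, reflecting the logarithmic dependence on 1/delta.
```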
In Section~\ref{sec:motivation}, we considered two specific cases of problem \eqref{eq:problem}, for which we can define $\delta$: \begin{enumerate} \item If the $g_i$ enforce non-negativity constraints, then $\delta = \min_{i \in \mathcal{Z}} \nabla_i f(x^*)$. \item If $g$ is a scaled $\ell_1$-regularizer, then $\delta = \lambda - \max_{i \in \mathcal{Z}}|\nabla_i f(x^*)|$. \end{enumerate} In the first case we identify the variables that are zero at the solution after $\kappa \log(2L\norm{x^0 - x^*}/\min_{i \in \mathcal{Z}}\nabla_i f(x^*))$ iterations. If $\delta$, the minimum gradient over the active-set at the solution, is zero, then we may approach the active-set through the interior of the constraint and the active-set may never be identified (this is the purpose of the nondegeneracy condition). Similarly, for $\ell_1$-regularization this result also gives an upper bound on how long it takes to identify the sparsity pattern. Above we have bounded the number of iterations before $x_i^k = x_i^*$ for all $i \in \mathcal{Z}$. However, in the non-negative and $\ell_1$-regularized applications we might also be interested in the number of iterations before we always have $x_i^k \neq 0$ for all $i \not\in\mathcal{Z}$, and more generally in the number of iterations before the $x_i^k$ with $i\not\in\mathcal{Z}$ are no longer located at non-smooth or boundary values. It is straightforward to bound this quantity. Let $\Delta = \min_{i \not\in \mathcal{Z}}\{|x_i^n - x_i^*|\}$, where $x_i^n$ is the nearest non-smooth or boundary value along dimension $i$. Since~\eqref{eq:linConv} shows that the proximal-gradient method contracts the distance to $x^*$, it cannot set values $x_i^k$ for $i \not\in\mathcal{Z}$ to non-smooth or boundary values once $\norm{x^k - x^*} \leq \Delta$. It follows from~\eqref{eq:linConv} that at most $\kappa\log(\norm{x^0 - x^*}/\Delta)$ iterations are needed before the values $x_i^k$ with $i \not\in\mathcal{Z}$ only occur at smooth/non-boundary values.
\section{General step-size} The previous sections considered a step-size of $1/L$. In this section we extend our results to handle general constant step-sizes, which leads to a smaller active-set complexity if we use a larger step-size depending on $\mu$. To do this, we require the following result, which states the generalized convergence rate bound for the proximal gradient method. This result matches the known rate of the gradient method with a constant step-size for solving strictly-convex quadratic problems~\cite[\S 1.3]{bertsekas1999nonlinear}, and the rate of the projected-gradient algorithm with a constant step-size for minimizing strictly-convex quadratic functions over convex sets~\cite[\S 2.3]{bertsekas1999nonlinear}. \begin{theorem}\label{thm:generalstep} Consider problem~\eqref{eq:problem}, where $f$ is $\mu$-strongly convex with $L$-Lipschitz continuous gradient, and $g$ is proper convex and lower semi-continuous. Then for every iteration $k \ge 1$ of the proximal gradient method with a constant step-size $\alpha > 0$, we have \begin{equation} \label{eq:linConv2} \norm{x^k - x^*} \leq Q(\alpha)^k\norm{x^0 - x^*}, \end{equation} where $Q(\alpha) := \max \{ |1 - \alpha L|, |1 - \alpha \mu |\}$. \end{theorem} We give the proof in the Appendix. Theorem~\ref{thm:convergence} is a special case of Theorem~\ref{thm:generalstep} since $Q(1/ L) = 1 - \mu / L$. Further, Theorem~\ref{thm:generalstep} gives a faster rate if we minimize $Q$ in terms of $\alpha$ to give $\alpha = 2/(L + \mu)$, which yields a faster rate of \[ Q\left(\frac{2}{L+\mu}\right)= 1 - \frac{2\mu}{L + \mu} = \frac{L - \mu}{L + \mu}. \] This faster convergence rate for the proximal gradient method may be of independent interest in other settings, and we note that this result does not require $g$ to be separable. We also note that, although the theorem is true for any positive $\alpha$, it is only interesting for $\alpha < 2/L$ since for $\alpha \geq 2/L$ it does not imply convergence. 
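A quick numerical comparison of the two step-sizes, with hypothetical constants $L = 10$ and $\mu = 1$:

```python
# Sketch comparing the contraction factor Q(alpha) = max{|1 - alpha*L|, |1 - alpha*mu|}
# of the theorem above for the step-sizes 1/L and 2/(L + mu); constants made up.
L, mu = 10.0, 1.0

def Q(alpha):
    return max(abs(1 - alpha * L), abs(1 - alpha * mu))

q1 = Q(1 / L)            # equals 1 - mu/L = 0.9
q2 = Q(2 / (L + mu))     # equals (L - mu)/(L + mu) = 9/11
assert abs(q1 - (1 - mu / L)) < 1e-12
assert abs(q2 - (L - mu) / (L + mu)) < 1e-12
print(q1, q2)            # the larger step-size gives the smaller factor
```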
\begin{lemma}\label{lem:2} Consider problem \eqref{eq:problem}, where $f$ is $\mu$-strongly convex with $L$-Lipschitz continuous gradient, and the $g_i$ are proper convex and lower semi-continuous. Let Assumption \ref{assump:nondegeneracy} hold for the solution $x^*$. Then for any proximal gradient method with a constant step-size $0 < \alpha < 2/L$, there exists a $\bar{k}$ such that for all $k > \bar{k}$ we have $x_i^k = x_i^*$ for all $i \in \mathcal{Z}$. \end{lemma} We give the proof of Lemma~\ref{lem:2} in the Appendix, which shows that we identify the active-set when $\| x^k - x^* \| \le \delta \alpha/3$ is satisfied. Using this result, we prove the following active-set complexity result for proximal gradient methods when using a general fixed step-size (the proof is once again found in the Appendix). \begin{corollary}\label{cor:2} Consider problem \eqref{eq:problem}, where $f$ is $\mu$-strongly convex with $L$-Lipschitz continuous gradient (for $\mu < L$), and the $g_i$ are proper convex and lower semi-continuous. Let Assumption \ref{assump:nondegeneracy} hold for the solution $x^*$. Then for any proximal gradient method with a constant step-size $\alpha$, such that $0 < \alpha < 2/L$, the active-set will be identified after at most $ \frac{1}{\log (1/Q(\alpha))} \log(3 || x^0 - x^*||/ (\delta \alpha))$ iterations. \end{corollary} Finally, we note that as part of a subsequent work we have analyzed the active-set complexity of block coordinate descent methods~\cite{nutini2017}. The argument in that case is similar to the argument presented here. The main modification needed to handle coordinate-wise updates is that we must use a coordinate selection strategy that guarantees that we eventually select all $i \in \mathcal{Z}$ that are not at their optimal values for some finite $k \ge \bar{k}$. \section*{Appendix} {\it Proof of Theorem~\ref{thm:generalstep}}.
For any $\alpha > 0$, by the non-expansiveness of the proximal operator~\cite[Lem 2.4]{combettes2005} and the fact that $x^*$ is a fixed point of the proximal gradient update for any $\alpha > 0$, we have \begin{align*} &\| x^{k+1} - x^*\|^2 \\ &~~= \| \prox{\alpha g}(x^k - \alpha \nabla f(x^k)) - \prox{\alpha g}(x^* - \alpha \nabla f(x^*)) \|^2 \\ &~~\le \| (x^k - \alpha \nabla f(x^k)) - (x^* - \alpha \nabla f(x^*)) \|^2 \\ &~~= \| x^k - x^* - \alpha (\nabla f(x^k) - \nabla f(x^*)) \|^2 \\ &~~= \| x^k - x^* \|^2 - 2 \alpha \langle \nabla f(x^k) - \nabla f(x^*), x^k - x^* \rangle + \alpha^2 \| \nabla f(x^k) - \nabla f(x^*) \|^2. \end{align*} By the $L$-Lipschitz continuity of $\nabla f$ and the $\mu$-strong convexity of $f$, we have~\cite[Thm 2.1.12]{Nes04b} \[ \langle \nabla f(x^k) - \nabla f(x^*), x^k - x^* \rangle \ge \frac{1}{L+\mu} \| \nabla f(x^k) - \nabla f(x^*) \|^2 + \frac{L \mu}{L + \mu} \| x^k - x^* \|^2, \] which yields \[ \| x^{k+1} - x^*\|^2 \le \left ( 1 - \frac{2 \alpha L \mu}{L + \mu} \right ) \| x^k - x^* \|^2 + \alpha \left ( \alpha - \frac{2}{L + \mu} \right ) \| \nabla f(x^k) - \nabla f(x^*) \|^2. \] Further, by the $\mu$-strong convexity of $f$, we have for any $x, y \in {\rm I\!R}^n$~\cite[Thm 2.1.17]{Nes04b}, \[ \langle \nabla f(x) - \nabla f(y), x - y \rangle \ge \mu\|x - y\|^2, \] which by Cauchy-Schwarz gives \[ \| \nabla f(x) - \nabla f(y) \| \ge \mu \| x - y \|. \] Combining this with the $L$-Lipschitz continuity condition in \eqref{eq:lipschitz} shows that $\mu \leq L$. Therefore, for any $\beta \in {\rm I\!R}$ (positive or negative) we have \[ \beta \| \nabla f(x) - \nabla f(y) \|^2 \le \max \{ \beta L^2, \beta \mu^2 \} \|x - y \|^2.
\] Thus, for $\beta := \left ( \alpha - \frac{2}{L + \mu}\right )$, we have \begin{align*} &\| x^{k+1} - x^*\|^2 \\ &~~\le \left ( 1 - \frac{2 \alpha L \mu}{L + \mu} \right ) \| x^k - x^* \|^2 + \alpha \max \left \{ L^2 \beta, \mu^2 \beta \right \} \| x^k - x^* \|^2 \\ &~~= \max \left \{ \left ( 1 - \frac{2 \alpha L \mu}{L + \mu} \right ) + \alpha L^2 \beta, \left ( 1 - \frac{2 \alpha L \mu}{L + \mu} \right ) + \alpha \mu^2 \beta \right \} \| x^k - x^* \|^2 \\ &~~= \max \left \{ 1 - \frac{2 \alpha L (L + \mu)}{L + \mu} + \alpha^2 L^2, 1 - \frac{2 \alpha \mu (L + \mu)}{L + \mu} + \alpha^2 \mu^2 \right \} \| x^k - x^* \|^2 \\ &~~= \max \left \{ (1 - \alpha L)^2, (1 - \alpha \mu)^2\right \} \| x^k - x^* \|^2 \\ &~~= Q(\alpha)^2 \| x^k - x^* \|^2. \end{align*} Taking square roots and applying this contraction recursively, we obtain our result. \qed \noindent {\it Proof of Lemma~\ref{lem:2}}. By the definition of the proximal gradient step and the separability of $g$, for all $i$ we have \[ x_i^{k+1} \in \argmin{y}\left\{ \frac{1}{2}\left| y - \left(x^k_i - \alpha \nabla_i f(x^k)\right) \right|^2 + \alpha g_i(y)\right\}. \] This problem is strongly-convex with a unique solution that satisfies \begin{equation}\label{APPeq:proxSol} \frac{1}{\alpha}(x_i^k - y) - \nabla_i f(x^k) \in \partial g_i(y). \end{equation} By Theorem~\ref{thm:generalstep} and $\alpha < 2/L$, there exists a minimum finite iterate $\bar{k}$ such that $\norm{x^{\bar{k}}~-~x^*}~\leq~\delta \alpha /3$. Following similar steps as in Lemma~\ref{lem:activeset}, this implies that \begin{equation}\label{APPeq:itBound} -\delta \alpha /3 \leq x_i^k - x_i^* \leq \delta \alpha/3, \quad \text{for all $i$}, \end{equation} and by the Lipschitz continuity of $\nabla f$, we also have \begin{equation}\label{APPeq:gBound} -\delta \alpha L/3 - \nabla_i f(x^*) \leq - \nabla_i f(x^k) \leq \delta \alpha L/3- \nabla_i f(x^*).
\end{equation} To complete the proof it is sufficient to show that for any $k \geq \bar{k}$ and $i \in \mathcal{Z}$ that $y = x_i^*$ satisfies \eqref{APPeq:proxSol}. We first show that the left-side is less than the upper limit $u_i$ of the interval $\partial g_i(x_i^*)$, \begin{align*} \frac{1}{\alpha}(x_i^k - x_i^*) - \nabla_i f(x^k) & \leq \delta/3- \nabla_i f(x^k) & \text{(right-side of~\eqref{APPeq:itBound})}\\ & \leq \delta(1 + \alpha L)/3 - \nabla_i f(x^*) & \text{(right-side of~\eqref{APPeq:gBound})}\\ & \leq \delta - \nabla_i f(x^*) & \text{(upper bound on $\alpha$)}\\ & \leq (u_i + \nabla_i f(x^*)) - \nabla_i f (x^*) & \text{(definition of $\delta$,~\eqref{eq:Delta})}\\ & \leq u_i. \end{align*} Using the left-sides of~\eqref{APPeq:itBound} and~\eqref{APPeq:gBound}, and an analogous sequence of inequalities, we can show that $\frac{1}{\alpha}(x_i^k~-~x_i^*)~-~\nabla_i f(x^k) \geq l_i$, implying that $x_i^*$ solves~\eqref{APPeq:proxSol}. Since the solution to~\eqref{APPeq:proxSol} is unique, this implies the desired result. \qed \noindent {\it Proof of Corollary~\ref{cor:2}}. By Theorem \ref{thm:generalstep}, we know that the proximal gradient method achieves the following linear convergence rate, \[ \norm{x^k - x^*} \leq Q(\alpha)^k\norm{x^0 - x^*}. \] The proof of Lemma \ref{lem:2} shows that the active-set identification occurs whenever the inequality $\norm{x^k - x^*} \leq \delta \alpha/3$ is satisfied. Thus, we want \[ Q(\alpha)^k\norm{x^0 - x^*} \le \frac{\delta \alpha}{3}. \] Taking the $\log$ of both sides, we obtain \[ k \log \left (Q(\alpha) \right) + \log \left (\norm{x^0 - x^*} \right ) \le \log \left ( \frac{\delta \alpha}{3} \right ). \] Noting that $0 < Q(\alpha) < 1$ so $\log(Q(\alpha)) < 0$, we can rearrange to obtain \begin{align*} k &\ge \frac{1}{\log \left (Q(\alpha) \right)} \log \left (\frac{\delta \alpha}{3\norm{x^0 - x^*}} \right) = \frac{1}{\log(1/Q(\alpha))} \log \left (\frac{3\norm{x^0 - x^*}}{\delta \alpha} \right).
\end{align*} \qed \end{document}
\begin{document} \numberwithin{equation}{section} \title[Order extreme points]{Order extreme points and solid convex hulls} \author{T.~Oikhberg and M.A.~Tursi} \address{ Dept.~of Mathematics, University of Illinois, Urbana IL 61801, USA} \email{[email protected], [email protected]} \date{\today} \subjclass[2010]{46B22, 46B42} \keywords{Banach lattice, extreme point, convex hull, Radon-Nikod{\'y}m Property} \dedicatory{To the memory of Victor Lomonosov} \maketitle \parindent=0pt \parskip=3pt \begin{abstract} We consider the ``order'' analogues of some classical notions of Banach space geometry: extreme points and convex hulls. A Hahn-Banach type separation result is obtained, which allows us to establish an ``order'' Krein-Milman Theorem. We show that the unit ball of any infinite dimensional reflexive space contains uncountably many order extreme points, and investigate the set of positive norm-attaining functionals. Finally, we introduce the ``solid'' version of the Krein-Milman Property, and show it is equivalent to the Radon-Nikod{\'y}m Property. \end{abstract} \thispagestyle{empty} \section{Introduction}\label{s:intro} At the very heart of Banach space geometry lies the study of three interrelated subjects: (i) separation results (starting from the Hahn-Banach Theorem), (ii) the structure of extreme points, and (iii) convex hulls (for instance, the Krein-Milman Theorem on convex hulls of extreme points). Certain counterparts of these notions exist in the theory of Banach lattices as well. For instance, there are positive separation/extension results; see e.g. \cite[Section 1.2]{AB}. One can view solid convex hulls as lattice analogues of convex hulls; these objects have been studied, and we mention some of their properties in the paper. However, no unified treatment of all three phenomena listed above has been attempted. In the present paper, we endeavor to investigate the lattice versions of (i), (ii), and (iii) above.
We introduce the order version of the classical notion of an extreme point: if $A$ is a subset of a Banach lattice $X$, then $a \in A$ is called an \emph{order extreme point} of $A$ if for all $x_0, x_1 \in A$ and $t \in (0,1)$ the inequality $a \leq (1-t) x_0 + t x_1$ implies $x_0 = a = x_1$. Note that, in this case, if $x \geq a$ and $x \in A$, then $x = a$ (write $a \leq (x+a)/2$). Throughout, we work with real spaces. We will be using the standard Banach lattice results and terminology (found in, for instance, \cite{AB}, \cite{M-N} or \cite{Sch}). We also say that a subset of a Banach lattice is \textit{bounded} when it is norm bounded, as opposed to order bounded. Some special notation is introduced in Section \ref{s:definitions}. In the same section, we establish some basic facts about order extreme points and solid hulls. In particular, we note a connection between order and ``canonical'' extreme points (Theorem \ref{t:connection}). In Section \ref{s:separation} we prove a ``Hahn-Banach'' type result (Proposition \ref{p:separation2}), involving separation by positive functionals. This result is used in Section \ref{s:KM} to establish a ``solid'' analogue of the Krein-Milman Theorem. We prove that solid compact sets are solid convex hulls of their order extreme points (see Theorem \ref{t:KM}). A ``solid'' Milman Theorem is also proved (Theorem \ref{t:order-milman}). In Section \ref{s:examples} we study order extreme points in $AM$-spaces. For instance, we show that, for an AM-space $X$, the following three statements are equivalent: (i) $X$ is a $C(K)$ space; (ii) the unit ball of $X$ is the solid convex hull of finitely many of its elements; (iii) the unit ball of $X$ has an order extreme point (Propositions \ref{p:describe_AM} and \ref{p:only_C(K)_has_OEP}). Further in Section \ref{s:examples} we investigate norm-attaining positive functionals. 
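The distinction between classical and order extreme points can be checked by brute force in small dimensions. The following sketch (our own illustration, not from the paper) works on a grid discretization of $\mathbf{B}(\ell_\infty^2)_+ = [0,1]^2$; the helper `dominated` is an assumption of the sketch and, for simplicity, only tests the combination $t = 1/2$. It finds that $(1,1)$ is the only order extreme point, even though all four corners of $[0,1]^2$ are extreme points in the classical sense.

```python
# Brute-force, finite-dimensional sanity check of the order extreme point
# definition on a grid discretization of [0,1]^2 = B(l_inf^2)_+.
import itertools

pts = [(i / 4, j / 4) for i in range(5) for j in range(5)]   # grid in [0,1]^2

def dominated(a, pts):
    # a fails to be order extreme if a <= (x0 + x1)/2 coordinatewise for some
    # pair (x0, x1) other than (a, a); e.g. (1,0) <= (1,1) with (1,1) in the set.
    for x0, x1 in itertools.product(pts, repeat=2):
        if (x0, x1) == (a, a):
            continue
        m = ((x0[0] + x1[0]) / 2, (x0[1] + x1[1]) / 2)
        if m[0] >= a[0] and m[1] >= a[1]:
            return True
    return False

order_extreme = [p for p in pts if not dominated(p, pts)]
print(order_extreme)   # only the top corner (1, 1) survives
```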
Functionals attaining their maximum on certain sets have been investigated since the early days of functional analysis; here we must mention V.~Lomonosov's papers on the subject (see e.g.~the excellent summary \cite{ArL}, and the references contained there). In this paper, we show that a separable AM-space is a $C(K)$ space iff any positive functional on it attains its norm (Proposition \ref{p:only_C(K)_attain_norm}). On the other hand, an order continuous lattice is reflexive iff every positive functional on it attains its norm (Proposition \ref{p:functional_OC}). In Section \ref{s:how_many_extreme_points} we show that the unit ball of any reflexive infinite-dimensional Banach lattice has uncountably many order extreme points (Theorem \ref{t:uncount_many}). Finally, in Section \ref{s:SKMP} we define the ``solid'' version of the Krein-Milman Property, and show that it is equivalent to the Radon-Nikod{\'y}m Property (Theorem \ref{t:RNP}). To close this introduction, we would like to mention that related ideas have been explored before, in other branches of functional analysis. In the theory of $C^*$ algebras, and, later, operator spaces, the notions of ``matrix'' or ``$C^*$'' extreme points and convex hulls have been used. The reader is referred to e.g.~\cite{EWe}, \cite{EWi}, \cite{FaM}, \cite{WeWi} for more information; for a recent operator-valued separation theorem, see \cite{Mag}. \section{Preliminaries}\label{s:definitions} In this section, we introduce the notation commonly used in the paper, and mention some basic facts. The closed unit ball (sphere) of a Banach space $X$ is denoted by $\mathbf{B}(X)$ (resp.~$\mathbf{S}(X)$). If $X$ is a Banach lattice, and $C \subset X$, write $C_+ = C \cap X_+$, where $X_+$ stands for the positive cone of $X$. Further, we say that $C \subset X$ is \emph{solid} if, for $x \in X$ and $z \in C$, the inequality $|x| \leq |z|$ implies the inclusion $x \in C$.
In particular, $x \in X$ belongs to $C$ if and only if $|x|$ does. Note that any solid set is automatically \emph{balanced}; that is, $C = -C$. Restricting our attention to the positive cone $X_+$, we say that $C \subset X_+$ is \emph{positive-solid} if for any $x\in X_+$, the existence of $z \in C$ satisfying $x \leq z$ implies the inclusion $x \in C$. We will denote the set of order extreme points of $C$ (defined in Section \ref{s:intro}) by $\mathrm{OEP}(C)$; the set of ``classical'' extreme points is denoted by $\mathrm{EP}(C)$. \begin{remark}\label{r:G_delta} It is easy to see that the set of all extreme points of a compact metrizable set is $G_\delta$. The same can be said for the set of order extreme points of $A$, whenever $A$ is a closed solid bounded subset of a separable reflexive Banach lattice. Indeed, then the weak topology is induced by a metric $d$. For each $n$ let $F_n$ be the set of all $x \in A$ for which there exist $x_1, x_2 \in A$ with $x \leq (x_1 + x_2)/2$, and $d(x_1, x_2) \geq 1/n$. By compactness, $F_n$ is closed. Now observe that $\cup_n F_n$ is the complement of the set of all order extreme points. \end{remark} Note that every order extreme point is an extreme point in the usual sense, but the converse is not true: for instance, $\mathbf{1}_{(0,1)}$ is an extreme point of $\mathbf{B}(L_\infty(0,2))_+$, but not its order extreme point. However, a connection between ``classical'' and order extreme points exists: \begin{theorem}\label{t:connection} Suppose $A$ is a solid subset of a Banach lattice $X$. Then $a$ is an extreme point of $A$ if and only if $|a|$ is its order extreme point. \end{theorem} The proof of Theorem \ref{t:connection} uses the notion of a quasi-unit. Recall \cite[Definition 1.2.6]{M-N} that for $e,v \in X_+$, $v$ is a \textit{quasi-unit} of $e$ if $v \wedge (e-v) = 0$.
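For instance, take $X = L_\infty(0,2)$ and $e = \mathbf{1}_{(0,2)}$. For $v \in X_+$ we have \[ v \wedge (e-v) = 0 \, \Longleftrightarrow \, \min\{v(t), 1-v(t)\} = 0 \ {\textrm{for a.e.}} \ t \, \Longleftrightarrow \, v = \mathbf{1}_E \ {\textrm{for some measurable}} \ E \subset (0,2), \] so the quasi-units of $e$ are exactly the indicator functions.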
This terminology is not universally accepted: the same objects can be referred to as \textit{components} \cite{AB}, or \textit{fragments} \cite{PR}. \begin{proof} Suppose $|a|$ is order extreme. Let $0<t<1$ be such that $a = tx+(1-t)y$. Then since $A$ is solid and $|a| \leq t|x| + (1-t)|y|$, one has $|x| = |y| = |a|$. Thus the latter inequality is in fact an equality. Thus $|a|+a = 2a_+ = 2t x_+ +2(1-t)y_+$, so $a_+ = tx_+ +(1-t)y_+$. Similarly, $a_- = tx_- + (1-t)y_-$. It follows that $x_+ \perp y_-$ and $x_- \perp y_+$. Since $ x_+ + x_- =|x| = |y| = y_+ +y_- $, we have that $x_+ = x_+ \wedge (y_+ +y_-) = x_+\wedge y_+ + x_+ \wedge y_-$ (since $y_+, y_-$ are disjoint). Now since $x_+ \perp y_-$, the latter is just $x_+ \wedge y_+$, hence $x_+ \leq y_+$. By a similar argument one can show the opposite inequality to conclude that $x_+ = y_+$, and likewise $x_-=y_-$, so $x=y=a$. Now suppose $a$ is extreme. It is sufficient to show that $|a| $ is order extreme for $A_+$. Indeed, if $|a| \leq tx + (1-t)y$ (with $0 \leq t \leq 1$ and $x, y \in A$), then $|a| \leq t|x|+(1-t)|y|$. As $|a|$ is an order extreme point of $A_+$, we conclude that $|x| = |y| = |a|$, so $|a| = tx+(1-t)y= t|x|+(1-t)|y|$. The latter implies that $x_- = y_-=0$, hence $x=|x| =|a|=|y|=y$. Therefore, suppose $|a| \leq tx + (1-t)y$ with $0 \leq t \leq 1$, and $x,y \in A_+$. First we show that $|a|$ is a quasi-unit of $x$ (and, by a similar argument, of $y$). To this end, note that $a_+ - tx \wedge a_+ \leq (1-t)y\wedge a_+$. Since $A$ is solid, \[A \ni z_+:= \frac{1}{1-t}( a_+ -tx\wedge a_+ ), \] and similarly, since $a_- - tx \wedge a_- \leq (1-t)y\wedge a_-$, \[A \ni z_-:= \frac{1}{1-t}( a_- -tx\wedge a_-). \] These inequalities imply that $z_+ \perp z_-$, so they correspond to the positive and negative parts of some $z = z_+ -z_-$. Also, $z\in A$ since $|z| \leq |a|$. Now $a_+ = t(x\wedge \frac{a_+}{t}) +(1-t) z_+$ and $a_- = t(x\wedge \frac{a_-}{t}) +(1-t) z_-$.
In addition, $|x\wedge\frac{a_+}{t} - x\wedge\frac{a_-}{t}| \leq x$, so since $A$ is solid, \[z':= x\wedge\frac{a_+}{t} - x\wedge\frac{a_-}{t} \in A. \] Therefore $ a=a_+ -a_- = tz'+(1-t)z$. Since $a$ is an extreme point, $a=z$, hence \[(1-t)z_+ =(1-t) a_+ = a_+ -tx\wedge a_+ \] so $tx \wedge a_+ = ta_+ $ which implies that $(t(x-a_+))\wedge((1-t)a_+) = 0$. As $0<t<1$, we have that $a_+$ (and likewise $a_-$) is a quasi-unit of $x$ (and similarly of $y$). Thus $|a|$ is a quasi-unit of $x$ and of $y$. Now let $s = x-|a|$. Then $a+s, a-s \in A$, since $|a \pm s|= x$. We have \[ a = \frac{a-s}{2} +\frac{a+s}{2}, \] but since $a$ is extreme, $s$ must be $0$. Hence $x=|a|$, and similarly $y=|a|$. \end{proof} The situation is different if $A$ is a positive-solid set: the paragraph preceding Theorem \ref{t:connection} shows that $A$ can have extreme points which are not order extreme. If, however, a positive-solid set satisfies certain compactness conditions, then some connections between extreme and order extreme points can be established; see Proposition \ref{p:under_oep}, and the remark following it. If $C$ is a subset of a Banach lattice $X$, denote by ${\mathrm{S}}(C)$ the \emph{solid hull} of $C$, which is the smallest solid set containing $C$. It is easy to see that ${\mathrm{S}}(C)$ is the set of all $z \in X$ for which there exists $x \in C$ satisfying $|z| \leq |x|$. Clearly ${\mathrm{S}}(C) = {\mathrm{S}}(|C|)$, where $|C| = \{|x| : x \in C\}$. Further, we denote by ${\mathrm{CH}}(C)$ the \emph{convex hull} of $C$. For future reference, observe: \begin{proposition}\label{p:interchange} If $X$ is a Banach lattice, then ${\mathrm{S}}({\mathrm{CH}}(|C|)) = {\mathrm{CH}}({\mathrm{S}}(C))$ for any $C \subset X$. \end{proposition} \begin{proof} Let $x\in {\mathrm{CH}}({\mathrm{S}}(C))$. Then $x = \sum a_iy_i,$ where $\sum a_i = 1, a_i > 0$, and $|y_i| \leq |k_i|$ for some $k_i \in C$.
Then \[|x| \leq \sum a_i|y_i| \leq \sum a_i |k_i| \in {\mathrm{CH}}(|C|), \] so $x\in {\mathrm{S}}({\mathrm{CH}}(|C|)).$ If $x\in {\mathrm{S}}({\mathrm{CH}}(|C|))$, then \[|x| \leq \sum_1^n a_i y_i,\quad y_i \in |C|,\quad 0 < a_i, \quad \sum a_i = 1. \] We use induction on $n$ to prove that $x \in {\mathrm{CH}}({\mathrm{S}}(C))$. If $n= 1$, $x\in {\mathrm{S}}(C)$ and we are done. Now, suppose we have shown that if $|x| \leq \sum_1^{n-1} a_iy_i$ then there are $z_1,\ldots,z_{n-1} \in {\mathrm{S}}(C)_+$ such that $|x|= \sum_1^{n-1}a_iz_i$. From there, we have that \[|x| = (\sum_1^n a_i y_i)\wedge |x| \leq (\sum_1^{n-1} a_iy_i)\wedge |x| + (a_ny_n)\wedge |x|. \] Now \[ 0 \leq |x| - (\sum_1^{n-1} a_iy_i)\wedge |x| \leq a_n(y_n\wedge \frac{|x|}{a_n}). \] Let $z_n :=\frac{1}{a_n}(|x| - (\sum_1^{n-1} a_iy_i)\wedge |x|)$. By the above, $z_n \in {\mathrm{S}}(C)_+$. Furthermore, \[\frac{1}{1-a_n}(|x| \wedge \sum_1^{n-1}a_iy_i) \leq \sum_1^{n-1} \frac{a_i}{1-a_n} y_i \in {\mathrm{CH}}(|C|), \] so by induction there exist $z_1,\ldots,z_{n-1} \in {\mathrm{S}}(C)_+$ such that \[ \frac{1}{1-a_n}\big(|x|\wedge \sum_1^{n-1}a_iy_i\big) = \sum_1^{n-1} \frac{a_i}{1-a_n} z_i, \] that is, $|x|\wedge \sum_1^{n-1}a_iy_i = \sum_1^{n-1} a_iz_i$. Therefore $|x| = \sum_1^n a_iz_i$. Now for each $i$, $a_iz_i \leq |x|$, so $|x| = \sum \big( (a_iz_i) \wedge|x|\big)$, and \[a_iz_i = a_iz_i\wedge x_+ +a_iz_i\wedge x_- = a_i( z_i\wedge(\frac{x_+}{a_i}) + z_i\wedge(\frac{x_-}{a_i}) ).\] Let $w_i = z_i\wedge(\frac{x_+}{a_i}) - z_i\wedge(\frac{x_-}{a_i})$. Note that $|w_i| = z_i$, so $w_i \in {\mathrm{S}}(C)$. It follows that $x= \sum a_iw_i \in {\mathrm{CH}}({\mathrm{S}}(C))$. \end{proof} For $C \subset X$ (as before, $X$ is a Banach lattice) we define the \emph{solid convex hull} of $C$ to be the smallest convex, solid set containing $C$, and denote it by ${\mathrm{SCH}}(C)$; the norm (equivalently, weak) closure of the latter set is denoted by ${\mathrm{CSCH}}(C)$, and referred to as the \emph{closed solid convex hull} of $C$.
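To visualize the hulls just introduced, consider the simplest non-trivial example: $X = \mathbb{R}^2$ with the coordinatewise order, and $C = \{(1,0), (0,1)\}$. Then \[ {\mathrm{S}}(C) = \big( [-1,1] \times \{0\} \big) \cup \big( \{0\} \times [-1,1] \big), \qquad {\mathrm{SCH}}(C) = {\mathrm{CH}}\big({\mathrm{S}}(C)\big) = \{(s,t) : |s| + |t| \leq 1\}, \] that is, ${\mathrm{SCH}}(C)$ is the unit ball of $\ell_1^2$. Its order extreme points are $(1,0)$ and $(0,1)$, while its extreme points are the four vectors $(\pm 1, 0)$ and $(0, \pm 1)$; in accordance with Theorem \ref{t:connection}, the extreme points are exactly those whose moduli are order extreme.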
\begin{corollary}\label{c:equal-sch} Let $C\subseteq X$. Then\begin{enumerate} \item ${\mathrm{SCH}}(C) = {\mathrm{CH}}({\mathrm{S}}(C)) = {\mathrm{SCH}}(|C|)$, and consequently, ${\mathrm{CSCH}}(C) = {\mathrm{CSCH}}(|C|)$. \item If $C \subseteq X_+$, then ${\mathrm{SCH}}(C) = {\mathrm{S}}({\mathrm{CH}}(C))$. \end{enumerate} \end{corollary} \begin{proof} (1) Suppose $C\subseteq D$, where $D$ is convex and solid. Then ${\mathrm{CH}}({\mathrm{S}}(C)) \subseteq D$. Consequently, ${\mathrm{CH}}({\mathrm{S}}(C)) \subset {\mathrm{SCH}}(C)$. On the other hand, by Proposition \ref{p:interchange}, ${\mathrm{CH}}({\mathrm{S}}(C))$ is also solid, so ${\mathrm{SCH}}(C) \subseteq {\mathrm{CH}}({\mathrm{S}}(C))$. Thus, ${\mathrm{SCH}}(C) = {\mathrm{CH}}({\mathrm{S}}(C)) = {\mathrm{CH}}({\mathrm{S}}(|C|)) = {\mathrm{SCH}}(|C|)$. \\ (2) This follows from (1) and the equality in Proposition \ref{p:interchange}. \end{proof} \begin{remark}\label{r:shadow-not-closed} The two examples below show that ${\mathrm{S}}(C)$ need not be closed, even if $C$ itself is. Example (1) exhibits an unbounded closed set $C$ with ${\mathrm{S}}(C)$ not closed; in Example (2), $C$ is closed and bounded, but the ambient Banach lattice needs to be infinite dimensional. (1) Let $X$ be a Banach lattice of dimension at least two, and consider disjoint norm one $e_1, e_2 \in \mathbf{B}(X)_+$. Let $C = \{ x_n : n \in \mathbb{N}\}$, where $x_n = \frac{n}{n+1}e_1 +ne_2$. Now, $C$ is norm-closed: if $m > n$, then $\|x_m-x_n\| \geq \|e_2\| = 1$. However, ${\mathrm{S}}(C)$ is not closed: it contains $r e_1$ for any $r \in (0,1)$, but not $e_1$. (2) If $X$ is infinite dimensional, then there exists a closed \emph{bounded} $C \subset X_+$, for which ${\mathrm{S}}(C)$ is not closed. Indeed, find disjoint norm one elements $e_1, e_2, \ldots \in X_+$. For $n \in \mathbb{N}$ let $y_n = \sum_{k=1}^n 2^{-k} e_k$ and $x_n = y_n + e_n$. 
Then clearly $\|x_n\| \leq 2$ for any $n$; further, $\|x_n - x_m\| \geq 1$ for any $n \neq m$, hence $C = \{x_1, x_2, \ldots\}$ is closed. However, $y_n \in {\mathrm{S}}(C)$ for any $n$, and the sequence $(y_n)$ converges to $\sum_{k=1}^\infty 2^{-k} e_k \notin {\mathrm{S}}(C)$. \end{remark} However, under certain conditions we can show that the solid hull of a closed set is closed. \begin{proposition}\label{p:conv_closed_KB} A Banach lattice $X$ is reflexive if and only if, for any norm closed, bounded convex $C \subset X_+$, ${\mathrm{S}}(C)$ is norm closed. \end{proposition} \begin{proof} Suppose first that $X$ is reflexive, and $C$ is a norm closed bounded convex subset of $X_+$. Suppose $(x_n)$ is a sequence in ${\mathrm{S}}(C)$, which converges to some $x$ in norm; we show that $x$ belongs to ${\mathrm{S}}(C)$ as well. Clearly $|x_n| \rightarrow |x|$ in norm. For each $n$ find $y_n \in C$ so that $|x_n| \leq y_n$. By passing to a subsequence if necessary, we assume that the sequence $(y_n)$ converges to some $y \in X$ in the weak topology. For convex sets, norm and weak closures coincide, hence $y$ belongs to $C$. For each $n$, $\pm x_n \leq y_n$; passing to the weak limit gives $\pm x \leq y$, hence $|x| \leq y$. Now suppose $X$ is not reflexive. By \cite[Theorem 4.71]{AB}, there exists a sequence of disjoint elements $e_i \in \mathbf{S}(X)_+$, equivalent to the natural basis of either $c_0$ or $\ell_1$. First consider the $c_0$ case. Let $C$ be the closed convex hull of $$ x_1 = \frac{e_1}{2}, \, \, x_n = \big(1 - 2^{-n} \big) e_1 + \sum_{j=2}^n e_j \, \, (n \geq 2) . $$ We shall show that any element of $C$ can be written as $c e_1 + \sum_{i=2}^\infty c_i e_i$, with $c < 1$. This will imply that ${\mathrm{S}}(C)$ is not closed: clearly $e_1 \in \overline{{\mathrm{S}}(C)}$.
The elements of ${\mathrm{CH}}(x_1, x_2, \ldots)$ are of the form $\sum_{i=1}^\infty t_i x_i = c e_1 + \sum_{i=2}^\infty c_i e_i$; here, $t_i \geq 0$, $t_i \neq 0$ for finitely many values of $i$ only, and $\sum_i t_i = 1$. Note that $c_i = \sum_{j=i}^\infty t_j$ for $i \geq 2$ (so $c_i = 0$ eventually); for convenience, let $c_1 = \sum_{j=1}^\infty t_j = 1$. Then $t_i = c_i - c_{i+1}$; Abel's summation technique gives $$ c = \sum_{i=1}^\infty \big( 1 - 2^{-i} \big) t_i = 1 - \sum_{i=1}^\infty 2^{-i} \big( c_i - c_{i+1} \big) = \frac12 + \sum_{j=2}^\infty 2^{-j} c_j . $$ Now consider $x \in C$. Then $x$ is the norm limit of the sequence $$ x^{(m)} = c^{(m)} e_1 + \sum_{i=2}^\infty c_i^{(m)} e_i \in {\mathrm{CH}}(x_1, x_2, \ldots) ; $$ for each $m$, the sequence $(c_i^{(m)})$ has only finitely many non-zero terms, $c^{(m)} = \frac12 + \sum_{j=2}^\infty 2^{-j} c_j^{(m)}$, and for all $m,n \in \mathbb{N}$, $|c_i^{(m)} - c_i^{(n)}| \leq \|x^{(m)} - x^{(n)}\|$. Thus, $x = c e_1 + \sum_{i=2}^\infty c_i e_i$, with $c = \frac12 + \sum_{j=2}^\infty 2^{-j} c_j$. As $0 \leq c_j \leq 1$, and $\lim_j c_j = 0$, we conclude that $c < 1$, as claimed. Now suppose $(e_i)$ are equivalent to the natural basis of $\ell_1$. Let $C$ be the closed convex hull of the vectors $$ x_n = \big(1 - 2^{-n} \big) e_1 + e_n \, \, (n \geq 2) , $$ and show that $e_1 \in \overline{{\mathrm{S}}(C)} \backslash {\mathrm{S}}(C)$. Note that $$ C = \Big\{ \Big( \sum_{i=2}^\infty \big(1 - 2^{-i} \big) t_i \Big) e_1 + \sum_{i=2}^\infty t_i e_i : t_2, t_3, \ldots \geq 0 , \sum_{i=2}^\infty t_i = 1 \Big\} . $$ Clearly $e_1$ belongs to $\overline{{\mathrm{S}}(C)}$, but not to ${\mathrm{S}}(C)$. \end{proof} \section{Separation by positive functionals}\label{s:separation} Throughout the section, $X$ is a Banach lattice, equipped with a locally convex Hausdorff topology $\tau$.
This topology is called \emph{sufficiently rich} if the following conditions are satisfied: \begin{enumerate}[(i)] \item The space $X^\tau$ of $\tau$-continuous functionals on $X$ is a Banach lattice (with lattice operations defined by Riesz-Kantorovich formulas). \item $X_+$ is $\tau$-closed. \end{enumerate} Note that (i) and (ii) together imply that positive $\tau$-continuous functionals separate points. That is, for every $x \in X \backslash \{0\}$ there exists $f \in X^\tau_+$ so that $f(x) \neq 0$. Indeed, without loss of generality, $x_+ \neq 0$. Then $- x_+ \notin X_+$, hence there exists $f \in X^\tau_+$ so that $f(x_+) > 0$. By \cite[Proposition 1.4.13]{M-N}, there exists $g \in X^\tau_+$ so that $g(x_+) > f(x_+)/2$ and $g(x_-) < f(x_+)/2$. Then $g(x) > 0$. Clearly, the norm and weak topologies are sufficiently rich; in this case, $X^\tau = X^*$. The weak$^*$ topology on $X$, induced by the predual Banach lattice $X_*$, is sufficiently rich as well; then $X^\tau = X_*$. \begin{proposition}[Separation]\label{p:separation2} Suppose $\tau$ is a sufficiently rich topology on a Banach lattice $X$, and $A \subset X_+$ is a $\tau$-closed positive-solid bounded subset of $X_+$. Suppose, furthermore, $x \in X_+$ does not belong to $A$. Then there exists $f \in X^\tau_+$ so that $f(x) > \sup_{a \in A} f(a)$. \end{proposition} \begin{lemma}\label{l:max} Suppose $A$ and $X$ are as above, and $f \in X^\tau$. Then $\sup_{a \in A} f(a)$ $= \sup_{a \in A} f_+(a)$. \end{lemma} \begin{proof} Clearly $\sup_{a \in A} f(a) \leq \sup_{a \in A} f_+(a)$. To prove the reverse inequality, write $f = f_+ - f_-$, with $f_+ \wedge f_- = 0$. Fix $a \in A$; then $$ 0 = \big[ f_+ \wedge f_- \big](a) = \inf_{0 \leq x \leq a} \big( f_+(a-x) + f_-(x) \big) . $$ For any $\varepsilon > 0$ we can find $x$ with $0 \leq x \leq a$ so that $f_+(a-x), f_-(x) < \varepsilon$; as $A$ is positive-solid, $x \in A$. Then $f_+(x) = f_+(a) - f_+(a-x) > f_+(a) - \varepsilon$, and therefore, $f(x) = f_+(x) - f_-(x) > f_+(a) - 2\varepsilon$.
Now recall that $\varepsilon > 0$ and $a \in A$ are arbitrary. \end{proof} \begin{proof}[Proof of Proposition \ref{p:separation2}] Use the Hahn-Banach Theorem to find $f \in X^\tau$ strictly separating $x$ from $A$. By Lemma \ref{l:max}, $f_+$ achieves the separation as well. \end{proof} \begin{remark}\label{r:lattice_needed} In this paper, we do not consider separation results on general ordered spaces. Our reasoning will fail without lattice structure. For instance, Lemma \ref{l:max} is false when $X$ is not a lattice, but merely an ordered space. Indeed, consider $X = M_2$ (the space of real $2 \times 2$ matrices), $\displaystyle f = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$, and $A = \{t a_0 : 0 \leq t \leq 1\}$, where $\displaystyle a_0 = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$; one can check that $A = \{x \in M_2 : 0 \leq x \leq a_0\}$. Then $f|_A = 0$, while $\displaystyle \sup_{x \in A} f_+(x) = 1$. The reader interested in the separation results in the non-lattice ordered setting is referred to an interesting result of \cite{FW}, recently re-proved in \cite{Ama}. \end{remark} \section{Solid convex hulls: theorems of Krein-Milman and Milman}\label{s:KM} Throughout this section, the topology $\tau$ is assumed to be sufficiently rich (defined in the beginning of Section \ref{s:separation}). \begin{theorem}[``Solid'' Krein-Milman]\label{t:KM} Any $\tau$-compact positive-solid subset $A$ of $X_+$ coincides with the $\tau$-closed positive-solid convex hull of its order extreme points. \end{theorem} \begin{proof} Let $A$ be a $\tau$-compact positive-solid subset of $X_+$. Denote the $\tau$-closed positive convex hull of $\mathrm{OEP}(A)$ by $B$; then clearly $B \subset A$. The proof of the reverse inclusion is similar to that of the ``usual'' Krein-Milman. Suppose $C$ is a $\tau$-compact subset of $X$.
We say that a non-void closed $F \subset C$ is an \emph{order extreme subset} of $C$ if, whenever $x \in F$ and $a_1, a_2 \in C$ satisfy $x \leq (a_1 + a_2)/2$, then necessarily $a_1, a_2 \in F$. The set ${\mathcal{F}}(C)$ of order extreme subsets of $C$ can be ordered by reverse inclusion (this makes $C$ the minimal order extreme subset of itself). By compactness, each chain has an upper bound; therefore, by Zorn's Lemma, ${\mathcal{F}}(C)$ has a maximal element. We claim that these maximal elements are singletons, and they are the order extreme points of $C$. We need to show that, if $F \in {\mathcal{F}}(C)$ is not a singleton, then there exists $G \subsetneq F$ which is also an order extreme set. To this end, find distinct $a_1, a_2 \in F$, and $f \in X^\tau_+$ which separates them -- say $f(a_1) > f(a_2)$. Let $\alpha = \max_{x \in F} f(x)$, then $G = F \cap f^{-1}(\alpha)$ is a proper, order extreme subset of $F$. Suppose, for the sake of contradiction, that there exists $x \in A \backslash B$. Use Proposition \ref{p:separation2} to find $f \in X^\tau_+$ so that $f(x) > \max_{y \in B} f(y)$. Let $\alpha = \max_{x \in A} f(x)$, then $A \cap f^{-1}(\alpha)$ is an order extreme subset of $A$, disjoint from $B$. As noted above, this subset contains at least one order extreme point. This yields a contradiction, as we started out assuming all order extreme points lie in $B$. \end{proof} \begin{corollary}\label{c:KM} Any $\tau$-compact solid subset of $X$ coincides with the $\tau$-closed solid convex hull of its order extreme points. \end{corollary} Of course, there exist Banach lattices whose unit ball has no order extreme points at all -- $L_1(0,1)$, for instance. However, an order analogue of \cite[Lemma 1]{Linl1} holds. \begin{proposition}\label{p:some_vs_all} For a Banach lattice $X$, the following two statements are equivalent: \begin{enumerate} \item Every bounded closed solid convex subset of $X$ has an order extreme point.
\item Every bounded closed solid convex subset of $X$ is the closed solid convex hull of its order extreme points. \end{enumerate} \end{proposition} \begin{proof} (2) $\Rightarrow$ (1) is evident; we shall prove (1) $\Rightarrow$ (2). Suppose $A \subset X$ is closed, bounded, convex, and solid. Let $B = {\mathrm{CSCH}}(\mathrm{OEP}(A))$ (which is not empty, by (1)). Suppose, for the sake of contradiction, that $B$ is a proper subset of $A$. Let $a \in A \backslash B$. Since $B$ and $A$ are solid, $|a| \in A \backslash B$ as well, so without loss of generality we assume that $a \geq 0$. Then there exists $f \in \mathbf{S}(X^*)_+$ which strictly separates $a$ from $B$; consequently, $$ \sup_{x \in A} f(x) \geq f(a) > \sup_{x \in B} f(x) . $$ Fix $\varepsilon > 0$ so that $$ 2 \sqrt{2 \varepsilon} \alpha < \sup_{x \in A} f(x) - \sup_{x \in B} f(x) , \, \, {\textrm{where}} \, \, \alpha = \sup_{x \in A} \|x\| . $$ By the Bishop-Phelps-Bollob\'as Theorem (see e.g. \cite{Boll} or \cite{CKMetc}), there exists $f' \in \mathbf{S}(X^*)$, attaining its maximum on $A$, so that $\|f - f'\| \leq \sqrt{2 \varepsilon}$. Let $g = |f'|$, then $\|f - g\| \leq \|f - f'\| \leq \sqrt{2 \varepsilon}$. Further, $g$ attains its maximum on $A_+$, and $\max_{x \in A} g(x) > \sup_{x \in B} g(x)$. Indeed, the first statement follows immediately from the definition of $g$. To establish the second one, note that the triangle inequality gives us $$ \sup_{x \in B} g(x) \leq \sqrt{2 \varepsilon} \alpha + \sup_{x \in B} f(x) , \, \, \, \sup_{x \in A} g(x) \geq \sup_{x \in A} f(x) - \sqrt{2 \varepsilon} \alpha . $$ Our assumption on $\varepsilon$ gives us $\max_{x \in A} g(x) > \sup_{x \in B} g(x)$. Let $D = \{a \in A : g(a) = \sup_{x \in A} g(x)\}$. Due to (1), $D$ has an order extreme point which is also an order extreme point of $A$; this point lies inside of $B$, leading to the desired contradiction.
\end{proof} Milman's theorem \cite[3.25]{Rud} states that, if both $K$ and $\overline{{\mathrm{CH}}(K)}^\tau$ are compact, then $\mathrm{EP}\big(\overline{{\mathrm{CH}}(K)}^\tau\big) \subset K$. An order analogue of Milman's theorem exists: \begin{theorem}\label{t:order-milman} Suppose $X$ is a Banach lattice. \begin{enumerate} \item If $K \subset X_+$ and $\overline{{\mathrm{CH}}(K)}^\tau$ are $\tau$-compact, then $\mathrm{OEP}\big(\overline{{\mathrm{SCH}}(K)}^\tau\big) \subseteq K$. \item If $K \subset X_+$ is weakly compact, then $\mathrm{OEP}({\mathrm{CSCH}}(K)) \subseteq K$. \item If $K \subset X$ is norm compact, then $\mathrm{OEP}({\mathrm{CSCH}}(K)) \subseteq |K|$. \end{enumerate} \end{theorem} The following lemma describes the solid hull of a $\tau$-compact set. \begin{lemma}\label{l:compact_closed} Suppose a Banach lattice $X$ is equipped with a sufficiently rich topology $\tau$. If $C \subset X_+$ is $\tau$-compact, then ${\mathrm{S}}(C)$ is $\tau$-closed. \end{lemma} \begin{proof} Suppose a net $(y_i) \subset {\mathrm{S}}(C)$ $\tau$-converges to $y \in X$. For each $i$ find $x_i \in C$ so that $|y_i| \leq x_i$ -- or equivalently, $y_i \leq x_i$ and $-y_i \leq x_i$. Passing to a subnet if necessary, we assume that $x_i \to x \in C$ in the topology $\tau$. Then $\pm y \leq x$, which is equivalent to $|y| \leq x$. \end{proof} \begin{proof}[Proof of Theorem \ref{t:order-milman}] (1) We first consider a $\tau$-compact $K\subseteq X_+$. Milman's classical theorem shows that $\mathrm{EP}\big(\overline{{\mathrm{CH}}(K)}^\tau\big) \subseteq K$. Every order extreme point of a set is extreme, hence the order extreme points of $\overline{{\mathrm{CH}}(K)}^\tau$ are in $K$. Therefore, by Lemma \ref{l:compact_closed} and Corollary \ref{c:equal-sch}, \[\overline{{\mathrm{SCH}}(K)}^\tau = \overline{{\mathrm{S}}({\mathrm{CH}}(K))}^\tau \subseteq {\mathrm{S}}\big(\overline{{\mathrm{CH}}(K)}^\tau\big) = \{x: |x| \leq y {\textrm{ for some }} y \in \overline{{\mathrm{CH}}(K)}^\tau \} .
\] Thus, the points of $\overline{{\mathrm{SCH}}(K)}^\tau \backslash \overline{{\mathrm{CH}}(K)}^\tau$ cannot be order extreme, since each of them is dominated by an element of $\overline{{\mathrm{CH}}(K)}^\tau$. Therefore $\mathrm{OEP}\big(\overline{{\mathrm{SCH}}(K)}^\tau\big) \subseteq \mathrm{OEP}\big(\overline{{\mathrm{CH}}(K)}^\tau\big) \subseteq K$. (2) Combine (1) with Krein's Theorem (see e.g.~\cite[Theorem 3.133]{FHHMZ}), which states that $\overline{{\mathrm{CH}}(K)}^w = \overline{{\mathrm{CH}}(K)}$ is weakly compact. (3) Finally, suppose $K\subseteq X$ is norm compact. By Corollary \ref{c:equal-sch}, ${\mathrm{CSCH}}(K) = {\mathrm{CSCH}}(|K|)$. $|K|$ is norm compact, hence by \cite[Theorem 3.20]{Rud}, so is $\overline{{\mathrm{CH}}(|K|)}$. By the proof of part (1), $\mathrm{OEP} ( {\mathrm{CSCH}}(K) ) \subseteq |K|$. \end{proof} We turn our attention to interchanging ``solidification'' and norm closure. We work with the norm topology, unless specified otherwise. \begin{lemma}\label{l:switch-close-solid} Let $C\subseteq X$, where $X$ is a Banach lattice, and suppose that ${\mathrm{S}}(\overline{|C|})$ is closed. Then $ \overline{{\mathrm{S}}(C)} = {\mathrm{S}}(\overline{|C|})$. \end{lemma} \begin{proof} One direction is easy: ${\mathrm{S}}(C)= {\mathrm{S}}(|C|) \subseteq {\mathrm{S}}(\overline{|C|})$, hence $\overline{{\mathrm{S}}(C)} \subseteq \overline{{\mathrm{S}}(\overline{|C|})} = {\mathrm{S}}(\overline{|C|})$. Now consider $x\in {\mathrm{S}}(\overline{|C|})$. Then by definition, $|x| \leq y$ for some $y \in \overline{|C|}$. Take $y_n \in |C|$ such that $y_n \rightarrow y$. Then $|x|\wedge y_n \in {\mathrm{S}}(|C|) = {\mathrm{S}}(C)$ for all $n$. Furthermore, \[ |x_+\wedge y_n -x_-\wedge y_n| =|x|\wedge y_n, \] so, $x_+ \wedge y_n - x_-\wedge y_n \in {\mathrm{S}}(C)$. By norm continuity of $\wedge$, \[x_+\wedge y_n -x_-\wedge y_n \rightarrow x_+\wedge y -x_-\wedge y = x, \] hence $x \in \overline{{\mathrm{S}}(C)}$.
\end{proof} \begin{remark} The assumption of ${\mathrm{S}}(\overline{|C|})$ being closed is necessary: Remark \ref{r:shadow-not-closed} shows that, for a closed $C \subset X_+$, ${\mathrm{S}}(C)$ need not be closed. \end{remark} \begin{corollary}\label{c:switch-close-solid1} Suppose $C\subseteq X$ is relatively compact in the norm topology. Then $\overline{{\mathrm{S}}(C)} = {\mathrm{S}}(\overline{C})$. \end{corollary} \begin{proof} The set $\overline{C}$ is compact, hence, by the continuity of $| \cdot |$, the same is true for $|\overline{C}|$. Consequently, $|\overline{C}| \subseteq \overline{|C|} \subseteq \overline{|\overline{C}|} = |\overline{C}|$, hence $|\overline{C}| = \overline{|C|}$. By Lemmas \ref{l:compact_closed} and \ref{l:switch-close-solid}, ${\mathrm{S}}(\overline{C}) = {\mathrm{S}}(|\overline{C}|) = {\mathrm{S}}(\overline{|C|}) = \overline{{\mathrm{S}}(C)}$. \end{proof} \begin{remark}\label{r:closure_versus_modulus} In the weak topology, the equality $|\overline{C}| = \overline{|C|}$ may fail (in this remark, all closures are taken in the weak topology). Indeed, equip the Cantor set $\Delta = \{0,1\}^\mathbb{N}$ with its uniform probability measure $\mu$. Define $x_i \in L_2(\mu)$ by setting, for $t = (t_1, t_2, \ldots) \in \Delta$, $x_i(t) = t_i - 1/4$ (that is, $x_i$ equals either $3/4$ or $-1/4$, depending on whether $t_i$ is $1$ or $0$). Then $C = \{x_i : i \in \mathbb{N}\}$ belongs to the unit ball of $L_2(\mu)$, hence it is relatively weakly compact. It is clear that $\overline{C}$ contains $\mathbf{1}/4$ (here and below, $\mathbf{1}$ denotes the constant $1$ function). On the other hand, $\overline{C}$ does not contain $\mathbf{1}/2$, which can be witnessed by applying the integration functional. Conversely, $\overline{|C|}$ contains $\mathbf{1}/2$, but not $\mathbf{1}/4$. \end{remark} \begin{remark}\label{r:weakly_compact_solids} Relative weak compactness of solid hulls has been studied before.
If $X$ is a Banach lattice, then, by \cite[Theorem 4.39]{AB}, it is order continuous iff the solid hull of any weakly compact subset of $X_+$ is relatively weakly compact. Further, by \cite{ChW}, the following three statements are equivalent: \begin{enumerate} \item The solid hull of any relatively weakly compact set is relatively weakly compact. \item If $C \subset X$ is relatively weakly compact, then so is $|C|$. \item $X$ is a direct sum of a KB-space and a purely atomic order continuous Banach lattice (a Banach lattice is called purely atomic if its atoms generate it, as a band). \end{enumerate} \end{remark} Finally, we return to the connections between extreme points and order extreme points. As noted in the paragraph preceding Theorem \ref{t:connection}, a non-zero extreme point of a positive-solid set need not be order extreme. However, we have: \begin{proposition}\label{p:under_oep} Suppose $\tau$ is a sufficiently rich topology, and $A$ is a $\tau$-compact positive-solid convex subset of $X_+$. Then for any extreme point $a \in A$ there exists an order extreme point $b \in A$ so that $a \leq b$. \end{proposition} \begin{remark}\label{r:for_domination_need_compactness} The compactness assumption is essential. Consider, for instance, the closed set $A \subset C[-1,1]$, consisting of all functions $f$ so that $0 \leq f \leq \mathbf{1}$, and $f(x) \leq x$ for $x \geq 0$. Then $g(x) = x \vee 0$ is an extreme point of $A$; however, $A$ has no order extreme points. \end{remark} \begin{proof} If $a$ is not an order extreme point, then we can find distinct $x_1, x_2 \in A$ so that $2a \leq x_1 + x_2$. Then $2a \leq (x_1 + x_2) \wedge (2a) \leq x_1 \wedge (2a) + x_2 \wedge (2a) \leq x_1 + x_2$. Write $2a = x_1 \wedge (2a) + (2a - x_1 \wedge (2a))$. Both summands are positive, and both belong to $A$ (for the second summand, note that $2a - x_1 \wedge (2a) \leq x_2$).
Therefore, $x_1 \wedge (2a) = a = 2a - x_1 \wedge (2a)$, hence in particular $x_1 \wedge (2a) = a$. Similarly, $x_2 \wedge (2a) = a$. Therefore, we can write $x_1$ as a disjoint sum $x_1 = x_1' + a$ ($a, x_1'$ are quasi-units of $x_1$). In the same way, $x_2 = x_2' + a$ (disjoint sum). Now consider the $\tau$-closed set $B = \{x \in A : x \geq a\}$. As in the proof of Theorem \ref{t:KM}, we show that the family of $\tau$-closed order extreme subsets of $B$ has a maximal element; moreover, such an element is a singleton $\{b\}$. It remains to prove that $b$ is an order extreme point of $A$. Indeed, suppose $x_1, x_2 \in A$ satisfy $2b \leq x_1 + x_2$. A fortiori, $2a \leq x_1 + x_2$, hence, by the preceding paragraph, $x_1, x_2 \in B$. Thus, $x_1 = b = x_2$. \end{proof} \section{Examples: AM-spaces and their relatives}\label{s:examples} The following example shows that, in some cases, $\mathbf{B}(X)$ is much larger than the closed convex hull of its extreme points, yet is equal to the closed solid convex hull of its order extreme points. \begin{proposition}\label{p:fin_many} For a Banach lattice $X$, $\mathbf{B}(X)$ is the (closed) solid convex hull of $n$ disjoint non-zero elements if and only if $X$ is lattice isometric to $C(K_1) \oplus_1 \ldots \oplus_1 C(K_n)$ for suitable non-trivial Hausdorff compact topological spaces $K_1,\ldots,K_n$. \end{proposition} \begin{proof} Clearly, the only order extreme points of $\mathbf{B}(C(K_1) \oplus_1 \ldots \oplus_1 C(K_n))$ are $\mathbf{1}_{K_i}$, with $1 \leq i \leq n$. Conversely, suppose $\mathbf{B}(X) = {\mathrm{CSCH}}(x_1, \ldots, x_n)$, where $x_1, \ldots, x_n \in \mathbf{B}(X)_+$ are disjoint. It is easy to see that, in this case, $\mathbf{B}(X) = {\mathrm{SCH}}(x_1, \ldots, x_n)$. Moreover, $x_i \in \mathbf{S}(X)_+$ for each $i$.
Indeed, otherwise there exists $i \in \{1, \ldots, n\}$ and $\lambda > 1$ so that $\lambda x_i \in {\mathrm{SCH}}(x_1, \ldots, x_n)$, or in other words, $\lambda x_i \leq \sum_{j=1}^n t_j x_j$, with $t_j \geq 0$ and $\sum_j t_j \leq 1$. Consequently, due to the disjointness of $x_j$'s, $$ \lambda x_i = (\lambda x_i) \wedge (\lambda x_i) \leq \big( \sum_{j=1}^n t_j x_j \big) \wedge (\lambda x_i) \leq \sum_{j=1}^n (t_j x_j) \wedge (\lambda x_i) \leq t_i x_i , $$ which yields the desired contradiction. Let $E_i$ be the ideal of $X$ generated by $x_i$, meaning the set of all $x \in X$ for which there exists $c > 0$ so that $|x| \leq c |x_i|$. Note that, for such $x$, $\|x\|$ is the infimum of all $c$'s with the above property. Indeed, if $|x| \leq |x_i|$, then clearly $x \in \mathbf{B}(X)$. Conversely, suppose $x \in \mathbf{B}(X) \cap E_i$. In other words, $|x| \leq c x_i$ for some $c$, and also $|x| \leq \sum_j t_j x_j$, with $t_j \geq 0$, and $\sum_j t_j = 1$. Then $|x| \leq (c x_i) \wedge ( \sum_j t_j x_j ) = (c \wedge t_i) x_i$. Consequently, $E_i$ (with the norm inherited from $X$) is an $AM$-space, whose strong unit is $x_i$. By \cite[Theorem 2.1.3]{M-N}, $E_i$ can be identified with $C(K_i)$, for some Hausdorff compact $K_i$. Further, Proposition \ref{p:interchange} shows that $X$ is the direct sum of the ideals $E_i$: any $y \in X$ has a unique disjoint decomposition $y = \sum_{i=1}^n y_i$, with $y_i \in E_i$. We have to show that $\|y\| = \sum_i \|y_i\|$. Indeed, suppose $\|y\| \leq 1$. Then $|y| = \sum_i |y_i| \leq \sum_j t_j x_j$, with $t_j \geq 0$, and $\sum_j t_j = 1$. Note that $\|y_i\| \leq 1$ for every $i$, or equivalently, $|y_i| \leq x_i$. Therefore, $$ |y_i| = |y| \wedge x_i \leq \big( \sum_j t_j x_j \big) \wedge x_i \leq t_i x_i , $$ which leads to $\|y_i\| \leq t_i$; consequently, $\sum_i \|y_i\| \leq \sum_i t_i \leq 1$.
\end{proof} \begin{example}\label{e:other_etreme_points} For $X = (C(K_1) \oplus_1 C(K_2)) \oplus_\infty C(K_3)$, order extreme points of $\mathbf{B}(X)$ are $\mathbf{1}_{K_1} \oplus_\infty \mathbf{1}_{K_3}$ and $\mathbf{1}_{K_2} \oplus_\infty \mathbf{1}_{K_3}$; $\mathbf{B}(X)$ is the solid convex hull of these points. Thus, the word ``disjoint'' in the statement of Proposition \ref{p:fin_many} cannot be omitted. \end{example} Note that $\mathbf{B}(C(K))$ is the closed solid convex hull of its only order extreme point -- namely, $\mathbf{1}_K$. This is the only type of AM-space with this property. \begin{proposition}\label{p:describe_AM} Suppose $X$ is an AM-space, and $\mathbf{B}(X)$ is the closed solid convex hull of finitely many of its elements. Then $X = C(K)$ for some Hausdorff compact $K$. \end{proposition} \begin{proof} Suppose $\mathbf{B}(X)$ is the closed solid convex hull of $x_1, \ldots, x_n \in \mathbf{B}(X)_+$. Then $x_0 := x_1 \vee \ldots \vee x_n \in \mathbf{B}(X)_+$ (due to $X$ being an AM-space), hence $x \in \mathbf{B}(X)$ iff $|x| \leq x_0$. Thus, $x_0$ is the strong unit of $X$, and \cite[Theorem 2.1.3]{M-N} identifies $X$ with $C(K)$. \end{proof} \begin{proposition}\label{p:only_C(K)_has_OEP} If $X$ is an AM-space, and $\mathbf{B}(X)$ has an order extreme point, then $X$ is lattice isometric to $C(K)$, for some Hausdorff compact $K$. \end{proposition} \begin{proof} Suppose $a$ is an order extreme point of $\mathbf{B}(X)$. We claim that $a$ is a strong unit, which means that $a \geq x$ for any $x \in \mathbf{B}(X)_+$. Suppose, for the sake of contradiction, that the inequality $a \geq x$ fails for some $x \in \mathbf{B}(X)_+$. Then $b = a \vee x \in \mathbf{B}(X)_+$ (due to the definition of an AM-space), and $2a \leq a + b$; order extremality forces $b = a$, contradicting the failure of $a \geq x$. \end{proof} We next consider norm-attaining functionals. By James' theorem, for a Banach space $X$, any element of $X^*$ attains its norm iff $X$ is reflexive.
If we restrict ourselves to positive functionals on a Banach lattice, the situation is different: clearly every positive functional on $C(K)$ attains its norm at $\mathbf{1}$. Below we show that, among separable AM-spaces, only $C(K)$ has this property. \begin{proposition}\label{p:only_C(K)_attain_norm} Suppose $X$ is a separable AM-space such that every positive linear functional attains its norm. Then $X$ is lattice isometric to $C(K)$. \end{proposition} \begin{proof} Let $(x_i)_{i=1}^\infty$ be a dense sequence in $\mathbf{S}(X)_+$. For each $i$ find $x_i^* \in \mathbf{B}(X^*)_+$ so that $x_i^*(x_i) = 1$. Let $x^* = \sum_{i=1}^\infty 2^{-i} x_i^*$. We shall show that $\|x^*\| = 1$. Indeed, $\|x^*\| \leq \sum_i 2^{-i} = 1$ by the triangle inequality. For the opposite inequality, fix $N \in \mathbb{N}$, and let $x = x_1 \vee \ldots \vee x_N$. Then $x \in \mathbf{S}(X)_+$, and $$ \|x^*\| \geq x^*(x) \geq \sum_{i=1}^N 2^{-i} x_i^*(x) \geq \sum_{i=1}^N 2^{-i} x_i^*(x_i) = \sum_{i=1}^N 2^{-i} = 1 - 2^{-N} . $$ As $N$ can be arbitrarily large, we obtain the desired estimate on $\|x^*\|$. Now suppose $x^*$ attains its norm on $a \in \mathbf{S}(X)_+$. We claim that $a$ is the strong unit for $X$. Suppose otherwise; then there exists $y \in \mathbf{B}(X)_+$ so that $a \geq y$ fails. Let $b = a \vee y$; then $z = b - a$ belongs to $X_+\backslash\{0\}$. Then $1 \geq x^*(b) \geq x^*(a) = 1$, hence $x^*(z) = 0$. However, $x^*$ cannot vanish at $z$. Indeed, find $i$ so that $\|z/\|z\| - x_i\| < 1/2$. Then $x_i^*(z) \geq \|z\|/2$, hence $x^*(z) \geq 2^{-i-1} \|z\| > 0$. This gives the desired contradiction: $a$ is a strong unit, so $X$ is lattice isometric to $C(K)$ by \cite[Theorem 2.1.3]{M-N}. \end{proof} In connection to this, we also mention a result about norm-attaining functionals on order continuous Banach lattices. \begin{proposition}\label{p:functional_OC} An order continuous Banach lattice $X$ is reflexive if and only if every positive linear functional on it attains its norm.
\end{proposition} \begin{proof} If an order continuous Banach lattice $X$ is reflexive, then clearly every linear functional is norm-attaining. If $X$ is not reflexive, then, by the classical result of James, there exists $x^* \in X^*$ which does not attain its norm. We show that $|x^*|$ does not either. Let $B_+ = \{x \in X : x^*_+(|x|) = 0\}$, and define $B_-$ similarly. As all linear functionals on $X$ are order continuous \cite[Section 2.4]{M-N}, $B_+$ and $B_-$ are bands \cite[Section 1.4]{M-N}. Due to the order continuity of $X$ \cite[Section 2.4]{M-N}, $B_{\pm}$ are ranges of band projections $P_{\pm}$. Let $B$ be the range of $P = P_+ P_-$; let $B_+^o$ be the range of $P_+^o = P_+ P_-^\perp = P_+ - P$ (where we set $Q^\perp = I_X - Q$), and similarly for $B_-^o$ and $P_-^o$. Note that $P_+^o + P_-^o = P^\perp$. Suppose for the sake of contradiction that $x \in \mathbf{S}(X)_+$ satisfies $|x^*|(x) = \|x^*\|$. Replacing $x$ by $P^\perp x$ if necessary, we assume that $P x = 0$, so $x = P_+^o x + P_-^o x$. Then $\|P_+^o x - P_-^o x\| = 1$, and \begin{align*} x^* \big( P_-^o x - P_+^o x \big) & = x^*_+ \big( P_-^o x \big) - x^*_+ \big( P_+^o x \big) - x^*_- \big( P_-^o x \big) + x^*_- \big( P_+^o x \big) \\ & = x^*_+ \big( P_-^o x \big) + x^*_- \big( P_+^o x \big) = |x^*|(x) = \|x^*\| , \end{align*} which contradicts the fact that $x^*$ does not attain its norm. \end{proof} \section{On the number of order extreme points}\label{s:how_many_extreme_points} It is shown in \cite{LP} that, if a Banach space $X$ is reflexive and infinite-dimensional, then $\mathbf{B}(X)$ has uncountably many extreme points. Here, we establish a similar lattice result. \begin{theorem}\label{t:uncount_many} If $X$ is a reflexive infinite-dimensional Banach lattice, then $\mathbf{B}(X)$ has uncountably many order extreme points.
\end{theorem} Note that if $X$ is a reflexive infinite-dimensional Banach lattice, then Theorems \ref{t:connection} and \ref{t:uncount_many} imply that $\mathbf{B}(X)$ has uncountably many extreme points, re-proving the result of \cite{LP} in this case. \begin{proof} Suppose, for the sake of contradiction, that there were only countably many such points $\{x_n\}$. For each such $x_n$, we define $F_n =\{ f \in \mathbf{B}(X^*)_+ : f(x_n) = \| f\| \}$. Clearly $F_n$ is weak$^*$ ($=$ weakly) compact. By the reflexivity of $X$, any $f\in \mathbf{B}(X^*)$ attains its norm at some $x \in \mathrm{EP}(\mathbf{B}(X))$. Since $f(x) \leq |f|(|x|)$, we may assume that any positive functional attains its norm at a positive extreme point of $\mathbf{B}(X)$. By Theorem \ref{t:connection}, these are precisely the order extreme points. Therefore $\bigcup_n F_n = \mathbf{B}(X^*)_+$. By the Baire Category Theorem, one of these sets $F_n$ must have non-empty interior in $\mathbf{B}(X^*)_+$; without loss of generality, assume it is $F_1$. Pick $f_0\in F_1$, and $y_1,\ldots,y_k \in X$, such that if $f \in \mathbf{B}(X^*)_+$ and for each $y_i$, $|f(y_i) - f_0(y_i) | < 1$, then $f \in F_1$. Without loss of generality, we assume that $\|f_0 \| < 1$, and also that each $y_i \geq 0$. Further, we can and do assume that there exist mutually disjoint $u_1, u_2, \ldots \in \mathbf{S}(X)_+$ which are disjoint from $y = \vee_i y_i$. Indeed, find mutually disjoint $z_1, z_2, \ldots \in \mathbf{S}(X)_+$. Denote the corresponding band projections by $P_1, P_2, \ldots$ (such projections exist, due to the $\sigma$-Dedekind completeness of $X$). Then the vectors $P_n y$ are mutually disjoint, and dominated by $y$. As $X$ is reflexive, it must be order continuous, and therefore, $\lim_n \|P_n y\| = 0$. Find $n_1 < n_2 < \ldots$ so that $\sum_j \|P_{n_j} y\| < 1/2$. Let $w_i = \sum_j P_{n_j} y_i$ and $y'_i = 2(y_i - w_i)$.
Then if $|(f_0-g)(y'_i)|< 1$, with $g\geq 0, \|g\| \leq 1$, it follows that \begin{align*} |(f_0 - g)(y_i)| & \leq \frac{1}{2} |(f_0 - g)(y'_i) | + |(f_0- g)(w_i) | \\ & < \frac12 + \max\big( f_0(w_i), g(w_i) \big) \leq \frac12 + \|w_i\| \leq \frac12 + \frac12 = 1 , \end{align*} where the second estimate uses that $f_0, g \geq 0$ and $w_i \geq 0$, so that $|(f_0 - g)(w_i)| \leq \max(f_0(w_i), g(w_i)) \leq \|w_i\|$. We can therefore replace $y_i$ with $y'_i$, so that the conditions for membership in $F_1$ are still met. Then the vectors $u_j = z_{n_j}$ have the desired properties. Let $P$ be the band projection complementary to $\sum_j P_{n_j}$ (in other words, complementary to the band projection of $\sum_j 2^{-j} u_j$); then $P y_i = y_i$ for any $i$. By \cite[Lemma 1.4.3 and its proof]{M-N}, there exist linear functionals $g_j \in \mathbf{S}(X^*)_+$ so that $g_j(u_j) = 1$, and $g_j =P_{n_j}^* g_j$. Consequently, the functionals $g_j$ are mutually disjoint, and $g_j|_{\mathrm{ran} \, P} = 0$. For $j \in \mathbb{N}$ find $\alpha_j \in [1 - \|P^* f_0\|, 1]$ so that $\|f_j\| = 1$, where $f_j = P^* f_0 + \alpha_j g_j$. Then, for $1 \leq i \leq k$, $f_j(y_i) = (P^* f_0)(y_i) + \alpha_j g_j(y_i) = f_0(y_i)$, which implies that, for every $j$, $f_j$ belongs to $F_1$, hence attains its norm at $x_1$. On the other hand, note that $\lim_j g_j(x_1) = 0$. Indeed, otherwise, there exist $\gamma > 0$ and a sequence $(j_k)$ so that $g_{j_k}(x_1) \geq \gamma$ for every $k$. For any finite sequence of positive numbers $(\beta_k)$, we have $$ \sum_k |\beta_k| \geq \big\| \sum_k \beta_k g_{j_k} \big\| \geq \sum_k \beta_k g_{j_k} (x_1) \geq \gamma \sum_k |\beta_k| . $$ As the functionals $g_{j_k}$ are mutually disjoint, the inequalities $$ \sum_k |\beta_k| \geq \big\| \sum_k \beta_k g_{j_k} \big\| \geq \gamma \sum_k |\beta_k| $$ hold for every finite sequence $(\beta_k)$. We conclude that $\overline{\mathrm{span}}[g_{j_k} : k \in \mathbb{N}]$ is isomorphic to $\ell_1$, which contradicts the reflexivity of $X$. Thus, $\lim_j g_j(x_1) = 0$, hence $\lim_j f_j (x_1) = f_0(P x_1) \leq \|f_0\| < 1$, contradicting $f_j(x_1) = \|f_j\| = 1$ for every $j$.
\end{proof} \begin{corollary} Suppose $C$ is a closed, bounded, solid, convex subset of a reflexive Banach lattice, having non-empty interior. Then $C$ contains uncountably many order extreme points. \end{corollary} \begin{proof} We assume without loss of generality that $\sup_{x \in C} \|x\| = 1$. Note that $0$ is an interior point of $C$. Indeed, suppose $x$ is an interior point. Pick $\varepsilon > 0$ such that $x + \varepsilon \mathbf{B}(X) \subset C$. For any $k$ such that $\|k\| < \varepsilon$, we have $\frac{k}{2} = \frac{-x}{2} +\frac{x+k}{2} \in C$, since $C$ is solid and convex. Hence $\frac{\varepsilon}{2} \mathbf{B}(X) \subseteq C$. Since $C$ is bounded, we can then define an equivalent norm, with $\|y\|_C = \inf \{\lambda > 0: y \in \lambda C \}$. Since $C$ is solid, $\|y \|_C = \| \ |y | \ \|_C$, and the norm is consistent with the order. Finally, $\| \cdot \|_C$ is equivalent to $\| \cdot \|$, since for all $y\in X$, we have that $\frac{\varepsilon}{2}\|y\|_C \leq \|y\| \leq \|y\|_C$. The conclusion follows by Theorem \ref{t:uncount_many}. \end{proof} \section{The solid Krein-Milman Property and the RNP}\label{s:SKMP} We say that a Banach lattice (or, more generally, an ordered Banach space) $X$ has the \emph{Solid Krein-Milman Property} (\emph{SKMP}) if every solid closed boun\-ded subset of $X$ is the closed solid convex hull of its order extreme points. This is analogous to the canonical Krein-Milman Property (KMP) in Banach spaces, which is defined in a similar manner, but without any references to order. It follows from Theorem \ref{t:connection} that the KMP implies the SKMP. These geometric properties turn out to be related to the Radon-Nikod{\'ym} Property (RNP). It is known that the RNP implies the KMP, and, for Banach lattices, the converse is also true (see \cite{Cas} for a simple proof).
For more information about the RNP in Banach lattices, see \cite[Section 5.4]{M-N}; a good source of information about the RNP in general is \cite{Bour} or \cite{DU}. One of the equivalent definitions of the RNP of a Banach space $X$ involves integral representations of operators $T : L_1 \to X$. If $X$ is a Banach lattice, then, by \cite[Theorem IV.1.5]{Sch}, any such operator is regular (can be expressed as a difference of two positive ones); so positivity comes naturally into the picture. \begin{comment} We use the following notation, taken from \cite{Bour}: Let $X$ be a Banach lattice, $A \subseteq X_+$ be a bounded set, and let $a > 0$. Then for $f\in X^*$, let $M(A,f) = \sup_{a\in A} |f(a)|$, and let $T(A,f,a) = \{ x\in A: f(x) > M(A,f) - a \}$, Sets of the form $T(A,f,a)$ are called \textbf{slices}. Recall that $X$ has the RNP iff all of the separable sublattices of $X$ have the RNP. Assuming that $X$ does not have the RNP, we will first work within a separable sublattice $Y \subseteq X$, and generate a closed convex solid subset of $X$ generated by elements in $Y$ that does not have any order extreme points, and thus cannot have the KMP. Let $Y$ be a separable sublattice that does not have the RNP. We can without loss of generality assume that the ideal generated by $Y$ is $X$. Let $u \in Y_+$ be a weak unit in $Y$. Then it is also a weak unit in $X$. For $n\in \mathbb{N}$, let $H_n = \{ x \in X_+: \| u \omegaedge x\| > \frac{1}{n} \}$. Now let $A\subseteq Y_+$. We say that $A$ is \textit{order dentable} if there exists a slice $T$ of $A$ such that $T \subseteq H_n$ for some $n$. By (cite something else FILL IN), if $Y$ does not have the RNP, there exists a closed, bounded, convex set $A\subseteq Y_+$ that is not order-dentable. We now restate without proof the following lemmas from \cite{BourTal} to be used in the proof of our theorem: \begin{lemma}\label{l:convex-hull-closed} Let $X$ be an order continuous Banach lattice. 
Then for any closed, bounded, convex sets $A, B \subseteq X_+$, the set $CH(A \cup B)$ is also closed. \end{lemma} \begin{lemma}\label{l:martingale-builder} Suppose $X$ is order continuous, and let $A$ be a non-order dentable bounded, closed, convex subset of $X_+$. Let $n\in \mathbb{N}$, $f\in X^*, x\in A$ with $f(x) = M(f,A)$, let $\varepsilon > 0$, and $T:= T(A,f,a)$. Then there exists $m\in \mathbb{N}$ and finite sequences $(y_i)_1^m \subseteq A$, $(f_i)_1^m \in X^*$, $(a_i)_1^m \subseteq \mathbb{R}^+$, and $(\lambda_i)_1^m \subseteq \mathbb{R}$ with $\lambda_i \geq 0$ and $\sum \lambda_i = 1$ such that \begin{enumerate} \item $f_i(y_i) = M(A,f_i)$, \item $T(A,f_i, a_i) \subseteq T$, \item $\overline{T(A,f_i, a_i)} \cap H_n = \emptyset$, \item $\| x - \sum_1^m \lambda_iy_i \| < \varepsilon$. \end{enumerate} \end{lemma} \end{comment} \begin{theorem}\label{t:RNP} For a Banach lattice $X$, the SKMP, KMP, and RNP are equivalent. \end{theorem} \begin{proof} The implications RNP $\Leftrightarrow$ KMP $\Rightarrow$ SKMP are noted above. Now suppose $X$ fails the RNP (equivalently, the KMP). We shall establish the failure of the SKMP in two different ways, depending on whether $X$ is a KB-space, or not. (1) If $X$ is not a KB-space, then, by \cite[Theorem 2.4.12]{M-N}, there exist disjoint $e_1, e_2, \ldots \in \mathbf{S}(X)_+$, equivalent to the canonical basis of $c_0$. Then the set $$ C = \overline{{\mathrm{S}} \Big( \big\{ \sum_i \alpha_i e_i : \max_i |\alpha_i| = 1 , \, \lim_i \alpha_i = 0 \big\} \Big)} $$ is solid, bounded, and closed. To give a more intuitive description of $C$, for $x \in X$ we let $x_i = |x| \wedge e_i$. It is easy to see that $x \in C$ if and only if $\lim_i \|x_i\| = 0$, and $|x| = \sum_i x_i$. Finally, we show that $x \in C_+$ cannot be an order extreme point. Find $i$ so that $\|x_i\| < 1/2$, and consider $x' = \sum_{j \neq i} x_j + e_i$. Then clearly $x' \in C$, and $x' - x \in X_+ \backslash \{0\}$.
(2) If $X$ is a KB-space failing the RNP, then, by \cite[Proposition 5.4.9]{M-N}, $X$ contains a separable sublattice $Y$ failing the RNP. Find a quasi-interior point $u \in Y$ -- that is, $y = \lim_n y \wedge (nu)$ for any $y \in Y_+$. By \cite[Corollary 5.4.20]{M-N}, $Y$ is not order dentable -- that is, $Y_+$ contains a non-empty convex bounded subset $A$ so that, for every $n \in \mathbb{N}$, $A = \overline{{\mathrm{CH}}(A \backslash H_n)}$, where $H_n = \{ y \in Y_+ : \|u \wedge y\| > \frac1n \}$. We use the techniques (and notation) of \cite{BourTal} to construct a set $C$ witnessing the failure of the SKMP. For $f\in Y^*$, let $M(A,f) = \sup_{x\in A} |f(x)|$. For $\alpha > 0$, define the \emph{slice} $T(A,f,\alpha) = \{ x\in A: f(x) > M(A,f) - \alpha\}$. By \cite{BourTal}, we can construct an increasing sequence of $\sigma$-algebras $\Sigma_n$ on $[0,1]$, each with finitely many atoms, as well as $\Sigma_n$-measurable functions $Y_n:[0,1] \rightarrow A$, $f_n:[0,1]\rightarrow Y^*$, and $\alpha_n:[0,1] \rightarrow \mathbb{R}$ such that: \begin{enumerate} \item For any $n$ and $t$, $Y_n(t) \in \overline{T(A, f_n(t), \alpha_n(t))}$. \item $(Y_n)$ is a martingale -- that is, $Y_n(t) = {\mathbb{E}}^{\Sigma_n}(Y_{n+1}(t))$, for any $t$ and $n$ (${\mathbb{E}}$ stands for the conditional expectation). \item For any $n$ and $t$, $H_n \cap \overline{T(A,f_n(t),\alpha_n(t))} = \emptyset$. \item For any $n$ and $t$, $T(A,f_{n+1}(t), \alpha_{n+1}(t)) \subseteq T(A,f_n(t), \alpha_n(t))$. \end{enumerate} Now let $C' = \overline{{\mathrm{CH}}(\{Y_n(t) : n\in \mathbb{N}, t\in [0,1] \})}$; then the set $C = \overline{{\mathrm{S}}(C')}$ (the solid hull is in $X$) is closed, bounded, convex, and solid. We will show that $C$ has no order extreme points. By Theorem \ref{t:connection}, it suffices to show that no $x \in C_+ \backslash \{0\}$ can be an extreme point of $C$, or equivalently, of $C_+ = C \cap X_+$. From now on, fix $x \in C_+ \backslash \{0\}$. Note that $x \wedge u \neq 0$.
Indeed, suppose, for the sake of contradiction, that $x \wedge u = 0$. Find $y' \in C' \subset Y_+$, so that $x \leq y'$. For any $n$, we have $y' \wedge (nu) = (y'-x) \wedge (nu) \leq y'-x$. Thus, $\|y' - y' \wedge (nu)\| \geq \|x\|$. However, $u$ is a quasi-interior point of $Y$, hence $y' = \lim_n y' \wedge (nu)$. This is the desired contradiction. Find $n \in \mathbb{N}$ so that $\|x \wedge u\| > \frac1n$. Let $I_1,\ldots, I_m$ be the atoms of $\Sigma_n$. For $i \leq m$, define $C'_i = \overline{{\mathrm{CH}}(\{ Y_k(t) : k \geq n, t\in I_i \})}$, and let $C_i = \overline{{\mathrm{S}}(C'_i)_+}$. The sequence $(Y_k)$ is a martingale, hence $C' = \overline{{\mathrm{CH}}(\cup_{i=1}^m C'_i)}$. Thus, by Proposition \ref{p:interchange}, $$ C = \overline{{\mathrm{S}}(C')} = \overline{{\mathrm{S}}\big(\overline{{\mathrm{CH}}(\cup_{i=1}^m C'_i)}\big)} = \overline{{\mathrm{S}}({\mathrm{CH}}(\cup_{i=1}^m C_i))} . $$ By \cite[Lemme 3]{BourTal}, ${\mathrm{CH}}(\cup_{i=1}^m C_i)$ is closed. This set is clearly positive-solid, so by norm continuity of $| \cdot |$, ${\mathrm{S}}({\mathrm{CH}}(\cup_{i=1}^m C_i))$ is closed, hence equal to $C$. In particular, $C_+ = {\mathrm{CH}}(\cup_{i=1}^m C_i)$. Therefore, if $x$ is an extreme point of $C_+$, then it must belong to $C_i$, for some $i$. We show this cannot happen. If $y \in {\mathrm{S}}(C'_i)_+$, then we can find $y' \in C_i'$ with $y \leq y'$. By parts (1) and (4), $C'_i \subseteq \overline{T(A, f_n(t), \alpha_n(t))}$ for $t\in I_i$, hence, by (3), $\|y' \wedge u\| \leq \frac1n$, which implies $\|y \wedge u\| \leq \frac1n$. By the triangle inequality, $$ \|x \wedge u\| \leq \|y \wedge u\| + \|x-y\| \leq \frac1n + \|x-y\| , $$ hence $\|x-y\| \geq \|x \wedge u\| - \frac1n$. Recall that $n$ is selected in such a way that $\|x \wedge u\| > \frac1n$. As $C_i = \overline{{\mathrm{S}}(C'_i)_+}$, it cannot contain $x$. Thus, $C$ witnesses the failure of the SKMP.
\begin{comment} If $X$ is a KB-space, then by the reasoning in the proof of the main theorem of \cite{Cas}, there exists a closed convex set $D \subset \mathbf{B}(X)_+$ with no extreme points. By Propositions \ref{p:interchange} and \ref{p:conv_closed_KB}, the set $C = {\mathrm{S}}(D)$ is convex and closed; it is clearly bounded and solid. However, $C$ has no order extreme points, since all such points would have to be extreme points of $D$. \end{comment} \end{proof} {\bf Acknowledgments.} We would like to thank the anonymous referee for reading the paper carefully, and providing numerous helpful suggestions. We are also grateful to Prof.~Anton Schep for finding an error in an earlier version of Proposition \ref{p:conv_closed_KB}. \end{document}
\begin{document} \title{Well-balanced high order schemes on non-uniform grids and entropy residuals} \author{G. Puppo \and M. Semplice} \institute{ G. Puppo \at Dipartimento di Scienza e Alta Tecnologia Universit\`a dell'Insubria Via Valleggio, 11 22100 Como \email{[email protected]} \and M. Semplice \at Dipartimento di Matematica ``G. Peano'' Universit\`a di Torino Via C. Alberto, 10 10123 Torino (Italy) \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} This paper is concerned with the construction of high order schemes on irregular grids for balance laws, including a discussion of an a-posteriori error indicator based on the numerical entropy production. We also impose well-balancing on non-uniform grids for the shallow water equations, with a construction that extends naturally to other systems, obtaining schemes up to fourth order of accuracy with very weak assumptions on the regularity of the grid. Our results show the expected convergence rates, the correct propagation of shocks across grid discontinuities and demonstrate the improved resolution achieved with a locally refined non-uniform grid. The error indicator based on the numerical entropy production, previously introduced for the case of systems of conservation laws, is extended to balance laws. Its decay rate and its ability to identify discontinuities are illustrated on several tests. The schemes proposed in this work can naturally also be applied to systems of conservation laws. \keywords{high order finite volumes \and nonuniform grids \and entropy \and well balancing} \subclass{65M08 \and 76M12} \end{abstract} \section{Introduction} Many problems arising from engineering applications involve the ability to compute flow fields on complex domains, governed by hyperbolic systems of balance laws. Often, many scales are involved and this prompts the need for algorithms that are able to modify the scheme and/or the underlying grid following the evolution of the flow.
Several general purpose codes are available and many of them are based on finite volume schemes, see e.g. Fluent \cite{fluent} or ClawPack \cite{clawpack}. Usually these codes are second order accurate, with high order versions, where available, still in progress. On the other hand they provide the user with the flexibility of an adaptive grid, which is extremely useful to tackle highly non-homogeneous solutions. At the same time, high order finite volume schemes are well established in the literature: from the early review in \cite{Shu97} to the more recent paper \cite{Dumbser:DGFV}, extensive studies have been conducted on the construction of high order finite volume schemes. In this paper we carry out a detailed study of the issues arising in finite volume algorithms on irregular grids, and in particular we construct finite volume high order WENO schemes, including the treatment of source terms and addressing the issue of well balancing for steady state solutions. We concentrate on the one-dimensional case, since most problems already arise in this setting. These results can be extended to multidimensional problems discretized with cartesian grids. Schemes based on cartesian grids can be easily parallelized and boundary conditions for complex domains can be implemented with the ghost fluid method as in \cite{Iollo}. Adaptive grids can be constructed either by defining a single non-uniform grid on which all degrees of freedom are located, as in most unstructured grid managers, or by superposing several patches of uniform cartesian grids with different levels of refinement as in the ClawPack solver \cite{clawpack}. In this latter approach the different patches must communicate, and the enforcement of conservation and well balancing for steady states is not straightforward \cite{DonatWellBalanced}. High order schemes for the AMR approach can be found in \cite{BaezaMulet,ShenQiuChristlieb}.
For applications to the shallow water equations, see the software GeoClaw \cite{clawpack} and \cite{GeorgeAMRMalpasset}. In our case we consider a single highly non-uniform grid. Such grids commonly arise in h-adaptive methods \cite{HartenHayman:1983}, especially when using moving mesh methods \cite{TanqTang:2003,Tang:2004}. In one space dimension, when the grid size varies smoothly, one can remap the problem to a uniform grid as in \cite{FazioLeveque:2003}, but this cannot be expected to work in more space dimensions or when the grid size can jump abruptly as in dyadic/quadtree/octree grid refinement. These latter discretization techniques start from a conforming, often uniform, partitioning of the simulation domain and allow the local refinement of each control volume by splitting it into $2^d$ parts in $d$ space dimensions, like in \cite{HuGreavesWu:2002:tritree} for simplices and \cite{WangBorthwickTaylor:2004:quadtree} for quads. Lower order schemes on such grids were employed by the authors in \cite{PS:entropy} in one space dimension and in \cite{PS:HYP12} in two space dimensions for general conservation laws. Two-dimensional applications to the shallow water system may be found in \cite{LiangBorthwick:2009:quadtreeSWE}, or in \cite{Liang}. The construction of a fifth order WENO scheme for conservation laws on one-dimensional non-uniform grids, based on the superposition of three parabolas, has been conducted in \cite{WangFengSpiteri}. Here we extend this construction to the case of balance laws, showing how to obtain positive coefficients in the quadrature of the source term. Moreover we also construct a third order scheme based on \cite{LPR01}, characterized by a stencil of three cells. This reconstruction is particularly suited for two-dimensional problems due to its very compact stencil, see \cite{CRS}.
A first key ingredient of this work is the use of semidiscrete schemes, which decouple the space and time discretizations: in this fashion the non-uniformity of the grid boils down to an interpolation problem to reconstruct the boundary extrapolated data which interact through the numerical fluxes. Secondly, the use of Richardson extrapolation as in \cite{NatvigEtAl} is crucial for the preservation of steady states on a non-uniform grid, since it allows one to enforce equilibrium at the level of each single cell, thus avoiding the need to account for the non-uniformity of the grid. This yields automatic well-balancing over the whole grid, unlike in the block-structured AMR case, where well-balancing has to be enforced not only on each grid patch but also in the projection and interpolation operators that relate the solution on different grid levels \cite{DonatWellBalanced}. Moreover, we extend the entropy indicator of \cite{PS:entropy} to the case of balance laws. We show that the numerical entropy production provides a measure of the local error on the cell also in the case of balance laws on non-uniform grids. Before giving the outline of the paper, we briefly introduce the setting and the notation used in the bulk of this work. We consider balance laws with a geometric source term of the form \begin{equation}\label{e:blaw} u_t + \nabla \cdot f(u) = g(u,x) \end{equation} and we seek the solution on a domain $\Omega$, with given initial conditions. The computational domain $\Omega$ is an interval, discretized with cells $I_j=(x_{j-1/2}, x_{j+1/2})$, such that $\cup I_j=\Omega$. The width of each cell is $\delta_j= x_{j+1/2}-x_{j-1/2}$, with cell center $x_j = (x_{j-1/2}+x_{j+1/2})/2$. We consider semidiscrete finite volume schemes and denote with $\ca{U}_j(t)$ the cell average of the numerical solution in the cell $I_j$ at time $t$.
The semidiscrete numerical scheme can be written as \begin{equation}\label{e:semischeme} \frac{\mathrm{d}}{\mathrm{d} t}\ca{U}_j= - \frac{1}{\delta_j}\left( {F}_{j+1/2}- {F}_{j-1/2}\right) + G_j(\ca{U},x). \end{equation} The numerical fluxes are computed starting from the boundary extrapolated data, namely \begin{equation}\label{e:fluxes} {F}_{j+1/2}=\mathcal{F}(U_{j+1/2}^-,U_{j+1/2}^+) \end{equation} where $\mathcal{F}$ is a consistent and monotone numerical flux, evaluated on two estimates of the solution at the cell interface $U_{j+1/2}^{\pm}$. These values are obtained with a high order non oscillatory reconstruction, as described in detail in \S \ref{s:reconstruction}. Finally, $G_j$ is a consistently accurate discretization of the cell average of the source term on the cell $I_j$, see \S \ref{s:wb}. In order to obtain a fully discrete scheme, we apply a Runge-Kutta method with Butcher's tableau $(A,b)$, obtaining the evolution equation for the cell averages \begin{equation} \label{eq:fullydiscrete} \ca{U}_j^{n+1} = \ca{U}_j^{n} - \frac{\Delta t}{\delta_j} \sum_{i=1}^s b_i \left(F^{(i)}_{j+1/2}-F^{(i)}_{j-1/2}\right) + \Delta t \sum_{i=1}^s b_i G^{(i)}_j. \end{equation} Here $F^{(i)}_{j+1/2}=\mathcal{F}\big(U^{(i),-}_{j+1/2},U^{(i),+}_{j+1/2}\big)$ and the boundary extrapolated data $U^{(i),\pm}_{j+1/2}$ are computed from the stage values of the cell averages \[ \ca{U}_j^{(i)} = \ca{U}_j^{n} - \frac{\Delta t}{\delta_j} \sum_{k=1}^{i-1} a_{ik} \left(F^{(k)}_{j+1/2}-F^{(k)}_{j-1/2}\right) + \Delta t \sum_{k=1}^{i-1} a_{ik}G^{(k)}_j. \] We point out that the spatial reconstruction procedures of \S \ref{s:reconstruction} and the well-balanced quadratures for the source term of \S \ref{s:wb} must be applied for each stage value of the Runge-Kutta scheme. In this paper we consider a uniform timestep $\Delta t$ over the whole grid. A local timestep keeping a fixed CFL number over the grid can be enforced using techniques from \cite{PS:entropy,LambyMullerStiriba}.
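To make the structure of the semidiscrete scheme \eqref{e:semischeme} and its Runge-Kutta discretization concrete, the following Python sketch advances a scalar problem on a non-uniform periodic grid. The test problem (linear advection $u_t + u_x = 0$, so $\mathcal{F}(a,b)=a$ is a consistent monotone upwind flux and $G_j = 0$) and the first order reconstruction are illustrative choices only, not the high order WENO construction of this paper.

```python
# Sketch of the scheme: dU_j/dt = -(F_{j+1/2} - F_{j-1/2})/delta_j + G_j,
# advanced with Heun's two-stage Runge-Kutta method, on a non-uniform grid.
# Illustrative test problem: linear advection, periodic boundary, G_j = 0.
import math

def upwind_flux(u_left, u_right):
    return u_left                       # monotone, consistent flux for f(u) = u

def residual(U, delta):
    n = len(U)
    # interface fluxes F_{j-1/2} for j = 0..n, periodic (U[-1] wraps around)
    F = [upwind_flux(U[j - 1], U[j]) for j in range(n)] + [upwind_flux(U[-1], U[0])]
    return [-(F[j + 1] - F[j]) / delta[j] for j in range(n)]

def heun_step(U, delta, dt):
    k1 = residual(U, delta)
    U1 = [u + dt * r for u, r in zip(U, k1)]               # stage value
    k2 = residual(U1, delta)
    return [u + 0.5 * dt * (r1 + r2) for u, r1, r2 in zip(U, k1, k2)]

# non-uniform periodic grid on [0, 1): cell widths jump by a factor of 2
delta = [0.02 if j < 25 else 0.01 for j in range(75)]      # widths sum to 1
x = [sum(delta[:j]) + delta[j] / 2 for j in range(75)]     # cell centers
U = [math.sin(2 * math.pi * xj) for xj in x]               # initial cell averages
dt = 0.25 * min(delta)                                     # CFL-limited timestep
mass0 = sum(u * d for u, d in zip(U, delta))
for _ in range(40):
    U = heun_step(U, delta, dt)
mass = sum(u * d for u, d in zip(U, delta))
```

Since the interface fluxes telescope over the periodic grid, the total mass $\sum_j \delta_j \ca{U}_j$ is conserved exactly, independently of the non-uniformity of the cells.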
We will also consider the preservation of steady state solutions and we will illustrate these techniques on the shallow water system, namely \begin{equation}\label{eq:swe} u=\begin{pmatrix} h\\q\end{pmatrix} \qquad f(u) = \begin{pmatrix} q\\q^2/h+ \tfrac12 g h^2 \end{pmatrix} \qquad g(u,x) = \begin{pmatrix} 0\\-ghz_x\end{pmatrix} \end{equation} Here $h$ denotes the water height, $q$ is the discharge and $z(x)$ the bottom topography, while $g$ is the gravitational constant (see also Figure \ref{fig:sw}). The preservation of steady states depends heavily on the structure of the equilibrium solution one wishes to preserve. Here we will concentrate on the lake at rest solution of the shallow water equation, given by $H(t,x)=h(t,x)+z(x)=\text{constant}$ and $q(t,x)=0$. Many works have been dedicated to this problem since the paper \cite{BermudezVazquez:1994} shed light on the importance of well-balancing (or C-property). For example, see \cite{XingShu:2005:WBSWEfd} in the finite difference setting, \cite{XingShu:2006:WBDG,NatvigEtAl,NoelleXingShu:2007:SWEmovingwater} in the finite volume setting, \cite{XingShu:2006:WBDG,Xing:2013:WBDGmovingwater,CaleffiValiani:2013:RKDG3WB} in the Discontinuous Galerkin framework and \cite{VignoliTitarevToro:2008:ADERchannel,CastroToroKaser:2012:ADERtsunami} in the ADER setting. The structure of the paper is as follows: in \S \ref{s:reconstruction} we introduce the third order accurate C-WENO (Compact WENO) reconstruction on non uniform grids, generalizing the results of \cite{LPR01}, and we extend the fifth order accurate WENO reconstruction on non uniform grids of \cite{WangFengSpiteri}, adding the evaluation of the reconstruction at the centre of cells which is needed in the computation of the source term. In \S \ref{s:wb} we extend the construction of well-balanced schemes of \cite{Audusse:2004,NatvigEtAl} to the non-uniform grid setting. 
Next, in \S \ref{s:entropy} we extend the notion of numerical entropy production to non uniform grids for balance laws. Finally, \S \ref{s:numerical} contains numerical tests, which illustrate the consistency between the accuracy of the schemes and the rate of convergence of the numerical entropy production, for several types of grids. \section{High order reconstructions on non uniform grids} \label{s:reconstruction} The purpose of reconstruction algorithms is to give estimates of a function at some points, starting from discrete data. In particular, for finite volume schemes for balance laws, the starting data are the cell averages of a function $v$, and we wish to estimate $v$ at the cell interfaces, and, if needed, at some other internal points, using a finite dimensional approximation, such as a piecewise polynomial interpolator. Typically, estimates of $v$ at internal points within a cell are needed to compute the cell averages of the source term through a quadrature formula. Thus, the reconstruction will be described as an interpolation algorithm. Suppose then that we are given the cell averages \[ \ca{V}_j = \frac{1}{\delta_j} \int_{I_j} v(x)\; \mathrm{d} x \] of a smooth function $v(x)$. In order to fix ideas, we consider a piecewise polynomial reconstruction $\mathcal{R}$ such that \[ \mathcal{R}(\ca{V},x) = \sum_j \chi_{I_j}(x) P_j(x), \] which gives the boundary extrapolated data as \begin{equation} \label{eq:bdryextrapdata} V_{j+1/2}^-=P_j(x_{j+1/2}), \qquad V_{j+1/2}^+=P_{j+1}(x_{j+1/2}). \end{equation} The reconstruction must be conservative, i.e. \[ \frac{1}{\delta_j} \int_{I_j} \mathcal{R}(\ca{V},x)\; \mathrm{d} x = \ca{V}_j, \] and high order accurate at the cell interfaces for smooth data, in the sense that \[ V_{j+1/2}^- = v(x_{j+1/2}) + O(\delta_j)^p, \qquad V_{j-1/2}^+ = v(x_{j-1/2}) + O(\delta_j)^p. \] Moreover, the reconstruction should be non-oscillatory, preventing the onset of spurious oscillations.
Finally, for accuracy of order higher than 2, the evaluation of the cell average of the source term requires the reconstruction of the point values of $v$ at the nodes of the well-balanced quadrature formula. For schemes of order 3 and 4, it is enough to reconstruct $v$ at the cell centers, thus we will require that, for smooth $v(x)$, \[ V_{j} = v(x_{j}) + O(\delta_j)^p. \] \subsubsection*{First order reconstruction} In this case, the reconstruction is piecewise constant, and we have \[ V_{j+1/2}^- = \ca{V}_j, \qquad V_{j-1/2}^+ = \ca{V}_j. \] \subsubsection*{Second order reconstruction} Here, the reconstruction is piecewise linear, and we have \[ V_{j+1/2}^- = \ca{V}_j+\tfrac12 \sigma_j \delta_j, \qquad V_{j-1/2}^+ = \ca{V}_j-\tfrac12 \sigma_j \delta_j, \] where $\sigma_j$ is a limited slope: having chosen a limiter $\Phi$, we define the interface slopes as \begin{equation}\label{e:interface_slope} \sigma_{j+1/2} = \frac{\ca{V}_{j+1}-\ca{V}_{j}}{x_{j+1}-x_j}= \frac{\ca{V}_{j+1}-\ca{V}_{j}}{\tfrac12(\delta_j+\delta_{j+1})} \end{equation} and the limited slope within cell $I_j$ is then given by \[ \sigma_j = \Phi \left( \sigma_{j-1/2},\sigma_{j+1/2}\right). \] For a collection of limiting functions, see \cite{LeVeque:book}. In our tests, we have chosen the MinMod limiter. \subsubsection*{Third order reconstruction} The third order reconstruction is based on the compact WENO (C-WENO) technique introduced in \cite{LPR01}. This reconstruction is characterized by a particularly compact stencil, which is very important when dealing with adaptive grids. Moreover, unlike the classical WENO third order reconstruction based on the combination of two linear functions, the C-WENO reconstruction also contains a parabola and remains uniformly third order accurate throughout the interval $I_j$ on smooth flows. To our knowledge, the reconstruction presented here is the first extension of the C-WENO reconstruction to the case of non-uniform grids. Fig.
\ref{f:cweno} illustrates the polynomials composing this reconstruction. \begin{figure} \caption{\sf Compact WENO reconstruction} \label{f:cweno} \end{figure} The interpolant is piecewise quadratic, and the parabola reconstructed in each cell is the convex combination of two linear functions $P^1_L$, $P^1_R$, and a parabola, $P^2_{C}$. In order to simplify the notation we describe the reconstruction on a reference cell, labelled with the index $j=0$. The two linear functions interpolate $v$ in the sense of cell averages on the stencils $\{I_{-1}, I_0\}$ and $\{I_{0}, I_{+1}\}$. Each of these functions approximates $v$ with order $O(\delta_0)^2$ accuracy uniformly on $I_0$. Further, the parabola $P^2_{\text{OPT}}$ is introduced by the requirement that \[ \frac{1}{\delta_0}\int_{I_0} P^2_{\text{OPT}}(x) \; \mathrm{d} x = \ca{V}_0, \qquad \frac{1}{\delta_{\pm 1}}\int_{I_{\pm 1}} P^2_{\text{OPT}}(x) \; \mathrm{d} x = \ca{V}_{\pm 1}. \] This parabola approximates $v$ with order $O(\delta_0)^3$ accuracy uniformly on $I_0$. Next, the parabola $P^2_C$ is introduced, defined by \[ P^2_{\text{OPT}} = \alpha_0 P^2_C + \alpha_{+1} P^1_R + \alpha_{-1}P^1_L \] with $\alpha_0=\tfrac12$, $\alpha_{\pm 1}=\tfrac14$. The reconstruction is given by \[ P^2(x) = \omega_0 P^2_C + \omega_{+1} P^1_R + \omega_{-1}P^1_L. \] When the function $v$ is smooth, one requires that $\omega_k=\alpha_k + O(\delta_0)^2$, so that $P^2$ has the same accuracy as $P^2_{\text{OPT}}$; otherwise, the nonlinear weights $\omega_k$ are designed to switch on only the contribution coming from the one-sided stencil on which the function is smooth. For a non uniform grid, the coefficients of the two linear interpolants on the cell $I_0$ are \begin{align*} P^1_R(x)& = \ca{V}_0 + \sigma_{+1/2}(x-x_0) \\ P^1_L(x)& = \ca{V}_0 + \sigma_{-1/2}(x-x_0), \end{align*} where $\sigma_{\pm 1/2}$ have been defined in \eqref{e:interface_slope}.
The optimal parabola is \begin{align*} P^2_{\text{OPT}} &= a + b(x-x_0) + c(x-x_0)^2, \\ c&= \frac32 \frac{\sigma_{+1/2}-\sigma_{-1/2}}{\delta_{-1}+\delta_{0}+\delta_{+1}} \\ b&= \frac{( \delta_{0}+2\delta_{-1})\sigma_{+1/2}+( \delta_{0}+2\delta_{+1})\sigma_{-1/2}} {2(\delta_{-1}+\delta_{0}+\delta_{+1})} \\ a&= \ca{V}_0 - \tfrac{1}{12}c\, \delta_0^2. \end{align*} As in WENO-like reconstructions, the nonlinear weights $\omega_k$ are computed as \[ \tilde{\omega}_k = \frac{\alpha_k}{(\epsilon + \text{IS}_k)^2}, \qquad \omega_k=\frac{\tilde{\omega}_k}{\sum_{l=-1}^1 \tilde{\omega}_l}, \] starting from the smoothness indicators $\text{IS}_k$ defined in \cite{Shu97}. In this case, they are given by \begin{align*} \text{IS}_{-1} & = \delta_0^2 \sigma_{-1/2}^2 \\ \text{IS}_1 & = \delta_0^2 \sigma_{+1/2}^2 \\ \text{IS}_0 & = \frac{1}{\alpha_0^2} \left[ \left( b-\alpha_{-1}\sigma_{-1/2}-\alpha_{+1}\sigma_{+1/2}\right)^2\delta_0^2 + \tfrac{13}{3}c^2\, \delta_0^4 \right]. \end{align*} Since the resulting reconstruction $P^2$ is uniformly third order accurate on the whole interval, the boundary extrapolated data and the value $V_0$ at the cell center are all computed evaluating the same quadratic polynomial at the corresponding points inside the cell. \subsubsection*{Fourth order reconstruction} The fourth order reconstruction is based on the fifth order WENO reconstruction computed from the convex combination of three parabolas, as in \cite{Shu97}. The coefficients of the combination of the three parabolas are computed in order to yield fifth order accuracy at the boundary of the cell, see Fig. \ref{f:WENO5}. It is tedious but straightforward to see that positive coefficients can be found to result in fifth order accuracy at the cell interfaces even on non uniform grids (see below and \cite{WangFengSpiteri}). However, there is no set of positive coefficients resulting in fifth order accuracy at the cell center, see \cite{NatvigEtAl}.
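Before turning to the details of the fourth order case, the third order C-WENO formulas above can be collected into a single-cell Python sketch (names are ours; $\epsilon=10^{-6}$ assumed):

```python
def cweno3_cell(vm1, v0, vp1, dm1, d0, dp1, eps=1e-6):
    """Third order C-WENO in one cell of a non-uniform grid: returns the
    reconstructed values at the left interface, the cell centre and the
    right interface, from the cell averages (vm1, v0, vp1) on cells of
    widths (dm1, d0, dp1)."""
    sL = (v0 - vm1) / (0.5 * (dm1 + d0))          # sigma_{-1/2}
    sR = (vp1 - v0) / (0.5 * (d0 + dp1))          # sigma_{+1/2}
    dsum = dm1 + d0 + dp1
    # optimal parabola a + b (x - x_0) + c (x - x_0)^2
    c = 1.5 * (sR - sL) / dsum
    b = ((d0 + 2.0 * dm1) * sR + (d0 + 2.0 * dp1) * sL) / (2.0 * dsum)
    a = v0 - c * d0 ** 2 / 12.0
    # P2_C from P2_OPT = (1/2) P2_C + (1/4) P1_R + (1/4) P1_L
    aC, bC, cC = 2.0 * a - v0, 2.0 * b - 0.5 * (sL + sR), 2.0 * c
    # smoothness indicators and nonlinear weights (alpha = 1/4, 1/2, 1/4)
    IS = {-1: d0 ** 2 * sL ** 2,
           1: d0 ** 2 * sR ** 2,
           0: 4.0 * ((b - 0.25 * (sL + sR)) ** 2 * d0 ** 2
                     + 13.0 / 3.0 * c ** 2 * d0 ** 4)}
    alpha = {-1: 0.25, 0: 0.5, 1: 0.25}
    w = {k: alpha[k] / (eps + IS[k]) ** 2 for k in alpha}
    tot = sum(w.values())
    def P(x):  # x measured from the cell centre x_0
        return (w[0] * (aC + bC * x + cC * x * x)
                + w[1] * (v0 + sR * x) + w[-1] * (v0 + sL * x)) / tot
    return P(-0.5 * d0), P(0.0), P(0.5 * d0)
```

On smooth data the weights stay close to $(\tfrac14,\tfrac12,\tfrac14)$, and for linear data the reconstruction is exact at all three points.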
Here we show that it is possible to find three positive coefficients giving {\em fourth} order accuracy at the center of the cell. \begin{figure} \caption{\sf Parabolic WENO reconstruction} \label{f:WENO5} \end{figure} For the sake of completeness, we review the coefficients of the reconstruction on non uniform grids, as in \cite{WangFengSpiteri}, using the notation established in Fig. \ref{f:WENO5}. Again we consider a reference cell with index $0$. The goal of the reconstruction is to mimic the quartic polynomial $P_{\text{OPT}}$ interpolating the data $\ca{V}_{l}, l=-2, \dots, 2$ in the sense of cell averages. Clearly, $P_{\text{OPT}}$ would provide fifth order accuracy uniformly in the interval $I_0$, in the case of smooth data. For each point $\hat{x}$ at which the reconstruction is needed, we look for three positive coefficients $d_{-1}, d_{0}, d_{1}$ that add up to $1$ and such that \begin{equation}\label{eq:WENO5:def} P_{\text{OPT}} (\hat{x}) = \sum_{l=-1}^1 d_l P_l(\hat{x}), \end{equation} where the $P_l$'s are the three parabolas, interpolating in the sense of cell averages the data $\ca{V}_{l-1}, \ca{V}_{l}, \ca{V}_{l+1}$. The coefficients of the three parabolas can be found in \cite{WangFengSpiteri}. Here we give the linear weights that allow us to reconstruct the left and right boundary extrapolated data.
To simplify the notation, we write \begin{equation} \label{eq:notazionebrutta} \delta_l^k = \sum_{i=l}^k \delta_i, \end{equation} then the coefficients for the boundary extrapolated data $V_{+1/2}^-$ are \begin{align*} d_{1} &= \frac{\delta_{-1}(\delta_{-2}+\delta_{-1})}{\delta_{-2}^2 \delta_{-1}^2} \\ d_0 &= \frac{\delta_{0}^2(\delta_{-2}+\delta_{-1})(\delta_{-2}^1 + \delta_{-1}^2)} {\delta_{-2}^2 \delta_{-1}^2 \delta_{-2}^1} \\ d_{-1} &= \frac{\delta_{0}^2(\delta_{0}+\delta_1)}{\delta_{-2}^2 \delta_{-2}^1} \end{align*} Note that, if $\delta_{-2}=\delta_{-1}=\delta_0=\delta_1=\delta_2$, then $d_{-1}= \tfrac{3}{10}, d_0= \tfrac{3}{5}, d_1= \tfrac{1}{10}$, as in the usual uniform grid case. Similarly, the coefficients for the reconstruction of $V_{-1/2}^+$ are \begin{align*} d_{-1} &= \frac{\delta_1(\delta_1+\delta_2)}{\delta_{-2}^2 \delta_{-2}^1} \\ d_0 &= \frac{\delta_{-2}^0(\delta_1+\delta_2)(\delta_{-2}^1 + \delta_{-1}^2)} {\delta_{-2}^2 \delta_{-1}^2 \delta_{-2}^1} \\ d_1 &= \frac{\delta_{-2}^0(\delta_{-1}+\delta_0)}{\delta_{-2}^2 \delta_{-1}^2} \end{align*} We remark that the coefficients $d_k$ are positive and add up to $1$, so that \eqref{eq:WENO5:def} is a convex combination, for all possible values of the local grid size $\delta_{-2},\ldots,\delta_{2}$. For the $5^{\text{th}}$-order reconstruction at cell center $x_0$, one finds negative coefficients even for uniform meshes. In fact, see \cite{NatvigEtAl}, $d_{-1} =-\tfrac{9}{80}, d_0 =\tfrac{49}{40}, d_1 =-\tfrac{9}{80}$. Since the well balanced quadrature based on the three points $x_{\pm 1/2}, x_0$ is only fourth order accurate, there is actually no need for fifth order accuracy in this case. Thus, we look for {\em positive} coefficients $d_0, d_{\pm 1}$ such that $1=\sum d_l$, and $V_0$ is fourth order accurate, \[ V_0 = \sum_{l=-1}^1 d_l P_l(x_0) = v(x_0) + O(\delta_0)^4. 
\] After tedious computations, we find that $d_1$ and $d_{-1}$ must satisfy \[ \delta_{-2}^1 d_{-1} - \delta_{-1}^2d_1 = \delta_1 - \delta_{-1} \] \begin{figure} \caption{\sf Reconstruction of the point value in the cell center for WENO. Locus of positive linear weights (dash-dot lines) and the coefficients chosen by \eqref{eq:CWEN04:center}} \label{fig:WENO4} \end{figure} \noindent Since we wish all coefficients to be positive, the solution must be sought in the simplex shown in Fig. \ref{fig:WENO4}. Since there is a single constraint for two unknowns, the problem is under-determined; we pick the values that maximize the size of the minimum coefficient, that is \begin{equation}\label{eq:CWEN04:center} \text{If } \delta_1 > \delta_{-1} \left\{ \begin{aligned} & d_1 = \frac12 \frac{\delta_{-2} + 2 \delta_{-1}+\delta_0}{\delta_{-2}^1 + \delta_{-1}^2} \\ & d_{-1} = \frac{\delta_1 - \delta_{-1} + d_1 \delta_{-1}^2}{\delta_{-2}^1} \\ & d_0 = 1 - d_{-1} - d_1 \end{aligned} \right. , \qquad \text{else } \left\{ \begin{aligned} & d_{-1} = \frac12 \frac{\delta_{2} + 2 \delta_{1}+\delta_0}{\delta_{-2}^1 + \delta_{-1}^2} \\ & d_{1} = \frac{\delta_{-1} - \delta_{1} + d_{-1} \delta_{-2}^1}{\delta_{-1}^2} \\ & d_0 = 1 - d_{-1} - d_1 \end{aligned} \right. \end{equation} where again we have used the convention \eqref{eq:notazionebrutta}. \section{Well-balanced schemes} \label{s:wb} It is important to perform numerical integration of a system of balance laws with schemes that preserve the steady states exactly at a discrete level (well-balanced schemes), since only these allow one to distinguish small perturbations of these states from numerical noise \cite{BermudezVazquez:1994}. In this section we describe a technique to obtain well-balanced schemes on non-uniform grids for the shallow water equations, with particular attention to the lake at rest solution. In this case, besides well-balancing, it is also particularly important to preserve the positivity of the water height.
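Before moving on, the linear weights of the previous section can be sanity-checked numerically. The sketch below (Python; names are ours, with `D(l, k)` playing the role of $\delta_l^k$) verifies positivity, the normalization $\sum_l d_l = 1$ and the uniform-grid limits:

```python
def weno_linear_weights(dm2, dm1, d0, dp1, dp2):
    """Linear weights on the 5-cell stencil (delta_{-2},...,delta_2).
    Returns three triples (d_{-1}, d_0, d_1): for V^+_{j-1/2}, for
    V^-_{j+1/2}, and for the fourth order value at the cell centre."""
    d = (dm2, dm1, d0, dp1, dp2)
    D = lambda l, k: sum(d[i + 2] for i in range(l, k + 1))   # delta_l^k
    right = (D(0, 2) * (d0 + dp1) / (D(-2, 2) * D(-2, 1)),
             D(0, 2) * (dm2 + dm1) * (D(-2, 1) + D(-1, 2))
             / (D(-2, 2) * D(-1, 2) * D(-2, 1)),
             dm1 * (dm2 + dm1) / (D(-2, 2) * D(-1, 2)))
    left = (dp1 * (dp1 + dp2) / (D(-2, 2) * D(-2, 1)),
            D(-2, 0) * (dp1 + dp2) * (D(-2, 1) + D(-1, 2))
            / (D(-2, 2) * D(-1, 2) * D(-2, 1)),
            D(-2, 0) * (dm1 + d0) / (D(-2, 2) * D(-1, 2)))
    # positive weights for fourth order accuracy at the cell centre
    if dp1 > dm1:
        c1 = 0.5 * (dm2 + 2.0 * dm1 + d0) / (D(-2, 1) + D(-1, 2))
        cm = (dp1 - dm1 + c1 * D(-1, 2)) / D(-2, 1)
    else:
        cm = 0.5 * (dp2 + 2.0 * dp1 + d0) / (D(-2, 1) + D(-1, 2))
        c1 = (dm1 - dp1 + cm * D(-2, 1)) / D(-1, 2)
    centre = (cm, 1.0 - cm - c1, c1)
    return left, right, centre
```

On a uniform grid the interface weights reduce to $(\tfrac{3}{10},\tfrac35,\tfrac1{10})$ and the centre weights to $(\tfrac14,\tfrac12,\tfrac14)$.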
We use and generalize to nonuniform meshes the techniques of \cite{Audusse:2004} for obtaining well-balanced schemes irrespective of the chosen numerical fluxes and of \cite{NatvigEtAl} to obtain high order accuracy through Richardson extrapolation. There are two sources of error in well-balanced schemes. We illustrate them with a very simple example. We consider a first order reconstruction with the Lax-Friedrichs numerical flux on the lake at rest solution (see Fig. \ref{fig:sw} for notation), thus we suppose that for every index $j$, $q^n_j=0$ and $h^n_j+z_j=H$. The discretized equation on a uniform grid would be \begin{align*} h_j^{n+1} &= h_j^n + \tfrac{\lambda}2\alpha \left(h^n_{j+1} - 2h^n_j +h^n_{j-1}\right)\\ q_j^{n+1} &= -\tfrac{\lambda}4 g\left( (h^n_{j+1})^2 - (h^n_{j-1})^2 \right) -\tfrac{\lambda}2 gh^n_j \left( z_{j+1}- z_{j-1}\right) \end{align*} where we have already substituted $q^n_j=0$. It is easy to see that in the first equation, $h$ does not remain constant because the artificial diffusion term introduces a perturbation whenever $z(x)$ is not constant. In order to prevent this kind of perturbation it is enough to reconstruct the equilibrium variables or to ensure that the boundary extrapolated values at the interface are continuous when equilibrium occurs. In the second equation, the perturbation due to the artificial diffusion does not appear exactly because $q$ is an equilibrium variable for the lake at rest equilibrium. However there is a lack of balance between the source and the fluxes at the discrete level: in fact one finds that $q_j^{n+1}=-\tfrac{\lambda g}{4}(z_{j+1}^2-2z_jz_{j+1}+2z_jz_{j-1}-z_{j-1}^2)$, which is in general nonzero, unless the bottom is flat. For these reasons we use the hydrostatic reconstruction of \cite{Audusse:2004} which ensures that the reconstruction is continuous across interfaces when the system is in equilibrium and moreover preserves positivity of the water height.
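The lack of balance just described, and the hydrostatic fix, can be checked directly with a short Python sketch (our own naming; $g=9.81$ assumed):

```python
import numpy as np

g = 9.81  # gravitational constant (assumed value)

def naive_q_update(H, z, lam):
    """Naive (non-well-balanced) Lax-Friedrichs momentum update on the lake
    at rest h = H - z, q = 0; returns q^{n+1} on the interior cells."""
    h = H - z
    return (-lam * g / 4.0 * (h[2:] ** 2 - h[:-2] ** 2)
            - lam * g / 2.0 * h[1:-1] * (z[2:] - z[:-2]))

def hydrostatic_states(H_m, H_p, h_m, h_p):
    """Hydrostatic reconstruction at one interface: from the boundary
    extrapolated total heights H^{-+} and water heights h^{-+}, return the
    corrected, non-negative water heights (h_hat^-, h_hat^+)."""
    z_star = max(H_m - h_m, H_p - h_p)        # z_{j+1/2} = max(z^+, z^-)
    return max(H_m - z_star, 0.0), max(H_p - z_star, 0.0)
```

The naive update leaves a spurious momentum that is quadratic in the bottom increments, while the corrected interface heights coincide on lake at rest data, so any consistent numerical flux is balanced there.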
Given a reconstruction algorithm $\mathcal{R}$ with accuracy of order $p$, reconstruct the equilibrium variables $H$ and $q$, obtaining the boundary extrapolated data as in equation \eqref{eq:bdryextrapdata}. In order to ensure that the water height appearing in the fluxes remains non-negative, one locally modifies the bottom by computing boundary extrapolated data also for $h$ and defining \[ z_{j+1/2}^{\pm} = H_{j+1/2}^{\pm} - h_{j+1/2}^{\pm} \] and these are used to compute the bottom topography at the interface \[ z_{j+1/2} = \max(z_{j+1/2}^{+},z_{j+1/2}^{-}). \] Once these are known, the interface values of $h$ are corrected giving new values \[ \widehat{h}_{j+1/2}^{\pm} = \max(H_{j+1/2}^{\pm} - z_{j+1/2},0). \] Note that $\widehat{h}_{j+1/2}^{\pm}\geq0$ and that at equilibrium $\widehat{h}_{j+1/2}^{+}=\widehat{h}_{j+1/2}^{-}$. The numerical fluxes \eqref{e:fluxes} are then applied to the states \[ U^{\pm}_{j+1/2} = \left[\widehat{h}_{j+1/2}^{\pm}, \; \widehat{h}_{j+1/2}^{\pm} v_{j+1/2}^{\pm} \right]. \] Here $v_{j+1/2}^{\pm}$ denotes the velocity, obtained as $v_{j+1/2}^{\pm}=q_{j+1/2}^{\pm}/\widehat{h}_{j+1/2}^{\pm}$ or through a desingularization procedure as proposed in \cite{Kur:desing}. Since the reconstruction is continuous at equilibrium, for lake at rest data, for each consistent numerical flux, one has ${\mathcal F}(U^-_{j+1/2},U^+_{j+1/2})=f(U^{\pm}_{j+1/2})$. In this fashion Audusse et al. are able to ensure well-balancing independently of the particular numerical flux used \cite{Audusse:2004}. In order to complete the semidiscrete scheme \eqref{e:semischeme} we still need to specify the discretization of the source term. For a first order scheme it is enough to choose \begin{equation}\label{eq:S:1} G_j = \frac{g}2 \begin{pmatrix} 0\\ (\widehat{h}^-_{j+1/2})^2 - (\widehat{h}^+_{j-1/2})^2 \end{pmatrix}.
\end{equation} Note that at equilibrium, the above expression exactly cancels out the numerical fluxes and thus the lake at rest solution is preserved at the discrete level. Consistency is obtained through the dependence of $\widehat{h}$ on $z$. At second order, the second component of the source term is \begin{align}\label{eq:S:2} G_{j,2} = \frac{g}2 & (\, (\widehat{h}^-_{j+1/2})^2 -(h^-_{j+1/2})^2 + (h^+_{j-1/2}+h^-_{j+1/2})(z^+_{j-1/2}-z^-_{j+1/2}) \\ & + (h^+_{j-1/2})^2 -(\widehat{h}^+_{j-1/2})^2\, ) \nonumber \end{align} On the lake at rest solution, the two $\widehat{h}$ terms cancel the numerical fluxes, while the other terms add up to zero, again giving a well-balanced scheme \cite{Audusse:2004}. On the other hand, off equilibrium, the first two and the last two terms cancel by consistency and the middle term is consistent with the cell average of the source. Clearly, equation \eqref{eq:S:2} must be applied at both stages of the second order Runge-Kutta method, in order to achieve second order accuracy in time as well. For higher orders, we use Richardson extrapolation as in \cite{NatvigEtAl}. This technique is particularly useful on non-uniform grids because it concentrates all the computational effort for the source term within one cell. In fact, the subcell resolution required to compute the quadrature of the source term with high order accuracy can be naturally applied by introducing uniformly distributed nodes within each cell. Thus the high order evaluation of the source term is performed entirely within one cell and the coefficients of the quadrature formula will not be affected by the nonuniformity of the mesh. The source can be rewritten as \begin{equation}\label{eq:S:4} G_j = \frac{g}2 \begin{pmatrix} 0\\ (\widehat{h}^-_{j+1/2})^2 -(h^-_{j+1/2})^2 + \widetilde{G}_j + (h^+_{j-1/2})^2 -(\widehat{h}^+_{j-1/2})^2 \end{pmatrix}.
\end{equation} At second order, \[ \widetilde{G}_j = (h^+_{j-1/2}+h^-_{j+1/2})(z^+_{j-1/2}-z^-_{j+1/2}) = -2\int_{x_{j-1/2}}^{x_{j+1/2}} hz_x \,\mathrm{d}x + O(\delta_j^3),\] so that $\tfrac{g}2\widetilde{G}_j$ is consistent with the integral of the source term $-ghz_x$ over the cell. For order up to four, it is enough to choose \begin{align*} \widetilde{G}_j = & \frac43 \left( (h^+_{j-1/2}+h_{j})(z^+_{j-1/2}-z_j) + (h_j+h^-_{j+1/2})(z_j-z^-_{j+1/2}) \right) \\ & -\frac13 (h^+_{j-1/2}+h^-_{j+1/2})(z^+_{j-1/2}-z^-_{j+1/2}) , \end{align*} where $h_j$ and $z_j$ denote the reconstruction at the center of the cell, which is why we have developed high order reconstructions for the point values of the solution at $x_j$. Again, equation \eqref{eq:S:4} will be applied to all stages of the Runge-Kutta method used in the fully discrete scheme. \begin{figure} \caption{Shallow water setup.} \label{fig:sw} \end{figure} \section{Numerical entropy production for balance laws} \label{s:entropy} We wish to devise an error indicator for driving adaptive schemes for balance laws. In particular we extend the notion of numerical entropy production proposed in \cite{P:entropy,PS:entropy} to the case of balance laws with a geometric source term. In the homogeneous case, that is for systems of hyperbolic conservation laws, the entropy is defined as a convex function $\eta(u)$ for which there exists a function $\psi(u)$ (called entropy flux) such that $\nabla^T\eta f' = \nabla^T\psi$ where $f'$ denotes the Jacobian of the flux function $f$. Then, on smooth solutions, \[ \partial_t\eta+\partial_x\psi=0,\] while on entropic shocks \[\partial_t\eta+\partial_x\psi\leq 0\] in a weak sense, thus singling out the correct unique solutions \cite{Dafermos}. One can exploit this structure at the discrete level to devise a regularity indicator for finite volume schemes for conservation laws. A fully discrete finite volume conservative scheme for a hyperbolic system can be written in the form \[ \ca{U}^{n+1}_j= \ca{U}^{n}_j - \lambda \left( F_{j+1/2}- F_{j-1/2}\right).
\] Here \[ F_{j+1/2} = \sum_{i=1}^s b_i \mathcal{F}\left(U^{(i),-}_{j+1/2},U^{(i),+}_{j+1/2}\right), \] $\mathcal{F}$ is a consistent and monotone numerical flux and $U^{(i),\pm}_{j+1/2}$ denote the boundary extrapolated data computed on the $i$-th stage value. Choosing a numerical entropy flux $\mathcal{P}$, consistent with the exact entropy flux $\psi$, we can define the quantity \begin{equation} \label{eq:S} S^n_j = \frac{1}{\Delta t_n} \left[ \ca{\eta(U^{n+1})}_j - \ca{\eta(U^{n})}_j +\lambda \left(P_{j+1/2}-P_{j-1/2}\right) \right] \end{equation} where $\Delta t_n$ is the time step and \[ P_{j+1/2} = \sum_{i=1}^s b_i \mathcal{P}\left(U^{(i),-}_{j+1/2},U^{(i),+}_{j+1/2}\right). \] In \cite{PS:entropy} we proved that \[ S^n_j= \begin{cases} O(h^p) & \text{on smooth flows} \\ \sim C/h & \text{on shocks} \end{cases} \] where $C$ does not depend on $h$ and $p$ is the order of accuracy of the scheme. Moreover, if the numerical flux can be written in viscous form as \[ \mathcal{F}(U^-,U^+) = \tfrac12 (f(U^-)+f(U^+)) - \tfrac12 Q(U^-,U^+)\, (U^+-U^-) \] we choose the numerical entropy flux as \begin{equation}\label{eq:numentflux} \mathcal{P}(U^-,U^+) = \tfrac12 (\psi(U^-)+\psi(U^+))- \tfrac12 Q(U^-,U^+)\, (\eta(U^+)-\eta(U^-)). \end{equation} Then we see numerically that the numerical entropy production is essentially negative definite on smooth flows, in the sense that positive values of $S_j^n$ may occur near local extrema, but their amplitude decreases faster than the order of convergence of the scheme. In particular, we have proved this claim for the upwind and Lax-Friedrichs numerical fluxes applied to first order schemes in the scalar case \cite{PS:entropy}. We wish to extend this construction to systems of $n$ balance laws.
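As a scalar illustration of \eqref{eq:S} and \eqref{eq:numentflux}, the following Python sketch (our own naming, not the shallow water code) takes Burgers' equation, $f(u)=u^2/2$, with $\eta=u^2/2$, $\psi=u^3/3$, a first order Lax-Friedrichs flux and forward Euler in time:

```python
import numpy as np

def entropy_residual_burgers(u, dx, dt, alpha):
    """One forward Euler step of first order Lax-Friedrichs for Burgers'
    equation on a periodic grid; returns the entropy residual S computed
    with the viscous-form numerical entropy flux (Q = alpha*I)."""
    f, eta, psi = 0.5 * u ** 2, 0.5 * u ** 2, u ** 3 / 3.0
    up = np.roll(u, -1)                      # u_{j+1}
    F = 0.5 * (f + np.roll(f, -1)) - 0.5 * alpha * (up - u)
    P = 0.5 * (psi + np.roll(psi, -1)) - 0.5 * alpha * (0.5 * up ** 2 - eta)
    u_new = u - dt / dx * (F - np.roll(F, 1))
    return (0.5 * u_new ** 2 - eta) / dt + (P - np.roll(P, 1)) / dx

# stationary entropic shock: S is strongly negative at the jumps,
# identically zero on the constant states
u0 = np.where(np.arange(40) < 20, 1.0, -1.0)
S = entropy_residual_burgers(u0, dx=0.1, dt=0.05, alpha=1.0)
```

This reproduces, in miniature, the behaviour quoted above: $S\sim C/h$ at the discontinuity and no spurious positive production elsewhere.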
In the case of separable balance laws in the sense of \cite{XingShu:2006:WBDG}, namely if the source can be written as \begin{equation} \label{eq:separable} g(u,x) = \sum_{j=1}^M s_j(u,x)z'_j(x) \end{equation} (with $s_j:\mathbb{R}^n\times\mathbb{R}\to\mathbb{R}^n$), the balance law can be rewritten as a homogeneous system of $n+M$ equations. For the case $M=1$, denoting by $A(u)$ the $n\times n$ Jacobian matrix of the flux $f$ and moving the source term to the left-hand side, one has \begin{equation} \label{eq:M1} \partial_t \begin{pmatrix}u\\z_1\end{pmatrix} + \begin{pmatrix} A(u) & -s_1(u,x)\\0&0 \end{pmatrix} \partial_x \begin{pmatrix}u\\z_1\end{pmatrix} = 0. \end{equation} Exploiting this structure one can extend the notion of entropy. In fact the entropy-entropy flux pair for the balance law must satisfy \begin{equation}\label{e:entropy_fluxes} \left[ \nabla^T_u\eta A(u) ,\; -\nabla^T_u\eta \cdot s_1(u,x) \right] = \left[ \nabla^T_u\psi ,\; \partial_{z_1} \psi \right] \end{equation} Note that the $z$-derivative of $\eta$ does not appear in the compatibility condition above, and thus convexity with respect to $z$ is not required. This construction can be easily extended to $M>1$. Thus we still have entropy conservation for the balance law in the smooth case, provided the entropy-entropy flux pair satisfies \eqref{e:entropy_fluxes}, and the entropy residual defined in \eqref{eq:S} gives a measure of the local error of the numerical scheme. In the shallow water case, the entropy pair can be chosen as \begin{equation}\label{eq:shentropy} \eta(h,u) = \tfrac12 \left(hu^2+gh^2\right) +ghz \qquad \psi(h,u) = \eta(h,u)u + \tfrac12 gh^2u, \end{equation} see \cite{Bouchut:book}. Note that the function $\eta$ represents the total energy of the system including the potential energy due to the bottom topography. In the following section we will show that the entropy residual converges with the expected rate on smooth flows and detects the presence of shocks in the solution.
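The compatibility conditions for the shallow water pair can be verified with finite differences (Python sketch; conserved variables $(h,q)$, $s_1=(0,-gh)$ from the source of \eqref{eq:swe}; note that writing the source on the left-hand side brings in $-s_1$):

```python
import numpy as np

g = 9.81  # gravitational constant (assumed value)

def eta(h, q, z):
    # total energy: 0.5*h*v^2 + 0.5*g*h^2 + g*h*z with v = q/h
    return 0.5 * (q * q / h + g * h * h) + g * h * z

def psi(h, q, z):
    # entropy flux: (eta + 0.5*g*h^2) * v
    return (eta(h, q, z) + 0.5 * g * h * h) * q / h

def grad_u(f, h, q, z, d=1e-6):
    # central finite-difference gradient with respect to u = (h, q)
    return np.array([(f(h + d, q, z) - f(h - d, q, z)) / (2 * d),
                     (f(h, q + d, z) - f(h, q - d, z)) / (2 * d)])

# sample state and flux Jacobian of f(u) = (q, q^2/h + g h^2/2)
h, q, z = 1.3, 0.4, 0.2
A = np.array([[0.0, 1.0],
              [g * h - (q / h) ** 2, 2.0 * q / h]])
s1 = np.array([0.0, -g * h])            # source: g(u,x) = s1 * z'(x)
dpsi_dz = (psi(h, q, z + 1e-6) - psi(h, q, z - 1e-6)) / 2e-6
```

Both components of the compatibility condition hold at the sample state up to the finite-difference error.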
\section{Numerical tests} \label{s:numerical} The following tests assess the accuracy of the high order reconstructions on non-uniform grids proposed in this work, the well-balancing properties of the fully discrete schemes for the shallow water equations, the resolution of discontinuities on non-uniform grids and the performance of the entropy residual as an error indicator. In all tests we used the local Lax-Friedrichs numerical flux and the entropy residual defined with the corresponding numerical entropy flux \eqref{eq:numentflux}, unless otherwise stated. \paragraph{Grids} In the numerical tests we use several grids that will be referred to as {\sl uniform}, {\sl quasi-regular}, {\sl random} and {\sl locally refined}. For simplicity we define them on the reference interval $[0,1]$. The quasi-regular grid is obtained as the image of a uniform grid with spacing $\delta=1/N$ under the map \[\varphi(x)=x+0.02\sin(10\pi x).\] The resulting grid spacing is depicted in the left panel of Figure \ref{fig:grids}: we point out that \[ (1-\tfrac{\pi}{5})\tfrac1N \leq \delta_j \leq (1+\tfrac{\pi}{5})\tfrac1N. \] \begin{figure} \caption{Grid spacing for the nonuniform grids used in the numerical tests, shown for the case of $100$ points in $[0,2]$. {\em Quasi-regular}} \label{fig:grids} \end{figure} Next, we consider non-uniform rough grids that are obtained by randomly moving the interfaces of a uniform grid, namely starting from a uniform grid with spacing $\delta$ we consider grids with interfaces at \[ \tilde{x}_{j+1/2} = j\delta+ \xi_j\tfrac{\delta}{4}\] where $\xi_j$ are random numbers uniformly distributed in $[-0.5,0.5]$. A realization of such a grid is shown in the right panel of Figure \ref{fig:grids}. Here it is easily seen that \[ \tfrac34 \tfrac1N \leq \delta_j \leq \tfrac54 \tfrac1N. \] We use this grid for the purpose of illustration, even though of course one would not use such an irregular grid in an application. This grid will be referred to as {\em random} grid.
In some tests we need a grid which is locally refined around a given point $w_C$. For this purpose we consider a grid which, on the standard domain $[0,1]$ is a map of a uniform grid under the function \begin{equation} \label{eq:locallyrefined} \varphi(w) = w + 3w(1-w)(w_C-w); \end{equation} where $w_C$ is the location in $[0,1]$ of the point where the grid should have its minimum spacing (see e.g. Fig. \ref{fig:steady:transshock:3}). \subsection{High order schemes on non-uniform grids} \paragraph{Convergence tests} Following \cite{XingShu:2005:WBSWEfd}, we compute the flow with initial data given by \begin{equation} \label{eq:test:Shu} z(x)=\sin^2(\pi x) \qquad h(0,x) = 5+e^{\cos(2\pi x)} \quad q(0,x) = \sin(\cos(2\pi x)) \end{equation} with periodic boundary conditions on the domain $[0,1]$. At time $t=0.1$ the solution is still smooth and we compare the numerical results with a reference solution computed with the fourth order scheme and $16384$ cells. The 1-norm of the errors appears in Figure \ref{fig:convrate} and the maximum entropy production is shown in Figure \ref{fig:entrate} for all schemes and the three grid types considered. \begin{figure} \caption{Error decay under grid refinement for first (top-left), second (top-right), third (bottom-left) and fourth (bottom-right) order schemes. The dashed line indicates the expected decay in each case.} \label{fig:convrate} \end{figure} All schemes have the expected accuracy, except for the fourth order scheme on the random grids, where the accuracy is slightly decreased due to the extreme irregularity of the grid. We point out however that, despite the reduced decay rate, the actual values of the error of the fourth order scheme even on the random grid are orders of magnitude smaller than those obtained with the third order scheme with the same number of degrees of freedom. 
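The three non-uniform grids can be generated as follows (Python sketch on the reference interval $[0,1]$, names ours); the checks confirm the spacing bounds stated above:

```python
import numpy as np

def quasi_regular_grid(N):
    # image of a uniform grid under phi(x) = x + 0.02*sin(10*pi*x)
    x = np.linspace(0.0, 1.0, N + 1)
    return x + 0.02 * np.sin(10.0 * np.pi * x)

def random_grid(N, rng):
    # interfaces j*delta + xi_j*delta/4 with xi_j uniform in [-1/2, 1/2]
    x = np.linspace(0.0, 1.0, N + 1)
    x[1:-1] += rng.uniform(-0.5, 0.5, N - 1) / (4.0 * N)
    return x

def locally_refined_grid(N, wC):
    # phi(w) = w + 3*w*(1-w)*(wC - w): smallest spacing near w = wC
    w = np.linspace(0.0, 1.0, N + 1)
    return w + 3.0 * w * (1.0 - w) * (wC - w)
```

Each function returns the $N+1$ cell interfaces; the cell widths $\delta_j$ are their consecutive differences.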
\paragraph{Well-balancing} We show a well-balancing test on the lake at rest solution using a bottom topography described by a uniformly distributed random variable sampled between $0$ and $1$, with total water height $h(x)+z(x)=1.5$. Table \ref{tab:wbtest} shows the well-balancing errors in the total water height and momentum, in the case of smooth nonuniform grids and random grids. Here $\Delta(h+z)_{j+1/2}= (h+z)_{j+1}- (h+z)_j$. All data are close to machine precision, as expected. \begin{table} \begin{center} \begin{tabular}{|c|rrrr|rrrr|} \hline & \multicolumn{4}{c|}{$\|\Delta (h+z)\|_{\infty}$} & \multicolumn{4}{c|}{$\|q\|_{\infty}$} \\ \hline Smooth & 100& 200 & 400 & 800 & 100& 200 & 400 & 800 \\ \hline $p=1$ & 0 & 0 & 0 & 0 & 4.51e-16 &5.55e-16 & 5.00e-16 & 7.68e-16\\ $p=2$ & 0 & 2.22e-16 & 2.22e-16 & 2.22e-16 & 3.82e-16 &8.47e-16 & 7.36e-16 & 1.54e-15\\ $p=3$ & 0 &4.44e-16 & 4.44e-16 & 6.66e-16 & 6.87e-16 &1.47e-15 & 1.67e-15 & 2.47e-15\\ $p=4$ & 8.88e-16 &6.66e-16 & 1.55e-15 & 1.55e-15 & 9.89e-16 &1.82e-15 & 1.67e-15 & 1.90e-15\\ \hline Random&\multicolumn{8}{c|}{} \\ \hline $p=1$ & 2.22e-16 & 2.22e-16 & 2.22e-16 & 2.22e-16 & 2.08e-16 &6.24e-16 & 6.77e-16 & 9.65e-16\\ $p=2$ & 2.22e-16 & 2.22e-16 & 2.22e-16 & 2.22e-16 & 2.91e-16 &7.25e-16 & 8.95e-16 & 9.99e-16\\ $p=3$ & 2.22e-16 & 6.66e-16 & 6.66e-16 & 6.66e-16 & 5.63e-16 &8.47e-16 & 9.94e-16 & 1.28e-15\\ $p=4$ & 6.66e-16 & 8.88e-16 & 1.33e-15 & 1.11e-15 & 8.68e-16 &7.94e-16 & 1.11e-15 & 1.43e-15\\ \hline \end{tabular} \end{center} \caption{Lake at rest test: well-balancing errors with rough bottom.
} \label{tab:wbtest} \end{table} \paragraph{Small perturbation of a lake at rest} The domain is $x\in[0,2]$, the bottom and initial total height are given by \begin{equation} \label{eq:smallpulse} z(x)=\begin{cases} 0.25(1+\cos(10\pi(x-0.5))) & 1.2\leq x\leq 1.4\\ 0 &\text{otherwise} \end{cases} \qquad H(x,0)=1+ 0.001 \chi_{[1.1,1.2]}(x) \end{equation} \begin{figure} \caption{LeVeque's test \eqref{eq:smallpulse}} \label{fig:smallpulse:3:regular} \end{figure} \begin{figure} \caption{LeVeque's test \eqref{eq:smallpulse}} \label{fig:smallpulse:3:random} \end{figure} \begin{figure} \caption{LeVeque's test \eqref{eq:smallpulse}} \label{fig:smallpulse:4:regular} \end{figure} \begin{figure} \caption{LeVeque's test \eqref{eq:smallpulse}} \label{fig:smallpulse:4:random} \end{figure} This test was first used by LeVeque in \cite{LeVeque} with a second order scheme, but here we use it with a smaller perturbation for the third and fourth order schemes, as in \cite{NatvigEtAl}. This test requires a well-balanced scheme to resolve correctly the small perturbations which otherwise would be hidden by numerical noise. The solutions are shown in Fig \ref{fig:smallpulse:3:regular} and \ref{fig:smallpulse:3:random} for the third order scheme and Fig \ref{fig:smallpulse:4:regular} and \ref{fig:smallpulse:4:random} for the fourth order one. In each of the figures the numerical solution obtained with the uniform grid is compared with the one obtained on a non-uniform mesh. It can be seen that the pulse is well-resolved in all cases and the results obtained on the non-uniform grids can be perfectly superposed on those computed on the uniform ones. In this test, the parameter $\epsilon$ in the nonlinear weights of the WENO schemes is set to $10^{-12}$, as pointed out in \cite{NatvigEtAl}. \paragraph{Moving water equilibria} Since our schemes are well-balanced around the lake-at-rest equilibrium, one does not expect them to compute moving water equilibria at machine precision.
Here we show two tests. In the first case we consider a transcritical steady state with a shock, over the parabolic hump \[ z(x)=\begin{cases} 0.2-0.05(x-10)^2 & 8\leq x \leq 12\\ 0 &\text{otherwise} \end{cases} \] in the domain $[0,25]$. We consider the steady state solution with $q(x)=0.18$, with Dirichlet boundary conditions $q=0.18$ at $x=0$ and $h=0.33$ at $x=25$. The solution has a steady shock at $x=11.665504281554291$. The computation was initialized with the exact steady state solution (see for example the Appendix A of \cite{Karni}) and the numerical integration was performed until $t=50$. \begin{figure} \caption{Steady solution with transcritical shock, approximated with a third order scheme (uniform and adapted grids). The dashed line in the left panel is the local grid size in the non-uniform grid.} \label{fig:steady:transshock:3} \end{figure} \begin{figure} \caption{Steady solution with transcritical shock, approximated with a fourth order scheme (uniform and adapted grids).
The dashed line in the left panel is the local grid size in the non-uniform grid.} \label{fig:steady:transshock:4} \end{figure} \begin{table} \begin{center} \begin{tabular}{|c|rr|rr|rr|rr|} \hline & \multicolumn{2}{c|}{$p=1$} & \multicolumn{2}{c|}{$p=2$} & \multicolumn{2}{c|}{$p=3$} & \multicolumn{2}{c|}{$p=4$} \\ \hline Uniform & error& rate & error& rate & error& rate & error& rate \\ \hline $100$ & 1.96e-1 & -- & 5.54e-2 & -- & 2.02e-2 & -- & 2.92e-3 & --\\ $200$ & 1.17e-1 & 0.74 & 1.42e-2 & 1.96 & 4.26e-3 & 2.24 & 1.40e-4 & 4.38\\ $400$ & 6.35e-2 & 0.89 & 3.29e-3 & 2.11 & 4.87e-4 & 3.13 & 5.12e-6 & 4.77\\ $800$ & 3.26e-2 & 0.96 & 8.08e-4 & 2.03 & 3.89e-5 & 3.65 & 1.60e-7 & 5.00\\ \hline Adapted&\multicolumn{8}{c|}{} \\ \hline $100$ & 9.20e-2 & -- & 6.96e-3 & -- & 9.78e-4 & -- & 4.54e-5 & -- \\ $200$ & 4.67e-2 & 0.97 & 1.71e-3 & 2.02 & 7.97e-5 & 3.62 & 1.36e-6 & 5.07 \\ $400$ & 2.34e-2 & 0.99 & 4.25e-4 & 2.01 & 6.57e-6 & 3.60 & 3.87e-8 & 5.13\\ $800$ & 1.17e-2 & 1.00 & 1.06e-4 & 2.01 & 5.63e-7 & 3.55 & 1.25e-9 & 4.95 \\ \hline \end{tabular} \end{center} \caption{Well-balancing errors for the subcritical steady state with Gaussian bottom.} \label{tab:wbsubcritial} \end{table} We show the solutions computed with uniform grids and with a grid refined ad-hoc around the shock position (see Eq \eqref{eq:locallyrefined}) with the scheme of order three (Figures \ref{fig:steady:transshock:3}) and four (Figure \ref{fig:steady:transshock:4}). The figures report with a dashed line the local cell size of the nonuniform grid, which is refined close to the shock. The right panels of each figure show a zoom on the shock and it is clear that the adapted solution (in red with crosses) approximates the exact solution (thin black line) better than the solution obtained with a uniform grid with the same number of points (blue line with dots), with no spurious oscillations.
In order to quantify the improvement due to the adapted grid and the rate of convergence of the schemes on moving water equilibria, we consider a smooth test problem, namely a subcritical steady flow over the smooth bump $z(x)=0.2e^{-(x-12.5)^2}$ on the domain $[0,25]$. The numerical scheme was initialized with the exact solution and the flow computed until $t=10$. Since the behaviour of the errors on the water height and on the momentum is very similar, only the former are reported in Table \ref{tab:wbsubcritial}. The first and second order schemes show the expected rates of convergence, while the third and fourth order ones have convergence rates well above the expected values ($3.60$ and $5.00$, respectively). We also consider nonuniform grids that are finer on the hump and coarser on the flat portion of the bottom function, namely those given by Eq.~\eqref{eq:locallyrefined} with $\overline{w}=12.5/25=0.5$. The errors on the adapted grids are much smaller than on the corresponding uniform grids, and the convergence rates are also confirmed on nonuniform grids. \subsection{Numerical entropy production} \paragraph{Rate of decay on smooth flows.} Figure \ref{fig:entrate} shows the numerical entropy production in the smooth test \eqref{eq:test:Shu} on several grid types. It is apparent that the decay rate, as expected, follows the order of accuracy of the corresponding schemes. Moreover, comparing this figure with Figure \ref{fig:convrate}, we note that the entropy decay mimics exactly the behaviour of the error, even in the case of the slight deterioration of accuracy observed on the random grid for the fourth order scheme. \begin{figure} \caption{Numerical entropy production decay under grid refinement for first (top-left), second (top-right), third (bottom-left) and fourth (bottom-right) order schemes.
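The convergence rates reported in Table \ref{tab:wbsubcritial} are the usual base-2 log-ratios of errors on successive grids. As a quick illustration (not part of the solver itself), the fourth-order uniform-grid column of the table can be reproduced as follows:

```python
import math

# Observed errors for the fourth-order scheme on uniform grids (from the table).
errors = {100: 2.92e-3, 200: 1.40e-4, 400: 5.12e-6, 800: 1.60e-7}
Ns = sorted(errors)
# Rate between grids with N and 2N cells: log2( e_N / e_2N ).
rates = [math.log2(errors[N] / errors[2 * N]) for N in Ns[:-1]]
print([round(r, 2) for r in rates])   # matches the 4.38, 4.77, 5.00 column
```

The same computation applied to the adapted-grid rows confirms the rates printed in the table.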
The dashed line indicates the expected decay in each case.} \label{fig:entrate} \end{figure} \paragraph{Two shocks.} We set up initial data with a flat bottom, water at rest and $h(0,x) = e^{-50x^2}$ on the domain $[-2,2]$. As the flow evolves, two shocks form and separate from each other: at $t=0.2$ the computed water height is depicted in the top-left plot of Figure \ref{fig:stonato}. Each of the other panels of Figure \ref{fig:stonato} shows the entropy residual obtained with four different grid sizes. The results for the second, third and fourth order schemes appear in the top-right, bottom-left and bottom-right panels, respectively. In all three cases it can be seen that the numerical entropy production on the two shocks increases under grid refinement like $1/h$. On the other hand, the magnitude of the peak of the numerical entropy production does not depend on the order of the scheme. This is to be contrasted with the numerical entropy production on smooth flows just shown, where one observes entropy residuals of $O(h^p)$, where $p$ is the order of the scheme. \begin{figure} \caption{Entropy production on shocks under grid refinement for several schemes. Top-left: water height. Top-right: second order scheme. Bottom-left: third order scheme. Bottom-right: fourth order scheme. $N=800$ (black solid line), $N=400$ (red line with circles), $N=200$ (green line with crosses), $N=100$ (blue line with stars). } \label{fig:stonato} \end{figure} Due to the different orders of magnitude of the numerical entropy production in the smooth regions of the flow and around shocks, it can be concluded that the entropy residual provides an effective discontinuity detector, especially in the case of high order schemes.
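The contrast between the $O(1/h)$ residual at shocks and the $O(h^p)$ residual on smooth flows is easy to reproduce in a toy setting. The sketch below is a deliberately simplified miniature, not the scheme of this paper: a first order local Lax-Friedrichs scheme for Burgers' equation with entropy $\eta(u)=u^2/2$, entropy flux $\psi(u)=u^3/3$ and the matching numerical entropy flux, measuring the peak residual at a single shock.

```python
import numpy as np

def peak_entropy_residual(N, t_end=0.2):
    # First order finite volume scheme for Burgers' equation u_t + (u^2/2)_x = 0
    # on [-1, 1], with Riemann data producing a single shock of speed 1/2.
    dx = 2.0 / N
    x = -1.0 + dx * (np.arange(N) + 0.5)          # cell centers
    u = np.where(x < 0.0, 1.0, 0.0)
    f = lambda v: 0.5 * v**2                      # flux
    eta = lambda v: 0.5 * v**2                    # entropy
    psi = lambda v: v**3 / 3.0                    # exact entropy flux
    t = 0.0
    while t_end - t > 1e-12:
        dt = min(0.4 * dx, t_end - t)             # CFL step with max |u| = 1
        ul = np.concatenate(([1.0], u))           # ghost cells (constant boundary data)
        ur = np.concatenate((u, [0.0]))
        a = np.maximum(np.abs(ul), np.abs(ur))
        F = 0.5 * (f(ul) + f(ur)) - 0.5 * a * (ur - ul)                 # LLF flux
        Q = 0.5 * (psi(ul) + psi(ur)) - 0.5 * a * (eta(ur) - eta(ul))   # matching entropy flux
        unew = u - dt / dx * (F[1:] - F[:-1])
        S = (eta(unew) - eta(u)) / dt + (Q[1:] - Q[:-1]) / dx           # entropy residual
        u, t = unew, t + dt
    return np.abs(S).max()
```

Doubling $N$ roughly doubles the peak residual at the shock, the $1/h$ behaviour observed in Figure \ref{fig:stonato}, while on smooth data the same residual would instead decay like the truncation error.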
\paragraph{Stream on artificial river bed.} In the domain $[-.5,1.5]$ we consider the bottom topography and initial conditions: \begin{eqnarray} \label{eq:SinCanal} & z(x) = \begin{cases} \sin(10\pi x) x(1-x) & x\in[0,1]\\ 0& \text{otherwise} \end{cases} \\ & H(0,x)= \begin{cases} 1.0 & x< -0.2\\ 0.5 & x\geq -0.2 \end{cases} \quad q(0,x)= \begin{cases} \tfrac12\sqrt{\tfrac32 g} & x< -0.2\\ 0.0 & x\geq -0.2 \end{cases} \nonumber \end{eqnarray} We integrate with free flow boundary conditions until $t=0.4$, when the shock originating from the Riemann problem has passed over the irregularity in the bottom topography (see the left panel of Figure \ref{fig:sincanal}). The right panel compares the numerical entropy production of the second order scheme on grids of $200$ to $1600$ points. The peaks in the numerical entropy production clearly show the location of the shocks and have the expected $O(1/h)$ behaviour. \begin{figure} \caption{Stream on artificial river bed. Left: water height. Right: numerical entropy production. $N=1600$ (black solid line), $N=800$ (red line with circles), $N=400$ (green line with crosses) and $N=200$ points (blue line with stars).} \label{fig:sincanal} \end{figure} \begin{figure} \caption{Comparison of the numerical entropy production with two different numerical entropy fluxes.} \label{fig:nument:compareflux} \end{figure} Finally, we wish to illustrate the importance of choosing a numerical entropy flux tailored to the numerical flux used by the scheme, as in \eqref{eq:numentflux}. Figure \ref{fig:nument:compareflux} shows the numerical entropy production on the test \eqref{eq:SinCanal} computed with the numerical entropy flux of \eqref{eq:numentflux} (green line with circles) and with the numerical entropy flux $\Psi(U^-,U^+)=\tfrac12 \left(\psi(U^-)+\psi(U^+)\right)$ (blue line with dots).
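For reference, the bottom topography and initial data \eqref{eq:SinCanal} are straightforward to set up; a minimal sketch (the value $g=9.81$ is an assumption, as the text does not fix it):

```python
import numpy as np

g = 9.81  # gravitational acceleration (assumed value)

def z(x):
    # Artificial river bed: sin(10*pi*x) * x * (1 - x) on [0, 1], flat elsewhere.
    return np.where((x >= 0.0) & (x <= 1.0), np.sin(10 * np.pi * x) * x * (1 - x), 0.0)

def H0(x):
    # Initial free-surface level: a Riemann jump at x = -0.2.
    return np.where(x < -0.2, 1.0, 0.5)

def q0(x):
    # Initial discharge: (1/2) * sqrt((3/2) g) on the left, still water on the right.
    return np.where(x < -0.2, 0.5 * np.sqrt(1.5 * g), 0.0)
```

Evaluating these on a grid of cell centers gives the initial state of the finite volume runs described above.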
Note that the alternative flux considered here is also consistent with the exact entropy flux $\psi$ and therefore provides entropy residuals with the same rate of decay as the local error of the scheme. However, in all cases, it is clear that using the local Lax-Friedrichs flux for both the conservation law and the computation of the numerical entropy flux leads to much smaller positive overshoots in the numerical entropy production and thus to a much more reliable error indicator. \section{Conclusions} In this work we have derived formulas for high order schemes for balance laws on non-uniform grids. These include the extension of the third order compact WENO reconstruction of \cite{LPR01} to non-uniform grids and high order reconstructions to compute the cell average of the source term, needed by high order finite volume schemes for balance laws. Further, we illustrate how well-balancing on equilibrium solutions can be enforced for high order schemes on irregular grids. We also include the extension of the entropy indicator we proposed in \cite{PS:entropy} and \cite{P:entropy} to the case of balance laws. The proofs given in \cite{PS:entropy} carry over to the case of balance laws with geometric source terms, and show that the entropy indicator provides a measure of the local truncation error on smooth flows and reliably selects the location of discontinuities. Several numerical tests are included, to show the achievement of the expected accuracy of the schemes proposed, even on extremely irregular grids, and the improvement obtained with ad-hoc chosen grids. Future work on this topic will be dedicated to the construction of adaptive cartesian grids of octree type, driven by the entropy error indicator, for balance laws, with particular attention to the enforcement of equilibrium solutions at the discrete level. \nocite{*} \end{document}
\begin{document} \title{Numerical scheme for stochastic differential equations driven by fractional Brownian motion with $ 1/4<H <1/2$.} \abstract{In this article, we study a numerical scheme for stochastic differential equations driven by fractional Brownian motion with Hurst parameter $ H \in \left( 1/4, 1/2 \right)$. Towards this end, we apply the Doss-Sussmann representation of the solution and an approximation of this representation using a first order Taylor expansion. The obtained rate of convergence is $n^{-2H +\rho}$, for $\rho$ small enough.} \textbf{ Key words}: Doss-Sussmann representation, fractional Brownian motion, stochastic differential equation, Taylor expansion. \section{Introduction} In this article we are interested in a pathwise approximation of the solution to the stochastic differential equation \begin{equation}\label{eqest} X_{t} = x + \int_{0}^{t} b(X_{s})ds + \int_{0}^{t} \sigma(X_{s}) \circ dB_{s}, \quad t \in [0,T], \end{equation} where $x \in \mathbb{R}$ and $b, \sigma : \mathbb{R} \rightarrow \mathbb{R} $ are measurable functions. The stochastic integral in (\ref{eqest}) is understood in the sense of Stratonovich (see Al{\`o}s et al. \cite{alos1} for details) and $B= \lbrace B_{t} , t \in [0,T] \rbrace$ is a fractional Brownian motion (fBm) with Hurst parameter $H \in (1/4,1/2)$, that is, a centered Gaussian process with covariance \begin{equation}\label{covariance} \mathbb{E}\left( B_{t} B_{s} \right) = {1 \over 2}\left( t^{2H} + s^{2H} - \vert t-s \vert^{2H} \right), \quad s,t \in [0,T]. \end{equation} In \cite{alos1}, the existence and uniqueness of the solution of equation (\ref{eqest}) have been established under suitable conditions, which follow from our assumption (see Hypothesis (H) in Section \ref{doss-sussmann}). Equation (\ref{eqest}) has been analyzed by several authors, for different interpretations of the stochastic integral, because of the properties of the fractional Brownian motion $B$.
Among these properties, we can mention self-similarity, stationary increments, $\rho$-H\"older continuity for any $\rho\in(0,H)$, and the fact that the covariance of its increments on intervals decays asymptotically as a negative power of the distance between the intervals. Therefore, equation (\ref{eqest}) becomes quite useful in applications in different areas such as physics, biology, finance, etc. (see, e.g., \cite{Alos2007, kaj2008, KLUP}). Hence, it is important to provide approximations to the solution of (\ref{eqest}). For $H=1/2$ (i.e., $B$ is a Brownian motion), a large number of numerical schemes to approximate the unique solution of (\ref{eqest}) have been considered in the literature. The reader can consult Kloeden and Platen \cite{klo} (and the references therein) for a complete exposition of this topic. In particular, Talay \cite{Talay} introduces the Doss-Sussmann transformation \cite{doss, sussmann} in the study of numerical methods for the solution of stochastic differential equations (see Section \ref{doss-sussmann} for the definition of this transformation). For $H>1/2$, numerical schemes for equation (\ref{eqest}) have been analyzed by several authors. For instance, we can mention \cite{araya, hu, mish1} and \cite{nourdin}, where the stochastic integral is interpreted as the extension of the Young integral given in \cite{za} and as the forward integral, respectively. It is well-known that these integrals agree with the Stratonovich one under suitable conditions (see Al{\`o}s and Nualart \cite{alos2}). In this paper we are interested in the case $H<1/2$, because numerical schemes for the solution to (\ref{eqest}) have been studied only in some particular situations. Namely, Garz\'on et al. \cite{garzon} use the Doss-Sussmann transformation in order to prove the convergence of the Euler scheme associated to (\ref{eqest}) by means of an approximation of fBm via fractional transport processes.
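Although we use it only as background here, the covariance (\ref{covariance}) already yields a direct (if $O(N^3)$) way to sample a discretized fBm path, via a Cholesky factorization of the covariance matrix; a minimal sketch, with the grid and the value of $H$ chosen arbitrarily for illustration:

```python
import numpy as np

# E[B_t B_s] = 0.5 * (t^{2H} + s^{2H} - |t - s|^{2H}) on a grid of positive times.
H = 0.35
t = np.linspace(0.01, 1.0, 100)          # avoid t = 0, where the variance vanishes
R = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
           - np.abs(t[:, None] - t[None, :])**(2 * H))
L = np.linalg.cholesky(R)                # succeeds because R is positive definite
rng = np.random.default_rng(0)
B = L @ rng.standard_normal(len(t))      # one sample path of fBm on the grid
```

The diagonal of $R$ recovers the variances $t^{2H}$, and each column of noise drawn through $L$ gives an exact (up to discretization) Gaussian sample of the path on the grid.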
In \cite{nourdin}, the authors also take advantage of the Doss-Sussmann transformation in order to discuss the Crank-Nicolson method, for $ H \in \left(1/6 , 1/2\right)$ and $b\equiv0$. Here, they show convergence in law of the error to a random variable, which depends on the solution of the equation and an independent Gaussian random variable. Specifically, the authors state that the rate of convergence of the scheme is of order $n^{1/2-3H}$. In \cite{tindel1} the authors consider the so-called modified Euler scheme for multidimensional stochastic differential equations driven by fBm with $ H \in \left(1/3 , 1/2\right)$. They utilize rough paths techniques in order to obtain the convergence rate of order $n^{1/2-2H}$. Also, they prove that this rate is sharp. In \cite{deya} a numerical scheme for stochastic differential equations driven by a multidimensional fBm with Hurst parameter greater than $1/3$ is introduced. The method is based on a second-order Taylor expansion, where the L\'evy area terms are replaced by products of increments of the driving fBm. Here, the order of convergence is $n^{-(H-\rho)}$, with $\rho \in \left( 1/3 , H \right)$. In order to obtain this rate of convergence, the authors use a combination of rough paths techniques and error bounds for the discretization of the L\'evy area terms. In this work we propose an approximation scheme for the solution to (\ref{eqest}) with $H \in \left( 1/4 , 1/2 \right)$. To do so, we use a first order Taylor expansion in the Doss-Sussmann representation of the solution. We consider the case $H \in \left( 1/4 , 1/2 \right)$ because it is shown in \cite{alos2} that the solution of (\ref{eqest}) is given by this transformation. However, even in the case $H \in \left( 0 , 1/4 \right)$, our scheme tends to the mentioned transformation. The rate of convergence in this paper is $ n^{- 2H + \rho}$, with $\rho < 2H$ small enough, improving the ones given in \cite{nourdin}, \cite{Talay}, \cite{deya} and \cite{tindel1}.
Also, our rate is better than the one obtained in \cite{garzon} when the fBm is not approximated by means of fractional transport processes. We observe that our method only establishes this rate of convergence for $H<1/2$ because we have only been able to verify that the auxiliary inequality (\ref{dify1}) below is satisfied in this case. However, the same construction holds for $H>1/2$ (see \cite{nourdin}, Proposition 1). In this case, the rate of convergence of the scheme is not the same as in the case $1/4 < H < 1/2$. In fact, for $H>1/2$, we only obtain the rate of convergence $n^{-1 + \rho}$, for $\rho$ small enough. The paper is organized as follows: In Section \ref{sec2} we introduce the notation needed in this article. In particular, we explain the Doss-Sussmann-type transformation related to the unique solution to (\ref{eqest}). Also, in this section, the scheme is presented and the main result is stated (Theorem \ref{teo1} below). In Section \ref{sec3}, we establish the auxiliary lemmas, which are needed to prove the main result in Section \ref{sec4}. The proofs of the auxiliary lemmas are presented in Section \ref{sec5}. Finally, in the Appendix (Section \ref{apen}), another auxiliary result is studied, as it is a general result concerning the Taylor expansion of certain continuous functions. \section{Preliminaries and main result} \label{sec2} In this section, we introduce the basic notions and the framework that we use in this paper. That is, we first describe the Doss-Sussmann transformation given in Doss \cite{doss} and Sussmann \cite{sussmann}, which is the link between the stochastic and ordinary differential equations (see Al\`os et al. \cite{alos1}, or Nourdin and Neuenkirch \cite{nourdin}, for the fractional Brownian motion case). Then, we provide a numerical method and its rate of convergence for the unique solution of (\ref{eqest}). This is the main result of this article (see Theorem \ref{teo1}).
\subsection{Doss-Sussmann transformation}\label{doss-sussmann} Henceforth, we consider the stochastic differential equation \begin{equation}\label{eqest2} X_{t} = x + \int_{0}^{t} b(X_{s})ds + \int_{0}^{t} \sigma(X_{s}) \circ dB_{s}, \quad t \in [0,T], \end{equation} where $B=\{B_t:t\in[0,T]\}$ is a fractional Brownian motion with Hurst parameter $1/4 < H < 1/2$, $x \in \mathbb{R}$ and the stochastic integral in (\ref{eqest2}) is understood in the sense of Stratonovich, as introduced in \cite{alos1}. Recall that the covariance of $B$ is given in (\ref{covariance}). The coefficients $b,\sigma:\mathbb{R}\rightarrow\mathbb{R}$ are measurable functions such that \begin{itemize} \item [(H)] $b \in C^{2}_{b}(\mathbb{R})$ and $\sigma \in C^{2}_{b}(\mathbb{R})$. \end{itemize} \begin{rem} \label{cotas} By assumption (H), we have, for $z \in \mathbb{R}$, \begin{itemize} \item $\vert b(z) \vert \leq M_{1} $, $\vert b'(z) \vert \leq M_{4} $ and $\vert b''(z) \vert \leq M_{6} $. \item $\vert \sigma (z) \vert \leq M_{5} $, $\vert \sigma ' (z) \vert \leq M_{2} $ and $\vert \sigma '' (z) \vert \leq M_{3} $. \end{itemize} We explicitly give these constants so that it will be clear where we use them in our analysis. \end{rem} Now, we explain the relation between (\ref{eqest2}) and ordinary differential equations: the so-called Doss-Sussmann transformation. In Al\`os et al. \cite{alos1} (Proposition 6) it is proven that equation (\ref{eqest2}) has a unique solution of the form \begin{equation}\label{sol} X_{t} = \phi \left(Y_{t}, B_{t} \right).
\end{equation} The function $\phi:\mathbb{R}^2\rightarrow\mathbb{R}$ is the solution of the ordinary differential equation \begin{eqnarray}\label{phiori} {\partial \phi \over \partial \beta }(\alpha, \beta) &= &\sigma(\phi(\alpha, \beta)),\quad \alpha, \ \beta \in \mathbb{R},\nonumber\\ \phi(\alpha , 0) &=& \alpha , \end{eqnarray} and the process $Y$ is the pathwise solution to the equation \begin{equation*}\label{y1} Y_{t} = x + \int_{0}^{t} \left( {\partial \phi \over \partial \alpha } (Y_{s}, B_{s}) \right)^{-1} b \left( \phi (Y_{s}, B_{s} ) \right) ds,\quad t\in[0,T]. \end{equation*} By Doss \cite{doss}, we have \begin{equation}\label{eq:phi} {\partial \phi \over \partial \alpha }(\alpha, \beta) = \exp \left( \int_{0}^{\beta} \sigma'(\phi(\alpha, s )) ds \right), \end{equation} which implies \begin{equation}\label{y2} Y_{t} = x + \int_{0}^{t} \exp \left( -\int_{0}^{B_{s}} \sigma'(\phi(Y_{s}, u )) du \right) b \left( \phi (Y_{s}, B_{s} ) \right) ds. \end{equation} \subsection{Numerical Method} In this section, we describe our numerical scheme associated to the unique solution of (\ref{eqest2}). Towards this end, in Section \ref{sec:2.2.1}, we first propose an approximation to the function $\phi$ given in (\ref{eq:phi}), and then, in Section \ref{sec:2.2.2}, we approximate the process $Y$. In both sections we suppose that (H) holds. \subsubsection{Approximation of $\phi$}\label{sec:2.2.1} Note that, for $x\in\mathbb{R}$, equation (\ref{phiori}) has the form \begin{equation}\label{phi} \phi(x,u) = x + \int_{0}^{u} \sigma(\phi(x,s))ds. \end{equation} For each $l\in\mathbb{N}$, we take the partition $\left\lbrace u_{i}^{l} , i\in\{-l,\ldots, l\}\right\rbrace$ of the interval $[- \Vert B\Vert_{\infty}, \Vert B\Vert_{\infty}]$ given by $-\Vert B \Vert_{\infty}=u^{l}_{-l} < \ldots <u^{l}_{-1}< u^{l}_{0}=0< u^{l}_{1}< \ldots < u^{l}_{l} = \Vert B \Vert_{\infty}$.
Here, $ \Vert B \Vert_{\infty} =\sup_{t\in[0,T]}|B_t|$, \begin{equation} u^{l}_{i+1} = u^{l}_{i} + {\Vert B \Vert_{\infty} \over l} = {(i+1)\Vert B \Vert_{\infty} \over l},\quad u^{l}_{-(i+1)} = u^{l}_{-i} - {\Vert B \Vert_{\infty} \over l} = -{(i+1)\Vert B \Vert_{\infty} \over l}. \nonumber \end{equation} Let $x\in\mathbb{R}$ be given in (\ref{y2}). Set \begin{equation}\label{M} M:= \vert x \vert + T\left( M_{1}\exp(M_{2} \Vert B \Vert _{\infty}) + \Vert B\Vert_{H-\rho}C_{3}T^{H-\rho} \right) , \end{equation} where $\rho\in(0,H)$, $\Vert B\Vert_{H-\rho}$ is the $(H-\rho$)-H\"older norm of $B$ on $[0,T]$, $$C_{3}= M_{1}M_{2} \exp \left(M_{2}\Vert B \Vert_{\infty} \right) + M_{4} \exp \left(M_{2} \Vert B \Vert_{\infty} \right) M_{5} \Vert B \Vert_{\infty}\left( 1 +M_2 \right) $$ and $M_i$, $i\in\{1,\ldots,6\}$ are defined in Remark \ref{cotas}. Now, we define the function $\phi^{l} : \mathbb{R}^{2} \rightarrow \mathbb{R}$ by \begin{equation}\label{phi0} \phi^{l}(z,u) = 0 \ \ \ \mbox{if} \ \ \ (z,u) \not\in [-M,M] \times [-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}]; \end{equation} and, for $k=1, \ldots ,l$, \begin{equation}\label{phin1} \phi^{l}(z,u) = \phi^{l}(z,u^{l}_{k-1}) + \int_{u_{k-1}^{l}}^{u} \sigma \left(\phi^{l}(z,u^{l}_{k-1}) + (s-u_{k-1}^{l}) \sigma \left( \phi^{l}(z,u^ {l}_{k-1}) \right) \right) ds, \end{equation} if $z\in[-M,M]$ and $u \in (u^{l}_{k-1} ,u^{l}_{k} ]$, with \begin{equation}\label{phinicial} \phi^{l}(z,u^{l}_{0}) = z,\quad\hbox{ if}\ \ z\in[-M,M]. \end{equation} The definition of $\phi^l$ for the case $k=-l,\ldots, 0$ is similar. That is, \begin{equation}\label{phin1n} \phi^{l}(z,u) = \phi^{l}(z,u^{l}_{k}) -\int^{u_{k}^{l}}_{u} \sigma \left(\phi^{l}(z,u^{l}_{k}) + (s-u_{k}^{l}) \sigma \left( \phi^{l}(z,u^ {l}_{k}) \right) \right) ds, \end{equation} if $z\in[-M,M]$ and $u \in [u^{l}_{k-1} ,u^{l}_{k} )$. 
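The flow equations (\ref{phiori}) and (\ref{eq:phi}) can be checked numerically. The sketch below uses the linear choice $\sigma(y)=y$, which is not in $C^{2}_{b}(\mathbb{R})$ and so does not satisfy Hypothesis (H); it is used here only because the closed forms $\phi(\alpha,\beta)=\alpha e^{\beta}$ and $\partial\phi/\partial\alpha = e^{\beta}$ are available for comparison.

```python
import numpy as np

def phi(alpha, beta, sigma, n=2000):
    # Solve d(phi)/d(beta) = sigma(phi), phi(alpha, 0) = alpha, by RK4 in beta.
    h = beta / n
    y = alpha
    for _ in range(n):
        k1 = sigma(y)
        k2 = sigma(y + 0.5 * h * k1)
        k3 = sigma(y + 0.5 * h * k2)
        k4 = sigma(y + h * k3)
        y += h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

sigma = lambda y: y                      # linear example, sigma'(y) = 1
alpha, beta = 0.7, 1.3
val = phi(alpha, beta, sigma)            # should be close to alpha * exp(beta)
# Finite-difference check of (eq:phi): d(phi)/d(alpha) = exp(beta) for this sigma.
eps = 1e-6
dphi = (phi(alpha + eps, beta, sigma) - phi(alpha - eps, beta, sigma)) / (2 * eps)
```

Both the value of $\phi$ and the derivative identity agree with the closed forms to high accuracy.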
Also, we consider the function $\Psi^l: \mathbb{R}^{2} \rightarrow \mathbb{R}$, which is equal to \begin{equation}\label{psi0} \Psi^{l}(z,u) = 0 \ \ \ \mbox{if} \ \ \ (z,u) \not\in [-M,M] \times [-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}], \end{equation} and, for $k=1, \ldots ,l$, \begin{eqnarray} \Psi^{l}(z,u) &=& \Psi^{l}(z,u^{l}_{k-1}) + \int_{u_{k-1}^{l}}^{u} \left( \sigma \left( \Psi^{l}(z,u^{l}_{k-1})\right) + \sigma' \sigma \left( \Psi^{l}(z,u^{l}_{k-1}) \right) (s-u_{k-1}^{l}) \right) ds \nonumber \\ & = & \Psi^{l}(z,u^{l}_{k-1}) + (u-u_{k-1}^{l}) \left( \sigma \left(\Psi^{l}(z,u^{l}_{k-1})\right) + \sigma' \sigma \left( \Psi^{l}(z,u^{l}_{k-1}) \right) {(u-u_{k-1}^{l}) \over 2 } \right) \nonumber \\ \label{psin} \end{eqnarray} if $z\in[-M,M]$ and $u \in (u^{l}_{k-1} ,u^{l}_{k} ]$, with \begin{equation*}\label{psinicial} \Psi^{l}(z,u^{l}_{0}) = z, \quad\hbox{ if}\ \ z\in[-M,M]. \end{equation*} For $k=-l,\ldots, 0$, $\Psi^{l}$ is introduced as $$ \Psi^{l}(z,u) = \Psi^{l}(z,u^{l}_{k}) - \int^{u_{k}^{l}}_u \left( \sigma \left( \Psi^{l}(z,u^{l}_{k})\right) + \sigma' \sigma \left( \Psi^{l}(z,u^{l}_{k}) \right) (s-u_{k}^{l}) \right) ds , $$ if $z\in[-M,M]$ and $u \in[u^{l}_{k-1} ,u^{l}_{k} ]$. From equation (\ref{psin}) and the last equality, it can be seen that $\Psi^l(z, \cdot)$ is continuous on $[-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}]$. We remark that the function $\phi^l$ given in (\ref{phin1}) and (\ref{phin1n}) is an auxiliary tool that allows us to use Taylor's theorem in the analysis of the numerical scheme proposed in this paper (i.e., in the proof of Theorem \ref{teo1}). Indeed, Taylor's theorem is utilized in Lemma \ref{lemapsi1}. \subsubsection{Approximation of $Y$} \label{sec:2.2.2} Here, we approximate the solution of equation (\ref{y2}).
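The recursion defining $\Psi^{l}$ is explicit and cheap to iterate: each step is a second order Taylor step for the flow of $\sigma$. As an illustration (again with the linear, non-$C^{2}_{b}$ choice $\sigma(y)=y$, so that $\phi(z,u)=ze^{u}$ is available in closed form), the $O(1/l^{2})$ accuracy of $\Psi^{l}$ can be observed numerically:

```python
import numpy as np

def Psi(z, u, l, sigma, dsigma):
    # One-sided version of the recursion: l second-order Taylor steps from 0 to u.
    du = u / l
    y = z
    for _ in range(l):
        y = y + du * (sigma(y) + dsigma(y) * sigma(y) * du / 2.0)
    return y

sigma = lambda y: y
dsigma = lambda y: 1.0
z, u = 0.5, 1.0
exact = z * np.exp(u)                                  # phi(z, u) = z e^u for sigma(y) = y
errs = [abs(Psi(z, u, l, sigma, dsigma) - exact) for l in (50, 100, 200)]
rates = [np.log2(errs[i] / errs[i + 1]) for i in range(2)]   # expected: about 2
```

Doubling $l$ divides the error roughly by four, in agreement with the $1/l^{2}$ bounds established below.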
For $l\in\mathbb{N}$, we define the process $Y^l$ as the solution of the following ordinary differential equation, whose existence and uniqueness are guaranteed since the coefficient $g^l: \mathbb{R}^2 \to \mathbb{R}$ satisfies Lipschitz and linear growth conditions in the second variable (see Remark \ref{cotas} and Lemma \ref{lemapsin}): \begin{eqnarray}\label{yl} Y^{l}_{t} &=& x + \int_{0}^{t} g^{l}\left(B_{s} , Y^{l}_{s}\right) ds, \quad Y^{l}_{0} = x, \end{eqnarray} where \begin{equation}\label{gl1} g^{l}\left(B_{s} , Y^{l}_{s}\right) = \exp \left( -\int_{0}^{B_{s}} \sigma'(\Psi^{l}(Y^{l}_{s}, u )) du \right) b \left( \Psi^{l}(Y^{l}_{s}, B_{s} ) \right). \end{equation} Now, for $m\in\mathbb{N}$, we set the partition $0=t_{0} < \ldots <t_{m} = T$ of $[0,T]$ with $t_{i+1} = t_{i} + {T \over m}$, and we define the process $Y^{l,m}$ by: \begin{small} \begin{eqnarray} Y_{0}^{l,m} &=& x , \nonumber \\ Y_{t}^{l,m} &=& Y_{t^{m}_{k}}^{l,m} + \int_{t_{k}^{m}}^{t} \left[ g^{l}\left( B_{t^{m}_{k}} , Y_{t^{m}_{k}}^{l,m} \right) + h_{1}^{l}\left( B_{t^{m}_{k}} , Y_{t^{m}_{k}}^{l,m} \right) \left( B_{s} - B_{t^{m}_{k}} \right) \right] ds \label{ynn}, \end{eqnarray} \end{small} for $t_{k}^{m} \leq t < t_{k+1}^{m}$, where $h_{1}^{l}(u,z) = {\partial g^{l}(u,z) \over \partial u}$ and $g^{l}$ is given by (\ref{gl1}). Thus \begin{equation}\label{dergl} {\partial g^{l}(u,z) \over \partial u} = - g^{l}(u,z) \sigma'(\Psi^{l}(z,u)) + \exp \left( - \int_{0}^{u} \sigma'( \Psi^{l}(z,r)) dr \right) b'(\Psi^{l}(z,u)) { \partial \Psi^{l}(z,u) \over \partial u} . \end{equation} By Remark \ref{cotas}, we can see that \begin{equation} \left\vert g^{l}(u,z) \right\vert \leq M_{1} \exp \left( M_{2} \vert u\vert \right). \label{cotagl1} \end{equation} Also we have \begin{equation} \vert h_{1}^{l}(B_{t^{m}_{k}},Y^{l,m}_{t^{m}_{k}}) \vert \leq C_{3}, \label{cotah1} \end{equation} where $C_{3} $ is given in (\ref{M}).
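The structure of the step (\ref{ynn}), an Euler step plus a first order correction in the increment of the driver, can be isolated in a simple special case. The sketch below takes $\sigma\equiv 1$ (so that $\Psi^{l}(z,u)=z+u$ is exact, $g(u,z)=b(z+u)$ and $h_{1}(u,z)=b'(z+u)$), replaces the fBm path by a smooth deterministic driver purely to exercise the deterministic ingredients, and compares the scheme with a fine reference solution; all concrete choices ($b=\sin$, the driver, $T=1$) are illustrative assumptions, not part of the paper's setting.

```python
import numpy as np

b, db = np.sin, np.cos
B = lambda s: np.sin(5.0 * s)                                   # stand-in driver (not an fBm path)
intB = lambda s, t: (np.cos(5.0 * s) - np.cos(5.0 * t)) / 5.0   # exact integral of B over [s, t]

def scheme(n, x=0.5, T=1.0):
    # Step (ynn) with sigma == 1:
    # Y_{k+1} = Y_k + dt*b(Y_k + B_k) + b'(Y_k + B_k)*(int B ds - dt*B_k).
    dt = T / n
    y = x
    for k in range(n):
        tk = k * dt
        Bk = B(tk)
        y = y + dt * b(y + Bk) + db(y + Bk) * (intB(tk, tk + dt) - dt * Bk)
    return y

def reference(x=0.5, T=1.0, N=20000):
    # Fine RK4 solution of Y' = b(Y + B(t)), the analogue of (y2) for sigma == 1.
    dt = T / N
    f = lambda t, y: b(y + B(t))
    y = x
    for k in range(N):
        t = k * dt
        k1 = f(t, y); k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2); k4 = f(t + dt, y + dt * k3)
        y += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

ref = reference()
e32 = abs(scheme(32) - ref)
e64 = abs(scheme(64) - ref)   # the error shrinks under refinement of the time grid
```

For the rough drivers of interest in the paper the increment correction is what recovers the rate $n^{-2H+\rho}$; with a smooth driver the sketch merely confirms consistency of the step.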
Moreover, using (\ref{cotagl1}) and (\ref{cotah1}), it is not hard to prove by induction that \begin{equation*} \sup\limits_{t\in[0,T]} \vert Y_{t}^{n,n} \vert \leq \vert x \vert + T\left(M_{1} \exp(M_{2} \Vert B \Vert_{\infty} ) + T^{H-\rho} \Vert B \Vert_{H-\rho} C_{3} \right)=M. \end{equation*} Finally, in a similar way to Garz\'on et al. \cite{garzon}, for $n\in\mathbb{N}$, we define the approximation $X^n$ of $X$ by: \begin{equation}\label{metodo} X_{t}^{n} = \Psi^{n} \left( Y^{n,n}_{t} , B_{t} \right), \end{equation} where $\Psi^{n}$ and $Y_{t}^{n,n}$ are given by (\ref{psin}) and (\ref{ynn}), respectively.\\ Now we are in a position to state our main result.\\ \begin{teo} \label{teo1} Let (H) be satisfied and $1/4< H < 1/2 $. Then $$ \left\vert X_{t} - X_{t}^{n} \right\vert \leq C n^{-2(H-\rho)}, \quad t \in [0,T], $$ where $\rho >0 $ is small enough and $C$ is a constant that does not depend on $n$. \end{teo} \begin{rem} The constant $C$ has the form \begin{eqnarray*} C &=& \exp(2M_{2} \Vert B \Vert_{\infty}) \left[ C_{2} \exp(C_{1}T) + \frac{ M_{2}^{2} M_{5} \Vert B \Vert^{3}_{\infty} }{6} + \frac{ M_{5}^{2} M_{3} \Vert B \Vert^{3}_{\infty} }{6} \right. \\ & + & \left.
C_{6} T \exp(C_{7} T) \right], \end{eqnarray*} with {\scriptsize \begin{eqnarray} C_{1} &=& (M_{4} + M_{1}M_{3} \Vert B \Vert_{\infty} ) \exp( M_2 (\Vert B \Vert_{\infty} + T)), \nonumber \\ C_{2} &=& \exp( M_2 \Vert B \Vert_{\infty}) (M_{4} + M_{1}M_{3}\Vert B \Vert_{\infty} )({M_2^2 M_5 \Vert B \Vert^{3}_{\infty} \over 6} \exp(M_2 \Vert B \Vert_{\infty}) + {M_3 M_5^2 \Vert B \Vert^{3}_{\infty} \over 6} \exp(2M_2 \Vert B \Vert_{\infty}) ), \nonumber \\ C_{3} &=& M_{1}M_{2} \exp \left(M_{2}\Vert B \Vert_{\infty} \right) + M_{4} \exp \left(M_{2} \Vert B \Vert_{\infty} \right) M_{5} \Vert B \Vert_{\infty}\left( 1 +M_2 \right), \nonumber \\ C_4 &=& M_{1} \exp \left( M_{2} \Vert B \Vert_{\infty} \right) + C_3 T^{H - \rho} \Vert B \Vert_{H-\rho}, \nonumber \\ C_{5} &=& \exp (M_{2} \Vert B \Vert_{\infty}) \left[ \Vert B \Vert_{\infty} (1+M_{2}) \left( M_{3} M_{1}M_{5} + M_{2} M_{4} M_{5} + M_{6}M_{5} \Vert B \Vert_{\infty} (1+ M_{2}) \right) \right. \nonumber \\ &+& \left. M_{1}M_{2} + M_{4}M_{5}( 1+ M_{2}) \right], \nonumber \\ C_{6} &=& \left[ C_{4} \exp(3 M_{2} \Vert B \Vert_{\infty} ) [M_{4} + M_{1}M_{3}\Vert B \Vert_{\infty} ]T^{1-2(H - \rho)} + (C_{5} + C_{8}) \Vert B \Vert_{H-\rho} \right], \nonumber \\ C_{7} &=& \exp(3 M_{2} \Vert B \Vert_{\infty} ) [M_{4} + M_{1} M_{3} \Vert B \Vert_{\infty}], \nonumber \\ C_{8} & = & M_{4} M_{2} \exp(M_{2} \Vert B \Vert_{\infty}) \left[ (M_{5} + M_{5}M_{2}) \Vert B \Vert_{\infty} + M_{5}M_{2} \right]. \nonumber \end{eqnarray} } \end{rem} \begin{rem}\label{MR} We choose the constant $M$ because the processes given in (\ref{yl}) and (\ref{ynn}), as well as the solution to (\ref{y2}), are bounded by $M$, as it is pointed out in this section. \end{rem} \section{Preliminary lemmas}\label{sec3} In this section, we state the auxiliary tools that we need in order to prove our main result, Theorem \ref{teo1}. The first four lemmas are related to the a priori estimates of $\phi$.
We recall that the constants $M_{i}, i \in \{1, \ldots, 6 \}$, are introduced in Remark \ref{cotas}. \begin{lem}\label{lemaphi1} Let $\phi$ and $\phi^{l}$ be given by (\ref{phi}) and (\ref{phin1}), respectively. Then, Hypothesis (H) implies that, for $ (z,u) \in [-M,M] \times [-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}]$, we have \begin{equation*}\label{3.1} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert \leq {M_{2}^{2} M_{5} \Vert B\Vert_{\infty}^{3} \over {6 l^2}} \exp \left( M_{2} \Vert B\Vert_{\infty} \right). \end{equation*} \end{lem} \begin{lem}\label{lemapsi1} Let $\phi^{l}$ and $\Psi^{l}$ be given by (\ref{phin1}) and (\ref{psin}), respectively. Then, Hypothesis (H) implies \begin{equation*} \left\vert \phi^{l}(z,u) - \Psi^{l}(z,u) \right\vert \leq {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^{3} \over 6 l^2} \exp \left(2 M_{2} \Vert B \Vert_{\infty} \right), \end{equation*} for $ (z,u) \in [-M,M] \times [-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}]$. \end{lem} \begin{lem}\label{lemapsin} Let $\Psi^{l}$ be introduced in (\ref{psin}) and let Hypothesis (H) hold. Then, for $(z_1, z_2,u) \in [-M,M]^2 \times [-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}] $, \begin{equation*} \left\vert \Psi^{l}(z_{1},u) - \Psi^{l}(z_{2},u) \right\vert \leq \left\vert z_{1} -z_{2} \right\vert \exp(2M_2 \Vert B \Vert_{\infty} ). \end{equation*} \end{lem} \begin{lem}\label{difphin1} Let $\phi^{l}$ be given in (\ref{phin1}). Then, under Hypothesis (H), \begin{equation}\label{difphin1-1} \vert \phi^{l}(z_{1},u) - \phi^{l}(z_{2},u) \vert \leq \vert z_{1}-z_{2} \vert \exp(2M_2 \Vert B \Vert_{\infty} ), \end{equation} for $(z_1, z_2,u) \in [-M,M]^2 \times [-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}] $. \end{lem} Now we proceed to state the lemmas concerning the estimates of $Y - Y^{n}$. \begin{lem}\label{lemayyn} Assume that Hypothesis (H) is satisfied. Let $Y$ and $Y^{n}$ be given in (\ref{y2}) and (\ref{yl}), respectively.
Then, for $t \in [0,T]$, \begin{equation*} \left\vert Y_{t} - Y_{t}^{n} \right\vert \leq \exp (C_1 T) {C_2 \over n^{2}}, \end{equation*} where \begin{equation*} C_{1} = (M_{4} + M_{1}M_{3} \Vert B \Vert_{\infty} ) \exp( M_2 (\Vert B \Vert_{\infty} + T)) \end{equation*} and \begin{eqnarray*} C_{2} &=& \exp( M_2 \Vert B \Vert_{\infty}) (M_{4} + M_{1}M_{3}\Vert B \Vert_{\infty}) \\ & \times & \left({T M_2^2 M_5 \Vert B \Vert_{\infty}^{3} \over 6} \exp(M_2 \Vert B \Vert_{\infty}) + {T M_3 M_5^2 \Vert B \Vert_{\infty}^3 \over 6} \exp(2M_2 \Vert B \Vert_{\infty}) \right). \end{eqnarray*} \end{lem} \begin{lem}\label{difynn} Let $Y^{n,n}$ be defined in (\ref{ynn}). Then Hypothesis (H) implies, for $s \in (t_{k}^{n}, t_{k+1}^{n}]$, \begin{equation*} \vert Y^{n,n}_{s} - Y^{n,n}_{t^{n}_{k}} \vert \leq C_{4} (s-t_{k}^{n}), \end{equation*} where $C_4 = M_{1} \exp \left( M_{2} \Vert B \Vert_{\infty} \right) + C_3 T^{H - \rho} \Vert B \Vert_{H-\rho}$ and \begin{equation} C_{3} = M_{1}M_{2} \exp \left(M_{2}\Vert B \Vert_{\infty} \right) + M_{4} \exp \left(M_{2} \Vert B \Vert_{\infty} \right) M_{5} \Vert B \Vert_{\infty}\left( 1 +M_2 \right). \nonumber \end{equation} \end{lem} \begin{lem}\label{difynynn} Suppose that Hypothesis (H) holds. Let $Y^{n}$ and $Y^{n,n}$ be given in (\ref{yl}) and (\ref{ynn}), respectively. Then, \begin{equation*} \left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert \leq C_{6} T \left({T \over n} \right)^{2(H-\rho)} \exp(C_{7}T), \quad t \in [0,T], \end{equation*} where $0<\rho <H$, \begin{eqnarray*} C_{5} &=& \exp (M_{2} \Vert B \Vert_{\infty}) \left[ \Vert B \Vert_{\infty} (1+M_{2}) \left( M_{3} M_{1}M_{5} + M_{2} M_{4} M_{5} + M_{6}M_{5} \Vert B \Vert_{\infty} (1+ M_{2}) \right) \right. \\ &+& \left.
M_{1}M_{2} + M_{4}M_{5}( 1+ M_{2}) \right], \end{eqnarray*} \begin{equation} C_{6} = \left[ C_{4} \exp(3 M_{2} \Vert B \Vert_{\infty} ) [M_{4} + M_{1}M_{3}\Vert B \Vert_{\infty} ]T^{1-2(H - \rho)} + (C_{5} + C_{8}) \Vert B \Vert_{H-\rho} \right] , \label{c6} \end{equation} with $C_{4}$ given in Lemma \ref{difynn}, and \begin{equation}\label{c7} C_{7} = \exp(3 M_{2} \Vert B \Vert_{\infty} ) [M_{4} + M_{1} M_{3} \Vert B \Vert_{\infty}], \end{equation} \begin{equation*} C_{8} = M_{4} M_{2} \exp(M_{2} \Vert B \Vert_{\infty}) \left[ (M_{5} + M_{5}M_{2}) \Vert B \Vert_{\infty} + M_{5}M_{2} \right]. \end{equation*} \end{lem} \section{Convergence of the Scheme: Proof of Theorem \ref{teo1}}\label{sec4} We are now ready to prove the main result of this article, which gives a theoretical bound on the speed of convergence for $X^n$ defined in (\ref{metodo}). Remember that the constants $M_{i}, i \in \{1, \ldots, 6 \}$, are given in Remark \ref{cotas}. \begin{proof} By (\ref{sol}) and (\ref{metodo}), we have, for $t\in [0,T]$, \begin{equation*} \vert X_{t} - X_{t}^{n} \vert \leq H_{1}(t) + H_{2}(t) + H_{3}(t), \end{equation*} where \begin{small} \begin{eqnarray} H_{1}(t) &=& \vert \phi\left( Y_{t}, B_{t} \right) - \phi^{n} ( Y^{n}_{t}, B_{t} ) \vert \nonumber \\ H_{2}(t) &=& \vert \phi^{n} ( Y^{n}_{t}, B_{t} ) - \phi^{n} ( Y^{n,n}_{t}, B_{t} ) \vert \nonumber \\ \nonumber H_{3}(t) &=& \vert \phi^{n} ( Y^{n,n}_{t}, B_{t} ) - \Psi^{n} ( Y^{n,n}_{t}, B_{t} )\vert. \nonumber \end{eqnarray} \end{small} Now we proceed to obtain estimates of $H_{1}$, $H_{2}$ and $H_{3}$. By property (\ref{eq:phi}), we get \begin{small} \begin{eqnarray*} H_{1}(t) & \leq & \vert \phi\left( Y_{t}, B_{t} \right) - \phi ( Y^{n}_{t}, B_{t} ) \vert + \vert \phi ( Y^{n}_{t}, B_{t} ) - \phi^{n} ( Y^{n}_{t}, B_{t} ) \vert \nonumber \\ & \leq & \exp \left(M_{2} \Vert B \Vert_{\infty} \right) \left\vert Y_{t} - Y_{t}^{n} \right\vert + \vert \phi ( Y^{n}_{t}, B_{t} ) - \phi^{n} ( Y^{n}_{t}, B_{t} ) \vert. 
\nonumber \end{eqnarray*} \end{small} Therefore, by Lemmas \ref{lemaphi1} and \ref{lemayyn}, \begin{equation} H_{1}(t) \leq \exp \left(M_{2} \Vert B \Vert_{\infty} \right) \exp(C_{1}T) {C_{2} \over n^2 }+ {M_{2}^{2} M_{5} \Vert B \Vert_{\infty}^{3} \over 6n^2 } \exp \left( M_{2} \Vert B \Vert_{\infty} \right). \label{h1} \end{equation} Also, Lemmas \ref{difphin1} and \ref{difynynn} yield \begin{small} \begin{eqnarray} H_{2}(t) & \leq & \exp(2M_{2} \Vert B \Vert_{\infty}) \vert Y^{n}_{t} - Y^{n,n}_{t} \vert \nonumber \\ & \leq & \exp(2M_{2} \Vert B \Vert_{\infty}) C_{6} T \left({T \over n} \right)^{2(H-\rho)} \exp(C_{7}T).\label{h2} \end{eqnarray} \end{small} For $H_{3}$ we use Lemma \ref{lemapsi1}, so \begin{equation} H_{3}(t) \leq {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6n^2} \exp(2M_{2}\Vert B \Vert_{\infty}).\label{h3} \end{equation} Finally, from (\ref{h1}) to (\ref{h3}), and since $n^{-2} \leq n^{-2(H-\rho)}$ because $0 < 2(H-\rho) < 2$, we have \begin{equation*} \vert X_{t} - X_{t}^{n} \vert \leq C n^{-2(H -\rho) }, \end{equation*} for a constant $C$ that does not depend on $n$ or $t$, which shows that Theorem \ref{teo1} holds. \end{proof} \section{Proofs of preliminary lemmas}\label{sec5} Here, we provide the proofs of Lemmas \ref{lemaphi1} to \ref{difynynn}. First, we will prove, by induction, that the statements of Lemmas \ref{lemaphi1} to \ref{difphin1} hold for all $k=1,2, \ldots, l$ and $u \in (u_{k-1}^{l} , u^{l}_{k}]$. We will consider for simplicity the case $u>0$; the other case can be treated similarly. \subsection*{Proof of Lemma \ref{lemaphi1}} \begin{proof} Let $z\in[-M,M]$.
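As an aside, the quadratic rate claimed in the lemma below can be observed numerically. The following sketch is not part of the proof: it takes the illustrative choice $\sigma = \sin$ (so that the one-step integral of the scheme has a closed form), implements the piecewise construction of $\phi^{l}$ on a uniform grid, and compares against a fine-grid reference; the function name `phi_l` and all numeric values are assumptions made only for this illustration.

```python
import math

def phi_l(z, B, l):
    """Piecewise scheme for the flow dphi/du = sigma(phi) with sigma = sin,
    on the uniform grid u_k = k*B/l: each step integrates
    sigma(x + (r - u_k) * sigma(x)) exactly over the subinterval."""
    h, x = B / l, z
    for _ in range(l):
        a, b = x, math.sin(x)
        # integral of sin(a + r*b) over r in [0, h] equals (cos a - cos(a + h*b)) / b
        x += (math.cos(a) - math.cos(a + h * b)) / b if b != 0.0 else 0.0
    return x

ref = phi_l(1.0, 1.0, 100000)            # fine-grid stand-in for phi(1, 1)
e_coarse = abs(phi_l(1.0, 1.0, 50) - ref)
e_fine = abs(phi_l(1.0, 1.0, 100) - ref)
assert e_fine < e_coarse < 1e-3          # error shrinks under grid refinement
```

In this example the error shrinks by roughly a factor of four when $l$ doubles, consistent with the $O(l^{-2})$ bound of the lemma.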
We will prove by induction that, for all $k \in \{1,\ldots, l\}$ and $u \in (u_{k-1}^{l} , u^{l}_{k}]$, we have \begin{equation}\label{philema1} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert \leq {M_{2}^{2} M_{5} \Vert B\Vert_{\infty}^{3} \over 6 l^3} \tilde{C}_k, \end{equation} where $\tilde{C}_k = \exp \left( M_{2} (u^{l}_{k} - u^{l}_{0})\right) + \ldots + \exp \left( M_{2} (u^{l}_{k} - u^{l}_{k-1})\right).$ As a consequence we obtain the global bound \begin{equation}\label{philema} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert \leq {M_{2}^{2} M_{5} \Vert B\Vert_{\infty}^{3} \over 6 l^2} \exp \left( M_{2} \Vert B\Vert_{\infty} \right), \end{equation} where $M_2$ and $M_5$ are constants independent of $k$ and they are given in Remark \ref{cotas}. First for $k=1$, let $0=u_{0}^{l} < u \leq u^{l}_{1}$, then (\ref{phi}), (\ref{phin1}), the Lipschitz condition on $\sigma$ (with constant $M_2$) and the fact that $\phi(z,u_0^l) = \phi^l(z,u_0^l)=z$ imply \begin{small} \begin{eqnarray} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert & \leq & M_{2} \int_{u_{0}^{l}}^{u} \left\vert \phi(z,s) - z - (s-u_{0}^{l}) \sigma \left( z \right) \right\vert ds \nonumber \\ & \leq & M_{2} \int_{u_{0}^{l}}^{u}\left\vert \phi(z,s) - \phi^{l}(z,s) \right\vert ds + M_{2}\int_{u_{0}^{l}}^{u} \left\vert \phi^{l}(z,s) -z-(s-u_{0}^{l})\sigma(z)\right\vert ds \nonumber \\ & = & M_{2}\int_{u_{0}^{l}}^{u} \left\vert \phi(z,s) - \phi^{l}(z,s) \right\vert ds + M_{2} \bold{I}_{0}^{l}. \label{i0} \end{eqnarray} \end{small} Next, we bound the term $\bold{I}_{0}^{l} $, \begin{small} \begin{equation*} \bold{I}_{0}^{l}= \int_{u_{0}^{l}}^{u} \left\vert \phi^{l}(z,s) -z-(s-u_{0}^{l})\sigma(z)\right\vert ds = \int_{u_{0}^{l}}^{u} \left\vert \phi^{l}(z,s) -z- \int_{u_0^l}^s \sigma(z)dr \right\vert ds. 
\end{equation*} From (\ref{phin1}), the Lipschitz condition and the bound on $\sigma$, we get \begin{eqnarray} \bold{I}_{0}^{l}& = & \int_{u_{0}^{l}}^{u} \left\vert \int_{u_{0}^{l}}^{s} \sigma \left(z+(r-u_{0}^{l})\sigma(z) \right)dr - \int_{u_{0}^{l}}^{s} \sigma(z) dr \right\vert ds \nonumber \\ & \leq & \int_{u_{0}^{l}}^{u}\int_{u_{0}^{l}}^{s} \left\vert \sigma \left(z + (r-u_{0}^{l})\sigma(z)\right) -\sigma(z)\right\vert dr ds \nonumber \\ & \leq & M_{2}\int_{u_{0}^{l}}^{u}\int_{u_{0}^{l}}^{s}(r-u_{0}^{l})\left\vert \sigma(z) \right\vert dr ds \leq { M_{2} M_{5} \Vert B\Vert_{\infty}^3 \over 6 l^3}. \label{I0} \end{eqnarray} \end{small} Therefore, by (\ref{i0}), (\ref{I0}) and the Gronwall lemma, we obtain \begin{small} \begin{equation*} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert \leq { M_{2}^{2} M_{5} \Vert B\Vert_{\infty}^3 \over 6 l^3} \exp \left( M_{2} (u_1^l - u_0^l)\right), \quad \mbox{for} \ u \in (0 , u_{1}^{l}]. \label{phid0} \end{equation*} \end{small} Now, consider an index $k \in \{1,\ldots, l-1\}$. Our induction assumption is that (\ref{philema1}) is true for $u \in (u_{k-1}^{l},u^{l}_{k}]$. We shall now propagate the induction, that is, prove that the inequality is also true for its successor, $k+1$. We will thus study (\ref{philema1}) for $u \in (u_{k}^{l} ,u^{l}_{k+1}]$.
Following (\ref{phi}), (\ref{phin1}) and our induction hypothesis we establish \begin{small} \begin{eqnarray} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert & \leq & \left\vert \phi(z,u^{l}_{k}) - \phi^{l}(z,u^{l}_{k}) \right\vert \nonumber \\ & + & \int_{u^{l}_{k}}^{u} \left\vert \sigma \left( \phi(z,s) \right) -\sigma \left(\phi^{l}(z,u^{l}_{k}) + (s- u^{l}_{k}) \sigma \left( \phi^{l}(z,u^{l}_{k}) \right) \right) \right\vert ds, \nonumber \\ & \leq & \tilde{C}_k {M_2^2 M_5 \Vert B\Vert_{\infty}^3 \over 6{l}^{3}} + \int_{u^{l}_{k}}^{u} \left\vert \sigma \left( \phi(z,s) \right)-\sigma \left(\phi^{l}(z,u^{l}_{k}) + (s- u^{l}_{k}) \sigma \left( \phi^{l}(z,u^{l}_{k}) \right) \right) \right\vert ds. \nonumber \end{eqnarray} From Lipschitz condition on $\sigma$, \begin{eqnarray} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert & \leq & \tilde{C}_k {M_2^2 M_5 \Vert B \Vert^3_{\infty} \over 6{l}^{3}}+ M_{2} \int_{u^{l}_{k}}^{u}\left\vert \phi(z,s)-\phi^{l}(z,s) \right\vert ds \nonumber \\ & + & M_{2}\int_{u^{l}_{k}}^{u}\left\vert \phi^{l}(z,s)-\phi^{l}(z,u^{l}_{k})- (s- u^{l}_{k}) \sigma\left(\phi^{l}(z,u^{l}_{k})\right)\right\vert ds \nonumber \\ & = & \tilde{C}_k {M_2^2 M_5 \Vert B \Vert^3_{\infty} \over 6{l}^{3}} + M_{2} \int_{u^{l}_{k}}^{u} \left\vert \phi(z,s)-\phi^{l}(z,s) \right\vert ds + M_{2} {\bold I}^{l}_{k}, \label{ik} \end{eqnarray} \end{small} where $\tilde{C}_k = \exp \left( M_{2} (u^{l}_{k} - u^{l}_{0})\right) +\ldots + \exp \left( M_{2} (u^{l}_{k} - u^{l}_{k-1})\right).$ Now, we analyze the term $ {\bold I}^{l}_{k}$, given in equation (\ref{ik}). 
From (\ref{phin1}), the Lipschitz condition and the bound on $\sigma$, we obtain \begin{small} \begin{eqnarray} {\bold I}^{l}_{k} & \leq & \int_{u^{l}_{k}}^{u} \int_{u^{l}_{k}}^{s} \left\vert \sigma \left(\phi^{l}(z,u^{l}_{k}) + (r- u^{l}_{k}) \sigma \left( \phi^{l}(z,u^{l}_{k}) \right) \right) - \sigma \left(\phi^{l}(z,u^{l}_{k}) \right) \right\vert dr ds \nonumber \\ & \leq & M_{2} \int_{u^{l}_{k}}^{u} \int_{u^{l}_{k}}^{s} (r - u^{l}_{k}) \left\vert \sigma \left( \phi^{l}(z,u^{l}_{k}) \right) \right\vert drds \leq {M_2 M_5 \Vert B \Vert^3_{\infty} \over 6{l}^{3}}. \label{IK} \end{eqnarray} \end{small} Therefore, inequalities (\ref{ik}) and (\ref{IK}) yield \begin{small} \begin{eqnarray} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert & \leq & \tilde{C}_k {M_2^2 M_5 \Vert B \Vert^3_{\infty} \over 6{l}^{3}} +{M_2^2 M_5 \Vert B \Vert^3_{\infty} \over 6{l}^{3}} + M_{2} \int_{u^{l}_{k}}^{u} \left\vert\phi(z,s)-\phi^{l}(z,s) \right\vert ds. \nonumber \end{eqnarray} \end{small} Thus, the Gronwall lemma allows us to establish \begin{eqnarray} & & \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert \leq \left(\tilde{C}_k + 1 \right)\left( {M_2^2 M_5 \Vert B \Vert^3_{\infty} \over 6{l}^{3}} \right) \exp(M_2(u^{l}_{k+1} - u^l_k)) \nonumber \\ &=& \left( \exp \left( M_{2} (u^{l}_{k} - u^{l}_{0})\right) +\ldots + \exp \left( M_{2} (u^{l}_{k} - u^{l}_{k-1}) \right) +1 \right) \exp(M_2(u^{l}_{k+1} - u^{l}_k)) \left( {M_2^2 M_5 \Vert B \Vert^3_{\infty} \over 6{l}^{3}} \right) \nonumber \\ &=& \tilde{C}_{k+1} {M_2^2 M_5 \Vert B \Vert^3_{\infty} \over 6{l}^{3}}, \nonumber \end{eqnarray} which shows that (\ref{philema1}) is satisfied for $k+1$. Now, we prove that (\ref{philema}) is true.
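The key algebraic step above is the identity $\left(\tilde{C}_{k}+1\right)\exp\left(M_{2}(u^{l}_{k+1}-u^{l}_{k})\right)=\tilde{C}_{k+1}$, which holds on any grid, together with the elementary bound $\tilde{C}_{k}\leq k\exp(M_{2}\Vert B\Vert_{\infty})$ used next. A quick numerical check of both facts, with illustrative values of $M_{2}$, $\Vert B\Vert_{\infty}$ and $l$ that are not taken from the paper:

```python
import math

M2, B, l = 0.7, 1.3, 40                       # illustrative constants only
u = [j * B / l for j in range(l + 1)]         # uniform grid u_j^l on [0, B]

def C_tilde(k):
    # closed form: sum_{j=0}^{k-1} exp(M2 * (u_k - u_j))
    return sum(math.exp(M2 * (u[k] - u[j])) for j in range(k))

for k in range(1, l):
    lhs = (C_tilde(k) + 1.0) * math.exp(M2 * (u[k + 1] - u[k]))
    assert abs(lhs - C_tilde(k + 1)) < 1e-9   # recursion matches the closed form

assert C_tilde(l) <= l * math.exp(M2 * B)     # each summand is at most exp(M2*B)
```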
For all $(z,u) \in [-M,M] \times [-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}]$, there is some $k \in \lbrace 1,\ldots, l \rbrace$ such that $u^{l}_{k-1} < u \leq u^{l}_{k} $ and by the previous calculations \begin{small} \begin{eqnarray} \left\vert \phi(z,u) - \phi^{l}(z,u) \right\vert & \leq & { M_{2}^{2} M_{5} \Vert B \Vert^3_{\infty} \over 6 l^3} \left[ \exp \left( M_{2} (u^{l}_{k} - u^{l}_{0})\right) +\ldots + \exp \left( M_{2} (u^{l}_{k} - u^{l}_{k-1})\right) \right] \nonumber \\ & \leq & { M_{2}^{2} M_{5} \Vert B \Vert^3_{\infty} \over 6 l^3} k \exp \left( {M_{2} \Vert B \Vert_{\infty}} \right)\nonumber \\ & \leq & { M_{2}^{2} M_{5} \Vert B \Vert^3_{\infty} \over 6 l^2} \exp \left( {M_{2} \Vert B \Vert_{\infty}}\right), \nonumber \end{eqnarray} \end{small} proving the lemma. \end{proof} \subsection*{Proof of Lemma \ref{lemapsi1}} \begin{proof} As in the proof of Lemma \ref{lemaphi1}, we will prove by induction that, for all $k \in \{1,\ldots, l\}$ and $u \in (u_{k-1}^{l} , u^{l}_{k}]$, we have \begin{equation}\label{lemadifl-1} \left\vert \phi^{l}(z,u) -\Psi^{l}(z,u)\right\vert\leq {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^{3} \over 6 l^3} k \left( 1 + A_l \right) ^k, \end{equation} with $A_l = {M_2 \Vert B \Vert_{\infty} \over l} + {1 \over 2} \left({M_2 \Vert B \Vert_{\infty} \over l}\right)^2$. Hence, \begin{equation}\label{lemadifl} \left\vert \phi^{l}(z,u) -\Psi^{l}(z,u)\right\vert\leq {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^{3} \over 6 l^2} \exp(2 M_2 \Vert B \Vert_{\infty}), \end{equation} where $M_2$, $M_3$ and $M_5$ are constants independent of $k$ and are given in Remark \ref{cotas}. We first assume that $k=1$.
If $0=u_{0}^{l} < u \leq u^{l}_{1}$, then equalities (\ref{phin1}) to (\ref{psin}) give \begin{small} \begin{eqnarray*} \left\vert \phi^{l}(z,u) - \Psi^{l}(z,u)\right\vert & \leq & \int_{u_{0}^{l}}^{u} \left\vert \sigma \left[ \phi^{l}\left(z,u^{l}_{0} \right) + (s-u^{l}_{0}) \sigma \left( \phi^{l}\left(z,u^{l}_{0} \right) \right) \right] \right. \nonumber \\ & - & \left. \left[ \sigma \left( \Psi^{l}\left(z,u^{l}_{0} \right) \right) + \sigma \sigma' \left( \Psi^{l}\left(z,u^{l}_{0} \right) \right) (s-u^{l}_{0}) \right] \right\vert ds \nonumber \\ & = & \int_{u_{0}^{l}}^{u} \left\vert \sigma \left( z + (s-u^{l}_{0}) \sigma \left( z \right) \right) - \sigma(z) - \sigma'(z) (s-u^{l}_{0})\sigma(z) \right\vert ds. \end{eqnarray*} \end{small} By Taylor's theorem there exists a point $\theta \in \left( \inf \{z , z + (s-u^{l}_{0}) \sigma (z) \} , \sup \{z , z + (s-u^{l}_{0}) \sigma (z) \} \right)$ such that \begin{small} \begin{eqnarray*} \left\vert \phi^{l}(z,u) - \Psi^{l}(z,u) \right\vert & \leq & \int_{u_{0}^{l}}^{u} { \left\vert \sigma '' ( \theta ) \right\vert \over 2 } (s-u^{l}_{0})^{2} \left\vert \sigma (z) \right\vert^{2} ds \nonumber \\ & \leq & {M_{3}M_{5}^{2} \Vert B \Vert^3_{\infty} \over 6 l^3} \nonumber \\ & \leq & {M_{3}M_{5}^{2} \Vert B \Vert^3_{\infty} \over 6 l^3} \cdot 1 \cdot ( 1 + A_l) ^1. \end{eqnarray*} \end{small} Now, let us consider $k \in \{1,\ldots, l-1\}$. Our induction assumption is that (\ref{lemadifl-1}) is true for $u \in (u_{k-1}^{l},u^{l}_{k}]$. We will thus study (\ref{lemadifl-1}) for $u \in (u_{k}^{l} , u^{l}_{k+1}]$.
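In the induction step that follows, each grid interval multiplies the accumulated error by the factor $1+M_{2}\Delta+\tfrac{1}{2}(M_{2}\Delta)^{2}$ with $\Delta\leq\Vert B\Vert_{\infty}/l$, and the final estimate requires that this factor, compounded $l$ times, stays below $\exp(2M_{2}\Vert B\Vert_{\infty})$ once $l$ is large enough. A quick numerical check with illustrative constants (not taken from the paper):

```python
import math

M2, B = 0.7, 1.3                                   # illustrative constants only
for l in (10, 100, 1000):
    d = B / l                                      # mesh size Delta = ||B||_inf / l
    growth = 1.0 + M2 * d + 0.5 * (M2 * d) ** 2    # per-interval growth factor
    # compounded over l intervals, stays below exp(2 * M2 * ||B||_inf)
    assert growth ** l <= math.exp(2.0 * M2 * B)
```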
Following equations (\ref{phin1}) to (\ref{psin}) and our induction hypothesis, we get \begin{small} \begin{eqnarray} & &\left\vert \phi^{l}(z,u) - \Psi^{l}(z,u)\right\vert \nonumber \\ &\leq & \left\vert \phi^{l}(z,u^{l}_{k}) - \Psi^{l}(z,u^{l}_{k})\right\vert + \int_{u_{k}^{l}}^{u}\left\vert \sigma \left[\phi^{l}\left(z,u^{l}_{k}\right) + (s-u^{l}_{k}) \sigma \left( \phi^{l}\left(z,u^{l}_{k} \right) \right) \right] \right. \nonumber \\ & - & \left. \left[ \sigma \left( \Psi^{l}\left(z,u^{l}_{k} \right) \right) + \sigma' \left( \Psi^{l}\left(z,u^{l}_{k} \right) \right) (s-u^{l}_{k})\sigma \left( \Psi^{l}\left(z,u^{l}_{k} \right) \right) \right] \right\vert ds \nonumber \\ & \leq & {M_{3} M_{5}^{2} \Vert B \Vert^{3}_{\infty} \over 6 l^3} k \left( 1 + A_l \right) ^k \nonumber \\ & + & \int_{u_{k}^{l}}^{u} \left\vert \sigma \left[ \phi^{l}\left(z,u^{l}_{k} \right) + (s-u^{l}_{k}) \sigma \left( \phi^{l}\left(z,u^{l}_{k} \right) \right) \right] - \sigma \left[ \Psi^{l}\left(z,u^{l}_{k} \right) + (s-u^{l}_{k}) \sigma \left( \Psi^{l}\left(z,u^{l}_{k} \right) \right) \right] \right\vert ds \nonumber \\ & + & \int_{u_{k}^{l}}^{u} \left\vert \sigma \left[ \Psi^{l}\left(z,u^{l}_{k} \right) + (s-u^{l}_{k}) \sigma \left( \Psi^{l}\left(z,u^{l}_{k} \right) \right) \right] - \left[ \sigma \left( \Psi^{l}\left(z,u^{l}_{k} \right) \right) + \sigma \sigma' \left( \Psi^{l}\left(z,u^{l}_{k} \right) \right) (s-u^{l}_{k}) \right] \right\vert ds \nonumber \\ &= & {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6 l^3} k \left( 1 + A_l \right) ^k + \bold{J}_1^k +\bold{J}_2^k \nonumber \end{eqnarray} From the Lipschitz condition on $\sigma$, and our induction hypothesis \begin{eqnarray} \bold{J}_1^k & \leq & M_2 \int_{u_{k}^{l}}^{u} \left\vert \phi^{l}\left(z,u^{l}_{k} \right) - \Psi^{l}\left(z,u^{l}_{k} \right) \right\vert ds + M_2^2 \int_{u_{k}^{l}}^{u} (s-u_k^l) \left\vert \phi^{l}\left(z,u^{l}_{k} \right) - \Psi^{l}\left(z,u^{l}_{k} \right) \right\vert ds \nonumber \\ & \leq & {M_2 
M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6 l^3} k \left( 1 + A_l \right) ^k (u - u^{l}_{k}) + {M_2^2 M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 12 l^3} k \left( 1 + A_l\right) ^k (u - u^{l}_{k})^2. \nonumber \end{eqnarray} By Taylor's theorem there exists a point \begin{equation*} \theta_k \in \left( \inf \{ \Psi^{l}(z,u^{l}_{k}), \Psi^{l}(z,u^{l}_{k}) + (s-u^{l}_{k}) \sigma (\Psi^{l}(z,u^{l}_{k})) \} , \sup \{ \Psi^{l}(z,u^{l}_{k}), \Psi^{l}(z,u^{l}_{k}) + (s-u^{l}_{k}) \sigma (\Psi^{l}(z,u^{l}_{k})) \} \right) \end{equation*} such that \begin{eqnarray} \bold{J}_2^k & \leq & \int_{u_{k}^{l}}^{u} {\vert \sigma''(\theta_k) \vert \over 2} \left\vert \sigma \left( \Psi^{l} \left(z,u^{l}_{k} \right) \right) \right\vert ^2 (s-u^{l}_{k})^{2} ds \nonumber \leq { M_3 M_5^2 \over 6} (u - u^{l}_{k} )^3 . \nonumber \end{eqnarray} \end{small} Therefore \begin{eqnarray} & & \left\vert \phi^{l}(z,u) - \Psi^{l}(z,u)\right\vert \leq {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6 l^3} k \left( 1 +A_l \right) ^k + {M_2 M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6 l^3} k \left( 1 + A_l \right) ^k (u - u^{l}_{k}) \nonumber \\ &+& {M_2^2 M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 12 l^3} k \left( 1 + A_l \right) ^k (u - u^{l}_{k})^2 + { M_3 M_5^2 \over 6} (u - u^{l}_{k} )^3. \nonumber \end{eqnarray} Since $(u - u^{l}_{k}) \leq {\Vert B \Vert_{\infty} \over l}$ we obtain \begin{eqnarray} & & \left\vert \phi^{l}(z,u) - \Psi^{l}(z,u)\right\vert \leq {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6 l^3} \left[ k\left(1 + A_l \right)^k + {M_2 \Vert B \Vert_{\infty} \over l} k\left(1 + A_l \right)^k+ {M_2^2 \Vert B \Vert_{\infty}^2 \over 2l^2 }k\left(1 + A_l \right)^k+ 1\right]. 
\nonumber \end{eqnarray} Since $ 1 < \left(1 + A_l \right)^{k+1}$, \begin{eqnarray} & & \left\vert \phi^{l}(z,u) - \Psi^{l}(z,u)\right\vert \leq {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6 l^3} \left[ k\left(1 + A_l \right)^{k+1} + 1\right] \nonumber \\ &\leq & {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6 l^3} \left[ k\left(1 + A_l \right)^{k+1} + \left(1 + A_l \right)^{k+1}\right] = {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^3 \over 6 l^3} (k+1) \left(1 + A_l \right)^{k+1}. \nonumber \end{eqnarray} Thus (\ref{lemadifl-1}) holds for any $k \in \{1, \ldots ,l\}$. Finally, we see that (\ref{lemadifl}) is satisfied. For all $(z,u) \in [-M,M] \times [-\Vert B \Vert_{\infty},\Vert B \Vert_{\infty}]$, there is some $k \in \lbrace 1,\ldots, l \rbrace$ such that $u^{l}_{k-1} < u \leq u^{l}_{k} $ and by (\ref{lemadifl-1}), \begin{small} \begin{eqnarray} \left\vert \phi^{l}(z,u) - \Psi^{l}(z,u) \right\vert & \leq & {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^{3} \over 6 l^3} k \left(1 + A_l \right)^{k} \nonumber \\ & \leq & {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^{3} \over 6 l^2} \left(1 + A_l \right)^{k} \nonumber \\ & \leq & {M_{3} M_{5}^{2} \Vert B \Vert_{\infty}^{3} \over 6 l^2} \exp(2 M_2 \Vert B \Vert_{\infty}). \nonumber \end{eqnarray} \end{small} Thus, the proof is complete. \end{proof} \subsection*{Proof of Lemma \ref{lemapsin}} \begin{proof} We will prove by induction that, for all $k \in \{1,2, \ldots ,l\}$ and $u \in (u_{k-1}^{l} , u^{l}_{k}]$, we have \begin{equation}\label{difpsi1} \vert \Psi^{l}(z_{1},u) - \Psi^{l}(z_{2},u) \vert \leq \vert z_{1} - z_{2} \vert \prod_{j=1}^{k} \left[ 1 + M_{2}(u^{l}_{j}-u_{j-1}^{l}) + [M_{2}^{2} + M_{3}M_{5}]{(u^{l}_{j}-u_{j-1}^{l})^{2} \over 2 }\right]. \end{equation} As a consequence, we obtain the global bound \begin{equation*} \vert \Psi^{l}(z_{1},u) - \Psi^{l}(z_{2},u) \vert \leq \vert z_{1} - z_{2} \vert \exp(2M_{2} \Vert B \Vert_{\infty} ).
\end{equation*} In a similar way as in previous lemmas, if $0=u_{0}^{l} < u \leq u^{l}_{1}$, then by equation (\ref{psin}) and the fact that $\Psi^{l}(z,u_0^l) =z$, we have \begin{equation*} \left\vert \Psi^{l}(z_{1},u) -\Psi^{l}(z_{2},u) \right\vert \leq \left\vert z_{1}-z_{2} \right\vert \left[ 1 + M_2 (u_{1}^{l}-u_{0}^{l}) + (M_2^2 + M_3 M_5) {(u_{1}^{l}-u_{0}^{l})^{2} \over 2} \right]. \end{equation*} Thus, (\ref{difpsi1}) is satisfied for $k=1$. Now, assume that (\ref{difpsi1}) is true for $k$; we will prove that the inequality is also true for its successor, $k+1$. For that, we will study (\ref{difpsi1}) for $u \in (u_{k}^{l} , u^{l}_{k+1}]$. Following (\ref{psin}), the Lipschitz condition and the hypothesis on the second derivative of $\sigma$, we have \begin{eqnarray*} & &\left\vert \Psi^{l}(z_{1},u) - \Psi^{l}(z_{2},u) \right\vert \\ &\leq & \left\vert \Psi^{l}(z_1,u^{l}_{k}) - \Psi^{l}(z_2,u^{l}_{k})\right\vert + M_2 \int_{u_{k}^{l}}^{u}\left\vert \Psi^{l}\left(z_1,u^{l}_{k}\right) -\Psi^{l}\left(z_2,u^{l}_{k}\right) \right\vert ds \\ &+& (M_2^2 + M_3 M_5) \int_{u_{k}^{l}}^{u}\left\vert \Psi^{l}\left(z_1,u^{l}_{k}\right) -\Psi^{l}\left(z_2,u^{l}_{k}\right) \right\vert (s-u^{l}_{k}) ds \\ &=& \left\vert \Psi^{l}(z_1,u^{l}_{k}) - \Psi^{l}(z_2,u^{l}_{k})\right\vert \left( 1+M_2 (u-u^{l}_{k}) + (M_2^2 + M_3 M_5) { (u-u^{l}_{k})^2 \over 2} \right).
\end{eqnarray*} Consequently, from our induction hypothesis, we get \begin{eqnarray*} \left\vert \Psi^{l}(z_{1},u) - \Psi^{l}(z_{2},u) \right\vert & \leq & \left\vert z_{1}-z_{2} \right\vert \prod_{j=1}^{k} \left[ 1 + M_{2}(u^{l}_{j}-u_{j-1}^{l}) + [M_{2}^{2} + M_{3}M_{5}]{(u^{l}_{j}-u_{j-1}^{l})^{2} \over 2 }\right] \\ &\times& \left( 1 + M_{2}(u^{l}_{k+1}-u_{k}^{l}) + [M_{2}^{2} + M_{3}M_{5}]{(u^{l}_{k+1}-u_{k}^{l})^{2} \over 2 }\right) \\ &= & \left\vert z_{1}-z_{2} \right\vert \prod_{j=1}^{k+1} \left( 1 + M_{2}(u^{l}_{j}-u_{j-1}^{l}) + [M_{2}^{2} + M_{3}M_{5}]{(u^{l}_{j}-u_{j-1}^{l})^{2} \over 2 }\right), \end{eqnarray*} which implies that (\ref{difpsi1}) is satisfied. Now, for all $(z,u) \in [-M,M] \times [- \Vert B \Vert_{\infty},\Vert B \Vert_{\infty}]$, there is some $k \in \lbrace 1,\ldots, l \rbrace$ such that $u^{l}_{k-1} < u \leq u^{l}_{k} $ and from (\ref{difpsi1}) \begin{eqnarray*} \left\vert \Psi^{l}(z_{1},u) - \Psi^{l}(z_{2},u) \right\vert & \leq & \left\vert z_{1}-z_{2} \right\vert \prod_{j=1}^{k} \left( 1 + M_{2}(u^{l}_{j}-u_{j-1}^{l}) + [M_{2}^{2} + M_{3}M_{5}]{(u^{l}_{j}-u_{j-1}^{l})^{2} \over 2 }\right) \\ &\leq & \left\vert z_{1}-z_{2} \right\vert \left[ 1 + {2M_{2} \Vert B \Vert_{\infty} \over l} \right] ^{l} \leq \left\vert z_{1}-z_{2} \right\vert \exp (2M_{2} \Vert B \Vert_{\infty} ), \end{eqnarray*} where the last inequality follows from the fact that for $l$ large enough ${M_2 \Vert B \Vert_{\infty} + M_3M_5 \Vert B \Vert_{\infty} /M_2 \over 2 l} < 1$ and $\left( 1 + {2M_2 \Vert B \Vert_{\infty} \over l} \right)^l < \exp(2 M_2 \Vert B \Vert_{\infty})$. Thus the proof is complete.
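The Lipschitz-type bound just proved can also be observed directly on an example. The sketch below is illustrative only: the function name `Psi_l` and all numeric values are assumptions, with $\sigma=\sin$, for which $M_{2}=1$ is a valid Lipschitz constant of $\sigma'$ and $\sigma$; it implements the second-order Taylor step of the scheme and checks the bound of the lemma.

```python
import math

def sigma(x):
    return math.sin(x)

def dsigma(x):
    return math.cos(x)

def Psi_l(z, B, l):
    """Second-order Taylor scheme for dphi/du = sigma(phi), sigma = sin,
    on the uniform grid u_k = k*B/l: x -> x + h*sigma(x) + (h^2/2)*sigma(x)*sigma'(x)."""
    h, x = B / l, z
    for _ in range(l):
        x += h * sigma(x) + 0.5 * h * h * sigma(x) * dsigma(x)
    return x

B, l, M2 = 1.3, 200, 1.0          # illustrative values; M2 = 1 works for sin
z1, z2 = 0.3, 0.31
lhs = abs(Psi_l(z1, B, l) - Psi_l(z2, B, l))
assert lhs <= abs(z1 - z2) * math.exp(2 * M2 * B)   # bound of the lemma
```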
\end{proof} \subsection*{Proof of Lemma \ref{difphin1}} \begin{proof} We will prove by induction that, for all $k \in \{1,\ldots, l\}$ and $u \in (u_{k-1}^{l} , u^{l}_{k}]$, we have \begin{small} \begin{equation}\label{difphi1} \vert \phi^{l}(z_{1},u) - \phi^{l}(z_{2},u) \vert \leq \vert z_{1} - z_{2} \vert \prod_{j=1}^{k} \left[ 1 + M_{2}(u^{l}_{j}-u_{j-1}^{l}) + M_{2}^{2} {(u^{l}_{j}-u_{j-1}^{l})^{2} \over 2 }\right]. \end{equation} \end{small} As a consequence, the global bound \begin{equation*} \vert \phi^{l}(z_{1},u) - \phi^{l}(z_{2},u) \vert \leq \vert z_{1} - z_{2} \vert \exp \left( 2M_{2} \Vert B \Vert_{\infty} \right) \end{equation*} holds. If $0=u_{0}^{l} < u \leq u^{l}_{1}$, then by equations (\ref{phi0}) and (\ref{phin1}) and the fact that $\phi^{l}(z,u_{0}^{l}) =z$, we have \begin{small} \begin{equation*} \vert \phi^{l}(z_{1},u) - \phi^{l}(z_{2},u) \vert \leq \vert z_{1} - z_{2} \vert \left[ 1 + M_{2}(u^{l}_{1}-u_{0}^{l}) + M_{2}^{2} {(u^{l}_{1}-u_{0}^{l})^{2} \over 2 }\right]. \end{equation*} \end{small} Therefore, (\ref{difphi1}) is satisfied for $k=1$. Now, assume that (\ref{difphi1}) is true up to $k$; it remains to prove that the inequality is also true for its successor, $k+1$. For that, we choose $u \in (u_{k}^{l} , u^{l}_{k+1}]$. Using (\ref{phin1}) and the Lipschitz condition on $\sigma$ again, we can write \begin{eqnarray*} & &\left\vert \phi^{l}(z_{1},u) -\phi^{l}(z_{2},u) \right\vert \\ &\leq & \left\vert \phi^{l}(z_1,u^{l}_{k}) - \phi^{l}(z_2,u^{l}_{k})\right\vert + M_2 \int_{u_{k}^{l}}^{u}\left\vert \phi^{l}\left(z_1,u^{l}_{k}\right) -\phi^{l}\left(z_2,u^{l}_{k}\right) \right\vert ds \\ &+& M_2^2 \int_{u_{k}^{l}}^{u}\left\vert \phi^{l}\left(z_1,u^{l}_{k}\right) -\phi^{l}\left(z_2,u^{l}_{k}\right) \right\vert (s-u^{l}_{k}) ds \\ &=& \left\vert \phi^{l}(z_1,u^{l}_{k}) - \phi^{l}(z_2,u^{l}_{k})\right\vert \left( 1+M_2 (u-u^{l}_{k}) + M_2^2 { (u-u^{l}_{k})^2 \over 2} \right).
\end{eqnarray*} Our induction hypothesis leads us to \begin{eqnarray*} \vert \phi^{l}(z_{1},u) - \phi^{l}(z_{2},u) \vert & \leq & \left\vert z_{1}-z_{2} \right\vert \prod_{j=1}^{k} \left[ 1 + M_{2}(u^{l}_{j}-u_{j-1}^{l}) + M_{2}^{2}{(u^{l}_{j}-u_{j-1}^{l})^{2} \over 2 }\right] \\ &\times& \left[ 1 + M_{2}(u^{l}_{k+1}-u_{k}^{l}) + M_{2}^{2} {(u^{l}_{k+1}-u_{k}^{l})^{2} \over 2 }\right] \\ &= & \left\vert z_{1}-z_{2} \right\vert \prod_{j=1}^{k+1} \left[ 1 + M_{2}(u^{l}_{j}-u_{j-1}^{l}) + M_{2}^{2} {(u^{l}_{j}-u_{j-1}^{l})^{2} \over 2 }\right]. \end{eqnarray*} Therefore, (\ref{difphi1}) holds for any $k \in \{ 1, \ldots , l \}$. Now, for all $u \in [- \Vert B \Vert_{\infty} ,\Vert B \Vert_{\infty}]$, there is some $k \in \lbrace 1,\ldots, l \rbrace$ such that $u^{l}_{k-1} < u \leq u^{l}_{k} $ and, by (\ref{difphi1}), \begin{eqnarray*} \left\vert \phi^{l}(z_{1},u) - \phi^{l}(z_{2},u) \right\vert & \leq & \left\vert z_{1}-z_{2} \right\vert \prod_{j=1}^{k} \left( 1 + M_{2}(u^{l}_{j}-u_{j-1}^{l}) + M_{2}^{2}{(u^{l}_{j}-u_{j-1}^{l})^{2} \over 2 }\right) \\ &\leq & \left\vert z_{1}-z_{2} \right\vert \left[ 1 + {2M_{2}\Vert B \Vert_{\infty} \over l} \right] ^{l} \leq \left\vert z_{1}-z_{2} \right\vert \exp (2M_{2}\Vert B \Vert_{\infty}), \end{eqnarray*} where the last inequality follows from the fact that for $l$ large enough ${M_2 \Vert B \Vert_{\infty} \over 2l} < 1$ and $\left( 1 + {2M_2 \Vert B \Vert_{\infty} \over l} \right)^l < \exp(2 M_2 \Vert B \Vert_{\infty})$. Therefore (\ref{difphin1-1}) is satisfied and the proof is complete. \end{proof} \subsection*{Proof of Lemma \ref{lemayyn}} \begin{proof} By equations (\ref{y2}) and (\ref{yl}), we have, for $t \in [0,T]$, \begin{small} \begin{eqnarray*} \left\vert Y_{t} - Y_{t}^{n} \right\vert & \leq & \int_{0}^{t} \left\vert \exp \left( -\int_{0}^{B_{s}} \sigma' \left( \phi(Y_{s} ,u) \right) du \right) b \left( \phi(Y_{s} ,B_{s}) \right) \right. \nonumber \\ & - & \left.
\exp \left( -\int_{0}^{B_{s}} \sigma' ( \Psi^{n}(Y^{n}_{s} ,u) ) du \right) b \left( \Psi^{n}(Y^{n}_{s} ,B_{s}) \right) \right\vert ds \nonumber \\ & \leq & \bold{K}_{1} + \bold{K}_{2}, \end{eqnarray*} \end{small} with \begin{small} \begin{equation} \bold{K}_{1} = \int_{0}^{t} \left\vert \exp \left( -\int_{0}^{B_{s}} \sigma' \left( \phi(Y_{s} ,u) \right) du \right) \right\vert \left\vert b \left( \phi(Y_{s} ,B_{s}) \right) - b \left( \Psi^{n}(Y^{n}_{s} ,B_{s}) \right) \right\vert ds, \nonumber \end{equation} \end{small} and \begin{small} \begin{equation} \bold{K}_{2} = \int_{0}^{t} \left\vert \exp \left( -\int_{0}^{B_{s}} \sigma' \left( \phi(Y_{s} ,u) \right) du \right) - \exp \left( -\int_{0}^{B_{s}} \sigma' \left( \Psi^{n}(Y^{n}_{s} ,u) \right) du \right)\right\vert \left\vert b \left( \Psi^{n}(Y^{n}_{s} ,B_{s}) \right) \right\vert ds. \nonumber \end{equation} \end{small} Therefore by (\ref{eq:phi}), the Lipschitz properties on $b$ and $\sigma$, and Lemmas \ref{lemaphi1} and \ref{lemapsi1} we obtain \begin{small} \begin{eqnarray} &&\bold{K}_{1} \leq M_{4} \exp \left( M_2\Vert B \Vert_{\infty} \right) \int_{0}^{t} \vert \phi(Y_{s} , B_{s}) - \Psi^{n}(Y^{n}_{s},B_{s}) \vert ds \nonumber \\ & \leq & M_{4} \exp \left( M_2 \Vert B \Vert_{\infty} \right) \left( \int_{0}^{t} \vert \phi(Y_{s} , B_{s}) - \phi(Y^{n}_{s},B_{s}) \vert ds + \int_{0}^{t} \vert \phi(Y^{n}_{s} , B_{s}) - \phi^{n}(Y^{n}_{s},B_{s}) \vert ds \right. \nonumber \\ & + & \left. \int_{0}^{t} \vert \phi^{n}(Y^{n}_{s} , B_{s}) - \Psi^{n}(Y^{n}_{s},B_{s}) \vert ds \right)\nonumber \\ &\leq & M_{4} \exp \left( M_2\Vert B \Vert_{\infty} \right) \left( \int_{0}^{t} \exp(M_2 \Vert B \Vert_{\infty}) \vert Y_{s} - Y^{n}_{s}|ds \right. \nonumber \\ && \hspace{3cm} +\left. {T M_2^2 M_5 \Vert B \Vert^{3}_{\infty} \over 6n^2} \exp(M_2 \Vert B \Vert_{\infty}) + {T M_3 M_5^2 \Vert B \Vert^{3}_{\infty} \over 6n^2} \exp(2M_2 \Vert B \Vert_{\infty}) \right). 
\nonumber \end{eqnarray} \end{small} Now, by the mean value theorem, we get \begin{eqnarray*} \bold{K}_2 & \leq & M_{1}M_{3} \exp \left( M_{2} \Vert B \Vert_{\infty} \right)\int_{0}^{t} \int_{0}^{\vert B_{s} \vert } \vert \phi(Y_{s},u) - \Psi^{n}(Y^{n}_{s},u) \vert du ds. \end{eqnarray*} Hence, proceeding as in $\bold{K}_{1}$, we obtain \begin{small} \begin{eqnarray*} \bold{K}_{2} & \leq & M_{1}M_{3} \exp \left( M_{2} \Vert B \Vert_{\infty} \right) \Vert B \Vert_{\infty} \left( \int_{0}^{t} \exp(M_2 \Vert B \Vert _\infty ) \vert Y_{s} - Y^{n}_{s} \vert ds \right.\nonumber \\ & +& \left. {T M_2^2 M_5 \Vert B \Vert_{\infty}^{3} \over 6n^2} \exp(M_2 \Vert B \Vert_{\infty}) + {TM_3 M_5^2 \Vert B \Vert_{\infty}^{3} \over 6n^2} \exp(2M_2 \Vert B \Vert_{\infty}) \right). \end{eqnarray*} \end{small} Taking into account the inequalities for $\bold{K}_{1}$ and $\bold{K}_{2}$, we have \begin{eqnarray*} \left\vert Y_{t} - Y_{t}^{n} \right\vert & \leq & C_{1} \int_{0}^{t} \vert Y_{s}- Y^{n}_{s} \vert ds + {C_{2} \over n^{2}}, \end{eqnarray*} where \begin{equation*} C_{1} = (M_{4} + M_{1}M_{3} \Vert B \Vert_{\infty} ) \exp(2 M_2 \Vert B \Vert_{\infty}) \end{equation*} and \begin{eqnarray*} C_{2} &=& \exp( M_2 \Vert B \Vert_{\infty}) (M_{4} + M_{1}M_{3}\Vert B \Vert_{\infty}) \\ & \times & \left({T M_2^2 M_5 \Vert B \Vert_{\infty}^{3} \over 6} \exp(M_2 \Vert B \Vert_{\infty}) + {T M_3 M_5^2 \Vert B \Vert_{\infty}^3 \over 6} \exp(2M_2 \Vert B \Vert_{\infty}) \right). \end{eqnarray*} Finally, the desired result is achieved by direct application of the Gronwall lemma. \end{proof} \subsection*{Proof of Lemma \ref{difynn}} \begin{proof} Recall that $h_{1}^{n}(z,u) = {\partial g^{n}(z,u) \over \partial z}$.
Then, by equations (\ref{yl}) to (\ref{ynn}), we obtain \begin{small} \begin{eqnarray*} \vert Y^{n,n}_{s} - Y^{n,n}_{t^{n}_{k}} \vert & \leq & \int_{t_{k}^{n}}^{s} \left\vert g^{n}\left( B_{t^{n}_{k}} , Y_{t^{n}_{k}}^{n,n} \right) + h_{1}^{n}\left( B_{t^{n}_{k}} , Y_{t^{n}_{k}}^{n,n} \right) \left( B_{u} - B_{t^{n}_{k}} \right) \right\vert du \nonumber \\ & \leq & M_{1} \exp \left( M_{2} \Vert B \Vert_{\infty} \right) (s-t_{k}^{n}) + \left\vert h_{1}^{n}\left( B_{t^{n}_{k}} , Y_{t^{n}_{k}}^{n,n} \right) \right\vert \int_{t_{k}^{n}}^{s} \left\vert B_{u} - B_{t^{n}_{k}} \right\vert du \nonumber \\ & \leq & M_{1} \exp \left( M_{2} \Vert B \Vert_{\infty} \right) (s-t_{k}^{n}) + \Vert B \Vert_{H- \rho} C_{3}(s-t_{k}^{n})^{1+H-\rho} \nonumber \\ & \leq & C_{4}(s-t_{k}^{n}), \end{eqnarray*} \end{small} where \begin{small} \begin{equation*} \left\vert h_{1}^{n}\left( B_{t^{n}_{k}} , Y_{t^{n}_{k}}^{n,n} \right) \right\vert \leq M_{1}M_{2} \exp \left(M_{2}\Vert B \Vert_{\infty} \right) + M_{4} \exp \left(M_{2} \Vert B \Vert_{\infty} \right) M_{5} \Vert B \Vert_{\infty}\left( 1 +M_2 \right) = C_{3}, \end{equation*} and \begin{equation*}\label{C4} C_4 = M_{1} \exp \left( M_{2} \Vert B \Vert_{\infty} \right) + C_3 T^{H - \rho} \Vert B \Vert_{H-\rho}. \end{equation*} \end{small} The specific computation of the bound of the term $h_{1}^{n}(z,u)$ is given in the Appendix (Section \ref{apen}). \end{proof} \subsection*{Proof of Lemma \ref{difynynn}} \begin{proof} Let $n \in \mathbb{N}$ be fixed. We will prove Lemma \ref{difynynn} by induction on $k$ again. That is, for every $k \in \{1, \ldots,n \}$ and $t \in (t_{k-1}^{n} , t_{k}^{n}]$, we have \begin{equation}\label{dify1} \left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert \leq C_{6} \sum_{j=1}^{k} (t_{j}^n -t_{j-1}^n)^{1+2(H-\rho)} \exp(C_{7}(t^n_{k} - t^n_{j-1} )), \end{equation} where $0<\rho<H$.
As a consequence, for all $k \in \{1, \ldots ,n\}$ we obtain the global bound \begin{equation*} \left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert \leq { C_{6} T^{1+2(H-\rho)} \exp(C_{7}T) \over n^{2(H-\rho)} }, \end{equation*} where $C_{6}$ and $C_{7}$ are given in (\ref{c6}) and (\ref{c7}), respectively. First, for $k=1$ and $t \in (t_{0}^{n}, t_{1}^{n}]$, equations (\ref{yl}) to (\ref{ynn}) imply \begin{small} \begin{eqnarray} \left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert & \leq & \int_{t_{0}^{n}}^{t} \left\vert g^{n}(B_{s}, Y^{n}_{s}) - \left[ g^{n}(B_{t_{0}^{n}},x) + h_{1}^{n}(B_{t_{0}^{n}},x)(B_{s} - B_{t_{0}^{n}}) \right] \right\vert ds \nonumber \\ & \leq & \bold{F}_{1} + \bold{F}_{2} + \bold{F}_{3}, \nonumber \end{eqnarray} \end{small} where \begin{eqnarray} \bold{F}_{1} &=& \int_{t_{0}^{n}}^{t} \left\vert g^{n}(B_{s}, Y^{n}_{s}) - g^{n}(B_{s}, Y^{n,n}_{s}) \right\vert ds\nonumber \\ \bold{F}_{2} &=& \int_{t_{0}^{n}}^{t} \left\vert g^{n}(B_{s},Y^{n,n}_{s}) - g^{n}(B_{s},Y^{n,n}_{t_{0}^{n}}) \right\vert ds \quad \mbox{and}\nonumber \\ \bold{F}_{3} &=& \int_{t_{0}^{n}}^{t} \left\vert g^{n}(B_{s}, Y^{n,n}_{t_{0}^{n}}) - \left[ g^{n}(B_{t_{0}^{n}},Y^{n,n}_{t_{0}^{n}}) + h_{1}^{n}(B_{t_{0}^{n}},Y^{n,n}_{t_{0}^{n}})(B_{s} - B_{t_{0}^{n}}) \right] \right\vert ds. \nonumber \end{eqnarray} Equality (\ref{gl1}) and the triangle inequality allow us to write \begin{small} \begin{eqnarray*} \bold{F}_{1} &=& \int_{t_{0}^{n}}^{t} \left\vert \exp \left(-\int_{0}^{B_{s}} \sigma{'}(\Psi^{n}(Y_{s}^{n}, r)) dr \right) b(\Psi^{n}(Y_{s}^{n}, B_s)) \right.\\ &-& \left.
\exp \left(-\int_{0}^{B_{s}} \sigma{'}(\Psi^{n}(Y_{s}^{n,n}, r)) dr \right) b(\Psi^{n}(Y_{s}^{n,n}, B_s)) \right\vert ds \\ & \leq & \int_{t_{0}^{n}}^{t} \left\vert \exp \left(-\int_{0}^{B_{s}} \sigma{'}(\Psi^{n}(Y_{s}^{n}, r))dr \right) \right\vert \left\vert b(\Psi^{n}(Y_{s}^{n}, B_s)) - b(\Psi^{n}(Y_{s}^{n,n}, B_s)) \right\vert ds \\ &+& \int_{t_{0}^{n}}^{t} \left\vert \exp \left(-\int_{0}^{B_{s}} \sigma{'}(\Psi^{n}(Y_{s}^{n}, r)) dr \right) -\exp \left(-\int_{0}^{B_{s}} \sigma{'}(\Psi^{n}(Y_{s}^{n,n}, r)) dr \right) \right\vert \\ & \times & \left\vert b(\Psi^{n}(Y_{s}^{n,n}, B_s)) \right\vert ds. \end{eqnarray*} \end{small} Therefore, the Lipschitz property on $b$ and the mean value theorem yield \begin{small} \begin{eqnarray*} \bold{F}_{1} & \leq & M_{4}\exp(M_{2} \Vert B \Vert_{\infty}) \int_{t_{0}^{n}}^{t} \left\vert \Psi^{n}(Y_{s}^{n}, B_s) - \Psi^{n}(Y_{s}^{n,n}, B_s) \right\vert ds, \\ &+& M_{1}M_{3} \exp \left( M_{2} \Vert B \Vert_{\infty} \right)\int_{0}^{t} \int_{0}^{\vert B_{s} \vert } \vert \Psi^{n}(Y_{s}^{n}, r) - \Psi^{n}(Y_{s}^{n,n}, r) \vert dr ds. 
\end{eqnarray*} \end{small} Consequently, Lemma \ref{lemapsin} leads us to \begin{small} \begin{eqnarray*} \bold{F}_{1} & \leq & M_{4}\exp(3M_{2} \Vert B \Vert_{\infty}) \int_{t_{0}^{n}}^{t} \left\vert Y^{n}_{s} - Y^{n,n}_{s} \right\vert ds \nonumber \\ &+& M_{1}M_3\exp(3M_{2} \Vert B \Vert_{\infty} ) \Vert B \Vert_{\infty} \int_{t_{0}^{n}}^{t} \left\vert Y^{n}_{s} - Y^{n,n}_{s} \right\vert ds\\ &=& \exp(3M_{2} \Vert B \Vert_{\infty}) \left[ M_{4} + M_{1}M_3 \Vert B \Vert_{\infty} \right] \int_{t_{0}^{n}}^{t} \left\vert Y^{n}_{s} - Y^{n,n}_{s} \right\vert ds. \end{eqnarray*} \end{small} Proceeding similarly as in $\bold{F}_{1}$, \begin{small} \begin{eqnarray*} \bold{F}_{2} & \leq & M_{4}\exp(M_{2} \Vert B \Vert_{\infty}) \int_{t_{0}^{n}}^{t} \left\vert \Psi^{n}(Y_{s}^{n,n}, B_s) - \Psi^{n}(Y^{n,n}_{t_{0}^{n}}, B_s) \right\vert ds \\ &+& M_{1}M_{3} \exp \left( M_{2} \Vert B \Vert_{\infty} \right)\int_{t_{0}^{n}}^{t} \int_{0}^{\vert B_{s} \vert } \vert \Psi^{n}(Y_{s}^{n,n}, r) - \Psi^{n}(Y^{n,n}_{t_{0}^{n}}, r) \vert dr ds \\ &\leq & \exp(3M_{2} \Vert B \Vert_{\infty}) \left[ M_{4} + M_{1}M_3 \Vert B \Vert_{\infty} \right]\int_{t_{0}^{n}}^{t} \vert Y_{s}^{n,n}-Y^{n,n}_{t_{0}^{n}} \vert ds. \end{eqnarray*} \end{small} Hence, using Lemma \ref{difynn}, we can establish $$\bold{F}_{2} \leq C_{4} \exp(3M_{2} \Vert B \Vert_{\infty}) \left[ M_{4} + M_{1}M_3 \Vert B \Vert_{\infty} \right] {(t-t_{0}^{n})^2 \over 2}.$$ Now, we deal with $\bold{F}_{3} $.
From Lemma \ref{derivadagl} (Section \ref{apen}), \begin{eqnarray}\label{eq:F3delta} \bold{F}_{3} & \leq & C_{5} \int_{t_{0}^{n}}^{t} (B_{s} - B_{t_{0}^{n}})^{2} ds + \int_{t_{0}^{n}}^{t} \left\vert \sum_{j=1}^{n} 1_{ \{B_{s} \in (u_{j-1}, u_{j}] \}} \sum_{k=1}^{j} (B_{s} - u_{k}^{n}) \Delta_{j+k} g^n{'} \right\vert ds \nonumber \\ & + & \int_{t_{0}^{n}}^{t} \left\vert \sum_{j=-n+1}^{0} 1_{ \{B_{s} \in (u_{j-1}, u_{j}] \}} \sum_{k=-j}^{0} (B_{s} - u_{k}^{n}) \Delta_{j+k} g^n{'} \right\vert ds, \end{eqnarray} where \begin{equation*} \Delta_{j+k} g^n{'} = {\partial g^{n}(u_{j+k}+, Y_{t_{0}^{n}}^{n,n}) \over \partial u} - {\partial g^{n}(u_{j+k}-, Y_{t_{0}^{n}}^{n,n}) \over \partial u} \end{equation*} and \begin{eqnarray*} C_{5} &=& \exp (M_{2} \Vert B \Vert_{\infty}) \left[ \Vert B \Vert_{\infty} (1+M_{2}) \left( M_{3} M_{1}M_{5} + M_{2} M_{4} M_{5} + M_{6}M_{5} \Vert B \Vert_{\infty} (1+ M_{2}) \right) \right. \\ &+& \left. M_{1}M_{2} + M_{4}M_{5}( 1+ M_{2}) \right]. \end{eqnarray*} Note that (\ref{dergl}) implies \begin{eqnarray*} \vert \Delta_{j+k} g^n{'} \vert &\leq& M_{4} \exp(M_{2} \Vert B \Vert_{\infty}) \left\vert \frac{\partial \Psi^{n}(Y_{t_{0}^{n}}^{n,n}, u_{j+k}+)}{\partial u} - \frac{\partial \Psi^{n}(Y_{t_{0}^{n}}^{n,n}, u_{j+k}-)}{\partial u} \right\vert \nonumber \\ & = & M_{4} \exp(M_{2} \Vert B \Vert_{\infty}) \left\vert \sigma \left( \Psi^{n}(Y_{t_{0}^{n}}^{n,n},u_{j+k} ) \right) - \sigma \left( \Psi^{n}(Y_{t_{0}^{n}}^{n,n},u_{j+k-1} ) \right) \right. \nonumber \\ &-& \left. \sigma \sigma' \left( \Psi^{n}(Y_{t_{0}^{n}}^{n,n},u_{j+k-1} ) \right) \left(u_{j+k} - u_{j+k-1} \right) \right\vert \nonumber \\ & \le & M_{4} M_{2} \exp(M_{2} \Vert B \Vert_{\infty}) \left[ \left\vert \Psi^{n}(Y_{t_{0}^{n}}^{n,n},u_{j+k} ) - \Psi^{n}(Y_{t_{0}^{n}}^{n,n},u_{j+k-1} ) \right\vert \right. \nonumber \\ & + & \left. 
M_{5}M_{2} (u_{j+k}-u_{j+k-1}) \right] \nonumber \\ & \le & M_{4} M_{2} \exp(M_{2} \Vert B \Vert_{\infty}) \left[ (M_{5} + M_{5}M_{2}) {(u_{j} -u_{j-1})^{2} \over 2 } \right. \nonumber \\ & + & \left. M_{5}M_{2} (u_{j+k}-u_{j+k-1}) \right] \nonumber \\ & \le & M_{4} M_{2} \exp(M_{2} \Vert B \Vert_{\infty}) (u_{j} -u_{j-1}) \left[ (M_{5} + M_{5}M_{2}) \Vert B \Vert_{\infty} \right. \nonumber \\ & + & \left. M_{5}M_{2} \right] \nonumber \\ &=& C_{8} (u_{j} -u_{j-1}). \end{eqnarray*} Hence, (\ref{eq:F3delta}) implies \begin{equation*} \bold{F}_{3} \leq ( C_{5} +C_{8} ) \int_{t_{0}^{n}}^{t} (B_{s} - B_{t_{0}^{n}})^{2} ds \leq ( C_{5} +C_{8} ) \Vert B \Vert_{H-\rho} \frac{(t - t_{0}^{n})^{1+2(H-\rho)}}{2}. \end{equation*} Since $H < 1/2$, the previous estimates of $\bold{F}_{1}$, $\bold{F}_{2}$ and $\bold{F}_{3}$ give \begin{eqnarray*} & &\left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert \leq \nonumber \\ & & \exp(3 M_{2} \Vert B \Vert_{\infty} ) [M_{4} + M_{1} M_{3} \Vert B \Vert_{\infty}] \int_{t_{0}^{n}}^{t} \left\vert Y^{n}_{s} - Y^{n,n}_{s} \right\vert ds \\ & &+ \left[ C_{4} \exp(3 M_{2} \Vert B \Vert_{\infty} ) [M_{4} + M_{1}M_{3}\Vert B \Vert_{\infty} ]T^{1-2(H - \rho)} \right. \\ && \left. + (C_{5} + C_{8}) \Vert B \Vert_{H-\rho} \right] (t-t_{0}^{n})^{1 + 2(H-\rho)} \\ & & = C_{6}(t-t_{0}^{n})^{1+2(H-\rho)} + C_{7} \int_{t_{0}^{n}}^{t} \left\vert Y^{n}_{s} - Y^{n,n}_{s} \right\vert ds. \end{eqnarray*} Then by the Gronwall lemma and $t \in (t_{0}^{n} , t_{1}^{n}]$, we conclude \begin{equation*} \left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert \leq C_{6} \exp(C_{7}(t^{n}_{1}-t_{0}^{n}) ) (t^{n}_{1}-t_{0}^{n})^{1+2(H-\rho)}. \end{equation*} Now we show that (\ref{dify1}) is true for $k+1$ if it holds for $k$. So we choose $t \in (t_{k}^{n}, t_{k+1}^{n}]$.
Towards this end, we proceed as in the case $k=1$: \begin{small} \begin{eqnarray*} \left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert & \leq & \left\vert Y^{n}_{t^n_{k}} - Y_{t^n_{k}}^{n,n} \right\vert \\ & + & \left| \int_{t_{k}^{n}}^t \left( g^{n}\left(B_{s} , Y^{n}_{s}\right)- \left[ g^{n}\left( B_{t^{n}_{k}} , Y_{t^{n}_{k}}^{n,n} \right) + h_{1}^{n}\left( B_{t^{n}_{k}} , Y_{t^{n}_{k}}^{n,n} \right) \left( B_{s} - B_{t^{n}_{k}} \right) \right] \right) ds \right| \\ & \leq & C_{6} \sum_{j=1}^{k} (t_{j}^n -t_{j-1}^n)^{1+2(H-\rho)} \exp(C_{7}(t^n_{k} - t^n_{j-1} )) + C_{6}(t_{k+1}^n - t_{k}^n )^{1 + 2(H-\rho)} \\ &+& C_{7} \int_{t_{k}^{n}}^{t} \left\vert Y^{n}_{s} - Y^{n,n}_{s} \right\vert ds. \end{eqnarray*} \end{small} Therefore, using the Gronwall lemma again and $t \in (t_{k}^{n} , t_{k+1}^{n}]$, we obtain \begin{small} \begin{eqnarray*} & & \left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert \\ & \leq & \left[ C_{6} \sum_{j=1}^{k} (t_{j}^n -t_{j-1}^n)^{1+2(H-\rho)} \exp(C_{7}(t^n_{k} - t^n_{j-1} )) + C_{6}(t_{k+1}^n - t_{k}^n )^{1 + 2(H-\rho)} \right] \exp(C_{7}(t_{k+1}^n - t_{k}^n )) \\ &=& C_{6} \sum_{j=1}^{k} (t_{j}^n -t_{j-1}^n)^{1+2(H-\rho)} \exp(C_{7}(t^n_{k+1} - t^n_{j-1} )) + C_{6}(t_{k+1}^n - t_{k}^n )^{1 + 2(H-\rho)} \exp(C_{7}(t_{k+1}^n - t_{k}^n )) \\ &=& C_{6} \sum_{j=1}^{k+1} (t_{j}^n -t_{j-1}^n)^{1+2(H-\rho)} \exp(C_{7}(t^n_{k+1} - t^n_{j-1} )). \end{eqnarray*} \end{small} Therefore, (\ref{dify1}) is true for any $k \leq n$. Finally, for all $t \in [0,T]$, there exists $k \in \lbrace 1,\ldots, n \rbrace$ such that $t^{n}_{k-1} < t \leq t^{n}_{k}$. Thus (\ref{dify1}) implies \begin{eqnarray*} \left\vert Y^{n}_{t} - Y_{t}^{n,n} \right\vert & \leq & C_{6} \left({T \over n} \right)^{1+2(H-\rho)} k \exp(C_{7}(t^n_{k} - t^n_{0} )) \\ & \leq & C_{6} T \left({T \over n} \right)^{2(H-\rho)} \exp(C_{7}T), \end{eqnarray*} and the proof is complete.
\end{proof} \section{Appendix}\label{apen} Here, we consider the following useful result for the analysis of the convergence of the scheme. \begin{lem}\label{derivadagl} Let $\left\lbrace u_{i}^{l} \right\rbrace$ be a partition of the interval $[-R,R]$ given by $-R=u^{l}_{-l} < \ldots <u^{l}_{-1}< u^{l}_{0}=0< u^{l}_{1}< \ldots < u^{l}_{l} = R$ and $f:[-R,R] \rightarrow \mathbb{R} $ a $C^{2}([u_{j}^l,u_{j+1}^l])$-function for each $j \in \{-l, \ldots, l-1 \}$. Also let $f$ be continuous on $[-R,R]$, $C$ a constant such that \begin{equation*} \sup\limits_{j \in \{-l,...,l-1 \}} \Vert f'' \Vert_{\infty , [u_{j}, u_{j+1} ]} =C< \infty, \end{equation*} and $x \in (u_{j}^{l} , u^{l}_{j+1}]$ and $y \in (u_{j+k}^{l} , u^{l}_{j+k+1}]$. Then, \begin{equation}\label{conj} \vert f(y)-f(x)-f'(x+)(y-x) \vert \leq {C \over 2} (y-x)^2 + \sum_{p=1}^{k} \Delta_{j+p}f' \cdot (y-u_{j+p}), \end{equation} where \begin{equation*} \Delta_{j+p}f' = \vert f'(u_{j+p}+)-f'(u_{j+p}-) \vert. \end{equation*} \end{lem} \begin{proof} We will prove that (\ref{conj}) holds via induction on $k$. We start with the case $k=1$; that is, we consider two consecutive intervals. Suppose $x \in (u_{j},u_{j+1}]$ and $y \in (u_{j+1},u_{j+2}]$. Then, \begin{eqnarray*}\lefteqn{ \vert f(y)-f(x)-f'(x+)(y-x) \vert} \\ & \leq & \vert f(y)-f(u_{j+1}) - f'(u_{j+1}+)(y-u_{j+1})\vert \\ &+& \vert f(u_{j+1}) + f'(u_{j+1}+)(y-u_{j+1}) - f(x) - f'(x+)(y-x) \vert \\ & \leq & {C \over 2} (y- u_{j+1})^2 + \vert f(u_{j+1})-f(x) - f'(x+)(u_{j+1}-x)\vert \\ &+& \vert f'(u_{j+1}+)-f'(x+) \vert (y-u_{j+1}) \\ & \leq & {C \over 2} (y- u_{j+1})^2 + {C \over 2} (u_{j+1}-x)^2 \\ &+& \left[ \vert f'(u_{j+1}+)-f'(u_{j+1}-) \vert + \vert f'(u_{j+1}-)-f'(x+) \vert \right] (y-u_{j+1})\\ &=& {C \over 2} (y- u_{j+1})^2 + {C \over 2} (u_{j+1}-x)^2 + (y-u_{j+1}) (\Delta_{j+1}f') \\ &+& C(u_{j+1} -x )(y-u_{j+1})\\ &=& {C \over 2} (y- x)^2 + (\Delta_{j+1}f')(y-u_{j+1}). \end{eqnarray*} That is, (\ref{conj}) holds for $k=1$.
It remains to prove that the inequality (\ref{conj}) holds for $k+1$, assuming that it holds for $k$. To do so, choose $x \in [u_{j},u_{j+1}]$ and $y \in [u_{j+k+1},u_{j+k+2}]$. Then, \begin{eqnarray*}\lefteqn{ \vert f(y)-f(x)-f'(x+)(y-x) \vert } \\ & \leq & \vert f(y)-f(u_{j+k+1}) - f'(u_{j+k+1}+)(y-u_{j+k+1})\vert \\ &+& \vert f(u_{j+k+1}) + f'(u_{j+k+1}+)(y-u_{j+k+1}) - f(x) - f'(x+)(y-x) \vert \\ & \leq & {C \over 2} (y- u_{j+k+1})^2 + \vert f(u_{j+k+1})-f(x) - f'(x+)(u_{j+k+1}-x)\vert \\ &+& \vert f'(u_{j+k+1}+)-f'(x+) \vert (y-u_{j+k+1}). \end{eqnarray*} Hence, our induction hypothesis implies \begin{eqnarray*}\lefteqn{ \vert f(y)-f(x)-f'(x+)(y-x) \vert} \\ & \leq & {C \over 2} (y- u_{j+k+1})^2 + {C \over 2} (u_{j+k+1}-x)^2 + \sum_{p=1}^{k} (u_{j+k+1} -u_{j+p}) \Delta_{j+p}f' \\ & + & \left[ \Delta_{j+k+1}f' + C (u_{j+k+1} - u_{j+k}) + \left\vert f'(u_{j+k}+) - f'(x+) \right\vert \right] (y-u_{j+k+1}) \\ & \leq & {C \over 2} (y- u_{j+k+1})^2 + {C \over 2} (u_{j+k+1}-x)^2 + \sum_{p=1}^{k} (u_{j+k+1} -u_{j+p}) \Delta_{j+p}f' \\ &+& \left[ \Delta_{j+k+1}f' + C (u_{j+k+1} - u_{j+k}) \phantom{ \sum_{p=1}^{k} } \right. \\ &+& \left. \sum_{p=1}^{k} \Delta_{j+p} f' + C(u_{j+k} - x) \right] (y-u_{j+k+1}) \\ & \leq & \frac{C}{2}(y-x)^{2} + \sum_{p=1}^{k} (y -u_{j+p}) \Delta_{j+p}f' + (y-u_{j+k+1}) \Delta_{j+k+1}f'. \end{eqnarray*} Therefore, (\ref{conj}) is satisfied for $k+1$ and the proof is complete. \end{proof} {\bf Acknowledgments} We would like to thank the anonymous referee for useful comments and suggestions that improved the presentation of the paper. Part of this work was done while Jorge A. Le\'on was visiting CIMFAV, Chile, and H\'ector Araya and Soledad Torres were visiting Cinvestav-IPN, Mexico. The authors thank both institutions for their hospitality and financial support. Jorge A. Le\'on was partially supported by the CONACYT grant 220303.
Soledad Torres was partially supported by Proyecto ECOS C15E05; Fondecyt 1171335, REDES 150038 and Mathamsud 16MATH03. H\'ector Araya was partially supported by Beca CONICYT-PCHA/Doctorado Nacional/2016-21160138; Proyecto ECOS C15E05, REDES 150038 and Mathamsud 16MATH03. \end{document}
\begin{document} \title{The unified framework for modelling credit cycles with Marshall-Walras price formation process and systemic risk assessment\thanks{Submitted 10.05.2023. \funding{This work was supported by the Polish Ministry of Education and Science (MEiN) core funding for statutory R\&D activities.}}} \begin{abstract} Systemic risk is a rapidly developing area of research. Classical financial models often do not adequately reflect the phenomena of bubbles, crises, and transitions between them during credit cycles. To study very improbable events, systemic risk methodologies utilise advanced mathematical and computational tools, such as complex systems, chaos theory, and Monte Carlo simulations. In this paper, a relatively simple mathematical formalism is applied to provide a unified framework for modeling credit cycles and systemic risk assessment. The proposed model is analyzed in detail to assess whether it can reflect very different states of the economy. Based on these results, measures of systemic risk are constructed to provide information regarding the stability of the system. The formalism is then applied to describe the full credit cycle with the explanation of causal relationships between the phases expressed in terms of parameters derived from real-world quantities. The framework can be naturally interpreted and understood with respect to different economic situations and easily incorporated into the analysis and decision-making process based on classical models, significantly enhancing their quality and flexibility. \end{abstract} \begin{keywords} Systemic risk, credit cycle, phase transition, interest rates, positive feedback, critical point. \end{keywords} \begin{AMS} 91B55, 91B52, 91G15 \end{AMS} \section{Introduction} ``With the takeover of Credit Suisse by UBS, a solution has been found to secure financial stability and protect the Swiss economy in this exceptional situation''.
At the time of writing this article, on 19th March 2023, the Swiss National Bank announced a resolution to prevent the second largest Swiss bank from failure \cite{swiss_national_bank_swiss_2023}. To obtain an acquisition agreement, the Swiss National Bank pledged a loan of up to \$104 billion, while the Swiss government issued UBS a guarantee to assume losses of up to \$9.6 billion \cite{bishop_ubs_nodate}. There was one decisive reason for undertaking such measures: the mitigation of systemic risk. Systemic risk represents the possibility of collapse of the entire financial system, as opposed to collapses of individual parts and components \cite{ilin_uncertainty_2015}. The aim of this research area is to assess the risk of the financial system to secure its persistence and mitigate the consequences of its failure. Based on the results, banks around the world determine the size of the reserves held and, consequently, the interest rates and the amount of money in the economy \cite{basel_comittee_on_banking_supervision_basel_nodate}. Therefore, this research discipline is of fundamental importance for the functioning of modern societies and has a significant impact on our daily lives. This importance was confirmed by the recent 2022 Nobel Prize in Economics for the work of Bernanke, Diamond, and Dybvig on bank failures \cite{bernanke_financial_nodate, diamond_bank_1983}. Systemic risk research utilises complex mathematical tools \cite{doldi_conditional_2021} to analyze very improbable events and their probable consequences \cite{jackson_systemic_2020}. Its main challenge is the fact that the vast majority of such events will never occur, so their probability and potential impact must be assessed on the basis of the very sparse set of real-world observations and simulations of unrealised outcomes \cite{taleb_black_2007}.
The tools include network models of financial contagion and phase transitions \cite{elliott_financial_2014}, systems with feedback effects, self-fulfilling prophecies with possible switches between multiple equilibria \cite{diamond_bank_1983}, and complex computer calculations to simulate potential trajectories \cite{gill_high-performance_2021}. Network models are used to define the measures of systemic risk \cite{bartesaghi_risk-dependent_2020,battiston_debtrank_2012} and the methods of their computation on real data \cite{bourgey_metamodel_2020,poledna_quantification_2021}. The measures are then utilised to propose methods of risk optimization in financial systems \cite{pichler_systemic_2021}. The solutions include the usage of financial instruments such as credit default swaps to effectively reorganize the network structure towards a safer one \cite{leduc_systemic_2017} and contingent convertible obligations to improve banks' solvency and survival ability \cite{feinstein_contingent_2023}. The developed methodologies are, in turn, used to improve decision making on the management of financial systems in the real world \cite{gill_high-performance_2021}. The potential of the research performed extends beyond the banking system and is applied to the analysis and valuation of the credit risk and contagion effects of reinsurance companies \cite{ceci_value_2020}, as well as to the identification of systemically important entities in the credit network of the entire state \cite{poledna_identifying_2018}. On the other hand, classical financial models are generally not appropriate for reflecting improbable events \cite{taleb_black_2007}. They focus mainly on long stable periods with moderate variance over time \cite{black_pricing_1973}. Transitions between different phases of credit cycles constitute real-world phenomena that even advanced, stochastic, and mathematically sophisticated classical models struggle to explain \cite{greenspan_age_2007}.
Among the known and popular models, GARCH can align relatively well with the real data due to the volatility clustering property \cite{bollerslev_generalized_1986}. It includes a feedback effect, because a higher value of volatility in one period increases the probability of a higher value in the next one. Nevertheless, the model does not capture some asymmetries characteristic of credit cycle data, where even long periods of stable but excessive growth can contribute to risk accumulation and, therefore, increase the fragility of the system. Similarly, mean-reversion models \cite{lipe_mean_1994} do not explain the increase in fragility (risk of severe collapses) during stable growth periods and the asymmetries between stable growth, steep crisis, and relatively fast recovery. The aim of this paper is to provide a unified framework for modeling credit cycles and systemic risk assessment with the utilization of a relatively simple, easy to use, and well-interpretable mathematical formalism. It is structured as follows. \Cref{sec:models_introduction} introduces the model with the mathematical analysis of its properties. \Cref{sec:phase_transitions} focuses on the topic of phase transitions and the derivation of system stability measures. \Cref{sec:cycle} provides an example of a credit cycle generated from the model with details on the real-world interpretation of parameters and their economic consequences. A summary of the research results is presented in \Cref{sec:summary}. \section{Models and methods} \label{sec:models_introduction} \subsection{The Marshall-Walras equilibrium (MWE) model applied to credit market} The mathematical model of loans, defaults, and interest rate dynamics with the autocatalytic feedback mechanism has been presented in Ref.~\cite{solomon_minsky_2013}. The model is based on the Marshall-Walras equilibrium assumption for the price formation process.
It is able to properly reflect different economic regimes through the appropriate values of the parameters, which is also empirically validated in \cite{golo_too_2016}. However, the model does not describe the transition process between different states of the economy, which is the main point of interest in this paper. The model analyzed in \cite{solomon_minsky_2013} consists of several qualitatively similar but independent subparts that describe the dynamics of either loans or crisis accelerators. It does not factor in defaults and their influence on the interest rate during the loan accelerator phase, and, similarly, it does not factor in loans and their influence on the interest rate during the crisis accelerator phase. Within the model, the dynamics of the loan number $N(t)$ and the interest rate value $i(t)$ is described as \begin{equation} \begin{aligned} N{\left(t \right)} &= \left(\frac{i{\left(t \right)}}{k}\right)^{- \mu}, \\ i{\left(t + 1 \right)} &= i_0 N^{-\alpha} (t). \end{aligned} \label{eq:original_loans} \end{equation} Similarly, the dynamics of the crisis accelerator with the number of Ponzi (i.e. technically defaulted) companies $D (t)$ and the interest rate $i (t)$ is defined as \begin{equation} \begin{aligned} D{\left(t \right)} &= \left(\frac{i{\left(t \right)}}{k}\right)^{\beta}, \\ i{\left(t + 1 \right)} &= i_0 D^{\alpha} (t). \end{aligned} \label{eq:original_ponzi} \end{equation} A detailed justification of the equations is presented in \cite{solomon_minsky_2013}. The applied analysis includes probabilistic modeling of earnings and debt distribution in the market, a derivation from the power law assumption of wealth, earnings, and debt distribution, and a discussion of the power~law assumption. Empirical validation of the MWE model is performed in \cite{golo_too_2016}. \begin{remark} The model \cref{eq:original_loans} is one of the two loan dynamics models analyzed in \cite{solomon_minsky_2013}. It corresponds to the law of increasing returns.
The reasonableness of the law of increasing returns in the banking industry is discussed in \cite{solomon_minsky_2013}. An additional reason for choosing it in this paper is the feature of a positive feedback loop: an increase in the number of loans causes a decrease in the interest rate, which in turn causes a further increase in the number of loans. Thus, the number of loans and interest rates are self-reinforcing. This feature of the system has interesting mathematical and real-world consequences, often poorly reflected in classical models of financial mathematics. \end{remark} \subsection{Derivation of the unified model} \label{subsec:model_derivation} The aim is to unify the equations \cref{eq:original_loans}-\cref{eq:original_ponzi} into a single, consistent model (referred to as the unified MWE model, UMWE in short, throughout the rest of this work) that describes all variables of interest and their relations at each point in time. The proposed solution is as follows: \begin{equation} \begin{aligned} N{\left(t \right)} &= \left(\frac{i{\left(t \right)}}{k}\right)^{- \mu}, \\ D{\left(t \right)} &= \left(\frac{i{\left(t \right)}}{l}\right)^{\nu}, \\ i{\left(t + 1 \right)} &= \frac{D^{\beta}{\left(t \right)}}{N^{\alpha}{\left(t \right)}}, \end{aligned} \label{eq:my_model} \end{equation} where: \begin{itemize} \item $N{\left(t \right)}$ is the volume of loans at time $t$; \item $D{\left(t \right)}$ -- the volume of defaults at time $t$; \item $i{\left(t + 1 \right)}$ -- the value of the interest rate at time $t+1$; \item $\mu$ is a parameter that describes the level of demand for credit in relation to the interest rate; \item $\nu$ -- the level of defaults in relation to the interest rate; \item $\alpha$ is a parameter that describes the sensitivity of the interest rate to the volume of loans; \item $\beta$ -- the sensitivity of the interest rate to the volume of defaults; \item $k, \ l$ are scale parameters.
\end{itemize} Define the model parametrization as $\Lambda := ( \alpha, \beta, \mu, \nu, k, l ) \in \mathbb{R}_{+}^{6}.$ As $k,l$ are technical scale parameters, they are not a central point of interest in the investigation. Therefore, throughout the article, $\Lambda$ will be considered primarily as $ \Lambda =~( \alpha, \beta, \mu, \nu ) $ with the general exclusion of $k, l$ from the research focus. The exponents in $\Lambda$ describe various economic quantities of the real world. The parameter $\mu$ reflects the dependence of the volume of loans on the interest rate and therefore the demand for credit for different levels of the interest rate. On the other hand, $\nu$ captures the relation between the volume of defaults and the interest rate, so it can be interpreted as an indicator of the resilience of a particular economy: systems with different values of the parameter $\nu$ are characterized by distinct default tendencies and fragility. The parameters $\alpha$ and $\beta$ describe the dynamics of the interest rate determination in response to the volumes of loans and defaults on the market. The dynamics of loans and defaults are taken directly from the original model. The interest rate is modeled in a natural way, often encountered in finance, as the (exponentiated) ratio of defaults to loans. Consider the case \begin{equation} \alpha = \beta = 1, \end{equation} which implies that \begin{equation} i(t+1) = \frac{D(t)}{N(t)}. \end{equation} This is a very simple and intuitive parameterization, where the interest rate equals the estimated probability of default $D_t / N_t.$ Moreover, the interest rate is also equal to the expected rate of return from the `bare' credit with the same amount of loan and repayment.
Consider a loan with the amount of $M$ and the same amount to repay at maturity $T.$ The expected rate of return from such a loan with the estimated probability of default $D_t / N_t$ and the recovery rate of $0$ equals \begin{equation} r(t+1) = \frac{1}{M} \left( M \left( 1 - \frac{D(t)}{N(t)} \right) - M \right) = \left( 1 - \frac{D(t)}{N(t)} \right) - 1 = - \frac{D(t)}{N(t)}. \end{equation} Thus \begin{equation} i(t) = - r(t). \end{equation} The addition of the $\alpha$ and $\beta$ parameters enhances the flexibility of the model and allows it to reflect phenomena such as various market sentiments and premiums charged by banks for their offers. It also increases the model's potential to fit real data accurately. The proposed equation for the interest rate dynamics ultimately binds all variables of interest together. It is worth noticing that if two of the power indices (either $\alpha$ and $\mu$ or $\beta$ and $\nu$) are set to $0,$ the result is effectively a two-equation model very similar to the original MWE. Furthermore, to accurately align both models, the initial interest rate component $i_0$ in \cref{eq:original_loans}-\cref{eq:original_ponzi} must satisfy $i_0=1.$ The exclusion of $i_0$ from the interest rate equation ensures that the proposed model \cref{eq:my_model} is memoryless (Markovian). The initial interest rate $i_0$ was utilised in the original interest rate equations \cref{eq:original_loans}-\cref{eq:original_ponzi} as a scale parameter that anchors the evolution of the interest rate to its root. This causes $i_0$ to prevail in the model at each point in time and to significantly influence the interest rate value even in the distant future. This long-memory approach has several drawbacks, including time inconsistency: the model parameterization and evolution are different depending on which point in time is treated as the initial moment.
It is particularly problematic for the research conducted in this article, where one of the main points of interest is the transitions between qualitatively different economic regimes. It would be very inconvenient to handle the state transitions of the model with the long-lasting dependency on the predefined initial moment that prevails in every time step. The Markov property of the model \cref{eq:my_model} ensures that the value of the variable of interest in the next period depends only on the current one in the history, thus allowing for a very natural introduction of the regime switches. \subsection{Mathematical analysis of the interest rate model} Inserting defaults and loans into the interest rate definition in \cref{eq:my_model} yields a recurrence formula for $i(t):$ \begin{equation} \label{eq:it_recurrence} i\left(t+1 \right) = \left( l^{- \beta \nu} k^{ - \mu \alpha } \right) \left( i{\left(t \right)} \right)^{ \alpha\mu + \beta \nu}. \end{equation} The fixed point $i_{fix}$ of the recurrence equation \cref{eq:it_recurrence} must satisfy \begin{equation} i(t+1) = i(t) = i_{fix}. \end{equation} Hence, it follows that \begin{equation} \label{eq:i_fix} i_{fix} = \left( k^{- \alpha \mu} l^{- \beta \nu}\right)^{\frac{1}{1 - (\alpha \mu + \beta \nu)}}. \end{equation} Just to recall, the fixed-point formula in the MWE model~\cite{solomon_minsky_2013} reads \begin{equation} i^{MWE}_{fix} = \left( i_0 k^{- \alpha \mu} \right)^{\frac{1}{1 - \alpha \mu}}. 
\end{equation} The fixed point in the unified model does not feature the dependency on the initial value of the interest rate $i_0.$ This is a noticeable enhancement of the model, as it is natural to expect that the invariant point of the system depends on the parameter values $\Lambda$ and should not be influenced by the initial value $i_0$ defined at an arbitrarily chosen point in time $t=0.$ If the fixed point depended on the initial rate $i_0,$ it would be very hard to reasonably design transitions between different economic regimes, as the value of the invariant point of each new economy state would be influenced by the arbitrarily chosen initial interest $i(0).$ Define the power index parameter \begin{equation} \label{eq:def_a} a := \mu \alpha + \nu \beta. \end{equation} From the recurrence equation \cref{eq:it_recurrence}, it follows that \begin{equation} \label{eq:it_recurrence_advanced} i \left(t+1 \right) = \left( l^{- \beta \nu} k^{ - \mu \alpha } \right) i^{\left( \alpha\mu + \beta \nu \right)}{\left(t \right)} = i_{fix}^{(1-a)} i^{a}{\left(t \right)}. \end{equation} Formula \cref{eq:it_recurrence_advanced} is a simpler representation of the original interest rate recurrence equation. Consider now its logarithmic form: \begin{equation} \label{eq:it_recurrence_logarithmic} \ln (i(t+1)) = (1-a) \ln(i_{fix}) + a \ln(i(t)). \end{equation} Solving the last equation and returning to the nonlogarithmic form produces the compact formula for the value of the interest rate $i(t)$ at any given point in time $t$, \begin{equation} \label{eq:it_intime_t} i(t) = i_{fix} \left( \frac{i_{0}}{i_{fix}} \right)^{a^t}.
\end{equation} \section{Credit market phases and transitions} \label{sec:phase_transitions} \subsection{Bifurcation analysis} \label{subsec:bif_analysis} The qualitative behavior of the system \cref{eq:my_model} depends mainly on two factors: the value of the power index $a$ and the position of the initial interest rate $ i (0) $ relative to the fixed point $ i_{fix}$. The conclusions presented can be straightforwardly derived from the formula for the interest rate over time \cref{eq:it_intime_t}. For a general overview of dynamic systems, fixed points, and their classification, see \cite{strogatz_nonlinear_2007}. For $ a < 1$ in Eq.~\cref{eq:it_intime_t}, the fixed point is attractive (\cref{fig:c_stable}). This situation corresponds to a stable and healthy economic regime with interest rates that smoothly and monotonically converge to the fixed point $ i_{fix}. $ From the perspective of a central bank seeking stability, this is the desired regime. \begin{figure} \caption{Stable regime with $ a = 0.9, \ i_0 = 0.045, \ i_{fix}$.} \label{fig:stable} \label{fig:c_stable} \end{figure} The point $ a = 1 $ is the point of transcritical bifurcation of the system. The behavior of the system changes drastically around this point. For $a=1$, from the recurrence equation \cref{eq:it_recurrence} it follows that \begin{equation} \begin{aligned} i\left(t+1 \right) &= \left( l^{- \beta \nu} k^{ - \mu \alpha } \right) i^a{\left(t \right)} \\ &= \left( l^{- \beta \nu} k^{ - \mu \alpha } \right) i{\left(t \right)} \\ &= \left( l^{- \beta \nu} k^{ - \mu \alpha } \right)^2 i{\left(t-1\right)} \\ &= \left( l^{- \beta \nu} k^{ - \mu \alpha } \right)^n i{\left(t-n + 1 \right)} \\ &= \left( l^{- \beta \nu} k^{ - \mu \alpha } \right)^{t+1} i_{0} \\ &= c^{t+1} i_{0}, \qquad c := \left( l^{- \beta \nu} k^{ - \mu \alpha } \right).
\\ \end{aligned} \label{eq:it_a=1} \end{equation} If $ c = 1$, then \begin{equation} i\left(t+1 \right) = \left( l^{- \beta \nu} k^{ - \mu \alpha } \right) i{\left(t \right)} = c \, i{\left(t \right)} = i{\left(t \right)}. \end{equation} Therefore, the interest rate is constant over time and equals the initial value: \begin{equation} i\left(t \right) = i_0. \end{equation} For $ c \neq 1,$ there is no fixed point of the equation \cref{eq:it_a=1} (with the exception of $0,$ which is generally not considered a possible value of the interest rate in the analyzed model). If $c<1,$ the interest rate converges to $ 0 $ and if $c>1,$ the interest rate diverges to infinity. For $ a > 1, $ the fixed point is repelling. This situation corresponds to a dangerous and unstable state of the economy, with the interest rate rapidly approaching $0$ or infinity. Contrary to the stable regime $ a < 1 $, the behavior of the system is heavily dependent on the position of the initial interest rate $i_0$ relative to the fixed point $i_{fix}.$ For $ i_{0} > i_{fix}, $ it follows that \begin{equation} \begin{aligned} i_0 &> \left( k^{- \alpha \mu} l^{- \beta \nu}\right)^{\frac{1}{1 - a}} \\ \ln i_0 &> ( - \alpha \mu \ln k - \beta \nu \ln l)/(1-a) \\ 0 &> \frac{1}{1-a} ( - \alpha \mu \ln k - \beta \nu \ln l - (1-a) \ln i_0) \\ 0 &> \frac{1}{1-a} \left( \alpha \mu ( \ln i_0 - \ln k ) + \beta \nu (\ln i_0 - \ln l ) - \ln i_0 \right). \\ \end{aligned} \end{equation} The last form of the above formula will be referred to as the position inequality, \begin{equation} \label{ineq:position} 0 > \frac{1}{1-a} \left( \alpha \mu ( \ln i_0 - \ln k ) + \beta \nu (\ln i_0 - \ln l ) - \ln i_0 \right), \end{equation} which determines the qualitative behavior of the system in the unstable regime $a>1.$ If $ i_{0} < i_{fix}, $ $ i(t) $ converges to $0$ (\cref{fig:c_bubble}). We say that the economy is in a bubble regime. This is an unsafe and unsustainable state.
According to the model \cref{eq:my_model}, the volume of loans $N(t),$ and therefore the amount of money in the economy, is inversely proportional to the interest rate: $N{\left(t \right)} = \left( k / i{\left(t \right)} \right)^{\mu}. $ As the interest rate approaches $0,$ the amount of money in the system rapidly expands without constraints. Inflation spirals out of control and either the system collapses or the currency fails to fulfill a basic function of money: a stable measure of value \cite{stanley_jevons_money_1989}. \begin{figure} \caption{Bubble regime with $ a = 1.1, \ i_0 = 0.035, \ i_{fix}$.} \label{fig:bubble} \label{fig:c_bubble} \end{figure} If $ i_{0} > i_{fix}, $ $ i(t) $ diverges to infinity (\cref{fig:c_crash}). The economy is in a crash regime. The interest rate increases sharply, entering the feedback loop with vanishing credit and expanding defaults. The collapse of the banking system has very serious consequences beyond the limitation of the credit activity. In the modern economy, a large part of the money itself is credit created by banks \cite{mcleay_money_2014}. Therefore, the collapse of the banking system is the collapse of money itself. \begin{figure} \caption{Crash regime with $ a = 1.1, \ i_0 = 0.045, \ i_{fix}$.} \label{fig:crash} \label{fig:c_crash} \end{figure} Overall, the UMWE model has features similar to those of the original model. However, it extends the results of the original one to a holistic framework, describing the comprehensive dynamics of the credit market with loans, defaults, and interest rates at each point in time. On the other hand, the credit cycle is a sum of its phases with the transitions between them. The UMWE model is able to explain the stable growth, bubble, and crisis phases, depending on the values of the parameters. Therefore, to explain a credit cycle, the model has to allow for changes in the parameters in such a way that transitions occur between different states of the system.
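The three regimes discussed above can be reproduced numerically from the reduced recurrence \cref{eq:it_recurrence_advanced} and checked against the closed-form solution \cref{eq:it_intime_t}. The following minimal sketch (not part of the original analysis; the values $i_{fix}=0.04$, $i_0 \in \{0.035, 0.045\}$ and $a \in \{0.9, 1.1\}$ are illustrative only, chosen to match the order of magnitude used in the figures) iterates the interest rate dynamics in all three regimes:

```python
import math

def interest_path(i0, i_fix, a, steps):
    """Iterate the reduced recurrence i(t+1) = i_fix**(1-a) * i(t)**a."""
    path = [i0]
    for _ in range(steps):
        path.append(i_fix ** (1.0 - a) * path[-1] ** a)
    return path

def interest_closed_form(i0, i_fix, a, t):
    """Closed-form solution i(t) = i_fix * (i0 / i_fix)**(a**t)."""
    return i_fix * (i0 / i_fix) ** (a ** t)

i_fix = 0.04  # illustrative fixed point of the interest rate dynamics
# a < 1: attractive fixed point, the rate converges to i_fix (stable regime)
stable = interest_path(0.045, i_fix, a=0.9, steps=60)
# a > 1 and i0 < i_fix: the rate collapses towards 0 (bubble regime)
bubble = interest_path(0.035, i_fix, a=1.1, steps=60)
# a > 1 and i0 > i_fix: the rate explodes (crash regime)
crash = interest_path(0.045, i_fix, a=1.1, steps=60)
```

Since the recurrence is linear in $\ln i(t)$, the iterated path agrees with the closed form up to floating-point error, and the doubly exponential speed of divergence in the unstable regime is visible after only a few dozen steps.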
\subsection{Critical parameters} \label{sec:bifurcation_paramaters_analysis} In the dynamic parameter regime, the parameter values are time dependent, $\Lambda (t) := \left \{ \alpha(t), \beta(t), \mu(t), \nu(t) \right \}. $ However, the unified MWE model \cref{eq:my_model} is Markovian. Therefore, for the sake of clarity, the time dependence of the parameters will be omitted from the notation. The value of a parameter can be interpreted as being at the ``current'' or any arbitrarily chosen point in time. Recall that $a=\alpha \mu + \beta \nu .$ A transcritical bifurcation occurs at $a=1.$ Therefore, the critical value $\alpha_{crit}$ of $\alpha$ at which the bifurcation occurs must satisfy \begin{equation} \label{eq:critical_alpha_condition} \alpha_{crit} \mu + \beta \nu = 1. \end{equation} Thus, the value of the critical stability parameter $\alpha_{crit}$ equals \begin{equation} \label{eq:critical_alpha} \alpha_{crit} = \frac{1 - \beta \nu}{\mu}. \end{equation} The application of the same reasoning to the other parameters yields the following values of the critical stability parameters: \begin{equation} \begin{aligned} \alpha_{crit} &= \frac{1 - \beta \nu}{\mu}, \\ \beta_{crit} &= \frac{1 - \alpha \mu}{\nu}, \\ \mu_{crit} &= \frac{1 - \beta \nu}{\alpha}, \\ \nu_{crit} &= \frac{1 - \alpha \mu}{\beta}. \\ \end{aligned} \label{eq:crtiical_stability_parameters} \end{equation} At an aggregate level, one can define the value $\Delta_{crit}$ of a scale parameter for which a stability transition occurs. Let $ \Delta_{crit}$ be the multiplier for which the critical value $\alpha \mu + \beta \nu = 1$ is attained: \begin{equation} \Delta_{crit} \alpha \Delta_{crit} \mu + \Delta_{crit} \beta \Delta_{crit} \nu = 1. \end{equation} Hence, it follows that $\Delta_{crit}$ satisfies \begin{equation} \begin{aligned} \Delta^{2}_{crit} \left(\alpha \mu + \beta \nu \right) &= 1 \\ \Delta_{crit} &= \frac{1}{\sqrt{a}}.
\end{aligned} \label{eq:critical_delta} \end{equation} On the other hand, consider the transition between the bubble and crash regimes in the unstable state $a>1.$ The regime is determined by the relative positions of the current interest rate $i_t$ and the fixed point $i_{fix}.$ Therefore, the value of the critical direction parameter $\alpha_{crit} (i_t)$ for which the parenthesized term in the position inequality \cref{ineq:position} changes sign equals \begin{equation} \label{eq:direction_alpha} \alpha_{crit} (i_t) = \frac{\ln i_t - \beta \nu ( \ln i_t - \ln l )}{\mu (\ln i_t - \ln k )}. \end{equation} In the unstable regime $a>1$, either $ i_t \longrightarrow 0$ or $ i_t \longrightarrow \infty $ as $ t \longrightarrow \infty. $ Hence, the limit $\overline{\alpha}_{crit}$ of the above formula \cref{eq:direction_alpha} as $t$ approaches infinity is \begin{equation} \overline{\alpha}_{crit} = \frac{1 - \beta \nu}{\mu}. \end{equation} Notably, this is the same value as the critical stability parameter $\alpha_{crit}$ in \cref{eq:critical_alpha}. The critical direction values of the other parameters can be derived analogously, yielding \begin{equation} \begin{aligned} \alpha_{crit} (i_t) &= \frac{\ln i_t - \beta \nu ( \ln i_t - \ln l )}{\mu (\ln i_t - \ln k )}, \\ \beta_{crit} (i_t) &= \frac{\ln i_t - \alpha \mu ( \ln i_t - \ln k )}{\nu (\ln i_t - \ln l )}, \\ \mu_{crit} (i_t) &= \frac{\ln i_t - \beta \nu ( \ln i_t - \ln l )}{\alpha (\ln i_t - \ln k )}, \\ \nu_{crit} (i_t) &= \frac{\ln i_t - \alpha \mu ( \ln i_t - \ln k )}{\beta (\ln i_t - \ln l )}.
\\ \end{aligned} \label{eq:critical_direction_parameters} \end{equation} In the limit cases (either $ i_t \longrightarrow 0$ or $ i_t \longrightarrow \infty $), the asymptotic critical direction parameters equal \begin{equation} \begin{aligned} \overline{\alpha}_{crit} &= \frac{1 - \beta \nu}{\mu}, \\ \overline{\beta}_{crit} &= \frac{1 - \alpha \mu}{\nu}, \\ \overline{\mu}_{crit} &= \frac{1 - \beta \nu}{\alpha}, \\ \overline{\nu}_{crit} &= \frac{1 - \alpha \mu}{\beta}. \\ \end{aligned} \end{equation} These are exactly the values of the critical stability parameters in \cref{eq:crtiical_stability_parameters}. Furthermore, at an aggregate level, define the value $\Delta_{crit} (i_t)$ of a scale parameter for which a direction transition occurs. Let $ \Delta_{crit} (i_t)$ be the multiplier for which equality is attained in the position inequality \cref{ineq:position}: \begin{equation} \begin{aligned} 0 &= \Delta_{crit} (i_t) \alpha \Delta_{crit} (i_t) \mu ( \ln i_t - \ln k ) + \\ &+ \Delta_{crit} (i_t) \beta \Delta_{crit} (i_t) \nu (\ln i_t - \ln l ) - \ln i_t. \end{aligned} \end{equation} Therefore \begin{equation} \begin{aligned} \label{eq:direction_delta} \Delta^{2}_{crit} (i_t) &= \frac{\ln i_t}{\alpha \mu ( \ln i_t - \ln k ) + \beta \nu (\ln i_t - \ln l )} \\ \Delta_{crit} (i_t) &= \sqrt{ \frac{\ln i_t}{\alpha \mu \ln \frac{i_t}{k} + \beta \nu \ln \frac{i_t}{l} }}. \end{aligned} \end{equation} In the limit case $ t \longrightarrow \infty $, the value of the asymptotic critical direction delta $\overline{\Delta}_{crit}$ is equal to the critical stability delta $\Delta_{crit}$ in \cref{eq:critical_delta}: \begin{equation} \label{eq:direction_delta_limit} \overline{\Delta}_{crit} = \sqrt{ \frac{1}{\alpha \mu + \beta \nu }} = \frac{1}{\sqrt{a}}. \end{equation} \begin{remark} Similar definitions can be introduced in the stable regime $a<1$, but the position of $i_{fix}$ relative to $i_t$ is not as relevant there.
In that regime, $i_t$ converges to $i_{fix}$ in all cases, so the qualitative behavior of the stable system does not change significantly depending on the location of the current interest rate (with the exception of the direction of convergence). \end{remark} The comparison of the critical stability parameters to the asymptotic critical direction parameters carried out above makes it possible to deduce the following result. \begin{theorem}[Equivalence of the Critical Parameters] \label{th:critical_parameters_equivalence} Let $\Lambda = \left \{ \alpha, \beta, \mu, \nu \right \}$ be the set of exponential parameters of the UMWE model \cref{eq:my_model}. For every parameter $\lambda \in \Lambda$ and every pair of the corresponding critical stability and asymptotic critical direction parameters $\left( \lambda_{crit}, \overline{\lambda}_{crit} \right)$ the following relation holds: \begin{equation} \lambda_{crit} = \overline{\lambda}_{crit}. \end{equation} Furthermore, it holds that \begin{equation} \Delta_{crit} = \overline{\Delta}_{crit}. \end{equation} \end{theorem} The result of \cref{th:critical_parameters_equivalence} is not a coincidence. Consider the following form of the position inequality \cref{ineq:position}: \begin{equation} 0 > \frac{1}{1-a} ( - \alpha \mu \ln k - \beta \nu \ln l - (1-a) \ln i_t). \end{equation} In the unstable regime, $ \ln i_t $ approaches $+ \infty$ or $- \infty$. Therefore, in the limit, the constant term $ - \alpha \mu \ln k - \beta \nu \ln l $ is negligible, and a change of sign in the formula can only come from a change of sign of $(1-a)$. The result of \cref{th:critical_parameters_equivalence} unifies the asymptotic critical direction parameters with the critical stability parameters, highlighting the latter as the most important indicators of the fragility of the analyzed UMWE model.
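The statement of \cref{th:critical_parameters_equivalence} can be checked numerically for $\alpha$: the direction-critical value $\alpha_{crit}(i_t)$ approaches the stability-critical value $(1-\beta\nu)/\mu$ in both limits $i_t \to 0$ and $i_t \to \infty$. The parameter values in the sketch below are illustrative assumptions.

```python
import math

# alpha_crit(i_t) from the direction formula and its stability counterpart
# alpha_crit = (1 - beta*nu)/mu; parameter values are illustrative.
def alpha_crit_direction(i_t, beta, nu, mu, k, l):
    num = math.log(i_t) - beta * nu * (math.log(i_t) - math.log(l))
    den = mu * (math.log(i_t) - math.log(k))
    return num / den

beta, nu, mu, k, l = 1.2, 0.4, 0.6, 50.0, 0.01
alpha_crit_stability = (1.0 - beta * nu) / mu

# The direction-critical value approaches the stability-critical one in both limits:
for i_t in (1e-40, 1e40):
    assert abs(alpha_crit_direction(i_t, beta, nu, mu, k, l) - alpha_crit_stability) < 0.01
```

The convergence is slow, of order $1/\ln i_t$, which is why extreme values of $i_t$ are used in the check.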
Thus, the neighborhood of the point $a=1$ indeed constitutes the critical region of the system, where the phase transitions (between stability and instability, as well as between a bubble and a crisis) ultimately occur. \subsection{Measures of systemic risk} On the basis of the determined critical (stability and direction) parameters, measures of systemic risk for the unified MWE model can be constructed as the distances from the current values of the parameters to their critical values. For every parameter $ \lambda \in \Lambda $ and the corresponding critical stability parameter $\lambda_{crit}$ define \begin{equation} \label{eq:stability_distances} \begin{aligned} \delta \lambda_{crit} &= \lambda_{crit} - \lambda, \\ \frac{\delta \lambda_{crit}}{\lambda} &= \frac{\lambda_{crit} - \lambda}{\lambda} = \frac{\lambda_{crit}}{\lambda} - 1. \end{aligned} \end{equation} The exact formulas for all parameters are presented in \cref{tab:critical_stability_distances}. The above measures will be referred to as the (critical) stability distances: the absolute stability distance $\delta \lambda_{crit}$ and the relative one, $\frac{\delta \lambda_{crit}}{\lambda}.$ By \cref{th:critical_parameters_equivalence}, the distances to the critical stability parameters are equal to the distances to the asymptotic critical direction parameters. \begin{table}[h!]
\label{tab:critical_stability_distances} \centering \begin{tabular}{|c | c c c c c|} \hline & $\alpha$ & $\beta$ & $\mu$ & $\nu$ & $\Delta$ \\ \hline $\delta \lambda_{crit}$ & $\frac{1-a}{\mu}$ & $ \frac{1-a}{\nu} $ & $\frac{1-a}{\alpha}$ & $\frac{1-a}{\beta}$ & $\frac{1}{\sqrt{a}} - 1$ \\ [1ex] $\delta \lambda_{crit} / \lambda$& $ \frac{1-a}{\mu \alpha}$ & $\frac{1-a}{\nu \beta}$ & $\frac{1-a}{\mu \alpha}$ & $\frac{1-a}{\nu \beta}$ & $\frac{1}{\sqrt{a}} - 1$ \\ [1ex] \hline \end{tabular} \caption{Critical stability distances.} \end{table} For the critical direction parameters, the distances can be defined analogously for $ \lambda \in \Lambda $: \begin{equation} \label{eq:direction_distances} \begin{aligned} \delta \lambda_{crit} (i_t) &= \lambda_{crit} (i_t) - \lambda, \\ \frac{\delta \lambda_{crit} (i_t)}{\lambda} &= \frac{\lambda_{crit} (i_t) - \lambda}{\lambda} = \frac{\lambda_{crit} (i_t)}{\lambda} - 1. \end{aligned} \end{equation} The exact formulas for all parameters are presented in \cref{tab:critical_direction_distances}. In this case, $\delta \lambda_{crit} (i_t)$ and $\frac{\delta \lambda_{crit} (i_t)}{\lambda}$ are referred to as the absolute and relative (critical) direction distances, respectively. \begin{table}[h!]
\label{tab:critical_direction_distances} \centering \begin{tabular}{|c | c c c|} \hline & $\alpha$ & $\beta$ & $\mu$ \\ \hline $\delta \lambda_{crit} \left( i_t \right)$ & $- \frac{\beta \nu \ln \frac{i_t}{l} - \ln i_t}{\mu \ln \frac{i_t}{k}} - \alpha $ & $- \frac{\alpha \mu \ln \frac{i_t}{k} - \ln i_t}{\nu \ln \frac{i_t}{l}} - \beta $ & $ - \frac{\beta \nu \ln \frac{i_t}{l} - \ln i_t}{\alpha \ln \frac{i_t}{k}} - \mu$ \\ $\delta \lambda_{crit} \left( i_t \right) / \lambda$ & $- \frac{\beta \nu \ln \frac{i_t}{l} - \ln i_t}{\alpha \mu \ln \frac{i_t}{k}} - 1 $ & $- \frac{\alpha \mu \ln \frac{i_t}{k} - \ln i_t}{ \beta \nu \ln \frac{i_t}{l}} - 1$ & $- \frac{\beta \nu \ln \frac{i_t}{l} - \ln i_t}{ \alpha \mu \ln \frac{i_t}{k}} - 1$ \\ \hline & $\nu$ & $\Delta$ & \\ \hline $\delta \lambda_{crit} \left( i_t \right)$ & $ - \frac{\alpha \mu \ln \frac{i_t}{k} - \ln i_t}{\beta \ln \frac{i_t}{l}} - \nu$ & $\sqrt{ \frac{\ln i_t}{\alpha \mu \ln \frac{i_t}{k} + \beta \nu \ln \frac{i_t}{l} }} - 1$ & \\ $\delta \lambda_{crit} \left( i_t \right) / \lambda$ & $ - \frac{\alpha \mu \ln \frac{i_t}{k} - \ln i_t}{ \beta \nu \ln \frac{i_t}{l}} - 1 $ & $\sqrt{ \frac{\ln i_t}{\alpha \mu \ln \frac{i_t}{k} + \beta \nu \ln \frac{i_t}{l} }} - 1$ & \\ \hline \end{tabular} \caption{Critical direction distances.} \end{table} The system fragility measures defined in \cref{eq:stability_distances} and \cref{eq:direction_distances} are of direct practical use. Stability distances describe how far (in terms of parameter values) the system is from the unstable (or stable) state. In a stable stage, this information can be utilised to keep an (arbitrarily set) safe distance from the unstable phase.
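The closed-form entries of \cref{tab:critical_stability_distances} follow from simple algebra, e.g. $\alpha_{crit} - \alpha = (1-\beta\nu)/\mu - \alpha = (1-a)/\mu$. A quick numerical check, with illustrative (assumed) parameter values:

```python
# Check of the closed-form critical stability distances in the table:
# lambda_crit - lambda reduces to (1-a)/(...) with a = alpha*mu + beta*nu.
alpha, beta, mu, nu = 1.0, 1.2, 0.45, 0.35   # illustrative values
a = alpha * mu + beta * nu

d_alpha = (1 - beta * nu) / mu - alpha       # alpha_crit - alpha
d_beta  = (1 - alpha * mu) / nu - beta
d_mu    = (1 - beta * nu) / alpha - mu
d_nu    = (1 - alpha * mu) / beta - nu
d_delta = a ** -0.5 - 1                      # Delta_crit - 1

assert abs(d_alpha - (1 - a) / mu) < 1e-12
assert abs(d_beta  - (1 - a) / nu) < 1e-12
assert abs(d_mu    - (1 - a) / alpha) < 1e-12
assert abs(d_nu    - (1 - a) / beta) < 1e-12
```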
However, as is usual in finance, there may be a trade-off between risk and return \cite{markowitz_portfolio_1952}: lower values of the parameters can mean a higher level of the equilibrium interest rate $i_{fix}.$ In the unstable phase, the stability distance can be utilized to steer the return to the stable regime. On the other hand, the direction distance describes how far the system is from the crisis (or the bubble) in the unstable regime $ a > 1.$ It can help to determine how to escape a bubble in which too much money is present in the economy, causing inflation, instability, and phantom (nominal) growth without coverage in real terms. It is worth pointing out that it is not obvious whether escaping a bubble smoothly over a long period is a better solution than a V-shaped crisis and recovery. A smooth bubble escape could mean a long-lasting, exhausting recession with persistent consequences that are often difficult to get rid of. In comparison, a rapid and severe but short shock, controlled to some extent by a central bank supporting the financial situation, can often be followed by a relatively fast recovery to the stable regime. With enough knowledge regarding the fit of the presented model to reality, the direction distance could possibly be utilized to steer a more rapid recovery followed by a smooth transition to the stable regime. For these purposes, however, empirical validation of the model is required. \section{The credit cycle} \label{sec:cycle} \subsection{Theoretical background} The credit cycle is the expansion and reduction of credit in the economy over time \cite{phillips_banking_1938}. It may consist of various stages such as expansion, crisis, recession, and recovery (\cref{fig:graph_credit_cycle}). \begin{figure} \caption{An example credit cycle with various phases.
} \label{fig:graph_credit_cycle} \end{figure} Excessive growth of the money supply and of the level of prices can lead to the formation of a speculative bubble, which is unsustainable in the long term and may result in a severe crash \cite{benning_trading_2007}. The crisis is usually followed by a period of decline in credit and economic activity, which eventually stabilizes the situation and leads to the next phase of expansion. Throughout the cycle, the different states are characterized by very different values of economic and financial factors, such as interest rates, inflation, unemployment, industrial production, money supply, and bankruptcies \cite{wankel_encyclopedia_2009}. This diversity is captured by the UMWE model \cref{eq:my_model}, which depends heavily on the parameterization $\Lambda$: for various sets of parameters, the system can exhibit qualitatively different types of dynamics, as analyzed in \cref{subsec:bif_analysis}. The parameters $\alpha$ and $\beta$ reflect the policy of the banks regarding their market offer, expressed through the price charged for their products. Since this article is written mainly from the point of view of financial institutions, it will be assumed that $\alpha$ and $\beta$ can change their values throughout the credit cycle, as they represent the available instruments of financial policy and can be tailored to individual situations. The values of the parameters $\mu$ and $\nu$ will be treated as external from the bank's perspective and therefore kept constant. The technical scale parameters $k$ and $l$ will also not vary over time in the proposed approach. The assumption that only $\alpha$ and $\beta$ can change over time simplifies the analysis and makes it clearer.
Nevertheless, it is worth noting that when the model is incorporated into real data analysis, the potential dynamics of the remaining parameters should also be validated, possibly together with the relations between various parameters, as, for example, $\alpha$ and $\mu$ could both be driven by common optimism in the credit market. \subsection{Stable phase and the bubble} \label{sec:stable_and_bubble} It is argued in Ref. \cite{solomon_minsky_2013} that a square-root power law can describe the response of the economic demand for loans to the interest rate. Based on that, the values of the exponents that describe the dependence of loans and defaults on the interest rate in the system \cref{eq:my_model} are set to \begin{equation} \mu=\nu=0.499. \end{equation} The values are just below $0.5$ for technical reasons explained later. For the exponent parameters of the interest rate equation in the system \cref{eq:my_model}, the initial values are chosen as \begin{equation} \alpha = \beta = 1, \end{equation} which implies that the interest rate equals the estimated probability of default \begin{equation} i_{t+1} = \frac{D_t}{N_t}. \end{equation} The rationale for such a parameterization was discussed in \cref{subsec:model_derivation}. The scale parameters are set to \begin{equation} \begin{aligned} k &= 105.5, \\ l &= 0.0096. \end{aligned} \end{equation} They are technically adjusted to fit the desired behavior of the model. Finally, the initial value of the interest rate is taken as \begin{equation} i_0 = 0.042. \end{equation} The chosen specification of the parameters implies that \begin{equation} a = \alpha \mu + \beta \nu = 1 \times 0.499 + 1 \times 0.499 = 0.998 < 1. \end{equation} This means that the default state of the economy is a convergent one ($a<1$), but at the same time it is very close to the instability border $a=1$.
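With the parameter values above, the key quantities can be reproduced numerically. The sketch below assumes the reduced one-dimensional form $i_{t+1} = k^{-\alpha\mu}\, l^{-\beta\nu}\, i_t^{a}$, which is consistent with the fixed-point formula used in this section:

```python
# Parameters from the text: alpha = beta = 1, mu = nu = 0.499, k = 105.5, l = 0.0096.
alpha = beta = 1.0
mu = nu = 0.499
k, l = 105.5, 0.0096

a = alpha * mu + beta * nu
assert abs(a - 0.998) < 1e-12           # just below the instability border a = 1

# Fixed point i_fix = (k^(-alpha*mu) * l^(-beta*nu))^(1/(1-a)):
i_fix = (k ** (-alpha * mu) * l ** (-beta * nu)) ** (1.0 / (1.0 - a))
assert abs(i_fix - 0.04186) < 1e-4

# Iterating the reduced map from i_0 = 0.042 converges slowly (a is close to 1):
i = 0.042
for _ in range(5000):
    i = k ** (-alpha * mu) * l ** (-beta * nu) * i ** a
assert abs(i - i_fix) < 1e-5
```

With these values, $a = 0.998$ and $i_{fix} \approx 0.0419$, matching the quantities quoted in the text.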
The fixed-point interest rate of the system parameterized in this way is close to $0.04186.$ Therefore, the evolution of the system starts in a stable and calm phase. The interest rate slowly and smoothly settles to its lower limit. The described situation is illustrated in \cref{fig:cycle_stable}. \begin{figure} \caption{Stable phase of the cycle with $ \alpha = \beta = 1, \ \mu = \nu = 0.499, \ k = 105.5, \ l = 0.0096, \ i_0 = 0.042, \ i_{fix} \approx 0.0419$.} \label{fig:cycle_stable} \end{figure} The prevailing period of prosperity amplifies the confidence and optimism of the banks. At some point in time, they become comfortable enough to start to relax the previous rules for determining the value of the interest rate in relation to loans and defaults. In the selected setup of the parameters $\Lambda,$ the interest rate can be lowered by an increase of $\alpha$ or a decrease of $\beta.$ Modifying the sensitivity to the number of loans (money) in the system appears to be less risky than modifying the sensitivity to the number of defaults. Therefore, after a long period of stable economic growth, $\alpha$ increases. This brings positive feedback from the economy, resulting in more beneficial dynamics of the credit market in comparison to the previous ones. The favorable outcome fuels further optimism and encourages banks to increase $\alpha$ even more. The result is a positive feedback loop with the parameter $\alpha$ increasing at each step. Along with the rise of the value of $\alpha$, the distance to instability $\delta \alpha_{crit}$ decreases and the system becomes more fragile.
At some point in time, $\delta \alpha_{crit}$ passes through $0.$ The market enters a bubble state and continues to grow. The situation is presented in \cref{fig:cycle_bubble}. \begin{figure} \caption{Bubble phase of the cycle with $ \beta = 1, \ \mu = \nu = 0.499, \ k = 105.5, \ l = 0.0096$ and increasing $\alpha.$ A long period of stable growth improves market confidence, expressed in the rising value of $\alpha$ (left plot). System stability decreases and at some point in time the market enters the bubble phase (right plot).} \label{fig:cycle_bubble} \end{figure} Although it is generally assumed that lower interest rates fuel economic growth \cite{keynes_general_2018}, interest rate levels that are too low are not healthy for the economy. Interest rates influence the supply of money \cite{ krugman_macroeconomics_2009, mankiw_principles_2009,mcleay_money_2014}, which is also reflected in the considered model. Interest rates impact the volume of loans and, in turn, the amount of money in the economy, through the relation given in \cref{eq:my_model}: \begin{equation} N{\left(t \right)} = \left(\frac{i{\left(t \right)}}{k}\right)^{- \mu}. \end{equation} Furthermore, an increase in the amount of money leads to an increase in the level of prices \cite{palgrave_macmillan_equation_2008} (there are also direct relations between interest rates and inflation derived in economic theory \cite{fisher_nature_2009}). Exceeding an unhealthy threshold of the inflation level creates the risk of entering a vicious cycle of self-reinforcing increases in prices and wages \cite{mankiw_brief_2008}. Rising prices increase the value of collateral for credit, improving the credit quality of borrowers and amplifying the increase in credit volume in the economy \cite{bernanke_financial_1996}.
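For illustration, the loan-volume relation above can be evaluated directly with the text's $\mu = 0.499$ and $k = 105.5$; the sample interest rate values are assumptions:

```python
# Loan volume N(t) = (i(t)/k)^(-mu) from the relation above.
mu, k = 0.499, 105.5

def loans(i):
    return (i / k) ** -mu

# As the interest rate falls, the volume of loans (and hence money) expands:
assert loans(0.01) > loans(0.02) > loans(0.04)
```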
This can create incentives for the formation of speculative bubbles, which amplifies the fragility of the financial system \cite{kiyotaki_credit_1997} and potentially leads to a crash, as was the case in 2007 \cite{krishnamurthy_amplification_2010}. When the interest rate reaches near-zero territory in the bubble regime, the amount of money in the system and the level of prices expand uncontrollably; there are known cases of yearly inflation reaching levels of $10^{22} \%$ \cite{hanke_measurement_nodate}. The currency then no longer performs the function of a measure and a store of value, violating the definition of money itself \cite{stanley_jevons_money_1989}. \subsection{Crisis and stabilization} As discussed in \cref{sec:stable_and_bubble}, there are practical limits on the growth of the monetary base, and therefore the interest rate should not remain in near-zero territory indefinitely (there are some exceptions, even with negative interest rates \cite{altavilla_is_2022, blanchard_public_2019,blinder_revisiting_2012,heider_life_2019}, but they are beyond the scope of this work). Central banks are generally obliged to preserve the stable value of the currency and therefore maintain inflation at predefined acceptable levels \cite{levy_yeyati_monetary_2010}. Steering interest rate levels remains a key tool of central bank monetary policy implementation \cite{chenery_handbook_1988, smelser_international_2001}; therefore, interest rates are raised to bring inflation back to target levels. The lower bound on the interest rate offered by banks in the market is the rate of return on their deposits placed in central bank accounts. This value, called the deposit rate, is established by the central bank \cite{bernanke_new_2020}. Other limitations come from the Basel accords, which impose capital and liquidity requirements, as well as a permissible leverage ratio \cite{grundke_impact_2020, basel_comittee_on_banking_supervision_basel_nodate}.
All this leads to the conclusion that there are practical restrictions on the decrease of the interest rate, reflected in the adopted approach by a minimum value, $0.123,$ below which banks will not allow the interest rate to fall. The situation discussed here is presented in \cref{fig:cycle_crash}. \begin{figure} \caption{Crisis phase of the cycle with $ \beta = 1.6, \ \mu = \nu = 0.499, \ k = 105.5, \ l = 0.0096$ and decreasing $\alpha.$ Left column: the top chart presents the severity of the potential contraction of the system, expressed in terms of the value of the critical direction distance for $\beta$. The bottom chart presents the overall system phase steadiness, expressed in terms of the critical direction distance $\delta \Delta_{crit}$.} \label{fig:cycle_crash} \end{figure} When the interest rate approaches the limit, banks become fearful. To remain safe, they increase the sensitivity to the number of defaults, $\beta$, in order to move the interest rate away from the limit level. Because $\beta$ increases and $ a = \alpha \mu + \beta \nu,$ the market remains in the unstable regime $a>1$. But to avoid danger, banks raise $\beta$ to such a level that the distance from the minimum interest rate increases as well. This increase, in conjunction with the unstable regime $a>1$, implies a crash. For the transition between bubble and crash to occur, the critical direction distance \begin{equation} \delta \beta_{crit} = \frac{\ln i_t - \alpha \mu ( \ln i_t - \ln k )}{\nu (\ln i_t - \ln l )} - \beta \end{equation} must fall below $0$.
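The transition condition can be illustrated numerically with the text's $\mu$, $\nu$, $k$, $l$; the values of $\alpha$, $i_t$, and the raised $\beta$ below are illustrative assumptions:

```python
import math

# Critical direction distance for beta (formula above).
mu = nu = 0.499
k, l = 105.5, 0.0096   # scale parameters from the text

def delta_beta_crit(beta, alpha, i_t):
    num = math.log(i_t) - alpha * mu * (math.log(i_t) - math.log(k))
    den = nu * (math.log(i_t) - math.log(l))
    return num / den - beta

alpha, i_t = 1.1, 0.02                        # assumed bubble-phase values
assert delta_beta_crit(1.6, alpha, i_t) > 0   # distance still positive
assert delta_beta_crit(2.5, alpha, i_t) < 0   # raised beta: transition triggered
```

With these assumed values the zero crossing occurs near $\beta \approx 2.16$, so a sufficiently strong defensive increase of $\beta$ pushes $\delta\beta_{crit}$ below $0$.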
The risk measure $\delta \beta_{crit}$ is a linear function of $\alpha$ with slope $ \frac{ - \mu ( \ln i_t - \ln k )}{\nu (\ln i_t - \ln l )} $ and a hyperbolic function of the current value of the interest rate, $ i_t. $ It depends qualitatively on the values of the parameters $k$ and $l$, as they determine the signs of the respective parts of the equation above and hence the monotonicity of the dependencies between the exponents of the model. In the setup of the chosen parameters, this consequence of the model formula reflects an important feature of reality: the faster the bubble grows (the greater $\alpha$), the faster and more severe the contraction, because $a = \alpha \mu + \beta \nu$ describes the velocity of divergence. The transition to the crash regime results in a sharp increase in the volume of defaults and a significant contraction in the volume of loans. The crunch of the loan market amplifies the panic, causing banks to further reduce lending by decreasing the value of $\alpha,$ which ultimately falls to its initial level. The distance to stability $\delta \Delta_{crit}$ starts to decrease in absolute terms, consequently driving the system closer to the stable state. Meanwhile, the central bank has time to intervene and calm down the situation. After bringing the sensitivity $\alpha$ back to territories known from the stable regime, banks start to reduce the shocked parameter $\beta.$ Finally, $\delta \Delta_{crit}$ passes through $0,$ the parameters return to their initial levels, and the system returns to the stable state. The first phase of stabilization is a recessionary period with the level of the interest rate elevated by the crisis (\cref{fig:cycle_recovery}).
\begin{figure} \caption{Recovery phase of the cycle with $ \alpha = \beta = 1, \ \mu = \nu = 0.499, \ k = 105.5, \ l = 0.0096, \ i_0 = 0.042, \ i_{fix} \approx 0.0419$.} \label{fig:cycle_recovery} \end{figure} The interest rate steadily decreases towards the fixed point $0.0419$ and after some time enters the area of beneficial levels, which stimulates the growth of the economy. Further in time, the pace of change reduces significantly and the value of the interest rate slowly settles at the fixed point. The cycle is closed. \subsection{Further considerations regarding the credit cycle description} It is worth noting that the cycle was explained only by the dynamics of the financial system, without referring to external shocks, interpreted as changes in the $\mu, \ \nu$ parameters and often utilised as a ``divine intervention'' bringing crisis to the market. This is important because in reality a crisis can also be caused by the malfunctioning of the financial system itself \cite{krishnamurthy_amplification_2010}, and modeling this phenomenon can help to avoid a crisis through proper management of the system. Naturally, external shocks expressed in the dynamics of $\mu$ and $\nu$ can also be modeled by the analyzed system \cref{eq:my_model}, providing insight into potential strategies to mitigate the crisis. The evolution of the system presented above is only one of the many possible approaches to the description of the credit cycle in the unified MWE model with dynamic parameters. It is worth pointing out that it is also possible to generate the transition from a bubble to a crisis only by increasing $\alpha,$ under some parameterizations $\Lambda.$ Therefore, the analyzed model is also capable of explaining the situation where the bubble collapses because of ``excess optimism'', without setting any effective constraints on the value of the interest rate (\cref{fig:c_bubble_crash_alpha}).
\begin{figure} \caption{Transition from bubble to crash by constantly increasing $\alpha$, starting from $\alpha = 1.007,$ with $ \beta = 1, \ \mu = \nu = 0.499, \ k = 10^{-12}$.} \label{fig:bubble_crash_alpha} \label{fig:c_bubble_crash_alpha} \end{figure} The parameters $\alpha$ and $\beta$ exhibit significantly different behaviors under the parameterization $\Lambda$ proposed in \cref{sec:stable_and_bubble}. They are ``countermonotonic'' in the sense that optimism increases in $\alpha$ and decreases in $\beta.$ On the other hand, the instability increases in both $\alpha$ and $\beta$ (as $a = \alpha \mu + \beta \nu$ describes the instability and the pace of evolution). These characteristics can lead to very diverse consequences near the transcritical bifurcation point $a=1,$ as various approaches to increasing or decreasing optimism (by modifying $\alpha$ or modifying $\beta$) can bring very different results. In the neighbourhood of the transcritical bifurcation point, a more optimistic (lower) $\beta$ can help escape the unstable regime ($a<1$), although it is rather hard to expect banks to bring $\beta$ to significantly lower levels, as they should closely watch the number of defaults and express their optimism mainly through the sensitivity to the increasing amount of money in the system ($\alpha$). On the other hand, more optimism expressed in $\alpha$ drives the system into more unstable territory. In the unstable regime, a more pessimistic but careful choice of parameters (increase $\beta,$ decrease $\alpha$ such that $a<1$) can cause a greater contraction of the interest rate in one step (in comparison to the situation when $\alpha$ is not modified), but can also make it possible to avoid a longer-lasting crisis or recession regime. The different possible outcomes of similar bank policies increase the uncertainty regarding the future dynamics of the system and hinder decision-making, especially considering that there are many banks in the market and none of them knows the others' strategies.
This situation is very similar to the coordination problem in bank run models \cite{diamond_bank_1983, kiss_preventing_2022}. All the described effects can amplify uncertainty and deepen panic. There is also a possibility that banks react to some minor turbulence by increasing $\alpha$ even more, hoping that more money pumped into the system will help avoid the crisis (which can be true). Nevertheless, the continuation of that strategy can lead to a situation where growth is artificially fueled by a rapidly increasing amount of money in the system and at some point is no longer sustainable. Even if a central bank and regulations allow it, market participants will eventually switch to a stable currency to protect the value of their property \cite{bumin_predicting_2023, rochon_dollarization_2003}. Postponing the crisis can make it more severe \cite{levy_microscopic_1995, solomon_minsky_2013}. \section{Summary} \label{sec:summary} In this paper, the Marshall-Walras equilibrium approach was unified with the power-law dynamics of the credit market to provide a comprehensive model of the credit cycle, describing all variables of interest and the relations between them at each point in time. The model was enhanced to be Markovian, thereby eliminating the dependence on the arbitrary choice of the initial moment $t=0$. A detailed mathematical analysis of the unified model showed that it describes three very different economic regimes: the stable state, the bubble, and the crisis, depending on the values of the model parameters. On the basis of these results, measures of systemic risk were constructed as distances to the critical values of the parameters at which the transition between different model regimes occurs. The developed theory was applied to generate an interest rate evolution with features characteristic of a full credit cycle.
For this purpose, the dynamics of the parameters were built into the model, together with an economic interpretation of the causal relationships between them. The mathematical consequences of the model and their correspondence to real-world phenomena were analyzed in detail. The result of this article is a unified framework for modeling credit cycles with systemic risk assessment. Our model describes various states of the market and the transitions between them. Its relative mathematical simplicity means that the model can easily be operated on and incorporated into analyses and decision-making processes performed with the use of classical financial models. It also ensures that the model is clear and tractable, and therefore has a natural economic interpretation of its parameters and provides explanatory power regarding the causes and effects of key market events. The standalone risk measures constructed in the article can provide information on the current market regime and possible dangers related to transitions to bubbles or crises. For example, our model can provide indicators of the presence of a stable market situation with small volatility and help predict potential regime switches to unstable periods with greater variance. On the basis of this information, a financial instrument valuation or an investment strategy can be modified to prepare for a potential crisis. Other potential topics for future research include extending the model with inflation, or with several banking entities whose interactions are derived from empirical data. Research on the topic of systemic risk and credit cycles should be continued despite all the challenges connected with modeling very rare events. Complex systems, positive feedback loops, and critical points are promising tools for modeling the area of interest.
Therefore, it is critical to enhance our knowledge of these phenomena, since financial systems constitute the basis of modern economies, governments, and societies, all of which can struggle to function properly without the foundation of a stable and secure financial system \cite{cibils_argentina_nodate}. \end{document}
\begin{document} \title[Groups of PL-homeomorphisms admitting invariant characters] {Groups of PL-homeomorphisms admitting \\non-trivial invariant characters} \author[D. L. Gon\c{c}alves] {Daciberg L. Gon\c{c}alves} \address{Departamento de Matem\'atica - IME, Universidade de S\~ao Paulo\\ Caixa Postal 66.281 - CEP 05314-970, S\~ao Paulo - SP, Brasil} \email{[email protected]} \author[P. Sankaran]{Parameswaran Sankaran} \address{The Institute of Mathematical Sciences, CIT Campus, Taramani, Chennai 600113, India} \email{[email protected]} \author[R. Strebel]{Ralph Strebel} \address{D\'{e}partement de Math\'{e}matiques, Chemin du Mus\'{e}e 23, Universit\'{e} de Fribourg, 1700 Fribourg, Switzerland} \email{[email protected]} \subjclass[2010]{20E45, 20E36, 20F28.\\ Keywords and phrases: Groups of PL-homeomorphisms of the real line, Bieri-Neumann-Strebel invariant, twisted conjugacy classes.} \begin{abstract} We show that several classes of groups $G$ of PL-homeomorphisms of the real line admit non-trivial homomorphisms $\chi \colon G \to \mathbb{R}$ that are fixed by every automorphism of $G$. The classes enjoying the stated property include the generalizations of Thompson's group $F$ studied by K. S. Brown, M. Stein, S. Cleary and Bieri-Strebel in \cite{Bro87a, Ste92, Cle95, Cle00, BiSt14}, but also the class of groups investigated by Bieri et al.\,in \cite[Theorem 8.1]{BNS}. It follows that every automorphism of a group in one of these classes has infinitely many associated twisted conjugacy classes. \end{abstract} \maketitle \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} \label{sec:Intro} This paper has its origin in two articles \cite{BFG08} and \cite{GoKo10} about twisted conjugacy classes of Thompson's group $F$. In order to describe the aim of the cited papers, we recall some terminology. Let $G$ be a group and $\alpha$ an automorphism of $G$.
Then $\alpha$ gives rise to an action $\mu_\alpha \colon G \times |G| \to |G|$ of $G$ on its underlying set $|G|$, defined by \begin{equation} \label{eq:alpha-action} \mu_\alpha (g, x) = g \cdot x \cdot \alpha(g)^{-1}. \end{equation} The orbits of this action are called \emph{twisted conjugacy classes}, or \emph{Reidemeister} classes, of $\alpha$. The twisted conjugacy classes of the identity automorphism, for instance, are nothing but the conjugacy classes. Two questions now arise: firstly, whether a given automorphism $\alpha$ has infinitely many orbits and, secondly, whether every automorphism of $G$ has infinitely many orbits. As the latter property will be central to this paper, we recall the definition of property $R_\infty$: \begin{definition} \label{definition:Property-Rinfty} A group $G$ is said to have \emph{property} $R_\infty$ if the action $\mu_\alpha$ has infinitely many orbits for every automorphism $\alpha \colon G \iso G$. \end{definition} The problem of determining whether a given group, or a class of groups, satisfies property $R_\infty$ has attracted the attention of several researchers. The problem is rendered particularly interesting by the fact that there does not exist a uniform method of solution. Indeed, a variety of techniques and ad hoc arguments from several branches of mathematics have been used to tackle the problem, notably combinatorial group theory in \cite{GoWo09}, geometric group theory in \cite{LeLu00}, $C^*$-algebras in \cite{FeTr12} and algebraic geometry in \cite{MuSa14b}. In \cite{BFG08} Thompson's group $F$ is shown to enjoy property $R_\infty$, while \cite{GoKo10} establishes that property not only for Thompson's group $F$, but also for many other groups $G$ having the peculiarity that the complement of their BNS-invariant $\Sigma^1(G)$ is made up of finitely many rank 1 points. In this paper, we generalize both approaches and prove in this way that many classes of groups of PL-homeomorphisms have property $R_\infty$.
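For a small finite example one can compute the orbits of \eqref{eq:alpha-action} directly. The sketch below (an illustration added for this exposition, not part of the paper's argument) takes $G = \mathbb{Z}/6$ written additively, where the twisted action becomes $\mu_\alpha(g, x) = x + g - \alpha(g)$: the identity automorphism yields the six ordinary conjugacy classes, while the inversion automorphism $g \mapsto -g$ yields only two Reidemeister classes.

```python
# Illustration only: twisted conjugacy (Reidemeister) classes in the
# additive group Z/n, where mu_alpha(g, x) = x + g - alpha(g).

def reidemeister_classes(n, alpha):
    """Partition Z/n into orbits of the twisted action of alpha."""
    unseen = set(range(n))
    classes = []
    while unseen:
        x = min(unseen)
        orbit = {(x + g - alpha(g)) % n for g in range(n)}
        classes.append(sorted(orbit))
        unseen -= orbit
    return classes

# Identity automorphism: orbits are the ordinary conjugacy classes,
# i.e. singletons in an abelian group, so Z/6 has six of them.
identity_classes = reidemeister_classes(6, lambda g: g)

# Inversion g -> -g: the orbit of x is x + 2*(Z/6), giving two classes.
inversion_classes = reidemeister_classes(6, lambda g: (-g) % 6)
```

Since every automorphism of a finite group has finitely many orbits, finite groups never have property $R_\infty$; the sketch only makes the definition of the orbits concrete.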
\subsection{A useful fact} \label{ssec:Crucial-fact} The papers by C. Bleak et al. and by Gonçalves-Kochloukova both exploit the following observation: let $\alpha$ be an automorphism of a group $G$, let $\psi \colon G \to B$ be a homomorphism into an \emph{abelian} group and assume $\psi$ is fixed by $\alpha$. Then $\psi$ is constant on twisted conjugacy classes of $\alpha$; indeed, if the elements $x$ and $y$ lie in the same twisted conjugacy class there exists $z \in G$ so that $y = z \cdot x \cdot \alpha(z)^{-1}$; the computation \begin{equation*} \psi(y) = \psi(z \cdot x \cdot \alpha(z)^{-1})= \psi(x ) \cdot \psi(z) \cdot \left((\psi \circ \alpha)(z)\right) ^{-1} = \psi(x) \end{equation*} then establishes the claim. \emph{A group $G$ has therefore property $R_\infty$ if it admits a homomorphism onto an \emph{infinite, abelian} group} that is fixed by every automorphism of $G$. \subsection{Approach used by C. Bleak et al.} \label{ssec:Approach-in-BFG08} In \cite{BFG08} the authors establish that Thompson's group $F$ has property $R_\infty$ by using the mentioned fact. To find the homomorphism $\psi$, they use a representation of $F$ by piecewise linear homeomorphisms of the real line: $F$ is isomorphic to the group of all piecewise linear homeomorphisms $f$ with supports in the unit interval $I = [0,1]$, slopes that are powers of 2, and \emph{break points}, i.e., points where the left and right derivatives differ, in the group $\mathbb{Z}[1/2]$ of dyadic rationals; see, e.g., \cite[p.\,216, §\,1]{CFP96}.
This representation affords two homomorphisms $\sigma_\ell$ and $\sigma_r$, given by the right derivative at the \emph{left} end point $0$ and the left derivative at the \emph{right} end point $1$ of $I$, respectively; in formulae \begin{equation} \label{eq:definition-sigma-ell-sigma-r} \left\{ \begin{aligned} \sigma_\ell (f) &= \lim\nolimits_{\,t \searrow 0} f'(t),\\ \sigma_r(f) &= \lim\nolimits_{\,t \nearrow 1} f'(t). \end{aligned} \right. \end{equation} The images of $\sigma_\ell$ and $\sigma_r$ are both equal to $\gp(2)$, the (multiplicative) cyclic group generated by the natural number $2$. Theorem 3.3, the main result of \cite{BFG08}, can be rephrased by saying that the homomorphism \[ \psi \colon F \to \gp(2), \quad f \mapsto \sigma_\ell(f) \cdot \sigma_r(f) \] is fixed by every automorphism $\alpha$ of $F$. Its proof uses the very detailed information about $\Aut F$ established by M. Brin in \cite{Bri96}. \subsection{A generalization} \label{ssec:Generalization-approach-BFG08} The stated description of Thompson's group $F$ invites one to introduce generalized groups of type $F$ in the following manner. \begin{definition} \label{definition:G(I;A,P)} Let $I \subseteq \mathbb{R}$ be a closed interval, $P$ be a subgroup of the multiplicative group of positive reals $\mathbb{R}^\times_{>0}$, and $A$ be a subgroup of the additive group $\mathbb{R}_{\add}$ of the reals that is stable under multiplication by $P$. Let $G(I;A,P)$ denote the subset of $\PL_o(\mathbb{R})$ made up of all PL-homeomorphisms $g$ satisfying the conditions: \begin{enumerate}[a)] \item the support $\supp g = \{t \in \mathbb{R} \mid g(t) \neq t \}$ of $g$ is contained in $I$, \item the slopes of the finitely many line segments forming the graph of $g$ lie in $P$, \item the break points of $g$ lie in $A$, and \item $g$ maps $A$ onto $A$.
\end{enumerate} \end{definition} \begin{remarks} \label{remarks:G(I;A,P)} a) The subset $G(I;A,P)$ is closed under composition \footnote{In this article we use \emph{left} actions and the composition of functions familiar to analysts; thus $g_2 \circ g_1$ denotes the function $t \mapsto g_2(g_1(t))$ and $\act{g_1}{-0.5}{g_2}$ the homeomorphism $g_1 \circ g_2 \circ g_1^{-1}$.} and inversion. The set $G(I;A,P)$ equipped with these operations is a group; by abuse of notation, it will also be denoted by $G(I;A,P)$. b) We shall always require that neither $P$ nor $A$ be reduced to the neutral element. These requirements imply that $A$ contains arbitrarily small positive elements and thus $A$ is a dense subgroup of $\mathbb{R}$. As concerns the interval $I$, we shall restrict attention to three types of intervals: compact intervals with endpoints 0 and $b \in A_{>0}$, the half line $[0, \infty[$ and the line $\mathbb{R}$; we refer the reader to \cite[Sections 2.4 and 16.4]{BiSt14} for a discussion of the groups associated to other intervals. c) The idea of introducing and studying the groups $G(I;A,P)$ goes back to the papers \cite{BrSq85} and \cite{BiSt85}. \end{remarks} \subsubsection{The homomorphisms $\sigma_\ell$, $\sigma_r$ and $\psi$} \label{sssec:Homomorphisms-sigma-ell-etc} The definitions of $\sigma_\ell$ and $\sigma_r$, given in formula \eqref{eq:definition-sigma-ell-sigma-r}, admit straightforward extensions to the groups $G(I;A,P)$; note, however, that in the case of the half line $[0, \infty[$, the number $\sigma_r(f)$ will denote the slope of $f$ near $+\infty$, and similarly for $I = \mathbb{R}$ and $\sigma_\ell$, $\sigma_r$.
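To make the maps $\sigma_\ell$ and $\sigma_r$ concrete, here is a small computational sketch (an illustration added for this exposition, not part of the paper's argument). It encodes a PL-homeomorphism of $[0,1]$ by its breakpoints, reads off $\sigma_\ell$ and $\sigma_r$ as the slopes of the first and last segments, and checks on the standard generator $x_0$ of Thompson's group $F$ that $\sigma_\ell$ is multiplicative, i.e. $\sigma_\ell(f \circ g) = \sigma_\ell(f)\,\sigma_\ell(g)$.

```python
# Illustration: sigma_l and sigma_r for PL-homeomorphisms of [0, 1],
# encoded as lists of breakpoints (x, y) with endpoints included.
from bisect import bisect_right

def evaluate(points, t):
    """Piecewise-linear interpolation through the given breakpoints."""
    xs = [x for x, _ in points]
    i = min(max(bisect_right(xs, t) - 1, 0), len(points) - 2)
    (x0, y0), (x1, y1) = points[i], points[i + 1]
    return y0 + (y1 - y0) * (t - x0) / (x1 - x0)

def sigma_l(points):
    (x0, y0), (x1, y1) = points[0], points[1]
    return (y1 - y0) / (x1 - x0)  # slope of the first segment

def sigma_r(points):
    (x0, y0), (x1, y1) = points[-2], points[-1]
    return (y1 - y0) / (x1 - x0)  # slope of the last segment

# The standard generator x_0 of Thompson's group F (slopes 1/2, 1, 2,
# dyadic breakpoints): sigma_l(x_0) = 1/2 and sigma_r(x_0) = 2.
X0 = [(0, 0), (0.5, 0.25), (0.75, 0.5), (1, 1)]

# sigma_l is multiplicative: the slope of x_0 o x_0 at 0 is (1/2)*(1/2),
# computed here by evaluating the composite inside its first segment.
h = 1.0 / 16
slope_composite_at_0 = evaluate(X0, evaluate(X0, h)) / h
```

In particular $\psi(x_0) = \sigma_\ell(x_0)\,\sigma_r(x_0) = 1$, so $\psi$ is far from injective; the point of the paper is only that it is non-trivial and automorphism-invariant.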
The homomorphisms $\sigma_\ell$ and $\sigma_r$ allow one then to introduce an analogue of $\psi \colon F \to \gp(2)$, namely \begin{equation} \label{eq:Definition-psi} \psi \colon G = G(I;A,P) \to P, \quad g \mapsto \sigma_\ell(g) \cdot \sigma_r(g). \end{equation} There remains the question whether this homomorphism $\psi$ is fixed by every automorphism of $G$. In the case of Thompson's group $F$ the question has been answered in the affirmative by exploiting the detailed information about $\Aut F$ obtained by M. Brin in \cite{Bri96}. Such a detailed description is not to be expected for every group of the form $G(I;A,P)$; indeed, the results in \cite{BrGu98} show that the structure of the automorphism group gets considerably more involved if one passes from the group $G([0,1];\mathbb{Z}[1/2], \gp(2))$, the group isomorphic to $F$, to the groups $G([0,1];\mathbb{Z}[1/n], \gp(n))$ with $n$ an integer greater than 2. \subsubsection{The first main results} \label{sssec:First-main-result} It turns out that one does not need very detailed information about $\Aut G(I;A,P)$ in order to construct a non-trivial homomorphism $\psi \colon G(I;A,P) \to \mathbb{R}^\times_{>0}$ that is fixed by every automorphism of the group $G(I;A,P)$; it suffices to go back to the findings in the memoir \cite{BiSt85} and to supplement them by some auxiliary results based upon them. \footnote{The memoir \cite{BiSt85} has recently been reedited by R. Strebel and published in the repository of electronic preprints \emph{arXiv}; see \cite{BiSt14} for the precise reference.} A first outcome is \begin{theorem} \label{thm:Generalization-of-BFG08} Assume the interval $I$, the group of slopes $P$ and the $\mathbb{Z}[P]$-module $A$ are as in Definition \ref{definition:G(I;A,P)} and in Remark \ref{remarks:G(I;A,P)}b. Then there exists an epimorphism $\psi \colon G(I;A,P) \twoheadrightarrow P$ that is fixed by every automorphism of $G$.
The group $G(I;A,P) $ has therefore property $R_\infty$. \end{theorem} \begin{remark} \label{remark:Subgroup-B} Let $I$, $A$ and $P$ be as before and let $B = B(I;A,P)$ be the subgroup of $G(I;A,P)$ made up of all elements $g$ that are the identity near the endpoints. Then $B$ is a characteristic subgroup of $G(I;A,P)$ and variations of Theorem \ref{thm:Generalization-of-BFG08} hold for many subgroups $G$ of $G(I;A,P)$ with $B \subset G$; for details, see Theorems \ref{thm:Existence-psi-I-compact-all-autos}, \ref{thm:Existence-psi-I-halfline-image-rho-non-abelian} and \ref{thm:Existence-psi-I-line-image-rho-non-abelian}. \end{remark} \subsection{Route taken by Gonçalves-Kochloukova in \cite{GoKo10} } \label{ssec:Approach-in-GoKo10} The proof of Theorem \ref{thm:Generalization-of-BFG08} does not exploit information about $\Aut G(I;A,P)$ that is as precise as that going into the proof of the main result of \cite{BFG08}. It uses, however, non-trivial features of the automorphisms of $G(I;A,P)$. In \cite{GoKo10} Gonçalves and Kochloukova put forward the novel idea of replacing detailed information about $\Aut G$ by information about the form of the BNS-invariant of the group $G$; they carry out this program for the generalized Thompson group $F_{n,0}$ with $n \geq 2$, a group isomorphic to $G([0,1];\mathbb{Z}[1/n], \gp(n))$, and for many other groups. In a nutshell, their idea is this. Suppose $G$ is a finitely generated group for which the complement of $\Sigma^1(G)$ is \emph{finite}. \footnote{Recall that $\Sigma^1(G)$ is a certain subset of the space of all half lines $\mathbb{R}_{>0} \cdot \chi$ emanating from the origin of the real vector space $\Hom(G, \mathbb{R})$, and that $\Aut(G)$ acts canonically on this subset, as well as on its complement.} Then every automorphism of $G$ permutes the finitely many rays in $\Sigma^1(G)^c$.
This suggests that it might be possible to construct a new ray $\mathbb{R}_{>0}\cdot \chi_0$ that is fixed by $\Aut G$. If one succeeds in doing so, then $\mathbb{R} \cdot \chi_0$ will be a 1-dimensional sub-representation of the finite dimensional real vector space $\Hom(G, \mathbb{R})$, acted on by $\Aut G$ via \[ (\alpha, \chi) \mapsto \chi \circ \alpha^{-1}. \] \emph{A priori}, this invariant line need not be fixed pointwise. Gonçalves and Kochloukova observed that the line $\mathbb{R} \cdot \chi_0$ is fixed pointwise by $\Aut G$ if the homomorphism $\chi_0\colon G \to \mathbb{R}$ has \emph{rank} 1, i.e., if its image is infinite cyclic. Using this fact they were then able to prove that Thompson's group $F$, but also many other groups $G$, admit a rank 1 homomorphism that is fixed by $\Aut G$ and thus satisfy property $R_\infty$. \subsection{A generalization} \label{ssec:Generalization-approach-GoKo10} In the second part of this paper we generalize the approach of Gon\c{c}alves and Kochloukova to PL-homeomorphism groups $G$ for which $\Sigma^1(G)^c$ may contain points of rank greater than $1$. In pursuing this goal, one runs into the following difficulty. Suppose $\mathbb{R}_{>0} \cdot \chi_0$ is a ray that is fixed by $\Aut G$ as a \emph{set}. There may then exist an automorphism $\alpha$ which acts on the ray by multiplication by a positive real number $s \neq 1$; if so, the 1-dimensional subspace $\mathbb{R} \cdot \chi_0$ in the real vector space $\Hom(G,\mathbb{R})$ is an eigenline with eigenvalue $s \neq 1$ of the linear transformation $\alpha^*$ induced by $\alpha$ on $\Hom(G,\mathbb{R})$.
The existence of such an eigenvalue $s \neq 1$ would ruin our plan, but, as we shall see, it can be ruled out if the image of the character $\chi_0$ has only $1$ and $-1$ as units, where the group of units is defined as follows: \begin{definition} \label{definition:Group-of-units} Given a subgroup $B$ of the additive group $\mathbb{R}_{\add}$ we set \begin{equation} \label{eq:Units-B} U(B) = \{s \in \mathbb{R}^\times \mid s \cdot B = B \} \end{equation} and call $U(B)$ the \emph{group of units} of $B$ (inside the multiplicative group of $\mathbb{R}$). \end{definition} We shall establish the following \begin{theorem} \label{thm:Generalization-GoKo10} Suppose $G$ is a subgroup of $\PL_o(I)$ that satisfies the following conditions: \begin{enumerate}[(i)] \item no interior point of the interval $I = [0, b]$ is fixed by $G$; \item the characters $\chi_\ell$ and $\chi_r$ are both non-zero; \item the quotient group $G/(\ker \chi_\ell \cdot \ker \chi_r )$ is a torsion group, and \item at least one of the groups $U(\im \chi_\ell)$ and $U(\im \chi_r)$ is reduced to $\{1, -1\}$. \end{enumerate} Then there exists a non-zero homomorphism $\psi \colon G \to \mathbb{R}^\times_{>0}$ that is fixed by every automorphism of $G$. The group $G$ has therefore property $R_\infty$. \end{theorem} There remains the problem of finding subgroups $B \subset \mathbb{R}_{\add}$ that have only trivial units. This problem is addressed in section \ref{ssec:Group of units}. We shall show that a subgroup $B = \ln P$ has this property if the multiplicative group $P \subset \mathbb{R}^\times_{>0}$ is free abelian and generated by algebraic numbers. In addition, we shall construct in section \ref{ssec:Uncountable-class} a collection $\GG$ of pairwise non-isomorphic 3-generator groups $G_s$ enjoying the properties that each group $G_s$ satisfies the assumptions of Theorem \ref{thm:Generalization-GoKo10} and that the cardinality of $\GG$ is that of the continuum.
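For orientation, two instances that follow directly from Definition \ref{definition:Group-of-units}: if $s \cdot B = B$ and $1 \in B$, then both $s = s \cdot 1$ and $s^{-1}$ must lie in $B$, which gives

```latex
\[
U(\mathbb{Z}) = \{1,-1\},
\qquad
U\bigl(\mathbb{Z}[1/2]\bigr) = \{\pm 2^{k} \mid k \in \mathbb{Z}\}.
\]
```

More generally, every infinite cyclic subgroup $B \subset \mathbb{R}_{\add}$ has $U(B) = \{1,-1\}$, which is precisely the situation exploited by the rank 1 characters of section \ref{ssec:Approach-in-GoKo10}.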
\subsection*{Acknowledgments} The first author has been partially supported by the Fapesp project ``Topologia Alg\'ebrica, Geom\'etrica e Diferencial-2012/24454-8''. The second author acknowledges financial support from the Department of Atomic Energy, Government of India, under a XII plan project. The present work was initiated during the visit of the second author to the University of S\~ao Paulo in August 2012. He thanks the first author for the invitation and the warm hospitality. The third author thanks M. Brin and M. Zaremsky for numerous, helpful discussions. \section{Preliminaries on automorphisms of the groups $G(I;A,P)$} \label{sec:Preliminaries-autos-G(I;A,P)} The groups $G(I;A, P)$ form a class of subgroups of the group $\PL_o(\mathbb{R})$, the group of all orientation preserving, piecewise linear homeomorphisms of the real line. They enjoy some special properties, in particular the following two: each group acts approximately \footnote{See \cite[Chapter A]{BiSt14} for details.} highly transitively on the interior of $I$ and all its automorphisms are induced by conjugation by homeomorphisms. It is above all this second property that will be exploited in the sequel. In this section, we recall the basic representation theorem for automorphisms of the groups $G(I;A,P)$ and then deduce some consequences. \subsection{Representation of isomorphisms} \label{ssec:Representation-automorphisms-G(I;A,P)} We begin by fixing the set-up of this section: $P$ is a non-trivial subgroup of $\mathbb{R}^\times_{>0}$ and $A$ a non-zero subgroup of $\mathbb{R}_{\add}$ that is stable under multiplication by $P$. Next, $I$ is a closed interval of positive length; we assume the left end point of $I$ is in $A$ if $I$ is bounded from below and similarly for the right end point. \begin{remark} \label{remark:Distinct-intervals} Distinct intervals $I_1$, $I_2$ may give rise to isomorphic groups $G(I_1;A,P)$ and $G(I_2;A, P)$.
In particular, it is true that every group $G(I_1;A,P)$ is isomorphic to one whose interval $I_2$ has one of the following three forms \begin{equation} \label{eq:Types-of-intervals} [0,b] \text{ with } b \in A, \quad [0, \infty[ \quad \text{and}\quad \mathbb{R}. \end{equation} (see Sections 2.4 and 16.4 of \cite{BiSt14} for proofs). \end{remark} We come now to the announced result about isomorphisms of groups $G(I;A,P)$ and $G(\bar{I}; \bar{A}, \bar{P})$. It asserts that each isomorphism of the first group onto the second one is induced by conjugation by a homeomorphism of the interior $\Int(I)$ of $I$ onto the interior of $\bar{I}$. This claim holds even for suitably restricted subgroups of $G(I;A,P)$ and of $G(\bar{I};\bar{A}, \bar{P})$. In order to state the generalized assertion we need the subgroup of ``bounded elements''. \begin{definition} \label{definition:B} Let $B(I;A,P)$ be the subgroup of $G(I;A,P)$ consisting of all PL-homeomorphisms $f$ that are the identity near the end points or, more formally, that satisfy the inequalities $\inf I < \inf \supp f$ and $\sup \supp f < \sup I$. \end{definition} We are now in a position to state the representation theorem. \begin{theorem} \label{thm:TheoremE16.4} Assume $G$ is a subgroup of $G(I;A,P)$ that contains the derived subgroup of $B(I;A,P)$, and $\bar{G}$ is a subgroup of $G(\bar{I};\bar{A}, \bar{P})$ containing the derived group of $B(\bar{I};\bar{A},\bar{P})$. Then every isomorphism $\alpha \colon G \iso \bar{G}$ is induced by conjugation by a unique homeomorphism $\varphi_\alpha$ of the interior $\Int(I)$ of $I$ onto the interior of $\bar{I}$; more precisely, the equation \begin{equation} \label{eq;Inducing-alpha} \alpha(g) \restriction{\Int(\bar{I})} = \varphi_\alpha \circ (g \restriction{\Int(I)}) \circ \varphi_\alpha ^{-1} \end{equation} holds for every $g \in G$.
Moreover, $\varphi_\alpha$ maps $A \cap \Int(I)$ onto $\bar{A} \cap \Int(\bar{I})$. \end{theorem} \begin{proof} The result is a restatement of \cite[Theorem E16.4]{BiSt14}. \end{proof} \begin{remarks} \label{remarks:varphi} a) Theorem \ref{thm:TheoremE16.4} has two simple, but important consequences. First of all, every homeomorphism of intervals is either increasing or decreasing; since the homeomorphism $\varphi_\alpha$ inducing an isomorphism $\alpha \colon G \iso \bar{G}$ is uniquely determined by $\alpha$, there exist therefore two types of isomorphisms: the \emph{increasing} isomorphisms induced by conjugation by an increasing homeomorphism and the decreasing ones. Assume now that $\bar{I} = I$. If the homeomorphism $\varphi_\alpha \colon \Int(I) \iso \Int(I)$ is increasing it extends uniquely to a homeomorphism of $I$, but this may not be so if it is decreasing. Indeed, $\varphi_\alpha$ extends if $I$ is a compact interval or the real line, but not if $I$ is a half line. If the extension exists, it will be denoted by $\tilde{\varphi}_\alpha$. b) The increasing automorphisms of a group $G$ form a subgroup $\Aut_+ G $ of $\Aut G$ of index at most 2. It will turn out that it is often easier to find a non-zero homomorphism $\psi \colon G \to B$ that is fixed by the subgroup $\Aut_+ G$ than a non-zero homomorphism fixed by $\Aut G \smallsetminus \Aut_+ G$ (in case this set is non-empty). For this reason, it is useful to have criteria guaranteeing that $\Aut G = \Aut_+ G$. c) The derived group of $B(I;A,P)$ is a simple, infinite group (see \cite[Theorem C10.2]{BiSt14}), but $B(I;A,P)$ itself may not be perfect. To date, no characterization of the parameters $(I, A,P)$ corresponding to perfect groups $B(I;A,P)$ is known. The quotient group $G(I;A,P)/B(I;A,P)$, on the other hand, is a metabelian group that can be described explicitly in terms of the triple $(I, A, P)$ (see Sections 12 and 5.2 in \cite{BiSt14}).
In the sequel, we shall therefore restrict attention to subgroups $G$ containing $B(I;A,P)$. d) The second important consequence of Theorem \ref{thm:TheoremE16.4} is the fact that $B(I;A,P)$ is a characteristic subgroup of every subgroup $G$ with $B(I;A,P) \subseteq G \subseteq G(I;A,P)$ (the proof is easy; see \cite[Corollary E16.5]{BiSt14} or Corollary \ref{crl:alpha-and-ker-lambda} below). \end{remarks} In part a) of the previous remarks the term \emph{increasing isomorphism} has been introduced. In the sequel, this parlance will be used often and so we declare: \begin{definition} \label{definition:Increasing-isomorphism} Let $\alpha \colon G \iso \bar{G}$ be an isomorphism induced by the (uniquely determined) homeomorphism $\varphi_\alpha \colon \Int(I) \iso \Int(\bar{I})$. If $\varphi_\alpha$ is \emph{increasing} (respectively \emph{decreasing}) then $\alpha$ will be called increasing (respectively decreasing). \end{definition} \subsection{The homomorphisms $\lambda$ and $\rho$ } \label{ssec:lambda-rho} By Remark \ref{remarks:varphi}d the group $B = B(I;A,P)$ is a characteristic subgroup of every group $G$ containing it. Now $G$ has, in addition, subgroups containing $B$ that are invariant under the subgroup $\Aut_+ G$, namely the kernels of the homomorphisms $\lambda$ and $\rho$. To set these homomorphisms into perspective, we go back to the homomorphisms \[ \sigma_\ell \colon G(I;A,P) \to P \quad \text{and} \quad \sigma_r\colon G(I;A,P) \to P, \] introduced in section \ref{sssec:Homomorphisms-sigma-ell-etc}. Their images are abelian and coincide with the group of slopes $P$. If $I$ is not bounded from below, there exists a homomorphism $\lambda$, related to $\sigma_\ell$, whose image is contained in $\Aff(A,P)$, the group of all affine maps of $\mathbb{R}$ with slopes in $P$ and displacements $f(0) \in A$.
The definition of $\lambda$ is this: \begin{equation} \label{eq:Definition-lambda} \lambda \colon G(I;A,P) \to \Aff(A,P), \quad g \mapsto \text{affine map coinciding with $g$ near $-\infty$}. \end{equation} If the interval $I$ is not bounded from above, there exists a similarly defined homomorphism $\rho \colon G(I;A,P) \to \Aff(A,P)$, given by \begin{equation} \label{eq:Definition-rho} \rho(g) =\text{ affine map coinciding with $g$ near $+\infty$}. \end{equation} The images of $\lambda$ and $\rho$ are, in general, smaller than $\Aff(A,P)$. They are equal to the entire group $\Aff(A,P)$ if $I = \mathbb{R}$; if $I$ is not bounded from below, but bounded from above, the image of $\lambda$ is $\Aff(IP \cdot A, P)$ and the analogous statement holds for $\rho$. In the above, $IP \cdot A$ denotes the submodule of $A$ generated by the products $(p-1) \cdot a$ with $p \in P$ and $a \in A$ (see Section 4 and Corollary A5.3 in \cite{BiSt14}). For uniformity of notation, we extend the definition of $\lambda$ and $\rho$ to compact intervals: if $I = [0,b]$ and $g \in G(I;A,P)$ then $\lambda(g)$ is the linear map $t \mapsto \sigma_\ell(g) \cdot t$ and $\rho(g)$ is the affine map $ t \mapsto \sigma_r(g) \cdot (t-b) + b$. Similarly one defines $\lambda (g)$ if $I$ is the half line $[0, \infty[$. The homomorphisms $\lambda$ and $\rho$ allow one to restate the definition of $B(I;A,P)$; one has \begin{equation} \label{eq:Reexpressing-B} B(I;A,P) = \ker \lambda \cap \ker \rho. \end{equation} \begin{remark} \label{remark:Restrictions-lambda-rho} In the sequel, we shall often deal with subgroups, denoted $G$, of a group $G(I;A,P)$ that contain $B(I;A,P)$. For ease of notation, we shall then denote the restrictions of $\lambda$ and $\rho$ to $G$ again by $\lambda$ and $\rho$.
\end{remark} \subsection{First consequences of the representation theorem} \label{ssec:Representation-first-consequences} Let $G$ be a subgroup of $G(I;A,P)$ that contains the derived subgroup of $B(I;A,P)$ and let $\bar{G}$ be a subgroup of $G(\bar{I};\bar{A},\bar{P})$ containing the derived subgroup of $B(\bar{I};\bar{A},\bar{P})$. Suppose $\varphi_\alpha$ is a homeomorphism of $\Int(I)$ onto $\Int(\bar{I})$ that induces an isomorphism $\alpha \colon G \iso \bar{G}$. The map $\varphi_\alpha$ need not be piecewise linear. Theorem \ref{thm:TheoremE16.4}, however, has useful consequences even in such a case. One implication is recorded in \begin{crl} \label{crl:alpha-and-ker-lambda} Assume $G$ and $\bar{G}$ are subgroups of $G(I;A,P)$ both of which contain $B(I;A,P)$, and let $\lambda,\rho \colon G \to \Aff(A,P)$ and $\bar{\lambda},\bar{\rho} \colon \bar{G} \to \Aff(A,P)$ be the obvious restrictions of the homomorphisms $\lambda, \rho$ introduced in section \ref{ssec:lambda-rho}. Consider now an isomorphism $\alpha \colon G \iso \bar{G}$ that is induced by the homeomorphism $\varphi_\alpha \colon \Int(I) \iso \Int(I)$. If $\varphi_\alpha$ is increasing then \begin{enumerate}[(i)] \item $\alpha$ maps $\ker \lambda$ onto $\ker \bar{\lambda}$ and induces an isomorphism $\alpha_\ell$ of $G/\ker \lambda$ onto $\bar{G}/\ker \bar{\lambda}$; \item $\alpha$ maps $\ker \rho$ onto $\ker \bar{\rho}$ and induces an isomorphism $\alpha_r$ of $G/\ker \rho$ onto $\bar{G}/\ker \bar{\rho}$. \end{enumerate} \end{crl} \begin{proof} (i) If $g \in \ker \lambda$ then $g$ is the identity near $\inf I$. As $\varphi_\alpha$ is \emph{increasing}, the image $\alpha (g) = \varphi_\alpha \circ g \circ \varphi_\alpha^{-1}$ of $g$ is therefore also the identity near $\inf I$. It follows that $\alpha(\ker \lambda) \subseteq \ker \bar{\lambda}$.
This inclusion is actually an equality, for $\alpha^{-1}:\bar{G}\to G$ is an isomorphism and so $\alpha^{-1}(\ker \bar{\lambda})\subseteq \ker \lambda$. Claim (ii) can be proved similarly. \end{proof} \subsection{Automorphisms induced by finitary PL-homeomorphisms} \label{ssec:Autos-induced-by-PL-homeomorphisms} Let $G \subseteq G(I;A,P)$ be as before, and let $\alpha$ be an automorphism of $G$. According to Theorem \ref{thm:TheoremE16.4}, $\alpha$ is induced by conjugation by a unique auto-homeomorphism $\varphi_\alpha$. This auto-homeomorphism may not be piecewise linear, but the situation improves if $P$, the group of slopes, is not cyclic (and hence dense in $\mathbb{R}^\times_{>0}$): \begin{theorem} \label{thm:TheoremE17.1} Suppose $P$ is \emph{not cyclic}. For every automorphism $\alpha$ of $G$ there exists then a non-zero real number $s$ such that $A = s \cdot A$ and that the auto-homeomorphism $\varphi_\alpha \colon \Int(I) \iso \Int(I)$ is piecewise linear with slopes in the coset $s \cdot P$ of $P$. Moreover, $\varphi_\alpha$ maps the subset $A \cap \Int(I)$ onto itself and has only finitely many breakpoints in every compact subinterval of $\Int(I)$. \end{theorem} \begin{proof} The result is a special case of \cite[Theorem E17.1]{BiSt14}. \end{proof} Theorem \ref{thm:TheoremE17.1} indicates that automorphisms of groups with a non-cyclic group of slopes $P$ are easier to analyze than those of the groups with cyclic $P$. Note, however, that the conclusion of Theorem \ref{thm:TheoremE17.1} does not rule out that $\varphi_\alpha$ has infinitely many breakpoints which accumulate in one or both end points \footnote{The notion of end point is to be interpreted suitably if $I$ is not bounded.} and so $\varphi_\alpha$ may not be differentiable in the end points. In section \ref{ssec:Differentiability-criterion} we shall therefore be interested in differentiability criteria.
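As a small consistency check on the definition \eqref{eq:Definition-lambda} of $\lambda$ (again only an illustration with hypothetical maps, not part of the paper's argument), one can represent the germ of a PL-homeomorphism of $\mathbb{R}$ near $-\infty$ by the pair (slope, intercept) of its leftmost affine piece, and verify on an example that $\lambda$ is multiplicative: $\lambda(g_2 \circ g_1) = \lambda(g_2) \circ \lambda(g_1)$.

```python
# Illustration: lambda(g) = affine map coinciding with g near -infinity,
# represented as a pair (slope, intercept), i.e. t -> slope*t + intercept.

def compose_affine(f, g):
    """(f o g) for affine maps given as (slope, intercept) pairs."""
    a, b = f
    c, d = g
    return (a * c, a * d + b)

def left_germ(f, far_left=-50.0):
    """Recover the affine germ of f near -infinity by sampling two
    points far to the left of all breakpoints of f."""
    x1, x2 = far_left, far_left - 1.0
    slope = (f(x1) - f(x2)) / (x1 - x2)
    return (slope, f(x1) - slope * x1)

# Two hypothetical PL-homeomorphisms of R (breakpoints at -1 and 0).
def g1(t):
    return 2 * t + 1 if t <= -1 else t  # germ near -infinity: (2, 1)

def g2(t):
    return 3 * t if t <= 0 else t       # germ near -infinity: (3, 0)

# lambda(g2 o g1) should equal lambda(g2) o lambda(g1) = (6, 3).
germ_composite = left_germ(lambda t: g2(g1(t)))
germ_product = compose_affine(left_germ(g2), left_germ(g1))
```

The same bookkeeping with germs near $+\infty$ models $\rho$, and an element lies in $B(I;A,P) = \ker\lambda \cap \ker\rho$ exactly when both germs are the identity pair $(1, 0)$.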
\section{Characters fixed by $\Aut G([0,b];A,P)$} \label{sec:Homomorphisms-fixed-by-Aut-I-compact} In this section, we prove Theorem \ref{thm:Generalization-of-BFG08} for the case of a compact interval, together with various extensions of it. An important ingredient in the proofs of these results is a criterion that allows one to deduce that an auto-homeomorphism $\varphi_\alpha$ inducing an automorphism $\alpha$ of the group is differentiable in one or both of the end points of the interval. \subsection{A differentiability criterion} \label{ssec:Differentiability-criterion} The proof of the criterion is rather involved. Prior to stating the criterion and giving its proof, we therefore discuss a result that explains the interest in the criterion. \begin{prp} \label{prp:Importance-of-differentiability-I-compact} Let $G$ be a subgroup of $G([0,b];A,P)$ that contains the derived subgroup of $B([0,b];A,P)$. Suppose $\tilde{\varphi} \colon [0,b] \iso [0,b]$ is an auto-homeomorphism that induces, by conjugation, an automorphism $\alpha$ of $G$. Then the following statements hold: \begin{enumerate}[(i)] \item if $\tilde{\varphi}$ is increasing, differentiable in 0 and $\tilde{\varphi}'(0) > 0$ then $\alpha$ fixes $\sigma_\ell$; \item if $\tilde{\varphi}$ is increasing, differentiable in $b$ with $\tilde{\varphi}'(b) > 0$ then $\alpha$ fixes $\sigma_r$; \item if $\tilde{\varphi}$ is differentiable both in 0 and in $b$, with non-zero derivatives, then $\alpha$ fixes the homomorphism $\psi \colon g \mapsto \sigma_\ell (g) \cdot \sigma_r(g)$. \end{enumerate} \end{prp} \begin{proof} (i) and (ii) Suppose the extended auto-homeomorphism $\tilde{\varphi}= \tilde{\varphi}_\alpha$ is increasing and fix $g \in G$.
If $\tilde{\varphi}$ is differentiable in 0 and $\tilde{\varphi}'(0) > 0$, the chain rule justifies the following computation: \begin{equation} \label{eq:Calculation-alpha-increasing} \sigma_\ell(\alpha(g)) = \left(\tilde{\varphi} \circ g \circ \tilde{\varphi}^{-1}\right)'(0) = \tilde{\varphi}'(0) \cdot g'(0) \cdot (\tilde{\varphi}^{-1})'(0) = \sigma_\ell(g). \end{equation} It follows that $\sigma_\ell$ is fixed by $\alpha$. If $\tilde{\varphi}$ admits a left derivative in $b$ and $\tilde{\varphi}'(b) > 0$, one sees similarly that $\sigma_r$ is fixed by $\alpha$. \smallskip (iii) Assume now that $\tilde{\varphi}= \tilde{\varphi}_\alpha$ is differentiable, both in 0 and in $b$, and that both derivatives are different from 0. If $\tilde{\varphi}$ is \emph{increasing}, parts (i) and (ii) guarantee that $\sigma_\ell$ and $\sigma_r$ are fixed by $\alpha$, whence so is their product $\psi$. If, on the other hand, $\tilde{\varphi}$ is \emph{decreasing}, the calculation \begin{equation} \label{eq:Calculation-alpha-decreasing} \sigma_r(\alpha(g)) = \left(\tilde{\varphi} \circ g \circ \tilde{\varphi}^{-1}\right)'(b) = \tilde{\varphi}'(0) \cdot g'(0) \cdot (\tilde{\varphi}^{-1})'(b) = \sigma_\ell(g) \end{equation} holds for every $g \in G$ and establishes the relation $\sigma_r \circ \alpha = \sigma_\ell$. A similar calculation shows that the relation $\sigma_\ell \circ \alpha = \sigma_r$ is valid.
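The similar calculation can be spelled out as follows; it uses that the decreasing homeomorphism $\tilde{\varphi}$ interchanges the end points, so that $\tilde{\varphi}^{-1}(0) = b$ and $g(\tilde{\varphi}^{-1}(0)) = b$:
\begin{equation*}
\sigma_\ell(\alpha(g)) = \left(\tilde{\varphi} \circ g \circ \tilde{\varphi}^{-1}\right)'(0) = \tilde{\varphi}'(b) \cdot g'(b) \cdot (\tilde{\varphi}^{-1})'(0) = \tilde{\varphi}'(b) \cdot g'(b) \cdot \frac{1}{\tilde{\varphi}'(b)} = \sigma_r(g).
\end{equation*}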
The claim for $\psi$ is then a consequence of the following computation: \begin{align*} \left(\psi \circ \alpha\right) (g) &= \sigma_\ell(\alpha(g)) \cdot \sigma_r(\alpha(g)) = \sigma_r(g) \cdot \sigma_\ell(g) = \psi (g).\qedhere \end{align*} \end{proof} \subsubsection{Statement and proof of the criterion} \label{ssec:Differentiability-criterion-Statement-and-proof} We come now to the criterion; we choose a formulation that is slightly more general than what is needed for the case at hand; the extended version will be used in Section \ref{sec:Homomorphisms-fixed-by-Aut-I-half-line}. \begin{prp} \label{prp:phi-is-linear-I-compact} Suppose $I$ is an interval of one of the forms $[0, b]$ or $[0, \infty[$, and $G$ as well as $\bar{G}$ are subgroups of $G(I;A,P)$ that contain $B(I;A,P)$. Assume $\tilde{\varphi} \colon I \iso I$ is an \emph{increasing} auto-homeomorphism that induces, by conjugation, an isomorphism $\alpha$ of the group $G$ onto the group $\bar{G}$. If the image of $\sigma_\ell \colon G \to P$ is \emph{not cyclic}, then $\tilde{\varphi}$ is linear on a small interval of the form $[0,\delta]$ and so $\tilde{\varphi}$ is differentiable in 0 with positive derivative. \end{prp} \begin{proof} The following argument uses ideas from the proofs of Proposition E16.8 and Supplement E17.3 in \cite{BiSt14}. The proof will be divided into three parts. In the first one, we show that $\alpha \colon G \iso \bar{G}$ induces an isomorphism $\alpha_\ell \colon \im \sigma_\ell \iso \im \bar{\sigma}_\ell$ that takes $p \in \im \sigma_\ell$ to $p^r = \e^{r\cdot \log p}$ for some positive real number $r$ that does \emph{not} depend on $p$.
In the second part, we establish that $\tilde{\varphi}$ satisfies the relation \begin{equation} \label{eq:Represention-of-phi} \tilde{\varphi}(p \cdot t) = p^r \cdot \tilde{\varphi}(t) \end{equation} for every $p \in (\im \sigma_\ell \,\cap \,]0,1[\,)$ and $t$ varying in some small interval $[0, \delta]$. In the last part, we deduce from this relation that $\tilde{\varphi}$ is linear near 0. We embark now on the first part. Since $\tilde{\varphi}$ is increasing, Corollary \ref{crl:alpha-and-ker-lambda} applies and shows that $\alpha$ maps the kernel of $\sigma_\ell \colon G \to P$ onto the kernel of the homomorphism $\bar{\sigma}_\ell \colon \bar{G} \to P$, and thus induces an isomorphism $\alpha_\ell \colon \im \sigma_\ell \iso \im \bar{\sigma}_\ell$ that renders the square \begin{equation*} \xymatrix{G \ar@{->}[r]^-{\alpha} \ar@{->>}[d]^-{\sigma_\ell} & \bar{G} \ar@{->>}[d]^-{\bar{\sigma}_\ell}\\ \im \sigma_\ell \ar@{->}[r]^-{\alpha_\ell} &\im \bar{\sigma}_\ell} \end{equation*} \noindent commutative. We claim $\alpha_\ell$ maps the set $(\im \sigma_\ell) \, \cap \,]0, 1[$ onto $(\im \bar{\sigma}_\ell) \, \cap \,]0, 1[$. Indeed, let $p \in \im \sigma_\ell$ be a slope with $p < 1$ and let $f_p \in G$ be a preimage of $p$. Then $\alpha(f_p)$ is linear on some interval $[0, \varepsilon_p]$ and has there slope $\bar{\sigma}_\ell(\alpha(f_p)) = \alpha_\ell(p)$. Since $\tilde{\varphi}$ is continuous in 0, there exists $\delta_p > 0$ so that $f_p$ is linear on $[0, \delta_p]$ and that $\tilde{\varphi}([0,\delta_p]) \subseteq [0,\varepsilon_p]$. Fix $t \in [0, \delta_p]$.
The hypothesis that $\alpha$ is induced by conjugation by $\tilde{\varphi}$ then leads to the chain of equalities \begin{align} \label{eq:Representation-of-phi-1} \tilde{\varphi} (p \cdot t) &= (\tilde{\varphi}\circ f_p)(t) = (\alpha(f_p) \circ \tilde{\varphi})( t) = \alpha(f_p) (\tilde{\varphi}( t)) = \alpha_\ell(p) \cdot \tilde{\varphi}(t). \end{align} Since $\tilde{\varphi}$ is increasing and as $p < 1$, the chain of equalities implies that $\alpha_\ell(p) < 1$. It follows that $\alpha_\ell$ maps $(\im \sigma_\ell) \, \cap \, ]0,1[$ into $\im \bar{\sigma}_\ell \, \cap \, ]0,1[$ and then, by applying the preceding argument to $\tilde{\varphi}^{-1}$, that \[ \alpha_\ell \left(\im \sigma_\ell \, \cap \, ]0,1[\,\right) = \im \bar{\sigma}_\ell\, \cap \, ]0,1[\;. \] We show next that $\alpha_\ell(p) = p^r$ for all $p \in \im \sigma_\ell$ and some positive real number $r$. We begin by passing from the multiplicative subgroup $\im \sigma_\ell \subset \mathbb{R}^\times_{>0}$ to a subgroup of $\mathbb{R}_{\add}$; to that end, we introduce the homomorphism \[ L_0 = \ln \circ \, \alpha_\ell \circ \exp \colon \ln(\im \sigma_\ell) \iso \ln(\im \bar{\sigma}_\ell). \] The previous verification implies that $L_0$ is an order preserving isomorphism; by the assumption on $\im \sigma_\ell$ the domain of $L_0$ is a dense subgroup of $\mathbb{R}_{\add}$. It follows that $L_0$ extends uniquely to an order preserving automorphism $L \colon \mathbb{R}_{\add} \to \mathbb{R}_{\add}$. The homomorphism $L$ is continuous, hence linear, and so given by multiplication by some positive real number $r$. The isomorphism $\alpha_\ell$ has therefore the form \[ p \mapsto p^r = \exp(r \cdot \ln p ) \quad \text{with}\quad r > 0. \] We come now to the second part of the proof. Fix a slope $p_1 < 1$ in $\im \sigma_\ell$.
Formula \eqref{eq:Representation-of-phi-1} and the previously found formula for $\alpha_\ell$ then imply that there exists a small positive number $\delta_{p_1}$ such that the equation \begin{equation} \label{eq:Representation-of-phi-2} \tilde{\varphi}(p_1 \cdot t)= p_1^r \cdot \tilde{\varphi} (t) \end{equation} holds for every $t \in [0, \delta_{p_1}]$. Consider next another slope $p < 1$ in $\im \sigma_\ell$. There exists then, as before, a real number $\delta_p > 0$ so that $\tilde{\varphi}(p\cdot t) = p^r\cdot \tilde{\varphi}(t)$ for $t\in [0,\delta_p]$. Choose now $m\in \mathbb{N}$ so large that $p_1^m \cdot \delta_{p_1} \leq \delta_p$. The following chain of equalities then holds for each $t \in [0,\delta_{p_1}]$: \begin{equation*} p^{m\cdot r}_1 \cdot \tilde{\varphi}(p\cdot t) = \tilde{\varphi}(p_1^m\cdot p \cdot t) = \tilde{\varphi}( p \cdot p_1^m \cdot t) = p^r \cdot \tilde{\varphi}(p^m_1\cdot t) = p^r \cdot p^{m\cdot r}_1 \cdot \tilde{\varphi}(t). \end{equation*} The calculation shows that $\tilde{\varphi}(p\cdot t) = p^r\cdot \tilde{\varphi}(t)$ for every $t\in [0,\delta_{p_1}]$. Upon setting $\delta = \delta_{p_1}$ one arrives at formula \eqref{eq:Represention-of-phi}. The proof is now quickly completed. By assumption, $\im \sigma_\ell$ is not cyclic and so formula \eqref{eq:Represention-of-phi} holds for a dense set of slopes $p$ and a fixed argument $t$, say $t = \delta$. Since $\tilde{\varphi}$ is continuous and increasing, formula \eqref{eq:Represention-of-phi} with $t = \delta$ continues to hold for every real $x \in\; ]0, 1[$ in place of $p$. The formula \[ \tilde{\varphi} (x\cdot \delta ) = \exp(r \cdot \ln x) \cdot \tilde{\varphi}(\delta) = x^r \cdot \tilde{\varphi}(\delta) \] is therefore valid for every $x \in \;]0, 1]$. By Theorem \ref{thm:TheoremE17.1}, on the other hand, $\tilde{\varphi}$ is piecewise linear on $]0,\delta]$. So the exponent $r$ must be equal to 1, whence $\tilde{\varphi}$ is linear on $[0,\delta]$ with slope $\tilde{\varphi}(\delta) /\delta> 0$ and so, in particular, differentiable in 0.
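To see why the exponent must equal 1 in the last step: if $\tilde{\varphi}(t) = ct + d$ on some subinterval $[x_0 \cdot \delta, x_1 \cdot \delta] \subseteq\, ]0,\delta]$ of positive length, then
\begin{equation*}
c \cdot x\delta + d = \tilde{\varphi}(x \cdot \delta) = x^r \cdot \tilde{\varphi}(\delta) \quad \text{for all } x \in [x_0, x_1],
\end{equation*}
so the power function $x \mapsto x^r$ coincides with an affine function on an interval of positive length; this forces $r = 1$, the case $r = 0$ being excluded since $\tilde{\varphi}$ is increasing.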
\end{proof} \begin{remark} \label{remark:Criterion-for-differentiability} Assume $I$ is a compact interval of the form $[0,b]$ with $b \in A_{>0}$ and the images of $\sigma_\ell$ and $\sigma_r$ are both not cyclic. It follows then from Proposition \ref{prp:phi-is-linear-I-compact} that every \emph{increasing} automorphism $\alpha \colon G \iso G$ is induced by an auto-homeomorphism $\tilde{\varphi}$ that is affine near both end points. By \cite[Proposition E16.9]{BiSt14} the homeomorphism $\tilde{\varphi}$ is thus \emph{finitary} piecewise linear. \end{remark} \subsubsection{First application} \label{ssec:Differentiability-criterion-First-application} As a further step towards the main results we give a corollary that combines Propositions \ref{prp:Importance-of-differentiability-I-compact} and \ref{prp:phi-is-linear-I-compact}. \begin{crl} \label{crl:I-compact-summary-differentiability} Let $G$ be a subgroup of $G(I;A,P)$ that contains $B(I;A,P)$. Assume $I = [0, b]$ and let $\alpha$ be an automorphism of $G$ that is induced by the auto-homeomorphism $\tilde{\varphi}\colon I \iso I$. Then the following statements hold: \begin{enumerate}[(i)] \item if $\alpha$ is increasing \footnote{See Definition \ref{definition:Increasing-isomorphism}.} and $\im \sigma_\ell$ is not cyclic, then $\sigma_\ell$ is fixed by $\alpha$; \item if $\alpha$ is increasing and $\im \sigma_r$ is not cyclic, then $\sigma_r$ is fixed by $\alpha$; \item if $\tilde{\varphi}$ is decreasing and $\im \sigma_\ell$ is not cyclic, then $\tilde{\varphi}$ is affine near both end points and the homomorphism $\psi \colon g \mapsto \sigma_\ell (g) \cdot \sigma_r(g)$ is fixed by $\alpha$. \end{enumerate} \end{crl} \begin{proof} (i) is a direct consequence of Proposition \ref{prp:phi-is-linear-I-compact} and part (i) of Proposition \ref{prp:Importance-of-differentiability-I-compact}.
(ii) We invoke Proposition \ref{prp:phi-is-linear-I-compact} for an auxiliary group $G_1$. Let $\vartheta \colon I \iso I$ be the reflection in the mid-point of $I$; set $G_1 = \vartheta \circ G \circ \vartheta^{-1}$ and $\varphi_1 = \vartheta \circ \tilde{\varphi}_\alpha \circ \vartheta^{-1}$. Since $G(I;A,P)$ and $B(I;A,P)$ are both invariant under conjugation by $\vartheta$, and as the image of $\sigma_r$ is not cyclic, Proposition \ref{prp:phi-is-linear-I-compact} applies to the couple $(G_1,\varphi_1)$ and shows that $\varphi_1$ is linear in a small interval $[0, \delta_1]$ of positive length. But if so, $\tilde{\varphi}_\alpha$ is affine in the interval $[b-\delta_1, b]$. Use now part (ii) in Proposition \ref{prp:Importance-of-differentiability-I-compact}. (iii) Since $\tilde{\varphi}$ is \emph{decreasing}, the subgroups $\im \sigma_\ell$ and $ \im \sigma_r$ are isomorphic by Lemma \ref{lem:Consequence-decreasing-auto-I-compact} below; the hypothesis on $\im \sigma_\ell$ therefore implies that the image of $\sigma_r$ is not cyclic either. Let $\vartheta \colon \Int(I) \iso \Int(I)$ be the reflection in the mid-point of the interval $I$ and set $\tilde{\varphi}_1 = \vartheta \circ \tilde{\varphi}$ and $\bar{G} = \vartheta \circ G \circ \vartheta^{-1}$. Conjugation by $\tilde{\varphi}_1$ induces then an increasing isomorphism $\alpha_1 \colon G \iso \bar{G}$. Since $G(I;A,P)$ and $B(I;A,P)$ are both invariant under conjugation by $\vartheta$, Proposition \ref{prp:phi-is-linear-I-compact} applies to $\tilde{\varphi}_1$ in the r\^ole of $\tilde{\varphi}$ and shows that $\tilde{\varphi}_1$ is linear near $0$. But if so, $\tilde{\varphi}$ is affine near $0$. Consider now the auto-homeomorphism $\tilde{\varphi}_2 = \tilde{\varphi} \circ \vartheta$ of $I$. It induces an isomorphism $\alpha_2 \colon \bar{G} \iso G$ by conjugation; an argument similar to the preceding one then reveals that $\tilde{\varphi}$ is affine near $b$.
The remainder of the claim follows from part (iii) in Proposition \ref{prp:Importance-of-differentiability-I-compact}. \end{proof} \subsection{Construction of homomorphisms fixed by $\Aut_+ G$} \label{ssec:Construction-psi-fixed-by-Aut-plus(G)-I-compact} The first main result holds for all groups $G$ with $B(I;A,P) \subsetneq G \subseteq G(I;A,P)$, but the exhibited homomorphisms may only be fixed by $\Aut_+ G$. \begin{theorem} \label{thm:Existence-psi-I-compact-increasing-autos} Suppose $I = [0, b]$ with $b \in A_{>0}$ and let $G$ be a subgroup of $G(I;A,P)$ that contains $B(I;A,P)$ properly. Then the homomorphisms $\sigma_\ell$ and $\sigma_r$ are fixed by $\Aut_+ G$, and at least one of them is non-trivial. \end{theorem} \begin{proof} Let $\alpha$ be an increasing automorphism of $G$ and let $\tilde{\varphi}$ be the auto-homeo\-mor\-phism of $I$ that induces $\alpha$. (The map $\tilde{\varphi}$ exists by Theorem \ref{thm:TheoremE16.4} and Remark \ref{remarks:varphi}a.) Since the quotient group $G(I;A,P)/B(I;A,P)$ is isomorphic to the image of $ \sigma_\ell \times \sigma_r \colon G(I;A,P)\to P \times P$ and as $G$ contains $B(I;A,P)$ properly, at least one of the homomorphisms $\sigma_\ell$ and $\sigma_r$ is non-trivial. Assume first that $\psi = \sigma_\ell$ is non-trivial. Two cases then arise, depending on whether the image of $\psi$ is cyclic or not. If the image of $\psi$ is \emph{not cyclic} then part (i) in Corollary \ref{crl:I-compact-summary-differentiability} shows that $\alpha$ fixes $\psi$. If, on the other hand, $\im \psi$ is cyclic, consider the generator $p \in \im \psi$ with $p < 1$ and pick a preimage $g_p \in G$ of $p$. Then $g_p$ attracts points in every sufficiently small interval of the form $[0, \delta]$ towards 0; hence so does $\alpha(g_p) = \tilde{\varphi} \circ g_p \circ \tilde{\varphi}^{-1}$ and thus $p' = (\alpha(g_p))'(0) <1$.
Now $p'$ also generates $\im \sigma_\ell$; being smaller than 1, it therefore coincides with $p= \psi(g_p)$ and so $\psi = \psi\circ \alpha$. Assume next that $\psi = \sigma_r$ is non-trivial. If its image is not cyclic, part (ii) of Corollary \ref{crl:I-compact-summary-differentiability} allows us to conclude that $\alpha$ fixes $\psi$. If $\im \psi$ is cyclic, consider the generator $p \in \im \psi$ with $p < 1$ and pick a preimage $g_p \in G$. Then $g_p$ attracts points in every sufficiently small interval $[b-\delta, b]$ towards $b$. It then follows, as before, that $\psi (\alpha(g_p)) = p = \psi(g_p)$, whence $\psi \circ \alpha = \psi$. \end{proof} \subsection{Existence of decreasing automorphisms} \label{ssec:Existence-decreasing-auto-I-compact)} Theorem \ref{thm:Existence-psi-I-compact-increasing-autos} is very satisfactory in that it produces a non-trivial homomorphism $\psi$ onto an infinite abelian group whenever such a homomorphism is likely to exist, i.e., if $G$ contains $B(I;A,P)$ properly. This homomorphism is, however, only guaranteed to be fixed by the subgroup $\Aut_+ G$ of $\Aut G$, which has index 1 or 2 in $\Aut G$. If the index is 1, the conclusion of Theorem \ref{thm:Existence-psi-I-compact-increasing-autos} is as good as we can hope for. So the question arises whether there are useful criteria that force the index to be 1. Here is a very simple observation that leads to such a criterion: \begin{lem} \label{lem:Consequence-decreasing-auto-I-compact} Assume $I = [0, b]$ with $b \in A_{>0}$ and let $G$ be a subgroup of $G(I;A,P)$ that contains $B(I;A,P)$. Then every decreasing automorphism $\alpha$ induces an isomorphism $\alpha_* \colon \im \sigma_\ell \iso \im \sigma_r$. \end{lem} \begin{proof} The kernel of $\sigma_\ell$ consists of all elements in $G$ that are the identity near 0.
Since $\alpha$ is induced by conjugation by a homeomorphism of $I$ that maps 0 onto $b$, the image of $\ker \sigma_\ell$ consists of elements that are the identity near $b$, and so $\alpha(\ker \sigma_\ell) \subseteq \ker \sigma_r$. Since $\alpha^{-1}$ is also a decreasing automorphism, the preceding inclusion is actually an equality. So $\alpha$ induces an isomorphism $\alpha_* \colon \im \sigma_\ell \iso \im \sigma_r$ that renders the square \begin{equation} \label{eq:square-minus-plus} \xymatrix{G \ar@{->}[r]^-{\alpha} \ar@{->>}[d]^-{\sigma_\ell} & G \ar@{->>}[d]^-{\sigma_r}\\ \im \sigma_\ell\ar@{->}[r]^-{\alpha_{*}} &\im \sigma_r} \end{equation} commutative. \end{proof} \begin{example} \label{example:Existence-decreasing-auto-I-compact} Suppose the slope group $P$ is finitely generated and hence free abelian of finite rank $r$, say. Choose subgroups $Q_\ell$ and $Q_r$ of $P$ and set \begin{equation} \label{eq:Definition-G-Qell-Qr} G(Q_\ell, Q_r) = \{g \in G(I;A,P) \mid (\sigma_\ell(g), \sigma_r(g)) \in Q_\ell \times Q_r \}. \end{equation} Then $\im \sigma_\ell = Q_\ell$ and $\im \sigma_r = Q_r$, and the image of $(\sigma_\ell, \sigma_r) \colon G \to P \times P$ coincides with $Q_\ell \times Q_r$ (these claims follow from Corollary A5.5 in \cite{BiSt14}). Assume now that $G(Q_\ell, Q_r)$ admits a decreasing automorphism, say $\alpha$. By Lemma \ref{lem:Consequence-decreasing-auto-I-compact} the groups $Q_\ell$ and $Q_r$ are then isomorphic, and thus have the same rank. But more is true: if $Q_\ell = \im \sigma_\ell$ is \emph{not} cyclic, then part (iii) of Corollary \ref{crl:I-compact-summary-differentiability} applies and shows that $\sigma_r = \sigma_\ell \circ \alpha$, whence $Q_r$, the image of $\sigma_r$, coincides with $Q_\ell$, the image of $\sigma_\ell$.
The same conclusion holds if $Q_r$ is not cyclic. Conversely, if $Q_\ell = Q_r$ then $G(Q_\ell, Q_r)$ admits decreasing automorphisms, for instance the automorphism induced by conjugation by the reflection in the mid-point of $I$. So the only case where the existence of a decreasing automorphism is neither obvious nor easy to rule out by the preceding arguments is that where $Q_\ell$ and $Q_r$ are both cyclic, but distinct. We shall come back to this exceptional case in Example \ref{Example2-for-main-result-I-compact}. \end{example} \subsection{Construction of a homomorphism fixed by $\Aut G$} \label{ssec:Construction-psi-fixed-by-Aut} We move on to the construction of a homomorphism fixed by all of $\Aut G$. The following theorem is our main result. \begin{theorem} \label{thm:Existence-psi-I-compact-all-autos} Suppose $I$ is a compact interval of the form $[0,b]$ with $b \in A_{>0}$. Let $G$ be a subgroup of $G(I;A,P)$ containing $B(I;A,P)$ and let $\psi \colon G \to P$ be the homomorphism $g \mapsto \sigma_\ell(g) \cdot \sigma_r(g)$. Then $\psi$ is fixed by $\Aut G$, \emph{except possibly} when $G$ satisfies the following three conditions: \begin{enumerate}[a)] \item $\im (\sigma_\ell \colon G \to P)$ is cyclic, \item $G$ admits a decreasing automorphism, \item $G$ does not admit a decreasing automorphism induced by an auto-homeomorphism $\vartheta \colon I \iso I$ that is differentiable in both end points with non-zero derivatives. \end{enumerate} \end{theorem} \begin{proof} Let $\alpha$ be an automorphism of $G$ and let $\varphi$ be the auto-homeomorphism of $\Int(I)$ that induces $\alpha$ by conjugation. If $\varphi$ is \emph{increasing}, both $\sigma_\ell$ and $\sigma_r$ are fixed by $\alpha$ (see Theorem \ref{thm:Existence-psi-I-compact-increasing-autos}) and hence so is $\psi$.
If, on the other hand, $\alpha$ is \emph{decreasing} and the image of $\sigma_\ell$ is \emph{not cyclic}, then part (iii) of Corollary \ref{crl:I-compact-summary-differentiability} yields the desired conclusion. Suppose now that $G$ admits an automorphism $\beta$ that is induced by a decreasing auto-homeomorphism $\tilde{\varphi}_\beta$ of $I$ which is differentiable in 0, as well as in $b$, and has there non-zero derivatives. Then part (iii) of Proposition \ref{prp:Importance-of-differentiability-I-compact} allows us to conclude that $\psi$ is fixed by $\beta$. Since $\beta$ represents the coset $\Aut G \smallsetminus \Aut_+ G$ and as $\psi$ is fixed by $\Aut_+ G$, it follows that $\psi$ is fixed by every decreasing automorphism. All taken together, we have proved that the automorphism $\alpha$ fixes $\psi$ except, possibly, if $\im \sigma_\ell$ is cyclic, $\alpha$ is decreasing and there does not exist a decreasing automorphism $\beta$ induced by an auto-homeomorphism that is differentiable in the end points with non-zero derivatives. \end{proof} We state next some consequences of Theorems \ref{thm:Existence-psi-I-compact-increasing-autos} and \ref{thm:Existence-psi-I-compact-all-autos}. We begin with the special case where $G$ is all of $G(I;A,P)$. Then $G$ is normalized by the reflection in the mid-point of $I$ and so Theorem \ref{thm:Existence-psi-I-compact-all-autos} leads to \begin{crl} \label{crl:G=G(I;A,P)} If $G$ coincides with $G([0,b];A,P)$ the homomorphism $\psi \colon G \to P$ taking $g \in G$ to $\sigma_\ell(g) \cdot \sigma_r(g)$ is surjective, hence non-trivial, and fixed by $\Aut G$. \end{crl} The second result is a consequence of the proof of Theorem \ref{thm:Existence-psi-I-compact-increasing-autos}. \begin{crl} \label{crl:sigma-ell-fixed-y-AutG-for-G(half-line;A,P)} Suppose $I$ is the half line $[0, \infty[$ and $G$ is a subgroup of $G(I;A,P)$ containing $B(I; A,P)$.
If $G$ does not admit a decreasing automorphism, then $\psi = \sigma_\ell$ is fixed by $\Aut G$. \end{crl} \begin{proof} The claim follows from Proposition \ref{prp:phi-is-linear-I-compact} and from the proof of part (i) in Proposition \ref{prp:Importance-of-differentiability-I-compact} upon noting that the cited proof does not presuppose that the interval $I$ be bounded from above. \end{proof} \subsection{Some examples} \label{ssec:Some-examples-I-compact} We exhibit some specimens of groups $G$ that possess a homomorphism $\psi \colon G \to P$ fixed by $\Aut G$. The existence of $\psi$ will be established by recourse to Theorems \ref{thm:Existence-psi-I-compact-increasing-autos} and \ref{thm:Existence-psi-I-compact-all-autos} and to Corollary \ref{crl:G=G(I;A,P)}. \begin{example} \label{Example1-for-main-result-I-compact} We begin with variations on Thompson's group $F$. Assume $P$ is infinite cyclic and $A$ is a (non-trivial) $\mathbb{Z}[P]$-submodule of $\mathbb{R}$. Set $G_0 = G([0,b]; A, P)$ with $b \in A_{>0}$ and consider the following subgroups of $G_0$: \begin{align} G_1 &= \{g \in G_0 \mid \sigma_\ell(g) = 1 \}, \label{eq:Subgroup1-with-infinite-cyclic-quotient}\\ G_2 &=\{g \in G_0 \mid \sigma_\ell(g) = \sigma_r(g) \}, \label{eq:Subgroup2-with-infinite-cyclic-quotient}\\ G_3 &=\{g \in G_0 \mid \sigma_\ell(g) = \sigma_r(g)^{-1} \}. \label{eq:Subgroup3-with-infinite-cyclic-quotient} \end{align} The group $G_0$ is the entire group $G(I;A,P)$ and so Corollary \ref{crl:G=G(I;A,P)} tells us that the homomorphism $\psi \colon g \mapsto \sigma_\ell(g) \cdot \sigma_r(g)$ is non-trivial and fixed by $\Aut G_0$. The group $G_1$ is an ascending union of subgroups $H_n = G([a_n, b];A,P)$ given by a strictly decreasing sequence $n \mapsto a_n$ of elements in $A$ that converges to 0, and so the group $G_1$ is infinitely generated.
It does not admit a decreasing automorphism (for instance because of Lemma \ref{lem:Consequence-decreasing-auto-I-compact}) and so Theorem \ref{thm:Existence-psi-I-compact-increasing-autos} allows us to infer that the epimorphism $\sigma_r \colon G_1 \twoheadrightarrow P$ is fixed by all of $\Aut G_1$. The group $G_2$ is an ascending HNN-extension with a base group that is isomorphic to $G_0$ (see \cite[Lemma E18.8]{BiSt14}). If $G_0$ is finitely generated or finitely presented, then so is $G_2$. The group $G_2$ is normalized by the reflection in the mid-point of $I$ and so Theorem \ref{thm:Existence-psi-I-compact-all-autos} implies that $\psi \colon g \mapsto \sigma_\ell(g) \cdot \sigma_r(g)$ is fixed by $\Aut G_2$. This homomorphism $\psi$ is non-trivial, for it coincides with $\sigma_\ell^2$. (Actually, $\sigma_\ell$ and $\sigma_r$ are also fixed by $\Aut G_2$.) Now to the group $G_3$. It differs from $G_2$ in several respects: it cannot be written as an ascending HNN-extension with a finitely generated base group contained in $B(I;A,P)$; it is finitely generated if $G_0$ is so, but, if finitely generated, it does not admit a finite presentation (see part (ii) of Lemma E18.8 and Remark E18.10 in \cite{BiSt14}). The group $G_3$ is normalized by the reflection in the mid-point of $I$ and so $\psi \colon G_3 \to P$ is fixed by $\Aut G_3$; this conclusion, however, is of no interest, as $\psi$ is the zero map. Actually, more is true: every homomorphism $\psi' \colon G_3 \to P$ fixed by $\rho$ and vanishing on the bounded subgroup $B_3$ of $G_3$ is the zero-map: by definition \eqref{eq:Subgroup3-with-infinite-cyclic-quotient} the group $G_3/B_3$ is infinite cyclic and so $\psi'$ must be a multiple of $\sigma_\ell$.
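For completeness, the vanishing of $\psi$ on $G_3$, used above, is immediate from the defining relation \eqref{eq:Subgroup3-with-infinite-cyclic-quotient}:
\begin{equation*}
\psi(g) = \sigma_\ell(g) \cdot \sigma_r(g) = \sigma_\ell(g) \cdot \sigma_\ell(g)^{-1} = 1 \quad \text{for every } g \in G_3.
\end{equation*}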
\end{example} \begin{remark} \label{remark-by-Paramesh} The previous discussion shows that $G_0$, $G_1$ and $G_2$ admit non-trivial homomorphisms into $P$ that are fixed by the corresponding automorphism groups. This fact and the observation made in section \ref{ssec:Crucial-fact} imply that every automorphism of one of these groups has infinitely many corresponding twisted conjugacy classes. This reasoning does not apply to $G_3$, for $\psi \colon G_3 \to P$ is the zero homomorphism. So the question whether or not an automorphism $\alpha$ of $G_3$ has infinitely many twisted conjugacy classes has to be tackled by another approach. Note first that the homomorphisms $\sigma_\ell$ and $\sigma_r$ are both non-trivial; as $G_3$ satisfies the assumptions of Theorem \ref{thm:Existence-psi-I-compact-increasing-autos}, these homomorphisms are therefore fixed by $\Aut_+G_3$. It follows that every increasing automorphism $\alpha$ of $G_3$ has infinitely many $\alpha$-twisted conjugacy classes. We are thus left with the coset of decreasing automorphisms of $G_3$. Consider, for example, the automorphism $\beta$ induced by conjugation by the reflection $\vartheta$ in the mid-point of the interval $I$. Our aim is to construct an infinite collection of elements $f_n \in G_3$ and then to verify that they represent pairwise distinct $\beta$-twisted conjugacy classes. This verification will be based on the fact that $\beta$ has order $2$ and on a connection between twisted and ordinary conjugacy classes, available for automorphisms of finite order. \footnote{Cf.\;Lemma 2.3 in \cite{GoSa14}.} Let $f$ and $g$ be elements of $G_3$ that lie in the same $\beta$-twisted conjugacy class. By definition, there exists then $h \in G_3$ that satisfies the equation $g = h \circ f \circ \beta(h^{-1})$.
The calculation \begin{align*} g \circ \beta(g) &= \left(h \circ f \circ \beta(h^{-1})\right) \circ \beta\left( h \circ f \circ \beta(h^{-1}) \right)\\ &= h \circ (f \circ \beta(f) ) \circ \beta^2(h^{-1}) = {}^{h}{\left(f \circ \beta(f) \right)} \end{align*} shows then that the elements $f \circ \beta(f)$ and $g \circ \beta(g)$ are conjugate. It suffices therefore to find a sequence of elements $n \mapsto f_n$ with the property that the compositions $f_{n_1} \circ \beta(f_{n_1})$ and $f_{n_2} \circ \beta(f_{n_2})$ represent distinct conjugacy classes whenever $n_1 \neq n_2$. To obtain such a sequence, we use the fact that $G_3$ contains $B(I;A,P)$ and that $B(I;A,P)$ consists of all PL-homeomorphisms with slopes in $P$ and breakpoints in the dense subgroup $A$ which are the identity near the end points. For every positive integer $n$ there exists therefore a non-trivial element $f_n \in B(I;A,P)$ whose support has $n$ connected components, all contained in the interval $]0, b/2[$. The support of $h_n = f_n \circ \beta(f_n) = f_n \circ (\vartheta \circ f_n \circ \vartheta^{-1})$ then has $2n$ connected components, and so $h_{n_1}$ is not conjugate to $h_{n_2}$ for $n_1 \neq n_2$. It follows that $G_3$ has infinitely many $\beta$-twisted conjugacy classes. The previous reasoning admits some improvements, but it does not seem powerful enough to establish that $G_3$ has infinitely many $\alpha$-twisted conjugacy classes for every decreasing automorphism $\alpha$ of $G_3$. \end{remark} \begin{example} \label{Example2-for-main-result-I-compact} Example \ref{Example1-for-main-result-I-compact} admits a generalization that is worth being brought to the attention of the reader. Assume $P$ is a non-trivial subgroup of the positive reals, $A$ is a (non-trivial) $P$-submodule of $\mathbb{R}$ and $\nu$ is an endomorphism of $P$.
Fix $b \in A_{>0}$, set $I = [0, b]$ and define \begin{equation} \label{eq:definition-G(nu)} G_\nu = \{g \in G([0,b];A,P) \mid \sigma_r(g) = \nu(\sigma_\ell(g)) \}. \end{equation} We are interested in finding a non-trivial homomorphism $\psi \colon G_\nu \to P$ that is fixed by $\Aut G_\nu$. Theorem \ref{thm:Existence-psi-I-compact-all-autos} implies that the homomorphism $\psi \colon g \mapsto \sigma_\ell (g) \cdot \sigma_r(g)$ is fixed by $\Aut G_\nu$ whenever $P$ is not cyclic; this homomorphism is non-trivial unless $\nu$ is the map that sends $p \in P$ to its inverse $p^{-1}$. \emph{Assume now that $P$ is cyclic}. Then $G_\nu$ is isomorphic to one of the groups $G_1$, $G_2$ or $G_3$ discussed in Example \ref{Example1-for-main-result-I-compact}. This claim is clear if $\nu$ is the trivial endomorphism, for $G_\nu$ coincides then with $\ker \sigma_r$ and is therefore isomorphic to $G_1$. Assume now that $\nu$ is not trivial. The quotient group $G_\nu/ B(I;A,P)$ is then an infinite cyclic subgroup of the quotient group $G(I;A,P)/B(I;A, P)$, which is a free abelian group of rank 2. By the classification in Section 18.4b of \cite{BiSt14}, the group $G_\nu$ is therefore isomorphic either to $G_2$ or to $G_3$. Since the isomorphism $G_\nu \iso G_2$, respectively $G_\nu \iso G_3$, is induced by conjugation by an auto-homeomorphism of $]0,b[$, and as conjugation by the reflection in $b/2$ induces decreasing automorphisms in $G_2$ and in $G_3$, \emph{the group $G_\nu$ admits a decreasing automorphism, say $\beta$}; it induces an isomorphism $\beta_{*} \colon \im \sigma_\ell \iso \im \sigma_r$ (see Lemma \ref{lem:Consequence-decreasing-auto-I-compact}). Our next aim is to obtain a formula for $\beta_*$. The definition of $G_\nu$ shows, first of all, that $\im \sigma_\ell = P$ and that $\im \sigma_r = \nu(P)$. Let $p$ be the generator of $P$ with $p < 1$.
Then $\nu(p) = p^m$ for some non-zero integer $m$ (recall that $\nu$ is not the zero map). Pick an element $g_p \in G_\nu$ with $\mathbb{S}igma_\ell(g_p) = p$. Then 0 is an attracting fixed point of $g_p$ restricted to a sufficiently small interval of the form $[0, \delta]$, and hence $b$ is an attracting fixed point for the restriction of $\beta(g_p)$ to a sufficiently small interval of the form $[b-\varepsilon,b]$. Thus $\mathbb{S}igma_r(\beta(g_p)) < 1$. Since $\mathbb{S}igma_r(\beta(g_p))$ generates $\im \mathbb{S}igma_r = \nu(P) = \gp(p^m)$, it follows that $\beta_*$ is given by the formula \begin{equation} \label{eq:Identification-beta-star} \beta_* \colon P \to P, \quad p \mapsto p^{|m|}. \end{equation} Consider now the commutative square \eqref{eq:square-minus-plus}, but with $\alpha$ replaced by $\beta$. It shows that \begin{equation} \label{eq:Relation-beta-digma-ell-sigma-r} (\mathbb{S}igma_r \circ \beta)(g_p) = \beta_*(\mathbb{S}igma_\ell(g_p)) = \beta_*(p) = p^{|m|} = \left(\mathbb{S}igma_\ell(g_p)\right)^{|m|} \end{equation} and so $\mathbb{S}igma_r \circ \beta = \mathbb{S}igma_\ell^{|m|}$. The preceding reasoning is also valid with $\beta^{-1}$ in place of $\beta$, for $\beta^{-1}$ is also a decreasing automorphism of $G_\nu$, and so the relation $\mathbb{S}igma_r \circ \beta^{-1} = (\mathbb{S}igma_\ell)^{|m|}$ holds, hence also the relation $\mathbb{S}igma_\ell^{|m|} \circ \beta = \mathbb{S}igma_r$. Consider next the homomorphism $\psi \colon G_\nu \to P$ that takes $g$ to $\mathbb{S}igma_\ell(g)^{|m|} \cdot \mathbb{S}igma_r(g)$. The calculation \[ (\psi \circ \beta)(g) = \mathbb{S}igma_\ell^{|m|} (\beta (g)) \cdot \mathbb{S}igma_r(\beta(g)) = \mathbb{S}igma_r(g) \cdot \mathbb{S}igma_\ell^{|m|}(g) = \psi(g) \] shows then that $\psi$ is fixed by $\beta$.
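By way of illustration, assume $\nu$ is the endomorphism $p \mapsto p^2$, so that $m = 2$. The defining condition of $G_\nu$ then reads $\mathbb{S}igma_r(g) = \mathbb{S}igma_\ell(g)^2$, and the homomorphism just considered becomes \[ \psi(g) = \mathbb{S}igma_\ell(g)^{2} \cdot \mathbb{S}igma_r(g) = \mathbb{S}igma_\ell(g)^{4}, \] a non-zero homomorphism of $G_\nu$ onto $\gp(p^4)$ that is fixed by the decreasing automorphism $\beta$.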
Note, however, that $\psi$ is the zero homomorphism whenever $m$ is negative, for in this case the definition of $G_\nu$ implies that \[ \psi(g) = (\mathbb{S}igma_\ell(g))^{|m|} \cdot \mathbb{S}igma_r(g) = (\mathbb{S}igma_\ell(g))^{|m|} \cdot (\mathbb{S}igma_\ell(g))^m = 1 \] for every $g \in G_\nu$, just as it happens with $G_3$ in Example \ref{Example1-for-main-result-I-compact}. \begin{remark} \label{remark:Exceptions-in-thm-Existence-psi-I-compact-all-autos} Suppose $P$ is cyclic and $\nu \colon P \to P$ is neither the identity nor the passage to the inverse. Then $G_\nu$ admits decreasing automorphisms $\beta$, but none of them can be induced by an auto-homeomorphism $\tilde{\varphi} \colon I \iso I$ that is differentiable at the end points; indeed, formula \eqref{eq:Relation-beta-digma-ell-sigma-r} shows that $\mathbb{S}igma_r \circ \beta \neq \mathbb{S}igma_\ell$, in contrast to what happens if the chain rule can be applied (see Proposition \ref{prp:Importance-of-differentiability-I-compact}). It follows, in particular, that the three conditions a), b) and c) stated in Theorem \ref{thm:Existence-psi-I-compact-all-autos} can occur simultaneously. \end{remark} \end{example} \mathbb{S}ection{Characters fixed by $\Aut G([0,\infty[\,;A,P)$} \label{sec:Homomorphisms-fixed-by-Aut-I-half-line} The results in this section differ from those of Section \ref{sec:Homomorphisms-fixed-by-Aut-I-compact} in two important respects: in many situations several candidates for $\psi \colon G \to P$ are available, and one of these candidates need not be fixed by $\Aut_+ G$. \mathbb{S}ubsection{Existence of decreasing automorphisms} \label{ssec:I-half-line-Existence-decreasing-autos} Every compact interval of the form $[0,b]$, and also the line, is invariant under a reflection. It follows that the groups $G(I;A,P)$ with $I$ one of these intervals, but also many of their subgroups, admit decreasing automorphisms.
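For the compact interval $I = [0,b]$ this claim can be checked directly. The reflection $\bar{\varphi} \colon t \mapsto b - t$ satisfies $\bar{\varphi}^{-1} = \bar{\varphi}$, and for every $g \in G(I;A,P)$ one has \[ (\bar{\varphi} \circ g \circ \bar{\varphi}^{-1})(t) = b - g(b-t), \qquad (\bar{\varphi} \circ g \circ \bar{\varphi}^{-1})'(t) = g'(b-t). \] The conjugate has therefore slopes in $P$ and breakpoints in $A$ (as $b \in A$), and so lies again in $G(I;A,P)$; moreover, its slope near $t = b$ is the slope of $g$ near $0$, so that conjugation by $\bar{\varphi}$ is a decreasing automorphism which interchanges $\mathbb{S}igma_\ell$ and $\mathbb{S}igma_r$.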
The case where $I$ is a half line, say $[0, \infty[$, is different: then $G([0,\infty[\,;A,P)$ does not admit a decreasing automorphism. In this section, we first justify this claim and discuss then the extent to which it continues to be valid for subgroups of $G([0,\infty[\,;A,P)$. We begin with an analogue of Lemma \ref{lem:Consequence-decreasing-auto-I-compact}. \begin{lem} \label{lem:Consequence-decreasing-auto-I-half-line} Assume $I$ is the half line $[0, \infty[$ and $G$ is a subgroup of $G(I;A,P)$ that contains $B(I;A,P)$. Then every decreasing automorphism $\alpha$ induces an isomorphism $\alpha_* \colon \im \mathbb{S}igma_\ell \iso \im \rho $. \end{lem} \begin{proof} The proof is very similar to that of Lemma \ref{lem:Consequence-decreasing-auto-I-compact}. The kernel of $\mathbb{S}igma_\ell$ consists of all elements in $G$ that are the identity near 0, while that of $\rho$ is made up of the elements in $G$ that are the identity near $\infty$. Since $\alpha$ is induced by conjugation by a decreasing homeomorphism of $]0, \infty[$, the image of $\ker \mathbb{S}igma_\ell$ consists of elements $\alpha(g)$ that are the identity on a half line of the form $[t(g), \infty[$, and so $\alpha(\ker \mathbb{S}igma_\ell) \mathbb{S}ubseteq \ker \rho$. Since $\alpha^{-1}$ is also a decreasing automorphism, the preceding inclusion is actually an equality. It follows that $\alpha$ induces an isomorphism $\alpha_* \colon \im \mathbb{S}igma_\ell \iso \im \rho$ that renders the square \begin{equation} \label{eq:square-sigma-ell-rho} \xymatrix{G \ar@{->}[r]^-{\alpha} \ar@{->>}[d]^-{\mathbb{S}igma_\ell} & G \ar@{->>}[d]^-{\rho}\\ \im \mathbb{S}igma_\ell\ar@{->}[r]^-{\alpha_{*}} &\im \rho} \end{equation} commutative. \end{proof} The preceding lemma leads directly to a criterion for the non-existence of decreasing automorphisms. 
Indeed, the image of $\mathbb{S}igma_\ell$ is abelian, while that of $\rho$ is often a non-abelian, metabelian group, and so we obtain \begin{criterion} \label{criterion:Non-existence-decreasing-auto-I-half-line} Assume $I$ is the half line $[0, \infty[$ and $G$ is a subgroup of $G(I;A,P)$ that contains $B(I;A,P)$. If $\im \rho$ is \emph{not} abelian then $\Aut G = \Aut_+ G$. \end{criterion} \mathbb{S}ubsection{Construction of homomorphisms: part I} \label{ssec:I-half-line-Construction-homomorphisms-I} We turn now to the construction of homomorphisms fixed by $\Aut_+ G$, or even by $\Aut G$. Several homomorphisms are at our disposal. The first of them is $\mathbb{S}igma_\ell$. Corollary \ref{crl:sigma-ell-fixed-y-AutG-for-G(half-line;A,P)} tells us then: \begin{prp} \label{prp:Existence-psi-I-halfline-increasing-autos-1} Assume $G$ is a subgroup of $G([0,\infty[\,;A,P)$ that contains $B([0,\infty[\,;A,P)$. Then the homomorphism $\mathbb{S}igma_\ell$ is fixed by $\Aut_+ G$. \end{prp} We move on to the homomorphism $\rho$. Here two cases arise, depending on whether its image is abelian or non-abelian. In the second case, a very satisfying conclusion holds. It is enunciated in \begin{theorem} \label{thm:Existence-psi-I-halfline-image-rho-non-abelian} Assume $I = [0, \infty[$ and $G$ is a subgroup of $G(I;A,P)$ containing $B(I;A,P)$. If $\im \rho$ is not abelian then $\mathbb{S}igma_r$ is a non-zero homomorphism fixed by $\Aut G$. \end{theorem} \begin{proof} Suppose $\im \rho$ is non-abelian and let $\alpha$ be an automorphism of $G$. Since $\im \mathbb{S}igma_\ell$ is abelian, Lemma \ref{lem:Consequence-decreasing-auto-I-half-line} forces $\alpha$ to be increasing. Let $\varphi \colon ]0, \infty[\, \iso \, ]0, \infty[$ be the auto-homeomorphism that induces $\alpha$ by conjugation.
As it is increasing, it is affine near $\infty$ by Proposition \ref{prp:Affine-near-infty-I-half-line} below, and so the following calculation \begin{align*} \mathbb{S}igma_r(\alpha(g)) &= \lim\nolimits_{t \to \infty} \left(\varphi \circ g \circ \varphi^{-1}\right)'(t)\\ &= \lim\nolimits_{t \to \infty} \left( \varphi'(g \circ \varphi^{-1}(t)) \cdot g'(\varphi^{-1}(t)) \cdot (\varphi^{-1})'(t) \right)\\ &= \lim\nolimits_{t \to \infty} \left( \varphi'(t) \cdot g'(t) \cdot (\varphi^{-1})'(t) \right)\\ &= \lim\nolimits_{t \to \infty} g'(t) = \mathbb{S}igma_r(g) \end{align*} is valid for every $g \in G$. It shows that $\alpha$ fixes the homomorphism $\mathbb{S}igma_r$. This homomorphism is non-zero. Indeed, $G/\ker \rho \iso \im \rho$ is not abelian by hypothesis, while $ \ker \mathbb{S}igma_r/\ker \rho$ is abelian and thus the third term of the extension \[ \ker \mathbb{S}igma_r/\ker \rho \rightarrowtail G/ \ker \rho \twoheadrightarrow G/\ker \mathbb{S}igma_r \] is not zero, whence $\ker \mathbb{S}igma_r \neq G$. \end{proof} We are left with proving an analogue of Proposition \ref{prp:phi-is-linear-I-compact}. For later use, we state it in greater generality than needed at this point, namely as \begin{prp} \label{prp:Affine-near-infty-I-half-line} Assume $G$ and $\bar{G}$ are subgroups of $G(I;A,P)$, both containing the subgroup $B(I;A,P)$, and that $I$ is either the half line $[0, \infty[$ or the line $\mathbb{R}$. Let $\alpha \colon G \iso \bar{G}$ be an isomorphism and let $\varphi_\alpha$ be an auto-homeomorphism of $\mathbb{I}nt(I)$ that induces $\alpha$ by conjugation. If $\im \rho$ is not abelian and $\varphi_\alpha$ is increasing then $\varphi_\alpha$ is affine near $\infty$. \end{prp} \begin{proof} We adapt the argument of Part 2 in the proof of \cite[Supplement E17.3]{BiSt14} to the case at hand. 
By assumption, the image of $\rho_* \colon G \to \Aff(A,P) \iso A \rtimes P$ is not abelian; its derived group is therefore (isomorphic to) a non-trivial submodule $A_1$ of $A$ which, being non-trivial, contains arbitrarily small positive elements and so is dense in $\mathbb{R}$. Let $\bar{\rho}_* \colon \bar{G} \to A \rtimes P$ be the similarly defined homomorphism; the derived group of its image is then isomorphic to a non-trivial submodule $\bar{A}_1$ of $A$. By part (ii) of Corollary \ref{crl:alpha-and-ker-lambda}, the isomorphism $\alpha$ induces an isomorphism $\alpha_*$ of $G/\ker \rho$ onto $\bar{G}/\ker \bar{\rho}$; hence an isomorphism of $\im \rho$ onto $\im \bar{\rho}$, and, finally, an isomorphism $\alpha_1$ of $A_1$ onto $\bar{A}_1$. They render commutative the following diagram \begin{equation} \label{eq:square-rho} \xymatrix{ G \ar@{->>}[r]^-{\rho} \ar@{->}[d]^-{\alpha} & \im \rho \ar@{<-<}[r]^-{} \ar@{->}[d]^-{\alpha_*} & A_1 \ar@{->}[d]^-{\alpha_1} \\% \bar{G} \ar@{->>}[r]^-{\bar{\rho} } & \im \bar{\rho} \ar@{<-<}[r]^-{} & \bar{A}_1 . } \end{equation} We claim that the automorphism $\alpha_1 \colon A_1 \iso \bar{A}_1$ is strictly \emph{increasing}. Let $b \in A_1$ be an arbitrary positive element and let $f_b \in G$ be a PL-homeomorphism that is a translation with amplitude $b$ near $\infty$, say on $[t_{b,1}, \infty[$. Then $\alpha(f_b)$ is a PL-homeomorphism which is a translation with amplitude $\alpha_1(b)$ near $\infty$, say for $t \geq \varphi(t_{b,2})$. Since $\alpha$ is induced by conjugation by $\varphi$, one has $\alpha(f_b) = \act{\varphi}{1}{ f_b}$; so $\varphi \circ f_b = \alpha(f_b) \circ \varphi$. By evaluating this equality at $t \geq \max\{t_{b,1}, t_{b, 2}\}$ one obtains the chain of equations \[ \varphi(t + b) = (\varphi \circ f_b)(t) = (\alpha(f_b) \circ \varphi)(t) = \alpha_1(b) + \varphi(t). \] It implies that $\alpha_1(b)$ is positive, for $b$ is so by assumption and $\varphi$ is increasing.
We show next that $\alpha_1$ is given by multiplication by a positive real number $s_1$. As stated in the first paragraph of the proof, $A_1$ is a dense subgroup of $\mathbb{R}_{\add}$. Since $\alpha_1$ is strictly increasing, it extends to a (unique) strictly increasing automorphism $\tilde{\alpha}_1 \colon \mathbb{R} \iso \mathbb{R}$. This automorphism is continuous and hence an $\mathbb{R}$-linear map, given by multiplication by some positive real number $s_1$. We come now to the final stage of the analysis of $\varphi$. In it we show that the restriction of $\varphi$ to a suitable interval of the form $[t_*, \infty[$ is \emph{affine}. Choose a positive element $b_* \in A_1$ and let $f_{b_*} \in G$ be an element whose image under $\rho$ is a translation with amplitude $b_*$. It then follows, as before, that there is a positive number $t_*$ so that the equation \begin{equation} \label{eq;Describing-varphi} \varphi(t + b_*) = \alpha_1(b_*) + \varphi(t) = \varphi(t) + s_1 \cdot b_* \end{equation} holds for every $t \geq t_*$. Consider now an arbitrary positive element $b \in A_1$. There exists then a positive number $t_{b}$ such that the calculation \[ \varphi(t + b) = \alpha_1(b) + \varphi(t) = \varphi(t) + s_1 \cdot b \] is valid for $t \geq t_{b}$. Choose a positive integer $m$ which is so large that $t_{b} \leq t_* + m \cdot b_*$. For every $t \geq t_*$ the following calculation is then valid: \begin{align*} \varphi(t + b) + s_1 \cdot m b_* &= \varphi( t + b + m \cdot b_* )\\ &= \varphi( t + m \cdot b_*) + s_1 \cdot b = \varphi(t) + s_1 \cdot m b_* + s_1 \cdot b. \end{align*} It follows that the equation \begin{equation} \label{eq:Representation-.varphi-bis} \varphi (t + b) = \varphi (t) + s_1 \cdot b \end{equation} holds for every positive element $b \in A_1$ and every $t \geq t_*$.
Since $\varphi$ is continuous and increasing and as $A_1$ is dense in $\mathbb{R}$, this equation allows us to deduce that $\varphi$ is affine with slope $s_1$ on the half line $[t_*, \infty[$, and so the proof is complete. \end{proof} The hypotheses of Theorem \ref{thm:Existence-psi-I-halfline-image-rho-non-abelian} are satisfied if $G = G([0,\infty[;A,P)$; the theorem, Lemma \ref{lem:Consequence-decreasing-auto-I-half-line} and Corollary \ref{crl:sigma-ell-fixed-y-AutG-for-G(half-line;A,P)} thus yield the pleasant \begin{crl} \label{crl:Existence-psi-I-halfline-fixed-by-AutG} If $G$ coincides with $G([0,\infty[\,;A,P)$ then both $\mathbb{S}igma_\ell \colon G \to P$ and $\mathbb{S}igma_r \colon G \to P$ are surjective homomorphisms fixed by $\Aut G$. \end{crl} Corollary \ref{crl:Existence-psi-I-halfline-fixed-by-AutG} is the analogue of Corollary \ref{crl:G=G(I;A,P)}, but with the compact interval $I$ replaced by a half line. Groups $G(I;A, P)$ with $I$ a half line have, so far, been investigated less often than groups with $I$ a compact interval; they have, however, their own merits, in particular the following one: to date, finitely generated groups of the form $G(I;A,P)$ with $I$ compact are only known for very special choices of the parameters $(A, P)$.\footnote{See \cite[p.\;vii]{BiSt14} for the list of the groups known at the end of 2014.} By contrast, finitely generated groups with $I$ a half line are far more common, as is shown by the following characterization: \begin{prp}[Theorem B8.2 in \cite{BiSt14}] \label{prp:Characterization-G-fg-I-half-line} The group $G([0,\infty[\,;A,P)$ is finitely generated if, and only if, the following conditions are satisfied: \begin{enumerate}[(i)] \item $P$ is finitely generated, \item $A$ is a finitely generated $\mathbb{Z}[P]$-module, and \item $A/(IP \cdot A)$ is finite.
\end{enumerate} \end{prp} \mathbb{S}ubsection{Construction of homomorphisms: part II} \label{ssec:I-half-line-Construction-homomorphisms-II} Theorem \ref{thm:Existence-psi-I-halfline-image-rho-non-abelian} is very pleasing: it shows that the homomorphism $\mathbb{S}igma_r$ is fixed by all automorphisms provided merely \emph{the image of $\rho \colon G \to \Aff(IP \cdot A, P)$ is not abelian}. In this section, we discuss the remaining case. The image of $G([0,\infty[\, ; A, P)$ under $\rho$ is the affine group \[ \Aff(IP \cdot A,P) \iso (IP \cdot A) \rtimes P \] (see section \ref{ssec:lambda-rho}). This group is metabelian and contains two obvious kinds of abelian subgroups: those made up of translations, corresponding to the subgroups of $IP \cdot A$, and the subgroups consisting of homotheties $t \mapsto q\cdot t$ with ratio $q$ varying in a subgroup $Q$ of $P$. We begin by discussing the second type of abelian subgroups. \mathbb{S}ubsubsection{Image of $\rho$ is made up of homotheties} \label{sssec:I-half-line-Groups-with-im-rho-homotheties} Given a subgroup $Q$ of $P$ let $G_{Q}$ be the subgroup of $G = G([0, \infty[\,;A,P)$ consisting of the products $f \circ g$ with $g \in B = B([0 ,\infty[\,;A,P)$ and $f$ a homothety $t \mapsto q \cdot t$ with $q \in Q$; since $B$ is normal in $G$ the set so defined is actually a subgroup of $G$. We do not know which of these subgroups $G_Q$ admit decreasing automorphisms, but those with $Q$ cyclic have this peculiarity, as can be seen from \begin{lem} \label{lem:I-half-line-Construction-decreasing-autos} Assume $I$ is the half line $[0, \infty[$ and $Q$ is a cyclic subgroup of $P$. Then the subgroup \begin{equation} \label{eq:Definition-subgroup-G-sub-Q} G_Q = \{ f \circ g \mid f = (t \mapsto q \cdot t) \text{ with } q \in Q \text{ and } g \in B\} \end{equation} of the group $G([0,\infty[\,;A,P)$ does admit a decreasing automorphism.
\end{lem} \begin{proof} Let $q_0$ be the generator of $Q$ with $q_0>1$ and choose a positive element $a_0 \in IP \cdot A$. For each $k \in \mathbb{Z}$ set $t_k = q_0^k \cdot a_0$ and define $\varphi \colon \,]0, \infty[\; \iso \;]0, \infty[$ to be the affine interpolation of the assignment $(t_k \mapsto t_{-k})_{k \in \mathbb{Z}}$. Then $\varphi$ is an infinitary PL-auto-homeomorphism of $]0, \infty[$ whose interpolation points lie in $(IP \cdot A) \times (IP \cdot A)$. The slopes of the segments forming the graph of $\varphi$ are the negatives of powers of $q_0$; indeed, \begin{align*} t_{k+1} - t_k &= q^{k+1}_0 \cdot a_0 -q^{k}_0 \cdot a_0 = (q_0-1) \cdot q_0^k \cdot a_0\\ \varphi(t_{k+1}) - \varphi(t_k) &= (1/q_0)^{k+1} \cdot a_0 - (1/q_0)^k \cdot a_0 = (1 - q_0) \cdot q_0^{-k-1} \cdot a_0 \end{align*} and so $\varphi$ has slope $(-1) \cdot q_0^{-2k - 1}$ on the interval $[t_{k}, t_{k+1}]$. It follows that $\varphi$ maps $IP \cdot A$ onto itself. Consider now a conjugate $\act{\varphi}{1}{ h} = \varphi \circ h \circ \varphi^{-1}$ of an element $h \in G_Q$. If $h \in B(I;A,P)$, then $h$ has support contained in some interval of the form $I_{k(h)} = [t_{-k(h)}, t_{k(h)}]$ for some $k(h) > 0$ and so $\act{\varphi}{1}{ h}$ has support in $\varphi(I_{k(h)}) = I_{k(h)}$, slopes in $P$, breakpoints in $IP \cdot A$ and is thus an element of $B \mathbb{S}ubset G_Q$. If, on the other hand, $h$ is the homothety with ratio $q_0$, then $h(t_k) = t_{k+1}$ for each index $k \in \mathbb{Z}$ and its conjugate $\act{\varphi}{1}{ h}$ is the PL-function with interpolation points $(t_k, t_{k-1})$, hence the homothety with centre 0 and ratio $q_0^{-1}$ and thus $\act{\varphi}{1}{ h }= h^{-1}$ lies in $G_Q$. As $G_Q$ is generated by $B \cup \{ (t \mapsto q_0 \cdot t)\}$, the previous reasoning shows that the decreasing auto-homeomorphism $\varphi$ induces by conjugation an automorphism of $G_Q$ and so the lemma is established.
\end{proof} \begin{remark} \label{remark:Existence-psi-for-homotheties} Assume $A$, $P$ and $Q$ are as in the statement of the lemma. Then the bounded group $B = B([0,\infty[\,;A,P)$ may be perfect and hence simple; cf.\;\cite[Section 12.4]{BiSt14}. In such a case, $B$ is the only normal subgroup $N$ of $G_Q$ with $G_Q/N$ \emph{infinite abelian} and so the lemma implies that no homomorphism of $G_Q$ onto an infinite abelian group is fixed by all of $\Aut G_Q$. Note, however, that $\rho$ is fixed by every increasing automorphism of $G_Q$. \end{remark} \mathbb{S}ubsubsection{Image of $\rho$ consists of translations} \label{sssec:I-half-line-Groups-with-im-rho-translations} We turn now to the other type of abelian subgroups of $\Aff(IP \cdot A, P)$, but concentrate on a special case. Given a subgroup $Q$ of $P$ and a subgroup $A_0 \mathbb{S}ubseteq IP \cdot A$, we set \begin{equation} \label{eq:Definition-G-sub-Q-A0} G_{Q, A_0} = \left\{g \in G([0, \infty[\,;A,P) \mid \mathbb{S}igma_\ell(g) \in Q \text{ and } \rho(g) \in A_0 \rtimes \{1\} \right\}. \end{equation} The group $G_{Q, A_0}$ is an extension of $B([0,\infty[\,;A,P)$ by the abelian group $Q \times A_0$. The class of groups having the form $G_{Q, A_0}$ is of interest for several reasons. Firstly, if $Q$ and $A_0$ are \emph{not} isomorphic, every automorphism of $G_{Q,A_0}$ is increasing by Lemma \ref{lem:Consequence-decreasing-auto-I-half-line}. This case occurs frequently, as is brought home by the following kind of example. Suppose $Q$ is finitely generated and contains an integer $p > 1$, while $A_0$ is a non-zero submodule of $IP \cdot A$. Then $A_0$ is divisible by $p$ and, in particular, not free abelian. Secondly, some groups of the form $G_{Q, A_0}$ admit decreasing automorphisms, in particular the following ones. Let $P$ be a cyclic group generated by the real number $p>1$, let $A$ be a $\mathbb{Z}[P]$-submodule of $\mathbb{R}_{\add}$ and choose a positive element $b \in A$.
The group $\bar{G} = G([0,b]; A, P)$ admits decreasing automorphisms, for instance the automorphism induced by conjugation by the reflection $\bar{\varphi}$ at the midpoint of $I = [0, b]$. Consider now the group $G = G_{P, \mathbb{Z}\cdot (p-1)b} \mathbb{S}ubset G([0, \infty[\, ; A, P)$. It is isomorphic to $\bar{G}$; there exists actually an isomorphism induced by an increasing, infinitary PL-homeomorphism $\varphi_b \colon [0, \infty[\, \iso [0, b[$ (see \cite[Lemma E18.2]{BiSt14}). The composition $\varphi_b^{-1} \circ \bar{\varphi} \circ \varphi_b$ induces then by conjugation a decreasing automorphism of $G$. Thirdly, let $\tau_r \colon G_{Q,A_0} \to \mathbb{R}_{\add}$ be the homomorphism that maps the PL-ho\-meo\-morphism $g \in G_{Q,A_0}$ to the amplitude of the translation $\rho(g)$. This homomorphism seems to have a good chance of being fixed by $\Aut_+ G_{Q, A_0}$, but this impression is mistaken. Indeed, let $\Aut_P A_0$ be the set of elements $p \in P$ with $p \cdot A_0 = A_0$; this set is a subgroup of $P$ and the semi-direct product $A_0\rtimes \Aut_P A_0$ is a subgroup of $(IP \cdot A) \rtimes P$; let $\tilde{G}$ denote the preimage of $A_0\rtimes \Aut_P A_0$ under the epimorphism \[ \bar{\rho} \colon G([0, \infty[\,;A,P) \overset{\rho}{\twoheadrightarrow} \Aff(IP \cdot A, P) \iso (IP \cdot A) \rtimes P. \] Then $G_{Q,A_0}$ is a normal subgroup of $\tilde{G}$. The group $\tilde{G}$ contains the homothety $\vartheta_p \colon t \mapsto p \cdot t$ for every $p \in \Aut_P A_0$, and so conjugation by such a homothety induces an automorphism $\alpha_p$ of $G_{Q, A_0}$.
The calculation \begin{align*} (\tau_r \circ \alpha_p)(g) &= \tau_r(\vartheta_p \circ g \circ \vartheta_p^{-1}) = (\vartheta_p \circ g \circ \vartheta_p^{-1})(t) - t\\ &= \vartheta_p ( g ( p^{-1}t )) - t = p \cdot (p^{-1} t + \tau_r(g)) - t = (p \cdot \tau_r)(g), \end{align*} valid for every sufficiently large real number $t$, then shows that the formula \begin{equation} \label{eq:Transformation-tau} \tau_r \circ \alpha_p = p \cdot \tau_r \end{equation} holds for each $p \in \Aut_P A_0$. We conclude that $\tau_r$ can only be fixed by all of $\Aut_+ G_{Q,A_0}$ if $\Aut_P A_0$ is reduced to $1 \in \mathbb{R}^\times_{>0}$. This condition is fulfilled, for instance, if $A_0$ is infinite cyclic. \begin{example} \label{example:character-not-fixed} Given a real number $p > 1$, set $P = \gp(p)$ and $A = \mathbb{Z}[P] = \mathbb{Z}[p, p^{-1}]$. Choose $A_0 = A$ and set $G = G_{P,A_0}$. Then $\Aut_P A_0 = P$. Concrete examples are rational integers $p \in \mathbb{N} \mathbb{S}mallsetminus \{0, 1\}$, with $A_0 = \mathbb{Z}[1/p]$, or quadratic integers like $\mathbb{S}qrt{2} + 1$ with $A_0 = A = \mathbb{Z}[\mathbb{S}qrt{2}\,]$. We shall come back to the second of these examples in section \ref{sssec:Group-of-units-elementary-examples}. \end{example} \mathbb{S}ection{Characters fixed by $\Aut G(\mathbb{R};A,P)$} \label{sec:Homomorphisms-fixed-by-Aut-I-line} Let $I$ denote one of the intervals $[0,b]$, $[0, \infty[$ or $\mathbb{R}$, and let $G$ be a subgroup of $G(I;A,P)$ containing $B(I;A,P)$. In Sections \ref{sec:Homomorphisms-fixed-by-Aut-I-compact} and \ref{sec:Homomorphisms-fixed-by-Aut-I-half-line} groups with $I$ a compact interval or a half line have been studied. In this section we now turn to the line $I = \mathbb{R}$. Finding non-zero homomorphisms $\psi \colon G \to \mathbb{R}^\times_{>0}$ fixed by $\Aut G$ is then harder than in the previously investigated cases, and this for two reasons.
Firstly, subgroups of $G(\mathbb{R};A,P)$ often admit decreasing automorphisms $\alpha$, in contrast to what happens if $I$ is a half line; in the case of a decreasing automorphism, $\lambda$ (or $\rho$) is only fixed by $\alpha$ if $\lambda$ coincides with $\rho$. Secondly, if the image of $\lambda$ or that of $\rho$ consists of translations, neither $\lambda$ nor $\rho$ need be fixed by $\Aut_+ G$. The plan of our investigation will be similar to that adopted in Section \ref{sec:Homomorphisms-fixed-by-Aut-I-half-line}. We begin by discussing the existence of decreasing automorphisms (in section \ref{ssec:I-line-Existence-decreasing-autos}), move on to the main results about the existence of homomorphisms fixed by $\Aut_+ G$ or $\Aut G$ (in section \ref{ssec:I-line-Construction-homomorphisms-I}) and complement these results with more special findings in section \ref{ssec:I-line-Construction-homomorphisms-II}. The layout of the middle section \ref{ssec:I-line-Construction-homomorphisms-I} will resemble that of section \ref{ssec:Differentiability-criterion}. \mathbb{S}ubsection{Existence of decreasing automorphisms} \label{ssec:I-line-Existence-decreasing-autos} As in the cases of a compact interval or a half line, the existence of a decreasing automorphism has an easily stated consequence, namely \begin{lem} \label{lem:Consequence-decreasing-auto-I-line} Assume $G$ is a subgroup of $G(\mathbb{R};A,P)$ that contains $B(\mathbb{R};A,P)$. Then every decreasing automorphism $\alpha$ induces an isomorphism $\alpha_* \colon \im \lambda \iso \im \rho $ that renders commutative the following square. \begin{equation} \label{eq:square-lambda-rho} \xymatrix{G \ar@{->}[r]^-{\alpha} \ar@{->>}[d]^-{\lambda} & G \ar@{->>}[d]^-{\rho}\\ \im \lambda\ar@{->}[r]^-{\alpha_{*}} &\im \rho} \end{equation} \end{lem} \begin{proof} The claim can be established as in the proofs of Lemmata \ref{lem:Consequence-decreasing-auto-I-compact} and \ref{lem:Consequence-decreasing-auto-I-half-line}. 
\end{proof} The images of $\lambda$ and $\rho$ are both subgroups of the affine group $Q = \Aff_o(IP \cdot A, P)$. It is easy to describe some pairs of subgroups $(Q_1, Q_2)$ that are \emph{not} isomorphic for obvious reasons, for instance if one is abelian and the other is non-abelian. We are, however, not aware of a classification of the isomorphism types of subgroups of $\Aff_o(IP \cdot A, P)$ for parameters $A \neq \{0\}$ and $P \neq \{1\}$. \mathbb{S}ubsection{Construction of homomorphisms: part I} \label{ssec:I-line-Construction-homomorphisms-I} We turn now to the construction of homomorphisms that are fixed by $\Aut_+ G$ or by $\Aut G$. The next result is an analogue of Corollary \ref{crl:I-compact-summary-differentiability}. The main ingredient in its proof is Proposition \ref{prp:Affine-near-infty-I-half-line}. \begin{prp} \label{prp:I-line-affine-near-infty} Let $G$ be a subgroup of $G(\mathbb{R};A,P)$ containing $B(\mathbb{R};A,P)$ and let $\alpha$ be an automorphism of $G$ that is induced by conjugation by the auto-homeo\-mor\-phism $\varphi_\alpha \colon \mathbb{R} \iso \mathbb{R}$. Then the following statements hold: \begin{enumerate}[(i)] \item if $\alpha$ is increasing\footnote{See Definition \ref{definition:Increasing-isomorphism}.} and $\im \rho$ is not abelian, $\varphi_\alpha$ is affine near $\infty$; \item if $\alpha$ is increasing and $\im \lambda$ is not abelian, $\varphi_\alpha$ is affine near $-\infty$; \item if $\alpha$ is decreasing and $\im \rho$ is not abelian, $\varphi_\alpha$ is affine, both near $-\infty$ and near $\infty$. \end{enumerate} \end{prp} \begin{proof} (i) is a restatement of the claim of Proposition \ref{prp:Affine-near-infty-I-half-line}. To establish (ii), we show that (ii) can be reduced to (i). Let $\vartheta \colon \mathbb{R} \iso \mathbb{R}$ be the reflection in the origin 0, set $G_1 = \vartheta \circ G \circ \vartheta^{-1}$ and $\varphi_1 = \vartheta \circ \varphi_\alpha \circ \vartheta^{-1}$.
We claim that Proposition \ref{prp:Affine-near-infty-I-half-line} applies to the couple $(G_1, \varphi_1)$. Indeed, the groups $G(\mathbb{R};A,P)$ and $B(\mathbb{R};A,P)$ are invariant under conjugation by $\vartheta$ and so $G_1$ is a subgroup of $G(\mathbb{R};A,P)$ containing $B(\mathbb{R};A,P)$. Next, Lemma \ref{lem:Formula-involving-lambda-rho-theta} below shows that \[ \rho(G_1) = \rho\left( \vartheta \circ G \circ \vartheta^{-1}\right) = \vartheta \circ \lambda(G) \circ \vartheta^{-1}. \] The group $\vartheta \circ \lambda(G) \circ \vartheta^{-1}$ is isomorphic to $\im \lambda$, which is non-abelian by hypothesis, and so $\rho(G_1)$ is non-abelian. Proposition \ref{prp:Affine-near-infty-I-half-line} thus applies to $G_1$ and to $\varphi_1$ and implies that $\varphi_1 = \vartheta \circ \varphi_\alpha \circ \vartheta^{-1}$ is affine near $+\infty$, whence $\varphi_\alpha$ itself is affine near $-\infty$. (iii) Since $\alpha$ is \emph{decreasing}, the groups $\im \lambda$ and $\im \rho$ are isomorphic (see Lemma \ref{lem:Consequence-decreasing-auto-I-line}); the hypothesis on $\im \rho$ implies therefore that the image of $\lambda$ is not abelian. The idea now is to reduce (iii) to the previously treated cases (i) and (ii). As before, let $\vartheta \colon \mathbb{R} \iso \mathbb{R}$ denote the reflection in the origin 0, and set $\varphi_2 = \vartheta \circ \varphi_\alpha$. Then $\varphi_2$ is increasing and conjugation by $\varphi_2$ maps $G$ onto $\bar{G} = \vartheta \circ G \circ \vartheta^{-1}$. Proposition \ref{prp:Affine-near-infty-I-half-line} thus applies and guarantees that $\varphi_2$ is affine near $\infty$. But $\varphi_2 = \vartheta \circ \varphi_\alpha$ and so $\varphi_\alpha$ itself is affine near $\infty$. Consider, secondly, $\varphi_3 = \varphi_\alpha \circ \vartheta$. This map is again increasing, and conjugation by it maps $\bar{G} = \vartheta \circ G \circ \vartheta^{-1}$ onto $G$.
Invoking Proposition \ref{prp:Affine-near-infty-I-half-line} once more, we learn that $\varphi_3$ is affine near $+\infty$, and so $\varphi_\alpha$ itself is affine near $-\infty$. All taken together, we have shown that $\varphi_\alpha$ is affine, both near $-\infty$ and $+\infty$, as asserted by claim (iii). \end{proof} We are left with proving \begin{lem} \label{lem:Formula-involving-lambda-rho-theta} Let $\vartheta \colon \mathbb{R} \iso \mathbb{R}$ denote the reflection in 0. Then the formula \begin{equation} \label{eq:Formula-involving-lambda-rho-theta} \rho\left( \vartheta \circ g \circ \vartheta^{-1}\right) = \vartheta \circ \lambda(g) \circ \vartheta^{-1} \end{equation} holds for every $g \in \mathbb{P}L_o(\mathbb{R})$. \end{lem} \begin{proof} Let $\mu$ and $\nu$ denote the functions from $\mathbb{P}L_o(\mathbb{R})$ to $\Aff_o(\mathbb{R})$ given by the left hand and the right hand side of equation \eqref{eq:Formula-involving-lambda-rho-theta}; thus $\mu(g) = \rho(\vartheta \circ g \circ \vartheta^{-1})$ for $g \in \mathbb{P}L_o(\mathbb{R})$, and similarly for $\nu$. Both functions are homomorphisms of $\mathbb{P}L_o(\mathbb{R})$ into $\Aff_o(\mathbb{R})$ that vanish on $\ker \lambda$. It suffices therefore to check equation \eqref{eq:Formula-involving-lambda-rho-theta} on a complement of $\ker(\lambda \colon \mathbb{P}L_o(\mathbb{R}) \to \Aff(\mathbb{R}))$. Such a complement is $\Aff_o(\mathbb{R})$ and for affine maps $h$ the following calculation holds: \[ \rho(\vartheta \circ h \circ \vartheta^{-1}) = \vartheta \circ h \circ \vartheta^{-1} = \vartheta \circ \lambda(h) \circ \vartheta^{-1}. \qedhere \] \end{proof} \mathbb{S}ubsubsection{Some corollaries} \label{sssec:Applications-Proposition-I-line-affine-near-infty} The first corollary of Proposition \ref{prp:I-line-affine-near-infty} deals with homomorphisms fixed by $\Aut_+$; the corollary is an analogue of Theorem \ref{thm:Existence-psi-I-compact-increasing-autos}.
\begin{theorem} \label{thm:Existence-psi-I-line-image-lambda-and-sigma-non-abelian-increasing} Assume $G$ is a subgroup of $G(\mathbb{R};A,P)$ that contains $B(\mathbb{R};A,P)$. If $\im \rho$ is not abelian then $\mathbb{S}igma_r$ is a non-zero homomorphism fixed by $\Aut_+ G$. Similarly, $\mathbb{S}igma_\ell$ is a non-zero homomorphism fixed by $\Aut_+ G$ in case $\im \lambda$ is not abelian. \end{theorem} \begin{proof} Let $\alpha$ be an increasing automorphism of $G$ and let $\varphi_\alpha$ be the increasing auto-homeomorphism of $\mathbb{R}$ inducing $\alpha$ by conjugation. (The map exists thanks to Theorem \ref{thm:TheoremE16.4}.) Assume first that $\im \rho$ is not abelian. By part (i) of Proposition \ref{prp:I-line-affine-near-infty} the map $\varphi_\alpha$ is then affine near $\infty$. On the other hand, the image of $\rho$, being non-abelian, cannot consist merely of translations; so the homomorphism $\mathbb{S}igma_r \colon G \to P$ is non-zero. The following calculation then reveals that $\mathbb{S}igma_r$ is fixed by $\alpha$: \begin{align*} \left(\mathbb{S}igma_r \circ \alpha\right)(g) &= \mathbb{S}igma_r\left(\varphi_\alpha \circ g \circ \varphi_\alpha^{-1}\right)\\ &= \lim\nolimits_{t \to \infty} \left(\varphi_\alpha \circ g \circ \varphi_\alpha^{-1}\right)'(t)\\ &= \lim\nolimits_{t \to \infty} \left( \varphi_\alpha'\left(g(\varphi_\alpha^{-1}(t))\right) \cdot g'(\varphi_\alpha^{-1}(t)) \cdot (\varphi_\alpha^{-1})'(t) \right)\\ &= \lim\nolimits_{t \to \infty}g'(\varphi_\alpha^{-1}(t)) = \mathbb{S}igma_r(g). \end{align*} In this calculation the facts that the derivatives of $ \varphi_\alpha$ and of $g$ are constant on a half line of the form $[t_*, \infty[$ and that $\varphi_\alpha$ is an increasing homeomorphism, have been used. Assume next that $\im \lambda$ is not abelian. By part (ii) of Proposition \ref{prp:I-line-affine-near-infty} the map $\varphi_\alpha$ is then affine near $-\infty$. 
Moreover, the image of $\lambda$, being non-abelian, cannot consist merely of translations, and so the homomorphism $\sigma_\ell \colon G \to P$ is non-zero. Since the derivatives of every element $g \in G$ and of $\varphi_\alpha$ are constant near $-\infty$, a calculation similar to the preceding one will show that $\sigma_\ell$ is fixed by $\alpha$. \end{proof} As a second application of Proposition \ref{prp:I-line-affine-near-infty}, we present a result that furnishes a homomorphism $\psi$ that is fixed by every automorphism. Note, however, that the hypotheses of the result do not imply that $\psi$ is non-trivial. \begin{theorem} \label{thm:Existence-psi-I-line-image-rho-non-abelian} Assume $G$ is a subgroup of $G(\mathbb{R};A,P)$ containing $B(\mathbb{R};A,P)$ and let $\psi \colon G \to P$ be the homomorphism $g \mapsto \sigma_\ell(g) \cdot \sigma_r(g)$. If the images of $\lambda$ and of $\rho$ are both non-abelian, the homomorphism $\psi\colon G \to P$ is fixed by $\Aut G$. \end{theorem} \begin{proof} Let $\alpha$ be an automorphism of $G$ and let $\varphi_\alpha$ be the auto-homeomorphism of $\mathbb{R}$ that induces $\alpha$ by conjugation. If $\varphi_\alpha$ is \emph{increasing}, both $\sigma_\ell$ and $\sigma_r$ are fixed by $\alpha$ (see Theorem \ref{thm:Existence-psi-I-line-image-lambda-and-sigma-non-abelian-increasing}) and hence so is $\psi$. Assume now that $\alpha$ is \emph{decreasing}. Part (iii) of Corollary \ref{crl:I-compact-summary-differentiability} then guarantees that $\varphi_\alpha$ is affine near $-\infty$ and also near $\infty$. These facts imply the relations \begin{equation} \label{eq:Transformations} \sigma_\ell \circ \alpha = \sigma_r \quad\text{and}\quad \sigma_r \circ \alpha = \sigma_\ell \end{equation} (see below) and so $\psi = \sigma_\ell \cdot \sigma_r$ is fixed by $\alpha$. We are left with verifying relations \eqref{eq:Transformations}.
The following calculation uses the fact that both $\varphi_\alpha$ and $g$ have constant derivatives near $-\infty$ and $+\infty$: \begin{align*} \left(\sigma_\ell \circ \alpha\right)(g) &= \sigma_\ell\left(\varphi_\alpha \circ g \circ \varphi_\alpha^{-1}\right)\\ &= \lim\nolimits_{t \to -\infty} \left(\varphi_\alpha \circ g \circ \varphi_\alpha^{-1}\right)'(t)\\ &= \lim\nolimits_{t \to -\infty} \left( \varphi_\alpha'\left(g(\varphi_\alpha^{-1}(t))\right) \cdot g'(\varphi_\alpha^{-1}(t)) \cdot (\varphi_\alpha^{-1})'(t) \right)\\ &= \lim\nolimits_{t \to -\infty}g'(\varphi_\alpha^{-1}(t)) = \sigma_r(g). \end{align*} A similar calculation establishes the second relation in \eqref{eq:Transformations}. \end{proof} We continue with an easy consequence of Theorem \ref{thm:Existence-psi-I-line-image-rho-non-abelian}. If the group $G$ is all of $G(\mathbb{R};A,P)$ the homomorphism $\psi \colon g \mapsto \sigma_\ell(g) \cdot \sigma_r(g)$ is surjective; in addition, $\im \lambda$ and $\im \rho$ both coincide with $\Aff(A,P)$ and thus are non-abelian. Theorem \ref{thm:Existence-psi-I-line-image-rho-non-abelian} implies therefore \begin{crl} \label{crl:G=G(R;A,P)-I-line} If $G =G(\mathbb{R};A,P)$ the homomorphism $\psi \colon G \to P$, taking $g \in G$ to $\sigma_\ell(g) \cdot \sigma_r(g)$, is non-zero and fixed by $\Aut G$. \end{crl} Corollary \ref{crl:G=G(R;A,P)-I-line} is an analogue of Corollaries \ref{crl:G=G(I;A,P)} and \ref{crl:Existence-psi-I-halfline-fixed-by-AutG}. Groups of the form $G(\mathbb{R};A, P)$ have been investigated, so far, less often than groups with $I$ a compact interval; they have, however, their own merits when it comes to finite generation.
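Returning briefly to the proof of relations \eqref{eq:Transformations}: the verification of the second relation was left to the reader there. For completeness we sketch it here; the computation uses only facts already stated, namely the constancy of the derivatives of $\varphi_\alpha$ and $g$ near $\pm\infty$ and the fact that the decreasing map $\varphi_\alpha$ interchanges the two ends of $\mathbb{R}$.

```latex
% Sketch of the omitted verification of the second relation in
% \eqref{eq:Transformations}: since \varphi_\alpha is decreasing,
% \varphi_\alpha^{-1}(t) tends to -\infty as t tends to +\infty.
\begin{align*}
\left(\sigma_r \circ \alpha\right)(g)
  &= \sigma_r\left(\varphi_\alpha \circ g \circ \varphi_\alpha^{-1}\right)
   = \lim\nolimits_{t \to \infty}
     \left(\varphi_\alpha \circ g \circ \varphi_\alpha^{-1}\right)'(t)\\
  &= \lim\nolimits_{t \to \infty} g'\left(\varphi_\alpha^{-1}(t)\right)
   = \sigma_\ell(g).
\end{align*}
```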
There exists, first of all, a characterization of the finitely generated groups of the form $G(\mathbb{R};A, P)$, namely \begin{prp}[Theorem B7.1 in \cite{BiSt14}] \label{prp:Characterization-G-fg-I-line} The group $G(\mathbb{R};A,P)$ is finitely generated if, and only if, $P$ is finitely generated and $A$ is a finitely generated $\mathbb{Z}[P]$-module. \end{prp} \begin{remark} \label{remark:Continuously-many-non-isomorphic-groups} Proposition \ref{prp:Characterization-G-fg-I-line} implies that \emph{there are continuously many, pairwise non-isomorphic, finitely generated groups of the form $G(\mathbb{R};A,P)$}. To prove this assertion, we recall the following result: \emph{if two groups of the form $G(\mathbb{R}; A, P)$ and $G(\mathbb{R};\bar{A}, \bar{P})$ are isomorphic and if $P$ is not cyclic, then $P = \bar{P}$}. \footnote{See Theorem E17.1 in \cite{BiSt14}.} It suffices therefore to find a collection of finitely generated, pairwise distinct, subgroups $\{P_j \mid j \in J \}$ of $\mathbb{R}^\times_{>0}$ with $J$ an index set having the cardinality of $\mathbb{R}$, and to set $A_j = \mathbb{Z}[P_j]$ for each $j \in J$. Such a collection of subgroups can be obtained as follows: one constructs first a family of irrational real numbers $\{x_j \mid j \in J \}$ such that the extended family $\{1\} \cup \{x_j \mid j \in J \}$ is linearly independent over $\mathbb{Q}$, and then sets $P_j = \exp(\gp(\{1, x_j\}))$. Each group $P_j$ is then free abelian of rank two, hence not cyclic, and for indices $j_1 \neq j_2$ the groups $P_{j_1}$ and $P_{j_2}$ are distinct. \end{remark} \subsection{Construction of homomorphisms: part II} \label{ssec:I-line-Construction-homomorphisms-II} In this final part of Section \ref{sec:Homomorphisms-fixed-by-Aut-I-line}, we consider subgroups $G$ of $G(\mathbb{R};A,P)$, containing $B(\mathbb{R};A,P)$, with $\im \lambda$ and $\im \rho$ both abelian.
\footnote{If exactly one of $\im \lambda$ and $\im \rho$ is abelian, the group does not admit a decreasing automorphism (by Lemma \ref{lem:Consequence-decreasing-auto-I-line}) and so Theorem \ref{thm:Existence-psi-I-line-image-lambda-and-sigma-non-abelian-increasing} yields a non-zero homomorphism fixed by $\Aut G$.} The most interesting subcase seems to be that where the images of $\lambda$ and $\rho$ consist only of translations. Then two homomorphisms $\tau_\ell$ and $\tau_r$ of $G$ into $\mathbb{R}_{\add}$ can be defined: they associate to $g \in G$ the amplitudes of the translations $\lambda(g)$ and $\rho(g)$, respectively. One sees, as in section \ref{sssec:I-half-line-Groups-with-im-rho-translations}, that neither of these homomorphisms need be fixed by $\Aut_+ G$. An exception occurs if the image of $\rho$ or of $\lambda$ is \emph{infinite cyclic}. Suppose, for instance, that $\im \rho$ is infinite cyclic, and let $f \in G$ be an element that maps onto the positive generator, say $x_f$, of $\im \tau_r$. Consider an increasing automorphism $\alpha$ of $G$ and let $\varphi_\alpha$ be the homeomorphism of $\mathbb{R}$ that induces $\alpha$ by conjugation. Then $\tau_r(\alpha(f))$ generates $\im \tau_r$, too, and so $\tau_r(\alpha(f)) = \pm x_f$. Near $+\infty$, the map $f$ is a translation with positive amplitude, hence so is $\alpha(f) = \varphi_\alpha \circ f \circ \varphi_\alpha^{-1}$, and so $\tau_r(\alpha(f)) > 0$. Thus $(\tau_r \circ \alpha)(f) = \tau_r(f)$. We conclude that $\tau_r$ is fixed by $\alpha$. An analogous argument shows that $\tau_\ell$ is fixed by every increasing automorphism of $G$. All taken together, we have thus established \begin{prp} \label{prp:Images-tau-ell-and-tau-r-cylic} Let $G$ be a subgroup of $G(\mathbb{R};A,P)$ containing $B(\mathbb{R};A,P)$. Assume that the images of $\lambda$ and $\rho$ contain only translations and that these images are infinite cyclic.
Then $\tau_\ell$ and $\tau_r$ are both non-zero homomorphisms that are fixed by $\Aut_+ G$. \end{prp} \begin{example} \label{example:images-tau-ell-and-tau-r-cyclic} Suppose $P$ is an infinite cyclic group, $A$ a (non-zero) $\mathbb{Z}[P]$-module and $b$ a positive element of $A$. Set $\bar{G} = G([0,b];A,P)$. Then there exists a homeomorphism $\vartheta \colon ] 0, b[\, \iso \mathbb{R}$ that induces, by conjugation, an embedding \[ \mu \colon G([0,b];A,P) \rightarrowtail G(\mathbb{R}; A, P) \] whose image contains $B(\mathbb{R};A, P)$. \footnote{In special cases, for instance if $\bar{G}$ is Thompson's group $F$, this fact is well-known (see, e.g., \cite[Proposition 3.1.1]{BeBr05}); the general claim is established in \cite{BiSt14} (see Lemma E18.4).} Let $G$ denote the image of $\mu$. The images of $\lambda\restriction{G}$ and $\rho \restriction{G}$ are both infinite cyclic and consist of translations. The images of $\tau_\ell$ and $\tau_r$ are therefore infinite cyclic, too, and so the previous proposition applies. Let us now consider the special case where $P$ is generated by an integer $n \geq 2$, where $A = \mathbb{Z}[P] = \mathbb{Z}[1/n]$ and $b = 1$. For a suitably chosen homeomorphism $\vartheta$ the image $G$ of $\mu$ consists then of all elements $g \in G(\mathbb{R};\mathbb{Z}[1/n], \gp(n))$ fulfilling the conditions \begin{equation} \label{eq:Describing-image-mu} \sigma_\ell(g) = \sigma_r(g) = 1 \quad\text{and}\quad \tau_\ell(g) \in \mathbb{Z}(n-1), \quad \tau_r(g) \in \mathbb{Z}(n-1); \end{equation} see \cite[Lemma E18.4]{BiSt14}. This group $G$ is called $F_{n, \infty}$ in \cite[p.\;298]{BrGu98}.
By relaxing conditions \eqref{eq:Describing-image-mu} one obtains supergroups of $F_{n, \infty}$, in particular the group called $F_n$ in \cite[p.\;298]{BrGu98} and defined by the requirements \begin{equation} \label{eq:Describing-image-mu-2} \sigma_\ell(g) = \sigma_r(g) = 1 \quad\text{and}\quad \tau_\ell(g) \in \mathbb{Z}, \;\; \tau_r(g) \in \mathbb{Z}, \;\; \tau_r(g) - \tau_\ell(g) \in \mathbb{Z}(n-1); \end{equation} see \cite[Proposition 2.2.6]{BrGu98}. Proposition \ref{prp:Images-tau-ell-and-tau-r-cylic} applies to the groups $F_{n, \infty}$, but also to the larger groups $F_n$. Now, the groups $F_n$ and $F_{n, \infty}$ both admit decreasing automorphisms, in particular the automorphism induced by the reflection in the origin. The homomorphisms $\tau_\ell$ and $\tau_r$ are therefore not fixed by the full automorphism group of the groups $F_{n, \infty}$ and $F_n$, but the difference $\tau_r - \tau_\ell$ is a non-zero homomorphism, with infinite cyclic image, that enjoys this property. \end{example} \section{Characters fixed by $\Aut G$ with $G$ a subgroup of $PL_o([0,b])$} \label{sec:Generalization-GoKo10} In this section we prove Theorem \ref{thm:Generalization-GoKo10}. For the convenience of the reader we restate this result here as \begin{theorem} \label{thm:Generalization-GoKo10-bis} Suppose $I = [0,b]$ is a compact interval of positive length and $G$ is a subgroup of $PL_o(I)$ that satisfies the following conditions: \begin{enumerate}[(i)] \item no interior point of the interval $I = [0, b]$ is fixed by $G$; \item the characters $\chi_\ell$ and $\chi_r$ are both non-zero; \item the quotient group $G/(\ker \chi_\ell \cdot \ker \chi_r)$ is a torsion group, and \item at least one of the groups of units $U(\im\chi_\ell)$ and $U(\im \chi_r)$ is reduced to $\{1,-1\}$. \end{enumerate} Then there exists a non-zero homomorphism $\psi \colon G \to \mathbb{R}^\times_{>0}$ that is fixed by every automorphism of $G$.
The group $G$ has therefore property $R_\infty$. \end{theorem} We explain next the layout of Section \ref{sec:Generalization-GoKo10}. We begin by recalling the definition of the invariant $\Sigma^1$ and stating some basic results concerning it. In section \ref{ssec:Proof-Theorem-Generalization-GoKo10-bis}, we prove Theorem \ref{thm:Generalization-GoKo10-bis}. The hypotheses of the theorem allow for variations that deserve some comments; this topic is taken care of in sections \ref{ssec:Discussion-hypotheses-Generalization-GoKo10-bis} through \ref{ssec:Subgroups-of-finite-index}. \subsection{Review of $\Sigma^1$} \label{ssec:Review-Sigma1} Given an infinite group $G$, consider the real vector space $\Hom(G,\mathbb{R})$ made up of all homomorphisms $\chi \colon G \to \mathbb{R}_{\add}$ into the additive group of $\mathbb{R}$. These homomorphisms will be referred to as \emph{characters}. Two non-zero characters $\chi_1$ and $\chi_2$ are called equivalent if one is a positive real multiple of the other. Geometrically speaking, the associated equivalence classes are (open) rays emanating from the origin. The space of all rays is denoted by $S(G)$ and called the \emph{character sphere} of $G$. In case the abelianization $G_{\ab} = G/[G,G]$ of $G$ is finitely generated, the vector space $\Hom(G,\mathbb{R})$ is finite dimensional and carries a unique topology, induced by its norms; the sphere $S(G)$, equipped with the quotient topology, is then homeomorphic to the unit sphere in a Euclidean vector space of dimension $\dim_\mathbb{Q} H_1(G, \mathbb{Q}) = \dim_\mathbb{Q} (G_{\ab} \otimes \mathbb{Q})$. The invariant $\Sigma^1(G)$ is a subset of $S(G)$. It admits several equivalent definitions; in the sequel, we use the definition in terms of Cayley graphs.
\footnote{See, e.g., Chapter C in \cite{Str13} for alternate definitions.} Fix a generating set $\mathcal{X}$ of $G$ and define $\Gamma = \Gamma(G, \mathcal{X})$ to be the associated Cayley graph of $G$. This graph can be equipped with $G$-actions; as we want to work with \emph{left} $G$-actions, we define the set of positive edges of the Cayley graph like this: \[ E_+(\Gamma) = \{(g, g \cdot x) \in G \times G \mid (g, x) \in G \times \mathcal{X} \}. \] We move on to the \emph{definition of} $\Sigma^1(G)$. Given a non-zero character $\chi$, consider the submonoid $G_\chi = \{ g \in G \mid \chi(g) \geq 0\}$ of $G$ and define $\Gamma_\chi = \Gamma(G, \mathcal{X})_\chi$ to be the full subgraph of $\Gamma(G, \mathcal{X})$ with vertex set $G_\chi$. Both the submonoid $G_\chi$ and the subgraph $\Gamma_\chi$ remain the same if $\chi$ is replaced by a positive multiple; so these objects depend only on the ray $[\chi] = \mathbb{R}_{>0} \cdot \chi$ represented by $\chi$. The Cayley graph $\Gamma$ is connected, but its subgraph $\Gamma_\chi$ need not be so; the invariant $\Sigma^1(G)$ records the rays for which the subgraph $\Gamma_\chi = \Gamma(G, \mathcal{X})_\chi$ \emph{is connected}. In symbols, \begin{equation} \label{eq:Definition-Sigma1-XX} \Sigma^1(G, \mathcal{X}) = \{ [\chi] \in S(G) \mid \Gamma(G, \mathcal{X})_\chi \text{ is connected} \}. \end{equation} One now faces the problem, familiar from Homological Algebra, that the definition of $\Sigma^1(G, \mathcal{X})$ involves an arbitrary choice and that one wants to construct an object that does not depend on this choice. Suppose, first, that $G$ is \emph{finitely generated} and let $\mathcal{X}_f$ be a \emph{finite} generating set.
Then the subgraph $\Gamma(G, \mathcal{X}_f)_\chi$ is connected if, and only if, all the subgraphs $\Gamma(G, \mathcal{X})_\chi$, with $\mathcal{X}$ a generating set, are connected (see, e.g., \cite[Lemma C2.1]{Str13}) and so the following definition is licit: \begin{definition} \label{definition:Sigma1} Let $G$ be a finitely generated group and $\mathcal{X}_f$ a \emph{finite} generating set of $G$. Then $\Sigma^1(G)$ is defined to be the subset \begin{equation} \label{eq:Definition-Sigma1-fg} \{ [\chi] \in S(G) \mid \Gamma(G, \mathcal{X}_f)_\chi \text{ is connected} \}. \end{equation} \end{definition} The fact that the set \eqref{eq:Definition-Sigma1-fg} does not depend on the choice of the finite set $\mathcal{X}_f$ allows one to select $\mathcal{X}_f$ in accordance with the problem at hand; see \cite[Sections A2.3a and A2.3b]{Str13} for some consequences of this fact. Suppose now that $G$ is an arbitrary group. A useful subset of $S(G)$ can then be obtained by defining \begin{equation} \label{eq:Definition-Sigma1} \Sigma^1(G) = \{ [\chi] \in S(G) \mid \Gamma(G, \mathcal{X})_\chi \text{ is connected for every generating set } \mathcal{X}\} \end{equation} (cf.\,\cite[Definition C2.2]{Str13}). If $G$ happens to be finitely generated, the sets \eqref{eq:Definition-Sigma1-fg} and \eqref{eq:Definition-Sigma1} are equal; for an arbitrary group, the set $\Sigma^1(G)$ coincides with the invariant $\Sigma(G)$ defined by Ken Brown in \cite[p.\,489]{Bro87b} \emph{up to a sign}; in other words, \begin{equation} \label{eq:Relating-Brown-Cayley-graph} \Sigma(G) = - \Sigma^1(G). \end{equation} The sign in this formula is caused by the fact that Brown uses right actions on $\mathbb{R}$-trees, whereas \emph{left} actions are employed in our definition of $\Sigma^1$. The subset $\Sigma^1(G)$ of $S(G)$ is traditionally called the $\Sigma^1$-\emph{invariant}. The epithet ``invariant'' is justified by a fact that we explain next.
Suppose $\alpha \colon G \iso \bar{G}$ is an isomorphism of groups. Then $\alpha$ induces, first of all, a linear isomorphism of vector spaces $\Hom(\alpha, \mathbb{R}) \colon \Hom(\bar{G}, \mathbb{R}) \iso \Hom(G, \mathbb{R})$, and so an isomorphism of spheres \begin{equation} \label{eq:Invariance-Sigma1-isos} \alpha^* \colon S(\bar{G}) \iso S(G), \qquad [\bar{\chi}] \mapsto [\bar{\chi} \circ \alpha]. \end{equation} This second isomorphism maps the subset $\Sigma^1(\bar{G}) \subseteq S(\bar{G})$ onto $\Sigma^1(G) \subseteq S(G)$ (section B1.2a in \cite{Str13} has more details). In the sequel, the special case where $\alpha$ is an \emph{automorphism} will be crucial. The assignment \begin{equation} \label{eq:Representation-autos} \alpha \longmapsto (\alpha^{-1})^* \colon \Sigma^1(G) \iso \Sigma^1(G) \end{equation} defines a homomorphism of the automorphism group of $G$ into the group of bijections of $\Sigma^1(G)$, and hence also one into that of its complement $\Sigma^1(G)^c$. \subsection{$\Sigma^1$ of subgroups of $PL_o([0,b])$} \label{ssec:Sigma1-subgroups-PL(compact-interval)} Given a subgroup $G$ of $PL_o([0,b])$, let $\sigma_\ell$ be the homomorphism that assigns to a function $g \in G$ the value of its (right) derivative in the \emph{left} end point $0$; similarly, define $\sigma_r \colon G \to \mathbb{R}^\times_{>0}$ to be the homomorphism given by the formula $\sigma_r(g) = \lim\nolimits_{t \to b} g'(t)$. The homomorphisms $\sigma_\ell$ and $\sigma_r$ generalize the maps with the same names studied in Section \ref{sec:Homomorphisms-fixed-by-Aut-I-compact}. By composing them with the natural logarithm function, one obtains characters of $G$, namely \begin{equation} \label{eq:Definition-chi-ell-chi-r} \chi_\ell = \ln \circ \;\sigma_\ell \quad\text{and}\quad \chi_r = \ln \circ \;\sigma_r.
\end{equation} The invariant $\Sigma^1(G)^c$ turns out to consist of precisely two points, represented by the characters $\chi_\ell$ and $\chi_r$, provided $G$ satisfies certain restrictions. The first of them rules out that $G$ is a direct product of subgroups $G_1$, $G_2$ with supports in two disjoint open subintervals $I_1$, $I_2$, and more general decompositions; the second requires that $\chi_\ell$ and $\chi_r$ be non-zero and hence represent points of $S(G)$; the third condition is natural in the sense that it holds for all groups of the form $G([0,b];A,P)$ investigated in Section \ref{sec:Homomorphisms-fixed-by-Aut-I-compact}. \begin{theorem} \label{thm:Generalization-BNS} Let $I$ be a compact interval of positive length and $G$ a subgroup of $PL_o(I)$. Assume the following requirements are satisfied: \begin{enumerate}[(i)] \item no interior point of $I$ is fixed by $G$; \item the characters $\chi_\ell$ and $\chi_r$ are both non-zero, and \item the quotient group $G/(\ker \chi_\ell \cdot \ker \chi_r)$ is a torsion group. \end{enumerate} Then $\Sigma^1(G)^c=\{[\chi_\ell],[\chi_r]\}$. \end{theorem} \begin{remarks} \label{remarks:thm-Generalization-BNS} a) Theorem \ref{thm:Generalization-BNS} generalizes Theorem 8.1 in \cite{BNS}; there $G$ is assumed to be finitely generated and condition (iii) is sharpened to $G = \ker \chi_\ell \cdot \ker \chi_r$. The theorem also improves on a result stated in \cite[Remark on p.\,502]{Bro87b}. A proof of Theorem \ref{thm:Generalization-BNS}, based on the Cayley graph definition of $\Sigma^1(G)$, can be found in \cite{Str15}; see Theorem 1.1. b) We continue with a comment that seems overdue. In \cite{BNS} an invariant $\Sigma_{G'}(G)$ is introduced for finitely generated groups $G$; in the sequel, this invariant will be called $\Sigma^{BNS}(G)$. It is defined in terms of a generation property that uses \emph{right} conjugation, while left action is employed in the definition of $\Sigma^1(G)$.
There is, however, a close connection between the two invariants: if $G$ is finitely generated then \begin{equation} \label{eq:Relating-BNS-Cayley-graph} \Sigma^{BNS}(G) = - \Sigma^1(G), \end{equation} similar to what happens for Brown's invariant $\Sigma(G)$; see formula \eqref{eq:Relating-Brown-Cayley-graph}. Now, PL-homeomorphism groups are examples of groups made up of permutations, and for such a group $G$ the underlying set can be equipped with two familiar compositions. Suppose the composition in the group $G$ is the one familiar to analysts (and used in this paper); to emphasize this fact, call the group temporarily $G_{ana}$. The assignment $g \mapsto g^{-1}$ then defines an anti-automorphism of $G_{ana}$ and hence an isomorphism $\iota \colon G_{ana} \iso G_{gt}$ onto the group obtained by equipping the set underlying $G_{ana}$ with the composition $f \circ g \colon t \mapsto g(f(t))$ preferred by many \emph{g}roup \emph{t}heorists. The invariants of the groups $G_{ana}$ and $G_{gt}$ are then related by the formulae \[ \Sigma^1(G_{ana}) = - \Sigma^1(G_{gt}) \quad \text{and} \quad \Sigma^{BNS}(G_{ana}) = - \Sigma^{BNS}(G_{gt}). \] The analogous formula holds for the invariant $\Sigma$ studied in \cite{Bro87b}. The two parts of the comment, taken together, lead to the following formulae for groups made up of bijections: \begin{align} \text{$G_{gt}$ arbitrary } &\Longrightarrow \Sigma(G_{gt})= \Sigma^1(G_{ana}), \label{eq:Sigma-related-to Sigma1}\\ \text{$G_{gt}$ is finitely generated } &\Longrightarrow \Sigma^{BNS}(G_{gt}) = \Sigma^1(G_{ana}).
\label{eq:SigmaBNS-related-to Sigma1} \end{align} \end{remarks} \subsection{Proof of Theorem \ref{thm:Generalization-GoKo10-bis}} \label{ssec:Proof-Theorem-Generalization-GoKo10-bis} Let $I = [0, b]$ be an interval of positive length and $G$ a subgroup of $PL_o(I)$ that satisfies hypotheses (i) through (iv) stated in Theorem \ref{thm:Generalization-GoKo10-bis}. Hypotheses (i), (ii) and (iii) allow one to invoke Theorem \ref{thm:Generalization-BNS}, and so $\Sigma^1(G)^c = \{[\chi_\ell], [\chi_r] \}$. In view of the remarks made at the end of section \ref{ssec:Review-Sigma1}, every automorphism $\alpha$ of $G$ will therefore permute the set $\{[\chi_\ell], [\chi_r] \}$. Two cases now arise, depending on whether or not the automorphism group of $G$ acts by the identity on $\Sigma^1(G)^c$. Suppose first that $\Aut G$ \emph{acts trivially on} $\Sigma^1(G)^c$. By hypothesis (iv), one of the characters $\chi_\ell$ and $\chi_r$, say $\chi_\ell$, has an image $B$ with $U(B) = \{1,-1\}$. We assert that $\chi_\ell$ is fixed by $\Aut G$. Consider an automorphism $\alpha$ of $G$. It fixes the ray $\mathbb{R}_{>0} \cdot \chi_\ell$ and so $\chi_\ell \circ \alpha = s \cdot \chi_\ell$ for some positive real $s$. The relation $\chi_\ell \circ \alpha = s\cdot \chi_\ell$ implies next that \[ \im \chi_\ell= \im (\chi_\ell \circ \alpha) = s \cdot \im \chi_\ell. \] So $s$ is a positive element of $U(\im \chi_\ell) = \{1, -1\}$ and thus $s= 1$. So far we have assumed that $U(\im \chi_\ell)$ equals $\{1, -1\}$; if instead $U(\im \chi_r) = \{1, -1\}$, one proves in the same way that $\chi_r$ is fixed by $\Aut G$. The homomorphism $\psi \colon G \to \mathbb{R}^\times_{>0}$ can thus be chosen to be $\sigma_\ell$ if $U(\im \chi_\ell) = \{1, -1\}$ and to be $\sigma_r$ if $U(\im \chi_r)= \{1, -1\}$. \smallskip Assume now that $\Aut G$ \emph{interchanges the points} $[\chi_\ell]$ and $[\chi_r]$.
Let $\Aut_+G$ denote the subgroup of $\Aut G$ that fixes these points and pick an automorphism, say $\alpha_-$, that interchanges them. Then $\chi_r \circ \alpha_-= s \cdot \chi_\ell$ for some positive real $s$ and so $\im \chi_r = s \cdot \im \chi_\ell$. This relation implies that $U(\im \chi_\ell) = U(\im \chi_r) = \{1,-1\}$. We claim that the homomorphism \[ \psi = \sigma_\ell \cdot (\sigma_\ell \circ \alpha_-) \] is fixed by $\Aut G$. Two cases arise. If $\alpha \in \Aut_+ G$ then $\sigma_\ell$ is fixed by $\alpha$ in view of the first part of the proof. Moreover, $\alpha' = \alpha_- \circ \alpha \circ (\alpha_-)^{-1} \in \Aut_+ G$ and so the calculation \[ \psi \circ \alpha = (\sigma_\ell \circ \alpha) \cdot (\sigma_\ell \circ \alpha_-) \circ \alpha = \sigma_\ell \cdot (\sigma_\ell \circ \alpha') \circ \alpha_- = \sigma_\ell \cdot (\sigma_\ell \circ \alpha_-) =\psi \] holds. If $\alpha = \alpha_-$ then $\alpha_-^2 \in \Aut_+ G$ and so $ \psi \circ \alpha_- = (\sigma_\ell \circ \alpha_-) \cdot (\sigma_\ell \circ \alpha_-^2) = \psi. $ It follows that $\psi$ is fixed by $\Aut_+ G \cup \{\alpha_-\}$ and hence by $\Aut G$. \subsection{Discussion of the hypotheses of Theorem \ref{thm:Generalization-GoKo10-bis}} \label{ssec:Discussion-hypotheses-Generalization-GoKo10-bis} This section and the next two contain various remarks on the hypotheses of Theorem \ref{thm:Generalization-GoKo10-bis}. \subsubsection{Irreducibility} \label{ssec:Discussion-irreducibility} Let $G$ be a subgroup of $PL_o([0,b])$. The union of the supports of the elements of $G$ is then an open subset of $I = [0,b]$, and hence a union of disjoint intervals $J_k$ for $k$ running over some index set $K$.
For each $k \in K$ the assignment $g \mapsto g \restriction{J_k}$ defines an epimorphism $\pi_k$ onto a quotient group $G_k$, and so $G$ itself is isomorphic to a subgroup of the Cartesian product $\prod \{G_k \mid k \in K\}$; more precisely, $G$ is a subdirect product of the quotient groups $G_k$. Hypothesis (i) requires that $K$ be a singleton, and so the group $G$ does not admit such obvious decompositions. This fact prompted the authors of \cite{BNS} to call a group $G$ \emph{irreducible} if $\card(K) = 1$. If the group $G$ is not irreducible it may be a direct product $G_1 \times G_2$ with each factor $G_k$ an irreducible subgroup of $PL_o(I_k)$, where $I_k$ is the closure of $J_k$. Then $\Sigma^1(G)^c$ can contain more than 2 points (for more details, see \cite[Section 4.1]{Str15}). \subsubsection{Non-triviality of the characters $\chi_\ell$ and $\chi_r$} \label{ssec:Discussion-non-triviality} In Theorem \ref{thm:Generalization-GoKo10-bis} the characters $\chi_\ell$ and $\chi_r$ are assumed to be non-zero. They represent therefore points of $S(G)$; the remaining hypotheses and Theorem \ref{thm:Generalization-BNS} then guarantee that $\Sigma^1(G)^c = \{[\chi_\ell], [\chi_r]\}$, and so every automorphism of $G$ must permute the points $[\chi_\ell]$ and $[\chi_r]$. There exists a variant of Theorem \ref{thm:Generalization-GoKo10-bis} in which only one of the characters, say $\chi_\ell$, is non-zero, the remaining hypotheses being as before. Then $\Sigma^1(G)^c = \{[\chi_\ell]\}$ (see \cite[Theorem 1.1]{Str15}), and so the argument in the first part of the proof of Theorem \ref{thm:Generalization-GoKo10-bis} applies and shows that $\psi = \chi_\ell$ is fixed by every automorphism of $G$. Note that hypothesis (iii) holds automatically if $\chi_\ell$ or $\chi_r$ vanishes.
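As an illustration of these hypotheses, consider a well-known special case, recalled here for orientation only and not used in the sequel: Thompson's group $F$, viewed as the subgroup of $PL_o([0,1])$ whose elements have slopes in $\gp(2)$ and breakpoints in $\mathbb{Z}[1/2]$. Its two boundary characters are

```latex
% Thompson's group F (standard facts, recalled as an illustration only):
\[
\chi_\ell(g) = \ln g'(0) \in (\ln 2)\,\mathbb{Z},
\qquad
\chi_r(g) = \ln g'(1) \in (\ln 2)\,\mathbb{Z}.
\]
% Both characters are non-zero. Moreover, the pair (\chi_\ell, \chi_r)
% maps F onto (\ln 2)\mathbb{Z} \times (\ln 2)\mathbb{Z}: the
% abelianization of F is free abelian of rank 2 and is induced by exactly
% this pair of characters. Hence F = \ker \chi_\ell \cdot \ker \chi_r,
% so hypothesis (iii) holds even in its sharpened form.
```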
\subsubsection{Almost independence of $\chi_\ell$ and $\chi_r$} \label{ssec:Discussion-independence} Among the assumptions of Theorem 8.1 in \cite{BNS}, a sharper form of hypothesis (iii) is listed, namely $G = \ker \chi_\ell \cdot \ker \chi_r$; in addition, $G$ is assumed to be finitely generated. The authors of \cite{BNS} refer to this stronger condition by saying that ``$\chi_\ell$ and $\chi_r$ are independent''. In what follows, we exhibit various versions of this stronger requirement and then explain the reason that led the authors of \cite{BNS} to adopt the mentioned locution. We start out with a general result. \begin{lem} \label{lem:Equivalence-conditions} Let $\psi_1 \colon G \twoheadrightarrow H_1$ and $\psi_2 \colon G \twoheadrightarrow H_2$ be epimorphisms of groups. Then the following statements imply each other: \begin{align*} &\text{(i) } H_1 = \psi_1(\ker \psi_2), &\quad &\text{(ii) } H_2 = \psi_2(\ker \psi_1),\\ &\text{(iii) } G = \ker \psi_1 \cdot \ker \psi_2, &\quad &\text{(iv) } (\psi_1, \psi_2) \colon G \to H_1 \times H_2 \text{ is surjective}. \end{align*} \end{lem} \begin{proof} Note first that the product $\ker \psi_1 \cdot \ker \psi_2$ is a normal subgroup of $G$. Next, $\psi_1$ maps $G$ onto $H_1$ and $\ker \psi_1 \cdot \ker \psi_2$ onto $\psi_1(\ker \psi_2)$, and induces thus an isomorphism \begin{equation} \label{eq:Equivalence-i-and-iii} (\psi_1)_*\colon G/(\ker \psi_1 \cdot \ker \psi_2) \iso H_1/\psi_1(\ker \psi_2). \end{equation} It follows, in particular, that statements (i) and (iii) are equivalent. By exchanging the roles of the indices 1 and 2, one sees that statements (ii) and (iii) are equivalent. Assume now that statements (i) and (ii) hold and consider $(h_1, h_2) \in H_1 \times H_2$. Since $\psi_1$ is surjective, $h_1$ has a preimage $g_1 \in G$; as statement (i) holds, this preimage can actually be chosen in $\ker \psi_2$. If this is done, one sees that $(\psi_1,\psi_2)(g_1) = (h_1, 1)$.
One finds similarly that there exists $g_2 \in \ker \psi_1$ with $(\psi_1, \psi_2)(g_2) = (1, h_2)$. The product $g_1 \cdot g_2$ is therefore a preimage of $(h_1, h_2)$ under $(\psi_1,\psi_2)$. The preceding argument proves that the conjunction of (i) and (ii) implies statement (iv). Assume, finally, that (iv) holds. Given $h_1 \in H_1$, there exists then $g_1 \in G$ with $(\psi_1,\psi_2)(g_1) = (h_1, 1)$; so $g_1$ is a preimage of $h_1$ lying in $\ker \psi_2$. Implication (iv) $\Rightarrow$ (i) is thus valid, and so the proof is complete. \end{proof} \begin{remark} \label{remark:Origin-independent} Lemma \ref{lem:Equivalence-conditions} allows one to understand why the locution ``$\chi_\ell$ and $\chi_r$ are independent'' is used in \cite{BNS} to express the requirement that $G = \ker \chi_\ell \cdot \ker \chi_r$, the group $G$ being a finitely generated, irreducible subgroup of $PL_o([0,b])$. Let $\psi_1$ denote the epimorphism $G \twoheadrightarrow \im \chi_\ell$ obtained by restricting the codomain of $\chi_\ell \colon G \to \mathbb{R}$ to $\im \chi_\ell$, and let $\psi_2$ be defined analogously. If statement (iii) holds, implication (iii) $\Rightarrow$ (iv) of Lemma \ref{lem:Equivalence-conditions} shows that the image of $(\chi_\ell, \chi_r) \colon G \to \mathbb{R}_{\add} \times \mathbb{R}_{\add}$ is $\im \chi_\ell \times \im \chi_r$. This fact amounts to saying that the values of the characters $\chi_\ell$ and $\chi_r$ can be prescribed \emph{independently} (within $\im \chi_\ell \times \im \chi_r$), in contrast to what happens, for instance, if the characters satisfy a relation like $\chi_r = -\chi_\ell$.
\footnote{Example \ref{Example2-for-main-result-I-compact} considers more general relations.} By analyzing the proof of Theorem 8.1 in \cite{BNS} one finds that it suffices to require that the normal subgroup $\ker \chi_\ell \cdot \ker \chi_r$ has finite index in the finitely generated group $G$; a condition that we shall paraphrase by saying that $\chi_\ell$ and $\chi_r$ are \emph{almost independent}. Theorem \ref{thm:Generalization-BNS} extends this result to possibly infinitely generated groups $G$; the new form of hypothesis (iii) will likewise be referred to by saying that $\chi_\ell$ and $\chi_r$ are \emph{almost independent}. This form of almost independence is used in the proof of Theorem \ref{thm:Generalization-BNS} to find commuting elements of a certain type (see, e.g., \cite[Section 3.3]{Str15}). It remains unclear what $\Sigma^1(G)^c$ looks like if $\chi_\ell$ and $\chi_r$ are not almost independent. \footnote{Sections 4.2 and 4.3 in \cite{Str15} have some preliminary results.} \end{remark} \subsection{Group of units} \label{ssec:Group of units} In section \ref{ssec:Generalization-approach-GoKo10}, the \emph{group of units} $U(B)$ of a subgroup $B$ of $\mathbb{R}_{\add}$ is introduced. This notion allows one to state a very simple condition which implies, in conjunction with the hypotheses of Theorem \ref{thm:Generalization-BNS}, that $\Aut G$ fixes the character $\chi_\ell$ if it fixes the ray $[\chi_\ell] = \mathbb{R}_{>0} \cdot \chi_\ell$. In this section, we discuss the group of units of some concrete examples of subgroups $B$ of $\mathbb{R}_{\add}$ and then study two types of subgroups $B$ of $\mathbb{R}_{\add}$ where methods taken from the Theory of Transcendental Numbers allow one to establish that $B$ has only trivial units.
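For the reader's convenience we recall the notion in the form in which it is used below; this is a restatement of the definition given in the section cited above, not a new one.

```latex
% Group of units of a subgroup B of R_add, in the form used below:
\[
U(B) \;=\; \bigl\{\, s \in \mathbb{R}^{\times} \mid s \cdot B = B \,\bigr\},
\]
% a subgroup of the multiplicative group of non-zero reals. Since
% -B = B for every subgroup B, the group U(B) always contains \{1, -1\};
% the units 1 and -1 are called trivial.
```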
\subsubsection{Elementary examples} \label{sssec:Group-of-units-elementary-examples} We begin with an observation: \emph{a subgroup $B$ and a non-zero real multiple $s \cdot B$ of $B$ have the same group of units}. If $B$ is not reduced to 0, we may therefore assume that $1 \in B$. a) If $B$ is infinite cyclic, it is a positive multiple of $\mathbb{Z}$. Clearly $U(\mathbb{Z}) = \{1,-1\}$. b) If $B$ is free abelian of rank 2, we may assume that it is generated by 1 and an irrational number $\vartheta$; so $B = \mathbb{Z} \cdot 1 \oplus \mathbb{Z} \cdot \vartheta$. If $u$ is a unit of $B$ then $u = u \cdot 1 \in B$, say $u = a + b \cdot \vartheta$ with $(a,b) \in \mathbb{Z}^2$. The condition $u \cdot B \subseteq B$ implies next that $u \cdot \vartheta = a \cdot \vartheta + b \cdot \vartheta^2$ lies in $B$. If $b \neq 0$, the real $\vartheta$ is thus a quadratic algebraic number; if $b = 0$, the condition that $u \cdot B = B$ forces $a$ to be 1 or $-1$. It follows that $U(B) = \{1, -1\}$ if $\vartheta$ is an irrational, but not a quadratic algebraic number. c) Let $B$ be the additive group of a subring $R$ of $\mathbb{R}$, for instance the additive group of the ring $\mathbb{Z}[P]$ generated by a subgroup $P$ of $\mathbb{R}^\times_{>0}$ or of a ring of algebraic integers. Then $U(B)$ is nothing but the group of units $U(R)$ of $R$; if $R$ is a ring of the form $\mathbb{Z}[P]$ its group of units contains, of course, $P \cup -P$, but it may be considerably larger; moreover, rings of algebraic integers often also have units of infinite order. Note, however, that not every subring $R \neq \mathbb{Z}$ of $\mathbb{R}$ has non-trivial units, an example being the polynomial ring $\mathbb{Z}[s]$ generated by a transcendental number $s$.
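Item c) can be illustrated by a small computation. The sketch below is an informal check, not part of the examples above; it assumes the toy encoding of $a + b\sqrt{2}$ as the integer pair $(a,b)$ and verifies that $1 + \sqrt{2}$ is a unit of infinite order in $\mathbb{Z}[\sqrt{2\,}]$, so that $U(B) \neq \{1,-1\}$ for $B$ the additive group of this ring.

```python
# Toy encoding (an assumption of this sketch): a + b*sqrt(2) <-> (a, b).

def mul(x, y):
    """Multiply a + b*sqrt(2) by c + d*sqrt(2) in Z[sqrt(2)]."""
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

u = (1, 1)        # the element 1 + sqrt(2)
u_inv = (-1, 1)   # its inverse -1 + sqrt(2)

# u * u_inv = 1, so u is a ring unit; hence u * B = B for B = Z[sqrt(2)].
assert mul(u, u_inv) == (1, 0)

# The powers of u are pairwise distinct, so U(B) contains an element of
# infinite order, in contrast with U(Z) = {1, -1}.
powers = []
p = (1, 0)
for _ in range(10):
    p = mul(p, u)
    powers.append(p)
assert len(set(powers)) == 10
```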
\subsubsection{Transcendental subgroups} \label{sssec:Transcendental-subgoups} Many familiar examples of subgroups of $PL_o([0,b])$ consist of PL-homeomorphisms with rational slopes; this is true for Thompson's group $F$, but also for its generalizations $G_m = G([0,1];\mathbb{Z}[1/m], \gp(m))$ with $m \geq 3$ an integer and for many of the groups studied in Stein's paper \cite{Ste92}. The values of the characters $\chi_\ell$ are then natural logarithms of rational numbers, and so transcendental numbers or 0 (see, e.g., \cite[Theorem 9.11c]{Niv56}). We are thus led to study the unit groups $U(B)$ of subgroups $B \subset \mathbb{R}$ that contain transcendental numbers; in view of the fact that $U(B) = U(s \cdot B)$ for every $s \neq 0$, it is not so much the nature of the elements of $B$ that is important, but the nature of the quotients $b_1/b_2$ of non-zero elements in $B$. The following definition singles out a class of subgroups $B$ that turn out to be significant. \begin{definition} \label{definition:Transcendental} \begin{enumerate}[a)] \item Let $B \neq \{0\}$ be a subgroup of the additive group $\mathbb{R}_{\add}$ of the reals. We say $B$ is \emph{transcendental} if, for each ordered pair $(b_1, b_2)$ of non-zero elements in $B$, the quotient $b_1/b_2$ is either rational or transcendental. \item We call a non-zero character $\chi \colon G\to \mathbb{R}$ \emph{transcendental} if its image in $\mathbb{R}_{\add}$ is transcendental. \end{enumerate} \end{definition} The next result explains why transcendental subgroups are welcome in our study. \begin{prp} \label{prp:U-of-transcendental-B-1} If $B$ is a non-trivial, finitely generated, transcendental subgroup of $\mathbb{R}_{\add}$ then $U(B) = \{1,-1\}$. \end{prp} \begin{proof} Suppose $u$ is a unit of $B$. Then $u \cdot B = B$. Pick $b \in B \smallsetminus \{0\}$; this is possible since $B$ is not reduced to 0.
The assignment $1 \mapsto b$ extends to a homomorphism $\mathbb{Z}[u] \to B$ of $\mathbb{Z}[u]$-modules; it is injective since $\mathbb{R}$ has no zero-divisors. The fact that $B$ is finitely generated implies next that the additive group of the integral domain $\mathbb{Z}[u]$ is finitely generated, and so $u$ is an algebraic integer. As $B$ is transcendental by assumption, the quotient $u = (u \cdot b)/b$ is either rational or transcendental; being algebraic, $u$ must therefore be a rational number, hence an integer. Finally, $u^{-1}$ also satisfies the relation $u^{-1} \cdot B = B$, and so $u^{-1}$ is an integer, too. \end{proof} We continue with a combination of Theorem \ref{thm:Generalization-GoKo10-bis} and Proposition \ref{prp:U-of-transcendental-B-1}. \begin{crl} \label{crl:Generalization-GoKo10-bis} Suppose $I = [0,b]$ is a compact interval of positive length and $G$ is a subgroup of $PL_o(I)$ that satisfies the following conditions: \begin{enumerate}[(i)] \item no interior point of the interval $I = [0, b]$ is fixed by $G$; \item the characters $\chi_\ell$ and $\chi_r$ are both non-zero; \item the quotient group $G/(\ker \chi_\ell \cdot \ker \chi_r)$ is a torsion group, and \item the image of $\sigma_\ell$ or that of $\sigma_r$ is finitely generated and transcendental. \end{enumerate} Then there exists a non-zero homomorphism $\psi \colon G \to \mathbb{R}^\times_{>0}$ that is fixed by every automorphism of $G$. \end{crl} \subsubsection{Examples of transcendental subgroups of $\mathbb{R}_{\add}$} \label{sssec:Examples-transcendental-subgoups} In order to make use of Proposition \ref{prp:U-of-transcendental-B-1}, one needs a supply of transcendental subgroups of $\mathbb{R}$. The simplest ones are the cyclic subgroups; non-cyclic subgroups are harder to come by. Example \ref{example:logarithm-1} below describes a first collection of transcendental subgroups. It is based on the following theorem, established independently by A. O. Gelfond in 1934 and by T.
Schneider in 1935: \begin{theorem}[Gelfond-Schneider Theorem] \label{thm:Gelfond-Schneider} If $p_1$ and $p_2$ are non-zero (real or complex) algebraic numbers and if $p_2 \neq 1$ then $\ln p_1/ \ln p_2$ is either a rational or a transcendental number. \end{theorem} \begin{proof} See, e.g., Theorem 10.2 in \cite{Niv56}. \end{proof} \begin{example} \label{example:logarithm-1} Let $P$ denote a subgroup of $\mathbb{R}^\times_{>0}$ generated by a set $\PP$ of algebraic numbers and define $B = \ln P$ to be its image in $\mathbb{R}_{\add}$ under the natural logarithm. Then every element in $P$ is a positive algebraic number, and so the Gelfond-Schneider Theorem implies that every quotient $\ln p_1 /\ln p_2$ of elements in $P \smallsetminus \{1\}$ is either rational or transcendental; the subgroup $B$ is therefore transcendental. \end{example} In Example \ref{example:logarithm-1} the set $\PP$ is allowed to be infinite; for such a choice, the group $B = \ln(\gp(\PP))$ is not finitely generated and so neither Proposition \ref{prp:U-of-transcendental-B-1} nor its Corollary \ref{crl:Generalization-GoKo10-bis} applies. Now, in Proposition \ref{prp:U-of-transcendental-B-1} the finite generation of $B$ is only used to infer that a unit $u$ of $B$, which, by the transcendence of $B$, is either rational or transcendental, is also an algebraic integer, and hence a rational integer. Proposition \ref{prp:U-of-transcendental-B-2} below furnishes examples of infinitely generated, transcendental groups that have only 1 and $-1$ as units. Its proof makes use of the following result, due to C. L. Siegel and rediscovered by S. Lang (see \cite[Theorem II.1]{Lan66} or \cite[Theorem (1.6)]{Lan71}): \begin{theorem}[Siegel-Lang Theorem] \label{thm:Siegel-Lang} Suppose $\beta_1$, $\beta_2$ and $z_1$, $z_2$, $z_3$ are non-zero complex numbers.
If the subsets $\{\beta_1, \beta_2\}$ and $\{z_1, z_2, z_3\}$ are both $\mathbb{Q}$-linearly independent then at least one of the six numbers \[ \exp(\beta_i \cdot z_j) \text{ with } (i,j) \in \{1,2\} \times \{1,2,3\} \] is transcendental. \end{theorem} Here then is the announced result: \begin{prp} \label{prp:U-of-transcendental-B-2} Let $\PP$ be a set of positive algebraic numbers and set $B = \ln \gp(\PP)$. If $B$ is free abelian of positive rank then $U(B) = \{1,-1\}$. \end{prp} \begin{proof} Note first that every element of $P = \gp(\PP)$ is a positive algebraic number. Consider now a unit $u$ of $B$. Since $B$ has positive rank, it contains a non-zero element $b_1 = \ln q_1$. Then $u \cdot b_1 \in B \smallsetminus \{0\}$; so $b_2 = u \cdot b_1$ has the form $\ln q_2$ and thus $u = \ln q_2/\ln q_1$ is either rational or transcendental (by the Gelfond-Schneider Theorem). Assume first that $u$ is rational, say $u = m/n$ where $m$ and $n$ are relatively prime integers. The hypothesis $(m/n) \cdot B = B$ implies then that $m B = n B$. As $B$ is free abelian of positive rank this equality can only hold if $|m| = |n| = 1$. So $u \in \{1, -1\}$. Assume now that $u$ is transcendental. Fix $p \in P \smallsetminus \{1\}$. Then $u \cdot \ln p \in B = \ln P$; so there exists $q \in P$ with $\ln q = u \cdot \ln p$; put differently, $\exp(u \cdot \ln p)$ lies in $P$ and is thus an algebraic number. As the powers of $u$ are again units of $B$ it follows that $\exp(u^\ell \cdot \ln p) \in P$ for every $\ell \in \mathbb{N}$. Set \[ \beta_1 = \ln p, \quad \beta_2 = u \cdot \ln p \quad\text{and }\quad z_j = u^j \text{ for } j = 1, 2, 3. \] Then the sets $\{\beta_1, \beta_2\}$ and $\{z_1, z_2, z_3\}$ fulfill the hypotheses of Theorem \ref{thm:Siegel-Lang}; its conclusion, however, is contradicted by the previous calculation. This state of affairs shows that the unit $u$ cannot be transcendental.
\end{proof} \begin{example} \label{example:logarithm-2} Let $\PP$ be a non-empty set of (rational) prime numbers and let $P$ denote the subgroup of $\mathbb{Q}^\times_{>0}$ generated by $\PP$. Then $P$ is free abelian with basis $\PP$ (by unique factorization in $\mathbb{N}$) and so $U(\ln P) = \{1,-1\}$. More generally, every non-trivial subgroup $P$ of $\mathbb{Q}^\times_{>0}$ is a free abelian group and hence $B = \ln P$ has only the units $1$ and $-1$. \end{example} \subsubsection{Some properties of transcendental subgroups and transcendental characters} \label{ssec:Properties-transcendental-subgroups} The transcendence of a character is a property that has not yet been discussed in the literature on the invariant $\Sigma^1$. In this section, we therefore assemble a few useful properties of this notion. Assume $B \subset \mathbb{R}_{\add}$ is a transcendental subgroup. Then \begin{enumerate}[a)] \item every non-trivial subgroup $B' \subseteq B$ is transcendental (immediate from the definition); \item if $\chi \colon G \to \mathbb{R}$ is a character whose image is a non-trivial subgroup of $B$ then $\chi$ is transcendental (by (a)), and so are all the compositions $\chi \circ \pi$ with $\pi \colon \tilde{G} \twoheadrightarrow G$ an epimorphism of groups (immediate from the first part); \item if $\chi$, $\chi'$ are characters of $G$ with images equal to $B$, the image of $\chi + \chi'$ is contained in $B$, and so the character $\chi + \chi'$ is transcendental, unless it is 0; \item if $\chi$ is transcendental and $\alpha_1$, \ldots, $\alpha_m$ are automorphisms of $G$ the character \[ \eta = \chi \circ \alpha_1 + \cdots + \chi \circ \alpha_m \] is transcendental, unless it is zero. \end{enumerate} A further property is discussed in part (iv) of Proposition \ref{prp:Passage-to-finite-index} below.
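The rationality criterion implicit in Example \ref{example:logarithm-2} can be made computational. The sketch below is an informal illustration under an assumed toy encoding (not taken from the text): an element $a \cdot \ln 2 + b \cdot \ln 3$ of $B = \ln \gp(2,3)$ is stored as the exponent vector $(a,b)$. Since $\ln 2$ and $\ln 3$ are linearly independent over $\mathbb{Q}$, the quotient of two non-zero elements of $B$ is rational exactly when the exponent vectors are proportional over $\mathbb{Q}$; by Example \ref{example:logarithm-1} it is transcendental otherwise.

```python
from fractions import Fraction
from math import log

# Toy encoding (an assumption of this sketch):
#   a*ln(2) + b*ln(3)  <->  exponent vector (a, b), with (a, b) != (0, 0).

def rational_ratio(v, w):
    """Return b_v / b_w as a Fraction if it is rational, else None."""
    a, b = v
    c, d = w
    if a * d - b * c != 0:   # vectors not proportional over Q
        return None
    # proportional vectors: the ratio is a/c (or b/d) as a rational number
    return Fraction(a, c) if c != 0 else Fraction(b, d)

# 2*ln(2) + 4*ln(3) is exactly twice ln(2) + 2*ln(3):
assert rational_ratio((2, 4), (1, 2)) == Fraction(2, 1)
# ln(2)/ln(3) is not rational (it is in fact transcendental):
assert rational_ratio((1, 0), (0, 1)) is None
# numerical sanity check of the first ratio
assert abs((2*log(2) + 4*log(3)) / (log(2) + 2*log(3)) - 2) < 1e-12
```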
\subsection{Passage to subgroups of finite index} \label{ssec:Subgroups-of-finite-index} The next proposition shows that the hypotheses stated in Corollary \ref{crl:Generalization-GoKo10-bis} are inherited by subgroups of finite index. \begin{prp} \label{prp:Passage-to-finite-index} Let $G$ be a subgroup of $PL_o([0,b])$ and $H \subseteq G$ a subgroup of finite index. Denote the restrictions of $\chi_\ell$ and $\chi_r$ to $H$ by $\chi'_\ell$ and $\chi'_r$. Then the following statements are valid: \begin{enumerate}[(i)] \item $G$ is irreducible if, and only if, $H$ is so; \item $\chi_\ell$ is non-zero precisely if $\chi'_\ell$ has this property, and similarly for $\chi_r$ and $\chi'_r$; \item the characters $\chi_\ell$ and $\chi_r$ are almost independent if, and only if, $\chi'_\ell$ and $\chi'_r$ have this property; \item $\chi_\ell$ is transcendental exactly if $\chi'_\ell$ is so, and similarly for $\chi_r$ and $\chi'_r$. \end{enumerate} \end{prp} \begin{proof} Claim (i) holds since the support of a PL-homeomorphism $f$ coincides with that of its positive powers $f^m$. Assertion (ii) is valid since the image of a character is a subgroup of $\mathbb{R}_{\add}$ and hence torsion-free. The fact that the quotient $b_1/b_2$ of non-zero real numbers coincides, for every positive integer $m$, with the quotient $(mb_1)/(mb_2)$ allows one to see that a non-zero character $\chi$ of $G$ is transcendental if its restriction to $H$ is so; the converse is covered by property a) stated in section \ref{ssec:Properties-transcendental-subgroups}. This proves claim (iv). We are left with establishing statement (iii). To achieve this goal, we compare the quotient groups $G/(\ker \chi_\ell \cdot \ker \chi_r)$ and $H/(\ker \chi'_\ell \cdot \ker \chi'_r)$. By formula \eqref{eq:Equivalence-i-and-iii}, the first of them is isomorphic to the quotient group $A_1 = \im \chi_\ell/\chi_\ell(\ker \chi_r)$, and the second is isomorphic to $A_2 = \im \chi'_\ell/\chi'_\ell(\ker \chi'_r)$.
Clearly $A_2 = \im \chi'_\ell/\chi_\ell( \ker \chi'_r)$. The groups $A_1$ and $A_2$ fit into the short exact sequences \begin{align} A_2 = \im \chi'_\ell/\chi_\ell( \ker \chi'_r) &\hookrightarrow A = \im \chi_\ell/ \chi_\ell (\ker \chi'_r) \twoheadrightarrow \im \chi_\ell/ \im \chi'_\ell, \label{eq:Extending-A2}\\ \chi_\ell(\ker \chi_r)/ \chi_\ell(\ker \chi'_r) &\hookrightarrow A = \im \chi_\ell/ \chi_\ell (\ker \chi'_r) \twoheadrightarrow A_1 =\im \chi_\ell/\chi_\ell( \ker \chi_r). \label{eq:Mapping-onto-A1} \end{align} The claim now follows from the fact that $\im \chi_\ell/ \im \chi'_\ell$ and $\chi_\ell(\ker \chi_r)/ \chi_\ell(\ker \chi'_r)$ are finite groups whose orders divide the index of $H$ in $G$. \end{proof} A first application of Proposition \ref{prp:Passage-to-finite-index} is \begin{crl} \label{crl:Commensurable-groups} Let $G$ be a finitely generated, irreducible subgroup of $PL_o(I)$. If the characters $\chi_\ell$ and $\chi_r$ are almost independent and one of them is transcendental, then any group $\Gamma$ commensurable \footnote{Two groups $G_1$ and $G_2$ are called \emph{commensurable} if they contain subgroups $H_1$, $H_2$ that are isomorphic and of finite indices in $G_1$ and in $G_2$, respectively.} with $G$ has property $R_\infty$. \end{crl} \begin{proof} Let $H_0 \subset G$ be a finite index subgroup of $G$ that is isomorphic to a finite index subgroup $\Gamma_0$ of $\Gamma$. There then exists a \emph{finite index subgroup $\Gamma_1$ of $\Gamma_0$ that is characteristic in $\Gamma$} (see, e.g., \cite[Theorem IV.4.7]{LS77}). Let $H_1$ be the subgroup of $H_0$ that corresponds to $\Gamma_1$ under an isomorphism $H_0 \iso \Gamma_0$. Then $H_1$ has finite index in $G$ and thus Proposition \ref{prp:Passage-to-finite-index} allows us to infer that $H_1$ inherits the properties enunciated for $G$ in the statement of Corollary \ref{crl:Generalization-GoKo10-bis}.
This corollary applies therefore to $H_1$ and shows that $H_1$ admits a non-zero homomorphism $\psi_1 \colon H_1 \to \mathbb{R}^\times_{>0}$ that is fixed by $\Aut H_1$. So $H_1$, and hence $\Gamma_1$, satisfies property $R_\infty$. Use now that $\Gamma_1$ is a characteristic subgroup of $\Gamma$ and apply \cite[Lemma 2.2(ii)]{MuSa14a} to infer that $\Gamma$ satisfies property $R_\infty$. \end{proof} \begin{remark} \label{remark:Commensurability-and-Reidemeidster-number} If the group $G_1$ has property $R_\infty$, then a group $G_2$ commensurable to $G_1$ need not have this property, as is shown by the fundamental group $G_1$ of the Klein bottle and the fundamental group $G_2$ of a torus: the group $G_1$ has property $R_\infty$ by \cite[Theorem 2.2]{GoWo09}, while the automorphism $-\mathbbm{1}$ of $G_2 = \mathbb{Z}^2$ has Reidemeister number 4. \end{remark} \section{Miscellaneous examples} \label{sec:Miscellaneous-examples} In this section we illustrate the notions of irreducible subgroup, of almost independence of $\chi_\ell$ and $\chi_r$, and of the group of units by various examples. \subsection{Irreducible subgroups} \label{ssec:Irreducible-subgroups} Let $b$ be a positive real number and $G$ a subgroup of $PL_o([0,b])$. Recall that $G$ is called \emph{irreducible} if no interior point of $I = [0,b]$ is fixed by all of $G$ (see section \ref{ssec:Discussion-irreducibility} for the motivation behind this name). The group is irreducible if, and only if, the supports of the elements of $G$ cover the interior $\Int(I)$ of $I$, or, equivalently, if the supports of the elements in a generating set $\mathcal{X}$ of $G$ cover $\Int(I)$; these claims are easily verified. If $G$ is cyclic, generated by $f$, say, it is therefore irreducible if $f$ fixes no point in $\Int(I)$ or, equivalently, if $f^\varepsilon(t) < t$ for every $t \in \Int(I)$ and some sign $\varepsilon$.
Such a function is often called a \emph{bump}. \begin{example} \label{eq:One-bump} Here is a very simple kind of bump. Given a positive slope $s \neq 1$, set \begin{equation} \label{eq:Bump-function} f_s(t) = \begin{cases} (1/s)t, & \text{ if } 0\le t\le s/(s+1)\cdot b, \\ s \left(t- \frac{s\cdot b}{s+1}\right) + \tfrac{b}{s+1}, & \text{ if } s/(s+1) \cdot b < t \leq b. \end{cases} \end{equation} Then $f_s$ is continuous at $s/(s + 1) \cdot b$; since $f_s(0)= 0$ and $f_s(b) = b$, the function $f_s$ lies in $PL_o([0,b])$. Let $G_s$ denote the group generated by $f_s$ and let $\alpha$ be the automorphism that sends $f_s$ to its inverse $f_s^{-1}$. Then $(\chi_\ell\circ \alpha)(f_s) = \chi_\ell(f^{-1}_s) = -\chi_\ell(f_s)$; similarly $(\chi_r \circ \alpha)(f_s) = -\chi_r(f_s)$, whence \begin{equation} \label{eq:Relation-chi-ell-chi-r} \chi_\ell \circ \alpha = - \chi_\ell \text{ and } \chi_r \circ \alpha = - \chi_r. \end{equation} So neither $\chi_\ell$ nor $\chi_r$ is fixed by $\Aut(G_s)$. Theorem \ref{thm:Generalization-BNS} cannot be applied, as requirement (iii) is violated; indeed, $\ker \chi_\ell = \ker \chi_r = \{\mathbbm{1}\}$ and so $G_s/(\ker \chi_\ell \cdot \ker \chi_r)$ is infinite cyclic. The conclusion of Theorem \ref{thm:Generalization-BNS} is likewise false, for $\Sigma^1(G_s)^c = \emptyset$ (this follows, e.g., from Example A2.5a in \cite{Str13}). Property $R_\infty$, finally, does not hold either; for the Reidemeister number of the automorphism $\alpha$ is 2, as a simple calculation shows. \end{example} The groups in the previous example are cyclic; more challenging groups are considered in \begin{example} \label{eq:Several-bumps} Let $d > 1$ be an integer and $s_1$, \ldots, $s_d$ pairwise distinct, positive real numbers $\neq 1$. For each index $i \in \{1, \ldots, d\}$, define $f_i$ by formula \eqref{eq:Bump-function} with $s = s_i$, and set \[ G = G_{\{s_1, \ldots, s_d\}} = \gp(f_1, \ldots, f_d).
\] The group $G$ inherits two properties from the group $G_s$ in the previous example: it is irreducible (obvious), and the assignment $f_i \mapsto f_i^{-1}$ extends to an automorphism $\alpha$; indeed, the special form of the elements $f_i$ implies that conjugation by the reflection in the mid-point of $I = [0, b]$ sends $f_i$ to its inverse. It follows, as before, that the relations \eqref{eq:Relation-chi-ell-chi-r} are valid; so neither $\chi_\ell$ nor $\chi_r$ is fixed by $\Aut G $. Now to another property of the automorphism $\alpha$. The calculation \begin{equation} \label{eq:Relation-chi-ell-chi-r-bis} (\chi_\ell \circ \alpha)(f_i) = \chi_\ell (f_i^{-1}) =- \chi_\ell(f_i)= \chi_r(f_i) \end{equation} is valid for every index $i$. It shows that $\alpha$ exchanges $\chi_\ell$ and $\chi_r$. It follows, in particular, that $\ker \chi_\ell = \ker \chi_r$ and so the quotient \[ G/(\ker \chi_\ell \cdot \ker \chi_r) = G/\ker \chi_\ell \iso \im \chi_\ell = \gp(\ln s_1, \ldots, \ln s_d) \] is a non-trivial free abelian group of rank at most $d$. Requirement (iii) in Theorem \ref{thm:Generalization-BNS} is thus violated and so we cannot use that result to determine $\Sigma^1(G)^c$. Actually, only the following meager facts are known about $\Sigma^1(G)^c$: both $\chi_\ell$ and $\chi_r = - \chi_\ell$ represent points of $\Sigma^1(G)^c$ (by \cite[Proposition 2.5]{Str15}); moreover, the existence and form of the automorphism $\alpha$ and formula \eqref{eq:Representation-autos} imply that $\Sigma^1(G)^c$ is invariant under the antipodal map $[\chi] \mapsto [-\chi]$. The computation \eqref{eq:Relation-chi-ell-chi-r-bis} shows that $\chi_\ell \circ \alpha = - \chi_\ell$. This conclusion holds, actually, for every character $\chi \colon G \to \mathbb{R}$ and proves that no non-zero character of $G$ is fixed by $\alpha$. 
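The claims about the bumps can also be checked numerically. The following sketch is an informal illustration (with the assumed normalization $b = 1$ and sample slope $s = 2$); it implements formula \eqref{eq:Bump-function} and verifies that conjugation by the reflection in the midpoint of $[0,1]$ sends $f_s$ to its inverse.

```python
def bump(s, b=1.0):
    """The bump f_s of formula (eq:Bump-function) on [0, b]."""
    break_pt = s / (s + 1) * b
    def f(t):
        if t <= break_pt:
            return t / s
        return s * (t - s * b / (s + 1)) + b / (s + 1)
    return f

f = bump(2.0)
r = lambda t: 1.0 - t   # reflection in the midpoint of [0, 1]

# f fixes the endpoints of [0, 1]
assert abs(f(0.0)) < 1e-12 and abs(f(1.0) - 1.0) < 1e-12

# r o f o r is the inverse of f: applying it after f recovers t
for t in [0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0]:
    assert abs(r(f(r(f(t)))) - t) < 1e-9
```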
\end{example} \subsection{Independence of $\chi_\ell$ and $\chi_r$} \label{ssec:Independence} As before, let $G$ be a subgroup of $PL_o([0,b])$ with $b$ a positive real number. Recall that the characters $\chi_\ell$ and $\chi_r$ are called \emph{independent} if $G = \ker \chi_\ell \cdot \ker \chi_r$ (see section \ref{ssec:Discussion-irreducibility}). It follows that $\chi_\ell$ and $\chi_r$ are independent if, and only if, $G$ admits a generating set $\mathcal{X} = \mathcal{X}_\ell \cup \mathcal{X}_r$ in which the elements of $\mathcal{X}_\ell$ have slope 1 near $b$ and those of $\mathcal{X}_r$ have slope 1 near $0$. It is thus very easy to manufacture groups for which $\chi_\ell$ and $\chi_r$ are independent. In the next example, some very particular specimens are constructed. \begin{example} \label{example:Recipe-for-independence} Choose a real number $b_1 \in \; ]b/2,b[$. Given positive real numbers $s_1$, \ldots, $s_{d_\ell}$ that are pairwise distinct and not equal to 1, let $f_i$ be the bump defined by formula \eqref{eq:Bump-function} but with $s = s_i$ and $b = b_1$. Next let $s'_1$, \ldots, $s'_{d_r}$ be another sequence of positive reals that are pairwise distinct and different from 1. Use them to define bump functions $g_j$ with supports in $]b-b_1, b[$ like this: let $h_j$ be the function given by formula \eqref{eq:Bump-function} but with $s = s'_j$ and $b = b_1$, and then define $g_j$ to be $h_j$ conjugated by the translation with amplitude $b - b_1$. Finally set \begin{equation} \label{eq:Construction-independent} G = G_{\{s_1, \ldots, s_{d_\ell}, s'_1, \ldots, s'_{d_r}; b_1 \}} = \gp(f_1, \ldots, f_{d_\ell}, g_1, \ldots, g_{d_r}). \end{equation} In the sequel we assume that $d_\ell$ and $d_r$ are positive. Then $G$ is irreducible (since $b_1 > b-b_1$), the characters $\chi_\ell$, $\chi_r$ are non-zero and independent, and thus Theorem \ref{thm:Generalization-BNS} allows us to conclude that $\Sigma^1(G)^c = \{[\chi_\ell], [\chi_r] \}$.
The character $\chi_\ell$ is transcendental if all the positive reals $s_1$, \ldots, $s_{d_\ell}$ are algebraic (cf.\ Example \ref{example:logarithm-1}). Then $G$ admits a non-zero homomorphism $\psi \colon G \to \mathbb{R}^\times_{>0}$ that is fixed by $\Aut G$ (see Theorem \ref{thm:Generalization-GoKo10-bis}). If $G$ does not admit an automorphism $\alpha$ with $(\chi_\ell \circ \alpha) \in [\chi_r]$, the homomorphism $\psi$ can be chosen to be $\sigma_\ell$ (see the second paragraph of section \ref{ssec:Proof-Theorem-Generalization-GoKo10-bis}). The stated condition holds, in particular, if there does not exist a number $s$ with $\im \chi_r = s\cdot \im \chi_\ell$. Similar remarks apply to $\chi_r$. \end{example} \subsubsection{Independence versus almost independence} \label{sssec:Independence-versus-almost-indepndence} The characters $\chi_\ell$ and $\chi_r$ are called almost independent if $G/(\ker \chi_\ell \cdot \ker \chi_r)$ is a torsion group (see Remark \ref{remark:Origin-independent}). Statement (iii) of Proposition \ref{prp:Passage-to-finite-index} shows that almost independence of $\chi_\ell$ and $\chi_r$ is inherited by the restricted characters $\chi'_\ell = \chi_\ell \restriction{H}$ and $\chi'_r = \chi_r \restriction{H}$ whenever $H \subseteq G$ is a subgroup of finite index. The next result characterizes those ordered pairs $(G, H)$, with $\chi_\ell$ and $\chi_r$ independent, whose restrictions $\chi'_\ell$ and $\chi'_r$ are again independent. \begin{lem} \label{lem:Independence-and-finite-index} Let $G$ be a subgroup of $PL_o([0,b])$ for which $\chi_\ell$ and $\chi_r$ are independent and let $H \subset G$ be a subgroup of finite index.
Then the restrictions $\chi'_\ell$ and $\chi'_r$ of these characters are independent if, and only if, the homomorphism \begin{equation} \label{eq:Characterizing-homomorphism} \zeta \colon \chi_\ell(\ker \chi_r)/ \chi_\ell(\ker \chi'_r) \longrightarrow \im \chi_\ell/ \im \chi'_\ell, \end{equation} induced by the inclusions, is injective. \end{lem} \begin{proof} The justification will be an assemblage of facts extracted from the proof of Lemma \ref{lem:Equivalence-conditions} and from that of Proposition \ref{prp:Passage-to-finite-index}. Firstly, $\chi_\ell$ and $\chi_r$ are independent if, and only if, the abelian group $A_1 =\im \chi_\ell/\chi_\ell( \ker \chi_r)$ is 0. Similarly, $\chi'_\ell$ and $\chi'_r$ are independent precisely if $A_2 = \im \chi'_\ell/\chi_\ell( \ker \chi'_r)$ is the zero group. The groups $A_1$ and $A_2$ occur among the groups in the short exact sequences \eqref{eq:Extending-A2} and \eqref{eq:Mapping-onto-A1}. Since $A_1 = 0$, these exact sequences lead to the short exact sequence \[ \im \chi'_\ell/\chi_\ell( \ker \chi'_r) \hookrightarrow \chi_\ell(\ker \chi_r)/ \chi_\ell(\ker \chi'_r) = \im \chi_\ell/ \chi_\ell (\ker \chi'_r) \twoheadrightarrow \im \chi_\ell/ \im \chi'_\ell. \] It shows that $A_2 = \im \chi'_\ell/\chi_\ell( \ker \chi'_r)$ is the kernel of the homomorphism $\zeta$. \end{proof} It is now easy to construct independent characters $\chi_\ell$ and $\chi_r$ of $G$ whose restrictions to a subgroup of finite index are no longer independent. \begin{example} \label{example:Restrictions-need-not-be-independent} Let $G$ be a subgroup of $PL_o([0,b])$ and $H$ a subgroup of finite index. Assume the characters $\chi_\ell$ and $\chi_r$ are independent.
According to Lemma \ref{lem:Independence-and-finite-index} the restricted characters $\chi'_\ell$ and $\chi'_r$ of $H$ are independent if, and only if, the obvious homomorphism \[ \zeta \colon \chi_\ell(\ker \chi_r)/ \chi_\ell(\ker \chi'_r) \longrightarrow \im \chi_\ell/ \im \chi'_\ell \] is injective. The characters $\chi'_\ell$ and $\chi'_r$ of $H$ will therefore \emph{not} be independent whenever \begin{equation} \label{eq:Sufficient-condition-not-independent} \im \chi'_\ell = \im \chi_\ell \quad \text{but} \quad \chi_\ell(\ker \chi_r \cap H) \subsetneq \chi_\ell(\ker \chi_r). \end{equation} Now to some explicit examples. We begin with quotients of the groups we shall ultimately be interested in. Set $\bar{G} = \mathbb{Z}^2$, let $p \geq 2$ be an integer and set \[ \bar{H} = \mathbb{Z}(p,0) + \mathbb{Z} (1,1). \] Then $\bar{H}$ has index $p$ in $\bar{G}$. Next, let $\chi_1$, $\chi_2$ denote the canonical projections of $\mathbb{Z}^2$ onto its factors. Then \[ \chi_1(\bar{G}) = \mathbb{Z} = \chi_1 (\bar{H}),\quad \ker \chi_2 = \mathbb{Z} (1,0) \text{ and } \ker \chi_2 \cap \bar{H} = \mathbb{Z}(1,0) \cap \bar{H} = \mathbb{Z}(p,0), \] and thus \[ \chi_1(\ker \chi_2 \cap \bar{H}) = \mathbb{Z} \cdot p \subsetneq \chi_1(\ker \chi_2) = \mathbb{Z}. \] The auxiliary groups $\bar{G}$ and $\bar{H}$ satisfy therefore the relations \eqref{eq:Sufficient-condition-not-independent}. We are now ready to define the group $G$; it will be of the kind considered in Example \ref{example:Recipe-for-independence} with $d_\ell = d_r = 1$. Fix $b > 0$ and $b_1 \in \;]b/2, b[$ and choose positive numbers $s_1$, $s'_1$, both different from 1. Define $f_1$ and $g_1$ as in Example \ref{example:Recipe-for-independence} and set \[ G =\gp(f_1, g_1). \] Then $G$ is an irreducible subgroup of $PL_o([0,b])$ and the characters $\chi_\ell$ and $\chi_r$ of $G$ are independent.
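The computations with $\bar{G} = \mathbb{Z}^2$ and $\bar{H} = \mathbb{Z}(p,0) + \mathbb{Z}(1,1)$ are elementary and can be double-checked mechanically; the following sketch (an informal check with the assumed sample value $p = 3$) verifies the index of $\bar{H}$ and the strict inclusion $\chi_1(\ker \chi_2 \cap \bar{H}) \subsetneq \chi_1(\ker \chi_2)$.

```python
p = 3  # sample value; any integer p >= 2 works

def in_Hbar(v):
    """Membership test for Hbar = Z*(p,0) + Z*(1,1) inside Z^2."""
    x, y = v
    # (x, y) = m*(p,0) + n*(1,1)  forces  n = y  and  m = (x - y)/p
    return (x - y) % p == 0

# Hbar has index p in Z^2: the value (x - y) mod p classifies the cosets
reps = {(x - y) % p for x in range(10) for y in range(10)}
assert len(reps) == p

# ker chi_2 = Z*(1,0); its intersection with Hbar is Z*(p,0), so
# chi_1(ker chi_2 meet Hbar) = p*Z, a proper subgroup of chi_1(ker chi_2) = Z
assert in_Hbar((p, 0)) and not in_Hbar((1, 0))
assert all(in_Hbar((k * p, 0)) for k in range(-5, 6))
```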
Moreover, $G_{\ab}$ is free abelian of rank 2, freely generated by the canonical images of $f_1$ and $g_1$. Set $H = \gp(f_1^p, f_1 \circ g_1, [G,G])$. The above calculations then imply that \[ \chi_\ell(G) = \mathbb{Z} \cdot \ln s_1 = \chi_\ell (H), \text{ and } \chi_\ell(\ker \chi_r \cap H) = \mathbb{Z} \cdot p \cdot \ln s_1 \subsetneq \chi_\ell(\ker \chi_r) = \mathbb{Z} \cdot \ln s_1. \] \end{example} \subsection{Eigenlines} \label{ssec:Eigenlines} Let $G$ be an irreducible subgroup of $PL_o([0,b])$. If the characters $\chi_\ell$ and $\chi_r$ are non-zero and almost independent, then $\Sigma^1(G)^c$ consists of the two points $[\chi_\ell]$ and $[\chi_r]$ (by Theorem \ref{thm:Generalization-BNS}). Every automorphism $\alpha$ of $G$ either fixes or exchanges them. Suppose we are in the first case. Then $\chi_r \circ \alpha = s \cdot \chi_r$ for some positive real $s$, and so $\mathbb{R} \cdot \chi_r$ is an eigenline, with eigenvalue $s$, in the vector space $\Hom(G,\mathbb{R})$ acted on by $\alpha^*$. No example with $s \neq 1$ has been found so far. If the compact interval $[0, b]$ is replaced by the half line $[0, \infty[$, such examples exist, provided $\chi_r$ is replaced by a suitable analogue $\tau_r$. In order to construct examples, we return to the set-up of Section \ref{sec:Homomorphisms-fixed-by-Aut-I-half-line}. So $P$ is a non-trivial subgroup of $\mathbb{R}^\times_{>0}$ and $A$ is a non-trivial $\mathbb{Z}[P]$-submodule of $\mathbb{R}_{\add}$. Define $G$ to be the kernel of the homomorphism $\sigma_r \colon G([0,\infty[\,; A, P) \to \mathbb{R}^\times_{>0}$; thus $G$ consists of all the elements of $G([0,\infty[\,; A, P)$ that are translations near $+\infty$.
The analysis in section \ref{sssec:I-half-line-Groups-with-im-rho-translations} shows that conjugation by the PL-homeomorphism $f_p \colon \mathbb{R} \iso \mathbb{R}$, given by $f_p(t) = p \cdot t$ for $t \geq 0$, induces, for every $p \in P$, an automorphism $\alpha_p$ of $G$ that satisfies the relation \begin{equation} \label{eq:Transformation-tau-bis} \tau_r \circ \alpha_p = p \cdot \tau_r; \end{equation} here $\tau_r \colon G \to \mathbb{R}_{\add}$ is the character that sends $g \in G$ to the amplitude of the translation that coincides with $g$ near $+\infty$. This character $\tau_r$ shares an important property with the character $\chi_r$: the invariant $\Sigma^1(G)^c$ consists of two points, one represented by $\chi_\ell$, the other by $\tau_r$ (see \cite[Theorem 1.2]{Str15}). The image of $\tau_r$ in $\mathbb{R}_{\add}$ is a subgroup $B$ of $A$, namely \[ B = IP \cdot A = \left\{\sum \, (p-1) \cdot a \;\big | \; p \in P \text{ and } a \in A \right\} \] (cf.\ assertion (iii) of \cite[Corollary A5.3]{BiSt14}). The group of units $U(B)$ of $B$ contains the group $P$ and so it is not reduced to $\{1, -1\}$. The subgroup $B$ is typically infinitely generated; if so, $G$ is likewise infinitely generated. Examples of finitely generated groups $G = \ker \sigma_r$ are harder to find, and they are so far rare. Suppose the group $G([0,b]; A, P)$ is finitely generated for some $b \in A_{>0}$. Then $G([b,2b];A,P)$ is a finitely generated subgroup of the group of bounded elements $B([0,\infty[\,;A,P)$. Pick now an element $g_0 \in G$ that moves every point of the open interval $]0, \infty[$ to the right and satisfies the inequality $g_0(b) < 2b$. The translates of the interval $]b, 2b[$ under the powers of $g_0$ will then cover $]0, \infty[$. It follows that the subgroup \[ N = \gp(\{g_0^j \circ G([b, 2 b];A,P) \circ g_0^{-j} \mid j \in \mathbb{Z} \}) \] coincides with the bounded group $B([0, \infty[\,;A,P)$ (use \cite[Lemma E18.9]{BiSt14}).
So the group $B([0, \infty[\,;A,P) \rtimes \gp(g_0)$ is finitely generated. The group $G$, finally, is finitely generated if $G/N \iso \im \tau_r = IP \cdot A$ is finitely generated. To show that finitely generated groups of the form $G = \ker \sigma_r$ exist we thus need an example of a group $G([0,b];A, P)$ where both $G([0,b]; A, P )$ and the abelian group underlying $B = IP \cdot A$ are finitely generated. The parameters \[ P = \gp \left(\sqrt{2\,} +1\right), \quad A = \mathbb{Z}\left[\sqrt{2\,}\right] = \mathbb{Z}[P], \quad b = 1 \] lead to such a group; see \cite{Cle95}. \subsection{Variation on Theorem \ref{thm:Generalization-GoKo10-bis}} \label{ssec:Complement-to-main-theorem} Among the hypotheses of Theorems \ref{thm:Generalization-GoKo10-bis} and \ref{thm:Generalization-BNS} figures the requirement that $G$ acts irreducibly on the interval $[0,b]$. This requirement rules out, in particular, that $G$ is a product $G_1 \times G_2$ with $G_1$ acting irreducibly on some interval $I_1 = [0,b_1]$ and $G_2$ acting irreducibly on an interval $I_2= [b_2, b]$ that is disjoint from $I_1$. Suppose now we are in this excluded case and that the groups $G_1$, $G_2$ satisfy the assumptions of Theorem \ref{thm:Generalization-BNS}, suitably interpreted; more explicitly, suppose the characters $\chi_{1, \ell}$ and $\chi_{1, r}$ of $G_1$ are non-zero and almost independent, and similarly for the characters $\chi_{2, \ell}$ and $\chi_{2, r}$ of $G_2$. The question then arises whether $G = G_1 \times G_2$ admits a non-zero homomorphism $\psi \colon G \to \mathbb{R}^\times_{>0}$ that is fixed by $\Aut G$. We shall see that this is the case if at least one of the four groups $\im \chi_{1, \ell}$, $\im \chi_{1, r}$ and $\im \chi_{2, \ell}$, $\im \chi_{2, r}$ has a unit group that is reduced to $\{1,-1\}$. The following result is a variation on Theorem 3.2 in \cite{GoKo10}.
\begin{prp} \label{prp:Linearly-independent-characters} Let $G$ be a group for which $\Sigma^1(G)^c$ is a non-empty finite set with $m$ elements. Assume the rays $[\chi] \in \Sigma^1(G)^c$ span a subspace of $\operatorname{Hom}(G, \mathbb{R})$ having dimension $m$ over $\mathbb{R}$ and that $U(\im \chi_1) = \{1,-1\}$ for some point $[\chi_1] \in \Sigma^1(G)^c$. Then $G$ admits a non-trivial homomorphism $\psi \colon G \to \mathbb{R}^\times_{>0}$ that is fixed by $\Aut G$. \end{prp} \begin{proof} The automorphism group $\Aut G$ acts on $\Sigma^1(G)^c$ via the assignment \[ (\alpha, [\chi]) \mapsto [\chi \circ \alpha^{-1}]; \] let $\{ [\chi_1], \ldots, [\chi_n] \}$ be the orbit in $\Sigma^1(G)^c$ containing $[\chi_1]$. If $n = 1$, the point $[\chi_1]$ is fixed by $\Aut G$; hence $\chi_1$ itself is fixed by $\Aut G$ in view of the assumption that $U(\im \chi_1) = \{1, -1\}$, and so we can take $\psi = \exp \circ \chi_1$. Suppose now that $n > 1$ and choose, for every $i \in \{1, \ldots, n\}$, an automorphism $\alpha_i$ with $[\chi_i] = [\chi_1 \circ \alpha_i]$. Let $\alpha$ be an automorphism of $G$. For every index $i \in \{1, \ldots, n\}$ there exists then an index $j$ so that $[\chi_i \circ \alpha^{-1}] = [(\chi_1 \circ \alpha_i) \circ \alpha^{-1}]$ is equal to $[\chi_j] = [\chi_1 \circ \alpha_j]$. It follows that there exists a positive real number $s_{i,j}$ so that \[ \chi_1 \circ \alpha_i \circ \alpha^{-1} = s_{i,j} \cdot \chi_1 \circ \alpha_j. \] But if so, $\beta = \alpha_i \circ \alpha^{-1} \circ \alpha_j^{-1}$ is an automorphism with $\chi_1 \circ \beta = s_{i,j} \cdot \chi_1$. The assumption that $U(\im \chi_1) = \{1,-1\}$ then permits one to deduce that $s_{i,j} = 1$. So $\Aut G$ permutes the set of characters \begin{equation} \label{eq:Representative-characters} \chi_1 \circ \alpha_1, \quad \chi_1 \circ\alpha_2, \ldots, \chi_1 \circ\alpha_n. \end{equation} Their sum $\eta$ is therefore fixed by $\Aut G$.
It is non-zero since the characters displayed in \eqref{eq:Representative-characters} are linearly independent over $\mathbb{R}$. Set $\psi = \exp \circ \eta$. \end{proof} \begin{crl} \label{crl:Direct-product-of-irreducible-groups} Let $G_1$ be a subgroup of $PL_o([0,b_1])$ and let $G_2$ be a subgroup of $PL_o([b_2, b])$ with $0 < b_1 < b_2 < b$. Assume $G_1$ and $G_2$ are irreducible, the characters $\chi_{1,\ell}$ and $\chi_{1, r}$ of $G_1$ are non-zero and almost independent, and that the characters $\chi_{2,\ell}$ and $\chi_{2, r}$ of $G_2$ have the same properties. If the image of at least one of the four characters $\chi_{1,\ell}$, $\chi_{1, r}$ and $\chi_{2,\ell}$, $\chi_{2, r}$ has a unit group that is reduced to $\{1, -1\}$ then $G = G_1 \times G_2$ admits a non-trivial homomorphism $\psi \colon G \to \mathbb{R}^\times_{>0}$ that is fixed by $\Aut G$. \end{crl} \begin{proof} The hypotheses on $G_1$ and $G_2$ allow us to apply Theorem \ref{thm:Generalization-BNS} and so \[ \Sigma^1(G_1)^c = \{[\chi_{1,\ell}], [\chi_{1,r}] \} \quad \text{and} \quad \Sigma^1(G_2)^c = \{[\chi_{2,\ell}], [\chi_{2,r}] \}. \] The product formula for $\Sigma^1$ then implies that $\Sigma^1(G)^c$ consists of the four points represented by \begin{equation} \label{eq:Points-of-product} \chi_{1,\ell} \circ \pi_1, \quad \chi_{1,r} \circ \pi_1, \qquad \chi_{2,\ell} \circ \pi_2, \quad \chi_{2,r} \circ \pi_2; \end{equation} here $\pi_i \colon G \twoheadrightarrow G_i$ denotes the canonical projection onto the $i$-th factor $G_i$ (see, e.g., \cite[Proposition C2.55]{Str13}).
These four characters are $\mathbb{R}$-linearly independent: $\chi_{1,\ell} \circ \pi_1$ and $\chi_{1,r} \circ \pi_1$ are linearly independent since both are non-zero and $\ker \chi_{1, \ell} \neq \ker \chi_{1,r}$ by the almost independence of $\chi_{1,\ell}$ and $\chi_{1,r}$; the same holds for $\chi_{2,\ell} \circ \pi_2$ and $\chi_{2,r} \circ \pi_2$; and $\pi_1^*(\operatorname{Hom}(G_1, \mathbb{R}))$ and $\pi_2^*(\operatorname{Hom}(G_2, \mathbb{R}))$ are complementary subspaces of $\operatorname{Hom}(G, \mathbb{R})$. Finally, at least one of the four characters displayed in \eqref{eq:Points-of-product} has an image $B$ with $U(B) = \{1,-1\}$. All the assumptions of Proposition \ref{prp:Linearly-independent-characters} are thus satisfied and so the contention of the corollary follows from that proposition. \end{proof} \begin{remark} \label{Direct-product-R-infty} It is not known whether the direct product of two groups $G_1$, $G_2$, each of which has property $R_\infty$, again has property $R_\infty$. The previous corollary implies that this is so if the groups $G_1$ and $G_2$ satisfy the assumptions of the corollary. \end{remark} \section{Complements} \label{sec:Complements} By Remark \ref{remark:Continuously-many-non-isomorphic-groups} there exist continuously many pairwise non-isomorphic, finitely generated groups of the form $G(\mathbb{R};A, P)$, and by Corollary \ref{crl:G=G(R;A,P)-I-line} each of these groups admits a non-zero homomorphism $\psi$ into $P$. These facts prompt the question whether there exist similarly large collections of finitely generated subgroups of $PL_o(I)$ with $I$ a compact interval, say $I = [0,1]$. Since only countably many \emph{finitely generated} groups of the form $G([0,1]; A, P)$ have been found so far, we look for finitely generated groups that satisfy the assumptions of Theorem \ref{thm:Generalization-GoKo10-bis}. In section \ref{ssec:Uncountable-class} we exhibit a collection $\mathcal{G}$ of 3-generator groups with the desired properties.
Checking that each group in $\mathcal{G}$ satisfies the assumptions of Theorem \ref{thm:Generalization-GoKo10-bis} is fairly easy; the verification that distinct groups in $\mathcal{G}$ are not isomorphic, however, is more demanding. We shall succeed by exploiting properties of the $\Sigma^1$-invariant of the groups in $\mathcal{G}$ in a roundabout manner. In section \ref{ssec:Unexpected-isomorphisms} we then describe a collection of 2-generator groups which, despite appearances, turn out to be pairwise isomorphic. This indicates once more that criteria which allow one to prove that two given, similar-looking, groups are not isomorphic are very useful. In the final section, we give such a criterion. \subsection{A large collection of groups $G$ with characters fixed by $\Aut G$} \label{ssec:Uncountable-class} In this section we construct a collection $\mathcal{G}$ of pairwise non-isomorphic groups $G_s$ with the following properties: \begin{enumerate}[(i)] \item each $G_s \in \mathcal{G}$ is an irreducible subgroup of $PL_o([0,1])$ generated by 3 elements; \item the characters $\chi_\ell$, $\chi_r$ of $G_s$ are independent and have rank 1 and rank 2, respectively; \item for each $G_s \in \mathcal{G}$ the character $\chi_\ell$ is fixed by $\Aut G_s$, and \item the cardinality of $\mathcal{G}$ is that of the continuum. \end{enumerate} \subsubsection{Construction of the groups $G_s$} \label{sssec:Construction-groups} The groups $G_s$ are obtained by the recipe described in Example \ref{example:Recipe-for-independence}. Fix a triple $s = (s_1, s_2= s'_1, s_3= s'_2)$ of real numbers in $]1, \infty[$. Let $f_s$ be the PL-homeomorphism defined by formula \eqref{eq:Bump-function} with $s = s_1$ and $b = \tfrac{3}{4}$. Next, let $g_s$ be the function obtained by putting $s = s_2$, $b = \tfrac{3}{4}$ and then conjugating the function so obtained by the translation with amplitude $\tfrac{1}{4}$.
Similarly, let $h_s$ be the function obtained by setting $s = s_3$, $b = \tfrac{3}{4}$ and by conjugating the function so obtained by the translation $t \mapsto t + \tfrac{1}{4}$. Finally, set \begin{equation} \label{eq:Dinition-G-sub-s} G_s = G_{\{s_1, s_2, s_3\}} = \gp \left(f_s, g_s, h_s\right). \end{equation} The definition of $G_s$ shows that it is an irreducible subgroup of $PL_o([0,1])$ with non-zero and independent characters $\chi_\ell$ and $\chi_r$. By Theorem \ref{thm:Generalization-BNS}, the complement of $\Sigma^1(G_s)$ consists therefore of the two rays $[\chi_\ell]$ and $[\chi_r]$. Consider now an automorphism $\alpha$ of $G_s$. It induces an auto-homeomorphism $\alpha^*$ of the sphere $S(G_s)$ that maps the subset $\Sigma^1(G_s)^c$ onto itself. Suppose $\alpha^*$ is the identity on $\Sigma^1(G_s)^c$. Since $\chi_\ell$ has rank 1 and is thus transcendental, the first two paragraphs of section \ref{ssec:Proof-Theorem-Generalization-GoKo10-bis} apply and show that $\alpha$ fixes the character $\chi_\ell$ and hence also the homomorphism $\sigma_\ell \colon G \to \mathbb{R}^\times_{>0}$. This homomorphism $\sigma_\ell$ will therefore be fixed by all of $\Aut G_s$ whenever the images of $\chi_\ell$ and $\chi_r$ are not isomorphic. \subsubsection{Additional assumptions} \label{sssec:More-assumptions-groups} Assume therefore that $s_2$ and $s_3$ are \emph{multiplicatively independent}. Then the free abelian group \[ \im \chi_r = \ln \gp(\{s_2, s_3\}) = \mathbb{Z} \ln s_2 + \mathbb{Z} \ln s_3 \] has rank 2. Consider now two triples $s$ and $s'$ where $ s'_2 = s_2$ and where both pairs $\{s_2, s_3 \}$ and $\{s_2, s'_3\}$ are multiplicatively independent. Suppose there exists an isomorphism $\beta \colon G_s \iso G_{s'}$. Then $\beta$ induces a homeomorphism $\beta^* \colon S(G_{s'}) \iso S(G_s)$ that maps $\{[\chi'_\ell], [\chi'_r]\}$ onto $\{[\chi_\ell], [\chi_r]\}$.
The ranks of the involved characters imply that $\beta^*[\chi'_r] = [\chi_r]$; so there exists a positive real number $u$ with $\chi'_r \circ \beta = u \cdot \chi_r$. It follows that $\im \chi'_r = u \cdot \im \chi_r$ or, equivalently, that \[ \mathbb{Z} (\ln s'_3) + \mathbb{Z} (\ln s_2) = u \cdot \left(\mathbb{Z} (\ln s_3) + \mathbb{Z} (\ln s_2) \right). \] This equality amounts to saying that there exists a matrix $T = \left(\begin{smallmatrix} a&b\\c&d \end{smallmatrix}\right) \in \operatorname{GL}(2, \mathbb{Z})$ such that \[ \begin{pmatrix} \ln s'_3 \\ \ln s_2 \end{pmatrix} = u \cdot T \cdot \begin{pmatrix} \ln s_3 \\ \ln s_2 \end{pmatrix} = u \cdot \begin{pmatrix} a \cdot \ln s_3 + b \cdot \ln s_2 \\c \cdot \ln s_3 + d \cdot \ln s_2\end{pmatrix}. \] It follows that \[ \frac{\ln s'_3}{\ln s_2} = \frac{a(\ln s_3/\ln s_2) + b}{c(\ln s_3/\ln s_2) + d}\;; \] alternatively put, the numbers $\ln s_3$, $\ln s'_3$ lie in the same orbit of the group \begin{equation} \label{eq:Definition-group-H-sub-s2} H_{s_2} = \begin{pmatrix}\ln s_2&0\\0&1\end{pmatrix} \cdot \operatorname{GL}(2,\mathbb{Z}) \cdot \begin{pmatrix}\ln s_2&0\\0&1\end{pmatrix}^{-1} \end{equation} acting on the extended real line $\mathbb{R} \cup \{\infty\}$ by fractional linear transformations. \subsubsection{Consequences} \label{sssec:Consequences} It is now easy to exhibit a collection of groups $\mathcal{G}$ that enjoy the properties stated at the beginning of section \ref{ssec:Uncountable-class}. Choose first a number $s_1 > 1$, for instance $s_1 = 2$, and select $s_2$ so that $\ln s_2$ is rational, for instance $s_2 = \exp 1$. The group $H_{s_2}$ is then a subgroup of $\operatorname{GL}(2, \mathbb{Q})$; it acts on $\mathbb{R} \cup \{\infty\}$ by fractional linear transformations. The set $\mathbb{Q} \cup \{\infty\}$ is an orbit; all other orbits are made up of irrational numbers.
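The basis-change computation above can be verified numerically; in the following Python sketch the values of $\ln s_2$, $\ln s_3$ and the matrix $T$ are arbitrary illustrative choices, not data from the text:

```python
# Check: the lattice equality  Z ln(s3') + Z ln(s2) = u(Z ln(s3) + Z ln(s2))
# with transition matrix T in GL(2, Z) forces ln(s3') to be the image of
# ln(s3) under a fractional linear transformation from H_{s2}.
import math

ln_s2 = 1.0                    # e.g. s2 = exp(1), so ln(s2) is rational
ln_s3 = math.sqrt(5.0)         # an arbitrary irrational value (assumption)
a, b, c, d = 2, 1, 1, 1        # T = [[2,1],[1,1]] lies in GL(2, Z), det = 1

# Basis change: (ln s3', ln s2) = u * T * (ln s3, ln s2).  The second row
# determines u, since the second coordinate must stay equal to ln(s2):
u = ln_s2 / (c * ln_s3 + d * ln_s2)
ln_s3_prime = u * (a * ln_s3 + b * ln_s2)

# Fractional linear prediction from the displayed formula:
x = ln_s3 / ln_s2
prediction = ln_s2 * (a * x + b) / (c * x + d)

assert abs(ln_s3_prime - prediction) < 1e-12
assert abs(u * (c * ln_s3 + d * ln_s2) - ln_s2) < 1e-12
```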
Use the axiom of choice to find a set of representatives $\mathcal{T}$ of the orbits of $H_{s_2}$ contained in $\mathbb{R} \smallsetminus \mathbb{Q}$. For every $t \in \mathcal{T}$ the numbers $\ln s_2$ and $t$ are then $\mathbb{Q}$-linearly independent, and hence $s_2$ and $\exp t$ are multiplicatively independent. Since $\mathbb{R} \smallsetminus \mathbb{Q}$ has the cardinality of the continuum and $H_{s_2}$ is countable, the set $\mathcal{T}$ likewise has the cardinality of the continuum. The collection \begin{equation} \label{eq:Definition-GG} \mathcal{G} = \{ G_{(s_1, s_2, \exp t)} \mid t \in \mathcal{T} \} \end{equation} therefore enjoys properties (i) through (iv) stated at the beginning of section \ref{ssec:Uncountable-class}. \subsection{Some unexpected isomorphisms} \label{ssec:Unexpected-isomorphisms} Let $t_1$, $t_2$ be distinct irrational numbers and consider the groups $G_1 = G_{(2, \exp 1, \exp t_1)}$ and $G_2= G_{(2, \exp 1, \exp t_2)}$. We do not know under which conditions on $t_1$ and $t_2$ the groups $G_1$ and $G_2$ are isomorphic. In the construction of the collection $\mathcal{G}$, carried out in section \ref{ssec:Uncountable-class}, we therefore proceeded in a very cautious manner and required that distinct elements in the parameter space $\mathcal{T}$ fail to satisfy a certain condition. The question now arises whether this approach is overly pessimistic. The next example indicates that caution may have been appropriate. We begin with a simple, but surprising, lemma.
\footnote{The third author got word of this result in discussions with Matt Brin and Matt Zaremsky.} \begin{lem} \label{lem:Ubiquity-of-F} Suppose $G$ is a subgroup of $PL_o([a, d])$ generated by two PL-homeo\-mor\-phisms $f$ and $g$ with the following properties: \begin{enumerate}[(i)] \item $\operatorname{supp} f = \,]a, c[$ and $f(t) < t$ for $t \in \operatorname{supp} f$, \item $\operatorname{supp} g= \,]b, d[$ and $g(t) < t$ for $t \in \operatorname{supp} g$, \item $a < b < c < d$ and $f(g(c)) \leq b$. \end{enumerate} Then $G$ is isomorphic to Thompson's group $F$. \end{lem} \begin{proof} Set $h = f \circ g$ and note that $h(t) < t$ for every $t \in\; ]a,d[\,$. Property (iii) then implies that $h(c) \leq b$ and so the supports of $g$ and that of $\act{h}{-2}{ f }= h \circ f \circ h ^{-1}$ are disjoint, as are the supports of $g$ and that of $\act{h^2}{-2}{f}$. The first fact implies that $g$ commutes with $\act{h}{-1}{ f}$ and leads to the chain of equations \begin{equation} \label{eq:First-relator} \act{h \circ h}{-2}{f} = \act{f}{-2}{\left(\act{g \circ h}{-1}{f} \right)} = \act{f}{-1}{\left( \act{h}{-1}{f}\right)} = \act{f \circ h}{-1}{ f }. \end{equation} The second fact leads to the equations \begin{equation} \label{eq:Second-relator} \act{h \circ h^2}{-2}{f} = \act{f}{-2}{\left(\act{g \circ h^2}{-1}{f} \right)} = \act{f}{-1}{\left( \act{h^2}{-1}{f}\right)} = \act{f \circ h^2}{-1}{ f }. \end{equation} Thompson's group $F$, on the other hand, has the presentation \[ \langle x, x_1 \mid \act{x^2}{-1}{x_1}= \act{x_1 x}{-1}{x_1}, \act{x^3}{-1}{x_1} = \act{x_1 x^2}{-1}{x_1} \rangle; \] see, e.g., \cite[Examples D15.11]{BiSt14}. The assignments $x \mapsto h$, $x_1 \mapsto f$ therefore extend to an epimorphism $\rho \colon F \twoheadrightarrow G$. As the derived group of $F$ is simple (see, e.g., \cite[Theorem 4.5]{CFP96}) and as $G$ is non-abelian, $\rho$ must be injective, hence an isomorphism, and so the proof is complete.
\end{proof} Our next result shows that the assumptions of the previous lemma can be satisfied by PL-homeomorphisms with pre-assigned slopes at the end points. \begin{lem} \label{lem:Ubiquity-of-F-bis} Let $s_f$, $s_g$ be positive reals with $s_f< 1 < s_g$ and let $a$, $b$, $c$, $d$, be real numbers with $a < b < c < d$. Then there exist PL-homeomorphisms $f$ and $g$ that satisfy properties (i) through (iii) listed in Lemma \ref{lem:Ubiquity-of-F} and, in addition, \begin{enumerate} \item[(iv)] $f'(a) = s_f$ \quad \text{and} \quad $g'(d) = s_g$. \end{enumerate} \end{lem} \begin{proof} The generators $f$ and $g$ will both be affine interpolations of 5 points. To define them, fix numbers $t_1$, $t_2$, $t_3$, $t_4$ so that \[ a < t_1 < b < t_2 \leq t_3 < c < t_4 < d. \] Next choose $t_0 \in \; ]a, t_1[$ so that $(t_0-a)/(t_1-a) = s_f$. Then $a < t_1 < t_3 < c < d$ and $a < t_0 < b < c <d$ and so the affine interpolation, given by the 5 points \[ (a,a), \quad (t_1, t_0),\quad (t_3, b), \quad (c,c),\quad (d,d), \] exists and is an increasing PL-homeomorphism, say $f$, with $f'(a) = s_f$. Next, there exists a number $t_5 \in \; ]t_4, d[$ so that $(d-t_4)/(d-t_5) = s_g$. Then $a < b < c < t_5 < d$ and $a < b < t_2 < t_4 < d$ and so the affine interpolation, given by the 5 points \[ (a,a), \quad (b, b), \quad (c, t_2), \quad (t_5, t_4) ,\quad (d, d) \] exists and is an increasing PL-homeomorphism, say $g$; the definition of $t_5$ implies, in addition, that $g'(d) = s_g$. Finally, $f(g(c)) = f(t_2) \leq f(t_3) = b$. \end{proof} \begin{remarks} \label{remarks:Ubiquity-of-F-bis} a) In the statement of Lemma \ref{lem:Ubiquity-of-F-bis} the slopes $s_f$ and $s_g$ have been chosen so that $s_f < 1 < s_g$. This requirement can be weakened to $s_f \neq 1$ and $s_g \neq 1$; indeed the four pairs $\{f,g \}$, $\{f, g^{-1}\}$ and $\{f^{-1},g\}$, $\{f^{-1}, g^{-1}\}$ generate the same group.
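The interpolation in the proof of Lemma \ref{lem:Ubiquity-of-F-bis} can be tried out numerically. The Python sketch below uses illustrative values $a=0$, $b=1$, $c=2$, $d=3$, $s_f = 1/2$, $s_g = 2$ and breakpoints $t_1, \dots, t_4$ of our own choosing; none of these numbers come from the text:

```python
# Concrete instance of the 5-point affine interpolation from the proof.
def pl_interpolate(points):
    """Increasing PL-homeomorphism through the given breakpoints."""
    def f(t):
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x0 <= t <= x1:
                return y0 + (y1 - y0) * (t - x0) / (x1 - x0)
        return t  # identity outside the interpolated range
    return f

a, b, c, d = 0.0, 1.0, 2.0, 3.0
s_f, s_g = 0.5, 2.0
t1, t2, t3, t4 = 0.5, 1.5, 1.5, 2.5
t0 = a + s_f * (t1 - a)            # so that (t0 - a)/(t1 - a) = s_f
t5 = d - (d - t4) / s_g            # so that (d - t4)/(d - t5) = s_g

f = pl_interpolate([(a, a), (t1, t0), (t3, b), (c, c), (d, d)])
g = pl_interpolate([(a, a), (b, b), (c, t2), (t5, t4), (d, d)])

assert abs((f(a + 1e-9) - a) / 1e-9 - s_f) < 1e-6   # f'(a) = s_f
assert abs((d - g(d - 1e-9)) / 1e-9 - s_g) < 1e-6   # g'(d) = s_g
assert f(g(c)) <= b                                  # property (iii)
assert all(f(t) < t for t in (0.5, 1.0, 1.9))        # f(t) < t on ]a, c[
assert all(g(t) < t for t in (1.5, 2.0, 2.9))        # g(t) < t on ]b, d[
```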
b) The generators $f_s$, $g_s$ and $h_s$ of the groups $G_s$, constructed in section \ref{ssec:Uncountable-class}, are simpler than those used in Lemma \ref{lem:Ubiquity-of-F-bis} in that they are defined by affine interpolations of 3 rather than of 5 points. But a variant of Lemma \ref{lem:Ubiquity-of-F-bis} holds even in this more restricted set-up. Suppose $s_1 = s_2 = 2$ and $s_3 \geq 2$. The function $f_s$ is then given by the formula \begin{equation} \label{eq:Function-f} f_s(t) = \begin{cases} \tfrac{1}{2}t, & \text{ if }0\le t\le \tfrac{1}{2},\\ 2 \left(t- \tfrac{1}{2}\right)+ \tfrac{1}{4}, & \text{ if }\tfrac{1}{2} \leq t\leq \tfrac{3}{4}, \end{cases} \end{equation} and it is the identity outside of $]0, \tfrac{3}{4}[$, while $g_s$ is defined by \begin{equation} \label{eq:Function-g} g_s(t) = \begin{cases} \tfrac{1}{2}\left(t- \tfrac{1}{4} \right)+\tfrac{1}{4}, & \text{ if } \tfrac{1}{4}\le t\le \tfrac{3}{4},\\ 2 \left(t- \tfrac{3}{4}\right)+ \tfrac{1}{2}, & \text{ if }\tfrac{3}{4} \leq t\leq 1, \end{cases} \end{equation} and it is the identity outside of $]\tfrac{1}{4}, 1[$. The function $h_s$, finally, is defined by \begin{equation} \label{eq:Definition-h-sub-s} h_s(t) = \begin{cases} (1/s_3) \left(t- \tfrac{1}{4} \right)+\tfrac{1}{4}, & \text{ if } \tfrac{1}{4}\le t\leq \tfrac{3s_3}{4(s_3+1)} + \tfrac{1}{4},\\ s_3 \left(t- \tfrac{3s_3}{4(s_3+1)}- \tfrac{1}{4}\right)+ \tfrac{3}{4(s_3+1)} + \tfrac{1}{4}, & \text{ if }\tfrac{3s_3}{4(s_3+1)} + \tfrac{1}{4} \leq t\leq 1, \end{cases} \end{equation} and is the identity outside of \;$]\tfrac{1}{4}, 1[$. The function $g_s$ is not differentiable at $\tfrac{1}{4}$, $\tfrac{3}{4}$ and $1$, while the function $h_s$ has singularities at $\tfrac{1}{4}$, $t_* = \tfrac{3s_3}{4(s_3+1)} + \tfrac{1}{4}$ and at $1$. Since $s_3 \geq s_2 = 2 $, the inequality $t_* \geq \tfrac{3}{4}$ holds, as one verifies easily.
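The explicit formulas for $f_s$ and $h_s$ can be checked numerically; in the Python sketch below the sample values of $s_3 \geq 2$ are our own illustrative choices:

```python
# Numerical check of the explicit piecewise formulas for f_s and h_s.
def f_s(t):
    if 0.0 <= t <= 0.5:
        return 0.5 * t
    if 0.5 <= t <= 0.75:
        return 2.0 * (t - 0.5) + 0.25
    return t  # identity outside ]0, 3/4[

def h_s(t, s3):
    t_star = 3.0 * s3 / (4.0 * (s3 + 1.0)) + 0.25  # breakpoint t_* of h_s
    if 0.25 <= t <= t_star:
        return (t - 0.25) / s3 + 0.25
    if t_star <= t <= 1.0:
        return s3 * (t - t_star) + 3.0 / (4.0 * (s3 + 1.0)) + 0.25
    return t  # identity outside ]1/4, 1[

for s3 in (2.0, 3.0, 7.5):
    t_star = 3.0 * s3 / (4.0 * (s3 + 1.0)) + 0.25
    assert t_star >= 0.75                      # t_* >= 3/4 since s3 >= 2
    assert abs(h_s(1.0, s3) - 1.0) < 1e-12     # h_s fixes the endpoint 1
    assert f_s(h_s(0.75, s3)) <= 0.25          # hypothesis of the lemma
assert abs(f_s(0.75) - 0.75) < 1e-12           # f_s fixes 3/4
```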
The calculation \[ f_s \left(h_s(\tfrac{3}{4}) \right) = f_s\left( (1/s_3) \left(\tfrac{3}{4}- \tfrac{1}{4} \right)+\tfrac{1}{4}\right) = f_s\left( \tfrac{1}{2s_3} + \tfrac{1}{4} \right) \leq f_s\left( \tfrac{1}{4} + \tfrac{1}{4} \right) \leq \tfrac{1}{4} \] then shows that the functions $f_s$ and $h_s$ fulfill the assumptions imposed on the functions $f$ and $g$ in Lemma \ref{lem:Ubiquity-of-F}. It follows that the groups $\gp(f_s, h_s)$ are isomorphic to one another for all $s_3 \geq s_2 = 2$. \end{remarks} \subsection{A criterion} \label{ssec:Precluding-isomorphisms} The groups $G_s$ studied in section \ref{ssec:Uncountable-class} are generated by 3 elements; in addition, the image of $\chi_\ell$ is infinite cyclic and that of $\chi_r$ is free abelian of rank 2. Any isomorphism $\beta \colon G_s \iso G_{s'}$ between two such groups must therefore induce a homeomorphism $\beta^* \colon S(G_{s'}) \iso S(G_s)$ with $\beta^*([\chi'_r]) = [\chi_r]$. This consequence amounts to saying that there exists a positive real number $u$ so that $\chi'_r \circ \beta = u \cdot \chi_r$, and this new condition implies the equality \begin{equation} \label{eq:Consequence-iso} \im \chi'_r = u \cdot \im \chi_r. \end{equation} In section \ref{ssec:Uncountable-class} we did not study this condition in general; we dealt only with the special case where \[ \im \chi_r = \mathbb{Z} (\ln s_3) + \mathbb{Z} (\ln s_2) \quad\text{and}\quad \im \chi'_r = \mathbb{Z} (\ln s'_3) + \mathbb{Z} (\ln s_2) \] and then exploited the fact that, in this particular case, condition \eqref{eq:Consequence-iso} involves basically only the two numbers $\ln s_3$ and $\ln s'_3$. In this final section we shall investigate another special case. It is reminiscent of a situation considered in section \ref{ssec:Group of units}. Let $B_1$ and $B_2$ be finitely generated subgroups of $\mathbb{R}_{\add}$ and suppose there exists a positive real number $u$ with $B_2 = u \cdot B_1$.
If $B_2$ coincides with $B_1$, then $u$ is a unit of $B_1$ and the results of section \ref{ssec:Group of units} apply. They show, in particular, that $u = 1$ whenever $B_1$ is the image under $\ln$ of a subgroup $P$ of $\mathbb{R}^\times_{>0}$ that is generated by finitely many algebraic numbers. The proof of this consequence relies on Theorem \ref{thm:Gelfond-Schneider}, the Gelfond-Schneider Theorem. Below we give an analogue of this criterion, but one dealing with the equation $B_2 = u \cdot B_1$. In the proof, both the Gelfond-Schneider Theorem and the Siegel-Lang Theorem will be used. \begin{lem} \label{lem:Non-existence-u} Let $P_1$ and $P_2$ be subgroups of $\mathbb{Q}^\times_{>0}$ and set $B_1 = \ln P_1$ and $B_2 = \ln P_2$. Suppose there exists a prime number $\pi$ that occurs with non-zero power in the factorization of some element of $P_1$, but in that of no element of $P_2$. If the rank of $B_1$ is at least 3, then $B_2$ is distinct from $u \cdot B_1$ for every positive real number $u$. \end{lem} \begin{proof} Let $p_1 \in P_1$ be an element with a prime factorization that involves the prime $\pi$, and let $u$ be a positive real number. Assume first that $u \in \mathbb{Q}$, say $u = m/n$ with positive integers $m$, $n$. If $p_1^u$ were an element $p_2$ of $P_2$, the equality $p_1^m = p_2^n$ would force $\pi$ to occur with non-zero power in the factorization of $p_2$; so $p_1^u \notin P_2$, and thus $u \cdot B_1 = u \ln P_1 \neq \ln P_2 = B_2$. Suppose now that $u$ is irrational and that $u \cdot \ln p_1 \in B_2$. There exists then a rational number $p_2 \in P_2$ with $u = \ln p_2/\ln p_1$, and so $u$ is transcendental by the Gelfond-Schneider Theorem. Choose, finally, three $\mathbb{Q}$-linearly independent elements $z_1$, $z_2$ and $z_3$ in $B_1$ (this is possible as the rank of $B_1$ is at least 3) and consider the six numbers \[ \exp(1 \cdot z_j) \text{ with } j = 1,2, 3 \quad \text{and} \quad \exp(u \cdot z_j) \text{ with } j = 1,2, 3. \] The first three of them are in $P_1$, and hence rational.
As the subsets $\{1, u\}$ and $\{z_1, z_2, z_3 \}$ are both linearly independent over $\mathbb{Q}$, Theorem \ref{thm:Siegel-Lang} implies that at least one of the remaining three numbers, say $\exp(u \cdot z_{j_*})$, is transcendental. This number is therefore outside of $P_2$, and so $u \cdot z_{j_*} \in u B_1 \smallsetminus B_2$. \end{proof} We close with an application of the preceding lemma. \begin{example} \label{example:Non-isomorphic-groups} Given a non-empty set of prime numbers $\mathcal{P}$, let $G_{\mathcal{P}}$ be a subgroup of $PL_o([0,1])$ generated by a set $\{f_p, g_p \mid p \in \mathcal{P} \}$ of elements that satisfy the following two conditions: \begin{enumerate}[(i)] \item $\sigma_\ell (f_p) = p$, $\sigma_r (f_p) = 1$ and $\sigma_\ell (g_p) = 1$, $\sigma_r (g_p) = p$ for every $p \in \mathcal{P}$; \item the union of the supports of the generators $f_p$ and $g_p$ is $]0,1[$. \end{enumerate} The group $G_{\mathcal{P}}$ then admits an epimorphism $\psi \colon G_{\mathcal{P}} \twoheadrightarrow \gp(\mathcal{P})$ that is fixed by every automorphism of $G_{\mathcal{P}}$ (use Corollary \ref{crl:Generalization-GoKo10-bis}). Moreover, if $\mathcal{P}_1$ and $\mathcal{P}_2$ are distinct sets of primes of cardinality at least 3, the groups $G_{\mathcal{P}_1}$ and $G_{\mathcal{P}_2}$ are not isomorphic in view of Lemma \ref{lem:Non-existence-u} and the considerations at the beginning of section \ref{ssec:Precluding-isomorphisms}. \end{example} \endinput \input{Groups-of-PL-Homeomorphisms.bbl} \end{document}
\begin{document} \title{Deterministic super-resolved estimation towards angular displacements based upon a Sagnac interferometer and parity measurement} \author{Jian-Dong Zhang} \affiliation{Department of Physics, Harbin Institute of Technology, Harbin, 150001, China} \author{Zi-Jing Zhang} \email[]{[email protected]} \affiliation{Department of Physics, Harbin Institute of Technology, Harbin, 150001, China} \author{Long-Zhu Cen} \affiliation{Department of Physics, Harbin Institute of Technology, Harbin, 150001, China} \author{Jun-Yan Hu} \affiliation{Department of Physics, Harbin Institute of Technology, Harbin, 150001, China} \author{Yuan Zhao} \email[]{[email protected]} \affiliation{Department of Physics, Harbin Institute of Technology, Harbin, 150001, China} \date{\today} \begin{abstract} Super-resolved angular displacement estimation is of crucial significance for quantum information processing and optical lithography. Here we report on and experimentally demonstrate a protocol for angular displacement estimation based on a coherent state carrying orbital angular momentum. In the lossless scenario, using parity measurement, this protocol can theoretically achieve $4\ell$-fold super-resolution with quantum number $\ell$, and shot-noise-limited sensitivity saturating the quantum Cram\'er-Rao bound. Several realistic factors and their effects are considered, including nonideal state preparation, photon loss, and imperfect detectors. Finally, given mean photon number $\bar N=2.297$ and $\ell=1$, we demonstrate an angular displacement super-resolution effect with a factor of 7.88, and show that sensitivity approaching the shot-noise limit is attainable. \end{abstract} \pacs{03.65.Wj, 42.50.Ar, 06.20.-f, 03.67.Bg} \maketitle \section{Introduction} As is well known, a light beam can carry two forms of angular momenta: spin angular momentum (SAM), and orbital angular momentum (OAM).
SAM corresponds to the polarization of the light, and the angular momentum of each photon is $\sigma \hbar $, where $\sigma = +1$ and $\sigma = -1$ stand for left-handed and right-handed polarized light, respectively. OAM is associated with the azimuthal distribution of the light, and each photon in an OAM beam carries an angular momentum $\ell\hbar $ with quantum number $\ell$. The polarization of light was discovered early and is applied in a great number of fields: feature detection, target imaging, and material identification, to name a few \cite{song2014polarization, israel2014supersensitive, kartazayeva2005backscattering}. The fact that a beam with an azimuthal phase dependence of $\exp \left( {i\ell\theta } \right)$ is capable of carrying OAM was put forward by Allen \emph{et al.} in 1992 \cite{allen1992orbital}. Since then, plenty of theoretical and experimental studies have focused on this subject \cite{simpson1997mechanical, tabosa1999optical, padgett2002orbital}. The infinitely many orthogonal OAM modes place no limit on the amount of information that can be carried by a single photon. Therefore, within the past decade, OAM has played a significant role in the fields of quantum communications, quantum computing, and quantum metrology \cite{yan2014high, nicolas2014quantum, puentes2012weak}. Owing to the characteristic helical phase $\exp \left( {i\ell\theta } \right)$, an OAM state can play the part of an `angular amplifier', which converts an angular displacement $\theta$ into the amplified displacement $\ell\theta$ \cite{d2013photonic}. Phase shifts and optical rotations, the fundamental operations for photonic qubit gates, are two vital degrees of freedom in parameter estimation. For the past few years, phase estimation has been discussed in numerous physical protocols \cite{taylor2016quantum, giovannetti2011advances}, and many exotic quantum states and measurement strategies have been presented.
Angular displacement estimation has also been widely analyzed in quantum process tomography \cite{zhou2015quantum} and weak measurement \cite{magana2014amplification, thekkadath2016direct, zhang2015precision}; however, it is seldom mentioned in interferometric quantum metrology. On the other hand, almost all quantum states are sensitive to photon loss, and they are also limited by the difficulty of preparation at large photon numbers. In the scenario of a high-loss channel, these facts downplay the advantages arising from quantum resources, and coherent states emerge as an ideal candidate. In this paper, we demonstrate a novel estimation protocol for angular displacements using a coherent state and a Sagnac interferometer (SI) combined with a Dove prism. The remainder of this paper is organized as follows. In Sec. \ref{s2}, we introduce the fundamental principle and measurement strategy of our protocol. Section \ref{s3} focuses on the effects of several realistic factors on our protocol, such as nonideal state preparation, photon loss, and imperfect detectors. In Sec. \ref{s4}, we discuss the fundamental sensitivity limit of our protocol by calculating the quantum Fisher information (QFI), and compare it with the previous Mach-Zehnder interferometer (MZI) protocol. An experimental realization is demonstrated in Sec. \ref{s5}, and its performance is briefly analyzed. Finally, we summarize our work in Sec. \ref{s6}. \section{Fundamental principle and measurement strategy of the protocol } \label{s2} To begin, let us consider the angular displacement estimation protocol whose setup is an SI consisting of three mirrors and a 50/50 beam splitter arranged in a square, as illustrated in Fig. \ref{f1}. The coherent state is generated by a laser, and its OAM degree of freedom is imparted by a spatial light modulator \cite{fickler2012quantum, zhou2016orbital}.
The polarizer is used to filter out polarization components that are not suitable for the spatial light modulator, and the rectangular aperture passes the first-order diffraction. In accordance with the theory of quantum optics, the input state can be described as $\left| {{\psi _\textrm{in}}} \right\rangle = {\left| {{\alpha _\ell}} \right\rangle _A}{\left| 0 \right\rangle _B}$, where ${\alpha _\ell} = \sqrt N $ and $N$ is the mean photon number of the coherent state. Then the input enters the SI and is divided into two paths; in turn, the state becomes $\left| {{\psi _1}} \right\rangle = \left| {{\alpha _\ell}/\sqrt 2 } \right\rangle _A \left| {i{\alpha _\ell}/\sqrt 2 } \right\rangle _B$. Here we assume that the clockwise direction in the interferometer loop is path $A$, and the counterclockwise one is path $B$. The beams in the two paths pass through the Dove prism \cite{leach2002measuring} with an angular displacement $\varphi$, which is the parameter we would like to estimate. After such an evolution process, the state can be expressed as $\left| {{\psi _2}} \right\rangle = \left| {{\alpha _\ell}{e^{i2\ell\varphi }}/\sqrt 2 } \right\rangle _A \left| {i{\alpha _\ell}{e^{ - i2\ell\varphi }}/\sqrt 2 } \right\rangle _B$.
Finally, the state passes through the beam splitter again, and the output has the ket representation $\left| {{\psi _\textrm{out}}} \right\rangle = {\left| {i{\alpha _\ell}\cos \left( {2\ell\varphi } \right)} \right\rangle _A}{\left| { - i{\alpha _\ell}\sin \left( {2\ell\varphi } \right)} \right\rangle _B}$. \begin{figure} \caption{Schematic of the angular displacement estimation protocol. The full names of the abbreviations in the figure: L, laser; P, polarizer; SLM, spatial light modulator; RA, rectangular aperture; BS, beam splitter; DP, Dove prism; RM, reflection mirror; PNRD, photon-number-resolving detector.} \label{f1} \end{figure} Compared with the MZI, our protocol has two main advantages. On the one hand, the beams in the two paths are more likely to experience identical optical paths and photon losses, since the SI is a self-balanced interferometer. On the other hand, our protocol is equivalent to an MZI in which the Dove prisms in the two paths are rotated in opposite directions by the same angular displacement. We turn now to the measurement strategy. Parity measurement was originally proposed by Bollinger \emph{et al.} for enhanced frequency measurement with an entangled state \cite{bollinger1996optimal}; subsequently, Gerry and Campos applied it to optical interferometers \cite{gerry2000heisenberg, gerry2005quantum}. Generally, the implementation of parity measurement requires a photon-number-resolving detector; details of such detectors can be found in Refs. \cite{achilles2004photon, cohen2014super, liu2017fisher}. In this strategy, the counts are assigned $ + 1$ and $ - 1$ for even and odd photon numbers, respectively. Therefore, the parity operator for output port $B$ can be written as $ {\hat \Pi } = \exp \left( {i\pi {{\hat b}^\dag }\hat b} \right)$.
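As a sanity check on the amplitude bookkeeping above, the coherent amplitudes can be propagated through the interferometer numerically. This is a minimal sketch; the beam-splitter phase convention used here is one common choice, so each port amplitude may differ from the ket above by a global phase, which is irrelevant for photon counting:

```python
import cmath
import math

def si_output(alpha, ell, phi):
    """Propagate coherent amplitudes through BS -> Dove prism -> BS."""
    a1, b1 = alpha / math.sqrt(2), 1j * alpha / math.sqrt(2)  # after the first BS pass
    a2 = a1 * cmath.exp(2j * ell * phi)    # clockwise path A picks up +2*ell*phi
    b2 = b1 * cmath.exp(-2j * ell * phi)   # counterclockwise path B picks up -2*ell*phi
    out_a = (1j * a2 + b2) / math.sqrt(2)  # recombination at the beam splitter
    out_b = (a2 + 1j * b2) / math.sqrt(2)
    return out_a, out_b

alpha, ell, phi = math.sqrt(10), 3, 0.1
out_a, out_b = si_output(alpha, ell, phi)
# |out_a| = |alpha cos(2*ell*phi)|, |out_b| = |alpha sin(2*ell*phi)|,
# and the total mean photon number |out_a|^2 + |out_b|^2 = |alpha|^2 is conserved.
```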
In the Fock basis, the output state is recast as \begin{eqnarray} \nonumber\left| {{\psi _\textrm{out}}} \right\rangle = &&{e^{ - \frac{1}{2}{N}}}\sum\limits_{n = 0}^\infty {\frac{{{{\left[ {i{\alpha _\ell}\cos \left( {2\ell\varphi } \right)} \right]}^n}}}{{\sqrt {n!} }}} \\ &&\times\sum\limits_{m = 0}^\infty\frac{{{{\left[ { - i{\alpha _\ell}\sin \left( {2\ell\varphi } \right)} \right]}^m}}}{{\sqrt {m!} }}\left| {n,m} \right\rangle. \label{1} \end{eqnarray} Further, the probability of simultaneously detecting $n$ photons at port $A$ and $m$ photons at port $B$ is \begin{equation} P\left( {n,m} \right) = \frac{{{e^{ - {N}}}}}{{n!m!}}{\left[ {{N}{{\cos }^2}\left( {2\ell\varphi } \right)} \right]^n}{\left[ {{N}{{\sin }^2}\left( {2\ell\varphi } \right)} \right]^m}. \label{2} \end{equation} Consequently, the probability ${P_{\rm even}}$ or ${P_{\rm odd}}$ for port $B$ can be calculated by summing $P\left( {n,m} \right)$ over $n$ and over even or odd values of $m$, \begin{eqnarray} {P_\textrm{even}} &&= \frac{1}{2}\left\{ {1 + \exp \left[ { - 2{N}{{\sin }^2}\left( {2\ell\varphi } \right)} \right]} \right\}, \label{3}\\ {P_\textrm{odd}} &&= \frac{1}{2}\left\{ {1 - \exp \left[ { - 2{N}{{\sin }^2}\left( {2\ell\varphi } \right)} \right]} \right\}. \label{4} \end{eqnarray} In the light of the definition of the parity operator, the expectation value of the output is \begin{equation} \left\langle {\hat \Pi } \right\rangle = {P_\textrm{even}} - {P_\textrm{odd}} = \exp \left[ { - 2N{{\sin }^2}\left( {2\ell\varphi } \right)} \right]. \label{5} \end{equation} Further, with the help of error propagation, the sensitivity is given by \begin{equation} \Delta \varphi = \frac{\sqrt {\left\langle {{\hat \Pi }^2} \right\rangle - {\left\langle {\hat \Pi } \right\rangle }^2} }{\left| {\partial \left\langle {\hat \Pi } \right\rangle /\partial \varphi } \right|} = \frac{\sqrt {\exp \left[ {4{N}{{\sin }^2}\left( {2\ell\varphi } \right)} \right] - 1} }{\left| {4\ell{N}\sin \left( {4\ell\varphi } \right)} \right|}. \label{6} \end{equation} Using the first-order approximation $\exp(x) - 1 \approx x$, the sensitivity reaches its minimum as $\varphi$ approaches 0, \begin{equation} \Delta {\varphi _{\min }} = {\left. {\frac{\sqrt {4N{{\sin }^2}\left( {2\ell\varphi } \right)} }{\left| {4\ell N\sin \left( {4\ell\varphi } \right)} \right|}} \right|_{\varphi \to 0}} = \frac{1}{{4\ell\sqrt N }}. \label{7} \end{equation} \begin{figure} \caption{The resolution of parity measurement as a function of angular displacement. (a) $N =10$. (b) $\ell =3$.} \label{f2} \end{figure} In the above derivation, we used a property of the parity operator, ${{{\hat \Pi }^2}} = 1$. To observe intuitively how the resolution varies with the mean photon number $N$ and the quantum number $\ell$, we plot Fig. \ref{f2}. The figure shows that the number of oscillation fringes in the output increases with increasing $\ell$, and that each fringe narrows as $N$ increases. Hence, the resolution of the protocol can be improved by increasing either $N$ or $\ell$. Moreover, the visibility of the output approaches 100\%, since the maximum sits at 1 and the minimum approaches 0 for large $N$. The visibility is defined as \cite{N00N} \begin{equation} V = \frac{{{{\left\langle {\hat \Pi } \right\rangle }_{\max }} - {{\left\langle {\hat \Pi } \right\rangle }_{\min }}}}{{{{\left\langle {\hat \Pi } \right\rangle }_{\max }} + {{\left\langle {\hat \Pi } \right\rangle }_{\min }}}}. \label{8} \end{equation} In Fig. \ref{f3}, we show the full widths at half maximum (FWHMs) for different values of $N$ and $\ell$. The FWHM is a universal super-resolution criterion: the smaller the FWHM, the higher the resolution.
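The closed-form expressions above are easy to verify numerically. The sketch below (plain Python, with the illustrative values $N=10$, $\ell=3$) checks that $P_\textrm{even}-P_\textrm{odd}=\left\langle\hat\Pi\right\rangle$ and that the sensitivity of Eq. (\ref{6}) approaches $1/(4\ell\sqrt N)$ as $\varphi\to 0$:

```python
import math

def p_even(N, ell, phi):
    # Eq. (3)
    return 0.5 * (1.0 + math.exp(-2.0 * N * math.sin(2 * ell * phi) ** 2))

def parity(N, ell, phi):
    # Eq. (5): <Pi> = exp(-2 N sin^2(2 ell phi))
    return math.exp(-2.0 * N * math.sin(2 * ell * phi) ** 2)

def sensitivity(N, ell, phi):
    # Eq. (6): error propagation with <Pi^2> = 1
    num = math.sqrt(math.exp(4.0 * N * math.sin(2 * ell * phi) ** 2) - 1.0)
    return num / abs(4.0 * ell * N * math.sin(4 * ell * phi))

N, ell = 10, 3
p_odd = 1.0 - p_even(N, ell, 0.02)
assert abs(p_even(N, ell, 0.02) - p_odd - parity(N, ell, 0.02)) < 1e-12
# Eq. (7): the optimum 1/(4 ell sqrt(N)) is approached as phi -> 0
assert abs(sensitivity(N, ell, 1e-7) - 1.0 / (4 * ell * math.sqrt(N))) < 1e-6
```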
Figure \ref{f3} indicates that increasing $N$ or $\ell$ enhances the resolution, and a more pronounced improvement is obtained when both $N$ and $\ell$ are increased. With respect to the sensitivity, Eq. (\ref{7}) shows a shot-noise-limited sensitivity, since the factor $4\ell$ is a classical effect: the enhancement arising from OAM is equivalent to increasing the number of trials. The results mean that increasing either $N$ or $\ell$ also improves the sensitivity. \begin{figure} \caption{The FWHM of parity measurement as a function of mean photon number.} \label{f3} \end{figure} \section{Effects of realistic factors} \label{s3} Since a setup is inevitably immersed in its surrounding environment, realistic factors will affect the estimation. In this section, we analyze the effects of several realistic factors on the resolution and the sensitivity. These factors may arise in three stages: state preparation, state evolution, and state measurement. Each stage is discussed in turn, and the subscript $k$ stands for the $k$-th realistic factor. \subsection{Nonideal state preparation} We start with nonideal state preparation. In this scenario, the input must be described by a density matrix rather than a state vector \cite{bahder2011phase, zhang17effects}. Assuming that the conversion efficiency of the spatial light modulator is $\eta$, we can write the input density matrix as \begin{equation} {\rho _\textrm{in}} = \left[ {\eta \left| {{\alpha _\ell}} \right\rangle \left\langle {{\alpha _\ell}} \right| + \left( {1 - \eta } \right)\left| {{\alpha _0}} \right\rangle \left\langle {{\alpha _0}} \right|} \right] \otimes \left| 0 \right\rangle \left\langle 0 \right|. \label{9} \end{equation} Following the evolution process described above, the reduced output density matrix for mode $B$ can be obtained.
Further, the expectation value of the parity operator is \begin{equation} {\left\langle {\hat \Pi } \right\rangle _{1}} = {\rm Tr}\left( {\hat \Pi {\rho _{B\textrm{out}}}} \right) = \eta\exp \left[ { - 2 {N}{{\sin }^2}\left( {2\ell\varphi } \right)} \right]+1-\eta. \label{10} \end{equation} Using the first-order approximation again, we have the optimal sensitivity, \begin{equation} \Delta {\varphi _1} = {\left. {\frac{{\sin \left( {2\ell\varphi } \right)\sqrt {1 - \eta N{{\sin }^2}\left( {2\ell\varphi } \right)} }}{{\left| {2\ell\sqrt {\eta N} \sin \left( {4\ell\varphi } \right)} \right|}}} \right|_{\varphi \to 0}} = \frac{1}{{\sqrt \eta }}\frac{1}{{4\ell\sqrt N }}. \label{11} \end{equation} Equation (\ref{10}) shows that only the photons carrying the OAM degree of freedom play a role in the measurement, while the unmodulated photons lift the minimum of the output. This nonideal efficiency reduces the visibility, since the minimum is lifted; in turn, the sensitivity deteriorates by a factor of $1/{\sqrt \eta }$. \subsection{Photon loss} We next consider the effect of an unavoidable realistic factor, photon loss, on the resolution and the sensitivity. The measured information in the output is acquired by counting the results of multiple trials. Which photon is lost in a given measurement is random, but the statistical results follow a definite probability distribution. In general, photon loss is modeled theoretically by inserting a virtual beam splitter in the interference loop; the amplitude transmissivities of the two paths are $\sqrt {{T_A}} $ and $\sqrt {{T_B}} $ \cite{feng2014quantum, kacprowicz2010experimental}, and the parameters $L_A=1-T_A$ and $L_B=1-T_B$ represent the two path losses. On this basis, the output state for mode $B$ reduces to $\left| {{\psi _\textrm{out}}} \right\rangle_B = {\left| {{{{\alpha _\ell}\left( {\sqrt {{T_B}} {e^{ - i2\ell\varphi }} - \sqrt {{T_A}} {e^{i2\ell\varphi }}} \right)}}/{2}} \right\rangle}$.
The resolution and the sensitivity corresponding to this output state can be calculated along the same lines, \begin{equation} {\left\langle {\hat \Pi } \right\rangle _{2}} = \exp \left[ {N\sqrt {{T_A}{T_B}} \cos \left( {4\ell\varphi } \right) - \frac{N}{2}\left( {{T_A} + {T_B}} \right)} \right] \label{12} \end{equation} and \begin{eqnarray} \nonumber{\Delta \varphi _{2}} = && \sqrt {\exp \left\{ {N\left[ {{T_A} + {T_B} - 2\sqrt {{T_A}{T_B}} \cos \left( {4\ell\varphi } \right)} \right]} \right\} - 1} \\ &&\times \frac{1}{{\left| {4\ell N\sqrt {{T_A}{T_B}} \sin \left( {4\ell\varphi } \right)} \right|}}. \label{13} \end{eqnarray} \begin{figure} \caption{(a) The resolution of parity measurement as a function of angular displacement, where $N=10$. (b) The sensitivity of parity measurement as a function of two path losses, where $N=10$ and $\ell=3$.} \label{f4} \end{figure} Equation (\ref{12}) shows that the maximum of the output takes the value 1 only if ${T_A} = {T_B}$, which is consistent with the interference condition of classical optics. Meanwhile, for the SI protocol the condition ${T_A} \approx {T_B}$ is easily satisfied, as the two light fields traverse the same path. Figure \ref{f4}(a) shows that the FWHM broadens as the two transmissivities decrease, while the visibility remains unchanged. As for the sensitivity, Fig. \ref{f4}(b) indicates that, for a given total loss, the optimal sensitivity is achieved when the photon losses in the two paths are equal. \subsection{Imperfect detector} Detection efficiency, response-time delay, and dark counts are three typical imperfections of a detector \cite{spagnolo2012phase}. We analyze each of them in turn.
\subsubsection{Detection efficiency} In general, a detector cannot be guaranteed to maintain 100\% efficiency. This process is also simulated by inserting a virtual beam splitter, with transmissivity $\kappa$ (the detection efficiency), in front of the detector \cite{PhysRevA.83.063836}. The output state for mode $B$ can be rewritten as ${\left| {{\psi _\textrm{out}}} \right\rangle _B} = \left| { - i\sqrt \kappa {\alpha _\ell}\sin \left( {2\ell\varphi } \right)} \right\rangle $, and the expectation value is \begin{equation} {\left\langle {\hat \Pi } \right\rangle _{3}} = \exp \left[ { - 2\kappa {N}{{\sin }^2}\left( {2\ell\varphi } \right)} \right]. \label{14} \end{equation} This equation is the same as Eq. (\ref{12}) with $T_A=T_B=\kappa$, which shows that the effect of the detection efficiency is identical to that of photon loss, so the previous conclusions still apply. This stems from the fact that photon loss is a linear loss: a coherent state maintains its Poissonian statistics under linear loss, so loss after the SI is completely equivalent to a lossless SI fed by a weaker input \cite{PhysRevLett.99.223602}. \subsubsection{Response-time delay and dark counts} In practical measurements, response-time delay and dark counts also affect the performance of the detector. The former forces the width of the sampling detection gate to increase; as a result, the rate of the latter rises. A detailed analysis of this process is given in Appendix \ref{A}. A thorough discussion of the effect of dark counts on the output of parity measurement was given in Ref. \cite{huang2017adaptive}. Invoking that result, the expectation value of the parity operator equals \begin{equation} {\left\langle {\hat \Pi } \right\rangle _{4}} = {e^{ - 2r}}\left\langle {\hat \Pi } \right\rangle, \label{15} \end{equation} where the parameter $r$ is the rate of the dark counts.
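The stated equivalence between detection efficiency and photon loss can be checked directly: Eq. (\ref{14}) coincides with Eq. (\ref{12}) at $T_A=T_B=\kappa$ for every $\varphi$. A minimal numerical sketch (illustrative parameter values only):

```python
import math

def parity_loss(N, ell, TA, TB, phi):
    # Eq. (12): parity signal of the lossy SI
    return math.exp(N * math.sqrt(TA * TB) * math.cos(4 * ell * phi)
                    - 0.5 * N * (TA + TB))

def parity_detector(N, ell, kappa, phi):
    # Eq. (14): imperfect detection efficiency kappa
    return math.exp(-2.0 * kappa * N * math.sin(2 * ell * phi) ** 2)

# exp(-kappa*N*(1 - cos(4*ell*phi))) == exp(-2*kappa*N*sin^2(2*ell*phi))
for phi in (0.0, 0.05, 0.3):
    assert abs(parity_loss(10, 3, 0.6, 0.6, phi)
               - parity_detector(10, 3, 0.6, phi)) < 1e-12

# Unbalanced losses cap the fringe maximum at exp(-N*(sqrt(TA)-sqrt(TB))**2/2) < 1
assert abs(parity_loss(10, 3, 0.9, 0.4, 0.0) - math.exp(-0.5)) < 1e-12
```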
With the help of error propagation, the sensitivity can be calculated. Under current technology, the rate $r$ generally lies between ${10^{ - 8}}$ and ${10^{ - 3}}$. \begin{figure} \caption{The effects of the response-time delay and the dark counts on the sensitivity. The curve for $r=0$ is the ideal curve; the curves for $r=10^{-3}$ and $r=10^{-2}$ correspond to dark counts alone and to dark counts combined with response-time delay, respectively.} \label{f8} \end{figure} The response-time delay increases the rate of dark counts by a factor which is generally less than 10. Hence, we choose ${10^{ - 3}}$ and ${10^{ - 2}}$ to represent the two scenarios: dark counts alone, and the combination of dark counts and response-time delay. The results in Fig. \ref{f8} show that the change in sensitivity is slight with dark counts alone, but the deterioration becomes obvious when response-time delay and dark counts are present simultaneously. \section{Analysis of fundamental sensitivity limit} \label{s4} In the preceding sections we calculated only the sensitivity of a specific measurement strategy; the fundamental sensitivity limit over all possible positive operator-valued measures (POVMs) was not given. Here we systematically compare our protocol with the previous MZI protocol from the perspective of the QFI. \begin{figure} \caption{Diagram of the angular displacement estimation protocol. The full names of the abbreviations in the figure: DP, Dove prism; D, detector; BS, beam splitter; SI, Sagnac interferometer; MZI, Mach-Zehnder interferometer.} \label{f6} \end{figure} The current angular displacement estimation protocols can be divided into two categories, SI and MZI protocols, as illustrated in Fig. \ref{f6}. For these two protocols, the rotation of the Dove prism can be described by the operators ${\hat U_{\varphi 1}} = \exp \left( {i4\ell{{\hat J}_z}\varphi } \right)$ and ${\hat U_{\varphi 2}} = \exp \left( {i2\ell{{\hat n}_a}\varphi } \right)$, respectively.
The operator for the beam splitter is ${\hat U_\textrm{BS}} = \exp \left( {i{\pi }{{\hat J}_x}/2} \right)$, where \begin{eqnarray} {{\hat J}_x} &&= \frac{1}{2}\left( {{{\hat a}^\dag }\hat b + \hat a{{\hat b}^\dag }} \right), \\ {{\hat J}_y} &&= - \frac{i}{2}\left( {{{\hat a}^\dag }\hat b - \hat a{{\hat b}^\dag }} \right), \\ {{\hat J}_z} &&= \frac{1}{2}\left( {{{\hat a}^\dag }\hat a - {{\hat b}^\dag }\hat b} \right) \label{17} \end{eqnarray} are the angular momentum operators in the Schwinger representation \cite{tan2014enhanced}. These operators satisfy the cyclic commutation relations of the SU(2) Lie algebra: $\left[ {{{\hat J}_x},{{\hat J}_y}} \right] = i{\hat J_z}$, $\left[ {{{\hat J}_y},{{\hat J}_z}} \right] = i{\hat J_x}$, and $\left[ {{{\hat J}_z},{{\hat J}_x}} \right] = i{\hat J_y}$. The input density matrix can be written as ${\rho _\textrm{in}} = {\rho _a} \otimes {\rho _b}$, where ${\rho _a} = \left| {{\alpha _\ell}} \right\rangle \left\langle {{\alpha _\ell}} \right|$ and ${\rho _b} = \left| 0 \right\rangle \left\langle 0 \right|$. Here we define the counterclockwise path in Fig. \ref{f6} as mode $a$, and the clockwise one as mode $b$. In accordance with the above analysis, we calculate the QFI for the two scenarios. For the SI protocol, the output density matrix evolves into ${\rho _\textrm{out}} = {\hat U_{\varphi 1}}{\hat U_\textrm{BS}}{\rho _\textrm{in}}\hat U_\textrm{BS}^\dag \hat U_{\varphi 1}^\dag $, and from the equation ${\partial {\rho _\textrm{out}}}/{\partial \varphi } = { - i\left[ {{\rho _\textrm{out}},\hat R} \right]}/{\hbar }$ we can obtain the generator $\hat R$.
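The Schwinger-representation algebra can be verified on a truncated two-mode Fock space. The sketch below (NumPy, with an assumed truncation of 12 photons per mode) checks $[\hat J_x,\hat J_y]=i\hat J_z$ on a low-excitation Fock state, where truncation effects are absent:

```python
import numpy as np

d = 12                                        # Fock-space truncation per mode (assumed)
a = np.diag(np.sqrt(np.arange(1, d)), 1)      # single-mode annihilation operator
I = np.eye(d)
A, B = np.kron(a, I), np.kron(I, a)           # two-mode operators a and b

# Schwinger angular momentum operators, Eq. (17)
Jx = 0.5 * (A.conj().T @ B + A @ B.conj().T)
Jy = -0.5j * (A.conj().T @ B - A @ B.conj().T)
Jz = 0.5 * (A.conj().T @ A - B.conj().T @ B)

comm = Jx @ Jy - Jy @ Jx
v = np.zeros(d * d); v[2 * d + 1] = 1.0       # Fock state |2, 1>
assert np.allclose(comm @ v, 1j * (Jz @ v))   # [Jx, Jy] = i Jz
```

The commutator holds exactly away from the truncation edge, which is why the check is applied to a state with few photons rather than to the full truncated matrices.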
For a pure-state input, the QFI simplifies to $4{\left\langle {{\Delta ^2}\hat R} \right\rangle _\textrm{in}}$ \cite{gibilisco2007uncertainty, knysh2011scaling, liu2013phase}. Therefore, the QFI for the SI is \begin{eqnarray} \nonumber{{\cal F}_\textrm{SI}} && = 4\left[ {\left\langle \psi \right|{{\left( {4\ell{{\hat J}_z}} \right)}^2}\left| \psi \right\rangle - {{\left\langle \psi \right|4\ell{{\hat J}_z}\left| \psi \right\rangle }^2}} \right] \\ &&= 16{\ell^2}{N}. \label{19} \end{eqnarray} In this derivation we used the relation ${\hat U_\textrm{BS}}\left| \alpha \right\rangle \left| 0 \right\rangle = \left| {\alpha /\sqrt 2 } \right\rangle \left| {i\alpha /\sqrt 2 } \right\rangle \equiv \left| \psi \right\rangle $. The relationship between the optimal sensitivity $\Delta \varphi_{\rm min} $ and the QFI is $\Delta \varphi_{\rm min} = 1/\sqrt {\nu{\cal F}} $, where $\nu$ is the number of trials. The parameter $\nu$ introduces no quantum effect into the sensitivity, since it is merely a classical repetition of the experiment. In what follows we focus on a single trial, i.e., $\nu = 1$. The optimal sensitivity of the SI protocol is then $\Delta {\varphi _\textrm{SI}} = 1/\left( {4\ell\left| {{\alpha _\ell}} \right|} \right)$. For the MZI protocol, we have \begin{eqnarray} \nonumber{{\cal F}_\textrm{MZI}} && = 4\left[ {\left\langle \psi \right|{{\left( {2\ell{{\hat a}^\dag }\hat a} \right)}^2}\left| \psi \right\rangle - {{\left\langle \psi \right|2\ell{{\hat a}^\dag }\hat a\left| \psi \right\rangle }^2}} \right] \\ &&= 8{\ell^2}{N}. \label{20} \end{eqnarray} The QFI of the SI protocol is thus superior to that of the MZI protocol; i.e., the SI protocol is more sensitive to angular displacement. An interesting and perplexing feature of ${{\cal F}_{\rm MZI}}$ is that the optimal sensitivity corresponding to Eq. (\ref{20}) is $\Delta {\varphi _\textrm{MZI}} = 1/\left( {2\sqrt 2 \ell\left| {{\alpha _\ell}} \right|} \right)$: after removing the factor $2\ell$ originating from the OAM, this QFI implies a sub-shot-noise-limited sensitivity. To resolve this puzzle, we can use the phase-averaging approach to ascertain whether a measurement strategy can beat the shot-noise limit with only a coherent state and no additional resource. This approach gives the sensitivity limit attainable using the input source alone; a simple way to understand its idea is that the input state is scrambled into a mixed state that loses all phase references. Based on this approach, we obtain the QFI of the MZI protocol, ${{\cal F}_{\bar \rho }} = \sum\nolimits_{{{n = 0}}}^\infty {{p_n}4{\ell^2}} n = 4{\ell^2}N$. This QFI implies that the sensitivity limit is the shot-noise limit; the details can be found in Appendix \ref{B}. Hence, the QFI in Eq. (\ref{20}) contains information stemming from additional resources. Identifying the additional resources that would allow the MZI protocol to achieve the sensitivity in Eq. (\ref{20}) remains a meaningful and challenging problem, since many practical measurements can be classified as MZI configurations. Overall, the SI protocol is more sensitive for angular displacement estimation: its QFI is twice that of the MZI protocol. \section{Experimental realization} \label{s5} As the last part of this work, we perform a proof of principle with $\ell=1$.
The working principle and measurement results of the photon-number-resolving detector are given in Appendix \ref{C}. As can be seen from Fig. \ref{f7}, the experimental results agree with the theoretical analysis. In Fig. \ref{f7}(a) we fit the expectation value of the output to the experimental data, \begin{equation} \left\langle {\hat \Pi } \right\rangle = 0.9507\exp \left\{ { - 4.594{{\sin }^2}\left[ {2\left( {\varphi - 0.7022} \right)} \right]} \right\}. \label{21} \end{equation} \begin{figure} \caption{Experimental data as a function of angular displacement with $\ell=1$. (a) The blue line is a fit to the output. Error bars are one standard deviation due to propagated Poissonian statistics. (b) The red line is the sensitivity deduced from the fit of the output, the blue dots are the sensitivities calculated from the experimental data, and the black dashed line is the shot-noise limit defined in accordance with $\bar N$.} \label{f7} \end{figure} This fit implies that the mean photon number arriving at the detector is $\bar N=2.297$ and the visibility of the output is 98\%. Note that here $\bar N=T \kappa N$; the effect of photon loss is not reflected in Eq. (\ref{21}) because we only record the mean photon number arriving at the detector. By calculating the FWHM, the experimental data demonstrate that our protocol enhances the resolution by a factor of 7.88. This also suggests that our protocol can be applied to the field of optical lithography \cite{PhysRevLett.85.2733}. Moreover, ignoring the position of the maximum, Eq. (\ref{21}) can be recast as \begin{equation} \left\langle {\hat \Pi } \right\rangle = \exp \left[ { - 4.594{{\sin }^2}\left( {2\varphi } \right) }-0.0506 \right]. \label{22} \end{equation} That is, the rate $r$ in the experiment is 0.0253, which comprises dark counts, response-time delay, and background noise. These noise sources are why the maximum in Fig. \ref{f7}(a) cannot reach 1.
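The numbers quoted from the fit can be reproduced with a few lines of arithmetic (a sketch; the values are read off Eq. (\ref{21})):

```python
import math

A, two_Nbar = 0.9507, 4.594          # amplitude and exponent of the fit, Eq. (21)
Nbar = two_Nbar / 2.0                # mean photon number at the detector
r = -0.5 * math.log(A)               # Eq. (22): A = exp(-2 r)
V = math.tanh(two_Nbar / 2.0)        # visibility (1 - e^{-2 Nbar}) / (1 + e^{-2 Nbar})

assert abs(Nbar - 2.297) < 1e-12     # mean photon number quoted in the text
assert abs(r - 0.0253) < 1e-4        # dark-count/background rate quoted in the text
assert abs(V - 0.98) < 1e-3          # 98% visibility quoted in the text
```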
Figure \ref{f7}(b) presents the sensitivities calculated from the experimental data. The sensitivities tally with both the theoretical analysis and the fit to the output. Note that the optimal sensitivity is slightly inferior to the shot-noise limit owing to noise photons; this situation is similar to the discussion of dark counts for an imperfect detector. \section{Conclusion} \label{s6} In conclusion, we have introduced a novel interferometric setup, a SI with a Dove prism, which realizes super-resolved angular displacement estimation using parity measurement. The input is a coherent state carrying OAM, and in the lossless scenario we obtain a $4\ell$-fold resolution enhancement of the fringes and a shot-noise-limited sensitivity. The resolution and the sensitivity can be improved by increasing the mean photon number and the quantum number, independently or simultaneously. We also discussed the effects of several realistic factors on the output. Nonideal preparation efficiency degrades the resolution, the visibility, and the sensitivity. With respect to photon loss, for a given total loss, equal losses in the two paths provide a better resolution and the optimal sensitivity. The effects of dark counts and response-time delay on the sensitivity are minor, and the resolution is not affected by them. Additionally, the fundamental sensitivity limits of our protocol and the MZI protocol were obtained by calculating the QFI; the results show that the sensitivity of the SI protocol saturates the quantum Cram\'er--Rao bound and that its QFI is twice that of the MZI protocol. Finally, a proof of principle was performed, and the experimental data tally with the theoretical analysis. For a mean photon number $\bar N=2.297$, we achieve a super-resolved output enhanced by a factor of 7.88 and a nearly shot-noise-limited sensitivity. \section*{Acknowledgments} We would like to thank Prof.
Zhi-Yuan Zhou and Shi-Long Liu of the University of Science and Technology of China for many enlightening discussions about the experiment. This work is supported by the National Natural Science Foundation of China (Grant No. 61701139). \appendix \section{The effect of the response-time delay on the rate of dark counts} \label{A} Here we explain the relationship between the response-time delay and the dark counts. In practical measurements, the response-time delay can be expressed as a mean time delay plus a delay jitter $\tau$. The mean delay has no effect on the estimation results, since the measurement strategy counts photon numbers rather than arrival times. A schematic diagram of the effect of the response-time delay on the detection results is shown in Fig. \ref{fs1}, where the blue rectangle is the theoretical standard response time. In the presence of the response-time delay, however, the arriving signal may occur at any point within the jitter window $\tau$. The parameter $T$ is the time width of the sampling detection gate, and the relationship $\tau \le T$ must be satisfied to guarantee that only one signal falls in each gate. The red rectangle represents the dark-count pulses, whose arrival is random and whose statistics follow a Poissonian distribution. Dark counts outside the sampling detection gate do not affect the measurement. Since the width of the gate has to be increased owing to the response-time delay, the effect of the time delay on the measurement results is to increase the rate of dark counts. \begin{figure} \caption{Schematic of the effect of the response-time delay on the measurement results.} \label{fs1} \end{figure} \section{QFI of the MZI protocol using the phase-averaging approach} \label{B} In this Appendix, we give a detailed calculation for the phase-averaging method.
In this framework, the input density matrix is phase randomized, i.e., \begin{eqnarray} \nonumber{{\bar \rho }_1} =&& \frac{1}{{2\pi }}\int_0^{2\pi } {\exp \left( {i\delta {{\hat n}_a}} \right)} \exp \left( {i\delta {{\hat n}_b}} \right){\rho _a} \\ \nonumber&&\otimes {\rho _b}\exp \left( { - i\delta {{\hat n}_a}} \right)\exp \left( { - i\delta {{\hat n}_b}} \right)d\delta \\ =&& \sum\limits_{n = 0}^\infty {{p_n}\left| {{n}} \right\rangle \left\langle {{n}} \right| \otimes \left| 0 \right\rangle \left\langle 0 \right|}, \label{B1} \end{eqnarray} where ${p_n} = {N^n}\exp \left( { - N} \right)/n!$ is the probability of finding $n$ photons in the OAM coherent state $\left| {{\alpha _\ell}} \right\rangle $. The off-diagonal elements of the density matrix vanish at this point; that is, the coherence information is erased. The density matrix then passes through the first beam splitter and becomes \begin{eqnarray} \nonumber{{\bar \rho }_2} &&= {{\hat U}_\textrm{BS}}{{\bar \rho }_1}\hat U_\textrm{BS}^\dag \\ &&= \sum\limits_{n = 0}^\infty {p_n}\sum\limits_{m = 0}^n {\frac{C_n^m}{2^n}} \left| {{n} - {m}} \right\rangle \left\langle {{n} - {m}} \right| \otimes \left| {{m}} \right\rangle \left\langle {{m}} \right| , \label{B2} \end{eqnarray} where $C_n^m$ is the binomial coefficient. In view of the orthogonality of the Fock states ($\left\langle n | m \right\rangle = {\delta _{nm}}$) and the convexity of the QFI, the QFI of the entire mixed state equals the sum of the QFI of each Fock state weighted by ${p_n}$.
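Given the per-Fock-state QFI $4\ell^2 n$, the convexity argument reduces the mixed-state QFI quoted in Sec. \ref{s4} to a Poisson mean. A quick numerical check (illustrative values $N=10$, $\ell=3$):

```python
import math

def poisson(N, n):
    # p_n = N^n e^{-N} / n!
    return N ** n * math.exp(-N) / math.factorial(n)

def qfi_phase_averaged(N, ell, nmax=80):
    # Weighted sum over Fock states: sum_n p_n * 4 ell^2 n
    return sum(poisson(N, n) * 4 * ell ** 2 * n for n in range(nmax))

N, ell = 10, 3
assert abs(sum(poisson(N, n) for n in range(80)) - 1.0) < 1e-12   # normalization
assert abs(qfi_phase_averaged(N, ell) - 4 * ell ** 2 * N) < 1e-9  # equals 4 ell^2 N
```

The truncation at `nmax=80` is safe because the Poisson tail of a mean-10 distribution is negligible there.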
For a two-mode Fock state $\left| {{n}} \right\rangle \left| 0 \right\rangle $ and the unitary evolution ${\hat U_{\varphi 2}}{\hat U_{\rm BS}}$, the QFI can be calculated as \begin{eqnarray} \nonumber{{\cal F}_\textrm{Fock}} &&= 4\left[ {\left\langle {{{\hat U}_\textrm{BS}}{{\left( {2\ell{{\hat a}^\dag }\hat a} \right)}^2}\hat U_\textrm{BS}^\dag } \right\rangle - {{\left\langle {{{\hat U}_\textrm{BS}}\left( {2\ell{{\hat a}^\dag }\hat a} \right)\hat U_\textrm{BS}^\dag } \right\rangle }^2}} \right]\\ &&= 4{\ell^2}n. \label{B3} \end{eqnarray} The expectation values are taken over the Fock state $\left| {{n},0} \right\rangle $; here we have used the relation ${{\hat U}_\textrm{BS}}\,{{\hat a}^\dag }\hat a\,\hat U_\textrm{BS}^\dag = \left( {{{\hat a}^\dag }\hat a + {{\hat b}^\dag }\hat b} \right)/2 + {{\hat J}_y}$, which follows from the Baker--Hausdorff lemma, together with the unitarity of ${\hat U_\textrm{BS}}$. Consequently, the QFI of the input density matrix in Eq. (\ref{B1}) becomes \begin{equation} {{\cal F}_{\bar \rho }} = \sum\limits_{n = 0}^\infty {{p_n}4{\ell^2}} n = 4{\ell^2}N. \label{B4} \end{equation} Note that this result is the shot-noise limit; that is, the optimal sensitivity of the MZI protocol is the shot-noise limit for a coherent-state input without additional driving sources. \section{The working principle and measurement results of the photon-number-resolving detector} \label{C} The photon-number-resolving detector used in the experiment is a Geiger-mode avalanche photodiode (Gm-APD) array. Each APD responds only to the presence or absence of photons at the output port, i.e., it gives no knowledge of the exact photon number. For a low-intensity output, there is a high probability that each photon is registered by a different APD unit. Therefore, the total photon number in each measurement is taken as the sum of all APD trigger counts. As can be seen from Fig. \ref{f_detector}(a), each trigger of a single APD corresponds to an analog voltage of 0.02 V.
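The spatial-multiplexing idea behind the Gm-APD array can be illustrated with a toy simulation (a sketch assuming uniformly illuminated pixels; the pixel count 64 is illustrative, not the actual device value). Each pixel is binary, so two photons landing on the same pixel are counted once, which is why the scheme is reliable only for low-intensity output:

```python
import random

def apd_array_count(n_photons, n_pixels=64, rng=random):
    """Number of triggered pixels when n_photons land on random pixels."""
    return len({rng.randrange(n_pixels) for _ in range(n_photons)})

random.seed(0)
counts = [apd_array_count(3) for _ in range(1000)]
# The registered count never exceeds the true photon number ...
assert all(c <= 3 for c in counts)
# ... and for n_photons << n_pixels it usually equals it.
assert sum(counts) / len(counts) > 2.9
```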
We calculate the mean photon number of the experimental output, and plot the Poissonian distribution with the same mean photon number. The theoretical and experimental probability distributions are shown in Fig. \ref{f_detector}(b). We use the credibility, defined as $H = \sum\nolimits_i {\sqrt {{x_i}{y_i}} } $, to quantify the similarity between the experimental probability distribution $\left\{ {x_i}\right\}$ and the theoretical one $\left\{ {y_i} \right\}$ in Fig. \ref{f_detector}(b). The calculation gives $H=0.9914$, which implies that the detector has excellent credibility. \begin{figure*} \caption{(a) The analog voltage signals displayed on the oscillograph; each signal is converted from a single statistical trigger count. (b) The probability distribution of the output photon state and the Poissonian fit.} \label{f_detector} \end{figure*} \begin{thebibliography}{40} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase
[0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \end{thebibliography} \end{document}
\begin{document} \title{Learning the price response of active distribution networks for TSO-DSO coordination} \author{J. M. Morales, S. Pineda and Y. Dvorkin \thanks{J. M. Morales is with the Department of Applied Mathematics, University of M\'alaga, M\'alaga, Spain. E-mail: [email protected]} \thanks{S. Pineda is with the Department of Electrical Engineering, University of M\'alaga, M\'alaga, Spain. E-mail: [email protected].} \thanks{Y. Dvorkin is with the New York University, Brooklyn, NY 11201 USA. E-mail: [email protected].} \thanks{This work was supported in part by the European Research Council (ERC) under the EU Horizon 2020 research and innovation program (grant agreement No. 755705), in part by the Spanish Ministry of Science and Innovation through project PID2020-115460GB-I00, in part by the Andalusian Regional Government through project P20-00153, and in part by the European Regional Development Fund (FEDER) through the research project UMA2018-FEDERJA-150. The authors thankfully acknowledge the computer resources, technical expertise and assistance provided by the SCBI (Supercomputing and Bioinformatics) center of the University of M\'alaga.}} \maketitle \begin{abstract} The increase in distributed energy resources and flexible electricity consumers has turned TSO-DSO coordination into a challenging problem. Existing decomposition/decentralized methods apply divide-and-conquer strategies to trim down the computational burden of this complex problem, but rely on access to proprietary information or fail-safe real-time communication infrastructures. To overcome these drawbacks, we propose in this paper a TSO-DSO coordination strategy that only needs a series of observations of the nodal price and the power intake at the substations connecting the transmission and distribution networks.
Using this information, we learn the price response of active distribution networks (DN) using a decreasing step-wise function that can also adapt to some contextual information. The learning task can be carried out in a computationally efficient manner and the curve it produces can be interpreted as a market bid, thus averting the need to revise the current operational procedures for the transmission network. Inaccuracies derived from the learning task may lead to suboptimal decisions. However, results from a realistic case study show that the proposed methodology yields operating decisions very close to those obtained by a fully centralized coordination of transmission and distribution. \end{abstract} \begin{IEEEkeywords} TSO-DSOs coordination, DERs market integration, distribution network, price-responsive consumers, statistical learning. \end{IEEEkeywords} \IEEEpeerreviewmaketitle \section*{Nomenclature} The main symbols used throughout this paper are listed next for quick reference. Others are defined as required in the text. We consider hourly time steps and, hence, MW and MWh are interchangeable under this premise. \subsection{Indexes and sets} \begin{ldescription}{$xxxxx$} \item[$b$] Index of blocks in the step-wise approximation of the price response function. \item[$i$] Index of generating units. \item[$j$] Index of consumers. \item[$k$] Index of distribution networks. \item[$l$] Index of lines. \item[$n$] Index of nodes. \item[$t$] Index of time periods. \item[$\mathcal{B}$] Set of blocks in the step-wise approximation of the price response function. \item[$I^T$] Set of generating units at the transmission network. \item[$I_n$] Set of generating units at node $n$. \item[$I^D_k$] Set of generating units of distribution network $k$. \item[$J^T$] Set of consumers in the transmission network. \item[$J_n$] Set of consumers at node $n$. \item[$J^D_k$] Set of consumers in distribution network $k$. \item[$K^T$] Set of distribution networks. 
\item[$K_n$] Set of distribution networks at node $n$. \item[$L^T$] Set of lines of the transmission network. \item[$L^D_k$] Set of lines of distribution network $k$. \item[$N^T$] Set of nodes of the transmission network. \item[$N^D_k$] Set of nodes of distribution network $k$. \item[$\mathcal{T}$] Set of time periods. \item[$\mathcal{T}^C(t)$] Subset of time periods that are the closest to $t$. \end{ldescription} \subsection{Parameters} \begin{ldescription}{$xxx$} \item[$a_i$] Quadratic cost parameter of unit $i$ [\euro/MW$^2$]. \item[$b_i$] Linear cost parameter of unit $i$ [\euro/MW]. \item[$\underline{p}^G_{i}$] Minimum active power output of unit $i$ [MW]. \item[$\overline{p}^G_{i}$] Maximum active power output of unit $i$ [MW]. \item[$\widehat{p}^D_{jt}$] Baseline demand of consumer $j$ at time $t$ [MW]. \item[$\overline{p}^D_{jt}$] Maximum demand of consumer $j$ at time $t$ [MW]. \item[$\underline{p}^D_{jt}$] Minimum demand of consumer $j$ at time $t$ [MW]. \item[$\underline{q}^G_{i}$] Minimum reactive power output of unit $i$ [MVAr]. \item[$\overline{q}^G_{i}$] Maximum reactive power output of unit $i$ [MVAr]. \item[$r_l$] Resistance of line $l$ [p.u.]. \item[$\overline{s}^F_l$] Capacity of line $l$ [MVA]. \item[$\overline{s}^G_i$] Inverter power rating of unit $i$ [MVA]. \item[$s^B$] Base power [MVA]. \item[$\overline{u}^B_{ktb}$] Marginal utility of block $b$ for DN $k$ and time $t$ [\euro/MW]. \item[$\overline{v}_n$] Maximum squared voltage at node $n$ [p.u.]. \item[$\underline{v}_n$] Minimum squared voltage at node $n$ [p.u.]. \item[$x_l$] Reactance of line $l$ [p.u.]. \item[$\alpha_{jt}$] Intercept of the demand function of consumer $j$ at time $t$ [MW]. \item[$\beta_{jt}$] Slope of the demand function of consumer $j$ at time $t$ [MW$^2$/\euro]. \item[$\gamma_j$] Power factor of consumer $j$. \item[$\delta_j$] Flexibility parameter of consumer $j$. \item[$\lambda_{kt}$] Price at the substation of DN $k$ at time $t$ [\euro/MW].
\item[$\rho_{it}$] Capacity factor of unit $i$ at time $t$. \item[$\chi_{kt}$] Contextual information of DN $k$ at time $t$. \end{ldescription} \subsection{Variables} \begin{ldescription}{$xxx$} \item[$p^G_{it}$] Active power output of unit $i$ at time $t$ [MW]. \item[$p^D_{jt}$] Active power demand of consumer $j$ at time $t$ [MW]. \item[$p^F_{lt}$] Active power flow through line $l$ at time $t$ [MW]. \item[$p^N_{kt}$] Active power intake of DN $k$ at time $t$ [MW]. \item[$q^G_{it}$] Reactive power output of unit $i$ at time $t$ [MVAr]. \item[$q^D_{jt}$] Reactive power demand of consumer $j$ at time $t$ [MVAr]. \item[$q^F_{lt}$] Reactive power flow through line $l$ at time $t$ [MVAr]. \item[$v_{nt}$] Squared voltage magnitude at node $n$ and time $t$ [p.u.]. \item[$\theta_{nt}$] Voltage angle at node $n$ and time $t$ [rad]. \end{ldescription} \section{Introduction} \label{sec:introduction} \IEEEPARstart{E}{lectric} power distribution has been traditionally ignored in the operation of transmission power networks, on the grounds that distribution grids only housed passive loads. However, the proliferation of distributed energy resources (DERs) is rendering this traditional \emph{modus operandi} obsolete \cite{Li2016a}. Power systems engineers are faced with an unprecedented challenge of efficiently integrating a vast number and a wide spectrum of flexible power assets located in mid- and low-voltage networks into the operation of the transmission power network \cite{Li2018}. Naturally, succeeding in this endeavor requires the coordination between the transmission and distribution system operators (TSOs and DSOs, respectively), all united in the purpose of fostering an active role of DERs in the operation of the power system through their participation in wholesale electricity markets. 
As a result, research emphasis is placed on mechanisms that strengthen the TSO-DSO coordination so that the available flexibility of DERs can be harvested for transmission and wholesale market services \cite{migliavacca2017smartnet,de2019control}. For instance, some recent research works investigate TSO-DSO coordination schemes to improve voltage stability \cite{li2017impact,valverde2019coordination}. Other authors focus on the economic coordination between transmission and distribution operations to minimize total system costs \cite{li2018new}. The present work belongs to this latter group. Regarding TSO-DSO economic coordination, a single centralized operational model that includes both transmission and distribution networks with their full level of detail is not viable due to its computational cost, modeling complexity and potential conflict of interests between the involved parties \cite{GERARD201840}. Rather, the coordination of transmission and distribution power assets calls for a divide-and-conquer strategy that alleviates the computational burden, allows for decentralization and minimizes the need for information exchange between the TSO and DSOs \cite{kargarian2018toward}. For instance, the authors of \cite{yuan_2017} use Benders decomposition to find the optimal economic dispatch considering TSO-DSO interactions. Similarly, reference \cite{bragin_2017} proposes a model to operate transmission and distribution systems in a coordinated manner using a surrogate Lagrangian relaxation approach. Finally, an analytical target cascading (ATC) procedure to coordinate the operation of transmission and distribution networks is described in \cite{nawaz_2020}. The decomposition and decentralized methods previously described are able to obtain the same solution as the centralized approach while significantly reducing the computational burden. Yet these methods also have meaningful drawbacks.
Decomposition methods still require full access to all physical and economic information on distribution networks. However, as stated in \cite{mohammadi2019diagonal} ``distribution system operators are autonomous entities that are unwilling to reveal their commercially sensitive information.'' Therefore, these methods can hardly be accommodated in a real-life distribution environment with even a few ambiguous or unknown parameters (e.g. topological configuration, impedance, voltage and flow limits) and proprietary customer-end and behind-the-meter parameters (e.g. production/utility cost functions, supply/demand elasticity and behavioral aspects of electricity demand). Similarly, decentralized methods are based on repetitive real-time information exchanges between the TSO and the DSOs, and thus, rely on robust and fast communication infrastructure. As discussed in \cite{kargarian2018toward}, ``The communication infrastructures for implementing distributed methods need to be carefully designed, and the impact of communication delays and failures on the performance of distributed methods need to be investigated''. Instead of using decomposition or decentralized procedures, which heavily rely on access to either all physical and economic information or fail-safe real-time communication infrastructure, what we propose in this paper is an approximate method that requires neither of these two controversial assumptions at the expense of obtaining a solution slightly different from the optimal one. The proposed approach only needs access to offline historical information on prices and power injection at the substations connecting the transmission and distribution networks. Using statistical tools, we learn the price response of active distribution networks whose operating decisions aim at minimizing costs while complying with local physical constraints, such as voltage limits. 
We also utilize easily accessible information (e.g., capacity factors of wind and solar local resources) to make the curve adaptive to changes in external conditions that affect power system operations. Finally, the obtained response is approximated by a non-increasing step-wise function that can be conveniently interpreted as a market bid for the participation of the distribution networks in wholesale electricity markets. In summary, the contributions of our paper are twofold: \begin{enumerate} \item We propose a TSO-DSO coordination scheme that uses historical data at substations to learn the price response of active distribution networks using a non-increasing step-wise function. If compared with existing methodologies, ours is simple and easy to implement within current market procedures, and cheap in terms of computational resources, information exchange and communication infrastructure. \item We measure the performance of the proposed approach in terms of the power imbalances and social welfare loss caused by the approximation of the distribution networks' behavior using a realistic case study. \end{enumerate} We compare our approach against a fully centralized operational model, referred to as \emph{benchmark}, that guarantees the optimal coordination between the TSO and the DSOs. Since this benchmark produces the same solution obtained with exact decomposition and decentralized methods \cite{kargarian2018toward,yuan_2017,bragin_2017,nawaz_2020,mohammadi2019diagonal}, these have not been considered in our study. By contrast, our model is evaluated against two other approximations. In the first one, called the \emph{single-bus approach}, all physical constraints of the distribution networks are disregarded, as if all small consumers and distributed generating resources were directly connected to the main substation. In the second one, called the \emph{price-agnostic approach}, the response of the distribution networks is assumed to be independent of prices.
We note that, within the context of reactive power optimization for the minimization of network losses, the authors in \cite{Ding2018} also approximate the apparent power exchange between the TSO and the DSOs by a polynomial function of the voltage level at the main substation. However, beyond the evident facts that their purpose is different and the fitting procedure we need to use is more intricate (to comply with market rules), they also omit the \emph{dynamic} nature of the distribution network response to locational marginal prices (LMPs). Similarly, the authors of \cite{li2018response} also propose a methodology to obtain the relation between the distribution network response and the voltage level at the substation. However, their approach is based on perfect knowledge of all distribution network parameters. The rest of this paper is organized as follows. Section~\ref{sec:MF} introduces optimization models for transmission and distribution network operations, which are then used to construct different TSO-DSO coordination approaches in Section~\ref{sec:methodology}. The metrics we use for comparing these approaches are described in Section~\ref{sec:CP}, while the case study is presented in Section~\ref{sec:SR}. Finally, conclusions are reported in Section~\ref{sec:conc}. \section{Modeling Framework}\label{sec:MF} We consider a power system with a high-voltage, meshed transmission network connected to generating units, large consumers and several medium-voltage distribution networks. As illustrated in Fig. \ref{fig:tso_dso}, each distribution system is connected to the transmission network through one main substation, has a radial topology and hosts small-scale electricity consumers and producers. \begin{figure} \caption{Transmission and distribution coordination scheme} \label{fig:tso_dso} \end{figure} The active power output of generating unit $i$ at time period $t$ is denoted by $p^{G}_{it}$, with minimum/maximum limits $\underline{p}^G_i/\overline{p}^G_i$.
Generating units are assumed to have a convex cost function of the form $c_i(p^{G}_{it})=\frac{1}{2}a_i(p^{G}_{it})^2 + b_i(p^{G}_{it})$, with $a_i,b_i \geq 0$, and a dimensionless capacity factor $\rho_{it}$, with $0\leq\rho_{it}\leq1$. For thermal units $\rho_{it}=1, \forall t$, while for renewable generating units the capacity factor depends on weather conditions and the production cost is zero ($a_i=b_i=0$). Electricity consumption is modeled as a capped linear function of the LMP $\lambda_t$, as shown in Fig. \ref{fig:flex}, where $\widehat{p}^D_{jt}$ denotes the baseline demand of consumer $j$ at time $t$ and $\overline{p}^D_{jt}/ \underline{p}^D_{jt}$ are the maximum/minimum load levels given by $\overline{p}^D_{jt}=\widehat{p}^D_{jt}(1+\delta_j)$ and $\underline{p}^D_{jt}=\widehat{p}^D_{jt}(1-\delta_j)$, with $\delta_j \geq 0$ \cite{Mieth2020}. Consequently, the maximum/minimum load levels vary over time according to the evolution of the baseline demand. Under this modeling approach, a price-insensitive demand is modeled with $\delta_j=0$, while $\delta_j=0.5$ implies that the consumer is willing to increase or decrease their baseline demand up to 50\% depending on the price. Finally, $\overline{\lambda}$ and $\underline{\lambda}$ stand for the LMP values that unlock the minimum and maximum demand from consumers, respectively. The demand function in Fig. 
\ref{fig:flex} goes through points $(\underline{p}^D_{jt},\overline{\lambda})$ and $(\overline{p}^D_{jt},\underline{\lambda})$ and, therefore, its expression can be determined as follows: \begin{align} & \frac{\lambda_t - \underline{\lambda}}{\overline{\lambda}-\underline{\lambda}} = \frac{p^D_{jt}-\overline{p}^D_{jt}}{\underline{p}^D_{jt}-\overline{p}^D_{jt}} = \frac{p^D_{jt}-\widehat{p}^D_{jt}(1+\delta_j)}{\widehat{p}^D_{jt}(1-\delta_j)-\widehat{p}^D_{jt}(1+\delta_j)} \implies \nonumber \\ & p^D_{jt} = \widehat{p}^D_{jt}\left( 1 + \delta_j \frac{\overline{\lambda}+\underline{\lambda}}{\overline{\lambda}-\underline{\lambda}} \right) - \frac{2\widehat{p}^D_{jt}\delta_j}{\overline{\lambda}-\underline{\lambda}} \lambda_t \end{align} Hence, the active demand level $p^{D}_{jt}$ for a given electricity price $\lambda_t$ takes the following form: \begin{equation} p^{D}_{jt} = \left\{ \begin{array}{lcl} \overline{p}^D_{jt} & \text{if} & \lambda_t \leq \underline{\lambda} \\ \alpha_{jt} - \beta_{jt} \lambda_t & \text{if} & \underline{\lambda} < \lambda_t < \overline{\lambda} \\ \underline{p}^D_{jt} & \text{if} & \overline{\lambda} \leq \lambda_t, \end{array} \right. \label{eq:inverse_function} \end{equation} where $\alpha_{jt}=\widehat{p}^D_{jt}\left(1+\delta_j\frac{\overline{\lambda}+\underline{\lambda}}{\overline{\lambda}-\underline{\lambda}} \right)$ and $\beta_{jt}=\frac{2\widehat{p}^D_{jt}\delta_j}{\overline{\lambda}-\underline{\lambda}}$. The reactive power demand is given by $q^{D}_{jt}=\gamma_jp^{D}_{jt}$, where $\gamma_j$ is the power factor of consumer $j$, which is assumed to be independent of time for simplicity. 
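The capped linear demand model in \eqref{eq:inverse_function} can be checked with a short numerical sketch. All figures below (baseline demand, flexibility $\delta_j$ and the price bounds $\underline{\lambda}$, $\overline{\lambda}$) are illustrative and not taken from the case study.

```python
# Illustrative evaluation of the capped linear demand in eq. (inverse_function).
# Hypothetical data: baseline demand 100 MW, delta = 0.5,
# price bounds lam_lo = 10 and lam_hi = 50 EUR/MWh.

def demand(lmbda, p_hat=100.0, delta=0.5, lam_lo=10.0, lam_hi=50.0):
    """Active demand p^D_jt as a function of the price lambda_t."""
    alpha = p_hat * (1 + delta * (lam_hi + lam_lo) / (lam_hi - lam_lo))
    beta = 2 * p_hat * delta / (lam_hi - lam_lo)
    p_max = p_hat * (1 + delta)   # maximum load level
    p_min = p_hat * (1 - delta)   # minimum load level
    if lmbda <= lam_lo:           # price below lam_lo unlocks maximum demand
        return p_max
    if lmbda >= lam_hi:           # price above lam_hi unlocks minimum demand
        return p_min
    return alpha - beta * lmbda   # linear segment in between

print(demand(5.0))    # below lam_lo -> maximum demand, 150 MW
print(demand(30.0))   # midpoint price -> baseline demand, 100 MW
print(demand(60.0))   # above lam_hi -> minimum demand, 50 MW
```

Note that at the midpoint price the consumer returns exactly to its baseline demand $\widehat{p}^D_{jt}$, as the symmetry of the model requires.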
Finally, we obtain the utility of each consumer by integrating the inverse demand function with respect to the demand quantity, that is, \begin{align} & u_{jt}(p^D_{jt}) = \int_{\underline{p}^D_{jt}}^{p^D_{jt}} \lambda(p) dp = \int_{\underline{p}^D_{jt}}^{p^D_{jt}} \left( \frac{\alpha_{jt}}{\beta_{jt}} - \frac{p}{\beta_{jt}} \right) dp = \nonumber \\ & \qquad = \frac{\alpha_{jt}}{\beta_{jt}} \left(p^D_{jt}-\underline{p}^D_{jt}\right) - \frac{(p^D_{jt})^2-(\underline{p}^D_{jt})^2}{2\beta_{jt}} \label{eq:utility} \end{align} \begin{figure} \caption{Flexible electricity demand modeling} \label{fig:flex} \end{figure} The transmission network is modeled using a DC power flow approximation \cite{nawaz2020stochastically} and, therefore, each line $l$ going from node $o_l$ to node $e_l$ is characterized by its reactance $x_l$ and maximum capacity $\overline{s}^F_l$. The power flow is denoted by $p^{F}_{lt}$. Then, suppose that the active consumption of the $k$-th distribution network $p^N_{kt}$ can be expressed as $p^N_{kt}=h_{kt}(\lambda_{kt})$, where $\lambda_{kt}$ is the price at the corresponding substation. 
Under this assumption, transmission system operations at time period $t$ are modeled by the following optimization problem: \begin{subequations} \begin{align} & \max_{\Phi^T_t} \sum_{j\in J^T} u_{jt}(p^{D}_{jt}) + \sum_{k\in K^T} \int_{0}^{p^N_{kt}} \hspace{-3mm} h_{kt}^{-1}(s)ds - \sum_{i\in I^T} c_i(p^{G}_{it}) \label{eq:transmission_of}\\ & \text{s.t.} \nonumber \\ & \sum_{i\in I_n} p^{G}_{it} - \sum_{j\in J_n} p^{D}_{jt} - \sum_{k\in K_n} p^N_{kt} = \nonumber \\ & \qquad = \sum_{l:e_l=n} p^{F}_{lt} - \sum_{l:o_l=n} p^{F}_{lt}, \; \forall n\in N^T \label{eq:transmission_bal} \\ & \frac{p^{F}_{lt}}{s^B} = \frac{1}{x_l}(\theta_{o_lt}-\theta_{e_lt}), \;\forall l\in L^T \label{eq:transmission_flow} \\ & \underline{p}^G_i \leq p^{G}_{it} \leq \rho_{it}\overline{p}^G_i, \;\forall i\in I^T \label{eq:transmission_maxgen}\\ & \underline{p}^D_{jt} \leq p^{D}_{jt} \leq \overline{p}^D_{jt}, \;\forall j\in J^T \label{eq:transmission_maxdem}\\ & - \overline{s}^F_l \leq p^{F}_{lt} \leq \overline{s}^F_l, \;\forall l\in L^T \label{eq:transmission_maxflow} \end{align} \label{eq:transmission} \end{subequations} \noindent where $\theta_{nt}$ is the voltage angle at node $n$ and time period $t$, $\Phi^T_t=(p^{G}_{it},p^{D}_{jt},p^N_{kt},p^{F}_{lt},\theta_{nt})$ are the decision variables, $N^T,L^T,I^T,J^T,K^T$ are the sets of nodes, lines, generators, consumers and distribution networks connected to the transmission network, and $I_n,J_n,K_n$ are the sets of generating units, consumers and distribution networks connected to node $n$. Objective function \eqref{eq:transmission_of} maximizes the total social welfare and includes the utility of all flexible consumers connected to the transmission network (first term), the utility of all distribution networks (second term), and the generation cost of all units connected to the transmission network (third term).
Note that $h_{kt}^{-1}(\cdot)$ represents the inverse demand function and its integral corresponds to the total utility of each distribution network. The nodal power balance equation is imposed by \eqref{eq:transmission_bal}, while the power flow through each transmission line is computed in \eqref{eq:transmission_flow}. Finally, constraints \eqref{eq:transmission_maxgen}, \eqref{eq:transmission_maxdem} and \eqref{eq:transmission_maxflow} enforce the generation, consumption and transmission capacity limits. Traditionally, distribution networks only hosted inflexible consumption and, therefore, $p^N_{kt}$ was considered independent of the electricity price. In this case, the second term of \eqref{eq:transmission_of} vanishes, and variable $p^N_{kt}$ is replaced by the forecast power intake of each distribution network. Thus, problem \eqref{eq:transmission} can be transformed into a quadratic optimization problem that can be solved to global optimality using off-the-shelf solvers \cite[Appendix B]{doi:10.1137/1.9781611974164.fm}. However, this paradigm has changed in recent years and current distribution networks include a growing number of flexible small-scale consumers and distributed generation resources that are capable of adjusting their consumption/generation in response to the electricity price to maximize their utility/payoff \cite{Papavasiliou2018}.
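As noted above, with a fixed distribution intake problem \eqref{eq:transmission} reduces to a quadratic program, and in the simplest cases it can be solved by hand. The hypothetical two-bus sketch below (all numbers made up) dispatches a single generator against a fixed intake: the nodal balance \eqref{eq:transmission_bal} pins down the dispatch, \eqref{eq:transmission_flow} recovers the angle difference, and the uncongested price equals the generator's marginal cost.

```python
# Two-bus sanity check on the DC transmission model (hypothetical data):
# one generator at bus 1 with cost c(p) = 0.5*a*p^2 + b*p serves a fixed
# distribution intake of 80 MW at bus 2 through a single line.
a, b = 0.02, 20.0     # cost coefficients [EUR/MW^2], [EUR/MW]
p_intake = 80.0       # fixed intake p^N at bus 2 [MW]
x_line = 0.1          # line reactance [p.u.]
s_base = 100.0        # base power s^B [MVA]
s_cap = 100.0         # line capacity [MVA]

p_gen = p_intake                       # nodal balance with a single source
p_flow = p_gen                         # all generation flows to bus 2
assert abs(p_flow) <= s_cap            # line capacity limit holds
theta_diff = x_line * p_flow / s_base  # from p^F/s^B = (theta_o - theta_e)/x_l
lmp = a * p_gen + b                    # marginal cost dc/dp at the optimum

print(theta_diff)  # angle difference, about 0.08 rad
print(lmp)         # uncongested price, about 21.6 EUR/MWh
```

With the line uncongested, both buses see the same price; a binding flow limit would split the two nodal prices, which is precisely what the substation price $\lambda_{kt}$ captures in the general model.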
Indeed, if $\lambda_{kt}$ is the electricity price at the main substation of distribution network $k$, the power intake of that distribution network $p^N_{kt}$ can be determined by solving the following optimization problem: \begin{subequations} \begin{align} & \max_{\Phi^D_{kt}} \quad \sum_{j\in J_k^D} u_{jt}(p^{D}_{jt}) - \sum_{i\in I_k^D} c_i(p^{G}_{it}) - \lambda_{kt}p^N_{kt} \label{eq:distribution_of}\\ & \text{s.t.} \nonumber \\ & p^N_{kt} + \sum_{i\in I_n} p^{G}_{it} - \sum_{j\in J_n} p^{D}_{jt} = \nonumber \\ & \qquad = \sum_{l:e_l=n} p^{F}_{lt} - \sum_{l:o_l=n} p^{F}_{lt}, \; n=n^0_k \label{eq:distribution_bal0}\\ & \sum_{i\in I_n} p^{G}_{it} - \sum_{j\in J_n} p^{D}_{jt} = \nonumber \\ & \qquad = \sum_{l:e_l=n} p^{F}_{lt} - \sum_{l:o_l=n} p^{F}_{lt}, \; \forall n \in N_k^D, n\neq n^0_k \label{eq:distribution_bal}\\ & \sum_{i\in I_n} q^{G}_{it} - \sum_{j\in J_n} q^{D}_{jt} = \nonumber \\ & \qquad = \sum_{l:e_l=n} q^{F}_{lt} - \sum_{l:o_l=n} q^{F}_{lt}, \; \forall n \in N_k^D\label{eq:distribution_balq}\\ & q^{D}_{jt} = \gamma_j p^{D}_{jt}, \; \forall j \in J_k^D \label{eq:distribution_factor}\\ & v_{nt} = v_{a_nt} - \frac{2}{s^B} \sum_{l:e_l=n} \left( r_lp^{F}_{lt} + x_lq^{F}_{lt} \right), \; \forall n \in N_k^D \label{eq:distribution_voltage}\\ & \underline{p}^G_i \leq p^{G}_{it} \leq \rho_{it}\overline{p}^G_i, \; \forall i \in I_k^D \label{eq:distribution_maxgen}\\ & \underline{q}^G_i \leq q^{G}_{it} \leq \overline{q}^G_i, \; \forall i \in I_k^D \label{eq:distribution_maxgenq}\\ & (p^{G}_{it})^2 + (q^{G}_{it})^2 \leq (\overline{s}^G_i)^2, \; \forall i \in I_k^D \label{eq:distribution_converter}\\ & \underline{p}^D_{jt} \leq p^{D}_{jt} \leq \overline{p}^D_{jt}, \; \forall j \in J_k^D \label{eq:distribution_maxdem}\\ & (p^{F}_{lt})^2 + (q^{F}_{lt})^2 \leq (\overline{s}^F_l)^2, \; \forall l \in L_k^D \label{eq:distribution_maxflow}\\ & \underline{v}_{nt} \leq v_{nt} \leq \overline{v}_{nt}, \; \forall n \in N_k^D \label{eq:distribution_maxvol} \end{align}
\label{eq:distribution} \end{subequations} where the decision variables are $\Phi^D_{kt} = (p^N_{kt}, p^{G}_{it}, q^{G}_{it}, p^{D}_{jt}, q^{D}_{jt}, p^{F}_{lt}, q^{F}_{lt}, v_{nt})$. In particular, $q^{G}_{it},q^{D}_{jt},q^{F}_{lt}$ are the reactive power generation, consumption and flow, in that order, and $v_{nt}$ is the squared voltage magnitude. Since we assume a radial distribution network, we use the LinDistFlow AC power flow approximation, where $a_n$ represents the ancestor of node $n$ and $r_l$ is the resistance of line $l$ \cite{Mieth2018}. The rated power of the inverters of the distributed generators is denoted by $\overline{s}^G_i$ \cite{Hassan2018a}, and the squared voltage magnitude limits are $\underline{v}_{nt},\overline{v}_{nt}$. Finally, $N_k^D,L_k^D,I_k^D,J_k^D$ are the sets of nodes, lines, generators and consumers of distribution network $k$, and $n^0_k$ corresponds to the node of the distribution network connected to the substation. Objective function \eqref{eq:distribution_of} maximizes the social welfare of distribution network $k$ and includes the utility of flexible consumers (first term), the cost of distributed generation (second term) and the cost of power exchanges with the transmission network (third term). Nodal active and reactive power balance equations are formulated in~\eqref{eq:distribution_bal0}, \eqref{eq:distribution_bal} and \eqref{eq:distribution_balq}. Constraint \eqref{eq:distribution_factor} relates active and reactive demand through a given power factor, while the dependence of voltage magnitudes in a radial network is accounted for in \eqref{eq:distribution_voltage} using the LinDistFlow approximation. Limits on active and reactive generating power outputs are enforced in \eqref{eq:distribution_maxgen}, \eqref{eq:distribution_maxgenq} and \eqref{eq:distribution_converter}.
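The LinDistFlow voltage recursion \eqref{eq:distribution_voltage} propagates squared voltages from the substation down the radial feeder. The short sketch below illustrates the recursion on a three-bus feeder; all impedances and flows are made-up illustrative values.

```python
# LinDistFlow voltage drop along a 3-bus radial feeder (hypothetical data).
# v[n] = v[ancestor(n)] - (2/s_base) * (r*pF + x*qF) for the line feeding n.
s_base = 100.0   # base power s^B [MVA]
v = {0: 1.0}     # squared voltage at the substation node [p.u.]

# For each node: (ancestor, r [p.u.], x [p.u.], pF [MW], qF [MVAr])
# of the single line feeding it in the radial topology.
lines = {1: (0, 0.01, 0.02, 50.0, 10.0),
         2: (1, 0.02, 0.04, 20.0, 5.0)}

for n, (anc, r, x, p_f, q_f) in lines.items():  # ancestors listed first
    v[n] = v[anc] - (2.0 / s_base) * (r * p_f + x * q_f)

print(v)  # squared voltages decrease monotonically away from the substation
```

Because power flows toward the leaves, the squared voltage can only drop along the feeder here, which is why the lower bound $\underline{v}_{nt}$ is typically the binding limit in heavily loaded distribution networks.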
In addition, constraints \eqref{eq:distribution_maxdem}, \eqref{eq:distribution_maxflow} and \eqref{eq:distribution_maxvol} determine the feasible values of demand quantities, power flows and squared voltage magnitudes. As a result, \eqref{eq:distribution} is a convex optimization problem that can be solved using off-the-shelf solvers. Deriving a closed-form expression $h_{kt}(\lambda_{kt})$ from~\eqref{eq:distribution} that exactly characterizes the optimal value of $p^N_{kt}$ as a function of the electricity price $\lambda_{kt}$ is not possible in general. Furthermore, even if such an expression were available, using it in \eqref{eq:transmission} would lead to a troublesome non-convex optimization problem, with the likely loss of global optimality guarantees. In the next section, we discuss different strategies to construct an approximation $\hat{h}_{kt}(\lambda_{kt})$ that can be easily incorporated into \eqref{eq:transmission} to determine the optimal operation of the transmission network. In particular, we focus on strategies that leverage available contextual information to construct function $\hat{h}_{kt}(\lambda_{kt})$. \section{Methodology} \label{sec:methodology} In this section we present four different approaches to accommodate the behavior of active distribution networks in transmission network operations. The aim of these methods is to determine the electricity prices at substations that foster the most efficient use of the flexible resources available in the distribution networks. Once electricity prices are published, the DSOs operate the distributed energy resources to maximize their social welfare while satisfying the physical limits of the distribution network, such as voltage limits and reactive power capacities.
\subsection{Benchmark approach (BN)} This approach includes a full representation of both the transmission system and the distribution networks, by jointly solving optimization problems \eqref{eq:transmission} and \eqref{eq:distribution} as follows: \begin{subequations} \begin{align} & \max_{\Phi^T_t,\Phi^D_{kt}} \sum_{j\in J^T \cup \{J^D_k\}} u_{jt}(p^{D}_{jt}) - \sum_{i\in I^T \cup \{I^D_k\}} c_i(p^{G}_{it}) \label{eq:benchmark_of}\\ & \text{s.t.} \qquad \eqref{eq:transmission_bal}-\eqref{eq:transmission_maxflow}, \eqref{eq:distribution_bal}-\eqref{eq:distribution_maxvol} \end{align} \label{eq:benchmark} \end{subequations} Model \eqref{eq:benchmark} enables the optimal operation of the transmission network since it takes into account the most accurate representation of all distribution networks connected to it \cite{LeCadre2019}. However, this approach has the following drawbacks: \begin{itemize} \item[-] It requires having access to distribution network parameters, such as its topological configuration and $r_l,x_l$, which is impractical, as private or sovereign entities operating distribution networks prefer to keep this information confidential \cite{mohammadi2019diagonal,Ding2018,Yu2018}. \item[-] Operating the power system through \eqref{eq:benchmark} would require a deep transformation of current market mechanisms to allow small generators/consumers to directly submit their electricity offers/bids to a centralized market operator. \item[-] Even if all distribution network parameters were known and small generators/consumers were allowed to directly participate in the electricity market, solving model \eqref{eq:benchmark} is computationally expensive for realistically sized systems with hundreds of distribution networks connected to the transmission network \cite{Li2018}. \end{itemize} In this paper, we use the solution of this approach as a benchmark to evaluate the performance of the other methods described in this section. 
Other decomposition or decentralized methods in the technical literature are able to achieve global optimality and, therefore, their solutions coincide with that of BN. For this reason, we focus on comparing the proposed approach with other approximate methodologies that also lead to suboptimal solutions. \subsection{Single-bus approach (SB)} This approach is a relaxation of BN in \eqref{eq:benchmark}, where physical limits on distribution power flows and voltages are disregarded. Operational model SB can thus be equivalently interpreted as if all small consumers and distributed energy resources were directly connected to the transmission network, i.e. all distribution systems are modeled as single-bus grids. Accordingly, the dispatch decisions for the transmission network are computed by solving the following problem: \begin{subequations} \begin{align} & \max_{\Phi^T_t,\Phi^D_{kt}} \sum_{j\in J^T \cup \{J^D_k\}} u_{jt}(p^{D}_{jt}) - \sum_{i\in I^T \cup \{I^D_k\}} c_i(p^{G}_{it}) \label{eq:dso2tso_of}\\ & \text{s.t.} \nonumber \\ & \sum_{i\in \hat{G}_n} p^{G}_{it} - \sum_{j\in \hat{D}_n} p^{D}_{jt} = \sum_{l:e_l=n} p^{F}_{lt} - \sum_{l:o_l=n} p^{F}_{lt}, \;\forall n\in N^T \label{eq:dso2tso_bal} \\ & \frac{p^{F}_{lt}}{s^B} = \frac{1}{x_l}(\theta_{o_lt}-\theta_{e_lt}), \;\forall l\in L^T \label{eq:dso2tso_flow} \\ & \underline{p}^G_i \leq p^{G}_{it} \leq \rho_{it}\overline{p}^G_i, \;\forall i\in I^T \cup \{I^D_k\} \label{eq:dso2tso_maxgen}\\ & \underline{p}^D_{jt} \leq p^{D}_{jt} \leq \overline{p}^D_{jt}, \;\forall j\in J^T \cup \{J^D_k\} \label{eq:dso2tso_maxdem}\\ & - \overline{s}^F_l \leq p^{F}_{lt} \leq \overline{s}^F_l, \;\forall l\in L^T \label{eq:dso2tso_maxflow} \end{align} \label{eq:dso2tso} \end{subequations} where $\hat{G}_n$ and $\hat{D}_n$ denote, respectively, the sets of generators and consumers either directly connected to node $n$ or hosted by a distribution network connected to it.
Problem \eqref{eq:dso2tso} is less computationally demanding than the BN approach in \eqref{eq:benchmark} and does not require knowledge of distribution network parameters. However, this approach also relies on a market mechanism that allows small generators and consumers to submit their offers and bids directly to the wholesale market \cite{Chen2018}. Besides, if the operation of some of the distribution networks is constrained by the physical limitations of power flows and/or voltage levels, then the solution provided by this approach may substantially differ from the actual conditions in the distribution networks. \subsection{Contextual price-agnostic approach (PAG)} This approach is based on the premise that the penetration rates of small-scale flexible consumers and distributed generation resources are not significant and, therefore, the response of distribution networks is independent of LMPs at their substations. On the other hand, this response can still depend on other contextual information that affects the behavior of distribution networks, such as the aggregated load level of their flexible consumers and the wind and solar capacity factors in the corresponding geographical area. Consider a set of historical data $\{\chi_{kt},p^N_{kt}\}_{t \in \mathcal{T}}$, where $\chi_{kt}$ represents a vector containing the contextual information used to explain the consumption level of distribution network $k$. Vector $\chi_{kt}$ can include weather conditions, e.g. ambient temperature, wind speed, solar irradiation, or categorical variables, e.g. the hour of the day or the day of the week. The PAG approach aims to learn the relation between $p^N_{kt}$ and $\chi_{kt}$ for each distribution network $k$, i.e., \begin{equation} p^N_{kt} = f_k(\chi_{kt}) \label{eq:contex1} \end{equation} The function $f_k$ that best approximates the behavior of distribution network $k$ with contextual information can be found using a wide variety of supervised learning techniques \cite{Hastie2009}.
In particular, if $f_k$ must belong to a certain family of functions, such as the family of linear functions, its parameters can be computed using the well-known least-squares criterion. In order to capture non-linear relations, $f_k$ can also represent a neural network to be trained using the available data. Alternatively, if we do not wish to make strong assumptions about the form of the mapping function, the relation between $\chi_{kt}$ and $p^N_{kt}$ can be modeled using non-parametric supervised learning techniques. Within this group, we opt in this work for the $K$-nearest neighbors regression algorithm ($K$-NN) because of its simplicity, interpretability and scalability. Following this methodology, the estimate of the power import of distribution network $k$ for time period $t$ (denoted as $\widehat{p}^N_{kt}$) is computed as: \begin{equation} \widehat{p}^N_{kt} = \frac{1}{K}\sum_{t'\in \mathcal{T}^C(t)}p^N_{kt'}, \label{eq:knn} \end{equation} where $\mathcal{T}^C(t)$ is the subset of the $K$ time periods whose contexts are the closest to $\chi_{kt}$ according to a given distance, and $t'$ is an auxiliary time period index. If the contextual information only includes continuous variables (electricity demand, renewable power generation, etc.), the dissimilarity between two time periods can be measured using the Euclidean distance, i.e., $dist(t_1,t_2)=||\chi_{kt_1}-\chi_{kt_2}||_2$. If the contextual information also includes binary variables (equipment status, maintenance schedules, etc.), the dissimilarity can be measured using the Hamming distance, for example.
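The estimator in \eqref{eq:knn} amounts to a few lines of code. Below is a minimal sketch in Python/NumPy; the function and variable names are our own and are for illustration only:

```python
import numpy as np

def knn_intake_estimate(chi_t, chi_hist, p_hist, K=100):
    """Estimate the substation power import of Eq. (knn): the average
    import over the K historical periods whose context vectors are
    closest to chi_t in Euclidean distance.

    chi_t    : (d,) context vector of the target period
    chi_hist : (T, d) matrix of historical context vectors
    p_hist   : (T,) historical power imports at the substation
    """
    dists = np.linalg.norm(chi_hist - chi_t, axis=1)  # Euclidean distance
    neighbours = np.argsort(dists)[:K]                # indices of T^C(t)
    return p_hist[neighbours].mean()
```

For contexts with binary components, the Euclidean distance above would be replaced by, e.g., the Hamming distance, as noted in the text.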
Once the forecast intake of each distribution network is obtained for its corresponding context, we model all distribution networks as fixed loads and determine the operation of the transmission network by solving the following optimization problem: \begin{subequations} \begin{align} & \max_{\Phi^T} \sum_{j\in J^T} u_{jt}(p^{D}_{jt}) - \sum_{i\in I^T} c_i(p^{G}_{it}) \label{eq:average_of}\\ & \text{s.t.} \nonumber \\ & p^N_{kt} = \widehat{p}^N_{kt}, \;\forall k \in K \\ & \eqref{eq:transmission_bal}- \eqref{eq:transmission_maxflow} \end{align} \label{eq:average} \end{subequations} Problem \eqref{eq:average} is also more computationally tractable than \eqref{eq:benchmark} and does not rely on knowledge of distribution network parameters. Like the approach we propose, it only requires access to historical power flow measurements at the substation, together with contextual information that can enhance the explainability and interpretability of the distribution network responses to external factors of interest. Fortunately, independent system operators such as ISONE and NYISO make this information publicly available. Another advantage of this approach is that, unlike the SB approach, it can be seamlessly implemented in existing market-clearing procedures, since the response of distribution networks is simply replaced with the fixed power injections provided by \eqref{eq:knn}. In fact, this is the approach that most closely reproduces the traditional way of proceeding. On the other hand, since the impact of substation LMPs on the response of distribution networks is disregarded, the accuracy of this approach worsens as the flexibility provided by small consumers and distributed generators increases. \subsection{Contextual price-aware approach (PAW)} The SB and PAG approaches disregard the impact of either physical limits or economic signals on the response of distribution networks with small-scale, flexible consumers and distributed generation resources.
To overcome this drawback, we propose to approximate the response function $h_{kt}(\lambda_{kt})$ by taking into account the effects of both physical and economic conditions on the behavior of active distribution networks. \begin{figure} \caption{Step-wise approximation of distribution network response} \label{fig:stepwise} \end{figure} Similarly to the PAG approach, we assume access to the set of historical data $\{\chi_{kt},\lambda_{kt},p^N_{kt}\}_{t \in \mathcal{T}}$, where $\lambda_{kt}$ denotes the LMP at the substation of distribution network $k$. The proposed PAW approach aims to determine the function that explains the response $p^N_{kt}$ in terms of the contextual information $\chi_{kt}$ and the electricity price at the substation $\lambda_{kt}$, i.e., \begin{equation} p^N_{kt} = g_k(\chi_{kt},\lambda_{kt}) \label{eq:contex2} \end{equation} For a fixed context, function \eqref{eq:contex2} provides the relation between the response of a distribution network and the price at its substation. This function can be understood as the bid to be submitted by each distribution network to the wholesale electricity market. However, most current market procedures only accept a finite number of decreasing block bids. For instance, the Spanish market operator establishes that ``For each hourly scheduling period within the same day-ahead scheduling horizon, there can be as many as 25 power blocks for the same production unit, with a different price for each of the said blocks, with the prices increasing for sale bids, or decreasing for purchase bids.'' \cite{OMIE}. In order to comply with these market rules, we propose an efficient learning procedure to determine a decreasing step-wise mapping between the response of a distribution network and the LMP, while taking into account contextual information. Our approach combines unsupervised and supervised learning techniques as follows: \begin{itemize} \item[-] \textit{Unsupervised learning}.
Similarly to PAG, the first step of the proposed approach uses a $K$-nearest neighbors algorithm to find the subset of time periods $\mathcal{T}^C(t)$ whose contextual information is the closest to $\chi_{kt}$. For the sake of illustration, the points depicted in Fig. \ref{fig:stepwise} represent the pairs of prices and power intakes for the time periods in $\mathcal{T}^C(t)$ for a given substation $k$ and context $\chi_{kt}$. \item[-] \textit{Supervised learning}. The second step of the proposed approach consists of finding the step-wise decreasing function that best approximates the price-quantity pairs obtained in the previous step. As illustrated in Fig. \ref{fig:stepwise}, this function can be defined by a set of price breakpoints $\overline{u}^B_{b}$ and the demand level for each block $\overline{p}^B_{b}$. Despite its apparent simplicity, finding the optimal step-wise decreasing function that approximates a set of data points is a complex task that cannot be accomplished by conventional regression techniques. For instance, isotonic regression yields a monotone step-wise function, but a maximum number of blocks cannot be imposed. Conversely, segmented regression provides a step-wise function with a maximum number of blocks, but monotonicity is not ensured.
Therefore, the statistical estimation of $\overline{p}^B_{0}$, $\overline{u}^B_{b}$ and $\overline{p}^B_{b}$, $\forall b \in \mathcal{B}$, is conducted by means of the curve-fitting algorithm for segmented isotonic regression that has been recently developed in \cite{Bucarey2020} and can be formulated as the following optimization problem: \begin{subequations} \label{eq:isotonic} \begin{align} \hspace{-3mm} \min_{\overline{p}^B_{0},\overline{u}^B_{b},\overline{p}^B_{b}} & \sum_{t'\in\mathcal{T}^C(t)} {\left(p^N_{kt'}- \hspace{-4mm} \sum_{b \in \mathcal{B} \cup \{0\} }{\hspace{-2mm}\overline{p}^B_b \mathbb{I}_{[\overline{u}^B_{b+1}, \overline{u}^B_b)} (\lambda_{kt'})}\right)^2} \hspace{-3mm} \label{eq:isotonic_of}\\ \textrm{s.t.} & \enskip \overline{p}^B_b \geq \overline{p}^B_{b-1}, \enskip \forall b \in \mathcal{B} \label{eq:isotonic_c1}\\ \phantom{\textrm{s.t.}}& \enskip \overline{u}^B_{b+1} \leq \overline{u}^B_{b}, \enskip \forall b \in \mathcal{B} \label{eq:isotonic_c2} \end{align} \end{subequations} \noindent where $\mathbb{I}_{[\overline{u}^B_{b+1},\overline{u}^B_{b})}$ is the indicator function equal to 1 if $\overline{u}^B_{b+1} \leq \lambda_{kt} < \overline{u}^B_{b}$, and 0 otherwise, and $\overline{u}^B_{0} = \infty$, $\overline{u}^B_{|\mathcal{B}|+1} =-\infty$. Objective function \eqref{eq:isotonic_of} minimizes the sum of squared errors, while constraints \eqref{eq:isotonic_c1}-\eqref{eq:isotonic_c2} ensure the monotonicity of the regression function. Problem \eqref{eq:isotonic} can be reformulated as a mixed-integer quadratic problem to be solved by standard optimization solvers. However, the computational burden of this solution strategy is extremely high. Alternatively, reference \cite{Bucarey2020} proposes a dynamic programming reformulation that guarantees global optimality in polynomial time, which makes this approach computationally attractive.
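As a concrete illustration, a monotone step fit with a block budget can be prototyped in a few dozen lines. The sketch below is \emph{not} the exact dynamic program of \cite{Bucarey2020}: it substitutes a simplified heuristic (pool-adjacent-violators for the monotone fit, followed by greedy merging of adjacent steps until the level budget is met), and all names are our own:

```python
import numpy as np

def fit_decreasing_steps(lam, p, max_levels):
    """Fit a non-increasing step function p(lambda) with at most
    `max_levels` distinct demand levels, mimicking problem (isotonic).
    Heuristic stand-in for the exact dynamic program of segmented
    isotonic regression. Returns the fitted value at every input
    point, in the original input order."""
    order = np.argsort(lam)
    y = p[order]
    # Stage 1: pool-adjacent-violators (PAV). A non-increasing fit in
    # lambda-ascending order is a non-decreasing fit on the reversed data.
    blocks = []  # each block: [sum of y, number of points]
    for v in y[::-1]:
        blocks.append([float(v), 1.0])
        while len(blocks) > 1 and \
                blocks[-1][0] / blocks[-1][1] < blocks[-2][0] / blocks[-2][1]:
            s, w = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += w
    # Stage 2: greedily merge the adjacent pair of steps whose merger
    # increases the sum of squared errors the least, until the level
    # budget is met. Merging adjacent monotone steps stays monotone.
    while len(blocks) > max_levels:
        m = [s / w for s, w in blocks]
        cost = [blocks[i][1] * blocks[i + 1][1]
                / (blocks[i][1] + blocks[i + 1][1]) * (m[i] - m[i + 1]) ** 2
                for i in range(len(blocks) - 1)]
        i = int(np.argmin(cost))
        blocks[i][0] += blocks[i + 1][0]
        blocks[i][1] += blocks[i + 1][1]
        del blocks[i + 1]
    fitted_rev = np.concatenate([np.full(int(w), s / w) for s, w in blocks])
    fitted = np.empty_like(fitted_rev)
    fitted[order] = fitted_rev[::-1]  # undo reversal and sorting
    return fitted
```

The distinct fitted levels play the role of $\overline{p}^B_b$, and the prices at which the fitted value changes give the breakpoints $\overline{u}^B_b$.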
\end{itemize} \begin{figure} \caption{Proposed TSO-DSO coordination approach} \label{fig:paw_approach} \end{figure} In summary, we approximate the response of the distribution networks using a learning strategy that combines a $K$-nearest neighbors algorithm and a curve-fitting methodology. The proposed learning approach is simple and fast; moreover, it does not suffer from high data requirements for training and, unlike black-box approaches based e.g. on deep learning, it offers explainability of the results. The operation of the transmission network is obtained by assuming that each distribution network reacts to prices according to the obtained step-wise non-increasing functions, as illustrated in Fig. \ref{fig:paw_approach}. Mathematically, the operation of the transmission network is obtained by solving the following optimization problem: \begin{subequations} \begin{align} & \max_{\Phi^T_t,p^N_{kt},p^B_{ktb}} \sum_{b \in \mathcal{B},k\in K^T} \hspace{-2mm}\overline{u}^B_{ktb} p^B_{ktb} + \hspace{-2mm} \sum_{j\in J^T}u_{jt}(p^D_{jt}) - \hspace{-2mm} \sum_{i\in I^T} c_i(p^{G}_{it}) \label{eq:inverse_of}\\ & \text{s.t.}\quad p^N_{kt} = \overline{p}^B_{kt0} + \sum_{b \in \mathcal{B}} p^B_{ktb}, \; \forall k \in K^T \\ & \phantom{s.t.}\quad 0 \leq p^B_{ktb} \leq \overline{p}^B_{ktb}-\overline{p}^B_{kt(b-1)}, \; \forall b \in \mathcal{B}, k\in K^T \\ & \phantom{s.t.}\quad \eqref{eq:transmission_bal}- \eqref{eq:transmission_maxflow} \end{align} \label{eq:inverse} \end{subequations} The proposed approach has several advantages. First, while the SB and PAG approaches disregard, respectively, the impact of network limits or economic signals on the response of distribution networks, the PAW approach is aware of both effects. Second, like the PAG approach, this method only requires historical LMPs and power flows at the substations and, therefore, detailed information about the distribution network parameters is not required.
Third, the response of each distribution network to prices is modeled by a step-wise decreasing function that can be directly included in existing market-clearing mechanisms without additional modifications. Moreover, unlike other decomposition/decentralized approaches, the one we propose is not an iterative method and, therefore, is immune to convergence issues. Since the proposed method essentially relies on a learning task, its performance depends heavily on the quality of the input historical data. Hence, the dataset should be continuously updated to include the most recent operating conditions and exclude the oldest ones. To conclude this section, Table \ref{tab:method_comparison} summarizes the main features of the four approaches discussed above. Compared with the benchmark, the three alternative approaches involve lower computational burdens through different approximation strategies. The next section describes the methodology to quantify the impact of such approximations on the optimal operation of the transmission electricity network. \begin{table}[] \centering \caption{Qualitative comparison of TSO-DSO coordination approaches} \begin{tabular}{lcccc} \toprule & BN & SB & PAG & PAW \\ \midrule Network-aware & X & & X & X \\ Price-aware & X & X & & X \\ Historical data & & & X & X \\ Seamless market integration & & & X & X \\ Computational burden & High & Low & Low & Low \\ \bottomrule \end{tabular} \label{tab:method_comparison} \end{table} \section{Evaluation procedure}\label{sec:CP} While existing decomposition/decentralized approaches are able to yield the optimal coordination decisions, the proposed methodology may lead to suboptimal decisions caused by incompleteness or inaccuracies of the learning task. In this section, we present the evaluation procedure to quantify the impact of these suboptimal decisions in terms of power imbalance and social welfare losses.
We compare such measures with those obtained by the other three methods described in Section \ref{sec:methodology}. To that end, we proceed as follows: \begin{enumerate} \item Solve problems \eqref{eq:benchmark}, \eqref{eq:dso2tso}, \eqref{eq:average} or \eqref{eq:inverse} using the modeling of the distribution networks derived from the BN, SB, PAG or PAW approaches. LMPs at each substation $\lambda_{kt}$ are obtained as the dual variable of the balance equation \eqref{eq:transmission_bal}. The sum of the approximated consumption by all distribution networks is denoted as $\widehat{P}^N_t$. \item Solve model \eqref{eq:distribution} for each distribution network $k$ after fixing the LMPs at the substations to those obtained in Step 1). In this way, we compute the actual response of the distribution networks considering all physical and economic information, denoted as $P^N_t$. The optimal values of objective function \eqref{eq:distribution_of} provide the social welfare achieved by each distribution network for the electricity prices computed in Step 1). We denote the sum of the social welfare of all distribution networks as $SW^D_t$. \item Quantify the power imbalance caused by the different distribution network approximations as $\Delta_t = 100 |\widehat{P}^N_t-P^N_t|/P^N_t$. Note that such power imbalances must be handled by flexible power resources able to adapt their generation or consumption in real time. \item Solve model \eqref{eq:transmission} by setting the electricity imported by each distribution network to the quantity obtained in Step 2). The output of this model represents the real-time re-dispatch of the generating units connected to the transmission network to ensure the power system balance. The optimal value of \eqref{eq:transmission_of} provides the realized social welfare of the transmission network, denoted as $SW^T_t$.
We emphasize that this social welfare is computed as if all generating units and consumers at the transmission network could instantly adapt to any unexpected power imbalance coming from the distribution networks ($\Delta_t$), without any extra cost for the deployment of such idealized flexible resources. This means that we are underestimating the social welfare loss caused by these power imbalances. \item Compute the total realized social welfare of the power system as $SW_t = SW^D_t+SW^T_t$. \end{enumerate} \section{Simulation results}\label{sec:SR} We consider the 118-bus, 186-line transmission network from \cite{Pena2018}. Each transmission-level load is replaced with a 32-bus radial distribution network hosting eight solar generating units; see data in \cite{Hassan2018b, Hassan2018}. That is, the power system includes 3030 buses ($118+91\times32$), 3098 lines ($186+91\times32$), thermal and wind power plants connected to 43 transmission buses, solar generating units connected to 728 distribution buses ($91\times8$), and electricity consumers located at 2912 distribution buses ($91\times32$). Each consumer is assumed to react to the electricity price as depicted in Fig. \ref{fig:flex}. The installed capacity of thermal, solar and wind generating units is 17.3GW, 2.5GW and 2.5GW, respectively, while the peak demand is 18GW. Finally, the time-varying capacity factors for all consumers, wind and solar generation in the same distribution network are assumed equal. While all distribution networks have the same topology and the same location of loads and solar generating units, we scale their total demand from 12MW to 823MW to match the transmission demand given in \cite{Pena2018}. We also scale the original values of branch resistances and reactances inversely proportional to the peak demand within each distribution network. All data used in this case study is available in \cite{118TN33DN}.
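For reference, the per-hour metrics of the evaluation procedure in Section \ref{sec:CP} reduce to two elementary computations. A minimal sketch with illustrative names (the inputs are the quantities produced in Steps 1, 2 and 4):

```python
def imbalance_and_welfare(P_hat_t, P_t, SW_D_t, SW_T_t):
    """Step 3: relative power imbalance Delta_t = 100 |P_hat - P| / P.
    Step 5: total realized social welfare SW_t = SW^D_t + SW^T_t."""
    delta_t = 100.0 * abs(P_hat_t - P_t) / P_t
    return delta_t, SW_D_t + SW_T_t
```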
Simulations have been run on a Linux-based server with one CPU clocked at 2.6 GHz and 2 GB of RAM, using CPLEX 12.6 under Pyomo~5.2. As discussed in Section \ref{sec:methodology}, the analyzed methods differ in their ability to account for the impact of physical limits and economic signals on the response of active distribution networks. For instance, if distribution voltage limits never become active, then the SB approach would provide results quite close to those of the benchmark approach BN. Conversely, if distribution voltages reach their security limits, the PAG and PAW methods are expected to outperform SB. In order to investigate the impact of voltage congestion on the performance of each approach, we vary the resistances and reactances of the branches of the distribution networks as indicated in \eqref{eq:change_rx}, where $r^0,x^0$ are the base-case values provided in \cite{118TN33DN}, and parameter $\eta$ is varied from 0.67 to 1.33, i.e., 33\% below and above the base-case values: \begin{equation} r = \eta r^0 \qquad \qquad x = \eta x^0 \label{eq:change_rx} \end{equation} Additionally, the parameter $\delta$ that models the flexibility of each consumer is randomly generated for the 2912 loads following a uniform probability distribution on $[0.5, 0.75]$. Besides, we set $\overline{\lambda}=25$, $\underline{\lambda}=10$, $\overline{v}_n=1.05$, and $\underline{v}_n=0.95$. The PAG and PAW approaches require access to historical data. In this case study, historical data is generated by solving the BN model \eqref{eq:benchmark} for the 8760 hours of a given year. Each hour is characterized by different baseline demands of the flexible consumers along with the wind and solar capacity factors throughout the system. Values for wind, solar and baseline demands are taken from \cite{Pena2018} and are available in \cite{118TN33DN}. For illustration, Fig. \ref{fig:response} plots the price response of one distribution network for $\eta=1$ including the 8760 time periods.
For simplicity, changes in the topology of the transmission and distribution networks are disregarded. The learning-based PAG and PAW approaches use the demand and renewable capacity factors at each distribution network as contextual information to learn its response. Also, the number of neighbors for the $K$-NN learning methodology is set to 100. Finally, the maximum number of blocks for the bidding curves learned by the PAW approach is set to ten. For the sake of comparison, each of the four approaches uses the same test set, which includes 100 randomly selected hours of the year. \begin{figure} \caption{Price response of one distribution network for $\eta=1$} \label{fig:response} \end{figure} Using the results of these 100 hours, Fig. \ref{fig:imbalance} plots, for each approach, a shaded area ranging from the 5\% to the 95\% percentile of the relative power imbalance $\Delta_t$ as a function of parameter $\eta$. The average power imbalance is also displayed. Naturally, due to its completeness, the benchmark method does not incur any power imbalance and, therefore, its results are not included in Fig. \ref{fig:imbalance}. Low values of $\eta$ reduce voltage congestion at the distribution networks and, therefore, their response is mainly driven by the electricity prices at the substations. In such cases, the SB approach outperforms the PAG approach and yields power imbalances close to 0\%. For small values of $\eta$, the proposed PAW approach yields higher power imbalances than SB. However, this difference could be narrowed by approximating the response of the distribution networks with more than ten blocks. Conversely, high values of $\eta$ translate into congested distribution networks in which the dispatch of small consumers and distributed generators is heavily constrained by technical limits.
In these circumstances, electricity prices at the substations have a reduced impact on the response of the distribution networks and, consequently, the power imbalance of the SB approach is significantly greater than that of the PAG approach. Quantitatively, the proposed PAW methodology achieves average power imbalances below 0.7\% for any value of $\eta$. \begin{figure} \caption{Impact of distribution network congestion on power imbalance.} \label{fig:imbalance} \end{figure} When comparing SB, PAG and PAW, we should also keep in mind that their ease of integration into current market-clearing mechanisms is not comparable. Implementing the SB approach would require modifying existing market rules so that distributed generators and small consumers could directly submit their offers and bids. On the other hand, PAG and PAW comply with these rules, since active distribution networks are modeled as fixed loads or in the form of step-wise bidding curves, respectively. \begin{figure} \caption{Estimated (dashed) and observed (bold) flexibility of active distribution networks.} \label{fig:usedflex} \end{figure} The power imbalances of Fig. \ref{fig:imbalance} are also explained by the incorrect estimation of the flexibility provided by the distribution networks under the different approaches. To illustrate this effect, we compute the relative difference between the approximate consumption of the distribution networks $\widehat{P}^N_t$ and the baseline consumption, which is plotted in dashed lines in Fig. \ref{fig:usedflex}. The bold lines represent the relative difference between the actual consumption of the distribution networks $P^N_t$ and the same baseline demand. First, it can be observed that flexible customers allow for aggregate demand variations that range from 4\% to 12\%, on average. It can also be observed that PAG underestimates the flexibility provided by distribution networks for low congestion levels.
Conversely, the SB approach overestimates the available flexibility of distribution networks when their operation is mainly driven by physical constraints. Finally, the proposed PAW approach is able to operate the transmission network with a realistic estimate of the flexibility of the distribution networks, which is, in turn, very close to the actual flexibility levels determined by the centralized benchmark approach. Similarly to Fig. \ref{fig:imbalance}, Fig. \ref{fig:socialwelf} plots the mean and the 5\% and 95\% percentiles of the social welfare loss with respect to the BN approach. In line with the power imbalance results, the social welfare losses under the SB and PAG approaches are linked to high and low values of parameter $\eta$, respectively. More importantly, while the social welfare loss may reach values of 2\% and 4\% for the SB and PAG approaches, in that order, for some of the 100 hours analyzed, the PAW approach keeps this value below 0.1\% for any network congestion level. That is, the proposed methodology to integrate transmission and distribution networks achieves the same social welfare as BN for a wide range of power system conditions (described by the different demand and renewable capacity factors of the 100 hours) and congestion levels of the distribution systems (modeled by parameter $\eta$). For completeness, Table~\ref{tab:socialwelfares} provides the sum of the social welfare over the 100 hours of the test set for some values of parameter $\eta$. It is also important to remark that the social welfare losses in Fig. \ref{fig:socialwelf} are computed assuming that all generating units and consumers at the transmission network can react instantaneously to any real-time power imbalance without incurring extra regulation costs. Therefore, these results are a lower bound on the actual social welfare losses that would occur in a more realistic setup in which flexibility resources are both limited and expensive.
\begin{figure} \caption{Impact of distribution network congestion on social welfare.} \label{fig:socialwelf} \end{figure} \begin{table}[] \centering \caption{Social welfare results in k\euro} \begin{tabular}{ccccc} \toprule $\eta$ & BN & SB & PAG & PAW \\ \midrule 0.66 & 3226.8 & 3226.8 & 3216.8 & 3226.3 \\ 1.00 & 3214.5 & 3213.5 & 3207.5 & 3214.4 \\ 1.33 & 2828.8 & 2814.1 & 2828.7 & 2828.7 \\ \bottomrule \end{tabular} \label{tab:socialwelfares} \end{table} \begin{table}[] \centering \caption{Allocation of average social welfare loss (in percent with respect to BN) between transmission and distribution} \begin{tabular}{ccccccc} \toprule & \multicolumn{2}{c}{SB} & \multicolumn{2}{c}{PAG} & \multicolumn{2}{c}{PAW} \\ \midrule $\eta$ & TSO & DSO & TSO & DSO & TSO & DSO \\ \midrule 0.66 & 0.00\% & 0.00\% & -4.98\% & 5.39\% & 0.46\% & -0.45\% \\ 1.00 & -1.38\% & 1.41\% & -4.17\% & 4.47\% & 0.04\% & -0.04\%\\ 1.33 & -12.00\% & 12.67\% & -0.54\% & 0.55\% & -0.10\% & 0.10\%\\ \bottomrule \end{tabular} \label{tab:socialwelfloss_alloc} \end{table} Table \ref{tab:socialwelfloss_alloc} shows how the average relative social welfare loss (as illustrated in Fig.~\ref{fig:socialwelf}) is apportioned between the transmission and distribution systems for various congestion levels $\eta$. Notably, under the SB and PAG approaches the average loss affects the transmission and distribution networks very unevenly. In fact, there is a substantial net transfer of welfare from the DSOs to the TSO. That is, the SB and PAG approaches delegate the bulk of the costs of dealing with distribution congestion to the distributed energy resources themselves, which certainly calls into question the ability of these methods to effectively integrate distribution into transmission operations. In contrast, the proposed PAW approach considerably mitigates this effect, or even reverses it, thus ensuring that distribution issues are also taken care of by transmission resources.
Looking at Figures \ref{fig:imbalance}, \ref{fig:usedflex} and \ref{fig:socialwelf}, we can conclude that the effectiveness of the proposed PAW method with respect to the other approaches depends on the network characteristics and, more particularly, on the congestion level of the distribution networks. Indeed, if the distribution networks never experience voltage congestion, SB outperforms PAW in terms of power imbalance and social welfare loss. On the other hand, for highly congested networks, PAG and PAW provide almost identical results. In conclusion, the use of the proposed PAW approach is most relevant for those systems in which the congestion level of the distribution networks varies significantly depending on the operating conditions. Finally, Table \ref{tab:computational_times} compares the maximum, average and minimum computational times of the four approaches. The average speedup factor between each method and the benchmark is also provided in the last column. Due to the high number of variables and constraints of model \eqref{eq:dso2tso}, the speedup factor of SB is relatively low. In contrast, since PAG and PAW characterize the response of each distribution network through a constant value or a step-wise bidding curve, respectively, the computational savings are more substantial. \begin{table}[] \centering \caption{Computational time results} \begin{tabular}{lcccc} \toprule & Min time (s) & Average time (s) & Max time (s) & Speedup \\ \midrule BN & 0.88 & 1.66 & 15.27 & - \\ SB & 0.17 & 0.30 & 1.83 & 5.5x \\ PAG & 0.02 & 0.03 & 0.20 & 55.3x \\ PAW & 0.04 & 0.06 & 0.38 & 27.7x \\ \bottomrule \end{tabular} \label{tab:computational_times} \end{table} \section{Conclusion}\label{sec:conc} Motivated by the proliferation of distributed energy resources, new TSO-DSO coordination strategies are required to take full advantage of these resources in the operation of the transmission system.
Existing decomposition/decentralized methods are able to yield the same operating decisions as centralized benchmarks at lower computational costs. However, these approaches require access to proprietary information or fail-safe real-time communication infrastructures. Alternatively, our approach only uses offline historical data at the substations to learn the price response of the distribution networks in the form of a non-increasing bidding curve that can be easily embedded into current procedures for transmission operations. In addition, this data set can be enriched with covariates that have predictive power on the response of the distribution networks. We have benchmarked our approach against an idealized model that fully centralizes the coordination of distribution and transmission operations. We have also compared it with other approximate approaches that ignore either the technical constraints of the distribution networks or the price-sensitivity of DERs. The numerical experiments conducted reveal that our approach systematically delivers small differences with respect to the fully centralized benchmark in terms of power imbalances and social welfare, regardless of the level of congestion of the distribution grids. In return, our approach is computationally affordable, consistent with current market practices, and allows for decentralization. Future work should be directed to assessing whether these results remain valid, and to what extent, for meshed distribution networks and DERs with more complex price responses, e.g. thermostatically controlled loads. Furthermore, in this research, we have only considered contextual information pertaining to continuous random variables (electricity demand, renewable power generation, etc.).
Therefore, a relevant avenue for future research is to extend the proposed approach to work with binary variables too, such as those describing the network topology or the in-service/out-of-service status of some network components. \end{document}
\begin{document} \let\oldaddcontentsline\addcontentsline \renewcommand{\addcontentsline}[3]{} \title{Fast, high-fidelity addressed single-qubit gates\\ using efficient composite pulse sequences} \author{A.\,D.\,Leu} \email{[email protected]} \author{M.\,F.\,Gely} \author{M.\,A.\,Weber} \author{M.\,C.\,Smith} \author{D.\,P.\,Nadlinger} \author{D.\,M.\,Lucas} \affiliation{Clarendon Laboratory, Department of Physics, University of Oxford, Parks Road, Oxford OX1 3PU, U.K.} \date{\today} \begin{abstract} We use electronic microwave control methods to implement addressed single-qubit gates with high speed and fidelity, for $^{43}$Ca$^{+}$ hyperfine ``atomic clock'' qubits in a cryogenic (100K) surface trap. For a single qubit, we benchmark an error of $1.5$ $\times$ $10^{-6}$ per Clifford gate (implemented using $600~$ns $\pi/2$-pulses). For two qubits in the same trap zone (ion separation $5~\upmu$m), we use a spatial microwave field gradient, combined with an efficient 4-pulse scheme, to implement independent addressed gates. Parallel randomized benchmarking on both qubits yields an average error $3.4$ $\times$ $10^{-5}$ per logical gate. \end{abstract} \maketitle Trapped ions are one of the most promising platforms to build a universal quantum computer~\cite{Monroe_2013}. Quantum state control of ions is conventionally achieved with lasers, but radio-frequency~\cite{Srinivas_2021} or microwave fields~\cite{Entangling,Ospelkaus_2011,Weidt_2016,Harty_2014,harty2016,zarantonello2019} have in recent years demonstrated competitive performance. Microwave technology is more mature and widespread than laser technology and hence cheaper and more reliable. Also, the long wavelength of microwaves eases phase control, and waveguides can straightforwardly be integrated into surface ``chip'' traps. 
Microwave-driven logic is therefore a compelling candidate for scaling up ion trap quantum processors, and logical operations surpassing error correction thresholds have been demonstrated~\cite{Harty_2014,harty2016,zarantonello2019}. However, whilst laser beams can be focused to address individual ions in the same trap potential~\cite{laserfocus}, the centimetre-scale wavelength of microwaves requires a different approach to single-ion addressing. Past demonstrations of microwave-driven addressed gates have mostly relied on nulling the effect of the microwave field for the non-addressed ion. This has been achieved through position-dependent Zeeman shifts~\cite{Piltz_2014,Warring_2013,Randell_2015} or by nulling the field amplitude at certain ion positions~\cite{Craik_2017,Warring_2013}. Similarly, sidebands of the microwave qubit transition can be generated and nulled at different positions, either by controlling micromotion~\cite{Warring_2013}, by trapping ions in different potential wells with different secular frequencies~\cite{sutherland2022}, or by stimulating ion motion using d.c. electric fields~\cite{srinivas2022}. However, gate errors and crosstalk below $10^{-4}$ -- an important threshold for the practical scalability of error-correction~\cite{knill2010,Preskill1998} -- have not previously been demonstrated. In this Letter, we first report on global single-qubit operations comparable to the present state of the art~\cite{Harty_2014} but featuring a considerable $\sim20 \times$ speedup. We exploit this performance improvement to implement a more complex multi-pulse scheme which can address two ions within the same potential well, with an average logical gate error of $3.4 (3)\times 10^{-5}$, including crosstalk errors, and with a faster logical gate speed than that of~\cite{Harty_2014}.
The scheme employs the microwave field gradient in our trap and uses an efficient, optimal composite sequence of single-qubit rotations to perform an arbitrary combination of logical gates on both ions simultaneously. We characterize the addressing scheme by carrying out independent randomized benchmarking (RB) sequences on both ions in parallel. Experiments are carried out using a micro-fabricated segmented-electrode surface Paul trap with an on-chip microwave resonator generating a microwave field for the ions trapped at a height of $40~\upmu$m~\cite{trap}. The trap is operated at room temperature for the single-ion experiments and at ``warm cryogenic'' temperature (100~K) for addressing experiments (which improves the two-ion trapping lifetime). Our qubit is defined by the hyperfine levels $\ket{F=4,M=1}$ and $\ket{F=3,M=1}$ in the ground state manifold $4\text{S}_{1/2}$ of $^{43}\text{Ca}^+$, which form a clock transition at our static magnetic field strength of 28.8 mT. Further details, notably concerning state preparation and readout, can be found in Ref.~\cite{trap}. Logical operations are driven by the microwave drive chain described in Supplementary Sec.~\ref{sec:experimental_setup}. Our surface trap can perform gates on a single ion on sub-microsecond timescales, whilst maintaining fidelities consistent with the state of the art across all quantum computing platforms. We illustrate the landscape of single-qubit gate fidelities and durations in Fig.~\ref{fig:comparison}(a) with a selection of results across different ion manipulation protocols and quantum computing technologies. \begin{figure} \caption{\textbf{State of the art for non-addressed single-qubit gates.}} \label{fig:comparison} \end{figure} To measure gate errors, we use randomized benchmarking (RB)~\cite{RBM}. The qubit is subjected to a sequence of pseudorandom Clifford gates which in combination perform a known Pauli gate.
Each Clifford is decomposed into $\pi/2$ and $-\pi/2$ pulses in the $\hat{\sigma}_{\text{x}}$ and $\hat{\sigma}_{\text{y}}$ directions. The probability of measuring the expected state at the end of a sequence decays towards $50\%$ as the number of applied Clifford gates increases. Fitting this decay yields the average error per Clifford gate. In this experiment, the average single-qubit Clifford gate error is measured to be $1.5(1)\times 10^{-6}$, see Fig.~\ref{fig:comparison}(b). A summary of all known error sources is presented in Table~\ref{tab:error_budget}. The dominant contribution to the error budget is the decoherence time, which we measure through memory benchmarking~\cite{IRBM,Sepiol_2019} to be $T_\text{2}^{\ast\ast} = 4.6(2)~\text{s}$. Here we introduce the notation $T_2^{\ast\ast}$ to represent the effective decoherence time constant in the {\em small error} regime~\cite{Sepiol_2019}. The error due to decoherence is increased by the need for a $2~\upmu\text{s}$ delay time after each $0.6~\upmu\text{s}$ $\pi/2$-pulse, a purely technical limitation imposed by the rate at which our field programmable gate array controller (FPGA) can output events to our arbitrary waveform generator (AWG). The second-largest source of error is the thermal occupation of the in-plane secular mode of ion motion. As the ion moves in-plane parallel to the trap surface, the amplitude of the microwaves changes, and with it the amount of rotation driven on the Bloch sphere. For slower gates, where the ion performs many oscillations around its equilibrium position during a gate, the Rabi frequency averaged over the gate remains constant. This effect becomes more significant in our system because the Rabi frequency (520 kHz) approaches the in-plane mode frequency (5.66 MHz). However, even a worst-case prediction yields a non-limiting $2.4 \times 10^{-7}$ average gate error across a 10,000 Clifford gate sequence.
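The extraction of the error per Clifford gate from this decay can be sketched as follows (illustrative, noiseless numbers, assuming the standard single-exponential RB model $p(m)=0.5+A\,f^m$ with error per Clifford $r=(1-f)/2$; the sequence lengths below are hypothetical, not our data):

```python
import numpy as np

# Standard RB model: survival probability p(m) = 0.5 + A * f**m after m Clifford
# gates, with error per Clifford r = (1 - f) / 2. Noiseless sketch for clarity.
r_true = 1.5e-6
f_true = 1.0 - 2.0 * r_true
lengths = np.array([1, 1000, 2000, 5000, 10000])
p = 0.5 + 0.5 * f_true**lengths          # ideal survival probabilities

# log(p - 0.5) is linear in m, so a straight-line fit recovers f:
slope, _ = np.polyfit(lengths, np.log(p - 0.5), 1)
r_fit = (1.0 - np.exp(slope)) / 2.0
print(f"error per Clifford: {r_fit:.2e}")  # recovers ~1.5e-06
```

In practice the measured survival probabilities carry shot noise, so the decay is fitted over many randomized sequences per length.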
Methods used to estimate other errors are provided in Supplementary Sec.~\ref{sec:errors}. \begin{table} \begin{tabular}{l|c} Error source & Error ($/10^{-6}$) \\ \hline \rule{0pt}{2.5ex}Decoherence $T_\text{2}^{\ast\ast}$ & 0.42 \\ In-plane motion & 0.24 \\ Microwave/laser leakage & 0.084 \\ Amplitude stability & 0.081 \\ Detuning & 0.075 \\ AC Zeeman shift & 0.006 \\ Spectator state excitation & 0.003 \\ \hline\hline\rule{0pt}{2.5ex}Simulated error & 0.91 \\ Measured error & 1.5 \end{tabular} \caption{ \textbf{Single-qubit gate error budget.} Errors are simulated for a $\pi/2$-pulse and then scaled by the average number of pulses in a Clifford gate ($2.2$ in our implementation). Decoherence during inter-pulse delays ($2~\upmu$s) is also included. } \label{tab:error_budget} \end{table} Our fast and high-fidelity single-qubit gates enable the use of a multi-pulse scheme to address single ions. This scheme relies on the large magnetic field gradient provided by the microwave electrode layout~\cite{trap}. As shown in Fig.~\ref{fig:B_vs_x}, counter-propagating microwave currents lead to destructively interfering fields along the quantization axis. This results in a large gradient in the field component required to drive the qubit transition. For the 130 mW input power used to drive a $\pi/2$-rotation (on par with typical powers used in the addressing pulse scheme), this gradient is $11.7$~T/m. By changing the voltages of the segmented trap DC electrodes, the trapping potential can be twisted, such that the ions are placed at different locations in the Rabi frequency gradient. By tuning the amplitudes and phases of a train of microwave pulses (all with identical temporal shape) we can use the differential Rabi frequency to construct an arbitrary pair of different single-qubit gates on the two ions.
Such a pair of gates can be described by the unitary $G_0\otimes G_1$, \begin{equation} G_k=\begin{pmatrix} e^{i\delta_k}\cos{\frac{\theta_k}{2}} & e^{i\phi_k}\sin{\frac{\theta_k}{2}} \\ -e^{-i\phi_k}\sin{\frac{\theta_k}{2}} & e^{-i\delta_k}\cos{\frac{\theta_k}{2}} \end{pmatrix}\ , \label{eqn:space} \end{equation} where $k=0,1$ indexes the ions. Each unitary $G_k$ has three parameters: $\phi_k$, $\delta_k$ and $\theta_k$, totalling 6 parameters per gate pair. A single resonant microwave pulse drives this unitary evolution on both ions with a few constraints. With resonant driving we have $\delta_0=\delta_1=0$; the phase $\phi$ of the microwaves sets $\phi_0=\phi_1=\phi$; and the rotation angles are fixed through $\theta_k = \pi A/A_k^\pi$, determined by the pulse amplitude $A$ relative to the amplitude $A_k^\pi$ required to perform a $\pi$ rotation on ion $k$. A pulse of amplitude $A$ and phase $\phi$ thus drives the unitary $R_0\otimes R_1$, \begin{equation} R_k=\begin{pmatrix} \cos\frac{\pi A}{2A_k^\pi} & e^{i\phi}\sin\frac{\pi A}{2A_k^\pi} \\ -e^{-i\phi}\sin\frac{\pi A}{2A_k^\pi} & \cos\frac{\pi A}{2A_k^\pi} \end{pmatrix}\ . \end{equation} For each pulse, we therefore have two degrees of freedom to adjust, so at least three pulses are required to match the 6 parameters of the desired pair of gates. In practice, we use four pulses instead of three such that two additional degrees of freedom can be used to minimize other quantities, such as the susceptibility to coherent errors. The amplitudes and phases of the pulses are calculated numerically using a least-squares method (see Supplementary Sec.~\ref{sec:Calibration_Procedure}). We illustrate the scheme in Fig.~\ref{fig:scheme} (a), where the implementation of a X$_\frac{\pi}{2}$ gate on ion $\#0$ and a Y$_\frac{\pi}{2}$ gate on ion $\#1$ is shown.
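A minimal numerical sketch of this single-pulse constraint (amplitudes normalized so that $A_0^\pi=1$; the Rabi-frequency ratio of $0.80$ between the two ions matches the value quoted for our twisted crystal, and the matrix convention is one standard choice for an equatorial-axis rotation):

```python
import numpy as np

def pulse_unitary(A, phi, A_pi):
    """SU(2) rotation driven by one resonant pulse about an equatorial axis set by phi."""
    half = np.pi * A / (2.0 * A_pi)   # theta_k / 2, with theta_k = pi * A / A_k^pi
    return np.array([[np.cos(half), np.exp(1j * phi) * np.sin(half)],
                     [-np.exp(-1j * phi) * np.sin(half), np.cos(half)]])

A0_pi = 1.0           # amplitude for a pi-rotation on ion 0 (normalized)
A1_pi = A0_pi / 0.80  # ion 1 sees a Rabi frequency 0.80x that of ion 0

# One pulse of amplitude A and phase phi drives BOTH ions about the SAME axis,
# but through DIFFERENT angles theta_k = pi * A / A_k^pi:
A, phi = 1.0, 0.0
R0 = pulse_unitary(A, phi, A0_pi)  # pi rotation on ion 0
R1 = pulse_unitary(A, phi, A1_pi)  # only 0.8*pi rotation on ion 1

assert np.allclose(R0.conj().T @ R0, np.eye(2))  # unitary, as required
# ion 0 is fully flipped, ion 1 is not:
print(round(abs(R0[1, 0])**2, 3), round(abs(R1[1, 0])**2, 3))  # 1.0 0.905
```

This is why a single pulse cannot realize independent gates, while a train of pulses with per-pulse amplitudes and phases can.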
The corresponding trajectories on the Bloch spheres shown in Fig.~\ref{fig:scheme}(b) demonstrate that despite the axis of rotation being the same for both qubits in each pulse, the differential Rabi frequency $\left(\Omega_1/\Omega_0 = 0.80 \right)$ is ultimately sufficient to reach the target state. This scheme is implemented with $2.12~\upmu \text{s}$ pulses of varying amplitude and phase, resulting in a logical gate duration of $8.48~\upmu$s (excluding inter-pulse delays). Each pulse is ramped on and off with a $\sin^2\left(\pi t/2t_R\right)$ shape (with $t_R=120~\text{ns}$) to avoid exciting spectator hyperfine transitions. \begin{figure} \caption{\textbf{Surface trap design enabling a microwave field gradient for qubit addressing (not to scale).}} \label{fig:B_vs_x} \end{figure} \begin{figure} \caption{\textbf{Single-ion addressing scheme.}} \label{fig:scheme} \end{figure} To determine the quality of the addressing scheme, we perform RB on both ions simultaneously, each ion being subject to an independent sequence of Clifford gates. As with RB on a single qubit, Clifford gates are decomposed into X$_{\pm\frac{\pi}{2}}$ and Y$_{\pm\frac{\pi}{2}}$ gates. A pair of X$_{\pm\frac{\pi}{2}}$ or Y$_{\pm\frac{\pi}{2}}$ gates (one gate applied to each ion simultaneously) is finally decomposed into four physical pulses using the addressing scheme. This is illustrated in Fig.~\ref{fig:addressing_rbm}(a). Whilst the number of Clifford gates applied to each ion is the same in a single shot of the experiment, the number of underlying X$_{\pm\frac{\pi}{2}}$ and Y$_{\pm\frac{\pi}{2}}$ gates necessary to implement all the Clifford gates may differ. The shorter sequence is padded with identity gates I to account for this.
Even though there are in principle 25 different pairs of X$_{\pm\frac{\pi}{2}}$, Y$_{\pm\frac{\pi}{2}}$ and I gates that a Clifford gate decomposition could require, we make use of the global phase in the microwave pulse sequence to reduce the number of pulse sequences that we need to compute and calibrate. For example, subjecting the ions to a pulse sequence implementing gates X$_\frac{\pi}{2}$ on ion $\#0$ and Y$_{\frac{\pi}{2}}$ on ion $\#1$, but with the microwave phase shifted by $45^\circ$, realizes a Y$_\frac{\pi}{2}$ on ion $\#0$ and a X$_{-\frac{\pi}{2}}$ on ion $\#1$. This reduces the number of pulse sequences required for RB to only six. For each of these six gate pairs, we make use of the freedom afforded by the fourth pulse to generate and test multiple pulse sequences, selecting the one which offers the best fidelity (see Supplementary Sec.~\ref{sec:Calibration_Procedure}). \begin{figure} \caption{\textbf{Simultaneous randomized benchmarking of logical gate pairs.}} \label{fig:addressing_rbm} \end{figure} Whilst the pulse sequences only implement X$_{\pm\frac{\pi}{2}}$, Y$_{\pm\frac{\pi}{2}}$ or I gates in this experiment, they can in principle implement arbitrary gates. Hence, the figure of merit for this scheme is the fidelity of the logical operation resulting from the sequence of four pulses. We therefore measure RB sequence lengths in terms of the number of X$_{\pm\frac{\pi}{2}}$, Y$_{\pm\frac{\pi}{2}}$ and I gates, and quote fidelities for these logical operations, rather than for the Clifford gates that are composed of several such logical gates. In Fig.~\ref{fig:addressing_rbm}(b), we show the evolution of the sequence fidelity as a function of the number of gates. The states of the ions are read out individually using ion shuttling~\cite{MariusThesis}. The resulting errors per logical gate are $1.6(3)\times 10^{-5}$ and $5.2(7)\times 10^{-5}$ for ion $\#0$ and ion $\#1$ respectively, i.e. an average logical gate error of $3.4(3)\times 10^{-5}$.
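The six-fold reduction can be checked with a short combinatorial sketch (an illustration under two assumptions: the global phase shift rotates the equatorial rotation axes of both gates by a common multiple of $90^\circ$, consistent with the example mapping above, and the trivial pair I on both ions needs no pulses at all):

```python
from itertools import product

# Gates X_{+pi/2}, Y_{+pi/2}, X_{-pi/2}, Y_{-pi/2} encoded by their equatorial
# rotation-axis angle in degrees; the identity I is encoded as None.
gates = [0, 90, 180, 270, None]

def shift(g, d):
    """Rotate a gate's axis by d degrees; the identity is unaffected."""
    return g if g is None else (g + d) % 360

orbits = set()
for pair in product(gates, repeat=2):
    # Two gate pairs are equivalent if a common axis rotation maps one to the other.
    orbits.add(frozenset((shift(pair[0], d), shift(pair[1], d))
                         for d in (0, 90, 180, 270)))

# Discard the trivial pair (I, I), which requires no pulse sequence:
nontrivial = [o for o in orbits if (None, None) not in o]
print(len(nontrivial))  # 6 sequences need to be computed and calibrated
```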
In Table~\ref{tab:Addressing}, we compare this error, as well as the gate duration, to previous microwave addressing experiments. \begin{table}[t!] \begin{tabular}{r|c|c|c} & Error&Crosstalk&Duration \\ & ($/10^{-3}$) &($/10^{-3}$)&($\upmu$s) \\ \hline \rule{0pt}{2.5ex}This work & \multicolumn{2}{c|}{0.03}& 8 \\ \cline{2-3} \rule{0pt}{2.5ex}Craik et al. (2017) & - & 3 & 50--90 \\ Randell et al. (2015) & - & 5 & 550 \\ Piltz et al. (2014) & 5 &0.03--0.08&25 \\ Piltz et al. (2014) & 5 &0.06--0.23&9 \\ Warring et al. (2013) & - & 0.6--1.5& 50 \\ \end{tabular} \caption{ \textbf{Trapped-ion microwave addressing state of the art.} Comparison of single-qubit addressed gates for error, nearest-neighbour crosstalk and gate duration across different microwave-driven ion trap experiments~\cite{Piltz_2014,Craik_2017,Warring_2013,Randell_2015}. Quoted gate durations exclude time delays between pulses. Where crosstalk error was not measured through RB, crosstalk for a $\pi$-pulse is quoted. Our simultaneous benchmarking approach does not distinguish between gate error and crosstalk as gates are executed in parallel. } \label{tab:Addressing} \end{table} The measured error is larger than expected from the single-qubit results: linearly scaling the single-qubit gate error with pulse duration, we would expect an error per gate in the addressing scheme of $9 \times 10^{-6}$. The excess error can however be explained by drift in the microwave amplitude. We monitor the microwave amplitude by measuring the average state of one of the ions in the twisted crystal after being subjected to $101$ $\pi/2$-pulses. The drift measured over tens of minutes is sufficient to limit the gate fidelity (see Supplementary Sec.~\ref{sec:MW_stability_addressing}).
From measurements on a single ion (Supplementary Sec.~\ref{sec:single_microwave_stability}), we find that this level of drift is not present in the microwave field at a fixed position in space, and we suspect that the drifts are associated with drifts in the positions of the two ions in the twisted configuration. This could be due to the larger, and asymmetric, DC voltages which are required to twist the ion crystal, leading to greater susceptibility to common-mode voltage noise. In conclusion, we have demonstrated fast ($1.3~\upmu$s) and high-fidelity ($1.5 \times 10^{-6}$) single-qubit gates driven by microwave near-field radiation. This level of performance has enabled a high-fidelity single-ion addressing scheme using optimized pulse sequences, in which gates can be carried out simultaneously on two ions within the same potential well with an average error of $3.4 \times 10^{-5}$. This surpasses the best performance achieved by -- technically much more demanding -- optical addressing approaches~\cite{Crain_2014,binaimotlagh2023}. With this work we demonstrate that nulling the effects of microwave fields -- either by frequency selection~\cite{Piltz_2014,Warring_2013,Randell_2015}, field cancellation~\cite{Craik_2017,Warring_2013}, or sideband manipulations~\cite{Warring_2013,sutherland2022,srinivas2022} -- is not a strict requirement for addressing individual qubits. The addressing scheme could readily be extended to more than two ions, with the number of required pulses scaling linearly with the number of ions. Since this addressing method does not induce a difference in the microwave amplitude gradient experienced by each ion, single and two-qubit gates~\cite{harty2016} can be interleaved without changing the ion positions. In the short term, this could enable the implementation of RB for two-qubit gates without the limitation of using subspace benchmarking methods~\cite{Baldwin_2020}.
The implementation of the scheme could be further simplified by ``embedding'' the differential Rabi frequency in the surface trap design, e.g. by angling the microwave delivery electrodes (which could be ``buried'' beneath the trapping electrodes in a multi-layer design~\cite{Hahn_2019}) with respect to the RF electrodes. This would also avoid inducing RF micromotion when twisting the ion crystal. Finally, the scheme could be used on other ground state transitions to selectively move the population out of the computational basis of one of the ions and then into shelf states (through the $\text{S}_{1/2}-\text{D}_{5/2}$ quadrupole transition), enabling individual ion readout without the need for ion shuttling or tightly focused laser beams. More generally, the efficient composite pulse scheme which we have introduced could be used for individual qubit addressing in any physical system where a modest differential Rabi frequency between qubits can be engineered, for example to allow multiple qubits to share a single microwave control line. \textbf{Acknowledgments:} This work was supported by the U.S. Army Research Office (ref. W911NF-18-1-0340) and the U.K. EPSRC Quantum Computing and Simulation Hub. M.F.G. acknowledges support from the Netherlands Organization for Scientific Research (NWO) through a Rubicon Grant. A.D.L. acknowledges support from Oxford Ionics Ltd. 
\FloatBarrier \onecolumngrid \begin{center} {\Large \textbf{Supplementary information}} \end{center} \makeatletter \renewcommand\l@section{\@dottedtocline{2}{1.5em}{2em}} \renewcommand\l@subsection{\@dottedtocline{2}{3.5em}{2em}} \renewcommand\l@subsubsection{\@dottedtocline{2}{5.5em}{2em}} \makeatother \let\addcontentsline\oldaddcontentsline \renewcommand{\theequation}{S\arabic{equation}} \renewcommand{\thefigure}{S\arabic{figure}} \renewcommand{\thetable}{S\arabic{table}} \renewcommand{\thesection}{S\arabic{section}} \setcounter{figure}{0} \setcounter{equation}{0} \setcounter{section}{0} \newcolumntype{C}[1]{>{\centering\arraybackslash}p{#1}} \newcolumntype{L}[1]{>{\raggedright\arraybackslash}p{#1}} \FloatBarrier \section{Experimental Setup} \label{sec:experimental_setup} The pulses used to drive single-qubit gates are generated by an arbitrary waveform generator (AWG, Sinara-Phaser) which is controlled by an FPGA (Sinara-Kasli) programmed, amongst other devices, through the ARTIQ framework~\cite{artiq}. These pulses are up-converted to microwave frequencies and subsequently filtered and amplified as shown in Fig.~\ref{fig:experimental_setup}, before being combined with a separate microwave drive chain providing state preparation and readout pulses. These microwave signals are then guided towards the surface ion trap, for which further detail is provided in Ref.~\cite{trap}. \begin{figure} \caption{\textbf{Microwave setup.}} \label{fig:experimental_setup} \end{figure} \FloatBarrier \section{Calibration Procedure} \label{sec:Calibration_Procedure} To calculate the amplitudes and phases of the pulses used in the addressing gates, we first determine the difference in Rabi frequency between the two ions.
After twisting the two-ion crystal by $15^\circ$, the microwave amplitude $\text{A}_{k}^\pi$ required to produce a spin flip on ion $k$ is calibrated for both ions. We then use a least-squares method to minimize a cost function measuring the gate error. Calculations are done with rotation matrices in the special orthogonal group $\text{SO}(3)$, where a gate is represented as a rotation on the Bloch sphere. The cost function is given by \begin{equation} \sum_{k=0}^1\Bigg\|\left(\prod_{j=1}^{4}\textbf{R}_{\phi_j}\left(\dfrac{\text{A}_j}{\text{A}_{k}^\pi}\pi\right)\right) - \textbf{G}_k\Bigg\|_\mathrm{HS} \label{eqn:costfunction} \end{equation} where $\|\cdot\|_\mathrm{HS}$ denotes the Hilbert-Schmidt norm, $\textbf{R}_{\phi_j}(\theta_j)\in\text{SO}(3)$ encodes the rotation on the Bloch sphere performed by pulse $j$ (through an angle $\theta_j$ with respect to an axis on the equator defined by the angle $\phi_j$), and the matrix $\textbf{G}_k\in\text{SO}(3)$ is the rotation matrix corresponding to the desired gate on ion $k$. Only the pulse amplitudes $\text{A}_j$ and phases $\phi_j$ are varied in the minimization routine. Since we use one pulse more than the three strictly required to construct a pair of gates, different initial guesses in the optimization procedure can produce different pulse sequences. Comparing different pulse sequences experimentally (for the same target gate) reveals that the error can vary by over an order of magnitude across different pulse sequences. We suspect that different pulse sequences amplify or attenuate coherent error sources. We exploit this fact in our calibration routine by generating different pulse sequences and testing them experimentally to maximize fidelity. Specifically, for a given pair of target gates, we generate a pulse sequence and measure its fidelity by running the same RB sequences on both ions simultaneously.
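A sketch of this minimization (assumptions: \texttt{scipy} least squares with random restarts in place of our actual choice of initial guesses, a Rabi-frequency ratio of $0.80$, a fixed pulse ordering, and the illustrative targets X$_{\frac{\pi}{2}}$ on ion 0 and Y$_{\frac{\pi}{2}}$ on ion 1):

```python
import numpy as np
from scipy.optimize import least_squares

def rot(phi, theta):
    """SO(3) rotation by angle theta about the equatorial axis (cos phi, sin phi, 0)."""
    n = np.array([np.cos(phi), np.sin(phi), 0.0])
    K = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])   # cross-product matrix of the axis
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

A_pi = [1.0, 1.0 / 0.80]  # amplitudes for a pi-rotation on ions 0 and 1 (ratio 0.80)
targets = [rot(0.0, np.pi / 2),        # X_{pi/2} on ion 0
           rot(np.pi / 2, np.pi / 2)]  # Y_{pi/2} on ion 1

def residuals(x):
    amps, phis = x[:4], x[4:]
    res = []
    for k in range(2):
        U = np.eye(3)
        for A, phi in zip(amps, phis):
            U = rot(phi, np.pi * A / A_pi[k]) @ U  # apply the four pulses in sequence
        res.append((U - targets[k]).ravel())
    return np.concatenate(res)

# Only pulse amplitudes and phases are varied; random restarts explore the
# multiple solutions that the extra fourth pulse makes available.
rng = np.random.default_rng(1)
best = min((least_squares(residuals,
                          np.concatenate([rng.uniform(0, 2, 4),
                                          rng.uniform(0, 2 * np.pi, 4)]))
            for _ in range(30)), key=lambda r: r.cost)
print(best.cost)  # should be ~0: a 4-pulse sequence realizing both targets exists
```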
If the error (averaged over the two ions) exceeds a threshold of $5\times10^{-5}$, we repeat the process with another generated gate sequence; otherwise, the sequence is stored and the process is applied to the next target gate pair. Once the pulse sequences for all six required gate pairs are determined, we carry out the RB experiment as described in the main text. The calibration procedure for the main text RB data is shown in Fig.~\ref{fig:calibration}. \begin{figure} \caption{\textbf{Calibration procedure for single-ion addressing.}} \label{fig:calibration} \end{figure} \section{Single-qubit gate calibration and errors} \label{sec:errors} In this section, we provide further details on the construction of the single-qubit error budget table (Table~\ref{tab:error_budget}). \subsection{Qubit detuning} Because of the magnetic field component of the radio-frequency trapping field, the effective energy splitting of our qubit is shifted through the AC Zeeman effect. We calibrate this shift by measuring the detuning at which maximum population transfer is achieved for a weak, 2 ms long, microwave pulse. The length of the pulse provides good frequency resolution, and its low amplitude ensures that the pulse itself induces a negligible AC Zeeman shift. To verify this calibration, we measure the Clifford gate error with RB as the detuning is swept, as shown in Fig.~\ref{fig:rbm_sweep}(a). This demonstrates that our set-point (the origin of the x-axis) is close to optimal. Without this calibration, the detuning induced by the RF trapping field (120 Hz) would dominate the gate error. The detuning error in Table~\ref{tab:error_budget} is calculated from the typical day-to-day drift of this calibrated quantity.
\subsection{Spectator state excitation} Given the peak Rabi frequency ($\sim0.5$ MHz) relative to the frequency difference between the qubit transition and transitions to spectator states ($\sim100$ MHz), a significant amount of the state population $\left(\sim10^{-4}\right)$ resides in spectator states during a gate. By ramping our pulses on over a time much longer than the period of a Rabi oscillation with the spectator states, we adiabatically transition in and out of a dressed-state basis spanning mostly the qubit and its four neighboring spectator states. More specifically, the pulse length (600 ns) includes two $t_R=120~\text{ns}$ on/off ramping periods with a $\sin^2\left(\pi t / 2 t_R\right)$ shape. \subsection{Pulse amplitude} \label{sec:single_microwave_stability} With a fixed temporal shape, the remaining parameter used to calibrate the pulse to a $\pi/2$-rotation is the microwave amplitude. We calibrate the microwave amplitude by maximizing the probability of returning to a prepared state after up to 1024 repetitions of the pulse. To verify this calibration procedure, we measure the gate error with RB as a function of the deviation from the calibrated parameter, as shown in Fig.~\ref{fig:rbm_sweep}(b). The drift in microwave amplitude is measured by monitoring the qubit state after performing $1049$ $\pi/2$-pulses. The result, shown in Fig.~\ref{fig:ampl_stability}, indicates that the impact of drift is orders of magnitude lower than the single-qubit gate error. Notably, loading events do not have a significant effect on the Rabi frequency. \subsection{Microwave AC Zeeman shift} The dressing of the qubit by spectator states also affects the qubit frequency during the gates through the AC Zeeman effect. As we are unable to ramp the microwave frequency with the microwave amplitude, we instead compensate for the shift with a fixed detuning of the microwaves over the whole duration of the pulse.
In the rotating frame of the microwaves, we then track the phase of the qubit during the inter-pulse delay, where the qubit is unaffected by the Zeeman shift, and account for the resulting phase shift in the next pulse. The Zeeman shift is calibrated by performing 800 $\pi/2$ pulses with alternating positive and negative phases. Only if the Zeeman shift is correctly calibrated do these pulses return the qubit to its initial state, allowing us to determine the optimal detuning of $\sim270$ Hz. This value is close to the theoretical value for the AC Zeeman shift (283 Hz), which assumes that the $\sigma_+$ and $\sigma_-$ polarized components of the microwave field are equal. The error resulting from typical day-to-day drift of this parameter is negligible and is quoted in Table~\ref{tab:error_budget}. We verify this calibration technique by detuning the microwaves applied during the pulse (with respect to the calibrated optimum), and monitoring the impact on RB error, see Fig.~\ref{fig:rbm_sweep}(c). \subsection{Decoherence} To estimate the impact of decoherence on the short time-scales relevant to an RB experiment, we vary the inter-pulse delay and measure its impact on gate error, see Fig.~\ref{fig:rbm_sweep}(d). Assuming that the origin of the resulting error is dephasing, which introduces an error linear in the inter-pulse delay, we get an estimate of the decoherence time $\text{T}_2^{\ast\ast} = 4.6(2)~\text{s}$. The minimum inter-pulse delay required by our control system, together with the dephasing occurring during the pulse itself, constitutes the largest contribution to our gate error that we were able to identify. \subsection{Microwave and laser leakage} Leakage of radiation affecting the qubit state is measured by preparing the qubit in either logical state, waiting up to 1 second, and measuring the change in qubit population. We measure a loss of state population of $\sim2\%$ over the course of a second, corresponding to the Clifford gate error quoted in Table~\ref{tab:error_budget}.
\subsection{Thermal motion} In the worst-case scenario of a 10,000 Clifford gate sequence, anomalous heating of 400 quanta of motion per second leads to an average thermal occupation of the in-plane radial mode of 23 quanta by the end of the sequence. Given the effective Lamb-Dicke parameter of $\eta=8\times 10^{-4}$~\cite{trap}, and simulating a $\pi/2$-pulse in the presence of ion-motion coupling, we extrapolate from small-Hilbert-space, low-thermal-occupation simulations to obtain an error of $2.4 \times 10^{-7}$ per Clifford gate over the entire sequence. \begin{figure} \caption{\textbf{Single-qubit gate error model verification.}} \label{fig:rbm_sweep} \end{figure} \begin{figure} \caption{\textbf{Microwave stability (single-qubit gates).}} \label{fig:ampl_stability} \end{figure} \FloatBarrier \section{Microwave stability in the twisted two-ion crystal} \label{sec:MW_stability_addressing} Contrary to the single-ion case, we find that with two ions in a twisted crystal, the microwave amplitude undergoes significant drift on the time-scale of tens of minutes. As with a single ion, we amplify this effect by monitoring the average qubit occupation after a sequence of $100$ microwave pulses, which perform a $223\,\pi/2$ rotation on ion 0 and a $177\,\pi/2$ rotation on ion 1. In Fig.~\ref{fig:addressing_stability}, the drift is converted into an error for the different pulses used in the simultaneous RB shown in Fig.~\ref{fig:addressing_rbm}. We know from the single-ion measurement (see Fig.~\ref{fig:ampl_stability}) that the amplitude of the microwave field at a given position in space is much more stable than the drift shown here. Therefore, we conclude that the drift arises from a change in the positions of the ions, probably due to changes in the trapping fields. \begin{figure} \caption{\textbf{Microwave stability (addressed gates).}} \label{fig:addressing_stability} \end{figure} \end{document}
\begin{document} \subjclass[2010]{Primary 42A45. Secondary 42C10, 42A38, 33C45.} \title[Multipliers of Laplace Transform Type for Laguerre and Hermite\dots]{Multipliers of Laplace Transform Type for Laguerre and Hermite Expansions} \author{Pablo L. De N\'apoli} \address{Departamento de Matem\'atica \\ Facultad de Ciencias Exactas y Naturales \\ Universidad de Buenos Aires \\ Ciudad Universitaria \\ 1428 Buenos Aires, Argentina} \email{[email protected]} \author{Irene Drelichman} \address{Departamento de Matem\'atica \\ Facultad de Ciencias Exactas y Naturales \\ Universidad de Buenos Aires \\ Ciudad Universitaria \\ 1428 Buenos Aires, Argentina} \email{[email protected]} \author{Ricardo G. Dur\'an} \address{Departamento de Matem\'atica \\ Facultad de Ciencias Exactas y Naturales \\ Universidad de Buenos Aires \\ Ciudad Universitaria \\ 1428 Buenos Aires, Argentina} \email{[email protected]} \thanks{Supported by ANPCyT under grant PICT 01307, by Universidad de Buenos Aires under grants X070 and X837 and by CONICET under grants PIP 11420090100230 and PIP 11220090100625. The first and third authors are members of CONICET, Argentina.} \begin{abstract} We present a new criterion for the weighted $L^p-L^q$ boundedness of multiplier operators for Laguerre and Hermite expansions that arise from a Laplace-Stieltjes transform. As a special case, we recover known results on weighted estimates for Laguerre and Hermite fractional integrals with a unified and simpler approach. \end{abstract} \keywords{Laguerre expansions, Hermite expansions, harmonic oscillator, fractional integration, multipliers} \maketitle \section{Introduction} The aim of this paper is to obtain weighted estimates for multipliers of Laplace transform type for Laguerre and Hermite orthogonal expansions.
To explain our results, consider the system of Laguerre functions, for fixed $\alpha>-1$, given by \begin{equation*} l_k^\alpha(x)= \left( \frac{k!}{\Gamma(k+\alpha+1)} \right)^{\frac12} e^{-\frac{x}{2}} L_k^\alpha(x) \ ,\quad k \in \mathbb{N}_0 \end{equation*} where $L_k^\alpha(x)$ are the Laguerre polynomials. The $l_k^\alpha(x)$ are eigenfunctions with eigenvalues $\lambda_{\alpha,k}= k + (\alpha+1)/2$ of the differential operator \begin{equation} \label{laguerre} L= -\left( x \frac{d^2}{dx^2} + (\alpha+1) \frac{d}{dx} - \frac{x}{4} \right) \end{equation} and are an orthonormal basis in $L^2(\mathbb{R}_{+},x^\alpha dx)$. Therefore, for $\gamma<p(\alpha+1)-1$ we can associate to any $f \in L^p(\mathbb{R}_+, x^\gamma \, dx)$ its Laguerre series: \begin{equation*} f(x) \sim \sum_{k=0}^\infty a_{\alpha,k}(f) l_k^\alpha(x), \quad a_{\alpha,k}(f)= \int_0^\infty f(x) l_k^\alpha(x) x^\alpha dx \label{Laguerre-series} \end{equation*} and, given a bounded sequence $\{m_k\}$, we can define a multiplier operator by \begin{equation} M_{\alpha,m} f(x) \sim \sum_{k=0}^\infty a_{\alpha,k}(f) m_k l_k^\alpha(x). \label{multiplier-operator} \end{equation} The main example of the kind of multipliers we are interested in is the Laguerre fractional integral, introduced by G. Gasper, K. Stempak and W. Trebels in \cite{GST} as an analogue in the Laguerre setting of the classical fractional integral of Fourier analysis, and given by \begin{equation*} I_\sigma f(x) \sim \sum_{k=0}^\infty (k+1)^{-\sigma} a_{\alpha,k}(f) l_k^\alpha(x). \end{equation*} In \cite{GST} the aforementioned authors obtained weighted estimates for this operator that were later improved by G. Gasper and W. Trebels in \cite{GT} using a completely different proof.
In this work we recover some of the ideas of the original method of \cite{GST}, but simplifying the proof in many technical details and extending it to obtain a better range of exponents that, in particular, gives the same result as \cite{GT} for the Laguerre fractional integral. Moreover, we show that our proof applies to a wide class of multipliers, namely multipliers arising from a Laplace-Stieltjes transform, which are of the form \eqref{multiplier-operator} with $m_k=m(k)$ given by the Laplace-Stieltjes transform of some real-valued function $\Psi(t)$, that is, \begin{equation} m(s) = \mathfrak{L}\Psi(s) := \int_0^\infty e^{-st} d\Psi(t). \label{Laplace.transform} \end{equation} We will assume that $\Psi$ is of bounded variation in $\mathbb{R}_+$, so that the Laplace transform converges absolutely in the half plane $\hbox{Re}(s)\geq 0$ (see \cite[Chapter 2]{Widder}) and the definition of the operator $M_{\alpha,m}$ makes sense. Multipliers of this kind are quite natural to consider and, indeed, a slightly different definition is given by E. M. Stein in \cite{Stein} and was previously used in the unweighted setting by E. Sasso in \cite{S}. More recently, B. Wr\'obel \cite{W} has obtained weighted $L^p$ estimates both for the kind of multipliers considered in \cite{Stein} and for the ones considered here when $\alpha \in \{-\frac12\} \cup [\frac12,\infty)$, by proving that they are Calder\'on-Zygmund operators (see Section 4 below for a precise comparison of results). Also, let us mention that T. Mart\'inez has considered multipliers of Laplace transform type for ultraspherical expansions in \cite{Martinez}. Other kinds of multipliers for Laguerre expansions have also been considered; see, for instance, \cite{GST, Stempak-Trebels, Thangavelu}, where boundedness criteria are given in terms of difference operators.
In our case, we will only require minimal assumptions on the function $\Psi$, which are more natural in our context, and easier to verify in the case of the Laguerre fractional integral and in other examples that we will consider later. Indeed, the main theorem we will prove for multipliers for Laguerre expansions reads as follows: \begin{theorem} Assume that $\alpha>-1$ and that $M_{\alpha,m}$ is a multiplier of Laplace transform type for Laguerre expansions, given by \eqref{multiplier-operator} and \eqref{Laplace.transform}, such that: \begin{enumerate} \item[(H1)] \begin{equation*}\int_0^\infty |d\Psi|(t)< +\infty; \end{equation*} \item[(H2)] there exist $\delta>0$, $0 < \sigma < \alpha+1$, and $C>0$ such that $$ |\Psi(t)| \leq C t^{\sigma} \quad \hbox{for} \; 0 \le t \leq \delta .$$ \end{enumerate} Then $M_{\alpha,m}$ can be extended to a bounded operator such that $$ \| M_{\alpha,m} f \|_{L^q(\mathbb{R}_+, x^{\alpha-bq})} \leq C \| f \|_{L^p(\mathbb{R}_+, x^{\alpha+ap})} $$ provided that the following conditions hold: \begin{equation*} 1 < p \leq q < \infty \quad , \quad a < \frac{\alpha+1}{p^\prime}\quad , \quad b < \frac{\alpha+1}{q} \end{equation*} and \begin{equation*} \label{cond19} \left( \frac{1}{q} - \frac{1}{p} \right) \left(\alpha+\frac12\right) \le a+b \le \left(\frac{1}{q}-\frac{1}{p}\right)(\alpha+1) + \sigma. \end{equation*} \label{main-result} \end{theorem} Besides the system $\{l_k^\alpha\}_{k\ge 0}$, other families of Laguerre functions have been considered in the literature, and using an idea due to I. Abu-Falah, R. A. Mac\'ias, C. Segovia and J. L. Torrea \cite{AMST} we will show that analogues of Theorem \ref{main-result} hold for those families with appropriate changes in the exponents (see Section 3 for the precise statement of results).
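The exponent restrictions of the theorem can be explored on concrete parameters with a small helper; this is merely a convenience sketch (the function name is ours, and the strict/non-strict inequalities transcribe the statement above):

```python
def conditions_hold(alpha, sigma, p, q, a, b):
    """Check the exponent hypotheses of the weighted multiplier theorem (sketch)."""
    p_prime = p / (p - 1)  # conjugate exponent p'
    return (1 < p <= q
            and 0 < sigma < alpha + 1
            and a < (alpha + 1) / p_prime
            and b < (alpha + 1) / q
            and (1 / q - 1 / p) * (alpha + 0.5)
                <= a + b
                <= (1 / q - 1 / p) * (alpha + 1) + sigma)

# the unweighted diagonal case p = q, a = b = 0 is admissible whenever 0 < sigma < alpha + 1:
print(conditions_hold(alpha=1.0, sigma=0.5, p=2.0, q=2.0, a=0.0, b=0.0))  # True
```

Note that for $p=q$ the last condition reduces to $0\le a+b\le\sigma$.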
Finally, the well-known connection between Laguerre and Hermite expansions will allow us to extend the above result to an analogous result for Laplace type multipliers for Hermite expansions. To make this precise, recall that, given $f \in L^2(\mathbb{R})$, we can consider its Hermite series expansion \begin{equation*} f \sim \sum_{k=0}^\infty c_k(f) h_k , \quad c_{k}(f)= \int_{-\infty}^\infty f(x) h_k(x) dx \label{Hermite-series} \end{equation*} where $h_k$ are the Hermite functions given by \begin{equation*} h_k(x)= \frac{(-1)^k}{(2^k k! \pi^{1/2})^{1/2}} H_k(x) e^{-\frac{x^2}{2}}, \end{equation*} which are the normalized eigenfunctions of the harmonic oscillator operator $$H=-\frac{d^2}{dx^2} + |x|^2. $$ As before, given a bounded sequence $\{m_k\}$ we can define a multiplier operator by \begin{equation} \label{hermite-multiplier} M_{H,m} f \sim \sum_{k=0}^\infty c_k(f) m_k h_k \end{equation} and we say that it is a Laplace transform type multiplier if equation \eqref{Laplace.transform} holds.
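Again as an illustrative numerical aside (with hypothetical helper names), the normalized Hermite functions can be generated by the standard recurrence $h_{k+1}(x)=x\sqrt{2/(k+1)}\,h_k(x)-\sqrt{k/(k+1)}\,h_{k-1}(x)$, and the eigenvalue relation $Hh_k=(2k+1)h_k$ checked by finite differences:

```python
import math

def hermite_h(k, x):
    """Normalized Hermite function h_k(x) via the stable two-term recurrence."""
    h0 = math.pi ** -0.25 * math.exp(-x * x / 2)
    if k == 0:
        return h0
    h1 = math.sqrt(2.0) * x * h0
    for n in range(1, k):
        h0, h1 = h1, x * math.sqrt(2.0 / (n + 1)) * h1 - math.sqrt(n / (n + 1)) * h0
    return h1

k, x, step = 4, 0.7, 1e-3
f = lambda t: hermite_h(k, t)
d2 = (f(x + step) - 2 * f(x) + f(x - step)) / step ** 2
Hf = -d2 + x * x * f(x)        # harmonic oscillator H = -d^2/dx^2 + x^2
print(Hf, (2 * k + 1) * f(x))  # both sides of H h_k = (2k+1) h_k
```

The $(-1)^k$ sign in the definition above does not affect the eigenvalue relation, so it is omitted here.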
Then, we have the following analogue of Theorem \ref{main-result}, which, in the case of the Hermite fractional integral (that is, for $m_k= (2k+1)^{-\sigma}$), gives the same result as \cite[Theorem 2.5]{Nowak-Stempak} in the one-dimensional case: \begin{theorem} \label{teorema-hermite} Assume that $M_{H,m}$ is a multiplier of Laplace transform type for Hermite expansions, given by \eqref{hermite-multiplier} and \eqref{Laplace.transform}, such that: \begin{enumerate} \item[(H1h)] $$ \int_0^\infty |d\Psi|(t) < +\infty;$$ \item[(H2h)] there exist $\delta>0$, $0 < \sigma < \frac12$, and $C>0$ such that $$ |\Psi(t)| \leq C t^{\sigma} \quad \hbox{for} \; 0 \le t \leq \delta.$$ \end{enumerate} Then $M_{H,m}$ can be extended to a bounded operator such that $$ \| M_{H,m} f \|_{L^q(\mathbb{R}, |x|^{-bq})} \leq C \| f \|_{L^p(\mathbb{R}, |x|^{ap})} $$ provided that the following conditions hold: \begin{equation*} 1 < p \leq q < \infty \quad , \quad a<\frac{1}{p'} \quad , \quad b<\frac{1}{q} \end{equation*} and \begin{equation*} 0\le a+b \le \frac{1}{q} -\frac{1}{p}+ 2\sigma. \label{escalah} \end{equation*} \end{theorem} The remainder of this paper is organized as follows. In Section 2 we prove Theorem \ref{main-result}. For the case $\alpha \ge 0$ the proof relies on the representation of the operator as a twisted generalized convolution, already used in \cite{GST} for the Laguerre fractional integral. However, instead of using the method of that paper to obtain weighted bounds, we give a simpler proof based on the use of Young's inequality in the multiplicative group $(\mathbb{R}_+, \cdot)$, which allows us to obtain a wider range of exponents. Moreover, we obtain an estimate for the convolution kernel which simplifies and generalizes Lemma 2.1 from \cite{GST}. For the case $-1 < \alpha < 0$ the result is obtained from the previous case by means of a weighted transplantation theorem from \cite{Garrigos}.
A similar idea was used by Y. Kanjin and E. Sato in \cite{KS} to prove unweighted estimates for the Laguerre fractional integral using a transplantation theorem from \cite{K}. In Section 3 we obtain the analogues of Theorem \ref{main-result} for other Laguerre systems using an idea from \cite{AMST}. In Section 4 we exploit the relation between Laguerre and Hermite expansions to derive Theorem \ref{teorema-hermite} from Theorem \ref{main-result}. Finally, in Section 5 we present some examples of operators covered by the two main theorems and make some further comments. \section{Proof of the theorem in the Laguerre case} In this section we prove Theorem \ref{main-result}. We will divide the proof into three steps: \begin{enumerate} \item We write the operator as a twisted generalized convolution and obtain the estimate for the convolution kernel when $\alpha \ge 0$. This part of the proof follows essentially the ideas of \cite{GST}, but in the more general setting of multipliers of Laplace transform type. In particular, we provide an easier proof of the analogue of \cite[Lemma 2.1]{GST} in this setting (see Lemma \ref{lemma-g} below). \item We complete the proof of the theorem in the case $\alpha \ge 0$ by proving weighted estimates for the generalized Euclidean convolution. \item We extend the results to the case $-1<\alpha < 0$ using the case $\alpha \ge 0$ and a weighted transplantation theorem from \cite{Garrigos} (Lemma \ref{lema-garrigos} below).
\end{enumerate} \subsection{Step 1: representing the multiplier operator as a twisted generalized convolution when $\alpha \ge 0$} Following \cite{Mc,A} we define the twisted generalized convolution of $F$ and $G$ by $$ (F \times G)(x) := \int_0^\infty \tau_x F(y) \, G(y) \, y^{2\alpha+1} \, dy$$ where the twisted translation operator is defined by $$ \tau_x F(y)= \frac{\Gamma(\alpha+1)}{\pi^{1/2} \Gamma(\alpha+1/2)} \int_0^\pi F((x,y)_\theta) \mathcal{J}_{\alpha-1/2}(xy \sin \theta) (\sin \theta )^{2\alpha} \; d\theta $$ with $$\mathcal{J}_\beta(x)= \Gamma(\beta+1) J_\beta(x)/(x/2)^\beta, $$ $J_\beta(x)$ being the Bessel function of order $\beta$ and $$ (x,y)_\theta= (x^2 + y^2 - 2xy \cos \theta)^{1/2}.$$ Then, we have (formally) that \begin{equation} \label{malfa} M_{\alpha,m} f(x^2)= (F \times G)(x) \end{equation} where $$ F(y)=f(y^2)\quad , \quad G(y) = g(y^2) $$ and \begin{equation} g(x) \sim \frac1{\Gamma(\alpha+1)} \sum_{k=0}^\infty m_k L_k^\alpha(x) e^{-\frac{x}{2}}. \label{series-g} \end{equation} Recalling that $|\mathcal{J}_\beta(x)| \leq C_\beta$ if $\beta \geq -\frac12$, we have that: \begin{equation} | F \times G | \le C (|F| \star |G|) \label{convolution-bound} \end{equation} where $\star$ denotes the generalized Euclidean convolution which is defined by \begin{equation} \label{gen-eucl} (F \star G)(x) := \int_0^\infty \tau^E_x F(y) \, G(y) \, y^{2\alpha+1} \, dy \end{equation} with \begin{equation} \label{gen-trans} \tau^E_x F(y):= \frac{\Gamma(\alpha+1)}{\pi^{1/2} \Gamma(\alpha+1/2)} \int_0^\pi F((x,y)_\theta) (\sin \theta )^{2\alpha} \; d\theta.
\end{equation} As a consequence of \eqref{malfa} and \eqref{convolution-bound}, the operator $M_{\alpha,m}$ is pointwise bounded by a generalized Euclidean convolution with the kernel $G$ (with respect to the measure $x^{2\alpha+1} \, dx$). Therefore, we need to obtain an appropriate estimate for $G(x)=g(x^2)$, which is essentially $$ |g(x)| \leq C x^{\sigma-\alpha-1} \; \hbox{for} \; \alpha \geq 0 \; \hbox{and} \; 0 < \sigma < \alpha +1 $$ (see Lemma \ref{lemma-g} below for a precise statement). This generalizes the result given in \cite[Lemma 2.1]{GST} but, while in that paper the proof of the corresponding estimate is based on delicate pointwise estimates for the Laguerre functions, our proof is based on the following generating function for the Laguerre polynomials (see, for instance, \cite{Thangavelu}): \begin{equation} \label{generating-function} \sum_{k=0}^\infty L_k^\alpha(x) w^k = (1-w)^{-\alpha-1} e^{-\frac{xw}{1-w}} := Z_{\alpha,x}(w) \quad (|w|<1). \end{equation} To explain our ideas, we point out that if the series in \eqref{series-g} were convergent (this need not be the case) we would have: \begin{align*} g(x) &= \frac1{\Gamma(\alpha+1)} \sum_{k=0}^\infty m_k L_k^\alpha(x) e^{-\frac{x}{2}} \\ & = \frac1{\Gamma(\alpha+1)} \sum_{k=0}^\infty \left( \int_0^\infty e^{-kt} d\Psi(t) \right) L_k^\alpha(x) e^{-\frac{x}{2}} \\ & = \frac1{\Gamma(\alpha+1)} e^{-\frac{x}{2}} \int_0^\infty Z_{\alpha,x}(e^{-t}) \; d\Psi(t). \end{align*} The main advantage of this formula is that it yields a rather explicit expression for $g$ in which, thanks to \eqref{generating-function}, the Laguerre polynomials do not appear. However, in general it is not clear if the series in \eqref{series-g} is convergent (not even in the special case of the Laguerre fractional integral, for which $d\Psi(t)=\frac{t^{\sigma-1}e^{-t}}{\Gamma(\sigma)}\,dt$).
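Both the generating function \eqref{generating-function} and the heuristic series-integral identity above (in its Abel-regularized form, with $\rho<1$) are easy to confirm numerically. The sketch below uses the sample choice $d\Psi(t)=te^{-t}\,dt$, $m_k=(k+1)^{-2}$, which is one admissible example rather than the general case; all helper names and tolerances are ours:

```python
import math

def laguerre_L(k, alpha, x):
    # generalized Laguerre polynomial via the three-term recurrence
    if k == 0:
        return 1.0
    prev, cur = 1.0, 1.0 + alpha - x
    for n in range(1, k):
        prev, cur = cur, ((2 * n + 1 + alpha - x) * cur - (n + alpha) * prev) / (n + 1)
    return cur

def Z(alpha, x, w):
    # generating-function kernel Z_{alpha,x}(w)
    return (1 - w) ** (-alpha - 1) * math.exp(-x * w / (1 - w))

def simpson(g, a, b, n):
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3

alpha, x, w = 0.7, 1.3, 0.5
series = sum(laguerre_L(k, alpha, x) * w ** k for k in range(200))
print(series, Z(alpha, x, w))  # generating function: both sides agree

# Abel-regularized kernel for m_k = (k+1)^{-2}, i.e. dPsi(t) = t e^{-t} dt
rho = 0.6
g_series = math.exp(-x / 2) / math.gamma(alpha + 1) * sum(
    (k + 1) ** -2 * rho ** k * laguerre_L(k, alpha, x) for k in range(400))
g_integral = math.exp(-x / 2) / math.gamma(alpha + 1) * simpson(
    lambda t: Z(alpha, x, rho * math.exp(-t)) * t * math.exp(-t), 0.0, 40.0, 4000)
print(g_series, g_integral)  # the two representations of g_rho agree
```

For $\rho<1$ the factor $\rho^k$ makes both truncations converge rapidly, which is exactly the role of the regularization introduced next.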
Moreover, the integration of the series in $Z_{\alpha,x}(w)$ is difficult to justify since it is not uniformly convergent in the interval $[0,1]$ (because $Z_{\alpha,x}(w)$ is not analytic at $w=1$). Nevertheless, we will see that the formal manipulations above can be given a rigorous meaning if we interpret the convergence of the series in \eqref{series-g} in the Abel sense. For this purpose, we introduce a regularization parameter $\rho \in (0,1)$ and consider the regularized function \begin{equation} g_{\rho}(x) = \frac1{\Gamma(\alpha+1)} \sum_{k=0}^\infty m_k \rho^k L_k^\alpha(x) e^{-\frac{x}{2}} \label{series-g-rho} \end{equation} and recall that the series in \eqref{series-g} is summable in the Abel sense to the limit $g(x)$ if the limit \begin{equation*} g(x) = \lim_{\rho \to 1} g_{\rho}(x) \end{equation*} exists. With this definition in mind, we can give a rigorous meaning to the heuristic idea described above. More precisely, we will prove the following: \begin{lemma} \label{lemma-g} Let $ g_{\rho}$ be defined by \eqref{series-g-rho}. Then: (1) For $0<\rho<1$ the series \eqref{series-g-rho} converges absolutely. (2) The following representation formula holds: \begin{equation} \label{rep-grho} g_{\rho}(x) = \frac{e^{-\frac{x}{2}}}{\Gamma(\alpha+1)} \int_0^\infty Z_{\alpha,x}(\rho e^{-t}) \; d\Psi(t). \end{equation} (3) If we define $g(x)$ by setting $\rho=1$ in this representation formula, $g(x)$ is well defined and the series \eqref{series-g} converges to $g(x)$ in the Abel sense. (4) If $\alpha>0$, $0 <\rho_0 < \rho \leq 1$ and $0 < \sigma < \alpha +1$, then $$ |g_\rho (x)| \leq C x^{\sigma-\alpha-1}, $$ with a constant $C=C(\alpha,\sigma,\delta,\rho_0)$ independent of $\rho$. \end{lemma} \begin{proof} (1) Observe first that hypothesis $(H1)$ implies that $(m_k)$ is a bounded sequence. Indeed, $$ |m_k| \leq \int_0^\infty e^{-kt} |d\Psi|(t) \leq \int_0^\infty |d\Psi|(t) = C < +\infty.
$$ Now recall (see \cite[Lemma 1.5.3]{Thangavelu}) that, if $\nu=\nu(k)= 4k + 2\alpha +2$, $$ |l_k^\alpha(x)| \leq C (x\nu)^{-\frac14} \quad \hbox{if } \frac{1}{\nu} \leq x \leq \frac{\nu}{2}. $$ Therefore, if we fix $x$, for $k \geq k_0$, $x$ is in the region where this estimate holds (since $\nu \to +\infty$ when $k \to +\infty$), and from Stirling's formula we deduce that $$ \frac{k!}{\Gamma(k+\alpha+1)} = \frac{\Gamma(k+1)}{\Gamma(k+\alpha+1)} = O(k^{-\alpha}). $$ Then we have the following estimate for the terms of the series in \eqref{series-g-rho} $$ |m_k \rho^k L^\alpha_k(x)| e^{-\frac{x}{2}} \leq C(x) \rho^k k^{\frac{\alpha}{2}-\frac14} \; \hbox{for} \; k \geq k_0, $$ and, since $\rho<1$, this implies that the series converges absolutely.\footnote{K. Stempak has observed that this result can also be justified by observing that, for fixed $x$, $L^\alpha_k(x)$ has at most polynomial growth as $k\to \infty$ (see, for instance, (7.6.9) and (7.6.10) in \cite{Sz}). Hence, the polynomial growth of $L^\alpha_k(x)$ against the exponential decay of $\rho^k$, together with the boundedness of $(m_k)$, produces an absolutely convergent series.} (2) First, observe that $Z_{\alpha,x}(w)$ is continuous as a function of a real variable for $w \in [0,1]$ (if we define $Z_{\alpha,x}(1)=0$) and, therefore, it is bounded, say \begin{equation*} |Z_{\alpha,x}(w)| \leq C = C(\alpha,x) \; \hbox{for} \; w \in [0,1]. \label{Z-bound} \end{equation*} Hence, using hypothesis $(H1)$ we see that the integral in the representation formula is convergent for any $\rho \in [0,1]$.
Moreover, from our assumptions we have that, for $\rho<1$, \begin{align} \nonumber g_{\rho}(x) & = \frac{1}{\Gamma(\alpha+1)} \sum_{k=0}^\infty m_k \rho^k L_k^\alpha(x) e^{-\frac{x}{2}} \\ \nonumber & = \frac{1}{\Gamma(\alpha+1)} \sum_{k=0}^\infty \left( \int_0^\infty \rho^k e^{-kt} d\Psi(t) \right) L_k^\alpha(x) e^{-\frac{x}{2}} \\ \nonumber & = \lim_{N \to +\infty} \frac{1}{\Gamma(\alpha+1)} \sum_{k=0}^N \left( \int_0^\infty \rho^k e^{-kt} d\Psi(t) \right) L_k^\alpha(x) e^{-\frac{x}{2}} \\ & = \lim_{N \to +\infty} \frac{1}{\Gamma(\alpha+1)} e^{-\frac{x}{2}} \int_0^\infty Z_{\alpha,x}^{(N)}(\rho e^{-t}) \; d\Psi(t) \label{limN} \end{align} where $$ Z_{\alpha,x}^{(N)}(w) = \sum_{k=0}^N L_k^\alpha(x) w^k $$ denotes a partial sum of the series for $Z_{\alpha,x}(w)$. Now, since $\rho<1$, that series converges uniformly in the interval $[0,\rho]$, so that given $\varepsilon>0$ there exists $N_0=N_0(\varepsilon)$ such that $$ |Z_{\alpha,x}(w)-Z_{\alpha,x}^{(N)}(w)| < \varepsilon \quad \hbox{for all } w \in [0,\rho] \; \hbox{if} \; N \geq N_0. $$ Using this estimate and hypothesis $(H1)$, we obtain \begin{align*} & \left| \int_0^\infty Z_{\alpha,x}(\rho e^{-t}) \; d\Psi(t) - \int_0^\infty Z_{\alpha,x}^{(N)}(\rho e^{-t}) \; d\Psi(t) \right| \\ & \leq \int_0^\infty |Z_{\alpha,x}(\rho e^{-t})-Z_{\alpha,x}^{(N)}(\rho e^{-t})| \; |d\Psi|(t) \\ &\leq C \varepsilon \end{align*} from which we conclude that \begin{equation} \label{limZN} \lim_{N \to +\infty} \int_0^\infty Z_{\alpha,x}^{(N)}(\rho e^{-t}) \; d\Psi(t) = \int_0^\infty Z_{\alpha,x}(\rho e^{-t}) \; d\Psi(t) \end{equation} and, substituting \eqref{limZN} into \eqref{limN}, we obtain \eqref{rep-grho}. (3) We have already observed that the integral in \eqref{rep-grho} is convergent for $\rho=1$.
Moreover, the bound we have proved above for $Z_{\alpha,x}$, together with $(H1)$, implies that we can apply the Lebesgue dominated convergence theorem to this integral (with a constant majorant function, which is integrable with respect to $|d\Psi|(t)$ by $(H1)$), to conclude that $g(x)=\lim_{\rho \to 1}g_\rho(x)$. (4) Let $\delta$ be as in $(H2)$ and observe that, integrating by parts on $[0,\delta]$, \begin{align*} \Gamma(\alpha+1) g_\rho(x) &= e^{-\frac{x}{2}} \int_0^\infty Z_{\alpha,x}(\rho e^{-t}) d\Psi(t) \\ &= e^{-\frac{x}{2}} \int_0^\delta Z_{\alpha,x}(\rho e^{-t}) d\Psi(t) + e^{-\frac{x}{2}} \int_\delta^\infty Z_{\alpha,x}(\rho e^{-t}) d\Psi(t) \\ & = \underbrace{ e^{-\frac{x}{2}} \int_0^\delta Z_{\alpha,x}^\prime (\rho e^{-t}) \rho e^{-t} \Psi(t) \, dt}_{(i)} + \underbrace{ e^{-\frac{x}{2}} Z_{\alpha,x}(\rho e^{-\delta}) \Psi(\delta)}_{(ii)} \\ & \quad - \underbrace{e^{-\frac{x}{2}} Z_{\alpha,x}(\rho) \Psi(0)}_{(iii)} + \underbrace{ e^{-\frac{x}{2}} \int_\delta^\infty Z_{\alpha,x}(\rho e^{-t}) d\Psi(t) }_{(iv)} \end{align*} Since $|Z_{\alpha,x}(\rho e^{-\delta})|\le (1-\rho e^{-\delta})^{-\alpha-1} \le C_\delta$, $\Psi(0)=0$, and $\sigma-\alpha-1<0$, clearly $(ii) \le C x^{\sigma-\alpha-1}$ and $(iii)$ vanishes. To bound $(iv)$, notice that if $\omega= \rho e^{-t}$ and $t>\delta$, then $0\le Z_{\alpha,x}(\omega) \le M_\delta$. Therefore, using $(H1)$ and the fact that $\sigma-\alpha-1<0$ we obtain \begin{equation*} (iv) \le e^{-\frac{x}{2}} M_\delta \int_\delta^\infty |d \Psi|(t) \le C x^{\sigma-\alpha-1}. \end{equation*} Now, observing that \begin{equation*} Z_{\alpha,x}^{\prime}(\omega) = (\alpha+1) Z_{\alpha+1,x}(\omega) - x Z_{\alpha+2,x}(\omega)
\end{equation*} and using $(H2)$, we obtain \begin{align*} (i) & \le C e^{-\frac{x}{2}} \int_0^\delta Z_{\alpha+1,x}(\rho e^{-t}) \rho e^{-t} t^{\sigma} \, dt \\ & \quad + C e^{-\frac{x}{2}} \int_0^\delta x Z_{\alpha+2,x}(\rho e^{-t}) \rho e^{-t} t^{\sigma} \, dt \end{align*} and the desired estimates follow by a direct application of the following lemma. \end{proof} \begin{lemma} Under the conditions of Lemma \ref{lemma-g}(4), if $$I(x) = e^{-\frac{x}{2}} \int_0^\delta Z_{\beta,x}(\rho e^{-t}) \rho e^{-t} t^{\sigma} \, dt,$$ and $\beta=\alpha+1$ or $\beta=\alpha+2$, then $|I(x)| \le C x^{\sigma-\beta}$ with $C=C(\beta, \sigma, \delta, \rho_0)$. \end{lemma} \begin{proof} Making the change of variables $w= \rho e^{-t}$, and recalling the definition of $Z_{\beta,x}(w)$ given by \eqref{generating-function}, we see that \begin{align*} I(x) & = e^{-\frac{x}{2}} \int_{\rho e^{-\delta}}^\rho (1-w)^{-\beta-1} e^{-\frac{xw}{1-w}} \log^{\sigma}\left(\frac{\rho}{w}\right) \, dw. \end{align*} Making a further change of variables $u=\frac12 + \frac{w}{1-w}$ and setting $c_\delta = e^{-\delta}$ this is \begin{align} \nonumber I(x) & = \int_{\frac12 + \frac{c_\delta \rho}{1-c_\delta \rho}}^{\frac12 + \frac{\rho}{1-\rho}} \left(u+\frac12\right)^{\beta+1} e^{-ux} \left[ \log\left( \rho \frac{u+\frac12}{u-\frac12} \right)\right]^{\sigma} \frac{1}{\left( u+\frac12 \right)^2} \, du \\ & \le C \int_{\frac12 + \frac{c_\delta \rho}{1-c_\delta \rho}}^{\frac12 + \frac{\rho}{1-\rho}} u^{\beta-1} e^{-ux} \left( u - \frac12 \right)^{-\sigma} \underbrace{\left[u(\rho-1)+\frac12 (\rho+1)\right]^{\sigma}}_{:= \tilde u(\rho)} \, du \label{just} \end{align} where in \eqref{just} we have used that, since $$ \rho \frac{u+\frac12}{u-\frac12} = 1+\frac{u(\rho-1) + \frac12 (\rho+1)}{u-\frac12},$$ then $$\log\left(
\rho \frac{u+\frac12}{u-\frac12} \right) \le \frac{u(\rho-1) + \frac12 (\rho+1)}{u-\frac12}.$$ Since $\frac12 < u\le \frac12 + \frac{\rho}{1-\rho}$, it is immediate that $$0 \le u(\rho-1)+ \frac12(\rho+1)\le \rho,$$ which, using that $\sigma \ge 0$, implies $ \tilde u(\rho)\le 1.$ Also, since $$ u \ge \frac12 + \frac{c_\delta \rho_0}{1- c_\delta \rho_0} > \frac12 $$ we have that $$ \left(u-\frac12 \right)^{-\sigma} \le C u^{-\sigma} $$ where the constant depends only on $\rho_0$ and $\delta$. Therefore, \begin{align} \nonumber I(x) & \le C \int_0^\infty u^{\beta-\sigma-1} e^{-ux} \, du \\ & = C x^{-\beta+\sigma} \int_0^\infty v^{\beta-\sigma-1} e^{-v} \, dv \label{r1} \\ & \le C x^{-\beta+\sigma} \label{r2} \end{align} where in \eqref{r1} we have made the change of variables $v=ux$, and in \eqref{r2} we have used that $\beta-\sigma-1> -1$ because $\beta=\alpha+1$ or $\beta=\alpha+2$. \end{proof} \subsection{Step 2: weighted estimates for the generalized Euclidean convolution} Following the idea of the previous section, we define a regularized multiplier operator $M_{\alpha,m,\rho}$ by \begin{equation} \label{Mamr} M_{\alpha,m,\rho} f(x):= \sum_{k=0}^\infty m_k \rho^k a_{\alpha,k}(f) l_k^\alpha(x). \end{equation} In this section we will obtain the estimate \begin{equation} \label{acotacion} \left( \int_0^\infty |M_{\alpha,m,\rho}(f)|^q x^{\alpha-bq} \; dx \right)^{\frac{1}{q}} \leq C \left( \int_0^\infty |f|^p x^{\alpha+ap} \; dx \right)^{\frac{1}{p}} \end{equation} for $f \in L^p(\mathbb{R}_+, x^{\alpha +ap})$, with a constant $C$ independent of the regularization parameter $\rho$ and for appropriate $a,b$ (see Theorem \ref{teo-convolucion}).
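The uniformity in $\rho$ of the kernel estimate $|g_\rho(x)|\le Cx^{\sigma-\alpha-1}$ can be illustrated numerically for the sample multiplier $d\Psi(t)=te^{-t}\,dt$ ($\sigma=2$), taking $\alpha=1.5$ so that $0<\sigma<\alpha+1$: the quantity $|g_\rho(x)|\,x^{\alpha+1-\sigma}$ should remain bounded as $\rho\to1$. The grid sizes and the bound $3$ below are ad hoc choices for this sketch:

```python
import math

def Z(alpha, x, w):
    # generating-function kernel Z_{alpha,x}(w)
    return (1 - w) ** (-alpha - 1) * math.exp(-x * w / (1 - w))

def simpson(g, a, b, n):
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3

alpha, sigma = 1.5, 2.0  # sample parameters with 0 < sigma < alpha + 1

def g_rho(x, rho):
    # regularized kernel for dPsi(t) = t e^{-t} dt, i.e. m_k = (k+1)^{-2};
    # a fine grid near t = 0 resolves the peak that forms as rho -> 1
    f = lambda t: Z(alpha, x, rho * math.exp(-t)) * t * math.exp(-t)
    integral = simpson(f, 0.0, 0.5, 20000) + simpson(f, 0.5, 40.0, 2000)
    return math.exp(-x / 2) / math.gamma(alpha + 1) * integral

vals = [abs(g_rho(x, rho)) * x ** (alpha + 1 - sigma)
        for x in [0.01, 0.1, 1.0, 5.0] for rho in [0.9, 0.99, 0.999]]
print(max(vals))  # stays bounded, as |g_rho(x)| <= C x^{sigma-alpha-1} predicts
```

The maximum is attained near small $x$ and $\rho$ close to $1$, matching the singular behavior $x^{\sigma-\alpha-1}$ of the limiting kernel.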
Indeed, the operator can be expressed as before as a twisted generalized convolution with kernel $G_{\rho}(y)=g_\rho(y^2)$ (in place of $G$), and by Lemma \ref{lemma-g}, if $F(y)=f(y^2)$, we have the pointwise bound $$ |M_{\alpha,m,\rho} f(x^2)| \leq C (|F| \star |G_\rho|)(x) \leq C ( |F| \star |x^{2(\sigma-\alpha-1)}|)(x).$$ Therefore, \eqref{acotacion} will follow from a weighted inequality for the generalized Euclidean convolution with kernel $K_\sigma := x^{2(\sigma-\alpha-1)}$ (Theorem \ref{teo-convolucion}). Once we have \eqref{acotacion}, Theorem \ref{main-result} will follow by a standard density argument. Indeed, if we consider the space $$ E= \{ f(x)=p(x) e^{-\frac{x}{2}} : 0 \leq x, \, p(x) \mbox{ a polynomial} \}, $$ any $f \in E$ has only a finite number of non-vanishing Laguerre coefficients. In that case, it is straightforward that $ M_{\alpha,m} f(x)$ is well-defined and $$ M_{\alpha,m} f(x) = \lim_{\rho \to 1} M_{\alpha,m,\rho} f(x). $$ Then, by Fatou's lemma, $$ \int_0^\infty |M_{\alpha,m}(f)|^q x^{\alpha-bq} \; dx \le \liminf_{\rho \to 1} \int_0^\infty |M_{\alpha,m,\rho}(f)|^q x^{\alpha-bq} \; dx $$ and, therefore, we obtain $$ \left( \int_0^\infty |M_{\alpha,m}(f)|^q x^{\alpha-bq} \; dx \right)^{\frac{1}{q}} \leq C \left( \int_0^\infty |f|^p x^{\alpha+ap} \; dx \right)^{\frac{1}{p}} \; \forall f \in E.$$ Since $E$ is dense in $L^p(\mathbb{R}_+,x^{\alpha+ap})$, we deduce that $M_{\alpha,m}$ can be extended to a bounded operator from $L^p(\mathbb{R}_+, x^{\alpha +ap})$ to $L^q(\mathbb{R}_+, x^{\alpha -bq})$. Moreover, the extended operator satisfies $$ M_{\alpha,m} f = \lim_{\rho \to 1} M_{\alpha,m,\rho} f.$$ This means that the formula \eqref{multiplier-operator} is valid for $f\in L^p(\mathbb{R}_+, x^{\alpha +ap})$ if the summation is interpreted in the Abel sense with convergence in $L^q(\mathbb{R}_+, x^{\alpha -bq})$.
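As a toy illustration of Abel summability (unrelated to the specific operator, and purely for intuition): the series $\sum_k(-1)^k$ diverges, but its Abel means $\sum_k(-1)^k\rho^k=\frac{1}{1+\rho}$ converge to $\frac12$ as $\rho\to1$:

```python
# Abel summation demo: a divergent series with Abel limit 1/2
for rho in [0.9, 0.99, 0.999]:
    s = sum((-1) ** k * rho ** k for k in range(100000))
    print(rho, s)  # approaches 1/(1+rho), hence 1/2 as rho -> 1
```

Here the truncation at $10^5$ terms is harmless since $\rho^{10^5}$ is negligible for these values of $\rho$.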
Therefore, to conclude the proof of Theorem \ref{main-result} in the case $\alpha\ge 0$ it is enough to see that the following result holds: \begin{theorem} \label{teo-convolucion} Let $\alpha \ge 0$, $0<\sigma<\alpha+1$ and let $M_{\alpha,m,\rho}$ be given by \eqref{Mamr}, satisfying $(H1)$ and $(H2)$. Then, for all $f\in L^p(\mathbb{R}_+, x^{\alpha+ap})$, the following estimate holds \begin{equation*} \|M_{\alpha,m,\rho}f(x^2) x^{-2b} \|_{L^q(\mathbb{R}_+, x^{2\alpha+1})} \leq C \| f(x^2) x^{2a} \|_{L^p(\mathbb{R}_+, x^{2\alpha+1})} \end{equation*} provided that \begin{equation*} a<\frac{\alpha+1}{p'} \quad , \quad b<\frac{\alpha+1}{q} \end{equation*} and that \begin{equation*} \left(\frac{1}{q}-\frac{1}{p} \right)\left(\alpha +\frac12 \right) \le a+b \le \left( \frac{1}{q} -\frac{1}{p}\right) (\alpha+1) + \sigma. \end{equation*} \end{theorem} \begin{proof} First, notice that if condition $(H2)$ holds for a certain $0<\sigma_0<\alpha+1$, then it also holds for any $0<\sigma < \sigma_0$. Therefore, it suffices to prove the theorem in the case $a+b = (\frac{1}{q} - \frac{1}{p})(\alpha+1) + \sigma$, which in turn, by the conditions above, implies $\sigma \ge -\frac12 \left( \frac{1}{q}-\frac{1}{p}\right)$. Let $K_\sigma (x) := x^{2(\sigma-\alpha-1)}$, $F(y)=f(y^2)$ and recall that $$|M_{\alpha,m,\rho}f(x^2)| \le C (|F|\star |K_\sigma|)(x)$$ where $\star$ denotes the generalized Euclidean convolution defined by \eqref{gen-eucl}. We begin by computing the generalized Euclidean translation of $K_\sigma$ given by \eqref{gen-trans}.
Making the change of variables $$ t=\cos \theta \Rightarrow dt = - \sin \theta \, d\theta = - \sqrt{1-t^2} \, d\theta $$ we see that $$ \tau_x^E K_\sigma(y)= C(\alpha) \int_{-1}^1 (x^2+y^2-2xyt)^{\sigma-\alpha-1} (1-t^2)^{\alpha-\frac12} \; dt .$$ Following the notation of our previous work \cite{ddd}, if we let $$ I_{\gamma,k}(r):= \int_{-1}^1 \frac{(1-t^2)^k}{(1-2rt+r^2)^{\frac{\gamma}{2}}}\; dt, $$ then $$ \tau_x^E K_\sigma(y)= C(\alpha) y^{2(\sigma-\alpha-1)} I_{2(1+\alpha-\sigma), \alpha-\frac12}\left( \frac{x}{y} \right)$$ and, therefore, \begin{align} \nonumber K_\sigma \star F(x) & = C \int_0^\infty y^{2(\sigma-\alpha-1)} I_{2(1+\alpha-\sigma),\alpha-\frac12}\left( \frac{x}{y} \right) F(y) y^{2\alpha+1} dy \\ & = C \int_0^\infty y^{2\sigma} I_{2(1+\alpha-\sigma),\alpha-\frac12}\left( \frac{x}{y} \right) F(y) \frac{dy}{y} \label{star} \end{align} Now, \begin{align*} \|M_{\alpha,m,\rho}f(x^2) x^{-2b} \|_{L^q(\mathbb{R}_+, x^{2\alpha+1})} & \le C \| [ K_\sigma \star F(x)] x^{-2b} \|_{L^q(\mathbb{R}_+,x^{2\alpha+1})} \\ & = C \left( \int_0^\infty | K_\sigma \star F(x) x^{-2b} |^q x^{2\alpha+1} \; dx \right)^{\frac{1}{q}} \\ & = C \left( \int_0^\infty \left| K_\sigma \star F(x) x^{\frac{2\alpha+2}{q}-2b} \right|^q\; \frac{dx}{x} \right)^{\frac{1}{q}} \end{align*} but, by \eqref{star}, \begin{align*} [ K_\sigma & \star F(x) ] x^{\frac{2\alpha+2}{q}-2b} \\ & = C \int_0^\infty y^{2\sigma} x^{\frac{2\alpha+2}{q}-2b} I_{2(1+\alpha-\sigma),\alpha-\frac12}\left( \frac{x}{y} \right) F(y) \frac{dy}{y} \\ & = C \int_0^\infty \left(\frac{y}{x}\right)^{-[\frac{2\alpha+2}{q}-2b]} I_{2(1+\alpha-\sigma),\alpha-\frac12}\left( \frac{x}{y} \right) F(y) y^{2\sigma+\frac{2\alpha+2}{q}-2b} \frac{dy}{y} \\ & = [ y^{\frac{2\alpha+2}{q}-2b} I_{2(1+\alpha-\sigma),\alpha-\frac12}(y) * F(y) y^{2\sigma+\frac{2\alpha+2}{q}-2b} ](x) \end{align*} where $*$ denotes the convolution in $\mathbb{R}_+$ with respect to the Haar measure
$\frac{dx}{x}$. Then, by Young's inequality: \begin{align*} \|M_{\alpha,m,\rho} & f(x^2) x^{-2b} \|_{L^q(\mathbb{R}_+, x^{2\alpha+1})} \\ & \leq C \| F(x) x^{2\sigma+\frac{2\alpha+2}{q}-2b} \|_{L^p\left(\frac{dx}{x} \right)} \| x^{\frac{2\alpha+2}{q}-2b} I_{2(1+\alpha-\sigma),\alpha-\frac12}(x) \|_{L^{s,\infty}(\frac{dx}{x})} \end{align*} provided that: \begin{equation} \label{youngl} \frac{1}{p}+\frac{1}{s}=1+\frac{1}{q}. \end{equation} Since we are assuming that $a+b=\left( \frac{1}{q}-\frac{1}{p}\right)(\alpha +1)+\sigma$, we have that \begin{align*} \| F(x) x^{2\sigma+\frac{2\alpha+2}{q}-2b} \|_{L^p\left(\frac{dx}{x} \right)} & = \left( \int_0^\infty | F(x) x^{2\sigma+\frac{2\alpha+2}{q}-2b} |^p \; \frac{dx}{x}\right)^{\frac{1}{p}} \\ & = \left( \int_0^\infty | F(x) x^{2a+\frac{2\alpha+2}{p}} |^p \; \frac{dx}{x}\right)^{\frac{1}{p}} \\ & = \| F(x) x^{2a} \|_{L^p(\mathbb{R}_+,x^{2\alpha+1})} \\ & = \| f(x^2) x^{2a} \|_{L^p(\mathbb{R}_+,x^{2\alpha+1})} \end{align*} whence, to conclude the proof of the theorem it suffices to see that $$\| x^{\frac{2\alpha+2}{q}-2b} I_{2(1+\alpha-\sigma),\alpha-\frac12}(x) \|_{L^{s,\infty}(\frac{dx}{x})} < +\infty. $$ For this purpose, we shall use the following lemma, which is a generalization of our previous result \cite[Lemma 4.2]{ddd}. The first part of the proof is the same as in that lemma, but it is included here for the sake of completeness: \begin{lemma} Let $$ I_{\gamma,k}(r)= \int_{-1}^1 \frac{(1-t^2)^k}{(1-2rt+r^2)^{\frac{\gamma}{2}}}\; dt. $$ Then, for $r \sim 1$ and $k >-1$, we have that $$ |I_{\gamma,k}(r)| \leq \left\{ \begin{array}{lcl} C_{\gamma,k} & \mbox{if} & \gamma<2k+2 \\ C_{\gamma,k} \log\frac{1}{|1-r|}& \mbox{if} & \gamma=2k+2 \\ C_{\gamma,k} |1-r|^{-\gamma+2k+2} & \mbox{if} & \gamma > 2k+2\\ \end{array} \right.
$$ \end{lemma} \begin{proof} Assume first that $k \in\mathbb{N}_0$ and $-\frac{\gamma}{2}+k > -1$. Then, $$ I_{\gamma,k}(1)\sim\int_{-1}^1 \frac{(1-t^2)^k}{(2-2t)^{\frac{\gamma}{2}}} \, dt \sim C \int_{-1}^1 \frac{(1-t)^k}{(1-t)^{\frac{\gamma}{2}}} \, dt. $$ Therefore, $I_{\gamma,k}$ is bounded. If $-\frac{\gamma}{2}+k =-1$, then $$ I_{\gamma,k}(r)\sim\int_{-1}^1 (1-t^2)^k \frac{d^k}{dt^k}\left\{(1-2rt+r^2)^{-\frac{\gamma}{2}+k}\right\}\, dt. $$ Integrating by parts $k$ times (the boundary terms vanish), $$ I_{\gamma,k}(r)\sim\left|\int_{-1}^1 \frac{d^k}{dt^k}\left\{(1-t^2)^k\right\} (1-2rt+r^2)^{-\frac{\gamma}{2}+k}\, dt\right|. $$ But $\frac{d^k}{dt^k}\left\{(1-t^2)^k\right\}$ is a polynomial of degree $k$ and therefore is bounded in $[-1,1]$ (in fact, it is up to a constant the classical Legendre polynomial). Therefore, $$ I_{\gamma,k}(r) \sim \frac{1}{2r} \log\left(\frac{1+r}{1-r}\right)^2 \le C \log \frac{1}{|1-r|}. $$ Finally, if $-\frac{\gamma}{2}+k <-1$, then integrating by parts as before, $$ I_{\gamma,k}(r)\le C_k\int_{-1}^{1}(1-2rt+r^2)^{-\frac{\gamma}{2}+k}\, dt. $$ Thus, $$ I_{\gamma,k}(r) \sim (1-2rt+r^2)^{-\frac{\gamma}{2}+k+1}|_{t=-1}^{t=1} \le C_{k,\gamma} |1-r|^{-\gamma+2k+2}. $$ This finishes the proof if $k\in \mathbb{N}_0$. Consider now the case $k=m+\nu$ with $m\in \mathbb{N}_0$ and $0<\nu<1$. Then, \begin{align*} I_{\gamma, k}(r) & = \int_{-1}^1 (1-t^2)^{\nu(m+1)+(1-\nu)m} (1-2rt + r^2)^{-\frac{\nu\gamma}{2}-\frac{(1-\nu)\gamma}{2}} \, dt \\ & \le I_{\gamma, m+1}^\nu(r) \, I_{\gamma, m}^{1-\nu}(r), \end{align*} where in the last line we have used H\"older's inequality with exponent $\frac{1}{\nu}$. If $\gamma<2m+2$, by the previous calculation $$ |I_{\gamma,k}(r)|\le C.
$$ If $\gamma > 2(m+1)+2$, then, by the previous calculation \begin{align*} |I_{\gamma,k}(r)| &\le C |1-r|^{\nu(-\gamma+2(m+1)+2)} |1-r|^{(1-\nu)(-\gamma+2m+2)} \\ &= C |1-r|^{-\gamma+2k+2}. \end{align*} For the case $2m+2 < \gamma < 2m+4 $, notice that we can always assume $r<1$, since $I_{\gamma,k}(r) = r^{-\gamma} I_{\gamma,k}(r^{-1})$. Then, as before, we can prove that $$ I_{\gamma,k}'(r) \le \gamma (1-r) I_{\gamma+2,k}(r). $$ But now we are in the case $\gamma + 2 > 2(m+1)+2$ and, thus, $$ |I_{\gamma+2,k}(r)| \le C |1-r|^{-\gamma+2k}. $$ Therefore, if $-\gamma+2k+1\neq -1$, \begin{align*} I_{\gamma,k}(r) &= \int_0^r I'_{\gamma,k}(s) \, ds \\ & \le C \int_0^r (1-s)^{-\gamma+2k+1} \, ds \\ & \le C |1-r|^{-\gamma+2k+2}, \end{align*} and if $-\gamma+2k+1= -1$, \begin{align*} I_{\gamma,k}(r) &\le C \int_0^r \frac{1}{1-s} \, ds \\ &= C \log \frac{1}{|1-r|}. \end{align*} It remains to check the case $k \in (-1, 0)$. For this purpose, write $$ I_{\gamma,k}(r) = \underbrace{\int_{-1}^0 \frac{(1-t^2)^{k}}{(1-2rt+r^2)^\frac{\gamma}{2}} \, dt}_{(i)} + \underbrace{\int_0^1 \frac{(1-t^2)^{k}}{(1-2rt+r^2)^\frac{\gamma}{2}} \, dt}_{(ii)}. $$ Since $\gamma>0$ and $k+1>0$, $$ (i) \le \int_{-1}^0 (1+t)^{k} \, dt = C $$ and, integrating by parts (the boundary term at $t=0$ is bounded), \begin{align*} (ii) & \le \int_0^1 \frac{(1-t)^{k}}{(1-2rt+r^2)^{\frac{\gamma}{2}}} \, dt \\ & = -\frac{1}{k+1} \int_0^1 \frac{\frac{d}{dt}[(1-t)^{k+1}]}{(1-2rt+r^2)^{\frac{\gamma}{2}}} \, dt \\ & \le C + \frac{\gamma r}{k+1} \int_0^1 \frac{(1-t)^{k+1}}{(1-2rt+r^2)^{\frac{\gamma}{2}+1}} \, dt \\ & \le C + C I_{\gamma+2, k+1}(r), \end{align*} and, since now $k+1>0$, $I_{\gamma+2, k+1}$ can be bounded as before. This concludes the proof of the lemma. \end{proof} Now we are ready to conclude the proof of Theorem \ref{teo-convolucion}.
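As a numerical aside (an illustrative sketch with ad hoc quadrature parameters), the regimes of the lemma above can be probed for $k=\frac12$: with $\gamma=2<2k+2$ the integral stays bounded near $r=1$ (in fact it equals $\pi/2$ for every $r\in(0,1)$, by the generating function of the Chebyshev polynomials $U_n$), the reflection identity $I_{\gamma,k}(r)=r^{-\gamma}I_{\gamma,k}(r^{-1})$ used in the proof holds, and with $\gamma=4>2k+2$ one sees blow-up like $|1-r|^{-1}$:

```python
import math

def I(gamma, k, r, n=20000):
    """Numerical I_{gamma,k}(r) by composite Simpson quadrature."""
    f = lambda t: (1 - t * t) ** k / (1 - 2 * r * t + r * r) ** (gamma / 2)
    h = 2.0 / n
    s = f(-1.0) + f(1.0) + sum((4 if i % 2 else 2) * f(-1.0 + i * h) for i in range(1, n))
    return s * h / 3

# bounded regime (gamma < 2k + 2): here I_{2,1/2}(r) = pi/2 for every r in (0,1)
print(I(2.0, 0.5, 0.5), math.pi / 2)

# reflection identity, which reduces everything to r < 1
lhs, rhs = I(4.0, 0.5, 0.9), 0.9 ** -4 * I(4.0, 0.5, 1 / 0.9)
print(lhs, rhs)

# blow-up regime (gamma > 2k + 2): I ~ |1-r|^{-gamma+2k+2} = |1-r|^{-1} here,
# so halving 1 - r should roughly double the value
ratio = I(4.0, 0.5, 0.98, 100000) / I(4.0, 0.5, 0.96, 100000)
print(ratio)
```

The finer grid in the last two calls is needed because the integrand develops a peak of width comparable to $(1-r)^2$ near $t=1$.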
Remember that we need to see that \begin{equation} \| x^{\frac{2\alpha+2}{q}-2b} I_{2(1+\alpha-\sigma),\alpha-\frac12}(x) \|_{L^{s,\infty}(\frac{dx}{x})} < +\infty. \label{norma} \end{equation} Using the previous lemma, it is clear that when $x\to 1$ and $2(\alpha+1-\sigma)\le 2(\alpha - \frac12)$ the norm in \eqref{norma} is bounded. In the case $2(\alpha+1-\sigma)> 2(\alpha - \frac12)$ (that is, $\sigma<\frac32$), the integrability condition is $$-s \left[ 2(\alpha+1-\sigma)-2 \left(\alpha-\frac12\right)-2 \right] \ge -1.$$ But, using \eqref{youngl}, we see that this is equivalent to $\sigma \ge -\frac12 \left( \frac{1}{q}-\frac{1}{p}\right)$, which holds by our assumption on $a+b$. When $x=0$, the integrability condition is $$ \frac{2\alpha+2}{q}-2b > 0, $$ which holds because $b<\frac{\alpha+1}{q}$. Finally, when $x\to\infty$, since $I_{2(\alpha+1-\sigma), \alpha-\frac12}(x)\sim x^{-2(\alpha+1-\sigma)}$, the condition we need to fulfill is $$ \frac{2\alpha+2}{q}-2b-2(\alpha+1-\sigma)<0, $$ which, by our assumption on $a+b$, is equivalent to $a<\frac{\alpha+1}{p'}$. \end{proof} \subsection{Extension to the case $-1<\alpha <0$ and end of proof of Theorem \ref{main-result}} As before, we may assume that $a+b= \left(\frac{1}{q}-\frac{1}{p}\right)(\alpha+1) +\sigma.$ In this case, to extend our result to the case $-1<\alpha <0$ let us consider $-1<\alpha<\beta$, where $\beta\ge 0$, and use a transplantation result from \cite{Garrigos}, which we recall here as a lemma for the sake of completeness: \begin{lemma}[\cite{Garrigos}, Corollary 6.19 (ii)] \label{lema-garrigos} Let $1<q<\infty$. Given $\alpha, \beta >-1$, we define the transplantation operator $$ \mathbb{T}_\beta^\alpha f = \sum_{k=0}^\infty \left( \int_0^\infty f(y) l_k^\alpha(y) y^{\alpha} \, dy \right) l_k^\beta.
$$ Then, if $\sigma_0 \in \mathbb{R}$ and $\sigma_1 = \sigma_0 + (\alpha -\beta)(\frac{1}{q} - \frac12)$, $\mathbb{T}_\beta^\alpha : L_{\sigma_0}^q (\mathbb{R}_+, x^\alpha \, dx) \to L_{\sigma_1}^q (\mathbb{R}_+, x^\beta \, dx)$ and $\mathbb{T}_\alpha^\beta : L_{\sigma_1}^q (\mathbb{R}_+, x^\beta \, dx) \to L_{\sigma_0}^q (\mathbb{R}_+, x^\alpha \, dx)$ are bounded operators if and only if $$ -\frac{1+\alpha}{q}<\sigma_0 <\frac{1+\alpha}{q^\prime}. $$ \end{lemma} Using this lemma, we can write \begin{align*} \|M_{\alpha,m}f |x|^{-b}\|_{L^q(\mathbb{R}_+,x^{\alpha} \, dx)} & = \|\mathbb{T}_\alpha^\beta (M_{\beta,m}(\mathbb{T}_\beta^\alpha f)) |x|^{-b}\|_{L^q(\mathbb{R}_+, x^\alpha \, dx)} \\ & \le C \|M_{\beta,m}(\mathbb{T}_\beta^\alpha f) |x|^{-\tilde b}\|_{L^q(\mathbb{R}_+, x^{\beta} \, dx)} \end{align*} provided that \begin{equation} -1<\alpha<\beta, \end{equation} \begin{equation} \label{btilde} -\tilde b = -b +(\alpha-\beta)\left(\frac{1}{q}-\frac12\right), \end{equation} and \begin{equation} \label{balfaq} -\frac{1+\alpha}{q}<-b<\frac{1+\alpha}{q'}, \end{equation} and, using Theorem \ref{teo-convolucion} for $M_{\beta,m}$ with $\beta \ge 0$, $$ \|M_{\beta,m}(\mathbb{T}_\beta^\alpha f) |x|^{-\tilde b}\|_{L^q(\mathbb{R}_+, x^{\beta} \, dx)} \le C\|\mathbb{T}_\beta^\alpha f |x|^{\tilde a}\|_{L^p(\mathbb{R}_+, x^{\beta} \, dx)} $$ provided that \begin{equation*} 0<\sigma<\beta+1 \quad, \quad \tilde a < \frac{\beta+1}{p'} \quad , \quad \tilde b <\frac{\beta+1}{q}, \end{equation*} \begin{equation} \label{desigtilde}
\left(\frac{1}{q}-\frac{1}{p}\right)\left(\beta+\frac12\right) \le \tilde a +\tilde b \end{equation} and that \begin{equation} \label{atildebtilde} \tilde a +\tilde b = \left(\frac{1}{q}-\frac{1}{p}\right)(\beta+1) + \sigma. \end{equation} Finally, using Lemma \ref{lema-garrigos} again, we obtain \begin{equation} \|M_{\alpha,m}f |x|^{-b}\|_{L^q(\mathbb{R}_+,x^{\alpha} \, dx)} \le C \|f |x|^a\|_{L^p(\mathbb{R}_+, x^\alpha \, dx)} \end{equation} provided that \begin{equation} \label{atilde2} \tilde a=a+(\alpha-\beta)\left(\frac{1}{p}-\frac{1}{2}\right) \end{equation} and that \begin{equation} \label{aalfap} -\frac{1+\alpha}{p}<a<\frac{1+\alpha}{p'}. \end{equation} Now, replacing \eqref{btilde} and \eqref{atilde2} into \eqref{desigtilde} and \eqref{atildebtilde} we obtain \begin{equation*} \left(\frac{1}{q}-\frac{1}{p}\right)\left(\alpha+\frac12\right) \le a + b \end{equation*} and \begin{equation} \label{res} a+b= \left(\frac{1}{q}-\frac{1}{p}\right)(\alpha+1) + \sigma. \end{equation} To conclude the proof of the theorem we need to see that the restrictions $a> -\frac{1+\alpha}{p}$ in \eqref{aalfap} and $b>-\frac{1+\alpha}{q'}$ in \eqref{balfaq} are redundant. Indeed, the first one follows from \eqref{res} and $b<\frac{\alpha+1}{q}$, while the second one follows from \eqref{res} and $a<\frac{\alpha+1}{p'}$. \section{Multipliers for related Laguerre systems} In this section we show how the results for multipliers for expansions in the Laguerre system $\{l^\alpha_k\}_{k\ge 0}$ can be extended to other related systems, using a transference result of I. Abu-Falahah, R. A. Mac\'ias, C. Segovia and J. L. Torrea \cite{AMST}.
To this end, for fixed $\alpha>-1$, we consider the orthonormal systems: \begin{enumerate} \item $\{\mathcal{L}_k^\alpha(y) := y^{\frac{\alpha}{2}} l_k^\alpha(y)\}_{k\ge 0}$ in $L^2(\mathbb{R}_+)$ \item $\{\varphi_k^\alpha(y) := \sqrt 2 y^{\alpha+\frac12} l_k^\alpha(y^2)\}_{k\ge 0}$ in $L^2(\mathbb{R}_+)$ \item $\{\psi_k^\alpha (y) := \sqrt 2 l_k^\alpha(y^2)\}_{k\ge 0}$ in $L^2(\mathbb{R}_+, y^{2\alpha+1} \, dy)$ \end{enumerate} which are eigenvectors of certain modifications of the Laguerre differential operator \eqref{laguerre}. Then, following the notation in \cite{AMST}, if we let $W^\alpha, V,$ and $Z^\alpha$ be the operators defined by \begin{equation*} W^\alpha f(y)=y^{-\frac{\alpha}{2}} f(y), \quad Vf(y)= (2y)^\frac12 f(y^2), \quad \text{and} \quad Z^\alpha f(y)= \sqrt 2 y^{-\alpha} f(y^2), \end{equation*} it is immediate that $W^\alpha \mathcal{L}_k^\alpha = l_k^\alpha$, $V \mathcal{L}_k^\alpha = \varphi_k^\alpha$, and $Z^\alpha \mathcal{L}_k^\alpha = \psi_k^\alpha$. Moreover, for $f$ a measurable function with domain in $\mathbb{R}_+$, the following result holds: \begin{lemma}[\cite{AMST}, Lemma 3.22] \label{lema-cambio} Let $\alpha>-1$. \begin{enumerate} \item Let $\delta = \rho -\alpha(\frac{p}{2}-1)$; then $\|W^\alpha f \|_{L^p(\mathbb{R}_+, y^{\rho+\alpha})}= \|f \|_{L^p(\mathbb{R}_+, y^\delta)}$. \item Let $2\delta = \gamma +\frac{p}{2}-1$; then $\|Vf\|_{L^p(\mathbb{R}_+, y^\gamma)}= 2^{\frac12 -\frac{1}{p}}\|f \|_{L^p(\mathbb{R}_+,y^\delta)}$. \item Let $\delta = \frac{\eta}{2} -\alpha(\frac{p}{2}- 1)$; then $\|Z^\alpha f \|_{L^p(\mathbb{R}_+, y^{\eta+2\alpha+1})}= 2^{\frac12 -\frac{1}{p}}\|f\|_{L^p(\mathbb{R}_+, y^\delta)}$. \end{enumerate} \end{lemma} In analogy to what we have done for the system $\{l_k^\alpha\}_{k\ge 0}$, we can also define multipliers of Laplace transform type for the orthonormal systems listed above.
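The identities in Lemma \ref{lema-cambio} are direct changes of variables and can be checked numerically. A minimal sketch of part (1), assuming SciPy is available (the test function and parameter values are arbitrary choices made only for illustration):

```python
import numpy as np
from scipy.integrate import quad

# Lemma (1): with delta = rho - alpha*(p/2 - 1),
# || W^alpha f ||_{L^p(y^{rho+alpha} dy)} = || f ||_{L^p(y^delta dy)}.
alpha, p, rho = 0.7, 3.0, 1.2          # arbitrary admissible parameters
delta = rho - alpha * (p / 2 - 1)

f = lambda y: np.exp(-y) * y**2        # an arbitrary nice test function
Wf = lambda y: y**(-alpha / 2) * f(y)  # (W^alpha f)(y) = y^{-alpha/2} f(y)

lhs = quad(lambda y: abs(Wf(y))**p * y**(rho + alpha), 0, np.inf)[0] ** (1 / p)
rhs = quad(lambda y: abs(f(y))**p * y**delta, 0, np.inf)[0] ** (1 / p)
# lhs and rhs agree, since the two weighted integrands coincide pointwise
```

The two norms coincide because $|y^{-\alpha/2}f(y)|^p\,y^{\rho+\alpha} = |f(y)|^p\,y^{\rho-\alpha(p/2-1)}$ pointwise.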
For instance, in the case of the system $\{\mathcal{L}_k^\alpha\}_{k\ge 0}$, if \begin{equation*} f(x) \sim \sum_{k=0}^\infty b_{\alpha,k}(f) \mathcal{L}_k^\alpha(x), \quad b_{\alpha,k}(f)= \int_0^\infty f(x) \mathcal{L}_k^\alpha(x) \, dx, \end{equation*} given a bounded sequence $\{m_k\}_{k\ge 0}$ we may define the multiplier \begin{equation*} M_{\alpha,m}^\mathcal{L} f(x) \sim \sum_{k=0}^\infty b_{\alpha,k}(f) m_k \mathcal{L}_k^\alpha(x), \end{equation*} and we say that $M_{\alpha,m}^\mathcal{L}$ is a multiplier of Laplace transform type if $m_k=m(k)$ is given by \eqref{Laplace.transform} for some real-valued function $\Psi(t)$. Similar definitions can be given for the systems $\{\varphi_k^\alpha\}_{k\ge 0}$ and $\{\psi_k^\alpha\}_{k\ge 0}$; we will denote the corresponding multipliers by $M_{\alpha,m}^\varphi$ and $M_{\alpha,m}^\psi$. Then, the following analogue of Theorem \ref{main-result} holds: \begin{theorem} \label{teo31} Assume that $\alpha>-1$. \begin{enumerate} \item If $M_{\alpha,m}^{\mathcal{L}}$ is a multiplier of Laplace transform type for the system $\{\mathcal{L}_k^\alpha\}_{k\ge 0}$ such that $(H1)$ and $(H2)$ hold, then \begin{equation*} \|M_{\alpha,m}^{\mathcal{L}} f\|_{L^q(\mathbb{R}_+, x^{-Bq})} \le C \|f\|_{L^p(\mathbb{R}_+, x^{Ap})} \end{equation*} provided that \begin{equation*} 1<p \le q<\infty \quad, \quad A< \frac{\alpha}{2}+\frac{1}{p'} \quad , \quad B<\frac{\alpha}{2}+\frac{1}{q}, \end{equation*} and that \begin{equation*} \left( \frac{1}{q}-\frac{1}{p}\right)(\alpha+1)<A+B \le \sigma \left(\frac{1}{q}-\frac{1}{p} \right).
\end{equation*} \item If $M_{\alpha,m}^\varphi$ is a multiplier of Laplace transform type for the system $\{\varphi_k^\alpha\}_{k\ge 0}$ such that $(H1)$ and $(H2)$ hold, then \begin{equation*} \|M_{\alpha,m}^\varphi f\|_{L^q(\mathbb{R}_+, x^{-Dq})} \le C \|f\|_{L^p(\mathbb{R}_+, x^{Cp})} \end{equation*} provided that \begin{equation*} 1<p \le q<\infty \quad, \quad C< \alpha + \frac{1}{p'}+\frac12 \quad , \quad D< \alpha+\frac{1}{q}+\frac12, \end{equation*} and that \begin{equation*} \left(\frac{1}{q}-\frac{1}{p}\right)(2\alpha+1) <C+D\le (2\sigma-1) \left(\frac{1}{q}-\frac{1}{p}\right). \end{equation*} \item If $M_{\alpha,m}^\psi$ is a multiplier of Laplace transform type for the system $\{\psi_k^\alpha\}_{k\ge 0}$ such that $(H1)$ and $(H2)$ hold, then \begin{equation*} \|M_{\alpha,m}^\psi f\|_{L^q(\mathbb{R}_+, x^{-Fq})} \le C \|f\|_{L^p(\mathbb{R}_+, x^{Ep})} \end{equation*} provided that \begin{equation*} 1<p \le q<\infty \quad, \quad E< 2\alpha + 1+ \frac{1}{p'} \quad , \quad F< \frac{1}{q}, \end{equation*} and that \begin{equation*} \left(\frac{1}{q}-\frac{1}{p}\right)(2\alpha+1) <E+F\le (2\sigma-1) \left(\frac{1}{q}-\frac{1}{p}\right). \end{equation*} \end{enumerate} \end{theorem} \begin{proof} We explain how to prove (1), since the other cases are analogous.
From the fact that $W^\alpha \mathcal{L}_k^\alpha =l_k^\alpha$ and by Lemma \ref{lema-cambio}(1), we have the following diagram \begin{equation*} \begin{array}{cccc} & L^p(\mathbb{R}_+, x^{ap+\alpha}) &\stackrel{M_{\alpha,m}}\longrightarrow & L^q(\mathbb{R}_+ , x^{-bq+\alpha}) \\ & (W^\alpha)^{-1} \Big\downarrow & & \Big\uparrow W^\alpha \\ & L^p(\mathbb{R}_+, x^{Ap}) & \stackrel{M_{\alpha,m}^\mathcal{L}}\longrightarrow & L^q(\mathbb{R}_+ , x^{-Bq}) \end{array} \end{equation*} provided that \begin{equation} \label{condicionAa} Ap = ap - \alpha \left( \frac{p}{2}-1 \right) \quad \mbox{and} \quad -Bq = -bq - \alpha\left( \frac{q}{2}-1\right), \end{equation} and $M_{\alpha, m} = W^\alpha M_{\alpha,m}^\mathcal{L} (W^\alpha)^{-1}$. Therefore, the identities \eqref{condicionAa} together with the conditions on $a,b$ given by Theorem \ref{main-result} imply the desired result. \end{proof} \section{Proof of Theorem \ref{teorema-hermite}} In this section we exploit the well-known relation between Hermite and Laguerre polynomials to obtain an analogue in the Hermite case of the result of Section 2. Indeed, recalling that \begin{align*} H_{2k}(x) & = (-1)^k 2^{2k} k! L_k^{-\frac12} (x^2) \\ H_{2k+1}(x) & = (-1)^k 2^{2k+1} k!
x L_k^{\frac12} (x^2), \end{align*} it is immediate that \begin{align*} h_{2k}(x) &= l^{-1/2}_k(x^2) \\ h_{2k+1}(x) & = x l^{\frac12}_k(x^2). \end{align*} It is then natural to decompose $f=f_0+f_1$, where $$ f_0(x)=\frac{f(x)+f(-x)}{2} \quad , \quad f_1(x)= \frac{f(x)-f(-x)}{2}, $$ and, clearly, when $k=2j$, if we let $g_0(y)= f_0(\sqrt{y})$ we obtain: $$ c_k(f) = \langle f_0, h_k \rangle = 2 \int_0^\infty f_0(x) l^{-\frac12}_j(x^2) \; dx = a_{-\frac12,j}(g_0), $$ while if $k=2j+1$, and we let $g_1(y)= \frac{1}{\sqrt{y}} f_1(\sqrt{y})$, we have: $$ c_k(f) = \langle f_1, h_k \rangle = 2 \int_0^\infty f_1(x) x l^{\frac12}_j(x^2) \; dx = a_{\frac12,j}(g_1). $$ Then, \begin{align*} M_{H,m} f (x) & = \sum_{j=0}^\infty m_{2j} a_{-\frac12,j}(g_0 ) l^{-\frac12}_j(x^2) + \sum_{j=0}^\infty m_{2j+1} a_{\frac12,j}( g_1) x l^{\frac12}_j(x^2) \\ & = M_{-\frac12,m_0} g_0(x^2) + x M_{\frac12, m_1} g_1(x^2), \end{align*} where $(m_0)_k=m_{2k}$ and $(m_1)_k = m_{2k+1}$. To apply Theorem \ref{main-result} to this decomposition, we first need to check that $m_0$ and $m_1$ are Laplace-Stieltjes transforms of certain functions $\Psi_0$ and $\Psi_1$. Indeed, notice that $m_{2k}= \mathfrak{L}\Psi_0(k)$, where $$\Psi_0(u)=\frac12 \Psi\left(\frac{u}{2}\right),$$ and $m_{2k+1}= \mathfrak{L} \Psi_1(k)$, where $$\Psi_1(u)= \frac12 \int_0^\frac{u}{2} e^{-\tau} d\Psi(\tau).$$ It is also easy to see that $\Psi_0$ satisfies the hypotheses of Theorem \ref{main-result} for $\alpha=-\frac12$, whereas $\Psi_1$ satisfies the hypotheses for $\alpha=\frac12$ (in this case condition $(H2)$ follows after an integration by parts).
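The classical Hermite-Laguerre identities recalled above are easy to check numerically. A minimal sketch, assuming SciPy is available, using its physicists' Hermite and generalized Laguerre evaluators:

```python
import numpy as np
from scipy.special import eval_hermite, eval_genlaguerre, factorial

x = np.linspace(0.1, 2.0, 7)
checks = []
for k in range(5):
    # H_{2k}(x) = (-1)^k 2^{2k} k! L_k^{-1/2}(x^2)
    checks.append(np.allclose(
        eval_hermite(2 * k, x),
        (-1)**k * 2**(2 * k) * factorial(k) * eval_genlaguerre(k, -0.5, x**2)))
    # H_{2k+1}(x) = (-1)^k 2^{2k+1} k! x L_k^{1/2}(x^2)
    checks.append(np.allclose(
        eval_hermite(2 * k + 1, x),
        (-1)**k * 2**(2 * k + 1) * factorial(k) * x
        * eval_genlaguerre(k, 0.5, x**2)))
```

For instance, $k=1$ gives $H_3(x)=8x^3-12x$ on the left and $-8x\left(\frac32-x^2\right)$ on the right.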
Then, \begin{align} \nonumber \|M_{H,m}f |x|^{-b}\|_{L^q(\mathbb{R})} & = \left( \int_{\mathbb{R}} |M_{H,m}f(x)|^q |x|^{-bq} \, dx\right)^{\frac{1}{q}} \\ & = C \left( \int_{\mathbb{R}} \left|M_{-\frac12,m_0} g_0(x^2) + x M_{\frac12, m_1} g_1(x^2) \right|^q |x|^{-bq} \, dx\right)^{\frac{1}{q}}. \label{Mh} \end{align} Using Minkowski's inequality and making the change of variables $y=x^2$, $dx= \frac12 y^{-\frac12} \, dy$, we see that \begin{align*} \eqref{Mh} &\sim \left( \int \left|M_{-\frac12,m_0} g_0(y)\right|^q |y|^{-\frac{bq}{2}-\frac12} \, dy \right)^{\frac{1}{q}} + \left( \int \left|M_{\frac12, m_1} g_1(y)\right|^q |y|^{\frac{(-b+1)q}{2}-\frac12} \, dy\right)^{\frac{1}{q}} \\ & = \| M_{-\frac12,m_0} g_0(y) |y|^{-\frac{b}{2}}\|_{L^q(\mathbb{R}_+, x^{-\frac12} \, dx)} + \|M_{\frac12, m_1} g_1(y) |y|^{\frac{-b+1}{2}-\frac{1}{q}} \|_{L^q(\mathbb{R}_+, x^\frac12 \, dx)} \\ &\le C \|g_0(y) |y|^{\tilde a}\|_{L^p(\mathbb{R}_+, x^{-\frac12} \, dx)} + C \|g_1(y) |y|^{\hat a}\|_{L^p(\mathbb{R}_+, x^{\frac12} \, dx)}, \end{align*} where the last inequality follows from Theorem \ref{main-result} provided that: \begin{equation*} \tilde a < \frac{1}{2p'} \quad , \quad b < \frac{1}{q}, \end{equation*} \begin{equation} \label{resc1} 0\le \tilde a +\frac{b}{2} \le \frac12 \left(\frac{1}{q}-\frac{1}{p} \right)+\sigma, \end{equation} \begin{equation*} \hat a < \frac{3}{2p'}, \end{equation*} and \begin{equation} \label{resc2} \left( \frac{1}{q}-\frac{1}{p} \right) \le \hat a + \frac{1}{q} -\frac{1-b}{2} \le \frac32 \left( \frac{1}{q}-\frac{1}{p}\right)+ \sigma.
\end{equation} Therefore, \begin{align*} \|M_{H,m}f |x|^{-b}\|_{L^q(\mathbb{R})} & \le C \left( \int |g_0(x)|^p |x|^{\tilde a p-\frac12} \, dx \right)^{\frac{1}{p}} + C \left( \int |g_1(x)|^p |x|^{\hat a p + \frac12} \, dx\right)^{\frac{1}{p}} \\ &= C \left( \int |f_0(\sqrt{x})|^p |x|^{\tilde a p-\frac12} \, dx \right)^{\frac{1}{p}} + C \left( \int |f_1(\sqrt{x})|^p |x|^{\hat a p + \frac12-\frac{p}{2}} \, dx\right)^{\frac{1}{p}} \\ & = C \left( \int |f_0(x)|^p |x|^{2\tilde a p} \, dx \right)^{\frac{1}{p}} + C \left( \int |f_1(x)|^p |x|^{2\hat a p + 2-p} \, dx\right)^{\frac{1}{p}} \\ & \le C \|f(x) |x|^a\|_{L^p(\mathbb{R})}, \end{align*} provided that \begin{equation} \label{aatildeahat} a = 2\tilde a = 2 \hat a + \frac{2}{p}-1. \end{equation} Therefore, by \eqref{aatildeahat} and the conditions on $\tilde a, \hat a$, there must hold $$ a<\frac{1}{p'}, $$ while, by \eqref{aatildeahat}, conditions \eqref{resc1} and \eqref{resc2} are equivalent to $$ 0 \le a+b \le \frac{1}{q} - \frac{1}{p}+ 2\sigma. $$ \begin{remark} It follows from the proof of Theorem \ref{teorema-hermite} that a better result holds if the function $f$ is odd. \end{remark} \section{Examples and further remarks} First, we should point out that, since a Stieltjes integral of a continuous function with respect to a function of bounded variation can be thought of as an integral with respect to the corresponding Lebesgue-Stieltjes measure, we could equivalently have formulated all our results in terms of integrals with respect to signed Borel measures on $\mathbb{R}_+$. However, we have found it convenient to use the framework of Stieltjes integrals, since many of the classical references on Laplace transforms are written in that framework (for instance \cite{Widder}), and we leave the details of a possible restatement of the theorems in terms of regular Borel measures to the reader.
We also recall that the Laplace-Stieltjes transform contains as particular cases both the ordinary Laplace transform of (locally integrable) functions (when $\Psi(t)$ is absolutely continuous) and Dirichlet series (see below). In particular, if $\Psi$ is absolutely continuous and $\phi(t)=\Psi^\prime(t)$ (defined almost everywhere), the assumptions $(H1)$ and $(H2)$ of Theorem \ref{main-result} can be replaced by: \begin{itemize} \item[(H1ac)] $$ \int_0^\infty |\phi(x)| \; dx < +\infty, \quad \hbox{i.e.} \; \phi \in L^1(\mathbb{R}_+); $$ \item[(H2ac)] there exist $\delta>0$, $0 < \sigma < \alpha+1$, and $C>0$ such that $$ \left|\int_0^t \phi(x) \; dx \right| \leq C t^{\sigma} \quad \hbox{for} \; 0 < t \leq \delta. $$ \end{itemize} In particular, assumption $(H2ac)$ holds if $\phi(t)=O(t^{\sigma-1})$ when $t \to 0$, since then $\left|\int_0^t \phi(x) \; dx\right| \le C \int_0^t x^{\sigma-1} \; dx = \frac{C}{\sigma}\, t^{\sigma}$ for small $t$. As we have already mentioned in the introduction, B. Wr\'obel \cite[Corollary 2.7]{W} has recently proved that Laplace type multipliers for the system $\{\varphi_k^\alpha\}_{k\ge 0}$ are bounded on $L^p(\mathbb{R}^d,\omega)$, $1<p<\infty$, for all $\omega \in A_p$ and $\alpha \in (\{-\frac12\} \cup [\frac12, \infty))^d$. In the case of power weights in one dimension this means that $\omega(x)=|x|^\beta$ must satisfy $-1<\beta<p-1$, while taking $p=q$ and letting the weight be $|x|^\beta$ on both sides, Theorem \ref{teo31}(2) can easily be seen to imply $-1-p\left(\alpha+\frac12 \right)<\beta<p-1+ p\left(\alpha +\frac12\right)$. Also, weighted estimates had been obtained before for some particular operators associated with the system $\{l_k^\alpha\}_{k\ge 0}$. Indeed, recall that one of the main examples of the kind of multipliers we are considering is the Laguerre fractional integral introduced in \cite{GST}, which corresponds to the choice $m_k=(k+1)^{-\sigma}$. In \cite[Theorem 4.2]{Nowak-Stempak}, A. Nowak and K.
Stempak considered multi-dimensional Laguerre expansions and used a slightly different definition of the fractional integral operator, given by the negative powers of the differential operator \eqref{laguerre}. As they point out, their theorem contains as a special case the result of \cite{GST} (in the one-dimensional case). To see that both operators are indeed equivalent, they rely on a deep multiplier theorem \cite[Theorem 1.1]{Stempak-Trebels}. Instead, we can see that Theorem \ref{main-result} is applicable to both definitions by choosing $$ m_k=(k+c)^{-\sigma}, \quad \phi(t)= \frac{1}{\Gamma(\sigma)} t^{\sigma-1} e^{-ct} \quad (c>0). $$ The case $c=1$ corresponds to the definition in \cite{GST}, whereas the choice $c=\frac{\alpha+1}{2}$ corresponds to the definition in \cite{Nowak-Stempak}. Therefore, Theorem \ref{main-result} applied to these choices coincides in the first case with the result of \cite[Theorem 1]{GT} (which is an improvement of \cite[Theorem 3.1]{GST}) and improves in the second case the one-dimensional result of \cite[Theorem 4.2]{Nowak-Stempak}. The same choice of $m_k$ and $\phi$ in Theorem \ref{teorema-hermite} gives a two-weight estimate for the Hermite fractional integral, which corresponds to the one-dimensional version of \cite[Theorem 2.5]{Nowak-Stempak}. Another interesting example is the operator $(L^2+I)^{-\frac{\alpha}{2}}$, where $L$ is given by \eqref{laguerre}. In this case, Theorem \ref{main-result} with hypotheses $(H1ac)$ and $(H2ac)$ instead of $(H1)$ and $(H2)$ applies with $\alpha= \sigma$ and $$ \phi(t)= \frac{1}{C_\alpha} e^{-\frac{\alpha+1}{2}t}J_{\frac{\alpha-1}{2}} (t) t^{\frac{\alpha-1}{2}}, $$ since, by \cite[formula 5, p. 386]{Watson}, $$ \int_0^\infty e^{-st} J_{\frac{\alpha -1}{2}}(t) t^{\frac{\alpha-1}{2}} \, dt = C_\alpha (s^2+1)^{-\frac{\alpha}{2}} $$ and, when $t\to 0$, $J_{\frac{\alpha-1}{2}} (t) t^{\frac{\alpha-1}{2}} \sim t^{\alpha-1}$.
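For the fractional-integral example above, the underlying Laplace transform identity can be verified directly. A minimal numeric sketch, assuming SciPy is available, and assuming that for absolutely continuous $\Psi$ the transform \eqref{Laplace.transform} reduces to $m(k)=\int_0^\infty e^{-kt}\phi(t)\,dt$:

```python
import math
from scipy.integrate import quad

# phi(t) = t^{sigma-1} e^{-c t} / Gamma(sigma); illustrative parameter values
sigma, c, k = 1.5, 1.0, 3.0
phi = lambda t: t**(sigma - 1) * math.exp(-c * t) / math.gamma(sigma)
val = quad(lambda t: math.exp(-k * t) * phi(t), 0, math.inf)[0]
# By the Gamma integral, val equals (k + c)**(-sigma)
```

Indeed, $\int_0^\infty e^{-kt}\,t^{\sigma-1}e^{-ct}\,dt = \Gamma(\sigma)(k+c)^{-\sigma}$, so the quadrature reproduces $m_k=(k+c)^{-\sigma}$.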
A further example is obtained by choosing $\Psi(t)=e^{-s_0t} H(t-\tau)$ with $s_0=\frac{\alpha+1}{2}$, where $H$ is the Heaviside unit step function: $$ H(t) = \left\{ \begin{array}{rcl} 1 & \hbox{if} & t \geq 0 \\ 0 & \hbox{if} & t < 0 \\ \end{array} \right. $$ and we see that Theorem \ref{main-result} is applicable to the heat diffusion semigroup (considered for instance in \cite{Stempak-heat-diffusion} and \cite{MST}) $$ M_{\tau} = e^{-\tau L} $$ associated to the operator $L$ for any $\sigma>0$. More generally, the same conclusion holds for $$\Psi(t)= \sum_{n=1}^\infty a_n e^{-s_0t} H(t-\tau_n ), $$ provided that the Dirichlet series $$ F(s)= \sum_{n=1}^\infty a_n e^{-\tau_n s}, \quad 0 < \tau_1 < \tau_2 < \ldots $$ converges absolutely for $s=s_0$ (which corresponds to hypothesis $(H1)$). As a final comment, we remark that finding a function $\Psi$ of bounded variation such that $m_k = \mathfrak{L} \Psi(k)$ holds (see \eqref{Laplace.transform}) is equivalent to solving the classical Hausdorff moment problem (see \cite[Chapter III]{Widder}). {\bf Acknowledgements.} We wish to thank Professor K. Stempak for bringing to our attention the connection between the generalized Euclidean convolution and our previous results on fractional integrals of radially symmetric functions, and for helpful comments and corrections. We are also indebted to Professor J. L. Torrea for pointing out to us that our results could be transferred from one Laguerre system to the others, and for giving us reference \cite{AMST}. \begin{thebibliography}{} \bibitem{AMST} I. Abu-Falahah, R. A. Mac\'ias, C. Segovia and J. L. Torrea, {\it Transferring strong boundedness among Laguerre orthogonal systems.} Proc. Indian Acad. Sci. Math. Sci. 119 (2009), 203--220. \bibitem{A} R. Askey, {\it Orthogonal polynomials and positivity}, In: Studies in Applied Mathematics, Wave propagation and special functions, SIAM (1970), 64--85. \bibitem{BT} B. Bongioanni, J. L.
Torrea, {\it Sobolev spaces associated to the harmonic oscillator.} Proc. Indian Acad. Sci. Math. Sci. 116 (2006), no. 3, 337--360. \bibitem{BT2} B. Bongioanni, J. L. Torrea, {\it What is a Sobolev space for the Laguerre function systems?} Studia Math. 192 (2009), no. 2, 147--172. \bibitem{ddd} P. L. De N\'apoli, I. Drelichman, R. G. Dur\'an, {\it On weighted inequalities for fractional integrals of radial functions}. To appear in Illinois J. Math. \bibitem{Garrigos} G. Garrig\'os, E. Harboure, T. Signes, J. L. Torrea, B. Viviani, {\it A sharp weighted transplantation theorem for Laguerre function expansions}. J. Funct. Anal. 244 (2007), no. 1, 247--276. \bibitem{GST} G. Gasper, K. Stempak, W. Trebels, {\it Fractional integration for Laguerre expansions}. Methods Appl. Anal. 2 (1995), 67--75. \bibitem{GT} G. Gasper, W. Trebels, {\it Norm inequalities for fractional integrals of Laguerre and Hermite expansions.} Tohoku Math. J. (2) 52 (2000), no. 2, 251--260. \bibitem{K} Y. Kanjin, {\it A transplantation theorem for Laguerre series}. Tohoku Math. J. (2) 43 (1991), no. 4, 537--555. \bibitem{KS} Y. Kanjin, E. Sato, {\it The Hardy-Littlewood theorem on fractional integration for Laguerre series.} Proc. Amer. Math. Soc. 123 (1995), no. 7, 2165--2171. \bibitem{MST} R. Mac\'ias, C. Segovia, J. L. Torrea, {\it Heat-diffusion maximal operators for Laguerre semigroups with negative parameters}. J. Funct. Anal. 229 (2005), no. 2, 300--316. \bibitem{Martinez} T. Mart\'inez, {\it Multipliers of Laplace Transform Type for Ultraspherical Expansions}. Math. Nachr. 281 (2008), no. 7, 978--988. \bibitem{Mc} J. McCully, {\it The Laguerre transform}. SIAM Rev. 2 (1960), 185--191. \bibitem{Nowak-Stempak} A. Nowak, K. Stempak, {\it Negative Powers of Laguerre Operators}. Preprint 2009, http://arxiv.org/abs/0912.0038 \bibitem{S} E. Sasso, {\it Spectral multipliers of Laplace transform type for the Laguerre operator}. Bull. Austral. Math. Soc. 69 (2004), no. 2, 255--266. \bibitem{Stein} E.
M. Stein, Topics in harmonic analysis related to the Littlewood-Paley theory. Annals of Mathematics Studies, No. 63, Princeton University Press, Princeton, N.J., 1970. \bibitem{Stempak-heat-diffusion} K. Stempak, {\it Heat diffusion and Poisson integrals for Laguerre expansions}. Tohoku Math. J. (2) 46 (1994), no. 1, 83--104. \bibitem{Stempak-Trebels} K. Stempak and W. Trebels, {\it On weighted transplantation and multipliers for Laguerre expansions}, Math. Ann. 300 (1994), 203--219. \bibitem{Sz} G. Szeg\"o, Orthogonal Polynomials. American Mathematical Society Colloquium Publications, v. 23. American Mathematical Society, New York, 1939. \bibitem{Thangavelu} S. Thangavelu, Lectures on Hermite and Laguerre Expansions. Mathematical Notes, 42. Princeton University Press, Princeton, NJ, 1993. \bibitem{Watson} G. N. Watson, A treatise on the theory of Bessel functions. Reprint of the second (1944) edition. Cambridge Mathematical Library. Cambridge University Press, Cambridge, 1995. \bibitem{Widder} D. V. Widder, The Laplace Transform. Princeton Mathematical Series, v. 6. Princeton University Press, Princeton, N. J., 1941. \bibitem{W} B. Wr\'obel, {\it Laplace type multipliers for Laguerre function expansions of Hermite type}. Preprint 2010. \end{thebibliography} \end{document}
\begin{document} \title{Supplementary Material} \section{Comparing spline and bi-spline methods for denoising data} We compared the accuracy in learning the correct PDE when using 1-dimensional cubic splines versus cubic bi-splines for denoising data and approximating partial derivatives (Figures \ref{box_whisker_diffadv_splines},\ref{box_whisker_fisher_splines},\ref{box_whisker_fisher_nonlin_splines}). We found that PDE-FIND with pruning always has a higher TPR value when using bi-spline computations as compared to 1-dimensional splines. \begin{figure} \caption{TPR values for the diffusion-advection equation when using 1-dimensional cubic splines versus cubic bi-splines for denoising data and approximating partial derivatives.} \label{box_whisker_diffadv_splines} \end{figure} \begin{figure} \caption{TPR values for the Fisher-KPP equation when using 1-dimensional cubic splines versus cubic bi-splines for denoising data and approximating partial derivatives. } \label{box_whisker_fisher_splines} \end{figure} \begin{figure} \caption{TPR values for the nonlinear Fisher-KPP equation when using 1-dimensional cubic splines versus cubic bi-splines for denoising data and approximating partial derivatives. } \label{box_whisker_fisher_nonlin_splines} \end{figure} \section{PDE-FIND without pruning results} We found that using PDE-FIND without pruning results in learning the wrong equation when applied to data from biological transport models, even when no noise is added to the data. We evaluated accuracy, using the true positive ratio (TPR) as a metric, for the diffusion-advection (Figure \ref{box_whisker_diffadv_no_prune}), Fisher-KPP (Figure \ref{box_whisker_fisher_no_prune}), and nonlinear Fisher-KPP equations (Figure \ref{box_whisker_fisher_nonlin_no_prune}). 
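For reference, the TPR computation over library terms can be sketched as follows. This is a hypothetical helper, written assuming a Jaccard-style convention $\mathrm{TPR}=\mathrm{TP}/(\mathrm{TP}+\mathrm{FN}+\mathrm{FP})$; the term names and the exact convention used in the main text may differ:

```python
# Hypothetical helper: a Jaccard-style true-positive ratio over library terms,
# TPR = TP / (TP + FN + FP). The exact convention in the main text may differ.
def tpr(true_terms, learned_terms):
    tp = len(true_terms & learned_terms)   # terms correctly recovered
    fn = len(true_terms - learned_terms)   # true terms that were missed
    fp = len(learned_terms - true_terms)   # spurious terms that were learned
    return tp / (tp + fn + fp)

true_eq = {"u_x", "u_xx"}                  # e.g. u_t = -0.8 u_x + 0.01 u_xx
learned = {"u_x", "u_xx", "u^2 u_x"}       # a learned form with one spurious term
score = tpr(true_eq, learned)              # 2 correct out of 3 distinct terms
```

Under this convention, a learned equation scores 1 exactly when it contains all and only the true terms.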
For the diffusion-advection equation, we found that the TPR value of the final learned equation when using ANN approximations is higher for all values of $\sigma$ when using PDE-FIND with pruning than without pruning (Figure \ref{box_whisker_diffadv_no_prune}). In general, for small values of $\sigma$, we observed that pruning enables PDE-FIND to better learn the true equation when using spline and finite difference computations, but it harms the ability to learn the true equation for larger values of $\sigma$. For example, when using finite difference approximations, the median TPR value increases after pruning from TPR = 0.33 to 0.5 for $\sigma=0$. However, the TPR instead decreases from TPR = 0.33 to 0 at $\sigma=0.05$ and from TPR = 0.5 to 0 at $\sigma=0.10$. The median TPR value when using spline approximations increases from TPR = 0.3 to 0.5 at $\sigma=0$ and from TPR = 0.33 to 1 at $\sigma=0.01$. At $\sigma=0.10$, the median value decreases from TPR = 1.0 to 0.5. \begin{figure} \caption{TPR values for the diffusion-advection equation.} \label{box_whisker_diffadv_no_prune} \end{figure} For the Fisher-KPP equation, the median TPR value for PDE-FIND with the ANN computations always increases after using pruning (Figure \ref{box_whisker_fisher_no_prune}). The median value for PDE-FIND with finite difference computations increases for $\sigma=0, 0.01$, but decreases from TPR = 0.5 to 0 for $\sigma=0.05$ and from TPR = 0.45 to 0 for $\sigma=0.10$. The median TPR value for PDE-FIND with spline computations increases for $\sigma=0, 0.01, 0.05$, and 0.10, but decreases from TPR = 0.4 to 0 at both $\sigma=0.25$ and 0.50. Thus, pruning always helps PDE-FIND learn the true equation when using the ANN method, and helps the other computational methods for small noise levels.
\begin{figure} \caption{TPR values for the Fisher-KPP equation.} \label{box_whisker_fisher_no_prune} \end{figure} For the nonlinear Fisher-KPP equation, pruning always improved the median TPR value of the PDE-FIND method when using ANN approximations (Figure \ref{box_whisker_fisher_nonlin_no_prune}). When using finite difference approximations, the median TPR value increases for $\sigma=0, 0.01,$ and $0.05$. The median value decreases from TPR = 0.3 to 0 at $\sigma=0.10$ for finite difference approximations. When using spline approximations, the median TPR value increases when $\sigma=0$ and $0.01$. The median TPR value decreases from TPR = 0.3 to 0 at $\sigma=0.50$. While the median value is never TPR = 1 for this equation, these results suggest that pruning in general helps reduce the number of incorrect terms in the library. \begin{figure} \caption{TPR values for the nonlinear Fisher-KPP equation.} \label{box_whisker_fisher_nonlin_no_prune} \end{figure} \section{Tables of learned PDEs} This section contains tables of the final learned PDEs for data from each equation considered at a given noise level. The equation form is the one most commonly selected by the PDE-FIND method with pruning over the 1,000 different training-validation splits of $u_t$ and $\Theta$. The provided parameter values are the mean values of these parameters when the equation form was the final learned equation.
\begin{table} \centering \begin{tabular}{|c|c|c|} \hline & & \textbf{True Equation}\tabularnewline \hline & & $u_{t}=-0.8u_{x}+0.01u_{xx}$\tabularnewline \hline $\boldsymbol{\sigma}$ & \textbf{method} & \textbf{learned equation}\tabularnewline \hline 0 & FD & $u_{t}=-0.800u_{x}+0.010u_{xx}-0.000350u^{2}u_{x}$\tabularnewline \hline 0.01 & FD & $u_{t}=-0.800u_{x}+0.010u_{xx}$\tabularnewline \hline 0.05 & FD & $u_{t}=0$\tabularnewline \hline 0.10 & FD & $u_{t}=0$\tabularnewline \hline 0.25 & FD & $u_{t}=0$\tabularnewline \hline 0.50 & FD & $u_{t}=0$\tabularnewline \hline 0 & SP & $u_{t}=-0.808u_{x}+0.011u_{xx}+0.001u^{2}u_{x}+0.000u^{2}u_{xx}$\tabularnewline \hline 0.01 & SP & $u_{t}=-0.794u_{x}+0.012u_{xx}$\tabularnewline \hline 0.05 & SP & $u_{t}=-0.797u_{x}+0.012u_{xx}$\tabularnewline \hline 0.10 & SP & $u_{t}=-0.774u_{x}$\tabularnewline \hline 0.25 & SP & $u_{t}=-0.709u_{x}$\tabularnewline \hline 0.50 & SP & $u_{t}=0$\tabularnewline \hline 0 & ANN & $u_{t}=-0.809u_{x}+0.011u_{xx}$\tabularnewline \hline 0.01 & ANN & $u_{t}=-0.803u_{x}+0.011u_{xx}-0.000u^{2}u_{xx}+0.001u_{x}^{2}$\tabularnewline \hline 0.05 & ANN & $u_{t}=-0.810u_{x}+0.011u_{xx}$\tabularnewline \hline 0.10 & ANN & $u_{t}=-0.809u_{x}+0.010u_{xx}$\tabularnewline \hline 0.25 & ANN & $u_{t}=-0.796u_{x}+0.009u_{xx}$\tabularnewline \hline 0.50 & ANN & $u_{t}=-0.802u_{x}+0.008u_{xx}+0.001u_{x}^{2}$\tabularnewline \hline \end{tabular} \caption{Learned equations for the diffusion-advection equation.} \label{learned_eqns_diffadv} \end{table} \begin{table} \centering \begin{tabular}{|c|c|c|} \hline & & \textbf{True Equation}\tabularnewline \hline & & $u_{t}=0.02u_{xx}-10u^{2}+10u$\tabularnewline \hline \textbf{$\boldsymbol{\sigma}$} & \textbf{method} & \textbf{learned equation}\tabularnewline \hline 0 & FD & $u_{t}=0.020u_{xx}-9.994u^{2}+9.996u$\tabularnewline \hline 0.01 & FD & $u_{t}=-10.155u^{2}+9.951u$\tabularnewline \hline 0.05 & FD & $u_{t}=0$\tabularnewline \hline 0.10 & FD & $u_{t}=0$\tabularnewline \hline 0.25 & FD &
$u_{t}=0$\tabularnewline \hline 50 & FD & $u_{t}=0$\tabularnewline \hline 00 & SP & $u_{t}=0.020u_{xx}-9.993u^{2}+9.997u$\tabularnewline \hline 01 & SP & $u_{t}=0.020u_{xx}-9.972u^{2}+9.978u$\tabularnewline \hline 05 & SP & $u_{t}=-10.130u^{2}+9.926u$\tabularnewline \hline 10 & SP & $u_{t}=-10.088u^{2}+9.916u$\tabularnewline \hline 25 & SP & $u_{t}=0$\tabularnewline \hline 50 & SP & $u_{t}=0$\tabularnewline \hline 00 & ANN & $u_{t}=0.023u_{xx}-9.308u^{2}+9.533u$\tabularnewline \hline 01 & ANN & $u_{t}=0.020u_{xx}-9.972u^{2}+9.978u$\tabularnewline \hline 05 & ANN & $u_{t}=0.021u_{xx}-9.734u^{2}+9.837u$\tabularnewline \hline 10 & ANN & $u_{t}=0.022u_{xx}-9.287u^{2}+9.588u$\tabularnewline \hline \multirow{2}{*}{25} & \multirow{2}{*}{ANN}& $u_{t}=0.012u_{xx}-11.161u^{2}+12.537u$\tabularnewline & & $+0.071uu_{xx}-0.105u_{x}^{2}$\tabularnewline \hline \multirow{2}{*}{50} & \multirow{2}{*}{ANN} & $u_{t}=-0.016u_{x}+0.014u_{xx}-8.689u^{2}+12.180u$\tabularnewline & & $-0.034u^{2}u_{x}+0.077uu_{xx}-0.109u_{x}^{2}$\tabularnewline \hline \end{tabular} \caption{Discovered Equations for the Fisher-KPP Equation} \label{learned_eqns_fisher_kpp} \end{table} \begin{table} \centering \begin{tabular}{|c|c|c|} \hline & & \textbf{True Equation}\tabularnewline \hline & & $u_{t}=-10u^{2}+10u+0.02uu_{xx}+0.02u_{x}^{2}$\tabularnewline \hline \textbf{$\boldsymbol{\sigma}$} & \textbf{method} & \textbf{learned equation}\tabularnewline \hline 00 & FD & $u_{t}=0.000u_{xx}-9.996u^{2}+9.997u+0.020uu_{xx}+0.020u_{x}^{2}$\tabularnewline \hline 01 & FD & $u_{t}=-10.268u^{2}+10.211u$\tabularnewline \hline 05 & FD & $u_{t}=-9.532u^{2}+9.731u$\tabularnewline \hline 10 & FD & $u_{t}=0$\tabularnewline \hline 25 & FD & $u_{t}=0$\tabularnewline \hline 50 & FD & $u_{t}=0$\tabularnewline \hline 00 & SP & $u_{t}=0.001u_{xx}-10.013u^{2}+10.013u+0.019uu_{xx}+0.018u_{x}^{2}$\tabularnewline \hline 01 & SP & $u_{t}=0.006u_{xx}-9.821u^{2}+9.790u+0.018u_{x}^{2}$\tabularnewline \hline 05 & SP & 
$u_{t}=-10.393u^{2}+10.288u$\tabularnewline \hline 10 & SP & $u_{t}=-10.265u^{2}+10.195u$\tabularnewline \hline 25 & SP & $u_{t}=-10.146u^{2}+10.083u$\tabularnewline \hline 50 & SP & $u_{t}=0$\tabularnewline \hline 00 & ANN & $u_{t}=0.010u_{xx}-9.295u^{2}+9.238u-0.032uu_{xx}+0.017u_{x}^{2}$\tabularnewline \hline 01 & ANN & $u_{t}=-9.398u^{2}+9.390u+0.025u_{x}^{2}$\tabularnewline \hline 05 & ANN & $u_{t}=0.009u_{xx}-9.312u^{2}+9.246u-0.032uu_{xx}+0.017u_{x}^{2}$\tabularnewline \hline 10 & ANN & $u_{t}=-0.006u_{xx}-8.965u^{2}+9.140u+0.027u_{x}^{2}$\tabularnewline \hline 25 & ANN & $u_{t}=-0.010u_{xx}-7.928u^{2}+8.552u+0.034u_{x}^{2}$\tabularnewline \hline 50 & ANN & $u_{t}=0.286-0.026u_{xx}-5.419u^{2}+6.724u+0.045u_{x}^{2}$\tabularnewline \hline \end{tabular} \caption{Discovered Equations for the nonlinear Fisher-KPP Equation.} \label{learned_eqns_nonlinear_fisher_kpp} \end{table} \end{document}
\begin{document} \numberwithin{equation}{section} \numberwithin{figure}{section} \title{Stretched IDLA} \author{Noam Berger\footnote{The Hebrew University of Jerusalem and Technische Universit\"at M\"unchen}, Jacob J. Kagan\footnote{The Weizmann Institute of Science} , Eviatar B. Procaccia\footnotemark[\value{footnote}]} \maketitle \begin{abstract} We consider a new IDLA-type particle system model on the upper half planar lattice, resulting in an infinite forest covering the half plane. We prove that almost surely all trees are finite. \end{abstract} \section{Introduction} The model of Internal Diffusion Limited Aggregation (IDLA) was introduced by Meakin and Deutch \cite{meakin1986formation} as a model for some chemical reactions, particle coalescence and aggregation. IDLA was first studied rigorously by Diaconis and Fulton \cite{diaconis1991growth} and by Lawler, Bramson and Griffeath \cite{1992}. IDLA is a growth model, starting with a point aggregate $0\in{\mathbb{Z}}^2$, $A(0)=\{0\}$. At each step a particle exits the origin, performs a simple random walk (SRW) and stops at the first position outside the aggregate; this position is then added to the aggregate, i.e. $A(n+1)=A(n)\cup \{v_n\}$, where $v_n$ is the first exit position from $A(n)$ of a SRW starting at $0$. In \cite{1992}, Lawler, Bramson and Griffeath prove that the IDLA aggregate, suitably rescaled, converges to the Euclidean ball. Asselah and Gaudilli\`ere \cite{asselah2010note} and independently Jerison, Levine and Sheffield \cite{jerison2012logarithmic} recently proved the long-standing conjecture that the fluctuations from the Euclidean ball are at most logarithmic. In this paper we consider an IDLA process in continuous time, introduced to us by Itai Benjamini, which we call Stretched IDLA ($\text{SIDLA}$). This process starts with an infinite line. 
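For concreteness, the classical IDLA growth rule recalled above can be simulated in a few lines (an illustrative sketch only; the stretched model studied in this paper lives on an oriented half-plane lattice instead):

```python
import random

def idla(num_particles, seed=0):
    """Classical IDLA on Z^2: each particle starts at the origin, performs a
    simple random walk, and settles at the first site outside the aggregate."""
    rng = random.Random(seed)
    aggregate = {(0, 0)}
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(num_particles):
        x, y = 0, 0
        while (x, y) in aggregate:
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
        aggregate.add((x, y))   # first exit position joins the aggregate
    return aggregate

A = idla(200)
assert len(A) == 201    # the origin plus exactly one new site per particle
assert all(abs(x) + abs(y) <= 201 for (x, y) in A)
```

Plotting the resulting aggregate for a few thousand particles exhibits the approximately circular shape established in \cite{1992}.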
Every vertex on the line has a Poisson clock; every ring initiates an oriented SRW that can add an edge to the tree rooted at the vertex whose clock rang. We show that even though eventually all vertices are covered, all trees are finite almost surely. See Figures \ref{fig:finitetree} and \ref{fig:finitetree2} for two simulations of the process. The tree rooted at $0$ is colored red. By initiating the IDLA from an infinite line we lose the simplicity of a discrete process, but we gain ergodicity, which we use heavily in our analysis. Our main tool is coupling the $\text{SIDLA}$ to a first passage percolation model with exponentially increasing weights, and proving that all trees are finite in the percolation setting. A natural question that arises is the universality of the finite-tree property. In the last section we prove that all trees are finite in another first passage percolation model, with exponentially decreasing weights. Another interesting problem is to characterize the decay of the tree height. See Remark \ref{rem:zerner} for further discussion. \begin{figure} \caption{Simulations of the SIDLA process.} \end{figure} \subsection{General Notation} We consider the rotated $\mathbb{Z}^2$ lattice in the upper half-plane, re-scaled by $\sqrt 2$. From here on we abbreviate it ${\mathbb{H}}$, $${\mathbb{H}} = \{(x,y)\in{\mathbb{Z}}^2:x+y \in 2\cdot\mathbb{Z},\:y\ge0 \} .$$ Denote by $\theta_l =(-1,1)$ and $\theta_r = (1,1)$ the vectors spanning the lattice. Viewed as a directed graph, every site $v=(x,y)$ is connected to the sites $v+\theta_l=(x-1,y+1)$ and $v+\theta_r= (x+1,y+1)$. Denote by ${\mathcal{E}}$ the set of edges in ${\mathbb{H}}$. For a vertex $v = (x,y)$, denote the vertex height by $h(v) = y$. For an edge $e=(v,w)$, let $h(e)=\max\{h(v),h(w)\}$. It will also be useful to define the cone of $v$, $C(v)=\{v+i\theta_l+j\theta_r:i,j\in{\mathbb{N}}\cup\{0\}\}$; we write $e=(w,z)\in C(v)$ if $w,z\in C(v)$. 
This is the set of vertices and edges that can be reached from $v$ using directed edges. Finally we denote $\partial{\mathbb{H}} = \{(x,0):x\in2\cdot\mathbb{Z}\}$. See Figure \ref{fig:lattice} for a summary of the notation. \begin{figure} \caption{The oriented lattice.}\label{fig:lattice} \end{figure} A disjoint oriented rooted forest in ${\mathbb{H}}$ is a collection of rooted trees $\{T(v)\}_{v\in\partial{\mathbb{H}}}\subset{\mathcal{E}}$, such that for every $v\neq v'$, $T(v)\cap T(v')=\emptyset$, and every rooted tree $T(v)$ is the union of oriented paths of the form $(e_1=(x_1,x_2),e_2=(x_2,x_3),\ldots,e_n=(x_n,x_{n+1}))$ starting from $x_1=v\in\partial{\mathbb{H}}$ with $\forall i\le n,~x_{i+1}-x_i\in\{\theta_l,\theta_r\}$. For every tree $T(v)$ and a vertex $u\in{\mathbb{H}}$, if there exists some $w\in{\mathbb{H}}$ such that $(w,u)$ or $(u,w)$ is in $T(v)$, we abuse notation and say that $u\in T(v)$. Let $T$ be an oriented tree in a disjoint oriented rooted forest. Denote by $\partial T$ the edge boundary of $T$, i.e. $\partial T = \{e = (u,v)\in{\mathcal{E}}: e\notin T,\, u\in T\}$. Abbreviate $\partial ^n T$ the boundary of height $n$, i.e. $\partial ^n T=\{e\in\partial T, h(e)=n\}$. The height of a tree is denoted $h(T)=\sup_{e\in\partial T}\{h(e)\}$. Denote by $T^n$ the vertices of height $n$ in $T$, i.e. $T^n = \{v| h(v)=n, v\in T\}.$ For any set $A\subset{\mathbb{H}}$, let $\partial^{in}A=\{u\in A:\exists v\notin A, (u,v)\in{\mathcal{E}}\text{ or }(v,u)\in{\mathcal{E}}\}$. We call a homogeneous Poisson process $N(t)$ such that $N(t+\tau)-N(t)$ is distributed Poisson$(\lambda\tau)$ a Poisson clock of rate $\lambda$. When omitting the time of the Poisson clock we refer to the set of ring times, i.e. $N=\{t\in{\mathbb{R}}^+:\forall s<t,~N(t)>N(s)\}$. \subsection{SIDLA model description and general remarks} In this section we give a description of the $\text{SIDLA}$; the well-definedness is proved below. 
We construct the $\text{SIDLA}$ process on ${\mathbb{H}}$. Let $\mathcal{F}$ be the set of disjoint oriented rooted forests in ${\mathbb{H}}$, and let $\mathfrak{F}$ be the $\sigma$-algebra generated by the standard projection maps to ${\mathcal{E}}$. For every $t\ge 0$, let $\mathbb{P}ob_t$ be a measure on ${\mathcal{F}}$. The process starts with the empty forest, i.e. $\mathbb{P}ob_0(\forall v,\,T(v,0)=\emptyset)=1$. Assume $\mathbb{P}ob_t$ is defined and let $T(v,t)$ be the tree rooted at $v$ sampled from $\mathbb{P}ob_t$. At each site $v\in\partial{\mathbb{H}}$ place an independent Poisson clock of rate $1$. Given that a ring occurred at time $t_0>t$, an edge $e = (u_1,u_2)$ is adjoined to the tree according to the following law: \begin{equation*} \mathbb{P}ob_{t_0}(T(v,t_0)=T(v,t_0^-)\cup e) = \begin{cases} 2^{-h(u_2)} & \text{if } u_1\in T(v,t_0^-)\text{ and } u_2\notin\bigcup_{\overset{v'\in\partial{\mathbb{H}}}{v'\neq v}}T(v',t_0)\\ 0 & \text{otherwise} \end{cases}, \end{equation*} and for every $e\neq e'$, $\mathbb{P}ob_{t_0}(T(v,t_0)=T(v,t_0^-)\cup e\cup e')=0$, where \[t_0^-=t_0^-(v)=\sup\{s>0:s<t_0, \text{clock at site }v\text{ rang at time }s\} .\] This process can be described intuitively in terms of particles: each time $t_{0}$ the clock at a vertex $u\in\partial{\mathbb{H}}$ rings, a particle is created and starts an instantaneous oriented random walk subject to the following law: \begin{enumerate} \item Being at a vertex $v\in T(u,t_0^-)$, the particle chooses one of the neighbours $v+\theta_r$ and $v+\theta_l$, each with probability $\frac{1}{2}$; call the choice $a$. \item If $a$ is free, the particle occupies the edge $(v,a)$. \item If $a\in \bigcup_{\overset{x\in\partial{\mathbb{H}}}{x\neq u}} T(x,t_0)$, or if $a\in T(u,t_0^-)$ but $(v,a)\notin T(u,t_0^-)$, the particle vanishes. \item Else it continues as described in (1) from the newly reached vertex. 
\end{enumerate} Since the process is defined in continuous time, the question of well-definedness arises. However, the geometry of ${\mathbb{H}}$ greatly simplifies the matter. \begin{lem} The process is well defined and $\mathbb{P}ob_t$ converges strongly to a measure $\mathbb{P}ob$ on disjoint oriented forests. \end{lem} \begin{proof} Each edge $e\in{\mathcal{E}}$ can a priori be reached only by a finite number of trees, i.e. $|\{v\in\partial{\mathbb{H}}:e\in C(v)\}|=h(e)$. For every $t>0$ we can order the rings of the Poisson clocks associated to this set of trees up to time $t$. For each ring we have an oriented random walk path, and $e$ can be joined to at most one tree. The well-definedness of the process for every $t\ge0$ follows. If some edge $e\in{\mathcal{E}}$ is contained in some tree $T(v,t)$, then for every $s>t$, $e\in T(v,s)$ $\mathbb{P}ob_s$-a.s. Thus the limit $\lim_{t\rightarrow\infty}\mathbb{P}ob_t$ exists. Abbreviate the limiting measure $\mathbb{P}ob$. \end{proof} Let $T(v)=\lim_{t\rightarrow\infty}T(v,t)$. We can now state the main result of this paper: \begin{thm}\label{thm:main} $\mathbb{P}ob(\forall v\in\partial{\mathbb{H}}, |T(v)|<\infty)=1$. \end{thm} \begin{rem} Note that every vertex in ${\mathbb{H}}$ is reached at a finite time a.s. We use this remark in Corollary \ref{infforstmom}, which states that the expected height of a tree under $\mathbb{P}ob$ is infinite. \end{rem} \subsection{First passage percolation} In this section we define a first passage percolation (FPP) model. In the next section we will couple the $\text{SIDLA}$ with the FPP defined in this section. Assign to each edge $e\in{\mathcal{E}}$ a weight ${\omega}(e)\sim\exp\left(2^{-h(e)}\right)$, independently of all other edges. We denote the measure on $[0,\infty]^{\mathcal{E}}$ so constructed by $\mathbb{P}$. For every oriented path $\gamma=(e_1,e_2,\ldots,e_n)$ in ${\mathbb{H}}$, the length of $\gamma$ is defined to be $\lambda(\gamma)=\sum_{i=1}^n{\omega}(e_i)$. For every two points $x,y\in{\mathbb{H}}$ such that $x\in C(y)$ or $y\in C(x)$, let \[ d_{\omega}(x,y)=\min_{\gamma:x\rightarrow y}\lambda(\gamma) ,\] where the minimum is over all finite oriented paths in ${\mathbb{H}}$ connecting $x$ and $y$. For a point $x\in{\mathbb{H}}$ and a set $A\subset{\mathbb{H}}$ connected to $x$ by an oriented path, let $d_{\omega}(x,A)=\inf_{y\in A}d_{\omega}(x,y)$. \begin{definition} For a vertex $x\in\partial{\mathbb{H}}$, let $\hat{T}(x)=\bigcup_{y\in{\mathbb{H}}}\{\gamma|\gamma \text{ is oriented, }\gamma:x\rightarrow y,\lambda(\gamma)=d_{\omega}(y,\partial {\mathbb{H}})\}$, i.e. the union of all oriented paths starting at the vertex $x$ that minimise the distance from points $y\in{\mathbb{H}}$ to $\partial{\mathbb{H}}$. \end{definition} \begin{rem} The uniqueness of the path $\gamma:x\rightarrow y$ such that $\lambda(\gamma)=d_{\omega}(y,\partial{\mathbb{H}})$ follows from the independence and continuity of the distribution of $\{{\omega}(e)\}_{e\in{\mathcal{E}}}$. \end{rem} \begin{rem} Since $\mathbb{P}$ is a function of i.i.d. random variables, $\mathbb{P}$ is ergodic under the shift $\theta:{\mathbb{H}} \rightarrow{\mathbb{H}}$ defined by $\theta(x)= x-\theta_l+\theta_r$. 
\end{rem} \section{Coupling $\text{SIDLA}$ with FPP}\label{sec:coupling} Given an FPP process with distribution $\mathbb{P}$, we construct a $\text{SIDLA}$ process by way of coupling. The construction amounts to associating with each $x\in\partial{\mathbb{H}}$ a set of Poisson clock rings and prescribing the trajectory of each particle. To this end we introduce an auxiliary set of independent Poisson clocks. Given an edge $e\in{\mathcal{E}}$, we associate with it a Poisson clock of rate $2^{-h(e)}$, which we abbreviate $\text{Poisson}(e)$, such that $\{\text{Poisson}(e)\}_e$ is an independent set of processes, independent also of the FPP measure $\mathbb{P}$. We assign a set of rings for $x$ and particle trajectories as follows: for each finite oriented path $\gamma\subseteq\hat{T}(x)\cup\partial \hat{T}(x)$, $\gamma=(e_1,\ldots,e_{l(\gamma)})$, originating at $x$ we assign the following rings: \begin{itemize} \item if $\gamma \subset \hat{T}(x)$, we assign the ring $\sum_{i=1}^{l(\gamma)} {\omega}(e_i)$, and the trajectory of the particle will be $\gamma$; \item if $\gamma \nsubseteq \hat{T}(x)$, we assign the sequence of rings $\sum_{i = 1}^{l(\gamma)} {\omega}(e_i) $, $\sum_{i = 1}^{l(\gamma)} {\omega}(e_i)+\text{Poisson}(e_{l(\gamma)})$, and for each ring in this sequence the particle will be assigned the path $\gamma$. \end{itemize} \begin{rem} Note that in the second case all the particles will vanish, as the vertex at the end of $\gamma$ will be reached sooner by a particle associated to the FPP tree containing it. \end{rem} We need to show that this construction results in a Poisson clock of the correct rate at every $v \in \partial{\mathbb{H}}$. We prove this by showing that the time differences between every two consecutive rings are independent and exponentially distributed with rate 1. The next lemma is a combinatorial property of finite oriented trees in ${\mathbb{H}}$. 
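Before the formal statement, the identity it asserts, namely $\sum_{i\ge 1} 2^{-i}|\partial^i T| = 1$ for a finite oriented tree rooted on $\partial{\mathbb{H}}$ (with the boundary taken over edges not belonging to the tree), can be checked numerically. A sketch, using a hypothetical random growth rule in which every added vertex keeps a unique parent:

```python
import random

def boundary_sum(vertices, parent):
    """Sum of 2^{-h(e)} over boundary edges e = (u, c): u in the tree,
    e not a tree edge; h(e) is the height of the upper endpoint c."""
    total = 0.0
    for (x, y) in vertices:
        for dx in (-1, 1):
            c = (x + dx, y + 1)
            if parent.get(c) != (x, y):   # (u, c) is not a tree edge
                total += 2.0 ** (-(y + 1))
    return total

def grow_tree(steps, seed):
    """Grow a random oriented tree rooted at (0, 0); each new vertex is
    attached to a single parent already in the tree."""
    rng = random.Random(seed)
    vertices = [(0, 0)]
    parent = {}
    while len(parent) < steps:
        x, y = rng.choice(vertices)
        c = (x + rng.choice((-1, 1)), y + 1)
        if c not in parent:               # fresh vertex: unique parent
            parent[c] = (x, y)
            vertices.append(c)
    return vertices, parent

for seed in range(10):
    verts, par = grow_tree(steps=25, seed=seed)
    assert abs(boundary_sum(verts, par) - 1.0) < 1e-9
```

The invariance is easy to see step by step: attaching a fresh vertex at height $h$ removes one boundary edge of weight $2^{-h}$ and creates two of weight $2^{-(h+1)}$.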
\begin{lem}\label{lem:combcrap} For every finite oriented tree $T$ in ${\mathbb{H}}$ with root $x\in\partial{\mathbb{H}}$ and height $n-1$, \[ \sum_{i=1}^n\frac{1}{2^i}|\partial^i T|=1 .\] \end{lem} \begin{proof} We prove the claim by induction on $n$. For $n=1$ the tree is empty, thus $|\partial ^1 T|=2$ and for every $i>1$, $|\partial^i T|=0$. We get $\frac{1}{2}\cdot2=1$. Now assume the claim holds for all smaller values of $n$, and let $T$ be a tree of height $n-1$. If $|\partial^1 T|=0$, denote by $T_r-\theta_r$ and $T_l-\theta_l$ the two subtrees of $T$ contained in $T\setminus\{x\}$, shifted to $\partial {\mathbb{H}}$. The subtrees are of height smaller than $n-1$, and for every $i\le n-1$, $|\partial^i T_r|+|\partial^i T_l|=|\partial^{i+1} T|$; thus by the induction hypothesis \begin{equation}\begin{aligned} \sum_{i=1}^n\frac{1}{2^i}|\partial^i T|=\sum_{i=1}^{n-1}\frac{1}{2^{i+1}}\left(|\partial^i T_r|+|\partial^i T_l|\right)=\frac{1}{2}+\frac{1}{2}=1. \end{aligned}\end{equation} If $|\partial^1 T|=1$, assume wlog $T_l=\emptyset$; by the induction hypothesis, \begin{equation}\begin{aligned} \sum_{i=1}^n\frac{1}{2^i}|\partial^i T|=\frac{1}{2}|\partial^1 T|+\sum_{i=2}^n\frac{1}{2^i}|\partial^i T|=\frac{1}{2}+\frac{1}{2}\sum_{i=1}^{n-1}\frac{1}{2^i}|\partial^i T_{r}|=1. \end{aligned}\end{equation} \end{proof} \begin{clm} The time differences between every two consecutive rings at any vertex $v$ are independent and are distributed exponentially with rate $1$. \end{clm} \begin{proof} By induction on the number of rings. The first ring happens at time $\min\{{\omega}(e_r),{\omega}(e_l)\}$, where ${\omega}(e_r) \sim \exp(1/2)$ and ${\omega}(e_l)\sim \exp(1/2)$; thus their minimum is distributed $\min\{{\omega}(e_r),{\omega}(e_l)\}\sim \exp(1)$. Induction step: assuming the first $n$ rings have occurred, we consider the $(n+1)$-st interval of ring times. 
$T(v,t)$ after the $n$-th ring consists of at most $n$ vertices and edges; in particular $|T(v,t)|<\infty$. Let $w'(e)$ be distributed according to $\mathbb{P}$, independently of ${\omega}$. By the memoryless property of the exponential distribution, the $(n+1)$-st interval between ring times is, by definition of the coupling, distributed as $\min_{e\in\partial T(v,t)}w'(e)$. We prove by an inner induction that \begin{equation}\begin{aligned} \mu_j=\min_{e\in\bigcup_{k=0}^{j}\partial^{n-k} T(v,t)}\{w'(e)\}\sim\exp\left(\frac{1}{2^{n-j}}\sum_{l=0}^{j}\frac{1}{2^{j-l}}\bigg|\partial ^{n-l}T(v,t)\bigg|\right) .\end{aligned}\end{equation} The base of the induction follows as $\mu_0$ is the minimum of $|\partial ^{n}T(v,t)|$ independent $\exp\left(\frac{1}{2^n}\right)$ random variables. Since \[\min_{e\in\partial^{n-j-1} T(v,t)}w'(e)\sim\exp\left(\frac{1}{2^{n-j-1}}\bigg|\partial ^{n-j-1}T(v,t)\bigg|\right),\] \begin{equation}\begin{aligned} \mu_{j+1}\sim\min\left\{\mu_j,\min_{e\in\partial^{n-j-1} T(v,t)}w'(e)\right\}\sim \exp\left(\frac{1}{2^{n-j-1}}\sum_{l=0}^{j+1}\frac{1}{2^{j+1-l}}\bigg|\partial ^{n-l}T(v,t)\bigg|\right) .\end{aligned}\end{equation} This proves the inner induction. We obtain by Lemma \ref{lem:combcrap} \begin{equation}\begin{aligned} \mu_n\sim\exp\left(\sum_{l=0}^{n}\frac{1}{2^{n-l}}\bigg|\partial ^{n-l}T(v,t)\bigg|\right)\sim\exp(1). \end{aligned}\end{equation} \end{proof} \section{Finite trees} In this section we prove the main result of this paper. \begin{thm}\label{thm:finitefpp} Given an FPP on ${\mathbb{H}}$ distributed according to $\mathbb{P}$, i.e. with weights $w(e)\sim\exp\left(2^{-h(e)}\right)$, almost surely all trees are finite, i.e. \[ \mathbb{P}(|\hat{T}(0)|<\infty)=1 .\] \end{thm} \begin{proof} Assume for the purpose of contradiction the existence of an infinite tree. Then by shift invariance, $\beta:=\mathbb{P}(|\hat{T}(0)|=\infty)>0$. 
Recall that $\hat{T}^m(x) = \{v| h(v)=m, v\in \hat{T}(x)\}.$ By the ergodic theorem we have \begin{equation}\begin{aligned} \frac{1}{2n+1}\sum_{x=-n}^{n}|\hat{T}^m(x)|{\mathbbm{1}}_{|\hat{T}(x)|=\infty}&\underset{n\rightarrow\infty}{\longrightarrow}{\mathbb{E}}\left[|\hat{T}^m(0)|\,\big|\,|\hat{T}(0)|=\infty\right]\cdot\mathbb{P}(|\hat{T}(0)|=\infty)\\ &=\beta\cdot{\mathbb{E}}\left[|\hat{T}^m(0)|\,\big|\,|\hat{T}(0)|=\infty\right]. \end{aligned}\end{equation} Since all the trees are oriented, for every $x\in\partial{\mathbb{H}}$ the tree $\hat{T}(x)$ resides in the cone $C(x)$. Thus \[ \frac{1}{2n+1}\sum_{x=-n}^{n}|\hat{T}^m(x)|{\mathbbm{1}}_{|\hat{T}(x)|=\infty}\le\frac{1}{2n+1}\sum_{x=-n}^{n}|\hat{T}^m(x)|\le\frac{2n+2m+1}{2n+1}\underset{n\rightarrow\infty}{\longrightarrow}1, \] and we get \[ {\mathbb{E}}\left[|\hat{T}^m(0)|\,\big|~|\hat{T}(0)|=\infty\right]\le\frac{1}{\beta} .\] Fix $\delta<1$ and let $D=\frac{1}{\beta\cdot\delta}$; by Markov's inequality \begin{equation}\begin{aligned} \mathbb{P}\left(|\hat{T}^n(0)|>D\,\big|\;|\hat{T}(0)|=\infty\right)\le\delta. \end{aligned}\end{equation} \begin{definition} A tree rooted at $v$ is called slim if $0<|\hat{T}^n(v)|<D$ for infinitely many $n$'s. We say that a tree is slim at level $k$ if $0<|\hat{T}^k(v)|<D$. \end{definition} $\hat{T}(0)$ is slim with probability greater than $1-\delta$ by the estimate \begin{equation}\begin{aligned} \mathbb{P}\left(\hat{T}(0)\text{ is not slim}\,\big|\;|\hat{T}(0)|=\infty\right)&= \mathbb{P}\left(\bigcup_{n=1}^{\infty}\bigcap_{m=n}^{\infty}\{|\hat{T}^m(0)|>D\}\,\big|\;|\hat{T}(0)|=\infty\right)\\ &=\mathbb{P}\left(\liminf_{n\rightarrow\infty}\{|\hat{T}^n(0)|>D\}\,\big|~ |\hat{T}(0)|=\infty\right)\\ &\le\liminf_n\mathbb{P}\left(|\hat{T}^n(0)|>D\,\big|\;|\hat{T}(0)|=\infty\right)\le\delta. \end{aligned}\end{equation} By assuming the existence of infinite trees we thus obtain a positive density of slim trees. We will reach a contradiction by showing that the probability of a tree being slim is $0$. 
\begin{definition} Let $r_{n}=(\max\{s:(s,n)\in \hat{T}^n(0)\}+2,n)$ be the vertex to the right of $\hat{T}^{n}(0)$, and let $l_n$ be the analogous vertex to the left of $\hat{T}^{n}(0)$. For every $n\in{\mathbb{N}}$ denote $\Delta(n)={\mathbb{H}}\cap\text{Convex hull}\{l_{n},r_{n},l_{n}+\left(|\hat{T}^{n}(0)|+1\right)\theta_r\}$, the triangle based on $\hat{T}^{n}(0)\cup\{l_{n},r_{n}\}$. See Figure \ref{fig:deltak} for clarification. \end{definition} \begin{lem}\label{lem:stocdomln} For every $\kappa>1$, $\mathbb{P}( d_{\omega}(l_n,\partial{\mathbb{H}})>\kappa2^{n+1}|\sigma(\{{\omega}(e):e\in\bigcup^n_{i=1} \hat{T}^i(0)\}))\le\frac{1}{\kappa}<1$ a.s. \end{lem} \begin{proof} Let $w_i\sim\exp(2^{-i})$, with law $Q$, be independent of each other and of $\mathbb{P}$. We first prove by induction on $n$ that $d_{\omega}(l_n,\partial{\mathbb{H}})$ is stochastically dominated by $\sum_{i=1}^n w_i$. For $n=1$, if $\hat{T}^1(0)$ is $\{\theta_r\}$, then ${\omega}((0,\theta_l))>{\omega}((l_1-\theta_r,l_1))$. ${\omega}((l_1-\theta_r,l_1))$ is independent of $\hat{T}^1(0)$, and in particular $d(l_1,\partial{\mathbb{H}})$ is stochastically dominated by $w_1$. If $\hat{T}^1(0)$ is $\{\theta_l\}$ or $\{\theta_l,\theta_r\}$, then $d(l_1,\partial{\mathbb{H}})=\min\{{\omega}(-2,-2+\theta_l),{\omega}(-4,-4+\theta_r)\}$; both weights are independent of $\hat{T}^1(0)$, and in particular the minimum is dominated by $w_1$. Assume the claim for $l_{n-1}$. If $l_n=l_{n-1}+\theta_l$, then, since there is no oriented path connecting $\hat{T}^n(0)$ with the edge $(l_{n-1},l_n)$, the weight ${\omega}(l_{n-1},l_n)$ is independent of $\bigcup_{i=1}^n \hat{T}^i(0)$, and thus dominated by $w_n$. Since $d_{\omega}(l_n,\partial{\mathbb{H}})\le d(l_{n-1},\partial{\mathbb{H}})+{\omega}(l_{n-1},l_n)$, the claim follows by induction. If $l_n=l_{n-1}+\theta_r$, then $d_{\omega}(l_n,\partial{\mathbb{H}})<d_{\omega}(l_n-\theta_l,\partial{\mathbb{H}})+{\omega}(l_n-\theta_l,l_n)$ . 
Thus, conditioned on the weights of $\bigcup_{i=1}^{n-1} \hat{T}^i(0)$ and on the structure of the tree, we obtain that \begin{equation}\begin{aligned}\label{eq:stocdom} 0\le{\omega}(l_{n-1},l_n)\le{\omega}(l_n-\theta_l,l_n)+d(l_n-\theta_l,\partial{\mathbb{H}})-d(l_{n-1},\partial{\mathbb{H}}) .\end{aligned}\end{equation} Since the random variables on the RHS of \eqref{eq:stocdom} are independent (without the conditioning) of ${\omega}(l_{n-1},l_n)$, we obtain that ${\omega}(l_{n-1},l_n)$ is conditionally dominated by $w_n$. This is because for two independent random variables $X$ and $Y$ one has $\mathbb{P}ob(X>t|X<Y)\le\mathbb{P}ob(X>t)$. Thus we get by the induction hypothesis that $d_{\omega}(l_n,\partial{\mathbb{H}})\le d(l_{n-1},\partial{\mathbb{H}})+{\omega}(l_{n-1},l_n)$ is stochastically dominated by $\sum_{i=1}^n w_i$. Now \begin{equation}\begin{aligned} \mathbb{P}\left(d_{\omega}(l_{n},\partial{\mathbb{H}})>\kappa2^{n+1}\,\big|\,\sigma(\{{\omega}(e):e\in\bigcup^n_{i=1} \hat{T}^i(0)\})\right) &\le{ Q\left(\sum_{i=1}^{n}w_i>\kappa E_Q\left[\sum_{i=1}^{n}w_i\right]\right)}\\ &\le\frac{1}{\kappa}<1. \end{aligned}\end{equation} \end{proof} \begin{figure} \caption{Killing a slim tree.}\label{fig:deltak} \end{figure} Let $M_{n}=\max\{d_{\omega}(l_{n},\partial{\mathbb{H}}),d_{\omega}(r_{n},\partial{\mathbb{H}})\}$. Conditioned on the event that $\hat{T}(0)$ was slim at levels $n_1,\ldots,n_k$, with $n_{m+1}-n_m>D$ for $m=1,\ldots,k-1$ and $2^{n_{k}+1}>M_{n_{k-1}}$, we show that the probability that there exists a level $l\ge n_k+D$ at which the tree is slim is bounded away from 1. Every edge $e\in\Delta(n_k)$ has weight distribution ${\omega}(e)\sim\exp(2^{-h(e)})=\exp(2^{-n_k-l})$ with $0\leq l\leq D+1$. By the scaling property of the exponential distribution, ${\omega}(e)\sim2^{n_k}\exp(2^{-l})$. 
The idea is to show that with positive probability $\partial^{in} \Delta(n_k)\setminus \hat{T}^{n_k}(0)$ belongs to the union of the trees of $r_{n_k}$ and $l_{n_k}$, thus killing the tree rooted at $0$. To this end let $w_i\sim\exp(2^{-i})$ be independent of each other and of $\mathbb{P}$; we denote the measure so constructed by $Q$. By Lemma \ref{lem:stocdomln} (note that the conditioning is hiding in the notation $l_{n_k}$) we obtain that \begin{equation}\begin{aligned} \mathbb{P}(d_{\omega}(l_{n_{k}},\partial{\mathbb{H}})>M_{n_{k-1}}+\kappa2^{n_{k}+1})\le\mathbb{P}(d_{\omega}(l_{n_{k}},\partial{\mathbb{H}})>\kappa2^{n_{k}+1})\le\frac{1}{\kappa}<1. \end{aligned}\end{equation} With probability bounded away from zero, and independently of all the levels lower than $n_{k}$, all (finitely many) edges $e\in\Delta({n_k})$ will have weights larger than $\kappa2^{2D}{\mathbb{E}}[{\omega}(e)]\ge\kappa 2^{2D}2^{n_k}$, and all edges $e'=(x,y)$ with $x,y\in\partial^{in}\Delta({n_k})\setminus \hat{T}^{n_{k}}(0)$ will have weights smaller than ${\mathbb{E}}[{\omega}(e')]$. Under this event, for every edge $e\in\partial^{in}\Delta({n_k})\setminus \hat{T}^{n_{k}}(0)$, ${\omega}(e)\le 2^{n_k+D}$. This yields \[ \sum_{e\in\partial^{in}\Delta({n_k})\setminus \hat{T}^{n_{k}}(0)} {\omega}(e)\le 2D\cdot2^{n_k+D}<\kappa 2^{2D}2^{n_k} .\] By the choice of $n_k$ we obtain that under the previous event $M_{n_k}+\sum_{e\in\partial^{in}\Delta({n_k})\setminus \hat{T}^{n_{k}}(0)} {\omega}(e)$ is smaller than the weight of any single edge in $\Delta({n_k})$; thus any geodesic that hits $\Delta({n_k})$ will not connect to $\hat{T}^{n_k}(0)$. We get that no vertex of $\partial^{in}\Delta({n_k})\setminus \hat{T}^{n_{k}}(0)$ belongs to $\hat{T}(0)$. 
\end{proof} \begin{cor}\label{infforstmom} $\mathbf{E}[h(\hat{T}(0))]=\infty$. \end{cor} \begin{proof} Assume for the purpose of contradiction that $\mathbf{E}[h(\hat{T}(0))]<\infty$; thus \begin{equation}\begin{aligned} \sum_{i=1}^\infty\mathbb{P}(h(\hat{T}(0))\ge i)=\sum_{i=1}^\infty\mathbb{P}(h(\hat{T}(i))\ge i)\le\frac{1}{2}\sum_{i=-\infty}^\infty\mathbb{P}(h(\hat{T}(i))\ge |i|)<\infty. \end{aligned}\end{equation} By Borel-Cantelli, for all but finitely many $i$, $h(\hat{T}(i))<|i|$. Since all trees have finite height, there are then infinitely many vertices in $C(0)$ that are not covered a.s. This contradicts the construction of the $\text{SIDLA}$, under which every vertex is covered at a finite time a.s. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main}] By the coupling of Section \ref{sec:coupling}, $\mathbb{P}ob(|T(0)|<\infty)=\mathbb{P}(|\hat{T}(0)|<\infty)$. By Theorem \ref{thm:finitefpp}, $\mathbb{P}(|\hat{T}(0)|<\infty)=1$. \end{proof} \begin{rem}\label{rem:zerner} An interesting question that so far evades rigorous proof is that of the correct decay of the tree height. In \cite{MR1880239}, Zerner and Merkl studied a variant of the following forest model. Let $\mathbf{Z}$ be a measure on $\{0,1\}^{\mathcal{E}}$ defined as follows: for each vertex $v\in{\mathbb{H}}$ with $h(v)>0$, \begin{equation}\begin{aligned} &\mathbf{Z}((v,v-\theta_r)=1,(v,v-\theta_l)=0)=\frac{1}{2}\\ &\mathbf{Z}((v,v-\theta_r)=0,(v,v-\theta_l)=1)=\frac{1}{2} .\end{aligned}\end{equation} Zerner and Merkl proved that the heights of the trees under $\mathbf{Z}$ have a finite moment of order $\frac{1}{2}$, by coupling an exploration process that surrounds the trees with two independent simple random walks. The tree is bounded by the trajectories of the random walks until the first time they meet. Since the SIDLA process is coupled to an FPP model with exponentially increasing weights, the law of the SIDLA is very close to $\mathbf{Z}$. We conjecture that the SIDLA tree height has a finite moment of order $\frac{1}{2}-\epsilon$ for some small $\epsilon>0$. 
\end{rem} \section{Analogous result for a different FPP} Having seen the finite-trees result for the FPP with exponentially increasing weights, one may ask whether this phenomenon is preserved for different FPP measures, e.g. an FPP with exponentially decreasing weights. Let ${\mathbb{S}}$ be an FPP measure on ${\mathbb{H}}$ such that ${\omega}(e)\sim\exp\left(2^{h(e)}\right)$, and abbreviate \[S(x)=\bigcup_{y\in{\mathbb{H}}}\{\gamma,\text{ oriented }|\gamma:x\rightarrow y,\lambda(\gamma)=d_{\omega}(y,\partial {\mathbb{H}})\}.\] \begin{figure} \caption{FPP with decreasing weights.}\label{fig:idladecrease} \end{figure} \begin{thm} $S(0)$ is finite ${\mathbb{S}}$-a.s. \end{thm} \begin{proof} Denote $a = \min\{{\omega}((0,\theta_r)),{\omega}((0,\theta_l))\}$. Let $l$ be the minimal integer such that \[\sum_{i=l}^{\infty}\left(2^{-i}+i\cdot2^{-i}\right)<\frac{a}{3}.\] Consider $A^l(a)=\{0<v\in\partial{\mathbb{H}}\: |\: \sum_{i=0}^{l-1} {\omega}\left((v+i\cdot \theta_l,v+(i+1)\cdot \theta_l)\right)<a/3 \}.$ Note that by shift invariance this set is almost surely infinite. \begin{equation}\begin{aligned}\label{eq:posray} &{\mathbb{S}}\left(\sum_{i=l}^{\infty} {\omega}\left((v+i\cdot \theta_l,v+(i+1)\cdot \theta_l)\right)<a/3\right)\ge \\&{\mathbb{S}}\left(\bigcap_{i=l}^{\infty}\left\{{\omega}((v+i\cdot\theta_l,v+(i+1)\cdot \theta_l))<2^{-i}+i\cdot2^{-i}\right\}\right) \geq\ \prod_{i =l}^{\infty}\left(1-\frac{1}{i^2}\right)>0,\end{aligned}\end{equation} where the penultimate inequality follows from Chebyshev's inequality. For every $v\in A^l(a)$, the events $\{\sum_{i=l}^{\infty} {\omega}\left((v+i\cdot \theta_l,v+(i+1)\cdot \theta_l)\right)<a/3\}$ and \[\left\{\sum_{i=0}^{l-1} {\omega}\left((v+i\cdot \theta_l,v+(i+1)\cdot \theta_l)\right)<a/3\right\}\] are independent. 
Thus by \eqref{eq:posray}, almost surely there exists some $v\in A^l(a)$ such that \[ \sum_{i=0}^{\infty} {\omega}\left((v+i\cdot \theta_l,v+(i+1)\cdot \theta_l)\right)<\frac{2a}{3}<a,\] and thus the path $\bigcup_i\{v+i\cdot \theta_l\}$ is disjoint from $S(0)$. By a symmetrical argument there exists some $v'<0$ such that $\bigcup_i\{v'+i\cdot \theta_r\}$ is disjoint from $S(0)$; thus $S(0)$ is finite. \end{proof} \begin{rem} An interesting open question is that of finite trees in the i.i.d.\ case on ${\mathbb{H}}$, i.e. ${\omega}(e)\sim\exp\left(1\right)$. It has some relation to the Eden model on ${\mathbb{H}}$: similarly to the Eden model, each edge on the boundary of a tree is equally likely to be the next edge added. Under the coupling scheme of Section \ref{sec:coupling}, bigger trees grow at a greater rate than smaller trees. \end{rem} \section*{Acknowledgments} The authors wish to thank Itai Benjamini for suggesting this problem and for helpful discussions. One of the authors would like to thank Ohad Feldheim for a fruitful discussion. \end{document}
\begin{document} \title[A unified time scale for quantum chaotic regimes] {A unified time scale for quantum chaotic regimes} \author{Ignacio S. Gomez$^{1}$} \ead{[email protected]} \author{Ernesto P. Borges$^{1}$} \ead{[email protected]} \address{$ $ \\ $^1$Instituto de F\'{i}sica, Universidade Federal da Bahia, Rua Barao de Jeremoabo, 40170-115 Salvador--BA, Brazil} \begin{abstract} We present a generalised time scale for quantum chaos dynamics, motivated by nonextensive statistical mechanics. It recovers, as particular cases, the relaxation (Heisenberg) and the random (Ehrenfest) time scales. Moreover, we show that the generalised time scale can also be obtained from a nonextensive version of the Kolmogorov-Sinai entropy by considering the graininess of quantum phase space and a generalised uncorrelation between subsets of the phase space. Lyapunov and regular regimes for the fidelity decay are obtained as a consequence of a nonextensive generalisation of the $m$th point correlation function for a uniformly distributed perturbation in the classical limit. \end{abstract} \noindent{\it Keywords}: quantum chaos, time scales, Kolmogorov-Sinai entropy, fidelity. \section{\label{sec:intro}Introduction} The characteristic time scales are important indicators for describing the dynamics of quantum chaotic systems. They allow one to distinguish the regular behaviour (relaxation, or Heisenberg, time scale) from the chaotic behaviour (random, or logarithmic, or Ehrenfest, time scale) in such a way as to make the Correspondence Principle (CP) compatible with the discrete spectrum. One of their main features is that the quantum and classical descriptions tend to coincide within these time scales, making it possible to characterise the phenomena of relaxation, exponential sensitivity, etc.\ \cite{Ber89,Cas95,Gut90,Haa01,Sto99}.
The random time scale establishes the time interval for which the dynamics of a wavepacket is as random as the classical trajectory, exhibiting a spreading over the whole phase space \cite{Cas95}. The relaxation time scale, on the other hand, establishes the minimum time interval for determining a discrete spectrum. Some authors consider that the random time scale solves the apparent conflict between the CP and the quantum-to-classical transition in chaotic dynamics \cite{Cas95}. Other peculiarities of quantum systems with a chaotic behaviour concern the modelling of chaotic systems with a continuous spectrum by means of discretised ones. More precisely, the Kolmogorov-Sinai (KS) entropies of continuous and discrete chaotic systems tend to coincide over a certain finite time range. In this sense, the KS-entropy represents a robust indicator in the field \cite{Tab79,Wal82,Awr16,Lat00,Tir01,Cas05,Fal14,Mih14,GomLya17}. The coarse-graining of the quantum phase space, as a consequence of the Uncertainty Principle (UP), has an intimate relationship with quantum chaos time scales \cite{Ike93,Eng97,Jaq05,Ino08,Gom17}, and quantum extensions of the KS-entropy have been proposed \cite{Ben04,Cri93,Fal03}. Since the quantum chaos time scales must be compatible with the imitation of statistical properties of chaotic quantum systems by discretised models, the type of statistics underlying such descriptions becomes relevant. Tsallis nonextensive statistics is able to model the KS-entropy in a generalised way, for both regular and chaotic dynamics \cite{Tsa97}. Nonextensive statistics has been applied to a wide variety of systems and formalisms: structures in plasmas \cite{Guo13}, entangled systems \cite{Kim16}, relativistic formulations \cite{Oli16}, quantum tunnelling and chemical kinetics \cite{Aqu17}, mathematical structures \cite{Nivanen03,Bor04,Lob09,Tem11}. Many more examples can be found in \cite{tsallis-book}.
Numerical evidence of non-Boltzmannian chaotic behaviour has been reported in low-dimensional conservative systems \cite{Tir16,ruiz-et_al-jsm-2017,ruiz-et_al-pre-2017} and dissipative ones \cite{Tir09,tirnakli-tsallis-2016}. Some connections between quantum chaos and the nonextensivity formalism have been advanced \cite{Tsa02}, but general developments in relation to the characteristic time scales still seem to be absent. The goal of this paper is to provide a generalisation of the quantum chaos time scales derived from nonextensive statistics. This generalised time scale contains the relaxation and the random time scales as particular cases. In addition, we show how the generalised time scale can arise as a consequence of an extended version of the KS-entropy and the graininess of the quantum phase space. The paper is organised as follows. In Section \ref{sec:preliminaries} we provide the preliminaries about the formalism used. Section \ref{sec:quantum} is devoted to a brief review of the relaxation and the random quantum chaos time scales. In Section \ref{sec:generalized-time-scales}, we propose a generalised time scale that has the relaxation and random time scales as special cases, through a generalised KS-entropy and an asymptotical deformed uncorrelation. We also address the fidelity decay, and illustrate the generalised time scale with a kicked rotator with absorption. Next, in Section \ref{sec:time-domain}, we discuss a possible unified scenario in the time domain of quantum chaos. Finally, in Section \ref{sec:conclusions} we draw our conclusions and outline future directions. \section{\label{sec:preliminaries}Preliminaries} In the following we give the necessary elements for the development of the forthcoming sections.
\subsection{Kolmogorov-Sinai entropy} A dynamical system is any quadruple of the form $(\Gamma, \Sigma, \widetilde{\mu}, \{T_t \}_{t\in J})$, where $\Gamma$ is the phase space, $\Sigma$ is a $\sigma$-algebra, $\widetilde{\mu}:\Sigma \rightarrow [0, 1]$ is a normalised measure and $\{T_t \}_{t\in J}$ is a group \footnote{In some cases it is a semigroup, e.g., in discrete dynamical systems.} of measure-preserving transformations \footnote{For instance, in classical mechanics $T_t$ is the Liouville transformation.}; $J$ is typically the set of real numbers $\mathbb{R}$ for continuous dynamical systems and the integers $\mathbb{Z}$ for discrete ones. Dividing the phase space $\Gamma$ into $m$ small cells $A_i$ of measure $\widetilde{\mu}(A_i)$, the entropy of the resulting partition $Q=\{A_1,\ldots,A_m\}$ is \begin{eqnarray}\label{KS entropy} H(Q)=\sum_{i=1}^{m}\widetilde{\mu}(A_i)\log \frac{1}{\widetilde{\mu}(A_i)}. \end{eqnarray} Given two partitions $Q_1$ and $Q_2$, it is possible to obtain the refinement partition $Q_1 \vee Q_2 = \{a_i \cap b_j : a_i \in Q_1 , b_j \in Q_2 \}$ ($Q_1 \vee Q_2$ is a refinement of both $Q_1$ and $Q_2$). Starting from an arbitrary partition $Q$ of $\Gamma$, the entropy of the refinement partition $H(\vee_{j=0}^n T^{-j} Q)$ can be derived, with $T^{-j}$ the inverse of $T_j$ ($T^{-j} \equiv (T_j)^{-1}$), and $T^{-j} Q = \{T^{-j} a : a \in Q\}$. Fig. \ref{fig:refinement} depicts an elementary instance of how the refinement partition is constructed when the transformation is the $\pi/2$-rotation. Each column refers to a time-step (backwards from left to right) and each row refers to the graining (increasing number of elements of the partition from top to bottom).
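As an aside (not part of the original text), the refinement operation $Q_1\vee Q_2$ can be sketched numerically for interval partitions of $[0,1)$; the helper name \texttt{refine} and the example partitions are ours, chosen only for illustration:

```python
from fractions import Fraction

def refine(Q1, Q2):
    # Q1 v Q2 = { a ∩ b : a in Q1, b in Q2 }, with cells as (lo, hi) intervals;
    # empty intersections are discarded.
    out = []
    for a in Q1:
        for b in Q2:
            lo, hi = max(a[0], b[0]), min(a[1], b[1])
            if lo < hi:
                out.append((lo, hi))
    return out

half = Fraction(1, 2)
Q1 = [(Fraction(0), half), (half, Fraction(1))]
Q2 = [(Fraction(0), Fraction(1, 3)), (Fraction(1, 3), Fraction(1))]
R = refine(Q1, Q2)

# the refinement has 3 cells and still covers [0,1) exactly
assert len(R) == 3
assert sum(hi - lo for lo, hi in R) == 1
```

Exact rationals are used so that the total measure of the refinement can be checked without floating-point error.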
\begin{figure} \caption{\label{fig:refinement} Construction of the refinement partition under the $\pi/2$-rotation: each column corresponds to a backward time-step and each row to a finer graining.} \end{figure} \noindent The standard measure theory defines the KS-entropy as \cite{Wal82}: \begin{eqnarray} \label{entropy partition} h_{\textrm{\scriptsize KS}}(T) = \sup_{Q} \{\lim_{n\rightarrow\infty}\frac{1}{n} H(\vee_{j=0}^n T^{-j} Q)\}, \end{eqnarray} where the supremum is taken over all measurable initial partitions $Q$ of $\Gamma$. The definition of $h_{\textrm{\scriptsize KS}}(T)$ remains the same if $T^{-j}$ is replaced by $T^{j}$; we adopt $T^{-j}$, as is common in the literature. The KS-entropy is the supremum, over all ways of dividing the phase space allowed by the dynamics (by refinement of the partitions evolved backwards in time), of the entropy per time step as the number of time steps tends to infinity. Due to the rectangular symmetry of the time evolution operator $T$ of the example of Fig. \ref{fig:refinement}, any rectangular partition $Q=\{A_1,\ldots,A_m\}$ with all the $A_i$ of the same volume $\widetilde{\mu}(A_i)=1/m$ and $m=4^{l}$ ($l\in\mathbb{N}$, $l\geq1$) is invariant under $T^{-1}$, i.e. $Q=T^{-1}Q$ (second row of Fig. \ref{fig:refinement}, $l=1$). So $\vee_{j=0}^n T^{-j} Q=Q$ for all $n\in\mathbb{N}$, and then, using the fact that $T$ preserves the measure, we obtain $H(\vee_{j=0}^n T^{-j} Q)=H(Q)=\log m$ for all $n\in\mathbb{N}$. Hence $\lim_{n\rightarrow\infty}\frac{1}{n}H(\vee_{j=0}^n T^{-j} Q) = \lim_{n\rightarrow\infty}\frac{\log m}{n} = 0$, so taking the supremum over rectangular partitions $Q$ (having $m=4^l$ elements with $l\rightarrow\infty$) yields $h_{\textrm{\scriptsize KS}}(T)=0$. The same argument extends to every $\omega \pi$-rotation with rational $\omega$. In addition, in the context of information theory, Brudno's theorem states that the KS-entropy is the average unpredictability of information over all possible trajectories in the phase space.
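The vanishing entropy rate of the rotation example can be checked numerically. A minimal sketch (the helper name \texttt{partition\_entropy} is ours):

```python
import math

def partition_entropy(measures):
    # H(Q) = sum_i mu(A_i) log(1/mu(A_i)), as in eq. (KS entropy)
    return sum(mu * math.log(1.0 / mu) for mu in measures if mu > 0)

# Rectangular partition with m = 4^l equal cells, invariant under the
# pi/2-rotation: the refined partition after n backward steps is Q itself,
# so H(vee_{j<=n} T^{-j} Q) = H(Q) = log m for every n.
l = 3
m = 4 ** l
H = partition_entropy([1.0 / m] * m)
assert abs(H - math.log(m)) < 1e-12

# The entropy per time step H/n then vanishes as n grows: h_KS(T) = 0.
rates = [H / n for n in (10, 100, 1000)]
assert rates[0] > rates[1] > rates[2] and rates[2] < 0.01
```

The supremum over finer rectangular partitions only replaces $\log m$ by a larger constant, which still gives a vanishing rate.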
In turn, Pesin's theorem relates the KS-entropy to the exponential instability of motion given by the Lyapunov exponents, thus establishing $h_{\textrm{\scriptsize KS}}>0$ as a sufficient condition for chaotic motion \footnote{ Distinguishable levels of chaotic behaviour can occur, such as hyper-chaotic, hyper-hyper-chaotic, etc.\ \protect\cite{Awr16}. }. If the dynamical system has a characteristic time scale expressed by a dimensionless parameter $\kappa$, it follows from (\ref{KS entropy}) and (\ref{entropy partition}) that $h_{\textrm{\scriptsize KS}}$ can be written as (see Theorem 4.13 of \cite{Wal82}) \begin{eqnarray} \label{rescaled KS entropy} h_{\textrm{\scriptsize KS}}(T) = \frac{1}{\kappa}h_{\textrm{\scriptsize KS}}(T_{\kappa}), \end{eqnarray} where $h_{\textrm{\scriptsize KS}}(T_\kappa)$ is obtained from (\ref{KS entropy}) by replacing $T$ with $T_{\kappa}$, and $T_{\kappa}$ is the transformation $T_t$ for $t=\kappa$. This time rescaling allows one to connect the characteristic time scale $\kappa$ with the KS-entropy by means of (\ref{rescaled KS entropy}), a fact that will be relevant for obtaining the generalised time scale. \subsection{Deformed quantities from nonextensive statistical mechanics} Motivated by the generalised $q$-entropy \cite{Tsa88} \begin{eqnarray}\label{tsallis entropy} S_q=k \frac{\sum_{i=1}^{W}p_i^q-1}{1-q} \end{eqnarray} ($q \in \mathbb{R}$ is the entropic index, $k$ is a positive constant that defines the unit in which $S_q$ is measured, and $(p_1,\ldots,p_W)$ is a discretised probability distribution), the $q$-logarithm and the $q$-exponential functions (each the inverse of the other) are defined \cite{Tsa94}: \begin{eqnarray} \label{q-functions} \begin{array}{lll} \ln_{q}x &=& \frac{x^{1-q}-1}{1-q} \quad (x>0),\\ e_{q}(x) &=& [1+(1-q)x]_{+}^{\frac{1}{1-q}} \quad (x\in\mathbb{R}), \end{array} \end{eqnarray} where $[A]_+ =\max\{A,0\}$.
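A minimal numerical sketch of the $q$-deformed functions just defined (assuming $x>0$ and staying on the admissible domain of $e_q$):

```python
import math

def ln_q(x, q):
    # q-logarithm: ln_q x = (x^{1-q} - 1)/(1-q); q -> 1 recovers ln x
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def e_q(x, q):
    # q-exponential: e_q(x) = [1 + (1-q) x]_+^{1/(1-q)}; q -> 1 recovers e^x
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    return max(1.0 + (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))

# e_q and ln_q are mutual inverses on the admissible domain
q = 0.5
for x in (0.3, 1.0, 4.2):
    assert abs(e_q(ln_q(x, q), q) - x) < 1e-9

# q -> 1 recovers the ordinary logarithm
assert abs(ln_q(2.0, 1.0 - 1e-8) - math.log(2.0)) < 1e-6
```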
The $q$-entropy is then rewritten as \begin{eqnarray} \label{sq-with-qlog} S_q=k \sum_{i=1}^{W} p_i \ln_q (1/p_i). \end{eqnarray} It is straightforward to verify that \begin{eqnarray} \label{q-functions properties} \begin{array}{lll} \ln_{q}(xy) &=&\ln_{q}x+\ln_{q}y+(1-q)\ln_{q}x\ln_{q}y. \end{array} \end{eqnarray} This expression justifies $S_q$ being referred to as a \textit{nonadditive entropy}: the $q$-entropy of a system composed of two independent subsystems $A$ and $B$ ($p_{ij}(A+B) = p_i(A) p_j(B)$) is $ S_q(A+B) = S_q(A) + S_q(B) + \frac{1-q}{k} S_q(A) S_q(B) $. Equation (\ref{q-functions properties}) is one of the relations that triggers the generalisation of the usual algebraic operations \cite{Nivanen03,Bor04}: \begin{eqnarray}\label{q-algebra} \begin{array}{lll} x\oplus_{q} y &=& x+y+(1-q)xy, \\ x\ominus_{q} y &=& \frac{x-y}{1+(1-q)y} \quad (y\neq \frac{1}{q-1}),\\ x\otimes_{q} y &=& [x^{1-q}+y^{1-q}-1]_{+}^{\frac{1}{1-q}} \quad(x,y>0), \\ x\oslash_{q} y &=& [x^{1-q}-y^{1-q}+1]_{+}^{\frac{1}{1-q}} \quad (x,y>0), \end{array} \end{eqnarray} called $q$-sum, $q$-difference, $q$-multiplication, and $q$-division, respectively. One expression that stems from these generalised algebraic relations, and that plays a central role in the development to come, is \begin{eqnarray} \label{eq:ln_q-qproduct} \ln_q( x \otimes_q y ) = \ln_q x + \ln_q y. \end{eqnarray} \section{\label{sec:quantum}Quantum chaos time scales} Some classical chaos conditions, like a continuous spectrum and a continuous phase space, are difficult to use in defining quantum chaos \cite{Cas95}. These conditions are frequently violated in quantum mechanics, because the majority of quantum systems of interest have a discrete spectrum, and, according to the UP, the phase space must be discretised by cells of finite size $\Delta x\Delta p\geq h$ (per degree of freedom), with $h$ the Planck constant.
The situation becomes even trickier in relation to the CP, since it prescribes that all classical phenomena (including chaos) are expected to emerge from the underlying quantum domain in the classical limit, when the Planck constant is vanishingly small. A study of this issue can be found in \cite{Ike93,Eng97}. These facts stimulate the search for a quantum formalism that is consistent with the UP and the CP, and that also allows one to explain the emergence of chaos in the classical limit. A granulated (discretised) phase space for quantum dynamics given by the UP appears to be a suitable treatment of the problem. However, even taking the graininess into account, some complications arise when attempting to quantise a chaotic system. For instance, the compactness of a chaotic phase space leads to a discrete spectrum. Thus, the task is subtle and requires an adequate tool that can capture the main dynamical properties associated with the continuous spectrum of chaotic systems. As mentioned in the introduction, the KS-entropy constitutes a good candidate due to its robust signature, both in theory and applications, in the modelling of classical chaotic systems by discretised ones. The KS-entropy is equal to the Shannon entropy per time step of the ensemble of the trajectory bunches in the limit of infinitely many time steps, and therefore represents an information measure of the dynamical system in the asymptotic limit. This feature is expressed in a compact form by Pesin's theorem, which links the KS-entropy with the Lyapunov coefficients. In this regard, for a description of quantum systems exhibiting chaotic behaviour, what would be desirable is a quantum extension of the KS-entropy. Several non-commutative candidates have been proposed, in which the KS-extensions yield the classical KS-entropy within finite time ranges. Some authors regard this fact as the signature par excellence of quantum chaos \cite{Ben04,Cri93,Fal03}.
Two time scales characterising the quantum motion, regular and chaotic, are well distinguished: the relaxation time scale $\tau_{R}$, and the random time scale $\tau_{r}$. In the regular case, the classical and quantum behaviours approximate each other for \begin{eqnarray} \label{power law time scale} t \leq\tau_{R}, \quad \textrm{with} \quad \tau_{R}\propto \eta^{\alpha}, \end{eqnarray} where $\eta=\frac{I}{h^D}$ is the quasiclassical parameter, of the order of the characteristic value of the classical action $I$, $2D$ is the dimension of the phase space, and $\alpha>0$ is a system-dependent parameter. The relaxation time establishes the so-called semiclassical regime, with the particularity that within it the discrete spectrum cannot be resolved if $t \leq \tau_{R}$ (see p.\ 12 of \cite{Cas95}). On the other hand, the random time scale $\tau_{r} \ll \tau_{R}$ determines the time interval in which the wave packet motion spreads over the phase space; it is related to a property of strong chaos, the exponential instability, which is measured by the positive Lyapunov exponents. This is given by \begin{eqnarray} \label{logarithmic time scale} t \leq\tau_r, \quad \textrm{with} \quad \tau_r=\frac{\ln \eta}{h_{\textrm{\scriptsize KS}}(T)}, \end{eqnarray} where $h_{\textrm{\scriptsize KS}}(T)$ is the KS-entropy of the classical analogue having the classical Liouville evolution $T$. $\tau_r$ represents a resolution of the disagreement between the CP and the classical limit, and evidence of the non-commuting double limit \cite{Ben04,Cri93,Fal03} \begin{eqnarray} \label{double limits} \lim\limits_{t\rightarrow\infty}\lim\limits_{\eta\rightarrow\infty} \neq \lim\limits_{\eta\rightarrow\infty}\lim\limits_{t\rightarrow\infty}, \nonumber \end{eqnarray} where the left-hand side corresponds to classical chaos while the right-hand side expresses a quantum behaviour without chaos (see p.\ 17 of \cite{Cas95}).
\section{\label{sec:generalized-time-scales} Generalised time scale} The $q$-logarithm asymptotically behaves as a power law for $q\neq 1$ (equation\ (\ref{q-functions})), and recovers the natural logarithm when $q \rightarrow 1$. The crucial observation is that these two behaviours coincide exactly with the dependence of the relaxation and random time scales on the quasiclassical parameter (equations\ (\ref{power law time scale}) and (\ref{logarithmic time scale})). In order to unify these two time scales, the first step is to generalise the KS-entropy. Generalised versions of the KS-entropy have been proposed in different contexts \cite{Lat00,Tir01,Cas05, Fal14,Mih14}. Stimulated by the relation between the KS-entropy and the random time scale, equation\ (\ref{logarithmic time scale}), we make the reasonable assumption that the same relation could be expected to hold between a generalisation of the KS-entropy and the corresponding time scale. Our strategy is in line with the generalised KS-entropies studied in \cite{Fal14} and supported numerically by \cite{Cas05}. The partition entropy $H(Q)$ can be deformed as (see equation\ (\ref{sq-with-qlog}), with $k=1$) \begin{eqnarray} \label{deformed entropy partition} H_{q}(Q)=\sum_{i=1}^{m}\widetilde{\mu}(A_i)\ln_{q}\frac{1}{\widetilde{\mu}(A_i)} \end{eqnarray} ($H_q(Q) \ge 0, \forall q$), which reduces to the usual one when $q\rightarrow1$. We propose the \emph{generalised} KS-entropy as \begin{eqnarray} \label{deformed KS entropy} h_{\textrm{\scriptsize KS}}^{(q)}(T) = \sup_{Q} \{ \lim_{n\rightarrow\infty} \frac{1}{n} H_{q}(\vee_{j=0}^n T^{-j} Q) \}. \end{eqnarray} The scaling property given by equation\ (\ref{rescaled KS entropy}) remains valid: \begin{eqnarray} \label{deformed rescaled KS entropy} h_{\textrm{\scriptsize KS}}^{(q)}(T) = \frac{1}{\kappa}h_{\textrm{\scriptsize KS}}^{(q)}(T_{\kappa}).
\end{eqnarray} The random time scale can be readily generalised into \begin{eqnarray} \label{q--time scale} \tau_q = \displaystyle \frac{\ln_q \eta}{h_{\textrm{\scriptsize KS}}^{(q)}(T)} \propto \ln_q \eta, \quad \eta\geq1, \end{eqnarray} where $q$ is the entropic index, $T$ is the classical Liouville evolution, and $h_{\textrm{\scriptsize KS}}^{(q)}(T)$ is the generalisation of the KS-entropy. The condition $\eta \geq 1$ ensures that $\tau_q$ is nonnegative, as expected. Physically, $\eta \geq 1$ says that the classical action $I \ge h^D$, which includes the region of the classical limit. The relaxation and random time scales are particular cases of $\tau_q$: \noindent \emph{Relaxation time scale}. If $q=1-\alpha$ in equation\ (\ref{q--time scale}), then using (\ref{q-functions}) in the classical limit $\eta \gg 1$, \begin{eqnarray} \label{q--time scale Heisenberg} \tau_{q=1-\alpha} = \frac{\ln_{1-\alpha} \eta}{h_{\textrm{\scriptsize KS}}^{(1-\alpha)}(T)} = \frac{\eta^\alpha-1}{\alpha h_{\textrm{\scriptsize KS}}^{(1-\alpha)}(T)} \approx \frac{\eta^\alpha}{\alpha h_{\textrm{\scriptsize KS}}^{(1-\alpha)}(T)} \propto \eta^\alpha, \end{eqnarray} which is precisely the relaxation time scale $\tau_R$. \noindent \emph{Random time scale}. This time scale is directly obtained in the appropriate limit: \begin{eqnarray} \label{q--time scale Ehrenfest} \lim\limits_{q \rightarrow 1} \tau_q = \lim\limits_{q \rightarrow 1} \frac{\ln_q \eta}{h_{\textrm{\scriptsize KS}}^{(q)}(T)} = \frac{\ln \eta}{h_{\textrm{\scriptsize KS}}(T)}. \end{eqnarray} \subsection{\label{subsec:graininess}Graininess in a deformed decay scenario} We now justify how the generalised time scale $\tau_q$ can arise from the point of view of $h_{\textrm{\scriptsize KS}}^{(q)}(T)$. In fact, $h_{\textrm{\scriptsize KS}}^{(q)}(T)$ allows one to obtain $\tau_q$ by considering the graininess of the phase space, due to the UP, as follows.
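The two limits of $\tau_q$ can be illustrated numerically. In this sketch the value $h_{\textrm{\scriptsize KS}}^{(q)}=1$ is hypothetical, chosen only for the demonstration:

```python
import math

def ln_q(x, q):
    # q-logarithm; q -> 1 recovers the natural logarithm
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def H_q(measures, q):
    # deformed partition entropy H_q(Q) = sum_i mu_i ln_q(1/mu_i)
    return sum(mu * ln_q(1.0 / mu, q) for mu in measures if mu > 0)

# uniform partition of m cells: H_q reduces to ln_q(m)
m, q = 64, 0.6
assert abs(H_q([1.0 / m] * m, q) - ln_q(m, q)) < 1e-9

# generalised time scale tau_q = ln_q(eta) / h_KS^{(q)}(T)
h_ks = 1.0                       # hypothetical entropy value for the sketch
alpha, eta = 0.5, 1e8
tau_relax = ln_q(eta, 1.0 - alpha) / h_ks
# q = 1 - alpha gives the relaxation (power-law) scale ~ eta^alpha / alpha
assert abs(tau_relax / (eta ** alpha / alpha) - 1.0) < 1e-3
# q -> 1 gives the random (logarithmic) scale ln(eta)/h_KS
tau_random = ln_q(eta, 1.0 - 1e-9) / h_ks
assert abs(tau_random - math.log(eta)) < 1e-4
```

Note the separation of scales: here $\tau_{\textrm{relax}} \approx 2\times10^4$ while $\tau_{\textrm{random}} \approx 18$, consistent with $\tau_r \ll \tau_R$.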
We consider a quantum system with $D$ degrees of freedom, so that the discretised quantum phase space is $2D$-dimensional and coarse-grained by undeformable cells of minimal size $\Delta q \Delta p = h^{D}$, where $(q,p)$ denotes $(q_1,\ldots,q_D,p_1,\ldots,p_D)$. The motion of the system lies on a bounded compact region $\Omega\subset\mathbb{R}^{2D}$, whose Lebesgue measure $\mu(\Omega)<\infty$ is of the order of the classical action $I$, i.e. (see equation\ (\ref{power law time scale})), \begin{eqnarray} \label{eq:eta} \eta = \frac{\mu(\Omega)}{h^{D}}. \end{eqnarray} This assumption is appropriately met in chaotic billiards \footnote{ Closed systems with $D=1$ are integrable, and therefore not chaotic. } with $D>1$, or in non-integrable systems under a central potential (for example, the H\'enon-Heiles system). The UP implies that there exists a maximal partition (that is, the greatest refinement that one can take) $Q_{\scriptsize{\textrm{max}}}=\{A_1,\ldots,A_M\}$ of $\Omega$ constituted by $M$ identical and rigid rectangular cells $A_i$ of dimensions $\Delta q\Delta p$ and dimensionless normalised measure $\widetilde{\mu}(A_i)=\frac{h^{D}}{\mu(\Omega)}$ for all $i=1,\ldots,M$ $(\widetilde{\mu}= \frac{\mu}{\mu(\Omega)})$, where $M$ is the maximal number of cells $A_i$ contained in $\Omega$. $Q_{\scriptsize{\textrm{max}}}$ is a partition, so $\sum_{i=1}^M \widetilde{\mu}(A_i)=\sum_{i=1}^M\frac{h^{D}}{\mu(\Omega)}=1$, which implies \begin{equation} \label{graininess} M h^{D}=\mu(\Omega), \end{equation} that is simply an expression of the \emph{graininess} of the quantum phase space ($M = \eta$). The next step is to compute $h_{\textrm{\scriptsize KS}}^{(q)}(T_\kappa)$, assuming that the system has a finite time scale $\kappa$ in which the classical and quantum descriptions tend to coincide.
Therefore, in the context of the graininess, the supremum in (\ref{deformed KS entropy}) can be replaced by $\lim_{n\rightarrow\infty} \frac{1}{n}H_q(\vee_{j=0}^{n}T_{\kappa}^{-j}Q_{\scriptsize{\textrm{max}}})$. It is a difficult task to calculate \begin{eqnarray} \label{supreme deformed entropy} H_q(\vee_{j=0}^{n}T_{\kappa}^{-j}Q_{\scriptsize{\textrm{max}}}) = \sum_{(i_0,i_1,\ldots,i_n)} \widetilde{\mu}(\cap_{j=0}^{n} T_{\kappa}^{-j}A_{i_j})\: \ln_{q}\frac{1}{\widetilde{\mu}(\cap_{j=0}^{n} T_{\kappa}^{-j}A_{i_j})}, \end{eqnarray} due to the form that the elements of $\vee_{j=0}^{n}T_{\kappa}^{-j}Q_{\scriptsize{\textrm{max}}}$ can adopt as $n$ increases to infinity. However, an assumption can be made to overcome this. Typically, the dynamics in chaotic systems is such that a decay of correlations between subsets of phase space sufficiently separated in time is expected to take place in the asymptotic limit (for instance, in mixing systems). The information about the correlation decay is precisely contained in the way in which $\widetilde{\mu}(\cap_{j=0}^{n} T_{\kappa}^{-j}A_{i_j})$ decreases as $n \to \infty$ \cite{Lich92}. Each set $\cap_{j=0}^{n} T_{\kappa}^{-j}A_{i_j}$ corresponds to a part of the phase space $\Gamma$ obtained through the refinement of all the partitions that result from the maximal one, $Q_{\scriptsize{\textrm{max}}}$, evolved backwards up to the $j$-th time-step (i.e. $T_{\kappa}^{-j}Q_{\scriptsize{\textrm{max}}}$). Thus, for every $(n+1)$-tuple $(i_0,i_1,\ldots,i_n)$ of labels in $\{1,\ldots,M\}$, the set $\cap_{j=0}^{n} T_{\kappa}^{-j}A_{i_j}$ is the central object for studying the dynamics from the point of view of the KS-entropy.
We conjecture that, in the asymptotic regime, \begin{eqnarray} \label{uncorrelation deformed entropy} \displaystyle \frac{1}{\widetilde{\mu}(\cap_{j=0}^{n} T_{\kappa}^{-j}A_{i_j})} = \displaystyle \frac{1}{\widetilde{\mu}( A_{i_0})} \otimes_q \frac{1}{\widetilde{\mu}( T_{\kappa}^{-1}A_{i_1})} \otimes_q \ldots \otimes_q \frac{1}{\widetilde{\mu}( T_{\kappa}^{-n}A_{i_n})} \end{eqnarray} for every positive integer $n$, where the right-hand side stands for the $q$-product of the factors $\frac{1}{\widetilde{\mu}( T_{\kappa}^{-j}A_{i_j})}$ from $j=0$ up to $n$. The $q$-product of the $\frac{1}{\widetilde{\mu}( T_{\kappa}^{-j}A_{i_j})}$ introduces a particular correlation that leads to the generalised time scale according to equation\ (\ref{q--time scale}). The particular case $q=1$ corresponds to the simple product of probabilities of Bernoulli dynamical systems, typical of uncorrelated systems. In view thereof, equation\ (\ref{uncorrelation deformed entropy}) represents an \emph{asymptotical deformed uncorrelation} between the subsets $T_{\kappa}^{-j}A_{i_j}$, a condition wider than the \emph{mixing of all orders} \footnote{Note that mixing alone does not guarantee the existence of positive Lyapunov exponents, and therefore neither the condition $h_{\textrm{\scriptsize KS}}>0$. Instead, the condition of mixing of all orders, $\lim_{n_0,\ldots,n_k\rightarrow\infty} \mu(A_0 \cap T^{-n_1}A_{1} \cap \ldots \cap T^{-n_k}A_{k}) = \mu(A_0)\mu(A_1)\cdots\mu(A_k)$, allows one to treat the sum involved in $h_{\textrm{\scriptsize KS}}$.}, which is recovered for $q \to 1$. $\widetilde{\mu}$ is preserved \footnote{ Ergodic theory focuses on dynamical systems with invariant densities that are preserved by some transformation.
In classical mechanics, the Liouville theorem expresses this condition through the conservation of the volume $\mu$ of the phase space. }, inasmuch as $T_t$ preserves $\mu$; hence \begin{eqnarray} \label{measure preserving 1} \widetilde{\mu}( T_{\kappa}^{-j}A_{i_j}) = \widetilde{\mu}( A_{i_j}) = \frac{1}{\eta}, \quad \forall \ j=0,\ldots,n. \end{eqnarray} From equations\ (\ref{eq:ln_q-qproduct}) and (\ref{measure preserving 1}), the deformed uncorrelation condition can be written as \begin{eqnarray} \label{uncorrelation deformed entropy 2} \frac{1}{\widetilde{\mu}(\cap_{j=0}^{n} T_{\kappa}^{-j}A_{i_j})} = e_q^{ (n+1) \ln_q \eta }. \end{eqnarray} To complete the calculation it suffices to substitute (\ref{uncorrelation deformed entropy 2}) into (\ref{supreme deformed entropy}), obtaining \begin{eqnarray} \label{deformed entropy calculated} \begin{array}{lll} \displaystyle H_q(\vee_{j=0}^{n}T_{\kappa}^{-j}Q_{\scriptsize{\textrm{max}}}) &=& \displaystyle \sum_{(i_0,i_1,\ldots,i_n)} \widetilde{\mu}(\cap_{j=0}^{n} T_{\kappa}^{-j}A_{i_j}) (n+1) \ln_q \eta \\ &=& (n+1) \ln_q \eta, \end{array} \end{eqnarray} where $\sum_{(i_0,i_1,\ldots,i_n)} \widetilde{\mu}(\cap_{j=0}^{n} T_{\kappa}^{-j}A_{i_j})=1$, due to the fact that $\vee_{j=0}^{n}T_{\kappa}^{-j}Q_{\scriptsize{\textrm{max}}}$ and $Q_{\scriptsize{\textrm{max}}}$ have the same total measure, since the former is a refinement of the latter and the motion is confined to $\Omega$.
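Equation (\ref{uncorrelation deformed entropy 2}) can be verified directly from the $q$-product under the uniform measure $\widetilde{\mu}=1/\eta$ of equation (\ref{measure preserving 1}). A short sketch with illustrative parameter values:

```python
def ln_q(x, q):
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def e_q(x, q):
    return max(1.0 + (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))

def otimes_q(x, y, q):
    # q-product: [x^{1-q} + y^{1-q} - 1]_+^{1/(1-q)}
    return max(x ** (1.0 - q) + y ** (1.0 - q) - 1.0, 0.0) ** (1.0 / (1.0 - q))

# With mu(T^{-j} A_{i_j}) = 1/eta for every j, the (n+1)-fold q-product of
# eta must equal e_q((n+1) ln_q eta), reproducing the deformed uncorrelation.
q, eta, n = 0.8, 50.0, 6
prod = eta
for _ in range(n):               # q-multiply n more factors of eta
    prod = otimes_q(prod, eta, q)
expected = e_q((n + 1) * ln_q(eta, q), q)
assert abs(prod / expected - 1.0) < 1e-9
```

Algebraically both sides equal $[(n+1)\eta^{1-q}-n]^{1/(1-q)}$, which is also the form of equation (\ref{main deformed uncorrelation}) with $M=\eta$.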
Then, from (\ref{deformed entropy calculated}), \begin{eqnarray} \label{deformed KS entropy calculated 2} \begin{array}{lll} h_{\textrm{\scriptsize KS}}^{(q)}(T_\kappa) &=& \displaystyle \lim_{n\rightarrow\infty} \textstyle \frac{1}{n}H_q(\vee_{j=0}^{n}T_{\kappa}^{-j}Q_{\scriptsize{\textrm{max}}}) \\ &=& \displaystyle \lim_{n\rightarrow\infty} \textstyle \left( \frac{n+1}{n} \right) \ln_q \eta \\ &=& \ln_q \eta, \end{array} \end{eqnarray} and from (\ref{deformed rescaled KS entropy}), \begin{eqnarray} \label{deformed KS entropy compact expression} h_{\textrm{\scriptsize KS}}^{(q)}(T_\kappa) = \ln_q \eta = \kappa \, h_{\textrm{\scriptsize KS}}^{(q)}(T). \end{eqnarray} Finally, it follows that \begin{eqnarray} \label{deformed time} \kappa = \frac{1}{h_{\textrm{\scriptsize KS}}^{(q)}(T)} \ln_q \eta, \end{eqnarray} which is the generalised time scale $\tau_q$. With the help of the $q$-product definition (see equation\ (\ref{q-algebra})), the asymptotical deformed uncorrelation, equation\ (\ref{uncorrelation deformed entropy}), can be expressed as \begin{eqnarray} \label{role uncorrelation deformed} \frac{1}{\widetilde{\mu}(\cap_{j=0}^{n} T_{\kappa}^{-j}A_{i_j})} = \left( \sum_{j=0}^{n}\left( \frac{1}{\widetilde{\mu}(T_{\kappa}^{-j}A_{i_j})} \right)^{1-q}- n \right)^{\frac{1}{1-q}}. \end{eqnarray} Considering equations\ (\ref{graininess}) and (\ref{measure preserving 1}), we obtain \begin{eqnarray} \label{main deformed uncorrelation} \frac{1}{\left[ \widetilde{\mu}(\cap_{j=0}^{n} T_{\kappa}^{-j}A_{i_j}) \right]^{1-q}} = (n+1)M^{1-q}-n, \end{eqnarray} where $M>1$ is a positive integer. This equation is the starting point for characterising some typical correlation decays. \subsubsection{Extensive case $q\rightarrow 1$: chaotic dynamics} It is easy to see that when the entropic index $q$ tends to one, formula (\ref{main deformed uncorrelation}) is trivially satisfied.
Then the asymptotical uncorrelation holds for every value of $M$, where $M$ is of the order of the quasiclassical parameter $\eta$. This means that for a chaotic dynamics, governed by the random time scale, the asymptotical uncorrelation is valid for all values of $M$. \subsubsection{Nonextensive case $q=1-\alpha < 1$: regular and non-chaotic dynamics} One interesting case occurs when $q=1-\alpha$ in (\ref{main deformed uncorrelation}), thus obtaining \begin{eqnarray} \label{main deformed uncorrelation regular} \widetilde{\mu}(\cap_{j=0}^{n} T_{\kappa}^{-j}A_{i_j}) = \left(\frac{1}{(n+1)M^{\alpha}-n}\right)^{\frac{1}{\alpha}}. \end{eqnarray} In turn, this equation can be approximated, using the binomial formula $(1-\varepsilon)^{\gamma}\simeq 1-\gamma\varepsilon$, as \begin{eqnarray} \label{main deformed uncorrelation regular approx} \widetilde{\mu}(\cap_{j=0}^{n} T_{\kappa}^{-j}A_{i_j}) \simeq \frac{1}{M(n+1)^{\frac{1}{\alpha}}} \propto (n+1)^{-\frac{1}{\alpha}} \end{eqnarray} in the classical limit ($M \gg 1$). Equation\ (\ref{main deformed uncorrelation regular approx}) expresses that the correlation decay follows a power law, which is precisely what is observed in regimes that are not completely chaotic but are endowed with a complex dynamics \cite{Cos97}. The volume of the sets $\cap_{j=0}^{n} T_{\kappa}^{-j}A_{i_j}$ can take values less than $h^D$ in the asymptotic limit of sufficiently large $n$, a feature that can occur in mixed regimes of classically chaotic systems \cite{Zur01}. Such regions of phase space are called \emph{sub-Planck structures}, and they have been studied in the context of decoherence phenomena. \subsection{\label{subsec:fidelity} Fidelity decay} The fidelity was introduced by Peres \cite{Per95} as an indicator of the stability of quantum motion. It plays a role analogous to that of the Lyapunov exponents in classical mechanics. In this Section we generalise this concept so as to make it consistent with the generalised time scale.
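The quality of the power-law approximation (\ref{main deformed uncorrelation regular approx}) in the classical limit $M\gg1$ can be checked with illustrative parameter values:

```python
# Exact deformed measure, eq. (main deformed uncorrelation regular), versus
# the power-law approximation (1/M)(n+1)^{-1/alpha} for large M.
alpha, M, n = 0.5, 10 ** 8, 20
exact = ((n + 1) * M ** alpha - n) ** (-1.0 / alpha)
approx = (1.0 / M) * (n + 1) ** (-1.0 / alpha)

# the neglected term -n is small compared with (n+1) M^alpha, so the two
# expressions agree to well below one percent here
assert abs(exact / approx - 1.0) < 1e-2
```

The relative error is of order $n/((n+1)M^{\alpha})$, so it vanishes as $M\rightarrow\infty$ at fixed $n$.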
Given a perturbation $\widehat{V}$ of a Hamiltonian $\widehat{H}_0$, so that $\widehat{H}=\widehat{H}_0+\widehat{V}$, and an arbitrary initial state $|\psi_0\rangle$, Peres defined the fidelity as the overlap of a state at time $t$ with its perturbed echo: \begin{eqnarray} \label{fidelity} M(t)=|\langle \psi_0 |\widehat{U}^{\prime}_{-t} \widehat{U}_{t} |\psi_0\rangle |^2, \end{eqnarray} the so-called \emph{Loschmidt echo}. The unperturbed and the perturbed evolution operators are $\widehat{U}_{t}=\exp(-i\widehat{H}_0t)$ and $\widehat{U}^{\prime}_{-t}=\exp(i\widehat{H}t)$, respectively. The perturbed evolution operator $\widehat{U}^{\prime}_{t}$ can be expressed in a more convenient form: \begin{eqnarray} \label{perturbedevoution} \widehat{U}^{\prime}_{t} = \widehat{U}_{\delta}^{t} =\widehat{U}_{t} \exp(-i\widehat{B}\delta/\hbar), \end{eqnarray} where $\widehat{B}$ is a self-adjoint operator, $\delta$ is the perturbation strength and $\hbar$ is an effective Planck constant \cite{Jal01}. Analytical evaluation of equation\ (\ref{fidelity}) may be difficult due to the non-commutativity between the unperturbed Hamiltonian $\widehat{H}_0$ and the perturbation $\widehat{B}$. This difficulty is analogous to that of evaluating the KS-entropy (see equation\ (\ref{supreme deformed entropy})): in one case the hindrance lies in the product of non-commuting operators, and in the other, in the intersection of elements of different partitions of the phase space. To overcome this, the fidelity may be expanded in a power series of some characteristic parameter.
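A minimal numerical sketch of the Loschmidt echo (\ref{fidelity}), computed directly from its definition rather than from a series expansion; the random Hermitian $\widehat{H}_0$, the dimension, and the perturbation strength are illustrative choices (with $\hbar=1$):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_hermitian(n):
    # random Hermitian matrix (illustrative stand-in for H_0 and V)
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2.0

def evolve(H, t):
    # U_t = exp(-i H t) via eigendecomposition of the Hermitian H (hbar = 1)
    w, v = np.linalg.eigh(H)
    return (v * np.exp(-1j * w * t)) @ v.conj().T

n, delta = 16, 0.05                      # illustrative dimension and strength
H0 = rand_hermitian(n)
H = H0 + delta * rand_hermitian(n)       # perturbed Hamiltonian H0 + V
psi0 = rng.normal(size=n) + 1j * rng.normal(size=n)
psi0 /= np.linalg.norm(psi0)

def fidelity(t):
    # Loschmidt echo M(t) = |<psi0| U'_{-t} U_t |psi0>|^2
    return float(abs(psi0.conj() @ evolve(H, -t) @ evolve(H0, t) @ psi0) ** 2)

assert abs(fidelity(0.0) - 1.0) < 1e-12          # perfect overlap at t = 0
assert all(0.0 <= fidelity(t) <= 1.0 + 1e-9 for t in (0.5, 1.0, 5.0))
```

Since both evolutions are unitary, $M(t)$ is automatically confined to $[0,1]$, with $M(0)=1$ and a decay whose law depends on the perturbation regime.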
The Loschmidt echo is obtained from equations\ (\ref{fidelity}) and (\ref{perturbedevoution}) as a power series in the perturbation strength $\delta$ \cite{Pro02}: \begin{eqnarray} \label{fidelity-series} M(t)=\left| 1+\sum_{m=1}^{\infty}\frac{i^m \delta^m}{m!\hbar^m} \widehat{\mathcal{T}}\sum_{t_1,\ldots,t_m=0}^{t-1} \langle \widehat{B}_{t_1}\cdots\widehat{B}_{t_m} \rangle \right|^2, \end{eqnarray} where $\widehat{\mathcal{T}}$ stands for a left-to-right time ordering, $\widehat{B}_{t_j}=\widehat{U}^{\prime}_{t_j}\widehat{B}\widehat{U}^{\prime}_{-t_j}$ represents the perturbation at time $t_j$ for all $j=1,\ldots,m$, and $t$ is a discrete time in units of the period \footnote{Here we are assuming a periodically time-dependent Hamiltonian, which represents a sufficiently large class of chaotic quantum systems, in the Floquet representation.} of the Hamiltonian $\widehat{H}$. The Weyl-Wigner transform allows one to recover the dynamics of the quantum phase space in the classical limit \cite{Hil84,Gad95}. Along this line, the mean value $\langle \widehat{B}_{t_1}\cdots\widehat{B}_{t_m}\rangle$, which carries the time dependence of the fidelity decay, may be expressed as \begin{eqnarray} \label{meanvalue-B} \begin{array}{lll} \langle \widehat{B}_{t_1}\cdots\widehat{B}_{t_m} \rangle &=& \textrm{Tr} \left( \displaystyle \frac{1}{N}\widehat{1}\widehat{B}_{t_1}\cdots\widehat{B}_{t_m} \right) \\ &=& \displaystyle \frac{1}{N h^D} \int_{\mathbb{R}^{2D}} \widetilde{\widehat{B}_{t_1}\cdots\widehat{B}_{t_m}}\, dq\, dp. \end{array} \end{eqnarray} The classical chaotic dynamics is achieved by approximating the Weyl symbol of the product of the $\widehat{B}_{t_j}$ by the product of the Weyl symbols of each one: \begin{eqnarray} \label{B-factorization} \widetilde{\widehat{B}_{t_1}\cdots\widehat{B}_{t_m}} \simeq \widetilde{B}_{t_1}\cdots\widetilde{B}_{t_m}.
\end{eqnarray} Then, equations\ (\ref{meanvalue-B}) and (\ref{B-factorization}) are rewritten as \begin{eqnarray} \label{meanvalue-chaotic} \begin{array}{lll} \langle \widehat{B}_{t_1}\cdots\widehat{B}_{t_m} \rangle &\simeq& \displaystyle \frac{\mu(\Omega)}{N h^D} \int_{\Omega} \widetilde{B}_{t_1}\cdots\widetilde{B}_{t_m} d\widetilde{\mu}, \end{array} \end{eqnarray} with $d\mu = dq\, dp$, and where the $\widetilde{B}_{t_j}$ are assumed to have support in $\Omega$ for $t_m\rightarrow\infty$, i.e., the perturbation acts on $\Omega$ only. The quantum interference correlations are cancelled by the chaotic dynamics in phase space, as expressed mathematically by (\ref{meanvalue-chaotic}), giving rise to a coarse-grained distribution \cite{Cas95}, where $Q_{\scriptsize{\textrm{max}}}=\{A_1,\ldots,A_M\}$ is the finest partition of $\Omega$. As $M \to \infty$ in the classical limit, $\widetilde{B}(q,p)$ becomes a function defined over the granulated quantum phase space that can be approximated by a linear combination of the characteristic functions $1_{A_j}(q,p)$, $j=1,\ldots,M$: \begin{eqnarray} \label{B-characteristic-function} \widetilde{B}(q,p)\simeq\sum_{j=1}^{M}\gamma_j 1_{A_j}(q,p), \end{eqnarray} so that \begin{eqnarray} \label{meanvalue-product2} \langle \widehat{B}_{t_1} & \cdots \widehat{B}_{t_m} \rangle \simeq \nonumber \\ &\frac{\eta}{N} \int_{\Omega} \left( \sum_{j_1=1}^{M} \gamma_{j_1} 1_{T_{t_1} A_{j_1}}(q,p) \cdots \sum_{j_m=1}^{M} \gamma_{j_m} 1_{T_{t_m} A_{j_m}}(q,p) \right) d\widetilde{\mu} \nonumber \\ &\simeq \frac{\eta}{N} \sum_{j_1,\ldots,j_m=1}^{M} \gamma_{j_1} \cdots \gamma_{j_m} \widetilde{\mu}(T_{t_1} A_{j_1}\cap\ldots \cap T_{t_m} A_{j_m}). \end{eqnarray} At this point we introduce a hypothesis with the same structure as equation\ (\ref{uncorrelation deformed entropy}) in order to make the fidelity decay scenario compatible with the generalised time scale.
We replace $(\gamma_{j_1}\cdots\gamma_{j_m})^{-1}$ by $(\gamma_{j_1})^{-1}\otimes_q\cdots\otimes_q(\gamma_{j_m})^{-1}$ in equation\ (\ref{meanvalue-product2}), so that $\langle \widehat{B}_{t_1}\cdots\widehat{B}_{t_m}\rangle$ is generalised as \begin{eqnarray} \label{generalized-meanvalue-product} & \langle \widehat{B}_{t_1} \cdots \widehat{B}_{t_m}\rangle_q =\nonumber\\ &\frac{\eta}{N} \sum_{j_1,\ldots,j_m=1}^{M} [(\gamma_{j_1})^{-1}\otimes_q\cdots\otimes_q(\gamma_{j_m})^{-1}]^{-1} \widetilde{\mu}(T_{t_1} A_{j_1}\cap\ldots \cap T_{t_m} A_{j_m}). \end{eqnarray} The next step is to take a uniformly distributed perturbation over $\Omega$, $\gamma_j=\frac{1}{M}$ for all $j$ (the unit of the perturbation strength $\delta$ may be chosen so as to render $\widehat{B}$ dimensionless), so that $(\gamma_{j_1})^{-1}\otimes_q\cdots\otimes_q(\gamma_{j_m})^{-1} = e_q^{m\ln_q \eta}$. From (\ref{generalized-meanvalue-product}), \begin{eqnarray} \label{generalized-meanvalue-product2} \langle \widehat{B}_{t_1}\cdots\widehat{B}_{t_m}\rangle_q = K (e_q^{\lambda_q m})^{-1} \propto \frac{1}{e_q^{\lambda_q m}}, \end{eqnarray} where $K=\frac{\eta}{N}$ is a number that remains fixed in the classical (and thermodynamic) double limit $\eta \rightarrow \infty$, $N\rightarrow\infty$, $\lambda_q=h_{\textrm{\scriptsize KS}}^{(q)}(T_{\kappa}) = \ln_q \eta$ is a generalised Lyapunov exponent, and $\sum_{j_1,\ldots,j_m=1}^{M} \widetilde{\mu}(T_{t_1} A_{j_1}\cap\ldots \cap T_{t_m} A_{j_m}) = \widetilde{\mu}(\vee_{i=1}^m T_{t_i}Q_{\scriptsize{\textrm{max}}})=1$. Equations\ (\ref{uncorrelation deformed entropy 2}) and (\ref{generalized-meanvalue-product2}) describe the same $q$-exponential decay as a function of the time step, labelled by $n$ and $m$ respectively.
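The collapse of the $q$-product used in this step, namely that the $q$-product of $m$ copies of $\gamma^{-1}=M$ equals $e_q^{m\ln_q M}$, can be checked directly from the definitions $\ln_q x = (x^{1-q}-1)/(1-q)$, $e_q^x = [1+(1-q)x]^{1/(1-q)}$ and $x\otimes_q y = (x^{1-q}+y^{1-q}-1)^{1/(1-q)}$; the parameter values below are illustrative.

```python
import numpy as np

# q-deformed functions (q < 1 here, so all arguments stay positive).
def ln_q(x, q):
    return (x ** (1 - q) - 1) / (1 - q)

def e_q(x, q):
    return (1 + (1 - q) * x) ** (1.0 / (1 - q))

def q_product(values, q):
    # x1 (x)_q x2 (x)_q ... : sum the (1-q)-powers and subtract (m-1).
    s = sum(v ** (1 - q) for v in values) - (len(values) - 1)
    return s ** (1.0 / (1 - q))

# The step used in the text: for gamma_j = 1/M, the q-product of m copies of
# gamma^{-1} = M collapses to e_q^{m ln_q M} (M plays the role of eta).
q, M, m = 0.5, 100.0, 5
lhs = q_product([M] * m, q)
rhs = e_q(m * ln_q(M, q), q)
```

The identity is exact, and in the limit $q\to 1$ the deformed functions reduce to the ordinary logarithm and exponential.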
In other words, in the classical limit the cross-terms of the Weyl-Wigner expansions are neglected, so the $m$th-point correlation function $\langle \widehat{B}_{t_1}\cdots\widehat{B}_{t_m}\rangle_q$ and the asymptotical deformed uncorrelation are essentially the same quantity. The approximation of a uniformly distributed perturbation can represent a uniform environment that forces the system to decohere, for instance a pendulum immersed in a continuum oscillator bath (see p.\ 285 of \cite{Omn94}). We are now able to study some regimes of the fidelity decay in a unified way. \subsubsection{Lyapunov regime.} $q\equiv q_{\textrm{\scriptsize fid}}=1$: When the perturbation strength $\delta$ is sufficiently large (i.e., greater than the spacing of the energy levels), the Lyapunov regime takes place, in which the Loschmidt echo decays exponentially with a characteristic rate given by the Lyapunov exponent of the classical chaotic dynamics \cite{Jal01}, and $ \langle \widehat{B}_{t_1}\cdots\widehat{B}_{t_m}\rangle_q \propto e^{-\lambda m}, $ where $\lambda$ is the usual Lyapunov exponent. \subsubsection{Regular regime.} $q\equiv q_{\scriptsize \textrm{fid}}\neq1$: In classically quasi-integrable systems the correlations between subsets in phase space do not decay as fast as in the Lyapunov regime but obey a power law $t^{-\frac{3D}{2}}$ ($t^{-D}$ in the classical case), with $2D$ the dimension of phase space \cite{Jac03}. This follows from the diagonal part of $M(t)$, which is mainly determined by the decay of $\langle \widehat{B}_{t_1}\cdots\widehat{B}_{t_m}\rangle$ (see equation\ (\ref{fidelity-series})). Invoking the power law dependence of the correlations (\ref{generalized-meanvalue-product2}) in the classical limit $\eta \gg 1$, we have $\lambda_q = \ln_q \eta \gg 1$, and the asymptotic behaviour of the $q$-exponential ensures that $(e_q^{\lambda_q m})^{-1} \approx m^{-\frac{1}{1-q}} \approx t^{-\frac{3D}{2}}$.
Thus, the entropic index for the regular regime is identified as \begin{eqnarray} \label{regular-index} q_{\textrm{\scriptsize fid}}=1-\frac{2}{3D}. \end{eqnarray} The exponential decay is recovered with an increasing number of degrees of freedom $D$, in accordance with a fast cancellation of the correlations in the macroscopic limit $D\rightarrow\infty$. The pendulum in a continuum oscillator bath, for instance, displays an exponential decay of the correlations in the macroscopic limit of an infinite number $N$ of oscillators (see p.\ 287 of \cite{Omn94}). \subsection{\label{subsec:application} An example: the kicked rotator with absorption} The kicked rotator with absorbing boundary conditions is used as an example showing that the generalised time scale $\tau_q$ is able to characterise the relaxation regime \cite{Bor91,Cas97}. Differently from the ordinary kicked rotator, the evolution operator of the present example is not unitary, and thus its eigenvalues are distributed inside the unit circle of the complex plane. The model is described by the quantum map \begin{eqnarray}\label{app1-map} |\overline{\psi}\rangle=\widehat{U}|\psi\rangle=\widehat{P} e^{-\frac{iT\widehat{n}^2}{4}}e^{-i\lambda\cos\widehat{\theta}} e^{\frac{iT\widehat{n}^2}{4}}|\psi\rangle \end{eqnarray} where $|\psi\rangle$ is an arbitrary state, $|\overline{\psi}\rangle$ is the state after a time step, $\widehat{U}$ is the evolution operator, $\widehat{P}$ is a projection operator onto the quantum states $n$ in the interval $(-N/2,N/2)$ (where $N$ is the size of the system), $T$ is the period between two successive kicks, $\lambda$ is the coupling strength of the kicks, and $\theta$ is the angle of the pendulum with respect to the vertical. The conjugate operators $\widehat{n}$ and $\widehat{\theta}$ satisfy $[\widehat{n},\widehat{\theta}]=-i$, and the classical limit is characterised by the double limit $\lambda \rightarrow \infty$, $T \rightarrow 0$ with the product $\lambda T$ held constant.
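A minimal numerical sketch of one time step of the map (\ref{app1-map}) follows. The grid size is an illustrative choice of ours, the chaos parameter is set to $\lambda T = 7$ as in the text, and the splitting into momentum-diagonal and angle-diagonal factors switched by an FFT is our implementation choice, not a prescription from the references.

```python
import numpy as np

# One time step of the kicked rotator with absorption: the kinetic factors
# exp(+/- i T n^2 / 4) are diagonal in the momentum (n) basis, the kick
# exp(-i lam cos(theta)) is diagonal in the angle basis, and the projector P
# absorbs all amplitude with |n| >= N/2.
L, N = 256, 64                      # grid size (ours) and open-system size
lam, T = 16.0, 7.0 / 16.0           # chaos parameter lam*T = 7
n = np.fft.fftfreq(L, d=1.0 / L)    # integer momentum labels -L/2..L/2-1
theta = 2.0 * np.pi * np.arange(L) / L

def step(psi_n):
    psi_n = np.exp(1j * T * n**2 / 4) * psi_n            # e^{+iTn^2/4}
    psi_th = np.fft.ifft(psi_n, norm="ortho")            # to angle basis
    psi_th = np.exp(-1j * lam * np.cos(theta)) * psi_th  # kick
    psi_n = np.fft.fft(psi_th, norm="ortho")             # back to n basis
    psi_n = np.exp(-1j * T * n**2 / 4) * psi_n           # e^{-iTn^2/4}
    return np.where(np.abs(n) < N / 2, psi_n, 0.0)       # projector P

psi = np.zeros(L, dtype=complex)
psi[0] = 1.0                        # only the level n = 0 populated
survival = [1.0]
for _ in range(200):
    psi = step(psi)
    survival.append(float(np.vdot(psi, psi).real))
```

Because each step is a unitary evolution followed by a projection, the survival probability is monotonically non-increasing, which is the quantum relaxation whose time scale is discussed below.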
For studying quantum relaxation, Casati et al.\ \cite{Cas97} fixed the classical chaos parameter $\lambda T = 7$ and the ratio $N/\lambda = 4$, and the initial condition was chosen such that only the level $n=0$ is populated. In this case, the first $N$ complex eigenvalues of $\widehat{U}$ remain distributed in a narrow ring of area $E_c$, which corresponds to the typical situation in the scattering approach to open quantum systems. The distance between the eigenvalues is $\delta\approx\sqrt{E_c/N}$, where $\delta$ goes to zero and the density of poles becomes a continuum in the classical limit. For finite $N$, the separation of the poles can be resolved after a relaxation time $\mathcal{T} \sim 1/\delta$, which implies $\mathcal{T}\sim \sqrt{N}$. In this regime, the cells of the quantum phase space have a relative size $1/N$. These results may be interpreted by means of the graininess approach of Section \ref{subsec:graininess}, with $1/N=\mu(A_i)=\frac{1}{\eta}$ for each rigid box $A_i$ ($i=1,\ldots,M$) belonging to the maximal partition $Q_{\scriptsize{\textrm{max}}}$, so that $N=\eta=M$, in accordance with the graininess of the quantum phase space. It follows that $\mathcal{T}\sim \eta^{1/2}$, i.e., $\mathcal{T}\approx \tau_{q=1/2}$ (with $\alpha=1/2$, see equation\ (\ref{q--time scale Heisenberg})), a power law behaviour for the relaxation time. \section{\label{sec:time-domain} Towards a unified time domain scenario} The concept of characteristic time scales in quantum dynamics resolves the ambiguity of the noncommuting double limit $t \rightarrow \infty$ and $\eta \rightarrow \infty$. Numerical results show that every time scale must be obtained for each regime separately. In Ref.\ \cite{Cas95} it is asserted that the main peculiarity of quantum chaos is its restriction to a finite time interval, called \emph{pseudochaos}, distinguished from the true chaos that takes place in the classical limit $\eta \rightarrow \infty$.
In virtue of this, it is concluded that the general structure of quantum chaos dynamics can be expressed by the time scale curves on the plane $(\eta,t)$ of Fig.\ 5 of \cite{Cas95} \footnote{ N.B.: our $\eta$ and $\tau_q$ correspond to the symbols $q$ and $t$ of Ref.\ \cite{Cas95}, respectively. Since $q$ is generally referred to as the entropic index in the nonextensive statistics literature, we decided to follow this convention. }. Three well-distinguished regions can be seen in their diagram: (1) the power law region associated with localisation phenomena, (2) the region below the logarithmic curve corresponding to the true chaos regime, and (3) the intermediate region between (1) and (2) belonging to pseudochaos. Interestingly, these regions are obtained by varying the entropic index $q$ in the formula of the generalised time scale $\tau_q$. More precisely, for a given value of $q<1$, the power law region is situated above (and on) the curve $\ln_q \eta$, the true chaos zone is below (and on) $\ln_1 \eta$ (the usual logarithm), and the pseudochaos region is placed in between. Fig.\ \ref{fig:qescalas} illustrates this characterisation for some values of $q$. Thus, the general structure of quantum chaos dynamics of Fig.\ 5 of \cite{Cas95} can be reproduced by Fig.\ \ref{fig:qescalas} in a unified way. This unified scenario is identically reproduced in the classical limit $\eta \gg 1$. In fact, Casati and Chirikov asserted that the asymptotic limit $t \rightarrow \infty$ must be taken conditionally, in such a way that the ratio $t/\tau_R(\eta)$, or $t/\tau_r(\eta)$, is fixed, depending on the dynamics. This is precisely what we obtain using the generalised time scale: the ratio $t/\tau_q(\eta)$ is kept fixed, and the regime, either regular or chaotic, is defined by the entropic index $q$. We now discuss the connection between some approaches to quantum chaos time scales and the framework presented in this paper.
Several previous works based on numerical experiments with simple models such as kicked rotators \cite{Haa01}, the semiclassical propagation of wave packets \cite{Schu12}, the randomness parameter in linear and nonlinear dynamical chaos \cite{Chi97}, and ordered quantisation \cite{Ang03}, among others, show that a framework unifying the dynamical aspects of quantum chaos seems difficult to define in a single way. Below we address some specific remarks reported in the literature, and discuss them from the point of view of the present work. \begin{figure} \caption{Characterisation of the regions of quantum chaos dynamics on the plane $(\eta,t)$ by the generalised time scale $\tau_q$ for several values of $q$.} \label{fig:qescalas} \end{figure} \begin{itemize} \item[$(1)$] \emph{Universal nature of the time scales.} The time scale diverges logarithmically with $h$ for classically chaotic flows and is algebraic in $h$ for classically regular flows (see \cite{Ang03} and references therein). The present approach proposes the time scale to be expressed by a $q$-logarithm function, equation\ (\ref{q--time scale}) (within the present context the entropic index could more properly be renamed $q_{\textrm{\tiny QC}}$, where {\scriptsize QC} stands for quantum chaos), which interpolates between the logarithmic regime (with $q_{\textrm{\tiny QC}} = 1$) and the algebraic one (with $q_{\textrm{\tiny QC}} = 1-\alpha$) in the classical limit $\eta \rightarrow \infty$. \item[$(2)$] \emph{Kolmogorov-Sinai time.} The exponential increase of any small initial volume $\Delta \Gamma_0$ within the phase space, $\Delta \Gamma(t)=\Delta \Gamma_0 e^{h_{\textrm{\tiny KS}}t}$, implies that the whole phase space is filled ($\Delta \Gamma \approx 1$) after a time of the order $t_{0} = \frac{1}{h_{\textrm{\tiny KS}}} \log \frac{1}{\Delta \Gamma_0}$, and thus the relaxation time might be proportional to $\frac{1}{h_{\textrm{\tiny KS}}}$ (see \cite{Del97} and references therein).
The generalised time scale introduced here points towards a $q$-exponential spreading of a small phase volume $\Delta \Gamma_0$ over a region $\Delta \Gamma(t) = \Delta \Gamma_0 e_q^{h_{\textrm{\tiny KS}}^{(q)}(T)t}$ after a time $t$. It follows straightforwardly that \begin{eqnarray} \label{q KS time} \tau_{\textrm{\tiny KS}}^{(q)}(T) = \frac{1}{h_{\textrm{\tiny KS}}^{(q)}(T)} \end{eqnarray} constitutes a generalisation of the Kolmogorov-Sinai time $\tau_{\textrm{\tiny KS}}(T) = \frac{1}{h_{\textrm{\tiny KS}}(T)}$ for processes satisfying the asymptotical deformed uncorrelation (\ref{uncorrelation deformed entropy}). \item[$(3)$] \emph{Modifications of the quantum chaos theory.} It has been claimed that the quantum chaos phenomenon seems to call for a possibly difficult mathematical modification of the theory for finite times (see \cite{Cas95}, Sec.\ 3.3, p.\ 18). The main framework of the theory is preserved by using the generalised time scale $\tau_q$ and the generalised KS-entropy (\ref{deformed KS entropy}). The kind of correlation, if any, between the subsets of the quantum phase space defines which time scale is to be used. The particular case of the asymptotical deformed uncorrelation, equation\ (\ref{uncorrelation deformed entropy}), leads to the present $\tau_q$ generalisation. \end{itemize} \section{\label{sec:conclusions}Conclusions} We have presented a redefinition of the quantum chaos time scales by means of a generalised time scale, $\tau_q$ (henceforth we use $q_{\textrm{\tiny{QC}}} \equiv q$), motivated by nonextensive statistics, which contains the relaxation and random regimes as particular cases.
We have obtained the generalised time scale by using four ingredients: (i) a generalised KS-entropy (equation\ (\ref{deformed KS entropy})), (ii) the rescaled time property (equation\ (\ref{rescaled KS entropy})), (iii) the asymptotical deformed uncorrelation (equation\ (\ref{uncorrelation deformed entropy})), and (iv) the graininess of the quantum phase space (equation\ (\ref{graininess})). Lyapunov and regular regimes for the fidelity decay have been obtained as a consequence of the generalisation of the $m$th-point correlation function through the $q$-product, equation\ (\ref{generalized-meanvalue-product}), for a uniformly distributed perturbation in the classical limit. The deformed uncorrelation introduced by equation\ (\ref{uncorrelation deformed entropy}) is claimed not as a necessary but as a sufficient condition to obtain the generalised time scale. A physical justification for it is still absent; however, it is intriguing that this hypothesis leads to the unified scenario presented here. Correlations different from that given by equation\ (\ref{uncorrelation deformed entropy}) will possibly lead to different generalisations of the time scales. Within the framework of nonextensive statistical mechanics (see Section 3.3.4 of \cite{tsallis-book}), the entropic index $q_{\textrm{\scriptsize{ent}}}$ of the generalised entropy $S_{q_{\textrm{\scriptsize{ent}}}}$ is ultimately defined by the correlations between the subsystems of a composite system, which stem from its dynamics. For strongly chaotic systems, i.e., systems with exponential sensitivity to the initial conditions (with at least one positive Lyapunov exponent), corresponding to independent subsystems (i.e., uncorrelated subsystems, though this also works asymptotically for weakly correlated subsystems), the effective volume $W$ of the classical phase space increases exponentially with the size $N$ of the system (that is, a system composed of $N$ equal subsystems).
The Boltzmann-Gibbs entropy (for the equiprobable case) $S_{q_{\textrm{\scriptsize{ent}}}=1} =S_{\textrm{\scriptsize{BG}}} = k_B \ln W$ is thus extensive, i.e., $S_{\textrm{\scriptsize{BG}}}(N) = N S_{\textrm{\scriptsize{BG}}}(1)$. For weakly chaotic systems, i.e., systems with a power law sensitivity to the initial conditions (with vanishing maximal Lyapunov exponent), for which the effective volume $W$ of the classical phase space increases according to a power law of the size $N$ of the system (say, $W \sim N^d$, with $d>0$), the extensivity of the entropy, $S_{q_{\textrm{\scriptsize{ent}}}<1}(N) \propto N S_{q_{\textrm{\scriptsize{ent}}}<1}(1)$, can only be attained for an entropy with $q_{\textrm{\scriptsize{ent}}} = 1-1/d < 1$ (the Boltzmann-Gibbs entropy $S_{\textrm{\scriptsize{BG}}}(N) \propto \ln N$ is thus not extensive in this case). Note that both cases, the strongly and the weakly chaotic systems, can be unified through a $q$-exponential increase of $W$ with $N$. This structure is strikingly similar to the present generalisation of the time scales. For the chaotic regime, the quasiclassical parameter $\eta=\frac{I}{h^D}$ --- which plays the role of the normalised volume $W$ (the number of states) of the classical phase space --- increases exponentially with the time scale $\tau_{q_{\textrm{\tiny{QC}}}=1}=\tau_r$ (the inverse of equation\ (\ref{logarithmic time scale})). For the regular regime, $\eta$ increases according to a power law of the time scale $\tau_{q_{\textrm{\tiny{QC}}}<1}=\tau_R$ (the inverse of equation\ (\ref{power law time scale})). In this parallel, the generalised time scale $\tau_{q_{\textrm{\tiny{QC}}}}$ plays a role analogous to that of the size $N$ of the system. Similarly to the index $q_{\textrm{\scriptsize{ent}}}$, the index $q_{\textrm{\tiny{QC}}}$ is also determined by the (quantum) dynamics, and it is also less than 1.
The connection between the asymptotic scale invariance of the correlations and the asymptotic scale-free occupation of the phase space, remarked upon in \cite{tsallis-gell-mann-sato--2005} in a classical formulation, also seems to be present in the semiclassical approach. Nonextensive statistical mechanics, which unifies aspects of the classical chaos dynamics, may also unify the quantum chaos dynamics. \section*{References} \end{document}
\begin{document} \title{High-Fidelity Universal Gate Set for $^9$Be$^+$ Ion Qubits} \author{J. P. Gaebler} \author{T. R. Tan} \email[Electronic address: ]{[email protected]} \author{Y. Lin} \altaffiliation{Current address: Physics Department and JILA, University of Colorado Boulder, Boulder, Colorado, USA. } \author{Y. Wan} \author{R. Bowler} \altaffiliation{Current address: Physics Department, University of Washington, Seattle, Washington, USA. } \author{A. C. Keith} \author{S. Glancy} \author{K. Coakley} \author{E. Knill} \author{D. Leibfried} \author{D. J. Wineland} \affiliation{National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA} \begin{abstract} We report high-fidelity laser-beam-induced quantum logic gates on magnetic-field-insensitive qubits comprised of hyperfine states in $^{9}$Be$^+$ ions with a memory coherence time of more than 1 s. We demonstrate single-qubit gates with error per gate of $3.8(1)\times 10^{-5}$. By creating a Bell state with a deterministic two-qubit gate, we deduce a gate error of $8(4)\times10^{-4}$. We characterize the errors in our implementation and discuss methods to further reduce imperfections towards values that are compatible with fault-tolerant processing at realistic overhead. \end{abstract} \maketitle Quantum computers can solve certain problems that are thought to be intractable on conventional computers. An important general goal is to realize universal quantum information processing (QIP), which could be used for algorithms having a quantum advantage over processing with conventional bits as well as to simulate other quantum systems of interest \cite{Feynman1982,Deutsch1985,Lloyd1996}. 
For large problems, it is generally agreed that individual logic gate errors must be reduced below a certain threshold, often taken to be around $10^{-4}$ \cite{Preskill1998,Knill2010,Ladd2010}, to achieve fault tolerance without excessive overhead in the number of physical qubits required to implement a logical qubit. This level has been achieved in some experiments for all elementary operations including state preparation and readout, with the exception of two-qubit gates, emphasizing the importance of improving multi-qubit gate fidelities. Trapped ions are one candidate for scalable QIP. State initialization, readout, and quantum logic gates have been demonstrated in several systems with small numbers of trapped ions using various atomic species including $^{9}$Be$^+$, $^{25}$Mg$^+$, $^{40}$Ca$^+$, $^{43}$Ca$^+$, $^{88}$Sr$^+$, $^{111}$Cd$^+$, $^{137}$Ba$^+$, and $^{171}$Yb$^+$. The basic elements of scalable QIP have also been demonstrated in multi-zone trap arrays \cite{Home2009,Hanneke2010}. As various ions differ in mass, electronic, and hyperfine structure, they each have technical advantages and disadvantages. For example, $^{9}$Be$^+$ is the lightest ion currently considered for QIP, and as such, has several potential advantages. The relatively light mass yields deeper traps and higher motional frequencies for given applied potentials, and facilitates fast ion transport \cite{Bowler2012,Walther2012}. Light mass also yields stronger laser-induced effective spin-spin coupling (inversely proportional to the mass), which can yield less spontaneous emission error for a given laser intensity \cite{Ozeri2007}. However, a disadvantage of $^{9}$Be$^+$ ion qubits compared to some heavier ions such as $^{40}$Ca$^+$ and $^{43}$Ca$^+$ \cite{Benhelm2008,Ballance2015} has been the difficulty of producing and controlling the ultraviolet (313 nm) light required to drive $^{9}$Be$^+$ stimulated-Raman transitions. 
In the work reported here, we use an ion trap array designed for scalable QIP \cite{Blakestad2011} and take advantage of recent technological developments with lasers and optical fibers that improve beam quality and pointing stability. We also implement active control of laser pulse intensities to reduce errors. We demonstrate laser-induced single-qubit computational gate errors of $3.8(1) \times 10^{-5}$ and realize a deterministic two-qubit gate to ideally produce the Bell state $\ket{\Phi_+} = \textstyle{\frac{1}{\sqrt{2}}}(\ket{\uparrow\uparrow}+\ket{\downarrow\downarrow})$. By characterizing the effects of known error sources with numerical simulations and calibration measurements, we deduce an entangling gate infidelity or error of $\epsilon = 8(4)\times10^{-4}$, where $\epsilon = 1 - F$ and $F$ is the fidelity. Along with Ref.\ \cite{Ballance2015}, these appear to be the highest two-qubit gate fidelities reported to date. \begin{figure} \caption{Schematic of the ion trap, formed with two gold-coated, stacked wafers. Top view of the trap (on the right) showing the load zone $\mathcal{L}$.} \label{fig:Xtrap} \end{figure} The ions are confined in a multi-segmented linear Paul trap (Fig.\ \ref{fig:Xtrap}) designed to demonstrate scalable QIP \cite{Blakestad2011, bible,Kielpinski2002}. Radio frequency (RF) potentials, with frequency $\omega_{\mathrm{RF}} \simeq 2\pi \times 83$ MHz and amplitude $V_{\mathrm{RF}} \simeq $ 200 V, are applied to the RF electrodes to provide confinement transverse to the main trap channels. DC potentials are applied to the segmented control electrodes to create potential wells for trapping of ions at desired locations in the channels. By applying time-dependent potentials to these electrodes, the ions can be transported deterministically between different trap zones. The trap also contains a junction at $\mathcal{C}$, which can be used for reordering \cite{Blakestad2011}.
For the experiment here, the ions are first loaded in $\mathcal{L}$ and then transported to $\mathcal{E}$. Quantum logic experiments described below are performed with ions confined in a fixed harmonic well at $\mathcal{E}$. Due to the particular design of the junction and trap imperfections, the ions undergo residual RF ``micromotion" at frequency $\omega_{\mathrm{RF}}$ along $\hat{z}$ with amplitude $\simeq$ 105 nm at $\mathcal{E}$. This affects our implementation of logic gates, Doppler and ground state cooling, and qubit state measurement, as described below. \begin{figure} \caption{Relevant energy level structure for $^{9}$Be$^+$ ions.} \label{fig:BeLevels} \end{figure} For a single $^9$Be$^+$ ion confined in $\mathcal{E}$, the axial $z$ harmonic mode frequency is $\omega_z \simeq 2 \pi\times 3.58$ MHz, while the transverse mode frequencies are $\omega_x \simeq 2 \pi\times 11.2$ MHz and $\omega_y \simeq 2 \pi\times 12.5$ MHz. The ground state hyperfine levels and relevant optical levels for $^{9}$Be$^+$ ions in a magnetic field B $\simeq$ 0.0119 T are shown schematically in Fig.\ \ref{fig:BeLevels}. The qubit is encoded in the $^2$S$_{1/2}\ket{F=2, m_F = 0} = \ket{\downarrow}$ and $\ket{1,1} = \ket{\uparrow}$ hyperfine levels, where $F$ and $m_F$ are the total angular momentum and its projection along the quantization axis, respectively. The qubit frequency, $\omega_0 = 2\pi \times f_0 \simeq 2\pi \times 1207.496$ MHz, is first-order insensitive to magnetic field fluctuations \cite{Langer2005}; we measure a coherence time of approximately 1.5 s. Before each experiment, we Doppler cool and optically pump the ion(s) to the $\ket{2,2}$ state with three laser beams that are $\sigma^+$-polarized relative to the B field and drive the $^2$S$_{1/2}\ket{2,2} \rightarrow ^{2}$P$_{3/2}\ket{3,3}$ cycling transition as well as deplete the $\ket{1,1}$ and $\ket{2,1}$ states (Fig.\ \ref{fig:BeLevels} and supplementary material).
Both ions are then initialized to their $\ket{\uparrow}$ state by applying a composite $\pi$ pulse on the $\ket{2,2} \rightarrow \ket{\uparrow}$ transition. After gate operations and prior to qubit state detection, population in the $\ket{\downarrow}$ state is transferred or ``shelved" to either the $\ket{1,-1}$ or $\ket{1,0}$ state and the $\ket{\uparrow}$ state is transferred back to the $\ket{2,2}$ state (supplementary material). We then apply the Doppler-cooling beam and observe fluorescence. In the two-ion experiments, for a detection duration of 330 $\mu$s, we detect on average approximately 30 photons for each ion in the $\ket{\uparrow}$ state, and approximately 2 photons when both ions are in the $\ket{\downarrow}$ state. Coherent qubit manipulation is realized via two-photon stimulated-Raman transitions \cite{Monroe1995,bible} (supplementary material). The required laser beams (Fig. \ref{fig:BeamLineSetup}) are directed to the trap via optical fibers \cite{Colombe2014} and focused to beam waists of approximately 25 $\mu$m at the position of the ions. \begin{figure} \caption{Laser beam geometry for stimulated-Raman transitions. Co-propagating beams 2a and 2b are used to implement high-fidelity single qubit gates; two-qubit entangling gates use all three beams as described in the text.} \label{fig:BeamLineSetup} \end{figure} High-fidelity single-qubit gates are driven with co-propagating beams $\bm{k_{2a}}$ and $\bm{k_{2b}}$ detuned by $\Delta$ from the $^2$S$_{1/2} \leftrightarrow ^2$P$_{1/2}$ transition frequency with their frequency difference set to $\omega_0$. In this co-propagating beam geometry, single-qubit gates are negligibly affected by ion motion. We employ the randomized benchmarking technique described in \cite{Brown2011} to characterize gate performance. 
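The randomized benchmarking analysis can be sketched with the standard zeroth-order decay model, in which the average sequence fidelity is fit to $F(l) = A p^l + B$ and the single-qubit error per gate is $r = (1-p)/2$. The synthetic data, the ideal SPAM constants $A = B = \frac{1}{2}$, and the simple log-linear fit below are our own illustrative assumptions, not the paper's analysis code.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic benchmarking data generated from a known decay parameter that
# corresponds to an error per gate of 3.8e-5, the value quoted in the text.
p_true = 1.0 - 2.0 * 3.8e-5
lengths = np.array([1, 10, 100, 1000, 5000])
fidelity = 0.5 + 0.5 * p_true**lengths                  # F(l) = A p^l + B
fidelity += rng.normal(0.0, 1e-5, size=lengths.size)    # measurement noise

# Fit: linearise via log(F - B) = log(A) + l*log(p), with B fixed at 1/2.
slope, intercept = np.polyfit(lengths, np.log(fidelity - 0.5), 1)
p_fit = np.exp(slope)
error_per_gate = (1.0 - p_fit) / 2.0
```

In practice $A$ and $B$ are also fit parameters absorbing state-preparation and measurement errors, which is what makes the extracted error per gate insensitive to SPAM.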
Each computational gate consists of a Pauli gate ($\pi$ pulse) followed by a (non-Pauli) Clifford gate ($\pi/2$ pulse) around the $x$, $y$, and $z$ axes of the Bloch sphere, and identity gates. The $\pi$ pulses are performed with two sequential $\pi/2$ pulses about the same axis, each with duration $\simeq 2\ \mu$s. Rotations about the $z$ axis are accomplished by shifting the phase of the direct digital synthesizer that keeps track of the qubit's phase; the identity gate is implemented with a $1\ \mu$s wait time. We deduce an error per computational gate of $3.8(1) \times 10^{-5}$. For the detuning $\Delta \simeq - 2 \pi \times$ 730 GHz used here, the spontaneous emission error \cite{Ozeri2007} is estimated to be $2.5 \times 10^{-5}$. The remaining error is dominated by Rabi rate fluctuations of approximately $1\times 10^{-3}$ due to imperfect laser power stabilization. \begin{figure} \caption{Average fidelity for single-qubit-gate randomized benchmarking sequences, plotted as a function of sequence length. We determine the average error per computational gate to be $3.8(1)\times10^{-5}$.} \label{fig:RB} \end{figure} To couple the ions' internal (``spin") states to their motion, Raman transitions are driven by two beams along paths 1 and 2, respectively (Fig.\ \ref{fig:BeamLineSetup}). These beams intersect at $90^{\circ}$ such that the difference in their $\bm{k}$ vectors, $\bm{\Delta k}$, is aligned along the axial direction, in which case only the axial motion couples to the spins \cite{bible,Monroe1995}. The strength of the spin-motion coupling provided by these beams is proportional to the single-ion Lamb-Dicke parameter $\eta = |\bm{\Delta k}| z_{0} \simeq 0.25$, where $z_0 = \sqrt{\hbar/(2 m \omega_z)}$, with $\hbar$ the reduced Planck constant and $m$ the ion mass. However, due to the micromotion along the axial direction, the carrier and spin-motion sideband Rabi rates are reduced for this laser beam geometry.
For our parameters, the modulation index due to the micromotion Doppler shift is approximately 2.9, such that the largest Rabi rates are provided by the second micromotion sideband, which is reduced by a factor of $J_2(2.9) \simeq 0.48$ relative to Rabi rates in the absence of micromotion. Two trapped ions confined in $\mathcal{E}$ align along the axial direction with spacing 3.94 $\mu$m. The relevant axial modes are the center-of-mass (C) mode (ions oscillate in phase at $\omega_z$) and the ``stretch'' (S) mode (ions oscillate out of phase at $\sqrt{3} \omega_z$). The two-qubit entangling gate is implemented by applying an effective $\hat{\sigma}_x\hat{\sigma}_x$ type spin-spin interaction using state-dependent forces (here acting on the axial stretch mode) in a M\o lmer-S\o rensen (MS) protocol \cite{Sorensen1999,Sorensen2000,Milburn1999,Solano1999} using all three beams in Fig. \ref{fig:BeamLineSetup} (supplementary material). To maximize the spin-motion coupling and state-dependent forces with the ions undergoing micromotion, the three beam frequencies are set to $\omega_1 = \omega_L$, $\omega_{2a} = \omega_L + 2\omega_{\mathrm{RF}} - \omega_0 + \omega_{\mathrm{S}} + \delta$, and $\omega_{2b} = \omega_L + 2\omega_{\mathrm{RF}} - \omega_0 - \omega_{\mathrm{S}} - \delta$, where $\omega_L$ is the laser frequency, which is detuned by $\Delta$ from the $^2$S$_{1/2} \rightarrow\ ^2$P$_{1/2}$ transition frequency, and $\delta$ is a small detuning ($\ll \omega_z$) that determines the gate duration \cite{Sorensen2000}. Following initial Doppler cooling, the ions are sideband cooled with a series of $\ket{2,2}\ket{n} \rightarrow \ket{\uparrow}\ket{n-1}$ transitions, each followed by repumping \cite{Monroe1995}, resulting in mean mode occupation numbers $\langle n_{\mathrm{C}} \rangle \simeq 0.01$ and $\langle n_{\mathrm{S}} \rangle \simeq 0.006$ and the ions being pumped to the $\ket{2,2}$ state.
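The quoted micromotion reduction factor follows directly from the Bessel-function sideband expansion of a phase-modulated drive; a short check of the $J_2(2.9)\simeq 0.48$ value:

```python
from scipy.special import jv

# Rabi-rate scaling on the second micromotion sideband for a
# micromotion modulation index of ~2.9, as quoted in the text:
# the sideband amplitude scales as J_n(beta) for the n-th sideband.
beta = 2.9
reduction = jv(2, beta)  # second-order Bessel function, ~0.48
```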
Two-qubit measurements are made as in the one-ion case, but we collect fluorescence from both ions simultaneously. We record photon-count histograms from repeated experiments with the same parameters to extract information about the qubit states. \begin{figure} \caption{ML-Bell-state error (red circles), plotted as a function of $-2 \pi/\Delta$, where $\Delta$ is the Raman detuning, for a constant gate duration of approximately 30 $\mu$s. The simulated contributions to the Bell-state error from Raman and Rayleigh scattering (supplementary material) are shown with the blue and purple dashed lines, respectively. For large $|\Delta|$ the Raman scattering error approaches zero; however, the Rayleigh scattering error remains approximately constant at $1.7\times10^{-4}$.} \label{fig:GateDetuning} \end{figure} We use the gate to ideally prepare the Bell state $\ket{\Phi_+} = \textstyle{\frac{1}{\sqrt{2}}(\ket{\uparrow\uparrow}+\ket{\downarrow\downarrow})}$. To evaluate the gate's performance, we employ partial state tomography analyzed with a maximum likelihood (ML) algorithm to deduce the fidelity of the experimentally prepared state. Using a set of reference histograms, the maximum likelihood method estimates the experimentally created density matrix by maximizing the probability that the data histograms correspond to that density matrix. The ML algorithm is general enough that joint-count histograms (here photon counts from two ions) can be analyzed without the need for individual addressing and measurement. From the Bell-state fidelity as determined by the ML method, we can estimate the MS gate fidelity. The ML-Bell-state fidelity does not include errors due to imperfect $\ket{2,2}$ state preparation and measurement. By taking these effects into account we also determine a lower bound for the actual Bell-state fidelity (supplementary material).
By varying the laser beam power, we determine the error of the Bell state as a function of $\Delta$ while keeping a fixed gate duration of $\simeq$ 30 $\mu$s (Fig. \ref{fig:GateDetuning}), and also as a function of gate duration for a fixed detuning $\Delta \simeq -2\pi \times 730$ GHz (Fig. \ref{fig:GateDuration}). The various curves in the figures show the expected errors due to spontaneous emission, errors in the composite microwave pulses used for $\ket{2,2} \leftrightarrow \ket{1,1} = \ket{\uparrow}$ state transfer, and, in Fig. \ref{fig:GateDuration}, mode frequency fluctuations. The minimum error obtained is $8(4) \times 10^{-4}$ for $\Delta \simeq - 2\pi \times 900$ GHz and a gate duration of approximately 30 $\mu$s, which yields a ML-Bell-state fidelity of 0.9992(4). An important contribution to the ML-Bell-state error is due to the imperfect transfers from the $\ket{2,2}$ state to the qubit $\ket{\uparrow}$ state (for both qubits) before the application of the gate, and the reverse procedure that transfers $\ket{\uparrow}$ population back to the $\ket{2,2}$ state before detection. The total error of these transfer pulses, limited by magnetic field fluctuations and the quality of the microwave pulses, is investigated with separate experiments analyzed with the same ML algorithm (supplementary material), and we find $\epsilon_{\mathrm{transfer}} = 4(3)\times 10^{-4}$. This is averaged over multiple data evaluations across multiple days; the uncertainty is the standard deviation of these data. While this error does not in principle affect the gate performance, we conservatively do not remove it from our gate fidelity estimate due to its relatively large uncertainty. \begin{figure} \caption{ML-Bell-state error (red circles) as a function of gate duration $t_{\mathrm{gate}}$.} \label{fig:GateDuration} \end{figure} In the supplementary material, we describe in more detail the characterization of individual error sources through calibration measurements and numerical simulation.
From this, we deduce that the fidelity of the ML-Bell-state is a good representation of the average gate fidelity. The errors for the highest state fidelity obtained are listed in Table \ref{ErrorBudget}. It would be advantageous to evaluate the gate performance with full process tomography or randomized benchmarking to confirm our assessment. We did not perform randomized benchmarking because ion motional excitation gives additional errors. This excitation occurs during ion separation (to provide individual ion addressing) and because of anomalous heating \cite{Turchette2000} during the required long sequences of gates. These problems can eventually be solved as in \cite{Gaebler2012}, where the gate fidelity was measured by interleaved randomized benchmarking, or by process tomography \cite{Navon2014}. In both cases, the gate error was consistent with the measured two-qubit state fidelity. In the experiment here, the uncertainties of the inferred errors are deduced by parametric bootstrap resampling \cite{Efron1993} with 500 resamples. We determine a lower bound of $0.999$ on the purity of the $\ket{2,2}$ state for one ion prepared by optical pumping. With this, we put a lower bound of $0.997$ on the overall Bell-state fidelity. \begin{table} \begin{tabular}{|l|c|} \hline Error source & $\times 10^{-4}$\\ \hline Spontaneous emission (Raman) & $4.0$\\ Spontaneous emission (Rayleigh) & $1.7$\\ Motional mode frequency fluctuations & 1\\ Rabi rate fluctuations & 1\\ Laser coherence & 0.2 \\ Qubit coherence & $<$0.1 \\ Stretch-mode heating & $ 0.3$\\ Error from Lamb-Dicke approximation & 0.2\\ Off-resonant coupling & $<$0.1 \\ \hline $\ket{2,2} \leftrightarrow \ket{\uparrow}$ two-way transfer & 4\\ \hline \end{tabular} \caption{Error budget for the entangling gate at a Raman detuning of $\Delta \simeq - 2\pi \times 900$ GHz, and a gate duration of 30 $\mu$s. Off-resonant coupling includes coupling of the qubit states to other hyperfine states and their sidebands.
The last error reduces the ML-Bell-state fidelity but should minimally affect the gate fidelity.} \label{ErrorBudget} \end{table} In summary, we have demonstrated high-fidelity single- and two-qubit laser-induced gates on trapped $^9$Be$^+$ ions. The single-qubit gate fidelity exceeds some threshold estimates for fault-tolerant error correction with reasonable overhead. Sources of the $\simeq 10^{-3}$ two-qubit gate error have been identified and can likely be reduced, making the $^9$Be$^+$ ion a strong qubit candidate for fault-tolerant QIP. Gates with comparable fidelity have recently been reported by the Oxford group using $^{43}$Ca$^+$ ions \cite{Ballance2015}. This work was supported by the Office of the Director of National Intelligence (ODNI) Intelligence Advanced Research Projects Activity (IARPA), ONR and the NIST Quantum Information Program. We thank D. Allcock and S. Brewer for helpful suggestions on the manuscript. We thank D. Hume, D. Lucas, C. Ballance and T. Harty for helpful discussions. This paper is a contribution of NIST and not subject to U.S. copyright. \section{Supplementary Material} \subsection{Laser beam configuration} Four infrared lasers are used to create three separate UV sources: 313.132 nm for the $^2$S$_{1/2}$ to $^2$P$_{3/2}$ transitions, 313.196 nm for the $^2$S$_{1/2}$ to $^2$P$_{1/2}$ transitions, and a variable-wavelength source in the range of approximately 313.260 to 313.491 nm for Raman transitions. We generate visible light near 626 nm by employing sum-frequency generation (SFG) of a pair of infrared laser beams (one near 1050 nm and the other near 1550 nm) using temperature-tuned magnesium-oxide-doped periodically-poled lithium-niobate (MgO:PPLN) in a single-pass configuration. These laser setups are similar to those described in \cite{Wilson2011} and provide up to 2.5 W in each visible beam with relatively low power fluctuations ($<1\ \%$ rms).
The visible light is then frequency doubled using a Brewster-angle-cut beta barium borate (BBO) crystal in a bowtie cavity configuration \cite{Wilson2011}. A separate mode-locked pulsed laser system near 235 nm is used for photo-ionization of neutral $^9$Be atoms during loading of ions into the trap. The $^2$S$_{1/2}$ to $^2$P$_{3/2}$ beam is used for Doppler cooling and qubit state measurement. Doppler cooling is achieved with $\sigma^+$ polarized light red detuned by approximately $\Gamma/2$ (angular frequency) from the $^2$S$_{1/2}\ket{2,2}$ to $^2$P$_{3/2}\ket{3,3}$ transition at 313.132 nm (Fig. \ref{fig:BeLevels}), where $\Gamma \simeq 2\pi \times 19.6$ MHz is the decay rate of the $^2$P$_{3/2}$ state. The Doppler-cooling beam $\bm{k}$-vector direction is such that it can cool all modes of the ions' motion. During Doppler cooling and detection, to maximize efficiency in the presence of the axial micromotion, we apply a differential voltage of approximately $\pm\ 0.15$ V to the two control electrodes centered on zone $\mathcal{E}$. This shifts the ions away from the radial micromotion null point (trap axis) such that the vector sum of the radial and axial micromotion is perpendicular to the Doppler-cooling beam's wavevector. Qubit state-dependent fluorescence detection is accomplished by first reversing the initial qubit state preparation to transfer the $\ket{\uparrow}$ population back to the $\ket{2,2}$ state. This is followed by a microwave $\pi$ pulse that transfers (``shelves'') the $\ket{\downarrow}$ state to the $\ket{1,-1}$ state. To shelve any remaining $\ket{\downarrow}$ population, we then apply a microwave $\pi$ pulse from the $\ket{\downarrow}$ state to the $\ket{1,0}$ state. After these shelving pulses, the $^2$S$_{1/2}$ to $^2$P$_{3/2}$ laser beam is tuned to resonance on the $^2$S$_{1/2}\ket{2,2}$ to $^2$P$_{3/2}\ket{3,3}$ cycling transition.
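For orientation, the quoted linewidth sets the Doppler-cooling limit via the standard relation $T_D = \hbar\Gamma/(2k_B)$; a quick evaluation (the limit itself is not stated in the text, so this derived number is an illustration, not a measured value):

```python
import math

# Doppler-cooling limit for the quoted 2P3/2 linewidth,
# Gamma = 2*pi x 19.6 MHz: T_D = hbar * Gamma / (2 * k_B).
hbar = 1.054571817e-34  # J s
kB = 1.380649e-23       # J / K
Gamma = 2 * math.pi * 19.6e6
T_doppler = hbar * Gamma / (2 * kB)  # roughly 0.5 mK
```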
With these conditions, the fluorescing or ``bright'' state of this protocol corresponds to the qubit $\ket{\uparrow}$ state, and the qubit $\ket{\downarrow}$ state will be detected as ``dark''. With a detection duration of $330\ \mu$s, we record on average approximately 30 photon counts in a photo-multiplier tube for an ion in the bright state and 2 photons for 2 ions in the dark state (limited by background scattered light). The $^2$S$_{1/2}$ to $^2$P$_{3/2}$ Doppler cooling and detection laser beam will optically pump the ions to the $^2$S$_{1/2}\ket{2, 2}$ state as long as the beam has pure $\sigma^+$ polarization with respect to the B field. To mitigate the effects of polarization impurity and speed up the pumping process, two $^2$S$_{1/2}$ to $^2$P$_{1/2}$ laser beams are added for the initial optical pumping to the $^2$S$_{1/2}\ket{2, 2}$ state. All three beams are first applied, and the final stage of pumping uses only the $^2$S$_{1/2}$ to $^2$P$_{1/2}$ beams. These beams are derived from the same laser source that is split into two, with one beam frequency tuned near the $^2$S$_{1/2}\ket{2, 1}$ to $^2$P$_{1/2}\ket{2, 2}$ transition, and the other tuned near the $^2$S$_{1/2}\ket{1, 1}$ to $^2$P$_{1/2}\ket{2, 2}$ transition. To suppress electromagnetically-induced transparency effects that would lead to coherent trapping of population in the $^2$S$_{1/2}\ket{2, 1}$ and the $^2$S$_{1/2}\ket{1, 1}$ states when the beams are applied simultaneously and on resonance, one of these beams is detuned from its atomic resonance by approximately $-\Gamma/2$. These beams also serve to repump to the $^2$S$_{1/2}\ket{2, 2}$ state during Raman sideband cooling \cite{Monroe1995}. All of these UV beams are then overlapped inside a UV optical fiber \cite{Colombe2014} before being focused onto the ions' location. 
The laser beams for qubit manipulation with stimulated-Raman transitions ($\lambda \simeq$ 313 nm) are detuned by $\Delta$ from the $^2$S$_{1/2}$ to $^2$P$_{1/2}$ electronic transitions. The UV beam is first split and sent down two different paths (Fig. \ref{fig:LaserSetup}). The beam in path 1 passes through an acousto-optic modulator (AOM) where the first-order deflected beam (+ 200 MHz) is coupled into an optical fiber. The output beam from the optical fiber is then focused to a waist of $\simeq 25\ \mu$m at the location of the ions. The beam in path 2 is first sent through a double-pass AOM with a center frequency of 600 MHz. The 600 MHz AOM geometry is configured such that when it is switched off, it simply outputs the input beam. When it is switched on, it outputs an additional beam shifted by $\simeq +\ 2\times 600$ MHz that is co-aligned with the unshifted beam. In both cases, the output of the 600 MHz AOM is then sent through a double-pass AOM with a center frequency of 310 MHz followed by a single-pass 200 MHz AOM in a setup analogous to that in path 1. The tuning range between beams in paths 1 and 2 is approximately 200 MHz. The frequency and phase of the RF for each AOM are generated with computer-controlled direct digital synthesizers (DDS) that are phase stable relative to each other. \begin{figure} \caption{A schematic of the laser setup used for stimulated-Raman laser gates. The 313 nm light is generated from two IR sources by sum-frequency generation (SFG) followed by second-harmonic generation (SHG). The beam is split into two paths, sent through AOMs, coupled into fibers, and aligned onto the ions.
Path 2 contains a double-passed 600 MHz AOM that, when switched on, produces an additional beam shifted by approximately the qubit frequency, $f_0$.} \label{fig:LaserSetup} \end{figure} The optical fibers are robust against color center formation \cite{Colombe2014} and substantially suppress higher-order modes compared to the case when they are not used. (These fibers have been extensively used for over 18 months with output powers up to 100 mW and no observed degradation in the transmission ($\simeq 50\ \%$) or mode quality. Also, we do not observe significant polarization drift.) Furthermore, since the fiber outputs are located relatively close to the beam input windows of the vacuum chamber, beam intensity fluctuations at the ions' location due to air currents, vibrations, and temperature drifts are reduced compared to similar beam lines set up in free space. Beam position fluctuations at the input to the fibers translate to laser power fluctuations at the fiber outputs. The laser power at each fiber's output is stabilized by monitoring a sample of the beam on a photodiode and feeding back to the RF power that drives the 200 MHz AOM in each path. Each feedback loop is controlled by a field-programmable gate array (FPGA) based digital servo \cite{Leibrandt2015}, while another FPGA-based digital-to-analog converter (DAC) \cite{Bowler2013} is used to actively control the servo set point; this provides temporal pulse shaping of the beams to smooth the transitions between on and off. Turn-on and turn-off durations for the pulses are approximately 0.75 $\mu$s. For high-fidelity single-qubit gates, the 200 MHz AOM in path 1 is switched off and the 600 MHz AOM in path 2 is switched on. The resulting two beams in path 2 have a frequency difference that can be tuned to the qubit frequency, $f_0$.
The qubit transition in this co-propagating configuration is insensitive to the ion's motion, and we measure insignificant phase drift between these two laser beams during a single experiment. The polarization of these beams is set at $45^{\circ}$ with respect to $\bm{k_1}$ and the B-field direction (Fig. \ref{fig:BeamLineSetup}) such that they contain equal parts $\pi$ and $\sigma^{+,-}$ polarization at the ion's location. Stimulated-Raman sideband transitions that couple the spins (hyperfine states) to the motion are driven by laser beams from paths 1 and 2. Since the second axial micromotion sideband provides the largest coupling strength, we tune the difference frequency of the Raman beams to near $\omega + 2\omega_{\mathrm{RF}}$ or $\omega - 2\omega_{\mathrm{RF}}$, where $\omega$ is the transition frequency of interest. Before applying the MS gate, the ions are first Doppler cooled, followed by ground-state cooling on the axial modes \cite{Monroe1995}. Ground-state cooling is accomplished by driving a series of red-sideband pulses on the $\ket{2, 2}$ to $\ket{\uparrow}$ transition (which has carrier frequency $\omega_{\ket{2,2}\leftrightarrow \ket{1,1}} \simeq 2\pi \times 1018$ MHz), each followed by repumping with the $^2$S$_{1/2}$ to $^2$P$_{1/2}$ beams to re-initialize the ions back to the $\ket{2, 2}$ state. We first apply 10 cooling pulses on the second motional sideband of the center-of-mass (C) mode (Raman beam frequency difference set to $\omega_{\ket{2,2}\leftrightarrow \ket{1,1}} + 2\omega_{\mathrm{RF}} - 2 \omega_{\mathrm{C}}$), followed by 60 pulses each on the first motional sidebands (at $\omega_{\ket{2,2}\leftrightarrow \ket{1,1}} + 2\omega_{\mathrm{RF}} - \omega_{\mathrm{C,S}}$). These red-sideband transitions are alternately applied on the C and stretch (S) modes. With this, we achieve final mean occupation numbers $\langle n_{\mathrm{C}} \rangle \simeq 0.01$ and $\langle n_{\mathrm{S}} \rangle \simeq 0.006$, respectively \cite{Monroe1995}.
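The quoted mean occupations translate into residual excited-state probabilities through the thermal number distribution $p(n) = \bar{n}^n/(1+\bar{n})^{n+1}$; a short sketch, assuming the post-cooling motional states are well described as thermal:

```python
import numpy as np

# Thermal number distribution for the quoted post-cooling mean
# occupations; p(n) = nbar^n / (1 + nbar)^(n + 1).
def thermal_p(nbar, nmax=50):
    n = np.arange(nmax + 1)
    return nbar**n / (1.0 + nbar)**(n + 1)

p_C = thermal_p(0.01)    # center-of-mass mode, <n> ~ 0.01
p_S = thermal_p(0.006)   # stretch mode, <n> ~ 0.006

# Probability of being outside the motional ground state,
# exactly nbar / (1 + nbar) for a thermal state.
excited_C = 1.0 - p_C[0]  # about 1 %
excited_S = 1.0 - p_S[0]
```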
The ions are then transferred from the $\ket{2,2}$ to the qubit $\ket{\uparrow}$ state by applying a composite microwave $\pi$ pulse \cite{Levitt1986} consisting of the sequence of $(\theta,\phi)$ pulses $(\pi,0),\ (\pi,\pi/3),\ (\pi,\pi/6),\ (\pi,\pi/3),\ (\pi,0)$, where $\theta$ denotes the angle the state is rotated about an axis in the {\it x-y} plane of the Bloch sphere, and $\phi$ is the azimuthal angle of the rotation axis. \subsection{M\o lmer-S\o rensen Gate} Two trapped ions are aligned along the axial $z$ direction with a spacing of $\simeq 3.94$ $\mu$m. Their $z$ motion can be described by two normal modes, the center-of-mass (C) and stretch (S) modes, with frequencies $\omega_{\mathrm{C}} =\omega_z$ and $\omega_{\mathrm{S}} = \sqrt{3}\omega_z$ respectively. The motion of the $i$th ion is written $z_i = z_{i,\mathrm{C}0}(a + a^{\dag}) + z_{i,\mathrm{S}0} (b + b^{\dag})$, where $a, a^{\dag} $ and $b, b^{\dag}$ are the lowering and raising operators for the C and S modes and $z_{1,\mathrm{C}0} = z_{2,\mathrm{C}0} = z_0/\sqrt{2}, z_{1,\mathrm{S}0} = - z_{2,\mathrm{S}0} = z_0/\sqrt{2\sqrt{3}}$. The M\o lmer-S\o rensen (MS) interaction requires simultaneously driving a blue sideband with a detuning of $\delta$ and a red sideband with a detuning of $-\delta$ on the selected (stretch) mode. The sideband transitions are driven on the second-order micromotion sideband with three laser beams as described in the main text. The difference in diffraction angle of beams $\bm{k_{2a}}$ and $\bm{k_{2b}}$ is small enough that they can both be coupled into the same single-mode fiber by imaging the center of the AOM into the fiber. The near-perfect overlap of these two beams ensures that any optical path fluctuations leading to laser beam phase fluctuations will be common, and the stability of their phase difference is determined by the stable RF oscillators used to create the two tones.
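The composite $\pi$-pulse sequence given above can be checked by composing the corresponding SU(2) rotations; a minimal sketch verifying that, with ideal pulses, the five-pulse sequence still implements a complete population inversion:

```python
import numpy as np

def rot(theta, phi):
    # SU(2) rotation by angle theta about an axis at azimuthal
    # angle phi in the x-y plane of the Bloch sphere.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * np.exp(-1j * phi) * s],
                     [-1j * np.exp(1j * phi) * s, c]])

# Composite pi pulse from the text: (theta, phi) =
# (pi,0), (pi,pi/3), (pi,pi/6), (pi,pi/3), (pi,0).
seq = [(np.pi, 0), (np.pi, np.pi / 3), (np.pi, np.pi / 6),
       (np.pi, np.pi / 3), (np.pi, 0)]
U = np.eye(2, dtype=complex)
for theta, phi in seq:  # earliest pulse applied first
    U = rot(theta, phi) @ U

# For perfect pulses, the composite still transfers all population.
p_transfer = abs(U[1, 0])**2
```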
The third beam is generated in path 1; its frequency differs from the mean of the path 2 beams by $\omega_0 - 2 \omega_{\mathrm{RF}}$. The polarization of this beam is adjusted such that the power ratio of $\sigma^+$ to $\sigma^-$ components is $8:2$, with the $\sigma^+$ component used for driving the MS gate and the $\sigma^-$ component used for sideband cooling. Transforming to the interaction frames for both the spins and stretch mode of motion and dropping high frequency terms (rotating-wave approximation), the effective $\hat{\sigma}_{x} \hat{\sigma}_{x}$ M$\o$lmer-S$\o$rensen interaction in the Lamb-Dicke limit can be written as \begin{align} H = \hbar \sum_{j=1,2} \eta_{\mathrm{S}} \Omega \hat{\sigma}_j^+ \left(\hat{b} e^{-i(\delta t+\phi_{j,r})}+\hat{b}^{\dag} e^{i(\delta t-\phi_{j,b})}\right)+h.c., \label{EqMS} \end{align} where $\Omega$ is the resonant carrier transition Rabi rate, $\eta_{\mathrm{S}} = |\bm{\Delta k}| z_{1,\mathrm{S}0} \simeq 0.19$, $\hat{\sigma}_j^+$ is the spin raising operator for the $j$th ion, and $\phi_{j,b (r)}$ is the phase of the blue (red) sideband interaction on the $j$th ion. Starting in the $\ket{\uparrow\uparrow}$ state and setting $\eta_{\mathrm{S}} \Omega = \delta/2$, this interaction produces the state \begin{align} \frac{1}{\sqrt{2}}\left(\ket{\uparrow\uparrow} + e^{-i\left(\sum_{j=1,2}\frac{1}{2}(\phi_{j,b}+\phi_{j,r}) + \pi/2 \right)}\ket{\downarrow\downarrow}\right) \label{EqPhiMS} \end{align} after a duration $2\pi/\delta$. From Eq. (\ref{EqPhiMS}), we see that the phase difference between the two components of the state depends on the phases $\phi_{j,b}$ and $\phi_{j,r}$, which can fluctuate over the course of repeated experiments due to relative path length fluctuations between paths 1 and 2 caused by air currents, mechanical vibrations, and thermal drifts. 
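When the detuning and duration are chosen so that the motional phase-space trajectory closes, the MS interaction of Eq. (\ref{EqMS}) acts on the spins as an effective $\hat{\sigma}_x\hat{\sigma}_x$ coupling; a sketch (using this effective-Hamiltonian simplification rather than the full spin-motion dynamics, and taking all beam phases to be zero) reproducing the Bell state and phase of Eq. (\ref{EqPhiMS}):

```python
import numpy as np
from scipy.linalg import expm

# Effective MS evolution at the gate time: exp(-i * (pi/4) * XX),
# a simplification valid when the phase-space loop closes and all
# optical phases are set to zero.
X = np.array([[0, 1], [1, 0]], dtype=complex)
XX = np.kron(X, X)
U = expm(-1j * (np.pi / 4) * XX)

up_up = np.array([1, 0, 0, 0], dtype=complex)  # |up,up>
psi = U @ up_up

# Expected output from Eq. (2) with zero phases:
# (|up,up> - i |down,down>) / sqrt(2).
target = np.array([1, 0, 0, -1j], dtype=complex) / np.sqrt(2)
fidelity = abs(np.vdot(target, psi))**2
```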
However, if all single-qubit pulses in an individual experimental sequence are applied using laser beams propagating along the same two beam paths, then the relative phases of the two-qubit gate and single-qubit pulses will be stable as long as the beam path lengths are constant for the duration of each experiment. In this case we can choose the phase factors in Eq. (\ref{EqPhiMS}) for each experiment such that we realize the state $\ket{\Phi_+} = \textstyle{\frac{1}{\sqrt{2}}}(|\uparrow\uparrow\rangle + |\downarrow\downarrow\rangle)$. Single-qubit gates driven by non-co-propagating laser beams have the disadvantages of being sensitive to the ions' motion and of lower relative phase stability compared to the case of co-propagating laser beams. However, by surrounding the MS gate pulse with two global $\pi/2$ single-qubit pulses (using the same non-co-propagating laser beams), a phase-insensitive two-qubit gate that implements $\ket{\uparrow\uparrow} \rightarrow \ket{\uparrow\uparrow},\ \ket{\uparrow\downarrow} \rightarrow i\ket{\uparrow\downarrow},\ \ket{\downarrow\uparrow} \rightarrow i\ket{\downarrow\uparrow},\ \ket{\downarrow\downarrow} \rightarrow \ket{\downarrow\downarrow}$ can be achieved, and all other single-qubit pulses can be performed with any phase-stable source \cite{Lee2005,Tan2015}. \subsection{State detection and tomography} For the single-qubit experiments, the photon-count histograms of the bright and dark states are well separated, so we determine the ion's state by setting a threshold, typically at 12 counts. We estimate the state-detection error from this simple method to be $\sim 2 \times 10^{-3}$. Randomized benchmarking separates this error from the much smaller error per computational gate. For the two-ion experiments, joint photon-count histograms are collected by recording state-dependent fluorescence counts from both ions with a detection laser beam size much larger than the ion separation.
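An idealized Poisson model of the threshold detection, using the mean counts and threshold quoted in the text, gives a useful point of comparison; the model predicts a smaller error than the measured $\sim 2\times10^{-3}$ because it neglects off-resonant optical pumping during the detection period:

```python
from scipy.stats import poisson

# Ideal Poisson model of threshold state detection with the
# quoted means (bright ~30 counts, dark ~2 counts) and a
# threshold of 12 counts.
threshold = 12
p_bright_as_dark = poisson.cdf(threshold - 1, 30)  # P(counts < 12 | bright)
p_dark_as_bright = poisson.sf(threshold - 1, 2)    # P(counts >= 12 | dark)

# Average misclassification error for equal prior probabilities.
ideal_error = 0.5 * (p_bright_as_dark + p_dark_as_bright)
```

That this idealized error falls well below the measured value indicates the detection error is dominated by depumping during fluorescence collection, not by count-distribution overlap.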
As a result, the recorded histograms are drawn from mixtures of three possible count distributions $q_j(c)$ ($c = 0,1,2,\ldots,C$ indicates the photon counts) corresponding to the distinguishable ion subspaces spanned by (i) $\ket{\uparrow\uparrow}$, (ii) $\ket{\uparrow \downarrow}$ or $\ket{\downarrow \uparrow}$, and (iii) $\ket{\downarrow \downarrow}$ states. Because of the finite efficiency of our photon collection apparatus and optical pumping during detection, the three count distributions overlap, particularly those of subspaces (ii) and (iii). Therefore the subspace onto which the ions are projected cannot be determined exactly in a single detection. Nevertheless, we can infer the ions' density matrix statistically from repetitions of the experiment, provided that the count distributions for each projected subspace are known. These distributions can be inferred from reference experiments by fitting to a parametrized model of the distributions. A common class of such models is given by mixtures of Poissonians with different means. The uncertainty requirements of our experiments and effects such as optical pumping during photon collection imply that we cannot use such models unless they have an excessively large number of parameters, in which case overfitting becomes an issue. Our maximum likelihood (ML) analysis avoids these issues by statistically inferring states without requiring a model for the ideal count distributions. The ML analysis requires reference and data histograms, where the data histograms involve observations of an identically prepared state $\rho$ modified by analysis pulses. It infers a representative density matrix $\hat{\rho}$. Because the different observations are not ``informationally complete'', $\hat{\rho}$ is not intended to match $\rho$ precisely, but the measurements are designed so that the fidelities of interest match to within a statistical uncertainty.
In our experiment, we obtain four reference histograms $r_i(c)$ ($i=1,2,3,4$). Each reference histogram is obtained by observing known ion states prepared as follows: For $r_1(c)$, the state is prepared by optically pumping both ions to the $\ket{2,2}$ state. For $r_2(c)$, this optical pumping is followed by implementing the transfer $\ket{2,2} \rightarrow \ket{\uparrow}$ with a composite microwave pulse, followed by the transfer $\ket{\uparrow}\rightarrow \ket{\downarrow}$ and shelving into one of the states $\ket{1,-1}$ or $\ket{1,0}$ with microwave $\pi$ pulses as described in the main text. For $r_3(c)$, the optical pumping is followed by the microwave-driven spin-echo sequence consisting of $(\frac{\pi}{2},0)$, $(\pi,0)$, $(\frac{\pi}{2},\frac{\pi}{2})$ pulses on the $\ket{2,2}$ to $\ket{\uparrow}$ transition, followed by transferring the population in the $\ket{\uparrow}$ state to the $\ket{1,-1}$ or $\ket{1,0}$ state as for $r_2(c)$. The histogram $r_4(c)$ is obtained like $r_3(c)$ but with the phase of the third pulse set to $\frac{3\pi}{2}$. The change in phase does not change the state when the initial state and pulses are as designed. Data histograms $h_k(c)$ are obtained directly from the prepared state $\rho$ or by applying analysis pulses to the prepared state. The analysis pulses are global $(\frac{\pi}{2},n\frac{\pi}{4})$ pulses, for $n = 0,1,\ldots,7$. These pulses are applied using the laser beams from paths 1 and 2 (Fig. \ref{fig:BeamLineSetup}) to maintain relative phase stability with respect to the two-qubit gate. To determine $\hat{\rho}$, we maximize the logarithm of the likelihood of the observed histograms with respect to the unknown $q_j(c)$ and $\rho$. Given these unknowns, reference histogram $r_i(c)$ is sampled from the distribution $\sum_j a_{ij} q_j(c)$, where the $a_{ij}$ are ``populations'' determined from the known prepared state.
Similarly, the data histograms $h_k(c)$ are sampled from the distribution $\sum_j b_{kj} q_j(c)$, where the populations $b_{kj}$ are a linear function of $\rho$. Given these distributions, the log likelihood is given by \begin{widetext} \begin{align} \mathrm{log}\left(\mathrm{Prob}(r,h|q,a,b)\right)=\left[\sum_{i=1}^{4}\sum_{c=0}^{C}r_i(c)\mathrm{log}\left(\sum_{j=1}^3 a_{ij}q_j(c)\right)\right]+\left[\sum_{k=1}^{9}\sum_{c=0}^{C} h_k(c)\mathrm{log}\left(\sum_{j=1}^3 b_{kj} q_j(c)\right)\right]+\mathrm{const}. \end{align} \end{widetext} To maximize the log likelihood, we take advantage of the separate convexity of the optimization problem in the $q_j(c)$ and $\rho$, and alternate between optimizing with respect to the $q_j(c)$ and $\rho$. We use a generic optimization method for the first and the ``$R\rho R$'' algorithm \cite{Hradil2004} for the second, which keeps $\rho$ physical during the optimization. The quality of the model fit can be determined by a bootstrap likelihood-ratio test~\cite{Boos2003}. We found that the data's log-likelihood ratio was within two standard deviations of the mean bootstrapped log-likelihood ratio. A refinement of the ML analysis is required to reduce the number of parameters needed for the $q_j(c)$ and the complexity of the algorithms. For this, we bin the counts into seven bins of consecutive counts \cite{Lin2016}. The binning is determined by setting aside a random 10 $\%$ of each reference histogram and using this as a training set to find a binning that maximizes state information quantified by a mutual-information-based heuristic. The heuristic is designed to characterize how well we can infer which of the four training-set reference histograms a random count is sampled from. The ML analysis assumes that the reference histograms are sampled from count distributions corresponding to states with known populations. The actual populations deviate by small amounts from this assumption.
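One of the convex subproblems in this alternating optimization, inferring the subspace populations $b_j$ of a histogram when the count distributions $q_j(c)$ are held fixed, can be sketched with an expectation-maximization iteration. The seven-bin distributions and populations below are synthetic stand-ins, not the experimental $q_j(c)$:

```python
import numpy as np

# Synthetic per-subspace count distributions q_j(c) over 7 bins
# (stand-ins for the binned experimental distributions).
q = np.array([
    [0.55, 0.25, 0.10, 0.05, 0.03, 0.015, 0.005],  # (iii) both dark
    [0.05, 0.10, 0.20, 0.30, 0.20, 0.10, 0.05],    # (ii) one bright
    [0.005, 0.015, 0.03, 0.05, 0.10, 0.25, 0.55],  # (i) both bright
])
b_true = np.array([0.45, 0.10, 0.45])   # assumed subspace populations
hist = 10000 * (b_true @ q)             # noiseless expected histogram

# EM for the mixture weights; the log likelihood
# sum_c hist(c) log(sum_j b_j q_j(c)) is concave in b,
# so the iteration converges to the global maximum.
b = np.full(3, 1.0 / 3.0)
for _ in range(3000):
    resp = (b[:, None] * q) / (b @ q)[None, :]  # responsibilities
    b = (resp * hist[None, :]).sum(axis=1)
    b /= b.sum()
```

With noiseless synthetic data the iteration recovers the generating populations; the experimental analysis additionally alternates this step with the $R\rho R$ update of $\rho$.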
We considered the systematic effects of such deviations on the analysis. Two effects were considered. One is that the optical pumping that ideally prepares the $\ket{2,2}$ state may have fidelity as low as $0.999$ (see next section). The other is due to imperfections in the transfer pulses between the qubit manifold and other states. This will be dominated by the transfer pulses between the $\ket{2,2}$ and $\ket{\uparrow}$ states, which have a measured fidelity of $0.9996(3)$. Errors in the transfer between the $\ket{\downarrow}$ and the $\ket{1,-1}$ and $\ket{1,0}$ states have less effect, since to a high degree all three of these states are dark. To analyze the effects of these errors, we explicitly distinguish between the computational qubit based on the states $\ket{\uparrow}=\ket{1,1}$ and $\ket{\downarrow}=\ket{2,0}$ and the measurement qubit based on the bright state $\ket{2,2}$ and the dark states $\ket{1,-1}$ and $\ket{1,0}$. The ML analysis is designed to determine the populations (in the given basis) of the measurement qubit. Thus the references are designed to yield histograms associated with known populations in the measurement qubit. To first order, the reference states have all population in the measurement qubit. Contributions from the small populations outside the measurement qubit manifold are observationally equivalent to population in $\ket{\downarrow}$, since they are dark to a high degree. If these populations are the same in all experiments, they are equivalent to a background contribution. With this in mind, we inspect the systematic effects from imperfect optical pumping and transfer in more detail. Consider the effect of the $\epsilon\leq 1\times10^{-3}$ population (per ion) in states other than $\ket{2,2}$ after optical pumping (see next section). This population is distributed over the other states in a way that depends on details of the optical pumping process.
With one exception considered below, population in states other than $\ket{2,2}$ is nominally dark in all reference histograms. The ML algorithm infers $q_j(c)$ as if all populations were in the measurement qubit manifold. Provided population in other states remains dark in all experiments, it is treated as a background and subtracted. The effect is that the algorithm infers the renormalized density matrix on the qubit rather than the actual one with trace reduced by the population outside the qubit manifold. This condition holds for our experiments: To first order, the $\epsilon$ population (of each ion) is outside the computational qubit manifold during the Bell state preparation and outside the measurement qubit manifold. To correct for this effect and determine a lower bound on the overall Bell state fidelity, we subtracted $2\epsilon$ from the ML inferred fidelity. This is consistent with the effect on Bell state fidelity from simulations (see next section). The exception to this model is that a fraction of the $\epsilon$ non-$\ket{2,2}$ population after optical pumping is in the qubit manifold. In the case where the non-$\ket{2,2}$ population is in the $\ket{\uparrow}$ state, reference histograms $r_3(c)$ and $r_4(c)$ are affected differently compared to the situation above. This is because in this case, the population stays inside the measurement qubit manifold. With respect to the background interpretation, this is equivalent to having the $\ket{2,2}$ population in references $r_3(c)$ and $r_4(c)$ exceed $0.5$ (for each ion) by $\xi\leq \epsilon/2$. To determine the effect on the ML inferred fidelity, we performed a sensitivity analysis on simulated data by varying the ML assumed populations for these references according to the parameter $\xi$. The change in fidelity is small compared to our uncertainties.
Non-$\ket{2,2}$ population in the $\ket{\downarrow}$ state after optical pumping enters the qubit state at the beginning of Bell state preparation and in this context does not behave as a dark state independent of the analysis pulses. However, simulations of the optical pumping process show that this population is less than $10^{-4}$, which is small compared to our uncertainties. For the transfer of $\ket{2,2}$ to the $\ket{\uparrow}$ computational qubit state before the application of the gate, the errors of the transfer pulse result in population in the $\ket{2,2}$ state, outside the computational qubit manifold and unaffected by the MS interaction and the following analysis pulses. To first order, the transfer pulse after the gate will now move the population in the $\ket{2,2}$ state back to the $\ket{1,1}$ state, which will be nominally detected as dark, independent of the analysis pulses. While this is equivalent to a leakage error on the Bell state being analyzed, it is not accounted for by the ML analysis. Transfer error after the application of the gate results in extra dark population that depends on the final population in $\ket{1,1}$, which in turn depends on the analysis pulses. Thus, both effects are inconsistent with the analysis pulse model that is assumed by the ML analysis. Such inconsistencies, if significant, are expected to show up in the bootstrapped likelihood-ratio test, but did not. Nevertheless, we performed a second sensitivity analysis on simulated data, where we modified the model to include an extra dark state outside the qubit manifold and included the expected transfer pulse effects by modifying the analysis pulses with pulses coupling the qubit to the extra state. The effect on the inferred error of this modification was also small compared to the uncertainties. 
\subsection{Imperfect optical pumping and lower bound of Bell state fidelity} The lower bound of the $\ket{2,2}$ state purity is deduced by deriving an upper bound on the error $\epsilon$ of preparing the $\ket{2,2}$ state after applying optical pumping. The population is dominantly in the $\ket{2,2}$ state but we do not know precisely which of the remaining hyperfine states are populated. We write the density matrix of a single ion for this situation as \begin{equation} \rho = (1-\epsilon)\ket{2,2}\bra{2,2} + \sum_{i=1}^{7}\epsilon_i\ket{\Psi_i}\bra{\Psi_i}, \end{equation} where $\epsilon = \sum_{i=1}^{7}\epsilon_i$ and $\ket{\Psi_i}$ represents the hyperfine states excluding the $\ket{2,2}$ state. One strategy for setting an upper bound on $\epsilon$ is to choose a cut-off count $\beta$ and compare the small-count ``tail'' probabilities $t=\sum_{c<\beta}h(c)$. Let $t_b$ be the tail probability of $h_{\ket{2,2}}$. Because state preparation and detection are not perfect, we have $t_b\geq\bar{t}_b$, where $\bar{t}_b$ is the tail probability of perfectly prepared $\ket{2,2}$ states. The tail probabilities $t_i$ for $\ket{\Psi_i}$ are large, as verified by experimentally preparing each $\ket{\Psi_i}$ state and measuring its count distribution. From this we can set a lower bound on $t_i$ such that $t_i>l$. With this, we can write \begin{eqnarray} t_b&=&(1-\epsilon)\bar{t}_b + \sum_{i=1}^{7}\epsilon_i t_i\\ &\geq&(1-\epsilon)\bar{t}_b + \epsilon l\\ t_b&\geq&\epsilon\left(l-\bar{t}_b\right)+\bar{t}_b, \end{eqnarray} or \begin{eqnarray} \epsilon &\leq& \frac{t_b-\bar{t}_b}{l-\bar{t}_b}\\ &\leq& \frac{t_b}{l-\bar{t}_b}. \end{eqnarray} For our parameters of $l=0.8$ and $\bar{t}_b=0$ we estimate an upper bound on $\epsilon$ of $1\times10^{-3}$. We also numerically simulate the effect of imperfect optical pumping and find that the Bell state error scales linearly as a function of $\epsilon$.
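As a numerical cross-check of the final inequality, a short sketch (the measured tail probability $t_b$ is not quoted in the text, so the value used below is illustrative, chosen to reproduce the quoted bound):

```python
def pumping_error_bound(t_b, l, t_b_bar=0.0):
    """Upper bound epsilon <= (t_b - t_b_bar) / (l - t_b_bar) on the
    optical-pumping error, from the measured bright-state tail probability
    t_b, the lower bound l on the dark-state tails, and the ideal bright
    tail t_b_bar."""
    assert l > t_b_bar, "bound is only meaningful when l exceeds t_b_bar"
    return (t_b - t_b_bar) / (l - t_b_bar)

# With the quoted l = 0.8 and t_b_bar = 0, a measured tail of t_b = 8e-4
# (illustrative value) gives epsilon <= 1e-3.
bound = pumping_error_bound(8e-4, 0.8)
```

Together with the simulated linear scaling of the Bell state error in $\epsilon$, this bound feeds directly into the fidelity budget.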
This is consistent with the lower bound on overall Bell state fidelity of 0.997 inferred in the previous section by considering the effect on the ML analysis. \subsection{Average Gate Fidelity} To characterize the performance of the gate over all input states, we investigate the average gate fidelity, $F_{\mathrm{avg}}$ \cite{Horodecki1999,Nielsen2002}, by employing numerical simulation with known experimental imperfections. Firstly, we write \begin{eqnarray} F_{\mathrm{avg}}=\frac{6}{5}S_{+} +\frac{3}{5}S_{-} - \frac{1}{5}, \label{AverageFidelity} \end{eqnarray} with \begin{widetext} \begin{eqnarray} S_+ &=& \frac{1}{36}\sum_{U_1}\sum_{U_2}\left[\left(\bra{U_1}\otimes\bra{U_2}\right)\hat{G}_{\mathrm{ideal}}^{\dagger}\rho_{\mathrm{noisy}}(U_1,U_2)\hat{G}_{\mathrm{ideal}}\left(\ket{U_1}\otimes\ket{U_2}\right)\right],\label{EqSplus}\\ S_- &=& \frac{1}{36}\sum_{U_1}\sum_{U_2}\left[\left(\bra{\overline{U_1}}\otimes\bra{\overline{U_2}}\right)\hat{G}_{\mathrm{ideal}}^{\dagger}\rho_{\mathrm{noisy}}(U_1,U_2)\hat{G}_{\mathrm{ideal}}\left(\ket{\overline{U_1}}\otimes\ket{\overline{U_2}}\right)\right].\label{EqSMinus} \end{eqnarray} \end{widetext} where $\ket{U_i}$ is an eigenstate of one of the Pauli operators $\hat{\sigma}_x$, $\hat{\sigma}_y$ or $\hat{\sigma}_z$ for the $i$th qubit, and $\ket{\overline{U_i}}$ is the state orthogonal to $\ket{U_i}$. We fix a consistent phase for these eigenstates throughout. The operator $\hat{G}_{\mathrm{ideal}}$ is the ideal entangling operation, and $\rho_{\mathrm{noisy}}(U_1,U_2)$ represents the resultant density matrix of the imperfect entangling operation with input state $\ket{U_1}\otimes\ket{U_2}$. Equation (\ref{AverageFidelity}) can be verified by direct computation, or by noting that it is invariant under one-qubit Clifford operations and SWAP. There are three independent such invariant expressions, so it suffices to check validity on a small number of simple quantum operations.
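Equation (\ref{AverageFidelity}) can indeed be checked numerically. The sketch below (our own helper names and phase conventions, not the paper's code) absorbs $\hat{G}_{\mathrm{ideal}}$ into the channel and verifies the formula on two channels with known average fidelity: the identity ($F_{\mathrm{avg}}=1$) and the fully depolarizing two-qubit channel, for which $F_{\mathrm{avg}}=(dF_e+1)/(d+1)=0.25$ with $d=4$ and entanglement fidelity $F_e=1/16$:

```python
import numpy as np

s2 = 1 / np.sqrt(2)
# One pair of eigenstates per Pauli operator, with a fixed phase convention.
PAULI_EIGENSTATES = [
    np.array([s2, s2], dtype=complex),   # +x
    np.array([s2, -s2], dtype=complex),  # -x
    np.array([s2, 1j * s2]),             # +y
    np.array([s2, -1j * s2]),            # -y
    np.array([1, 0], dtype=complex),     # +z
    np.array([0, 1], dtype=complex),     # -z
]

def orthogonal(u):
    # |Ubar>, the qubit state orthogonal to |U> (unique up to phase).
    return np.array([-np.conj(u[1]), np.conj(u[0])])

def average_gate_fidelity(channel):
    """F_avg = (6/5) S+ + (3/5) S- - 1/5, with G_ideal absorbed into
    `channel`, i.e. channel(rho) = G_ideal^dag E(rho) G_ideal."""
    s_plus = s_minus = 0.0
    for u1 in PAULI_EIGENSTATES:
        for u2 in PAULI_EIGENSTATES:
            psi = np.kron(u1, u2)
            rho = channel(np.outer(psi, psi.conj()))
            psi_bar = np.kron(orthogonal(u1), orthogonal(u2))
            s_plus += np.real(psi.conj() @ rho @ psi)
            s_minus += np.real(psi_bar.conj() @ rho @ psi_bar)
    return (6 / 5) * (s_plus / 36) + (3 / 5) * (s_minus / 36) - 1 / 5
```

Here `average_gate_fidelity(lambda rho: rho)` returns 1, and the fully depolarizing channel `lambda rho: np.eye(4) / 4` returns 0.25, matching the known values.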
Definitions and expressions for $F_{\mathrm{avg}}$ can be found in \cite{Horodecki1999,Nielsen2002}. We use Eq. (\ref{AverageFidelity}) to compute $F_{\mathrm{avg}}$ instead of alternative expressions \cite{Horodecki1999,Pedersen2007} in order to bypass computation of an explicit process matrix. With 36 different input states, our simulations of known imperfections yield the summands in Eq. (\ref{EqSplus}) and Eq. (\ref{EqSMinus}). We found that $F_{\mathrm{avg}}$ lies within the uncertainty of the inferred Bell state fidelity measurement. \subsection{Error sources} Spontaneous emission error is caused by randomly scattered photons when driving stimulated Raman transitions. It can be separated into Raman and Rayleigh scattering. Raman scattering processes are inelastic and project an ion's internal state to one of the other hyperfine states, destroying coherence. Rayleigh scattering processes are elastic and do not necessarily cause spin decoherence \cite{Ozeri2007}; however, momentum kicks from photon recoil cause uncontrolled displacements of the motional state, which result in phase errors in the final states. Raman scattering can be reduced by increasing $|\Delta|$ at the cost of higher laser intensity to maintain the same Rabi rate. However, Rayleigh scattering error cannot be reduced by increasing the detuning and it reaches an asymptotic value. This error is proportional to the Lamb-Dicke parameter and thus could be reduced by increasing the trap frequency; it can also be reduced by using multiple loops in phase space \cite{Ozeri2007,Hayes2012}. These methods reduce the gate Rabi rate and thus increase Raman scattering error. In our experiment, eliminating the axial micromotion would allow us to increase $\Delta$ by a factor of $\xi \simeq$ 2, which would lower the Raman scattering error by a factor of $2\xi$, and the Rayleigh scattering error by a factor of $\xi$, while maintaining the same gate duration.
Spontaneous Raman scattering can result in leakage of population from the qubit manifold. The resulting states will predominantly be detected as dark and falsely associated with the qubit $\ket{\downarrow}$ state. This creates a systematic bias that overestimates the actual Bell state fidelity. Through simulations, we found that such a bias is approximately $4\times10^{-5}$ for the Bell state fidelity created at a Raman detuning of $-2\pi\times 900$ GHz and approximately $1.5\times 10^{-3}$ for $-2\pi\times 90$ GHz Raman detuning. Motional mode frequency fluctuations also cause errors. For the stretch mode, the sources of frequency fluctuations (which are slow compared to the gate durations shown in Fig. \ref{fig:GateDuration}) are (i) fluctuations in the DC potentials applied to electrodes for trapping, (ii) fluctuating electric-field gradients from uncontrolled charging of electrode surfaces \cite{Harlander2010}, and (iii) non-linear coupling to transverse ``rocking'' modes \cite{Roos2008,Nie2009}. By measuring the lineshape for exciting the motional state of a single ion with injected RF ``tickle'' potentials on the trap electrodes at frequencies near the mode frequencies, we estimate the first two sources contribute fluctuations of approximately 50 Hz. Stray charging can be caused by UV beam light scattering off the trap surfaces, so this effect may become more pronounced when higher laser power is used. For (iii), to a good approximation, the shift of the stretch mode frequency from excitation of the rocking modes is given by $\delta \omega_S = \chi(n_x + n_y +1)$, where $\chi$ is a non-linear coupling parameter and $n_x$ and $n_y$ are Fock state occupation numbers of the two transverse rocking modes \cite{Roos2008,Nie2009}. For our parameters, $\chi \simeq 45$ Hz. Our Raman laser beam geometry did not allow direct measurement of the radial-mode excitations $\langle n_x\rangle$ and $\langle n_y\rangle$.
Therefore, the final temperature is estimated from the (thermal) Doppler cooling limit, taking into account heating due to photon recoil during sideband cooling of the axial modes. From this, we estimate the stretch mode frequency fluctuations from experiment to experiment to be approximately 100 Hz r.m.s. As these fluctuations are dependent on the occupation numbers of the radial modes, the error can be suppressed by cooling the radial modes to the ground state. In Fig. \ref{fig:GateDuration}, we show three simulation curves for different total values of the r.m.s. frequency fluctuation of the motional mode that follow the trend of the data and are consistent with our known sources. The error due to these fluctuations is approximately $1 \times 10^{-4}$ for the shortest gate durations. Errors are also caused by changes in the MS Rabi rates, which cause fluctuations in the state-dependent forces. Sources are (i) fluctuations of the ions' micro-motion amplitude along the axial direction, (ii) fluctuations in the laser beam intensities at the ion locations, and (iii) fluctuations in the Debye-Waller factor associated with the center-of-mass (C) mode \cite{bible}. In our experiments, the latter gives the largest error. Through numerical simulation, we derived the expression $\epsilon \simeq 2.5\times (\frac{\delta\Omega}{\Omega})^2$ describing the MS gate error due to Rabi rate fluctuations (this agrees with the expression in \cite{Benhelm2008}). Given a thermal distribution of the C mode, the r.m.s. Rabi rate fluctuation of the stretch mode can be described by $\langle\frac{\delta\Omega}{\Omega}\rangle = \eta_{\mathrm{C}}^2\sqrt{\langle n_{\mathrm{C}} \rangle(\langle n_{\mathrm{C}}\rangle +1)}$ where $\eta_{\mathrm{C}}$ is the Lamb-Dicke parameter for that mode \cite{bible}. 
With $\langle n_{\mathrm{C}}\rangle \simeq$ 0.01 at the beginning of the MS interaction, we find $\langle\frac{\delta\Omega}{\Omega}\rangle \simeq 6 \times 10^{-3}$ and we deduce an error of approximately $1\times 10^{-4}$ to $\ket{\Phi_+}$. Because this mode will experience anomalous heating during the gate, the actual error contribution will increase with the gate duration. The heating rate for the COM mode is approximately 80 quanta per second. For our 30 $\mu$s gate duration, this implies a change of $\Delta \langle n_C \rangle \simeq 0.001$ averaged over the duration of the gate. Therefore the error caused by the modification of the Debye-Waller factor from heating can be neglected for our fastest gate times. Because the two-qubit-gate Raman transitions are driven on the second micro-motion sideband, the Rabi rates are proportional to the second-order Bessel function $J_2(|\bm{\Delta k}| z_{\mu m}) \simeq 0.48$, where $|\bm{\Delta k}| z_{\mu m} = 2.9$ is the modulation index due to the micromotion-induced Doppler shift and is proportional to the applied RF voltage $V_{\mathrm{RF}}$. For the conditions of the experiment, $J_2(|\bm{\Delta k}| z_{\mu m})$ is near a maximum such that the Rabi rate is relatively insensitive to fluctuations in $V_{\mathrm{RF}}$. Our measurements show that the transverse mode frequencies can drift by up to 10 kHz over the course of several experiments; this would imply a relative drift in $V_{\mathrm{RF}}$ of $\sim 1 \times 10^{-3}$ and a corresponding change in the Rabi rate of $3 \times 10^{-4}$, which contributes an error that is negligible compared to the other errors. Laser intensity fluctuations can be assumed to be comparable to the fluctuations measured from the single-qubit benchmarking experiments ($ \sim 1 \times 10^{-3}$), which makes this contribution to Rabi rate fluctuations negligible compared to that of the fluctuating Debye-Waller factors.
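Combining the two expressions above reproduces the quoted numbers. A minimal sketch; the Lamb-Dicke parameter value here is our illustrative assumption (not quoted in the text), chosen so that $\langle\delta\Omega/\Omega\rangle \simeq 6\times10^{-3}$ at $\langle n_{\mathrm C}\rangle \simeq 0.01$:

```python
import math

def rabi_fluctuation(eta_c, n_c):
    """r.m.s. fractional Rabi-rate fluctuation from the thermal Debye-Waller
    factor of the COM mode: eta_C^2 * sqrt(<n_C>(<n_C>+1))."""
    return eta_c**2 * math.sqrt(n_c * (n_c + 1))

def ms_gate_error(delta_omega_over_omega):
    """MS gate error from Rabi-rate fluctuations: ~2.5 (dOmega/Omega)^2."""
    return 2.5 * delta_omega_over_omega**2

eta_c = 0.24                         # illustrative Lamb-Dicke parameter (assumed)
dw = rabi_fluctuation(eta_c, 0.01)   # ~6e-3, matching the quoted value
err = ms_gate_error(dw)              # ~1e-4, matching the quoted error
```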
Laser intensity fluctuations also cause fluctuations in AC-Stark shifts, which we measure to be $\sim$ 1 kHz at a Raman detuning of $-2\pi\times 900$ GHz and which induce negligible error. Smaller sources of error are (i) laser beam phase fluctuations between beam paths 1 and 2 during each experiment, (ii) individual qubit decoherence, (iii) heating of the axial stretch mode \cite{Turchette2000}, (iv) imperfect Lamb-Dicke approximation, and (v) off-resonance coupling to spectator transitions. Each of these sources contributes a few times $10^{-5}$ error to the entangling gate. Sources of frequency and phase fluctuations include fluctuations in the laser beam phases $\phi_{j,b}$ and $\phi_{j,r}$, and fluctuations in the qubit frequency. Fluctuations due to relative length changes between paths 1 and 2 were measured by recombining the two beams after they exit the UV fibers, detecting with a fast photo-diode, and measuring the phase of the beat note using the AOM RF sources as a reference. We measured a phase drift of $\sim \pi$ after $\sim$ 1 s; this is likely due to temperature drift of the optical elements in the setup. We also observed small-amplitude phase oscillations with frequencies of a few hundred Hertz, which can be attributed to acoustic vibrations in the laboratory. With this, we estimate an error of $\sim 2\times10^{-5}$ to the gate. The measured coherence time of the qubit from Ramsey experiments is approximately 1.5 s, which implies an r.m.s. qubit transition frequency error of 1 Hz, giving negligible error compared to other sources. The heating rate of the axial stretch mode is measured to be less than 1 quantum per second and contributes an error of less than $3\times10^{-5}$ to $\ket{\Phi_+}$. The M\o lmer-S\o rensen interaction is robust against finite thermal excitation in the Lamb-Dicke limit, $\eta \ll 1$.
However, due to the small mass of $^9$Be$^+$ ions, this condition is not rigorously satisfied and the sensitivity to finite motional excitation must be considered. The error due to this is given by $\frac{\pi^2}{4}\eta^4\langle n \rangle(\langle n \rangle+1)$ \cite{Sorensen2000}, which corresponds to an error of less than $2\times10^{-5}$ for our parameters. We also use numerical simulation to study this effect and find good agreement. Even within the Lamb-Dicke limit, finite thermal excitation increases the sensitivity to motional mode frequency fluctuations \cite{Hayes2012}. For our parameters, this error is negligible. Off-resonant coupling to spectator transitions is suppressed by employing laser pulse shaping. The rise and fall durations of the gate pulse are adjusted such that the Fourier component at the frequencies of spectator transitions is sufficiently small. Spectator transitions include the carrier and COM sideband transitions as well as other atomic transitions that can be coupled by micromotion sidebands (the Zeeman splittings between atomic states are comparable to $\omega_{\mathrm{RF}}$). If a square pulse is used instead of a shaped pulse, we estimate an error of $1\times10^{-4}$ for a gate duration of 30 $\mu$s \cite{Sorensen2000}. \end{document}
\begin{document} \title[Frequently hypercyclic $C$-dist...]{Frequently hypercyclic $C$-distribution semigroups and their generalizations} \author{Marko Kosti\' c} \address{Faculty of Technical Sciences, University of Novi Sad, Trg D. Obradovi\' ca 6, 21125 Novi Sad, Serbia} \email{[email protected]} {\renewcommand{\thefootnote}{} \footnote{2010 {\it Mathematics Subject Classification.} Primary: 47A16 Secondary: 47B37, 47D06. \\ \text{ } \ \ {\it Key words and phrases.} $C$-distribution semigroups, integrated $C$-semigroups, $f$-frequent hypercyclicity, $q$-frequent hypercyclicity, Fr\' echet spaces. \\ \text{ } \ \ This research is partially supported by grant 174024 of Ministry of Science and Technological Development, Republic of Serbia.}} \begin{abstract} In this paper, we introduce the notions of $f$-frequent hypercyclicity and ${\mathcal F}$-hypercyclicity for $C$-distribution semigroups in separable Fr\'echet spaces. We particularly analyze the classes of $q$-frequently hypercyclic $C$-distribution semigroups ($q\geq 1$) and frequently hypercyclic $C$-distribution semigroups, providing a great number of illustrative examples. \end{abstract} \maketitle \section{Introduction and Preliminaries} The notion of a frequently hypercyclic linear continuous operator on a separable Fr\'echet space was introduced by F. Bayart and S. Grivaux in 2006 (\cite{bay1}). The general notion of $(m_k)$-hypercyclicity for linear continuous operators was introduced by F. Bayart and \'E. Matheron \cite{Baya1} in 2009, while some special cases of $(m_k)$-hypercyclicity, like $q$-frequent hypercyclicity ($q\in {\mathbb N}$), were analyzed by M. Gupta and A. Mundayadan in \cite{gupta}. Within the field of linear topological dynamics, the notion of ${\mathcal F}$-hypercyclicity, where ${\mathcal F}$ is a Furstenberg family, was introduced for the first time by S. Shkarin in 2009 (\cite{shkarin}); further contributions were given by J. B\`es, Q. Menet, A. Peris, Y. 
Puig \cite{biba-prim} and A. Bonilla, K.-G. Grosse-Erdmann \cite{boni-upper}. The notion of ${\mathcal F}$-hypercyclicity for linear, not necessarily continuous, operators has been recently introduced by the author in \cite{1211212018}. For more details on the subject, we refer the reader to \cite{Bay}-\cite{biba}, \cite{Grosse}, \cite{menet}-\cite{measures} and references cited therein. On the other hand, the notion of a frequently hypercyclic strongly continuous semigroup on a separable Banach space was introduced by E. M. Mangino and A. Peris in 2011 (\cite{man-peris}). Frequently hypercyclic translation semigroups on weighted function spaces were further investigated by E. M. Mangino and M. Murillo-Arcila in \cite{man-marina}, while the frequent hypercyclicity of semigroup solutions for first-order partial differential equations arising in mathematical biology was investigated by C.-H. Hung and Y.-H. Chang in \cite{hung}. Frequent hypercyclicity and various generalizations of this concept for single operators and semigroups of operators remain a very active field of research, full of open problems. Hypercyclicity of $C$-regularized semigroups, distribution semigroups and unbounded linear operators in Banach spaces was analyzed by R. deLaubenfels, H. Emamirad and K.-G. Grosse-Erdmann in 2003 (\cite{cycch}). The non-existence of an appropriate reference which treats the frequent hypercyclicity of $C$-regularized semigroups and distribution semigroups strongly influenced us to write this paper. We work in the setting of separable infinite-dimensional Fr\'echet spaces, considering general classes of $C$-distribution semigroups and fractionally integrated $C$-semigroups (\cite{knjigah}-\cite{knjigaho}); here, we would like to point out that our results seem to be new even for strongly continuous semigroups of operators in Fr\'echet spaces.
In contrast to the investigations of frequently hypercyclic strongly continuous semigroups of operators carried out so far, the notion of Pettis integrability does not play any significant role in our approach, which is primarily oriented toward giving some new applications in the qualitative analysis of solutions of abstract ill-posed differential equations of first order. The notion of a $q$-frequently hypercyclic strongly continuous semigroup, where $q\geq 1,$ has been recently introduced and systematically analyzed in our joint paper with B. Chaouchi, S. Pilipovi\' c and D. Velinov \cite{qjua}; the notion of $f$-frequent hypercyclicity, introduced here for the first time as a continuous counterpart of $(m_k)$-hypercyclicity, does not seem to have been considered elsewhere, even for strongly continuous semigroups of operators in Banach spaces. Although we analyze the general class of $C$-distribution semigroups, providing also some examples of frequently hypercyclic integrated semigroups, almost all structural results of ours are stated for the class of global $C$-regularized semigroups (for certain difficulties we have met in our exploration of frequently hypercyclic fractionally integrated $C$-semigroups, we refer the reader to Remark \ref{prcko-tres}). Without any doubt, our main theoretical result is Theorem \ref{oma}, which can be called the $f$-Frequent Hypercyclicity Criterion for $C$-Regularized Semigroups. In Theorem \ref{oma-duo}, we state the Upper Frequent Hypercyclicity Criterion for $C$-Regularized Semigroups (this result seems to be new even for strongly continuous semigroups in Banach spaces). From the point of view of possible applications, Theorem \ref{3.1.4.13}, in which we reconsider the spectral criteria established by S. El Mourchid \cite[Theorem 2.1]{samir} and E. M. Mangino, A.
Peris \cite[Corollary 2.3]{man-peris}, and Theorem \ref{3018}, in which we reconsider the famous Desch-Schappacher-Webb criterion for chaos of strongly continuous semigroups \cite[Theorem 3.1]{fund}, are most important; both theorems are consequences of Theorem \ref{oma}. In Example \ref{freja}, we revisit \cite[Subsection 3.1.4]{knjigah} and prove that all examined $C$-regularized semigroups and integrated semigroups, including corresponding single linear operators, are frequently hypercyclic (we already know that these semigroups and operators are topologically mixing or chaotic in a certain sense). We use the standard notation throughout the paper. By $E$ we denote a separable infinite-dimensional Fr\' echet space (real or complex). We assume that the topology of $E$ is induced by the fundamental system $(p_{n})_{n\in {\mathbb N}}$ of increasing seminorms. If $Y$ is also a Fr\' echet space, over the same field of scalars ${\mathbb K}$ as $E,$ then by $L(E,Y)$ we denote the space consisting of all continuous linear mappings from $E$ into $Y.$ The translation invariant metric $d: E\times E \rightarrow [0,\infty),$ defined by $$ d(x,y):=\sum \limits_{n=1}^{\infty}\frac{1}{2^{n}}\frac{p_{n}(x-y)}{1+p_{n}(x-y)},\ x,\ y\in E, $$ satisfies, among many other properties, the following ones: $d(x+u,y+v)\leq d(x,y)+d(u,v)$ and $d(cx,cy)\leq (|c|+1)d(x,y),\ c\in {\mathbb K},\ x,\ y,\ u,\ v\in E.$ Set $L(x,\varepsilon):=\{y\in E : d(x,y)<\varepsilon\}$ and $L_{n}(x,\varepsilon):=\{y\in E : p_{n}(x-y)<\varepsilon\}$ ($n\in {\mathbb N},$ $\varepsilon>0,$ $x\in E$). By $E^{\ast}$ we denote the dual space of $E.$ For a closed linear operator $T$ on $E,$ we denote by $D(T),$ $R(T),$ $N(T),$ $\rho(T)$ and $\sigma_{p}(T)$ its domain, range, kernel, resolvent set and point spectrum, respectively.
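The two stated properties of $d$ can be sanity-checked numerically on a finite-dimensional toy stand-in for $E$ (our construction, purely illustrative: the sum truncated at five terms, with the increasing seminorms $p_n(x)=\max_{i\le n}|x_i|$ on ${\mathbb R}^5$):

```python
import numpy as np

def d(x, y, n_max=5):
    """Truncated Frechet-space metric: sum_n 2^{-n} p_n(x-y)/(1+p_n(x-y)),
    with the increasing seminorms p_n(z) = max_{i<=n} |z_i| on R^5."""
    z = np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return sum(2.0**(-n) * z[:n].max() / (1 + z[:n].max())
               for n in range(1, n_max + 1))

rng = np.random.default_rng(1)
x, y, u, v = rng.normal(size=(4, 5))
c = 3.7
# The two inequalities stated in the text:
subadditive = d(x + u, y + v) <= d(x, y) + d(u, v)
quasi_homog = d(c * x, c * y) <= (abs(c) + 1) * d(x, y)
```

Both hold because $t\mapsto t/(1+t)$ is increasing and subadditive, and $f(|c|t)\leq(|c|+1)f(t)$ for $f(t)=t/(1+t)$.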
If ${\tilde E}$ is a linear subspace of $E,$ then the part of $T$ in ${\tilde E},$ $T_{|{\tilde E}}$ shortly, is defined through $T_{|{\tilde E}}:=\{(x,y) \in T : x,\ y\in {\tilde E}\}$ (we will identify an operator and its graph henceforth). Set $D_{\infty}(T):=\bigcap_{k\in {\mathbb N}}D(T^{k}).$ We will always assume henceforth that $C\in L(E)$ and $C$ is injective. Put $p_{C}(x):=p(C^{-1}x),$ $p\in \circledast,$ $x\in R(C).$ Then $p_{C}(\cdot)$ is a seminorm on $R(C)$ and the calibration $(p_{C})_{p\in \circledast}$ induces a Fr\'echet topology on $R(C);$ we denote this space by $[R(C)]_{\circledast}.$ If $T^{k}$ is closed for any $k\in {\mathbb N},$ then the space $C(D(T^{k})),$ equipped with the following family of seminorms $p_{k,n}(Cx):=p_{n}(x)+p_{n}(Tx)+\cdot \cdot \cdot +p_{n}(T^{k}x),$ $x\in D(T^{k}),$ is a Fr\'echet one ($n\in {\mathbb N}$). This space will be denoted by $[C(D(T^{k}))].$ For any $s\in {\mathbb R},$ we define $\lfloor s \rfloor :=\sup \{ l\in {\mathbb Z} : s\geq l \}$ and $\lceil s \rceil :=\inf \{ l\in {\mathbb Z} : s\leq l \}.$ Let us recall that a series $\sum_{n=1}^{\infty}x_{n}$ in $E$ is called unconditionally convergent iff for every permutation $\sigma$ of ${\mathbb N}$, the series $\sum_{n=1}^{\infty}x_{\sigma (n)}$ is convergent; it is well known that the absolute convergence of $\sum_{n=1}^{\infty}x_{n}$ (i.e., the convergence of $\sum_{n=1}^{\infty}p_{l}(x_{n})$ for all $l\in {\mathbb N}$) implies its unconditional convergence (see \cite{boni} and references cited therein for further information on the subject). The Schwartz space of rapidly decreasing functions $\mathcal{S}$ is defined by the following system of seminorms $ p_{m,n}(\psi):=\sup_{x\in\mathbb{R}}\, |x^m\psi^{(n)}(x)|,$ $\psi\in\mathcal{S},$ $m,\ n\in\mathbb{N}_0.$ We use notation $\mathcal{D}=C_0^{\infty}(\mathbb{R})$ and $\mathcal{E}=C^{\infty}(\mathbb{R})$. 
If $\emptyset \neq \Omega \subseteq {\mathbb R},$ then the symbol $\mathcal{D}_{\Omega}$ denotes the subspace of $\mathcal{D}$ consisting of those functions $\varphi \in \mathcal{D}$ for which supp$(\varphi) \subseteq \Omega;$ $\mathcal{D}_{0}\equiv \mathcal{D}_{[0,\infty)}.$ The spaces $\mathcal{D}'(E):=L(\mathcal{D},E)$, $\mathcal{E}'(E):=L(\mathcal{E},E)$ and $\mathcal{S}'(E):=L(\mathcal{S},E)$ are topologized in the usual way; the symbols $\mathcal{D}'_{\Omega}(E)$, $\mathcal{E}'_{\Omega}(E)$ and $\mathcal{S}'_{\Omega}(E)$ denote their subspaces containing $E$-valued distributions whose supports are contained in $\Omega ;$ $\mathcal{D}'_{0}(E)\equiv \mathcal{D}'_{[0,\infty)}(E)$, $\mathcal{E}'_{0}(E)\equiv \mathcal{E}'_{[0,\infty)}(E)$, $\mathcal{S}'_{0}(E)\equiv \mathcal{S}'_{[0,\infty)}(E).$ By $\delta_{t}$ we denote the Dirac distribution centered at point $t$ ($t\in {\mathbb R}$). If $\varphi$, $\psi:\mathbb{R}\to\mathbb{C}$ are measurable functions, then we define $ \varphi*_0 \psi(t):=\int^t_0\varphi(t-s)\psi(s)\,ds,\;t\;\in\mathbb{R}. $ The convolution of vector-valued distributions will be taken in the sense of \cite[Proposition 1.1]{ku112}. \subsection{$C$-distribution semigroups and fractionally integrated $C$-semigroups}\label{peru} Let $C\in L(E)$ be an injective operator, and let $\mathcal{G}\in\mathcal{D}'_{0}(L(E))$ satisfy $C\mathcal{G}=\mathcal{G}C$. Then we say that $\mathcal{G}$ is a $C$-distribution semigroup, shortly (C-DS), iff ${\mathcal G}$ satisfies the following two conditions: \begin{itemize} \item[(i)] ${\mathcal G}(\varphi\ast_0\psi)C={\mathcal G}(\varphi){\mathcal G}(\psi)$, $\varphi,\, \psi\in\mathcal D$;\\ \item[(ii)] ${\mathcal N}({\mathcal G}):=\bigcap\limits_{\varphi\in\mathcal D_0}N({\mathcal G}(\varphi))=\{0\}$. \end{itemize} If, additionally, ${\mathcal{R}}(\mathcal{G}):=\bigcup_{\varphi\in\mathcal{D}_0}R(\mathcal{G}(\varphi))$ is dense in $E$, then we say that ${\mathcal G}$ is a dense $C$-distribution semigroup.
Let $T\in\mathcal{E}_0',$ i.e., $T$ is a scalar-valued distribution with compact support contained in $[0,\infty)$. Set \[ G(T)x:=\bigl\{(x,y) \in E\times E : \mathcal{G}(T*\varphi)x=\mathcal{G}(\varphi)y\mbox{ for all }\varphi\in\mathcal{D}_0 \bigr\}. \] Then it can be easily seen that $G(T)$ is a closed linear operator. We define the (infinitesimal) generator of a (C-DS) $\mathcal{G}$ by $A:=G(-\delta').$ Suppose that $\mathcal{G}$ is a (C-DS). Then $\mathcal{G}(\varphi)\mathcal{G}(\psi)=\mathcal{G}(\psi)\mathcal{G}(\varphi)$ for all $\varphi,\,\psi\in\mathcal{D},$ $C^{-1}AC=A$ and the following holds: Let $S$, $T\in\mathcal{E}'_0$, $\varphi\in\mathcal{D}_0$, $\psi\in\mathcal{D}$ and $x\in E$. Then we have: \begin{itemize} \item[A1.] $G(S)G(T)\subseteq G(S*T)$ with $D(G(S)G(T))=D(G(S*T))\cap D(G(T))$, and $G(S)+G(T)\subseteq G(S+T)$. \item[A2.] $(\mathcal{G}(\psi)x$, $\mathcal{G}(-\psi^{\prime})x-\psi(0)Cx)\in A$. \end{itemize} We denote by $D({\mathcal G})$ the set consisting of those elements $x\in E$ for which $x\in D(G({\delta}_t)),$ $t\geq 0$ and the mapping $t\mapsto G({\delta}_t)x,$ $t\geq 0$ is continuous. By A1., we have that \begin{align*} D\bigl(G(\delta_s)G(\delta_t)\bigr)\!=\!D\bigl(G(\delta_s*\delta_t)\bigr)\cap D\bigl(G(\delta_t)\bigr)\!=\!D\bigl(G(\delta_{t+s})\bigr)\cap D\bigl(G(\delta_t)\bigr), \;t,\,s\!\geq\! 0, \end{align*} which clearly implies $G(\delta_t)(D(\mathcal{G}))\subseteq D(\mathcal{G})$, $t\geq 0$ and \begin{equation}\label{C-DS} G\bigl(\delta_s\bigr)G\bigl(\delta_t \bigr)x=G\bigl(\delta_{t+s}\bigr)x, \quad t,\,s\geq 0,\ x\in D(\mathcal{G}). \end{equation} The following definition is well-known: \begin{defn}\label{first} Let $\alpha \geq 0,$ and let $A$ be a closed linear operator. 
If there exists a strongly continuous operator family $(S_\alpha(t))_{t\geq 0}\subseteq L(E)$ such that: \begin{itemize} \item[(i)] $S_\alpha(t)A\subseteq AS_\alpha(t)$, $t\geq 0$, \item[(ii)] $S_\alpha(t)C=CS_\alpha(t)$, $t\geq 0$, \item[(iii)] for all $x\in E$ and $t\geq 0$: $\int_0^tS_\alpha(s)x\,ds\in D(A)$ and \begin{align*} A\int\limits_0^tS_\alpha(s)x\,ds=S_\alpha(t)x-g_{\alpha +1}(t)Cx, \end{align*} \end{itemize} then it is said that $A$ is a subgenerator of a (global) $\alpha$-times integrated $C$-semigroup $(S_\alpha(t))_{t\geq 0}$. Furthermore, it is said that $(S_\alpha(t))_{t\geq 0}$ is an exponentially equicontinuous, $\alpha$-times integrated $C$-semigroup with a subgenerator $A$ if, in addition, there exists $\omega \in {\mathbb R}$ such that the family $\{e^{-\omega t}S_{\alpha}(t) : t\geq 0\}\subseteq L(E)$ is equicontinuous. \end{defn} If $\alpha =0,$ then $(S_0(t))_{t\geq 0}$ is also said to be a $C$-regularized semigroup with subgenerator $A;$ in this case, we have the following simple functional equation: $S_0(t)S_0(s)=S_0(t+s)C,$ $t,\ s\geq 0.$ Moreover, if $\alpha \geq 0$ and $(S_\alpha(t))_{t\geq 0}$ is a global $\alpha$-times integrated $C$-semigroup with subgenerator $A,$ then $(S_\alpha(t))_{t\geq 0}$ is locally equicontinuous, i.e., the following holds: for every $T>0$ and $n\in {\mathbb N},$ there exist $m\in {\mathbb N}$ and $c>0$ such that $p_{n}(S_{\alpha}(t)x)\leq cp_{m}(x),$ $t\in [0,T],$ $x\in E.$ The integral generator of $(S_\alpha(t))_{t\geq 0}$ is defined by \begin{align*} \hat{A}:=\Biggl\{(x,y)\in E\times E:S_\alpha(t)x-g_{\alpha+1}(t)Cx=\int\limits^t_0S_\alpha(s)y\,ds,\;t\geq 0\Biggr\}. 
\end{align*} The integral generator of $(S_\alpha(t))_{t\geq 0}$ is a closed linear operator which is an extension of any subgenerator of $(S_\alpha(t))_{t\geq 0}.$ Arguing as in the proofs of \cite[Proposition 2.1.6, Proposition 2.1.19]{knjigah}, we may deduce that the integral generator of $(S_\alpha(t))_{t\geq 0}$ is its maximal subgenerator with respect to set inclusion. Furthermore, the following equality holds: $\hat{A}=C^{-1}AC.$ Let $A$ be a closed linear operator on $E.$ Denote by $Z_{1}(A)$ the space consisting of those elements $x\in E$ for which there exists a unique continuous mapping $u(\cdot;x) :[0,\infty) \rightarrow E$ satisfying $\int^t_0 u(s;x)\,ds\in D(A)$ and $A\int^t_0 u(s;x)\,ds=u(t;x)-x$, $t\geq 0,$ i.e., the unique mild solution of the corresponding Cauchy problem $(ACP_{1}):$ $$ (ACP_{1}) : u^{\prime}(t)=Au(t),\ t\geq 0, \ u(0)=x. $$ Suppose now that $A$ is a subgenerator of a global $\alpha$-times integrated $C$-semigroup $(S_\alpha(t))_{t\geq 0}$ for some $\alpha \geq 0.$ Then there is only one (trivial) mild solution of $(ACP_{1})$ with $x=0,$ so that $Z_{1}(A)$ is a linear subspace of $E.$ Moreover, for every $\beta>\alpha ,$ the operator $A$ is a subgenerator (the integral generator) of a global $\beta$-times integrated $C$-semigroup $(S_\beta(t)\equiv (g_{\beta-\alpha}\ast S_{\alpha}\cdot)(t))_{t\geq 0},$ where $g_{\zeta}(t):=t^{\zeta-1}/\Gamma(\zeta)$ for $t>0$ and $\Gamma (\cdot)$ denotes the Gamma function ($\zeta>0$). The space $Z_{1}(A)$ consists exactly of those elements $x\in E$ for which the mapping $t\mapsto C^{-1}S_{\lceil \alpha \rceil}(t)x,$ $t\geq 0$ is well defined and $\lceil \alpha \rceil$-times continuously differentiable on $[0,\infty);$ see e.g. \cite{knjigaho}.
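The defining conditions of Definition \ref{first} can be checked by hand in the simplest scalar setting. The following sketch (an illustration only, under the hypothetical choices $E={\mathbb R}$, $C=I$ and $A$ acting as multiplication by a real number $a$) verifies condition (iii) for the once-integrated semigroup $S_{1}(t)=(e^{at}-1)/a$, for which $g_{2}(t)=t$:

```python
import math

# Illustrative scalar sketch (assumed setting, not taken from the text):
# E = R, C = I, A = multiplication by a real number a.  The once-integrated
# semigroup is S_1(t) = (e^{at} - 1)/a, and condition (iii) of the definition
# reads  A * int_0^t S_1(s) ds = S_1(t) - g_2(t) * C x  with  g_2(t) = t, x = 1.

a = -0.7  # any nonzero real number works here

def S1(t):
    return (math.exp(a * t) - 1.0) / a

def int_S1(t):
    # closed form of int_0^t S_1(s) ds
    return (math.exp(a * t) - 1.0) / a ** 2 - t / a

for t in (0.0, 0.5, 1.0, 2.5):
    lhs = a * int_S1(t)   # A applied to the integral
    rhs = S1(t) - t       # S_1(t) - g_2(t) * C x  with  x = 1
    assert abs(lhs - rhs) < 1e-12

print("condition (iii) verified in the scalar case")
```

Conditions (i) and (ii) are trivial here, since all operators commute in one dimension; the point is only that the functional identity in (iii) determines $S_{1}$ once $A$ is fixed.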
Set \begin{align}\label{monman} {\mathcal G}(\varphi)x:=(-1)^{\lceil \alpha \rceil}\int \limits^{\infty}_{0}\varphi^{(\lceil \alpha \rceil)}(t)S_{\lceil \alpha \rceil}(t)x\, dt,\quad \varphi \in {\mathcal D}_{{\mathbb K}},\ x\in E \end{align} and $$ G\bigl(\delta_{t}\bigr)x:=\frac{d^{\lceil \alpha \rceil}}{dt^{\lceil \alpha \rceil}}C^{-1}S_{\lceil \alpha \rceil}(t)x, \quad t\geq 0,\ x\in Z_{1}(A). $$ Then ${\mathcal G}$ is a $C$-distribution semigroup generated by $C^{-1}AC$ and $Z_{1}(A)=D({\mathcal G})$ (see e.g. \cite{knjigah} and \cite{C-ultra}). Before proceeding further, it should be observed that the solution space $Z_{1}(A)$ is independent of the choice of $(S_\alpha(t))_{t\geq 0}$ in the following sense: If $C_{1}\in L(E)$ is another injective operator with $C_{1}A\subseteq AC_{1},$ $\gamma \geq 0,$ $x\in E$ and $A$ is a subgenerator (the integral generator) of a global $\gamma$-times integrated $C_{1}$-semigroup $(S^\gamma(t))_{t\geq 0},$ then the mapping $t\mapsto C^{-1}S_{\lceil \alpha \rceil}(t)x,$ $t\geq 0$ is well defined and $\lceil \alpha \rceil$-times continuously differentiable on $[0,\infty)$ iff the mapping $t\mapsto C_{1}^{-1}S^{\lceil \gamma \rceil}(t)x,$ $t\geq 0$ is well defined and $\lceil \gamma \rceil$-times continuously differentiable on $[0,\infty)$ (with the clear notation). If this is the case, then $u(t;x):=G(\delta_{t})x=\frac{d^{\lceil \gamma \rceil}}{dt^{\lceil \gamma \rceil}}C_{1}^{-1}S^{\lceil \gamma \rceil}(t)x,$ $t\geq 0$ is the unique mild solution of the corresponding Cauchy problem $(ACP_{1}).$ Furthermore, $(S_\alpha(t))_{t\geq 0}$ and $(S^\gamma(t))_{t\geq 0}$ share the same (subspace) $f$-frequently hypercyclic properties defined below. We refer the reader to \cite{knjigah}-\cite{C-ultra} for further information concerning $C$-distribution semigroups.
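In the same hypothetical scalar setting ($E={\mathbb R}$, $C=I$, $A=a$, $\alpha=1$), the regularization \eqref{monman} can be tested numerically: since $S_{1}(0)=0$ and $S_{1}'(t)=e^{at}$, integration by parts gives $(-1)^{1}\int_{0}^{\infty}\varphi'(t)S_{1}(t)\,dt=\int_{0}^{\infty}\varphi(t)e^{at}\,dt$, i.e. ${\mathcal G}(\varphi)x$ is the orbit $t\mapsto e^{at}x$ smeared by the test function. A rough check (the bump below is only $C^{1}$ with $\varphi(0)=\varphi(1)=0$, which already suffices for the integration by parts; a genuine test function would lie in ${\mathcal D}$):

```python
import math

a = -0.7                                               # assumed scalar generator
phi  = lambda t: math.sin(math.pi * t) ** 4            # bump vanishing at 0 and 1
dphi = lambda t: 4 * math.pi * math.sin(math.pi * t) ** 3 * math.cos(math.pi * t)
S1   = lambda t: (math.exp(a * t) - 1.0) / a           # once-integrated semigroup

def trapz(f, lo, hi, n=20000):
    # composite trapezoidal rule on [lo, hi]
    h = (hi - lo) / n
    return h * (0.5 * f(lo) + sum(f(lo + k * h) for k in range(1, n)) + 0.5 * f(hi))

lhs = -trapz(lambda t: dphi(t) * S1(t), 0.0, 1.0)          # (-1)^1 * int phi' S_1
rhs = trapz(lambda t: phi(t) * math.exp(a * t), 0.0, 1.0)  # smeared orbit
assert abs(lhs - rhs) < 1e-6
```

The two quadratures agree up to discretization error, which is the scalar shadow of the statement that ${\mathcal G}$ defined through \eqref{monman} recovers the semigroup data from $S_{\lceil\alpha\rceil}$.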
The notion of exponentially equicontinuous, analytic fractionally integrated $C$-semigroups will be taken in a broad sense of \cite[Definition 2.2.1(i)]{knjigaho}, while the notion of an entire $C$-regularized group will be taken in the sense of \cite[Definition 2.2.9]{knjigaho}. For more details about $C$-regularized semigroups and their applications, we refer the reader to the monograph \cite{l1} by R. deLaubenfels. \subsection{Lower and upper densities} First of all, we need to recall the following definitions from \cite{1211212018}: \begin{defn}\label{4-skins-MLO-okay} Let $(T_{n})_{n\in {\mathbb N}}$ be a sequence of linear operators acting between the spaces $X$ and $Y,$ let $T$ be a linear operator on $X$, and let $x\in X$. Suppose that ${\mathcal F}\in P(P({\mathbb N}))$ and ${\mathcal F}\neq \emptyset.$ Then we say that: \begin{itemize} \item[(i)] $x$ is an ${\mathcal F}$-hypercyclic element of the sequence $(T_{n})_{n\in {\mathbb N}}$ iff $x\in \bigcap_{n\in {\mathbb N}} D(T_{n})$ and for each open non-empty subset $V$ of $Y$ we have that $$ S(x,V):=\bigl\{ n\in {\mathbb N} : T_{n}x \in V \bigr\}\in {\mathcal F} ; $$ $(T_{n})_{n\in {\mathbb N}}$ is said to be ${\mathcal F}$-hypercyclic iff there exists an ${\mathcal F}$-hypercyclic element of $(T_{n})_{n\in {\mathbb N}}$; \item[(ii)] $T$ is ${\mathcal F}$-hypercyclic iff the sequence $(T^{n})_{n\in {\mathbb N}}$ is ${\mathcal F}$-hypercyclic; $x\in D_{\infty}(T)$ is said to be an ${\mathcal F}$-hypercyclic element of $T$ iff $x$ is an ${\mathcal F}$-hypercyclic element of the sequence $(T^{n})_{n\in {\mathbb N}}.$ \end{itemize} \end{defn} \begin{defn}\label{prckojed} Let $q\in [1,\infty),$ let $A \subseteq {\mathbb N}$, and let $(m_{n})$ be an increasing sequence in $[1,\infty).$ Then: \begin{itemize} \item[(i)] The lower $q$-density of $A,$ denoted by $\underline{d}_{q}(A),$ is defined through: $$ \underline{d}_{q}(A):=\liminf_{n\rightarrow \infty}\frac{|A \cap [1,n^{q}]|}{n}. 
$$ \item[(ii)] The upper $q$-density of $A,$ denoted by $\overline{d}_{q}(A),$ is defined through: $$ \overline{d}_{q}(A):=\limsup_{n\rightarrow \infty}\frac{|A \cap [1,n^{q}]|}{n}. $$ \item[(iii)] The lower $(m_{n})$-density of $A,$ denoted by $\underline{d}_{m_{n}}(A),$ is defined through: $$ \underline{d}_{{m_{n}}} (A):=\liminf_{n\rightarrow \infty}\frac{|A \cap [1,m_{n}]|}{n}. $$ \item[(iv)] The upper $(m_{n})$-density of $A,$ denoted by $\overline{d}_{{m_{n}}}(A),$ is defined through: $$ \overline{d}_{{m_{n}}}(A):=\limsup_{n\rightarrow \infty}\frac{|A \cap [1,m_{n}]|}{n}. $$ \end{itemize} \end{defn} Assume that $q\in [1,\infty)$ and $(m_{n})$ is an increasing sequence in $[1,\infty).$ Consider the notion introduced in Definition \ref{4-skins-MLO-okay} with: (i) ${\mathcal F}=\{A \subseteq {\mathbb N} : \underline{d}(A)>0\},$ (ii) ${\mathcal F}=\{A \subseteq {\mathbb N} : \underline{d}_{q}(A)>0\},$ (iii) ${\mathcal F}=\{A \subseteq {\mathbb N} : \underline{d}_{{m_{n}}}(A)>0\};$ then we say that $(T_{n})_{n\in {\mathbb N}}$ ($T,$ $x$) is frequently hypercyclic, $q$-frequently hypercyclic and l-$(m_{n})$-hypercyclic, respectively. Denote by $m(\cdot)$ the Lebesgue measure on $[0,\infty).$ We would like to propose the following definition: \begin{defn}\label{prckojed-prim} Let $q\in [1,\infty),$ let $A\subseteq [0,\infty)$, and let $f : [0,\infty) \rightarrow [1,\infty)$ be an increasing mapping. Then: \begin{itemize} \item[(i)] The lower $qc$-density of $A,$ denoted by $\underline{d}_{qc}(A),$ is defined through: $$ \underline{d}_{qc}(A):=\liminf_{t\rightarrow \infty}\frac{m(A \cap [0,t^{q}])}{t}. $$ \item[(ii)] The upper $qc$-density of $A,$ denoted by $\overline{d}_{qc}(A),$ is defined through: $$ \overline{d}_{qc}(A):=\limsup_{t\rightarrow \infty}\frac{m(A \cap [0,t^{q}])}{t}. $$ \item[(iii)] The lower $f$-density of $A,$ denoted by $\underline{d}_{f}(A),$ is defined through: $$ \underline{d}_{f} (A):=\liminf_{t\rightarrow \infty}\frac{m(A \cap [0,f(t)])}{t}. 
$$ \item[(iv)] The upper $f$-density of $A,$ denoted by $\overline{d}_{f}(A),$ is defined through: $$ \overline{d}_{f}(A):=\limsup_{t\rightarrow \infty}\frac{m(A \cap [0,f(t)])}{t}. $$ \end{itemize} \end{defn} It is clear that Definition \ref{prckojed-prim} provides continuous analogues of the notions introduced in Definition \ref{prckojed}, which have been analyzed in \cite[Section 2]{1211212018} in more detail. For the sake of brevity and better exposition, we will not discuss here the possibility of transferring the results established in \cite{1211212018} to these continuous lower and upper densities. \section[Generalized frequent hypercyclicity for $C$-distribution semigroups...]{Generalized frequent hypercyclicity for $C$-distribution semigroups and fractionally integrated $C$-semigroups}\label{srboljub} Let $P([0,\infty))$ denote the power set of $[0,\infty).$ We would like to propose the following general definition: \begin{defn}\label{4-skins} Let ${\mathcal G}$ be a $C$-distribution semigroup, and let $x\in D({\mathcal G})$. Suppose that ${\mathcal F}\in P(P([0,\infty)))$ and ${\mathcal F}\neq \emptyset.$ Then we say that $x$ is an ${\mathcal F}$-hypercyclic element of ${\mathcal G}$ iff for each open non-empty subset $V$ of $E$ we have $$ S(x,V):=\bigl\{ t\geq 0 : G(\delta_t)x \in V \bigr\}\in {\mathcal F} ; $$ ${\mathcal G}$ is said to be ${\mathcal F}$-hypercyclic iff there exists an ${\mathcal F}$-hypercyclic element of ${\mathcal G}.$ \end{defn} The notion introduced in the following definition is a special case of the notion introduced above, with ${\mathcal F}$ being the collection of all non-empty subsets $A$ of $[0,\infty)$ such that the lower $qc$-density of $A,$ the upper $qc$-density of $A,$ the lower $f$-density of $A$ or the upper $f$-density of $A$ is positive: \begin{defn}\label{prckojedd} Let $q\in [1,\infty),$ and let $f : [0,\infty) \rightarrow [1,\infty)$ be an increasing mapping.
Suppose that ${\mathcal G}$ is a $C$-distribution semigroup. Then we say that: \begin{itemize} \item[(i)] ${\mathcal G}$ is $q$-frequently hypercyclic iff there exists $x\in D({\mathcal G})$ such that for each open non-empty subset $V$ of $E$ we have $\underline{d}_{qc}(\{ t\geq 0 : G(\delta_t)x \in V \})>0;$ \item[(ii)] ${\mathcal G}$ is upper $q$-frequently hypercyclic iff there exists $x\in D({\mathcal G})$ such that for each open non-empty subset $V$ of $E$ we have $\overline{d}_{qc}(\{ t\geq 0 : G(\delta_t)x \in V \})>0;$ \item[(iii)] ${\mathcal G}$ is $f$-frequently hypercyclic iff there exists $x\in D({\mathcal G})$ such that for each open non-empty subset $V$ of $E$ we have $\underline{d}_{f}(\{ t\geq 0 : G(\delta_t)x \in V \})>0;$ \item[(iv)] ${\mathcal G}$ is upper $f$-frequently hypercyclic iff there exists $x\in D({\mathcal G})$ such that for each open non-empty subset $V$ of $E$ we have $\overline{d}_{f}(\{ t\geq 0 : G(\delta_t)x \in V \})>0.$ \end{itemize} \end{defn} It seems natural to reformulate the notions introduced in the previous two definitions for fractionally integrated $C$-semigroups: \begin{defn}\label{prcko-raki} Suppose that $A$ is a subgenerator of a global $\alpha$-times integrated $C$-semigroup $(S_\alpha(t))_{t\geq 0}$ for some $\alpha \geq 0.$ Let ${\mathcal F}\in P(P([0,\infty)))$ and ${\mathcal F}\neq \emptyset.$ Then we say that an element $x\in Z_{1}(A)$ is an ${\mathcal F}$-hypercyclic element of $(S_\alpha(t))_{t\geq 0}$ iff $x$ is an ${\mathcal F}$-hypercyclic element of the induced $C$-distribution semigroup ${\mathcal G}$ defined through \eqref{monman}; $(S_\alpha(t))_{t\geq 0}$ is said to be ${\mathcal F}$-hypercyclic iff ${\mathcal G}$ is ${\mathcal F}$-hypercyclic.
\end{defn} \begin{defn}\label{prcko-raki2} Suppose that $A$ is a subgenerator of a global $\alpha$-times integrated $C$-semigroup $(S_\alpha(t))_{t\geq 0}$ for some $\alpha \geq 0.$ Let $q\in [1,\infty),$ and let $f : [0,\infty) \rightarrow [1,\infty)$ be an increasing mapping. Then it is said that $(S_\alpha(t))_{t\geq 0}$ is $q$-frequently hypercyclic (upper $q$-frequently hypercyclic, $f$-frequently hypercyclic, upper $f$-frequently hypercyclic) iff the induced $C$-distribution semigroup ${\mathcal G}$, defined through \eqref{monman}, is. \end{defn} As mentioned in the introductory part, the following result can be viewed as an $f$-Frequent Hypercyclicity Criterion for $C$-Regularized Semigroups: \begin{thm}\label{oma} Suppose that $A$ is a subgenerator of a global $C$-regularized semigroup $(S_{0}(t))_{t\geq 0}$ on $E$ and $f : [0,\infty) \rightarrow [1,\infty)$ is an increasing mapping. Set $T(t)x:=C^{-1}S_{0}(t)x,$ $t\geq 0,$ $x\in Z_{1}(A)$ and $m_{k}:=f(k),$ $k\in {\mathbb N}.$ Suppose that there are a number $t_{0}>0,$ a dense subset $E_{0}$ of $E$ and mappings $S_{n} : E_{0} \rightarrow R(C)$ ($n\in {\mathbb N}$) such that the following conditions hold for all $y\in E_{0}$: \begin{itemize} \item[(i)] The series $\sum_{n=1}^{k}T(t_{0}\lfloor m_{k}\rfloor)S_{\lfloor m_{k-n}\rfloor}y$ converges unconditionally, uniformly in $k\in {\mathbb N}.$ \item[(ii)] The series $\sum_{n=1}^{\infty}T(t_{0}\lfloor m_{k}\rfloor)S_{\lfloor m_{k+n} \rfloor}y$ converges unconditionally, uniformly in $k\in {\mathbb N}.$ \item[(iii)] The series $\sum_{n=1}^{\infty}S_{\lfloor m_{n} \rfloor}y$ converges unconditionally, uniformly in $n\in {\mathbb N}.$ \item[(iv)] $\lim_{n\rightarrow \infty}T(t_{0}\lfloor m_{n} \rfloor)S_{\lfloor m_{n} \rfloor}y=y.$ \item[(v)] $R(C)$ is dense in $E.$ \end{itemize} Then $(S_{0}(t))_{t\geq 0}$ is $f$-frequently hypercyclic and the operator $T(t_{0})$ is l-$(m_{k})$-frequently hypercyclic.
\end{thm} \begin{proof} Without loss of generality, we may assume that $t_{0}=1.$ It is clear that $(m_{k})$ is an increasing sequence in $[1,\infty).$ Define the sequence of operators $(T_{n})_{n\in {\mathbb N}}\subseteq L([R(C)], E)$ by $T_{n}x:=T(n)x,$ $n\in {\mathbb N},$ $x\in R(C).$ Due to \eqref{C-DS}, we get that $T_{n}x=T(1)^{n}x$ for $x\in R(C).$ Then the prescribed assumptions (i)-(iv) in combination with \cite[Theorem 3.1]{1211212018} imply that the sequence $(T_{n})_{n\in {\mathbb N}}$ is l-$(m_{k})$-frequently hypercyclic, which means that there exists an element $x=Cy\in R(C),$ for some $y\in E,$ satisfying that for each open non-empty subset $V'$ of $E$ there exist $c_{0}>0$ and an increasing sequence $(k_{n})$ of positive integers such that the interval $[1,f(k_{n})]$ contains at least $c_{0}k_{n}$ elements of the set $\{ k\in {\mathbb N} : T_{k}Cy=S_{0}(k)y \in V'\}.$ Since $T_{n}\subseteq T(1)^{n},$ the above clearly implies that the operator $T(1)$ is l-$(m_{k})$-frequently hypercyclic with $x=Cy$ being its l-$(m_{k})$-frequently hypercyclic vector. We will prove that $Cx=C^{2}y$ is an $f$-frequently hypercyclic vector for $(S_{0}(t))_{t\geq 0}$, i.e., that for each open non-empty subset $V$ of $E$ we have $\underline{d}_{f}(\{ t\geq 0 :T(t)Cx=S_{0}(t)Cy \in V \})>0;$ see also the proof of \cite[Proposition 2.1]{man-peris}. Let such a set $V$ be given.
Then, due to our assumption (v), there exist an element $z\in E,$ a number $\varepsilon>0$ and a positive integer $n\in {\mathbb N}$ such that $L_{n}(Cz,\varepsilon) \subseteq V.$ By the local equicontinuity of $(S_{0}(t))_{t\geq 0}$ and the continuity of $C$, we get that there exist an integer $m\in {\mathbb N}$ and a constant $c>1$ such that $p_{n}(Cx)\leq c p_{m}(x),$ $x\in E$ and \begin{align} \notag p_{n}\bigl( & S_{0}(k+\delta)Cy-S_{0}(k)Cy\bigr) \\\notag & \leq p_{n}\bigl( S_{0}(k+\delta)Cy-Cz\bigr) +p_{n}\bigl( S_{0}(k)Cy-Cz\bigr) \\\notag & \leq p_{n}\bigl( S_{0}(\delta) \bigl[S_{0}(k)y-z\bigr]\bigr) +p_{n}\bigl( S_{0}(\delta)z-Cz \bigr)+p_{n}\bigl( S_{0}(k)Cy-Cz\bigr) \\\label{daci-mile} & \leq cp_{m}\bigl(S_{0}(k)y-z\bigr) +p_{n}\bigl( S_{0}(\delta)z-Cz\bigr),\quad \delta \in [0,1]. \end{align} Set $V':=L_{m}(z,\varepsilon/3c).$ Then, by the foregoing, we know that there exist $c_{0}>0$ and an increasing sequence $(k_{n})$ of positive integers such that the interval $[1,f(k_{n})]$ contains at least $c_{0}k_{n}$ elements of the set ${\mathrm A}:=\{ k\in {\mathbb N} : T_{k}Cy=S_{0}(k)y \in V'\}.$ For any $k\in {\mathrm A},$ we have $p_{m}(S_{0}(k)y-z)<\varepsilon/3c;$ furthermore, \eqref{daci-mile} yields that there exists $\delta_{0}>0$ such that $p_{n}( S_{0}(k+\delta)Cy-S_{0}(k)Cy) < 2\varepsilon/3$ for all $\delta \in [0,\delta_{0}].$ This implies \begin{align*} p_{n}\bigl( & S_{0}(k+\delta)Cy-Cz\bigr) \\ & \leq p_{n}\bigl( S_{0}(k+\delta)Cy-S_{0}(k)Cy\bigr)+p_{n}\bigl( S_{0}(k)Cy-Cz\bigr) <2\varepsilon/3 +\varepsilon/3=\varepsilon , \end{align*} for any $k\in {\mathrm A}$ and $\delta \in [0,\delta_{0}].$ By virtue of this, we conclude that $S_{0}(k+\delta)Cy\in V$ for any $k\in {\mathrm A}$ and $\delta \in [0,\delta_{0}],$ finishing the proof of the theorem in a routine manner.
\end{proof} Plugging in $f(t):=t^{q}+1,$ $t\geq 0$ ($q\geq 1$), we obtain a sufficient condition for the $q$-frequent hypercyclicity of $(S_{0}(t))_{t\geq 0}$ and $T(t_{0}).$ \begin{rem}\label{prcko-tres} Consider the situation of Theorem \ref{oma} with $A$ being a subgenerator of a global $\alpha$-times integrated $C$-semigroup $(S_{\alpha}(t))_{t\geq 0}$ on $E,$ $n \geq \lceil \alpha \rceil$ and the Fr\'echet space $[C(D(A^{n}))]$ being separable. It is well known that $C(D(A^{n}))\subseteq Z_{1}(A);$ if $x=Cy\in C(D(A^{n})),$ then for every $t\geq 0,$ \begin{align*} G\bigl( \delta_{t} \bigr)x=\frac{d^{n}}{dt^{n}}C^{-1}S_{n}(t)x=\frac{d^{n}}{dt^{n}}S_{n}(t)y = S_{n}(t)A^{n}y+\sum \limits_{i=0}^{n-1}\frac{t^{n-i-1}}{(n-i-1)!}CA^{n-1-i}y. \end{align*} Furthermore, for every $t\geq 0,$ the mapping $G(\delta_{t}) : [C(D(A^{n}))] \rightarrow E$ is linear and continuous, and the operator family $(G(\delta_{t}))_{t\geq 0}\subseteq L([C(D(A^{n}))] , E)$ is strongly continuous; see e.g. the proof of \cite[Theorem 5.4]{266}. However, it is not clear how to prove that the l-$(m_{k})$-frequent hypercyclicity of a single operator $G(\delta_{t_{0}})_{| C(D(A^{n}))},$ for some $t_{0}>0,$ implies the l-$(m_{k})$-frequent hypercyclicity of $(S_{n}(t))_{t\geq 0},$ because an analogue of the estimate \eqref{daci-mile} does not seem to be attainable for integrated $C$-semigroups. The interested reader may try to prove an analogue of \cite[Theorem 3.1.32]{knjigah} for l-$(m_{k})$-frequent hypercyclicity. \end{rem} Suppose now that ${\mathcal F}\in P(P({\mathbb N}))$ and ${\mathcal F}\neq \emptyset.$ If ${\mathcal F}$ satisfies the following property: \begin{itemize} \item[(I)] $A\in{\mathcal F}$ and $A\subseteq B$ imply $B\in{\mathcal F},$ \end{itemize} then it is said that ${\mathcal F}$ is a Furstenberg family; a proper Furstenberg family ${\mathcal F}$ is any Furstenberg family satisfying that $\emptyset \notin {\mathcal F}.$ See \cite{furstenberg} for more details.
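The discrete densities behind these families are easy to experiment with. The sketch below (illustrative only) evaluates the finite-stage quotients $|A\cap [1,n^{q}]|/n$ from Definition \ref{prckojed} for the set of perfect squares, which has lower $1$-density $0$ but lower $2$-density $1$; note also that property (I) holds trivially for the family $\{A \subseteq {\mathbb N} : \underline{d}_{q}(A)>0\}$, since enlarging $A$ can only enlarge $|A\cap [1,n^{q}]|$:

```python
import math

def density_stage(contains, n, q):
    """Finite-stage quotient |A ∩ [1, n^q]| / n, for A given by a predicate."""
    return sum(1 for k in range(1, n ** q + 1) if contains(k)) / n

def is_square(k):
    r = math.isqrt(k)
    return r * r == k

for n in (10, 100, 1000):
    d1 = density_stage(is_square, n, 1)  # tends to 0: squares are 1-sparse
    d2 = density_stage(is_square, n, 2)  # equals 1: exactly n squares in [1, n^2]
    print(n, d1, d2)
```

For $n=1000$ the two quotients are $0.031$ and $1.0$, respectively, so whether the squares belong to the family $\{A : \underline{d}_{q}(A)>0\}$ genuinely depends on $q$.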
From the proof of Theorem \ref{oma}, we may deduce the following: \begin{prop}\label{sot} Let ${\mathcal F}$ be a Furstenberg family. Suppose that $A$ is a subgenerator of a global $C$-regularized semigroup $(S_{0}(t))_{t\geq 0}$ on $E$ and $T(t)x:=C^{-1}S_{0}(t)x,$ $t\geq 0,$ $x\in Z_{1}(A).$ If $R(C)$ is dense in $E,$ $t_{0}>0$ and $x\in Z_{1}(A)$ is an ${\mathcal F}$-hypercyclic element of $T(t_{0}),$ then $x$ is an ${\mathcal F}'$-hypercyclic element of $(S_{0}(t))_{t\geq 0},$ where \begin{align}\label{profica} {\mathcal F}'=\Biggl\{ B\subseteq [0,\infty) : (\exists A\in {\mathcal F})\, (\exists \delta_{0}>0)\, \bigcup_{k\in A}[k,k+\delta_{0}] \subseteq B \Biggr\}. \end{align} \end{prop} An upper Furstenberg family is any proper Furstenberg family ${\mathcal F}$ satisfying the following two conditions: \begin{itemize} \item[(II)] There are a set $D$ and a countable set $M$ such that ${\mathcal F}=\bigcup_{\delta \in D} \bigcap_{\nu \in M}{\mathcal F}_{\delta,\nu},$ where for each $\delta \in D$ and $\nu \in M$ the following holds: If $A\in {\mathcal F}_{\delta,\nu},$ then there exists a finite subset $F\subseteq {\mathbb N}$ such that the implication $A\cap F \subseteq B \Rightarrow B\in {\mathcal F}_{\delta,\nu}$ holds true. 
\item[(III)] If $A\in {\mathcal F},$ then there exists $\delta \in D$ such that, for every $n\in {\mathbb N},$ we have $A-n\equiv \{k-n: k\in A,\ k>n\}\in {\mathcal F}_{\delta},$ where ${\mathcal F}_{\delta}\equiv \bigcap_{\nu \in M}{\mathcal F}_{\delta,\nu}.$ \end{itemize} Appealing to \cite[Theorem 22]{boni-upper} in place of \cite[Theorem 3.1]{1211212018}, and repeating almost literally the arguments given in the proof of Theorem \ref{oma}, we may deduce the following result: \begin{thm}\label{oma-duo} Suppose that ${\mathcal F}=\bigcup_{\delta \in D} \bigcap_{\nu \in M}{\mathcal F}_{\delta,\nu}$ is an upper Furstenberg family and $A$ is a subgenerator of a global $C$-regularized semigroup $(S_{0}(t))_{t\geq 0}$ on $E.$ Set $T(t)x:=C^{-1}S_{0}(t)x,$ $t\geq 0,$ $x\in Z_{1}(A).$ Suppose that there are a number $t_{0}>0,$ two dense subsets $E_{0}'$ and $E_{0}''$ of $E$ and mappings $S_{n} : E_{0}'' \rightarrow R(C)$ ($n\in {\mathbb N}$) such that for any $y\in E_{0}''$ and $\varepsilon >0$ there exist $A\in{\mathcal F}$ and $\delta \in D$ such that: \begin{itemize} \item[(i)] For every $x\in E_{0}',$ there exists some $B\in {\mathcal F}_{\delta},$ $B\subseteq A$ such that, for every $n\in B,$ one has $S_{0}(t_{0}n)x\in L(0,\varepsilon).$ \item[(ii)] The series $\sum_{n \in A}S_{n}y$ converges. \item[(iii)] For every $m\in A,$ we have $T(mt_{0})\sum_{n \in A}S_{n}y-y\in L(0,\varepsilon).$ \item[(iv)] $R(C)$ is dense in $E.$ \end{itemize} Then the operator $T(t_{0})$ is ${\mathcal F}$-hypercyclic and $(S_{0}(t))_{t\geq 0}$ is ${\mathcal F}'$-hypercyclic, where ${\mathcal F}'$ is given by \eqref{profica}.
\end{thm} \begin{rem}\label{special} The collection of all non-empty subsets $A\subseteq [0,\infty)$ for which $\overline{d}_{qc}(A)>0$ forms an upper Furstenberg family (\cite{boni-upper}, \cite{1211212018}), so that Theorem \ref{oma-duo} with $f(t)=t^{q}+1,$ $t\geq 0$ ($q\geq 1$) gives a sufficient condition for the upper $q$-frequent hypercyclicity of $(S_{0}(t))_{t\geq 0}$ and $T(t_{0}).$ It can be proved directly that, if an increasing function $f : [0,\infty) \rightarrow [1,\infty)$ satisfies $\limsup_{t\rightarrow \infty}\frac{f(t)}{t}>0,$ then the collection of all non-empty subsets $A\subseteq [0,\infty)$ such that $\overline{d}_{f}(A)>0$ forms an upper Furstenberg family as well. \end{rem} We continue by stating two intriguing consequences of Theorem \ref{oma}. The first one is motivated by the well-known results of S. El Mourchid \cite[Theorem 2.1]{samir} and E. M. Mangino, A. Peris \cite[Corollary 2.3]{man-peris}; see also \cite[Theorem 3.1.40]{knjigah}. \begin{thm}\label{3.1.4.13} Let $t_0>0,$ let ${\mathbb K}={\mathbb C},$ and let $A$ be a subgenerator of a global $C$-regularized semigroup $(S_{0}(t))_{t\geq 0}$ on $E.$ Suppose that $R(C)$ is dense in $E.$ Set $T(t)x:=C^{-1}S_{0}(t)x,$ $t\geq 0,$ $x\in Z_{1}(A).$ \emph{(i)} Assume that there exists a family $(f_{j})_{j\in \Gamma}$ of locally bounded measurable mappings $f_{j} : I_{j} \rightarrow E$ such that $I_{j}$ is an interval in ${\mathbb R}$, $Af_{j}(t) = itf_{j}(t)$ for every $t \in I_{j} ,$ $ j \in \Gamma$ and span$\{f_{j}(t) : j \in \Gamma,\ t \in I_{j}\}$ is dense in $E.$ If $f_{j} \in C^{2}(I_{j} : E)$ for every $j \in \Gamma,$ then $(S_{0}(t))_{t\geq 0}$ is frequently hypercyclic and each single operator $T(t_0)$ is frequently hypercyclic.
\emph{(ii)} Assume that there exists a family $(f_{j})_{j\in \Gamma}$ of twice continuously differentiable mappings $f_{j} : I_{j} \rightarrow E$ such that $I_{j}$ is an interval in ${\mathbb R}$ and $Af_{j}(t) = itf_{j}(t)$ for every $t \in I_{j} ,$ $ j \in \Gamma .$ Set $\tilde{E}:=\overline{span\{f_{j}(t) : j \in \Gamma,\ t \in I_{j}\}}$. Then $A_{|\tilde{E}}$ is a subgenerator of a global $C_{|\tilde{E}}$-regularized semigroup $(S_{0}(t)_{| \tilde{E}})_{t\geq 0}$ on $\tilde{E},$ $(S_{0}(t)_{|\tilde{E}})_{t\geq 0}$ is frequently hypercyclic in $\tilde{E}$ and the operator $T(t_{0})_{|\tilde{E}}$ is frequently hypercyclic in $\tilde{E}.$ \end{thm} \begin{proof} Consider first the statement (i). Arguing as in the Banach space case \cite[Corollary 2.3]{man-peris}, we get that there exists a family $(g_{j})_{j\in \Lambda}$ of functions $g_{j}\in C^{2}({\mathbb R} : E)$ with compact support such that $Ag_{j}(t) = itg_{j}(t)$ for every $t\in {\mathbb R},$ $j\in \Lambda$ and $span\{g_{j}(t) : j\in \Lambda,\ t\in {\mathbb R}\}$ is dense in $E.$ For every $\lambda \in \Lambda$ and $r\in {\mathbb R},$ set $\psi_{r,\lambda}:=\int_{-\infty}^{\infty}e^{-irs}g_{\lambda}( s)\, ds.$ Then we have \begin{align}\label{jednazba} T(t)\psi_{r,\lambda}=\psi_{r-t,\lambda},\quad t\geq 0,\ r\in {\mathbb R},\ \lambda \in \Lambda \end{align} and part (i) follows by applying Theorem \ref{oma} with the sequence $m_{k}:=k$ ($k\in {\mathbb N}$), $E_{0}:=C(span\{g_{j}(t) : j\in \Lambda,\ t\in {\mathbb R}\})$ and the operators $S_{n}: E_{0}\rightarrow R(C)$ given by $S_{n}(C\psi_{r,\lambda}):=C\psi_{t_{0}n+r,\lambda}$ ($n\in {\mathbb N},$ $r\in {\mathbb R},$ $\lambda \in \Lambda$) and after that linearly extended to $E_{0}$ in the obvious way; here, it is only worth noting that the conditions (i)-(iii) follow from \eqref{jednazba} and the fact that the series $\sum_{n=1}^{\infty}\psi_{t_{0}n+r,\lambda}$ and $\sum_{n=1}^{\infty}\psi_{-t_{0}n+r,\lambda}$ converge absolutely (and therefore,
unconditionally) since for each seminorm $p_{l}(\cdot),$ where $l\in {\mathbb N},$ there exists a finite constant $c_{l}>0$ such that $p_{l}(\psi_{t_{0}n+r,\lambda})+p_{l}(\psi_{-t_{0}n+r,\lambda}) \leq c_{l}n^{-2},$ $n\in {\mathbb N}$ ($r\in {\mathbb R},$ $\lambda \in \Lambda$). This can be seen by applying integration by parts twice, as in the proof of \cite[Lemma 9.23(b)]{Grosse}. For the proof of (ii), it is enough to observe that an elementary argument shows that $A_{|\tilde{E}}$ is a subgenerator of a global $C_{|\tilde{E}}$-regularized semigroup $(S_{0}(t)_{| \tilde{E}})_{t\geq 0}$ on $\tilde{E}.$ Then we can apply (i) to finish the proof. \end{proof} The following application of Theorem \ref{3.1.4.13} is quite illustrative ($C=I$): \begin{example}\label{apa} Consider the operator $A:=d/dt,$ acting with maximal domain in the Banach space $E:=BUC({\mathbb R})$ of all bounded uniformly continuous functions. Then $\sigma_{p}(A)=i{\mathbb R}$ and $Ae^{\lambda \cdot}=\lambda e^{\lambda \cdot},$ $\lambda \in i{\mathbb R}.$ It is well known that the space $\tilde{E}:=\overline{span\{e^{\lambda \cdot} : \lambda \in i{\mathbb R}\}}$ coincides with the space $AP({\mathbb R})$ of all almost periodic functions; see \cite{diagana} and \cite{gaston} for more details on the subject. Due to Theorem \ref{3.1.4.13}(ii), we have that the translation semigroup $(T(t))_{t\geq 0}$ is frequently hypercyclic in $AP({\mathbb R} )$ and, for every $t>0$, the operator $T(t)$ is frequently hypercyclic in $AP({\mathbb R})$; the same holds if frequent hypercyclicity is replaced with Devaney chaoticity or the topologically mixing property (\cite{knjigah}). We can similarly prove that the translation semigroup is frequently hypercyclic in the Fr\'echet space $C({\mathbb R} )$ and that, for every $t>0$, the translation operator $f\mapsto f(\cdot +t),$ $f\in C({\mathbb R}),$ is frequently hypercyclic in $C({\mathbb R})$.
\end{example} The following version of the Desch-Schappacher-Webb criterion for frequent hypercyclicity can be proved similarly; it is, in fact, a simple consequence of Theorem \ref{3.1.4.13} (see also \cite[Theorem 3.1.36]{knjigah}). \begin{thm}\label{3018} Let $t_0>0,$ let ${\mathbb K}={\mathbb C},$ and let $A$ be a subgenerator of a global $C$-regularized semigroup $(S_{0}(t))_{t\geq 0}$ on $E.$ Suppose that $R(C)$ is dense in $E.$ Set $T(t)x:=C^{-1}S_{0}(t)x,$ $t\geq 0,$ $x\in Z_{1}(A).$ \emph{(i)} Assume that there exists an open connected subset $\Omega$ of $\mathbb{C}$ which satisfies $\sigma_p(A)\supseteq\Omega$ and intersects the imaginary axis, and that $f:\Omega\to E$ is an analytic mapping satisfying $f(\lambda)\in N (A-\lambda)\setminus\{0\}$, $\lambda\in\Omega$. Assume, further, that $(x^*\circ f)(\lambda)=0$, $\lambda\in\Omega$, for some $x^*\in E^*$, implies $x^*=0$. Then $(S_{0}(t))_{t\geq 0}$ is frequently hypercyclic and each single operator $T(t_0)$ is frequently hypercyclic. \emph{(ii)} Assume that there exists an open connected subset $\Omega$ of $\mathbb{C}$ which satisfies $\sigma_p(A)\supseteq\Omega$ and intersects the imaginary axis, and that $f:\Omega\to E$ is an analytic mapping satisfying $f(\lambda)\in N(A-\lambda)\setminus\{0\}$, $\lambda\in\Omega$. Put $E_0:=span\{f(\lambda):\lambda\in\Omega\}$ and $\tilde{E}:=\overline{E_0}$.
Then $A_{|\tilde{E}}$ is a subgenerator of a global $C_{|\tilde{E}}$-regularized semigroup $(S_{0}(t)_{| \tilde{E}})_{t\geq 0}$ on $\tilde{E},$ $(S_{0}(t)_{|\tilde{E}})_{t\geq 0}$ is frequently hypercyclic in $\tilde{E}$ and the operator $T(t_{0})_{|\tilde{E}}$ is frequently hypercyclic in $\tilde{E}.$ \end{thm} Using Theorem \ref{3018} and the proof of \cite[Theorem 3.1.38]{knjigah} (see also \cite[Theorem 2.2.10]{knjigaho}), we may deduce the following result: \begin{thm}\label{kakosteva} Let $\theta\in(0,\frac{\pi}{2}),$ let ${\mathbb K}={\mathbb C},$ and let $-A$ generate an exponentially equicontinuous, analytic strongly continuous semigroup of angle $\theta$. Assume $n\in\mathbb{N}$, $a_n>0$, $a_{n-i}\in\mathbb{C}$, $1\leq i\leq n$, $D(p(A))=D(A^n)$, $p(A)=\sum_{i=0}^na_iA^i$ and $n(\frac{\pi}{2}-\theta)<\frac{\pi}{2}$. \emph{(i)} Suppose there exists an open connected subset $\Omega$ of $\mathbb{C}$, satisfying $\sigma_p(-A)\supseteq\Omega$, $p(-\Omega)\cap i\mathbb{R}\neq\emptyset$, and $f:\Omega\to E$ is an analytic mapping satisfying $f(\lambda)\in N(-A-\lambda)\setminus\{0\}$, $\lambda\!\in\!\Omega$. Let $(x^*\!\circ\!f)(\lambda)=0$, $\lambda\in\Omega$, for some $x^*\in E^*$ imply $x^*=0$. Then, for every $\alpha\in(1,\frac{\pi}{n\pi-2n\theta})$, there exists $\omega\in\mathbb{R}$ such that $p(A)$ generates an entire $e^{-(p(A)-\omega)^{\alpha}}$-regularized group $(S_{0}(t))_{t\in\mathbb{C}}$. Furthermore, $(S_{0}(t))_{t\geq 0}$ is frequently hypercyclic and, for every $t>0$, the operator $C^{-1}S_{0}(t)$ is frequently hypercyclic. \emph{(ii)} Suppose there exists an open connected subset $\Omega$ of $\mathbb{C}$, satisfying $\sigma_p(-A)\supseteq\Omega$, $p(-\Omega)\cap\,i\mathbb{R}\neq\emptyset$, and $f:\Omega\to E$ is an analytic mapping satisfying $f(\lambda)\in N(-A-\lambda)\setminus\{0\}$, $\lambda\in\Omega $. Let $E_0$ and $\tilde{E}$ be as in the formulation of Theorem~\emph{\ref{3018}(ii)}. 
Then there exists $\omega\in\mathbb{R}$ such that, for every $\alpha\in(1,\frac{\pi}{n\pi-2n\theta})$, $p(A)$ generates an entire $e^{-(p(A)-\omega)^{\alpha}}$-regularized group $(S_{0}(t))_{t\in\mathbb{C}}$ such that $(S_{0}(t)_{|\tilde{E}})_{t\geq 0}$ is frequently hypercyclic and, for every $t>0$, the operator $C^{-1}S_{0}(t)_{|\tilde{E}}$ is frequently hypercyclic. \end{thm} Theorem \ref{3.1.4.13}, Theorem \ref{3018} and Theorem \ref{kakosteva} can be applied in a great number of concrete situations. In what follows, we will continue our analyses from \cite[Example 3.1.40, Example 3.1.41, Example 3.1.44]{knjigah}: \begin{example}\label{freja} \begin{itemize} \item[(i)] (\cite{fund}) Consider the convection-diffusion type equation \[\left\{\begin{array}{l} u_t=au_{xx}+bu_x+cu:=-Au,\\[0.1cm] u(0,t)=0,\;t\geq 0,\\[0.1cm] u(x,0)=u_0(x),\;x\geq 0. \end{array}\right. \] As is well known, the operator $-A$, acting with domain $D(-A)=\{f\in W^{2,2}([0,\infty)):f(0)=0\}$, generates an analytic strongly continuous semigroup of angle $\pi/2$ in the space $E=L^2([0,\infty))$, provided $a$, $b,$ $c>0$ and $c<\frac{b^2}{2a}<1$. The same conclusion holds true if we consider the operator $-A$ with the domain $D(-A)=\{f\in W^{2,1}([0,\infty)):f(0)=0\}$ in $E=L^1([0,\infty))$. Set $$ \Omega :=\Biggl\{\lambda\in\mathbb{C}:\Bigl|\lambda-\Bigl(c-\frac{b^2}{4a}\Bigr)\Bigr|\leq \frac{b^2}{4a},\;\Im\lambda\neq 0\text{ if }\Re\lambda\leq c-\frac{b^2}{4a}\Biggr\}. $$ Let $p(x)=\sum_{i=0}^na_ix^i$ be a nonconstant polynomial such that $a_n>0$ and $p(-\Omega)\cap i\mathbb{R}\neq\emptyset$ (this condition holds provided that $a_0\in i\mathbb{R}$). An application of Theorem \ref{kakosteva}(i) shows that there exists an injective operator $C\in L(E)$ such that $p(A)$ generates an entire $C$-regularized group $(S_{0}(t))_{t\in\mathbb{C}}$ such that $(S_{0}(t))_{t\geq 0}$ is frequently hypercyclic and each single operator $T(t_{0})$ is frequently hypercyclic ($t_{0}>0$).
\item[(ii)] (\cite{ji}) Let $X$ be a symmetric space of non-compact type (of rank one) and $p>2.$ Then there exists an injective operator $C\in L(L^{p}_{\natural}(X))$ such that for each $c\in {\mathbb R}$ the operator $\Delta_{X,p}^{\natural}-c$ generates an entire $C$-regularized group $(S_{0}(t))_{t\geq 0}$ in $L^{p}_{\natural}(X).$ Furthermore, owing to \cite[Theorem 3.1]{ji} and Theorem \ref{kakosteva}(i), there exists a number $c_{p}>0$ such that, for every $c>c_{p},$ the semigroup $(S_{0}(t))_{t\geq 0}$ is frequently hypercyclic in $L^{p}_{\natural}(X)$ and each single operator $T(t_{0})$ is frequently hypercyclic in $L^{p}_{\natural}(X)$ ($t_{0}>0$). \item[(iii)] (\cite{transfer}, \cite{knjigaho}) Suppose that $\alpha>0$, $\tau\in i\mathbb{R}\setminus\{0\}$ and $E:=BUC(\mathbb{R})$. After the usual matrix conversion to a first order system, the equation $\tau u_{tt}+u_t=\alpha u_{xx}$ becomes \[ \frac{d}{dt}\vec{u}(t)=P(D)\vec{u}(t),\;t\geq 0, \text{ where }D\equiv-i\frac{d}{dx},\; P(x)\equiv\begin{bmatrix}0 & 1\\-\frac{\alpha}{\tau}x^2 &-\frac{1}{\tau}\end{bmatrix}, \] and $P(D)$ acts on $E\oplus E$ with its maximal distributional domain. The polynomial matrix $P(x)$ is not Petrovskii correct and applying \cite[Theorem 14.1]{l1} we get that there exists an injective operator $C\in L(E\oplus E)$ such that $P(D)$ generates an entire $C$-regularized group $(S_{0}(t))_{t\geq 0}$, with $R(C)$ dense. Define the numbers $\omega_{1},\ \omega_{2} \in [0,+\infty]$ and functions $\psi_{r,j}\in E\oplus E$ ($r\in\mathbb{R}$, $j=1,2$) as it has been done in \cite[Example 3.1.44]{knjigah}; $\tilde{E}:=\overline{span\{\psi_{r,j}:r\in\mathbb{R},\;j=1,2\}}$. 
Due to Theorem \ref{3.1.4.13}(ii), we have that $(S_{0}(t)_{|\tilde{E}})_{t\geq 0}$ is frequently hypercyclic in $\tilde{E}$ and, for every $t>0$, the operator $C^{-1}S_{0}(t)_{|\tilde{E}}$ is frequently hypercyclic in $\tilde{E}.$ \item[(iv)] (\cite{cycch}) Denote by $(W_{Q}(t))_{t\geq 0}$ the $e^{-(-B^{2})^{N}}$-regularized semigroup generated by the operator $Q(B),$ whose existence has been proved in \cite[Lemma 5.2]{cycch}. If the requirement stated in the formulation of \cite[Theorem 5.3]{cycch} holds, then $(W_{Q}(t))_{t\geq 0}$ and each single operator $e^{(-B^{2})^{N}}W_{Q}(t_{0})$ are frequently hypercyclic ($t_{0}>0$); this simply follows from an application of Theorem \ref{3018}(i). \item[(v)] (\cite{knjigah}) Finally, we turn our attention to integrated semigroups. Let $n\in {\mathbb N},$ $\rho(t):=\frac{1}{t^{2n}+1},\ t\in {\mathbb R},$ $Af:=f^{\prime},$ $D(A):=\{f\in C_{0,\rho}({\mathbb R}) : f^{\prime} \in C_{0,\rho}({\mathbb R})\},$ $E_{n}:=(C_{0,\rho}({\mathbb R}))^{n+1},$ $D(A_{n}):=D(A)^{n+1}$ and $A_{n}(f_{1},\cdots ,f_{n+1}):=(Af_{1}+Af_{2},Af_{2}+Af_{3},\cdots , Af_{n}+Af_{n+1},Af_{n+1}),$ $(f_{1},\cdots, f_{n+1}) \in D(A_{n}).$ Then $\pm A_{n}$ generate global polynomially bounded $n$-times integrated semigroups $(S_{n,\pm}(t))_{t\geq 0}$ and neither $A_{n}$ nor $-A_{n}$ generates a local $(n-1)$-times integrated semigroup. If we denote by $G_{\pm,n}$ the associated distribution semigroups generated by $\pm A_{n},$ then for every $\varphi_{1},\cdots, \varphi_{n+1} \in {\mathcal D},$ we have: $$ G_{\pm,n}(\delta_{t})\bigl(\varphi_{1},\cdots , \varphi_{n+1}\bigr)^{T}=\bigl(\psi_{1},\cdots , \psi_{n+1}\bigr)^{T}, $$ where $$ \psi_{i}(\cdot)=\sum \limits_{j=0}^{n+1-i}\frac{(\pm t)^{j}}{j!}\varphi_{i+j}^{(j)}(\cdot \pm t),\quad 1\leq i\leq n+1.
$$ Set $E_{0}:={\mathcal D}^{n+1}$ and $ S_{k}(\varphi_{1}, \cdots , \varphi_{n+1})^{T}:=(\phi_{1},\cdots , \phi_{n+1})^{T}, $ where $$ \phi_{i}(\cdot)=\sum \limits_{j=0}^{n+1-i}\frac{(\mp kt_{0})^{j}}{j!}\varphi_{i+j}^{(j)}(\cdot \mp kt_{0}),\quad 1\leq i\leq n+1, $$ for any $k\in {\mathbb N},$ $t_{0}>0$ and $\varphi_{1}, \cdots , \varphi_{n+1} \in {\mathcal D}.$ Then we can simply verify (see also \cite[Example 3.2.39]{knjigah}) that the conditions of Theorem \ref{oma} hold with $C=(\lambda \mp A_{n})^{-n},$ where $\rho(\pm A_{n}) \ni \lambda>0$ is sufficiently large, since the series in (i)-(iii) from the formulation of this theorem converge absolutely. Hence, the integrated semigroups $(S_{n,\pm}(t))_{t\geq 0}$ are frequently hypercyclic in $E_{n}$ and for each number $t_{0}>0$ the single operators $G_{\pm,n}(\delta_{t_{0}})$ are frequently hypercyclic in $E_{n}$. \end{itemize} \end{example} We close the paper with the observation that, for any $C$-regularized semigroup or integrated semigroup considered above, say $(S(t))_{t\geq 0},$ any finite direct sum $(S(t)\oplus S(t)\oplus \cdots \oplus S(t))_{t\geq 0}$ is again frequently hypercyclic or subspace frequently hypercyclic, with the meaning clear. The same holds for finite direct sums of the considered single operators (cf. \cite{qjua} and \cite{kimpark} for more details about this topic). \end{document}
\begin{document} \title{ Approach to Evaluating Characteristics of Multichannel Loss System with FCFD Preempted Priority Discipline } \begin{abstract} In the paper, we consider a multichannel loss preemptive priority system with a Poisson input and a general service time distribution depending on the priority of the job. Jobs of the same priority are preempted according to the First Come First Displaced (FCFD) protocol. Approximate formulas are obtained for the loss probability of a prescribed priority job and some other characteristics of the system. In particular cases, the obtained formulas are exact. \vskip 10pt \end{abstract} \keywords{ Queueing systems \and loss multichannel system \and loss probability \and last come~--- first displaced protocol (LCFD) \and first come~--- first displaced protocol (FCFD) \and limited processor sharing \and approximate formulas } \section{Introduction} Exact results for characteristics of queueing systems with priority disciplines are known for one-channel systems and for systems in which the service time is distributed exponentially with a parameter independent of the job priority~\cite{Jaiswal}--~\cite{Takagi16}. Approximate approaches to evaluating characteristics of multichannel priority waiting queueing systems with general service time distributions were proposed in~[8]--~\cite{Alencar20-11}. In ~\cite{MT-prior-92-12}, an approximate approach is proposed to evaluate characteristics of a loss multichannel preemptive priority system such that, if a job arrives at the system and all servers are processing jobs of the same priority or higher, then the arriving job is lost. If all servers are busy and at least one job with a lower priority is serviced, then the job of the lowest priority among the priorities of jobs in service is preempted and lost, namely the job that arrived later than the other jobs of this priority (Last Come, First Displaced protocol, LCFD).
This paper proposes an approximate approach to evaluate the loss probability and some other characteristics of a loss multichannel preemptive priority system such that, if a job arrives at the system and all servers are processing jobs of a higher priority, then the arriving job is lost. If all servers are busy and at least one job with the same or a lower priority is serviced, then a job of the lowest priority among the priorities of jobs in service is lost, namely the job that arrived earlier than the other jobs of this priority (First Come, First Displaced protocol, FCFD). The FCFD disciplines are useful when the importance of a call decreases with elapsed time~\cite{Katchner L.-13}. In the case of one channel and in the case of an exponential service time distribution, the proposed formulas are exact. The formulas are also exact if, with prescribed probabilities, the service time equals~0 or is distributed exponentially with a parameter independent of the job priority. Section~2 describes the considered system. In Section~3, an approach is proposed to evaluate the probability that a job of a prescribed priority is lost at the arrival moment. Section~4 proposes an approach to evaluate the probability that an accepted job of a prescribed priority is lost due to preemption. An approximate formula for the loss probability is proposed in Section~5. In Section~6, a numeric example is presented.
\section{ System Description } \label{section:SD} We use the following notation: $v_i$ is the average sojourn time for a job of the priority $i;$ $w_i$ is the average waiting time including the service interruptions for the priority $i$ job; $p_i$ is the probability that the service of a priority $i$ class job does not start immediately; $u_i$ is the average time before the start of the priority $i$ job service provided this time is not equal to 0; $h_i$ is the average number of the priority $i$ job preemptions, $i=1,\dots,N;$ $g_i$ is the average duration of a service interruption interval for the priority $i$ job, $i=2,\dots,N;$ $\Lambda_{i}$ is the total arrival rate of priority-classes no lower than $i:$ $\Lambda_i=\lambda_1+\dots+\lambda_i;$ $R_i$ is the load due to priority-classes no lower than $i:$ $R_i=(\lambda_1+\dots+\lambda_i)/m,$ $i=1,\dots,N.$ Denote by $c_i$ the probability of non-zero waiting for the $M/G/m$ system computed by the well-known Erlang formula for a waiting system with the arrival load $R_i:$ $$c_i=\frac{(mR_i)^m}{m!(1-R_i)\sum\limits_{k=0}^{m-1} \frac{(mR_i)^k}{k!}+(mR_i)^m}.$$ The following equalities are true: $$w_1=p_1u_1,\eqno(1)$$ $$w_i=p_iu_i+h_ig_i,\ i=2,\dots,N,\eqno(2)$$ $$v_i=w_i+b_i,\ i=1,\dots,N,\eqno(3)$$ $$h_i=\frac{\Lambda_i(p_i-p_{i-1})}{\lambda_i},\ i=1,\dots,N.\eqno(4)$$ The proof of (4) is similar to the proof of an analogous statement for a preemptive priority system in which the service distribution is exponential with an average value independent of the priority class. The proof of (4) is the following.
The probability that all servers are busy with jobs of priority-classes not lower than $i$ and there is at least one job of priority-class $i$ equals the difference of the probability that all servers are busy with jobs of priorities not lower than $i$ and the probability that all servers are busy with jobs of priorities not lower than $i-1,$ and therefore this probability is $p_i-p_{i-1}.$ Hence the average number of preemptions of the priority $i$ per time unit is equal to $\Lambda_{i}(p_i-p_{i-1}).$ From this, taking into account that the average number of arriving priority-class $i$ jobs per time unit is equal to $\lambda_i,$ one gets (4). \section{ Evaluation of probability that job is lost at its arrival moment } \label{section:EP} \hskip 18pt Suppose $\Lambda_i$ is the total arrival rate of the priority $i$ and higher, $\Lambda_i=\sum\limits_{j=1}^i \lambda_j;$ $R_i$ is the arriving load due to priority $i$ and higher, $R_i=\sum\limits_{j=1}^i \lambda_jb_j;$ $\beta_i(s)$ is the Laplace--Stieltjes transform of the distribution $B_i(x),$ $\beta_i(s)= \int\limits_0^{\infty}e^{-sx}dB_i(x);$ $q_i$ is the probability that a job is lost at its arrival moment; $r_i$ is the probability that a job of the $i$th priority is lost provided that this job was accepted for service; $\gamma_i$ is the loss probability for a job of the $i$th priority, $i=1,\dots,N.$ Let the considered queueing system be called the system $S.$ Let us describe auxiliary $m$-channel queueing systems $S_i,$ $i=1,\dots,N.$ There are $N$ Poisson arrival processes of different priorities, and the rate of the $j$th priority process is equal to $\lambda_j,$ $j=1,\dots,N.$ If there are less than $m$ jobs in the system, then they are serviced as usual.
If there are $m-1$ jobs in the system, and a new job arrives, then the arriving job starts to be serviced at a rate increased by $m$ times, and the service of the other jobs stops until their number becomes less than $m.$ If there are $m$ jobs in the system and a new job arrives, and the priority of the serviced job is higher than the priority of the arriving job, then the arriving job is lost. If the priority of the arriving job is not lower than the priority of the serviced job, then the arriving job starts to be serviced at a rate increased by $m$ times, and the job that was in service is lost. Let us introduce the queueing system $S_i',$ $i=1,\dots,N.$ This system is a loss one-channel system with a Poisson input and the preemptive priority discipline. The rate of the arrival process for the $j$th priority job is equal to $\lambda_j,$ $j=1,\dots,i.$ The service time distribution for a job of the $j$th priority is $B_j(mx),$ i.e., the service rate increases by $m$ times. Suppose $p_{ki}$ is the stationary probability that there are $k$ jobs in the system $S_i,$ $k=0,1,\dots,m;$ $p_{ki}'$ is the stationary probability that there are $k$ jobs in the system $S_i',$ $k=0,1;$ $g_i$ is the average duration of the busy period of the system $S_i'.$ We have $$p_{0i}'=\frac{1/\Lambda_i}{g_i+\frac{1}{\Lambda_i}},\ p_{1i}'=\frac{g_i}{g_i+\frac{1}{\Lambda_i}},\ i=1,\dots,N.$$ Let us obtain the value $g_i.$ Denote by $d_j$ the average duration of a busy period of the system $S_i'$ provided that this period starts with servicing a job of the $j$th priority, $j=1,\dots,i.$ The probability that a busy period of the system $S_i'$ starts with servicing a job of the $j$th priority is $\lambda_j/\Lambda_i.$ Using the total probability formula, we get $$g_i=\frac{1}{\Lambda_i}\sum\limits_{j=1}^i \lambda_jd_j.\eqno(1)$$ The probability that, during the service time of a job of the $j$th priority, no job of a priority not lower than $j$ arrives is $\int\limits_0^{\infty}e^{-\Lambda_j
x}dB_j(mx),$ or $\beta_j\left(\frac{\Lambda_j}{m}\right).$ If the $j$th priority job is preempted, then, with probability $\lambda_s/\Lambda_j,$ the preemption is due to the arrival of the $s$th priority job, and the average remaining time of the busy period equals $g_s,$ $s=1,\dots,j.$ The average service time of the $j$th priority job including the service interruption time equals $\int\limits_0^{\infty}(1-B_j(mx))e^{-\Lambda_jx}dx,$ or $\left(1-\beta_j\left(\frac{\Lambda_j}{m}\right)\right)/\Lambda_j.$ Hence, $$d_j=\left(1-\beta_j\left(\frac{\Lambda_j}{m}\right)\right)\left(\frac{1}{\Lambda_j}+g_j\right),\ j=1,\dots,N.\eqno(2) $$ Using (1), (2), we get $$g_i=\frac{1}{1-\frac{\lambda_i}{\Lambda_i} \left(1-\beta_i\left(\frac{\Lambda_i}{m}\right)\right)}\left(\frac{1}{\Lambda_i} \sum\limits_{j=1}^{i-1}\lambda_jd_j+ \frac{\lambda_i}{\Lambda_i^2}\left(1- \beta_i\left(\frac{\Lambda_i}{m}\right)\right)\right),\ i=1,\dots,N.\eqno(3)$$ Using a recurrent procedure based on formulas (2), (3), we can compute the values $g_i.$ If the time intervals during which there are $m$ jobs in the system $S_i$ are excluded, then this system is equivalent to the usual queueing system with the arrival load equal to $R_i.$ Therefore, $p_{ki}=\frac{R_i^k}{k!}p_{0i},$ $k=0,1,\dots,m-1.$ Let $\nu_i(t),$ $\nu_i'(t)$ be the number of jobs at time $t$ in the systems $S_i$ and $S_i'$ respectively. If the intervals during which there are less than $m-1$ jobs in the system $S_i$ are excluded, then $\nu_i(t)-m+1$ and $\nu_i'(t)$ are equivalent stochastic processes.
Therefore $\frac{p_{mi}}{p_{m-1,i}}=\frac{p_{1i}'}{p_{0i}'},$ and hence, $p_{mi}=\frac{R_i^{m-1}\Lambda_i g_ip_{0i}} {(m-1)!}.$ From the normalizing condition $\sum\limits_{k=0}^m p_{ki}=1,$ we get $$p_{ki}=\frac{R_i^k}{k!\left(\sum\limits_{l=0}^{m-1}\frac{R_i^l}{l!}+\frac{R_i^{m-1}\Lambda_ig_i}{(m-1)!}\right)},\ k=0,1,\dots,m-1,$$ $$p_{mi}=\frac{R_i^{m-1}\Lambda_ig_i} {(m-1)!\sum\limits_{k=0}^{m-1} \frac{R_i^k}{k!}+R_i^{m-1}\Lambda_ig_i}.\eqno(4)$$ Denote by $q_i$ the probability that a job is lost at its arrival moment for the system $S.$ Assume that $p_{m,i-1}$ is the approximate value of this probability. Since the jobs arrive according to a Poisson process, the value $q_i$ is also equal to the probability that, in the system $S_{i-1},$ there are $m$ jobs of the priorities no lower than $i-1.$ Denote by $c_i$ the value $p_{mi}.$ \vskip 5pt The equality $q_i=c_{i-1}$ is exact in the following cases. \vskip 5pt 1. If $m=1,$ then the system $S_{i-1}$ is equivalent to the system $S$ under the assumption that only jobs of priority not lower than $i-1$ arrive, and therefore the equality $q_i=c_{i-1}$ holds. \vskip 5pt 2. Let the service time be distributed exponentially with a parameter $\mu$ not depending on the priority of the job. The presence of lower priorities in the system $S$ does not affect the servicing of higher priority jobs. Since the service time is distributed exponentially, the remaining service time is also distributed exponentially with the same parameter. Thus the probability that an arriving job of the $i$th priority is lost in the system $S$ is the same as the probability that a job is lost in the related non-priority system with the arrival load $R_{i-1},$ i.e., according to the first Erlang formula, $$q_i=\frac{R_{i-1}^m}{m!\sum\limits_{k=0}^m \frac{R_{i-1}^k}{k!}}.$$ In this case, $g_{i-1}=\frac{1}{m\mu}=\frac{R_{i-1}}{\Lambda_{i-1}m},$ and (4) may be rewritten as $$p_{m,i-1}=\frac{R_{i-1}^m} {m!\sum\limits_{k=0}^m \frac{R_{i-1}^k}{k!}}.$$ Thus, $q_i=c_{i-1}.$ \vskip 5pt 3.
The equality $q_i=c_{i-1}$ also holds if, with a probability depending on the priority of the job, the service time is equal to 0, and, with the complementary probability, the service time is distributed exponentially with a parameter independent of the priority. \section{ Evaluation of probability that accepted job will be preempted } \label{section:EPP} \hskip 18pt Denote by $r_i$ the probability that a prescribed priority job accepted for service will be preempted. Let us prove the following equality $$r_i=\frac{\Lambda_i(q_i-q_{i-1})}{\lambda_i(1-q_i)}.\eqno(5)$$ An arriving job of the priority $i$ or higher preempts a job of the $i$th priority if there are $m$ jobs in the system, and at least one of these jobs is a job of the $i$th priority. Therefore, with the probability $q_i-q_{i-1},$ the system is in the state such that an arriving job of the priority not lower than $i$ preempts a job of the $i$th priority. Hence the average number of interruptions of servicing the $i$th priority jobs per time unit equals $\Lambda_i(q_i-q_{i-1}).$ On the other hand, this average number equals $\lambda_i(1-q_i)r_i.$ Thus we have (5).
Replacing $q_i$ with $c_i$ in (5), we get the following approximate formula $$r_i=\frac{\Lambda_i(c_i-c_{i-1})}{\lambda_i(1-c_i)},\ i=1,\dots,N.\eqno(6)$$ \section{ Evaluation of loss probability } \label{section:ELP} \hskip 18pt We have the following equality $$\gamma_i=q_i+(1-q_i)r_i.\eqno(7)$$ Replacing $q_i$ with $c_{i-1}$ in (7) and using (6), we get $$\gamma_i=c_{i-1}+\frac{\Lambda_i}{\lambda_i}(c_i-c_{i-1}).\eqno(8)$$ {\it Thus we propose to compute the loss probability for a job of a prescribed priority according to formula (8), where $$c_i=\frac{R_i^{m-1}\Lambda_ig_i}{(m-1)!\sum\limits_{k=0}^{m-1}\frac{R_i^k}{k!}+R_i^{m-1}\Lambda_ig_i},$$ $$\Lambda_i=\sum\limits_{j=1}^i\lambda_j,\ R_i=\sum\limits_{j=1}^i\lambda_j b_j,$$ $$g_i=\frac{1}{1-\frac{\lambda_i}{\Lambda_i} \left(1-\beta_i\left(\frac{\Lambda_i}{m}\right)\right)}\left(\frac{1}{\Lambda_i} \sum\limits_{j=1}^{i-1}\lambda_jd_j+ \frac{\lambda_i}{\Lambda_i^2}\left(1- \beta_i\left(\frac{\Lambda_i}{m}\right)\right)\right),$$ $$d_j=\left(1-\beta_j\left(\frac{\Lambda_j}{m}\right)\right)\left(\frac{1}{\Lambda_j}+g_j\right),\ j=1,\dots,N.
$$ } \vskip 3pt {\bf Remark 1.} Note that, for the related multichannel loss system with the LCFD preemptive priority discipline, an approximate approach is proposed in~\cite{MT-prior-92-12}; according to this approach, the loss probability $\gamma_i$ for a job of the $i$th priority is computed as $$\gamma_i=c_i+\frac{\Lambda_{i-1}}{\lambda_i}(c_i-c_{i-1}),\ i=1,\dots,N,$$ $$c_i=\frac{R_i^{m-1}\Lambda_ig_i}{(m-1)!\sum\limits_{k=0}^{m-1}\frac{R_i^k}{k!}+R_i^{m-1}\Lambda_ig_i},$$ $$\Lambda_i=\sum\limits_{j=1}^i\lambda_j,\ R_i=\sum\limits_{j=1}^i\lambda_j b_j,\ g_i=\frac{1}{\Lambda_i}\sum\limits_{j=1}^i\lambda_jd_j,$$ $$d_j=\left(1-\beta_j\left(\frac{\Lambda_{j-1}}{m}\right)\right),\ j=2,3,\dots,N,$$ $$d_1=\frac{b_1}{m}.$$ \section{ Estimating accuracy of approach } \label{section:EA} \hskip 18pt If the job service time is exponential with a parameter that may depend on the priority class, then the values of the loss probability for a job of a prescribed priority are the same for the FCFD preemptive discipline and the LCFD preemptive discipline. Assume that the job service time is exponential with the average value depending on the priority class: $$B_i(x)=1-e^{-\mu_i x},\ i=1,2,3,$$ $N=3,$ $m=2,$ $\mu_1 =10,$ $\mu_2 =5,$ $\mu_3=2,$ $\lambda_1=\lambda_2=\lambda_3=1.$ Then the values of the loss probabilities computed according to the proposed approach are $$\gamma_1 = 0.0045,\ \gamma_2 = 0.060,\ \gamma_3 = 0.36.$$ The values obtained by simulation~\cite{MT-prior-92-12} are $$\gamma_1 = 0.0045,\ \gamma_2 = 0.064,\ \gamma_3 = 0.32.$$ \section{ Conclusion } \label{section:Co} \hskip 18pt An approximate approach is proposed to evaluate the following characteristics of a multichannel loss preemptive priority system: the loss probability of a prescribed priority job; the loss probability of a prescribed priority job at the arrival moment; the probability that a job of a prescribed priority is lost due to preemption. In particular cases the formulas are exact.
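The computational recipe of Sections~3--5 is straightforward to script. The following sketch (our illustration, not part of the paper; function and variable names are arbitrary) evaluates formulas (2), (3) and (8) with the convention $c_0=0$ and runs the exponential-service example above, where $\beta_j(s)=\mu_j/(\mu_j+s)$:

```python
from math import factorial

def loss_probabilities(lam, b, beta, m):
    """Approximate per-priority loss probabilities via formulas (2), (3), (8).

    lam[j], b[j] -- arrival rate and mean service time of priority j+1
                    (class 1 is the highest priority);
    beta[j](s)   -- Laplace--Stieltjes transform of B_{j+1}.
    """
    N = len(lam)
    Lam, R = [], []
    tot_rate = tot_load = 0.0
    for l, s in zip(lam, b):
        tot_rate += l
        tot_load += l * s
        Lam.append(tot_rate)                        # Lambda_i
        R.append(tot_load)                          # R_i
    g, d = [], []
    for i in range(N):
        tail = 1.0 - beta[i](Lam[i] / m)            # 1 - beta_i(Lambda_i / m)
        acc = sum(lam[j] * d[j] for j in range(i)) / Lam[i]
        g_i = (acc + lam[i] * tail / Lam[i] ** 2) / (1.0 - lam[i] * tail / Lam[i])
        g.append(g_i)                               # formula (3)
        d.append(tail * (1.0 / Lam[i] + g_i))       # formula (2)
    c = [0.0]                                       # convention: c_0 = 0
    for i in range(N):
        num = R[i] ** (m - 1) * Lam[i] * g[i]
        den = factorial(m - 1) * sum(R[i] ** k / factorial(k) for k in range(m)) + num
        c.append(num / den)                         # c_i = p_{mi}, formula (4)
    # formula (8): gamma_i = c_{i-1} + (Lambda_i / lambda_i) * (c_i - c_{i-1})
    return [c[i] + (Lam[i] / lam[i]) * (c[i + 1] - c[i]) for i in range(N)]

# Exponential-service example: m = 2, mu = (10, 5, 2), lambda_1 = lambda_2 = lambda_3 = 1
mu = (10.0, 5.0, 2.0)
gammas = loss_probabilities(
    lam=[1.0, 1.0, 1.0],
    b=[1.0 / u for u in mu],
    beta=[(lambda s, u=u: u / (u + s)) for u in mu],
    m=2,
)
print(gammas)  # approx [0.0045, 0.0604, 0.3252]
```

With these inputs, formula (8) as stated yields $\gamma_3\approx 0.33$; the value $0.36$ quoted in the numeric example appears to correspond to combining (6) and (7) directly, without the simplification made in (8).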
The accuracy of the approximate approach was estimated by simulation. \end{document}
\begin{document} \title{Joint eavesdropping on the BB84 decoy state protocol with an arbitrary passive light-source side channel} \author{D. V. Babukhin$^{1,2}$ and D.V. Sych$^{2,3}$} \affiliation{$^1$QRate LLC, Novaya av. 100, Moscow 121353, Russia} \affiliation{$^2$Department of Mathematical Methods for Quantum Technologies, Steklov Mathematical Institute of Russian Academy of Sciences, Gubkina str. 8, Moscow 119991, Russia} \affiliation{$^3$P.N. Lebedev Physical Institute, Russian Academy of Sciences, 53 Leninskiy Prospekt, Moscow 119991, Russia} \begin{abstract} A passive light-source side channel in quantum key distribution (QKD) makes the quantum signals more distinguishable and thus provides additional information about the quantum signal to an eavesdropper. The explicit eavesdropping strategies aimed at the passive side channel known to date were limited to a separate measurement of the passive side channel in addition to the operational degree of freedom. Here we show how to account for joint eavesdropping on both the operational degree of freedom and a passive side channel of generic form. In particular, we use the optimal phase-covariant cloning of the signal photon state, which is the most effective attack on the BB84 protocol without side channels, followed by a joint collective measurement of the side channel and the operational degree of freedom. To estimate QKD security under this attack, we develop an ``effective error'' method and show its applicability to the BB84 decoy-state protocol. \end{abstract} \maketitle \date{\today } \section{Introduction} Quantum key distribution (QKD) provides theoretically secure communication between legitimate sides \cite{LoChau1999, Shor2000, gisin2002}. In practice, various deviations of real devices from theoretical models lead to overestimation of the security of a real QKD setup \cite{Scarani2014}.
Whenever a particular device behaves differently from its theoretical model, the eavesdropper (Eve) can use this difference to construct a more efficient attack on the QKD protocol \cite{Nauerth2009, Pereira2019}. This additional eavesdropping option reduces QKD security and forms a so-called informational side channel that can completely compromise the QKD protocol \cite{Makarov2017, Huang2018}. The goal of practical quantum communication is to carefully analyse and estimate the opportunities of eavesdropping beyond those allowed by an ideal theoretical model of QKD \cite{Diamanti2016, Xu2020, Jain2016}. Among all opportunities to attack a given experimental realization of QKD, there are attacks on the quantum source (Alice) and attacks on the receiver (Bob). Historically, the receiver in QKD is more prone to hacking \cite{Jain2016}. Fortunately, a measurement-device-independent QKD protocol allows closing every possible quantum hack on the receiver side \cite{MDI} while still obtaining practically valuable secure key rates over long distances. The device-independent QKD \cite{Acin2007} excludes side channels of the source and the receiver simultaneously, but at the cost of a complicated experimental realization and a low secret key rate at practically reasonable distances. The other way to deal with light-source side channels is to analyse Eve's opportunities to gain information from them. The most general analysis of device imperfections up to date is GLLP \cite{Gottesman2002}, which allows estimating information leakage from general principles. This approach provides pessimistically low secure distances, so there is a need for explicit attacks with more practical security characteristics. Active-probing-based side channels (the so-called ``Trojan horse'' attack) are widely analysed in the literature, and explicit attacks satisfying the GLLP bound are partially provided \cite{Lucamarini2015}.
At the same time, the analysis of passive light-source side channels (i.e. distinguishability of the photon source in non-operational degrees of freedom \cite{Nauerth2009}) is not so developed. There is a huge gap between the lower and upper bounds on the secret key, derived from the general principles \cite{Gottesman2002} and from explicit attacks on QKD with passive side channels, respectively \cite{Babukhin2020, Babukhin2021, Sych2021}. The gap can be potentially reduced by tightening the bounds. In this paper, we tighten the upper bound on the secret key and consider a joint eavesdropping strategy on the signal and passive side-channel states of the general form in the BB84 decoy-state protocol. This strategy consists of a phase-covariant cloning of the operational degree of freedom and a joint measurement of both the operational and the non-operational degrees of freedom. We introduce a passive source side-channel model as an additional degree of freedom, which has the dimension of the QKD protocol alphabet and allows accounting for arbitrary distinguishability of the signal photons. To estimate the efficiency of this attack on the decoy-state protocol, we propose a method to calculate secret key rates in QKD protocols with explicit attacks on the signal photon state and on the passive source side channel. Namely, we show how to reinterpret information flows between Bob and Eve, and calculate an effective error rate on Bob's side, which allows incorporating the theoretically calculated error into the decoy-state protocol. This paper is organized as follows. In Sec. II.A we introduce the background for the single-photon BB84 protocol and show a model of a light-source side channel. In Sec. II.B we discuss an application of the Hong-Ou-Mandel (HOM) interference to the estimation of information leakage through the light-source side channel. In Sec. II.C we discuss eavesdropping on the operational degree of freedom. In Sec.
II.D we introduce an ``effective error'' method, which we further use to estimate the security of the decoy-state BB84 protocol with photon distinguishability. In Sec. II.E we provide calculation results and connect them to state-of-the-art photon sources. A description of the BB84 protocol with decoy states and of how it incorporates the effective error into the security analysis is provided in the Appendix. \section{BB84 with source side channel and effective error rate calculation} \subsection{Side channel model} In the BB84 protocol, legitimate sides exchange bits encoded in quantum states of signal photons. Alice randomly chooses a bit value (0 or 1) and a basis to encode the bit in (X or Y). Then she sends the encoded state into a quantum channel, which is open for eavesdropping. The eavesdropper attacks the photon and thus introduces errors in communication, which alert the legitimate sides to eavesdropping. Without photon distinguishability, the photon states in the BB84 protocol are \begin{equation} \label{ensembleBB84} \biggl{\{} \frac{1}{4}: \ket{0_{x}},\text{ } \frac{1}{4}: \ket{1_{x}},\text{ } \frac{1}{4}: \ket{0_{y}},\text{ } \frac{1}{4}: \ket{1_{y}} \biggl{\}}. \end{equation} If photons are distinguishable in degrees of freedom other than the signal one, we need to incorporate this distinguishability into the model through an additional degree of freedom. This degree of freedom is non-operational for legitimate users, but it is visible to Eve, who can extract additional information. For example, if Alice and Bob use photon polarization to encode secret bits, differences in other photon degrees of freedom (e.g., spatial modulation, frequencies) provide additional information to Eve. Thus, there is a side channel which leaks information to Eve.
This transforms the BB84 protocol states to the form \begin{equation} \label{ensembleBB84sidechannel} \biggl{\{} \frac{1}{4}: \ket{0_{x}}\otimes\ket{0^{\Delta}_{X}},\text{ } \frac{1}{4}: \ket{1_{x}}\otimes\ket{1^{\Delta}_{X}},\text{ } \frac{1}{4}: \ket{0_{y}}\otimes\ket{0^{\Delta}_{Y}},\text{ } \frac{1}{4}: \ket{1_{y}}\otimes\ket{1^{\Delta}_{Y}} \biggl{\}}, \end{equation} where the states $\ket{0^{\Delta}_{X}}$, $\ket{1^{\Delta}_{X}}$, $\ket{0^{\Delta}_{Y}}$, $\ket{1^{\Delta}_{Y}}$ are nonorthogonal states of the additional degree of freedom, which models the photons' distinguishability side channel. Here we introduce a passive source side-channel model which has the most general form without any symmetry constraints (unlike in the previous studies \cite{Babukhin2022}) and allows accounting for any kind of distinguishability in the non-operational properties of the signal photons. The number of lasers which compose the QKD source and produce signal photons dictates the number of states which compose a basis of the side-channel state space. For example, if a QKD source has four different lasers, then, in general, all four states of the side-channel degree of freedom have different non-unity scalar products with the other side-channel states. This case models the situation when the non-operational degrees of freedom of photons produced by different lasers do not completely coincide due to imprecise laser calibration or some unnoticed device flaw. The described model allows estimating the influence of the side channel on the security of QKD in the whole range of non-operational photon distinguishability. The case of indistinguishable side-channel states corresponds to the protocol without excess information leakage, and the case of orthogonal side-channel states corresponds to a protocol completely compromised through side channels.
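To make the generality of this model concrete, here is a small numerical sketch (ours, purely illustrative; the chosen overlap values are hypothetical): any desired set of pairwise scalar products of the four side-channel states, i.e. any positive semidefinite Gram matrix with unit diagonal, can be turned into explicit state vectors via a Cholesky factorization.

```python
import numpy as np

# Desired pairwise scalar products <i|j> of the four side-channel states
# (a hypothetical asymmetric example; any positive semidefinite Gram
# matrix with unit diagonal works).
G = np.array([
    [1.00, 0.95, 0.90, 0.92],
    [0.95, 1.00, 0.93, 0.91],
    [0.90, 0.93, 1.00, 0.94],
    [0.92, 0.91, 0.94, 1.00],
])

# Rows of L are the state vectors: G = L @ L.T, so <i|j> = L[i] @ L[j].
L = np.linalg.cholesky(G)
states = [L[i] for i in range(4)]
print(states[0] @ states[1])  # 0.95, by construction (up to floating point)
```

Identical rows would reproduce the no-leakage case, while an identity Gram matrix gives orthogonal side-channel states, i.e. a fully compromised protocol.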
\subsection{Hong-Ou-Mandel visibility as a measure of side channel leakage} One can test the physical difference of photons using the Hong-Ou-Mandel interference \cite{Hong1987,Branczyk2017}. The HOM interference is a fourth-order interference of photons, which makes two physically indistinguishable photons incident on a beam splitter exit it pairwise in one or the other arm. If photodetectors are placed in front of each arm, there will be no simultaneous counts of both detectors, i.e., no coincident counts. On the contrary, if photons are physically distinct (e.g., some of their modes have different states), they can leave the beam splitter in different arms and produce coincident counts of the two photodetectors. This effect can be used to estimate passive source side channels in QKD. In particular, the greater the distinguishability of photons in non-operational degrees of freedom, the more information can potentially leak to Eve through the side channel. Alice can use the HOM interference to estimate photon distinguishability in all non-operational degrees of freedom at once, and thus to estimate the information leakage \cite{Duplinskii2019}. She needs to bring two photons from different source lasers into one operational degree of freedom (e.g., polarization) and send them into the two arms of a balanced beam splitter to measure the interference visibility. If the photons' states are described by density matrices $\rho_{1}$ and $\rho_{2}$, the HOM interference visibility is equal to \begin{equation} V(\rho_{1}, \rho_{2}) = Tr[\rho_{1} \rho_{2}] = \frac{N_{max} - N_{min}}{N_{max}}, \end{equation} where $N_{min}$ and $N_{max}$ are the minimum and maximum values of the coincidence counts. If the visibility is maximal (1 for single photons), then these photons are not distinct in any degree of freedom. If the visibility is minimal (0 for single photons), then the photons are completely distinct in some degree of freedom and can be discriminated through a proper quantum state measurement.
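As a toy numerical illustration (ours; a two-dimensional mode is a stand-in for the full mode structure of a photon), the visibility $V=\mathrm{Tr}[\rho_1\rho_2]$ can be evaluated directly; for pure states it reduces to the squared overlap $|\langle\psi_1|\psi_2\rangle|^2$:

```python
import numpy as np

def hom_visibility(rho1, rho2):
    """V = Tr[rho1 rho2]; equals |<psi1|psi2>|^2 for pure states."""
    return np.real(np.trace(rho1 @ rho2))

theta = np.pi / 8                      # toy angle between the two photon modes
psi1 = np.array([1.0, 0.0])
psi2 = np.array([np.cos(theta), np.sin(theta)])
rho1 = np.outer(psi1, psi1.conj())
rho2 = np.outer(psi2, psi2.conj())

V = hom_visibility(rho1, rho2)
print(V)                               # cos^2(pi/8) ~ 0.854
```

Identical states give $V=1$ (no side-channel leakage), orthogonal states give $V=0$ (fully distinguishable photons), matching the limiting cases discussed above.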
The visibility value allows estimating a basis imbalance parameter, which characterizes the information leakage through the side channel \cite{Duplinskii2019}: \begin{eqnarray} \label{DeltaDuplinskiy} \Delta \leq \frac{1}{2} \biggl( 1 - \cos\biggl( 2 \arccos{\frac{1 + e^{\mu(\sqrt{2V}-1)}}{2}} + \arccos{e^{\mu(\sqrt{2V}-1)}} \biggr) \biggr) \end{eqnarray} where $\mu$ is the signal pulse intensity and $V$ is the visibility value. For more discussion of the parameter $\Delta$, see Appendix \ref{AppendixA3}. Formula (\ref{DeltaDuplinskiy}) connects the theoretical parameter $\Delta$ with the HOM visibility of practical QKD sources. We will use this connection in the following sections. \subsection{Eavesdropping on the signal degree of freedom} Here we describe a unitary attack on the signal degree of freedom, which is a standard action of Eve in the process of QKD communication. Because the side channel gives Eve only partial information about the secret key, Eve combines the side-channel attack with an attack on the signal photon. The most effective attack on photons in the BB84 protocol (a so-called collective attack) gives equal information flows from Alice to Bob and from Alice to Eve when Bob has a communication error equal to $11\%$. This attack can be implemented with an optimal phase-covariant cloning machine \cite{Bruss2000}. For signal states of the BB84 protocol from the $XY$ plane of the Bloch sphere, the action of this cloning machine is \begin{eqnarray} U\ket{\psi(\phi)}_{B}\ket{0_{z}}_{E}\ket{0_{z}}_{E^{'}} = \frac{1}{2}(\ket{0_{z}}_{B}\ket{0_{z}}_{E}\ket{0_{z}}_{E^{'}} + \notag\\ + \cos\eta\ket{0_{z}}_{B}\ket{1_{z}}_{E}\ket{1_{z}}_{E^{'}} + \sin\eta\ket{1_{z}}_{B}\ket{0_{z}}_{E}\ket{1_{z}}_{E^{'}} \pm \notag\\ \pm \cos\eta\ket{1_{z}}_{B}\ket{0_{z}}_{E}\ket{0_{z}}_{E^{'}} \pm \sin\eta\ket{0_{z}}_{B}\ket{1_{z}}_{E}\ket{0_{z}}_{E^{'}} \pm \notag\\ \pm \ket{1_{z}}_{B}\ket{1_{z}}_{E}\ket{1_{z}}_{E^{'}}), \end{eqnarray} where $\eta$ is a cloning parameter.
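The visibility-to-imbalance bound (\ref{DeltaDuplinskiy}) can be evaluated numerically; the sketch below (function name and sample values are our own) checks that an ideal visibility gives a vanishing bound, while reduced visibility yields a positive $\Delta$:

```python
import numpy as np

def delta_upper_bound(V, mu):
    """Upper bound on the imbalance parameter Delta from HOM visibility V
    and pulse intensity mu, following the bound above; the arccos arguments
    must lie in [-1, 1], which holds for V <= 1/2."""
    e = np.exp(mu * (np.sqrt(2.0 * V) - 1.0))
    return 0.5 * (1.0 - np.cos(2.0 * np.arccos((1.0 + e) / 2.0)
                               + np.arccos(e)))

print(delta_upper_bound(0.5, 0.5))  # ideal visibility: the bound is 0
print(delta_upper_bound(0.4, 0.5))  # reduced visibility: positive bound
```

Here $V=1/2$ plays the role of the ideal visibility for which the exponent vanishes and the bound collapses to zero.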
When $\eta = 0$, the cloner unitary does nothing, and when $\eta = \pi/2$, Eve has Bob's state in her space, while Bob's qubit becomes maximally mixed because of entanglement with Eve's ancillary qubit. This attack leads to a critical error value $Q_{c} \approx 0.11$ for the standard BB84 protocol with photons that are indistinguishable in non-operational degrees of freedom. The attack on the signal degree of freedom consists in Eve applying a quantum cloning transform to Alice's signal photon, thus correlating the photon with Eve's ancillary system, and waiting for the basis exchange. Eve's ancillary system is considered a quantum memory, which can store a quantum state for an infinitely long time. After Alice and Bob exchange their basis choices for all bit positions, Eve performs a full register measurement and obtains a binary string correlated with the bit sequence distributed between Alice and Bob. The amount of information which Eve obtains during the collective attack on the QKD protocol is characterized by the Holevo value. This value upper-bounds the possible mutual information between Alice and Eve. In the BB84 protocol with balanced basis choice ($\frac{1}{2}$ of the time Alice chooses basis $X$ and $\frac{1}{2}$ of the time basis $Y$), the Holevo value is calculated as follows: \begin{equation} \chi = S(\frac{1}{2}\rho_{0,X} + \frac{1}{2}\rho_{1,X}) - \frac{1}{2}S(\rho_{0,X}) - \frac{1}{2}S(\rho_{1,X}), \end{equation} where $S$ denotes the von Neumann entropy \begin{equation} S(\rho) = -\mathrm{Tr}(\rho\log(\rho)), \end{equation} and $\rho_{0,X}$ and $\rho_{1,X}$ are Eve's states, obtained by cloning states in the quantum channel between Alice and Bob. \subsection{Effective error in QKD with a side channel} Side channels increase Eve's information about the secret bit, because Eve has more resources to distinguish the collected quantum states.
This increases the information leaking to Eve, and formally it increases Eve's mutual information, bounded by the Holevo value: \begin{equation} \chi < \chi^{\Delta}, \end{equation} where \begin{equation} \chi^{\Delta} = S \biggl( \frac{1}{2}\rho_{0,X}\otimes\ket{0^{\Delta}_{X}}\bra{0^{\Delta}_{X}} + \frac{1}{2}\rho_{1,X}\otimes\ket{1^{\Delta}_{X}}\bra{1^{\Delta}_{X}} \biggr) - \frac{1}{2}S \biggl( \rho_{0,X}\otimes\ket{0^{\Delta}_{X}}\bra{0^{\Delta}_{X}} \biggr) - \frac{1}{2} S \biggl( \rho_{1,X}\otimes\ket{1^{\Delta}_{X}}\bra{1^{\Delta}_{X}} \biggr) \end{equation} is the Holevo value of Eve's attack on the protocol with side channels. This value is equal to $\chi$ when the side-channel states coincide ($\bra{1^{\Delta}_{X}}\ket{0^{\Delta}_{X}} = 1$), which means there is no information leakage through the side channel. We can look at the increase of information leakage from a different perspective. With more information, Eve has an information channel from Alice with less error, while Bob's information channel does not change. The secret key rate is calculated by subtracting the capacities of the channels between Alice and Bob and between Alice and Eve. Formally, we can write this equation in two ways: \begin{equation} \label{Rdelta} R^{\Delta} = 1 - h_{2}(Q_{Bob}) - \chi^{\Delta} = 1 - h_{2}(Q_{Bob}^{\Delta}) - \chi. \end{equation} Here, $Q_{Bob}$ is the error on Bob's side in the QKD protocol, which occurs due to the unitary eavesdropping on the signal photon state. This error is not influenced by the side-channel attack, and using this quantity for further protocol analysis would overestimate the security. Instead, we can consider the secret key rate as written on the right-hand side of equation (\ref{Rdelta}): here Eve obtains no information from the side channel, but Bob obtains less information due to a larger (``effective'') error rate $Q_{Bob}^{\Delta}$, which accounts for the information leakage through side channels.
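Equating the two forms of (\ref{Rdelta}) gives $h_{2}(Q_{Bob}^{\Delta}) = h_{2}(Q_{Bob}) + \chi^{\Delta} - \chi$, which can be inverted numerically for $Q_{Bob}^{\Delta}$. The following sketch does this by bisection on $[Q_{Bob}, 1/2]$, where $h_2$ is increasing; the sample Holevo values are our own illustrative numbers:

```python
import numpy as np

def h2(p):
    """Binary entropy function."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def effective_error(q_bob, chi, chi_delta):
    """Solve h2(Q) = h2(q_bob) + (chi_delta - chi) for Q in [q_bob, 1/2]
    by bisection (a numerical sketch of the identity above)."""
    target = h2(q_bob) + (chi_delta - chi)
    lo, hi = q_bob, 0.5
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if h2(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# No extra side-channel leakage (chi_delta == chi): effective error = Q_Bob.
print(effective_error(0.05, 0.4, 0.4))   # ≈ 0.05
# Extra leakage raises the effective error above Q_Bob.
print(effective_error(0.05, 0.4, 0.45))  # > 0.05
```

The monotonicity of $h_2$ on $[0,1/2]$ guarantees that any extra Holevo leakage $\chi^{\Delta}-\chi>0$ strictly increases the effective error.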
This equivalence allows us to calculate Bob's effective error using explicit attacks on the protocol: unitary attacks on the signal, explicit attacks on the side-channel states, and the side-channel model itself. The result of combining these parts of the eavesdropping is a single value, the effective Bob error, which can be used for further security estimation of the protocol. Calculation of the effective error consists of the following steps: \begin{enumerate} \item Choose a model of the source side channel in the protocol; \item Choose an attack on the side-channel states (i.e., how Eve measures the side-channel states and how she uses this information); \item Choose a unitary attack to eavesdrop on the signal photon; \item Calculate the effective error on Bob's side $Q_{Bob}^{\Delta}$ with equation (\ref{Rdelta}); \item Use the effective error $Q_{Bob}^{\Delta}$ for further security analysis. \end{enumerate} In the following we provide a concrete example of using this effective error approach. \subsection{Results} Here we apply the effective error approach to the calculation of an explicit attack strategy on the BB84 protocol with decoy states. Details of the decoy-state method are provided in Appendix \ref{AppendixA1}. We show how the effective error corrects the detection error on Bob's side in Appendix \ref{AppendixA2}, and how the side-channel model we use here performs in the GLLP approach \cite{gottesman2002security} in Appendix \ref{AppendixA3}. In Fig. \ref{fig: Figure1} we provide secret key rates for the case when Eve does no eavesdropping on the signal photon state and only attacks the side-channel degree of freedom. In our calculation, this case corresponds to optimal phase-covariant cloning with $\eta = 0$. In Fig. \ref{fig: Figure2} we provide secret key rates for the case when Eve eavesdrops both on the signal photon and on the side-channel state. We compare our approach with the GLLP secret key estimate.
In simulations of the decoy state protocol we used fiber attenuation $\alpha = 0.2$ dB/km, dark count probability $Y_0 = 10^{-5}$, an average number of photons per pulse $\mu = 0.5$, an optical error rate of $1\%$, and error correction efficiency $f = 1.0$. \begin{figure} \caption{Secret key rates for the BB84 decoy state protocol with a passive source side channel. Here Eve does no eavesdropping on the signal photon state. The imbalance value $\Delta$ indicates the amount of information leakage through the side channel; the ``EfEr'' label stands for the ``effective error'' method.} \label{fig: Figure1} \end{figure} \begin{figure} \caption{Secret key rates for the BB84 decoy state protocol with a passive source side channel. Here Eve eavesdrops on (phase-covariant clones) the signal photon state. The imbalance value $\Delta$ indicates the amount of information leakage through the side channel; the ``EfEr'' label stands for the ``effective error'' method. The reference curve indicates the protocol with no eavesdropping on the signal and side-channel states.} \label{fig: Figure2} \end{figure} \begin{figure} \caption{Visibility vs. basis imbalance parameter $\Delta$, derived from Eq.~(\ref{DeltaDuplinskiy}).} \label{fig:visibility} \end{figure} The results demonstrate an application example of the effective error approach to the analysis of an explicit attack on the protocol with a passive source side channel. The effective error is above zero even in the absence of eavesdropping on the signal degree of freedom, and there is a corresponding decrease of the secret key generation rate (see Fig.~\ref{fig: Figure1}). This result is expected, since there is a leakage of information to Eve through the side channel. If the signal degree of freedom is also eavesdropped on, the effective error characterizes the overall leakage of information (see Fig.~\ref{fig: Figure2}).
In our calculation we used optimal phase-covariant cloning, which is the most efficient attack on the BB84 protocol without side channels \cite{Bruss2000}, along with a joint collective measurement of the ``signal + side channel'' system. The joint collective measurement of all degrees of freedom (both operational and non-operational) is a stronger eavesdropping strategy exploiting the passive source side channel than those covered in the previous studies \cite{Babukhin2022}. This attack strategy uses information from the side channel nonadaptively: Eve obtains additional information but does not actively use it to change the signal photon state (e.g., make the states of the protocol alphabet more orthogonal or block some photons according to a particular criterion). We calculated secret key rates for values of $\Delta$ which correspond to visibility values of state-of-the-art photon sources (see Fig.~\ref{fig:visibility}). Thus, the secret key rates we calculated provide a practical upper bound on the key generation rate for the considered attack strategy. \section{Conclusion} We analyzed joint eavesdropping on the operational degree of freedom and on a passive side channel in the BB84 decoy state protocol with a photon-distinguishability side channel. We used an effective error approach to estimate the security of QKD protocols with side channels. Our results allowed us to tighten the upper bound on the secret key rate of the BB84 decoy state protocol with a passive source side channel. Since BB84 with decoy states forms the backbone of the most practically available QKD systems, our result can potentially enhance the certification of security in real-world quantum communication. We also note that the proposed method of calculating an effective error in protocols with side channels is intended to provide an intuitive tool for analyzing the influence of side channels in QKD.
This method uses the information flows to Bob and Eve to calculate an error value capturing the possible eavesdropping on the side channel, which is unnoticeable through a standard communication error calculation. We demonstrated our ``effective error'' approach on an explicit joint eavesdropping strategy based on phase-covariant cloning of the operational degree of freedom followed by a joint collective measurement of the signal and side-channel degrees of freedom. Since a phase-covariant cloning machine provides the most effective eavesdropping on the BB84 protocol without side channels, this is a reasonable example for the initial demonstration of our approach. In our calculation results (dashed curves in Fig.~\ref{fig: Figure1} and Fig.~\ref{fig: Figure2}), the secret key rate drops to almost zero at 130 kilometers (eavesdropping on the side channel only, Fig.~\ref{fig: Figure1}) and at 100 kilometers (eavesdropping on both the side channel and the signal state, Fig.~\ref{fig: Figure2}), while conservative estimates give an almost zero secret key rate at much shorter distances (30 kilometers in Fig.~\ref{fig: Figure1} and 5 kilometers in Fig.~\ref{fig: Figure2}, correspondingly). In our consideration, we investigated a non-adaptive eavesdropping strategy for the attack on the protocol with a passive source side channel. Based on our results, we conclude that if there is an attack that closes the gap between a conservative key rate estimate and the protocol without the side channel, then it must be an adaptive attack, i.e., an eavesdropping strategy that uses information from the side channel to change the operational degree of freedom in a way that favorably increases the information leakage. \section*{ACKNOWLEDGMENTS} This work was supported by the Russian Science Foundation under grant no. 20-71-10072, https://rscf.ru/en/project/20-71-10072/.
\renewcommand{\appendixname}{APPENDIX} \appendix \section{Decoy state protocol with source side channel} \subsection{BB84 protocol with decoy states} \label{AppendixA1} In practice, signal photons are generated by lasers with an unlocked initial phase, which leads to the randomization of the initial phases of signal photons. Furthermore, laser sources generate many-photon states along with single-photon states. The resulting carrier quantum state has the following form: \begin{equation} \rho^{\mu}_{x} = \sum_{k=0}^{\infty}e^{-\mu}\frac{\mu^{k}}{k!}\ket{k}\bra{k}, \end{equation} where $\mu$ is the mean photon number of the coherent state. To decrease the influence of many-photon states on the communication, the mean photon number is usually taken low (less than one photon per pulse), and the resulting state is called a phase-randomized weak coherent pulse (PRWCP). Even though the many-photon components are attenuated, they remain a vulnerability of the BB84 protocol. These components contain photons in the same single-photon states and thus give Eve the possibility to hold one photon with the same quantum state as the photons sent towards Bob. In the end, Eve has a clone of Bob's state without introducing disturbance, and the security of the protocol is compromised. This vulnerability can be closed with the so-called decoy-state method \cite{Ma2005}. This method is based on the fact that even though Eve can measure the number of photons in a signal, she cannot deduce the signal intensity from such a measurement. Thus, if Alice and Bob use signals with different intensities, Eve will attack these signals with the same actions. This fact allows Alice and Bob to use photon signals with different intensities to estimate the single-photon component weight, which composes the secret fraction of the distributed bits. They can further proceed with privacy amplification, taking into account only the secret fraction of their bits.
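The Poissonian photon-number statistics of a PRWCP quantify this vulnerability directly; a short sketch (parameter value as in our simulations) computes the vacuum, single-photon, and insecure multiphoton fractions:

```python
import math

def poisson_weight(mu, k):
    """Probability that a PRWCP of intensity mu contains exactly k photons:
    e^{-mu} mu^k / k!."""
    return math.exp(-mu) * mu**k / math.factorial(k)

mu = 0.5
p_multi = 1.0 - poisson_weight(mu, 0) - poisson_weight(mu, 1)
print(poisson_weight(mu, 0))  # vacuum fraction ≈ 0.607
print(poisson_weight(mu, 1))  # single-photon fraction ≈ 0.303
print(p_multi)                # multiphoton (insecure) fraction ≈ 0.090
```

Even at $\mu=0.5$, roughly $9\%$ of non-empty emissions contain more than one photon, which is exactly the component the decoy-state method bounds.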
In particular, receiving $k$-photon states, where $k$ ranges from 0 to infinity, the full probability of a detection count on Bob's side for a pulse with intensity $\mu$ is \begin{equation} Q_{\mu} = \sum_{k=0}^{\infty}e^{-\mu}\frac{\mu^{k}}{k!}Y_{k} = Y_{0} + 1 - e^{-\eta \mu}, \end{equation} where $Y_{k}$ is the $k$-photon yield, i.e., the conditional probability that Bob detects a signal when Alice sent him a $k$-photon state. The probability of a bit error on Bob's side is \begin{equation} E_{\mu} = \frac{1}{Q_{\mu}}\sum_{k=0}^{\infty}e^{-\mu}\frac{\mu^{k}}{k!}Y_{k}e_{k} = \frac{e_{0}Y_{0} + e_{det}(1 - e^{-\eta \mu})}{Q_{\mu}}, \end{equation} where $k = 0$ corresponds to dark counts of the photodetectors, and the $n$-photon error rate $e_{n}$ is \begin{equation} \label{en} e_{n} = \frac{e_{0}Y_{0} + e_{det}\eta_{n}}{Y_{n}}. \end{equation} Here $e_{0}$ is the error rate of dark counts, $Y_{0}$ is the vacuum yield, $Y_{n}$ is the $n$-photon yield, and $e_{det}$ is the optical error of Bob's system. Here $\eta_{n}$ is the overall transmission and detection efficiency between Alice and Bob for $n$-photon states, \begin{equation} \eta_{n} = 1 - (1 - \eta)^{n}, \end{equation} where \begin{equation} \eta = 10^{-\alpha L / 10}\eta_{Bob}, \end{equation} with $\alpha$ the loss coefficient, $L$ the transmission line length, and $\eta_{Bob}$ the transmittance on Bob's side. The decoy state protocol allows Alice and Bob to obtain bounds on the vacuum yield $Y_{0}$, the single-photon yield $Y_{1}$, and the single-photon error rate $e_{1}$ in the standard approach to the BB84 protocol with decoy states. These quantities are then used to estimate the secret key rate. Moreover, using (\ref{en}) allows incorporating an explicit eavesdropping strategy which Eve uses to attack the protocol. In the next section we provide a connection between the error from eavesdropping and a measurement error on Bob's side.
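The closed forms for the gain and the QBER are straightforward to evaluate; the sketch below (function name is ours; the QBER is normalized by $Q_\mu$, and the example parameters match our simulation settings) computes them for a 50 km line:

```python
import math

def gain_and_qber(mu, Y0, e0, e_det, alpha, L, eta_bob=1.0):
    """Overall gain Q_mu and QBER E_mu for a PRWCP source, with channel
    transmittance eta = 10^{-alpha L / 10} * eta_Bob."""
    eta = 10 ** (-alpha * L / 10.0) * eta_bob
    Q_mu = Y0 + 1.0 - math.exp(-eta * mu)
    E_mu = (e0 * Y0 + e_det * (1.0 - math.exp(-eta * mu))) / Q_mu
    return Q_mu, E_mu

# mu=0.5, Y0=1e-5, e0=0.5 (random dark counts), e_det=1%, alpha=0.2 dB/km, L=50 km
Q, E = gain_and_qber(0.5, 1e-5, 0.5, 0.01, 0.2, 50)
print(Q)  # ≈ 0.0488
print(E)  # ≈ 0.0101: optical error plus a small dark-count contribution
```

At short distances the QBER is dominated by the optical error $e_{det}$, while at long distances the dark-count term $e_0 Y_0$ takes over, which is what ultimately kills the key rate.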
The secret key rate for the BB84 protocol with decoy states is \begin{equation} \label{Rdecoy} R = \frac{1}{2}(Q_{1}(1 - h_{2}(e_{1})) - fQ_{\mu}h_{2}(E_{\mu}) ), \end{equation} where $f$ is a post-processing efficiency factor and $e_{1}$ is the bit error of the single-photon component outcomes. \subsection{Measurement errors and eavesdropping} \label{AppendixA2} Eavesdropping introduces errors into the communication between Alice and Bob. In theory, they use perfect devices, and errors occur due to Eve's interception. In practice, communication errors also arise from imperfections of the devices. Mathematically, these two sources of errors (eavesdropping and measurement error) are equivalent, which we prove in this section. Suppose Alice sent the state $\ket{\Psi_{0}}\bra{\Psi_{0}}$ into the transmission channel and suppose that this state belongs to a basis \{$\ket{\Psi_{0}}\bra{\Psi_{0}}$, $\ket{\Psi_{1}}\bra{\Psi_{1}}$\}. In the channel, Eve uses a unitary attack which entangles her subsystem with the Alice state: \begin{equation} U(\ket{\Psi_{0}}\ket{E}) = \sqrt{1 - \eta}\ket{\Psi_{0}}\ket{0_{E}} + \sqrt{\eta}\ket{\Psi_{1}}\ket{1_{E}}, \end{equation} where $\ket{E}$ is the initial ancillary state, and $\ket{0_{E}}$ and $\ket{1_{E}}$ ($\bra{1_{E}}\ket{0_{E}} = 0$) are two states of Eve's subsystem. The unitary attack reads as a sequence of transforms: \begin{eqnarray} \notag \ket{\Psi_{0}}\bra{\Psi_{0}} \longrightarrow \ket{\Psi_{0}}\bra{\Psi_{0}} \otimes \ket{E}\bra{E} \longrightarrow (1 - \eta)\ket{\Psi_{0}}\bra{\Psi_{0}} \otimes \ket{0_{E}}\bra{0_{E}} + \\ \sqrt{\eta(1-\eta)}(\ket{\Psi_{0}}\bra{\Psi_{1}} \otimes \ket{0_{E}}\bra{1_{E}} + \ket{\Psi_{1}}\bra{\Psi_{0}} \otimes \ket{1_{E}}\bra{0_{E}}) + \eta \ket{\Psi_{1}}\bra{\Psi_{1}} \otimes \ket{1_{E}}\bra{1_{E}}. \end{eqnarray} The output state available to Bob is \begin{equation} \rho_{Bob} = (1 - \eta)\ket{\Psi_{0}}\bra{\Psi_{0}} + \eta \ket{\Psi_{1}}\bra{\Psi_{1}}.
\end{equation} Bob measures this state in a particular basis, and the measurement is represented by a POVM operator $M$. Bob measures the expectation value of $M$ on the received quantum state, which contains the eavesdropping-induced error. This state error can be transformed into a measurement error: \begin{eqnarray} \notag \langle M \rangle = \mathrm{Tr}[\rho_{Bob}M] = \mathrm{Tr}[((1 - \eta)\ket{\Psi_{0}}\bra{\Psi_{0}} + \eta \ket{\Psi_{1}}\bra{\Psi_{1}})M] = \\ \notag (1 - \eta)\mathrm{Tr}[\ket{\Psi_{0}}\bra{\Psi_{0}} M] + \eta \mathrm{Tr}[\ket{\Psi_{1}}\bra{\Psi_{1}}M] = \\ \notag (1 - \eta)\mathrm{Tr}[\ket{\Psi_{0}}\bra{\Psi_{0}} M] + \eta \mathrm{Tr}[V\ket{\Psi_{0}}\bra{\Psi_{0}}V^{\dagger} M] = \\ \notag (1 - \eta)\mathrm{Tr}[\ket{\Psi_{0}}\bra{\Psi_{0}} M] + \eta \mathrm{Tr}[\ket{\Psi_{0}}\bra{\Psi_{0}} (V^{\dagger}MV)] = \\ \mathrm{Tr}[\ket{\Psi_{0}}\bra{\Psi_{0}} ((1-\eta)M + \eta V^{\dagger}MV)] = \mathrm{Tr}[\ket{\Psi_{0}}\bra{\Psi_{0}} \Tilde{M}], \end{eqnarray} where $V$ is the unitary transform with $\ket{\Psi_{1}} = V \ket{\Psi_{0}}$. Here we denoted by $\Tilde{M}$ an effective measurement POVM which incorporates the measurement error: \begin{equation} \Tilde{M} = (1 - \eta)M + \eta V^{\dagger}MV. \end{equation} This equivalence between state errors, which occur from eavesdropping, and measurement errors, which occur due to imperfect device construction, allows describing eavesdropping as an optical measurement error on Bob's side and, formally, adding the optical error and the theoretical communication error on Bob's side. Using (\ref{en}), we can incorporate the theoretical error from eavesdropping into the single-photon error of the decoy state method: \begin{equation} \label{e1Q} e_{1} = \frac{e_{0}Y_{0} + (e_{det} + Q_{Bob})\eta}{Y_{1}} \end{equation} for the single-photon error, and \begin{equation} \label{EmuQ} E_{\mu} = \frac{1}{Q_{\mu}}\sum_{k=0}^{\infty}e^{-\mu}\frac{\mu^{k}}{k!}Y_{k}e_{k} = \frac{e_{0}Y_{0} + (e_{det} + Q_{Bob})(1 - e^{-\eta \mu})}{Q_{\mu}} \end{equation} for the overall QBER in the decoy state protocol.
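The state-error/measurement-error equivalence $\mathrm{Tr}[\rho_{Bob}M]=\mathrm{Tr}[\ket{\Psi_0}\bra{\Psi_0}\Tilde{M}]$ can be verified numerically on random qubit instances; the following sketch (random state, unitary, and positive operator are our own test data) checks it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random pure state |Psi0>, random unitary V (via QR), positive operator M.
psi0 = rng.normal(size=2) + 1j * rng.normal(size=2)
psi0 /= np.linalg.norm(psi0)
V, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
M = H.conj().T @ H          # positive semidefinite (a POVM element up to scaling)
eta = 0.3

rho0 = np.outer(psi0, psi0.conj())          # |Psi0><Psi0|
rho1 = V @ rho0 @ V.conj().T                # |Psi1><Psi1| with |Psi1> = V|Psi0>
rho_bob = (1 - eta) * rho0 + eta * rho1     # state after the unitary attack

M_eff = (1 - eta) * M + eta * V.conj().T @ M @ V   # effective POVM
lhs = np.trace(rho_bob @ M)
rhs = np.trace(rho0 @ M_eff)
print(np.allclose(lhs, rhs))  # True: state error == measurement error
```

This is exactly the identity that lets the eavesdropping-induced disturbance be folded into Bob's optical error term in (\ref{e1Q}) and (\ref{EmuQ}).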
\subsection{GLLP analysis of the side channel} \label{AppendixA3} The influence of side channels on a QKD protocol is usually estimated with the GLLP--Koashi approach \cite{gottesman2002security, Koashi2009}. This approach introduces an additional quantum system, a quantum coin, to simulate Alice's choice of basis. The quantum coin allows incorporating the distinguishability between basis choices in the BB84 protocol into a quantity which can be calculated once a model of the side channel is provided. This approach to analysing the information leakage of a side channel is notoriously pessimistic and leads to a severe drop of QKD security even for practically weak side channels. Here we apply this approach to the side channel model of our choice. Our derivation closely follows that of \cite{Lucamarini2015}. We introduced nonorthogonal states as a model of a side channel of a general kind (\ref{ensembleBB84sidechannel}). The states of the ensemble are \begin{eqnarray} \ket{\psi_{0X}} = \ket{0_{x}}\otimes\ket{0^{\Delta}_{X}}, \\ \ket{\psi_{1X}} = \ket{1_{x}}\otimes\ket{1^{\Delta}_{X}}, \\ \ket{\psi_{0Y}} = \ket{0_{y}}\otimes\ket{0^{\Delta}_{Y}}, \\ \ket{\psi_{1Y}} = \ket{1_{y}}\otimes\ket{1^{\Delta}_{Y}}. \end{eqnarray} The $X$ basis states can be prepared when Alice measures an ancillary qubit of the following entangled system \begin{equation} \ket{\Psi_{X}} = \frac{ \ket{0_{X}}\ket{\psi_{0X}} + \ket{1_{X}}\ket{\psi_{1X}}} {\sqrt{2}} \end{equation} and the $Y$ basis states can be prepared through measurement of the system \begin{equation} \ket{\Psi_{Y}} = \frac{ \ket{0_{Y}}\ket{\psi_{0Y}} + \ket{1_{Y}}\ket{\psi_{1Y}}} {\sqrt{2}}. \end{equation} To choose a basis, Alice can use a quantum coin system which is entangled with the two basis states: \begin{equation} \ket{\Phi} = \frac{\ket{0_{Z}}_{C} \ket{\Psi_{X}} + \ket{1_{Z}}_{C} \ket{\Psi_{Y}}}{\sqrt{2}}, \end{equation} where the subscript $C$ denotes the coin state.
The full process of state preparation consists in the measurement of the quantum coin to choose a basis and then the measurement of an ancillary qubit to choose a secret bit. In the absence of side channels, the basis states $\ket{\Psi_{X}}$ and $\ket{\Psi_{Y}}$ are indistinguishable. The presence of side channels introduces a difference which can be measured on the quantum coin. To see this, we rewrite the states of the quantum coin in the $X$ basis: \begin{equation} \ket{\Phi} = \frac{\ket{0_{X}}_{C} (\ket{\Psi_{X}}+\ket{\Psi_{Y}}) + \ket{1_{X}}_{C} (\ket{\Psi_{X}}-\ket{\Psi_{Y}}) }{2}. \end{equation} When the basis states are indistinguishable ($\ket{\Psi_{X}} = \ket{\Psi_{Y}}$), the coin cannot be measured in the state $\ket{1_{X}}_{C}$, while when there is distinguishability, there is a non-zero probability to measure the coin in the state $\ket{1_{X}}_{C}$. This probability is equal to \begin{equation} \label{Delta} \Delta = \mathrm{Pr}(\ket{1_{X}}_{C}) = \frac{1 - \mathrm{Re}[\bra{\Psi_{X}}\ket{\Psi_{Y}}]}{2}. \end{equation} If we substitute the side-channel states from the model of our choice, we obtain the following form: \begin{equation} \label{DeltaModel} \Delta = \frac{ 4 - \bra{0^{\Delta}_{X}}\ket{0^{\Delta}_{Y}} - \bra{0^{\Delta}_{X}}\ket{1^{\Delta}_{Y}} - \bra{1^{\Delta}_{X}}\ket{0^{\Delta}_{Y}} - \bra{1^{\Delta}_{X}}\ket{1^{\Delta}_{Y}} }{8}. \end{equation} In the case when all side-channel states have equal inner products ($\bra{i^{\Delta}_{X}}\ket{j^{\Delta}_{Y}} = S$ for all $i,j$), this form simplifies to \begin{equation} \Delta = \frac{1 - S}{2}. \end{equation} The form (\ref{DeltaModel}) allows us to compare our approach for estimating the side-channel influence with the GLLP--Koashi approach. This value allows calculating a new error value which takes into account the side-channel information leakage.
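The expression for $\Delta$ in terms of the four side-channel overlaps can be checked directly; the following sketch (function name and overlap values are our own, with real overlaps assumed) also verifies the equal-overlap simplification:

```python
def delta_from_overlaps(s0X0Y, s0X1Y, s1X0Y, s1X1Y):
    """Basis imbalance from the four real side-channel overlaps <i_X|j_Y>,
    per the expression derived above: (4 - sum of overlaps) / 8."""
    return (4.0 - s0X0Y - s0X1Y - s1X0Y - s1X1Y) / 8.0

# Equal overlaps S reproduce the simplified form (1 - S) / 2.
S = 0.9
print(delta_from_overlaps(S, S, S, S))          # (1 - 0.9)/2 = 0.05
# Identical side-channel states: no basis information, Delta = 0.
print(delta_from_overlaps(1.0, 1.0, 1.0, 1.0))  # 0.0
```

As expected, $\Delta$ grows as the side-channel states become more distinguishable (smaller overlaps), and vanishes when they coincide.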
For the decoy-state protocol, this error value reads \begin{equation} \label{e1prime} e_{1}^{'} = e_{1} + 4(1 - \Delta^{'})\Delta^{'}(1 - 2e_{1}) + 4(1 - 2\Delta^{'})\sqrt{\Delta^{'}(1 - \Delta^{'})e_{1}(1 - e_{1})}, \end{equation} where \begin{equation} \Delta^{'} = \frac{\Delta}{Y_{1}} \end{equation} is a corrected value which takes into account the ability of Eve to use a lossless channel. The secret key rate with a side channel under the GLLP--Koashi analysis is \begin{equation} \label{RGLLP} R = \frac{1}{2}(Q_{1}(1 - h_{2}(e_{1}^{'})) - fQ_{\mu}h_{2}(E_{\mu}) ). \end{equation} \end{document}
\begin{document} \title{Local Coupling Property for Markov Processes with Applications to L\'evy Processes} \author{Kasra Alishahi\\ Sharif University of Technology\\ Erfan Salavati\\ Faculty of Mathematics and Computer Science,\\ Amirkabir University of Technology (Tehran Polytechnic),\\ P.O. Box 15875-4413, Tehran, Iran.} \date{} \maketitle MSC: 60J25, 60G51, 60E07. \begin{abstract} In this article, we define the new concept of the local coupling property for Markov processes and study its relationship with distributional properties of the transition probability. In the special case of L\'evy processes we show that this property is equivalent to the absolute continuity of the transition probability, and we also provide a sufficient condition for it in terms of the L\'evy measure. Our result is stronger than existing results on the absolute continuity of L\'evy distributions. \end{abstract} \section{Introduction} The coupling method is a very powerful tool for studying properties of stochastic processes. In the case of Markov processes this method is used to prove convergence to the stationary measure. The method can also be used to prove distributional properties of the transition probability measure of a Markov process. A coupling between two stochastic objects means a joint distribution whose marginal distributions are those of the objects. For two stochastic processes, one is mainly concerned with couplings in which the two processes coincide eventually; such couplings are called successful couplings. \section{Coupling property and its equivalent statements} Let $\mathbb{R}^+=[0,\infty)$ and let $\{X_t\}_{t\in\mathbb{R}^+}$ be a continuous-time Markov process on a metric space $(E,\mathcal{E})$ with transition probability measure $P^t(x,dy)$. $\mathcal{B}(E)$ and $\mathcal{C}(E)$ denote the spaces of bounded and of continuous real-valued functions on $E$, respectively.
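Before developing the theory, a small self-contained numerical illustration of the couplings discussed above may help (this example is ours, not from the text): for discrete distributions, a maximal coupling attains $\mathbb{P}(X\ne Y)=\frac{1}{2}\|\mu-\nu\|$, where $\|\cdot\|$ is the total variation norm:

```python
import numpy as np

def maximal_coupling(mu, nu, rng):
    """Sample (X, Y) with marginals mu, nu and minimal P(X != Y).

    With probability sum(min(mu, nu)) both coordinates are drawn from the
    normalized overlap; otherwise X and Y are drawn from the normalized
    residual distributions."""
    overlap = np.minimum(mu, nu)
    p_match = overlap.sum()
    if rng.random() < p_match:
        x = rng.choice(len(mu), p=overlap / p_match)
        return x, x
    x = rng.choice(len(mu), p=(mu - overlap) / (1 - p_match))
    y = rng.choice(len(nu), p=(nu - overlap) / (1 - p_match))
    return x, y

mu = np.array([0.5, 0.3, 0.2])
nu = np.array([0.2, 0.3, 0.5])
rng = np.random.default_rng(1)
samples = [maximal_coupling(mu, nu, rng) for _ in range(20000)]
p_neq = np.mean([x != y for x, y in samples])
print(p_neq)  # ≈ 0.5 * sum|mu_i - nu_i| = 0.3
```

The local coupling property defined below asks for couplings of the processes started at nearby points with exactly this kind of near-optimal meeting behavior on short time scales.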
For each $f\in\mathcal{B}(E)$ we define \[ P^t f(x) = \int_E f(y) P^t(x,dy). \] A function $u(t,x)$ is called space-time harmonic if for all $t,s>0$, \[ u(s,x) = P^t u(s+t,\cdot) (x) = \int_E u(s+t,y) P^t(x,dy). \] For any $x\in E$, let $\mathbb{P}_x(X\in \cdot)$ be the probability measure on $(E^{\mathbb{R}^+},\mathcal{E}^{\mathbb{R}^+})$ induced by the Markov process starting at $x$. For any probability measure $\mu$ on $E$, let $\mathbb{P}_\mu(\cdot)$ be the probability measure induced by the process starting with initial distribution $\mu$; in other words, \[ \mathbb{P}_\mu(\cdot) = \int \mathbb{P}_x(\cdot) d\mu(x). \] For any $t\ge 0$, the shift operator $\theta_t$ on $(E^{\mathbb{R}^+},\mathcal{E}^{\mathbb{R}^+})$ is defined as \[ \theta_t X (\cdot) = X(t+\cdot). \] \begin{definition}[Local Coupling Property] The Markov process $X_t$ is said to have the local coupling property if for any $x\in E$ and $\epsilon>0$, there exists $\delta>0$ such that for any $y$ with $d(y,x)<\delta$, there exists a coupling $(X_t,Y_t)$ between $\mathbb{P}_x$ and $\mathbb{P}_y$ with the property that for \[ T=T_{x,y}=\inf\{t\ge 0: X_t=Y_t\} \] we have $P(T>\epsilon)<\epsilon$. \end{definition} Without loss of generality, we can assume that $X_t=Y_t$ for $t\ge T$, because we can let the two processes move together after time $T$. By $\|\mu\|$ we mean the total variation norm of the signed measure $\mu$. We are now ready to state and prove the main theorem of this section. \begin{theorem}\label{thm:Markov_main} For a Markov process $X_t$ on a Polish space $E$, the following statements are equivalent: \begin{description} \item[(i)] $X$ has the local coupling property.
\item[(ii)] For any $x\in E$ and $t>0$, \[ \lim_{y\to x} \|P^t(y,\cdot)-P^t(x,\cdot)\| = 0. \] \item[(iii)] For any $x\in E$ and any $t>0$, \[ \lim_{y\to x} \|\mathbb{P}_y (\theta_t X\in \cdot)-\mathbb{P}_x (\theta_t X\in \cdot)\| = 0. \] \end{description} \end{theorem} \begin{proof} \begin{description} \item[(i) $\implies$ (ii)] Assume $t>0$ is given and let $(X_t,Y_t)$ be a coupling of $\mathbb{P}_x$ and $\mathbb{P}_y$ which satisfies the definition of the local coupling property for an $\epsilon<t$. Then we have \begin{eqnarray*} \|P^t(y,\cdot)-P^t(x,\cdot)\| & \le & \mathbb{P}(X_t\ne Y_t)\\ &=&\mathbb{P}(T >t) \le \mathbb{P}(T >\epsilon) \le \epsilon. \end{eqnarray*} Now by letting $\epsilon\to 0$ the statement follows. \item[(ii) $\Leftrightarrow$ (iii)] Let $\mu=P^t(x,\cdot)$ and $\nu=P^t(y,\cdot)$. We have \[ \mathbb{P}_y (\theta_t X\in \cdot) - \mathbb{P}_x(\theta_t X\in \cdot) = \mathbb{P}_\nu (\cdot) - \mathbb{P}_\mu(\cdot). \] Now let $\rho=\mu+\nu$ and let $g_\mu=\frac{d\mu}{d\rho}$ and $g_\nu=\frac{d\nu}{d\rho}$ be the Radon-Nikodym derivatives of $\mu$ and $\nu$ with respect to $\rho$, respectively. Hence we have \[ \mathbb{P}_\nu (\cdot) - \mathbb{P}_\mu(\cdot) = \int \mathbb{P}_x (\cdot) g_\nu(x)d\rho(x) - \int \mathbb{P}_x (\cdot) g_\mu(x) d\rho(x) \le \int_{g_\nu\ge g_\mu} (g_\nu - g_\mu) d \rho = \frac{1}{2} \|\nu-\mu\|, \] which implies $\|\mathbb{P}_\nu - \mathbb{P}_\mu \| \le \|\nu-\mu \|$. The other direction of the statement is obvious. \item[(iii) $\implies$ (i)] Given $x$ and $\epsilon>0$, there exists $\delta>0$ such that for any $y$ with $d(y,x)<\delta$, \[ \|\mathbb{P}_y(\theta_\epsilon X\in \cdot) - \mathbb{P}_x (\theta_\epsilon X \in \cdot) \| <2 \epsilon. \] Now consider the maximal coupling between these two measures and denote it by $(\{X_x(t)\}_{t\ge \epsilon},\{X_y(t)\}_{t\ge \epsilon})$. Using the regular conditional probability we extend this coupling to $0\le t <\epsilon$. To do this, we follow the machinery used in \cite{Lindvall}, section 15.
Note that by the Polish assumption, there exist regular versions of the conditional probabilities \[ \mathbb{P}_x(X\in \cdot| \theta_\epsilon X =Z) , \quad \mathbb{P}_y(X\in \cdot| \theta_\epsilon X =Z), \] which we denote by two transition kernels $K_x(Z,\cdot)$ and $K_y(Z,\cdot)$ on $E^{\mathbb{R}^+}\times E^{\mathbb{R}^+}$. Now define a probability measure on $E^{\mathbb{R}^+}\times E^{\mathbb{R}^+}$ by \[ \tilde{\mathbb{P}} (A\times B)= \mathbb{E}( K_x(\theta_\epsilon X_x,A) K_y(\theta_\epsilon X_y,B) ). \] This extends to a coupling of $\mathbb{P}_x$ and $\mathbb{P}_y$. Now we have \[ \mathbb{P}(T>\epsilon) = \mathbb{P}(X_x(\epsilon)\ne X_y(\epsilon)) = \frac{1}{2}\|\mathbb{P}_y(\theta_\epsilon X\in \cdot) - \mathbb{P}_x(\theta_\epsilon X\in \cdot) \| <\epsilon. \] \end{description} \end{proof} \begin{proposition} If $X_t$ is a Markov process with the local coupling property, then \begin{description} \item[(i)] every bounded space-time harmonic function is continuous with respect to $x$; \item[(ii)] for any $t>0$ and any $f\in \mathcal{B}(E)$, we have $P^t f\in \mathcal{C}(E)$. \end{description} \end{proposition} \begin{proof} \begin{description} \item[(i)] Let $u(t,x)$ be a bounded space-time harmonic function and $s>0$. Choose $t>0$ arbitrarily. We have \[ |u(s,y)-u(s,x)| = \Bigl|\int u(s+t,z) P^t(y,dz) - \int u(s+t,z) P^t(x,dz)\Bigr| \le \|u(s+t,\cdot)\| \cdot \|P^t(y,\cdot)-P^t(x,\cdot)\| \to 0.\] \item[(ii)] \[ |P^t f(y)-P^t f(x)| = \Bigl|\int f(z) P^t(y,dz) - \int f(z) P^t(x,dz)\Bigr| \le \|f\| \cdot \|P^t(y,\cdot)-P^t(x,\cdot)\| \to 0.\] \end{description} \end{proof} \section{Local Coupling Property for L\'evy Processes} In this section we show that for L\'evy processes, the local coupling property is equivalent to the absolute continuity of the transition probability measure with respect to the Lebesgue measure. We also provide a sufficient condition for the local coupling property in terms of the L\'evy measure. We first prove a lemma.
\begin{lemma}\label{lem:absolute_cont} Let $\mu$ be a finite Borel measure on $\mathbb{R}$. For $a\in \mathbb{R}$ let $\mu_a$ be the translation of $\mu$ by $a$. Then $\mu$ is absolutely continuous with respect to the Lebesgue measure if and only if \[ \lim_{a\to 0} \| \mu_a - \mu \| = 0\] \end{lemma} \begin{proof} Denote the Lebesgue measure on $\mathbb{R}$ by $\lambda$. To prove the if part, assume $\lambda(A)=0$. Hence for any $x\in\mathbb{R}$, $\lambda(A+x)=0$ and therefore \begin{multline}\label{equation:proof of lemma_Fubini} 0 = \int_\mathbb{R} \lambda(A+x) d\mu(x) = \int_\mathbb{R} \int_\mathbb{R} 1_A(x+y) d\lambda(y) d\mu(x) \\ = \int_\mathbb{R} \int_\mathbb{R} 1_A(x+y) d\mu(x) d\lambda(y) = \int_\mathbb{R} \mu_y(A) d\lambda(y) \end{multline} On the other hand, by assumption, the function $y\mapsto \mu_y(A)$ is continuous at $y=0$ and since it is nonnegative, it follows from \eqref{equation:proof of lemma_Fubini} that $\mu(A)=\mu_0(A) = 0$. To prove the only if part, note that if $\mu \ll \lambda$, it follows from the Radon-Nikodym theorem that for some $f\in L^1(\mathbb{R})$, $d\mu(x) = f(x) d\lambda(x)$ and therefore $d\mu_a(x) = f(x+a) d\lambda(x)$, which implies \[ \| \mu_a - \mu \| = \| f(a+.)-f(.)\|_{L^1} \] and the right hand side tends to 0 as $a\to 0$, by continuity of translation in $L^1(\mathbb{R})$. \end{proof} \begin{remark} The special case that $\mu$ is the transition probability of a L\'evy process has been proved in~\cite{Hawkes}. \end{remark} \begin{theorem}\label{thm:Levy_absol} The L\'evy process $X_t$ has the local coupling property if and only if its transition probability measure is absolutely continuous for every $t>0$. \end{theorem} \begin{proof} By Theorem~\ref{thm:Markov_main} the local coupling property is equivalent to \begin{equation}\label{eq1} \lim_{a\to 0} \|P^t(x+a,.)-P^t(x,.)\| = 0, \quad \forall t>0 \end{equation} On the other hand, for L\'evy processes \[ P^t(x+a,.)= P^t(x,a+.) \] and hence by Lemma~\ref{lem:absolute_cont}, equation \eqref{eq1} is equivalent to the absolute continuity of the transition probability measure.
\end{proof} We state two straightforward consequences of Theorem~\ref{thm:Levy_absol}. \begin{corollary} The Brownian motion has the local coupling property. \end{corollary} \begin{proof} The transition probability measure is Gaussian, which is absolutely continuous. \end{proof} \begin{corollary}\label{cor:sum_ind_Levy} Let $X_t$ and $Y_t$ be two independent L\'evy processes and assume that $X_t$ has the local coupling property. Then so does $X_t+Y_t$. \end{corollary} \begin{proof} The transition probability of the sum is the convolution of the two transition probability measures. Hence if one of them is absolutely continuous, so is the convolution. \end{proof} Now consider a general one dimensional L\'evy process with L\'evy triplet $(b,\sigma,\nu)$. It is clear that $b$ does not play any role in the local coupling property. It also follows from the previous two corollaries that if $\sigma>0$ then the process has the local coupling property. Hence it remains to study the processes with triplets $(0,0,\nu)$. In what follows, we assume that $X_t$ is a L\'evy process with triplet $(0,0,\nu)$. We use the L\'evy-It\"o representation of $X_t$, \[ X_t = \int_{|x|\le 1} x\tilde{N}(t,dx) + \int_{|x|> 1} xN(t,dx)\] where $N$ is the Poisson random measure with intensity $dt\,\nu(dx)$ and $\tilde{N}$ is its compensated measure. The following theorem provides a sufficient condition for the local coupling property of $X_t$ in terms of its L\'evy measure $\nu$. Recall that the minimum of two measures $\mu$ and $\nu$, denoted by $\mu\wedge\nu$, is defined as \[ \mu\wedge\nu = \frac{1}{2} \left(\mu+\nu - |\mu - \nu |\right) \] where $|\mu-\nu|$ denotes the total variation measure of $\mu-\nu$. Now let $\bar{\nu}(dx)=\nu(-dx)$ and define $\rho=\nu\wedge\bar{\nu}$. Also define the auxiliary function $\eta(r)=\int_0^r x^2 \rho(dx)$. \begin{theorem}\label{thm:Levy_suffic} Assume that $\int_0^1 \frac{r}{\eta(r)} dr <\infty$. Then $X_t$ has the local coupling property.
\end{theorem} \begin{remark} In the special case that $\nu$ is symmetric, the above theorem has been implicitly proved in~\cite{Alishahi_Salavati}. \end{remark} In order to prove Theorem~\ref{thm:Levy_suffic} we provide appropriate couplings between two L\'evy processes $X_t$ and $Y_t$ with characteristics $(0,0,\nu)$ and starting respectively from 0 and $a$. Without loss of generality we assume $a>0$. If we denote the distribution of $X_1$ by $\mu$, then the distribution of $Y_1$ is $\mu_a(dx)=\mu(a+dx)$. We denote the left limit of a cadlag process $X_t$ at $t$ by $X_{t-}$ and let $\Delta X_t = X_t - X_{t-}$. Note that it suffices to prove the theorem for the case that $\nu$ is supported in $[-1,1]$, since by the L\'evy-It\"o representation we know that $X_t$ is the sum of the two independent L\'evy processes $ \int_{|x|\le 1} x\tilde{N}(t,dx)$ and $\int_{|x|> 1} xN(t,dx)$, and if the former has the local coupling property then by Corollary~\ref{cor:sum_ind_Levy} so does $X_t$. Hence from now on we assume that $\nu$ is supported in $[-1,1]$. Let $L_1(t)$ and $L_2(t)$ be two independent L\'evy processes with characteristics $(0,0,\nu-\frac{1}{2}\rho)$ and $(0,0,\frac{1}{2}\rho)$ and let $\tilde{N}_1(dt,du)$ and $\tilde{N}_2(dt,du)$ be the corresponding compensated Poisson random measures (cPrms). Let $\mathcal{F}_t$ be the filtration generated by $N_1$ and $N_2$ and define \[ X_t = L_1(t) + L_2(t) \] It is clear that $X_t$ is a L\'evy process with characteristics $(0,0,\nu)$.
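Before turning to the formal construction, the mirror coupling underlying the proof can be illustrated numerically. The following Python sketch is purely illustrative and not part of the proof: it uses a compound Poisson caricature with jump sizes uniform on $[-1,1]$ (a convenient symmetric choice, so that $\nu=\rho$ and every jump is eligible for mirroring) and simulates the gap process $Z_t=Y_t-X_t$. Each jump $u$ of $X$ is copied by $Y$, except that when $0<|u|\le Z_{s-}/2$ the process $Y$ jumps by $-u$, so that $Z$ jumps by $-2u$.

```python
import numpy as np

# Illustrative simulation of the mirror coupling (not part of the proof).
# Jump sizes are uniform on [-1, 1], a symmetric choice, so nu = rho and
# every jump is eligible for mirroring.
rng = np.random.default_rng(0)

a = 0.5            # initial gap Z_0 = Y_0 - X_0
n_jumps = 5000     # number of jumps to simulate

z = a
gaps = [z]
for _ in range(n_jumps):
    u = rng.uniform(-1.0, 1.0)      # common jump proposed for X
    if 0 < abs(u) <= z / 2:
        # Y mirrors the jump: Delta Y = -u, hence Delta Z = -2u
        z = z - 2 * u
    # otherwise Y copies the jump of X and Z is unchanged
    gaps.append(z)

gaps = np.array(gaps)
# The gap never becomes negative, and each jump of Z has magnitude
# at most the pre-jump value of Z, as claimed in the text.
assert np.all(gaps >= 0)
assert np.all(np.abs(np.diff(gaps)) <= gaps[:-1] + 1e-12)
```

The two assertions check exactly the structural facts exploited below: $Z_t$ stays non-negative and its jumps never exceed the pre-jump gap.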
Now consider the following stochastic differential equation \begin{equation}\label{definition:Y_t} Y_t = a+ \int_0^t f(s,Y_{s-},\omega) dX_s \end{equation} where $f:\mathbb{R}^+\times \mathbb{R}\times \Omega \to \mathbb{R}$ is defined by \[f(s,y,\omega ) = \left\{ \begin{array}{cl} -1 & \text{if } 0< \left| \Delta L_2(s) \right| \le \frac{\left|X_{s-} - y\right|}{2}, \\ 1 & \text{otherwise.} \end{array} \right.\] Notice that by equation~\eqref{definition:Y_t}, the jumps of $X$ and $Y$ occur at the same times and have the same magnitude but possibly different directions. Note that the classical existence theorems for solutions of SDEs are not applicable to equation~\eqref{definition:Y_t} since $f$ is not a Lipschitz (not even continuous) function of $y$. In order to prove the existence of a solution, we rewrite~\eqref{definition:Y_t} as an SDE with respect to cPrms as follows, \begin{equation*} Y_t = a + \int_0^t \int_\mathbb{R} u \tilde{N}_1(ds,du) + \int_0^t \int_\mathbb{R} g(s,Y_{s-},u,\omega) u \tilde{N}_2(ds,du) \end{equation*} where \[ g(s,y,u,\omega) = \chi_{\frac{|X_{s-}-y|}{2}<|u|} - \chi_{\frac{|X_{s-}-y|}{2}\ge |u|} \] Now we can use the existence result for such equations (see e.g. Applebaum~\cite{Applebaum}, Theorem 6.2.3). For that, we need to verify the linear growth and Lipschitz conditions for the coefficients. To this end, note that both coefficients are almost surely bounded by $|u|$. Hence the Lipschitz condition reduces to $\int_\mathbb{R} |u|^2 \nu (du) < \infty$, which holds by the integrability condition on the L\'evy measure and the assumption that $\nu$ is supported in $[-1,1]$. The linear growth condition is similar. Hence the equation has a square integrable and cadlag solution. We now claim the following. \begin{lemma} $Y_t$ is a L\'evy process with characteristics $(0,0,\nu)$ starting from $a$.
\end{lemma} \begin{proof} The idea is that $Y_t$ and $X_t$ have the same movements except that $Y$ sometimes jumps in the opposite direction to $X$; notice that if we decompose $\nu$ as $(\nu-\rho)+\rho$, the jumps of $X$ that come from the $\rho$ part have a symmetric distribution and at exactly these jumps $Y$ makes an opposite jump with probability $\frac{1}{2}$. In order to give a rigorous proof, we calculate the conditional characteristic function of $Y_t$, i.e. $\mathbb{E}(e^{i\xi Y_t}|\mathcal{F}_s)$. We have \[ Y_t = a + \int_0^t \int_\mathbb{R} u \tilde{N}_1(ds,du) + \int_0^t \int_\mathbb{R} g(s,Y_{s-},u,\omega) u \tilde{N}_2(ds,du) \] We write It\^o's formula for $e^{i\xi Y_t}$, \begin{multline*} e^{i\xi Y_t} - e^{i\xi Y_s} = \int_s^t \int_\mathbb{R} \left( e^{i\xi Y_{r-} + i\xi u} - e^{i \xi Y_{r-}}\right) \tilde{N}_1(dr , du)\\ + \int_s^t \int_\mathbb{R} \left( e^{i\xi Y_{r-} + i\xi g(r,Y_{r-},u) u} - e^{i \xi Y_{r-}}\right) \tilde{N}_2(dr , du)\\ + \int_s^t \int_\mathbb{R} e^{i\xi Y_{r-}} [e^{i\xi u}-1- i\xi u] (\nu-\frac{1}{2}\rho)(du)dr\\ + \int_s^t \int_\mathbb{R} e^{i\xi Y_{r-}} [e^{i \xi g(r,Y_{r-},u) u} - 1 - i\xi g(r,Y_{r-},u) u]\frac{1}{2}\rho(du)dr \end{multline*} Taking expectations conditioned on $\mathcal{F}_s$ and noting that the first two integrals are martingales, we find \begin{multline} \label{eq:proof_characteristic_1} \mathbb{E}(e^{i\xi Y_t}|\mathcal{F}_s) - e^{i\xi Y_s} = \int_s^t \mathbb{E} \bigg( e^{i\xi Y_{r-}} \int_\mathbb{R} \Big( [e^{i\xi u}-1- i\xi u](\nu-\frac{1}{2}\rho)(du)\\ + [e^{i \xi g(r,Y_{r-},u) u} - 1 - i\xi g(r,Y_{r-},u) u] \frac{1}{2}\rho(du) \Big) |\mathcal{F}_s \bigg) dr \end{multline} Now note that $\rho$ is a symmetric measure and for each $\omega$, the function $u\mapsto g(r,Y_{r-},u,\omega)$ is an even function with values $\pm 1$, hence we have \[ \int_\mathbb{R} \left( e^{i \xi g(r,Y_{r-},u) u} - 1 - i\xi g(r,Y_{r-},u) u \right) \rho(du) = \int_\mathbb{R} \left( e^{i\xi u}-1 -i\xi u \right) \rho(du) \] Substituting in~\eqref{eq:proof_characteristic_1} we find, \begin{multline*} \mathbb{E}(e^{i\xi Y_t}|\mathcal{F}_s) - e^{i\xi Y_s} = \int_s^t \mathbb{E} \bigg( e^{i\xi Y_{r-}} \int_\mathbb{R} [e^{i\xi u}-1- i\xi u] \nu(du) |\mathcal{F}_s \bigg) dr \\ =\psi(\xi) \int_s^t \mathbb{E} \left( e^{i\xi Y_{r-}} |\mathcal{F}_s \right)dr \end{multline*} where $\psi(\xi)=\int_\mathbb{R} [e^{i\xi u}-1- i\xi u] \nu(du)$. Note that $\psi(\xi)$ is indeed the characteristic exponent of $X$. Hence if we define $h(t)=\mathbb{E}(e^{i\xi Y_t}|\mathcal{F}_s)$, then $h$ satisfies the ordinary differential equation $h^\prime(t)=\psi(\xi)h(t)$. This ODE has the unique solution \[ \mathbb{E}(e^{i\xi Y_t}|\mathcal{F}_s) = e^{i\xi Y_s} e^{(t-s)\psi(\xi)} .\] The last equality easily implies that $Y$ is a L\'evy process with characteristics $(0,0,\nu)$. \end{proof} Now let \[ Z_t = Y_t - X_t \] By subtracting the integral representations of $X_t$ and $Y_t$, we find that $Z_t$ satisfies the following SDE, \[ Z_t = a -2 \int_0^t \int_\mathbb{R} \chi_{\frac{|Z_{s-}|}{2}\ge |u|} u \tilde{N}_2(ds,du) \] It is clear from the above equation that the jumps of $Z_t$ always have magnitude at most $Z_{s-}$. Hence, since $Z_0=a>0$, $Z_t$ is always non-negative. Now we define two stopping times \[ \tau_a = \inf \{ t: Z_t = 0 \}, \] \[ \bar{\tau}_a = \inf \{ t: Z_t \notin (0,1) \}. \] Since the jumps of $Z_t$ satisfy $| \Delta Z_s| \le Z_{s^-}$ it is clear that $Z_{\bar{\tau}_a} \le 2$. Note also that since $\rho$ is symmetric, the jumps of $Z_t$ have a symmetric distribution. \begin{lemma}\label{lemma:limit} \[ \lim_{t\to\infty} Z_t =0,\quad a.s. \] \end{lemma} \begin{proof} Since $X_t$ and $Y_t$ are martingales, so is $Z_t$. Moreover, $Z_t$ is non-negative and hence by the martingale convergence theorem it has a limit $Z_\infty$ as $t\to\infty$. On the other hand, by the assumption made on $\eta$, we have $\eta(\epsilon)>0$, and hence $\rho((0,\epsilon))>0$, for any $\epsilon>0$.
Hence, if $Z_\infty=\alpha \ne 0$, we may choose $\epsilon\in(0,\frac{\alpha}{4})$ with $\rho((\epsilon,\frac{\alpha}{4}))>0$, which is possible since $\rho((0,\frac{\alpha}{4}))>0$. As long as $Z_{s-}\ge \frac{\alpha}{2}$, jumps of $Z_t$ of magnitude greater than $2\epsilon$ occur at rate at least $\frac{1}{2}\rho((\epsilon,\frac{\alpha}{4}))>0$, so $Z_t$ would have infinitely many jumps of magnitude greater than $2\epsilon$, which contradicts its convergence. Hence $Z_\infty = 0$ a.s. \end{proof} We define an auxiliary function $g:[0,\infty)\to\mathbb{R}$ by letting \[ g(x) = \int_x^1 \int_y^1 \frac{1}{\eta(r)} dr dy \] \begin{lemma}\label{lemma:g} If $\int_0^1 \frac{r}{\eta(r)} dr <\infty$ then $g$ is finite on $[0,\infty)$. Moreover, it is differentiable on $(0,\infty)$ and its derivative is absolutely continuous and $g^{\prime\prime} (x)= \frac{1}{\eta(x)}$ for almost every $x$. Furthermore, for every $x,y\in[0,\infty)$, we have \[ g(y)-g(x) \ge g^\prime(x) (y-x) + \frac{1}{2} \frac{1}{\eta(x)} (y-x)^2 1_{y<x} \] \end{lemma} \begin{proof} By Fubini's theorem, \[ g(x) = \int_x^1 \int_x^r \frac{1}{\eta(r)} dy dr = \int_x^1 \frac{r-x}{\eta(r)} dr \le \int_x^1 \frac{r}{\eta(r)} dr \] hence $g$ is finite, and on $(0,\infty)$ it is differentiable with absolutely continuous derivative and $g^{\prime\prime} = \frac{1}{\eta} \quad a.e $. To prove the last claim, note that since $g^\prime(x)=-\int_x^1 \frac{1}{\eta(r)}dr$ is increasing, $g$ is convex and therefore \[ g(y)-g(x) \ge g^\prime(x) (y-x) \] If $y<x$, by the integral form of Taylor's theorem we have \[ g(y)-g(x) = g^\prime(x) (y-x) + \int_y^x (t-y) g^{\prime\prime}(t) dt \] where, since $g^{\prime\prime}(t)=\frac{1}{\eta(t)}$ is decreasing, we have $g^{\prime\prime}(t) \ge \frac{1}{\eta(x)}$ for $t\le x$, which implies that in the case $y<x$, \[ g(y)-g(x) \ge g^\prime(x) (y-x) + \frac{1}{2} \frac{1}{\eta(x)} (y-x)^2\] and the proof is complete.
\end{proof} \begin{lemma}\label{lemma:tau_bar} If $\int_0^1 \frac{r}{\eta(r)} dr <\infty$ then \[ \lim_{a\to 0} \mathbb{E} \bar{\tau}_a = 0\] \end{lemma} \begin{proof} We have by Lemma~\ref{lemma:g}, \begin{multline} g(Z_{t \wedge \bar{\tau}_a}) = g(a) + \sum_{s\le t\wedge \bar{\tau}_a} \Delta g(Z_s)\\ \ge g(a) + \sum_{s\le t\wedge \bar{\tau}_a} g^\prime(Z_{s-})\Delta Z_s + \frac{1}{2} \sum_{s\le t\wedge \bar{\tau}_a} \frac{1}{\eta(Z_{s-})} (\Delta Z_s)^2 1_{\Delta Z_s<0} \end{multline} Since the second term on the right hand side is a martingale, we have \begin{equation} \label{equation:Ito's formula2} \mathbb{E} g(Z_{t \wedge \bar{\tau}_a}) \ge g(a) + \frac{1}{2} \mathbb{E} \sum_{s\le t\wedge \bar{\tau}_a} \frac{1}{\eta(Z_{s-})} (\Delta Z_s)^2 1_{\Delta Z_s<0}. \end{equation} Noting that the jumps of $Z_s$ are independent and have symmetric distribution, we conclude \[ \mathbb{E} \sum_{s\le t\wedge \bar{\tau}_a} \frac{1}{\eta(Z_{s-})} (\Delta Z_s)^2 1_{\Delta Z_s<0} = \frac{1}{2} \mathbb{E} \int_0^{t\wedge \bar{\tau}_a} \frac{1}{\eta(Z_{s-})} \eta(Z_s) ds = \frac{1}{2} \mathbb{E} (t \wedge \bar{\tau}_a)\] Substituting in~\eqref{equation:Ito's formula2} implies \[ \frac{1}{4} \mathbb{E} (t \wedge \bar{\tau}_a) \le \mathbb{E}g(Z_{t \wedge \bar{\tau}_a}) - g(a) \] Now letting $t\to \infty$ and noting that $Z_s$ is uniformly bounded by 2 for $s\le \bar{\tau}_a$, we find that \[ \frac{1}{4} \mathbb{E} (\bar{\tau}_a) \le \mathbb{E}g(Z_{\bar{\tau}_a}) - g(a) \] Now let $a\to 0$. By continuity of $g$, we have $g(a)\to g(0)$. On the other hand, \[ \mathbb{P}(Z_{\bar{\tau}_a}\ge 1) \le \mathbb{E}(Z_{\bar{\tau}_a}) = a\to 0 \] where we have used Markov's inequality and the optional stopping theorem.
Now we can write, \begin{multline} \mathbb{E}g(Z_{\bar{\tau}_a}) = \mathbb{E}g(Z_{\bar{\tau}_a}; Z_{\bar{\tau}_a}=0) + \mathbb{E}g(Z_{\bar{\tau}_a};Z_{\bar{\tau}_a}\ge 1) \\ \le g(0) + 2 \mathbb{P}(Z_{\bar{\tau}_a}\ge 1) \to g(0) \end{multline} Hence we find that \[ \lim_{a\to 0} \mathbb{E} \bar{\tau}_a = 0\] \end{proof} We are now ready to prove Theorem~\ref{thm:Levy_suffic}. \begin{proof}[Proof of Theorem~\ref{thm:Levy_suffic}] It suffices to prove that for any $\epsilon>0$, \[\lim_{a\to 0} \mathbb{P} (\tau_a\ge \epsilon) = 0.\] We have \[ \mathbb{P} (\tau_a\ge \epsilon) \le \mathbb{P} (\bar{\tau}_a\ge \epsilon) + \mathbb{P} (\bar{\tau}_a \le \epsilon, Z_{\bar{\tau}_a}\ge 1) \] The first term on the right hand side is less than or equal to $\mathbb{E} \bar{\tau}_a/\epsilon$ and the second term is less than or equal to \[ \mathbb{P} (\sup_{0\le t\le \epsilon} Z_t \ge 1)\] which by Doob's maximal inequality is less than or equal to $\mathbb{E} Z_\epsilon = a$. Hence, \[ \mathbb{P} (\tau_a\ge \epsilon) \le \mathbb{E} \bar{\tau}_a/\epsilon + a \] and from Lemma~\ref{lemma:tau_bar} the statement follows. \end{proof} \end{document}
\begin{document} \title[Statistical stability for piecewise expanding maps]{Statistical stability for multidimensional\\ piecewise expanding maps} \date{\today} \author[J. F. Alves]{Jos\'{e} F. Alves} \address{Jos\'{e} F. Alves\\ Centro de Matem\'{a}tica da Universidade do Porto\\ Rua do Campo Alegre 687\\ 4169-007 Porto\\ Portugal} \email{[email protected]} \urladdr{http://www.fc.up.pt/cmup/jfalves} \author[A. Pumari\~no]{Antonio Pumari\~no} \address{Antonio Pumari\~no\\ Departamento de Matem\'aticas, Facultad de Ciencias de la Universidad de Oviedo, Calvo Sotelo s/n, 33007 Oviedo, Spain.} \email{[email protected]} \author[E. Vigil]{Enrique Vigil} \address{Enrique Vigil\\ Departamento de Matem\'aticas, Facultad de Ciencias de la Universidad de Oviedo, Calvo Sotelo s/n, 33007 Oviedo, Spain.} \email{[email protected]} \maketitle \begin{abstract} We present sufficient conditions for the (strong) statistical stability of some classes of multidimensional piecewise expanding maps. As a consequence we get that a certain natural two-dimensional extension of the classical one-dimensional family of tent maps is statistically stable. \end{abstract} \tableofcontents \section{Introduction} Throughout this paper we deal with multidimensional piecewise expanding maps defined on a compact subset of a Euclidean space. A first approach to this topic was made in dimension one by Lasota and Yorke in \cite{LY73}, where they proved the existence of absolutely continuous invariant probability measures for a class of piecewise $C^2$ expanding maps of the interval. The extension to higher dimensions in general is a very delicate question, mostly because of the intricate geometry of the domains of smoothness and their images under iteration. In recent decades many results have appeared in the literature with several different approaches. A first result in dimension two was obtained by Keller in~\cite{K79}.
In the multidimensional case, G\'ora and Boyarsky in \cite{GB89} proved the existence of absolutely continuous invariant probability measures for maps with a finite number of domains of smoothness under a no-cusp condition on the domains of smoothness. This result was later extended by Adl-Zarabi in \cite{A96} to piecewise expanding maps allowing cusps in the domains of smoothness, and by Alves in~\cite{A00} to piecewise expanding maps with countably many domains of smoothness. Similar results were obtained in the particular case of piecewise linear maps by Buzzi and Tsujii in \cite{B99a,T01b}, and by the same authors in the case of piecewise real analytic expanding maps of the plane in \cite{B00a,T00}. A general result on the existence of absolutely continuous invariant probability measures in any finite dimension was given by Saussol in \cite{S00} for $C^{1+}$ piecewise expanding maps with infinitely many domains of smoothness under some control on the accumulation of discontinuities under iterations of the map. Many concrete situations lead to the appearance of families of piecewise expanding maps under conditions that guarantee the existence of absolutely continuous invariant probability measures, this probability measure being unique in many cases. It is then natural to ask whether these measures depend continuously on the dynamics, i.e.\ whether those maps are statistically stable. This question was addressed in \cite{AV02} for certain robust classes of maps with non-uniform expansion; see also \cite{A04}. In \cite{PRT14}, the authors consider a one-parameter family $(\Lambda_t)_t$ of two-dimensional piecewise linear maps defined on a triangle in $\mathbb{R}^2$. This family $(\Lambda_t)_t$ is closely related to the family of limit return maps arising when certain three-dimensional homoclinic bifurcations take place; see \cite{T01}.
It is shown in \cite[Proposition 6.1]{PRT14} and the comments following it that $\Lambda_t$ becomes the best choice in the piecewise linear setting for describing the dynamics of the original limit return maps given in \cite{T01} when the unstable manifold of the saddle point has dimension two. The results in the present paper have a twofold aim: to give sufficient conditions for the statistical stability of some general classes of multidimensional piecewise expanding maps; and to prove the statistical stability of the family of maps introduced in \cite{PRT14}. This last result will be obtained as an application of our general result. \subsection{Statistical stability} In this work we consider discrete-time dynamical systems defined in a compact region $R\subset \mathbb{R}^d$, for some $d\ge1$. Given a measurable map $\phi:R\to R$, we say that a probability measure $\mu$ on the Borel sets of $R$ is \emph{$\phi$-invariant} if $\mu(\phi^{-1}(A))=\mu(A)$, for any Borel set $A\subset R$. If $\mu(A)=0$ whenever $m(A)=0$, where $m$ denotes the Lebesgue measure on the Borel sets of $\mathbb{R}^d$, then $\mu$ is called \emph{absolutely continuous}. In this case, there exists an $m$-integrable function $h\ge 0$, usually denoted $d\mu/dm$ and called the density of $\mu$ with respect to $m$, such that for any Borel set $A\subset R$ we have $\mu(A)=\int_Ahdm$. A $\phi$-invariant probability measure $\mu$ is called ergodic if $\mu(A)\mu(R\setminus A)=0$, whenever $\phi^{-1}(A)=A$. As a consequence of Birkhoff's Ergodic Theorem we have that any $\phi$-invariant absolutely continuous ergodic probability measure $\mu$ is a \emph{physical measure}, meaning that for a subset of points $x\in R$ with positive Lebesgue measure we have $$\lim_{n\to\infty}\frac{1}{n}\sum_{j=0}^{n-1}f(\phi^j(x))=\int fd\mu$$ for all continuous $f:R\to \mathbb{R}$.
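As a concrete, standard illustration of the notion of physical measure (not one of the families studied in this paper), consider the logistic map $\phi(x)=4x(1-x)$ on $[0,1]$, whose absolutely continuous ergodic invariant probability measure has density $1/(\pi\sqrt{x(1-x)})$. The following Python sketch estimates a Birkhoff average along an orbit; since the density is symmetric about $1/2$, the time average of $f(x)=x$ should approach the space average $\int x\,d\mu=1/2$ for Lebesgue-almost every starting point.

```python
# Numerical illustration of the physical-measure property for the logistic
# map phi(x) = 4x(1-x), whose a.c. ergodic invariant measure has density
# 1/(pi*sqrt(x(1-x))); Birkhoff averages of f(x) = x then converge to 1/2.
def phi(x):
    return 4.0 * x * (1.0 - x)

def birkhoff_average(f, x0, n):
    """Time average (1/n) * sum_{j<n} f(phi^j(x0)) along the orbit of x0."""
    total, x = 0.0, x0
    for _ in range(n):
        total += f(x)
        x = phi(x)
    return total / n

# x0 = 0.3 is an (assumed typical) starting point chosen for illustration.
avg = birkhoff_average(lambda x: x, x0=0.3, n=200_000)
print(avg)  # close to 1/2
```

The choice of the logistic map avoids the floating-point degeneracy of iterating the one-dimensional tent map directly, whose binary-shift dynamics collapses double-precision orbits onto dyadic rationals.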
Let $I$ be a metric space and $(\phi_t)_{t\in I}$ a family of maps $\phi_t:R\rightarrow R$ such that each $\phi_t$ has some absolutely continuous $\phi_t$-invariant probability measure. We say that the family $(\phi_{t})_{t\in I}$ is \emph{statistically stable} if for any $t_0\in I$, any choice of a sequence $(t_n)_n$ in $I$ converging to $t_0$ and any choice of a sequence of absolutely continuous $\phi_{t_n}$-invariant probability measures $(\mu_{t_n})_n$, any accumulation point, in the $L^1$-norm, of the sequence of densities $d\mu_{t_n}/dm$ is the density of an absolutely continuous $\phi_{t_0}$-invariant probability measure. Of course, when each $\phi_t$ has a unique absolutely continuous invariant probability measure~$\mu_t$, statistical stability means that $d\mu_t/dm$ converges in the $L^1$-norm to $d\mu_{t_0}/dm$ when~$t\to t_0$. A strictly weaker notion of statistical stability may be given if we assume only weak* convergence of the measures $\mu_t$ to $\mu_{t_0}$ when $t\to t_0$. \subsection{Piecewise expanding maps} Here we state precisely sufficient conditions for the statistical stability of certain higher dimensional families of $C^2$ piecewise expanding maps with countably many domains of smoothness. We follow the approach in~\cite{A00} which, in its turn, was inspired by \cite{GB89}. Let $R$ be a compact set in $\mathbb{R}^d$, for some $d\ge 1$. For each $1\le p\le \infty$ we denote by $L^p(R)$ the Banach space of functions in $L^p(m)$ with support contained in $R$, endowed with the usual norm $\|\quad\|_p$. Let $\phi: R\to R$ be a map for which there is a (Lebesgue mod 0) partition $\{R_i\}_{i=1}^{\infty}$ of $R$ such that each $R_i$ is a closed domain with piecewise $C^2$ boundary of finite $(d-1)$-dimensional measure and $\phi_i=\phi|R_i$ is a $C^2$ bijection from $\operatorname{int}(R_i)$, the interior of~$R_i$, onto its image with a $C^2$ extension to $R_i$.
We say that $\phi$ is {\it piecewise expanding} if \begin{itemize} \item[(P$_1$)] there is $0<\sigma<1$ such that for every $i\geq 1$ and $x\in\operatorname{int}(\phi(R_i))$ $$\| D\phi_i^{-1}(x)\| <\sigma.$$ \end{itemize} We say that $\phi$ has {\it bounded distortion} if \begin{itemize} \item[(P$_2$)] there is $D\ge 0$ such that for every $i\geq 1$ and $x\in\operatorname{int}(\phi(R_i))$ $$\frac{\left\| D\left(J\circ\phi^{-1}_i\right)(x)\right\|}{\left|J\circ\phi^{-1}_i(x)\right|}\le D,$$ where $J$ is the Jacobian of $\phi$. \end{itemize} Finally, we say that $\phi$ has \emph{long branches} if \begin{enumerate} \item[(P$_3$)] there are $\beta,\rho>0$ and for each $i\ge 1$ there is a $C^1$ unitary vector field $X_i$ in $\partial \phi(R_i)$ such that: \begin{enumerate} \item[(a)] the segments joining each $x\in\partial \phi(R_i)$ to $x+\rho X_i(x)$ are pairwise disjoint and contained in $\phi(R_i)$, and their union forms a neighborhood of $\partial \phi(R_i)$ in $\phi(R_i)$. \item[(b)] for every $x\in\partial \phi(R_i)$ and $v\in T_x\partial \phi(R_i)\setminus\{0\}$ the angle $\angle(v,X_i(x))$ between $v$ and $X_i(x)$ satisfies $|\sin\angle(v,X_i(x))|\geq \beta$. \end{enumerate} \end{enumerate} Here we assume that at the singular points $x\in\partial \phi(R_i)$ where $\partial \phi(R_i)$ is not smooth the vector $X_i(x)$ is a common $C^1$ extension of $X_i$ restricted to each $(d-1)$-dimensional smooth component of $\partial \phi(R_i)$ having $x$ in its boundary. We also assume that the tangent space of any such singular point $x$ is the union of the tangent spaces to the $(d-1)$-dimensional smooth components it belongs to. In the one-dimensional case $d=1$, condition (P$_3$)(a) is clearly satisfied once we take the sets in the partition of $R$ as being intervals whose images $\phi(R_i)$ have sizes uniformly bounded away from zero.
Additionally, condition~(P$_3$)(b) always holds in dimension one, since $\partial\phi(R_i)$ is a $0$-dimensional manifold and so $T_x\partial \phi(R_i)=\{0\}$ for any $x\in \partial\phi(R_i)$. In this case we can even take the optimal value $\beta=1$; see Remark~\ref{re.beta}. \begin{maintheorem}\label{expanding} Let $I$ be a metric space and $(\phi_t)_{t\in I}$ a family of $C^2$ piecewise expanding maps $\phi_t:R\to R$ with bounded distortion and long branches. Assume that for each $t\in I$ \begin{enumerate} \item for each continuous $f:R\to\mathbb{R}$ we have $\|f\circ \phi_{t'}-f\circ\phi_t\|_d\to0$ when $t'\to t$; \item there exist $0<\lambda<1$ and $K>0$ for which $$\sigma_t\left(1+\frac1{\beta_t}\right)\le \lambda\quad\text{and}\quad D_t+\frac{1}{\beta_t\rho_t}+\frac{D_t}{\beta_t} \le K,$$ where $\sigma_t, D_t,\beta_t,\rho_t$ are constants for which (P$_1$), (P$_2$) and (P$_3$) hold for $\phi_t$. \end{enumerate} Then $(\phi_t)_{t\in I}$ is statistically stable. \end{maintheorem} It follows from \cite[Section 5]{A00} that under the assumptions above each $\phi_t$ has a finite number of ergodic absolutely continuous invariant probability measures. The proof of this result uses the space of functions of bounded variation in $\mathbb{R}^d$, which are known to belong to the space $L^p(R)$, with $p=d/(d-1)$; see~\eqref{bvp} below. Observing that $1/p+1/d=1$, this makes the choice of the norm $\|\quad\|_d$ in condition (1) less mysterious; see the proof of Lemma~\ref{p.igual}. Notice that condition (1) in Theorem~\ref{expanding} holds whenever the maps $\phi_t$ are continuous and $\phi_t$ depends continuously (in the $C^0$-norm) on $t\in I$. \subsection{Two-dimensional tent maps} Here we present the family of maps introduced in \cite{PRT14} and give a result on its statistical stability.
We define the family of maps $\Lambda_{t}:T\to T$ on the triangle ${T}={T}_0\cup {T}_1$, where \begin{equation}\label{algo} {T}_0=\{(x,y):0 \leq x \leq 1, \ 0 \leq y \leq x \},\quad {T}_1=\{(x,y):1 \leq x \leq 2, \ 0 \leq y \leq 2-x \}, \end{equation} and \begin{equation}\label{familyt} \Lambda_t(x,y)= \left\{ \begin{array}{ll} (t(x+y),t(x-y)), & \mbox{if } (x,y)\in {T}_0;\\ (t(2-x+y),t(2-x-y)), & \mbox{if } (x,y)\in {T}_1. \end{array} \right. \end{equation} The domains $T_0$ and $T_1$ are separated by a straight line segment $\mathcal C=\{(x,y)\in T: x=1\}$ that we call the \emph{critical set} of $\Lambda_t$. As shown in \cite{PT06}, the map $\Lambda_1$ displays the same properties as the one-dimensional tent map $\lambda_2(x)=1-2|x|$. Among them, the consecutive pre-images $\{\Lambda_1^{-n}(\mathcal{C})\}_{n\in \mathbb{N}}$ of the critical line $\mathcal{C}$ define a sequence of partitions (whose diameter tends to zero as $n$ goes to infinity) of $T$, allowing them to conjugate $\Lambda_1$ to a one-sided shift on two symbols. Hence, it easily follows that $\Lambda_1$ is transitive in $T$. Furthermore, for every point $(x_0,y_0)\in T$ whose orbit never hits the critical line, the Lyapunov exponent of $\Lambda_1$ along the orbit of $(x_0,y_0)$ is positive (and coincides with $\frac{1}{2}\log{2}$) in every nonzero direction. Finally, an absolutely continuous ergodic invariant probability measure can be constructed for $\Lambda_1$; see \cite{PT06}. Because of this, $\Lambda_1$ was called the \textit{two-dimensional tent map}. Since the parameter $t$ in (\ref{familyt}) essentially gives the rate of expansion for $\Lambda_t$ (playing the same role as the parameter $a$ for $\lambda_a(x)=1-a|x|$), the family $(\Lambda_t)_t$ can be considered as a natural extension of the one-dimensional family of tent maps and naturally called a \textit{family of two-dimensional tent maps}. The results obtained in \cite{PT06} for $t=1$ were extended to a larger set of parameters.
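As a quick sanity check, which is not needed for the results below, the forward invariance of $T$ under $\Lambda_t$ for $0<t\le 1$ can be verified numerically. The following Python sketch iterates the map \eqref{familyt} from an arbitrarily chosen interior point and checks that all iterates remain in $T$; the parameter values are arbitrary choices in $[\tau,1]$.

```python
# A direct numerical check (illustration only) that the triangle T is
# forward invariant under the two-dimensional tent map Lambda_t for t <= 1.
def in_T(x, y, eps=1e-12):
    """Membership test for T = T_0 U T_1, with a small float tolerance."""
    in_T0 = -eps <= y <= x + eps and x <= 1 + eps
    in_T1 = 1 - eps <= x <= 2 + eps and -eps <= y <= 2 - x + eps
    return in_T0 or in_T1

def Lambda(t, x, y):
    if x <= 1:                                 # branch on T_0
        return t * (x + y), t * (x - y)
    return t * (2 - x + y), t * (2 - x - y)    # branch on T_1

for t in (0.883, 0.95, 1.0):
    x, y = 0.3, 0.1          # an arbitrary interior starting point
    for _ in range(2000):
        x, y = Lambda(t, x, y)
        assert in_T(x, y), (t, x, y)
```

On the critical line $x=1$ the two branches of \eqref{familyt} agree, so the branch chosen by the code there is immaterial.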
More precisely, it was proved in \cite{PRT14a} that for each $t \in [\tau,1] $, with $ \tau=\frac{1}{\sqrt{2}}(\sqrt{2}+1)^\frac{1}{4}\approx 0.882,$ the map $\Lambda_t$ exhibits a \emph{strange attractor} $A_t\subset T$: $\Lambda_t$ is (strongly) transitive in $A_t$, the periodic orbits are dense in $A_t$, and there exists a dense orbit in $A_t$ with two positive Lyapunov exponents. Furthermore, $A_t$ supports a unique absolutely continuous $\Lambda_t$-invariant ergodic probability measure~$\mu_t$. As an application of Theorem~\ref{expanding} we shall obtain the following result. \begin{maintheorem}\label{main} The family $(\Lambda_t)_{t\in [\tau,1]}$ is statistically stable. \end{maintheorem} As each $\Lambda_t$ has a unique absolutely continuous invariant probability measure $\mu_t$, statistical stability means in this case that $d\mu_t/dm$ converges in the $L^1$-norm to $d\mu_{t_0}/dm$ when~$t\to t_0$, for each $t_0\in[\tau,1]$. \section{Functions of bounded variation}\label{variation} The main ingredient for the proof of Theorem~\ref{expanding} is the notion of variation for functions in multidimensional spaces. We adopt the definition given in \cite{G84}. Given $f\in L^1(\mathbb{R}^d)$ with compact support we define the {\it variation} of $f$ as $$V(f)=\sup\left\{\int_{\mathbb{R}^d}f\mbox{div}(g)dm\,:\,g\in C_0^1(\mathbb{R}^d,\mathbb{R}^d)\text{ and } \|g\|\leq 1\right\},$$ where $C_0^1(\mathbb{R}^d,\mathbb{R}^d)$ is the set of $C^1$ functions from $\mathbb{R}^d$ to $\mathbb{R}^d$ with compact support, $\mbox{div}(g)$ is the divergence of $g$ and $\|\quad\|$ is the sup norm in $C_0^1(\mathbb{R}^d,\mathbb{R}^d)$. Given a bounded set $R\subset\mathbb{R}^d$ we consider the space of {\em bounded variation} functions in $L^1(R)$ $$BV(R)=\left\{ f\in L^1(R):V(f)<+\infty\right\}.$$ Contrary to the classical one-dimensional definition of bounded variation, a multidimensional bounded variation function need not be bounded; see \cite{GB92}.
However, by Sobolev's Inequality (see e.g. \cite[Theorem 1.28]{G84}) there is some constant $C>0$ (only depending on the dimension $d$) such that for any $f\in BV(R)$ \begin{equation}\label{bvp} \left(\int|f|^pdm_d\right)^{1/p}\leq C\;V(f), \quad \mbox{with}\quad p=\frac{d}{d-1}. \end{equation} This in particular gives $BV(R)\subset L^p(R)$. We shall use the following properties of bounded variation functions whose proofs may be found in \cite{EG92} or \cite{G84}: \begin{itemize} \item[(B$_1$)] $BV(R)$ is dense in $L^1(R)$; \item[(B$_2$)] if $(f_k)_{k}$ is a sequence in $BV(R)$ converging to $f$ in the $L^1$-norm, then $V(f)\leq \liminf_k V(f_k)$; \item[(B$_3$)] if $(f_k)_k$ is a sequence in $BV(R)$ such that $\big(\|f_k\|_1\big)_k$ and $\big(V(f_k)\big)_k$ are bounded, then $(f_k)_k$ has some subsequence converging in the $L^1$-norm to a function in $BV(R)$. \end{itemize} \section{Piecewise expanding maps}\label{multidimensional} In this section we prove Theorem~\ref{expanding}. Let $\{R_i^t\}_{i=1}^\infty$ be the domains of smoothness of $\phi_t$ with $t\in I$ satisfying the assumptions of Theorem~\ref{expanding} and define $\phi_{t,i}=\phi_t\vert_{R_i^t }$ for all $i\ge 1$. For each $t\in I$ we consider the \emph{Perron-Frobenius operator} $$P_t:L^1(R)\longrightarrow L^1(R)$$ defined for $f\in L^1(R)$ as $$P_t f=\sum_{i=1}^{\infty}\frac{f\circ\phi_{t,i}^{-1}}{|J\circ\phi_{t,i}^{-1}|} \chi_{\phi_t(R_i^t)}.$$ It is well known that the following two properties hold for each $P_t$. \begin{itemize} \item[(C$_1$)] $\|P_t f\|_1\leq \| f\|_1$ for every $f\in L^1(R)$; \item[(C$_2$)] $P_t f=f$ if and only if $f$ is the density of an absolutely continuous $\phi_t$-invariant probability measure. 
\end{itemize} Considering $0<\lambda<1$ and $K>0$ as in the statement of Theorem~\ref{expanding}, the proof of the next lemma follows immediately from~\cite[Lemma~5.4 \& Lemma~5.5]{A00} with $$K_1=K\sum_{j=0}^\infty \lambda^j.$$ \begin{lemma} \label{lema3} Given $t\in I$ and $j\geq 1$ we have for each $f\in BV(R)$ $$V(P_t^jf)\leq \lambda^jV(f)+K_1\|f\|_1.$$ \end{lemma} \begin{remark}\label{re.beta} The proof of \cite[Lemma~5.4]{A00} uses \cite[Lemma 3]{GB89} applied to the sets $S=\phi(R_i)$, which gives for a function $f\in C^1(S)$, \begin{equation}\label{eq.varia} \int_{\partial S}|f|dm\leq \frac{1}{\beta}\left(\frac{1}{\rho}\int_{S}|f|dm+ \int_{S}\|Df\|dm\right). \end{equation} In the one-dimensional case we have for any interval $S$ and $x\in S$ $$|f(x)|\leq \frac{1}{|S|}\int_{S}|f|dm+ \int_{S}|Df|dm,$$ which yields an analogue of \eqref{eq.varia} with $\beta =1$. \end{remark} In the proof of the result below we follow some standard arguments with functions of bounded variation, namely those used in \cite{LY73} for the one-dimensional case. \begin{proposition} \label{pr.dakhc} Given $t\in I$ and $f\in L^1(R)$, the sequence $1/n\sum_{j=0}^{n-1}P^j_t f$ has some accumulation point in the $L^1$-norm. Moreover, any such accumulation point belongs to $BV(R)$ and has variation bounded by $ 4K_1\|f\|_1$. \end{proposition} \begin{proof} Given $f\in L^1(R)$, by property (B$_1$) we may consider a sequence of functions $(f_k)_k$ in $BV(R)$ converging to $f$ in the $L^1$-norm. Without loss of generality we may assume that $\|f_k\|_1\leq 2\|f\|_1$ for every $k\geq 1$.
It follows from Lemma~\ref{lema3} that for each $k\geq 1$ and large $j$ we have $$ V(P^j_tf_k)\leq \lambda^jV(f_k)+K_1\|f_k\|_1\leq 3K_1\|f\|_1.$$ So, for large $n$ we have $$V\left(\frac{1}{n}\sum_{j=0}^{n-1}P^j_t f_k\right)\leq 4K_1\|f\|_1.$$ Using that $\|f_k\|_1\leq 2\|f\|_1$ for every $k\geq 1$, it easily follows from (C$_1$) that $$\left\|\frac{1}{n}\sum_{j=0}^{n-1}P^j_t f_k\right\|_1\leq 2\|f\|_1.$$ Then it follows from (B$_3$) that there exist some $g_k\in BV(R)$ and a sequence $(n_i)_i$ such that $1/n_i\sum_{j=0}^{n_i-1}P_t^j f_k$ converges in the $L^1$-norm to $g_k$ as $i$ goes to $+\infty$. Moreover, by (B$_2$) we have $V(g_k)\leq 4K_1\|f\|_1$ for every $k\ge 1$. Hence, we may apply the same argument to the sequence $(g_k)_k$ and obtain a subsequence $(k_i)_i$ such that $(g_{k_i})_i$ converges in the $L^1$-norm to some $g\in BV(R)$ with $V(g)\leq 4K_1\|f\|_1$. Hence, there must be some sequence $(n_\ell)_\ell$ converging to $+\infty$ for which $1/n_\ell\sum_{j=0}^{n_\ell-1}P^j_t f_{k_\ell}$ converges to $g$ in the $L^1$-norm as $\ell\to +\infty$. On the other hand, $$\left\|\frac{1}{n_\ell}\sum_{j=0}^{n_\ell-1}\big(P^j_t f_{k_\ell}-P^j_tf\big)\right\|_1\leq\frac{1}{n_\ell}\sum_{j=0}^{n_\ell-1} \left\|f_{k_\ell}-f\right\|_1=\left\|f_{k_\ell}-f\right\|_1$$ and this last term goes to 0 as $\ell\rightarrow +\infty$. This clearly gives that ${1}/{n_\ell}\sum_{j=0}^{n_\ell-1}P^j_t f$ converges to $g$ in the $L^1$-norm. To prove the second part of the proposition, consider some subsequence of $1/n\sum_{j=0}^{n-1}P^j_t f$ converging to $f_0$ in the $L^1$-norm. Letting that subsequence play the role of the whole sequence in the argument above, we easily see that $f_0$ satisfies the conclusion, by uniqueness of the limit. \end{proof} \begin{corollary}\label{co.bueno} If $h_t$ is the density of an absolutely continuous $\phi_t$-invariant probability measure, then $h_t\in BV(R)$ and $V(h_t)\le 4K_1$.
\end{corollary} \begin{proof} Let $h_t$ be the density of an absolutely continuous $\phi_t$-invariant probability measure. We have from property (C$_2$) that $P_t^jh_t=h_t$ for all $j\ge 1$. This implies that the sequence $1/n\sum_{j=0}^{n-1}P^j_t h_t$ is constant and equal to $h_t$, and so the result follows. \end{proof} Now we are in a position to conclude the proof of Theorem~\ref{expanding}. Let $(t_n)_n$ be a sequence in $I$ converging to some $t_0\in I$. Assume that for each $n\ge 1$ we have an absolutely continuous $\phi_{t_n}$-invariant probability measure $\mu_n$ and consider $$h_n=\frac{d\mu_n}{dm}.$$ Using the fact that each $h_n$ is the density of a probability measure and Corollary~\ref{co.bueno} we have for all $n\ge 1$ $$\|h_n\|_1= 1\quad\text{and}\quad V(h_n)\le 4K_1.$$ Hence, by (B$_2$) and (B$_3$) there exists $h_0\in BV(R)$ with $V(h_0)\le 4K_1$ such that some subsequence of $(h_n)_n$ converges to $h_0$ in the $L^1$-norm; for notational simplicity we keep denoting this subsequence by $(h_n)_n$. Let $\mu_0$ be the probability measure on $R$ whose density with respect to $m$ is $h_0$. Theorem~\ref{expanding} is now a consequence of the following lemma. \begin{lemma}\label{p.igual} $\mu_0$ is a $\phi_{t_0}$-invariant measure. \end{lemma} \begin{proof} Since the sequence $(h_n)_n$ converges to $h_0$ in the $L^1$-norm, it easily follows that $(\mu_n)_n$ converges to $\mu_0$ in the weak* topology. Thus, given any continuous $f\colon R\rightarrow\mathbb{R}$ we have $$ \int fd\mu_n\longrightarrow \int fd\mu_0,\quad\mbox{when } n\rightarrow\infty. $$ On the other hand, since $\mu_n$ is $\phi_{t_n}$-invariant we have $$ \int fd\mu_n=\int (f\circ\phi_{t_n})d\mu_n,\quad\mbox{for every }n. $$ It is enough to prove that \begin{equation*} \int (f\circ\phi_{t_n})d\mu_n \longrightarrow \int (f\circ\phi_{t_0}) d\mu_0, \quad\mbox{when } n\rightarrow\infty.
\end{equation*} We have \begin{eqnarray*} \lefteqn{ \left|\int (f\circ\phi_{t_n})d\mu_n -\int (f\circ\phi_{t_0}) d\mu_0\right| }\\ & &\hspace{1cm} \le \left|\int (f\circ\phi_{t_n})d\mu_n - \int (f\circ\phi_{t_0}) d\mu_n\right| +\left|\int (f\circ\phi_{t_0}) d\mu_n - \int (f\circ\phi_{t_0}) d\mu_0\right|\\ & &\hspace{1cm} \le \int \left|f\circ\phi_{t_n}-f\circ\phi_{t_0}\right| d\mu_n +\left|\int (f\circ\phi_{t_0}) d\mu_n - \int (f\circ\phi_{t_0}) d\mu_0\right|\\ & &\hspace{1cm} = \int \left|f\circ\phi_{t_n}-f\circ\phi_{t_0}\right| h_n\,dm +\left|\int (f\circ\phi_{t_0}) (h_n-h_0) \,dm\right|. \end{eqnarray*} Now using \eqref{bvp} we easily get that each $h_n\in L^p(R)$ with $p=d/(d-1)$ and $$\|h_n\|_p\le CV(h_n)\le 4CK_1.$$ Observing that $1/p+1/d=1$, then by H\"older's Inequality we get $$\int \left|f\circ\phi_{t_n}-f\circ\phi_{t_0}\right| h_n\,dm\le \|f\circ \phi_{t_n}-f\circ\phi_{t_0}\|_d\cdot\|h_n\|_p\le 4CK_1 \|f\circ \phi_{t_n}-f\circ\phi_{t_0}\|_d,$$ and this clearly converges to zero, when $n\to+\infty$, by assumption (1) in the statement of Theorem~\ref{expanding}. On the other hand, as $f$ is bounded we have $$\left|\int (f\circ\phi_{t_0}) (h_n-h_0) \,dm\right|\le \|f\circ\phi_{t_0}\|_\infty\cdot\|h_n-h_0\|_1$$ and this clearly converges to 0 when $n\to+\infty$ as well. \end{proof} \section{Two-dimensional tent maps} In this section we shall prove Theorem~\ref{main}. The idea is to obtain it as a corollary of Theorem~\ref{expanding}. As observed before, each $\Lambda_t$ is \emph{strongly transitive}: any open set becomes the whole space after a finite number of iterations of $\Lambda_t$. This implies that the absolutely continuous $\Lambda_t$-invariant ergodic probability measure~$\mu_t$ must be unique. Moreover, any power of $\Lambda_t$ has a unique absolutely continuous invariant ergodic probability measure as well, which must necessarily coincide with~$\mu_t$. Thus, it is enough to obtain the statistical stability for some power of the maps in our family.
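Before turning to the two-dimensional family, the objects of the previous section can be illustrated in a one-dimensional toy case (an illustrative sketch only, not used in the proofs; numpy is assumed). For the full tent map $T(x)=1-|2x-1|$ on $[0,1]$ the Perron--Frobenius operator is $(Pf)(x)=\frac12\big(f(x/2)+f(1-x/2)\big)$, Lebesgue measure is invariant, and the Ces\`aro averages $\frac1n\sum_{j<n}P^jf$ of any density converge in $L^1$ to the invariant density $h\equiv 1$:

```python
import numpy as np

# One-dimensional toy model: full tent map T(x) = 1 - |2x - 1| on [0,1].
# Its two inverse branches are x/2 and 1 - x/2, each with |T'| = 2, so
# the Perron-Frobenius operator is (Pf)(x) = (f(x/2) + f(1 - x/2)) / 2.
x = np.linspace(0.0, 1.0, 4001)

def P(f):
    # f is sampled on the grid x; evaluate the branches by interpolation
    return 0.5 * (np.interp(x / 2, x, f) + np.interp(1 - x / 2, x, f))

def l1(f):
    # crude L^1 norm on the uniform grid
    return np.mean(np.abs(f))

# (C_1): P is an L^1 contraction
f = 3 * x**2                         # a non-invariant probability density
assert l1(P(f)) <= l1(f) + 1e-3

# (C_2): the constant density h = 1 is fixed, i.e. Lebesgue is T-invariant
one = np.ones_like(x)
assert np.max(np.abs(P(one) - one)) < 1e-9

# Cesaro averages (1/n) sum_{j<n} P^j f converge in L^1 to h = 1, as in
# the accumulation-point statement of the previous section
iterates = [f]
for _ in range(30):
    iterates.append(P(iterates[-1]))
cesaro = np.mean(iterates, axis=0)
assert l1(cesaro - one) < 0.05
```

Of course, in the multidimensional setting of Theorem~\ref{expanding} no such explicit formula is available, which is what makes the bounded variation machinery necessary.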
We are going to see that the family $(\Lambda_t^3)_{t\in[\tau,1]}$ satisfies the conditions of Theorem~\ref{expanding}. Namely, each $\Lambda_t^3:T\to T$ is a $C^2$ piecewise expanding map with bounded distortion and long branches with constants $\sigma_t, D_t,\beta_t,\rho_t$ satisfying (P$_1$), (P$_2$) and (P$_3$), and \begin{equation}\label{eq.dt} \sigma_t\left(1+\frac1{\beta_t}\right)\le \lambda\quad\text{and}\quad D_t+\frac{1}{\beta_t\rho_t}+\frac{D_t}{\beta_t} \le K, \end{equation} for some choice of uniform constants $0<\lambda<1$ and $K>0$; observe that, as the maps $\Lambda_t^3$ vary continuously with the parameter $t$, the first condition in Theorem~\ref{expanding} is trivially satisfied. From the definition of $\Lambda_t$ in \eqref{algo} and \eqref{familyt} we obviously have that $T_0$ and $T_1$ are the domains of smoothness of $\Lambda_t$. The map $\Lambda_t$ is piecewise linear with \[ D\Lambda_t(x)= \left( \begin{array}{cc} t & t \\ t & -t \end{array} \right) \] for $x\in T_0\setminus\mathcal C$, and \[ D\Lambda_t(x)= \left( \begin{array}{cc} - t & t \\ - t & -t \end{array} \right) \] for $x\in T_1\setminus\mathcal C$. From here we deduce that for all $x\in T\setminus\mathcal C$ we have \begin{equation}\label{caza} \|D\Lambda_t^{-1}(x)\|\le \frac{1}{2t}. \end{equation} Now take $R=T$ and $\{R_i^t\}_{i=1}^{8}$ the (Lebesgue mod 0) partition of $R$ given by the domains of smoothness of $\Lambda_t^3$. \begin{figure} \caption{\em Smoothness domains: (a) for $ \Lambda_t $\quad (b) for $ \Lambda_t^2 $\quad (c) for $ \Lambda_t^3 $} \label{amigao} \end{figure} From \eqref{caza} we easily deduce that \begin{equation*}\label{caza2} \|(D\Lambda_t^3)^{-1}(x)\|\le \frac{1}{8t^3}=:\sigma_t<1 \end{equation*} and so property (P$_1$) holds for each $t\in[\tau,1]$; recall that $\tau\approx 0.88$. Since $\Lambda_t^3$ is linear on each domain of smoothness, it has zero distortion. Thus we obtain property (P$_2$) with $D_t=0$, for each $t\in[\tau,1]$. Let us now check (P$_3$).
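Before doing so, we note that the constants obtained so far can be sanity-checked numerically (an illustrative aside, not part of the proof; numpy is assumed, and the value $\beta_t=\sin(\pi/8)$ is anticipated from the verification of (P$_3$) below):

```python
import numpy as np

# The two branch derivatives of Lambda_t, up to the common factor t
# (taken from the matrices displayed above).
A = np.array([[1.0, 1.0], [1.0, -1.0]])    # on T_0
B = np.array([[-1.0, 1.0], [-1.0, -1.0]])  # on T_1

# Both are sqrt(2) times an orthogonal matrix: M^T M = 2 I, so each
# branch of Lambda_t is a conformal expansion by the factor sqrt(2) t.
for M in (A, B):
    assert np.allclose(M.T @ M, 2 * np.eye(2))

# sigma_t = 1/(8 t^3) < 1 throughout the parameter range [tau, 1]
tau = (1 / np.sqrt(2)) * (np.sqrt(2) + 1) ** 0.25
assert abs(tau - 0.882) < 1e-3
ts = np.linspace(tau, 1.0, 100)
sigma = 1.0 / (8 * ts**3)
assert sigma.max() < 1.0

# With beta_t = sin(pi/8) and D_t = 0, the uniform bound
# sigma_t (1 + 1/beta_t) < 1 required in (eq.dt) also holds.
assert (sigma * (1 + 1 / np.sin(np.pi / 8))).max() < 1.0
```

In particular the first bound in \eqref{eq.dt} holds with room to spare: the maximum of $\sigma_t(1+1/\beta_t)$ over $[\tau,1]$ is attained at $t=\tau$ and is roughly $0.66$.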
As each $\Lambda_t^3$ is linear on each $R_i^t$ and preserves angles, it is enough to obtain the geometric property (P$_3$) for the domains $R_i^t$ instead of their images. Since the pre-image of the critical set $\mathcal C$ delimits the boundary of the domains of smoothness, it easily follows that the boundary of each $R_i^t$ is formed by at most five straight line segments with slope $-1$, 0, 1 or $\infty$ meeting at an angle of at least~$\pi/4$; see Figure~\ref{amigao}(c). \begin{figure} \caption{\em A long branch for $ \Lambda_t^3 $} \label{dois} \end{figure} Then, it is not hard to check that for every $t\in[\tau,1]$ and $i=1,\dots, 8$ there is a piecewise $C^1$ unit vector field $X_i^t$ on $\partial R_i^t$ such that $$|\sin\angle(v,X_i^t(x))|\geq \sin\frac\pi8=:\beta_t$$ for every $x\in \partial R_i^t$ and $v\in T_x\partial R_i^t\setminus\{0\}$; see Figure~\ref{dois}. To prove the existence of $\rho_t$ it is enough to observe that the domains of smoothness of $\Lambda_t^3$ depend continuously on the parameter $t$ as illustrated in Figure~\ref{tres}, and so it is possible to choose a uniform value of $\rho$ such that~(P$_3$) holds for each $t\in[\tau,1]$. \begin{figure} \caption{\em Domains of smoothness: (a) for $ \Lambda_1^3 $\quad (b) for $ \Lambda_\tau^3$} \label{tres} \end{figure} Altogether this shows that there are $0<\lambda<1$ and $K>0$ such that $$\sigma_t\left(1+\frac1{\beta_t}\right)\le \frac1{8t^3}\left(1+\frac1{\sin(\pi/8)}\right)\le\lambda$$ for every $t\in[\tau,1]$ (recall that $\tau\approx 0.88$) and $$ D_t+\frac{1}{\beta_t\rho_t}+\frac{D_t}{\beta_t} =\frac1{\rho\sin(\pi/8)} =: K,$$ thus proving \eqref{eq.dt} and hence Theorem~\ref{main}. \end{document}
\begin{document} \title{Coexistence in competing first passage percolation with conversion} \author{Thomas Finn\footnote{[email protected], University of Bath, Department of Mathematical Sciences, supported by a scholarship from the EPSRC Centre for Doctoral Training in Statistical Applied Mathematics at Bath (SAMBa), under the project EP/L015684/1.} \quad Alexandre Stauffer\footnote{[email protected], Universit\`a Roma Tre, Dipartimento di Matematica e Fisica; University of Bath, Department of Mathematical Sciences, supported by EPSRC Fellowship EP/N004566/1.}} \date{} \maketitle \begin{abstract} We introduce a two-type first passage percolation competition model on infinite connected graphs as follows. Type 1 spreads through the edges of the graph at rate $1$ from a single distinguished site, while all other sites are initially vacant. Once a site is occupied by type 1, it converts to type 2 at rate $\rho>0$. Sites occupied by type 2 then spread at rate $\lambda>0$ through vacant sites \emph{and} sites occupied by type 1, whereas type 1 can only spread through vacant sites. If the set of sites occupied by type 1 is non-empty at all times, we say type 1 \emph{survives}. In the case of a regular $d$-ary tree for $d\geq 3$, we show type 1 can survive when it is slower than type 2, provided $\rho$ is small enough. This is in contrast to when the underlying graph is $\mathbb{Z}^d$, where for any $\rho>0$, type 1 dies out almost surely if $\lambda>1$. \end{abstract} \section{Introduction} Consider the following two-type first passage percolation model on an infinite connected graph $G$. Each site can either be occupied by type 1, type 2 or be vacant according to the following dynamics. At time $0$, a distinguished site is occupied by type 1 while every other site is vacant. Sites occupied by type 1 attempt to occupy neighbouring vacant sites at rate 1. Once a site is occupied by type 1, it is converted to type 2 at rate $\rho>0$.
That is, we define the collection of random variables $\left\{\mathcal{I}_x\right\}_{x\in G}$ of conversion times, which are i.i.d.\ exponential random variables of rate $\rho>0$, assigned to each site of $G$. Once a site is occupied by type 1, it waits for its respective conversion time to expire before converting to type 2. Sites occupied by type 2 then spread type 2 to vacant sites \emph{and} sites occupied by type 1 at rate $\lambda>0$. This model can be seen as a variant of the chase-escape dynamics in predator-prey models (see Section~\ref{sec:related_work} for more details). In these models, there is initially a single predator that evolves to block the spread of a species of prey. A natural interpretation for our model is as a spreading infection where individuals are either aware or unaware of their infected status. Unaware individuals spread the infection to nearby individuals but become aware of their infected status after a certain time elapses. Aware individuals try to warn neighbouring individuals, who become aware of the spread of the infection even if they have not been infected. Uninfected individuals that are aware take the necessary measures (for example, self-isolating) to avoid infection and do not get infected. Can the unaware individuals coexist with aware individuals for all time? Equivalently, can the infection reach an unbounded number of individuals? Another motivation for us to introduce this model comes from a recent way of analysing strongly interacting particle systems through growth models; for example, the analysis of multiparticle diffusion limited aggregation by Sidoravicius and Stauffer~\cite{sidoravicius2019multi} and the analysis of a heterogeneous spread of infection model by Dauvergne and Sly~\cite{dauvergne2021spread}. We believe the competition process we introduce is a natural model for such applications; we discuss this further in Section~\ref{sec:related_work}.
The main interest of this paper is understanding coexistence regimes in this model on different graphs. We prove that, on the regular tree, type 1 can survive even if it is slower than type 2 (i.e.\ $\lambda$ is larger than one), as long as $\rho$ is sufficiently small (cf.\ Theorem~\ref{thm:crit_lambda_tree}). Then we prove that such behaviour is impossible on the lattice: type 1 dies out even if it is only slightly faster than type 2, for all $\rho>0$ (cf.\ Theorem~\ref{thm:lattice}). A major difficulty in analysing this model is the counter-intuitive lack of monotonicity. If we increase $\rho$ or $\lambda$, it seems we can only decrease the probability that type 1 survives. Surprisingly, proving this remains an open problem. The issue is that the model is non-monotone in the sense that the standard coupling argument fails to hold (unlike other competition models such as the two-type Richardson model, as discussed in Section~\ref{sec:related_work}). Models that lack monotonicity require a careful analysis as they include the possibility of many phase transitions occurring. The subtle behaviour of processes that lack monotonicity has been studied in related models in Candellero and Stauffer \cite{candellero2020} and Deijfen and H\"aggstr\"om \cite{def}. Another difficulty the model poses is its non-equilibrium dynamics and long-range correlations between the occupancy of sites. For example, to determine whether a site is ever occupied by type 1, one may need non-local information about the first times other sites are occupied by type 1 and how type 2 spreads from them. \subsection{Our results} \label{sec:results} The main result of this paper is the following. On the $d$-ary tree for $d\geq3$ (i.e.\ the infinite tree where all vertices have \emph{degree} equal to $d$), we show type 1 can survive even in a regime where it is slower than type 2. \begin{theorem} \label{thm:crit_lambda_tree} Fix $d\geq3$ and consider the $d$-ary tree.
There exists $\lambda_0=\lambda_0(d)>1$ such that if $\lambda\in(0,\lambda_0)$ and $\rho$ is small enough, then type 1 survives with positive probability. \end{theorem} Recalling the interpretation of the model as the spread of infection and awareness, we deduce the following from Theorem~\ref{thm:crit_lambda_tree}. In the case of the $d$-ary tree, the infection can survive even if awareness spreads faster than the infection, provided infected individuals do not become aware until a typically large time has passed. \begin{remark} Theorem~\ref{thm:crit_lambda_tree} may naturally be extended to all supercritical Galton-Watson trees of uniformly bounded degree. The requirement of uniformly bounded degrees is needed to appeal to a result in branching random walks by Addario-Berry and Reed \cite{addario2009minima} (see Theorem~\ref{thm:BRW}). The behaviour of the model on more general trees remains an open problem. For example, if the degree distribution follows a power law, can type 1 survive for all $\lambda\in(0,\infty)$? \end{remark} Our next result establishes that the behaviour on the regular tree is fundamentally different from that on the lattice. More precisely, on $\mathbb{Z}^d$ for $d\geq2$, for all $\rho>0$, there exists a constant \emph{smaller} than one such that if $\lambda$ is larger than this constant, type 1 dies out almost surely. In other words, there exists a regime where type 1 is faster than type 2 but still dies out almost surely. We note it is immediate for $d=1$ that type 1 dies out almost surely for all $\lambda,\rho>0$. \begin{theorem} \label{thm:lattice} Consider the model on $\mathbb{Z}^d$ for $d\geq2$. For all $\rho>0$, there exists $\lambda'=\lambda'(\rho,d)<1$ such that if $\lambda>\lambda'$, type 1 dies out almost surely.
\end{theorem} The fact that for any $\rho>0$, type 1 dies out almost surely for all $\lambda>1$ is a simple consequence of the classical \emph{shape theorem} for first passage percolation on $\mathbb{Z}^d$ (see Theorem~\ref{thm:shape}). Hence, on $\mathbb{Z}^d$, the infection cannot survive forever if awareness spreads faster than the infection. Theorem~\ref{thm:lattice} sharpens this result so that for any conversion rate, unaware individuals cannot survive forever if the infection spreads only just faster than awareness. A complete picture of the survival regimes on $\mathbb{Z}^d$ is an open problem but the expected behaviour is given in the following conjecture. \begin{conjecture} \label{conj:lattice} Consider the model on $\mathbb{Z}^d$ for $d\geq2$. \begin{itemize} \item For all $\lambda<1$, if $\rho$ is sufficiently small, type 1 survives with positive probability. \item There exists $\rho_c=\rho_c(d)\in(0,\infty)$ such that: \begin{enumerate} \item if $\rho<\rho_c$ and $\lambda$ is sufficiently small, type 1 survives with positive probability. \item if $\rho>\rho_c$ and $\lambda>0$, then type 1 dies out almost surely. \end{enumerate} \end{itemize} \end{conjecture} The first half of Conjecture~\ref{conj:lattice} is closely related to the strong survival phase for first passage percolation in a hostile environment, studied in Sidoravicius and Stauffer \cite{sidoravicius2019multi} (see Section~\ref{sec:related_work}). We believe the proof in our case will be similar to the encapsulation arguments in \cite{sidoravicius2019multi}, though there are some caveats in applying that argument directly. A priori there is no reason $\rho_c$ is even well-defined in Conjecture~\ref{conj:lattice} due to the non-monotone nature of the model. Indeed, several phase transitions may occur. We provide a partial answer to Conjecture~\ref{conj:lattice}. 
The following result shows that the expected subcritical and supercritical behaviour holds as long as $\rho$ is sufficiently small or sufficiently large, respectively. \begin{theorem} \label{thm:rho} Consider the model on $\mathbb{Z}^d$ for $d\geq2$. \begin{itemize} \item There exists $\rho_{\ell}=\rho_{\ell}(d)>0$ such that if $\rho<\rho_{\ell}$ and $\lambda$ is sufficiently small, type 1 survives with positive probability. \item There exists $\rho_{u}=\rho_{u}(d)<\infty$ such that if $\rho>\rho_{u}$, type 1 dies out almost surely for all $\lambda>0$. \end{itemize} \end{theorem} Proving there exists $\rho_{\ell}>0$ in Theorem~\ref{thm:rho} relies on a careful coupling with a percolation process analysed by Dauvergne and Sly \cite{dauvergne2021spread}. The proof of the existence of $\rho_{u}<\infty$ follows from a direct comparison with Bernoulli percolation. Details for both these proofs are given in Section~\ref{sec:lattice}. \subsection{Related work} \label{sec:related_work} The model introduced in this paper is closely related to \emph{chase-escape dynamics in the predator-prey model}. In the language of our model, the predator-prey dynamics can be viewed in the following manner. At time 0, the origin is occupied by type 2 and a neighbour is occupied by type 1. Type 1 spreads at rate $\lambda>0$ to unoccupied sites while type 2 spreads at rate $1$ to both unoccupied sites and sites occupied by type 1. Type 1 can be seen as \emph{prey} while type 2 is the \emph{predator}. Note that we have swapped $\lambda$ to be the rate of type 1 in order to remain consistent with the notation in the predator-prey literature. The model described above was introduced by Kordzakhia \cite{kordzakhia2005escape}, who proved that a phase transition occurs and computed the exact critical value in the case of the $d$-ary tree.
More precisely, there exists $\lambda_c>0$ such that if $\lambda>\lambda_c$, type 1 survives with positive probability, while if $\lambda<\lambda_c$, type 1 dies out almost surely. Bordenave \cite{bordenave2014extinction} extended the results by Kordzakhia to Galton-Watson trees and proved type 1 dies out almost surely at criticality. We also direct the reader to Kortchemski \cite{kortchemski2016predator} and references therein for more information about predator-prey dynamics. The question of coexistence in competing random processes on $\mathbb{Z}^d$ has attracted significant attention in recent years. The \emph{two-type Richardson model} was introduced by H\"aggstr\"om and Pemantle~\cite{haggstrom1998first} as a model for competing first passage percolation. In this model, each type starts from a finite set of sites and spreads at rate $1$ and $\lambda$, respectively, so that once a site is occupied by one process, it remains occupied by that process henceforth. They conjecture coexistence occurs with positive probability if and only if $\lambda=1$, assuming non-degenerate initial conditions. See \cite{deijfen2008pleasures} for background on coexistence regimes in the two-type Richardson model and progress towards this conjecture. A more recent competition model that has been studied is \emph{first passage percolation in a hostile environment} (FPPHE), introduced by Sidoravicius and Stauffer \cite{sidoravicius2019multi}. The first type in FPPHE initially occupies only the origin, while the second lies dormant in \emph{seeds} that are distributed as a product of Bernoulli measures of parameter $p$. Type 1 spreads at rate 1 through edges and when it encounters a seed, the occupancy is suppressed and the seed is activated. Activated seeds then spread at rate $\lambda>0$ to vacant sites. Once a site is occupied by either type, it remains that type henceforth. Sidoravicius and Stauffer \cite{sidoravicius2019multi} considered FPPHE on $\mathbb{Z}^d$ for $d>1$.
They proved that for all $\lambda<1$, if $p$ is sufficiently small, then type 1 survives and all components of type 2 are bounded with positive probability, a regime called \emph{strong survival}. Finn and Stauffer \cite{finn2020non} proved that a regime of coexistence exists on $\mathbb{Z}^d$ for $d>2$, in which both types occupy unbounded connected regions. Coexistence on transitive, hyperbolic, non-amenable graphs has also been established in Candellero and Stauffer \cite{candellero2018coexistence}. FPPHE has been used as an analytical tool in Sidoravicius and Stauffer \cite{sidoravicius2019multi} to analyse a challenging aggregation model called \emph{multiparticle diffusion limited aggregation}. A streamlined version of FPPHE, called Sidoravicius--Stauffer percolation (SSP), was utilised in Dauvergne and Sly \cite{dauvergne2021spread} to study a non-homogeneous spread of infection model (see Section~\ref{sec:SSP}). In the above works, coupling the evolution of the desired process with FPPHE or SSP allowed an efficient way to analyse a process with non-equilibrium dynamics without the need to carry out an involved multi-scale analysis from scratch. We believe our competition process can also be used in this regard. The main idea is that type~1 represents the \emph{propagation front} through ``typically good'' regions of the process being analysed, be it the front of the growth of an aggregate or the front of the propagation of the infection, as in the above cases. A type-1 site being converted to type~2 represents that enough time has passed so that the site no longer contributes to the propagation front. Since the conversion time is random, a site could have a very short conversion time, preventing type~1 from spreading from it to its neighbours. This models situations where the propagation front may pass through \emph{atypically bad} regions of space-time, which locally block the propagation of the front.
The spread of type~2 then represents the spread of the influence that bad regions may have on neighbouring areas. If one can show that type~1 grows indefinitely despite the expansion of type 2, then such reasoning would imply that the typical regions are dense enough to compensate for the presence of bad regions in space-time, allowing the propagation front to survive indefinitely. \subsection{Overview of paper} In Section~\ref{sec:prelim}, we provide a rigorous construction of the model on the $d$-ary tree that will facilitate later proofs and recall some notation from first passage percolation. In Section~\ref{sec:d_ary_tree}, we prove Theorem~\ref{thm:crit_lambda_tree} through a careful renormalisation scheme that controls how type 1 and type 2 spread through the tree. In Section~\ref{sec:lattice}, we switch focus to the lattice and prove Theorem~\ref{thm:lattice} and Theorem~\ref{thm:rho}. \section{Preliminaries} \label{sec:prelim} \subsection{Construction of the model on the $d$-ary tree} In this section we provide a particular construction of the model on the $d$-ary tree that will later be helpful in proofs. For $d\geq3$, let $\mathbb{T}_d$ be the $d$-ary tree with a distinguished site called the root, denoted by $\sigma$. Fix constants $\rho>0$ and $\lambda>0$. Let $V(\mathbb{T}_d)$ and $E(\mathbb{T}_d)$ be the vertex set and edge set of $\mathbb{T}_d$, respectively. Let $\{\mathcal{I}_x\}_{x\in V(\mathbb{T}_d)}$ be an i.i.d.\ collection of exponential random variables of rate $\rho$ that correspond to the conversion times of the sites. That is, given a site $x\in V(\mathbb{T}_d)$, once $x$ is occupied by type 1, after a further time of $\mathcal{I}_x$ has expired, it converts to type 2. Let $\{t_{1,e}\}_{e\in E(\mathbb{T}_d)}$ be a collection of i.i.d.\ exponential random variables of rate 1 that correspond to the passage times for type 1.
Similarly, let $\{t_{u,e}\}_{e\in E(\mathbb{T}_d)}$ and $\{t_{d,e}\}_{e\in E(\mathbb{T}_d)}$ be two collections of i.i.d.\ exponential random variables of rate $\lambda$ that correspond to the passage times for type 2 upwards and downwards, respectively. That is, if $e=xy$ is an edge with $d(\sigma,x)<d(\sigma,y)$, where $d(\cdot,\cdot)$ is the graph distance metric on $\mathbb{T}_d$, the passage time for type 2 spreading from $x$ to $y$ is given by $t_{d,e}$ and the passage time for type 2 spreading from $y$ to $x$ is given by $t_{u,e}$. In this case, we say $y$ is a \emph{descendant} of $x$. At time $t=0$, the root $\sigma$ is occupied by type 1 and all other sites are vacant. If a site $x$ is occupied by type 1 and $y$ neighbours $x$, then type 1 attempts to occupy $y$ after waiting time $t_{1,xy}$. The occupation is successful if $y$ is vacant and suppressed otherwise. Additionally, once a site $x$ is occupied by type 1, it converts to type 2 after waiting a further time $\mathcal{I}_x$. If a site $x$ is occupied by type 2 and $y$ neighbours $x$, then type 2 attempts to occupy $y$ after waiting time $t_{u,xy}$ or $t_{d,xy}$, depending on whether $y$ is closer or further than $x$ to the root with respect to the graph-distance metric, respectively. The occupation is successful if $y$ is unoccupied or is occupied by type 1. The requirement to distinguish between downward and upward passage times for type 2 is to decouple the different ways type 2 may spread and is an important ingredient in our proofs. Given a site $x\in V(\mathbb{T}_d)$, let $\tau_1(x)$ denote the first time that type 1 occupies $x$ and $\tau_2(x)$ denote the first time that type 2 occupies $x$, so that $$ \tau_i(x) = \inf\left\{t\geq0: \mbox{type }i \mbox{ occupies }x\right\} \quad \mbox{for }i=1,2. $$ If type 1 never occupies the site $x$, we write $\tau_1(x)=\infty$.
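The dynamics just described can be sketched as a discrete-event simulation (an illustrative sketch only, not used in the proofs: for simplicity a single rate-$\lambda$ clock per directed edge is used for type 2, without the upward/downward bookkeeping, and the tree is truncated at a finite depth; all parameter values below are arbitrary):

```python
import heapq
import random

random.seed(7)
d, depth = 3, 8                 # degree-d tree truncated at the given depth
rho, lam, horizon = 0.5, 1.2, 30.0

def neighbours(v):              # sites are tuples encoding the path from the root
    out = [] if v == () else [v[:-1]]          # parent (the root has none)
    k = d if v == () else d - 1                # so that every internal degree is d
    if len(v) < depth:
        out += [v + (i,) for i in range(k)]    # children
    return out

state = {(): 1}                 # site -> 1 or 2; vacant sites are absent
events = []                     # heap of (time, kind, source, target)

def occupy1(v, t):
    state[v] = 1
    heapq.heappush(events, (t + random.expovariate(rho), "conv", v, v))
    for w in neighbours(v):
        heapq.heappush(events, (t + random.expovariate(1.0), "s1", v, w))

def occupy2(v, t):
    state[v] = 2
    for w in neighbours(v):
        heapq.heappush(events, (t + random.expovariate(lam), "s2", v, w))

occupy1((), 0.0)
while events and events[0][0] < horizon:
    t, kind, v, w = heapq.heappop(events)
    if kind == "conv" and state.get(v) == 1:       # conversion clock I_v fires
        occupy2(v, t)
    elif kind == "s1" and state.get(v) == 1 and w not in state:
        occupy2 if False else occupy1(w, t)        # type 1 fills vacant sites only
    elif kind == "s2" and state.get(v) == 2 and state.get(w) != 2:
        occupy2(w, t)                              # type 2 takes vacant or type-1 sites
```

Note that the occupied set is always connected and contains the root, so on the tree every occupied site has its parent occupied; whether type 1 is still alive at the horizon is random.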
\subsection{First passage percolation} A \emph{path} is a sequence of distinct sites $\left(x_1,x_2,\ldots,x_n\right)$ such that $x_ix_{i+1}\in E(\mathbb{T}_d)$ for each $i\in\left\{1,2,\ldots,n-1\right\}$. Given a path $\gamma=\left(x_1,x_2,\ldots,x_n\right)$, we define the \emph{type \# passage time for }$\gamma$ as the random variable \begin{equation} \label{eq:FPP_sum} T_{\#}(\gamma)=\sum_{i=1}^{n-1}t_{\#,x_ix_{i+1}}, \end{equation} where $\#\in\{1,u,d\}$. Given $x,y\in V(\mathbb{T}_d)$, define the \emph{type \# passage time from }$x$ \emph{to} $y$ as the random variable $$ T_{\#}(x\rightarrow y) = T_{\#}(\gamma_{x,y}) $$ where $\gamma_{x,y}$ is the unique shortest path from $x$ to $y$ and $\#\in\left\{1,u,d\right\}$. Note the direction of the arrow in the definition of the passage time will always agree with the descendant structure of the tree. That is, if $y$ is a descendant of $x$, the type 2 passage time from $x$ to $y$ is downward, while the type 2 passage time from $y$ to $x$ is upward. This is purely a notational subtlety and will not alter any of our arguments. \section{Survival on the $d$-ary tree} \label{sec:d_ary_tree} In this section we prove Theorem~\ref{thm:crit_lambda_tree}. We begin with a roadmap of the proof to help guide the reader in how we establish survival of type 1 with positive probability. \subsection{Roadmap of proof of Theorem~\ref{thm:crit_lambda_tree}} Given $d\geq3$, recall $\mathbb{T}_d$ is the $d$-ary tree with $\sigma$ as a distinguished site called the root. For any $n\in\mathbb{Z}_{+}$, let $V_n$ denote the set of sites up to graph distance $n$ from the root and $\partial V_n$ denote the set of sites at graph distance exactly $n$ from the root, so that $$ V_n = \left\{x\in V(\mathbb{T}_d):d(\sigma,x)\leq n\right\} \quad \mbox{and} \quad \partial V_n = \left\{x\in V(\mathbb{T}_d):d(\sigma,x)=n\right\}, $$ where $d(\cdot,\cdot)$ is the metric on $\mathbb{T}_d$ induced by shortest paths between sites.
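Returning briefly to the passage times in \eqref{eq:FPP_sum}: along a fixed path of $n$ edges, $T_1$ is a sum of $n$ i.i.d.\ $\mathrm{Exp}(1)$ variables (a Gamma$(n,1)$ variable with mean $n$), while the type 2 passage times have mean $n/\lambda$. A quick Monte Carlo sketch (illustrative only; numpy is assumed, and all parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, samples = 100, 2.0, 20000

# T_1(gamma): sum of n i.i.d. Exp(1) passage times along a path of n edges
T1 = rng.exponential(1.0, size=(samples, n)).sum(axis=1)
# type 2 analogue: edge times Exp(lambda), i.e. mean 1/lambda each
T2 = rng.exponential(1.0 / lam, size=(samples, n)).sum(axis=1)

assert abs(T1.mean() - n) < 1.0            # E[T_1(gamma)] = n
assert abs(T2.mean() - n / lam) < 0.5      # E[T_2(gamma)] = n / lambda
```

So on average type 2 traverses a long path faster than type 1 exactly when $\lambda>1$; the proofs below exploit the fluctuations around these means rather than the means themselves.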
We refer to the set $V_n$ as a \emph{box} of depth $n$ from $\sigma$, or just a box for brevity. The aim is to partition $\mathbb{T}_d$ into boxes that are labelled as \emph{good} or \emph{bad} according to events measurable with respect to the box, so that a good box implies that type 1 is able to propagate well through the box while type 2 is hindered. Let $k$ and $r$ be two large integers to be set later. Let $B_{1}=V_{kr}$ and $B_{2}=V_{kr+k^2}\backslash V_{kr}$ be the two segments of our box $B=B_1\cup B_2 = V_{kr+k^2}$. The reader should have in mind that we will eventually set $r$ much larger than $k$ so that $kr\gg k^2$. In $B_1$, the first section of the box, we want to prove that there are sufficiently many \emph{highways} down to $\partial V_{kr}$ with high probability. Roughly speaking, highways are paths along which type 1 is typically faster than type 2, and we will prove that type 1 survives along these highways with positive probability. We aim to prove that many such highways exist in $B_1$ even if $\lambda$ is slightly larger than 1. This is made rigorous in Section~\ref{sec:perc}. An illustration of highways in a good box can be seen in Figure~\ref{fig:goodbox}. Assuming we can show the existence of many highways through $B_1$, we wish to prove that the following holds in $B_2$. From the set of sites in $\partial V_{kr}$ that are connected to $\sigma$ through highways, there are at least two such sites that can be extended down to depth $kr+k^2$ so that the type 1 passage time on the path is fast and the type 2 passage time on each edge with a site incident to the path is very large. The probability that a given path satisfies these properties is extremely small, which is precisely why we require many highways through $B_1$ to ensure that two such paths exist with high probability. These paths that extend highways we refer to as \emph{spines}.
The reason why we need such a strong property from the spines is that they are very close to the boundary of the box, and we need to ensure that type 2 conversions from outside a good box cannot propagate into the box and block type 1. These notions are made rigorous in Section~\ref{sec:excellent}. An illustration of spines can be seen in Figure~\ref{fig:goodbox}. We have not yet considered the fact that type 1 converts to type 2 at rate $\rho$. The idea is that we set $\rho$ small enough so that no conversions can take place in a good box until type 1 can spread far down the tree. However, conversions could occur outside of a good box, allowing type 2 to spread up the tree into a highway or a spine. In Section~\ref{sec:backtracks}, we control how type 2 can spread in a good box through conversions occurring outside that good box, so that type 1 is not impeded on the previously constructed highways or spines. In Section~\ref{sec:good_box}, we put together the previous sections to define a good box, and then use this construction of good boxes to prove Theorem~\ref{thm:crit_lambda_tree} via a branching argument in Section~\ref{sec:proof_thm_tree}. The idea is that with positive probability, a branching structure of good boxes gives rise to at least one infinite path of sites such that all sites on this path are occupied by type 1 at some time. \begin{figure} \caption{An illustration of a good box split into $B_1$ and $B_2$ with respective depths $kr$ and $k^2$. Blue paths represent the highways constructed in $B_1$ that connect the root of the box to depth $kr$. Red paths represent the spines constructed in $B_2$ that extend certain highways to depth $kr+k^2$ in a box, as well as any edges incident to them.} \label{fig:goodbox} \end{figure} \subsection{Percolating structure in $B_1$} \label{sec:perc} In this section we prove the existence of a percolating structure that provides the highways needed for type 1 in $B_1$.
This percolating structure arises from partitioning $B_1$ into \emph{sub-boxes} in which we wish to prove the existence of paths along which there is strong control of the type 1 and type 2 passage times. We construct the sub-boxes as follows. Recall that $k$ is a large integer to be set later. For $z\in V(\mathbb{T}_d)$, let $V^{\downarrow}_k(z)$ (resp.\ $\partial V^{\downarrow}_k(z)$) denote the set of sites of graph distance less than or equal to (resp.\ equal to) $k$ from $z$ such that the path to the root from the site must pass through $z$. \begin{definition}[Good sub-box] Let $z\in V(\mathbb{T}_d)$ and consider the sub-box $V^{\downarrow}_k(z)$. For $\varepsilon>0$, let $\mathcal{H}_{\varepsilon}(z)=\mathcal{H}_{\varepsilon}^{(1)}(z)\cap \mathcal{H}_{\varepsilon}^{(\lambda)}(z)$, where \begin{align} \label{def:S_eps} & \mathcal{H}_{\varepsilon}^{(1)}(z)=\left\{y\in\partial V^{\downarrow}_k(z):T_1(z\rightarrow y)\leq (1-\varepsilon)k\right\},\\ & \mathcal{H}_{\varepsilon}^{(\lambda)}(z)=\left\{y\in\partial V^{\downarrow}_k(z):T_d(z\rightarrow y)\geq(1-\varepsilon^2)\tfrac{k}{\lambda}\right\}. \label{def:S_eps2} \end{align} Define $V^{\downarrow}_k(z)$ to be $\varepsilon$-\emph{good} if $\mathcal{H}_{\varepsilon}(z)$ contains at least two distinct elements and $\varepsilon$-\emph{bad} otherwise. Given $y\in\mathcal{H}_{\varepsilon}(z)$, the path from $z$ to $y$ is called a \emph{highway}. \end{definition} \begin{remark} Note that in \eqref{def:S_eps2} in the definition of $ \mathcal{H}_{\varepsilon}^{(\lambda)}(z)$, we are using the \emph{downward} type 2 passage times. \end{remark} With the notion of good sub-boxes to hand, we wish to prove that there is a percolating structure of good sub-boxes up to depth $kr$. We first partition $B_1$ into sub-boxes of depth $k$, so that each sub-box is $\varepsilon$-good or $\varepsilon$-bad independently of every other sub-box.
That is, the sub-boxes are of the form $$ V_k^{\downarrow}(z) \mbox{ for }z\in\partial V_{ik} \mbox{ with } i\in\{0,1,2,\ldots,r-1\}. $$ A sub-box being good means that there are at least two paths from its root to depth $k$ along which there is strong control on type 1 and type 2. In particular, if $\lambda<1+\varepsilon$, then type 1 is faster along these paths since $$ 1-\varepsilon = \frac{1-\varepsilon^2}{1+\varepsilon} < \frac{1-\varepsilon^2}{\lambda}. $$ In Figure~\ref{fig:subbox}, we see how highways in good sub-boxes can join, giving long highways that allow for good control on type 1 and type 2. To ensure that many such highways exist down to depth $kr$, we need to prove that sub-boxes are good with high probability. This is the content of the following lemma. \begin{figure} \caption{Triangles represent good sub-boxes and blue paths represent the highways constructed within them.} \label{fig:subbox} \end{figure} \begin{lemma} \label{lem:prob_1box} There exists a constant $c>0$ such that the following holds. Fix $z\in V(\mathbb{T}_d)$. For all sufficiently small $\varepsilon>0$ and large enough $k$, $$ \mathbb{P}\left(V^{\downarrow}_k(z)\mbox{ is }\varepsilon\mbox{-good}\right)\geq 1-\exp\left(-c\varepsilon^4 k\right). $$ \end{lemma} To prove Lemma~\ref{lem:prob_1box}, we recall some results from the theory of branching random walks, which can be viewed as an alternative representation of first passage percolation on trees. The following result is due to Addario-Berry and Reed \cite[Theorem 3]{addario2009minima}, although we emphasise that their result holds in much greater generality; we state it only in the form that will be useful in our context. \begin{theorem} \label{thm:BRW} For $n\geq1$, let $M_n$ be the infimum of the type 1 passage times from $\sigma$ to $\partial V_n$, so that $$ M_n = \inf_{y\in \partial V_n}T_1(\sigma\to y). $$ There exists a constant $\varepsilon_0\in(0,1)$ such that $$ \mathbb{E}\left[ M_n \right] \leq (1-\varepsilon_0)n $$ for all $n$ large enough.
Moreover, there exist constants $C,\delta>0$ such that for all $x\in\mathbb{R}$, $$ \mathbb{P}\left(\left|M_n-\mathbb{E}\left[M_n\right]\right|\geq x\right)\leq Ce^{-\delta x}. $$ \end{theorem} The result in Theorem~\ref{thm:BRW} concerns only the fastest path, but we will require many paths satisfying similar bounds. This is the content of the following lemma. Recall the definition of $\mathcal{H}_{\varepsilon}^{(1)}(z)$ in \eqref{def:S_eps}. \begin{lemma} \label{lem:fast_type1} There exists a constant $c>0$ such that the following holds. Fix $z\in V(\mathbb{T}_d)$. For all $\varepsilon>0$ small enough, there exists $k_1=k_1(\varepsilon)$ such that if $k>k_1$, then $$ \mathbb{P}\left(\left|\mathcal{H}_{\varepsilon}^{(1)}(z)\right|\geq d^{\varepsilon k}\right)\geq 1 - e^{-c\varepsilon k}. $$ \end{lemma} \begin{proof} Firstly, we want to prove that the passage time of every path from $z$ to $\partial V_{\varepsilon k}^{\downarrow}(z)$ is bounded above by $C\varepsilon k$ with sufficiently high probability, where $C$ is a large constant and $\varepsilon$ is a small constant to be set later. For a given site $y\in\partial V_{\varepsilon k}^{\downarrow}(z)$ and a large enough constant $C$, we see through a Chernoff bound argument (see Lemma~\ref{lem:chernoff}) that $$ \mathbb{P}\left(T_1(z\to y)\geq C\varepsilon k\right)\leq e^{-\tfrac{C\varepsilon k}{2}}. $$ By taking the union bound over all $y\in\partial V_{\varepsilon k}^{\downarrow}(z)$, $$ \mathbb{P}\left(\sup_{y\in\partial V_{\varepsilon k}^{\downarrow}(z)}T_1(z\to y)\geq C\varepsilon k\right)\leq d^{\varepsilon k}e^{-\tfrac{C\varepsilon k}{2}}. $$ By Theorem~\ref{thm:BRW}, there exists a constant $\delta>0$, that does not depend on $k$, such that for each $y\in\partial V_{\varepsilon k}^{\downarrow}(z)$, the following holds.
For all $x>0$ $$ \mathbb{P}\left(T_1(y\to\partial V_k^{\downarrow}(z))\geq (1-\varepsilon_0)(1-\varepsilon)k+x\right)\leq e^{-\delta x}, $$ where $\varepsilon_0$ is as given in Theorem~\ref{thm:BRW}. By the union bound, for all $x>0$ $$ \mathbb{P}\left(\bigcup_{y\in\partial V_{\varepsilon k}^{\downarrow}(z)}\left\{T_1(y\to\partial V^{\downarrow}_k(z))\geq (1-\varepsilon_0)(1-\varepsilon)k+x\right\}\right) \leq d^{\varepsilon k}e^{-\delta x}. $$ If $x = 2\varepsilon k\left(\log d\right) /\delta$, then the right-hand term above can be bounded above by $e^{-\delta x/2}$. Hence \begin{align*} \mathbb{P}\Bigg(&\bigcap_{y\in\partial V_{\varepsilon k}^{\downarrow}(z)} \left\{T_1(y\to\partial V_k^{\downarrow}(z))< (1-\varepsilon_0)(1-\varepsilon)k+\tfrac{2\varepsilon k\log d}{\delta}\right\}\cap \left\{\sup_{y\in\partial V_{\varepsilon k}^{\downarrow}(z)}T_1(z\to y)< C\varepsilon k\right\}\Bigg)\\ & \geq 1-e^{-\tfrac{\delta\varepsilon k}{4}}-e^{-\tfrac{C \varepsilon k}{4}},\\ & \geq 1-e^{-c\varepsilon k}, \end{align*} for some constant $c>0$ and all sufficiently large $k$. Moreover, the event directly above implies the existence of at least $d^{\varepsilon k}$ sites in $\partial V_{k}^{\downarrow}(z)$, so that if $y$ is one such site, \begin{align*} T_1(z\to y)\leq (1-\varepsilon_0)(1-\varepsilon)k+\tfrac{2\varepsilon k\log d}{\delta} +C\varepsilon k\leq (1-\varepsilon)k, \end{align*} where the final inequality holds so long as $\varepsilon$ satisfies $$ \varepsilon \leq \frac{\varepsilon_0}{\varepsilon_0+C+\tfrac{2\log d}{\delta}}. $$ \end{proof} With Lemma~\ref{lem:fast_type1} established, we are now in a position to prove Lemma~\ref{lem:prob_1box}. 
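\begin{remark}
For the reader's convenience, we sketch the type of Chernoff bound invoked at the start of the proof above; the precise statement we rely on is Lemma~\ref{lem:chernoff}, and the constants here are not optimised. If $X_1,\ldots,X_m$ are i.i.d.\ exponential random variables of rate 1, then for any $\theta\in(0,1)$, by Markov's inequality applied to $e^{\theta\sum_i X_i}$,
$$
\mathbb{P}\left(\sum_{i=1}^{m}X_i\geq Cm\right)\leq e^{-\theta Cm}\,\mathbb{E}\left[e^{\theta X_1}\right]^m=e^{-\theta Cm}\left(1-\theta\right)^{-m},
$$
and taking $\theta=\tfrac{1}{2}$ gives the bound $e^{-m\left(C/2-\log 2\right)}$, which is at most $e^{-Cm/4}$ once $C\geq 4\log 2$. Applying this with $m=\varepsilon k$ recovers, up to the constant in the exponent, the estimate $\mathbb{P}\left(T_1(z\to y)\geq C\varepsilon k\right)\leq e^{-C\varepsilon k/2}$ used above.
\end{remark}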
\begin{proof}[Proof of Lemma~\ref{lem:prob_1box}] By Lemma~\ref{lem:fast_type1}, there exists a constant $c>0$ such that for a small enough choice of $\varepsilon>0$ and then large enough choice of $k$, \begin{equation} \label{pf:lm32_1} \mathbb{P}\left(\left|\mathcal{H}_{\varepsilon}^{(1)}(z)\right|\geq2\right)\geq 1-e^{-c\varepsilon k}. \end{equation} Conditional on the event $\{|\mathcal{H}_{\varepsilon}^{(1)}(z)|\geq2\}$, let $y_1,y_2\in \mathcal{H}_{\varepsilon}^{(1)}(z)$ be distinct sites. For $i\in\{1,2\}$, let $E_i$ be the event $$ E_i=\left\{T_d(z\rightarrow y_i) \geq \left(1-\varepsilon^2\right)\tfrac{k}{\lambda}\right\}. $$ Since $T_d(z \to y_i)$ is a sum of $k$ i.i.d.\ Exponential($\lambda$) random variables, by a Chernoff bound argument (see Lemma~\ref{lem:chernoff}), we deduce there exists a constant $c'>0$ such that for all sufficiently large $k$, $$ \mathbb{P}\left(E_i\right)\geq 1-\exp\left(-c'\varepsilon^4 k\right) \quad \mbox{for }i=1,2. $$ By the union bound, \begin{equation} \label{pf:lm32_2} \mathbb{P}\left(E_1\cap E_2\right)\geq 1 - \mathbb{P}(E_1^c) - \mathbb{P}(E_2^c) \geq 1 - 2\exp\left(-c'\varepsilon^4 k\right). \end{equation} The result follows by \eqref{pf:lm32_1} and \eqref{pf:lm32_2} through the independence of the passage times for type 1 and type 2. \end{proof} To ease the statements of results, henceforth we assume that $\varepsilon$ is small enough and $k$ is large enough so Lemma~\ref{lem:prob_1box} is satisfied. Consider the box $B=V_{kr+k^2}$. Let $\mathcal{P}_k\subset\partial V_{kr}$ be the set of sites whose geodesic to $\sigma$ only passes through highways of good sub-boxes. Our proof of Theorem~\ref{thm:crit_lambda_tree} will rely on $\mathcal{P}_k$ containing sufficiently many elements, which is established in the following lemma. \begin{lemma} \label{lem:depth_kr} Fix $\alpha\in(1,2)$. 
If there exists a constant $c$ such that $r\leq k^c$, then $$ \lim_{k\to\infty}\mathbb{P}\left(\left|\mathcal{P}_k\right|>\alpha^r\right)= 1. $$ \end{lemma} The proof of Lemma~\ref{lem:depth_kr} is a consequence of Lemma~\ref{lem:prob_1box} and the following elementary result, as the sub-boxes in $B_1$ are independently good or bad. \begin{lemma} Fix $\alpha\in(1,2)$ and $r\in\mathbb{N}$. Consider independent site percolation with parameter $p=p(r)$ on the binary tree such that $$ \lim_{r\to\infty}p^r = 1. $$ Let $N_r$ be the number of sites at depth $r$ connected to the root by an open path. Then $$ \lim_{r\to\infty}\mathbb{P}\left(N_r > \alpha^r\right)= 1. $$ \end{lemma} \begin{proof} By applying Markov's inequality to the number of sites at depth $r$ not connected to the root by an open path, we deduce $$ \mathbb{P}\left(2^r-N_r\geq 2^r-\alpha^r\right) \leq \frac{2^r-\mathbb{E}[N_r]}{2^r-\alpha^r} = \frac{1-p^r}{1-(\alpha/2)^r}, $$ and the result follows. \end{proof} \subsection{Construction of spines} \label{sec:excellent} In Lemma~\ref{lem:depth_kr} we proved that, with high probability, the box $B=V_{kr+k^2}$ contains many sites at depth $kr$ that are connected to the root of $B$ through highways. The aim of this section is to extend some of these highways down to depth $kr+k^2$ through what we will refer to as \emph{spines}, which provide strong control on the type 1 and type 2 passage times. Recall $\mathcal{P}_k$ is the set of sites $y$ in $\partial V_{kr}$ such that the path from $\sigma$ to $y$ only passes through the highways in good sub-boxes in $B_1$. \begin{definition}[Spine] For each $z\in \mathcal{P}_k$, let $s(z)\in\partial V^{\downarrow}_{k^2}(z)$ satisfy $$ T_1(z\to s(z)) = \inf_{y\in\partial V^{\downarrow}_{k^2}(z)}T_1(z\rightarrow y). $$ The path $s_{z}$ from $z$ to $s(z)$ is defined to be the \emph{spine from }$z$. \end{definition} Given $z\in\mathcal{P}_k$, we want $s_z$ to satisfy the following properties.
Firstly, we want the type 1 passage time on $s_z$ to be bounded above by $(1-\varepsilon)k^2$, so that type 1 can readily spread to depth $kr+k^2$. Secondly, we want every edge incident to a site in the spine $s_z$ to have a type 2 passage time of at least $k^3$, with the exception of the edges incident only to $s(z)$. The edges with both endpoints contained in $s_z$ will be measured with the downward passage times and edges with only one endpoint in $s_z$ will be measured with the upward passage times. This will give the required impediment to stop type 2 from blocking type 1 on highways. Let $\mathcal{S}_k\subset\mathcal{P}_k$ be the set of sites that satisfy these properties, so that $\mathcal{S}_k=\mathcal{S}_k^{(1)}\cap\mathcal{S}_k^{(2)}\cap\mathcal{S}_k^{(3)}$, where \begin{align*} \mathcal{S}_k^{(1)} &= \left\{z\in\mathcal{P}_k:T_1(s_z)\leq (1-\varepsilon)k^2\right\},\\ \mathcal{S}_k^{(2)} &= \left\{z\in\mathcal{P}_k: \bigcap_{y\in s_z\backslash s(z)}\bigcap\limits_{\substack{y'\sim y\\ y'\notin s_z}}\left\{T_u(yy')\geq k^3\right\}\right\},\\ \mathcal{S}_k^{(3)} &= \left\{z\in\mathcal{P}_k: \bigcap\limits_{\substack{y,y'\in s_z \\ y\sim y'}}\left\{T_d(yy')\geq k^3\right\}\right\}. \end{align*} It is clear that the probability a given site $z\in\mathcal{P}_k$ satisfies the properties above is extremely small. We will show that $|\mathcal{S}_k|\geq2$ with high probability so long as $\mathcal{P}_k$ contains sufficiently many elements. This can be achieved by setting $r=k^6$, and henceforth $r$ shall take this value. This is established formally in the following lemma. \begin{lemma} \label{lem:good_spine} Let $r=k^6$ and assume $|\mathcal{P}_{k}|> \alpha^r$ for some $\alpha\in(1,2)$. Then, $$ \lim_{k\to\infty}\mathbb{P}\left(|\mathcal{S}_k|\geq2\right)=1.
$$ \end{lemma} \begin{proof} The random variables $\left\{T_1(s_z)\right\}_{z\in \mathcal{P}_k}$ are independent, and by Theorem~\ref{thm:BRW}, there exists a constant $\delta>0$ that does not depend on $k$, such that for each $z\in\mathcal{P}_k$, $$ \mathbb{P}\left(T_1(s_z)\geq (1-\varepsilon)k^2\right)<e^{-\delta k^2}. $$ Moreover, since the probability that the type 2 passage time of an edge is at least $k^3$ is $e^{-\lambda k^3}$, and since at most $(d+1)k^2$ edges have an endpoint on $s_z$, by independence of type 1 and type 2 passage times, for each $z\in\mathcal{P}_k$, \begin{align} \mathbb{P}&\left(\left\{T_1(s_z)< (1-\varepsilon)k^2\right\}\cap\bigcap_{y\in s_z\backslash s(z)}\bigcap\limits_{\substack{y'\sim y\\ y'\notin s_z}}\left\{T_u(yy')\geq k^3\right\}\cap\bigcap\limits_{\substack{y,y'\in s_z \\ y\sim y'}}\left\{T_d(yy')\geq k^3\right\}\right) \nonumber\\ &> \left(1 - e^{-\delta k^2}\right)e^{-\lambda k^3\cdot (d+1) k^2} \nonumber\\ &\geq e^{-c k^5}, \label{proof:cand_paths} \end{align} for some constant $c=c(\lambda)>0$. From \eqref{proof:cand_paths} we deduce that the probability there exist at least two such paths tends to $1$ as $k$ goes to infinity, as there are more than $\alpha^{k^6}$ independent candidate paths. \end{proof} \subsection{Controlling type 2 from conversions} \label{sec:backtracks} In this section we consider how to control the spread of type 2 from conversions. The idea is that we may set $\rho$ so small that, with high probability, no conversions take place in good boxes until type 1 can spread far down the tree. However, we must also control for conversions that occur outside of good boxes. For example, type 1 may spread fast through a good box and trigger an almost immediate conversion just outside it, which in turn causes the spread of type 2. This spread may go back through the good box and prevent highways from being occupied by type 1.
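The first part of this idea can be quantified directly, and it is this elementary computation that will drive the choice of $\rho$ in Section~\ref{sec:good_box}. The conversion times $\left\{\mathcal{I}_x\right\}_x$ are i.i.d.\ exponential random variables of rate $\rho$, independent of all passage times, so for any finite box $B$ and any time horizon $t>0$,
$$
\mathbb{P}\left(\mathcal{I}_x\geq t\mbox{ for all }x\in B\right)=\prod_{x\in B}e^{-\rho t}=e^{-\rho t\left|B\right|},
$$
which tends to $1$ as $\rho\downarrow0$ with $k$ (and hence $\left|B\right|$ and $t$) held fixed. The difficulty addressed in this section is that no choice of $\rho>0$ rules out conversions outside a good box.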
The aim of this section is to control how type 2 spreads from conversions outside of good boxes in such a manner that type 1 is not impeded on highways. Consider a box $B$ and suppose $|\mathcal{P}_k|>\alpha^r$ for some $\alpha\in(1,2)$ and $|\mathcal{S}_k|\geq2$, so that $\pi_1$ and $\pi_2$ are highways of $B$ with respective spines $s_1$ and $s_2$. If more highways and spines are available, we simply ignore them. The idea is that type 2 must traverse a distance of at least $k^2$ after converting a site outside $B$ before it can occupy a site in $\pi_1\cup\pi_2$, and so once type 1 occupies some $x\in\pi_1\cup\pi_2$, it is able to propagate downwards before $x$ can be occupied by type 2. We make this notion rigorous below. For $x\in\pi_1\cup\pi_2$, let $D(x)$ be the set of sites up to distance $k^2$ down from $x$, excluding all sites whose path to $x$ includes an edge on the highways $\pi_1$ and $\pi_2$ or the spines $s_1$ and $s_2$ (call this collection of excluded sites $H_{k^2}(x;\pi_1,\pi_2,s_1,s_2)$), so $$ D(x) = V^{\downarrow}_{k^2}(x)\backslash H_{k^2}(x;\pi_1,\pi_2,s_1,s_2). $$ Let $\partial D(x) = \partial V^{\downarrow}_{k^2}(x)\cap D(x)$ and $D^*(x)$ be the event that all \emph{upward} passage times through $D(x)$ are at least $10k$, so that \begin{equation} \label{eq:D_star} D^*(x) = \bigcap_{y\in \partial D(x)}\left\{T_u(y\to x)\geq 10k\right\}. \end{equation} Intuitively, the event $D^*(x)$ guarantees that no upward type 2 passage time through $D(x)$ is fast enough to allow type 2 to catch type 1 on $\pi_1\cup\pi_2$. It is important that the event in \eqref{eq:D_star} is independent of the labelling of sub-boxes as good or bad, as a good box is defined through the type 1 passage times and the \emph{downward} type 2 passage times. The following lemma allows us to prove that $D^*(x)$ occurs for all $x\in\pi_1\cup\pi_2$ with high probability, assuming the existence of highways $\pi_1,\pi_2$ and spines $s_1,s_2$. \begin{lemma} \label{lem:backtracks} Let $r=k^6$.
Assume $|\mathcal{P}_k|> \alpha^r$ for some $\alpha\in(1,2)$ and $|\mathcal{S}_k|\geq2$. Let $\pi_1$ and $\pi_2$ be highways with respective spines $s_1$ and $s_2$. There exists a constant $c>0$ such that $$ \mathbb{P}\left(\bigcap_{x\in\pi_1\cup\pi_2}D^*(x)\right)\geq 1 - e^{-ck^2\log k}, $$ for all $k$ large enough. \end{lemma} \begin{proof} If $\gamma$ is a path of length $k^2$ and $\theta>0$, then $$ \mathbb{P}\left(T_u(\gamma)\leq 10k\right) = \mathbb{P}\left(\sum_{i=1}^{k^2}T_u(e_i)\leq 10k\right) \leq e^{10\theta k}\mathbb{E}\left[e^{-\theta T_u}\right]^{k^2}, $$ by a Chernoff bound argument, where $\left\{e_i\right\}_i$ is an enumeration of the edges on the path $\gamma$ and $T_u$ is a copy of an exponential random variable of rate $\lambda$. By setting $\theta = k/10$, we deduce $$ \mathbb{P}\left(T_u(\gamma)\leq 10k\right) \leq \left(\tfrac{\lambda}{\lambda+k/10}\right)^{k^2}e^{k^2} \leq e^{-ck^2\log k}, $$ for some constant $c>0$ and all large enough $k$. There are at most $d^{k^2}$ sites in $\partial D(x)$ and hence, by the union bound, $$ \mathbb{P}\left(\bigcup_{y\in\partial D(x)} \left\{T_u(y\to x)\leq 10k\right\}\right) \leq d^{k^2}e^{-ck^2\log k} \leq e^{-\tfrac{c}{2}k^2\log k}, $$ for large enough $k$. Recall the depth of the percolating structure in a box is $kr$ where $r=k^6$. By the union bound over all sites in $\pi_1\cup\pi_2$, $$ \mathbb{P}\left(\bigcup_{x\in\pi_1\cup\pi_2}\bigcup_{y\in\partial D(x)} \left\{T_u(y\to x)\leq 10k\right\}\right) \leq 2k^7e^{-\tfrac{c}{2}k^2\log k}, $$ for all large enough $k$ and the result follows. \end{proof} \subsection{Good boxes} \label{sec:good_box} In this section we put together the components constructed in the previous sections to rigorously define good boxes. While the events discussed in previous sections only concerned the box containing the root $\sigma$, the events can easily be translated to an arbitrary box when we partition $\mathbb{T}_d $ into boxes.
Indeed, as we will see in this section, a box being good or bad only depends on events measurable with respect to said box. Hence we will retain the notation from previous sections without introducing ambiguity. Fix a box $B$ and consider the following events. The first two events concern the construction of highways down to depth $kr$, where we fix $r=k^6$ and $\alpha\in(1,2)$, and at least two spines, so that $$ \mathcal{G}_1= \left\{\left|\mathcal{P}_k\right|>\alpha^r \right\} \quad \mbox{and} \quad \mathcal{G}_2 = \left\{\left|\mathcal{S}_k\right|\geq2\right\}. $$ Assuming that $\mathcal{G}_1$ and $\mathcal{G}_2$ hold, let $\pi_1$ and $\pi_2$ be two highways to depth $kr$. We let $\mathcal{G}_3$ be the event that all sites on $\pi_1\cup\pi_2$ satisfy the condition in \eqref{eq:D_star}, so that $$ \mathcal{G}_3 = \bigcap_{x\in\pi_1\cup\pi_2}D^*(x). $$ The final ingredient is the event that no conversion occurs in the box until at least time $3(kr+k^2)$ has expired. This guarantees that type 1 is able to traverse a large distance before type 2 originating from that box is able to spread. Call this event $\mathcal{G}_4$, so that $$ \mathcal{G}_4 = \bigcap_{x\in B}\left\{\mathcal{I}_x\geq 3(kr+k^2)\right\}, $$ where we recall $\mathcal{I}_x$ is the conversion time to type 2 at site $x$. We define a box $B$ to be \emph{good} if $\cap_{j=1}^{4}\mathcal{G}_j$ holds, and \emph{bad} otherwise. \begin{lemma} \label{lem:prob_good} We have $$ \lim_{k\to\infty}\lim_{\rho\downarrow 0}\mathbb{P}\left(B\emph{ is good}\right) = 1. $$ \end{lemma} \begin{proof} As the conversion times are independent of the passage times, we may express the probability of a box being good as $$ \mathbb{P}\left(B\mbox{ is good}\right) = \mathbb{P}\left(\cap_{j=1}^{4}\mathcal{G}_j\right) = \mathbb{P}\left(\mathcal{G}_1\right)\mathbb{P}\left(\mathcal{G}_2|\mathcal{G}_1\right)\mathbb{P}\left(\mathcal{G}_3|\mathcal{G}_1\cap\mathcal{G}_2\right)\mathbb{P}\left(\mathcal{G}_4\right). 
$$ As the event $\mathcal{G}_3$ only depends on the upward type 2 passage times, it is independent of $\mathcal{G}_1\cap\mathcal{G}_2$. The result then follows by Lemma~\ref{lem:depth_kr}, Lemma~\ref{lem:good_spine} and Lemma~\ref{lem:backtracks}. \end{proof} \subsection{Proof of Theorem~\ref{thm:crit_lambda_tree}} \label{sec:proof_thm_tree} In this section we prove Theorem~\ref{thm:crit_lambda_tree}. Before proceeding with the proof, we introduce the following notion that will ease exposition. If a site $z$ becomes occupied by type 2 because its own conversion time expires while it is occupied by type 1, then we say $z$ is its own \emph{progenitor}. Otherwise, $z$ was occupied by type 2 by the spread of type 2 from a neighbouring site, $z'$ say. If $z'$ is its own progenitor, then we say it is the progenitor for $z$ too. If $z'$ is not its own progenitor, then we can iterate this procedure until we find a site that is the progenitor for $z$ and itself. In general, we write $p(z)$ for the progenitor for $z$ and $p_z$ for the path from $p(z)$ to $z$. Note that a progenitor must be occupied by type 1 at some time, since it becomes occupied by type 2 through its own conversion. As the passage times are exponentially distributed, the progenitor is almost surely unique. \begin{proof}[Proof of Theorem~\ref{thm:crit_lambda_tree}] Let $\varepsilon$ be small enough and $k$ large enough so that Lemma~\ref{lem:prob_1box} holds, and fix $\lambda<1+\varepsilon$. We construct an infinite path from $\sigma$ that only passes through highways and spines of good boxes with positive probability as follows. Consider the box up to depth $kr+k^2$ with root $\sigma$. If this box is good, there exist at least two sites at depth $kr+k^2$ that are connected to the root $\sigma$ via highways and spines of this box.
Considering these sites as roots of boxes to depth $2(kr+k^2)$, by observing whether these boxes are good or bad, we deduce a lower bound on the number of sites connected to $\sigma$ only through highways and spines of good boxes. We can continue this procedure $n$ times to attempt to find sites at depth $n(kr+k^2)$ from $\sigma$ that only pass through the highways and spines of good boxes. As boxes constructed this way are independently good or bad, if we set $k$ large enough and then $\rho$ small enough so that \begin{equation} \label{eq:perc_prob} \mathbb{P}\left(V_{kr+k^2}\mbox{ is good}\right)>1/2, \end{equation} this procedure does not terminate with positive probability. If this procedure does not terminate, there exists an infinite path $\gamma$ from $\sigma$ that only passes through the highways and spines of good boxes. We aim to prove that the existence of $\gamma$ implies type 1 survives, which would complete the proof. Suppose for contradiction there exists an infinite path from $\sigma$ that only passes through the highways and spines of good boxes, $\gamma$ say, and type 1 does not occupy every site on $\gamma$. Then there must exist some good box $B_0$ such that $\gamma$ contains a highway and spine of $B_0$, and type 1 occupies some but not every site in $B_0\cap\gamma$. Write $B_0\cap\gamma=\pi\cup s$ for the respective highway $\pi$ and spine $s$ of $B_0$, and let $\sigma_{B_0}$ denote the root of $B_0$. Suppose there exists $z\in\pi\cup s$ that is never occupied by type 1. Without loss of generality, we may assume $z$ is the closest such site to $\sigma_{B_0}$. Recall $p(z)$ is the progenitor of $z$ and $p_z$ is the path from $p(z)$ to $z$. First consider the case $p(z)\in B_0$ (see Figure~\ref{fig:progin} for an illustration).
Then \begin{equation} \label{eq:cont_conv} \tau_2(z) > \tau_1(p(z)) + 3(kr+k^2) > \tau_1(\sigma_{B_0}) + 3(kr+k^2), \end{equation} where in the first inequality we recall $B_0$ is good and the second follows as type 1 must occupy $\sigma_{B_0}$ before $p(z)$ by construction. As $B_0$ is good, \begin{equation} \label{eq:cont_conv2} T_1(\sigma_{B_0}\to z) < (1-\varepsilon)(kr+k^2). \end{equation} \begin{figure} \subcaption{$p(z)\in B_0$.} \label{fig:progin} \subcaption{$p_z\cap \partial V_{kr+k^2}^{\downarrow}(\sigma_{B_0})\neq\emptyset$.} \label{fig:progout1} \subcaption{$p_z\cap \partial V_{kr+k^2}^{\downarrow}(\sigma_{B_0})=\emptyset$.} \label{fig:progout2} \caption{The three cases for the nature of $p_z$, the path from $p(z)$ to $z$. The red line is the path $p_z$. The top of the triangle is the root $\sigma_{B_0}$.} \label{fig:progcase} \end{figure} Comparing \eqref{eq:cont_conv} and \eqref{eq:cont_conv2}, we deduce $$ \tau_1(z) < \tau_1(\sigma_{B_0}) + (1-\varepsilon)(kr+k^2) < \tau_2(z), $$ which contradicts the assumption that $z$ is never occupied by type 1. Hence $p(z)\notin B_0$. Now consider the case $p_z\cap \partial V_{kr+k^2}^{\downarrow}(\sigma_{B_0})\neq\emptyset$ (see Figure~\ref{fig:progout1}). Let $z'\in\pi\cup s$ be such that $$ d(\sigma_{B_0},z') = \min_{y\in B_0\cap\gamma\cap p_z}d(\sigma_{B_0},y). $$ It is immediate from the construction of the progenitor that $d(\sigma_{B_0},z')\leq d(\sigma_{B_0},z)$. First consider the case $z\in\pi$. Then $$ \tau_2(z) > \tau_2(z') + \tfrac{(1-\varepsilon^2)k}{\lambda}\left(\text{SB}(z,z')-2\right)^+, $$ where $\text{SB}(z,z')$ is the number of sub-boxes that intersect the path from $z'$ to $z$ and for $c\in\mathbb{R}$, we write $(c)^+=\max\left\{c,0\right\}$. Recalling $\lambda<1+\varepsilon$, \eqref{eq:D_star} and that $B_0$ is good, we deduce \begin{equation} \label{eq:cont_below_pi} \tau_2(z) > \tau_1(z') + 10k +(1-\varepsilon)\left(\text{SB}(z,z')-2\right)^+k.
\end{equation} Similarly, recalling $\pi$ is a highway and $B_0$ is good, we deduce \begin{equation} \label{eq:cont_below_pi1} T_1(z'\to z) < (1-\varepsilon)\text{SB}(z,z')k. \end{equation} By comparing \eqref{eq:cont_below_pi} with \eqref{eq:cont_below_pi1}, we observe $$ \tau_1(z) < \tau_1(z') + (1-\varepsilon)\text{SB}(z,z')k < \tau_2(z), $$ which contradicts the assumption that $z$ is never occupied by type 1. Now consider the case $z\in s$ and $z\notin \pi$. By similar considerations to the previous case, \begin{equation} \label{eq:cont_below_s} \tau_2(z) > \tau_1(z')+(1-\varepsilon)\left(\text{SB}(z,z')-2\right)^+k + k^3, \end{equation} where the $k^3$ term is because at least one edge incident to the spine $s$ must be traversed for type 2 to propagate to $z$. Note that the function SB only counts sub-boxes up to depth $kr$ in a box and not up to depth $kr+k^2$. Hence \begin{equation} \label{eq:cont_below_s1} T_1(z'\to z) < (1-\varepsilon)\text{SB}(z,z')k + (1-\varepsilon)k^2, \end{equation} where the $(1-\varepsilon)k^2$ term is from the construction of the spine. By comparing \eqref{eq:cont_below_s} with \eqref{eq:cont_below_s1}, we have $$ \tau_1(z) < \tau_1(z') +(1-\varepsilon)\text{SB}(z,z')k + (1-\varepsilon)k^2 < \tau_2(z), $$ which again gives a contradiction. Finally, consider the case $p_z\cap \partial V_{kr+k^2}^{\downarrow}(\sigma_{B_0})=\emptyset$ and $p(z)\notin B_0$ (see Figure~\ref{fig:progout2}). Under this assumption, $p_z$ contains $\sigma_{B_0}$. Moreover, as $p(z)\notin B_0$, $p_z$ also contains a site neighbouring $\sigma_{B_0}$ in a spine of a good box that $\gamma$ passes through before entering $B_0$. Call this site $u$. As $u$ is contained in a spine of a good box, \begin{equation} \label{eq:tau1_root} \tau_1(\sigma_{B_0}) < \tau_1(u) + (1-\varepsilon)k^2, \end{equation} where we observe $\tau_1(u)<\infty$ as $\tau_1(\sigma_{B_0})<\infty$.
The downward passage time from $u$ to $\sigma_{B_0}$ is at least $k^3$ and thus \begin{equation} \label{eq:tau2_root} \tau_2(\sigma_{B_0}) > \tau_2(u) + k^3 > \tau_1(u) + k^3 > \tau_1(\sigma_{B_0}) + k^3 - (1-\varepsilon)k^2, \end{equation} where in the final inequality we use \eqref{eq:tau1_root}. If $z\in\pi$, by considering the type 2 passage times from $\sigma_{B_0}$ to $z$ along the highways, we deduce \begin{equation} \label{eq:cont_final} \tau_2(z) > \tau_1(\sigma_{B_0}) + k^3 - (1-\varepsilon)k^2 + (1-\varepsilon)\left(\text{SB}(\sigma_{B_0},z)-2\right)^+k. \end{equation} By construction of the highway $\pi$, we have \begin{equation} \label{eq:cont_final1} T_1(\sigma_{B_0}\to z) < (1-\varepsilon)\text{SB}(\sigma_{B_0},z)k. \end{equation} From \eqref{eq:cont_final} and \eqref{eq:cont_final1}, we have $$ \tau_1(z) < \tau_1(\sigma_{B_0}) + (1-\varepsilon)\text{SB}(\sigma_{B_0},z)k < \tau_2(z). $$ However, this contradicts the assumption that $z$ is never occupied by type 1. A similar argument applies if $z\in s$ and $z\notin \pi$, as in the previous case. As we have considered all possible cases for $z$ and $p(z)$, the contradiction is established and the proof is complete. \end{proof} \section{Behaviour on $\mathbb{Z}^d$} \label{sec:lattice} In this section we consider the model on $\mathbb{Z}^d$. The model can be constructed in exactly the same way as before, except that we no longer distinguish between upward and downward type 2 passage times. That is, we define $\left\{t_{1,e}\right\}_{e\in E(\mathbb{Z}^d)}$ and $\left\{t_{2,e}\right\}_{e\in E(\mathbb{Z}^d)}$ to be i.i.d.\ collections of exponentially distributed random variables of rate 1 and rate $\lambda$ on the edges of $\mathbb{Z}^d$, respectively. Type 1 spreads according to the $\left\{t_{1,e}\right\}_{e\in E(\mathbb{Z}^d)}$ passage times to vacant sites.
Once a site $z$ is occupied by type 1, it attempts to convert to type 2 after waiting $\mathcal{I}_z$ time, where $\left\{\mathcal{I}_x\right\}_{x\in \mathbb{Z}^d}$ is an i.i.d.\ collection of exponentially distributed random variables of rate $\rho>0$. Type 2 spreads according to the $\left\{t_{2,e}\right\}_{e\in E(\mathbb{Z}^d)}$ passage times to vacant sites and sites occupied by type 1. \subsection{Proof of Theorem~\ref{thm:lattice}} \label{sec:lattice1} In order to prove Theorem~\ref{thm:lattice}, we first recall some classical results for first passage percolation on $\mathbb{Z}^d$. Given sites $x,y\in\mathbb{Z}^d$, define the \emph{passage time from $x$ to $y$} as the random variable $$ T_1(x,y) = \inf_{\gamma\in\Gamma_{x,y}}T_1(\gamma) $$ where $T_1$ is as defined in \eqref{eq:FPP_sum} and $\Gamma_{x,y}$ is the set of all finite paths from $x$ to $y$. For $t\geq0$, let $$ B(t) = \left\{x\in\mathbb{Z}^d:T_1(0,x)\leq t\right\} $$ be the ball of radius $t$ centred at the origin under the metric induced by the exponential passage times of rate 1. The idea of the shape theorem is that $B(t)$, linearly rescaled in time, converges to a deterministic shape. To make sense of rescaling, let $$ \tilde{B}(t) = \bigcup_{x\in B(t)}\left(x+[-\tfrac{1}{2},\tfrac{1}{2})^d\right) $$ be the fattening of $B(t)$ in which each site is viewed as the centre of a unit cube in $\mathbb{R}^d$. \begin{theorem}[Richardson \cite{rich}] \label{thm:shape} There exists a deterministic, convex, compact set $\mathcal{B}_1\subset\mathbb{R}^d$ such that for all $\varepsilon>0$ $$ \mathbb{P}\left((1-\varepsilon)\mathcal{B}_1\subset\frac{\tilde{B}(t)}{t}\subset (1+\varepsilon)\mathcal{B}_1 \emph{ for all large }t\right)=1. $$ The set $\mathcal{B}_1$ is called the limit shape. \end{theorem} The proof of Theorem~\ref{thm:shape} relies on subadditivity arguments through Kingman's subadditive ergodic theorem \cite{kingman1973subadditive}. Consequently, the exact limiting shape is not known.
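For completeness, we record the standard time-scaling computation that underlies the rate $\lambda$ limit shape used below. If $t_e$ is exponentially distributed with rate $\lambda$, then $\lambda t_e$ is exponentially distributed with rate 1, so for every finite path $\gamma$,
$$
T_{\lambda}(\gamma)\overset{d}{=}\tfrac{1}{\lambda}\,T_{1}(\gamma),
$$
where $T_\lambda$ denotes the passage time with i.i.d.\ rate $\lambda$ weights. Hence the rate $\lambda$ ball at time $t$ has the same law as the rate 1 ball at time $\lambda t$, and Theorem~\ref{thm:shape} applied to the rescaled process yields the limit shape $\lambda\mathcal{B}_1$.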
Richardson's shape theorem has been extended to more general passage times by Cox and Durrett \cite{cox1981some}, and recent results about the limit shape can be found in \cite{auffinger201750}. By time scaling, if one replaced the rate 1 passage times with rate $\lambda$ passage times for some $\lambda>0$, then there would be a limit shape $\mathcal{B}_\lambda$ such that $\mathcal{B}_{\lambda}=\lambda\mathcal{B}_1$. To prove Theorem~\ref{thm:lattice}, we first need to prove that the set of sites ever occupied by type 1 is contained in a first passage percolation process of rate strictly less than one. This is the content of the following lemma. Let $\eta_1(t)$ (resp.\ $\eta_2(t)$) be the set of sites ever occupied by type 1 (resp.\ type 2) up to time $t$. \begin{lemma} \label{lem:vdbK} Consider the model on $\mathbb{Z}^d$ with $d\geq2$ and $\rho,\lambda>0$. There exists $\kappa=\kappa(\rho,d)>0$ such that for all $\varepsilon>0$, $$ \eta_1(t) \subset (1+\varepsilon)(1-\kappa)t\mathcal{B}_1 $$ for all large enough $t$, almost surely. \end{lemma} \begin{proof} The proof is a consequence of van den Berg and Kesten \cite{van1993inequalities} (see \cite[Proposition 6.4]{sidoravicius2019multi} for the result in terms of the limit shape). Their result states that if a distribution $F$ strictly dominates\footnote{We say a distribution $F$ \emph{strictly dominates} a distribution $\tilde F$ if there exists a coupling between the two distributions under which the random variable with distribution $F$ is larger than the random variable with distribution $\tilde F$ with probability $1$.} another distribution $\tilde{F}$, then the limiting shape under $F$ is strictly contained in the limiting shape under $\tilde{F}$, under some natural conditions on $F$ and $\tilde{F}$. The idea is to prove that type 1 is contained in a process whose passage times strictly dominate a first passage percolation process with exponential passage times of rate 1.
However, there are subtle dependencies arising because the spread of type 1 is facilitated by the interplay of small type 1 passage times and large conversion times. We need to prove that we can construct the model in a manner that decouples these two sources of randomness in order to apply a van den Berg--Kesten argument. For $K>0$, let $u(K)$ be the probability that an exponential random variable of rate $\rho$ is at least $K$. Given $x\in\mathbb{Z}^d$, let $x$ be \emph{marked} with probability $1-u(K)$, and \emph{unmarked} with probability $u(K)$, independently of every other site. Define an edge $e$ to be marked if both of its endpoints are marked. The set of marked edges gives a 1-dependent percolation process. By Liggett, Schonmann and Stacey \cite{liggett1997domination}, for any $q\in(0,1)$, the set of marked edges stochastically dominates an i.i.d.\ Bernoulli percolation process of parameter $q$ for a large enough choice of $K$. The open edges according to this i.i.d.\ Bernoulli percolation process are defined to be \emph{semi-marked}. For each edge $e$, determine whether $e$ is semi-marked and sample a candidate type 1 passage time $\tilde{t}_{1,e}$ that is an exponential random variable of rate 1, independently of every other edge. If $e$ is not semi-marked, we let $f_{1,e}=\tilde{t}_{1,e}$. If $e$ is semi-marked, we set $$ f_{1,e} = \begin{cases} \tilde{t}_{1,e} & \mbox{if }\tilde{t}_{1,e}\leq K,\\ \infty & \mbox{if }\tilde{t}_{1,e}> K. \end{cases} $$ The evolution of type 1 is identical under $\left\{f_{1,e}\right\}_e$ and $\left\{\tilde{t}_{1,e}\right\}_e$: a semi-marked edge has both endpoints marked, so their conversion times are smaller than $K$, and hence type 1 never traverses such an edge when $\tilde{t}_{1,e}>K$, as the transmitting endpoint converts to type 2 first. The passage times given by $\left\{f_{1,e}\right\}_e$ are i.i.d.\ and strictly dominate i.i.d.\ exponential passage times of rate 1. Hence we may apply van den Berg--Kesten and the result follows from Theorem~\ref{thm:shape}. \end{proof} With Lemma~\ref{lem:vdbK} to hand, we are now in a position to prove Theorem~\ref{thm:lattice}.
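The truncation step in the proof can be sketched as follows. For simplicity (an assumption of this sketch, not the proof), semi-marks are sampled directly as i.i.d.\ Bernoulli variables rather than extracted from the 1-dependent marking via Liggett--Schonmann--Stacey; `modified_times` is a hypothetical name:

```python
import random

def modified_times(edges, K, q, rng):
    """Build the decoupled passage times from the proof of the lemma: each
    edge gets a candidate rate-1 exponential time t~ and is independently
    semi-marked with probability q; on a semi-marked edge, f = t~ if
    t~ <= K and f = infinity otherwise, while f = t~ on all other edges."""
    t_tilde, f = {}, {}
    for e in edges:
        t_tilde[e] = rng.expovariate(1.0)
        semi_marked = rng.random() < q
        if semi_marked and t_tilde[e] > K:
            f[e] = float("inf")
        else:
            f[e] = t_tilde[e]
    return t_tilde, f
```

Pathwise, $f_{1,e}\ge\tilde{t}_{1,e}$ for every edge, which is the (strict) stochastic domination used to invoke van den Berg--Kesten.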
\begin{proof}[Proof of Theorem~\ref{thm:lattice}] Let $\kappa$ be as in Lemma~\ref{lem:vdbK}, so for any $\varepsilon>0$, $$ \eta_1(t) \subset (1+\varepsilon)(1-\kappa)t\mathcal{B}_1, $$ for all $t$ large enough, almost surely. By Theorem~\ref{thm:shape}, if we only consider the evolution of type 2 after the origin has been converted, then for any $\varepsilon>0$, $$ \eta_2(t)\supset (1-\varepsilon)t\mathcal{B}_\lambda = (1-\varepsilon)\lambda t \mathcal{B}_1, $$ for all $t$ large enough, almost surely. If $\lambda>1-\kappa$ and we fix $\varepsilon$ small enough so that $$ (1+\varepsilon)(1-\kappa) < (1-\varepsilon)\lambda, $$ we deduce that type 1 dies out almost surely, as $\eta_1(t)\subset\eta_2(t)$ for all large enough $t$. \end{proof} \subsection{Sidoravicius--Stauffer percolation} \label{sec:SSP} The proof of Theorem~\ref{thm:rho} relies on a coupling of our converting first passage percolation model with a random competition process called \emph{Sidoravicius--Stauffer percolation} (SSP). SSPs were introduced by Dauvergne and Sly \cite{dauvergne2021spread} as a streamlined version of FPPHE from \cite{sidoravicius2019multi}. The purpose of this section is to define SSPs and outline an important encapsulation theorem of Dauvergne and Sly. An SSP on $\mathbb{Z}^d$ consists of two competing growth processes $\mathfrak{R}(t)$ and $\mathfrak{B}(t)$ for $t\geq0$, called red and blue, respectively. Let $\overrightarrow{\text{E}}$ be the set of directed edges on $\mathbb{Z}^d$, so that $$ \overrightarrow{\text{E}} = \left\{(u,v):\{u,v\}\in E\left(\mathbb{Z}^d\right)\right\}. $$ To define an SSP we require the following. \begin{itemize} \item Functions $X_{\mathfrak{R}}:\overrightarrow{\text{E}}\to [0,1]$ and $X_{\mathfrak{B}}:\overrightarrow{\text{E}}\to [0,\infty)$ viewed as the clocks defining the growth of the red and blue process, respectively. \item A collection of blue seeds $\mathfrak{B}_*\subset \mathbb{Z}^d$.
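The choice of $\varepsilon$ in the proof above is elementary algebra: $(1+\varepsilon)(1-\kappa)<(1-\varepsilon)\lambda$ rearranges to $\varepsilon<\bigl(\lambda-(1-\kappa)\bigr)/\bigl(\lambda+(1-\kappa)\bigr)$, which is positive exactly when $\lambda>1-\kappa$. A hypothetical helper illustrating the computation:

```python
def epsilon_window(kappa, lam):
    """Largest admissible eps for (1 + eps)(1 - kappa) < (1 - eps) * lam:
    rearranging gives eps < (lam - (1 - kappa)) / (lam + (1 - kappa)),
    which is positive exactly when lam > 1 - kappa; returns None otherwise."""
    num = lam - (1.0 - kappa)
    den = lam + (1.0 - kappa)
    return num / den if num > 0 else None
```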
\item A parameter $\kappa>1$ such that $X_{\mathfrak{B}}(u,v)\leq \kappa$ for all $(u,v)\in \overrightarrow{\text{E}}$. \end{itemize} If $u$ and $v$ are both blue seeds, we take $X_{\mathfrak{B}}(u,v)=0$. That is, the blue process spreads instantaneously through connected components of blue seeds. For our purposes, the blue process spreads after waiting time $\kappa$ through every other edge. Hence we will always consider the special case where the blue process spreads according to the clocks given by $$ X_{\mathfrak{B}}:\overrightarrow{\text{E}}\to [0,\kappa] \mbox{ with } X_{\mathfrak{B}}(u,v) = \begin{cases} 0 & \mbox{if } u,v\mbox{ are blue seeds},\\ \kappa & \mbox{otherwise}. \end{cases} $$ At time $t=0$, the red process occupies only the origin while the blue process is dormant in the seeds. SSPs evolve in time through the following dynamics. Given a site $u$, let $T(u)$ be the earliest time the red or blue process occupies $u$ and let $C(u)\in\{\mathfrak{R},\mathfrak{B}\}$ denote the colour of $u$ once it is occupied. Given an edge $(u,v)$, at time $T(u)+X_{C(u)}(u,v)$, the edge $(u,v)$ will ring and the process evolves in the following manner. If $T(v)<T(u)+X_{C(u)}(u,v)$, then the occupation is suppressed, as $v$ has already been coloured by an invasion from another edge. Otherwise, we colour $v$ according to the following rules: \begin{itemize} \item If $C(u)=\mathfrak{R}$ and $v\in\mathfrak{B}_*$, then $C(v)=\mathfrak{B}$. \item If $C(u)=\mathfrak{R}$ and $v\notin\mathfrak{B}_*$, then $C(v)=\mathfrak{R}$. \item If $C(u)=\mathfrak{B}$, then $C(v)=\mathfrak{B}$. \end{itemize} The construction of SSPs in \cite{dauvergne2021spread} allows the red process to invade through blue sites in some circumstances due to caveats in their application. They couple SSPs with a spread of infection model, and blue regions mark only the places where they cannot guarantee the infection is moving fast enough.
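The colouring rules and the special-case blue clocks above can be written as a small transition function; this is a sketch for illustration, with hypothetical names, not code from \cite{dauvergne2021spread}:

```python
def blue_clock(u_is_seed, v_is_seed, kappa):
    """Blue clocks in our special case: instantaneous inside connected
    components of blue seeds, the maximal value kappa on every other edge."""
    return 0.0 if (u_is_seed and v_is_seed) else kappa

def colour_after_ring(colour_u, v_is_seed):
    """Colour given to a still-uncoloured site v when the edge (u, v) rings:
    red spreads red except into blue seeds; blue always spreads blue."""
    if colour_u == "R":
        return "B" if v_is_seed else "R"
    return "B"
```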
For our application, we do not need to consider these cases and so the above construction will suffice. We say the red (resp.\ blue) process \emph{survives} if there is an infinite connected region of red (resp.\ blue) sites in the limit as $t\to\infty$. Otherwise we say the red (resp.\ blue) process \emph{dies out}. The following is an encapsulation result of Dauvergne and Sly \cite[Theorem 2.14]{dauvergne2021spread}, in which the red process survives and encapsulates all blue regions with positive probability, so long as $\kappa$ is large enough (so that the blue process is substantially slower than the red process) and the blue seeds are stochastically dominated by an i.i.d.\ Bernoulli process of small enough parameter. \begin{theorem} \label{thm:enc_SS} Consider a random SSP $(\mathfrak{R},\mathfrak{B})$ on $\mathbb{Z}^d$ driven by potentially random clocks $X_{\mathfrak{R}},X_{\mathfrak{B}}$, a collection of blue seeds $\mathfrak{B}_*$ and a constant parameter $\kappa>4000$. Suppose additionally that $\mathfrak{B}_*$ is stochastically dominated by an i.i.d.\ Bernoulli process of parameter $p>0$. There exists a universal constant $c>0$ such that the probability the red process survives and the blue process dies out is at least $1-cp$. \end{theorem} The statement of Theorem~\ref{thm:enc_SS} suffices for our purposes; a more detailed version is given in \cite{dauvergne2021spread}. \subsection{Proof of Theorem~\ref{thm:rho}} In this section we prove Theorem~\ref{thm:rho} and, in doing so, partially answer Conjecture~\ref{conj:lattice}. We first prove $\rho_{\ell}>0$ through a coupling with an appropriate SSP. To facilitate this coupling, we need to provide an alternative construction of the model on $\mathbb{Z}^d$. Type 1 evolves according to the passage times $\left\{t_{1,e}\right\}_{e\in E(\mathbb{Z}^d)}$ and converts to type 2 according to the conversion times $\left\{\mathcal{I}_x\right\}_{x\in\mathbb{Z}^d}$, as before.
When a site $x$ is occupied by type 2, it attempts to spread to neighbouring sites according to the passage times $\left\{t_{2,e}\right\}_{e\in E(\mathbb{Z}^d)}$. That is, if $x$ is first occupied by type 2 at time $s$ and $y\sim x$ is vacant at time $s+t_{2,xy}$, then $y$ is occupied by type 2 at time $s+t_{2,xy}$. If instead $y$ is occupied by type 1 at some time $s'\in[s,s+t_{2,xy})$, then type 2 attempts to occupy $y$ at time $s'+t_{3,xy}$, where $\left\{t_{3,e}\right\}_{e\in E(\mathbb{Z}^d)}$ is an i.i.d.\ collection of exponential random variables of rate $\lambda$. By the memoryless property of the exponential distribution, this construction is equivalent to the original definition of the model. \begin{proof}[Proof of Theorem~\ref{thm:rho}: $\rho_{\ell}>0$.] Let $C$ be a large constant which we fix later. Define a site $x$ to be a \emph{type 2-seed} if at least one of the following holds. \begin{itemize} \item There exists $y\sim x$ such that $t_{1,xy}\geq C$. \item There exists $y\sim x$ such that $t_{2,xy}<C^2$. \item There exists $y\sim x$ such that $t_{3,xy}<C^2$. \item The conversion time at $x$ satisfies $\mathcal{I}_x<C^2$. \end{itemize} We now construct the appropriate SSP required for the coupling argument. The blue seeds for the SSP are given by the type 2-seeds above. Define the clocks $X_{\mathfrak{R}}$ and $X_{\mathfrak{B}}$ governing the spread of the red and blue process as follows: $$ X_{\mathfrak{R}}:\overrightarrow{\text{E}}\to [0,C] \mbox{ with } X_{\mathfrak{R}}(x,y) = \min\{t_{1,xy},C\}, $$ and $$ X_{\mathfrak{B}}:\overrightarrow{\text{E}}\to [0,C^2] \mbox{ with } X_{\mathfrak{B}}(x,y) = \begin{cases} 0 & \mbox{if } x,y\mbox{ are blue seeds},\\ C^2 & \mbox{otherwise}. \end{cases} $$ Note the change from $X_\mathfrak{R}$ being bounded by 1 to being bounded by $C$ amounts to a time change and does not alter any arguments.
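The seed condition in the bullet list above is a simple local check; the following sketch (hypothetical function name, with the edge times passed as dictionaries keyed by ordered pairs) mirrors it directly:

```python
def is_type2_seed(x, nbrs, t1, t2, t3, conv, C):
    """A site x is a type 2-seed if some incident type 1 time is at least C,
    some incident t2 or t3 time is below C^2, or x converts before time C^2."""
    return (any(t1[(x, y)] >= C for y in nbrs)
            or any(t2[(x, y)] < C * C for y in nbrs)
            or any(t3[(x, y)] < C * C for y in nbrs)
            or conv[x] < C * C)
```

On a non-seed site, every incident type 1 time is below $C$ while all competing times are at least $C^2$, which is the separation of scales exploited below.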
The only edges where the red clock equals $C$ must have a type 2-seed at each endpoint, and thus they play no role in the evolution of the red process. We deduce that if a site $x$ is not a type 2-seed and is occupied by type 1, it is able to attempt to spread type 1 to all of its neighbours. Indeed, if $y\sim x$, then $$ t_{1,xy}<\min_{z\sim x}\left\{t_{2,xz},t_{3,xz},\mathcal{I}_x\right\} $$ by the construction of type 2-seeds, and so the propagation of type 1 from $x$ cannot be blocked by the spread of type 2 or by conversions. This is precisely why we needed to define the passage times $\left\{t_{3,e}\right\}_{e\in E(\mathbb{Z}^d)}$: so that the spread of type 2 from a neighbouring site does not block the spread of type 1. If there were no type 2-seeds, it would be immediate that type 1 survives, as type 2 would not have the potential to block its spread. Through this construction, type 2 can only block type 1 through the spread from type 2-seeds. Sites that can potentially be blocked from type 1 through type 2-seeds then form the blue process, while the remaining sites are red. Hence, proving the red process survives implies type 1 survives. It is easy to verify that this construction yields a valid SSP in the language of \cite{dauvergne2021spread}. Moreover, by setting $C$ large enough and then $\lambda$ and $\rho$ small enough, the probability a site is a type 2-seed can be made arbitrarily small. The process of labelling sites as type 2-seeds defines a 1-dependent percolation process and so can be constructed to be stochastically dominated by an i.i.d.\ Bernoulli process of parameter $p$, for any $p\in(0,1)$, by Liggett, Schonmann and Stacey \cite{liggett1997domination}. This observation allows us to deduce from Theorem~\ref{thm:enc_SS} that, for $\lambda$ and $\rho$ small enough, the red process survives and occupies infinitely many sites, and all connected components of the blue process are finite, with positive probability.
\end{proof} The proof that $\rho_u$ exists and is finite requires less machinery. \begin{proof}[Proof of Theorem~\ref{thm:rho}: $\rho_u<\infty$] Define a site $x$ to be \emph{closed} if the minimum type 1 passage time on an edge incident to $x$ is greater than the time it takes $x$ to convert once occupied by type 1, so that $$ \min_{y\sim x}t_{1, xy} > \mathcal{I}_x. $$ Note that closed sites cannot pass type 1 to any of their neighbours. The process of labelling sites closed is a 1-dependent percolation process with $$ \lim_{\rho\to \infty}\mathbb{P}\left(x\mbox{ is closed}\right)=1. $$ Through Liggett, Schonmann and Stacey \cite{liggett1997domination}, by setting $\rho$ large enough, we deduce that the closed sites stochastically dominate a supercritical i.i.d.\ Bernoulli percolation process. Hence, for all large enough $\rho$, type 1 dies out almost surely, as the origin is encapsulated by closed sites. \end{proof} \appendix \section{Appendix: Standard large deviation results} \begin{lemma}[Chernoff bounds for Poisson random variables] \label{lem:chernoff} Let $P$ be a Poisson random variable of mean $\mu$. For any $\varepsilon\in(0,1)$, $$ \mathbb{P}\left(P < (1-\varepsilon)\mu\right) < \exp\left\{-\mu \varepsilon^2/2\right\} $$ and $$ \mathbb{P}\left(P > (1+\varepsilon)\mu\right) < \exp\left\{-\mu \varepsilon^2/4\right\}. $$ For any $C>0$ and $\theta>0$, we have $$ \mathbb{P}\left(P > C\mu\right) \leq \exp\left\{-\mu\left(1-e^{\theta}+\theta C\right)\right\}. $$ \end{lemma} \end{document}
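The bounds in the lemma can be sanity-checked numerically by summing the Poisson pmf directly; this is an illustration of the stated inequalities, not part of any proof:

```python
import math

def poisson_cdf(k, mu):
    """P(P <= k) for P ~ Poisson(mu), by direct summation of the pmf."""
    if k < 0:
        return 0.0
    term = math.exp(-mu)
    total = term
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def lower_tail_bound(mu, eps):
    """exp(-mu * eps^2 / 2), bounding P(P < (1 - eps) * mu)."""
    return math.exp(-mu * eps ** 2 / 2)

def upper_tail_bound(mu, eps):
    """exp(-mu * eps^2 / 4), bounding P(P > (1 + eps) * mu)."""
    return math.exp(-mu * eps ** 2 / 4)
```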
\begin{document} \title{Real Time Image Saliency for Black Box Classifiers} \begin{abstract} In this work we develop a fast saliency detection method that can be applied to any differentiable image classifier. We train a masking model to manipulate the scores of the classifier by masking salient parts of the input image. Our model generalises well to unseen images and requires a single forward pass to perform saliency detection, and is therefore suitable for use in real-time systems. We test our approach on CIFAR-10 and ImageNet datasets and show that the produced saliency maps are easily interpretable, sharp, and free of artifacts. We suggest a new metric for saliency and test our method on the ImageNet object localisation task. We achieve results outperforming other weakly supervised methods. \end{abstract} \section{Introduction} Current state-of-the-art image classifiers rival human performance on image classification tasks, but often exhibit unexpected and unintuitive behaviour \citep{lime, adversarialimages}. For example, we can apply a small perturbation to the input image, unnoticeable to the human eye, to fool a classifier completely \citep{adversarialimages}. Another example of an unexpected behaviour is when a classifier fails to \textit{understand} a given class despite having high accuracy. For example, if ``polar bear'' is the only class in the dataset that contains snow, a classifier may be able to get a 100\% accuracy on this class by simply detecting the presence of snow and ignoring the bear completely \citep{lime}. Therefore, even with perfect accuracy, we cannot be sure whether our model actually detects polar bears or just snow. One way to decouple the two would be to find snow-only or polar-bear-only images and evaluate the model's performance on these images separately.
An alternative is to use an image of a polar bear with snow from the dataset and apply a \textit{saliency detection method} to test what the classifier is really looking at \citep{lime, gradientbackprop}. \begin{figure} \caption{\tiny Input Image} \caption{\tiny Generated saliency map} \caption{\tiny Image multiplied by the mask} \caption{\tiny Image multiplied by inverted mask} \caption{An example of explanations produced by our model. The top row shows the explanation for the "Egyptian cat" while the bottom row shows the explanation for the "Beagle". Note that the produced explanations can both precisely highlight and remove the selected object from the image.} \label{fig:our} \end{figure} Saliency detection methods show which parts of a given image are the most relevant to the model for a particular input class. Such saliency maps can be obtained, for example, by finding the smallest region whose removal causes the classification score to drop significantly. This is because removing a patch which is not useful for the model should barely affect the classification score. Finding such a salient region can be done iteratively, but this usually requires hundreds of iterations and is therefore a time-consuming process. In this paper we lay the groundwork for a new class of fast and accurate model-based saliency detectors, giving high pixel accuracy and sharp saliency maps (an example is given in figure \ref{fig:our}). We propose a fast, model-agnostic saliency detection method. Instead of iteratively obtaining saliency maps for each input image separately, we train a model to predict such a map for any input image in a single feed-forward pass. We show that this approach is not only orders of magnitude faster than iterative methods, but it also produces higher quality saliency masks and achieves better localisation results. We assess this with standard saliency benchmarks, and introduce a new saliency measure.
Our proposed model is able to produce real-time saliency maps, enabling new applications such as video saliency, which we comment on in our \textit{Future Research} section (\S\ref{sect:conc}). \section{Related work} Since the rise of CNNs in 2012 \citep{alexnet}, numerous methods of image saliency detection have been proposed \citep{zeiler2014visualizing, gradientbackprop, allcnn, topdownattention, scenecnn, feedbackoptim}. One of the earliest such methods is a gradient-based approach introduced in \citep{gradientbackprop}, which computes the gradient of the class with respect to the image and assumes that salient regions are at locations with high gradient magnitude. Other similar backpropagation-based approaches have been proposed, for example Guided Backpropagation \citep{allcnn} or Excitation Backprop \citep{topdownattention}. While the gradient-based methods are fast enough to be applied in real time, they produce explanations of limited quality \citep{topdownattention} and they are hard to improve and build upon. \citet{scenecnn} proposed an approach that iteratively removes patches of the input image (by setting them to the mean colour) such that the class score is preserved. After a sufficient number of iterations, we are left with the salient parts of the original image. The maps produced by this method are easily interpretable, but unfortunately, the iterative process is very time consuming and not acceptable for real-time saliency detection. In another work, \citet{feedbackoptim} introduced an optimisation method that aims to preserve only a fraction of network activations such that the class score is maximised. Again, after the iterative optimisation process, only the relevant activations remain and their spatial location in the CNN feature map indicates the salient image regions. Very recently (and in parallel to this work), another optimisation-based method was proposed \citep{maskoptim}.
Similarly to \citet{feedbackoptim}, \citet{maskoptim} also propose to use gradient descent to optimise for the salient region, but the optimisation is done only in the image space and the classifier model is treated as a black box. Essentially, \citet{maskoptim}'s method tries to remove as little from the image as possible, and at the same time to reduce the class score as much as possible. A removed region is then a minimally salient part of the image. This approach is model agnostic and the produced maps are easily interpretable, because the optimisation is done in the image space and the model is treated as a black box. We next argue what conditions a good saliency model should satisfy, and propose a new metric for saliency. \section{Image Saliency and Introduced Evidence} Image saliency is relatively hard to define and there is no single obvious metric that could measure the quality of the produced map. In simple terms, the saliency map is defined as a summarised explanation of where the classifier ``looks'' to make its prediction. There are two slightly more formal definitions of saliency that we can use: \begin{itemize} \item Smallest sufficient region (SSR) --- smallest region of the image that alone allows a confident classification, \item Smallest destroying region (SDR) --- smallest region of the image that, when removed, prevents a confident classification. \end{itemize} Similar concepts were suggested in \citep{maskoptim}. An example of SSR and SDR is shown in figure \ref{fig:regions}. It can be seen that the SSR is very small and has only one seal visible. Given this SSR, even a human would find it difficult to recognise the preserved image. Nevertheless, it contains some characteristic ``seal'' features, such as parts of the face with whiskers, and the classifier is over 90\% confident that this image should be labelled as a ``seal''.
On the other hand, the SDR is a much stronger and larger region, and it quite successfully removes all the evidence for seals from the image. In order to be as informative as possible, we would like to find a region that performs well as both SSR and SDR. \begin{figure} \caption{From left to right: the input image; smallest sufficient region (SSR); smallest destroying region (SDR). Regions were found using the mask optimisation procedure from \citep{maskoptim}.} \label{fig:regions} \end{figure} Both SDR and SSR remove some evidence from the image. There are a few ways of removing evidence, for example by blurring the evidence, setting it to a constant colour, adding noise, or by completely cropping out the unwanted parts. Unfortunately, each one of these methods introduces new evidence that can be used by the classifier as a side effect. For example, if we remove a part of the image by setting it to the constant colour green, then we may also unintentionally provide evidence for ``grass'', which in turn may increase the probability of classes appearing often with grass (such as ``giraffe''). We discuss this problem and ways of minimising introduced evidence next. \subsection{Fighting the Introduced Evidence} As mentioned in the previous section, by manipulating the image we always introduce some extra evidence. Here, let us focus on the case of applying a mask $M$ to the image $X$ to obtain the edited image $E$. In the simplest case we can simply multiply $X$ and $M$ element-wise: \begin{equation} E = X \odot M \end{equation} This operation sets certain regions of the image to a constant ``0'' colour. While setting a larger patch of the image to ``0'' may sound rather harmless (perhaps following the assumption that the mean of all colours carries very little evidence), we may encounter problems when the mask $M$ is not \textit{smooth}.
The mask $M$, in the worst case, can be used to introduce a large amount of additional evidence by generating adversarial artifacts (a similar observation was made in \citep{maskoptim}). An example of such a mask is presented in figure \ref{fig:adv}. Adversarial artifacts generated by the mask are very small in magnitude and almost imperceptible to humans, but they are able to completely destroy the original prediction of the classifier. Such adversarial masks provide very poor saliency explanations and therefore should be avoided. \begin{figure} \caption{The adversarial mask introduces very small perturbations, but can completely alter the classifier's predictions. From left to right: an image which is correctly recognised by the classifier with high confidence as a "tabby cat"; a generated adversarial mask; the original image after application of the mask, which is no longer recognised as a "tabby cat". } \label{fig:adv} \end{figure} There are a few ways to make the introduction of artifacts harder. For example, we may change the way we apply a mask to reduce the amount of unwanted evidence due to specifically-crafted masks: \begin{equation} E = X \odot M + A \odot (1-M) \end{equation} where $A$ is an alternative image. $A$ can be chosen to be, for example, a highly blurred version of $X$. In such a case the mask $M$ simply selectively adds blur to the image $X$ and therefore it is much harder to generate high-frequency high-evidence artifacts. Unfortunately, applying blur does not eliminate existing evidence very well, especially in the case of images with low spatial frequencies like a seashore or mountains. Another reasonable choice of $A$ is a random constant colour combined with high-frequency noise. This makes the resulting image $E$ more unpredictable at regions where $M$ is low and therefore it is slightly harder to produce a reliable artifact.
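The evidence-removal operation $E = X \odot M + A \odot (1-M)$ is a pointwise convex combination of the image and the alternative image. A minimal sketch on plain 2D arrays (a hypothetical helper; the construction of $A$ itself, e.g. blurring, is omitted):

```python
def apply_mask(X, M, A):
    """Evidence removal E = X * M + A * (1 - M), elementwise on 2D arrays.
    A is the alternative image, e.g. a blurred copy of X or a random
    constant colour combined with high-frequency noise."""
    return [[x * m + a * (1.0 - m) for x, m, a in zip(rx, rm, ra)]
            for rx, rm, ra in zip(X, M, A)]
```

With $M\equiv1$ the image is untouched, and with $M\equiv0$ it is replaced entirely by $A$; intermediate mask values blend the two.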
Even with all these measures, adversarial artifacts may still occur and therefore it is necessary to encourage smoothness of the mask $M$ for example via a total variation (TV) penalty. We can also directly resize smaller masks to the required size as resizing can be seen as a smoothness mechanism. \subsection{A New Saliency Metric} To assess the quality and interpretability of saliency maps we introduce a new saliency metric. According to the SSR objective we require that the classifier is able to still recognise the object from the preserved region and that the preserved region is as small as possible. In order to make sure that the preserved region is free from adversarial artifacts, instead of masking we can crop the image. We propose to find the tightest rectangular crop that \textit{contains the entire salient region} and to feed that rectangular region to the classifier to directly verify whether it is able to recognise the requested class. We define our saliency metric simply as: \begin{equation} s(a, p) = \mathrm{log}(\tilde{a}) - \mathrm{log}(p) \end{equation} with $\tilde{a} = \mathrm{max}(a, 0.05)$. Here $a$ is the area of the rectangular crop as a fraction of the total image size and $p$ is the probability of the requested class returned by the classifier based on the cropped region. The metric is almost a direct translation of the SSR. We threshold the area at $0.05$ in order to prevent instabilities at low area fractions. Good saliency detectors will be able to significantly reduce the crop size without reducing the classification probability, and therefore a low value for the saliency metric is a characteristic of good saliency detectors. The metric will give negative values for good black box and saliency detector pairs, and high magnitude positive values for badly performing black boxes, or badly performing saliency detectors. 
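The metric is straightforward to compute; a direct transcription, with the $0.05$ area floor exposed as a parameter:

```python
import math

def saliency_metric(a, p, floor=0.05):
    """s(a, p) = log(max(a, floor)) - log(p); a is the crop area fraction,
    p the classifier probability on the resized crop. Lower (more negative)
    values indicate a better black-box/saliency-detector pair."""
    return math.log(max(a, floor)) - math.log(p)
```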
Interpreting this metric following \textit{information theory}, this measure can be seen as the relative amount of information between an indicator variable with probability $p$ and an indicator variable with probability $a$---or the \textit{concentration of information} in the cropped region. Because most image classifiers accept only images of a fixed size and the crop can have an arbitrary size, we resize the crop to the required size disregarding aspect ratio. This seems to work well in practice. \subsection{The Saliency Objective} Taking the previous conditions into consideration, we want to find a mask $M$ that is smooth and performs well at both SSR and SDR; examples of such masks can be seen in figure \ref{fig:our}. Therefore, more formally, given class $c$ of interest, and an input image $X$, to find a saliency map $M$ for class $c$, our objective function $L$ is given by: \begin{equation} L(M) = \lambda_1 T\!V\!(M) + \lambda_2A\!V\!(M) -\mathrm{log}(f_c(\Phi(X, M))) + \lambda_3f_c(\Phi(X, 1-M))^{\lambda_4} \label{eq:obj} \end{equation} where $f_c$ is a softmax probability of the class $c$ of the black box image classifier and $T\!V\!(M)$ is the total variation of the mask defined simply as: \begin{equation} T\!V\!(M) = \sum_{i,j}(M_{ij}-M_{ij+1})^2 + \sum_{i,j}(M_{ij}-M_{i+1j})^2, \end{equation} $A\!V\!(M)$ is the average of the mask elements, taking value between 0 and 1, and $\lambda_i$ are regularisers. Finally, the function $\Phi$ removes the evidence from the image as introduced in the previous section: \begin{equation} \Phi(X, M) = X \odot M + A \odot (1-M). \end{equation} In total, the objective function is composed of 4 terms. The first term enforces mask smoothness, the second term encourages that the region is small. The third term makes sure that the classifier is able to recognise the selected class from the preserved region. 
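The mask-dependent regularisation terms of the objective are elementary to state in code; a sketch on plain 2D arrays with hypothetical helper names, omitting the two classifier-dependent terms:

```python
def total_variation(M):
    """TV(M): sum of squared differences between horizontally and
    vertically adjacent mask entries (the smoothness term)."""
    rows, cols = len(M), len(M[0])
    tv = 0.0
    for i in range(rows):
        for j in range(cols):
            if j + 1 < cols:
                tv += (M[i][j] - M[i][j + 1]) ** 2
            if i + 1 < rows:
                tv += (M[i][j] - M[i + 1][j]) ** 2
    return tv

def average_value(M):
    """AV(M): mean of the mask entries (the area term)."""
    flat = [v for row in M for v in row]
    return sum(flat) / len(flat)
```

A constant mask has zero TV, while a checkerboard mask (the extreme non-smooth case) maximises it, which is exactly what the $\lambda_1$ term penalises.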
Finally, the last term ensures that the probability of the selected class, after the salient region is removed, is low (note that the inverted mask $1-M$ is applied). Setting $\lambda_4$ to a value smaller than 1 (e.g.\ 0.2) helps reduce this probability to very small values. \section{Masking Model} The mask can be found iteratively for a given image-class pair by directly optimising the objective function from equation \ref{eq:obj}. In fact, this is the method used by \citet{maskoptim}, which was developed in parallel to this work, with the only difference that \citet{maskoptim} only optimise the mask iteratively and for SDR (so they don't include the third term of our objective function). Unfortunately, iteratively finding the mask is not only very slow, as normally more than 100 iterations are required, but it also causes the mask to greatly overfit to the image, and a large TV penalty is needed to prevent adversarial artifacts from forming. Therefore, the produced masks are blurry, imprecise, and overfit to the specific image rather than capturing the general behaviour of the classifier (see figure \ref{fig:regions}). For the above reasons, we develop a trainable masking model that can produce the desired masks in a single forward pass without direct access to the image classifier after training. The masking model receives an image and a class selector as inputs and learns to produce masks that minimise our objective function (equation \ref{eq:obj}). In order to succeed at this task, the model must learn which parts of the input image are considered salient by the black box classifier. In theory, the model can still learn to develop adversarial masks that perform well on the objective function, but in practice it is not an easy task, because the model itself acts as some sort of a ``regulariser'' determining which patterns are more likely and which are less.
\begin{figure} \caption{Architecture diagram of the masking model.} \label{fig:model} \end{figure} In order to make our masks sharp and precise, we adapt a U-Net architecture \citep{unet} so that the masking model can use feature maps from multiple resolutions. The architecture diagram can be seen in figure \ref{fig:model}. For the encoder part of the U-Net we use ResNet-50 \citep{resnet} pre-trained on ImageNet \citep{imagenet}. The ResNet-50 model contains feature maps of five different scales, where each subsequent scale block downsamples the input by a factor of two. We use the ResNet's feature map from Scale 5 (which corresponds to downsampling by a factor of 32) and pass it through the feature filter. The purpose of the feature filter is to attenuate spatial locations whose contents do not correspond to the selected class. Therefore, the feature filter performs the initial localisation, while the following upsampling blocks fine-tune the produced masks. The output of the feature filter $Y$ at spatial location $i$, $j$ is given by: \begin{equation} Y_{ij} = X_{ij} \sigma(X_{ij}^TC_s) \label{eq:featurefilter} \end{equation} where $X_{ij}$ is the output of the Scale 5 block at spatial location $i$, $j$; $C_s$ is the embedding of the selected class $s$, and $\sigma(\cdot)$ is the sigmoid nonlinearity. The class embedding $C$ can be learned as part of the overall objective. The upsampler blocks take the lower-resolution feature map as input and upsample it by a factor of two using a transposed convolution \citep{deconv}; afterwards, they concatenate the upsampled map with the corresponding feature map from ResNet and follow that with three bottleneck blocks \citep{resnet}. Finally, to the output of the last upsampler block (Upsampler Scale 2) we apply a 1x1 convolution to produce a feature map with just two channels --- $C_0$, $C_1$.
The mask $M_s$ is obtained from: \begin{equation} M_s = \frac{\mathrm{abs}(C_0)}{\mathrm{abs}(C_0) + \mathrm{abs}(C_1)} \end{equation} We use this nonstandard nonlinearity because the sigmoid and tanh nonlinearities did not optimise properly, and the extra degree of freedom from two channels greatly improved training. The mask $M_s$ has a resolution four times lower than the input image and has to be upsampled by a factor of four with a bilinear resize to obtain the final mask $M$. The complexity of the model is comparable to that of ResNet-50, and it can process more than a hundred 224x224 images per second on a standard GPU (which is sufficient for real-time saliency detection). \subsection{Training process} We train the masking model to directly minimise the objective function from equation \ref{eq:obj}. The weights of the pre-trained ResNet encoder (red blocks in figure \ref{fig:model}) are kept fixed during training. In order to make the training process work properly, we introduce a few optimisations. First of all, in the naive training process the ground truth label would always be supplied as the class selector. Unfortunately, in this setting the model learns to completely ignore the class selector and simply always masks the dominant object in the image. The solution to this problem is to sometimes supply a class selector for a fake class and to apply only the area penalty term of the objective function. In this setting the model must pay attention to the class selector, as the only way it can reduce the loss in the case of a fake label is by setting the mask to zero. During training, we set the probability of a fake label occurring to 30\%. One can also greatly speed up the embedding training by ensuring that the maximal value of $\sigma(X_{ij}^TC_s)$ from equation \ref{eq:featurefilter} is high in the case of a correct label and low in the case of a fake label. Finally, let us consider again the evidence removal function $\Phi(X, M)$.
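A minimal NumPy sketch of the feature filter (equation \ref{eq:featurefilter}) and of the two-channel output nonlinearity follows; the array shapes and the small constant guarding against division by zero are assumptions of this sketch, not part of the model definition.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feature_filter(X, C_s):
    """Equation (4): gate each spatial location of the H x W x D feature map X
    by its (sigmoid-squashed) similarity to the D-dimensional class embedding C_s."""
    gate = sigmoid(X @ C_s)       # H x W attention over spatial locations
    return X * gate[..., None]    # attenuate locations unrelated to the class

def mask_from_channels(C0, C1, eps=1e-8):
    """M = abs(C0) / (abs(C0) + abs(C1)); eps (an assumption here) avoids 0/0."""
    a0, a1 = np.abs(C0), np.abs(C1)
    return a0 / (a0 + a1 + eps)
```

Note that the output of `mask_from_channels` is automatically confined to $[0,1]$ without a saturating nonlinearity, which is the property that made training easier.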
In order to prevent the model from adapting to any single evidence removal scheme, the alternative image $A$ is randomly generated every time the function $\Phi$ is called. In 50\% of cases the image $A$ is the blurred version of $X$ (we use a Gaussian blur with $\sigma = 10$ to achieve a strong blur) and in the remaining cases $A$ is set to a random colour image with the addition of Gaussian noise. Such a random scheme greatly improves the quality of the produced masks as the model can no longer make strong assumptions about the final look of the image. \begin{figure} \caption{Input Image} \caption{Model \& AlexNet} \caption{Model \& GoogLeNet} \caption{Model \& ResNet-50} \caption{Grad \citep{gradientbackprop}} \caption{Mask \citep{maskoptim}} \caption{Saliency maps generated by different methods for the ground truth class. The ground truth classes, starting from the first row, are: Scottish terrier, chocolate syrup, standard schnauzer and sorrel. Columns b, c, d show the masks generated by \textit{our} masking models trained on the three different black-box classifiers.} \label{fig:sample} \end{figure} \section{Experiments} We present results on the ImageNet and CIFAR-10 datasets, assessing our technique with various metrics and baselines. \subsection{Detecting saliency on ImageNet} In the ImageNet saliency detection experiment we use three different black-box classifiers: AlexNet \citep{alexnet}, GoogLeNet \citep{googlenet} and ResNet-50 \citep{resnet}. These models are treated as black boxes and for each one we train a separate masking model. The selected parameters of the objective function are $\lambda_1=10$, $\lambda_2=10^{-3}$, $\lambda_3=5$, $\lambda_4=0.3$. The first upsampling block has 768 output channels and with each subsequent upsampling block we reduce the number of channels by a factor of two. We train each masking model as described in section 4.1 on 250,000 images from the ImageNet training set.
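The two training optimisations described in section 4.1, the occasional fake class selector and the randomised alternative image $A$, can be sketched as follows. The actual training uses a Gaussian blur with $\sigma = 10$; the crude mean-colour blur stand-in and the helper names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def strong_blur(X):
    """Crude stand-in for a strong Gaussian blur: replace every pixel by the
    image-wide channel means (the real scheme blurs with sigma = 10)."""
    return np.broadcast_to(X.mean(axis=(0, 1)), X.shape).copy()

def alternative_image(X):
    """50%: blurred copy of X; otherwise a random colour image plus Gaussian noise."""
    if rng.random() < 0.5:
        return strong_blur(X)
    colour = rng.random(3)
    return np.clip(colour + rng.normal(0.0, 0.1, X.shape), 0.0, 1.0)

def sample_selector(true_class, num_classes, p_fake=0.3):
    """Return (class_selector, is_fake).  With 30% probability supply a fake
    class, for which only the area penalty of the objective is applied."""
    if rng.random() < p_fake:
        fake = int(rng.integers(num_classes - 1))
        return fake + (fake >= true_class), True   # any class other than true_class
    return true_class, False
```

Because the only loss a fake-label example contributes is the area penalty, the model can only do well on such examples by producing an empty mask, which forces it to consult the class selector.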
During the training process, a very meaningful class embedding was learned, and we include its visualisation in the Appendix. Example masks generated by the saliency models trained on three different black-box image classifiers can be seen in figure \ref{fig:sample}, where the model is tasked to produce a saliency map for the ground truth label. In figure \ref{fig:sample} it can be clearly seen that the masks generated by our models outperform those of alternative approaches. The masks produced by the models trained on GoogLeNet and ResNet are sharp and precise and would produce accurate object segmentations. The saliency model trained on AlexNet produces much stronger and slightly larger saliency regions, possibly because AlexNet is a less powerful model which needs more evidence for successful classification. \subsubsection{Weakly supervised object localisation} A possible way to evaluate saliency maps is through object localisation. We adapt the evaluation protocol from \citep{feedbackoptim} and provide the ground truth label to the masking model. Afterwards, we threshold the produced saliency map at $0.5$, and the tightest bounding box that contains the whole saliency map is set as the final localisation box. The localisation box has to have IOU greater than $0.5$ with any of the ground truth bounding boxes in order for the localisation to be considered successful; otherwise, it is counted as an error. The calculated error rates for the three models are presented in table \ref{table:ioum}. The lowest localisation error of $36.7\%$ was achieved by the saliency model trained on the ResNet-50 black box. This is a good result considering that our method was not given any localisation training data and that a fully supervised approach employed by VGG \citep{vgg} achieved an only slightly lower error of $34.3\%$. The localisation error of the model trained on GoogLeNet is very similar to the one trained on ResNet.
This is not surprising because both models produce very similar saliency masks (see figure \ref{fig:sample}). The AlexNet-trained model, on the other hand, has a considerably higher localisation error, which is probably a result of AlexNet needing larger image contexts to make a successful prediction (and therefore producing saliency masks which are slightly less precise). \begin{table}[h] \centering \begin{tabular}{lccc} \toprule & AlexNet \citep{alexnet} & GoogLeNet \citep{googlenet} & ResNet-50 \citep{resnet} \\ \midrule Localisation Err (\%) & 39.8 & 36.9 & \textbf{36.7} \\ \bottomrule \end{tabular} \caption{Weakly supervised bounding box localisation error on ImageNet validation set for our masking models trained with different black box classifiers.} \label{table:ioum} \end{table} We also compared our object localisation errors to errors achieved by other weakly supervised methods and existing saliency detection techniques with the GoogLeNet black box. As a baseline we calculated the localisation error of a centrally placed rectangle which spans half of the image area --- which we name ``Center''. The results are presented in table \ref{table:ioures}. It can be seen that our model outperforms other approaches, sometimes by a significant margin. It also performs significantly better than the baseline (centrally placed box) and the iteratively optimised saliency masks. Because a large fraction of ImageNet images have a large, dominant object in the center, the localisation accuracy of the centrally placed box is relatively high, and it managed to outperform two methods from previous literature.
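The localisation protocol above (threshold the saliency map at 0.5, take the tightest covering box, count a hit when IOU with any ground truth box exceeds 0.5) can be sketched as:

```python
import numpy as np

def mask_to_box(M, threshold=0.5):
    """Threshold the saliency map and return the tightest bounding box
    (x0, y0, x1, y1, inclusive) covering all above-threshold pixels."""
    ys, xs = np.where(M > threshold)
    if len(xs) == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()

def iou(b1, b2):
    """Intersection over union of two inclusive pixel boxes."""
    x0, y0 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x1, y1 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0, x1 - x0 + 1) * max(0, y1 - y0 + 1)
    area = lambda b: (b[2] - b[0] + 1) * (b[3] - b[1] + 1)
    return inter / (area(b1) + area(b2) - inter)

def localisation_hit(M, gt_boxes):
    """Localisation succeeds if IOU with any ground truth box exceeds 0.5."""
    box = mask_to_box(M)
    return box is not None and any(iou(box, g) > 0.5 for g in gt_boxes)
```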
\begin{table}[h] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{ccccccccc} \toprule Center & Grad \citep{gradientbackprop} & Guid \citep{allcnn} & LRP \citep{lrp} & CAM \citep{cam} & Exc \citep{topdownattention} & Feed \citep{feedbackoptim} & Mask \citep{maskoptim} & This Work \\ \midrule 46.3 & 41.7 & 42.0 & 57.8 & 48.1 & 39.0 & 38.7 & 43.1 & \textbf{36.9} \\ \bottomrule \end{tabular} } \caption{Localisation errors (\%) on ImageNet validation set for popular weakly supervised methods. Error rates were taken from \citep{maskoptim}, which recalculated the originally reported results using a few different mask thresholding techniques and achieved slightly lower error rates. For a fair comparison, all the methods follow the same evaluation protocol of \citep{feedbackoptim} and produce saliency maps for the GoogLeNet classifier \citep{googlenet}.} \label{table:ioures} \end{table} \subsubsection{Evaluating the saliency metric} To better assess the interpretability of the produced masks, we calculate the saliency metric introduced in section 3.2 for selected saliency methods and present the results in table \ref{table:metrics}. We include a few baseline approaches --- the ``Center box'' introduced in the previous section, and the ``Max box'', which simply corresponds to a box spanning the whole image. We also calculate the saliency metric for the ground truth bounding boxes supplied with the data; if the image contains more than one ground truth box, the saliency metric is the average over all the boxes.
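Following the information-theoretic description in section 3.2, the saliency metric can be sketched as below; the lower bound on the crop's area fraction is an assumption of this sketch, added so that vanishingly small crops cannot dominate the score.

```python
import numpy as np

def saliency_metric(crop_area_fraction, class_prob, area_floor=0.05):
    """s(a, p) = log(a') - log(p): the smaller the crop and the higher the
    classifier's probability on the resized crop, the better (lower) the score.
    For the whole image (a = 1) this reduces to the cross entropy -log(p)."""
    a = max(crop_area_fraction, area_floor)   # floor on the area is an assumption
    return np.log(a) - np.log(class_prob)
```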
\begin{table}[h] \centering \begin{tabular}{lcc} \toprule & Localisation Err (\%) & Saliency Metric \\ \midrule Ground truth boxes (baseline) & 0.00 & 0.284 \\ Max box (baseline) & 59.7 & 1.366 \\ Center box (baseline) & 46.3 & 0.645 \\ \midrule Grad \citep{gradientbackprop} & 41.7 & 0.451 \\ Exc \citep{topdownattention} & 39.0 & 0.415 \\ Masking model (this work) & \textbf{36.9} & \textbf{0.318} \\ \bottomrule \end{tabular} \caption{ImageNet localisation error and the saliency metric for GoogLeNet.} \label{table:metrics} \end{table} Table \ref{table:metrics} shows that our model achieves a considerably better saliency metric than other saliency approaches. It also significantly outperforms the max box and center box baselines and is on par with the ground truth boxes, which supports the claim that the interpretability of the localisation boxes generated by our model is similar to that of the ground truth boxes. In this table, the ``Max box'' baseline takes the maximal bounding box (i.e.\ the whole image) and resizes it to the black box input size (224x224). Therefore $a$ takes the value 1 and $\log(a)$ takes the value 0. The saliency metric is then simply the cross entropy loss for the GoogLeNet black box, which is 1.366. GoogLeNet attains a slightly lower cross entropy loss if the ``Center box'' is used instead of the full image. \subsection{Detecting saliency of CIFAR-10} To verify the performance of our method on a completely different dataset, we implemented our saliency detection model for the CIFAR-10 dataset \citep{cifar10}. Because the architecture described in section 4 specifically targets high-resolution images and five downsampling blocks would be too much for 32x32 images, we modified the architecture slightly and replaced the ResNet encoder with just three downsampling blocks of five convolutional layers each. We also reduced the number of bottleneck blocks in each upsampling block from 3 to 1.
Unlike before, in this experiment the masking model was not pre-trained but randomly initialised. We used a FitNet \citep{fitnet} trained to 92\% validation accuracy as the black-box classifier to train the masking model. All other training parameters followed the ImageNet model. \begin{figure} \caption{Saliency maps generated by our model for randomly selected images from CIFAR-10 validation set.} \label{fig:cif10} \end{figure} The masking model was trained for 20 epochs. Saliency maps for sample images from the validation set are shown in figure \ref{fig:cif10}. It can be seen that the produced maps are clearly interpretable, and a human could easily recognise the original objects after masking. This confirms that the masking model works as expected even at low resolution and that the FitNet model, used as a black box, learned correct representations for the CIFAR-10 classes. More interestingly, this shows that the masking model does not need to rely on a pre-trained model, which might inject its own biases into the generated masks. \section{Conclusion and Future Research} \label{sect:conc} In this work we have presented a new, fast, and accurate saliency detection method that can be applied to any differentiable image classifier. Our model is able to produce 100 saliency masks per second, sufficient for real-time applications. We have shown that our method outperforms other weakly supervised techniques at the ImageNet localisation task. We have also developed a new saliency metric that can be used to assess the quality of explanations produced by saliency detectors. Under this new metric, the quality of explanations produced by our model outperforms other popular saliency detectors and is on par with ground truth bounding boxes.
The model-based nature of our technique means that our work can be extended by improving the architecture of the masking network, or by changing the objective function to achieve any desired properties for the output mask. Future work includes modifying the approach to produce high quality, weakly supervised, image segmentations. Moreover, because our model can be run in real time, it can be used for video saliency detection to instantly explain decisions made by black-box classifiers such as the ones used in autonomous vehicles. Lastly, our model might have biases of its own --- a fact which does not seem to influence the model's performance in finding biases in other black boxes according to the various metrics we used. It would be interesting to study the biases embedded into our masking model itself, and see how these affect the generated saliency masks. \appendix \section{Appendix} Figure \ref{fig:emb} shows a t-SNE visualisation of the embedding learned by the masking model trained on ImageNet. It can be clearly seen that closely related objects occupy nearby locations in the embedding; for example, fungi and geographical formations each form their own clusters. Figure \ref{fig:emb2} shows a subset of the embedding, and again it can be clearly seen that similar dogs occupy similar positions. Figures \ref{fig:exa1} to \ref{fig:exa6} show more saliency example results with various class selectors. \begin{figure} \caption{T-SNE visualisation of the class embedding learned by the masking model.} \label{fig:emb} \end{figure} \begin{figure} \caption{Close-up of a subset of the t-SNE visualisation of the class embedding.} \label{fig:emb2} \end{figure} \begin{figure} \caption{Masks generated by our model for the selected target class.
Notice how the cat is masked in the third image because it does not contribute to the selected class -- desk.} \label{fig:exa1} \end{figure} \begin{figure} \caption{Masks generated by our model for the selected target class. Note that no mask was generated for the first image because the selected target class (Irish setter) is not present in the image.} \label{fig:exa2} \end{figure} \begin{figure} \caption{Masks generated by our model for the selected target class. Notice that in the first and second images the classifier apparently needs more evidence to be able to recognise classes like ski or bearskin. This makes sense because it would be very hard to recognise these classes if only the corresponding objects were masked without supporting evidence.} \label{fig:exa3} \end{figure} \begin{figure} \caption{Masks generated by our model for the selected target class.} \label{fig:exa4} \end{figure} \begin{figure} \caption{Masks generated by our model for the selected target class.} \label{fig:exa5} \end{figure} \begin{figure} \caption{Masks generated by our model for the selected target class. Note that no mask was generated for the third image because the selected target class (street sign) is not present in the image.} \label{fig:exa6} \end{figure} \end{document}
\begin{document} \title{\bfseries\scshape Tur\'an's Problem for Trees} \author{\bfseries\itshape Zhi-Hong Sun$^1$ \thanks{E-mail address: [email protected]; Website: {\tt http://www.hytc.edu.cn/xsjl/szh}} \ and Lin-Lin Wang$^2$ \thanks{E-mail address: wanglinlin\[email protected]}\\ $^1$\,School of Mathematical Sciences, Huaiyin Normal University\\ Huaian, Jiangsu 223001, People's Republic of China\\ $^2$\,Center for Combinatorics, Nankai University\\ Tianjin 300071, People's Republic of China } \date{} \maketitle \thispagestyle{empty} \setcounter{page}{1} \thispagestyle{fancy} \fancyhead[L]{J. Comb. Number Theory 3(2011), no.1, 51-69} \fancyhead[R]{ } \fancyfoot{} \renewcommand{\headrulewidth}{0pt} \begin{abstract} For a forbidden graph $L$, let $ex(p;L)$ denote the maximal number of edges in a simple graph of order $p$ not containing $L$. Let $T_n$ denote the unique tree on $n$ vertices with maximal degree $n-2$, and let $T_n^*=(V,E)$ be the tree on $n$ vertices with $V=\{v_0,v_1,\ldots,v_{n-1}\}$ and $E=\{v_0v_1,\ldots,v_0v_{n-3},v_{n-3}v_{n-2},v_{n-2}v_{n-1}\}$. In the paper we give exact values of $ex(p;T_n)$ and $ex(p;T_n^*)$. \noindent \textbf{2000 Mathematics Subject Classification:} Primary 05C35; Secondary 05C05.
\end{abstract} \makeatletter \setlength\@fptop{0\p@} \makeatother \makeatletter \def\cleardoublepage{ \if@twoside \ifodd\c@page\else \hbox{} \thispagestyle{empty} \if@twocolumn\hbox{} \fi\fi\fi} \makeatother \renewcommand{\thesection}{\arabic{section}.} \renewcommand{\thesubsection}{\thesection\arabic{subsection}.} \renewcommand{\thesubsubsection}{\thesubsection\arabic{subsubsection}.} \def\figurename{Figure} \makeatletter \renewcommand{\fnum@figure}[1]{\figurename~\thefigure.} \makeatother \def\tablename{Table} \makeatletter \renewcommand{\fnum@table}[1]{\tablename~\thetable.} \makeatother \section{Introduction} In the paper, all graphs are simple graphs. For a graph $G=(V(G),E(G))$ let $e(G)=|E(G)|$ be the number of edges in $G$ and let $\Delta(G)$ be the maximal degree of $G$. For a family of forbidden graphs $L$, let $ex(p;L)$ denote the maximal number of edges in a graph of order $p$ not containing any graphs in $L$. The corresponding Tur\'an problem is to evaluate $ex(p;L)$. For a graph $G$ of order $p$, if $G$ does not contain any graphs in $L$ and $e(G)=ex(p;L)$, we say that $G$ is an extremal graph. In the paper we also use $Ex(p;L)$ to denote the set of extremal graphs of order $p$ not containing any graphs in $L$. \par Let $\Bbb N$ be the set of positive integers. Let $p,n\in\Bbb N$ with $p\ge n\ge 2$. For a given tree $T$ on $n$ vertices, it is difficult to determine the value of $ex(p;T)$. The famous Erd\"os--S\'os conjecture asserts that $ex(p;T)\le \frac{(n-2)p}2$. For the progress on the Erd\"os--S\'os conjecture, see [2,6,7,8]. Write $p=k(n-1)+r$, where $k\in\Bbb N$ and $r\in\{0,1,\ldots,n-2\}$. Let $P_n$ be the path on $n$ vertices. In [3] Faudree and Schelp showed that $$ex(p;P_n)=k\binom {n-1}2+\binom r2.\tag 1.1$$ In the special case $r=0$, (1.1) is due to Erd\"os and Gallai [1].
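\par As a quick sanity check of (1.1), take $p=7$ and $n=4$, so that $k=2$ and $r=1$. Then (1.1) gives $ex(7;P_4)=2\binom 32+\binom 12=6$, and the bound is attained by $2K_3\cup K_1$, which has $6$ edges and contains no path on four vertices.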
Let $K_{1,n-1}$ denote the unique tree on $n$ vertices with $\Delta(K_{1,n-1})=n-1$, and let $T_n$ denote the unique tree on $n$ vertices with $\Delta(T_n)=n-2$. In Section 2 we determine $ex(p;K_{1,n-1})$, and in Section 3 we obtain the exact value of $ex(p;T_n)$. \par For $n\ge 4$ let $T_n^*=(V,E)$ be the tree on $n$ vertices with $V=\{v_0,v_1,\ldots,v_{n-1}\}$ and $E=\{v_0v_1,\ldots,v_0v_{n-3},v_{n-3}v_{n-2},v_{n-2}v_{n-1}\}$. In Section 4 we completely determine the value of $ex(p;T_n^*)$. In addition to the above notation, throughout the paper we also use the following notation: $[x]\frac{\quad}{\quad}$the greatest integer not exceeding $x$, $d(v)\frac{\quad}{\quad}$the degree of the vertex $v$ in a graph, $\Gamma(v)\frac{\quad}{\quad}$the set of vertices adjacent to the vertex $v$, $d(u,v)\frac{\quad}{\quad}$the distance between the two vertices $u$ and $v$ in a graph, $K_n\frac{\quad}{\quad}$the complete graph on $n$ vertices, $K_{m,n}\frac{\quad}{\quad}$the complete bipartite graph with $m$ and $n$ vertices in the bipartition, $G[V_0]\frac{\quad}{\quad}$the subgraph of $G$ induced by vertices in the set $V_0$, $G-V_0\frac{\quad}{\quad}$the subgraph of $G$ obtained by deleting the vertices in $V_0$ and all edges incident with them, $G-M\frac{\quad}{\quad}$the graph obtained by deleting all edges in $M$ from the graph $G$, $G+M\frac{\quad}{\quad}$the graph obtained by adding all edges in $M$ to the graph $G$. \section{The Evaluation of $ex(p;K_{1,n-1})$} \pro{Theorem 2.1} Let $p,n\in\Bbb N$ with $p\geq n-1\geq 1$. Then $ex(p;K_{1,n-1})=[\frac{(n-2)p}2]$. \endpro \begin{proof} Clearly $ex(n-1;K_{1,n-1})=e(K_{n-1})=\frac{(n-1)(n-2)}2$. Thus the result is true for $p=n-1$. Now we assume $p\ge n$. Suppose that $G$ is a graph of order $p$ without $K_{1,n-1}$. Then clearly $\Delta(G)\leq n-2$ and so $2e(G)=\sum_{v\in V(G)}d(v)\leq p\Delta(G)\leq (n-2)p$. Hence, $ex(p;K_{1,n-1})\leq \frac{(n-2)p}2$.
As $ex(p;K_{1,n-1})$ is an integer, we have $$ex(p;K_{1,n-1})\leq\big[\frac{(n-2)p}2\big].\tag 2.1$$ Clearly $ex(p;K_{1,1})=0$. So the result holds for $n=2$. As $[\frac p2]K_2$ does not contain $K_{1,2}$, we have $ex(p;K_{1,2})\geq [\frac p2]$. This together with (2.1) gives $ex(p;K_{1,2})=\left[\frac p2\right]$. So the result is true for $n=3$. \par Suppose that $G$ is a Hamilton cycle with $p$ vertices. Then $G$ does not contain $K_{1,3}$. Thus we have $ex(p;K_{1,3})\geq p$. Combining this with (2.1) yields $ex(p;K_{1,3})=p$. So the result is true for $n=4$. \par Now we assume $n\geq 5$. By (2.1), it suffices to show that $ex(p;K_{1,n-1})\ge [\frac{(n-2)p}2]$. Set $k=[\frac{p+1}2]$, $V=\{1,2,\ldots,2k\}$ and $M=\{12,34,\cdots,(2k-1)(2k)\}$. Let us consider the following four cases. \par {\bf Case 1.} $2\mid p$ and $2\nmid n$. Set $G=(V,E)$, where $$E=\big\{ij\ |\ i,j\in V,\ j-i\in\{1,2k-1,k,k\pm 1,\ldots,k\pm (n-5)/2\}\big\}.$$ Clearly $G$ is an $(n-2)$-regular graph of order $p$ and so $G$ does not contain $K_{1,n-1}$. Hence, $ex(p;K_{1,n-1})\ge e(G) =\frac{(n-2)p}2=[\frac{(n-2)p}2]$. \par {\bf Case 2.} $2\mid p$ and $2\mid n$. Set $$E_1=\big\{ij\ |\ i,j\in V,\ j-i\in\{1,2k-1,k,k\pm 1,\ldots,k\pm (n-4)/2\}\big\}.$$ Then $M\subset E_1$. Let $G=(V,E_1-M)$. We see that $G$ is an $(n-2)$-regular graph of order $p$ and so $G$ does not contain $K_{1,n-1}$. Hence, $ex(p;K_{1,n-1})\ge e(G) =\frac{(n-2)p}2=[\frac{(n-2)p}2]$. \par {\bf Case 3.} $2\nmid p$ and $2\mid n$. Let $G$ be the $(n-2)$-regular graph of order $2k$ constructed in Case 2. Let $$v_1=k-\frac n2+3,\ v_2=k-\frac n2+4,\ldots,v_{n-3}=k+\frac n2-1\quad\text{and}\quad v_{n-2}=2k.$$ Then clearly $v_1,\ldots,v_{n-2}$ are all the vertices adjacent to the vertex $1$. If $2\mid k-\frac n2$, then $v_1,v_3,\ldots,v_{n-5}$ are odd and so $v_1v_2,v_3v_4,\ldots,v_{n-5}v_{n-4}\in M$. Thus, $v_1v_2,v_3v_4,\ldots,$ $v_{n-5}v_{n-4}\notin E(G)$.
As $2k-(k+\frac n2-1)=k-\frac{n-2}2$, we see that $v_{n-3}v_{n-2}\not\in E_1$ and so $v_{n-3}v_{n-2}\not\in E(G)$. Let $$G'=G-\{1\}+\{v_1v_2,v_3v_4,\ldots,v_{n-5}v_{n-4},v_{n-3}v_{n-2}\}.$$ We see that $G'$ is an $(n-2)$-regular graph of order $p$. Hence, $ex(p;K_{1,n-1})\ge e(G')=\frac{(n-2)p}2=[\frac{(n-2)p}2]$. \par If $2\nmid k-\frac n2$, then $v_2,v_4,\ldots,v_{n-4}$ are odd and so $v_2v_3,v_4v_5,\ldots,v_{n-4}v_{n-3}\in M$. Thus, $v_2v_3,v_4v_5,\ldots,v_{n-4}v_{n-3}\notin E(G)$. As $p+1=2k>n$ we have $k-\frac n2+3>3$ and so $2,3\notin\{v_1,\ldots,v_{n-2}\}$. Clearly $2v_{n-2},3v_1\notin E_1$ and so $2v_{n-2},3v_1\notin E(G)$. Let $$G'=G-\{1\}-\{23\}+\{v_2v_3,v_4v_5,\ldots,v_{n-4}v_{n-3},3v_1,2v_{n-2}\}.$$ Then $G'$ is an $(n-2)$-regular graph of order $p$. Hence, $ex(p;K_{1,n-1})\ge e(G')=\frac{(n-2)p}2=[\frac{(n-2)p}2]$. \par {\bf Case 4.} $2\nmid p$ and $ 2\nmid n$. As $2\mid n+1$, we can construct an $(n-1)$-regular graph $G_1$ of order $p$ by using the argument in Case 3. Let $$M_1=\cases \{23,45,\ldots,(2k-2)(2k-1),k(2k)\}&\hbox{if $2\mid k-\frac{n+1}2$,}\\ \{2(2k),3(k+3-\frac{n+1}2),45,67,\ldots,(2k-2)(2k-1)\} &\hbox{if $2\nmid k-\frac{n+1}2$.}\endcases $$ It is easily seen that $M_1\subset G_1$. Set $G_2=G_1-M_1$. Then for $i=2,3,\ldots,2k$ we have $$d_{G_2}(i)=\cases n-3&\hbox{if $2\mid k-\frac{n+1}2$ and $i=k$, or if $2\nmid k-\frac{n+1}2$ and $i=k+3-\frac{n+1}2$,} \\n-2&\hbox{otherwise.}\endcases$$ Thus $G_2$ does not contain $K_{1,n-1}$ and $$2e(G_2)=\sum_{i=2}^{2k}d_{G_2}(i)=n-3+(2k-2)(n-2)=(n-2)p-1.$$ Hence $ex(p;K_{1,n-1})\ge e(G_2)=\frac{(n-2)p-1}2=[\frac{(n-2)p}2].$ \par Putting all the above together we prove the theorem. \end{proof} \pro{\noindent Corollary 2.1} Let $k,p\in\Bbb N$ with $p\ge k+2$. Then there exists a $k$-regular graph of order $p$ if and only if $2\mid kp$. \endpro \begin{proof} If $G$ is a $k$-regular graph of order $p$, then $kp=2e(G)$ and so $2\mid kp$.
If $2\mid kp$, by the proof of Theorem 2.1 we know that there exists a $k$-regular graph of order $p$. \end{proof} \par{\it \noindent Remark $2.1.$} In [4] Kirkman showed that $K_{2n}$ is 1-factorable. In [5] Petersen proved that a graph $G$ is 2-factorable if and only if $G$ is $2p$-regular. Thus, Corollary 2.1 can be deduced from [4] and [5]. \section{The Evaluation of $ex(p;T_n)$} \pro{\noindent Theorem 3.1} Let $p,n\in\Bbb N$ with $p\geq n\geq 5$. Let $r\in\{ 0,1,\ldots,n-2\}$ be given by $p\equiv r\mod{n-1}$. Then $$ex(p;T_n)= \cases \big[\frac{(n-2)(p-1)-r-1}2\big] &\hbox{if}\ n\ge 7\ \hbox{and}\ 2\le r\le n-4,\\ \frac{(n-2)p-r(n-1-r)}2&\hbox{otherwise.}\endcases$$ \endpro \noindent{\it Proof.\/} Let $G$ be an extremal graph of order $p$ not containing $T_n$. Suppose $v_0\in V(G)$ and $G_0$ is the component of $G$ such that $v_0\in V(G_0)$. If $d(v_0)=m\geq n-1$, as $G$ does not contain $T_n$ we see that $G_0$ is a copy of $K_{1,m}$. Suppose $m+1=k'(n-1)+r'$ with $k'\in\Bbb N$ and $r'\in\{0,1,\ldots,n-2\}.$ Then $k'K_{n-1}\cup K_{r'}$ does not contain $T_n$. As $\frac{n-2}2>1$ and $\binom {r'}2-(r'-1)=\frac{(r'-1)(r'-2)}2\ge 0$, we find $$ e(k'K_{n-1}\cup K_{r'})=k'\binom{n-1}2+\binom{r'}2>k'(n-1)+r'-1=m=e(K_{1,m})=e(G_0).$$ Hence $G_0\notin Ex(m+1;T_n)$ and so $G\notin Ex(p;T_n)$. This contradicts the assumption. Therefore $d(v_0)\leq n-2$ and so $\Delta(G)\le n-2$. If $d(v_0)=n-2$, as $G_0$ is an extremal graph not containing $T_n$ we see that $G_0$ is a copy of $K_{n-1}$. \par Suppose $p=k(n-1)+r$. Then $k\in\Bbb N$. From the above we may assume $G=sK_{n-1}\cup G_1$ with $s\in\{0,1,\ldots,k\}$ and $\Delta(G_1)\le n-3$. If $s=k$, then clearly $G_1=K_r$ and so $e(G)=k\binom{n-1}2+\binom r2$. If $s\le k-1$, as $\Delta(G_1)\le n-3$ implies that $G_1$ does not contain any copies of $T_n$, we see that $G_1\in Ex((k-s)(n-1)+r;K_{1,n-2})$. By Theorem 2.1 we have $e(G_1)=[\frac{(n-3)((k-s)(n-1)+r)}2]$.
Hence $$e(G)=e(sK_{n-1}\cup G_1)=s\binom{n-1}2+\Big[\frac{(n-3)((k-s)(n-1)+r)}2\Big].$$ Set $f(x)=x\binom{n-1}2+[\frac{(n-3)((k-x)(n-1)+r)}2]$. Then $$\aligned f(x+1)&=(x+1)\binom{n-1}2+\left[\frac{(n-3)((k-x)(n-1)+r)-(n-3)(n-1)}2\right] \\&=x\binom{n-1}2+\left[\frac{(n-3)((k-x)(n-1)+r)}2+\frac{n-1}2\right] >f(x).\endaligned$$ Thus, $f(k-1)>f(k-2)>\ldots>f(0)$. Since $G$ is an extremal graph, by the above we must have $s=k-1$ or $k$ and so $$\align&ex(p; T_n)=e(G)\\&=\max\bigg\{(k-1)\binom{n-1}2+\left[\frac{(n-3)(n-1+r)}2\right],k\binom{n-1}2+\binom r2\bigg\}.\endalign$$ Observe that $$\frac{(n-3)(n-1+r)}2-\frac{r(r-1)}2-\frac{(n-1)(n-2)}2=\frac{r(n-2-r)-(n-1)}2.$$ We then have $$ex(p;T_n)=k\binom{n-1}2+\binom r2+\max\bigg\{0,\Big[\frac{r(n-2-r)-(n-1)}2\Big]\bigg\}.$$ If $r\in\{1,n-3,n-2\}$, then clearly $[\frac{r(n-2-r)-(n-1)}2]<0.$ For $n=6$ and $r=2$ we also have $[\frac{r(n-2-r)-(n-1)}2]=-1<0.$ Now assume $n\geq 7$ and $2\leq r\leq n-4$. Then $$\aligned r(n-2-r)-(n-1)&=\frac{n^2-8n+8}4-\Big(r-\frac{n-2}2\Big)^2 \\&\geq \frac{n^2-8n+8}4-\Big(2-\frac{n-2}2\Big)^2=n-7\ge 0\endaligned$$ and so $[\frac{r(n-2-r)-(n-1)}2]\ge 0.$ Hence $$\aligned &ex(p;T_n)\\&=\cases k\binom{n-1}2+\binom r2+\big[\frac{r(n-2-r)-(n-1)}2\big]&\hbox{if $n\ge 7$ and $2\le r\le n-4$,}\\k\binom{n-1}2+\binom r2&\hbox{otherwise.} \endcases\endaligned$$ To see the result, we note that $k\binom{n-1}2+\binom r2=\frac{(n-2)(p-r)+r^2-r}2=\frac{(n-2)p-r(n-1-r)}2$ and $$k\binom{n-1}2+\binom r2+\Big[\frac{r(n-2-r)-(n-1)}2\Big] =\Big[\frac{(n-2)(p-1)-r-1}2\Big].\hskip+2cm \square $$ \section{The Evaluation of $ex(p;T_n^*)$} \par For $n\ge 4$ we recall that $T_n^*=(V,E)$ is the tree on $n$ vertices with $V=\{v_0,v_1,\ldots,v_{n-1}\}$ and $E=\{v_0v_1,\ldots,v_0v_{n-3},v_{n-3}v_{n-2},v_{n-2}v_{n-1}\}$. Clearly $T_4^*=P_4$ and $T_5^*=P_5$. \pro{\noindent Lemma 4.1} Let $p,n\in\Bbb N$ with $p\ge n\ge 6$, and let $G\in Ex(p;T_n^*)$.
Then $\Delta (G)\le n-2.$ \endpro \begin{proof} Suppose that $v_0\in V(G), d(v_0)=m\ge n-1$ and $\Gamma (v_0)=\{v_1,\ldots,v_m\}.$ Let $G_0$ be the component of $G$ with $v_0\in V(G_0).$ If there are exactly $t$ vertices $u_1,\ldots,u_t\in V(G_0)$ such that $d(u_1,v_0)=\cdots=d(u_t,v_0)=2,$ then clearly $d(u_1)=\cdots=d(u_t)=1$, $V(G_0)=\{v_0,v_1,\ldots,v_m,u_1,\ldots,u_t\}$ and $|V(G_0)|=1+m+t$. If $u_iv_j\notin E(G_0)$ for some $j\in\{1,2,\ldots,m\}$ and every $i=1,2,\ldots,t$, then clearly $d(v_j)\le 2$. Thus, $e(G_0)\le m+t+\frac m2$. Set $1+m+t=k(n-1)+r(0\le r<n-1).$ We see that $$\aligned &k\binom{n-1}2+\binom r2-\frac{3m}2-t\\&= \frac{(n-2)(1+m+t-r)+r(r-1)-3m-2t}2\\&=\frac{(m+t)(n-5)-r(n-1-r)+(n-2)+t}2\\&\ge \frac{(n-1)(n-5)+(n-2)-r(n-1-r)}2\\&\ge\frac{(n-1)(n-5)+n-2-\frac{(n-1)^2}4}2 =\frac{3(n-3)^2-16}8 >0.\endaligned$$ Since $kK_{n-1}\cup K_r$ does not contain any copies of $T_n^*,$ applying the above we deduce $$e(G_0)\le\frac{3m+2t}2<k\binom{n-1}2+\binom r2=e(kK_{n-1}\cup K_r)\le ex(1+m+t;T_n^*).$$ As $G$ is an extremal graph not containing $T_n^*,$ we must have $e(G_0)=ex(1+m+t; T_n^*)$. This contradicts the above inequality $e(G_0)<ex(1+m+t;T_n^*)$. Hence the assumption $d(v_0)\ge n-1$ is not true. Thus $\Delta(G)\le n-2.$ The proof is now complete. \end{proof} \pro{\noindent Lemma 4.2} Let $p,n\in\Bbb N$ with $p\ge n\ge 5$, and let $G\in Ex(p;T_n^*)$. Suppose that $v_0\in V(G), d(v_0)=n-2$ and $G_0$ is the component of $G$ such that $v_0\in V(G_0)$. Then $G_0\cong K_{n-1}.$ \endpro \begin{proof} Suppose $\Gamma(v_0)=\{v_1,\ldots,v_{n-2}\}$ and there are exactly $t$ vertices $u_1,\ldots,u_t\in V(G_0)$ such that $d(u_1,v_0)=\cdots=d(u_t,v_0)=2.$ We first assume $t\ge 1$.
Then clearly $d(u_1)=\cdots=d(u_t)=1$ and $V(G_0)=\{v_0,v_1,\ldots,v_{n-2},u_1,\ldots,u_t\}.$ If $u_1v_i\in E(G)$ for some $i\in\{1,2,\ldots,n-2\}$, then clearly $v_iv_j\notin E(G)$ for all $j\in\{1,2,\ldots,n-2\}\setminus\{i\}.$ Thus, $$e(G_0)\le n-2+t+\binom{n-2-t}2\le \binom{n-2}2+t+1.$$ Assume $t=q(n-1)+t_0$ with $q\in\Bbb Z$ and $t_0\in\{0,1,\ldots,n-2\}.$ Then $$\aligned &e((1+q)K_{n-1}\cup K_{t_0})-\binom{n-2}2-t-1\\&=(1+q)\binom{n-1}2+\binom{t_0}2-\binom{n-2}2-q(n-1)-t_0-1 \\&=n-4+q\frac{(n-1)(n-4)}2+\frac{(t_0-1)(t_0-2)}2>0.\endaligned$$ As $(1+q)K_{n-1}\cup K_{t_0}$ does not contain $T_n^*,$ applying the above we get $$\aligned e(G_0)\le\binom{n-2}2+t+1<e((1+q)K_{n-1}\cup K_{t_0})\le ex(n-1+t;T_n^*).\endaligned$$ Since $G_0$ is an extremal graph of order $n-1+t$ not containing $T_n^*,$ we must have $e(G_0)=ex(n-1+t;T_n^*).$ This contradicts the above assertion. So $t\ge 1$ is not true and hence $V(G_0)=\{v_0,v_1,\ldots,v_{n-2}\}.$ As $G_0$ is an extremal graph not containing $T_n^*,$ we see that $G_0\cong K_{n-1}.$ This proves the lemma. \end{proof} \pro{\noindent Lemma 4.3} Let $n,t\in\Bbb N$ with $n\ge 4$, and let $G\in Ex(n-2+t;T_n^*)$. Suppose that $G$ is connected and $\Delta(G)=n-3.$ Then $t\le n-4$ and $e(G)\le (n-3)^2.$ \endpro \begin{proof} Suppose $v_0\in V(G),d(v_0)=n-3,\Gamma(v_0)=\{v_1,\ldots,v_{n-3}\}$ and $V(G)=\{v_0,v_1,\ldots,$ $v_{n-3},u_1,\ldots,u_t\}$. Then $d(u_i,v_0)=2$ and $u_1,\ldots,u_t$ must be independent. As $G$ is connected and $u_i$ is adjacent to some vertex in $\Gamma(v_0),$ we have $$e(G)\le\sum_{i=1}^{n-3}d(v_i)\le\sum_{i=1}^{n-3}(n-3)=(n-3)^2.$$ On the other hand, $$e(K_{n-1}\cup K_{n-4})=\frac{(n-1)(n-2)+(n-4)(n-5)}2=n^2-6n+11>(n-3)^2.$$ Thus, for $t\ge n-3$ we have $$\aligned e(G)&=ex(n-2+t;T_n^*)\ge e\left(K_{n-1}\cup K_{n-4}\cup (t-(n-3))K_1\right) \\&=e(K_{n-1}\cup K_{n-4})>(n-3)^2.\endaligned$$ This contradicts the fact $e(G)\le (n-3)^2.$ So $t\le n-4.$ The proof is now complete.
\end{proof} \pro{\noindent Lemma 4.4} Let $p,n\in\Bbb N$ with $p\ge n\ge 4,$ and $G\in Ex(p;T_n^*)$. Suppose $\Delta(G)\le n-3$. Then $p\le 2n-6.$ \endpro \begin{proof} Assume $p=2n-4+t.$ If $t\ge 2n,$ we may write $t-2=k(n-1)+r,$ where $k\in\Bbb N$ and $r\in\{0,1,\ldots,n-2\}.$ Let $G_0\in Ex(n-1+r;K_{1,n-3}).$ From Theorem 2.1 we have $e(G_0)=[\frac{(n-1+r)(n-4)}2]$. Clearly $k(n-1)=t-2-r\ge 2n-2-r>r+1.$ Thus, $$\aligned e((k+1)K_{n-1}\cup G_0)&=(k+1)\binom{n-1}2+\left[\frac{(n-1+r)(n-4)}2\right] \\&\ge\frac{(k+1)(n-1)(n-2)+(n-1+r)(n-4)-1}2 \\&=\frac{\left((k+2)(n-1)+r\right)(n-3)}2+\frac{k(n-1)-r-1}2 \\&>\frac{\left((k+2)(n-1)+r\right)(n-3)}2=\frac{(n-3)p}2.\endaligned$$ On the other hand, as $(k+1)K_{n-1}\cup G_0$ does not contain $T_n^*,$ we have $$e((k+1)K_{n-1}\cup G_0)\le ex(p;T_n^*)=e(G)\le\frac{(n-3)p}2.$$ This is a contradiction. Hence $t<2n.$ \par If $t=2n-1,$ then $p=2n-4+t=3(n-1)+n-2$ and so $$\frac{(n-3)p}2<e(3K_{n-1}\cup K_{n-2})\le ex(p;T_n^*)=e(G)\le\frac{(n-3)p}2.$$ This is also a contradiction. \par If $n-1\le t<2n-1,$ setting $G_0\in Ex(t-2;K_{1,n-3})$ and using Theorem 2.1 we see that $$e(G_0)=ex(t-2;K_{1,n-3})=\left[\frac{(n-4)(t-2)}2\right].$$ It is clear that $2K_{n-1}\cup G_0$ does not contain $T_n^*$ as a subgraph and $$\aligned e(2K_{n-1}\cup G_0)&=2\binom{n-1}2+\left[\frac{(n-4)(t-2)}2\right] \\&\ge (n-1)(n-2)+\frac{(n-4)(t-2)-1}2 \\&=\frac{(2n-4+t)(n-3)}2+\frac{2n-1-t}2 >\frac{(2n-4+t)(n-3)}2.\endaligned$$ On the other hand, $$e(2K_{n-1}\cup G_0)\le ex(2n-4+t;T_n^*)=e(G)\le\frac{(2n-4+t)(n-3)}2.$$ This is a contradiction. \par By the above, we may assume $t\le n-2.$ If $t=n-2$, then $$\aligned ex(3n-6;T_n^*)&\ge e(2K_{n-1}\cup K_{n-4})=\frac{2(n-1)(n-2)+(n-4)(n-5)}2\\&>\frac{(3n-6)(n-3)}2\ge e(G)=ex(3n-6;T_n^*).\endaligned$$ This is a contradiction.
If $t=n-3$, then $$ \aligned ex(3n-7;T_n^*)&\ge e(K_{n-1}\cup K_{n-3,n-3})=\frac{(n-1)(n-2)}2+(n-3)^2\\&>\frac{(3n-7)(n-3)}2\ge e(G)=ex(3n-7;T_n^*).\endaligned$$ This is also a contradiction. Thus $t\not=n-2,n-3$. \par Now we assume that $1\le t\le n-4.$ Suppose $H\in Ex(n-3;K_{1,n-3-t})$ and $V(H)=\{v_1,\ldots,v_{n-3}\}.$ We construct a graph $G_0=(V(G_0),E(G_0))$ of order $n-3+t$ by defining $V(G_0)=\{u_1,\ldots,u_t\}\cup V(H)$ and $E(G_0)=\{u_iv_j:1\le i\le t,1\le j\le n-3\}\cup E(H).$ It is easily seen that $d_{G_0}(v_i)\le n-4\ (1\le i\le n-3)$ and so $G_0$ does not contain any copies of $T_n^*.$ Hence, $$\align e(K_{n-1}\cup G_0)&=\binom{n-1}2+e(G_0)\\&\le ex(2n-4+t;T_n^*)=e(G)\le\frac{(2n-4+t)(n-3)}2.\endalign$$ Using Theorem 2.1 we see that $$\aligned e(G_0)&=(n-3)t+\left[\frac{(n-3)(n-4-t)}2\right]\\&\ge(n-3)t+\frac{(n-3)(n-4-t)-1}2 \\&=\frac{(2n-4+t)(n-3)}2-\binom{n-1}2+\frac 12\\&>\frac{(2n-4+t)(n-3)}2-\binom{n-1}2.\endaligned$$ This contradicts the above assertion. \par By the above we have $t\le 0$ and so $p\le 2n-4.$ If $p=2n-4$, since $K_{n-1}\cup K_{n-3}$ does not contain $T_n^*$ we have $$\aligned ex(2n-4;T_n^*)&\ge e(K_{n-1}\cup K_{n-3})=\frac{(n-1)(n-2)+(n-3)(n-4)}2\\&>\frac{(2n-4)(n-3)}2\ge e(G)=ex(2n-4;T_n^*).\endaligned$$ This is a contradiction. \par Now we assume $p=2n-5.$ It is clear that $$e(K_{n-1}\cup K_{n-4})=\frac{(n-1)(n-2)+(n-4)(n-5)}2=n^2-6n+11.$$ As $K_{n-1}\cup K_{n-4}$ does not contain $T_n^*,$ we see that $n^2-6n+11 \le ex(2n-5;T_n^*)=e(G).$ If $\Delta(G)\le n-4,$ then clearly $e(G)\le\frac{(2n-5)(n-4)}2<n^2-6n+11$. This is a contradiction.
Hence, $\Delta(G)=n-3.$ Suppose that $G_1$ is the component of $G$ such that $\Delta(G_1)=n-3.$ If $|V(G_1)|=n-2+s$ for some $s\in\{0,1,\ldots,n-3\},$ by Lemma 4.3 we have $s\le n-4.$ As $G$ is an extremal graph we have $G\backslash G_1\cong K_{n-3-s}$ and so $$\aligned e(G)&=e(G_1)+e(G\backslash G_1)\le\frac{(n-2+s)(n-3)}2+\binom{n-3-s}2 \\&=\frac 12\Big(s-\frac{n-4}2\Big)^2+\frac{7n^2-40n+56}8 \\&\le\frac 12\Big(\frac{n-4}2\Big)^2+\frac{7n^2-40n+56}8=n^2-6n+9<n^2-6n+11.\endaligned$$ This contradicts the above assertion $e(G)\ge n^2-6n+11.$ Therefore $p\neq 2n-5$ and so $p\le 2n-6$, which completes the proof. \end{proof} \pro{\noindent Theorem 4.1} Let $p,n\in\Bbb N$ with $p\ge n-1\ge 5$, and let $p=k(n-1)+r$ with $k\in\Bbb N$ and $r\in\{0,1,\ldots,n-2\}.$ Then $$\aligned &ex(p;T_n^*)\\&=\cases\frac{(k-1)(n-1)(n-2)}2+ex(n-1+r;T_n^*)&\hbox{if}\ 1\le r\le n-5;\\\frac{(n-2)p-r(n-1-r)}2&\hbox{if}\ r\in\{0,n-4,n-3,n-2\}.\endcases\endaligned$$\endpro \begin{proof} Suppose $m\in\Bbb N$ and $m\ge 2n-5$. We assert that $$ex(m;T_n^*)=\frac{(n-1)(n-2)}2+ex(m-(n-1);T_n^*).\tag 4.1$$ Assume $G\in Ex(m;T_n^*).$ From Lemma 4.1 we know that $\Delta(G)\le n-2.$ As $m\ge 2n-5,$ by Lemma 4.4 we have $\Delta(G)=n-2.$ Using Lemma 4.2 we see that $G$ has a component isomorphic to $K_{n-1}$ and so (4.1) is true. From (4.1) we deduce that for $k\ge 2$, $$ \aligned &ex(p;T_n^*)-ex(n-1+r;T_n^*) \\&=\sum_{s=1}^{k-1}\big\{ex((s+1)(n-1)+r;T_n^*)-ex(s(n-1)+r;T_n^*)\big\} =(k-1)\binom{n-1}2.\endaligned$$ This is also true for $k=1$.
\par For $r=0,$ we have $ex(n-1+r;T_n^*)=e(K_{n-1})=\binom{n-1}2$ and so $$ex(p;T_n^*)=(k-1)\binom{n-1}2+\binom{n-1}2=k\binom{n-1}2=\frac{(n-2)p}2.$$ For $r\in\{n-4,n-3,n-2\}$ we have $n-1+r\ge 2n-5$ and so by (4.1) $$\aligned ex(p;T_n^*)&=(k-1)\binom{n-1}2+ex(n-1+r;T_n^*) \\&=(k-1)\binom{n-1}2+\binom{n-1}2+ex(r;T_n^*) =k\binom{n-1}2+e(K_r)\\&=\frac{(n-2)(p-r)}2+\binom r2 =\frac{(n-2)p-r(n-1-r)}2\endaligned$$ as asserted. The proof is now complete. \end{proof} \pro{\noindent Theorem 4.2} Let $p,n\in\Bbb N$ with $p\ge n\ge 6$ and $p=k(n-1)+1$ with $k\in\Bbb N$. Then $$ex(p;T_n^*)=\frac{(n-2)(p-1)}2.$$\endpro \begin{proof} Let $G_0\in Ex(n;T_n^*).$ If $\Delta(G_0)\le n-3,$ then $e(G_0)\le\frac{(n-3)n}2<\frac{(n-1)(n-2)}2$. On the other hand, $e(G_0)=ex(n;T_n^*)\ge e(K_{n-1}\cup K_1)=\frac{(n-1)(n-2)}2$. This is a contradiction. Thus $\Delta(G_0)\ge n-2$. Applying Lemmas 4.1 and 4.2 we see that $G_0\cong K_{n-1}\cup K_1$ and so $ex(n;T_n^*)=e(G_0)=\frac{(n-1)(n-2)}2.$ Now applying Theorem 4.1 we obtain $$ex(p;T_n^*)=\frac{(k-1)(n-1)(n-2)}2+ex(n;T_n^*)=k\binom{n-1}2=\frac{(n-2)(p-1)}2.$$ This is the result. \end{proof} \pro{\noindent Theorem 4.3} Let $p,n\in\Bbb N$, $p\ge n\ge 7$ and $p=k(n-1)+n-5$ with $k\in\Bbb N$. Then $$ex(p;T_n^*)=\frac{(n-2)(p-2)}2+1.$$ \endpro \begin{proof} Let $G_0\in Ex(2n-6;T_n^*).$ If $\Delta(G_0)\le n-3,$ then $e(G_0)\le\frac{(n-3)(2n-6)}2=(n-3)^2$. As $K_{n-3,n-3}$ does not contain any copies of $T_n^*$, we see that $e(G_0)\ge e(K_{n-3,n-3})=(n-3)^2$. Hence $e(G_0)=(n-3)^2$. If $\Delta(G_0)\ge n-2$, by Lemmas 4.1 and 4.2 we have $G_0\cong K_{n-1}\cup K_{n-5}$. Thus, $e(G_0)=e(K_{n-1}\cup K_{n-5})=\binom {n-1}2+\binom{n-5}2=n^2-7n+16$. Since $(n-3)^2=n^2-6n+9\ge n^2-7n+16$, we see that $ex(2n-6;T_n^*)=(n-3)^2$.
Now applying the above and Theorem 4.1 we deduce $$\align ex(p;T_n^*)&=(k-1)\binom{n-1}2+ex(2n-6;T_n^*)=(k-1)\binom{n-1}2+(n-3)^2 \\&=k\frac{(n-1)(n-2)}2+\frac{n^2-9n+16}2=\frac{(n-2)(p-2)}2+1.\endalign$$ This is the result. \end{proof} \pro{\noindent Lemma 4.5} Let $n,r\in\Bbb N$ with $n\ge 7$ and $r\le n-5$. Then there is an extremal graph $G\in Ex(n-1+r;\{K_{1,n-2},T_n^*\})$ such that $\Delta(G)=n-3$ and $G$ is connected. \endpro \begin{proof} Let $G\in Ex(n-1+r;\{K_{1,n-2},T_n^*\})$. Then $\Delta(G)\le n-3$. For $r=n-5$ we see that $K_{n-3,n-3}\in Ex(n-1+r;\{K_{1,n-2},T_n^*\})$. So the result is true. \par Now we assume $r\le n-6.$ Suppose $H\in Ex(n-3;K_{1,n-5-r})$ and $V(H)=\{v_1,\ldots,v_{n-3}\}.$ From Theorem 2.1 we know that $e(H)=ex(n-3;K_{1,n-5-r})=[\frac{(n-3)(n-6-r)}2]$. Now we construct a graph $G_0=(V(G_0),E(G_0))$ of order $n-1+r$ by defining $V(G_0)=\{u_0,\ldots,u_{r+1}\}\cup V(H)$ and $E(G_0)=\{u_iv_j:\ 0\le i\le r+1,1\le j\le n-3\}\cup E(H).$ It is easily seen that $d_{G_0}(v_i)\le n-4\ (1\le i\le n-3)$, $\Delta(G_0)=n-3$ and so $G_0$ does not contain any copies of $T_n^*$ and $K_{1,n-2}$. Thus, for any $G\in Ex(n-1+r;\{K_{1,n-2},T_n^*\})$, $$e(G)\ge e(G_0)=(n-3)(r+2)+\left[\frac{(n-3)(n-6-r)}2\right]=\left[\frac{(n-3)(n-2+r)}2\right].$$ If $\Delta(G)\le n-4$, we must have $G\in Ex(n-1+r;K_{1,n-3})$ and so $e(G)=[\frac{(n-4)(n-1+r)}2]$ by Theorem 2.1. As $G$ is an extremal graph and $$\align \left[\frac{(n-3)(n-2+r)}2\right]&\ge \frac{(n-3)(n-2+r)-1}2=\frac{(n-4)(n-1+r)+r+1}2 \\& >\frac{(n-4)(n-1+r)}2\ge \left[\frac{(n-4)(n-1+r)}2\right],\endalign$$ by the above we must have $\Delta(G)=n-3$. \par Now assume $\Delta(G)=n-3$. If $G$ is connected, the result is true. Suppose that $G$ is not connected. Let $G_1$ be a component of $G$ with $\Delta(G_1)=n-3$ and $|V(G_1)|=n-1+r-s.$ Then $1\le s\le r+1\le n-5$. As $G$ is an extremal graph, we must have $G=G_1\cup K_s$.
Thus, $$e(G)=e(G_1)+\binom s2\le\Big[\frac{(n-3)(n-1+r-s)}2\Big]+\frac{s(s-1)}2.$$ On the other hand, $e(G)\ge e(G_0)=[\frac{(n-3)(n-2+r)}2].$ Therefore, $$\Big[\frac{(n-3)(n-2+r)}2\Big]-\Big[\frac{(n-3)(n-1+r-s)}2\Big]-\frac{s(s-1)}2\le 0.$$ For $s\ge 2$ we have $(s-1)(n-3-s)=(s-2)(n-4-s)+n-5\ge n-5$ and so $$\align &\Big[\frac{(n-3)(n-2+r)}2\Big]-\Big[\frac{(n-3)(n-1+r-s)}2\Big]-\frac{s(s-1)}2 \\&\ge \Big[-\frac{s^2-(n-2)s+n-3}2\Big]=\Big[\frac{(s-1)(n-3-s)}2\Big]\ge \Big[\frac{n-5}2\Big]>0. \endalign $$ This contradicts the previous inequality. Thus $s=1$ and hence $e(G)=e(G_1)\le [\frac{(n-3)(n-2+r)}2]=e(G_0)$. By the previous argument, $e(G)\ge e(G_0)$. Therefore $e(G)=e(G_0)$. As $G_0$ is connected and $\Delta(G_0)=n-3$, we see that the result is true. \end{proof} \pro{\noindent Lemma 4.6} Let $n,r\in\Bbb N$ with $n\ge 11$ and $3\le r\le n-5$. Then there is an extremal graph $G\in Ex(n-1+r;T_n^*)$ such that $\Delta(G)=n-3$ and $G$ is connected. Moreover, $ex(n-1+r;T_n^*)=ex(n-1+r;\{K_{1,n-2},T_n^*\})$. \endpro \begin{proof} Let $G\in Ex(n-1+r;T_n^*)$. For $r=n-5$ let $G_0=K_{n-3,n-3}$. For $r\le n-6$ let $G_0$ be the graph constructed in the proof of Lemma 4.5. Then $\Delta(G_0)=n-3$ and $G_0$ does not contain any copies of $T_n^*$. Thus, $e(G)\ge e(G_0)$. For $r=n-5$ we have $e(G_0)=(n-3)^2$. For $r\le n-6$ we have $e(G_0)=[\frac{(n-3)(n-2+r)}2]$. Since $(n-3)^2\ge \frac{(n-3)(n-2+n-5)}2$, we always have $e(G)\ge [\frac{(n-3)(n-2+r)}2]$ for $r\le n-5$. \par If $\Delta(G)\ge n-2$, by Lemmas 4.1 and 4.2 we have $G\cong K_{n-1}\cup K_r$. Thus, $e(G)=\binom{n-1}2+\binom r2$. Since $3\le r\le n-5$ and $n\ge 11$ we see that $(r-2)(n-4-r)\ge 4$ and so $$\left[\frac{(n-3)(n-2+r)}2\right]-\binom{n-1}2-\binom r2=\left[\frac{(r-2)(n-4-r)-4}2\right]\ge 0.$$ Therefore $e(G)\le e(G_0)$ and so $e(G)=e(G_0)$. Since $\Delta(G_0)=n-3$ and $G_0$ is connected, the result holds in this case. \par Now we assume $\Delta(G)\le n-3$.
Then $G\in Ex(n-1+r;\{K_{1,n-2},T_n^*\})$. Applying Lemma 4.5 we see that the result is true. Thus the lemma is proved. \end{proof} \pro{\noindent Lemma 4.7} Let $n,r\in\Bbb N$ with $n\ge 7$ and $r\le n-5$. Then $$ex(n-1+r;\{K_{1,n-2},T_n^*\})=(n-3)(r+2)+ex(n-3;\{K_{1,n-4-r},T_{n-2-r}^*\}).$$ Moreover, for $r\ge \frac{n-7}2$ we have $$\align&ex(n-1+r;\{K_{1,n-2},T_n^*\}) \\&=(n-3)(r+2)+\max\big\{(n-5-r)^2,\big[\frac{(n-6-r)(n-3)}2\big]\big\}.\endalign$$\endpro \begin{proof} It is clear that $ex(2n-6;\{K_{1,n-2},T_n^*\})=e(K_{n-3,n-3})=(n-3)^2$. So the result is true for $r=n-5$. \par Now assume $r\le n-6$. By Lemma 4.5, we can choose a graph $G\in Ex(n-1+r;\{K_{1,n-2},T_n^*\})$ so that $\Delta(G)=n-3$ and $G$ is connected. Suppose $u_0\in V(G),d(u_0)=n-3,\Gamma(u_0)=\{v_1,\ldots,v_{n-3}\}$ and $V(G)=\{v_1,\ldots,v_{n-3},u_0,u_1,\ldots,u_{r+1}\}.$ Then $d(u_i,u_0)=2$ for $i=1,2,\ldots,r+1$ and $\{u_0,u_1,\ldots,u_{r+1}\}$ is an independent set. If $u_iv_j\notin E(G)$ for some $i\in\{1,2,\ldots,r+1\}$ and $j\in\{1,2,\ldots,n-3\}$, as $G$ is an extremal graph we see that $v_jv_k\in E(G)$ for some $k\in\{1,2,\ldots,n-3\}-\{j\}$. Set $G_1=G-v_jv_k+u_iv_j$. Then clearly $G_1$ does not contain $T_n^*$, $e(G)=e(G_1)$, $\Delta(G_1)=n-3$ and $G_1$ is connected. Repeating the above step we see that there is an extremal graph $G'\in Ex(n-1+r;\{K_{1,n-2},T_n^*\})$ such that $V(G')=\{v_1,\ldots,v_{n-3},u_0,u_1,\ldots,u_{r+1}\}$, $\Gamma(u_i)=\{v_1,\ldots,v_{n-3}\}$ for $i=0,1,\ldots,r+1$, $\Delta(G')=n-3$ and $G'$ is connected. It is easily seen that $$e(G')=(n-3)(r+2)+e(G'[v_1,\ldots,v_{n-3}]).$$ Set $H=G'[v_1,\ldots,v_{n-3}].$ Since $\Delta(G')=n-3$ and $G'\in Ex(n-1+r;\{K_{1,n-2},T_n^*\})$, we see that $\Delta(H)\le n-5-r$ and $H\in Ex(n-3;\{K_{1,n-4-r},T_{n-2-r}^*\})$. \par Now we assume $r\ge \frac{n-7}2$. If $\Delta(H)=n-5-r$, we may assume $d(v_1)=n-5-r$ and $\Gamma_H(v_1)=\{v_2,\ldots,v_{n-4-r}\}$.
Since $G'$ does not contain $T_n^*$ and $d_{G'}(v_1)=n-3$, we see that $\{v_{n-3-r},\ldots,v_{n-3}\}$ is an independent set. As $r\le n-6$, by the above we have $e(H)\le\sum_{i=2}^{n-4-r}d_H(v_i)\le (n-5-r)^2.$ Since $r\ge \frac{n-7}2$ we have $n-3\ge 2(n-5-r)$. Set $H'=K_{n-5-r,n-5-r}\cup (2r+7-n)K_1$. Then $|V(H')|=n-3$ and $e(H')=(n-5-r)^2$, $\Delta(H')=n-5-r$ and $H'$ does not contain $T_{n-2-r}^*$. As $G'$ is an extremal graph, by the above we must have $e(H)=e(H')=(n-5-r)^2$. If $\Delta(H)<n-5-r$, then clearly $H\in Ex(n-3;K_{1,n-5-r})$. Using Theorem 2.1 we see that $e(H)=ex(n-3;K_{1,n-5-r})=[\frac{(n-3)(n-6-r)}2]$. Therefore, $e(H)=\max\{(n-5-r)^2,[\frac{(n-3)(n-6-r)}2]\}$ and so $$\align &ex(n-1+r;\{K_{1,n-2},T_n^*\}) \\&=e(G)=e(G')=(n-3)(r+2) +\max\Big\{(n-5-r)^2,\Big[\frac{(n-3)(n-6-r)}2\Big]\Big\}.\endalign$$ This completes the proof. \end{proof} \pro{\noindent Theorem 4.4} Let $p,n\in\Bbb N$, $p\ge n\ge 11$, $r\in\{2,3,\ldots,n-6\}$ and $p\equiv r\mod{n-1}$. Let $m\in\{0,1,\ldots,r+1\}$ be given by $n-3\equiv m\mod{r+2}$. Then $$\aligned &ex(p;T_n^*)\\&=\cases \big[\frac{(n-2)(p-1)-2r-m-3}2\big] &\hbox{if $r\ge 4$ and $2\le m\le r-1$,}\\\frac{(n-2)(p-1)-m(r+2-m)-r-1}2 &\hbox{otherwise}.\endcases\endaligned$$\endpro \noindent{\it Proof.\/} Suppose $s=[\frac{n-3}{r+2}]$. Then $n-3=s(r+2)+m$. As $r+2<n-3$ we see that $s\in\Bbb N$. We claim that $$\aligned &ex(n-1+r;\{K_{1,n-2},T_n^*\}) \\&=\frac{(n-3-m)(n-1+r+m)}2+\max\Big\{m^2,\Big[\frac{(r+2+m)(m-1)}2\Big]\Big\}.\endaligned\tag 4.2$$ When $s=1$ we have $n-5-r=m<r+2$ and so $\frac{n-7}2<r<n-5$. Thus applying Lemma 4.7 we have $$\align &ex(n-1+r;\{K_{1,n-2},T_n^*\}) \\&=(n-3)(r+2)+\max\Big\{(n-5-r)^2,\Big[\frac{(n-6-r)(n-3)}2\Big]\Big\} \\&=\frac{(n-3-m)(n-1+r+m)}2+\max\Big\{m^2,\Big[\frac{(r+2+m)(m-1)}2\Big]\Big\}.\endalign $$ So (4.2) holds. \par From now on we assume $s\ge 2$. For $i=0,1,\ldots,s-2$ we have $n-i(r+2)-5\ge n-3-(s-2)(r+2)-2\ge 2(r+2)-2>r\ge 2$.
Thus, by Lemma 4.7 we have $$\align &ex(n-3+r+2-i(r+2);\{K_{1,n-i(r+2)-2},T_{n-i(r+2)}^*\}) \\&=(r+2)(n-3-i(r+2)) \\&\qquad+ex(n-3-i(r+2);\{K_{1,n-(i+1)(r+2)-2},T_{n-(i+1)(r+2)}^*\}).\endalign$$ Hence $$\align &ex(n-1+r;\{K_{1,n-2},T_n^*\})-ex(2(r+2)+m;\{K_{1,m+r+3},T_{m+r+5}^*\}) \\&= ex(n-3+r+2;\{K_{1,n-2},T_n^*\})\\&\qquad-ex(n-3-(s-2)(r+2); \{K_{1,n-(s-1)(r+2)-2},T_{n-(s-1)(r+2)}^*\}) \\&=\sum_{i=0}^{s-2}\Big(ex(n-3+r+2-i(r+2);\{K_{1,n-i(r+2)-2},T_{n-i(r+2)}^*\}) \\&\qquad\qquad-ex(n-3-i(r+2);\{K_{1,n-(i+1)(r+2)-2},T_{n-(i+1)(r+2)}^*\})\Big) \\&=\sum_{i=0}^{s-2}(r+2)(n-3-i(r+2)). \endalign$$ Set $n'=m+r+5$. As $r>m-2$ and $r\ge 2$, we have $\frac{n'-7}2<r\le n'-5$ and $n'\ge r+5\ge 7$. Thus, by Lemma 4.7 we have $$\align &ex(2(r+2)+m;\{K_{1,m+r+3},T_{m+r+5}^*\}) \\&=ex(n'-1+r;\{K_{1,n'-2},T_{n'}^*\}) \\&=(n'-3)(r+2)+\max\Big\{(n'-5-r)^2,\Big[\frac{(n'-6-r)(n'-3)}2\Big]\Big\} \\&=(r+2)(n-3-(s-1)(r+2))+\max\Big\{m^2,\Big[\frac{(m-1)(m+r+2)}2\Big]\Big\}. \endalign$$ Therefore, $$\align &ex(n-1+r;\{K_{1,n-2},T_n^*\}) \\&=\sum_{i=0}^{s-1}(r+2)(n-3-i(r+2)) +\max\Big\{m^2,\Big[\frac{(m-1)(m+r+2)}2\Big]\Big\}.\endalign$$ As $$\align &\sum_{i=0}^{s-1}(r+2)(n-3-i(r+2)) \\&=(r+2)\Big((n-3)s-(r+2)\frac{(s-1)s}2\Big)=\frac{s(r+2)}2\big(2(n-3)-(s-1)(r+2)\big) \\&=\frac{(n-3-m)(n-1+r+m)}2,\endalign$$ from the above we see that (4.2) is also true for $s\ge 2$. \par Observe that $\frac{(m+r+2)(m-1)}2=m^2+\frac{(r-m)(m-1)-2}2$. For $m=0,1,r,r+1,$ we have $(r-m)(m-1)-2\le 0.$ Now assume $2\le m\le r-1$. If $r=3$, then $m=2$ and so $(r-m)(m-1)-2=-1<0$.
If $r\ge 4$, then clearly $(r-m)(m-1)-2\ge 0.$ Thus, by (4.2) and the above we obtain $$\aligned &ex(n-1+r;\{K_{1,n-2},T_n^*\}) \\&=\cases \frac{(n-3-m)(n-1+r+m)}2+[\frac{(r+2+m)(m-1)}2]\\\qquad\qquad\qquad\qquad\qquad\hbox{if}\ r\ge 4 \ \hbox{and}\ 2\le m\le r-1,\\\frac{(n-3-m)(n-1+r+m)}2+m^2 \\\qquad\qquad\qquad\qquad\qquad\hbox{otherwise}.\endcases\endaligned\tag 4.3$$ \par For $r=2$ we have $m\le r+1\le 3$. Let $G\in Ex(n+1;T_n^*).$ If $\Delta(G)\ge n-2,$ by Lemmas 4.1 and 4.2 we have $G=K_{n-1}\cup K_2.$ Thus, $e(G)=\binom{n-1}2+1.$ If $\Delta(G)\le n-3,$ then $G\in Ex(n+1;\{K_{1,n-2},T_n^*\}).$ Thus, applying (4.3) we have $$\align &ex(n+1;T_n^*)\\&=\max\Big\{\frac{(n-1)(n-2)}2+1,ex(n+1;\{K_{1,n-2},T_n^*\})\Big\} \\&=\max\Big\{\frac{(n-1)(n-2)}2+1,\frac{(n-3-m)(n+1+m)}2+m^2\Big\} \\&=\frac{(n-3-m)(n+1+m)}2+m^2+\max\Big\{0,-\frac{(m-2)^2+n-11}2\Big\} \\&=\frac{(n-3-m)(n+1+m)}2+m^2. \endalign$$ For $r\ge 3$, by Lemma 4.6 we have $ex(n-1+r;T_n^*)=ex(n-1+r;\{K_{1,n-2},T_n^*\})$. Thus applying (4.3) we obtain $$\aligned &ex(n-1+r;T_n^*) \\&=\cases \frac{(n-3-m)(n-1+r+m)}2+[\frac{(r+2+m)(m-1)}2]\\\qquad\qquad\qquad\qquad\qquad\hbox{if}\ r\ge 4 \ \hbox{and}\ 2\le m\le r-1,\\\frac{(n-3-m)(n-1+r+m)}2+m^2\\\qquad\qquad\qquad\qquad\qquad\hbox{otherwise}.\endcases\endaligned\tag 4.4$$ By the previous argument, (4.4) is also true for $r=2$. \par Now suppose $p=k(n-1)+r$. Then $k\in\Bbb N$.
Combining (4.4) with Theorem 4.1 we deduce the following result: $$\aligned &ex(p;T_n^*)\\&=\cases (k-1)\binom{n-1}2+\frac{(n-3-m)(n-1+r+m)}2+ \Big[\frac{(r+2+m)(m-1)}2\Big]\\\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\hbox{if $r\ge 4$ and $2\le m\le r-1$,}\\(k-1)\binom{n-1}2+\frac{(n-3-m)(n-1+r+m)}2+m^2 \quad\hbox{otherwise}.\endcases\endaligned$$ To see the result, we note that $$\align &(k-1)\binom{n-1}2+\frac{(n-3-m)(n-1+r+m)}2+ \Big[\frac{(r+2+m)(m-1)}2\Big] \\&=\Big[\frac{(n-2)(p-1)-2r-m-3}2\Big] \endalign$$ and \allowdisplaybreaks\begin{gather*}\hskip+3.5cm(k-1)\binom{n-1}2+\frac{(n-3-m)(n-1+r+m)}2+m^2\hskip+3.5cm \\ \hskip+3.5cm=\frac{(n-2)(p-1)-m(r+2-m)-r-1}2.\hskip+3.5cm\square \end{gather*} \vskip+0.2cm \pro{\noindent Corollary 4.1} Suppose $p,n,r\in\Bbb N$, $p\ge n\ge 11$, $\frac{n-7}2<r\le n-6$ and $p\equiv r\mod{n-1}$. Then $$ex(p;T_n^*)=\cases \Big[\frac{(n-2)(p-2)-r}2\Big]&\hbox{if $\frac{n-4}2\le r\le n-7$,} \\\frac{(n-2)(p-3)}2+3&\hbox{if $r=n-6$,} \\\frac{(n-2)(2p-5)+7}4&\hbox{if $r=\frac{n-5}2$,} \\\frac{(n-2)(p-2)}2+1&\hbox{if $r=\frac{n-6}2$.} \endcases$$ \endpro \begin{proof} Clearly $r>\frac{n-7}2\ge 2$. Set $m=n-5-r$. Then $1\le m<r+2$ and $n-3\equiv m\mod{r+2}$. It is evident that $$2\le m\le r-1\iff \frac{n-4}2\le r\le n-7.$$ As $n\ge 11$ we see that $r\ge \frac{n-4}2$ implies $r\ge 4$. Now applying Theorem 4.4 we deduce that $$ex(p;T_n^*)=\cases \Big[\frac{(n-2)(p-1)-2r-(n-5-r)-3}2\Big]=\Big[\frac{(n-2)(p-2)-r}2\Big]\\\qquad\qquad\qquad\qquad\qquad\hbox{if $\frac{n-4}2\le r\le n-7$,} \\\frac{(n-2)(p-1)-(n-5-r)(r+2-(n-5-r))-r-1}2\\\qquad\qquad\qquad\qquad\qquad\hbox{if $r=n-6\ \text{or}\ [\frac{n-5}2]$.}\endcases$$ This yields the result. \end{proof} \pro{\noindent Corollary 4.2} Suppose $p,n\in\Bbb N$, $p\ge n\ge 11$, $2\nmid n$ and $p\equiv \frac{n-7}2\mod{n-1}$.
Then $$ex(p;T_n^*)=\frac{(n-2)(2p-3)+3}4.$$ \endpro \begin{proof} Taking $r=\frac{n-7}2$ and $m=0$ in Theorem 4.4 we derive the result. \end{proof} \pro{\noindent Corollary 4.3} Suppose $p,n\in\Bbb N$, $p\ge n\ge 11$ and $(n-1)\mid (p-2)$. Then $$ex(p;T_n^*)=\cases ((n-2)(p-1)-6)/2&\hbox{if $n\equiv 0\mod 2$,} \\((n-2)(p-1)-7)/2&\hbox{if $n\equiv 1\mod 4$,} \\((n-2)(p-1)-3)/2&\hbox{if $n\equiv 3\mod 4$.} \endcases$$ \endpro \begin{proof} Let $m\in\{0,1,2,3\}$ be given by $n-3\equiv m\mod 4$. Then clearly $m=1,2,3\ \hbox{or}\ 0$ according as $n\equiv 0,1,2\ \hbox{or}\ 3\mod 4$. Now putting $r=2$ in Theorem 4.4 and applying the above we obtain the result. \end{proof} \pro{\noindent Corollary 4.4} Suppose $p,n\in\Bbb N$, $p\ge n\ge 11$ and $(n-1)\mid (p-3)$. Then $$ex(p;T_n^*)=\cases (n-2)(p-1)/2-2&\hbox{if $n\equiv 3\mod 5$,} \\(n-2)(p-1)/2-4&\hbox{if $n\equiv 2,4\mod 5$,} \\(n-2)(p-1)/2-5&\hbox{if $n\equiv 0,1\mod 5$.} \endcases$$ \endpro \begin{proof} Let $m\in\{0,1,2,3,4\}$ be given by $n-3\equiv m\mod 5$. Then clearly $m=2,3,4,0\ \hbox{or}\ 1$ according as $n\equiv 0,1,2,3\ \hbox{or}\ 4\mod 5$. Now putting $r=3$ in Theorem 4.4 and applying the above we obtain the result.\end{proof} \par In a similar way, putting $r=4$ in Theorem 4.4 we deduce the following result. \pro{\noindent Corollary 4.5} Suppose $p,n\in\Bbb N$, $p\ge n\ge 11$ and $(n-1)\mid (p-4)$. Then $$ex(p;T_n^*)=\cases (n-2)(p-1)/2-7&\hbox{if $n\equiv 0\mod 6$,} \\(n-2)(p-1)/2-5&\hbox{if $n\equiv \pm 2\mod 6$,} \\((n-2)(p-1)-13)/2&\hbox{if $n\equiv \pm 1\mod 6$,} \\((n-2)(p-1)-5)/2&\hbox{if $n\equiv 3\mod 6$.} \endcases$$ \endpro \pro{\noindent Corollary 4.6} Suppose $p\in\Bbb N$, $p\ge 11$, $r\in\{0,1,\ldots,9\}$ and $p\equiv r\mod{10}$.
Then $$ex(p;T_{11}^*)=\cases (9p-r(10-r))/2&\hbox{if $r\in\{0,1,7,8,9\}$,} \\(9p-12)/2&\hbox{if $r=2$,} \\(9p-19)/2&\hbox{if $r=3$,} \\(9p-22)/2&\hbox{if $r=4$,} \\(9p-21)/2&\hbox{if $r=5$,} \\(9p-16)/2&\hbox{if $r=6$.}\endcases$$ \endpro \begin{proof} The result follows from Theorems 4.1-4.3 and Corollaries 4.1-4.2. \end{proof} \pro{\noindent Theorem 4.5} Let $p,n\in\Bbb N$ with $6\le n\le 10$ and $p\ge n$, and let $r\in\{0,1,\ldots,n-2\}$ be given by $p\equiv r\mod{n-1}$. \par $(\hbox{\rm i})$ If $n=6,7$, then $ex(p;T_n^*)=\frac{(n-2)p-r(n-1-r)}2$. \par $(\hbox{\rm ii})$ If $n=8,9$, then $$ex(p;T_n^*)=\cases \frac{(n-2)p-r(n-1-r)}2&\hbox{if $r\not=n-5$,} \\\frac{(n-2)(p-2)}2+1&\hbox{if $r=n-5$.}\endcases$$ \par $(\hbox{\rm iii})$ If $n=10$, then $$ex(p;T_n^*)=\cases 4p-r(9-r)/2&\hbox{if $r\not=4,5$,} \\4p-7&\hbox{if $r=5$,}\\4p-9&\hbox{if $r=4$.} \endcases$$ \endpro \begin{proof} For $r\in\{0,1,n-5,n-4,n-3,n-2\}$ the result follows from Theorems 4.1, 4.2 and 4.3. Now assume $2\le r\le n-6$. Then $r\ge 2>\frac{n-7}2$. By Lemma 4.7 we have $$\align &ex(n-1+r;\{K_{1,n-2},T_n^*\})\\&=(n-3)(r+2)+\max\Big\{(n-5-r)^2,\Big[\frac{(n-6-r)(n-3)}2\Big]\Big\}.\endalign$$ If $G\in Ex(n-1+r;T_n^*)$ and $\Delta(G)\ge n-2$, using Lemmas 4.1 and 4.2 we see that $G\cong K_{n-1}\cup K_r$. Thus, $$\align ex(n-1+r;T_n^*)&=\max\Big\{\binom{n-1}2+\binom r2,ex(n-1+r;\{K_{1,n-2},T_n^*\})\Big\} \\&=\max\Big\{\binom{n-1}2+\binom r2,(n-3)(r+2)\\&\qquad+\max\Big\{(n-5-r)^2,\Big[\frac{(n-6-r)(n-3)}2\Big]\Big\}\Big\}.\endalign$$ From this we deduce that $$\align &ex(7+2;T_8^*)=\binom 72+\binom 22,\quad ex(8+2;T_9^*)=\binom 82+\binom 22, \\&ex(8+3;T_9^*)=\binom 82+\binom 32,\quad ex(9+2;T_{10}^*)=\binom 92+\binom 22, \\&ex(9+3;T_{10}^*)=\binom 92+\binom 32,\quad ex(9+4;T_{10}^*)=43. \endalign$$ Suppose $p=k(n-1)+r$. Then $k\in\Bbb N$.
By Theorem 4.1, $$\align ex(p;T_n^*)&=(k-1)\binom{n-1}2+ex(n-1+r;T_n^*) \\&=\frac{(n-2)(p-r)}2+ex(n-1+r;T_n^*)-\binom{n-1}2.\endalign$$ Now combining all the above we deduce the result. \end{proof} \end{document}
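The closed forms in Theorems 4.1--4.5 rest on elementary identities in $k$, $n$, $r$ and $m$. The following short script (an illustrative sanity check appended to the paper, not part of the original argument; the function names are ours) confirms two of these identities over a range of parameters: the identity $k\binom{n-1}2+\binom r2=\frac{(n-2)p-r(n-1-r)}2$ with $p=k(n-1)+r$, and the final identity in the proof of Theorem 4.4.

```python
from math import comb

def ex_components(n, k, r):
    """Edge count of kK_{n-1} union K_r, i.e. k*C(n-1,2) + C(r,2)."""
    return k * comb(n - 1, 2) + comb(r, 2)

def ex_closed_form(n, k, r):
    """Closed form ((n-2)p - r(n-1-r))/2 with p = k(n-1) + r (always an integer)."""
    p = k * (n - 1) + r
    return ((n - 2) * p - r * (n - 1 - r)) // 2

def thm44_lhs(n, k, r):
    """Twice the left side of Theorem 4.4's 'otherwise' identity, m = (n-3) mod (r+2)."""
    m = (n - 3) % (r + 2)
    return 2 * (k - 1) * comb(n - 1, 2) + (n - 3 - m) * (n - 1 + r + m) + 2 * m * m

def thm44_rhs(n, k, r):
    """Twice the right side: (n-2)(p-1) - m(r+2-m) - r - 1 with p = k(n-1) + r."""
    m = (n - 3) % (r + 2)
    p = k * (n - 1) + r
    return (n - 2) * (p - 1) - m * (r + 2 - m) - r - 1

# Both identities hold for every admissible parameter choice tested.
for n in range(6, 30):
    for k in range(1, 8):
        for r in range(n - 1):
            assert ex_components(n, k, r) == ex_closed_form(n, k, r)

for n in range(11, 40):
    for k in range(1, 6):
        for r in range(2, n - 5):
            assert thm44_lhs(n, k, r) == thm44_rhs(n, k, r)
```

For instance, $n=10$, $k=3$, $r=4$ gives $p=31$ and both expressions equal $114$.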
\begin{document} \title {Spectral property of Cantor measures with consecutive digits} \author{Xin-Rong Dai} \address{School of Mathematics and Computational Science, Sun Yat-Sen University, Guangzhou, 510275, P. R. China} \email{daixr@@mail.sysu.edu.cn} \author{Xing-Gang He} \address{College of Mathematics and Statistics, Central China Normal University, Wuhan 430079, P. R. China} \email{xingganghe@@163.com} \author{Chun-Kit Lai} \address{Department of Mathematics and Statistics, McMaster University, Hamilton, Ontario, L8S 4K1, Canada} \email{cklai@@math.mcmaster.ca} \thanks{The research is supported in part by the HKRGC Grant and the Focused Investment Scheme of CUHK; the second author is also supported by the National Natural Science Foundation of China 11271148.} \keywords{Cantor measures; spectral measures; spectra; trees.} \subjclass{Primary 28A80; Secondary 42C15.} \maketitle \begin{abstract} We consider equally-weighted Cantor measures $\mu_{q,b}$ arising from iterated function systems of the form $\{b^{-1}(x+i)\}$, $i=0,1,\cdots,q-1$, where $q<b$. We classify the pairs $(q,b)$ for which there are infinitely many mutually orthogonal exponentials in $L^2(\mu_{q,b})$. In particular, if $q$ divides $b$, the measures admit a complete orthogonal system of exponentials and hence are spectral measures. Improving the construction in \cite{[DHS]}, we characterize all the maximal orthogonal sets $\Lambda$ when $q$ divides $b$ via a maximal mapping on the $q-$adic tree, in which all elements of $\Lambda$ are represented uniquely by finite $b-$adic expansions, and we separate the maximal orthogonal sets into two types: regular and irregular sets. For a regular maximal orthogonal set, we show that its completeness in $L^2(\mu_{q,b})$ is crucially determined by a certain growth rate of the non-zero digits in the tail of the $b-$adic expansions of its elements. Furthermore, we exhibit complete orthogonal systems of exponentials with zero Beurling dimension.
These examples show that the technical condition in Theorem 3.5 of \cite{[DHSW]} cannot be removed. For an irregular maximal orthogonal set, we show that under a certain condition its completeness is equivalent to that of the corresponding regularized mapping. \end{abstract} \tableofcontents \section{Introduction} Let $\mu$ be a compactly supported Borel probability measure on ${\Bbb R}^d$. We say that $\mu$ is a {\it spectral measure} if there exists a countable set $\Lambda\subset {\Bbb R}^d$ so that $E(\Lambda): = \{e^{2\pi i \langle\lambda,x\rangle}: \lambda\in\Lambda\}$ is an orthonormal basis for $L^2(\mu)$. In this case, $\Lambda$ is called a {\it spectrum} of $\mu$. If $\chi_{\Omega}dx$ is a spectral measure, then we say that $\Omega$ is a {\it spectral set}. The study of spectral measures was initiated by B. Fuglede in 1974 \cite{[Fu]}, when he considered the functional analytic problem of extending certain commuting partial differential operators to a dense subspace of $L^2$ functions. In his first attempt, Fuglede proved that any fundamental domain of a discrete lattice is a spectral set, with the dual lattice as a spectrum. On the other hand, he also proved that triangles and circles in ${\Bbb R}^2$ are not spectral sets, while some sets (e.g. $[0,1]\cup[2,3]$) that are not fundamental domains can still be spectral. From these examples and the relation between Fourier series and translation operators, he proposed a natural conjecture on spectral sets: {\it $\Omega\subset{\Bbb R}^d$ is a spectral set if and only if $\Omega$ is a translational tile.} This conjecture baffled experts for 30 years, until in 2004 Tao \cite{[T]} gave the first counterexamples in ${\Bbb R}^d$, $d\geq5$. The examples were later modified to show that the conjecture is false in both directions in ${\Bbb R}^d$, $d\geq3$ \cite{[KM1]}, \cite{[KM2]}. It remains open in dimensions 1 and 2.
Despite the counterexamples, the exact relationship between spectral measures and tiling remains mysterious. The study of spectral measures is just as interesting when we consider fractal measures. Jorgensen and Pedersen \cite{[JP]} showed that the standard Cantor measures are spectral measures when the contraction ratio is $\frac{1}{2n}$, while there are at most two orthogonal exponentials when the contraction ratio is $\frac{1}{2n+1}$. Following this discovery, more spectral self-similar/self-affine measures were found (\cite{[LaW]}, \cite{[DJ]} et al.). The construction of these spectral self-similar measures is based on the existence of {\it compatible pairs (known also as Hadamard triples)}. It is still unknown whether all such spectral measures are obtained from compatible pairs. Once an exponential basis is available, the convergence of the associated Fourier series was studied by Strichartz; it is surprising that the ordinary Fourier series of continuous functions converge uniformly for the standard Cantor measures \cite{[Str]}. By now there is a considerable literature studying spectral measures and other generalized types of Fourier expansions, such as Fourier frames and Riesz bases (\cite{[DHJ]}, \cite{[DHSW]}, \cite{[DL]}, \cite{[HLL]}, \cite{[IP]}, \cite{[JKS]}, \cite{[La]}, \cite{[LaW]}, \cite{[Lai]}, \cite{[Li1]}, \cite{[Li2]}, and the references therein). In \cite{[HuL]}, Hu and Lau made a start on studying the spectral properties of Bernoulli convolutions, the simplest class of self-similar measures. They classified the contraction ratios that admit infinitely many orthogonal exponentials. It was recently shown by Dai that the only spectral Bernoulli convolutions are those of contraction ratio $\frac{1}{2n}$ \cite{[D]}. In this paper, we study another general class of Cantor measures on ${\Bbb R}^1$. Let $b>2$ be an integer and let $q<b$ be another positive integer. We consider the iterated function system (IFS) with maps $$ f_i(x) = b^{-1}(x+i), \ i = 0,1,...,q-1.
$$ The IFS gives rise to a natural {\it self-similar measure} $\mu = \mu_{q,b}$ satisfying \begin{equation}\label{eq1.1} \mu (E) = \sum_{i=0}^{q-1}\frac{1}{q}\mu( f_i^{-1}(E)) \end{equation} for all Borel sets $E$. Note that we only need to consider equal weights, since non-equally weighted self-similar measures of this kind cannot have any spectrum by Theorem 1.5 in \cite{[DL]}. It is also clear that if $q=2$, then $\mu$ becomes the standard Cantor measure with contraction ratio $b^{-1}$. For this class of self-similar measures, we find, surprisingly, that the spectral properties depend heavily on the number-theoretic relationship between $q$ and $b$. Our first result shows that {\it $\mu = \mu_{q,b}$ has infinitely many orthogonal exponentials if and only if $q$ and $b$ are not relatively prime. If, moreover, $q$ divides $b$, the resulting measure is a spectral measure} (Theorem \ref{th1.1}). However, when $q$ does not divide $b$ and they are not relatively prime (e.g. $q=4, \ b=6$), a variety of cases may occur and we are not sure whether there are spectral measures in these classes (see Remark \ref{rem2.1}). We then focus on the case $b = qr$, in which we aim at giving a detailed classification of the spectra. The classification of the spectra of a given spectral measure was first studied by Lagarias, Reeds and Wang \cite{[LRW]}. They considered the spectra of $L^2([0,1)^d)$ (more generally, of fundamental domains of lattices) and showed that the spectra of $L^2([0,1)^d)$ are exactly the tiling sets of $[0,1)^d$. If $d=1$, the way of tiling $[0,1)$ is rather rigid, and it is easy to see that the only spectra (respectively, tiling sets) are the translates of the integer lattice ${\Bbb Z}$. Such rigidity breaks down even on ${\Bbb R}^1$ if we turn to fractal measures.
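Iterating the refinement equation \eqref{eq1.1} yields the standard infinite-product formula $\hat{\mu}_{q,b}(\xi)=\prod_{k=1}^{\infty}\frac{1}{q}\sum_{d=0}^{q-1}e^{2\pi i d\xi/b^{k}}$, whose zero set governs the orthogonality of exponentials in $L^2(\mu_{q,b})$. A truncated version of this product (an illustrative numerical sketch, not part of the paper; `mu_hat` is our own helper name) makes the phenomenon easy to probe; for instance, with $q=2$, $b=4$ the exponentials $e_0$ and $e_2$ are orthogonal:

```python
import cmath

def mu_hat(xi, q, b, depth=40):
    """Truncated infinite product for the Fourier transform of mu_{q,b}:
    prod_{k>=1} (1/q) * sum_{d=0}^{q-1} exp(2*pi*i*d*xi / b**k)."""
    val = 1.0 + 0j
    for k in range(1, depth + 1):
        val *= sum(cmath.exp(2j * cmath.pi * d * xi / b ** k) for d in range(q)) / q
    return val

# q=2, b=4: the first factor vanishes at xi=2, so e_0 and e_2 are
# orthogonal in L^2(mu_{2,4}); at xi=1 no factor vanishes.
assert abs(mu_hat(2, 2, 4)) < 1e-8
assert abs(mu_hat(1, 2, 4)) > 0.1
```

Since the truncated factors converge rapidly (each one tends to $1$), a modest `depth` already reflects the location of the zeros accurately.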
The first attempt at classifying the spectra was made by Dutkay, Han and Sun \cite{[DHS]}, who decomposed the maximal orthogonal sets of the one-fourth Cantor measure using $4$-adic expansions with digits $\{0,1,2,3\}$ and put them into a labeling of the binary tree. The maximal orthogonal sets are then obtained by reading all the infinite paths with digits ending eventually in $0$ (for positive elements) or $3$ (for negative elements). They also gave some sufficient conditions on the digits for a maximal orthogonal set to be a spectrum. Nonetheless, the condition is not easy to verify. Turning to our self-similar measures with consecutive digits, of which the one-fourth Cantor measure is a special case, we will classify all the maximal orthogonal sets using mappings on the standard $q-$adic tree called {\it maximal mappings} (Theorem \ref{th1.6}). This construction improves the tree labeling method in \cite{[DHS]} in two ways. \begin{enumerate} \item We choose the digit system to be $\{-1,0,1,\cdots, b-2\}$ instead of $\{0,1,\cdots,b-1\}$. By doing so, all integers (both positive and negative) can be expanded into terminating $b-$adic expansions (Lemma \ref{lem1.1}). \item We impose restrictions on the labeling positions on the tree so that, together with (1), all the elements in a maximal orthogonal set can be extracted by reading certain specific paths in the tree. These paths are collected in a countable set $\Gamma_q$ defined in (\ref{gamma}). \end{enumerate} Having such a new tree structure for a maximal orthogonal set, we discover that there are two possibilities for the maximal sets, depending on whether all the paths in $\Gamma_q$ correspond to elements in the maximal orthogonal set (i.e. the values assigned are eventually 0). If all the paths in $\Gamma_q$ behave in this way, we call such maximal orthogonal sets {\it regular}.
It turns out that regular sets cover most of the interesting cases, and we can give a regular set a natural ordering $\{\lambda_n: n=0,1,2,\cdots\}$. If the standard $q-$adic expansion of $n$ has length $k$, we define $N_n^{\ast}$ to be the number of non-zero digits, after position $k$, in the $b-$adic expansion of $\lambda_n$ using $\{-1,0,\cdots,b-2\}$. The quantity $N_n^{\ast}$ is the crucial factor in determining whether the set is a spectrum. We show that {\it if $N_n^{\ast}$ grows slowly enough, or is even uniformly bounded, the set is a spectrum, while if $N_n^{\ast}$ grows too fast, say at a polynomial rate, then the maximal orthogonal set is not a spectrum} (Theorem \ref{th1.7}). In \cite{[DHSW]}, Dutkay {\it et al.} generalized the classical results of Landau \cite{[Lan]} about Beurling density for Fourier frames to fractal settings. They defined the concept of {\it Beurling dimension} for a discrete set and showed that every Bessel sequence for an IFS of similitudes satisfying the no-overlap condition must have Beurling dimension not greater than the Hausdorff dimension of the attractor. Under a technical assumption on the frame spectra, they showed that the above two dimensions coincide, and they conjectured that the assumption can be removed. However, since $N_n^{\ast}$ counts the number of non-zero digits only, we can freely add $qb^m$, for any $m>0$, to the tree of the canonical spectrum. These additional terms push the $\lambda_n$'s as far away from each other as desired, and we therefore show that {\it there exist spectra of zero Beurling dimension} (Theorem \ref{th1.9}). As for the organization of the paper, we present our set-up and main results in Section 2. In Section 3, we discuss the maximal orthogonal sets of $\mu_{q,b}$ and classify all maximal orthogonal sets via maximal mappings on the $q-$adic tree when $q$ divides $b$. In Section 4, we discuss regular spectra and prove the growth rate criteria.
Moreover, examples of spectra with zero Beurling dimension are given. In Section 5, we study irregular spectra. \section{Setup and main results} Let $\Lambda$ be a countable set in ${\Bbb R}$ and denote $E(\Lambda)=\{e_\lambda: \lambda\in\Lambda\}$ where $e_\lambda(x)=e^{2\pi i\lambda x}$. We say that $\Lambda$ is a {\it maximal orthogonal set} ({\it spectrum}) if $E(\Lambda)$ is a maximal orthogonal set (an orthonormal basis) for $L^2(\mu)$. Here, $E(\Lambda)$ being a maximal orthogonal set of exponentials means that it is a mutually orthogonal set in $L^2(\mu)$ such that for any $\alpha\not\in\Lambda$, $e_{\alpha}$ is not orthogonal to some $e_{\lambda}$, $\lambda\in\Lambda$. If $L^2(\mu)$ admits a spectrum, then $\mu$ is called a {\it spectral measure}. Given a measure $\mu$, its Fourier transform is defined to be $$ \widehat{\mu}(\xi) = \int e^{2\pi i \xi x}d\mu(x). $$ It is easy to see that $E(\Lambda)$ is an orthogonal set if and only if $$ (\Lambda-\Lambda)\setminus\{0\}\subset {\mathcal Z}(\widehat{\mu}) := \{\xi\in{\Bbb R}: \widehat{\mu}(\xi)=0\}. $$ We call such a $\Lambda$ a {\it bi-zero set} of $\mu$. For $\mu = \mu_{q,b}$, we can calculate its Fourier transform: \begin{equation}\label{eq1.2} \widehat{\mu}(\xi) = \prod_{j=1}^{\infty}\left[\frac{1}{q}(1+e^{2\pi i b^{-j}\xi}+\cdots+e^{2\pi i b^{-j}(q-1)\xi})\right]. \end{equation} Denote \begin{equation}\label{eq1.2+} m(\xi) = \frac{1}{q}\left(1+e^{2\pi i \xi}+\cdots+ e^{2\pi i (q-1)\xi}\right) \end{equation} and thus $|m(\xi)|= |\frac{\sin q\pi \xi}{q\sin \pi \xi}|$. The zero set of $m$ is $$ {\mathcal Z}(m) = \left\{\frac{a}{q}: q\nmid a, a\in{\mathbb Z}\right\}, $$ where $q\nmid a$ means $q$ does not divide $a$.
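The description of ${\mathcal Z}(m)$ can be checked directly: $m(a/q)$ is a full geometric sum of $q$-th roots of unity, which vanishes exactly when $q\nmid a$. A minimal numerical sketch (Python; ours, not part of the formal development):

```python
import cmath

def m(xi, q):
    """Mask function m(xi) = (1/q) * sum_{k=0}^{q-1} e^{2 pi i k xi}."""
    return sum(cmath.exp(2j * cmath.pi * k * xi) for k in range(q)) / q

q = 4
for a in range(-2 * q, 2 * q + 1):
    if a % q == 0:
        # at integers (q divides a), every summand equals 1, so m = 1
        assert abs(m(a / q, q) - 1.0) < 1e-12
    else:
        # a/q with q not dividing a lies in the zero set Z(m)
        assert abs(m(a / q, q)) < 1e-12
```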
We can then write $ \widehat{\mu}(\xi) = \prod_{j=1}^{\infty}m(b^{-j}\xi)$, so that the zero set of $\widehat{\mu}$ is given by \begin{equation}\label{eq1.3} {\mathcal Z}(\widehat{\mu}) = \left\{\frac{b^{n}}{q} a: n\geq 1, \ q\nmid a \right\}=r\{b^na: n\ge 0, q\nmid a\}, \end{equation} where $r=b/q$. We have the following theorem classifying which $\mu_{q,b}$ possess infinitely many orthogonal exponentials. It is also the starting point of our paper. \begin{theorem}\label{th1.1} $\mu = \mu_{q,b}$ has infinitely many orthogonal exponentials if and only if the greatest common divisor of $q$ and $b$ is greater than $1$. If $q$ divides $b$, then $\mu_{q,b}$ is a spectral measure. \end{theorem} We wish to give a classification of the spectra and the maximal orthogonal sets whenever they exist. To do this, it is convenient to introduce some multi-index notation: Denote $\Sigma_q = \{0,\cdots, q-1\}$, $\Sigma_q^0=\{{\vartheta}\}$ and $\Sigma_q^n= \underbrace{\Sigma_q\times\cdots\times \Sigma_q}_n$. Let $\Sigma_q^{\ast} = \bigcup_{n=0}^{\infty}\Sigma_q^n$ be the set of all finite words and let $\Sigma_q^{\infty} =\Sigma_q\times \Sigma_q\times\cdots $ be the set of all infinite words. Given $\sigma=\sigma_1\sigma_2\cdots \in \Sigma_q^{\infty}\cup\Sigma_q^{\ast}$, we define ${\vartheta}\sigma=\sigma$ and $\sigma|_{k} =\sigma_1\cdots\sigma_k$ for $k\ge 0$, where $\sigma|_0={\vartheta}$ for any $\sigma$, and adopt the notation ${0}^{\infty} = 000\cdots$ and $0^k =\underbrace{0\cdots0}_k$; moreover, $\sigma\sigma'$ denotes the concatenation of $\sigma $ and $\sigma'$. We start with a definition. \begin{Def} Let $\Sigma_q^*$ be the set of all finite words defined as above. We call it a {\it $q-$adic tree} if, naturally, the root is ${\vartheta}$, the $k$-th level nodes are $\Sigma_q^k$ for $k\ge 1$, and the offspring of $\sigma\in \Sigma_q^*$ are $\sigma i$ for $i=0, 1,\ldots, q-1$. \end{Def} Let $\tau$ be a map from $\Sigma_q^*$ to the real numbers.
Then the image of $\tau$ defines a {\it $q$-adic tree labeling}. Define $\Gamma_q$ by \begin{equation}\label{gamma} \Gamma_q: = \{\sigma{0}^{\infty}: \ \sigma =\sigma_1\cdots\sigma_k\in\Sigma_q^*, \ \sigma_k\neq 0\}. \end{equation} $\Gamma_q$ will play a special role in our construction. If for some word $\sigma= \sigma'0^{\infty}\in\Gamma_q$ we have $\tau(\sigma|_k)=0$ for all sufficiently large $k$, we say that $\tau$ is {\it regular on $\sigma$}; otherwise it is {\it irregular on $\sigma$}. Let $b$ be another integer. If $\tau$ is regular on some $\sigma\in \Gamma_q$, we define the projection $\Pi^\tau_b$ from $\Gamma_q$ to ${\mathbb R}$ by \begin{eqnarray}\label{formally} \Pi^\tau_b(\sigma)=\sum_{k=1}^\infty \tau(\sigma|_k)b^{k-1}. \end{eqnarray} The above sum is finite since $\tau(\sigma|_k) =0$ for sufficiently large $k$. If $\tau$ is regular on every $\sigma$ in $\Gamma_q$, we say that $\tau$ is a {\it regular mapping}. \begin{Example}\label{example1.1} Suppose $b=q$ and let ${\mathcal C}=\{c_0=0, c_1, \ldots, c_{b-1}\}$ be a residue system mod $b$ with $c_i\equiv i$ (mod $b$). Define $\tau({\vartheta})=0$ and $\tau(\sigma)=c_{\sigma_k}$ if $\sigma=\sigma_1\cdots \sigma_k\in\Sigma_q^k\subset\Sigma_q^{\ast}$. Then it is easy to see that $\tau$ is regular on every $\sigma\in\Gamma_q$ and hence is a regular mapping. Moreover, \begin{equation}\label{first} \Pi^\tau_b(\Gamma_b)\subseteq {\mathbb Z}. \end{equation} When ${\mathcal C}=\{0, 1, \ldots, b-1\}$, the mapping $\Pi^\tau_b$ is a bijection from $\Gamma_b$ onto ${\mathbb N}\cup\{0\}$. \end{Example} In \cite{[DHS]} (putting their setup in our language), Dutkay, Han and Sun classified the maximal orthogonal sets of the standard one-fourth Cantor measure via mappings $\tau$ from $\Sigma_2^{\ast}$ to $\{0,1,2,3\}$. However, some maximal orthogonal sets may contain negative elements, which cannot be expressed finitely in $4$-adic expansions using digits $\{0,1,2,3\}$.
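The signed digit system introduced next (see Lemma \ref{lem1.1}) fixes this: every integer, positive or negative, has a terminating expansion obtained by repeatedly splitting $n=\ell b+c$ with $c\in\{-1,0,\ldots,b-2\}$. A Python sketch of this expansion (ours, for illustration only):

```python
def signed_expand(n, b):
    """Finite b-adic expansion of an integer n over the signed digit set
    {-1, 0, ..., b-2}; returns the digits, least significant first."""
    digits = []
    while n != 0:
        c = n % b                # representative in {0, ..., b-1}
        if c == b - 1:
            c = -1               # replace b-1 by the signed digit -1
        digits.append(c)
        n = (n - c) // b         # |n| strictly decreases, so this terminates
    return digits

b = 4
for n in range(-100, 101):
    d = signed_expand(n, b)
    assert all(x in {-1, 0, 1, 2} for x in d)             # digits lie in C
    assert sum(x * b ** k for k, x in enumerate(d)) == n  # expansion recovers n
```

With the unsigned digits $\{0,\ldots,b-1\}$ the same loop never terminates for negative $n$, which is exactly the difficulty with the labeling of \cite{[DHS]} described above.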
In our classification, we will choose the digit system to be ${{\mathcal C}} = \{-1,0,1,\cdots,b-2\}$, in which every integer has a unique finite $b-$adic expansion. We have the following simple but important lemma. \begin{Lem}\label{lem1.1} Let ${\mathcal C}=\{-1, 0, 1, \ldots, b-2\}$ with integer $b\geq3$ and let $\tau$ be the map defined in Example \ref{example1.1}. Then $\Pi^\tau_b$ is a bijection between $\Gamma_b$ and ${\Bbb Z}$. \end{Lem} \noindent {\bf Proof.~~} For any $n\in {\mathbb Z}$ with $|n|<b$, it is easy to see that there exists a unique $\sigma\in\Gamma_b$ such that $n=\Pi^\tau_b(\sigma)$. For example, if $n=b-1$, then $n=\Pi^\tau_b(\sigma_1\sigma_2)$ where $\sigma_1=b-1$ and $\sigma_2=1$ (note that $c_{b-1}=-1$ and $n=-1+1\cdot b$). When $|n|\ge b$, then $n$ can be decomposed uniquely as $n=\ell b+c$ where $c\in{\mathcal C}$. We note that $|\ell|=|\frac{n-c}b|\leq\frac{|n|+b-2}b<|n|$. If $|\ell|<b$, we are done. Otherwise, we further decompose $\ell$ in the same way, and after a finite number of steps we reach $|\ell|<b$. The expansion is unique since each decomposition is unique. $\square$ We now define a $q-$adic tree labeling which corresponds to a maximal orthogonal set for $\mu_{q,b}$ when $b=qr$. We observe that for $b=qr$, we can decompose ${\mathcal C} = \{-1,0,\cdots,b-2\}$ into $q$ disjoint classes according to their remainders modulo $q$: ${\mathcal C} = \bigcup_{i=0}^{q-1}{\mathcal C}_i$ where $$ {\mathcal C}_i = (i+q{\Bbb Z})\cap {\mathcal C}.
$$ \begin{Def}\label{def1.5} {\normalshape Let $\Sigma_q^*$ be a $q-$adic tree and $b = qr$. We say that $\tau$ is a {\it maximal mapping} if it is a map $\tau = \tau_{q,b}: \Sigma_q^*\rightarrow \{-1,0,\ldots,b-2\}$ that satisfies } {\normalshape (i) $\tau({\vartheta}) =\tau(0^n) =0$ for all $n\geq1$;} {\normalshape (ii) for all $k\geq1$}, $\tau (\sigma_1\cdots \sigma_k) \in {\mathcal C}_{\sigma_k};$ {\normalshape (iii) for any word $\sigma\in \Sigma_q^{\ast}$, there exists $\sigma'$ such that $\tau$ is regular on $\sigma\sigma'0^{\infty}\in\Gamma_q$.} \end{Def} We call a tree mapping a {\it regular mapping} if it satisfies (i) and (ii) above and is regular on every word in $\Gamma_q$. Clearly, regular mappings are maximal. Given a maximal mapping $\tau$, the following set will be the main object of study in this paper: \begin{equation}\label{eq1.4} \Lambda(\tau):=\{\Pi^\tau_b(\sigma): \sigma\in\Gamma_q ,\ \mbox{ $\tau$ is regular on $\sigma$}\}. \end{equation} From now on, we will assume that $b=qr$, ${\mathcal C}=\{-1, 0, 1,\ldots, b-2\}$ and $0\in \Lambda$. Our first main result is as follows; it also explains why $\tau$ is called a maximal mapping. \begin{theorem}\label{th1.6} $\Lambda$ is a maximal orthogonal set of $L^2(\mu_{q, b})$ if and only if there exists a maximal mapping $\tau$ such that $\Lambda = r \Lambda(\tau)$, where $b=qr$. \end{theorem} In the proof, (i) in Definition \ref{def1.5} ensures $0\in\Lambda$, (ii) ensures mutual orthogonality, and (iii) ensures maximality of the orthogonal set. If $\Lambda$ is a spectrum of $L^2(\mu)$, we call the associated maximal mapping $\tau$ a {\it spectral mapping}. We will restrict our attention to regular mappings (i.e. $\tau$ is regular on every $\sigma\in\Gamma_q$). In this case, $\Lambda(\tau) = \{\Pi^\tau_b(\sigma): \sigma\in\Gamma_q \}$. The advantage of considering regular mappings is that we can give a natural ordering of the maximal orthogonal set $\Lambda(\tau)$.
The ordering goes as follows: Given any $n\in{\Bbb N}$, we can expand it into its unique finite $q-$adic expansion \begin{equation}\label{eq1.5} n= \sum_{j=1}^{k}\sigma_jq^{j-1}, \ \sigma_j\in\{0,\cdots,q-1\},\qquad \sigma_k\ne 0. \end{equation} In this way $n$ corresponds uniquely to the word $\sigma=\sigma_1\cdots\sigma_k$, which is called the {\it $q$-adic expansion of $n$}. For a regular mapping $\tau$, there is a natural ordering of the maximal orthogonal set $\Lambda(\tau)$: $\lambda_{0}=0$ and \begin{equation}\label{eq1.51} \lambda_n =\Pi^\tau_b(\sigma 0^\infty)=\sum_{j=1}^{k}\tau(\sigma|_j)b^{j-1}+\sum_{i=k+1}^{N_n}\tau(\sigma 0^{i-k})b^{i-1} \end{equation} where $\sigma= \sigma_1\cdots\sigma_k$ is the $q-$adic expansion of $n$ in (\ref{eq1.5}), $\tau(\sigma 0^{N_n-k})\ne 0$ and $\tau(\sigma 0^{m})= 0$ for all $m>N_n-k$. Under this ordering, we have $\Lambda(\tau)=\{\lambda_n\}_{n=0}^\infty$. Let $$ N^{\ast}_n = \#\left\{\ell\ge 1: \ \tau(\sigma0^\ell)\neq0 \right\}, \qquad \mbox{$\sigma = \sigma_1\cdots\sigma_k$ the $q$-adic expansion of $n$}, $$ where we denote by $\#A$ the cardinality of the set $A$. The growth rate of $N^{\ast}_n$ is crucial in determining whether $r\Lambda(\tau)$ is a spectrum of $L^2(\mu)$. To describe the growth rate, we let ${\mathcal N}^*_{m,n} = \max\{N^{\ast}_k: q^m\leq k< q^{n}\}$, ${\cal L}^*_n = \min\{N^{\ast}_k: q^n\leq k< q^{n+1}\}$ and ${\cal M}_n=\max \{N_k: 1\le k<q^n\}$. We have the following two criteria depending on the growth rate of $N^{\ast}_n$. \begin{theorem}\label{th1.7} Let $\Lambda= r \Lambda(\tau)$ for a regular mapping $\tau$. Then we can find $0<c_1<c_2<1$ such that the following hold. \noindent{\normalshape(i)} If there exists a strictly increasing sequence $\alpha_n$ satisfying \begin{equation}\label{eq3.4} \alpha_{n+1}-{\cal M}_{\alpha_n}\rightarrow\infty, \ \mbox{and} \ \sum_{n=1}^{\infty}c_{1 }^{{\mathcal N}^{\ast}_{\alpha_n,\alpha_{n+1}}} = \infty, \end{equation} then $\Lambda$ is a spectrum of $L^2(\mu)$.
\noindent{\normalshape(ii)} If $\sum_{n=1}^{\infty}c_{2}^{{\cal L}_n^{\ast}}<\infty$, then $\Lambda$ is not a spectrum of $L^2(\mu)$. \end{theorem} The following is the most important example for Theorem \ref{th1.7}. \begin{Example}\label{rem1.8}{\normalshape For a regular mapping $\tau$, if $M: = \sup_{n}\{N^{\ast}_n\}$ is finite, then $\Lambda$ must be a spectrum}. \end{Example} \noindent {\bf Proof.~~} {\normalshape Note that ${\mathcal N}^{\ast}_{m,n}\leq M$, and therefore any strictly increasing sequence $\alpha_n$ satisfies the second condition in (\ref{eq3.4}). Let $\alpha_1=1$ and $\alpha_{n+1} = n+{\cal M}_{\alpha_n}$ for $n\ge 1$. Then the first condition holds, and hence $\Lambda$ must be a spectrum by Theorem \ref{th1.7}.} $\square$ We will also see that when $N^{\ast}_n$ exhibits some slow growth, $\Lambda$ can still be a spectrum (see Example \ref{example3.1}). The exact growth rate separating spectra from non-spectra is, however, hard to obtain with the techniques we use. Using Theorem \ref{th1.7}, we can now construct spectra of zero Beurling dimension from regular maximal orthogonal sets. In fact, they can even be arbitrarily sparse. \begin{theorem}\label{th1.9} Let $\mu = \mu_{q,b}$ be the measure defined in (\ref{eq1.1}) with $b>q$ and $\gcd(q, b)=q$. Then given any increasing non-negative function $g$ on $[0, \infty)$, there exists a spectrum $\Lambda$ of $L^2(\mu)$ such that \begin{equation}\label{eq1.6} \lim_{R\rightarrow\infty}\sup_{x\in{\Bbb R}}\frac{\#(\Lambda\cap(x-R,x+R))}{g(R)}=0. \end{equation} \end{theorem} We also study irregular spectra, although the most interesting cases are the regular ones. Let $\tau$ be a maximal mapping such that it is irregular on $\{I_1 0^\infty, \ldots, I_N 0^\infty\}$, where $I_i\in\Sigma_q^{\ast}$ and the last letter of $I_i$ is non-zero, and is regular on all other elements of $\Gamma_q$.
We define the corresponding {\it regularized mapping} $\tau_R$ by $$ \tau_{R}(\sigma) =\left\{ \begin{array}{ll} 0, & \hbox{if $\sigma=I_i0^{k}$ for $k\ge 1$;} \\ \tau(\sigma), & \hbox{otherwise.} \\ \end{array} \right. $$ Our result is as follows: \begin{theorem}\label{th1.10} Let $\tau$ be an irregular maximal mapping of $\mu$. Suppose $\tau$ is irregular on only finitely many $\sigma$ in $\Gamma_q$. Then $\tau$ is a spectral mapping if and only if the corresponding regularized mapping $\tau_R$ is a spectral mapping. \end{theorem} We will prove this theorem in greater generality in Theorem \ref{th6.1}, by showing that the spectral property is unaffected if we alter only finitely many elements of $\Gamma_q$. However, we do not know whether the same holds if the finiteness assumption on the irregular elements is removed. \section{Maximal orthogonal sets} In this section, we discuss the existence of orthogonal sets for $\mu_{q,b}$; in particular, Theorems \ref{th1.1} and \ref{th1.6} are proved. \noindent{\bf Proof of Theorem \ref{th1.1}.} Let $\gcd(q,b)=d$. Suppose first that $q$ and $b$ are relatively prime, i.e., $d=1$. Let $$ {\mathcal Z}_n: = \left\{\frac{b^{n}}{q}a:q\nmid a\right\}. $$ It is easy to see that ${\mathcal Z}(\widehat{\mu}_{q,b}) =\bigcup_{n=1}^{\infty}{\mathcal Z}_n $. Note that for any $a$ with $q\nmid a$, we have $q\nmid ba$ since $\gcd(q,b)=1$. Hence, if $n>1$, $$ \frac{b^{n}}{q}a = \frac{b^{n-1}}{q}(ba)\in{\mathcal Z}_{n-1}. $$ This implies that ${\mathcal Z}_1\supset{\mathcal Z}_2\supset\cdots$ and ${\mathcal Z}(\widehat{\mu}_{q,b}) = {\mathcal Z}_1$. Let $$ Y_i= \left\{\frac{b}{q}a: q\nmid a, \ a\equiv i \ (\mbox{mod} \ q)\right\}; $$ then ${\mathcal Z}(\widehat{\mu}_{q,b}) = \bigcup_{i=1}^{q-1}Y_i$. If there exists a mutually orthogonal set $\Lambda$ for $\mu_{q,b}$ with $\#\Lambda\ge q$, we may assume $0\in \Lambda$, so that $\Lambda\setminus\{0\}\subset {\mathcal Z}(\widehat{\mu}_{q,b})$.
Hence there exists $1\leq i\leq q-1$ such that $Y_i\cap \Lambda$ contains at least two elements, say $\lambda_1,\lambda_2$. But then $\lambda_1-\lambda_2 = \frac{b}{q} s$ where $q\mid s$, so $\lambda_1-\lambda_2\notin{\mathcal Z}(\widehat{\mu}_{q,b})$. This contradicts the orthogonality of $\Lambda$. Suppose now $d>1$; note that $d\leq q$. We first consider $d=q$ and prove that the measure is a spectral measure; this also proves the second statement. Write $b=qr$ and define ${\mathcal D} =\{0,1,\cdots, q-1\}$ and ${\mathcal S} = \{0,r,\cdots, (q-1)r\}$. Then it is easy to see that the matrix $$ H: = [e^{2\pi i \frac{ijr}{b}}]_{0\leq i,j\leq q-1} = [e^{2\pi i \frac{ij}{q}}]_{0\leq i,j\leq q-1} $$ is a Hadamard matrix (i.e. $HH^{\ast}=qI$). This shows that ${\frac 1b \mathcal D}$ and ${\mathcal S}$ form a compatible pair as in \cite{[LaW]}. Therefore $\mu_{q,b}$ is a spectral measure by Theorem 1.2 in \cite{[LaW]}. Suppose now $1<d<q$. We have shown that $\mu_{d,b}$ is a spectral measure, and hence ${\mathcal Z}(\widehat{\mu}_{d,b})$ contains an infinite bi-zero set $\Lambda$ (i.e. $\Lambda-\Lambda\subset {\mathcal Z}(\widehat{\mu}_{d,b})\cup \{0\}$). We claim that ${\mathcal Z}(\widehat{\mu}_{d,b}) \subset {\mathcal Z}(\widehat{\mu}_{q,b})$, so that $\mu_{q,b}$ admits infinitely many orthogonal exponentials. To justify the claim, we write $q=dt$. Note that for $d\nmid a$, $$ \frac{b^{n}}{d}a = \frac{b^{n}}{q}(ta). $$ Since $d\nmid a$ and $q=dt$, $q$ cannot divide $ta$. Hence $\frac{b^{n}}{q}(ta)\in{\mathcal Z}(\widehat{\mu}_{q,b})$. This completes the proof of Theorem \ref{th1.1}. $\square$ \begin{Rem}\label{rem2.1}{\normalshape In view of Theorem \ref{th1.1}, we cannot decide whether there are spectral measures when $1<\gcd(q, b)<q$. In general, $\mu_{q,b}$ is a convolution of several self-similar measures, some of which are spectral and some of which are not.
If $q=4, \ b=6$, we know that $\{0,1,2,3\} = \{0,1\}\oplus\{0,2\}$ and hence} $$ \widehat{\mu}_{4,6}(\xi) = \prod_{j=1}^{\infty}\left(\frac{1+e^{2\pi i 6^{-j}\xi}}{2}\right)\cdot\prod_{j=1}^{\infty} \left(\frac{1+e^{2\pi i 2 6^{-j}\xi}}{2}\right)=\widehat{\nu}_{1}(\xi)\widehat{\nu}_{2}(\xi), $$ {\normalshape where $\nu_1 = \mu_{2,6}$ and $\nu_2$ is the equal-weight self-similar measure defined by the IFS with maps $\frac{1}{6}x$ and $\frac{1}{6}(x+2)$. Hence, $\mu_{4,6}=\nu_1\ast\nu_2$. It is known that both $\nu_1$ and $\nu_2$ are spectral measures, but we do not know whether $\mu_{4,6}$ is a spectral measure. If $q=6$ and $b=10$, then $\{0,1,\cdots, 5\} = \{0,1\}\oplus\{0,2,4\}$ and hence $\mu_{6,10} $ is the convolution of $\mu_{2,10}$ with a non-spectral measure with $3$ digits and contraction ratio $1/10$. Because of its convolutional structure, it may be a good testing ground for studying the {\L}aba-Wang conjecture \cite{[LaW]} and also for finding non-spectral measures admitting Fourier frames \cite{[DL], [HLL]}.} \end{Rem} \noindent{\bf Proof of Theorem \ref{th1.6}.} Suppose $\Lambda = r\Lambda(\tau)$ for some maximal mapping $\tau$. We show that it is a maximal orthogonal set for $L^2(\mu)$. To see this, we first show that $\Lambda$ is a bi-zero set. Pick distinct $\lambda,\lambda'\in\Lambda$. By the definition of $\Lambda(\tau)$, we can find two distinct $\sigma,\sigma'$ in $\Gamma_q$ such that $$ \lambda = \frac{b}{q}\Pi^\tau_b(\sigma), \ \lambda' = \frac{b}{q}\Pi^\tau_b(\sigma'). $$ Let $k$ be the first index such that $\sigma|_k\neq\sigma'|_k$. Then for some integer $M$, we can write $$ q\lambda -q\lambda' = b\sum_{i=k}^{\infty}(\tau(\sigma|_i)-\tau(\sigma'|_i))b^{i-1} = b^k\left((\tau(\sigma|_k)-\tau(\sigma'|_k)) +b M\right). $$ By (ii) in Definition \ref{def1.5}, $\tau(\sigma|_k)$ and $\tau(\sigma'|_k)$ lie in distinct residue classes modulo $q$. This means that $q$ does not divide $\tau(\sigma|_k)-\tau(\sigma'|_k)$. On the other hand, $q$ divides $b$.
Hence, $q$ does not divide $(\tau(\sigma|_k)-\tau(\sigma'|_k)) +b M$. By (\ref{eq1.3}), $\lambda-\lambda'$ lies in ${\mathcal Z}(\widehat{\mu})$. To establish the maximality of the orthogonal set $\Lambda$, we argue by contradiction. Suppose $\theta\not\in\Lambda$ is orthogonal to all elements of $\Lambda$. Since $0\in\Lambda$, $\theta\neq0$ and $\theta = \theta-0\in {\mathcal Z}(\widehat{\mu})$. Hence, by (\ref{eq1.3}) we may write $$ \theta = r(b^{k-1}a), $$ where $q$ does not divide $a$. Expand $b^{k-1}a$ in its $b-$adic expansion using digits $\{-1,0,\cdots,b-2\}$: $$ b^{k-1}a = \epsilon_{k-1}b^{k-1}+\epsilon_{k}b^{k}+\cdots+\epsilon_{k+\ell}b^{k+\ell}, $$ where $q$ does not divide $\epsilon_{k-1}$. Note that there exists a unique $\sigma_{s}$, $0\le \sigma_s\le q-1$, such that $\epsilon_{s}\equiv \sigma_s$ (mod $q$) for $k-1\le s\le k+\ell$. Denote $\sigma_s=\epsilon_s=0$ for $s>k+\ell$. Since $\theta\not\in \Lambda$, we can find the smallest integer $\alpha$ such that $\tau(0^{k-1}\sigma_{k-1}\sigma_k\cdots \sigma_{\alpha})\ne \epsilon_\alpha$. By (iii) in the definition of $\tau$, we can find $\sigma\in\Gamma_q$ such that $\sigma= 0^{k-1}\sigma_{k-1}\sigma_k\cdots \sigma_{\alpha}\sigma'0^{\infty}$ and $\tau$ is regular on $\sigma$. Then there exists an integer $M'$ such that $$ \theta-r\Pi^\tau_b(\sigma)=rb^{\alpha}(\epsilon_{\alpha}-\tau(0^{k-1}\sigma_{k-1}\cdots \sigma_\alpha)+M'b). $$ By (ii) in the definition of $\tau$, $\tau(0^{k-1}\sigma_{k-1}\cdots \sigma_\alpha)\equiv \sigma_{\alpha}$ (mod $q$), which is also congruent to $\epsilon_{\alpha}$ by our construction. This implies that $\theta-r\Pi^\tau_b(\sigma)$ is not in the zero set of $\widehat{\mu}$: indeed, $q$ divides $\epsilon_{\alpha}-\tau(0^{k-1}\sigma_{k-1}\cdots \sigma_\alpha)+M'b$, while $b$ does not divide $\epsilon_{\alpha}-\tau(0^{k-1}\sigma_{k-1}\cdots \sigma_\alpha)$, a non-zero number of absolute value less than $b$, so no representation as in (\ref{eq1.3}) is possible. This contradicts $\theta$ being orthogonal to all elements of $\Lambda$. Conversely, suppose we are given a maximal orthogonal set $\Lambda$ of $L^2(\mu)$ with $0\in\Lambda$. Then $\Lambda\setminus\{0\}\subset {\mathcal Z}(\widehat{\mu})$.
Hence, we can write $$ \Lambda = \{ra_{\lambda}: \lambda\in\Lambda, \ a_{\lambda} = b^{k-1}m \ \mbox{for some $k\geq1$ and $m\in{\Bbb Z}$ with $q\nmid m$}\}, $$ where $a_0=0$. Now, expand each $a_{\lambda}$ in its $b-$adic expansion with digits chosen from ${\mathcal C} = \{-1,0,\cdots,b-2\}$: \begin{equation}\label{eq2.1} a_{\lambda} = \sum_{j=1}^{\infty}\epsilon_{\lambda}^{(j)}b^{j-1}. \end{equation} Let $D({\vartheta}) = \{\epsilon_{\lambda}^{(1)}:\lambda\in\Lambda\}$ be the set of all first coefficients of the $b$-adic expansions \eqref{eq2.1} of elements in $\Lambda$, and let $ D(c_1,\cdots,c_n) = \{\epsilon_{\lambda}^{(n+1)}: \epsilon_{\lambda}^{(1)}=c_1,\cdots, \epsilon_{\lambda}^{(n)} = c_n, \ \lambda\in\Lambda \} $ be the set of all $(n+1)$-st coefficients of elements in $\Lambda$ whose first $n$ coefficients are fixed, where $c_1, c_2, \ldots, c_n\in{\mathcal C}$. We need the following lemma. \begin{Lem}\label{lem2.1} With the notation above, $D({\vartheta})$ contains exactly $q$ elements, which lie in distinct residue classes (mod $q$), and $0\in D({\vartheta})$. Moreover, if $D(c_1,\cdots,c_n)$ with all $c_i\in{\mathcal C}$ is non-empty, then it also contains exactly $q$ elements lying in distinct residue classes (mod $q$). In particular, $0\in D(c_1,\cdots,c_n)$ if $c_1=\cdots=c_n=0$ for $n\ge 1$. \end{Lem} \noindent {\bf Proof.~~} Clearly, by \eqref{eq2.1}, $0\in D({\vartheta})$ and $0\in D(c_1,\cdots,c_n)$ if $c_1=\cdots=c_n=0$ for $n\ge 1$. Suppose the number of elements in $D({\vartheta})$ is strictly less than $q$. Then we may choose $\alpha\in {\mathcal C}\setminus D({\vartheta})$ such that $\alpha$ is not congruent (mod $q$) to any element of $D({\vartheta})$. Then, for any $\lambda\in \Lambda$, by \eqref{eq2.1} we have $$ r\alpha-\lambda = r\left(\alpha-\sum_{j=1}^{\infty}\epsilon_{\lambda}^{(j)}b^{j-1}\right)=\frac bq \left(\alpha-\epsilon_\lambda^{(1)}- \sum_{j=2}^{\infty}\epsilon_{\lambda}^{(j)}b^{j-1}\right).
$$ Note that $q\nmid (\alpha-\epsilon_\lambda^{(1)})$ for all $\lambda\in \Lambda$ by assumption. This implies that $r\alpha$ is orthogonal to every element of $\Lambda$ but is not in $\Lambda$, which contradicts the maximal orthogonality. Hence $D({\vartheta})$ contains at least $q$ elements. If $D({\vartheta})$ contained more than $q$ elements, then there would exist $a_{\lambda_1}=\sum_{j=1}^{\infty}\epsilon_{\lambda_1}^{(j)}b^{j-1}, a_{\lambda_2}=\sum_{j=1}^{\infty}\epsilon_{\lambda_2}^{(j)}b^{j-1}$ such that $\epsilon_{\lambda_1}^{(1)}\equiv \epsilon_{\lambda_2}^{(1)}$ (mod $q$) and $\epsilon_{\lambda_1}^{(1)}\ne \epsilon_{\lambda_2}^{(1)}$. Then $r(a_{\lambda_1}-a_{\lambda_2})=\frac bq (\epsilon_{\lambda_1}^{(1)}-\epsilon_{\lambda_2}^{(1)}+bM)$ for some integer $M$. This means that $r(a_{\lambda_1}-a_{\lambda_2})$ is not a zero of $\widehat{\mu}$, which contradicts the mutual orthogonality. Hence, $D({\vartheta})$ contains exactly $q$ elements, which lie in distinct residue classes (mod $q$). In general, we proceed by induction. Suppose the statement holds up to $n-1$. If now $D(c_1,\cdots,c_n)$ is non-empty, then $D(c_1,\cdots,c_k)$ is also non-empty for all $k\leq n$. We first show that $D(c_1,\cdots,c_n)$ must contain at least $q$ elements. Otherwise, we consider $\theta = r(c_1+\cdots+ c_nb^{n-1}+\alpha b^n)$ where $\alpha\in{\mathcal C}$ is not congruent (mod $q$) to any element of $D(c_1,\cdots,c_n)$. If $\lambda\in \Lambda$ and $\lambda= r(c_1+\cdots+ c_kb^{k-1}+c_{k+1}' b^k+\cdots)$ where $c_{k+1}'\neq c_{k+1}$ and $k\leq n$, then $c_{k+1}, \ c_{k+1}'\in D(c_1,\cdots, c_k)$ and hence $\theta$ and $\lambda$ are mutually orthogonal by the induction hypothesis. If $\lambda\in \Lambda$ is such that the first $n$ digits of its expansion agree with those of $\theta$, the same argument as in the proof for $D({\vartheta})$ shows that $\theta$ is orthogonal to this $\lambda$.
Therefore, $\theta$ is orthogonal to all elements of $\Lambda$, a contradiction. In a similar way to the above, one shows that $D(c_1,\cdots,c_n)$ contains exactly $q$ elements. $\square$ Returning to the proof, by convention we define $\tau({\vartheta})=0$, and on the first level we define $\tau(\sigma_1)$ to be the unique element in $D({\vartheta})$ congruent to $\sigma_1$ (mod $q$). For $\sigma = \sigma_1\cdots \sigma_n$, we define $\tau(\sigma \sigma_{n+1})$ to be the unique element in $D(\tau(\sigma|_1),\cdots, \tau(\sigma|_n))$ (which is non-empty by the induction process) that is congruent to $\sigma_{n+1}$ (mod $q$). Then $\tau(0^k)=0$ for $k\ge 1$. We show that $\tau$ is a maximal mapping corresponding to $\Lambda$. Condition (i) is satisfied by the above. By Lemma \ref{lem2.1}, $\tau$ is well-defined and satisfies (ii) in Definition \ref{def1.5}. Finally, given a node $\sigma\in{\Sigma}_q^n$, by the construction of $\tau$ we can find $\lambda$ whose first $n$ digits in the expansion (\ref{eq2.1}) exactly equal the values $\tau (\sigma|_k)$, $1\leq k\leq n$. Since the digit expansion of $\lambda$ eventually becomes $0$, we continue following the digit expansion of $\lambda$, so that (iii) in the definition is satisfied. We now show that $\Lambda= r\Lambda(\tau)$. For each ${a_{\lambda}}$ given in (\ref{eq2.1}), Lemma \ref{lem2.1} together with the definition of $\tau$ shows that there exists a unique path $\sigma$ such that $\tau(\sigma|_n) = \epsilon_{\lambda}^{(n)}$ for all $n$. As the expansion is finite, this means $\Lambda\subset r\Lambda(\tau)$. Conversely, if some $r\Pi^\tau_b(\sigma)\in r\Lambda(\tau)$ is not in $\Lambda$, then from the first part of the proof we know that $r\Pi^\tau_b(\sigma)$ must be orthogonal to all elements in $\Lambda$. This contradicts the maximal orthogonality of $\Lambda$. Thus, $\Lambda= r\Lambda(\tau)$.
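Theorem \ref{th1.6} can be illustrated concretely for the one-fourth Cantor measure $\mu_{2,4}$ ($q=2$, $b=4$, $r=2$). The following Python sketch (our illustration, not part of the proof; the canonical choice of $\tau$, taking the digit $\sigma_k$ itself from ${\mathcal C}_{\sigma_k}$, is only one of many admissible choices) builds the first elements of $r\Lambda(\tau)$ in the ordering (\ref{eq1.51}) and verifies the bi-zero property against (\ref{eq1.3}):

```python
def in_zero_set(x, q, b):
    """Membership of an integer x in Z(mu-hat) = r * { b^n a : n >= 0,
    q does not divide a }, with r = b // q (the zero-set formula above)."""
    r = b // q
    if x == 0 or x % r != 0:
        return False
    a = x // r
    while a % b == 0:            # strip factors b^n
        a //= b
    return a % q != 0            # remaining factor must not be divisible by q

def lam(n, q, b):
    """lambda_n for the canonical regular mapping: read the q-adic digits of n
    and reuse them as b-adic digits (digit s is taken from the class C_s)."""
    val, power = 0, 1
    while n:
        val += (n % q) * power
        n //= q
        power *= b
    return (b // q) * val        # multiply by r

q, b = 2, 4
spectrum = [lam(n, q, b) for n in range(64)]
assert len(set(spectrum)) == 64  # the lambda_n are pairwise distinct
for x in spectrum:
    for y in spectrum:
        if x != y:
            assert in_zero_set(x - y, q, b)  # bi-zero property holds
```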
\section{Regular spectra} In the rest of the paper, we study under what conditions a maximal orthogonal set is or is not a spectrum. \begin{Lem} \label{lem3.1} \cite{[JP]} Let $\mu$ be a Borel probability measure on ${\Bbb R}$ with compact support. Then a countable set $\Lambda$ is a spectrum for $L^2(\mu)$ if and only if $$ Q(\xi):=\sum_{\lambda\in\Lambda}|\widehat{\mu}(\xi+\lambda)|^2\equiv 1, \quad \mbox{for}\,\, \xi\in{\Bbb R}. $$ Moreover, if $\Lambda$ is a bi-zero set, then $Q$ is an entire function. \end{Lem} Note that the first part of Lemma \ref{lem3.1} is well-known. For the entire function property, we just note that each partial sum $\sum_{|\lambda|\leq n}|\cdots|^2$ is an entire function and is locally uniformly bounded by Bessel's inequality; hence $Q$ is entire by Montel's theorem from complex analysis. One may refer to \cite{[JP]} for the details of the proof. Let $\delta_a$ be the Dirac measure centered at $a$. We define $$ \delta_{{\mathcal E}}=\frac1{\#{\mathcal E}}\sum_{e\in{\mathcal E}}\delta_e $$ for any finite set ${\mathcal E}$. Let $\mu$ be the self-similar measure in (\ref{eq1.1}). Write ${\mathcal D}=\{0, 1, \ldots, q-1\}$ and ${\mathcal D}_k=\frac 1b{\mathcal D}+\cdots+\frac 1{b^k}{\mathcal D}$ for $k\ge 1$. We recall that the {\it mask function} of ${\mathcal D}$ is $$m(\xi)=\frac 1q (1+e^{2\pi i \xi}+\cdots+e^{2\pi i (q-1)\xi})$$ and define $ \mu_{k}=\delta_{{\mathcal D}_k}$; then $$ \widehat{\mu_{k}}(\xi)=\prod_{j=1}^{k}m(b^{-j}\xi). $$ It is well-known that $\mu_{k}$ converges weakly to $\mu$ as $k$ tends to infinity, and we have \begin{equation}\label{eq3.1-} \widehat{\mu}(\xi)=\widehat{\mu_{k}}(\xi)\widehat{\mu}(\frac{\xi}{b^k}). \end{equation} \begin{Lem}\label{lem3.2} Let $\tau$ be a regular mapping and let $\Lambda = r\Lambda(\tau)$ be the maximal orthogonal set determined by $\tau$, where $\Lambda(\tau)=\{\lambda_k\}_{k=0}^\infty$ is ordered as in (\ref{eq1.51}).
Then for all $n\geq1$, \begin{equation}\label{eq3.1} \sum_{k=0}^{q^n-1}\left|\widehat{\mu_n}(\xi+r\lambda_{k})\right|^2\equiv1. \end{equation} \end{Lem} \noindent{\bf Proof.} We claim that $\{r\lambda_k\}_{k=0}^{q^{n}-1}=\frac{b}{q}\{\lambda_k\}_{k=0}^{q^{n}-1}$ is a spectrum of $L^2(\mu_{n})$. We can then use Lemma \ref{lem3.1} to conclude the lemma. Since this set has exactly $q^{n}$ elements, we just need to show mutual orthogonality. To see this, we note that \begin{equation}\label{eq3.2} \widehat{\mu}_n(\xi) = m(\frac{\xi}{b})\cdots m(\frac{\xi}{b^n}). \end{equation} Given $l\neq l'$ in $\{0,\cdots, q^n-1\}$, let $\sigma =\sigma_1\cdots \sigma_n$ and $\sigma'=\sigma_1'\cdots \sigma_n'$ be the $q$-adic expansions of $l$ and $l'$ respectively as in \eqref{eq1.5}, where $\sigma_n$ and $\sigma_n'$ may be zero. We let $s\le n$ be the first index such that $\sigma_s\neq \sigma_s'$. Then we can write $$ \lambda_{l}-\lambda_{{l'}} = b^{s-1}(\tau(\sigma|_s)-\tau(\sigma'|_s) +bM) $$ for some integer $M$. We then have, from the integral periodicity of $e^{2\pi i x}$, that $$ m\left(\frac{r(\lambda_{l}-\lambda_{{l'}})}{b^s}\right) =m\left(\frac{\tau(\sigma|_s)-\tau(\sigma'|_s)}{q}\right) =0. $$ The last expression equals $0$ because (ii) in the definition of a maximal mapping implies that $q$ does not divide $\tau(\sigma|_s)-\tau(\sigma'|_s)$. Hence, by (\ref{eq3.2}), $\widehat{\mu}_n\left(r(\lambda_{l}-\lambda_{{l'}})\right) =0.$ $\square$ Now, we let $$ Q_n(\xi) = \sum_{k=0}^{q^n-1}\left|\widehat{\mu}(\xi+r\lambda_{k})\right|^2, \quad\mbox{and} \ \ Q(\xi) = \sum_{k=0}^{\infty}\left|\widehat{\mu}(\xi+ r\lambda_{k})\right|^2. $$ For any $n$ and $p$, we have the following identity: \begin{equation}\label{eq4.1} \begin{aligned} Q_{n+p}(\xi) =& Q_n(\xi)+ \sum_{k=q^{n}}^{q^{n+p}-1}\left|\widehat{\mu}(\xi+r\lambda_{k})\right|^2\\ =&Q_n(\xi)+ \sum_{k=q^{n}}^{q^{n+p}-1}\left|\widehat{\mu_{n+p}}(\xi+r\lambda_{k})\right|^2\left|\widehat{\mu}(\frac{\xi+ r\lambda_{k}}{b^{n+p}})\right|^2.
\end{aligned} \end{equation} Our goal is to see whether $Q(\xi)\equiv 1$; by invoking Lemma \ref{lem3.1}, we can then determine whether we have a spectrum. As $Q$ is an entire function by Lemma \ref{lem3.1}, we just need to examine the value of $Q(\xi)$ for some small values of $\xi$. To do this, we need to obtain fine estimates of the terms $\left|\widehat{\mu}(\frac{\xi+r\lambda_{k}}{b^{n+p}})\right|^2$ in the above. Write $$ c_{\min}=\min\left\{\prod_{j=0}^\infty|m(b^{-j}\xi)|^2: |\xi|\le \frac{b-1}{qb}\right\}>0, $$ where $|m(\xi)|=\frac{|\sin \pi q\xi|}{q|\sin\pi\xi|}$ and $\prod_{j=0}^\infty|m(b^{-j}\xi)|^2=|m(\xi)\widehat{\mu}(\xi)|^2$. Denote $$ c_{\max}=\max\left\{|m(\xi)|^2: \frac 1{b^2} \le |\xi|\le \frac{b-1}{qb}\right\}<1. $$ The following proposition roughly says that the magnitude of the Fourier transform is controlled, in a uniform way, by the number of non-zero digits in the $b$-adic expansion. Recall that $b=qr$ with $q,r\ge 2$. \begin{Prop}\label{prop3.3} Let $|\xi| \le \frac{r(b-2)}{b-1}$ and let $$ t = \xi +\sum_{k=1}^{N}d_kb^{n_k}, $$ where $d_k\in\{1,2,\cdots, r-1\}$ and $1\leq n_1<\cdots< n_N$. Then \begin{equation}\label{eq3.3} c_{\min}^{N+1}\leq\left|\widehat{\mu}(t)\right|^2\leq c_{\max}^N. \end{equation} \end{Prop} \noindent {\bf Proof.~~} First it is easy to check that, for $|\xi|\le \frac{r(b-2)}{b-1}$ and all $d_k\in\{0, 1, 2, \ldots, r-1\}$, we have \begin{eqnarray}\label{eqxi} \left|\frac{\xi+\sum_{k=1}^n d_kb^k}{b^{n+1}}\right|&\le& \frac 1{b^{n+1}} \left(\frac{r(b-2)}{b-1}+(r-1)(b+b^2+\cdots+b^n)\right) \nonumber\\ &=&\frac{r(b-2)+(r-1)(b^{n+1}-b)}{b^{n+1}(b-1)} \nonumber\\ &\le&\frac{b-1}{qb} \end{eqnarray} for $n\ge1$. The inequality in the last line follows by a direct comparison, using $q\geq 2$. To simplify notation, we let $n_0=0$ and $n_{N+1} = \infty$.
Then $|\widehat{\mu}(t)|^2$ equals \begin{equation}\label{eq5.2} \prod_{j=1}^{\infty}\left|m\left(b^{-j}t\right)\right|^2 = \prod_{i=0}^{N}\prod_{j=n_i+1}^{n_{i+1}}\left|m\left(b^{-j}t\right)\right|^2. \end{equation} We now estimate the products one by one. By \eqref{eqxi}, we have $$ \left|\frac{\xi+\sum_{k=1}^{i}d_kb^{n_k}}{b^{n_i+1}}\right|\le \frac{b-1}{qb}. $$ Hence, together with the integral periodicity of $m$ and the definition of $c_{\min}$, we have for all $i>0$, \begin{eqnarray}\label{eq5.3} \prod_{j=n_i+1}^{n_{i+1}}\left|m(b^{-j}t)\right|^2&=&\prod_{j=n_i+1}^{n_{i+1}}\left|m\left(b^{-j}(\xi+ \sum_{k=1}^{i}d_kb^{n_k})\right)\right|^2 \nonumber\\ &\geq&\prod_{j=0}^{\infty}\left|m\left(b^{-j}\left(\frac{\xi+\sum_{k=1}^{i}d_kb^{n_k}}{b^{n_i+1}}\right)\right)\right|^2\geq c_{\min}. \end{eqnarray} For the case $i=0$, it is easy to see that $\left|\frac{\xi}{b}\right|\leq\frac{b-2}{q(b-1)}<\frac{b-1}{qb}.$ Hence, $\prod_{j=n_0+1}^{n_{1}}\left|m(b^{-j}t)\right|^2$ $\geq \prod_{j=0}^{\infty}\left|m\left(b^{-j}(\xi/b)\right)\right|^2\geq c_{\min}$. Putting this fact and \eqref{eq5.3} into \eqref{eq5.2}, we have $|\widehat{\mu}(t)|^2\geq c_{\min}^{N+1}$. We next prove the upper bound. From $|m(\xi)|\leq 1$, \eqref{eq5.2} and the integral periodicity of $m$, \begin{equation}\label{eq4.8} |\widehat{\mu}(t)|^2\leq \prod_{i=1}^{N}\left|m\left(b^{-{(n_i+1)}}t\right)\right|^2=\prod_{i=1}^{N} \left|m\left(b^{-{(n_i+1)}} (\xi +\sum_{k=1}^{i}d_kb^{n_k})\right)\right|^2. \end{equation} By \eqref{eqxi} we have $$|\xi+\sum_{k=1}^id_kb^{n_k}|\ge b^{n_i}-|\xi+\sum_{k=1}^{i-1}d_kb^{n_k}|\ge b^{n_i}-\frac {b^{n_{i-1}}(b-1)}{q}\ge b^{n_i-1}.$$ By \eqref{eqxi}, \eqref{eq4.8}, the above and the definition of $c_{\max}$, we obtain that $|\widehat{\mu}(t)|^2\le c_{\max}^N$. $\square$ We now prove Theorem \ref{th1.7}. Write $c_1= c_{\min}$ and $c_2=c_{\max}$, where $c_{\min}$ and $c_{\max}$ are in Proposition \ref{prop3.3}. Also recall the quantities defined in Section 2. 
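Before turning to the proof of Theorem \ref{th1.7}, the identity (\ref{eq3.1}) of Lemma \ref{lem3.2} can be checked numerically for the simplest regular mapping, in which $\tau(\sigma|_j)$ is just the $j$-th $q$-adic digit of $k$. This is only an illustrative sketch; the parameters $q=2$, $b=4$ (so $r=2$) and the test frequency are arbitrary choices:

```python
import cmath

def mask(xi, q):
    # Mask function m(xi) = (1/q) * (1 + e^{2 pi i xi} + ... + e^{2 pi i (q-1) xi}).
    return sum(cmath.exp(2j * cmath.pi * s * xi) for s in range(q)) / q

def mu_hat_n(xi, q, b, n):
    # Partial product \hat{mu_n}(xi) = prod_{j=1}^{n} m(xi / b^j).
    p = 1.0
    for j in range(1, n + 1):
        p *= mask(xi / b ** j, q)
    return p

def lam(k, q, b):
    # lambda_k for the simplest regular mapping:
    # place the q-adic digits of k into base b.
    v, j = 0, 0
    while k:
        v += (k % q) * b ** j
        k //= q
        j += 1
    return v

q, b, n = 2, 4, 3
r = b // q
xi = 0.37  # an arbitrary test frequency
Q = sum(abs(mu_hat_n(xi + r * lam(k, q, b), q, b, n)) ** 2 for k in range(q ** n))
# Q equals 1 for every xi, in agreement with (3.1)
```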
For any $n\in {\mathbb N}$, the $q$-adic expansion of $n$ is $\sum_{j=1}^k\sigma_jq^{j-1}$ with $\sigma_k\ne 0$. Then for the map $\tau$ we have $$\lambda_n=\sum_{j=1}^k\tau(\sigma_1\cdots \sigma_j)b^{j-1}+\sum_{j=k+1}^{N_n}\tau(\sigma_1\cdots\sigma_k0^{j-k})b^{j-1}$$ where $\tau(\sigma_1\cdots\sigma_k0^{N_n-k})\ne 0$ and $N^*_n=\#\{j: k+1\le j\le N_n,\ \tau(\sigma_1\cdots\sigma_k0^{j-k})\ne 0\}$. Moreover, ${\mathcal N}^{\ast}_{m,n} = \max_{q^m\leq k<q^n}\{N_k^{\ast}\}$, ${\mathcal L}_n^{\ast} = \min_{q^n\leq k<q^{n+1}}\{N_k^{\ast}\}$ and ${\mathcal M}_n = \max_{1\leq k<q^{n}}\{N_k\}$. \noindent{\bf Proof of Theorem \ref{th1.7}.} (i) Let $\alpha_n$ be the increasing sequence satisfying (\ref{eq3.4}) and let $|\xi|\le\frac{b-2}{b-1}$. By (\ref{eq4.1}), $$ Q_{\alpha_{n+1}}(q^{-1}\xi) =Q_{\alpha_n}(q^{-1}\xi)+ \sum_{k=q^{\alpha_n}}^{q^{\alpha_{n+1}}-1}\left|\widehat{\mu_{\alpha_{n+1}}}(q^{-1}\xi+ r\lambda_{k})\right|^2\left|\widehat{\mu}(\frac{q^{-1}\xi+r\lambda_{k}}{b^{\alpha_{n+1}}})\right|^2. $$ For $k = q^{\alpha_n},\cdots, q^{\alpha_{n+1}}-1$, we may write $\lambda_k$ as $$ \lambda_k = \sum_{j=0}^{\alpha_{n+1}-1}c_jb^{j}+ \sum_{j=1}^{M_k}d_jqb^{n_{j}}, $$ where $c_j\in\{-1,\cdots,b-2\}$, $d_j\in\{1,\cdots, r-1\}$ and $\alpha_{n+1}\leq n_1<n_2<\cdots<n_{M_k}$ with $n_{M_k}=N_k$ and $M_k\leq N_{k}^{\ast}$, where $N_k$ and $N_k^{\ast}$ were defined in (\ref{eq1.51}) and recalled above. Note also that the second term on the right-hand side of the above is zero whenever $N_k<\alpha_{n+1}$. Now, $$ \frac{q^{-1}\xi+r\lambda_{k}}{b^{\alpha_{n+1}}} = \frac{q^{-1}\xi+q^{-1}\sum_{j=1}^{\alpha_{n+1}}c_jb^{j}}{b^{\alpha_{n+1}}} +\sum_{j=1}^{M_k}d_jb^{n_{j}-\alpha_{n+1}+1}. $$ Note that \begin{eqnarray*}\left|\frac \xi q+\frac 1q\sum_{j=1}^kc_jb^{j}\right| \le \frac{b-2}{q(b-1)}+(b-2)\frac{b^{k+1}-b}{q(b-1)} \le \frac{b-2}{q(b-1)} b^{k+1}=\frac{r(b-2)}{b-1}b^k \end{eqnarray*} for all $k\ge 1$.
Hence, Proposition \ref{prop3.3} implies that $$ \left|\widehat{\mu}\left(\frac{q^{-1}\xi+r\lambda_{k}}{b^{\alpha_{n+1}}}\right)\right|^2\geq c_1^{1+M_k}\geq c_1^{1+N^{\ast}_k}\geq c_1^{1+ {\mathcal N}^*_{\alpha_n,\alpha_{n+1}}} $$ for all $q^{\alpha_n}\le k<q^{\alpha_{n+1}}$. Therefore, together with Lemma \ref{lem3.2}, \begin{equation}\label{eq3.5} \begin{aligned} Q_{\alpha_{n+1}}(q^{-1}\xi)\geq& Q_{\alpha_{n}}(q^{-1}\xi)+c_1^{1+{\mathcal N}^*_{\alpha_n,\alpha_{n+1}}}\sum_{k=q^{\alpha_n}}^{q^{\alpha_{n+1}}-1} \left|\widehat{\mu_{{\alpha_{n+1}}}} (q^{-1}\xi+r\lambda_{k})\right|^2\\ =&Q_{\alpha_n}(q^{-1}\xi)+c_1^{1+{\mathcal N}^*_{\alpha_n,\alpha_{n+1}}}\left(1-\sum_{k=0}^{q^{\alpha_n}-1}\left|\widehat{\mu_{{\alpha_{n+1}}}}(q^{-1}\xi +r\lambda_{k})\right|^2\right). \end{aligned} \end{equation} From elementary analysis, there exists $\delta$, $0<\delta<1$, such that $|\widehat{\mu}(\xi)|^2$ is decreasing on $(0,\delta)$. (In fact, since $\widehat{\mu}(0)=1$ and $|\widehat{\mu}(\xi)|^2$ extends to an entire function on the complex plane, there exists $\eta>0$ such that $|\widehat{\mu}(\xi)|<1$ for all $0<\xi<\eta$. If $|\widehat{\mu}(\xi)|^2$ were not decreasing on $(0,\delta)$ for any $\delta>0$, we could find a sequence $\xi_n\rightarrow 0$ such that $(|\widehat{\mu}|^2)'(\xi_n)=0$, and thus $(|\widehat{\mu}|^2)'\equiv0$ by the entire function property of $|\widehat{\mu}|^2$, which is impossible.) In the proof, it is also useful to note that $|\widehat{\mu}(-\xi)|=|\widehat{\mu}(\xi)|$. We now argue by contradiction. Suppose there exists $\Lambda$ such that the hypothesis of Theorem \ref{th1.7} (i) holds but $\Lambda$ is not a spectrum. Then, because $Q$ is entire, there exists $t_0<\min\{\delta, \frac{b-2}{b-1}\}$ such that $Q(q^{-1}t_0)<1$. For $0\leq k\leq q^{\alpha_n}-1$, we have $$ \left|\frac{q^{-1}t_0+r\lambda_k}{b^{\alpha_{n+1}}}\right|\leq \frac{1+rb^{{\mathcal M}_{\alpha_n}}}{b^{\alpha_{n+1}}}: = \beta_n.
$$ By the assumption that $\alpha_{n+1}-{\mathcal M}_{\alpha_n}\rightarrow \infty$, we have $\beta_n<\delta$ for all large $n$, say $n\geq M$, so that $\left|\widehat{\mu}\left(\frac{q^{-1}t_0+r\lambda_k}{b^{\alpha_{n+1}}}\right)\right|^2\ge |\widehat{\mu}(\beta_n)|^2$, and we can find $\rho<1$ such that $$ |\widehat{\mu}(\beta_n)|^{-2}Q(q^{-1}t_0)\leq \rho<1,\qquad \mbox{for}\ \, n\ge M, $$ because $\beta_n$ tends to zero as $n$ tends to infinity and $\widehat{\mu}(0)=1$. According to $\widehat{\mu}(\xi) = \widehat{\mu}_{\alpha_{n+1}}(\xi)\widehat{\mu}(\xi/b^{\alpha_{n+1}})$, we have \begin{eqnarray*} |\widehat{\mu}(q^{-1}t_0+r\lambda_k)|^2&=&\left|\widehat{\mu}_{\alpha_{n+1}}(q^{-1}t_0+r\lambda_k) \widehat{\mu}(\frac{q^{-1}t_0+r\lambda_k}{b^{\alpha_{n+1}}})\right|^2\\ &\ge&|\widehat{\mu}_{\alpha_{n+1}}(q^{-1}t_0+r\lambda_k)|^2|\widehat{\mu}(\beta_n)|^2\\ &\ge&\frac {Q(q^{-1}t_0)}{\rho} |\widehat{\mu}_{\alpha_{n+1}}(q^{-1}t_0+r\lambda_k)|^2. \end{eqnarray*} From (\ref{eq3.5}), for all $n\geq M$, $$ \begin{aligned} Q_{\alpha_{n+1}}(q^{-1}t_0)\geq& Q_{\alpha_n}(q^{-1}t_0)+\left(1-\frac {\rho}{Q(q^{-1}t_0)}\sum_{k=0}^{q^{\alpha_n}-1} |\widehat{\mu}(q^{-1}t_0+r\lambda_k)|^2\right)c_1^{1+{\mathcal N}^*_{\alpha_n,\alpha_{n+1}}}\\ \geq& Q_{\alpha_n}(q^{-1}t_0)+(1-\rho)c_1^{1+{\mathcal N}^*_{\alpha_n,\alpha_{n+1}}}. \end{aligned} $$ Summing over $n$ from $M$ to $M+p$ with $p>0$, and noting that $Q_n(t)\leq 1$ for any $n$, we have $$ 1\geq Q_{\alpha_{M+p+1}}(q^{-1}t_0)\geq Q_{\alpha_M}(q^{-1}t_0)+(1-\rho)\sum_{n=M}^{M+p}c_1^{1+{\mathcal N}^{\ast}_{\alpha_n,\alpha_{n+1}}}. $$ As $\sum_{n=M}^{\infty}c_1^{{\mathcal N}^{\ast}_{\alpha_n,\alpha_{n+1}}}=\infty$ by assumption, the right-hand side of the above tends to infinity. This is impossible. Hence, $\Lambda$ must be a spectrum. \noindent (ii).
The proof starts again from (\ref{eq4.1}): with $p=1$, we have $$ Q_{n+1}(q^{-1}\xi) =Q_n(q^{-1}\xi)+ \sum_{k=q^{n}}^{q^{n+1}-1}\left|\widehat{\mu_{n+1}}(q^{-1}\xi+r\lambda_{k})\right|^2\left|\widehat{\mu} (\frac{q^{-1}\xi+r\lambda_{k}}{b^{n+1}})\right|^2. $$ Since $N^{\ast}_k\geq {\cal L}^*_n$ for $q^n\leq k<q^{n+1}$, Proposition \ref{prop3.3} implies that $$ Q_{n+1}(q^{-1}\xi) \leq Q_n(q^{-1}\xi)+ c_2^{{\cal L}^*_n} \sum_{k=q^{n}}^{q^{n+1}-1}\left|\widehat{\mu_{n+1}}(q^{-1}\xi+r\lambda_{k})\right|^2. $$ Using Lemma \ref{lem3.2} and noting that $|\widehat{\mu_{n+1}}(\xi)|^2\geq|\widehat{\mu}(\xi)|^2$, we have $$ \begin{aligned} Q_{n+1}(q^{-1}\xi) \leq& Q_n(q^{-1}\xi)+ c_2^{{\cal L}^*_n} \left(1- \sum_{k=0}^{q^{n}-1}\left|\widehat{\mu_{n+1}}(q^{-1}\xi+r\lambda_{k}) \right|^2\right)\\ \leq& Q_n(q^{-1}\xi)+c_2^{{\cal L}^*_n}(1-Q_n(q^{-1}\xi)).\\ \end{aligned} $$ Hence, \begin{equation}\label{eq4.7} 1-Q_{n+1}(q^{-1}\xi) \geq (1-Q_n(q^{-1}\xi))(1-c_2^{{\cal L}^*_n})\geq \cdots \geq(1-Q_1(q^{-1}\xi))\prod_{k=1}^{n}(1-c_2^{{\cal L}^*_k}). \end{equation} Since $\sum_{n}c_2^{{\cal L}^*_n}<\infty$, we have $B :=\prod_{k=1}^{\infty}(1-c_2^{{\cal L}^*_k})>0$, and hence, letting $n$ tend to infinity in (\ref{eq4.7}), we obtain $$ 1-Q(q^{-1}\xi)\geq (1-Q_1(q^{-1}\xi))\cdot B>0. $$ Therefore, $\tau$ is not a spectral mapping. \qquad$\Box$ As noted in Remark \ref{rem1.8}, $\tau$ is a spectral mapping if $\sup\{N^{\ast}_n\}$ is finite. We now give an example of a spectrum with slow growth rate of $N^{\ast}_n$. \begin{Example} \label{example3.1} Let $\tau$ be a regular mapping such that $N_n \le \log_qn+\log_{c_1^{-2}}\log_q n$ and $N^{\ast}_n \le \log_{c_1^{-2}}\log_q n$ for $n\ge 1$, where $c_1$ is given in Theorem \ref{th1.7}. Then $r\Lambda(\tau)$ is a spectrum of $L^2(\mu)$. \end{Example} \noindent {\bf Proof.~~} Take $\alpha_n = n^2$.
Recalling that ${\mathcal M}_n=\max_{1\leq k< q^n}N_k$, we have $$\alpha_{n+1}-{\mathcal M}_{\alpha_n}\ge (n+1)^2-n^2-\log_{c_1^{-2}} n^2,$$ which tends to infinity as $n$ tends to infinity. Note that $${\mathcal N}_{\alpha_n, \alpha_{n+1}}^\ast=\max_{q^{\alpha_n}\le k<q^{\alpha_{n+1}}} N_k^\ast\le \log_{c_1^{-2}}\log_q q^{(n+1)^2} =\log_{c_1^{-2}}(n+1)^2.$$ Then $$\sum_{n=1}^\infty c_1^{{\mathcal N}_{\alpha_n, \alpha_{n+1}}^\ast}\ge \sum_{n=1}^\infty c_1^{\log_{c_1^{-2}}(n+1)^2}=\sum_{n=1}^\infty\frac1{n+1}=\infty.$$ By Theorem \ref{th1.7} the result follows. $\square$ On the other hand, if $N^{\ast}_n$ is such that ${\cal L}^{\ast}_n \ge (1+\epsilon)\log_{c_2^{-1}}n$ for some $\epsilon>0$ and all $n\ge 1$, then $r\Lambda(\tau)$ is not a spectrum. This is seen by checking the condition of Theorem \ref{th1.7}(ii) by a method similar to the above; we omit the details. Finally, we prove Theorem \ref{th1.9}. \noindent{\bf Proof of Theorem \ref{th1.9}.} Any $n\in{\mathbb N}$ can be expressed as \begin{equation}\label{finial} n=\sum_{j=1}^k\sigma_jq^{j-1}, \end{equation} where all $\sigma_j\in\{0, 1, \ldots, q-1\}$ and $\sigma_k\ne 0$. Let $\{m_n\}_{n=1}^\infty$ be a strictly increasing sequence of positive integers with $m_1\ge 2$. We now define a maximal mapping in terms of this sequence by $\tau({\vartheta})=\tau(0^k)=0$ for $k\ge 1$ and, for $n$ as in (\ref{finial}), $$\tau(\sigma)=\left\{ \begin{array}{ll} \sigma_k, & \hbox{if}\, \sigma =\sigma_1\cdots\sigma_k, \ \sigma_k\neq 0; \\ 0, & \hbox{if}\, \sigma = \sigma_1\cdots\sigma_k0^{\ell}, \ \ell\neq m_n; \\ q, & \hbox{if}\, \sigma = \sigma_1\cdots\sigma_k0^{\ell}\,\, \hbox{and}\,\, \ell=m_n. \end{array} \right. $$ By the definition we have $\lambda_0=0$ and $$\lambda_n=\sum_{j=1}^{k}\tau(\sigma_1\cdots\sigma_j)b^{j-1}+qb^{m_n};$$ consequently, $N_n^{\ast} =1$ and, by Theorem \ref{th1.7}(i) (see also Remark \ref{rem1.8}), $\Lambda:=\{\lambda_n\}_{n=0}^\infty$ is a spectrum for $L^2(\mu)$.
We now choose $m_n$ so that $\Lambda$ has zero density in the sense of (\ref{eq1.6}). To do this, we first note that there exists a strictly increasing continuous function $h(t)$ from $[0, \infty)$ onto itself such that $h(t)\le g(t)$ for $t\ge 0$, and it is sufficient to replace $g(t)$ by $h(t)$ in the proof. In this way, the inverse of $h(t)$ exists, and we denote it by $h^{-1}(t)$. Now, note that $$\lambda_n\le q\frac{b^k-1}{b-1}+qb^{m_n}\le (q+1)b^{m_n}.$$ Hence, \begin{equation}\label{fin} \lambda_{n+1}-\lambda_n\ge qb^{m_{n+1}}-(q+1)b^{m_n}\ge b^{m_n+1}. \end{equation} Therefore, we choose $m_n$ so that $b^{m_n}\ge 2h^{-1}(b^{n+1})$ for all $n\ge 1$. For any $R$ with $h(R)\ge 1$, there exists a unique $s\in {\mathbb N}$ such that $b^{s-1}\le h(R)<b^s$. Then $$ \frac{\sup_{x\in{\mathbb R}}\#(\Lambda\cap(x-R, x+R))}{h(R)}\le \frac{\sup_{x\in{\mathbb R}}\#(\Lambda\cap(x-h^{-1}(b^s), x+h^{-1}(b^s)))}{b^{s-1}}. $$ Note from \eqref{fin} that the length of the open interval $(x-h^{-1}(b^s), x+h^{-1}(b^s))$ is less than $\lambda_{n+1}-\lambda_n$ whenever $n\ge s$. This implies that the set $\Lambda\cap(x-h^{-1}(b^s), x+h^{-1}(b^s))$ contains at most one $\lambda_n$ with $n\ge s$. We therefore have $$ \sup_{x\in{\Bbb R}}\#(\Lambda\cap(x-h^{-1}(b^s), x+h^{-1}(b^s)))\le s+1. $$ Thus the result follows by taking the limit. $\square$ \section{Irregular spectra} Let $\tau$ be a maximal mapping (not necessarily regular) for $\mu = \mu_{q,b}$ with $b=qr$. Given any $I = \sigma_1\cdots\sigma_k\in\Sigma^k_q=\{0, 1, \ldots, q-1\}^k$ with $\sigma_k\neq 0$, define a map $\tau'$ by $$ \tau'(\sigma) = \left\{ \begin{array}{ll} 0, & \hbox{$\sigma = I0^{\ell}$ for $\ell\ge 1$;} \\ \tau(\sigma), & \hbox{otherwise.} \end{array} \right. $$ Clearly $\tau'$ is a maximal mapping. The main result is as follows: \begin{theorem}\label{th6.1} With the notation above, $\tau$ is a spectral mapping if and only if $\tau'$ is a spectral mapping.
\end{theorem} This result shows that if we arbitrarily change the values of $\tau$ along an element of $\Gamma_q$ as above, the spectral property of $\tau$ is unaffected. In particular, Theorem \ref{th1.10} follows as a corollary, because we can alter the irregular elements one by one by applying Theorem \ref{th6.1} repeatedly. We now prove Theorem \ref{th6.1}. Note that we can decompose \begin{equation}\label{eq5.1}\Gamma_q=\{\sigma 0^\infty: \sigma\in\Sigma^*_q\}=\bigcup_{I\in\Sigma_q^n}I\Gamma_q \end{equation} for all $n\ge 1$. Recall that $$\Lambda(\tau)=\{\Pi^\tau_b(J): J\in\Gamma_q, \tau\,\,\mbox{is regular on}\,\, J\},$$ where $\Pi^\tau_b(J)=\sum_{k=1}^\infty \tau(J|_k)b^{k-1}$. Denote naturally $\Pi^\tau_b(I)=\sum_{k=1}^n \tau(I|_k)b^{k-1}$ if $I\in\Sigma_q^n$, and $\Pi^\tau_{b,\,I}(J)=\sum_{k=1}^\infty\tau(Ij_1\cdots j_k)b^{k-1}$ for $J=j_1j_2\cdots\in\Gamma_q$ such that $IJ$ is regular for $\tau$. Define also $$\Lambda_I(\tau)=\{\Pi^\tau_{b,\,I}(J): J\in\Gamma_q, \tau\,\, \mbox{is regular on}\,\, IJ\}.$$ By \eqref{eq5.1} we have $$\Lambda(\tau)=\bigcup_{I\in\Sigma^n_q}\big(\Pi^\tau_b(I)+b^n\Lambda_I(\tau)\big).$$ The following is a simple proposition, which was also observed in \cite{[DHS]}. \begin{Prop}\label{lemma6.1} Let $\tau$ be a maximal mapping and $n\ge 1$. Then $r\Lambda(\tau)$ is a spectrum for $\mu$ if and only if all $r\Lambda_I(\tau)$, $I\in\Sigma_q^n$, are spectra. \end{Prop} \noindent {\bf Proof.~~} Recall that $\mu_k$ satisfies $\widehat{\mu}(\xi)=\widehat{\mu}_k(\xi)\widehat{\mu}(b^{-k}\xi)$ and $\widehat{\mu}_k(\xi)=\prod_{j=1}^k m(b^{-j}\xi)$ where $m(\xi)=\frac 1q\sum_{j=1}^qe^{2\pi i(j-1)\xi}$. Write $Q_I(\xi)=\sum_{\lambda\in\Lambda_I(\tau)}|\widehat{\mu}(\xi+r\lambda)|^2$.
Note that \begin{eqnarray*} Q(\xi)=\sum_{\lambda\in\Lambda(\tau)}|\widehat{\mu}(\xi+r\lambda )|^2&=&\sum_{I\in\Sigma_q^n,\, \lambda\in\Lambda_I(\tau)}|\widehat{\mu}_n(\xi+r\Pi_b^\tau(I)+rb^n\lambda)|^2\left|\widehat{\mu}\left(\frac{\xi+r\Pi_b^\tau(I)}{b^n} +r\lambda\right)\right|^2\\ &=&\sum_{I\in\Sigma_q^n,\, \lambda\in\Lambda_I(\tau)}|\widehat{\mu}_n(\xi+r\Pi_b^\tau(I))|^2\left|\widehat{\mu}\left(\frac{\xi+r\Pi_b^\tau(I)}{b^n} +r\lambda\right)\right|^2\\ &=&\sum_{I\in\Sigma_q^n}|\widehat{\mu}_n(\xi+r\Pi_b^\tau(I))|^2\cdot Q_I\left(\frac{\xi+r\Pi_b^\tau(I)}{b^n}\right). \end{eqnarray*} By an argument similar to the proof of Lemma \ref{lem3.2}, we have $$1\equiv \sum_{I\in\Sigma_q^n}|\widehat{\mu}_n(\xi+r\Pi_b^\tau(I))|^2.$$ Invoking Lemma \ref{lem3.1}, the result follows. $\square$ Proposition \ref{lemma6.1} asserts that the spectral property is determined by a finite number of nodes. The following two lemmas show that the spectral property at a particular node $\sigma$ can be determined by infinitely many of its offspring and is {\it independent} of the regularity of $\sigma0^{\infty}$. These are the key lemmas for the proof of Theorem \ref{th6.1}. \begin{Lem}\label{lem6.1} Let $I\in\Sigma_q^*$ with $I\ne {\vartheta}$, the empty word. If $\tau$ is regular on $I0^\infty$, then $$ \Lambda_I(\tau)=\{\Pi_{b,\,I}^\tau(0^\infty)\}\cup\bigcup_{k=1}^\infty\bigcup_{j=1}^{q-1}\left(\Pi_{b,\,I}^\tau( 0^{k-1}j)+ b^k\Lambda_{I0^{k-1}j}(\tau)\right), $$ where $\Pi_{b, \,I}^\tau(0^{k-1}j)=\tau(I0)+\tau(I0^2)b+\cdots+\tau(I0^{k-1}j)b^{k-1}$. If $\tau$ is irregular on $I0^\infty$, then $$\Lambda_I(\tau)=\bigcup_{k=1}^\infty\bigcup_{j=1}^{q-1}\left(\Pi_{b,\,I}^\tau(0^{k-1}j)+ b^k\Lambda_{I0^{k-1}j}(\tau)\right).$$ \end{Lem} \noindent {\bf Proof.~~} This is checked directly from the definitions. $\square$ \begin{Lem}\label{lemma6.2} Let $\tau$ be a maximal mapping and let $I\in\Sigma_q^{\ast}$.
Then $\Lambda_I(\tau)$ is a spectrum of $\mu$ if and only if $\Lambda_{I0^{k-1}j}(\tau)$ are spectra of $\mu$ for all $k\geq 1$ and $j=1,\cdots, q-1$. \end{Lem} \noindent {\bf Proof.~~} The necessity is clear from Proposition \ref{lemma6.1}. We now prove the sufficiency. Assume that $\Lambda_{I0^{k-1}j}(\tau)$ are spectra for all $k\geq 1$ and $j=1,\cdots, q-1$. We need to show that $Q_I(\xi)=\sum_{\lambda\in\Lambda_I(\tau)}|\widehat{\mu}(\xi+r\lambda)|^2\equiv 1$. By the integral periodicity of $m$ and Lemma \ref{lem3.1} (which is used in the second equality below), we have for all $k\geq 2$, $$ \begin{aligned} &\sum_{j=1}^{q-1}|\widehat{\mu}_k(\xi+r\Pi^{\tau}_{b,I}(0^{k-1}j))|^2\\ =&\sum_{j=1}^{q-1}|\widehat{\mu}_{k-1}(\xi+r\Pi^{\tau}_{b,I} (0^{k-1}))|^2\left|\widehat{\mu}_1 \left(\frac{\xi+r\Pi^{\tau}_{b,I}(0^{k-1})+r\tau(I0^{k-1}j)b^{k-1}}{b^{k}}\right)\right|^2\\ =&|\widehat{\mu}_{k-1}(\xi+r\Pi^{\tau}_{b,I}(0^{k-1}))|^2 \left(1-\left|\widehat{\mu}_1\left(\frac{\xi+r\Pi^{\tau}_{b,I}(0^{k})}{b^{k}}\right) \right|^2\right)\\ =&|\widehat{\mu}_{k-1}(\xi+r\Pi^{\tau}_{b,I}(0^{k-1}))|^2 -|\widehat{\mu}_{k}(\xi+r\Pi^{\tau}_{b,I}(0^{k}))|^2.\\ \end{aligned} $$ If $k=1$, the above becomes $\sum_{j=1}^{q-1}|\widehat{\mu}_1(\xi+r\Pi^{\tau}_{b,I}(j))|^2= 1 -|\widehat{\mu}_{1}(\xi+r\Pi^{\tau}_{b,I}(0))|^2$.
Now we simplify the following terms, which correspond to the unions of the sets in Lemma \ref{lem6.1}: \begin{eqnarray} &&\sum_{k=1}^{\infty}\sum_{j=1}^{q-1}\sum_{\lambda\in\Lambda_{I0^{k-1}j}(\tau)} |\widehat{\mu}(\xi+r\Pi^{\tau}_{b,I}(0^{k-1}j)+rb^k\lambda)|^2\nonumber\\ &=&\sum_{k=1}^{\infty}\sum_{j=1}^{q-1}\sum_{\lambda\in\Lambda_{I0^{k-1}j}(\tau)} |\widehat{\mu}_k(\xi+r\Pi^{\tau}_{b,I}(0^{k-1}j))|^2\left|\widehat{\mu}\left(\frac{\xi+r\Pi^{\tau}_{b,I}(0^{k-1}j)}{b^k} +r\lambda\right)\right|^2 \nonumber\\ &=&\sum_{k=1}^{\infty}\sum_{j=1}^{q-1}|\widehat{\mu}_k(\xi+r\Pi^{\tau}_{b,I}(0^{k-1}j))|^2\nonumber\\ &=&1-\lim_{N\rightarrow\infty}|\widehat{\mu}_{N}(\xi+r\Pi^{\tau}_{b,I}(0^{N}))|^2\nonumber\\ &=& 1-\prod_{j=1}^{\infty}\left|m\left(\frac{\xi+r\Pi^{\tau}_{b,I}(0^{j})}{b^j}\right)\right|^2.\label{111} \end{eqnarray} We now divide the proof into two cases. \noindent{ Case (i).} If $I0^{\infty}$ is regular, then $|\widehat{\mu}(\xi+r\Pi^\tau_{b, I}(0^\infty))|^2 = \prod_{j=1}^{\infty}\left|m(\frac{\xi+r\Pi^{\tau}_{b,I}(0^{j})}{b^j})\right|^2$. Hence, by Lemma \ref{lem6.1}, $$ Q_{I}(\xi) = |\widehat{\mu}(\xi+r\Pi^\tau_{b, I}(0^\infty))|^2+(\ref{111}) \equiv 1. $$ This shows $r\Lambda_{I}(\tau)$ is a spectrum. \noindent{ Case (ii).} If $I0^{\infty}$ is irregular, then Lemma \ref{lem6.1} shows that \begin{equation*} Q_{I}(\xi) = (\ref{111}) = 1-\prod_{j=1}^{\infty}\left|m\left(\frac{\xi+r\Pi^{\tau}_{b,I}(0^{j})}{b^j}\right)\right|^2. \end{equation*} Note that, by (ii) in the definition of the maximal mapping, we may write \begin{equation}\label{112} r\Pi^{\tau}_{b,I}(0^{n}) =r\sum_{j=1}^n\tau(I0^j)b^{j-1}= r\sum_{j=1}^{n}(s_jq)b^{j-1} = \sum_{j=1}^{n}s_jb^{j} \end{equation} for some $s_j\in\{0,1,\cdots, r-1\}$. Then \begin{equation}\label{eq6.2} Q_{I}(\xi) = 1-\prod_{j=1}^{\infty}\left|m\left(\frac{\xi+r\Pi^{\tau}_{b,I}(0^{j-1})}{b^j}\right)\right|^2. \end{equation} Suppose on the contrary that $Q_{I}(\xi)<1$ for some $\xi>0$.
Since $Q_{I}$ is entire, we may assume $\xi$ is small, say $|\xi|<\frac{r-1}{b-1}$. From (\ref{eq6.2}), we must have $\prod_{j=1}^{\infty}\left|m\left(\frac{\xi+r\Pi^{\tau}_{b,I}(0^{j-1})}{b^j}\right)\right|^2>0.$ For those $n$ such that $s_n\neq 0$ in \eqref{112}, we have $$ \frac1b\leq\frac{s_n}{b}\leq\left|\frac{\xi+\sum_{j=1}^{n}s_jb^j}{b^{n+1}}\right|\leq \frac{r-1}{b(b-1)}<\frac{1}{q(b-1)}. $$ Hence, letting $c = \max\{|m(\xi)|^2: \frac1b\leq|\xi|<\frac{1}{q(b-1)}\}<1$, we have $$ \left|m\left(\frac{\xi+r\Pi^{\tau}_{b,I}(0^{{n}})}{b^{n+1}}\right)\right|^2 = \left|m\left(\frac{\xi+\sum_{j=1}^{n}s_jb^j}{b^{n+1}}\right)\right|^2\leq c, $$ and $$ \prod_{j=1}^{\infty}\left|m\left(\frac{\xi+r\Pi^{\tau}_{b,I}(0^{j-1})}{b^j}\right)\right|^2=\lim_{N\rightarrow\infty}\prod_{j=1}^{N}\left|m\left(\frac{\xi+r\Pi^{\tau}_{b,I}(0^{j-1})}{b^j}\right)\right|^2\leq\lim_{N\rightarrow\infty}c^{\#\{n: s_n\neq 0, \ n\leq N\}}. $$ As $I0^{\infty}$ is irregular, there exist infinitely many $s_j\neq 0$, so the above limit is zero. This is a contradiction, and hence $\Lambda_{I}(\tau)$ must be a spectrum. $\square$ \noindent{\bf Proof of Theorem \ref{th6.1}.} By the definition of $\tau$ and $\tau'$, $\Lambda_{I'}(\tau) = \Lambda_{I'}(\tau')$ for all $I'\neq I$ with $I'\in\Sigma_{q}^k$. Moreover, $\Lambda_{I0^{k-1}j}(\tau)=\Lambda_{I0^{k-1}j}(\tau')$ for all $k\geq 1$ and $j=1,\cdots, q-1$. Therefore, if $\tau$ is a spectral mapping, then $\Lambda_{I'}(\tau')$ are spectra of $\mu$ for all $I'\neq I$ with $I'\in\Sigma_q^{k}$, by Proposition \ref{lemma6.1}. On the other hand, $\Lambda_{I0^{k-1}j}(\tau')$ are spectra of $\mu$ as well, since $\tau$ is a spectral mapping. By Lemma \ref{lemma6.2}, $\Lambda_{I}(\tau')$ is also a spectrum. We therefore conclude that $\Lambda(\tau')$ is a spectrum of $\mu$ by Proposition \ref{lemma6.1} again. The converse holds by reversing the roles of $\tau$ and $\tau'$. This completes the proof. $\square$ \end{document}
\begin{document} \pagestyle{plain} \title{On the Hopcroft's minimization algorithm} \titlerunning{On the Hopcroft's minimization algorithm} \author{Andrei P\u aun} \institute{ Department of Computer Science/IfM, Louisiana Tech University\\ P.O. Box 10348, Ruston, LA 71272, USA\\ Universidad Polit\'ecnica de Madrid - UPM, Facultad de Inform\'atica\\ Campus de Montegancedo S/N, Boadilla del Monte, 28660 Madrid, Spain \email{[email protected]} } \maketitle \begin{abstract} We show that the absolute worst-case time complexity of Hopcroft's minimization algorithm applied to unary languages is reached only for de Bruijn words. A previous paper by Berstel and Carton gave the example of de Bruijn words as a language that requires $O(n\log n)$ steps, obtained by carefully choosing the splitting sets and processing these sets in a FIFO manner. We refine that result by showing that the Berstel/Carton example is actually the absolute worst case for unary languages. We also show that a LIFO implementation does not reach this worst-case time complexity for unary languages. Lastly, we show that the same result also holds for cover automata and the modification of Hopcroft's algorithm used in the minimization of cover automata. \end{abstract} \section{Introduction} This work is a continuation of the result reported by Berstel and Carton in \cite{berstel_ciaa02}. There they showed that Hopcroft's algorithm requires $O(n\log n)$ steps when the example of de Bruijn words (see \cite{bruijn}) is considered as input. The setting of the paper \cite{berstel_ciaa02} is that of languages over a unary alphabet, considering input automata whose number of states is a power of 2 and choosing ``in a specific way'' which set becomes a splitting set in the case of ties.
In this context, the previous paper showed that one needs $O(n\log n)$ steps for the algorithm to complete, which matches the theoretical asymptotic worst-case time complexity of the algorithm as reported in \cite{hopcroft,HopUll,gries,Knuutila} etc. We were interested in investigating this aspect of Hopcroft's algorithm further, specifically in the setting of unary languages, but for a stack implementation of the algorithm. Our effort has led to the observation that, when considering the worst case for the number of steps of the algorithm (which in this case translates to the largest number of states appearing in the splitting sets), a LIFO implementation indeed outperforms a FIFO strategy, as suggested by the experimental results on random automata reported in \cite{CIAA06}. One major clarification is needed: we do not consider the asymptotic complexity of the run-time, but the actual number of steps. In the current paper, when comparing $n\log n$ steps and $n\log(n-1)$ steps we will say that $n\log n$ is worse than $n\log(n-1)$, even though in the framework of asymptotic (big-O) complexity they are equivalent, i.e. $n\log n\in \Theta(n\log(n-1))$. We give some definitions, notations and previous results in the next section; we then give a brief description of the algorithm and its features in Section \ref{hop}, while Section \ref{worst} describes the properties of the automaton that reaches the worst possible case in terms of the number of steps required by the algorithm (as a function of the initial number of states of the automaton). We then briefly touch upon the case of cover automata minimization with a modified version of Hopcroft's algorithm in Section \ref{cover} and conclude with some final remarks in Section \ref{fin}.
\section{Preliminaries} \label{prelim} We assume the reader is familiar with the basic notions of formal languages and finite automata; see for example the excellent work by Hopcroft, Salomaa or Yu \cite{HopUll,ASalaa,syu}. In the following we denote the cardinality of a finite set $T$ by $|T|$, the set of words over a finite alphabet $\Sigma$ by $\Sigma^*$, and the empty word by $\lambda$. The length of a word $w\in \Sigma^*$ is denoted by $|w|$. We define $\Sigma^{l}=\{w \in \Sigma^* \mid |w|=l\}$, $\Sigma^{\leq l}=\displaystyle\bigcup_{i=0}^l \Sigma^i$, and $\Sigma^{< l}=\displaystyle\bigcup_{i=0}^{l-1} \Sigma^i$. A deterministic finite automaton (DFA) is a quintuple $A=(\Sigma,Q,\delta, q_0,F)$ where $\Sigma$ is a finite set of symbols, $Q$ is a finite set of states, $\delta:Q \times \Sigma \longrightarrow Q$ is the transition function, $q_0$ is the start state, and $F$ is the set of final states. We can extend $\delta$ from $Q \times \Sigma $ to $Q \times \Sigma^* $ by $\overline{\delta}(s,\lambda)=s,$ $\overline{\delta}(s,aw)=\overline{\delta}(\delta(s,a),w).$ We usually denote the extension $\overline{\delta}$ of $\delta$ by $\delta$. The language recognized by the automaton $A$ is $L(A)=\{w\in \Sigma^*\mid \delta(q_0,w)\in F\}$. In what follows we assume that $\delta$ is a total function, i.e., the automaton is complete. For a DFA $A=(\Sigma,Q,\delta,q_0,F)$ we can always assume, without loss of generality, that $Q=\{0,1,\ldots,|Q|-1\}$ and $q_0=0$; we will use this convention whenever it simplifies the notation. If $L=L(A)$ is finite and $A$ is complete, there is at least one state, called the sink state or dead state, for which $\delta(sink,w)\notin F$ for any $w\in \Sigma^*$. If $L$ is a finite language, we denote by $l$ the maximal length of a word in $L$.
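The extended transition function $\overline{\delta}$ and the acceptance condition above translate directly into code. A minimal sketch (the dictionary-based representation of $\delta$ and the function names are assumptions made for illustration):

```python
def delta_star(delta, s, w):
    # Extended transition function: delta_bar(s, w) follows w letter by letter,
    # matching delta_bar(s, aw) = delta_bar(delta(s, a), w).
    for a in w:
        s = delta[(s, a)]
    return s

def accepts(delta, q0, finals, w):
    # w is in L(A) iff delta_bar(q0, w) lands in a final state.
    return delta_star(delta, q0, w) in finals

# A complete unary DFA with two states accepting the words of even length:
d = {(0, 'a'): 1, (1, 'a'): 0}
# accepts(d, 0, {0}, 'aa') is True; accepts(d, 0, {0}, 'a') is False
```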
\begin{definition} A language $L'$ over $\Sigma$ is called a cover language for the finite language $L$ if $L'\cap\Sigma^{\leq l} = L$. A deterministic finite cover automaton (DFCA) for $L$ is a deterministic finite automaton (DFA) $A$ such that the language accepted by $A$ is a cover language of $L$. \end{definition} \begin{definition} Let $A=(Q, \Sigma , \delta , 0 , F)$ be a DFA and $L=L(A)$. We say that $p\equiv_A q $ (state $p$ is equivalent to $q$ in $A$) if for every $w\in \Sigma^{*}$, $\delta(p,w)\in F$ iff $\delta(q,w)\in F$. \end{definition} The right language of a state $p\in Q$ of a DFCA $A=(Q,\Sigma,\delta,q_0,F)$ is $R_p=\{w\mid \delta(p,w)\in F, |w|\leq l-level_A(p)\}$. \begin{definition} Let $x,y\in \Sigma^*$. We define the similarity relation $x \sim_L y$ to hold if for all $z\in \Sigma^*$ such that $xz,yz\in\Sigma^{\leq l}$, $xz\in L$ iff $yz\in L$; we write $x\not\sim_L y$ if $x\sim_L y$ does not hold. \end{definition} \begin{definition} Let $A = (Q,\Sigma,\delta,0,F)$ be a DFA (or a DFCA). We define, for each state $q\in Q$, $level(q)=\min\{|w| \mid \delta(0,w)=q\}$. \end{definition} \begin{definition} \label{def_states} Let $A=(Q, \Sigma , \delta , 0 , F)$ be a DFCA for $L$. We consider two states $p,\ q\in Q$ and $m=\max\{level(p),level(q)\}$. We say that $p$ is similar to $q$ in $A$, denoted by $p \sim_A q$, if for every $w\in \Sigma^{\leq l-m}$, $\delta(p,w)\in F$ iff $\delta(q,w)\in F$. We say that two states are dissimilar if they are not similar. \end{definition} If the automaton is understood, we may omit the subscript $A$. \begin{lemma} \label{eq_Lq=Ls} Let $A=(Q,\Sigma,\delta,0,F)$ be a DFCA of a finite language $L$. Let $level(p)=i$, $level(q)=j$, and $m=\max\{i,j\}$. If $p\sim_A q$, then $R_p\cap \Sigma^{\leq l-m}=R_q\cap \Sigma^{\leq l-m}$. \end{lemma} \begin{definition} A DFCA $A$ for a finite language is a minimal DFCA if and only if any two distinct states of $A$ are dissimilar.
\end{definition} Once two states have been detected as similar, one can merge the one with the higher level into the one with the lower level by redirecting transitions. We refer the interested reader to \cite{wia98} for the merging theorem and other properties of cover automata. \section{Hopcroft's state minimization algorithm}\label{hop} An elegant algorithm for state minimization of DFAs was described in \cite{hopcroft}. This algorithm was proven to run in $O(n \log n)$ time in the worst case, and it uses a special data structure that makes the set operations of the algorithm fast. We now give a description of the algorithm for an arbitrary alphabet $A$, working on an automaton $(A,Q,\delta,q_0,F)$; later we will restrict our attention to unary languages.

\noindent 1: $P =\{F,\ Q-F\}$ \\
2: for all $a \in A$ do \\
3: \hspace*{0.5cm} Add((min($F,\ Q-F), a), S$) \\
4: while $S \not =\emptyset $ do \\
5: \hspace*{0.5cm} get $(C, a)$ from $S$ \ \ (we extract $(C,a)$ according to the strategy associated with $S$: FIFO/LIFO/\ldots) \\
6: \hspace*{0.5cm} for each $B \in P$ split by $(C, a)$ do \\
7: \hspace*{0.5cm}\hspace*{0.5cm} $B'$, $B''$ are the sets resulting from splitting $B$ w.r.t. $(C, a)$ \\
8: \hspace*{0.5cm}\hspace*{0.5cm} Replace $B$ in $P$ with both $B'$ and $B''$ \\
9: \hspace*{0.5cm}\hspace*{0.5cm} for all $b \in A$ do \\
10: \hspace*{0.5cm}\hspace*{0.5cm}\hspace*{0.5cm} if $(B, b) \in S$ then \\
11: \hspace*{0.5cm}\hspace*{0.5cm}\hspace*{0.5cm}\hspace*{0.5cm}Replace $(B, b)$ by $(B', b)$ and $(B'', b)$ in $S$ \\
12: \hspace*{0.5cm}\hspace*{0.5cm}\hspace*{0.5cm}else \\
13: \hspace*{0.5cm}\hspace*{0.5cm}\hspace*{0.5cm}\hspace*{0.5cm} Add((min$(B',B''), b), S)$

Here a set $B$ is split by the pair $(C,a)$ (line 6) when $\delta(B,a)\cap C\not=\emptyset$ and $\delta(B,a)\cap (Q-C)\not=\emptyset$, where $\delta(B,a)$ denotes the set $\{q\mid q=\delta(p,a),\ p\in B\}$. 
The sets $B'$ and $B''$ from line 7 are the two subsets of $B$ defined as follows: $B'=\{b\in B\mid \delta(b,a)\in C\}$ and $B''=B-B'$. It is useful to explain the algorithm briefly: we start with the partition $P=\{F,Q-F\}$, and one of these two sets is added to the splitting sequence $S$. The algorithm proceeds by splitting according to the current splitting set retrieved from $S$, and with each splitting of a set in $P$ the collection of splitting sets stored in $S$ grows (either through instruction 11 or instruction 13). When all the splitting sets from $S$ have been processed and $S$ becomes empty, the partition $P$ encodes the state equivalences of the input automaton: all the states contained in the same set $B$ of $P$ are equivalent. Knowing all the equivalences, one can easily minimize the automaton by merging all the states that lie in the same set of the final partition $P$. We note that there are three levels of ``nondeterminism'' in the algorithm. The most visible one is the strategy for processing the list stored in $S$: as a queue, as a stack, etc. The second and third levels of nondeterminism appear when a set $B$ is split into $B'$ and $B''$. If $B$ is not present in $S$, then the algorithm chooses which of $B'$ or $B''$ is added to $S$, a choice based on the minimal number of states in these two sets; when $B'$ and $B''$ have the same number of states, we have the second ``nondeterministic'' choice. The third such choice appears when the split set $(B,a)$ is in the list $S$: the algorithm then prescribes the replacement of $(B,a)$ by $(B',a)$ and $(B'',a)$ (line 11). In practice this is implemented as follows: $(B'',a)$ replaces $(B,a)$ and $(B',a)$ is added to the list $S$ (or vice versa). 
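The listing above can be turned into a short executable sketch; the following Python code (our own illustration, fixing the FIFO strategy and assuming both $F$ and $Q-F$ are non-empty) follows the numbered lines of the algorithm:

```python
from collections import deque

def hopcroft(n, sigma, delta, finals):
    """Partition-refinement sketch of the listing above, processing
    the splitter list S in FIFO order.  States are 0..n-1 and delta
    maps (state, symbol) -> state and is total.  Returns the set of
    equivalence classes as frozensets."""
    F = frozenset(finals)
    NF = frozenset(range(n)) - F
    P = {F, NF}                        # line 1: P = {F, Q-F}
    S = deque()
    for a in sigma:                    # lines 2-3
        S.append((min(F, NF, key=len), a))
    # inverse transitions, to find the states leading into C under a
    inv = {}
    for q in range(n):
        for a in sigma:
            inv.setdefault((delta[(q, a)], a), set()).add(q)
    while S:                           # line 4
        C, a = S.popleft()             # line 5 (FIFO strategy)
        X = set().union(*(inv.get((c, a), set()) for c in C))
        for B in [B for B in P if B & X and B - X]:       # line 6
            B1, B2 = frozenset(B & X), frozenset(B - X)   # line 7
            P.remove(B)
            P.update({B1, B2})                            # line 8
            for b in sigma:            # lines 9-13
                if (B, b) in S:
                    S = deque(y for x in S for y in
                              ([(B1, b), (B2, b)] if x == (B, b) else [x]))
                else:
                    S.append((min(B1, B2, key=len), b))
    return P
```

For example, on the four-state unary automaton with transitions $0\to1\to2\to3\to2$ and $F=\{1,3\}$, the sketch returns the partition $\{\{0,2\},\{1,3\}\}$.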
Since the processing strategy of $S$ matters, the choice of which of $B'$ or $B''$ is added to $S$, and which one replaces the previous occurrence of $(B,a)$, also matters in an actual implementation. In the original paper \cite{hopcroft}, and later in \cite{gries} and \cite{Knuutila}, when describing the complexity of the algorithm, the authors showed that the running time is governed by the number of states that appear in the sets processed through $S$. Intuitively, this is why the smaller of $B'$ and $B''$ is inserted into $S$ in line 13, and this is what makes the algorithm sub-quadratic. In the following we focus on exactly this issue: the number of states appearing in sets processed through $S$. \section{Worst case scenario for unary languages}\label{worst} Let us start the discussion with several observations and preliminary clarifications. We discuss languages over a unary alphabet, and to make the proof easier we restrict our attention to automata whose number of states is a power of 2. The three levels of nondeterminism are resolved as follows: we assume that $S$ is processed in a FIFO manner, and we assume that there is a strategy for choosing between two just-split sets of equal size such that the one added to the queue $S$ makes the third kind of nondeterminism irrelevant; in other words, no splitting of a set already in $S$ will take place. We denote by $S_{w},\ w\in\{0,1\}^*$, the set of states $p\in Q$ such that $\delta(p,a^{i-1})\in F$ iff $w_i=1$, for $i=1,\ldots,|w|$, where $\delta(p,a^0)$ denotes $p$. As an example, $S_1=F$; $S_{110}$ contains all the final states that are followed by a final state and then by a non-final state; and $S_{00000}$ denotes the states that are non-final and are followed in the automaton by four more non-final states. 
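For concreteness, the sets $S_w$ can be computed directly from a unary automaton; the following sketch (our own illustration, not from the paper) classifies each state by the finality pattern of its next $|w|$ states:

```python
def s_w(succ, finals, w):
    """Return S_w for a complete unary automaton given by its
    successor function succ (a list: succ[p] = delta(p, a)) and its
    final-state set: p is in S_w iff, for i = 1..|w|, the state
    reached from p by i-1 transitions is final exactly when w_i = 1."""
    result = set()
    for p in range(len(succ)):
        q, ok = p, True
        for bit in w:
            if (q in finals) != (bit == '1'):
                ok = False
                break
            q = succ[q]
        if ok:
            result.add(p)
    return result
```

On the cyclic 8-state automaton of the de Bruijn word 11101000 (final states $\{0,1,2,4\}$, successor $(p+1) \bmod 8$), this gives $S_1=F$ and $S_{110}=\{1\}$.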
Let us assume that such an automaton with $2^n$ states is given as input to the minimization algorithm described in the previous section. We note that, since we have only one letter in the alphabet, the pairs $(C,a)$ from the list $S$ can safely be written as $C$; thus the list $S$ (for the particular case of unary languages) becomes a list of sets of states. So let us assume that the automaton $(\{a\},Q,\delta,q_0,F)$, where $|Q|=2^n$, is given as the input of the algorithm. The algorithm starts by choosing the first splitter set to be added to $S$; this set will be chosen between $F$ and $Q-F$ based on their numbers of states. Since we are interested in the worst-case scenario for the algorithm, and the running time is governed by the total number of states that appear in the list $S$ throughout the run of the algorithm (as shown in \cite{hopcroft}, \cite{gries}, \cite{Knuutila} and mentioned in \cite{berstel_ciaa02}), it is clear that we want to maximise the sizes (and the number) of the sets that are added to $S$. We now give a lemma that will be useful in what follows. \begin{lemma}\label{number} For deterministic automata over unary languages, if a set $R$ with $|R|=m$ is the current splitter set, then $R$ cannot add to the list $S$ sets containing more than $m$ states in total. \end{lemma} \begin{proof} The statement of the lemma says the following: let $B_i$ be the sets of the current partition $P$ that are split by $R$, i.e., $\delta(B_i,a)\cap R\not=\emptyset$ and $\delta(B_i,a)\cap (Q-R)\not=\emptyset$; then $\sum_{i} |B'_i|\le m$, where $B'_i$ is the smaller of the two sets resulting from the splitting of $B_i$ with respect to $R$. Since we have only one letter in the alphabet, the number of states $q$ such that $\delta(q,a)\in R$ is at most $m$. 
Each $B_i'$ is chosen as the set with the smaller number of states when splitting $B_i$, thus $|B'_i|\le |\delta(B_i,a)\cap R|$, which implies that $\sum_{i} |B'_i|\le \sum_{i}|\delta(B_i,a)\cap R|=|(\bigcup_{i} \delta(B_i,a))\cap R|\le |R|$ (because all the $B_i$ are disjoint). Thus we have proved that if we split according to a set $R$, then the new sets added to $S$ contain at most $|R|$ states in total. \qed \end{proof} Coming back to our previous setting, we have the automaton given as input to the algorithm and we have to find the smaller of the sets $F$ and $Q-F$. In the worst case (according to Lemma \ref{number}) we have $|F|=|Q-F|$, as otherwise fewer than $2^{n-1}$ states are contained in the set added to $S$, and thus fewer states will be contained in the sets added to $S$ in the second stage of the algorithm, and so on. At this step either $F=S_1$ or $Q-F=S_0$ can be added to $S$, as they have the same number of states. Whichever is added to the queue $S$ will, in the worst-case scenario, split the partition $P$ into the four possible sets $S_{00},S_{01},S_{10},S_{11}$, each with $2^{n-2}$ states. Indeed, if the sets $F$ and $Q-F$ were split into sets of sizes other than $2^{n-2}$, then, according to Lemma \ref{number}, we would not reach the worst possible number of states in the queue $S$; moreover, splitting only $F$ or only $Q-F$ would add to $S$ only one set of $2^{n-2}$ states instead of two. All this means that half of the non-final states go to a final state ($|S_{01}|=2^{n-2}$) and the other half go to a non-final state ($S_{00}$); similarly, $2^{n-2}$ of the final states go to a final state ($S_{11}$) and the other half go to a non-final state. The partition after this step 1 of the algorithm is $P=\{S_{00},S_{01},S_{10},S_{11}\}$, and the splitting sets are one of $S_{00}, S_{01}$ and one of $S_{10}, S_{11}$. 
Let us assume that it is possible to choose the splitting sets added to the queue $S$ in such a way that no splitting of another set already in $S$ happens (choose in this case, for example, $S_{10}$ and $S_{00}$). We want to avoid splitting other sets in $S$ since, if that happens, smaller sets will be added to the queue $S$ by the split set in $S$ (see such a choice of splitters described in \cite{berstel_ciaa02}). We have arrived at step 2 of the algorithm: when these two sets from $S$ are processed, in the worst case each of them can add to the queue $S$ at most $2^{n-2}$ states, by each splitting two of the four current sets in the partition $P$. Of course, to reach this worst case we need them to split different sets; thus in total we obtain eight sets in the partition $P$, corresponding to all the possibilities: $P=\{S_{000},S_{001},S_{010},S_{011},S_{100},S_{101},S_{110},S_{111}\}$, each having $2^{n-3}$ states. Thus four of these sets will be added to the queue $S$. We can continue this reasoning up to the $i$-th step of the algorithm: we now have $2^{i-1}$ sets in the queue $S$, each having $2^{n-i}$ states, and the partition $P$ contains $2^i$ sets $S_w$, corresponding to all the words $w$ of length $i$. Each set in the splitting queue is of the form $S_{x_1x_2\dots x_i}$, and such a set can split at most the two sets $S_{0x_1x_2\dots x_{i-1}}$ and $S_{1x_1x_2\dots x_{i-1}}$ of the partition $P$ (the sets containing the predecessors of its states). In the worst case, none of the level-$i$ sets in the splitting queue splits a set already in the queue, and each splits two distinct sets of the partition $P$, making the partition at step $i+1$ the set $P=\{S_w\mid |w|=i+1\}$, with each such $S_w$ having exactly $2^{n-i-1}$ states. In this way the process continues until we arrive at the $n$-th step. 
If the process terminated before step $n$, we would of course not reach the worst possible number of states passing through $S$. Let us now examine the properties of an automaton that follows such a run of Hopcroft's algorithm. We started with $2^n$ states, of which $2^{n-1}$ are final and $2^{n-1}$ non-final; among the final states, $2^{n-2}$ precede another final state ($S_{11}$), and likewise $2^{n-2}$ non-final states precede other non-final states ($S_{00}$), etc. The strongest restrictions come from the sets $S_w$ of the final partition: with $|w|=n$, each has exactly one element, which means that every word of length $n$ over the binary alphabet can be found in this automaton by following the transitions between states and reading 1 for a final state and 0 for a non-final state. It is clear that the automaton needs to be circular, following the pattern of de Bruijn words. Such an automaton for $n=3$ was depicted in \cite{berstel_ciaa02}, as in the following Figure \ref{fig1}. \begin{figure} \caption{A cyclic automaton of size 8 for the de Bruijn word 11101000.} \label{fig1} \end{figure} It is now easy to see that a stack implementation of the list $S$ will not be able to reach this maximum, as smaller sets will be processed before larger ones, which leads to the splitting of sets already in the list $S$. Once this happens for a set with $2^i$ states, the number of states that will appear in $S$ decreases by at least $2^i$, because the split sets will not be able to add as many states as a FIFO implementation could. We conjecture that in such a setting the LIFO strategy could make the algorithm linear in the size of the input, if the aforementioned third level of nondeterminism is set to add the smaller of $B',\ B''$ to the stack and to replace $B$ by the larger one. 
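Such automata can be generated programmatically; the sketch below (our own addition, using a standard Lyndon-word construction for de Bruijn sequences rather than anything from the paper) builds the cyclic unary automaton of order $n$ and checks that, as argued above, each of the $2^n$ states realizes a distinct binary word of length $n$:

```python
def de_bruijn(n):
    """Binary de Bruijn sequence of order n, via the standard
    Lyndon-word concatenation construction."""
    a = [0] * (2 * n)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, 2):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

def cyclic_automaton(n):
    """Cyclic unary automaton of size 2^n whose finality pattern
    follows a de Bruijn word of order n (cf. Figure 1 for n = 3)."""
    seq = de_bruijn(n)
    m = len(seq)                  # m = 2^n states, successor (p+1) mod m
    finals = {p for p in range(m) if seq[p] == 1}
    profiles = [tuple(seq[(p + i) % m] for i in range(n)) for p in range(m)]
    # every binary word of length n occurs exactly once, so the final
    # partition is discrete: all 2^n states are pairwise inequivalent
    assert len(set(profiles)) == m
    return finals, profiles
```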
We have proved the following result: \begin{theorem} The absolute worst-case run-time of Hopcroft's minimization algorithm for unary languages is reached when the splitter list $S$ follows a FIFO strategy, and only for automata following de Bruijn words of order $n$. In that setting the algorithm passes exactly $n 2^{n-1}$ states through the queue $S$. \end{theorem} \section{Cover automata}\label{cover} In this section we discuss briefly (due to the page restrictions imposed on the size of the paper) an extension of Hopcroft's algorithm to cover automata. K\"orner reported at CIAA'02 a modification of Hopcroft's algorithm such that the resulting sets in the partition $P$ give the similarities between states with respect to the input finite language $L$. To achieve this, the algorithm is modified as follows: the level of each state is computed at the start of the algorithm, and each element added to the list $S$ has three components: the set of states, the alphabet letter, and the current length considered. We start with $(F,a,0)$, for example. The splitting of a set $B$ by $(C,a,l_1)$ is defined as before, with the extra condition that during the splitting we ignore the states whose level plus $l_1$ is at least $l$ ($l$ being the length of the longest word in the finite language $L$). Formally, we can define the sets $X=\{p\mid \delta(p,a)\in C, \ level(p)+l_1< l\}$ and $Y=\{p\mid \delta(p,a)\not\in C, \ level(p)+l_1< l\}$. Then a set $B$ will be split only if $B\cap X\not=\emptyset$ and $B\cap Y\not=\emptyset$. The actual splitting of $B$ ignores the states whose level is greater than or equal to $l-l_1$; this also adds a degree of nondeterminism to the algorithm when such states appear. The algorithm then proceeds as before, adding the smaller of the newly split sets to the list $S$ together with the value $l_1+1$. 
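The two extra ingredients of this modification can be made concrete as follows (a sketch under our own naming; the dictionary-based graph representation is an assumption): the levels come from a breadth-first search from the start state, and the sets $X$ and $Y$ restrict the usual splitting by the remaining word length:

```python
from collections import deque

def levels(n, sigma, delta, start=0):
    """level(q) = min{ |w| : delta(start, w) = q }, computed by BFS."""
    lvl = {start: 0}
    queue = deque([start])
    while queue:
        p = queue.popleft()
        for a in sigma:
            q = delta[(p, a)]
            if q not in lvl:
                lvl[q] = lvl[p] + 1
                queue.append(q)
    return lvl

def splitting_sets(n, delta, lvl, C, a, l1, l):
    """X and Y for the splitter (C, a, l1), ignoring states p with
    level(p) + l1 >= l, exactly as in the formal definition above."""
    X = {p for p in range(n) if delta[(p, a)] in C and lvl[p] + l1 < l}
    Y = {p for p in range(n) if delta[(p, a)] not in C and lvl[p] + l1 < l}
    return X, Y
```

A set $B$ of the current partition is then split by $(C,a,l_1)$ only if it intersects both $X$ and $Y$.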
Let us now consider the same problem as in \cite{berstel_ciaa02}, but this time for DFCA minimization through the algorithm described in \cite{neamtu}. We consider the same example as before, the automata based on de Bruijn words, as the input to the algorithm (we note that the modified algorithm can start directly with a DFCA for a specific language, so we can use even cyclic automata as input). We need to specify the actual length of the finite language considered and also the starting state of the de Bruijn automaton (since the algorithm needs to compute the levels of the states). We can choose the length of the longest word in $L$ as $l=2^n$ and the start state as $S_{111...1}$. For example, the automaton in Figure \ref{fig1} would be a cover automaton for the language $L=\{0,1,2,4,8\}$ with $l=8$ and the start state $q_0=1$. Following the same reasoning as in \cite{berstel_ciaa02}, but for the new algorithm with the modifications above, we can show that also in the case of DFCA a queue implementation (as specifically given in \cite{neamtu}) seems a worse choice than a LIFO strategy for $S$. We note that this discussion is not a straightforward extension of the work reported by Berstel in \cite{berstel_ciaa02}, as the new dimension added to the sets in $S$, the length, and also the levels of the states need to be discussed in detail. We will give the details of the construction and a step-by-step discussion of this fact in the journal version of the paper. \section{Final Remarks} \label{fin} We showed that, at least in the case of unary languages, a stack implementation is more desirable than a queue for keeping track of the splitting sets in Hopcroft's algorithm. This is the first instance in which the stack has been shown to outperform the queue. 
It remains open whether there are examples of languages (over an alphabet containing at least two letters) for which a LIFO approach would perform worse than, or as badly as, a FIFO approach. Our conjecture is that the LIFO implementation will always outperform a FIFO implementation, which is also suggested by the experiments reported in \cite{CIAA06}. As future work, it is worth mentioning our conjecture that there is a strategy for processing a LIFO list $S$ such that the minimization of all unary languages is realized in linear time by the algorithm. We also plan to extend the current results to the case of cover automata, although the discussion in that case proves to be more complicated due to the levels of the states and the fourth kind of nondeterminism that these introduce. \end{document}
\begin{document} \title{Transit Node Routing Reconsidered\thanks{This work was partially supported by DFG Grant 933/2.}} \begin{abstract} Transit Node Routing (TNR) is a fast and exact distance oracle for road networks. We show several new results for TNR. First, we give a surprisingly simple implementation fully based on Contraction Hierarchies\xspace that speeds up preprocessing by an order of magnitude, approaching the time for just computing a Contraction Hierarchy (which alone has two orders of magnitude larger query time). We also develop a very effective, purely graph theoretical locality filter without any compromise in query times. Finally, we show that a specialization to the online many-to-one (or one-to-many) shortest path problem further speeds up query time by an order of magnitude. This variant even has better query time than the fastest known previous methods, which need much more space. \end{abstract} \section{Introduction and Related Work}\label{sec:tnr-related} Route planning in road networks has seen a lot of results from the algorithm engineering community in recent years. With Dijkstra's seminal algorithm as the baseline, a number of techniques preprocess the static input graph to achieve drastic speedups. \textit{Contraction Hierarchies (CH)} \cite{gssd-chfsh-08,gssv-erlrn-12} is a speedup technique with a convenient trade-off between preprocessing effort and query efficiency. Road networks with millions of nodes and edges can be preprocessed in mere minutes, while queries run in about a hundred microseconds. Transit Node Routing (TNR) \cite{bfss-frrnt-07} is one of the fastest speed-up techniques for shortest path distance queries in road networks. By preprocessing the input road network even further, it yields \textit{almost} constant-time queries, in the sense that nearly all queries can be answered by a small number of table lookups. 
It follows an intuition: long-distance connections almost always enter an arterial network connecting a set of important nodes -- the \textit{transit nodes}. The set of these entrances for a particular node is small on average. Once the \emph{transit nodes} are identified, a mapping from each node to its access nodes and the pair-wise distances between all transit nodes are stored. Preprocessing needs to compute a table of distances between the transit nodes, the distances to the access nodes, and information for a so-called \emph{locality filter}. The filter indicates whether the shortest path might not cross any transit node, requiring an additional local path search. A common drawback of many TNR variants is that preprocessing takes significantly longer than for the underlying speed-up technique. Another weakness is that the locality filter requires geometric information on the positions of the nodes \cite{bfm-trans-06,bfmss-itcsp-07,g-ch-08}. The presence of a geometric component in an otherwise purely graph theoretical method is regarded as awkward. There are several examples of geometric ingredients in routing techniques being superseded by more elegant and effective graph theoretical ones \cite{s-rprn-08,dssw-erpa-09}, with the locality filter of TNR being the only \textit{survivor} that is still competitive. Geisberger \cite{g-ch-08} uses CH to define transit node sets and for local searches, but still uses a geometric locality filter and relies on Highway Hierarchies \cite{ss-ehh-12} for preprocessing. In lecture slides \cite{Bast11}, Bast describes a simple variant of CH-based preprocessing that explores a larger search space than ours and also computes a super-set of the access nodes because it omits post-search-stalling. No experiments are reported, and the geometric locality filter is not touched. 
In Section~\ref{sec:tnr-our-variant} we remove all these qualifications and present a simple, fully CH-based variant of TNR which yields surprisingly good preprocessing times and allows for a very effective, fully graph-theoretical locality filter. A related technique is \textit{Hub Labeling (HL)} by Abraham et al.\xspace \cite{adgw-ahbla-11}, which stores sorted CH search spaces and intersects them to obtain the distance. Using sophisticated tuning measures this can be made significantly faster than TNR since it incurs fewer cache faults. However, HL needs much more space than TNR. In Section~\ref{s:manyToOne} we further accelerate TNR queries for the special case that there are many queries with a fixed target (or source). This method is even faster than HL without incurring its space overhead. \section{Preliminaries}\label{sec:tnr-preliminaries} We model the road network as a directed graph $G=(V,E)$, with $\vert E \vert=m$ edges and $\vert V\vert=n$ nodes. Each node corresponds to a location, e.g. a junction, and edges represent the connections between them. Each edge $e\in E$ has an associated cost $c(e)$, where $c:E\rightarrow \mathbb{R}^+$; it is called the \textit{weight}. A path $P=\langle s,v_1, v_2, \dotsc,t\rangle$ in $G$ is a sequence of nodes such that there exists an edge between each node and the next one in $P$. The length $c(P)$ of a path is the sum of its edge weights. A path with minimum cost between $s,t\in V$ is called a \textit{shortest} path, denoted by $d(s,t)$, and its cost is $\mu(s,t)$. Note that a shortest path need not be unique. A path $P=\langle v_0, v_1, \dotsc, v_p\rangle$ is called \textit{covered} by a node $v\in V$ if and only if $v \in P$. \subsection{Contraction Hierarchies}\label{sec:tnr-ch} Contraction Hierarchies\xspace heuristically order the nodes by some measure of importance and \emph{contract} them one by one in this order. 
Contracting means that a node is (temporarily) removed and as few \emph{shortcut} edges as possible are inserted to preserve shortest path distances. The CH search graph is the union of the set of original edges and the set of shortcuts, with edges only leading to more important nodes. This graph is a directed acyclic graph (DAG). An important structural property of CHs is that for any two nodes $s$ and $t$, if there is an $s$--$t$-path at all, then there is also a shortest up-down path $s$--$m$--$t$ where $s$--$m$ uses only upward edges and $m$--$t$ uses only downward edges in the CH. The \emph{meeting} node $m$ is the highest node on this path in the CH. The only crucial difference between a bidirectional Dijkstra and a CH query is the stopping criterion of the bidirected search: it continues adding nodes into the priority queue until the tentative distances of added nodes exceed any upper bound that may exist for the shortest path. The shortest path goes over a \emph{middle node} that is settled in both searches and for which the CH guarantees correct labelling in both directions. Although the search spaces explored in CH queries are rather small, there is a simple technique called \emph{stall-on-demand} \cite{s-rprn-08} that prunes them further. We use a simplified version of that technique, which leads to queries as fast as those reported in \cite{v-femno-10}. For every node $v$ that is the end point of a relaxed edge $(u,v)$, it is checked whether there exists a reverse edge $(w,v)$ such that the tentative distance of $w$ plus the weight of $(w,v)$ is less than the tentative distance of $u$ plus the weight of $(u,v)$. If such a node $w$ exists, the edge $(u,v)$ cannot be part of a shortest path, and thus $v$ is not added into the queue. This check is done by scanning the edges incident to $v$. Computing a table of all pair-wise shortest path distances for a set of nodes can be done by running a quadratic number of queries. 
While this is already significantly faster with CH than with a naive implementation of Dijkstra's algorithm, tables can be computed much more efficiently with the two-phase algorithm of Knopp et al.\xspace \cite{ksssw-cmmsp-07}. Computing large distance tables is a matter of seconds, since only $O(\vert S\vert + \vert T\vert)$ half searches have to be conducted. The quadratic overhead to initialize and update table entries is close to none for $\vert S\cup T\vert = \mathcal{O}(\sqrt{n})$. We refer the interested reader to \cite{ksssw-cmmsp-07,gssv-erlrn-12}. \section{Transit Node Routing} \label{sec:tnr-tnr} TNR in itself is not a complete algorithm but a framework. A concrete instantiation has to resolve the following degrees of freedom: it has to identify a set of transit nodes; it has to find access nodes for all nodes; and it has to deal with the fact that some queries between nearby nodes cannot be answered via the transit node set. In the remainder of this section, we define and introduce the minimal ingredients of the generic TNR framework, conceive a concrete instantiation, and then discuss an efficient implementation. \begin{definition} \label{tnr:dfn-tnr} Formally, the generic TNR framework consists of \begin{enumerate} \item A set $\mathcal{T} \subseteq V $ of transit nodes. \item A \textit{distance table} $D_\mathcal{T}: \mathcal{T} \times \mathcal{T} \to \mathbb{R} _0^+$ of shortest path distances between the transit nodes. \item A forward (backward) \textit{access node mapping} $A^\uparrow : V \rightarrow 2^\mathcal{T}$ ($A^\downarrow: V \rightarrow 2^\mathcal{T}$). For any shortest $s$--$t$-path $P$ containing transit nodes, $A^\uparrow(s)$ $\left(A^\downarrow(t)\right)$ must contain the first (last) transit node on $P$. \item A \textit{locality filter} $\mathcal{L}:V\times V\rightarrow\{\text{true}, \text{false}\}$. $\mathcal{L}(s,t)$ must be true when no shortest path between $s$ and $t$ is covered by a transit node. 
False positives are allowed, i.e., $\mathcal{L}(s,t)$ may sometimes be true even when a shortest path contains a transit node. \end{enumerate} \end{definition} Note that we use a simplified version of the generic TNR framework. A more detailed description is in Schultes' Ph.D. dissertation \cite{s-rprn-08}. We outline a generalization to multiple layers of transit nodes in Section \ref{sec:tnr-query-time}. During preprocessing, $\mathcal{T}$, $D_\mathcal{T}$, $A^\uparrow$, $A^\downarrow$, and some information sufficient to evaluate $\mathcal{L}$ are precomputed. An $s$--$t$-query first checks the locality filter. If $\mathcal{L}$ is true, then some fallback algorithm is used to handle the local query. Otherwise, \begin{equation}\label{eq:distance} \mu(s,t)=\mu_{min}(s,t) := \mspace{-9mu}\min_{\substack{a_s\in A^\uparrow(s)\\a_t\in A^\downarrow(t)}} \{d_{A^\uparrow}(s,a_s)+D_\mathcal{T}(a_s,a_t)+d_{A^\downarrow}(a_t,t)\}. \end{equation} \section{CH based TNR}\label{sec:tnr-our-variant} Our TNR variant (CH-TNR) is based on CH and does not require any geometric information. We start by selecting a set of transit nodes. Local queries are implicitly defined, and we find a locality filter to classify them. For simplicity, we assume that the graph is strongly connected; in Section~\ref{sec:tnr-query-time} we discuss what needs to be done to handle the general case. \subsubsection{Selection of Transit Nodes.} CHs order the nodes in such a way that nodes occurring on many shortest paths are moved to the upper part of the hierarchy. Hence, CH is a natural choice to identify a small node set which covers many shortest paths in the road network. We choose a number of transit nodes $\vert\mathcal{T}\vert=k$ and select the highest $k$ nodes from the CH data structure. This choice of $\mathcal{T}$ also allows us to exploit valuable structural properties of CHs. 
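Equation~(\ref{eq:distance}) and the locality check translate into only a few lines of code; in the following sketch (our own illustration, with all containers as assumed inputs) the access-node mappings carry the distances $d_{A^\uparrow}$ and $d_{A^\downarrow}$ alongside the access nodes themselves:

```python
INF = float('inf')

def tnr_query(s, t, A_fwd, A_bwd, D, locality, fallback):
    """Generic TNR query.  A_fwd[s] / A_bwd[t] map each access node
    to its distance from s / to t; D[(a_s, a_t)] is the transit-node
    distance table; locality(s, t) is the filter L; fallback is the
    local search (e.g. a plain CH query) used when L is true."""
    if locality(s, t):
        return fallback(s, t)   # shortest path may avoid transit nodes
    best = INF
    for a_s, d_s in A_fwd[s].items():
        for a_t, d_t in A_bwd[t].items():
            best = min(best, d_s + D[(a_s, a_t)] + d_t)
    return best
```

Since the access-node sets are small on average, the double loop amounts to a handful of table lookups per query.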
A distance table of pair-wise distances is built on this set with a CH-based implementation of the many-to-many algorithm of Knopp et al.\xspace \cite{ksssw-cmmsp-07}. \subsubsection{Finding Access Nodes.} We only explain how to find forward access nodes from a node $s$; the computation of backward access nodes works analogously. We will show that the following simple and fast procedure works: run a forward CH query from $s$; do not relax edges leaving transit nodes; when the search runs out of nodes to settle, report the settled transit nodes. \begin{lemma}\label{lem:findAccess} The transit nodes settled by the above procedure form a superset of the access nodes of $s$, together with their shortest path distances. \begin{proof} Consider a shortest $s$--$t$-path $P:=\langle s,\ldots, t\rangle$ that is covered by a node $u\in\mathcal{T}$. Furthermore, assume that $u$ is the highest transit node on $P$. A fundamental property of CHs is that we can assume $P$ to consist of upward edges leading up to $u$ followed by downward edges to $t$. Moreover, the forward search of a CH query finds the shortest path to $u$. Thus, a CH query also finds a shortest path to the first transit node $v$ on $P$. It remains to show that the pruned forward search of CH-TNR preprocessing does not prune the search before settling $v$. This is the case since pruning only happens when settling transit nodes, and we have defined $v$ to be the first transit node on $P$. \end{proof} \end{lemma} The resulting superset of access nodes is then reduced using \emph{post-search-stalling} \cite{s-rprn-08}: for all nodes $t_1,t_2 \in A^\uparrow(v)$, if $d_{A^\uparrow}(v,t_1)+D_\mathcal{T}(t_1,t_2)\leq d_{A^\uparrow}(v,t_2)$, discard access node $t_2$. \begin{lemma}\label{lem:minimality} Post-search-stalling yields a set of access nodes that is minimal in the sense that it only reports nodes that are the first transit node on some shortest path starting at $s$. 
\begin{proof} Consider a transit node $t$ that is found by our search but is not an access node for $s$, i.e., there is an access node $u$ on every shortest path from $s$ to $t$. By Lemma~\ref{lem:findAccess}, our pruned search found the shortest path to $u$ but did not relax edges out of $u$. Hence, the only way $t$ can be reported is with a distance larger than the shortest path length, so $t$ will be removed by post-search-stalling. \end{proof} \end{lemma} \subsubsection{Search Space Based Locality Filter.} Consider a shortest path query from $s$ to $t$. Let $\bar{S_\uparrow}(s)$ denote the sub-transit-node search space considered by a CH query from $s$, i.e., those nodes $v$ settled by the forward search from $s$ which are not transit nodes. Analogously, let $\bar{S_\downarrow}(t)$ denote the sub-transit-node CH search space backwards from $t$. If these two node sets are disjoint, all shortest up-down-paths from $s$ to $t$ must meet in the transit node set, and hence we can safely set $\mathcal{L}(s,t)=\text{false}$. Conversely, if the intersection is non-empty, there might be a meeting node below the transit nodes corresponding to a shortest path not covered by a transit node. Thus a very simple locality filter can be implemented by storing the sub-transit-node search spaces, which are computed for finding the access nodes anyway. \begin{lemma}\label{lem:searchspace} The locality filter described above fulfils Definition~\ref{tnr:dfn-tnr}. \begin{proof} Assume for $s, t \in V\backslash\mathcal{T}$ that $\mu(s, t) \neq \mu_{\min}(s, t)$, and thus $\mu(s, t) < \mu_{\min}(s, t)$. Then the meeting node $m$ of a CH query is not a transit node, and it has to be in the forward search space of $s$, $\bar{S_\uparrow}(s)$, \emph{and} in the backward search space of $t$, $\bar{S_\downarrow}(t)$. Hence, $\bar{S_\uparrow}(s) \cap \bar{S_\downarrow}(t) \neq \emptyset$. 
\end{proof} \end{lemma} Preliminary experiments indicate that the average size of these search spaces is much smaller than the full search spaces, e.g. 32 instead of 112 in the main test instance from Section~\ref{sec:tnr-experiments}. For the locality filter, only node IDs need to be stored. Compared to hub labelling, which has to store full search spaces and also distances to nodes, this is already a big space saving. If we are careful to number the nodes in such a way that nearby nodes usually have nearby numbers, the node numbers appearing in a search space will often come from a small range. We precompute and store the minimum and maximum node ID of each search space in order to facilitate the following \emph{interval check}: When $[\min(\bar{S_\uparrow}(s)) ,\max(\bar{S_\uparrow}(s))]\cap [\min(\bar{S_\downarrow}(t)),\max(\bar{S_\downarrow}(t))]=\emptyset$, we immediately know that the search spaces are disjoint. As the sole locality filter this would allow too many false positives, but it works sufficiently often to drastically reduce the average overhead for the locality filter. Below we discuss a much more accurate lossy compression of the search spaces. \subsubsection{Graph Voronoi Label Compression.} Note that the locality filter remains correct when we add nodes to the search spaces. We do this by partitioning the graph into regions and defining the extended search space as the union of all regions that contain a search space node. This helps compression since we can represent a region using a single ID, e.g., the number of a node representing the region. This also speeds up the locality filter since, instead of intersecting the search spaces explicitly, it now suffices to intersect the (hopefully smaller) sets of block IDs. Hence, we want regions that are large enough to lead to significant compression, yet small and compact enough to keep the false positive rate small. Our solution is a purely graph-theoretic adaptation of a geometric concept.
Our blocks are \emph{graph Voronoi regions} of the transit nodes. Formally, $$\Vor{v}:=\setGilt{u\in V}{\forall w\in\mathcal{T}\setminus\set{v}:\mu(u,v)\leq \mu(u,w)}$$ for $v\in\mathcal{T}$ with ties broken arbitrarily. The intuition behind this is that a positive result of the locality filter means that the search spaces of start and destination come at least close to each other. Computing the Voronoi regions is easy, using a single Dijkstra run with multiple sources on the reversed input graph, as shown by Mehlhorn \cite{m-as-88}. We call this filter the \emph{graph Voronoi filter}. \section{Experimental Evaluation} \label{sec:tnr-experiments} We implement our algorithms and data structures in C++ and test the performance on a real-world data set. The source code is compiled with GCC 4.6.1 setting optimization flags \texttt{-O3} and \texttt{-mtune=native}. Our test machine is an Intel Core i7-920, clocked at 2.67 GHz with four cores and 12 GiB of RAM. It runs Linux kernel version 2.6.34. Our CH variant implements the shared-memory parallel preprocessing algorithm of Vetter \cite{v-ptdch-09} with a hop limit of $5$ and $1\,000$ settled nodes for witness searches and $7$ hops or $2\,000$ settled nodes during the actual contraction of nodes. The priority function is $$2\cdot\text{edgeQuotient} + 4\cdot\text{originalEdgeQuotient} + \text{nodeDepth}.$$ We experiment on the road network of Western Europe provided for the 9th DIMACS challenge on shortest paths \cite{dgj-spndi-09} by PTV AG. Results for further instances can be found in Section~\ref{sec:tnr-otherInstances}. The graph consists of 18\,015\,449 nodes and 22\,413\,128 edges with travel time metric weights. The resulting hierarchy has 39\,256\,327 edges. The following experiments are conducted with a transit node set of size 10\,000, if not mentioned otherwise, because key results from previous work were based on the same number of transit nodes, e.g. \cite{bfmss-itcsp-07}.
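For concreteness, the assignment of nodes to graph Voronoi regions described above can be sketched with a single multi-source Dijkstra run on the reversed graph. This is a minimal sketch with assumed types, not our actual implementation:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Minimal sketch (assumed types): assign every node to the graph Voronoi
// region of its nearest transit node by a single Dijkstra run that starts
// from all transit nodes simultaneously on the reversed input graph.
using NodeId = std::uint32_t;
using Dist = std::uint64_t;
using RevAdj = std::vector<std::vector<std::pair<NodeId, Dist>>>;  // reversed edges

std::vector<NodeId> voronoiRegions(const RevAdj& rev,
                                   const std::vector<NodeId>& transitNodes) {
  const NodeId UNASSIGNED = ~NodeId(0);
  const Dist INF = std::numeric_limits<Dist>::max();
  std::vector<NodeId> region(rev.size(), UNASSIGNED);
  std::vector<Dist> dist(rev.size(), INF);
  using Entry = std::pair<Dist, NodeId>;
  std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;
  for (NodeId t : transitNodes) {            // all transit nodes are sources
    dist[t] = 0; region[t] = t; pq.push({0, t});
  }
  while (!pq.empty()) {
    auto [d, u] = pq.top(); pq.pop();
    if (d > dist[u]) continue;               // stale queue entry
    for (auto [v, w] : rev[u])               // reversed edge: v -> u in the input
      if (d + w < dist[v]) {
        dist[v] = d + w; region[v] = region[u]; pq.push({d + w, v});
      }
  }
  return region;  // region[v] is the nearest transit node of v (ties by queue order)
}
```

Here \texttt{region[v]} identifies the Voronoi region of $v$; ties between transit nodes are broken arbitrarily by the queue order, as permitted by the definition above.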
We use two arrays to store all access nodes: array $A$ for the access nodes and distances, and an index array $I_A$. For each node $v$, $A$ contains two sets of entries, one for $A^\uparrow(v)$ and one for $A^\downarrow(v)$. For each access node $a \in A^\uparrow(v)$ (or $\in A^\downarrow(v)$), two values are stored, the ID of $a$ and the distance $d_{A^\uparrow}(v, a)$ (or $d_{A^\downarrow}(a, v)$). The access nodes are ordered by ID, which leads to better cache efficiency during queries. The index array $I_A$ stores for each node the starting indices of its two access node sets in $A$. At the end of the index array, a dummy value points to the index after the last value in $A$. The search space based locality filters (with or without compression) are stored the same way, using arrays $S$ and $I_S$. The following design choices are used throughout the experiments. Forward and backward search spaces are merged into one set for the locality filter. Forward and backward access node sets are also merged into one set. Note, however, that the search space set and the access node set remain distinct from each other in our implementation. As the ID of a node does not contain any particular information, node IDs can be changed to gain algorithmic advantages. This \textit{renumbering} is done by applying a bijective permutation to the IDs, ensuring that each ID stays unique. We alter the labels of the nodes in $V$ so that $\mathcal{T} = \{0, \dotsc, k-1\}$. By proceeding this way, we can easily determine if a node $v$ is a transit node or not (during further preprocessing and during the query): $v \in \mathcal{T}$ if and only if $v < k$. \paragraph{Node Renaming.} We examine renumbering strategies separately for the transit node set $\mathcal{T}$ and for the remaining part of the CH search graph $V\backslash\mathcal{T}$. We follow two aims for renumbering here. One is to make table lookups faster for non-local queries, while the other aim is to make local queries as fast as possible.
Therefore, we treat both parts of our data structure with different strategies. Moreover, the numbering makes a difference for the performance of table lookups: if the access nodes of a node come from a more compact interval, the cache efficiency of the table lookups is increased. Consider a number of access nodes $\vert A^\uparrow(s)\vert=a$ and $\vert A^\downarrow(t)\vert=b$ for source and target nodes respectively. W.l.o.g.\ we assume $a<b$. The obvious worst case is $a \cdot b$ cache misses, while the best case is $a$ misses only; this happens when all $b$ entries are in one cache line. Table~\ref{tab:RenamingTransit} shows the impact of renumbering the transit node set. The strategy is \emph{input level}-based. We iterate through each level of the hierarchy top-down and order the nodes in each level with respect to their order in the input data. In other words, the partial order in each level respects the order of the input data. \begin{table}[b] \caption{Impact of renumbering on $\mathcal{T}$. The remainder of the graph is in increasing DFS ordering.} \label{tab:RenamingTransit} \centering \begin{tabular}{lrrrr} \toprule Transit & LF & Interval & TL & Total \\ Nodes & [\si{n\second}] & Test [\%] & [\si{\micro\second}] & [\si{\micro\second}] \\ \midrule 7000 & 122 & 84.9 & 1.03 & 1.50 \\ 14000 & 111 & 90.1 & 1.05 & 1.32 \\ 21000 & 105 & 92.8 & 1.04 & 1.25 \\ 28000 & 104 & 94.5 & 1.04 & 1.22 \\ \bottomrule \end{tabular} \end{table} \begin{table}[tb] \caption{Different renumbering strategies for $V \setminus \mathcal{T} $.
$\mathcal{T} $ is in input order.} \label{tab:RenamingNonTransit} \centering \begin{tabular}{lrrr} \toprule & & \multicolumn{2}{c}{Query} \\ \cmidrule(lr){2-2} \cmidrule(lr){3-4} Preprocessing & dur & LS & Total\\ Strategy & [s] & [$\mu$s] & [$\mu$s]\\ \midrule (greedy) DFS Increasing & 16.9 & 27.4 & 1.38 \\ (greedy) DFS Decreasing & 16.9 & 32.2 & 1.41 \\ Input Level Ordering & 8.9 & 38.4 & 1.45 \\ \bottomrule \end{tabular} \end{table} The lower, non-transit node portion of the CH search graph is also renumbered. It is used for local queries only and thus has no effect on non-local queries. It also does not influence the preprocessing in our experiments; we attribute this to the fact that search spaces are similar when nodes are close to each other and that the input ordering already exhibits \emph{good} locality. Table~\ref{tab:RenamingNonTransit} gives results on different renumbering strategies that we detail in the following. The \emph{(greedy) DFS} orderings renumber the graph according to a (modified) depth-first graph traversal (DFS), while the input level ordering preserves the partial ordering of the levels as described above for the transit node set. For every node an upward DFS is conducted that relaxes edges in the CH search graph that lead to more important nodes. More specifically, nodes in $V \setminus \mathcal{T}$ which are not yet renumbered are explored. The actual renumbering happens during the backtracking step, i.e. we renumber a node if and only if all of its successors are already renumbered. The actual IDs can be assigned in increasing ($0,1,\ldots,n-k-1$) or decreasing order ($n-k-1,\ldots,1,0$). Column \emph{dur} gives the duration of the renumbering, while \emph{LS} gives the average running time of a local search. Column \emph{Total} gives the average running time over all queries. We see from the results that the DFS strategy with increasing IDs works best with respect to the efficiency of local queries.
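A minimal sketch of this greedy DFS renumbering with increasing IDs. The adjacency list \texttt{upward} is an assumption of the sketch: for each node it is taken to hold the upward CH edges into $V \setminus \mathcal{T}$; the new ID is assigned during backtracking, i.e. only after all successors have been numbered:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <vector>

// Sketch of the "(greedy) DFS Increasing" renumbering (assumed input layout,
// not our actual implementation): upward[v] lists the upward CH edges of v
// that stay in V \ T. IDs are assigned in post-order, so a node is numbered
// if and only if all of its upward successors are already numbered.
std::vector<std::uint32_t> dfsIncreasingRenumbering(
    const std::vector<std::vector<std::uint32_t>>& upward) {
  const std::uint32_t n = static_cast<std::uint32_t>(upward.size());
  const std::uint32_t UNSET = ~std::uint32_t(0);
  std::vector<std::uint32_t> newId(n, UNSET);
  std::uint32_t next = 0;
  std::function<void(std::uint32_t)> dfs = [&](std::uint32_t v) {
    if (newId[v] != UNSET) return;       // already renumbered (or in progress)
    newId[v] = UNSET - 1;                // mark: on the DFS stack
    for (std::uint32_t w : upward[v]) dfs(w);
    newId[v] = next++;                   // backtracking: successors are done
  };
  for (std::uint32_t v = 0; v < n; ++v) dfs(v);
  return newId;
}
```

For the decreasing variant, the IDs $n-k-1,\ldots,1,0$ would be handed out in the same post-order instead.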
During the search for the access nodes, stall-on-demand is used to decrease the search space sizes. We tested different variants, varying in the number of hops the stalling looks ahead to find a witness for a wrong distance. A higher number of hops on the one hand increases preprocessing time, but on the other hand decreases the search spaces, speeding up the locality filter construction. Table~\ref{tab:StallOnDemand} shows that while an increase of the hop depth from 1 to 2 manages to decrease space overhead and query times, a further increase from 2 to 3 is inadvisable: The preprocessing takes \SI{43}{\minute} longer, but yields only slightly better results. Local searches take less time with higher hop depths. This is an interesting observation, because the stalling during preprocessing should not affect local searches. An explanation is that the local searches omitted due to a more exact locality filter are those covering larger distances. \begin{table}[b] \caption{Different hop depths for the stall-on-demand during preprocessing.} \label{tab:StallOnDemand} \centering \begin{tabular}{lrrrrrrr} \toprule & \multicolumn{4}{c}{Preprocessing} & \multicolumn{3}{c}{Query} \\ \cmidrule(lr){2-5} \cmidrule(lr){6-8} & Expl. & & Voronoi & Total & LS & LS & Total\\ Hops & [s] & $|\bar{S}|$ & $|\bar{S}|$ & [byte / node] & [\%] & [$\mu$s] & [$\mu$s]\\ \midrule 0 & 301 & 93.0 & 29.3 & 296 & 2.36 & 30.4 & 2.15 \\ 1 & \textbf{149} & 31.8 & 8.0 & 211 & 0.58 & 27.4 & 1.38 \\ 2 & 446 & 28.1 & 6.3 & 204 & 0.41 & 26.0 & 1.35 \\ 3 & 3237 & \textbf{27.9} & \textbf{6.1} & \textbf{203} & \textbf{0.40} & \textbf{25.7} & \textbf{1.32} \\ \bottomrule \end{tabular} \end{table} $\mathcal{T}$ is renumbered with the so-called \textit{input-level strategy}, while $V\backslash\mathcal{T}$ is ordered by the (greedy) DFS Increasing strategy. The interval check accelerates the average running time of the locality filter.
Prior to the merging step, the interval check determines in constant time whether the two intervals overlap at all. \paragraph{Scalability.} We test the scalability of parallel preprocessing for a varying number of cores in Table~\ref{tab:MultiThreadPreprocessing}. The raw results of the parallelizable parts (preprocessing, distance table generation and exploration) have quite a high variance of about 10\%. Hence, we measured the preprocessing five times and averaged over all runs. The values reported in column \emph{Total} are the sum of the respective averages. Column \emph{Cores} gives the number of cores used. Columns \emph{CH}, \emph{Dist. Table}, \emph{Exploration} measure time, speedup and efficiency of the respective parts. The bottom line reports on four cores with activated hyper-threading (HT). \begin{table}[t] \caption{Scalability Experiment with 10\,000 transit nodes.} \label{tab:MultiThreadPreprocessing} \centering \begin{tabular}{lcrrrcccrrrcrrrcrrr} \toprule Cores & & \multicolumn{3}{c}{CH} & & \multicolumn{3}{c}{Dist. Table} & & \multicolumn{3}{c}{Exploration} & & \multicolumn{3}{c}{Total} \\[0.5em]\cline{1-1}\cline{3-5}\cline{7-9}\cline{11-13}\cline{15-17} & & [s] & Spdp & Eff. & & [s] & Spdp & Eff. & & [s] & Spdp & Eff. & & [s] & Spdp & Eff. \\ \midrule 1 & & 513 & 1 & 1 & & 9.0 & 1 & 1 & & 500 & 1 & 1 & & 1046 & 1 & 1\\ 2 & & 281 & 1.83 & 0.91 & & 5.1 & 1.74 & 0.88 & & 287 & 1.74 & 0.87 & & 596 & 1.75 & 0.88 \\ 3 & & 203 & 2.53 & 0.84 & & 3.9 & 2.23 & 0.76 & & 202 & 2.48 & 0.83 & & 432 & 2.42 & 0.81 \\ 4 & & 160 & 3.20 & 0.80 & & 2.9 & 3.16 & 0.79 & & 145 & 3.43 & 0.86 & & 334 & 3.13 & 0.78 \\ \midrule 4 (HT) & & 137 & 3.75 & 0.47 & & 2.2 & 4.01 & 0.50 & & 101 & 4.93 & 0.62 & & 265 & 3.95 & 0.49 \\ \bottomrule \end{tabular} \end{table} We see that the total preprocessing time is only about a factor of two larger than plain CH preprocessing. Most additional work is due to the search space exploration from each node.
We see that the different parts of the algorithm scale well with an increasing number of cores. The total efficiency is slightly lower than the efficiency of the individual parts, as it includes about 23.6 seconds of non-parallelized work due to the Voronoi computation. The HT efficiency does not reflect the performance of real cores, but HT comes virtually for free with modern commodity processors. The rate of local queries is only 0.58~\%. A non-local query takes 1.22~$\mu$s on average, while a local query takes 28.6~$\mu$s. This results in an overall average query time of 1.38~$\mu$s, and the space overhead amounts to 147 bytes per node. We compare to previous approaches to distance oracles for our test instance. Some of these implementations were tested on an older AMD machine \cite{s-rprn-08} that was available for running the queries. Table~\ref{tab:tnr-compare-tnr-variant} shows column \textit{Reported} as given in the respective publications (denoted by \emph{From}), while column \textit{Compared} gives running times either measured on or normalized to the aforementioned AMD machine. Therefore, similar to the methodology in \cite{bdsssw-chgds-10}, a scaling factor of 1.915 is determined by measuring preprocessing and query times on both machines using a smaller graph (of Germany). Scaled numbers are indicated by a star symbol. Values for CH were measured with our implementation. The simplest TNR implementation is GRID-TNR, which splits the input graph into grid cells and computes a distance table between the cells' border nodes. Note that the numbers for GRID-TNR were computed on a graph of the USA, but the characteristics should be similar to our test instance. Preprocessing is prohibitively expensive while the query is about 20 times slower than ours. The low space consumption is due to the fact that it is trivial to construct a locality filter for grid cells. For HH-TNR \cite{s-rprn-08} and TNR+AF \cite{bdsssw-chgds-10}, preprocessing is single-threaded.
The corresponding scaling factor for preprocessing is 3.551, and the fastest HH-based TNR variant is still slower by about a factor of two for preprocessing and queries. Note that the HH-based methods all implement a highly tuned TNR variant with multiple levels that is much more complex than our method. While TNR+AF has faster queries by about 25\%, the (scaled) preprocessing is about an order of magnitude slower and the space overhead is twice as much. Also, TNR+AF requires a sophisticated implementation with a partitioning step and the computation of arc flags. \begin{table}[hb] \caption{Comparison Between Various Distance Oracles.} \label{tab:tnr-compare-tnr-variant} \centering \begin{tabular}{lcrrrrr} \toprule & & \multicolumn{2}{c}{Preprocessing} & \multicolumn{2}{c}{Query} \\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} & & Reported & Space & {Reported} & {Compared} \\ Method & From & [min] & [byte / node] & {[$\mu$s]} & {[$\mu$s]}\\ \midrule CH & - & 2.7 & 24 & 103 & 246 \\[5pt] Grid-TNR & \cite{bfmss-itcsp-07} & 1200 & 21 & 63 & 63 \\ HH-TNR-eco & \cite{s-rprn-08} & 25 & 120 & 11 & 11 \\ HH-TNR-gen & \cite{s-rprn-08} & 75 & 247 & 4.30 & 4.30 \\ TNR+AF & \cite{bdsssw-chgds-10} & 229 & 321 & 1.90 & 1.90 \\ HL local & \cite{adgw-ahbla-11} & 159 & 1221 & 0.572 & 1.10 & $\star$\\ HL global & \cite{adgw-ahbla-11} & 165 & 1269 & 0.276 & 0.53 & $\star$\\ HL-0 local & \cite{adgw-hhlsp-12} & 3 & 1341 & 0.7 & 1.34 & $\star$\\ HL-$\infty$ global & \cite{adgw-hhlsp-12} & 372 & 1055 & 0.254 & 0.49 & $\star$\\ \midrule CH-TNR & - & 5 & 147 & 1.38 & 3.27 \\ \bottomrule \end{tabular} \end{table} While the hub labeling based methods achieve superior query times, the reader should note the high space overhead incurred by these methods. Even the most space-efficient HL variant needs more than seven times as much space. HL-0 local reports faster preprocessing than our method, at nine times the space overhead.
It should be noted that these experiments were done on three times as many cores with a 20\% faster clock speed of 3.2 GHz and a 50\% larger L3 cache of 16 MiB. Single-core preprocessing for HL-0 local takes 17.9 minutes, while our approach is slightly faster with 17.4 minutes on a slower machine. We acknowledge, though, that even the HL variant with the fastest preprocessing has faster queries than ours by about a factor of 2--3. We attribute that to the higher number of cache misses of our method. The quality of our locality filter is compared to other TNR implementations in Table~\ref{tab:LocalityFilter}. These variants differ in the number of transit nodes and in the graph used to determine them. Nevertheless, the graphs are road networks that exhibit similar characteristics. The number of transit nodes for CH-TNR is chosen to resemble the data from the literature. \begin{table}[t] \caption{Comparison of Locality Filter Quality.} \label{tab:LocalityFilter} \centering \begin{tabular}{lccrSSSS} \toprule Method & {From} & & $|\mathcal{T}|$ & Local & {False} &\\ & & & & [\%] & [\%] \\ \midrule Grid-TNR & \cite{bfmss-itcsp-07} & & 7\,426 & 2.6 & \multicolumn{1}{c}{-} \\ Grid-TNR & \cite{bfmss-itcsp-07} & & 24\,899 & 0.8 & \multicolumn{1}{c}{-} \\ LB-TNR & \cite{ef-tnlbr-12} & & 27\,843 & \multicolumn{1}{c}{-} & \multicolumn{1}{c}{-} \\ HH-TNR-eco & \cite{s-rprn-08} & & 8\,964 & 0.54 & 81.2 \\ HH-TNR-gen & \cite{s-rprn-08} & & 11\,293 & 0.26 & 80.7 \\\hline CH-TNR &\multicolumn{1}{c}{-} & & 10\,000 & 0.58 & 73.6 \\ CH-TNR & \multicolumn{1}{c}{-} & & 24\,000 & 0.17 & 72.1 \\ CH-TNR & \multicolumn{1}{c}{-} & & 28\,000 & 0.14 & 72.1 \\ \bottomrule \end{tabular} \end{table} We see that the fraction of local queries of our variant is lower than or on par with the numbers from the literature. Also, the rate of false positives is much lower than in previous work.
Most notably, the recent LB-TNR method applies sophisticated optimization techniques, but does not produce a transit node set with superior locality, as the rate of local queries is virtually the same. \subsection{Impact of $\vert\mathcal{T}\vert$ on Query Efficiency} \begin{figure} \caption{Average query times for different numbers of transit nodes. Reported time for the local queries is averaged over the total number of queries.} \label{fig:plotRunningTime} \end{figure} Figure~\ref{fig:plotRunningTime} gives a row-stacked plot that details the contributions of each part of the query to the average total query time depending on the transit node set size. We see that most of the query time is spent in table lookups and that this portion stays relatively stable over the entire parameter space. We attribute that to two reasons. First, the number of access nodes is relatively stable; it drops from roughly 8.5 to just below 7. Hence, the number of table lookups is also relatively stable. Second, the table lookups are mostly dominated by cache misses, and it appears that our renumbering effort is successful in that it already minimizes the number of cache misses across the board. \begin{table}[hbt] \caption{Average query time, preprocessing time (on four cores), and space overhead using a varying number of transit nodes.} \label{tab:overallInfo} \centering \begin{tabular}{lccccccccc} \toprule & & Preproc.
& & \multicolumn{2}{c}{Non-Local} & & \multicolumn{2}{c}{Local} & Amortized \\\cline{5-6}\cline{8-9} $\vert\mathcal{T}\vert$ & & [\si{\minute}] & & [\%] & [\si{\micro\second}] & & [\%] & [\si{\micro\second}] & [\si{\micro\second}] \\ \midrule 7000 & & 6.1 & & 99.12 & 1.225 & & 0.88 & 32.661 & 1.501 \\ 14000 & & 5.5 & & 99.62 & 1.229 & & 0.38 & 25.187 & 1.320 \\ 21000 & & 5.2 & & 99.79 & 1.198 & & 0.21 & 21.977 & 1.243 \\ 28000 & & 5.1 & & 99.86 & 1.189 & & 0.14 & 21.728 & 1.218 \\ \bottomrule \end{tabular} \end{table} In the following, preprocessing running times are reported for 4 threads. As reported before, the size of the transit node set is a tuning parameter. We look into the impact of varying the size of this set in the following experiments. In particular, we explore the effect of the transit node set size on the fraction of local queries, space overhead and query time. Table~\ref{tab:overallInfo} reports on these experiments. Column $\vert\mathcal{T}\vert$ gives the size of the transit node set, while column \emph{Preproc.} reports on the duration of the preprocessing. Columns \emph{(Non-)Local} give the fraction of (non-)local queries and the respective query times. Column \emph{Amortized} reports amortized query times. The results of the experiment support the results from the previous section that the number of local queries decreases with an increasing transit node set. This is expected behavior. Likewise, the amortized query time decreases as the number of local queries drops. This is also reflected in the absolute numbers of Table~\ref{tab:LocalQueries}, in which 1\,000\,000 random queries are performed.
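The amortized figures in Table~\ref{tab:overallInfo} are the frequency-weighted average of the non-local and local query times; a minimal sketch (a hypothetical helper, not part of our implementation):

```cpp
#include <cassert>
#include <cmath>

// Amortized query time as the frequency-weighted average of the non-local
// and local query times; fractions are values in [0, 1], times in microseconds.
double amortizedQueryTime(double nonLocalFraction, double nonLocalMicros,
                          double localFraction, double localMicros) {
  return nonLocalFraction * nonLocalMicros + localFraction * localMicros;
}
```

For $\vert\mathcal{T}\vert = 7000$, for example, this yields $0.9912 \cdot 1.225 + 0.0088 \cdot 32.661 \approx 1.50$\,$\mu$s, matching the table.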
\begin{table}[b] \caption{The impact of $\vert\mathcal{T}\vert$ on the number of performed local queries.} \label{tab:LocalQueries} \centering \begin{tabular}{lrrrr} \toprule & \multicolumn{3}{c}{Local Searches} \\ \cmidrule{2-4} & & Time & Amortized \\ $\vert\mathcal{T}\vert$ & \#Performed & [\si{\micro\second}] & [\si{\micro\second}] \\ \midrule 7\,000 & 8\,798 & 31.4 & 0.277 \\ 14\,000 & 3\,820 & 24.0 & 0.092 \\ 21\,000 & 2\,128 & 20.8 & 0.044 \\ 28\,000 & 1\,441 & 20.5 & 0.029 \\ \bottomrule \end{tabular} \end{table} \begin{table}[h] \caption{Fraction of local queries according to the locality filter by Dijkstra Rank. Bold values show the approximate $50\%$ threshold.} \label{tab:filterByRank} \centering \begin{tabular}{lrrrrrrrrrrrrrrrrrrrr} \toprule $\vert\mathcal{T}\vert$ & $\leq 2^9$ & $2^{10}$ & $2^{11}$ & $2^{12}$ & $2^{13}$ & $2^{14}$ & $2^{15}$ & $2^{16}$ & $2^{17}$ & $2^{18}$ & $2^{19}$ & $2^{20}$ & $\geq 2^{21}$ \\ \midrule 7\,000 & 100 & 100 & 99 & 98 & 96 & 88 & 74 & \textbf{56} & 34 & 15 & 5 & 1 & 0 \\ 14\,000 & 100 & 99 & 98 & 95 & 86 & 68 & \textbf{47} & 28 & 12 & 4 & 1 & 0 & 0 \\ 21\,000 & 100 & 99 & 96 & 89 & 73 & \textbf{51} & 29 & 14 & 5 & 1 & 0 & 0 & 0 \\ 28\,000 & 100 & 97 & 93 & 81 & \textbf{61} & 38 & 19 & 8 & 2 & 1 & 0 & 0 & 0 \\ \bottomrule \end{tabular} \end{table} The decrease in local queries is not uniform across all Dijkstra ranks. There appears to be a threshold after which the fraction of detected local queries falls sharply. Table~\ref{tab:filterByRank} reports on the fraction of local queries that are performed depending on the Dijkstra rank. We observe that increasing the transit node set size effectively lowers the rank at which the locality filter detects roughly half of the queries as local. These values are given in bold. We see that this rank decreases by several orders of magnitude over the parameter space. A closer look at the query performance according to Dijkstra rank is given in Figure~\ref{fig:rankPlots_time}.
The rank of a node $v$ with respect to a node $s$ is $i$ if $v$ is the $i$-th node settled by a unidirectional Dijkstra query from $s$. For the sake of clear arrangement, we use the Dijkstra rank $k:=\lfloor\log_2(i)\rfloor$, i.e. the floored base-$2$ logarithm of $i$. In other words, it gives a notion of distance independent of the graph's underlying geometry. We see that the query time is dominated by the rather expensive fall-back algorithm for short-range queries in all the experiments. Also, we see that the query time falls sharply for medium- to long-range queries once the shortest paths get covered by the transit node set. Beyond this threshold the time approaches the bare minimum needed for running the locality filter and the table lookups, which is constant in practice. \begin{figure} \caption{Rank plot for varying sizes of $\vert\mathcal{T}\vert$.} \label{fig:rankPlots_time} \end{figure} \subsection{Impact of $\vert\mathcal{T}\vert$ on Space Overhead} \begin{figure} \caption{The average search space sizes (left) and access nodes (right) per graph node.} \label{fig:plotSS} \label{fig:plotAN} \label{fig:nodesPerNode} \end{figure} The effect of the transit node set size on the space requirements is examined next. Besides the underlying contraction hierarchy, the distance table, the access nodes and the locality filter contribute to the space consumption. The average numbers of search space nodes, Voronoi representatives and access nodes per node are plotted against varying sizes of $\mathcal{T}$. For both values, the average between the respective forward and backward sizes is given, since the values are virtually identical. We observe that these numbers fall as expected the larger the transit node set gets. The results are plotted in Figure~\ref{fig:nodesPerNode}. Obviously, the raw size of the CH is independent of $\vert\mathcal{T}\vert$ while the distance table grows quadratically.
Space for the access nodes slowly \emph{decreases} with $\vert\mathcal{T}\vert$, since the average number of access nodes decreases along with the smaller local search spaces. The same applies to the locality filter -- it needs less space although it gets more effective at the same time. \begin{figure} \caption{Memory consumption depending on $\vert\mathcal{T}\vert$.} \label{fig:tnr-plotMemoryUncompressed} \end{figure} Figure~\ref{fig:tnr-plotMemoryUncompressed} shows the relation between memory requirements and transit node set size. Note that the implementation in this experiment does not merge search spaces or access node sets, to give a clearer picture of the memory consumption of each part of our method. We see that the main driver here is the size of the distance table, which depends quadratically on the size of the transit node set. Although the average access node set shrinks with an increasing transit node set size, this is not enough to compensate for the distance table. We note that the space requirement of the search spaces is more or less constant over the entire parameter space. \subsection{Results for Other Instances}\label{sec:tnr-otherInstances} Further experiments are done with two additional instances. The first one is the graph from above with distance metric (euro-dist). The second test instance is an edge-expanded travel-time graph of Germany (ger-tc) extracted from OpenStreetMap\footnote{\url{http://osm.org}} at database timestamp \texttt{2012-12-10T19$\backslash$:23$\backslash$:02Z}, using the routines and car speed profile of Project OSRM\footnote{\url{http://project-osrm.org}}. Edge-expansion implies that the graph explicitly models turn restrictions from the data. Expanded graph nodes correspond to undirected, i.e. unexpanded, edges of the input data, while expanded edges model allowed turns. Note that U-turns are explicitly forbidden. The expanded graph of Germany is about twice as large as the unexpanded graph of Western Europe.
See Table~\ref{tab:tnr-graphs} for more details on the sizes of the instances. \begin{table}[bth] \caption{The instance sizes alongside the number of edges in the respective CH search graph.} \label{tab:tnr-graphs} \centering \begin{tabular}{lcrcrcr} \toprule instance & & nodes & & edges & & CH size \\ \midrule euro-time & & 18\,015\,449 & & 22\,413\,128 & & 39\,256\,327 \\ euro-dist & & 18\,015\,449 & & 22\,413\,128 & & 44\,368\,351 \\ ger-tc & & 35\,024\,256 & & 43\,790\,686 & & 105\,617\,078 \\ \bottomrule \end{tabular} \end{table} \begin{table}[h] \caption{Experiments on different graphs. Values are scaled to match the same hardware.} \label{tab:experimentsGraphs} \centering \begin{tabular}{lclcrrcrrr} \toprule Graph & & $\vert\mathcal{T}\vert$ & & $\vert A\vert$ & $\vert\bar{S} \vert$ & Byte $/$ & time & & query \\ & & & & & & node & [\si{\minute}] & & [\si{\micro\second}] \\ \midrule euro-dist & & 10\,000 & & 18.0 & 22.1 & 440 & 32.4 & & 6.717 \\ & & 15\,000 & & 17.1 & 18.7 & 424 & 28.9 & & 4.678 \\ & & 25\,000 & & 14.5 & 14.7 & 456 & 26.1 & & 3.317 \\ \midrule ger-tc & & 20\,000 & & 9.11 & 14.66 & 278 & 18.9 & & 2.669 \\ & & 40\,000 & & 7.78 & 11.85 & 383 & 16.9 & & 2.518 \\ & & 50\,000 & & 7.37 & 11.01 & 476 & 16.6 & & 2.678 \\ \bottomrule \end{tabular} \end{table} Table~\ref{tab:experimentsGraphs} gives results for two further instances and three different sizes of the transit node set.
Besides the European network with travel time metric used before (euro-time), we consider the same network with geographical distance metric (euro-dist), and a very detailed model of the German road network (ger-tc) based on OpenStreetMap data\footnote{Database timestamp \texttt{2012-12-11T19$\backslash$:00$\backslash$:02Z}} with explicitly modelled turns. This graph has 35\,024\,256 nodes and 43\,790\,686 edges, resulting in a CH with 105\,617\,078 edges. U-turns are explicitly forbidden and the turn restrictions from the input data were used. The experiments on ger-tc were performed on a slightly slower Intel Xeon machine with more RAM. We determine scaling factors to compare the outcomes by running the algorithm with the European (time-metric) graph on that machine. The factors are 0.804 and 0.960 for preprocessing and query, respectively. The results of our experiments are scaled accordingly to match the speed of the faster machine. We see that the average number of access nodes as well as the average number of Voronoi representatives decrease with an increasing size of the transit node set. The same holds true for the preprocessing duration and the query times. The decrease in preprocessing duration is caused by likewise decreased search spaces in $V\backslash\mathcal{T}$. Although it takes longer to preprocess the distance table, the impact is minor when compared to the CH graph preprocessing. Most edges during a CH search are relaxed in the upper portions of the hierarchy. This implies that the search space exploration during preprocessing becomes faster, because the search spaces become smaller. The same holds true for the query: the larger the transit node set, the faster the queries become. This is expected behavior for two reasons. First, search spaces below the transit node set become smaller, as argued above. Second, the number of local fallback queries decreases, because even more of the queries can be answered by table lookups.
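A non-local query answered purely by table lookups can be sketched as follows. This is a minimal sketch with an assumed data layout; our implementation uses the flat arrays $A$ and $I_A$ described earlier:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <limits>
#include <utility>
#include <vector>

// Sketch of a non-local query: Aup holds the forward access nodes of s with
// their distances d(s,a), Adown the backward access nodes of t with d(b,t),
// and D is the transit node distance table. The result is the minimum of
// d(s,a) + D(a,b) + d(b,t) over all access node pairs (a,b).
using NodeId = std::uint32_t;
using Dist = std::uint64_t;
constexpr Dist INF = std::numeric_limits<Dist>::max();

Dist tableLookupQuery(const std::vector<std::pair<NodeId, Dist>>& Aup,
                      const std::vector<std::pair<NodeId, Dist>>& Adown,
                      const std::vector<std::vector<Dist>>& D) {
  Dist best = INF;  // stays INF if an access node set is empty (unreachable)
  for (const auto& [a, ds] : Aup)
    for (const auto& [b, dt] : Adown)
      if (D[a][b] != INF) best = std::min(best, ds + D[a][b] + dt);
  return best;
}
```

The number of table lookups is $\vert A^\uparrow(s)\vert \cdot \vert A^\downarrow(t)\vert$, which is why the query time reacts quadratically to the number of access nodes.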
Unfortunately, the distance table on the transit node set grows quadratically with the number of transit nodes. Therefore, the overall space consumption increases again at some point, when the quadratic increase of the distance table can no longer be compensated by the falling average number of access nodes and Voronoi representatives. We attribute this behavior to the fact that most edge relaxations during a CH query happen in the highest portions of the hierarchy. So, there is a point of diminishing returns, when the transit node set covers this dense portion of the hierarchy. Note that the results shown in Table~\ref{tab:experimentsGraphs} are selected to reflect this observation. If the input graph is not strongly connected, it may be that we end up with an empty set of forward or backward access nodes for some nodes. In that case the minimum in Equation~\ref{eq:distance} minimizes over an empty set. We define this minimum as $\infty$, correctly indicating non-reachability in the case of a non-local query. Similarly, non-existing paths between pairs of transit nodes are detected during the precomputation and are indicated by a distance of $\infty$, too. The search assigning Voronoi representatives may not reach all nodes, and any unreached nodes are assigned to a dummy Voronoi region. \begin{figure} \caption{Rank plot for the euro-dist instance for varying sizes of $\vert\mathcal{T}\vert$.} \label{fig:rankPlots_dist} \end{figure} As expected from previous work, we see that switching to the distance metric is costly. The number of access nodes doubles and accordingly the space overhead also doubles. Since the number of table lookups is quadratic in the number of access nodes, the query time nearly quadruples. On the positive side, the detailed model of the German graph, which is perhaps closest to state-of-the-art routing applications, behaves similarly to euro-time. The number of access nodes increases only slightly, and considering the larger graph size, the preprocessing time also remains moderate.
This is an important difference from plain CHs, where switching to a detailed graph model leads to significantly increased query times. Figure~\ref{fig:rankPlots_dist} shows the rank plot for the \emph{euro-dist} instance for varying sizes of the transit node set. Again, we observe that the parameter influences the threshold beyond which shortest paths are covered by access nodes. Figure~\ref{fig:rankPlots_ger} shows the result of the same experiment on the edge-expanded graph of Germany. Note that we experimented with a much larger number of transit nodes on this instance, because it is much larger than the other instances. Again, we observe a sharp cut-off beyond which paths are covered by access nodes, and the value of $\vert\mathcal{T}\vert$ is a tuning parameter for where this cut-off occurs. Most interestingly, the queries seem to have a greater variance in the sense that there are outliers with rather low query times. \begin{figure} \caption{Rank plot for varying sizes $\vert\mathcal{T}\vert$ of the transit node set on the ger-tc instance.} \label{fig:rankPlots_ger} \end{figure} \section{Further Improvements} In addition to the previous experiments, we identify a number of additional enhancements and use cases of our method. Each of the following subsections is work in progress at the time of writing; the results should thus be treated as preliminary. \subsection{Pruning with Arc Flags}\label{sec:tnr-af} The experimental evaluation of Section \ref{sec:tnr-experiments} shows that the majority of the query time is spent in table lookups and only to a lesser extent in the locality filter. In this section, we analyse how further pruning using arc flags could be applied to achieve a sub-microsecond distance oracle on a fictive 2 GHz CPU. Thus, while the following numbers are encouraging, they have to be taken with a grain of salt.
First, we briefly explain the layout of the query and rely on previous work by Bauer et al.\xspace \cite{bdsssw-chgds-08}, Delling \cite{d-earpa-09} and Abraham et al.\xspace \cite{dgnw-phast-12} to conduct the preprocessing. Second, we note that on a modern memory architecture the cost of an L1 cache miss is about 10 cycles and the cost of an L3 cache miss is about 100 cycles; on a 2 GHz machine this amounts to 5 and 50 nanoseconds, respectively. An L1 cache hit is accounted for with a single nanosecond. These numbers were determined experimentally by Luxen and Schieferdecker \cite{ls-dmlca-12} when researching the cost associated with low-level memory accesses. The TNR query can be sped up by using arc flags as previously reported, e.g.\ in \cite{bdsssw-chgds-08}. The approach is very similar to traditional arc flags. Instead of partitioning the entire input graph, only the core induced by the transit nodes is preprocessed. Before running the table lookups for each and every pairwise combination of access nodes, a small set of auxiliary data residing in cache is queried to check whether the table lookup and the associated expensive cache miss are necessary at all. If we partition the overlay network induced by the transit nodes into 48 regions, like \cite{d-earpa-09}, this requires 96 bits (uncompressed) per node to store forward and backward flags. This totals less than 120 KBytes of additional information for the exemplary 10\,000 transit nodes from Section \ref{sec:tnr-experiments}, which easily fits into the L2 cache of any modern processor. Further, we assume that the access nodes of each node have been sorted beforehand. The pruning table cannot be scanned sequentially as a whole, but only one cache line of 32 bytes at a time. Since the data is sorted, it is fair to assume that six fetch operations into the L2 cache suffice. In our fictive runtime, this is accounted for with about 5 nanoseconds for each L1 cache miss and a single nanosecond for each \emph{AND} operation.
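The flag check preceding each table lookup can be sketched as follows. This is a schematic Python sketch: the data layout and names are illustrative assumptions, and a real implementation packs the per-region bits contiguously as described above; the point is only that a cheap in-cache \emph{AND} guards each expensive lookup:

```python
def pruned_lookups(fwd_access, bwd_access, fwd_flags, bwd_flags,
                   region, dist_table):
    """Table-lookup phase with arc-flag pruning.

    fwd_flags[a] is a bitmask of partition regions reachable from access
    node a along shortest paths; bwd_flags[b] likewise for paths into b.
    A lookup (a, b) is performed only if a's flags contain b's region and
    vice versa -- otherwise the cache miss for the lookup is saved.
    """
    best = float("inf")
    for a, d_sa in fwd_access:
        for b, d_bt in bwd_access:
            if not (fwd_flags[a] >> region[b]) & 1:
                continue  # cheap AND on cached flags beats a table lookup
            if not (bwd_flags[b] >> region[a]) & 1:
                continue
            best = min(best, d_sa + dist_table[a][b] + d_bt)
    return best
```

Correctness rests on the usual arc-flag invariant: a flag may only be cleared if no shortest path of the corresponding region pair uses that access node pair.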
The following numbers are exemplary for the \texttt{euro-time} graph. If we have $6.1$ access nodes on average, we expect to check $6.1\times6.1$ pruning flags in total. As Bauer et al.\xspace report, the remaining number of table lookups for TNR-AF is $3.1$, each of which will cost us an L3 cache miss in the worst case. Thus, we account for an additional 55 nanoseconds for each such access. If a local CHASE query costs $6.1$ microseconds on average \cite{dgnw-phast-12} and is conducted for $0.581\%$ of all queries, then the local searches account for roughly 100 nanoseconds on average. This amounts to an expected total query time on a fictive 2 GHz CPU of $$250 \text{ns} + 6\cdot 5 \text{ns} + (6.1)^2\cdot 1 \text{ns} + 3.1\cdot 55 \text{ns} + 100 \text{ns} \simeq 590 \text{ns} , $$ assuming that a single invocation of the Voronoi locality filter costs about 250 nanoseconds. This is a conservative estimate since it does not account for any SIMD tuning opportunities. For CHASE preprocessing, recent work of Abraham et al.\xspace \cite{dgnw-phast-12} gives an efficient algorithm, called \emph{PHAST}, which computes arc flags on our test instance in a mere 14 minutes of preprocessing including CH construction. The additional memory overhead amounts to roughly 600 MB. Further, we make the (simplifying) assumption that the arc flags for the transit node overlay graph can be computed without much additional cost. Thus, we conclude that it is possible to construct a sub-microsecond distance oracle in about 15 minutes. \subsection{Multilayer CH-based TNR} In \cite{bfmss-itcsp-07,s-rprn-08} local queries were also handled quickly by introducing additional \emph{layers} of secondary and tertiary transit nodes.
Because of the higher quality of our transit node sets and because CH-routing is faster than highway-hierarchy routing, this becomes less important than in \cite{bfmss-itcsp-07}; yet at least a secondary layer would be useful to further reduce the query times observed in Section \ref{sec:tnr-experiments}. So, consider a set $\mathcal{T'}\supset\mathcal{T}$ of secondary transit nodes.\footnote{The construction can easily be generalized to more than two layers.} In our CH-based setting this will just be the highest $k'$ nodes from the CH for some $k'>|\mathcal{T}|$. An $s$--$t$ query will first invoke the top-level locality filter $L$. If this filter comes out positive, the secondary locality filter $L'$ is invoked. $L'$ can be implemented as before using the CH search spaces from $s$ and $t$, this time staying below level $k'$. Note that this information is gathered as a side effect of gathering the search space information for $L$. Similarly, we can obtain the secondary access nodes analogously to the top-level access nodes by analyzing the CH search spaces below level $k'$. When both $L$ and $L'$ are positive, we perform a CH query which is now even more local than in the single-layer case. The main challenge is handling the case where $L$ is positive yet $L'$ is negative and we have to use a lookup table $D_{\mathcal{T}'}$ for routing in the secondary arterial network. Note that we cannot simply store a complete distance table for $\mathcal{T'}\times\mathcal{T'}$, since that would be too big and since then there would be no point in having the $\mathcal{T}\times\mathcal{T}$ distance table.\footnote{An outline of multilayer CH-based TNR in \cite{Bast11} misses this important point.} Rather, we have to precompute distances $\mu(u,w)$ where $u,w\in\mathcal{T}'$ and where no node in $\mathcal{T}$ lies on the corresponding shortest path (the latter case is covered by using $D_{\mathcal{T}}$). Note that only nearby nodes in $\mathcal{T}'$ will require such entries.
The sparseness of $D_{\mathcal{T}'}$ can be accommodated by using a hash table rather than a two-dimensional array. The entries of $D_{\mathcal{T}'}$ are computed using a refinement of the many-to-many technique from \cite{ksssw-cmmsp-07,gssv-erlrn-12}. The backward searches upward from nodes $w\in\mathcal{T'}$ store shortest path information to $w$ in the nodes below level $k$ they reach. The forward searches upward from nodes $u\in\mathcal{T'}$ use this information to generate candidate entries for $D_{\mathcal{T}'}$. These entries are validated by checking whether they are actually shorter than the best path using the top-level arterial network. \subsection{TNR Based Many-to-One Computations}\label{s:manyToOne} Consider a scenario where we have to find many $s$--$t$ shortest path distances for a fixed $t$; the case of fixed $s$ works analogously. For example, this might be interesting for generalizations of A* search to multiple criteria, where exact single-criteria searches can be used for pruning the search space. Although we can use Dijkstra's algorithm here (one backward search from $t$) for precomputing all single-criteria distances, this is expensive when the A* search touches only a small fraction of the nodes. The idea is to precompute the $v$--$t$ distances for all transit nodes $v\in\mathcal{T}$ and to store them in a separate array $T$. This can be done using $|A^\downarrow(t)|\cdot|\mathcal{T}|$ table lookups accessing only $|A^\downarrow(t)|$ rows of the distance table. Note that $T$ is likely to fit into cache. For a fast locality filter specialized to a particular target node, one can employ a highly localized backward search from $t$, explicitly precomputing the nodes requiring a local query. Ideally, we would like to precompute $\mu(s, t)$ for all source nodes $s$ which require a local query. In principle, this can be done using a single backward Dijkstra search from $t$.
Rather than exploring the full graph, this backward search can stop when its search space is covered by transit nodes. The locality filter then uses a single array $D$ initialized to $\infty$. The backward search sets $D[v]$ to $\mu(v, t)$ when it settles a node $v$ such that the path from $v$ to $t$ is not covered by a transit node (the Dijkstra search has to propagate this coverage information). This corresponds to the conservative approach described in \cite{s-rprn-08} for highway node routing. However, for instances with very long edges such as ferry connections this conservative approach can take a lot of time. There are several alternatives. A simple one is to use the stall-on-demand technique for the covering search from \cite{s-rprn-08}, which should not be confused with the related technique used for a CH query. We can also use the stall-in-advance technique from \cite{s-rprn-08}, which might be faster yet results in a one-sided error for the locality filter. For a non-local query we compute the distance $$\mu(s,t)=\min_{a\in A^\uparrow(s)}d_{A^\uparrow}(s,a)+T[a].$$ Note that this takes time linear rather than quadratic in the number of access nodes and only incurs cache faults for scanning $A^\uparrow(s)$. Preliminary experiments indicate that this method can yield an order of magnitude improvement in query time compared to TNR (to around 100\,ns for the European instance). \section{Conclusions and Future Work}\label{sec:tnr-conclusion} We have shown that a very simple implementation of CH-TNR yields a speedup technique for route planning with an excellent trade-off between query time, preprocessing time, and space consumption. In particular, at the price of twice the (quite fast) preprocessing time of contraction hierarchies, we get two orders of magnitude faster queries. Our purely graph-theoretical locality filter outperforms previously used geometric filters.
To the best of our knowledge, this eliminates the last remnant of geometric techniques in competitive speedup techniques for route planning. This filter is based on intersections of CH search spaces and thus exhibits an interesting relation to the hub labelling technique. When comparing speedup techniques, one can view this as a multi-objective optimization problem along the dimensions query time, preprocessing time, space consumption, and simplicity. Any Pareto-optimal (i.e.\ non-dominated) method is worth considering, and good methods should have a significant advantage with respect to at least one measure without undue disadvantages in the other dimensions. In this respect, CH-TNR fares very well. Only hub labelling achieves significantly better query times, but at the price of much higher space consumption, in particular when comparable preprocessing times are desired. Moreover, simple variants of hub labelling have even worse space consumption and less clear advantages in query time. When looking for clearly simpler techniques than CH-TNR, plain CHs come to mind, but at the price of two orders of magnitude larger query time and a surprisingly small gain in preprocessing time. CH-TNR also has significant potential for further performance improvements. Our variant of CH-TNR focuses on maximal simplicity except for the Voronoi filter, which is needed for space efficiency. But there are many further improvements that will not drastically change the position of CH-TNR in the landscape of speedup techniques but could yield noticeable improvements with respect to query time, preprocessing time, or space at the price of a more complicated implementation. We now outline some of these possibilities: \paragraph*{Query time:}\label{sec:tnr-query-time} In Section~\ref{s:manyToOne} we have seen that the special case of many-to-one queries can be accelerated by another order of magnitude, making this the fastest known technique for this use case.
But the general case can also be further accelerated. As in \cite{bdsssw-chgds-10}, we could expect about twice faster queries by combining CH-TNR with arc flags for an additional sense of goal direction. The additional preprocessing time could be much smaller than in \cite{bdsssw-chgds-10} by using new CH-based methods for fast parallel one-to-all shortest paths \cite{dgnw-phast-11}. Local queries can be accelerated by introducing additional layers as in HH-TNR. Alternatively, we could use hub labelling for local queries. This is still much more space efficient than full hub labelling and very simple, since we need to compute the local (sub-transit-node) search spaces anyway. This variant of CH-TNR can be viewed as a generalization of hub labelling that saves space and preprocessing time at the price of larger query times. \paragraph*{Preprocessing time:} Besides CH construction, the most time consuming part of CH-TNR preprocessing is the exploration of the sub-transit-node CH search spaces for finding access nodes and partition representatives. This can probably be accelerated by a top-down computation as in \cite{adgw-hhlsp-12}. Note that using post-search-stalling we still get optimal sets of access nodes. Finding Voronoi regions might be parallelizable to some extent, since it explores a very low diameter graph. \paragraph*{Space:} There are a number of relatively simple low-level tuning opportunities here. For example, we can more aggressively exploit overlaps between forward/backward access nodes and search space representatives. These ``dual use'' nodes need to be stored only in the access node set together with a flag indicating that they are also region representatives. We could also encode backward distances to access nodes as differences to the forward distances. As in HH-TNR, we could also encode the access nodes of most nodes as the union of the access nodes of their neighbors.
The region representatives stored by our graph Voronoi filter are virtually identical to the access nodes, so that we only need to store a flag indicating whether an access node is also a region representative, plus the few region representatives that are not access nodes. In our experiments this would reduce space consumption by another $\approx15$ bytes per node. We already exploit that most forward access nodes are also backward access nodes, but we could additionally exploit that the corresponding distances are similar. Thus, we only need to store the forward distance and the difference to the backward distance. Preliminary experiments indicate that in the vast majority of cases this difference fits into 16 bits. The few remaining cases could be encoded as an escape value indicating that the true distance is stored in a small hash table of exceptional values. This would give another space saving of around $15$ bytes per node. The access nodes of a node $v$ are a subset of the union of the access nodes of its neighbors. Which access nodes are taken from where can be indicated by one small bit map for each neighbor. Hence it suffices to store access nodes for a dominating set of the nodes. With all these measures together, the space consumption of CH-based TNR could be pushed well below 100 bytes per node. \end{document}
\begin{document} \thispagestyle{empty} \begin{abstract} In this paper, we apply Kauffman bracket skein algebras to develop a theory of skein adequate links in thickened surfaces. We show that any alternating link diagram on a surface is skein adequate. We apply our theory to establish the first and second Tait conjectures for adequate links in thickened surfaces. Our notion of skein adequacy is broader and more powerful than the corresponding notions of adequacy previously considered for link diagrams in surfaces. For a link diagram $D$ on a surface $\Sigma$ of minimal genus $g(\Sigma)$, we show that $${\rm span}([D]_\Sigma) \leq 4c(D) + 4 |D|-4g(\Sigma),$$ where $[D]_\Sigma$ is its skein bracket, $|D|$ is the number of connected components of $D$, and $c(D)$ is the number of crossings. This extends a classical result of Kauffman, Murasugi, and Thistlethwaite. We further show that the above inequality is an equality if and only if $D$ is weakly alternating. This is a generalization of a well-known result for classical links due to Thistlethwaite. Thus the skein bracket detects the crossing number for weakly alternating links. As an application, we show that the crossing number is additive under connected sum for adequate links in thickened surfaces. \end{abstract} \address{Mathematics \& Statistics, McMaster University, Hamilton, Ontario} \email{[email protected]} \address{Mathematics \& Statistics, McMaster University, Hamilton, Ontario} \email{[email protected]} \address{Dept.
of Mathematics, University at Buffalo, SUNY, Buffalo, NY 14260} \email{[email protected]} \subjclass[2020]{Primary: 57K10, 57K12, 57K14, 57K31} \keywords{Kauffman skein bracket, adequate diagram, alternating link, Tait conjectures.} \pagestyle{myheadings} \maketitle \section{Introduction} The Kauffman bracket is a $\mathbb Z[A^{\pm 1}]$-valued invariant of framed links in $\mathbb R^3$ determined by the skein relations: \begin{equation}\label{e-KB} \KPX = A\, \KPA + A^{-1}\, \KPB \quad\text{and}\quad \KPC = \delta, \end{equation} where $\delta=-A^2-A^{-2}.$ It naturally extends to an invariant of framed links in an arbitrary oriented $3$-manifold $M$ (possibly with boundary), via the skein module construction: let $\mathscr L(M)$ be the set of all unoriented, framed links in $M,$ including the empty link $\varnothing.$ The {\bf skein module} $\mathscr S(M)$ of $M$ is the quotient of the free $\mathbb Z[A^{\pm 1}]$-module spanned by $\mathscr L(M)$ by the submodule generated by the Kauffman bracket skein relations \eqref{e-KB}, cf. \cite{Przytycki-1999}, \cite{Turaev-1990, Turaev-1991}. By this construction, the bracket $$[\, \cdot \,] \colon \mathscr L(M)\to \mathscr S(M),$$ sending framed links to their equivalence classes in $\mathscr S(M)$, called the {\bf skein bracket}, is the universal invariant of framed links in $M$ satisfying \eqref{e-KB}. Independently of this initial motivation, skein modules quickly began to play a much broader role in the development of quantum topology, for example in connection with $SL(2,\mathbb C)$ character varieties \cite{Bullock-1997, PS-2000, FKL-2019, Turaev-1991, BFK-1999}, topological quantum field theory \cite{BHMV-1995, Turaev-1994}, (quantum) Teichm\"uller spaces and (quantum) cluster algebras \cite{BW-2011, CL-2019, FGo-2006, FST-2008, Muller-2016}, the AJ conjecture \cite{FGL-2002, Le-2006}, and many more.
In this paper we develop a general theory of skein adequacy (called adequacy, for short) for links in thickened surfaces with the aid of skein modules. Let $\Sigma$ be an oriented surface and $I=[0,1]$ be the unit interval. The skein module of the thickened surface $\Sigma \times I$ comes naturally equipped with a product structure given by stacking, i.e., the product $L_1\cdot L_2$ is defined by placing $L_1$ on top of $L_2$ in $\Sigma \times I$. With this product structure, the skein module $\mathscr S(\Sigma \times I)$ becomes an algebra over $\mathbb Z[A^{\pm 1}]$. Let $\mathcal{C}(\Sigma)$ denote the set of all non-trivial unoriented simple loops on $\Sigma$ up to isotopy and $\mathcal{MC}(\Sigma)$ denote the set of all non-trivial unoriented multi-loops on $\Sigma$, i.e., collections of pairwise disjoint simple non-contractible loops, including $\varnothing$, up to isotopy. Then by \cite{Przytycki-1999} (cf.\ \cite{Sikora-Westbury}), the skein module $\mathscr S(\Sigma\times I)$ is a free $\mathbb Z[A^{\pm 1}]$-module with basis $\mathcal{MC}(\Sigma)$. Consequently, via this identification, the skein bracket gives a map \begin{equation}\label{e-bra} [\, \cdot \, ]_\Sigma \colon \mathscr L(\Sigma\times I)\to \mathscr S(\Sigma\times I)=\mathbb Z[A^{\pm 1}]\mathcal{MC}(\Sigma). \end{equation} We use the association \eqref{e-bra} to develop a theory of skein adequacy for links in $\Sigma\times I$ which extends that for classical links. This theory is broader and more powerful than the corresponding notions of simple adequacy \cite{Lickorish-Thistlethwaite} and homological adequacy \cite{Boden-Karimi-2019}. For example, we will see that every weakly alternating link in $\Sigma\times I$ without removable nugatory crossings is skein adequate.
We will apply the skein bracket to establish the first and second Tait conjectures for skein adequate link diagrams on surfaces. The first one says that skein adequate diagrams have minimal crossing number, and the second one says that two skein adequate diagrams for the same oriented link have the same writhe. (The writhe of a link diagram $D$ is denoted by $w(D)$ and is defined to be the sum of its crossing signs.) These results strengthen the earlier work of Adams et al.\ \cite{Adams}, who showed the minimal crossing number result for reduced alternating knot diagrams in surfaces. We also strengthen the minimality result of \cite{Boden-Karimi-2019} for homologically adequate link diagrams in surfaces, and further show that any connected sum of two skein adequate link diagrams on surfaces is again skein adequate. This implies that the crossing number and writhe are essentially additive under connected sum of skein adequate links in thickened surfaces. For any link diagram $D$ on a surface $\Sigma$ of minimal genus, we prove that $${\rm span}([D]_\Sigma) \leq 4c(D) + 4 |D|-4g(\Sigma),$$ where $|D|$ is the number of connected components of $D$, $c(D)$ is the number of crossings, and $g(\Sigma)$ is the genus of $\Sigma.$ This inequality generalizes a result proved by Kauffman, Murasugi, and Thistlethwaite for link diagrams on $\mathbb R^2$ \cite{Kauffman-87, Murasugi-871, Thistlethwaite-87}, extending their nice geometric application of the Kauffman bracket. It also extends and strengthens an analogous recent result proved in \cite{Boden-Karimi-2019} using the homological Kauffman bracket. Additionally, we prove that the above inequality is an equality if and only if $D$ is weakly alternating. Therefore, the skein bracket, together with the crossing number, distinguishes weakly alternating links. This generalizes the analogous result of Thistlethwaite for classical links.
\subsection*{Broader context and motivation} While the results presented here are new only for links in non-contractible surfaces, generalized link theory is of growing interest and has many potential connections to classical links and 3-dimensional geometry. We take a moment to discuss some of them. One motivation for our results is their connection to the theory of virtual knots and links, which can be viewed as links in thickened surfaces, considered up to homeomorphisms and stabilization \cite{Carter-Kamada-Saito}. By Kuperberg's theorem, minimal genus realizations of virtual links are unique up to homeomorphism \cite{Kuperberg}. Our theory of adequate and alternating links in thickened surfaces is invariant under surface homeomorphisms and, therefore, many of the results given here can be restated in the language of virtual links. A second motivation involves potentially novel applications to classical link theory. The Turaev surface construction associates to any classical link diagram an alternating link in a thickened surface \cite{Turaev-1987, DFKLS, CK-Turaev}. Menasco famously proved hyperbolicity for prime alternating (non-torus) links in $S^3$ \cite{Menasco-1984}, and his result has been extended to prime alternating links $L \subset \Sigma \times I$ in \cite{Adams-2019a}. This result opens the door to using the hyperbolic geometry of alternating links in higher genus surfaces to profitably study non-alternating classical links, e.g., see \cite{Adams-2019c} and the many other papers cited below. In \cite{Dasbach-Lin-2007}, Dasbach and Lin proved a remarkable result giving a bound on the volume of alternating link complements in terms of the second and penultimate coefficients of the Jones polynomial.
In \cite{Lackenby-2004}, Lackenby established an equally remarkable bound on the volume of alternating link complements in terms of the diagrammatic \emph{twist number}. For alternating hyperbolic links in $S^3$, the results of \cite{Dasbach-Lin-2007} imply that the twist number is essentially an isotopy invariant of $L$, but this is not true in general. These methods have been generalized to non-alternating hyperbolic links in $S^3$ \cite{Blair-2009, Blair-Allen-Rodriguez-2019} and to hyperbolic links in arbitrary compact oriented 3-manifolds \cite{HP-2020}. In general, there is a notion of weakly generalized alternating link diagrams on surfaces due to Howie \cite{Howie-2015}, extended to links in compact oriented 3-manifolds via ``generalized projection surfaces'' by Howie and Purcell \cite{HP-2020}. The volume bounds have been extended to alternating links in thickened surfaces by Bavier and Kalfagianni \cite{Bavier-Kalfagianni-2020} and Will \cite{Will-2020}, and also to virtual alternating links by Champanerkar and Kofman \cite{Champanerkar-Kofman-2020}. In \cite{Champanerkar-Kofman-2020} and \cite{Will-2020}, the volume bounds are expressed in terms of the Jones-Krushkal polynomial \cite{Krushkal-2011, Boden-Karimi-2019}, and in \cite{Bavier-Kalfagianni-2020} they are expressed in terms of a skein invariant derived from fully contractible smoothings. In \cite[Corollary 1.3]{Bavier-Kalfagianni-2020}, they deduce that, for certain alternating links in thickened surfaces, the twist number is an isotopy invariant. Interestingly, this result is consistent with the generalized Tait flyping conjecture. \section{State sum formula and the generalized Jones polynomial} \label{s-state-sum} We will assume throughout this paper that $\Sigma$ is an oriented surface with one or more connected components, which may also have boundary. Links in $\Sigma\times I$ will be represented as diagrams on $\Sigma$ up to Reidemeister moves.
Every framed link in $\Sigma\times I$ can also be represented by a link diagram with framing given by the blackboard framing. Equivalence of framed links is given by regular isotopy, which includes the second and third Reidemeister moves, as well as the modified first Reidemeister move, which replaces $\KPXR$ or $\KPXQ$ with $\ \KPXO \ .$ Let $D$ be a link diagram on a surface $\Sigma$. Given a crossing \KPX\ of $D$, we consider its $A$-type \KPA and $B$-type \KPB resolutions, as in the Kauffman bracket construction. A choice of resolution for each crossing of $D$ is called a {\bf state}. Let $\mathfrak S(D)$ denote the set of all states of $D$. Thus $|\mathfrak S(D)|=2^{c(D)},$ where $c(D)$ is the crossing number of $D$. For $S\in \mathfrak S(D)$, let $|S|$ denote the number of loops in $S$ and $t(S)$ the number of contractible loops in $S$. Also let $\widehat S$ denote $S$ with its contractible loops removed. Hence, $\widehat S\in \mathcal{MC}(\Sigma).$ The following state sum formula is an immediate consequence of the definition, and it generalizes the usual formula for the classical Kauffman bracket: \begin{equation}\label{e-state-sum} [D]_\Sigma=\sum_{S\in \mathfrak S(D)} A^{a(S)-b(S)}\delta^{t(S)}\widehat S \in \mathbb Z[A^{\pm 1}]\mathcal{MC}(\Sigma), \end{equation} where $a(S),b(S)$ are the numbers of $A$- and $B$-smoothings in $S$ and $\delta = -A^2-A^{-2}$ as before. A similar formula appears in the paper of Dye and Kauffman on the surface bracket polynomial \cite{Dye-Kauffman-2005}. Any invariant of framed links in $\Sigma\times I$ satisfying \eqref{e-KB} can be normalized to obtain a Jones-type polynomial invariant of oriented links.
In the case of the skein bracket \eqref{e-bra}, one obtains the {\bf generalized Jones polynomial}, an invariant of oriented links in $\Sigma \times I$ given by \begin{equation}\label{e-Jones} J_{\Sigma}(D)=(-1)^{w(D)}t^{3w(D)/4}([D]_\Sigma)_{A=t^{-1/4}}. \end{equation} \section{Adequate link diagrams in surfaces} \label{s-adeq} Given a link diagram $D$, let $S_A$ be the pure $A$ state and let $S_B$ be the pure $B$ state. Then $S_A$ and $S_B$ are the states which theoretically give rise to the terms of maximal and minimal degree in \eqref{e-state-sum}. The notion of adequacy of a link diagram is designed to guarantee that the terms from $S_A$ and $S_B$ survive in the state sum formula. Therefore, when $D$ is a skein adequate diagram, its skein bracket $[D]_\Sigma$ has the maximal possible span. Two states $S, S'$ are said to be {\bf adjacent} if their resolutions differ at exactly one crossing. \begin{definition} \label{defn:h-adequate} A link diagram $D$ on a surface $\Sigma$ is said to be $A$-adequate if $t(S) \leq t(S_A )$ or $\widehat{S} \neq \widehat{S}_A$ in $\mathcal{MC}(\Sigma)$ for any state $S$ adjacent to $S_A$. It is said to be $B$-adequate if $t(S) \leq t(S_B)$ or $\widehat{S} \neq \widehat{S}_B$ for any state $S$ adjacent to $S_B$. The diagram $D$ is called skein adequate if it is both $A$- and $B$-adequate. \end{definition} The notions of $A$- and $B$-adequacy are modeled on the notions of plus- and minus-adequacy for classical links \cite{Lickorish}. Recall that a classical link diagram is said to be {\bf plus-adequate} if $|S| = |S_A|-1$ for any state $S$ adjacent to $S_A$, and it is {\bf minus-adequate} if $|S| = |S_B|-1$ for any state $S$ adjacent to $S_B$. This simpler notion of adequacy extends verbatim to link diagrams on surfaces.
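To make the state sum and the role of the extreme states concrete, the following sketch evaluates it for a classical diagram, where every loop is contractible, $\widehat S=\varnothing$, and the usual Kauffman bracket is the state sum with $\delta^{|S|-1}$ in place of $\delta^{t(S)}$. The encoding of states by their loop counts is an illustrative assumption (not the paper's notation); the loop counts listed are those of the standard $3$-crossing trefoil diagram:

```python
from math import comb

def bracket(loop_counts):
    """Classical Kauffman bracket via the state sum.

    loop_counts maps each state (a string of 'A'/'B' smoothings, one per
    crossing) to its number of loops |S|.  Returns the Laurent polynomial
    sum_S A^{a(S)-b(S)} delta^{|S|-1}, with delta = -A^2 - A^{-2},
    encoded as a dict {exponent: coefficient}.
    """
    poly = {}
    for state, loops in loop_counts.items():
        a, b = state.count("A"), state.count("B")
        n = loops - 1
        # expand delta^n = (-1)^n * sum_k C(n,k) A^{2k} A^{-2(n-k)}
        for k in range(n + 1):
            exp = (a - b) + 2 * k - 2 * (n - k)
            poly[exp] = poly.get(exp, 0) + (-1) ** n * comb(n, k)
    return {e: c for e, c in poly.items() if c}

# Loop counts of the eight states of the standard trefoil diagram:
trefoil = {"AAA": 2, "AAB": 1, "ABA": 1, "BAA": 1,
           "ABB": 2, "BAB": 2, "BBA": 2, "BBB": 3}
```

Here `bracket(trefoil)` yields $-A^5-A^{-3}+A^{-7}$, the familiar trefoil bracket. Every state adjacent to the pure $A$ state has $|S_A|-1=1$ loop and every state adjacent to the pure $B$ state has $|S_B|-1=2$ loops, so this diagram is plus- and minus-adequate, and indeed the terms of the extreme states survive: the span is $12=4c(D)$.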
For link diagrams on surfaces, plus- and minus-adequacy is a special case of the notion of homological adequacy, which was introduced in \cite{Boden-Karimi-2019} and will be reviewed in \Cref{s-homological}. We will see that adequacy as defined above is more general than simple or homological adequacy. The following provides an alternative definition of adequacy: \begin{proposition} \label{p-alt-adeq} (1) A link diagram $D$ on $\Sigma$ is $A$-adequate if and only if $t(S) \leq t(S_A )$ or $|\widehat{S}|\neq |\widehat{S}_A|$ for any state $S$ adjacent to $S_A$.\\ (2) A link diagram $D$ on $\Sigma$ is $B$-adequate if and only if $t(S) \leq t(S_B)$ or $|\widehat{S}| \neq |\widehat{S}_B|$ for any state $S$ adjacent to $S_B$. \end{proposition} \begin{proof} We begin with some general comments. Given a link diagram $D$ and two adjacent states $S,S'$, the transition from $S$ to $S'$ is one of the following types: \begin{itemize} \item[(i)] $|S'| = |S|+1$, i.e., one cycle of $S$ splits into two cycles of $S'$. \item[(ii)] $|S'| = |S|-1$, i.e., two cycles of $S$ merge into one cycle of $S'$. \item[(iii)] $|S'| = |S|,$ i.e., one cycle $C$ of $S$ rearranges itself into a new cycle $C'$ of $S'$.\footnote{The transition $S \to S'$ in this case is called a \textit{single cycle bifurcation}.} \end{itemize} In cases (ii) and (iii), either $t(S')\leq t(S)$ or $\widehat{S'} \neq \widehat{S}$. Specifically, in case (ii), $t(S') > t(S)$ only when two non-trivial parallel cycles in $S$ merge to form one trivial cycle in $S'$, which implies that $\widehat{S} \neq \widehat{S'}$. Likewise, in case (iii), we claim that neither $C$ nor $C'$ is trivial and, consequently, $t(S') = t(S)$. To see that, note that if $S'$ is obtained from $S$ by a smoothing change of a crossing $x$, then there are two simple closed loops $\alpha,\beta\subset \Sigma$ intersecting at $x$ only and such that the two different smoothings of $x$ yield $C$ and $C'$.
Assigning orientations to $\alpha$ and $\beta$, we see that $C$ and $C'$, suitably oriented, equal $\pm (\alpha+\beta)$ and $\pm (\alpha-\beta)$ in $H_1(\Sigma)$. Since the algebraic intersection number of $\alpha$ and $\beta$ is $1$, we know that $\alpha\ne \pm \beta$ and, consequently, neither $C$ nor $C'$ is trivial. Therefore, to verify that a given diagram is $A$- or $B$-adequate, it is enough to check that the conditions of \Cref{defn:h-adequate} hold in case (i).

We now prove part (1). Suppose $S$ is a state adjacent to $S_A$ with $t(S)=t(S_A)+1$. Then the transition from $S_A$ to $S$ must be of type (i) or (ii). If it is of type (i), then $|S|=|S_A|+1$ and $t(S)=t(S_A)+1$, and therefore $\widehat{S} =\widehat{S}_A$. Thus $D$ is not $A$-adequate and $|\widehat{S}|=|\widehat{S}_A|$. If it is of type (ii), then $|S|=|S_A|-1$, and two nontrivial cycles of $S_A$ must merge into a trivial cycle of $S$. In this case, the conditions for $A$-adequacy are satisfied and $|\widehat{S}|\neq|\widehat{S}_A|$. The proof of part (2) is similar and is left to the reader.
\end{proof}

For any diagram $D$, its bracket has a unique presentation
$$[D]_\Sigma=\sum_\mu p_\mu(D)\mu \in \mathscr S(\Sigma\times I),$$
where the sum is over all multi-loops $\mu$ in $\Sigma$. Denote the maximal and minimal degrees (in the variable $A$) of the non-zero polynomials $p_\mu(D)$ in this expression by $d_{max}([D]_\Sigma)$ and $d_{min}([D]_\Sigma)$.

\begin{proposition}\label{p-dminmax}
For any link diagram $D$ on $\Sigma$,\\
(1) $d_{max}([D]_\Sigma) \leq c(D) + 2t(S_A)$, with equality if $D$ is $A$-adequate. \\
(2) $d_{min}([D]_\Sigma) \geq -c(D) -2t(S_B)$, with equality if $D$ is $B$-adequate.
\end{proposition}

\noindent{\it Proof of (1).}
By \eqref{e-state-sum}, $[D]_\Sigma$ is given by a state sum containing the term $(-1)^{t(S_A)} A^{c(D)+2t(S_A)}\widehat{S}_A$ for the state $S_A$. The inequality in (1) now follows from the fact that every change of a smoothing in $S_A$ decreases $a(S)-b(S)$ by two and increases $t(S)$ by at most one. Equality in (1) when $D$ is $A$-adequate follows immediately from part (1) of the lemma below. The proof of (2) is analogous, and equality in (2) when $D$ is $B$-adequate follows from part (2) of the lemma below. \qed

\begin{lemma} \label{lemma-hom-ad}
(1) If $D$ is $A$-adequate and $S$ is a state with at least one $B$-smoothing, then either
$$ a(S)-b(S)+2t(S)< c(D) + 2t(S_A) \quad \text{or} \quad \widehat{S}\ne \widehat{S}_A.$$
(2) If $D$ is $B$-adequate and $S$ is a state with at least one $A$-smoothing, then either
$$a(S)-b(S)+2t(S) > -c(D) - 2t(S_B) \quad \text{or} \quad \widehat{S}\ne \widehat{S}_B.$$
\end{lemma}

\begin{proof}
We prove (1) by contradiction. Suppose to the contrary that $S$ is a state with at least one $B$-smoothing such that $\widehat{S}=\widehat{S}_A$ and
$$a(S)-b(S)+2t(S) = c(D) + 2t(S_A).$$
Clearly, $S$ can be obtained from $S_A$ by a sequence of smoothing changes from $A$ to $B$, $S_A=S_0\to S_1\to \cdots \to S_k=S$. Further, since each smoothing change decreases $a(\cdot)-b(\cdot)$ by two, each one must increase $t(\cdot)$ by one, i.e., $t(S_{i+1})=t(S_i)+1$ for $i=0,\ldots,k-1$. Since each smoothing change increases the number of cycles in a state by at most one, none of these smoothing changes can add a new cycle to $\widehat S_i$, $i=0,\ldots,k$. Therefore, $|\widehat{S}_{i+1}| \leq |\widehat{S}_{i}|$ for $i=0,\ldots,k-1$. However, since $\widehat{S}=\widehat{S}_A$, none of the smoothing changes can decrease $|\widehat{S}_i|$ either. It follows that $\widehat{S}_{i+1}=\widehat{S}_i$ for $i=0,\ldots,k-1$.
Thus $|\widehat{S}_{i+1}|=|\widehat{S}_i|$ and
$$|S_{i+1}| =t(S_{i+1}) + |\widehat{S}_{i+1}| = t(S_{i})+1 + |\widehat{S}_{i}|= |S_{i}|+1,$$
for $i=0,\ldots,k-1$. In particular, each transition $S_i \to S_{i+1}$ is of type (i) as discussed in the proof of \Cref{p-alt-adeq}, i.e., one where a cycle of $S_i$ splits into two cycles of $S_{i+1}$. However, since $D$ is $A$-adequate, the first smoothing change $S_A=S_0\to S_1$ has either $t(S_1)\leq t(S_A)$ or $\widehat{S}_1 \neq \widehat{S}_A$, which is a contradiction. This completes the proof of the first statement. The proof of the second one is similar and is left to the reader.
\end{proof}

The next result is an immediate consequence of \Cref{p-dminmax}. Below, ${\rm span}([D]_\Sigma)$ denotes the difference between the maximal and minimal $A$-degree of $[D]_\Sigma$.

\begin{corollary}\label{cor-adequate}
If $D$ is a link diagram on $\Sigma$, then
$${\rm span}([D]_\Sigma) \leq 2 c(D) + 2t(S_A)+2t(S_B),$$
with equality if $D$ is skein adequate.
\end{corollary}

The map $\Psi \colon \mathcal{MC}(\Sigma)\to \mathbb Z[z]$ sending $S$ to $z^{|S|}$ extends linearly to the skein module,
$$\Psi \colon \mathscr S(\Sigma\times I)=\mathbb Z[A^{\pm 1}]\mathcal{MC}(\Sigma)\longrightarrow \mathbb Z[A^{\pm 1},z].$$
The image $\Psi([D]_\Sigma)$ is called the {\bf reduced homotopy Kauffman bracket}. Obviously,
$${\rm span}(\Psi([D]_\Sigma))\leq {\rm span}([D]_\Sigma),$$
where ${\rm span}(\,\cdot\,)$ refers to the span in the $A$-degree.

\begin{proposition}
If $D$ is a skein adequate link diagram on $\Sigma$, then
$${\rm span}(\Psi([D]_\Sigma))= {\rm span}([D]_\Sigma).$$
\end{proposition}

\begin{proof}
Let $S$ be a state with at least one $B$-smoothing such that $|\widehat{S}|=|\widehat{S}_A|$ and $a(S)-b(S)+2t(S)= c(D) + 2t(S_A)$.
As before, $S$ can be obtained from $S_A$ by a sequence of smoothing changes from $A$ to $B$, $S_A=S_0\to S_1\to \cdots \to S_k=S$, and each smoothing change can increase $t(\cdot)$ by at most one. As in the proof of \Cref{lemma-hom-ad}, we must have $t(S_{i+1})=t(S_i)+1$. Further, since a smoothing change can increase the number of cycles in $S_i$ by at most one, we have $|\widehat{S}_{i+1}|\leq |\widehat{S}_{i}|$ for $i=0,\ldots,k-1$. The assumption that $|\widehat{S}|=|\widehat{S}_A|$ then implies that $|\widehat{S}_{i+1}| = |\widehat{S}_{i}|$ for $i=0,\ldots,k-1$. However, since $D$ is adequate, for the first transition $S_A=S_0 \to S_1$, either $t(S_{1}) \neq t(S_0)+1$ or $\widehat{S}_1 \neq \widehat{S}_0$. But $t(S_{1}) = t(S_0)+1$ and $|\widehat{S}_1| = |\widehat{S}_0|$ imply that $\widehat{S}_1 = \widehat{S}_0$, which gives a contradiction. Therefore, the term with maximum $A$-degree in $\Psi([D]_{\Sigma})$ must survive. A similar argument shows that the term with minimum $A$-degree survives. It follows that
$${\rm span}(\Psi([D]_\Sigma))= 2c(D)+2t(S_A)+2t(S_B)={\rm span}([D]_\Sigma). \qedhere$$
\end{proof}

The next proposition shows that skein adequacy is inherited when passing to subsurfaces $\Sigma' \subset \Sigma$.

\begin{proposition}
If a link diagram $D$ on a subsurface $\Sigma'$ of $\Sigma$ is $A$- or $B$-adequate in $\Sigma$, then it is $A$- or $B$-adequate (respectively) in $\Sigma'$.
\end{proposition}

\begin{proof}
In the following, let $t(S,\Sigma)$ be the value of $t(S)$ when $S$ is regarded as a state in $\Sigma$, and let $t(S,\Sigma')$ be its value when $S$ is regarded as a state in $\Sigma'$. Suppose $D$ is not $A$-adequate in $\Sigma'$. By \Cref{p-alt-adeq}, there exists a state $S$ adjacent to $S_A$ with $t(S,\Sigma')=t(S_A, \Sigma')+1$ and $|\widehat{S}|=|\widehat{S}_A|$ in $\Sigma'$.
In particular, $|S|=|S_A|+1$, and the transition from $S_A$ to $S$ must involve one cycle $C$ of $S_A$ splitting into two cycles $C_1$ and $C_2$ of $S$. At least one of $C_1, C_2$ must be trivial in $\Sigma'$, for otherwise $t(S,\Sigma') \leq t(S_A,\Sigma')$. If, say, $C_1$ is trivial in $\Sigma'$, then it is also trivial in $\Sigma$, because $\Sigma' \subset \Sigma$ is a subsurface. As a cycle in $\Sigma$, $C$ is either trivial or nontrivial. If it is trivial, then $C_2$ must also be trivial in $\Sigma$, and so in fact all three of $C,C_1,C_2$ are trivial. This implies that $t(S, \Sigma) = t(S_A,\Sigma)+1$ and $|\widehat{S}|=|\widehat{S}_A|$ in $\Sigma$, contradicting the assumption that $D$ is $A$-adequate in $\Sigma$. If, on the other hand, $C$ is nontrivial in $\Sigma$, then $C_2$ must also be nontrivial in $\Sigma$. This again implies that $t(S, \Sigma) = t(S_A,\Sigma)+1$ and $|\widehat{S}|=|\widehat{S}_A|$ in $\Sigma$, leading to the same contradiction. Therefore, $D$ must be $A$-adequate in $\Sigma'$. The proof for $B$-adequacy is identical.
\end{proof}

\section{Skein and homological adequacy} \label{s-homological}

For completeness, in this section we compare the notion of skein adequacy from \Cref{defn:h-adequate} to two legacy versions, namely simple and homological adequacy. We will see that our notion of adequacy is broader and that the statements of \Cref{lemma-hom-ad} and \Cref{cor-adequate} are strictly stronger than the corresponding statements for simple and homological adequacy. Henceforth, we will say that a link diagram on a surface is adequate if it is skein adequate.

For any state $S\subset \Sigma$, let us denote the ranks of the kernel and the image of
$$i_* \colon H_1(S;\mathbb Z/2)\to H_1(\Sigma;\mathbb Z/2)$$
by $k(S)$ and $r(S)$, respectively.
The {\bf homological Kauffman bracket},
$$\langle D\rangle_\Sigma=\sum_{S\in \mathfrak S(D)} A^{a(S)-b(S)}\delta^{k(S)}z^{r(S)},$$
was introduced by Krushkal \cite{Krushkal-2011} and studied in \cite{Boden-Karimi-2019}. Based on this invariant, \cite{Boden-Karimi-2019} introduced the notion of homological adequacy for link diagrams in surfaces. A diagram $D$ on $\Sigma$ is {\bf homologically $A$-adequate} if $k(S) \leq k(S_A)$ for any state $S$ adjacent to $S_A$, and it is {\bf homologically $B$-adequate} if $k(S) \leq k(S_B)$ for any state $S$ adjacent to $S_B$. A diagram $D$ is {\bf homologically adequate} if it is both homologically $A$- and $B$-adequate. It is not difficult to show that a diagram that is plus-adequate is homologically $A$-adequate, and one that is minus-adequate is homologically $B$-adequate. (For further details, see \S 2.2 of \cite{Boden-Karimi-2019}.)

\begin{proposition}\label{p-class-homol-homot}
Every homologically $A$-adequate link diagram is $A$-adequate and every homologically $B$-adequate link diagram is $B$-adequate.
\end{proposition}

\begin{proof}
Recall from the discussion at the beginning of the proof of \Cref{p-alt-adeq} that there are three possible cases, and that to verify that a given diagram is $A$- or $B$-adequate, it is enough to check that the conditions of \Cref{defn:h-adequate} hold in case (i). Hence, it is enough to focus on states $S$ adjacent to $S_A$ or $S_B$ with $|S| = |S_A|+1$ or $|S| = |S_B|+1$, respectively. If $D$ is not $A$-adequate, then there exists a state $S$ adjacent to $S_A$ with $|S| = |S_A|+1$, $t(S)=t(S_A)+1$, and $\widehat{S} = \widehat{S}_A$. (Notice that if $|S| = |S_A|+1$ and $t(S)=t(S_A)+1$, then $\widehat{S} = \widehat{S}_A$ automatically holds.) In this case, we have $k(S) =k(S_A)+1$, and it follows that $D$ is not homologically $A$-adequate. This proves the first statement in the proposition; the proof of the second statement on $B$-adequacy is similar.
\end{proof}

In summary then, for a link diagram $D$ on a surface $\Sigma$, it follows that
\begin{equation} \label{e-adeq-comp}
\text{plus-adequacy} \Longrightarrow \text{homological $A$-adequacy} \Longrightarrow \text{$A$-adequacy},
\end{equation}
with similar statements relating minus-adequacy, homological $B$-adequacy, and $B$-adequacy.

\begin{figure}[ht!]
\centering\includegraphics[scale = 0.8]{Figures/3-7-in-torus.pdf}
\caption{An alternating diagram on the torus.}
\label{f-3.7}
\end{figure}

In \Cref{ex-7knot}, we will see a knot diagram in a genus two surface which is adequate but not homologically adequate. On the other hand, it is easy to construct examples which are homologically adequate but not simply adequate. For instance, consider the alternating diagram $D$ with three crossings on the torus in \Cref{f-3.7}. A straightforward calculation shows that it is homologically adequate but not simply adequate. These examples show that none of the reverse implications in \eqref{e-adeq-comp} hold; therefore, the notion of adequacy in \Cref{defn:h-adequate} is strictly more general than either homological or simple adequacy. In general, notice that
$${\rm span}(\langle D\rangle_\Sigma)\leq {\rm span}([D]_\Sigma) \leq 2 c(D) + 2t(S_A)+2t(S_B)\leq 2 c(D) + 2k(S_A)+2k(S_B),$$
where ${\rm span}(\,\cdot\,)$ is the span in the $A$-degree. Therefore, \Cref{cor-adequate} immediately implies that an analogous inequality holds for homological adequacy, cf.\ \cite[Corollary 2.7]{Boden-Karimi-2019}.

\section{Alternating links and the Tait conjectures} \label{s-Tait}

When tabulating knots, Tait formulated three conjectures on alternating links. The first one states that any reduced alternating diagram of a classical link has minimal crossing number. The second one asserts that any two such diagrams representing the same link have the same writhe.
The third one states that any two reduced alternating diagrams of the same link are related by flype moves. The first two conjectures were resolved almost 100 years later, independently by Kauffman, Murasugi, and Thistlethwaite, using the newly discovered Jones polynomial \cite{Kauffman-87, Murasugi-871, Thistlethwaite-87}. The third conjecture was established shortly after by Menasco and Thistlethwaite \cite{Menasco-Thistlethwaite-93}. The first two Tait conjectures actually hold more generally for adequate links \cite{Lickorish-Thistlethwaite}, and their proofs have been generalized to homologically adequate links in thickened surfaces in \cite{Boden-Karimi-2019}. Here, we generalize these results even further to adequate links in thickened surfaces. Henceforth, all links in thickened surfaces will be unframed, unless stated otherwise.

Given an oriented link diagram $D$, let $c_{+}(D)$ be the number of crossings of type \KPXP, and let $c_{-}(D)$ be the number of crossings of type \KPXN. The proof of the following theorem can be found in \Cref{s-proof-adequate-min}.

\begin{theorem} \label{t-adequate-min-cross}
Let $D$ and $E$ be oriented link diagrams on $\Sigma$ representing the same oriented unframed link in $\Sigma \times I$.
\begin{itemize}
\item[(i)] If $D$ is $A$-adequate, then $c_-(D)\leq c_-(E)$.
\item[(ii)] If $D$ is $B$-adequate, then $c_+(D)\leq c_+(E)$.
\end{itemize}
\end{theorem}

The {\bf crossing number} $c(L)$ of a link $L\subset \Sigma\times I$ is defined as the minimal crossing number among all diagram representatives of $L$. A link $L\subset \Sigma\times I$ is said to be {\bf adequate} if it admits an adequate diagram on $\Sigma$. Using \Cref{t-adequate-min-cross}, one can deduce the first and second Tait conjectures for adequate links in surfaces.
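Under the usual convention that $c_+$ counts positive crossings and $c_-$ negative ones, these counts are easy to illustrate on a classical example (not computed in the text): every crossing of the standard right-handed $3$-crossing trefoil diagram $D$ is positive, so

```latex
c_+(D) = 3, \qquad c_-(D) = 0, \qquad
c(D) = c_+(D) + c_-(D) = 3, \qquad
w(D) = c_+(D) - c_-(D) = 3 .
```

Since this reduced alternating diagram is adequate (a classical fact), part (ii) of the theorem above already forces $c_+(E)\geq 3$ for every other diagram $E$ of the same knot.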
\begin{corollary}\label{c-Tait}
(i) Any adequate diagram of a link $L$ in $\Sigma \times I$ has $c(L)$ crossings.\\
(ii) Any two adequate diagrams of the same oriented link in $\Sigma \times I$ have the same writhe.
\end{corollary}

\begin{proof}
Statements (i) and (ii) are immediate consequences of \Cref{t-adequate-min-cross}. In the case of (ii), if adequate diagrams $D$ and $E$ represent the same oriented link, then $c_+(D)=c_+(E)$ and $c_-(D)= c_-(E)$ by the above theorem and, hence,
$$w(D)=c_+(D)-c_-(D)=c_+(E)-c_-(E)=w(E). \qedhere$$
\end{proof}

\Cref{c-Tait} implies that for an adequate link $L\subset \Sigma\times I$, the writhe is a well-defined invariant of its oriented link type.

Let $g(\Sigma)$ be the sum of the genera of the connected components of $\Sigma$. A link diagram $D$ on $\Sigma$ is {\bf minimally embedded} if it does not lie on a subsurface of $\Sigma$ of smaller genus. In other words, the complement of $D$ in $\Sigma$ has no non-separating loops. Let $N_D$ be a neighborhood of $D$ in $\Sigma$ small enough that it is a ribbon surface which retracts onto $D$. A diagram $D$ is minimally embedded if and only if $g(N_D)=g(\Sigma)$. Furthermore, note that if $D$ is connected and $\Sigma$ is closed, then $D$ is minimally embedded if and only if $\Sigma\smallsetminus D$ is a union of disks. In that case, we say that $D$ is {\bf cellularly embedded}. A link diagram $D$ on a closed surface $\Sigma$ is said to have {\bf minimal genus} if it is minimally embedded within its isotopy class. In \cite{Manturov-2013}, it is proved that any cellularly embedded knot diagram with minimal crossing number has minimal genus. This result was recently extended to link diagrams, and the following is a restatement of Theorem 1 of \cite{BR-2021}.

\begin{theorem} \label{thm-min}
Any cellularly embedded link diagram with minimal crossing number has minimal genus.
\end{theorem}

A link diagram $D$ on $\Sigma$ is {\bf alternating} if, when traveling along any of its components, its crossings alternate between over and under. A link $L\subset \Sigma\times I$ is {\bf alternating} if it can be represented by an alternating link diagram. A crossing $x$ of $D$ is {\bf nugatory} if there is a simple loop in $\Sigma$ which separates $\Sigma$ and intersects $D$ only at $x$. As observed in \cite{Boden-Karimi-2019}, although nugatory crossings in diagrams in $\Sigma=\mathbb R^2$ can always be removed by rotating one side of the diagram $180^\circ$ relative to the other, this is not always possible for diagrams in non-contractible surfaces $\Sigma$; see \Cref{f-nugatory}. A nugatory crossing is said to be {\bf removable} if the simple loop can be chosen to bound a disk; otherwise it is called {\bf essential}. A link diagram is {\bf reduced} if it does not contain any removable nugatory crossings. For example, the knot in \Cref{f-7-crossing} contains an essential nugatory crossing.

\begin{figure}[!ht]
\centering\includegraphics[height=30mm]{Figures/nugatory.pdf}
\caption{An essential nugatory crossing.}
\label{f-nugatory}
\end{figure}

The following strengthens Proposition 2.8 of \cite{Boden-Karimi-2019}. Its proof is given in \Cref{s-proof-alt-adeq}.

\begin{theorem}\label{t-alt-adeq}
Any reduced alternating diagram is adequate.
\end{theorem}

Note that, unlike Proposition 2.8 of \cite{Boden-Karimi-2019}, we do not assume here that $D$ is cellularly embedded or checkerboard colorable, nor that $D$ has no nugatory crossings. A link diagram on $\Sigma$ is said to be {\bf weakly alternating} if it is a connected sum $D_0\# D_1\# \cdots \# D_k$ of an alternating diagram $D_0$ in $\Sigma$ with alternating diagrams $D_1, \ldots, D_k$ in $S^2$ (cf.\ \Cref{l-connect-adeq}). \Cref{t-alt-adeq} can be generalized to show that weakly alternating diagrams are adequate.
In fact, in the next section we will prove \Cref{p-conn-sum-adeq}, showing that any diagram on a surface obtained as the connected sum of two adequate link diagrams is itself adequate.

Let us now return to the Tait conjectures. By \Cref{c-Tait}, any reduced alternating diagram $D$ has the minimal crossing number among all diagrams representing the same unframed link $L$ in $\Sigma\times I$. Furthermore, all such oriented diagrams representing the same link $L$ have the same writhe. The results of Kauffman, Murasugi, and Thistlethwaite \cite{Kauffman-87, Murasugi-871, Thistlethwaite-87} imply that the span of the Kauffman bracket of any diagram $D\subset S^2$ satisfies
$${\rm span}([D]_{S^2})\leq 4c(D)+4,$$
or equivalently, for the Jones polynomial, that ${\rm span}(V_D(t)) \leq c(D)$, with equality if $D$ is alternating. Furthermore, in \cite{Thistlethwaite-87} Thistlethwaite proved that if $D\subset S^2$ is prime and non-alternating, then
$${\rm span}([D]_{S^2})<4 c(D)+4.$$
In \cite{Turaev-1987}, it is observed that the above results hold if $D\subset S^2$ is weakly alternating, namely if $D$ is a connected sum of alternating diagrams. Thus the Kauffman bracket $[D]_{S^2}$, together with $c(D)$, \emph{detects} weakly alternating classical links.

The homological Kauffman bracket of \cite{Boden-Karimi-2019} is not sufficiently strong to prove an analogous statement for links in thickened surfaces. Consider the two knots in the genus two surface in \Cref{f-4.107}. These knots have the same homological Kauffman bracket, namely
$$\langle D_1 \rangle_{\Sigma}=\langle D_2 \rangle_{\Sigma}=3\delta z^2 - 4\delta^2 z+ (A^{4}+3+A^{-4})\delta,$$
but one of them is alternating and the other is not. Consequently, the homological Kauffman bracket does not detect alternating knots in thickened surfaces.

\begin{figure}[h!]
\centering
\includegraphics[height=42mm]{Figures/4-98-a.pdf} \qquad \includegraphics[height=42mm]{Figures/4-107-a.pdf}
\caption{Two knots in a genus two surface with the same homological Kauffman bracket.}\label{f-4.107}
\end{figure}

However, we are going to show that the Kauffman--Murasugi--Thistlethwaite statements hold for the Kauffman bracket $[\, \cdot \, ]_\Sigma$ of diagrams in closed surfaces $\Sigma$ after replacing the constant $4$ by $4|D|-4g(\Sigma)$ on the right. Here $|D|$ denotes the number of connected components of $D$ (which may be smaller than the number of connected components of the link in $\Sigma\times I$ represented by $D$). Let $r(D)$ be the rank of the image of $i_* \colon H_1(D;\mathbb Z/2)\to H_1(\Sigma;\mathbb Z/2)$. If $D \subset \Sigma$ is minimally embedded, then $i_*$ is surjective and $r(D)=2g(\Sigma)$. The proof of the next result is given in \Cref{s-proof-altern-span}.

\begin{theorem} \label{t-altern-span}
(i) For any link diagram $D\subset \Sigma$,
$${\rm span}([D]_\Sigma) \leq 4c(D) + 4 |D|- 2 r(D).$$
(ii) If $D$ is cellularly embedded, reduced, and weakly alternating, then
$${\rm span}([D]_\Sigma) = 4c(D) + 4 |D|-4g(\Sigma).$$
(iii) If $D$ is not weakly alternating, then
$${\rm span}([D]_\Sigma) < 4c(D) + 4 |D|-2r(D).$$
\end{theorem}

The assumptions of \Cref{t-altern-span} (ii) are necessary: if $D$ has a removable nugatory crossing, then eliminating it decreases the right hand side of the above equality but not the left hand side. Therefore, (ii) does not hold for diagrams with removable nugatory crossings. It can also fail when $D$ is not cellularly embedded. For example, consider the alternating link in \Cref{f-minimaleg}. It has $t(S_A)=4$ and $t(S_B)=2$. Therefore, by \Cref{cor-adequate}, we have ${\rm span}([D]_\Sigma) \leq 16+12=28$, whereas $4c(D) + 4 |D|-4g(\Sigma) = 32$. Note that this diagram is minimally embedded but not cellularly embedded.
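As a sanity check of the equality in (ii) in the classical case $\Sigma=S^2$ (so $g=0$, $r(D)=0$, and $t(S)=|S|$), take the reduced alternating $3$-crossing trefoil diagram $D$, with the standard state counts $|S_A|=2$ and $|S_B|=3$ (a classical computation, not carried out in the text). Reduced alternating diagrams are adequate, so the span formula ${\rm span}([D]_\Sigma)=2c(D)+2t(S_A)+2t(S_B)$ for skein adequate diagrams gives

```latex
{\rm span}([D]_{S^2}) \;=\; 2c(D) + 2t(S_A) + 2t(S_B)
\;=\; 2\cdot 3 + 2\cdot 2 + 2\cdot 3 \;=\; 16
\;=\; 4c(D) + 4|D| - 4g(S^2),
```

since $c(D)=3$ and $|D|=1$, in agreement with (ii).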
\begin{figure}[h]
\centering
\includegraphics[width=3in]{Figures/4.pdf}
\caption{Minimally embedded alternating diagram for which the equality of \Cref{t-altern-span} (ii) does not hold.}
\label{f-minimaleg}
\end{figure}

Although (ii) holds for weakly alternating diagrams, in the next section we will see that it does not hold in general for connected sums of alternating diagrams in arbitrary surfaces (see \Cref{ex-6knot}).

\begin{corollary} \label{cor-th}
Let $L$ be a link in $\Sigma \times I$ with a reduced, weakly alternating diagram $D$ which is cellularly embedded. Then any other cellularly embedded diagram $E$ for $L$ satisfies $c(D)\leq c(E)$. If $E$ is not weakly alternating, then $c(D) < c(E)$.
\end{corollary}

\begin{proof}
The first statement is a direct consequence of the Tait conjectures for adequate links (\Cref{c-Tait}). We now prove the full statement. Any cellularly embedded link diagram on a connected surface is itself connected, so it is enough to prove the statement under the assumption that $\Sigma$ and $D$ are both connected. \Cref{t-altern-span} (ii) then implies that $c(D)={\rm span}([D]_\Sigma)/4 +g(\Sigma)-1$. If $E$ is a second link diagram for $L$ on $\Sigma$, then since $E$ is cellularly embedded, it must also be connected. \Cref{t-altern-span} (i) implies that
$$c(D) = {\rm span}([D]_\Sigma)/4 +g(\Sigma)-1 ={\rm span}([E]_\Sigma)/4 +g(\Sigma)-1 \leq c(E).$$
If $E$ is not weakly alternating, then \Cref{t-altern-span} (iii) shows that the last inequality is strict, and therefore $c(D) < c(E)$.
\end{proof}

\begin{remark}
The corollary gives an alternate proof of \Cref{thm-min} for non-split alternating links, as follows. Let $L$ be a non-split alternating link in $\Sigma \times I$, where $\Sigma$ is a closed oriented surface, and let $D \subset \Sigma$ be a minimal crossing cellularly embedded diagram for $L$.
Then \Cref{cor-th} implies that $D$ is an alternating diagram. The argument is completed by appealing to Proposition 6 of \cite{BK-2020}, which shows that alternating link diagrams have minimal genus.
\end{remark}

\section{Crossing number and connected sums} \label{s-connected-sums}

In this section, we study the behavior of the crossing number under connected sum of links in thickened surfaces. This problem is closely related to an old and famous conjecture for classical links, which asserts that, for any two links $L_1, L_2$,
\begin{equation} \label{e-hard}
c(L_1 \# L_2) = c(L_1) + c(L_2).
\end{equation}
This conjecture has been verified for a wide class of links, including alternating links, adequate links, and torus links \cite{Diao-2004}. Clearly, $c(L_1 \# L_2) \leq c(L_1) + c(L_2)$. In addition, Lackenby \cite{Lackenby-2009} proved that, in general, one has a lower bound of the form
$$c(L_1 \# L_2) \geq \tfrac{1}{152} \left( c(L_1) + c(L_2)\right).$$

The operation of connected sum is not so well-behaved for arbitrary links in thickened surfaces. Just as for classical links, it depends on the choice of components which are joined as well as their orientations. However, unless one of the links is in $S^2 \times I$, it also depends on the diagram representatives and on the choice of basepoints $x_i \in D_i$ where the link components are joined. The issue is that a Reidemeister move applied to either of the link diagrams may change the link type of their connected sum. We take a moment to review the construction. Suppose $\Sigma_1$ and $\Sigma_2$ are oriented surfaces and let $\Sigma_1\# \Sigma_2$ denote their connected sum.
It is obtained from the union $(\Sigma_1 \smallsetminus {\rm int}\,B_1) \cup (\Sigma_2 \smallsetminus {\rm int}\,B_2)$ by gluing $\partial B_1\subset \Sigma_1$ to $\partial B_2\subset \Sigma_2$ by an orientation reversing homeomorphism $g \colon \partial B_1 \to \partial B_2$. For connected surfaces, $\Sigma_1\# \Sigma_2$ is independent of the choice of disks $B_i \subset \Sigma_i$ and of the gluing map. If $D_1 \subset \Sigma_1$ and $D_2 \subset \Sigma_2$ are link diagrams, we can choose cutting points $x_i\in D_i$ and disk neighborhoods $B_i \subset \Sigma_i$ such that $B_i\cap D_i$ is an interval for $i=1,2$. Then the surface $\Sigma_1\# \Sigma_2$ can be formed in such a way that $D=(D_1 \smallsetminus {\rm int}\,B_1) \cup (D_2 \smallsetminus {\rm int}\, B_2)$ is a link diagram in $\Sigma_1\# \Sigma_2$. If $D_1,D_2$ are oriented link diagrams, then we require the gluing to respect the orientations of the arcs. The resulting diagram is called a {\bf connected sum} of $D_1$ and $D_2$. In general, it depends on the choice of link diagrams $D_1,D_2$, the components being joined, and the points $x_i \in D_i$. However, it is independent of the choice of disk neighborhoods $B_i$ containing $x_i$. The next result shows that when one of the diagrams lies in $S^2$, the operation of connected sum is well-behaved.

\begin{lemma} \label{l-connect-adeq}
Let $D_1 \subset \Sigma$ and $D_2\subset S^2$ be oriented link diagrams, where $\Sigma$ is an arbitrary surface. Then the connected sum of $D_1$ and $D_2$ is independent of the choice of the cutting points $x_1, x_2$ on the selected components of $D_1$ and of $D_2$. We will denote the connected sum in this case by $D_1\# D_2$. The oriented link type of $D_1\# D_2$ depends only on the link types of $D_1$ and $D_2$ and a choice of which components are joined.
\end{lemma}

\begin{proof}
One can shrink the image of $D_2$ in the connected sum so that all its crossings lie in a small 3-ball $B^3$ in $\Sigma \times I$. By an isotopy, we can move the ball along the arcs of $D_1$ representing the component to which $D_2$ is joined, passing over or under the other arcs at any crossing that we encounter. This shows that the connected sum is independent of the choice of the cut point $x_1$ on $D_1$. Independence of the cut point $x_2$ on $D_2$ follows from the well-known fact that all long knots, or rather $(1,1)$-tangles, obtained by cutting $D_2$ at different points $x_2$ of its specified component are isotopic (as $(1,1)$-tangles). Shrinking $D_2$ into a small 3-ball also allows one to translate any Reidemeister move of $D_1$ or $D_2$ into a Reidemeister move on the connected sum $D_1 \#D_2$. This proves the last statement.
\end{proof}

\begin{proposition}\label{p-conn-sum-adeq}
Any connected sum of two $A$- or $B$-adequate diagrams is itself $A$- or $B$-adequate (respectively).
\end{proposition}

\begin{proof}
Let $D$ be a link diagram in $\Sigma_1 \# \Sigma_2$ obtained as the connected sum of $A$-adequate diagrams $D_1\subset \Sigma_1$ and $D_2\subset \Sigma_2$, and suppose to the contrary that $D$ is not $A$-adequate. By \Cref{p-alt-adeq}, there is a state $S$ for $D$ adjacent to $S_A$ with $t(S,\Sigma_1\#\Sigma_2)=t(S_A, \Sigma_1\#\Sigma_2)+1$ and $|\widehat{S}|=|\widehat{S}_A|$ in $\Sigma_1 \# \Sigma_2$. In particular, $|S|=|S_A|+1$, and the transition from $S_A$ to $S$ involves one cycle of $S_A$ splitting into two cycles. Let $x$ be the crossing of $D$ where the smoothing is changed in the transition from $S_A$ to $S$. We may assume, without loss of generality, that $x$ is a crossing of $D_1$. Let $C$ be the cycle of $S_A$ that splits into the two cycles $C'$ and $C''$ under this transition.
Since $t(S,\Sigma_1\#\Sigma_2)=t(S_A, \Sigma_1\#\Sigma_2)+1$, one of the cycles $C'$ and $C''$, say $C'$, must be trivial. If $C$ is a cycle contained in $S_A(D_1)$, then the same is true for $C'$ and $C''$. However, this contradicts the assumption that $D_1$ is $A$-adequate. Otherwise, $C=C_1\# C_2$ must be a connected sum of a cycle $C_1$ in $S_A(D_1)$ with a cycle $C_2$ in $S_A(D_2)$. In the transition from $S_A$ to $S$, the cycle $C_1\# C_2$ splits into $C_1'\# C_2$ and $C_1''\# C_2$. Further, since $C'=C_1'\# C_2$ is trivial, it follows that $C_1'$ must be trivial in $\Sigma_1$ and $C_2$ must be trivial in $\Sigma_2$.

If $C_1\# C_2$ is trivial, then $C_1''\# C_2$ must also be trivial. That would imply that all three of $C_1, C_1',C_1''$ are trivial in $\Sigma_1$. This again contradicts the assumption that $D_1$ is $A$-adequate, and we take a moment to explain this point. Let $S(D_1)$ be the corresponding state for $D_1$. It is obtained from $S_A(D_1)$ by switching the smoothing at $x$. The transition from $S_A(D_1)$ to $S(D_1)$ involves $C_1$ splitting into $C_1'$ and $C_1''$. Since all three of $C_1, C_1',C_1''$ are trivial in $\Sigma_1$, we have $t(S(D_1)) = t(S_A(D_1))+1$ and $|\widehat{S}(D_1)| = |\widehat{S}_A(D_1)|$ in $\Sigma_1$, which contradicts the assumption of $A$-adequacy of $D_1$.

The other possibility is that $C_1\#C_2$ is non-trivial. Since $C_2$ is trivial in $\Sigma_2$, the cycles $C_1$ and $C_1''$ must both be nontrivial in $\Sigma_1$. The transition from $S_A(D_1)$ to $S(D_1)$ still involves $C_1$ splitting into $C_1'$ and $C_1''$, only now $C_1, C_1''$ are nontrivial and $C_1'$ is trivial in $\Sigma_1$. Thus $t(S(D_1)) = t(S_A(D_1))+1$ and $|\widehat{S}(D_1)| = |\widehat{S}_A(D_1)|$ in $\Sigma_1$, which again contradicts the assumption of $A$-adequacy of $D_1$. Therefore, $D=D_1\#D_2$ must be $A$-adequate. The proof of $B$-adequacy of $D$ is similar.
\end{proof} \begin{corollary}\label{c-connected} Suppose $L_1 \subset \Sigma_1 \times I$ and $L_2 \subset \Sigma_2 \times I$ are links represented by adequate diagrams $D_1 \subset \Sigma_1$ and $D_2 \subset \Sigma_2.$ Then any link $L$ in $(\Sigma_1 \#\Sigma_2)\times I$ admitting a diagram which is a connected sum of $D_1$ and $D_2$ is itself adequate. Further, the crossing number and writhe satisfy $c(L) = c(L_1)+c(L_2)$ and $w(L)=w(L_1)+w(L_2)$. \end{corollary} \begin{proof} Suppose $L$ is represented by $D=D_1 \#D_2 \subset \Sigma_1\#\Sigma_2$. Then $D$ is adequate by \Cref{p-conn-sum-adeq}. Further, by parts (i) and (ii) of \Cref{c-Tait}, we see that: \begin{eqnarray*} c(L)&=& c(D)=c(D_1)+c(D_2)= c(L_1)+c(L_2) \quad \text{and} \\ w(L)&=& w(D)=w(D_1)+w(D_2)= w(L_1)+w(L_2). \qedhere \end{eqnarray*} \end{proof} \begin{example} \label{ex-6knot} \Cref{f-connected-sum} shows a knot diagram $D$ in the genus two surface obtained as the connected sum of two alternating diagrams of the same knot in the torus. One can easily verify that $D$ is reduced and cellularly embedded, but not alternating. Further, \Cref{p-conn-sum-adeq} implies that this diagram is adequate, and therefore a minimal crossing diagram for the knot type. Direct calculation reveals that $t(S_A) = 2, t(S_B) = 0,$ and $|\widehat{S}_A| = |\widehat{S}_B|=1$. Therefore, ${\rm span}([ D ]_\Sigma )=16$. On the other hand, since $4(c(D)+|D|-g(\Sigma)) = 20,$ by \Cref{t-altern-span} (ii), it follows that $D$ is not weakly alternating, and in fact not equivalent to any weakly alternating knot in $\Sigma \times I$.
\begin{figure}[!ht] \centering \includegraphics[width=3in]{Figures/6-crossing.pdf} \caption{A connected sum of alternating diagrams.} \label{f-connected-sum} \end{figure} \end{example} \begin{example} \label{ex-7knot} \Cref{f-7-crossing} shows a knot in a genus two surface with an essential nugatory crossing. Since it is reduced and alternating, \Cref{t-alt-adeq} shows that it is adequate. Note that this diagram is not homologically adequate. In fact, if $S$ is the state with a $B$-smoothing at the nugatory crossing and $A$-smoothings at all the other crossings, then one can show that $|S| = |S_A|+1$ and $k(S)>k(S_A).$ Notice that this knot can also be obtained as the connected sum of two alternating knots $K_1,K_2$ in $T^2 \times I$ with $c(K_i)=3$, but only after performing a Reidemeister one move on one of them to obtain a diagram with four crossings. In particular, this example shows that a connected sum of two diagrams $D_1 \subset \Sigma_1$ and $D_2 \subset \Sigma_2$ can be adequate even when one of them is not adequate. \begin{figure}[ht!] \centering\includegraphics[width=3in]{Figures/7-crossing.pdf} \caption{An alternating diagram with an essential nugatory crossing.} \label{f-7-crossing} \end{figure} \end{example} Suppose $L_1 \subset \Sigma_1 \times I$ and $L_2 \subset \Sigma_2 \times I$ are two alternating links in thickened surfaces with $g(\Sigma_i)>0$ for $i=1,2$. Suppose further that $D_i$ is a link diagram on $\Sigma_i$ representing $L_i$ for $i=1,2$, and that $D_1,D_2$ are both reduced and alternating. Instead of forming the connected sum of $D_1$ and $D_2$ directly, take one of the diagrams and insert an arbitrary number (say $n$) of twists before forming the connected sum. See \Cref{f-twist-connect} for an illustration. The result will be a diagram $D$, which is similar to a connected sum of $D_1$ and $D_2$, but with $n$ essential nugatory crossings in between.
This construction can be carried out so that $D$ is reduced and alternating. In particular, it will have crossing number $c(D) =c(D_1)+c(D_2)+n$. If $L$ denotes the link type of $D$, then since $D_1$ and $D_2$ are alternating diagrams of minimal crossing number, this shows that the analogue of \eqref{e-hard} can fail arbitrarily badly for links in thickened surfaces other than $S^2 \times I$. \begin{figure}[ht!] \centering\includegraphics[width=4.8in]{Figures/twist-connect.pdf} \caption{Adding twists to a connected sum to create essential nugatory crossings.} \label{f-twist-connect} \end{figure} The reason \eqref{e-hard} fails in general for connected sums of links in thickened surfaces is the use of non-minimal diagrams in forming the connected sum. However, if one restricts the connected sum operation to minimal crossing diagrams, then one gets a plausible generalization: \begin{conjecture}\label{c-connected-sum} Suppose $L_1 \subset \Sigma_1 \times I$ and $L_2 \subset \Sigma_2 \times I$ are links in thickened surfaces with minimal crossing representatives $D_1, D_2$, respectively. Then any link $L$ in the thickening of $\Sigma_1\# \Sigma_2$ arising as a connected sum of $D_1$ and $D_2$ satisfies $$c(L) = c(L_1) + c(L_2).$$ \end{conjecture} Note that the assumption that $D_1,D_2$ are minimal crossing representatives immediately implies that $$c(L) \leq c(L_1)+c(L_2).$$ In fact, this inequality may fail without that assumption. This is related to the fact that crossing number is not additive under connected sum for virtual knots. For example, the Kishino knot is the connected sum of two virtual unknots. As evidence, notice that \Cref{c-connected} confirms that the conjecture is true if $L_1$ and $L_2$ are adequate links in thickened surfaces. In particular, it holds for alternating and weakly alternating links.
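To spell out the easy inequality noted above: since $D_1$ and $D_2$ realize the crossing numbers of $L_1$ and $L_2$, any connected sum diagram already gives an upper bound,
$$c(L) \;\leq\; c(D_1 \# D_2) \;=\; c(D_1)+c(D_2) \;=\; c(L_1)+c(L_2),$$
so the content of \Cref{c-connected-sum} is the assertion that no diagram of $L$ has fewer crossings than such a connected sum diagram.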
\section{Proofs of Theorems \ref{t-adequate-min-cross}, \ref{t-alt-adeq}, and \ref{t-altern-span}} \subsection{Proof of \Cref{t-adequate-min-cross}} \label{s-proof-adequate-min} Given a link diagram $D$ on $\Sigma$ and a positive integer $r$, the {\bf $r$-th parallel} of $D$ is the link diagram $D^r$ on $\Sigma$ in which each link component of $D$ is replaced by $r$ parallel copies, with each one repeating the same ``over'' and ``under'' behavior of the original component. \begin{lemma} If $D$ is $A$-adequate, then $D^r$ is also $A$-adequate. If $D$ is $B$-adequate, then $D^r$ is also $B$-adequate. \end{lemma} \begin{proof} Let $S_{A}(D)$ and $S_{A}(D^r)$ be the pure $A$-smoothing states of $D$ and of $D^r$, respectively. It is straightforward to check that $S_{A}(D^r)$ is the $r$-th parallel of $S_{A}(D)$. Suppose $D^r$ is not $A$-adequate. Then there is a state $S'$ obtained by switching one $A$-smoothing in $S_{A}(D^r)$ to a $B$-smoothing, such that $t(S_{A}(D^r))<t(S')$ and $\widehat{S}_{A}(D^r)=\widehat{S'}$. In the terminology of the proof of \Cref{p-alt-adeq}, that can only happen for a smoothing change of type (i); more specifically, when the smoothing change involves one of the innermost cycles in $S_{A}(D^r)$ which is self-abutting and which, when split, creates a new trivial cycle in $S'$. That is only possible if there is a self-abutting cycle in $S_{A}(D)$ which, when split, creates a new trivial cycle. Since $D$ is $A$-adequate, this cannot happen. An analogous argument proves the statement for $B$-adequate diagrams. \end{proof} \noindent{\it Proof of \Cref{t-adequate-min-cross}.} \noindent (i) Since $$c(D)-w(D)=c_{+}(D)+c_{-}(D)-(c_{+}(D)-c_{-}(D))= 2c_-(D),$$ we will prove that $$c(D)-w(D)\leq c(E)-w(E).$$ Our argument is an adaptation of Stong's proof \cite{Stong-1994} (cf.\ Theorem 5.13 of \cite{Lickorish}).
Let $L_1, \ldots, L_m$ be the components of $L$ and let $D_i$ and $E_i$ be the subdiagrams of $D$ and $E$ corresponding to $L_i$. For each $i =1,\ldots, m,$ choose non-negative integers $\mu_i$ and $\nu_i$ such that $w(D_i)+\mu_i=w(E_i)+\nu_i$. Let $D'$ be composed of components $D_1',\ldots, D_m'$, where each $D'_i$ is obtained from $D_i$ by adding $\mu_i$ positive kinks to it. (These kinks do not cross other components.) Similarly, let $E'$ be composed of components $E_1',\ldots, E_m'$, where each $E'_i$ is obtained from $E_i$ by adding $\nu_i$ positive kinks to it. Notice that $D'$ is still $A$-adequate. The writhes of the individual components satisfy: $$w(D'_{i})=w(D_i)+\mu_i=w(E_i)+\nu_i=w(E'_{i}).$$ Further, the sum of the signs of the crossings of $D_i'\cap D_j'$ coincides with the sum of the signs of the crossings of $E_i'\cap E_j'$, since both are equal to twice the linking number of $L_i$ and $L_j$. Thus $w(D')=w(E')$. Now, for any $r$, consider the $r$-th parallels $(D')^r$ and $(E')^r$. Then $w((D')^r)=r^2 w(D')$, because each crossing of $D'$ corresponds to $r^2$ crossings in $(D')^r$ of the same sign. The diagrams $(D')^r$ and $(E')^r$ are equivalent and have the same writhe, so their Kauffman brackets must be equal. In particular, we have $d_{\max}([ (D')^r]_\Sigma)=d_{\max}([ (E')^r ]_\Sigma)$. \Cref{p-dminmax} now implies that \begin{eqnarray*} d_{\max}([(D')^r ]_\Sigma)&=& \left(c(D)+\sum_{i=1}^m \mu_i\right)r^2+2\left(t(S_{A}(D))+\sum_{i=1}^m \mu_i\right)r,\\ d_{\max}([(E')^r ]_\Sigma)&\leq& \left(c(E)+\sum_{i=1}^m \nu_i\right)r^2+2\left(t(S_A(E))+\sum_{i=1}^m \nu_i \right)r. \end{eqnarray*} Since this is true for all $r$, by comparing coefficients of the $r^2$ terms, we find that: \begin{equation}\label{eqn:compare} c(D)+\sum_{i=1}^m \mu_i \leq c(E)+\sum_{i=1}^m \nu_i.
\end{equation} Subtracting $\sum_{i=1}^m (\mu_i +w(D_i))= \sum_{i=1}^m (\nu_i + w(E_i))$ from both sides of \eqref{eqn:compare}, we get that \begin{equation}\label{eqn:new} c(D)-\sum_{i=1}^m w(D_i)\leq c(E)-\sum_{i=1}^m w(E_i). \end{equation} Since $w(D)-\sum_{i=1}^m w(D_i)$ and $w(E)-\sum_{i=1}^m w(E_i)$ are both equal to the sum of the signs of the inter-component crossings, which is determined by the total linking number of $L$, subtracting this common quantity from both sides of \eqref{eqn:new} gives the desired inequality. \noindent The proof of (ii) is analogous. One adds negative kinks to $D$ and $E$ in this case. \qed \subsection{Proof of \Cref{t-alt-adeq}} \label{s-proof-alt-adeq} A link diagram $D$ on $\Sigma$ is {\bf alternable} if it can be made alternating by inverting some of its crossings. Every classical link diagram is alternable, but the same is not true for link diagrams in arbitrary surfaces. For example, the knot diagram in the torus in \Cref{f-2.1} is not alternable. \begin{figure}[!ht] \centering \includegraphics[height=30mm]{Figures/2-1-in-torus.pdf} \caption{A knot diagram in the torus which is not alternable.} \label{f-2.1} \end{figure} A link diagram $D$ on $\Sigma$ is {\bf checkerboard colorable} if the components of $\Sigma\smallsetminus D$ can be colored by two colors such that any two components of $\Sigma\smallsetminus D$ that share an edge have opposite colors. \begin{proposition}\label{p-alter-cc} A minimally embedded link diagram $D$ on $\Sigma$ is alternable if and only if it is checkerboard colorable. \end{proposition} \begin{proof} Observe that filling the boundaries of $\Sigma$ with disks does not affect alternability or checkerboard colorability. Likewise, removing disks from $\Sigma\smallsetminus D$ also does not affect alternability or checkerboard colorability. This has two consequences: (a) It is enough to prove this statement for surfaces $\Sigma$ with all boundary components capped, i.e., for closed surfaces.
(b) Since Kamada proved that if a diagram $D$ is a deformation retract of $\Sigma$ then it is alternable if and only if it is checkerboard colorable, \cite[Lemma 7]{Kamada-2002}, our statement holds for cellularly embedded diagrams. Our strategy is to reduce the proof to this case of cellular embeddings. Suppose that $C$ is a non-disk component of $\Sigma \smallsetminus D$. Then it contains a non-contractible simple closed loop $\alpha$. Let $\Sigma'$ be obtained by cutting $\Sigma$ along $\alpha$ and capping the boundary components. The loop $\alpha$ must separate $\Sigma$, since otherwise $D\hookrightarrow \Sigma'$ would be a lower genus embedding of $D.$ Observe now that $\Sigma$ is a connected sum of two surfaces $\Sigma_1\# \Sigma_2$, where $\Sigma_1\cup \Sigma_2=\Sigma'$ and $D$ is the disjoint union of $D\cap \Sigma_1$ and $D\cap \Sigma_2$; hence it is enough to prove that $D\subset \Sigma_i$ is checkerboard colorable for $i=1,2.$ By repeating this process as long as possible, we reduce the statement to cellularly embedded diagrams, which is covered by (b) above. \end{proof} \begin{lemma}\label{lem-24} Any alternable diagram can be extended by disjoint simple closed loops to a checkerboard colorable one. \end{lemma} \begin{proof} The surface $N_D\subset \Sigma$, being a regular neighborhood of $D$, is checkerboard colorable by the aforementioned result of Kamada, \cite[Lemma 7]{Kamada-2002}. The only reason that coloring may not extend to $D\subset \Sigma$ is that some connected components $C$ of $\Sigma \smallsetminus {\rm int}\, N_D$ may have multiple connected components of their boundary whose neighborhoods are colored differently. However, that issue can be resolved by adding simple closed loops around those boundary components of $C$ which are white. \end{proof} \noindent{\it Proof of \Cref{t-alt-adeq}:} Let $D$ be an alternating diagram without removable crossings.
By \Cref{lem-24}, adding disjoint simple closed loops to $D$ yields a diagram $D'$ which is alternating and checkerboard colorable. Hence, it is enough to prove that $D'$ is adequate. Let us assume, for simplicity of notation, that $D$ itself is checkerboard colorable. We will prove the $A$-adequacy of $D$ only, as the proof of $B$-adequacy is identical. Let $S$ be a state with all $A$-smoothings except for a $B$-smoothing at a crossing $x$ of $D$. We will prove that $D$ is $A$-adequate ``at $x$,'' meaning that $t(S)\leq t(S_A)$ or $\widehat{S}\ne \widehat{S}_A$ in $\mathscr{S}(\Sigma \times I).$ As in the proof of \Cref{p-alt-adeq}, there are three cases, and to check adequacy it is enough to check that the conditions of \Cref{defn:h-adequate} hold in the first case, namely when $|S| =|S_A|+1.$ In this case, $S_A$ must contain a self-abutting cycle $C$, and in the transition from $S_A$ to $S$, the cycle $C$ splits into two cycles $C_1,C_2$ of $S$. Since $D$ is alternating and checkerboard colorable, $S_A$ bounds a subsurface $\Sigma'$ of $\Sigma$ of a certain color, say white, which contains no crossings of $D$. \begin{figure}[h] \centering \includegraphics[width=2in]{Figures/alternating-eg.pdf} \caption{} \label{fig:alternating} \end{figure} We claim that neither $C_1$ nor $C_2$ is trivial. Indeed, if say $C_1$ were trivial, then there would be a loop $\gamma$ parallel to $C_1$ lying entirely inside $\Sigma'$ except in a small neighborhood of $x$, where it would pass through the crossing $x$. Such a curve would imply that the crossing $x$ is removable (see for example \Cref{fig:alternating}), which is a contradiction. Therefore, neither $C_1$ nor $C_2$ is trivial, and it follows that $t(S)= t(S_A).$ Therefore, $D$ is $A$-adequate at $x$, and this completes the proof of the theorem.
\qed \subsection{Link diagrams and shadows} \label{s-shadow} A {\bf link shadow} in $\Sigma$ is a $4$-valent graph in $\Sigma$, possibly with loop components. In other words, a shadow is a link diagram with crossing types ignored. For that reason we refer to shadow vertices as crossings, and to the components of any link realization of a shadow as its link components. (These are not to be confused with the connected components of a shadow.) Some properties of a link diagram are entirely determined by its link shadow. For example, we will say that a link shadow $D$ on $\Sigma$ is {\bf checkerboard colorable} if the components of $\Sigma\smallsetminus D$ can be colored by two colors such that any two components of $\Sigma\smallsetminus D$ that share an edge have opposite colors. Clearly, a link diagram is checkerboard colorable if and only if its link shadow is. Similarly, a link shadow is {\bf minimally embedded} if it does not lie in a subsurface of $\Sigma$ of smaller genus, and it is immediate that a link diagram on $\Sigma$ is minimally embedded if and only if its link shadow is. Each shadow crossing has two smoothings, which cannot be distinguished as $A$- and $B$-type as in the case of link diagrams. For that reason, for link shadows it is customary to place markers at the crossings indicating the smoothing, as in \Cref{marker}. \begin{figure}[!ht] \centering \includegraphics[height=23mm]{Figures/marker.pdf}\quad \caption{Two types of markers for a state of a link shadow.} \label{marker} \end{figure} Two consecutive crossings can have identical or opposite smoothings; see \Cref{alt-markers}. An {\bf alternating state} of a shadow is one with alternating crossing smoothings along all of its link components. In other words, a state is alternating if the smoothings at every pair of consecutive crossings are opposite. Not all link shadows admit alternating states; for example, the shadow of the non-alternable knot in the torus in \Cref{f-2.1} does not.
On the other hand, any link shadow of an alternating link diagram admits two alternating states, namely the shadow states coming from $S_A$ and $S_B$. Given a state $S$ for a link shadow $D$, the dual state is denoted $S^{\vee}$ and has the opposite smoothing to $S$ at each crossing of $D$. Notice that a state $S$ is alternating if and only if its dual state $S^{\vee}$ is alternating. We say that a $2$-disk $D^2$ is $2$-cutting, or simply {\bf cutting}, for a shadow $D$ if its boundary intersects $D$ transversely at two points (which are not crossings) and $D^2\cap D$ contains some but not all of the crossings of $D$. A connected shadow $D$ is said to be {\bf strongly prime} if it has no cutting disk. More generally, a shadow $D$ is strongly prime if all of its connected components are. \begin{figure}[!ht] \centering \includegraphics[height=20mm]{Figures/alt-markers.pdf}\quad \caption{Two consecutive crossings with identical markers (left) and opposite markers (right).} \label{alt-markers} \end{figure} \begin{lemma}\label{l-strongly-prime} Every crossing of every strongly prime shadow $D\subset \Sigma$ has at least one smoothing producing a shadow which is again strongly prime. If $D$ is connected, then the smoothing can be chosen so that the resulting shadow is connected and strongly prime. \end{lemma} For classical links, a proof of this statement can be found in \cite{Lickorish}. That proof relies on checkerboard colorability of the diagram, which is of course automatic for classical links. Below, we give a proof that does not require the shadow to be checkerboard colorable. \begin{proof} Without loss of generality, we can assume that $D$ is connected. Assume now that the two smoothings of a crossing $v$ in a strongly prime shadow $D$ produce shadows $D_1, D_2$, neither of which is strongly prime. Let $B_1,B_2$ be cutting disks for $D_1$ and $D_2.$ Since $D$ is strongly prime, we can assume that $v\in \partial B_i$ for $i=1,2$.
We can also assume that $\partial B_1$ and $\partial B_2$ are in transverse position. Let $C$ be the connected component of $B_1\cap B_2$ containing $v$, as in \Cref{f-sprime1} (left). The circles $\partial B_1$ and $\partial B_2$ are drawn broken because they may intersect each other many times. By modifying $B_1$ or $B_2$ slightly if necessary, we can assume that $D$ does not contain the second intersection point, $w$, of $\partial B_1\cap \partial B_2$ in $C$. Let $\alpha_1={\rm int}(C\cap \partial B_1)$ and $\alpha_2={\rm int}(C \cap \partial B_2)$. (Note that $v\not\in\alpha_1\cup\alpha_2$.) Since $D$ intersects $\partial B_i\smallsetminus\{v\}$ twice, for $i=1,2,$ and $D$ intersects $\alpha_1\cup \alpha_2$ at an odd number of points, we have the following possibilities: \\ (1) $|D\cap \alpha_2|=1,$ $D\cap \alpha_1= \varnothing$.\\ (2) $|D\cap \alpha_2|=2,$ $|D\cap \alpha_1|= 1$.\\ (3) one of the two cases above with the roles of $\alpha_1$ and $\alpha_2$ interchanged, which we may ignore without loss of generality. \begin{figure}[!ht] \centering \includegraphics[height=28mm]{Figures/1.pdf}\quad \includegraphics[height=28mm]{Figures/2.pdf}\quad \includegraphics[height=28mm]{Figures/3.pdf} \caption{} \label{f-sprime1} \end{figure} In the first case, $D$ is as in \Cref{f-sprime1} (center), where $S,T$ (in dashed circles) are shadow tangles. In that case, since neighborhoods of $S$ and $T$ are not cutting disks for $D$, the tangles $S,T$ are crossingless. That means that $B_2$ is not a cutting disk for $D_2$ -- a contradiction. In the second case, $D$ is as in \Cref{f-sprime1} (right), where $R,S,T$ are shadow tangles. Note that all crossings of $D$, other than $v$, are contained in $R, S$ or $T$, since otherwise a disk containing $v$, $R, S$ and $T$ but no other crossings of $D$ would be cutting for $D$. Note also that, as in the first case, $T$ is crossingless. That means that all crossings of $D_1$ are in $R$ and $S$.
Hence, $B_1$ is not cutting for $D_1$ -- a contradiction. Now assume that $D$ is connected. Then one of the smoothings of $D$ at $v$ will be connected. Let $D'$ denote the connected shadow obtained by smoothing $D$ at $v$, and assume the other smoothing is disconnected. We claim that $D'$ is strongly prime. Assume to the contrary that $D'$ is not strongly prime. Then there is a cutting disk $B$ containing some but not all of the crossings of $D'$ (see \Cref{f-newpic}). We can assume that $v\in \partial B$ and that $D'$ is obtained by the smoothing of $v$ tangential to $\partial B.$ However, since the other smoothing of $D$ at $v$ is disconnected, the strands from the tangles $R$ and $T$ cannot cross each other. The neighborhoods of $R$ and $T$ give cutting disks for $D$ unless the tangles $R,T$ are crossingless; but then $B$ would not be a cutting disk for $D'$, which is a contradiction. \end{proof} \begin{figure}[!ht] \centering \includegraphics[height=28mm]{Figures/3a.pdf} \caption{A cutting disk for $D'$.} \label{f-newpic} \end{figure} Suppose $D \subset \Sigma$ is a link shadow. Let $N_D$ denote a neighborhood of $D$ in $\Sigma$ small enough that it is a ribbon surface which deformation retracts onto $D$. A \textbf{local checkerboard coloring} of $D$ is a checkerboard coloring of $D\subset N_D$. If one exists, we say that $D$ is \textbf{locally checkerboard colorable}. (The pair $(D, N_D)$ is the shadow of an abstract link diagram \cite{Kamada-2000}, or ALD for short. This condition is equivalent to saying that $(D, N_D)$ is the shadow of a checkerboard colorable ALD.) Obviously, if $D \subset \Sigma$ is checkerboard colorable, then it is locally checkerboard colorable. The converse holds if $D\subset \Sigma$ is cellularly embedded, but in general, a shadow can be locally checkerboard colorable without being checkerboard colorable. \begin{lemma}\label{alt-check} Suppose $D \subset \Sigma$ is a link shadow.
Then $D$ is locally checkerboard colorable if and only if it admits an alternating state. \end{lemma} \begin{proof} If $D$ is locally checkerboard colorable, then let $S$ be the state whose smoothing at each crossing joins the white regions. Then $S$ is an alternating state. Conversely, suppose $S$ is an alternating state. Let $\widehat{\Sigma}$ be the surface obtained from $N_D$ by attaching disks to each of its boundary components. Then $D \subset \widehat{\Sigma}$ is cellularly embedded. We can color $\widehat{\Sigma} \smallsetminus D$ so that each cycle in $S$ bounds a black disk and each cycle in $S^\vee$ bounds a white disk. To see this, notice that at each smoothing of $S$, two local regions are joined. We can color the joined regions white and extend the coloring to the rest of $\widehat{\Sigma} \smallsetminus D$. This determines a local checkerboard coloring of $D$. \end{proof} If $S$ and $S'$ are adjacent states on a shadow $D$ with $|S'|=|S|,$ then the transition from $S$ to $S'$ is called a \textbf{single cycle bifurcation}. \begin{lemma}\label{cc-single} A connected shadow $D$ is locally checkerboard colorable if and only if there is no single cycle bifurcation in its cube of resolutions. \end{lemma} \begin{proof} For one implication, we apply \cite[Proposition 5.11]{Karimi-2019} to see that if $D$ is locally checkerboard colorable, then its cube of resolutions does not contain any single cycle bifurcations. The other implication is proved by induction on the crossing number. To start, we verify it for 1-crossing shadows, which can be classified into the {\bf first type} $\ \diag{31}{0.26in} \ $ or the {\bf second type} $\ \diag{32}{0.34in}$. The shadows of the first type are locally checkerboard colorable, and those of the second type are not.
The cubes of resolutions for these shadows are $\bullet \to \bullet$, and they have just one edge, which is a split/join for the shadow of the first type and a single cycle bifurcation for the shadow of the second type. Now assume the lemma has been proved for all connected shadows with fewer than $n$ crossings. Let $D$ be a connected shadow with $n$ crossings. We will show that if $D$ is not locally checkerboard colorable, then there is a single cycle bifurcation in its cube of resolutions. Pick a crossing $x$ and let $D'$ be the diagram obtained by smoothing $D$ at $x$. (It does not matter which smoothing is chosen.) Assume first that $D'$ is not locally checkerboard colorable. By induction, the cube of resolutions for $D'$ contains a single cycle bifurcation. Since the cube of resolutions of $D'$ is a face of the cube of resolutions of $D,$ the result follows. On the other hand, if $D'$ is locally checkerboard colorable, then by \Cref{alt-check}, it admits an alternating state $S'$. We color $N_{D'}\smallsetminus D'$ consistently, so that the smoothings of $S'$ join white regions. Let $S$ be the state of $D$ which coincides with $S'$ and has the chosen smoothing at $x$, and let $S^\vee$ be its dual state. Switching the smoothing of $x$ in $S^\vee$, we obtain $S'^\vee$, considered as a state of $D$. The ribbon surface $N_{D}$ is obtained by adding a 2-dimensional 1-handle (a band) to $N_{D'}$. Unless the transition from $S'^\vee$ to $S^\vee$ is a single cycle bifurcation, we can extend the coloring of $(N_{D'}, D')$ to $(N_{D}, D)$. Since $D$ is not locally checkerboard colorable, the transition from $S'^\vee$ to $S^\vee$ must be a single cycle bifurcation. \end{proof} Recall that $r(D)$ denotes the rank of the image of $i_* \colon H_1(D; \mathbb Z/2) \to H_1(\Sigma; \mathbb Z/2).$ Any connected shadow is homotopy equivalent to a bouquet of circles. If $D$ has $c(D)$ crossings, then $\chi(D)=-c(D)$.
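The bound on $r(D)$ used below can be made explicit by a routine Euler characteristic count (a restatement of the facts just mentioned): a connected $4$-valent graph with $c(D)$ vertices has $2c(D)$ edges, so
$$\chi(D) \,=\, c(D)-2c(D) \,=\, -c(D), \qquad \dim_{\mathbb Z/2} H_1(D;\mathbb Z/2) \,=\, 1-\chi(D) \,=\, c(D)+1,$$
and $r(D)$, being the rank of the image of $i_*$, is bounded above by this dimension.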
It follows that $0 \leq r(D) \leq c(D) + 1$ for connected shadows with $c(D)$ crossings. \begin{proposition}\label{p-non-alt-ineq} Let $D$ be a link shadow in $\Sigma$ (not necessarily connected). \begin{itemize} \item[(i)] If $S$ is a state of $D$, then $$t(S)+t(S^\vee)\leq c(D)+2|D|-r(D). $$ \item[(ii)] If $D$ is not locally checkerboard colorable, then for any state $S$ of $D$, $$t(S)+t(S^\vee)< c(D)+2|D|-r(D).$$ \item[(iii)] If $D$ is strongly prime and $S$ is non-alternating, then $$t(S)+t(S^\vee)< c(D)+2|D|-r(D).$$ \end{itemize} \end{proposition} \begin{proof} Let us write $\Sigma = \Sigma_1 \cup \dots \cup \Sigma_n$ as a disjoint union of connected components. Any component disjoint from $D$ does not contribute to the terms in (i), (ii) and (iii), so it can be discarded. Therefore, we can assume that $D_i= D \cap \Sigma_i \neq \varnothing$ for $i=1, \dots, n.$ Since all terms in the inequalities of the statements are additive under taking disjoint unions of surfaces, it is enough to prove the statement for $\Sigma$ connected. On the other hand, if $D=D_1 \cup D_2$ is disconnected, then $r(D)\leq r(D_1)+r(D_2)$. Thus $r(D)$ is subadditive, and since the other terms on the right-hand side of (i), (ii) and (iii) are additive, it is enough to prove the proposition for connected shadows in connected surfaces. Assume henceforth that $\Sigma$ is a connected surface. Let us now prove the statement for single crossing abstract shadows $D$. Recall from the proof of Lemma \ref{cc-single} that single crossing shadows $D$ are of two types. For both of them, $r(D) \leq 2.$ If $r(D)=0$, then $t(S)+t(S^\vee)= 2$. If $r(D)=1$ or $2$, then $t(S)+t(S^\vee)\leq 1$. Therefore, statement (i) holds for 1-crossing shadows. Since shadows of the first type are locally checkerboard colorable and $t(S)=t(S^\vee)=0$ for shadows of the second type, statements (ii) and (iii) hold as well.
The proof of (i) proceeds by induction on the crossing number $c(D)$. Let $D$ be a connected shadow in $\Sigma$ with $c(D) \geq 2$ crossings. We assume that statement (i) has been established for all connected shadows in $\Sigma$ with fewer than $c(D)$ crossings. Let $D'$ be the shadow resulting from smoothing at a crossing $x$ of $D$. We choose the smoothing so that $D'$ is connected. Notice that \begin{equation}\label{r-eqn} r(D)-1 \leq r(D') \leq r(D). \end{equation} Let $S$ be a state of $D$. The chosen smoothing of $x$ coincides either with the smoothing of $x$ in $S$ or with that in $S^\vee$; without loss of generality, we can assume that it coincides with the smoothing of $x$ in $S$. Then $S$ induces a state on $D'$, denoted $S'$. Clearly, $t(S')=t(S).$ The dual state $S'^\vee$ of $S'$ differs from $S^\vee$ at $x$ only. The states $S^\vee$ and $S'^\vee$ are adjacent in the cube of resolutions of $D$. Thus \begin{equation}\label{t-eqn} t(S'^\vee)-1\leq t(S^\vee) \leq t(S'^\vee)+1. \end{equation} \begin{lemma}\label{in-line-lemma} Either $r(D')= r(D)$ or $t(S^\vee) \leq t(S'^\vee)$. \end{lemma} \begin{proof} Assume that $t(S^\vee) > t(S'^\vee)$. Then either two trivial loops in $S^\vee$ join to make a trivial loop in $S'^\vee$, or a trivial and a nontrivial loop in $S^\vee$ join to make a nontrivial loop in $S'^\vee$, or a trivial loop in $S^\vee$ splits to make two nontrivial loops in $S'^\vee$. In each case, $r(D') = r(D).$ \end{proof} We now prove the inductive step for part (i). By Lemma \ref{in-line-lemma}, there are two possibilities.
If $r(D')=r(D)$, then \Cref{t-eqn} and the inductive assumption imply that $$t(S)+t(S^\vee)\leq t(S')+t(S'^\vee)+1 \leq c(D')+2-r(D')+1=c(D)+2-r(D).$$ On the other hand, if $r(D')\ne r(D)$, then $t(S^\vee) \leq t(S'^\vee)$, and \Cref{r-eqn} and the inductive assumption imply that $$t(S)+t(S^\vee)\leq t(S')+t(S'^\vee) \leq c(D')+2-r(D')=c(D)+2-r(D).$$ This completes the proof of part (i). We prove part (ii) also by induction on $c(D)$. Let $D$ be a connected shadow in $\Sigma$ with $c(D) \geq 2$ crossings, and assume $D$ is not locally checkerboard colorable. We assume that statement (ii) has been established for all connected shadows in $\Sigma$ with fewer than $c(D)$ crossings that are not locally checkerboard colorable. By Lemma \ref{cc-single}, there is a single cycle bifurcation in the cube of resolutions of $D$. Let $D'$ be the shadow resulting from smoothing $D$ at a crossing $x$, where we assume that $D'$ is connected and that the smoothing at $x$ coincides with the smoothing of $x$ in $S$. If $D'$ is locally checkerboard colorable, then the transition from $S^\vee$ to $S'^\vee$ must be a single cycle bifurcation, for otherwise the local checkerboard coloring would extend from $D'$ to $D$. Since the transition is a single cycle bifurcation, we have $t(S^\vee)=t(S'^\vee)$ and $r(D)=r(D')$. Therefore, applying part (i) to $D'$, we see that $$t(S)+t(S^\vee) = t(S)+t(S'^\vee) \leq c(D')+2-r(D') < c(D)+2-r(D).$$ If $D'$ is not locally checkerboard colorable, then we can apply the inductive hypothesis for part (ii) to $D'$ and deduce the desired strict inequality just as before. This completes the proof of (ii). The last step is to prove statement (iii). We begin by verifying (iii) for connected shadows with one or two crossings. For a single crossing shadow $D$ of the first type, both states are alternating, so (iii) is vacuously true.
Single crossing shadows of the second type are not locally checkerboard colorable, and so the result follows from (ii). \begin{figure}[!ht] \centering \includegraphics[height=35mm]{Figures/11.pdf} \hspace{4mm} \includegraphics[height=35mm]{Figures/12.pdf} \hspace{4mm} \includegraphics[height=35mm]{Figures/13.pdf} \hspace{4mm} \includegraphics[height=35mm]{Figures/14.pdf} \hspace{4mm} \includegraphics[height=35mm]{Figures/15.pdf} \caption{Connected shadow diagrams with $2$ crossings.} \label{f-shadows} \end{figure} All abstract connected $2$-crossing shadows $D$ are depicted in \Cref{f-shadows}. For a type 1 shadow $D$, its non-alternating states appear in \Cref{f-2122} (left). Note that $0\leq r(D) \leq 3$ and $0\leq t(S),t(S^\vee)\leq 1.$ If $r(D)= 0$ or $1$, then $t(S)+t(S^\vee) \leq 2$ and $3 \leq c(D)+2-r(D)$. Thus (iii) holds in this case. If $r(D)=2$ or $3$, then $t(S)=t(S^\vee)=0$, and statement (iii) holds. For a type 2 shadow $D$, its non-alternating states are shown in \Cref{f-2122} (right). Note that $0\leq r(D) \leq 3$ and $0\leq t(S),t(S^\vee)\leq 2.$ Since $D$ is strongly prime, $r(D)>0$ and $t(S),t(S^\vee)\leq 1.$ If $r(D)=1$, then $t(S)+t(S^\vee) \leq 2$; if $r(D)=2$, then $t(S)+t(S^\vee)\leq 1$; and if $r(D)=3$, then $t(S)+t(S^\vee)=0$. In all three cases, statement (iii) is seen to hold. \begin{figure}[!h] \centering \begin{subfigure}{0.36\textwidth} \centering \includegraphics[height=25mm]{Figures/21.pdf}\hspace{.3cm}\includegraphics[height=25mm]{Figures/22.pdf} \subcaption{Type 1.} \end{subfigure} \begin{subfigure}{0.36\textwidth} \centering \includegraphics[height=32mm]{Figures/23.pdf}\hspace{.4cm}\includegraphics[height=32mm]{Figures/24.pdf}
\subcaption{Type 2.} \end{subfigure} \caption{Non-alternating states on 2-crossing shadows of type 1 and 2.}\label{f-2122} \end{figure} Note that none of the shadows of the third, fourth, and fifth type are locally checkerboard colorable. Therefore statement (iii) follows from (ii) in these cases. The proof of (iii) proceeds by induction on the crossing number $c(D)$. Let $D$ be a strongly prime connected shadow in $\Sigma$. By (ii), we can assume that $D$ is locally checkerboard colorable. We assume additionally that $c(D) \geq 3$ and that statement (iii) has been established for all strongly prime shadows in $\Sigma$ with fewer than $c(D)$ crossings. Let $S$ be a non-alternating state for $D$. Then $S$ has two consecutive smoothings that are identical, and we choose a third crossing $x$ of $D$. By \Cref{l-strongly-prime}, one of the smoothings of $x$ yields a shadow which is connected and strongly prime. Let $D'$ be the resulting shadow. As before, we assume that the smoothing at $x$ coincides with the smoothing of $x$ in $S$. The state $S$ induces a state on $D'$, denoted $S'$, which is non-alternating. Since $D'$ is connected, one can apply Lemma \ref{in-line-lemma} as before and argue again by induction that (iii) holds for $D$. \end{proof} \subsection{Proof of \Cref{t-altern-span}} \label{s-proof-altern-span} Part (i) follows immediately by combining \Cref{cor-adequate} and \Cref{p-non-alt-ineq} (i). For parts (ii) and (iii), note that if $D$ is a connected sum of $D_0\subset \Sigma$ and $D_1, \ldots, D_k\subset S^2$, then \begin{equation} \label{e-csf} [D]_\Sigma = \delta^{-k} [D_0]_\Sigma\cdot \prod_{i=1}^k [D_i]_{S^2}. \end{equation} Therefore, it is enough to prove parts (ii) and (iii) for \emph{prime} diagrams (alternating for (ii) and non-alternating for (iii)). The condition that $D$ is prime implies that it is not a nontrivial connected sum diagram as above.
More precisely, a link diagram $D$ on $\Sigma$ is said to be {\bf prime} if any contractible simple loop $\gamma$ in $\Sigma$ that meets $D$ transversely at two points bounds a 2-disk that intersects $D$ in an unknotted arc (possibly with self-crossings). Proof of (iii): Assume $D$ is prime. If the shadow diagram of $D$ is strongly prime, then the statement follows from \Cref{cor-adequate} and \Cref{p-non-alt-ineq} (iii). If it is not strongly prime, then $D$ must contain a self-crossing trivial arc. Let $D'$ be obtained by replacing it by a simple trivial arc. Since ${\rm span}([D]_\Sigma)$ is invariant under Reidemeister moves and $r(D')=r(D)$, $${\rm span}([D]_\Sigma)={\rm span}([D']_\Sigma)\leq 4c(D')+4|D'|-2r(D')< 4c(D)+4|D|-2r(D),$$ by part (i). Our proof of (ii) follows that of \cite[Theorem 2.9]{Boden-Karimi-2019}. Since both sides of the equality in (ii) are additive under disjoint union of diagrams, it is enough to prove it for connected diagrams. By \Cref{p-alter-cc}, $D$ is checkerboard colorable. Then all regions of one color, say white, are enclosed by the cycles in the state $S_A$ of $D$, and all regions of the other color, i.e., black, are enclosed by the cycles in the state $S_B$. Therefore, the numbers of white and black regions are $t(S_A)$ and $t(S_B)$, respectively. Since $D$ defines a cellular decomposition of $\Sigma$ into $c(D)$ 0-cells, $2c(D)$ $1$-cells, and $t(S_A)+t(S_B)$ $2$-cells, $$2-2g(\Sigma)=\chi(\Sigma)=c(D)-2c(D)+t(S_A)+t(S_B),$$ and $$t(S_A)+t(S_B)=c(D)+2-2g(\Sigma).$$ By \Cref{p-dminmax}, \begin{align*} {\rm span}([D]_\Sigma)&= d_{max}([D]_\Sigma)-d_{min}([D]_\Sigma)\\ &= 2c(D) + 2t(S_A)+2t(S_B)\\ &= 4c(D)+4-4g(\Sigma). \qedhere \end{align*} \newcommand{\etalchar}[1]{$^{#1}$} \begin{thebibliography}{AARH{\etalchar{+}}19} \bibitem[AAR{\etalchar{+}}19]{Adams-2019a} Colin Adams, Carlos Albors-Riera, Beatrix Haddock, Zhiqi Li, Daishiro Nishida, Braeden Reinoso, and Luya Wang.
\newblock Hyperbolicity of links in thickened surfaces. \newblock {\em Topology Appl.}, 256:262--278, 2019. \bibitem[AEG{\etalchar{+}}19]{Adams-2019c} Colin Adams, Or~Eisenberg, Jonah Greenberg, Kabir Kapoor, Zhen Liang, Kate O'Connor, Natalia Pacheco-Tallaj, and Yi~Wang. \newblock Turaev hyperbolicity of classical and virtual links, 2019. \newblock \href{https://arxiv.org/pdf/1912.09435.pdf}{ArXiv/1912.09435}, to appear in Alg. Geom. Topol. \bibitem[AFLT02]{Adams} Colin Adams, Thomas Fleming, Michael Levin, and Ari~M. Turner. \newblock Crossing number of alternating knots in {$S\times I$}. \newblock {\em Pacific J. Math.}, 203(1):1--22, 2002. \bibitem[BaK20]{Bavier-Kalfagianni-2020} Brandon Bavier and Efstratia Kalfagianni. \newblock Guts, volume and skein modules of 3-manifolds, 2020. \newblock \href{https://arxiv.org/pdf/2010.06559.pdf}{ArXiv/2010.06559}. \bibitem[BAR19]{Blair-Allen-Rodriguez-2019} Ryan Blair, Heidi Allen, and Leslie Rodriguez. \newblock Twist number and the alternating volume of knots. \newblock {\em J. Knot Theory Ramifications}, 28(1):1950016, 16, 2019. \bibitem[BFK99]{BFK-1999} Doug Bullock, Charles Frohman, and Joanna Kania-Bartoszy\'{n}ska. \newblock Understanding the {K}auffman bracket skein module. \newblock {\em J. Knot Theory Ramifications}, 8(3):265--277, 1999. \bibitem[BHMV95]{BHMV-1995} Christian Blanchet, Nathan Habegger, Gregor Masbaum, and Pierre Vogel. \newblock Topological quantum field theories derived from the {K}auffman bracket. \newblock {\em Topology}, 34(4):883--927, 1995. \bibitem[BK19]{Boden-Karimi-2019} Hans~U. Boden and Homayun Karimi. \newblock The {J}ones-{K}rushkal polynomial and minimal diagrams of surface links, 2019. \newblock \href{https://arxiv.org/pdf/1908.06453}{ArXiv/1908.06453}, to appear in Ann. Inst. Fourier (Grenoble). \bibitem[BK20]{BK-2020} Hans~U. Boden and Homayun Karimi. \newblock A characterization of alternating links in thickened surfaces, 2020.
\newblock \href{https://arxiv.org/pdf/2010.14030}{ArXiv/2010.14030}, to appear in Proc. Roy. Soc. Edinburgh Sect. A. \bibitem[Bla09]{Blair-2009} Ryan~C. Blair. \newblock Alternating augmentations of links. \newblock {\em J. Knot Theory Ramifications}, 18(1):67--73, 2009. \bibitem[BR21]{BR-2021} Hans~U. Boden and William Rushworth. \newblock Minimal crossing number implies minimal supporting genus. \newblock {\em Bull. Lond. Math. Soc.}, 53(4):1174--1184, 2021. \bibitem[Bul97]{Bullock-1997} Doug Bullock. \newblock Rings of {${\rm SL}_2({\bf C})$}-characters and the {K}auffman bracket skein module. \newblock {\em Comment. Math. Helv.}, 72(4):521--542, 1997. \bibitem[BW11]{BW-2011} Francis Bonahon and Helen Wong. \newblock Quantum traces for representations of surface groups in {${\rm SL}_2(\Bbb C)$}. \newblock {\em Geom. Topol.}, 15(3):1569--1615, 2011. \bibitem[CK14]{CK-Turaev} Abhijit Champanerkar and Ilya Kofman. \newblock A survey on the {T}uraev genus of knots. \newblock {\em Acta Math. Vietnam.}, 39(4):497--514, 2014. \bibitem[CK20]{Champanerkar-Kofman-2020} Abhijit Champanerkar and Ilya Kofman. \newblock A volumish theorem for alternating virtual links, 2020. \newblock \href{https://arxiv.org/pdf/2010.08499.pdf}{ArXiv/2010.08499}. \bibitem[CKS02]{Carter-Kamada-Saito} J.~Scott Carter, Seiichi Kamada, and Masahico Saito. \newblock Stable equivalence of knots on surfaces and virtual knot cobordisms. \newblock {\em J. Knot Theory Ramifications}, 11(3):311--322, 2002. \newblock Knots 2000 Korea, Vol. 1 (Yongpyong). \bibitem[CL19]{CL-2019} Francesco Costantino and Thang T.~Q. L{\^e}. \newblock Stated skein algebras of surfaces, 2019. \newblock \href{https://arxiv.org/pdf/1907.11400.pdf}{ArXiv/1907.11400}. \bibitem[DFK{\etalchar{+}}08]{DFKLS} Oliver~T. Dasbach, David Futer, Efstratia Kalfagianni, Xiao-Song Lin, and Neal~W. Stoltzfus. \newblock The {J}ones polynomial and graphs on surfaces. \newblock {\em J. Combin. Theory Ser.
B}, 98(2):384--399, 2008. \bibitem[Dia04]{Diao-2004} Yuanan Diao. \newblock The additivity of crossing numbers. \newblock {\em J. Knot Theory Ramifications}, 13(7):857--866, 2004. \bibitem[DK05]{Dye-Kauffman-2005} Heather~A. Dye and Louis~H. Kauffman. \newblock Minimal surface representations of virtual knots and links. \newblock {\em Algebr. Geom. Topol.}, 5:509--535, 2005. \bibitem[DL07]{Dasbach-Lin-2007} Oliver~T. Dasbach and Xiao-Song Lin. \newblock A volumish theorem for the {J}ones polynomial of alternating knots. \newblock {\em Pacific J. Math.}, 231(2):279--291, 2007. \bibitem[FG06]{FGo-2006} Vladimir Fock and Alexander Goncharov. \newblock Moduli spaces of local systems and higher {T}eichm\"{u}ller theory. \newblock {\em Publ. Math. Inst. Hautes \'{E}tudes Sci.}, (103):1--211, 2006. \bibitem[FGL02]{FGL-2002} Charles Frohman, Razvan Gelca, and Walter Lofaro. \newblock The {A}-polynomial from the noncommutative viewpoint. \newblock {\em Trans. Amer. Math. Soc.}, 354(2):735--747, 2002. \bibitem[FKL19]{FKL-2019} Charles Frohman, Joanna Kania-Bartoszynska, and Thang T.~Q. L{\^e}. \newblock Unicity for representations of the {K}auffman bracket skein algebra. \newblock {\em Invent. Math.}, 215(2):609--650, 2019. \bibitem[FST08]{FST-2008} Sergey Fomin, Michael Shapiro, and Dylan Thurston. \newblock Cluster algebras and triangulated surfaces. {I}. {C}luster complexes. \newblock {\em Acta Math.}, 201(1):83--146, 2008. \bibitem[How15]{Howie-2015} Joshua Howie. \newblock {\em Surface-alternating knots and links}. \newblock PhD thesis, University of Melbourne, 2015. \bibitem[HP20]{HP-2020} Joshua~A. Howie and Jessica~S. Purcell. \newblock Geometry of alternating links on surfaces. \newblock {\em Trans. Amer. Math. Soc.}, 373(4):2349--2397, 2020. \bibitem[Kam02]{Kamada-2002} Naoko Kamada. \newblock On the {J}ones polynomials of checkerboard colorable virtual links. \newblock {\em Osaka J. Math.}, 39(2):325--333, 2002.
\bibitem[KK00]{Kamada-2000} Naoko Kamada and Seiichi Kamada. \newblock Abstract link diagrams and virtual knots. \newblock {\em J. Knot Theory Ramifications}, 9(1):93--106, 2000. \bibitem[Kar21]{Karimi-2019} Homayun Karimi. \newblock The {K}hovanov homology of alternating virtual links. \newblock {\em Michigan Math. J.}, 70(4):749--778, 2021. \bibitem[Kau87]{Kauffman-87} Louis~H. Kauffman. \newblock State models and the {J}ones polynomial. \newblock {\em Topology}, 26(3):395--407, 1987. \bibitem[Kru11]{Krushkal-2011} Vyacheslav Krushkal. \newblock Graphs, links, and duality on surfaces. \newblock {\em Combin. Probab. Comput.}, 20(2):267--287, 2011. \bibitem[Kup03]{Kuperberg} Greg Kuperberg. \newblock What is a virtual link? \newblock {\em Algebr. Geom. Topol.}, 3:587--591 (electronic), 2003. \bibitem[Lac04]{Lackenby-2004} Marc Lackenby. \newblock The volume of hyperbolic alternating link complements. \newblock {\em Proc. London Math. Soc. (3)}, 88(1):204--224, 2004. \newblock With an appendix by Ian Agol and Dylan Thurston. \bibitem[Lac09]{Lackenby-2009} Marc Lackenby. \newblock The crossing number of composite knots. \newblock {\em J. Topol.}, 2(4):747--768, 2009. \bibitem[L{\^e}06]{Le-2006} Thang T.~Q. L{\^e}. \newblock The colored {J}ones polynomial and the {$A$}-polynomial of knots. \newblock {\em Adv. Math.}, 207(2):782--804, 2006. \bibitem[Lic97]{Lickorish} W.~B.~Raymond Lickorish. \newblock {\em An introduction to knot theory}, volume 175 of {\em Graduate Texts in Mathematics}. \newblock Springer-Verlag, New York, 1997. \bibitem[LT88]{Lickorish-Thistlethwaite} W.~B.~Raymond Lickorish and Morwen~B. Thistlethwaite. \newblock Some links with nontrivial polynomials and their crossing-numbers. \newblock {\em Comment. Math. Helv.}, 63(4):527--539, 1988. \bibitem[Man13]{Manturov-2013} Vassily~O. Manturov. \newblock Parity and projection from virtual knots to classical knots. \newblock {\em J. Knot Theory Ramifications}, 22(9):1350044, 20, 2013.
\bibitem[Men84]{Menasco-1984} William Menasco. \newblock Closed incompressible surfaces in alternating knot and link complements. \newblock {\em Topology}, 23(1):37--44, 1984. \bibitem[MT93]{Menasco-Thistlethwaite-93} William Menasco and Morwen Thistlethwaite. \newblock The classification of alternating links. \newblock {\em Ann. of Math. (2)}, 138(1):113--171, 1993. \bibitem[Mul16]{Muller-2016} Greg Muller. \newblock Skein and cluster algebras of marked surfaces. \newblock {\em Quantum Topol.}, 7(3):435--503, 2016. \bibitem[Mur87]{Murasugi-871} Kunio Murasugi. \newblock Jones polynomials and classical conjectures in knot theory. \newblock {\em Topology}, 26(2):187--194, 1987. \bibitem[Prz99]{Przytycki-1999} J\'{o}zef~H. Przytycki. \newblock Fundamentals of {K}auffman bracket skein modules. \newblock {\em Kobe J. Math.}, 16(1):45--66, 1999. \bibitem[PS00]{PS-2000} J\'{o}zef~H. Przytycki and Adam~S. Sikora. \newblock On skein algebras and {${\rm Sl}_2({\bf C})$}-character varieties. \newblock {\em Topology}, 39(1):115--148, 2000. \bibitem[Sto94]{Stong-1994} Richard Stong. \newblock The {J}ones polynomial of parallels and applications to crossing number. \newblock {\em Pacific J. Math.}, 164(2):383--395, 1994. \bibitem[SW07]{Sikora-Westbury} Adam~S. Sikora and Bruce~W. Westbury. \newblock Confluence theory for graphs. \newblock {\em Algebr. Geom. Topol.}, 7:439--478, 2007. \bibitem[Thi87]{Thistlethwaite-87} Morwen~B. Thistlethwaite. \newblock A spanning tree expansion of the {J}ones polynomial. \newblock {\em Topology}, 26(3):297--309, 1987. \bibitem[Tur87]{Turaev-1987} Vladimir~G. Turaev. \newblock A simple proof of the {M}urasugi and {K}auffman theorems on alternating links. \newblock {\em Enseign. Math. (2)}, 33(3-4):203--225, 1987. \bibitem[Tur88]{Turaev-1990} Vladimir~G. Turaev. \newblock The {C}onway and {K}auffman modules of a solid torus. \newblock {\em Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI)}, 167(Issled. Topol. 6):79--89, 190, 1988. 
\bibitem[Tur91]{Turaev-1991} Vladimir~G. Turaev. \newblock Skein quantization of {P}oisson algebras of loops on surfaces. \newblock {\em Ann. Sci. \'{E}cole Norm. Sup. (4)}, 24(6):635--704, 1991. \bibitem[Tur94]{Turaev-1994} Vladimir~G. Turaev. \newblock Algebras of loops on surfaces, algebras of knots, and quantization. \newblock In {\em Braid group, knot theory and statistical mechanics, {II}}, volume~17 of {\em Adv. Ser. Math. Phys.}, pages 324--360. World Sci. Publ., River Edge, NJ, 1994. \bibitem[Wil20]{Will-2020} David~A. Will. \newblock Homological polynomial coefficients and the twist number of alternating surface links, 2020. \newblock \href{https://arxiv.org/pdf/2011.12274.pdf}{ArXiv/2011.12274}, to appear in Alg. Geom. Topol. \end{thebibliography} \end{document}
\begin{document} \title{Existence of the optimum for Shallow Lake type optimal control problems} \begin{center} \small{Dipartimento di Matematica, Universit\`a di Pisa,\\ Largo B. Pontecorvo 5, 56127 Pisa, Italy.\\ E-mail address: \email{[email protected]}} \end{center} \begin{abstract} We consider the optimal control problem associated with a general version of the well known shallow lake model, and we prove the existence of an optimum in the class $L_{loc}^{1}\left(0,+\infty\right)$. Any direct proof seems to be missing in the literature. Dealing with admissible controls that can be unbounded (even locally) is necessary in order to represent the concrete optimization problem properly; on the other hand, the non-compactness of the control space together with the infinite horizon setting prevents one from having good \emph{a priori} estimates, and this makes the existence problem considerably harder. We present an original method which is in a way opposite to the classical control theoretic approach used to solve finite horizon Mayer or Bolza problems. In summary, our method is based on the following scheme: i) two uniform localization lemmas providing, given $T\geq1$ and a maximizing sequence of controls, another sequence of controls which is bounded in $L^{\infty}\left(\left[0,T\right]\right)$ and still maximizing. ii) A special diagonal procedure dealing with sequences which are not extracted one from the other. iii) A \textquotedblleft{}standard\textquotedblright{} diagonal procedure. The optimum turns out to be locally bounded by construction.\\ \end{abstract} \begin{keywords} Control, global optimization, non compact control space, uniform localization, convex-concave dynamics. \end{keywords} \onehalfspacing \section{Introduction} In this work we examine the optimal control problem related to a general version of the Shallow Lake model, and we prove the existence of an optimum.
In the last fifteen years, a literature on this model has grown, but, to our knowledge, no direct existence proof has been provided so far. The optimal control problem was introduced in \cite{Maler}, and has been studied mostly via dynamic programming (\cite{Kossioris}), or from the dynamical systems viewpoint (see e.g. \cite{Kiseleva-Wagener 1}, \cite{Kiseleva-Wagener 2} and \cite{Maler}). The latter approach consists in the analysis of the adjoint system that is obtained by coupling the state equation with the adjoint equation given by the Pontryagin Maximum Principle. As is well known, this principle provides conditions for optimality that in general are merely necessary. The main technical difficulties in proving the existence of an optimum arise from the fact that good \emph{a priori} estimates for the controls and for the states are missing, because of the infinite horizon setting and the unboundedness assumption on the set of admissible controls. Indeed, the very nature of the model requires that one be allowed to choose a (locally integrable) control function that reaches arbitrarily large values in a finite time. Controls that are arbitrarily near $0$ are also allowed, and this produces similar effects when the functional has logarithmic dependence on the control. In this context the application of any compactness result is not straightforward.\\ Here we propose an original approach to the existence problem. In a sense, we proceed in the opposite direction with respect to what is done in the proof of some classical existence results for finite horizon problems, such as the Filippov-Cesari theorem.
In the latter kind of proof, thanks to some \emph{a priori} estimates, the Ascoli-Arzel\`a theorem is applied in order to obtain an optimizing sequence of \emph{states} converging to a candidate optimal state, which is proven to be almost everywhere differentiable; then some convexity assumption on the dynamics of the state equation allows one to identify pointwise a control satisfying the instance of the state equation involving the candidate optimal state. Finally, such a control is proven to be admissible by a measurable selection argument. This is essentially what is needed in the case of finite horizon Mayer problems; in the case of Bolza problems with coercive dependence of the integral functional on the control, the same scheme is, roughly speaking, applied to the couple $x_{n},\, J_{int}\left(x_{n},u_{n}\right)$, where $\left(x_{n}\right)_{n}$ is an optimizing sequence of states and $J_{int}\left(\mathrm{x},\mathrm{u}\right)\left(t\right)=\int_{T_{0}}^{t}L\left(s,\mathrm{x}\left(s\right),\mathrm{u}\left(s\right)\right)\mbox{d}s$ is the integral part of the objective functional; in this case, after proving that the limit $x_{*}$ of $x_{n}$ has an admissible companion control $u_{*}$, one also has to prove that $u_{*}$ is in the proper relation with the limit of $J_{int}\left(x_{n},u_{n}\right)$. For the details of the latter (complex) proof, see \cite{Fleming Rishel}, Chapter III, \S\ 5.
In other words, the classical control theoretic approach to the existence problem starts with the convergence of the states and associated functionals to some limit, and ends up with a control function giving those two limits the desired form; in particular, no direct semi-continuity argument for the functional is used.\\ In our approach, dealing with a functional of the type \[ J\left(\mathrm{u}\right)=\int_{0}^{+\infty}e^{-\rho t}\left(\log\mathrm{u}\left(t\right)-c\mathrm{x}^{2}\left(t\right)\right)\mbox{d}t, \] we consider an optimizing sequence of locally integrable \emph{controls} $\left(u_{n}\right)_{n}$ and, in order to bypass the absence of \emph{a priori} estimates, we prove two uniform localization lemmas (``from above'' and ``from below''). This way, for a fixed compact interval $\left[0,T\right]$, we are able to find a sequence $\left(u_{n}^{T}\right)_{n}$ which is still optimizing and also uniformly bounded in $\left[0,T\right]$ by two quantities $N\left(T\right)$, $\eta\left(T\right)$. By weak (relative) compactness we can extract a sequence $\left(\bar{u}_{n}^{T}\right)_{n}$, weakly converging in $L^{1}\left(\left[0,T\right]\right)$. We repeat the process for larger and larger intervals, each time starting from the maximizing sequence we ended up with in the previous step. The standard diagonal argument does not suffice to merge the local (weak) limits properly, since we are dealing with two families of sequences which \emph{a priori} are not extracted one from the other: the ``barred'' converging sequences and the ``unbarred'' sequences obtained by applying the uniform localization lemmas. For instance, $\left(u_{n}^{T+1}\right)_{n}$ will denote the sequence obtained by applying the lemmas to $\left(\bar{u}_{n}^{T}\right)_{n}$ and to the interval $\left[0,T+1\right]$.
Despite this, we can exploit a monotonicity property of the bound functions $N$ and $\eta$ provided by the uniform localization lemmas, in order to end up with a locally bounded optimizing sequence $\left(v_{n}\right)_{n}$ and a ``pre-optimal'' function $v$ such that $v_{n}\rightharpoonup v$ in $L^{1}\left(\left[0,T\right]\right)$ for every $T>0$. Then we prove the pointwise convergence of the states associated with $\left(v_{n}\right)_{n}$. Furthermore, another (standard) diagonal procedure is needed in order to extract from $\left(v_{n}\right)_{n}$ a sequence $\left(v_{n,n}\right)_{n}$ such that $\log v_{n,n}\rightharpoonup\log u_{*}$ in $L^{1}\left(\left[0,T\right]\right)$ for every $T>0$, for a suitable function $u_{*}$. This is eventually proven to be an admissible and optimal control, relying basically on dominated convergence combined with the following relations: \begin{align*} x\left(\cdot;v_{n}\right)\to x\left(\cdot;v\right) & \quad\mbox{pointwise in }\left[0,+\infty\right)\\ \log v_{n,n}\rightharpoonup\log u_{*} & \quad\mbox{in }L^{1}\left(\left[0,T\right]\right),\,\forall T>0\\ u_{*}\leq v & \quad\mbox{a.e. in }\left[0,+\infty\right), \end{align*} where $\mathrm{x}\left(\cdot;\mathrm{u}\right)$ denotes the trajectory associated with the control $\mathrm{u}$.
These and other considerations serve as a semi-continuity argument and allow us to conclude the proof.\\ The scheme \[ \mbox{uniform localization lemmas }\leftrightarrows\mbox{ "local" compactness } \] \[ \dashrightarrow\mbox{ two families diagonalization }\dashrightarrow\mbox{ one family diagonalization} \] can be considered a development and an improvement of the method introduced in \cite{AB}, and may hopefully be generalized to a scheme for obtaining existence proofs applicable to a wider class of infinite horizon optimal control problems with non compact control space.\\ The model describes the dynamics of the accumulation of phosphorus in the ecosystem of a shallow lake, from an optimal control theory perspective. Precisely, the state equation expresses the (non-linear) relationship between the farming activities near the lake, which are responsible for the release of phosphorus, and the total amount of phosphorus in the water, depending also on the natural production and on the natural loss consisting of sedimentation, outflow and sequestration in other biomass. The objective functional to be maximized represents the social benefit depending on the pollution released by the farming activities, and takes into account the trade-offs between the utility of the agricultural activities and the utility of a clear lake. Following \cite{Maler}, we can assert that the essential dynamics of the eutrophication process can be modelled by the differential equation: \begin{equation} \dot{P}\left(t\right)=-sP\left(t\right)+r\frac{P^{2}\left(t\right)}{m^{2}+P^{2}\left(t\right)}+L\left(t\right),\label{eq: prima} \end{equation} where $P$ is the amount of phosphorus in algae, $L$ is the input of phosphorus (the \textquotedblleft{}loading\textquotedblright{}), $s$ is the rate of loss consisting of sedimentation, outflow and sequestration in other biomass, $r$ is the maximum rate of internal loading and $m$ is the anoxic level.
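The normalization used in the next paragraph is a routine computation, which we record for the reader's convenience: substituting $P\left(t\right)=mx\left(t\right)$ in $\eqref{eq: prima}$ and dividing by $r$ gives \[ \frac{m}{r}\,\dot{x}\left(t\right)=-\frac{sm}{r}\,x\left(t\right)+\frac{x^{2}\left(t\right)}{1+x^{2}\left(t\right)}+\frac{L\left(t\right)}{r}, \] so that, after the time rescaling $\tau=\left(r/m\right)t$ and setting $u:=L/r$ and $b:=sm/r$, one obtains the normalized state equation below.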
After a change of variable and of time scale, we consider the normalized equation \[ \dot{x}\left(\tau\right)=-bx\left(\tau\right)+\frac{x^{2}\left(\tau\right)}{1+x^{2}\left(\tau\right)}+u\left(\tau\right), \] where $x\left(\cdot\right):=P\left(\cdot\right)/m$, $u\left(\cdot\right)=L\left(\cdot\right)/r$ and $b=sm/r$. Hence we see that the dynamics, as a function of the state, shows a convex-concave behaviour. In an economic analysis, the dynamics of pollution must be considered together with the social benefit of the different interest groups operating in the lake system. The social benefit obviously depends both on the status of the water and on the intensity of agricultural activities near the lake, which in a way can be measured by the amount of phosphorus released in the water. Farmers have an interest in being able to increase the loading, so that the agricultural sector can grow without the need to invest in new technology in order to reduce emissions. On the other hand, groups such as fishermen, drinking water companies and any other industry making use of the water prefer a clear lake, and the same holds for people who spend their leisure time at the lake. It is assumed that a community or country, balancing these different interests, can agree on a welfare function of the form \[ \log\mathrm{u}-c\mathrm{x}^{2}\quad(c>0), \] in the sense that the lake has value as a ``waste sink'' for agriculture $\log\mathrm{u}$, where $\mathrm{u}$ is the input of phosphorus due to farming, and it provides ecological services that decrease with the total amount of phosphorus $\mathrm{x}$ as $-c\mathrm{x}^{2}$.\\ Here we focus on the case of monotone dynamics, as a first, fundamental step foreshadowing further developments.
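The normalized dynamics can also be explored numerically. The following sketch (illustrative only; the values of $b$, $u$, $x_{0}$ and the integration step are our own choices, not part of the model) integrates $\dot{x}=-bx+x^{2}/(1+x^{2})+u$ with a constant control via a fixed-step RK4 scheme, and checks the convex-concave shape of the internal loading term $g(x)=x^{2}/(1+x^{2})$, whose second derivative $(2-6x^{2})/(1+x^{2})^{3}$ changes sign at $x=1/\sqrt{3}$.

```python
# Normalized shallow-lake dynamics  x'(t) = -b*x + x^2/(1+x^2) + u,
# integrated with a fixed-step RK4 scheme.  All parameter values are
# illustrative, not calibrated.

def rhs(x, b, u):
    return -b * x + x * x / (1.0 + x * x) + u

def rk4(x0, b, u, T, n):
    """Integrate x' = rhs(x) on [0, T] with n RK4 steps; return final x."""
    h, x = T / n, x0
    for _ in range(n):
        k1 = rhs(x, b, u)
        k2 = rhs(x + 0.5 * h * k1, b, u)
        k3 = rhs(x + 0.5 * h * k2, b, u)
        k4 = rhs(x + h * k3, b, u)
        x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# With b large enough the loss term dominates and the trajectory settles
# at the unique root of -b*x + x^2/(1+x^2) + u = 0.
b, u = 2.0, 0.5
x_inf = rk4(x0=0.0, b=b, u=u, T=50.0, n=5000)
assert abs(rhs(x_inf, b, u)) < 1e-8   # steady state reached

# Convex-concave shape of the internal-loading term g(x) = x^2/(1+x^2):
# g''(x) = (2 - 6x^2)/(1+x^2)^3 changes sign at x = 1/sqrt(3) ~ 0.577.
def g2(x):
    return (2 - 6 * x * x) / (1 + x * x) ** 3

assert g2(0.3) > 0 and g2(0.8) < 0
```

Any other one-step scheme would do equally well here; RK4 is used only because the right-hand side is smooth and globally Lipschitz, so the fixed step is unproblematic.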
\section{Boundedness of the value function} \begin{definition} For every $x_{0}\geq0$ and every $u\in L_{loc}^{1}\left(\left[0,+\infty\right)\right)$ the function $t\to x\left(t;x_{0},u\right)$ is the solution to the following Cauchy problem: \begin{equation} \begin{cases} {\displaystyle \dot{x}\left(t\right)=F\left(x\left(t\right)\right)+u\left(t\right)} & \quad t\geq0\\ x\left(0\right)=x_{0} \end{cases}\label{eq: eq stato} \end{equation} in the unknown $x\left(\cdot\right)$, where $F$ has the following properties: \begin{align*} & F\in\mathcal{C}^{1}\left(\mathbb{R},\mathbb{R}\right),\, F'\leq0\mbox{ in }\mathbb{R},\, F\left(0\right)=0,\,\lim_{x\to+\infty}F\left(x\right)=-\infty,\ {\displaystyle \lim_{x\to+\infty}F'\left(x\right)=:-l<0},\\ & \mbox{there exists }\bar{x}>0\mbox{ such that }F\mbox{ is convex in }\left[0,\bar{x}\right]\mbox{ and concave in }\left[\bar{x},+\infty\right). \end{align*} Moreover, we assume $F'\left(0\right)<0$.\\ \\ For every $x_{0}\geq0$, the set of the admissible controls is: \begin{eqnarray*} \Lambda\left(x_{0}\right) & := & \left\{ u\in L_{loc}^{1}\left(\left[0,+\infty\right)\right)/u>0\quad\mbox{a.e. in }\left[0,+\infty\right)\right\} \end{eqnarray*} and the \emph{objective functional} is defined by \[ \mathcal{B}\left(x_{0};u\right)=\int_{0}^{+\infty}e^{-\rho t}\left[\log u\left(t\right)-cx^{2}\left(t;x_{0},u\right)\right]\mbox{d}t\quad\forall u\in\Lambda\left(x_{0}\right), \] where $\rho$ and $c$ are positive constants. The \emph{value function} is \[ V\left(x_{0}\right):=\sup_{u\in\Lambda\left(x_{0}\right)}\mathcal{B}\left(x_{0};u\right). \] \end{definition} $ $ \begin{rem} The Cauchy problem $\eqref{eq: eq stato}$ has a unique global solution, since the dynamics $F\left(\cdot\right)$ has (globally) bounded derivative. We have \[ -b_{0}x\leq F\left(x\right)\leq-bx+M, \] for some constants $b_{0},b,M>0$.
This is easily proven by setting $-b:=-l+\epsilon$ for $\epsilon>0$ sufficiently small, choosing $-b_{0}:=F'\left(0\right)\wedge\left(-l\right)$, and using the assumption $F'\to-l$ at $+\infty$ and the continuity of $F$. \end{rem} $ $ \begin{rem} \label{remark funzione h}Let $s_{1},s_{2}\geq0$, $u_{1},u_{2}\in L_{loc}^{1}\left(\left[0,+\infty\right),\mathbb{R}\right)$ and $t_{0}\geq0$. Set $x_{1}=x\left(\cdot;s_{1},u_{1}\right)$, $x_{2}=x\left(\cdot;s_{2},u_{2}\right)$ and define: \[ h\left(x_{1},x_{2}\right)\left(\tau\right):=\begin{cases} {\displaystyle \frac{F\left(x_{1}\left(\tau\right)\right)-F\left(x_{2}\left(\tau\right)\right)}{x_{1}\left(\tau\right)-x_{2}\left(\tau\right)}} & \mbox{if }x_{1}\left(\tau\right)\neq x_{2}\left(\tau\right)\\ \\ F'\left(x_{1}\left(\tau\right)\right) & \mbox{if }x_{1}\left(\tau\right)=x_{2}\left(\tau\right). \end{cases} \] Then $h\left(x_{1},x_{2}\right)$ is continuous, $-b_{0}\leq h\leq0$ and the following relation holds: \begin{eqnarray} \forall t\geq t_{0}:x_{1}\left(t\right)-x_{2}\left(t\right) & = & \exp\left(\int_{t_{0}}^{t}h\left(x_{1},x_{2}\right)\left(\tau\right)\mbox{d}\tau\right)\left(x_{1}\left(t_{0}\right)-x_{2}\left(t_{0}\right)\right)\nonumber \\ & & +\int_{t_{0}}^{t}\exp\left(\int_{s}^{t}h\left(x_{1},x_{2}\right)\left(\tau\right)\mbox{d}\tau\right)\left(u_{1}\left(s\right)-u_{2}\left(s\right)\right)\mbox{d}s.\nonumber \\ \label{eq: comparison} \end{eqnarray} In particular, taking $t_{0}=0$ and $s_{1}=s_{2}$: \begin{equation} \forall t\geq0:\, x_{1}\left(t\right)-x_{2}\left(t\right)=\int_{0}^{t}\exp\left(\int_{s}^{t}h\left(x_{1},x_{2}\right)\left(\tau\right)\mbox{d}\tau\right)\left(u_{1}\left(s\right)-u_{2}\left(s\right)\right)\mbox{d}s.\label{eq: comparison da 0} \end{equation} Indeed, for every $t\geq t_{0}$: \begin{eqnarray*} \dot{x}_{1}\left(t\right)-\dot{x}_{2}\left(t\right) & = & F\left(x_{1}\left(t\right)\right)-F\left(x_{2}\left(t\right)\right)+u_{1}\left(t\right)-u_{2}\left(t\right)\\ & = &
h\left(x_{1},x_{2}\right)\left(t\right)\left[x_{1}\left(t\right)-x_{2}\left(t\right)\right]+u_{1}\left(t\right)-u_{2}\left(t\right). \end{eqnarray*} Multiplying both sides of this equation by $\exp\left(-\int_{t_{0}}^{t}h\left(x_{1},x_{2}\right)\left(\tau\right)\mbox{d}\tau\right)$ we obtain: \begin{align*} & \frac{\mbox{d}}{\mbox{d}t}\left[\left(x_{1}\left(t\right)-x_{2}\left(t\right)\right)\exp\left(-\int_{t_{0}}^{t}h\left(x_{1},x_{2}\right)\left(\tau\right)\mbox{d}\tau\right)\right]\\ = & \exp\left(-\int_{t_{0}}^{t}h\left(x_{1},x_{2}\right)\left(\tau\right)\mbox{d}\tau\right)\left(u_{1}\left(t\right)-u_{2}\left(t\right)\right)\quad\forall t\geq t_{0} \end{align*} Fix $t\geq t_{0}$ and integrate between $t_{0}$ and $t$; then $\eqref{eq: comparison}$ is easily obtained. \end{rem} $ $ \begin{rem} \label{remark comp ODE}Relation $\eqref{eq: comparison}$ implies a well known comparison result, which in our case can be stated as follows. \emph{Let $s_{1},s_{2}\geq0$ and $u_{1},u_{2}\in L_{loc}^{1}\left(\left[0,+\infty\right),\mathbb{R}\right)$; then for every $t_{0}\geq0$ and every $t_{1}\in\left(t_{0},+\infty\right]$, if $u_{1}\geq u_{2}$ almost everywhere in $\left[t_{0},t_{1}\right]$ and $x\left(t_{0};s_{1},u_{1}\right)\geq x\left(t_{0};s_{2},u_{2}\right)$, then} \[ x\left(t;s_{1},u_{1}\right)\geq x\left(t;s_{2},u_{2}\right)\quad\forall t\in\left[t_{0},t_{1}\right]. \] Moreover another classical comparison result implies that \emph{for every $x_{0}\geq0$ and every $u\in L_{loc}^{1}\left(\left[0,+\infty\right)\right)$}: \begin{eqnarray} e^{-b_{0}t}\left(x_{0}+\int_{0}^{t}e^{b_{0}s}u\left(s\right)\mbox{d}s\right) & \leq & x\left(t;x_{0},u\right)\nonumber \\ & \leq & e^{-bt}\left(x_{0}+\int_{0}^{t}e^{bs}\left(M+u\left(s\right)\right)\mbox{d}s\right).\label{eq: comparison per singola} \end{eqnarray} \end{rem} $ $ \begin{rem} \label{B div -infty}The objective functional is not constantly equal to $-\infty$. 
As a trivial example, consider the control $u\equiv1\in\Lambda\left(x_{0}\right)$. Then by $\eqref{eq: comparison per singola}$: \[ 0\leq x\left(t;x_{0},u\right)\leq e^{-bt}x_{0}+\left(M+1\right)\frac{1-e^{-bt}}{b} \] which implies \[ x^{2}\left(t\right)\leq\left(x_{0}^{2}+\frac{\left(M+1\right)^{2}}{b^{2}}\right)e^{-2bt}+2\left(M+1\right)\frac{x_{0}}{b}e^{-bt}+\frac{\left(M+1\right)^{2}}{b^{2}}. \] Hence \[ \mathcal{B}\left(u\right)=-c\int_{0}^{+\infty}e^{-\rho t}x^{2}\left(t;x_{0},u\right)\mbox{d}t>-\infty. \] \end{rem} $ $ \begin{rem} \label{oss: approx semplici}Let $u\in\Lambda\left(x_{0}\right)$ and let $\left(u_{n}\right)_{n}\subseteq L^{1}\left(\left[0,+\infty\right)\right)$ be a sequence of simple functions such that $u_{n}\uparrow u$ pointwise in $\left[0,+\infty\right)$. Then \[ \mathcal{B}\left(u\right)\leq\liminf_{n\to+\infty}\mathcal{B}\left(u_{n}\right). \] Indeed, for every $n\in\mathbb{N}$, $u_{n}>0$ almost everywhere in $\left[0,+\infty\right)$, so $\left(e^{-\rho t}\log u_{n}\left(t\right)\right)_{n}\subseteq L^{1}\left(\left[0,+\infty\right)\right)$ and $e^{-\rho t}\log u_{n}\left(t\right)\uparrow e^{-\rho t}\log u\left(t\right)$ for almost every $t\geq0$. By monotone convergence we obtain: \begin{eqnarray*} \limsup_{n\to+\infty}\left[\mathcal{B}\left(u\right)-\mathcal{B}\left(u_{n}\right)\right] & = & \limsup_{n\to+\infty}\int_{0}^{+\infty}e^{-\rho t}\left[\log u\left(t\right)-\log u_{n}\left(t\right)-c\left(x^{2}\left(t\right)-x_{n}^{2}\left(t\right)\right)\right]\mbox{d}t\\ & \leq & \lim_{n\to+\infty}\int_{0}^{+\infty}e^{-\rho t}\left[\log u\left(t\right)-\log u_{n}\left(t\right)\right]\mbox{d}t\\ & = & 0, \end{eqnarray*} where the inequality holds since $0\leq x_{n}\leq x$ for every $n\in\mathbb{N}$, by Remark $\ref{remark comp ODE}$. 
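A minimal numerical illustration of this monotone-approximation argument, in which all choices (the control $u\left(t\right)=1+t$, the dyadic truncations, and $\rho=1$) are ours and purely illustrative:

```python
import math

rho = 1.0
u = lambda t: 1.0 + t          # illustrative positive control

def u_n(t, n):
    # dyadic simple approximation: u_n <= u, u_n nondecreasing in n, u_n -> u
    return min(float(n), math.floor(u(t) * 2 ** n) / 2 ** n)

def disc_log_integral(f, T=40.0, h=0.001):
    # trapezoid rule for \int_0^T e^{-rho t} log f(t) dt
    ts = [h * k for k in range(int(T / h) + 1)]
    vals = [math.exp(-rho * t) * math.log(f(t)) for t in ts]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

I = [disc_log_integral(lambda t, n=n: u_n(t, n)) for n in range(1, 8)]
I_u = disc_log_integral(u)
assert all(I[k] <= I[k + 1] + 1e-12 for k in range(len(I) - 1))  # monotone in n
assert I[-1] <= I_u + 1e-9 and I_u - I[-1] < 5e-2                # increasing to I_u
```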
\end{rem} $ $ \begin{definition} A sequence $\left(u_{n}\right)_{n\in\mathbb{N}}\subseteq\Lambda\left(x_{0}\right)$ is said to be \emph{maximizing at} $x_{0}$ if \[ \lim_{n\to+\infty}\mathcal{B}\left(x_{0};u_{n}\right)=V\left(x_{0}\right). \] \end{definition} $ $ \begin{proposition} \label{prop: succ max} \emph{i)} The value function $V:\left[0,+\infty\right)\to\mathbb{R}$ satisfies: \[ V\left(x_{0}\right)\leq\frac{1}{\rho}\log\left(\frac{\rho+b_{0}}{\sqrt{2ec}}\right)\quad\forall x_{0}\geq0. \] \emph{ii)} For every $x_{0}\geq0$, there exist constants $K_{1}\left(x_{0}\right),K_{2}\left(x_{0}\right)>0$ such that, for every $u\in\Lambda\left(x_{0}\right)$ belonging to a maximizing sequence: \begin{align} & \int_{0}^{+\infty}e^{-\rho t}u\left(t\right)\mbox{d}t\leq K_{1}\left(x_{0}\right),\label{eq: stima u massimizz}\\ & \int_{0}^{+\infty}e^{-\rho t}x\left(t;x_{0},u\right)\mbox{d}t\leq K_{2}\left(x_{0}\right).\label{eq: stima x massimizz} \end{align} \end{proposition} Hereinafter we will often use the following weaker estimate relative to a control $u\in\Lambda\left(x_{0}\right)$ belonging to a maximizing sequence: \begin{equation} \int_{0}^{t}u\left(s\right)\mbox{d}s<K_{1}\left(x_{0}\right)e^{\rho t}\quad\forall t\geq0.\label{eq: stima cruciale massimizz} \end{equation} \begin{proof} i) Let $x_{0}\geq0$, $u\in\Lambda\left(x_{0}\right)$, $x=x\left(\cdot;x_{0},u\right)$ and $\mathcal{B}\left(u\right)=\mathcal{B}\left(x_{0};u\right)$. First assume that \begin{equation} \int_{0}^{+\infty}u\left(t\right)\mbox{d}t,\ \int_{0}^{+\infty}e^{-\rho t}u\left(t\right)\mbox{d}t<+\infty.\label{integrali u} \end{equation} We estimate the quantity \[ \int_{0}^{+\infty}e^{-\rho t}x^{2}\left(t\right)\mbox{d}t \] in terms of the quantities in $\eqref{integrali u}$.
\emph{From above}: by $\eqref{eq: comparison per singola}$, we have for every $t\geq0$: \begin{eqnarray*} 0\leq x\left(t\right) & \leq & e^{-bt}x_{0}+\frac{M}{b}+e^{-bt}\int_{0}^{t}e^{bs}u\left(s\right)\mbox{d}s. \end{eqnarray*} Hence: \begin{eqnarray} x^{2}\left(t\right) & \leq & e^{-bt}\left(x_{0}\vee x_{0}^{2}\right)\left(1+\frac{2M}{b}\right)+\frac{M^{2}}{b^{2}}+e^{-2bt}\left(\int_{0}^{t}e^{bs}u\left(s\right)\mbox{d}s\right)^{2}\nonumber \\ & & +2\left(x_{0}\vee\frac{M}{b}\right)e^{-bt}\int_{0}^{t}e^{bs}u\left(s\right)\mbox{d}s.\label{eq: stima x^2 alto} \end{eqnarray} Focusing on the last two terms leads to the estimate \begin{eqnarray} \int_{0}^{+\infty}e^{-\rho t}e^{-2bt}\left(\int_{0}^{t}e^{bs}u\left(s\right)\mbox{d}s\right)^{2}\mbox{d}t & \leq & \int_{0}^{+\infty}e^{-\rho t}\left(\int_{0}^{t}u\left(s\right)\mbox{d}s\right)^{2}\mbox{d}t\nonumber \\ & \leq & \frac{1}{\rho}\left(\int_{0}^{+\infty}u\left(s\right)\mbox{d}s\right)^{2}\label{eq: stima int x^2 pezzo 1} \end{eqnarray} and \begin{eqnarray} \int_{0}^{+\infty}e^{-\rho t}e^{-bt}\int_{0}^{t}e^{bs}u\left(s\right)\mbox{d}s\mbox{d}t & = & \int_{0}^{+\infty}e^{bs}u\left(s\right)\int_{s}^{+\infty}e^{-\left(\rho+b\right)t}\mbox{d}t\mbox{d}s\nonumber \\ & = & \frac{1}{\rho+b}\int_{0}^{+\infty}e^{bs}u\left(s\right)e^{-\left(\rho+b\right)s}\mbox{d}s\nonumber \\ & = & \frac{1}{\rho+b}\int_{0}^{+\infty}e^{-\rho s}u\left(s\right)\mbox{d}s.\label{eq: uguagl int x^2 pezzo 2} \end{eqnarray} By $\eqref{eq: stima x^2 alto}$, $\eqref{eq: stima int x^2 pezzo 1}$ and $\eqref{eq: uguagl int x^2 pezzo 2}$ we see that there exists a constant $L\left(b,x_{0}\right)\geq0$ such that \begin{eqnarray} \int_{0}^{+\infty}e^{-\rho t}x^{2}\left(t\right)\mbox{d}t & \leq & L\left(b,x_{0}\right)+\frac{1}{\rho}\left(\int_{0}^{+\infty}u\left(t\right)\mbox{d}t\right)^{2}\nonumber \\ & & +2\left(x_{0}\vee\frac{M}{b}\right)\frac{1}{\rho+b}\int_{0}^{+\infty}e^{-\rho t}u\left(t\right)\mbox{d}t.\label{eq: stima int x^2 alto} \end{eqnarray} 
\emph{From below}: again by $\eqref{eq: comparison per singola}$: \begin{eqnarray*} \forall t\geq0:x\left(t\right) & \geq & e^{-b_{0}t}\left(x_{0}+\int_{0}^{t}e^{b_{0}s}u\left(s\right)\mbox{d}s\right)\\ & \geq & e^{-b_{0}t}\int_{0}^{t}e^{b_{0}s}u\left(s\right)\mbox{d}s. \end{eqnarray*} Hence, since $\rho e^{-\rho t}\,\mbox{d}t$ is a probability measure on $\left[0,+\infty\right)$, we have by Jensen's inequality: \begin{eqnarray} \int_{0}^{+\infty}e^{-\rho t}x^{2}\left(t\right)\mbox{d}t & \geq & \rho\left(\int_{0}^{+\infty}e^{-\rho t}x\left(t\right)\mbox{d}t\right)^{2}\nonumber \\ & \geq & \rho\left(\int_{0}^{+\infty}e^{-\rho t}e^{-b_{0}t}\int_{0}^{t}e^{b_{0}s}u\left(s\right)\mbox{d}s\mbox{d}t\right)^{2}\nonumber \\ & = & \frac{\rho}{\left(\rho+b_{0}\right)^{2}}\left(\int_{0}^{+\infty}e^{-\rho s}u\left(s\right)\mbox{d}s\right)^{2}\label{eq: stima int x^2 basso} \end{eqnarray} and the last equality holds by $\eqref{eq: uguagl int x^2 pezzo 2}$. The finiteness of the integrals in $\eqref{integrali u}$ implies that the applications of Fubini's Theorem in $\eqref{eq: uguagl int x^2 pezzo 2}$ and in $\eqref{eq: stima int x^2 basso}$ are justified.
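The interchange of integrals in $\eqref{eq: uguagl int x^2 pezzo 2}$ can be confirmed numerically; in the Python sketch below the control $u\left(s\right)=1/\left(1+s\right)$ and the constants $\rho=b=1$ are illustrative choices of ours:

```python
import math

rho, b = 1.0, 1.0
u = lambda s: 1.0 / (1.0 + s)     # illustrative integrable control

h, T = 0.002, 40.0
n = int(T / h)
ts = [h * k for k in range(n + 1)]

# cumulative trapezoid of s -> e^{bs} u(s)
inner = [0.0] * (n + 1)
for k in range(1, n + 1):
    f0 = math.exp(b * ts[k - 1]) * u(ts[k - 1])
    f1 = math.exp(b * ts[k]) * u(ts[k])
    inner[k] = inner[k - 1] + 0.5 * h * (f0 + f1)

# left-hand side: \int_0^infty e^{-(rho+b)t} \int_0^t e^{bs} u(s) ds dt
g = [math.exp(-(rho + b) * ts[k]) * inner[k] for k in range(n + 1)]
lhs = h * (sum(g) - 0.5 * (g[0] + g[-1]))

# right-hand side: (rho+b)^{-1} \int_0^infty e^{-rho s} u(s) ds
w = [math.exp(-rho * t) * u(t) for t in ts]
rhs = (h * (sum(w) - 0.5 * (w[0] + w[-1]))) / (rho + b)

assert abs(lhs - rhs) < 1e-3
```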
Relation $\eqref{eq: stima int x^2 basso}$ allows us to write down the following estimate for $\mathcal{B}\left(u\right)$, using again Jensen's inequality (applied to the concave function $\log$): \begin{eqnarray} \mathcal{B}\left(u\right) & = & \int_{0}^{+\infty}e^{-\rho t}\log u\left(t\right)\mbox{d}t-c\int_{0}^{+\infty}e^{-\rho t}x^{2}\left(t\right)\mbox{d}t\nonumber \\ & \leq & \frac{1}{\rho}\log\left(\rho\int_{0}^{+\infty}e^{-\rho t}u\left(t\right)\mbox{d}t\right)-\frac{c}{\rho\left(\rho+b_{0}\right)^{2}}\left(\rho\int_{0}^{+\infty}e^{-\rho t}u\left(t\right)\mbox{d}t\right)^{2}\label{eq: stima B Paolo}\\ & \leq & \frac{1}{\rho}\max_{z>0}\left(\log z-\frac{c}{\left(\rho+b_{0}\right)^{2}}z^{2}\right)=\frac{1}{\rho}\left(\log\frac{\rho+b_{0}}{\sqrt{2c}}-\frac{1}{2}\right)\label{eq: stima B Paolo max}\\ & = & \frac{1}{\rho}\log\left(\frac{\rho+b_{0}}{\sqrt{2ec}}\right). \end{eqnarray} This holds under condition $\eqref{integrali u}$. In the opposite case, that is to say $\int_{0}^{+\infty}e^{-\rho t}u\left(t\right)\mbox{d}t=+\infty$, consider a sequence $\left(u_{n}\right)_{n\in\mathbb{N}}$ as in Remark $\ref{oss: approx semplici}$. Hence \begin{eqnarray} \mathcal{B}\left(u\right) & \leq & \liminf_{n\to+\infty}\mathcal{B}\left(u_{n}\right)\leq\liminf_{n\to+\infty}\frac{1}{\rho}\log\left(\rho\int_{0}^{+\infty}e^{-\rho t}u_{n}\left(t\right)\mbox{d}t\right)\nonumber \\ & & -\frac{c}{\rho\left(\rho+b_{0}\right)^{2}}\left(\rho\int_{0}^{+\infty}e^{-\rho t}u_{n}\left(t\right)\mbox{d}t\right)^{2}\nonumber \\ & = & \lim_{z\to+\infty}\left(\frac{1}{\rho}\log z-\frac{c}{\rho\left(\rho+b_{0}\right)^{2}}z^{2}\right)=-\infty,\label{eq: B - infty} \end{eqnarray} since $\int_{0}^{+\infty}e^{-\rho t}u_{n}\left(t\right)\mbox{d}t\to\int_{0}^{+\infty}e^{-\rho t}u\left(t\right)\mbox{d}t$, by monotone convergence.
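The closed-form maximum used in $\eqref{eq: stima B Paolo max}$, attained at $z^{*}=\left(\rho+b_{0}\right)/\sqrt{2c}$, can be confirmed by a quick grid search; the values $\rho=1$, $b_{0}=2$, $c=1/2$ below are illustrative:

```python
import math

rho, b0, c = 1.0, 2.0, 0.5      # illustrative constants
phi = lambda z: math.log(z) - c / (rho + b0) ** 2 * z ** 2

# closed form: max_{z>0} phi(z) = log((rho + b0) / sqrt(2ec)), at z* = (rho+b0)/sqrt(2c)
closed_form = math.log((rho + b0) / math.sqrt(2 * math.e * c))
grid_max = max(phi(0.001 * k) for k in range(1, 20001))   # z in (0, 20]

assert abs(grid_max - closed_form) < 1e-6
```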
In the intermediate case, that is to say \[ \int_{0}^{+\infty}e^{-\rho t}u\left(t\right)\mbox{d}t<+\infty,\quad\int_{0}^{+\infty}u\left(t\right)\mbox{d}t=+\infty, \] let again $\left(u_{n}\right)_{n\in\mathbb{N}}$ be as in Remark $\ref{oss: approx semplici}$. We have: \begin{eqnarray*} \mathcal{B}\left(u\right) & \leq & \liminf_{n\to+\infty}\mathcal{B}\left(u_{n}\right)\leq\frac{1}{\rho}\log\left(\lim_{n\to+\infty}\rho\int_{0}^{+\infty}e^{-\rho t}u_{n}\left(t\right)\mbox{d}t\right)\\ & & -\frac{c}{\rho\left(\rho+b_{0}\right)^{2}}\left(\lim_{n\to+\infty}\rho\int_{0}^{+\infty}e^{-\rho t}u_{n}\left(t\right)\mbox{d}t\right)^{2}\\ & = & \frac{1}{\rho}\log\left(\rho\int_{0}^{+\infty}e^{-\rho t}u\left(t\right)\mbox{d}t\right)-\frac{c}{\rho\left(\rho+b_{0}\right)^{2}}\left(\rho\int_{0}^{+\infty}e^{-\rho t}u\left(t\right)\mbox{d}t\right)^{2}\\ & \leq & \frac{1}{\rho}\log\left(\frac{\rho+b_{0}}{\sqrt{2ec}}\right). \end{eqnarray*} Taking the sup among $u\in\Lambda\left(x_{0}\right)$, we see that the same estimate holds for $V\left(x_{0}\right)$. ii) Suppose that $u$ belongs to a maximizing sequence and, as we may after discarding finitely many terms of the sequence, assume that $\mathcal{B}\left(u\right)>V\left(x_{0}\right)-1$. Fix $\tilde{K}\left(x_{0}\right)\geq0$ such that \[ \frac{1}{\rho}\log z-\frac{c}{\rho\left(\rho+b_{0}\right)^{2}}z^{2}\leq V\left(x_{0}\right)-1\quad\forall z>\tilde{K}\left(x_{0}\right). \] We showed at point $i)$ that if $\int_{0}^{+\infty}e^{-\rho t}u\left(t\right)\mbox{d}t<+\infty$, then relation $\eqref{eq: stima B Paolo}$ holds. Thus in this case we must have \begin{equation} \int_{0}^{+\infty}e^{-\rho t}u\left(t\right)\mbox{d}t\leq\frac{1}{\rho}\tilde{K}\left(x_{0}\right)=:K_{1}\left(x_{0}\right).\label{eq: unif bound int u} \end{equation} The case $\int_{0}^{+\infty}e^{-\rho t}u\left(t\right)\mbox{d}t=+\infty$ implies $\mathcal{B}\left(u\right)=-\infty$ by $\eqref{eq: B - infty}$, and consequently must be excluded, since $u$ belongs to a maximizing sequence (see Remark $\ref{B div -infty}$).
This proves relation $\eqref{eq: stima u massimizz}$. In order to prove $\eqref{eq: stima x massimizz}$, observe that by $\eqref{eq: comparison per singola}$ we have: \begin{eqnarray*} \int_{0}^{+\infty}e^{-\rho t}x\left(t\right)\mbox{d}t & \leq & \int_{0}^{+\infty}e^{-\rho t}\left\{ e^{-bt}x_{0}+\int_{0}^{t}e^{b\left(s-t\right)}\left(M+u\left(s\right)\right)\mbox{d}s\right\} \mbox{d}t\\ & = & x_{0}\int_{0}^{+\infty}e^{-\left(\rho+b\right)t}\mbox{d}t+M\int_{0}^{+\infty}e^{-\left(\rho+b\right)t}\int_{0}^{t}e^{bs}\mbox{d}s\mbox{d}t\\ & & +\int_{0}^{+\infty}e^{-\left(\rho+b\right)t}\int_{0}^{t}e^{bs}u\left(s\right)\mbox{d}s\mbox{d}t\\ & = & \frac{x_{0}}{\rho+b}+M\int_{0}^{+\infty}e^{bs}\int_{s}^{+\infty}e^{-\left(\rho+b\right)t}\mbox{d}t\mbox{d}s\\ & & +\int_{0}^{+\infty}u\left(s\right)e^{bs}\int_{s}^{+\infty}e^{-\left(\rho+b\right)t}\mbox{d}t\mbox{d}s\\ & = & \frac{x_{0}}{\rho+b}+\frac{M}{\rho\left(\rho+b\right)}+\frac{1}{\rho+b}\int_{0}^{+\infty}e^{-\rho t}u\left(t\right)\mbox{d}t\\ & \leq & \frac{x_{0}}{\rho+b}+\frac{M}{\rho\left(\rho+b\right)}+\frac{K_{1}\left(x_{0}\right)}{\rho+b}\\ & =: & K_{2}\left(x_{0}\right). \end{eqnarray*} $ $ \end{proof} $ $ \section{Uniform localization lemmas} \begin{lemma} \label{lem: loc alto}There exists a function $N:\left[0,+\infty\right)^{2}\to\left(0,+\infty\right)$, continuous and strictly increasing in the second variable, such that: for every $x_{0},T>0$ and for every $u\in\Lambda\left(x_{0}\right)$ belonging to a maximizing sequence, there exists a control $\tilde{u}^{T}\in\Lambda\left(x_{0}\right)$ satisfying: \begin{align*} & \mathcal{B}\left(x_{0};\tilde{u}^{T}\right)\geq\mathcal{B}\left(x_{0};u\right)\\ & \tilde{u}^{T}=u\wedge N\left(x_{0},T\right)\quad\mbox{a. e. in }\left[0,T\right]. \end{align*} In particular, the norm $\left\Vert \tilde{u}^{T}\right\Vert _{L^{\infty}\left(\left[0,T\right]\right)}$ is bounded above by a quantity which does not depend on the original control $u$.
Moreover, the state $x\left(\cdot;x_{0},\tilde{u}^{T}\right)$ associated with the control $\tilde{u}^{T}$ satisfies \[ x\left(\cdot;x_{0},\tilde{u}^{T}\right)\leq x\left(\cdot;x_{0},u\right). \] Finally, the bound function $N$ satisfies: \begin{equation} \lim_{T\to+\infty}Te^{-\rho T}\log N\left(x_{0},T\right)=0.\label{eq: logN a infinito} \end{equation} \end{lemma} \begin{proof} Fix $x_{0}$ and $T>0$. The equation \begin{equation} \log\beta+\beta b_{0}=-Tb_{0},\quad\mbox{\ensuremath{\beta}>0}\label{eq: def beta} \end{equation} has a unique solution, since the left-hand side is strictly increasing in $\beta$ from $-\infty$ to $+\infty$; this solution is strictly less than $1$. Call it $\beta_{T}$, and define \begin{equation} N\left(x_{0},T\right):=K\left(x_{0}\right)\beta_{T}^{-2}e^{2\rho\left(T+\beta_{T}\right)},\label{eq: def N} \end{equation} where $K\left(x_{0}\right)=K_{1}\left(x_{0}\right)\vee1$ and $K_{1}\left(x_{0}\right)$ is the constant introduced in Proposition $\ref{prop: succ max}$. Now fix $u\in\Lambda\left(x_{0}\right)$ such that $u$ belongs to a maximizing sequence. If $u\leq N\left(x_{0},T\right)$ almost everywhere in $\left[0,T\right]$, then set $\tilde{u}^{T}:=u$, and the proof is over. If there exists a non-negligible subset of $\left[0,T\right]$ in which $u>N\left(x_{0},T\right)$ then define \begin{align*} & \tilde{I}:=\int_{0}^{T}\left(u\left(t\right)-u\left(t\right)\wedge N\left(x_{0},T\right)\right)\mbox{d}t\\ & \tilde{u}^{T}:=\left(u\wedge N\left(x_{0},T\right)\right)\cdot\chi_{\left[0,T\right]}+\left(u+\tilde{I}\right)\cdot\chi_{\left(T,T+\beta_{T}\right]}+u\cdot\chi_{\left(T+\beta_{T},+\infty\right)}. \end{align*} Obviously $\tilde{u}^{T}\in\Lambda\left(x_{0}\right)$, since $u\in\Lambda\left(x_{0}\right)$ and $N\left(x_{0},T\right)>0$.
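Since $\beta\mapsto\log\beta+\beta b_{0}$ is strictly increasing, $\beta_{T}$ can be computed by bisection. The Python sketch below (with the illustrative value $b_{0}=1$) also checks the fixed-point identity $\beta_{T}=e^{-\left(T+\beta_{T}\right)b_{0}}$ used in the sequel, and the asymptotics $\beta_{T}\sim e^{-Tb_{0}}$ established at the end of this proof:

```python
import math

b0 = 1.0                         # illustrative constant

def beta(T):
    # bisection for log(beta) + beta*b0 = -T*b0 on (0, 1)
    lo, hi = 1e-12, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.log(mid) + mid * b0 < -T * b0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for T in [0.5, 1.0, 5.0, 10.0]:
    bT = beta(T)
    assert 0.0 < bT < 1.0
    # fixed-point form of eq: def beta, used in the proof:
    assert abs(bT - math.exp(-(T + bT) * b0)) < 1e-10
# asymptotics beta_T ~ e^{-T b0} for large T
assert abs(beta(10.0) / math.exp(-10.0 * b0) - 1.0) < 1e-3
```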
First we prove that \begin{equation} 0\leq x\left(\cdot;x_{0},\tilde{u}^{T}\right)\leq x\left(\cdot;x_{0},u\right)\quad\mbox{in }\left[0,+\infty\right)\label{eq: orbita alto meno di orig} \end{equation} Clearly $x\left(\cdot;x_{0},\tilde{u}^{T}\right)\geq0$, by the admissibility of $\tilde{u}^{T}$. For simplicity of notation we set $N=N\left(x_{0},T\right)$, $\tilde{x}_{T}=x\left(\cdot;x_{0},\tilde{u}^{T}\right)$ and $x=x\left(\cdot;x_{0},u\right)$. Obviously $\tilde{x}_{T}\leq x$ in $\left[0,T\right]$, by Remark $\ref{remark comp ODE}$. Fix $t\in\left(T,T+\beta_{T}\right]$, and set $h:=h\left(\tilde{x}_{T},x\right)$, as in Remark $\ref{remark funzione h}$. Hence: \begin{eqnarray*} \tilde{x}_{T}\left(t\right)-x\left(t\right) & = & \int_{0}^{T}\exp\left(\int_{s}^{t}h\mbox{d}\tau\right)\left(u\left(s\right)\wedge N-u\left(s\right)\right)\mbox{d}s\\ & & +\tilde{I}\int_{T}^{t}\exp\left(\int_{s}^{t}h\mbox{d}\tau\right)\mbox{d}s. \end{eqnarray*} The first term is estimated in the following way: \begin{eqnarray*} \int_{0}^{T}\exp\left(\int_{s}^{t}h\mbox{d}\tau\right)\left(u\left(s\right)\wedge N-u\left(s\right)\right)\mbox{d}s & \leq & \int_{0}^{T}e^{\left(s-t\right)b_{0}}\left(u\left(s\right)\wedge N-u\left(s\right)\right)\mbox{d}s\\ & \leq & e^{-tb_{0}}\int_{0}^{T}\left(u\left(s\right)\wedge N-u\left(s\right)\right)\mbox{d}s\\ & \leq & e^{-\left(T+\beta_{T}\right)b_{0}}\int_{0}^{T}\left(u\left(s\right)\wedge N-u\left(s\right)\right)\mbox{d}s\\ & = & -\tilde{I}e^{-\left(T+\beta_{T}\right)b_{0}}. \end{eqnarray*} Since $h\leq0$, the second term is estimated from above by $\tilde{I}\beta_{T}$. Thus we obtain: \[ \tilde{x}_{T}\left(t\right)-x\left(t\right)\leq\tilde{I}\left(\beta_{T}-e^{-\left(T+\beta_{T}\right)b_{0}}\right), \] and the last quantity is zero, by definition of $\beta_{T}$. This implies that $\tilde{x}_{T}\leq x$ also in $\left(T+\beta_{T},+\infty\right)$, again by Remark $\ref{remark comp ODE}$.
Hence, relation $\eqref{eq: orbita alto meno di orig}$ holds. Now we estimate the ``logarithmic'' part of the difference between $\mathcal{B}\left(x_{0};\tilde{u}^{T}\right)$ and $\mathcal{B}\left(x_{0};u\right)$. By the concavity of log, we have: \begin{eqnarray*} & & \int_{0}^{+\infty}e^{-\rho t}\left(\log\tilde{u}^{T}\left(t\right)-\log u\left(t\right)\right)\mbox{d}t\\ & = & \int_{0}^{T}e^{-\rho t}\left\{ \log\left(u\left(t\right)\wedge N\right)-\log u\left(t\right)\right\} \mbox{d}t\\ & & +\int_{T}^{T+\beta_{T}}e^{-\rho t}\left\{ \log\left(u\left(t\right)+\tilde{I}\right)-\log u\left(t\right)\right\} \mbox{d}t\\ & \geq & \int_{0}^{T}e^{-\rho t}\left(u\left(t\right)\wedge N\right)^{-1}\left\{ u\left(t\right)\wedge N-u\left(t\right)\right\} \mbox{d}t\\ & & +\tilde{I}\int_{T}^{T+\beta_{T}}e^{-\rho t}\left(u\left(t\right)+\tilde{I}\right)^{-1}\mbox{d}t \end{eqnarray*} \begin{eqnarray} & = & \frac{1}{N}\int_{0}^{T}e^{-\rho t}\left\{ u\left(t\right)\wedge N-u\left(t\right)\right\} \mbox{d}t\nonumber \\ & & +\tilde{I}\int_{T}^{T+\beta_{T}}e^{-\rho t}\left(u\left(t\right)+\tilde{I}\right)^{-1}\mbox{d}t\nonumber \\ & \geq & \frac{1}{N}\int_{0}^{T}\left(u\left(t\right)\wedge N-u\left(t\right)\right)\mbox{d}t\nonumber \\ & & +\tilde{I}\int_{T}^{T+\beta_{T}}e^{-\rho t}\left(u\left(t\right)+\tilde{I}\right)^{-1}\mbox{d}t\nonumber \\ & = & \tilde{I}\left(\int_{T}^{T+\beta_{T}}e^{-\rho t}\left(u\left(t\right)+\tilde{I}\right)^{-1}\mbox{d}t-\frac{1}{N}\right).\label{eq: loc alto meglio 1} \end{eqnarray} Moreover, by Jensen's inequality: \begin{eqnarray*} \int_{T}^{T+\beta_{T}}e^{-\rho t}\left(u\left(t\right)+\tilde{I}\right)^{-1}\mbox{d}t & \geq & e^{-\rho\left(T+\beta_{T}\right)}\int_{T}^{T+\beta_{T}}\left(u\left(t\right)+\tilde{I}\right)^{-1}\mbox{d}t\\ & \geq & \beta_{T}^{2}e^{-\rho\left(T+\beta_{T}\right)}\frac{1}{\int_{T}^{T+\beta_{T}}\left(u\left(t\right)+\tilde{I}\right)\mbox{d}t}\\ & \geq & 
\beta_{T}^{2}e^{-\rho\left(T+\beta_{T}\right)}\frac{1}{\int_{T}^{T+\beta_{T}}u\left(t\right)\mbox{d}t+\tilde{I}}\\ & \geq & \beta_{T}^{2}e^{-\rho\left(T+\beta_{T}\right)}\frac{1}{\int_{0}^{T+\beta_{T}}u\left(t\right)\mbox{d}t} \end{eqnarray*} where the penultimate inequality holds since $\beta_{T}<1$. $ $ Now by Proposition $\ref{prop: succ max}$ we can complete this estimate in the following way: \begin{eqnarray} \int_{T}^{T+\beta_{T}}e^{-\rho t}\left(u\left(t\right)+\tilde{I}\right)^{-1}\mbox{d}t & \geq & K\left(x_{0}\right)^{-1}\beta_{T}^{2}e^{-2\rho\left(T+\beta_{T}\right)}\nonumber \\ & =: & \alpha\left(x_{0},T\right).\label{eq: loc alto meglio 2} \end{eqnarray} Observe that, by definition, $N\left(x_{0},T\right)=\alpha\left(x_{0},T\right)^{-1}$. Hence, joining $\eqref{eq: loc alto meglio 1}$ with $\eqref{eq: loc alto meglio 2}$ we obtain \begin{eqnarray} \int_{0}^{+\infty}e^{-\rho t}\left(\log\tilde{u}^{T}\left(t\right)-\log u\left(t\right)\right)\mbox{d}t & \geq & \tilde{I}\left(\alpha\left(x_{0},T\right)-\frac{1}{N\left(x_{0},T\right)}\right)=0.\label{eq: loc alto meglio 3} \end{eqnarray} This implies, by $\eqref{eq: orbita alto meno di orig}$: \begin{eqnarray*} \mathcal{B}\left(x_{0};\tilde{u}^{T}\right)-\mathcal{B}\left(x_{0};u\right) & = & \int_{0}^{+\infty}e^{-\rho t}\left(\log\tilde{u}^{T}\left(t\right)-\log u\left(t\right)\right)\mbox{d}t\\ & & -c\int_{0}^{+\infty}e^{-\rho t}\left\{ \tilde{x}_{T}^{2}\left(t\right)-x^{2}\left(t\right)\right\} \mbox{d}t\\ & \geq & 0. \end{eqnarray*} Finally we prove the monotonicity of $N\left(x_{0},T\right)$ in $T$ . First observe that $T\to\beta_{T}$ is clearly a strictly decreasing function, since the function $\beta\to\log\beta+\beta b_{0}$ is strictly increasing, and remembering equation $\eqref{eq: def beta}$. Moreover, the function $T\to T+\beta_{T}$ is strictly increasing. Indeed, set $f\left(x\right):=\log x+b_{0}x$ and let $\phi$ be the inverse of $f$. 
Then $\beta_{T}=\phi\left(-Tb_{0}\right)$, and: \[ \frac{\mbox{d}}{\mbox{d}T}\left(T+\beta_{T}\right)=1-b_{0}\phi'\left(-Tb_{0}\right)=1-\frac{b_{0}}{f'\left(\beta_{T}\right)}=1-\frac{b_{0}\beta_{T}}{1+b_{0}\beta_{T}}>0. \] This shows that $N\left(x_{0},\cdot\right)$ is strictly increasing. Finally observe that: \begin{equation} \beta_{T}\sim e^{-Tb_{0}}\quad\mbox{for }T\to+\infty.\label{eq: andamento beta} \end{equation} Indeed, with $f$ defined as before, we have: \[ \lim_{x\to0^{+}}\frac{f\left(x\right)}{\log x}=1. \] Hence $\phi\left(y\right)\sim e^{y}$ for $y\to-\infty$ and $\beta_{T}=\phi\left(-Tb_{0}\right)\sim e^{-Tb_{0}}$ for $T\to+\infty$. It follows from $\eqref{eq: andamento beta}$ and $\eqref{eq: def N}$ that: \begin{eqnarray*} Te^{-\rho T}\log N\left(x_{0},T\right) & = & Te^{-\rho T}\log K\left(x_{0}\right)+Te^{-\rho T}\log\left(\beta_{T}^{-2}\right)\\ & & +2\rho Te^{-\rho T}\left(T+\beta_{T}\right)\\ & \sim & 2b_{0}T^{2}e^{-\rho T}+2\rho T^{2}e^{-\rho T}\\ & = & 2\left(b_{0}+\rho\right)T^{2}e^{-\rho T}\quad\mbox{for }T\to+\infty, \end{eqnarray*} since $Te^{-\rho T}\log\left(\beta_{T}^{-2}\right)\sim2b_{0}T^{2}e^{-\rho T}$ by $\eqref{eq: andamento beta}$, while the first term is negligible with respect to the other two. This shows that $\eqref{eq: logN a infinito}$ holds. \end{proof} $ $ \begin{lemma} \label{lem: loc basso}There exists a function $\eta:\left[0,+\infty\right)^{2}\to\left(0,+\infty\right)$, continuous and strictly decreasing in the second variable, with the following property: \begin{align*} i)\ & \eta\left(x_{0},T\right)<N\left(x_{0},T\right)\quad\forall T>0 \end{align*} where $N$ is the function defined in Lemma $\ref{lem: loc alto}$;\\ $ii)\,$ for every $x_{0}\geq0$ and every $T\geq1$, if $u\in\Lambda\left(x_{0}\right)$ belongs to a maximizing sequence, there exists $u^{T}\in\Lambda\left(x_{0}\right)$ such that \begin{align*} & \mathcal{B}\left(x_{0};u^{T}\right)\geq\mathcal{B}\left(x_{0};u\right)\\ & u^{T}=\left(u\wedge N\left(x_{0},T\right)\right)\vee\eta\left(x_{0},T\right)\quad\mbox{a. e. in }\left[0,T\right].
\end{align*} In particular, the norm $\left\Vert \log u^{T}\right\Vert _{L^{\infty}\left(\left[0,T\right]\right)}$ is bounded above by a quantity which does not depend on $u$.\end{lemma} \begin{proof} Fix $x_{0}$ and $u$ as in the hypothesis, and set $x:=x\left(\cdot;x_{0},u\right)$. In order to define the function $\eta$, we preliminarily observe that there obviously exists a number $L\left(x_{0}\right)>\rho$ such that \begin{equation} e^{L\left(x_{0}\right)-\rho}-2c\rho^{-1}e^{-L\left(x_{0}\right)}\geq2cK_{2}\left(x_{0}\right).\label{eq: cond 1 L(x_0)} \end{equation} A simple computation shows that the function $T\to e^{\left(L\left(x_{0}\right)-\rho\right)T}-2c\rho^{-1}Te^{-L\left(x_{0}\right)T}$ is increasing if \begin{equation} L\left(x_{0}\right)>\rho+\frac{2c}{\rho}.\label{eq: cond 2 L(x_0)} \end{equation} We now choose $L\left(x_{0}\right)$ satisfying $\eqref{eq: cond 1 L(x_0)}$ and $\eqref{eq: cond 2 L(x_0)}$ and define \[ \eta\left(x_{0},T\right):=e^{-L\left(x_{0}\right)T}. \] Relation $i)$ follows from the fact that $N\left(x_{0},T\right)>1$; moreover we have: \begin{equation} e^{\left(L\left(x_{0}\right)-\rho\right)T}-2c\rho^{-1}Te^{-L\left(x_{0}\right)T}-2cK_{2}\left(x_{0}\right)\geq0\quad\forall T\geq1.\label{eq: per Bu_t meglio} \end{equation} Now fix $T\geq1$ and take $\tilde{u}^{T}$ as in Lemma $\ref{lem: loc alto}$. Define $u^{T}:=\tilde{u}^{T}$ if $\tilde{u}^{T}\geq\eta\left(x_{0},T\right)$ almost everywhere in $\left[0,T\right]$, and \[ u^{T}:=\left(\tilde{u}^{T}\vee\eta\left(x_{0},T\right)\right)\chi_{\left[0,T\right]}+\tilde{u}^{T}\chi_{\left(T,+\infty\right)} \] if there exists a subset of $\left[0,T\right]$ of positive measure where $\tilde{u}^{T}<\eta\left(x_{0},T\right)$. In this case define also \[ I:=\int_{0}^{T}\left[\tilde{u}^{T}\left(s\right)\vee\eta-\tilde{u}^{T}\left(s\right)\right]\mbox{d}s.
\] We show that \[ \mathcal{B}\left(x_{0};u^{T}\right)-\mathcal{B}\left(x_{0};\tilde{u}^{T}\right)\geq0, \] and the conclusion will follow from Lemma $\ref{lem: loc alto}$. We provide two different estimates of the quantity $x\left(\cdot;x_{0},u^{T}\right)-x\left(\cdot;x_{0},\tilde{u}^{T}\right)$. Set $x_{T}=x\left(\cdot;x_{0},u^{T}\right)$, $\tilde{x}_{T}=x\left(\cdot;x_{0},\tilde{u}^{T}\right)$, $h=h\left(x_{T},\tilde{x}_{T}\right)$, $\eta=\eta\left(x_{0},T\right)$ and $N=N\left(x_{0},T\right)$ for simplicity of notation. Remembering that $h\leq0$, we have, for every $t\in\left[0,T\right]$: \begin{eqnarray*} x_{T}\left(t\right)-\tilde{x}_{T}\left(t\right) & = & \int_{0}^{t}e^{\int_{s}^{t}h\mbox{d}\tau}\left[u^{T}\left(s\right)-\tilde{u}^{T}\left(s\right)\right]\mbox{d}s\\ & \leq & \int_{0}^{T}e^{\int_{s}^{t}h\mbox{d}\tau}\left[\tilde{u}^{T}\left(s\right)\vee\eta-\tilde{u}^{T}\left(s\right)\right]\mbox{d}s\\ & \leq & I. \end{eqnarray*} The same estimate holds for $t>T$, since $u^{T}=\tilde{u}^{T}$ in $\left(T,+\infty\right)$. Hence: \begin{equation} x_{T}-\tilde{x}_{T}\leq I\quad\mbox{in }\left[0,+\infty\right).\label{eq: stima x_T tilde - x} \end{equation} Moreover, since $\eta>0$: \begin{eqnarray*} I & = & \int_{0}^{T}\left[\tilde{u}^{T}\left(s\right)\vee\eta-\tilde{u}^{T}\left(s\right)\right]\mbox{d}s\\ & = & \int_{\left[0,T\right]\cap\left\{ \tilde{u}^{T}\leq\eta\right\} }\left[\eta-\tilde{u}^{T}\left(s\right)\right]\mbox{d}s\\ & \leq & T\eta.
\end{eqnarray*} Hence \begin{equation} x_{T}-\tilde{x}_{T}\leq T\eta\quad\mbox{in }\left[0,+\infty\right).\label{eq: stima x_T tilde - x_T} \end{equation} By $\eqref{eq: stima x_T tilde - x}$ and $\eqref{eq: stima x_T tilde - x_T}$, using the convexity relation $x^{2}-y^{2}\leq2x\left(x-y\right)$, we obtain: \begin{eqnarray*} c\int_{0}^{+\infty}e^{-\rho t}\left[x_{T}^{2}\left(t\right)-\tilde{x}_{T}^{2}\left(t\right)\right]\mbox{d}t & \leq & 2c\int_{0}^{+\infty}e^{-\rho t}x_{T}\left(t\right)\left[x_{T}\left(t\right)-\tilde{x}_{T}\left(t\right)\right]\mbox{d}t\\ & \leq & 2cI\int_{0}^{+\infty}e^{-\rho t}x_{T}\left(t\right)\mbox{d}t\\ & = & 2cI\int_{0}^{+\infty}e^{-\rho t}\left[x_{T}\left(t\right)-\tilde{x}_{T}\left(t\right)\right]\mbox{d}t\\ & & +2cI\int_{0}^{+\infty}e^{-\rho t}\tilde{x}_{T}\left(t\right)\mbox{d}t\\ & \leq & 2cIT\eta\int_{0}^{+\infty}e^{-\rho t}\mbox{d}t+2cI\int_{0}^{+\infty}e^{-\rho t}x\left(t\right)\mbox{d}t\\ & \leq & I\left(2\frac{c}{\rho}T\eta+2cK_{2}\left(x_{0}\right)\right), \end{eqnarray*} where we also used $\eqref{eq: orbita alto meno di orig}$ and $\eqref{eq: stima x massimizz}$ (the trajectory $x\left(\cdot\right)$ is associated with a control in a maximizing sequence). Moreover: \begin{eqnarray*} \int_{0}^{+\infty}e^{-\rho t}\left(\log u^{T}\left(t\right)-\log\tilde{u}^{T}\left(t\right)\right)\mbox{d}t & = & \int_{0}^{T}e^{-\rho t}\left(\log\left(\tilde{u}^{T}\left(t\right)\vee\eta\right)-\log\tilde{u}^{T}\left(t\right)\right)\mbox{d}t\\ & \geq & \int_{0}^{T}e^{-\rho t}\frac{1}{\tilde{u}^{T}\left(t\right)\vee\eta}\left(\tilde{u}^{T}\left(t\right)\vee\eta-\tilde{u}^{T}\left(t\right)\right)\mbox{d}t\\ & = & \frac{1}{\eta}\int_{0}^{T}e^{-\rho t}\left(\tilde{u}^{T}\left(t\right)\vee\eta-\tilde{u}^{T}\left(t\right)\right)\mbox{d}t\\ & \geq & \frac{e^{-\rho T}}{\eta}I. 
\end{eqnarray*} Combining the last two estimates leads to: \begin{eqnarray*} \mathcal{B}\left(x_{0};u^{T}\right)-\mathcal{B}\left(x_{0};\tilde{u}^{T}\right) & = & \int_{0}^{+\infty}e^{-\rho t}\left(\log u^{T}\left(t\right)-\log\tilde{u}^{T}\left(t\right)\right)\mbox{d}t\\ & & -c\int_{0}^{+\infty}e^{-\rho t}\left[x_{T}^{2}\left(t\right)-\tilde{x}_{T}^{2}\left(t\right)\right]\mbox{d}t\\ & \geq & I\left(\frac{e^{-\rho T}}{\eta\left(x_{0},T\right)}-2\frac{c}{\rho}T\eta\left(x_{0},T\right)-2cK_{2}\left(x_{0}\right)\right)\\ \\ & = & I\left(e^{\left(L\left(x_{0}\right)-\rho\right)T}-2c\rho^{-1}Te^{-L\left(x_{0}\right)T}-2cK_{2}\left(x_{0}\right)\right)\\ & \geq & 0, \end{eqnarray*} where the last inequality holds by $\eqref{eq: per Bu_t meglio}$. \end{proof} \section{Diagonal procedures and functional convergence} From this point on, the initial state $x_{0}\geq0$ is to be considered fixed. \begin{lemma} \label{prop: new seq1}There exist a sequence $\left(v_{n}\right)_{n\in\mathbb{N}}$ and a function $v$ in $\Lambda\left(x_{0}\right)$ such that: \begin{align} & \lim_{n\to+\infty}\mathcal{B}\left(x_{0};v_{n}\right)=V\left(x_{0}\right)\label{eq: new1 massimizz}\\ & v_{n}\rightharpoonup v\mbox{ in }L^{1}\left(\left[0,T\right]\right)\quad\forall T>0\label{eq: new1 conv deb}\\ & \forall T\in\mathbb{N}:\mbox{almost everywhere in }\left[0,T\right]:\nonumber \\ & \forall n\geq T:\eta\left(x_{0},T\right)\leq v,v_{n}\leq N\left(x_{0},T\right)\label{eq: new1 bound} \end{align} where $N$, $\eta$ are the functions defined in Lemmas $\ref{lem: loc alto}$ and $\ref{lem: loc basso}$.\end{lemma} \begin{proof} Set $\mathcal{B}=\mathcal{B}\left(x_{0};\cdot\right)$ and fix $\left(u_{n}\right)_{n\in\mathbb{N}}$ such that \[ \lim_{n\to+\infty}\mathcal{B}\left(u_{n}\right)=V\left(x_{0}\right). \] For every $n\in\mathbb{N}$, let $u_{n}^{1}$ be the function obtained by applying Lemma $\ref{lem: loc basso}$ to $u_{n}$, with $T=1$.
Then \begin{align*} & u_{n}^{1}=\left(u_{n}\wedge N\left(x_{0},1\right)\right)\vee\eta\left(x_{0},1\right)\quad\mbox{a.e. in }\left[0,1\right]\\ & \mathcal{B}\left(u_{n}^{1}\right)\geq\mathcal{B}\left(u_{n}\right). \end{align*} Hence, as a consequence of the Dunford-Pettis criterion, there exists a subsequence $\left(\overline{u}_{n}^{1}\right)_{n}$ of $\left(u_{n}^{1}\right)_{n}$ and a function $u^{1}\in L^{1}\left(\left[0,1\right]\right)$ such that \[ \overline{u}_{n}^{1}\rightharpoonup u^{1}\mbox{ in }L^{1}\left(\left[0,1\right]\right). \] Now apply Lemma $\ref{lem: loc basso}$ to the elements of the sequence $\left(\overline{u}_{n}^{1}\right)_{n}$ in order to obtain a sequence $\left(u_{n}^{2}\right)_{n}$ satisfying, for every $n\in\mathbb{N}$: \begin{align*} & u_{n}^{2}=\left(\overline{u}_{n}^{1}\wedge N\left(x_{0},2\right)\right)\vee\eta\left(x_{0},2\right)\quad\mbox{a.e. in }\left[0,2\right]\\ & \mathcal{B}\left(u_{n}^{2}\right)\geq\mathcal{B}\left(\overline{u}_{n}^{1}\right). \end{align*} Take, again by Dunford-Pettis, $\left(\overline{u}_{n}^{2}\right)_{n}$ extracted from $\left(u_{n}^{2}\right)_{n}$ and a function $u^{2}\in L^{1}\left(\left[0,2\right]\right)$ such that \[ \overline{u}_{n}^{2}\rightharpoonup u^{2}\mbox{ in }L^{1}\left(\left[0,2\right]\right). \] Iterating this process we define families $\left(\overline{u}_{n}^{T}\right)_{n}$, $\left(u_{n}^{T}\right)_{n}$, $\sigma_{T}$ ($T\in\mathbb{N}$) such that the $\sigma_{T}$'s are strictly increasing with $\sigma_{T}\geq Id$ , satisfying for every $T,n\in\mathbb{N}$: \begin{align} & \overline{u}_{n}^{T}=u_{\sigma_{T}\left(n\right)}^{T}\label{eq: provv subseq}\\ & u_{n}^{T}=\left(\overline{u}_{n}^{T-1}\wedge N\left(x_{0},T\right)\right)\vee\eta\left(x_{0},T\right)\quad\mbox{a.e. 
in }\left[0,T\right]\label{eq: provv bound}\\ & \mathcal{B}\left(u_{n}^{T}\right)\geq\mathcal{B}\left(\overline{u}_{n}^{T-1}\right)\label{eq: provv meglio}\\ & \overline{u}_{n}^{T}\rightharpoonup u^{T}\mbox{ in }L^{1}\left(\left[0,T\right]\right).\label{eq: provv conv} \end{align} Fix $T\in\mathbb{N}$. The sequence $\left(\overline{u}{}_{n}^{T}\right)_{n}$ coincides, almost everywhere in $\left[0,T-1\right]$, with a sequence that is extracted from $\left(\overline{u}{}_{n}^{T-1}\right)_{n}$. Indeed, for every $n\in\mathbb{N}$: \begin{eqnarray*} \bar{u}_{n}^{T} & = & u_{\sigma_{T}\left(n\right)}^{T}\overset{{\scriptstyle a.e.\, in\,}{\scriptscriptstyle \left[0,T\right]}}{=}\left(\overline{u}_{\sigma_{T}\left(n\right)}^{T-1}\wedge N\left(x_{0},T\right)\right)\vee\eta\left(x_{0},T\right)\\ & \overset{{\scriptstyle a.e.\, in\,}{\scriptscriptstyle \left[0,T-1\right]}}{=} & \overline{u}_{\sigma_{T}\left(n\right)}^{T-1}. \end{eqnarray*} The last equality holds since applying recursively (in $T$) relation $\eqref{eq: provv bound}$ together with relation $\eqref{eq: provv subseq}$ gives $\overline{u}_{\sigma_{T}\left(n\right)}^{T-1}\in\left[\eta\left(x_{0},T-1\right),N\left(x_{0},T-1\right)\right]$; then observe that by Lemmas $\ref{lem: loc alto}$ and $\ref{lem: loc basso}$ the function $\eta\left(x_{0},\cdot\right)$ is decreasing and the function $N\left(x_{0},\cdot\right)$ is increasing. Hence $u^{T-1}=u^{T}$ almost everywhere in $\left[0,T-1\right]$, by the essential uniqueness of the weak limit. 
Hence, defining \[ \forall t\geq0:v\left(t\right):=u^{\left[t\right]+1}\left(t\right) \] we obtain $v=u^{T}$ almost everywhere in $\left[0,T\right]$ and \begin{equation} \forall T\in\mathbb{N}:\overline{u}_{n}^{T}\rightharpoonup v\quad\mbox{in }L^{1}\left[0,T\right].\label{eq: provv conv 2} \end{equation} Repeating the previous argument, we see that for every $T,n\in\mathbb{N}$: \begin{eqnarray*} \bar{u}_{n}^{T} & \overset{{\scriptstyle a.e.\, in\,}{\scriptscriptstyle \left[0,T-1\right]}}{=} & \overline{u}_{\sigma_{T}\left(n\right)}^{T-1}\\ & \overset{{\scriptstyle a.e.\, in\,}{\scriptscriptstyle \left[0,T-2\right]}}{=} & \overline{u}_{\sigma_{T-1}\circ\sigma_{T}\left(n\right)}^{T-2}\\ & \dots\\ & \overset{{\scriptstyle a.e.\, in\,}{\scriptscriptstyle \left[0,T-j\right]}}{=} & \overline{u}_{\sigma_{T-j+1}\circ\dots\circ\sigma_{T}\left(n\right)}^{T-j}. \end{eqnarray*} Observe that $\left(\overline{u}_{\sigma_{T-j+1}\circ\dots\circ\sigma_{T}\left(n\right)}^{T-j}\right)_{n}$ is a subsequence of $\left(\overline{u}_{n}^{T-j}\right)_{n}$ since the composition $\sigma_{T-j+1}\circ\dots\circ\sigma_{T}$ is strictly increasing and satisfies \[ \sigma_{T-j+1}\circ\dots\circ\sigma_{T}\left(n\right)\geq n\quad\forall n\in\mathbb{N}. \] Hence, inverting the quantifiers ``$\forall n\in\mathbb{N}$'' and ``a.e. in $\left[0,T-j\right]$'', we see that $\left(\overline{u}_{n}^{T}\right)_{n}$ coincides, almost everywhere in $\left[0,T-j\right]$ with a subsequence of $\left(\overline{u}_{n}^{T-j}\right)_{n}$, for every $T\in\mathbb{N}$ and $j=1,\dots,T-1$. This implies that for every $T\in\mathbb{N}$ the sequence $\left(v_{n}\right)_{n\geq T}$ defined by $v_{n}:=\overline{u}_{n}^{n}$ coincides with a subsequence of $\left(\overline{u}_{n}^{T}\right)_{n\geq1}$, almost everywhere in $\left[0,T\right]$. 
Hence \begin{align} & \forall T\in\mathbb{N}:\mbox{almost everywhere in }\left[0,T\right]:\nonumber \\ & \forall n\geq T:\eta\left(x_{0},T\right)\leq v_{n}\leq N\left(x_{0},T\right).\label{eq: seq new1 bound} \end{align} and \begin{align*} & v_{n}\rightharpoonup v\mbox{ in }L^{1}\left(\left[0,T\right]\right)\quad\forall T\in\mathbb{N}, \end{align*} by $\eqref{eq: provv bound}$ and $\eqref{eq: provv conv 2}$. The extension to every $T>0$ is straightforward, so we obtain $\eqref{eq: new1 conv deb}$. Now fix $T>0$; a well known property of the weak convergence implies that \begin{equation} \liminf_{n\to+\infty}v_{n}\left(t\right)\leq v\left(t\right)\leq\limsup_{n\to+\infty}v_{n}\left(t\right)\mbox{ for almost every }t\in\left[0,T\right].\label{eq: propr conv deb} \end{equation} Considering the intersection between the subsets of $\left[0,T\right]$ where relations $\eqref{eq: seq new1 bound}$ and $\eqref{eq: propr conv deb}$ hold, we obtain $\eqref{eq: new1 bound}$. In order to prove $\eqref{eq: new1 massimizz}$, observe that \begin{eqnarray*} \mathcal{B}\left(v_{n}\right) & = & \mathcal{B}\left(u_{\sigma_{n}\left(n\right)}^{n}\right)\geq\mathcal{B}\left(\overline{u}_{\sigma_{n}\left(n\right)}^{n-1}\right)\\ & = & \mathcal{B}\left(u_{\sigma_{n-1}\circ\sigma_{n}\left(n\right)}^{n-1}\right)\geq\dots\geq\mathcal{B}\left(u_{\sigma_{n-2}\circ\sigma_{n-1}\circ\sigma_{n}\left(n\right)}^{n-2}\right)\\ & \geq & \dots\geq\mathcal{B}\left(u_{\sigma_{1}\circ\dots\circ\sigma_{n}\left(n\right)}^{1}\right)\geq\mathcal{B}\left(u_{\sigma_{1}\circ\dots\circ\sigma_{n}\left(n\right)}\right). \end{eqnarray*} Fix $\epsilon>0$ and $n_{\epsilon}\in\mathbb{N}$ such that $V\left(x_{0}\right)-\mathcal{B}\left(u_{n}\right)<\epsilon$ for $n\geq n_{\epsilon}$; since $\sigma_{1}\circ\dots\circ\sigma_{m}\geq Id$, we have \[ V\left(x_{0}\right)-\mathcal{B}\left(v_{n}\right)<\epsilon\quad\forall n\geq n_{\epsilon}. 
\] \end{proof} \begin{proposition} \label{prop: conv punt orbits}Let $v_{n}$ ($n\in\mathbb{N}$) and $v$ be as in Lemma $\ref{prop: new seq1}$, and let $x_{n}:=x\left(\cdot;x_{0},v_{n}\right)$ and $x:=x\left(\cdot;x_{0},v\right)$ be the associated trajectories starting at $x_{0}$. Then \[ x_{n}\to x\quad\mbox{pointwise in }\left[0,+\infty\right). \] \end{proposition} \begin{proof} Fix $T>0$. By $\eqref{eq: new1 bound}$ in Lemma $\ref{prop: new seq1}$ and by Remark $\ref{remark comp ODE}$, $v$ is admissible and the following uniform estimate holds: \begin{equation} \left|x-x_{n}\right|\leq x\left(\cdot;x_{0},N\left(x_{0},T\right)\right)\quad\mbox{in }\left[0,T\right],\,\forall n\in\mathbb{N}.\label{eq: stima orbit unif} \end{equation} Now fix $t\in\left[0,T\right]$ and $n\in\mathbb{N}$. Subtracting the state equation for $x$ from the state equation for $x_{n}$, we obtain, for every $s\in\left[0,t\right]$: \begin{eqnarray*} \dot{x}_{n}\left(s\right)-\dot{x}\left(s\right) & = & F\left(x_{n}\left(s\right)\right)-F\left(x\left(s\right)\right)+v_{n}\left(s\right)-v\left(s\right)\\ & = & h_{n}\left(s\right)\left[x_{n}\left(s\right)-x\left(s\right)\right]+v_{n}\left(s\right)-v\left(s\right), \end{eqnarray*} where $h_{n}:=h\left(x_{n},x\right)$ is the function defined in Remark $\ref{remark funzione h}$.
Integrating both sides of this equation between $0$ and $t$, then taking absolute values, leads to: \begin{eqnarray} \left|x_{n}\left(t\right)-x\left(t\right)\right| & \leq\int_{0}^{t}\left|h_{n}\left(s\right)\right|\left|x_{n}\left(s\right)-x\left(s\right)\right|\mbox{d}s & +\left|\int_{0}^{t}\left[v_{n}\left(s\right)-v\left(s\right)\right]\mbox{d}s\right|.\label{eq: per conv qo orbite} \end{eqnarray} Observe that, for every $s\in\left[0,t\right]$: \begin{eqnarray*} \left|h_{n}\left(s\right)\right|\left|x_{n}\left(s\right)-x\left(s\right)\right| & \leq & b_{0}x\left(s;x_{0},N\left(x_{0},T\right)\right), \end{eqnarray*} by Remark $\ref{remark funzione h}$ and by $\eqref{eq: stima orbit unif}$. Since the function on the right-hand side obviously belongs to $L^{1}\left(\left[0,t\right]\right)$, passing to the limsup in $\eqref{eq: per conv qo orbite}$ and recalling $\eqref{eq: new1 conv deb}$, we obtain by Dominated Convergence: \begin{eqnarray} \limsup_{n\to+\infty}\left|x_{n}\left(t\right)-x\left(t\right)\right| & \leq & \limsup_{n\to+\infty}\int_{0}^{t}\left|h_{n}\left(s\right)\right|\left|x_{n}\left(s\right)-x\left(s\right)\right|\mbox{d}s\nonumber \\ & = & \int_{0}^{t}\limsup_{n\to+\infty}\left|h_{n}\left(s\right)\right|\left|x_{n}\left(s\right)-x\left(s\right)\right|\mbox{d}s\label{eq: conv orbits 1}\\ & \leq & b_{0}\int_{0}^{t}\limsup_{n\to+\infty}\left|x_{n}\left(s\right)-x\left(s\right)\right|\mbox{d}s.\nonumber \end{eqnarray} Hence by Gronwall's inequality: \[ \limsup_{n\to+\infty}\left|x_{n}\left(t\right)-x\left(t\right)\right|=0, \] for every $t\in\left[0,T\right]$. This is equivalent to \[ \lim_{n\to+\infty}x_{n}=x\quad\mbox{pointwise in }\left[0,T\right], \] which proves the claim, since $T>0$ was arbitrary. \end{proof} \begin{lemma} \label{lem: new seq 2} Take $\left(v_{n}\right)_{n\in\mathbb{N}}$ and $v$ as in Lemma $\ref{prop: new seq1}$.
There exists a sequence\\ $\left(v_{n,n}\right)_{n\in\mathbb{N}}$, extracted from $\left(v_{n}\right)_{n\in\mathbb{N}}$, and a function $u_{*}\in\Lambda\left(x_{0}\right)$, satisfying, for every $T>0$: \begin{align} & \log v_{n,n}\rightharpoonup\log u_{*}\quad\mbox{in }L^{1}\left(\left[0,T\right]\right)\label{eq: new2 conv deb}\\ & \eta\left(x_{0},T\right)\leq u_{*}\leq N\left(x_{0},T\right)\quad\mbox{ a.e. in }\left[0,T\right].\label{eq: u_star bound}\\ & 0\leq x\left(\cdot;x_{0},u_{*}\right)\leq x\left(\cdot;x_{0},v\right)\quad\mbox{ in }\left[0,+\infty\right).\label{eq: x_star minore x} \end{align} \end{lemma} \begin{proof} We conduct a ``standard'' diagonalization on the sequence $\left(\log v_{n}\right)_{n\in\mathbb{N}}$. Observe that this sequence, by $\eqref{eq: new1 bound}$, is also uniformly bounded in the $L_{\left[0,1\right]}^{\infty}$ norm. Precisely, for any $n\in\mathbb{N}$: \[ \log\eta\left(x_{0},1\right)\leq\log v_{n}\leq\log N\left(x_{0},1\right)\quad\mbox{a.e. in }\left[0,1\right]. \] Hence by the Dunford-Pettis criterion there exists a function $f^{1}\in L^{1}\left(\left[0,1\right]\right)$ and a sequence $\left(v_{n,1}\right)_{n}$ extracted from $\left(v_{n}\right)_{n}$ such that \[ \log v_{n,1}\rightharpoonup f^{1}\quad\mbox{in }L^{1}\left(\left[0,1\right]\right). \] Again by $\eqref{eq: new1 bound}$, $\left(v_{n,1}\right)_{n}$ satisfies, for every $n\in\mathbb{N}$: \[ \log\eta\left(x_{0},2\right)\leq\log v_{n,1}\leq\log N\left(x_{0},2\right)\quad\mbox{a.e. in }\left[0,2\right]; \] therefore there exist $f^{2}\in L^{1}\left(\left[0,2\right]\right)$ and $\left(v_{n,2}\right)_{n}$ extracted from $\left(v_{n,1}\right)_{n}$ such that \[ \log v_{n,2}\rightharpoonup f^{2}\quad\mbox{in }L^{1}\left(\left[0,2\right]\right), \] and so on.
This shows that there exists a function $f\in L_{loc}^{1}\left(\left[0,+\infty\right)\right)$ satisfying, together with the diagonal sequence $\left(v_{n,n}\right)_{n}$, for every $T>0$: \begin{align*} & \log v_{n,n}\rightharpoonup f\quad\mbox{in }L^{1}\left(\left[0,T\right]\right)\\ & \log\eta\left(x_{0},T\right)\leq\log v_{n,n}\leq\log N\left(x_{0},T\right)\ \mbox{a.e. in }\left[0,T\right],\forall n\geq T. \end{align*} Define $u_{*}:=e^{f}$; then relations $\eqref{eq: new2 conv deb}$ and $\eqref{eq: u_star bound}$ are easy consequences of this definition and of the properties of the weak convergence. In order to prove $\eqref{eq: x_star minore x}$, we first observe that, obviously, $x\left(\cdot;x_{0},u_{*}\right)\geq0$. Fix $0<t_{0}<t_{1}<T$, with $t_{0}$ a Lebesgue point for both $\log u_{*}$ and $v$. By Jensen's inequality we have, for every $n\in\mathbb{N}$: \[ \frac{\int_{t_{0}}^{t_{1}}\log v_{n,n}\left(s\right)\mbox{d}s}{t_{1}-t_{0}}\leq\log\left(\frac{\int_{t_{0}}^{t_{1}}v_{n,n}\left(s\right)\mbox{d}s}{t_{1}-t_{0}}\right); \] since $\left(v_{n,n}\right)_{n}$ is a subsequence of $\left(v_{n}\right)_{n}$, passing to the limit for $n\to+\infty$ in the previous relation, we obtain by $\eqref{eq: new1 conv deb}$ and $\eqref{eq: new2 conv deb}$: \[ \frac{\int_{t_{0}}^{t_{1}}\log u_{*}\left(s\right)\mbox{d}s}{t_{1}-t_{0}}\leq\log\left(\frac{\int_{t_{0}}^{t_{1}}v\left(s\right)\mbox{d}s}{t_{1}-t_{0}}\right). \] Passing now to the limit for $t_{1}\to t_{0}$ yields $\log u_{*}\left(t_{0}\right)\leq\log v\left(t_{0}\right)$. By the Lebesgue Point Theorem, $t_{0}$ is a generic element of a full measure subset of $\left[0,T\right]$. This implies $\eqref{eq: x_star minore x}$, by Remark $\ref{remark comp ODE}$.
\end{proof} A simple integration by parts provides the following decomposition of the objective functional: \begin{eqnarray*} \forall\mathrm{u}\in\Lambda\left(x_{0}\right):\ \mathcal{B}\left(x_{0};\mathrm{u}\right) & = & \int_{0}^{+\infty}e^{-\rho t}\left(\log\mathrm{u}\left(t\right)-cx^{2}\left(t\right)\right)\mbox{d}t\\ & = & \int_{0}^{+\infty}e^{-\rho t}\log\mathrm{u}\left(t\right)\mbox{d}t-c\int_{0}^{+\infty}e^{-\rho t}x^{2}\left(t\right)\mbox{d}t\\ & = & \lim_{T\to+\infty}e^{-\rho T}\int_{0}^{T}\log\mathrm{u}\left(s\right)\mbox{d}s+\\ & & \rho\int_{0}^{+\infty}e^{-\rho t}\left(\int_{0}^{t}\log\mathrm{u}\left(s\right)\mbox{d}s-\frac{c}{\rho}x^{2}\left(t\right)\right)\mbox{d}t\\ & =: & \lim_{T\to+\infty}e^{-\rho T}\int_{0}^{T}\log\mathrm{u}\left(t\right)\mbox{d}t+\mathcal{B}_{1}\left(x_{0};\mathrm{u}\right) \end{eqnarray*} where \[ \mathcal{B}_{1}\left(x_{0};\mathrm{u}\right):=\rho\int_{0}^{+\infty}e^{-\rho t}\left(\int_{0}^{t}\log\mathrm{u}\left(s\right)\mbox{d}s-\frac{c}{\rho}x^{2}\left(t;x_{0},\mathrm{u}\right)\right)\mbox{d}t. \] With this notation, we prove the final step. \begin{corollary} The control $u_{*}$ defined in Lemma $\ref{lem: new seq 2}$ is optimal at $x_{0}$, and \[ u_{*}\in L_{loc}^{\infty}\left(\left[0,+\infty\right)\right). \] \end{corollary} \begin{proof} Obviously $u_{*}\in L_{loc}^{\infty}\left([0,+\infty)\right)$, by $\eqref{eq: u_star bound}$.
Observe that, by Jensen's inequality and by Proposition $\ref{prop: succ max}$, for every $n\in\mathbb{N}$ and $t>0$: \begin{eqnarray} e^{-\rho t}\int_{0}^{t}\log v_{n,n}\left(s\right)\mbox{d}s & \leq & te^{-\rho t}\log\left(\frac{\int_{0}^{t}v_{n,n}\left(s\right)\mbox{d}s}{t}\right)\nonumber \\ & \leq & te^{-\rho t}\log\left(K\left(x_{0}\right)e^{\rho t}\right)-te^{-\rho t}\log\left(t\right).\label{eq: stima int-log-v_n,n} \end{eqnarray} This implies that $\lim_{t\to+\infty}e^{-\rho t}\int_{0}^{t}\log v_{n,n}\left(s\right)\mbox{d}s\leq0$ and consequently \begin{equation} \mathcal{B}\left(x_{0};v_{n,n}\right)\leq\mathcal{B}_{1}\left(x_{0};v_{n,n}\right).\label{eq: B min B_1 ultimo} \end{equation} Moreover \begin{align} & \int_{0}^{+\infty}\Bigl(te^{-\rho t}\log\left(K\left(x_{0}\right)e^{\rho t}\right)-te^{-\rho t}\log\left(t\right)\Bigr)\mbox{d}t\nonumber \\ \leq & \int_{0}^{1}te^{-\rho t}\log\left(K\left(x_{0}\right)e^{\rho t}\right)\mbox{d}t-\int_{0}^{1}te^{-\rho t}\log\left(t\right)\mbox{d}t\nonumber \\ & +\int_{1}^{+\infty}te^{-\rho t}\log\left(K\left(x_{0}\right)e^{\rho t}\right)\mbox{d}t\ <\ +\infty.\label{eq: bound funzione somm} \end{align} Set $x_{n,n}:=x\left(\cdot;x_{0},v_{n,n}\right)$, $x:=x\left(\cdot;x_{0},v\right)$ and $x_{*}:=x\left(\cdot;x_{0},u_{*}\right)$. Relations $\eqref{eq: stima int-log-v_n,n}$ and $\eqref{eq: bound funzione somm}$ imply that the hypotheses of Lemma $\ref{lem: reverse fatou}$ are satisfied for the integral \[ \int_{0}^{\infty}e^{-\rho t}\left(\int_{0}^{t}\log v_{n,n}\left(s\right)\mbox{d}s-\frac{c}{\rho}x_{n,n}^{2}\left(t\right)\right)\mbox{d}t.
\] Combining this result with relations $\eqref{eq: B min B_1 ultimo}$, $\eqref{eq: new2 conv deb}$, $\eqref{eq: x_star minore x}$ and with Proposition $\ref{prop: conv punt orbits}$, we obtain: \begin{eqnarray*} V\left(x_{0}\right) & = & \lim_{n\to+\infty}\mathcal{B}\left(x_{0};v_{n,n}\right)\leq\lim_{n\to+\infty}\mathcal{B}_{1}\left(x_{0};v_{n,n}\right)\\ & = & \rho\lim_{n\to+\infty}\int_{0}^{+\infty}e^{-\rho t}\left(\int_{0}^{t}\log v_{n,n}\left(s\right)\mbox{d}s-\frac{c}{\rho}x_{n,n}^{2}\left(t\right)\right)\mbox{d}t\\ & \leq & \rho\int_{0}^{+\infty}e^{-\rho t}\limsup_{n\to+\infty}\left(\int_{0}^{t}\log v_{n,n}\left(s\right)\mbox{d}s-\frac{c}{\rho}x_{n,n}^{2}\left(t\right)\right)\mbox{d}t\\ & = & \rho\int_{0}^{+\infty}e^{-\rho t}\left(\int_{0}^{t}\log u_{*}\left(s\right)\mbox{d}s-\frac{c}{\rho}x^{2}\left(t\right)\right)\mbox{d}t\\ & \leq & \rho\int_{0}^{+\infty}e^{-\rho t}\left(\int_{0}^{t}\log u_{*}\left(s\right)\mbox{d}s-\frac{c}{\rho}x_{*}^{2}\left(t\right)\right)\mbox{d}t\\ & = & \mathcal{B}_{1}\left(x_{0};u_{*}\right). \end{eqnarray*} Finally observe that by $\eqref{eq: u_star bound}$, for every $t\geq0$: \[ te^{-\rho t}\log\eta\left(x_{0},t+1\right)\leq e^{-\rho t}\int_{0}^{t}\log u_{*}\left(s\right)\mbox{d}s\leq te^{-\rho t}\log N\left(x_{0},t+1\right), \] which implies that the estimated quantity vanishes for $t\to+\infty$, since $\eta\left(x_{0},t\right)=e^{-L\left(x_{0}\right)t}$ and by $\eqref{eq: logN a infinito}$. Hence $\mathcal{B}_{1}\left(x_{0};u_{*}\right)=\mathcal{B}\left(x_{0};u_{*}\right)$, and this concludes the proof. \end{proof} \section*{Appendix} \addcontentsline{toc}{section}{Appendix} \begin{lemma} \label{lem: reverse fatou}Let $\left(E,\sigma,\mu\right)$ be a measure space, let $f_{n}$ $\left(n\in\mathbb{N}\right)$ and $g$ be $\mu$-measurable functions on $E$, and let $F\subseteq E$ be a full measure set such that: \begin{align*} & \forall n\in\mathbb{N}:f_{n}\leq g\quad\mbox{in }F\\ & \int_{E}g\mbox{d}\mu<+\infty.
\end{align*} Then \[ \limsup_{n\to+\infty}\int_{E}f_{n}\mbox{d}\mu\leq\int_{E}\limsup_{n\to+\infty}f_{n}\mbox{d}\mu. \] \end{lemma} \begin{proof} \textsc{Case I}. $\int_{E}g\mbox{d}\mu=-\infty$. Then \[ \limsup_{n\to+\infty}\int_{E}f_{n}\mbox{d}\mu=-\infty \] and the claim is trivially true. \textsc{Case II}. $\int_{E}g\mbox{d}\mu\in\left(-\infty,+\infty\right)$. The sequence \[ a_{n}:=g-\sup_{k\geq n}f_{k} \] satisfies \[ 0\leq a_{n}\uparrow g-\limsup_{m\to+\infty}f_{m}\quad\mbox{in }F. \] Hence by Monotone Convergence: \begin{equation} \int_{E}\left(g-\sup_{k\geq n}f_{k}\right)\mbox{d}\mu=\int_{E}a_{n}\mbox{d}\mu\uparrow\int_{E}\left(g-\limsup_{m\to+\infty}f_{m}\right)\mbox{d}\mu.\label{eq: 1} \end{equation} Observe that the quantities \begin{align*} & \int_{E}\left(-\sup_{k\geq n}f_{k}\right)\mbox{d}\mu:=\int_{E}\left(g-\sup_{k\geq n}f_{k}\right)\mbox{d}\mu-\int_{E}g\mbox{d}\mu\\ & \int_{E}\left(-\limsup_{m\to+\infty}f_{m}\right)\mbox{d}\mu:=\int_{E}\left(g-\limsup_{m\to+\infty}f_{m}\right)\mbox{d}\mu-\int_{E}g\mbox{d}\mu \end{align*} make sense and belong to $(-\infty,+\infty]$. It follows from $\eqref{eq: 1}$ that: \begin{equation} \lim_{n\to+\infty}\int_{E}\left(-\sup_{k\geq n}f_{k}\right)\mbox{d}\mu=\int_{E}\left(-\limsup_{m\to+\infty}f_{m}\right)\mbox{d}\mu.\label{eq: 2} \end{equation} Indeed, if $\int_{E}\left(-\sup_{k\geq n_{0}}f_{k}\right)\mbox{d}\mu=+\infty$ for some $n_{0}\in\mathbb{N}$, then both $\lim_{n\to+\infty}\int_{E}\left(-\sup_{k\geq n}f_{k}\right)\mbox{d}\mu$ and $\int_{E}\left(-\limsup_{m\to+\infty}f_{m}\right)\mbox{d}\mu$ are $+\infty$.
If $\int_{E}\left(-\sup_{k\geq n}f_{k}\right)\mbox{d}\mu<+\infty$ for every $n\in\mathbb{N}$ and $\int_{E}\left(-\limsup_{m\to+\infty}f_{m}\right)\mbox{d}\mu<+\infty$, then clearly $\eqref{eq: 2}$ follows from $\eqref{eq: 1}$, whilst in case $\int_{E}\left(-\limsup_{m\to+\infty}f_{m}\right)\mbox{d}\mu=+\infty$ we have \begin{eqnarray*} +\infty & = & \int_{E}\left(g-\limsup_{m\to+\infty}f_{m}\right)\mbox{d}\mu=\lim_{n\to+\infty}\int_{E}\left(g-\sup_{k\geq n}f_{k}\right)\mbox{d}\mu\\ & = & \int_{E}g\mbox{d}\mu+\lim_{n\to+\infty}\int_{E}\left(-\sup_{k\geq n}f_{k}\right)\mbox{d}\mu \end{eqnarray*} which implies \[ \lim_{n\to+\infty}\int_{E}\left(-\sup_{k\geq n}f_{k}\right)\mbox{d}\mu=+\infty. \] It follows from $\eqref{eq: 2}$ that \begin{eqnarray*} \inf_{n\in\mathbb{N}}\int_{E}\sup_{k\geq n}f_{k}\mbox{d}\mu & = & \int_{E}\limsup_{m\to+\infty}f_{m}\mbox{d}\mu. \end{eqnarray*} Moreover, it is a consequence of the definition of sup that \[ \limsup_{m\to+\infty}\int_{E}f_{m}\mbox{d}\mu\leq\inf_{n\in\mathbb{N}}\int_{E}\sup_{k\geq n}f_{k}\mbox{d}\mu. \] Combining the last two relations gives the claim. \end{proof} \end{document}
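The reverse Fatou lemma above is easy to sanity-check numerically. The following minimal Python sketch (an illustration, not part of the argument) uses the classical example on $E=[0,1]$ with Lebesgue measure where the inequality is strict: $f_n$ alternates between the indicators of the two halves of $[0,1]$, dominated by $g\equiv 1$.

```python
import numpy as np

# Grid on E = [0, 1] with Lebesgue measure; dominating function g == 1.
x = np.linspace(0.0, 1.0, 10_000, endpoint=False)

# f_n alternates between the indicators of [0, 1/2) and [1/2, 1):
# every integral equals 1/2, but limsup_n f_n == 1 everywhere on E.
def f(n):
    return (x < 0.5).astype(float) if n % 2 == 0 else (x >= 0.5).astype(float)

N = 50
integrals = [f(n).mean() for n in range(N)]   # Riemann sums (|E| = 1)
lhs = max(integrals)                          # limsup of the integrals = 1/2

pointwise_limsup = np.maximum.reduce([f(n) for n in range(N)])
rhs = pointwise_limsup.mean()                 # integral of the limsup = 1

assert lhs < rhs   # the inequality can be strict
```

Every integral equals $1/2$ while the pointwise limsup is identically $1$, so the inequality holds strictly in this example.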
\begin{document} \onecolumn \aistatstitle{Supplementary Material} \renewcommand{\appendixname}{Appendix} \appendixpage \appendix \section{Derivation of Unlabelled Lower Bound Objective (Eqn. \eqref{eqn2})} \label{FirstAppendix} We show here a detailed derivation of equation \eqref{eqn2}. Note from Figure \ref{fig:generationM2} that: \begin{align*} p_{\theta}(x,y,z) &= p_{\theta}(x|z,y)p(y)p(z) \\ q_{\phi}(y,z|x) &= q_{\phi}(z|x,y)q_{\phi}(y|x) \end{align*} The log-likelihood of the data can be written as: \begin{align*} \log p_{\theta}(x) &= \log \sum_{y}\int p_{\theta}(x,y,z)dz \\ &= \log \mathbb{E}_{q_{\phi}(y,z|x)}\; [\frac{p_{\theta}(x,y,z)}{q_{\phi}(y,z|x)}]\\ & \geq \mathbb{E}_{q_{\phi}(y,z|x)}\; \log \; [\frac{p_{\theta}(x|z,y)p(y)p(z)}{q_{\phi}(y|x)q_{\phi}(z|x,y)}] \\ &= \mathbb{E}_{q_{\phi}(y,z|x)}\;[\log p_{\theta}(x|z,y)] - \mathbb{E}_{q_{\phi}(y|x)}\; [\log (\frac{q_{\phi}(y|x)}{p(y)})] - \mathbb{E}_{q_{\phi}(y|x)} \; [\mathbb{E}_{q_{\phi}(z|x,y)} \; \log (\frac{ q_{\phi}(z|x,y)}{p(z)})] \\ &= \mathbb{E}_{q_{\phi}(y,z|x)}[\log p_{\theta}(x|z,y)] {-} KL( q_{\phi}(y|x) || p(y) ) - \mathbb{E}_{q_{\phi}(y|x)} \; [KL \;( q_{\phi}(z|x,y) || p(z))] \\ &= \mathcal{U}(x) \end{align*} The inequality in the third line follows from Jensen's inequality. \section{Derivation of Mutual Information Term from KL Divergence (Eqn. \eqref{firstmi})} \label{SecondAppendix} We now give a detailed derivation of equation \eqref{firstmi}. The data distribution is denoted by $q(x)$.
We also define: \begin{align*} q_{\phi}(y) &= \mathbb{E}_{q(x)}\;[q_{\phi}(y|x)]\\ q_{\phi}(y,x) &= q_{\phi}(y|x) q(x) \end{align*} \pagebreak Now, \begin{align*} \mathbb{E}_{q(x)}[KL \;( q_{\phi}(y|x) || p(y) )] &= \int q(x) \sum_{y} \; \log \frac{q_{\phi}(y|x)}{p(y)} \; q_{\phi}(y|x)dx \\ &= \int q(x)\sum_{y} \; \log \frac{q_{\phi}(y|x) q_{\phi}(y)}{p(y) q_{\phi}(y)}\;q_{\phi}(y|x) dx \\ &= \int q(x) \sum_{y} \; \log \frac{q_{\phi}(y|x)}{q_{\phi}(y)} q_{\phi}(y|x) dx + \int q(x) \sum_{y} \; \log \frac{q_{\phi}(y)}{p(y)} q_{\phi}(y|x) dx \\ &= \int \; \sum_{y}\; q_{\phi}(y,x) \log \frac{q_{\phi}(y|x)}{q_{\phi}(y)}\;dx + \sum_{y} \; \log \frac{q_{\phi}(y)}{p(y)} q_{\phi}(y) \\ &= \int \; \sum_{y}\; q_{\phi}(y,x) \log \frac{q_{\phi}(y|x)q(x)}{q_{\phi}(y)q(x)}\;dx + KL \; (q_{\phi}(y) || p(y)) \\ &= \int \; \sum_{y}\; q_{\phi}(y,x) \log \frac{q_{\phi}(y,x)}{q_{\phi}(y)q(x)}\;dx + KL \; (q_{\phi}(y) || p(y)) \\ &= \mathcal{I}_{\phi}(y;x) + KL \; (q_{\phi}(y) || p(y)) \\ &\geq \mathcal{I}_{\phi}(y;x) \end{align*} \section{Decomposition of Mutual Information (Eqn. \eqref{eqn4})} The mutual information of equation \eqref{firstmi} can be further decomposed as: \begin{align*} \mathcal{I}_{\phi}(y;x) &= \int \; \sum_{y}\; q_{\phi}(y,x) \log \frac{q_{\phi}(y,x)}{q_{\phi}(y)q(x)}\;dx \\ &= \int \; \sum_{y}\; q_{\phi}(y|x)q(x) \log \frac{q_{\phi}(y|x)}{q_{\phi}(y)} \; dx \\ &= \int\; \sum_{y} \; q_{\phi}(y|x)q(x) \log q_{\phi}(y|x) \; dx + \int\; \sum_{y} \; q_{\phi}(y|x)q(x) \log \frac{1}{q_{\phi}(y)} \; dx \\ &= \int\; q(x) \; \sum_{y} \; q_{\phi}(y|x) \log q_{\phi}(y|x)\; dx - \sum_{y} \; q_{\phi}(y) \log q_{\phi}(y) \\ &= - \mathbb{E}_{q(x)}[\mathcal{H}(q_{\phi}(y|x))] + \mathcal{H}(q_{\phi}(y)) \end{align*} \end{document}
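The entropy decomposition of the mutual information is straightforward to verify numerically on a small discrete model. A sketch (not part of the derivation), with a data distribution $q(x)$ over 4 states and a classifier $q_\phi(y|x)$ over 3 labels, both drawn at random purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
qx = rng.dirichlet(np.ones(4))             # data distribution q(x), 4 states
qy_x = rng.dirichlet(np.ones(3), size=4)   # classifier q_phi(y|x), 3 labels

joint = qx[:, None] * qy_x                 # q_phi(y, x)
qy = joint.sum(axis=0)                     # marginal q_phi(y)

# Direct definition: I = sum_{x,y} q(y,x) log[ q(y,x) / (q(y) q(x)) ]
mi_direct = np.sum(joint * np.log(joint / (qy[None, :] * qx[:, None])))

# Decomposition: I = H(q_phi(y)) - E_{q(x)}[ H(q_phi(y|x)) ]
H = lambda p: -np.sum(p * np.log(p))
mi_decomp = H(qy) - np.sum(qx * np.array([H(row) for row in qy_x]))

assert abs(mi_direct - mi_decomp) < 1e-10
```

Both routes give the same value, and the mutual information is nonnegative, consistent with the bound $\mathbb{E}_{q(x)}[KL(q_\phi(y|x)\,||\,p(y))] \geq \mathcal{I}_\phi(y;x)$ derived above.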
\begin{document} \def\REF#1{\par\hangindent\parindent\indent\llap{#1\enspace}\ignorespaces} \noindent \begin{center} {\LARGE\bf Revised Iterative Solution of \\ \vspace{.2cm}Ground State of Double-Well Potential} \vspace*{5mm} {\large Zhao Wei-Qin$^{1,~2}$} \vspace*{11mm} {\small \it 1. China Center of Advanced Science and Technology (CCAST)} {\small \it (World Lab.), P.O. Box 8730, Beijing 100080, China} {\small \it 2. Institute of High Energy Physics, Chinese Academy of Sciences, P. O. Box 918(4-1), Beijing 100039, China} \end{center} \vspace{2cm} \begin{abstract} A revised iterative method, based on a Green function defined by quadratures along a single trajectory, is developed and applied to solve the ground state of the double-well potential. The result is compared to the one based on the original iterative method. The limitation of the asymptotic expansion is also discussed. \end{abstract} \vspace{.5cm} PACS{:~~11.10.Ef,~~03.65.Ge} Key words: iterative solution, asymptotic expansion, double-well potential \section*{\bf 1.
Introduction} \setcounter{section}{1} \setcounter{equation}{0} The double-well potential in one dimension, \begin{eqnarray}\label{e1.1} V(x) = \frac{1}{2} g^2 (x^2-1)^2, \end{eqnarray} is of special interest since it has degenerate minima and can serve as a simple example of the bound-state tunnelling problem in quantum mechanics. However, solving the Schroedinger equation for this potential is a non-perturbative problem due to the tunnelling effect between the two minima. The asymptotic series of the ground state energy is meaningful only for quite large $g$ (e.g. $g\geq 6$). Even for such large $g$ the plateau obtained in the energy expansion series is not necessarily consistent with the exact solution. An effective method to obtain a convergent solution of this problem for any value of $g$ is needed. Recently an iterative solution of the ground state for the one-dimensional double-well potential was obtained[1], based on the Green function method developed in ref.[2]. This Green function is defined along a single trajectory, so that the ground state wave function in N dimensions can be expressed by quadratures along this single trajectory. This makes it possible to develop an iterative method to obtain the ground state wave function, starting from a properly chosen trial function. The convergence of the iterative solution depends very much on the choice of the trial function[1]. However, in the original iterative solution the correction obtained for the trial wave function is in the form of a power expansion, while the bound-state solution should be in the form of an exponential as the coordinate variable approaches infinity. Recently a revised iterative procedure[3] based on the same Green function was developed and applied to solve the anharmonic oscillator and the Stark effect. This method has some advantages compared to the original one.
It not only gives an exponential form for the correction of the trial wave function, but also requires much less time in each iteration due to fewer folds of integration. It is natural to try this method on the double-well potential, which has attracted much attention. In Section 2, a brief introduction is given to the Green function method based on the single-trajectory quadrature. Special discussion is given to the revision of the iterative formula. The revised iterative formula for the double-well potential is given in Section 3, together with the trial function and the boundary condition for the lowest even state. The numerical results based on the revised iterative formula and the original iterative method are collected in Section 4, together with a comparison to the asymptotic expansion. A compact expression of the asymptotic expansion is derived in Appendix A. Finally some discussions are given at the end. \section*{\bf 2. Green Function and the Revised Iterative Solution} \setcounter{section}{2} \setcounter{equation}{0} \vspace{.5cm} For a particle with unit mass, moving in an N-dimensional unperturbed potential $V_0({\bf q})$, the ground state wave function $\Phi({\bf q})$ satisfies the following Schroedinger equation: \begin{eqnarray}\label{e2.1} H\Phi({\bf q}) = E \Phi({\bf q}), \end{eqnarray} where \begin{eqnarray}\label{e2.2} H &=& T+V_0({\bf q}) = -\frac{1}{2} {\bf \nabla}^2 + V_0({\bf q}). \end{eqnarray} Assume the solution of eq.(\ref{e2.1}) can be expressed as \begin{eqnarray}\label{e2.3} \Phi({\bf q}) &=& e^{-S({\bf q})}. \end{eqnarray} Introduce a perturbed potential $U({\bf q})$ and assume \begin{eqnarray}\label{e2.4} V({\bf q})&=&V_0({\bf q})+U({\bf q})\\ {\cal H} &=& T+V({\bf q})= -\frac{1}{2} {\bf \nabla}^2 + V({\bf q}). \end{eqnarray} Define another wave function $\Psi({\bf q})$ satisfying the Schroedinger equation \begin{eqnarray}\label{e2.6} {\cal H}\Psi({\bf q}) &=& {\cal E} \Psi({\bf q})\\ {\cal E} &=& E+\Delta.
\end{eqnarray} Let \begin{eqnarray}\label{e2.8} \Psi({\bf q}) &=& e^{-S({\bf q})-\tau ({\bf q})}. \end{eqnarray} The equation for $\tau$ and $\Delta$ can be derived easily[2]: \begin{eqnarray}\label{e2.9} {\bf \nabla}S\cdot {\bf \nabla}\tau + \frac{1}{2}[({\bf \nabla}\tau)^2 - {\bf \nabla}^2 \tau] = (U - \Delta). \end{eqnarray} Consider the coordinate transformation \begin{eqnarray}\label{e2.10} (q_1, q_2, q_3, \cdots, q_N) \rightarrow (S, \alpha_1, \alpha_2, \cdots, \alpha_{N-1}) = (S, \alpha) \end{eqnarray} with $\alpha=(\alpha_1, \alpha_2, \cdots, \alpha_{N-1})$ denoting the set of $N-1$ orthogonal angular coordinates satisfying the condition \begin{eqnarray}\label{e2.11} {\bf \nabla} S\cdot {\bf \nabla} \alpha_i = 0, \end{eqnarray} for $i=1,~2,~\cdots~,N-1$. Some useful quantities for this coordinate transformation[2] are listed in Appendix B. Similarly to the discussion in Ref.[2], introduce the $\theta$-function in $S$-space: \begin{eqnarray}\label{e2.12} \theta(S- {\overline S})= \left\{\begin{array}{cc} 1 & ~~~~~~~~{\rm if}\hspace{4mm} 0 \leq {\overline S} < S \\ 0 & ~~~~~~~~{\rm if} \hspace{4mm} 0 \leq S < {\overline S} \end{array} \right. \end{eqnarray} and define \begin{eqnarray}\label{e2.13} C = \theta [({\bf \nabla}S)^2]^{-1}=\theta h_S^2. \end{eqnarray} Using \begin{eqnarray}\label{e2.14} {\bf \nabla} S \cdot {\bf \nabla} C = 1, \end{eqnarray} it is easy to derive the following equation \begin{eqnarray}\label{e2.15} \tau = (1+CT)^{-1} C [(U-\Delta)-\frac{1}{2}({\bf \nabla}\tau)^2]. \end{eqnarray} When the N-dimensional variable ${\bf q}$ is transformed into $(S,~\alpha)$, $T=-\frac{1}{2}{\bf \nabla}^2$ can be decomposed into two parts: \begin{eqnarray}\label{e2.16} T = T_S + T_{\alpha}, \end{eqnarray} where $T_S$ and $T_{\alpha}$ contain only differentiations with respect to $S$ and $\alpha$, respectively. The detailed expressions of $T_S$ and $T_{\alpha}$ are given in Appendix B.
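Equation (2.9) can be checked symbolically in the simplest one-dimensional case, where ${\bf \nabla}$ reduces to $d/dx$. The sketch below (an illustration, not part of the derivation) takes $S(x)=x^2/2$, so that $\Phi=e^{-S}$ is the harmonic-oscillator ground state with $E=1/2$, picks an arbitrary correction $\tau(x)=x^2/10$, defines $U-\Delta$ through the one-dimensional form of (2.9), and verifies that $\Psi=e^{-S-\tau}$ then solves the perturbed Schroedinger equation with energy $E+\Delta$:

```python
import sympy as sp

x, Delta = sp.symbols('x Delta')

S = x**2 / 2            # Phi = exp(-S) is the harmonic ground state
E = sp.Rational(1, 2)
V0 = (sp.diff(S, x)**2 - sp.diff(S, x, 2)) / 2 + E   # recovers V0 = x^2/2

tau = x**2 / 10         # an arbitrarily chosen correction tau
# Define U - Delta through eq. (2.9): S' tau' + [ (tau')^2 - tau'' ] / 2
U = sp.diff(S, x) * sp.diff(tau, x) \
    + (sp.diff(tau, x)**2 - sp.diff(tau, x, 2)) / 2 + Delta

# Then Psi = exp(-S - tau) must satisfy (T + V0 + U) Psi = (E + Delta) Psi.
Psi = sp.exp(-S - tau)
residual = sp.simplify(-sp.diff(Psi, x, 2) / 2 + (V0 + U) * Psi - (E + Delta) * Psi)
assert residual == 0
```

The residual vanishes identically, and the value of $\Delta$ drops out, confirming that (2.9) is exactly the condition for $\Psi$ to be an eigenfunction of ${\cal H}$ with eigenvalue $E+\Delta$.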
Now another Green function can be defined as[2] \begin{eqnarray}\label{e2.17} \overline{D} &\equiv& -2\theta e^{2 S} \frac{h_S}{h_{\alpha}} \theta e^{-2 S} h_S h_{\alpha} \end{eqnarray} and it is related to $C$ in the following way[2]: \begin{eqnarray}\label{e2.18} (1+ \overline{D}T_{\alpha})^{-1}\overline{D}=(1+CT)^{-1}C. \end{eqnarray} Therefore, from (\ref{e2.15}), we have \begin{eqnarray}\label{e2.19} \tau = (1+\overline{D}T_\alpha)^{-1} \overline{D} [(U-\Delta)-\frac{1}{2}({\bf \nabla}\tau)^2]. \end{eqnarray} The explicit expression of $\tau$ based on (\ref{e2.19}) is \begin{eqnarray}\label{e2.20} \tau =-2\int_{0}^{S}e^{2S'} \frac{h_{S' }}{h_{\alpha}}dS' \int_{0}^{S'}e^{-2S''} h_{S''} h_{\alpha}dS''(1+T_\alpha \overline{D})^{-1} [(U-\Delta)-\frac{1}{2}({\bf \nabla}\tau)^2]. \end{eqnarray} Therefore, we have \begin{eqnarray}\label{e2.21} -\frac{1}{2} \frac{h_{\alpha} }{h_{S} } e^{-2S} \frac{\partial \tau(S,~\alpha)}{\partial S} =\int_{0}^{S}e^{-2S'} h_{S'} h_{\alpha}dS'(1+T_\alpha \overline{D})^{-1} [(U-\Delta)-\frac{1}{2}({\bf \nabla}\tau)^2]. \end{eqnarray} The left-hand side of Eq.(\ref{e2.21}) approaches $0$ as $S \rightarrow \infty$, and so does the right-hand side, i.e., \begin{eqnarray}\label{e2.22} \int_{0}^\infty e^{-2S} h_{S} h_{\alpha}dS(1+T_\alpha \overline{D})^{-1} [(U-\Delta)-\frac{1}{2}({\bf \nabla}\tau)^2]=0, \end{eqnarray} which holds for all $\alpha$. Integrating over $d\alpha=\Pi_{i=1}^{N-1}d\alpha_i$ and because of \begin{eqnarray}\label{e2.23} h_{S} h_{\alpha}T_\alpha=-\frac{1}{2} \sum_{j=1}^{N-1} \frac{\partial}{\partial \alpha_j} \frac{h_{S} h_{\alpha}}{h_j^2}\frac{\partial}{\partial \alpha_j} ~~~{\rm and}~~~\int d \alpha h_{S} h_{\alpha}T_\alpha \tau=0, \end{eqnarray} we derive \begin{eqnarray}\label{e2.24} \int h_{S} h_{\alpha}d\alpha dS e^{-2S} [(U-\Delta)-\frac{1}{2}({\bf \nabla}\tau)^2]=0.
\end{eqnarray} Denoting $d {\bf q}=h_{S} h_{\alpha}d\alpha dS $, from Eq.(\ref{e2.3}), we reach a new expression for the perturbative energy \begin{eqnarray}\label{e2.25} \Delta= \frac{\int d{\bf q}~\Phi^2 [U-\frac{1}{2}({\bf \nabla}\tau)^2]}{\int d{\bf q}~\Phi^2}~. \end{eqnarray} Based on Eqs. (\ref{e2.25}) and (\ref{e2.15}) or (\ref{e2.19}) we have the new iteration series \begin{eqnarray}\label{e2.26} \Delta_n&=&\frac{\int d{\bf q}~\Phi^2 [U-\frac{1}{2}({\bf \nabla}\tau_{n-1})^2]}{\int d{\bf q}~\Phi^2}~,\nonumber\\ \tau_n &=& (1+\overline{D}T_\alpha)^{-1} \overline{D} [(U-\Delta_n)-\frac{1}{2}({\bf \nabla}\tau_{n-1})^2]\\ {\rm or}&&\nonumber\\ \tau_n &=& (1+CT)^{-1} C [(U-\Delta_n)-\frac{1}{2} ({\bf \nabla} \tau_{n-1})^2]\nonumber. \end{eqnarray} For later convenience this new iteration series is named the $\tau$-iteration in this paper. Let us compare the above $\tau$-iteration with the original one, derived from the equations for $f=e^{-\tau}$ and $\Delta$ in Ref.[2], which is named the $f$-iteration in this paper: \begin{eqnarray}\label{e2.27} \Delta_n &=& \frac{\int d{\bf q}~\Phi^2~U~f_{n-1}} {\int d{\bf q}~\Phi^2~f_{n-1}}~\nonumber\\ f_n &=& 1+(1+\overline{D}T_\alpha)^{-1} \overline{D} (-U+\Delta_n)f_{n-1}\\ {\rm or}&&\nonumber\\ f_n &=& 1+(1+CT)^{-1} C (-U+\Delta_n)f_{n-1}.\nonumber \end{eqnarray} The $\tau$-iteration has several advantages: 1) It directly gives an exponential form for the perturbed wave function $e^{-\tau}$. This result is consistent with those obtained using the series expansion of $\{\bf S_i\}$ and $ \{E_i\}$ (see Section 1 of Ref.[1]); 2) The iteration process in this formula is more transparent; 3) The calculation of the perturbation energy is much simpler. \vspace{.5cm} \section*{\bf 3. 
Revised Iterative Formula for the Double-well \\ Potential} \setcounter{section}{3} \setcounter{equation}{0} \vspace{.5cm} In this section the revised iterative formula is applied to solve the ground state for the double-well potential. For the Hamiltonian $H=T+V$ let us introduce the wave function for the lowest even eigenstate as $\psi_{ev}$, satisfying \begin{eqnarray}\label{e3.1} H \psi_{ev} &=& E_{ev} \psi_{ev}. \end{eqnarray} In the following we are going to introduce the trial wave function $\phi_{ev}$ for this state, satisfying \begin{eqnarray}\label{e3.2} (H + w_{ev}) \phi_{ev} &=& (E_{ev} +{\cal E}_{ev}) \phi_{ev}=g \phi_{ev}. \end{eqnarray} Assuming \begin{eqnarray}\label{e3.3} \psi_{ev} &=& \phi_{ev} e^{-\tau_{ev}}, \end{eqnarray} the final energy and wave function $\{\psi_{ev},~E_{ev}\}$ can be obtained by solving the corresponding equations for $\{\tau_{ev},~{\cal E}_{ev}\}$ based on the revised iteration method introduced in Section 2. The key to choosing a proper trial function is to satisfy the necessary boundary conditions of the state. 
For the ground state, namely the lowest even state, we have \begin{eqnarray}\label{e3.4} \psi_{ev}(-x) &=& \psi_{ev}(x)\nonumber\\ \psi'_{ev}(0) &=& 0\\ \psi_{ev}(\infty) &=& 0.\nonumber \end{eqnarray} The trial wave function should satisfy similar boundary conditions, namely, \begin{eqnarray}\label{e3.5} \phi_{ev}(-x) &=& \phi_{ev}(x)\nonumber\\ \phi'_{ev}(0) &=& 0\\ \phi_{ev}(\infty) &=& 0.\nonumber \end{eqnarray} Following the steps in Section 2 of Ref.[1], we introduce, for $x \geq 0$, \begin{eqnarray}\label{e3.6} g S_0(x) & \equiv & \frac{g}{3}(x-1)^2(x+2),\nonumber\\ S_1(x) &\equiv & \ln\frac{x+1}{2},\\ \phi_+(x) = \phi_+(-x) &\equiv& e^{-g S_0(x)-S_1(x)} = e^{-g S_0(x)}(\frac{2}{1+x}), \nonumber\\ \phi_-(x) &\equiv & e^{-gS_0(-x)-S_1(x)}= e^{-\frac{4}{3}g}e^{+gS_0(x)}(\frac{2}{1+x}).\nonumber \end{eqnarray} It is easy to see that $\phi_+(x)$ satisfies \begin{eqnarray}\label{e3.7} (T+V+u)\phi_+=g\phi_+, \end{eqnarray} where \begin{eqnarray}\label{e3.8} V(x) &=& \frac{1}{2}g^2 (x^2-1)^2\nonumber\\ u(x) &=& \frac{1}{(1+x)^2}. \end{eqnarray} Following the necessary boundary conditions (\ref{e3.5}), we can choose the trial function as \begin{eqnarray}\label{e3.9} \phi_{ev}(x) = \phi_{ev}(-x) \equiv \left\{\begin{array}{ccc} \phi_+(x) + \frac{g-1}{g+1} \phi_-(x),~~~~~~~{\sf for}~~0 \leq x <1\\ (1+\frac{g-1}{g+1}e^{-\frac{4}{3}g})\phi_+(x),~~~~~~{\sf for}~~x>1. \end{array} \right. \end{eqnarray} It is easy to prove that the above trial function satisfies Eq.(\ref{e3.2}) with \begin{eqnarray}\label{e3.10} w_{ev}(x) &=& w_{ev}(-x)=u(x) + \hat{g}_{ev} (x),~~~~~~ {\sf for}~~~ x \geq 0\\ \hat{g}_{ev} (x) &=& \left\{\begin{array}{ll} 2g\frac{(g-1) e^{2g S_0(x)-\frac{4}{3} g}}{(g+1)+(g-1)e^{2g S_0(x)-\frac{4}{3} g}}, &~~~~~~{\sf for}~~0 \leq x<1\\ 0&~~~~~~{\sf for}~~x>1. \end{array}\nonumber \right. 
\end{eqnarray} Although the potential $V+w_{ev}$ is not continuous at $x=1$, it can be proved that $\phi_{ev}$ and $\phi'_{ev}$ are continuous at any $x$, including $x=1$ and $x=0$. Now introduce the $\theta$-function in $x$-space: \begin{eqnarray}\label{e3.11} (x|\theta|y)= \left\{\begin{array}{ccc} 0~~~~~&{\sf for}~~~~~&x>y,\\ -1~~~~~&{\sf for}~~~~~&x<y \end{array} \right. \end{eqnarray} and define the Green function \begin{eqnarray}\label{e3.12} \overline{D}_{ev}~ &\equiv& -2 \theta \phi_{ev}^{-2}\theta\phi_{ev}^2. \end{eqnarray} Following steps similar to those in Section 2, we obtain the equations for $\{\tau_{ev},~{\cal E}_{ev}\}$ as follows: \begin{eqnarray}\label{e3.13} \tau_{ev} &=& \overline{D}_{ev}[-(w_{ev} - {\cal E}_{ev})-\frac{1}{2}(\tau'_{ev})^2],\\ {\cal E}_{ev} &=& \frac{\int\limits_0^{\infty}\phi^2_{ev}~ (w_{ev}+\frac{1}{2}~(\tau'_{ev})^2) dx} {\int\limits_0^{\infty} \phi^2_{ev} dx}.\nonumber \end{eqnarray} The solution of the ground state is \begin{eqnarray}\label{e3.14} \psi_{ev} &=& \phi_{ev}~ e^{-\tau_{ev}},~~~~~E_{ev}=g-{\cal E}_{ev}. \end{eqnarray} A similar procedure can be followed by introducing \begin{eqnarray}\label{e3.15} \overline{D}_{+}~ &\equiv& -2 \theta \phi_{+}^{-2}\theta\phi_{+}^2 \end{eqnarray} and \begin{eqnarray}\label{e3.16} \tau_{+} &=& \overline{D}_{+}[-(u - {\cal E}_{+})-\frac{1}{2}(\tau'_{+})^2],\\ {\cal E}_{+} &=& \frac{\int\limits_0^{\infty}\phi^2_{+}~ (u+ \frac{1}{2}~(\tau'_{+})^2) dx} {\int\limits_0^{\infty} \phi^2_{+}~ dx}.\nonumber \end{eqnarray} It should be noticed that although $\psi_+= \phi_+~ e^{-\tau_+}$ does satisfy the equation \begin{eqnarray}\label{e3.17} H\psi_+=E_+\psi_+, \end{eqnarray} with $E_+ = g - {\cal E}_{+}$, the solution $\psi_+$ is not an eigenstate of the Hamiltonian $H=T+V$, because neither the trial function $\phi_+$ nor the solution $\psi_+$ satisfies the necessary boundary conditions. 
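The eigenvalue relation (\ref{e3.7}) for the trial function $\phi_+$ of (\ref{e3.6}) can be checked symbolically. The following sketch (Python with the sympy library, used here purely as an illustration; the variable names are ad hoc) verifies that $(T+V+u)\phi_+ = g\phi_+$ with $T=-\frac{1}{2}\frac{d^2}{dx^2}$:

```python
import sympy as sp

x, g = sp.symbols('x g', positive=True)

# Trial function phi_+ of Eq. (3.6) and the potentials of Eq. (3.8)
S0 = (x - 1)**2 * (x + 2) / 3
S1 = sp.log((x + 1) / 2)
phi = sp.exp(-g * S0 - S1)
V = sp.Rational(1, 2) * g**2 * (x**2 - 1)**2
u = 1 / (1 + x)**2

# (T + V + u) phi_+ should equal g phi_+
lhs = -sp.diff(phi, x, 2) / 2 + (V + u) * phi
residual = sp.simplify(lhs / phi - g)
print(residual)
```

The residual reduces to zero identically in $x$ and $g$, confirming (\ref{e3.7}).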
However, we would still like to keep this solution here, since an analytic expression of $E_+$ and $\psi_+$ has been obtained in terms of the asymptotic power expansion in $1/g$ in Ref.[1]. This makes it possible to check the accuracy of the iteration procedure. In Appendix A compact expressions of the asymptotic power expansions of $E_+$ and $\tau'_+$ are derived. Based on the explicit integral expressions of $\overline{D}_{ev}$ and $\overline{D}_+$, taking derivatives of $\tau_{ev}$ and $\tau_+$ in the first equations of (\ref{e3.13}) and (\ref{e3.16}), we obtain \begin{eqnarray}\label{e3.18} \tau'_{ev}(x) &=& 2 \phi^{-2}_{ev}(x)\int\limits_0^{x}\phi^2_{ev}(y) [(w_{ev}(y) - {\cal E}_{ev})+\frac{1}{2}(\tau'_{ev}(y))^2]~dy,\\ {\cal E}_{ev} &=& \frac{\int\limits_0^{\infty}\phi^2_{ev}(x)~ (w_{ev}(x)+\frac{1}{2}~(\tau'_{ev}(x))^2) dx} {\int\limits_0^{\infty} \phi^2_{ev}(x)~ dx}\nonumber \end{eqnarray} and \begin{eqnarray}\label{e3.19} \tau'_{+}(x) &=& 2 \phi^{-2}_{+}(x) \int\limits_0^{x}\phi^2_{+}(y) [(u(y) - {\cal E}_{+})+\frac{1}{2}(\tau'_{+}(y))^2]~dy,\\ {\cal E}_+ &=& \frac{\int\limits_0^{\infty}\phi^2_{+}(x)~ (u(x)+\frac{1}{2}~(\tau'_{+}(x))^2) dx} {\int\limits_0^{\infty} \phi^2_{+}(x)~ dx}.\nonumber \end{eqnarray} The two sets of equations (\ref{e3.18}) and (\ref{e3.19}) are for the pairs $\{\tau'_{ev},~{\cal E}_{ev}\}$ and $\{\tau'_+,~{\cal E}_+\}$, which can be solved iteratively in the following way: Introducing the initial conditions \begin{eqnarray}\label{e3.20} {\cal E}_{ev,0} &=& 0,~~~~~~~~~~\tau'_{ev,0}=0 \end{eqnarray} and \begin{eqnarray}\label{e3.21} {\cal E}_{+,0} &=& 0,~~~~~~~~~~\tau'_{+,0}=0, \end{eqnarray} we have, for $n=1$, \begin{eqnarray}\label{e3.22} {\cal E}_{ev,1} &=& \frac{\int\limits_0^{\infty}\phi^2_{ev}(x)~w_{ev}(x) dx} {\int\limits_0^{\infty} \phi^2_{ev}(x)dx}\\ \tau'_{ev,1}(x) &=& 2 \phi^{-2}_{ev}(x) \int\limits_0^{x}\phi^2_{ev}(y) (w_{ev}(y) - {\cal E}_{ev,1})dy\nonumber\\ &=& -2 \phi^{-2}_{ev}(x) \int\limits_{x}^{\infty}\phi^2_{ev}(y) 
(w_{ev}(y) - {\cal E}_{ev,1})dy\nonumber \end{eqnarray} and \begin{eqnarray}\label{e3.23} {\cal E}_{+,1} &=& \frac{\int\limits_0^{\infty}\phi^2_+(x)~u(x) dx} {\int\limits_0^{\infty} \phi^2_+(x) dx},\\ \tau'_{+,1}(x) &=& 2 \phi^{-2}_+(x)\int\limits_0^{x}\phi^2_+(y)(u(y) - {\cal E}_{+,1}) dy\nonumber\\ &=& -2 \phi^{-2}_+(x) \int\limits_{x}^{\infty}\phi^2_+(y)(u(y) - {\cal E}_{+,1}) dy.\nonumber \end{eqnarray} For $n>1$ we have \begin{eqnarray}\label{e3.24} {\cal E}_{ev,n} &=& {\cal E}_{ev,1}+\frac{\int\limits_0^{\infty}\phi^2_{ev}(x)~ \frac{1}{2}(\tau'_{ev,n-1}(x))^2 dx} {\int\limits_0^{\infty} \phi^2_{ev}(x) dx},\\ \tau'_{ev,n}(x) &=& \tau'_{ev,1}(x)+2 \phi^{-2}_{ev}(x) \int\limits_0^{x}\phi^2_{ev}(y) [({\cal E}_{ev,1} - {\cal E}_{ev,n})+\frac{1}{2}(\tau'_{ev,n-1}(y))^2]dy~~~~~~\nonumber\\ &=& \tau'_{ev,1}(x)-2 \phi^{-2}_{ev}(x) \int\limits_{x}^{\infty}\phi^2_{ev}(y) [({\cal E}_{ev,1} - {\cal E}_{ev,n}) +\frac{1}{2}(\tau'_{ev,n-1}(y))^2]dy\nonumber \end{eqnarray} and \begin{eqnarray}\label{e3.25} {\cal E}_{+,n} &=& {\cal E}_{+,1}+\frac{\int\limits_0^{\infty}\phi^2_{+}(x)~ \frac{1}{2}(\tau'_{+,n-1}(x))^2 dx} {\int\limits_0^{\infty} \phi^2_{+}(x)~ dx},\\ \tau'_{+,n}(x) &=& \tau'_{+,1}(x)+2 \phi^{-2}_+(x) \int\limits_0^{x}\phi^2_+(y)[({\cal E}_{+,1} - {\cal E}_{+,n}) +\frac{1}{2}(\tau'_{+,n-1}(y))^2]dy\nonumber\\ &=& \tau'_{+,1}(x)-2 \phi^{-2}_+(x) \int\limits_{x}^{\infty}\phi^2_+(y)[({\cal E}_{+,1} - {\cal E}_{+,n}) +\frac{1}{2}(\tau'_{+,n-1}(y))^2]dy.\nonumber \end{eqnarray} For comparison we also list below the $f$-iteration formula for the double-well potential, based on (\ref{e2.27}) [2]: \begin{eqnarray}\label{e3.26} f_n(x) &=& 1 -2\int\limits_x^{\infty} \phi^{-2}(y) dy \int\limits_y^{\infty} \phi^2(z) (w(z) - {\cal E}_n)f_{n-1}(z) dz\\ {\cal E}_{n} &=& \int\limits_0^{\infty} \phi^2(x)f_{n-1}(x) w(x) dx \bigg/ \int\limits_0^{\infty}\phi^2(x) f_{n-1}(x) dx,\nonumber \end{eqnarray} where $f_n$ and ${\cal E}_n$ could be either for the even ground state "$ev$" 
or the "$+$" state. In the following section we are going to show the iterative results of ${\cal E}_{+,n}$ and ${\cal E}_{ev,n}$ based on the $\tau$-iteration formulae Eqs.(\ref{e3.24}) and (\ref{e3.25}), together with the results from the $f$-iteration formula (\ref{e3.26}). The obtained ${\cal E}_{+,n}$ is also compared to the asymptotic expansion for different $g$, obtained from the compact expression in Appendix A. \section*{\bf 4. Numerical Results and Discussions} \setcounter{section}{4} \setcounter{equation}{0} \vspace{.5cm} Now let us look at the results based on the $\tau$-iteration formulae (\ref{e3.24}) and (\ref{e3.25}). Starting from the trial wave functions $\phi_{ev}$ and $\phi_{+}$ defined in (\ref{e3.9}) and (\ref{e3.6}), the energies ${\cal E}_{ev}$ and ${\cal E}_+$ after the first 4 steps of iteration with 5-fold integrations are listed in Tables 1 and 2, together with the energies $E_{ev,5}=g-{\cal E}_{ev,5}$ and $E_{+,5}=g-{\cal E}_{+,5}$. In Ref.[1] a convergent iteration method to solve the lowest states of the double-well potential, i.e. the $f$-iteration, has been given. However, no numerical results have been provided. For comparison the numerical results for ${\cal E}_{ev}$ and ${\cal E}_+$ based on the $f$-iteration method are also listed in Tables 1 and 2. For the same number of folds of integration the $f$-iteration can only reach a lower accuracy than the $\tau$-iteration, because each iteration step for $\tau'$ needs only a one-fold integration, while each iteration step for $f$ depends on two-fold integrations. The accuracy of the obtained energies is higher when $g$ becomes larger. \vspace{1cm} Table 1. ${\cal E}_{ev,n}$ and $E_{ev,5}=g-{\cal E}_{ev,5}$ \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline $g$ & $n$ & 1 & 2 & 3 & 4 & 5 & $E_{ev,5}$\\ \hline 0.05 & $\tau$-iter. & -0.0341 & -0.0172 & -0.0158 & -0.0163 & -0.0164 & 0.0664\\ \hline 0.1 & $\tau$-iter. 
& -0.0118 & -0.0022 & -0.0016 & -0.0017 & -0.0017 & 0.1017\\ \hline 0.3 & $\tau$-iter. & 0.0963 & 0.0973 & 0.0973 & 0.0973 & 0.0973 & 0.2027\\ \hline 0.5 & $\tau$-iter. & 0.2035 & 0.2060 & 0.2060 & 0.2060 & 0.2060 & 0.2940\\ \hline 0.5 & $f$-iter. & 0.2036 & 0.2056 & 0.2060 & &&\\ \hline 1 & $\tau$-iter. & 0.4135 & 0.4310 & 0.4312 & 0.4311 & 0.4311 & 0.5689\\ \hline 1 & $f$-iter. & 0.4135 & 0.4267 & 0.4302 & &&\\ \hline 3 & $\tau$-iter. & 0.4757 & 0.5105 & 0.5166 & 0.5173 & 0.5173 & 2.4827 \\ \hline 3 & $f$-iter. & 0.4757 & 0.5053 & 0.5141 & &&\\ \hline 6 & $\tau$-iter. & 0.29204 & 0.29399 & 0.29420 & 0.29422 & 0.29422 & 5.70578\\ \hline 6 & $f$-iter. & 0.29204 & 0.29393 & 0.29419 & &&\\ \hline 7 & $\tau$-iter. & 0.27884 & 0.27957 & 0.27962 & 0.27963 & 0.27963 & 6.72037\\ \hline 7 & $f$-iter. & 0.27884 & 0.27955 & 0.27961 & &&\\ \hline 8 & $\tau$-iter. & 0.27231 & 0.27265 & 0.27266 & 0.27266 & 0.27266 & 7.72734 \\ \hline 8 & $f$-iter. & 0.27231 & 0.27264 & 0.27266 & &&\\ \hline \end{tabular} Table 2. ${\cal E}_{+,n}$ and $E_{+,5}=g-{\cal E}_{+,5}$ \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline $g$ & $n$ & 1 & 2 & 3 & 4 & 5 & $E_{+,5}$\\ \hline 1 & $\tau$-iter. & 0.4135 & 0.4310 & 0.4312 & 0.4311 & 0.4311 & 0.5689\\ \hline 1 & $f$-iter. & 0.4135 & 0.4267 & 0.4302 & & &\\ \hline 3 & $\tau$-iter. & 0.3221 & 0.3257 & 0.3258 & 0.3258 & 0.3258 & 2.6742\\ \hline 3 & $f$-iter. & 0.3221& 0.3254 & 0.3257 & & &\\ \hline 6 & $\tau$-iter. & 0.27989 & 0.28040 & 0.28041 & 0.28041 & 0.28041 & 5.71959 \\ \hline 6 & $f$-iter. & 0.27989 & 0.28039 & 0.28041 & &&\\ \hline 7 & $\tau$-iter. & 0.27461 & 0.27494 & 0.27494 & 0.27494 & 0.27494 & 6.72506 \\ \hline 7 & $f$-iter. & 0.27461 & 0.27493 & 0.27494 & &&\\ \hline 8 & $\tau$-iter. & 0.27091 & 0.27113 & 0.27113 & 0.27113 & 0.27113 & 7.72887 \\ \hline 8 & $f$-iter. 
& 0.27091 & 0.27113 & 0.27113 & &&\\ \hline \end{tabular} \vspace{1cm} It is shown clearly that the obtained energies $E_{ev}$ and $E_+$ are lower than $g$ and that $E_{ev}<E_+$, which is reasonable. When $g$ increases the two energies become very close to each other, and the second iteration step already gives quite accurate results. It is interesting to notice that for $g=1$ the trial functions $\phi_+$ and $\phi_{ev}$ are the same; therefore, starting from $\phi_+$, the $\psi_+$ obtained by iteration is the exact ground state wave function for $g=1$. \vspace{1cm} Table 3. ${\cal E}_{ev}$ and $E_{ev}=g-{\cal E}_{ev}$ \begin{tabular}{|c|c|c||c|c|c|} \hline $g$ & ${\cal E}_{ev}$ & $E_{ev}$ & $g$ & ${\cal E}_{ev}$ & $E_{ev}$ \\ \hline 0.05 & -0.0164 & 0.0664 & 2.2 & 0.5951 & 1.6049 \\ \hline 0.1 & -0.0017 & 0.1017 & 2.5 & 0.5745 & 1.9255 \\ \hline 0.3 & 0.0973 & 0.2027 & 2.7 & 0.5539 & 2.1461 \\ \hline 0.5 & 0.2060 & 0.2940 & 3.0 & 0.5173 & 2.4827 \\ \hline 0.7 & 0.3065 & 0.3935 & 4.0 & 0.3984 & 3.6016 \\ \hline 1.0 & 0.4311 & 0.5689 & 5.0 & 0.3273 & 4.6727 \\ \hline 1.5 & 0.5598 & 0.9402 & 6.0 & 0.29422 & 5.70578 \\ \hline 1.7 & 0.5851 & 1.1149 & 7.0 & 0.27963 & 6.72037 \\ \hline 2.0 & 0.5990 & 1.4010 & 8.0 & 0.27266 & 7.72734 \\ \hline \end{tabular} \vspace{1cm} Table 3 lists the obtained ${\cal E}_{ev}$ for different $g$ based on the $\tau$-iteration. The obtained ${\cal E}_{ev}$ is negative for very small $g$ and increases when $g$ increases from $0.05$ to $2.0$, then decreases when $g$ increases further. However, the energy of the ground state $E_{ev}=g-{\cal E}_{ev}$ increases monotonically with increasing $g$. Table 4. 
${\cal E}_{+}$, $E_+=g-{\cal E}_{+}$ and ${\cal E}_{+}^N=\sum\limits_{m=0}^N \epsilon_{m+1}/g^m$ \begin{tabular}{|c|c|c|c|c|c|c|} \hline $g$ & ${\cal E}_+$ & $E_+$ & $N_{min}$ & $N_{max}$ & ${\cal E}_+^N$ & $O(e^{-\frac{4}{3}g})$ \\ \hline 1 & 0.4311 & 0.5689 & - & - & - & $\sim$ 0.3\\ \hline 2 & 0.3664 & 1.6336 & - & - & - & $\sim$ 0.07 \\ \hline 3 & 0.3258 & 2.6742 & - & - & - & $\sim$ 0.02 \\ \hline 4 & 0.3024 & 3.6976 & - & - & - & $\sim$ 0.005 \\ \hline 5 & 0.2888 & 4.7112 & - & - & - & $\sim$ 0.001 \\ \hline 6 & 0.28041 & 5.71959 & 10 & 17 & 0.2807 & $\sim~ 3\times 10^{-4}$ \\ \hline 7 & 0.27494 & 6.72506 & 14 & 22 & 0.27501 & $\sim~ 4\times 10^{-5}$ \\ \hline 8 & 0.27113 & 7.72887 & 12 & 34 & 0.27115 & $\sim~ 2\times 10^{-5}$ \\ \hline 9 & 0.268336 & 8.731664 & 11 & 35 & 0.268339 & $\sim 6\times 10^{-6}$ \\ \hline \end{tabular} \vspace{1cm} The energies ${\cal E}_+$ and $E_+$ are given in Table 4, together with ${\cal E}_+^N$ calculated from the asymptotic power expansion in $1/g$. It can be seen that the asymptotic expansion is meaningful only when $g$ is large enough, say $g\geq 6$. For such large $g$ the values of ${\cal E}_+$ obtained from the iteration and from the power expansion are comparable up to a quite accurate level. The power expansion of ${\cal E}_+$ in $1/g$ is an asymptotic one. For a fixed and large enough $g$-value the summation up to a certain number of terms becomes stable. When increasing the number of summed terms further, a plateau of the energy ${\cal E}_+^N$ (within the number of summed terms $N_{min}<N<N_{max}$ shown in Table 4) is obtained, which gives the $E_+$-value accurate to a certain level. However, beyond a certain number of terms ($N>N_{max}$) the result becomes unstable and meaningless. It should be noticed that the asymptotic expansion result within the plateau region does not give the accurate value of the energy. 
It differs from the iteration result by terms of the order of $e^{-\frac{4}{3}g}$, which sets the limit of the accuracy of the asymptotic expansion. Recently it has been proved [5] that the $f$-iteration in one-dimensional problems is convergent if the trial function is properly chosen to have a finite perturbed potential $w(x)$ satisfying the conditions $w(x)>0$, $w'(x)<0$ and $w(\infty)=0$. There is no restriction on the magnitude of $w(x)$. For the double-well potential, $w_{ev}(x)$ defined in (\ref{e3.10}) satisfies these conditions when $g\geq 1$. Our numerical results show that both the $f$- and the $\tau$-iteration are convergent for $g\geq 1$, although it is not an easy task to prove the convergence of the $\tau$-iteration. Furthermore, the two iteration series can also be applied in some region of $g<1$, where $w_{ev}(x)$ is still positive and finite but no longer a monotonic function of $x$, and reasonable results are obtained. However, for very small $g$ (e.g. $g<0.4$), where $w(x)$ becomes negative in some $x$-region, the $f$-iteration does not give reasonable results and the obtained ${\cal E}_n$ becomes unstable, while the $\tau$-iteration still works, although the convergence becomes slower when $g$ is smaller. The reason is the following: The iterative formula for the perturbed wave function $f$ is expressed as a sum of two terms. It can become negative if the term containing $w(x)$ becomes negative. This would give a negative $\psi(x)$ in some $x$-region, while the ground state wave function $\psi(x)$ should always be positive. For the $\tau$-iteration the perturbed wave function $e^{-\tau(x)}$ is always positive for any finite $\tau(x)$, either positive or negative. This condition is fulfilled as long as $|w(x)|$ is finite and $w(\infty) \rightarrow 0$. Thus, the $\tau$-iteration imposes fewer restrictions on the perturbed potential $w(x)$. 
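As a concrete illustration of the scheme, the first two steps of the $\tau$-iteration (\ref{e3.23}) and (\ref{e3.25}) for the "$+$" state can be carried out numerically. The following Python sketch (the grid spacing and the cutoff at $x=6$ are ad hoc numerical choices, adequate here because the integrand decays like $e^{-2gS_0}$) approximately reproduces the $g=1$ entries ${\cal E}_{+,1}\approx 0.4135$ and ${\cal E}_{+,2}\approx 0.431$ of Table 2:

```python
import numpy as np

def trapz(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

g = 1.0
x = np.linspace(0.0, 6.0, 6001)

S0 = (x - 1.0)**2 * (x + 2.0) / 3.0                  # S_0(x) of Eq. (3.6)
phi2 = np.exp(-2.0 * g * S0) * (2.0 / (1.0 + x))**2  # phi_+^2(x)
u = 1.0 / (1.0 + x)**2
norm = trapz(phi2, x)

# First step, Eq. (3.23): E1 is the phi_+^2-weighted average of u
E1 = trapz(phi2 * u, x) / norm

# tau'_{+,1}(x) = -2 phi_+^{-2}(x) * int_x^infty phi_+^2(y) (u(y) - E1) dy
f = phi2 * (u - E1)
seg = 0.5 * (f[1:] + f[:-1]) * np.diff(x)
tail = np.append(np.cumsum(seg[::-1])[::-1], 0.0)    # tail integral at each x
tau1p = -2.0 * tail / phi2

# Second step, Eq. (3.25): E2 = E1 + <(tau'_1)^2 / 2>
E2 = E1 + trapz(phi2 * 0.5 * tau1p**2, x) / norm
print(E1, E2)
```

Each further step only requires one more one-fold integration of the same kind, which is the practical advantage of the $\tau$-iteration over the $f$-iteration noted above.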
Therefore, it is of interest to further study the conditions for the convergence of the $\tau$-iteration and to apply it to other physics problems where the perturbation method cannot be applied. \section*{\bf Acknowledgment} The author is grateful to Professor T. D. Lee for his continuous and substantial instructions and advice. This work is partly supported by the National Natural Science Foundation of China (NNSFC, No. 20047001). \begin{center} {\Large \bf References} \end{center} 1. R. Friedberg, T. D. Lee, W. Q. Zhao and A. Cimenser, Ann. Phys. 294 (2001) 67 2. R. Friedberg, T. D. Lee and W. Q. Zhao, Ann. Phys. 288 (2001) 52 3. Zhao Wei-Qin, Commun. Theoret. Phys. (Beijing, China) 42 (2004) 37 4. R. Friedberg, T. D. Lee and W. Q. Zhao, Il Nuovo Cimento A112 (1999) 1195 5. R. Friedberg and T. D. Lee, Ann. Phys. 308 (2003) 263, quant-ph/0407207 \section*{\bf Appendix} \setcounter{section}{10} \setcounter{equation}{0} \noindent {\bf Appendix A. Asymptotic series expansion of $\tau_+$ and ${\cal E}_+$ }\\ Starting from the integral equation for $\tau'_+$, the first equation in (\ref{e3.19}), $$ \tau_+'(x)= 2 \phi_+^{-2}(x) \int\limits_x^{\infty} \phi_+^2(y)[-(u(y)-{\cal E}_+)-\frac{1}{2}~(\tau_+'(y))^2] dy \eqno(A.1) $$ an asymptotic series expansion of $\tau_+$ and ${\cal E}_+$ can be obtained. From (A.1) it is easy to obtain the following equation $$ (\frac{1}{2}~\phi_+^2\tau'_+)' = [(u-{\cal E}_+)+ \frac{1}{2}~(\tau'_+)^2]\phi_+^2~. \eqno(A.2) $$ Using the definition of $\phi_+$ in (\ref{e3.6}), the above equation leads to $$ g S_0'\tau'_+= \frac{1}{2}~\tau_+'' - S_1'\tau'_+ - (u-{\cal E}_+) -\frac{1}{2}~(\tau'_+)^2. \eqno(A.3) $$ Now let us expand both $\tau'_+(x)$ and ${\cal E}_+$ in power series of $g^{-1}$ as follows: $$ \tau_+'= \sum\limits^{\infty}_1 \frac{1}{g^m}~ S_{m+1}' ~~~{\rm and}~~~ {\cal E}_+ = \sum\limits^{\infty}_0 \frac{1}{g^m}~ \epsilon_{m+1}. 
\eqno(A.4) $$ Substituting (A.4) into (A.3) and comparing terms proportional to $g^{-m}$, we obtain a series of equations for $\{S_m\}$ and $\{\epsilon_m\}$: \begin{eqnarray} g^{0}~~~~~~~~~~S_0'S_2' &=& -(u-\epsilon_1)\nonumber\\ g^{-1}~~~~~~~~~~S_0'S_3' &=& \frac{1}{2}~S_2''-S_1'S_2'+\epsilon_2\nonumber\\ g^{-2}~~~~~~~~~~S_0'S_4' &=& \frac{1}{2}~S_3''-S_1'S_3'+\epsilon_3-\frac{1}{2}~S^{\prime 2}_2 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(A.5)\nonumber\\ &&\cdots\nonumber\\ g^{-m}~~~~~~S_0'S_{m+2}' &=& \frac{1}{2}~S_{m+1}''-S_1'S_{m+1}'+\epsilon_{m+1} -\frac{1}{2}~\sum\limits_{n=1}^{m-1}S'_{n+1}S'_{m+1-n}.\nonumber \end{eqnarray} The above equations are exactly the same as those obtained in Ref.[5], when taking ${\cal E}_+=-E$ and $\epsilon_m=-E_m$. From the first equation of (A.5), considering $u=\frac{1}{(1+x)^2}$, we have $\epsilon_1=\frac{1}{4}$ when setting $x=1$. In the following we derive a compact expression of $\{S'_m\}$ and $\{\epsilon_m\}$. Assume, for $m\geq 1$, $$ S_{m+1}' = \frac{1}{2^{4m}}~\xi^2 \sum\limits_{i=0}^{2m-1} \beta_i(m)\xi^i \eqno(A.6) $$ where $$ \xi=\frac{2}{1+x}. \eqno(A.7) $$ Considering $S_1'=\frac{1}{x+1}$, from the last equation of (A.5) we have $$ S_0'S_{m+2}'= - \frac{1}{2^{4m+2}}~\sum\limits^{2m-1}_{l=0} \beta_l(m)\xi^{l+3}(l+4)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ $$ $$ -\frac{1}{2^{4m+2}}~\sum\limits^{m-1}_{n=1}\sum\limits^{2n-1}_{i=0} \sum\limits^{2(m-n)-1}_{j=0}2\beta_i(n)\beta_j(m-n)\xi^{i+j+4}+\epsilon_{m+1}. 
\eqno(A.8) $$ For the second summation on the right hand side of (A.8), defining $i+j+1=l$, we have $l_{min}=1$ and $l_{max}=2m-1$, which leads to $$ S_0'S_{m+2}'= - \frac{1}{2^{4m+2}}~\sum\limits^{2m-1}_{l=0} \beta_l(m)\xi^{l+3}(l+4)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ $$ $$ -\frac{1}{2^{4m+2}}~\sum\limits^{2m-1}_{l=0}\sum\limits^{m-1}_{n=1} \sum\limits^{i_{max}}_{i=i_{min}}2\beta_i(n)\beta_{l-i-1}(m-n)\xi^{l+3}+\epsilon_{m+1}, \eqno(A.9) $$ where $$ i_{min}={\rm max}(0,l-2(m-n)) $$ $$ i_{max}={\rm min}(2n-1,l-1).\eqno(A.10) $$ To fix $\epsilon_{m+1}$ we put $x=1$; then $\xi=1$ and $S_0'=x^2-1=0$. This gives $$ \epsilon_{m+1}= \frac{1}{2^{4m+2}}~\sum\limits^{2m-1}_{l=0} \beta_l(m)(l+4)+\frac{1}{2^{4m+2}}~\sum\limits^{2m-1}_{l=1}\sum\limits^{m-1}_{n=1} \sum\limits^{i_{max}}_{i=i_{min}}2\beta_i(n)\beta_{l-i-1}(m-n). \eqno(A.11) $$ Introducing $S_0'=-\frac{4(\xi-1)}{\xi^2}$, $S_{m+2}'$ can be expressed as $$ S_{m+2}'= \frac{1}{2^{4(m+1)}}~\xi^2~\frac{1}{\xi-1}~\{\sum\limits^{2m-1}_{l=0} \beta_l(m)(l+4)(\xi^{l+3}-1)~~~~~~~~~~~~~~~ $$ $$ +\sum\limits^{2m-1}_{l=1}\sum\limits^{m-1}_{n=1} \sum\limits^{i_{max}}_{i=i_{min}}2\beta_i(n)\beta_{l-i-1}(m-n)(\xi^{l+3}-1)\}. \eqno(A.12) $$ Applying the equality $$ \xi^{l+3}-1=(\xi-1)(\xi^{l+2}+\xi^{l+1}+\cdots+1)=(\xi-1)\sum\limits_{L=0}^{l+2}\xi^L, \eqno(A.13) $$ and changing the summation order of $L$ and $l$ in (A.12), we finally reach the following expression of $S_{m+2}'$: $$ S_{m+2}'=\frac{1}{2^{4(m+1)}}~\xi^2~\sum\limits_{L=0}^{2m+1}~\xi^L\{\sum\limits^{2m-1}_{l=l_1} \beta_l(m)(l+4)+\sum\limits^{2m-1}_{l=l_2}\sum\limits^{m-1}_{n=1} \sum\limits^{i_{max}}_{i=i_{min}}2\beta_i(n)\beta_{l-i-1}(m-n)\} $$ $$ =\frac{1}{2^{4(m+1)}}\xi^2~\sum\limits_{L=0}^{2m+1}\beta_L(m+1)\xi^L ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \eqno(A.14) $$ with $$ l_1={\rm max}(0,L-2),~~~~~~l_2={\rm max}(1,L-2). $$ For $m=1$, it is easy to get $\beta_0(1)=1$ and $\beta_1(1)=1$. 
Finally we obtain, for $m\geq 1$, $$ \beta_L(m+1)=\beta_L^0(m+1)+\Delta\beta_L(m+1) \eqno(A.15) $$ where $$ \beta_L^0(m+1)=\sum\limits^{2m-1}_{l={\rm max}(0,L-2)} \beta_l(m)(l+4)~~~~~~~~~~~~~~~~~~ \eqno(A.16) $$ $$ \Delta\beta_L(m+1)=\sum\limits^{2m-1}_{l={\rm max}(1,L-2)}\sum\limits^{m-1}_{n=1} \sum\limits^{i_{max}}_{i=i_{min}}2\beta_i(n)\beta_{l-i-1}(m-n). $$ Taking $L=0$ in (A.15) and (A.16), from (A.11) we have $$ \epsilon_{m+1}=\frac{1}{2^{4m+2}}~\beta_0(m+1). \eqno(A.17) $$ Based on (A.15) and (A.16) a revised pyramid structure of $\beta_l(m)$ can be constructed in a way similar to that in Appendix D of Ref.[1]. \begin{eqnarray*} \begin{array}{ccccccl} &&\beta_1(1)&\beta_0(1)&&&~~~m=1\\ &&&&&&\\ &\beta^0_3(2)&\beta^0_2(2)&\beta^0_1(2)&\beta^0_0(2)&&~~~\\ &\Delta\beta_3(2)&\Delta\beta_2(2)&\Delta\beta_1(2)&\Delta\beta_0(2)&&~~~\\ &\beta_3(2)&\beta_2(2)&\beta_1(2)&\beta_0(2)&&~~~m=2\\ &&&&&&\\ ~~~~~~~~\beta^0_5(3)&\beta^0_4(3)&\beta^0_3(3)&\beta^0_2(3)&\beta^0_1(3)&\beta^0_0(3)&~~~\\ ~~~~~~~~\Delta\beta_5(3)&\Delta\beta_4(3)&\Delta\beta_3(3)&\Delta\beta_2(3)&\Delta\beta_1(3)&\Delta\beta_0(3)&~~~\\ ~~~~~~~~\beta_5(3)&\beta_4(3)&\beta_3(3)&\beta_2(3)&\beta_1(3)&\beta_0(3)&~~~m=3 ~~~~~~~~~~~~~~~~~(A.18)\\ &&\cdots&\cdots&& \end{array} \end{eqnarray*} By using (A.15) and (A.16), we see that each row $\beta_l(m+1)$ can be obtained from the rows $\beta^0_l(m+1)$ and $\Delta\beta_l(m+1)$, while the latter two can be obtained from the row $\beta_l(m)$ above. For example, using $\beta_1(1)=\beta_0(1)=1$, we have \begin{eqnarray} \beta^0_3(2)= \beta_1(1)\cdot (1+4) = 5, \nonumber\\ \beta^0_2(2)= \beta_1(1) \cdot 5 + \beta_0(1) \cdot 4 = 9, \nonumber\\ \Delta\beta_2(2)=\Delta\beta_3(2)=0\nonumber \end{eqnarray} etc. 
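The recursion (A.15)--(A.17) is straightforward to implement exactly with integer arithmetic; a short Python sketch (the function names are ad hoc) that rebuilds the pyramid rows $\beta_l(m)$ and the energies $\epsilon_m$:

```python
from fractions import Fraction

def beta_rows(M):
    """Rows beta_l(m), l = 0..2m-1, for m = 1..M, built via (A.15)-(A.16)."""
    rows = {1: [1, 1]}                       # base case: beta_0(1) = beta_1(1) = 1
    for m in range(1, M):
        new = []
        for L in range(2 * m + 2):           # L = 0..2m+1
            # beta^0_L(m+1), Eq. (A.16)
            b0 = sum(rows[m][l] * (l + 4) for l in range(max(0, L - 2), 2 * m))
            # Delta beta_L(m+1), Eq. (A.16)
            db = 0
            for l in range(max(1, L - 2), 2 * m):
                for n in range(1, m):
                    i_min = max(0, l - 2 * (m - n))
                    i_max = min(2 * n - 1, l - 1)
                    db += sum(2 * rows[n][i] * rows[m - n][l - i - 1]
                              for i in range(i_min, i_max + 1))
            new.append(b0 + db)
        rows[m + 1] = new
    return rows

def epsilon(m, rows):
    """epsilon_m = beta_0(m) / 2^(4(m-1)+2) for m >= 2, cf. (A.17)."""
    return Fraction(rows[m][0], 2 ** (4 * (m - 1) + 2))

rows = beta_rows(4)
print(rows[4])        # [5013, 5013, 5013, 4301, 3375, 2235, 1169, 353]
print(epsilon(2, rows), epsilon(3, rows), epsilon(4, rows))
```

The printed row reproduces the $m=4$ line of the pyramid (A.19), and the fractions agree with the energies $\epsilon_2$, $\epsilon_3$, $\epsilon_4$ in (A.20).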
The values of the elements in the pyramid are \begin{eqnarray*} \begin{array}{rccccccccl} \beta_l(1)&&&&1&1&&&&~~~m=1\\ &&&&&&&&&\\ \beta^0_l(2)&&&5&9&9&9&&&~~~\\ \Delta\beta_l(2)&&&0&0&0&0&&&~~~\\ \beta_l(2)&&&5&9&9&9&&&~~~m=2\\ &&&&&&&&&\\ \beta^0_l(3)&&35&89&134&170&170&170&&~~~\\ \Delta\beta_l(3)&&2&6&8&8&8&8&&~~~\\ \beta_l(3)&&37&95&142&178&178&178&&~~~m=3\\ &&&&&&&&&\\ \beta^0_l(4)&333&1093&2087&3155&4045&4757&4757&4757&~~~\\ \Delta\beta_l(4)&20&76&148&220&256&256&256&256&~~~\\ \beta_l(4)&353&1169&2235&3375&4301&5013&5013&5013&~~~m=4 ~~~~~~~~~~~~~~~~~(A.19)\\ &&&\cdots&\cdots&&&& \end{array} \end{eqnarray*} Correspondingly, the energies are $$ \epsilon_2=\frac{9}{2^{6}},~~~\epsilon_3=\frac{178}{2^{10}}=\frac{89}{2^9}, ~~~\epsilon_4=\frac{5013}{2^{14}},~~~{\rm etc.} \eqno(A.20) $$ \vspace{1cm} \noindent {\bf Appendix B}\\ For the convenience of applying the coordinate system $\{S,~\alpha\}$ defined in Eq.(2.10), the definitions of some quantities introduced in Ref.[2] are given in the following. For the new coordinate system $$ (S,\alpha) = (S,\alpha_1({\bf q}),\alpha_2({\bf q}), \cdots, \alpha_{N-1}({\bf q})), \eqno(B.1) $$ each point ${\bf q}$ in the $N$-dimensional space will now be designated by $$ (S, \alpha_1, \alpha_2, \cdots, \alpha_{N-1}), \eqno(B.2) $$ instead of $(q_1, q_2, q_3, \cdots, q_N)$. The corresponding line element can be written as $$ d\stackrel{\rightarrow}{{\bf q}} = \stackrel{\wedge}S h_S dS +\sum_{j=1}^{N-1}\stackrel{\wedge}{\alpha}_j h_j d\alpha_j; \eqno(B.3) $$ the gradient is given by $$ {\bf \nabla} = \stackrel{\wedge}S \frac{1}{h_S}\frac{\partial}{\partial S} +\sum_{j=1}^{N-1}\stackrel{\wedge}{\alpha}_j \frac{1}{h_j} \frac{\partial}{\partial \alpha_j}. 
\eqno(B.4) $$ The kinetic energy operator $$ T=-\frac{1}{2} {\bf \nabla}^2 \eqno(B.5) $$ can be decomposed into two parts: $$ T = T_S + T_{\alpha}, \eqno(B.6) $$ with $$ T_S = -\frac{1}{2h_Sh_\alpha} \frac{\partial}{\partial S} (\frac{h_{\alpha}}{h_S}\frac{\partial}{\partial S}), \eqno(B.7) $$ $$ T_{\alpha} = -\frac{1}{2h_Sh_\alpha} \sum_{j=1}^{N-1} \frac{\partial}{\partial \alpha_j} (\frac{h_Sh_{\alpha}}{h_j^2}\frac{\partial}{\partial \alpha_j}), \eqno(B.8) $$ in which $$ h_{\alpha} = \prod_{j=1}^{N-1}h_j, \eqno(B.9) $$ and $$ h_S^2 = [({\bf \nabla}S)^2]^{-1},~~~ h_1^2 = [({\bf \nabla}\alpha_1)^2]^{-1},\cdots,~~~ h_j^2 = [({\bf \nabla}\alpha_j)^2]^{-1},\cdots. \eqno(B.10) $$ The volume element in the ${\bf q}$-space is now $$ d^N {\bf q} = h_Sh_\alpha dSd\alpha \eqno(B.11) $$ with $$ d\alpha = \prod_{j=1}^{N-1}d\alpha_j. \eqno(B.12) $$ \end{document}
\begin{document} \title[Periodic bifurcations in descendant trees] {Periodic bifurcations in \\ descendant trees of finite \(p\)-groups} \author{Daniel C. Mayer} \address{Naglergasse 53\\8010 Graz\\Austria} \email{[email protected]} \urladdr{http://www.algebra.at} \thanks{Research supported by the Austrian Science Fund (FWF): P 26008-N25} \subjclass[2000]{Primary 20D15, 20F14, 20E18, 20E22, 20F05, 20-04; secondary 05C63} \keywords{finite \(p\)-group, central series, descendant tree, pro-\(p\) group, coclass tree, \(p\)-covering group, nuclear rank, multifurcation, coclass graph, pc-presentation, commutator calculus, Schur \(\sigma\)-group} \date{February 11, 2015} \begin{abstract} Theoretical background and an implementation of the \(p\)-group generation algorithm by Newman and O'Brien are used to provide computational evidence of a new type of periodically repeating patterns in pruned descendant trees of finite \(p\)-groups. \end{abstract} \maketitle \section{Introduction} \label{s:Intro} In \S\S\ \ref{s:Structure} -- \ref{s:HistoryDescTrees}, we present an exposition of facts concerning the mathematical \textit{structure} which forms the central idea of this article: descendant trees of finite \(p\)-groups. Their computational \textit{construction} is recalled in \S\S\ \ref{s:Construction} -- \ref{s:PruningStrategies} on the \(p\)-group generation algorithm. Recently discovered periodic patterns in descendant trees with promising arithmetical applications form the topic of the final \S\ \ref{s:PeriodicBifurcations} and the coronation of the entire work. \section{The structure: descendant trees} \label{s:Structure} In mathematics, specifically group theory, a \textit{descendant tree} is a hierarchical structure for visualizing parent-descendant relations (\S\S\ \ref{s:Terminology} and \ref{s:TreeDiagram}) between isomorphism classes of finite groups of prime power order \(p^n\), for a fixed prime number \(p\) and varying integer exponents \(n\ge 0\). 
Such groups are briefly called finite \(p\)-groups. The \textit{vertices} of a descendant tree are isomorphism classes of finite \(p\)-groups. In addition to their order \(p^n\), finite \(p\)-groups possess two further related invariants, the nilpotency class \(c\) and the \textit{coclass} \(r:=n-c\) (\S\S\ \ref{s:CoclassTrees} and \ref{s:Multifurcation}). It turned out that descendant trees of a particular kind, the so-called \textit{pruned coclass trees} whose infinitely many vertices share a common coclass \(r\), reveal a \textit{repeating finite pattern} (\S\ \ref{s:VirtualPeriodicity}). These two crucial properties of finiteness and periodicity, which have been proved independently by M. du Sautoy \cite{dS} and by B. Eick and C.R. Leedham-Green \cite{EkLg}, admit a characterization of all members of the tree by finitely many \textit{parametrized presentations} (\S\S\ \ref{s:ConcreteExamples} and \ref{s:PeriodicBifurcations}). Consequently, descendant trees play a fundamental role in the classification of finite \(p\)-groups. By means of kernels and targets of \textit{Artin transfer} homomorphisms \cite{Ar2}, descendant trees can be endowed with additional structure \cite{Ma2,Ma3,Ma4}, which recently turned out to be decisive for \textit{arithmetical applications} in class field theory, in particular, for determining the exact length of \(p\)-class towers \cite{BuMa}. An important question is how the descendant tree \(\mathcal{T}(R)\) can actually be constructed for an assigned starting group which is taken as the root \(R\) of the tree. Sections \S\S\ \ref{s:LowerExponentP} -- \ref{s:SchurMpl} are devoted to recall a minimum of the necessary background concerning the \textit{\(p\)-group generation algorithm} by M.F. Newman \cite{Nm2} and E.A. O'Brien \cite{Ob,HEO}, which is a recursive process for constructing the descendant tree of a given finite \(p\)-group playing the role of the tree root. 
This algorithm is now implemented in the ANUPQ-package \cite{GNO} of the computational algebra systems GAP \cite{GAP} and MAGMA \cite{MAGMA}. As a final highlight in \S\ \ref{s:PeriodicBifurcations}, whose formulation requires an understanding of all the preceding sections, this article concludes with discoveries of a new, and as yet unproved, kind of repeating infinite patterns called \textit{periodic bifurcations}, which appeared in extensive computational constructions of descendant trees of certain finite \(2\)-groups, resp. \(3\)-groups, \(G\) with abelianization \(G/G^\prime\) of type \((2,2,2)\), resp. \((3,3)\), and have immediate applications in algebraic number theory and class field theory. \section{Historical remarks on bifurcation} \label{s:HistoricalRmksBifurcation} Since computer-aided classifications of finite \(p\)-groups go back to 1975, forty years ago, the question arises why periodic bifurcations did not show up earlier in the literature. At first sight this seems incomprehensible, because the smallest two \(3\)-groups which reveal the phenomenon of periodic bifurcations with modest complexity were well known both to J.A. Ascione, G. Havas and C.R. Leedham-Green \cite{AHL} and to B. Nebelung \cite{Ne}. Their SmallGroups identifiers are \(\langle 729,49\rangle\) and \(\langle 729,54\rangle\) (see \S\ \ref{s:Identifiers} and \cite{BEO1,BEO2}). Due to the lack of systematic identifiers in 1977, they were called the \textit{non-CF groups} \(Q\) and \(U\) in \cite[Tbl.1, p.265, and Tbl.2, p.266]{AHL}, since their lower central series \((\gamma_j(G))_{j\ge 1}\) has a non-cyclic factor \(\gamma_3(G)/\gamma_4(G)\) of type \((3,3)\). Similarly, there was no SmallGroups Database yet in 1989, whence the two groups were designated by \(G_0^{5,6}(0,-1,0,1)\) and \(G_0^{5,6}(0,0,0,1)\) in \cite[Satz 6.14, p.208]{Ne}. Thus Ascione and Nebelung both stood at the threshold of uncharted territory. 
The reason why they did not enter this door was the sharp definition of their project targets. A \textit{bifurcation} is the special case of a \(2\)-fold multifurcation (\S\ \ref{s:Multifurcation}): At a vertex \(G\) of coclass \(\mathrm{cc}(G)=r\) with nuclear rank \(\nu(G)=2\), the descendant tree \(\mathcal{T}(G)\) forks into a \textit{regular} component of the same coclass \(\mathcal{T}^r(G)\) and an \textit{irregular} component of the next coclass \(\mathcal{T}^{r+1}(G)\). Ascione's thesis subject \cite{As1,As2} in 1979 was to investigate two-generated \(3\)-groups \(G\) of second maximal class, that is, of coclass \(\mathrm{cc}(G)=2\). Consequently, she studied the regular component \(\mathcal{T}^2(G)\) for \(G\in\lbrace Q,U\rbrace\) and did not touch the irregular tree \(\mathcal{T}^3(G)\) whose members are not of second maximal class. The goal of Nebelung's dissertation \cite{Ne} in 1989 was the classification of metabelian \(3\)-groups \(G\) with \(G/G^\prime\) of type \((3,3)\). Therefore she focused on the \textit{metabelian skeleton} \(\mathcal{T}_\ast^2(G)\) of the regular coclass tree \(\mathcal{T}^2(G)\) for \(G\in\lbrace Q,U\rbrace\) (a special case of a \textit{pruned} coclass tree, see \S\ \ref{s:VirtualPeriodicity}) and omitted the irregular component \(\mathcal{T}^3(G)\) whose members are entirely non-metabelian of derived length \(3\). \section{Definitions and terminology} \label{s:Terminology} According to M.F. Newman \cite[\S\ 2, pp.52--53]{Nm}, there exist several distinct definitions of the \textit{parent} \(\pi(G)\) of a finite \(p\)-group \(G\). 
The common principle is to form the quotient \(\pi(G):=G/N\) of \(G\) by a suitable normal subgroup \(N\unlhd G\) which can be either \begin{enumerate}[({P}1)] \item the centre \(N=\zeta_1(G)\) of \(G\), whence \(\pi(G)=G/\zeta_1(G)\) is called \textit{central quotient} of \(G\) or \item the last non-trivial term \(N=\gamma_c(G)\) of the lower central series of \(G\), where \(c\) denotes the nilpotency class of \(G\) or \item the last non-trivial term \(N=P_{c-1}(G)\) of the lower exponent-\(p\) central series of \(G\), where \(c\) denotes the exponent-\(p\) class of \(G\) or \item the last non-trivial term \(N=G^{(d-1)}\) of the derived series of \(G\), where \(d\) denotes the derived length of \(G\). \end{enumerate} In each case, \(G\) is called an \textit{immediate descendant} of \(\pi(G)\) and a \textit{directed edge} of the tree is defined either by \(G\to\pi(G)\) in the direction of the canonical projection \(\pi:G\to\pi(G)\) onto the quotient \(\pi(G)=G/N\) or by \(\pi(G)\to G\) in the opposite direction, which is more usual for descendant trees. The former convention is adopted by Leedham-Green and Newman \cite[\S\ 2, pp.194--195]{LgNm}, by du Sautoy and D. Segal \cite[\S\ 7, p.280]{dSSg}, by Leedham-Green and S. McKay \cite[Dfn.8.4.1, p.166]{LgMk}, and by Eick, Leedham-Green, Newman and O'Brien \cite[\S\ 1]{ELNO}. The latter definition is used by Newman \cite[\S\ 2, pp.52--53]{Nm}, by Newman and O'Brien \cite[\S\ 1, p.131]{NmOb}, by du Sautoy \cite[\S\ 1, p.67]{dS}, by H. Dietrich, Eick and D. Feichtenschlager \cite[\S\ 2, p.46]{DEF} and by Eick and Leedham-Green \cite[\S\ 1, p.275]{EkLg}. In the following, the direction of the canonical projections is selected for all edges. 
Then, more generally, a vertex \(R\) is a \textit{descendant} of a vertex \(P\), and \(P\) is an \textit{ancestor} of \(R\), if either \(R\) is equal to \(P\) or there is a \textit{path} \begin{equation} \label{eqn:Path} R=Q_0\to Q_1\to\cdots\to Q_{m-1}\to Q_m=P,\text{ with }m\ge 1, \end{equation} \noindent of directed edges from \(R\) to \(P\). The vertices forming the path necessarily coincide with the \textit{iterated parents} \(Q_j=\pi^{j}(R)\) of \(R\), with \(0\le j\le m\): \begin{equation} \label{eqn:IteratedParents} R=\pi^{0}(R)\to\pi^{1}(R)\to\cdots\to\pi^{m-1}(R)\to\pi^{m}(R)=P,\text{ with }m\ge 1. \end{equation} \noindent In the most important special case (P2) of parents defined as last non-trivial lower central quotients, they can also be viewed as the \textit{successive quotients} \(R/\gamma_{c+1-j}(R)\) \textit{of class} \(c-j\) of \(R\) when the nilpotency class of \(R\) is given by \(c\ge m\): \begin{equation} \label{eqn:SuccessiveQuotients} R\simeq R/\gamma_{c+1}(R)\to R/\gamma_{c}(R)\to\cdots\to R/\gamma_{c+2-m}(R)\to R/\gamma_{c+1-m}(R)\simeq P, \end{equation} \noindent with \(c\ge m\ge 1\). Generally, the \textit{descendant tree} \(\mathcal{T}(G)\) of a vertex \(G\) is the subtree of all descendants of \(G\), starting at the \textit{root} \(G\). The \textit{maximal} possible descendant tree \(\mathcal{T}(1)\) of the trivial group \(1\) contains all finite \(p\)-groups and is somewhat exceptional, since, for any parent definition (P1--P4), the trivial group \(1\) has infinitely many abelian \(p\)-groups as its immediate descendants. The parent definitions (P2--P3) have the advantage that any non-trivial finite \(p\)-group (of order divisible by \(p\)) possesses only finitely many immediate descendants. 
\section{Pro-\(p\) groups and coclass trees} \label{s:CoclassTrees} For a sound understanding of coclass trees as a particular instance of descendant trees, it is necessary to summarize some facts concerning \textit{infinite topological pro-\(p\) groups}. The members \(\gamma_j(S)\), with \(j\ge 1\), of the lower central series of a pro-\(p\) group \(S\) are open and closed subgroups of finite index, and therefore the corresponding quotients \(S/\gamma_j(S)\) are finite \(p\)-groups. The pro-\(p\) group \(S\) is said to be of \textit{coclass} \(\mathrm{cc}(S):=r\) when the limit \(r=\lim_{j\to\infty}\,\mathrm{cc}(S/\gamma_j(S))\) of the coclass of the successive quotients exists and is finite. An infinite pro-\(p\) group \(S\) of coclass \(r\) is a \textit{\(p\)-adic pre-space group} \cite[Dfn.7.4.11, p.147]{LgMk}, since it has a normal subgroup \(T\), the \textit{translation group}, which is a free module over the ring \(\mathbb{Z}_p\) of \(p\)-adic integers of uniquely determined rank \(d\), the \textit{dimension}, such that the quotient \(P=S/T\) is a finite \(p\)-group, the \textit{point group}, which \textit{acts on \(T\) uniserially}. The dimension is given by \begin{equation} \label{eqn:Dimension} d=(p-1)p^{s},\text{ with some }0\le s<r. \end{equation} A central finiteness result for infinite pro-\(p\) groups of coclass \(r\) is provided by the so-called \textit{Theorem D}, which is one of the five \textit{Coclass Theorems} proved in 1994 independently by A. Shalev \cite{Sv} and by C.R. Leedham-Green \cite[Thm.7.7, p.66]{Lg}, and conjectured in 1980 already by Leedham-Green and Newman \cite[\S\ 2, pp.194--196]{LgNm}. Theorem D asserts that there are only finitely many isomorphism classes of infinite pro-\(p\) groups of coclass \(r\), for any fixed prime \(p\) and any fixed non-negative integer \(r\). 
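As a small worked instance of the dimension formula \eqref{eqn:Dimension} (an illustrative aside of ours): for coclass \(r=1\) the constraint \(0\le s<r\) forces \(s=0\), so the dimension is completely determined by the prime,

```latex
% Dimension of the translation group of an infinite pro-p group
% of coclass r = 1: the bound 0 <= s < 1 forces s = 0.
\[
  d=(p-1)p^{0}=p-1,
  \qquad\text{whence}\quad
  d=1\ \text{for}\ p=2,
  \qquad
  d=2\ \text{for}\ p=3.
\]
```

By Theorem \ref{thm:Periodicity} below, \(d\) is also the length of the branch periodicity, in accordance with the period lengths \(1\) and \(2\) observed for the coclass trees of \(\mathcal{G}(2,1)\) and \(\mathcal{G}(3,1)\) in \S\ \ref{ss:CoclassOne}.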
As a consequence, if \(S\) is an infinite pro-\(p\) group of coclass \(r\), then there exists a minimal integer \(i\ge 1\) such that the following three conditions are satisfied for any integer \(j\ge i\). \begin{itemize} \item \(\mathrm{cc}(S/\gamma_j(S))=r\), \item \(S/\gamma_j(S)\) is not a lower central quotient of any infinite pro-\(p\) group of coclass \(r\) which is not isomorphic to \(S\), \item \(\gamma_j(S)/\gamma_{j+1}(S)\) is cyclic of order \(p\). \end{itemize} The descendant tree \(\mathcal{T}(R)\), with respect to the parent definition (P2), of the root \(R=S/\gamma_i(S)\) with minimal \(i\) is called the \textit{coclass tree} \(\mathcal{T}(S)\) of \(S\) and its unique maximal infinite (reverse-directed) path \begin{equation} \label{eqn:MainLine} R=S/\gamma_i(S)\leftarrow S/\gamma_{i+1}(S)\leftarrow S/\gamma_{i+2}(S)\leftarrow\cdots \end{equation} \noindent is called the \textit{mainline} (or trunk) of the tree. \section{Tree diagram} \label{s:TreeDiagram} Further terminology, used in diagrams visualizing finite parts of descendant trees, is explained in Figure \ref{fig:TreeNotation} by means of an artificial abstract tree. On the left-hand side, a \textit{level} indicates the basic top-down design of a descendant tree. For concrete trees, such as those in Figures \ref{fig:2GroupsCoclass1}, resp. \ref{fig:3GroupsCoclass1}, etc., the level is usually replaced by a scale of orders increasing from the top to the bottom. A vertex is \textit{capable} (or \textit{extendable}) if it has at least one immediate descendant, otherwise it is \textit{terminal} (or a \textit{leaf}). Vertices sharing a common parent are called \textit{siblings}. 
{\tiny \begin{figure} \caption{Terminology for descendant trees} \label{fig:TreeNotation} \end{figure} } If the descendant tree is a coclass tree \(\mathcal{T}(R)\) with root \(R=R_0\) and with mainline vertices \((R_n)_{n\ge 0}\) labelled according to the level \(n\), then the finite subtree defined as the difference set \begin{equation} \label{eqn:Branch} \mathcal{B}(n):=\mathcal{T}(R_n)\setminus\mathcal{T}(R_{n+1}) \end{equation} \noindent is called the \(n\)th \textit{branch} (or twig) of the tree or also the branch \(\mathcal{B}(R_n)\) with root \(R_n\), for any \(n\ge 0\). The \textit{depth} of a branch is the maximal length of the paths connecting its vertices with its root. Figure \ref{fig:TreeNotation} shows a descendant tree whose branches \(\mathcal{B}(2),\mathcal{B}(4)\) both have depth \(0\), and \(\mathcal{B}(5)\simeq\mathcal{B}(7)\), resp. \(\mathcal{B}(6)\simeq\mathcal{B}(8)\), are isomorphic as trees. If all vertices of depth bigger than a given integer \(k\ge 0\) are removed from branch \(\mathcal{B}(n)\), then we obtain the (depth-)\textit{pruned branch} \(\mathcal{B}_k(n)\). Correspondingly, the \textit{pruned coclass tree} \(\mathcal{T}_k(R)\), resp. the entire coclass tree \(\mathcal{T}(R)\), consists of the infinite sequence of its pruned branches \((\mathcal{B}_k(n))_{n\ge 0}\), resp. branches \((\mathcal{B}(n))_{n\ge 0}\), connected by the mainline, whose vertices \(R_n\) are called \textit{infinitely capable}. \section{Virtual periodicity} \label{s:VirtualPeriodicity} The periodicity of branches of depth-pruned coclass trees has been proved with analytic methods using zeta functions \cite[\S\ 7, Thm.15, p.280]{dSSg} of groups by M. du Sautoy \cite[Thm.1.11, p.68, and Thm.8.3, p.103]{dS}, and with algebraic techniques using cohomology groups by B. Eick and C.R. Leedham-Green \cite{EkLg}. The former methods admit the qualitative insight of ultimate virtual periodicity, the latter techniques determine the quantitative structure. 
\begin{theorem} \label{thm:Periodicity} For any infinite pro-\(p\) group \(S\) of coclass \(r\ge 1\) and dimension \(d\), and for any given depth \(k\ge 1\), there exists an effective minimal lower bound \(f(k)\ge 1\) beyond which \textit{periodicity of length} \(d\) of the depth-\(k\) pruned branches of the coclass tree \(\mathcal{T}(S)\) sets in, that is, there exist graph isomorphisms \begin{equation} \label{eqn:Periodicity} \mathcal{B}_k(n+d)\simeq\mathcal{B}_k(n),\text{ for all }n\ge f(k). \end{equation} \end{theorem} \begin{proof} The graph isomorphisms of depth-\(k\) pruned branches with roots of sufficiently large order \(n\ge f(k)\) are derived with cohomological methods in \cite[Thm.6, p.277, Thm.9, p.278]{EkLg} and the effective lower bound \(f(k)\) for the branch root orders is established in \cite[Thm.29, p.287]{EkLg}. \end{proof} This central result can be expressed informally: When we look at a coclass tree through a pair of blinkers and ignore a finite number of pre-periodic branches at the top, we shall see a repeating finite pattern (\textit{ultimate} periodicity). However, if we take wider blinkers, the pre-periodic initial section may become longer (\textit{virtual} periodicity). The vertex \(P=R_{f(k)}\) is called the \textit{periodic root} of the pruned coclass tree, for a fixed value of the depth \(k\). See Figure \ref{fig:TreeNotation}. \section{Multifurcation and coclass graphs} \label{s:Multifurcation} Assume that parents of finite \(p\)-groups are defined as last non-trivial lower central quotients (P2). For a \(p\)-group \(G\) of coclass \(\mathrm{cc}(G)=r\), we can distinguish its (entire) descendant tree \(\mathcal{T}(G)\) and its \textit{coclass-\(r\) descendant tree} \(\mathcal{T}^r(G)\), the subtree consisting of descendants of coclass \(r\) only. The group \(G\) is \textit{coclass-settled} if \(\mathcal{T}(G)=\mathcal{T}^r(G)\). 
The \textit{nuclear rank} \(\nu(G)\) of \(G\) (see \S\ \ref{s:CoveringGroup}) in the theory of the \(p\)-group generation algorithm by M.F. Newman \cite{Nm2} and E.A. O'Brien \cite{Ob} provides the following criteria. \begin{itemize} \item \(G\) is terminal, and thus trivially coclass-settled, if and only if \(\nu(G)=0\). \item If \(\nu(G)=1\), then \(G\) is capable, but it remains unknown whether \(G\) is coclass-settled. \item If \(\nu(G)=m\ge 2\), then \(G\) is capable and certainly not coclass-settled. \end{itemize} In the last case, a more precise assertion is possible: If \(G\) has coclass \(r\) and nuclear rank \(\nu(G)=m\ge 2\), then it gives rise to an \(m\)-fold \textit{multifurcation} into a \textit{regular} coclass-\(r\) descendant tree \(\mathcal{T}^r(G)\) and \(m-1\) \textit{irregular} descendant trees \(\mathcal{T}^{r+j}(G)\) of coclass \(r+j\), for \(1\le j\le m-1\). Consequently, the descendant tree of \(G\) is the disjoint union \begin{equation} \label{eqn:Components} \mathcal{T}(G)=\dot{\cup}_{j=0}^{m-1}\,\mathcal{T}^{r+j}(G). \end{equation} Multifurcation is correlated with different orders of the last non-trivial term \(\gamma_c(Q)\) of the lower central series of immediate descendants \(Q\). Since the nilpotency class increases by exactly one unit, \(c=\mathrm{cl}(Q)=\mathrm{cl}(P)+1\), from a parent \(P=\pi(Q)\) to any immediate descendant \(Q\), the coclass remains stable, \(r=\mathrm{cc}(Q)=\mathrm{cc}(P)\), if \(\vert\gamma_c(Q)\vert=p\). In this case, \(Q\) is a \textit{regular} immediate descendant with directed edge \(P\leftarrow Q\) of depth 1, as usual. However, the coclass increases by \(m-1\), if \(\vert\gamma_c(Q)\vert=p^m\) with \(m\ge 2\). Then \(Q\) is called an \textit{irregular} immediate descendant with directed edge of \textit{depth} \(m\). 
If the condition of depth (or \textit{step size}) 1 is imposed on all directed edges, then the maximal descendant tree \(\mathcal{T}(1)\) of the trivial group \(1\) splits into a countably infinite disjoint union \begin{equation} \label{eqn:CoclassGraphs} \mathcal{T}(1)=\dot{\cup}_{r=0}^\infty\,\mathcal{G}(p,r) \end{equation} \noindent of directed \textit{coclass graphs} \(\mathcal{G}(p,r)\), which are \textit{forests} rather than trees. More precisely, the above-mentioned Coclass Theorems imply that \begin{equation} \label{eqn:SporadicPart} \mathcal{G}(p,r)=\left(\dot{\cup}_i\,\mathcal{T}(S_i)\right)\dot{\cup}\mathcal{G}_0(p,r) \end{equation} \noindent is the disjoint union of finitely many coclass trees \(\mathcal{T}(S_i)\) of pairwise non-isomorphic infinite pro-\(p\) groups \(S_i\) of coclass \(r\) (Theorem D) and a finite subgraph \(\mathcal{G}_0(p,r)\) of \textit{sporadic groups} lying outside of any coclass tree. \section{Identifiers} \label{s:Identifiers} The SmallGroups Library \textit{identifiers} of finite groups, in particular \(p\)-groups, given in the form \[\langle\text{order},\text{counting number}\rangle\] in the following concrete examples of descendant trees, are due to H.U. Besche, B. Eick and E.A. O'Brien \cite{BEO1,BEO2}. When the group orders are given in a scale on the left hand side as in Figure \ref{fig:2GroupsCoclass1} and Figure \ref{fig:3GroupsCoclass1}, the identifiers are briefly denoted by \[\langle\text{counting number}\rangle.\] Depending on the prime \(p\), there is an upper bound on the order of groups for which a SmallGroup identifier exists, e.g. \(512=2^9\) for \(p=2\), and \(2187=3^7\) for \(p=3\). 
For groups of larger order, a notation with \textit{generalized identifiers} reflecting the descendant structure is employed: A regular immediate descendant, connected by an edge of depth \(1\) with its parent \(P\), is denoted by \[P-\#1;\text{counting number},\] and an irregular immediate descendant, connected by an edge of depth \(d\ge 2\) with its parent \(P\), is denoted by \[P-\#d;\text{counting number}.\] The ANUPQ package \cite{GNO} containing the implementation of the \(p\)-group generation algorithm uses this notation, which goes back to J.A. Ascione in 1979 \cite{As1}. \section{Concrete examples of trees} \label{s:ConcreteExamples} In all examples, the underlying parent definition (P2) corresponds to the usual lower central series. Occasional differences from the parent definition (P3) with respect to the lower exponent-\(p\) central series are pointed out. \subsection{Coclass \(0\)} \label{ss:CoclassZero} The coclass graph \begin{equation} \label{eqn:CoclassZero} \mathcal{G}(p,0)=\mathcal{G}_0(p,0) \end{equation} \noindent of finite \(p\)-groups of coclass \(0\) does not contain a coclass tree and consists of the \textit{trivial group} \(1\) and the \textit{cyclic group} \(C_p\) of order \(p\), which is a leaf (however, it is capable with respect to the lower exponent-\(p\) central series). For \(p=2\) the SmallGroup identifier of \(C_p\) is \(\langle 2,1\rangle\), for \(p=3\) it is \(\langle 3,1\rangle\). 
\subsection{Coclass \(1\)} \label{ss:CoclassOne} The coclass graph \begin{equation} \label{eqn:CoclassOne} \mathcal{G}(p,1)=\mathcal{T}^1(R)\dot{\cup}\mathcal{G}_0(p,1) \end{equation} \noindent of finite \(p\)-groups of coclass \(1\) consists of the unique coclass tree with root \(R=C_p\times C_p\), the \textit{elementary abelian \(p\)-group of rank \(2\)}, and a single \textit{isolated vertex} (a terminal orphan without proper parent in the same coclass graph, since the directed edge to the trivial group \(1\) has depth \(2\)), the \textit{cyclic group} \(C_{p^2}\) of order \(p^2\) in the sporadic part \(\mathcal{G}_0(p,1)\) (however, this group is capable with respect to the lower exponent-\(p\) central series). The tree \(\mathcal{T}^1(R)=\mathcal{T}^1(S_1)\) is the coclass tree of the unique infinite pro-\(p\) group \(S_1\) of coclass \(1\). For \(p=2\), resp. \(p=3\), the SmallGroup identifier of the root \(R\) is \(\langle 4,2\rangle\), resp. \(\langle 9,2\rangle\), and a tree diagram of the coclass graph from branch \(\mathcal{B}(2)\) up to branch \(\mathcal{B}(7)\) (counted with respect to the \(p\)-logarithm of the order of the branch root) is drawn in Figure \ref{fig:2GroupsCoclass1}, resp. Figure \ref{fig:3GroupsCoclass1}, where all groups of order at least \(p^3\) are \textit{metabelian}, that is non-abelian with derived length \(2\) (vertices represented by black discs in contrast to contour squares indicating abelian groups). In Figure \ref{fig:3GroupsCoclass1}, smaller black discs denote metabelian 3-groups where even the maximal subgroups are non-abelian, a feature which does not occur for the metabelian 2-groups in Figure \ref{fig:2GroupsCoclass1}, since they all possess an abelian subgroup of index \(p\) (usually exactly one). The coclass tree of \(\mathcal{G}(2,1)\), resp. \(\mathcal{G}(3,1)\), has periodic root \(\langle 8,3\rangle\) and period of length \(1\) starting with branch \(\mathcal{B}(3)\), resp. 
periodic root \(\langle 81,9\rangle\) and period of length \(2\) starting with branch \(\mathcal{B}(4)\). Both trees have branches of bounded depth \(1\), so their virtual periodicity is in fact a \textit{strict periodicity}. The \(\Gamma_s\), resp. \(\Phi_s\), denote isoclinism families \cite{HaSn,Hl}. However, the coclass tree of \(\mathcal{G}(p,1)\) with \(p\ge 5\) has \textit{unbounded depth} and contains non-metabelian groups, and the coclass tree of \(\mathcal{G}(p,1)\) with \(p\ge 7\) has even \textit{unbounded width}, that is the number of descendants of a fixed order increases indefinitely with growing order \cite{DEF}. With the aid of kernels and targets of Artin transfer homomorphisms \cite{Ar2}, the diagrams in Figures \ref{fig:2GroupsCoclass1} and \ref{fig:3GroupsCoclass1} can be endowed with additional information and redrawn as \textit{structured descendant trees} \cite[Fig.3.1, p.419, and Fig.3.2, p.422]{Ma4}. {\tiny \begin{figure} \caption{\(2\)-Groups of Coclass \(1\)} \label{fig:2GroupsCoclass1} \end{figure} } The concrete examples \(\mathcal{G}(2,1)\) and \(\mathcal{G}(3,1)\) provide an opportunity to give a \textit{parametrized polycyclic power-commutator presentation} \cite[pp.82--84]{Bl} for the complete coclass tree, mentioned in \S\ \ref{s:Structure} as a benefit of the descendant tree concept and as a consequence of the periodicity of the pruned coclass tree. In both cases, the group \(G\) is generated by two elements \(x,y\) but the presentation contains the series of \textit{higher commutators} \(s_j\), \(2\le j\le n-1=\mathrm{cl}(G)\), starting with the \textit{main commutator} \(s_2=\lbrack y,x\rbrack\). The nilpotency is formally expressed by \(s_n=1\), when the group is of order \(\vert G\vert=p^n\). 
{\tiny \begin{figure} \caption{\(3\)-Groups of Coclass \(1\)} \label{fig:3GroupsCoclass1} \end{figure} } For \(p=2\), there are two parameters \(0\le w,z\le 1\) and the pc-presentation is given by \begin{equation} \label{eqn:ParamPres2Cocl1} \begin{aligned} G^n(z,w)= & \langle x,y,s_2,\ldots,s_{n-1}\mid\\ & x^2=s_{n-1}^w,\ y^2=s_2^{-1}s_{n-1}^z,\ \lbrack s_2,y\rbrack=1,\\ & s_2=\lbrack y,x\rbrack,\ s_j=\lbrack s_{j-1},x\rbrack\text{ for }3\le j\le n-1\rangle \end{aligned} \end{equation} The 2-groups of maximal class, that is of coclass \(1\), form three \textit{periodic infinite sequences}, \begin{itemize} \item the \textit{dihedral} groups, \(D(2^n)=G^n(0,0)\), \(n\ge 3\), forming the mainline (with infinitely capable vertices), \item the \textit{generalized quaternion} groups, \(Q(2^n)=G^n(0,1)\), \(n\ge 3\), which are all terminal vertices, \item the \textit{semidihedral} groups, \(S(2^n)=G^n(1,0)\), \(n\ge 4\), which are also leaves. \end{itemize} For \(p=3\), there are three parameters \(0\le a\le 1\) and \(-1\le w,z\le 1\) and the pc-presentation is given by \begin{equation} \label{eqn:ParamPres3Cocl1} \begin{aligned} G^n_a(z,w)= & \langle x,y,s_2,\ldots,s_{n-1}\mid\\ & x^3=s_{n-1}^w,\ y^3=s_2^{-3}s_3^{-1}s_{n-1}^z,\ \lbrack y,s_2\rbrack=s_{n-1}^a,\\ & s_2=\lbrack y,x\rbrack,\ s_j=\lbrack s_{j-1},x\rbrack\text{ for }3\le j\le n-1\rangle \end{aligned} \end{equation} \(3\)-groups with parameter \(a=0\) possess an abelian maximal subgroup, those with parameter \(a=1\) do not. More precisely, an existing abelian maximal subgroup is unique, except for the two \textit{extra special} groups \(G^3_0(0,0)\) and \(G^3_0(0,1)\), where all four maximal subgroups are abelian. In contrast to any bigger coclass \(r\ge 2\), the coclass graph \(\mathcal{G}(p,1)\) exclusively contains \(p\)-groups \(G\) with abelianization \(G/G^\prime\) of type \((p,p)\), except for its unique isolated vertex. 
The case \(p=2\) is distinguished by the truth of the reverse statement: Any \(2\)-group with abelianization of type \((2,2)\) is of coclass \(1\) (O. Taussky's Theorem \cite[p.83]{Ta}). {\tiny \begin{figure} \caption{\(3\)-Groups of Coclass \(2\) with Abelianization \((3,3)\)} \label{fig:3GrpTyp33Cocl2} \end{figure} } Figure \ref{fig:3GrpTyp33Cocl2} shows the interface between finite \(3\)-groups of coclass \(1\) and \(2\) of type \((3,3)\). \subsection{Coclass \(2\)} \label{ss:CoclassTwo} The genesis of the coclass graph \(\mathcal{G}(p,r)\) with \(r\ge 2\) is not uniform. \(p\)-groups with several distinct abelianizations contribute to its constitution. For coclass \(r=2\), there are essential contributions from groups \(G\) with abelianizations \(G/G^\prime\) of the types \((p,p)\), \((p^2,p)\), \((p,p,p)\), and an isolated contribution by the cyclic group of order \(p^3\): \begin{equation} \label{eqn:Coclass2} \mathcal{G}(p,2)=\mathcal{G}_{(p,p)}(p,2)\dot{\cup}\mathcal{G}_{(p^2,p)}(p,2)\dot{\cup}\mathcal{G}_{(p,p,p)}(p,2)\dot{\cup}\mathcal{G}_{(p^3)}(p,2). \end{equation} \subsubsection{Abelianization of type \((p,p)\)} \label{sss:TypePP} As opposed to \(p\)-groups of coclass \(2\) with abelianization of type \((p^2,p)\) or \((p,p,p)\), which arise as regular descendants of abelian \(p\)-groups of the same types, \(p\)-groups of coclass \(2\) with abelianization of type \((p,p)\) arise as irregular descendants of a non-abelian \(p\)-group of coclass \(1\) and nuclear rank \(2\). For the prime \(p=2\), such groups do not exist at all, since the dihedral group \(\langle 8,3\rangle\) is coclass-settled, which is the deeper reason for Taussky's Theorem. This remarkable fact was observed by G. Bagnera \cite[Part 2, \S\ 4, p.182]{Bg} as early as 1898. For odd primes \(p\ge 3\), the existence of \(p\)-groups of coclass \(2\) with abelianization of type \((p,p)\) is due to the fact that the extra special group \(G^3_0(0,0)\) is not coclass-settled. 
Its nuclear rank equals \(2\), which gives rise to a \textit{bifurcation} of the descendant tree \(\mathcal{T}(G^3_0(0,0))\) into two coclass graphs. The regular component \(\mathcal{T}^1(G^3_0(0,0))\) is a subtree of the unique tree \(\mathcal{T}^1(C_p\times C_p)\) in the coclass graph \(\mathcal{G}(p,1)\). The irregular component \(\mathcal{T}^2(G^3_0(0,0))\) becomes a subgraph \(\mathcal{G}=\mathcal{G}_{(p,p)}(p,2)\) of the coclass graph \(\mathcal{G}(p,2)\) when the connecting edges of depth \(2\) of the irregular immediate descendants of \(G^3_0(0,0)\) are removed. For \(p=3\), this subgraph \(\mathcal{G}\) is drawn in Figure \ref{fig:3GrpTyp33Cocl2}. It has seven top-level vertices of three important kinds, all having order \(243=3^5\), which have been discovered by G. Bagnera \cite[Part 2, \S\ 4, pp.182--183]{Bg}. \begin{itemize} \item Firstly, there are two terminal \textit{Schur \(\sigma\)-groups} \cite{Ag} \(\langle 243,5\rangle\) and \(\langle 243,7\rangle\) in the sporadic part \(\mathcal{G}_0(3,2)\) of the coclass graph \(\mathcal{G}(3,2)\). \item Secondly, the two groups \(G=\langle 243,4\rangle\) and \(G=\langle 243,9\rangle\) are roots of finite trees \(\mathcal{T}^2(G)\) in the sporadic part \(\mathcal{G}_0(3,2)\). (However, since they are not coclass-settled, the complete trees \(\mathcal{T}(G)\) are infinite.) \item And, finally, the three groups \(\langle 243,3\rangle\), \(\langle 243,6\rangle\) and \(\langle 243,8\rangle\) give rise to (infinite) coclass trees, e.g., \(\mathcal{T}^2(\langle 729,40\rangle)\), \(\mathcal{T}^2(\langle 243,6\rangle)\), \(\mathcal{T}^2(\langle 243,8\rangle)\), each having a metabelian mainline, in the coclass graph \(\mathcal{G}(3,2)\). None of these three groups is coclass-settled. See \S\ \ref{s:PeriodicBifurcations}. 
\end{itemize} Displaying additional information on kernels and targets of Artin transfers \cite{Ar2}, we can draw these trees as \textit{structured descendant trees} \cite[Fig.3.5, p.439, Fig.3.6, p.442, and Fig.3.7, p.443]{Ma4}. \begin{definition} \label{dfn:SchurSigmaGroup} Generally, a \textit{Schur group} (called a \textit{closed} group by I. Schur, who coined the concept) is a pro-\(p\) group \(G\) whose relation rank \(r(G)=\mathrm{dim}_{\mathbb{F}_p}(\mathrm{H}^2(G,\mathbb{F}_p))\) coincides with its generator rank \(d(G)=\mathrm{dim}_{\mathbb{F}_p}(\mathrm{H}^1(G,\mathbb{F}_p))\). A \textit{\(\sigma\)-group} is a pro-\(p\) group \(G\) which possesses an automorphism \(\sigma\in\mathrm{Aut}(G)\) inducing the inversion \(x\mapsto x^{-1}\) on its abelianization \(G/G^\prime\). A \textit{Schur \(\sigma\)-group} \cite{Ag,BBH,BuMa,KoVe} is a Schur group \(G\) which is also a \(\sigma\)-group and has a finite abelianization \(G/G^\prime\). \end{definition} It should be pointed out that \(\langle 243,3\rangle\) is not the root of a coclass tree, since its immediate descendant \(\langle 729,40\rangle=B\), which is root of a coclass tree with metabelian mainline vertices, has two siblings \(\langle 729,35\rangle=I\), resp. \(\langle 729,34\rangle=H\), which give rise to a single, resp. three, coclass tree(s) with non-metabelian mainline vertices having cyclic centres of order \(3\) and branches of considerable complexity, but nevertheless of bounded depth \(5\). 
\renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Quotients of the Groups \(G=G(f,g,h)\)} \label{tbl:QuotientsBQUGA} \begin{center} \begin{tabular}{|c||c|c|c|c|} \hline parameters & abelianization & class-\(2\) quotient & class-\(3\) quotient & class-\(4\) quotient \\ \((f,g,h)\) & \(G/G^\prime\) & \(G/\gamma_3(G)\) & \(G/\gamma_4(G)\) & \(G/\gamma_5(G)\) \\ \hline \((0,1,0)\) & \((3,3)\) & \(\langle 27,3\rangle\) & \(\langle 243,3\rangle\) & \(\langle 729,40\rangle\) \\ \((0,1,2)\) & \((3,3)\) & \(\langle 27,3\rangle\) & \(\langle 243,6\rangle\) & \(\langle 729,49\rangle\) \\ \((1,1,2)\) & \((3,3)\) & \(\langle 27,3\rangle\) & \(\langle 243,8\rangle\) & \(\langle 729,54\rangle\) \\ \hline \((1,0,0)\) & \((9,3)\) & \(\langle 81,3\rangle\) & \(\langle 243,15\rangle\) & \(\langle 729,79\rangle\) \\ \((0,0,1)\) & \((9,3)\) & \(\langle 81,3\rangle\) & \(\langle 243,17\rangle\) & \(\langle 729,84\rangle\) \\ \hline \((0,0,0)\) & \((3,3,3)\) & \(\langle 81,12\rangle\) & \(\langle 243,53\rangle\) & \(\langle 729,395\rangle\) \\ \hline \end{tabular} \end{center} \end{table} \subsubsection{Pro-\(3\) groups of coclass \(2\) with non-trivial centre} \label{sss:Pro3Cocl2} B. Eick, C.R. Leedham-Green, M.F. Newman and E.A. O'Brien \cite[\S\ 4, Thm.4.1]{ELNO} have constructed a family of infinite pro-3 groups of coclass \(2\) having a non-trivial centre of order \(3\). The members are characterized by three parameters \((f,g,h)\). Their finite quotients generate all mainline vertices with bicyclic centres of type \((3,3)\) of six coclass trees in the coclass graph \(\mathcal{G}(3,2)\). 
The association of parameters to the roots of these six trees is given in Table \ref{tbl:QuotientsBQUGA}, the tree diagrams are indicated in Figures \ref{fig:3GrpTyp33Cocl2} and \ref{fig:3GrpTyp93Cocl2}, and the parametrized pro-\(3\) presentation is given by \begin{equation} \label{eqn:ParamPres3Cocl2} \begin{aligned}G(f,g,h)= & \langle a,t,z\mid\\ & a^3=z^f,\ \lbrack t,t^a\rbrack=z^g,\ t^{1+a+a^2}=z^h,\\ & z^3=1,\ \lbrack z,a\rbrack=1,\ \lbrack z,t\rbrack=1\rangle \end{aligned} \end{equation} {\tiny \begin{figure} \caption{\(3\)-Groups of Coclass \(2\) with Abelianization \((9,3)\)} \label{fig:3GrpTyp93Cocl2} \end{figure} } Figure \ref{fig:3GrpTyp93Cocl2} shows some finite \(3\)-groups with coclass \(2\) and type \((9,3)\). \subsubsection{Abelianization of type \((p^2,p)\)} \label{sss:TypeP2P} For \(p=3\), the top levels of the subtree \(\mathcal{T}^2(\langle 27,2\rangle)\) of the coclass graph \(\mathcal{G}(3,2)\) are drawn in Figure \ref{fig:3GrpTyp93Cocl2}. The most important vertices of this tree are the eight siblings sharing the common parent \(\langle 81,3\rangle\), which fall into three kinds. \begin{itemize} \item Firstly, there are three leaves \(\langle 243,20\rangle\), \(\langle 243,19\rangle\), \(\langle 243,16\rangle\) having cyclic centre of order \(9\), and a single leaf \(\langle 243,18\rangle\) with bicyclic centre of type \((3,3)\). \item Secondly, the group \(G=\langle 243,14\rangle\) is root of a finite tree \(\mathcal{T}(G)=\mathcal{T}^2(G)\). \item And, finally, the three groups \(\langle 243,13\rangle\), \(\langle 243,15\rangle\) and \(\langle 243,17\rangle\) give rise to infinite coclass trees, e.g., \(\mathcal{T}^2(\langle 2187,319\rangle)\), \(\mathcal{T}^2(\langle 243,15\rangle)\), \(\mathcal{T}^2(\langle 243,17\rangle)\), each having a metabelian mainline, the first with cyclic centres of order \(3\), the second and third with bicyclic centres of type \((3,3)\). 
\end{itemize} Here, it should be emphasized that \(\langle 243,13\rangle\) is not the root of a coclass tree, since aside from its descendant \(\langle 2187,319\rangle\), which is the root of a coclass tree with metabelian mainline vertices, it possesses five further descendants which give rise to coclass trees with non-metabelian mainline vertices having cyclic centres of order \(3\) and branches of considerable complexity, in part even with \textit{unbounded depth} \cite[Thm.4.2(a--b)]{ELNO}. {\tiny \begin{figure} \caption{\(2\)-Groups of Coclass \(3\) with Abelianization \((2,2,2)\)} \label{fig:2GrpTyp222Cocl3} \end{figure} } \subsubsection{Abelianization of type \((p,p,p)\)} \label{sss:TypePPP} For \(p=2\), resp. \(p=3\), there exists a unique coclass tree with \(p\)-groups of type \((p,p,p)\) in the coclass graph \(\mathcal{G}(p,2)\). Its root is the elementary abelian \(p\)-group of type \((p,p,p)\), that is, \(\langle 8,5\rangle\), resp. \(\langle 27,5\rangle\). This unique tree corresponds to the pro-\(2\) group of the family \(\#59\) by M.F. Newman and E.A. O'Brien \cite[Appendix A, no.59, p.153, Appendix B, Tbl.59, p.165]{NmOb}, resp. the pro-\(3\) group given by the parameters \((f,g,h)=(0,0,0)\) in Table \ref{tbl:QuotientsBQUGA}. For \(p=2\), the tree is indicated in Figure \ref{fig:2GrpTyp222Cocl3}, which shows some finite \(2\)-groups of coclass \(2,3,4\) and type \((2,2,2)\). \subsection{Coclass \(3\)} \label{ss:CoclassThree} Here again, \(p\)-groups with several distinct abelianizations contribute to the constitution of the coclass graph \(\mathcal{G}(p,3)\). There are regular, resp. irregular, essential contributions from groups \(G\) with abelianizations \(G/G^\prime\) of the types \((p^3,p)\), \((p^2,p^2)\), \((p^2,p,p)\), \((p,p,p,p)\), resp. \((p,p)\), \((p^2,p)\), \((p,p,p)\), and an isolated contribution by the cyclic group of order \(p^4\).
\subsubsection{Abelianization of type \((p,p,p)\)} \label{sss:TypePxPxP} Since the elementary abelian \(p\)-group \(C_p\times C_p\times C_p\) of rank \(3\), that is, \(\langle 8,5\rangle\), resp. \(\langle 27,5\rangle\), for \(p=2\), resp. \(p=3\), is not coclass-settled, it gives rise to a multifurcation. The regular component \(\mathcal{T}^2(C_p\times C_p\times C_p)\) has been described in the section about coclass \(2\). The irregular component \(\mathcal{T}^3(C_p\times C_p\times C_p)\) becomes a subgraph \(\mathcal{G}=\mathcal{G}_{(p,p,p)}(p,3)\) of the coclass graph \(\mathcal{G}(p,3)\) when the connecting edges of depth \(2\) of the irregular immediate descendants of \(C_p\times C_p\times C_p\) are removed. For \(p=2\), this subgraph \(\mathcal{G}\) is contained in Figure \ref{fig:2GrpTyp222Cocl3}. It has nine top level vertices of order \(32=2^5\) which can be divided into terminal and capable vertices: \begin{itemize} \item the groups \(\langle 32,32\rangle\) and \(\langle 32,33\rangle\) are leaves, \item the five groups \(\langle 32,27..31\rangle\) and the two groups \(\langle 32,34..35\rangle\) are infinitely capable. \end{itemize} The trees arising from the capable vertices are associated with infinite pro-\(2\) groups by M.F. Newman and E.A. 
O'Brien \cite[Appendix A, no.73--79, pp.154--155, and Appendix B, Tbl.73--79, pp.167--168]{NmOb} in the following manner:\\ \(\langle 32,28\rangle\) gives rise to\\ \(\mathcal{T}^3(\langle 64,140\rangle)\) associated with family \(\#73\), and \(\mathcal{T}^3(\langle 64,147\rangle)\) associated with family \(\#74\),\\ \(\mathcal{T}^3(\langle 32,29\rangle)\) is associated with family \(\#75\),\\ \(\mathcal{T}^3(\langle 32,30\rangle)\) is associated with family \(\#76\),\\ \(\mathcal{T}^3(\langle 32,31\rangle)\) is associated with family \(\#77\),\\ \(\langle 32,34\rangle\) gives rise to\\ \(\mathcal{T}^3(\langle 64,174\rangle)\) associated with family \(\#78\) (see \S\ \ref{s:PeriodicBifurcations}), and finally\\ \(\mathcal{T}^3(\langle 32,35\rangle)\) is associated with family \(\#79\) (see Figure \ref{fig:2GrpTyp222Cocl3}).\\ The roots of the coclass trees \(\mathcal{T}^4(\langle 128,438\rangle)\) in Figure \ref{fig:2GrpTyp222Cocl3} and \(\mathcal{T}^4(\langle 128,444\vert 445\rangle)\) in Figure \ref{fig:PeriodicBifurcation222} are siblings. 
\renewcommand{\arraystretch}{1.0} \begin{table}[ht] \caption{Class-\(2\) Quotients \(Q\) of certain metabelian \(2\)-Groups \(G\) of Type \((2,2,2)\)} \label{tbl:Quotients222} \begin{center} \begin{tabular}{|c||c|c|c|c|c|} \hline SmallGroups & Hall--Senior & Schur multiplier & \(2\)-rank of \(G^\prime\) & \(4\)-rank of \(G^\prime\) & maximum of \\ identifier of \(Q\) & classification of \(Q\) & \(\mathcal{M}(Q)\) & \(r_2(G^\prime)\) & \(r_4(G^\prime)\) & \(r_2(H_i/H_i^\prime)\) \\ \hline \(\langle 32,32\rangle\) & \(32.040\) & \((2)\) & \(2\) & \(0\) & \(2\) \\ \(\langle 32,33\rangle\) & \(32.041\) & \((2)\) & \(2\) & \(0\) & \(2\) \\ \hline \(\langle 32,29\rangle\) & \(32.037\) & \((2,2)\) & \(2\) & \(1\) & \(3\) \\ \(\langle 32,30\rangle\) & \(32.038\) & \((2,2)\) & \(2\) & \(1\) & \(3\) \\ \(\langle 32,35\rangle\) & \(32.035\) & \((2,2)\) & \(2\) & \(1\) & \(3\) \\ \hline \(\langle 32,28\rangle\) & \(32.036\) & \((2,2,2)\) & \(2\) & \(2\) & \(3\) \\ \(\langle 32,27\rangle\) & \(32.033\) & \((2,2,2,2)\) & \(3\) & \(2\) or \(3\) & \(4\) \\ \hline \end{tabular} \end{center} \end{table} \subsubsection{Hall--Senior classification} \label{sss:HallSenior} Seven of these nine top-level vertices have been investigated by E. Benjamin, F. Lemmermeyer and C. Snyder \cite[\S\ 2, Tbl.1]{BLS} with respect to their occurrence as class-\(2\) quotients \(Q=G/\gamma_3(G)\) of larger metabelian \(2\)-groups \(G\) of type \((2,2,2)\) and coclass \(3\); these are exactly the members of the descendant trees of the seven vertices. These authors use the classification of \(2\)-groups by M. Hall and J.K. Senior \cite{HaSn}, which is put in correspondence with the SmallGroups Library \cite{BEO1,BEO2} in Table \ref{tbl:Quotients222}. The complexity of the descendant trees of these seven vertices increases with the \(2\)-ranks and \(4\)-ranks indicated in Table \ref{tbl:Quotients222}, where the maximal subgroups of index \(2\) in \(G\) are denoted by \(H_i\), for \(1\le i\le 7\).
\section{History of descendant trees} \label{s:HistoryDescTrees} Descendant trees with central quotients as parents (P1) are implicit in P. Hall's 1940 paper \cite{Hl} about isoclinism of groups. Trees with last non-trivial lower central quotients as parents (P2) were first presented by C. R. Leedham-Green at the International Congress of Mathematicians in Vancouver, 1974 \cite{Nm}. The first extensive tree diagrams were drawn manually by J. A. Ascione, G. Havas and Leedham-Green (1977) \cite{AHL}, by Ascione (1979) \cite{As1} and by B. Nebelung (1989) \cite{Ne}. In the former two cases, the parent definition by means of the lower exponent-\(p\) central series (P3) was adopted in view of computational advantages; in the latter case, where the focus was on theoretical aspects, the parents were taken with respect to the usual lower central series (P2). The kernels and targets of Artin transfer homomorphisms have recently turned out to be compatible with parent-descendant relations between finite \(p\)-groups and can favourably be used to endow descendant trees with additional structure \cite{Ma4}. \section{The construction: \(p\)-group generation algorithm} \label{s:Construction} The \textit{\(p\)-group generation algorithm} by M.F. Newman \cite{Nm2} and E.A. O'Brien \cite{Ob,HEO} is a recursive process for constructing the descendant tree of an assigned finite \(p\)-group which is taken as the root of the tree. It is discussed in some detail in \S\S\ \ref{s:LowerExponentP}--\ref{s:SchurMpl}. \section{Lower exponent-\(p\) central series} \label{s:LowerExponentP} For a finite \(p\)-group \(G\), the \textit{lower exponent-\(p\) central series} (briefly lower \(p\)-central series) of \(G\) is a descending series \((P_j(G))_{j\ge 0}\) of characteristic subgroups of \(G\), defined recursively by \begin{equation} \label{eqn:RecursiveLowerExpCS} P_0(G):=G\text{ and }P_j(G):=\lbrack P_{j-1}(G),G\rbrack\cdot P_{j-1}(G)^p,\text{ for }j\ge 1.
\end{equation} \noindent Since any non-trivial finite \(p\)-group \(G>1\) is nilpotent, there exists an integer \(c\ge 1\) such that \(P_{c-1}(G)>P_c(G)=1\) and \(\mathrm{cl}_p(G):=c\) is called the \textit{exponent-\(p\) class} (briefly \(p\)-class) of \(G\). Only the trivial group \(1\) has \(\mathrm{cl}_p(1)=0\). Generally, for any finite \(p\)-group \(G\), its \(p\)-class can be defined as \(\mathrm{cl}_p(G):=\min\lbrace c\ge 0\mid P_c(G)=1\rbrace\). The complete lower \(p\)-central series of \(G\) is therefore given by \begin{equation} \label{eqn:CompleteLowerExpCS} G=P_0(G)>\Phi(G)=P_1(G)>P_2(G)>\cdots>P_{c-1}(G)>P_c(G)=1, \end{equation} \noindent since \(P_1(G)=\lbrack P_0(G),G\rbrack\cdot P_0(G)^p=\lbrack G,G\rbrack\cdot G^p=\Phi(G)\) is the \textit{Frattini subgroup} of \(G\). For the convenience of the reader and for pointing out the shifted numeration, we recall that the (usual) \textit{lower central series} of \(G\) is also a descending series \((\gamma_j(G))_{j\ge 1}\) of characteristic subgroups of \(G\), defined recursively by \begin{equation} \label{eqn:RecursiveLowerCS} \gamma_1(G):=G\text{ and }\gamma_j(G):=\lbrack\gamma_{j-1}(G),G\rbrack,\text{ for }j\ge 2. \end{equation} \noindent As above, for any non-trivial finite \(p\)-group \(G>1\), there exists an integer \(c\ge 1\) such that \(\gamma_c(G)>\gamma_{c+1}(G)=1\) and \(\mathrm{cl}(G):=c\) is called the \textit{nilpotency class} of \(G\), whereas \(c+1\) is called the \textit{index of nilpotency} of \(G\). Only the trivial group \(1\) has \(\mathrm{cl}(1)=0\). Thus, the complete lower central series of \(G\) is given by \begin{equation} \label{eqn:CompleteLowerCS} G=\gamma_1(G)>G^{\prime}=\gamma_2(G)>\gamma_3(G)>\cdots>\gamma_c(G)>\gamma_{c+1}(G)=1, \end{equation} \noindent since \(\gamma_2(G)=\lbrack\gamma_1(G),G\rbrack=\lbrack G,G\rbrack=G^{\prime}\) is the \textit{commutator subgroup} or \textit{derived subgroup} of \(G\). 
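Both recursions (\ref{eqn:RecursiveLowerExpCS}) and (\ref{eqn:RecursiveLowerCS}) can be evaluated mechanically for small groups. The following sketch computes both series for a toy example; the choice of the Heisenberg group of order \(27\) (upper unitriangular \(3\times 3\) matrices over \(\mathbb{F}_3\)) and the brute-force closure routine are illustrative assumptions, not data from the text:

```python
# Minimal sketch: lower p-central series P_j and lower central series
# gamma_j for the Heisenberg group of order 27 (an illustrative choice).
from itertools import product

p = 3

def mul(u, v):
    # (a, b, c) encodes the matrix [[1, a, c], [0, 1, b], [0, 0, 1]] mod p
    a1, b1, c1 = u
    a2, b2, c2 = v
    return ((a1 + a2) % p, (b1 + b2) % p, (c1 + c2 + a1 * b2) % p)

def inv(u):
    a, b, c = u
    return (-a % p, -b % p, (a * b - c) % p)

def comm(x, y):
    # [x, y] = x^-1 y^-1 x y
    return mul(mul(inv(x), inv(y)), mul(x, y))

def power(u, n):
    r = (0, 0, 0)
    for _ in range(n):
        r = mul(r, u)
    return r

def closure(gens):
    # subgroup generated by gens, by breadth-first multiplication
    S = {(0, 0, 0)}
    frontier = set(gens)
    while frontier:
        S |= frontier
        frontier = {mul(s, g) for s in S for g in gens} - S
    return S

G = set(product(range(p), repeat=3))

def next_p_central(N):
    # P_j = [P_{j-1}, G] * P_{j-1}^p   (Equation RecursiveLowerExpCS)
    return closure({comm(x, g) for x in N for g in G} | {power(x, p) for x in N})

def next_lower_central(N):
    # gamma_j = [gamma_{j-1}, G]       (Equation RecursiveLowerCS)
    return closure({comm(x, g) for x in N for g in G})

P = [G]
while len(P[-1]) > 1:
    P.append(next_p_central(P[-1]))
gamma = [G]
while len(gamma[-1]) > 1:
    gamma.append(next_lower_central(gamma[-1]))

print([len(X) for X in P])      # orders along the lower p-central series
print([len(X) for X in gamma])  # orders along the lower central series
print("cl_p =", len(P) - 1, " cl =", len(gamma) - 1)
```

For this particular group both series have orders \(27>3>1\), so \(\mathrm{cl}(G)=\mathrm{cl}_p(G)=2\); in general the two series differ, and only the inequality of rule (R1) holds.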
The following \textit{rules} should be remembered for the exponent-\(p\) class: \noindent Let \(G\) be a finite \(p\)-group. \begin{enumerate}[({R}1)] \item \(\mathrm{cl}(G)\le\mathrm{cl}_p(G)\), since the \(\gamma_j(G)\) descend more quickly than the \(P_j(G)\). \item \(\vartheta\in\mathrm{Hom}(G,\tilde{G})\), for some group \(\tilde{G}\) \(\Rightarrow\) \(\vartheta(P_j(G))=P_j(\vartheta(G))\), for any \(j\ge 0\). \item For any \(c\ge 0\), the conditions \(N\unlhd G\) and \(\mathrm{cl}_p(G/N)=c\) imply \(P_c(G)\le N\). \item For any \(c\ge 0\), \(\mathrm{cl}_p(G)=c\) \(\Rightarrow\) \(\mathrm{cl}_p(G/P_k(G))=\min(k,c)\), for all \(k\ge 0\), in particular, \(\mathrm{cl}_p(G/P_k(G))=k\), for all \(0\le k\le c\). \end{enumerate} We point out that every non-trivial finite \(p\)-group \(G>1\) defines a \textit{maximal path} with respect to the parent definition (P3), consisting of \(c\) edges, \begin{equation} \label{eqn:MaxPath} \begin{aligned} G=G/1=G/P_c(G)\to\pi(G)=G/P_{c-1}(G)\to\pi^2(G)=G/P_{c-2}(G)\to\cdots\\ \cdots\to\pi^{c-1}(G)=G/P_1(G)\to\pi^c(G)=G/P_0(G)=G/G=1 \end{aligned} \end{equation} \noindent and ending in the trivial group \(\pi^c(G)=1\). The last but one quotient of the maximal path of \(G\) is the elementary abelian \(p\)-group \(\pi^{c-1}(G)=G/P_1(G)\simeq C_p^d\) of rank \(d=d(G)\), where \(d(G)=\dim_{\mathbb{F}_p}(H^1(G,\mathbb{F}_p))\) denotes the generator rank of \(G\). \section{\(p\)-covering group, \(p\)-multiplicator and nucleus} \label{s:CoveringGroup} Let \(G\) be a finite \(p\)-group with \(d\) \textit{generators}. Our goal is to compile a complete list of pairwise non-isomorphic immediate descendants of \(G\). It turned out that all immediate descendants can be obtained as quotients of a certain extension \(G^{\ast}\) of \(G\) which is called the \textit{\(p\)-covering group} of \(G\) and can be constructed in the following manner. 
We can certainly find a \textit{presentation} of \(G\) in the form of an exact sequence \begin{equation} \label{eqn:FreePresentation} 1\longrightarrow R\longrightarrow F\longrightarrow G\longrightarrow 1, \end{equation} \noindent where \(F\) denotes the free group with \(d\) generators and \(\vartheta:\ F\longrightarrow G\) is an epimorphism with kernel \(R:=\ker(\vartheta)\). Then \(R\triangleleft F\) is a normal subgroup of \(F\) consisting of the defining \textit{relations} for \(G\simeq F/R\). For elements \(r\in R\) and \(f\in F\), the conjugate \(f^{-1}rf\in R\) and thus also the commutator \(\lbrack r,f\rbrack=r^{-1}f^{-1}rf\in R\) are contained in \(R\). Consequently, \(R^{\ast}:=\lbrack R,F\rbrack\cdot R^p\) is a characteristic subgroup of \(R\), and the \textit{\(p\)-multiplicator} \(R/R^{\ast}\) of \(G\) is an elementary abelian \(p\)-group, since \begin{equation} \label{eqn:Multiplicator} \lbrack R,R\rbrack\cdot R^p\le\lbrack R,F\rbrack\cdot R^p=R^{\ast}\unlhd R. \end{equation} \noindent Now we can define the \(p\)-covering group of \(G\) by \begin{equation} \label{eqn:CoveringGroup} G^{\ast}:=F/R^{\ast}, \end{equation} \noindent and the exact sequence \begin{equation} \label{eqn:Extension} 1\longrightarrow R/R^{\ast}\longrightarrow F/R^{\ast}\longrightarrow F/R\longrightarrow 1 \end{equation} \noindent shows that \(G^{\ast}\) is an extension of \(G\) by the elementary abelian \(p\)-multiplicator. We call \begin{equation} \label{eqn:MultiplicatorRank} \mu(G):=\dim_{\mathbb{F}_p}(R/R^{\ast}) \end{equation} \noindent the \textit{\(p\)-multiplicator rank} of \(G\). Let us assume now that the assigned finite \(p\)-group \(G=F/R\) is of \(p\)-class \(\mathrm{cl}_p(G)=c\). 
Then the conditions \(R\unlhd F\) and \(\mathrm{cl}_p(F/R)=c\) imply \(P_c(F)\le R\), according to the rule (R3), and we can define the \textit{nucleus} of \(G\) by \begin{equation} \label{eqn:Nucleus} P_c(G^{\ast})=P_c(F)\cdot R^{\ast}/R^{\ast}\le R/R^{\ast} \end{equation} \noindent as a subgroup of the \(p\)-multiplicator. Consequently, the \textit{nuclear rank} \begin{equation} \label{eqn:NuclearRank} \nu(G):=\dim_{\mathbb{F}_p}(P_c(G^{\ast}))\le\mu(G) \end{equation} \noindent of \(G\) is bounded from above by the \(p\)-multiplicator rank. \section{Allowable subgroups of the \(p\)-multiplicator} \label{s:Allowable} As before, let \(G\) be a finite \(p\)-group with \(d\) generators. Any \(p\)-elementary abelian central extension \(1\to Z\to H\to G\to 1\) of \(G\) by a \(p\)-elementary abelian subgroup \(Z\le\zeta_1(H)\) such that \(d(H)=d(G)=d\) is a quotient of the \(p\)-covering group \(G^{\ast}\) of \(G\). The reason is that there exists an epimorphism \(\psi:\ F\to H\) such that \(\vartheta=\omega\circ\psi\), where \(\omega:\ H\to G=H/Z\) denotes the canonical projection. Consequently, we have \(R=\ker(\vartheta)=\ker(\omega\circ\psi)=\psi^{-1}(Z)\) and thus \(\psi(R)=\psi(\psi^{-1}(Z))=Z\). Further, \(\psi(R^p)\le Z^p=1\), since \(Z\) is \(p\)-elementary, and \(\psi(\lbrack R,F\rbrack)\le\lbrack Z,Z\rbrack=1\), since \(Z\) is central. Together this shows that \(\psi(R^{\ast})=\psi(\lbrack R,F\rbrack\cdot R^p)=1\) and thus \(\psi\) induces the desired epimorphism \(\psi^\ast:\ G^{\ast}\to H\) such that \(H\simeq G^{\ast}/\ker(\psi^\ast)\). In particular, an immediate descendant \(H\) of \(G\) is a \(p\)-elementary abelian central extension \begin{equation} \label{eqn:CentralExtension} 1\to P_{c-1}(H)\to H\to G\to 1 \end{equation} \noindent of \(G\), since \[1=P_c(H)=\lbrack P_{c-1}(H),H\rbrack\cdot P_{c-1}(H)^p\text{ implies }P_{c-1}(H)^p=1\text{ and }P_{c-1}(H)\le\zeta_1(H),\] where \(c=\mathrm{cl}_p(H)\). 
A subgroup \(M/R^{\ast}\le R/R^{\ast}\) of the \(p\)-multiplicator of \(G\) is called \textit{allowable} if it is given by the kernel \(M/R^{\ast}=\ker(\psi^\ast)\) of an epimorphism \(\psi^\ast:\ G^{\ast}\to H\) onto an immediate descendant \(H\) of \(G\). An equivalent characterization is that \(1<M/R^{\ast}<R/R^{\ast}\) is a proper subgroup which \textit{supplements the nucleus} \begin{equation} \label{eqn:SupplementNucleus} (M/R^{\ast})\cdot(P_c(F)\cdot R^{\ast}/R^{\ast})=R/R^{\ast}. \end{equation} Therefore, the first part of our goal to compile a list of all immediate descendants of \(G\) is achieved once we have constructed all allowable subgroups of \(R/R^{\ast}\) which supplement the nucleus \(P_c(G^{\ast})=P_c(F)\cdot R^{\ast}/R^{\ast}\), where \(c=\mathrm{cl}_p(G)\). However, in general the list \begin{equation} \label{eqn:DescList} \lbrace F/M\quad\mid\quad M/R^{\ast}\le R/R^{\ast}\text{ is allowable }\rbrace, \end{equation} \noindent where \(G^{\ast}/(M/R^{\ast})=(F/R^{\ast})/(M/R^{\ast})\simeq F/M\), will be redundant, due to isomorphisms \(F/M_1\simeq F/M_2\) among the immediate descendants. \section{Orbits under extended automorphisms} \label{s:Orbits} Two allowable subgroups \(M_1/R^{\ast}\) and \(M_2/R^{\ast}\) are called \textit{equivalent} if the corresponding immediate descendants \(F/M_1\simeq F/M_2\) of \(G\) are isomorphic. Such an isomorphism \(\varphi:\ F/M_1\to F/M_2\) between immediate descendants of \(G=F/R\) with \(c=\mathrm{cl}_p(G)\) has the property that \[\varphi(R/M_1)=\varphi(P_c(F/M_1))=P_c(\varphi(F/M_1))=P_c(F/M_2)=R/M_2\] and thus induces an automorphism \(\alpha\in\mathrm{Aut}(G)\) of \(G\) which can be extended to an automorphism \(\alpha^\ast\in\mathrm{Aut}(G^\ast)\) of the \(p\)-covering group \(G^\ast=F/R^\ast\) of \(G\). The restriction of this \textit{extended automorphism} \(\alpha^\ast\) to the \(p\)-multiplicator \(R/R^\ast\) of \(G\) is determined uniquely by \(\alpha\).
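The supplement condition of Equation (\ref{eqn:SupplementNucleus}) is plain linear algebra over \(\mathbb{F}_p\) and can be checked by brute force on small examples before any orbit computation. In the sketch below, the rank-\(3\) multiplicator over \(\mathbb{F}_2\) and the one-dimensional nucleus are assumptions chosen for illustration, not data from the text:

```python
# Toy enumeration of candidate kernels M/R* <= R/R*: proper F_2-subspaces
# of the p-multiplicator that supplement a given nucleus (all concrete
# data here is an illustrative assumption).
from itertools import combinations, product

V = list(product(range(2), repeat=3))  # the p-multiplicator as F_2^3

def span(vectors):
    # F_2-span as a frozenset of vectors (valid for p = 2 only)
    S = {(0, 0, 0)}
    for v in vectors:
        S |= {tuple((s[i] + v[i]) % 2 for i in range(3)) for s in S}
    return frozenset(S)

# every subspace of F_2^3 is spanned by at most 3 vectors
subspaces = {span(c) for r in range(4) for c in combinations(V, r)}

nucleus = span([(0, 0, 1)])  # stand-in for P_c(G*) inside R/R*
whole = span(V)

allowable = [M for M in subspaces
             if M != whole                               # proper subgroup
             and span(list(M) + list(nucleus)) == whole]  # supplements nucleus

print(len(subspaces), len(allowable))  # 16 subspaces, 4 of them allowable
```

In this toy situation each of the four supplements has index \(2\) in the multiplicator, so every corresponding immediate descendant would have step size \(s=1\); the permutation group \(P\) discussed next would then act on these four candidates.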
Since \[\alpha^\ast(M/R^{\ast})\cdot P_c(F/R^{\ast})=\alpha^\ast\lbrack M/R^{\ast}\cdot P_c(F/R^{\ast})\rbrack=\alpha^\ast(R/R^\ast)=R/R^\ast,\] according to the rule (R2), each extended automorphism \(\alpha^\ast\in\mathrm{Aut}(G^\ast)\) induces a \textit{permutation} \(\tilde{\alpha}\) of the allowable subgroups \(M/R^{\ast}\le R/R^{\ast}\). We define \begin{equation} \label{eqn:PermGroup} P:=\langle\tilde{\alpha}\mid\alpha\in\mathrm{Aut}(G)\rangle \end{equation} \noindent to be the \textit{permutation group} generated by all permutations induced by automorphisms of \(G\). Then the map \(\mathrm{Aut}(G)\to P\), \(\alpha\mapsto\tilde{\alpha}\), is an epimorphism and the equivalence classes of allowable subgroups \(M/R^{\ast}\le R/R^{\ast}\) are precisely the \textit{orbits} of allowable subgroups \textit{under the action of} the permutation group \(P\). Eventually, our goal of compiling a list \(\lbrace F/M_i\mid 1\le i\le N\rbrace\) of all immediate descendants of \(G\) will be achieved once we select a representative \(M_i/R^{\ast}\) for each of the \(N\) orbits of allowable subgroups of \(R/R^{\ast}\) under the action of \(P\). This is precisely what the \textit{\(p\)-group generation algorithm} does in a single step of the recursive procedure for constructing the descendant tree of an assigned root. \section{Capable \(p\)-groups and step sizes} \label{s:StepSizes} We recall from \S\ \ref{s:TreeDiagram} that a finite \(p\)-group \(G\) is called \textit{capable} (or \textit{extendable}) if it possesses at least one immediate descendant; otherwise it is called \textit{terminal} (or a \textit{leaf}). As mentioned in \S\ \ref{s:Multifurcation} already, the nuclear rank \(\nu(G)\) of \(G\) decides the capability of \(G\): \begin{itemize} \item \(G\) is terminal if and only if \(\nu(G)=0\). \item \(G\) is capable if and only if \(\nu(G)\ge 1\).
\end{itemize} In the case of capability, \(G=F/R\) has immediate descendants of \(\nu=\nu(G)\) different \textit{step sizes} \(1\le s\le\nu\), depending on the index \begin{equation} \label{eqn:IndexOfAllowable} (R/R^\ast:M/R^\ast)=p^s \end{equation} \noindent of the corresponding allowable subgroup \(M/R^\ast\) in the \(p\)-multiplicator \(R/R^\ast\). If \(G\) is of order \(\vert G\vert=p^n\), then an immediate descendant of step size \(s\) is of order \[\#(F/M)=(F/R^\ast:M/R^\ast)=(F/R^\ast:R/R^\ast)\cdot (R/R^\ast:M/R^\ast)=\vert G\vert\cdot p^s=p^n\cdot p^s=p^{n+s}.\] For the related phenomenon of \textit{multifurcation} of a descendant tree at a vertex \(G\) with nuclear rank \(\nu(G)\ge 2\), see \S\ \ref{s:Multifurcation} on multifurcation and coclass graphs. The \(p\)-group generation algorithm provides the flexibility to restrict the construction of immediate descendants to those of a single fixed step size \(1\le s\le\nu\), which is very convenient in the case of huge descendant numbers (see the next section). \section{Numbers of immediate descendants} \label{s:DescNumbers} We denote the \textit{number of all immediate descendants}, resp. \textit{of immediate descendants of step size \(s\)}, of \(G\) by \(N\), resp. \(N_s\). Then we have \(N=\sum_{s=1}^\nu\,N_s\). As concrete examples, we present some interesting finite metabelian \(p\)-groups with extensive sets of immediate descendants, using the SmallGroups identifiers and additionally pointing out the \textit{numbers \(0\le C_s\le N_s\) of capable immediate descendants} in the usual format \begin{equation} \label{eqn:NumbersOfDesc} (N_1/C_1;\ldots;N_\nu/C_\nu) \end{equation} \noindent as given by actual implementations of the \(p\)-group generation algorithm in the computational algebra systems GAP and MAGMA. These invariants completely determine the local structure of the descendant tree \(\mathcal{T}(G)\). First, let \(p=2\). We begin with groups having abelianization of type \((2,2,2)\).
See Figure \ref{fig:2GrpTyp222Cocl3}. \begin{itemize} \item The group \(\langle 32,35\rangle\) of coclass \(3\) has ranks \(\nu=1\), \(\mu=5\) and descendant numbers \((4/1)\), \(N=4\). See \S\ \ref{s:PeriodicBifurcations}. \item The group \(\langle 32,34\rangle\) of coclass \(3\) has ranks \(\nu=2\), \(\mu=6\) and descendant numbers \((6/1;19/6)\), \(N=25\). See \S\ \ref{s:PeriodicBifurcations}. \item The group \(\langle 32,27\rangle\) of coclass \(3\) has ranks \(\nu=3\), \(\mu=7\) and\\ descendant numbers \((12/2;70/25;104/85)\), \(N=186\). \end{itemize} Next, let \(p=3\). We consider groups having abelianization of type \((3,3)\). See Figure \ref{fig:3GrpTyp33Cocl2}. \begin{itemize} \item The group \(\langle 27,3\rangle\) of coclass \(1\) has ranks \(\nu=2\), \(\mu=4\) and descendant numbers \((4/1;7/5)\), \(N=11\). \item The group \(\langle 243,3\rangle=\langle 27,3\rangle-\#2;1\) of coclass \(2\) has ranks \(\nu=2\), \(\mu=4\) and descendant numbers \((10/6;15/15)\), \(N=25\). \item One of its immediate descendants, the group \(B=\langle 729,40\rangle=\langle 243,3\rangle-\#1;7\), has ranks \(\nu=2\), \(\mu=5\) and descendant numbers \((16/2;27/4)\), \(N=43\). \end{itemize} In contrast, groups with abelianization of type \((3,3,3)\) partially lie beyond the limits of current computability. \begin{itemize} \item The group \(\langle 81,12\rangle\) of coclass \(2\) has ranks \(\nu=2\), \(\mu=7\) and\\ descendant numbers \((10/2;100/50)\), \(N=110\). \item The group \(\langle 243,37\rangle\) of coclass \(3\) has ranks \(\nu=5\), \(\mu=9\) and descendant numbers \((35/3;2\,783/186;81\,711/10\,202;350\,652/202\,266;\ldots)\); the total \(N>4\cdot 10^5\) is not known exactly. \item The group \(\langle 729,122\rangle\) of coclass \(4\) has ranks \(\nu=8\), \(\mu=11\) and descendant numbers \((45/3;117\,919/1\,377;\ldots)\); the total \(N>10^5\) is not known exactly.
\end{itemize} \section{Schur multiplier} \label{s:SchurMpl} Via the isomorphism \[\mathbb{Q}/\mathbb{Z}\to\mu_{\infty},\ \frac{n}{d}+\mathbb{Z}\mapsto\exp(\frac{n}{d}\cdot 2\pi i)\] the quotient group \begin{equation} \label{eqn:AdditiveRootsOfUnity} \mathbb{Q}/\mathbb{Z}=\lbrace\frac{n}{d}+\mathbb{Z}\mid d\ge 1,\ 0\le n\le d-1\rbrace \end{equation} \noindent can be viewed as the additive analogue of the multiplicative group \begin{equation} \label{eqn:RootsOfUnity} \mu_{\infty}=\lbrace z\in\mathbb{C}\mid z^d=1 \text{ for some integer } d\ge 1\rbrace \end{equation} \noindent of all roots of unity. Let \(p\) be a prime number and \(G\) be a finite \(p\)-group with presentation \(G=F/R\) as in the previous section. Then the second cohomology group \begin{equation} \label{eqn:SchurMpl} M(G):=H^2(G,\mathbb{Q}/\mathbb{Z}) \end{equation} \noindent of the \(G\)-module \(\mathbb{Q}/\mathbb{Z}\) is called the \textit{Schur multiplier} of \(G\). It can also be interpreted as the quotient group \begin{equation} \label{eqn:FreeSchurMpl} M(G)=(R\cap\lbrack F,F\rbrack)/\lbrack F,R\rbrack. \end{equation} I.R. Shafarevich \cite[\S\ 6, p.146]{Sh} has proved that the difference between the \textit{relation rank} \(r(G)=\dim_{\mathbb{F}_p}(H^2(G,\mathbb{F}_p))\) of \(G\) and the \textit{generator rank} \(d(G)=\dim_{\mathbb{F}_p}(H^1(G,\mathbb{F}_p))\) of \(G\) is given by the minimal number of generators of the Schur multiplier of \(G\), that is, \begin{equation} \label{eqn:Shafarevich} r(G)-d(G)=d(M(G)). \end{equation} N. Boston and H. Nover \cite[\S\ 3.2, Prop.2]{BoNo} have shown that \begin{equation} \label{eqn:BostonNover} \mu(G_j)-\nu(G_j)\le r(G), \end{equation} \noindent for all quotients \(G_j:=G/P_j(G)\) of \(p\)-class \(\mathrm{cl}_p(G_j)=j\), \(j\ge 0\), of a pro-\(p\) group \(G\) with finite abelianization \(G/G^\prime\). Furthermore, J. Blackhurst (in the appendix \textit{On the nucleus of certain p-groups} of a paper by N. Boston, M.R. Bush and F.
Hajir \cite{BBH}) has proved that a non-cyclic finite \(p\)-group \(G\) with trivial Schur multiplier \(M(G)\) is a terminal vertex in the descendant tree \(\mathcal{T}(1)\) of the trivial group \(1\), that is, \begin{equation} \label{eqn:Blackhurst} M(G)=1 \Rightarrow \nu(G)=0. \end{equation} We conclude this section by giving two examples. \begin{itemize} \item A finite \(p\)-group \(G\) has a balanced presentation \(r(G)=d(G)\) if and only if \(r(G)-d(G)=0=d(M(G))\), that is, if and only if its Schur multiplier \(M(G)=1\) is trivial. Such a group is called a \textit{Schur group} \cite{Ag,BBH,BuMa,KoVe} and it must be a leaf in the descendant tree \(\mathcal{T}(1)\). \item A finite \(p\)-group \(G\) satisfies \(r(G)=d(G)+1\) if and only if \(r(G)-d(G)=1=d(M(G))\), that is, if and only if it has a non-trivial cyclic Schur multiplier \(M(G)\). Such a group is called a \textit{Schur\(+1\) group}. \end{itemize} \section{Pruning strategies} \label{s:PruningStrategies} For \textit{searching} a particular group in a descendant tree \(\mathcal{T}(R)\) by looking for \textit{patterns} defined by the kernels and targets of Artin transfer homomorphisms \cite{Ma4}, it is frequently adequate to reduce the number of vertices in the branches of a dense tree with high complexity by sifting groups with desired special properties, for example \begin{enumerate}[({F}1)] \item filtering the \(\sigma\)-groups (see Definition \ref{dfn:SchurSigmaGroup}), \item eliminating a set of certain transfer kernel types (TKTs, see \cite[pp.403--404]{Ma4}), \item cancelling all non-metabelian groups (thus restricting to the \textit{metabelian skeleton}), \item removing metabelian groups with cyclic centre (usually of higher complexity), \item cutting off vertices whose distance from the mainline (depth) exceeds some lower bound, \item combining several different sifting criteria. 
\end{enumerate} The result of such a sieving procedure is called a \textit{pruned descendant tree} \(\mathcal{T}_\ast(R)<\mathcal{T}(R)\) with respect to the desired set of properties. In any case, however, the mainline of a coclass tree must never be eliminated, since the result would be a disconnected infinite set of finite graphs instead of a tree. We expand this idea further in the following detailed discussion of new phenomena. \section{Striking news: periodic bifurcations in trees} \label{s:PeriodicBifurcations} We begin this section about brand-new discoveries with the most recent example of periodic bifurcations in trees of \(2\)-groups. It was found on 17 January 2015, motivated by a search for metabelian \(2\)-class tower groups \cite{Ma1} of complex quadratic fields \cite{No} and complex bicyclic biquadratic Dirichlet fields \cite{AZT}. \subsection{Finite \(2\)-groups \(G\) with \(G/G^\prime\simeq (2,2,2)\)} \label{ss:2Groups222} The \(2\)-groups under investigation are three-generator groups with elementary abelian commutator factor group of type \((2,2,2)\). As shown in Figure \ref{fig:2GrpTyp222Cocl3} of \S\ \ref{s:ConcreteExamples}, all such groups are descendants of the abelian root \(\langle 8,5\rangle\). Among its immediate descendants of step size \(2\), there are three groups which reveal multifurcation. \(\langle 32,27\rangle\) has nuclear rank \(\nu=3\), giving rise to \(3\)-fold multifurcation. The two groups \(\langle 32,28\rangle\) and \(\langle 32,34\rangle\) possess the nuclear rank \(\nu=2\) required for \textit{bifurcation}. Due to the arithmetical origin of the problem, we focused on the latter, \(G:=\langle 32,34\rangle\), and constructed an extensive finite part of its pruned descendant tree \(\mathcal{T}_\ast(G)\), using the \(p\)-group generation algorithm \cite{Nm2,Ob,HEO} as implemented in the computational algebra system Magma \cite{BCP,BCFS,MAGMA}. All groups turned out to be \textit{metabelian}.
\begin{remark} \label{rmk:Sifting222} Since our primary intention is to provide a sound group theoretic background for several phenomena discovered in class field theory and algebraic number theory, we eliminated superfluous brushwood in the descendant trees to avoid unnecessary complexity. The selected sifting process for reducing the entire descendant tree \(\mathcal{T}(G)\) to the pruned descendant tree \(\mathcal{T}_\ast(G)\) filters all vertices which satisfy one of the conditions in Equations (\ref{eqn:LayeredTKTMainline}) or (\ref{eqn:LayeredTKTPrdSequence}), and essentially consists of pruning strategy (F2), more precisely, of \begin{enumerate} \item omitting all the \(13\) terminal step size-\(2\) descendants, and \(5\), resp. \(4\), of the \(6\) capable step size-\(2\) descendants, together with their complete descendant trees, in Theorem \ref{thm:TwoBifurcation}, resp. Corollary \ref{cor:TwoBifurcation}, and \item eliminating all, resp. \(4\), of the \(5\) terminal step size-\(1\) descendants in Theorem \ref{thm:TwoBifurcation}, resp. Corollary \ref{cor:TwoBifurcation}. \end{enumerate} \end{remark} Denote by \(x,y,z\) the generators of a finite \(2\)-group \(G=\langle x,y,z\rangle\) with abelian type invariants \((2,2,2)\). We fix an ordering of the seven maximal normal subgroups by putting \begin{equation} \label{eqn:MaxSbgr222} \begin{aligned} S_1=\langle y,z,G^\prime\rangle, S_2=\langle z,x,G^\prime\rangle, S_3=\langle x,y,G^\prime\rangle,\\ S_4=\langle x,yz,G^\prime\rangle, S_5=\langle y,zx,G^\prime\rangle, S_6=\langle z,xy,G^\prime\rangle,\\ S_7=\langle xy,yz,G^\prime\rangle. \end{aligned} \end{equation} Just within this subsection, we select a special designation for a TKT \cite[pp.403--404]{Ma4} whose first layer consists exactly of all these seven planes in the \(3\)-dimensional \(\mathbb{F}_2\)-vector space \(G/G^\prime\), in any ordering. 
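The ordering fixed in Equation (\ref{eqn:MaxSbgr222}) runs over all seven maximal subgroups of \(G\), i.e. over the seven hyperplanes of the \(3\)-dimensional \(\mathbb{F}_2\)-vector space \(G/G^\prime\). That there are exactly seven is a quick finite check; in the sketch below, encoding the generators \(x,y,z\) as the standard basis of \(\mathbb{F}_2^3\) is an illustrative assumption:

```python
# Count the maximal (index-2) subgroups of G/G' ~= F_2^3: they are the
# kernels of the seven nonzero linear functionals, i.e. the hyperplanes.
# Encoding x, y, z as the standard basis of F_2^3 is an illustrative choice.
from itertools import product

V = list(product(range(2), repeat=3))
hyperplanes = set()
for phi in product(range(2), repeat=3):
    if phi == (0, 0, 0):
        continue  # the zero functional has kernel V itself, not a hyperplane
    hyperplanes.add(frozenset(v for v in V
                              if sum(a * b for a, b in zip(phi, v)) % 2 == 0))

print(len(hyperplanes))  # -> 7, matching the planes S_1, ..., S_7
```

Over \(\mathbb{F}_2\) the only nonzero scalar is \(1\), so distinct nonzero functionals have distinct kernels, and the count \(7=2^3-1\) is exact.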
\begin{definition} \label{dfn:Permutation} The transfer kernel type (TKT) \(\varkappa=\lbrack\varkappa_0;\varkappa_1;\varkappa_2;\varkappa_3\rbrack\) is called a \textit{permutation} if all seven members of the first layer \(\varkappa_1\) are maximal subgroups of \(G\) and there exists a permutation \(\sigma\in S_7\) such that \(\varkappa_1=(S_{\sigma(j)})_{1\le j\le 7}\). \end{definition} For brevity, we give \(2\)-logarithms of abelian type invariants in the following theorem and we denote iteration by formal exponents, for instance, \(1^3:=(1,1,1)\hat{=}(2,2,2)\), \((1^3)^4:=(1^3,1^3,1^3,1^3)\), \(0^7:=(0,0,0,0,0,0,0)\) and \((j+2,j+1)\hat{=}(2^{j+2},2^{j+1})\). Further, we eliminate an initial anomaly of generalized identifiers by putting \(G-\#2;1:=G-\#2;8\) and \(G-\#2;2:=G-\#2;9\), formally. \begin{theorem} \label{thm:TwoBifurcation} Let \(\ell\) be a positive integer bounded from above by \(10\). \begin{enumerate} \item In the descendant tree \(\mathcal{T}(G)\) of \(G=\langle 32,34\rangle\), there exists a unique path of length \(\ell\), \[G=\delta^0(G)\leftarrow\delta^1(G)\leftarrow\cdots\leftarrow\delta^\ell(G),\] of (reverse) directed edges with uniform step size \(2\) such that \(\delta^j(G)=\pi(\delta^{j+1}(G))\), for all \(0\le j\le\ell-1\) (along the path, \(\delta\) is a section of the surjection \(\pi\)), and all the vertices \begin{equation} \label{eqn:StepSizeTwoDesc} \delta^j(G)=G(-\#2;1)^j \end{equation} \noindent of this path share the following common invariants: \begin{itemize} \item the transfer kernel type with layer \(\varkappa_1\) containing three \(2\)-cycles (and nearly a permutation, except for the first component which is total, \(0\hat{=}\delta^j(G)\)), \begin{equation} \label{eqn:LayeredTKTMainline} \varkappa(\delta^j(G))=\lbrack 1;(0,S_6,S_5,S_7,S_3,S_2,S_4);0^7;0\rbrack, \end{equation} \item the \(2\)-multiplicator rank and the nuclear rank, giving rise to the bifurcation, \begin{equation} \label{eqn:Ranks} 
\mu(\delta^j(G))=6,\quad \nu(\delta^j(G))=2, \end{equation} \item and the counters of immediate descendants, \begin{equation} \label{eqn:Counters} N_1(\delta^j(G))=6,\ C_1(\delta^j(G))=1,\quad N_2(\delta^j(G))=19,\ C_2(\delta^j(G))=6, \end{equation} \noindent determining the local structure of the descendant tree. \end{itemize} \item A few other invariants of the vertices \(\delta^j(G)\) depend on the superscript \(j\), \begin{itemize} \item the \(2\)-logarithm of the order, the nilpotency class and the coclass, \begin{equation} \label{eqn:LogOrdClCc} \log_2(\mathrm{ord}(\delta^j(G)))=2j+5,\quad \mathrm{cl}(\delta^j(G))=j+2,\quad \mathrm{cc}(\delta^j(G))=j+3, \end{equation} \item a single component of layer \(\tau_1\), three components of layer \(\tau_2\), and layer \(\tau_3\) of the transfer target type \begin{equation} \label{eqn:LayeredTTT} \tau(\delta^j(G))=\lbrack 1^3;((j+2,j+2),(1^3)^6);((j+2,j+1)^3,(1^3)^4);(j+1,j+1)\rbrack. \end{equation} \end{itemize} \end{enumerate} \end{theorem} In view of forthcoming number theoretic applications, we add the following \begin{corollary} \label{cor:TwoBifurcation} Let \(0\le j\le\ell\) be a non-negative integer. \begin{enumerate} \item The regular component \(\mathcal{T}^{j+3}(\delta^j(G))\) of the descendant tree \(\mathcal{T}(\delta^j(G))\) is a coclass tree which contains a unique periodic sequence whose vertices \(V_{j,k}:=\delta^j(G)(-\#1;1)^k-\#1;2\) with \(k\ge 0\) are characterized by a permutation TKT \begin{equation} \label{eqn:LayeredTKTPrdSequence} \varkappa(V_{j,k})=\lbrack 1;(S_1,S_6,S_5,S_7,S_3,S_2,S_4);0^7;0\rbrack, \end{equation} \noindent with a single fixed point \(S_1\) and the same three \(2\)-cycles \((S_2,S_6)\), \((S_3,S_5)\), \((S_4,S_7)\) as in the mainline TKT of Equation (\ref{eqn:LayeredTKTMainline}). 
\item The irregular component \(\mathcal{T}^{j+4}(\delta^j(G))\) of the descendant tree \(\mathcal{T}(\delta^j(G))\) is a forest which contains a unique second coclass tree \(\mathcal{T}^{j+4}(\delta^j(G)-\#2;2)\) whose mainline vertices \(M_{j+1,k}:=\delta^j(G)-\#2;2(-\#1;1)^k\) with \(k\ge 0\) possess the same permutation TKT as in Equation (\ref{eqn:LayeredTKTPrdSequence}), apart from the first coclass tree \(\mathcal{T}^{j+4}(\delta^j(G)-\#2;1)\), where \(\delta^j(G)-\#2;1=\delta^{j+1}(G)\), whose mainline vertices \(\delta^{j+1}(G)(-\#1;1)^k\) with \(k\ge 0\) share the TKT in Equation (\ref{eqn:LayeredTKTMainline}). \end{enumerate} \end{corollary} \begin{proof} (of Theorem \ref{thm:TwoBifurcation}, Corollary \ref{cor:TwoBifurcation} and Theorem \ref{thm:2ParamPres})\\ The \(p\)-group generation algorithm \cite{Nm2,Ob,HEO} as implemented in the Magma computational algebra system \cite{BCP,BCFS,MAGMA} was employed to construct the pruned descendant tree \(\mathcal{T}_\ast(G)\) with root \(G=\langle 32,34\rangle\), which we defined as the disjoint union of all pruned coclass trees \(\mathcal{T}_\ast^{j+3}(\delta^j(G))\) with the successive descendants \(\delta^j(G)=G(-\#2;1)^j\), \(0\le j\le 10\), of step size \(2\) of \(G\) as roots. Using the well-known virtual periodicity \cite{dS,EkLg} of each coclass tree \(\mathcal{T}^{j+3}(\delta^j(G))\), which turned out to be strict and of the smallest possible length \(1\), the vertical construction was terminated at nilpotency class \(12\), considerably deeper than the point where periodicity sets in. The horizontal construction was extended up to coclass \(13\), where the amount of CPU time started to become annoying. \end{proof} Within the frame of our computations, the periodicity was not restricted to bifurcations only: it seems that the pruned (or maybe even the entire) descendant trees \(\mathcal{T}_\ast(\delta^j(G))\) are all isomorphic to \(\mathcal{T}_\ast(G)\) as graphs.
This is visualized impressively by Figure \ref{fig:PeriodicBifurcation222}. {\tiny \begin{figure} \caption{Periodic Bifurcations in \(\mathcal{T}_\ast(G)\)} \label{fig:PeriodicBifurcation222} \end{figure} } The extent to which we constructed the pruned descendant tree suggests the following conjecture. \begin{conjecture} \label{cnj:TwoBifurcation} Theorem \ref{thm:TwoBifurcation}, Corollary \ref{cor:TwoBifurcation} and Theorem \ref{thm:2ParamPres} remain true for an arbitrarily large positive integer \(\ell\), not necessarily bounded by \(10\). \end{conjecture} \begin{remark} \label{rmk:Ord32Id35} We must emphasize that the root \(\langle 8,5\rangle\) in Figure \ref{fig:PeriodicBifurcation222} was drawn for the sake of completeness only, and that the mainline of the coclass tree \(\mathcal{T}^3(\langle 32,35\rangle)\) is exceptional, since \begin{itemize} \item its root is \textit{not} a descendant of \(G\) and \item the TKT of its vertices \(M_{0,k}:=\langle 32,35\rangle(-\#1;1)^k\) with \(k\ge 0\), \begin{equation} \label{eqn:LayeredTKTOrd32Id35} \varkappa(M_{0,k})=\lbrack 1;(S_1,S_2,S_5,S_4,S_3,S_6,S_7);0^7;0\rbrack, \end{equation} \noindent is a permutation with \(5\) fixed points and only a single \(2\)-cycle \((S_3,S_5)\). \end{itemize} \end{remark} \noindent One-parameter \textit{polycyclic pc-presentations} for all occurring groups are given as follows. \begin{enumerate} \item For the mainline vertices of the coclass tree \(\mathcal{T}^3(\langle 32,34\rangle)\) with class \(c\ge 3\), that is, starting with \(\langle 64,174\rangle\) and excluding the root \(\langle 32,34\rangle\), by \begin{equation} \label{eqn:Group34} \begin{aligned} \delta^0(G)(-\#1;1)^{c-2}=G_3^c:=\langle x,y,z,s_2,\ldots,s_c,t_2\mid\\ s_2=\lbrack y,x\rbrack,\ t_2=\lbrack z,x\rbrack,\ s_j=\lbrack s_{j-1},x\rbrack\text{ for }3\le j\le c,\\ x^2=1,\ y^2=s_2s_3,\ z^2=t_2,\\ s_j^2=s_{j+1}s_{j+2}\text{ for }2\le j\le c-2,\ s_{c-1}^2=s_c\rangle.
\end{aligned} \end{equation} \item For the mainline vertices of the coclass tree \(\mathcal{T}^4(\langle 128,444\rangle)\) with class \(c\ge 3\) by \begin{equation} \label{eqn:Group444} \begin{aligned} \delta^1(G)(-\#1;1)^{c-3}=G_4^c:=\langle x,y,z,s_2,\ldots,s_c,t_2,t_3\mid\\ s_2=\lbrack y,x\rbrack,\ t_2=\lbrack z,x\rbrack,\ s_j=\lbrack s_{j-1},x\rbrack\text{ for }3\le j\le c,\ t_3=\lbrack t_2,x\rbrack,\\ x^2=1,\ y^2=s_2s_3,\ z^2=t_2t_3,\\ s_j^2=s_{j+1}s_{j+2}\text{ for }2\le j\le c-2,\ s_{c-1}^2=s_c,\ t_2^2=t_3\rangle. \end{aligned} \end{equation} \item For the mainline vertices of the coclass tree \(\mathcal{T}^5(\langle 512,30599\rangle)\) with class \(c\ge 4\) by \begin{equation} \label{eqn:Group30599} \begin{aligned} \delta^2(G)(-\#1;1)^{c-4}=G_5^c:=\langle x,y,z,s_2,\ldots,s_c,t_2,t_3,t_4\mid\\ s_2=\lbrack y,x\rbrack,\ t_2=\lbrack z,x\rbrack,\ s_j=\lbrack s_{j-1},x\rbrack\text{ for }3\le j\le c,\ t_3=\lbrack t_2,x\rbrack,\ t_4=\lbrack t_3,x\rbrack,\\ x^2=1,\ y^2=s_2s_3,\ z^2=t_2t_3,\\ s_j^2=s_{j+1}s_{j+2}\text{ for }2\le j\le c-2,\ s_{c-1}^2=s_c,\ t_2^2=t_3t_4,\ t_3^2=t_4\rangle. \end{aligned} \end{equation} \end{enumerate} \begin{theorem} \label{thm:2ParamPres} For higher coclass \(4\le r\le\ell+3\) the presentations (\ref{eqn:Group444}) and (\ref{eqn:Group30599}) can be generalized in the form of a two-parameter polycyclic pc-presentation for class \(r-1\le c\le\ell+2\). \begin{equation} \label{eqn:TwoParamPres} \begin{aligned} \delta^{r-3}(G)(-\#1;1)^{c-r+1}=G_r^c:=\langle x,y,z,s_2,\ldots,s_c,t_2,\ldots,t_{r-1}\mid\\ s_2=\lbrack y,x\rbrack,\ t_2=\lbrack z,x\rbrack,\ s_j=\lbrack s_{j-1},x\rbrack\text{ for }3\le j\le c,\ t_k=\lbrack t_{k-1},x\rbrack\text{ for }3\le k\le r-1,\\ x^2=1,\ y^2=s_2s_3,\ z^2=t_2t_3,\\ s_j^2=s_{j+1}s_{j+2}\text{ for }2\le j\le c-2,\ s_{c-1}^2=s_c,\ t_k^2=t_{k+1}t_{k+2}\text{ for }2\le k\le r-3,\ t_{r-2}^2=t_{r-1}\rangle.
\end{aligned} \end{equation} \end{theorem} To obtain a presentation for the vertices \(\delta^{r-3}(G)(-\#1;1)^{c-r}-\#1;2\), \(c\ge r\), at depth \(1\) in the distinguished periodic sequence whose vertices are characterized by the permutation TKT (\ref{eqn:LayeredTKTPrdSequence}), we must only add the single relation \(x^2=s_c\) to the presentation (\ref{eqn:TwoParamPres}) of the mainline vertices of the coclass tree \(\mathcal{T}^{r}(\delta^{r-3}(G))\) given in Theorem \ref{thm:2ParamPres}. \subsection{Finite \(3\)-groups \(G\) with \(G/G^\prime\simeq (3,3)\)} \label{ss:3Groups33} We continue this section with periodic bifurcations in trees of \(3\)-groups, which were discovered in 2012 and 2013 \cite{MaA,MaB,MaC}, inspired by a search for \(3\)-class tower groups of complex quadratic fields \cite{SoTa,HeSm,BuMa}, which must be Schur \(\sigma\)-groups. These \(3\)-groups are two-generator groups of coclass at least \(2\) with elementary abelian commutator quotient of type \((3,3)\). As shown in Figure \ref{fig:3GrpTyp33Cocl2} of \S\ \ref{s:ConcreteExamples}, all such groups are descendants of the extraspecial group \(\langle 27,3\rangle\). Among its \(7\) immediate descendants of step size \(2\), there are only two groups which satisfy the requirements arising from the arithmetical background. The two groups \(\langle 243,6\rangle\) and \(\langle 243,8\rangle\) do not show multifurcation themselves but they are not coclass-settled either, since their immediate mainline descendants \(Q=\langle 729,49\rangle\) and \(U=\langle 729,54\rangle\) possess the required nuclear rank \(\nu=2\) for \textit{bifurcation}. We constructed an extensive finite part of their pruned descendant trees \(\mathcal{T}_\ast(G)\), \(G\in\lbrace Q,U\rbrace\), using the \(p\)-group generation algorithm \cite{Nm2,Ob,HEO} as implemented in the computational algebra system Magma \cite{BCP,BCFS,MAGMA}.
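The construction pattern used throughout this section — generate immediate descendants, apply a sifting strategy, and bound the nilpotency class — can be summarized schematically. The following Python sketch is our own illustration, not Magma code: \texttt{descendants} is a stand-in for the \(p\)-group generation algorithm, and \texttt{keep} for a sifting strategy such as (F2).

```python
from collections import deque, namedtuple

def pruned_tree(root, descendants, keep, max_class):
    """Breadth-first construction of a pruned descendant tree.

    descendants(G) stands in for the p-group generation algorithm
    (immediate descendants of G of all step sizes); keep(D) encodes a
    sifting strategy, e.g. a test on the transfer kernel type; the
    vertical construction stops at nilpotency class max_class.
    """
    tree = {root: []}            # vertex -> list of kept children
    queue = deque([root])
    while queue:
        G = queue.popleft()
        for D in descendants(G):
            if not keep(D):
                continue         # prune D together with its whole subtree
            tree[G].append(D)
            tree[D] = []
            if D.cl < max_class:
                queue.append(D)
    return tree

# Toy example: each vertex has one step size-1 and one step size-2 child;
# the filter keeps only the step size-1 chain (a "mainline").
V = namedtuple("V", ["name", "cl"])
toy = lambda G: [V(G.name + "-#1;1", G.cl + 1), V(G.name + "-#2;1", G.cl + 2)]
mainline = pruned_tree(V("G", 2), toy, keep=lambda D: "#2" not in D.name,
                       max_class=5)
```

With this toy filter the result is a path of four vertices, mirroring how a pruned coclass tree retains a mainline with selected side vertices.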
Denote by \(x,y\) the generators of a finite \(3\)-group \(G=\langle x,y\rangle\) with abelian type invariants \((3,3)\). We fix an ordering of the four maximal normal subgroups by putting \begin{equation} \label{eqn:MaxSbgr33} \begin{aligned} H_1=\langle y,G^\prime\rangle, H_2=\langle x,G^\prime\rangle, H_3=\langle xy,G^\prime\rangle, H_4=\langle xy^2,G^\prime\rangle. \end{aligned} \end{equation} Within this subsection, we make use of special designations for transfer kernel types (TKTs) which were defined generally in \cite[pp.403--404]{Ma4} and more specifically for the present scenario in \cite{Ma,Ma2}. We are interested in the unavoidable mainline vertices with TKTs\\ \(\mathrm{c}.18\), \(\varkappa=(0,1,2,2)\), resp. \(\mathrm{c}.21\), \(\varkappa=(2,0,3,4)\),\\ and, above all, in the most essential vertices of depth \(1\) forming periodic sequences with TKTs\\ \(\mathrm{E}.6\), \(\varkappa=(1,1,2,2)\) and \(\mathrm{E}.14\), \(\varkappa=(3,1,2,2)\sim(4,1,2,2)\),\\ resp. \(\mathrm{E}.8\), \(\varkappa=(2,2,3,4)\) and \(\mathrm{E}.9\), \(\varkappa=(2,3,3,4)\sim(2,4,3,4)\),\\ and we want to eliminate the numerous and annoying vertices with TKTs\\ \(\mathrm{H}.4\), \(\varkappa=(2,1,2,2)\), resp. \(\mathrm{G}.16\), \(\varkappa=(2,1,3,4)\). We point out that, for instance, \(\mathrm{E}.9\), \(\varkappa=(2,3,3,4)\sim(2,4,3,4)\), is a shortcut for the layer \(\varkappa_1=(H_2,H_3,H_3,H_4)\sim(H_2,H_4,H_3,H_4)\) of the complete (layered) TKT \(\varkappa=\lbrack\varkappa_0;\varkappa_1;\varkappa_2\rbrack\). \begin{remark} \label{rmk:Sifting33} We choose the following sifting strategy for reducing the entire descendant tree \(\mathcal{T}(G)\) to the pruned descendant tree \(\mathcal{T}_\ast(G)\).
We filter all vertices which, firstly, are \(\sigma\)-groups, and secondly satisfy one of the conditions in Equations (\ref{eqn:TKTMainline}) or (\ref{eqn:TKTPrdSequences}), whence the process is a combination (F6)\(=\)(F1)\(+\)(F2)\(+\)(F5) and consists of \begin{enumerate} \item keeping all of the \(3\) terminal step size-\(2\) descendants, which are exactly the Schur \(\sigma\)-groups, and omitting \(2\) of the \(3\) capable step size-\(2\) descendants having TKT \(\mathrm{H}.4\), resp. \(\mathrm{G}.16\), together with their complete descendant trees, and \item eliminating \(2\) of the \(5\) terminal step size-\(1\) descendants having TKT \(\mathrm{c}.18\), resp. \(\mathrm{c}.21\), and \(2\) of the \(3\) capable step size-\(1\) descendants having TKT \(\mathrm{H}.4\), resp. \(\mathrm{G}.16\), \end{enumerate} \noindent in Theorem \ref{thm:ThreeBifurcation}. \end{remark} For brevity, we give \(3\)-logarithms of abelian type invariants in the following theorem and we denote iteration by formal exponents, for instance, \(1^3:=(1,1,1)\hat{=}(3,3,3)\), \((2,1)\hat{=}(9,3)\), \((2,1)^3:=((2,1),(2,1),(2,1))\), and \((j+2,j+1)\hat{=}(3^{j+2},3^{j+1})\). Further, we eliminate some initial anomalies of generalized identifiers by putting \(\langle 243,8\rangle-\#1;1:=\langle 243,8\rangle-\#1;3\), \(\langle 729,54\rangle-\#1;2\vert 4:=\langle 729,54\rangle-\#1;4\vert 2\), \(\langle 729,54\rangle-\#2;2\vert 4:=\langle 729,54\rangle-\#2;4\vert 2\), \(\langle 729,54\rangle-\#2;1:=\langle 729,54\rangle-\#2;3\), \(\langle 729,49\rangle(-\#2;1-\#1;1)^j-\#1;1:=\langle 729,49\rangle(-\#2;1-\#1;1)^j-\#1;2\), \(\langle 729,49\rangle(-\#2;1-\#1;1)^j-\#1;4\vert 5\vert 6:=\langle 729,49\rangle(-\#2;1-\#1;1)^j-\#1;5\vert 6\vert 7\), formally. \begin{theorem} \label{thm:ThreeBifurcation} Let \(\ell\) be a positive integer bounded from above by \(8\). \begin{enumerate} \item In the descendant tree \(\mathcal{T}(G)\) of \(G=\langle 243,6\rangle\), resp. 
\(G=\langle 243,8\rangle\), there exists a unique path of length \(2\ell\), \[G=\delta^0(G)\leftarrow\delta^1(G)\leftarrow\cdots\leftarrow\delta^{2\ell}(G),\] of (reverse) directed edges of alternating step sizes \(1\) and \(2\) such that \(\delta^i(G)=\pi(\delta^{i+1}(G))\), for all \(0\le i\le 2\ell-1\), and all the vertices with even superscript \(i=2j\), \(j\ge 0\), \begin{equation} \label{eqn:EvenDesc} \delta^{2j}(G)=G(-\#1;1-\#2;1)^j, \end{equation} resp. all the vertices with odd superscript \(i=2j+1\), \(j\ge 0\), \begin{equation} \label{eqn:OddDesc} \delta^{2j+1}(G)=G(-\#1;1-\#2;1)^j-\#1;1, \end{equation} \noindent of this path share the following common invariants, respectively: \begin{itemize} \item the uniform (w.r.t. \(i\)) transfer kernel type, containing a total component \(0\hat{=}\delta^i(G)\), \begin{equation} \label{eqn:TKTMainline} \varkappa(\delta^i(G))=\lbrack 1;(0,1,2,2)\text{ resp. }(2,0,3,4);0\rbrack, \end{equation} \item the \(3\)-multiplicator rank and the nuclear rank, \begin{equation} \label{eqn:EvenRanks} \mu(\delta^{2j}(G))=3,\quad \nu(\delta^{2j}(G))=1, \end{equation} \noindent resp., giving rise to the bifurcation for odd \(i=2j+1\), \begin{equation} \label{eqn:OddRanks} \mu(\delta^{2j+1}(G))=4,\quad \nu(\delta^{2j+1}(G))=2, \end{equation} \item and the counters of immediate descendants, \begin{equation} \label{eqn:EvenCounters} N_1(\delta^{2j}(G))=4,\ C_1(\delta^{2j}(G))=4, \end{equation} \noindent resp. \begin{equation} \label{eqn:OddCounters} N_1(\delta^{2j+1}(G))=8,\ C_1(\delta^{2j+1}(G))=3,\quad N_2(\delta^{2j+1}(G))=6,\ C_2(\delta^{2j+1}(G))=3, \end{equation} \noindent determining the local structure of the descendant tree.
\end{itemize} \item A few other invariants of the vertices \(\delta^i(G)\) depend on the superscript \(i\), \begin{itemize} \item the \(3\)-logarithm of the order, the nilpotency class and the coclass, \begin{equation} \label{eqn:EvenLogOrdClCc} \log_3(\mathrm{ord}(\delta^{2j}(G)))=3j+5,\quad \mathrm{cl}(\delta^{2j}(G))=2j+3,\quad \mathrm{cc}(\delta^{2j}(G))=j+2, \end{equation} \noindent resp. \begin{equation} \label{eqn:OddLogOrdClCc} \log_3(\mathrm{ord}(\delta^{2j+1}(G)))=3j+6,\quad \mathrm{cl}(\delta^{2j+1}(G))=2j+4,\quad \mathrm{cc}(\delta^{2j+1}(G))=j+2, \end{equation} \item a single component of layer \(\tau_1\) and the layer \(\tau_2\) of the transfer target type \begin{equation} \label{eqn:EvenTTT} \tau(\delta^{2j}(G))=\lbrack 1^2;((j+2,j+1),1^3,(2,1)^2)\text{ resp. }((2,1),(j+2,j+1),(2,1)^2);(j+1,j+1,1)\rbrack, \end{equation} \noindent resp. \begin{equation} \label{eqn:OddTTT} \tau(\delta^{2j+1}(G))=\lbrack 1^2;((j+2,j+2),1^3,(2,1)^2)\text{ resp. }((2,1),(j+2,j+2),(2,1)^2);(j+2,j+1,1)\rbrack. \end{equation} \end{itemize} \end{enumerate} \end{theorem} Theorem \ref{thm:ThreeBifurcation} provided the scaffold of the pruned descendant tree \(\mathcal{T}_\ast(G)\) of \(G=\langle 243,n\rangle\), for \(n\in\lbrace 6,8\rbrace\), with mainlines and periodic bifurcations. With respect to number theoretic applications, however, the following Corollaries \ref{cor:ThreeBifurcation} and \ref{cor:Covers} are of the greatest importance. \begin{corollary} \label{cor:ThreeBifurcation} Let \(0\le i\le 2\ell\) be a non-negative integer. Whereas the vertices with even superscript \(i=2j\), \(j\ge 0\), that is, \(\delta^{2j}(G)=G(-\#1;1-\#2;1)^j\), are merely links in the distinguished path, the vertices with odd superscript \(i=2j+1\) \(j\ge 0\), that is, \(\delta^{2j+1}(G)=G(-\#1;1-\#2;1)^j-\#1;1\), reveal the essential periodic bifurcations with the following properties. 
\begin{enumerate} \item The regular component \(\mathcal{T}^{j+2}(\delta^{2j+1}(G))\) of the descendant tree \(\mathcal{T}(\delta^{2j+1}(G))\) is a coclass tree which contains the mainline, \[M_{j,k}:=\delta^{2j+1}(G)(-\#1;1)^k\text{ with }k\ge 0,\] which entirely consists of \(\sigma\)-groups, and three distinguished periodic sequences whose vertices \[V_{j,k}:=\delta^{2j+1}(G)(-\#1;1)^k-\#1;2\vert 4\vert 6\text{ resp. }4\vert 5\vert 6\text{ with }k\ge 0\] are \(\sigma\)-groups exactly for even \(k=2k^\prime\ge 0\) and are characterized by the following TKTs \(\varkappa(V_{j,k})=\lbrack 1;\varkappa_1;0\rbrack\) with layer \(\varkappa_1\) given by \begin{equation} \label{eqn:TKTPrdSequences} \varkappa_1\in\lbrace(1,1,2,2),(3,1,2,2),(4,1,2,2)\rbrace\text{ resp. } \varkappa_1\in\lbrace(2,2,3,4),(2,3,3,4),(2,4,3,4)\rbrace, \end{equation} \noindent which deviate from the mainline TKT of Equation (\ref{eqn:TKTMainline}) in a single component only. \item The irregular component \(\mathcal{T}^{j+3}(\delta^{2j+1}(G))\) of the descendant tree \(\mathcal{T}(\delta^{2j+1}(G))\) is a forest which contains a bunch of \(3\) isolated Schur \(\sigma\)-groups \[S_j:=\delta^{2j+1}(G)-\#2;2\vert 4\vert 6\text{ resp. }4\vert 5\vert 6,\] which possess the same TKTs as in Equation (\ref{eqn:TKTPrdSequences}), and additionally contains the root of the next coclass tree \(\mathcal{T}^{j+3}(\delta^{2j+1}(G)-\#2;1)\), where \(\delta^{2j+1}(G)-\#2;1=\delta^{2(j+1)}(G)\), whose mainline vertices \(\delta^{2(j+1)}(G)(-\#1;1)^k\) with \(k\ge 0\) share the TKT in Equation (\ref{eqn:TKTMainline}). \end{enumerate} \end{corollary} The metabelian \(3\)-groups forming the three distinguished periodic sequences \[V_{0,2k}=\delta^1(G)(-\#1;1)^{2k}-\#1;2\vert 4\vert 6\text{ resp. 
}4\vert 5\vert 6\text{ with }k\ge 0\] of the pruned coclass tree \(\mathcal{T}_\ast^2(G)\) in Corollary \ref{cor:ThreeBifurcation}, for \(i=0\), belong to the few groups for which all immediate descendants with respect to the parent definition (P4) are known. (We have not used this kind of descendant up to now.) Since all groups in \(\mathcal{T}(G)\setminus\mathcal{T}^2(G)\) are of derived length \(3\), the set of these descendants can be defined in the following way. \begin{definition} \label{dfn:Covers} Let \(P\) be a finite \(p\)-group; then the set of all finite \(p\)-groups \(D\) whose second derived quotient \(D/D^{\prime\prime}\) is isomorphic to \(P\) is called the \textit{cover} \(\mathrm{cov}(P)\) of \(P\). The subset \(\mathrm{cov}_\ast(P)\) consisting of all Schur \(\sigma\)-groups in \(\mathrm{cov}(P)\) is called the \textit{balanced cover} of \(P\). \end{definition} \begin{corollary} \label{cor:Covers} For \(0\le k\le\ell\), the group \(V_{0,2k}\), which does not have a balanced presentation, possesses a finite cover of cardinality \(\#\mathrm{cov}(V_{0,2k})=k+1\) and a unique Schur \(\sigma\)-group in its balanced cover, \(\#\mathrm{cov}_\ast(V_{0,2k})=1\). More precisely, the covers are given explicitly by \begin{equation} \label{eqn:Covers} \begin{aligned} \mathrm{cov}(V_{0,2k})=\lbrace V_{j,2(k-j)}\mid 1\le j\le k\rbrace\cup\lbrace S_k\rbrace,\\ \mathrm{cov}_\ast(V_{0,2k})=\lbrace S_k\rbrace. \end{aligned} \end{equation} \end{corollary} The arrows in Figures \ref{fig:MultiPeriodQAdmissible} and \ref{fig:MultiPeriodUAdmissible} indicate the projections \(\pi\) from all members \(D\) of a cover \(\mathrm{cov}(P)\) onto the common metabelianization \(P\), that is, in the sense of the parent definition (P4), from the descendants \(D\) onto the parent \(P=\pi(D)\).
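The counting in Corollary \ref{cor:Covers} can be spelled out by listing labels. The following Python check is our own bookkeeping illustration; it confirms \(\#\mathrm{cov}(V_{0,2k})=k+1\) directly from the explicit description in Equation (\ref{eqn:Covers}):

```python
def cover_labels(k):
    """Labels of cov(V_{0,2k}) as listed in Equation (Covers)."""
    return {("V", j, 2 * (k - j)) for j in range(1, k + 1)} | {("S", k)}

for k in range(7):
    labels = cover_labels(k)
    assert len(labels) == k + 1        # #cov(V_{0,2k}) = k + 1
    assert ("S", k) in labels          # the unique Schur sigma-group S_k
```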
\begin{proof} (of Theorem \ref{thm:ThreeBifurcation}, Corollary \ref{cor:ThreeBifurcation}, Corollary \ref{cor:Covers} and Theorem \ref{thm:EvenBranches})\\ The \(p\)-group generation algorithm \cite{Nm2,Ob,HEO}, which is implemented in the computational algebra system Magma \cite{BCP,BCFS,MAGMA}, was used for constructing the pruned descendant trees \(\mathcal{T}_\ast(G)\) with roots \(G=\langle 243,6\vert 8\rangle\), which were defined as the disjoint union of all pruned coclass trees \(\mathcal{T}_\ast^{j+2}(\delta^{2j+1}(G))\) of the descendants \(\delta^{2j+1}(G)=G(-\#1;1-\#2;1)^j-\#1;1\), \(0\le j\le 10\), of \(G\) as roots, together with \(4\) siblings in the irregular component \(\mathcal{T}_\ast^{j+3}(\delta^{2j+1}(G))\), \(3\) of them Schur \(\sigma\)-groups with \(\mu=2\) and \(\nu=0\). Using the strict periodicity \cite{dS,EkLg} of each pruned coclass tree \(\mathcal{T}_\ast^{j+2}(\delta^{2j+1}(G))\), which turned out to be of length \(2\), the vertical construction was terminated at nilpotency class \(19\), considerably deeper than the point where periodicity sets in. The horizontal construction was extended up to coclass \(10\), where the consumption of CPU time became daunting. \end{proof} {\tiny \begin{figure} \caption{Periodic Bifurcations in \(\mathcal{T}_\ast(Q)\)} \label{fig:MultiPeriodQAdmissible} \end{figure} } {\tiny \begin{figure} \caption{Periodic Bifurcations in \(\mathcal{T}_\ast(U)\)} \label{fig:MultiPeriodUAdmissible} \end{figure} } Within the frame of our computations, the periodicity was not restricted to bifurcations only: it seems that the pruned (or maybe even the entire) descendant trees \(\mathcal{T}_\ast(\delta^{2j+1}(G))\) are all isomorphic to \(\mathcal{T}_\ast(\delta^1(G))\) as graphs.
This is visualized impressively by Figures \ref{fig:MultiPeriodQAdmissible} and \ref{fig:MultiPeriodUAdmissible}, where the following notation (not to be confused with layers) is used: \[\varkappa_1=(4,1,2,2),\ \varkappa_2=(3,1,2,2),\ \varkappa_3=(1,1,2,2),\ \varkappa_0=(0,1,2,2),\] resp. \[\varkappa_1=(2,4,3,4),\ \varkappa_2=(2,3,3,4),\ \varkappa_3=(2,2,3,4),\ \varkappa_0=(2,0,3,4).\] As in the previous subsection, the extent to which we constructed the pruned descendant trees suggests the following conjecture. \begin{conjecture} \label{cnj:ThreeBifurcation} Theorem \ref{thm:ThreeBifurcation}, Corollary \ref{cor:ThreeBifurcation} and Corollary \ref{cor:Covers} remain true for an arbitrarily large positive integer \(\ell\), not necessarily bounded by \(8\). \end{conjecture} \noindent One-parameter \textit{polycyclic pc-presentations} for the groups in the first three pruned coclass trees of \(\mathcal{T}_\ast(\langle 243,6\rangle)\) are given as follows. \begin{enumerate} \item For the metabelian vertices of the pruned coclass tree \(\mathcal{T}_\ast^2(\delta^{0}(G))\) with class \(c\ge 5\), that is, starting with \(\langle 2187,285\rangle\) and excluding the root \(\delta^{0}(G)=\langle 243,6\rangle\) and its descendant \(Q=\langle 729,49\rangle\), by \begin{equation} \label{eqn:Scnd3ClsGrp} \begin{aligned} \delta^{0}(G)(-\#1;1)^{c-3} = G_2^c(0,0),\ G_2^c(z,w) := \langle\ x,y,s_2,\ldots,s_c,t_3\ \mid \\ s_2=\lbrack y,x\rbrack,\ s_j=\lbrack s_{j-1},x\rbrack \text{ for } 3\le j\le c,\ t_3=\lbrack s_2,y\rbrack, \\ s_j^3=s_{j+2}^2s_{j+3} \text{ for } 2\le j\le c-3,\ s_{c-2}^3=s_c^2,\ t_3^3=1, \\ x^3 = s_c^w,\ y^3 = s_3^2s_4s_c^z\ \rangle.
\end{aligned} \end{equation} \item For the non-metabelian vertices of the pruned coclass tree \(\mathcal{T}_\ast^3(\delta^{2}(G))\) with class \(c\ge 5\), and including the Schur \(\sigma\)-groups, which are siblings of the root, by \begin{equation} \label{eqn:3TwrGrp9748GS} \begin{aligned} \delta^{2}(G)(-\#1;1)^{c-5} = G_3^c(0,0),\ G_3^c(z,w) := \langle\ x,y,s_2,\ldots,s_c,t_3,u_5\ \mid \\ s_2=\lbrack y,x\rbrack,\ s_j=\lbrack s_{j-1},x\rbrack \text{ for } 3\le j\le c,\ t_3=\lbrack s_2,y\rbrack, \\ u_5=\lbrack s_3,y\rbrack=\lbrack s_4,y\rbrack,\ \lbrack s_3,s_2\rbrack=u_5^2,\ t_3^3=u_5^2, \\ s_2^3=s_4^2s_5u_5,\ s_j^3=s_{j+2}^2s_{j+3} \text{ for } 3\le j\le c-3,\ s_{c-2}^3=s_c^2, \\ x^3 = s_c^w,\ y^3 = s_3^2s_4s_c^z\ \rangle. \end{aligned} \end{equation} \item For the non-metabelian vertices of the pruned coclass tree \(\mathcal{T}_\ast^4(\delta^{4}(G))\) with class \(c\ge 7\), and including the Schur \(\sigma\)-groups, which are siblings of the root, by \begin{equation} \label{eqn:3TwrGrp262744ES} \begin{aligned} \delta^{4}(G)(-\#1;1)^{c-7} = G_4^c(0,0),\ G_4^c(z,w) := \langle\ x,y,s_2,\ldots,s_c,t_3,u_5,u_7\ \mid \\ s_2=\lbrack y,x\rbrack,\ s_j=\lbrack s_{j-1},x\rbrack \text{ for } 3\le j\le c,\ t_3=\lbrack s_2,y\rbrack, \\ u_5=\lbrack s_4,y\rbrack,\ u_7=\lbrack s_6,y\rbrack,\ \lbrack s_3,s_2\rbrack=u_5^2u_7^2,\ \lbrack s_3,y\rbrack=u_5u_7^2, \\ \lbrack s_5,y\rbrack=u_7^2,\ \lbrack s_4,s_2\rbrack=u_7^2,\ \lbrack s_5,s_2\rbrack=u_7^2,\ \lbrack s_4,s_3\rbrack=u_7, \\ s_2^3=s_4^2s_5u_5,\ s_3^3=s_5^2s_6u_7^2,\ t_3^3=u_5^2u_7^2, u_5^3=u_7^2, \\ s_j^3=s_{j+2}^2s_{j+3} \text{ for } 4\le j\le c-3,\ s_{c-2}^3=s_c^2, \\ x^3 = s_c^w,\ y^3 = s_3^2s_4s_c^z\ \rangle. 
\end{aligned} \end{equation} \end{enumerate} The parameter \(c\) is the nilpotency class of the group, and the parameters \(0\le w\le 1\) and \(0\le z\le 2\) determine \begin{itemize} \item the location of the group on the descendant tree, and \item the transfer kernel type (TKT) of the group, as follows: \end{itemize} \(G_r^c(0,0)\) lies on the mainline (this is the so-called \textit{mainline principle}) and has TKT c.18, \(\varkappa=(0,1,2,2)\), whereas all the other groups belong to periodic sequences or are isolated Schur \(\sigma\)-groups:\\ \(G_r^c(0,1)\) possesses TKT E.6, \(\varkappa=(1,1,2,2)\),\\ \(G_r^c(1,0)\) and \(G_r^c(2,0)\) have TKT H.4, \(\varkappa=(2,1,2,2)\), and lie outside of the pruned tree,\\ \(G_r^c(1,1)\) and \(G_r^c(2,1)\) have TKT E.14, \(\varkappa=(3,1,2,2)\sim(4,1,2,2)\). \noindent In Figure \ref{fig:3TwrGrp9748GS}, resp. \ref{fig:3TwrGrp262744ES}, we have drawn the lattice of normal subgroups of \(G_3^5(z,w)\), resp. \(G_4^7(z,w)\). The \textit{upper} and \textit{lower central series}, \(\zeta(G)\), \(\gamma(G)\), of these groups form subgraphs whose relative position justifies the names of these series, as visualized impressively by Figures \ref{fig:3TwrGrp9748GS} and \ref{fig:3TwrGrp262744ES}. {\tiny \begin{figure} \caption{Normal Lattice and Central Series of \(G_3^5(z,w)\)} \label{fig:3TwrGrp9748GS} \end{figure} } {\tiny \begin{figure} \caption{Normal Lattice and Central Series of \(G_4^7(z,w)\)} \label{fig:3TwrGrp262744ES} \end{figure} } \textit{Generators} \(x,y\in G\setminus G^\prime\), \(s_2,s_3,t_3,\ldots\in G^\prime\setminus G^{\prime\prime}\), and \(u_5,u_7\in G^{\prime\prime}\), are carefully selected independently from individual isomorphism types and placed in locations which illustrate the structure of the groups. Furthermore, the \textit{normal lattice of the metabelianization} \(G/G^{\prime\prime}\) is also included as a subgraph simply by putting \(u_5=1\). 
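All order, class and coclass data stated in this section are mutually consistent with the defining relation \(\mathrm{cc}(G)=\log_p(\mathrm{ord}(G))-\mathrm{cl}(G)\) for a finite \(p\)-group \(G\). A short Python sanity check (our own, purely arithmetic) of the formulas in Theorems \ref{thm:TwoBifurcation} and \ref{thm:ThreeBifurcation}, and in Theorem \ref{thm:EvenBranches} below:

```python
# cc = log_p(ord) - cl, checked against the parametrized formulas.
for j in range(11):
    assert (2 * j + 5) - (j + 2) == j + 3      # 2-groups delta^j(G), Eq. (LogOrdClCc)
    assert (3 * j + 5) - (2 * j + 3) == j + 2  # 3-groups, even vertices
    assert (3 * j + 6) - (2 * j + 4) == j + 2  # 3-groups, odd vertices
    assert (3 * j + 8) - (2 * j + 5) == j + 3  # Schur sigma-groups of order 3^(3j+8)
```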
We conclude with a theorem concerning the central series and some fundamental properties of the Schur \(\sigma\)-groups which we encountered among all the groups under investigation. \begin{theorem} \label{thm:EvenBranches} Let \(0\le j\le 7\) be an integer. There exist exactly \(6\) pairwise non-isomorphic groups \(G\) of order \(3^{3j+8}\), class \(2j+5\), coclass \(j+3\), having fixed derived length \(3\), such that \begin{enumerate} \item the factors of their upper central series are given by \[ \zeta_{i+1}(G)/\zeta_i(G)\simeq \begin{cases} (3,3) & \text{ for } i=2j+4, \\ (3) & \text{ for } 1\le i\le 2j+3, \\ (3,3^{j+2}) & \text{ for } i=0, \end{cases} \] \item their second derived group \(G^{\prime\prime}<\zeta_1(G)\) is central and cyclic of order \(3^{j+1}\). \end{enumerate} \noindent Furthermore, \begin{itemize} \item they are Schur \(\sigma\)-groups with automorphism group \(\mathrm{Aut}(G)\) of order \(2\cdot 3^{4j+10}\), \item the factors of their lower central series are given by \[ \gamma_i(G)/\gamma_{i+1}(G)\simeq \begin{cases} (3,3) & \text{ for odd } 1\le i\le 2j+5, \\ (3) & \text{ for even } 2\le i\le 2j+4, \end{cases} \] \item their metabelianization \(G/G^{\prime\prime}\) is of order \(3^{2j+7}\), class \(2j+5\) and of fixed coclass \(2\), \item their largest metabelian generalized predecessor, that is, the \((2j+1)\)st generalized parent, is given by either \(\langle 729,49\rangle\) or \(\langle 729,54\rangle\). \end{itemize} \end{theorem} \section{Conclusion} \label{s:Conclusion} We emphasize that the results of Section \ref{ss:3Groups33} provide the background for considerably stronger assertions than those made in \cite{BuMa} (which were, however, already sufficient to disprove erroneous claims in \cite{SoTa,HeSm}): firstly, they concern the four TKTs E.6, E.14, E.8, E.9 instead of just TKT E.9, and secondly, they apply to varying odd nilpotency class \(5\le\mathrm{cl}(G)\le 19\) instead of just class \(5\). \end{document}
\begin{document} \title{Gromov hyperbolicity, John spaces and quasihyperbolic geodesics} \author{Qingshan Zhou} \address{Qingshan Zhou, School of Mathematics and Big Data, Foshan University, Foshan, Guangdong 528000, People's Republic of China} \email{[email protected]} \author{Yaxiang Li${}^{\mathbf{*}}$} \address{Yaxiang Li, Department of Mathematics, Hunan First Normal University, Changsha, Hunan 410205, People's Republic of China} \email{[email protected]} \author{Antti Rasila} \address{Antti Rasila, College of Science, Guangdong Technion -- Israel Institute of Technology, Shantou, Guangdong 515063, People's Republic of China} \email{[email protected]; [email protected]} \def\@arabic\c@footnote{} \footnotetext{ \texttt{\tiny File:~\jobname .tex, printed: \number\year-\number\month-\number\day, \thehours.\ifnum\theminutes<10{0}\fi\theminutes} } \makeatletter\def\@arabic\c@footnote{\@arabic\c@footnote}\makeatother \date{} \subjclass[2010]{Primary: 30C65, 30L10, 30F45; Secondary: 30C20} \keywords{ Quasihyperbolic metric, Gromov hyperbolic spaces, John spaces, quasihyperbolic geodesic.\\ ${}^{\mathbf{*}}$ Corresponding author} \begin{abstract} We show that every quasihyperbolic geodesic in a John space admitting a roughly starlike Gromov hyperbolic quasihyperbolization is a cone arc. This result provides a new approach to the elementary metric geometry question, formulated in \cite[Question 2]{Hei89}, which has been studied by Gehring, Hag, Martio and Heinonen. As an application, we obtain a simple geometric condition connecting uniformity of the space with the existence of a Gromov hyperbolic quasihyperbolization. \end{abstract} \thanks{The research was partly supported by NNSF of China (Nos.
11601529, 11671127, 11571216).} \maketitle{} \pagestyle{myheadings} \markboth{}{} \section{Introduction} The unit disk or Poincar\'e disk $\mathbb{D}$ serves as a canonical model in the study of conformal mappings and hyperbolic geometry in complex analysis. It is a noncomplete metric space with the metric inherited from the two-dimensional Euclidean space $\mathbb{R}^2$. On the other hand, the unit disk equipped with the Poincar\'e metric is a complete Riemannian $2$-manifold with constant negative curvature. This observation can be used in investigating the hyperbolic metric on planar domains and conformal mappings between them. A generalization of this idea to higher dimensional spaces, involving quasihyperbolic metrics and Gromov hyperbolicity, was studied by Bonk, Heinonen and Koskela in \cite{BHK}. Well-known geometric properties of a hyperbolic geodesic $[x,y]\subset \mathbb{D}$ with respect to the Euclidean metric are: \begin{itemize} \item $\ell([x,y])\leq C|x-y|$, \item $\min\{\ell([x,z]),\ell([z,y])\}\leq C{\operatorname{dist}}(z,\partial \mathbb{D})$ \end{itemize} for all $z\in [x,y]$, where $C$ is a universal constant. The first of the above conditions says that the hyperbolic geodesic essentially minimizes length among all curves connecting its endpoints; this is the Gehring-Hayman condition. The second one is called the cone condition or the double twisted condition. Martio and Sarvas studied in \cite{MS78} global injectivity properties of locally injective mappings. They considered a class of domains of $\mathbb{R}^n$, called {\it uniform domains}, in which every pair of points can be connected by a curve satisfying the above two conditions for some constant $C\geq 1$.
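To make the two conditions concrete, consider the simplest hyperbolic geodesic of $\mathbb{D}$: a diameter segment $[-a,a]$ with $0<a<1$. Along it both conditions hold with constant $C=1$, since the segment has Euclidean length $|x-y|$ and each subarc length is dominated by the distance to the boundary. The following small Python check is our own toy illustration of this fact:

```python
# Toy numerical check of the Gehring-Hayman and cone conditions along
# the diameter geodesic [x, y] = [-a, a] of the unit disk (C = 1).
a = 0.9
x, y = -a, a

# Gehring-Hayman condition: the geodesic is a segment of length |x - y|.
assert (y - x) <= abs(x - y)

# Cone condition: min of the two subarc lengths vs. distance to the boundary.
for i in range(1001):
    t = x + (i / 1000) * (y - x)          # sample point z = t on [x, y]
    assert min(t - x, y - t) <= 1 - abs(t)
```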
In \cite{GO}, Gehring and Osgood investigated the geometric properties of the {\it quasihyperbolic metric}, which was introduced by Gehring and Palka \cite{GP76}, and proved that every quasihyperbolic geodesic in a Euclidean uniform domain also satisfies the above two conditions. It should be noted that the class of domains in $\mathbb{R}^n$ satisfying only the second condition, known as {\it John domains}, is large and of independent interest. For instance, the slit disk in $\mathbb{R}^2$ is an example of such a domain. This class was first considered by John \cite{Jo61} in the context of elasticity theory. Many characterizations of uniform and John domains can be found in the literature, and the importance of these classes of domains in function theory is well established; see for example \cite{GGKN17, LVZ17}. From a geometric point of view, it is a natural question whether each quasihyperbolic geodesic of a John domain is a cone arc. This question was pointed out already in 1989 by Gehring, Hag and Martio \cite{GHM}: \begin{ques}\label{q-1} Suppose $D\subset \mathbb{R}^n$ is a $c$-John domain and that $\gamma$ is a quasihyperbolic geodesic in $D$. Is $\gamma$ a $b$-cone arc for some $b=b(c)$? \end{ques} They proved in \cite[Theorem $4.1$]{GHM} that every quasihyperbolic geodesic in a simply connected plane John domain is a cone arc. They also constructed several examples to show that a similar result does not hold in higher dimensions. Furthermore, Heinonen has posed the following closely related problem concerning John disks: \begin{ques}\label{q-2}$($\cite[Question 2]{Hei89}$)$ Suppose $D\subset \mathbb{R}^n$ is a $c$-John domain which is quasiconformally equivalent to the unit ball $\mathbb{B}$ and that $\gamma$ is a quasihyperbolic geodesic in $D$. Is $\gamma$ a $b$-cone arc for some constant $b$?
\end{ques} With the help of the conformal modulus of path families and the Ahlfors $n$-regularity of the $n$-dimensional Hausdorff measure on $\mathbb{R}^n$, Bonk, Heinonen and Koskela \cite[Theorem $7.12$]{BHK} gave an affirmative answer to Question \ref{q-2} for bounded domains, with the constant $b$ depending on the dimension $n$ of the space. Recently, Guo \cite[Remark 3.10]{Guo15} provided a geometric method to deal with this question. His method was based on the result that a noncomplete metric space with a roughly starlike Gromov hyperbolic quasihyperbolization satisfies the Gehring-Hayman condition and the ball separation condition. These properties were established by Koskela, Lammi and Manojlovi\'{c} in \cite[Theorem $1.2$]{KLM14}. The constant $b$ in their results depends on the dimension $n$ as well. The second author of this paper considered a related question for quasihyperbolic quasigeodesics in the setting of Banach spaces \cite{Li}. Note that quasihyperbolic geodesics may not exist in infinite-dimensional spaces, even under a convexity assumption \cite{RT2}. The concept of uniformity in a metric space setting was first introduced by Bonk, Heinonen and Koskela \cite{BHK}, where they connected uniformity to negative curvature of the space, understood in the sense of Gromov. Moreover, they generalized the result of Gehring and Osgood and showed that every quasihyperbolic geodesic in a $c$-uniform space must be a $C$-uniform arc with $C=C(c)$, see \cite[Theorem 2.10]{BHK}. They also proved that a $c$-uniform space is Gromov $\delta$-hyperbolic with respect to its quasihyperbolic metric for some constant $\delta=\delta(c)$, see \cite[Theorem 3.6]{BHK}. In view of the above results, it is natural to consider the following more general question: \begin{ques}\label{q-3} Let $D$ be a locally compact, rectifiably connected noncomplete metric space.
If $D$ is an $a$-John space and $(D,k)$ is $\delta$-hyperbolic, is every quasihyperbolic geodesic $\gamma$ a $b$-cone arc with $b$ depending only on $a$ and $\delta$? \end{ques} In this paper, we study these questions. Our main result is the following: \begin{thm}\label{thm-1} Let $D$ be a locally compact, rectifiably connected noncomplete metric space. If $D$ is $a$-John and $(D,k)$ is $K$-roughly starlike and $\delta$-hyperbolic, then every quasihyperbolic geodesic in $D$ is a $b$-cone arc, where $b$ depends only on $a, \delta$ and $K$. \end{thm} Every proper domain $D$ in $\mathbb{R}^n$ is a locally compact, rectifiably connected noncomplete metric space. Following the terminology of \cite{BB03}, we call a locally compact, rectifiably connected noncomplete metric space $(D,d)$ {\it minimally nice}. For a minimally nice space $(D,d)$, we say that $D$ has a {\it Gromov hyperbolic quasihyperbolization} if $(D,k)$ is $\delta$-hyperbolic for some constant $\delta\geq 0$, where $k$ is the quasihyperbolic metric (for the definition see Subsection \ref{sub-2.2}). \begin{rem} The class of minimally nice John metric spaces which admit a roughly starlike Gromov hyperbolic quasihyperbolization is very wide. For example, it includes (inner) uniform domains (more generally, uniform metric spaces), simply connected John domains in the plane, and Gromov $\delta$-hyperbolic John domains in $\mathbb{R}^n$. \end{rem} \begin{rem} In view of the above, Theorem \ref{thm-1} states that all of the quasihyperbolic geodesics in the mentioned spaces are cone arcs. Moreover, Theorem \ref{thm-1} gives a positive answer to Question \ref{q-2}, and also to Question \ref{q-3} under a relatively mild additional condition. \end{rem} \begin{rem} The main tool in the proof of Theorem \ref{thm-1} is the uniformization process of (Gromov) hyperbolic spaces, which was introduced by Bonk, Heinonen and Koskela in \cite{BHK}.
They proved that each proper, geodesic and roughly starlike $\delta$-hyperbolic space is quasihyperbolically equivalent to a $c$-uniform space; see \cite[4.5 and 4.37]{BHK}. The uniformization process of Bonk, Heinonen and Koskela has many applications and is an important tool in many related papers, see e.g. \cite{BB03, KLM14}. \end{rem} From \cite[Theorem 3.22]{Vai05} it follows that every $\delta$-hyperbolic domain of ${\mathbb R}^n$ is $K$-roughly starlike with $K$ depending only on $\delta$. We thus have the following corollary of Theorem \ref{thm-1}. \begin{cor} Every quasihyperbolic geodesic in an $a$-John, $\delta$-hyperbolic domain $D$ of ${\mathbb R}^n$ is a $b$-cone arc with $b$ depending only on $a$ and $\delta$. \end{cor} \begin{rem} A proper domain $D$ in $\mathbb{R}^n$ is called $\delta$-{\it hyperbolic} for some $\delta\geq 0$ if $D$ has a Gromov hyperbolic quasihyperbolization. We remark that the above result is an improvement of \cite[Lemma $3.9$]{Guo15} whenever $\varphi(t)=Ct$ for some positive constant $C$. Also, we do not require the domain to be bounded. \end{rem} \begin{rem} There are many applications of the above mentioned classes of domains of $\mathbb{R}^n$ in quasiconformal mapping theory and potential theory, see e.g. \cite{BHK, CP17,GNV94,Guo15, NV}. A crucial ingredient in the related arguments is the fact that quasihyperbolic geodesics in Gromov hyperbolic John domains of $\mathbb{R}^n$ are inner uniform curves. \end{rem} As another motivation for this study, we remark that Bonk, Heinonen and Koskela established the following characterization of Gromov hyperbolic domains on the $2$-sphere in \cite{BHK}. \begin{Thm}\label{Thm-1} $($\cite[Theorem 1.12]{BHK}$)$ Gromov hyperbolic domains on the $2$-sphere are precisely the conformal images of inner uniform slit domains.
\end{Thm} A {\it slit domain} is a proper subdomain $D$ of the Riemann sphere such that each component of its complement is a point or a line segment parallel to the real or imaginary axis. It is well known that every domain in the Riemann sphere is conformally equivalent to a slit domain. In \cite{BHK}, Bonk, Heinonen and Koskela also pointed out that their proof of Theorem \Ref{Thm-1} is ``surprisingly indirect, using among other things the theory of modulus and Loewner spaces as developed recently in \cite{HK}, plus techniques from harmonic analysis", and asked for an elementary proof as well. In \cite{BB03}, Balogh and Buckley proved that a minimally nice metric space has a Gromov hyperbolic quasihyperbolization if and only if it satisfies the Gehring-Hayman condition and a ball separation condition. Their proof is also based on an analytic assumption, namely that the space supports a suitable Poincar\'{e} inequality. Recently, Koskela, Lammi and Manojlovi\'{c} observed in \cite{KLM14} that Poincar\'{e} inequalities are not critical for this characterization of Gromov hyperbolicity, see \cite[Theorem 1.2]{KLM14}. By using the above results, and as an application of Theorem \ref{thm-1}, we give the following simple geometric condition connecting the uniformity of a space to its other properties: \begin{thm}\label{thm-2} Let $Q>1$ and let $(X,d,\mu)$ be a proper, $Q$-regular, $A$-annularly quasiconvex length metric measure space. Let $D$ be a bounded proper subdomain of $X$. Then $D$ is uniform if and only if it is John or linearly locally connected, quasiconvex, and has a Gromov hyperbolic quasihyperbolization. \end{thm} \begin{rem} With the aid of Theorem \ref{thm-1} and some auxiliary results obtained in \cite{KLM14}, the proof of Theorem \ref{thm-2} is essentially elementary and only needs techniques from metric geometry and some estimates concerning quasihyperbolic metrics.
It is not difficult to see that Theorem \Ref{Thm-1} is a direct corollary of Theorem \ref{thm-2}. \end{rem} This paper is organized as follows. Section 2 contains notation, the basic definitions and auxiliary lemmas. In Section 3, we prove Theorem \ref{thm-1}. The proof of Theorem \ref{thm-2} is presented in Section 4. \section{Preliminaries} \subsection{Metric geometry} Let $(D, d)$ be a metric space, and let $B(x,r)$ and $\overline{B}(x,r)$ be the open ball and the closed ball (of radius $r$ centered at the point $x$) in $D$, respectively. For a set $A$ in $D$, we use $\overline{A}$ to denote the metric completion of $A$ and $\partial A=\overline{A}\setminus A$ its metric boundary. A metric space $D$ is called {\it proper} if its closed balls are compact. Following the terminology of \cite{BB03}, we call a locally compact, rectifiably connected noncomplete metric space $(D,d)$ {\it minimally nice}. By a curve, we mean a continuous function $\gamma:$ $I=[a,b]\to D$. If $\gamma$ is an embedding of $I$, it is also called an {\it arc}. The image set $\gamma(I)$ of $\gamma$ is also denoted by $\gamma$. A curve $\gamma$ is called {\it rectifiable} if its length $\ell_d(\gamma)<\infty$. A metric space $(D, d)$ is called {\it rectifiably connected} if every pair of points in $D$ can be joined by a rectifiable curve $\gamma$. The length function associated with a rectifiable curve $\gamma$: $[a,b]\to D$ is $z_{\gamma}$: $[a,b]\to [0, \ell(\gamma)]$, given by $z_{\gamma}(t)=\ell(\gamma|_{[a,t]})$. For any rectifiable curve $\gamma:$ $[a,b]\to D$, there is a unique map $\gamma_s:$ $[0, \ell(\gamma)]\to D$ such that $\gamma=\gamma_s\circ z_{\gamma}$. Obviously, $\ell(\gamma_s|_{[0,t]})=t$ for $t\in [0, \ell(\gamma)]$. The curve $\gamma_s$ is called the {\it arclength parametrization} of $\gamma$.
For a rectifiable curve $\gamma$ in $D$, the line integral over $\gamma$ of a Borel function $\varrho:$ $D\to [0, \infty)$ is $$\int_{\gamma}\varrho ds=\int_{0}^{\ell(\gamma)}\varrho\circ \gamma_s(t) dt.$$ We say that an arc $\gamma$ is a {\it geodesic} joining $x$ and $y$ in $D$ if $\gamma$ is a map from an interval $I=[0,l]$ to $D$ such that $\gamma(0)=x$, $\gamma(l)=y$ and $$d(\gamma(t),\gamma(t'))=|t-t'|\quad\mbox{for all}\;\;t,t'\in I.$$ Every rectifiably connected metric space $(D, d)$ admits a natural (or intrinsic) metric, its so-called length distance, given by $$\ell(x, y) := \inf\ell(\gamma),$$ where the infimum is taken over all rectifiable curves $\gamma$ joining $x$ and $y$ in $D$. A metric space $(D, d)$ is a {\it length space} provided that $d(x, y) = \ell(x, y)$ for all points $x, y\in D$. It is also common to call such a $d$ an intrinsic distance function. \subsection{Quasihyperbolic metric, quasigeodesics and solid arcs}\label{sub-2.2} Suppose $\gamma $ is a rectifiable curve in a minimally nice space $(D,d)$. Its {\it quasihyperbolic length} is the number $$\ell_{k_D}(\gamma)=\int_{\gamma}\frac{|dz|}{d_D(z)}, $$ where $d_D(z)={\operatorname{dist}}(z,\partial D)$ is the distance from $z$ to the boundary of $D$. For each pair of points $x$, $y$ in $D$, the {\it quasihyperbolic distance} $k_D(x,y)$ between $x$ and $y$ is defined by $$k_D(x,y)=\inf\ell_{k_D}(\gamma), $$ where the infimum is taken over all rectifiable curves $\gamma$ joining $x$ to $y$ in $D$. We remark that the resulting space $(D,k_D)$ is complete, proper and geodesic (cf. \cite[Proposition $2.8$]{BHK}).
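As a simple illustrative example (not taken from \cite{BHK}), consider the upper half-plane $\mathbb{H}=\{z=(x_1,x_2)\in\mathbb{R}^2:\, x_2>0\}$, for which $d_{\mathbb{H}}(z)=x_2$. For $x=(0,s)$ and $y=(0,t)$ with $0<t\leq s$, the vertical segment $\sigma$ joining $x$ and $y$ satisfies $$\ell_{k_{\mathbb{H}}}(\sigma)=\int_{t}^{s}\frac{dr}{r}=\log\frac{s}{t},$$ while every rectifiable curve $\gamma$ joining $x$ and $y$ in $\mathbb{H}$ satisfies $$\ell_{k_{\mathbb{H}}}(\gamma)=\int_{\gamma}\frac{|dz|}{x_2}\geq \int_{t}^{s}\frac{dr}{r}=\log\frac{s}{t}.$$ Hence $\sigma$ is a quasihyperbolic geodesic and $k_{\mathbb{H}}(x,y)=\log(s/t)$.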
We recall the following basic estimates for the quasihyperbolic distance, which were first used by Gehring and Palka \cite[2.1]{GP76} (see also \cite[(2.3), (2.4)]{BHK}): \begin{equation}\label{li-1} k_D(x,y)\geq \log\Big(1+\frac{d(x,y)} {\min\{d_D(x), d_D(y)\}}\Big)\geq \Big|\log\frac{d_D(x)}{d_D(y)}\Big|.\end{equation} In fact, more generally, for every rectifiable curve $\gamma$ in $D$ with endpoints $x$ and $y$ we have \begin{equation}\label{li-2} \ell_{k_D}(\gamma)\geq \log\Big(1+\frac{\ell(\gamma)} {\min\{d_D(x), d_D(y)\}}\Big). \end{equation} Moreover, we have the following estimate: \begin{lem}\label{newlemlabel} Let $D$ be a minimally nice length space. Then for $x,y\in D$ with $d(x,y) < d_D(x)$, we have $$k_D(x,y)\leq \frac{d(x,y)}{d_D(x)-d(x,y)}.$$ \end{lem} \begin{pf} Let $0<\epsilon<\frac{1}{2}(d_D(x)-d(x,y))$. Since $D$ is a length space, there is a curve $\alpha$ joining $x$ and $y$ such that $\ell(\alpha)\leq d(x,y)+\epsilon$. Thus we have $\ell(\alpha)<d_D(x)$, which implies that $\alpha\subset B(x,d_D(x))\cap D$. Hence, we compute $$k_D(x,y)\leq \int_{\alpha}\frac{|dz|}{d_D(z)}\leq \frac{\ell(\alpha)}{d_D(x)-\ell(\alpha)}<\frac{d(x,y)+\epsilon}{d_D(x)-d(x,y)-\epsilon}.$$ By letting $\epsilon\to 0$, we get the desired inequality. \end{pf} \begin{defn} \label{def1.4} Suppose $\gamma$ is an arc in a minimally nice space $D$. The arc may be closed, open or half open. Let $\overline{x}=(x_0,$ $\ldots,$ $x_n)$, $n\geq 1$, be a finite sequence of successive points of $\gamma$. For $h\geq 0$, we say that $\overline{x}$ is {\it $h$-coarse} if $k_D(x_{j-1}, x_j)\geq h$ for all $1\leq j\leq n$. Let $\Phi_{k_D}(\gamma,h)$ denote the family of all $h$-coarse sequences of $\gamma$. Set $$z_{k_D}(\overline{x})=\sum^{n}_{j=1}k_D(x_{j-1}, x_j)$$ and $$\ell_{k_D}(\gamma, h)=\sup \{z_{k_D}(\overline{x}): \overline{x}\in \Phi_{k_D}(\gamma,h)\}$$ with the agreement that $\ell_{k_D}(\gamma, h)=0$ if $\Phi_{k_D}(\gamma,h)=\emptyset$.
Then the number $\ell_{k_D}(\gamma, h)$ is the {\it $h$-coarse quasihyperbolic length} of $\gamma$. \end{defn} \begin{defn} \label{def1.5} Let $D$ be a minimally nice space. An arc $\gamma\subset D$ is {\it $(\nu, h)$-solid} with $\nu\geq 1$ and $h\geq 0$ if $$\ell_{k_D}(\gamma[x,y], h)\leq \nu\;k_D(x,y)$$ for all $x$, $y\in \gamma$. \end{defn} Let $\lambda\geq 1$ and $\mu\geq 0$. A curve $\gamma$ in $D$ is a {\it $(\lambda, \mu)$-quasigeodesic} if $$\ell_{k_D}(\gamma[x,y]) \leq \lambda k_D(x,y)+\mu$$ for all $x,y\in \gamma.$ If $\lambda=1$ and $\mu=0$, then $\gamma$ is a quasihyperbolic geodesic. \begin{defn}Let $D$ and $D'$ be two minimally nice metric spaces and let $M\geq 1$. We say that a homeomorphism $f: D\to D'$ is an {\it $M$-quasihyperbolic mapping}, or briefly {\it $M$-QH}, if for all $x$, $y\in D$, $$\frac{1 }{M}k_D(x,y)\leq k_{D'}(f(x),f(y))\leq M\;k_D(x,y) .$$\end{defn} In the following, we use $x$, $y$, $z$, $\ldots$ to denote points in $D$, and $x'$, $y'$, $z'$, $\ldots$ the images of $x$, $y$, $z$, $\ldots$ in $D'$, respectively, under $f$. For arcs $\alpha$, $\beta$, $\gamma$, $\ldots$ in $D$, we also use $\alpha'$, $\beta'$, $\gamma'$, $\ldots$ to denote their images in $D'$. Under quasihyperbolic mappings, we have the following useful relationship between $(\lambda, \mu)$-quasigeodesics and solid arcs. \begin{lem}\label{ll-001} Suppose that $G$ and $G'$ are minimally nice metric spaces. If $f:\;G\to G'$ is $M$-QH, and $\gamma$ is a $(\lambda, \mu)$-quasigeodesic in $G$, then there are constants $\nu=\nu(\lambda, \mu, M)$ and $h=h(\lambda, \mu, M)$ such that the image $\gamma'$ of $\gamma$ under $f$ is $(\nu,h)$-solid in $G'$.
\end{lem} \begin{pf} Let $\gamma$ be a $(\lambda,\mu)$-quasigeodesic and let $$h=M(\lambda+\mu)\;\; \mbox{and}\;\; \nu=M^2(\lambda+\mu).$$ To show that $\gamma'$ is $(\nu,h)$-solid, we only need to verify that for $x$, $y\in \gamma$, \begin{equation}\label{new-eq-3}\ell_{k_{G'}}(\gamma'[x',y'],h)\leq\nu k_{G'}(x',y').\end{equation} We prove this by considering two cases. The first case is: $k_G(x,y)<1$. Then for $z$, $w\in\gamma[x, y]$, we have $$k_{G'}(z',w')\leq Mk_G(z,w)\leq M(\lambda k_G(x,y)+\mu)<M(\lambda+\mu)=h,$$ and so \begin{equation}\label{ma-3}\ell_{k_{G'}}(\gamma'[x',y'],h)=0.\end{equation} Now, we consider the other case: $k_G(x,y)\geq 1$. Then with the aid of \cite[Theorem 4.9]{Vai6}, we have \begin{eqnarray}\label{ma-4} \ell_{k_{G'}}(\gamma'[x',y'],h) &\leq& M\ell_{k_G}(\gamma[x,y])\leq M(\lambda k_G(x,y)+\mu)\\ \nonumber &\leq& M(\lambda+\mu)k_{G}(x,y) \leq M^2(\lambda+\mu)k_{G'}(x',y').\end{eqnarray} It follows from \eqref{ma-3} and \eqref{ma-4} that \eqref{new-eq-3} holds, completing the proof.\end{pf} \subsection{Uniform spaces and John spaces} In this subsection we first recall the definitions of John spaces, cone arcs and uniform spaces. We also present some results on special arcs which will be useful later in the proof of the main result. \begin{defn} Let $a\geq 1$. A minimally nice space $(D,d)$ is called {\it $a$-John} if each pair of points $x,y\in D$ can be joined by a rectifiable arc $\alpha$ in $D$ such that for all $z\in \alpha$ $$\min\{\ell(\alpha[x,z]), \ell(\alpha[z,y])\}\leq a d_D(z),$$ where $\alpha[x,z]$ and $\alpha[z,y]$ denote the two subarcs of $\alpha$ divided by the point $z$. The arc $\alpha$ is called an {\it $a$-cone} arc. \end{defn} \begin{defn} Let $c\geq 1$.
A minimally nice space $(D,d)$ is called {\it $c$-uniform} if each pair of points $x,y\in D$ can be joined by a $c$-uniform arc. An arc $\alpha$ with endpoints $x$ and $y$ is called {\it $c$-uniform} if it is a $c$-cone arc and satisfies the $c$-quasiconvexity condition, that is, $\ell(\alpha)\leq c\, d(x,y).$ \end{defn} \begin{Lem}\label{Lem-uniform}$($\cite[(2.16)]{BHK}$)$\; If $D$ is a $c$-uniform metric space, then for all $x,y\in D$, we have $$ k_{D}(x,y)\leq 4c^2 \log\Big(1+\frac{d(x,y)}{\min\{d_{D}(x),d_{D}(y)\}}\Big).$$\end{Lem} The following properties of solid arcs in uniform metric spaces are from \cite{LVZ2}; they will be used in our proofs. \begin{Lem}\label{Lem13''}$($\cite[Lemma 3]{LVZ2}$)$\, Suppose that $D$ is a $c$-uniform space, and that $\gamma$ is a $(\nu,h)$-solid arc in $D$ with endpoints $x$, $y$. Let $d_D(x_0)=\max_{p\in \gamma}d_D(p)$. Then there exist constants $a_1=a_1( c, \nu, h)\geq 1$ and $a_2=a_2(c, \nu, h)\geq 1$ such that \begin{enumerate} \item ${\operatorname{diam}}(\gamma[x,u])\leq a_1 d_D(u)$ for $u\in \gamma[x,x_0],$ and ${\operatorname{diam}}(\gamma[y,v])\leq a_1 d_D(v)$ for $v\in \gamma[y, x_0]$; \item ${\operatorname{diam}}(\gamma)\leq \max\big\{a_2 d(x,y), 2(e^h-1)\min\{d_D(x),d_D(y)\}\big\}.$ \end{enumerate} \end{Lem} Next we discuss the properties of cone arcs. \begin{lem}\label{eq-8} Let $\alpha[x,y]$ be an $a$-cone arc in $D$ and let $z_0$ bisect the arclength of $\alpha[x,y]$. Then for each $z_1$, $z_2\in\alpha[x,z_0]$ $($or $\alpha[y,z_0]$$)$ with $z_2\in \alpha[z_1,z_0]$, we have $$k_D(z_1,z_2)\leq \ell_k(\alpha[z_1,z_2])\leq 2a \log\big(1+\frac{2\ell(\alpha[z_1,z_2])}{d_D(z_1)}\big)$$ and $$\ell_{k}(\alpha[z_1,z_2])\leq 4a^2k_{D}(z_1,z_2)+4a^2.$$ \end{lem} \begin{pf} By symmetry, we only need to verify the assertion in the case $z_1$, $z_2\in\alpha[x,z_0]$.
To this end, let $z_2\in \alpha[z_1,z_0]$ be given. Since $z_2$ lies in the first half of $\alpha$, the cone condition gives $$d_D(z_2)\geq \frac{1}{a}\ell(\alpha[z_1,z_2]).$$ If $z_2\in B(z_1, \frac{1}{2}d_D(z_1))$, then one finds that $d_D(z_2)\geq \frac{1}{2}d_D(z_1)$. Otherwise, $\ell(\alpha[z_1,z_2])\geq d(z_1,z_2)\geq \frac{1}{2}d_D(z_1)$, and so $d_D(z_2)\geq\frac{1}{2a}d_D(z_1)$. Hence in both cases we obtain $$d_D(z_2)\geq \frac{1}{4a}[2\ell(\alpha[z_1,z_2])+d_D(z_1)],$$ which yields that \begin{eqnarray*}k_{D}(z_1,z_2) &\leq& \ell_k(\alpha[z_1,z_2])= \int_{\alpha[z_1,z_2]}\frac{|dz|}{d_D(z)}\\ \nonumber &\leq& 2a\log\Big(1+\frac{2\ell(\alpha[z_1,z_2])}{d_D(z_1)}\Big)\\ \nonumber &\leq& 4a^2\log\Big(1+\frac{d_D(z_2)}{d_D(z_1)}\Big)\\ \nonumber &\leq& 4a^2k_{D}(z_1,z_2)+4a^2,\end{eqnarray*} as desired. \end{pf} \begin{lem} \label{lem13-0-0}Suppose that $f: D\to D'$ is an $M$-QH mapping from an $a$-John minimally nice space $D$ to a $c$-uniform space $D'$. Let $\alpha$ be an $a$-cone arc in $D$ with endpoints $x$ and $y$, let $z_0$ bisect the arclength of $\alpha$, and let $d_{D'}(v'_1)=\max\{d_{D'}(u'): u'\in\alpha'[x',z'_0]\}$ and $d_{D'}(v'_2)=\max\{d_{D'}(u'): u'\in\alpha'[y',z'_0]\}$. Then there is a constant $a_3=a_3(a,c,M)$ such that \begin{enumerate} \item for each $z'\in \alpha'[x', v'_1]$, $d' (x',z')\leq a_3\;d_{D'}(z')$ and for each $z'\in \alpha'[v'_1, z'_0]$, $d' (z'_0,z')\leq a_3\;d_{D'}(z')$. \item for each $z'\in \alpha'[y', v'_2]$, $d' (y',z')\leq a_3\;d_{D'}(z')$ and for each $z'\in \alpha'[v'_2, z'_0]$, $d' (z'_0,z')\leq a_3\;d_{D'}(z')$. \end{enumerate} \end{lem} \begin{pf} First, in light of Lemma \ref{eq-8}, we see that $\alpha[x,z_0]$ and $\alpha[z_0,y]$ are $(4a^2,4a^2)$-quasigeodesics. Since $f: D\to D'$ is $M$-QH, we thus know from Lemma \ref{ll-001} that $\alpha'[x',z'_0]$ and $\alpha'[z'_0,y']$ are solid arcs. Moreover, by the choices of $v_1'$ and $v_2'$, $(1)$ and $(2)$ follow from Lemma \Ref{Lem13''}.
\end{pf} \subsection{Uniformization theory of Bonk, Heinonen and Koskela} Let $(X,d)$ be a geodesic metric space and let $\delta\geq 0$. If every geodesic triangle $[x,y]\cup [y,z]\cup [z,x]$ in $(X,d)$ satisfies the condition that every point in $[x,y]$ is within distance $\delta$ from $[y,z]\cup [z,x]$, then the space $(X,d)$ is called a {\it $\delta$-hyperbolic space}. For simplicity, in the rest of this paper, when we say that a minimally nice space $X$ is {\it Gromov hyperbolic} we mean that the space is $\delta$-hyperbolic with respect to the quasihyperbolic metric for some nonnegative constant $\delta$. In \cite{BHK}, Bonk, Heinonen and Koskela introduced the concept of rough starlikeness of a Gromov hyperbolic space with respect to a given base point. Let $X$ be a proper, geodesic $\delta$-hyperbolic space and let $w\in X$. We say that $X$ is {\it $K$-roughly starlike} with respect to $w$ if for each $x\in X$ there is some point $\xi\in\partial^* X$ and a geodesic ray $\gamma=[w,\xi]$ with ${\operatorname{dist}}(x,\gamma)\leq K$. They also proved that both bounded uniform spaces and hyperbolic domains in ${\mathbb R}^n$ $($domains that are Gromov hyperbolic spaces when equipped with the quasihyperbolic metric$)$ are roughly starlike. It turns out that this property serves as an important tool in several studies, for instance \cite{BB03}, \cite{ZR} and \cite{KLM14}. Next we recall the conformal deformations which were introduced by Bonk, Heinonen and Koskela (cf. \cite[Chapter $4$]{BHK}). Let $(X,d)$ be a minimally nice space and $w\in X$. Consider the family of conformal deformations of $(X,k)$ by the densities $$\rho_\epsilon(x)=e^{-\epsilon k(x,w)}\;\;(\epsilon>0).$$ For $u$, $v\in X$, let $$d_\epsilon(u,v)=\inf\int_{\gamma} \rho_\epsilon ds_k,$$ where $ds_k$ is the arc-length element with respect to the metric $k$ and the infimum is taken over all rectifiable curves $\gamma$ in $X$ with endpoints $u$ and $v$.
Then $d_\epsilon$ are metrics on $X$, and we denote the resulting metric spaces by $X_\epsilon=(X,d_\epsilon)$. The next result shows that the deformations $X_{\epsilon}$ are uniform spaces and that each proper, geodesic and roughly starlike $\delta$-hyperbolic space is {\it quasihyperbolically equivalent} to a $c$-uniform space; see \cite[Propositions 4.5 and 4.37]{BHK}. \begin{Lem}\label{lem-1}$($\cite[Propositions $4.5$ and $4.37$]{BHK} or \cite[Lemma $4.12$]{BB03}$)$ Suppose $(X,d)$ is minimally nice and that $(X,k)$ is both $\delta$-hyperbolic and $K$-roughly starlike, for some $\delta\geq 0$, $K>0$. Then $X_\epsilon$ has diameter at most $2/\epsilon$ and there are positive numbers $c, \epsilon_0$ depending only on $\delta, K$ such that $X_\epsilon$ is $c$-uniform for all $0<\epsilon\leq \epsilon_0$. Furthermore, there exists $c_0=c_0(\delta,K)\in(0,1)$ such that the quasihyperbolic metrics $k$ and $k_\epsilon$ satisfy the quasi-isometric condition $$c_0\epsilon k(x,y)\leq k_\epsilon(x,y)\leq e \epsilon k(x,y).$$ \end{Lem} \section{The proof of Theorem \ref{thm-1}} Let $(D,d)$ be a minimally nice $a$-John metric space and let $(D,k)$ be $K$-roughly starlike and $\delta$-hyperbolic, where $k$ is the quasihyperbolic metric of $D$. Then by Lemma \Ref{lem-1}, we know that there is a positive number $\epsilon=\epsilon(\delta,K)$ such that $(D,d_{\epsilon})$ is a $c$-uniform metric space and the identity map from $(D,d)$ to $(D,d_{\epsilon})$ is $M$-QH, where $c$ and $M$ depend only on $\delta$ and $K$. For simplicity, we denote $D=(D,d)$, $(D',d')=(D,d_{\epsilon})$, and by $f$ the identity map from $D$ to $D'$. We may assume without loss of generality that $D$ is a length space, because the length of an arc, and hence the quasihyperbolic metric, associated to the original metric and to the length metric coincide. Fix $z_1$, $z_2\in D$ and let $\gamma$ be a quasihyperbolic geodesic joining $z_1$, $z_2$ in $D$.
Let $b=4a_4e^{a_4}$, $a_4=a_5^{8c^2M}$, $a_5=a_6^{4a^2M}$ and $a_6=(8a_1^2a_3)^{16c^2M}a^2$, where $a_1$ and $a_3$ are the constants from Lemmas \Ref{Lem13''} and \ref{lem13-0-0}, respectively. In the following, we shall prove that $\gamma$ is a $b$-cone arc, that is, for each $y\in\gamma$, $$\min\{\ell(\gamma[z_1, y]),\; \ell(\gamma[z_2, y])\}\leq b\,d_D(y).$$ Let $x_0\in \gamma$ be a point such that $d_D(x_0)=\max\limits_{z\in \gamma}d_D(z). $ By symmetry, we only need to prove that for $y\in\gamma[z_1,x_0]$, \begin{eqnarray} \label{John}\ell(\gamma[z_1, y])\leq b\,d_D(y).\end{eqnarray} To this end, let $m \geq 0$ be an integer such that $$2^{m}\, d_D(z_1) \leq d_D(x_0)< 2^{m+1}\, d_D(z_1), $$ and let $y_0$ be the first point in $\gamma[z_1,x_0]$ from $z_1$ to $x_0$ with $$d_D(y_0)=2^{m}\, d_D(z_1). $$ Observe that if $d_D(x_0)=d_D(z_1)$, then $y_0=z_1=x_0$. Let $y_1=z_1$. If $z_1=y_0$, we let $y_2=x_0$. It is possible that $y_2=y_1$. If $z_1\not= y_0$, then we let $y_2,\ldots ,y_{m+1}$ be the points such that for each $i\in \{2,\ldots,m+1\}$, $y_i$ denotes the first point in $\gamma[z_1,x_0]$ from $y_1$ to $x_0$ satisfying $$d_D(y_i)=2^{i-1}\, d_D(y_1).$$ Then $y_{m+1}=y_0$. We let $y_{m+2}=x_0$. It is possible that $y_{m+2}=y_{m+1}=x_0=y_0$; this occurs precisely when $x_0=y_0$. From the choice of $y_i$ we observe that for $y\in \gamma[y_i,y_{i+1}]$ $(i\in\{1, 2, \ldots, m+1\})$, \begin{equation}\label{li-newadd-1} d_D(y)<d_D(y_{i+1})=2d_D(y_i)\end{equation} and so for all $i\in\{1, 2, \ldots, m+1\}$, \begin{equation}\label{li-newadd-2} k_{D}(y_i,y_{i+1}) =\ell_k(\gamma[y_i,y_{i+1}])\geq \frac{\ell(\gamma[y_i,y_{i+1}])}{2d_D(y_i)}.\end{equation} To prove Theorem \ref{thm-1}, we shall establish an upper bound for the quasihyperbolic distance between $y_i$ and $y_{i+1}$, which is stated as follows.
\begin{lem}\label{eq-0}For each $i\in \{1,\ldots, m+1\}$, $k_{D}(y_i,y_{i+1})\leq a_4$.\end{lem} We note that Theorem \ref{thm-1} can be obtained from Lemma \ref{eq-0} as follows. First, we observe from \eqref{li-newadd-2} and Lemma \ref{eq-0} that for all $i\in\{1,\ldots, m+1\}$, \begin{equation}\label{li-1'} \ell(\gamma[y_i,y_{i+1}])\leq 2a_4 \,d_D(y_i).\end{equation} Further, for each $y\in \gamma[y_1,x_{0}]$, there is some $i\in \{1,\ldots,m+1\}$ such that $y\in \gamma[y_i,y_{i+1}]$. It follows from \eqref{li-1} that $$ \log \frac{d_D(y_i)}{d_D(y)}\leq k_D(y,y_i)\leq \, k_D(y_i,y_{i+1})\leq a_4 ,$$ whence $$d_D(y_i)\leq e^{ a_4 }d_D(y).$$ From this and (\ref{li-1'}) it follows that \begin{eqnarray}\label{eq(li-3)} \ell(\gamma[z_1,y])&=& \ell(\gamma[y_1,y_2])+\ell(\gamma[y_2,y_3])+\ldots+\ell(\gamma[y_i,y]) \\ \nonumber &\leq& 2a_4 (d_D(y_1)+d_D(y_2)+\ldots+d_D(y_i))\\ \nonumber &\leq& 4a_4 \,d_D(y_i)\leq 4a_4 e^{a_4 }\,d_D(y),\end{eqnarray} as desired. This proves \eqref{John} and so Theorem \ref{thm-1} follows. Hence, to complete the proof of Theorem \ref{thm-1}, we only need to prove Lemma \ref{eq-0}. \subsection{The proof of Lemma \ref{eq-0}} Without loss of generality, we may assume that $d_{D'}(y'_i)\leq d_{D'}(y'_{i+1})$. We note that if $ d (y_i, y_{i+1})<\frac{1}{2}d_D(y_i),$ then by Lemma \ref{newlemlabel} we have $$k_D(y_i, y_{i+1})\leq 1,$$ as desired. Therefore, we assume in the following that \begin{eqnarray}\label{eq(4-2)}d (y_i, y_{i+1})\geq \frac{1}{2}d_D(y_i).\end{eqnarray} Let $\alpha_i$ be an $a$-cone arc joining $y_i$ and $y_{i+1}$ in $D$ and let $v_i$ bisect the arclength of $\alpha_i$.
Then Lemma \ref{eq-8} implies that \begin{eqnarray}\label{hl-eq(4-1-2)}\;\;\;\;\;k_{D}(y_i,y_{i+1})&\leq& k_{D}(y_i,v_i)+k_{D}(v_i,y_{i+1})\\ \nonumber &\leq& 2a\bigg(\log \Big( 1+\frac{2\ell(\alpha_i[y_i,v_i])} {d_D(y_i)}\Big)+\log \Big( 1+\frac{2\ell(\alpha_i[y_{i+1},v_i])} {d_D(y_{i+1})}\Big)\bigg)\\ \nonumber &\leq& 4a\log \Big( 1+\frac{\ell(\alpha_i)} {d_D(y_i)}\Big). \nonumber \end{eqnarray} Now we divide the proof of Lemma \ref{eq-0} into two cases. \begin{ca} \label{ca1} $\ell(\alpha_i)< a_5 d (y_i, y_{i+1}).$\end{ca} Then by \eqref{li-newadd-2} and \eqref{hl-eq(4-1-2)} we compute \begin{eqnarray}\label{eq(h-h-4-2')} \frac{d(y_i,y_{i+1})}{2d_D(y_i)}&\leq& k_{D}(y_i,y_{i+1}) \leq 4a \log \Big( 1+\frac{\ell(\alpha_i)} {d_D(y_i)}\Big) \\ \nonumber &\leq& 4a \log \Big( 1+\frac{ a_5d(y_i,y_{i+1})} {d_D(y_i)}\Big).\end{eqnarray} A necessary condition for \eqref{eq(h-h-4-2')} is $$ d (y_i,y_{i+1})\leq a_5^2\,d_D(y_i).$$ Hence we deduce from (\ref{eq(h-h-4-2')}) that $k_{D}(y_i,y_{i+1})\leq a_4$, as desired. \begin{ca} \label{ca2} $\ell(\alpha_i)\geq a_5 d (y_i, y_{i+1}).$\end{ca} In this case, we argue by contradiction.
Suppose on the contrary that \begin{eqnarray}\label{eq(h-4-2)}k_{D}(y_i,y_{i+1})> a_4.\end{eqnarray} Then by Lemma \Ref{Lem-uniform}, we get \begin{eqnarray*}a_4<k_{D}(y_i,y_{i+1})\leq M k_{D'}(y'_i,y'_{i+1}) \leq 4c^2M\log\Big(1+\frac{d' (y'_i,y'_{i+1})}{d_{D'}(y'_i)}\Big),\end{eqnarray*} and so \begin{eqnarray}\label{eq(h-4-1')}d' (y'_i,y'_{i+1})\geq a_5d_{D'}(y'_i).\end{eqnarray} Therefore, by the choice of $v_i\in\alpha_i$ we obtain $$d_D(v_i)\geq \frac{\ell(\alpha_i)}{2a}\geq \frac{a_5}{2a} d (y_i,y_{i+1})>a_6\, d (y_i,y_{i+1}).$$ From this and \eqref{eq(4-2)} we deduce that there exists a point $v_{i,0}\in \alpha_i[y_i,v_i]$ such that \begin{equation}\label{eq-11} d_D(v_{i,0})=a_6\, d (y_i,y_{i+1}).\end{equation} Moreover, we claim that \begin{equation}\label{claim1}k_{D}(y_i,v_{i,0})\leq \frac{1}{a_5}k_{D}(y_i,y_{i+1}).\end{equation} Otherwise, we would see from Lemma \ref{eq-8} and \eqref{eq-11} that \begin{eqnarray*}k_{D}(y_i,y_{i+1})&<& a_5 k_{D}(y_i,v_{i,0})\leq 4aa_5 \log\Big(1+\frac{\ell(\alpha_i[y_{i},v_{i,0}])}{d_D(y_i)}\Big) \\ \nonumber &\leq& 4aa_5 \log\Big(1+\frac{a\,d_D(v_{i,0})}{d_D(y_i)}\Big) \leq 4a^2a_5a_6\log\Big(1+\frac{ d (y_i,y_{i+1})}{d_D(y_i)}\Big), \end{eqnarray*} which together with \eqref{li-newadd-2} shows that $$\frac{d(y_i,y_{i+1})}{d_D(y_i)}\leq 8a^2a_5a_6\log\Big(1+\frac{ d (y_i,y_{i+1})}{d_D(y_i)}\Big).$$ A necessary condition for the above inequality is $$ d (y_i,y_{i+1})\leq a_5^2\,d_D(y_i).$$ This shows that $k_{D}(y_i,y_{i+1})\leq a_4$, which contradicts $\eqref{eq(h-4-2)}$. Thus we get (\ref{claim1}).
Then it follows from Lemma \Ref{Lem-uniform} and \eqref{claim1} that \begin{eqnarray*} k_{D'}(y'_i,v'_{i,0})&<& Mk_{D}(y_i,v_{i,0}) \leq\frac{M}{a_5}k_{D}(y_i,y_{i+1}) \\ &\leq& \frac{M^2}{a_5}k_{D'}(y'_i,y'_{i+1}) \leq \frac{4c^2M^2}{a_5}\log\Big(1+\frac{d'(y'_i,y'_{i+1})}{d_{D'}(y'_i)}\Big). \end{eqnarray*} Hence, by an elementary computation we see from \eqref{li-1} and \eqref{eq(h-4-1')} that \begin{eqnarray*} \log \Big(1+\frac{d'(y'_i,v'_{i,0})}{d_{D'}(y'_i)}\Big) \leq k_{D'}(y'_i,v'_{i,0})\leq \log\Big(1+\frac{d'(y'_i,y'_{i+1})}{a_5\,d_{D'}(y'_i)}\Big), \end{eqnarray*} which implies that \begin{eqnarray}\label{eq(hl-41-5)}d'(y'_i,v'_{i,0})< \frac{1}{a_5}d'(y'_i,y'_{i+1}).\end{eqnarray} Moreover, we deduce from \eqref{eq(hl-41-5)} and \eqref{eq(h-4-1')} that \begin{eqnarray}\label{eq--2} d_{D'}(v'_{i,0})\leq d'(y'_i,v'_{i,0})+d_{D'}(y'_i)\leq \frac{2}{a_5}d'(y'_i,y'_{i+1}).\end{eqnarray} We recall that $v_i$ is the point of the cone arc $\alpha_i[y_i,y_{i+1}]$ which bisects the arclength of $\alpha_i$. Next we need to estimate the location of the image point $v'_i$ in $\alpha'_i$. We claim the following. \begin{cl}\label{eq--6} $d'(y'_i,v'_i)<\frac{1}{2}d'(y'_i,y'_{i+1})$. \end{cl} We prove this claim by contradiction.
Suppose on the contrary that \begin{equation}\label{neweqlabel}d'(y'_i,v'_i)\geq \frac{1}{2}d'(y'_i,y'_{i+1}).\end{equation} Let $u'_{0,i}\in\gamma'[y'_{i}, y'_{i+1}]$ be a point satisfying $$d_{D'}(u'_{0,i})=\max\{d_{D'}(w'):w'\in\gamma'[y'_{i}, y'_{i+1}]\}.$$ Then we see from Lemma \Ref{Lem13''} that \begin{equation}\label{e---1} d_{D'}(u'_{0,i})\geq \frac{1}{a_1}\max\{d'(y'_{i+1},u'_{0,i}), d'(u'_{0,i},y'_i)\} \geq \frac{d'(y'_i,y'_{i+1})}{2a_1}.\end{equation} This together with \eqref{eq(h-4-1')} shows that there exists some point $y'_{0,i}\in \gamma'[y'_i,u'_{0,i}]$ satisfying \begin{eqnarray}\label{eq(W-l-6-1)}d_{D'}(y'_{0,i})=\frac{d'(y'_i,y'_{i+1})}{2a_1}. \end{eqnarray} It follows from Lemma \Ref{Lem13''} that \begin{eqnarray}\label{eq(W-l-6-1add)}d'(y'_i,y'_{0,i})\leq a_1\,d_{D'}(y'_{0,i}).\end{eqnarray} Let $v'_0\in\alpha'_i[y'_{i}, v'_{i}]$ satisfy $d_{D'}(v'_0)=\max\{d_{D'}(u'):u'\in\alpha'_i[y'_{i}, v'_{i}]\}$; see Figure \ref{fig01}. Then we see from Lemma \ref{lem13-0-0} that for each $z'\in \alpha'_i[v'_i, v'_0]$, \begin{eqnarray}\label{cla-3}d'(v'_i,z')\leq a_3 d_{D'}(z').\end{eqnarray} On the other hand, we recall that $v'_{i,0}$ is the image of the point $v_{i,0}\in \alpha_i[y_i,v_i]$ satisfying \eqref{eq-11} and \eqref{eq(hl-41-5)}. Then by \eqref{eq(hl-41-5)} and \eqref{eq--2} we have \begin{eqnarray*}d'(v'_i,v'_{i,0})&\geq& d'(v'_i,y'_i)-d'(v'_{i,0},y'_i)\geq \Big(\frac{1}{2}-\frac{1}{a_5}\Big)d'(y'_i,y'_{i+1})>a_3 d_{D'}(v'_{i,0}). \end{eqnarray*} This means that $v'_0\in \alpha'_i[v'_{i,0}, v'_i]$.
Moreover, we know from Lemma \ref{lem13-0-0} and \eqref{neweqlabel} that $$d_{D'}(v'_0)\geq \frac{1}{a_3}\max\{d'(v'_{i},v'_0), d'(v'_0,y'_i)\}\geq \frac{d'(y'_i,v_i')}{2a_3}\geq \frac{d'(y'_i,y'_{i+1})}{4a_3}.$$ Hence, it follows from \eqref{eq--2} that there exists some point $u'_0\in \alpha'_i[v'_{i,0},v'_{0}]$ such that \begin{eqnarray}\label{eq(W-l-6-2)} d_{D'}(u'_0)=\frac{d'(y'_i,y'_{i+1})}{4a_3},\end{eqnarray} and so Lemma \ref{lem13-0-0} leads to $$d'(y'_i,u'_0)\leq a_3\,d_{D'}(u'_0).$$ This together with \eqref{eq(W-l-6-1)}, \eqref{eq(W-l-6-1add)} and \eqref{eq(W-l-6-2)} shows that $$d'(u'_0,y'_{0,i})\leq d'(u'_0,y'_i)+d'(y'_i,y'_{0,i})\leq 3a_3d_{D'}(u'_0).$$ Now we are ready to finish the proof of Claim \ref{eq--6}. It follows from \eqref{li-1} and Lemma \Ref{Lem-uniform} that \begin{eqnarray*} \log \frac{d_D(u_0)}{d_D(y_{0,i})}&\leq& k_{D}(y_{0,i},u_0) \leq M k_{D'}(y'_{0,i},u'_0) \\ &\leq& 4c^2M\log\Big(1+\frac{d'(u'_0,y'_{0,i})}{\min\{d_{D'}(u'_0), d_{D'}(y'_{0,i})\}}\Big)\\ &<&4c^2M\log (1+3a_3), \end{eqnarray*} which yields that \begin{eqnarray}\label{eq(W-l-6-4)}d_D(u_0)\leq (1+3a_3)^{4c^2M}d_D(y_{0,i})<a_6d_D(y_{0,i}).\end{eqnarray} On the other hand, by Lemma \ref{eq-8} we get \begin{eqnarray*} k_{D}(v_{i,0},u_0)\leq 4{a}^2\log\Big(1+\frac{d_D(u_0)}{d_D(v_{i,0})}\Big),\end{eqnarray*} and by \eqref{li-1}, \eqref{eq--2} and \eqref{eq(W-l-6-2)} we have that \begin{eqnarray*} k_{D}(v_{i,0},u_0) \geq k_{D'}(v'_{i,0},u'_0) \geq\log\frac{d_{D'}(u'_0)}{d_{D'}(v'_{i,0})} \geq\log\frac{a_5}{8a_3},\end{eqnarray*} which yields $$d_D(u_0)\geq a_6d_D(v_{i,0}).$$ Therefore, we infer from \eqref{eq(4-2)} and \eqref{eq-11} that \begin{align*} d_D(u_0)\geq a_6d_D(v_{i,0})=a_6^2 d(y_i, y_{i+1}) \geq \frac{a_6^2}{4}d_D(y_{i+1})\geq \frac{
a_6^2}{4}d_D(y_{0,i}),\end{align*} which contradicts \eqref{eq(W-l-6-4)}. Hence Claim \ref{eq--6} holds. Now we continue the proof of Lemma \ref{eq-0}. We first see from Claim \ref{eq--6} that $$d'(y'_{i+1},v'_i)\geq d'(y'_i,y'_{i+1})-d'(y'_i,v'_i)> \frac{d'(y'_i,y'_{i+1})}{2}\geq d'(y'_i,v'_i).$$ Let $q'_0\in \alpha'_i[y'_i,v'_i]$ and $u'_1\in\alpha'_i[y'_{i+1},v'_i]$ be points such that \begin{eqnarray}\label{112}\frac{d'(y'_i,v'_i)}{2a_3}= d'(q'_0,v_i')\;\;{\rm and}\;\; \frac{d'(y'_i,v'_i)}{2a_3}= d'(u'_1,v_i').\end{eqnarray} Then $$d'(y'_i,q_0')\geq d'(y'_i,v_i')-d'(q'_0,v_i')=(2a_3-1)d'(q'_0,v_i')>d'(q'_0,v_i')$$ and $$d'(y'_{i+1},u_1')>d'(u'_1,v_i').$$ Thus we get from Lemma \ref{lem13-0-0} that \begin{eqnarray}\label{eq(W-l-6'-0)} d_{D'}(q'_0)\geq \frac{d'(q'_0,v_i')}{a_3}\geq \frac{d'(y'_i,v'_i)}{2a^2_3} \;\mbox{ and }\; d_{D'}(u'_1)\geq\frac{d'(u'_1,v_i')}{a_3}\geq \frac{d'(y'_i,v'_i)}{2a_3^2}.\end{eqnarray} Then it follows from Lemma \Ref{Lem-uniform}, \eqref{li-1}, \eqref{112} and \eqref{eq(W-l-6'-0)} that \begin{eqnarray} \label{eq--7} \Big|\log\frac{d_D(u_1)}{d_D(q_0)}\Big| &\leq&k_{D}(u_1,q_0) \leq M k_{D'}(u'_1, q'_0) \\\nonumber &\leq& 4c^2M\log\Big(1+\frac{d'(u'_1,q'_0)}{\min\{d_{D'}(q'_0), d_{D'}(u'_1)\}}\Big) \\\nonumber &\leq& 4c^2M\log\Big(1+\frac{d'(u'_1,v'_i)+d'(v'_i,q'_0)}{\min\{d_{D'}(q'_0), d_{D'}(u'_1)\}}\Big) \\\nonumber &\leq& 4c^2M\log (1+2a_3), \end{eqnarray} which implies that \begin{eqnarray}\label{eq(W-l-6'-2)}\frac{d_D(u_1)}{(1+2a_3)^{4c^2M}}\leq d_D(q_0)\leq (1+2a_3)^{4c^2M}d_D(u_1).\end{eqnarray} On the other hand, by \eqref{eq(h-4-1')}, \eqref{e---1} and Claim \ref{eq--6} we have $$d'(u'_{0,i},y'_i)\geq d_{D'}(u'_{0,i})-d_{D'}(y'_i)\geq
\Big(\frac{1}{2a_1}-\frac{1}{a_5}\Big)d'(y'_{i+1},y'_i)>\frac{1}{2a_1}d'(y'_{i},v'_i).$$ Then there exists $p'_0\in \gamma'[y'_i,u'_{0,i}]$ such that \begin{eqnarray}\label{132}d'(y'_i,p'_0)=\frac{d'(y'_i,v'_i)}{2a_1},\end{eqnarray} see Figure \ref{fig02}. This combined with \eqref{112} and Lemma \Ref{Lem13''} shows that $$d'(p'_0,q'_0)\leq d'(p'_0,y'_i)+d'(y'_i,v'_i)+d'(v'_i,q'_0)\leq \Big(1+\frac{1}{a_1}+\frac{1}{a_3}\Big)d'(y'_i,v'_i)$$ and $$d'(y'_i,p'_0)\leq a_1 d_{D'}(p'_0).$$ Then by \eqref{eq(W-l-6'-0)} and \eqref{132} we have $$\min\{d_{D'}(q'_0),d_{D'}(p'_0)\}\geq \min\Big\{\frac{1}{2a_1^2},\frac{1}{2a_3^2}\Big\}d'(y_i',v_i')>\frac{1}{2a_1^2a_3^2}d'(y_i',v_i').$$ Therefore, Lemma \ref{newlemlabel} and \eqref{li-1} lead to \begin{eqnarray*}\log \frac{d_D(q_0)}{d_D(p_{0})}&\leq& k_{D}(q_0, p_{0}) \leq M k_{D'}(q'_0,p'_0)\\ &\leq& 4c^2M\log\Big(1+\frac{d'(p'_0,q'_0)}{\min\{d_{D'}(q'_0),d_{D'}(p'_0)\}}\Big) \\ &\leq& 4c^2M\log(6a_1^2a_3^2).\end{eqnarray*} We infer from \eqref{eq-11} that \begin{eqnarray}\label{eq-new-add2}d_D(q_0)&\leq& (6a_1^2a_3^2)^{4c^2M} d_D(p_0)\\ \nonumber&\leq& 2(6a_1^2a_3^2)^{4c^2M} d_D(y_i) \\ \nonumber&\leq& 2(6a_1^2a_3^2)^{4c^2M} d(y_i,y_{i+1}).\end{eqnarray} Finally, it follows from Lemma \ref{eq-8} and the choice of $q_0$ and $u_1$ that $$k_{D}(y_i, q_0)\leq 4a^2\log\Big(1+\frac{d_D(q_0)}{d_D(y_i)}\Big)\;\;{\rm and}\;\;k_{D}(u_1, y_{i+1})\leq 4a^2\log\Big(1+\frac{d_D(u_1)}{d_D(y_{i+1})}\Big).$$ Then by Lemma \Ref{Lem-uniform}, \eqref{eq--7}, \eqref{eq(W-l-6'-2)} and \eqref{eq-new-add2} we get \begin{eqnarray}\label{eq(W-l-6'-2')} k_{D}(y_i, y_{i+1})&\leq& k_{D}(y_i, q_0)+k_{D}(q_0, u_1)+k_{D}(u_1, y_{i+1}) \\ \nonumber &\leq& 4a^2 \log\Big(1+\frac{d_D(q_0)}{d_D(y_i)}\Big)+4c^2M \log\Big(1+2a_3\Big) \\ \nonumber &&+4a^2
\log\Big(1+\frac{d_D(u_1)}{d_D(y_{i+1})}\Big)\\ \nonumber &<& a_5 \log\Big(1+\frac{d(y_i, y_{i+1})}{d_D(y_i)}\Big),\end{eqnarray} which together with \eqref{li-newadd-2} shows that $$\frac{d(y_i,y_{i+1})}{2d_D(y_i)}\leq a_5 \log\Big(1+\frac{d(y_i,y_{i+1})}{d_D(y_i)}\Big).$$ A necessary condition for this inequality is $d(y_i, y_{i+1})\leq a_5^2d_D(y_i)$. Hence, by \eqref{eq(W-l-6'-2')}, we know that $$k_{D}(y_i, y_{i+1})\leq a_5\log(1+a_5^2)<a_4,$$ which contradicts \eqref{eq(h-4-2)}. This proves Lemma \ref{eq-0}, and Theorem \ref{thm-1} follows. \section{The proof of Theorem \ref{thm-2}} In this section, we prove Theorem \ref{thm-2} by means of Theorem \ref{thm-1} and some results established in \cite{KLM14}. We begin by recalling the necessary definitions and results. \begin{defn} Let $(X, d,\mu)$ be a metric measure space. Given $Q> 1$, we say that $X$ is {\it $Q$-regular} if there exists a constant $C>0$ such that for each $x\in X$ and $0<r\leq {\operatorname{diam}}(X)$, $$C^{-1}r^Q\leq \mu(B(x,r))\leq Cr^Q.$$\end{defn} \begin{defn} Let $(X,d)$ be a locally compact and rectifiably connected metric space, let $D\subset X$ be a domain (an open rectifiably connected set), and let $C_{gh}\geq 1$ be a constant. We say that $D$ satisfies the {\it $C_{gh}$-Gehring-Hayman inequality} if for all $x$, $y$ in $D$ and for each quasihyperbolic geodesic $\gamma$ joining $x$ and $y$, we have $$\ell(\gamma)\leq C_{gh}\ell(\beta_{x,y}),$$ where $\beta_{x,y}$ is any other curve joining $x$ and $y$ in $D$. In other words, quasihyperbolic geodesics are essentially the shortest curves in $D$. \end{defn} \begin{defn} Let $(X,d)$ be a metric space, let $D\subset X$ be a domain, and let $C_{bs}\geq 1$ be a constant.
We say that $D$ satisfies the {\it $C_{bs}$-ball separation condition} if for all $x$, $y$ in $D$, for each quasihyperbolic geodesic $\gamma$ joining $x$ and $y$, and for every $z\in \gamma$, we have $$B(z,C_{bs}d_D(z)) \cap \beta_{x,y} \not=\emptyset,$$ where $\beta_{x,y}$ is any other curve joining $x$ and $y$ in $D$. \end{defn} \begin{defn} Let $(X,d)$ be a metric space, let $D\subset X$ be a domain, and let $c\geq 1$ be a constant. We say that $D$ is \begin{enumerate} \item {\it $c$-$LLC_1$} if for all $x\in D$ and $r>0$, every pair of points in $B(x,r)$ can be joined by a curve in $B(x,cr)$; \item {\it $c$-$LLC_2$} if for all $x\in D$ and $r>0$, every pair of points in $D\backslash B(x,r)$ can be joined by a curve in $D\backslash B(x,\frac{r}{c})$; \item {\it $c$-$LLC$} if it is both $c$-$LLC_1$ and $c$-$LLC_2$. \end{enumerate} Moreover, $D$ is called {\it linearly locally connected}, or {\it $LLC$}, if it is $c$-$LLC$ for some constant $c\geq 1$. \end{defn} \begin{defn} Let $c\geq 1$. A noncomplete metric space $(X,d)$ is {\it $c$-locally externally connected} ($c$-$LEC$) provided the $c$-$LLC_2$ property holds for all points $x\in X$ and all $r\in (0,d(x)/c)$. \end{defn} In \cite{BH}, Buckley and Herron obtained the following interesting characterization of uniform metric spaces. \begin{Thm}\label{bhthm4.2}$($\cite[Theorem 4.2]{BH}$)$ A minimally nice metric space $(X, d)$ is uniform and $LEC$ if and only if it is quasiconvex, $LLC$ with respect to curves, and satisfies a weak slice condition.
These implications are quantitative.\end{Thm} \begin{defn} A metric space $(X,d)$ is called {\it annularly quasiconvex} if there is a constant $\lambda \geq 1$ so that, for any $x\in X$ and all $0 < r' < r$, each pair of points $y, z$ in $B(x, r)\backslash B(x, r')$ can be joined by a curve $\gamma_{yz}$ in $B(x, \lambda r)\backslash B(x, r'/\lambda)$ such that $\ell(\gamma_{yz})\leq \lambda d(y, z)$. \end{defn} It is not difficult to see that $\lambda$-annular quasiconvexity implies the $C$-$LLC_2$ property, and hence the $C$-$LEC$ property, with $C=2\lambda^2$. \subsection{The proof of Theorem \ref{thm-2}} Necessity: Suppose that $D$ is uniform. Then we know that $D$ is John and quasiconvex. Moreover, it follows from \cite[Theorem 3.6]{BHK} that $(D,k)$ is a roughly starlike Gromov hyperbolic space because $D$ is bounded, where $k$ is the quasihyperbolic metric of $D$. It remains to show that $D$ is $LLC$. Since $X$ is $A$-annularly quasiconvex, it follows that $D$ is $LEC$. Then we deduce from Theorem \Ref{bhthm4.2} that $D$ is $LLC$. Sufficiency: To prove the uniformity of $D$, we only need to prove that every quasihyperbolic geodesic $\gamma$ in $D$ is a uniform arc. We assume that $D$ is $c$-quasiconvex and $\delta$-hyperbolic. By \cite[Theorem 1.2]{KLM14}, we find that $D$ satisfies both the $C_{gh}$-Gehring-Hayman condition and the $C_{bs}$-ball separation condition for some constants $C_{gh},C_{bs}\geq 1$. So to prove the sufficiency, we only need to show that each quasihyperbolic geodesic in $D$ is a cone arc. We first assume that $D$ is $a$-John. Since $D$ is a bounded $\delta$-hyperbolic domain of $X$, we see from \cite[Theorem 3.1]{BB03} that $(D,k)$ is $K$-roughly starlike, because $X$ is annularly quasiconvex. Then the uniformity of $D$ follows from Theorem \ref{thm-1}. We are thus left to assume that $D$ is $c_0$-LLC.
Again by virtue of the Gehring-Hayman condition, we only need to show that there is a uniform upper bound for the constant $\Lambda$ such that $$\min\{\ell(\gamma[x,z]),\ell(\gamma[z,y])\}=\Lambda d_D(z)$$ for each pair of points $x,y\in D$, for any quasihyperbolic geodesic $\gamma$ in $D$ joining $x$ and $y$, and for every point $z\in \gamma$. To this end, we deduce from the $C_{gh}$-Gehring-Hayman condition that $$\ell(\gamma[x,z])\leq cC_{gh}d(x,z)\;\;{\rm and}\;\;\ell(\gamma[y,z])\leq cC_{gh}d(y,z),$$ because the subarcs $\gamma[x,z]$ and $\gamma[z,y]$ are also quasihyperbolic geodesics. Thus we have $$\min\{d(x,z),d(y,z)\}\geq \frac{\Lambda}{cC_{gh}}d_D(z).$$ On the other hand, since $D$ is $c_0$-LLC, we know that there is a curve $\beta$ joining $x$ to $y$ with \begin{eqnarray}\label{eq-new1}\beta\subset X\setminus \overline{B}\Big(z,\frac{\Lambda}{cc_0C_{gh}}d_D(z)\Big).\end{eqnarray} Furthermore, since $\gamma$ is a quasihyperbolic geodesic and $D$ satisfies the $C_{bs}$-ball separation condition, we see that $$\beta\cap B(z,C_{bs}d_D(z))\not=\emptyset,$$ which together with \eqref{eq-new1} shows that $$\Lambda\leq cc_0C_{gh}C_{bs},$$ as required. Hence, the proof of Theorem \ref{thm-2} is complete. \begin{thebibliography}{99} \bibitem{BB03} {\sc Z. M. Balogh and S. M. Buckley}, Geometric characterizations of Gromov hyperbolicity, \textit{Invent. Math.} {\bf 153} (2003), 261--301. \bibitem{BHK} {\sc M. Bonk, J. Heinonen and P. Koskela}, Uniformizing Gromov hyperbolic domains, \textit{Ast\'erisque} {\bf 270} (2001), 1--99. \bibitem{BH} {\sc S. Buckley and D. Herron}, Uniform spaces and weak slice spaces, \textit{Conform. Geom. Dyn.} {\bf 11} (2007), 191--206 (electronic). \bibitem{BHX} {\sc S. M. Buckley, D. Herron and X. Xie}, Metric space inversions, quasihyperbolic distance, and uniform spaces, \textit{Indiana Univ. Math.
J.} {\bf 57} (2008), 837--890. \bibitem{CP17} {\sc S. Chen and S. Ponnusamy}, John disks and $K$-quasiconformal harmonic mappings, \textit{J. Geom. Anal.} {\bf 27} (2017), 1468--1488. \bibitem{GHM} {\sc F. W. Gehring, K. Hag and O. Martio}, Quasihyperbolic geodesics in John domains, \textit{Math. Scand.} {\bf 65} (1989), 75--92. \bibitem{GO} {\sc F. W. Gehring and B. G. Osgood}, Uniform domains and the quasi-hyperbolic metric, \textit{J. Analyse Math.} {\bf 36} (1979), 50--74. \bibitem{GP76} {\sc F. W. Gehring and B. P. Palka}, Quasiconformally homogeneous domains, \textit{J. Analyse Math.} {\bf 30} (1976), 172--199. \bibitem{GNV94} {\sc M. Ghamsari, R. N\"{a}kki and J. V\"{a}is\"{a}l\"{a}}, John disks and extension of maps, \textit{Monatsh. Math.} {\bf 117} (1994), 63--94. \bibitem{GGKN17} {\sc P. Goldstein, C. Guo, P. Koskela and D. Nandi}, Characterizations of generalized John domains in $\mathbb{R}^n$ via metric duality, \textit{arXiv preprint} arXiv:1710.02050, 2017. \bibitem{Guo15} {\sc Ch. Guo}, Uniform continuity of quasiconformal mappings onto generalized John domains, \textit{Ann. Acad. Sci. Fenn. Math.} {\bf 40} (2015), 183--202. \bibitem{Hei89} {\sc J. Heinonen}, Quasiconformal mappings onto John domains, \textit{Rev. Mat. Iberoam.} {\bf 5} (1989), 97--123. \bibitem{HK} {\sc J. Heinonen and P. Koskela}, Quasiconformal maps in metric spaces with controlled geometry, \textit{Acta Math.} {\bf 181} (1998), 1--61. \bibitem{HR93} {\sc J. Heinonen and S. Rohde}, The Gehring-Hayman inequality for quasihyperbolic geodesics, \textit{Math. Proc. Camb. Phil. Soc.} {\bf 114} (1993), 393--405. \bibitem{Jo61} {\sc F. John}, Rotation and strain, \textit{Comm. Pure Appl. Math.} {\bf 14} (1961), 391--413. \bibitem{KLM14} {\sc P. Koskela, P. Lammi and V. Manojlovi\'c}, Gromov hyperbolicity and quasihyperbolic geodesics, \textit{Ann. Sci. \'Ec. Norm. Sup\'er.} {\bf 47} (2014), 975--990. \bibitem{Li} {\sc Y.
Li,} Neargeodesics in John domains in Banach spaces, \textit{Internat. J. Math.} {\bf 25} (5) (2014), 1450041 (17 pages), doi:10.1142/S0129167X14500414. \bibitem{LVZ17} {\sc Y. Li, M. Vuorinen and Q. Zhou}, Characterizations of John spaces, \textit{Monatsh. Math.,} 2018, https://doi.org/10.1007/s00605-018-1231-6. \bibitem{LVZ2} {\sc Y. Li, M. Vuorinen and Q. Zhou}, Weakly quasisymmetric maps and uniform spaces, \textit{Comput. Methods Funct. Theory} {\bf 18} (2018), 689--715. \bibitem{MS78} {\sc O. Martio and J. Sarvas}, Injectivity theorems in plane and space, \textit{Ann. Acad. Sci. Fenn. Ser. A I Math.} {\bf 4} (1978), 383--401. \bibitem{NV} {\sc R. N\"{a}kki and J. V\"{a}is\"{a}l\"{a}}, John disks, \textit{Expo. Math.} {\bf 9} (1991), 3--43. \bibitem{RT2} {\sc A. Rasila and J. Talponen}, On quasihyperbolic geodesics in Banach spaces, \textit{Ann. Acad. Sci. Fenn. Math.} {\bf 39} (1) (2014), 163--173. \bibitem{Vai6} {\sc J. V\"{a}is\"{a}l\"{a}}, Free quasiconformality in Banach spaces. II, \textit{Ann. Acad. Sci. Fenn. Ser. A I Math.} {\bf 16} (1991), 255--310. \bibitem{Vai05} {\sc J. V\"{a}is\"{a}l\"{a}}, Hyperbolic and uniform domains in Banach spaces, \textit{Ann. Acad. Sci. Fenn. Math.} {\bf 30} (2005), 261--302. \bibitem{ZR} {\sc J. V\"{a}is\"{a}l\"{a}}, Hyperbolic and uniform domains in Banach spaces, \textit{Ann. Acad. Sci. Fenn. Math.} {\bf 30} (2005), 261--302. \end{thebibliography} \end{document}
\begin{document} \onehalfspace \title{A bound on the dissociation number} \author{Felix Bock\and Johannes Pardey\and Lucia D. Penso\and Dieter Rautenbach} \date{} \maketitle \begin{center} {\small Institute of Optimization and Operations Research, Ulm University, Ulm, Germany\\ \texttt{$\{$felix.bock,johannes.pardey,lucia.penso,dieter.rautenbach$\}[email protected]} } \end{center} \begin{abstract} The dissociation number ${\rm diss}(G)$ of a graph $G$ is the maximum order of a set of vertices of $G$ inducing a subgraph that is of maximum degree at most $1$. Computing the dissociation number of a given graph is algorithmically hard even when restricted to subcubic bipartite graphs. For a graph $G$ with $n$ vertices, $m$ edges, $k$ components, and $c_1$ induced cycles of length $1$ modulo $3$, we show ${\rm diss}(G)\geq n-\frac{1}{3}\Big(m+k+c_1\Big)$. Furthermore, we characterize the extremal graphs in which every two cycles are vertex-disjoint.\\[3mm] {\bf Keywords:} Dissociation set \end{abstract} \section{Introduction} We consider finite, simple, and undirected graphs, and use standard terminology. A set $D$ of vertices of a graph $G$ is a {\it dissociation set} in $G$ if the subgraph $G[D]$ of $G$ induced by $D$ has maximum degree at most $1$. The {\it dissociation number ${\rm diss}(G)$} of $G$ is the maximum order of a dissociation set in $G$. The dissociation number is algorithmically hard even when restricted, for instance, to subcubic bipartite graphs \cite{bocalo,ordofigowe,ya}. Fast exact algorithms \cite{kakasc}, (randomized) approximation algorithms \cite{kakasc,hobu}, and fixed parameter tractability \cite{ts} have been studied for this parameter or its dual, the {\it $3$-path (vertex) cover} number. 
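As a quick sanity check (not part of the paper), the dissociation number of a small graph can be computed by brute force directly from the definition; the encoding below, with vertices $0,\dots,n-1$ and an explicit edge list, is our own:

```python
from itertools import combinations

def diss(n, edges):
    # Brute force: search vertex subsets from largest to smallest and
    # return the first size whose induced subgraph has max degree <= 1.
    for r in range(n, 0, -1):
        for subset in combinations(range(n), r):
            s = set(subset)
            deg = dict.fromkeys(s, 0)
            ok = True
            for u, v in edges:
                if u in s and v in s:
                    deg[u] += 1
                    deg[v] += 1
                    if deg[u] > 1 or deg[v] > 1:
                        ok = False
                        break
            if ok:
                return r
    return 0

print(diss(3, [(0, 1), (1, 2)]))                          # P_3 -> 2
print(diss(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))  # C_5 -> 3
```

The exponential running time is of course consistent with the hardness results cited above; the routine is only meant for checking small examples by hand.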
Several lower bounds on the dissociation number have been proposed: If $G$ is a graph of order $n$ and size $m$, then \begin{eqnarray}\label{e0} {\rm diss}(G) & \geq & \begin{cases} \frac{n}{\left\lceil\frac{\Delta+1}{2}\right\rceil} & \mbox{, if $G$ has maximum degree $\Delta$ \cite{brkakase},}\\ \frac{4}{3}\sum\limits_{u\in V(G)}\frac{1}{d_G(u)+1} & \mbox{, if $G$ has no isolated vertex \cite{brkakase},}\\ \sum\limits_{u\in V(G)}\frac{1}{d_G(u)+1}+\sum\limits_{uv\in E(G)}{|N_G[u]\cup N_G[v]|\choose 2}^{-1} & \mbox{, \cite{goharasc},}\\ \frac{n}{2} & \mbox{, if $G$ is outerplanar \cite{brkakase},}\\ \frac{2n}{3} & \mbox{, if $G$ is a tree \cite{brkakase},}\\ \frac{2n}{k+2}-\frac{m}{(k+1)(k+2)}& \mbox{, if $k=\left\lceil\frac{m}{n}\right\rceil-1$ \cite{brjakaseta}, and}\\ \frac{2n}{3}-\frac{m}{6}& \mbox{, \cite{brkakase}.} \end{cases} \end{eqnarray} The results in the present paper were inspired by the bounds in (\ref{e0}). Our main result is the following. \begin{theorem}\label{theorem1} If $G$ is a graph with $n$ vertices, $m$ edges, $k$ components, and $c_1$ induced cycles of length $1$ modulo $3$, then \begin{eqnarray}\label{e1} {\rm diss}(G) & \geq & n-\frac{1}{3}\Big(m+k+c_1\Big). \end{eqnarray} \end{theorem} Theorem \ref{theorem1} generalizes the lower bound $2n/3$ for trees of order $n$ in (\ref{e0}), strengthens the general lower bound $\frac{2n}{3}-\frac{m}{6}$ in (\ref{e0}) for many graphs, and almost implies the lower bound $n/2$ for subcubic graphs of order $n$, which follows from the first bound in (\ref{e0}). In the proof of Theorem \ref{theorem1}, graphs in which all cycles are pairwise vertex-disjoint play an essential role. We call such graphs {\it cycle-disjoint}; their components are restricted cactus graphs, where a {\it cactus} is a connected graph in which every block is either a $K_2$ or a cycle. As a step towards the understanding of all extremal graphs for Theorem \ref{theorem1}, we consider the extremal cycle-disjoint graphs in more detail.
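On cycles every quantity in Theorem \ref{theorem1} is explicit ($n=m=\ell$, $k=1$, $c_1=1$ exactly when $\ell\equiv 1\,{\rm mod}\,3$, and ${\rm diss}(C_\ell)=\lfloor 2\ell/3\rfloor$), so the bound (\ref{e1}) can be checked mechanically; the helper names in the following sketch are ours:

```python
def diss_cycle(l):
    # diss(C_l) = floor(2l/3) for every l >= 3.
    return 2 * l // 3

def rhs(n, m, k, c1):
    # Right-hand side of the bound: n - (m + k + c1)/3.
    return n - (m + k + c1) / 3

for l in range(3, 60):
    c1 = 1 if l % 3 == 1 else 0  # C_l is its only induced cycle
    assert diss_cycle(l) >= rhs(l, l, 1, c1)
    # Equality holds exactly when l is not divisible by 3:
    assert (diss_cycle(l) == rhs(l, l, 1, c1)) == (l % 3 != 0)
```

The equality pattern matches the role of the cycles of length not $0$ modulo $3$ as extremal building blocks below.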
We propose three extension operations $(O_1)$, $(O_2)$, and $(O_3)$ applicable to a given graph $G'$, attaching a $P_3$ or a cycle of length not $0$ modulo $3$ by a bridge to $G'$, as illustrated in Figure \ref{fig1}. It is easy to see that applying one of these operations to a graph that satisfies (\ref{e1}) with equality yields a graph that satisfies (\ref{e1}) with equality. Since $P_3$ and the cycles of lengths not $0$ modulo $3$ satisfy (\ref{e1}) with equality, this already allows us to construct quite a rich family of extremal graphs, yet not all of them. \begin{figure} \caption{Operations constructing an extremal graph from a smaller extremal graph $G'$.} \label{fig1} \end{figure} The two operations $(O_1)$ and $(O_2)$ are sufficient for the constructive characterization of all trees $T$ of order $n$ with ${\rm diss}(T)=2n/3$, that is, of all trees that are extremal for the bound from \cite{brkakase} stated in (\ref{e0}). Let ${\cal T}$ be the set of all trees that arise from $P_3$ by repeated applications of the two operations $(O_1)$ and $(O_2)$, attaching a new $P_3$ by a bridge to trees in ${\cal T}$. \begin{theorem}\label{theorem2} For a tree $T$ of order $n$, the following statements are equivalent. \begin{enumerate}[(a)] \item ${\rm diss}(T)=\frac{2n}{3}$. \item $T\in {\cal T}$. \item $n\equiv 0\,{\rm mod}\, 3$, and, for every vertex $y$ of $T$, at most two components of $T-y$ have order not $0$ modulo $3$. \end{enumerate} \end{theorem} Next to the three simple operations illustrated in Figure \ref{fig1}, we introduce one slightly more complicated operation involving so-called {\it ((very) good) spiked cycles}: For positive integers $\ell$ and $k$ with $\ell\geq \max\{ 3,k\}$, and indices $i_1,\ldots,i_k\in [\ell]$ with $i_1<i_2<\ldots<i_k$, a {\it spiked cycle $C^*$ with $k$ spikes at $\{ i_1,\ldots,i_k\}$} arises from the cycle $C:u_1u_2\ldots u_\ell u_1$ of length $\ell$ by attaching a new endvertex $v_{i_j}$ to $u_{i_j}$ for every $j\in [k]$.
The spiked cycle $C^*$ is {\it good} if either $k=1$ and $\ell\equiv 1\,{\rm mod}\,3$, or $k\geq 2$, \begin{itemize} \item $i_{j+1}-i_j\equiv 2\,{\rm mod}\,3$ for every $j\in [k-1]$, and \item $\ell+i_1-i_k\equiv 1\,{\rm mod}\,3$, \end{itemize} that is, the $k$ paths in $C^*$ between vertices of degree $3$ whose internal vertices have degree $2$ have lengths $2,\ldots,2$, and $1$ modulo $3$. The spiked cycle $C^*$ is {\it very good} if it is good and \begin{itemize} \item $\ell\not\equiv 1\,{\rm mod}\,3$, \end{itemize} which implies, in particular, that $k\geq 2$. See Figure \ref{fig2} for an illustration. \begin{figure} \caption{A very good spiked cycle with $\ell=15$ and $k=5$ spikes at $\{ i_1,\ldots,i_k\}$.} \label{fig2} \end{figure} Let ${\cal C}$ be the set of all graphs that arise from the graphs in $${\cal C}_0=\Big\{ P_3\Big\} \cup \Big\{C_\ell: \ell\in \mathbb{N},\, \ell\geq 3,\mbox{ and }\ell\not\equiv 0\,{\rm mod}\, 3\Big\} \cup \Big\{C^*: C^*\mbox{ is a very good spiked cycle}\Big\}$$ by repeated applications of the three operations $(O_1)$, $(O_2)$, and $(O_3)$, as well as the fourth operation $(O_4)$ of forming the disjoint union of some graph $G'$ in ${\cal C}$ with a very good spiked cycle $C^*$, and adding a bridge between $V(G')$ and $V(C^*)$. \begin{lemma}\label{lemma2} All graphs in ${\cal C}$ satisfy (\ref{e1}) with equality. Furthermore, for every vertex $u$ of every graph $G$ in ${\cal C}$, the graph $G$ has a maximum dissociation set not containing $u$. \end{lemma} As our final result, we show that ${\cal C}$ contains all connected cycle-disjoint extremal graphs for Theorem \ref{theorem1}. Figure \ref{figex} shows two extremal graphs that are not cycle-disjoint. \begin{theorem}\label{theorem3} A connected cycle-disjoint graph satisfies (\ref{e1}) with equality if and only if it belongs to ${\cal C}$. \end{theorem} \begin{figure} \caption{Two graphs $G$ that satisfy (\ref{e1}) with equality.} \label{figex} \end{figure} All proofs are given in the next section.
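The residue conditions defining (very) good spiked cycles are easy to verify programmatically; in the sketch below the function names and the concrete spike positions for $\ell=15$, $k=5$ are our own choice and are not read off Figure \ref{fig2}:

```python
def is_good(l, spikes):
    # spikes: strictly increasing spike positions i_1 < ... < i_k in 1..l.
    k = len(spikes)
    if k == 1:
        return l % 3 == 1
    # All k-1 "inner" gaps are 2 mod 3; the wrap-around gap is 1 mod 3.
    return (all((spikes[j + 1] - spikes[j]) % 3 == 2 for j in range(k - 1))
            and (l + spikes[0] - spikes[-1]) % 3 == 1)

def is_very_good(l, spikes):
    # Very good: good and l not congruent to 1 mod 3 (forcing k >= 2).
    return is_good(l, spikes) and l % 3 != 1

print(is_good(7, [2]))                    # one spike, 7 = 1 mod 3 -> True
print(is_very_good(15, [1, 3, 5, 7, 9]))  # gaps 2,2,2,2 and wrap 7 -> True
```

Note that the $k$ gap residues sum to $\ell$ modulo $3$, so for a good spiked cycle with $k\geq 2$ we automatically have $\ell\equiv 2(k-1)+1\,{\rm mod}\,3$.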
\section{Proofs} \begin{proof}[Proof of Theorem \ref{theorem1}] We prove the statement by contradiction, and suppose that $G$ is a counterexample of minimum order. Clearly, this implies that $G$ is connected, that is, we have $k=1$. If $G$ is not cycle-disjoint, then there is a vertex $u$ of $G$ such that $G'=G-u$ has $k'\leq d_G(u)-2$ components, and the choice of $G$ implies the contradiction \begin{eqnarray*} {\rm diss}(G) &\geq& {\rm diss}(G') \geq (n-1)-\frac{1}{3}\Big((m-d_G(u))+k'+c_1(G')\Big) \geq n-\frac{1}{3}\Big(m+1+c_1\Big), \end{eqnarray*} where $c_1(G')$ denotes the number of induced cycles of length $1$ modulo $3$ in $G'$, and we used the obvious fact that $c_1(G')\leq c_1$. Hence, the graph $G$ is cycle-disjoint. Using the bound for trees in (\ref{e0}), and ${\rm diss}(C_\ell)=\left\lfloor\frac{2\ell}{3}\right\rfloor$ for every integer $\ell\geq 3$, it follows easily that $G$ is neither a tree nor a cycle. We consider a longest path $P$, say $P:BvB'\ldots$, in the block-cutvertex tree \cite{ba} of $G$, that is, $B$ and $B'$ are distinct blocks of $G$, $v$ is a cutvertex of $G$ that belongs to $B$ and $B'$, and all blocks of $G$ that contain $v$ --- except for possibly the block $B'$ --- are endblocks. Let ${\cal B}$ be the set of all blocks of $G$ that contain $v$ and are distinct from $B'$. Let ${\cal B}$ contain $p$ blocks that are $K_2$s, and $q$ blocks that are cycles. Since $G$ is cycle-disjoint, we have $q\in \{ 0,1\}$. The graph $G'=G-\bigcup\limits_{H\in {\cal B}}V(H)$ is connected and cycle-disjoint. Since $B'$ is a $K_2$ or a cycle, the number $d$ of neighbors of $v$ in $V(G')$ is $1$ or $2$. Note that $c_1(G')\leq c_1$, and $c_1(G')\leq c_1-1$ if ${\cal B}$ contains a cycle of length $1$ modulo $3$. See Figure \ref{figbb} for an illustration. \begin{figure} \caption{The local configuration within $G$ where $p=4$ and $q=1$.} \label{figbb} \end{figure} First, suppose that $q=1$, that is, one block in ${\cal B}$ is a cycle $C_\ell$. 
Since $G$ is cycle-disjoint, we obtain that $B'$ is a $K_2$, and, hence, $d=1$. Since $C_\ell$ has a maximum dissociation set avoiding $v$, the choice of $G$ implies the contradiction \begin{eqnarray} {\rm diss}(G) &\geq & {\rm diss}(G')+p+\left\lfloor\frac{2\ell}{3}\right\rfloor\label{ex0}\\ &\geq & \left(\underbrace{n-p-\ell}_{n(G')}\right) -\frac{1}{3}\left(\underbrace{\left(m-d-p-\ell\right)}_{m(G')}+1+c_1(G')\right) +p+\left\lfloor\frac{2\ell}{3}\right\rfloor\label{ex1}\\ &\geq & \Big(n-\ell\Big) -\frac{1}{3}\Big(\left(m-1-\ell\right)+1+c_1(G')\Big) +\left\lfloor\frac{2\ell}{3}\right\rfloor\label{ex2}\\ &\geq & n-\frac{1}{3}\Big(m+1+c_1\Big),\label{ex3} \end{eqnarray} where the final inequality uses \begin{eqnarray} -\ell+\frac{\ell+1}{3}-\frac{c_1(G')}{3}+\left\lfloor\frac{2\ell}{3}\right\rfloor \geq -\frac{c_1}{3},\label{ex4} \end{eqnarray} which follows from the relation between $c_1(G')$ and $c_1$ mentioned above. Hence, no block in ${\cal B}$ is a cycle. If either $p\geq 2$, or $p=1$ and $B'$ is a cycle, then $m(G')\leq m-3$, and the choice of $G$ implies the contradiction \begin{eqnarray*} {\rm diss}(G) &\geq & {\rm diss}(G')+p \geq \Big(n-1-p\Big) -\frac{1}{3}\Big(m(G')+1+c_1(G')\Big) +p \geq n-\frac{1}{3}\Big(m+1+c_1\Big). \end{eqnarray*} Hence, we obtain that $p=1$ and that $B'$ is a $K_2$, which implies that $v$ has degree $2$. Let $w$ be the unique neighbor of $v$ that is not an endvertex. The graph $G''=G-N_G[v]=G'-w$ has $k''\leq d_G(w)-1$ components, and $G''$ has $c_1(G'')\leq c_1$ induced cycles of length $1$ modulo $3$. Since $$m(G'')+k''\leq (m-d_G(w)-1)+(d_G(w)-1)=m+1-3,$$ the choice of $G$ implies the contradiction \begin{eqnarray*} {\rm diss}(G) &\geq & {\rm diss}(G'')+2 \geq \Big(n-3\Big) -\frac{1}{3}\Big(m(G'')+k''+c_1(G'')\Big)+2 \geq n-\frac{1}{3}\Big(m+1+c_1\Big), \end{eqnarray*} which completes the proof.
\end{proof} Applied to a subcubic graph, the first reduction considered in the proof of Theorem \ref{theorem1} corresponds to the removal of a vertex of degree $3$ that is not a cutvertex. Repeatedly applying this reduction, the set of removed vertices is a {\it nonseparating independent set}; a notion that is relevant in the context of feedback vertex sets of subcubic graphs \cite{sp,uekago}. \begin{proof}[Proof of Theorem \ref{theorem2}] (b) $\Rightarrow$ (a): Clearly, $P_3$ satisfies (a). If $T$ arises from a tree $T'$ that satisfies (a) by applying operation $(O_1)$, then some maximum dissociation set of $T$ consists of $u$, $v$, and some maximum dissociation set of $T'$, which implies that $T$ satisfies (a). Similarly, if $T$ arises from a tree $T'$ that satisfies (a) by applying operation $(O_2)$, then some maximum dissociation set of $T$ consists of $u$, $u'$, and some maximum dissociation set of $T'$, which implies that $T$ satisfies (a). A simple inductive argument implies that all trees in ${\cal T}$ satisfy (a). \noindent (a) $\Rightarrow$ (c): Let $T$ satisfy (a). By induction on the order $n$ of $T$, we prove (c). Since $P_3$ is the only star that satisfies (a) and $P_3$ satisfies (c), we may assume that $n\geq 4$ and that $T$ has diameter at least $3$. Let $P:uvwx\ldots$ be a longest path in $T$. Since $$\frac{2n}{3} ={\rm diss}(T) \geq |N_T(v)\setminus \{ w\}|+{\rm diss}\Big(T-(N_T[v]\setminus \{ w\})\Big) \stackrel{(\ref{e0})}{\geq} (d_T(v)-1)+\frac{2}{3}(n-d_T(v)),$$ we obtain $d_T(v)\in \{ 2,3\}$. First, suppose that $d_T(v)=2$. Let $T_1,\ldots,T_p$ be the components of $T-\{ u,v,w\}$, and let $n_i$ be the order of $T_i$. Since $$\frac{2n}{3} ={\rm diss}(T) \geq |\{ u,v\}|+\sum_{i=1}^p{\rm diss}(T_i) \stackrel{(\ref{e0})}{\geq} 2+\sum_{i=1}^p\frac{2n_i}{3} =\frac{2n}{3},$$ equality holds throughout this inequality chain, which implies that each $T_i$ satisfies (a). By induction, each $T_i$ satisfies (c). Now, let $y$ be any vertex of $T$. 
If $d_T(y)\leq 2$, then $T-y$ has at most two components. Now, let $d_T(y)\geq 3$. If $y\in V(T_j)$, then the order of the component of $T-y$ that contains $w$ is either $3+\sum_{i\not=j}n_i$, if $y$ is the neighbor of $w$ in $V(T_j)$, or $n(K)+3+\sum_{i\not=j}n_i$, where $K$ is the component of $T_j-y$ that contains the neighbor of $w$ in $V(T_j)$. Since each $n_i$ is $0$ modulo $3$, the term $3+\sum_{i\not=j}n_i$ is $0$ modulo $3$. Since $T_j$ satisfies (c), the forest $T_j-y$ has at most two components of order not $0$ modulo $3$, which implies that also $T-y$ has at most two components of order not $0$ modulo $3$. Finally, if $y=w$, then the only component of $T-y$ of order not $0$ modulo $3$ consists of $u$ and $v$. Altogether, we obtain that $T$ satisfies (c). Next, suppose that $d_T(v)=3$. Since $$\frac{2n}{3} ={\rm diss}(T) \geq |N_T(v)\setminus \{ w\}|+{\rm diss}\Big(T-(N_T[v]\setminus \{ w\})\Big) \stackrel{(\ref{e0})}{\geq} 2+\frac{2(n-3)}{3} =\frac{2n}{3},$$ the tree $T-(N_T[v]\setminus \{ w\})$ satisfies (a), and, hence, by induction, also (c). Arguing similarly to the above, it follows easily that $T$ satisfies (c). \noindent (c) $\Rightarrow$ (b): Let $T$ satisfy (c). By induction on the order $n$ of $T$, we prove (b). Since $P_3$ is the only star that satisfies (c) and $P_3$ satisfies (b), we may assume that $n\geq 4$ and that $T$ has diameter at least $3$. Let $v$ be a vertex of degree at least $2$ such that all but exactly one neighbor $w$ of $v$ are endvertices. Since $T-v$ has $d_T(v)-1$ components of order $1$, we obtain, by (c), that $d_T(v)\in \{ 2,3\}$. If $d_T(v)=3$, then it is easy to see that $T'=T-(N_T[v]\setminus \{ w\})$ satisfies (c), and, hence, by induction, also (b). Since $T$ arises from $T'$ by applying operation $(O_2)$, it follows in this case that $T$ satisfies (b). By symmetry, we may assume that every vertex $v$ of degree at least $2$, such that all but exactly one neighbor of $v$ are endvertices, has degree $2$.
Let $P:uvwx\ldots$ be a longest path in $T$, and let $T''=T-\{ u,v,w\}$. Since $T$ satisfies (c), it follows easily that each component of $T''$ satisfies (c), and, hence, by induction, also (b). If $T''$ is connected, then $T$ arises from $T''$ by applying operation $(O_1)$, and it follows that $T$ satisfies (b). Hence, we may assume that $T''$ has at least two components. By the choice of $P$, this implies that in $T$ all vertices of some component $K$ of $T''$ are within distance at most $2$ from $w$. Since $n(K)\geq 3$, the neighbor $v'$ of $w$ in $V(K)$ is of degree at least $3$, and all but exactly one neighbor of $v'$ are endvertices, which is a contradiction and completes the proof. \end{proof} The trees in ${\cal T}$ have the following useful property. \begin{lemma}\label{lemma0} For every vertex $u$ of every tree $T$ of order $n$ with ${\rm diss}(T)=2n/3$, the tree $T$ has a maximum dissociation set not containing $u$. \end{lemma} \begin{proof} The proof is by induction on $n$. For $n=3$, the statement is obvious. Now, let $n>3$. By Theorem \ref{theorem2}, the tree $T$ arises from the disjoint union of a tree $T'$ of order $n'$ with ${\rm diss}(T')=2n'/3$ and a copy of $P_3$ by adding a bridge between some vertex $x$ in $T'$ and some vertex $y$ in the $P_3$. Now, let $u$ be any vertex of $T$. If $u$ is a vertex of $T'$, then adding the two vertices of the $P_3$ that are distinct from $y$ to a maximum dissociation set of $T'$ not containing $u$ yields a maximum dissociation set of $T$ not containing $u$. If $u$ is a vertex of the $P_3$, then adding the two vertices of the $P_3$ that are distinct from $u$ to a maximum dissociation set of $T'$ not containing $x$ yields a maximum dissociation set of $T$ not containing $u$. \end{proof} \begin{lemma}\label{lemma1} If $C^*$ is a spiked cycle of order $n$ with $k$ spikes, then ${\rm diss}(C^*)\geq \frac{2n-1}{3}$ with equality if and only if $C^*$ is good.
Furthermore, if $C^*$ is good and $u$ is a vertex of $C^*$ such that, for $k=1$, the degree of $u$ is at least $2$, then the good spiked cycle $C^*$ has a maximum dissociation set not containing $u$. \end{lemma} \begin{proof} Since all statements are easily verified for $k=1$, we assume now that $k\geq 2$. Let $C^*$ be a spiked cycle with $k$ spikes at $\{ i_1,\ldots,i_k\}$, where we use the notation from the definition of spiked cycles. The graph $T=C^*-\{u_{i_1},v_{i_1}\}$ is a tree of order $n-2$, and we obtain that \begin{eqnarray}\label{e2} {\rm diss}(C^*) &\geq& {\rm diss}(T)+|\{ v_{i_1}\}| \stackrel{(\ref{e0})}{\geq} \frac{2(n-2)}{3}+1=\frac{2n-1}{3}. \end{eqnarray} Now, suppose that (\ref{e2}) holds with equality throughout. This implies that $n\equiv 2\,{\rm mod}\,3$ and that ${\rm diss}(T)=\frac{2(n-2)}{3}$. By Theorem \ref{theorem2}, the tree $T$ satisfies (c) of Theorem \ref{theorem2}. A path in $C^*$ between vertices of degree $3$ whose internal vertices have degree $2$ is called {\it special}. If $i_2-i_1\equiv 0\,{\rm mod}\,3$, then $T-u_{i_2}$ has three components of order not $0$ modulo $3$, which is a contradiction. See Figure \ref{figi1i2} for an illustration. \begin{figure} \caption{The graph $T-u_{i_2}$ in the case $i_2-i_1\equiv 0\,{\rm mod}\,3$.} \label{figi1i2} \end{figure} Hence, by symmetry, no special path has length $0$ modulo $3$. If $i_2-i_1,i_{j+1}-i_j\equiv 1\,{\rm mod}\,3$, and $i_3-i_2,i_4-i_3,\ldots,i_j-i_{j-1}\equiv 2\,{\rm mod}\,3$ for some $j\in \{ 2,\ldots,k-1\}$, then $T-u_{i_{j+1}}$ has three components of order not $0$ modulo $3$, which is a contradiction. Hence, by symmetry, at most one special path has length $1$ modulo $3$. Since $n\equiv 2\,{\rm mod}\,3$, not all special paths have lengths $2$ modulo $3$. Altogether, we obtain that exactly one special path has length $1$ modulo $3$ while the other $k-1$ special paths have lengths $2$ modulo $3$, that is, the spiked cycle $C^*$ is good. Next, suppose that the spiked cycle $C^*$ is good.
By symmetry, we may assume $i_2-i_1\equiv 1\,{\rm mod}\,3$. This easily implies that some maximum dissociation set $D$ of $C^*$ does not contain both $u_{i_1}$ and $u_{i_2}$. By symmetry, we may assume that $D$ does not contain $u_{i_1}$. Again, let $T=C^*-\{u_{i_1},v_{i_1}\}$. In view of $D$, we have ${\rm diss}(C^*)={\rm diss}(T)+1$. Since $C^*$ is good, it is easy to see that $T$ satisfies (c) of Theorem \ref{theorem2}. Hence, by Theorem \ref{theorem2}, we obtain ${\rm diss}(C^*)={\rm diss}(T)+1=\frac{2(n-2)}{3}+1=\frac{2n-1}{3}$. By Lemma \ref{lemma0}, the tree $T$ has a maximum dissociation set avoiding any specified vertex. This easily implies that $C^*$ has a maximum dissociation set avoiding any specified vertex distinct from $v_{i_1}$. Since $\tilde{C}^*=C^*-v_{i_1}$ is a spiked cycle of order $n-1\equiv 1\,{\rm mod}\,3$, we have ${\rm diss}\left(\tilde{C}^*\right)\geq \left\lceil\frac{2(n-1)-1}{3}\right\rceil=\frac{2n-1}{3}$, which implies that a maximum dissociation set of $\tilde{C}^*$ is a maximum dissociation set of $C^*$ avoiding $v_{i_1}$. This completes the proof. \end{proof} \begin{proof}[Proof of Lemma \ref{lemma2}] By Lemma \ref{lemma1}, the graphs in ${\cal C}_0$ satisfy (\ref{e1}) with equality and they have maximum dissociation sets avoiding any specified vertex. Recall that the four operations $(O_1)$ to $(O_4)$ consist of adding a disjoint copy of a graph from ${\cal C}_0$ to some graph $G'$ and connecting this copy by a bridge to $G'$. It follows that applying one of the four operations to a graph that satisfies (\ref{e1}) with equality yields a graph that satisfies (\ref{e1}) with equality. Now, an inductive argument implies that all graphs in ${\cal C}$ satisfy (\ref{e1}) with equality. The existence of maximum dissociation sets avoiding specified vertices follows easily by induction arguing as in the proof of Lemma \ref{lemma0} and using Lemma \ref{lemma1}.
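For instance, for operation $(O_1)$, which attaches a copy of $P_3$ by a bridge (as in the proof of Lemma \ref{lemma0}), the bookkeeping can be checked directly: the operation increases $n$ by $3$, increases $m$ by $2+1=3$ (the two edges of the $P_3$ plus the bridge), and leaves $c_1$ unchanged, so the right-hand side of (\ref{e1}) increases by
$$3-\frac{1}{3}\cdot 3=2,$$
which is matched on the left-hand side by adding to a maximum dissociation set the two vertices of the attached $P_3$ that are not incident with the bridge. The remaining operations are verified analogously.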
\end{proof} \begin{proof}[Proof of Theorem \ref{theorem3}] We say that a connected cycle-disjoint graph is {\it extremal} if it satisfies (\ref{e1}) with equality. By Lemma \ref{lemma2}, all graphs in ${\cal C}$ are extremal. For the converse, let $G$ be extremal. By induction on the order $n$, we show that $G\in {\cal C}$. If $G$ is a tree, then Theorem \ref{theorem2} implies $G\in {\cal T}\subseteq {\cal C}$. If $G$ is a cycle of length $\ell$, then $\ell$ is not $0$ modulo $3$ and $G\in {\cal C}_0\subseteq {\cal C}$. If $G$ is a spiked cycle, then Lemma \ref{lemma1} implies that $G$ is good. If $G$ is not very good, then $c_1=1$, contradicting the fact that $G$ is extremal. Hence, the graph $G$ is a very good spiked cycle, and $G\in {\cal C}_0\subseteq {\cal C}$. Now, let $G$ be neither a tree nor a cycle nor a spiked cycle. We choose $P:BvB'\ldots$, ${\cal B}$, $p$, $q\in \{ 0,1\}$, $G'$, $d$, and $c_1(G')$ exactly as in the proof of Theorem \ref{theorem1}. First, we assume that $q=1$. Since $G$ is cycle-disjoint, this implies that $G$ arises by adding the bridge $B'$ between $\bigcup\limits_{H\in {\cal B}}H$ and $G'$. Arguing as in (\ref{ex0}) to (\ref{ex3}) using (\ref{ex4}), we obtain that all five inequalities (\ref{ex0}) to (\ref{ex4}) hold with equality. This implies that $p=0$, that $G'$ is extremal, and that $\ell\not\equiv 0\,{\rm mod}\, 3$. By induction, we obtain that $G'\in {\cal C}$. Since $G$ is constructed by applying operation $(O_3)$ to $G'$, we obtain $G\in {\cal C}$. Hence, we may assume that $q=0$. Next, we assume that $p\geq 2$. Note that $p+d\geq 3$. By Theorem \ref{theorem1}, we obtain \begin{eqnarray*} {\rm diss}(G) &\geq & p+{\rm diss}(G')\\ &\geq & p+\Big(n-p-1\Big)-\frac{1}{3}\Big((m-p-d)+1+c_1(G')\Big)\\ &\geq & n-\frac{1}{3}\Big(m+1+c_1\Big). \end{eqnarray*} Since equality holds throughout this inequality chain, we obtain that $G'$ is extremal, and that $p+d=3$, which implies $p=2$ and $d=1$. 
By induction, we obtain that $G'\in {\cal C}$. Since $G$ is constructed by applying operation $(O_2)$ to $G'$, we obtain $G\in {\cal C}$. Hence, we may assume that $p=1$. Next, we assume that $v$ does not lie on a cycle, that is, the block $B'$ is a $K_2$ and the degree of $v$ is $2$. Let $w$ be the neighbor of $v$ in $B'$. Let $G''=G-N_G[v]=G'-w$. If $G''$ has $k''$ components, then $k''\leq d_G(w)-1$. By Theorem \ref{theorem1}, we obtain \begin{eqnarray*} {\rm diss}(G) &\geq & 2+{\rm diss}(G'')\\ &\geq & 2+\Big(n-3\Big)-\frac{1}{3}\Big((m-d_G(w)-1)+k''+c_1(G'')\Big)\\ &\geq & n-\frac{1}{3}\Big(m+1+c_1\Big). \end{eqnarray*} Since equality holds throughout this inequality chain, each component of $G''$ is extremal, and $k''=d_G(w)-1$, which implies that $w$ is connected by a bridge to each component of $G''$. By induction, each component of $G''$ lies in ${\cal C}$. If $k''=1$, then $G$ is constructed by applying operation $(O_1)$ to $G''$, and we obtain $G\in {\cal C}$. Hence, we may assume that $k''=2$. By symmetry, considering an alternative choice for the path $P$, we may assume that one component $K$ of $G''$ has order $2$, which contradicts $K\in {\cal C}$. Hence, we may assume that $v$ lies on a cycle, that is, the block $B'$ is a cycle. Since $G$ is not a spiked cycle, it follows, by symmetry, considering alternative choices for the path $P$, that $G$ arises from the disjoint union of \begin{itemize} \item a spiked cycle $G_0$ of order $n_0$ whose unique cycle is $B'$, \item a connected cycle-disjoint graph $G_1$ of order $n_1$, and \item a set $S$ of $s$ isolated vertices, \end{itemize} with $n_1+s\geq 2$, by adding all possible $1+s$ edges between a vertex $w$ of $G_0$ with $d_{G_0}(w)=2$ and all $1+s$ vertices in $\{ x\}\cup S$, where $x$ is some vertex of $G_1$. See Figure \ref{figg0g1} for an illustration; the indicated internal structure of $G_1$ is relevant only later. Note that $m=m(G_0)+m(G_1)+s+1$. 
\begin{figure} \caption{Local structure of $G$.} \label{figg0g1} \end{figure} First, we assume that $n_0\equiv 2\,{\rm mod}\,3$. We now show that $G_0$ has a dissociation set $D_0$ of order $\frac{2n_0-1}{3}=n_0-\frac{1}{3}\Big(m(G_0)+1\Big)$ that does not contain $w$. If $G_0$ is good, then Lemma \ref{lemma1} implies the existence of $D_0$. If $G_0$ is not good, then, since $\frac{2n_0-1}{3}$ is an integer, Lemma \ref{lemma1} implies ${\rm diss}(G_0)\geq \frac{2n_0-1}{3}+1$, which also implies the existence of $D_0$. Note that, in the latter case, the set $D_0$ is not a maximum dissociation set of $G_0$. Using $D_0$ and Theorem \ref{theorem1}, we obtain \begin{eqnarray*} {\rm diss}(G) &\geq & s+\left(n_0-\frac{1}{3}\Big(m(G_0)+1\Big)\right)+{\rm diss}(G_1)\\ &\geq & s+\left(n_0-\frac{1}{3}\Big(m(G_0)+1\Big)\right) +\left(n_1-\frac{1}{3}\Big(m(G_1)+1+c_1(G_1)\Big)\right)\\ &\geq & n-\frac{1}{3}\Big((m-s)+1+c_1\Big)\\ &\geq & n-\frac{1}{3}\Big(m+1+c_1\Big). \end{eqnarray*} Since equality holds throughout this inequality chain, we obtain that the graph $G_1$ is extremal, that $s=0$, and that $c_1(G_1)=c_1$. By induction, we obtain $G_1\in {\cal C}$. If $G_0$ is not good, then the union of a maximum dissociation set of $G_0$ and a maximum dissociation set of $G_1$ that does not contain $x$, cf.~Lemma \ref{lemma2}, yields the contradiction that $G$ is not extremal. Hence, the spiked cycle $G_0$ is good, and, since $c_1(G_1)=c_1$, it is very good. Since $G$ is constructed by applying operation $(O_4)$ to $G_1$, we obtain $G\in {\cal C}$. Hence, we may assume that $n_0\not\equiv 2\,{\rm mod}\,3$. If $n_0\equiv 1\,{\rm mod}\,3$ and $s\geq 1$, then exactly the same argument can be repeated with $G_0$ and $S$ replaced by $G_0'$ and $S'$, where the spiked cycle $G_0'$ with at least two spikes arises from $G_0$ by attaching one vertex from $S$ to $w$, and $S'$ is the set of the remaining $s-1$ vertices from $S$. Note that $G_0'$ has order $n_0+1\equiv 2\,{\rm mod}\,3$ in that case.
Hence, if $n_0\equiv 1\,{\rm mod}\,3$, then we may assume $s=0$. Next, we assume that $n_0\equiv 0\,{\rm mod}\,3$. The tree $T=G_0-w$ has a dissociation set of order $\left\lceil \frac{2(n_0-1)}{3}\right\rceil=\frac{2n_0}{3}=n_0-\frac{m(G_0)}{3}$. By Theorem \ref{theorem1}, we obtain the contradiction \begin{eqnarray*} {\rm diss}(G) &\geq & s+{\rm diss}(T)+{\rm diss}(G_1)\\ &\geq & s+\left(n_0-\frac{m(G_0)}{3}\right) +\left(n_1-\frac{1}{3}\Big(m(G_1)+1+c_1(G_1)\Big)\right)\\ &>& n-\frac{1}{3}\Big(m+1+c_1\Big). \end{eqnarray*} Hence, we may assume that $n_0\equiv 1\,{\rm mod}\,3$, which implies $s=0$. Let $G_1'=G_1-x$ have $r$ components $K_1,\ldots,K_r$; see Figure \ref{figg0g1} for an illustration. Clearly, we have $r\leq d_G(x)-1$. The graph $G_0'=G-V(G_1')$ is a spiked cycle with at least two spikes that arises from $G_0$ by attaching $x$ to $w$. Since the order of $G_0'$ is $n_0+1\equiv 2\,{\rm mod}\,3$, we obtain, similarly to the case ``$n_0\equiv 2\,{\rm mod}\,3$'', that $G_0'$ has a dissociation set $D_0'$ of order $n(G_0')-\frac{1}{3}\Big(m(G_0')+1\Big)$ that does not contain $x$. Using $D_0'$ and Theorem \ref{theorem1}, we obtain \begin{eqnarray*} {\rm diss}(G) &\geq & n(G_0')-\frac{1}{3}\Big(m(G_0')+1\Big) +{\rm diss}(K_1)+\cdots+{\rm diss}(K_r)\\ &\geq & n(G_0')-\frac{1}{3}\Big(m(G_0')+1\Big) +\sum_{i=1}^r \left(n(K_i)-\frac{1}{3}\Big(m(K_i)+1+c_1(K_i)\Big)\right)\\ &=& n-\frac{1}{3}\Big((m-d_G(x)+1)+r+1+(c_1(K_1)+\cdots+c_1(K_r))\Big)\\ &\geq & n-\frac{1}{3}\Big(m+1+(c_1(K_1)+\cdots+c_1(K_r))\Big)\\ &\geq & n-\frac{1}{3}\Big(m+1+c_1\Big). \end{eqnarray*} Since equality holds throughout this inequality chain, we obtain that $r=d_G(x)-1$, which implies that every component of $G_1'$ is connected to $x$ by a bridge, that each $K_i$ is extremal, which, by induction, implies that $K_i\in {\cal C}$, and that $c_1(K_1)+\cdots+c_1(K_r)=c_1$.
If $G_0'$ is not good, then the union of a maximum dissociation set of $G_0'$ and maximum dissociation sets of the $K_i$ that do not contain the neighbors of $x$, cf.~Lemma \ref{lemma2}, yields the contradiction that $G$ is not extremal. Hence, the spiked cycle $G'_0$ is good, and, since $c_1(K_1)+\cdots+c_1(K_r)=c_1$, it is very good. If $r=1$, then $G$ is constructed by applying operation $(O_4)$ to $K_1$, and we obtain $G\in {\cal C}$. Hence, we may assume that $r\geq 2$. By symmetry, considering alternative choices for the path $P$ as well as the previous arguments, we may assume that $K_r$ is either a $P_3$ or a cycle or a very good spiked cycle, and that $G-V(K_r)$ is in ${\cal C}$. It follows that $G$ is constructed by applying one of the four operations $(O_1)$ to $(O_4)$ to $G-V(K_r)$. Hence, we obtain $G\in {\cal C}$, which completes the proof. \end{proof} Within our results, the value $c_1$ can be replaced by the maximum number of pairwise vertex-disjoint cycles of length $1$ modulo $3$. It remains an open problem to elucidate the structure of all extremal graphs for Theorem \ref{theorem1}. \end{document}
\begin{document} \begin{titlepage} \thispagestyle{empty} \hspace{1em} \begin{center} \includegraphics[width=0.7\linewidth]{Images/cover-picture} \vspace*{0.8cm} {\Large \bf TH\`ESE DE DOCTORAT} \vspace*{0.5cm} Discipline: {\bf Math\'ematiques}\;\; Ecole Doctorale: {\bf ED 386}\;\; Laboratoire: {\bf IMJ-PRG} \vspace*{0.8cm} Pr\'esent\'ee par \vspace*{0.5cm} {\Large \bf EIRINI CHAVLI} \vspace*{0.8cm} Pour obtenir le grade de \ \\[1ex] {\bf DOCTEUR DE L'UNIVERSIT\'E PARIS DIDEROT-PARIS 7} \ \\ \vspace*{0.8cm} {\Large {\bf{The Brou\'e-Malle-Rouquier conjecture for the exceptional groups of rank 2}\\ }} \vspace*{0.3cm} {\Huge{***}}\\ \vspace*{0.008cm} {\Large {\bf{La conjecture de Brou\'e-Malle-Rouquier pour les groupes exceptionnels de rang 2}}}\\ \end{center} \vspace*{0.5 cm} \begin{tabular} {r@{\ }lll}&Th\`ese dirig\'ee par:& \textbf{M. Ivan MARIN}&\textbf{Universit\'e de Picardie Jules Verne}\\ \\ &Rapporteurs: &\textbf {M. Pavel ETINGOF}& \textbf{Massachusetts Institute of Technology}\\ & &\textbf{M. Louis FUNAR} & \textbf{Universit\'e Grenoble Alpes} \end{tabular} \begin{center} \vspace*{0.2cm} Soutenue le \textbf{12 mai 2016} devant le jury compos\'e de : \\ \vspace*{0.5cm} \begin{tabular}{r@{\ }lll} & \textbf{M. Fran\c{c}ois DIGNE} & \textbf{Universit\'e de Picardie Jules Verne} &\textbf{Examinateur}\\ & \textbf{M. Louis FUNAR} & \textbf{Universit\'e Grenoble Alpes} &\textbf{Rapporteur}\\ & \textbf{M. Nicolas JACON} & \textbf{Universit\'e de Reims} &\textbf{Examinateur}\\ & \textbf{M. Gunter MALLE} & \textbf{TU Kaiserslautern} &\textbf{Examinateur}\\ & \textbf{M. Ivan MARIN} & \textbf{Universit\'e de Picardie Jules Verne} &\textbf{Directeur de th\`ese}\\ & \textbf{M. 
G\"{o}tz PFEIFFER} & \textbf{National University of Ireland, Galway} &\textbf{Examinateur} \end{tabular} \end{center} \end{titlepage} \thispagestyle{empty} ~ \setcounter{page}{1} \chapter*{} \epigraph{\large{``A lesson without the opportunity for learners to generalise is not a mathematics lesson.''}}{\large{J. Mason, 1996, p. 65}} ~ \chapter*{Acknowledgments} \indent First and foremost, I would like to thank my supervisor for his support and for the trust he showed in me all these years of the Ph.D. I want to thank him for speaking to me in English until I learned French. For informing me about important conferences and seminars. For teaching me how to research properly, respecting my ideas and my disposition, and for letting me work in my way - he offered guidance but never interfered with my research. He was always fast to reply to my emails, and I need to thank him extra for the last year of the Ph.D. because of the sheer volume of my messages! He also helped me with the correction, editing, and revising of my article, even though that was not among his responsibilities. I feel incredibly grateful that he always made me feel at ease, and that I never felt intimidated to ask questions, which enhanced my self-confidence. His method of making me explain and analyze my ideas and the results that I had come to, even the obvious ones, helped me achieve greater clarity and validate my work. I feel that his students are his priority in many respects, and proof for this, among others, is the seminar he organized this year for his Ph.D. students. This seminar helped me a lot to improve the way I present my results. Furthermore, when I hit dead ends or found obstacles during my thesis, he was always calm and positive, and never stressed me or worsened my concerns. Finally, and most importantly, I would like to thank him because apart from being a teacher and a colleague, he has also been a friend and he has most successfully managed to keep all these features in balance. 
I would like to thank P. Etingof and L. Funar for spending time reading, assessing, and writing a report about my thesis. I would also like to thank P. Etingof for spending the time to explain to me his article about the weak version of the freeness conjecture when I met him in Corsica, and L. Funar for consenting to travel all the way to Paris to be part of the panel. I would like to thank all the other members of the committee for agreeing to evaluate my thesis. Specifically, I would like to thank: F. Digne, for sharing wholeheartedly his office with me in Amiens this year. N. Jacon, for always being positive, always having something good to say about my thesis, and for being such an encouraging person overall. G. Malle, for inviting me to Kaiserslautern and giving me the chance to be a speaker at the seminar held there. G. Pfeiffer, because his book was an inspiration to me and a point of reference when I was writing my article. Also, I would like to thank him for traveling from Ireland to be here for my defense. I would like to thank the people of the University of Amiens for giving me an office during the second and third year of my thesis, even though my position was in Paris. Also, I would like to thank them for granting me the ATER position this year. Specifically, I would like to thank I. Wallet and C. Calimez for offering all the necessary information, for emailing me very thoroughly and in a personal and helpful way regarding all my questions and affairs, for helping me navigate the bureaucracy, for treating me as though I were an actual member of this University, and for always being friendly and pleasant. Furthermore, I would like to thank all the people I worked with as an ATER: D. Chataur, Y. Palu, and K. Sorlin, who were more than eager to relieve me of my supervisions when I needed more time to work on my thesis. S. Ducay and E. Janvresse, who helped me with the schedule of the courses that I taught. Last but not least, R.
Stancu for his support and collaboration, his kindness and his trust with the course of Galois theory for advanced students. I would like to thank all the people from Paris 7 who supported me during my thesis. Specifically, E. Delos, who was in charge of the ``\'ecole doctorale'', who always replied to my emails and helped me organize the presentation. O. Dudas, who helped me with my queries, offered a lot of support, and was more than willing to drive me around during a conference in Corsica, where I had a problem with my leg. M. Hindry, who helped me with the train routes and was very supportive of the fact that I had to commute from Amiens to Paris when I was teaching the exercise course under his supervision. C. Lavollay, who was the administrative assistant in charge of the train fees reimbursements, and has always been very positive, helpful, indulgent and pleasant, and made me feel included by inviting me to social events, such as the ``cr\^epe parties''! J. Michel, with whom I had great discussions regarding the factorized Schur elements in GAP, and who also provided me with the reference letter I needed. I would like to thank M. Chlouveraki. On a professional level, she gave me valuable advice, she informed me about available post-doctorate positions, gave me a copy of her book which has been very useful, helped me with my queries about the decomposition maps, and we had constructive discussions about them. On a more personal level, I want to thank her for the ``drinking Fridays'' I had with her and Christina. I also want to thank her for the great time we had in Italy and the fact that, to be at my presentation, she had to reschedule her holidays and leave her little son with her mother. I would like to thank my French teachers, A. Le Corre and C. Le Dilly, for wishing to attend my presentation. They are excellent teachers, and they have been particularly supportive to me, especially in the beginning when I couldn't speak the language at all.
They encouraged me a lot, and they taught me not only the grammar and the vocabulary but also the French culture. They made me feel comfortable about speaking the language, and this motivated me to start teaching during my thesis. I would like to thank all the friends I made in France. Specifically, I thank with all my heart Claire, who let me stay in her place when I had nowhere to go, even though she had known me for only one week. Her support and hospitality during my first year in Paris were invaluable to me. I would like to thank Sandrine for being such a good and supportive friend. I thank her for all the moments we had in Paris and also for the hospitality at her mother's place in Gex, which I will always remember. She and Lorick, whom I also thank a lot, always had a place for me and my husband, speaking in English when necessary. I would like to thank Julie, who was always there for me. Japanese food and ballet nights and Shrek movies are some of our activities that were an oasis for me during my thesis. I also thank her for traveling from Paris to Amiens, just to be here for my 30th birthday. I would like to thank Huafeng for the mathematical discussions and the lovely walks we used to take when I was living in Paris. I would like to thank some Greek friends in Paris. First of all, my dear Christina, who was an inspiration for me to come to France. If I hadn't visited her in May 2010, I wouldn't have decided to apply to Paris 7. She let me stay in her place when I first arrived in Paris, even though she was doing her Ph.D. at the time and she was very busy. She is a person that I can count on, who supported me during my thesis and encouraged me by believing in me. I will always remember all the moments we had together. Secondly, I would like to thank Eugenia, who always had a place for me to stay when visiting Paris. I also thank her for visiting me in Amiens twice, even though she didn't have a lot of money at the time.
I would like to thank Evita, who helped me organize my wedding, and for her willingness to come to my presentation. I would like to thank all the Ph.D. students I met in Amiens, starting with my ``thesis brothers''; I would like to thank Alexandre, who carefully read this manuscript and corrected some English mistakes, and who also helped me to write my introduction in French. I thank him for his support and for our discussions, which always made me feel better. I also thank Georges for his support and kindness, the great time we had at Lille, the funny times in my office and the fact that he reminded me more than once that ``c'est la vie pas le paradis''! I want to thank them both for all the time we spent at the ``Atelier d'Alex'', even if this year I had a lot of work and I couldn't always be a part of the fun. I thank Malal, Vianney, Ruxi, Silvain, Emna, and Clara for the moments we had at the office BC01, Lucie for her help every time I needed it, Pierre for our nice conversations, Anne-Sophie for her help with the documents I had to send to Paris, Valerie for her kindness, Maxime for all his support and his sense of humor, Aktham for the delicious fruits he offered me and his kind words. I also thank Thomas for his help with French translations and his belief in me. I would like to thank Emilie for the nice moments we had in Lille and Caen and for traveling from far away just to be a part of my defense. I would like to thank my ``dance friends'' Anna, Clio, Sarah, and Cassis for all the good times we had and all the girls' nights we organized. I also thank Clio for the ``chocolat chaud'' times I had with her and little Pia, our pleasant discussions and her patience with my French. She is a friend I will always remember. I want to thank all my childhood friends Eleni, Eleutheria, Dina, Dora, Roula, Tasos, Christos, Giorgos, who always support me; even though I am so far from them, I feel so close.
I would like to thank Nikos separately, for being such a great friend all these years. Always someone who supports me and believes in me, the person who I could say understands my fears and insecurities and always finds a way to make me feel better. I thank him for always being there for me. I would like to thank my former students, Olga and Efi, who went on to become my friends, for inspiring me to be the teacher I am today, for their love and support. I also thank Maria IB, my first student, who is now also an excellent friend. I thank her for all the time we had together, her sense of humor, her attitude to the difficulties that I met during my thesis, which always helped me to smile and keep going. I would like to thank Niki for her support and trust all these years and for all the happy times we have in Greece every time I go back. Moreover, I would like to thank Vasso, who is such a unique friend. A person who can be happy with my happiness and sad with my sadness. Her phone calls, her emails and messages all these years I have lived abroad are a great support for me. I thank her for being available anytime I need her, to calm me when I feel stressed, to give me strength when I need it. I would like to thank Fotini, who has been not only a great piano teacher but also a great friend. I thank her for believing that I can achieve my goals, for being supportive and caring, and for all the sweet and refreshing moments I have with her and her family, especially the ``evil potions'' we make with her little son, Diogenis, every time I am in Greece! I also thank her because, even though she didn't like the idea of new technologies, she downloaded Skype and Viber, just to be in touch with me. I would like to thank Maria ``the frog'', my best friend since we were eight years old. I thank her for supporting me during this thesis, even though she hates maths! She is like the sister I never had, always a voice in my head telling me that I have to keep going.
I thank her because even though I am in France, she always finds ways to keep in touch as if I were in Greece. I would like to thank her for the times she visited me in France and for her help during my wedding. Wherever I go, I always have her with me. I would like to thank Maria ``piano'' for all her support and kindness. She is the friend who made all my wedding bonbonni\`eres on her own because I didn't have the money to buy the ones I wanted; who is always there to talk on the phone when I need company; who makes me her priority, who traveled from Greece so many times to visit me, even when she didn't have the money to do so. I thank her for being here for my presentation today. I would like to thank her also for all the moments we had together, and for her help with the English letters that I sometimes had to write. Most importantly, I thank her because she is one of the friends that makes me feel like I never left home. I thank Giorgos, who is always near my family and who traveled today from Greece on very short notice. I am happy that he wanted to be a part of this experience. I thank him for his encouragement and support. I would like to thank my parents-in-law for their love and support, for the long phone calls when I miss Greece and the feeling I have with them that I am always an important part of their family. I would like to thank Annoula for our Facebook conversations, her understanding and support and her belief in me. I also thank her for all the great moments we had at her place, together with my niece and nephew, and, of course, for the tomato salads she makes me! I thank Korina for being like the younger sister I never had and for her sense of humor that makes our family meetings so unique. I thank her for traveling to Amiens to visit us; we had such a great time. I would like to thank with all my heart my family, because if it weren't for them, I wouldn't have been the person I am today.
I thank my grandma for being a second parent to me, for her support, love and calmness, and for her ability to make me smile every time we talk. I would like to thank my mother for giving me the freedom to choose my path, for trusting me and supporting my choices, even if she didn't approve or understand all of them. I thank her for her love, her kindness, and her smile all these years. I also thank her for making me feel like I am living in a five-star hotel every time I travel back home. I would like to thank my brother, Spyros, who traveled from Greece to be here today, even though he had a lot of work with his Ph.D. thesis. I thank him for being a friend; someone I can rely on. Most importantly, I thank him for feeling proud of me since we were kids. His belief in me made me achieve a lot all these years. I also thank him for his hospitality in Crete this summer, which let me relax and finish my thesis on time. Last but not least, I would like to thank Angelos for all his love and support during my thesis. He agreed to leave Greece and come to a country where he doesn't speak the language, just to be with me. I thank him for being understanding about all the weekends I had to work, about all the conferences I had to attend when I wasn't home, and about the times I was complaining because my thesis was at a dead end. I thank him for all the ``don't worries'' he told me and for his willingness to adapt to every new situation my job puts us in. I thank him for being able to see beauty, strength, and capability in me, even if sometimes I cannot see it in myself. \chapter*{} \textbf{Abstract:} Between 1994 and 1998, the work of M. Brou\'e, G. Malle, and R. Rouquier generalized in a natural way the definition of the Hecke algebra associated to a finite Coxeter group to the case of an arbitrary complex reflection group. Attempting to also generalize the properties of the Coxeter case, they stated a number of conjectures concerning these Hecke algebras. 
One specific example of importance among those yet unsolved conjectures is the so-called BMR freeness conjecture. This conjecture is known to be true apart from 16 cases, which are almost all the exceptional groups of rank 2. These exceptional groups of rank 2 fall into three families: the tetrahedral, octahedral and icosahedral families. We prove the validity of the BMR freeness conjecture for the exceptional groups belonging to the first two families, using a case-by-case analysis, and we give a nice description of the basis, similar to the classical case of the finite Coxeter groups. We also give a new consequence of this conjecture, by obtaining the classification of irreducible representations of the braid group on 3 strands in dimension at most 5, recovering results of Tuba and Wenzl. \\ \\ \textbf{Keywords:} Cyclotomic Hecke algebras, BMR freeness conjecture, Complex reflection groups, Braid groups, Representations.\\ \begin{center} \vspace*{0.2cm} {\Huge{***}}\\ \vspace*{0.008cm} \end{center} \textbf{R\'esum\'e:} Entre 1994 et 1998, M. Brou\'e, G. Malle et R. Rouquier ont g\'en\'eralis\'e aux groupes de r\'eflexions complexes la d\'efinition naturelle des alg\`ebres de Hecke associ\'ees aux groupes de Coxeter finis. Dans la tentative de g\'en\'eraliser certaines propri\'et\'es de ces alg\`{e}bres, ils ont annonc\'e des conjectures parmi lesquelles la conjecture importante de libert\'e de BMR. Il est connu que cette derni\`{e}re conjecture est vraie sauf pour 16 cas qui concernent presque tous les groupes exceptionnels de rang 2. Ces derniers se r\'epartissent en 3 familles : t\'etra\'edrale, octa\'edrale et icosa\'edrale. Nous prouvons que la conjecture de libert\'e de BMR est vraie pour les groupes exceptionnels appartenant aux deux premi\`{e}res familles en utilisant un raisonnement cas par cas et en donnant une jolie description de la base, ce qui est similaire au cas classique d'un groupe de Coxeter fini. 
Nous donnons aussi une nouvelle cons\'equence de cette conjecture qui est l'obtention de la classification des repr\'esentations irr\'eductibles du groupe de tresses \`{a} 3 brins de dimension au plus 5, retrouvant ainsi des r\'esultats de Tuba et Wenzl. \\\\ \textbf{Mots cl\'es:} Alg\`ebres de Hecke cyclotomiques, Conjecture de libert\'e de BMR, Groupes de r\'eflexions complexes, Groupes de tresses, Repr\'esentations. ~ \tableofcontents \chapter*{Introduction} \addcontentsline{toc}{chapter}{Introduction} \indent \emph{Real reflection groups}, also known as finite Coxeter groups, are finite groups of matrices with real coefficients generated by reflections (elements of order 2 whose vector space of fixed points is a hyperplane). We often encounter finite Coxeter groups in commutative algebra, Lie theory, representation theory, singularity theory, crystallography, low-dimensional topology, and geometric group theory. They also provide a powerful way to construct interesting examples in geometric and combinatorial group theory. All finite Coxeter groups are particular cases of \emph{complex reflection groups}. Generalizing the definition of real reflection groups, complex reflection groups are finite groups of matrices with complex coefficients generated by pseudo-reflections (elements of finite order whose vector space of fixed points is a hyperplane). Any complex reflection group can be decomposed as a direct product of the so-called irreducible ones (which means that, considering them as subgroups of the general linear group $GL(V)$, where $V$ is a finite dimensional complex vector space, they act irreducibly on $V$). The irreducible complex reflection groups were classified by G. C. Shephard and J. A. 
Todd (see \cite{shephard}); they belong either to the infinite family $G(de, e, n)$, depending on 3 positive integer parameters, or to the 34 exceptional groups, which are numbered from 4 to 37 and are known as $G_4,\dots, G_{37}$ in the Shephard and Todd classification. The group $G(de, e, n)$ of the infinite family is the group of monomial matrices whose non-zero entries are $de^{\text{th}}$ roots of unity and whose product of non-zero entries is a $d^{\text{th}}$ root of unity. Subsequent work by a number of authors has proven that complex reflection groups have properties analogous to those of real reflection groups, such as presentations, root systems, and generic degrees. To any real reflection group we can associate an Iwahori-Hecke algebra (a one-parameter deformation of the group algebra of the real reflection group). Hecke algebras associated to reductive groups were introduced in order to decompose representations of these groups induced from parabolic subgroups. Between 1994 and 1998, M. Brou\'e, G. Malle, and R. Rouquier generalized in a natural way the definition of the Iwahori-Hecke algebra to arbitrary complex reflection groups (see \cite{bmr}). Attempting to also generalize the properties of the Coxeter case, they stated a number of conjectures concerning these Hecke algebras, which have not yet been proven. Even without being proven, these conjectures have been used as assumptions in a number of papers over the last decades, and they are still being used in various subjects, such as the representation theory of finite reductive groups, Cherednik algebras, and usual braid groups (more details about these conjectures and their applications can be found in \cite{marinreport}). One specific example of importance among those yet unsolved conjectures is the so-called freeness conjecture. In 1998, M. Brou\'e, G. Malle and R. Rouquier conjectured that the generic Hecke algebra $H$ associated to a complex reflection group $W$ is a free module of rank $|W|$ over its ring of definition $R$. 
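As a concrete sanity check on the definition of $G(de,e,n)$ given above, the following sketch enumerates its elements for small parameters (the helper name \texttt{order\_G} is ours, and we assume the standard order formula $|G(de,e,n)| = (de)^n\, n!/e$). A monomial matrix is encoded as a permutation together with the exponent of the root of unity placed in each row.

```python
from itertools import permutations, product
from math import factorial

def order_G(d, e, n):
    """Count the elements of G(de, e, n): monomial n x n matrices whose
    non-zero entries are (de)-th roots of unity and whose product of
    entries is a d-th root of unity.  Writing each entry as zeta^(a_i)
    with zeta a primitive (de)-th root of unity, the product of the
    entries is a d-th root of unity exactly when e divides sum(a_i)."""
    m = d * e
    count = 0
    for _ in permutations(range(n)):              # position of non-zero entries
        for exps in product(range(m), repeat=n):  # exponents of the entries
            if sum(exps) % e == 0:
                count += 1
    return count

# Agreement with the order formula |G(de,e,n)| = (de)^n * n! / e
for d, e, n in [(2, 1, 2), (1, 2, 2), (1, 3, 2), (2, 2, 2)]:
    assert order_G(d, e, n) == (d * e) ** n * factorial(n) // e
```

For instance, $G(2,1,2)$ (the Weyl group of type $B_2$) is recovered with 8 elements.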
They also proved that it is enough to show that $H$ is generated as an $R$-module by $|W|$ elements. The validity of the conjecture, even in its weak version (which states that $H$ is finitely generated as an $R$-module), implies that by extending the scalars to an algebraic closure $F$ of the field of fractions of $R$, the algebra $H\otimes_R F$ becomes isomorphic to the group algebra $FW$ (see \cite{marincubic} and \cite{marinG26}). G. Malle assumed the validity of the conjecture and used it to prove that the characters of $H$ take their values in a specific field (see \cite{malle1}). Moreover, he and J. Michel also used this conjecture to provide matrix models for the representations of $H$, whenever we can compute them; these matrices for the generators of $H$ have entries in the field generated by the corresponding character values (see \cite{mallem}). The freeness conjecture is fundamental in the world of generic Hecke algebras. Once it is proved, our better knowledge of these algebras could open the possibility of using various computer algorithms on the structure constants for the multiplication, in order to thoroughly improve our understanding in each case (see for example \S 8 in \cite{mallem} about the determination of a canonical trace). The freeness conjecture also has many applications, apart from the ones connected to the properties of the generic Hecke algebra itself. Provided that the freeness conjecture is true, the category of representations of $H$ is equivalent to a category of representations of a Cherednik algebra (see \cite{Ocat}). Another application concerns the algebras connected to cubic invariants, including the Kauffman polynomial and the Links-Gould polynomial. These algebras are quotients of the generic Hecke algebras associated to $G_4$, $G_{25}$ and $G_{32}$. I. Marin used the validity of the conjecture for these cases and proved that the generic algebra $K_n(\gra, \grb)$ introduced by P. Bellingeri and L. 
Funar in \cite{bfunar} is zero for $n\geq 5$ (see theorem 1.4 in \cite{marincubic}). Furthermore, in \cite{chavli} we used the freeness conjecture for the cases of $G_4$, $G_8$ and $G_{16}$ to recover and explain a classification due to I. Tuba and H. Wenzl (see \cite{tuba}) of the irreducible representations of the braid group on 3 strands of dimension at most 5 (we explain this result in detail in Chapter 2). The freeness conjecture, which we call here the BMR freeness conjecture, is known to be true for the finite Coxeter groups, and also for the infinite series, by work of Ariki and Koike (see \cite{ariki} and \cite{arikii}). Considering the exceptional cases, one may divide them into two families: the family that includes the exceptional groups $G_4, \dots, G_{22}$, which are of rank 2, and the family that includes the rest of them, which are of rank at least 3 and at most 8. In the second family we encounter 6 finite Coxeter groups for which we know the validity of the conjecture: the groups $G_{23}$, $G_{28}$, $G_{30}$, $G_{35}$, $G_{36}$ and $G_{37}$. Thus, it remains to prove the conjecture for 28 cases: the exceptional groups of rank 2 and the exceptional groups $G_{24}$, $G_{25}$, $G_{26}$, $G_{27}$, $G_{29}$, $G_{31}$, $G_{32}$, $G_{33}$ and $G_{34}$. Until recently, it was widely believed that the BMR freeness conjecture had been verified for most of the exceptional groups of rank 2. However, there were imprecisions in the proofs, as I. Marin pointed out a few years ago (for more details see the introduction of \cite{marinG26}). In the following years, his research and his work with G. Pfeiffer led to the conclusion that the exceptional complex reflection groups for which there is a complete proof of the freeness conjecture are the groups $G_4$ (this case has also been proved in \cite{brouem} and independently in \cite{funar1995}), $G_{12}$, $G_{22}$, $G_{23}$, \dots, $G_{37}$ (see \cite{marincubic}, \cite{marinG26} and \cite{marinpfeiffer}). 
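The bookkeeping behind the count of 28 open cases above can be sketched as a small computation (the set names are ours; we use the standard identification of $G_{23}, G_{28}, G_{30}, G_{35}, G_{36}, G_{37}$ with the Coxeter groups $H_3, F_4, H_4, E_6, E_7, E_8$):

```python
# Exceptional groups in the Shephard-Todd classification: G_4 ... G_37.
exceptional = set(range(4, 38))
rank2 = set(range(4, 23))                 # G_4 ... G_22 are of rank 2
coxeter_known = {23, 28, 30, 35, 36, 37}  # H_3, F_4, H_4, E_6, E_7, E_8
higher_rank_open = exceptional - rank2 - coxeter_known

# The rank >= 3 cases remaining at that point:
assert sorted(higher_rank_open) == [24, 25, 26, 27, 29, 31, 32, 33, 34]
# Together with the 19 rank-2 groups this gives the 28 cases in the text.
assert len(rank2) + len(higher_rank_open) == 28
```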
The remaining groups are almost all the exceptional groups of rank 2. The main goal of this PhD thesis is to prove the BMR freeness conjecture for these remaining cases. In the first chapter we give the necessary preliminaries to accurately describe the BMR freeness conjecture. More precisely, we introduce the definition of a complex reflection group and we explain why the study of these groups reduces to the irreducible case, which leads us to the description of the Shephard-Todd classification. Following \cite{bmr}, we also associate to a complex reflection group a complex braid group, using tools from algebraic topology. We finish this chapter by describing the BMR freeness conjecture in detail and giving a brief report on the recent results on the subject by I. Marin and G. Pfeiffer. In the second chapter, as a first approach to the BMR freeness conjecture, we focus on the groups that are finite quotients of the braid group $B_3$ on 3 strands. These are the exceptional groups $G_4$, $G_8$ and $G_{16}$ and, since the case of $G_4$ was proven earlier as we mentioned before, we prove the conjecture for the remaining cases. This result completes the proof of the BMR freeness conjecture for the exceptional groups whose associated complex braid group is an Artin group (see theorem \ref{braidcase}). In order to prove the validity of the conjecture we follow the idea I. Marin used in \cite{marincubic}, theorem 3.2(3), for the case of $G_4$. This approach is to guess what the basis should look like and then prove that it is in fact a basis. We recall that $B_3$ is generated by the braids $s_1$ and $s_2$, subject to the single relation $s_1s_2s_1=s_2s_1s_2$. We also recall that the center of $B_3$ is the subgroup generated by the element $z=s_1^2s_2s_1^2s_2$ (see, for example, theorem 1.24 of \cite{turaev}) and that the order of the center of the groups $G_4$, $G_8$ and $G_{16}$ is even. 
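These facts about $B_3$ can be sanity-checked in the reduced Burau representation (a standard 2-dimensional representation of $B_3$, here in one common convention; the check below is only a consistency test in this representation, not a proof, and the helper names and the rational value of the parameter $t$ are ours). Note that the braid relation gives $z = s_1^2s_2s_1^2s_2 = (s_1s_2)^3$, and reduced Burau sends this element to the scalar matrix $t^3\,\mathrm{Id}$, which visibly commutes with everything.

```python
from fractions import Fraction

def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def mats(*Ms):
    """Product of a sequence of 2x2 matrices, left to right."""
    P = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
    for M in Ms:
        P = matmul(P, M)
    return P

# Reduced Burau matrices for the generators of B_3, evaluated at an
# arbitrary rational value of the parameter t (exact arithmetic).
t = Fraction(3, 7)
s1 = [[-t, Fraction(1)], [Fraction(0), Fraction(1)]]
s2 = [[Fraction(1), Fraction(0)], [t, -t]]

# The braid relation s1 s2 s1 = s2 s1 s2 holds.
assert mats(s1, s2, s1) == mats(s2, s1, s2)

# z = s1^2 s2 s1^2 s2 is sent to the scalar matrix t^3 * Id, so its
# image commutes with the images of both generators.
z = mats(s1, s1, s2, s1, s1, s2)
assert z == [[t**3, 0], [0, t**3]]
assert matmul(z, s1) == matmul(s1, z) and matmul(z, s2) == matmul(s2, z)
```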
What we manage to prove (see theorems \ref{th} and \ref{thh2}) is that the generic Hecke algebra associated to $G_4$, $G_8$ or $G_{16}$ is of the form $$ H=\sum\limits_{k=0}^{p-1}u_1z^k+\sum\limits_{k=1}^{p}u_1z^{-k}+ u_1\text{``some other elements''}u_1,$$ where $p:=|Z(W)|/2$ ($W$ denotes the group $G_4$, $G_8$ or $G_{16}$) and $u_1$ denotes the $R$-subalgebra of $H$ generated by the image of $s_1$ inside $H$ (for the case of $G_4$ this is a deformation of the result presented in theorem 3.2(3) in \cite{marincubic}, where we ``replace'' inside the definition of $H_3$, which is denoted by $A_3$ there, the element $s_2s_1^{-1}s_2$ by the element $s_2s_1^2s_2$). Exploring the consequences of the validity of the BMR freeness conjecture for the cases of $G_4$, $G_8$ and $G_{16}$, in the last section of the second chapter we prove that we can completely determine the irreducible representations of $B_3$ of dimension at most 5, thus recovering a classification of I. Tuba and H. Wenzl in a more general framework (see \cite{tuba}). With our approach, we managed to answer their questions concerning the form of the matrices of these representations, as well as the nature of some polynomials that play an important role in the description of these representations. We also explain why their classification does not work if the dimension of the representation is larger than 5. Observing that the center of $B_3$ plays an important role in the construction of the basis for the Hecke algebras of $G_4$, $G_8$ and $G_{16}$, we developed a more general approach to the problem, which uses the results of P. Etingof and E. Rains on the center of the braid group associated to a complex reflection group (see \cite{ERrank2}). In chapter 3 we explain these results in detail and we also rewrite some parts of their arguments that are sketchy there. Their approach also provides the weak version of the conjecture for the exceptional groups of rank 2. 
We conclude by using this weak version to prove a proposition stating that if the Hecke algebra of every exceptional group of rank 2 is torsion-free as a module, then it is enough to prove the validity of the conjecture for the groups $G_7$, $G_{11}$ and $G_{19}$, known as the maximal exceptional groups of rank 2. Unfortunately, this torsion-free assumption does not appear to be easy to check a priori. In our last chapter, we prove the validity of the BMR freeness conjecture for the exceptional groups $G_5, \dots, G_{15}$. As we explain in detail in chapter 3, P. Etingof and E. Rains connected every exceptional group of rank 2 with a finite Coxeter group of rank 3. More precisely, if $W$ is an exceptional group of rank 2, then $\overline{W}:=W/Z(W)$ can be considered as the group of even elements in a finite Coxeter group $C$ of rank 3. For every $x$ in $\overline{W}$, we fix a reduced word $w_x$ representing $x$ in $\overline{W}$. P. Etingof and E. Rains associate to $w_x$ an element, which we denote by $T_{w_x}$, inside an algebra over a ring $R^C$ that they call $A_+(C)$. This algebra is in fact related to the algebra $H$. They also prove that this algebra is spanned over $R^C$ by the elements $T_{w_x}$. Motivated by this result, for the cases of the exceptional groups $G_5, \dots, G_{15}$ we found a spanning set for $H$ over $R$ consisting of $|W|$ elements. More precisely, for every $x\in \overline{W}$ we choose a specific word $\tilde w_x$ that represents $x$ in $\overline{W}$ and satisfies some nice properties that we describe in detail in this chapter. Note that this word is not necessarily reduced. We associate this word to an element $T_{\tilde w_x}$ inside $A_+(C)$ and we denote by $v_x$ its image inside $H$ (see propositions \ref{ERSUR} and \ref{ERRSUR} and corollary \ref{corIVAN}). We set $U:=\sum\limits_{x\in \overline{W}} \sum\limits_{k=0}^{|Z(W)|-1}Rz^kv_x$. The main theorem of this section is that $H=U$. 
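For a concrete instance of this counting, note that $U$ is spanned by $|Z(W)|\cdot|\overline{W}| = |W|$ elements. The sketch below illustrates this for $W=G_4$ (the helper names are ours; we use the standard facts that $|G_4|=24$, $|Z(G_4)|=2$, and that $\overline{G_4}$ is isomorphic to the group $A_4$ of even elements in the rank-3 Coxeter group $S_4$, hence the name of the tetrahedral family):

```python
from itertools import permutations

def is_even(perm):
    """Parity of a permutation given as a tuple, via its inversion count."""
    n = len(perm)
    inv = sum(1 for i in range(n) for j in range(i + 1, n)
              if perm[i] > perm[j])
    return inv % 2 == 0

# Even elements of the rank-3 Coxeter group S_4 (Coxeter type A_3).
even = [p for p in permutations(range(4)) if is_even(p)]
assert len(even) == 12                 # this is A_4, of order 12

# For W = G_4: |Z(W)| = 2 and W/Z(W) is isomorphic to A_4, so the
# spanning set U = sum_{x,k} R z^k v_x has 2 * 12 = 24 = |W| elements.
order_Z, order_Wbar = 2, len(even)
assert order_Z * order_Wbar == 24      # = |G_4|
```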
Using this approach we also managed to give a different proof for the case of $G_{12}$, which had already been verified by I. Marin and G. Pfeiffer (see theorem \ref{2case}). The main part of this chapter is devoted to the proof of $H=U$. The proof involves many calculations, which we have tried to make as simple and short as possible, so that they are fairly easy to follow. To sum up, the freeness conjecture is still open for the groups $G_{17},\dots, G_{21}$. We are optimistic that the methodology we used for the rest of the groups of rank 2 can be applied to prove the conjecture for these cases, too. Since these groups are large, we are not sure that we can provide computer-free proofs, as we did for the other cases. However, there are strong indications that with continued research, and possibly with the development of computer algorithms, we can prove these final cases. \begin{center} \vspace*{0.2cm} {\Huge{***}}\\ \vspace*{0.008cm} \end{center} \indent Les \emph{groupes de r\'eflexions r\'eels}, aussi appel\'es groupes de Coxeter finis, sont des groupes finis de matrices \`a coefficients r\'eels engendr\'es par des r\'eflexions (des \'el\'ements d'ordre 2 dont l'espace des points fixes est un hyperplan). On rencontre souvent des groupes de Coxeter en alg\`ebre commutative, th\'eorie de Lie, th\'eorie des repr\'esentations, th\'eorie des singularit\'es, cristallographie, topologie en petite dimension et th\'eorie g\'eom\'etrique des groupes. Ils sont aussi un outil puissant pour construire des exemples int\'eressants en g\'eom\'etrie et en th\'eorie combinatoire des groupes. Les groupes de Coxeter finis forment une sous-famille des \emph{groupes de r\'eflexions complexes}. 
G\'en\'eralisant la d\'efinition des groupes de r\'eflexions r\'eels, les groupes de r\'eflexions complexes sont des groupes finis de matrices \`a coefficients complexes engendr\'es par des pseudo-r\'eflexions (des \'el\'ements d'ordre fini dont l'espace des points fixes est un hyperplan). Tout groupe de r\'eflexion complexe peut \^etre d\'ecompos\'e en un produit direct de groupes de r\'eflexions complexes dits irr\'eductibles (groupes qui, en \'etant vus comme des sous-groupes de $GL(V)$ o\`u $V$ est un espace vectoriel complexe de dimension finie, agissent de mani\`ere irr\'eductible sur $V$). Les groupes de r\'eflexions complexes irr\'eductibles ont \'et\'e classifi\'es par G.C. Shephard et J.A. Todd (voir \cite{shephard}); ils appartiennent soit \`a la famille infinie $G(de,e,n)$ qui d\'epend de 3 param\`etres entiers naturels soit aux 34 groupes exceptionnels num\'erot\'es de 4 \`a 37 et appel\'es $G_4,\dots, G_{37}$ dans la classification de Shephard et Todd. Les $G(de,e,n)$ de la famille infinie sont les groupes de matrices monomiales dont les coefficients non-nuls sont des racines $de$-i\`emes de l'unit\'e et dont le produit de ces coefficients est une racine $d$-i\`eme de l'unit\'e. Le travail de nombreux auteurs a prouv\'e que les groupes de r\'eflexions complexes ont des propri\'et\'es analogues \`a celles des groupes de r\'eflexions r\'eels, telles que les pr\'esentations, syst\`emes de racines et degr\'es g\'en\'eriques. On peut associer une alg\`ebre de Iwahori-Hecke \`a tout groupe de r\'eflexion r\'eel (une d\'eformation \`a un param\`etre de l'alg\`ebre de groupe du groupe de r\'eflexion r\'eel). Les alg\`ebres de Hecke associ\'ees aux groupes r\'eductifs ont \'et\'e introduites afin de d\'ecomposer les repr\'esentations induites par des sous-groupes paraboliques de ces groupes. De 1994 \`a 1998, M. Brou\'e, G. Malle, et R. 
Rouquier ont g\'en\'eralis\'e de mani\`ere naturelle la d\'efinition de l'alg\`ebre de Iwahori-Hecke \`a un groupe de r\'eflexion complexe quelconque (voir \cite{bmr}). En tentant de g\'en\'eraliser les propri\'et\'es du cas des groupes de Coxeter, ils ont formul\'e plusieurs conjectures concernant les alg\`ebres de Hecke, qui n'ont pas encore \'et\'e prouv\'ees. M\^eme sans \^etre prouv\'ees, ces conjectures ont \'et\'e utilis\'ees dans de nombreux articles de ces derni\`eres d\'ecennies comme hypoth\`eses, et sont toujours utilis\'ees dans de nombreux domaines, tels que la th\'eorie des repr\'esentations des groupes r\'eductifs finis, les alg\`ebres de Cherednik et les groupes de tresses usuels (plus de d\'etails sur ces conjectures et leurs applications peuvent \^etre trouv\'es dans \cite{marinreport}). Un exemple important de ces conjectures non r\'esolues est la conjecture de libert\'e. En 1998, M. Brou\'e, G. Malle et R. Rouquier ont conjectur\'e que l'alg\`ebre de Hecke g\'en\'erique $H$ associ\'ee \`a un groupe de r\'eflexion complexe $W$ est un module libre de rang $|W|$ sur son anneau de d\'efinition $R$. Ils ont aussi prouv\'e qu'il \'etait suffisant de montrer que $H$ est engendr\'e comme $R$-module par $|W|$ \'el\'ements. La validit\'e de la conjecture, m\^eme dans sa version faible (qui dit que $H$ est finiment engendr\'ee comme $R$-module), implique qu'en \'etendant les scalaires \`a une cl\^oture alg\'ebrique $F$ du corps des fractions de $R$, l'alg\`ebre $H\otimes_R F$ devient isomorphe \`a l'alg\`ebre de groupe $FW$ (voir \cite{marincubic} et \cite{marinG26}). G. Malle a suppos\'e la conjecture vraie et l'a utilis\'ee pour montrer que les caract\`eres de $H$ prennent leurs valeurs dans un certain corps (voir \cite{malle1}). De plus, lui et J. 
Michel ont utilis\'e la conjecture pour donner des mod\`eles matriciels pour les repr\'esentations de $H$, d\`es que l'on peut les calculer; les matrices pour les g\'en\'erateurs de $H$ ont des coefficients dans le corps engendr\'e par les valeurs des caract\`eres correspondants (voir \cite{mallem}). La conjecture de libert\'e est fondamentale dans le monde des alg\`ebres de Hecke g\'en\'eriques. Une fois prouv\'ee, notre meilleure compr\'ehension de ces alg\`ebres pourrait permettre d'utiliser divers algorithmes informatiques sur les constantes structurelles, afin d'am\'eliorer profond\'ement notre compr\'ehension dans chaque cas (voir par exemple \S 8 dans \cite{mallem} \`a propos de la d\'etermination d'une trace canonique). La conjecture de libert\'e a aussi de nombreuses applications autres que celles li\'ees aux propri\'et\'es de l'alg\`ebre de Hecke g\'en\'erique. Si la conjecture est vraie, alors la cat\'egorie des repr\'esentations de $H$ est \'equivalente \`a une cat\'egorie de repr\'esentations d'une alg\`ebre de Cherednik (voir \cite{Ocat}). Une autre application concerne les alg\`ebres li\'ees \`a des invariants cubiques, incluant le polyn\^ome de Kauffman et le polyn\^ome de Links-Gould. Ces alg\`ebres sont les quotients de l'alg\`ebre de Hecke g\'en\'erique associ\'ee \`a $G_4, G_{25}$ et $G_{32}$. I. Marin a utilis\'e la validit\'e de la conjecture dans ces cas-l\`a et il a prouv\'e que l'alg\`ebre g\'en\'erique $K_n(\gra, \grb)$ introduite par P. Bellingeri et L. Funar dans \cite{bfunar} est nulle pour $n\geq 5$ (voir th\'eor\`eme 1.4 dans \cite{marincubic}). De plus, dans \cite{chavli} la conjecture pour les cas $G_4$, $G_8$ et $G_{16}$ est utilis\'ee pour retrouver et expliquer une classification due \`a I. Tuba et H. Wenzl (voir \cite{tuba}) pour les repr\'esentations irr\'eductibles du groupe de tresses \`a 3 brins de dimension au plus 5 (on explique ce r\'esultat en d\'etail dans le chapitre 2). 
La conjecture de libert\'e, que l'on appelle ici conjecture de libert\'e BMR, est vraie et d\'emontr\'ee pour les groupes de Coxeter finis, et pour les s\'eries infinies d'Ariki et Koike (voir \cite{ariki} et \cite{arikii}). En ce qui concerne les cas exceptionnels, on peut les s\'eparer en deux familles : la famille des groupes exceptionnels $G_4, \dots, G_{22}$, qui sont de rang 2, et la famille des autres, qui sont de rang sup\'erieur ou \'egal \`a 3 et inf\'erieur ou \'egal \`a 8. Dans la seconde famille, on a 6 groupes de Coxeter finis pour lesquels nous savons que la conjecture est valide : les groupes $G_{23}$, $G_{28}$, $G_{30}$, $G_{35}$, $G_{36}$ et $G_{37}$. Ainsi, il reste \`a prouver la conjecture dans 28 cas : les groupes exceptionnels de rang 2 et les groupes exceptionnels $G_{24}$, $G_{25}$, $G_{26}$, $G_{27}$, $G_{29}$, $G_{31}$, $G_{32}$, $G_{33}$ et $G_{34}$. Jusqu'\`a r\'ecemment, il \'etait pens\'e que la conjecture de libert\'e BMR avait \'et\'e v\'erifi\'ee pour la plupart des groupes exceptionnels de rang $2$. N\'eanmoins, il y avait quelques impr\'ecisions dans les preuves, comme indiqu\'e par I. Marin il y a quelques ann\'ees (pour plus de d\'etails, voir l'introduction de \cite{marinG26}). Dans les ann\'ees suivantes, sa recherche et son travail en collaboration avec G. Pfeiffer ont permis de conclure que les groupes de r\'eflexions complexes exceptionnels pour lesquels il y a une preuve compl\`ete de la conjecture sont les groupes $G_4$ (ce cas a aussi \'et\'e d\'emontr\'e dans \cite{brouem} et de mani\`ere ind\'ependante dans \cite{funar1995}), $G_{12}$, $G_{22}$, $G_{23}$, \dots, $G_{37}$ (voir \cite{marincubic}, \cite{marinG26} et \cite{marinpfeiffer}). Les groupes restants sont presque tous des groupes exceptionnels de rang 2. L'objectif principal de cette th\`ese est de d\'emontrer la conjecture pour les cas restants. 
Dans le premier chapitre, nous donnons les pr\'eliminaires n\'ecessaires afin de d\'ecrire correctement la conjecture de libert\'e BMR. Plus pr\'ecis\'ement, nous introduisons la d\'efinition d'un groupe de r\'eflexion complexe et nous nous ramenons \`a l'\'etude des groupes de r\'eflexions complexes irr\'eductibles, ce qui nous am\`ene \`a donner une description de la classification de Shephard-Todd. Comme dans \cite{bmr}, nous associons un groupe de tresses complexe \`a un groupe de r\'eflexions complexe, en utilisant des outils de topologie alg\'ebrique. Nous terminons ce chapitre en donnant une description d\'etaill\'ee de la conjecture de libert\'e BMR et un bref r\'esum\'e des r\'esultats r\'ecents sur le sujet par I. Marin et G. Pfeiffer. Dans le deuxi\`eme chapitre, comme premi\`ere approche de la conjecture de libert\'e BMR, nous nous concentrons sur les groupes qui sont des quotients finis du groupe de tresses \`a trois brins $B_3$. Ce sont les groupes exceptionnels $G_4$, $G_8$ et $G_{16}$. Le cas $G_4$ ayant \'et\'e d\'emontr\'e comme nous l'avons indiqu\'e pr\'ec\'edemment, nous montrons la conjecture dans le reste des cas. Ce r\'esultat termine la preuve de la conjecture BMR pour le cas des groupes exceptionnels dont le groupe de tresses complexe associ\'e est un groupe d'Artin (voir th\'eor\`eme \ref{braidcase}). Pour prouver la validit\'e de la conjecture, nous proc\'edons de mani\`ere analogue \`a celle de I. Marin utilis\'ee dans \cite{marincubic}, th\'eor\`eme 3.2(3), pour le cas de $G_4$. Cette approche consiste \`a estimer une base potentielle et \`a d\'emontrer que c'est effectivement une base. Rappelons que $B_3$ a pour g\'en\'erateurs deux tresses $s_1$ et $s_2$ avec l'unique relation $s_1s_2s_1=s_2s_1s_2$. Rappelons aussi que le centre de $B_3$ est le sous-groupe engendr\'e par l'\'el\'ement $z=s_1^2s_2s_1^2s_2$ (voir, par exemple, th\'eor\`eme 1.24 de \cite{turaev}) et que le cardinal du centre de $G_4$, $G_8$ et $G_{16}$ est pair. 
Nous parvenons \`a prouver (voir th\'eor\`emes \ref{th} et \ref{thh2}) que l'alg\`ebre de Hecke g\'en\'erique associ\'ee \`a $G_4$, $G_8$ et $G_{16}$ est de la forme $$ H=\sum\limits_{k=0}^{p-1}u_1z^k+\sum\limits_{k=1}^{p}u_1z^{-k}+ u_1\text{``d'autres \'el\'ements''}u_1,$$ o\`u $p:=|Z(W)|/2$ ($W$ repr\'esente le groupe $G_4$, $G_8$ ou $G_{16}$) et $u_1$ repr\'esente la sous-$R$-alg\`ebre de $H$ engendr\'ee par l'image de $s_1$ dans $H$ (pour le cas de $G_4$ c'est une d\'eformation du r\'esultat du Th\'eor\`eme 3.2(3) dans \cite{marincubic} o\`u on ``remplace'' dans la d\'efinition de $H_3$, not\'ee $A_3$ dans cet article, l'\'el\'ement $s_2s_1^{-1}s_2$ par l'\'el\'ement $s_2s_1^2s_2$). En \'etudiant les cons\'equences de la validit\'e de la conjecture de libert\'e BMR pour le cas de $G_4$, $G_8$ et $G_{16}$, nous d\'emontrons dans la derni\`ere section du deuxi\`eme chapitre que l'on peut d\'eterminer compl\`etement les repr\'esentations irr\'eductibles de $B_3$ de dimension au plus 5, retrouvant ainsi une classification de I. Tuba et H. Wenzl dans un cadre plus g\'en\'eral (voir \cite{tuba}). Dans le cadre de notre approche, nous sommes parvenus \`a r\'epondre aux questions concernant la forme des matrices de ces repr\'esentations, de m\^eme que la nature de certains polyn\^omes jouant un r\^ole important dans la description de ces repr\'esentations. Nous avons aussi expliqu\'e pourquoi leur classification ne s'adaptait pas aux repr\'esentations de dimension strictement sup\'erieure \`a 5. En observant que le centre de $B_3$ joue un r\^ole important dans la construction de la base pour l'alg\`ebre de Hecke de $G_4$, $G_8$ et $G_{16}$, nous avons \'elabor\'e une approche plus g\'en\'erale au probl\`eme qui utilise les r\'esultats de P. Etingof et E. Rains sur le centre du groupe de tresse associ\'e \`a un groupe de r\'eflexion complexe (voir \cite{ERrank2}). 
Dans le chapitre 3, nous expliquons ces r\'esultats en d\'etail et r\'e\'ecrivons une partie de leurs arguments qui y sont peu clairs. Leur approche donne aussi la version faible de la conjecture pour les groupes exceptionnels de rang 2. Nous utilisons cette version faible afin de d\'emontrer une proposition qui dit que si l'alg\`ebre de Hecke de tout groupe exceptionnel de rang 2 est sans torsion en tant que module, il est suffisant de prouver la validit\'e de la conjecture pour les groupes $G_7$, $G_{11}$ et $G_{19}$, que l'on appelle les groupes exceptionnels de rang 2 maximaux. Malheureusement, cette hypoth\`ese d'\^etre sans torsion ne semble pas simple \`a v\'erifier a priori. Dans le dernier chapitre, nous prouvons que la conjecture de libert\'e BMR est v\'erifi\'ee pour les groupes exceptionnels $G_5, \dots, G_{15}$. Comme expliqu\'e en d\'etail dans le chapitre 3, P. Etingof et E. Rains ont li\'e chaque groupe exceptionnel de rang $2$ \`a un groupe de Coxeter fini de rang 3. Plus pr\'ecis\'ement, si $W$ est un groupe exceptionnel de rang 2 alors $\overline{W}:=W/Z(W)$ peut \^etre consid\'er\'e comme le groupe des \'el\'ements pairs d'un groupe de Coxeter fini $C$ de rang 3. Pour tout $x$ dans $\overline{W}$, nous fixons un mot r\'eduit $w_x$ repr\'esentant $x$ dans $\overline{W}$. P. Etingof et E. Rains font correspondre $w_x$ \`a un \'el\'ement que l'on note $T_{w_x}$ dans une alg\`ebre sur un anneau $R^C$, qu'ils notent $A_+(C)$. Cette alg\`ebre est en r\'ealit\'e li\'ee \`a l'alg\`ebre $H$. Ils prouvent \'egalement que cette alg\`ebre est engendr\'ee sur $R^C$ par les \'el\'ements $T_{w_x}$. En utilisant ce r\'esultat, nous avons trouv\'e un ensemble g\'en\'erateur pour $H$ sur $R$ de $|W|$ \'el\'ements pour les groupes exceptionnels $G_5, \dots, G_{15}$. 
More precisely, for every $x\in \overline{W}$ we take a certain word $\tilde w_x$ representing $x$ in $\overline{W}$ and satisfying good properties, which we give in detail in this chapter. Note that this word is not necessarily reduced. We associate to this word an element $T_{\tilde w_x}$ in $A_+(C)$ and denote by $v_x$ its image in $H$ (see propositions \ref{ERSUR} and \ref{ERRSUR} and corollary \ref{corIVAN}). We set $U:=\sum\limits_{x\in \overline{W}} \sum\limits_{k=0}^{|Z(W)|-1}Rz^kv_x$. The main theorem of this section is that $H=U$. Using this approach, we give a different proof for the case of $G_{12}$, which had been verified by I. Marin and G. Pfeiffer (see theorem \ref{2case}). This chapter is mainly devoted to the proof of $H=U$. The proof involves many calculations, which we try to make as simple and short as possible, so that they are reasonably easy to follow. To sum up, the freeness conjecture is still open for the groups $G_{17},\dots, G_{21}$. We are optimistic that the methodology used for the rest of the rank 2 groups can also be applied to verify the conjecture in these cases. Since these groups are large, we are not certain that we can produce proofs without computer calculations, as we did in the other cases. However, there are strong indications that, by pursuing this research further and possibly using computer algorithms, it will be possible to prove the remaining cases. ~ \chapter {The BMR freeness conjecture} In this chapter we introduce the subject of this PhD thesis, which is the verification of several cases of an important conjecture of M. Brou\'e, G. Malle and R. Rouquier, which we call the BMR freeness conjecture.
We also include the necessary preliminaries needed to describe it accurately. \section{Complex Reflection Groups and Complex Braid Groups} \indent Let $V$ be a $\mathbb{C}$-vector space of finite dimension $n$. For all definitions and results we follow mostly \cite{bmr}, \cite{lehrer} and \cite{ivancourse}. \subsection{Complex Reflection Groups} \begin{defn} A pseudo-reflection of $V$ is a non-trivial element $s$ of $GL(V)$ of finite order, whose vector space of fixed points is a hyperplane (meaning that $dim_{\mathbb{C}}\text{ker}(s-1)=n-1$). \end{defn} It follows directly from the definition that a pseudo-reflection has exactly one eigenvalue not equal to 1. We now give some examples of pseudo-reflections. \begin{ex} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=*] \item Every reflection is a pseudo-reflection of order 2. \item If $j$ is a primitive third root of unity, the matrices $$s_1:=\begin{bmatrix} j&0\\ -j^2&1 \end{bmatrix} \text { and } s_2:=\begin{bmatrix} 1&j^2\\ 0&j^2 \end{bmatrix}$$ are pseudo-reflections of order 3. \qed \end{itemize} \end{ex} \begin{defn} A complex reflection group $W$ of rank $n$ is a finite subgroup of $GL(V)$ which is generated by pseudo-reflections. \end{defn} \begin{rem} Every pseudo-reflection is a unitary reflection with respect to some form (see lemma 1.3 of \cite{lehrer}). Therefore, every complex reflection group $W\leq GL(V)$ can be considered as a subgroup of $U_n(\mathbb{C})$, where $U_n(\mathbb{C})$ is the unitary group of degree $n$. \label{unitary} \end{rem} \begin{ex} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=*] \item The group $\grm_d$ of all the $d$-th roots of unity is a complex reflection group of rank 1. \item The symmetric group $S_n$ is a complex reflection group of rank $n-1$, if we associate to every transposition $(i\;j)$ the permutation matrix obtained by permuting the rows of the $n\times n$ identity matrix according to $(i\;j)$.
\item More generally, every real reflection group (also known as a finite Coxeter group) $$C=\langle s_1,\dots, s_n\;|\; (s_is_j)^{m_{i,j}}=1\rangle,$$ where $m_{i,i}=1$ and $2\leq m_{i,j}<\infty $ for $i\not=j$, is a complex reflection group of rank $n$. \qed \end{itemize} \end{ex} \begin{ex} Another example of a complex reflection group is the three-parameter family, known as $G(de,e,n)$ in the notation of Shephard and Todd (see \cite{shephard}). By definition, $G(de,e,n)$ is the group of monomial $n\times n$ matrices (meaning that they have exactly one nonzero entry in each row and in each column) whose nonzero entries are $de$-th roots of unity and whose product is a $d$-th root of unity. The three-parameter family $G(de,e,n)$ can also be defined via the semi-direct product $$G(de,e,n):=D(de,e,n)\rtimes S_n,$$ where $D(de,e,n)$ denotes the group of all diagonal $n\times n$ matrices whose diagonal entries are $de$-th roots of unity and whose determinant is a $d$-th root of unity. \qed \end{ex} Let $s_i$ be the permutation matrix of the transposition $(i\;i+1)$, $i=1,\dots,n-1$, let $t_e:=\begin{pmatrix} 0&\grz_e^{-1}&0\\ \grz_e&0&0\\ 0&0& \text{Id} \end{pmatrix}$ and $u_d:=$diag$\{\grz_d,1,\dots,1\}$, where $\grz_m$ denotes a primitive $m$-th root of unity. One can easily verify that these matrices are pseudo-reflections. The following result is based on \cite{Michelcour}, Exercise 2.9 and on Chapter 2, \S 3 in \cite{lehrer}. \begin{prop} $G(de,e,n)$ is a complex reflection group of rank $n$. Its generators are as follows: \begin{itemize}[leftmargin=*] \item $\underline{d=1}$: The group $G(e,e,n)$ is generated by the pseudo-reflections $t_e$, $s_1$,\dots, $s_{n-1}$. \item $\underline{e=1}$: The group $G(d,1,n)$ is generated by the pseudo-reflections $u_d$, $s_1$,\dots, $s_{n-1}$. \item \underline{$d\not= 1$, $e\not=1$}: The group $G(de,e,n)$ is generated by the pseudo-reflections $u_d$, $t_{de}$, $s_1$,\dots, $s_{n-1}$.
\end{itemize} \end{prop} \begin{rem} The complex reflection groups $G(de,e,n)$ give rise to the real reflection groups. More precisely: \begin{itemize}[leftmargin=*] \item The group $G(1,1,n)$ is the symmetric group $S_n$, also known as the finite Coxeter group of type $A_{n-1}$. \item The group $G(2,1,n)$ is the finite Coxeter group of type $B_n$. \item The group $G(2,2,n)$ is the finite Coxeter group of type $D_n$. \item The group $G(e,e,2)$ is the dihedral group of order $2e$, also known as the finite Coxeter group of type $I_2(e)$. \end{itemize} \label{cox} \end{rem} \begin{defn} Let $W\leq GL(V)$ be a complex reflection group. We say that $W$ is irreducible if the only subspaces of $V$ that are stable under the action of $W$ are $\{0\}$ and $V$. \end{defn} \begin{ex} The group $G(de,e,n)$ is irreducible apart from the following cases: \begin{itemize}[leftmargin=*] \item The group $G(1,1,n)=S_n$, considered as a complex reflection group of rank $n$ ($S_n$ is irreducible as a complex reflection group of rank $n-1$). \item The group $G(2,2,2)$ (the dihedral group of order 4). \qed \end{itemize} \label{ex1} \end{ex} The following proposition (proposition 1.27 in \cite{lehrer}) states that every complex reflection group can be written as a direct product of irreducible ones. \begin{prop} Let $W\leq GL(V)$ be a complex reflection group. Then, there is a decomposition $V=V_1\oplus\dots\oplus V_m$ such that the restriction $W_i$ of $W$ to $V_i$ acts irreducibly on $V_i$ and $W=W_1\times\dots\times W_m$. \label{irred} \end{prop} Therefore, the study of complex reflection groups reduces to the irreducible case and, as a result, it is sufficient to classify the irreducible complex reflection groups. The following classification, also known as the ``Shephard-Todd classification'', is due to G.C. Shephard and J.A. Todd (for more details one may refer to \cite{cohen} or \cite{shephard}). \begin{thm} Let $W$ be an irreducible complex reflection group.
Then, up to conjugacy, $W$ belongs to precisely one of the following classes: \begin{itemize}[leftmargin=*] \item The infinite family $G(de,e,n)$, as described in example \ref{ex1}. \item The 34 exceptional groups $G_n$ ($n=4,\dots,37$). \end{itemize} \label{classif} \end{thm} \begin{rem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=*] \item[(i)] G.C. Shephard and J.A. Todd numbered the irreducible unitary reflection groups from 1 to 37. However, the first three entries in their list refer to the three-parameter family. More precisely, $G_1:=G(1,1,n)$, $G_2:=G(de,e,n),$ $n\geq 2$ and $G_3:=G(d,1,1)$. Hence, when we refer to the exceptional cases, we mean the groups $G_4, \dots, G_{37}$. \item [(ii)] Among the irreducible complex reflection groups we encounter the irreducible finite Coxeter groups. By remark \ref{cox} we have already seen the finite Coxeter groups of type $A_{n-1}$, $B_n$, $D_n$ and $I_2(e)$ as irreducible complex reflection groups inside the three-parameter family $G(de,e,n)$. The rest of the cases, which are the irreducible finite Coxeter groups of types $H_3$, $F_4$, $H_4$, $E_6$, $E_7$ and $E_8$, are the exceptional groups $G_{23}$, $G_{28}$, $G_{30}$, $G_{35}$, $G_{36}$ and $G_{37}$, respectively. \end{itemize} \label{remirred} \end{rem} \subsection{Complex Braid groups} \label{seccccc} \indent This section is a rewriting of parts of \cite{broue2000}, \cite{bmr} and of \S2.1 in \cite{marinkrammer}. Let $W\leq GL(V)$ be a complex reflection group. We let $\mathcal{R}$ denote the set of pseudo-reflections of $W$, $\mathcal{A}=\{\text{ker}(s-1)\;|\;s\in \mathcal{R}\}$ the hyperplane arrangement associated to $\mathcal{R}$, and $X=V\setminus \cup \mathcal{A}$ the corresponding hyperplane complement. We assume that $\mathcal{A}$ is essential, meaning that $\cap_{H\in \mathcal{A}}H=\{0\}$.
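These notions can be checked by brute force on a tiny example. The following sketch (an illustration, not part of the text) computes the group $G(2,1,2)$ by closing its two generators under multiplication, detects its pseudo-reflections via the rank-2 criterion $M\neq 1$ and $\det(M-1)=0$, and verifies that the four reflecting hyperplanes are pairwise distinct lines, so that the arrangement is essential.

```python
# Sketch: compute G(2,1,2) and its hyperplane arrangement by brute force.
# 2x2 integer matrices are encoded as nested tuples so they are hashable.
I2 = ((1, 0), (0, 1))

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

# Generators of G(2,1,2): u = diag(-1, 1) and the transposition matrix s.
u = ((-1, 0), (0, 1))
s = ((0, 1), (1, 0))

group = {I2, u, s}
while True:                              # close under multiplication
    new = {mul(a, b) for a in group for b in group} - group
    if not new:
        break
    group |= new

def is_pseudo_reflection(m):
    # In rank 2: m != 1 and det(m - 1) = 0, i.e. the fixed space is a line.
    d = (m[0][0] - 1) * (m[1][1] - 1) - m[0][1] * m[1][0]
    return m != I2 and d == 0

reflections = [m for m in group if is_pseudo_reflection(m)]

def fixed_line(m):
    # A kernel vector of the rank-1 matrix m - 1 spans the fixed hyperplane.
    a, b = m[0][0] - 1, m[0][1]
    if (a, b) != (0, 0):
        return (b, -a)
    c, d = m[1][0], m[1][1] - 1
    return (d, -c)

lines = {fixed_line(m) for m in reflections}
```

Here $|G(2,1,2)|=2^2\cdot 2!=8$, there are four pseudo-reflections, and their fixed lines $x=0$, $y=0$, $x=y$ and $x=-y$ intersect only in the origin, as essentiality requires.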
$W$ acts on $\mathcal{A}$ by $w\cdot\text{ker}(s-1)=\text{ker}(wsw^{-1}-1).$ This action is well-defined, since for every $w\in W$ and $s\in \mathcal{R}$ we have that $wsw^{-1}\in \mathcal{R}$ (see lemma 1.9 in \cite{lehrer} or lemma 1.4 (1) in \cite{brouebook}). Let $x\in X$ and $s\in \mathcal{R}$. We notice that $s(x)\in X$ (if $s(x)\not \in X$, then there is an $s'\in\mathcal{R}$ such that $s(x)\in$ ker$(s'-1)$. Therefore, $s^{-1}(s'(s(x)))=x$, which means that $x\in s^{-1}\cdot$ker$(s'-1)\subset \cup \mathcal{A}$. This contradicts the fact that $x$ belongs to $X$). Therefore, we have an action of $W$ on $X$, defined by $s\cdot x=s(x)$, for every $s\in \mathcal{R}$ and $x\in X$. Let $X/W$ be the space of orbits of this action. For every $x\in X$ we write $\underline{x}$ for the image of $x$ under the canonical surjection $p: X\rightarrow X/W$. By Steinberg's theorem (see \cite{steinberg}) the action of $W$ on $X$ is free. Therefore it defines a Galois covering $X\rightarrow X/W$, which gives rise to the following exact sequence: $$1\rightarrow \grp_1(X,x)\rightarrow \grp_1(X/W, \underline{x})\rightarrow W\rightarrow 1$$ (for more details about Galois coverings and their exact sequences, one may refer to \cite{berenstein}, Chapter 5, \S8, 9, 10). We define $P:=\grp_1(X,x)$ the \emph{pure complex braid group} associated to $W$ and $B:=\grp_1(X/W, \underline{x})$ the \emph{complex braid group} associated to $W$. The canonical projection map $p: X\rightarrow X/W$ induces a natural projection map $ p_r: B \twoheadrightarrow W$, defined as follows: Let $b\in B$ and let $\grb : \left[0,1\right]\rightarrow X$ be a path in $X$ such that $\grb(0)=x$, which lifts $b$, meaning that $b=\left[p\circ \grb\right]$. $$ \begin{tikzcd} & X \arrow{d}{p} \\ \left[0,1\right] \arrow{ur}{\grb} \arrow{r}{p\circ \grb} & X/W \end{tikzcd} $$ Then $p_r(b)$ is defined by the equality $p_r(b)(x)=\grb(1)$.
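The conjugation-invariance of $\mathcal{R}$ used above can also be checked numerically on the order-3 pseudo-reflections $s_1,s_2$ of the first example. The sketch below (floating-point arithmetic, an illustration only) verifies that $s_1$, $s_2$ and the conjugate $s_2s_1s_2^{-1}$ all have order 3 and fix a hyperplane.

```python
# Numerical check (floats): the matrices s1, s2 from the earlier example
# are pseudo-reflections of order 3, and s2 s1 s2^{-1} is again one.
import cmath

j = cmath.exp(2j * cmath.pi / 3)        # a primitive third root of unity
s1 = ((j, 0), (-j**2, 1))
s2 = ((1, j**2), (0, j**2))
I2 = ((1, 0), (0, 1))

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][n] for k in range(2))
                       for n in range(2)) for i in range(2))

def power(m, e):
    r = I2
    for _ in range(e):
        r = mul(r, m)
    return r

def close(a, b, tol=1e-9):
    return all(abs(a[i][n] - b[i][n]) < tol for i in range(2) for n in range(2))

def is_pseudo_reflection(m):
    # m != 1 with det(m - 1) = 0: the fixed space is a hyperplane (a line).
    det = (m[0][0] - 1) * (m[1][1] - 1) - m[0][1] * m[1][0]
    return not close(m, I2) and abs(det) < 1e-9

# Inverse of s2 by the explicit 2x2 formula, then the conjugate s2 s1 s2^{-1}.
d = s2[0][0] * s2[1][1] - s2[0][1] * s2[1][0]
s2_inv = ((s2[1][1] / d, -s2[0][1] / d), (-s2[1][0] / d, s2[0][0] / d))
conj = mul(mul(s2, s1), s2_inv)
```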
The goal of this section is to define an element $\grs\in B$ from an element $s\in \mathcal{R}$. By algebraic topology, we know that if $(X, x_0)$ and $(Y,y_0)$ are based topological spaces, then $\grp_1(X\times Y, (x_0,y_0))\simeq \grp_1(X, x_0)\times \grp_1(Y,y_0)$ (one may refer, for example, to Chapter 2, \S 7 in \cite{massey}). Therefore, by proposition \ref{irred} we may assume that $W$ is an irreducible complex reflection group. Let $s\in \mathcal{R}$ be a pseudo-reflection of order $m$ and let $H=\text{ker}(s-1)$ be the corresponding hyperplane. We fix an element $x\in X$ and we define a path from $x$ to $s\cdot x$ in the following way: \begin{itemize}[leftmargin=*] \item We choose a point $x_0$ ``close to $H$ and far from the other reflecting hyperplanes'' as follows: Let $x_0^H \in H$ and $\varepsilon>0$, such that $B(x_0^H,\varepsilon)\cap \cup \mathcal{A}\subset H$. In other words, if $H\not=H'\in \mathcal{A}$, then $B(x_0^H,\varepsilon)\cap H'=\emptyset$. Let $x_0^{H^{\perp}} \in H^{\perp}$, such that $||x_0^{H^{\perp}}||<\varepsilon$. We define $x_0=x_0^H+x_0^{H^{\perp}}\in B(x_0^H,\varepsilon)$ (see figure \ref{ball}). \begin{figure} \caption{The choice of $x_0$} \label{ball} \end{figure} \item Let $\grg$ be an arbitrary path in $X$ from $x$ to $x_0$. Then, $\grg^{-1}$ is the path in $X$ with initial point $x_0$ and terminal point $x$, such that $\grg^{-1}(t)=\grg(1-t), \text{ for every } t\in\left[0,1\right].$ Thus, the path $s\cdot \grg^{-1}$ defined as $t\mapsto s(\grg^{-1}(t))$ is a path inside $X$, which goes from $s\cdot x_0$ to $s\cdot x$. We now consider a rotation $\grg_0$ of angle $\gru=2\grp/m$ inside $H^{\perp}$, with initial point $x_0$ and terminal point $s\cdot x_0$ (see figure \ref{braid}).
Since $x_0=x_0^H+x_0^{H^{\perp}}$, we can define $\grg_0: \left[0,1\right]\rightarrow H^{\perp}$ by $\grg_0(t)=x_0^H+e^{\gru i t}x_0^{H^{\perp}}.$ \item Let $\tilde \grg$ be the path from $x$ to $s\cdot x$ defined by $\tilde \grg:=s\cdot \grg^{-1}*\grg_0*\grg$. By the choice of $x_0$ this path lies in $X$ and its homotopy class does not depend on the choice of $x_0$. Let $\grs_{\grg}$ denote the element that $\tilde \grg$ induces in the braid group $B$. We say that $\grs_\grg$ is \emph{a braided reflection} associated to $s$ around the image of $H$ in $X/W$. \begin{figure} \caption{Braided reflection associated to $s$} \label{braid} \end{figure} \end{itemize} The next lemma is proposition 2.12 (1) in \cite{broue2000}. \begin{lem} Let $s$ be a pseudo-reflection and $\grs_{\grg}$ a braided reflection associated to $s$, as defined above. Then, $p_r(\grs_{\grg})=s$. \end{lem} The following proposition (see lemma 2.12 (2) in \cite{broue2000}) states that any two braided reflections associated to the same pseudo-reflection $s$ are conjugate in $P$. \begin{prop} Let $\grg'$ be another path in $X$ from $x$ to $x_0$ and let $\grt$ denote the loop in $X$ defined by $\grt:=\grg'^{-1}*\grg$. Then $$\tilde\grg'=s\cdot \grt*\tilde\grg*\grt^{-1}.$$ In particular, $\grs_{\grg}$ and $\grs_{\grg'}$ are conjugate in $P$. \end{prop} \begin{cor} Let $s_1, s_2$ be two pseudo-reflections which are conjugate in $W$ and let $\grs_1$ and $\grs_2$ denote two braided reflections associated to $s_1$ and $s_2$, respectively. Then, $\grs_1$ and $\grs_2$ are conjugate in $B$. \label{conbraid} \end{cor} Let $s\in \mathcal{R}$ and let $H=\text{ker}(s-1)$ be the corresponding hyperplane. Let $W_H$ be the subgroup of $W$ formed by $id_V$ and all the pseudo-reflections fixing $H$ pointwise. We recall that $V=H\oplus H^{\perp}$ and we set $\grf: GL(V)\rightarrow GL(H^{\perp})$ defined as $f\mapsto f|_{H^{\perp}}$. Let $w_1, w_2 \in W_H$ such that $\grf(w_1)=\grf(w_2)$.
Hence, by the definition of $\grf$ we have that $w_1(h^{\perp})=w_2(h^{\perp})$, for every $h^{\perp}\in H^{\perp}$. By the definition of $W_H$ we also have that $w_1(h)=w_2(h)$, for every $h\in H$. As a result, we have that $w_1(v)=w_2(v)$ for every $v\in V$, since every $v\in V$ can be written as $h+h^{\perp}$, where $h\in H$ and $h^{\perp}\in H^{\perp}$. Therefore, the restriction of $\grf$ to $W_H$ is injective and, hence, we have $W_H \leq GL(H^{\perp})\simeq GL_1(\mathbb{C})\simeq \mathbb{C}^{\times}$. As a result, the group $W_H$ is cyclic, as a finite subgroup of $\mathbb{C}^{\times}$. We denote by $e_H$ the order of the cyclic group $W_H$ and we give the following definition: \begin{defn} Let $s_H$ be the (unique) pseudo-reflection with non-trivial eigenvalue $e^{2\grp i/e_H}$, which generates $W_H$. We say that $s_H$ is a \emph{distinguished pseudo-reflection}. \end{defn} We recall that $p_r$ denotes the natural projection map $B \twoheadrightarrow W$ induced by the canonical surjection $X\rightarrow X/W$. \begin{defn} Let $s_H$ be a distinguished pseudo-reflection. A \emph{distinguished braided reflection} $\grs_H$ associated to $s_H$ is a braided reflection around the image of $H$ in $X/W$ such that $p_r(\grs_H)=s_H$. \end{defn} The next result (proposition 2.5 (a) in \cite{broue2000}) explains the importance of the distinguished braided reflections. \begin{prop} The complex braid group $B$ is generated by all the distinguished braided reflections around the images of the hyperplanes $H\in \mathcal{A}$ in $X/W$.
\end{prop} The next result (theorem 0.1 in \cite{bessiszariski}) shows that a complex braid group has an \emph{Artin-like presentation}; that is a presentation of the form $$\langle \mathbf{s}\in \mathbf{S}\;|\; \mathbf{v_i}=\mathbf{w_i} \rangle_{i\in I},$$ where $\mathbf{S}$ is a finite set of distinguished braided reflections and $I$ is a finite set of relations such that, for each $i\in I$, $\mathbf{v_i}$ and $\mathbf{w_i}$ are positive words in elements of $\mathbf{S}$. \begin{thm}Let $W$ be a complex reflection group with associated complex braid group $B$. There exists a finite subset $\mathbf{S}=\{\mathbf{s_1},\dots, \mathbf{s_n}\}$ of $B$, such that: \begin{itemize} \item[(i)] The elements $\mathbf{s_1}, \dots, \mathbf{s_n}$ are distinguished braided reflections and therefore, their images $s_1,\dots, s_n$ in $W$ are distinguished pseudo-reflections. \item[(ii)]The set $\mathbf{S}$ generates $B$ and therefore the set $S:=\{s_1,\dots, s_n\}$ generates $W$. \item [(iii)] There exist a set $I$ of relations of the form $\mathbf{w_1}=\mathbf{w_2}$, where $\mathbf{w_1}$ and $\mathbf{w_2}$ are positive words of equal length in the elements of $\mathbf{S}$, such that $\langle \mathbf{S}\;|\;I\rangle$ is a presentation of $B$. \item[(iv)] Viewing now $I$ as a set of relations in $S$, the group $W$ is presented by $$\langle S\;|I\;; \forall s\in S, s^{e_s}=1\rangle,$$ where $e_s$ denotes the order of $s$ in $W$. 
\end{itemize} \label{Presentt} \end{thm} \begin{rem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize} \item[(i)]By Brieskorn's theorem (see \cite{Brieskorn}) we have the following result: Let $C$ be a finite Coxeter group with presentation $$C=\langle s_1,\dots, s_n\;|\; s_i^2=1, \;(s_is_j)^{m_{i,j}}=1, \text{ for all } i\not=j\rangle.$$ The corresponding braid group $A$ (known as an Artin group of finite Coxeter type) has a presentation of the form $$A=\langle s_1,\dots, s_n\;|\; \;\underbrace{s_is_j\dots}_{m_{i,j}-\text{ times}}=\underbrace{s_js_i\dots}_{m_{i,j}-\text{ times}}, \text{ for all } i\not=j\rangle.$$ This means that we can obtain the presentation of the finite Coxeter group from the presentation of the Artin group, if we ``add'' to the latter the additional relations $s_i^2=1$, for every $i=1,\dots,n$. Therefore, theorem \ref{Presentt} generalizes Brieskorn's result to every complex reflection group. \item[(ii)]Theorem \ref{Presentt} not only shows that a complex braid group has an Artin-like presentation but also implies that any complex reflection group has a \emph{Coxeter-like presentation}; that is, a presentation of the form $$\langle s\in S\;|\; \{v_i=w_i\}_{i\in I} , \{s^{e_s}=1\}_{s\in S}\rangle,$$ where $S$ is a finite set of distinguished pseudo-reflections and $I$ is a finite set of relations such that, for each $i\in I$, $v_i$ and $w_i$ are positive words of the same length in elements of $S$. The tables in Appendix 1 in \cite{broue2000} provide a complete list of the irreducible complex reflection groups in the Shephard-Todd classification, together with a Coxeter-like presentation symbolized by a diagram. \end{itemize} \label{coxeterlike} \end{rem} We conclude this section by giving a description of the center of the complex braid group, a result that plays an important role in the sequel. For this description, we follow the arguments in the introduction of \cite{dignecenter}. Let $W\leq GL(V)$ be a complex reflection group.
By proposition \ref{irred} we can write $W$ as a direct product $W_1\times\dots\times W_r$ of irreducible reflection groups. As we explained in section \ref{seccccc}, the associated complex braid group is the direct product $B_1\times\dots\times B_r$, where $B_i$ is the complex braid group associated to the irreducible complex reflection group $W_i$, $i=1,\dots, r$. Therefore, the center of the complex braid group $Z(B)$ is the direct product $Z(B_1)\times \dots \times Z(B_r)$ and, thus, we may assume that $W$ is irreducible. Since $W$ is an irreducible complex reflection group, its center $Z(W)$ is a (finite) subgroup of $\mathbb{C}^{\times}$, thus a cyclic group. Let $Z(W)=\langle \grz \rangle$, where $\grz:=e^{2\grp i/|Z(W)|}$ and let $x$ be a basepoint of $X$. In \cite{bmr}, \S 2.24, M. Brou\'e, G. Malle and R. Rouquier defined a path $\grb$ inside $X$ by $t\mapsto \grz^tx$, where $\grz^t:=e^{2\grp it/|Z(W)|}$. We denote by $\tilde\grb$ the homotopy class of $\grb$ in $B$. The next theorem is due to M. Brou\'e, G. Malle and R. Rouquier (see theorem 2.2.4 in \cite{bmr}) and D. Bessis (see theorem 12.8 in \cite{bessiscenter}). \begin{thm} The center of the braid group of an irreducible complex reflection group is cyclic, generated by $\tilde\grb$. \label{centerbraid} \end{thm} \section{The freeness conjecture} \subsection {Generic Hecke algebras} \indent In this section we follow mostly \cite{bmr} and \cite{marinG26}. Let $W\leq GL(V)$ be a complex reflection group. Recalling the definitions and notations of the previous section, let $B$ be the complex braid group associated to $W$ and $S$ the set of the distinguished pseudo-reflections. We denote also by $e_s$ the order of $s$ in $W$. For each $s\in S$ we choose a set of $e_s$ indeterminates $u_{s,1},\dots, u_{s,e_s}$ such that $u_{s,i}=u_{t,i}$ if $s$ and $t$ are conjugate in $W$. 
We denote by $R$ the Laurent polynomial ring $\mathbb{Z}[u_{s,i},u_{s,i}^{-1}]$ and we give the following definition: \begin{defn} The generic Hecke algebra $H$ associated to $W$ with parameters $u_{s,1},\dots, u_{s,e_s}$ is the quotient of the group algebra $RB$ of $B$ by the ideal generated by the elements of the form \begin{equation} (\grs-u_{s,1})(\grs-u_{s,2})\dots (\grs-u_{s,e_s}), \label{Hecker} \end{equation} where $s$ runs over the conjugacy classes of $S$ and $\grs$ over the set of distinguished braided reflections associated to the pseudo-reflection $s$. \end{defn} \begin{rem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize} \item [(i)]It is enough to choose one relation of the form described in (\ref{Hecker}) per conjugacy class, since the corresponding braided reflections are conjugate in $B$ (corollary \ref{conbraid}). \item[(ii)]Let $C$ be a finite Coxeter group with Coxeter system $S$. In this case, the ring of Laurent polynomials is $R=\mathbb{Z}[u_{s,1}^{\pm},u_{s,2}^{\pm}]_{s\in S}$ and the generic Hecke algebra associated to $C$ has a presentation as follows: $$\langle \grs_1,\dots, \grs_n\;|\; \underbrace{\grs_i\grs_j\dots}_{m_{i,j}-\text{ times}}=\underbrace{\grs_j\grs_i\dots}_{m_{i,j}-\text{ times}},\; (\grs_i-u_1)(\grs_i-u_2)=0\rangle,$$ where $i\not =j$ and $2\leq m_{i,j}<\infty $. In this case (the real case), the generic Hecke algebra is known as the \emph{Iwahori-Hecke algebra} (for more details about Iwahori-Hecke algebras one may refer, for example, to \cite{geck} \S 4.4). \item [(iii)] Let $\grf: R\rightarrow \mathbb{C}$ be the specialization morphism defined as $u_{s,k}\mapsto e^{-2\grp \gri k/e_s}$, where $1\leq k\leq e_s$ and $\gri$ denotes an imaginary unit (a solution of the equation $x^2=-1$). Therefore, $H\otimes_{\grf}\mathbb{C}=\mathbb{C} B/(\grs^{e_s}-1)=\mathbb{C} \big(B/(\grs^{e_s}-1)\big)$. By theorem \ref{Presentt} (iv) we have that $B/(\grs^{e_s}-1)=W$.
Hence, $H\otimes_{\grf}\mathbb{C}=\mathbb{C} W$, meaning that $H$ is a deformation of the group algebra $RW$. \end{itemize} \label{remi} \end{rem} In order to make the definition of the generic Hecke algebra clearer to the reader, we give examples of the generic Hecke algebras associated to the exceptional groups $G_4$ and $G_{10}$. \begin{ex} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=*] \item Let $W:=G_4=\langle s_1, s_2\;|\;s_1^3=s_2^3=1, s_1s_2s_1=s_2s_1s_2\rangle$. Since $s_1=(s_2s_1)s_2(s_2s_1)^{-1}$, the ring of Laurent polynomials is $\mathbb{Z}[u_i^{\pm}]$, $i=1,2,3$ and the generic Hecke algebra has a presentation as follows: $$H_{G_4}=\langle \grs_1,\grs_2\;|\;\grs_1\grs_2\grs_1=\grs_2\grs_1\grs_2,\; (\grs_i-u_1)(\grs_i-u_2)(\grs_i-u_3)=0\rangle.$$ \item Let $W:=G_{10}=\langle s_1, s_2\;|\;s_1^3=s_2^4=1, s_1s_2s_1s_2=s_2s_1s_2s_1\rangle$. The ring of Laurent polynomials is $\mathbb{Z}[u_i^{\pm},v_j^{\pm}]$, $i=1,2,3$, $j=1,2,3,4$ and the generic Hecke algebra has a presentation as follows: $$H_{G_{10}}=\langle \grs_1,\grs_2\;|\;\grs_1\grs_2\grs_1\grs_2=\grs_2\grs_1\grs_2\grs_1,\prod\limits_{i=1}^{3}(\grs_1-u_i)=\prod\limits_{j=1}^{4}(\grs_2-v_j)=0 \rangle.$$ \qed \end{itemize} \end{ex} \subsection{The BMR freeness Conjecture: Recent work and open cases} \indent Let $W\leq GL(V)$ be a complex reflection group and $H$ its associated generic Hecke algebra over the ring of Laurent polynomials $R$, as defined in the previous section. We have the following conjecture due to M. Brou\'e, G. Malle and R. Rouquier (see \cite{bmr}). \begin{co} (The BMR freeness conjecture) The generic Hecke algebra $H$ is a free module over $R$ of rank $|W|$. \end{co} The next proposition (theorem 4.24 in \cite{bmr} or proposition 2.4, (1) in \cite{marinG26}) states that, in order to prove the validity of the BMR conjecture, it is enough to find a spanning set of $H$ over $R$ of $|W|$ elements.
\begin{prop} If $H$ is generated as an $R$-module by $|W|$ elements, then it is a free module over $R$ of rank $|W|$. \label{BMR PROP} \end{prop} \begin{thm} The BMR freeness conjecture holds for the real reflection groups (i.e. for the Iwahori-Hecke algebras). \label{IwahoriBMR} \end{thm} We give a sketch of the proof of this theorem in order to underline its similarities with the proof of the complex cases of rank 2 that we give in Chapter 4. For more details one may refer, for example, to \cite{geck}, lemma 4.4.3. \begin{proof}[Sketch of the proof] Let $C$ be a finite Coxeter group with Coxeter system $S=\{s_1,\dots, s_n\}$ and $\mathcal{H}$ the associated Iwahori-Hecke algebra over the ring $R=\mathbb{Z}[u_{s,1}^{\pm},u_{s,2}^{\pm}]_{s\in S}$, as defined in remark \ref{remi} (ii). By Matsumoto's lemma (see \cite{Matsumoto}) there is a natural map (not a group homomorphism) from $C$ to the corresponding braid group, taking any element of $C$ represented by some reduced word in the generators to the same word in the generators of the braid group. Therefore, for every element $c\in C$ represented by a reduced word $s_{i_1}\dots s_{i_r}$ there is a well-defined element in $\mathcal{H}$, which we denote by $T_c$, such that $T_c=T_{s_{i_1}}\dots T_{s_{i_r}}$. In particular, $T_1=1_{\mathcal{H}}$. We will prove that $\{T_c,\;c\in C\}$ is a spanning set of $\mathcal{H}$ over $R$ and, therefore, by proposition \ref{BMR PROP} we also prove the validity of the BMR freeness conjecture in the real case. Let $U$ be the $R$-submodule of $\mathcal{H}$ generated by $\{T_c,\;c\in C\}$. We must prove that $\mathcal{H}=U$. Since $1_{\mathcal{H}}\in U$, it will be sufficient to show that $U$ is a left ideal of $\mathcal{H}$. For this purpose, one may check that $U$ is invariant under left multiplication by all $T_s$, $s\in S$.
This is a straightforward consequence of the fact that $$T_sT_w=\begin{cases}T_{sw}, &\text { if } \ell(sw)=\ell(w)+1\\ (u_{s,1}+u_{s,2})T_w-u_{s,1}u_{s,2}T_{sw},&\text { if } \ell(sw)=\ell(w)-1 \end{cases},$$ where $\ell(w)$ and $\ell(sw)$ denote the lengths of the words $w$ and $sw$, respectively. \end{proof} We go back now to the case of an arbitrary complex reflection group $W$. In proposition \ref{irred} we saw that $W$ is a direct product of irreducible complex reflection groups. Therefore, we restrict ourselves to proving the validity of the conjecture for the irreducible complex reflection groups. Due to the classification of Shephard and Todd (theorem \ref{classif}), we have to prove the conjecture for the three-parameter family $G(de,e,n)$ and for the exceptional groups $G_4, \dots, G_{37}$. Thanks to S. Ariki, and to S. Ariki and K. Koike (see \cite{ariki} and \cite{arikii}), we have the following theorem: \begin{thm} The BMR freeness conjecture holds for the infinite family $G(de,e,n)$. \end{thm} As a consequence of the above result, one has to concentrate on the exceptional groups, which are divided into two families: the first family includes the groups $G_4, \dots, G_{22}$, which are of rank 2, and the second one includes the rest of them, which are of rank at least 3 and at most 8. We recall that among these groups we encounter 6 finite Coxeter groups (remark \ref{remirred} (ii)), for which we know the validity of the conjecture: the groups $G_{23}$, $G_{28}$, $G_{30}$, $G_{35}$, $G_{36}$ and $G_{37}$. Thus, it remains to prove the conjecture for 28 cases: the exceptional groups of rank 2 and the exceptional groups $G_{24}$, $G_{25}$, $G_{26}$, $G_{27}$, $G_{29}$, $G_{31}$, $G_{32}$, $G_{33}$ and $G_{34}$.
Among these 28 cases, we encounter 6 groups whose associated complex braid group is an Artin group: the groups $G_4$, $G_8$ and $G_{16}$, related to the Artin group of Coxeter type $A_2$, and the groups $G_{25}$, $G_{26}$ and $G_{32}$, related to the Artin groups of Coxeter type $A_3$, $B_3$ and $A_4$, respectively. The next theorem follows from the results of I. Marin (see \cite{marincubic} and \cite{marinG26}). \begin{thm} The BMR freeness conjecture holds for the exceptional groups $G_4$, $G_{25}$, $G_{26}$ and $G_{32}$. \label{braidcase} \end{thm} The case of $G_4$ has also been proven independently by M. Brou\'e and G. Malle, and by B. Berceanu and L. Funar (see \cite{brouem} and \cite{funar1995}). Exploring the rest of the cases, we notice that we encounter 9 groups generated by reflections (i.e. pseudo-reflections of order 2): these are the exceptional groups $G_{12}$, $G_{13}$, $G_{22}$ of rank 2 and the exceptional groups $G_{24}$, $G_{27}$, $G_{29}$, $G_{31}$, $G_{33}$ and $G_{34}$ of rank at least 3 and at most 6. I. Marin and G. Pfeiffer proved the following result by using computer algorithms (see \cite{marinpfeiffer}). \begin{thm} The BMR freeness conjecture holds for the exceptional groups $G_{12}$, $G_{22}$, $G_{24}$, $G_{27}$, $G_{29}$, $G_{31}$, $G_{33}$ and $G_{34}$. \label{2case} \end{thm} To sum up, the BMR freeness conjecture is still open for the exceptional groups of rank 2, apart from the cases of $G_4$, $G_{12}$ and $G_{22}$, for which we know the validity of the conjecture (theorems \ref{braidcase} and \ref{2case}). The following chapters are devoted to the proof of 11 of the 16 remaining cases, including also another proof of the case of $G_{12}$. \chapter{The freeness conjecture for the finite quotients of $B_3$} In this chapter we prove that the quotients of the group algebra of the braid group on 3 strands by a generic quartic and a generic quintic relation, respectively, have finite rank.
These are the special cases of the BMR freeness conjecture for the generic Hecke algebras of the groups $G_8$ and $G_{16}$. This result completes the proof of this conjecture in the case of the exceptional groups whose associated complex braid group is an Artin group (see theorem \ref{braidcase}). Exploring the consequences of these cases, we prove that we can completely determine the irreducible representations of the braid group on 3 strands of dimension at most 5, thus recovering a classification of Tuba and Wenzl in a more general framework. This chapter is based on the author's article (see \cite{chavli}). \section{The finite quotients of $B_n$} \label{s} \indent Let $B_n$ be the braid group on $n$ strands, defined via the following presentation: $$\langle s_1, \dots, s_{n-1}\;|\;s_is_{i+1}s_i=s_{i+1}s_is_{i+1}, \; s_is_j=s_js_i\rangle,$$ where in the first group of relations $1\leq i\leq n-2$, and in the second one $|i-j|\geq 2$. Coxeter classified the quotients of $B_n$ by the additional relation $s_i^k=1$ (for more details one may refer to \textsection10 in \cite{Coxeter}); these quotients are finite if and only if $\frac{1}{k}+\frac{1}{n}>\frac{1}{2}$. If we exclude the obvious cases $n= 2$ and $k=2$, which lead to the cyclic groups and to the symmetric groups, respectively, there is only a finite number of such groups, and they are irreducible complex reflection groups: these are the exceptional groups $G_4, G_8$ and $G_{16}$, for $n=3$ and $k=3,4,5$, and the exceptional groups $G_{25}$, $G_{32}$ for $n=4,5$ and $k=3$. The BMR freeness conjecture is known for the cases of $G_4$, $G_{25}$ and $G_{32}$, as we explained in the previous chapter (see theorem \ref{braidcase}). Therefore, it remains to prove the validity of the conjecture for the groups $G_8$ and $G_{16}$, which, together with the groups $S_3$ (if we consider the case of the symmetric group as well) and $G_4$, are finite quotients of $B_3$.
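Coxeter's finiteness criterion leaves only finitely many cases, which can be enumerated directly. The sketch below (an illustration; the group names follow the identification recalled in the text) recovers exactly the five exceptional quotients.

```python
# Sketch: enumerate Coxeter's finiteness criterion 1/k + 1/n > 1/2 for the
# quotients B_n / <s_i^k>, excluding the degenerate families n = 2 and k = 2.
from fractions import Fraction

finite = sorted((n, k)
                for n in range(3, 50) for k in range(3, 50)
                if Fraction(1, k) + Fraction(1, n) > Fraction(1, 2))

# Identification with the exceptional groups, as recalled in the text.
names = {(3, 3): 'G4', (3, 4): 'G8', (3, 5): 'G16',
         (4, 3): 'G25', (5, 3): 'G32'}
labels = [names[nk] for nk in finite]
```

Only the pairs $(n,k)\in\{(3,3),(3,4),(3,5),(4,3),(5,3)\}$ survive, giving the groups $G_4$, $G_8$, $G_{16}$, $G_{25}$ and $G_{32}$.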
Moreover, these exceptional groups belong to the class of complex reflection groups of rank two. Let $B_3$ be the braid group on 3 strands, generated by the braids $s_1$ and $s_2$, subject to the single relation $s_1s_2s_1=s_2s_1s_2$, which we call the braid relation. For every $k=2,\dots,5$ we denote by $R_k$ the Laurent polynomial ring $\mathbb{Z}[a_{k-1},...,a_1, a_0, a_0^{-1}]$. Let $H_k$ denote the quotient of the group algebra $R_kB_3$ by the relations \begin{equation}s_i^k=a_{k-1}s_i^{k-1}+...+a_1s_i+a_0,\label{one} \end{equation} for $i=1,2$. \begin{defn}For $k=2, 3,4$ and 5 we call the algebra $H_k$ the quadratic, cubic, quartic and quintic Hecke algebra, respectively. \end{defn} We identify $s_i$ with their images in $H_k$. We multiply $(\ref{one})$ by $s_i^{-k}$ and, since $a_0$ is invertible in $R_k$, we have: \begin{equation}s_i^{-k}=-a_0^{-1}a_1s_i^{-k+1}-a_0^{-1}a_2s_i^{-k+2}-...-a_0^{-1}a_{k-1}s_i^{-1}+a_0^{-1}, \label{two}\end{equation} for $i=1,2$. If we multiply ($\ref{two}$) by a suitable power of $s_i$ we can expand $s_i^{-n}$ as a linear combination of $s_i^{-n+1},...,s_i^{-n+(k-1)}, s_i^{-n+k}$, for every $n\in \mathbb{N}$. Moreover, comparing (\ref{one}) and (\ref{two}), we can define an automorphism $\grF$ of $H_k$ as a $\mathbb{Z}$-algebra, where $$\begin{array}{lcl} s_i &\mapsto &s_i^{-1},\text{ for } i=1,2\\ a_j&\mapsto&-a_0^{-1}a_{k-j},\text{ for } j=1,...,k-1\\ a_0&\mapsto&a_0^{-1} \end{array}$$ We now prove an easy lemma that plays an important role in the sequel. This lemma is in fact a generalization of lemma 2.1 of \cite{marincubic}. \begin{lem}For every $m \in \mathbb{Z}$ we have $s_2s_1^ms_2^{-1}=s_1^{-1}s_2^ms_1$ and $s_2^{-1}s_1^ms_2=s_1s_2^ms_1^{-1}$. \label{lem1} \end{lem} \begin{proof} By using the braid relation we have that $(s_1s_2)s_1(s_1s_2)^{-1}=s_2$. Therefore, for every $m \in \mathbb{Z}$ we have $(s_1s_2)s_1^m(s_1s_2)^{-1}=s_2^m$, which gives us the first equality. Similarly, we prove the second one.
\end{proof} If we assume $m$ of lemma \ref{lem1} to be positive we have $s_1s_2s_1^n=s_2^ns_1s_2$ and $s_1^ns_2s_1=s_2s_1s_2^n$, where $n\in \mathbb{N}$. Taking inverses, we also get $ s_1^{-n}s_2^{-1}s_1^{-1}=s_2^{-1}s_1^{-1}s_2^{-n}$ and $ s_1^{-1}s_2^{-1}s_1^{-n}=s_2^{-n}s_1^{-1}s_2^{-1}$. We call all the above relations \emph{the generalized braid relations.} We will denote by $u_i$ the $R_k$-subalgebra of $H_k$ generated by $s_i$ (or equivalently by $s_i^{-1}$) and by $u_i^{\times}$ the group of units of $u_i$, and we let $\grv=s_2s_1^2s_2$. Since the center of $B_3$ is the subgroup generated by the element $z=s_1^2\grv$ (see, for example, theorem 1.24 of \cite{turaev}), for all $x\in u_1$ and $m\in \mathbb{Z}$ we have that $x\grv^m=\grv^mx$. We will see later that $\grv$ plays an important role in the description of $H_k$. Let $W_k$ denote the quotient group $B_3/\langle s_i^k \rangle$, $k=2, 3, 4$ and 5, and let $r_k<\infty$ denote its order. Our goal now is to prove that $H_k$ is a free $R_k$-module of rank $r_k$, a statement that holds for $H_2$, since $W_2=S_3$ is a Coxeter group (see theorem \ref{IwahoriBMR}). We also know that this holds for the cubic Hecke algebra $H_3$ (see theorem 3.2 $(3)$ in \cite{marincubic}). For the remaining cases, we will use the following proposition. \begin{prop}Let $k\in\{4,5\}$. If $H_k$ is generated as a module over $R_k$ by $r_k$ elements, then $H_k$ is a free $R_k$-module of rank $r_k$ and, therefore, the BMR freeness conjecture holds for the exceptional groups $G_8$ and $G_{16}$. \label{rp} \end{prop} \begin{proof} Let $\tilde H_k$ denote the generic Hecke algebra of $G_8$ and $G_{16}$, for $k=4$ and 5, respectively. We know that $\tilde{H_k}$ is a free $\tilde{R_k}$-module of rank $r_k$ if and only if $H_k$ is a free $R_k$-module of rank $r_k$ (see lemma 2.3 in \cite{marinG26}). The result then follows from proposition 2.4(1) in \cite{marinG26}.
\end{proof} \section{The BMR freeness conjecture for $G_8$ and $G_{16}$} \indent In proposition \ref{rp} we saw that, in order to prove the BMR freeness conjecture for $G_8$ and $G_{16}$, we only need to find a spanning set of $H_k$ ($k=4,5$) consisting of $r_k$ elements. For this purpose we follow the idea I. Marin used in \cite{marincubic}, theorem 3.2 $(3)$, in order to find a spanning set for the cubic Hecke algebra. More precisely, for every $k\in\{4,5\}$, let $w_1$ denote the subgroup of $W_k$ generated by $s_1$ and let $\mathcal{J}$ denote a system of representatives of the double cosets $w_1 \backslash W_k / w_1$. We have $W_k=\bigsqcup\limits_{w\in \mathcal{J}}w_1\cdot w \cdot w_1$. For every $w\in \mathcal{J}$ we fix a factorization $f_w$ of $w$ into a product of the generators $s_1$ and $s_2$ of $W_k$ and we define an element $T_{f_w}$ inside $H_k$ as follows: Let $f_w=x_1^{a_1}x_2^{a_2}\dots x_r^{a_r},$ where $x_i\in \{s_1,s_2\}$ and $a_i\in \mathbb{Z}$. We define the element $T_{f_w}$ to be the product $x_1^{a_1}x_2^{a_2}\dots x_r^{a_r}$ inside $H_k$ (recall that we identify $s_i$ with their images in $H_k$). For $f_w=1$, we define $T_1:=1_{H_k}$. We notice that the element $T_{f_w}$ depends on the factorization $f_w$, meaning that if we choose a different factorization $f'_w$ of $w$ we may have $T_{f_w}\not=T_{f'_w}$. We set $U:=\sum\limits_{w\in \mathcal{J}}u_1\cdot T_{f_w} \cdot u_1$. The main result of this chapter is that $U$ is generated as an $R_k$-module by $r_k$ elements and that $H_k=U$. We prove this result by using a case-by-case analysis. \subsection{The quartic Hecke algebra $H_4$} \label{sec2} \indent Our ring of definition is $R_4=\mathbb{Z}[a, b,c,d, d^{-1}]$ and therefore, relation (\ref{one}) becomes $s_i^4=as_i^3+bs_i^2+cs_i+d$, for $i=1,2$.
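For concreteness, relation (\ref{two}) specializes for $k=4$ to the worked identity $$s_i^{-4}=-d^{-1}cs_i^{-3}-d^{-1}bs_i^{-2}-d^{-1}as_i^{-1}+d^{-1}, \quad i=1,2,$$ so, after multiplying by a suitable power of $s_i$, every power of $s_i$ can be expanded as a linear combination of any four consecutive powers; in the calculations below we mostly use the powers $s_i^{-1}, 1, s_i, s_i^2$ or $s_i^{-2}, s_i^{-1}, 1, s_i$.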
We set $$\begin{array}{lcl} U'&=&u_1u_2u_1+u_1s_2s_1^{-1}s_2u_1+ u_1s_2^{-1}s_1s_2^{-1}u_1+u_1s_2^{-1}s_1^{-2}s_2^{-1}u_1 \\ U&=&U'+u_1s_2s_1^{-2}s_2u_1+u_1s_2^{-2}s_1^{-2}s_2^{-2}u_1.\end{array}$$ It is obvious that $U$ is a $u_1$-bimodule and that $U'$ is a $u_1$-sub-bimodule of $U$. Before proving our main theorem (theorem \ref{th}) we need a few preliminary results. \begin{lem}For every $m \in \mathbb{Z}$ we have \begin{itemize}[leftmargin=0.8cm] \item[(i)] $s_2s_1^{m}s_2\in U$. \item [(ii)]$s_2^{-1}s_1^{m}s_2^{-1}\in U'$. \item [(iii)]$s_2^{-2}s_1^{m}s_2^{-1}\in U'$. \end{itemize} \label{lem2} \end{lem} \begin{proof}By using the relations (\ref{one}) and (\ref{two}) we can assume that $m\in\{0,1,-1,-2\}$. Hence, we only have to prove (iii), since (i) and (ii) follow from the definition of $U$ and $U'$ and the braid relation. For (iii), we can assume that $m\in\{-2, 1\}$, since the case where $m=-1$ is obvious by using the generalized braid relations. We have: $s_2^{-2}s_1^{-2}s_2^{-1}=s_1^{-1}(s_1s_2^{-2}s_1^{-1})s_1^{-1}s_2^{-1} =s_1^{-1}s_2^{-1}s_1^{-2}(s_2s_1^{-1}s_2^{-1}) =s_1^{-1}(s_2^{-1}s_1^{-3}s_2^{-1})s_1.$ The result then follows from $(ii)$. For the element $s_2^{-2}s_1s_2^{-1}$, we expand $s_2^{-2}$ as a linear combination of $s_2^{-1}, 1, s_2, s_2^2$ and, by using the definition of $U'$ and lemma $\ref{lem1}$, we only have to check that $s_2^2s_1s_2^{-1}\in U'$. Indeed, we have: $s_2^2s_1s_2^{-1}= s_2(s_2s_1s_2^{-1})=(s_2s_1^{-1}s_2)s_1 \in U'.$ \end{proof} \begin{prop}$u_2u_1u_2\subset U.$ \label{prop1} \end{prop} \begin{proof}We need to prove that every element of the form $s_2^{\gra}s_1^{\grb}s_2^{\grg}$ belongs to $U$, for $\gra,\grb,\grg\in\{-2,-1,0,1\}$. However, when $\gra\grb\grg=0$ the result is obvious. Therefore, we can assume $\gra,\grb,\grg\in\{-2,-1,1\}$. We have the following cases: \begin{itemize}[leftmargin=*] \item\underline{$\gra=1$}: The cases where $\grg\in\{-1, 1\}$ follow from lemmas $\ref{lem1}$ and \ref{lem2}(i).
Hence, we need to prove that $s_2s_1^{\grb}s_2^{-2}\in U$. For $\grb=-1$ we use lemma $\ref{lem1}$ and we have $s_2s_1^{-1}s_2^{-2}=(s_2s_1^{-1}s_2^{-1})s_2^{-1}=s_1^{-1}(s_2^{-1}s_1s_2^{-1})\in U.$ For $\grb=1$ we expand $s_2^{-2}$ as a linear combination of $s_2^{-1}, 1, s_2, s_2^2$ and the result follows from the cases where $\grg\in\{-1,0,1\}$ and the generalized braid relations. It remains to prove that $s_2s_1^{-2}s_2^{-2}\in U$. By expanding now $s_1^{-2}$ as a linear combination of $s_1^{-1}, 1, s_1, s_1^2$ we only need to prove that $s_2s_1^2s_2^{-2}\in U$ (the rest of the cases correspond to $\grb=-1$, $\grb=0$ and $\grb=1$). We use lemma $\ref{lem1}$ and we have: $s_2s_1^2s_2^{-2}=(s_2s_1^2s_2^{-1})s_2^{-1}=s_1^{-1}s_2(s_2s_1s_2^{-1})= s_1^{-1}(s_2s_1^{-1}s_2)s_1 \in U.$ \item \underline{$\gra=-1$}: Exactly as in the case where $\gra=1$, we only have to prove that $s_2^{-1}s_1^{\grb}s_2^{-2}\in U$. For $\grb=-1$ the result is obvious by using the generalized braid relations. For $\grb=-2$ we have: $s_2^{-1}s_1^{-2}s_2^{-2}=(s_2^{-1}s_1^{-2}s_2)s_2^{-3}=s_1s_2^{-1}(s_2^{-1}s_1^{-1}s_2^{-3}) =s_1(s_2^{-1}s_1^{-3}s_2^{-1})s_1^{-1}$. However, by lemma \ref{lem2}(ii) we have that the element $s_2^{-1}s_1^{-3}s_2^{-1}$ is inside $U'$ and, hence, inside $U$. It remains to prove that $s_2^{-1}s_1s_2^{-2}\in U$. For this purpose, we expand $s_2^{-2}$ as a linear combination of $s_2^{-1}, 1, s_2, s_2^2$ and by the definition of $U$ and lemma \ref{lem1} we only need to prove that $s_2^{-1}s_1s_2^{2}\in U$. Indeed, using lemma \ref{lem1} again we have: $s_2^{-1}s_1s_2^2=(s_2^{-1}s_1s_2)s_2 =s_1(s_2s_1^{-1}s_2) \in U.$ \item \underline{$\gra=-2$}: We can assume that $\grg\in\{1, -2\}$, since the case where $\grg=-1$ follows immediately from lemma \ref{lem2}$(iii)$. For $\grg=1$ we use lemma $\ref{lem1}$ and we have $s_2^{-2}s_1^{\grb}s_2=s_2^{-1}(s_2^{-1}s_1^{\grb}s_2)=(s_2^{-1}s_1s_2^{\grb})s_1^{-1}$.
The latter is an element in $U$, as we proved in the case where $\gra=-1$. For $\grg=-2$ we only need to prove the cases where $\grb\in\{-1,1\}$, since the case where $\grb=-2$ follows from the definition of $U$. We use the generalized braid relations and we have $s_2^{-2}s_1^{-1}s_2^{-2}=(s_2^{-2}s_1^{-1}s_2^{-1})s_2^{-1}=s_1^{-1}(s_2^{-1}s_1^{-2}s_2^{-1})\in U$. Moreover, $s_2^{-2}s_1s_2^{-2}=s_1(s_1^{-1}s_2^{-2}s_1)s_2^{-2}= s_1(s_2s_1^{-2}s_2^{-3})$. The result follows from the case where $\gra=1$, if we expand $s_2^{-3}$ as a linear combination of $s_2^{-2}$, $s_2^{-1}$, 1 and $s_2$. \qedhere \end{itemize} \end{proof} We can now prove the main theorem of this section. \begin{thm} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item [(i)] $U=u_1u_2u_1+u_1s_2s_1^{-1}s_2u_1 +u_1s_2^{-1}s_1s_2^{-1}u_1 +u_1\grv +u_1\grv^{-1} +u_1\grv^{-2}$. \item[(ii)]$H_4=U$. \end{itemize} \label{th} \end{thm} \begin{proof}\mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.6cm] \item [(i)] We recall that $\grv=s_2s_1^2s_2$. We must prove that the RHS, which is by definition $U'+u_1\grv +u_1\grv^{-2}$, is equal to $U$. For this purpose we will ``replace'' inside the definition of $U$ the elements $s_2s_1^{-2}s_2$ and $s_2^{-2}s_1^{-2}s_2^{-2}$ with the elements $\grv$ and $\grv^{-2}$ modulo $U'$, by proving that $s_2s_1^{-2}s_2\in u_1^{\times}\grv +U'$ and $s_2^{-2}s_1^{-2}s_2^{-2}\in u_1^{\times}\grv^{-2} +U'$. For the element $s_2s_1^{-2}s_2$, we expand $s_1^{-2}$ as a linear combination of $s_1^{-1}, 1, s_1, s_1^2$, where the coefficient of $s_1^2$ is invertible. The result then follows from the definition of $U'$ and the braid relation.
For the element $s_2^{-2}s_1^{-2}s_2^{-2}$ we apply lemma \ref{lem1} and the generalized braid relations and we have: $s_2^{-2}s_1^{-2}s_2^{-2}=s_2^{-2}s_1^{-1}(s_1^{-1}s_2^{-2}s_1)s_1^{-1}=s_2^{-1}(s_2^{-1}s_1^{-1}s_2)s_1^{-1}(s_1^{-1}s_2^{-1}s_1)s_1^{-2}=s_2^{-1}s_1(s_2^{-1}s_1^{-2}s_2)s_1^{-1}s_2^{-1}s_1^{-2}\in s_2^{-1}s_1^2s_2^{-2}s_1^{-2}s_2^{-1}u_1$. We expand $s_1^2$ as a linear combination of $s_1$, 1, $s_1^{-1}$, $s_1^{-2}$, where the coefficient of $s_1^{-2}$ is invertible, and by the generalized braid relations and the fact that $s_1^{-2}\grv^{-2}=\grv^{-2}s_1^{-2}=s_2^{-1}s_1^{-2}s_2^{-2}s_1^{-2}s_2^{-1}s_1^{-2}$ we have that $$s_2^{-2}s_1^{-2}s_2^{-2} \in s_2^{-1}s_1s_2^{-2}s_1^{-2}s_2^{-1}u_1+s_2^{-3}s_1^{-2}s_2^{-1}u_1+u_1s_2^{-1}s_1^{-3}s_2^{-1}u_1+ u_1^{\times}\grv^{-2}.$$ Therefore, by lemma \ref{lem2}(ii) it is enough to prove that the elements $s_2^{-1}s_1s_2^{-2}s_1^{-2}s_2^{-1}$ and $s_2^{-3}s_1^{-2}s_2^{-1}$ belong to $U'$. However, the latter is an element in $U'$, if we expand $s_2^{-3}$ as a linear combination of $s_2^{-2}, s_2^{-1}, 1, s_2$ and use lemma \ref{lem2}(iii), the definition of $U'$ and lemma \ref{lem1}. Moreover, $s_2^{-1}s_1s_2^{-2}s_1^{-2}s_2^{-1}=s_2^{-2}(s_2s_1s_2^{-1})\grv^{-1}= s_2^{-2}s_1^{-1}s_2s_1\grv^{-1}=s_2^{-2}s_1^{-1}s_2\grv^{-1}s_1^{-1}=(s_2^{-2}s_1^{-4}s_2^{-1})s_1^{-1} \in U'$, by lemma \ref{lem2}(iii). \item [(ii)] Since $1\in U$, it will be sufficient to show that $U$ is a left ideal of $H_4$. We know that $U$ is a $u_1$-sub-bimodule of $H_4$. Therefore, we only need to prove that $s_2U\subset U$. Since $U$ is equal to the RHS of (i) we have that $$s_2U\subset s_2u_1u_2u_1+s_2u_1s_2s_1^{-1}s_2u_1+s_2u_1s_2^{-1}s_1s_2^{-1}u_1+ s_2u_1\grv+s_2u_1\grv^{-1}+s_2u_1\grv^{-2}.$$ However, $s_2u_1u_2u_1+s_2u_1\grv+s_2u_1\grv^{-1}+s_2u_1\grv^{-2}=s_2u_1u_2u_1+s_2\grv u_1+s_2\grv^{-1}u_1+s_2\grv^{-2}u_1=s_2u_1u_2u_1+s_2^2s_1^2s_2u_1+s_1^{-2}s_2^{-1}u_1+s_1^{-2}s_2^{-2}s_1^{-2}s_2^{-1}u_1 \subset u_1u_2u_1u_2u_1$.
Furthermore, by lemma \ref{lem1} we have $s_2u_1s_2^{-1}=s_1^{-1}u_2s_1$. Hence, $ s_2u_1s_2s_1^{-1}s_2u_1=(s_2u_1s_2^{-1})s_2^2s_1^{-1}s_2u_1=s_1^{-1}u_2(s_1s_2^2s_1^{-1})s_2u_1= s_1^{-1}u_2s_1^2s_2^2u_1\subset u_1u_2u_1u_2u_1.$ Moreover, by using lemma \ref{lem1} again we have that $(s_2u_1s_2^{-1})s_1s_2^{-1}u_1=s_1^{-1}u_2s_1^2s_2^{-1}u_1\subset u_1u_2u_1u_2u_1$. Therefore, $$ s_2u_1u_2u_1+s_2u_1s_2s_1^{-1}s_2u_1+s_2u_1s_2^{-1}s_1s_2^{-1}u_1+ s_2u_1\grv+s_2u_1\grv^{-1}+s_2u_1\grv^{-2}\subset u_1u_2u_1u_2u_1.$$ The result follows directly from proposition \ref{prop1}. \qedhere \end{itemize} \end{proof} \begin{cor}$H_4$ is a free $R_4$-module of rank $r_4=96$ and, therefore, the BMR freeness conjecture holds for the exceptional group $G_8$. \label{G8} \end{cor} \begin{proof}By proposition \ref{rp} it will be sufficient to show that $H_4$ is generated as an $R_4$-module by $r_4$ elements. By theorem \ref{th} and the fact that $u_1u_2u_1=u_1(R_4+R_4s_2+R_4s_2^{-1}+R_4s_2^2)u_1=u_1+u_1s_2u_1+u_1s_2^{-1}u_1+u_1s_2^2u_1$ we have that $H_4$ is generated as a left $u_1$-module by 24 elements. Since $u_1$ is generated by 4 elements as an $R_4$-module, we have that $H_4$ is generated over $R_4$ by $r_4=96$ elements. \end{proof} \subsection{The quintic Hecke algebra $H_5$} \indent Our ring of definition is $R_5=\mathbb{Z}[a, b,c, d,e,e^{-1}]$ and therefore, relation (\ref{one}) becomes $s_i^5=as_i^4+bs_i^3+cs_i^2+ds_i+e$, for $i=1,2$.
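Analogously to the quartic case, relation (\ref{two}) specializes for $k=5$ to $$s_i^{-5}=-e^{-1}ds_i^{-4}-e^{-1}cs_i^{-3}-e^{-1}bs_i^{-2}-e^{-1}as_i^{-1}+e^{-1}, \quad i=1,2,$$ so every power of $s_i$ expands as a linear combination of any five consecutive powers. Moreover, the automorphism $\grF$ of section \ref{s} acts here on the coefficients by $$a\mapsto -e^{-1}d, \quad b\mapsto -e^{-1}c, \quad c\mapsto -e^{-1}b, \quad d\mapsto -e^{-1}a, \quad e\mapsto e^{-1}.$$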
We recall that $\grv=s_2s_1^2s_2$ and we set $$\small{\begin{array}{lcl}U'&=&u_1u_2u_1+u_1\grv+u_1\grv^{-1}+ u_1s_2^{-1}s_1^2s_2^{-1}u_1+u_1s_2s_1^{-2}s_2u_1+u_1s_2^2s_1^2s_2^{2}u_1+ u_1s_2^{-2}s_1^{-2}s_2^{-2}u_1+\\&&+u_1s_2s_1^{-2}s_2^{2}u_1+ u_1s_2^{-1}s_1^2s_2^{-2}u_1+u_1s_2^{-1}s_1s_2^{-1}u_1 +u_1s_2s_1^{-1}s_2u_1+u_1s_2^{-2}s_1^{-2}s_2^{2}u_1+ u_1s_2^{2}s_1^2s_2^{-2}u_1+\\&&+u_1s_2^{2}s_1^{-2}s_2^{2}u_1+ u_1s_2^{-2}s_1^2s_2^{-2}u_1+u_1s_2^{-2}s_1s_2^{-1}u_1+u_1s_2^{-1}s_1s_2^{-2}u_1\\ \\ U''&=&U'+u_1\grv^2+u_1\grv^{-2}+ u_1s_2^{-2}s_1^2s_2^{-1}s_1s_2^{-1}u_1+u_1s_2^{2}s_1^{-2}s_2s_1^{-1}s_2u_1+ u_1s_2s_1^{-2}s_2^{2}s_1^{-2}s_2^{2}u_1+\\&&+u_1s_2^{-1}s_1^2s_2^{-2}s_1^2s_2^{-2}u_1\\ \\ U'''&=&U''+u_1\grv^3+u_1\grv^{-3}\\ \\ U''''&=&U'''+u_1\grv^4+u_1\grv^{-4}\\ \\ U&=&U''''+u_1\grv^5+u_1\grv^{-5}. \end{array}}$$ It is obvious that $U$ is a $u_1$-bimodule and that $U', U'', U'''$ and $U''''$ are $u_1$-sub-bimodules of $U$. Again, our goal is to prove that $H_5=U$ (theorem \ref{thh2}). As we explained in the proof of theorem \ref{th}, since $1\in U$ and $s_1U\subset U$ (by the definition of $U$), it is enough to prove that $s_2U\subset U$. We notice that $$U=\sum_{k=1}^5u_1\grv^{\pm k}+\underbrace{u_1u_2u_1+u_1\text{``some elements of length 3''}u_1}_{\in U'}+\underbrace{u_1\text{``some elements of length 5''}u_1}_{\in U''}.$$ By the definition of $U'$ and $U''$ we have that $u_1\grv^{\pm1}\subset U'$ and $u_1\grv^{\pm2}\subset U''$. Therefore, in order to prove that $s_2U\subset U$ we only need to prove that $s_2u_1\grv^{\pm k}$ ($k=3,4,5$), $s_2U'$ and $s_2U''$ are subsets of $U$. The rest of this section is devoted to this proof (see proposition \ref{p2}, lemma \ref{oo}$(ii)$, proposition \ref{xx}$(i),(ii)$ and theorem \ref{thh2}).
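In summary, the sets just defined form a chain of $u_1$-bimodules $$U'\subset U''\subset U'''\subset U''''\subset U,$$ each obtained from the previous one by adding the next powers $u_1\grv^{\pm k}$ (together, in the case of $U''$, with some elements of length 5).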
The reason we also define $U'''$ and $U''''$ is that, in order to prove that $s_2u_1\grv^k$ and $s_2u_1\grv^{-k}$ ($k=3,4,5$) are subsets of $U$, we want to ``replace'' inside the definition of $U$ the elements $\grv^k$ and $\grv^{-k}$ by some other elements modulo $U'', U'''$ and $U''''$, respectively (see lemmas \ref{cc}, \ref{ll} and \ref{lol}). Recalling that $\grF$ is the automorphism of $H_5$ as defined in section \ref{s}, we have the following lemma: \begin{lem}The $u_1$-bimodules $U', U'', U''', U''''$ and $U$ are stable under $\grF$. \label{r1} \end{lem} \begin{proof}We notice that $U',U'', U''', U''''$ and $U$ are of the form $$u_1s_2^{-2}s_1s_2^{-1}u_1+u_1s_2^{-1}s_1s_2^{-2}u_1+\sum u_1\grs u_1+\sum u_1\grs^{-1}u_1,$$ for some $\grs\in B_3$ satisfying $\grs^{-1}=\grF(\grs)$ and $\grs=\grF(\grs^{-1})$. Therefore, we restrict ourselves to proving that the elements $\grF(s_2^{-2}s_1s_2^{-1})=s_2^2s_1^{-1}s_2$ and $\grF(s_2^{-1}s_1s_2^{-2})=s_2s_1^{-1}s_2^2$ belong to $U'$. We expand $s_2^2$ as a linear combination of $s_2,1, s_2^{-1}, s_2^{-2}$ and $s_2^{-3}$ and, by the definition of $U'$ and lemma \ref{lem1}, we have to prove that the elements $s_2^{k}s_1^{-1}s_2$ and $s_2s_1^{-1}s_2^{k}$ are elements in $U'$, for $k=-3, -2$. Indeed, by using lemma \ref{lem1} we have: $s_2^{k}s_1^{-1}s_2=s_2^{k+1}(s_2^{-1}s_1^{-1}s_2)=(s_2^{k+1}s_1s_2^{-1})s_1^{-1}\in U'$ and $s_2s_1^{-1}s_2^{k}=(s_2s_1^{-1}s_2^{-1})s_2^{k+1}=s_1^{-1}(s_2^{-1}s_1s_2^{k+1})\in U'$. \end{proof} From now on, we will use lemma \ref{lem1} without mentioning it. \begin{prop} $u_2u_1u_2\subset U'.$ \label{p1} \end{prop} \begin{proof} We have to prove that every element of the form $s_2^{\gra}s_1^{\grb}s_2^{\grg}$ belongs to $U'$, for $\gra,\grb,\grg\in\{-2,-1,0,1,2\}$. However, when $\gra\grb\grg=0$ the result is obvious.
Therefore, we can assume that $\gra,\grb,\grg\in \{-2,-1,1,2\}.$ We continue as in the proof of proposition \ref{prop1}, that is, by distinguishing cases for $\gra$. However, by using lemma \ref{r1} we can assume that $\gra\in\{1,2\}$. We have: \begin{itemize}[leftmargin=*] \item\underline{$\gra=1$}: \begin{itemize}[leftmargin=*] \item \underline{$\grg\in\{-1,1\}$}: The result follows from lemma \ref{lem1}, the braid relation and the definition of $U'$. \item \underline{$\grg=-2$}: $s_2s_1^{\grb}s_2^{-2}=(s_2s_1^{\grb}s_2^{-1})s_2^{-1}=s_1^{-1}(s_2^{\grb}s_1s_2^{-1})$. For $\grb\in\{1,-1,-2\}$ the result follows from lemma \ref{lem1} and the definition of $U'$. For $\grb=2$, we have $s_1^{-1}s_2^2s_1s_2^{-1}=s_1^{-1}s_2(s_2s_1s_2^{-1})=s_1^{-1}(s_2s_1^{-1}s_2)s_1\in U'$. \item \underline{$\grg=2$}: We need to prove that the element $s_2s_1^{\grb}s_2^{2}$ is inside $U'$. For $\grb\in\{-2,1\}$ the result is obvious by using the definition of $U'$ and the generalized braid relations. For $\grb=-1$ we have $s_2s_1^{-1}s_2^2=\grF(s_2^{-1}s_1s_2^{-2})\in\grF(U')\stackrel{\ref{r1}}{=}U'$. For $\grb=2$ we have $s_2s_1^2s_2^2= s_1^{-1}(s_1s_2s_1^2)s_2^2=s_1^{-1}s_2(s_2s_1s_2^3)=s_1^{-1}(s_2s_1^3s_2)s_1$. The result then follows from the case where $\grg=1$, if we expand $s_1^3$ as a linear combination of $s_1^2, s_1, 1, s_1^{-1}, s_1^{-2}$. \end{itemize} \item \underline {$\gra=2$}: \begin{itemize}[leftmargin=*] \item \underline{$\grg=-1$}: $s_2^2s_1^{\grb}s_2^{-1}=s_2(s_2s_1^{\grb}s_2^{-1})=(s_2s_1^{-1}s_2^{\grb})s_1\in U'$ (case where $\gra=1$). \item \underline{$\grg=2$}: We only have to prove the cases where $\grb\in\{-1,1\}$, since the cases where $\grb\in \nolinebreak \{2,-2\}$ follow from the definition of $U'$. We have $s_2^2s_1s_2^2=(s_2^2s_1s_2)s_2=s_1\grv\in U'$. Moreover, $s_2^2s_1^{-1}s_2^2=s_1^{-1}(s_1s_2^2s_1^{-1})s_2^2=s_1^{-1}\grF(s_2s_1^{-2}s_2^{-3})$.
The result follows from the case where $\gra=1$ and lemma \ref{r1}, if we expand $s_2^{-3}$ as a linear combination of $s_2^{-2}, s_2^{-1}, 1, s_2, s_2^{2}$. \item \underline{$\grg=1$}: We have to check the cases where $\grb\in\{-2,-1,2\}$, since the case where $\grb=1$ is a direct result from the generalized braid relations. However, $s_2^2s_1^{-1}s_2=\grF(s_2^{-2}s_1s_2^{-1})\in\grF(U')\stackrel{\ref{r1}}{=}U'$. Hence, it remains to prove the cases where $\grb\in \{-2,2\}$. We have $s_2^2s_1^{-2}s_2=s_2^3(s_2^{-1}s_1^{-2}s_2)=s_1(s_1^{-1}s_2^3s_1)s_2^{-2}s_1^{-1}= s_1(s_2s_1^3s_2^{-3})s_1^{-1}$. The latter is an element in $U'$, if we expand $s_1^3$ and $s_2^{-3}$ as linear combinations of $s_1^2, s_1, 1, s_1^{-1}, s_1^{-2}$ and $s_2^{-2}, s_2^{-1}, 1, s_2, s_2^{2}$, respectively and use the case where $\gra=1$. Moreover, $s_2^2s_1^2s_2= s_2^2s_1(s_1s_2s_1)s_1^{-1}=(s_2^2s_1s_2)s_1s_2s_1^{-1}=s_1(s_2s_1^3s_2)s_1.$ The result follows again from the case where $\gra=1$, if we expand $s_1^3$ as a linear combination of $s_1^2, s_1, 1, s_1^{-1}, s_1^{-2}$. \item\underline{$\grg=-2$}: We need to prove that $s_2^2s_1^{\grb}s_2^{-2}\in U'$. For $\grb=2$ the result follows from the definition of $U'$. For $\grb\in\{1,-1\}$ we have: $s_2^2s_1s_2^{-2}=s_2^2(s_1s_2^{-2}s_1^{-1})s_1=(s_2s_1^{-2}s_2)s_1\in U'$. $s_2^2s_1^{-1}s_2^{-2}=s_2(s_2s_1^{-1}s_2^{-1})s_2^{-1}=(s_2s_1^{-1}s_2^{-1})s_1s_2^{-1}= s_1^{-1}(s_2^{-1}s_1^2s_2^{-1})\in U'$. It remains to prove the case where $\grb=-2$. We recall that $\grv =s_2s_1^2s_2$ and we have: $ s_2^2s_1^{-2}s_2^{-2}=s_1^{-1}(s_1s_2^2s_1^{-1})s_1^{-1}s_2^{-2}=s_1^{-1}s_2^{-2}\grv s_1^{-1}s_2^{-2}=s_1^{-1}s_2^{-2} s_1^{-1}\grv s_2^{-2}= s_1^{-1}s_2^{-2}s_1^{-1}(s_2s_1^2s_2^{-1})= s_1^{-1}(s_2^{-2}s_1^{-2}s_2^2)s_1. $ The result follows from the definition of $U'$. 
\qedhere \end{itemize} \end{itemize} \end{proof} From now on, in order to make it easier for the reader to follow the calculations, we will underline the elements belonging to $u_1u_2u_1u_2u_1$ and we will immediately use the fact that these elements belong to $U'$ (see proposition \ref{p1}). \begin{lem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item [(i)]$s_2u_1s_2u_1s_2u_1\subset\grv^2u_1+u_1u_2u_1u_2u_1\subset U''$. \item [(ii)]$s_2\grv^2u_1=s_1s_2s_1^4s_2s_1^3s_2u_1\subset U''.$ \end{itemize} \label{oo} \end{lem} \begin{proof} We recall that $\grv=s_2s_1^2s_2$. \begin{itemize}[leftmargin=0.8cm] \item[(i)] The fact that $\grv^2u_1+u_1u_2u_1u_2u_1\subset U''$ follows directly from the definition of $U''$ and proposition \ref{p1}. For the rest of the proof, we use the definition of $u_1$ and we have that $s_2u_1s_2u_1s_2u_1=s_2u_1s_2(R_5+R_5s_1^{-1}+R_5s_1+R_5 s_1^{2}+R_5s_1^3)s_2u_1\subset \underline{s_2u_1s_2^2u_1}+s_2u_1s_2s_1^{-1}s_2u_1 +\underline{s_2u_1(s_2s_1s_2)u_1}+ s_2u_1\grv+s_2u_1s_2s_1^{3}s_2u_1$. However, $s_2u_1\grv=\underline{s_2\grv u_1}$ and $s_2u_1s_2s_1^{-1}s_2u_1=s_2u_1(s_1s_2s_1^{-1})s_2u_1= (s_2u_1s_2^{-1})s_1s_2^2u_1=\underline{s_1^{-1}u_2s_1^2s_2^2u_1}$. Therefore, it is enough to prove that $s_2u_1s_2s_1^3s_2u_1\subset\grv^2u_1+u_1u_2u_1u_2u_1$. For this purpose, we use again the definition of $u_1$ and we have: $\small{\begin{array}[t]{lcl} s_2u_1s_2s_1^3s_2u_1 &\subset& s_2(R_5+R_5s_1+R_5s_1^{-1}+R_5s_1^2+R_5s_1^3)s_2s_1^{3}s_2u_1\\ &\subset& \underline{s_2^2s_1^3s_2u_1}+ \underline{s_2(s_1s_2s_1^{3})s_2u_1}+ s_2(s_1^{-1}s_2s_1)s_1^{2}s_2u_1+\grv s_1^{3}s_2u_1+s_2s_1^{2}(s_1s_2s_1^{3})s_2u_1\\ &\subset& \underline{s_2^2s_1(s_2^{-1}s_1^2s_2)u_1}+\underline{s_1^{3}\grv s_2u_1}+s_2s_1^2s_2^2(s_2s_1s_2^2)u_1+u_1u_2u_1u_2u_1\\ &\subset&\grv^2u_1+u_1u_2u_1u_2u_1.
\end{array}}$ \item[(ii)] We have that $s_2\grv^2=s_1(s_1^{-1}s_2^2s_1)(s_1s_2s_1)s_1^{-1}\grv=s_1s_2s_1^4(s_1^{-1}s_2s_1)s_1^{-2}\grv=s_1s_2s_1^4s_2s_1s_2^{-1}s_1^{-2}\grv=s_1s_2s_1^4s_2s_1s_2^{-1}\grv s_1^{-2}=s_1s_2s_1^4s_2s_1^3s_2s_1^{-2}$. Therefore, $s_2\grv^2u_1 \subset u_1s_2u_1s_2u_1s_2u_1.$ The fact that $u_1s_2u_1s_2u_1s_2u_1\subset U''$ follows immediately from (i). \qedhere \end{itemize} \end{proof} \begin{prop}\mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item [(i)]$u_2u_1s_2^{-1}s_1s_2^{-1}\subset u_1\grv^{-2}+R_5s_2^{-2}s_1^2s_2^{-1}s_1s_2^{-1}+u_1u_2u_1u_2u_1\subset U''.$ \item[(ii)] $u_2u_1s_2s_1^{-1}s_2\subset u_1 \grv^{2}+R_5s_2^{2}s_1^{-2}s_2s_1^{-1}s_2+u_1u_2u_1u_2u_1\subset U''.$ \end{itemize} \label{l2} \end{prop} \begin{proof} We restrict ourselves to proving $(i)$, since $(ii)$ follows from $(i)$ by applying $\grF$ (see lemma \ref{r1}). By the definition of $U''$ and by proposition \ref{p1} we have that $u_1\grv^{-2}+R_5s_2^{-2}s_1^2s_2^{-1}s_1s_2^{-1}+u_1u_2u_1u_2u_1\subset U''$. Therefore, it remains to prove that $u_2u_1s_2^{-1}s_1s_2^{-1}\subset u_1\grv^{-2}+R_5s_2^{-2}s_1^2s_2^{-1}s_1s_2^{-1}+u_1u_2u_1u_2u_1.$ By the definition of $u_1$ we have $u_2u_1s_2^{-1}s_1s_2^{-1}=u_2(R_5+R_5s_1+R_5s_1^{-1}+R_5s_1^{-2} +R_5s_1^{2})s_2^{-1}s_1s_2^{-1} \subset \underline{u_2s_1s_2^{-1}}+ u_2s_1s_2^{-1}s_1s_2^{-1}+\underline{u_2(s_1^{-1}s_2^{-1}s_1)s_2^{-1}}+u_2s_1^{-2}s_2^{-1}s_1s_2^{-1} +u_2s_1^{2}s_2^{-1}s_1s_2^{-1}$. However, $u_2s_1s_2^{-1}s_1s_2^{-1}=u_2(s_2s_1s_2^{-1})s_1s_2^{-1}=\underline{u_2s_1^{-1}(s_2s_1^2s_2^{-1})}$. Therefore, we only have to prove that $u_2s_1^{-2}s_2^{-1}s_1s_2^{-1}$ and $u_2s_1^{2}s_2^{-1}s_1s_2^{-1}$ are subsets of $ u_1\grv^{-2}+R_5s_2^{-2}s_1^2s_2^{-1}s_1s_2^{-1}+u_1u_2u_1u_2u_1$.
We have: \\ $\small{\begin{array}[t]{lcl} u_2s_1^{-2}s_2^{-1}s_1s_2^{-1} &\subset&(R_5+R_5s_2+R_5s_2^{-1}+ R_5s_2^2+R_5s_2^3)s_1^{-2}s_2^{-1}s_1s_2^{-1}\\ &\subset&\underline{R_5s_1^{-2}s_2^{-1}s_1s_2^{-1}}+\underline{ R_5(s_2s_1^{-2}s_2^{-1})s_1s_2^{-1}}+R_5\grv^{-1}s_1s_2^{-1}+ R_5s_2(s_2s_1^{-2}s_2^{-1})s_1s_2^{-1}+\\&&+ R_5s_2^2(s_2s_1^{-2}s_2^{-1})s_1s_2^{-1}\\ &\subset&\underline{R_5s_1\grv^{-1}s_2^{-1}}+R_5(s_2s_1^{-1}s_2^{-1})s_2^{-1}s_1^2s_2^{-1}+ R_5s_2(s_2s_1^{-1}s_2^{-1})s_2^{-1}s_1^2s_2^{-1}+u_1u_2u_1u_2u_1\\ &\subset&R_5s_1^{-1}s_2^{-1}s_1s_2^{-1}s_1^2s_2^{-1}+R_5(s_2s_1^{-1}s_2^{-1})s_1s_2^{-1}s_1^2s_2^{-1}+u_1u_2u_1u_2u_1\\ &\subset&\grF(u_1s_2u_1s_2u_1s_2)+u_1u_2u_1u_2u_1. \end{array}}$\\ \\ However, by lemma \ref{oo}(i) we have that $\grF(u_1s_2u_1s_2u_1s_2)\subset \grF(\grv^2u_1+u_1u_2u_1u_2u_1)=\grv^{-2}u_1+u_1u_2u_1u_2u_1$. Therefore, $ u_2s_1^{-2}s_2^{-1}s_1s_2^{-1}\subset \grv^{-2}u_1+u_1u_2u_1u_2u_1$. By using analogous calculations, we have: \\ $\small{\begin{array}[t]{lcl} u_2s_1^{2}s_2^{-1}s_1s_2^{-1} &\subset&(R_5+R_5s_2 +R_5s_2^{-1}+R_5s_2^2+R_5s_2^{-2})s_1^{2}s_2^{-1}s_1s_2^{-1}\\ &\subset& \underline{R_5s_1^2s_2^{-1}s_1s_2^{-1}}+\underline{R_5(s_2s_1^2s_2^{-1})s_1s_2^{-1}}+ R_5s_2^{-1}s_1^3(s_1^{-1}s_2^{-1}s_1)s_2^{-1}+ R_5s_2(s_2s_1^2s_2^{-1})s_1s_2^{-1}+\\&&+R_5s_2^{-2}s_1^2s_2^{-1}s_1s_2^{-1}\\ &\subset& \underline{R_5(s_2^{-1}s_1^3s_2)s_1^{-1}s_2^{-2}}+ R_5s_2s_1^{-1}s_2^{2}s_1^2s_2^{-1}+R_5s_2^{-2}s_1^2s_2^{-1}s_1s_2^{-1}+u_1u_2u_1u_2u_1. \end{array}}$\\ \\ It is enough to prove that $s_2s_1^{-1}s_2^{2}s_1^2s_2^{-1}\in u_1u_2u_1u_2u_1$. Indeed, we have that $s_2s_1^{-1}s_2^2s_1^2s_2^{-1}=s_1^{-1}(s_1s_2s_1^{-1})s_2(s_2s_1^2s_2^{-1})=\underline{s_1^{-1}s_2^{-1}(s_1s_2^2s_1^{-1})s_2^2s_1}$. \qedhere \end{proof} We can now prove a lemma that helps us to ``replace'' inside the definition of $U'''$ the element $\grv^3$ with the element $s_2s_1^3s_2^2s_1^2s_2^2$ modulo $U''$.
\begin{lem}$s_2s_1^3s_2^2s_1^2s_2^2\in u_1s_2u_1s_2s_1^3s_2u_1+u_1s_2^2s_1^3s_2s_1^{-1}s_2u_1+ u_1u_2u_1u_2u_1+ u_1^{\times}\grv^3\subset u_1^{\times}\grv^3+\nolinebreak U''.$ \label{cc} \end{lem} \begin{proof} The fact that $u_1s_2u_1s_2s_1^3s_2u_1+u_1s_2^2s_1^3s_2s_1^{-1}s_2u_1+ u_1u_2u_1u_2u_1+ u_1^{\times}\grv^3$ is a subset of $u_1^{\times}\grv^3+U''$ follows from lemma \ref{oo}$(i)$ and propositions \ref{l2}(ii) and \ref{p1}. For the rest of the proof, we notice that we have $s_2s_1^3s_2^2s_1^2s_2^2=s_2s_1^2(s_1s_2^2s_1^{-1})s_1^2(s_1s_2^2s_1^{-1})s_1=s_2s_1^2s_2^{-2}\grv s_1^2s_2^{-1}s_1(s_1s_2s_1^{-1})s_1^2= s_2s_1^2s_2^{-2}s_1^2\grv s_2^{-1}s_1s_2^{-1}(s_1s_2s_1^{-1})s_1^3=s_2s_1^2s_2^{-2}s_1^2s_2s_1^3s_2^{-2}s_1s_2s_1^3=s_2s_1^2s_2^{-3}\boldsymbol{\grv s_1^3s_2^{-2}}s_1s_2s_1^3$. However, $\boldsymbol{\grv s_1^3s_2^{-2}}=s_1^3\grv s_2^{-2}=s_1^3(s_2s_1^2s_2^{-1})=s_1^2s_2^2s_1$ and, hence, $s_2s_1^3s_2^2s_1^2s_2^2=s_2s_1^2s_2^{-3}s_1^2s_2^2s_1^2s_2s_1^3$. Our goal now is to prove that $s_2s_1^2s_2^{-3}s_1^2s_2^2s_1^2s_2s_1^3\in u_1s_2u_1s_2s_1^3s_2u_1+u_1s_2^2s_1^3s_2s_1^{-1}s_2u_1+ u_1u_2u_1u_2u_1+ u_1^{\times}\grv^3$. For this purpose we expand $s_2^{-3}$ as a linear combination of $s_2^{-2}$, $s_2^{-1}$, 1, $s_2$ and $s_2^2$, where the coefficient of $s_2^2$ is invertible, and we have that $s_2s_1^2s_2^{-3}s_1^2s_2^2s_1^2s_2s_1^3\in s_2s_1^2s_2^{-2}s_1^2s_2^2s_1^2s_2u_1+s_2s_1^2s_2^{-1}s_1^2s_2^2s_1^2s_2u_1+ s_2s_1^4s_2^2s_1^2s_2u_1+s_2\grv^2u_1+u_1^{\times}\grv^3$. However, by lemma \ref{oo}(ii) we have that $s_2\grv^2u_1\subset u_1s_2u_1s_2s_1^3s_2u_1$. Moreover, $s_2s_1^4s_2^2s_1^2s_2u_1=s_2s_1^5(s_1^{-1}s_2^2s_1)(s_1s_2s_1)u_1\subset u_1s_2u_1s_2s_1^3s_2u_1$. It remains to prove that the elements $s_2s_1^2s_2^{-2}s_1^2s_2^2s_1^2s_2$ and $s_2s_1^2s_2^{-1}s_1^2s_2^2s_1^2s_2$ are inside $u_1s_2u_1s_2s_1^3s_2u_1+u_1s_2^2s_1^3s_2s_1^{-1}s_2u_1+ u_1u_2u_1u_2u_1$.
On one hand, we have $s_2s_1^2s_2^{-2}s_1^2s_2^2s_1^2s_2=s_2s_1^3(s_1^{-1}s_2^{-2}s_1)s_1s_2\grv=s_2s_1^3s_2s_1^{-1}(s_1^{-1}s_2^{-1}s_1)s_2\grv=s_2s_1^3s_2^2(s_2^{-1}s_1^{-1}s_2)s_1^{-1}\grv=s_2s_1^3s_2^2s_1s_2^{-1}s_1^{-2}\grv=s_2s_1^3s_2^2s_1s_2^{-1}\grv s_1^{-2}=s_2s_1^3s_2^2s_1^3s_2s_1^{-2}$, meaning that the element $s_2s_1^2s_2^{-2}s_1^2s_2^2s_1^2s_2$ is inside $s_2s_1^3s_2^2u_1s_2u_1$. On the other hand, $s_2s_1^2s_2^{-1}s_1^2s_2^2s_1^2s_2 =s_2s_1^2(s_2^{-1}s_1^2s_2)\grv=s_2s_1^3s_2^2s_1^{-1}\grv=s_2s_1^3s_2^2\grv s_1^{-1}=s_2s_1^3s_2^3s_1^2s_2s_1^{-1}$ and, if we expand $s_2^3$ as a linear combination of $s_2^2$, $s_2$, 1, $s_2^{-1}$ and $s_2^{-2}$, we have that $s_2s_1^3s_2^3s_1^2s_2s_1^{-1}\in s_2s_1^3s_2^{2}s_1^2s_2u_1+s_2s_1^3\grv u_1+ \underline{s_2s_1^5s_2u_1}+\underline{ (s_2s_1^3s_2^{-1})s_1^2s_2u_1}+\underline{(s_2s_1^3s_2^{-1})(s_2^{-1}s_1^2s_2)u_1}\subset s_2s_1^3s_2^{2}u_1s_2u_1+\underline{s_2\grv u_1}+u_1u_2u_1u_2u_1$, meaning that the element $s_2s_1^2s_2^{-1}s_1^2s_2^2s_1^2s_2$ is inside $s_2s_1^3s_2^{2}u_1s_2u_1+u_1u_2u_1u_2u_1$. As a result, in order to finish the proof, it will be sufficient to show that $s_2s_1^3s_2^{2}u_1s_2u_1$ is a subset of $u_1s_2u_1s_2s_1^3s_2u_1+u_1s_2^2s_1^3s_2s_1^{-1}s_2u_1+ u_1u_2u_1u_2u_1$. Indeed, we have: $$\small{\begin{array}[t]{lcl} s_2s_1^3s_2^{2}u_1s_2u_1 &\subset&s_2s_1^3s_2^2(R_5s_1^2+R_5s_1+R_5+R_5s_1^{-1}+R_5s_1^{-2})s_2u_1\\ &\subset&s_2s_1^3s_2^2s_1^2s_2u_1+\underline{s_2s_1^3(s_2^2s_1s_2)u_1}+\underline{s_2s_1^3s_2^3u_1} +s_2s_1^2(s_1s_2^2s_1^{-1})s_2u_1+ s_2s_1^2(s_1s_2^2s_1^{-1})s_1^{-1}s_2u_1\\ &\subset&s_2s_1^4(s_1^{-1}s_2^2s_1)(s_1s_2s_1)u_1+\underline{ (s_2s_1^2s_2^{-1})s_1^2s_2^2u_1}+ (s_2s_1^2s_2^{-1})s_1^2s_2s_1^{-1}s_2u_1+u_1u_2u_1u_2u_1\\ &\subset& u_1s_2u_1s_2s_1^3s_2u_1+u_1s_2^2s_1^3s_2s_1^{-1}s_2u_1+u_1u_2u_1u_2u_1. 
\end{array}}$$ \end{proof} \begin{prop} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item [(i)]$s_2u_1u_2u_1u_2 \subset U'''$. \item [(ii)]$s_2^{-1}u_1u_2u_1u_2 \subset U'''$. \end{itemize} \label{p2} \end{prop} \begin{proof}By lemma \ref{r1}, we only have to prove $(i)$, since $(ii)$ is a consequence of $(i)$ up to applying $\grF$. We know that $u_2u_1u_2 \subset U'$ (proposition \ref{p1}), hence it is enough to prove that $s_2U'\subset U'''$. Set $$\small{\begin{array}{lcl}V&=&u_1u_2u_1+\grv u_1+\grv^{-1}u_1+u_1s_2^{-1}s_1^2s_2^{-1}u_1+ u_1s_2^{-1}s_1s_2^{-1}u_1+u_1s_2s_1^{-1}s_2u_1+u_1s_2^{-2}s_1s_2^{-1}u_1+\\&&+u_1s_2^{-1}s_1^2s_2^{-2}u_1+u_1s_2^{-1}s_1s_2^{-2}u_1+u_1s_2s_1^{-2}s_2u_1+ u_1s_2^{-2}s_1^{-2}s_2^{-2}u_1+ u_1s_2^{-2}s_1^{-2}s_2^{2}u_1. \end{array}}$$ We notice that $$\begin{array}{lcl} U'&=&V+u_1s_2s_1^{-2}s_2^{2}u_1+ u_1s_2^{2}s_1^{-2}s_2^{2}u_1+u_1s_2^{2}s_1^{2}s_2^{2}u_1+ u_1s_2^{-2}s_1^{2}s_2^{-2}u_1+u_1s_2^{2}s_1^{2}s_2^{-2}u_1. \end{array}$$ Therefore, in order to prove that $s_2U'\subset U'''$, we will prove first that $s_2V\subset U'''$ and then we will check the other five cases separately. We have: $$\small{\begin{array}{lcl}s_2V&\subset& \underline{s_2u_1u_2u_1}+\underline{s_2\grv u_1}+\underline{s_2\grv^{-1} u_1}+\underline{(s_2u_1s_2^{-1})u_1u_2u_1}+s_2u_1s_2u_1s_2u_1+s_2u_1s_2^{-2}s_1s_2^{-1}u_1+\\&&+ s_2u_1s_2^{-2}s_1^{-2}s_2^{-2}u_1+ s_2u_1s_2^{-2}s_1^{-2}s_2^{2}u_1+U'''. \end{array}}$$ However, by lemma \ref{oo}(i) we have that $s_2u_1s_2u_1s_2u_1\subset U''\subset U'''$. It remains to prove that $A:=s_2u_1s_2^{-2}s_1s_2^{-1}u_1+ s_2u_1s_2^{-2}s_1^{-2}s_2^{-2}u_1+ s_2u_1s_2^{-2}s_1^{-2}s_2^{2}u_1$ is a subset of $U'''$.
We have: $$\small{\begin{array}{lcl} A&=&s_2u_1s_2^{-2}s_1s_2^{-1}u_1+s_2u_1s_2^{-2}s_1^{-2}s_2^{-2}u_1+ s_2u_1s_2^{-2}s_1^{-2}s_2^{2}u_1\\ &=&(s_2u_1s_2^{-1})s_2^{-1}s_1s_2^{-1}u_1+ (s_2u_1s_2^{-1})s_2^{-1}s_1^{-2}s_2^{-2}u_1+ (s_2u_1s_2^{-1})s_2^{-1}s_1^{-2}s_2^2u_1\\ &=&s_1^{-1}u_2(s_1s_2^{-1}s_1^{-1})s_1^2s_2^{-1}u_1+s_1^{-1}u_2(s_1s_2^{-1}s_1^{-1})s_1^{-1}s_2^{-2}u_1+ s_1^{-1}u_2s_1(s_2^{-1}s_1^{-2}s_2)s_2u_1\\ &=&s_1^{-1}u_2s_1^{-1}(s_2s_1^2s_2^{-1})u_1+s_1^{-1}u_2s_1^{-1}(s_2s_1^{-1}s_2^{-1})s_2^{-1}u_1+s_1^{-1}u_2s_1^2s_2^{-1}(s_2^{-1}s_1^{-1}s_2)u_1\\ &\subset&u_1(u_2u_1s_2^{-1}s_1s_2^{-1})u_1. \end{array}}$$ By proposition \ref{l2} we have then $A\subset U'''$ and, hence, we have proved that \begin{equation}s_2V\subset U'''.\label{7}\end{equation} In order to finish the proof that $s_2U'\subset U'''$, it will be sufficient to prove that $u_1s_2s_1^{-2}s_2^{2}u_1$, $u_1s_2^{2}s_1^{-2}s_2^{2}u_1$, $u_1s_2^{2}s_1^{2}s_2^{2}u_1$, $u_1s_2^{-2}s_1^{2}s_2^{-2}u_1$ and $u_1s_2^{2}s_1^{2}s_2^{-2}u_1$ are subsets of $U'''$. \begin{itemize}[leftmargin=0.8cm] \item[C1.] We will prove that $s_2u_1s_2s_1^{-2}s_2^2u_1\subset U'''$. We expand $s_2^2$ as a linear combination of $s_2$, 1, $s_2^{-1}$, $s_2^{-2}$ and $s_2^{-3}$ and we have that $s_2u_1s_2s_1^{-2}s_2^2u_1\subset s_2u_1s_2s_1^{-2}s_2u_1+\underline{s_2u_1s_2u_1}+\underline{s_2u_1(s_2s_1^{-2}s_2^{-1})u_1}+ s_2u_1(s_2s_1^{-2}s_2^{-1})s_2^{-1}u_1+ s_2u_1s_2s_1^{-2}s_2^{-3}u_1 \subset s_2u_1s_2s_1^{-2}s_2^{-3}u_1+s_2V+U'''$ and, hence, by relation (\ref{7}) we have that $s_2u_1s_2s_1^{-2}s_2^2u_1\subset s_2u_1s_2s_1^{-2}s_2^{-3}u_1+U'''$. Therefore, it will be sufficient to prove that $s_2u_1s_2s_1^{-2}s_2^{-3}u_1\subset U'''$.
We use the definition of $u_1$ and we have: $\small{\begin{array}[t]{lcl} s_2u_1s_2s_1^{-2}s_2^{-3}u_1 &\subset&s_2(R_5+R_5s_1+R_5s_1^{-1}+R_5s_1^2+R_5s_1^{3})s_2s_1^{-2}s_2^{-3}u_1\\ &\subset&\underline{s_2^2s_1^{-2}s_2^{-3}u_1}+\underline{(s_2s_1s_2)s_1^{-2}s_2^{-3}u_1}+ s_2s_1^{-1}s_2s_1^{-2}s_2^{-3}u_1+\grv s_1^{-2}s_2^{-3}u_1+\\&&+s_2s_1^{3}s_2s_1^{-2}s_2^{-3}u_1\\ &\subset&s_1^{-1}(s_1s_2s_1^{-1})s_2s_1^{-2}s_2^{-3}u_1+\underline{s_1^{-2}\grv s_2^{-3}u_1}+ s_2s_1^{2}(s_1s_2s_1^{-1})s_1^{-1}s_2^{-3}u_1+U'''\\ &\subset&s_1^{-1}s_2^{-1}(s_1s_2^2s_1^{-1})s_1^{-1}s_2^{-3}u_1+ (s_2s_1^{2}s_2^{-1})s_1s_2s_1^{-1}s_2^{-3}u_1+U'''\\ &\subset&s_1^{-1}s_2^{-2}s_1^2(s_2s_1^{-1}s_2^{-1})s_2^{-2}u_1+ s_1^{-1}s_2^2s_1(s_1s_2s_1^{-1})s_2^{-3}u_1+ U'''\\ &\subset&s_1^{-1}s_2^{-3}(s_2s_1s_2^{-1})s_1s_2^{-2}u_1+s_1^{-1}s_2(s_2s_1s_2^{-1})s_1s_2^{-2}u_1+U'''\\ &\subset&s_1^{-1}s_2^{-3}s_1^{-1}(s_2s_1^2s_2^{-1})s_2^{-1}u_1+s_1^{-1}s_2s_1^{-1}(s_2s_1^2s_2^{-1})s_2^{-1}u_1+U'''\\ &\subset& s_1^{-1}s_2^{-3}s_1^{-2}s_2(s_2s_1s_2^{-1})u_1+s_1^{-1}s_2s_1^{-2}s_2(s_2s_1s_2^{-1})u_1+U'''\\ &\subset&u_1(u_2u_1s_2s_1^{-1}s_2)u_1+U'''. \end{array}}$ The result follows from proposition \ref{l2}(ii). \item[C2.] We will prove that $s_2u_1s_2^2s_1^{-2}s_2^2u_1\subset U'''$. For this purpose, we expand $u_1$ as $R_5+R_5s_1+R_5s_1^4+R_5s_1^2+R_5s_1^{-2}$ and we have that $s_2u_1s_2^2s_1^{-2}s_2^2u_1\subset \underline{s_2^3s_1^{-2}s_2^2u_1}+\underline{ ( s_2s_1s_2^2)s_1^{-2}s_2^2u_1}+s_2s_1^4s_2^2s_1^{-2}s_2^2u_1 +s_2s_1^2s_2^2s_1^{-2}s_2^2u_1+s_2s_1^{-2}s_2^2s_1^{-2}s_2^2u_1$. By the definition of $U'''$ we have that $s_2s_1^{-2}s_2^2s_1^{-2}s_2^2u_1\subset U'''$. Therefore, it remains to prove that $s_2s_1^4s_2^2s_1^{-2}s_2^2u_1 +s_2s_1^2s_2^2s_1^{-2}s_2^2u_1\subset U'''$. 
We notice that $\small{\begin{array}{lcl}s_2s_1^4s_2^2s_1^{-2}s_2^2u_1 +s_2s_1^2s_2^2s_1^{-2}s_2^2u_1&\subset& s_2s_1^3(s_1s_2^2s_1^{-1})s_1^{-1}s_2^2u_1 +\grv (s_2s_1^{-2}s_2^{-1})s_2^3u_1\\&\subset&(s_2s_1^3s_2^{-1})s_1(s_1s_2s_1^{-1})s_2^2u_1+ \grv s_1^{-2}(s_1s_2^{-2}s_1^{-1})s_1^2s_2^3u_1\\ &\subset& s_1^{-1}s_2^3s_1^2s_2^{-1}s_1s_2^3u_1+\underline{s_1^{-2}\grv s_2^{-1}s_1^{-2}s_2s_1^2s_2^3u_1} \end{array}}$ Therefore, we have to prove that the element $s_2^3s_1^2s_2^{-1}s_1s_2^3$ is inside $U'''$. For this purpose, we expand $s_2^3$ as a linear combination of $s_2^2$, $s_2$, 1, $s_2^{-1}$ and $s_2^{-2}$ and we have: $\small{\begin{array}[t]{lcl} s_2^3s_1^2s_2^{-1}s_1s_2^3 &\in& R_5s_2^3s_1^{3}(s_1^{-1}s_2^{-1}s_1)s_2^2+\underline{R_5s_2^3s_1^2(s_2^{-1}s_1s_2)}+\underline{R_5s_2^3s_1^2s_2^{-1}s_1}+R_5s_2^3s_1^2s_2^{-1}s_1s_2^{-1}+\\&&+ R_5s_1^{-1}(s_1s_2^2s_1^{-1})s_1(s_2s_1^2s_2^{-1})s_1s_2^{-2}\\ &\in&u_2u_1s_2s_1^{-1}s_2+u_2u_1s_2^{-1}s_1s_2^{-1}u_1+u_1s_2^{-1}s_1^2s_2^3s_1^2s_2^{-2}+U'''. \end{array}}$ However, by proposition \ref{l2} we have that $u_2u_1s_2s_1^{-1}s_2$ and $u_2u_1s_2^{-1}s_1s_2^{-1}$ are subsets of $U'''$. Therefore, we only need to prove that the element $s_2^{-1}s_1^2s_2^3s_1^2s_2^{-2}$ is inside $U'''$. We expand $s_2^3$ as a linear combination of $s_2^2$, $s_2$, 1, $s_2^{-1}$ and $s_2^{-2}$ and we have that $s_2^{-1}s_1^2s_2^3s_1^2s_2^{-2}\in \grF(s_2V)+\underline{s_2^{-1}(s_1^2s_2s_1)s_1s_2^{-2}}+\underline{R_5s_2^{-1}s_1^4s_2^{-2}}+ \grF(s_2u_1s_2s_1^{-2}s_2^2)+R_5s_2^{-1}s_1^{2}s_2^{-2}s_1^{2}s_2^{-2}$. However, by the definition of $U'''$ we have that $s_2^{-1}s_1^{2}s_2^{-2}s_1^{2}s_2^{-2}\in U'''$. Moreover, by relation (\ref{7}) and by the previous case (case C1) we have that $\grF(s_2V)+\grF(s_2u_1s_2s_1^{-2}s_2^2)\subset \grF(U''')\stackrel{\ref{r1}}{\subset} U'''.$ \item[C3.] We will prove that $s_2u_1s_2^2s_1^{2}s_2^2u_1\subset U'''$.
For this purpose, we expand $u_1$ as $R_5+R_5s_1+R_5s_1^{-1}+ R_5s_1^2+R_5s_1^3$ and we have $s_2u_1s_2^2s_1^{2}s_2^2u_1\subset \underline{s_2^3s_1^2s_2^2u_1}+\underline{(s_2s_1s_2^2)s_1^2s_2^2u_1}+ s_2s_1^{-1}s_2^2s_1^{2}s_2^2u_1+s_2s_1^{2}s_2^2s_1^2s_2^2u_1+s_2s_1^3s_2^2s_1^{2}s_2^2u_1$. However, by lemma \ref{cc} we have that $s_2s_1^3s_2^2s_1^{2}s_2^2\subset u_1\grv^3+U''\subset U'''$. Therefore, it remains to prove that $s_2s_1^{-1}s_2^2s_1^{2}s_2^2u_1+s_2s_1^{2}s_2^2s_1^2s_2^2u_1\subset U'''$. We have: $\small{\begin{array}[t]{lcl}s_2s_1^{-1}s_2^2s_1^{2}s_2^2u_1+s_2s_1^{2}s_2^2s_1^2s_2^2u_1&=& s_2^2(s_2^{-1}s_1^{-1}s_2)s_2s_1^{2}s_2^2u_1+s_1^{-1}s_1\grv^2s_2u_1\\ &=&s_2^2s_1(s_2^{-1}s_1^{-1}s_2)s_1^{2}s_2^2u_1+ s_1^{-1}\grv^2s_1s_2u_1\\ &=&s_2^2s_1^2(s_2^{-1}s_1s_2)s_2u_1+s_1^{-1}s_2s_1^2s_2^2s_1^2(s_2s_1s_2)u_1\\ &\subset& u_2u_1s_2s_1^{-1}s_2u_1+u_1s_2s_1^2s_2^2s_1^3s_2u_1. \end{array}}$ By proposition \ref{l2}(ii) it will be sufficient to prove that $s_2s_1^2s_2^2s_1^3s_2\in U'''$. We expand $s_1^3$ as a linear combination of $s_1^2$, $s_1$, 1, $s_1^{-1}$ and $s_1^{-2}$ and we have: $\small{\begin{array}[t]{lcl} s_2s_1^2s_2^2s_1^3s_2&\in& R_5\grv^2+\underline{R_5s_2s_1^2(s_2^2s_1s_2)}+\underline{R_5s_2s_1^2s_2^3}+ R_5s_2s_1(s_1s_2^2s_1^{-1})s_2+R_5s_2s_1^2s_2^2s_1^{-2}s_2\\ &\in&\underline{R_5(s_2s_1s_2^{-1})s_1^2s_2^2}+R_5s_1^{-1}(s_1s_2s_1^{2})s_2^2s_1^{-2}s_2+U'''\\ &\in&u_1s_2^2(s_1s_2^3s_1^{-1})s_1^{-1}s_2+U'''\\ &\in& u_1u_2u_1s_2s_1^{-1}s_2u_1+U'''. \end{array}}$ The result follows from proposition \ref{l2}(ii). \item[C4.] We will prove that $s_2u_1s_2^{-2}s_1^{2}s_2^{-2}u_1\subset U'''$. Since $s_2u_1s_2^{-2}s_1^{2}s_2^{-2}u_1=(s_2u_1s_2^{-1})s_2^{-1}s_1^{2}s_2^{-2}u_1=s_1^{-1}u_2s_1s_2^{-1}s_1^2s_2^{-2}u_1$, it will be sufficient to prove that $u_2s_1s_2^{-1}s_1^2s_2^{-2}\subset U'''$.
We expand $u_2$ as $R_5+R_5s_2+R_5s_2^{-1}+R_5s_2^2+R_5s_2^3$ and we have: $u_2s_1s_2^{-1}s_1^2s_2^{-2}\subset \underline{R_5s_1s_2^{-1}s_1^2s_2^{-2}}+\underline{R_5(s_2s_1s_2^{-1})s_1^2s_2^{-2}}+\grF(u_1s_2u_1s_2s_1^{-2}s_2^{2})+R_5s_2^2s_1s_2^{-1}s_1^2s_2^{-2}+ R_5s_2^3s_1s_2^{-1}s_1^2s_2^{-2}$. By the first case (case C1) we have that $\grF(u_1s_2u_1s_2s_1^{-2}s_2^{2}) \subset u_1\grF(U''')u_1\stackrel{\ref{r1}}{\subset} U'''$. It remains to prove that the elements $s_2^2s_1s_2^{-1}s_1^2s_2^{-2}$ and $s_2^3s_1s_2^{-1}s_1^2s_2^{-2}$ are inside $U'''$. We have: $s_2^2s_1s_2^{-1}s_1^2s_2^{-2}=s_2(s_2s_1s_2^{-1})s_1^2s_2^{-2}=s_2s_1^{-1}(s_2s_1^3s_2^{-1})s_2^{-1}=s_2s_1^{-2}s_2^2(s_2s_1s_2^{-1})=s_1^{-1}(s_1s_2s_1^{-1})s_1^{-1}s_2^2s_1^{-1}s_2s_1=s_1^{-1}s_2^{-1}(s_1s_2s_1^{-1})s_2^2s_1^{-1}s_2s_1=\underline{s_1^{-1}s_2^{-2}(s_1s_2^3s_1^{-1})s_2s_1}$. Moreover, $s_2^3s_1s_2^{-1}s_1^2s_2^{-2}=s_2^2(s_2s_1s_2^{-1})s_1(s_1s_2^{-2}s_1^{-1})s_1=s_2^2s_1^{-2}(s_1s_2s_1^2)s_2^{-1}s_1^{-2}s_2s_1\in s_2^2s_1^{-2}s_2^2s_1^{-1}s_2u_1$. We expand $s_1^{-2}$ as a linear combination of $s_1^{-1}$, $1$, $s_1$, $s_1^{2}$ and $s_1^{3}$ and we have: $\small{\begin{array}[t]{lcl}s_2^2s_1^{-2}s_2^2s_1^{-1}s_2&\in& R_5s_2^2s_1^{-1}s_2^2s_1^{-1}s_2+\underline{R_5s_2^4s_1^{-1}s_2}+ \underline{R_5s_2(s_2s_1s_2^2)s_1^{-1}s_2}+ R_5s_2^2s_1^{2}s_2^2s_1^{-1}s_2+\\&&+ R_5s_2^2s_1^{3}s_2^2s_1^{-1}s_2\\ & \in& R_5s_2^3(s_2^{-1}s_1^{-1}s_2)s_2s_1^{-1}s_2+R_5 s_2^2s_1(s_1s_2^{2}s_1^{-1})s_2 +R_5s_2^2s_1^2(s_1s_2^2s_1^{-1})s_2+U'''\\ &\in& R_5s_2^3s_1(s_2^{-1}s_1^{-1}s_2)s_1^{-1}s_2+ R_5s_2(s_2s_1s_2^{-1})s_1^{2}s_2^{2} +R_5s_2(s_2s_1^2s_2^{-1})s_1^2s_2^2+U'''\\ &\in&\underline{R_5s_2^3s_1^2(s_2^{-1}s_1^{-2}s_2)}+R_5s_2s_1^{-1}s_2s_1^{3}s_2^{2}+ R_5s_2s_1^{-1}s_2^2s_1^3s_2^2+U''' \end{array}}$ Therefore, it remains to prove that $B:=R_5s_2s_1^{-1}s_2s_1^{3}s_2^{2}+ R_5s_2s_1^{-1}s_2^2s_1^3s_2^2\subset U'''$.
We expand $s_1^3$ as a linear combination of $s_1^2$, $s_1$, 1, $s_1^{-1}$ and $s_1^{-2}$ and we have that $B\subset R_5s_2s_1^{-1}s_2(R_5s_1^2+R_5s_1+R_5+R_5s_1^{-1}+R_5s_1^{-2})s_2^{2}+ R_5s_2s_1^{-1}s_2^2(R_5s_1^2+R_5s_1+R_5+R_5s_1^{-1}+R_5s_1^{-2})s_2^2$. By cases C1, C2 and C3 we have: $\small{\begin{array}[t]{lcl} B&\subset& R_5s_2s_1^{-1}\grv s_2+\underline{R_5s_2s_1^{-1}(s_2s_1s_2^2)}+\underline{R_5s_2s_1^{-1}s_2^3u_1}+ R_5s_2s_1^{-1}s_2s_1^{-1}s_2^2+\underline{R_5s_2s_1^{-1}(s_2^2s_1s_2)s_2}+\\&&+\underline{R_5s_2s_1^{-1}s_2^4}+ R_5s_2s_1^{-1}s_2^2s_1^{-1}s_2^2+U'''\\ &\subset&R_5s_2\grv s_1^{-1}s_2+ R_5s_2s_1^{-1}s_2s_1^{-1}s_2^2+ R_5s_2s_1^{-1}s_2^2s_1^{-1}s_2^2+U'''\\ &\subset&u_2u_1s_2s_1^{-1}s_2+R_5s_2^2(s_2^{-1}s_1^{-1}s_2)s_1^{-1}s_2^2+ R_5s_2s_1^{-2}(s_1s_2^2s_1^{-1})s_2^2+U'''\\ &\stackrel{\ref{l2}}{\subset}&R_5s_2^2s_1(s_2^{-1}s_1^{-2}s_2)s_2+ R_5(s_2s_1^{-2}s_2^{-1})s_1^2s_2^3+U'''\\ &\subset&R_5s_2^2s_1^{2}s_2^{-1}(s_2^{-1}s_1^{-1}s_2)+U''' \\&\subset& u_1u_2u_1s_2^{-1}s_1s_2^{-1}+U'''. \end{array}}$ The result follows from proposition \ref{l2}(i). \item[C5.] We will prove that $s_2u_1s_2^2s_1^2s_2^{-2}u_1\subset U'''$. For this purpose, we use straightforward calculations and we have $s_2u_1s_2^2s_1^2s_2^{-2}=(s_2u_1s_2^{-1})s_2^2(s_2s_1^2s_2^{-1})s_2^{-1} =s_1^{-1}u_2(s_1s_2^2s_1^{-1})s_2(s_2s_1s_2^{-1})= s_1^{-1}u_2s_1(s_1s_2^2s_1^{-1})s_2s_1= s_1^{-1}u_2(s_2s_1s_2^{-1})s_1^2s_2^2s_1=s_1^{-2}(s_1u_2s_1^{-1})s_2s_1^3s_2^2s_1=s_1^{-2}s_2^{-1}u_1s_2^2s_1^3s_2^2s_1$, meaning that $s_2u_1s_2^2s_1^2s_2^{-2}u_1 \subset u_1s_2^{-1}u_1s_2^2s_1^3s_2^2u_1$. Hence, we have to prove that $s_2^{-1}u_1s_2^2s_1^3s_2^2\subset U'''$. For this purpose, we expand $s_1^3$ as a linear combination of $s_1^2$, $s_1$, 1, $s_1^{-1}$ and $s_1^{-2}$ and we have that $s_2^{-1}u_1s_2^2s_1^3s_2^2\subset \grF(s_2V+s_2u_1s_2^{-2}s_1^2s_2^{-2})+s_2^{-1}u_1s_2^2s_1s_2^2+ s_2^{-1}u_1s_2^2s_1^{-1}s_2^2$.
By relation (\ref{7}) and case C4 we have that $\grF(s_2V+s_2u_1s_2^{-2}s_1^2s_2^{-2})\subset \grF(U''')\stackrel{\ref{r1}}{\subset} U'''$. Moreover, $s_2^{-1}u_1s_2^2s_1s_2^2=s_2^{-1}u_1(s_2^2s_1s_2)s_2=s_2^{-1}u_1\grv=\underline{s_2^{-1}\grv u_1}.$ It remains to prove that $s_2^{-1}u_1s_2^2s_1^{-1}s_2^2\subset U'''$. We have: $s_2^{-1}u_1s_2^2s_1^{-1}s_2^2=(s_2^{-1}u_1s_2)s_2s_1^{-1}s_2^2=s_1u_2(s_1^{-1}s_2s_1)s_1^{-2}s_2^2=s_1u_2s_1(s_2^{-1}s_1^{-2}s_2)s_2\subset u_1u_2u_1s_2^{-1}s_1s_2^{-1}u_1$. The result follows from proposition \ref{l2}(i). \qedhere \end{itemize} \end{proof} From now on we will double-underline the elements of the form $u_1s_2^{\pm}u_1u_2u_1u_2u_1$ and we will use the fact that they are elements of $U'''$ (proposition \ref{p2}) without mentioning it. We can now prove the following lemma that helps us to ``replace'' inside the definition of $U''''$ the element $\grv^4$ by the element $s_2^{-2}s_1^2s_2^2s_1^3s_2^2$ modulo $U'''$. \begin{lem} $s_2^{-2}s_1^2s_2^2s_1^3s_2^2\in u_1\grv^3+u_1^{\times}\grv^4+u_1s_2u_1u_2u_1u_2u_1\subset U''''.$ \label{ll} \end{lem} \begin{proof} In this proof we will double-underline only the elements of the form $u_1s_2u_1u_2u_1u_2u_1$ (and not of the form $u_1s_2^{-1}u_1u_2u_1u_2u_1$ ). The fact that $u_1\grv^3+u_1^{\times}\grv^4+u_1s_2u_1u_2u_1u_2u_1$ is a subset of $ U''''$ follows from the definition of $U''''$ and proposition \ref{p2}. As a result, we restrict ourselves to proving that $s_2^{-2}s_1^2s_2^2s_1^3s_2^2\in u_1\grv^3+u_1^{\times}\grv^4+u_1s_2u_1u_2u_1u_2u_1$. We first notice that $$\small{\begin{array}{lcl} s_2^{-2}s_1^2s_2^2s_1^3s_2^2&=&s_1(s_1^{-1}s_2^{-2}s_1)s_2^{-2}(s_2^2s_1s_2)s_2s_1^2(s_1s_2^2s_1^{-1})s_1^{-1}s_1^2\\ &=& s_1s_2s_1^{-2}s_2^{-3}s_1\grv s_1^2s_2^{-1}s_1(s_1s_2s_1^{-1})s_1^2\\ &=& s_1s_2s_1^{-2}s_2^{-3}s_1^3(s_2s_1^3s_2^{-1})s_1s_2s_1^2\\ &=& s_1s_2s_1^{-2}s_2^{-3}s_1^2s_2^3s_1^2s_2s_1^2\\ &\in& u_1s_2s_1^{-2}s_2^{-3}s_1^2s_2^3s_1^2s_2u_1. 
\end{array}}$$ We expand $s_2^{-3}$ as a linear combination of $s_2^{-2}$, $s_2^{-1}$, 1, $s_2$ and $s_2^2$, where the coefficient of $s_2^2$ is invertible, and we have: $$\small{\begin{array}[t]{lcl} s_2s_1^{-2}s_2^{-3}s_1^2s_2^3s_1^2s_2&\in& R_5s_2s_1^{-2}s_2^{-2}s_1^2s_2^3s_1^2s_2+R_5s_2s_1^{-2}s_2^{-1}s_1^2s_2^3s_1^2s_2+\underline{\underline{R_5s_2s_2^3s_1^2s_2}}+\\&&+R_5s_2s_1^{-2}s_2s_1^2s_2^3s_1^2s_2+u_1^{\times}s_2s_1^{-2}s_2^{2}s_1^2s_2^3s_1^2s_2u_1^{\times}. \end{array}}$$ However, we notice that $s_2s_1^{-2}s_2^{-2}s_1^2s_2^3s_1^2s_2=s_2s_1^{-1}(s_1^{-1}s_2^{-2}s_1)s_1s_2^3s_1^2s_2=s_2s_1^{-1}s_2^2\grv^{-1}s_1s_2^3s_1^2s_2=s_2s_1^{-1}s_2^2s_1\grv^{-1}s_2^3s_1^2s_2=s_2s_1^{-1}s_2^2s_1(s_2^{-1}s_1^{-2}s_2)\grv=s_2s_1^{-1}s_2^2s_1^2s_2^{-2}\grv s_1^{-1} =\underline{\underline{s_2s_1^{-1}s_2^2s_1^2(s_2^{-1}s_1^2s_2)}}s_1^{-1}$. Moreover, $s_2s_1^{-2}s_2^{-1}s_1^2s_2^3s_1^2s_2=s_2s_1^{-2}(s_2^{-1}s_1^2s_2)s_2^2s_1^2s_2=s_2s_1^{-1}s_2^2(s_1^{-1}s_2^2s_1)s_2(s_2^{-1}s_1s_2)=\underline{\underline{s_2s_1^{-1}s_2^3(s_1^3s_2s_1)s_1^{-2}}}.$ We also have $s_2s_1^{-2}s_2s_1^2s_2^3s_1^2s_2=s_2s_1^{-3}(s_1s_2s_1^2)s_2^3s_1^2s_2=s_2s_1^{-3}s_2(s_2s_1s_2^4)s_1^2s_2\in s_2s_1^{-3}(s_2u_1s_2u_1s_2u_1).$ However, by lemma \ref{oo}(i) we have that $s_2s_1^{-3}(s_2u_1s_2u_1s_2u_1)\subset s_2s_1^{-3}(\grv^2u_1+u_1u_2u_1u_2u_1)\subset s_2\grv^2 u_1+ \underline{\underline{u_1s_2u_1u_2u_1u_2u_1}}$. By lemma \ref{oo}(ii) we also have $s_2\grv^2 u_1\subset \underline{\underline{u_1s_2u_1u_2u_1u_2u_1}}$. It remains to prove that $s_2s_1^{-2}s_2^{2}s_1^2s_2^3s_1^2s_2\in u_1\grv^3+u_1^{\times}\grv^4+u_1s_2u_1u_2u_1u_2u_1$. 
We have: $$\small{\begin{array}[t]{lcl} s_2s_1^{-2}s_2^{2}s_1^2s_2^3s_1^2s_2&=& s_2(-de^{-1}s_1^{-1}-ce^{-1}-e^{-1}bs_1-e^{-1}as_1^2+e^{-1}s_1^3)s_2^{2}s_1^2s_2^3s_1^2s_2\\ &\in& R_5s_2s_1^{-1}s_2^2s_1^2s_2^3s_1^2s_2+R_5s_2^3s_1^2s_2^3s_1^2s_2 +\underline{\underline{R_5(s_2s_1s_2^2)s_1^2s_2^3s_1^2s_2u_1}}+R_5s_2s_1^2s_2^2s_1^2s_2^3s_1^2s_2+\\&&+ u_1^{\times}s_2s_1^3s_2^2s_1^2s_2^2\grv. \end{array}}$$ We first notice that we have $s_2s_1^{-1}s_2^2s_1^2s_2^3s_1^2s_2=s_2(s_1^{-1}s_2^2s_1)s_1s_2^3s_1^2s_2=s_2^2s_1^2(s_2^{-1}s_1s_2)s_2^2s_1^2s_2=s_1(s_1^{-1}s_2^2s_1)s_1^2s_2(s_1^{-1}s_2^2s_1)(s_1s_2s_1)s_1^{-1}=s_1s_2s_1^2(s_2^{-1}s_1^2s_2)s_2s_1^3s_2s_1^{-1}=\underline{\underline{s_1s_2s_1^3s_2^3(s_2^{-1}s_1^{-1}s_2)(s_1^3s_2s_1)s_1^{-2}}}.$ Moreover, we have that $s_2^3s_1^2s_2^3s_1^2s_2=s_1(s_1^{-1}s_2^3s_1)s_1s_2^3s_1^2s_2=s_1s_2s_1^3(s_2^{-1}s_1s_2)s_2^2s_1(s_1s_2s_1)s_1^{-1}=\underline{\underline{s_1s_2s_1^4s_2(s_1^{-1}s_2^2s_1)s_2s_1s_2s_1^{-1}}}.$ Using analogous calculations, $s_2s_1^2s_2^2s_1^2s_2^3s_1^2s_2=s_1^{-1}(s_1s_2s_1^2)s_2^2s_1^2s_2^3s_1^2s_2= s_1^{-1}s_2(s_2s_1s_2^3)s_1^2s_2^3s_1^2s_2=s_1^{-1}s_2s_1^2(s_1s_2s_1^3)s_2^3s_1^2s_2=s_1^{-1}s_2s_1^2s_2^2(s_2s_1s_2^4)s_1^2s_2\in u_1\grv (s_2u_1s_2u_1s_2u_1).$ However, by lemma \ref{oo}(i) we have that $u_1\grv (s_2u_1s_2u_1s_2u_1)\subset u_1\grv(\grv^2u_1+u_1u_2u_1u_2u_1)\subset u_1\grv^3+\underline{\underline{u_1\grv u_2u_1u_2u_1}}.$ In order to finish the proof, it remains to prove that $s_2s_1^3s_2^2s_1^2s_2^2\grv\in u_1^{\times}\grv^4+u_1s_2u_1u_2u_1u_2u_1$. 
We use lemma \ref{cc} and we have: \\\\ $\small{\begin{array}[t]{lcl} s_2s_1^3s_2^2s_1^2s_2^2\grv&\in& u_1^{\times}(u_1s_2u_1s_2s_1^3s_2+u_1s_2^2s_1^3s_2s_1^{-1}s_2+ u_1u_2u_1u_2u_1+u_1^{\times}\grv^3)\grv\\ &\in& u_1s_2u_1s_2s_1^4(s_1^{-1}s_2^2s_1)s_1s_2+u_1s_2^2s_1^3s_2(s_1^{-1}s_2s_1)s_1^{-1}\grv +u_1^{\times}\grv^4\\ &\in& u_1s_2u_1s_2s_1^4s_2s_1^2(s_2^{-1}s_1s_2)+u_1s_2^2s_1^3s_2^2s_1s_2^{-1}\grv +u_1^{\times}\grv^4\\ &\in& u_1s_2u_1s_2s_1^4s_2s_1^3s_2u_1+u_1s_2^2s_1^3s_2^2s_1^3s_2+u_1^{\times}\grv^4\\ &\in& u_1s_2u_1(s_2u_1s_2u_1s_2u_1)+u_1(s_1^{-1}s_2^2s_1)s_1^2s_2^2s_1^3s_2+ u_1^{\times}\grv^4\\ &\stackrel{\ref{oo}(i)}{\in}& u_1s_2u_1(\grv^2u_1+u_1u_2u_1u_2u_1)+ u_1s_2s_1^2(s_2^{-1}s_1^2s_2)s_2s_1^3s_2+u_1^{\times}\grv^4\\ &\in& u_1s_2\grv^2u_1+u_1s_2u_1u_2u_1u_2u_1+ u_1s_2s_1^3s_2^2(s_1^{-1}s_2s_1)s_1^2s_2+u_1^{\times}\grv^4\\ &\stackrel{\ref{oo}(ii)}{\in}&u_1s_2u_1u_2u_1u_2u_1+\underline{\underline{u_1s_2s_1^3s_2^3s_1(s_2^{-1}s_1^2s_2)}}+u_1^{\times}\grv^4 . \end{array}}$\\ \end{proof} \begin{prop}\mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item [(i)]$s_2u_1u_2u_1s_2s_1^{-1}s_2\subset U''''.$ \item[(ii)]$s_2u_1u_2u_1s_2^{-1}s_1s_2^{-1}\subset U''''.$ \item[(iii)]$s_2u_1u_2u_1u_2\grv \subset U''''$. \end{itemize} \label{xx} \end{prop} \begin{proof}\mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] By proposition \ref{l2}(ii) we have $ s_2u_1u_2u_1s_2s_1^{-1}s_2\subset s_2u_1(u_1\grv^{2}+R_5s_2^2s_1^{-2}s_2s_1^{-1}s_2+u_1u_2u_1u_2u_1)$ and, hence, by lemma \ref{oo}(ii) we have $s_2u_1u_2u_1s_2s_1^{-1}s_2\subset s_2u_1s_2^2s_1^{-2}s_2s_1^{-1}s_2+\underline{\underline{s_2u_1u_2u_1u_2u_1}}+U''''.$ As a result, we must prove that $s_2u_1s_2^2s_1^{-2}s_2s_1^{-1}s_2\subset U''''$. 
For this purpose, we expand $u_1$ as $R_5+R_5s_1+R_5s_1^{-1}+R_5s_1^2+R_5s_1^3$ and we have: $\begin{array}{lcl} s_2u_1s_2^2s_1^{-2}s_2s_1^{-1}s_2&\subset& u_2u_1s_2s_1^{-1}s_2+\underline{\underline{ R_5(s_2s_1s_2^2)s_1^{-2}s_2s_1^{-1}s_2}}+R_5s_2s_1^{-1}s_2^2s_1^{-2}s_2s_1^{-1}s_2 +\\&&+R_5s_2s_1^2s_2^2s_1^{-2}s_2s_1^{-1}s_2+R_5s_2s_1^3s_2^2s_1^{-2}s_2s_1^{-1}s_2. \end{array}$ By proposition \ref{l2}(ii) we have that $u_2u_1s_2s_1^{-1}s_2\subset U''''$. Moreover, $s_2s_1^2s_2^2s_1^{-2}s_2s_1^{-1}s_2=s_1^{-1}(s_1s_2s_1^2)s_2^2s_1^{-3}(s_1s_2s_1^{-1})s_2=s_1^{-1}s_2(s_2s_1s_2^3)s_1^{-3}s_2^{-1}s_1s_2^2= \underline{\underline{s_1^{-1}s_2s_1^3(s_2s_1^{-2}s_2^{-1})s_1s_2^2}}$. We also notice that $s_2s_1^3s_2^2s_1^{-2}s_2s_1^{-1}s_2=s_1^{-1}(s_1s_2s_1^3)s_2^2s_1^{-3}(s_1s_2s_1^{-1})s_2=s_1^{-1}s_2^3(s_1s_2^3s_1^{-1})s_1^{-2}s_2^{-1}s_1s_2^2=s_1^{-1}s_2^2s_1^3(s_2s_1^{-2}s_2^{-1})s_1s_2^2=s_1^{-2}(s_1s_2s_1^{-1})s_1(s_2s_1^2s_2^{-1})s_2^{-1}s_1^2s_2^2=s_1^{-2}s_2^{-1}(s_1s_2^3s_1^{-1})s_2^{-1}(s_2s_1^2s_2^{-1})s_1^2s_2^2 \in u_1s_2^{-2}s_1^2s_2^2s_1^3s_2^2\stackrel{\ref{ll}}{\subset} U''''.$ It remains to prove that the element $s_2s_1^{-1}s_2^2s_1^{-2}s_2s_1^{-1}s_2$ is inside $U''''$. 
We expand $s_2^2$ as a linear combination of $s_2$, 1, $s_2^{-1}$, $s_2^{-2}$ and $s_2^{-3}$ and we have $\small{\begin{array}[t]{lcl} s_2s_1^{-1}s_2^2s_1^{-2}s_2s_1^{-1}s_2&\in & s_2s_1^{-1}(R_5s_2+R_5+R_5s_2^{-1}+R_5s_2^{-2}+R_5s_2^{-3})s_1^{-2}s_2s_1^{-1}s_2\\ &\in&R_5s_2s_1^{-2}(s_1s_2s_1^{-1})s_1^{-1}(s_2s_1^{-1}s_2^{-1})s_2^2+ \underline{\underline{R_5s_2s_1^{-3}s_2s_1^{-1}s_2}}+\\&&+\underline{\underline{ R_5(s_2s_1^{-1}s_2^{-1})s_1^{-2}s_2s_1^{-1}s_2}}+\underline{\underline{ R_5(s_2s_1^{-1}s_2^{-1})(s_2^{-1}s_1^{-2}s_2)s_1^{-1}s_2}}+\\&&+R_5(s_2s_1^{-1}s_2^{-1})s_2^{-1}(s_2^{-1}s_1^{-2}s_2)s_1^{-1}s_2\\ &\in& \underline{\underline{R_5s_2s_1^{-2}s_2^{-1}s_1(s_2s_1^{-2}s_2^{-1})s_1s_2^2}}+ R_5s_1^{-1}s_2^{-1}s_1^2(s_1^{-1}s_2^{-1}s_1)s_2^{-2}s_1^{-2}s_2 +U''''\\ &\in&R_5s_1^{-1}s_2^{-1}s_1^2s_2(s_1^{-1}s_2^{-3}s_1)s_1^{-3}s_2+ U''''\\ &\in&\underline{\underline{R_5s_1^{-1}s_2^{-1}s_1^2s_2^2s_1^{-3}(s_2^{-1}s_1^{-3}s_2)}}+U''''. \end{array}}$ \item[(ii)] By proposition \ref{l2}(i) we have that $s_2u_1(u_2u_1s_2^{-1}s_1s_2^{-1})\subset s_2u_1(u_1\grv^{-2}+R_5s_2^{-2}s_1^{2}s_2^{-1}s_1s_2^{-1}+u_1u_2u_1u_2u_1) \subset\underline{\underline{s_2\grv^{-2}u_1}}+ s_2u_1s_2^{-2}s_1^{2}s_2^{-1}s_1s_2^{-1}+\underline{\underline{s_2u_1u_2u_1u_2u_1}}$. Therefore, it remains to prove that $s_2u_1s_2^{-2}s_1^{2}s_2^{-1}s_1s_2^{-1} \subset U''''$.
We expand $u_1$ as $R_5+R_5s_1+R_5s_1^{-1}+R_5s_1^2+R_5s_1^{3}$ and we have: $\small{\begin{array}[t]{lcl} s_2u_1s_2^{-2}s_1^{2}s_2^{-1}s_1s_2^{-1} &\subset& \underline{\underline{R_5s_2^{-1}s_1^2s_2^{-1}s_1s_2^{-1}}}+\underline{\underline{R_5s_1^{-2}(s_1^2s_2s_1)s_2^{-2}s_1^2s_2^{-1}s_1s_2^{-1}}}+\\&& +R_5(s_2s_1^{-1}s_2^{-1})(s_2^{-1}s_1^2s_2)s_2^{-2}s_1s_2^{-1}+R_5(s_2s_1^2s_2^{-1})s_2^{-2}(s_2s_1^2s_2^{-1})s_1s_2^{-1}+\\&&+R_5(s_2s_1^3s_2^{-1})s_2^{-1}s_1^2s_2^{-1}(s_1s_2s_1^{-1})s_1\\ &\subset&\underline{\underline{R_5s_1^{-1}s_2^{-1}s_1^2s_2^2(s_1^{-1}s_2^{-2}s_1)s_2^{-1}}}+ \underline{\underline{R_5s_1^{-1}s_2^2(s_1s_2^{-2}s_1^{-1})s_2(s_2s_1s_2^{-1})}}+\\ &&+R_5s_1^{-1}s_2^2(s_2s_1s_2^{-1})s_1(s_1s_2^{-2}s_1^{-1})s_2s_1+U''''\\ &\subset&R_5s_1^{-1}s_2^2s_1^{-1}(s_2s_1^{2}s_2^{-1})s_1^{-2}s_2^2s_1+U''''\\ &\subset&R_5s_1^{-1}s_2^2s_1^{-3}(s_1s_2^2s_1^{-1})s_2^2s_1+U''''\\ &\subset& \underline{\underline{R_5s_1^{-1}s_2(s_2s_1^{-3}s_2^{-1})s_1^2s_2^3s_1}}+U''''. \end{array}}$ \item[(iii)] We notice that $s_2u_1u_2u_1u_2\grv=s_2u_1u_2u_1u_2s_1^2s_2= s_2u_1u_2u_1u_2(s_2^{-1}s_1^2s_2) \subset s_2u_1u_2u_1u_2s_1s_2^2u_1$. We expand $\bold{u_2}$ as $R_5+R_5s_2+R_5s_2^{-1}+R_5s_2^{2}+R_5s_2^{-2}$ and we have: $\small{\begin{array}[t]{lcl} s_2u_1u_2u_1\bold{u_2}s_1s_2^2u_1&\subset&\underline{\underline{s_2u_1u_2u_1s_2^2u_1}}+\underline{\underline{s_2u_1u_2u_1(s_2s_1s_2^2)u_1}}+ s_2u_1u_2u_1(s_2^{-1}s_1s_2)s_2u_1+\\&&+ s_2u_1u_2u_1s_2(s_2s_1s_2^2)u_1+s_2u_1u_2u_1s_2^{-1}(s_2^{-1}s_1s_2)s_2u_1\\ &\subset&s_2u_1(u_2u_1s_2s_1^{-1}s_2u_1)+\underline{\underline{s_2u_1u_2u_1\grv u_1}}+s_2u_1u_2u_1(s_2^{-1}s_1s_2)s_1^{-1}s_2u_1+U''''\\ &\stackrel{(i)}{\subset}&s_2u_1u_2u_1s_2s_1^{-2}s_2u_1+U''''. \end{array}}$ It remains to prove that $s_2u_1u_2u_1s_2s_1^{-2}s_2u_1\subset U''''$.
For this purpose, we expand $s_1^{-2}$ as a linear combination of $s_1^{-1}$, 1, $s_1$, $s_1^2$, $s_1^3$ and we have: $s_2u_1u_2u_1s_2s_1^{-2}s_2u_1 \subset s_2u_1u_2u_1s_2s_1^{-1}s_2u_1+\underline{\underline{s_2u_1u_2u_1s_2^2u_1}}+ \underline{\underline{s_2u_1u_2u_1(s_2s_1s_2)u_1}}+s_2u_1u_2u_1\grv u_1+s_2u_1u_2u_1(s_1^{-1}s_2s_1)s_1^2s_2u_1\stackrel{(i)}{\subset} \underline{\underline{s_2u_1u_2\grv u_1}}+s_2u_1u_2u_1s_2s_1(s_2^{-1}s_1^2s_2)u_1+U'''' \subset s_2u_1u_2u_1\grv s_2u_1+U'''' \subset s_2u_1u_2\grv u_1s_2u_1+U''''$. However, $\small{\begin{array}[t]{lcl} s_2u_1u_2\grv u_1s_2u_1&\subset&s_2u_1u_2s_1^2s_2(R_5+R_5s_1+ R_5s_1^{-1}+R_5s_1^2+R_5s_1^3)s_2u_1\\ &\subset&\underline{\underline{s_2u_1u_2s_1^2s_2^2u_1}}+\underline{\underline{s_2u_1u_2(s_1^2s_2s_1)s_2u_1}} +s_2u_1(u_2u_1s_2s_1^{-1}s_2u_1)+\\&&+ s_2u_1u_2s_1^2\grv u_1+s_2u_1(s_1^{-1}u_2s_1)(s_1s_2s_1^3)s_2u_1\\ &\stackrel{(i)}{\subset}&\underline{\underline{s_2u_1u_2\grv u_1}}+s_2u_1s_2u_1(s_2^2s_1s_2)s_2u_1+U''''\\ &\subset&s_2u_1s_2u_1\grv u_1+U''''. \end{array}}$ The result follows from the fact that $s_2u_1s_2u_1\grv u_1=\underline{\underline{s_2u_1s_2\grv u_1}}$. \qedhere \end{itemize} \end{proof} We can now prove the following lemma that helps us to ``replace'' inside the definition of $U$ the elements $\grv^5$ and $\grv^{-5}$ by the elements $s_2^{-2}s_1^2s_2^3s_1^2s_2^3$ and $s_2^{-2}s_1^{2}s_2^{-2}s_1^2s_2^{-2}$ modulo $U''''$, respectively. 
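Before stating the next lemma, we record for the reader's convenience the expansion step that is used repeatedly in this section. The explicit coefficients appearing in these computations (for instance $-de^{-1}s_1^{-1}-ce^{-1}-e^{-1}bs_1-e^{-1}as_1^2+e^{-1}s_1^3$ for $s_1^{-2}$ in the proof of lemma \ref{ll}, and $as_1+b+cs_1^{-1}+ds_1^{-2}+es_1^{-3}$ for $s_1^{2}$ in case (ii) of the next lemma) are consistent with a defining relation of the form $s_i^5=as_i^4+bs_i^3+cs_i^2+ds_i+e$, with $e$ invertible in $R_5$. Assuming this form, multiplication by a suitable power of $s_i^{-1}$ yields, for example, $$s_i^{2}=as_i+b+cs_i^{-1}+ds_i^{-2}+es_i^{-3}\qquad\text{and}\qquad s_i^{-2}=e^{-1}s_i^{3}-ae^{-1}s_i^{2}-be^{-1}s_i-ce^{-1}-de^{-1}s_i^{-1}.$$ In particular, every power of $s_i$ lies in the $R_5$-span of five consecutive powers of $s_i$, and when the power being expanded is adjacent to the chosen window the coefficient of the extreme power of the window is $e^{\pm1}$, hence invertible; this is the invertibility invoked, for example, in the expansions of $s_2^{-3}$ and $\bold{s_2^{-2}}$ above and below.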
\begin{lem}\mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item [(i)]$s_2^{-2}s_1^2s_2^3s_1^2s_2^3\in u_1^{\times}\grv^5+ U''''.$ \item [(ii)]$s_2^{-2}s_1^{2}s_2^{-2}s_1^2s_2^{-2}\in u_1^{\times}\grv^{-5}+U''''.$ \item [(iii)]$s_2^{-2}u_1s_2^{-2}s_1^{2}s_2^{-2}\subset U.$ \end{itemize} \label{lol} \end{lem} \begin{proof}\mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.6cm] \item[(i)] $s_2^{-2}s_1^2s_2^3s_1^2s_2^3=s_2^{-2}s_1^2s_2^3s_1(s_1s_2^3s_1^{-1})s_1 =s_2^{-2}s_1^2s_2^2(s_2s_1s_2^{-1})s_1^2(s_1s_2s_1^{-1})s_1^2 =s_2^{-2}s_1^2s_2^2s_1^{-1}(s_2s_1^3s_2^{-1})s_1s_2s_1^2 =s_2^{-2}s_1^2s_2^2s_1^{-2}s_2^2\grv s_1^2$. We expand $s_1^{-2}$ as a linear combination of $s_1^{-1}$, 1, $s_1$ $s_1^{2}$ and $s_1^3$, where the coefficient of $s_1^3$ is invertible, and we have: $\small{\begin{array}[t]{lcl} s_2^{-2}s_1^2s_2^2s_1^{-2}s_2^2\grv s_1^2 &\in&s_2^{-2}s_1(s_1s_2^2s_1^{-1})s_2^2\grv u_1+ s_2^{-1}(s_2^{-1}s_1^4s_2)s_2^2\grv u_1+\\&&+s_2^{-2}s_1^2(s_2^2s_1s_2)s_2\grv u_1+s_2^{-1}(s_2^{-1}s_1^2s_2)s_2s_1^2s_2^2\grv s_2\grv u_1+ u_1^{\times}s_2^{-2}s_1^2s_2^2s_1^{3}s_2^2\grv s_1^2\\ &\stackrel{\ref{ll}}{\in}&\underbrace{s_1(s_1^{-1}s_2^{-2}s_1)s_2^{-1}s_1^2s_2^3\grv u_1+s_1(s_1^{-1}s_2^{-1}s_1)s_2^{4}s_1^{-1}s_2^2 s_1\grv u_1}_{\in u_1s_2u_1u_2u_1u_2\grv u_1} +s_2^{-2}s_1^3\grv^2u_1+\\&&+ (s_2^{-1}s_1s_2)s_2(s_1^{-1}s_2s_1)s_1s_2^2\grv u_1+u_1^{\times}(u_1\grv^3+u_1^{\times}\grv^4+u_1s_2u_1u_2u_1u_2u_1)\grv s_1^2\\ &\in&\underline{\underline{s_2^{-2}\grv^2u_1}}+s_1s_2s_1^{-1}s_2^2s_1(s_2^{-1}s_1s_2)s_2\grv u_1+u_1\grv^3+u_1^{\times}\grv^5+u_1s_2u_1u_2u_1u_2u_1\grv u_1\\ &\in&s_1s_2s_1^{-1}(u_2s_1^2s_2s_1^{-1}s_2)\grv u_1+u_1^{\times}\grv^5+u_1s_2u_1u_2u_1u_2u_1\grv u_1+U''''. 
\end{array}}$ By proposition \ref{l2}(ii) we have that $s_2s_1^{-1}(u_2s_1^2s_2s_1^{-1}s_2)\grv u_1\subset s_2s_1^{-1}(u_1\grv^2+R_5s_2^2s_1^{-2}s_2s_1^{-1}s_2+u_1u_2u_1u_2u_1)\grv u_1 \stackrel{\ref{ll}}{\subset}s_2\grv^2u_1+u_1s_2u_1u_2u_1u_2u_1\grv u_1$ and, hence, the element $s_2^{-2}s_1^2s_2^2s_1^{-2}s_2^2\grv s_1^2$ is inside $s_2\grv^2u_1+u_1^{\times}\grv^5+u_1s_2u_1u_2u_1u_2u_1\grv u_1+U''''$. We notice that $u_1s_2u_1u_2u_1u_2u_1\grv u_1=u_1s_2u_1u_2u_1u_2\grv u_1$ and, hence, by lemma \ref{oo}(ii) and proposition \ref{xx}(iii) we have that the element $s_2^{-2}s_1^2s_2^2s_1^{-2}s_2^2\grv s_1^2$ is inside $u_1^{\times}\grv^5+U''''$. \item[(ii)]$\small{\begin{array}[t]{lcl} s_2^{-2}s_1^{2}s_2^{-2}s_1^2s_2^{-2}&=&s_2^{-2}(as_1+b+cs_1^{-1}+ds_1^{-2}+es_1^{-3})s_2^{-2}s_1^2s_2^{-2} \\ &\in&\underline{\underline{s_1(s_1^{-1}s_2^{-2}s_1)s_2^{-2}s_1^2s_2^{-2}}}+ \underline{R_5s_2^{-4}s_1^2s_2^{-2}}+\underline{\underline{R_5s_2^{-1}(s_2^{-1}s_1^{-1}s_2^{-2})s_1^2s_2^{-2}}}+\\&&+R_5s_2^{-1}(s_2^{-1}s_1^{-2}s_2)s_2^{-3}s_1^2s_2^{-2}+R_5 s_2^{-2}s_1^{-2}(s_1^{-1}s_2^{-2}s_1)s_1s_2^{-2}\\ &\in&R_5s_2^{-1}s_1s_2^{-2}(s_1^{-1}s_2^{-3}s_1)s_1s_2^{-2}+R_5 s_2^{-1}(s_2^{-1}s_1^{-2}s_2)s_1^{-2}(s_2^{-1}s_1s_2)s_2^{-3}+U''''\\ &\in&R_5s_2^{-1}s_1^2(s_1^{-1}s_2^{-1}s_1^{-3})s_2^{-1}s_1s_2^{-2}+R_5s_2^{-1}s_1s_2^{-1}(s_2^{-1}s_1^{-2}s_2)s_1^{-1}s_2^{-3}+U''''\\ &\in&\underline{\underline{R_5s_2^{-1}s_1^2s_2^{-2}(s_2^{-1}s_1^{-1}s_2^{-2})s_1s_2^{-2}}}+R_5s_2^{-1}s_1(s_2^{-1}s_1s_2)s_2^{-2}s_1^{-2}s_2^{-3}+U'''' \\ &\in&R_5(s_2^{-1}s_1^2s_2)s_1^{-1}s_2^{-2}s_1^{-2}s_2^{-3}+U''''\\ &\in& \grF(u_1s_2^{-2}s_1^2s_2^3s_1^2s_2^3)+U''''. \end{array}}$ The result then follows from (i) and lemma \ref{r1}.
\item[(iii)] We expand $u_1$ as $R_5+R_5s_1+R_5s_1^{-1}+R_5s_1^2+R_5s_1^{-2}$ and we have that $\small{\begin{array}{lcl}s_2^{-2}u_1s_2^{-2}s_1^2s_2^{-2}&\subset& \underline{R_5s_2^{-4}s_1^2s_2^{-2}}+\underline{\underline{R_5s_2^{-1}(s_2^{-1}s_1s_2)s_2^{-3}s_1^2s_2^{-2}}}+\underline{\underline{R_5(s_2^{-2}s_1^{-1}s_2^{-1})s_2^{-1}s_1^2s_2^{-2}}}+\\&&+R_5s_2^{-2}s_1^2s_2^{-2}s_1^2s_2^{-2}+R_5s_2^{-2}s_1^{-2}s_2^{-2}s_1^2s_2^{-2}. \end{array}}$ The element $s_2^{-2}s_1^2s_2^{-2}s_1^2s_2^{-2}$ is inside $u_1^{\times}\grv^{-5}+U''''\subset U$, by (ii). Moreover, $s_2^{-2}s_1^{-2}s_2^{-2}s_1^2s_2^{-2}= s_2^{-1}\grv^{-1}(s_2^{-1}s_1^2s_2)s_2^{-3} =s_2^{-1}\grv^{-1}s_1s_2^2s_1^{-1}s_2^{-3} =s_2^{-1}s_1\grv^{-1}s_2^2s_1^{-1}s_2^{-3}=\underline{\underline{s_2^{-1}s_1^2(s_1^{-1}s_2^{-1}s_1^{-2})s_2s_1^{-1}s_2^{-3}}}$. \qedhere \end{itemize} \end{proof} We can now prove the main theorem of this section. \begin{thm} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item [(i)] $U=U''''+u_1\grv^{-5}$. \item[(ii)]$H_5=U$. \label{thh2} \end{itemize} \end{thm} \begin{proof} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item [(i)] By definition, $U=U''''+u_1\grv^5+u_1\grv^{-5}$. Hence, it is enough to prove that $\grv^{-5}\in u_1^{\times}\grv^{5}+U''''$; indeed, applying $\grF$ and using lemma \ref{r1}, this containment also yields $\grv^{5}\in u_1^{\times}\grv^{-5}+U''''$. By lemma \ref{lol}(ii) we have that $\grv^{-5}\in u_1^{\times}s_2^{-2}s_1^{2}s_2^{-2}s_1^2s_2^{-2}+U''''$.
We expand $\bold {s_2^{-2}}$ as a linear combination of $s_2^{-1}$, 1, $s_2$, $s_2^2$ and $s_2^3$, where the coefficient of $s_2^3$ is invertible, and we have: $\small{\begin{array}{lcl} s_2^{-2}s_1^{2} s_2^{-2}s_1^2\bold {s_2^{-2}}&\in& R_5s_2^{-1}(s_2^{-1}s_1^2s_2)s_2^{-3}s_1^2s_2^{-1}+\underline{R_5s_2^{-2}s_1^2s_2^{-2}s_1^2}+ R_5(s_1^{-1}s_2^{-2}s_1)s_1s_2^{-1}(s_2^{-1}s_1^2s_2)+\\&&+R_5s_1(s_1^{-1}s_2^{-2}s_1)s_1s_2^{-1}(s_2^{-1}s_1^2s_2)s_2+R_5^{\times}s_2^{-2}s_1^{2}s_2^{-2}s_1^2s_2^3\\ &\in&u_1s_2^{-1}s_1s_2^2(s_1^{-1}s_2^3s_1)s_1s_2^{-1}+u_1s_2s_1^{-1}(s_1^{-1}s_2^{-1}s_1)(s_2^{-1}s_1s_2)s_2s_1^{-1}+\\&&+u_1s_2s_1^{-1}(s_1^{-1}s_2^{-1}s_1)(s_2^{-1}s_1s_2)s_2s_1^{-1}s_2+ u_1^{\times}s_2^{-2}s_1^{2}s_2^{-2}s_1^2s_2^3+U''''\\ &\in&u_1\grF(s_2s_1^{-1}u_2u_1s_2s_1^{-1}s_2)+ \underline{\underline{u_1s_2s_1^{-1}s_2(s_1^{-1}s_2^{-1}s_1)s_2s_1^{-1}s_2 s_1^{-1}}}+\\&&+u_1s_2s_1^{-1}s_2(s_1^{-1}s_2^{-1}s_1)s_2s_1^{-1}s_2 s_1^{-1}s_2+u_1^{\times}s_2^{-2}s_1^{2}s_2^{-2}s_1^2s_2^3+U''''\\ &\stackrel{\ref{l2}}{\in}&u_1\grF\big(s_2s_1^{-1}\grv^2u_1+s_2s_1^{-1}s_2^2s_1^{-2}s_2s_1^{-1}s_2+ s_2s_1^{-1}u_1u_2u_1u_2u_1\big)+\\&&+ u_1s_2s_1^{-1}s_2^2s_1^{-2}s_2s_1^{-1}s_2+u_1^{\times}s_2^{-2}s_1^{2}s_2^{-2}s_1^2s_2^3+U'''' \end{array}}$ However, by lemma \ref{oo}(ii) and proposition \ref{xx}(i) we have that $\grF\big(s_2s_1^{-1}\grv^2u_1+s_2s_1^{-1}s_2^2s_1^{-2}s_2s_1^{-1}s_2+\underline{\underline{s_2s_1^{-1}u_1u_2u_1u_2u_1}}\big)\subset \grF(U'''')\stackrel{\ref{r1}}{\subset}U''''$. Therefore, it will be sufficient to prove that the element $s_2^{-2}s_1^{2}s_2^{-2}s_1^2s_2^3$ is inside $u_1^{\times}\grv^5+U''''$.
We expand $\bold {s_2^{-2}}$ as a linear combination of $s_2^{-1}$, 1, $s_2$, $s_2^2$ and $s_2^3$, where the coefficient of $s_2^3$ is invertible, and we have: $s_2^{-2}s_1^{2}\bold {s_2^{-2}}s_1^2s_2^3 \in u_1s_2^{-3}(s_2s_1^2s_2^{-1})s_1^2s_2^3+\underline{u_1s_2^{-2}s_1^4s_2^3}+ \underline{\underline{u_1s_2^{-2}(s_1^2s_2s_1)s_1s_2^3}}+u_1s_2^{-1}(s_2^{-1}s_1^2s_2)s_2s_1^2s_2^3+u_1^{\times}s_2^{-2}s_1^2s_2^3s_1^2s_2^3$. By lemma \ref{lol}(i) we have that $u_1^{\times}s_2^{-2}s_1^2s_2^3s_1^2s_2^3\subset u_1^{\times}\grv^5+U''''.$ Therefore, $s_2^{-2}s_1^{2}s_2^{-2}s_1^2s_2^3 \in\underline{\underline{u_1(s_1s_2^{-3}s_1^{-1})s_2^2s_1^3s_2^3}}+u_1s_2^{-1}s_1s_2^2s_1^{-1}s_2s_1^2s_2^3+u_1^{\times}\grv^5+U''''$. It remains to prove that the element $s_2^{-1}s_1s_2^2s_1^{-1}s_2s_1^2s_2^3$ is inside $U''''$. Indeed, $s_2^{-1}s_1s_2^2s_1^{-1}s_2s_1^2s_2^3=(s_2^{-1}s_1s_2)s_2(s_1^{-1}s_2s_1)s_1s_2^3=s_1s_2(s_1^{-1}s_2^2s_1)(s_2^{-1}s_1s_2)s_2^2=s_1s_2^2s_1^2(s_2^{-1}s_1s_2)s_1^{-1}s_2^2=s_1s_2^3(s_2^{-1}s_1^3s_2)s_1^{-2}s_2^2=\underline{\underline{s_1^2(s_1^{-1}s_2^3s_1)s_2^3s_1^{-3}s_2^2}}$. \item[(ii)] As we explained in the beginning of this section, since $1\in U$ it will be sufficient to prove that $U$ is invariant under left multiplication by $s_2$. We use the fact that $U$ is equal to the RHS of (i) and by the definition of $U''''$ we have: $\small{\begin{array}{lcl} U&=&U'+\sum\limits_{k=2}^4\grv^{k}u_1+\sum\limits_{k=2}^5\grv^{-k}u_1+ u_1s_2^{-2}s_1^2s_2^{-1}s_1s_2^{-1}u_1+u_1s_2^{2}s_1^{-2}s_2s_1^{-1}s_2u_1+ u_1s_2s_1^{-2}s_2^{2}s_1^{-2}s_2^{2}u_1+\\&&+u_1s_2^{-1}s_1^2s_2^{-2}s_1^2s_2^{-2}u_1. \end{array}}$ On one hand, $s_2(U'+\grv^2u_1+ u_1s_2^{-2}s_1^2s_2^{-1}s_1s_2^{-1}u_1+u_1s_2^{2}s_1^{-2}s_2s_1^{-1}s_2u_1)\subset U$ (proposition \ref{p2}, lemma \ref{oo}$(ii)$ and proposition \ref{xx}$(i),(ii)$). 
On the other hand, $$\sum\limits_{k=2}^{5}s_2\grv^{-k}u_1=\sum\limits_{k=2}^{5}s_1^{-2}s_2^{-1}\grv^{-k+1}u_1\subset \grF(\sum\limits_{k=2}^{5}u_1s_2\grv^{k-1}u_1)\stackrel{\ref{p1}\text{ and }\ref{oo}(ii)}{\subset}u_1\grF(U+s_2\grv^{3}+s_2\grv^4)u_1 $$ Therefore, by lemma \ref{r1} we only need to prove that $$s_2(\grv^3u_1+\grv^4 u_1+ u_1s_2s_1^{-2}s_2^{2}s_1^{-2}s_2^{2}u_1+ u_1s_2^{-1}s_1^2s_2^{-2}s_1^2s_2^{-2}u_1)\subset U.$$ We first notice that $s_2\grv^4u_1=s_2\grv^3\grv u_1=s_2\grv^3u_1\grv$. Therefore, in order to prove that $s_2(\grv^3u_1+\grv^4u_1)\subset U$, it will be sufficient to prove that $s_2\grv^3u_1\subset u_1s_2u_1u_2u_1u_2u_1$ (propositions \ref{p2} and \ref{xx}(iii)). Indeed, we have: $s_2\grv^3 u_1=s_2\grv^2\grv u_1 \stackrel{\ref{oo}(ii)}{=}s_1s_2s_1^4s_2s_1^3s_2\grv u_1 =s_1s_2s_1^4s_2s_1^4(s_1^{-1}s_2^2s_1)s_1s_2u_1 =s_1s_2s_1^4s_2s_1^4s_2s_1^2(s_2^{-1}s_1s_2)u_1 \subset u_1s_2u_1(s_2u_1s_2u_1s_2u_1)$. However, by lemma \ref{oo}(i) we have that $u_1s_2u_1(s_2u_1s_2u_1s_2u_1)\subset u_1s_2u_1(\grv^2u_1+u_1u_2u_1u_2u_1)$. The result follows from lemma \ref{oo}(ii). It remains to prove that $s_2 u_1s_2^{-1}s_1^2s_2^{-2}s_1^2s_2^{-2}u_1$ and $s_2u_1s_2s_1^{-2}s_2^{2}s_1^{-2}s_2^{2}u_1$ are subsets of $U.$ We have: $\small{\begin{array}{lcl} s_2u_1s_2^{-1}s_1^2s_2^{-2}s_1^2s_2^{-2}&=&s_2(R_5+R_5s_1+R_5s_1^{-1}+ R_5s_1^2+R_5s_1^{-2})s_2^{-1}s_1^2s_2^{-2}s_1^2s_2^{-2}\\ &\subset&\underline{R_5s_1^2s_2^{-2}s_1^2s_2^{-2}}+\underline{\underline{ R_5(s_2s_1s_2^{-1})s_1^2s_2^{-2}s_1^2s_2^{-2}}}+\underline{\underline{ R_5(s_2s_1^{-1}s_2^{-1})s_1^2s_2^{-2}s_1^2s_2^{-2}}}+\\&& +R_5(s_2s_1^{2}s_2^{-1})s_1^2s_2^{-2}s_1^2s_2^{-2}+ R_5(s_2s_1^{-2}s_2^{-1})s_1^2s_2^{-2}s_1^2s_2^{-2} \\ &\subset&u_1s_2^2s_1^3s_2^{-2}s_1^2s_2^{-2}+u_1s_2^{-2}u_1s_2^{-2}s_1^2s_2^{-2}+U. \end{array}}$ However, by lemma \ref{lol}(iii) we have that $u_1s_2^{-2}u_1s_2^{-2}s_1^2s_2^{-2}\subset U$. 
Therefore, it remains to prove that the element $s_2^2s_1^3s_2^{-2}s_1^2s_2^{-2}$ is inside $ U$. For this purpose, we expand $s_1^3$ as a linear combination of $s_1^2$, $s_1$, 1, $s_1^{-1}$ and $s_1^{-2}$ and we have: $\small{\begin{array}{lcl}s_2^2s_1^3s_2^{-2}s_1^2s_2^{-2}&\in& R_5s_2^2s_1^2s_2^{-2}s_1^2s_2^{-2}+\underline{\underline{ R_5s_2^2(s_1s_2^{-2}s_1^{-1})s_1^3s_2^{-2}}}+\underline{u_1s_2^{-2}}+ \underline{\underline{R_5s_1^{-1}(s_1s_2^2s_1^{-1})s_2^{-2}s_1^2s_2^{-2}}}+\\&&+ R_5s_2^{2}s_1^{-2}s_2^{-2}s_1^{2}s_2^{-2}. \end{array}}$ However, $s_2^2s_1^2s_2^{-2}s_1^2s_2^{-2}=s_2(s_2s_1^2s_2^{-1})s_2^{-1}s_1(s_1s_2^{-2}s_1^{-1})s_1=s_2s_1^{-1}s_2(s_2s_1s_2^{-1})s_1s_2^{-1}s_1^{-2}s_2s_1=s_2s_1^{-2}(s_1s_2s_1^{-1})(s_2s_1^2s_2^{-1})s_1^{-2}s_2s_1=s_2s_1^{-2}s_2^{-1}(s_1s_2s_1^{-1})s_2^2s_1^{-1}s_2s_1=\underline{\underline{u_1s_2s_1^{-2}s_2^{-2}(s_1s_2^3s_1^{-1})s_2s_1}}$. Moreover, we expand $s_1^2$ as a linear combination of $s_1$, 1, $s_1^{-1}$, $s_1^{-2}$ and $s_1^{-3}$ and we have: $\small{\begin{array}{lcl} s_2^{2}s_1^{-2}s_2^{-2}s_1^{2}s_2^{-2} &\in &\underline{R_5s_2^2s_1^{-2}s_2^{-4}}+\underline{\underline{ R_5s_1^{-1}(s_1s_2^2s_1^{-1})(s_1^{-1}s_2^{-2}s_1)s_2^{-2}}}+R_5s_2^2s_1^{-2}s_2^{-1}(s_2^{-1}s_1^{-1}s_2^{-2})+\\&&+R_5s_2(s_2s_1^{-2}s_2^{-1})\grv^{-1}s_2^{-1}+ \grF(s_2^{-2}s_1^2s_2^2s_1^3s_2^2)\\ &\stackrel{\ref{ll}}{\in}&R_5s_2^2s_1^{-2}\grv^{-1}s_1^{-1}+ R_5s_2s_1^{-1}s_2^{-2}s_1\grv^{-1}s_2^{-1}+\grF(U)+U\\ &\stackrel{\ref{r1}}{\subset}&\underline{s_2^2\grv^{-1}u_1}+ R_5s_2s_1^{-1}s_2^{-2}\grv^{-1}s_1s_2^{-1}+U\\ &\subset&s_2s_1^{-1}u_2u_1s_2^{-1}s_1s_2^{-1}+U. \end{array}}$ Therefore, by proposition \ref{xx}(ii) we have that the element $s_2^{2}s_1^{-2}s_2^{-2}s_1^{2}s_2^{-2}$ is inside $U$ and, hence, $s_2 u_1s_2^{-1}s_1^2s_2^{-2}s_1^2s_2^{-2}u_1\subset U$. In order to finish the proof that $H_5=U$ it remains to prove that $s_2u_1s_2s_1^{-2}s_2^2s_1^{-2}s_2^2\subset U$. 
For this purpose we expand $u_1$ as $R_5+R_5s_1+R_5s_1^2+R_5s_1^3+ R_5s_1^4$ and we have: $\small{\begin{array}[t]{lcl} s_2u_1s_2s_1^{-2}s_2^2s_1^{-2}s_2^2 &\subset&\grF(s_2^{-2}u_1s_2^{-2}s_1^2s_2^{-2})+\underline{\underline{ R_5(s_2s_1s_2)s_1^{-2}s_2^2s_1^{-2}s_2^2}}+R_5\grv s_1^{-2}s_2^2s_1^{-2}s_2^2+\\&& +R_5s_2s_1^3s_2s_1^{-2}s_2^2s_1^{-2}s_2^{2} + R_5s_2s_1^4s_2s_1^{-2}s_2^2s_1^{-2}s_2^2. \end{array}}$ However, by lemma \ref{lol}(iii) we have that $\grF(s_2^{-2}u_1s_2^{-2}s_1^2s_2^{-2})$ is a subset of $\grF(U)$ and, hence, by lemma \ref{r1}, a subset of $U$. Moreover, $\grv s_1^{-2}s_2^2s_1^{-2}s_2^2=\underline{\underline{R_5s_1^{-2}\grv s_2^2s_1^{-2}s_2^2}}$. It remains to prove that $C:=R_5s_2s_1^3s_2s_1^{-2}s_2^2s_1^{-2}s_2^{2} + R_5s_2s_1^4s_2s_1^{-2}s_2^2s_1^{-2}s_2^2$ is a subset of $U$. We have: $\small{\begin{array}[t]{lcl} C &=& R_5s_2s_1^2(s_1s_2s_1^{-1})s_1^{-1}s_2(s_2s_1^{-2}s_2^{-1})s_2^3+ R_5s_1^{-1}(s_1s_2s_1^4)s_2s_1^{-2}s_2(s_2s_1^{-2}s_2^{-1})s_2^3\\ &=& R_5(s_2s_1^2s_2^{-1})(s_1s_2s_1^{-1})s_2s_1^{-1}s_2^{-2}s_1s_2^3+R_5s_1^{-1}s_2^4(s_1s_2s_1^{-1})s_1^{-1}(s_2s_1^{-1}s_2^{-1})s_2^{-1}s_1s_2^3\\ &=&R_5s_1^{-1}s_2(s_2s_1s_2^{-1})(s_1s_2^2s_1^{-1})s_2^{-2}s_1s_2^3+R_5s_1^{-1}s_2^2\grv s_1^{-2}s_2^{-1}s_1s_2^{-1}s_1s_2^3\\ &=&R_5s_1^{-1}s_2s_1^{-1}(s_2s_1s_2^{-1})s_1^2s_2^{-1}s_1s_2^3+R_5s_1^{-1}s_2^2s_1^{-2}(s_2s_1^3s_2^{-1})s_1s_2^3+U\\ &=&\underline{\underline{R_5s_1^{-1}s_2s_1^{-2}(s_2s_1^3s_2^{-1})s_1s_2^3}}+u_1s_2^2s_1^{-3}s_2^3s_1^2s_2^3+U. \end{array}}$ We expand $s_1^{-3}$ as a linear combination of $s_1^{-2}$, $s_1^{-1}$, 1, $s_1$ and $s_1^2$ and we have that $\small{\begin{array}{lcl}u_1s_2^2s_1^{-3}s_2^3s_1^2s_2^3 &\subset& u_1s_2^2s_1^{-2}s_2^3s_1^2s_2^3+\underline{\underline{u_1(s_1s_2^2s_1^{-1})s_2^3s_1^2s_2^3}}+\underline{u_1s_2^5s_1^2s_2^3}+ \underline{\underline{u_1s_2(s_2s_1s_2^3)s_1^2s_2^3}}+\\&&+ u_1s_2^2s_1^2s_2^3s_1^2s_2^3.
\end{array}}$ Hence, in order to finish the proof that $H_5=U$ we have to prove that $u_1s_2^2s_1^{-2}s_2^3s_1^2s_2^3$ and $u_1s_2^2s_1^2s_2^3s_1^2s_2^3$ are subsets of $U$. We have: $\small{\begin{array}{lcl} u_1s_2^2s_1^{-2}s_2^3s_1^2s_2^3&=&u_1s_2^2s_1^{-2}(as_2^2+bs_2+c+ds_2^{-1}+es_2^{-2})s_1^2s_2^3\\ &\subset&u_1s_2^2s_1^{-2}s_2^2s_1^2s_2^3+ u_1s_2^2s_1^{-2}s_2s_1^2s_2^3+\underline{u_1s_2^5}+\underline{\underline{ u_1s_2(s_2s_1^{-2}s_2^{-1})s_1^2s_2^3}}+u_1s_2^2s_1^{-2}s_2^{-2}s_1^2s_2^3. \end{array}}$ However, we have that $u_1s_2^2s_1^{-2}s_2s_1^2s_2^3= \underline{\underline{u_1(s_1^{-1}s_2^2s_1)s_2(s_2^{-1}s_1^{-3}s_2)s_1^2s_2^3}}$. Moreover, we have $u_1s_2^2s_1^{-2}s_2^{-2}s_1^2s_2^3=u_1(s_1s_2^2s_1^{-1})(s_1^{-1}s_2^{-2}s_1)s_1s_2^3= u_1s_2^{-1}s_1^2s_2^3\grv^{-1}s_1s_2^3=u_1s_2^{-1}s_1^2s_2^{3}s_1(s_2^{-1}s_1^{-2}s_2)s_2=u_1s_2^{-1}s_1^2s_2^3s_1^2s_2^{-1}(s_2^{-1}s_1^{-1}s_2)\subset\grF(s_2u_2u_1s_2s_1^{-1}s_2)$. By proposition \ref{xx}(i) and lemma \ref{r1} we have that $u_1s_2^2s_1^{-2}s_2^{-2}s_1^2s_2^3\subset U$. It remains to prove that $u_1s_2^2s_1^{-2}s_2^2s_1^2s_2^3\subset U$. We notice that $u_1s_2^2s_1^{-2}s_2^2s_1^2s_2^3= u_1s_2^3(s_2^{-1}s_1^{-2}s_2)s_2s_1^2s_2^3 =u_1(s_1^{-1}s_2^3s_1)s_2^{-2}s_1^{-1}s_2s_1^2s_2^3 =u_1s_2s_1^3s_2^{-3}s_1^{-1}s_2s_1^2s_2^3$. We expand $s_2^3$ as a linear combination of $s_2^2$, $s_2$, 1, $s_2^{-1}$ and $s_2^{-2}$ and we have: $\small{\begin{array}{lcl} u_1s_2s_1^3s_2^{-3}s_1^{-1}s_2s_1^2s_2^3 &\subset&u_1s_2s_1^3s_2^{-2}(s_2^{-1}s_1^{-1}s_2)s_1^2s_2^2+u_1s_2s_1^3s_2^{-3}s_1^{-1}\grv+ \underline{\underline{u_1s_2s_1^3s_2^{-3}s_1^{-1}s_2s_1^2}}+\\&&+ \underline{\underline{u_1s_2s_1^3s_2^{-3}s_1^{-1}(s_2s_1^2s_2^{-1})}}+u_1s_2s_1^3s_2^{-3}s_1^{-1}(s_2s_1^2s_2^{-1})s_2^{-1}\\ &\subset&u_1s_2s_1^3s_2^{-2}s_1(s_2^{-1}s_1s_2)s_2+ u_1s_2s_1^3s_2^{-3}s_1^{-2}s_2(s_2s_1s_2^{-1})+\underline{\underline{u_1s_2s_1^3s_2^{-3}\grv s_1^{-1}}}+U\\ &\subset& u_1(s_2u_1u_2u_1s_2s_1^{-1}s_2)u_1+U.
\end{array}}$\\ The result follows from proposition \ref{xx}(i) and lemma \ref{r1}. Using analogous calculations we will prove that $u_1s_2^2s_1^2s_2^3s_1^2s_2^3$ is a subset of $U$. We have: $\small{\begin{array}{lcl} u_1s_2^2s_1^2s_2^3s_1^2s_2^3&=&u_1s_2^2s_1^2(as_2^2+bs_2+c+ds_2^{-1}+es_2^{-2})s_1^2s_2^3\\ &\subset&u_1s_2^2s_1^2s_2^2s_1^2s_2^3+u_1s_2\grv s_1^2s_2^3+\underline{u_1s_2^2s_1^4s_2^3} +\underline{\underline{u_1s_2(s_2s_1^2s_2^{-1})s_1^2s_2^3}}+ u_1s_2^2s_1^2s_2^{-2}s_1^2s_2^3. \end{array}}$ However, $u_1s_2\grv s_1^2s_2^3=\underline{\underline{u_1s_2s_1^2\grv s_2^3}}$. Therefore, it remains to prove that $u_1s_2^2s_1^2s_2^2s_1^2s_2^3$ and $u_1s_2^2s_1^2s_2^{-2}s_1^2s_2^3$ are subsets of $U$. We have: $\small{\begin{array}{lcl} u_1s_2^2s_1^2s_2^2s_1^2s_2^3&=&u_1s_2^2s_1^2s_2^2s_1^2(as_2^2+bs_2+c+ds_2^{-1}+es_2^{-2})\\ &\subset&u_1s_2\grv^2s_2+u_1s_2\grv^2+\underline{u_1s_2^2s_1^2s_2^2s_1^2}+ u_1s_2\grv s_2s_1^2s_2^{-1}+u_1s_2\grv s_2s_1^2s_2^{-2}.\\ \end{array}}$ By lemma \ref{oo}(ii) we have that $u_1s_2\grv^2s_2+u_1s_2\grv^2\subset u_1s_2s_1^4s_2s_1^3s_2u_1s_2+U$. However, by lemma \ref{oo}(i) we have $u_1s_2s_1^4(s_2s_1^3s_2u_1s_2)\subset u_1s_2s_1^4(\grv^2u_1+u_1u_2u_1u_2u_1)\subset u_1s_2\grv^2u_1+\underline{\underline{u_1s_2u_1u_2u_1u_2u_1}}$. Using lemma \ref{oo}(ii) once more, we have that $u_1s_2s_1^4(\grv^2u_1+u_1u_2u_1u_2u_1)\subset U$. It remains to prove that $D:=u_1s_2\grv s_2s_1^2s_2^{-1}+u_1s_2\grv s_2s_1^2s_2^{-2}$ is a subset of $U$. We have: $\small{\begin{array}{lcl} D &=& u_1s_2\grv(s_2s_1^2s_2^{-1})+u_1s_2\grv(s_2s_1^2s_2^{-1})s_2^{-1}\\ &=&u_1s_2\grv s_1^{-1}s_2^2s_1+u_1s_2\grv s_1^{-1}s_2^2s_1s_2^{-1}\\ &=&\underline{\underline{u_1s_2s_1^{-1}\grv s_2^2s_1}}+u_1s_2s_1^{-1}\grv s_2^2s_1s_2^{-1}. \end{array}}$ However, we have $u_1s_2s_1^{-1}\grv s_2^2s_1s_2^{-1}=u_1s_2s_1^{-1}\grv s_2(s_2s_1s_2^{-1})=u_1s_2s_1^{-1}s_2s_1(s_1s_2s_1^{-1})s_2s_1=\underline{\underline{u_1s_2s_1^{-1}(s_2s_1s_2^{-1})s_1s_2^2s_1}},$ meaning that $D \subset U$.
Using analogous calculations we will prove that $u_1s_2^2s_1^2s_2^{-2}s_1^2s_2^3\subset U$. We have: $\small{\begin{array}{lcl} u_1s_2^2s_1^2s_2^{-2}s_1^2s_2^3 &=& u_1s_2(s_2s_1^2s_2^{-1})s_2^{-1}s_1^2s_2^3\\ &=&u_1s_2s_1^{-1}s_2^2s_1s_2^{-1}s_1^2(as_2^2+bs_2+c+ds_2^{-1}+es_2^{-2})\\ &\subset&u_1(s_1s_2s_1^{-1})s_2(s_2s_1s_2^{-1})s_1^2s_2^2+ \underline{\underline{u_1s_2s_1^{-1}s_2^2s_1(s_2^{-1}s_1^2s_2)}}+ \underline{ \underline{u_1s_2s_1^{-1}s_2^2s_1s_2^{-1}s_1^2}}+\\&&+ u_1s_2s_1^{-1}s_2(s_2s_1s_2^{-1})s_1^2s_2^{-1}+u_1s_2s_1^{-1}s_2(s_2s_1s_2^{-1})s_1^2s_2^{-2}\\ &\subset&u_1s_2^{-1}(s_1s_2^2s_1^{-1})s_2s_1^3s_2^2 +\underline{\underline{u_1s_2s_1^{-1}s_2s_1^{-1}(s_2s_1^3s_2^{-1})}}+\\&&+u_1s_2s_1^{-2}(s_1s_2s_1^{-1})(s_2s_1^3s_2^{-1})s_2^{-1}+U\\ &\subset&u_1s_2^{-2}s_1^2s_2^2s_1^3s_2^2+u_1s_2s_1^{-2}s_2^{-1}(s_1s_2s_1^{-1})s_2^2(s_2s_1s_2^{-1})+U\\ &\stackrel{ \ref{ll}}{\subset}&\underline{\underline{u_1s_2s_1^{-2}s_2^{-2}(s_1s_2^3s_1^{-1})s_2s_1}}+U. \end{array}}$ \qedhere \end{itemize} \end{proof} \begin{cor}$H_5$ is a free $R_5$-module of rank $r_5=600$ and, therefore, the BMR freeness conjecture holds for the exceptional group $G_{16}$. \end{cor} \begin{proof} By proposition \ref{rp} it will be sufficient to show that $H_5$ is generated as an $R_5$-module by $r_5$ elements. By theorem \ref{thh2}, the definition of $U''''$ and the fact that $u_1u_2u_1=u_1+u_1s_2u_1+u_1s_2^{-1}u_1+u_1s_2^2u_1+u_1s_2^{-2}u_1$ we have that $H_5$ is spanned as a left $u_1$-module by 120 elements. Since $u_1$ is spanned by 5 elements as an $R_5$-module, we have that $H_5$ is spanned over $R_5$ by $r_5=600$ elements. \end{proof} \section{An application: The irreducible representations of $B_3$ of dimension at most 5} \indent In 1999 I. Tuba and H.
Wenzl classified the irreducible representations of the braid group $B_3$ of dimension $k\leq 5$ over an algebraically closed field $K$ of any characteristic (see \cite{tuba}) and, therefore, of $PSL_2(\mathbb{Z})$, since the quotient of $B_3$ by its center is isomorphic to $PSL_2(\mathbb{Z})$. Recalling that $B_3$ is given by generators $s_1$ and $s_2$ that satisfy the relation $s_1s_2s_1=s_2s_1s_2$, we assume that $s_1\mapsto A, s_2\mapsto B$ is an irreducible representation of $B_3$, where $A$ and $B$ are invertible $k\times k$ matrices over $K$ satisfying $ABA=BAB$. I. Tuba and H. Wenzl proved that $A$ and $B$ can be chosen to be in \emph{ordered triangular form}\footnote{Two $k\times k$ matrices are in ordered triangular form if one of them is an upper triangular matrix with eigenvalue $\grl_i$ as $i$-th diagonal entry, and the other is a lower triangular matrix with eigenvalue $\grl_{k+1-i}$ as $i$-th diagonal entry.} with coefficients completely determined by the eigenvalues (for $k\leq3$) or by the eigenvalues and by the choice of a $k$-th root of det$A$ (for $k>3$). Moreover, they proved that such irreducible representations exist if and only if the eigenvalues do not annihilate certain polynomials $P_k$ in the eigenvalues and the chosen $k$-th root of det$A$, which they determined explicitly. At this point a number of questions arise: why should we not expect their methods to work in any dimension beyond 5 (see remark 2.11, 3 in \cite{tuba})? Why do the matrices take this neat form? Remark 2.11, 4 in \cite{tuba} offers an explanation for the nature of the polynomials $P_k$. However, no argument connected with the nature of $P_k$ explains why these polynomials provide a necessary condition for a representation of this form to be irreducible.
In this section we answer these questions by proving this classification of the irreducible representations of the braid group $B_3$ in a different way, as a consequence of the BMR freeness conjecture for the generic Hecke algebras of the finite quotients of $B_3$, which we proved in the previous sections. The fact that there is a connection between the classification of irreducible representations of dimension at most 5 and the finite quotients of $B_3$ had already been suspected by I. Tuba and H. Wenzl (see remark 2.11, 5 in \cite{tuba}). \subsection{Some preliminaries} \indent We set $\tilde{R_k}=\mathbb{Z}[u_1^{\pm1},...,u_k^{\pm1}]$, $k=2,3,4,5$. Let $\tilde{H_k}$ denote the quotient of the group algebra $\tilde{R_k}B_3$ by the relations $(s_i-u_1)...(s_i-u_k)$. In the previous sections we proved that $H_k$ is a free $R_k$-module of rank $r_k$. Hence, $\tilde{H_k}$ is a free $\tilde{R_k}$-module of rank $r_k$ (Lemma 2.3 in \cite{marinG26}). We now assume that $\tilde{H_k}$ has a unique symmetrizing trace $t_k: \tilde{H_k} \rightarrow \tilde{R_k}$ (i.e. a trace function such that the bilinear form $(h, h')\mapsto t_k(hh')$ is non-degenerate) having nice properties (see \cite{bmm}, theorem 2.1): for example, $t_k(1)=1$, which means that $t_k$ specializes to the canonical symmetrizing form on $\mathbb{C} W_k$. Let $\grm_{\infty}$ be the group of all roots of unity in $\mathbb{C}$. We recall that $W_k$ is the finite quotient group $B_3/\langle s_i^k \rangle$, $k=2, 3, 4$ and 5, and we let $K_k$ be the \emph{field of definition} of $W_k$, i.e. the number field contained in $\mathbb{Q}(\grm_{\infty})$ which is generated by the traces of all elements of $W_k$. We denote by $\grm(K_k)$ the group of all roots of unity of $K_k$ and, for every integer $m>1$, we set $\grz_m:=$exp$(2 \grp i/m)$. Let $\mathbf{v}=(v_1,...,v_k)$ be a set of $k$ indeterminates such that, for every $i\in\{1,...,k\}$, we have $v_i^{|\grm(K_k)|}=\grz_k^{-i}u_i$.
By extension of scalars we obtain a $\mathbb{C}(\mathbf{v})$-algebra $\mathbb{C}(\mathbf{v})\tilde{H_k}:=\tilde{H_k}\otimes_{\tilde{R_k}}\mathbb{C}(\mathbf{v})$, which is split semisimple (see \cite{malle1}, theorem 5.2). Since the algebra $\mathbb{C}(\mathbf{v})\tilde{H_k}$ is split, by Tits' deformation theorem (see theorem 7.4.6 in \cite{geck}) the specialization $v_i \mapsto 1$ induces a bijection Irr$(\mathbb{C}(\mathbf{v})\tilde{H_k})\rightarrow$ Irr$(W_k)$. \begin{ex} Let $W_4:=G_{8}$. The field of definition of $G_{8}$ is $K_4:=\mathbb{Q}(i)$ (one can find this field in Appendix A, table A.1 in \cite{brouebook}). Since $|\grm(K_4)|=4$ we set $v_1^{4}:=i^{-1}u_1=-iu_1$, $v_2^4:=i^{-2}u_2=-u_2$, $v_3^4:=i^{-3}u_3=iu_3$ and $v_4^4:=i^{-4}u_4=u_4$. The theorem of G. Malle we mentioned above states that the algebra $\mathbb{C}(v_1,v_2,v_3,v_4)\tilde{H_4}$ is split semisimple and, hence, its irreducible characters are in bijection with the irreducible characters of $G_8$. \end{ex} Let $\varrho: B_3 \rightarrow GL_k(\mathbb{C})$ be an irreducible representation of $B_3$ of dimension $k\leq 5$. We set $A:=\varrho(s_1)$ and $B:=\varrho(s_2)$. The matrices $A$ and $B$ are similar, since $s_1$ and $s_2$ are conjugate $(s_2=(s_1s_2)s_1(s_1s_2)^{-1})$. Hence, by the Cayley--Hamilton theorem, there is a monic polynomial $m(X)=X^k+m_{k-1}X^{k-1}+...+m_1X+m_0\in \mathbb{C}[X]$ of degree $k$ such that $m(A)=m(B)=0$. Let $R^k_K$ denote the integral closure of $R_k$ in $K_k$. We fix a \emph{specialization} $\gru: R^k_K\rightarrow \mathbb{C}$ of $R^k_K$, defined by $u_i\mapsto \grl_i$, where the $\grl_i$ are the eigenvalues of $A$ (and $B$). We notice that $\gru$ is well-defined, since $m_0=(-1)^k$det$A\in \mathbb{C}^{\times}$. Therefore, in order to determine $\varrho$ it will be sufficient to describe the irreducible $\mathbb{C}\tilde{ H_k}:=\tilde{H_k}\otimes_{\gru}\mathbb{C}$-modules of dimension $k$.
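The root-of-unity bookkeeping in the $G_8$ example above involves nothing more than powers of $i$; the following sketch (plain Python, independent of the paper's notation) verifies the four coefficients $\grz_4^{-i}$ appearing in $v_i^4=\grz_4^{-i}u_i$:

```python
import cmath

# zeta_4 = exp(2*pi*i/4) = i, the primitive 4th root of unity from the example
zeta4 = cmath.exp(2j * cmath.pi / 4)

# Expected coefficients of u_i in v_i^4 = zeta_4^{-i} u_i, for i = 1, 2, 3, 4
expected = [-1j, -1, 1j, 1]
for i, coeff in enumerate(expected, start=1):
    assert abs(zeta4 ** (-i) - coeff) < 1e-12
```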
When the algebra $\mathbb{C}\tilde H_k$ is semisimple, we can again use Tits' deformation theorem and we have a canonical bijection between the set of irreducible characters of $\mathbb{C}\tilde{H_k}$ and the set of irreducible characters of $\mathbb{C}(\mathbf{v})\tilde{H_k}$, which are in bijection with the irreducible characters of $W_k$. However, this is not always the case. In order to determine the irreducible representations of $\mathbb{C}\tilde H_k$ in the general case (when we do not know a priori that $\mathbb{C}\tilde H_k$ is semisimple) we use a different approach. Let $R_0^{+}\big(\mathbb{C}(\mathbf{v})\tilde{H_k}\big)$ (respectively $R_0^{+}(\mathbb{C}\tilde{ H_k})$) denote the subset of the \emph{Grothendieck group} of the category of finite dimensional $\mathbb{C}(\mathbf{v})\tilde{H_k}$ (respectively $\mathbb{C}\tilde{H_k}$)-modules consisting of the elements $[V]$, where $V$ is a $\mathbb{C}(\mathbf{v})\tilde{H_k}$ (respectively $\mathbb{C}\tilde{H_k}$)-module (for more details, one may refer to \textsection 7.3 in \cite{geck}). By theorem 7.4.3 in \cite{geck} we obtain a well-defined decomposition map $$d_{\gru}: R_0^{+}\big(\mathbb{C}(\mathbf{v})\tilde{H_k}) \rightarrow R_0^{+}(\mathbb{C}\tilde{ H_k}).$$ The corresponding \emph{decomposition matrix} is the Irr$\big(\mathbb{C}(\mathbf{v})\tilde{H_k}\big)\times$ Irr$(\mathbb{C}\tilde{ H_k})$ matrix $(d_{\grx\grf})$ with non-negative integer entries such that $d_{\gru}([V_{\grx}])=\sum\limits_{\grf}d_{\grx\grf}[V'_{\grf}]$, where $V_{\grx}$ is an irreducible $\mathbb{C}(\mathbf{v})\tilde{H_k}$-module with character $\grx$ and $V'_{\grf}$ is an irreducible $\mathbb{C} \tilde{H_k}$-module with character $\grf$. This matrix records how the irreducible representations of the semisimple algebra $\mathbb{C}(\mathbf{v})\tilde{H_k}$ break up into irreducible representations of $\mathbb{C}\tilde{ H_k}$.
The form of the decomposition matrix is controlled by the \emph{Schur elements} $s_{\grx}$, $\grx \in$ Irr$\big(\mathbb{C}(\mathbf{v})\tilde{H_k}\big)$, associated with the symmetrizing form $t_k$. The Schur elements belong to $R^k_K$ (see \cite{geck}, Proposition 7.3.9) and they depend only on the symmetrizing form $t_k$ and the isomorphism class of the representation. M. Chlouveraki has shown that the Schur elements are products of cyclotomic polynomials over $K_k$ evaluated on monomials of degree 0 (see theorem 4.2.5 in \cite{chlouverakibook}). In the following section we are going to use these elements in order to determine the irreducible representations of $\mathbb{C}\tilde H_k$ (for more details about the definition and the properties of the Schur elements, one may refer to \S 7.2 in \cite{geck}). We say that the $\mathbb{C}(\mathbf{v})\tilde{H_k}$-modules $V_{\grx}, V_{\grc}$ \emph{belong to the same block} if the corresponding characters $\grx, \grc$ label the rows of the same block in the decomposition matrix $(d_{\grx\grf})$ (by definition, this means that there is a $\grf\in\text{Irr}(\mathbb{C}\tilde H_k)$ such that $d_{\grx,\grf}\not=0\not=d_{\grc,\grf}$). If an irreducible $\mathbb{C}(\mathbf{v})\tilde{H_k}$-module is alone in its block, then we call it a \emph{module of defect 0}. Motivated by the idea of M. Chlouveraki and H. Miyachi in \cite{chlouveraki} \textsection 3.1, we use the following criteria in order to determine whether two modules belong to the same block: \begin{itemize} \item We have $\gru(s_{\grx})\not =0$ if and only if $V_{\grx}$ is a module of defect 0 (see \cite{maller}, Lemma 2.6).
This criterion, together with theorem 7.5.11 in \cite{geck}, states that $V_{\grx}$ is a module of defect 0 if and only if the decomposition matrix is of the form $$\begin{pmatrix} *& \dots&*&0 &*&\dots&*\\ \vdots& \dots& \vdots&\vdots &\vdots& \dots&\vdots\\ *& \dots&*&0 &*&\dots&*\\ 0& \dots& 0&1&0&\dots&0\\ *& \dots&*&0 &*&\dots&*\\ \vdots& \dots&\vdots &\vdots& \vdots& \dots&\vdots\\ *& \dots&*&0 &*&\dots&* \end{pmatrix}$$ \item If $V_{\grx}, V_{\grc}$ are in the same block, then $\gru(\grv_{\grx}(z_0))=\gru(\grv_{\grc}(z_0))$ (see \cite{geck}, Lemma 7.5.10), where $\grv_{\grx}, \grv_{\grc}$ are the corresponding \emph{central characters}\footnote{If $z$ lies in the center of $\mathbb{C}(\mathbf{v})\tilde{H_k}$ then Schur's lemma implies that $z$ acts as a scalar on each of $V_{\grx}$ and $V_{\grc}$. We denote these scalars by $\grv_{\grx}(z)$ and $\grv_{\grc}(z)$ and we call the associated $\mathbb{C}(\mathbf{v})$-homomorphisms $\grv_{\grx},\grv_{\grc}: Z\big(\mathbb{C}(\mathbf{v})\tilde{H_k}\big)\rightarrow \mathbb{C}(\mathbf{v})$ central characters (for more details, see \cite{geck} page 227).} and $z_0$ is the central element $(s_1s_2)^3$. \end{itemize} \subsection{The irreducible representations of $B_3$} \indent We recall that in order to describe the irreducible representations of $B_3$ of dimension at most 5, it is enough to describe the irreducible $\mathbb{C}\tilde{H_k}$-modules of dimension $k$. Let $S$ be an irreducible $\mathbb{C} \tilde{H_k}$-module of dimension $k$ and let $s\in S$ with $s\not=0$. The morphism $f_{s}: \mathbb{C} \tilde{H_k}\rightarrow S$ defined by $h\mapsto hs$ is surjective, since $S$ is irreducible. Hence, by the definition of the Grothendieck group we have that $d_{\gru}\big([\mathbb{C}(\mathbf{v})\tilde{H_k}]\big)=[\mathbb{C} \tilde{H_k}]=[\ker f_{s}]+[S]$.
However, since $\mathbb{C}(\mathbf{v})\tilde{H_k}$ is semisimple we have $\mathbb{C}(\mathbf{v})\tilde{H_k}=M_1\oplus...\oplus M_r$, where the $M_i$ are (up to isomorphism) all the simple $\mathbb{C}(\mathbf{v})\tilde{H_k}$-modules (with redundancies). Therefore, we have $\sum_{i=1}^{r}d_{\gru}([M_i])=[\ker f_{s}]+[S].$ Hence, there is a simple $\mathbb{C}(\mathbf{v})\tilde{H_k}$-module $M$ such that \begin{equation}d_{\gru}([M])=[S]+[J],\label{eqqq}\end{equation} where $J$ is a $\mathbb{C} \tilde{H_k}$-module. \begin{rem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize} \item[(i)] The $\mathbb{C}(\mathbf{v})\tilde{H_k}$-module $M$ is of dimension at least $k$. \item[(ii)] If $J$ is of dimension 1, there is a $\mathbb{C}(\mathbf{v})\tilde{H_k}$-module $N$ of dimension 1 such that $d_{\gru}([N])=[J]$. This result comes from the fact that the one-dimensional $\mathbb{C} \tilde H_k$-modules are of the form $(\grl_i)$ and, by definition, $\grl_i=\gru(u_i)$. \end{itemize} \label{brrrr} \end{rem} The irreducible $\mathbb{C}(\mathbf{v})\tilde{H_k}$-modules are known (see \cite{mallem} or \cite{brouem} \textsection 5B and \textsection 5D, for $n=3$ and $n=4$, respectively). Therefore, we can determine $S$ by using (\ref{eqqq}) and a case-by-case analysis. \begin{itemize} \item \underline{$k=2$} : Since $\tilde{H_2}$ is the generic Hecke algebra of $\mathfrak{S}_3$, which is a Coxeter group, the irreducible representations of $\mathbb{C} \tilde{H_2}$ are well-known; we have two irreducible representations of dimension 1 and one of dimension 2. By $(\ref{eqqq})$ and remark \ref{brrrr}(i), $M$ must be the irreducible $\mathbb{C}(\mathbf{v})\tilde{H_k}$-module of dimension 2 and $(\ref{eqqq})$ becomes $[S]=d_{\gru}([M])$.
Hence, we have: $$A=\begin{bmatrix} \begin{array}{rr} \grl_1&\grl_1\\ 0&\grl_2 \end{array} \end{bmatrix},\; B=\begin{bmatrix} \begin{array}{rr} \grl_2&0\\ -\grl_2&\grl_1\\ \end{array} \end{bmatrix}$$ Moreover, $[S]=d_{\gru}([M])$ is irreducible and $M$ is the only irreducible $\mathbb{C}(\mathbf{v})\tilde{H_k}$-module of dimension 2. As a result, $M$ has to be alone in its block i.e. $\gru(s_{\grx})\not=0$, where $\grx$ is the character that corresponds to $M$. Therefore, an irreducible representation of $B_3$ of dimension 2 can be described by the explicit matrices $A$ and $B$ we have above, depending only on a choice of $\grl_1, \grl_2$ such that $\gru(s_{\grx})=\grl_1^2-\grl_1\grl_2+\grl_2^2\not=0$. \item \underline{$k=3$} : Since the algebra $\mathbb{C}(\mathbf{v})\tilde{H_3}$ is split semisimple, we have a bijection between the set Irr$(\mathbb{C}(\mathbf{v})\tilde{H_3})$ and the set Irr$(W_3)$, as we explained in the previous section. We refer to J. Michel's version of CHEVIE package of GAP3 (see \cite{michelgap}) in order to find the irreducible characters of $W_3$. We type: \begin{verbatim} gap> W_3:=ComplexReflectionGroup(4); gap> CharNames(W_3); [ "phi{1,0}", "phi{1,4}", "phi{1,8}", "phi{2,5}", "phi{2,3}", "phi{2,1}", "phi{3,2}" ] \end{verbatim} We have 7 irreducible characters $\grf_{i,j}$, where $i$ is the dimension of the representation and $j$ the valuation of its fake degree (see \cite{malle1} \textsection 6A). Since $S$ is of dimension 3, the equation $(\ref{eqqq})$ becomes $[S]=d_{\gru}([M])$, where $M$ is the irreducible $\mathbb{C}(\mathbf{v})\tilde{H_3}$-module that corresponds to the character $\grf_{3,2}$ (see remark \ref{brrrr}(i)). 
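As a sanity check, the $2\times 2$ matrices $A$ and $B$ displayed above for the case $k=2$ satisfy the braid relation $ABA=BAB$ identically in $\grl_1,\grl_2$, and the displayed Schur-element value governs irreducibility. A minimal numerical sketch (plain Python; the sample eigenvalues are arbitrary choices, not from the paper):

```python
def mat_mul(X, Y):
    # Multiply two 2x2 matrices given as nested lists.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def braid_check(l1, l2):
    # The k = 2 matrices from the text, with eigenvalues l1, l2.
    A = [[l1, l1], [0, l2]]
    B = [[l2, 0], [-l2, l1]]
    ABA = mat_mul(mat_mul(A, B), A)
    BAB = mat_mul(mat_mul(B, A), B)
    return ABA == BAB

# The condition nu(s_chi) != 0 from the text
schur = lambda l1, l2: l1**2 - l1 * l2 + l2**2

assert braid_check(2, 3)   # braid relation holds for sample eigenvalues
assert schur(2, 3) != 0    # sample eigenvalues give an irreducible representation
```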
However, we have explicit matrix models for this representation (see \cite{brouem}, \textsection 5D, or the CHEVIE package of GAP3 again) and since $[S]=d_{\gru}([M])$ we have: $$A=\begin{bmatrix} \grl_3&0&0\\ \grl_1\grl_3+\grl_2^2& \grl_2& 0\\ \grl_2& 1&\grl_1 \end{bmatrix},\; B=\begin{bmatrix} \grl_1& -1&\grl_2\\ 0&\grl_2&-\grl_1\grl_3-\grl_2^2\\ 0&0&\grl_3 \end{bmatrix}.$$ Since $M$ is the only irreducible $\mathbb{C}(\mathbf{v})\tilde{H_3}$-module of dimension 3, as in the previous case $k=2$ we must have that $\gru(s_{\grf_{3,2}})\not=0$. The Schur element $s_{\grf_{3,2}}$ has been determined in \cite{malle2} and the condition $\gru(s_{\grf_{3,2}})\not=0$ becomes \begin{equation}\gru(s_{\grf_{3,2}})=\frac{(\grl_1^2+\grl_2\grl_3)(\grl_2^2+\grl_1\grl_3)(\grl_3^2+\grl_1\grl_2)}{(\grl_1\grl_2\grl_3)^2}\not=0.\label{tt1}\end{equation} To sum up, an irreducible representation of $B_3$ of dimension 3 can be described by the explicit matrices $A$ and $B$ we gave above, depending only on a choice of $\grl_1, \grl_2, \grl_3$ such that (\ref{tt1}) is satisfied. \item \underline{$k=4$} : We use again the CHEVIE package of GAP3 in order to find the irreducible characters of $W_4$. In this case we have 16 irreducible characters, among which 2 are of dimension 4: the characters $\grf_{4,5}$ and $\grf_{4,3}$ (we follow again the GAP3 notation, as in the case $k=3$). Hence, by remark \ref{brrrr}(i) and relation $(\ref{eqqq})$, we have $[S]=d_{\gru}([M])$, where $M$ is the irreducible $\mathbb{C}(\mathbf{v})\tilde{H_4}$-module that corresponds either to the character $\grf_{4,5}$ or to the character $\grf_{4,3}$.
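In the same spirit as for $k=2$, one can check numerically that the explicit $3\times 3$ matrices displayed above for the case $k=3$ satisfy the braid relation. A short sketch in plain Python (the sample eigenvalues are arbitrary):

```python
def mat_mul(X, Y):
    # Multiply two square matrices given as nested lists.
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# The k = 3 matrices from the text, for arbitrary sample eigenvalues.
l1, l2, l3 = 2, 3, 5
A = [[l3, 0, 0],
     [l1 * l3 + l2**2, l2, 0],
     [l2, 1, l1]]
B = [[l1, -1, l2],
     [0, l2, -(l1 * l3 + l2**2)],
     [0, 0, l3]]

ABA = mat_mul(mat_mul(A, B), A)
BAB = mat_mul(mat_mul(B, A), B)
assert ABA == BAB  # the braid relation s_1 s_2 s_1 = s_2 s_1 s_2
```

Since the entries are polynomial in the eigenvalues, agreement for generic sample values is good evidence that the relation holds identically.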
We have again explicit matrix models for these representations (see \cite{brouem}, \textsection 5B, where we multiply the matrices described there by a scalar $t$ and we set $u_1=t, u_2=tu, u_3=tv$ and $u_4=tw$): $$A=\begin{bmatrix} \grl_1&0&0&0\\ \\ \frac{\grl_1^2}{\grl_2}&\grl_2& 0& 0\\\\ \frac{\grl_1^3}{r}&\frac{\grl_1\grl_2\grl_3-\grl_1r}{r}& \grl_3& 0\\\\ -\grl_2&\grl_2\gra&\frac{r\gra}{\grl_1^2}&\grl_4 \end{bmatrix},\; B=\begin{bmatrix} \grl_4&\grl_3\gra&\frac{\grl_2\grl_3\gra}{\grl_1}&-\frac{\grl_2\grl_3^2}{r}\\\\ 0&\grl_3&\frac{\grl_2\grl_3-r}{\grl_1}&\frac{\grl_1^2\grl_3}{r}\\\\ 0&0&\grl_2&\frac{\grl_1^3}{r}\\\\ 0&0&0&\grl_1 \end{bmatrix},$$ where $r:=\pm\sqrt{\grl_1\grl_2\grl_3\grl_4}$ and $\gra:=\frac{r-\grl_2\grl_3-\grl_1\grl_4}{\grl_1^2}$. Since $d_{\gru}([M])$ is irreducible either $M$ is of defect 0 or it is in the same block as the other irreducible module of dimension 4 i.e. $\gru(\grv_{\grf_{4,5}}(z_0))=\gru(\grv_{\grf_{4,3}}(z_0))$. We use the program GAP3 package CHEVIE in order to calculate these central characters. More precisely, we have 16 representations where the last 2 are of dimension 4. These representations will be noted in GAP3 as $\verb+R[15]+$ and $\verb+R[16]+$. Since $z_0=(s_1s_2)^3$ we need to calculate the matrices $R[i](s_1s_2s_1s_2s_1s_2), i=15, 16$. 
These are the matrices $\verb+Product(R[15]{[1,2,1,2,1,2]})+$ and $\verb+Product(R[16]{[1,2,1,2,1,2]})+$, in GAP3 notation, as we can see below: \begin{verbatim} gap> R:=Representations(H_4);; gap> Product(R[15]{[1,2,1,2,1,2]}); [ [ u_1^3/2u_2^3/2u_3^3/2u_4^3/2, 0, 0, 0 ], [ 0, u_1^3/2u_2^3/2u_3^3/2u_4^3/2, 0, 0 ], [ 0, 0, u_1^3/2u_2^3/2u_3^3/2u_4^3/2, 0 ], [ 0, 0, 0, u_1^3/2u_2^3/2u_3^3/2u_4^3/2] ] gap> Product(R[16]{[1,2,1,2,1,2]}); [ [ -u_1^3/2u_2^3/2u_3^3/2u_4^3/2, 0, 0, 0 ], [ 0, -u_1^3/2u_2^3/2u_3^3/2u_4^3/2, 0, 0 ], [ 0, 0, -u_1^3/2u_2^3/2u_3^3/2u_4^3/2, 0 ], [ 0, 0, 0, -u_1^3/2u_2^3/2u_3^3/2u_4^3/2 ] ] \end{verbatim} We have that $\gru(\grv_{\grf_{4,5}}(z_0))=-\gru(\grv_{\grf_{4,3}}(z_0))$, which means that $M$ is of defect zero, i.e. $\gru(s_{\grf_{4,i}})\not=0$, where $i=3$ or 5. The Schur elements $s_{\grf_{4,i}}$ have been determined in \cite{malle2} \textsection 5.10, hence we must have \begin{equation}\gru(s_{\grf_{4,i}})=\frac{-2r\prod\limits_{p=1}^4(r+\grl_p^2)\prod\limits_{j,l}(r+\grl_j\grl_l+\grl_s\grl_t)}{(\grl_1\grl_2\grl_3\grl_4)^4}\not=0, \text {where } \{j,l,s,t\}=\{1,2,3,4\} \label{tt2} \end{equation} Therefore, an irreducible representation of $B_3$ of dimension 4 can be described by the explicit matrices $A$ and $B$ depending only on a choice of $\grl_1, \grl_2, \grl_3, \grl_4$ and a square root of $\grl_1\grl_2\grl_3\grl_4$ such that (\ref{tt2}) is satisfied. \item \underline{$k=5$} : In this case, compared to the previous ones, we have two possibilities for $S$. The reason is that there are irreducible characters of dimension 6 as well as of dimension 5. Therefore, by remark \ref{brrrr}(i) and (ii) and (\ref{eqqq}) we either have $d_{\gru}([M])=[S]$, where $M$ is some irreducible $\mathbb{C}(\mathbf{v})\tilde{H_5}$-module of dimension 5, or $d_{\gru}([N])=[S]+d_{\gru}([N'])$, where $N, N'$ are some irreducible $\mathbb{C}(\mathbf{v})\tilde{H_5}$-modules of dimension 6 and 1, respectively.
In order to exclude the latter case, it is enough to show that $N$ and $N'$ are not in the same block. Therefore, at this point, we may assume that $\gru(\grv_{\grx}(z_0))\not=\gru(\grv_{\grc}(z_0))$, for every irreducible character $\grx, \grc$ of $W_5$ of dimension 6 and 1, respectively. We use GAP3 in order to calculate the central characters, as we did in the case $k=4$, and we have: $\gru(\grv_{\grc}(z_0))=\grl_i^6$, $i\in \{1,...,5\}$ and $\gru(\grv_{\grx}(z_0))=-x^2yztw$, where $\{x,y,z,t,w\}=\{\grl_1, \grl_2, \grl_3, \grl_4, \grl_5\}$. We notice that $\gru(\grv_{\grx}(z_0))=-\grl_j$det$A$, $j\in \{1,...,5\}$. Therefore, the assumption $\gru(\grv_{\grx}(z_0))\not=\gru(\grv_{\grc}(z_0))$ becomes det$A\not=-\grl_i^6\grl_j^{-1}$, $i,j \in\{1,2,3,4,5\}$, where $i, j$ are not necessarily distinct. By this assumption we have that $d_{\gru}([M])=[S]$, where $M$ is some irreducible $\mathbb{C}(\mathbf{v})\tilde{H_5}$-module of dimension 5. We have again explicit matrix models for these representations (see \cite{mallem} or the CHEVIE package of GAP3). We notice that these matrices depend only on the choice of eigenvalues and of a fifth root of det$A$. Since $d_{\gru}([M])$ is irreducible, either $M$ is of defect 0 or it is in the same block as another irreducible module of dimension 5. However, since the central characters of the irreducible modules of dimension 5 are distinct fifth roots of $(u_1u_2u_3u_4u_5)^{6}$, we can exclude the latter case. Hence, $M$ is of defect zero, i.e. $\gru(s_{\grf})\not=0$, where $\grf$ is the character that corresponds to $M$. The Schur elements have been determined in \cite{malle2} (see also Appendix A.3 in \cite{chlouverakibook}) and one can also find them in the CHEVIE package of GAP3; they are $$\frac{5\prod\limits_{i=1}^5(r+u_i)(r-\grz_3u_i)(r-\grz_3^2u_i)\prod\limits_{i\not=j}(r^2+u_iu_j)}{(u_1u_2u_3u_4u_5)^7},$$ where $r$ is a 5th root of $u_1u_2u_3u_4u_5$.
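In this Schur element, each pair of factors $(r-\grz_3u_i)(r-\grz_3^2u_i)$ equals $r^2+u_ir+u_i^2$, since $\grz_3+\grz_3^2=-1$ and $\grz_3\cdot\grz_3^2=1$. A quick numerical check of this identity (plain Python, arbitrary sample values):

```python
import cmath

zeta3 = cmath.exp(2j * cmath.pi / 3)  # primitive cube root of unity

# Check (r + u)(r - zeta3*u)(r - zeta3^2*u) == (r + u)(r^2 + u*r + u^2)
# for a few arbitrary sample values of r and u.
for r, u in [(1.5, 2.0), (0.3, -1.2), (2.0, 0.7)]:
    lhs = (r + u) * (r - zeta3 * u) * (r - zeta3**2 * u)
    rhs = (r + u) * (r**2 + u * r + u**2)
    assert abs(lhs - rhs) < 1e-9
```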
However, due to the assumption det$A\not=-\grl_i^5$, $i \in\{1,2,3,4,5\}$ (the case where $i=j$), we have that $\gru(r)+\grl_i \not =0$. Therefore, the condition $\gru(s_{\grf})\not=0$ becomes \begin{equation}\prod\limits_{i=1}^5(\tilde{r}^2+\grl_i\tilde{r}+\grl_i^2)\prod\limits_{i\not=j}(\tilde{r}^2+\grl_i\grl_j)\not=0, \label{oua} \end{equation} where $\tilde{r}$ is a fifth root of det$A$. Therefore, an irreducible representation of $B_3$ of dimension 5 can be described by the explicit matrices $A$ and $B$, which one can find for example in the CHEVIE package of GAP3, depending only on a choice of $\grl_1, \grl_2, \grl_3, \grl_4, \grl_5$ and a fifth root of det$A$ such that (\ref{oua}) is satisfied. \end{itemize} \begin{rem} \begin{enumerate} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \item We can generalize our results to representations of $B_3$ over a field of positive characteristic, using similar arguments. However, the cases where $k=4$ and $k=5$ need some extra analysis. For $k=4$ we have two irreducible $\mathbb{C}(\mathbf{v})\tilde{H_4}$-modules of dimension 4, which are not in the same block in any characteristic other than 2. However, in characteristic 2 these two modules coincide and, therefore, we obtain an irreducible module of $B_3$ which is of defect 0; hence we arrive at the same result as in characteristic 0. Exactly the same argument applies to the case where $k=5$ over a field of characteristic 5. \item The irreducible representations of $B_3$ of dimension at most 5 have been classified in \cite{tuba}. Using a new framework, we arrived at the same results. The matrices $A$ and $B$ described by Tuba and Wenzl are the same (up to equivalence) as the matrices we provide in this paper. For example, in the case where $k=3$, we have given explicit matrices $A$ and $B$.
If we take the matrices $DAD^{-1}$ and $DBD^{-1}$, where $D$ is the invertible matrix $$D=\begin{bmatrix} -\grl_1\grl_2-\grl_3^2&\grl_1(\grl_3-\grl_1)&(\grl_2-\grl_3)(\grl_3-\grl_1)\\ (\grl_2-\grl_1)(\grl_3^2+\grl_1\grl_2)& \grl_1(2\grl_2\grl_1-\grl_1^2+2\grl_1\grl_3-\grl_3\grl_2)& (\grl_1-\grl_3)(\grl_2^2+\grl_1\grl_3)\\ 0& \grl_1(\grl_1-\grl_3)&-\grl_3\grl_1(\grl_1+\grl_2) \end{bmatrix},$$ we obtain precisely the matrices determined in \cite{tuba} (the matrix $D$ is invertible, since det$D=\grl_1(\grl_1^2+\grl_2\grl_3)(\grl_3^2+\grl_1\grl_2)^2\not=0$ due to (\ref{eqqq})). \end{enumerate} \end{rem} ~ \chapter{The approach of Etingof-Rains: The exceptional groups of rank 2} In this chapter we explain in detail the arguments that Etingof and Rains used in order to prove a weak version of the BMR freeness conjecture for the exceptional groups of rank 2. This weak version states that the generic Hecke algebra $H$ associated to an exceptional group of rank 2 is finitely generated as an $R$-module (where $R$ is the Laurent polynomial ring over which we define $H$). Their approach also explains the appearance of the center of $B_3$ in the description of the basis of the generic Hecke algebra associated to $G_4$, $G_8$ and $G_{16}$ (see theorem 3.2 (3) in \cite{marincubic} and theorems \ref{th} and \ref{thh2}). \section{The exceptional groups of rank 2} \indent In this section we mostly follow Chapter 6 of \cite{lehrer}. Let $W$ be an irreducible complex reflection group of rank 2, meaning that $W$ is one of the groups $G_4,\dots, G_{22}$. We know that $W\leq U_2(\mathbb{C})$, where $U_2(\mathbb{C})$ denotes the unitary group of degree 2 (see remark \ref{unitary}). For every $w\in W$ we choose $\grl_w\in \mathbb{C}$ such that $\grl_w^2=\text{det}(w)$ and we set $$\widehat W:=\{\pm \grl_w^{-1}w;w\in W\}.$$ By definition, $\widehat W\leq SU_2(\mathbb{C})$, where $SU_2(\mathbb{C})$ denotes the special unitary group of degree 2.
Therefore, $\widehat W/Z(\widehat W)\leq SU_2(\mathbb{C})/Z(SU_2(\mathbb{C}))\simeq S^3/\{\pm1\}$, where $S^3$ denotes the group of quaternions of norm 1. Since $S^3/\{\pm1\}$ is isomorphic to the three-dimensional rotation group $SO_3(\mathbb{R})$, we may consider $\widehat W/Z(\widehat W)$ as a subgroup of the latter. We can also assume that $W/Z(W)\leq SO_3(\mathbb{R})$, since by construction $W/Z(W)\simeq \widehat W/Z(\widehat W)$. Any finite subgroup of $SO_3(\mathbb{R})$ is either a cyclic group, a dihedral group $D_n$ or the rotation group of a Platonic solid, that is, the tetrahedral, octahedral or icosahedral group (for the classification of the finite subgroups of $SO_3(\mathbb{R})$ one may refer to theorem 5.13 of \cite{lehrer}). Since $W$ is irreducible we may exclude the case of the cyclic group. The case of the dihedral group falls into the case of the infinite family $G(de,e,2)$. Hence, it remains to examine the cases of the tetrahedral, octahedral and icosahedral groups. Using the Shephard-Todd notation we have the following three families (see table \ref{t1} below), according to whether $W/Z(W)$ is the tetrahedral, octahedral or icosahedral group (for more details one may refer to Chapter 6 of \cite{lehrer}); the first family, known as \emph{the tetrahedral family}, includes the groups $G_4,\dots, G_7$, the second one, known as the \emph{octahedral family}, includes the groups $G_8,\dots, G_{15}$ and the last one, known as the \emph{icosahedral family}, includes the rest of them, namely the groups $G_{16},\dots, G_{22}$. In each family there is a maximal group of order $|W/Z(W)|^2$ and all the other groups are its subgroups. The groups belonging to these three families are known as \emph{the exceptional groups of rank 2}.
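The orders of the three rotation groups in question can be checked directly. The following Python sketch (an illustration, not part of the original argument; GAP3 would serve equally well) realises the tetrahedral, octahedral and icosahedral groups as the permutation groups $Alt(4)$, $Sym(4)$ and $Alt(5)$, generates each by closing a set of generators, and recovers the orders $12$, $24$ and $60$:

```python
def closure(gens, n):
    """Generate the permutation group (tuples acting on range(n))
    spanned by the given generators, by closing under composition."""
    identity = tuple(range(n))
    group = {identity}
    frontier = [identity]
    while frontier:
        g = frontier.pop()
        for h in gens:
            gh = tuple(g[h[i]] for i in range(n))  # apply h, then g
            if gh not in group:
                group.add(gh)
                frontier.append(gh)
    return group

# Rotation groups of the Platonic solids as permutation groups:
# tetrahedral = Alt(4), octahedral = Sym(4), icosahedral = Alt(5).
tetra = closure([(1, 2, 0, 3), (0, 2, 3, 1)], 4)        # two 3-cycles
octa  = closure([(1, 2, 3, 0), (1, 0, 2, 3)], 4)        # 4-cycle, transposition
icosa = closure([(1, 2, 3, 4, 0), (1, 2, 0, 3, 4)], 5)  # 5-cycle, 3-cycle

print(len(tetra), len(octa), len(icosa))  # 12 24 60
```

The maximal groups $G_7$, $G_{11}$ and $G_{19}$ then have orders $12^2=144$, $24^2=576$ and $60^2=3600$, consistent with the statement above that the maximal group in each family has order $|W/Z(W)|^2$.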
\begin{table}[!ht] \begin{center} \small \caption{ \bf{The three families}} \label{t1} \scalebox{0.98} {\begin{tabular}{|c|c|c|} \hline $W/Z(W)$& $W$& Maximal Group \\ \hline $\begin{array}[t]{lcl} \\\text{Tetrahedral Group } \mathcal{T}\simeq Alt(4)\\ \\ \langle a,b,c\;|\;a^2=b^3=c^3=1, abc=1\rangle\end{array}$ & $\begin{array}[t]{lcl}\\G_4,\dots,G_7\end{array}$ & $\begin{array}[t]{lcl}\\ G_7= \langle a,b,c\;|\;a^2=b^3=c^3=1, abc=\text{central }\rangle \end{array}$\\ \hline \hline $\begin{array}[t]{lcl} \\\text{Octahedral Group } \mathcal{O}\simeq Sym(4)\\\\ \langle a,b,c\;|\;a^2=b^3=c^4=1, abc=1\rangle\end{array}$ &$\begin{array}[t]{lcl}\\ G_8,\dots,G_{15} \end{array}$& $\begin{array}[t]{lcl}\\ G_{11}= \langle a,b,c\;|\;a^2=b^3=c^4=1, abc=\text{central }\rangle \end{array}$\\ \hline \hline $\begin{array}[t]{lcl} \\\text{Icosahedral Group } \mathcal{I}\simeq Alt(5)\\\\ \langle a,b,c\;|\;a^2=b^3=c^5=1, abc=1\rangle\end{array}$ &$\begin{array}[t]{lcl}\\ G_{16},\dots,G_{22} \end{array}$& $\begin{array}[t]{lcl}\\ G_{19}= \langle a,b,c\;|\;a^2=b^3=c^5=1, abc=\text{central }\rangle \end{array}$\\ \hline \end{tabular}} \end{center} \end{table} We know that for every exceptional group of rank 2 we have a Coxeter-like presentation (see remark \ref{coxeterlike}(ii)); that is a presentation of the form $$\langle s\in S\;|\; \{v_i=w_i\}_{i\in I} , \{s^{e_s}=1\}_{s\in S}\rangle,$$ where $S$ is a finite set of distinguished reflections and $I$ is a finite set of relations such that, for each $i\in I$, $v_i$ and $w_i$ are positive words with the same length in elements of $S$. 
We also know that for the associated complex braid group $B$ we have an Artin-like presentation (see theorem \ref{Presentt}); that is, a presentation of the form $$\langle \mathbf{s}\in \mathbf{S}\;|\; \mathbf{v_i}=\mathbf{w_i} \rangle_{i\in I},$$ where $\mathbf{S}$ is a finite set of distinguished braided reflections and $I$ is a finite set of relations such that, for each $i\in I$, $\mathbf{v_i}$ and $\mathbf{w_i}$ are positive words in elements of $\mathbf{S}$. We call these presentations \emph{the BMR presentations}, after M. Brou\'e, G. Malle and R. Rouquier. In 2006 P. Etingof and E. Rains gave different presentations of $W$ and $B$, based on the BMR presentations associated to the maximal groups $G_7$, $G_{11}$ and $G_{19}$ (see \textsection 6.1 of \cite{ERrank2}). We call these presentations \emph{the ER presentations}. In tables \ref{t3} and \ref{t2} we give the two presentations for every $W$ and $B$, as well as the isomorphisms between the BMR and ER presentations. In Appendix A we also prove that we indeed have such isomorphisms. Notice that for the maximal groups the ER presentations coincide with the BMR presentations. In the following section we will explain how Etingof and Rains used the ER presentations in order to prove a weak version of the BMR conjecture for the exceptional groups of rank 2. \section{A weak version of the BMR freeness conjecture} \indent Let $W$ be an exceptional group of rank 2 with associated complex braid group $B$ and let $H$ denote the generic Hecke algebra associated to $W$, defined over the ring $R=\mathbb{Z}\left[u_{s,i}^{\pm}\right]$, where $s$ runs over the conjugacy classes of distinguished reflections and $1\leq i \leq e_s$, where $e_s$ denotes the order of the pseudo-reflection $s$ in $W$. For the rest of this section we follow the notations of \textsection 2.2 of \cite{marinG26}. For $k$ a unitary ring we set $R_{ k}:=R\otimes_{\mathbb{Z}}\calligra{k}$ and $H_{k}:=H\otimes_{R}R_{ k}$.
We denote by $\tilde u_{s,i}$ the images of $u_{s,i}$ inside $R_k$. By definition, $H_{k}$ is the quotient of the group algebra $R_{k}B$ of $B$ by the ideal generated by the $P_{s}(\grs)$, where $s$ runs over the conjugacy classes of distinguished reflections, $\grs$ over the set of distinguished braided reflections associated to $s$ and $P_{s}(X)$ is the monic polynomial $(X-\tilde u_{s,1})\dots(X-\tilde u_{s,e_s})$ inside $R_{k}\left[X\right]$. Notice that if $s$ and $t$ are conjugate in $W$, the polynomials $P_s(X)$ and $P_t(X)$ coincide. Let $Z(B)$ be the center of $B$. By theorem \ref{centerbraid} we know that $Z(B)$ is cyclic and, therefore, we set $Z(B):=\langle z\rangle$. We also set $\bar B:=B/\langle z\rangle$ and $R_{ k}^+:=R_{ k}\left[x,x^{-1}\right]$. Let $f$ be a set-theoretic section of the natural projection $\grp: B\rightarrow \bar B$, meaning that $f: \bar B \rightarrow B$ is a map such that $\grp\circ f=id_{\bar B}$. The next proposition (Proposition 2.10 in \cite{marinG26}) states that $H_{k}$ inherits a structure of $R_{k}^+$-module. More precisely, $H_{ k}$ is isomorphic to the quotient of the group algebra $R_{ k}^+\bar B$ of $\bar B$ by some relations defined by the polynomials $P_{s}(X)$ and by $f$. \begin{prop} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize} \item[(i)] $R_{ k}B$ admits an $R_{ k}^+$-module structure defined by $x\cdot b=zb$, for every $b\in B$. Moreover, as $R_{k}^+$-modules, $R_{ k}B\simeq R_{k}^+\bar B$. \item [(ii)] Under the isomorphism described in (i), the defining ideal of $H_{k}$ generated by the $P_s(\grs)$ is mapped to the ideal of $ R_{ k}^+\bar B$ generated by the $Q_s(\bar \grs)$, where $Q_s(X)=x^{c_{\grs}\deg P_s}\cdot P_s(x^{-c_{\grs}}\cdot X)\in R_{ k}^+[X]$, the $c_{\grs}\in \mathbb{Z}$ being defined by $f(\bar\grs)=z^{c_{\grs}}\grs$.
\end{itemize} \label{prIVAN} \end{prop} \begin{proof} This proof is a rewriting of the proof of Proposition 2.10 in \cite{marinG26}. For every $b\in B$ we have $\grp(f(\bar b)) =\bar b=\grp(b)$ $\Rightarrow f(\bar b)b^{-1}\in \text{ker}\grp=Z(B)$. We also have that $Z(B)\simeq \mathbb{Z}$. Therefore, there is a uniquely defined 1-cocycle $c: B \rightarrow \mathbb{Z}$, $b\mapsto c_b$ such that $\forall b\in B$, $f(\bar b)=z^{c_{b}}b$. We have: $$\begin{array}[t]{lcl} R_kB&=&\underset{b\in B}{\oplus}R_kb\\ &=&\underset{b\in B}{\oplus}R_kz^{-c_b}f(\bar b)\\ &=&\underset{d\in \bar B}{\oplus}\underset{\bar b=d}{\oplus}R_kz^{-c_b}f(d)\\ &=&\underset{d\in \bar B}{\oplus}\underset{n\in \mathbb{Z}}{\oplus}R_kz^nf(d)\\ &=&\underset{d\in \bar B}{\oplus}R_k^{+}f(d). \end{array}$$ However, $\underset{d\in \bar B}{\oplus}R_k^{+}f(d)\simeq \underset{d\in \bar B}{\oplus}R_k^{+}d=R_k^{+}\bar B$ and, therefore, $R_kB\simeq R_k^{+}\bar B$, which proves (i). For (ii), let $I$ be the $R_k$-submodule of $R_kB$ spanned by the $bP_{s}(\grs)b'$, for $b,b' \in B$. We notice that $bP_{s}(\grs)b'=z^{-c_b}f(\bar b) P_{s}(\grs)z^{-c_{b'}}f(\bar b')$. Hence, as $R_k^{+}$-module $I$ is spanned by the $f(\bar b) P_{s}(\grs)f(\bar b')$. Let $Q_s(X)=x^{c_s\text{deg}P_s} P_s(x^{-c_s}X)$. We have: $$Q_s(f(\bar \grs))=x^{c_s\text{deg}P_s}\cdot P_s(x^{-c_s}\cdot f(\bar \grs))=z^{c_s\text{deg}P_s}P_s(z^{-c_s}f(\bar \grs))= z^{c_s\text{deg}P_s}P_s(\grs).$$ Therefore, $P_s(\grs)=z^{-c_s\text{deg}P_s}Q_s(f(\bar \grs))$. However, $I$ is identified inside $R_k^+\bar B$ by the $R_k^+$-submodule generated by the $f(\bar b) P_{s}(\grs)f(\bar b')$. As a result, $I$ is identified inside $R_k^+\bar B$ by the ideal of $R_k^+\bar B$ generated by $f(\bar b)Q_s(f(\bar \grs))f(\bar b')$, which is isomorphic to the ideal of $R_k^{+}\bar B$ generated by $Q_s(\bar \grs)$. \end{proof} \begin{cor} Let $f: \bar B \rightarrow B$ be a set-theoretic section of the natural projection $B\rightarrow \bar B$. 
There is an isomorphism $\grF_{f}$ between the $R_k^+$-modules $R_k^+\bar B/Q_s(\bar \grs)$ and $H_k=R_kB/P_s(\grs)$. \label{corIVAN} \end{cor} We now give an example to make all the above notations clearer to the reader. \begin{ex} Let $W:=G_4=\langle s,t\;|\;s^3=t^3=1, sts=tst\rangle$. Since $s$ and $t$ are conjugate, the generic Hecke algebra of $G_4$ is defined over the ring $R=\mathbb{Z}[u_{s,i}^{\pm}]$, where $i=1,2,3$, and it has the following presentation: $$H=\langle \grs,\grt\;|\; \grs\grt\grs=\grt\grs\grt,\;\prod\limits_{i=1}^{3}(\grs-u_{s,i})=\prod\limits_{i=1}^{3}(\grt-u_{s,i})=0\rangle.$$ Keeping the above notations, we let $P_s(X)=P_t(X)=(X-\tilde u_{s,1})(X-\tilde u_{s,2})(X-\tilde u_{s,3})$. We fix a set-theoretic section $f: \bar B \rightarrow B$. We set $Q_s(X)=(X-x^{c_{\grs}}\cdot\tilde u_{s,1})(X-x^{c_{\grs}}\cdot\tilde u_{s,2})(X-x^{c_{\grs}}\cdot\tilde u_{s,3})$ and $Q_t(X)=(X-x^{c_{\grt}}\cdot\tilde u_{s,1})(X-x^{c_{\grt}}\cdot\tilde u_{s,2})(X-x^{c_{\grt}}\cdot\tilde u_{s,3})$, where $c_{\grs}, c_{\grt} \in \mathbb{Z}$ are defined by $f(\bar \grs)=z^{c_{\grs}}\grs$ and $f(\bar \grt)=z^{c_{\grt}}\grt$. Corollary \ref{corIVAN} states that, as $R_{ k}^+$-modules, $$H_{ k}\simeq \langle \bar \grs,\bar \grt\;|\;(\bar{\grs}\bar{\grt})^3=1,\; \bar{\grs}\bar{\grt}\bar{\grs}=\bar{\grt}\bar{\grs}\bar{\grt},\;\prod\limits_{i=1}^{3}(\bar \grs-x^{c_{\grs}}\cdot\tilde u_{s,i})=\prod\limits_{i=1}^{3}(\bar \grt-x^{c_{\grt}}\cdot\tilde u_{s,i})=0\rangle. \makeatletter\displaymath@qed$$ \end{ex} We set $\overline{W}:=W/Z(W)$. We note that $\overline{W}$ is the group of even elements in a finite Coxeter group $C$ of rank 3 (of type $A_3$, $B_3$ and $H_3$ for the tetrahedral, octahedral and icosahedral cases, respectively), with Coxeter system $y_1,y_2,y_3$ and Coxeter matrix $(m_{ij})$.
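The identification of $\overline{W}$ with the even elements of $C$ can be checked numerically at the level of orders. The following Python sketch (an illustration using permutation and signed-permutation models, not part of the original text) counts the even elements of the Coxeter groups of types $A_3$ and $B_3$, recovering $|\overline{W}|=12$ and $24$ for the tetrahedral and octahedral families; type $H_3$ (order 120) would similarly yield $60$, but its reflection representation needs irrational coordinates:

```python
from itertools import permutations, product

def sign(p):
    """Sign of a permutation given as a tuple: the sign flips once
    for every even-length cycle in the cycle decomposition."""
    s, seen = 1, set()
    for i in range(len(p)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        if length % 2 == 0:
            s = -s
    return s

# Type A3: the Coxeter group is Sym(4); its even elements form Alt(4).
even_A3 = [p for p in permutations(range(4)) if sign(p) == 1]

# Type B3: signed permutations of 3 letters; an element is even when
# the determinant of its monomial matrix, sign(p)*e1*e2*e3, equals +1.
even_B3 = [(p, e) for p in permutations(range(3))
           for e in product((1, -1), repeat=3)
           if sign(p) * e[0] * e[1] * e[2] == 1]

print(len(even_A3), len(even_B3))  # 12 24
```

These counts match the orders of the tetrahedral and octahedral groups $W/Z(W)$ from the previous section.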
We set $\tilde \mathbb{Z}:=\mathbb{Z}\left[e^{\frac{2\pi i}{m_{ij}}}\right]$ and let $A(C)$ be the $\tilde\mathbb{Z}$-algebra presented as follows: \begin{itemize}[leftmargin=*] \item \underline{Generators}: $Y_1, Y_2, Y_3$, $t_{ij,k}$, where $i,j\in\{1,2,3\}$, $i\not=j$ and $k\in\mathbb{Z}/m_{ij}\mathbb{Z}$. \item \underline{Relations}: $Y_i^2=1$, $t_{ij,k}^{-1}=t_{ji,-k}$, $\prod\limits_{k=1}^{m_{ij}}(Y_iY_j-t_{ij,k})=0$, $t_{ij,k}Y_r=Y_rt_{ij,k}$, $t_{ij,k}t_{i'j',k'}=t_{i'j',k'}t_{ij,k}$. \end{itemize} This construction of $A(C)$ is more general and can be done for every Coxeter group $C$ (for more details one may refer to \textsection 2 of \cite{ERcoxeter}). Let $R^C=\tilde \mathbb{Z}\left[t_{ij,k}^{\pm}\right]=\tilde \mathbb{Z}\left[t_{ij,k}\right]$. The subalgebra $A_+(C)$ generated by $Y_iY_j$, $i\not=j$, becomes an $R^C$-algebra and can be presented as follows: \begin{itemize}[leftmargin=*] \item \underline{Generators}: $A_{ij}:=Y_iY_j$, where $i,j\in\{1,2,3\}$, $i\not=j$. \item \underline{Relations}: $A_{ij}^{-1}=A_{ji}$, $\prod\limits_{k=1}^{m_{ij}}(A_{ij}-t_{ij,k})=0$, $A_{ij}A_{jl}A_{li}=1$, for $\#\{i,j,l\}=3$. \end{itemize} If $w$ is a word in letters $y_i$ we let $T_w$ denote the corresponding element of $A(C)$. For every $x\in \overline{W}$ let us choose a reduced word $w_x$ that represents $x$ in $\overline{W}$. We notice that $T_{w_x}$ is an element in $A_+(C)$, since $w_x$ is reduced and $\overline{W}$ is the group of even elements in $C$. The following theorem is theorem 2.3(ii) in \cite{ERcoxeter}, which is proved there for an arbitrary finite Coxeter group. \begin{thm} The algebra $A_+(C)$ is generated as $R^C$-module by the elements $T_{w_x}$, $x\in \overline{W}$. \label{thmER} \end{thm} We recall that $C$ is a Coxeter group of type $A_3$, $B_3$ or $H_3$, according to the family.
Therefore, the Coxeter matrix is of the form $\left( \begin{array}{ccc} 1 & m & 2 \\ m & 1 & 3 \\ 2 & 3 & 1 \end{array} \right)$, where $m=3$ for the tetrahedral family, $m=4$ for the octahedral family and $m=5$ for the icosahedral family, respectively. Hence, we can present $A_+(C)$ as follows: $$\left\langle \begin{array}{l|cl} &(A_{13}-t_{13,1})(A_{13}-t_{13,2})=0&\\ A_{13}, A_{32}, A_{21}&(A_{32}-t_{32,1})(A_{32}-t_{32,2})(A_{32}-t_{32,3})=0,& A_{13}A_{32}A_{21}=1\\ &(A_{21}-t_{21,1})(A_{21}-t_{21,2})\dots(A_{21}-t_{21,m})=0& \end{array} \right\rangle$$ For every exceptional group of rank 2 we consider the ER presentation of $B$ (see table \ref{t2}). We notice that apart from the cases of $G_{13}$ and $G_{15}$, this is a presentation in 3 generators $\gra$, $\grb$ and $\grg$ and for the cases of $G_{13}$ and $G_{15}$ this is a presentation of 4 generators $\gra$, $\grb$, $\grg$ and $\grd$. We also notice that in every case the group $\bar B$ can be presented as follows: $$\langle\bar \gra, \bar \grb, \bar \grg\;|\;\bar \gra ^{k_{\gra}}=\bar \grb ^{k_{\grb}}=\bar \grg ^{k_{\grg}}=1,\; \bar \gra \bar \grb \bar \grg=1 \rangle,$$ where $k_{\gra}\in\{0,2\}$, $k_{\grb}\in\{0,3\}$ and the values of $k_{\grg}$ depend on the family in which the group belongs; for the tetrahedral family $k_{\grg}=0$, for the octahedral family $k_{\grg}\in\{0,4\}$ and for the icosahedral family $k_{\grg}\in\{0,5\}$. In the next two propositions, which are actually a rewriting with correction of proposition 2.13 in \cite{marinG26}, we relate the algebra $A_+(C)$ with the algebra $H_{\tilde \mathbb{Z}}$. \begin{prop} Let $W$ be an exceptional group of rank 2, apart from $G_{13}$ and $G_{15}$. There is a ring morphism $\gru: R^C\twoheadrightarrow R_{\tilde \mathbb{Z}}^+$ inducing $ \grC: A_+(C)\otimes_{\gru} R_{\tilde \mathbb{Z}}^+ \twoheadrightarrow R_{\tilde \mathbb{Z}}^+\bar B/Q_{s}(\bar \grs)$ through $A_{13}\mapsto \bar \gra$, $A_{32}\mapsto \bar \grb$, $A_{21}\mapsto \bar \grg$. 
\label{ERSUR} \end{prop} \begin{proof} We define $\gru$ by distinguishing the following cases: \begin{itemize}[leftmargin=*] \item[-]When $k_{\gra}=2$ (cases of $G_4$, $G_5$, $G_8$, $G_{10}$, $G_{16}$, $G_{18}$, $G_{20}$) we have $(\bar\gra-1)(\bar\gra+1)=0$. Therefore, we may define $\gru(t_{13,1}):=1$ and $\gru(t_{13,2}):=-1$. \item[-] When $k_{\gra}=0$ (cases of $G_6$, $G_7$, $G_9$, $G_{11}$, $G_{12}$, $G_{14}$, $G_{17}$, $G_{19}$, $G_{21}$, $G_{22}$) we notice that $\gra$ is a distinguished braided reflection associated to a distinguished reflection $a$ of order 2 (see tables \ref{t3} and \ref{t2} for the images of $a$ and $\gra$ in the BMR presentations). Keeping the notations of proposition \ref{prIVAN}, we recall that $\bar \gra$ annihilates the polynomial $Q_{a}(X)=(X-x^{c_{\gra}}\cdot \tilde u_{a,1})(X-x^{c_{\gra}} \cdot\tilde u_{a,2})$. Therefore, we may define $\gru(t_{13,1}):=x^{c_{\gra}} \cdot\tilde u_{a,1}$ and $\gru(t_{13,2}):=x^{c_{\gra}}\cdot \tilde u_{a,2}$. \item[-]When $k_{\grb}=3$ (cases of $G_4$, $G_6$, $G_8$, $G_9$, $G_{12}$, $G_{16}$, $G_{17}$, $G_{22}$) we have $(\bar\grb-1)(\bar\grb-j)(\bar\grb-j^2)=0$, where $j$ is a primitive third root of unity. Therefore, we define $\gru(t_{32,1}):=1$, $\gru(t_{32,2}):=j$ and $\gru(t_{32,3}):=j^2$. \item[-] When $k_{\grb}=0$ (cases of $G_5$, $G_7$, $G_{10}$, $G_{11}$, $G_{14}$, $G_{18}$, $G_{19}$, $G_{20}$, $G_{21}$) we notice that $\grb$ is a distinguished braided reflection associated to a distinguished reflection $b$ of order 3 (see tables \ref{t3} and \ref{t2} for the images of $b$ and $\grb$ in the BMR presentations). Therefore, similarly to the case where $k_{\gra}=0$, we may define $\gru(t_{32,1}):=x^{c_{\grb}} \cdot\tilde u_{b,1}$, $\gru(t_{32,2}):=x^{c_{\grb}} \cdot\tilde u_{b,2}$ and $\gru(t_{32,3}):=x^{c_{\grb}}\cdot \tilde u_{b,3}$.
\item[-] When $k_{\grg}=m$ (cases of $G_{12}$, $G_{14}$, $G_{20}$, $G_{21}$, $G_{22}$) we have $(\bar\grg-1)(\bar\grg-\grz_m)(\bar\grg-\grz_m^2)\dots(\bar\grg-\grz_m^{m-1})=0$, where $\grz_m$ is a primitive $m$-th root of unity. Therefore, we define $\gru(t_{21,1}):=1$, $\gru(t_{21,2}):=\grz_m$, $\gru(t_{21,3}):=\grz_m^2$,\dots, $\gru(t_{21,m}):=\grz_m^{m-1}$. \item[-] When $k_{\grg}=0$ (cases of $G_4$, $G_5$, $G_6$, $G_7$, $G_8$, $G_9$, $G_{10}$, $G_{11}$, $G_{16}$, $G_{17}$, $G_{18}$, $G_{19}$) we notice that $\grg$ is a distinguished braided reflection associated to a distinguished reflection $c$ of order $m$ (see tables \ref{t3} and \ref{t2} for the images of $\grg$ and $c$ in the BMR presentations). Therefore, similarly to the case where $k_{\gra}=0$, we may define $\gru(t_{21,1}):=x^{c_{\grg}}\cdot \tilde u_{c,1}$, $\gru(t_{21,2}):=x^{c_{\grg}}\cdot\tilde u_{c,2}$, \dots, $\gru(t_{21,m}):=x^{c_{\grg}} \cdot\tilde u_{c,m}$. \end{itemize} We now consider the algebra $A_+(C)\otimes_{\gru}R_{\tilde \mathbb{Z}}^+$ and we define $\begin{array}[t]{rcl}\grC:A_+(C)\otimes_{\gru} R_{\tilde \mathbb{Z}}^+ &\twoheadrightarrow& R_{\tilde \mathbb{Z}}^+\bar B/Q_s(\bar \grs)\\ A_{13}&\mapsto& \bar \gra\\ A_{32}&\mapsto& \bar \grb\\ A_{21}&\mapsto& \bar \grg. \end{array}$ The map $\grC$ is well defined, since $\gru$ maps the defining relations of $A_+(C)$ to relations that hold in $R_{\tilde \mathbb{Z}}^+\bar B/Q_s(\bar \grs)$, and it is surjective, since $\bar \gra$, $\bar \grb$ and $\bar \grg$ generate $\bar B$. \end{proof} We will now deal with the cases of $G_{13}$ and $G_{15}$. For these cases, we replace $A_+(C)$ with a specialized algebra $\tilde A_+(C)$ and we use a similar technique. More precisely, both $G_{13}$ and $G_{15}$ belong to the octahedral family. We consider the ER presentation of the complex braid group associated to these groups (see table \ref{t2}) and we notice that $\bar B$ can be presented as follows: $$\langle\bar \gra, \bar \grb, \bar \grg\;|\;\bar \grb^{k_{\grb}}=\bar \grg^{4}=1, \bar \gra \bar \grb \bar \grg=1 \rangle,$$ where $k_{\grb}\in\{0,3\}$.
We set $\tilde R^C:=\tilde \mathbb{Z}\left[t_{13,1}, t_{13,2}, t_{32,1}, t_{32,2}, t_{32,3}, \sqrt{t_{21,1}},\sqrt{t_{21,3}}\right]$ and we define $$ \begin{array}[t]{rlc}\grf: R^C &\longrightarrow\tilde R^C&\\ t_{21,1}&\mapsto \;\;\sqrt{t_{21,1}}&\\ t_{21,2}&\mapsto -\sqrt{t_{21,1}}&\\ t_{21,3}&\mapsto \;\;\sqrt{t_{21,3}}&\\ t_{21,4}&\mapsto -\sqrt{t_{21,3}}&, \end{array}$$ with $\grf$ fixing the remaining generators $t_{13,k}$ and $t_{32,k}$. Let $\tilde A_+(C)$ denote the $\tilde R^C$-algebra $A_+(C)\otimes_{\grf}\tilde R^C$ and let $\tilde A_{ij}$ denote the image of $A_{ij}$ inside the algebra $\tilde A_+(C)$. The latter can be presented as follows: $$\left\langle \begin{array}{l|cl} &(\tilde A_{13}-t_{13,1})(\tilde A_{13}-t_{13,2})=0&\\ \tilde A_{13}, \tilde A_{32}, \tilde A_{21}&(\tilde A_{32}-t_{32,1})(\tilde A_{32}-t_{32,2})(\tilde A_{32}-t_{32,3})=0,& \tilde A_{13}\tilde A_{32}\tilde A_{21}=1\\ &(\tilde A_{21}^2-t_{21,1})(\tilde A_{21}^2-t_{21,3})=0& \end{array} \right\rangle$$ \begin{prop} Let $W$ be the exceptional group $G_{13}$ or $G_{15}$. There is a ring morphism $\gru: \tilde R^C\twoheadrightarrow R_{\tilde \mathbb{Z}}^+$ inducing $ \grC: \tilde A_+(C)\otimes_{\gru} R_{\tilde \mathbb{Z}}^+ \twoheadrightarrow R_{\tilde \mathbb{Z}}^+\bar B/Q_{s}(\bar \grs)$ through $\tilde A_{13}\mapsto \bar \gra$, $\tilde A_{32}\mapsto \bar \grb$, $\tilde A_{21}\mapsto \bar \grg$. \label{ERRSUR} \end{prop} \begin{proof} We use the same arguments we used for the rest of the exceptional groups of rank 2 (see the proof of proposition \ref{ERSUR}). More precisely, since $(\bar{\grg}^2-1)(\bar{\grg}^2+1)=0$, we may define $\gru(t_{21,1})=1$ and $\gru(t_{21,3})=-1$. Moreover, we notice that $\gra$ is a distinguished braided reflection associated to a distinguished reflection $a$ of order 2 (see tables \ref{t3} and \ref{t2} for the images of $\gra$ and $a$ in the BMR presentations).
Keeping the notations of proposition \ref{prIVAN}, we recall that $\bar \gra$ annihilates the polynomial $Q_{a}(X)=(X-x^{c_{\gra}}\cdot \tilde u_{a,1})(X-x^{c_{\gra}} \cdot\tilde u_{a,2})$. Therefore, we may define $\gru$ such that $\gru(t_{13,1}):=x^{c_{\gra}}\cdot \tilde u_{a,1}$ and $\gru(t_{13,2}):=x^{c_{\gra}}\cdot \tilde u_{a,2}$. In the case of $G_{13}$ we have $(\bar\grb-1)(\bar\grb-j)(\bar\grb-j^2)=0$, where $j$ is a primitive third root of unity. Therefore, we may define $\gru(t_{32,1}):=1$, $\gru(t_{32,2}):=j$ and $\gru(t_{32,3}):=j^2$. In the case of $G_{15}$ we notice that $\grb$ is a distinguished braided reflection associated to a distinguished reflection $b$ of order 3 (see tables \ref{t3} and \ref{t2} for the images of $\grb$ and $b$ in the BMR presentations). We recall again that $\bar \grb$ annihilates the polynomial $Q_{b}(X)=(X-x^{c_{\grb}}\cdot \tilde u_{b,1})(X-x^{c_{\grb}} \cdot\tilde u_{b,2})(X-x^{c_{\grb}} \cdot\tilde u_{b,3})$. Therefore, we may define $\gru(t_{32,1}):=x^{c_{\grb}} \cdot\tilde u_{b,1}$, $\gru(t_{32,2}):=x^{c_{\grb}} \cdot\tilde u_{b,2}$ and $\gru(t_{32,3}):=x^{c_{\grb}}\cdot \tilde u_{b,3}$. We now consider the algebra $\tilde A_+(C)\otimes_{\gru}R_{\tilde \mathbb{Z}}^+$ and we define $\begin{array}[t]{rcl}\grC:\tilde A_+(C)\otimes_{\gru} R_{\tilde \mathbb{Z}}^+ &\twoheadrightarrow& R_{\tilde \mathbb{Z}}^+\bar B/Q_s(\bar \grs)\\ \tilde A_{13}&\mapsto& \bar \gra\\ \tilde A_{32}&\mapsto& \bar \grb\\ \tilde A_{21}&\mapsto& \bar \grg. \end{array}$ \end{proof} \begin{defn} For every exceptional group of rank 2 we call the surjection $\grC$ described in propositions \ref{ERSUR} and \ref{ERRSUR} the \emph{ER surjection} associated to $W$. \end{defn} \begin{prop} $H_{\tilde \mathbb{Z}}$ is generated as $R_{\tilde \mathbb{Z}}^+$-module by $|\overline{W}|$ elements. \end{prop} \begin{proof} The result follows from corollary \ref{corIVAN}, theorem \ref{thmER} and propositions \ref{ERSUR} and \ref{ERRSUR}.
\end{proof} The following corollary is Theorem 2.14 of \cite{marinG26} and it is the main result in \cite{ERrank2}. \begin{cor}(A weak version of the BMR freeness conjecture) $H$ is finitely generated over $R$. \label{corER} \end{cor} A first consequence of the weak version of the conjecture is the following: \begin{prop}Let $F$ denote the field of fractions of $R$ and $\bar F$ an algebraic closure. Then, $H\otimes_R \bar F\simeq \bar F W$. \label{F} \end{prop} \begin{proof} The result follows directly from proposition 2.4(2) of \cite{marinG26} and from corollary \ref{corER}. \end{proof} Another consequence of the weak version of the conjecture is the following proposition, which states that if the generic Hecke algebra of an exceptional group of rank 2 is torsion-free as $R$-module, then it suffices to prove the BMR freeness conjecture for the maximal groups (see table \ref{t1}). \begin{prop} Let $W_0$ be the maximal group $G_7$, $G_{11}$ or $G_{19}$ and let $W$ be an exceptional group of rank 2, whose associated Hecke algebra $H$ is torsion-free as $R$-module. If the BMR freeness conjecture holds for $W_0$, then it holds for $W$, as well. \end{prop} \begin{proof} Let $R_0$ and $R$ be the rings over which we define the Hecke algebras $H_0$ and $H$ associated to $W_0$ and $W$, respectively. There is a specialization $\gru :R_0\rightarrow R$ that maps some of the parameters of $R_0$ to roots of unity (see tables 4.6, 4.9 and 4.12 in \cite{malle2}). We set $A:=H_0\otimes_{\gru}R$. Due to the hypothesis that the BMR freeness conjecture holds for $W_0$, we have that $A$ is a free $R$-module of rank $|W_0|$. In proposition 4.2 of \cite{malle2}, G. Malle found a subalgebra $\bar A$ of $A$ such that $H\twoheadrightarrow \bar A$. A presentation of $\bar A$ is given in Appendix A of \cite{chlouverakibook}. He also noticed that if $m=|W_0|/|W|$, then there is an element $\grs\in A$ such that $A=\oplus_{i=0}^{m-1}\grs^i\bar A$.
We highlight here that for all these results G. Malle does not use the validity of the BMR freeness conjecture. Since $A$ is a free $R$-module of rank $|W_0|$, we also have that $\bar A$ is a free $R$-module of rank $|W|$. Our goal is to prove that, as $R$-modules, $\bar A \simeq H$. Let $F$ denote the field of fractions of $R$ and $\bar F$ an algebraic closure. By proposition \ref{F} we have that $\bar A\otimes_R \bar F \simeq H\otimes_R \bar F$. We have the following commutative diagram: $$ \xymatrix{ H \ar[r]^{\phi_2} \ar@{->>}[d]^{\psi_2} \ar@{.>}[dr]& H \otimes_R \overline{F} \ar[d]^{\psi_1}_{\rotatebox{90}{$\simeq$}}\\ \overline{A} \ar[r]^{\phi_1}& \overline{A} \otimes_R \overline{F} } $$ Let $h\in$ ker$\grc_2$. Then $\grc_1(\grf_2(h))=\grf_1(\grc_2(h))=\grf_1(0)=0$ and, since $\grc_1$ is an isomorphism, $\grf_2(h)=0$, which means that ker$\grc_2\subset$ ker$\grf_2$. Let $h\in$ ker$\grf_2$. Then $\grf_1(\grc_2(h))=\grc_1(\grf_2(h))=0$, which means that $\grc_2(h)\in \bar A^{\text{tor}}$. However, $\bar A$ is a free $R$-module, therefore $\bar A^{\text{tor}}=0$. As a result, ker$\grf_2\subset$ ker$\grc_2$ and, hence, ker$\grc_2=$ ker$\grf_2$. Since ker$\grf_2$ consists of torsion elements and $H$ is torsion-free as $R$-module, we obtain ker$\grc_2=$ ker$\grf_2=0$. Therefore, $\grc_2$ is an isomorphism and, as a result, $H$ is a free $R$-module of rank $|W|$. \end{proof} We now give an example that makes the nice properties of the algebra $\bar A$ mentioned in the proof of the above proposition clearer to the reader. \begin{ex} Let $W=G_5$, which is an exceptional group that belongs to the tetrahedral family. In this family the maximal group is the group $W_0=G_7$ (see table \ref{t1}). Let $R_0$ be the Laurent polynomial ring $\mathbb{Z}\left[x_i^{\pm},y_j^{\pm}, z_l^{\pm}\right]$ over which we define the generic Hecke algebra $H_0$ associated to $G_7$, where $i=1,2$ and $j,l=1,2,3$.
$H_0$ can be presented as follows: $$H_0 =\langle s,t,u\;|\; stu=tus=ust, \prod\limits_{i=1}^2(s-x_i)=\prod\limits_{j=1}^3(t-y_j)=\prod\limits_{l=1}^3(u-z_l)=0\rangle.$$ We assume that $H_0$ is a free $R_0$-module of rank $|W_0|$. Let $R$ be the Laurent polynomial ring $\mathbb{Z}\left[\bar y_j^{\pm}, \bar z_l^{\pm}\right]$ over which we define the generic Hecke algebra $H$ associated to $G_5$, where $j,l=1,2,3$. $H$ can be presented as follows: $$H =\langle S,T\;|\; STST=TSTS, \prod\limits_{j=1}^3(S-\bar y_j)=\prod\limits_{l=1}^3(T-\bar z_l)=0\rangle.$$ Let $\gru: R_0\rightarrow R$ be the specialization defined by $x_1\mapsto 1$, $x_2\mapsto -1$, $y_j\mapsto \bar y_j$ and $z_l\mapsto \bar z_l$. The $R$-algebra $A:=H_0\otimes_{\gru}R$ is presented as follows: $$A=\langle s,t,u\;|\; stu=tus=ust, s^2=1,\prod\limits_{j=1}^3(t-\bar y_j)=\prod\limits_{l=1}^3(u-\bar z_l)=0\rangle.$$ Let $\bar A=\langle t,u\rangle\leq A$. We define $\grf: H \rightarrow \bar A$, $S\mapsto t$, $T\mapsto u$. Since $stu$ generates the center of the complex braid group associated to $G_7$, we have $tutu=tus^2tu=(tus)stu=ust(stu)=us(stu)t=utut$, meaning that $\grf$ is well defined; it is surjective, since $\bar A$ is generated by $t$ and $u$. We also notice that $stu=ust\Rightarrow us=stut^{-1}$ and $ust=tus\Rightarrow t=su^{-1}tus\Rightarrow ts=su^{-1}tu$. Therefore, $A=\bar A+s\bar A$. In order to prove that this sum is direct, we consider an algebraic closure $\bar F$ of the field of fractions $F$ of $R$. By Proposition \ref{F} we have that $A\otimes_R\bar F =\bar A\otimes_R\bar F\oplus s\bar A\otimes_R\bar F$, since $A\otimes_R\bar F$ is an $\bar F$-vector space of dimension $|W_0|=144$ and $\bar A\otimes_R\bar F$ is an $\bar F$-vector space of dimension $|W|=72$. Let $x\in \bar A \cap s\bar A$. Then $x\otimes 1_{\bar F}\in \bar A\otimes_R\bar F\cap s\bar A\otimes_R\bar F=\{0\}$, which means that $x\in A^{\text{tor}}=\{0_A\}$, since $A$ is a free $R$-module. Therefore, $A=\bar A\oplus s\bar A$, meaning that $\bar A$ is a free $R$-module of rank $|W|$.
\qed \end{ex} \chapter{The BMR freeness conjecture for the tetrahedral and octahedral families} In this chapter we prove the BMR freeness conjecture for the exceptional groups belonging to the first two families, using a case-by-case analysis. As one may notice, in this proof we ``guess'' what the basis should look like and then we establish that it is in fact a basis through a series of computations. This method allowed us not only to prove the validity of the conjecture, but also to give a nice description of the basis, similar to the classical case of the finite Coxeter groups. In the first part of this chapter we explain how we came to suspect that there exists a basis of this nice form, using the basis of the algebra $A_+(C)$ that P. Etingof and E. Rains also used in order to prove the weak version of the conjecture for the exceptional groups of rank 2, a result we explored in detail in the previous chapter. \section{Finding the basis} \indent Let $W$ denote an exceptional group of rank 2. In the previous chapter we explained that $\overline{W}:=W/Z(W)$ can be considered as the group of even elements in a finite Coxeter group $C$ of rank 3 with Coxeter system $\{y_1, y_2, y_3\}$. For every $x$ in $\overline{W}$, we fix a reduced word $w_x$ in letters $y_1,y_2$ and $y_3$ representing $x$ in $\overline{W}$ and we set $a_{ij}:=y_iy_j$, $i,j\in\{1,2,3\}$ with $i\not=j$. Since $\overline{W}$ is the group of even elements in $C$, the reduced word $w_x$ is a word in letters $a_{ij}$ and it corresponds to an element in $A_+(C)$ denoted by $T_{w_x}$. In particular, $T_{w_1}=1_{A_+(C)}$. Keeping the notations of the previous chapter, we recall that $H_{\tilde \mathbb{Z}}$ is generated as $R_{\tilde \mathbb{Z}}^+$-module by the elements $\grC(T_{w_x})$, $x\in \overline{W}$, where $\grC$ is the ER surjection associated to $W$ (see propositions \ref{ERSUR} and \ref{ERRSUR}).
Therefore, we have that $H_{\tilde \mathbb{Z}}$ is spanned over $R_{\tilde \mathbb{Z}}^+$ by $|\overline{W}|$ elements. Motivated by this idea, we will explain in general how we found a spanning set of $H$ over $R$ of $|W|$ elements. For every $x\in \overline{W}$ we fix a reduced word $w_x$ in letters $y_1,y_2$ and $y_3$ that represents $x$ in $\overline{W}$. From the reduced word $w_x$ one can obtain a word $\bar w_x$ that also represents $x$ in $\overline{W}$, defined as follows: $$\bar w_x= \begin{cases} w_x &\text{for }x=1\\ w_x(y_1y_1)^{n_1}(y_2y_2)^{n_2}(y_3y_3)^{n_3}&\text{for }x\not=1 \end{cases},$$ where $n_i \in \mathbb{Z}_{\geq 0}$ and $(y_iy_i)^{n_i}$ is a shorter notation for the word $\underbrace{(y_iy_i)\dots(y_iy_i)}_{n_i\text{ times}}$. We notice that if we choose $n_1=n_2=n_3=0$, the word $\bar w_{x}$ coincides with the word $w_{x}$. Moving some of the pairs $(y_iy_i)$ to other positions inside $\bar w_x$ and using the braid relations between the generators $y_i$ of the Coxeter group $C$ one can obtain a word $\tilde w_x$, which also represents $x$ in $\overline{W}$, such that: \begin{itemize} \item $\ell(\tilde w_x)=\ell(\bar w_x)$, where $\ell(w)$ denotes the length of the word $w$. \item For every odd number $m$, if the letter at the $m$th position of $\tilde w_x$ (reading from left to right) is $y_i$, then the letter at the $(m+1)$th position is some $y_j$ with $j\not=i$. \item $\tilde w_x=w_x$ if and only if $\bar w_x=w_x$. In particular, $\tilde w_1=w_1$. \end{itemize} \begin{defn} A word $\tilde w_x$ as described above is called \emph{a base word} associated to $\bar w_x$. \end{defn} Let $\tilde w_x$ be a base word. We recall that $a_{ij}:=y_iy_j$, $i,j\in\{1,2,3\}$ with $i\not=j$. By the definition of $\tilde w_x$ and the fact that $\overline{W}$ is the group of even elements in $C$, the word $\tilde w_x$ can be considered as a word in letters $a_{ij}$. Let $T_{\tilde w_x}$ denote the corresponding element in $A_+(C)$.
In particular, $T_{\tilde w_1}=T_{w_1}=1_{A_+(C)}$. Let $B$ be the complex braid group associated to $W$ and let $\bar B$ denote the quotient $B/Z(B)$. For every $b\in B$ we denote by $\bar b$ the image of $b$ under the natural projection $B\rightarrow \bar B$. In the previous chapter we explained that $\bar B$ is generated by the elements $\bar \gra,$ $\bar \grb$ and $\bar \grg$, where $\gra$, $\grb$ and $\grg$ are generators of $B$ in ER presentation (see table \ref{t2} in Appendix B). By the definition of the ER-surjection the element $\grC(T_{\tilde w_x})$ is a product of $\bar \gra$, $\bar \grb$ and $\bar \grg$ (see \ref{ERSUR} and \ref{ERRSUR}). We use the group isomorphism $\grf_2$ described in table \ref{t2} of Appendix B to write the elements $\gra$, $\grb$ and $\grg$ in BMR presentation and we set $\grs_{\gra}:=\grf_2(\gra)$, $\grs_{\grb}:=\grf_2(\grb)$ and $\grs_{\grg}:=\grf_2(\grg)$. Therefore, we can also consider the element $\grC(T_{\tilde w_x})$ as a product of $\bar\grs_{\gra}$, $\bar\grs_{\grb}$ and $\bar\grs_{\grg}$. We denote this element by $\bar v_x$. We now explain how we arrived at our guess of a spanning set of $|W|$ elements for the generic Hecke algebra associated to every exceptional group belonging to the first two families. \begin{itemize} \item Let $W$ be an exceptional group of rank 2, which belongs either to the tetrahedral or octahedral family. For every $x\in \overline{W}$ we choose a specific reduced word $w_x$, specific non-negative integers $(n_i)_{\substack{1\leq i\leq 3}}$ and a specific base word $\tilde w_x$ associated to the word $\bar w_x$, which is determined by $w_x$ and $(n_i)$. \item For every $x\in \overline {W}$, let $x_1^{m_1}x_2^{m_2}\dots x_r^{m_r}$ be the corresponding factorization of $\bar v_x$ into a product of $\bar \grs_{\gra}$, $\bar \grs_{\grb}$ and $\bar \grs_{\grg}$ (meaning that $x_i \in\{\bar \grs_{\gra}, \bar \grs_{\grb},\bar \grs_{\grg}\}$ and $m_i\in \mathbb{Z}$).
Let $f_0: \bar B\rightarrow B$ be a set-theoretic section such that $f_0(x_1^{m_1}x_2^{m_2}\dots x_r^{m_r})=f_0(x_1)^{m_1}f_0(x_2)^{m_2}\dots f_0(x_r)^{m_r}$, $f_0(\bar \grs_{\gra})=\grs_{\gra}$, $f_0(\bar \grs_{\grb})=\grs_{\grb}$ and $f_0(\bar \grs_{\grg})=\grs_{\grg}$. Keeping the notations of the previous chapter, we use corollary \ref{corIVAN} and we obtain an isomorphism $\grf_{ f_0}$ between the $R_{\tilde \mathbb{Z}}^+$-modules $R_{\tilde \mathbb{Z}}^+\bar B/Q_{s}(\bar \grs)$ and $H_{\tilde \mathbb{Z}}$. We set $v_x:=\grf_{f_0}(\bar v_x)$. \item We set $U:=\sum\limits_{x\in \overline{W}} \sum\limits_{k=0}^{|Z(W)|-1}Rz^kv_x$, where $R$ is the Laurent polynomial ring over which we define the generic Hecke algebra $H$ associated to $W$. \end{itemize} The main theorem of this chapter is the following. Notice that the second part of this theorem follows directly from proposition \ref{BMR PROP}. \begin{thm} $H=U$ and, therefore, the BMR freeness conjecture holds for all the groups belonging to the tetrahedral and octahedral family. \label{main} \end{thm} \begin{rem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize} \item[1.] The choice of $w_x$, the non-negative integers $(n_i)_{\substack{1\leq i\leq 3}}$ and $\tilde w_x$ is the product of experimentation, aimed at a simple and robust proof of theorem \ref{main}. We tried other combinations that led to more complicated and bloated proofs, or that did not lead to a conclusion at all. \item[2.] The case of $G_{12}$ has already been proven (see theorem \ref{2case}). However, using this approach we managed to give a proof in a different way. We recall that $G_{12}$ has a presentation of the form $$\langle s,t,u\;|\;s^2=t^2=u^2=1, stus=tust=ustu\rangle.$$ Since $s$, $t$ and $u$ are conjugate (notice that $(stu)s(stu)^{-1}=u$, $(ust)u(ust)^{-1}=t$), the Laurent polynomial ring over which we define $H$ is $\mathbb{Z}[u_{s,i}^{\pm}]_{1\leq i\leq 2}$.
In this chapter we prove the BMR freeness conjecture for $H$ defined over the ring $\mathbb{Z}[u_{s,i}^{\pm},u_{t,j}^{\pm},u_{u,l}^{\pm}]_{\substack{1\leq i,j,l\leq 2 }}$, using the method we described above for the rest of the exceptional groups belonging to the first two families. \end{itemize} \end{rem} The rest of this chapter is devoted to the proof of theorem \ref{main}, using a case-by-case analysis. We finish this section by giving an example of the way we find $U$ for the exceptional group $G_{15}$. \begin{ex} Let $W:=G_{15}$, an exceptional group that belongs to the octahedral family. In this family $C$ is the Coxeter group of type $B_3$. For every $x_i\in \overline{W}$, $i=1,\dots, 24$ we fix a reduced word $w_{x_i}$ in letters $y_1, y_2, y_3$ that represents $x_i$ in $\overline{W}$: \scalebox{0.9}{ \vbox{ \begin{multicols}{3} \begin{itemize}[leftmargin=*] \item[1.] $w_{x_1}=1$ \item[2.] $w_{x_2}=y_3y_2$ \item[3.] $w_{x_3}=y_2y_3$ \item[4.] $w_{x_4}=y_2y_1y_2y_1$ \item[5.] $w_{x_5}=y_3y_1y_2y_1$ \item[6.] $w_{x_6}=y_2y_3y_2y_1y_2y_1$ \item [7.] $w_{x_7}=y_1y_3$ \item [8.] $w_{x_8}=y_3y_2y_1y_3$ \item[9.] $w_{x_9}=y_2y_1$ \item[10.] $w_{x_{10}}=y_2y_1y_2y_3$ \item[11.] $w_{x_{11}}=y_3y_1y_2y_3$ \item[12.] $w_{x_{12}}=y_2y_3y_2y_1y_2y_3$ \item[13.] $w_{x_{13}}=y_3y_2y_1y_2$ \item[14.] $w_{x_{14}}=y_2y_3y_1y_2$ \item[15.] $w_{x_{15}}=y_1y_2$ \item [16.] $w_{x_{16}}=y_2y_1y_2y_3y_2y_1y_2y_1$ \item [17.] $w_{x_{17}}=y_1y_2y_1y_3y_2y_1$ \item[18.] $w_{x_{18}}=y_3y_2y_1y_2y_3y_1y_2y_1$ \item[19.] $w_{x_{19}}=y_1y_3y_2y_1y_2y_3$ \item[20.] $w_{x_{20}}=y_2y_3y_1y_2y_1y_3$ \item[21.] $w_{x_{21}}=y_1y_2y_1y_3$ \item[22.] $w_{x_{22}}=y_3y_2y_1y_2y_3y_2y_1y_2$ \item[23.] $w_{x_{23}}=y_2y_1y_2y_3y_1y_2$ \item [24.] $w_{x_{24}}=y_1y_2y_3y_2y_1y_2$ \end{itemize} \end{multicols}}} We choose the words $\bar w_{x_i}$, $i=1,\dots, 24$ as follows: \scalebox{0.9}{ \vbox{ \begin{multicols}{3} \begin{itemize}[leftmargin=*] \item[1.] $\bar w_{x_1}=w_{x_1}$ \item[2.] 
$\bar w_{x_2}=w_{x_2}$ \item[3.] $\bar w_{x_3}= w_{x_3}$ \item[4.] $\bar w_{x_4}= w_{x_4}$ \item[5.] $\bar w_{x_5}= w_{x_5}\mathbf{y_2y_2}$ \item[6.] $\bar w_{x_6}= w_{x_6}$ \item [7.] $\bar w_{x_7}= w_{x_7}$ \item [8.] $\bar w_{x_8}= w_{x_8}$ \item[9.] $\bar w_{x_9}= w_{x_9}\mathbf{y_3y_3}$ \item[10.] $\bar w_{x_{10}}=w_{x_{10}}\mathbf{y_1y_1}$ \item[11.] $\bar w_{x_{11}}=w_{x_{11}}\mathbf{y_1y_1}\mathbf{y_2y_2}$ \item[12.] $\bar w_{x_{12}}=w_{x_{12}}\mathbf{y_1y_1}$ \item[13.] $\bar w_{x_{13}}=w_{x_{13}}\mathbf{y_1y_1}$ \item[14.] $\bar w_{x_{14}}=w_{x_{14}}\mathbf{y_1y_1y_2y_2}$ \item[15.] $\bar w_{x_{15}}=w_{x_{15}}\mathbf{y_1y_1y_2y_2y_3y_3}$ \item [16.] $\bar w_{x_{16}}=w_{x_{16}}\mathbf{y_1y_1}$ \item [17.] $\bar w_{x_{17}}=w_{x_{17}}\mathbf{y_2y_2y_3y_3}$ \item[18.] $\bar w_{x_{18}}=w_{x_{18}}\mathbf{y_3y_3}$ \item[19.] $\bar w_{x_{19}}=w_{x_{19}}\mathbf{y_1y_1}$ \item[20.] $\bar w_{x_{20}}=w_{x_{20}}\mathbf{y_1y_1}\mathbf{y_3y_3}$ \item[21.] $\bar w_{x_{21}}=w_{x_{21}}\mathbf{y_1y_1y_2y_2y_3y_3}$ \item[22.] $\bar w_{x_{22}}=w_{x_{22}}\mathbf{(y_1y_1)^2}$ \item[23.] $\bar w_{x_{23}}=w_{x_{23}}\mathbf{(y_1y_1)^2}\mathbf{y_2y_2y_3y_3}$ \item [24.] $\bar w_{x_{24}}=w_{x_{24}}\mathbf{(y_1y_1)^2}\mathbf{y_2y_2y_3y_3}$ \end{itemize} \end{multicols}}} We now choose a base word $\tilde w_{x_i}$ for every $i=1,\dots, 24$. We give as an indicative example the base word $\tilde w_{x_{24}}$. $$\begin{array}{lcl} \bar w_{x_{24}}&=&w_{x_{24}}\mathbf{(y_1y_1)^2}\mathbf{y_2y_2y_3y_3}\\ &=&y_1y_2y_3(y_2y_1y_2\mathbf{y_1)y_1}\mathbf{y_1y_1y_2y_2y_3y_3}\\ &=&y_1y_2y_3y_1y_2y_1y_2y_1\mathbf{y_1y_1y_2y_2y_3y_3}\\ &=&y_1y_2(y_3y_1)y_2y_1y_2y_1\mathbf{y_1y_1y_2y_2y_3y_3}\\ &=&y_1y_2y_1y_3y_2y_1y_2y_1\mathbf{y_1y_1y_2y_2y_3y_3}\\ &=&(y_1y_2y_1\mathbf{y_2)y_2}y_3y_2y_1y_2y_1\mathbf{y_1y_1y_3y_3}\\ &=&y_2y_1y_2y_1y_2y_3y_2y_1y_2y_1\mathbf{y_1y_1y_3y_3}\\ &=&y_2(y_1\mathbf{y_3)y_3}y_2y_1y_2\mathbf{y_1y_1}y_3y_2y_1y_2y_1\\ &=&y_2y_3y_1y_3y_2y_1y_2y_1y_1y_3y_2y_1y_2y_1.
\end{array}$$ We choose $\tilde w_{x_{24}}=y_2y_3y_1y_3y_2y_1y_2y_1y_1y_3y_2y_1y_2y_1=a_{23}a_{13}a_{21}a_{21}a_{13}a_{21}a_{21}$ and, hence, $T_{\tilde w_{x_{24}}}=A_{23}A_{13}A_{21}A_{21}A_{13}A_{21}A_{21}$. The rest of the base words are chosen as follows. Note that for $i=1,\dots,4$ and for $i=6,\dots, 8$ the base word $\tilde w_{x_i}$ is chosen to be $\bar w_{x_i}$, since $\bar w_{x_i}=w_{x_i}$. \scalebox{0.9}{ \vbox{ \begin{multicols}{3} \begin{itemize}[leftmargin=*] \item[1.] $\tilde w_{x_1}=1$ \item[2.] $\tilde w_{x_2}=y_3y_2$ \item[3.] $\tilde w_{x_3}=y_2y_3$ \item[4.] $\tilde w_{x_4}=y_2y_1y_2y_1$ \item[5.] $\tilde w_{x_5}=y_3y_2y_2y_1y_2y_1$ \item[6.] $\tilde w_{x_6}=y_2y_3y_2y_1y_2y_1$ \item [7.] $\tilde w_{x_7}=y_1y_3$ \item [8.]$\tilde w_{x_8}=y_3y_2y_1y_3$ \item[9.] $\tilde w_{x_9}=y_2y_3y_3y_1$ \item[10.] $\tilde w_{x_{10}}=y_2y_1y_2y_1y_1y_3$ \item[11.] $\tilde w_{x_{11}}=y_3y_2y_2y_1y_2y_1y_1y_3$ \item[12.] $\tilde w_{x_{12}}=y_2y_3y_2y_1y_2y_1y_1y_3$ \item[13.] $\tilde w_{x_{13}}=y_1y_3y_2y_1y_2y_1$ \item[14.] $\tilde w_{x_{14}}=y_3y_2y_1y_3y_2y_1y_2y_1$ \item[15.] $\tilde w_{x_{15}}=y_2y_3y_1y_3y_2y_1y_2y_1$ \item [16.] $\tilde w_{x_{16}}=y_2y_1y_2y_1y_1y_3y_2y_1y_2y_1$ \item [17.] $\tilde w_{x_{17}}=y_3y_2y_2y_1y_2y_3y_2y_1y_2y_1$ \item[18.] $\tilde w_{x_{18}}=y_2y_3y_2y_3y_1y_2y_3y_1y_2y_1$ \item[19.] $\tilde w_{x_{19}}=y_1y_3y_2y_1y_2y_1y_1y_3$ \item[20.] $\tilde w_{x_{20}}=y_3y_2y_1y_3y_2y_1y_2y_1y_1y_3$ \item[21.] $\tilde w_{x_{21}}=y_2y_3y_1y_3y_2y_1y_2y_1y_1y_3$ \item[22.] $\tilde w_{x_{22}}=y_1y_3y_2y_1y_2y_1y_1y_3y_2y_1y_2y_1$ \item[23.] $\tilde w_{x_{23}}=y_3y_2y_1y_3y_2y_1y_2y_1y_1y_3y_2y_1y_2y_1$ \item [24.] $\tilde w_{x_{24}}=y_2y_3y_1y_3y_2y_1y_2y_1y_1y_3y_2y_1y_2y_1$ \end{itemize} \end{multicols}}} The corresponding elements $T_{\tilde{w_{x_i}}}$ in $A_+(C)$ are as follows: \scalebox{0.93}{ \vbox{ \begin{multicols}{3} \begin{itemize}[leftmargin=*] \item[1.] $T_{\tilde w_{x_1}}=1$ \item[2.] $T_{\tilde w_{x_2}}=A_{32}$ \item[3.] 
$T_{\tilde w_{x_3}}=A_{23}$ \item[4.] $T_{\tilde w_{x_4}}=A_{21}A_{21}$ \item[5.] $T_{\tilde w_{x_5}}=A_{32}A_{21}A_{21}$ \item[6.] $T_{\tilde w_{x_6}}=A_{23}A_{21}A_{21}$ \item [7.] $T_{\tilde w_{x_7}}=A_{13}$ \item [8.] $T_{\tilde w_{x_8}}=A_{32}A_{13}$ \item[9.] $T_{\tilde w_{x_9}}=A_{23}A_{31}$ \item[10.] $T_{\tilde w_{x_{10}}}=A_{21}A_{21}A_{13}$ \item[11.] $T_{\tilde w_{x_{11}}}=A_{32}A_{21}A_{21}A_{13}$ \item[12.] $T_{\tilde w_{x_{12}}}=A_{23}A_{21}A_{21}A_{13}$ \item[13.] $T_{\tilde w_{x_{13}}}=A_{13}A_{21}A_{21}$ \item[14.] $T_{\tilde w_{x_{14}}}=A_{32}A_{13}A_{21}A_{21}$ \item[15.] $T_{\tilde w_{x_{15}}}=A_{23}A_{13}A_{21}A_{21}$ \item [16.] $T_{\tilde w_{x_{16}}}=A_{21}A_{21}A_{13}A_{21}A_{21}$ \item [17.] $T_{\tilde w_{x_{17}}}=A_{32}A_{21}A_{23}A_{21}A_{21}$ \item[18.] $T_{\tilde w_{x_{18}}}=A_{23}A_{23}A_{12}A_{31}A_{21}$ \item[19.] $T_{\tilde w_{x_{19}}}=A_{13}A_{21}A_{21}A_{13}$ \item[20.] $T_{\tilde w_{x_{20}}}=A_{32}A_{13}A_{21}A_{21}A_{13}$ \item[21.] $T_{\tilde w_{x_{21}}}=A_{23}A_{13}A_{21}A_{21}A_{13}$ \item[22.] $T_{\tilde w_{x_{22}}}=A_{13}A_{21}A_{21}A_{13}A_{21}A_{21}$ \item[23.] $T_{\tilde w_{x_{23}}}=A_{32}A_{13}A_{21}A_{21}A_{13}A_{21}A_{21}$ \item [24.] $T_{\tilde w_{x_{24}}}=A_{23}A_{13}A_{21}A_{21}A_{13}A_{21}A_{21}$ \end{itemize} \end{multicols} }} We recall that $\grC(A_{13})=\bar \gra$, $\grC(A_{32})=\bar \grb$ and $\grC(A_{21})=\bar \grg$, or equivalently, in BMR presentation $\grC(A_{13})=\bar t$, $\grC(A_{32})=\bar u$ and $\grC(A_{21})=\bar u^{-1}\bar t^{-1}$ (see table \ref{t2} in Appendix B). We also notice that $\grC(A_{21}A_{21})=\bar \grg^2$ and, therefore, in BMR presentation, $\grC(A_{21}A_{21})=\bar s$. Using again the example of $x_{24}$ we notice that $\bar v_{x_{24}}=\grC(T_{\tilde w_{x_{24}}})=\grC(A_{23}A_{13}A_{21}A_{21}A_{13}A_{21}A_{21})=\bar u^{-1}\bar t \bar s \bar t \bar s$ and, hence, $v_{x_{24}}=u^{-1}tsts$.
Similarly, the elements $v_{x_i}$, $i=1,\dots, 24$ are as follows: \scalebox{0.9}{ \vbox{ \begin{multicols}{3} \begin{itemize}[leftmargin=*] \item[1.] $v_{x_1}=1$ \item[2.] $v_{x_2}=u$ \item[3.] $v_{x_3}=u^{-1}$ \item[4.] $v_{x_4}=s$ \item[5.] $v_{x_5}=us$ \item[6.] $v_{x_6}=u^{-1}s$ \item [7.] $v_{x_7}=t$ \item [8.] $v_{x_8}=ut$ \item[9.] $v_{x_9}=u^{-1}t$ \item[10.] $v_{x_{10}}=st$ \item[11.] $v_{x_{11}}=ust$ \item[12.] $v_{x_{12}}=u^{-1}st$ \item[13.] $v_{x_{13}}=ts$ \item[14.] $v_{x_{14}}=uts$ \item[15.]$v_{x_{15}}=u^{-1}ts$ \item [16.] $v_{x_{16}}=sts$ \item [17.] $v_{x_{17}}=usts$ \item[18.] $v_{x_{18}}=u^{-1}sts$ \item[19.] $v_{x_{19}}=tst$ \item[20.] $v_{x_{20}}=utst$ \item[21.] $v_{x_{21}}=u^{-1}tst$ \item[22.] $v_{x_{22}}=tsts$ \item[23.]$v_{x_{23}}=utsts$ \item [24.] $v_{x_{24}}=u^{-1}tsts$ \end{itemize} \end{multicols} }} Denoting by $u_1:=R+Rs$ the subalgebra of $H$ generated by $s$, by $u_2:=R+Rt$ the subalgebra of $H$ generated by $t$ and by $u_3:=R+Ru+Ru^{-1}$ the subalgebra of $H$ generated by $u$, one may notice that $\sum\limits_{i=1}^{24}Rv_{x_i}=u_3u_2u_1u_2u_1$. We recall that $|Z(G_{15})|=12$ and we set $U=\sum\limits_{k=0}^{11}z^ku_3u_2u_1u_2u_1$, where $z=stutu$ is the generator of the center of the corresponding complex braid group. \end{ex} \section{The Tetrahedral family} \indent In this family we encounter the exceptional groups $G_4$, $G_5$, $G_6$ and $G_7$. We know that the BMR freeness conjecture holds for $G_4$ (see theorem \ref{braidcase}). We prove the conjecture for the rest of the groups belonging to this family, using a case-by-case analysis. Keeping the notations of Chapter 3, let $P_s(X)$ denote the polynomials defining $H$ over $R$.
If we expand the relations $P_s(\grs)=0$, where $\grs$ is a distinguished braided reflection associated to $s$, we obtain equivalent relations of the form \begin{equation}\grs^n=a_{n-1}\grs^{n-1}+\dots+a_1\grs+a_0,\label{ones} \end{equation} where $n$ is the order of $s$, $a_i\in R$ for every $i\in\{1, \dots, n-1\}$ and $a_0 \in R^{\times}$. We multiply $(\ref{ones})$ by $\grs^{-n}$ and since $a_0$ is invertible in $R$ we have: \begin{equation}\grs^{-n}=-a_0^{-1}a_1\grs^{-n+1}-a_0^{-1}a_2\grs^{-n+2}-\dots-a_0^{-1}a_{n-1}\grs^{-1}+a_0^{-1} \label{twos}\end{equation} We multiply ($\ref{ones}$) by a suitable power of $\grs$. Then, for every $m\in \mathbb{N}$ we have: \begin{equation}\grs^{m}\in R\grs^{m-1}+\dots+R\grs^{m-(n-1)}+R^{\times}\grs^{m-n}\label{ooo}\end{equation} Similarly, we multiply ($\ref{twos}$) by a suitable power of $\grs$. Then, for every $m\in \mathbb{N}$, we have: \begin{equation}\grs^{-m}\in R\grs^{-m+1}+\dots+R\grs^{-m+(n-1)}+R^{\times}\grs^{-m+n}.\label{oooo}\end{equation} For the rest of this section we use directly (\ref{ooo}) and (\ref{oooo}). \subsection{The case of $G_5$} \indent Let $R=\mathbb{Z}[u_{s,i}^{\pm},u_{t,j}^{\pm}]_{\substack{1\leq i,j\leq 3}}$ and let $H_{G_5}:=\langle s,t\;|\; stst=tsts,\prod\limits_{i=1}^{3}(s-u_{s,i})=\prod\limits_{j=1}^{3}(t-u_{t,j})=0\rangle$ be the generic Hecke algebra associated to $G_5$. Let $u_1$ be the subalgebra of $H_{G_5}$ generated by $s$ and $u_2$ the subalgebra of $H_{G_5}$ generated by $t$. We recall that $z:=(st)^2=(ts)^2$ generates the center of the associated complex braid group and that $|Z(G_5)|=6$. We set $U=\sum\limits_{k=0}^5(z^ku_1u_2+z^kt^{-1}su_2)$. By the definition of $U$ we have the following remark: \begin{rem} $Uu_2\subset U$. \label{r55} \end{rem} To make it easier for the reader to follow the calculations, we will underline the elements that belong to $U$ by definition.
Moreover, we will use directly remark \ref{r55}; this means that every time we have a power of $t$ at the end of an element we may ignore it. To remind the reader of this, we put parentheses around the part of the element we consider. Our goal is to prove that $H_{G_5}=U$ (theorem \ref{thm55}). In order to do so, we first need to prove some preliminary results. \begin{lem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item [(i)] For every $k\in\{1,\dots, 4\}$, $z^ktu_1\subset U$. \item[(ii)]For every $k\in\{1,\dots,5\}$, $z^kt^{-1}u_1\subset U$. \item[(iii)]For every $k\in\{1,\dots,4\}$, $z^ku_2u_1\subset U$. \end{itemize} \label{ts55} \end{lem} \begin{proof} By definition, $u_2=R+Rt+Rt^{-1}$. Therefore, (iii) follows from (i) and (ii). \begin{itemize}[leftmargin=0.8cm] \item[(i)] $z^ktu_1=z^kt(R+Rs+Rs^{-1})\subset \underline{z^ku_2}+Rz^k(ts)^2s^{-1}t^{-1}+Rz^kts^{-1}\subset U+\underline{z^{k+1}u_1u_2}+Rz^kts^{-1}$. However, $z^kts^{-1}\in z^k(R+Rt^{-1}+Rt^{-2})s^{-1}\subset U+\underline{z^ku_1}+Rz^k(st)^{-2}st+Rz^kt^{-1}(st)^{-2}st\subset U+\underline{z^{k-1}u_1u_2}+\underline{z^{k-1}t^{-1}su_2}\subset U$. \item[(ii) ] $z^kt^{-1}u_1=z^kt^{-1}(R+Rs+Rs^{-1})\subset \underline{z^ku_2}+\underline{z^kt^{-1}su_2}+Rz^k(st)^{-2}st\subset U+\underline{z^{k-1}u_1u_2}\subset U.$ \qedhere \end{itemize} \end{proof} From now on, we will double-underline the elements described in lemma \ref{ts55} and we will use directly the fact that these elements are inside $U$. The following proposition leads us to the main theorem of this section. \begin{prop} $u_1U\subset U$. \label{su55} \end{prop} \begin{proof} Since $u_1=R+Rs+Rs^2$, it is enough to prove that $sU\subset U$. By the definition of $U$ and by remark \ref{r55}, we can restrict ourselves to proving that $z^kst^{-1}s\in U$, for every $k\in\{0,\dots,5\}$.
We distinguish the following cases: \begin{itemize}[leftmargin=*] \item \underline{$k\in\{0,\dots,3\}$}: $\small{\begin{array}[t]{lcl} z^kst^{-1}s&\in&z^ks(R+Rt+Rt^2)s\\ &\in&\underline{z^ku_1}+Rz^k(st)^2t^{-1}+Rz^k(st)^2t^{-1}s^{-1}ts\\ &\in&U+\underline{z^{k+1}u_2}+Rz^{k+1}t^{-1}(R+Rs+Rs^2)ts\\ &\in&U+\underline{z^{k+1}u_1}+Rz^{k+1}t^{-1}(st)^2t^{-1}+Rz^{k+1}t^{-1}s(st)^2t^{-1}\\ &\in&U+\underline{z^{k+2}u_2}+\underline{z^{k+2}t^{-1}su_2}. \end{array}}$ \item \underline{$k\in\{4,5\}$}: $\small{\begin{array}[t]{lcl}z^kst^{-1}s&\in&z^k(R+Rs^{-1}+Rs^{-2})t^{-1}(R+Rs^{-1}+Rs^{-2})\\ &\in&\underline{\underline{z^kt^{-1}u_1}}+\underline{z^ku_1u_2}+Rz^ks^{-1}t^{-1}s^{-1}+Rz^ks^{-1}t^{-1}s^{-2}+ Rz^ks^{-2}t^{-1}s^{-1}+\\&&+Rz^ks^{-2}t^{-1}s^{-2}\\ &\in&U+Rz^k(ts)^{-2}t+Rz^k(ts)^{-2}ts^{-1}+Rz^ks^{-1}(ts)^{-2}t+ Rz^ks^{-1}(ts)^{-2}ts^{-1}\\ &\in&U+\underline{z^{k-1}u_2}+\underline{\underline{z^{k-1}tu_1}}+\underline{z^{k-1}u_1u_2}+Rz^{k-1}s^{-1}(R+Rt^{-1}+Rt^{-2})s^{-1}\\ &\in&U+\underline{z^{k-1}u_1}+Rz^{k-1}(ts)^{-2}t+Rz^{k-1}(ts)^{-2}ts(st)^{-2}st\\ &\in&U+\underline{z^{k-2}u_2}+\underline{\underline{(z^{k-3}tu_1)t}}. \end{array}}$\\ \qedhere \end{itemize} \end{proof} We can now prove the main theorem of this section. \begin{thm} $H_{G_5}=U$. \label{thm55} \end{thm} \begin{proof} Since $1\in U$, it is enough to prove that $U$ is a left-sided ideal of $H_{G_5}$. For this purpose, one may check that $sU$ and $tU$ are subsets of $U$. However, by proposition \ref{su55} we restrict ourselves to proving that $tU\subset U$. By the definition of $U$ we have that $tU\subset \sum\limits_{k=0}^5(z^ktu_1u_2+\underline{z^ku_1u_2})\subset U+\sum\limits_{k=0}^5z^ktu_1u_2.$ Therefore, by remark \ref{r55} it will be sufficient to prove that, for every $k\in\{0,\dots,5\}$, $z^ktu_1\subset U$. However, this holds for every $k\in\{1,\dots,4\}$, by lemma \ref{ts55}(iii).
For $k=0$ we have: $tu_1=(ts)^2s^{-1}t^{-1}u_1\subset u_1(\underline{\underline{zt^{-1}u_1}})\subset u_1U\stackrel{\ref{su55}}{\subset}U$. It remains to prove the case where $k=5$. We have: $$\small{\begin{array}{lcl} z^5tu_1&=&z^5t(R+Rs^{-1}+Rs^{-2})\\ &\subset&\underline{z^5u_2}+Rz^5t(ts)^{-2}tst+Rz^5(R+Rt^{-1}+Rt^{-2})s^{-2}\\ &\subset&U+\underline{\underline{(z^4u_2u_1)t}}+\underline{z^5u_1}+\underline{\underline{z^5t^{-1}u_1}}+Rz^5t^{-1}(st)^{-2}sts^{-1}\\ &\subset&U+Rz^4t^{-1}(R+Rs^{-1}+Rs^{-2})ts^{-1}\\ &\subset&U+\underline{z^4u_1}+Rz^4(st)^{-2}st^2s^{-1}+Rz^4(st)^{-2}sts^{-1}ts^{-1}\\ &\subset&U+u_1\underline{\underline{z^3u_2u_1}}+Rz^3sts^{-1}(R+Rt^{-1}+Rt^{-2})s^{-1}\\ &\subset&U+u_1U+u_1\underline{\underline{z^3tu_1}}+Rz^3st(ts)^{-2}t+Rz^3st(ts)^{-2}ts(st)^{-2}st\\ &\subset&U+u_1U+\underline{z^2u_1u_2}+u_1\underline{\underline{(zu_2u_1)t}}\\ &\subset&U+u_1U. \end{array}}$$ The result follows from proposition \ref{su55}. \end{proof} \begin{cor} The BMR freeness conjecture holds for the generic Hecke algebra $H_{G_5}$. \end{cor} \begin{proof} By theorem \ref{thm55} we have that $H_{G_5}=U$. Therefore, the result follows from proposition \ref{BMR PROP} since, by definition, $U$ is generated as $R$-module by $|G_5|=72$ elements. \end{proof} \subsection{The case of $G_6$} \indent Let $R=\mathbb{Z}[u_{s,i}^{\pm},u_{t,j}^{\pm}]_{\substack{1\leq i\leq 2 \\1\leq j\leq 3}}$ and let $H_{G_6}:=\langle s,t\;|\; ststst=tststs,\prod\limits_{i=1}^{2}(s-u_{s,i})=\prod\limits_{j=1}^{3}(t-u_{t,j})=0\rangle$ be the generic Hecke algebra associated to $G_6$. Let $u_1$ be the subalgebra of $H_{G_6}$ generated by $s$ and $u_2$ the subalgebra of $H_{G_6}$ generated by $t$. We recall that $z:=(st)^3=(ts)^3$ generates the center of the associated complex braid group and that $|Z(G_6)|=4.$ We set $U=\sum\limits_{k=0}^3z^ku_2u_1u_2$. By the definition of $U$ we have the following remark: \begin{rem} $u_2Uu_2\subset U$.
\label{r66} \end{rem} Our goal is to prove that $H_{G_6}=U$ (theorem \ref{thm66}). In order to do so, we first need to prove some preliminary results. \begin{lem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item [(i)] For every $k\in\{0,1,2\}$, $z^ku_1tu_1\subset U$. \item[(ii)]For every $k\in\{1,2,3\}$, $z^ku_1t^{-1}u_1\subset U$. \item[(iii)]For every $k\in\{1,2\}$, $z^ku_1u_2u_1\subset U$. \end{itemize} \label{sts66} \end{lem} \begin{proof} By definition, $u_2=R+Rt+Rt^{-1}$. Therefore, we only need to prove (i) and (ii). \begin{itemize} [leftmargin=0.8cm] \item[(i)]$z^ku_1tu_1=z^k(R+Rs)t(R+Rs)\subset z^ku_2u_1u_2+z^ksts\subset U+z^k(st)^3t^{-1}s^{-1}t^{-1}\subset U+z^{k+1}u_2u_1u_2$. The result follows from the definition of $U$. \item[(ii)] $z^ku_1t^{-1}u_1=z^k(R+Rs^{-1})t^{-1}(R+Rs^{-1})\subset z^ku_2u_1u_2+z^ks^{-1}t^{-1}s^{-1}\subset U+z^k(ts)^{-3}tst\subset U+z^{k-1}u_2u_1u_2\subset U$. \qedhere \end{itemize} \end{proof} We can now prove the main theorem of this section. \begin{thm} $H_{G_6}=U$. \label{thm66} \end{thm} \begin{proof} Since $1\in U$, it is enough to prove that $U$ is a left-sided ideal of $H_{G_6}$. For this purpose, one may check that $sU$ and $tU$ are subsets of $U$. However, by the definition of $U$, we only have to prove that $sU\subset U$. By the definition of $U$ and by remark \ref{r66}, we must prove that for every $k\in\{0,\dots,3\}$, $z^ksu_2u_1\subset U$. However, this holds for every $k\in\{1,2\}$, by lemma \ref{sts66}(iii). For $k=0$ we have: $su_2u_1\subset s(R+Rt+Rt^2)(R+Rs)\subset u_2u_1u_2+u_1tu_1+Rst^2s$. By the definition of $U$ and lemma \ref{sts66}(i), it will be sufficient to prove that $st^2s\in U$. We have: $$\small{\begin{array}[t]{lcl}st^2s&=&(st)^3t^{-1}s^{-1}t^{-1}s^{-1}ts\\ &\in& zu_2s^{-1}t^{-1}(R+Rs)ts\\ &\in&zu_2u_1u_2+zu_2s^{-1}t^{-1}(st)^3t^{-1}s^{-1}t^{-1}\\ &\in& U+u_2(z^2u_1u_2u_1)u_2. 
\end{array}}$$ The result follows from lemma \ref{sts66}(iii) and remark \ref{r66}. It remains to prove the case where $k=3$. We have: $z^3su_2u_1\subset z^3s(R+Rt^{-1}+Rt^{-2})(R+Rs^{-1})\subset z^3u_2u_1u_2+z^3u_1t^{-1}u_1+Rz^3st^{-2}s^{-1}$. By the definition of $U$ and lemma \ref{sts66}(ii), we need to prove that $z^3st^{-2}s^{-1}\in U$. We have: $$\small{\begin{array}[t]{lcl} z^3st^{-2}s^{-1}&\in&z^3(R+Rs^{-1})t^{-2}s^{-1}\\ &\in&z^3u_2u_1u_2+Rz^3(ts)^{-3}tstst^{-1}s^{-1}\\ &\in&U+z^2u_2st(R+Rs^{-1})t^{-1}s^{-1}\\ &\in&U+z^2u_2u_1u_2+z^2u_2st(ts)^{-3}tst\\ &\in&U+u_2(zu_1u_2u_1)u_2. \end{array}}$$ The result follows again from lemma \ref{sts66}(iii) and remark \ref{r66}. \end{proof} \begin{cor} The BMR freeness conjecture holds for the generic Hecke algebra $H_{G_6}$. \end{cor} \begin{proof} By theorem \ref{thm66} we have that $H_{G_6}=U$. By definition, $U=\sum\limits_{k=0}^3z^ku_2u_1u_2$. We expand $u_1$ as $R+Rs$ and we have that $U=\sum\limits_{k=0}^3(z^ku_2+z^ku_2su_2)$. Therefore, $U$ is generated as $u_2$-module by 16 elements. Since $u_2$ is generated as $R$-module by 3 elements, we have that $H_{G_6}$ is generated as $R$-module by $|G_6|=48$ elements and the result follows from proposition \ref{BMR PROP}. \end{proof} \subsection{The case of $G_7$} Let $R=\mathbb{Z}[u_{s,i}^{\pm},u_{t,j}^{\pm},u_{u,l}^{\pm}]_{\substack{1\leq i\leq 2 \\1\leq j,l\leq 3}}$. We also let $$H_{G_{7}}=\langle s,t,u\;|\; stu=tus=ust, \;\prod\limits_{i=1}^{2}(s-u_{s,i})=\prod\limits_{j=1}^{3}(t-u_{t,j})=\prod\limits_{l=1}^{3}(u-u_{u,l})=0\rangle$$ be the generic Hecke algebra associated to $G_{7}$. Let $u_1$ be the subalgebra of $H_{G_{7}}$ generated by $s$, $u_2$ the subalgebra of $H_{G_{7}}$ generated by $t$ and $u_3$ the subalgebra of $H_{G_{7}}$ generated by $u$. We recall that $z:=stu=tus=ust$ generates the center of the associated complex braid group and that $|Z(G_7)|=12$.
We set $U=\sum\limits_{k=0}^{11}(z^ku_3u_2+z^ktu^{-1}u_2).$ By the definition of $U$, we have the following remark. \begin{rem} $Uu_2 \subset U$. \label{r77} \end{rem} To make it easier for the reader to follow the calculations, we will underline the elements that belong to $U$ by definition. Moreover, we will use directly remark \ref{r77}; this means that every time we have a power of $t$ at the end of an element, we may ignore it. In order to remind the reader of this, we put parentheses around the part of the element we consider. Our goal is to prove that $H_{G_{7}}=U$ (theorem \ref{thm77}). In order to do so, we first need to prove some preliminary results. \begin{lem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)]For every $k\in\{0,\dots,10\}$, $z^ku_1\subset U$. \item [(ii)] For every $k\in\{0,\dots,9\}$, $z^ku_2u\subset U$. \item[(iii)]For every $k\in\{1,\dots,10\}$, $z^ku_1u_3\subset U$. \label{l77} \end{itemize} \end{lem} \begin{proof} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] $z^ku_1=z^k(R+Rs)\subset \underline{z^ku_3}+z^k(stu)u^{-1}t^{-1}\subset U+\underline{z^{k+1}u_3u_2}$. \item[(ii)] $\begin{array}[t]{lcl} z^ku_2u&=&z^k(R+Rt+Rt^2)u\\ &\subset& \underline{z^ku_3}+z^k(tus)s^{-1}+z^kt(tus)s^{-1}\\ &\subset& U+z^{k+1}u_1+z^{k+1}t(R+Rs)\\ &\stackrel{(i)}{\subset}&U+\underline{z^{k+1}u_2}+z^{k+1}t(stu)u^{-1}t^{-1}\\&\subset& U+\underline{z^{k+2}tu^{-1}u_2}. \end{array}$ \item[(iii)] $z^ku_1u_3=z^k(R+Rs^{-1})u_3\subset \underline{z^ku_3}+z^k(s^{-1}u^{-1}t^{-1})tu_3\subset U+z^{k-1}tu_3$. The result follows from the definition of $U$ and (ii), if we expand $u_3$ as $R+Ru^{-1}+Ru$. \qedhere \end{itemize} \end{proof} From now on, we will double-underline the elements described in lemma \ref{l77} and we will use directly the fact that these elements are inside $U$. \begin{prop}$u_3U\subset U$.
\label{pr77} \end{prop} \begin{proof} Since $u_3=R+Ru+Ru^2$, it is enough to prove that $uU\subset U$. By the definition of $U$ and remark \ref{r77} it will be sufficient to prove that for every $k\in\{0,\dots,11\}$, $z^kutu^{-1}\in U$. We distinguish the following cases: \begin{itemize}[leftmargin=*] \item \underline{$k\in\{0,\dots,7\}$}: $\small{\begin{array}[t]{lcl} z^kutu^{-1}&\in&z^kut(R+Ru+Ru^2)\\ &\in&\underline{z^ku_3u_2}+z^ku(tus)s^{-1}+z^ku(tus)s^{-1}u\\ &\in&U+Rz^{k+1}u(R+Rs)+Rz^{k+1}u(R+Rs)u\\ &\in&U+\underline{z^{k+1}u_3}+Rz^{k+1}(ust)t^{-1}+Rz^{k+1}(ust)t^{-1}u\\ &\in&U+\underline{z^{k+2}u_2}+\underline{\underline{z^{k+2}u_2u}}. \end{array}}$ \item \underline{$k\in\{8,\dots,11\}$}: $\small{\begin{array}[t]{lcl} z^kutu^{-1}&\in&z^ku(R+Rt^{-1}+Rt^{-2})u^{-1}\\ &\in&\underline{z^ku_3}+Rz^ku(t^{-1}s^{-1}u^{-1})usu^{-1}+Rz^kut^{-2}(u^{-1}t^{-1}s^{-1})st\\ &\in&U+z^{k-1}u_3(R+Rs^{-1})u^{-1}+Rz^{k-1}ut^{-2}(R+Rs^{-1})t\\ &\in&U+\underline{z^{k-1}u_3}+Rz^{k-1}u_3(s^{-1}u^{-1}t^{-1})t+\underline{z^{k-1}u_3u_2}+ Rz^{k-1}ut^{-1}(t^{-1}s^{-1}u^{-1})ut\\ &\in&U+\underline{z^{k-2}u_3u_2}+Rz^{k-2}(R+Ru^{-1}+Ru^{-2})t^{-1}ut\\ &\in&U+\underline{\underline{(z^{k-2}u_2u)t}}+Rz^{k-2}(u^{-1}t^{-1}s^{-1})sut+Rz^{k-2}u^{-1}(u^{-1}t^{-1}s^{-1})sut\\ &\in&U+\underline{\underline{(z^{k-3}u_1u_3)t}}+Rz^{k-3}u^{-1}s(R+Ru^{-1}+Ru^{-2})t\\ &\in&U+Rz^{k-3}u^{-1}(stu)u^{-1}+Rz^{k-3}u^{-1}(R+Rs^{-1})u^{-1}t+ Rz^{k-3}u^{-1}(R+Rs^{-1})u^{-2}t\\ &\in&U+\underline{z^{k-2}u_3}+\underline{z^{k-3}u_3u_2}+Rz^{k-3}u^{-1}(s^{-1}u^{-1}t^{-1})t^2+Rz^{k-3}u^{-1}(s^{-1}u^{-1}t^{-1})tu^{-1}t\\ &\in&U+\underline{z^{k-4}u_3u_2}+Rz^{k-4}u^{-1}(R+Rt^{-1}+Rt^{-2})u^{-1}t\\ &\in&U+\underline{z^{k-4}u_3u_2}+Rz^{k-4}(u^{-1}t^{-1}s^{-1})su^{-1}t+ Rz^{k-4}(u^{-1}t^{-1}s^{-1})st^{-1}u^{-1}t\\ &\in&U+\underline{\underline{(z^{k-5}u_1u_3)t}}+Rz^{k-5}s(t^{-1}s^{-1}u^{-1})usu^{-1}t\\ &\in&U+Rz^{k-6}su(R+Rs^{-1})u^{-1}t\\ &\in&U+\underline{\underline{(z^{k-6}u_1)t}}+Rz^{k-6}su(s^{-1}u^{-1}t^{-1})t^2\\ 
&\in&U+\underline{\underline{(z^{k-7}u_1u_3)t}}. \end{array}}$\\ \qedhere \end{itemize} \end{proof} We can now prove the main theorem of this section. \begin{thm} $H_{G_7}=U$. \label{thm77} \end{thm} \begin{proof} Since $1\in U$, it will be sufficient to prove that $U$ is a left-sided ideal of $H_{G_7}$. For this purpose, one may check that $sU$, $tU$ and $uU$ are subsets of $U$. However, by proposition \ref{pr77} we only have to prove that $tU$ and $sU$ are subsets of $U$. We recall that $z=stu$, therefore $s=zu^{-1}t^{-1}$ and $s^{-1}=z^{-1}tu$. We notice that $U= \sum\limits_{k=0}^{10}z^k(u_3u_2+tu^{-1}u_2)+z^{11}(u_3u_2+tu^{-1}u_2).$ Hence, \\ $\small{\begin{array}[t]{lcl}sU&\subset&\sum\limits_{k=0}^{10} z^ks(u_3u_2+tu^{-1}u_2)+ z^{11}s(u_3u_2+tu^{-1}u_2)\\ &\subset& \sum\limits_{k=0}^{10}z^{k+1}u^{-1}t^{-1}(u_3u_2+tu^{-1}u_2)+z^{11}(R+Rs^{-1})(u_3u_2+ tu^{-1}u_2)\\ &\subset& \sum\limits_{k=0}^{10}u^{-1}t^{-1}(\underline{z^{k+1}u_3u_2}+\underline{z^{k+1}tu^{-1}u_2})+ \underline{z^{11}u_3u_2}+\underline{z^{11}tu^{-1}u_2}+z^{11}s^{-1}u_3u_2+ z^{11}s^{-1}tu^{-1}u_2\\ &\subset&u_3u_2U+z^{10}tu_3u_2+ z^{10}tutu^{-1}u_2\\ &\subset&u_3u_2U+ t\underline{z^{10}u_3u_2}+tu(\underline{z^{10}tu^{-1}u_2})\\ &\subset&u_3u_2u_3U. \end{array}}$\\\\ By proposition \ref{pr77} we have that $u_3U\subset U$. If we also suppose that $u_2U\subset U$ then, obviously, we have $tU\subset U$ but we also have $sU\subset U$ (since $sU\subset u_3u_2u_3U$). Hence, in order to prove that $U=H_{G_7}$ we restrict ourselves to proving that $u_2U \subset U$. By definition, $u_2=R+Rt^{-1}+Rt^{-2}$, therefore it will be sufficient to prove that $t^{-1}U\subset U$. By the definition of $U$ and remark \ref{r77}, this is the same as proving that, for every $k\in\{0,\dots,11\}$, $z^kt^{-1}u_3\subset U$. For $k\in\{2,\dots,11\}$ the result is obvious, since $z^kt^{-1}u_3=z^k(t^{-1}s^{-1}u^{-1})usu_3\subset u_3(\underline{\underline{z^{k-1}u_1u_3}})\subset u_3U\stackrel{\ref{pr77}}{\subset}U$. 
For $k\in\{0,1\}$, we have: $$\small{\begin{array}{lcl} z^kt^{-1}u_3&=&z^kt^{-1}(R+Ru+Ru^2)\\ &\subset& \underline{z^ku_2}+\underline{\underline{z^ku_2u}}+z^k(R+Rt+Rt^2)u^2\\ &\subset&U+\underline{z^ku_3}+Rz^kt(R+Ru+Ru^{-1})+Rz^kt(tus)s^{-1}u\\ &\subset&U+\underline{z^ku_2}+\underline{\underline{z^ku_2u}}+\underline{z^ktu^{-1}u_2}+Rz^{k+1}(tus)s^{-1}u^{-1}s^{-1}u\\ &\subset&U+Rz^{k+2}(R+Rs)u^{-1}(R+Rs)u\\ &\subset&U+\underline{\underline{z^{k+2}u_1}}+u_3\underline{\underline{z^{k+2}u_1u_3}}+Rz^{k+2}su^{-1}su\\ &\subset&U+u_3U+Rz^{k+2}s(R+Ru+Ru^2)su\\ &\subset&U+u_3U+\underline{\underline{z^{k+2}u_1u_3}}+Rz^{k+2}s(ust)t^{-1}u+Rz^{k+2}su(ust)t^{-1}u\\ &\subset&U+u_3U+Rz^{k+3}(stu)u^{-1}t^{-2}u+Rz^{k+3}su(R+Rt+Rt^2)u\\ &\subset&U+u_3U+u_3\underline{\underline{z^{k+4}u_2u}}+\underline{\underline{z^{k+3}u_1u_3}}+Rz^{k+3}su(tus)s^{-1}+Rz^{k+3}sut(tus)s^{-1}\\ &\subset&U+u_3U+Rz^{k+4}su(R+Rs)+Rz^{k+4}sut(R+Rs)\\ &\subset&U+u_3U+\underline{\underline{z^{k+4}u_1u_3}}+Rz^{k+4}s(ust)t^{-1}+ \underline{\underline{(z^{k+4}u_1u_3)t}}+Rz^{k+4}(stu)u^{-1}t^{-1}uts\\ &\subset&U+u_3U+\underline{\underline{(z^{k+5}u_1)t}}+z^{k+5}u_3(R+Rt+Rt^2)uts\\ &\subset&U+u_3U+z^{k+5}u_3t(stu)u^{-1}t^{-1}+z^{k+5}u_3(tus)s^{-1}ts+ z^{k+5}u_3t(tus)s^{-1}ts\\ &\subset&U+u_3U+u_3\underline{z^{k+5}tu^{-1}u_2}+z^{k+5}u_3(R+Rs)ts+ z^{k+6}u_3t(R+Rs)ts\\ &\subset&U+u_3U+z^{k+5}u_3t(stu)u^{-1}t^{-1}+z^{k+5}u_3(stu)u^{-1}s+ z^{k+6}u_3t^2(R+Rs^{-1})+\\&&+z^{k+6}u_3t(stu)u^{-1}s\\ &\subset&U+u_3U+u_3\underline{z^{k+6}tu^{-1}u_2}+u_3\underline{z^{k+6}u_1}+\underline{z^{k+6}u_3u_2}+z^{k+6}u_3t^2(s^{-1}u^{-1}t^{-1})tu+\\&&+ z^{k+6}u_3t(R+Ru+Ru^2)s\\ &\subset&U+u_3U+u_3\underline{\underline{z^{k+5}u_2u}}+z^{k+6}u_3t(stu)u^{-1}t^{-1}+z^{k+6}u_3(tus)+ z^{k+6}u_3(tus)s^{-1}(ust)t^{-1}\\ &\subset&U+u_3U+u_3\underline{z^{k+7}tu^{-1}u_2}+\underline{z^{k+7}u_3}+u_3\underline{\underline{(z^{k+8}u_1)t^{-1}}}\\ &\subset&U+u_3U. \end{array}}$$ The result follows from proposition \ref{pr77}. 
\end{proof} \begin{cor} The BMR freeness conjecture holds for the generic Hecke algebra $H_{G_7}$. \end{cor} \begin{proof} By theorem \ref{thm77} we have that $H_{G_7}=U$. The result follows from proposition \ref{BMR PROP}, since by definition $U$ is generated as a right $u_2$-module by 48 elements and, hence, as $R$-module by $|G_7|=144$ elements (recall that $u_2$ is generated as $R$-module by 3 elements). \end{proof} \section{The Octahedral family} \indent In this family we encounter the exceptional groups $G_8$, $G_9$, $G_{10}$, $G_{11}$, $G_{12}$, $G_{13}$, $G_{14}$ and $G_{15}$. In Chapter 2 we proved that the BMR freeness conjecture holds for $G_8$ (see corollary \ref{G8}). Moreover, we know the validity of the conjecture for the group $G_{12}$ (see theorem \ref{2case}). In this section we prove the conjecture for the rest of the exceptional groups in this family using a case-by-case analysis. We also reprove the validity of the BMR freeness conjecture for the exceptional group $G_{12}$. As in the tetrahedral case, we use directly the relations (\ref{ooo}) and (\ref{oooo}). \subsection{The case of $G_9$} \indent Let $R=\mathbb{Z}[u_{s,i}^{\pm},u_{t,j}^{\pm}]_{\substack{1\leq i\leq 2 \\1\leq j\leq 4}}$ and let $H_{G_{9}}=\langle s,t\;|\; ststst=tststs, \prod\limits_{i=1}^{2}(s-u_{s,i})=\prod\limits_{j=1}^{4}(t-u_{t,j})=0\rangle$ be the generic Hecke algebra associated to $G_{9}$. Let $u_1$ be the subalgebra of $H_{G_{9}}$ generated by $s$ and $u_2$ be the subalgebra of $H_{G_{9}}$ generated by $t$. We recall that $z:=(st)^3=(ts)^3$ generates the center of the associated complex braid group and that $|Z(G_9)|=8$. We set $U=\sum\limits_{k=0}^7(z^ku_2u_1u_2+z^ku_2st^{-2}s)$. By the definition of $U$, we have the following remark. \begin{rem} $u_2U\subset U$. \label{rem9} \end{rem} From now on, we will underline the elements that by definition belong to $U$. 
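The rank bookkeeping in the corollary for $H_{G_7}$ above can be sanity-checked with a few lines of arithmetic. The sketch below is ours, and the tally of four generators per power of $z$ is our own reading of the decomposition $U=\sum_{k=0}^{11}z^k(u_3u_2+tu^{-1}u_2)$ used in the proof of theorem \ref{thm77}:

```python
# Arithmetic sanity check (ours) of the module counts for H_{G_7}.
# U = sum over k in {0,...,11} of z^k(u_3*u_2 + t*u^{-1}*u_2), so as a
# right u_2-module it is spanned by z^k, z^k*u, z^k*u^2 and z^k*t*u^{-1}.
powers_of_z = 12        # k ranges over {0, ..., 11}
gens_per_power = 3 + 1  # u_3 = R + Ru + Ru^2 gives 3 generators, t*u^{-1} gives 1
rank_u2 = 3             # u_2 is generated as an R-module by 3 elements

module_generators = powers_of_z * gens_per_power
print(module_generators)            # 48 generators as a right u_2-module
print(module_generators * rank_u2)  # 144 = |G_7| elements as an R-module
```

The same pattern (number of powers of $z$, times generators per power, times the $R$-rank of the parabolic subalgebra) underlies the analogous counts for the other groups treated in this chapter.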
Moreover, we will use remark \ref{rem9} directly; this means that every time we have a power of $t$ at the beginning of an element, we may ignore it. To remind the reader of this, we put a parenthesis around the part of the element under consideration. Our goal is to prove that $H_{G_{9}}=U$ (theorem \ref{thm9}). The next proposition provides the necessary conditions for this to be true. \begin{prop} If $z^ku_1u_2u_1\subset U$ and $z^kst^{-2}st\in U$ for every $k\in\{0,\dots,7\}$, then $H_{G_9}=U$. \label{pr9} \end{prop} \begin{proof} Since $1\in U$, it is enough to prove that $U$ is a right-sided ideal of $H_{G_9}$. For this purpose, it suffices to check that $Us$ and $Ut$ are subsets of $U$. By the definition of $U$ we have that $Us\subset \sum\limits_{k=0}^7u_2(z^ku_1u_2u_1)$ and $Ut\subset \sum\limits_{k=0}^7(\underline{z^ku_2u_1u_2}+z^ku_2st^{-2}st)\subset U+\sum\limits_{k=0}^7u_2(z^kst^{-2}st).$ The result follows from the hypothesis and remark \ref{rem9}. \end{proof} As a first step, we prove the conditions of the above proposition for a smaller range of values of $k$. \begin{lem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] For every $k\in\{0,\dots,6\}$, $z^ku_1tu_1\subset U$. \item[(ii)] For every $k\in\{1,\dots,7\}$, $z^ku_1t^{-1}u_1\subset U$. \item[(iii)] For every $k\in\{1,\dots,6\}$, $z^ku_1u_2u_1\subset U$. \item[(iv)]For every $k\in\{0,\dots,5\}$, $z^ku_1t^2u_1t\subset U+z^{k+2}u_2u_1u_2u_1$. Therefore, for every $k\in\{0,\dots,4\}$, $z^ku_1t^2st\subset U$. \item[(v)]For every $k\in\{1,\dots,5\}$, $z^ku_1u_2u_1t\subset U+z^{k+2}u_2u_1u_2u_1$. Therefore, for every $k\in\{0,\dots,4\}$, $z^ku_1u_2u_1t\subset U$.
\qedhere \end{itemize} \label{lem9} \end{lem} \begin{proof} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item [(i)] $z^ku_1tu_1=z^k(R+Rs)t(R+Rs)\subset \underline{z^ku_2u_1u_2}+Rz^ksts\subset U+Rz^k(st)^3t^{-1}s^{-1}t^{-1}\subset U+\underline{z^{k+1}u_2u_1u_2}$. \item [(ii)] $z^ku_1t^{-1}u_1=z^k(R+Rs^{-1})t^{-1}(R+Rs^{-1})\subset \underline{z^ku_2u_1u_2}+Rz^ks^{-1}t^{-1}s^{-1}\subset U+Rz^k(ts)^{-3}tst\subset U+\underline{z^{k-1}u_2u_1u_2}$. \item[(iii)]Since $u_2=R+Rt+Rt^{-1}+Rt^{-2}$, by the definition of $U$ and by (i) and (ii) we only have to prove that, for every $k\in\{1,\dots,6\}$, $z^ku_1t^{-2}u_1 \subset U$. Indeed, $z^ku_1t^{-2}u_1=z^k(R+Rs)t^{-2}(R+Rs)\subset \underline{z^ku_2u_1u_2}+\underline{z^ku_2st^{-2}s}$. \item [(iv)] $\small{\begin{array}[t]{lcl} z^ku_1t^2u_1t&=&z^ku_1t^2(R+Rs)t\\ &\subset& \underline{z^ku_1u_2}+z^ku_1t^2st\\ &\subset& U+z^k(R+Rs)t^2st\\ &\subset& U+ \underline{z^ku_2u_1u_2}+Rz^kst(ts)^3s^{-1}t^{-1}s^{-1}\\ &\subset& U+Rz^{k+1}st(R+Rs)t^{-1}s^{-1}\\ &\subset& U+\underline{z^{k+1}u_2}+Rz^{k+1}(st)^3t^{-1}s^{-1}t^{-2}s^{-1}\\ &\subset& U+z^{k+2}u_2u_1u_2u_1. \end{array}}$ Since $z^{k+2}u_2u_1u_2u_1=u_2(z^{k+2}u_1u_2u_1)$ and since, for every $k\in\{0,\dots, 4\}$, we have $k+2\in \{2,\dots,6\}$, we can use (iii) and we have that for every $k\in\{0,\dots,4\}$, $z^ku_1t^2st\subset U$. \item [(v)] $\small{\begin{array}[t]{lcl} z^ku_1u_2u_1t&=&z^ku_1(R+Rt+Rt^{-1}+Rt^2)u_1t\\ &\subset& \underline{z^ku_1u_2}+ z^k(R+Rs)t(R+Rs)t+z^k(R+Rs^{-1})t^{-1}(R+Rs^{-1})t+z^ku_1t^2u_1t\\ &\stackrel{(iv)}{\subset}& U+\underline{z^ku_2u_1u_2}+Rz^kstst+Rz^ks^{-1}t^{-1}s^{-1}t+z^{k+2}u_2u_1u_2u_1\\ &\subset& U+Rz^k(st)^3t^{-1}s^{-1}+Rz^k(ts)^{-3}tst^2+z^{k+2}u_2u_1u_2u_1\\ &\subset& U+\underline{(z^{k+1}+z^{k-1})u_2u_1u_2}+z^{k+2}u_2u_1u_2u_1\\ &\subset& U+z^{k+2}u_2u_1u_2u_1. 
\end{array}}$ Using the same arguments as in (iv), we can use (iii) and we have that for every $k\in\{0,\dots,4\}$, $z^ku_1u_2u_1t\subset U$. \qedhere \end{itemize} \end{proof} To make it easier for the reader to follow the calculations, we will double-underline the elements described in lemma \ref{lem9} and we will use directly the fact that these elements are inside $U$. The next proposition proves the first condition of proposition \ref{pr9}. \begin{prop}For every $k\in\{0,\dots,7\}$, $z^ku_1u_2u_1\subset U$. \label{prr9} \end{prop} \begin{proof} By lemma \ref{lem9}(iii), it only remains to prove the cases where $k\in\{0,7\}$. We have: \begin{itemize}[leftmargin=*] \item \underline{$k=0$}: $\small{\begin{array}[t]{lcl}u_1u_2u_1&=&(R+Rs)(R+Rt+Rt^{-2}+Rt^3)(R+Rs)\\ &\subset& \underline{u_2u_1u_2}+\underline{\underline{u_1tu_1}}+\underline{Rst^{-2}s}+Rst^3s\\ &\subset&U+Rt^{-1}(ts)^3s^{-1}t^{-1}s^{-1}t^2s\\ &\subset&U+zu_2s^{-1}t^{-1}(R+Rs)t^2s\\ &\subset&U+\underline{\underline{u_2(zu_1tu_1)}}+zu_2(R+Rs)t^{-1}st^2s\\ &\subset&U+\underline{\underline{u_2(zu_1u_2u_1)}}+zu_2s(R+Rt+Rt^2+Rt^3)st^2s\\ &\subset&U+\underline{\underline{u_2(zu_1u_2u_1)}}+zu_2(st)^3t^{-1}s^{-1}ts+zu_2st(ts)^3s^{-1}t^{-1}s^{-1}ts+zu_2st^2(ts)^3s^{-1}t^{-1}s^{-1}ts\\ &\subset&U+\underline{\underline{u_2(z^2u_1tu_1)}}+z^2u_2sts^{-1}t^{-1}(R+Rs)ts+z^2u_2st^2s^{-1}t^{-1}(R+Rs)ts\\ &\subset&U+\underline{z^2u_2u_1u_2}+z^2u_2st(R+Rs)t^{-1}sts+z^2u_2st^2(R+Rs)t^{-1}sts\\ &\subset&U+\underline{\underline{u_2(z^2u_1tu_1)}}+z^2u_2stst^{-1}sts+z^2u_2(st)^3t^{-1}+ z^2u_2st^2st^{-1}sts\\ &\subset&U+z^2u_2stst^{-1}sts+\underline{z^3u_2}+ z^2u_2st^2st^{-1}sts \end{array}}$ It remains to prove that $A:=z^2u_2stst^{-1}sts+ z^2u_2st^2st^{-1}sts$ is a subset of $U$.
For this purpose, we expand $t^{-1}$ as a linear combination of 1, $t$, $t^2$ and $t^3$ and we have: $\small{\begin{array}{lcl} A&\subset&z^2u_2sts(R+Rt+Rt^2+Rt^3)sts+ z^2u_2st^2s(R+Rt+Rt^2+Rt^3)sts\\ &\subset&z^2u_2sts^2ts+z^2u_2(st)^3s+z^2u_2(ts)^3s^{-2}(st)^3t^{-1}+ z^2u_2(ts)^3s^{-1}t(ts)^3s^{-1}t^{-1}+z^2u_2st^2s^2ts+\\&&+Rz^2st(ts)^3+z^2u_2st(ts)^3s^{-1}t^{-1}s^{-1}(ts)^3s^{-1}t^{-1}+z^2u_2st^2st^2(ts)^3s^{-1}t^{-1}\\ &\subset&z^2u_2st(R+Rs)ts+\underline{z^3u_2u_1}+\underline{z^4u_2u_1u_2}+ z^4u_2(R+Rs)t(R+Rs)t^{-1}+z^2u_2st^2(R+Rs)ts+\\&&+\underline{z^3u_2u_1u_2}+z^4u_2sts^{-1}t^{-1}s^{-2}t^{-1}+z^3u_2st^2st^2(R+Rs)t^{-1}\\ &\subset&U+\underline{\underline{u_2(z^2u_1u_2u_1)}}+z^2u_2(st)^3t^{-1}+\underline{z^4u_2u_1u_2}+z^4u_2stst^{-1}+\underline{\underline{u_2(z^2u_1u_2u_1)}}+ z^2u_2st^2sts+\\&&+z^4u_2sts^{-1}t^{-1}(R+Rs^{-1})t^{-1}+\underline{\underline{u_2(z^3u_1t^2st)}}+z^3u_2st^2st^2st^{-1}\\ &\subset&U+\underline{z^3u_2}+z^4u_2(ts)^3s^{-1}t^{-2}+z^2u_2st(ts)^3s^{-1}t^{-1}+z^4u_2st(R+Rs)t^{-2}+z^4u_2st^2(st)^{-3}s+\\&&+z^3u_2st(ts)^3s^{-1}t^{-1}s^{-1}tst^{-1}\\ &\subset&U+\underline{z^5u_2u_1u_2}+z^3u_2st(R+Rs)t^{-1}+\underline{z^4u_2u_1u_2}+z^4u_2(st)^3t^{-1}s^{-1}t^{-3}+ \underline{\underline{u_2(z^3u_1u_2u_1)}}+\\&&+z^4u_2st(R+Rs)t^{-1}s^{-1}tst^{-1}\\ &\subset&U+\underline{(z^3+z^4+z^5)u_2u_1u_2}+z^3u_2(ts)^3s^{-1}t^{-2}+ z^4u_2(ts)^3s^{-1}t^{-2}(R+Rs)tst^{-1}\\ &\subset&U+\underline{z^4u_2u_1u_2}+z^5u_2(st)^{-3}sts^2t^{-1}+z^5u_2s^{-1}t^{-3}(ts)^3s^{-1}t^{-2}\\ &\subset&U+z^4u_2st(R+Rs)t^{-1}+z^6u_2s^{-1}(R+Rt^{-1}+Rt^{-2}+Rt)s^{-1}t^{-2}\\ &\subset&U+\underline{z^4u_2u_1}+z^4u_2(ts)^3s^{-1}t^{-2}+ \underline{z^6u_2u_1u_2}+z^6u_2(st)^{-3}st^{-1}+z^6u_2(st)^{-3}sts^2(ts)^{-3}tst^{-1}+\\&&+z^6u_2(R+Rs)t(R+Rs)t^{-2}\\ &\subset&U+\underline{(z^5+z^6)u_2u_1u_2}+z^4u_2st(R+Rs)tst^{-1}+ z^6u_2stst^{-2}\\ &\subset&U+z^4u_2st^2st^{-1}+z^4u_2(ts)^3+z^6u_2(ts)^3s^{-1}t^{-3}\\ 
&\subset&U+z^4u_2s(R+Rt+Rt^{-1}+Rt^{-2})st^{-1}+\underline{z^5u_2}+\underline{z^7u_2u_1u_2}\\ &\subset&U+\underline{z^4u_2u_1u_2}+z^4u_2(ts)^3s^{-1}t^{-2}+ z^4u_2(R+Rs^{-1})t^{-1}(R+Rs^{-1})t^{-1}+\\&&+z^4u_2(R+Rs^{-1})t^{-2}(R+Rs^{-1})t^{-1}\\ &\subset&U+\underline{(z^4+z^5)u_2u_1u_2}+z^4u_2s^{-1}t^{-1}s^{-1}t^{-1}+ z^4u_2s^{-1}t^{-2}s^{-1}t^{-1}\\ &\subset&U+z^4u_2(st)^{-3}s+z^4u_2s^{-1}t^{-1}(ts)^{-3}sts\\ &\subset&U+\underline{z^3u_2u_1}+z^3u_2s^{-1}t^{-1}(R+Rs^{-1})ts\\ &\subset&U+\underline{z^3u_2}+z^3u_2(st)^{-3}st^2s\\ &\subset& U+\underline{\underline{u_2(z^2u_1u_2u_1)}}. \end{array}}$ \item \underline{$k=7$}: $\small{\begin{array}[t]{lcl}z^7u_1u_2u_1&=&z^7(R+Rs)(R+Rt^{-1}+Rt^{-2}+Rt^{-3})(R+Rs)\\ &\subset& \underline{z^7u_2u_1u_2}+\underline{\underline{z^7u_1t^{-1}u_1}}+\underline{Rz^7st^{-2}s}+Rz^7st^{-3}s\\ &\subset&U+Rz^7(R+Rs^{-1})t^{-3}(R+Rs^{-1})\\ &\subset&U+\underline{z^7u_2u_1u_2}+Rz^7s^{-1}t^{-3}s^{-1}\\ &\subset&U+Rz^7(ts)^{-3}tstst^{-2}s^{-1}\\ &\subset&U+Rz^6tst(R+Rs^{-1})t^{-2}s^{-1}\\ &\subset&U+\underline{\underline{t(z^6u_1u_2u_1)}}+Rz^6t(R+Rs^{-1})ts^{-1}t^{-2}s^{-1}\\ &\subset&U+\underline{\underline{t^2(z^6u_1u_2u_1)}}+Rz^6ts^{-1}(R+Rt^{-1}+Rt^{-2}+Rt^{-3})s^{-1}t^{-2}s^{-1}\\ &\subset&U+\underline{\underline{t(z^6u_1u_2u_1)}}+Rz^6t^2(st)^{-3}st^{-1}s^{-1}+Rz^6ts^{-1}t^{-1}(st)^{-3}stst^{-1}s^{-1}+\\&&+Rz^6ts^{-1}t^{-2}(st)^{-3}stst^{-1}s^{-1}\\ &\subset&U+\underline{\underline{t^2(z^5u_1u_2u_1)}}+Rz^5ts^{-1}t^{-1}(R+Rs^{-1})t(R+Rs^{-1})t^{-1}s^{-1}+\\&&+ Rz^5ts^{-1}t^{-2}st(R+Rs^{-1})t^{-1}s^{-1}\\ &\subset&U+\underline{\underline{t(z^5u_1u_2u_1)}}+Rz^5t(sts)^{-1}t(sts)^{-1}+Rz^5ts^{-1}t^{-2}(R+Rs^{-1})ts^{-1}t^{-1}s^{-1}\\ &\subset&U+Rz^5t(ts)^{-3}tst^3(st)^{-3}st+Rz^5t(ts)^{-3}t+Rz^5ts^{-1}t^{-2}s^{-1}t^2(st)^{-3}st\\ &\subset&U+\underline{\underline{t^2(z^3u_1u_2u_1t)}}+\underline{z^4u_2}+ Rz^4ts^{-1}t^{-2}s^{-1}t^2st. \end{array}}$ It remains to prove that the element $z^4ts^{-1}t^{-2}s^{-1}t^2st$ is inside $U$.
For this purpose, we expand $t^2$ as a linear combination of 1, $t$, $t^{-1}$ and $t^{-2}$ and we have: $\small{\begin{array}{lcl} z^4ts^{-1}t^{-2}s^{-1}t^2st&\in& z^4ts^{-1}t^{-2}s^{-1}(R+Rt+Rt^{-1}+Rt^{-2})st\\ &\in&U+\underline{z^4u_2u_1u_2}+Rz^4ts^{-1}t^{-2}(R+Rs)tst+ Rz^4ts^{-1}t^{-2}s^{-1}t^{-1}(R+Rs^{-1})t+\\&&+Rz^4ts^{-1}t^{-2}s^{-1}t^{-2}(R+Rs^{-1})t\\ &\in&U+\underline{\underline{t(z^4u_1u_2u_1t)}}+Rz^4ts^{-1}t^{-3}(ts)^3s^{-1}+\underline{\underline{t(z^4u_1u_2u_1)}}+ Rz^4ts^{-1}t^{-1}(st)^{-3}st^2+\\&&+Rz^4ts^{-1}t^{-1}(st)^{-3}sts+ Rz^4ts^{-1}t^{-1}(st)^{-3}stst^{-1}s^{-1}t\\ &\in&U+\underline{\underline{t(z^5u_1u_2u_1)}}+Rz^3ts^{-1}t^{-1}(R+Rs^{-1})t^2+ Rz^3ts^{-1}t^{-1}(R+Rs^{-1})ts+\\&&+Rz^3ts^{-1}t^{-1}st(R+Rs^{-1})t^{-1}s^{-1}t\\ &\in&U+\underline{z^3u_2u_1u_2}+Rz^3t^2(st)^{-3}st^3+Rz^3t^2(ts)^{-3}st^2s+\\&&+Rz^3ts^{-1}t^{-1}(R+Rs^{-1})ts^{-1}t^{-1}s^{-1}t\\ &\in&U+\underline{z^2u_2u_1u_2}+\underline{z^2u_2st^2s}+Rz^3ts^{-2}t^{-1}s^{-1}t+Rz^3t^2(st)^{-3}st^3(st)^{-3}st^2\\ &\in&U+Rz^3t(R+Rs^{-1})t^{-1}s^{-1}t+Rzt^2s(R+Rt+Rt^2+Rt^{-1})st^2\\ &\in&U+\underline{(z+z^3)u_2u_1u_2}+Rz^3t^2(st)^{-3}st^2+Rzt(ts)^3s^{-1}t+Rzt(ts)^3s^{-1}t^{-1}s^{-1}tst^2+\\&&+Rzt^2(R+Rs^{-1})t^{-1}(R+Rs^{-1})t^2\\ &\in&U+\underline{(z+z^2)u_2u_1u_2}+Rz^2ts^{-1}t^{-1}(R+Rs)tst^2+ Rzt^2s^{-1}t^{-1}s^{-1}t^2\\ &\in&U+\underline{z^2u_2}+Rz^2ts^{-1}t^{-2}(ts)^3s^{-1}t+Rzt^3(st)^{-3}st^3\\ &\in&U+\underline{\underline{t(z^3u_1u_2u_1t)}}+\underline{u_2u_1u_2}. \end{array}}$\\ \qedhere \end{itemize} \end{proof} \begin{cor} $Uu_1\subset U$ \label{cor9} \end{cor} \begin{proof} By the definition of $U$ we have that $Uu_1\subset \sum\limits_{k=0}^7(z^ku_2u_1u_2u_1+z^ku_2st^{-2}u_1)$. By proposition \ref{prr9}, we only have to prove that for every $k\in\{0,\dots,7\}$, $z^ku_2st^{-2}u_1\subset U$, which follows from the definition of $U$ if we expand $u_1$ as $R+Rs$. 
\end{proof} For the rest of this section, we will use directly corollary \ref{cor9}; this means that every time we have a power of $s$ at the end of an element, we may ignore it, as we did for the powers of $t$ at the beginning of the elements. To remind the reader of this, we again put a parenthesis around the part of the element we consider. \begin{thm}$H_{G_9}=U$. \label{thm9} \end{thm} \begin{proof} By propositions \ref{pr9} and \ref{prr9}, we only have to prove that $z^kst^{-2}st\in U$, for every $k\in\{0,\dots,7\}$. However, by lemma \ref{lem9}(v) we have that $z^kst^{-2}st\in U+u_2(z^{k+2}u_1u_2u_1)$, for every $k\in\{1,\dots, 5\}$. Therefore, by proposition \ref{prr9}, we only have to consider the cases where $k\in\{0\}\cup\{6,7\}$. \begin{itemize}[leftmargin=*] \item \underline{$k=0$}: $\small{\begin{array}[t]{lcl} st^{-2}st&\in&s(R+Rt+Rt^2+Rt^3)st\\ &\in& \underline{u_1u_2}+R(st)^3t^{-1}s^{-1}+Rst(ts)^3s^{-1}t^{-1}s^{-1}+Rst^2(ts)^3s^{-1}t^{-1}s^{-1}\\ &\in&U+\underline{zu_2u_1}+zst(R+Rs)t^{-1}u_1+zst^2(R+Rs)t^{-1}u_1\\ &\in&U+\underline{zu_1}+z(st)^3t^{-1}s^{-1}t^{-2}u_1+\underline{\underline{zu_1u_2u_1}}+zst^2s(R+Rt+Rt^2+Rt^3)u_1\\ &\in&U+\underline{(z^2u_2u_1u_2)u_1}+\underline{\underline{zu_1u_2u_1}}+\underline{\underline{(zu_1t^2u_1t)u_1}}+zst(ts)^3s^{-1}t^{-1}s^{-1}tu_1+ zst(ts)^3s^{-1}t^{-1}s^{-1}t^2u_1\\ &\in&U+z^2st(R+Rs)t^{-1}s^{-1}tu_1+z^2st(R+Rs)t^{-1}s^{-1}t^2u_1\\ &\in&U+\underline{z^2u_2u_1}+z^2(st)^3t^{-1}s^{-1}t^{-2}s^{-1}tu_1+z^2(st)^3t^{-1}s^{-1}t^{-2}s^{-1}t^2u_1\\ &\in&U+\underline{\underline{t^{-1}(z^3u_1u_2u_1t)u_1}}+z^3t^{-1}s^{-1}t^{-2}s^{-1}t^2u_1 \end{array}}$ It remains to prove that the element $z^3t^{-1}s^{-1}t^{-2}s^{-1}t^2$ is inside $U$.
For this purpose, we expand $t^{-2}$ as a linear combination of 1, $t^{-1}$, $t$ and $t^2$ and we have: $$\small{\begin{array}{lcl} z^3t^{-1}s^{-1}t^{-2}s^{-1}t^2 &\in&z^3t^{-1}s^{-1}(R+Rt^{-1}+Rt+Rt^2)s^{-1}t^2\\ &\in&\underline{z^3u_2u_1u_2}+Rz^3(st)^{-3}st^3+ Rz^3t^{-1}(R+Rs)t(R+Rs)t^2+\\&&+ Rz^3t^{-1}(R+Rs)t^2(R+Rs)t^2\\ &\in&U+\underline{\underline{u_2\big((z^2+z^3)u_1u_2\big)}}+ Rz^3t^{-1}stst^2+Rz^3t^{-1}st^2st^2\\ &\in&U+Rz^3t^{-2}(ts)^3s^{-1}t+Rz^3t^{-1}st(ts)^3s^{-1}t^{-1}s^{-1}t\\ &\in&U+\underline{\underline{t^{-2}(z^4u_1u_2)}}+Rz^4t^{-1}st(R+Rs)t^{-1}s^{-1}t\\ &\in&U+\underline{z^4u_1}+Rz^4t^{-2}(ts)^3s^{-1}t^{-2}s^{-1}t\\ &\in&U+z^5u_2(u_1u_2u_1t). \end{array}}$$ However, by lemma \ref{lem9}(v) we have that $z^5u_2(u_1u_2u_1t)\subset u_2U+ \underline{(z^7u_2u_1u_2)u_1}$. The result follows from remark \ref{rem9}. \item \underline{$k\in\{6,7\}$}: $\small{\begin{array}[t]{lcl} z^kst^{-2}st&\in&z^k(R+Rs^{-1})t^{-2}(R+Rs^{-1})t\\ &\in&\underline{z^ku_2u_1u_2}+Rz^{k}s^{-1}t^{-2}s^{-1}t\\ &\in&U+Rz^kt(st)^{-3}stst^{-1}s^{-1}t\\ &\in&U+Rz^{k-1}tst(R+Rs^{-1})t^{-1}s^{-1}t\\ &\in&U+\underline{z^{k-1}u_2}+ Rz^{k-1}tst^2(st)^{-3}st^2\\ &\in&U+Rz^{k-2}ts(R+Rt+Rt^{-1}+Rt^{-2})st^2\\ &\in&U+\underline{z^{k-2}u_2u_1u_2}+Rz^{k-2}(ts)^3s^{-1}t+Rz^{k-2}t(R+Rs^{-1})t^{-1}(R+Rs^{-1})t^2+\\&&+Rz^{k-2}t(R+Rs^{-1})t^{-2}(R+Rs^{-1})t^2\\ &\in&U+\underline{(z^{k-1}+z^{k-2})u_2u_1u_2}+Rz^{k-2}ts^{-1}t^{-1}s^{-1}t^2+Rz^{k-2}ts^{-1}t^{-2}s^{-1}t^2\\ &\in&U+Rz^{k-2}t^2(st)^{-3}st^3+Rz^{k-2}t^2(st)^{-3}stst^{-1}s^{-1}t^2\\ &\in&U+\underline{z^{k-3}u_2u_1u_2}+z^{k-3}u_2st(R+Rs^{-1})t^{-1}s^{-1}t^2\\ &\in&U+\underline{z^{k-3}u_2}+z^{k-3}u_2st^2(st)^{-3}st^3\\ &\in&U+z^{k-4}u_2st^2s(R+Rt+Rt^2+Rt^{-1})\\ &\in&U+\underline{\underline{u_2(z^{k-4}u_1u_2u_1)}}+\underline{\underline{z^{k-4}u_2(u_1t^2u_1t)}}+z^{k-4}u_2st(ts)^3s^{-1}t^{-1}s^{-1}t+\\&&+z^{k-4}u_2st^2(R+Rs^{-1})t^{-1}\\ &\in&U+z^{k-3}u_2st(R+Rs)t^{-1}s^{-1}t+\underline{z^{k-4}u_2u_1u_2}+z^{k-4}u_2st^3(st)^{-3}sts\\ 
&\in&U+\underline{z^{k-3}u_2}+z^{k-3}u_2(ts)^3s^{-1}t^{-2}s^{-1}t+\underline{\underline{u_2(z^{k-5}u_1u_2u_1t)s}}\\ &\in&U+z^{k-2}u_2(u_1u_2u_1t). \end{array}}$\\ Again, by lemma \ref{lem9}(v) we have that $z^{k-2}u_2(u_1u_2u_1t)\subset u_2U+ \underline{(z^ku_2u_1u_2)u_1}$. The result follows from remark \ref{rem9}. \qedhere \end{itemize} \end{proof} \begin{cor} The BMR freeness conjecture holds for the generic Hecke algebra $H_{G_9}$. \end{cor} \begin{proof} By theorem \ref{thm9} we have that $H_{G_9}=U=\sum\limits_{k=0}^7z^k(u_2+u_2su_2+u_2st^{-2})$. The result then follows from proposition \ref{BMR PROP}, since $H_{G_9}$ is generated as left $u_2$-module by 48 elements and, hence, as $R$-module by $|G_9|=192$ elements (recall that $u_2$ is generated as $R$-module by 4 elements). \end{proof} \subsection{The case of $G_{10}$} \indent Let $R=\mathbb{Z}[u_{s,i}^{\pm},u_{t,j}^{\pm}]_{\substack{1\leq i\leq 3 \\1\leq j\leq 4}}$ and let $H_{G_{10}}=\langle s,t\;|\; stst=tsts,\prod\limits_{i=1}^{3}(s-u_{s,i})=\prod\limits_{j=1}^{4}(t-u_{t,j})=0\rangle$ be the generic Hecke algebra associated to $G_{10}$. Let $u_1$ be the subalgebra of $H_{G_{10}}$ generated by $s$ and $u_2$ the subalgebra of $H_{G_{10}}$ generated by $t$. We recall that $z:=(st)^2=(ts)^2$ generates the center of the associated complex braid group and that $|Z(G_{10})|=12$. We set $U=\sum\limits_{k=0}^{11}(z^ku_2u_1+z^ku_2st^{-1}+z^ku_2s^{-1}t+z^ku_2s^{-1}ts^{-1}).$ From now on, we will underline the elements that belong to $U$ by definition. Our goal is to prove that $H_{G_{10}}=U$ (theorem \ref{thm10}). Since $1\in U$, it is enough to prove that $U$ is a right-sided ideal of $H_{G_{10}}$ or, equivalently, that $Us$ and $Ut$ are subsets of $U$. For this purpose, we first need to prove some preliminary results. In the following lemmas we prove that some subsets of $z^ku_2u_1u_2u_1$, where $k$ ranges over a smaller subset of $\{0,\dots,11\}$, are also subsets of $U$.
\begin{lem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item [(i)] For every $k\in\{1,\dots, 10\}$, $z^ku_2u_1u_2\subset U$. \item[(ii)]For every $k\in\{1,\dots, 11\}$, $z^ku_2st^{-1}s\subset U$. \item[(iii)]For every $k\in\{0,\dots,9\}$, $z^ku_2st^2s\subset U$. \item [(iv)]For every $k\in\{1,\dots,9\}$, $z^ku_2su_2s\subset U$. \end{itemize} \label{lem10} \end{lem} \begin{proof} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] $\small{\begin{array}[t]{lcl} z^ku_2u_1u_2&=&z^ku_2(R+Rs+Rs^{-1})u_2\\ &=&\underline{z^ku_2}+z^ku_2s(R+Rt+Rt^{-1}+Rt^2)+z^ku_2s^{-1}(R+Rt+Rt^{-1}+Rt^{-2})\\ &\subset& U+\underline{z^ku_2u_1}+Rz^ku_2(st)^2t^{-1}s^{-1}+\underline{z^ku_2st^{-1}}+ z^ku_2(st)^2t^{-1}s^{-1}t+\underline{z^ku_2s^{-1}t}+\\&&+z^ku_2(ts)^{-2}ts+z^ku_2(ts)^{-2}tst^{-1}\\ &\subset& U+\underline{z^{k+1}u_2u_1}+\underline{z^{k+1}u_2s^{-1}t}+ \underline{z^{k-1}u_2u_1}+\underline{z^{k-1}u_2st^{-1}}. \end{array}}$ \item[(ii)] $\small{\begin{array}[t]{lcl} z^ku_2st^{-1}s&=&z^ku_2st^{-1}(R+Rs^{-1}+Rs^{-2})\\ &\subset&\underline{z^ku_2st^{-1}}+z^ku_2s^2(ts)^{-2}t+ z^ku_2(R+Rs^{-1}+Rs^{-2})t^{-1}s^{-2}\\ &\subset&U+z^{k-1}u_2(R+Rs+Rs^{-1})t+\underline{z^{k}u_2u_1}+z^ku_2(ts)^{-2}ts^{-1}+z^ku_2s^{-1}(ts)^{-2}ts^{-1}\\ &\subset& U+\underline{z^{k-1}u_2u_1}+z^{k-1}u_2(st)^2t^{-1}s^{-1}+\underline{z^{k-1} u_2s^{-1}t}+\underline{z^{k-1}u_2s^{-1}ts^{-1}}\\ &\subset&U+\underline{z^ku_2u_1}. \end{array}}$ \item[(iii)]$\small{\begin{array}[t]{lcl} z^ku_2st^2s&=&z^ku_2st(ts)^2s^{-1}t^{-1}\\ &\subset&U+z^{k+1}u_2st(R+Rs+Rs^{2})t^{-1}\\ &\subset&U+\underline{z^{k+1}u_2u_1}+z^{k+1}u_2(st)^2t^{-2}+z^{k+1}u_2(st)^2t^{-1}st^{-1}\\ &\subset&U+\underline{z^{k+2}u_2}+\underline{z^{k+2}u_2st^{-1}}. 
\end{array}}$ \item [(iv)] $\small{\begin{array}[t]{lcl} z^ku_2su_2s&=&z^ku_2s(R+Rt+Rt^2+Rt^{-1})s\\ &\subset& \underline{z^ku_2u_1}+ z^ku_2(st)^2t^{-1}s^{-1}+z^ku_2st^2s+z^ku_2st^{-1}s\\ &\subset& U+\underline{z^{k+1}u_2u_1}+z^ku_2st^2s+z^ku_2st^{-1}s\\ &\subset& U+z^ku_2st^2s+z^ku_2st^{-1}s. \end{array}}$ The result then follows from (ii) and (iii). \qedhere \end{itemize} \end{proof} \begin{lem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] For every $k\in\{0,\dots,10\}$, $z^ku_2u_1tu_1\subset U$. \item[(ii)] For every $k\in\{2,\dots,11\}$, $z^ku_2s^{-1}u_2s^{-1}\subset U$. \item[(iii)] For every $k\in\{2,\dots,10\}$, $z^ku_2u_1u_2s^{-1}\subset U$. \label{lem210} \end{itemize} \end{lem} \begin{proof} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] $\small{\begin{array}[t]{lcl} z^ku_2u_1tu_1&=&z^ku_2(R+Rs+Rs^{-1})tu_1\\ &\subset&\underline{z^ku_2u_1}+z^ku_2(st)^2t^{-1}s^{-1}u_1+z^ku_2s^{-1}t(R+Rs+Rs^{-1})\\ &\subset&U+\underline{z^{k+1}u_2u_1}+\underline{z^ku_2s^{-1}t}+z^ku_2s^{-2}(st)^2t^{-1}+\underline{z^ku_2s^{-1}ts^{-1}}\\ &\subset&U+z^{k+1}u_2(R+Rs+Rs^{-1})t^{-1}\\ &\subset&U+\underline{z^{k+1}u_2}+\underline{z^{k+1}u_2st^{-1}}+z^{k+1}u_2(ts)^{-2}ts\\ &\subset&U+\underline{z^ku_2u_1}. \end{array}}$ \item[(ii)] $\small{\begin{array}[t]{lcl} z^ku_2s^{-1}u_2s^{-1}&=&z^ku_2s^{-1}(R+Rt+Rt^{-1}+Rt^{-2})s^{-1}\\ &\subset&\underline{z^ku_2u_1}+\underline{z^ku_2s^{-1}ts^{-1}}+z^ku_2(ts)^{-2}t+z^ku_2(ts)^{-2}tst^{-1}s^{-1}\\ &\subset&U+\underline{z^{k-1}u_2}+z^{k-1}u_2s(st)^{-2}st\\ &\subset&U+z^{k-2}u_2s^2t\\ &\subset&U+z^{k-2}u_2(R+Rs+Rs^{-1})t\\ &\subset&U+\underline{z^{k-2}u_2}+z^{k-2}u_2(st)^2t^{-1}s^{-1}+\underline{z^{k-2}u_2s^{-1}t}\\ &\subset& U+\underline{z^{k-1}u_2u_1}. 
\end{array}}$ \item[(iii)] $\small{\begin{array}[t]{lcl} z^ku_2u_1u_2s^{-1}&=&z^ku_2(R+Rs+Rs^{-1})u_2s^{-1}\\ &\subset&\underline{z^ku_2u_1}+z^ku_2su_2s^{-1}+z^ku_2s^{-1}u_2s^{-1}\\ &\stackrel{(ii)}{\subset}&U+z^ku_2s(R+Rt+Rt^2+Rt^{-1})s^{-1}\\ &\subset&U+\underline{z^ku_2}+z^ku_2u_1tu_1+z^ku_2(st)^2t^{-1}s^{-1}ts^{-1}+z^ku_2s(st)^{-2}st\\ &\stackrel{(i)}{\subset}&U+\underline{z^{k+1}u_2s^{-1}ts^{-1}}+z^{k-1}u_2u_1u_2. \end{array}}$ The result follows from lemma \ref{lem10}(i). \qedhere \end{itemize} \end{proof} To make it easier for the reader to follow the calculations, we will double-underline the elements as described in lemmas \ref{lem10} and \ref{lem210} and we will use directly the fact that these elements are inside $U$. In the following lemma we prove that some subsets of $z^ku_2u_1u_2u_1u_2$, where $k$ ranges over a smaller subset of $\{0,\dots,11\}$, are also subsets of $U$. \begin{lem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item [(i)] For every $k\in\{1,\dots,8\}$, $z^ku_2u_1tu_1t\subset U$. \item [(ii)] For every $k\in\{1,\dots,6\}$, $z^ku_2su_2st^2\subset U$. \item[(iii)] For every $k\in\{1,\dots,5\}$, $z^ku_2u_1tu_1t^2\subset U$. \item[(iv)] For every $k\in\{1,\dots,3\}$, $z^ku_2su_2su_2\subset U$. \item[(v)]For every $k\in\{3,\dots,5\}$, $z^ku_2s^{-1}u_2s^{-1}u_2\subset U$. 
\end{itemize} \label{stst10} \end{lem} \begin{proof} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] $\small{\begin{array}[t]{lcl} z^ku_2u_1tu_1t&=&z^ku_2u_1t(R+Rs+Rs^2)t\\ &\subset&\underline{\underline{z^ku_2u_1u_2}}+z^ku_2u_1(st)^2+z^ku_2(R+Rs+Rs^2)ts^2t\\ &\subset&U+\underline{z^{k+1}u_2u_1}+ \underline{\underline{z^ku_2u_1u_2}}+z^ku_2(st)^2t^{-1}st+z^ku_2s(st)^2t^{-1}st\\ &\subset&U+\underline{\underline{z^{k+1}u_2u_1u_2}}+z^{k+1}u_2s(R+Rt+Rt^2+Rt^3)st\\ &\subset&U+\underline{\underline{z^{k+1}u_2u_1u_2}}+z^{k+1}u_2(st)^2+ z^{k+1}u_2st(ts)^2s^{-1}+z^{k+1}u_2st^2(ts)^2s^{-1}\\ &\subset&U+\underline{z^{k+2}u_2}+z^{k+2}u_2(st)^2t^{-1}s^{-2}+z^{k+2}u_2(st)^2t^{-1}s^{-1}ts^{-1}\\ &\subset&U+\underline{z^{k+3}u_2u_1}+\underline{z^{k+3}u_2s^{-1}ts^{-1}}. \end{array}}$ \item[(ii)] $\small{\begin{array}[t]{lcl} z^ku_2su_2st^2&=&z^ku_2s(R+Rt+Rt^2+Rt^3)st^2\\ &\subset&\underline{\underline{z^ku_2u_1u_2}}+z^ku_2(st)^2t+z^ku_2st(ts)^2s^{-1}t+z^ku_2st^2(ts)^2s^{-1}t\\ &\subset&U+\underline{z^{k+1}u_2}+z^{k+1}u_2u_1tu_1t+z^{k+1}u_2(st)^2t^{-1}s^{-1}ts^{-1}t\\ &\subset&U+(z^{k+1}+z^{k+2})u_2u_1tu_1t. \end{array}}$ The result follows from (i). \item[(iii)] $\small{\begin{array}[t]{lcl} z^ku_2u_1tu_1t^2&=&z^ku_2u_1t(R+Rs+Rs^2)t^2\\ &\subset&\underline{\underline{z^ku_2u_1u_2}}+z^ku_2u_1(ts)^2s^{-1}t+ z^ku_2(R+Rs+Rs^2)ts^2t^2\\ &\subset&U+\underline{\underline{z^{k+1}u_2u_1u_2}}+ \underline{\underline{z^ku_2u_1u_2}}+z^ku_2(st)^2t^{-1}st^2+z^ku_2s(st)^2t^{-1}st^2\\ &\subset&U+\underline{\underline{z^{k+1}u_2u_1u_2}}+z^{k+1}u_2su_2st^2. \end{array}}$ The result follows from (ii). 
\item[(iv)] $\small{\begin{array}[t]{lcl} z^ku_2su_2su_2&=&z^ku_2s(R+Rt+Rt^2+Rt^3)su_2\\ &\subset&\underline{\underline{z^ku_2u_1u_2}}+z^ku_2(st)^2u_2+z^ku_2(st)^2t^{-1}s^{-2}(st)^2u_2+ z^ku_2st^3s(R+Rt+Rt^2+Rt^3)\\ &\subset&U+\underline{z^{k+1}u_2}+\underline{\underline{z^{k+2}u_2u_1u_2}}+ \underline{\underline{z^ku_2su_2s}}+z^ku_2st^2(ts)^2s^{-1}+z^ku_2su_2st^2+\\&&+ z^ku_2st^2(ts)^2s^{-1}t^2\\ &\stackrel{(ii)}{\subset}&U+z^{k+1}u_2(st)^2t^{-1}s^{-1}ts^{-1}+z^{k+1}u_2(st)^2t^{-1}s^{-1}ts^{-1}t^2\\ &\subset&U+\underline{z^{k+2}u_2s^{-1}ts^{-1}}+z^{k+2}u_2u_1tu_1t^2. \end{array}}$ The result follows from (iii). \item[(v)] $z^ku_2s^{-1}u_2s^{-1}u_2= z^ku_2(ts)^{-2}tsu_2(ts)^{-2}tsu_2\subset z^{k-2}u_2su_2su_2$. The result follows from (iv). \qedhere \end{itemize} \end{proof} To make it easier for the reader to follow the calculations, we will also double-underline the elements described in lemma \ref{stst10} and use directly the fact that these elements are inside $U$. The following lemma helps us to prove that $Uu_1\subset U$ (see proposition \ref{Us10}). \begin{lem} For every $k\in\{8,9\}$, $z^ks^{-1}u_2s^{-1}u_2s^{-1}\subset U$. 
\label{ll10} \end{lem} \begin{proof} We expand $\bold{u_2}$ as $R+Rt^{-1}+Rt^{-2}+Rt^{-3}$ and we have: $$\small{\begin{array}[t]{lcl} z^ks^{-1}\bold{u_2}s^{-1}u_2s^{-1} &\subset&\underline{\underline{z^ku_2u_1u_2s^{-1}}}+z^k(ts)^{-2}u_2s^{-1}+ z^k(ts)^{-2}tst^{-1}s^{-1}u_2s^{-1}+ z^k(ts)^{-2}tst^{-2}s^{-1}u_2s^{-1}\\ &\subset&U+\underline{z^{k-1}u_2u_1}+z^{k-1}u_2s^2(ts)^{-2}u_2s^{-1}+ z^{k-1}u_2st^{-1}(st)^{-2}su_2s^{-1}\\ &\subset&U+\underline{\underline{z^{k-2}u_2u_1u_2s^{-1}}}+z^{k-2}u_2st^{-1}s(R+Rt+Rt^{-1}+Rt^{-2})s^{-1}\\ &\subset&U+\underline{z^{k-2}u_2st^{-1}}+z^{k-2}u_2st^{-1}(st)^2t^{-1}s^{-2}+z^{k-2}u_2st^{-1}s(st)^{-2}st+\\&&+ z^{k-2}u_2st^{-1}(R+Rs^{-1}+Rs^{-2})t^{-2}s^{-1}\\ &\subset&U+z^{k-1}u_2st^{-2}(R+Rs+Rs^{-1})+z^{k-3}u_2st^{-1}s^2t+\underline{\underline{z^{k-2}u_2u_1u_2s^{-1}}}+\\&&+ z^{k-2}u_2s(st)^{-2}st^{-1}s^{-1}+z^{k-2}u_2s(st)^{-2}sts^{-1}t^{-2}s^{-1}\\ &\subset&U+\underline{z^{k-1}u_2u_1}+\underline{\underline{z^{k-1}u_2su_2s}}+\underline{\underline{z^{k-1}u_2u_1u_2s^{-1}}}+ z^{k-3}u_2st^{-1}(R+Rs+Rs^{-1})t+\\&&+\underline{\underline{z^{k-3}u_2u_1u_2s^{-1}}}+ z^{k-3}u_2s^2t(ts)^{-2}tst^{-1}s^{-1}\\ &\subset&U+\underline{z^{k-3}u_2u_1}+z^{k-3}u_2st^{-1}(st)^2t^{-1}s^{-1}+ z^{k-3}u_2s(st)^{-2}st^2+\\&&+ z^{k-4}u_2s^2t^2(R+Rs^{-1}+Rs^{-2})t^{-1}s^{-1}\\ &\subset&U+\underline{\underline{z^{k-2}u_2u_1u_2s^{-1}}}+\underline{\underline{z^{k-4}u_2u_1u_2}}+\underline{\underline{z^{k-4}u_2u_1u_2s^{-1}}}+ z^{k-4}u_2s^2t^2(ts)^{-2}t+\\&&+z^{k-4}u_2s^2t^2s^{-1}(ts)^{-2}t\\ &\subset&U+\underline{\underline{z^{k-5}u_2u_1u_2}}+z^{k-5}u_2s^2t^3(st)^{-2}st^2\\ &\subset&U+z^{k-6}u_2(R+Rs+Rs^{-1})t^3st^2\\ &\subset&U+\underline{\underline{z^{k-6}u_2u_1u_2}}+\underline{\underline{z^{k-6}u_2su_2st^2}}+z^{k-6}u_2(ts)^{-2}tst^4st^2\\ &\subset&U+\underline{\underline{z^{k-7}u_2su_2st^2}}. \end{array}}$$ \end{proof} \begin{prop} $Uu_1\subset U$. \label{Us10} \end{prop} \begin{proof} Since $u_1=R+Rs+Rs^2$, it is enough to prove that $Us\subset U$. 
By the definition of $U$ we only have to prove that for every $k\in\{0,\dots,11\}$, $z^ku_2st^{-1}s$ and $z^ku_2s^{-1}ts$ are subsets of $U$. However, by lemma \ref{lem10}(ii) we have that for every $k\in\{1,\dots,11\}$, $z^ku_2st^{-1}s\subset U$. Hence, it will be sufficient to prove that $u_2st^{-1}s\subset U$. We have: $$\small{\begin{array}{lcl} u_2st^{-1}s&=&u_2s(R+Rt+Rt^2+Rt^3)s\\ &\subset&\underline{u_2u_1}+u_2(ts)^2+\underline{\underline{u_2st^2s}}+u_2st^2(ts)^2s^{-1}t^{-1}\\ &\subset&U+\underline{zu_2}+zu_2st^2(R+Rs+Rs^2)t^{-1}\\ &\subset&U+\underline{\underline{u_2u_1u_2}}+zu_2(ts)^2s^{-2}(st)^2t^{-2}+zu_2st(ts)^2s^{-1}t^{-1}st^{-1}\\ &\subset&U+\underline{\underline{z^3u_2u_1u_2}}+z^2u_2st(R+Rs+Rs^2)t^{-1}st^{-1}\\ &\subset&U+\underline{\underline{z^2u_2u_1u_2}}+z^2u_2(st)^2t^{-2}st^{-1}+ z^2u_2(ts)^2st^{-1}st^{-1}\\ &\subset&U+\underline{z^3u_2st^{-1}}+\underline{\underline{z^3u_2su_2su_2}}. \end{array}}$$ It remains to prove that for every $k\in\{0,\dots,11\}$, $z^ku_2s^{-1}ts\subset U$. 
For $k\not=11$, the result is obvious since $z^ku_2s^{-1}ts\subset z^ku_2(R+Rs+Rs^2)ts\subset \underline{z^ku_2u_1}+z^ku_2(st)^2t^{-1}+z^ku_2s(st)^2t^{-1}\subset U+\underline{z^{k+1}u_2}+\underline{z^{k+1}u_2st^{-1}}.$ Therefore, we only have to prove that $z^{11}u_2s^{-1}ts\subset U.$ $$\small{\begin{array}{lcl} z^{11}u_2s^{-1}ts&\subset&z^{11}u_2s^{-1}t(R+Rs^{-1}+Rs^{-2})\\ &\subset&\underline{z^{11}u_2s^{-1}t}+\underline{z^{11}u_2s^{-1}ts^{-1}}+z^{11}u_2s^{-1}(R+Rt^{-1}+Rt^{-2}+Rt^{-3})s^{-2}\\ &\subset&U+\underline{z^{11}u_2u_1}+z^{11}u_2(ts)^{-2}ts^{-1}+ z^{11}u_2(ts)^{-2}tst^{-1}s^{-2}+ z^{11}u_2(ts)^{-2}tst^{-2}s^{-2}\\ &\subset&U+\underline{z^{10}u_2u_1}+z^{10}u_2s(st)^{-2}sts^{-1}+ z^{10}u_2(R+Rs^{-1}+Rs^{-2})t^{-2}s^{-2}\\ &\subset&U+\underline{\underline{z^9u_2u_1u_2s^{-1}}}+\underline{z^{10}u_2u_1}+z^{10}u_2(ts)^{-2}ts(st)^{-2}sts^{-1}+\\&&+ z^{10}u_2s^{-1}(ts)^{-2}tst^{-1}s^{-2}\\ &\subset&U+\underline{\underline{z^8u_2u_1u_2s^{-1}}}+z^9u_2s^{-1}t(R+Rs^{-1}+Rs^{-2})t^{-1}s^{-2}\\ &\subset&U+\underline{z^9u_2u_1}+z^9u_2s^{-1}t(ts)^{-2}ts^{-1}+z^9u_2s^{-1}ts^{-1}(ts)^{-2}ts^{-1}\\ &\subset&U+\underline{\underline{z^8u_2s^{-1}u_2s^{-1}}}+z^8u_2s^{-1}u_2s^{-1}u_2s^{-1} \end{array}}$$ The result follows from lemma \ref{ll10}. \end{proof} For the rest of this section, we will use directly proposition \ref{Us10}; this means that every time we have a power of $s$ at the end of an element, we may ignore it. To remind the reader of this, we put a parenthesis around the part of the element we consider. We can now prove the main theorem of this section. \begin{thm} $H_{G_{10}}=U$. \label{thm10} \end{thm} \begin{proof} As we explained at the beginning of this section, since $1\in U$, it will be sufficient to prove that $U$ is a right-sided ideal of $H_{G_{10}}$. For this purpose, it suffices to check that $Us$ and $Ut$ are subsets of $U$. By proposition \ref{Us10} it is enough to prove that $Ut\subset U$. 
Since $t\in R+Rt^{-1}+Rt^{-2}+Rt^{-3}$, we only have to prove that $Ut^{-1}\subset U$. By the definition of $U$ we have: $$Ut^{-1}\subset\sum\limits_{k=0}^{11}(z^ku_2u_1t^{-1}+z^ku_2st^{-2}+\underline{z^ku_2s^{-1}}+z^ku_2s^{-1}ts^{-1}t^{-1}).$$ As a result, we restrict ourselves to proving that for every $k\in\{0,\dots,11\}$, $z^ku_2u_1t^{-1}$, $z^ku_2st^{-2}$ and $z^ku_2s^{-1}ts^{-1}t^{-1}$ are subsets of $U$. We distinguish the following cases: \begin{itemize}[leftmargin=0.8cm] \item[C1.] \underline{The case of $z^ku_2u_1t^{-1}$}: \begin{itemize}[leftmargin=*] \item \underline{$k\not=0$}: \\ $z^ku_2u_1t^{-1}=z^ku_2(R+Rs+Rs^{-1})t^{-1}\subset \underline{z^ku_2}+\underline{z^ku_2st^{-1}}+z^ku_2(ts)^{-2}ts\subset U+\underline{z^{k-1}u_2u_1}.$ \item \underline {$k=0$}:\\ $\small{\begin{array}[t]{lcl} u_2u_1t^{-1}&=&u_2(R+Rs+Rs^2)t^{-1}\\ &\subset& \underline{u_2}+\underline{u_2st^{-1}}+u_2s^2t^{-1}\\ &\subset&U+u_2s^2(R+Rt+Rt^2+Rt^3)\\ &\subset&U+\underline{u_2u_1}+u_2s(st)^2t^{-1}s^{-1}+u_2s(st)^2t^{-1}s^{-1}t+u_2s(st)^2t^{-1}s^{-1}t^2\\ &\subset&U+\underline{(zu_2st^{-1})s^{-1}}+zu_2st^{-1}(R+Rs+Rs^2)t+ zu_2st^{-1}(R+Rs+Rs^2)t^2\\ &\subset&U+\underline{zu_2s}+zu_2st^{-1}(st)^2t^{-1}s^{-1}+zu_2st^{-1}s(st)^2t^{-1}s^{-1}+\underline{\underline{zu_2u_1u_2}}+\underline{\underline{zu_2su_2st^2}}+\\&&+zu_2s(R+Rt+Rt^2+Rt^3)s^2t^2\\ &\subset&U+\underline{\underline{(z^2u_2u_1u_2)s^{-1}}}+\underline{\underline{(z^2u_2su_2su_2)s^{-1}}}+\underline{\underline{zu_2u_1u_2}}+zu_2(st)^2t^{-1}st^2+\\&&+ zu_2(st)^2t^{-1}s^{-1}ts^2t^2+zu_2(st)^2t^{-1}s^{-1}t^2s^2t^2\\ &\subset&U+\underline{\underline{z^2u_2u_1u_2}}+z^2u_2(R+Rs+Rs^2)ts^2t^2+ z^2u_2(R+Rs+Rs^2)t^2s^2t^2\\ &\subset&U+\underline{\underline{z^2u_2u_1u_2}}+z^2u_2(st)^2t^{-1}st^2+ z^2u_2s(st)^2t^{-1}st^2+z^2u_2(st)^2t^{-1}s^{-1}ts^2t^2+\\&&+ z^2u_2s(st)^2t^{-1}s^{-1}ts^2t^2\\ 
&\subset&U+\underline{\underline{z^3u_2u_1u_2}}+\underline{\underline{z^3u_2su_2st^2}}+z^3u_2s^{-1}t(R+Rs+Rs^{-1})t^2+\\&&+z^3u_2st^{-1}(R+Rs+Rs^2)ts^2t^2\\ &\subset&U+\underline{\underline{z^3u_2u_1u_2}}+z^3u_2s^{-1}(ts)^2s^{-1}t+\underline{\underline{z^3u_2s^{-1}u_2s^{-1}u_2}}+z^3u_2st^{-1}(st)^2t^{-1}st^2+\\&&+z^3u_2st^{-1}s(st)^2t^{-1}st^2\\ &\subset&U+\underline{\underline{z^4u_2u_1u_2}}+\underline{\underline{z^4u_2su_2st^2}}+z^4u_2st^{-1}st^{-2}(ts)^2s^{-1}t\\ &\subset&U+z^5u_2st^{-1}s(R+Rt+Rt^{-1}+Rt^2)s^{-1}t\\ &\subset&U+\underline{z^5u_2u_1}+z^5u_2st^{-1}(st)^2t^{-1}s^{-2}t+z^5u_2st^{-1}s(st)^{-2}st^2+z^5u_2st^{-1}st^2s^{-1}t\\ &\subset&U+z^6st^{-2}(R+Rs+Rs^{-1})t+z^4u_2s(st)^{-2}sts^3t^2+ z^5u_2st^{-1}st^2(R+Rs+Rs^2)t\\ &\subset&U+\underline{\underline{z^6u_2u_1u_2}}+z^6u_2st^{-2}(st)^2t^{-1}s^{-1}+z^6u_2st^{-1}(ts)^{-2}st^2+\underline{\underline{z^3u_2u_1tu_1t^2}}+\\&&+ z^5u_2st^{-1}st^3+z^5u_2st^{-1}(st)^2t^{-1}s^{-1}(ts)^2s^{-1}+\\&&+z^5u_2s(R+Rt+Rt^2+Rt^3)st^2s^2t\\ &\subset&U+\underline{\underline{(z^7u_2u_1u_2)u_1}}+\underline{\underline{z^5u_2su_2st^2}}+z^5u_2st^{-1}s(R+Rt+Rt^{-1}+Rt^{2})+\\&& +z^5u_2s^2t^2s^2t+z^5u_2(st)^2ts^2t+z^5u_2st(ts)^2s^{-1}ts^2t+z^5u_2st^2(ts)^2s^{-1}ts^2t\\ &\subset&U+\underline{(z^5u_2st^{-1})s}+z^5u_2st^{-1}(st)^2t^{-1}s^{-1}+z^5u_2st^{-1}st^{-1}+\\&&+z^5u_2st^{-1}(st)^2t^{-1}s^{-1}t+z^5u_2s^2t^2(R+Rs+Rs^{-1})t+\underline{\underline{z^6u_2u_1u_2}}+\\&&+ z^6u_2(st)^2t^{-1}s^{-2}ts^2t+z^6u_2st^2(R+Rs+Rs^2)ts^2t\\ &\subset&U+\underline{\underline{(z^6u_2u_1u_2)s^{-1}}}+ z^5u_2st^{-1}(R+Rs^{-1}+Rs^{-2})t^{-1}+ z^5u_2st^{-2}s^{-1}t+\\&&+ \underline{\underline{z^5u_2u_1u_2}}+z^5u_2s^2t(ts)^2s^{-1}+z^5u_2(R+Rs+Rs^{-1})t^2s^{-1}t+ \\&&+z^7u_2s^{-2}(ts)^2s^{-1}t^{-1}(st)^2t^{-1}s^{-1}+ z^6u_2st^3s^2t+z^6u_2st(ts)^2st+\\&&+ z^6u_2(st)^2t^{-1}s^{-1}ts(st)^2t^{-1}(st)^2t^{-1}s^{-1}\\ &\subset&U+\underline{\underline{z^5u_2u_1u_2}}+z^5u_2s(st)^{-2}s+ z^5u_2s(st)^{-2}st(ts)^{-2}ts+\\&&+ z^5u_2s(R+Rt+Rt^{-1}+Rt^2)s^{-1}t+ 
\underline{\underline{(z^6u_2u_1u_2)s^{-1}}}+ \underline{z^6u_2s^{-1}t}+\\&&+ z^5u_2st^2s^{-1}t+\underline{\underline{z^5u_2s^{-1}u_2s^{-1}u_2}}+ \underline{\underline{(z^9u_2u_1u_2)s^{-1}}}+\\&&+ z^6u_2st^3(R+Rs+Rs^{-1})t+z^7u_2(st)^2+ z^9u_2s^{-1}(ts)^2s^{-1}t^{-3}s^{-1}\\ &\subset&U+\underline{z^4u_2u_1}+ \underline{\underline{(z^3u_2u_1u_2)}}+\underline{z^5u_2}+ z^5u_2(st)^2t^{-1}s^{-2}t+z^5u_2s(ts)^{-2}st^2+\\&&+ z^5u_2st^2s^{-1}t+ \underline{\underline{(z^6u_2u_1u_2)}}+z^6u_2st^2(ts)^2s^{-1}+ z^6u_2st^3(ts)^{-2}tst^2+\underline{z^8u_2}+\\&&+\underline{\underline{(z^{10}u_2u_1u_2)s^{-1}}}\\ &\subset&U+\underline{\underline{(z^4+z^6)u_2u_1u_2}}+z^5u_2(st)^2t^{-1}s^{-1}ts^{-1}t+\underline{\underline{(z^7u_2u_1u_2)s^{-1}}}+ \underline{\underline{z^5u_2su_2st^2}}\\ &\subset&U+z^6u_2s^{-1}t(R+Rs+Rs^2)t\\ &\subset&U+\underline{z^6u_2u_1}+z^6u_2s^{-1}(ts)^2s^{-1}+ z^6u_2s^{-1}(ts)^2s^{-1}t^{-1}(st)^2t^{-1}s^{-1}\\ &\subset&U+\underline{z^7u_2u_1}+\underline{\underline{(z^8u_2u_1u_2)s^{-1}}}. \end{array}}$ \end{itemize} \item [C2.] \underline{The case of $z^ku_2st^{-2}$}: For $k\not=11$, we expand $t^{-2}$ as a linear combination of 1, $t$, $t^{-1}$ and $t^{2}$ and we have that $z^ku_2st^{-2}\subset \underline{z^ku_2u_1}+z^ku_2(st)^2t^{-1}s^{-1}+\underline{z^ku_2st^{-1}}+\underline{\underline{(z^ku_2st^2s)s^{-1}}}\subset U+\underline{z^{k+1}u_2u_1}\subset U$. It remains to prove that $z^{11}u_2st^{-2}\subset U$. We have: $\small{\begin{array}{lcl} z^{11}u_2st^{-2}&\subset&z^{11}u_2(R+Rs^{-1}+Rs^{-2})t^{-2}\\ &\subset&\underline{z^{11}u_2}+z^{11}u_2(ts)^{-2}tst^{-1}+z^{11}u_2s^{-1}(ts)^{-2}tst^{-1}\\ &\subset&U+\underline{z^{10}u_2st^{-1}}+z^{10}u_2s^{-1}t(R+Rs^{-1}+Rs^{-2})t^{-1}\\ &\subset&U+\underline{z^{10}u_2u_1}+z^{10}u_2s^{-1}t(ts)^{-2}ts+z^{10}u_2s^{-1}ts^{-1}(ts)^{-2}ts\\ &\subset&U+\underline{\underline{(z^9u_2u_1u_2)s}}+(z^9u_2s^{-1}u_2s^{-1}u_2s^{-1})s^2. \end{array}}$ The result follows from lemma \ref{ll10}. \item [C3.] 
\underline{The case of $z^ku_2s^{-1}ts^{-1}t^{-1}$}: For $k\not \in\{0,1\}$, we have that $z^ku_2s^{-1}ts^{-1}t^{-1}=z^ku_2s^{-1}t(ts)^{-2}ts=\underline{\underline{(z^{k-1}u_2u_1u_2)s}}\subset U.$ It remains to prove the case where $k\in\{0,1\}$. We have: $\small{\begin{array}{lcl} z^ku_2s^{-1}ts^{-1}t^{-1}&\subset&z^ku_2s^{-1}t(R+Rs+Rs^2)t^{-1}\\ &\subset&\underline{z^ku_2u_1}+z^ku_2s^{-1}(ts)^2s^{-1}t^{-2}+z^ku_2(R+Rs+Rs^2)ts^2t^{-1}\\ &\subset&U+\underline{\underline{z^{k+1}u_2u_1u_2}}+z^ku_2u_1t^{-1}+z^ku_2(st)^2t^{-1}st^{-1}+z^ku_2s(st)^2t^{-1}st^{-1}\\ &\stackrel{C1}{\subset}&U+\underline{z^{k+1}u_2st^{-1}}+\underline{\underline{z^{k+1}u_2su_2su_2}}. \end{array}}$ \qedhere \end{itemize} \end{proof} \begin{cor} The BMR freeness conjecture holds for the generic Hecke algebra $H_{G_{10}}$. \end{cor} \begin{proof} By theorem \ref{thm10} we have that $H_{G_{10}}=U$. The result follows from proposition \ref{BMR PROP}, since by definition $U$ is generated as left $u_2$-module by 72 elements and, hence, as $R$-module by $|G_{10}|=288$ elements (recall that $u_2$ is generated as $R$-module by 4 elements). \end{proof} \subsection{The case of $G_{11}$} \indent Let $R=\mathbb{Z}[u_{s,i}^{\pm},u_{t,j}^{\pm},u_{u,l}^{\pm}]$, where $1\leq i\leq 2$, $1\leq j\leq 3$ and $1\leq l\leq 4$. We also let $$H_{G_{11}}=\langle s,t,u\;|\; stu=tus=ust,\prod\limits_{i=1}^{2}(s-u_{s,i})=\prod\limits_{j=1}^{3}(t-u_{t,j})=\prod\limits_{l=1}^{4}(u-u_{u,l})=0\rangle$$ be the generic Hecke algebra associated to $G_{11}$. Let $u_1$ be the subalgebra of $H_{G_{11}}$ generated by $s$, $u_2$ the subalgebra of $H_{G_{11}}$ generated by $t$ and $u_3$ the subalgebra of $H_{G_{11}}$ generated by $u$. We recall that $z:=stu=tus=ust$ generates the center of the associated complex braid group and that $|Z(G_{11})|=24$. We set $U=\sum\limits_{k=0}^{23}(z^ku_3u_2+z^ku_3tu^{-1}u_2).$ By the definition of $U$, we have the following remark. \begin{rem} $Uu_2 \subset U$.
\label{rem11} \end{rem} From now on, we will underline the elements that by definition belong to $U$. Moreover, we will use remark \ref{rem11} directly; this means that every time we have a power of $t$ at the end of an element, we may ignore it. To remind the reader of this, we put parentheses around the part of the element we consider. Our goal is to prove that $H_{G_{11}}=U$ (theorem \ref{thm11}). Since $1\in U$, it will be sufficient to prove that $U$ is a left-sided ideal of $H_{G_{11}}$. For this purpose, it suffices to check that $sU$, $tU$ and $uU$ are subsets of $U$. The following proposition states that it is enough to prove that $tU\subset U$. \begin{prop} If $tU\subset U$ then $H_{G_{11}}=U$. \label{Ut11} \end{prop} \begin{proof} As we explained above, we have to prove that $sU$, $tU$ and $uU$ are subsets of $U$. However, by the definition of $U$ we have $uU\subset U$ and, hence, by hypothesis we only have to prove that $sU\subset U$. We recall that $z=stu$, therefore $s=zu^{-1}t^{-1}$ and $s^{-1}=z^{-1}tu$. We notice that $$U= \sum\limits_{k=0}^{22}z^k(u_3u_2+u_3tu^{-1}u_2)+z^{23}(u_3u_2+u_3tu^{-1}u_2).$$ Hence, we have:\\ \\ $\small{\begin{array}[t]{lcl}sU&\subset& \sum\limits_{k=0}^{22}z^ks(u_3u_2+u_3tu^{-1}u_2)+z^{23}s(u_3u_2+u_3tu^{-1}u_2)\\ &\subset& \sum\limits_{k=0}^{22}z^{k+1}u^{-1}t^{-1}(u_3u_2+u_3tu^{-1}u_2)+z^{23}(R+Rs^{-1})(u_3u_2 +u_3tu^{-1}u_2)\\ &\subset& \sum\limits_{k=0}^{22}u^{-1}t^{-1}(\underline{z^{k+1}u_3u_2}+\underline{z^{k+1}u_3tu^{-1}u_2})+ \underline{z^{23}u_3u_2}+\underline{z^{23}u_3tu^{-1}u_2}+z^{23}s^{-1}u_3u_2+ z^{23}s^{-1}u_3tu^{-1}u_2\\ &\subset&u_3u_2U+z^{22}tu_3u_2+z^{22}tu_3tu^{-1}u_2\\ &\subset&u_3u_2U+ t(\underline{z^{22}u_3u_2}+\underline{z^{22}u_3tu^{-1}u_2})\\ &\subset&u_3u_2U. \end{array}}$\\\\ By hypothesis, $tU\subset U$ and, hence, $u_2U\subset U$, since $u_2=R+Rt+Rt^2$. Moreover, $u_3U\subset U$, by the definition of $U$. Therefore, $sU\subset U$.
\end{proof} \begin{cor} If $z^ktu_3$ and $z^ktu_3tu^{-1}$ are subsets of $U$ for every $k\in\{0,\dots,23\}$, then $H_{G_{11}}=U$. \label{corr11} \end{cor} \begin{proof} The result follows directly from the definition of $U$, proposition \ref{Ut11} and remark \ref{rem11}. \end{proof} As a first step, we will prove the conditions of corollary \ref{corr11} for a shorter range of values of $k$, as we can see in proposition \ref{tu11} and corollary \ref{tuts111}. \begin{prop} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] For every $k\in\{0,\dots,21\}$, $z^ku_3tu\subset U$. \item[(ii)] For every $k\in\{0,\dots,19\}$, $z^ku_3tu^2\subset U$. \item[(iii)] For every $k\in\{0,\dots,19\}$, $z^ku_3tu_3\subset U$. \item[(iv)] For every $k\in\{2,\dots,21\}$, $z^ku_3u_2u_3\subset U$. \end{itemize} \label{tu11} \end{prop} \begin{proof} Since $u_3=R+Ru+Ru^{-1}+Ru^2$, (iii) follows from (i) and (ii) and the definition of $U$. Moreover, (iv) follows directly from (iii), since: \\ $\small{\begin{array}{lcl} z^ku_3u_2u_3&=&z^ku_3(R+Rt+Rt^{-1})u_3\\ &\subset& \underline{z^ku_3}+z^ku_3tu_3+z^ku_3(t^{-1}s^{-1}u^{-1})usu_3\\ &\subset& U+z^ku_3tu_3+ z^{k-1}u_3(R+Rs^{-1})u_3\\ &\subset& U+z^ku_3tu_3+\underline{z^{k-1}u_3}+z^{k-1}u_3(s^{-1}u^{-1}t^{-1})tu_3\\ &\subset& U+(z^k+z^{k-2})u_3tu_3.\end{array}}$\\ Therefore, it is enough to prove (i) and (ii). For every $k\in\{0,\dots,21\}$ we have $z^ku_3tu=z^ku_3(tus)s^{-1}\subset z^{k+1}u_3(R+Rs)\subset \underline{z^{k+1}u_3}+z^{k+1}u_3(ust)t^{-1}\subset U+\underline{z^{k+2}u_3t^{-1}}\subset U$, which proves (i). For (ii), we notice that, for every $k\in\{0,\dots,19\}$, $z^ku_3tu^2=z^ku_3(tus)s^{-1}u\subset z^{k+1}u_3(R+Rs)u\subset \underline{z^{k+1}u_3}+z^{k+1}u_3su\subset U+z^{k+1}u_3(ust)t^{-1}u\subset U+z^{k+2}u_3t^{-1}u$.
However, if we expand $t^{-1}$ as a linear combination of 1, $t$ and $t^2$ we have that $z^{k+2}u_3t^{-1}u\subset \underline{z^{k+2}u_3}+z^{k+2}u_3tu+z^{k+2}u_3t(tus)s^{-1}\stackrel{(i)}{\subset}U+z^{k+3}u_3t(R+Rs)\subset U+\underline{z^{k+3}u_3t}+z^{k+3}u_3t(stu)u^{-1}t^{-1}\subset U+\underline{z^{k+4}u_3tu^{-1}u_2}$. \end{proof} \begin{lem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] For every $k\in\{1,\dots,22\}$, $z^ku_3u_2u_1\subset U$. \item[(ii)] For every $k\in\{0,\dots,18\}$, $z^ku_3tu_3u_1\subset U$. \item[(iii)] For every $k\in\{3,\dots,22\}$, $z^ku_3tu_1u_3\subset U$. \end{itemize} \label{tsu11} \end{lem} \begin{proof} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] $\begin{array}[t]{lcl} z^ku_3u_2u_1&=&z^ku_3u_2(R+Rs)\\ &\subset& \underline{z^ku_3u_2}+z^ku_3(R+Rt^{-1}+Rt)s\\ &\subset& z^ku_3(ust)t^{-1}+z^ku_3t^{-1}s+z^ku_3t(stu)u^{-1}t^{-1}\\ & \subset& U+\underline{z^{k+1}u_3u_2}+z^ku_3t^{-1}(R+Rs^{-1})+\underline{z^{k+1}u_3tu^{-1}u_2}\\ &\subset& U+\underline{z^{k}u_3u_2}+z^ku_3(u^{-1}t^{-1}s^{-1})\\ &\subset& U+\underline{z^{k-1}u_3}. \end{array}$ \item[(ii)] $z^ku_3tu_3u_1=z^ku_3tu_3(R+Rs)\subset z^ku_3tu_3+z^ku_3tu_3(ust)t^{-1} \subset \big(z^ku_3tu_3+z^{k+1}u_3tu_3\big)u_2$. The result follows from proposition \ref{tu11}(iii). \item[(iii)] $z^ku_3tu_1u_3=z^ku_3t(R+Rs^{-1})u_3=z^ku_3tu_3+z^ku_3t^2(t^{-1}s^{-1}u^{-1})u_3\stackrel{\ref{tu11}(iii)}{\subset}U+z^{k-1}u_3u_2u_3$. The result follows directly from proposition \ref{tu11}(iv). \qedhere \end{itemize} \end{proof} In order to make it easier for the reader to follow the calculations, from now on we will double-underline the elements as described in proposition \ref{tu11} and in lemma \ref{tsu11} and we will use directly the fact that these elements are inside $U$.
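For the reader's convenience, we also record the basic rewriting identities that underlie the computations of this section; they follow immediately from the fact that $z=stu=tus=ust$ is central and invertible:
$$st=zu^{-1},\qquad tu=zs^{-1},\qquad us=zt^{-1},\qquad u^{-1}t^{-1}s^{-1}=t^{-1}s^{-1}u^{-1}=s^{-1}u^{-1}t^{-1}=z^{-1}.$$
For instance, $z^ku_3(ust)t^{-1}=z^{k+1}u_3t^{-1}$ and $z^ku_3(u^{-1}t^{-1}s^{-1})su_3=z^{k-1}u_3su_3$; this is how the powers of $z$ change from line to line in the computations above and below.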
\begin{prop} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] For every $k\in\{2,\dots, 23\}$, $z^kt^2u^{-1}\in U$. \item[(ii)]For every $k\in\{0,\dots,21\}$, $z^ktutu^{-1} \in U$. \item[(iii)]For every $k\in\{0,\dots,15\}$, $z^ktu^2tu^{-1} \in U$. \item[(iv)]For every $k\in\{6,\dots,23\}$, $z^ktu^{-1}tu^{-1} \in U+z^ku_3tu_3$. Therefore, for every $k\in\{6, \dots, 19\}$, $z^ktu^{-1}tu^{-1}\in U$. \item[(v)]For every $k\in\{0,\dots, 5\}$, $z^ktu^3tu^{-1} \in U$. \item[(vi)]For every $k\in\{16,\dots,23\}$, $z^ktu^{-2}tu^{-1} \in U+(z^k+z^{k-1}+z^{k-2})u_3tu_3u_2$. Therefore, for every $k\in\{16, \dots, 19\}$, $z^ktu^{-2}tu^{-1}\in U$. \end{itemize} \label{tuts11} \end{prop} \begin{proof} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)]$\small{\begin{array}[t]{lcl} z^kt^2u^{-1}&\in& z^k(R+Rt+Rt^{-1})u^{-1}\\ &\in& \underline{z^ku_3}+\underline{z^ku_3tu^{-1}}+z^ku_3(u^{-1}t^{-1}s^{-1})su^{-1}\\ &\in& U+z^{k-1}u_3(R+Rs^{-1})u^{-1}\\ &\in& U+\underline{z^{k-1}u_3}+z^{k-1}u_3(s^{-1}u^{-1}t^{-1})t\\ &\in& U+\underline{z^{k-2}u_3u_2}. \end{array}}$ \item[(ii)] $z^ktutu^{-1}=z^k(tus)s^{-1}tu^{-1}\in z^{k+1}(R+Rs)tu^{-1}\subset \underline{z^{k+1}u_3tu^{-1}}+z^{k+1}u_3(stu)u^{-2}$. Hence, $z^ktutu^{-1}\subset U+\underline{z^{k+2}u_3}\subset U$. \item[(iii)] $\small{\begin{array}[t]{lcl} z^ktu^2tu^{-1}&=&z^k(tus)s^{-1}utu^{-1}\\ &\in& z^{k+1}(R+Rs)utu^{-1}\\ &\in& \underline{z^{k+1}u_3tu^{-1}}+z^{k+1}u_3(ust)t^{-1}utu^{-1}\\ &\in& U+z^{k+2}u_3(R+Rt+Rt^2)utu^{-1}\\ &\in& U+\underline{z^{k+2}u_3tu^{-1}}+z^{k+2}u_3tutu^{-1}+z^{k+2}u_3t(tus)s^{-1}tu^{-1}\\ &\stackrel{(ii)}{\in}&U+z^{k+3}u_3t(R+Rs)tu^{-1}\\ &\in& U+\underline{\underline{z^{k+3}u_3u_2u_3}}+z^{k+3}u_3t(stu)u^{-2}\\ &\in& U+\underline{\underline{z^{k+4}u_3tu_3}}. 
\end{array}}$ \item[(iv)] $\begin{array}[t]{lcl} z^ktu^{-1}tu^{-1}&\in& z^ktu^{-1}(R+Rt^{-1}+Rt^{-2})u^{-1}\\ &\in& z^ku_3tu_3+z^kt(u^{-1}t^{-1}s^{-1})su^{-1}+z^kt(u^{-1}t^{-1}s^{-1})st^{-1}u^{-1}\\ &\in& z^ku_3tu_3+z^{k-1}t(R+Rs^{-1})u^{-1}+z^{k-1}tsu(u^{-1}t^{-1}s^{-1})su^{-1}\\ &\in& z^ku_3tu_3+\underline{z^{k-1}u_3tu^{-1}}+z^{k-1}u_3t^2(t^{-1}s^{-1}u^{-1})+ z^{k-2}tsu(R+Rs^{-1})u^{-1}\\ &\in& U+z^ku_3tu_3+\underline{z^{k-2}u_3u_2}+z^{k-2}u_3t(stu)u^{-1}t^{-1}+ z^{k-2}u_3tsu(s^{-1}u^{-1}t^{-1})t\\ &\in&U+ z^ku_3tu_3+\underline{z^{k-1}u_3tu^{-1}u_2}+\underline{\underline{(z^{k-3}u_3tu_1u_3)u_2}}\\ &\in& U+z^ku_3tu_3. \end{array}$ The result follows from proposition \ref{tu11}(iii). \item[(v)] $\small{\begin{array}[t]{lcl} z^ktu^3tu^{-1}&=&z^k(tus)s^{-1}u^2tu^{-1}\\ &\in&z^{k+1}(R+Rs)u^2tu^{-1}\\ &\in&\underline{z^{k+1}u_3tu^{-1}}+z^{k+1}u_3(ust)t^{-1}u^2tu^{-1}\\ &\in&U+z^{k+2}u_3(R+Rt+Rt^2)u^2tu^{-1}\\ &\in&U+\underline{z^{k+2}u_3tu^{-1}}+z^{k+2}u_3tu^2tu^{-1}+z^{k+2}u_3t(tus)s^{-1}utu^{-1}\\ &\stackrel{(iii)}{\in}&U+z^{k+3}u_3t(R+Rs)utu^{-1}\\ &\in&U+z^{k+3}u_3tutu^{-1}+z^{k+3}u_3tu^{-1}(ust)t^{-2}(tus)s^{-1}tu^{-1}\\ &\stackrel{(ii)}{\in}&U+z^{k+5}u_3tu^{-1}t^{-2}(R+Rs)tu^{-1}\\ &\in&U+z^{k+5}u_3t(u^{-1}t^{-1}s^{-1})su^{-1}+z^{k+5}u_3tu^{-1}t^{-2}(stu)u^{-2}\\ &\in&U+\underline{\underline{z^{k+4}u_3tu_1u_3}}+ z^{k+6}u_3tu^{-1}t^{-2}u^{-2}. 
\end{array}}$ We expand $t^{-2}$ as a linear combination of 1, $t^{-1}$ and $t$ and we have:\\ $\small{\begin{array}{lcl} z^{k+6}u_3tu^{-1}t^{-2}u^{-2} &\in&\underline{\underline{z^{k+6}u_3tu_3}}+z^{k+6}u_3t(u^{-1}t^{-1}s^{-1})su^{-2}+ z^{k+6}u_3tu^{-1}t(R+Ru+Ru^{-1}+Ru^{2})\\ &\in&U+\underline{\underline{z^{k+5}u_3tu_1u_3}}+\underline{z^{k+6}u_3tu^{-1}u_2}+ z^{k+6}u_3tu^{-1}(tus)s^{-1}+z^{k+6}u_3tu^{-1}tu^{-1}+\\&&+ z^{k+6}u_3tu^{-1}(tus)s^{-1}(ust)t^{-1}s^{-1} \\ &\stackrel{(iv)}{\in}&U+\underline{\underline{z^{k+7}u_3tu_3u_1}}+z^{k+8}u_3tu^{-1}(R+Rs)t^{-1}s^{-1}\\ &\in& U+z^{k+8}u_3t(u^{-1}t^{-1}s^{-1})+z^{k+8}u_3tu^{-2}(ust)t^{-2}s^{-1}\\ &\in&U+\underline{z^{k+7}u_3u_2}+z^{k+9}u_3tu^{-2}(R+Rt^{-1}+Rt)s^{-1}\\ &\in&U+\underline{\underline{z^{k+9}u_3tu_3u_1}}+z^{k+9}u_3tu^{-1}(u^{-1}t^{-1}s^{-1})+z^{k+9}u_3tu^{-2}t(R+Rs)\\ &\in&U+\underline{z^{k+8}u_3tu^{-1}}+\underline{\underline{(z^{k+9}u_3tu_3)u_2}} +z^{k+9}u_3tu^{-2}t(stu)u^{-1}t^{-1}\\ &\in& U+(z^{k+10}u_3tu^{-2}tu^{-1})u_2. \end{array}}$ The result follows from (i), (ii), (iii) and (iv), if we expand $u^{-2}$ as a linear combination of 1, $u$, $u^2$ and $u^{-1}$. 
\item[(vi)] $\small{\begin{array}[t]{lcl} z^ktu^{-2}tu^{-1}&\in& z^ktu^{-2}(R+Rt^{-1}+Rt^{-2})u^{-1}\\ &\in& z^ku_3tu_3u_2+z^ku_3tu^{-1}(u^{-1}t^{-1}s^{-1})su^{-1}+z^ku_3tu^{-1}(u^{-1}t^{-1}s^{-1})st^{-1}u^{-1}\\ &\in&z^ku_3tu_3u_2+z^{k-1}u_3tu^{-1}(R+Rs^{-1})u^{-1}+z^{k-1}u_3tu^{-1}s(t^{-1}s^{-1}u^{-1})usu^{-1}\\ &\in&(z^k+z^{k-1})u_3tu_3u_2+z^{k-1}u_3tu^{-1}(s^{-1}u^{-1}t^{-1})t+ z^{k-2}u_3tu^{-1}su(R+Rs^{-1})u^{-1}\\ &\in&(z^k+z^{k-1})u_3tu_3u_2+\underline{z^{k-2}u_3tu^{-1}u_2}+z^{k-2}u_3tu^{-1}(stu)u^{-1}t^{-1}+\\&&+z^{k-2}u_3tu^{-1}su(s^{-1}u^{-1}t^{-1})t\\ &\in&U+(z^k+z^{k-1})u_3tu_3u_2+\underline{z^{k-1}u_3}+z^{k-3}u_3tu^{-1}(R+Rs^{-1})ut\\ &\in&U+(z^k+z^{k-1})u_3tu_3u_2+\underline{z^{k-3}u_3u_2}+z^{k-3}u_3tu^{-1}(s^{-1}u^{-1}t^{-1})tu^2t\\ &\in&U+(z^k+z^{k-1})u_3tu_3u_2+z^{k-4}u_3tu^{-1}t(R+Ru+Ru^{-1}+Ru^{-2})t\\ &\in&U+(z^k+z^{k-1})u_3tu_3u_2+\underline{z^{k-4}u_3tu^{-1}u_2}+ z^{k-4}u_3tu^{-1}(tus)s^{-1}t+ \\&&+(z^{k-4}u_3 tu^{-1}tu^{-1})t+z^{k-4}u_3tu^{-1}(R+Rt^{-1}+Rt^{-2})u^{-2}t\\ &\stackrel{(iv)}{\in}&U+(z^k+z^{k-1})u_3tu_3u_2+z^{k-3}u_3tu^{-1}(R+Rs)t+ \underline{\underline{(z^{k-4}u_3tu_3)u_2}}+\\&&+ z^{k-4}u_3t(u^{-1}t^{-1}s^{-1})su^{-2}t+z^{k-4}u_3t(u^{-1}t^{-1}s^{-1})st^{-1}u^{-2}t\\ &\in&U+(z^k+z^{k-1})u_3tu_3u_2+\underline{z^{k-3}u_3tu^{-1}u_2}+z^{k-3}u_3tu^{-2}(ust)+\\&&+ z^{k-5}u_3t(R+Rs^{-1})u^{-2}t+z^{k-5}u_3ts(t^{-1}s^{-1}u^{-1})usu^{-2}t\\ &\in&U+(z^k+z^{k-1}+z^{k-2})u_3tu_3u_2+\underline{\underline{(z^{k-5}u_3tu_3)u_2}}+ z^{k-5}u_3t(s^{-1}u^{-1}t^{-1})tu^{-1}t+\\&&+z^{k-6}u_3t(R+Rs^{-1})usu^{-2}t\\ &\in&U+(z^k+z^{k-1}+z^{k-2})u_3tu_3u_2+\underline{\underline{z^{k-6}u_3u_2u_3}}+z^{k-6}u_3(tus)u^{-2}t+\\&&+ z^{k-6}u_3ts^{-1}u(R+Rs^{-1})u^{-2}t\\ &\in&U+(z^k+z^{k-1}+z^{k-2})u_3tu_3u_2+\underline{z^{k-5}u_3u_2}+ z^{k-6}u_3t(s^{-1}u^{-1}t^{-1})t^2+\\&&+ z^{k-6}u_3t^2(t^{-1}s^{-1}u^{-1})u^2(s^{-1}u^{-1}t^{-1})tu^{-1}t\\ &\in&U+(z^k+z^{k-1}+z^{k-2})u_3tu_3u_2+\underline{z^{k-7}u_3u_2}+ z^{k-8}u_3(R+Rt+Rt^{-1})u^2tu^{-1}t\\ 
&\in&U+(z^k+z^{k-1}+z^{k-2})u_3tu_3u_2+\underline{z^{k-8}u_3tu^{-1}u_2}+ (z^{k-8}u_3tu^2tu^{-1})t+\\&&+ z^{k-8}u_3(u^{-1}t^{-1}s^{-1})su^2tu^{-1}t\\ &\stackrel{(iii)}{\in}&U+(z^k+z^{k-1}+z^{k-2})u_3tu_3u_2+z^{k-9}u_3(R+Rs^{-1})u^2tu^{-1}t\\ &\in&U+(z^k+z^{k-1}+z^{k-2})u_3tu_3u_2+\underline{z^{k-9}u_3tu^{-1}u_2}+ z^{k-9}u_3(s^{-1}u^{-1}t^{-1})tu^3tu^{-1}t\\ &\in&U+(z^k+z^{k-1}+z^{k-2})u_3tu_3u_2+(z^{k-10}u_3tu^3tu^{-1})t. \end{array}}$ However, if we expand $u^3$ as a linear combination of 1, $u$, $u^2$ and $u^{-1}$, we can use (i), (ii), (iii) and (iv) and we have that, for every $k\in\{16,\dots, 23\}$, $z^{k-10}u_3tu^3tu^{-1}\subset U$. Therefore, for every $k\in\{16,\dots, 23\}$, $z^ktu^{-2}tu^{-1}\in U+(z^k+z^{k-1}+z^{k-2})u_3tu_3u_2$. Moreover, by proposition \ref{tu11}(iii), we have that $\big((z^k+z^{k-1}+z^{k-2})u_3tu_3\big)u_2\subset U$, for every $k\in\{16,\dots, 19\}$ and, hence, $z^ktu^{-2}tu^{-1}\in U$ for every $k\in\{16,\dots, 19\}$. \qedhere \end{itemize} \end{proof} \begin{cor} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] For every $k\in\{2, \dots, 19\}$, $z^ktu_3tu^{-1}\in U$. \item[(ii)] For every $k\in\{1, \dots, 23\}$, $z^ku_2u^mtu^{-1}\subset U+z^ku_3tu^mtu^{-1}+z^{k-2}u_3tu^{m+1}tu^{-1}$, where $m\in \mathbb{Z}$. Therefore, for every $k\in\{4,\dots, 19\}$, $z^ku_3u_2u_3tu^{-1}\subset U$. \end{itemize} \label{tuts111} \end{cor} \begin{proof} We first prove (i). We use different definitions of $u_3$ and we have: \begin{itemize}[leftmargin=*] \item For $k\in\{2,\dots,5\}$, we write $u_3=R+Ru+Ru^2+Ru^3$. The result then follows from proposition \ref{tuts11} (i), (ii), (iii) and (v). \item For $k\in\{6,\dots,15\}$, we write $u_3=R+Ru+Ru^2+Ru^{-1}$. The result then follows from proposition \ref{tuts11} (i), (ii), (iii) and (iv). \item For $k\in\{16,\dots,19\}$, we write $u_3=R+Ru+Ru^{-2}+Ru^{-1}$. The result then follows from proposition \ref{tuts11} (i), (ii), (iv) and (vi).
\end{itemize} For the first part of (ii) we have: \\ $\small{\begin{array}[t]{lcl} z^ku_2u^mtu^{-1}&=&z^k(R+Rt+Rt^{-1})u^mtu^{-1}\\ &\subset& \underline{z^ku_3tu^{-1}}+ z^ku_3tu^mtu^{-1}+z^ku_3(u^{-1}t^{-1}s^{-1})su^mtu^{-1}\\ &\subset& U+z^{k}u_3tu^mtu^{-1}+z^{k-1}u_3(R+Rs^{-1})u^mtu^{-1}\\ &\subset& U+z^ku_3tu^mtu^{-1} +\underline{z^{k-1}u_3tu^{-1}}+z^{k-1}u_3(s^{-1}u^{-1}t^{-1})tu^{m+1}tu^{-1}\\ &\subset& U+z^ku_3tu^mtu^{-1}+z^{k-2}u_3tu^{m+1}tu^{-1}. \end{array}}$\\ Therefore, for every $k\in\{4,\dots,19\}$ we have that $z^ku_3u_2u_3tu^{-1}\subset z^ku_3(R+Rt+Rt^{-1})u_3tu^{-1}\subset u_3U+ z^ku_3t^{\pm 1}u_3tu^{-1}\subset u_3U+u_3(z^k+z^{k-2})tu_3tu^{-1}\stackrel{(i)}{\subset}u_3U$. The result follows from the definition of $U$. \end{proof} We now prove a lemma that leads us to the main theorem of this section (theorem \ref{thm11}). \begin{lem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] For every $k\in\{0,\dots, 15\}$, $z^ku_3t^2u^2\subset U$.
\item[(ii)] For every $k\in\{6,\dots,16\}$, $z^ktu^{-1}tu^{-1}tu^{-1}\in U+z^{k-6}u_3^{\times}t^2u^3t .$ \item [(iii)]For every $k\in\{12,\dots,16\}$, $z^ktu^{-1}tu^{2}tu^{-1}\in U+z^{k-12}u_3^{\times}t^2u^3t.$ \item [(iv)]For every $k\in\{1,\dots,15\}$, $z^ktu^{-1}tu^{2}tu^{-1}\in U+(z^{k+6}+z^{k+7}+z^{k+8})u_3tu_3u_2+z^{k+8}u_3^{\times}tu^{-3}tu^{-1}t.$ \end{itemize} \label{ttuttu} \end{lem} \begin{proof} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] $\small{\begin{array}[t]{lcl} z^ku_3t^2u^2&=&z^ku_3t(tus)s^{-1}u\\ &\subset& z^{k+1}u_3t(R+Rs)u\\ &\subset& \underline{\underline{z^{k+1}u_3tu}}+z^{k+1}u_3tu^{-1}(ust)t^{-1}u\\ &\subset&U+z^{k+2}u_3tu^{-1}(R+Rt+Rt^2)u\\ &\subset&U+\underline{z^{k+2}u_3u_2}+z^{k+2}u_3tu^{-1}(tus)s^{-1}+ z^{k+2}u_3tu^{-1}t(tus)s^{-1}\\ &\subset&\underline{\underline{z^{k+3}u_3tu_3u_1}}+z^{k+3}u_3tu^{-1}t(R+Rs)\\ &\subset&U+\underline{z^{k+3}u_3tu^{-1}u_2}+z^{k+3}u_3tu^{-1}t(stu)u^{-1}t^{-1}\\ &\in& U+(z^{k+4}u_3tu_3tu^{-1})t. \end{array}}$ The result follows from corollary \ref{tuts111}(i). 
\item[(ii)] $\small{\begin{array}[t]{lcl} z^ktu^{-1}tu^{-1}tu^{-1}&\in& z^{k}tu^{-1}(R+Rt^{-1}+R^{\times}t^{-2})u^{-1}tu^{-1}\\ &\in&z^{k}u_3tu_3tu^{-1}+z^{k}u_3t(u^{-1}t^{-1}s^{-1})su^{-1}tu^{-1}+ z^ku_3^{\times}t(u^{-1}t^{-1}s^{-1})st^{-1}u^{-1}tu^{-1}\\ &\stackrel{\ref{tuts111}(i)}{\in}&U+z^{k-1}u_3t(R+Rs^{-1})u^{-1}tu^{-1}+z^{k-1}u_3^{\times}ts(t^{-1}s^{-1}u^{-1})usu^{-1}tu^{-1}\\ &\in&U+z^{k-1}u_3tu_3tu^{-1}+z^{k-1}u_3t(s^{-1}u^{-1}t^{-1})t^2u^{-1}+\\&&+ z^{k-2}u_3^{\times}tsu(R+R^{\times}s^{-1})u^{-1}tu^{-1}\\ &\stackrel{\ref{tuts111}(i)}{\in}& U+\underline{\underline{z^{k-2}u_3u_2u_3}}+z^{k-2}u_3t(stu)u^{-2}+z^{k-2}u_3^{\times}tsu(s^{-1}u^{-1}t^{-1})t^2u^{-1}\\ &\in&U+\underline{\underline{z^{k-1}u_3tu_3}}+z^{k-3}u_3^{\times}t(R+R^{\times}s^{-1})ut^2u^{-1}\\ &\in&U+z^{k-3}u_3(tus)s^{-1}t^2u^{-1}+ z^{k-3}u_3^{\times}t^2(t^{-1}s^{-1}u^{-1})u^2t^2u^{-1}\\ &\in&U+z^{k-2}u_3(R+Rs)t^2u^{-1}+z^{k-4}u_3^{\times}t^2u^2(R+Rt+R^{\times}t^{-1})u^{-1}\\ &\in&U+\underline{\underline{z^{k-2}u_3u_2u_3}} +z^{k-2}u_3(ust)tu^{-1}+\underline{\underline{z^{k-4}u_3u_2u_3}}+z^{k-4}u_3u_2u^2tu^{-1}+\\&&+z^{k-4}u_3^{\times}t^2u^2t^{-1}(u^{-1}t^{-1}s^{-1})st \\ &\stackrel{\ref{tuts111}(ii)}{\in}&U+\underline{z^{k-1}u_3tu^{-1}}+ z^{k-6}u_3tu^3tu^{-1}+ z^{k-5}u_3^{\times}t^2u^2t^{-1}(R+R^{\times}s^{-1})t\\ &\stackrel{\ref{tuts11}(v)}{\in}&U+z^{k-5}u_3t^2u^2+z^{k-5}u_3^{\times}t^2u^3(u^{-1}t^{-1}s^{-1})t\\ &\stackrel{(i)}{\subset}&U+z^{k-6}u_3^{\times}t^2u^3t. 
\end{array}}$ \item[(iii)] $\small{\begin{array}[t]{lcl} z^ktu^{-1}tu^{2}tu^{-1}&\in&z^ktu^{-1}t(R+Ru+Ru^{-1}+R^{\times}u^{-2})tu^{-1}\\ &\in& z^ku_3tu^{-1}t^2u^{-1}+z^ku_3tu^{-1}(tus)s^{-1}tu^{-1}+z^ku_3tu^{-1}tu^{-1}tu^{-1}+\\&&+ z^ku_3^{\times}tu^{-1}tu^{-2}tu^{-1} \\ &\stackrel{(ii)}{\in}&z^ku_3tu^{-1}(R+Rt+Rt^{-1})u^{-1}+z^{k+1}u_3tu^{-1}(R+Rs)tu^{-1}+\\&&+\underline{\underline{(z^{k-6}u_3u_2u_3)t}} +z^{k}u_3^{\times}tu^{-1}(R+Rt^{-1}+R^{\times}t^{-2})u^{-2}tu^{-1}\\ &\in& U+\underline{\underline{z^ku_3tu_3}}+(z^k+z^{k+1})u_3tu_3tu^{-1}+z^ku_3t(u^{-1}t^{-1}s^{-1})su^{-1}+\\&&+z^ku_3tu^{-1}(stu)u^{-2} +z^{k}u_3t(u^{-1}t^{-1}s^{-1})su^{-2}tu^{-1}+\\&&+ z^{k}u_3^{\times}t(u^{-1}t^{-1}s^{-1})st^{-1}u^{-2}tu^{-1}\\ &\stackrel{\ref{tuts111}(i)}{\in}&U+\underline{\underline{z^{k-1}u_3tu_1u_3}}+\underline{\underline{z^{k+1}u_3tu_3}} +z^{k-1}u_3t(R+Rs^{-1})u^{-2}tu^{-1}+\\&&+z^{k-1}u_3^{\times}tsu(u^{-1}t^{-1}s^{-1})su^{-2}tu^{-1}\\ &\in&U+z^{k-1}u_3tu_3tu^{-1}+z^{k-1}u_3t(s^{-1}u^{-1}t^{-1})tu^{-1}tu^{-1}+\\&&+ z^{k-2}u_3^{\times}tsu(R+R^{\times}s^{-1})u^{-2}tu^{-1}\\ &\stackrel{\ref{tuts111}(i)}{\in}&U+z^{k-2}u_3u_2u_3tu^{-1}+ z^{k-2}u_3tsu^{-1}tu^{-1}+ z^{k-2}u_3^{\times}tsu(s^{-1}u^{-1}t^{-1})tu^{-1}tu^{-1}\\ &\stackrel{\ref{tuts111}(ii)}{\in}&U+z^{k-2}u_3t(R+Rs^{-1})u^{-1}tu^{-1}+z^{k-3}u_3^{\times}t(R+Rs^{-1})utu^{-1}tu^{-1}\\ &\in&U+z^{k-2}u_3tu_3tu^{-1}+z^{k-2}u_3t(s^{-1}u^{-1}t^{-1})t^2u^{-1}+\\&&+ z^{k-3}u_3(tus)s^{-1}tu^{-1}tu^{-1}+z^{k-3}u_3^{\times}t^2(t^{-1}s^{-1}u^{-1})u^2tu^{-1}tu^{-1}\\ &\stackrel{\ref{tuts111}(i)}{\in}&U+\underline{\underline{z^{k-3}u_3u_2u_3}}+ z^{k-2}u_3(R+Rs)tu^{-1}tu^{-1}+\\&&+ z^{k-4}u_3^{\times}(R+Rt+Rt^{-1})u^2tu^{-1}tu^{-1}\\ \ &\in&U+(z^{k-2}+z^{k-4})u_3u_2u_3tu^{-1}+z^{k-2}u_3(stu)u^{-2}tu^{-1}+z^{k-4}u_3tu^2tu^{-1}tu^{-1}+\\&&+z^{k-4}u_3^{\times}(u^{-1}t^{-1}s^{-1})su^2tu^{-1}tu^{-1}\\ &\stackrel{\ref{tuts111}(ii)}{\in}&U+\underline{z^{k-1}u_3tu^{-1}}+ z^{k-4}u_3tu^2tu^{-1}tu^{-1}+ 
z^{k-5}u_3^{\times}(R+R^{\times}s^{-1})u^2tu^{-1}tu^{-1}\\ &\in&U+z^{k-4}u_3tu^2tu^{-1}tu^{-1}+z^{k-5}u_3tu_3tu^{-1}+ z^{k-5}u_3^{\times}(s^{-1}u^{-1}t^{-1})tu^3tu^{-1}tu^{-1}\\ &\stackrel{\ref{tuts111}(i)}{\in}&U+z^{k-4}u_3tu^2tu^{-1}tu^{-1}+ z^{k-6}u_3^{\times}tu^3tu^{-1}tu^{-1}. \end{array}}$ It remains to prove that $B:=z^{k-4}u_3tu^2tu^{-1}tu^{-1}+ z^{k-6}u_3^{\times}tu^3tu^{-1}tu^{-1}$ is a subset of $U+z^{k-12}u_3^{\times}t^2u^3t.$ We have: $\small{\begin{array}{lcl} B&=&z^{k-4}u_3tu^2tu^{-1}tu^{-1}+ z^{k-6}u_3^{\times}tu^3tu^{-1}tu^{-1}\\ &\subset&z^{k-4}u_3tu^2tu^{-1}tu^{-1}+ z^{k-6}u_3^{\times}t(R+Ru+Ru^2+R^{\times}u^{-1})tu^{-1}tu^{-1}\\ &\subset&U+(z^{k-4}+z^{k-6})u_3tu^2tu^{-1}tu^{-1}+z^{k-6}u_3u_2u_3tu^{-1}+ z^{k-6}u_3(tus)s^{-1}tu^{-1}tu^{-1}+\\&&+z^{k-6}u_3^{\times}tu^{-1}tu^{-1}tu^{-1}\\ &\stackrel{\ref{tuts111}(ii)}{\subset}&U+(z^{k-4}+z^{k-6})u_3(tus)s^{-1}utu^{-1}tu^{-1}+ z^{k-5}u_3(R+Rs)tu^{-1}tu^{-1}+\\&&+z^{k-6}u_3^{\times}tu^{-1}tu^{-1}tu^{-1}\\ &\stackrel{(ii)}{\subset}&U+(z^{k-3}+z^{k-5})u_3(R+Rs)utu^{-1}tu^{-1}+z^{k-5}u_3tu_3tu^{-1}+z^{k-5}u_3(ust)u^{-1}tu^{-1}+\\&&+z^{k-12}u_3^{\times}t^2u^3t\\ &\stackrel{\ref{tuts111}(i)}{\subset}&U+(z^{k-3}+z^{k-5})u_3tu_3tu^{-1}+ (z^{k-3}+z^{k-5})u_3(ust)t^{-2}(tus)s^{-1}tu^{-1}tu^{-1}+ \underline{z^{k-4}u_3tu^{-1}}+\\&&+ z^{k-12}u_3^{\times}t^2u^3t\\ &\stackrel{\ref{tuts111}(i)}{\subset}&U+(z^{k-1}+z^{k-3})u_3t^{-2}(R+Rs)tu^{-1}tu^{-1}+ z^{k-12}u_3^{\times}t^2u^3t\\ &\subset&U+(z^{k-1}+z^{k-3})u_3u_2u_3tu^{-1}+ (z^{k-1}+z^{k-3})u_3t^{-2}(stu)u^{-2}tu^{-1}+z^{k-12}u_3^{\times}t^2u^3t\\ &\subset&U+(z^{k-1}+z^{k-3}+z^k+z^{k-2})u_3u_2u_3tu^{-1}+z^{k-12}u_3^{\times}t^2u^3t\\ &\stackrel{\ref{tuts111}(ii)}{\in}&U+z^{k-12}u_3^{\times}t^2u^3t.
\end{array}}$ \item[(iv)] $\small{\begin{array}[t]{lcl} z^ktu^{-1}tu^2tu^{-1}&=&z^ktu^{-1}(tus)s^{-1}utu^{-1}\\ &\in& z^{k+1}tu^{-1}(R+R^{\times}s)utu^{-1}\\ &\in&\underline{\underline{z^{k+1}u_3u_2u_3}}+z^{k+1}u_3^{\times}tu^{-2}(ust)t^{-2}(tus)s^{-1}tu^{-1}\\ &\in&U+z^{k+3}u_3^{\times}tu^{-2}t^{-2}(R+R^{\times}s)tu^{-1}\\ &\in&U+z^{k+3}u_3tu^{-2}t^{-1}(u^{-1}t^{-1}s^{-1})st+z^{k+3}u_3^{\times}tu^{-2}t^{-2}(stu)u^{-2}\\ &\in&U+ z^{k+2}u_3tu^{-2}t^{-1}(R+Rs^{-1})t+z^{k+4}u_3^{\times}tu^{-2}(R+Rt^{-1}+R^{\times}t)u^{-2}\\ &\in&U+\underline{\underline{z^{k+2}u_3tu_3}}+z^{k+2}u_3tu^{-1}(u^{-1}t^{-1}s^{-1}) t+\underline{\underline{z^{k+4}u_3tu_3}}+\\&&+ z^{k+4}u_3tu^{-1}(u^{-1}t^{-1}s^{-1})su^{-2}+z^{k+4}u_3^{\times}tu^{-2}tu^{-2}\\ &\in&U+\underline{z^{k+1}u_3tu^{-1}u_2}+z^{k+3}u_3tu^{-1}(R+Rs^{-1})u^{-2}+\\&&+ z^{k+4}u_3^{\times}tu^{-2}t(R+Ru+Ru^{-1}+R^{\times}u^{2})\\ &\in&U+\underline{\underline{z^{k+3}u_3tu_3}}+ z^{k+3}u_3tu^{-1}(s^{-1}u^{-1}t^{-1})tu^{-1}+\underline{\underline{(z^{k+4}u_3tu_3)t}}+\\&&+ z^{k+4}u_3tu^{-2}(tus)s^{-1}+z^{k+4}u_3tu_3tu^{-1}+z^{k+4}u_3^{\times}tu^{-2}(tus)s^{-1}u\\ &\stackrel{\ref{tuts111}(i)}{\in}&U+z^{k+2}u_3tu_3tu^{-1}+z^{k+5}u_3tu^{-2}(R+Rs)+z^{k+5}u_3^{\times}tu^{-2}(R+Rs)u\\ &\stackrel{\ref{tuts111}(i)}{\in}&U+\underline{\underline{z^{k+5}u_3tu_3}}+z^{k+5}u_3tu^{-2}(stu)u^{-1}t^{-1}+\underline{z^{k+5}u_3tu^{-1}}+\\&&+ z^{k+5}u_3^{\times}tu^{-3}(ust)t^{-1}u\\ &\in&U+z^{k+6}u_3tu_3u_2+z^{k+6}u_3^{\times}tu^{-3}(R+Rt+R^{\times}t^2)u\\ &\in&U+z^{k+6}u_3tu_3u_2+z^{k+6}u_3tu^{-3}(tus)s^{-1}+ z^{k+6}u_3^{\times}tu^{-3}t(tus)s^{-1}\\ &\in&U+z^{k+6}u_3tu_3u_2+z^{k+7}u_3tu^{-3}(R+Rs)+ z^{k+7}u_3^{\times}tu^{-3}t(R+R^{\times}s)\\ &\in&U+(z^{k+6}+z^{k+7})u_3tu_3u_2+z^{k+7}u_3tu^{-4}(ust)t^{-1} +z^{k+7}u_3^{\times}tu^{-3}t(stu)u^{-1}t\\ &\in&U+(z^{k+6}+z^{k+7}+z^{k+8})u_3tu_3u_2+z^{k+8}u_3^{\times}tu^{-3}tu^{-1}t. 
\end{array}}$\\ \qedhere \end{itemize} \end{proof} \begin{thm} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)]For every $k\in\{0,\dots,23\}$, $z^ktu_3\subset U$. \item[(ii)]For every $k\in\{0,\dots,23\}$, $z^ktu_3tu^{-1}\subset U$. \item[(iii)]$H_{G_{11}}=U$. \label{thm11} \end{itemize} \end{thm} \begin{proof} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item [(i)] By proposition \ref{tu11} (iii), we have to prove that $z^ktu_3\subset U$, for every $k\in\{20,\dots,23\}$. We use different definitions of $u_3$ and we have: \begin{itemize}[leftmargin=*] \item $\underline{k\in\{20,21\}}$: $z^ktu_3=z^kt(R+Ru+Ru^{-1}+Ru^{-2})\subset \underline{z^ku_3t}+\underline{\underline{z^ku_3tu}}+\underline{z^ku_3tu^{-1}}+z^ku_3tu^{-2}$. Therefore, $z^ktu_3\subset U+ \mathbf{z^ku_3tu^{-2}}$. \item $\underline{k\in\{22,23\}}$: $z^ktu_3=z^kt(R+Ru^{-1}+Ru^{-2}+Ru^{-3})\subset \underline{z^ku_3t}+{\underline{z^ku_3tu^{-1}}}+z^ku_3tu^{-2}+z^ku_3tu^{-3}$. Therefore, $z^ktu_3\subset U+ \mathbf{z^ku_3tu^{-2}+z^ku_3tu^{-3}}$. \end{itemize} As a result, it will be sufficient to prove that, for every $k\in\{20,\dots,23\}$, $z^ku_3tu^{-2}$ is a subset of $U$, and that, for every $k\in\{22,23\}$, $z^ku_3tu^{-3}$ is also a subset of $U$. We have: \\ \\ $\begin{array}{lcl} z^ku_3tu^{-2}&=&z^ku_3t^2(t^{-1}s^{-1}u^{-1})usu^{-2}\\ &\subset& z^{k-1}u_3t^2u(R+Rs^{-1})u^{-2}\\ &\subset&z^{k-1}u_3t^2(u^{-1}t^{-1}s^{-1})st+z^{k-1}u_3t^2u(s^{-1}u^{-1}t^{-1})tu^{-1}\\ &\subset& U+z^{k-2}u_3t^2(R+Rs^{-1})t+z^{k-2}u_3u_2utu^{-1}\\ &\stackrel{\ref{tuts111}(ii)}{\subset}&U+\underline{\underline{z^{k-2}u_3u_2u_3}}+z^{k-2}u_3t^3(t^{-1}s^{-1}u^{-1})ut+z^{k-2}u_3tutu^{-1}+z^{k-4}u_3tu^2tu^{-1}. \end{array}$ \\ \\ Therefore, by proposition \ref{tuts11}(ii) and by corollary \ref{tuts111}(i) we have \begin{equation} z^ku_3tu^{-2}\subset U, \label{tuu111} \end{equation} for every $k\in\{20,\dots,23\}$. 
Moreover, since $z^ktu_3 \subset U+z^ku_3tu^{-2}$, $k\in\{20,21\}$, we use proposition \ref{tu11}(iii) and we have that \begin{equation} z^ku_3tu_3\subset U, \label{tuu1111} \end{equation} for every $k\in\{0,\dots,21\}$. We now prove that $z^ku_3tu^{-3}\subset U$, for every $k\in\{22,23\}$. We have:\\ \\ $\begin{array}[t]{lcl} z^ku_3tu^{-3}&\subset&z^ku_3(R+Rt^{-1}+Rt^{-2})u^{-3}\\ &\subset&\underline{z^ku_3}+z^ku_3(u^{-1}t^{-1}s^{-1})su^{-3}+ z^ku_3t^{-1}u(u^{-1}t^{-1}s^{-1})su^{-3}\\ &\subset&U+z^{k-1}u_3(R+Rs^{-1})u^{-3}+z^{k-1}u_3t^{-1}u(R+Rs^{-1})u^{-3}\\ &\subset&U+\underline{z^{k-1}u_3}+z^{k-1}u_3(s^{-1}u^{-1}t^{-1})tu^{-2}+ z^{k-1}u_3(u^{-1}t^{-1}s^{-1})su^{-2}+\\&&+ z^{k-1}u_3(u^{-1}t^{-1}s^{-1})su(s^{-1}u^{-1}t^{-1})tu^{-2}\\ &\subset& U+z^{k-2}u_3tu^{-2}+z^{k-2}u_3(R+Rs^{-1})u^{-2}+ z^{k-3}u_3(R+Rs^{-1})utu^{-2}\\ &\stackrel{(\ref{tuu111})}{\subset}&U+ \underline{z^{k-2}u_3}+z^{k-2}u_3(s^{-1}u^{-1}t^{-1})tu^{-1}+ z^{k-3}u_3tu_3+z^{k-3}u_3(s^{-1}u^{-1}t^{-1})tu^2tu^{-2}\\ &\stackrel{(\ref{tuu1111})}{\subset}& U+\underline{z^{k-3}u_3tu^{-1}}+ z^{k-4}u_3tu^2tu^{-2}. \end{array}$ Therefore, it will be sufficient to prove that $z^{k-4}u_3tu^2tu^{-2}$ is a subset of $U$. For this purpose, we expand $u^2$ as a linear combination of 1, $u$, $u^{-1}$ and $u^{-2}$ and we have: $\small{\begin{array}{lcl} z^{k-4}u_3tu^2tu^{-2} &\subset& U+\underline{\underline{z^{k-4}u_3u_2u_3}}+ z^{k-4}u_3tutu^{-2}+z^{k-4}u_3tu^{-1}tu^{-2}+ z^{k-4}u_3tu^{-2}tu^{-2} \end{array}}$ However, $ z^{k-4}u_3tutu^{-2}=z^{k-4}u_3(tus)s^{-1}tu^{-2}=z^{k-3}u_3s^{-1}tu^{-2}$. If we expand $s^{-1}$ as a linear combination of 1 and $s$ we have that $z^{k-3}u_3s^{-1}tu^{-2}\subset z^{k-3}u_3tu^{-2}+z^{k-3}u_3(stu)u^{-3}=z^{k-3}u_3tu^{-2}+\underline{z^{k-2}u_3}$. Therefore, by relation (\ref{tuu1111}) we have that $z^{k-3}u_3s^{-1}tu^{-2}\subset U$ and, hence, $z^{k-4}u_3tutu^{-2}\subset U$. It remains to prove that $C:=z^{k-4}u_3tu^{-1}tu^{-2}+ z^{k-4}u_3tu^{-2}tu^{-2}$ is a subset of $U$. 
We have: $\small{\begin{array}{lcl} C&=&z^{k-4}u_3tu^{-1}tu^{-2}+ z^{k-4}u_3tu^{-2}tu^{-2}\\ &\subset&z^{k-4}u_3t^2(t^{-1}s^{-1}u^{-1})usu^{-1}tu^{-2}+ z^{k-4}u_3tu^{-2}(R+Rt^{-1}+Rt^{-2})u^{-2}\\ &\subset&z^{k-5}u_3t^2u(R+Rs^{-1})u^{-1}tu^{-2}+ \underline{\underline{z^{k-4}u_3tu_3}}+ z^{k-4}u_3tu^{-1}(u^{-1}t^{-1}s^{-1})su^{-2}+\\&&+z^{k-4}u_3tu^{-2} t^{-1}(t^{-1}s^{-1}u^{-1})usu^{-2}\\ &\subset& U+\underline{\underline{z^{k-5}u_3u_2u_3}}+z^{k-5}u_3t^2u(s^{-1}u^{-1}t^{-1})t^2u^{-2}+z^{k-5}u_3tu^{-1}(R+Rs^{-1})u^{-2}+\\&&+z^{k-5}u_3tu^{-2}t^{-1}u(R+Rs^{-1})u^{-2}\\ &\subset& U+z^{k-6}u_3t^2ut^2u^{-2}+\underline{\underline{z^{k-5}u_3u_2u_3}}+ z^{k-5}u_3tu^{-1}(s^{-1}u^{-1}t^{-1})tu^{-1}+\\&&+ z^{k-5}u_3tu^{-1}(u^{-1}t^{-1}s^{-1})su^{-1}+ z^{k-5}u_3tu^{-1}(u^{-1}t^{-1}s^{-1})su(s^{-1}u^{-1}t^{-1})tu^{-1}\\ &\subset& U+z^{k-6}u_3t^2u(R+Rt+Rt^{-1})u^{-2} +z^{k-6}u_3tu_3tu^{-1}+z^{k-6}u_3tu^{-1}(R+Rs^{-1})u^{-1}+\\&&+ z^{k-7}u_3tu^{-1}(R+Rs^{-1})utu^{-1}\\ &\stackrel{\ref{tuts111}(i)}{\subset}&U+ \underline{\underline{z^{k-6}u_3u_2u_3}}+z^{k-6}u_3t(tus)s^{-1}tu^{-2}+ z^{k-6}u_3t^2u^2(u^{-1}t^{-1}s^{-1})su^{-2}+\\&&+ \underline{\underline{(z^{k-6}+z^{k-7})u_3u_2u_3}}+z^{k-6}u_3tu^{-1}(s^{-1}u^{-1}t^{-1})t +z^{k-7}u_3tu^{-1}(s^{-1}u^{-1}t^{-1})tu^2tu^{-1}\\ &\subset&U+z^{k-5}u_3t(R+Rs)tu^{-2}+z^{k-7}u_3t^2u^2(R+Rs^{-1})u^{-2}+ \underline{z^{k-7}u_3tu^{-1}u_2}+\\&&+ z^{k-8}u_3tu^{-1}tu^2tu^{-1}\\ &\stackrel{\ref{ttuttu}(iii)}{\subset}&U+\underline{\underline{z^{k-5}u_3u_2u_3}}+z^{k-5}u_3t(stu)u^{-3}+ \underline{z^{k-7}u_3u_2}+z^{k-7}u_3t^2u^2(s^{-1}u^{-1}t^{-1})tu^{-1}+\\&&+ \underline{\underline{(z^{k-20}u_3u_2u_3)t}} \\ &\subset&U+\underline{\underline{z^{k-4}tu_3}}+ z^{k-8}u_3u_2u_3tu^{-1}. \end{array}}$ The result follows from corollary \ref{tuts111}(ii). \item[(ii)] By corollary \ref{tuts111}(i), we restrict ourselves to proving the cases where $k\in\{0,1\}\cup\{20,\dots,23\}$. We distinguish the following cases: \begin{itemize}[leftmargin=*] \item [C1.] 
\underline{$k\in\{20,21\}$}: We expand $u_3$ as $R+Ru+Ru^{-1}+Ru^{-2}$ and by proposition \ref{tuts11}(i), (ii), (iv) and (v) we have that $z^ktu_3tu^{-1}\subset U+(z^k+z^{k-1}+z^{k-2})u_3tu_3$. The result follows from (i). \item [C2.]\underline{$k\in\{22,23\}$}: We expand $u_3$ as $R+Ru^{-1}+Ru^{-2}+Ru^{-3}$ and by proposition \ref{tuts11}(i), (iv) and (v) we have that $z^ktu_3tu^{-1}\subset U+(z^k+z^{k-1}+z^{k-2})u_3tu_3+z^ktu^{-3}tu^{-1}\stackrel{(i)}{\subset}U+z^ktu^{-3}tu^{-1}$. However, since $k-8\in\{1,\dots,15\}$, we can apply lemma \ref{ttuttu}(iv) and we have that $z^ktu^{-3}tu^{-1}\in u_3^{\times}Ut^{-1}+(z^k+z^{k-1}+z^{k-2})u_3tu_3u_2+z^{k-8}u_3^{\times}tu^{-1}tu^2tu^{-1}t^{-1}$. However, by (i) and by lemma \ref{ttuttu}(iii) we also have that $z^ktu^{-3}tu^{-1}\in u_3^{\times}Uu_2+\underline{\underline{(z^{k-20}u_3u_2u_3)t}}$. The result then follows from the definition of $U$ and remark \ref{rem11}. \item [C3.]\underline{$k\in\{0,1\}$}: We expand $u_3$ as $R+Ru+Ru^2+Ru^{3}$ and by proposition \ref{tuts11}(ii), (iii) and (v) we have that $z^ktu_3tu^{-1}\subset U+z^kt^2u^{-1}\subset U+z^kt^2(R+Ru+Ru^2+Ru^3)\subset U+\underline{z^ku_2}+z^ku_3t(tus)s^{-1}+z^ku_3t^2u^2+z^ku_3t^2u^3 \stackrel{\ref{ttuttu}(i)}{\subset} U+\underline{\underline{z^{k+1}u_3u_2u_1}}+z^ku_2t^2u^3$. However, by lemma \ref{ttuttu}(iii), we have that $z^kt^2u_3\in u_3^{\times}Ut^{-1}+z^{k+12}u_3^{\times}tu^{-1}tu^2tu^{-1}t^{-1}$. Since $k+12\in\{1,\dots,15\}$, we can apply lemma \ref{ttuttu}(iv) and we have that $z^{k+12}tu^{-1}tu^2tu^{-1}t^{-1}\in Ut^{-1}+( z^{k+18}+z^{k+19}+z^{k+20})u_3tu_3u_2+z^{k+20}u_3tu_3tu^{-1}$. However, by (i) and by case C1, we have that $z^{k+12}tu^{-1}tu^2tu^{-1}t^{-1}\in Uu_2$. Therefore, $z^ktu_3tu^{-1}\subset u_3Ut^{-1}+U$. The result follows from the definition of $U$ and remark \ref{rem11}. \end{itemize} \item[(iii)] The result follows immediately from (i) and (ii) (see corollary \ref{corr11}(iii)).
\qedhere \end{itemize} \end{proof} \begin{cor} The BMR freeness conjecture holds for the generic Hecke algebra $H_{G_{11}}$. \end{cor} \begin{proof} By theorem \ref{thm11}(iii) we have that $H_{G_{11}}=U$. The result follows from proposition \ref{BMR PROP}, since by definition $U$ is generated as a left $u_3$-module by 144 elements and, hence, as $R$-module by $|G_{11}|=576$ elements (recall that $u_3$ is generated as $R$-module by 4 elements). \end{proof} \subsection{The case of $G_{12}$} \indent Let $R=\mathbb{Z}[u_{s,i}^{\pm},u_{t,j}^{\pm},u_{u,l}^{\pm}]_{\substack{1\leq i,j,l\leq 2 }}$ and let $$ H_{G_{12}}=\langle s,t,u\;|\; stus=tust=ustu,\prod\limits_{i=1}^{2}(s-u_{s,i})=\prod\limits_{j=1}^{2}(t-u_{t,j})=\prod\limits_{l=1}^{2}(u-u_{u,l})=0\rangle.$$ Let also $\bar R=\mathbb{Z}[u_{s,i}^{\pm}]_{\substack{1\leq i\leq 2 }}$. Under the specialization $\grf: R\twoheadrightarrow \bar R$, defined by $u_{t,j}\mapsto u_{s,j}$, $u_{u,l}\mapsto u_{s,l}$, the algebra $\bar H_{G_{12}}:=H_{G_{12}}\otimes_{\grf}\bar R$ is the generic Hecke algebra associated to $G_{12}$. Let $u_1$ be the subalgebra of $H_{G_{12}}$ generated by $s$, $u_2$ the subalgebra of $H_{G_{12}}$ generated by $t$ and $u_3$ the subalgebra of $H_{G_{12}}$ generated by $u$. We recall that $z:=(stu)^4$ generates the center of the complex braid group associated to $G_{12}$ and that $|Z(G_{12})|=2$. We set $U=\sum\limits_{k=0}^1(z^ku_1u_3u_1u_2+z^ku_1u_2u_3u_2+z^ku_2u_3u_1u_2+z^ku_2u_1u_3u_2+z^ku_3u_2u_1u_2).$ By the definition of $U$ we have the following remark: \begin{rem} $Uu_2\subset U$. \label{r12} \end{rem} To make it easier for the reader to follow the next calculations, we will underline the elements that by definition belong to $U$. Moreover, we will use remark \ref{r12} directly; this means that every time we have a power of $t$ at the end of an element we may ignore it. To remind the reader of this, we put parentheses around the part of the element we consider.
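The calculations below also make constant use of the inverted forms of the braid relations. Inverting $stus=tust=ustu$ yields

```latex
% Inverses of the braid relations of G_12; these are exactly the
% parenthesised subwords that appear in the computations below.
\[
s^{-1}u^{-1}t^{-1}s^{-1}\;=\;t^{-1}s^{-1}u^{-1}t^{-1}\;=\;u^{-1}t^{-1}s^{-1}u^{-1}.
\]
```

Whenever one of these subwords is parenthesised in a computation, such as $(t^{-1}s^{-1}u^{-1}t^{-1})$, it is about to be replaced by one of the other two (and likewise for the positive subwords $(stus)$, $(tust)$, $(ustu)$).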
\begin{prop}$u_1U\subset U$. \label{su12} \end{prop} \begin{proof} Since $u_1=R+Rs$ we have to prove that $sU\subset U$. By the definition of $U$ and by remark \ref{r12}, it is enough to prove that for every $k\in\{0,1\}$, $z^ksu_2u_3u_1$, $z^ksu_2u_1u_3$ and $z^ksu_3u_2u_1$ are subsets of $U$. We have: \begin{itemize}[leftmargin=*] \item $\small{\begin{array}[t]{lcl} z^ksu_2u_3u_1&=&z^ks(R+Rt)u_3u_1\\ &\subset&\underline{z^ku_1u_3u_1}+z^kst(R+Ru)u_1\\ &\subset&U+z^ku_1u_2u_1+z^kstu(R+Rs)\\ &\subset&U+z^ku_1u_2u_1+\underline{z^ku_1u_2u_3}+Rz^kstus\\ &\subset&U+z^ku_1u_2u_1+Rz^ktust\\ &\subset&U+z^ku_1u_2u_1+\underline{z^ku_2u_3u_1u_2}\\ &\subset&U+z^ku_1u_2u_1. \end{array}}$ \item $\small{\begin{array}[t]{lcl} z^ksu_2u_1u_3&=&z^ks(R+Rt^{-1})u_1u_3\\ &\subset&\underline{z^ku_1u_3}+z^kst^{-1}(R+Rs^{-1})u_3\\ &\subset&U+\underline{z^ku_1u_2u_3}+z^kst^{-1}s^{-1}(R+Ru^{-1})\\ &\subset&U+z^ku_1u_2u_1+Rz^ks(t^{-1}s^{-1}u^{-1}t^{-1})t\\ &\subset&U+z^ku_1u_2u_1+Rz^ku^{-1}t^{-1}s^{-1}t\\ &\subset&U+z^ku_1u_2u_1+\underline{z^ku_3u_2u_1u_2}\\ &\subset&U+z^ku_1u_2u_1. \end{array}}$ \item $\small{\begin{array}[t]{lcl} z^ksu_3u_2u_1&\subset&z^k(R+Rs^{-1})u_3u_2u_1\\ &\subset&\underline{z^ku_3u_2u_1}+z^ks^{-1}(R+Ru^{-1})u_2u_1\\ &\subset&U+z^ku_1u_2u_1+z^ks^{-1}u^{-1}(R+Rt^{-1})u_1\\ &\subset&U+z^ku_1u_2u_1+\underline{z^ku_1u_3u_1}+z^ks^{-1}u^{-1}t^{-1}(R+Rs^{-1})\\ &\subset&U+z^ku_1u_2u_1+\underline{z^ku_1u_3u_2}+Rz^k(s^{-1}u^{-1}t^{-1}s^{-1})\\ &\subset&U+z^ku_1u_2u_1+Rz^kt^{-1}s^{-1}u^{-1}t^{-1}\\ &\subset&U+z^ku_1u_2u_1+\underline{z^ku_2u_1u_3u_2}\\ &\subset&U+z^ku_1u_2u_1. \end{array}}$ \end{itemize} Therefore, we have to prove that for every $k\in\{0,1\}$, $z^ku_1u_2u_1\subset U$.
We distinguish the following cases: \begin{itemize}[leftmargin=*] \item $\underline{k=0}$: $\small{\begin{array}[t]{lcl} u_1u_2u_1&=&(R+Rs)(R+Rt)(R+Rs)\\ &\subset&\underline{u_2u_1u_2}+Rsts\\ &\subset&U+ (stu)^4u^{-1}(t^{-1}s^{-1}u^{-1}t^{-1})s^{-1}(u^{-1}t^{-1}s^{-1}u^{-1})s\\ &\subset&U+zu^{-2}t^{-1}s^{-1}u^{-1}s^{-2}u^{-1}t^{-1}\\ &\subset&U+zu^{-2}t^{-1}s^{-1}u^{-1}(R+Rs^{-1})u^{-1}t^{-1}\\ &\subset&U+zu^{-2}t^{-1}s^{-1}u^{-2}t^{-1}+u^{-2}t^{-1}s^{-1}zu^{-1}s^{-1}u^{-1}t^{-1}\\ &\subset&U+zu^{-2}t^{-1}s^{-1}(R+Ru^{-1})t^{-1}+u^{-2}t^{-1}s^{-1}(stu)^4u^{-1}s^{-1}u^{-1}t^{-1}\\ &\subset&U+\underline{zu_3u_2u_1u_2}+zu^{-1}(u^{-1}t^{-1}s^{-1}u^{-1})t^{-1}+ u^{-1}(stus)(tust)s^{-1}u^{-1}t^{-1}\\ &\subset&U+z(u^{-1}t^{-1}s^{-1}u^{-1})t^{-2}+(stus)\\ &\subset&U+zt^{-1}s^{-1}u^{-1}t^{-3}+tust\\ &\subset&U+\underline{zu_2u_1u_3u_2}+\underline{u_2u_3u_1u_2}. \end{array}}$ \item $\underline{k=1}$: $\small{\begin{array}[t]{lcl} zu_1u_2u_1&=&z(R+Rs^{-1})(R+Rt^{-1})(R+Rs^{-1})\\ &\subset&\underline{zu_2u_1u_2}+zRs^{-1}t^{-1}s^{-1}\\ &\subset&U+Rs^{-1}t^{-1}s^{-1}z\\ &\subset&U+Rs^{-1}t^{-1}s^{-1}(stu)^4\\ &\subset&U+Rs^{-1}(ustu)s(tust)u\\ &\subset&U+Rtus^2ustu^2\\ &\subset&U+Rtu(R+Rs)ustu^2\\ &\subset&U+Rtu^2st(R+Ru)+Rtus(ustu)u\\ &\subset&U+\underline{u_2u_3u_1u_2}+Rtu^2(stu)^4(stu)^{-3}+Rtustustu\\ &\subset&U+Rztu(t^{-1}s^{-1}u^{-1}t^{-1})s^{-1}u^{-1}t^{-1}s^{-1}+Rtu(stu)^4(stu)^{-2}\\ &\subset&U+Rzs^{-1}u^{-1}(s^{-1}u^{-1}t^{-1}s^{-1})+ Rz(s^{-1}u^{-1}t^{-1}s^{-1})\\ &\subset&U+Rz(s^{-1}u^{-1}t^{-1}s^{-1})u^{-1}t^{-1}+Rzt^{-1}s^{-1}u^{-1}t^{-1}\\ &\subset&U+Rzu^{-1}t^{-1}s^{-1}u^{-2}t^{-1}+\underline{zu_2u_1u_3u_2}\\ &\subset&U+zu^{-1}t^{-1}s^{-1}(R+Ru^{-1})t^{-1}\\ &\subset&U+\underline{zu_3u_2u_1u_2}+z(u^{-1}t^{-1}s^{-1}u^{-1})t^{-1}\\ &\subset&U+zt^{-1}s^{-1}u^{-1}t^{-2}\\ &\subset&U+\underline{zu_2u_1u_3u_2}. \end{array}}$\\ \qedhere \end{itemize} \end{proof} Our goal is to prove that $H_{G_{12}}=U$ (theorem \ref{thm12}).
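In the $k=0$ case above, the crucial rewriting of $Rsts$ starts from the identity $sts=(stu)^4u^{-1}(t^{-1}s^{-1}u^{-1}t^{-1})s^{-1}(u^{-1}t^{-1}s^{-1}u^{-1})s$, which holds in the free group on $s,t,u$ by cancellation alone; the braid relations are only applied afterwards. Such steps can be checked mechanically. The following sketch (hypothetical helper code, not part of the paper) performs the free reduction:

```python
# Free-reduction check for the k = 0 step above:
#   sts = (stu)^4 u^-1 (t^-1 s^-1 u^-1 t^-1) s^-1 (u^-1 t^-1 s^-1 u^-1) s
# holds in the free group on s, t, u by pure cancellation.

def w(text):
    """Parse 's t U' into letters (generator, exponent); capitals are inverses."""
    return [(c.lower(), -1 if c.isupper() else 1) for c in text.split()]

def free_reduce(word):
    """Cancel adjacent inverse pairs x x^-1 until none remain."""
    out = []
    for g, e in word:
        if out and out[-1] == (g, -e):
            out.pop()
        else:
            out.append((g, e))
    return out

z = w('s t u') * 4  # z = (stu)^4 for G_12
# u^-1 (t^-1 s^-1 u^-1 t^-1) s^-1 (u^-1 t^-1 s^-1 u^-1) s:
tail = w('U T S U T S U T S U s')
assert free_reduce(z + tail) == w('s t s')
```

The same helper verifies every step below in which a power of $z=(stu)^4$ is inserted or extracted before a braid relation is applied.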
The following proposition provides a criterion for this to be true. \begin{prop} If $u_3U\subset U$, then $H_{G_{12}}=U$. \label{prop12} \end{prop} \begin{proof} Since $1\in U$, it is enough to prove that $U$ is a left ideal of $H_{G_{12}}$. For this purpose, one may check that $sU$, $tU$ and $uU$ are subsets of $U$. By hypothesis and proposition \ref{su12}, it is enough to prove that $tU\subset U$. By the definition of $U$ and remark \ref{r12} we have to prove that for every $k\in\{0,1\}$ $z^ktu_1u_3u_1$, $z^ktu_1u_2u_3$ and $z^ktu_3u_2u_1$ are subsets of $U$. We have: \begin{itemize}[leftmargin=*] \item $\small{\begin{array}[t]{lcl} z^ktu_1u_3u_1&\subset&z^k(R+Rt^{-1})u_1u_3u_1\\ &\subset&\underline{z^ku_1u_3u_1}+z^kt^{-1}(R+Rs^{-1})(R+Ru)(R+Rs)\\ &\subset&U+\underline{z^ku_2u_3u_1}+\underline{z^ku_2u_1u_3}+Rz^kt^{-1}s^{-1}us\\ &\subset&U+Rz^kt^{-1}s^{-1}(ustu)u^{-1}t^{-1}\\ &\subset&U+Rz^kusu^{-1}t^{-1}\\ &\subset&U+u_3\underline{z^ku_1u_3u_2}\\ &\subset& U+u_3U. \end{array}}$ \item $\small{\begin{array}[t]{lcl} z^ktu_1u_2u_3&\subset&z^k(R+Rt^{-1})u_1u_2u_3\\ &\subset&\underline{z^ku_1u_2u_3}+z^kt^{-1}(R+Rs)(R+Rt)(R+Ru)\\ &\subset&U+\underline{z^ku_2u_1u_3}+\underline{z^ku_2u_1u_2}+Rz^kt^{-1}stu\\ &\subset&U+Rz^kt^{-1}(stus)s^{-1}\\ &\subset&U+Rz^kusts^{-1}\\ &\subset&U+u_3u_1\underline{z^ku_2u_1}\\ &\subset&U+u_3u_1U. \end{array}}$ \item $\small{\begin{array}[t]{lcl} z^ktu_3u_2u_1&=&z^kt(R+Ru^{-1})(R+Rt^{-1})(R+Rs^{-1})\\ &\subset&\underline{z^ku_2u_3u_1u_2}+Rz^ktu^{-1}t^{-1}s^{-1}\\ &\subset&U+Rz^kt(u^{-1}t^{-1}s^{-1}u^{-1})u\\ &\subset&U+Rz^ks^{-1}u^{-1}t^{-1}u\\ &\subset&U+u_1u_3\underline{z^ku_2u_3}\\ &\subset&U+u_1u_3U. \end{array}}$ The result follows from the hypothesis, proposition \ref{su12} and remark \ref{r12}. \qedhere \end{itemize} \end{proof} We can now prove the main theorem of this section. \begin{thm}$H_{G_{12}}=U$. \label{thm12} \end{thm} \begin{proof} By proposition \ref{prop12} it is enough to prove that $u_3U\subset U$.
Since $u_3=R+Ru$, it will be sufficient to check that $uU\subset U$. By the definition of $U$ and remark \ref{r12}, we only have to prove that for every $k\in\{0,1\}$, $z^kuu_1u_3u_1$, $z^kuu_1u_2u_3$, $z^kuu_2u_3u_1$ and $z^kuu_2u_1u_3$ are subsets of $U$. \begin{itemize}[leftmargin=0.8cm] \item[C1.] We will prove that $z^kuu_1u_3u_1\subset U$, $k\in\{0,1\}$. \begin{itemize}[leftmargin=*] \item [(i)]$\underline{k=0}$: $\small{\begin{array}[t]{lcl} uu_1u_3u_1&=&u(R+Rs)(R+Ru)(R+Rs)\\ &\subset&\underline{u_3u_1}+Rusu+Rusus\\ &\subset&U+Rzuz^{-1}su+Rzusuz^{-1}s\\ &\subset&U+Rzu(stu)^{-4}su+Rzusu(stu)^{-4}s\\ &\subset&U+Rzt^{-1}(s^{-1}u^{-1}t^{-1}s^{-1})u^{-1}(t^{-1}s^{-1}u^{-1}t^{-1}u)+\\&&+ Rzus(t^{-1}s^{-1}u^{-1}t^{-1})s^{-1}(u^{-1}t^{-1}s^{-1}u^{-1})t^{-1}\\ &\subset&U+Rzt^{-2}s^{-1}u^{-1}t^{-1}u^{-2}t^{-1}s^{-1}+Rzt^{-1}s^{-2}t^{-1}s^{-1}u^{-1}t^{-2}\\ &\subset&U+Rz(R+Rt^{-1})s^{-1}u^{-1}t^{-1}u^{-2}t^{-1}s^{-1}+ Rzt^{-1}(R+Rs^{-1})t^{-1}s^{-1}u^{-1}t^{-2}\\ &\subset&U+Rzs^{-1}u^{-1}t^{-1}(R+Ru^{-1})t^{-1}s^{-1}+ Rt^{-1}s^{-1}u^{-1}t^{-1}u^{-2}t^{-1}s^{-1}z+ \underline{zu_2u_1u_3u_2}+\\&&+Rzt^{-1}s^{-1}(t^{-1}s^{-1}u^{-1}t^{-1})t^{-1}\\ &\subset&U+u_1\underline{zu_3u_2u_1}+Rs^{-1}u^{-1}t^{-1}u^{-1}t^{-1}s^{-1}z+Rt^{-1}s^{-1}u^{-1}t^{-1}u^{-2}t^{-1}s^{-1}(stu)^4+\\&&+ Rt^{-1}s^{-2}u^{-1}t^{-1}s^{-1}zt^{-1}\\ &\subset&U+u_1U+Rs^{-1}u^{-1}t^{-1}u^{-1}t^{-1}s^{-1}(stu)^4+ R(t^{-1}s^{-1}u^{-1}t^{-1})u^{-1}(stus)(tust)u+\\&&+ Rt^{-1}s^{-2}u^{-1}t^{-1}s^{-1}(stu)^4t^{-1}\\ &\subset&U+u_1U+Rs^{-1}u^{-1}t^{-1}(stus)t(ustu)+Rs^{-1}t(ustu)+ Rt^{-1}s^{-1}(tust)(ustu)t^{-1}\\ &\subset&U+u_1U+Rt^3ust+u_1t^2ust+R(ustu)s\\ &\subset&U+u_1U+u_1\underline{u_2u_3u_1u_2}+Rstus^2\\ &\subset&U+u_1U+u_1\underline{u_2u_3u_1}\subset U+u_1U. \end{array}}$ The result follows from proposition \ref{su12}. 
\item [(ii)]\underline{$k=1$}: $\small{\begin{array}[t]{lcl} zuu_1u_3u_1&\subset& z(R+Ru^{-1})u_1u_3u_1\\ &\subset&\underline{zu_1u_3u_1}+z u^{-1}(R+Rs^{-1})(R+Ru^{-1})(R+Rs^{-1})\\ &\subset&U+\underline{zu_3u_1}+Rzu^{-1}s^{-1}u^{-1}+Rzu^{-1}s^{-1}u^{-1}s^{-1}\\ &\subset&U+Ru^{-1}s^{-1}zu^{-1}+Ru^{-1}s^{-1}zu^{-1}s^{-1}\\ &\subset&U+Ru^{-1}s^{-1}(stu)^4u^{-1}+Ru^{-1}s^{-1}(stu)^4u^{-1}s^{-1}\\ &\subset& U+Ru^{-1}(tust)u(stus)t+Ru^{-1}(tust)us(tust)s^{-1}\\ &\subset&U+Rstu^3stut+R(stus)stu\\ &\subset&U+u_1t(R+Ru)stut+Rust(ustu)\\ &\subset&U+u_1t(stu)^4(stu)^{-3}t+u_1(tust)ut+Rust^2ust\\ &\subset&U+zu_1t(u^{-1}t^{-1}s^{-1}u^{-1})t^{-1}s^{-1}u^{-1}t^{-1}s^{-1}t+u_1ustu^2+Rus(R+Rt)ust\\ &\subset&U+zu_1u^{-1}t^{-2}s^{-1}u^{-1}t^{-1}s^{-1}t+u_1ust(R+Ru)+ (uu_1u_3u_1)u_2+R(ustu)st\\ &\stackrel{(i)}{\subset}&U+zu_1u^{-1}(R+Rt^{-1})s^{-1}u^{-1}t^{-1}s^{-1}t+u_1\underline{u_3u_1u_2}+ u_1(ustu)+Uu_2+\\&&+Rstus^2t\\ &\stackrel{\ref{r12}}{\subset}&U+u_1U+zu_1u^{-1}(s^{-1}u^{-1}t^{-1}s^{-1})t+u_1(stu)^{-2}zt+ u_1tust+u_1\underline{u_2u_3u_1u_2}\\ &\subset& U+u_1U+zu_1(u^{-1}t^{-1}s^{-1}u^{-1})+u_1(stu)^{-2}(stu)^4t+ u_1\underline{u_2u_3u_1u_2}\\ &\subset&U+u_1U+zu_1t^{-1}s^{-1}u^{-1}t^{-1}+u_1t(ustu)t\\ &\subset&U+u_1U+u_1\underline{u_2u_1u_3u_2}+u_1t^2ust^2\\ &\subset&U+u_1U+u_1\underline{u_2u_3u_1u_2}\\ &\subset&U+u_1U. \end{array}}$ The result follows again from proposition \ref{su12}. \end{itemize} \item [C2.] We will prove that $z^kuu_1u_2u_3\subset U$, $k\in\{0,1\}$. We notice that $z^kuu_1u_2u_3=z^ku(R+Rs)(R+Rt)(R+Ru)\subset \underline{z^ku_3u_1u_2}+z^ku_3u_2u_3+z^kuu_1u_3u_1+Rz^k(ustu)\stackrel{C1}{\subset} U+z^ku_3u_2u_3+Rz^ktust\subset U+z^ku_3u_2u_3+\underline{z^ku_2u_3u_1u_2}\subset U+z^ku_3u_2u_3.$ Therefore, we must prove that, for every $k\in\{0,1\}$, $z^ku_3u_2u_3\subset U$.
We distinguish the following cases: \begin{itemize}[leftmargin=*] \item [(i)] $\underline{k=0}:$ $\small{\begin{array}[t]{lcl} u_3u_2u_3&=&(R+Ru)(R+Rt)(R+Ru)\\ &\subset&\underline{u_2u_3u_2}+Rutu\\ &\subset&U+Rus^{-1}stu\\ &\subset&U+Ru(R+Rs)stu\\ &\subset&U+R(ustu)+Rus(stu)^4(stu)^{-3}\\ &\subset&U+Rtust+Rzus(u^{-1}t^{-1}s^{-1}u^{-1})(t^{-1}s^{-1}u^{-1}t^{-1})s^{-1}\\ &\subset&U+\underline{u_2u_3u_1u_2}+Rzt^{-1}s^{-2}u^{-1}t^{-1}s^{-2}\\ &\subset&U+Rzt^{-1}s^{-2}u^{-1}t^{-1}(R+Rs^{-1})\\ &\subset&U+\underline{zu_2u_1u_3u_2}+Rt^{-1}s^{-1}u^{-1}t^{-1}s^{-1}z\\ &\subset&U+Rt^{-1}s^{-2}u^{-1}t^{-1}s^{-1}(stu)^4\\ &\subset&U+Rt^{-1}s^{-1}(tust)(ustu)\\ &\subset&U+R(ustu)st\\ &\subset&U+Rstus^2t\\ &\subset&U+u_1\underline{u_2u_3u_1u_2}\\ &\subset&U+u_1U. \end{array}}$ The result follows from proposition \ref{su12}. \item[(ii)]$\underline{k=1}:$ $\small{\begin{array}[t]{lcl} zu_3u_2u_3&=&z(R+Ru^{-1})(R+Rt^{-1})(R+Ru^{-1})\\ &\subset&\underline{zu_2u_3u_2}+Rzu^{-1}t^{-1}u^{-1}\\ &\subset&U+R(stu)^4u^{-1}t^{-1}u^{-1}\\ &\subset&U+Rs(tust)u(stus)u^{-1}\\ &\subset&U+Rs^2tusu^2st\\ &\subset&U+u_1tus(R+Ru)st\\ &\subset&U+u_1\underline{u_2u_3u_1u_2}+u_1(stu)^4(stu)^{-3}sust\\ &\subset&U+u_1U+zu_1u^{-1}t^{-1}s^{-1}u^{-1}(t^{-1}s^{-1}u^{-1}t^{-1})ust\\ &\subset&U+u_1U+zu_1u^{-1}t^{-1}s^{-1}u^{-2}\\ &\subset&U+u_1U+zu_1u^{-1}t^{-1}s^{-1}(R+Ru^{-1})\\ &\subset&U+u_1U+u_1\underline{zu_3u_2u_1}+zu_1(u^{-1}t^{-1}s^{-1}u^{-1})\\ &\subset&U+u_1U+zu_1t^{-1}s^{-1}u^{-1}t^{-1}\\ &\subset&U+u_1U+u_1\underline{zu_2u_1u_3u_2}\\ &\subset& U+u_1U. \end{array}}$ The result follows again from proposition \ref{su12}. \end{itemize} \item[C3.] For every $k\in\{0,1\}$ we have: $z^kuu_2u_3u_1=z^ku(R+Rt)(R+Ru)(R+Rs)\subset \underline{z^ku_3u_2u_1}+Rz^kutu+Rz^kutus\subset U+z^kuu_1u_2u_3+Rz^ku(tust)t^{-1}$. Therefore, by C2 we have $z^kuu_2u_3u_1\subset U+Rz^k(ustu)st^{-1}\subset U+Rz^kstus^2t^{-1}\subset U+u_1\underline{z^ku_2u_3u_1u_2}\subset U+u_1U$. The result follows from proposition \ref{su12}.
\item[C4.]For every $k\in\{0,1\}$ we have: $z^kuu_2u_1u_3=z^ku(R+Rt^{-1})(R+Rs^{-1})(R+Ru^{-1})\subset U+\underline{z^ku_3u_2u_1}+z^kuu_1u_2u_3+z^kuu_1u_3u_1+Rz^kut^{-1}s^{-1}u^{-1}$. Therefore, by C1 and C2 we have that $z^kuu_2u_1u_3\subset U+Rz^ku(t^{-1}s^{-1}u^{-1}t^{-1})t\subset U+ Rz^kt^{-1}s^{-1}u^{-1}t\subset U+\underline{z^ku_2u_1u_3u_2}.$ \qedhere \end{itemize} \end{proof} \begin{cor} The BMR freeness conjecture holds for the generic Hecke algebra $\bar H_{G_{12}}$. \end{cor} \begin{proof} By theorem \ref{thm12} we have that $H_{G_{12}}=U=\sum\limits_{k=0}^1z^k(u_2+su_2+uu_2+suu_2+usu_2+tuu_2+tsu_2+susu_2+stuu_2+tusu_2+tsuu_2+utsu_2)$ and, hence, $H_{G_{12}}$ is generated as a right $u_2$-module by 24 elements and therefore as $R$-module by $|G_{12}|=48$ elements (recall that $u_2$ is generated as $R$-module by 2 elements). Since $\bar H_{G_{12}}=H_{G_{12}}\otimes_{\grf}\bar R$, it follows that $\bar H_{G_{12}}$ is generated as $\bar R$-module by $|G_{12}|=48$ elements. The result follows from proposition \ref{BMR PROP}. \end{proof} \subsection{The case of $G_{13}$} \indent Let $R=\mathbb{Z}[u_{s,i}^{\pm},u_{t,j}^{\pm},u_{u,l}^{\pm}]_{\substack{1\leq i,j,l\leq 2 }}$ and let $$H_{G_{13}}=\langle s,t,u\;|\; ustu=tust,stust=ustus, \prod\limits_{i=1}^{2}(s-u_{s,i})=\prod\limits_{j=1}^{2}(t-u_{t,j})=\prod\limits_{l=1}^{2}(u-u_{u,l})=0\rangle$$ be the generic Hecke algebra associated to $G_{13}$. Let $u_1$ be the subalgebra of $H_{G_{13}}$ generated by $s$, $u_2$ the subalgebra of $H_{G_{13}}$ generated by $t$ and $u_3$ the subalgebra of $H_{G_{13}}$ generated by $u$. We recall that $z:=(stu)^3=(tus)^3=(ust)^3$ generates the center of the associated complex braid group and that $|Z(G_{13})|=4$. We set $U=\sum\limits_{k=0}^{3}z^k(u_1u_2u_1u_2+u_1u_2u_3u_2+u_2u_3u_1u_2+u_2u_1u_3u_2+u_3u_2u_1u_2)$. By the definition of $U$, we have the following remark: \begin{rem} $Uu_2 \subset U$.
\label{rem13} \end{rem} From now on, we will underline the elements that by definition belong to $U$. Moreover, we will use remark \ref{rem13} directly; this means that every time we have a power of $t$ at the end of an element, we may ignore it. To remind the reader of this, we put parentheses around the part of the element we consider. Our goal is to prove that $H_{G_{13}}=U$ (theorem \ref{thm13}). Since $1\in U$, it is enough to prove that $U$ is a left ideal of $H_{G_{13}}$. For this purpose, one may check that $sU$, $tU$ and $uU$ are subsets of $U$. We set $$\small{\begin{array}{lcl} v_1&=&1\\ v_2&=&u\\ v_3&=&s\\ v_4&=&ts\\ v_5&=&su\\ v_6&=&us\\ v_7&=&tu\\ v_8&=&tsu\\ v_9&=&tus\\ v_{10}&=&sts\\ v_{11}&=&stu\\ v_{12}&=&uts \end{array}}$$ By the definition of $U$, one may notice that $U=\sum\limits_{k=0}^3\sum\limits_{i=1}^{12}(Rz^kv_i+Rz^kv_it)$. Hence, by remark \ref{rem13} we only have to prove that for every $k\in\{0,\dots,3\}$, $sz^kv_i$, $tz^kv_i$ and $uz^kv_i$ lie in $U$, $i=1,\dots,12$. As a first step, we prove this claim for $i=8,\dots, 12$ and for a smaller range of the values of $k$, as we can see in the following proposition.
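Many of the one-line computations that follow insert $z^{\pm 1}=(stu)^{\pm 3}$ and cancel freely before applying a braid relation; for instance, $stus=z\,u^{-1}t^{-1}s^{-1}u^{-1}t^{-1}$ holds by cancellation alone, and the inverted relation $u^{-1}t^{-1}s^{-1}u^{-1}=t^{-1}s^{-1}u^{-1}t^{-1}$ (obtained from $ustu=tust$) then gives $stus=z\,t^{-1}s^{-1}u^{-1}t^{-2}$, the computation used for $z^ksv_9$ below. A minimal sketch verifying the cancellation step (hypothetical helper code, not from the paper):

```python
# Check, in the free group on s, t, u, that  z^-1 * stus  reduces to
# u^-1 t^-1 s^-1 u^-1 t^-1  when z = (stu)^3 (pure cancellation; the braid
# relation is applied afterwards by hand).

def w(text):
    """Parse 's t U' into letters (generator, exponent); capitals are inverses."""
    return [(c.lower(), -1 if c.isupper() else 1) for c in text.split()]

def inv(word):
    """Inverse word in the free group."""
    return [(g, -e) for g, e in reversed(word)]

def free_reduce(word):
    """Cancel adjacent inverse pairs x x^-1 until none remain."""
    out = []
    for g, e in word:
        if out and out[-1] == (g, -e):
            out.pop()
        else:
            out.append((g, e))
    return out

z = w('s t u') * 3  # z = (stu)^3 for G_13
assert free_reduce(inv(z) + w('s t u s')) == w('U T S U T')
# Replacing the prefix u^-1 t^-1 s^-1 u^-1 by t^-1 s^-1 u^-1 t^-1 yields
# t^-1 s^-1 u^-1 t^-2, i.e. stus = z t^-1 s^-1 u^-1 t^-2.
```

The helper is reusable for any of the insert-$z$-and-cancel steps in the remainder of the section.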
\begin{prop} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] For every $k\in\{1,2,3\}$, $z^ksv_{8}\in U.$ \item[(ii)] For every $k\in\{0,1,2\}$, $z^ksv_{9}\in U.$ \item[(iii)] For every $k\in\{0,1,2\}$, $z^kuv_{10}\in U.$ \item[(iv)]For every $k\in\{0,1,2,3\}$, $z^ktv_{10}\in U.$ \item[(v)]For every $k\in\{0,1,2,3\}$, $z^ktv_{11}\in U.$ \item[(vi)] For every $k\in\{1,2,3\}$, $z^ksv_{12}\in U.$ \item[(vii)] For every $k\in\{0,1,2,3\}$, $z^ktv_{12}\in U.$ \end{itemize} \label{easy} \end{prop} \begin{proof} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize} [leftmargin=0.8cm] \item[(i)] $\small{\begin{array}[t]{lcl}z^ksv_{8}&=&z^kstsu\\ &\in&z^k(R+Rs^{-1})(R+Rt^{-1})(R+Rs^{-1})(R+Ru^{-1})\\ &\in& \underline{z^ku_2u_1u_3}+\underline{z^ku_1u_2u_3}+\underline{z^ku_1u_2u_1}+ Rz^ks^{-1}t^{-1}s^{-1}u^{-1}\\ &\in& U+Rz^ks^{-1}(ust)^{-3}ustust\\ &\in& U+Rz^{k-1}s^{-1}(ustus)t\\ &\in& U+Rz^{k-1}tust^2\\ &\in& U+\underline{(z^{k-1}u_2u_3u_1)t^2}. \end{array}}$ \item[(ii)] $z^ksv_{9}=z^kstus=z^k(stu)^3(u^{-1}t^{-1}s^{-1}u^{-1})t^{-1}=z^{k+1}t^{-1}s^{-1}u^{-1}t^{-2}\in \underline{(z^{k+1}u_2u_1u_3)t^{-2}}$.
\item[(iii)] $z^kuv_{10}=z^kusts=z^k(ust)^3t^{-1}(s^{-1}u^{-1}t^{-1}s^{-1}u^{-1})s= z^{k+1}t^{-2}s^{-1}u^{-1}t^{-1}\in \underline{(z^{k+1}u_2u_1u_3)t}.$ \item[(iv)] \begin{itemize}[leftmargin=*] \item[$\bullet$]\underline{$k\in\{0,1\}$}: $\small{\begin{array}[t]{lcl} z^ktv_{10}&=&z^ktsts\\ &=&z^kt(stu)^3(u^{-1}t^{-1}s^{-1}u^{-1})t^{-1}s^{-1}u^{-1}s\\ &=&z^{k+1}s^{-1}u^{-1}t^{-2}s^{-1}u^{-1}s\\ &\in&z^{k+1}s^{-1}u^{-1}(R+Rt^{-1})s^{-1}u^{-1}s\\ &\in&Rz^{k+1}s^{-1}u^{-1}s^{-1}(R+Ru)s+Rz^{k+1}(s^{-1}u^{-1}t^{-1}s^{-1}u^{-1})s\\ &\in&\underline{z^{k+1}u_1u_3}+Rz^{k+1}s^{-1}u^{-1}(R+Rs)us+Rz^{k+1}t^{-1}s^{-1}u^{-1}t^{-1}\\ &\in&U+\underline{z^{k+1}u_2}+Rz^{k+1}s^{-1}(R+Ru)sus+\underline{(z^{k+1}u_2u_1u_3)t}\\ &\in&U+\underline{z^{k+1}u_3u_1}+Rz^{k+1}s^{-1}us(ust)^3(t^{-1}s^{-1}u^{-1}t^{-1}s^{-1})u^{-1}t^{-1}\\ &\in&U+Rz^{k+2}s^{-1}t^{-1}s^{-1}u^{-2}t^{-1}\\ &\in&U+Rz^{k+2}s^{-1}t^{-1}s^{-1}(R+Ru^{-1})t^{-1}\\ &\in&U+\underline{(z^{k+2}u_1u_2u_1)t^{-1}}+Rz^{k+1}s^{-1}t^{-1}s^{-1}u^{-1}(ust)^3t^{-1}\\ &\in&U+Rz^{k+1}s^{-1}(ustus)\\ &\in& U+Rz^{k+1}tust\\ &\in& U+\underline{(z^{k+1}u_2u_3u_1)t}. \end{array}}$ \item[$\bullet$] \underline{$k\in\{2,3\}$}: $\small{\begin{array}[t]{lcl} z^ktv_{10}&=&z^ktsts\\ &\in&z^k(R+Rt^{-1})(R+Rs^{-1})(R+Rt^{-1})(R+Rs^{-1})\\ &\in& \underline{(z^ku_1u_2u_1)u_2}+Rz^kt^{-1}s^{-1}t^{-1}s^{-1}\\ &\in& U+Rz^{k-1}t^{-1}s^{-1}t^{-1}s^{-1}(stu)^3\\ &\in&U+Rz^{k-1}t^{-1}s^{-1}(ustus)tu\\ &\in&U+Rz^{k-1}ust^2u\\ &\in&U+Rz^{k-1}us(R+Rt)u\\ &\in&U+Rz^{k-1}usu+Rz^{k-1}(ustu)\\ &\in&U+Rz^{k-1}(R+Ru^{-1})(R+Rs^{-1})(R+Ru^{-1})+Rz^{k-1}tust\\ &\in&U+\underline{z^{k-1}u_3u_1}+\underline{z^{k-1}u_1u_3}+Rz^{k-1}u^{-1}s^{-1}u^{-1}+\underline{(z^{k-1}u_2u_3u_1)t}\\ &\in&U+Rz^{k-2}u^{-1}s^{-1}u^{-1}(ust)^3\\ &\in&U+Rz^{k-2}u^{-1}(tust)ust\\ &\in&U+Rz^{k-2}stu^2st\\ &\in&U+Rz^{k-2}st(R+Ru)st\\ &\in& U+\underline{(z^{k-2}u_1u_2u_1)t}+(Rz^{k-2}sv_9)t \stackrel{(ii)}{\subset}U.
\end{array}}$ \end{itemize} \item[(v)] \begin{itemize}[leftmargin=*] \item[$\bullet$]\underline{$k\in\{0,1\}$}: $\small{\begin{array}[t]{lcl} z^ktv_{11}&=&z^ktstu\\ &=&z^kt(stu)^3(u^{-1}t^{-1}s^{-1}u^{-1})t^{-1}s^{-1}\\ &=&z^{k+1}s^{-1}u^{-1}t^{-2}s^{-1}\\ &\in&z^{k+1}s^{-1}u^{-1}(R+Rt^{-1})s^{-1}\\ &\in& Rz^{k+1}s^{-1}u^{-1}s^{-1}+Rz^{k+1}s^{-1}u^{-1}t^{-1}s^{-1}\\ &\in& Rz^{k+1}(R+Rs)(R+Ru)(R+Rs)+Rz^ks^{-1}u^{-1}t^{-1}s^{-1}(stu)^3\\ &\in& \underline{z^{k+1}u_1u_3}+\underline{z^{k+1}u_3u_1}+Rz^{k+1}sus+Rz^kt(ustu)\\ &\in& U+Rz^{k+1}s(ust)^3(t^{-1}s^{-1}u^{-1}t^{-1}s^{-1})u^{-1}t^{-1}+Rz^kt^2ust\\ &\in&U+Rz^{k+2}u^{-1}t^{-1}s^{-1}u^{-2}+\underline{(z^ku_2u_3u_1)t}\\ &\in&U+Rz^{k+2}u^{-1}t^{-1}s^{-1}(R+Ru^{-1})\\ &\in&U+\underline{z^{k+2}u_3u_2u_1}+Rz^{k+2}(u^{-1}t^{-1}s^{-1}u^{-1})\\ &\in& U+Rz^{k+2}t^{-1}s^{-1}u^{-1}t^{-1}\\ &\in& U+\underline{(z^{k+2}u_2u_1u_3)t^{-1}}. \end{array}}$ \item[$\bullet$]\underline{$k\in\{2,3\}$}: $\small{\begin{array}[t]{lcl} z^ktv_{11}&=&z^ktstu\\ &\in& z^k(R+Rt^{-1})(R+Rs^{-1})(R+Rt^{-1})(R+Ru^{-1})\\ &\in&\underline{z^ku_1u_2u_3}+\underline{z^ku_2u_1u_3}+\underline{(z^ku_2u_1)u_2}+ Rz^kt^{-1}s^{-1}t^{-1}u^{-1}\\ &\in& U+Rz^{k-1}t^{-1}s^{-1}(stu)^3t^{-1}u^{-1}\\ &\in&U+Rz^{k-1}ust(ustu)t^{-1}u^{-1}\\ &\in&U+Rz^{k-1}ust^2usu^{-1}\\ &\in&U+Rz^{k-1}us(Rt+R)usu^{-1}\\ &\in&U+Rz^{k-1}(ust)^3(t^{-1}s^{-1}u^{-1}t^{-1})u^{-1}+Rz^{k-1}usu(R+Rs^{-1})u^{-1}\\ &\in&U+Rz^ku^{-1}t^{-1}s^{-1}u^{-2}+\underline{z^{k-1}u_3u_1}+Rz^{k-1}us(R+Ru^{-1})s^{-1}u^{-1}\\ &\in&U+Rz^ku^{-1}t^{-1}s^{-1}(R+Ru^{-1})+\underline{z^{k-1}u_2}+Rz^{k-1}u(R+Rs^{-1})u^{-1}s^{-1}u^{-1}\\ &\in& U+\underline{z^ku_3u_2u_1}+Rz^k(u^{-1}t^{-1}s^{-1}u^{-1})+\underline{z^{k-1}u_1u_3}+\\&&+ Rz^{k-2}us^{-1}(stu)^3u^{-1}s^{-1}u^{-1}\\ &\in& U+Rz^kt^{-1}s^{-1}u^{-1}t^{-1}+Rz^{k-2}utu(stust)s^{-1}u^{-1}\\ &\in&U+\underline{(z^ku_2u_1u_3)t^{-1}}+Rz^{k-2}utu^2st\\ &\in&U+Rz^{k-2}ut(R+Ru)st\\ &\in& U+\underline{(z^{k-2}u_3u_2u_1)t}+Rz^{k-2}u(tust)\\ &\in&U+Rz^{k-2}u^2stu\\ 
&\in&U+Rz^{k-2}(R+Ru)stu\\ &\in& U+\underline{z^{k-2}u_1u_2u_3}+Rz^{k-2}(ustu)\\ &\in& U+Rz^{k-2}tust\\ &\in& U+\underline{(z^{k-2}u_2u_3u_1)t}. \end{array}}$ \end{itemize} \item[(vi)] $\small{\begin{array}[t]{lcl}z^ksv_{12}&=&z^ksuts\\ &\in& z^k(R+Rs^{-1})(R+Ru^{-1})(R+Rt^{-1})(R+Rs^{-1})\\ &\in& \underline{z^ku_3u_2u_1}+\underline{z^ku_1u_2u_1}+\underline{(z^ku_2u_1u_3)u_2}+ Rz^ks^{-1}u^{-1}s^{-1}+Rz^ks^{-1}u^{-1}t^{-1}s^{-1}\\ &\in& U+ Rz^{k-1}s^{-1}(stu)^3u^{-1}s^{-1}+Rz^{k-1}s^{-1}u^{-1}t^{-1}s^{-1}(stu)^3\\ &\in& U+Rz^{k-1}tu(stust)s^{-1}+Rz^{k-1}tustu\\ &\in& U+Rz^{k-1}tu^2stu+Rz^{k-1}tustu\\ &\in&U+z^{k-1}tu_3stu\\ &\in& U+z^{k-1}t(R+Ru)stu\\ &\in& U+ Rz^{k-1}tv_{11}+Rz^{k-1}t(ustu)\\ &\stackrel{(v)}{\in}&U+Rz^{k-1}t^2ust\\&\in& U+\underline{(z^{k-1}u_2u_3u_1)t}. \end{array}}$ \item[(vii)] \begin{itemize}[leftmargin=*] \item[$\bullet$]\underline{$k\in\{0,1\}$}: $\small{\begin{array}[t]{lcl} z^ktv_{12}&=&z^ktuts\\ &=&z^ks^{-1}(stu)^3(u^{-1}t^{-1}s^{-1}u^{-1})t^{-1}s^{-1}ts\\ &=&z^{k+1}s^{-1}t^{-1}s^{-1}u^{-1}t^{-2}s^{-1}ts\\ &\in&z^{k+1}s^{-1}t^{-1}s^{-1}u^{-1}(R+Rt^{-1})s^{-1}ts\\ &\in&Rz^{k+1}s^{-1}t^{-1}s^{-1}u^{-1}(R+Rs)ts+Rz^{k+1}s^{-1}(ust)^{-3}(ustu)ts\\ &\in&Rz^{k+1}s^{-1}(ust)^{-3}(ustus)ts+Rz^{k+1}s^{-1}t^{-1}s^{-1}(R+Ru)sts+Rz^ks^{-1}tust^2s\\ &\in&Rz^ktust^2s+\underline{z^{k+1}u_2}+Rz^{k+1}s^{-1}t^{-1}s^{-1}usts+Rz^k(R+Rs)tus(R+Rt)s\\ &\in&U+Rz^ktus(R+Rt)s+Rz^{k+1}s^{-1}t^{-1}s^{-1}(ust)^3t^{-1}(s^{-1}u^{-1}t^{-1}s^{-1}u^{-1})s+\\&&+ \underline{z^ku_2u_3u_1}+Rz^k(tust)s+Rz^kstus^2+Rz^k(stust)s\\ &\in& U+ \underline{z^ku_2u_3u_1}+Rz^k(tust)s+Rz^{k+2}s^{-1}t^{-1}s^{-1}t^{-2}s^{-1}u^{-1}t^{-1}+Rz^k(ustus)+\\&&+Rz^kstu(R+Rs)+Rz^kustus^2\\ &\in&U+Rz^k(ustus)+Rz^{k+2}s^{-1}t^{-1}s^{-1}(R+Rt^{-1})s^{-1}u^{-1}t^{-1}+ (Rz^ksv_9)t+\\&&+ \underline{z^ku_1u_2u_3}+Rz^ksv_9+Rz^kustu(R+Rs)\\ &\stackrel{(ii)}{\in}&U+(Rz^ksv_9)t+Rz^{k+2}s^{-1}t^{-1}s^{-2}u^{-1}t^{-1}+ Rz^{k+2}s^{-1}t^{-1}s^{-1}t^{-1}s^{-1}u^{-1}t^{-1}+\\&&+ Rz^k(ustu)+Rz^k(ustus). 
\end{array}}$ However, by (ii) we have that $z^ksv_9\in U$. Moreover, $z^k(ustu)=z^ktust\in\underline{(z^ku_2u_3u_1)t}$ and $z^k(ustus)=z^ksv_9t\stackrel{(ii)}{\in}U$. Therefore, it remains to prove that $D:=Rz^{k+2}s^{-1}t^{-1}s^{-2}u^{-1}t^{-1}+ Rz^{k+2}s^{-1}t^{-1}s^{-1}t^{-1}s^{-1}u^{-1}t^{-1}$ is a subset of $U$. We have: \\ \\ $\small{\begin{array}{lcl} D&=&Rz^{k+2}s^{-1}t^{-1}s^{-2}u^{-1}t^{-1}+ Rz^{k+2}s^{-1}t^{-1}s^{-1}t^{-1}s^{-1}u^{-1}t^{-1}\\ &\subset&Rz^{k+2}s^{-1}t^{-1}(R+Rs^{-1})u^{-1}t^{-1}+ Rz^{k+2}s^{-1}t^{-1}s^{-1}(ust)^{-3}(ustus)\\ &\subset&U+\underline{(z^{k+2}u_1u_2u_3)t^{-1}}+Rz^{k+2}s^{-1}(ust)^{-3}(ustus)+Rz^{k+1}s^{-1}ust+ \underline{(z^ku_2u_3u_1)t}\\ &\in&U+Rz^{k+1}tust+Rz^{k+1}s^{-1}(R+Ru^{-1})(R+Rs^{-1})t\\ &\in&U+\underline{(z^{k+1}u_2u_3u_1)t}+\underline{(z^{k+1}u_1u_3)t}+ Rz^{k+1}s^{-1}u^{-1}s^{-1}t\\ &\in&U+Rz^{k}s^{-1}(stu)^3u^{-1}s^{-1}t\\ &\in&U+Rz^ktu(stust)s^{-1}t\\ &\in&U+Rz^ktu^2stut\\ &\in&U+Rz^kt(R+Ru)stut\\ &\in&U+(Rz^ktv_{11})t+Rz^{k}(tust)ut\\ &\stackrel{(v)}{\in}&U+Rz^kustu^2t\\ &\in&U+Rz^kust(R+Ru)t\\ &\in& U+\underline{(z^ku_3u_1)t^2}+Rz^k(ustu)t\\&\in& U+\underline{(z^ku_2u_3u_1)t}. 
\end{array}}$ \ \item[$\bullet$]\underline{$k\in\{2,3\}$}: $\small{\begin{array}[t]{lcl} z^ktv_{12}&=&z^ktuts\\ &\in& z^k(R+Rt^{-1})(R+Ru^{-1})(R+Rt^{-1})(R+Rs^{-1})\\ &\in& \underline{z^ku_3u_2u_1}+\underline{(z^ku_2u_3u_1)u_2}+Rz^kt^{-1}u^{-1}t^{-1}s^{-1}\\ &\in& U+Rz^{k-1}t^{-1}u^{-1}t^{-1}s^{-1}(stu)^3\\ &\in&U+Rz^{k-1}t^{-1}s(tust)u\\ &\in& U+Rz^{k-1}t^{-1}sustu^2\\ & \in& U+Rz^{k-1}t^{-1}sust(Ru+R)\\ &\in& U+Rz^{k-1}t^{-1}s(ustu)+Rz^{k-1}t^{-1}(R+Rs^{-1})ust\\ &\in&U+Rz^{k-1}t^{-1}stust+\underline{(z^{k-1}u_2u_3u_1)t}+Rz^{k-1}t^{-1}s^{-1}(R+Ru^{-1})st\\ &\in&U+Rz^{k-1}t^{-1}(stu)^3(u^{-1}t^{-1}s^{-1}u^{-1})+\underline{z^{k-1}u_2}+\\&&+ Rz^{k-1}t^{-1}s^{-1}u^{-1}(R+Rs^{-1})t\\ &\in&U+Rz^kt^{-2}s^{-1}u^{-1}t^{-1}+\underline{(z^{k-1}u_2u_1u_3)t}+ Rz^{k-1}(ust)^{-3}u(stust)s^{-1}t\\ &\in&U+\underline{(z^ku_2u_1u_3)t^{-1}}+Rz^{k-2}u^2stut\\ &\in&U+Rz^{k-2}(R+Ru)stut\\ &\in&U+\underline{(z^{k-2}u_1u_2u_3)t}+Rz^{k-2}(ustu)\\ &\in& U+Rz^{k-2}tust\\ &\in& U+\underline{(z^{k-2}u_2u_3u_1)t}. \end{array}}$ \end{itemize} \end{itemize} \qedhere \end{proof} \begin{cor} $u_2U\subset U$. \label{cor13} \end{cor} \begin{proof} Since $u_2=R+Rt$, it is enough to prove that $tU\subset U$. However, by the definition of $U$ and remark \ref{rem13}, this is the same as proving that for every $k\in\{0,\dots,3\}$, $z^ktv_i\in U$, which follows directly from the definition of $U$ and proposition \ref{easy}(iv), (v), (vii). \end{proof} By the definition of $U$, we have that for every $k\in\{0,\dots,3\}$, $z^ku_iu_ju_l\subset U$ for some combinations of $i,j,l \in\{1,2,3\}$. We can generalize this to every $i,j,l \in\{1,2,3\}$. \begin{prop} For every $k\in\{0,\dots,3\}$ and for every combination of $i,j,l\in\{1,2,3\}$, $z^ku_iu_ju_l\subset U$. \label{prr13} \end{prop} \begin{proof} By the definition of $U$ we only have to prove that, for every $k\in\{0,\dots,3\}$, $z^ku_1u_3u_1$, $z^ku_3u_1u_3$ and $z^ku_3u_2u_3$ are subsets of $U$.
We distinguish the following three cases: \begin{itemize}[leftmargin=0.8cm] \item[C1.] \begin{itemize}[leftmargin=*] \item [$\bullet$]\underline{$k\in\{0,1,2\}$}: $\small{\begin{array}[t]{lcl}z^ku_1u_3u_1&=&z^k(R+Rs)(R+Ru)(R+Rs)\\ &\subset& z^kv_5+z^kv_6+\underline{z^ku_1}+Rz^ksus\\ &\subset& U+Rz^ks(ust)^3(t^{-1}s^{-1}u^{-1}t^{-1}s^{-1})u^{-1}\\ &\subset& U+Rz^{k+1}u^{-1}t^{-1}s^{-1}u^{-2}\\ &\subset& U+z^{k+1}u^{-1}t^{-1}s^{-1}(R+Ru^{-1})\\ &\subset& U+\underline{z^{k+1}u_3u_2u_1}+Rz^{k+1}(u^{-1}t^{-1}s^{-1}u^{-1})\\ &\subset& U+z^{k+1}t^{-1}s^{-1}u^{-1}t^{-1}\\ &\subset& U+\underline{(z^{k+1}u_2u_1u_3)t^{-1}}. \end{array}}$ \item[$\bullet$]\underline{$k=3$}: $\small{\begin{array}[t]{lcl}z^3u_1u_3u_1&=&z^3(R+Rs^{-1})(R+Ru^{-1})(R+Rs)\\ &\subset& \underline{z^3u_3u_1}+\underline{z^3u_1u_3}+\underline{z^3u_1}+Rz^3s^{-1}u^{-1}s\\ &\subset& U+Rz^{2}s^{-1}(stu)^3u^{-1}s\\ &\subset& U+Rz^{2}tu(stust)s\\ &\subset& U+Rz^{2}tu^2stu\\ &\subset& U+Rz^{2}t(R+Ru)stu\\ &\subset& U+ t(\underline{Rz^{2}u_1u_2u_3})+Rz^{2}(ustu)\\ &\subset& U+u_2U+Rz^{2}v_9t. \end{array}}$ The result follows from corollary \ref{cor13} and the definition of $U$. \end{itemize} \item [C2.] \begin{itemize}[leftmargin=*] \item[$\bullet$]\underline{$k\in\{1,2,3\}$}: $\small{\begin{array}[t]{lcl}z^ku_3u_1u_3&=&z^k(R+Ru^{-1})(R+Rs^{-1})(R+Ru^{-1})\\&\subset& \underline{z^ku_1u_3}+\underline{z^ku_3u_1}+Rz^ku^{-1}s^{-1}u^{-1}\\ &\subset& U+Rz^{k-1}u^{-1}s^{-1}u^{-1}(ust)^3\\&\subset& U+Rz^{k-1}u^{-1}(tust)ust\\ &\subset& U+Rz^{k-1}stu^2st\\ &\subset& U+Rz^{k-1}st(R+Ru)st\\ &\subset& U+\underline{(z^{k-1}u_1u_2u_1)t}+(Rz^{k-1}sv_9)t.\end{array}}$ The result follows from proposition \ref{easy}(ii).
\item[$\bullet$] \underline{$k=0$}: $\small{\begin{array}[t]{lcl} u_3u_1u_3&=&(R+Ru)(R+Rs)(R+Ru)\\ &\subset& \underline{u_1u_3}+\underline{u_3u_1}+Rusu\\ &\subset& U+Rustt^{-1}u\\ &\subset& U+Rust(R+Rt)u\\ &\subset& U+R(ustu)+R(ust)^3t^{-1}s^{-1}(u^{-1}t^{-1}s^{-1}u^{-1})tu\\ &\subset&U+Rtust+Rzt^{-1}s^{-1}t^{-1}s^{-1}\\ &\subset& U+\underline{(u_2u_3u_1)t}+u_2(\underline{zu_1u_2u_1})\\ &\subset& U+u_2U.\end{array}}$ The result follows from corollary \ref{cor13}. \end{itemize} \item[C3.] \begin{itemize}[leftmargin=*] \item[$\bullet$] \underline{$k\in\{0,1\}$}: $\small{\begin{array}[t]{lcl} z^ku_3u_2u_3&=&z^k(R+Ru)(R+Rt)(R+Ru)\\ &\subset& \underline{z^ku_2u_3}+\underline{z^ku_3u_2}+Rz^kutu\\ &\subset&U+Rz^kus^{-1}stu\\ &\subset&U+Rz^ku(R+Rs)stu\\ &\subset&U+Rz^k(ustu)+Rz^kus(stu)^3(u^{-1}t^{-1}s^{-1}u^{-1})t^{-1}s^{-1}\\ &\subset&U+Rz^ktust+Rz^{k+1}ust^{-1}s^{-1}u^{-1}t^{-2}s^{-1}\\ &\subset&U+\underline{(z^ku_2u_3u_1)t}+Rz^{k+1}ust^{-1}s^{-1}u^{-1}(R+Rt^{-1})s^{-1}\\ &\subset&U+Rz^{k+1}us(R+Rt)s^{-1}u^{-1}s^{-1}+Rz^{k+1}us(t^{-1}s^{-1}u^{-1}t^{-1}s^{-1})\\ &\subset&U+\underline{z^{k+1}u_1}+Rz^{k+1}usts^{-1}(R+Ru)s^{-1}+Rz^{k+1}t^{-1}s^{-1}u^{-1}\\ &\subset&U+z^{k+1}ustu_1+Rz^{k+1}ust(R+Rs)us^{-1}+\underline{z^{k+1}u_2u_1u_3}\\ &\subset&U+z^{k+1}ustu_1+Rz^{k+1}(ustu)s^{-1}+\\&&+ Rz^{k+1}(ust)^3t^{-1}(s^{-1}u^{-1}t^{-1}s^{-1}u^{-1})sus^{-1}\\ &\subset&U+z^{k+1}u_2ustu_1+Rz^{k+2}t^{-1}(t^{-1}s^{-1}u^{-1}t^{-1})us^{-1}\\ &\subset&U+z^{k+1}u_2ust(R+Rs)+Rz^{k+2}t^{-1}u^{-1}t^{-1}s^{-2}\\ &\subset&U+u_2\underline{(z^{k+1}u_3u_1)t}+u_2z^{k+1}uv_{10}+ u_2\underline{z^{k+2}u_3u_2u_1}\\ &\stackrel{\ref{easy}(iv)}{\subset}&U+u_2U.\end{array}}$ The result follows from corollary \ref{cor13}.
\item[$\bullet$] \underline{$k\in\{2,3\}$}: $\small{\begin{array}[t]{lcl} z^ku_3u_2u_3&=&z^k(R+Ru^{-1})(R+Rt^{-1})(R+Ru^{-1})\\ &\subset& \underline{z^ku_2u_3}+\underline{z^ku_3u_2}+Rz^ku^{-1}t^{-1}u^{-1}\\ &\subset&U+Rz^ku^{-1}t^{-1}s^{-1}su^{-1}\\ &\subset&U+Rz^ku^{-1}t^{-1}s^{-1}(R+Rs^{-1})u^{-1}\\ &\subset&U+Rz^k(u^{-1}t^{-1}s^{-1}u^{-1})+Rz^k(stu)^{-3}st(ustu)s^{-1}u^{-1}\\ &\subset&U+Rz^kt^{-1}s^{-1}u^{-1}t^{-1}+Rz^{k-1}st^2usts^{-1}u^{-1}\\ &\subset&U+\underline{(z^{k}u_2u_1u_3)t^{-1}}+Rz^{k-1}s(R+Rt)usts^{-1}u^{-1}\\ &\subset&U+Rz^{k-1}sus(R+Rt^{-1})s^{-1}u^{-1}+Rz^{k-1}(stust)s^{-1}u^{-1}\\ &\subset&U+\underline{z^{k-1}u_1}+Rz^{k-1}s(R+Ru^{-1})st^{-1}s^{-1}u^{-1}+Rz^{k-1}ust\\ &\subset&U+z^{k-1}u_1t^{-1}s^{-1}u^{-1}+Rz^{k-1}su^{-1}(R+Rs^{-1})t^{-1}s^{-1}u^{-1}+\\&&+ \underline{(z^{k-1}u_3u_1)t}\\ &\subset&U+z^{k-1}u_1t^{-1}s^{-1}u^{-1}+Rz^{k-1}s(u^{-1}t^{-1}s^{-1}u^{-1})+\\&&+Rz^{k-1}su^{-1}s^{-1}(ust)^{-3}(ustus)t\\ &\subset&U+z^{k-1}u_1t^{-1}s^{-1}u^{-1}u_2+Rz^{k-2}su^{-1}(tust)t\\ &\subset&U+z^{k-1}(R+Rs)t^{-1}s^{-1}u^{-1}u_2+Rz^{k-2}s^2tut\\ &\subset&U+\underline{z^{k-1}u_2u_1u_3}+Rz^{k-1}s(R+Rt)(R+Rs)(R+Ru)u_2+\\&&+\underline{(z^{k-2}u_1u_2u_3)t}\\ &\subset&U+\underline{(z^{k-1}u_3)u_2}+\underline{(z^{k-1}u_1u_2u_3)u_2}+\underline{(z^{k-1}u_1u_2u_1)u_2}+Rz^{k-1}stsuu_2\\ &\subset&U+(Rz^{k-1}sv_8)u_2. \end{array}}$ The result follows from proposition \ref{easy}(i). \qedhere \end{itemize} \end{itemize} \end{proof} \begin{prop} If $u_3U\subset U$, then $H_{G_{13}}=U$. \label{prop13} \end{prop} \begin{proof} As we explained in the beginning of this section, in order to prove that $H_{G_{13}}=U$, it will be sufficient to prove that $sU$, $tU$ and $uU$ are subsets of $U$. By corollary \ref{cor13} and the hypothesis, we only have to prove that $sU\subset U$. By the definition of $U$ and remark \ref{rem13}, we have to prove that for every $k\in\{0,\dots,3\}$, $z^ksv_i\in U$, $i=1,\dots,12$.
However, for every $i\in\{1,\dots,7\}\cup\{10,11\}$, we have that $z^ksv_i\in z^ku_1u_ju_l$, where $j,l\in\{1,2,3\}$ and not necessarily distinct. Therefore, by proposition \ref{prr13} we only have to prove that $z^ksv_i\in U$, for $i=8,9,12$. For this purpose, for every $k\in\{0,\dots,3\}$ we have to check the following cases: \begin{itemize}[leftmargin=*] \item For the element $z^ksv_8$ we only have to prove that $z^0sv_8\in U$, since by proposition \ref{easy}(i) we have the rest of the cases. We have: $sv_8=stsu=sts(ust)^3(t^{-1}s^{-1}u^{-1}t^{-1}s^{-1})u^{-1}t^{-1}s^{-1}= zst(u^{-1}t^{-1}s^{-1}u^{-1})u^{-1}t^{-1}s^{-1} =zu^{-1}t^{-1}u^{-1}t^{-1}s^{-1} \in u_3u_2\underline{zu_3u_2u_1}$. Therefore, $sv_8 \in u_3u_2U$. The result follows from corollary \ref{cor13} and from the hypothesis. \item For the element $z^ksv_9$ we use proposition \ref{easy}(ii) and we only have to prove that $z^3sv_9\in U$. $z^3sv_9=z^3stus\in z^3(R+Rs^{-1})(R+Rt^{-1})(R+Ru^{-1})(R+Rs^{-1})\subset \underline{z^3u_2u_3u_1}+\underline{z^3u_1u_2u_1}+z^3u_1u_3u_1+Rz^3s^{-1}t^{-1}u^{-1}s^{-1}\stackrel{\ref{prr13}}{\subset}U+Rz^2s^{-1}(stu)^3t^{-1}u^{-1}s^{-1}\subset U+Rz^2tu(stustut^{-1}u^{-1}s^{-1}).$ By the hypothesis and by corollary \ref{cor13}, it is enough to prove that $z^2stustut^{-1}u^{-1}s^{-1}\in U$. $\small{\begin{array}{lcl} z^2st(ustu)t^{-1}u^{-1}s^{-1}&=&z^2st^2usu^{-1}s^{-1}\\&\in& z^2s(R+Rt)usu^{-1}s^{-1}\\ &\in&Rz^2su(R+Rs^{-1})u^{-1}s^{-1}+ Rz^2(stu)^3u^{-1}(t^{-1}s^{-1}u^{-1}t^{-1})u^{-1}s^{-1}\\ &\in& U+\underline{z^2u_2}+Rz^2s(R+Ru^{-1})s^{-1}u^{-1}s^{-1}+Rz^3u^{-2}t^{-1}s^{-1}u^{-2}s^{-1}\\ &\in&U+\underline{z^2u_3u_1}+Rz^2(R+Rs^{-1})u^{-1}s^{-1}u^{-1}s^{-1}+u_3u_2(z^3u_1u_3u_1)\\ &\stackrel{\ref{prr13}}{\in}&U+u_3(z^2u_1u_3u_1)+Rzs^{-1}u^{-1}s^{-1}(stu)^3u^{-1}s^{-1}+u_3u_2U\\ &\stackrel{\ref{prr13}}{\in}&U+u_3u_2U+Rzs^{-1}u^{-1}(tust)usts^{-1}\\ &\in&U+u_3u_2U+Rztu^2sts^{-1}\\ &\in&U+u_3u_2U+u_2u_3(\underline{zu_1u_2u_1})\\ &\subset& U+u_3u_2u_3U.
\end{array}}$ The result follows from the hypothesis and from corollary \ref{cor13}. \item For the element $z^ksv_{12}$ we only have to prove that $z^0sv_{12}\in U$, since the rest of the cases have been proven in proposition \ref{easy}(vi). $\small{\begin{array}[t]{lcl} sv_{12}&=&suts\\ &=&(stu)^3u^{-1}t^{-1}s^{-1}u^{-1}(t^{-1}s^{-1}u^{-1}t^{-1})uts\\ &=&z u^{-1}t^{-1}s^{-1}u^{-2}t^{-1}s^{-1}ts\\ &\in&zu^{-1}t^{-1}s^{-1}(R+Ru^{-1})t^{-1}s^{-1}ts\\ &\in&Rzu^{-1}t^{-1}s^{-1}t^{-1}s^{-1}ts+Rz(stu)^{-3}u^{-1}(ustu)ts\\ &\in&Rzu^{-1}t^{-1}s^{-1}t^{-1}(R+Rs)ts+Ru^{-1}tust^2s\\ &\in&\underline{(zu_3)u_2}+Rzu^{-1}t^{-1}s^{-1}(R+Rt)sts+u_3u_2u_3\underline{(u_1u_2u_1)}\\ &\in&U+\underline{zu_3u_1}+Rzu^{-1}t^{-1}(R+Rs)tsts+u_3u_2u_3U\\ &\in&U+u_2u_3U+u_3\underline{(zu_1u_2u_1)}+zu_3u_2(stu)^3u^{-1}t^{-1}(s^{-1}u^{-1}t^{-1}s^{-1}u^{-1})sts\\ &\in&U+u_3u_2u_3U+z^2u_3u_2u^{-1}t^{-2}s^{-1}u^{-1}s\\ &\in&U+u_3u_2u_3U+z^2u_3u_2u_3u_2\underline{(z^2u_1u_3u_1)}\\ &\subset& U+u_3u_2u_3u_2U. \end{array}}$ The result follows again from the hypothesis and from corollary \ref{cor13}. \qedhere \end{itemize} \end{proof} We can now prove the main theorem of this section. \begin{thm} $H_{G_{13}}=U$. \label{thm13} \end{thm} \begin{proof} By proposition \ref{prop13} it will be sufficient to prove that $u_3U\subset U$. By the definition of $U$ and remark \ref{rem13}(i) we have to prove that for every $k\in\{0,\dots,3\}$, $z^ku_3v_i\subset U$, $i=1,\dots,12$. However, for every $i\in\{1,\dots,7\}\cup\{12\}$, $z^ku_3v_i\subset z^ku_3u_ju_l$, where $j,l\in\{1,2,3\}$ and not necessarily distinct. Therefore, by proposition \ref{prr13} we restrict ourselves to proving that $z^ku_3v_i\subset U$, for $i=8,9,10,11$.
For every $k\in\{0,\dots,3\}$ we have: \begin{itemize}[leftmargin=*] \item$\small{\begin{array}[t]{lcl} z^ku_3v_8&\subset&z^k(R+Ru^{-1})tsu\\ &\subset& \underline{z^ku_2u_1u_3}+Rz^ku^{-1}(R+Rt^{-1})(R+Rs^{-1})(R+Ru^{-1})\\ &\subset& U+z^ku_3u_1u_3+z^ku_3u_2u_3+\underline{z^ku_3u_2u_1}+Rz^k(u^{-1}t^{-1}s^{-1}u^{-1})\\ &\stackrel{\ref{prr13}}{\subset}&U+Rz^kt^{-1}s^{-1}u^{-1}t^{-1}\\&\subset& U+\underline{(z^ku_2u_1u_3)t^{-1}}. \end{array}}$ \item $\small{\begin{array}[t]{lcl} z^ku_3v_9&\subset& z^k(R+Ru)tus\\ &\subset& \underline{z^ku_2u_3u_1}+Rz^ku(tust)t^{-1}\\ &\subset& U+z^ku^2stut^{-1}\\ &\subset& U+z^k(R+Ru)stut^{-1}\\ &\subset& U+\underline{(z^ku_1u_2u_3)t^{-1}}+Rz^k(ustu)t^{-1}\\ &\subset& U+Rz^ktus\\ &\subset& U+\underline{z^ku_2u_3u_1}.\end{array}}$ \item $z^ku_3v_{10}\subset z^k(R+Ru)v_{10}\subset \underline{Rz^kv_{10}}+Rz^kuv_{10}.$ Therefore, by proposition \ref{easy}(iii), we only have to prove that $z^3uv_{10}\in U$. However, $z^3uv_{10}= z^3t^{-1}(tust)s=z^3t^{-1}(ustus)=z^3t^{-1}stust\in u_2(z^3stus)u_2$.
Hence, by remark \ref{rem13} and corollary \ref{cor13}, we need to prove that $z^3stus\in U$.\\ \\ $\small{\begin{array}{lcl} z^3stus&=&z^3(R+Rs^{-1})(R+Rt^{-1})(R+Ru^{-1})(R+Rs^{-1})\\ &\in&\underline{z^3u_1u_2u_3}+\underline{z^3u_1u_2u_1}+\underline{z^3u_2u_3u_1}+ z^3u_1u_3u_1+Rz^3s^{-1}t^{-1}u^{-1}s^{-1}\\ &\stackrel{\ref{prr13}}{\in}& U+Rz^2s^{-1}(stu)^3t^{-1}u^{-1}s^{-1}\\ &\in&U+Rz^2tust(ustu)t^{-1}u^{-1}s^{-1}\\ &\in&U+Rz^2tust^2usu^{-1}s^{-1}\\ &\in&U+Rz^2tus(Rt+R)usu^{-1}s^{-1}\\ &\in&U+Rz^2t(ust)^3(t^{-1}s^{-1}u^{-1}t^{-1})u^{-1}s^{-1}+Rz^2tusu(R+Rs^{-1})u^{-1}s^{-1}\\ &\in&U+Rz^3tu^{-1}t^{-1}s^{-1}u^{-2}s^{-1}+\underline{z^2u_2u_3}+Rz^2tus(R+Ru^{-1})s^{-1}u^{-1}s^{-1}\\ &\in&U+Rz^3tu^{-1}t^{-1}s^{-1}(R+Ru^{-1})s^{-1}+\underline{z^2u_2u_1}+ Rz^2tu(R+Rs^{-1})u^{-1}s^{-1}u^{-1}s^{-1}\\ &\in&U+u_2\underline{z^3u_3u_2u_1}+Rz^2tu^{-1}t^{-1}s^{-1}u^{-1}(ust)^3s^{-1}+ u_2(z^2u_1u_3u_1)+\\&&+Rztus^{-1}u^{-1}(ust)^3s^{-1}u^{-1}s^{-1}\\ &\stackrel{\ref{prr13}}{\in}& U+u_2U+Rz^3t(stust)s^{-1}+Rztutu(stust)s^{-1}u^{-1}s^{-1}\\ &\stackrel{\ref{cor13}}{\in}& U+Rz^3t(ustu)+Rztutu^2sts^{-1}\\ &\in&U+Rz^3t^2ust+Rztut(R+Ru)sts^{-1}\\ &\in&U+u_2\underline{(z^3u_3u_1)t}+Rztu(R+Rt^{-1})sts^{-1}+Rztu(tust)s^{-1}\\ &\in&U+Rztust(R+Rs)+Rztut^{-1}(R+Rs^{-1})ts^{-1}+ Rztu^2stus^{-1}\\ &\in&U+u_2U+\underline{(zu_2u_3u_1)t}+zu_2uv_{10}+\underline{zu_2u_3u_1}+Rztut^{-1}s^{-1}(R+Rt^{-1})s^{-1}+\\&&+ Rzt(R+Ru)stus^{-1}\\ &\stackrel{\ref{easy}(iii)}{\in}& U+u_2U+u_2\underline{zu_3u_2u_1}+Rzt(R+Ru^{-1})t^{-1}s^{-1}t^{-1}s^{-1}+ Rztstu(R+Rs)+\\&&+ Rztustu(R+Rs)\\ &\stackrel{\ref{cor13}}{\in}& U+\underline{zu_1u_2u_1}+Rzt(stu)^{-3}st(ustu)t^{-1}s^{-1}+Rztv_{11}+u_2zsv_9+ Rzt(ustu)+\\&&+Rzt(ustu)s\\ &\stackrel{\ref{easy}}{\in}&U+Rst^2u+Rzt^2ust+Rzt^2usts\\ &\in&U+\underline{u_1u_2}+\underline{(zu_2u_3u_1)t}+zu_2uv_{10}\\ &\stackrel{\ref{easy}(iii)}{\subset}&U+u_2U. \end{array}}$ The result follows from corollary \ref{cor13}.
\item $z^ku_3v_{11}\subset z^k(R+Ru)stu\subset \underline{z^ku_1u_2u_3}+Rz^k(ustu) \subset U+Rz^ktust\subset U+\underline{(z^ku_2u_3u_1)t}\subset U.$ \qedhere \end{itemize} \end{proof} \begin{cor} The BMR freeness conjecture holds for the generic Hecke algebra $H_{G_{13}}$. \end{cor} \begin{proof} By theorem \ref{thm13} we have that $H_{G_{13}}=U=\sum\limits_{k=0}^3\sum\limits_{i=1}^{12}(Rz^kv_i+Rz^kv_it)$. The result follows from proposition \ref{BMR PROP}, since by definition $H_{G_{13}}$ is generated as $R$-module by $|G_{13}|=96$ elements. \end{proof} \subsection{The case of $G_{14}$} Let $R=\mathbb{Z}[u_{s,i}^{\pm},u_{t,j}^{\pm}]_{\substack{1\leq i\leq 2 \\1\leq j\leq 3}}$ and let $H_{G_{14}}=\langle s,t\;|\; stststst=tstststs,\prod\limits_{i=1}^{2}(s-u_{s,i})=\prod\limits_{j=1}^{3}(t-u_{t,j})=0\rangle$ be the generic Hecke algebra associated to $G_{14}$. Let $u_1$ be the subalgebra of $H_{G_{14}}$ generated by $s$ and $u_2$ the subalgebra of $H_{G_{14}}$ generated by $t$. We recall that $z:=(st)^4=(ts)^4$ generates the center of the associated complex braid group and that $|Z(G_{14})|=6$. We set $U=\sum\limits_{k=0}^{5}(z^ku_1u_2u_1u_2+z^ku_1tst^{-1}su_2).$ By the definition of $U$, we have the following remark. \begin{rem} $Uu_2 \subset U$. \label{rem14} \end{rem} From now on, we will underline the elements that by definition belong to $U$. Moreover, we will use directly remark \ref{rem14} and the fact that $u_1U\subset U$; this means that every time we have a power of $s$ in the beginning of an element or a power of $t$ at the end of it, we may ignore it. To remind the reader of this, we put parentheses around the part of the element under consideration. Our goal is to prove that $H_{G_{14}}=U$ (theorem \ref{thm14}). Since $1\in U$, it is enough to prove that $U$ is a left-sided ideal of $H_{G_{14}}$. For this purpose, it is enough to check that $sU$ and $tU$ are subsets of $U$.
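Throughout this section we pass freely between positive and negative powers of the generators when expanding $u_1$ and $u_2$. For the reader's convenience, here is a sketch of the standard argument for $u_2$, where $\sigma_1,\sigma_2,\sigma_3$ denote the elementary symmetric polynomials in the parameters $u_{t,j}$ (a notation used only in this remark):
\[
\prod_{j=1}^{3}(t-u_{t,j})=0
\;\Longrightarrow\;
t^{3}=\sigma_1t^{2}-\sigma_2t+\sigma_3,
\qquad
\sigma_3=u_{t,1}u_{t,2}u_{t,3}\in R^{\times},
\]
\[
t^{-1}=\sigma_3^{-1}\bigl(t^{2}-\sigma_1t+\sigma_2\bigr),
\qquad\text{hence}\qquad
u_2=R+Rt+Rt^{2}=R+Rt+Rt^{-1};
\]
similarly, $u_1=R+Rs=R+Rs^{-1}$ by the quadratic relation for $s$.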
However, by the definition of $U$ and remark \ref{rem14}, we only have to prove that for every $k\in\{0,\dots,5\}$, $z^ktu_1u_2u_1$ and $z^ktu_1tst^{-1}s$ are subsets of $U$. In the following proposition we first prove this statement for a smaller range of the values of $k$. \begin{prop} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)]For every $k\in\{0,\dots,4\}$, $z^ktu_1u_2u_1\subset U$. \item[(ii)]For every $k\in\{0,\dots,4\}$, $z^ku_2u_1tu_1\subset U+z^{k+1}tu_1u_2u_1u_2$. Therefore, for every $k\in\{0,\dots,3\}$, $z^ku_2u_1tu_1 \subset U$. \item[(iii)]For every $k\in\{1,\dots,5\}$, $z^ku_2u_1t^{-1}u_1\subset U$. \item[(iv)]For every $k\in\{1,\dots,4\}$, $z^ku_2u_1u_2u_1\subset U+z^{k+1}tu_1u_2u_1u_2$. Therefore, for every $k\in\{1,\dots,3\}$, $z^ku_2u_1u_2u_1\subset U$. \end{itemize} \label{pr14} \end{prop} \begin{proof} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize} [leftmargin=0.8cm] \item [(i)] $z^ktu_1u_2u_1=z^kt(R+Rs)(R+Rt+Rt^{-1})(R+Rs)\subset \underline{z^ku_2u_1u_2}+\underline{Rz^ktst^{-1}s}+Rz^ktsts\subset U+Rz^k(ts)^4s^{-1}t^{-1}s^{-1}t^{-1}\subset U+\underline{z^{k+1}u_1u_2u_1u_2}\subset U.$ \item[(ii)] We notice that $z^ku_2u_1tu_1=z^k(R+Rt+Rt^2)u_1tu_1\subset \underline{z^ku_1u_2u_1}+z^ktu_1u_2u_1+ z^kt^2u_1tu_1$. However, $z^ktu_1u_2u_1\subset U$, by (i). Therefore, we have to prove that $z^kt^2u_1tu_1$ is a subset of $U$. Indeed, $z^kt^2u_1tu_1= z^kt^2(R+Rs)t(R+Rs)\subset U+\underline{z^ku_2u_1u_2}+Rz^kt^2sts\subset U+Rz^kt(ts)^4s^{-1}t^{-1}s^{-1}t^{-1}\subset U+ (z^{k+1}tu_1u_2u_1)u_2$. The result follows from (i).
\item[(iii)]We expand $u_2$ as $R+Rt+Rt^{-1}$ and we have that $z^ku_2u_1t^{-1}u_1\subset \underline{z^ku_1u_2u_1}+z^ktu_1t^{-1}u_1+ z^kt^{-1}u_1t^{-1}u_1\subset U+z^kt(R+Rs)t^{-1}(R+Rs)+ z^kt^{-1}(R+Rs^{-1})t^{-1}(R+Rs^{-1})\subset U+\underline{z^ku_2u_1u_2}+ \underline{Rz^ktst^{-1}s}+ Rz^kt^{-1}s^{-1}t^{-1}s^{-1}\subset U+ Rz^k(st)^{-4}stst\subset U+\underline{z^{k-1}u_1u_2u_1u_2}.$ \item[(iv)] The result follows from the definition of U and from (ii) and (iii), since $z^ku_2u_1u_2u_1=z^ku_2u_1(R+Rt+Rt^{-1})u_1$. \qedhere \end{itemize} \end{proof} To make it easier for the reader to follow the calculations, from now on we will double-underline the elements as described in the above proposition (proposition \ref{pr14}) and we will use directly the fact that these elements are inside $U$. We can now prove the following lemmas that lead us to the main theorem of this section. \begin{lem} For every $k\in\{3,4\}$, $z^ktsu_2sts\subset U+z^{k+1}u_1tu_1u_2u_1u_2$. \label{lm14} \end{lem} \begin{proof}We have: \\ $\small{\begin{array}[t]{lcl} z^ktsu_2sts&=&z^kts(R+Rt+Rt^{-1})sts\\ &\subset& \underline{\underline{z^ktu_1u_2u_1}}+Rz^k(ts)^4s^{-1}t^{-1}+z^ktst^{-1}sts\\ &\subset&U+\underline{z^{k+1}u_1u_2}+z^kt(R+Rs^{-1})t^{-1}(R+Rs^{-1})t(R+Rs^{-1})\\ &\subset&U+\underline{z^ku_1u_2u_1}+\underline{\underline{(z^ktu_1u_2u_1)t}}+ Rz^kts^{-1}t^{-1}s^{-1}ts^{-1}\\ &\subset&U+Rz^kts^{-1}t^{-1}s^{-1}(R+Rt^{-1}+Rt^{-2})s^{-1}\\ &\subset&U+\underline{\underline{z^ktu_1u_2u_1}}+Rz^kt^2(st)^{-4}st+ Rz^kt^2(st)^{-4 }stst^{-1}s^{-1}\\ &\subset&U+\underline{z^{k-1}u_2u_1u_2}+Rz^{k-1}t^2st(R+Rs^{-1})t^{-1}s^{-1}\\ &\subset&U+\underline{z^{k-1}u_2}+Rz^{k-1}t^2st^2(st)^{-4}stst\\ &\subset&U+Rz^{k-2}(R+Rt+Rt^{-1})st^2stst\\ &\subset&U+\underline{\underline{z^{k-2}s(u_2u_1tu_1)t}}+Rz^{k-2}tst(ts)^4s^{-1}t^{-1}s^{-1}+ Rz^{k-2}t^{-1}s(R+Rt+Rt^{-1})stst\\ 
&\subset&U+Rz^{k-1}tst(R+Rs)t^{-1}s^{-1}+\underline{\underline{(z^{k-2}u_2u_1tu_1)t}}+Rz^{k-2}t^{-2}(ts)^4s^{-1}+\\&&+Rz^{k-2}t^{-1}(R+Rs^{-1})t^{-1}(R+Rs^{-1})tst\\ &\subset&U+\underline{(z^{k-1}+z^{k-2})u_2u_1u_2}+Rz^{k-1}(ts)^4s^{-1}t^{-1}s^{-1}t^{-2}s^{-1}+ \underline{\underline{(z^{k-2}u_2u_1tu_1)t}}+\\&&+Rz^{k-2}t^{-1}s^{-1}t^{-1}s^{-1}tst\\ &\subset&U+z^ku_1u_2u_1u_2u_1+ Rz^{k-2}(st)^{-4}stst^2st\\ &\stackrel{\ref{pr14}(iv)}{\subset}&U+z^{k+1}u_1tu_1u_2u_1u_2+\underline{\underline{z^{k-3}s(tu_1u_2u_1)t}}. \end{array}}$ \end{proof} \begin{lem} $z^5t^{-2}s^{-1}t^{-2}s^{-1}\in U$. \label{lem114} \end{lem} \begin{proof}We have: \\ $\small{\begin{array}[t]{lcl} z^5t^{-2}s^{-1}t^{-2}s^{-1}&=&z^5t^{-1}(st)^{-4}ststst^{-1}s^{-1}\\ &\in&z^4t^{-1}stst(R+Rs^{-1})t^{-1}s^{-1}\\ &\in&\underline{z^4u_2u_1u_2}+Rz^4t^{-1}sts(R+Rt^{-1}+Rt^{-2})s^{-1}t^{-1}s^{-1}\\ &\in&U+\underline{z^4u_2}+Rz^4t^{-1}st(R+Rs^{-1})t^{-1}s^{-1}t^{-1}s^{-1}+Rz^4t^{-1}stst^{-1}(st)^{-4}stst\\ &\in&U+\underline{z^4u_2u_1}+Rz^4t^{-1}st^2(st)^{-4}st+ Rz^3t^{-1}st(R+Rs^{-1})t^{-1}(R+Rs^{-1})tst\\ &\in&U+\underline{\underline{(z^3u_2u_1u_2u_1)u_2}}+ Rz^3t^{-1}sts^{-1}t^{-1}s^{-1}tst\\ &\in&U+ Rz^3t^{-1}(R+Rs^{-1})ts^{-1}t^{-1}s^{-1}tst\\ &\in&U+\underline{\underline{u_1(z^3u_2u_1u_2u_1)u_2}}+Rz^3t^{-1}s^{-1}ts^{-1}t^{-1}s^{-1}tst\\ &\in&U+ Rz^3t^{-1}s^{-1}t^2(st)^{-4}stst^2st\\ &\in&U+Rz^2t^{-1}s^{-1}(R+Rt+Rt^{-1})stst^2st\\ &\in&U+\underline{z^2u_1u_2u_1u_2}+Rz^2t^{-1}s^{-2}(st)^4t^{-1}s^{-1}tst+ z^2t^{-1}s^{-1}t^{-1}(R+Rs^{-1})tst^2st\\ &\in&U+Rz^3t^{-1}(R+Rs^{-1})t^{-1}s^{-1}tst+\underline{z^2u_2u_1u_2}+ Rz^2(st)^{-4}stst^2st^2st\\ &\in&U+\underline{\underline{(z^3u_2u_1tu_1)t}}+Rz^3(st)^{-4}stst^2st+ Rzstst^2s(R+Rt+Rt^{-1})st\\ &\in&U+\underline{\underline{s\big((z+z^2)tu_1u_2u_1\big)t}}+Rzstst(ts)^4s^{-1}t^{-1}s^{-1}t^{-1}+ Rzsts(R+Rt+Rt^{-1})st^{-1}st\\ &\in&U+Rz^2stst(R+Rs)t^{-1}s^{-1}t^{-1}+\underline{\underline{s(ztu_1u_2u_1)t}}+Rz(st)^4t^{-1}s^{-1}t^{-2}st+\\&&+Rzstst^{-1}st^{-1}st\\
&\in&U+\underline{z^2u_1}+Rz^2(st)^4t^{-1}s^{-1}t^{-2}s^{-1}t^{-1}+ \underline{\underline{(z^2u_2u_1u_2u_1)t}}+\\&&+ Rzst(R+Rs^{-1})t^{-1}(R+Rs^{-1})t^{-1}st\\ &\in&U+ \underline{\underline{(z^3u_2u_1u_2u_1)t^{-1}}}+\underline{zu_1u_2u_1u_2}+ \underline{\underline{s(z^3tu_1u_2u_1)t}}+Rzsts^{-1}t^{-1}s^{-1}t^{-1}st\\ \end{array}}$ \\ However, $zsts^{-1}t^{-1}s^{-1}t^{-1}st\in zsts^{-1}t^{-1}s^{-1}t^{-1}(R+Rs^{-1})t\subset \underline{\underline{s(ztu_1u_2u_1)}}+Rzst^2(st)^{-4}st^2\subset U+\underline{u_1u_2u_1u_2}$. \end{proof} \begin{lem} For every $k\in\{0,\dots,5\}$, $z^ktu_1u_2u_1\subset U$. \label{cor14} \end{lem} \begin{proof} By proposition \ref{pr14} (i), we only have to prove that $z^5tu_1u_2u_1\subset U$. We have: $z^5tu_1u_2u_1=z^5t(R+Rs^{-1})(R+Rt^{-1}+Rt^{-2})u_1\subset \underline{z^5u_2u_1}+ \underline{\underline{z^5u_2u_1t^{-1}u_1}}+z^5ts^{-1}t^{-2}u_1\subset U+z^5ts^{-1}t^{-2}(R+Rs^{-1})\subset U+\underline{z^5u_2u_1u_2}+Rz^5ts^{-1}t^{-2}s^{-1}.$ We expand $t$ as a linear combination of 1, $t^{-1}$ and $t^{-2}$ and by the definition of $U$ and lemma \ref{lem114}, we only have to prove that $z^5t^{-1}s^{-1}t^{-2}s^{-1}\in U$. Indeed, we have: $z^5t^{-1}s^{-1}t^{-2}s^{-1}=z^5(st)^{-4} ststst^{-1}s^{-1}\in z^4stst(R+Rs^{-1})t^{-1}s^{-1} \subset \underline{z^4u_1u_2}+z^4stst^2(st)^{-4}stst\subset U+s(z^3tsu_2sts)t$. We use lemma \ref{lm14} and we have that $z^3tsu_2sts\subset U+z^4u_1tu_1u_2u_1u_2$. The result follows from proposition \ref{pr14}(i). \end{proof} \begin{thm} $H_{G_{14}}=U$. \label{thm14} \end{thm} \begin{proof} As we explained in the beginning of this section, it is enough to prove that, for every $k\in\{0,\dots,5\}$, $z^ktu_1u_2u_1$ and $z^ktu_1tst^{-1}s$ are subsets of $U$. The first part is exactly what we proved in lemma \ref{cor14}. It remains to prove the second one. Since $z^ktu_1tst^{-1}s=z^kt(R+Rs)tst^{-1}s$, we must prove that for every $k\in\{0,\dots,5\}$, the elements $z^kt^2st^{-1}s$ and $z^ktstst^{-1}s$ are inside $U$. 
We distinguish the following cases: \begin{itemize}[leftmargin=*] \item \underline{The element $z^kt^2st^{-1}s$}: By proposition \ref{pr14} (iii), we only have to prove the case where $k=0$. We have: $$\small{\begin{array}{lcl} t^2st^{-1}s&\in&t^2s(R+Rt+Rt^2)s\\ &\in& \underline{u_2u_1}+\underline{\underline{u_2u_1tu_1}}+t(ts)^4s^{-1}t^{-1}s^{-1}t^{-1}s^{-1}ts\\ &\in&U+zts^{-1}t^{-1}s^{-1}t^{-1}(R+Rs)ts\\ &\in&U+\underline{zu_2u_1u_2}+Rzts^{-1}t^{-1}s^{-1}(R+Rt+Rt^2)sts\\ &\in&U+\underline{zu_2}+Rzts^{-1}t^{-1}(R+Rs)tsts+Rzts^{-1}t^{-1}(R+Rs)t^2sts\\ &\in& U+\underline{zu_2u_1}+Rzts^{-1}t^{-2}(ts)^4s^{-1}t^{-1}+ Rzts^{-2}(st)^4t^{-1}s^{-1}t^{-1}+\\&&+ Rzt(R+Rs)t^{-1}st^2sts\\ &\in&U+\underline{\underline{(z^2tu_1u_2u_1)t^{-1}}}+\underline{\underline{s(zu_2u_1tu_1)}}+Rzts(R+Rt+Rt^2)st^2sts\\ &\in&U+Rzts^2t^2sts+Rz(ts)^4s^{-1}t^{-1}s^{-1}tsts+Rztst^2st(ts)^4s^{-1}t^{-1}s^{-1}t^{-1}\\ &\in& U+Rzt(R+Rs)t^2sts+Rz^2s^{-1}t^{-1}(R+Rs)tsts+Rz^2tst^2st(R+Rs)t^{-1}s^{-1}t^{-1}\\ &\in& U+\underline{\underline{zu_2u_1tu_1}}+Rztst(ts)^4s^{-1}t^{-1}s^{-1}t^{-1}+ \underline{z^2u_2u_1}+ Rz^2s^{-1}t^{-2}(ts)^4s^{-1}t^{-1}+\\&&+\underline{z^2u_2u_1u_2}+ Rz^2tst(ts)^4s^{-1}t^{-1}s^{-1}t^{-2}s^{-1}\\ &\in&U+Rz^2tst(R+Rs)t^{-1}s^{-1}t^{-1}+\underline{z^3u_1u_2u_1u_2}+ z^3tsts^{-1}t^{-1}s^{-1}(R+Rt^{-1}+Rt)s^{-1}\\ &\in& U+\underline{z^2u_1}+Rz^2(ts)^4s^{-1}t^{-1}s^{-1}t^{-2}s^{-1}t^{-1}+ Rz^3tsts^{-1}t^{-1}s^{-2}+ Rz^3tst^2(st)^{-4}st+\\&&+ Rz^3tst(R+Rs)t^{-1}s^{-1}ts^{-1}\\ &\in&U+\underline{\underline{s^{-1}(z^3u_2u_1u_2u_1)t^{-1}}}+Rz^3tst(R+Rs)t^{-1}s^{-2}+\underline{\underline{(z^2tu_1u_2u_1)t}}+ \underline{\underline{z^3u_2u_1}}+\\&&+Rz^3tstst^{-1}(R+Rs)ts^{-1}\\ &\in&U+\underline{z^3u_2u_1}+Rz^3(ts)^4s^{-1}t^{-1}s^{-1}t^{-2}s^{-2}+ \underline{z^3u_2u_1u_2}+Rz^3(ts)^4s^{-1}t^{-1}s^{-1}t^{-2}sts^{-1}\\ &\in&U+s^{-1}(z^4u_2u_1u_2u_1)+Rz^4s^{-1}t^{-1}s^{-1}(R+Rt^{-1}+Rt)sts^{-1}\\ &\stackrel{\ref{pr14}(iv)}{\in}&U+s^{-1}z^5(tu_1u_2u_1)u_2+\underline{z^4u_1}+ 
Rz^4(ts)^{-4}tsts^2ts^{-1}+ Rz^4s^{-1}t^{-1}(R+Rs)tsts^{-1}\\ &\stackrel{\ref{cor14}}{\in}&U+Rz^3tst(R+Rs)ts^{-1}+\underline{z^4u_2u_1}+ Rz^4s^{-1}t^{-2}(ts)^4s^{-1}t^{-1}s^{-2}\\ &\in&U+\underline{\underline{z^3tu_1u_2u_1}}+Rz^3(ts)^4s^{-1}t^{-1}s^{-2}+ s^{-1}(z^4u_2u_1u_2u_1)\\ &\stackrel{\ref{cor14}}{\in}&U+\underline{z^4u_1u_2u_1}+s^{-1}(z^5tu_1u_2u_1)u_2\\ &\stackrel{\ref{cor14}}{\in}&U. \end{array}}$$ \item \underline{The element $z^ktstst^{-1}s$}: \underline{For $k\in\{4,5\}$}: $\begin{array}[t]{lcl} z^ktstst^{-1}s&\in& z^ktst(R+Rs^{-1})t^{-1}s\\ &\subset& \underline{z^ku_2u_1}+z^ktsts^{-1}t^{-1}(R+Rs^{-1})\\ &\subset& U+(z^ktsts^{-1})t^{-1}+ z^ktst^2(st)^{-4}stst\\ &\stackrel{\ref{cor14}}{\subset}&U+(z^{k-1}tst^2sts)t\\ &\stackrel{\ref{lm14}}{\subset}& U+u_1(z^ktu_1u_2u_1)u_2\\&\stackrel{\ref{cor14}}{\subset}&U. \end{array}$ \underline{ For $k\in\{0,\dots,3\}$}: $z^ktstst^{-1}s=z^k(ts)^4s^{-1}t^{-1}s^{-1}t^{-2}s\in s^{-1}(z^{k+1}u_2u_1u_2u_1)$. However, by \ref{pr14}(iv) we have that $z^{k+1}u_2u_1u_2u_1\subset U+(z^{k+2}tu_1u_2 u_1)u_2\stackrel{\ref{cor14}}{\subset}U.$ \qedhere \end{itemize} \end{proof} \begin{cor} The BMR freeness conjecture holds for the generic Hecke algebra $H_{G_{14}}$. \end{cor} \begin{proof} By theorem \ref{thm14} we have that $H_{G_{14}}=U=\sum\limits_{k=0}^{5}(z^ku_1u_2+z^ku_1tsu_2+z^ku_1t^{-1}su_2+z^ku_1tst^{-1}su_2)$. The result follows from proposition \ref{BMR PROP}, since $H_{G_{14}}$ is generated as a left $u_1$-module by 72 elements and, hence, as $R$-module by $|G_{14}|=144$ elements (recall that $u_1$ is generated as $R$-module by 2 elements).
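The counting can be spelled out as follows (a sketch of the arithmetic):
\[
\underbrace{6}_{z^{k},\;0\le k\le 5}
\cdot
\underbrace{4}_{\{1,\;ts,\;t^{-1}s,\;tst^{-1}s\}}
\cdot
\underbrace{3}_{u_2=R+Rt+Rt^{-1}}
=72,
\qquad
72\cdot\underbrace{2}_{u_1=R+Rs}=144=|G_{14}|.
\]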
\end{proof} \subsection{The case of $G_{15}$} \indent Let $R=\mathbb{Z}[u_{s,i}^{\pm},u_{t,j}^{\pm},u_{u,l}^{\pm}]_{\substack{1\leq i,j\leq 2 \\1\leq l\leq 3}}$ and let $$H_{G_{15}}=\langle s,t,u\;|\; stu=tus,ustut=stutu,\prod\limits_{i=1}^{2}(s-u_{s,i})=\prod\limits_{j=1}^{2}(t-u_{t,j})=\prod\limits_{l=1}^{3}(u-u_{u,l})=0\rangle$$ be the generic Hecke algebra associated to $G_{15}$. Let $u_1$ be the subalgebra of $H_{G_{15}}$ generated by $s$, $u_2$ the subalgebra of $H_{G_{15}}$ generated by $t$ and $u_3$ the subalgebra of $H_{G_{15}}$ generated by $u$. We recall that $z:=stutu=ustut=tustu=tutus=utust$ generates the center of the associated complex braid group and that $|Z(G_{15})|=12$. We set $U=\sum\limits_{k=0}^{11}z^ku_3u_2u_1u_2u_1$. By the definition of $U$ we have the following remark. \begin{rem} $Uu_1 \subset U$. \label{rem15} \end{rem} From now on, we will underline the elements that belong to $U$ by definition. Our goal is to prove that $H_{G_{15}}=U$ (theorem \ref{thm 15}). Since $1\in U$, it will be sufficient to prove that $U$ is a left-sided ideal of $H_{G_{15}}$. For this purpose, one must check that $sU$, $tU$ and $uU$ are subsets of $U$. The following proposition states that it is enough to prove $tU\subset U$. \begin{prop} If $tU\subset U$, then $H_{G_{15}}=U$. \label{prrrr1} \end{prop} \begin{proof} As we explained above, it is enough to prove that $sU$, $tU$ and $uU$ are subsets of $U$. However, by hypothesis and by the definition of $U$, we can restrict ourselves to proving that $sU\subset U$. We recall that $z=stutu$. Therefore, $s=zu^{-1}t^{-1}u^{-1}t^{-1}$ and $s^{-1}=z^{-1}tutu$. 
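For the reader's convenience, here is a sketch of why the five expressions for $z$ coincide, which justifies these rewritings; every unlabelled step below uses the relation $stu=tus$:
\[
tutus=tu(tus)=tu(stu)=(tus)tu=(stu)tu=stutu,
\qquad
utust=u(tus)t=u(stu)t=ustut,
\]
and $ustut=stutu$ is the second defining relation. Since $z=stutu$ is central and invertible, $s=z(tutu)^{-1}=zu^{-1}t^{-1}u^{-1}t^{-1}$ and $s^{-1}=z^{-1}tutu$.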
We notice that $$U= \sum\limits_{k=0}^{10}z^ku_3u_2u_1u_2u_1+z^{11}u_3u_2u_1u_2u_1.$$ Hence, $\begin{array}[t]{lcl}sU&\subset& \sum\limits_{k=0}^{10}z^ksu_3u_2u_1u_2u_1+ z^{11}su_3u_2u_1u_2u_1\\ &\subset& \sum\limits_{k=0}^{10}z^{k+1}u^{-1}t^{-1}u^{-1}t^{-1}u_3u_2u_1u_2u_1+z^{11}(R+Rs^{-1})u_3u_2u_1u_2u_1\\ &\subset& u^{-1}t^{-1}u^{-1}t^{-1}\sum\limits_{k=0}^{10}z^{k+1}u_3u_2u_1u_2u_1+z^{11}u_3u_2u_1u_2u_1+(z^{11}s^{-1})u_3u_2u_1u_2u_1\\ &\subset&u_3u_2u_3u_2U+ z^{10}tutu_3u_2u_1u_2u_1\\ &\subset&U+u_3u_2u_3u_2U+ tut(z^{10}u_3u_2u_1u_2u_1)\\&\subset& U+u_3u_2u_3u_2U. \end{array}$\\\\ Since $tU\subset U$ we have $u_2U\subset U$ (recall that $u_2=R+Rt$) and, by the definition of $U$, we also have $u_3U\subset U$. The result is then obvious. \end{proof} A first step to prove our main theorem is analogous to remark \ref{rem15} (see proposition \ref{prrr15}). For this purpose, we first prove some preliminary results. \begin{lem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] For every $k\in\{0,...,11\}$, $z^ku_3u_1u^{-1}u_2\subset U$. \item[(ii)] For every $k\in\{3,...,11\}$, $z^ku_3u_1u^{-2}t^{-1}\subset U$. \item[(iii)] For every $k\in\{3,...,11\}$, $z^ku_3u_1u_3t^{-1}\subset U$. \end{itemize} \label{sutt} \end{lem} \begin{proof} Since $u_3=R+Ru^{-1}+Ru^{-2}$, (iii) follows from (i) and (ii) and from the definition of $U$. For (i) we have: $z^ku_3u_1u^{-1}u_2=z^ku_3(R+Rs^{-1})u^{-1}u_2\subset \underline{z^ku_3u_2}+z^ku_3(s^{-1}u^{-1}t^{-1})u_2\subset U+z^ku_3(u^{-1}t^{-1}s^{-1})u_2\subset U+\underline{z^ku_3u_2u_1u_2}\subset U.$ It remains to prove (ii).
We have: \\\\$\small{\begin{array}{lcl} z^ku_3u_1u^{-2}t^{-1}&=&z^ku_3(R+Rs^{-1})u^{-2}t^{-1}\\ &\subset&\underline{z^ku_3t^{-1}}+z^ku_3(s^{-1}u^{-1}t^{-1}u^{-1}t^{-1})tutu^{-1}t^{-1}\\ &\subset&U+z^{k-1}u_3tu(R+Rt^{-1})u^{-1}t^{-1}\\ &\subset&U+\underline{z^{k-1}u_3}+z^{k-1}u_3tu^2(u^{-1}t^{-1}u^{-1}t^{-1}s^{-1})s\\ &\subset&U+z^{k-2}u_3t(R+Ru+Ru^{-1})s\\ &\subset&U+\underline{z^{k-2}u_3ts}+z^{k-2}u_3(tustu)u^{-1}t^{-1}+z^{k-2}u_3(R+Rt^{-1})u^{-1}s\\ &\subset&U+\underline{z^{k-1}u_3t^{-1}}+\underline{z^{k-2}u_3s}+z^{k-2}u_3(u^{-1}t^{-1}u^{-1}t^{-1}s^{-1})st\\ &\subset&U+\underline{z^{k-3}u_3st}. \end{array}}$ \qedhere \end{proof} \begin{lem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] For every $k\in\{0,...,10\}$, $z^ku_3u_2uu_2\subset U$. \item[(ii)] For every $k\in\{1,...,11\}$, $z^ku_3u_2u^{-1}u_2\subset U$. \item[(iii)] For every $k\in\{1,...,10\}$, $z^ku_3u_2u_3u_2\subset U$. \end{itemize} \label{tutt} \end{lem} \begin{proof} Since $u_3=R+Ru^{-1}+Ru^{-2}$, (iii) follows from (i) and (ii) and from the definition of $U$. For (i) we have: $z^ku_3u_2uu_2=z^ku_3(R+Rt)uu_2\subset \underline{z^ku_3u_2}+z^ku_3(utust)t^{-1}s^{-1}u_2\subset U+\underline{z^{k+1}u_3t^{-1}s^{-1}u_2}\subset U.$ For (ii), we use the same kind of calculation: $z^ku_3u_2u^{-1}u_2=z^ku_3(R+Rt^{-1})u^{-1}u_2\subset \underline{z^ku_3u_2}+z^ku_3(u^{-1}t^{-1}u^{-1}t^{-1}s^{-1})su_2\subset U+\underline{z^{k-1}u_3su_2}.$ \end{proof} \begin{lem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] For every $k\in\{0,...,10\}$, $z^ku_3u_1tu\subset U$. \item[(ii)] For every $k\in\{0,...,9\}$, $z^ku_3u_1tu^2\subset U$. \item[(iii)] For every $k\in\{0,...,9\}$, $z^ku_3u_1tu_3\subset U$. \end{itemize} \label{stuu} \end{lem} \begin{proof} Since $u_3=R+Ru+Ru^{2}$, (iii) follows from (i) and (ii) and from the definition of $U$.
For (i) we have: $z^ku_3u_1tu=z^ku_3(R+Rs)tu\subset z^ku_3tu+z^ku_3(stutu)u^{-1}t^{-1}\stackrel{\ref{tutt}(i)}{\subset}U+\underline{z^{k+1}u_3t^{-1}}\subset U.$ Similarly, for (ii): $z^ku_3u_1tu^2=z^ku_3(R+Rs)tu^2\subset z^ku_3tu^2+z^ku_3(stutu)u^{-1}t^{-1}u\stackrel{\ref{tutt}(iii)}{\subset}U+z^{k+1}u_3t^{-1}u\stackrel{\ref{tutt}(i)}{\subset}U.$ \end{proof} \begin{lem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize} \item[(i)] For every $k\in\{0,...,10\}$, $z^ku_3u_2uu_1\subset U$. \item[(ii)] For every $k\in\{1,...,11\}$, $z^ku_3u_2u^{-1}u_1\subset U$. \item[(iii)] For every $k\in\{1,...,10\}$, $z^ku_3u_2u_3u_1\subset U$. \end{itemize} \label{tuss} \end{lem} \begin{proof} Since $u_3=R+Ru+Ru^{-1}$, (iii) follows from (i) and (ii) and from the definition of $U$. For (i) we have: $z^ku_3u_2uu_1=z^ku_3(R+Rt)u(R+Rs) \subset \underline{z^ku_3u_1}+z^ktu+z^k(tus)\stackrel{\ref{tutt}(i)}{\subset}U+z^kstu\stackrel{\ref{stuu}(i)}{\subset}U.$ Similarly, for (ii): $z^ku_3u_2u^{-1}u_1=z^ku_3(R+Rt^{-1})u^{-1}u_1\subset \underline{z^ku_3u_1}+z^ku_3(u^{-1}t^{-1}u^{-1}t^{-1}s^{-1})stu_1\subset U+\underline{z^{k-1}u_3stu_1}.$ \end{proof} \begin{lem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] For every $k\in\{0,...,8\}$, $z^ku_3u_1uu_1\subset U$. \item[(ii)] For every $k\in\{0,...,11\}$, $z^ku_3u_1u^{-1}u_1\subset U$. \item[(iii)] For every $k\in\{0,...,8\}$, $z^ku_3u_1u_3u_1\subset U$. \end{itemize} \label{suu} \end{lem} \begin{proof} By remark \ref{rem15} we can ignore the $u_1$ in the end. Moreover, since $u_3=R+Ru+Ru^{-1}$, (iii) follows from (i) and (ii). However, $z^ku_3u_1u^{-1}\subset z^ku_3u_1u^{-1}u_2$ and, hence, (ii) follows from lemma \ref{sutt} (i). Therefore, it will be sufficient to prove (i). 
We have:\\ $\small{\begin{array}{lcl} z^ku_3u_1u&=&z^ku_3(R+Rs)u\\ &\subset& \underline{z^ku_3}+z^ku_3(ustut)t^{-1}u^{-1}t^{-1}u\\ &\subset& U+z^{k+1}u_3(R+Rt)u^{-1}(R+Rt)u\\ &\subset& U+z^{k+1}u_3u_2u+\underline{z^{k+1}u_3}+\underline{z^{k+1}u_3t}+z^{k+1}u_3tu^{-1}tu\\ &\stackrel{\ref{tutt}(i)}{\subset}&U+z^{k+1}u_3t(R+Ru+Ru^2)tu\\ &\subset&U+z^{k+1}u_3u_2u+z^{k+1}u_3(tutus)s^{-1}+z^{k+1}u_3(utust)t^{-1}s^{-1}(utust)t^{-1}s^{-1}\\ &\stackrel{\ref{tutt}(i)}{\subset}&U+\underline{z^{k+2}u_3s^{-1}}+\underline{z^{k+3}u_3t^{-1}s^{-1}t^{-1}s^{-1}}. \end{array}}$ \qedhere \end{proof} To make it easier for the reader to follow the calculations, from now on we will double-underline the elements described in the above lemmas (lemmas \ref{sutt} - \ref{suu}) and we will use directly the fact that these elements are inside $U$. \begin{prop} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] For every $k\in\{2,\dots,11\}$, $z^ks^{-1}u^{-2}\in U+z^{k-3}u_3^{\times}stst$. \item[(ii)] For every $k\in\{0,\dots,11\}$, $z^ku_3u_1u_2u_1u_2\subset U$. \item[(iii)]For every $k\in\{0,\dots,11\}$, $z^ku_3u_1u_3\subset U$. \end{itemize} \label{ststt} \end{prop} \begin{proof}\mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)]$\small{\begin{array}[t]{lcl} z^ks^{-1}u^{-2}&=&z^k(s^{-1}u^{-1}t^{-1}u^{-1}t^{-1})tutu^{-1}\\ &\in& z^{k-1}(R+R^{\times}t^{-1})u(R+R^{\times}t^{-1})u^{-1}\\ &\in& \underline{z^{k-1}u_3}+\underline{z^{k-1}u_3t^{-1}}+\underline{\underline{z^{k-1}u_3u_2u^{-1}u_2}}+R^{\times}z^{k-1}t^{-1}u^2(u^{-1}t^{-1}u^{-1}t^{-1}s^{-1})st\\ &\in& U+R^{\times}z^{k-2}t^{-1}(R+Ru+R^{\times}u^{-1})st\\ &\in& U+\underline{z^{k-2}u_3t^{-1}st}+Rz^{k-2}t^{-1}(ustut)t^{-1}u^{-1}+R^{\times}z^{k-2}u(u^{-1}t^{-1}u^{-1}t^{-1}s^{-1})stst\\ &\in& U+\underline{\underline{z^{k-1}u_3u_2u^{-1}u_2}}+z^{k-3}u_3^{\times}stst\\ &\subset& U+z^{k-3}u_3^{\times}stst. 
\end{array}}$ \item[(ii)] For $k\in\{0,...,5\}$, we have: $z^ku_3u_1u_2u_1u_2=z^ku_3(R+Rs)(R+Rt)(R+Rs)(R+Rt)\subset \underline{z^ku_3u_1u_2u_1}+\underline{z^ku_3u_2u_1u_2}+z^ku_3stst \stackrel{(i)}{\subset}U+z^{k+3}u_3^{\times}s^{-1}u^{-2}\stackrel{\ref{suu}(iii)}{\subset}U.$ It remains to prove the case where $k\in\{6,\dots,11\}$. We have: $\small{\begin{array}[t]{lcl} z^ku_3u_1u_2u_1u_2&=&z^ku_3(R+Rs^{-1})(R+Rt^{-1})(R+Rs^{-1})(R+Rt^{-1})\\ &\subset& \underline{z^ku_3u_1u_2u_1}+\underline{z^ku_3u_2u_1u_2}+z^ku_3s^{-1}t^{-1}s^{-1}t^{-1}\\ &\subset&z^ku_3s^{-1}(t^{-1}s^{-1}u^{-1}t^{-1}u^{-1})utut^{-1}\\ &\subset& z^{k-1}u_3s^{-1}(R+Ru^{-1}+Ru^{-2})tut^{-1}\\ &\subset& z^{k-1}u_3(R+Rs)tut^{-1}+z^{k-1}u_3s^{-1}u^{-1}(R+Rt^{-1})ut^{-1}+\\&&+ z^{k-1}u_3s^{-1}u^{-2}(R+Rt^{-1})ut^{-1}\\ &\subset&\underline{\underline{z^{k-1}u_3u_2uu_2}}+z^{k-1}u_3(ustut)t^{-2}+ \underline{z^{k-1}u_3s^{-1}t^{-1}}+z^{k-1}u_3(s^{-1}u^{-1}t^{-1}u^{-1}t^{-1})tu^2t^{-1}+\\&&+\underline{\underline{z^{k-1}u_3u_1u_3t^{-1}}}+ z^{k-1}u_3s^{-1}u^{-2}t^{-1}(R+Ru^{-1}+Ru^{-2})t^{-1}\\ &\subset&U+\underline{z^kt^{-2}}+\underline{\underline{z^{k-2}u_3u^2u_3u_2}}+z^{k-1}u_3s^{-1}u^{-2}t^{-2}+\\&&+z^{k-1}u_3(s^{-1}u^{-1}t^{-1})t(u^{-1}t^{-1}u^{-1}t^{-1}s^{-1})s+\\&&+z^{k-1}u_3(s^{-1}u^{-1}t^{-1}u^{-1}t^{-1})tut(u^{-1}t^{-1}u^{-1}t^{-1}s^{-1})stu^{-1}t^{-1}\\ &\subset& U+z^{k-1}u_3s^{-1}u^{-2}(R+Rt^{-1})+\underline{z^{k-2}u_3t^{-1}s^{-1}ts}+\\&&+ z^{k-3}u_3tu(R+Rt^{-1})stu^{-1}t^{-1}\\ &\subset&U+z^{k-1}u_3s^{-1}u^{-2}+\underline{\underline{z^{k-1}u_3u_1u_3t^{-1}}}+z^{k-3}u_3(utust)u^{-1}t^{-1}+\\&&+z^{k-3}u_3tut^{-1}(R+Rs^{-1})tu^{-1}t^{-1}\\ &\subset& U+z^{k-1}u_3s^{-1}u^{-2} +\underline{(z^{k-2}+z^{k-3})u_3u_2}+z^{k-3}u_3tut^{-1}s^{-1}tu^{-1}t^{-1}. \end{array}}$ However, we notice that $z^{k-3}u_3tut^{-1}s^{-1}tu^{-1}t^{-1}$ is a subset of $U$. 
Indeed, we have: $\small{\begin{array}{lcl} z^{k-3}u_3tut^{-1}s^{-1}tu^{-1}t^{-1} &\subset&z^{k-3}u_3tut^{-1}s^{-1}(R+Rt^{-1})u^{-1}t^{-1}\\ &\subset& z^{k-3}tu^2(u^{-1}t^{-1}s^{-1}u^{-1}t^{-1})+ \\&&+z^{k-3}u_3tu^2(u^{-1}t^{-1}s^{-1}u^{-1}t^{-1})tu^2(u^{-1}t^{-1}u^{-1}t^{-1}s^{-1})s\\ &\subset& \underline{\underline{z^{k-4}u_3u_2u_3u_2}}+z^{k-5}u_3 t(R+Ru+Ru^{-1})tu^2s\\ &\subset&U+\underline{\underline{z^{k-5}u_3u_2u_3u_1}}+ z^{k-5}u_3(tutus)s^{-1}us+\\&&+z^{k-5}u_3tu^{-1}t(R+Ru+Ru^{-1})s\\ &\subset&U+\underline{\underline{z^{k-4}u_3u_1uu_1}}+z^{k-5}u_3tu^{-1}(R+Rt^{-1})(R+Rs^{-1})+\\&&+z^{k-5}u_3tu^{-2}(utust)t^{-1}+z^{k-5}u_3tu^{-1}(R+Rt^{-1})u^{-1}s\\ &\subset&U+\underline{\underline{(z^{k-5}+z^{k-4})u_3u_2u_3u_2}}+\underline{\underline{z^{k-5}u_3u_2u_3u_1}} +z^{k-5}u_3tu^{-1}t^{-1}s^{-1}+\\&&+ z^{k-5}u_3t(u^{-1}t^{-1}u^{-1}t^{-1}s^{-1})sts\\ &\subset&U+z^{k-5}u_3t^2u(u^{-1}t^{-1}u^{-1}t^{-1}s^{-1})+\underline{z^{k-6}u_3tsts}\\ &\subset&U+\underline{\underline{z^{k-6}u_3u_2uu_2}}. \end{array}}$ Hence, \begin{equation} z^ku_3u_1u_2u_1u_2\subset U+z^{k-1}u_3s^{-1}u^{-2},\; k\in\{6,\dots,11\}. \label{pp} \end{equation} For $k\in\{6,...,9\}$ we rewrite (\ref{pp}) and we have $z^ku_3u_1u_2u_1u_2\subset U+z^{k-1}u_3u_1u_3u_1$. Therefore, by lemma \ref{suu}(iii) we have $z^ku_3u_1u_2u_1u_2\subset U$. For $k\in\{10,11\}$ we use (i) and (\ref{pp}) becomes $z^ku_3u_1u_2u_1u_2\subset U+z^{k-4}u_3stst$. However, since $k-4\in\{6,7\}$, we can apply (\ref{pp}) and we have that $z^{k-4}u_3stst\subset U+z^{k-5}u_3s^{-1}u^{-2}$. The result follows from lemma \ref{suu}(iii). \item [(iii)] By lemma \ref{suu} (iii), it is enough to prove that for $k\in\{9,10,11\}$, $z^ku_1u_3\subset U$. We expand $u_1$ as $R+Rs^{-1}$ and $u_3$ as $R+Ru^{-1}+Ru^{-2}$ and we have $z^ku_1u_3\subset \underline{z^ku_3}+z^ku_1u^{-1}+z^ku_1u^{-2}$. Hence, by lemma \ref{suu}(ii) we only have to prove that $z^ku_3s^{-1}u^{-2}\subset U$ . 
However, by (i) we have $z^ku_3s^{-1}u^{-2}\subset U+z^{k-3}u_3stst$ and the result follows directly from (ii). \qedhere \end{itemize} \end{proof} \begin{lem} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)] For every $k\in\{3,\dots,8\}$, $z^ku_3tu^{-1}u_1u\subset U$. \item[(ii)] For every $k\in\{3,4\}$, $z^ku_3tu^{-1}u_1u^2\subset U$. \item[(iii)] For every $k\in\{5,\dots,8\}$, $z^ku_3tu^{-1}u_1u^{-1}\subset U$. \item[(iv)] For every $k\in\{3,\dots,8\}$, $z^ku_3tu^{-1}u_1u_3\subset U$. \end{itemize} \label{stuuuu} \end{lem} \begin{proof} \mbox{} \vspace*{-\parsep} \vspace*{-\baselineskip}\\ \begin{itemize}[leftmargin=0.8cm] \item[(i)]$\small{\begin{array}[t]{lcl} z^ku_3tu^{-1}u_1u&\subset&z^{k}u_3t(R+Ru+Ru^2)(R+Rs)u\\ &\subset& U+\underline{\underline{z^{k}u_3u_1tu_3}}+z^{k}u_3tsu+z^{k}u_3(tus)u+z^{k}u_3tu^2su\\ &\subset&U+z^{k}u_3(R+Rt^{-1})(R+Rs^{-1})u+z^{k}u_3stu+z^{k}u_3tu(ustut)t^{-1}u^{-1}t^{-1}u\\ &\subset&U+\underline{\underline{z^{k}u_3u_1u_3u_1}}+\underline{\underline{z^{k}u_3u_2u_3u_1}}+ z^{k}u_3(t^{-1}s^{-1}u^{-1}t^{-1}u^{-1})utu^2+\\&&+\underline{\underline{z^{k}u_3u_1tu_3}}+ z^{k+1}u_3tu(R+Rt)u^{-1}t^{-1}u\\ &\subset&U+\underline{\underline{z^{k-1}u_3u_2u_3u_1}}+\underline{z^{k+1}u_3}+ z^{k+1}u_3tut(R+Ru+Ru^2)t^{-1}u\\ &\subset&U+ \underline{\underline{z^{k+1}u_3u_2u_3u_2}}+ z^{k+1}u_3(tutus)s^{-1}t^{-1}u+z^{k+1}u_3(tutus)s^{-1}ut^{-1}u\\ &\subset&U+z^{k+2}u_3s^{-1}(R+Rt)u+z^{k+2}u_3s^{-1}u(R+Rt)u\\ &\subset&U+z^{k+2}u_3u_1u_3+\underline{\underline{z^{k+2}u_3u_1tu}}+z^{k+2}u_3s^{-1}(utust)t^{-1}s^{-1}\\ &\stackrel{\ref{ststt}(iii)}{\subset}&U+ \underline{z^{k+3}u_3u_1u_2u_1}. 
\end{array}}$ \item[(ii)]$\small{\begin{array}[t]{lcl} z^ku_3tu^{-1}u_1u^2&\subset&z^{k}u_3t(R+Ru+Ru^2)(R+Rs)u^2\\ &\subset& U+\underline{\underline{z^{k}u_3u_1tu_3}}+z^{k}u_3tsu^2+z^{k}u_3(tus)u^2+z^{k}u_3tu^2su^2\\ &\subset&U+z^{k}u_3(R+Rt^{-1})(R+Rs^{-1})u^2+z^{k}u_3stu^2+z^{k}u_3tu(ustut)t^{-1}u^{-1}t^{-1}u^2\\ &\subset&U+\underline{\underline{z^{k}u_3u_1u_3u_1}}+\underline{\underline{z^{k}u_3u_2u_3u_1}}+ z^{k}u_3(t^{-1}s^{-1}u^{-1}t^{-1}u^{-1})utu^3+\\&&+\underline{\underline{z^{k}u_3u_1tu_3}}+ z^{k+1}u_3tu(R+Rt)u^{-1}t^{-1}u^2\\ &\subset&U+\underline{\underline{z^{k-1}u_3u_2u_3u_1}}+\underline{z^{k+1}u_3}+ z^{k+1}u_3tut(R+Ru+Ru^2)t^{-1}u^2\\ &\subset&U+ \underline{\underline{z^{k+1}u_3u_2u_3u_2}}+ z^{k+1}u_3(tutus)s^{-1}t^{-1}u^2+z^{k+1}u_3(tutus)s^{-1}ut^{-1}u^2\\ &\subset&U+z^{k+2}u_3s^{-1}(R+Rt)u^2+z^{k+2}u_3s^{-1}u(R+Rt)u^2\\ &\subset&U+z^{k+2}u_3u_1u_3+\underline{\underline{z^{k+2}u_3u_1tu_3}}+z^{k+2}u_3s^{-1}(utust)t^{-1}s^{-1}u\\ &\stackrel{\ref{ststt}(iii)}{\subset}&U+ z^{k+3}u_3(R+Rs)t^{-1}s^{-1}u\\ &\subset&z^{k+3}(u^{-1}t^{-1}s^{-1}u^{-1}t^{-1})tu^2+z^{k+3}u_3s(R+Rt)(R+Rs)u\\ &\subset&U+\underline{\underline{z^{k+2}u_3u_1tu_3}}+\underline{\underline{z^{k+3}u_3u_1u_3u_1}}+\underline{\underline{z^{k+3}u_3u_1tu_3}}+z^{k+3}u_3stsu\\ &\subset&U+z^{k+3}u_3(ustut)t^{-1}u^{-1}su\\ &\subset&U+z^{k+4}u_3(R+Rt)u^{-1}su\\ &\subset&U+z^{k+4}u_3u_1u_3+z^{k+4}tu^{-1}u_1u. \end{array}}$ The result follows from proposition \ref{ststt}(iii) and from (i). 
\item[(iii)]$\small{\begin{array}[t]{lcl} z^ku_3tu^{-1}u_1u^{-1}&\subset&z^ku_3(R+Rt^{-1})u^{-1}(R+Rs^{-1})u^{-1}\\ &\subset&\underline{\underline{z^ku_3u_1u_3u_1}}+\underline{\underline{z^ku_3u_2u_3u_2}}+z^ku_3t^{-1}u^{-1}s^{-1}u^{-1}\\ &\subset&U+z^ku_3(u^{-1}t^{-1}u^{-1}t^{-1}s^{-1})sts^{-1}u^{-1}\\ &\subset&U+z^{k-1}u_3s(R+Rt^{-1})s^{-1}u^{-1}\\ &\subset&\underline{\underline{z^{k-1}u_3u_1u_3u_1}}+z^{k-1}u_3s(t^{-1}s^{-1}u^{-1}t^{-1}u^{-1})ut\\ &\subset&U+z^{k-2}u_3su(R+Rt^{-1})\\ &\subset&\underline{\underline{z^{k-2}u_3u_1u_3u_1}}+\underline{\underline{z^{k-2}u_3u_1u_3t^{-1}}}. \end{array}}$ \item[(iv)] For $k\in\{3,4\}$ we have $z^ku_3tu^{-1}u_1u_3\subset z^ku_3tu^{-1}u_1(R+Ru+Ru^2) \stackrel{(i),(ii)}{\subset}U+ \underline{\underline{z^ku_3u_2u^{-1}u_1}}\subset U$. Similarly, for $k\in\{5,\dots,8\}$ we have $z^ku_3tu^{-1}u_1u_3\subset z^ku_3tu^{-1}u_1(R+Ru+Ru^{-1}) \stackrel{(i),(iii)}{\subset}U+ \underline{\underline{z^ku_3u_2u^{-1}u_1}}$. \qedhere \end{itemize} \end{proof} \begin{prop}$Uu_2\subset U$. \label{prrr15} \end{prop} \begin{proof} By the definition of $U$ and the fact that $u_2=R+Rt$, we have to prove that $z^ku_3u_2u_1u_2u_1t\subset U$, for every $k\in\{0,...,11\}$. If we expand $u_1$ as $R+Rs$ and $u_2$ as $R+Rt$ we notice that $z^ku_3u_2u_1u_2u_1t\subset z^ku_3u_2u_1u_2u_1+z^ku_3u_1u_2u_1u_2+z^ku_3tstst$. Therefore, by the definition of $U$ and by proposition \ref{ststt}(ii), we only have to prove $z^ku_3tstst\subset U$, for every $k\in\{0,\dots,11\}$. 
We distinguish the following cases: \begin{itemize}[leftmargin=*] \item \underline{$k\in\{0,\dots,5\}$}: $\small{\begin{array}[t]{lcl} z^ku_3tstst&=&z^ku_3tst(stutu)u^{-1}t^{-1}u^{-1}\\ &\subset&z^{k+1}u_3tst(R+Ru+Ru^2)t^{-1}u^{-1}\\ &\subset&z^{k+1}u_3tsu^{-1}+z^{k+1}u_3t(stutu)u^{-1}t^{-2}u^{-1}+z^{k+1}u_3tstu^2 t^{-1}u^{-1}\\ &\subset& z^{k+1}u_3(R+Rt^{-1})(R+Rs^{-1})u^{-1}+z^{k+2}u_3tu^{-1}(R+Rt^{-1})u^{-1}+ z^{k+1}u_3tstu^2(R+Rt)u^{-1}\\ &\subset&\underline{\underline{z^{k+1}u_3u_1u_3u_1}}+\underline{\underline{(z^{k+1}+z^{k+2})u_3u_2u_3u_1}}+ z^{k+1}u_3(t^{-1}s^{-1}u^{-1}t^{-1}u^{-1})ut+\\&&+ z^{k+2}u_3t(u^{-1}t^{-1}u^{-1}t^{-1}s^{-1})st+z^{k+1}u_3t(stutu)u^{-1}t^{-1}+ z^{k+1}u_3t(stutu)u^{-1}t^{-1}utu^{-1}\\ &\subset&U+\underline{(z^k+z^{k+1})u_3u_2u_1u_2}+\underline{\underline{z^{k+2}u_3u_2u^{-1}u_2}}+z^{k+2}u_3tu^{-1}(R+Rt)utu^{-1}\\ &\subset& U+\underline{\underline{z^{k+2}u_3u_2u^{-1}u_2}}+z^{k+2}u_3tu^{-1}(tutus)s^{-1}u^{-2}\\ &\subset&U+z^{k+3}u_3tu^{-1}u_1u_3. \end{array}}$ The result follows from lemma \ref{stuuuu}(iii). \item \underline{$k\in\{6,\dots,11\}$}: $\small{\begin{array}[t]{lcl} z^ku_3v_8t&=&z^ku_3tstst\\ &\subset&z^ku_3(R+Rt^{-1})(R+Rs^{-1})(R+Rt^{-1})(R+Rs^{-1})(R+Rt^{-1})\\ &\subset&z^ku_3u_1u_2u_1u_2+z^kt^{-1}s^{-1}t^{-1}s^{-1}\\ &\stackrel{\ref{ststt}(ii)}{\subset}& U+z^ku_3(t^{-1}s^{-1}u^{-1}t^{-1}u^{-1})utu^2tu(u^{-1}t^{-1}u^{-1}t^{-1}s^{-1})t^{-1}\\ &\subset&U+z^{k-2}u_3t(R+Ru+Ru^{-1})tut^{-1}\\ &\subset&U+\underline{\underline{z^{k-2}u_3t^2ut^{-1}}}+z^{k-2}u_3(tutus)s^{-1}t^{-1}+ z^{k-2}u_3tu^{-1}(R+Rt^{-1})ut^{-1}\\ &\subset&U+\underline{z^{k-1}u_3s^{-1}t^{-1}+z^{k-2}u_3}+z^{k-2}u_3(R+Rt^{-1})u^{-1}t^{-1}ut^{-1}\\ &\subset&U+\underline{\underline{z^{k-2}u_3t^{-1}ut^{-1}}}+z^{k-2}u_3(u^{-1}t^{-1}u^{-1}t^{-1}s^{-1})su^{-1}t^{-1}\\ &\subset&U+\underline{\underline{z^{k-3}u_3su^{-1}t^{-1}}}. \end{array}}$ \qedhere \end{itemize} \end{proof} We can now prove the main theorem of this section. \begin{thm} $H_{G_{15}}=U$. 
\label{thm 15} \end{thm} \begin{proof} By proposition \ref{prrrr1} it is enough to prove that $tU\subset U$. By remark \ref{rem15} and proposition \ref{prrr15}, we only have to prove that $z^ktu_3\subset U$. By lemma \ref{tutt} (iii), we only have to check the cases where $k\in\{0,11\}$. We have: \begin{itemize} \item \underline{$k=0$}: $\begin{array}[t]{lcl} tu_3&=&t(R+Ru+Ru^2)\\ &\subset&\underline{t}+\underline{\underline{tu}}+tu^2\\ &\subset& U+s^{-1}(stutu)u^{-1}t^{-1}u\\ &\subset& U+zs^{-1}u^{-1}(R+Rt)u\\ &\subset& U+\underline{zs}+zs^{-1}u^{-2}(utust)t^{-1}s^{-1}\\ &\subset&U+zu_1u_3t^{-1}s^{-1}\\ &\stackrel{\ref{ststt}(iii)}{\subset}&U+Uu_2u_1. \end{array}$ \item \underline{$k=11$}: $\begin{array}[t]{lcl} z^{11}tu_3&\subset&z^{11}(R+Rt^{-1})(R+Ru^{-1}+Ru^{-2})\\ &\subset&\underline{z^{11}u_3}+\underline{\underline{z^{11}t^{-1}u^{-1}}}+z^{11}t^{-1}u^{-2}\\ &\subset& U+z^{11}u(u^{-1}t^{-1}u^{-1}t^{-1}s^{-1})stu^{-1}\\ &\subset&U+z^{10}u_3s(R+Rt^{-1})u^{-1}\\ &\subset&U+\underline{\underline{z^{10}u_3su^{-1}}}+z^{10}u_3su(u^{-1}t^{-1}u^{-1}t^{-1}s^{-1})st\\ &\subset&U+z^9u_3sust\\ &\stackrel{\ref{ststt}(iii)}{\subset}&U+Uu_1u_2. \end{array}$ \end{itemize} The result follows from remark \ref{rem15} and proposition \ref{prrr15}. \end{proof} \begin{cor} The BMR freeness conjecture holds for the generic Hecke algebra $H_{G_{15}}$. \end{cor} \begin{proof} By theorem \ref{thm 15} we have that $H_{G_{15}}=U=\sum\limits_{k=0}^{11}z^k(u_3+u_3s+u_3t+u_3ts+u_3st+u_3tst+u_3sts+u_3tsts)$. The result follows from proposition \ref{BMR PROP}, since $H_{G_{15}}$ is generated as left $u_3$-module by 96 elements and, hence, as $R$-module by $|G_{15}|=288$ elements (recall that $u_3$ is generated as $R$-module by 3 elements). \end{proof} \begin{appendices} \chapter{The BMR and ER presentations} \indent Let $W$ be an exceptional group of rank 2 and $B$ the complex braid group associated to $W$. 
We know that for every complex reflection group we have a Coxeter-like presentation (see remark \ref{coxeterlike}(ii)) and for the associated complex braid group we have an Artin-like presentation (see theorem \ref{Presentt}). For the exceptional groups of rank 2 we call these presentations \emph{the BMR presentations}, due to M. Brou\'e, G. Malle and R. Rouquier. In 2006 P. Etingof and E. Rains gave different presentations of $W$ and $B$, based on the BMR presentations associated to the maximal groups $G_7$, $G_{11}$ and $G_{19}$ (see \textsection 6.1 of \cite{ERrank2}). We call these presentations \emph{the ER presentations}. In table \ref{t3} we give the ER and BMR presentation for every exceptional group of rank 2 and also the isomorphism between them. We will now prove that we have indeed such an isomorphism. An argument in \cite{ERrank2} is that the ER presentation of $W$ has some defining relations of the form $g^p=1$ and that the ER presentation of $B$ is obtained by removing such relations. As we can see in the following pages, when we prove that the ER presentation of $W$ is isomorphic to its BMR presentation we don't use the relations $g^p=1$ in order to prove the correspondence between the other kind of relations in ER presentation and the braid relations in BMR presentation. Hence, we restrict ourselves to proving that the ER presentation and BMR presentation of $W$ are isomorphic, since the case of $B$ can be proven in the same way. \subsection*{The groups $G_4$, $G_8$ and $G_{16}$} \indent Let $W$ be the group $G_4$, $G_8$ or $G_{16}$. For these groups the BMR presentation is of the form $$\langle s,t\;|\;s^k=t^k=1, sts=tst\rangle,$$ where $k=3,4,5$, for each group respectively. We also know that the center of the group in each case is generated by the element $z=(st)^3$ (see Tables in Appendix 1 in \cite{broue2000}). 
According to \S6.1 in \cite{ERrank2}, the ER presentation of these groups is of the form $$\langle a,b,c\;|\; a^2=b^{-3}=\text{central}, c^k=1, abc=1\rangle,$$ where $k=3,4,5$, for each group respectively. Let $\tilde W$ be a group having this ER presentation. We will prove that $\tilde W\simeq W$. Let $\grf_1:W\rightarrow \tilde W$ and $\grf_2: \tilde W \rightarrow W$, defined by $ \grf_1(s)=c$, $\grf_1(t)=c^{-1}b$ and $\grf_2(a)=(sts)^{-1}$, $\grf_2(b) =st$ and $\grf_2(c)=s$. We now prove that $\grf_1$ is a well-defined group homomorphism. Since $abc=1$ and $a^{-2}=b^3$ we have: $a(abc)bc=1\Rightarrow bcbc=a^{-2}\Rightarrow cbc=b^{-1}a^{-2} \Rightarrow cbc=b^2\Rightarrow bc=c^{-1}b^2$, meaning that $\grf_1(s)\grf_1(t)\grf_1(s)=\grf_1(t)\grf_1(s)\grf_1(t)$. Moreover, $\grf_1(s)^k=c^k=1$ and by the relations $abc=1$ and $a^{-2}=b^3$ we also have $(c^{-1}b)^k=(ab^2)^k=(aa^{-2}b^{-1})^k=(a^{-1}b^{-1})^k=a^{-1}(b^{-1}a^{-1})^ka=a^{-1}c^ka=1$, meaning that $\grf_1(t)^k=1$. Similarly, we can prove that $\grf_2$ is a well-defined group homomorphism, since $\grf_2(a)^2=(sts)^{-1}(sts)^{-1}=(tst)^{-1}(sts)^{-1}=z^{-1}=\grf_2(b)^{-3}$, $\grf_2(c)^k=s^k=1$ and $\grf_2(a)\grf_2(b)\grf_2(c)=1$. We also notice that $\grf_1(\grf_2(a))=c^{-1}b^{-1}=a$, $\grf_1(\grf_2(b))=b$ and $\grf_1(\grf_2(c))=c$. Moreover, $\grf_2(\grf_1(s))=s$ and $\grf_2(\grf_1(t))=t$, meaning that $\grf_1\circ\grf_2=\text{id}_{\tilde W}$ and $\grf_2\circ\grf_1=\text{id}_{W}$ and, hence, $W\simeq \tilde W$. \subsection*{The groups $G_5$, $G_{10}$ and $G_{18}$} \indent Let $W$ be the group $G_5$, $G_{10}$ or $G_{18}$. For these groups the BMR presentation is of the form $$\langle s,t\;|\;s^3=t^k=1, stst=tsts\rangle,$$ where $k=3,4,5$, for each group respectively. We also know that the center of the group in each case is generated by the element $z=(st)^2$ (see Tables in Appendix 1 in \cite{broue2000}).
According to \S6.1 in \cite{ERrank2}, the ER presentation of these groups is of the form $$\langle a,b,c\;|\; a^2=\text{central}, b^3=c^k=1, abc=1\rangle,$$ where $k=3,4,5$, for each group respectively. Let $\tilde W$ be a group having this ER presentation. We will prove that $\tilde W\simeq W$. Let $\grf_1:W\rightarrow \tilde W$ and $\grf_2: \tilde W \rightarrow W$, defined by $ \grf_1(s)=b$, $\grf_1(t)=c$ and $\grf_2(a)=(st)^{-1}$, $\grf_2(b) =s$ and $\grf_2(c)=t$. We now prove that $\grf_1$ is a well-defined group homomorphism. Since $abc=cab=1$ and $a^{2}:=\grz$ is central we have: $cbcb=ca^{-1}(abc)b=ca^{-1}b=\grz^{-1}cab=\grz^{-1}=a^{-2}=a^{-1}(abc)a^{-1}(abc)=bcbc$, meaning that $\grf_1(t)\grf_1(s)\grf_1(t)\grf_1(s)=\grf_1(s)\grf_1(t)\grf_1(s)\grf_1(t)$. Moreover, $\grf_1(s)^3=b^3=1$ and $\grf_1(t)^k=c^k=1$. Similarly, we can prove that $\grf_2$ is a well-defined group homomorphism, since $\grf_2(a)^2=(st)^{-2}=z^{-1}$, $\grf_2(b)^3=s^3=1$, $\grf_2(c)^k=t^k=1$ and $\grf_2(a)\grf_2(b)\grf_2(c)=1$. We also notice that $\grf_1(\grf_2(a))=c^{-1}b^{-1}=a$, $\grf_1(\grf_2(b))=b$ and $\grf_1(\grf_2(c))=c$. Moreover, $\grf_2(\grf_1(s))=s$ and $\grf_2(\grf_1(t))=t$, meaning that $\grf_1\circ\grf_2=\text{id}_{\tilde W}$ and $\grf_2\circ\grf_1=\text{id}_{W}$ and, hence, $W\simeq \tilde W$. \subsection*{The groups $G_6$, $G_{9}$ and $G_{17}$} \indent Let $W$ be the group $G_6$, $G_{9}$ or $G_{17}$. For these groups the BMR presentation is of the form $$\langle s,t\;|\;s^2=t^k=1, (st)^3=(ts)^3\rangle,$$ where $k=3,4,5$, for each group respectively. We also know that the center of the group in each case is generated by the element $z=(st)^3$ (see Tables in Appendix 1 in \cite{broue2000}). According to \S6.1 in \cite{ERrank2}, the ER presentation of these groups is of the form $$\langle a,b,c\;|\; a^2=c^k=1, b^3=\text{central}, abc=1\rangle,$$ where $k=3,4,5$, for each group respectively. Let $\tilde W$ be a group having this ER presentation. 
We will prove that $\tilde W\simeq W$. Let $\grf_1:W\rightarrow \tilde W$ and $\grf_2: \tilde W \rightarrow W$, defined by $ \grf_1(s)=a$, $\grf_1(t)=c$ and $\grf_2(a)=s$, $\grf_2(b) =(ts)^{-1}$ and $\grf_2(c)=t$. We now prove that $\grf_1$ is a well-defined group homomorphism. Since $abc=cab=1$ and $b^{3}:=\grz$ is central we have: $cacaca=(cab)b^{-1}(cab)b^{-1}(cab)b^{-1}=b^{-3}=\grz^{-1}=\grz^{-1}abc=a\grz^{-1}bc=ab^{-2}c=a(cab)b^{-1}(cab)b^{-1}c=acacac$, meaning that $\big(\grf_1(t)\grf_1(s)\big)^3=\big(\grf_1(s)\grf_1(t)\big)^3$. Moreover, $\grf_1(s)^2=a^2=1$ and $\grf_1(t)^k=c^k=1$. Similarly, we can prove that $\grf_2$ is a well-defined group homomorphism, since $\grf_2(a)^2=s^2=1$, $\grf_2(b)^3=(ts)^{-3}=z^{-1}$, $\grf_2(c)^k=t^k=1$ and $\grf_2(a)\grf_2(b)\grf_2(c)=1$. We also notice that $\grf_1(\grf_2(a))=a$, $\grf_1(\grf_2(b))=a^{-1}c^{-1}=b$ and $\grf_1(\grf_2(c))=c$. Moreover, $\grf_2(\grf_1(s))=s$ and $\grf_2(\grf_1(t))=t$, meaning that $\grf_1\circ\grf_2=\text{id}_{\tilde W}$ and $\grf_2\circ\grf_1=\text{id}_{W}$ and, hence, $W\simeq \tilde W$. \subsection*{The groups $G_7$, $G_{11}$ and $G_{19}$} \indent This is the case of the maximal groups. We notice that in this case the BMR presentation and the ER presentation coincide. \subsection*{The group $G_{12}$} \indent Let $W$ be the group $G_{12}$, whose BMR presentation is of the form $$\langle s,t,u\;|\;s^2=t^2=u^2=1, stus=tust=ustu\rangle.$$ We also know that the center of the group is generated by the element $z=(stu)^4$ (see Tables in Appendix 1 in \cite{broue2000}). According to \S6.1 in \cite{ERrank2}, the ER presentation of $W$ is of the form $$\langle a,b,c\;|\; a^2=1, b^3=c^{-4}=\text{central}, abc=1\rangle.$$ Let $\tilde W$ be a group having this ER presentation. We will prove that $\tilde W\simeq W$.
Let $\grf_1:W\rightarrow \tilde W$ and $\grf_2: \tilde W \rightarrow W$, defined by $ \grf_1(s)=a$, $\grf_1(t)=a^{-1}c^2b$, $\grf_1(u)=(cb)^{-1}$ and $\grf_2(a)=s$, $\grf_2(b) =(stus)^{-1}$ and $\grf_2(c)=stu$. We now prove that $\grf_1$ is a well-defined group homomorphism. Since $cab=1$ we have $ca=b^{-1}$ meaning that $\grf_1(s)\grf_1(t)\grf_1(u)\grf_1(s)=\grf_1(u)\grf_1(s)\grf_1(t)\grf_1(u)$. Moreover, since $bca=1$ and $b^{-3}=c^{4}$ we have $a^{-1}c^3b=a^{-1}c^{-1}b^{-2}=(bca)^{-1}b^{-1}=b^{-1}$, meaning that we also have $\grf_1(t)\grf_1(u)\grf_1(s)\grf_1(t)= \grf_1(u)\grf_1(s)\grf_1(t)\grf_1(u)$. Moreover, $\grf_1(s)^2=a^2=1$ and since $a^2=1$ and $bca=cab=1$ we have $\grf_1(u)^2=b^{-1}c^{-1}b^{-1}c^{-1}=b^{-1}a(a^{-1}c^{-1}b^{-1})c^{-1}=b^{-1}a^{-1}c^{-1}=1$. Since $a^{-1}=bc$, $c^3=c^{-1}b^{-3}$, $a^{-1}=a$ and $abc=bca=1$ we also have $\grf_1(t)^2=a^{-1}c^2ba^{-1}c^2b=bc^3b^2c^3b=bc^{-1}b^{-1}c^{-1}b^{-2}=b(c^{-1}b^{-1}a^{-1})ac^{-1}b^{-2}=bac^{-1}b^{-2}=b(a^{-1}c^{-1}b^{-1})b^{-1}=1$. Similarly, we can prove that $\grf_2$ is a well-defined group homomorphism, since $\grf_2(a)^2=s^2=1$, $\grf_2(b)^3=\big(stus(stus)(stus)\big)^{-1}=\big(stus(tust)(ustu)\big)^{-1}= (stu)^{-4}=\grf_2(c)^{-4}=z^{-1}$ and $\grf_2(a)\grf_2(b)\grf_2(c)=1$. We also notice that $\grf_1(\grf_2(a))=a$, $\grf_1(\grf_2(b))=a^{-1}c^{-1}=b$ and $\grf_1(\grf_2(c))=c$. Moreover, $\grf_2(\grf_1(s))=s$, $\grf_2(\grf_1(t))=tustu(s^{-1}u^{-1}t^{-1}s^{-1})=tustu(u^{-1}t^{-1}s^{-1}u^{-1})=t$ and $\grf_2(\grf_1(u))=(stus)u^{-1}t^{-1}s^{-1}=(ustu)u^{-1}t^{-1}s^{-1}=u$, meaning that $\grf_1\circ\grf_2=\text{id}_{\tilde W}$ and $\grf_2\circ\grf_1=\text{id}_{W}$ and, hence, $W\simeq \tilde W$. 
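The order computations underlying the above isomorphism can be double-checked by machine. The following sketch is our own illustration, not part of the proof: it assumes the SymPy library is available, and the encoding of the centrality of $b^3$ as the single commutator relation $[b^3,a]=1$ is ours (commutation with $b$ is automatic, and commutation with $c$ follows from $b^3=c^{-4}$). Todd--Coxeter coset enumeration then confirms that the BMR and ER presentations of $G_{12}$ both define a group of order $48$:

```python
# Sanity check (assumes SymPy is available): coset enumeration shows
# that both presentations of G_12 define a group of order |G_12| = 48.
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

# BMR presentation: s^2 = t^2 = u^2 = 1, stus = tust = ustu
F, s, t, u = free_group("s, t, u")
bmr = FpGroup(F, [s**2, t**2, u**2,
                  s*t*u*s*(t*u*s*t)**-1,
                  t*u*s*t*(u*s*t*u)**-1])
assert bmr.order() == 48

# ER presentation: a^2 = 1, b^3 = c^{-4} = central, abc = 1.
# Centrality of b^3 is encoded as the commutator relation [b^3, a] = 1.
E, a, b, c = free_group("a, b, c")
er = FpGroup(E, [a**2, a*b*c, b**3*c**4, b**3*a*b**-3*a**-1])
assert er.order() == 48
```

The same two-line pattern (one `free_group` call, one `FpGroup` with the relators) can be repeated verbatim for the other presentations in this appendix.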
\subsection*{The group $G_{13}$} \indent Let $W$ be the group $G_{13}$, whose BMR presentation is of the form $$\left\langle \begin{array}{l|cl} &s^2=t^2=u^2=1 &\\s,t,u&stust=ustus&\\ &tust=ustu& \end{array}\right \rangle.$$ We also know that the center of the group is generated by the element $z=(stu)^3=(tus)^3=(ust)^3$ (see Tables in Appendix 1 in \cite{broue2000}). According to \S6.1 in \cite{ERrank2}, the ER presentation of $W$ is of the form $$\left \langle \begin{array}{l|cl} &a^2=d^2=1&\\a,b,c,d &b^3=dc^{-2}=\text{central}&\\ &abc=1& \end{array}\right\rangle.$$ Let $\tilde W$ be a group having this ER presentation. We will prove that $\tilde W\simeq W$. Let $\grf_1:W\rightarrow \tilde W$ and $\grf_2: \tilde W \rightarrow W$, defined by $ \grf_1(s)=d$, $\grf_1(t)=a$, $\grf_1(u)=b(da)^{-1}$ and $\grf_2(a)=t$, $\grf_2(b) =ust$, $\grf_2(c)=(tust)^{-1}$ and $\grf_2(d)=s$. We now prove that $\grf_1$ is a well-defined group homomorphism. Since $abc=1$ and $dc^{-1}=b^3c$ we have $dab=d(abc)c^{-1}=dc^{-1}=b^3c=b^2a^{-1}(abc)=b^2a^{-1}$ meaning that $\grf_1(s)\grf_1(t)\grf_1(u)\grf_1(s)\grf_1(t)=\grf_1(u)\grf_1(s)\grf_1(t)\grf_1(u)\grf_1(s)$. Moreover, since $cab=1$ and $dc^{-2}=b^3:=\grz$ we have $b^2a^{-1}d^{-1}=b^3(b^{-1}a^{-1}c^{-1})cd^{-1}=dc^{-1}d^{-1}=\grz cd^{-1}=c\grz d^{-1}=cc^{-2}=c^{-1}=ab$, meaning that we also have $\grf_1(u)\grf_1(s)\grf_1(t)\grf_1(u)=\grf_1(t)\grf_1(u)\grf_1(s)\grf_1(t)$. Moreover, $\grf_1(s)^2=d^2=1$, $\grf_1(t)^2=a^2=1$ and since $b^3=dc^{-2}:=\grz\Rightarrow d^{-1}\grz=c^{-2}$ and $abc=cab=1$ we also have $ba^{-1}d^{-1}ba^{-1}d^{-1}=b^2(b^{-1}a^{-1}c^{-1})cd^{-1}b^2(b^{-1}a^{-1}c^{-1})cd^{-1}=b^{-1}b^3cd^{-1}b^{-1}b^3cd^{-1}=b^{-1}\grz cd^{-1}b^{-1}\grz c d^{-1}$. However, we have $b^{-1}\grz cd^{-1}b^{-1}\grz c d^{-1}= b^{-1}c(d^{-1}\grz) b^{-1}c(d^{-1}\grz)=b^{-1}c^{-1}b^{-1}c^{-1}=b^{-1}(c^{-1}b^{-1}a^{-1})ac^{-1}=b^{-1}a^{-1}c^{-1}=1$, meaning that $\grf_1(u)^2=1$. 
Similarly, we can prove that $\grf_2$ is a well-defined group homomorphism: $\grf_2(d)^2=s^2=1,$ $\grf_2(d)\grf_2(c)^{-2}=(stust)tust=ustustust=\grf_2(b)^3=z$ and $\grf_2(a)\grf_2(b)\grf_2(c)=1$. We also notice that $\grf_1(\grf_2(a))=a$, $\grf_1(\grf_2(b))=b$, $\grf_1(\grf_2(c))=b^{-1}a^{-1}=c$ and $\grf_1(\grf_2(d))=d$. Moreover, $\grf_2(\grf_1(s))=s$, $\grf_2(\grf_1(t))=t$ and $\grf_2(\grf_1(u))=u$, meaning that $\grf_1\circ\grf_2=\text{id}_{\tilde W}$ and $\grf_2\circ\grf_1=\text{id}_{W}$ and, hence, $W\simeq \tilde W$. \subsection*{The group $G_{14}$} \indent Let $W$ be the group $G_{14}$, whose BMR presentation is of the form $$\langle s,t\;|\;s^2=t^3=1, (st)^4=(ts)^4\rangle.$$ We also know that the center of the group is generated by the element $z=(st)^4$ (see Tables in Appendix 1 in \cite{broue2000}). According to \S6.1 in \cite{ERrank2}, the ER presentation of $W$ is of the form $$\langle a,b,c\;|\; a^2=b^3=1, c^{4}=\text{central}, abc=1\rangle.$$ Let $\tilde W$ be a group having this ER presentation. We will prove that $\tilde W\simeq W$. Let $\grf_1:W\rightarrow \tilde W$ and $\grf_2: \tilde W \rightarrow W$, defined by $ \grf_1(s)=a$, $\grf_1(t)=b$ and $\grf_2(a)=s$, $\grf_2(b) =t$ and $\grf_2(c)=(st)^{-1}$. We now prove that $\grf_1$ is a well-defined group homomorphism. Since $abc=1$ and $c^{4}:=\grz$ we have $(ab)^4=c^{-4}=\grz^{-1}=a^{-1}\grz^{-1} a=a^{-1}(abc)c^{-4}a=bc^{-3}a=b(abc)c^{-1}(abc)c^{-1}(abc)c^{-1}a=(ba)^4$ meaning that $\big(\grf_1(s)\grf_1(t)\big)^4=\big(\grf_1(t)\grf_1(s)\big)^4$. We also have $\grf_1(s)^2=a^2=1$ and $\grf_1(t)^3=b^3=1$. Similarly, we can prove that $\grf_2$ is a well-defined group homomorphism, since $\grf_2(a)^2=s^2=1$, $\grf_2(b)^3=t^3=1$, $\grf_2(c)^4=(st)^{-4}=z^{-1}$ and $\grf_2(a)\grf_2(b)\grf_2(c)=1$. We also notice that $\grf_1(\grf_2(a))=a$, $\grf_1(\grf_2(b))=a^{-1}c^{-1}=b$ and $\grf_1(\grf_2(c))=b^{-1}a^{-1}=c$.
Moreover, $\grf_2(\grf_1(s))=s$ and $\grf_2(\grf_1(t))=t$, meaning that $\grf_1\circ\grf_2=\text{id}_{\tilde W}$ and $\grf_2\circ\grf_1=\text{id}_{W}$ and, hence, $W\simeq \tilde W$. \subsection*{The group $G_{15}$} \indent Let $W$ be the group $G_{15}$, whose BMR presentation is of the form $$\left\langle \begin{array}{l|cl} &s^2=t^2=u^3=1 &\\s,t,u&ustut=stutu&\\ &tus=stu& \end{array}\right \rangle.$$ We also know that the center of the group is generated by the element $z=stutu$ (see Tables in Appendix 1 in \cite{broue2000}). According to \S6.1 in \cite{ERrank2}, the ER presentation of $W$ is of the form $$\left \langle \begin{array}{l|cl} &a^2=b^3=d^2=1&\\a,b,c,d &dc^{-2}=\text{central}&\\ &abc=1& \end{array}\right\rangle.$$ Let $\tilde W$ be a group having this ER presentation. We will prove that $\tilde W\simeq W$. Let $\grf_1:W\rightarrow \tilde W$ and $\grf_2: \tilde W \rightarrow W$, defined by $ \grf_1(s)=d$, $\grf_1(t)=a$, $\grf_1(u)=b$ and $\grf_2(a)=t$, $\grf_2(b) =u$, $\grf_2(c)=(tu)^{-1}$ and $\grf_2(d)=s$. We now prove that $\grf_1$ is a well-defined group homomorphism. Since $abc=bca=1$ and $dc^{-2}:=\grz$ we have $bdaba=bd(abc)c^{-1}a=b(dc^{-2})c^{2}c^{-1}a= b\grz ca=(bca)\grz=dc^{-2}=d(abc)c^{-1}(abc)c^{-1}=dabab$, meaning that $\grf_1(u)\grf_1(s)\grf_1(t)\grf_1(u)\grf_1(t)=\grf_1(s)\grf_1(t)\grf_1(u)\grf_1(t)\grf_1(u)$. Moreover, since $abc=1$ and $d=\grz c^2$ we have $abd=(abc)c^{-1}d=c^{-1}\grz c^2=(\grz c^2)c^{-1}=dc^{-1}=d(abc)c^{-1}=dab$, meaning that we also have $\grf_1(t)\grf_1(u)\grf_1(s)=\grf_1(s)\grf_1(t)\grf_1(u)$. Moreover, $\grf_1(s)^2=d^2=1$, $\grf_1(t)^2=a^2=1$ and $\grf_1(u)^3=b^3=1$. Similarly, we can prove that $\grf_2$ is a well-defined group homomorphism: $\grf_2(a)^2=t^2=1,$ $\grf_2(b)^3=u^3=1$, $\grf_2(d)^2=s^2=1$, $\grf_2(d)\grf_2(c)^{-2}=stutu=z$ and $\grf_2(a)\grf_2(b)\grf_2(c)=1$. We also notice that $\grf_1(\grf_2(a))=a$, $\grf_1(\grf_2(b))=b$, $\grf_1(\grf_2(c))=b^{-1}a^{-1}=c$ and $\grf_1(\grf_2(d))=d$.
Moreover, $\grf_2(\grf_1(s))=s$, $\grf_2(\grf_1(t))=t$ and $\grf_2(\grf_1(u))=u$, meaning that $\grf_1\circ\grf_2=\text{id}_{\tilde W}$ and $\grf_2\circ\grf_1=\text{id}_{W}$ and, hence, $W\simeq \tilde W$. \subsection*{The group $G_{20}$} \indent Let $W$ be the group $G_{20}$, whose BMR presentation is of the form $$\langle s,t\;|\;s^3=t^3=1, ststs=tstst\rangle.$$ We also know that the center of the group is generated by the element $z=(st)^5$ (see Tables in Appendix 1 in \cite{broue2000}). According to \S6.1 in \cite{ERrank2}, the ER presentation of $W$ is of the form $$\left\langle \begin{array}{l|cl} &a^2=\text{central}&\\ a,b,c&b^3=1,a^4=c^{-5}&\\ &abc=1& \end{array}\right\rangle .$$ Let $\tilde W$ be a group having this ER presentation. We will prove that $\tilde W\simeq W$. Let $\grf_1:W\rightarrow \tilde W$ and $\grf_2: \tilde W \rightarrow W$, defined by $ \grf_1(s)=b$, $\grf_1(t)=c^{-1}a^{-1}$ and $\grf_2(a)=(ststs)^{-1}$, $\grf_2(b) =s$ and $\grf_2(c)=tsts$. We now prove that $\grf_1$ is a well-defined group homomorphism. Since $abc=1$, $a^2:=\grz$ and $\grz^{-2}c^{-5}=1$ we have $bc^{-1}a^{-1}bc^{-1}a^{-1}b=bc^{-1}a^{-2}(abc)c^{-2}a^{-2}(abc)c^{-1}=bc^{-1}\grz^{-1} c^{-2}\grz^{-1} c^{-1}=b(\grz^{-2} c^{-4})=bc=a^{-1}(abc)=\grz^{-2} c^{-5} a^{-1}= c^{-1}\grz^{-1} c^{-2} \grz^{-1} c^{-2} a^{-1}$. However, since $\grz^{-1}=a^{-2}$, $c^{-1}\grz^{-1} c^{-2} \grz^{-1} c^{-2} a^{-1}= c^{-1}a^{-2}c^{-2}a^{-2}c^{-2}a^{-1}=c^{-1}a^{-2}(abc)c^{-2}a^{-2}(abc)c^{-2}a^{-1}=(c^{-1}a^{-1}b)^2c^{-1}a^{-1}$, meaning that $\grf_1(s)\grf_1(t)\grf_1(s)\grf_1(t)\grf_1(s)=\grf_1(t)\grf_1(s)\grf_1(t)\grf_1(s)\grf_1(t)$. We also have $\grf_1(s)^3=b^3=1$ and since $abc=bca=1$ and $b^2=b^{-1}$ we also have $\grf_1(t)^3=c^{-1}(a^{-1}c^{-1}b^{-1})b(a^{-1}c^{-1}b^{-1})ba^{-1}=c^{-1}b^2a^{-1}=c^{-1}b^{-1}a^{-1}=1$.
Similarly, we can prove that $\grf_2$ is a well-defined group homomorphism, since $\grf_2(a)^2=(ststs)^{-1}(ststs)^{-1}=(st)^{-5}=z^{-1}$, $\grf_2(b)^3=s^3=1$, $\grf_2(c)^{-5}=(ts)^{-10}=(st)^{-10}=z^{-2}=\grf_2(a)^4$ and $\grf_2(a)\grf_2(b)\grf_2(c)=1$. Since $abc=1$, $a^2:=\grz$ and $\grz^{-2}c^{-5}=1$ we have that $\grf_1(\grf_2(a))=(bc^{-1}a^{-1}bc^{-1}a^{-1}b)^{-1}=(bc^{-1}a^{-2}(abc)c^{-2}a^{-2}(abc)c^{-1})^{-1}=(bc^{-1}\grz^{-1} c^{-2}\grz^{-1} c^{-1})^{-1}=(b\grz^{-2} c^{-4})^{-1}=(bc)^{-1}=a$. Moreover, $\grf_1(\grf_2(b))=b$ and since $a^2:=\grz$ and $\grz^{-2}c^{-5}=1$ we have $\grf_1(\grf_2(c))=c^{-1}a^{-2}(abc)c^{-2}a^{-2}(abc)c^{-1}=c^{-1}\grz^{-1}c^{-2}\grz^{-1}c^{-1}=\grz^{-2}c^{-4}=c$. Furthermore, $\grf_2(\grf_1(s))=s$ and $\grf_2(\grf_1(t))=(ts)^{-2}(ststs)=(ts)^{-2}(tstst)=t$, meaning that $\grf_1\circ\grf_2=\text{id}_{\tilde W}$ and $\grf_2\circ\grf_1=\text{id}_{W}$ and, hence, $W\simeq \tilde W$. \subsection*{The group $G_{21}$} \indent Let $W$ be the group $G_{21}$, whose BMR presentation is of the form $$\langle s,t\;|\;s^3=t^3=1, (st)^5=(ts)^5\rangle.$$ We also know that the center of the group is generated by the element $z=(st)^5$ (see Tables in Appendix 1 in \cite{broue2000}). According to \S6.1 in \cite{ERrank2}, the ER presentation of $W$ is of the form $$\langle a,b,c\;|\; a^2=b^3=1, c^{5}=\text{central}, abc=1\rangle.$$ Let $\tilde W$ be a group having this ER presentation. We will prove that $\tilde W\simeq W$. Let $\grf_1:W\rightarrow \tilde W$ and $\grf_2: \tilde W \rightarrow W$, defined by $ \grf_1(s)=a$, $\grf_1(t)=b$ and $\grf_2(a)=s$, $\grf_2(b) =t$ and $\grf_2(c)=(st)^{-1}$. We now prove that $\grf_1$ is a well-defined group homomorphism. Since $abc=1$ and $c^{5}:=\grz$ we have $(ab)^5=c^{-5}=\grz^{-1}=a^{-1}\grz^{-1} a=a^{-1}(abc)c^{-5}a=bc^{-4}a=b(abc)c^{-1}(abc)c^{-1}(abc)c^{-1}(abc)c^{-1}a=(ba)^5$ meaning that $\big(\grf_1(s)\grf_1(t)\big)^5=\big(\grf_1(t)\grf_1(s)\big)^5$.
We also have $\grf_1(s)^2=a^2=1$ and $\grf_1(t)^3=b^3=1$. Similarly, we can prove that $\grf_2$ is a well-defined group homomorphism, since $\grf_2(a)^2=s^2=1$, $\grf_2(b)^3=t^3=1$, $\grf_2(c)^5=(st)^{-5}=z^{-1}$ and $\grf_2(a)\grf_2(b)\grf_2(c)=1$. We also notice that $\grf_1(\grf_2(a))=a$, $\grf_1(\grf_2(b))=a^{-1}c^{-1}=b$ and $\grf_1(\grf_2(c))=b^{-1}a^{-1}=c$. Moreover, $\grf_2(\grf_1(s))=s$ and $\grf_2(\grf_1(t))=t$, meaning that $\grf_1\circ\grf_2=\text{id}_{\tilde W}$ and $\grf_2\circ\grf_1=\text{id}_{W}$ and, hence, $W\simeq \tilde W$. \subsection*{The group $G_{22}$} \indent Let $W$ be the group $G_{22}$, whose BMR presentation is of the form $$\langle s,t,u\;|\;s^2=t^2=u^2=1, stust=tustu=ustus\rangle.$$ We also know that the center of the group is generated by the element $z=(stu)^5$ (see Tables in Appendix 1 in \cite{broue2000}). According to \S6.1 in \cite{ERrank2}, the ER presentation of $W$ is of the form $$\langle a,b,c\;|\; a^2=1, b^3=\text{central}, c^5=b^{-6}, abc=1\rangle.$$ Let $\tilde W$ be a group having this ER presentation. We will prove that $\tilde W\simeq W$. Let $\grf_1:W\rightarrow \tilde W$ and $\grf_2: \tilde W \rightarrow W$, defined by $ \grf_1(s)=a$, $\grf_1(t)=(b^{-1}cb^2)^{-1}$, $\grf_1(u)=(cb)^{-1}$ and $\grf_2(a)=s$, $\grf_2(b) =tustu$ and $\grf_2(c)=(stustu)^{-1}$. We now prove that $\grf_1$ is a well-defined group homomorphism. Since $abc=1$, $b^3:=\grz$ and $c^{-5}=b^6$ we have: $ab^{-2}c^{-2}ab^{-2}c^{-1}b=a\grz^{-1}bc^{-2}a\grz^{-1}bc^{-1}b=\grz^{-2}(abc)c^{-3}(abc)c^{-2}b=\grz^{-2}c^{-5}b=(\grz^{-2}b^6)b=b$. Moreover, $b^{-2}c^{-2}ab^{-2}c^{-2}=\grz^{-1}bc^{-2}a\grz^{-1}bc^{-2}=\grz^{-2}bc^{-2}(abc)c^{-3}=\grz^{-2}bc^{-5}=\grz^{-2}b^7=b$ and, analogously, $b^{-1}c^{-1}ab^{-2}c^{-2}a= b^{-1}c^{-1}a\grz^{-1}bc^{-2}a=\grz^{-1}b^{-1}c^{-1}(abc)c^{-3}(abc)c^{-1}b^{-1}=\grz^{-1}b^{-1}c^{-5}b^{-1}=\grz^{-1}b^4=b$.
Hence, $\grf_1(s)\grf_1(t)\grf_1(u)\grf_1(s)\grf_1(t)=\grf_1(t)\grf_1(u)\grf_1(s)\grf_1(t)\grf_1(u)=\grf_1(u)\grf_1(s)\grf_1(t)\grf_1(u)\grf_1(s)$. Moreover, $\grf_1(s)^2=a^2=1$ and since $abc=bca=cab=1$ and $a=a^{-1}$ we also have $\grf_1(t)^2=b^{-2}c^{-1}b^{-1}c^{-1}b=b^{-2}(c^{-1}b^{-1}a^{-1})a^{-1}c^{-1}b=b^{-2}(a^{-1}c^{-1})b=b^{-2}b^2=1$ and $\grf_1(u)^2=b^{-1}c^{-1}b^{-1}c^{-1}=b^{-1}(c^{-1}b^{-1}a^{-1})a^{-1}c^{-1}=b^{-1}a^{-1}c^{-1}=(cab)^{-1}=1$. Similarly, we can prove that $\grf_2$ is a well-defined group homomorphism, since $\grf_2(a)^2=s^2=1$, $\grf_2(b)^3=(tustu)(tustu)(tustu)=(stust)(ustus)(tustu)=(stu)^5=z$, $\grf_2(c)^{-5}=(stu)^{10}=z^2=\grf_2(b)^6$ and $\grf_2(a)\grf_2(b)\grf_2(c)=1$. We also notice that $\grf_1(\grf_2(a))=a$ and since $abc=1$, $b^3:=\grz$ and $c^{-5}=b^6$ we have: $\grf_1(\grf_2(b))=\grf_1(tustu)=b^{-2}c^{-2}ab^{-2}c^{-2}=\grz^{-1}bc^{-2}a\grz^{-1}bc^{-2}=\grz^{-2}bc^{-2}(abc)c^{-3}=\grz^{-2}bc^{-5}=\grz^{-2}b^7=b$. Moreover, $\grf_1(\grf_2(c))=(ab^{-2}c^{-2}ab^{-2}c^{-2})^{-1}=(a\grz^{-1}bc^{-2}a\grz^{-1}bc^{-2})^{-1}=(\grz^{-2}(abc)c^{-3}(abc)c^{-3})^{-1}=(\grz^{-2}c^{-6})^{-1}=\grz^{2}c^{6}=c^{-5}c^6=c$. We also have that $\grf_2(\grf_1(s))=s$, $\grf_2(\grf_1(t))=(tustu)^{-1}(tustu)^{-1}(stu)^2(tustu)=(ustus)^{-1}(tustu)^{-1}(tustu)u(stust)=(ustus)^{-1}(ustus)t=t$ and $\grf_2(\grf_1(u))=(tustu)^{-1}(stust)u=(tustu)^{-1}(tustu)u=u$, meaning that $\grf_1\circ\grf_2=\text{id}_{\tilde W}$ and $\grf_2\circ\grf_1=\text{id}_{W}$ and, hence, $W\simeq \tilde W$.
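The isomorphisms verified above by hand can also be cross-checked by machine, at least at the level of group orders. The following sketch (an illustration, not part of the original verification) feeds both presentations of the finite group $G_{21}$ from Table~\ref{t3} to SymPy's Todd--Coxeter coset enumeration; the centrality of $c^5$ is imposed through commutator relators with the generators, and the expected order $720$ is the order of $G_{21}$ in the Shephard--Todd classification (an external fact, not proved here).

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

# BMR presentation of G_21: <s,t | s^2 = t^3 = 1, (st)^5 = (ts)^5>
F, s, t = free_group("s, t")
bmr = FpGroup(F, [s**2, t**3, (s*t)**5 * ((t*s)**5)**-1])

# ER presentation of G_21: <a,b,c | a^2 = b^3 = 1, c^5 central, abc = 1>;
# centrality of c^5 is encoded by commutator relators [c^5, a] and [c^5, b]
E, a, b, c = free_group("a, b, c")
er = FpGroup(E, [a**2, b**3, a*b*c,
                 c**5 * a * c**-5 * a**-1,
                 c**5 * b * c**-5 * b**-1])

print(bmr.order(), er.order())  # both orders should agree with |G_21|
```

Since both coset enumerations terminate with the same index, the two presentations define groups of the same (finite) order, which is consistent with, though of course weaker than, the explicit isomorphism constructed above.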
~ \chapter{Tables} \begin{table}[h] \begin{center} \small \caption{ \bf{BMR and ER presentations for the complex braid groups associated to the exceptional groups of rank 2}} \label{t2} \scalebox{0.65} {\begin{tabular}{|c|c|c|c|c|c|} \hline Group & BMR presentation & ER presentation & $\grf_1$: BMR $\leadsto$ ER& $\grf_2$: ER $\leadsto$ BMR \\ \hline $\begin{array}[t]{lcl} G_4\\ \\ G_8\\ \\ G_{16} \end{array}$& $\begin{array}[t]{lcl} \\\\ \langle s,t\;|\; sts=tst\rangle\\ \end{array}$& $\begin{array}[t]{lcl} \\\\ \langle \gra,\grb,\grg\;|\; \gra^2=\grb^{-3}=\text{central},\gra\grb\grg=1\rangle \end{array}$& $\begin{array}[t]{lcl} \\ s&\mapsto &\grg\\t&\mapsto& \grg^{-1}\grb \end{array}$ & $\begin{array}[t]{lcl}\\ \gra&\mapsto& (sts)^{-1}\\\grb&\mapsto& st\\\grg&\mapsto&s \end{array}$\\ \hline \hline $\begin{array}[t]{lcl} G_5\\ \\ G_{10}\\ \\ G_{18} \end{array}$& $\begin{array}[t]{lcl} \\\\ \langle s,t\;|\; stst=tsts\rangle\\ \end{array}$& $\begin{array}[t]{lcl} \\\\ \langle \gra,\grb,\grg\;|\; \gra^2=\text{central},\gra\grb\grg=1\rangle \end{array}$& $\begin{array}[t]{lcl} \\ s&\mapsto & \grb\\t&\mapsto& \grg \end{array}$ & $\begin{array}[t]{lcl} \\ \gra&\mapsto& (st)^{-1}\\\grb&\mapsto& s\\\grg&\mapsto&t \end{array}$\\ \hline\hline $\begin{array}[t]{lcl} G_6\\ \\ G_9\\ \\ G_{17} \end{array}$& $\begin{array}[t]{lcl} \\\\ \langle s,t\;|\; (st)^3=(ts)^3\rangle\\ \end{array}$& $\begin{array}[t]{lcl} \\ \\\langle \gra,\grb,\grg\;|\; \grb^3=\text{central},\gra\grb\grg=1\rangle \end{array}$& $\begin{array}[t]{lcl}\\ s&\mapsto & \gra\\t&\mapsto& \grg \end{array}$ & $\begin{array}[t]{lcl}\\ \gra&\mapsto& s\\\grb&\mapsto& (ts)^{-1}\\\grg&\mapsto&t \end{array}$\\ \hline\hline $\begin{array}[t]{lcl} G_7\\ \\ G_{11}\\ \\ G_{19} \end{array}$& $\begin{array}[t]{lcl} \\\\ \langle s,t,u\;|\; stu=tus=ust\rangle\\ \end{array}$& $\begin{array}[t]{lcl} \\\\ \langle \gra,\grb,\grg\;|\;\gra\grb\grg=\text{central }\rangle \end{array}$& $\begin{array}[t]{lcl}\\ s&\mapsto & 
\gra\\t&\mapsto& \grb\\ u&\mapsto& \grg \end{array}$ & $\begin{array}[t]{lcl}\\ \gra&\mapsto& s\\\grb&\mapsto& t\\\grg&\mapsto&u \end{array}$\\ \hline\hline $\begin{array}[t]{lcl} \\ G_{12}\\ \end{array}$& $\begin{array}[t]{lcl} \\ \langle s,t,u\;|\; stus=tust=ustu\rangle\\ \end{array}$& $\begin{array}[t]{lcl} \\ \langle \gra,\grb,\grg\;|\; \grb^3=\grg^{-4}=\text{central},\gra\grb\grg=1\rangle \end{array}$& $\begin{array}[t]{lcl} s&\mapsto & \gra\\t&\mapsto& \gra^{-1}\grg^2\grb\\ u&\mapsto&(\grg\grb)^{-1} \end{array}$ & $\begin{array}[t]{lcl} \gra&\mapsto& s\\\grb&\mapsto& (stus)^{-1}\\\grg&\mapsto&stu \end{array}$\\ \hline\hline $\begin{array}[t]{lcl} \\ \\ G_{13}\\ \end{array}$& $\begin{array}[t]{lcl}\\\\ \left\langle \begin{array}{l|cl} s,t,u& stust=ustus&\\ &tust=ustu& \end{array}\right \rangle \end{array}$& $\begin{array}[t]{lcl} \\ \left\langle \begin{array}{l|cl} & \grb^3=\grd\grg^{-2}=\text{central}&\\ \gra,\grb,\grg,\grd&\grg^4=\text{central}\\ &\gra\grb\grg=1& \end{array}\right \rangle\\\\ \end{array}$& $\begin{array}[t]{lcl} \\s&\mapsto & \grd\\t&\mapsto& \gra\\ u&\mapsto&\grb(\grd\gra)^{-1} \end{array}$ & $\begin{array}[t]{lcl} \gra&\mapsto& t\\\grb&\mapsto& ust\\\grg&\mapsto&(tust)^{-1}\\ \grd&\mapsto&s \end{array}$\\ \hline\hline $\begin{array}[t]{lcl} \\ G_{14}\\ \end{array}$& $\begin{array}[t]{lcl} \\ \langle s,t\;|\; (st)^4=(ts)^4\rangle\\ \end{array}$& $\begin{array}[t]{lcl} \\ \langle \gra,\grb,\grg\;|\; \grg^4=\text{central},\gra\grb\grg=1\rangle \end{array}$& $\begin{array}[t]{lcl} s&\mapsto & \gra\\t&\mapsto& \grb \end{array}$ & $\begin{array}[t]{lcl} \gra&\mapsto& s\\\grb&\mapsto& t\\\grg&\mapsto&(st)^{-1} \end{array}$\\ \hline\hline $\begin{array}[t]{lcl} \\\\ G_{15}\\ \end{array}$& $\begin{array}[t]{lcl}\\\\ \left\langle \begin{array}{l|cl} s,t,u& ustut=stutu&\\ &tus=stu& \end{array}\right \rangle \end{array}$& $\begin{array}[t]{lcl} \\ \left\langle \begin{array}{l|cl} & \grd\grg^{-2}=\text{central}&\\ 
\gra,\grb,\grg,\grd&\grg^4=\text{central}\\ &\gra\grb\grg=1& \end{array}\right \rangle\\\\ \end{array}$& $\begin{array}[t]{lcl} \\s&\mapsto & \grd\\t&\mapsto& \gra\\ u&\mapsto&\grb \end{array}$ & $\begin{array}[t]{lcl} \gra&\mapsto& t\\\grb&\mapsto& u\\\grg&\mapsto&(tu)^{-1}\\ \grd&\mapsto&s \end{array}$\\ \hline\hline $\begin{array}[t]{lcl} \\ G_{20}\\ \end{array}$& $\begin{array}[t]{lcl} \\ \langle s,t\;|\; ststs=tstst\rangle\\ \end{array}$& $\begin{array}[t]{lcl} \\ \langle \gra,\grb,\grg\;|\; \gra^2=\text{central},\gra^4=\grg^{-5},\gra\grb\grg=1\rangle \end{array}$& $\begin{array}[t]{lcl} s&\mapsto & \grb\\t&\mapsto& (\gra\grg)^{-1} \end{array}$ & $\begin{array}[t]{lcl} \gra&\mapsto& (ststs)^{-1}\\\grb&\mapsto& s\\\grg&\mapsto&tsts \end{array}$\\ \hline\hline $\begin{array}[t]{lcl} \\ G_{21}\\ \end{array}$& $\begin{array}[t]{lcl} \\ \langle s,t\;|\; (st)^5=(ts)^5\rangle\\ \end{array}$& $\begin{array}[t]{lcl} \\ \langle \gra,\grb,\grg\;|\; \grg^5=\text{central},\gra\grb\grg=1\rangle \end{array}$& $\begin{array}[t]{lcl} s&\mapsto & \gra\\t&\mapsto& \grb \end{array}$ & $\begin{array}[t]{lcl} \gra&\mapsto& s\\\grb&\mapsto& t\\\grg&\mapsto&(st)^{-1} \end{array}$\\ \hline\hline $\begin{array}[t]{lcl} \\ G_{22}\\ \end{array}$& $\begin{array}[t]{lcl} \\ \langle s,t,u\;|\; stust=tustu=ustus\rangle\\ \end{array}$& $\begin{array}[t]{lcl} \\ \langle \gra,\grb,\grg\;|\; \grb^3=\text{central},\grg^5=\text{central},\gra\grb\grg=1\rangle \end{array}$& $\begin{array}[t]{lcl} s&\mapsto & \gra\\t&\mapsto& (\grb^{-1}\grg\grb^2)^{-1}\\ u&\mapsto&(\grg\grb)^{-1} \end{array}$ & $\begin{array}[t]{lcl} \gra&\mapsto&s\\\grb&\mapsto& tustu\\\grg&\mapsto&(stustu)^{-1} \end{array}$\\ \hline \end{tabular} } \end{center} \end{table} \begin{table}[htp] \begin{center} \small \caption{ \bf{BMR and ER presentations for the exceptional groups of rank 2}} \label{t3} \scalebox{0.73} {\begin{tabular}{|c|c|c|c|c|c|} \hline Group & BMR presentation & ER presentation & $\grf_1$: BMR $\leadsto$ ER& 
$\grf_2$: ER $\leadsto$ BMR \\ \hline $\begin{array}[t]{lcl} \\G_4\\ \\ G_8\\ \\ G_{16} \end{array}$& $\begin{array}[t]{lcl}\\ \langle s,t\;|\;s^3=t^3=1, sts=tst\rangle\\ \\ \langle s,t\;|\;s^4=t^4=1, sts=tst\rangle\\ \\ \langle s,t\;|\;s^5=t^5=1, sts=tst\rangle\\ \end{array}$& $\begin{array}[t]{lcl}\\ \langle a,b,c\;|\; a^2=b^{-3}=\text{central}, c^3=1, abc=1\rangle\\ \\ \langle a,b,c\;|\; a^2=b^{-3}=\text{central}, c^4=1, abc=1\rangle\\ \\ \langle a,b,c\;|\; a^2=b^{-3}=\text{central}, c^5=1, abc=1\rangle\\ \\ \end{array}$& $\begin{array}[t]{lcl} \\ \\ s&\mapsto &c\\ \\t&\mapsto& c^{-1}b \end{array}$ & $\begin{array}[t]{lcl}\\ a&\mapsto& (sts)^{-1}\\\\b&\mapsto& st\\\\c&\mapsto&s \end{array}$\\ \hline \hline $\begin{array}[t]{lcl} \\G_5\\ \\ G_{10}\\ \\ G_{18} \end{array}$& $\begin{array}[t]{lcl}\\ \langle s,t\;|\; s^3=t^3=1,stst=tsts\rangle\\ \\ \langle s,t\;|\; s^3=t^4=1,stst=tsts\rangle\\ \\ \langle s,t\;|\; s^3=t^5=1,stst=tsts\rangle\\ \\ \end{array}$& $\begin{array}[t]{lcl}\\ \langle a,b,c\;|\; a^2=\text{central},b^3=c^3=1, abc=1\rangle \\ \\ \langle a,b,c\;|\; a^2=\text{central},b^3=c^4=1, abc=1\rangle\\ \\ \langle a,b,c\;|\; a^2=\text{central},b^3=c^5=1, abc=1\rangle \\ \\ \end{array}$& $\begin{array}[t]{lcl} \\\\ s&\mapsto & b\\\\t&\mapsto& c \end{array}$ & $\begin{array}[t]{lcl} \\ a&\mapsto& (st)^{-1}\\\\b&\mapsto& s\\\\c&\mapsto&t \end{array}$\\ \hline \hline $\begin{array}[t]{lcl}\\ G_6\\ \\ G_9\\ \\ G_{17} \end{array}$& $\begin{array}[t]{lcl} \\\langle s,t\;|\; s^2=t^3=1,(st)^3=(ts)^3\rangle\\ \\ \langle s,t\;|\; s^2=t^4=1,(st)^3=(ts)^3\rangle\\ \\ \langle s,t\;|\; s^2=t^5=1,(st)^3=(ts)^3\rangle\\ \\ \end{array}$& $\begin{array}[t]{lcl} \\\langle a,b,c\;|\; a^2=c^3=1, b^3=\text{central},abc=1\rangle\\ \\ \langle a,b,c\;|\; a^2=c^4=1, b^3=\text{central},abc=1\rangle\\ \\ \langle a,b,c\;|\; a^2=c^5=1, b^3=\text{central},abc=1\rangle\\ \\ \end{array} $& $\begin{array}[t]{lcl}\\\\ s&\mapsto & a\\\\t&\mapsto& c \end{array}$ & $\begin{array}[t]{lcl}\\ 
a&\mapsto& s\\\\b&\mapsto& (ts)^{-1}\\\\c&\mapsto&t \end{array}$\\ \hline \hline $\begin{array}[t]{lcl} \\G_7\\ \\\\ G_{11}\\ \\\\ G_{19} \end{array}$& $\begin{array}[t]{lcl} \\ \left\langle \begin{array}{l|cl} \;s,t,u& s^2=t^3=u^3=1&\\ &stu=tus=ust& \end{array}\right \rangle\\\\ \left\langle \begin{array}{l|cl} \;s,t,u& s^2=t^3=u^4=1&\\ &stu=tus=ust& \end{array}\right \rangle\\\\ \left\langle \begin{array}{l|cl} \;s,t,u& s^2=t^3=u^5=1&\\ &stu=tus=ust& \end{array}\right \rangle\\\\ \end{array}$& $\begin{array}[t]{lcl} \\ \langle a,b,c\;|\;a^2=b^3=c^3=1, abc=\text{central }\rangle\\\\\\ \langle a,b,c\;|\;a^2=b^3=c^4=1, abc=\text{central }\rangle\\\\\\ \langle a,b,c\;|\;a^2=b^3=c^5=1, abc=\text{central }\rangle\\\\ \end{array}$& $\begin{array}[t]{lcl}\\\\ s&\mapsto & a\\\\t&\mapsto& b\\\\ u&\mapsto& c \end{array}$ & $\begin{array}[t]{lcl}\\\\ a&\mapsto& s\\\\b&\mapsto& t\\\\c&\mapsto&u \end{array}$\\ \hline \hline $\begin{array}[t]{lcl} \\ G_{12}\\ \end{array}$& $\begin{array}[t]{lcl} \\ \left\langle \begin{array}{l|cl} \;s,t,u& s^2=t^2=u^2=1&\\ &stus=tust=ustu& \end{array}\right \rangle\\\\ \end{array}$& $\begin{array}[t]{lcl} \\ \left \langle \begin{array}{l|cl} &a^2=1&\\ a,b,c &b^3=c^{-4}=\text{central}&\\ &abc=1& \end{array}\right\rangle \\\\\end{array}$& $\begin{array}[t]{lcl} \\s&\mapsto & a\\t&\mapsto& a^{-1}c^2b\\ u&\mapsto&(cb)^{-1} \end{array}$ & $\begin{array}[t]{lcl} \\a&\mapsto& s\\b&\mapsto& (stus)^{-1}\\c&\mapsto&stu \end{array}$\\ \hline\hline $\begin{array}[t]{lcl} \\ \\ G_{13}\\\\ \end{array}$& $\begin{array}[t]{lcl}\\ \left\langle \begin{array}{l|cl} &s^2=t^2=u^2=1 &\\s,t,u&stust=ustus&\\ &tust=ustu& \end{array}\right \rangle\\ \\ \end{array}$& $\begin{array}[t]{lcl} \\ \left \langle \begin{array}{l|cl} &a^2=d^2=1&\\a,b,c,d &b^3=dc^{-2}=\text{central}&\\ &abc=1& \end{array}\right\rangle \\\\\end{array}$& $\begin{array}[t]{lcl}\\ s&\mapsto & d\\t&\mapsto& a\\ u&\mapsto&b(da)^{-1} \end{array}$ & $\begin{array}[t]{lcl} \\a&\mapsto& t\\b&\mapsto& 
ust\\c&\mapsto&(tust)^{-1}\\ d&\mapsto&s\\ \end{array}$\\ \hline\hline $\begin{array}[t]{lcl} \\ G_{14}\\ \end{array}$& $\begin{array}[t]{lcl} \\ \langle s,t\;|\; s^2=t^3=1,(st)^4=(ts)^4\rangle\\ \end{array}$& $\begin{array}[t]{lcl} \\ \langle a,b,c\;|\;a^2=b^3=1, c^4=\text{central},abc=1\rangle \end{array}$& $\begin{array}[t]{lcl} s&\mapsto & a\\t&\mapsto& b \end{array}$ & $\begin{array}[t]{lcl} a&\mapsto& s\\b&\mapsto& t\\c&\mapsto&(st)^{-1} \end{array}$\\ \hline\hline $\begin{array}[t]{lcl} \\\\ G_{15}\\ \end{array}$& $\begin{array}[t]{lcl}\\ \left\langle \begin{array}{l|cl} & s^2=t^2=u^3=1&\\s,t,u&ustut=stutu&\\ &tus=stu& \end{array}\right \rangle\\\\ \end{array}$& $\begin{array}[t]{lcl} \\ \left \langle \begin{array}{l|cl} &a^2=b^3=d^2=1&\\ a,b,c,d &dc^{-2}=\text{central}&\\ &abc=1& \end{array}\right\rangle \\\\\end{array}$& $\begin{array}[t]{lcl} \\s&\mapsto & d\\t&\mapsto& a\\ u&\mapsto&b \end{array}$ & $\begin{array}[t]{lcl} a&\mapsto& t\\b&\mapsto& u\\c&\mapsto&(tu)^{-1}\\ d&\mapsto&s \end{array}$\\ \hline \hline $\begin{array}[t]{lcl} \\\\ G_{20}\\ \end{array}$& $\begin{array}[t]{lcl} \\\\ \langle s,t\;|\;s^3=t^3=1, ststs=tstst\rangle\\ \end{array}$& $\begin{array}[t]{lcl} \\ \left \langle \begin{array}{l|cl} &a^2=\text{central}&\\ a,b,c&b^3=1,a^4=c^{-5}&\\ &abc=1& \end{array}\right\rangle \\\\\end{array}$& $\begin{array}[t]{lcl} \\s&\mapsto & b\\\\t&\mapsto& (ac)^{-1} \end{array}$ & $\begin{array}[t]{lcl} \\a&\mapsto& (ststs)^{-1}\\b&\mapsto& s\\c&\mapsto&tsts \end{array}$\\ \hline\hline $\begin{array}[t]{lcl} \\ G_{21}\\ \end{array}$& $\begin{array}[t]{lcl} \\ \langle s,t\;|\; s^2=t^3=1,(st)^5=(ts)^5\rangle\\ \end{array}$& $\begin{array}[t]{lcl} \\ \langle a,b,c\;|\;a^2=b^3=1, c^5=\text{central},abc=1\rangle \end{array}$& $\begin{array}[t]{lcl} s&\mapsto & a\\t&\mapsto& b \end{array}$ & $\begin{array}[t]{lcl} a&\mapsto& s\\b&\mapsto& t\\c&\mapsto&(st)^{-1} \end{array}$\\ \hline \hline $\begin{array}[t]{lcl} \\ G_{22}\\ \end{array}$& 
$\begin{array}[t]{lcl} \\ \left\langle \begin{array}{l|cl}s,t,u&s^2=t^2=u^2=1&\\ &stust=tustu=ustus& \end{array} \right\rangle\\ \end{array}$& $\begin{array}[t]{lcl} \\\left\langle \begin{array}{l|cl} &b^3=\text{central}&\\a,b,c&a^2=1,c^5=b^{-6}&\\&abc=1&\end{array}\right\rangle\\\\ \end{array}$& $\begin{array}[t]{lcl} \\s&\mapsto & a\\t&\mapsto& (b^{-1}cb^2)^{-1}\\ u&\mapsto&(cb)^{-1} \end{array}$ & $\begin{array}[t]{lcl} \\a&\mapsto&s\\b&\mapsto& tustu\\c&\mapsto&(stustu)^{-1} \end{array}$\\ \hline \end{tabular} } \end{center} \end{table} \end{appendices} \renewcommand{\bibname}{Bibliography} \addcontentsline{toc}{chapter}{Bibliography} ~ \thispagestyle{empty} ~ \end{document}
\begin{document} \title[Alt-acyclic tournaments]{Alternation acyclic tournaments} \author[G\'abor Hetyei]{G\'abor Hetyei} \address{Department of Mathematics and Statistics, UNC-Charlotte, Charlotte NC 28223-0001. WWW: \tt http://www.math.uncc.edu/\~{}ghetyei/.} \subjclass[2010]{Primary 52C35; Secondary 05A10, 05A15, 11B68, 11B83} \keywords{Genocchi numbers, semi-acyclic tournament, Linial arrangement, Dellac configurations} \date{\today} \begin{abstract} We define a tournament to be alternation acyclic if it does not contain a cycle in which descents and ascents alternate. Using a result by Athanasiadis on hyperplane arrangements, we show that these tournaments are counted by the median Genocchi numbers. By establishing a bijection with objects defined by Dumont, we show that alternation acyclic tournaments in which at least one ascent begins at each vertex, except for the largest one, are counted by the Genocchi numbers of the first kind. Unexpected consequences of our results include a pair of ordinary generating function formulas for the Genocchi numbers of both kinds and a simple model for the normalized median Genocchi numbers. \end{abstract} \maketitle \section*{Introduction} Genocchi numbers of the first kind are closely related to the Bernoulli and Euler (tangent and secant) numbers, and the first classes of permutations counted by them, introduced by Dumont~\cite{Dumont}, are {\em alternating} in one way or another, just like the alternating permutations, counted by the tangent and secant numbers.
Whereas the tangent and secant numbers found a geometric interpretation through the work of Purtill~\cite{Purtill}, Stanley~\cite{Stanley-flag} and many people in their wake (using Andr\'e permutations, first studied by Foata, Sch\"utzenberger and Strehl in the 1970s~\cite{Foata}), there seems to be far less done in terms of finding geometric interpretations for the various types of Genocchi numbers, studied concurrently with the Genocchi numbers of the first kind. A notable exception is the work of Feigin~\cite{Feigin-deg}, identifying the Poincar\'e polynomials of the complete degenerate flag-varieties as $q$-generalizations of the normalized median Genocchi numbers introduced in \cite{Han-Zeng}. This paper proposes a geometric interpretation of the Genocchi numbers, in the world of hyperplane arrangements. We arrive at this interpretation by generalizing the definition of semiacyclic tournaments, used by Postnikov and Stanley~\cite{Postnikov-Stanley}, and independently, Shmulik Ravid, to bijectively label the regions created by the Linial arrangement. The subject of this paper is this wider class of tournaments (we call them {\em alternation acyclic}), which may be used to bijectively label the regions in a homogeneous variant of the Linial arrangement, which we call the {\em homogenized Linial arrangement}. The Linial arrangement studied in the literature is a section of our homogenized Linial arrangement. Using the technique of counting points in vector spaces over finite fields, developed by Athanasiadis~\cite{Athanasiadis-charpol}, we are able to prove that the number of regions created by our homogenized Linial arrangement, and thus the number of alternation acyclic tournaments, is a median Genocchi number (Theorem~\ref{thm:all-alta}). We have found no direct combinatorial argument for this result.
On the other hand, using this result it is possible to find a simple class of objects counted by the median Genocchi numbers, which allow a simple ${\mathbb Z}_2^n$-action, making transparent the known fact that the median Genocchi number $H_{2n+1}$ is an integer multiple of $2^n$. The set of ${\mathbb Z}_2^n$-orbits also has a simple combinatorial representation (Theorem~\ref{thm:nmGenocchi}). While this work was still a preprint, A.\ Bigeni found a highly nontrivial bijection~\cite{Bigeni-bij} between this model and the model of Feigin~\cite{Feigin-deg}. We also obtain an explicit combinatorial argument showing that {\em ascending} alternation acyclic tournaments (in which each numbered element defeats at least one element with a larger number, except for the largest numbered element) are counted by the Genocchi numbers of the first kind (Corollary~\ref{cor:asc}). The extension of this direct counting approach to all alternation acyclic tournaments yields recurrences leading to formulas for the ordinary generating functions for the Genocchi numbers of the first and second kinds. Our paper is structured as follows. After collecting basic facts about Genocchi numbers, hyperplane arrangements in general and the Linial arrangement in particular, we introduce alternation acyclic tournaments in Section~\ref{sec:alta} and prove their most important properties. In particular, we show that they induce a partial order which we call the right alternating walk order. In Section~\ref{sec:forest} we show how to encode each alternation acyclic tournament with a pair $(\pi,p)$, where the permutation $\pi$ is a linear extension of the alternating walk order, and the parent function $p$ assigns to each element a larger element or the infinity symbol as its parent, thus defining a partial order that is a forest.
Even though this representation is not unique, using it allows us to introduce a homogenized generalization of the Linial arrangement in Section~\ref{sec:alth} and show that the regions of this hyperplane arrangement are in bijection with our alternation acyclic tournaments. Section~\ref{sec:alth} also contains the proof of Theorem~\ref{thm:all-alta}, stating that the number of all alternation acyclic tournaments is a median Genocchi number. We take a closer look at the codes $(\pi,p)$ in Section~\ref{sec:lmax} and find a way to select unique codes (which we call largest maximal representations) for each alternation acyclic tournament. We also obtain a characterization of all valid codes. In Section~\ref{sec:refined} we use this characterization to obtain Theorem~\ref{thm:Dumontfine}, a combinatorial result refining our counting of alternation acyclic tournaments. The key ingredient to obtain this result is a descent-sensitive coding of permutations, using excedant functions, a variant of an idea already present in Dumont's work~\cite{Dumont}. This result allows us to count ascending alternation acyclic tournaments with the help of Dumont's theorem, and to introduce new combinatorial models for the median and normalized median Genocchi numbers in Section~\ref{sec:genmod}. The generating function formulas are derived in Section~\ref{sec:genfun}. This paper raises as many questions as it answers: some of these are mentioned in the concluding Section~\ref{sec:concl}. \section{Preliminaries} \subsection{Genocchi numbers} The {\em Genocchi numbers $G_n$ of the first kind} are given by the exponential generating function $$ \sum_{n=1}^{\infty} G_n \frac{t^n}{n!}=\frac{2t}{e^t+1}. $$ The only nonzero $G_n$ with $n$ odd is $G_{1}=1$. For even $n$, the first few values of $G_n$ are $G_{2}=-1$, $G_{4}=1$, $G_{6}=-3$, $G_{8}=17$, $G_{10}=-155$ and $G_{12}=2073$.
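These values can be checked mechanically: expanding $2t/(e^t+1)$ as a formal power series with exact rational arithmetic recovers $G_1,\ldots,G_{12}$. The following Python sketch (an illustration, not part of the paper) performs the series inversion.

```python
from fractions import Fraction
from math import factorial

N = 13  # enough coefficients for G_1, ..., G_12
# Taylor coefficients of e^t + 1 around t = 0
denom = [Fraction(2)] + [Fraction(1, factorial(k)) for k in range(1, N)]
# coefficients of 1/(e^t + 1), by formal power series inversion
inv = [Fraction(1, 2)]
for n in range(1, N):
    inv.append(-sum(denom[k] * inv[n - k] for k in range(1, n + 1)) / denom[0])
# G_n = n! * [t^n] 2t/(e^t + 1) = n! * 2 * [t^(n-1)] 1/(e^t + 1)
G = {n: 2 * inv[n - 1] * factorial(n) for n in range(1, N)}
print([int(G[n]) for n in range(1, 13)])
```

The output agrees with the values listed above, with the zero values at odd indices $n\geq 3$.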
Their study goes back at least to Seidel~\cite{Seidel}, who published a triangular table, called {\em Seidel's triangle}, allowing one to compute them recursively. Generalizations and variants of Seidel's triangle include~\cite{Ehrenborg,Zeng}. The first combinatorial models for them were given by Dumont~\cite{Dumont}. We will use the following result from his work~\cite[Corollaire du Th\'eor\`eme 3]{Dumont}, which characterizes the signless Genocchi numbers as numbers of {\em excedant} functions. A function, defined on a set of integers, is {\em excedant} if it satisfies $f(i)\geq i$ for all $i$. Note that excedant functions are also called {\em surjective pistols} by Dumont and Viennot in~\cite{Dumont-Viennot}. \begin{theorem}[Dumont] \label{thm:dumont} The unsigned Genocchi number $|G_{2n+2}|$ is the number of excedant functions $f:\{1,\ldots,2n\}\rightarrow \{1,\ldots,2n\}$ satisfying $f(\{1,\ldots,2n\})=\{2,4,\ldots,2n\}$. \end{theorem} The following wording is easily seen to be equivalent. \begin{corollary} \label{cor:dumont} The unsigned Genocchi number $|G_{2n+2}|$ is the number of ordered pairs $$((a_1,\ldots,a_{n}), (b_1,\ldots,b_{n}))\in {\mathbb Z}^{n}\times {\mathbb Z}^{n}$$ such that $1\leq a_i, b_i\leq i$ hold for all $i$ and the set $\{a_1,b_1,\ldots,a_{n},b_{n}\}$ equals $\{1,\ldots,n\}$. \end{corollary} Another model, ``alternating pistols'', may be found in~\cite{Viennot}. For more information on the Genocchi numbers of the first kind we refer the reader to their entries (A036968 and A001469) in~\cite{OEIS}. The {\em median Genocchi numbers $H_{2n-1}$}, also called {\em Genocchi numbers of the second kind}, already appear in Seidel's triangle. Their study evolved concurrently with the study of the Genocchi numbers of the first kind. For a detailed bibliography on them we refer the reader to the above mentioned sources, and their entry (A005439) in~\cite{OEIS}.
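Theorem~\ref{thm:dumont} can be confirmed by brute force for small $n$. The Python sketch below (an illustration, not part of the paper) enumerates all excedant functions on $\{1,\ldots,2n\}$ whose image is exactly $\{2,4,\ldots,2n\}$; the counts match the values $|G_4|=1$, $|G_6|=3$, $|G_8|=17$, $|G_{10}|=155$ listed above.

```python
from itertools import product

def dumont_count(n):
    """Number of excedant functions f: {1,...,2n} -> {1,...,2n} with
    f(i) >= i for all i and image exactly {2, 4, ..., 2n}."""
    evens = set(range(2, 2 * n + 1, 2))
    choices = [range(i, 2 * n + 1) for i in range(1, 2 * n + 1)]  # f(i) >= i
    return sum(1 for values in product(*choices) if set(values) == evens)

print([dumont_count(n) for n in range(1, 5)])
```

The search space has $(2n)!$ candidate functions, so this naive enumeration is only practical for small $n$, which is all that a sanity check requires.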
The first few values of the median Genocchi numbers are $H_1=1$, $H_3=2$, $H_5=8$, $H_7=56$, $H_9=608$ and $H_{11}=9440$. In this paper we will use the following recent result on them, due to Claesson, Kitaev, Ragnarsson and Tenner~\cite{Claesson-LS}: \begin{equation} \label{eq:HLS} H_{2n-1}=\sum_{k=1}^n (-1)^{n-k}\cdot (k!)^2\cdot PS_n^{(k)}. \end{equation} Here the numbers $PS_n^{(k)}$ are the {\em Legendre-Stirling numbers}, see the work of Andrews, Gawronski and Littlejohn~\cite{Andrews-LS}. The median Genocchi number $H_{2n-1}$ is known to be an integer multiple of $2^{n-1}$, see \cite{Barsky-Dumont}. The numbers $h_n=H_{2n+1}/2^n$ are the {\em normalized median Genocchi numbers}. They are listed as sequence A000366 in~\cite{OEIS}. The first few values are $h_1=1$, $h_2=2$, $h_3=7$, $h_4=38$, $h_5=295$, $h_6=3098$ and $h_7=42271$. Several combinatorial models of these numbers exist; perhaps the best known are the {\em Dellac configurations}~\cite{Dellac}. Other models may be found in the works of Bigeni~\cite{Bigeni}, Feigin~\cite{Feigin-deg,Feigin-med}, Han and Zeng~\cite{Han-Zeng}, and Kreweras and Barraud~\cite{Kreweras-Barraud}. We will present a new combinatorial model for the normalized median Genocchi numbers in Theorem~\ref{thm:nmGenocchi}. Recently Bigeni~\cite{Bigeni-bij} found a highly nontrivial bijection between our model and Feigin's model~\cite{Feigin-deg}. \subsection{Hyperplane arrangements} A hyperplane arrangement ${\mathcal A}$ is a finite collection of codimension one hyperplanes in a $d$-dimensional vector space over ${\mathbb R}$, which partition the space into regions. The number $r({\mathcal A})$ of these regions may be found using {\em Zaslavsky's formula}~\cite{Zaslavsky}, stating \begin{equation} \label{eq:Zaslavsky} r({\mathcal A})=(-1)^d \chi({\mathcal A},-1).
\end{equation} Here $\chi({\mathcal A},q)$ is the {\em characteristic polynomial} of the arrangement, which Zaslavsky expressed in terms of the M\"obius function in the intersection lattice of the hyperplanes. Instead of using Zaslavsky's original formulation, we will use the following result of Athanasiadis~\cite[Theorem 2.2]{Athanasiadis-charpol}. In the case when the hyperplanes of ${\mathcal A}$ are defined by equations with integer coefficients, we may consider the hyperplanes defined by the same equations in a vector space of the same dimension over a finite field ${\mathbb F}_q$ with $q$ elements, where $q$ is a prime number. If $q$ is sufficiently large, then the number $\chi({\mathcal A},q)$ is the number of points in the vector space that do not belong to any hyperplane in the arrangement: \begin{equation} \label{eq:Christos} \chi({\mathcal A},q)=\left|{\mathbb F}_q^d -\bigcup {\mathcal A}\right|. \end{equation} \subsection{Semiacyclic tournaments and the Linial arrangement} This paper is on a class of directed graphs properly containing the class of {\em semiacyclic tournaments}. A tournament $T$ on the set $\{1,\ldots,n\}$ is a directed graph with no loops or multiple edges, such that for each pair of vertices $\{i,j\}$ from $\{1,\ldots,n\}$, exactly one of the directed edges $i\rightarrow j$ and $j\rightarrow i$ belongs to the graph. We consider $\{1,\ldots,n\}$ with the natural order on positive integers. A directed edge $i\rightarrow j$ in a cycle is an {\em ascent} if $i<j$; otherwise it is a {\em descent}. An {\em ascending cycle} is a directed cycle in which the number of descents does not exceed the number of ascents. A tournament on $\{1,\ldots,n\}$ is {\em semiacyclic} if it contains no ascending cycle. Semiacyclic tournaments arose in the study of the {\em Linial arrangement ${\mathcal L}_{n-1}$}.
This arrangement is the set of hyperplanes $$ x_i-x_j=1 \quad \mbox{for $1\leq i<j\leq n$} $$ in the subspace $V_{n-1}\subset {\mathbb R}^{n}$ given by $$V_{n-1}=\{(x_1,\ldots,x_n)\in {\mathbb R}^{n}\: : \: x_1+\cdots +x_n=0\}.$$ To each region $R$ in ${\mathcal L}_{n-1}$ we may associate a tournament on $\{1,\ldots,n\}$ as follows: for each $i<j$ we set $i\rightarrow j$ if $x_i>x_j+1$ and we set $j\rightarrow i$ if $x_i<x_j+1$. Postnikov and Stanley~\cite[Proposition 8.5]{Postnikov-Stanley}, and independently Shmulik Ravid, observed that the correspondence above establishes a bijection between the regions of the Linial arrangement ${\mathcal L}_{n-1}$ and the set of semiacyclic tournaments on the set $\{1,\ldots,n\}$. \section{Alternation acyclic tournaments} \label{sec:alta} In this section we define alternation acyclic tournaments and show some of their most basic properties. We define ascents and descents essentially the same way as Postnikov and Stanley do in~\cite{Postnikov-Stanley}. The only minor difference is that we will use the notion of an ascent and a descent on all edges, not only on those contained in a directed cycle. \begin{definition} We call the arrow $i\rightarrow j$ an {\em ascent} if $i<j$ holds, otherwise we call it a {\em descent}. We will use the notation $i\xrightarrow{a} j$, respectively $i\xrightarrow{d} j$, to denote ascents and descents respectively. A directed cycle $C=(c_0,c_1,\ldots,c_{2k-1})$ is {\em alternating} if ascents and descents alternate along the cycle, that is, $c_{2j}\xrightarrow{d} c_{2j+1}$ and $c_{2j+1}\xrightarrow{a} c_{2j+2}$ hold for all $j$ (here we identify all indices modulo $2k$). A tournament is {\em alternation acyclic (or alt-acyclic)} if it contains no alternating cycle. \end{definition} Clearly an alternating cycle is also an ascending cycle, hence every semiacyclic tournament is also alternation acyclic.
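Granting Corollary~\ref{cor:4cycle} below, which reduces alternation acyclicity to the absence of alternating cycles of length $4$, the count promised by Theorem~\ref{thm:all-alta} can be verified exhaustively for small $n$. The following Python sketch (an illustration, not part of the proof) recovers the median Genocchi numbers $H_1,H_3,H_5,H_7,H_9$.

```python
from itertools import permutations, product

def count_alt_acyclic(n):
    """Count tournaments on {1,...,n} containing no alternating 4-cycle
    (by Corollary cor:4cycle, equivalently no alternating cycle at all)."""
    pairs = [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)]
    total = 0
    for bits in product((0, 1), repeat=len(pairs)):
        # each bit orients one pair of the tournament
        arrows = {(i, j) if b else (j, i) for (i, j), b in zip(pairs, bits)}
        # an alternating 4-cycle is c0 -d-> c1 -a-> c2 -d-> c3 -a-> c0,
        # i.e. c0 > c1 < c2 > c3 < c0 with all four arrows present
        if not any(c0 > c1 < c2 > c3 < c0
                   and {(c0, c1), (c1, c2), (c2, c3), (c3, c0)} <= arrows
                   for c0, c1, c2, c3 in permutations(range(1, n + 1), 4)):
            total += 1
    return total

print([count_alt_acyclic(n) for n in range(1, 6)])
```

The counts $1, 2, 8, 56, 608$ for $n=1,\ldots,5$ agree with $H_1, H_3, H_5, H_7, H_9$ as listed in the preliminaries; since there are $2^{\binom{n}{2}}$ tournaments, the exhaustive search is feasible only for very small $n$.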
In Section~\ref{sec:alth} we will state an analogue of~\cite[Proposition 8.5]{Postnikov-Stanley} for alt-acyclic tournaments, and we will explain how each Linial arrangement is a section of a hyperplane arrangement whose regions are labeled with alt-acyclic tournaments. A generalization of the notion of a directed cycle is the notion of a directed closed walk, in which revisiting vertices is allowed. The following important observation implies that, for tournaments, it makes no difference whether we exclude alternating closed walks or alternating cycles. \begin{proposition} \label{prop:4cycle} Suppose a tournament $T$ on $\{1,\ldots,n\}$ contains a closed alternating walk $(c_0,c_1,\ldots,c_{2k-1})$, that is, a closed walk, in which descents and ascents alternate. Then $T$ contains an alternating cycle of length $4$. \end{proposition} \begin{proof} Let $(c_0,c_1,\ldots,c_{2m-1})$ be an alternating closed walk of minimum length in $T$. It suffices to show that we must have $m=2$. Indeed, note that a closed alternating walk must have even length, and there is no closed walk $c_0\rightarrow c_1\rightarrow c_0$ in a tournament, so we must have $m\geq 2$. Note also that a closed walk of length $4$ must visit $4$ distinct vertices, as it cannot be the composition of a closed walk of length $3$ and one additional edge. Assume, by contradiction, that we have $m\geq 3$. As usual, we will identify the indices modulo $2m$. Furthermore, without loss of generality, we will assume that the arrows $c_{2i}\rightarrow c_{2i+1}$ are descents and the arrows $c_{2i-1}\rightarrow c_{2i}$ are ascents. It suffices to show that in such a closed alternating walk we must have $c_{2i}<c_{2i+4}$ for all $i$. Since we assumed $m\geq 3$, this will yield a contradiction of the form $c_0<c_4<\cdots <c_0$. We will distinguish two cases: {\bf\noindent Case 1:} $c_{2i}=c_{2i+3}$ holds. In this case the statement follows from the fact that $c_{2i+3}\rightarrow c_{2i+4}$ is an ascent.
{\bf\noindent Case 2:} $c_{2i}\neq c_{2i+3}$ holds. In this case it suffices to show that we must have $c_{2i}< c_{2i+3}$: the statement then follows by transitivity from $c_{2i+3}\xrightarrow{a} c_{2i+4}$. Assume, by contradiction, that $c_{2i}>c_{2i+3}$ holds. Then either we have $c_{2i+3}\xrightarrow{a} c_{2i}$ and $(c_{2i}, c_{2i+1}, c_{2i+2}, c_{2i+3})$ is an alternating $4$-cycle, or we have $c_{2i}\xrightarrow{d} c_{2i+3}$ and we may use this descent to replace the subwalk $(c_{2i}\xrightarrow{d} c_{2i+1} \xrightarrow{a} c_{2i+2}\xrightarrow{d} c_{2i+3})$ in the closed walk, thus obtaining a shorter alternating closed walk. In either case, we reach a contradiction with the minimality of $m$. \end{proof} The following consequence of Proposition~\ref{prop:4cycle} is analogous to a result by Postnikov and Stanley~\cite[Theorem 8.6]{Postnikov-Stanley} which characterizes semiacyclic tournaments as tournaments containing no ascending cycle of length at most $4$. \begin{corollary} \label{cor:4cycle} A tournament $T$ on $\{1,\ldots,n\}$ is alternation acyclic if and only if it contains no alternating cycle of length $4$. \end{corollary} Another way to characterize alternation acyclic tournaments is to describe them in terms of the right-alternating walk relation. \begin{definition} In a tournament $T$ on $\{1,\ldots,n\}$, there is a {\em right-alternating walk} from $u$ to $v$ if $u=v$ or there is a directed walk $u=w_0\xrightarrow{d} w_1 \xrightarrow{a} w_2 \xrightarrow{d} \cdots \xrightarrow{d} w_{2i-1}\xrightarrow{a} w_{2i}=v$ from $u$ to $v$ in which descents and ascents alternate, the first edge being a descent and the last edge being an ascent. We will use the notation $u\leq_{ra} v$ when there is a right-alternating walk from $u$ to $v$, and we will refer to $\leq_{ra}$ as the {\em right-alternating walk order induced by $T$}. We will also use the shorthand notation $u<_{ra} v$ when $u\leq_{ra} v$ and $u\neq v$ hold.
\end{definition} \begin{proposition} \label{prop:rawalk} A tournament $T$ on $\{1,\ldots,n\}$ is alternation acyclic if and only if the induced right-alternating walk order is a partial order. \end{proposition} \begin{proof} The relation $\leq_{ra}$ is by definition reflexive and it is obviously transitive, as the concatenation of right-alternating walks yields a right-alternating walk. Hence the relation $\leq_{ra}$ is a partial order if and only if it is antisymmetric. This property is easily seen to be equivalent to the non-existence of a nontrivial closed alternating walk, which, by Proposition~\ref{prop:4cycle}, is equivalent to the non-existence of an alternating $4$-cycle. As noted in Corollary~\ref{cor:4cycle}, the non-existence of an alternating $4$-cycle is equivalent to the tournament being alt-acyclic. \end{proof} \begin{remark} {\em There is an apparent asymmetry in the definition of the right-alternating walk order. One could analogously define the left-alternating walk order $\leq_{la}$ using alternating walks that begin with an ascent and end with a descent. It is similarly easy to see the analogue of Proposition~\ref{prop:rawalk} stating that a tournament is alt-acyclic if and only if $\leq_{la}$ is a partial order. It should be noted that the class of alternation acyclic tournaments is closed under reversing all directed edges and it is also closed under renumbering the vertices such that each $i\in \{1,\ldots,n\}$ is replaced by $n+1-i$. Under each of these operations, the role of the partial order $\leq_{ra}$ is taken over by the partial order $\leq_{la}$ and vice versa.} \end{remark} \section{Representing alt-acyclic tournaments as biordered forests} \label{sec:forest} A partially ordered set is a {\em forest} if every element is covered by at most one element. A formula counting linear extensions of a forest is due to Knuth~\cite{Knuth}.
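In the convention used here (every element covered by at most one element), Knuth's formula says that a forest on $n$ elements has exactly $n!/\prod_v s(v)$ linear extensions, where $s(v)$ is the number of elements weakly below $v$. As a minimal illustration (the helper names and the example forest below are ours, not from the text), the following Python sketch cross-checks the product formula against a brute-force enumeration:

```python
from itertools import permutations
from math import factorial

def subtree_sizes(parent):
    """parent[v] is the parent of v, or None for a root.
    Returns s[v] = number of elements weakly below v."""
    n = len(parent)
    s = [1] * n
    # push each element's count up its chain of ancestors
    for v in range(n):
        u = parent[v]
        while u is not None:
            s[u] += 1
            u = parent[u]
    return s

def extensions_formula(parent):
    """Knuth's hook-length-style formula: n! / prod of subtree sizes."""
    prod = 1
    for sv in subtree_sizes(parent):
        prod *= sv
    return factorial(len(parent)) // prod

def extensions_bruteforce(parent):
    """Count orderings in which every element precedes its parent."""
    n = len(parent)
    count = 0
    for perm in permutations(range(n)):
        pos = {v: i for i, v in enumerate(perm)}
        if all(p is None or pos[v] < pos[p]
               for v, p in enumerate(parent)):
            count += 1
    return count

# Example forest (0-indexed): covers 1 < 2 < 5 and 4 < 5; 0 and 3 isolated.
parent = [None, 2, 5, None, 5, None]
assert extensions_formula(parent) == extensions_bruteforce(parent) == 90
```

The brute-force count is exponential and only serves to validate the product formula on small inputs.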
For a bibliography on generalizations and recent results we refer the reader to~\cite{Hivert-Reiner}. Note that Hivert and Reiner use the dual definition of a forest, in which every element covers at most one element. We follow the definition of Bj\"orner and Wachs~\cite{Bjorner-Wachs}. In this section we will show that every alternation acyclic tournament may be represented as a tournament induced by a biordered forest, where one of the orders is a linear extension, and the other one is an arbitrary permutation. We will think of the linear extension as a numbering of the elements from $1$ to $n$, and we will encode the second numbering by a word $\pi=\pi(1)\pi(2)\cdots \pi(n)$, where the label of $j\in\{1,\ldots,n\}$ is the position $\pi^{-1}(j)$ of the number $j$ in $\pi$. If an element $i$ is covered by an element $j$ in a forest, we will write $j=p(i)$ and say that $j$ is the {\em parent} of $i$. We will also use the notation $p(i)=\infty$ when $i$ has no parent, and we will say that $i$ is a {\em root}. In fact, the Hasse diagram will be a union of trees, and the roots will be exactly the maximum elements of these trees. Marking the root of each tree defines the partial order. We fix a linear extension of a forest by numbering its elements in increasing order from $1$ to $n$, where $n$ is the number of the elements. The parent function $p: \{1,2,\ldots,n\}\rightarrow \{2,\ldots,n\}\cup\{\infty\}$ must then satisfy $i<p(i)$ for all $i\in \{1,2,\ldots,n\}$. The converse is also true: \begin{proposition} \label{prop:ext1} Given a forest on the set $\{1,2,\ldots,n\}$, defined by the parent function $p:\{1,2,\ldots,n\}\rightarrow \{2,\ldots,n\}\cup\{\infty\}$, the order $1<2<\cdots <n$ is a linear extension of the forest if and only if the parent function satisfies $i<p(i)$ for all $i\in \{1,2,\ldots,n\}$. \end{proposition} \begin{proof} If the numbering represents a linear extension, then the condition $i<p(i)$ is necessary.
Conversely, assume the function $p$ satisfies the stated property. If $i$ is less than $j$ in the order of the forest, then for the length $\ell$ of the shortest path from $i$ to $j$ in the Hasse diagram we have $j=p^{\ell}(i)$, implying $i<p(i)<p(p(i))<\cdots <p^{\ell}(i)=j$. \end{proof} From now on we will identify each element of the forest with its label in a fixed linear extension, and we encode the forest with its parent function $p:\{1,2,\ldots,n\}\rightarrow \{2,\ldots,n\}\cup\{\infty\}$. Next we give a second labeling of the vertices in terms of the {\em inverse} of a permutation $\pi$ of the set $\{1,\ldots,n\}$. \begin{definition} \label{def:pospi} Given a permutation $\pi: \{1,2,\ldots,n\}\rightarrow \{1,2,\ldots,n\}$, we will say that the {\em labeling induced by the positions in $\pi$} is the labeling that associates to each $i\in \{1,2,\ldots, n\}$ the position $\pi^{-1}(i)$ of $i$ in $\pi$. \end{definition} \begin{definition} \label{def:biordered} We will refer to an ordered triplet consisting of a forest, one of its linear extensions, and an arbitrary numbering of its elements as a {\em biordered forest}. We will encode the forest and its linear extension with the corresponding parent function $p:\{1,2,\ldots,n\}\rightarrow \{2,\ldots,n\}\cup\{\infty\}$, satisfying $p(i)>i$ for all $i$, and the second numbering by the permutation $\pi$ whose positions induce it. We will refer to the pair $(\pi,p)$ as the {\em code} of the biordered forest. \end{definition} The following statement is obvious. \begin{proposition} \label{prop:code} The correspondence described in Definition~\ref{def:biordered} establishes a bijection between ordered triplets formed by a forest on $n$ elements, a linear extension of this forest, and an arbitrary numbering of its elements on the one hand, and ordered pairs $(\pi,p)$ formed by a function $p:\{1,2,\ldots,n\}\rightarrow \{2,\ldots,n\}\cup\{\infty\}$ satisfying $p(i)>i$ for all $i$ and by a permutation $\pi$ of $\{1,2,\ldots,n\}$ on the other.
\end{proposition} \begin{remark} {\em We could also use the language of $(P,\omega)$-partitions; see Stanley's book~\cite{Stanley-EC1/2}. That theory begins with considering a partially ordered set (in our case: a forest) and a bijection $\omega: P\rightarrow \{1,2,\ldots,n\}$. Stanley calls the labeling $\omega$ {\em natural} when $\omega$ is a linear extension of $P$. In such terms, we consider forests with a pair of labelings, one of them natural, the other one arbitrary.} \end{remark} Next we define a tournament induced by a biordered forest. \begin{definition} Let $(\pi,p)$ be the code of a biordered forest on $n$ elements. We define {\em the tournament induced by the biordered forest}, denoted $T=T(\pi,p)$, to be the tournament whose vertex set is $\{1,2,\ldots, n\}$ and whose directed edges are defined as follows. For all $u<v$ we set $u\xrightarrow{a} v$ if $p(u)\neq \infty$ and $\pi^{-1}(v)\geq \pi^{-1}(p(u))$ hold, otherwise we set $v\xrightarrow{d} u$. \end{definition} We may visualize the pair $(\pi,p)$ as an arc-diagram, shown in Figure~\ref{fig:arcs}. \begin{figure} \caption{Arc representation of a pair $(\pi,p)$} \label{fig:arcs} \end{figure} We line up the vertices of the tournament left to right, in the order $\pi(1)$, $\pi(2)$, \ldots, $\pi(n)$. The permutation $\pi$ in Figure~\ref{fig:arcs} is $531246$. Next, for each $i$ such that $p(i)\neq \infty$, we draw a directed arc from $i$ to $p(i)$. For example, in the picture there is an arc from $\pi(4)=2$ in position $4$ to $\pi(2)=3$ in position $2$, indicating $p(2)=3$. The number $3$ is the leftmost number $v$ larger than $2$ for which $2\xrightarrow{a} v$ holds. All numbers larger than $2$ that are to the left of $3$ defeat $2$, and $2$ defeats all numbers larger than $2$ to the right of $3$. Hence we have $5\xrightarrow{d} 2$, $2\xrightarrow{a} 3$, $2\xrightarrow{a} 4$ and $2\xrightarrow{a} 6$. Similarly we have $p(3)=6$ and so the only ascent starting at $3$ is $3\xrightarrow{a} 6$.
The parent of the numbers $\pi(3)=1$, $\pi(5)=4$ and $\pi(6)=6$ is $\infty$: no arc begins at these vertices, and no ascent starts at them. The parent function $p$ defines a forest with three connected components; the roots of these three trees are $1$, $4$ and $6$, respectively. The next two statements explain how biordered forests are related to alt-acyclic tournaments. \begin{proposition} \label{prop:arcs-alta} Every biordered forest $(\pi,p)$ induces an alternation acyclic tournament $T$. Furthermore, the permutation $\pi$ is a linear extension of the right alternating walk order induced by $T$. \end{proposition} \begin{proof} Let $(\pi,p)$ be the code of the biordered forest on $n$ elements, and let $T=T(\pi,p)$ be the tournament induced by it. First we show that $T$ is alt-acyclic. By Corollary~\ref{cor:4cycle} it suffices to show that there is no alternating cycle of length $4$ in $T$. Assume, by contradiction, that $u_1\xrightarrow{a} u_2\xrightarrow{d} u_3\xrightarrow{a} u_4\xrightarrow{d} u_1$ holds for some $\{u_1,u_2,u_3,u_4\}\subseteq \{1,2,\ldots,n\}$. Since we have $u_1\xrightarrow{a} u_2$ and $u_4\xrightarrow{d} u_1$ we obtain that $u_4$ must appear to the left of $p(u_1)$ whereas $u_2$ can not appear to the left of $p(u_1)$ in $\pi$. In other words, we must have $$\pi^{-1}(u_4)<\pi^{-1}(p(u_1))\leq \pi^{-1}(u_2).$$ Similarly $u_2\xrightarrow{d} u_3$ and $u_3\xrightarrow{a} u_4$ imply that we must have $$\pi^{-1}(u_2)<\pi^{-1}(p(u_3))\leq \pi^{-1}(u_4).$$ This is a contradiction, as $u_2$ and $u_4$ can not mutually precede each other in $\pi$. To show the second part of the statement it suffices to prove that $\pi^{-1}(u)<\pi^{-1}(v)$ holds whenever $v$ covers $u$ in the right alternating walk order. For an arbitrary pair $(u,v)$ satisfying $u\leq_{ra} v$, the statement $\pi^{-1}(u)\leq \pi^{-1}(v)$ then follows by considering a saturated chain from $u$ to $v$.
If $v$ covers $u$ in the right alternating walk order, then there is a $w$ such that $u\xrightarrow{d} w\xrightarrow{a} v$ holds. By the definition of $T(\pi,p)$, $u$ must be to the left of $p(w)$, whereas $v$ can not be to the left of $p(w)$ in $\pi$. In other words, $\pi^{-1}(u)<\pi^{-1}(p(w))\leq \pi^{-1}(v)$ must hold, and $u$ is to the left of $v$ in $\pi$. \end{proof} Our next result is the converse of Proposition~\ref{prop:arcs-alta}. \begin{proposition} \label{prop:alta-arc} Let $T$ be an alternation acyclic tournament on $\{1,2,\ldots, n\}$, and let $\pi$ be any linear extension of the right alternating walk order. Then there is a unique parent function $p:\{1,2,\ldots,n\}\rightarrow \{2,\ldots,n\}\cup\{\infty\}$ such that the tournament induced by $(\pi, p)$ is $T$. \end{proposition} \begin{proof} Clearly, for each $u\in \{1,\ldots,n\}$, the only way to define $p(u)$ is to set $p(u)$ equal to the leftmost $v$ in $\pi$ such that $u\xrightarrow{a} v$ holds, if such a $v$ exists, and to set $p(u)=\infty$ when no ascent begins at $u$. We only need to verify that the tournament $T(\pi,p)$ induced by the biordered forest with code $(\pi,p)$ is the same as the tournament $T$ we started with. Consider a pair $(u,v)$ of vertices satisfying $u<v$. If $p(u)=\infty$ or $v$ is to the left of $p(u)\neq \infty$ then, by the definition of $p$, we must have $v\xrightarrow{d} u$ in $T$ and also in $T(\pi,p)$. Also by definition, $u\xrightarrow{a} v$ holds in both tournaments if $v=p(u)$. We are left to consider the case when $v$ is to the right of $p(u)$ in $\pi$. In $T(\pi, p)$ we must have $u\xrightarrow{a} v$; the only remaining question is whether we could have $v\xrightarrow{d} u$ in the tournament $T$ for such a vertex $v$. The answer is no, since $v\xrightarrow{d} u\xrightarrow{a} p(u)$ would imply $v<_{ra} p(u)$ in contradiction with $\pi$ being a linear extension of the partial order $\leq_{ra}$.
\end{proof} \begin{remark} {\em For any alt-acyclic tournament $T$, the element $1$ is always incomparable to the other elements of $\{1,\ldots,n\}$ in the right alternating walk order, hence the partial order $\leq_{ra}$ always has at least two linear extensions. This makes it difficult to use biordered forests directly to count alt-acyclic tournaments. We will see two different ways to overcome this difficulty in Sections~\ref{sec:alth} and \ref{sec:lmax}.} \end{remark} \section{Counting alternation acyclic tournaments using hyperplane arrangements} \label{sec:alth} In this section we introduce a hyperplane arrangement whose regions are in bijection with the alternation acyclic tournaments. Using a result of Athanasiadis~\cite{Athanasiadis-charpol}, we will be able to count them. \begin{definition} Consider the vector space ${\mathbb R}^{2n-1}$ with coordinate functions $x_1$,\ldots,$x_n$, $y_1$,\ldots,$y_{n-1}$. We define the {\em homogenized Linial arrangement ${\mathcal H}_{2n-2}$} as the set of hyperplanes \begin{equation} \label{eq:hL} x_i-x_j=y_i \quad 1\leq i<j\leq n \end{equation} in the subspace $U_{2n-2}\subset {\mathbb R}^{2n-1}$ given by $$U_{2n-2}=\{(x_1,\ldots,x_n,y_1,\ldots,y_{n-1})\in {\mathbb R}^{2n-1}\: : \: x_1+\cdots +x_n=0\}.$$ \end{definition} \begin{remark} \label{rem:adjust} {\em Restricting our arrangement in ${\mathbb R}^{2n-1}$ to the set $U_{2n-2}$ does not change the number of regions, because of the following observation: given a point $$(x_1,\ldots,x_n,y_1,\ldots,y_{n-1})\in {\mathbb R}^{2n-1},$$ all points of the line $$ \{(x_1+t,\ldots,x_n+t,y_1,\ldots,y_{n-1})\::\: t\in {\mathbb R}\} $$ belong to the same region of ${\mathcal H}_{2n-2}$, considered as a hyperplane arrangement in ${\mathbb R}^{2n-1}$, since $(x_i+t)-(x_j+t)=x_i-x_j$ holds for all $1\leq i<j\leq n$. There is exactly one choice of $t$ on this line for which the sum of the $x$-coordinates is zero.
Intersecting our picture with $U_{2n-2}$ allows us to get rid of an inessential dimension. It also makes our definition more compatible with the usual definition of the Linial arrangement, due to the following observation. Intersecting ${\mathcal H}_{2n-2}$ with all hyperplanes of the form $y_j=1$ yields a hyperplane arrangement, which, after discarding the redundant $y$-coordinates, is exactly the Linial arrangement ${\mathcal L}_{n-1}$.} \end{remark} Next we associate to each region $R$ of the homogenized Linial arrangement ${\mathcal H}_{2n-2}$ a tournament $T(R)$ on $\{1,\ldots,n\}$ as follows. For each $i<j$, set $i\rightarrow j$ if all points of the region satisfy $x_i-y_i>x_j$, and set $j\rightarrow i$ if $x_i-y_i<x_j$ holds for all points in the region. The correspondence $R\mapsto T(R)$ is clearly well-defined and injective. \begin{theorem} \label{thm:regions} The correspondence $R\mapsto T(R)$ establishes a bijection between all regions of the homogenized Linial arrangement ${\mathcal H}_{2n-2}$ and all alternation acyclic tournaments on the set $\{1,\ldots,n\}$. \end{theorem} \begin{proof} First we show that every tournament associated to a region is alt-acyclic. Assume, by contradiction, that there is a region $R$ such that the tournament $T(R)$ is not alt-acyclic. By Corollary~\ref{cor:4cycle} this implies the existence of an alternating $4$-cycle $i_1\xrightarrow{d} i_2\xrightarrow{a} i_3\xrightarrow{d} i_4\xrightarrow{a} i_1$. By the definition of $T(R)$, all points of the region $R$ satisfy $$ x_{i_1}>x_{i_2}-y_{i_2}>x_{i_3}\quad\mbox{because of $i_1\xrightarrow{d} i_2\xrightarrow{a} i_3$, and } $$ $$ x_{i_3}>x_{i_4}-y_{i_4}>x_{i_1}\quad\mbox{because of $i_3\xrightarrow{d} i_4\xrightarrow{a} i_1$}. $$ We obtain the contradiction $x_{i_1}>x_{i_1}$. Next we show that every alt-acyclic tournament $T$ on $\{1,\ldots,n\}$ is of the form $T(R)$ for some region $R$. Consider an alt-acyclic tournament $T$.
By Proposition~\ref{prop:alta-arc}, the tournament $T$ is induced by a biordered forest with code $(\pi, p)$. Let us set $$x_i=\frac{n+1}{2}-\pi^{-1}(i)\quad\mbox{for $i=1,2,\ldots,n$}$$ and let us set $$y_i:=\pi^{-1}(p(i))- \pi^{-1}(i)-1/2 \quad\mbox{for $i=1,\ldots,n-1$,}$$ with the convention $\pi^{-1}(\infty)=n+1$. Observe first that we have $$ \sum_{i=1}^n x_i=\sum_{i=1}^n \left(\frac{n+1}{2}-\pi^{-1}(i)\right)= \frac{n(n+1)}{2}-\sum_{i=1}^{n} i=0, $$ hence the point $(x_1,\ldots,x_n,y_1,\ldots,y_{n-1})$ belongs to the vector space $U_{2n-2}$. Observe next that for each $i<j$ the difference $x_i-x_j=\pi^{-1}(j)-\pi^{-1}(i)$ is the difference between the positions of $j$ and $i$. This integer is strictly more than $y_i=\pi^{-1}(p(i))-\pi^{-1}(i)-1/2$ exactly when $j=p(i)$ or $j$ is to the right of $p(i)$ in $\pi$. Therefore, for the region $R$ containing this point, $T(R)$ is exactly the tournament induced by the biordered forest whose code is $(\pi,p)$, that is, $T(R)=T$. \end{proof} Now we are ready to prove one of the main results of our paper. \begin{theorem} \label{thm:all-alta} The number of alternation acyclic tournaments on the set $\{1,\ldots,n\}$ is the median Genocchi number $H_{2n-1}$. \end{theorem} \begin{proof} By Theorem~\ref{thm:regions} the statement is equivalent to showing that the number of regions in the homogenized Linial arrangement ${\mathcal H}_{2n-2}$ is $H_{2n-1}$. We will find this number using Zaslavsky's formula (\ref{eq:Zaslavsky}), where we compute the characteristic polynomial using Athanasiadis' result (\ref{eq:Christos}). To simplify our calculations, instead of applying (\ref{eq:Christos}) to the hyperplane arrangement ${\mathcal H}_{2n-2}$ directly, we will count the regions of the hyperplane arrangement $\widehat{{\mathcal H}}_{2n}$, given by the equations (\ref{eq:hL}) in ${\mathbb R}^{2n}$ with coordinate functions $x_1$,\ldots,$x_n$, $y_1$,\ldots,$y_{n}$.
In other words, rather than removing one inessential dimension by restricting to the subspace $U_{2n-2}$ (keep in mind Remark~\ref{rem:adjust} pointing out that this restriction does not change the number of regions), we add an additional inessential dimension $y_n$ that is not involved in the equations defining the hyperplanes. The proof of Theorem~\ref{thm:regions} may be applied to show that the number of regions is the same as the number of alt-acyclic tournaments on $\{1,\ldots,n\}$ (with the remark that the value of $y_n$ may be chosen in an arbitrary fashion). Let us now consider the hyperplane arrangement $\widehat{{\mathcal H}}_{2n}$ as a subset of ${\mathbb F}_q^{2n}$ for some very large prime $q$. Let us introduce the shorthand notation $\chi(n,k,q)$ for the number of elements in the set $$ {\mathcal S}_{n,k}(q)=\left\{(x_1,\ldots,x_n, y_1,\ldots, y_{n})\in {\mathbb F}_q^{2n}-\bigcup \widehat{\mathcal H}_{2n}\::\: |\{x_1-y_1,\ldots, x_{n}-y_{n}\}|=k \right\}. $$ In other words, ${\mathcal S}_{n,k}(q)$ is the set of those points in ${\mathbb F}_q^{2n}-\bigcup \widehat{\mathcal H}_{2n}$, for which the set \break $\{x_1-y_1,\ldots, x_{n}-y_{n}\}$ has $k$ elements. By (\ref{eq:Christos}), we must have \begin{equation} \label{eq:Hsum} \chi(\widehat{{\mathcal H}}_{2n},q)=\sum_{k=1}^n \chi(n,k,q). \end{equation} We claim that the numbers $\chi(n,k,q)$ satisfy the recurrence \begin{equation} \label{eq:chirec} \chi(n,k,q)=(q-k)\cdot k \cdot \chi (n-1,k,q)+(q-k+1)^2\cdot \chi(n-1,k-1,q) \quad\mbox{for $n\geq 2$.} \end{equation} Indeed, let us first select the values of $x_1,\ldots,x_{n-1}$ and $y_1,\ldots, y_{n-1}$ of a vector belonging to ${\mathcal S}_{n,k}(q)$. Since the set $\{x_1-y_1,\ldots, x_{n}-y_{n}\}$ has either the same size as $\{x_1-y_1,\ldots, x_{n-1}-y_{n-1}\}$ or it has just one more element, the set $\{x_1-y_1,\ldots, x_{n-1}-y_{n-1}\}$ must have $k$ or $k-1$ elements.
Furthermore the coordinates $x_1,\ldots,x_{n-1}$ and $y_1,\ldots, y_{n-1}$ do not satisfy those equations from (\ref{eq:hL}) which do not involve $x_n$ or $y_n$. Depending on the choice between $k$ and $k-1$, this selection may be performed in $\chi (n-1,k,q)$ or $\chi (n-1,k-1,q)$ ways, respectively. In the case when $\{x_1-y_1,\ldots,x_{n-1}-y_{n-1}\}$ has $k$ elements, there are $q-k$ ways to select the value of $x_n$ from the complement of the set $\{x_1-y_1,\ldots,x_{n-1}-y_{n-1}\}$. Once this selection is made, we may select $y_n$ in $k$ ways, making sure that $x_n-y_n$ belongs to the set $\{x_1-y_1,\ldots,x_{n-1}-y_{n-1}\}$. Similarly, in the case when $\{x_1-y_1,\ldots,x_{n-1}-y_{n-1}\}$ has $k-1$ elements, there are $q-k+1$ ways to select the value of $x_n$, and also $q-k+1$ ways to select the value of $y_n$ afterward. Both $x_n$ and $x_n-y_n$ must belong to the complement of $\{x_1-y_1,\ldots,x_{n-1}-y_{n-1}\}$ in this case. Using the initial condition \begin{equation} \label{eq:chiinit} \chi(1,k,q)=\delta_{1,k} q^2 \end{equation} (where $\delta_{1,k}$ is the Kronecker delta function), the polynomials $\chi(n,k,q)$ may be computed. Since, for each $n$, the ambient space is $2n$ dimensional, the number of regions of $\widehat{\mathcal H}_{2n}$ is equal to $$ (-1)^{2n} \chi\left(\widehat{\mathcal H}_{2n},-1\right)= \chi\left(\widehat{\mathcal H}_{2n},-1\right)= \sum_{k=1}^n \chi(n,k,-1). $$ Introducing $r(n,k):=\chi(n,k,-1)$, the initial condition (\ref{eq:chiinit}) yields $r(1,1)=1$ and the recurrence (\ref{eq:chirec}) may be rewritten as \begin{equation} \label{eq:rrec} r(n,k)=-(k+1)\cdot k\cdot r(n-1,k) + k^2\cdot r(n-1,k-1). \end{equation} Introducing $$ PS_n^{(k)}=\frac{(-1)^{n-k}\cdot r(n,k)}{(k!)^2}, $$ the initial condition $r(1,1)=1$ may be transcribed as $PS_1^{(1)}=1$, and the recurrence (\ref{eq:rrec}) may be rewritten as \begin{equation} \label{eq:LSrec} PS_n^{(k)}=k(k+1)\cdot PS_{n-1}^{(k)}+PS_{n-1}^{(k-1)}. 
\end{equation} Equation (\ref{eq:LSrec}) is a recurrence relation satisfied by the Legendre-Stirling numbers, as shown by Andrews, Gawronski and Littlejohn~\cite[Theorem 5.3]{Andrews-LS}, and the initial conditions also match. We obtain that $r(n,k)=(-1)^{n-k}\cdot (k!)^2\cdot PS_n^{(k)}$, and that $$ r\left({\mathcal H}_{2n-2}\right) =r\left(\widehat{{\mathcal H}}_{2n}\right) =\sum_{k=1}^n (-1)^{n-k}\cdot (k!)^2\cdot PS_n^{(k)}. $$ It was shown in~\cite{Claesson-LS} (see Equation~(\ref{eq:HLS})) that the above sum equals the median Genocchi number $H_{2n-1}$. \end{proof} \section{Direct counting using the largest maximal order} \label{sec:lmax} By Proposition~\ref{prop:alta-arc}, given an alternation acyclic tournament $T$, after fixing a linear extension $\pi$ of the partial order $\leq_{ra}$, there is a unique parent function $p$ such that the biordered forest encoded by $(\pi,p)$ induces $T$. In this section we fix one such linear extension for each alternation acyclic tournament and describe how to recognize the valid pairs $(\pi,p)$. This will allow us to directly count alternation acyclic tournaments of various kinds. \begin{definition} For an alternation acyclic tournament $T$ on $\{1,\ldots,n\}$, we define the {\em largest maximal order} to be the permutation $\lambda=\lambda(1)\cdots\lambda(n)$, given recursively as follows: \begin{enumerate} \item $\lambda(n)$ is the largest maximal element of $\{1,\ldots,n\}$ ordered by $\leq_{ra}$. \item Once $\lambda(j)$ has been determined for all $j>k$, $\lambda(k)$ is the largest maximal element in the poset obtained by restricting the partial order $\leq_{ra}$ to $\{1,\ldots,n\}- \{\lambda(k+1),\ldots,\lambda(n)\}$. \end{enumerate} We call the unique pair $(\lambda,p)$ inducing $T$ the {\em largest maximal representation of $T$}.
\end{definition} Note that the largest maximal order is necessarily a linear extension of the partial order $\leq_{ra}$, and that each $\lambda(k)$ is the largest maximal element in the poset obtained by restricting the partial order $\leq_{ra}$ to the set $\{\lambda(1),\ldots,\lambda(k)\}$. For example, the largest maximal order for the tournament induced by the pair $(\pi,p)$ shown in Figure~\ref{fig:arcs} is $125346$, and the largest maximal representation is shown in Figure~\ref{fig:arcs-lmax}. It is easy to verify that this diagram induces the same tournament; the fact that it is the largest maximal representation will be easy to verify using Proposition~\ref{prop:lmaxchar} below. \begin{figure} \caption{Largest maximal representation of the tournament induced in Figure~\ref{fig:arcs}} \label{fig:arcs-lmax} \end{figure} \begin{remark} {\em Consider the largest maximal representation shown in Figure~\ref{fig:arcs-lmax-rem}. Here $\pi(4)=4$ is the largest maximal element of the set $\{1,2,5,4\}$ because we have $5\xrightarrow{d} 3\xrightarrow{a} 4$ and so $5<_{ra} 4$ holds. We only discarded the vertex $3$ from the set of elements to be considered as a maximal element, but we can not correctly interpret the restriction of the partial order to the subset $\{1,2,5,4\}$ without considering the relation of $3$ to $4$ and $5$ in the entire tournament.} \end{remark} \begin{figure} \caption{Largest maximal representation illustrating the importance of ``discarded'' elements} \label{fig:arcs-lmax-rem} \end{figure} The next statement completely characterizes the largest maximal representations. Recall that $i\in \{1,\ldots, n-1\}$ is a {\em descent} of the permutation $\pi$ of $\{1,\ldots, n\}$ if $\pi(i)>\pi(i+1)$ holds.
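As a concrete illustration of these definitions, the following Python sketch (the function names are ours) builds the tournament induced by a code $(\pi,p)$, computes the relation $<_{ra}$ as the transitive closure of the pattern $u\xrightarrow{d} w\xrightarrow{a} v$, and extracts the largest maximal order. On the running example of Figure~\ref{fig:arcs} (where $\pi=531246$ with $p(2)=3$, $p(3)=6$, $p(5)=6$ and roots $1$, $4$, $6$) it recovers the order $125346$:

```python
from itertools import product

INF = None  # stands for the parent value "infinity"

def induced_tournament(pi, p):
    """For u < v: u -> v (an ascent) iff p(u) is finite and the position
    of v in pi is >= the position of p(u); otherwise v -> u (a descent)."""
    pos = {v: i for i, v in enumerate(pi)}  # pi^{-1}
    n = len(pi)
    beats = {}
    for u in range(1, n + 1):
        for v in range(u + 1, n + 1):
            ascent = p[u] is not INF and pos[v] >= pos[p[u]]
            beats[(u, v)], beats[(v, u)] = ascent, not ascent
    return beats

def ra_below(beats, n):
    """below[v] = set of u with u <_{ra} v: transitive closure of the
    two-step pattern u ->d w ->a v."""
    below = {v: set() for v in range(1, n + 1)}
    changed = True
    while changed:
        changed = False
        for u, w, v in product(range(1, n + 1), repeat=3):
            if len({u, w, v}) == 3 and u > w and beats[(u, w)] \
                    and w < v and beats[(w, v)]:
                new = {u} | below[u]
                if not new <= below[v]:
                    below[v] |= new
                    changed = True
    return below

def largest_maximal_order(pi, p):
    """Peel off the largest maximal element repeatedly, as in the
    definition of the largest maximal order."""
    n = len(pi)
    below = ra_below(induced_tournament(pi, p), n)
    remaining, order = set(range(1, n + 1)), []
    while remaining:
        maximal = [v for v in remaining
                   if all(v not in below[w] for w in remaining)]
        order.append(max(maximal))
        remaining.remove(order[-1])
    return list(reversed(order))

pi = [5, 3, 1, 2, 4, 6]
p = {1: INF, 2: 3, 3: 6, 4: INF, 5: 6, 6: INF}
assert largest_maximal_order(pi, p) == [1, 2, 5, 3, 4, 6]
```

Note that, following the remark above, the maximal elements at each step are computed in the restriction of the globally computed order $\leq_{ra}$, not in the order of a sub-tournament.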
\begin{proposition} \label{prop:lmaxchar} Given a permutation $\lambda$ of $\{1,\ldots,n\}$ and a parent function $$p:\{1,2,\ldots,n\}\rightarrow \{2,\ldots,n\}\cup\{\infty\},$$ the pair $(\lambda,p)$ is the largest maximal representation of the tournament induced by $(\lambda,p)$ if and only if for each descent $i$ of $\lambda$, the vertex $\lambda(i+1)$ belongs to the range of $p$. \end{proposition} \begin{proof} Assume first that $(\lambda,p)$ is a largest maximal representation and that $i$ is a descent of $\lambda$. By definition $\lambda(i)$ is a maximal element in the subset $\{\lambda(1),\ldots,\lambda(i)\}$, ordered by $\leq_{ra}$, but it is not a maximal element in the subset $\{\lambda(1),\ldots,\lambda(i), \lambda(i+1)\}$, since $\lambda(i+1)$ is the largest maximal element in the latter set, and it is smaller than $\lambda(i)$. Hence $\lambda(i+1)$ must cover $\lambda(i)$ in the restriction of $\leq_{ra}$ to $\{\lambda(1),\ldots, \lambda(i+1)\}$. Since, for any $k>i$, the relation $\lambda(k)<_{ra} \lambda(i+1)$ can not hold, the relation $\lambda(i)<_{ra} \lambda(i+1)$ is also a cover relation in the entire set $\{1,\ldots,n\}$. Thus there is a $j\in\{1,\ldots,n\}$ such that $\lambda(i)\xrightarrow{d} j\xrightarrow{a} \lambda(i+1)$ holds. Since $\lambda(i)$ is immediately to the left of $\lambda(i+1)$, we must have $p(j)=\lambda(i+1)$. Assume next that an alt-acyclic tournament is induced by a code $(\pi, p)$, in which for each descent $i$ of $\pi$, the element $\pi(i+1)$ belongs to the range of $p$. We will show by induction on $k$ that for each $k\in \{1,\ldots,n\}$ the vertex $\pi(k)$ is the largest maximal element of the set $\{\pi(1),\ldots,\pi(k)\}$. For $k=1$ we must have $\pi(1)=1$ since setting $1=\pi(i+1)$ for some $i\geq 1$ would make $i$ a descent and $1$ is never in the range of the parent function $p$. Assume now that the statement holds for some $k$ and consider the set $\{\pi(1),\ldots,\pi(k+1)\}$.
Since, by Proposition~\ref{prop:arcs-alta}, the permutation $\pi$ is a linear extension of the partial order $\leq_{ra}$, the element $\pi(k+1)$ is a maximal element of the set $\{\pi(1),\ldots,\pi(k+1)\}$ ordered by $\leq_{ra}$; we only need to show that it is the largest maximal element. There is nothing to prove when $\pi(k)<\pi(k+1)$ holds: adding $\pi(k+1)$ to the set $\{\pi(1),\ldots,\pi(k)\}$ can only shrink the set of maximal elements and, by our induction hypothesis, $\pi(k)$ was the largest maximal element before we added $\pi(k+1)$. We are left to consider the case when $\pi(k)>\pi(k+1)$ holds, that is, $k$ is a descent. By our assumption there is a $j<\pi(k+1)$ satisfying $\pi(k+1)=p(j)$. Consider any $i\leq k$ for which $\pi(i)>\pi(k+1)$ holds. This element is to the left of $\pi(k+1)=p(j)$ and it is larger than $j$. Hence we have $\pi(i)\xrightarrow{d} j\xrightarrow{a} \pi(k+1)$, implying $\pi(i)<_{ra} \pi(k+1)$. We have obtained that no element of $\{\pi(1),\ldots,\pi(k+1)\}$ that is larger than $\pi(k+1)$ can be a maximal element in this set, with respect to $\leq_{ra}$. Therefore $\pi(k+1)$ is the largest maximal element. \end{proof} Proposition~\ref{prop:lmaxchar} allows us to count alt-acyclic tournaments in a recursive fashion by using the following {\em reduction} operation.
\begin{definition} \label{def:reduction} Given the largest maximal representation $(\lambda,p)$ of an alternation acyclic tournament $T$ on $\{1,\ldots,n\}$ for some $n\geq 2$, we define its {\em reduction} to the set $\{1,\ldots,n-1\}$ to be the alternation acyclic tournament $T'$ with largest maximal representation $(\lambda',p')$ where $$ \lambda'(i)= \begin{cases} \lambda(i) & \mbox{if $i< \lambda^{-1}(n)$;}\\ \lambda(i+1) & \mbox{if $i\geq \lambda^{-1}(n)$;}\\ \end{cases} \quad\mbox{and}\quad p'(i)= \begin{cases} p(i)& \mbox{if $p(i)\neq n$;}\\ \infty & \mbox{if $p(i)= n$.}\\ \end{cases} $$ In other words, the permutation $\lambda'(1)\cdots \lambda'(n-1)$ is obtained from $\lambda(1)\cdots \lambda(n)$ by deleting the letter $n$, and the parent function $p'$ is obtained from $p$ by changing all values $p(i)=n$ to $p'(i)=\infty$. \end{definition} \begin{proposition} \label{prop:reduction} If $(\lambda,p)$ is the largest maximal representation of an alternation acyclic tournament $T$ on $\{1,\ldots,n\}$ then the pair $(\lambda',p')$ given in Definition~\ref{def:reduction} is the largest maximal representation of an alternation acyclic tournament on $\{1,\ldots,n-1\}$. \end{proposition} \begin{proof} Clearly $\lambda'$ is a permutation of $\{1,\ldots,n-1\}$, and the function $p'$ maps $\{1,\ldots,n-1\}$ into $\{2,\ldots,n-1\}\cup \{\infty\}$ in such a way that $p'(i)>i$ holds for all $i$. We only need to verify that for every descent $i$ of $\lambda'$, the element $\lambda'(i+1)$ is in the range of $p'$. This is most easily verified by visualizing the reduction operation in terms of the arc representations. In such terms, the reduction operation removes the letter $n$, and redirects all arrows ending in $n$ to point to $\infty$. If a letter in $\lambda'$ is less than the letter immediately preceding it, then it is also preceded by a larger letter in $\lambda$ (either the same letter or the deleted letter $n$), so it belongs to the range of $p$; being different from $n$, it then also belongs to the range of $p'$. (Note that $\lambda^{-1}(n)$ is a descent of $\lambda$ unless $\lambda^{-1}(n)=n$.)
Finally the range of $p'$ is obtained from the range of $p$ by removing $n$ from it (if it was present). \end{proof} \begin{definition} We say that an alternation acyclic tournament has {\em type $(n,i,j)$} if it is a tournament on $\{1,\ldots,n\}$, and the parent function $p$ in its largest maximal representation $(\lambda,p)$ satisfies $|p^{-1}(\infty)|=i$ and $|p(\{1,\ldots,n\})|=j+1$. We will denote the number of alternation acyclic tournaments of type $(n,i,j)$ by $A(n,i,j)$. \end{definition} Note that $p(n)=\infty$ always holds, so $A(n,i,j)>0$ can only hold when $i\geq 1$. Similarly, $j\geq 0$ must hold. \begin{proposition} \label{prop:altac} The numbers $A(n,i,j)/j!$ are integers; they are given by the initial condition $A(1,i,j)/j!=\delta_{i,1}\cdot \delta_{0,j}$ (where $\delta_{i,1}\cdot \delta_{0,j}$ is a product of Kronecker deltas), and the recurrence relation $$ \frac{A(n,i,j)}{j!}=\sum_{k=i}^{n-1}\binom{k}{i-1}\cdot \frac{A(n-1,k,j-1)}{(j-1)!}+(j+1)\cdot \frac{A(n-1,i-1,j)}{j!} \quad\mbox{for $n\geq 2$.} $$ \end{proposition} \begin{proof} The statement is equivalent to $A(1,i,j)=\delta_{i,1}\cdot \delta_{0,j}$, and the recurrence relation $$ A(n,i,j)=\sum_{k=i}^{n-1}\binom{k}{i-1}\cdot j\cdot A(n-1,k,j-1)+(j+1)\cdot A(n-1,i-1,j) \quad\mbox{for $n\geq 2$.} $$ Suppose we have an alternation acyclic tournament $T$ of type $(n,i,j)$, and consider its reduction $T'$. We claim that the type of $T'$ must be either $(n-1,k,j-1)$ for some $k\geq i$ or $(n-1,i-1,j)$. Indeed, if $n$ is in the range of $p$ then the range of $p'$ has one less element, and $p'^{-1}(\infty)=\left(p^{-1}(\infty)-\{n\}\right)\cup p^{-1}(n)$ properly contains $p^{-1}(\infty)-\{n\}$. If $n$ is not in the range of $p$ then $p^{-1}(n)=\emptyset$, $p'^{-1}(\infty)=p^{-1}(\infty)-\{n\}$ has exactly one less element than $p^{-1}(\infty)$, and the range of $p'$ equals the range of $p$.
We claim that any alt-acyclic tournament $T'$ of type $(n-1,k,j-1)$ (where $k\geq i$) is the reduction of exactly $\binom{k}{i-1}\cdot j$ alt-acyclic tournaments of type $(n,i,j)$. Indeed, unless $n$ is inserted as the last letter of $\lambda$, it creates a descent, so it must be inserted right before a vertex that is in the range of $p'$. There are $j$ ways to perform this insertion. Furthermore, we must take a $(k-i+1)$-element subset of $p'^{-1}(\infty)$ and reassign them to have $n$ as their parent. A similar but simpler reasoning shows that for any alt-acyclic tournament $T'$ of type $(n-1,i-1,j)$ there are exactly $(j+1)$ alt-acyclic tournaments of type $(n,i,j)$ whose reduction is $T'$. \end{proof} We computed the numbers $A(n,i,j)/j!$ using Maple and the formula given in Proposition~\ref{prop:altac} for $n\leq 5$. These are given in Table~\ref{tab:Anij}. \begin{table} \begin{tabular}{llll} $n=2$ & \begin{tabular}{r|cc} 1 & 1 & \\ 0 & 0 & 1\\ \hline \slashbox{j}{i} & 1 & 2\\ \end{tabular} &$n=3$& \begin{tabular}{r|ccc} 2 & 1 & &\\ 1 & 1 & 4& \\ 0 & 0 & 0& 1\\ \hline \slashbox{j}{i} & 1 & 2& 3\\ \end{tabular} \\ \ & \ & \ & \ \\ \\ $n=4$ & \begin{tabular}{r|cccc} 3 & 1 &&&\\ 2 & 5 & 11 &&\\ 1 & 1 & 5 & 11& \\ 0 & 0 & 0& 0 & 1\\ \hline \slashbox{j}{i} & 1 & 2& 3 & 4\\ \end{tabular} & $n=5$& \begin{tabular}{r|ccccc} 4 & 1 &&&&\\ 3 & 16 & 26 &&&\\ 2 & 17 & 58 & 66 && \\ 1 & 1 & 6 & 16 & 26 & \\ 0 & 0 & 0& 0& 0 & 1\\ \hline \slashbox{j}{i} & 1 & 2& 3 & 4 & 5\\ \end{tabular}\\ \end{tabular} \caption{The values of $A(n,i,j)/j!$ for $2\leq n\leq 5$} \label{tab:Anij} \end{table} A generating function formula for the numbers $A(n,i,j)/j!$ will be given in Section~\ref{sec:genfun}. Inspecting the tables we can make several observations, some of which are easy to show. \begin{proposition} \label{prop:i+j>n} $A(n,i,j)=0$ holds for $i+j>n$.
\end{proposition} Indeed, for the largest maximal representation $(\lambda,p)$, to have $j+1$ elements in the range of $p$, we need at least $j$ elements of $\{1,\ldots,n\}$ to have a parent different from $\infty$; hence $i=|p^{-1}(\infty)|\leq n-j$. \begin{proposition} \label{prop:ani0} $A(n,i,0)=\delta_{i,n}$ where $\delta_{i,n}$ is the Kronecker delta. \end{proposition} Indeed, when the range of $p$ is $\{\infty\}$ then all elements have $\infty$ as their parent. It is only a little harder to show that in the main diagonal of each table we have the {\em Eulerian} numbers. \begin{proposition} The number $A(n,n-j,j)/j!$ is the number of permutations of $\{1,\ldots,n\}$ having exactly $j$ descents. \end{proposition} \begin{proof} Because of Proposition~\ref{prop:i+j>n}, when we set $i=n-j$ in the recurrence given in Proposition~\ref{prop:altac}, only the term associated to $k=n-j$ will have a positive contribution. By $\binom{n-j}{n-j-1}=n-j$ we get $$ \frac{A(n,n-j,j)}{j!}=(n-j)\cdot \frac{A(n-1,n-j,j-1)}{(j-1)!}+(j+1)\cdot \frac{A(n-1,n-j-1,j)}{j!} $$ for $n\geq 2$. This is exactly the recurrence for the Eulerian numbers, and the initial conditions match. \end{proof} It may be a little harder to notice that the numbers in the first column, multiplied by the factorial of the row index, add up to the Genocchi numbers of the first kind, that is, \begin{equation} \label{eq:gen1} |G_{2n}|=\sum_{j=0}^{n-1} A(n,1,j). \end{equation} We will prove a generalization of Equation~(\ref{eq:gen1}) in Section~\ref{sec:refined}. \section{Refined counting of alternation acyclic tournaments} \label{sec:refined} The main result of this section is the following generalization of Dumont's theorem, which also refines Theorem~\ref{thm:all-alta}.
\begin{theorem} \label{thm:Dumontfine} For each $i\in \{1,\ldots,n\}$, the sum $\sum_{j=0}^n A(n,i,j)$ is the number of ordered pairs $$((a_1,\ldots,a_{n-1}), (b_1,\ldots,b_{n-1}))\in {\mathbb Z}^{n-1}\times {\mathbb Z}^{n-1}$$ satisfying the following conditions: \begin{enumerate} \itemsep=-1pt \item $0\leq a_k\leq k$ and $1\leq b_k\leq k$ hold for all $k\in \{1,\ldots,n-1\}$; \item the set $\{a_1,b_1,\ldots,a_{n-1},b_{n-1}\}$ contains $\{1,\ldots,n-1\}$; \item $|\{k\in \{1,\ldots,n-1\} \::\: a_k=0\}|=i-1$. \end{enumerate} \end{theorem} \begin{remark} \label{rem:Dumontfine} {\em Theorem~\ref{thm:Dumontfine} above is a direct generalization of Corollary~\ref{cor:dumont}, a {\em restated} variant of Dumont's original Theorem~\ref{thm:dumont}. This generalization has a shorter proof. A similar but longer argument would allow generalizing Theorem~\ref{thm:dumont} to the statement that $\sum_{j=0}^n A(n,i,j)$ equals the number of excedant functions $f:\{1,2,\ldots,2n-1\}\rightarrow \{1,2,\ldots,2n-1\}$ satisfying the following conditions: \begin{enumerate} \item $f(2k)\leq 2n-2$ holds for $k=1,\ldots,n-1$; \item $f(\{1,2,\ldots,2n-1\})=\{2,4,\ldots,2n-2\}\cup \{2n-1\}$; \item $|f^{-1}(\{2n-1\})|=i$. \end{enumerate} } \end{remark} The key ingredient in the proof of Theorem~\ref{thm:Dumontfine} is the following bijection. \begin{theorem} \label{thm:descbij} There is a bijection between the set of all permutations $\pi$ of $\{1,\ldots,n\}$ and the set of excedant functions $f: \{1,\ldots,n\}\rightarrow \{1,\ldots,n\}$ such that each permutation $\pi$ and the corresponding excedant function $f$ have the following property: $$ \{1,\ldots,n\}-\{f(1),\ldots,f(n)\}=\{\pi(i+1) \::\: 1\leq i\leq n-1\quad \mbox{and}\quad \pi(i)>\pi(i+1)\}. $$ \end{theorem} \begin{proof} We will describe our bijection using the process of inserting the numbers $1,2,\ldots,n$ into the permutation $\pi$ in decreasing order.
In order to reduce the number of cases, we place before the first number $\pi(1)$ the number $\pi(0):=0$ and after the last number $\pi(n)$ the number $\pi(n+1):=n+1$. Thus every number is inserted between two numbers. For example, for $n=6$ and the permutation $\pi(1)\cdots\pi(6)=615342$ we have the insertion process $$ 07\rightarrow 067 \rightarrow 0657 \rightarrow 06547 \rightarrow 065347 \rightarrow 0653427 \rightarrow 06153427. $$ The number $f(i)$ is computed in step $n+1-i$ when we insert $i$ into the permutation between the numbers $u$ and $v$, using the following rule: \begin{equation} \label{eq:fdef} f(i):= \begin{cases} v & \mbox{if $u>v$;}\\ \overleftarrow{u} & \mbox{if $0<u<v$;}\\ i & \mbox{if $u=0$.}\\ \end{cases} \end{equation} Here $\overleftarrow{u}$ is the leftmost number $w$ in the current word such that the consecutive subword $w\cdots u$ is {\em decreasing}, that is, each number in it is smaller than the immediately preceding number. (We have $\overleftarrow{u}=u$ exactly when $u$ is immediately preceded by a smaller number.) In our example we have $(f(1),\ldots, f(6))=(5,4,4,6,6,6)$. In the third step, when we inserted $4$ between $5$ and $7$, we set $f(4)=\overleftarrow{5}=6$; in the fifth step, when we inserted $2$ between $4$ and $7$, we set $f(2)=\overleftarrow{4}=4$. In the last step, when we inserted $1$ between $6$ and $5$, we set $f(1)=5$. The operation $\pi\mapsto f$ is well-defined. The numbers $f(i)$ clearly satisfy $i\leq f(i)\leq n$. Since the number of all words $\pi(1)\cdots\pi(n)$ is the same as the number of all excedant functions $f:\{1,\ldots,n\} \rightarrow \{1,\ldots,n\}$, to show that we defined a bijection, it suffices to show that our assignment is injective: there is at most one way to reconstruct a permutation from an excedant function $f$. We always have $f(n)=n$, and the first step is to insert $n$ between $0$ and $n+1$, where the last line of the definition (\ref{eq:fdef}) applies.
Assume, by induction, that there is only one way to reconstruct the insertion of $n$, $n-1$, \ldots, $i+1$, based on the knowledge of $f(n)$, $f(n-1)$, \ldots, $f(i+1)$. Consider $f(i)$ satisfying $i\leq f(i)\leq n$, and let us show that there is only one way to insert $i$ that yields the given value of $f(i)$. Only the last line of the definition (\ref{eq:fdef}) allows setting $f(i)=i$; the value of $f(i)$ is greater than $i$ in the other two cases. Thus, in the case when $f(i)=i$, we must insert $i$ right after $0$ as the first new number in our permutation. From now on we may assume that $f(i)=w$ for some $w>i$. Let $w'$ be the immediate predecessor of $w$ in our current word. We distinguish two cases depending on how $w'$ and $w$ compare. If $w'>w$ then $i$ cannot be inserted anywhere after $w$, since the only way to obtain $f(i)=w$ would be to insert $i$ between some $u$ and $v$ satisfying $u<v$ and $w=\overleftarrow{u}$. This is impossible: if $w\cdots u$ is a decreasing subword, then so is $w'w\cdots u$, and so $\overleftarrow{u}$ is either $w'$ or an even earlier number. Thus $i$ must be inserted somewhere before $w$, and the only way to get $f(i)=w$ when $w$ is a number to the right of the place of insertion is to insert $i$ right before $w$. We are left with the case when $w'<w$. If we insert $i$ anywhere before $w$, we cannot get $f(i)=w$, only $w'$ or a number to the left of it. We must therefore insert $i$ after $w$ in such a way that the second line of (\ref{eq:fdef}) can be used and yields $f(i)=w$. We must find a $u$ such that the $v$ succeeding $u$ is larger than $u$ and the subword $w\cdots u$ is decreasing. In other words, we must take the rightmost $u$ such that $w\cdots u$ is a decreasing consecutive subword. We are left to show that the set $\{f(1),\ldots,f(n)\}$ contains all numbers between $1$ and $n$ except those values that are immediately preceded by a larger number in the permutation $\pi(1)\cdots \pi(n)$.
We prove the following generalization of this statement by induction: at step $n+1-i$ of the insertion process, the set $\{f(i),\ldots, f(n)\}$ contains all elements of the set $\{i,\ldots,n\}$ except those numbers which are immediately preceded by a larger number in the current permutation of $n, n-1, \ldots, i$. At the first step $n$ is inserted and it is preceded by a smaller number. We set $f(n)=n$. Assume the statement is true up to step $n-i$ and consider the insertion of $i$. If $i$ is inserted right after $0$, the current set of numbers immediately preceded by a smaller number does not change, and $f(i)=i$ is added to the set $\{f(i+1), \ldots, f(n)\}$. In all other cases $i$ is inserted right after a larger number and $i\not \in \{f(i),\ldots,f(n)\}$. If $i$ is inserted between $u$ and $v$ satisfying $u>v$, then $v$, which was hitherto immediately preceded by a larger number, is now immediately preceded by the smaller number $i$. The set of numbers immediately preceded by a larger number gains $i$ as a new element and loses $v$ as an element; no other change occurs. This change is properly reflected in setting $f(i)=v$. Finally, if $u<v$ holds, then the only change to the set of numbers immediately preceded by a larger number is the addition of $i$ to this set. This is properly handled if we select $f(i)$ to be a number that is already present in the set $\{f(i+1),\ldots,f(n)\}$. Selecting $f(i)=\overleftarrow{u}$ fits the bill, as $\overleftarrow{u}$ cannot be immediately preceded by a larger number. \end{proof} \begin{definition} We call the excedant function $f:\{1,\ldots,n\}\rightarrow \{1,\ldots,n\}$ associated to the permutation $\pi$ by the algorithm described in the proof of Theorem~\ref{thm:descbij} the {\em descent-sensitive code} of the permutation $\pi$.
\end{definition} {\em Proof of Theorem~\ref{thm:Dumontfine}:} Consider the largest maximal representation $(\lambda,p)$ of an alternation acyclic tournament and let us replace $\lambda$ with its descent-sensitive code $f:\{1,\ldots,n\}\rightarrow \{1,\ldots,n\}$. Note that we must have $\lambda(1)=1$ for the largest maximal order of each alt-acyclic tournament, and this equality is equivalent to $f(1)=1$. Hence the function $f$ is completely determined by its restriction $\widetilde{f}$ to the set $\{2,\ldots,n\}$, which sends this set into itself. Similarly, we must have $p(n)=\infty$, thus the restriction of $p$ to $\{1,\ldots,n-1\}$, which is a function $\widetilde{p}: \{1,\ldots,n-1\}\rightarrow \{2,\ldots,n\}\cup\{\infty\}$, completely determines $p$. Let us define the vectors $(a_1,\ldots,a_{n-1})$ and $(b_1,\ldots,b_{n-1})$ by setting $$ a_{k}= \begin{cases} n+1-p(n-k)& \mbox{if $p(n-k)\neq \infty$,}\\ 0 & \mbox{if $p(n-k)= \infty$}\\ \end{cases} $$ and $b_k=n+1-f(n+1-k)$ for $k=1,\ldots,n-1$. The condition $p(n-k)\in \{n-k+1,\ldots,n\}\cup\{\infty\}$ is equivalent to $0\leq a_k\leq k$ and the condition $n+1-k\leq f(n+1-k)\leq n$ is equivalent to $1\leq b_k\leq k$. The description given in Proposition~\ref{prop:lmaxchar} may be restated as follows: the pair of functions $(f,p)$ comes from a largest maximal representation $(\lambda, p)$ if and only if we have \begin{equation} \label{eq:pf} \{p(1),\ldots,p(n-1)\}\cup \{f(2),\ldots, f(n)\}\supseteq \{2,\ldots,n\}. \end{equation} This is equivalent to Condition (2) in our statement. Finally, $|\{k\in \{1,\ldots,n-1\} \::\: a_k=0\}|$ is clearly the number of elements of $\{1,\ldots,n-1\}$ sent into $\infty$ by $p$, that is, $i-1$. \qed An important special instance of Theorem~\ref{thm:Dumontfine} is the case $i=1$. In this case $a_k>0$ holds for all $k$, and the pairs $((a_1,\ldots,a_{n-1}), (b_1,\ldots,b_{n-1}))$ are exactly the ones that are counted in Corollary~\ref{cor:dumont}. Equation~(\ref{eq:gen1}) follows.
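The insertion process behind Theorem~\ref{thm:descbij} is completely algorithmic and can be checked by machine. The following Python sketch is ours and purely illustrative (the function name is invented); it computes the descent-sensitive code of a permutation, reproduces the worked example $\pi=615342$, and confirms injectivity on all permutations of $\{1,\ldots,5\}$:

```python
from itertools import permutations

def descent_sensitive_code(pi):
    """Descent-sensitive code f of a permutation pi of 1..n, computed by
    the insertion process from the proof of the bijection theorem."""
    n = len(pi)
    word = [0, n + 1]               # sentinels pi(0) = 0 and pi(n+1) = n+1
    f = [0] * (n + 1)               # f[1..n]; index 0 unused
    for i in range(n, 0, -1):       # insert n, n-1, ..., 1 in decreasing order
        larger = [x for x in pi if x >= i]
        pos = larger.index(i) + 1   # i goes between word[pos-1] and word[pos]
        u, v = word[pos - 1], word[pos]
        if u > v:
            f[i] = v
        elif u == 0:
            f[i] = i
        else:                       # 0 < u < v: f(i) is the leftmost w such
            q = pos - 1             # that w ... u is a decreasing subword
            while word[q - 1] > word[q]:
                q -= 1
            f[i] = word[q]
        word.insert(pos, i)
    return f[1:]

# The example from the proof: pi = 615342 has code (5, 4, 4, 6, 6, 6),
# and the complement of the image is the set {1, 2, 3} of descent bottoms.
assert descent_sensitive_code([6, 1, 5, 3, 4, 2]) == [5, 4, 4, 6, 6, 6]
assert set(range(1, 7)) - set(descent_sensitive_code([6, 1, 5, 3, 4, 2])) == {1, 2, 3}

# Injectivity on S_5: 120 distinct codes among the 5! excedant functions,
# so the assignment is a bijection, as the theorem asserts.
codes = {tuple(descent_sensitive_code(list(p))) for p in permutations(range(1, 6))}
assert len(codes) == 120
```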
\section{New combinatorial models for the Genocchi numbers} \label{sec:genmod} Equation~(\ref{eq:gen1}) inspires introducing {\em ascending alternation acyclic tournaments}. \begin{definition} We call an alternation acyclic tournament $T$ on $\{1,\ldots,n\}$ {\em ascending} if every $i<n$ is the tail of an ascent, that is, for each $i<n$ there is a $j>i$ such that $i\rightarrow j$. \end{definition} \begin{lemma} \label{lem:asc} An alternation acyclic tournament $T$ on $\{1,\ldots,n\}$ is ascending if and only if it has type $(n,1,j)$ for some $j$. \end{lemma} Indeed, for any biordered forest inducing $T$, if $(\pi,p)$ is the code of the biordered forest, then $p(i)=\infty$ holds if and only if $i$ is not the tail of any ascent. An alt-acyclic tournament is ascending if and only if $n$ is the only element of $\{1,\ldots,n\}$ whose parent is $\infty$. Because of Lemma~\ref{lem:asc}, Equation~(\ref{eq:gen1}) may be rephrased as follows. \begin{corollary} \label{cor:asc} The number of ascending alternation acyclic tournaments on $\{1,\ldots,n\}$ is the unsigned Genocchi number of the first kind $|G_{2n}|$. \end{corollary} Taking into account Theorem~\ref{thm:all-alta}, Theorem~\ref{thm:Dumontfine} implies the following result on the median Genocchi numbers. \begin{corollary} \label{cor:Dumont2} The median Genocchi number $H_{2n-1}$ is the total number of all ordered pairs $$((a_1,\ldots,a_{n-1}), (b_1,\ldots,b_{n-1}))\in {\mathbb Z}^{n-1}\times {\mathbb Z}^{n-1}$$ such that $0\leq a_k\leq k$ and $1\leq b_k\leq k$ hold for all $k$ and the set $\{a_1,b_1,\ldots,a_{n-1},b_{n-1}\}$ contains $\{1,\ldots,n-1\}$. \end{corollary} Corollary~\ref{cor:Dumont2} makes the divisibility of $H_{2n-1}$ by $2^{n-1}$ especially transparent. Furthermore, it inspires the following model for the normalized median Genocchi numbers.
\begin{theorem} \label{thm:nmGenocchi} The normalized median Genocchi number $h_n$ is the number of sequences $\{u_1,v_1\}, \{u_2,v_2\}, \ldots, \{u_n,v_n\}$ subject to the following conditions: \begin{enumerate} \item the set $\{u_k,v_k\}$ is a (one- or two-element) subset of $\{1,\ldots,k\}$; \item the set $\{u_1,v_1,u_2,v_2,\ldots,u_n,v_n\}$ equals $\{1,\ldots,n\}$. \end{enumerate} \end{theorem} \begin{proof} By Corollary~\ref{cor:Dumont2}, applied with $n+1$ in place of $n$, the median Genocchi number $H_{2n+1}$ is the number of pairs of vectors $((a_1,\ldots,a_n), (b_1,\ldots,b_n))$ such that $0\leq a_k\leq k$ and $1\leq b_k\leq k$ hold for all $k$ and the set $\{a_1,b_1,\ldots,a_{n},b_{n}\}$ contains $\{1,\ldots,n\}$. Let us first define a ${\mathbb Z}_2^n$-action on the set of all such pairs. We define the involution $\phi_k$ for $k\in \{1,\ldots,n\}$ as follows. The map $\phi_k$ sends $$ ((a_1,\ldots,a_n), (b_1,\ldots,b_n))\quad\mbox{into} $$ $$ ((a_1,\ldots,a_{k-1},a_k',a_{k+1},\ldots,a_n),(b_1,\ldots,b_{k-1},b_k',b_{k+1},\ldots,b_n)) $$ where $$ (a_k',b_k')= \begin{cases} (b_k,a_k) & \mbox{if $a_k\neq b_k$ and $a_k\neq 0$;} \\ (0,b_k) & \mbox{if $a_k=b_k$;} \\ (b_k,b_k) & \mbox{if $a_k=0$.} \\ \end{cases} $$ In other words, the map $\phi_k$ changes only the $k$-th coordinates of $(a_1,\ldots,a_n)$ and $(b_1,\ldots,b_n)$; it swaps $a_k$ and $b_k$ if $\{a_k,b_k\}$ is a two-element subset of $\{1,\ldots,k\}$, and it swaps the pair $(b_k,b_k)$ with the pair $(0,b_k)$. Note that in this second case, we have $$ \{a_k,b_k\}\cap \{1,\ldots,k\}=\{b_k\} $$ for $a_k=0$, as well as for $a_k=b_k$. The involutions $\phi_k$ commute, as they act on different coordinates, and none of them has a fixed point, so the resulting ${\mathbb Z}_2^n$-action is free. An orbit representative for this action is the sequence of sets $$\{a_1,b_1\}\cap\{1\}, \{a_2,b_2\}\cap \{1,2\},\ldots, \{a_n,b_n\}\cap \{1,\ldots,n\}.$$ In the case when $a_k\neq 0$ we may set $u_k=a_k$ and $v_k=b_k$, and in the case when $a_k=0$, we may set $u_k=b_k$ and $v_k=b_k$.
This orbit representative is valid if and only if the set $\{u_1,v_1,u_2,v_2,\ldots,u_n,v_n\}$ equals $\{1,\ldots,n\}$. Since the action is free, each orbit has $2^n$ elements, hence the number of valid sequences is $H_{2n+1}/2^n=h_n$. \end{proof} \begin{remark} {\em It was recently shown by A.\ Bigeni~\cite{Bigeni-bij} that the above model is bijectively equivalent to the model introduced by Feigin~\cite{Feigin-deg}. The bijection is highly nontrivial; A.\ Bigeni's entire paper is devoted to it. It is through Feigin's model that the above model is related to the earlier models by Kreweras~\cite{Kreweras} and by Kreweras and Barraud~\cite{Kreweras-Barraud}. All earlier models are related to the Kreweras triangle~\cite{Kreweras}, and there is an interpretation of the numbers in the Kreweras triangle through Bigeni's bijection. Note, however, that the numbers $A(n,i,j)$ introduced in this paper form a three-dimensional array, and they are {\em not directly related} to the Kreweras triangle.} \end{remark} \begin{remark} \label{rem:Dumont} {\em The variant of Theorem~\ref{thm:Dumontfine} stated in Remark~\ref{rem:Dumontfine}, together with Theorem~\ref{thm:all-alta}, implies the following variant of Corollary~\ref{cor:Dumont2}: the median Genocchi number $H_{2n-1}$ is the number of excedant functions $f:\{1,2,\ldots,2n-1\}\rightarrow \{1,2,\ldots,2n-1\}$ satisfying $f(2k)\leq 2n-2$ for $k=1,\ldots,n-1$ and $$ f(\{1,2,\ldots,2n-1\})=\{2,4,\ldots,2n-2\}\cup \{2n-1\}. $$} \end{remark} \section{Generating function formulas} \label{sec:genfun} In this section we prove a generating function formula for the numbers $A(n,i,j)$ introduced in Section~\ref{sec:lmax} and obtain the ordinary generating functions of the Genocchi numbers of both kinds. We begin by introducing the generating function $$\alpha(x,y,t)=\sum_{n=0}^\infty \sum_{i=1}^n \sum_{j=0}^{n-1} \frac{A(n,i,j)}{j!} x^i y^j t^n,$$ in which we denote the coefficient of $t^n$ by $\alpha_n(x,y)$.
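The coefficients of $\alpha(x,y,t)$ can be tabulated directly from the recurrence of Proposition~\ref{prop:altac}. The following Python sketch is ours and purely illustrative (the function name is invented; the reference values $|G_{2n}|=1,3,17,155$ and $H_{2n-1}=2,8,56,608$ are the standard ones); it reproduces Table~\ref{tab:Anij} and the Genocchi identities numerically:

```python
from math import comb, factorial

def A_over_fact(N):
    """Tabulate a[n][(i, j)] = A(n, i, j)/j! from the recurrence of
    Proposition prop:altac, starting from A(1, 1, 0) = 1."""
    a = {1: {(1, 0): 1}}
    for n in range(2, N + 1):
        cur = {}
        for i in range(1, n + 1):
            for j in range(n):
                # second term of the recurrence: (j+1) * A(n-1, i-1, j)/j!
                val = (j + 1) * a[n - 1].get((i - 1, j), 0)
                if j >= 1:
                    # first term: sum_k C(k, i-1) * A(n-1, k, j-1)/(j-1)!
                    val += sum(comb(k, i - 1) * a[n - 1].get((k, j - 1), 0)
                               for k in range(i, n))
                if val:
                    cur[(i, j)] = val
        a[n] = cur
    return a

a = A_over_fact(5)
# Column i = 1 of the n = 4 table: entries 0, 1, 5, 1 for j = 0, ..., 3.
assert [a[4].get((1, j), 0) for j in range(4)] == [0, 1, 5, 1]
# Equation (gen1): sum_j A(n, 1, j) gives |G_4|, ..., |G_10| = 1, 3, 17, 155.
assert [sum(factorial(j) * v for (i, j), v in a[n].items() if i == 1)
        for n in range(2, 6)] == [1, 3, 17, 155]
# Summing all entries gives the median Genocchi numbers H_3, ..., H_9.
assert [sum(factorial(j) * v for (_, j), v in a[n].items())
        for n in range(2, 6)] == [2, 8, 56, 608]
```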
Proposition~\ref{prop:altac} may be rewritten as \begin{equation} \label{eq:ainit} \alpha_1(x,y)=x \quad\mbox{and} \end{equation} \begin{equation} \label{eq:arec} \alpha_{n+1}(x,y) =xy (\alpha_n(x+1,y)-\alpha_n(x,y))+x\alpha_n(x,y) +xy \frac{\partial}{\partial y}\alpha_n(x,y) \quad\mbox{for $n\geq 1$.} \end{equation} These equations gain a simpler form after introducing the formal power series $$ \beta_n(x,y)=\alpha_n(x,y)\cdot e^{-y}. $$ For these, equations~(\ref{eq:ainit}) and (\ref{eq:arec}) may be rewritten as \begin{equation} \label{eq:binit} \beta_1(x,y)=xe^{-y} \quad\mbox{and} \end{equation} \begin{equation} \label{eq:brec} \beta_{n+1}(x,y) =xy \beta_n(x+1,y)+x\beta_n(x,y) +xy \frac{\partial}{\partial y}\beta_n(x,y) \quad\mbox{for $n\geq 1$.} \end{equation} Let us define the polynomial $\beta_{n,k}(x)$ as the coefficient of $y^k$ in $\beta_n(x,y)$. Equations~(\ref{eq:binit}) and (\ref{eq:brec}) may be transformed into \begin{equation} \label{eq:bkinit} \beta_{1,k}(x)=x\cdot\frac{(-1)^k}{k!} \quad\mbox{for $k\geq 0$ and} \end{equation} \begin{equation} \label{eq:bkrec} \beta_{n+1,k}(x) =x(\beta_{n,k-1}(x+1)+(k+1)\beta_{n,k}(x)) \quad\mbox{for $n\geq 1$ and $k\geq 0$.} \end{equation} Note that (\ref{eq:bkrec}) also holds for $k=0$, once we set $\beta_{n,-1}(x)=0$ for all $n$. Finally, let us set $\gamma_{n,k}(x)=\beta_{n,k}(x-k)$. Equations~(\ref{eq:bkinit}) and (\ref{eq:bkrec}) may be transformed into \begin{equation} \label{eq:ckinit} \gamma_{1,k}(x)=(x-k)\cdot\frac{(-1)^k}{k!} \quad\mbox{for $k\geq 0$ and} \end{equation} \begin{equation} \label{eq:ckrec} \gamma_{n+1,k}(x) =(x-k)(\gamma_{n,k-1}(x)+(k+1)\gamma_{n,k}(x)) \quad\mbox{for $n\geq 1$ and $k\geq 0$.} \end{equation} Again we set $\gamma_{n,-1}(x)=0$ for all $n$. This is an array of polynomials that is easy to compute after introducing $$ \gamma_k(x,t)=\sum_{n=1}^{\infty} \gamma_{n,k}(x) t^n.
$$ For $k=0$, Equation~(\ref{eq:ckinit}) and repeated use of Equation~(\ref{eq:ckrec}) yields $\gamma_{n,0}=x^n$ for $n\geq 1$. Hence we have \begin{equation} \label{eq:cinit} \gamma_0(x,t)=\frac{xt}{1-xt}. \end{equation} For $k\geq 1$, Equation~(\ref{eq:ckrec}) implies the recurrence \begin{equation} \label{eq:crec} \gamma_{k}(x,t)=\frac{(x-k)t}{1-(x-k)(k+1)t}\cdot \left(\frac{(-1)^k}{k!}+ \gamma_{k-1}(x,t)\right). \end{equation} Using Equations~(\ref{eq:cinit}) and (\ref{eq:crec}), an easy induction on $k$ implies \begin{equation} \label{eq:cfun} \gamma_k(x,t)=\sum_{i=0}^k \frac{(-1)^{k-i}}{(k-i)!} \prod_{\ell=0}^i \frac{(x-k+\ell) t}{ 1-(x-k+\ell)(k+1-\ell)t}. \end{equation} Next we introduce $$ \widetilde{\beta}_k(x,t)=\sum_{n=0}^{\infty} \beta_{n,k}(x) t^n. $$ The definition of $\gamma_{n,k}(x)$ implies $\beta_{n,k}(x)=\gamma_{n,k}(x+k)$ and $\widetilde{\beta}_k(x,t)=\gamma_k(x+k,t)$. Hence Equation~(\ref{eq:cfun}) may be rewritten as \begin{equation} \label{eq:bfun} \widetilde{\beta}_k(x,t)=\sum_{i=0}^k \frac{(-1)^{k-i}}{(k-i)!} \prod_{\ell=0}^i \frac{(x+\ell) t}{ 1-(x+\ell)(k+1-\ell)t}. \end{equation} Finally, as an immediate consequence of the definitions we have $$ \alpha(x,y,t)=\sum_{k=0}^{\infty} \widetilde{\beta}_k(x,t)\cdot y^k\cdot e^y. $$ Combining the last equation with Equation~(\ref{eq:bfun}) we obtain the formula $$ \alpha(x,y,t)=\sum_{j=0}^\infty y^j \sum_{k=0}^j \frac{1}{(j-k)!} \sum_{i=0}^k \frac{(-1)^{k-i}}{(k-i)!} \prod_{\ell=0}^i \frac{(x+\ell) t}{ 1-(x+\ell)(k+1-\ell)t} $$ Taking into account $\prod_{\ell=0}^i (k+1-\ell)=(k+1)!/(k-i)!$ we obtain the following result. \begin{theorem} \label{thm:agenfun} The generating function $\alpha(x,y,t)=\sum_{n,i,j} A(n,i,j) x^i y^j t^n/j! $ is given by $$ \alpha(x,y,t)=\sum_{j=0}^\infty \frac{y^j}{j!} \sum_{k=0}^j \binom{j}{k} \frac{1}{k+1} \sum_{i=0}^k (-1)^{k-i} \prod_{\ell=0}^i \frac{(x+\ell)(k+1-\ell)t}{ 1-(x+\ell)(k+1-\ell)t}. 
$$ \end{theorem} By Theorem~\ref{thm:all-alta}, the generating function of the median Genocchi numbers $H_{2n-1}$ is obtained by substituting $x=1$ and replacing each $y^j$ with $j!$ in $\alpha(x,y,t)$. \begin{corollary} The median Genocchi numbers satisfy $$ \sum_{n=1}^\infty H_{2n-1} t^n= \sum_{j=0}^\infty \sum_{k=0}^j \binom{j}{k} \frac{1}{k+1} \sum_{i=0}^k (-1)^{k-i} \prod_{\ell=0}^i \frac{(1+\ell)(k+1-\ell)t}{ 1-(1+\ell)(k+1-\ell)t}. $$ \end{corollary} By Corollary~\ref{cor:asc}, the generating function of the Genocchi numbers of the first kind is obtained by replacing each $y^j$ by $j!$ and then taking the coefficient of $x$ in $\alpha(x,y,t)$. To use Theorem~\ref{thm:agenfun}, observe that all powers of $x$ occur in the products of the form $$ \prod_{\ell=0}^i \frac{(x+\ell)(k+1-\ell)t}{ 1-(x+\ell)(k+1-\ell)t}. $$ Here, for $\ell=0$, the factor $$ \frac{x(k+1)t}{1-x(k+1)t}=x(k+1)t+x^2(k+1)^2t^2+\cdots $$ has no constant term, and the coefficient of $x$ is $(k+1)t$. We can take out this factor, cancel $k+1$ against the factor $\frac{1}{k+1}$, and observe that only the constant terms of the remaining factors contribute to the coefficient of $x$. Theorem~\ref{thm:agenfun} thus has the following consequence. \begin{corollary} The Genocchi numbers of the first kind satisfy $$ \sum_{n=1}^{\infty} |G_{2n}| t^n= t\cdot \sum_{j=0}^\infty \sum_{k=0}^j \binom{j}{k} \sum_{i=0}^k (-1)^{k-i} \prod_{\ell=1}^i \frac{\ell(k+1-\ell)t}{ 1-\ell(k+1-\ell)t}. $$ In this formula, when $i=0$, we define the empty product to be equal to $1$. \end{corollary} \section{Concluding remarks} \label{sec:concl} Dumont's first permutation models for the Genocchi numbers were created by finding a class of excedant functions first~\cite[Corollaire du Th\'eor\`eme 3]{Dumont}, and then establishing a bijection between excedant functions and permutations~\cite[Section 5]{Dumont}. This bijection is very different from, although similar in spirit to, our Theorem~\ref{thm:descbij}.
Using the bijection presented in Theorem~\ref{thm:descbij}, new classes of permutations counted by Genocchi numbers of the first kind may be introduced; however, these classes will be very similar, if not identical, to the examples obtained by Dumont after combining his bijection with Foata's fundamental transformation~\cite{Foata}, which transforms counting excedances into counting descents. Dumont's bijection between permutations and excedant functions makes identifying excedances easy, whereas our bijection is geared toward identifying descents. More interesting results could be hoped for by finding new permutation models for median Genocchi numbers using Remark~\ref{rem:Dumont} and Corollary~\ref{cor:Dumont2}. The curiosity of all results presented in this paper is that objects counted by Genocchi numbers of the first kind are presented as subsets of objects counted by median Genocchi numbers: it is usually the other way around in the literature. This paper arose in a search for generalizations of semiacyclic tournaments that appear in the work of Postnikov and Stanley~\cite{Postnikov-Stanley}. In particular, we have found a hyperplane arrangement whose regions are counted by the median Genocchi numbers, which are known to be multiples of powers of $2$. Semiacyclic tournaments count regions in the Linial arrangement, which is a section of the arrangement we presented in this paper. The number of semiacyclic tournaments on $n$ vertices is known to be $$ 2^{-n}\sum_{k=0}^n \binom{n}{k} (k+1)^{n-1}; $$ see~\cite[Theorem 8.1]{Postnikov-Stanley}. It is hard to miss in the above formula that the sum after the factor $2^{-n}$ is obviously an integer, but not obviously a multiple of $2^n$. No combinatorial proof of this divisibility is known; perhaps the $q$-counting of the regions of the Linial arrangement by Athanasiadis~\cite{Athanasiadis-extLin} comes closest.
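The divisibility just discussed is easy to observe numerically. The following Python sketch is ours and purely illustrative (the function name is invented); it evaluates the Postnikov--Stanley formula for small $n$ and checks that the sum is indeed a multiple of $2^n$:

```python
from math import comb

def semiacyclic_count(n):
    # Postnikov--Stanley formula 2^{-n} * sum_k C(n, k) * (k+1)^{n-1}
    # for the number of semiacyclic tournaments on n vertices.
    s = sum(comb(n, k) * (k + 1) ** (n - 1) for k in range(n + 1))
    assert s % 2 ** n == 0     # the divisibility lacking a combinatorial proof
    return s // 2 ** n

# The counts for n = 1, ..., 5 obtained from the formula.
assert [semiacyclic_count(n) for n in range(1, 6)] == [1, 2, 7, 36, 246]
```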
Perhaps the $q$-counting of the regions of our homogenized Linial arrangement, combined with a better understanding of how the Linial arrangement appears as a section of our arrangement, could help find some additional explanations of how divisibility by a power of $2$ appears in both settings. \section*{Acknowledgments} This work was partially supported by grants from the Simons Foundation (\#245153 and \#514648 to G\'abor Hetyei). The author thanks Ange Bigeni and two anonymous referees for valuable advice, many great suggestions and important corrections. \end{document}
\begin{document} \begin{frontmatter} \title{Reply to: `Reply to: ``Comment on: `How much security does Y-00 protocol provide us?''''} \author{Ranjith Nair}\ead{[email protected]}, \author{Horace P. Yuen}, \author{Eric Corndorf}, \author{Prem Kumar} \address{Center for Photonic Communication and Computing\\ Department of Electrical and Computer Engineering\\ Department of Physics and Astronomy\\ Northwestern University, Evanston, IL 60208} \begin{abstract} Nishioka \emph{et al} claim in \cite{nishioka05}, elaborating on their earlier paper \cite{nishioka04}, that the direct encryption scheme called Y-00 \cite{prl,yuen04} is equivalent to a classical non-random additive stream cipher, and thus offers no more security than the latter. In this paper, we show that this claim is false and that Y-00 may be considered equivalent to a \emph{random} cipher. We explain why a random cipher provides additional security compared to its nonrandom counterpart. Some criticisms in \cite{nishioka05} on the use of Y-00 for key generation are also briefly responded to. \end{abstract} \begin{keyword} Quantum cryptography \PACS 03.67.Dd \end{keyword} \end{frontmatter} \section{Introduction} The direct encryption system called Y-00 or $\alpha\eta$ \cite{prl,yuen04} uses coherent states to transmit encrypted information between two users, Alice and Bob, sharing a secret key. Nishioka \emph{et al} claimed in \cite{nishioka04} that the security of Y-00 was completely equivalent to that of a classical non-random additive stream cipher. Although we rebutted this claim in \cite{pla05}, the authors of \cite{nishioka04} have replied in \cite{nishioka05} to the effect that we did not understand the claims made in their original paper, and that our reply was irrelevant. It is in fact true that some details of our understanding of the claims in \cite{nishioka04} differ from the purported clarifications of the same provided in \cite{nishioka05}. 
However, setting aside questions of clarity of the claims in \cite{nishioka04}, we understand their claims in \cite{nishioka05} exactly and contend that the claimed equivalence of Y-00 to a classical non-random additive stream cipher is false, thus reiterating the conclusion of \cite{pla05}. We provide arguments in this paper that conclusively demonstrate that Y-00 is not equivalent to a non-random cipher. In Section 2, we briefly review the Y-00 direct encryption protocol. In Section 3, we describe the claims in \cite{nishioka04} and \cite{nishioka05}. In Section 4, we review the concept of a random cipher and describe why they are more secure than non-random ones against attacks on the key, thus highlighting the added feature that Y-00 provides over a non-random cipher. We then rebut the claims in \cite{nishioka05} and show why Y-00 is not equivalent to a non-random stream cipher. In Section 5, we briefly respond to the criticisms in \cite{nishioka05} of using Y-00 with weak coherent states as a key generation system. \section{The Y-00 Direct Encryption Protocol} We review the Y-00 protocol, using the notations of \cite{nishioka05} for easy reference. \begin{enumerate}[(1)] \item Alice and Bob share a secret key $\mathbf{K}_s$. \item Using a pseudo-random-number generator $PRNG(\centerdot)$, e.g., a linear feedback shift register, the seed key $\mathbf{K}_s$ is expanded into a running key sequence $\mathbf{K}=PRNG(\mathbf{K}_s)=(K_1, \ldots , K_N)$, with each block $K_i \in \{0, 1, \ldots, M-1\}$. \item For each bit $r_i$ of a plaintext sequence $\mathbf{R}_N = (r_1, \ldots, r_N)$, Alice transmits the coherent state \begin{equation} \label{state} |\psi(K_i, r_i)\rangle=|\alpha e^{i\theta(K_i,r_i)}\rangle, \end{equation} where $\alpha \in \mathbb{R}$ and $\theta(K_i,r_i)=[K_i/M+(r_i\oplus\Pi(K_i))]\pi$. $\Pi(K_i)= 0$ or $1$ according to whether $K_i$ is even or odd. This distribution of possible states is shown in Fig. 1. 
Thus $K_i$ can be thought of as choosing a `basis' with the states representing bits $0$ and $1$ as its end points. \item Bob, knowing $K_i$, makes a measurement to discriminate just the two states $|\psi(K_i, r_i)\rangle$ and $|\psi(K_i, r_i \oplus 1)\rangle$. \end{enumerate} \begin{figure*} \caption{Left: Overall schematic of the $\alpha\eta$ scheme. Right: Depiction of $M/2$ bases with interleaved logical state mappings.} \label{setup} \end{figure*} The probability that Bob makes an error can be made negligibly small by choosing the mean photon number $S\equiv |\alpha|^2$ large enough. In particular, the optimal quantum measurement \cite{helstrom76} for Bob has error probability \begin{equation} \label{pB} P^B_e \sim \frac{1}{4} \exp(-4S). \end{equation} Eve, not knowing $K_i$, is faced with the problem of distinguishing the density operators $\rho^0$ and $\rho^1$ where \begin{equation} \label{rho} \rho^b=\sum_{K_i}\frac{1}{M}|\psi(K_i,b)\rangle\langle\psi(K_i,b)|. \end{equation} For a fixed signal energy $S$, Eve's optimal error probability is numerically seen to go asymptotically to $1/2$ as the number of bases $M \rightarrow \infty$ (see Fig.~1 of \cite{prl}). The intuitive reason for this is that increasing $M$ more closely interleaves the states on the circle representing bit 0 and bit 1, making them less distinguishable. Therefore, at least under \emph{individual} attacks on each qumode, the Y-00 protocol offers any desired level of security determined by the relative values of $S$ and $M$. One can also ask whether Eve can obtain the key $\mathbf{K}_s$ under a \emph{known-plaintext} attack, thus compromising the security of future data.
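To make this geometry concrete, the following Python sketch (ours, purely illustrative; the function names and the parameter choices $S=100$, $M=200$ are for demonstration only) evaluates the phase map of Eq.~(\ref{state}) and checks the antipodal and interleaving structure of the $2M$ states:

```python
import math

M, S = 200, 100                 # illustrative parameter choices
alpha = math.sqrt(S)            # mean photon number S = |alpha|^2

def theta(K, r):
    # Phase of the transmitted state, Eq. (1):
    # theta = [K/M + (r XOR Pi(K))] * pi, with Pi(K) = K mod 2.
    return (K / M + (r ^ (K % 2))) * math.pi

def bit_at_slot(j):
    # The 2M states sit at the angles j*pi/M for j = 0, ..., 2M-1; slot j
    # carries running key K = j mod M and bit r = (j // M) XOR (K mod 2).
    K = j % M
    return (j // M) ^ (K % 2)

# The two states of a given basis K are antipodal on the circle ...
assert abs(theta(0, 1) - theta(0, 0) - math.pi) < 1e-12
# ... and, along each semicircle, neighbouring states carry opposite bits,
# so a larger M interleaves the bit-0 and bit-1 states ever more finely.
assert all(bit_at_slot(j) != bit_at_slot(j + 1) for j in range(M - 1))

# Bob, knowing K_i, only discriminates one antipodal pair; his optimal
# error probability, Eq. (2), is ~ (1/4) exp(-4S) -- negligible here.
p_bob = 0.25 * math.exp(-4 * S)
assert p_bob < 1e-170
```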
While the complete analysis of the security of Y-00 under known-plaintext attacks has not been performed, we can still make some remarks about the security that Y-00 offers against an attack involving a fixed measurement (e.g., a heterodyne or phase measurement) on each qumode followed by a \emph{brute-force trial} of the remaining key candidates. Indeed, a simple estimate of the noise in the phase measurement (which performs better than the heterodyne measurement) can be obtained by assuming that the measurement noise moves the measured angle uniformly around the transmitted value, within one standard deviation $1/|\alpha|$. Then, it is seen that the number of possible bases $N_\sigma$ consistent with the known bit $b$ in each measurement is $N_\sigma = M/(2\pi|\alpha|)$. Thus, the randomization provided by the quantum noise creates a search problem for Eve that would \emph{not} be present if Y-00 were a non-random cipher. \section{Claims in Nishioka \emph{et al} \cite{nishioka05}} Nishioka \emph{et al} claim that Y-00 can be reduced to a classical non-random stream cipher under the attack that we now review. For each transmission $i$, Eve makes a heterodyne measurement on the state and maps the outcome to one of $2M$ possible values. Thus, the outcome $j \in \{0, \cdots, 2M-1\}$ is obtained if the heterodyne result falls in the wedge for which the phase $\theta \in [\theta_j-\pi/2M, \theta_j+\pi/2M]$, where $\theta_j= \pi j/M$. Further, for $q \in \{0, \cdots, M-1\}$ representing the $M$ possible values of each $K_i$, Nishioka \emph{et al} construct a function $F_j(q)$ with the property that, for each $i$, and the corresponding running key value $K_i$ actually used, \begin{equation} \label{decryption} F_{j^{(i)}}(K_i)=r_i \end{equation} with probability very close to 1.
In fact, for the parameters $S=100$ and $M=200$, they calculate the probability that Eq.(\ref{decryption}) fails to hold to be $10^{-44}$, a value they argue is negligible for any practical purpose. The authors of \cite{nishioka05} further claim that the above function $F_{j^{(i)}}(q)$ can always be represented as the XOR of two single-bit functions $G_{j^{(i)}}(q)$ and $l_{j^{(i)}}$, where $l_{j^{(i)}}$ depends \emph{only} on the measurement result. Thus, they make the claim that the equation \begin{equation} \label{reduction} l_{j^{(i)}}=r_i \oplus G_{j^{(i)}}(K_i) \end{equation} holds with probability effectively equal to 1. They then observe that a classical additive stream cipher \cite{mvv96,massey88} (which is non-random by definition) satisfies \begin{equation}\label{streamcipher} l_i=r_i \oplus \tilde{k}_i, \end{equation} where $r_i$, $l_i$, and $\tilde{k}_i$ are respectively the $i$th plaintext bit, ciphertext bit and running key bit. Here, $\tilde{k}_i$ is obtained by using a seed key in a pseudo-random-number generator to generate a longer running key. The authors of \cite{nishioka05} then argue that since $l_{j^{(i)}}$ in Eq.(\ref{reduction}), like the $l_i$ in Eq.(\ref{streamcipher}), depends only on the measurement result, the validity of Eq.(\ref{reduction}) proves that the security of Y-00 is equivalent to that of a classical stream cipher. In particular, they claim that by interpreting $l_{j^{(i)}}$ as the ciphertext, Y-00 is not a random cipher, i.e., it does not satisfy Eq.(\ref{random}) of the next section. We analyze and respond to these claims and other statements in \cite{nishioka05} in the following section. \section{Reply to claims in \cite{nishioka05}} First of all, we review the definition of a \emph{random} cipher. Such a cipher is called a `privately-randomized cipher' in \cite{massey88}, but we will call it just a random cipher.
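For contrast with Y-00, Eq.~(\ref{streamcipher}) describes an ordinary additive stream cipher, in which the ciphertext is a deterministic function of the plaintext and the seed key. A toy sketch of such a cipher follows (our own illustration; a real cipher would use a cryptographic generator rather than Python's `random`):

```python
import random

def running_key(seed, n):
    """Toy stand-in for the pseudo-random-number generator that
    expands a seed key into a running key of n bits."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n)]

def encrypt(plaintext_bits, seed):
    # l_i = r_i XOR k_i: the ciphertext is a deterministic function of
    # plaintext and key, i.e. H(Y_N | K_s, R_N) = 0 -- a non-random cipher.
    k = running_key(seed, len(plaintext_bits))
    return [r ^ ki for r, ki in zip(plaintext_bits, k)]

def decrypt(cipher_bits, seed):
    # XOR with the same running key inverts the encryption.
    return encrypt(cipher_bits, seed)

msg = [1, 0, 1, 1, 0, 0, 1]
assert decrypt(encrypt(msg, 42), 42) == msg
```

In a non-random cipher like this one, encrypting the same plaintext with the same key always yields the same ciphertext; Y-00, by contrast, randomizes its output via quantum noise.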
A random cipher is defined by the two conditions: \begin{equation} \label{random} H(Y_N|\mathbf{K}_s, R_N)\neq 0, \end{equation} and \begin{equation} \label{decrypt} H(R_N|Y_N,\mathbf{K}_s)=0. \end{equation} Here, $Y_N$ refers to the $N$-symbol-long ciphertext, and $R_N$ and $\mathbf{K}_s$ are the plaintext and secret key, as in Section 1. Note that there is no restriction on the alphabet of $Y_N$, which may be binary, $M$-ary or even continuous. Eq.(\ref{random}) implies that, for a given key, the plaintext may be mapped by the encrypter into more than one possible ciphertext. However, it is still required that the plaintext can be recovered from the ciphertext and the key, which is the meaning of Eq.(\ref{decrypt}). The case where Eq.(\ref{decrypt}) holds but Eq.(\ref{random}) does not is the usual case of a non-random cipher in standard cryptography. The advantage of a random cipher, which is briefly described in \cite{pla05} but not appreciated in \cite{nishioka05}, is that it may be secure against attacks on the key when the attacker knows the statistics $p(R_N)$ of the data. In the case where the $r_i$ are independent and identically distributed, a random cipher can be constructed that provides complete information-theoretic security of the key \cite{gunther88}, in the sense that $H(\mathbf{K}_s|Y_N)=H(\mathbf{K}_s)$. Such security cannot be obtained with non-random ciphers \cite{yuen05}. See \cite{yuen05} for a detailed discussion on the security of random ciphers. Although we do not claim such information-theoretic security for Y-00, the possibility is not ruled out. We have already commented on the added brute-force search complexity of Y-00 against attacks on the key in Section 2. We now proceed to prove that the claim in \cite{nishioka05} that Y-00 is reducible for Eve to a non-random stream cipher under a heterodyne measurement is false.
To begin with, we believe that Eq.~(\ref{decryption}) (Eq.~(14) in \cite{nishioka05}) is correct with the probability given by them. The content of this equation is simply that Eve is able to decrypt the transmitted bit from her measurement data $J_N$ and the key $\mathbf{K}_s$. In other words, it merely asserts that Eq.(\ref{decrypt}) holds for $Y_N=J_N$. As such, it does not contradict, and is in fact \emph{necessary} for, the claim that Y-00 is a random cipher for Eve. In fact, we already claimed in \cite{yuen04} and \cite{pla05} that such a condition holds. In this regard, note also that the statement in Section 4.1 of \cite{nishioka05} that ``informational secure key generation is impossible when (Eq.(\ref{decryption}) of this paper) holds'' is irrelevant, since direct encryption rather than key generation is being considered here. We also agree with the claim of Nishioka \emph{et al} that it is possible to find functions $l_{j^{(i)}}$ and $G_{j^{(i)}}(q)$, the former depending only on the measurement result $j^{(i)}$, such that Eq.(\ref{reduction}) holds, again with probability effectively equal to one. The error in \cite{nishioka05} is to use this equation to claim, in analogy with Eq.~(\ref{streamcipher}), that Y-00 is reducible to a classical stream cipher, and hence non-random. To understand the error in their argument, note that, for Eq.~(\ref{streamcipher}) to represent an \emph{additive} stream cipher, the $l_i$ in that equation should be a function \emph{only} of the measurement result, and $\tilde{k}_i$ should be a function \emph{only} of the running key. While the former requirement is true also for the $l_{j^{(i)}}$ in Eq.~(\ref{reduction}), the latter is certainly \emph{false} for the function $G_{j^{(i)}}(K_i)$ in Eq.~(\ref{reduction}), since it depends \emph{both} on the measurement result $j^{(i)}$ and the running key $K_i$.
Indeed, it can be seen that the definition of the function $F_{j^{(i)}}(K_i)$, and thus of $G_{j^{(i)}}(q)$, depends on the sets $C_{j^{(i)}}^+$ and $C_{j^{(i)}}^-$ defined in Eq.~(12) of \cite{nishioka05}. The identity of these sets in turn depends on the relative angle between the basis $q$ and Eve's estimated basis $\tilde{j^{(i)}}= j^{(i)} \bmod M.$ Thus, it is clearly the case that $G_{j^{(i)}}(K_i)$ must depend both on $j^{(i)}$ and $K_i$, a fact also revealed by the inclusion of the subscript $j^{(i)}$ by the authors of \cite{nishioka05} in the notation for $G$. We have shown above why the representation of Y-00 via Eq.~(\ref{reduction}) is not equivalent to an additive stream cipher. The question may be raised, however, whether Eq.~(\ref{reduction}) reduces Y-00 to any kind of non-random cipher whatsoever. We will show below that the answer is negative. Indeed, Nishioka \emph{et al} emphasize that Y-00 is non-random because \begin{equation} \label{l} H(\mathbf{L}_N|R_N, \mathbf{K}_s) =0 \end{equation} holds, where $\mathbf{L}_N=(l_{j^{(1)}}, \ldots, l_{j^{(N)}})$. This equation follows from Eq.~(\ref{reduction}), and so, by considering $\mathbf{L}_N\equiv Y_N$ to be the ciphertext, Eq.(\ref{random}) is not satisfied, thus supposedly making Y-00 non-random. The choice of $\mathbf{L}_N$ as the ciphertext is supported by the statement in \cite{nishioka05} that ``It is a matter of preference what we should refer to as ``ciphertext''.'' This is not true without qualification. It ignores the \emph{crucial} point that the random variable that is chosen as ciphertext must be sufficient to decrypt to the corresponding $R_N$ for every value of the key. We will show below that, for Y-00, the ciphertext alphabet needs to be at least $(2M)$-ary, although larger, even continuous, alphabets (such as the possible values of the phase angle in a phase measurement or of the two quadrature amplitudes in a heterodyne measurement) may be used.
Thus, if one wants to claim equivalence to a classical cipher (random or non-random), for a particular choice of ciphertext $Y_N$, one must show that Eq.~(\ref{decrypt}) is satisfied using that \emph{same} ciphertext $Y_N$. In the case where $Y_N=\mathbf{L}_N$, one may readily see that Eq.~(\ref{decrypt}) is not satisfied, i.e., $H(\mathbf{R}_N|\mathbf{L}_N,\mathbf{K}_s)\neq 0$. The reason is that, as we noted in our description above of the function $G_{j^{(i)}}(q)$, decrypting $r_i$ requires knowledge of certain ranges in which the angle between the basis chosen by the running key and the estimated basis $\tilde{j^{(i)}}$ falls. To convey this information \emph{for every possible} $j^{(i)}$, one needs at least $\log_2(2M)$ bits. It follows that the single bit $l_{j^{(i)}}$ is insufficient for the purpose of decryption, and so Eq.~(\ref{decrypt}) cannot be satisfied for $Y_N=\mathbf{L}_N$. Therefore, we conclude that, in the interpretation of $\mathbf{L}_N$ as the ciphertext, decryption is not possible even if Eve has the key $\mathbf{K}_s$. Indeed, it is $\mathbf{J}_N$ that can be regarded as a possible ciphertext, since Eq.~(\ref{decrypt}) is satisfied for $Y_N=\mathbf{J}_N$. The fact that $\mathbf{J}_N$ is a true ciphertext sufficient for decryption is implicit in the dependence on $j^{(i)}$ of the function $G$ in Eq.~(\ref{reduction}). However, with this choice of ciphertext, Y-00 necessarily becomes a \emph{random} cipher, because $H(\mathbf{J}_N|\mathbf{R}_N,\mathbf{K}_s) \neq 0$ -- this latter fact is admitted by Nishioka \emph{et al} in \cite{nishioka05}. We hope that the discussion above makes it clear that the `reduction' of Y-00 in \cite{nishioka05} to a non-random cipher is false, and that, in fact, no such reduction can be made under the heterodyne attack considered in \cite{nishioka05}. However, the representation of the ciphertext by $Y_N=\mathbf{J}_N$ does reduce it to a \emph{random} cipher under the heterodyne attack.
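The counting argument above is easy to make concrete (our own arithmetic, using the $M=200$ value from the example in \cite{nishioka05}):

```python
import math

# A ciphertext symbol must distinguish all 2M heterodyne outcomes so that,
# together with the key, r_i can be recovered for every possible j^(i).
M = 200
bits_needed = math.log2(2 * M)   # log2(400), about 8.64
print(math.ceil(bits_needed))    # 9 bits per qumode -- the single bit l is far too little
```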
As a result, it can be implemented classically in principle, but not in practice. This is because true random numbers can only be generated physically, not by an algorithm, and the practical rate for such generation is many orders of magnitude below the $\sim$ Gbps rate in our experiments, where the coherent-state quantum noise does the randomization \textit{automatically}. Furthermore, our physical ``analog'' scheme does not sacrifice bandwidth or data rate compared to other known randomization techniques. See \cite{yuen05} for a detailed discussion. We conclude this section by responding to some other statements made in \cite{nishioka05}. In Section 3.2, Nishioka \emph{et al} state that ``It is interesting to note that a smaller $M$ (but not $M$=1) is preferable for increasing the stochastic property.'' Here, they mean that the decryption using $\mathbf{J}_N$ and the key is noisier for smaller $M$. We claim that this cannot be the case and that the decryption probability is essentially independent of $M$. In any case, for the heterodyne quantum noise to cover more states on the circle, it is clear that a larger $M$ is preferable (see our discussion in Section 2). In Section 3.3, Nishioka \emph{et al} claim that ``The value of $l_{j^{(i)}}$ does not have to be the same as that of $l_{j^{(i')}}$ when $i \neq i'$, even if $j^{(i)}=j^{(i')}$ holds.'' This statement is in direct contradiction to their previous statement in the same subsection that ``$l_{j^{(i)}}$ depends only on the measurement value $j^{(i)}$''. In the same subsection, Nishioka \emph{et al} claim that ``In (\cite{nishioka04}), we showed another concrete construction of $l_{j^{(i)}}$ ...''. In our opinion, no explicit construction of $l_{j^{(i)}}$ was given in that paper, and the description there seemed to us quite vague. We were led to the choice of $l_i$ described in \cite{pla05} by the attempt to make the additive stream cipher representation Eq.~(\ref{streamcipher}) valid.
In fact, such a representation is claimed by Nishioka \emph{et al} in their Case 2 of \cite{nishioka04}. It turned out, however, that decryption using that $l_i$ suffered a $0.1$--$1\%$ error, depending on the value of $S$ used. See \cite{pla05} for further details. In any case, as we have shown above, no construction of a single bit from the heterodyne measurement results can satisfy Eq.(\ref{decrypt}) with the extremely low failure probability given in \cite{nishioka05}. \section{Remarks regarding Key Generation using Y-00} In \cite{pla05}, we replied to the claim that information-theoretically secure key generation is impossible for Y-00 by showing a 6 dB advantage that the users have over Eve launching a heterodyne attack. This advantage can be used for practical key generation using a small enough value of $S$. This is acknowledged in \cite{nishioka05}, thus validating our claim that it is indeed possible. However, the new issue is raised in \cite{nishioka05} that this advantage is too small to allow Y-00 to generate keys, in their example, over a distance of 50 km. In this connection, we merely wish to state that (i) this is not the original issue under dispute and we do not wish to bring a new issue into the present discussion; (ii) similar loss limits are also present for BB84; and (iii) other techniques and schemes are already discussed in \cite{yuen04} to overcome this distance limit. \section{Conclusion} We have demonstrated that, under a heterodyne measurement, the Y-00 Direct Encryption protocol cannot be reduced to a classical non-random stream cipher, as claimed in \cite{nishioka05}. Its security under heterodyne attack is equivalent to that of a corresponding random cipher. \end{document}
\begin{document} \global\long\def\mathbf{p}{\mathbf{p}} \global\long\def\mathbf{q}{\mathbf{q}} \global\long\def\mathfrak{C}{\mathfrak{C}} \global\long\def\mathcal{P}{\mathcal{P}} \global\long\def\mathbf{p}r{\operatorname{pr}} \global\long\def\operatorname{im}{\operatorname{im}} \global\long\def\operatorname{otp}{\operatorname{otp}} \global\long\def\operatorname{dec}{\operatorname{dec}} \global\long\def\operatorname{suc}{\operatorname{suc}} \global\long\def\mathbf{p}re{\operatorname{pre}} \global\long\def\mathbf{q}e{\operatorname{qf}} \global\long\def\mathop{\mathpalette\mathfrak{I}nd{}}{\operatorname{ind}} \global\long\def\operatorname{Nind}{\operatorname{Nind}} \global\long\def\operatorname{lev}{\operatorname{lev}} \global\long\def\operatorname{Suc}{\operatorname{Suc}} \global\long\def\operatorname{HNind}{\operatorname{HNind}} \global\long\def{\lim}{{\lim}} \global\long\def\frown{\frown} \global\long\def\operatorname{cl}{\operatorname{cl}} \global\long\def\operatorname{tp}{\operatorname{tp}} \global\long\def\operatorname{id}{\operatorname{id}} \global\long\def\left(\star\right){\left(\star\right)} \global\long\def\mathbf{q}f{\operatorname{qf}} \global\long\def\operatorname{ai}{\operatorname{ai}} \global\long\def\operatorname{dtp}{\operatorname{dtp}} \global\long\def\operatorname{acl}{\operatorname{acl}} \global\long\def\operatorname{nb}{\operatorname{nb}} \global\long\def{\lim}{{\lim}} \global\long\def\leftexp#1#2{{\vphantom{#2}}^{#1}{#2}} \global\long\def\operatorname{interval}{\operatorname{interval}} \global\long\def\emph{at}{\emph{at}} \global\long\def\mathfrak{I}{\mathfrak{I}} \global\long\def\operatorname{uf}{\operatorname{uf}} \global\long\def\operatorname{ded}{\operatorname{ded}} \global\long\def\operatorname{Ded}{\operatorname{Ded}} \global\long\def\operatorname{Df}{\operatorname{Df}} \global\long\def\operatorname{Th}{\operatorname{Th}} \global\long\def\operatorname{eq}{\operatorname{eq}} \global\long\def\operatorname{Aut}{\operatorname{Aut}} 
\global\long\defac{ac} \global\long\def\operatorname{Df}One{\operatorname{df}_{\operatorname{iso}}} \global\long\def\modp#1{\mathbf{p}mod#1} \global\long\def\sequence#1#2{\left\langle #1\left|\,#2\right.\right\rangle } \global\long\def\set#1#2{\left\{ #1\left|\,#2\right.\right\} } \global\long\def\operatorname{Diag}{\operatorname{Diag}} \global\long\def\mathbb{N}{\mathbb{N}} \global\long\def\mathrela#1{\mathrel{#1}} \global\long\def\mathord{\sim}{\mathord{\sim}} \global\long\def\mathordi#1{\mathord{#1}} \global\long\def\mathbb{Q}{\mathbb{Q}} \global\long\def\operatorname{dense}{\operatorname{dense}} \global\long\def\operatorname{cof}{\operatorname{cof}} \global\long\def\operatorname{tr}{\operatorname{tr}} \global\long\def\operatorname{tr}eeexp#1#2{#1^{\left\langle #2\right\rangle _{\operatorname{tr}}}} \global\long\def\times{\times} \global\long\def\Vdash{\Vdash} \global\long\def\mathbb{V}{\mathbb{V}} \global\long\def\mathbb{U}{\mathbb{U}} \global\long\def\dot{\tau}{\dot{\tau}} \global\long\def\Psi{\Psi} \global\long\def2^{\aleph_{0}}{2^{\aleph_{0}}} \global\long\def\MA#1{{MA}_{#1}} \global\long\def\rank#1#2{R_{#1}\left(#2\right)} \global\long\def\cal#1{\mathcal{#1}} \def\mathfrak{I}nd#1#2{#1\setbox0=\hbox{$#1x$}\kern\wd0\hbox to 0pt{\hss$#1\mid$\hss} \lower.9\ht0\hbox to 0pt{\hss$#1\smile$\hss}\kern\wd0} \def\Notind#1#2{#1\setbox0=\hbox{$#1x$}\kern\wd0\hbox to 0pt{\mathchardef \nn="3236\hss$#1\nn$\kern1.4\wd0\hss}\hbox to 0pt{\hss$#1\mid$\hss}\lower.9\ht0 \hbox to 0pt{\hss$#1\smile$\hss}\kern\wd0} \def\mathop{\mathpalette\Notind{}}{\mathop{\mathpalette\Notind{}}} \global\long\def\mathop{\mathpalette\mathfrak{I}nd{}}{\mathop{\mathpalette\mathfrak{I}nd{}}} \global\long\def\mathop{\mathpalette\Notind{}}{\mathop{\mathpalette\Notind{}}} \global\long\def\average#1#2#3{Av_{#3}\left(#1/#2\right)} \global\long\def\mathfrak{F}{\mathfrak{F}} \global\long\def\mx#1{Mx_{#1}} \global\long\def\mathfrak{L}{\mathfrak{L}} \global\long\defE_{\mbox{sat}}{E_{\mbox{sat}}} 
\global\long\defE_{\mbox{rep}}{E_{\mbox{rep}}} \global\long\defE_{\mbox{com}}{E_{\mbox{com}}} \global\long\def\trianglelefteq{\operatorname{tr}ianglelefteq} \global\long\def\trianglerighteq{\operatorname{tr}ianglerighteq} \global\long\def{\bf K}_{\lambda,\theta}^{M,C,\bar{d}}{{\bf K}_{\lambda,\theta}^{M,C,\bar{d}}} \global\long\def{\bf MxK}_{\lambda,\theta}^{M,C,\bar{d}}{{\bf MxK}_{\lambda,\theta}^{M,C,\bar{d}}} \global\long\def\sch#1#2{\Phi_{#1,#2}} \global\long\def\timesx{{\bf x}} \global\long\def{\bf y}{{\bf y}} \global\long\def{\bf z}{{\bf z}} \global\long\def\mathfrak{a}{\mathfrak{a}} \global\long\def\mathfrak{b}{\mathfrak{b}} \title{THE GENERIC PAIR CONJECTURE FOR DEPENDENT FINITE DIAGRAMS} \author{Itay Kaplan, Noa Lavi, Saharon Shelah} \thanks{The first author would like to thank the Israel Science Foundation for partial support of this research (Grant no. 1533/14). } \thanks{The research leading to these results has received funding from the European Research Council, ERC Grant Agreement n. 338821. No. 1055 on the third author's list of publications.} \address{Itay Kaplan \\ The Hebrew University of Jerusalem\\ Einstein Institute of Mathematics \\ Edmond J. Safra Campus, Givat Ram\\ Jerusalem 91904, Israel} \email{[email protected]} \urladdr{https://sites.google.com/site/itay80/ } \address{Noa Lavi\\ The Hebrew University of Jerusalem\\ Einstein Institute of Mathematics \\ Edmond J. Safra Campus, Givat Ram\\ Jerusalem 91904, Israel} \email{[email protected]} \address{Saharon Shelah\\ The Hebrew University of Jerusalem\\ Einstein Institute of Mathematics \\ Edmond J. 
Safra Campus, Givat Ram\\ Jerusalem 91904, Israel} \address{Saharon Shelah \\ Department of Mathematics\\ Hill Center-Busch Campus\\ Rutgers, The State University of New Jersey\\ 110 Frelinghuysen Road\\ Piscataway, NJ 08854-8019 USA} \email{[email protected]} \urladdr{http://shelah.logic.at/} \subjclass[2010]{03C45, 03C95, 03C48.} \begin{abstract} This paper generalizes Shelah's generic pair conjecture (now theorem) for the measurable cardinal case from first order theories to finite diagrams. We use homogeneous models in the place of saturated models. \end{abstract} \maketitle \section{Introduction} The generic pair conjecture states that for every cardinal $\lambda$ such that $\lambda^{+}=2^{\lambda}$ and $\lambda^{<\lambda}=\lambda$, a complete first order theory $T$ is dependent if and only if, whenever $M$ is a saturated model whose size is $\lambda^{+}$, then, after writing $M=\bigcup_{\alpha<\lambda^{+}}M_{\alpha}$ where $M_{\alpha}$ are models of size $\lambda$, there is a club of $\lambda^{+}$ such that for every pair of ordinals $\alpha<\beta$ of cofinality $\lambda$ from the club, the pair of models $\left(M_{\beta},M_{\alpha}\right)$ has the same isomorphism type. This conjecture is now proved for $\lambda$ large enough. The non-structure side is proved in \cite{Sh877,Sh906} and the other direction is proved in \cite{Sh:900,Sh950}, all by the third author. In \cite{Sh:900}, the theorem is proved for the case where $\lambda$ is measurable. This is the easiest case of the theorem, and this is the case we will focus on here. In \cite[Theorem 7.3]{Sh950}, the conjecture is proved when $\lambda>\left|T\right|^{+}+\beth_{\omega}^{+}$. The current paper has two agendas. The first is to serve as an exposition for the proof of the theorem in the case where $\lambda$ is measurable. 
There are already two expositions by Pierre Simon of some other parts of \cite{Sh:900,Sh950}, which are available on his website \footnote{\url{http://www.normalesup.org/~simon/notes.html} }. The second is to generalize the structure side of this theorem in the measurable cardinal case to finite diagrams. As an easy byproduct, we also generalize a weak version of the ``recounting of types'' result \cite[Conclusion 3.13]{Sh950}, which states that when $\lambda$ is measurable and $M$ is saturated of cardinality $\lambda$, then the number of types over $M$ up to conjugation is $\leq\lambda$. See Corollary \ref{cor:main} below. A finite diagram $D$ is a collection of types in finitely many variables over $\emptyset$ in some complete theory $T$. Once we fix such a $D$, we concentrate on $D$-models, which are models of $T$ that realize only types from $D$. For instance, in a theory with infinitely many unary predicates $P_{i}$, $D$ could exclude the type saying that $x\notin P_{i}$ for all $i$; thus $D$-models are just unions of the $P_{i}$'s. In this context, saturated models become $D$-saturated models, which is the same as being homogeneous and realizing $D$ (see Lemma \ref{lem:Grossberg}), so our model $M$ will be $D$-saturated instead of saturated. We propose a definition for when a finite diagram $D$ is dependent. This definition has the feature that if the underlying theory is dependent, then so is $D$, so there are many examples of such diagrams. We also give an example of an independent theory $T$ with some dependent $D$ (Example \ref{exa:independent theory, dependent diagram}). The proof follows \cite{Sh:900} and also uses constructions from \cite{Sh950} \footnote{Instead of ``strict decompositions'' from \cite{Sh:900} we use $tK$ from \cite{Sh950}. }. However, in order to make the proof work, we will need the presence of a strongly compact cardinal $\theta$ that will help us ensure that the types we get are $D$-types and so are realized in the $D$-saturated models.
\subsubsection*{Organization of the paper} In Section \ref{sec:preliminaries} we present finite diagrams and prove or cite all the facts we shall need about them and about measurable and strongly compact cardinals. We also give a precise definition of when a diagram $D$ is dependent, and prove several equivalent formulations. In Section \ref{sec:The-generic-pair} we state the generic pair conjecture in the terminology of finite diagrams, and give a general framework for proving it: we introduce \emph{decompositions} and \emph{good families} and prove that if such things exist, then the theorem is true. Section \ref{sec:The-type-decomposition} is devoted to proving that nice decompositions exist. This is done in two steps. In Section \ref{sub:Tree-type-decomposition} we construct the first kind of decomposition (\emph{tree-type decomposition}), which is the building block of the decomposition constructed in Section \ref{sub:Self-solvable-decomposition} (\emph{self-solvable decomposition}). In Section \ref{sec:Finding-a-good} we prove that the family of self-solvable decompositions over a $D$-saturated model forms a good family, and deduce the generic pair conjecture. \paragraph*{Acknowledgment} We would like to thank the anonymous referee for his careful reading, for his many useful comments and for finding several inaccuracies. \section{\label{sec:preliminaries}Preliminaries} We start by giving the definition of homogeneous structures and of $D$-models. \begin{defn} Let $M$ be some structure in some language $L$. We say that $M$ is \emph{$\kappa$-homogeneous} \footnote{In some publications this notion is called $\kappa$-sequence homogeneous, but here we decided upon this simpler notation, which is also standard; see \cite[page 480, 1.3]{Hod}. } if: \begin{itemize} \item for every $A\subseteq M$ with $|A|<\kappa$, every partial elementary map $f$ defined on $A$, and every $a\in M$, there is some $b\in M$ such that $f\cup\{(a,b)\}$ is an elementary map.
\end{itemize} We say that $M$ is \emph{homogeneous} if it is $\left|M\right|$-homogeneous. \end{defn} Note that when $M$ is homogeneous, it is also \emph{strongly homogeneous}, meaning that if $f$ is a partial elementary map with domain $A$ such that $\left|A\right|<\left|M\right|$, then $f$ extends to an automorphism of $M$. Fix a complete first order theory $T$ in a language $L$ with a monster model $\mathfrak{C}$ --- a saturated model containing all sets and models of $T$, with cardinality $\bar{\kappa}=\bar{\kappa}^{<\bar{\kappa}}$ bigger than any set or model we will consider. \begin{defn} For $A\subseteq\mathfrak{C}$, let $D(A)=\set{\operatorname{tp}(\bar{a}/\emptyset)}{\bar{a}\subseteq A,|\bar{a}|<\omega}$. A set $D$ of complete $L$-types over $\emptyset$ is a \emph{finite diagram in $T$} when it is of the form $D(A)$ for some $A$. If $D$ is a finite diagram in $T$, then a set $B\subseteq\mathfrak{C}$ is a \emph{$D$-set} if $D\left(B\right)\subseteq D$. A model of $T$ which is a $D$-set is a \emph{$D$-model.} Let $A\subseteq\mathfrak{C}$ be a $D$-set. Let $p$ be a complete type over $A$ (in any number of variables). We say that $p$ is a \emph{$D$-type} if for every $\bar{c}$ realizing $p$, $A\cup\bar{c}$ is a $D$-set. We denote the set of $D$-types over $A$ by $S_{D}\left(A\right)$ (and as usual we use superscripts to denote the number of variables, as in $S_{D}^{<\omega}\left(A\right)$). We say that $M$ is \emph{$\left(D,\kappa\right)$-saturated} if whenever $A\subseteq M$ and $\left|A\right|<\kappa$, every $p\in S_{D}^{1}\left(A\right)$ is realized in $M$. We say that $M$ is \emph{$D$-saturated} if it is $\left(D,\left|M\right|\right)$-saturated. \end{defn} Note that when $D$ is \emph{trivial}, i.e., $D=\bigcup\set{D_{n}\left(T\right)}{n<\omega}$ (with $D_{n}\left(T\right)$ being the set of all complete $n$-types over $\emptyset$), every model of $T$ is a $D$-model. The connection between $D$-saturation and homogeneity becomes clear due to the following lemma.
\begin{lem} \label{lem:Grossberg}\cite[Lemma 2.4]{grossbergLessmann} Let $D$ be a finite diagram. A $D$-model $M$ is $\left(D,\kappa\right)$-saturated if and only if $D\left(M\right)=D$ and $M$ is $\kappa$-homogeneous. \end{lem} Just as in the first order case, we get the following. \begin{cor} \label{cor:homo-isom} Let $D$ be a finite diagram. If $M$, $N$ are $D$-saturated of the same cardinality, then $M\cong N$. Furthermore, if $\lambda^{<\lambda}=\lambda$, and there is a $\left(D,\lambda\right)$-saturated model, then there exists a $D$-saturated model of size $\lambda$. \end{cor} The next natural thing, after obtaining this equivalence, would be to look for monsters. A diagram $D$ is \emph{good} if for every $\lambda$ there exists a $(D,\lambda)$-saturated model (see \cite[Definition 2.1]{Sh003}). We will assume throughout that $D$ is good. By Corollary \ref{cor:homo-isom}, as we assumed that $\bar{\kappa}^{<\bar{\kappa}}=\bar{\kappa}$, there is a $D$-saturated model $\mathfrak{C}_{D}\prec\mathfrak{C}$ of cardinality $\bar{\kappa}$ --- the homogeneous monster. From now on we make these assumptions without mentioning them explicitly. Let us recall the general notion of an average type along an ultrafilter. \begin{defn} \label{def:avgtp} Let $A\subseteq\mathfrak{C}_{D}$, $I$ some index set, $\bar{a}_{i}$ tuples of the same length for $i\in I$, and let $\cal U$ be an ultrafilter on $I$. The \emph{average type} $\average{\sequence{\bar{a_{i}}}{i\in I}}A{\cal U}$ is the type consisting of all the formulas $\phi\left(\bar{x},\bar{c}\right)$ over $A$ such that $\set{i\in I}{\mathfrak{C}_{D}\models\phi\left(\bar{a}_{i},\bar{c}\right)}\in\cal U$. \end{defn} When $\cal U$ is $\kappa$-complete, the average is $<\kappa$-satisfiable in the sequence $\sequence{\bar{a}_{i}}{i\in I}$ (any $<\kappa$ many formulas are realized in the sequence). It follows that the average type is a $D$-type (see below).
\begin{lem} \label{lem:average type along ultrafilter} Let $A,I$ be as in Definition \ref{def:avgtp}, and let $\cal U$ be a $\kappa$-complete ultrafilter on $I$, where $\kappa>\left|T\right|$. Then $r=\average{\sequence{\bar{a_{i}}}{i\in I}}A{\cal U}$ is a $D$-type.\end{lem} \begin{proof} We must show that if $\bar{c}\models r$ (in $\mathfrak{C}$), then $A\cup\bar{c}$ is a $D$-set. We may assume that $\bar{c}$ is a finite tuple (and so are the tuples $\bar{a}_{i}$ for $i\in I$). It is enough to see that if $\bar{c}\bar{a}$ is a finite tuple of elements from $\bar{c}\cup A$, then for some $i\in I$, $\bar{a}_{i}\bar{a}\equiv\bar{c}\bar{a}$ (i.e., they have the same type over $\emptyset$). For each formula $\varphi\left(\bar{x},\bar{a}\right)$ such that $\varphi\left(\bar{c},\bar{a}\right)$ holds, the set $\set{i\in I}{\mathfrak{C}_{D}\models\varphi\left(\bar{a}_{i},\bar{a}\right)}\in\cal U.$ Since there are at most $\left|T\right|$ such formulas, by $\kappa$-completeness, there is some $i\in I$ in the intersection of all these sets, so we are done. \end{proof} Now we turn to Hanf numbers. Let $\mu\left(\lambda,\kappa\right)$ be the first cardinal $\mu$ such that if $T_{0}$ is a theory of size $\leq\lambda$, $\Gamma$ a set of finitary types in $T_{0}$ (over $\emptyset$) of cardinality $\leq\kappa$, and for every $\chi<\mu$ there is a model of $T_{0}$ of cardinality $\geq\chi$ omitting all the types in $\Gamma$, then there is such a model in arbitrarily large cardinality. Of course, when $\kappa=0$, $\mu\left(\lambda,\kappa\right)=\aleph_{0}$. In our context, $T_{0}=T$, and $\Gamma=\bigcup\set{D_{n}\left(T\right)}{n<\omega}\backslash D$, so we are interested in $\mu\left(\left|T\right|,\left|\Gamma\right|\right)$, which we will denote by $\mu\left(D\right)$, the \emph{Hanf number} of $D$. In \cite[Chapter VII, 5]{Sh:c} this number is given an upper bound: $\mu\left(D\right)\leq\beth_{\left(2^{\left|T\right|}\right)^{+}}$.
\begin{defn} A finite diagram $D$ has \emph{the independence property} if there exists a formula $\phi\left(\bar{x},\bar{y}\right)$ which has it, which means that there is an indiscernible sequence $\sequence{\bar{a}_{i}}{i<\mu\left(D\right)}$ and $\bar{b}$ in $\mathfrak{C}_{D}$ such that $\mathfrak{C}_{D}\models\phi(\bar{b},\bar{a}_{i})$ if and only if $i$ is even. Otherwise we say that $D$ is \emph{dependent}. \end{defn} Of course, if the underlying theory $T$ is dependent, then $D$ is dependent. \begin{example} \label{exa:independent theory, dependent diagram}Let $L=\left\{ R,P,Q\right\} $ where $P$ and $Q$ are unary predicates, and $R$ is a binary predicate. Let $T$ be the model completion of the theory that states that $R\subseteq Q\times P$. So $T$ is complete and has quantifier elimination. Let $L'=L\cup\set{c_{i}}{i<\omega}$ where $c_{i}$ are constant symbols, and let $T'$ be an expansion of $T$ that says that $c_{i}\in P$ and $c_{i}\neq c_{j}$ for $i\neq j$. So $T'$ is also complete and admits quantifier elimination. Since $T$ has the independence property, so does $T'$. Let $p\left(x\right)\in S^{1}\left(\emptyset\right)$ say that $x\in P$ and $x\neq c_{i}$ for all $i<\omega$. Finally, let $D$ be the finite diagram $S^{<\omega}\left(\emptyset\right)\backslash\left\{ p\right\} $. It is easily seen that $D$ is good (if $\mathfrak{C}$ is a monster model of $T'$, then let $Q^{\mathfrak{C}}\cup\set{c_{i}^{\mathfrak{C}}}{i<\omega}$ be $\mathfrak{C}_{D}$). It is easy to see that $D$ is dependent. \end{example} Recall that a cardinal $\theta$ is \emph{strongly compact} if any $\theta$-complete filter (with any domain) is contained in a $\theta$-complete ultrafilter. For our context we will need to assume that if $D$ is non-trivial, then there is a strongly compact cardinal $\theta>\left|T\right|$. Strongly compact cardinals are measurable (see \cite[Corollary 4.2]{kanamori}).
Recall that a cardinal $\mu$ is \emph{measurable} if it is uncountable and there is a $\mu$-complete non-principal ultrafilter on $\mu$. It follows that there is a \emph{normal} such ultrafilter (i.e., one closed under diagonal intersections). See \cite[Exercise 5.12]{kanamori}. Measurable cardinals are strongly inaccessible (see \cite[Theorem 2.8]{kanamori}), which means that $\theta>\beth_{\left(2^{\left|T\right|}\right)^{+}}\geq\mu\left(D\right)$. Fix some such $\theta$ throughout. If, however, $D$ is trivial, then we do not need a strongly compact cardinal. We also note here a key fact about measurable cardinals that will be useful later: \begin{fact} \label{fact:indiscernibles exist} \cite[Theorem 7.17]{kanamori} Suppose that $\mu>\left|T\right|$ is a measurable cardinal and that $\cal U$ is a normal (non-principal) ultrafilter on $\mu$. If $\sequence{\bar{a}_{i}}{i<\mu}$ is a sequence of tuples in $\mathfrak{C}$ of equal length $<\mu$, then for some set $X\in\cal U$, $\sequence{\bar{a}_{i}}{i\in X}$ is an indiscernible sequence. \end{fact} As a consequence (which will also be used later), we have the following. \begin{cor} \label{cor:full-indiscernibles}If $A=\bigcup_{i<\mu}A_{i}\subseteq\mathfrak{C}$ is a continuous increasing union of sets where $\left|A_{i}\right|<\mu$, $B\subseteq\mathfrak{C}$ is some set of cardinality $<\mu$, and $\sequence{\bar{a}_{i}}{i<\mu}$, $\cal U$ are as in Fact \ref{fact:indiscernibles exist} with $\bar{a}_{i}$ tuples from $A$, then for some set $X\in\cal U$, $\sequence{\bar{a}_{i}}{i\in X}$ is \emph{fully indiscernible over $B$} (with respect to $A$ and $\sequence{A_{i}}{i<\mu}$), which means that for every $i\in X$ and $j<i$ in $X$, we have $\bar{a}_{j}\subseteq A_{i}$, and $\sequence{\bar{a}_{j}}{i\leq j\in X}$ is indiscernible over $A_{i}\cup B$. \end{cor} \begin{proof} This follows from the normality of the ultrafilter $\cal U$. First note that if $E\subseteq\mu$ is a club then $E\in\cal U$ (why?
Otherwise $X=\mu\backslash E\in\cal U$, so the function $f:X\to\mu$ defined by $\beta\mapsto\sup\left(\beta\cap E\right)$ satisfies $f\left(\beta\right)<\beta$, and by Fodor's lemma (which holds for normal ultrafilters), for some $\gamma<\mu$ and $Y\subseteq X$ in $\cal U$, $f$ is constantly $\gamma$ on $Y$, which easily leads to a contradiction). Hence the set $E=\set{i<\mu}{\forall j<i\left(\bar{a}_{j}\subseteq A_{i}\right)}$ is in $\cal U$. Furthermore, the set $E'$ of limit ordinals is also in $\cal U$. The promised set $X$ is the intersection of $E\cap E'$ with the diagonal intersection of the $X_{i}$ for $i<\mu$, where $X_{i}\in\cal U$ is such that $\sequence{\bar{a}_{j}}{j\in X_{i}}$ is indiscernible over $A_{i}\cup B$ (which exists thanks to Fact \ref{fact:indiscernibles exist}). Note that we get $\leq$ and not just $<$ when verifying ``fully indiscernible'' because $\sequence{A_{i}}{i<\mu}$ is continuous and $X$ contains only limit ordinals. \end{proof} The following demonstrates the need for Hanf numbers and strongly compact cardinals. \begin{lem} \label{lem:eq formulations of dependence}For a finite diagram $D$ the following conditions are equivalent: \begin{enumerate} \item The formula $\phi\left(\bar{x},\bar{y}\right)$ has the independence property. \item For any $\lambda$ there is an indiscernible sequence $\sequence{\bar{a}_{i}}{i<\lambda}$ and $\bar{b}$ in $\mathfrak{C}_{D}$ such that $\mathfrak{C}_{D}\models\phi\left(\bar{b},\bar{a}_{i}\right)$ iff $i$ is even. \item For any $\lambda$ there is a set $\set{\bar{a}_{i}}{i<\lambda}\subseteq\mathfrak{C}_{D}$ such that for any $s\subseteq\lambda$ there is some $\bar{b}_{s}\in\mathfrak{C}_{D}$ such that $\mathfrak{C}_{D}\models\phi\left(\bar{b}_{s},\bar{a}_{i}\right)$ iff $i\in s$. \item The same as (2) but with $\lambda=\theta$. \item The same as (3) but with $\lambda=\theta$.
\end{enumerate} \end{lem} \begin{proof} (1) $\Rightarrow$ (3): we may assume that $\lambda\geq\mu\left(D\right)$. By assumption there is a sequence $\sequence{\bar{a}_{i}}{i<\mu\left(D\right)}$ and $\bar{b}$ in $\mathfrak{C}_{D}$ as in the definition. Let $M\prec\mathfrak{C}_{D}$ be a model of size $\mu\left(D\right)$ containing all these elements. Add to the language $L$ new constants $\bar{c}$ in the length of $\bar{b}$, a new predicate $P$ in the length of $\bar{y}$, a $2\lg\left(\bar{y}\right)$-ary relation symbol $<$, and a function symbol $f$. Expand $M$ to $M'$, a structure of the expanded language, by interpreting $\bar{c}^{M'}=\bar{b}$, $P^{M'}=\set{\bar{a}_{i}}{i<\mu\left(D\right)}$, $\bar{a}_{i}<^{M'}\bar{a}_{j}$ iff $i<j$, and letting $f^{M'}:P^{M'}\to M'$ be onto. Let $T_{0}=Th\left(M'\right)$. By assumption, $T_{0}$ has a $D$-model of size $\mu\left(D\right)$, and so by definition $T_{0}$ has a $D$-model $N'$ of cardinality $\lambda$, and we may assume that its $L$-part $N$ is an elementary substructure of $\mathfrak{C}_{D}$. So the elements in $P^{N'}$, ordered by $<^{N'}$, form an $L$-indiscernible sequence, and $\left|P^{N'}\right|=\lambda$. For convenience of notation, let $\left(I,<\right)$ be an order isomorphic to $\left(P^{N'},<^{N'}\right)$, and write $P^{N'}=\set{\bar{a}_{i}}{i\in I}$. The order $<$ is discrete, so every $i\in I$ has a unique successor $s\left(i\right)$, and $N\models\phi\left(\bar{c},\bar{a}_{i}\right)\leftrightarrow\neg\phi\left(\bar{c},\bar{a}_{s\left(i\right)}\right)$. Let $Q=\set{i\in I}{N\models\phi\left(\bar{c},\bar{a}_{i}\right)}$, so $\left|Q\right|=\lambda$. Then, by indiscernibility, for any $R\subseteq Q$, \[ \sequence{\bar{a}_{i}}{i\in Q}\equiv\sequence{\bar{a}_{s^{R\left(i\right)}\left(i\right)}}{i\in Q} \] where $R\left(i\right)=0$ iff $i\in R$, and $s^{0}=\operatorname{id}$, $s^{1}=s$.
Hence by the strong homogeneity of $\mathfrak{C}_{D}$, $\set{\bar{a}_{i}}{i\in Q}$ satisfies (3). (2) $\Rightarrow$ (4), (3) $\Rightarrow$ (5), (4) $\Rightarrow$ (1): Obvious. (5) $\Rightarrow$ (2): We may assume that $\lambda\geq\theta$. Let $\set{\bar{a}_{i}}{i<\theta}$ be as in (5). Since $\theta$ is measurable, by Fact \ref{fact:indiscernibles exist}, we may assume that $\sequence{\bar{a}_{i}}{i<\theta}$ is indiscernible. By compactness we can extend this sequence to $\sequence{\bar{a}_{i}}{i<\lambda}$, and let $A=\set{\bar{a}_{i}}{i<\lambda}$. Note that by indiscernibility, the set consisting of all tuples in the new sequence is still a $D$-set, so we may assume that this new sequence lies in $\mathfrak{C}_{D}$. Let $O$ be the set of odd ordinals in $\lambda$. By indiscernibility and homogeneity, for each $X\in\left[\lambda\right]^{<\theta}$ (i.e., $X\subseteq\lambda$, $\left|X\right|<\theta$) there is some $\bar{b}_{X}$ such that for all $i\in X$, $\mathfrak{C}_{D}\models\phi\left(\bar{b}_{X},\bar{a}_{i}\right)$ iff $i\notin O$. By strong compactness, there is some $\theta$-complete ultrafilter $\cal U$ on $\left[\lambda\right]^{<\theta}$ such that for every $X\in\left[\lambda\right]^{<\theta}$ we have $\set{Y\in\left[\lambda\right]^{<\theta}}{X\subseteq Y}\in\cal U$. Let $\bar{b}\models\average{\sequence{\bar{b}_{X}}{X\in\left[\lambda\right]^{<\theta}}}A{\cal U}$, which exists in $\mathfrak{C}_{D}$ by Lemma \ref{lem:average type along ultrafilter}; then $\mathfrak{C}_{D}\models\phi\left(\bar{b},\bar{a}_{i}\right)$ iff $i$ is even. \end{proof} Dependence gives rise to the concept of the \emph{average type} of an indiscernible sequence, without resorting to ultrafilters. Let $A\subseteq\mathfrak{C}_{D}$, let $\alpha$ be an ordinal such that $\operatorname{cof}\left(\alpha\right)\geq\mu\left(D\right)$, and let $\sequence{\bar{a}_{i}}{i<\alpha}$ be an indiscernible sequence in $\mathfrak{C}_{D}$.
The \emph{average type} of $\sequence{\bar{a}_{i}}{i<\alpha}$ over $A$, denoted by $\average{\sequence{\bar{a}_{i}}{i<\alpha}}A{}$, consists of the formulas of the form $\phi\left(\bar{b},\bar{x}\right)$ with $\bar{b}\in A$ such that for some $i$, $\mathfrak{C}_{D}\models\phi\left(\bar{b},\bar{a}_{j}\right)$ for every $j\ge i$. This is well defined as $\operatorname{cof}\left(\alpha\right)\geq\mu\left(D\right)$ (and as $D$ is dependent): otherwise, we could construct an increasing unbounded sequence of ordinals $j_{i}<\alpha$ such that $\phi\left(\bar{b},\bar{a}_{j_{i}}\right)\leftrightarrow\neg\phi\left(\bar{b},\bar{a}_{j_{i+1}}\right)$; this sequence has length $\geq\mu\left(D\right)$, and, as the $\bar{a}_{j_{i}}$ form an indiscernible sequence, this contradicts the dependence of $D$. We show that this type is indeed a $D$-type. \begin{lem} \label{lem:average type on indiscernible sequence}Let $A\subseteq\mathfrak{C}_{D}$ where $D$ is a dependent diagram, $\alpha$ an ordinal such that $\operatorname{cof}\left(\alpha\right)\geq\mu\left(D\right)+\left|T\right|^{+}$, and let $\sequence{\bar{a}_{i}}{i<\alpha}$ be an indiscernible sequence in $\mathfrak{C}_{D}$. The average type $r=\average{\sequence{\bar{a}_{i}}{i<\alpha}}A{}$ is a $D$-type. \end{lem} \begin{proof} The proof is similar to that of Lemma \ref{lem:average type along ultrafilter}, but here we use the fact that the end-segment filter on $\alpha$ is $\operatorname{cof}\left(\alpha\right)$-complete. The main point is that for a formula $\varphi\left(\bar{x},\bar{a}\right)\in r$, there is some $j<\alpha$ such that $\varphi\left(\bar{a}_{i},\bar{a}\right)$ holds for all $i>j$. \end{proof} \section{\label{sec:The-generic-pair}The generic pair conjecture} From this section onwards, fix a dependent diagram $D$. We also fix a strongly compact cardinal $\theta>\left|T\right|$. When $D$ is trivial, there is no need for strongly compact cardinals, and one can assume $\theta=\left|T\right|^{+}$ and replace $<\theta$ satisfiable by finitely satisfiable.
We leave it to the reader to find the precise replacement. \begin{conjecture} \label{conj:The generic pair conjecture}(The generic pair conjecture) Suppose $D$ is dependent. Assume $\theta<\lambda=\lambda^{<\lambda}$ and $\lambda^{+}=2^{\lambda}$. Let $\bar{M}=\langle M_{\alpha}:\alpha<\lambda^{+}\rangle$ be an increasing continuous sequence of elementary substructures of $\mathfrak{C}_{D}$ of cardinality $\lambda$, such that ${\bf M}=\bigcup_{\alpha<\lambda^{+}}M_{\alpha}$ is $D$-saturated of size $\lambda^{+}$. Then there exists a club $E\subseteq\lambda^{+}$ such that \begin{itemize} \item if $\alpha_{1}<\beta_{1},\alpha_{2}<\beta_{2}\in E$ are all of cofinality $\lambda$, then $\left(M_{\beta_{1}},M_{\alpha_{1}}\right)\cong\left(M_{\beta_{2}},M_{\alpha_{2}}\right)$. \end{itemize} \end{conjecture} To give some motivation, note that it is easy to find a club $E_{\mbox{sat}}\subseteq\lambda^{+}$ such that for any $\delta\in E_{\mbox{sat}}$ of cofinality $\lambda$, $M_{\delta}$ is homogeneous and $D\left(M_{\delta}\right)=D$ (equivalently, $D$-saturated by Lemma \ref{lem:Grossberg}). Just let $E_{\mbox{sat}}$ be the set of ordinals $\delta<\lambda^{+}$ such that for any $\alpha<\delta$, every $p\in S_{D}^{1}\left(A\right)$ for any $A\subseteq M_{\alpha}$ of size $<\lambda$ is realized in $M_{\delta}$. Then for any $\delta\in E_{\mbox{sat}}$ of cofinality $\lambda$, $M_{\delta}$ is $D$-saturated, and any two such models are isomorphic (see Corollary \ref{cor:homo-isom}). In this section we will outline the proof of Conjecture \ref{conj:The generic pair conjecture} under the assumption that a ``good family of decompositions'' exists. We call a tuple of the form ${\bf x}=\left(M_{{\bf x}},B_{{\bf x}},\bar{d}_{{\bf x}},\bar{c}_{{\bf x}},r_{{\bf x}}\right)$ a \emph{$\lambda$-decomposition}\footnote{The idea behind the name ``decomposition'' will become clearer later, where this notion is used to analyze the type of $\bar{d}$ over $M$.
} when $\left|M_{{\bf x}}\right|=\lambda$ and $M_{{\bf x}}\subseteq\mathfrak{C}$ is a $D$-model, $B_{{\bf x}}\subseteq M_{{\bf x}}$ has cardinality $<\lambda$, $\bar{c}_{{\bf x}},\bar{d}_{{\bf x}}\in\mathfrak{C}_{D}^{<\lambda}$ and $r_{{\bf x}}\in S_{D}^{<\lambda}\left(\emptyset\right)$ is a complete type in variables $\left(\bar{x}_{\bar{c}_{{\bf x}}},\bar{x}_{\bar{d}_{{\bf x}}},\bar{x}_{\bar{c}_{{\bf x}}}',\bar{x}_{\bar{d}_{{\bf x}}}'\right)$ (where $\bar{x}_{\bar{d}_{{\bf x}}},\bar{x}_{\bar{d}_{{\bf x}}}'$ have the same length as $\bar{d}_{{\bf x}}$, etc.). An \emph{isomorphism} between two $\lambda$-decompositions ${\bf x}$ and ${\bf y}$ is just an elementary map with domain $M_{{\bf x}}\cup\bigcup\bar{c}_{{\bf x}}\cup\bigcup\bar{d}_{{\bf x}}$ which maps all the ingredients of ${\bf x}$ onto those of ${\bf y}$; in particular, if ${\bf x}\cong{\bf y}$ then $r_{{\bf x}}=r_{{\bf y}}$. A \emph{weak isomorphism} between ${\bf x}$ and ${\bf y}$ is a restriction of an isomorphism to $\left(B_{{\bf x}},\bar{d}_{{\bf x}},\bar{c}_{{\bf x}},r_{{\bf x}}\right)$ (so there exists some isomorphism extending it). We write ${\bf x}\leq{\bf y}$ when $M_{{\bf x}}=M_{{\bf y}}$, $B_{{\bf x}}\subseteq B_{{\bf y}}$, $r_{{\bf x}}\subseteq r_{{\bf y}}$ (i.e., $r_{{\bf y}}$ may add more information on the added variables), $\bar{c}_{{\bf x}}\trianglelefteq\bar{c}_{{\bf y}}$ (i.e., $\bar{c}_{{\bf x}}$ is an initial segment of $\bar{c}_{{\bf y}}$) and $\bar{d}_{{\bf x}}\trianglelefteq\bar{d}_{{\bf y}}$. If ${\bf x}$ and ${\bf y}$ are $\lambda$-decompositions with $M_{{\bf x}}=M_{{\bf y}}$ such that for some ${\bf z}$, ${\bf z}\leq{\bf x},{\bf y}$, we will say that they are isomorphic over ${\bf z}$ if there is an isomorphism from ${\bf x}$ to ${\bf y}$ fixing $\bar{d}_{{\bf z}},\bar{c}_{{\bf z}},B_{{\bf z}}$.
\begin{defn} \label{def:good family}(A good family) A family $\mathfrak{F}$ of $\lambda$-decompositions is \emph{good} when: \begin{enumerate} \item \label{enu:invariant under isom}The family $\mathfrak{F}$ is invariant under isomorphisms. \item \label{enu:every model is saturated}For every ${\bf x}\in\mathfrak{F}$, $M_{{\bf x}}$ is $D$-saturated. \item \label{enu:non-empty}For every $D$-saturated $M\prec\mathfrak{C}_{D}$ of size $\lambda$, the ``trivial decomposition'' $\left(M,\emptyset,\emptyset,\emptyset,\emptyset\right)\in\mathfrak{F}$. \item \label{enu:enlarging}For every ${\bf x}\in\mathfrak{F}$ and $\bar{d}\in\mathfrak{C}_{D}^{<\lambda}$ there exists some ${\bf y}\in\mathfrak{F}$ such that ${\bf x}\le{\bf y}$ and $\bar{d}_{{\bf y}}\trianglerighteq\bar{d}_{{\bf x}}\bar{d}$. \item \label{enu:enbase}For every ${\bf x}\in\mathfrak{F}$ and $b\in M_{{\bf x}}$, $\left(M_{{\bf x}},B_{{\bf x}}\cup\left\{ b\right\} ,\bar{d}_{{\bf x}},\bar{c}_{{\bf x}},r_{{\bf x}}\right)\in\mathfrak{F}$. \item \label{enu:iso-extension}Suppose that ${\bf x}_{1},{\bf x}_{2},{\bf y}_{1}\in\mathfrak{F}$ where ${\bf x}_{1}\le{\bf y}_{1}$ and there exists some isomorphism $f:{\bf x}_{1}\to{\bf x}_{2}$; then there exists some ${\bf y}_{2}\in\mathfrak{F}$ such that ${\bf x}_{2}\le{\bf y}_{2}$ and $f$ can be extended to an isomorphism ${\bf y}_{1}\to{\bf y}_{2}$. \item \label{enu:union}Suppose that $\sequence{{\bf x}_{i}}{i<\delta}$ is a sequence of $\lambda$-decompositions from $\mathfrak{F}$ such that $\delta<\lambda$ is a limit ordinal and for every $i<j<\delta$ we have ${\bf x}_{i}\le{\bf x}_{j}$; then ${\bf x}_{\delta}=\sup_{i<\delta}{\bf x}_{i}=\left(M_{{\bf x}_{0}},\bigcup_{i<\delta}B_{{\bf x}_{i}},\bigcup_{i<\delta}\bar{d}_{{\bf x}_{i}},\bigcup_{i<\delta}\bar{c}_{{\bf x}_{i}},\bigcup_{i<\delta}r_{{\bf x}_{i}}\right)\in\mathfrak{F}$. Note that as $\lambda$ is regular and $\delta<\lambda$ this makes sense.
\item \label{enu:isohomo} Suppose that $\sequence{{\bf x}_{i}}{i<\delta}$ and $\sequence{{\bf y}_{i}}{i<\delta}$ are increasing sequences of $\lambda$-decompositions from $\mathfrak{F}$ such that $\delta<\lambda$ is a limit ordinal and for each $i<\delta$ there is a weak isomorphism $g_{i}:{\bf x}_{i}\to{\bf y}_{i}$ such that $g_{i}\subseteq g_{j}$ whenever $i<j$. Then the union $\bigcup_{i<\delta}g_{i}$ is a weak isomorphism from ${\bf x}=\sup_{i<\delta}{\bf x}_{i}$ to ${\bf y}=\sup_{i<\delta}{\bf y}_{i}$. \item \label{enu:count} For every $D$-model $M$ of cardinality $\lambda$, the number of ${\bf x}\in\mathfrak{F}$ with $M_{{\bf x}}=M$ up to isomorphism is $\leq\lambda$. \end{enumerate} \end{defn} \begin{rem} The roles of $\bar{c}_{{\bf x}}$ and $r_{{\bf x}}$ will become crucial in the next sections. In this section they matter only in that they restrict the class of isomorphisms. \end{rem} \begin{rem} In Definition \ref{def:good family}, (\ref{enu:iso-extension}) follows from (\ref{enu:invariant under isom}). \end{rem} \begin{rem} \label{rem:everything in M}Note that by point (\ref{enu:invariant under isom}) in Definition \ref{def:good family}, and as ${\bf M}$ is $D$-saturated of cardinality $\lambda^{+}$, if $\mathfrak{F}$ is good, then $\mathfrak{F}$ is also good when we restrict it to decompositions contained in ${\bf M}$ (rather than $\mathfrak{C}_{D}$). More precisely, in points (\ref{enu:enlarging}) and (\ref{enu:iso-extension}), the promised decompositions ${\bf y}$ and ${\bf y}_{2}$ respectively can be found in ${\bf M}$ if the given decompositions (${\bf x}$, ${\bf x}_{1}$, ${\bf x}_{2}$ and ${\bf y}_{1}$) are in ${\bf M}$. \end{rem} Let us give an example of a ``baby application'' of the existence of a good family before we delve into the generic pair conjecture. The next theorem is a weak version of \cite[Conclusion 3.13]{Sh950}. \begin{thm} \label{thm:clever counting of types}Suppose $\mathfrak{F}$ is a good family.
Then, for a $D$-saturated model $M$ of size $\lambda$, the number of types in $S_{D}^{<\lambda}\left(M\right)$ up to conjugation is $\leq\lambda$. \end{thm} \begin{proof} Suppose toward a contradiction that $\gamma<\lambda$ and $\sequence{p_{i}}{i<\lambda^{+}}$ is a sequence of types in $S_{D}^{\gamma}\left(M\right)$ which are pairwise non-conjugate. Let $\bar{d}_{i}\models p_{i}$. By (\ref{enu:enlarging}) in Definition \ref{def:good family}, for some ${\bf x}_{i}\in\mathfrak{F}$, $\bar{d}_{i}\trianglelefteq\bar{d}_{{\bf x}_{i}}$. Obviously, for $i\neq j$, $\operatorname{tp}\left(\bar{d}_{{\bf x}_{i}}/M\right)$ and $\operatorname{tp}\left(\bar{d}_{{\bf x}_{j}}/M\right)$ are not conjugate. But according to (\ref{enu:count}), this is impossible. \end{proof} \begin{rem} \label{rem:Upto Isomorphism over z}Suppose ${\bf z}$ is a $\lambda$-decomposition. From (\ref{enu:count}) in Definition \ref{def:good family} it follows that the number of ${\bf x}\in\mathfrak{F}$ such that ${\bf z}\leq{\bf x}$, up to isomorphism over ${\bf z}$, is $\leq\lambda$. Indeed, if not, there is a sequence $\sequence{{\bf x}_{i}}{i<\lambda^{+}}$ of $\lambda$-decompositions in $\mathfrak{F}$ containing ${\bf z}$ which are pairwise not isomorphic over ${\bf z}$. By (\ref{enu:count}), we may assume that they are pairwise isomorphic, and let $f_{i}:{\bf x}_{i}\to{\bf x}_{0}$ be isomorphisms. Each $f_{i}$ must fix $\bar{d}_{{\bf z}}$ and $\bar{c}_{{\bf z}}$, as they are initial segments. In addition, $f_{i}\upharpoonright B_{{\bf z}}$ is a sequence of length $<\lambda$ of elements in $M_{{\bf z}}$, and there are $\lambda$ such sequences (as $\lambda^{<\lambda}=\lambda$), so for some $i\neq j$, $f_{i}\upharpoonright B_{{\bf z}}=f_{j}\upharpoonright B_{{\bf z}}$. Hence $f_{i}^{-1}\circ f_{j}$ fixes $B_{{\bf z}}$, $\bar{d}_{{\bf z}}$ and $\bar{c}_{{\bf z}}$, so ${\bf x}_{j}$ and ${\bf x}_{i}$ are isomorphic over ${\bf z}$ --- contradiction.
\end{rem} For a decomposition ${\bf x}$, we will write ${\bf x}\Subset M$ for $M_{{\bf x}}\subseteq M$ and $\left(\bar{c}_{{\bf x}},\bar{d}_{{\bf x}}\right)\in\left(M^{<\lambda}\right)^{2}$. \begin{defn} \label{def:complete+bnf}Let $\gamma<\lambda^{+}$, and let $\mathfrak{F}$ be a good family of $\lambda$-decompositions. \begin{enumerate} \item We say that $\gamma$ is \emph{$\mathfrak{F}$-complete} if for every $\alpha<\beta<\gamma$ such that $M_{\alpha}$ is $D$-saturated, every ${\bf y}\in\mathfrak{F}$ with $M_{{\bf y}}=M_{\alpha}$ and ${\bf y}\Subset M_{\beta}$, and every $\bar{d}\in M_{\beta}^{<\lambda}$, there exists some ${\bf y}\leq{\bf x}\in\mathfrak{F}$ such that $\bar{d}_{{\bf x}}\trianglerighteq\bar{d}\bar{d}_{{\bf y}}$ and ${\bf x}\Subset M_{\gamma}$. \item We say that $\gamma$ is \emph{$\mathfrak{F}$-representative} if for every $\alpha<\beta<\gamma$ such that $M_{\alpha}$ is $D$-saturated, every ${\bf y}\in\mathfrak{F}$ with $M_{{\bf y}}=M_{\alpha}$, and every $\lambda$-decomposition ${\bf z}$ over $M_{\alpha}$ such that ${\bf z}\Subset M_{\beta}$ and ${\bf z}\leq{\bf y}$, there exists ${\bf x}\in\mathfrak{F}$ such that $M_{{\bf x}}=M_{\alpha}$, ${\bf x}\Subset M_{\gamma}$, ${\bf z}\leq{\bf x}$ and ${\bf x}$ is isomorphic to ${\bf y}$ over ${\bf z}$. \end{enumerate} \end{defn} \begin{prop} \label{prop:F-complete club} Let $\mathfrak{F}$ be a good family of $\lambda$-decompositions. Let $E_{\mbox{com}}\subseteq\lambda^{+}$ be the set of all $\delta<\lambda^{+}$ which are $\mathfrak{F}$-complete. Then $E_{\mbox{com}}$ is a club. \end{prop} \begin{proof} The fact that $E_{\mbox{com}}$ is closed is easy. Suppose $\beta<\lambda^{+}$.
Let $\beta<\beta'<\lambda^{+}$ be such that for every $\alpha<\beta$ such that $M_{\alpha}$ is $D$-saturated, and every $\bar{d}\in M_{\beta}^{<\lambda}$ and ${\bf y}\in\mathfrak{F}$ with ${\bf y}\Subset M_{\beta}$ and $M_{{\bf y}}=M_{\alpha}$, there is some ${\bf y}\leq{\bf x}\in\mathfrak{F}$ such that $\bar{d}_{{\bf x}}\trianglerighteq\bar{d}_{{\bf y}}\bar{d}$, $M_{{\bf x}}=M_{\alpha}$ and ${\bf x}\Subset M_{\beta'}$. The ordinal $\beta'$ exists because $\lambda^{<\lambda}=\lambda$ (so the number of ${\bf y}$'s and the number of $\bar{d}$'s is $\leq\lambda$), by (\ref{enu:enlarging}) of Definition \ref{def:good family} and by Remark \ref{rem:everything in M}. By induction, we can thus define an increasing sequence of ordinals $\beta_{i}$ for $i<\omega$, where $\beta_{0}=\beta$ and $\beta_{i+1}=\beta_{i}'$. Finally, $\gamma=\sup_{i<\omega}\beta_{i}\in E_{\mbox{com}}$. \end{proof} \begin{prop} \label{prop:BnF club}Let $\mathfrak{F}$ be a good family of $\lambda$-decompositions. Let $E_{\mbox{rep}}\subseteq\lambda^{+}$ be the set of all $\delta<\lambda^{+}$ which are $\mathfrak{F}$-representative. Then $E_{\mbox{rep}}$ is a club.\end{prop} \begin{proof} The proof is similar to the proof of Proposition \ref{prop:F-complete club}, but now in order to show that $E_{\mbox{rep}}$ is unbounded, we use Remark \ref{rem:Upto Isomorphism over z}.\end{proof} \begin{thm} \label{thm:generic pair with good family}Suppose $\mathfrak{F}$ is a good family. Let $E=E_{\mbox{sat}}\cap E_{\mbox{rep}}\cap E_{\mbox{com}}\subseteq\lambda^{+}$. This is a club. For every $\alpha_{1}<\beta_{1},\alpha_{2}<\beta_{2}\in E$ of cofinality $\lambda$ we have $\left(M_{\beta_{1}},M_{\alpha_{1}}\right)\cong\left(M_{\beta_{2}},M_{\alpha_{2}}\right)$. Hence Conjecture \ref{conj:The generic pair conjecture} holds. \end{thm} \begin{proof} Let $AP$ \footnote{AP stands for approximations.
} be the collection of tuples of the form $p=\left({\bf x}_{p},{\bf y}_{p},h_{p}\right)=\left({\bf x},{\bf y},h\right)$ where ${\bf x},{\bf y}\in\mathfrak{F}$ and $h:{\bf x}\to{\bf y}$ is a weak isomorphism, such that $M_{{\bf x}}=M_{\alpha_{1}}$, ${\bf x}\Subset M_{\beta_{1}}$, $M_{{\bf y}}=M_{\alpha_{2}}$ and ${\bf y}\Subset M_{\beta_{2}}$. For every $p_{1},p_{2}\in AP$ we write $p_{1}\le_{AP}p_{2}$ if ${\bf x}_{p_{1}}\leq{\bf x}_{p_{2}}$, ${\bf y}_{p_{1}}\leq{\bf y}_{p_{2}}$ and $h_{p_{1}}\subseteq h_{p_{2}}$. We proceed to construct an isomorphism by a back-and-forth argument. In the forth part, we may add an element from $M_{\alpha_{1}}$ to $B_{{\bf x}}$ (thus increasing the $M_{\alpha_{1}}$-part of the domain of $h$), or an element from $M_{\beta_{1}}$ to $\bar{d}_{{\bf x}}$ (thus increasing the $M_{\beta_{1}}$-part). We also have to take care of the limit stage. As one can take $p$ to be a trivial tuple by (\ref{enu:non-empty}) in Definition \ref{def:good family}, and as $\alpha_{1},\alpha_{2}\in E_{\mbox{sat}}$ (and their cofinality is $\lambda$, so that $M_{\alpha_{1}},M_{\alpha_{2}}$ are saturated), $AP\neq\emptyset$. Adding an element from $M_{\alpha_{1}}$: let $p\in AP$ and $a\in M_{\alpha_{1}}$. As $h$ is a weak isomorphism, there is some isomorphism $h^{+}:{\bf x}\to{\bf y}$ extending $h$. Let $h^{+}\left(a\right)=b\in M_{\alpha_{2}}$. Thus, by (\ref{enu:enbase}) in Definition \ref{def:good family}, we may define $p'=\left({\bf x}',{\bf y}',h'\right)$ by adding $a$ to $B_{{\bf x}}$ and $b$ to $B_{{\bf y}}$, and defining $h'=h\cup\left\{ \left(a,b\right)\right\} $. Of course, $h'$ is still a weak isomorphism, as witnessed by the same $h^{+}$. It follows that $p\le_{AP}p'$. Adding an element from $M_{\beta_{1}}$: let $d\in M_{\beta_{1}}$ and $p\in AP$.
Since $\mathfrak{F}$ is good, $\alpha_{1}\in E_{\mbox{sat}}$, $\beta_{1}\in E_{\mbox{com}}$, and by (\ref{enu:enlarging}) in Definition \ref{def:good family}, there is some ${\bf x}\leq{\bf x}'\in\mathfrak{F}$ such that $\bar{d}_{{\bf x}}d\trianglelefteq\bar{d}_{{\bf x}'}$ and ${\bf x}'\Subset M_{\beta_{1}}$ (here we also used the fact that the cofinality of $\beta_{1}$ is $\lambda$). Let $h^{+}:{\bf x}\to{\bf y}$ be as above. By (\ref{enu:iso-extension}) in Definition \ref{def:good family}, $h^{+}$ extends to an isomorphism $h^{++}:{\bf x}'\to{\bf y}'$ for some ${\bf y}'\in\mathfrak{F}$ such that ${\bf y}\leq{\bf y}'$ (and we may also assume that ${\bf y}'$ is contained in ${\bf M}$ by Remark \ref{rem:everything in M}). Since $\beta_{2}\in E_{\mbox{rep}}$ (and since its cofinality is $\lambda$), there exists some ${\bf y}''\in\mathfrak{F}$ such that ${\bf y}''\Subset M_{\beta_{2}}$, ${\bf y}\leq{\bf y}''$, and ${\bf y}''$ is isomorphic to ${\bf y}'$ over ${\bf y}$, as witnessed by $f:{\bf y}'\to{\bf y}''$ (in particular $f\upharpoonright M_{\alpha_{2}}$ is an automorphism of $M_{\alpha_{2}}$). Then $p'=\left({\bf x}',{\bf y}'',\left(f\circ h^{++}\right)\upharpoonright\left(B_{{\bf x}'},\bar{d}_{{\bf x}'},\bar{c}_{{\bf x}'},r_{{\bf x}'}\right)\right)\in AP$ satisfies $p\le_{AP}p'$ and $d\in\bar{d}_{{\bf x}'}$. Of course, we must also switch the roles of ${\bf x}$ and ${\bf y}$ in the above steps. The limit stage: suppose $\sequence{p_{i}}{i<\delta}$ is an increasing sequence of approximations, where $\delta<\lambda$ is some limit ordinal. Let \[ p=\sup_{i<\delta}p_{i}=\left(\sup_{i<\delta}{\bf x}_{p_{i}},\sup_{i<\delta}{\bf y}_{p_{i}},\bigcup_{i<\delta}h_{p_{i}}\right). \] This tuple is still in $AP$ by (\ref{enu:union}) and (\ref{enu:isohomo}) in Definition \ref{def:good family}.
\end{proof} \section{\label{sec:The-type-decomposition}Type decompositions} Section \ref{sec:The-generic-pair} gave a proof of the generic pair conjecture (Conjecture \ref{conj:The generic pair conjecture}) assuming the existence of a good family of $\lambda$-decompositions (Definition \ref{def:good family}). Here we will start to construct what will eventually be the good family. For this we need to define two kinds of decompositions. The first is the \emph{tree-type decomposition} (explained in Subsection \ref{sub:Tree-type-decomposition}), which is the basic building block of the \emph{self-solvable decomposition}, which will be introduced in Subsection \ref{sub:Self-solvable-decomposition}. Eventually, the good family will be the family of self-solvable decompositions. As usual, we assume that $\theta>\left|T\right|$ is a strongly compact cardinal (unless $D$ is trivial, in which case $\theta=\left|T\right|^{+}$ and $<\theta$ satisfiable is replaced by finitely satisfiable when appropriate; see the beginning of Section \ref{sec:The-generic-pair}), and that $D$ is dependent. Also, assume that $\lambda=\lambda^{<\lambda}>\theta$. \subsection{\label{sub:Tree-type-decomposition}Tree-type decomposition} \begin{defn} \label{def:tree type}Let $M\prec\mathfrak{C}_{D}$ be a $D$-model of cardinality $\lambda$. A \emph{$\lambda$-tree-type decomposition} is a $\lambda$-decomposition $\left(M,B,\bar{d},\bar{c},r\right)$ with the following properties: \begin{enumerate} \item The tuple $\bar{c}$ is of length $<\kappa=\theta+\left|\lg\left(\bar{d}\right)\right|^{+}$ and the type $\operatorname{tp}\left(\bar{c}/M\right)$ does not split over $B$. See also Remark \ref{rem:f.s. instead of invariant}. \item For every $A\subseteq M$ such that $\left|A\right|<\lambda$ there exists some $\bar{e}_{A}\in M^{<\kappa}$ such that $\operatorname{tp}\left(\bar{d}/\bar{e}_{A}+\bar{c}\right)\vdash\operatorname{tp}\left(\bar{d}/A+\bar{c}\right)$.
By this we mean that if $\bar{d}'\in\mathfrak{C}_{D}^{<\lambda}$ realizes the same type as $\bar{d}$ over $\bar{e}_{A}+\bar{c}$ (which we denote by $\bar{d}'\equiv_{\bar{e}_{A}\bar{c}}\bar{d}$), then $\bar{d}'\equiv_{A\bar{c}}\bar{d}$. Note: we do not ask that this is true in $\mathfrak{C}$, only in $\mathfrak{C}_{D}$. \end{enumerate} \end{defn} \begin{rem} Why ``tree-type''? If ${\bf x}$ is a tree-type decomposition such that (for simplicity) $\lg\left(\bar{d}\right)<\theta$, then we may define a partial order on $M^{<\theta}$ by $\bar{e}_{1}\leq\bar{e}_{2}$ if $\operatorname{tp}\left(\bar{d}/\bar{c}+\bar{e}_{2}\right)\vdash\operatorname{tp}\left(\bar{d}/\bar{c}+\bar{e}_{1}\right)$. Then this order is $\lambda$-directed (so it looks like a tree in some sense). \end{rem} \begin{rem} If $\operatorname{tp}\left(\bar{d}/M\right)$ does not split over some $B$ (where $\left|B\right|<\lambda$ as usual), then $\left(M,B,\bar{d},\bar{d},r\right)$ is a $\lambda$-tree-type decomposition for any $r$: in (2) take $\bar{e}_{A}=\emptyset$. \end{rem} \begin{rem} \label{rem:f.s. instead of invariant}In Definition \ref{def:tree type} (1), we could instead ask that $\operatorname{tp}\left(\bar{c}/M\right)$ is $<\theta$ satisfiable in $B$, in the sense that any $<\theta$ formulas from this type \uline{in finitely many variables} are realized in $B$. \end{rem} \begin{rem} In this section, the role of $\bar{c}$ becomes clearer, but $r$ will not play any role. \end{rem} \begin{example} \cite[Exercise 2.18]{Sh:900} In DLO --- the theory of $\left(\mathbb{Q},<\right)$ --- suppose $M$ is a saturated model of cardinality $\lambda$, and $d\in\mathfrak{C}\backslash M$ is some point. Let $C_{1}$, $C_{2}$ be the corresponding left and right cuts that $d$ determines in $M$. As $M$ is saturated, at least one of these cuts has cofinality $\lambda$.
If only one has, then $\operatorname{tp}\left(d/M\right)$ does not split over a cofinal (or coinitial) subset $B$, of size $<\lambda$, of the cut of smaller cofinality, so $\left(M,B,d,d,\emptyset\right)$ is a tree-type decomposition. Otherwise, for each $A$ of cardinality $<\lambda$ there are $e_{1}<d<e_{2}$ in $M$ such that $C_{1}\cap A<e_{1}<e_{2}<C_{2}\cap A$, so $\operatorname{tp}\left(d/e_{1}e_{2}\right)\vdash\operatorname{tp}\left(d/e_{1}e_{2}A\right)$. In this case, $\left(M,\emptyset,d,\emptyset,\emptyset\right)$ is a tree-type decomposition. \end{example} Our aim now is to prove that when $M$ is a $D$-model, then for every $\bar{d}\in\mathfrak{C}_{D}^{<\lambda}$ there exists a tree-type decomposition ${\bf x}$ such that $\bar{d}=\bar{d}_{{\bf x}}$. In fact, we can start with any tree-type decomposition ${\bf x}$, for instance the trivial one $\left(M,\emptyset,\emptyset,\emptyset,\emptyset\right)$, and find some tree-type decomposition ${\bf y}\geq{\bf x}$ such that $\bar{d}_{{\bf y}}=\bar{d}_{{\bf x}}\bar{d}$. In a sense, we decompose the type of $\bar{d}$ over $M$ into two parts: the invariant one and the ``tree-like'' one. \begin{defn} \label{def:K}Let $M\prec\mathfrak{C}_{D}$ be of size $\lambda$, $\bar{d}\in\mathfrak{C}_{D}^{<\lambda}$ and $C\subseteq\mathfrak{C}_{D}$ be of size $<\kappa=\left|\lg\left(\bar{d}\right)\right|^{+}+\theta<\lambda$. The class ${\bf K}_{\lambda,\theta}^{M,C,\bar{d}}$ contains all pairs $\mathfrak{a}=\left(B_{\mathfrak{a}},\bar{c}_{\mathfrak{a}}\right)=\left(B,\bar{c}\right)$ such that: \begin{enumerate} \item $\bar{c}=\sequence{\left(\bar{c}_{i,0},\bar{c}_{i,1}\right)}{i<\gamma}\in\left(\mathfrak{C}_{D}^{<\omega}\times\mathfrak{C}_{D}^{<\omega}\right)^{\gamma}$, and $B\subseteq M$, $|B|<\lambda$. \item $\gamma<\kappa$. \item For all $i<\gamma$, $\operatorname{tp}\left(\bar{c}_{i}/MC+\bar{c}_{<i}\right)$ is $<\theta$ satisfiable in $B$, where $\bar{c}_{i}$ is $\bar{c}_{i,0}\frown\bar{c}_{i,1}$.
Abusing notation, we identify $\bar{c}$ with the concatenation of $\bar{c}_{i}$ for $i<\gamma$. It follows that $\operatorname{tp}\left(\bar{c}/MC\right)$ does not split over $B$. \item For every $i<\gamma$, $\operatorname{tp}\left(\bar{c}_{i,0}/MC+\bar{c}_{<i}\right)=\operatorname{tp}\left(\bar{c}_{i,1}/MC+\bar{c}_{<i}\right)$ and in particular they are of the same (finite) length, and $\operatorname{tp}\left(\bar{c}_{i,0}/MC+\bar{c}_{<i}+\bar{d}\right)\neq\operatorname{tp}\left(\bar{c}_{i,1}/MC+\bar{c}_{<i}+\bar{d}\right)$. \end{enumerate} The class ${\bf MxK}_{\lambda,\theta}^{M,C,\bar{d}}$ consists of all the maximal elements in ${\bf K}_{\lambda,\theta}^{M,C,\bar{d}}$ with respect to the order $<$ defined by $\mathfrak{a}<\mathfrak{b}$ iff $B_{\mathfrak{a}}\subseteq B_{\mathfrak{b}}$, $\bar{c}_{\mathfrak{a}}\trianglelefteq\bar{c}_{\mathfrak{b}}$ and $\bar{c}_{\mathfrak{a}}\neq\bar{c}_{\mathfrak{b}}$. That is, it contains all $\mathfrak{a}\in{\bf K}_{\lambda,\theta}^{M,C,\bar{d}}$ such that there is no $\mathfrak{b}\in{\bf K}_{\lambda,\theta}^{M,C,\bar{d}}$ with $B_{\mathfrak{a}}\subseteq B_{\mathfrak{b}}$ and $\bar{c}_{\mathfrak{a}}$ is a strict first segment of $\bar{c}_{\mathfrak{b}}$.\end{defn} \begin{thm} \label{thm:Maximum exists}For every $\bar{d}\in\mathfrak{C}_{D}^{<\lambda}$, $C$ and $M$ as in Definition \ref{def:K}, if $\mathfrak{a}\in{\bf K}_{\lambda,\theta}^{M,C,\bar{d}}$ then there exists some $\mathfrak{b}\in{\bf MxK}_{\lambda,\theta}^{M,C,\bar{d}}$ such that $\mathfrak{a}\leq\mathfrak{b}$. \end{thm} \begin{proof} Let $\bar{c}=\bar{c}_{\mathfrak{a}}=\sequence{\left(\bar{c}_{i,0},\bar{c}_{i,1}\right)}{i<\gamma}$. We try to construct an increasing sequence $\sequence{\mathfrak{a}_{\alpha}}{\gamma\leq\alpha<\kappa}$ of elements in ${\bf K}_{\lambda,\theta}^{M,C,\bar{d}}$, where $\kappa=\left|\lg\left(\bar{d}\right)\right|^{+}+\theta<\lambda$, as follows: \begin{enumerate} \item $\mathfrak{a}_{\gamma}=\mathfrak{a}$. 
\item If $\alpha$ is limit then $\mathfrak{a}_{\alpha}=\sup_{\beta<\alpha}\mathfrak{a}_{\beta}$, i.e., $B_{\mathfrak{a}_{\alpha}}=\bigcup_{\beta<\alpha}B_{\mathfrak{a}_{\beta}}$ and $\bar{c}_{\mathfrak{a}_{\alpha}}=\bigcup_{\beta<\alpha}\bar{c}_{\mathfrak{a}_{\beta}}$. Note that this is well defined, i.e., $\mathfrak{a}_{\alpha}\in{\bf K}_{\lambda,\theta}^{M,C,\bar{d}}$. \item Suppose $\alpha=\beta+1$ and $\mathfrak{a}_{\beta}$ has been constructed. Let \[ \mathfrak{a}_{\alpha}=\left(B_{\alpha},\bar{c}_{\mathfrak{a}_{\beta}}\frown\left(\bar{c}_{\beta,0},\bar{c}_{\beta,1}\right)\right) \] just in case there are $\bar{c}_{\beta,0},\bar{c}_{\beta,1}\in\mathfrak{C}_{D}^{<\omega}$, $B_{\mathfrak{a}_{\beta}}\subseteq B_{\alpha}\subseteq M$ such that $\mathfrak{a}_{\alpha}\in{\bf K}_{\lambda,\theta}^{M,C,\bar{d}}$. \end{enumerate} If we get stuck somewhere in the construction, it must be at a successor stage $\alpha=\beta+1$, and then $\mathfrak{a}_{\beta}\in{\bf MxK}_{\lambda,\theta}^{M,C,\bar{d}}$ is as requested. So suppose we succeed: we have constructed $\sequence{\left(\bar{c}_{\alpha,0},\bar{c}_{\alpha,1}\right)}{\alpha<\kappa}$. As usual we denote $\bar{c}_{\alpha}=\bar{c}_{\alpha,0}\frown\bar{c}_{\alpha,1}$. By the definition of ${\bf K}_{\lambda,\theta}^{M,C,\bar{d}}$, it follows that for every $\alpha<\kappa$, there are $\bar{a}_{\alpha}\in A^{<\omega}$ where $A=MC$, $\bar{b}_{\alpha}\in C_{\alpha}^{<\omega}$ where $C_{\alpha}=\bigcup_{\beta<\alpha}\bar{c}_{\beta}$, and a formula $\varphi_{\alpha}\left(\bar{x}_{\bar{d}},\bar{w}_{\alpha},\bar{y}_{\alpha},\bar{z}_{\alpha}\right)$ such that $\mathfrak{C}_{D}\models\varphi_{\alpha}\left(\bar{d},\bar{c}_{\alpha,0},\bar{a}_{\alpha},\bar{b}_{\alpha}\right)$ but $\mathfrak{C}_{D}\models\neg\varphi_{\alpha}\left(\bar{d},\bar{c}_{\alpha,1},\bar{a}_{\alpha},\bar{b}_{\alpha}\right)$. (The variables are all of the appropriate length, but only finitely many of them appear in the formula.)
For every $\alpha<\kappa$, let $f\left(\alpha\right)$ be the maximal ordinal $<\alpha$ such that $\bar{b}_{\alpha}$ intersects $\bar{c}_{f\left(\alpha\right)}$. By Fodor's Lemma, there exists some cofinal set $S\subseteq\kappa$ and $\beta<\kappa$ such that for every $\alpha\in S$ we have $f\left(\alpha\right)=\beta$. By restricting to a smaller set, we may assume that for any $\alpha\in S$, $\alpha>\beta$ and $\varphi_{\alpha}=\varphi$ is constant. As $\bar{c}_{\alpha,0}\equiv_{A\bar{c}_{<\alpha}}\bar{c}_{\alpha,1}$ and as $\operatorname{tp}\left(\bar{c}_{\alpha}/A+\bar{c}_{<\alpha}\right)$ does not split over $A$, it follows that $\operatorname{tp}\left(\sequence{\bar{c}_{\alpha,\eta\left(\alpha\right)}}{\alpha\in S}/AC_{\beta+1}\right)$ does not depend on $\eta$ when $\eta:S\to2$. To prove this, it is enough to consider a finite subset $S_{0}\subseteq S$, and to prove it by induction on its size. Indeed, given $S_{0}=\left\{ \alpha_{0}<\ldots<\alpha_{n+1}\right\} $, and any $\eta:S_{0}\to2$, \begin{align*} \sequence{\bar{c}_{\alpha,\eta\left(\alpha\right)}}{\alpha\in S_{0}}\equiv_{AC_{\beta+1}} & \sequence{\bar{c}_{\alpha,\eta\left(\alpha\right)}}{\alpha\in S_{0}\backslash\left\{ \alpha_{n+1}\right\} }\frown\left\langle \bar{c}_{\alpha_{n+1},0}\right\rangle \\ \equiv_{AC_{\beta+1}} & \sequence{\bar{c}_{\alpha,0}}{\alpha\in S_{0}}. \end{align*} It follows by homogeneity that for any subset $R$ of $S$ there is some $\bar{d}_{R}\in\mathfrak{C}_{D}^{<\lambda}$ such that for every $\alpha\in S$, $\mathfrak{C}_{D}\models\varphi\left(\bar{d}_{R},\bar{c}_{\alpha,0},\bar{a}_{\alpha},\bar{b}_{\alpha}\right)$ iff $\alpha\in R$. But this contradicts the fact that $D$ is dependent, see Lemma \ref{lem:eq formulations of dependence} (5).\end{proof} \begin{defn} \label{def:orthogonal}Suppose $p\left(\bar{x}\right),q\left(\bar{y}\right)\in S_{D}\left(A\right)$ for some $A\subseteq\mathfrak{C}_{D}$.
We say that $p$ is \emph{orthogonal} \footnote{Usually this notion is called \emph{weakly orthogonal}, as the notion of orthogonal types already has a meaning in stable theories. However, here there is no room for confusion, so we decided to stick with the simpler term. } to $q$ if there is a unique $r\left(\bar{x},\bar{y}\right)\in S_{D}\left(A\right)$ which extends $p\left(\bar{x}\right)\cup q\left(\bar{y}\right)$. \end{defn} \begin{defn} \label{def:tree like}Suppose that $M\prec\mathfrak{C}_{D}$, and $C\subseteq\mathfrak{C}_{D}$ is some set. Let $p\in S_{D}\left(MC\right)$. We say that $p$ is \emph{tree-like (with respect to $M$,$C$)} if it is orthogonal to every $q\in S_{D}^{<\omega}\left(MC\right)$ for which there exists some $B\subseteq M$ with $|B|<|M|$ such that $q$ is $<\theta$ satisfiable in $B$.\end{defn} \begin{prop} \label{prop:tree-like}Let $M,C$ be as in Definition \ref{def:tree like}. Suppose that $p\in S_{D}^{\alpha}\left(MC\right)$ is tree-like and that $\left|C\right|<\kappa=\theta+\left|\alpha\right|^{+}$. Then for every $B\subseteq M$ such that $\left|B\right|<\left|M\right|$ there exists some $E\subseteq M$ with $\left|E\right|<\kappa$ such that $p|_{CE}\vdash p|_{CB}$. \end{prop} \begin{proof} It is enough to show that for any formula $\varphi\left(\bar{x},\bar{y},\bar{c}\right)$ where $\bar{c}$ is a finite tuple from $C$, there is some $E_{\varphi}\subseteq M$ such that $\left|E_{\varphi}\right|<\theta$ and \[ p|_{E_{\varphi}C}\vdash\left(p\upharpoonright\varphi\right)|_{B}=\set{\varphi\left(\bar{x},\bar{b},\bar{c}\right)\in p}{\bar{b}\in B^{\lg\left(\bar{y}\right)}} \] (because then we let $E=\bigcup_{\varphi}E_{\varphi}$). Suppose not.
Let $I=\left[M\right]{}^{<\theta}$ (all subsets of $M$ of size $<\theta$); then for every $E\in I$ there exist some $\bar{d}_{1}^{E},\bar{d}_{2}^{E}\in\mathfrak{C}_{D}^{\alpha}$, $\bar{b}_{E}\in B^{\lg\left(\bar{y}\right)}$ such that $\bar{d}_{1}^{E},\bar{d}_{2}^{E}$ realize $p|_{EC}$ and $\mathfrak{C}_{D}\models\varphi\left(\bar{d}_{1}^{E},\bar{b}_{E},\bar{c}\right)\wedge\neg\varphi\left(\bar{d}_{2}^{E},\bar{b}_{E},\bar{c}\right)$. By strong compactness, there is some $\theta$-complete ultrafilter $\cal U$ on $I$ such that for every $X\in I$ we have $\set{Y\in I}{X\subseteq Y}\in\cal U$. By Lemma \ref{lem:average type along ultrafilter}, $r=\average{\sequence{\bar{d}_{1}^{E}\bar{d}_{2}^{E}\bar{b}_{E}}{E\in I}}{MC}{\cal U}\in S_{D}\left(MC\right)$. Let $\bar{d}_{1},\bar{d}_{2}\in\mathfrak{C}_{D}^{\alpha}$ and $\bar{b}\in\mathfrak{C}_{D}^{<\omega}$ be such that $\bar{d}_{1}\bar{d}_{2}\bar{b}$ is a realization of $r$. Now, $r'=\operatorname{tp}(\bar{b}/MC)$ is $<\theta$ satisfiable in $B$, $\bar{d}_{1},\bar{d}_{2}$ realize $p$ (by our choice of $\cal U$) but $\operatorname{tp}\left(\bar{d}_{1}/\bar{b}\bar{c}\right)\neq\operatorname{tp}\left(\bar{d}_{2}/\bar{b}\bar{c}\right)$ (as witnessed by $\varphi$). Hence $p$ is not orthogonal to $r'$, which is a contradiction. \end{proof} \begin{rem} \label{rem:extension exists} Let $A\subseteq B\subseteq C\subseteq\mathfrak{C}_{D}$. If $p\in S_{D}^{n}\left(B\right)$ is $<\theta$ satisfiable in $A$ and $n<\omega$ then there is an extension $p\subseteq q\in S_{D}^{n}\left(C\right)$ which is $<\theta$ satisfiable in $A$. Indeed, let $\cal U_{0}=\set{\varphi\left(A^{n}\right)}{\varphi\in p}$, note that it generates a $\theta$-complete filter (as $p$ is $<\theta$ satisfiable in $A$), and extend it to a $\theta$-complete ultrafilter $\cal U$ on all subsets of $A^{n}$. Let $q=\set{\varphi\left(\bar{x},\bar{c}\right)}{\bar{c}\subseteq C,\varphi\left(A^{n},\bar{c}\right)\in\cal U}$.
Now, as $\left|T\right|<\theta$, this type is a $D$-type: for any finite tuple $\bar{c}$ from $C$, $q|_{\bar{c}}$ is realized by some tuple from $A^{n}$ (as in the proof of Lemma \ref{def:avgtp}). \end{rem} \begin{thm} \label{thm:tree} Let $M\prec\mathfrak{C}_{D}$, $\bar{d}\in\mathfrak{C}_{D}^{<\lambda}$, and $\bar{c}'\in\mathfrak{C}_{D}^{<\lambda}$ be of length $<\kappa=\left|\lg\left(\bar{d}\right)\right|^{+}+\theta$. Let $C=\bigcup\bar{c}'$ and suppose that $\mathfrak{a}\in{\bf MxK}_{\lambda,\theta}^{M,C,\bar{d}}$ and that $\operatorname{tp}\left(\bar{c}'/M\right)$ does not split over $B_{\mathfrak{a}}$. Then for any $r$, $\timesx=\left(M,B_{\mathfrak{a}},\bar{d},\bar{c}'\bar{c}_{\mathfrak{a}},r\right)$ is a $\lambda$-tree-type decomposition (see Definition \ref{def:tree type}).\end{thm} \begin{proof} As $\operatorname{tp}\left(\bar{c}'/M\right)$ does not split over $B_{\mathfrak{a}}$, and $\operatorname{tp}\left(\bar{c}_{\mathfrak{a}}/MC\right)$ does not split over $B_{\mathfrak{a}}$, it follows that $\operatorname{tp}\left(\bar{c}'\bar{c}_{\mathfrak{a}}/M\right)$ does not split over $B_{\mathfrak{a}}$. Let $\bar{c}=\bar{c}'\bar{c}_{\mathfrak{a}}$. It remains to check that for every $A\subseteq M$ such that $\left|A\right|<\lambda$ there exists some $\bar{e}_{A}\in M^{<\kappa}$ where $\kappa=\left|\lg\left(\bar{d}\right)\right|^{+}+\theta$, such that $\operatorname{tp}\left(\bar{d}/\bar{e}_{A}+\bar{c}\right)\vdash\operatorname{tp}\left(\bar{d}/A+\bar{c}\right)$. By Proposition \ref{prop:tree-like} it is enough to prove that $p\left(\bar{x}\right)=\operatorname{tp}(\bar{d}/M+\bar{c})$ is tree-like (with respect to $M$, $\bar{c}$). Let $q\left(\bar{y}\right)\in S_{D}^{<\omega}\left(M+\bar{c}\right)$ be some type which is $<\theta$ satisfiable in some $B\subseteq M$ with $\left|B\right|<\lambda$. Suppose that $p$ is not orthogonal to $q$.
This means that there are $\bar{d}_{1},\bar{d}_{2},\bar{b}_{1},\bar{b}_{2}$ in $\mathfrak{C}_{D}$ such that $\bar{d}_{1},\bar{d}_{2}\models p$, $\bar{b}_{1},\bar{b}_{2}\models q$ and $\bar{d}_{1}\bar{b}_{1}\not\equiv_{M\bar{c}}\bar{d}_{2}\bar{b}_{2}$. By homogeneity, we may assume $\bar{d}_{1}=\bar{d}_{2}=\bar{d}$. Let $q'\left(\bar{y}\right)\in S_{D}^{<\omega}\left(M+\bar{c}\bar{b}_{1}\bar{b}_{2}\right)$ be an extension of $q$ which is $<\theta$ satisfiable in $B$ (which exists by Remark \ref{rem:extension exists}), and let $\bar{b}\models q'$. Then for some $i\in\left\{ 1,2\right\} $, it must be that $\bar{d}\bar{b}_{i}\not\equiv_{M\bar{c}}\bar{d}\bar{b}$. Let $\mathfrak{b}\geq\mathfrak{a}$ be $\left(B_{\mathfrak{a}}\cup B,\bar{c}_{\mathfrak{a}}\frown\left(\bar{b}_{i},\bar{b}\right)\right)$; then one easily checks that $\mathfrak{b}\in{\bf K}_{\lambda,\theta}^{M,C,\bar{d}}$, which contradicts the maximality of $\mathfrak{a}$. \end{proof} By Theorems \ref{thm:Maximum exists} and \ref{thm:tree}, we get that: \begin{cor} \label{cor:existence of tree type dec}Suppose $\timesx$ is a $\lambda$-tree-type decomposition, and $\bar{d}_{0}\in\mathfrak{C}_{D}^{<\lambda}$. Then there exists some $\lambda$-tree-type decomposition ${\bf y}\geq\timesx$ such that $\bar{d}_{\timesx}\bar{d}_{0}=\bar{d}_{{\bf y}}$ and $r_{{\bf y}}=r_{\timesx}$. \end{cor} \begin{proof} Apply Theorem \ref{thm:Maximum exists} with $\bar{d}=\bar{d}_{\timesx}\bar{d}_{0}$, $C=\bigcup\bar{c}_{\timesx}$, $M=M_{\timesx}$ and $\mathfrak{b}=\left(B_{\timesx},\emptyset\right)$, to get some $\mathfrak{b}\leq\mathfrak{a}\in{\bf MxK}_{\lambda,\theta}^{M,C,\bar{d}}$. Now apply Theorem \ref{thm:tree} with $\bar{c}'=\bar{c}_{\timesx}$, $\mathfrak{a}$ and $r_{\timesx}$. \end{proof} \subsection{\label{sub:Self-solvable-decomposition}Self-solvable decomposition } \begin{defn} \label{def:self-solvable}Let $M\prec\mathfrak{C}_{D}$ be a $D$-model of cardinality $\lambda$.
A \emph{$\lambda$-self-solvable decomposition} \footnote{In \cite[Definition 3.6]{Sh950}, this is called $\mathbf{tK}$. } is a $\lambda$-tree-type decomposition $\left(M,B,\bar{d},\bar{c},r\right)$ such that for every $A\subseteq M$ with $\left|A\right|<\lambda$ there exists some $\bar{c}_{A}\bar{d}_{A}\in M^{<\lambda}$ with the following properties: \begin{enumerate} \item \label{enu:same length}The tuple $\bar{c}_{A}$ has the same length as $\bar{c}$ (so $<\kappa=\left|\lg\left(\bar{d}\right)\right|^{+}+\theta$) and $\bar{d}_{A}$ has the same length as $\bar{d}$. \item \label{enu:realize r}$\left(\bar{c}_{\timesx},\bar{d}_{\timesx},\bar{c}_{A},\bar{d}_{A}\right)$ realizes $r_{\timesx}\left(\bar{x}_{\bar{c}_{\timesx}},\bar{x}_{\bar{d}_{\timesx}},\bar{x}_{\bar{c}_{\timesx}}',\bar{x}_{\bar{d}_{\timesx}}'\right)$. \item \label{enu:same type over small set}$\left(\bar{c}_{A},\bar{d}_{A}\right)$ realizes $\operatorname{tp}\left(\bar{c}_{\timesx}\bar{d}_{\timesx}/A\right)$. \item \label{enu:The-strong tree}The main point is that we extend point (2) from Definition \ref{def:tree type} by demanding that $\operatorname{tp}\left(\bar{d}_{\timesx}/\bar{c}_{A}+\bar{d}_{A}+\bar{c}_{\timesx}\right)\vdash\operatorname{tp}\left(\bar{d}_{\timesx}/A+\bar{c}_{\timesx}+\bar{c}_{A}+\bar{d}_{A}\right)$. \end{enumerate} \end{defn} The first thing we would like to show is that under the assumption that $\lambda$ is measurable, a $\lambda$-self-solvable decomposition exists. In the first order case one can weaken the assumption to ask that $\lambda$ is weakly compact (see \cite[Claim 3.27]{Sh950}). However, we do not know how to extend this result to $D$-models, so we omit it. Note that the trivial decomposition $\left(M,\emptyset,\emptyset,\emptyset,\emptyset\right)$ is a $\lambda$-self-solvable decomposition. \begin{prop} \label{prop:finding self solvable by chasing tail} Let $M$ be a $D$-saturated model of cardinality $\lambda$, with $\lambda>\theta$ measurable.
Let $\cal U$ be a normal non-principal $\lambda$-complete ultrafilter on $\lambda$. Let $\timesx$ be a $\lambda$-self-solvable decomposition with $M_{\timesx}=M$, and let $\bar{d}\in\mathfrak{C}_{D}^{<\lambda}$. Also write $M$ as an increasing continuous union $\bigcup_{\alpha<\lambda}M_{\alpha}$ where $M_{\alpha}\subseteq M$ is of size $<\lambda$. Finally, let $\kappa=\left|\lg\left(\bar{d}_{\timesx}\bar{d}\right)\right|^{+}+\theta$. Then for any $n<\omega$, there is a set $U_{n}\in\cal U$, a sequence $\sequence{\left(\bar{c}_{\alpha,n},\bar{d}_{\alpha,n}\right)}{\alpha\in U_{n}\cup\left\{ \lambda\right\} }$, a type $r_{n}$ and a set $B_{n}\subseteq M$ with $\left|B_{n}\right|<\lambda$ such that the following hold: \begin{enumerate} \item For each $n<\omega$, $U_{n+1}\subseteq U_{n}$, $\timesx_{n}=\left(M,B_{n},\bar{d}_{\lambda,n},\bar{c}_{\lambda,n},r_{n}\right)$ is a $\lambda$-tree-type decomposition, $\timesx\leq\timesx_{n}\leq\timesx_{n+1}$ and $\bar{d}_{\timesx}\bar{d}\trianglelefteq\bar{d}_{\lambda,n}$. Also, $\lg\left(\bar{d}_{\lambda,n}\right),\lg\left(\bar{c}_{\lambda,n}\right)<\kappa$. \item For each $n<\omega$ and $\alpha\in U_{n}\cup\left\{ \lambda\right\} $, $\bar{c}_{\alpha,n-1},\bar{d}_{\alpha,n-1}\trianglelefteq\bar{c}_{\alpha,n},\bar{d}_{\alpha,n}$, and when $\alpha<\lambda$ they are in $M$, $\left(\bar{c}_{\lambda,n},\bar{d}_{\lambda,n},\bar{c}_{\alpha,n},\bar{d}_{\alpha,n}\right)\models r_{n}$ (so $r_{n}$ is increasing) and $\bar{c}_{\alpha,n}\bar{d}_{\alpha,n}$ realizes $\operatorname{tp}\left(\bar{c}_{\lambda,n}\bar{d}_{\lambda,n}/M_{\alpha}\right)$ (where $\bar{c}_{\alpha,-1},\bar{d}_{\alpha,-1}=\emptyset$). \item For each $n<\omega$ and $\alpha\in U_{n}$, $\operatorname{tp}\left(\bar{c}_{\lambda,n},\bar{d}_{\lambda,n},\bar{c}_{\alpha,n},\bar{d}_{\alpha,n}\right)$ contains $r_{\timesx}$ (when restricted to the appropriate variables).
\item For each $n<\omega$ and $\alpha\in U_{n}$, \[ \operatorname{tp}\left(\bar{d}_{\lambda,n}/\bar{c}_{\lambda,n}+\bar{d}_{\alpha,n+1}\right)\vdash\operatorname{tp}\left(\bar{d}_{\lambda,n}/\bar{c}_{\lambda,n}+\bar{c}_{\alpha,n}+\bar{d}_{\alpha,n}+M_{\alpha}\right). \] \end{enumerate} \end{prop} \begin{proof} The construction is by induction on $n$. Assume $n=0$. Let $\bar{d}_{\lambda,0}=\bar{d}_{\timesx}\bar{d}$ and let $\bar{c}_{\lambda,0}\in\mathfrak{C}_{D}^{<\kappa}$, $B_{0}$ be such that $\timesx\leq\left(M,B_{0},\bar{d}_{\lambda,0},\bar{c}_{\lambda,0},r_{\timesx}\right)$ is a $\lambda$-tree-type decomposition (which exists by Corollary \ref{cor:existence of tree type dec}). For $\alpha<\lambda$, as $\timesx$ is a self-solvable decomposition, there are $\bar{c}_{\alpha,\timesx},\bar{d}_{\alpha,\timesx}$ in $M$ which realize $\operatorname{tp}\left(\bar{c}_{\timesx},\bar{d}_{\timesx}/M_{\alpha}\right)$ such that $\left(\bar{c}_{\timesx},\bar{d}_{\timesx},\bar{c}_{\alpha,\timesx},\bar{d}_{\alpha,\timesx}\right)\models r_{\timesx}$. Go on to find $\bar{c}_{\alpha,\timesx},\bar{d}_{\alpha,\timesx}\trianglelefteq\bar{c}_{\alpha,0},\bar{d}_{\alpha,0}$ in $M$ which realize $\operatorname{tp}\left(\bar{c}_{\lambda,0}\bar{d}_{\lambda,0}/M_{\alpha}\right)$ (exists as $M$ is $D$-saturated). By Corollary \ref{cor:full-indiscernibles}, we can find $U_{0}$ such that $\sequence{\bar{c}_{\alpha,0}\bar{d}_{\alpha,0}}{\alpha\in U_{0}}$ is a fully indiscernible sequence over $\bar{c}_{\lambda,0}+\bar{d}_{\lambda,0}$. Let $r_{0}=\operatorname{tp}\left(\bar{c}_{\lambda,0},\bar{d}_{\lambda,0},\bar{c}_{\alpha,0},\bar{d}_{\alpha,0}/\emptyset\right)$, where $\alpha\in U_{0}$. Assume $n=m+1$. Note that $\kappa=\left|\lg\left(\bar{d}_{\lambda,m}\right)\right|^{+}+\theta$. 
For $\alpha\in U_{m}$, let $\bar{e}_{\alpha,m}\in M^{<\kappa}$ be such that $\operatorname{tp}\left(\bar{d}_{\lambda,m}/\bar{c}_{\lambda,m}+\bar{e}_{\alpha,m}\right)\vdash\operatorname{tp}\left(\bar{d}_{\lambda,m}/\bar{c}_{\lambda,m}+\bar{d}_{\alpha,m}+\bar{c}_{\alpha,m}+M_{\alpha}\right)$, which exists as $\timesx_{m}$ is a tree-type decomposition. As $\kappa<\lambda$, by restricting $U_{m}$, we may assume that $\bar{e}_{\alpha,m}$ has constant length, independent of $\alpha$. Further, let us assume that $\sequence{\bar{d}_{\alpha,m}\bar{c}_{\alpha,m}\bar{e}_{\alpha,m}}{\alpha\in U_{m}}$ is fully indiscernible. Let $\bar{e}_{\lambda,m}$ be such that $\bar{d}_{\lambda,m}\bar{c}_{\lambda,m}\bar{e}_{\lambda,m}\models\bigcup\set{\operatorname{tp}\left(\bar{d}_{\alpha,m}\bar{c}_{\alpha,m}\bar{e}_{\alpha,m}/M_{\alpha}\right)}{\alpha\in U_{m}}$. This is a type by full indiscernibility, and such a tuple can be found in $\mathfrak{C}_{D}$ since $\bar{d}_{\lambda,m}\bar{c}_{\lambda,m}$ already realizes this union when we restrict to the appropriate variables, by point (2). Now we essentially repeat the case $n=0$, applying Corollary \ref{cor:existence of tree type dec} with $\bar{d}_{0},\timesx$ there being $\bar{e}_{\lambda,m},\timesx_{m}$ to find $B_{n}$ and $\bar{c}_{\lambda,n}$, but now we want that $\bar{c}_{\alpha,m}\trianglelefteq\bar{c}_{\alpha,n}$ and $\bar{d}_{\alpha,m}\bar{e}_{\alpha,m}=\bar{d}_{\alpha,n}$ for $\alpha\in U_{m}\cup\left\{ \lambda\right\} $, so we find these tuples and find $U_{n}$ such that $\sequence{\bar{c}_{\alpha,n}\bar{d}_{\alpha,n}}{\alpha\in U_{n}}$ is fully indiscernible over $\bar{c}_{\lambda,n}\bar{d}_{\lambda,n}$ and we let $r_{n}=\operatorname{tp}\left(\bar{c}_{\lambda,n},\bar{d}_{\lambda,n},\bar{c}_{\alpha,n},\bar{d}_{\alpha,n}/\emptyset\right)$. (In fact, in the proof we did not need full indiscernibility at any stage. In the case $n=0$ and the last stage of the successor step, we only needed that $\operatorname{tp}\left(\bar{c}_{\lambda,n},\bar{d}_{\lambda,n},\bar{c}_{\alpha,n},\bar{d}_{\alpha,n}/\emptyset\right)$ is constant, and in the construction of the $\bar{e}_{\lambda,m}$ we only needed that the types $\operatorname{tp}\left(\bar{d}_{\alpha,m}\bar{c}_{\alpha,m}\bar{e}_{\alpha,m}/M_{\alpha}\right)$ are increasing with $\alpha$.) \end{proof} \begin{cor} \label{cor:existsence of self-solvable}Let $M$ be a $D$-saturated model of cardinality $\lambda$, where $\lambda>\theta$ is measurable, and let $\bar{d}\in\mathfrak{C}_{D}^{<\lambda}$. Let $\timesx$ be some $\lambda$-self-solvable decomposition, possibly trivial. Then there exists some $\lambda$-self-solvable decomposition $\timesx\le{\bf y}$ such that $\bar{d}_{\timesx}\bar{d}\trianglelefteq\bar{d}_{{\bf y}}$. \end{cor} \begin{proof} Write $M=\bigcup_{\alpha<\lambda}M_{\alpha}$ where $M_{\alpha}\subseteq M$ are of cardinality $<\lambda$ and the sequence is increasing and continuous. Also choose some normal ultrafilter $\cal U$ on $\lambda$. Now we apply Proposition \ref{prop:finding self solvable by chasing tail} to find $U_{n}$, $B_{n}$, $r_{n}$ and $\sequence{\left(\bar{c}_{\alpha,n},\bar{d}_{\alpha,n}\right)}{\alpha\in U_{n}\cup\left\{ \lambda\right\} }$. Let $\bar{d}_{\lambda}=\bigcup_{n<\omega}\bar{d}_{\lambda,n}$, $\bar{c}_{\lambda}=\bigcup_{n<\omega}\bar{c}_{\lambda,n}$, $B=\bigcup_{n<\omega}B_{n}$ and $r=\bigcup_{n<\omega}r_{n}$ (note that this is indeed a $D$-type). Also, let $U=\bigcap_{n<\omega}U_{n}\in\cal U$ (as $\cal U$ is $\lambda$-complete). Then $\left(M,B,\bar{d}_{\lambda},\bar{c}_{\lambda},r\right)$ is a $\lambda$-self-solvable decomposition: first of all it is a tree-type decomposition, as $\operatorname{tp}\left(\bar{c}_{\lambda}/M\right)$ does not split over $B$.
Also, $\kappa=\left|\lg\left(\bar{d}_{\timesx}\bar{d}\right)\right|^{+}+\theta$ is a regular uncountable cardinal, so $\lg\left(\bar{c}_{\lambda}\right)=\sup_{n<\omega}\lg\left(\bar{c}_{\lambda,n}\right)<\kappa=\left|\lg\left(\bar{d}_{\lambda}\right)\right|^{+}+\theta$. For each $A\subseteq M$ of size $<\lambda$, there is some $\alpha\in U$ such that $M_{\alpha}$ contains $A$. Let $\bar{c}_{A}=\bigcup_{n<\omega}\bar{c}_{\alpha,n}$ and $\bar{d}_{A}=\bigcup_{n<\omega}\bar{d}_{\alpha,n}$. Then, it follows from point (2) in Proposition \ref{prop:finding self solvable by chasing tail} that $\left(\bar{c}_{\lambda},\bar{d}_{\lambda},\bar{c}_{A},\bar{d}_{A}\right)\models r$ and that $\left(\bar{c}_{A}\bar{d}_{A}\right)$ realizes $\operatorname{tp}\left(\bar{c}_{\lambda}\bar{d}_{\lambda}/A\right)$. Also, note that $r_{\timesx}\subseteq r$, $B_{\timesx}\subseteq B$, $\bar{c}_{\timesx},\bar{d}_{\timesx}\trianglelefteq\bar{c}_{\lambda},\bar{d}_{\lambda}$. Finally, we must check that $\operatorname{tp}\left(\bar{d}_{\lambda}/\bar{c}_{A}+\bar{d}_{A}+\bar{c}_{\lambda}\right)\vdash\operatorname{tp}\left(\bar{d}_{\lambda}/A+\bar{c}_{\lambda}+\bar{c}_{A}+\bar{d}_{A}\right)$. This holds since each formula involves only finitely many variables, so every instance already follows from point (4) of Proposition \ref{prop:finding self solvable by chasing tail} for large enough $n<\omega$. \end{proof} \section{\label{sec:Finding-a-good}Finding a good family} In this section we will show that the family of $\lambda$-self-solvable decompositions is a good family of $\lambda$-decompositions whenever $\lambda>\theta$ is measurable (note that in that case $\lambda^{<\lambda}=\lambda$). This will conclude the proof of Conjecture \ref{conj:The generic pair conjecture} in this case. So let $\mathfrak{F}$ be the family of $\lambda$-self-solvable decompositions $\timesx$ such that $M_{\timesx}$ is $D$-saturated of cardinality $\lambda$. Let us go over Definition \ref{def:good family}, and prove that each clause is satisfied by $\mathfrak{F}$.
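The parenthetical fact that $\lambda^{<\lambda}=\lambda$ for measurable $\lambda$ is standard; the following sketch records the computation, using only that a measurable cardinal is inaccessible (regular with $2^{\mu}<\lambda$ for all $\mu<\lambda$).

```latex
% Sketch: lambda measurable => lambda inaccessible => lambda^{<lambda} = lambda.
% For mu < lambda, regularity bounds every f : mu -> lambda by some alpha < lambda, so
\[
\lambda^{\mu}=\left|{}^{\mu}\lambda\right|
  \leq\sum_{\alpha<\lambda}\left|{}^{\mu}\alpha\right|
  \leq\sum_{\alpha<\lambda}2^{\left|\alpha\right|+\mu}
  =\lambda,
\]
% and hence
\[
\lambda^{<\lambda}=\sup_{\mu<\lambda}\lambda^{\mu}=\lambda.
\]
```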
\begin{claim} Points (\ref{enu:invariant under isom}), (\ref{enu:every model is saturated}), (\ref{enu:non-empty}), (\ref{enu:enlarging}), (\ref{enu:enbase}) and (\ref{enu:iso-extension}) are satisfied by $\mathfrak{F}$.\end{claim} \begin{proof} Everything is clear, except (\ref{enu:enlarging}), which is exactly Corollary \ref{cor:existsence of self-solvable}. \end{proof} We now move on to point (\ref{enu:union}), but for this we will need the following lemma. \begin{lem} \label{lem:finding indiscernibles} Suppose that $\left(I,<\right)$ is some linearly ordered set. Let $\sequence{\bar{a}_{i}}{i\in I}$ be a sequence of tuples of the same length from $\mathfrak{C}_{D}$, and let $B\subseteq\mathfrak{C}_{D}$ be some set. Assume the following conditions. \begin{enumerate} \item For all $i\in I$, $\bar{a}_{i}=\bar{c}_{i}\bar{d}_{i}$. \item For all $i\in I$, $\operatorname{tp}\left(\bar{a}_{i}/B_{i}\right)$ is increasing with $i$, where $B_{i}=B\cup\set{\bar{a}_{j}}{j<i}$. \item For all $i\in I$, $\operatorname{tp}\left(\bar{c}_{i}/B_{i}\right)$ does not split over $B$. \item For every $j<i$ in $I$, $\operatorname{tp}\left(\bar{d}_{i}/\bar{c}_{i}+\bar{a}_{j}\right)\vdash\operatorname{tp}\left(\bar{d}_{i}/\bar{c}_{i}+\bar{a}_{j}+B_{j}\right)$. \item For every $i_{1}<i_{2}$, $j_{1}<j_{2}$ from $I$, $\operatorname{tp}\left(\bar{a}_{i_{2}}\bar{a}_{i_{1}}/\emptyset\right)=\operatorname{tp}\left(\bar{a}_{j_{2}}\bar{a}_{j_{1}}/\emptyset\right)$. \end{enumerate} Then $\sequence{\bar{a}_{i}}{i\in I}$ is indiscernible over $B$.\end{lem} \begin{proof} We prove by induction on $n$ that $\sequence{\bar{a}_{i}}{i\in I}$ is an $n$-indiscernible sequence over $B$. For $n=1$ it follows from (2). Now suppose that $\sequence{\bar{a}_{i}}{i\in I}$ is $n$-indiscernible over $B$. Let $i_{1}<\ldots<i_{n}<i_{n+1}\in I$ and $j_{1}<\ldots<j_{n}<j_{n+1}\in I$ be such that, without loss of generality, $i_{n+1}\leq j_{n+1}$.
By (2), we know that $\bar{a}_{i_{1}}\ldots\bar{a}_{i_{n}}\bar{a}_{i_{n+1}}\equiv_{B}\bar{a}_{i_{1}}\ldots\bar{a}_{i_{n}}\bar{a}_{j_{n+1}}$. By (3) and the induction hypothesis, we know that $\bar{a}_{i_{1}}\ldots\bar{a}_{i_{n}}\bar{c}_{j_{n+1}}\equiv_{B}\bar{a}_{j_{1}}\ldots\bar{a}_{j_{n}}\bar{c}_{j_{n+1}}$. Combining, we get that $\bar{a}_{i_{1}}\ldots\bar{a}_{i_{n}}\bar{c}_{i_{n+1}}\equiv_{B}\bar{a}_{j_{1}}\ldots\bar{a}_{j_{n}}\bar{c}_{j_{n+1}}$. Suppose that $\varphi\left(\bar{d}_{i_{n+1}},\bar{c}_{i_{n+1}},\bar{a}_{i_{n}},\ldots\bar{a}_{i_{1}},\bar{b}\right)$ holds where $\bar{b}$ is a finite tuple from $B$. Let $r\left(\bar{x}_{\bar{d}},\bar{x}_{\bar{c}},\bar{x}_{\bar{a}}\right)=\operatorname{tp}\left(\bar{d}_{i_{n+1}},\bar{c}_{i_{n+1}},\bar{a}_{i_{n}}/\emptyset\right)$. By (4), $r\left(\bar{x}_{\bar{d}},\bar{c}_{i_{n+1}},\bar{a}_{i_{n}}\right)\vdash\varphi\left(\bar{x}_{\bar{d}},\bar{c}_{i_{n+1}},\bar{a}_{i_{n}},\ldots\bar{a}_{i_{1}},\bar{b}\right)$. Applying the last equivalence, we get that $r\left(\bar{x}_{\bar{d}},\bar{c}_{j_{n+1}},\bar{a}_{j_{n}}\right)\vdash\varphi\left(\bar{x}_{\bar{d}},\bar{c}_{j_{n+1}},\bar{a}_{j_{n}},\ldots\bar{a}_{j_{1}},\bar{b}\right).$ By (5), $r=\operatorname{tp}\left(\bar{d}_{j_{n+1}},\bar{c}_{j_{n+1}},\bar{a}_{j_{n}}/\emptyset\right)$, so $\bar{d}_{j_{n+1}}$ satisfies the left hand side, hence also the right hand side; thus $\varphi\left(\bar{d}_{j_{n+1}},\bar{c}_{j_{n+1}},\bar{a}_{j_{n}},\ldots\bar{a}_{j_{1}},\bar{b}\right)$ holds and we are done. \end{proof} \begin{cor} \label{cor:finding indiscernibles}Suppose that $\timesx\in\mathfrak{F}$, and let $M=M_{\timesx}$. Let $B\supseteq B_{\timesx}$ be any subset of $M$ of cardinality $<\lambda$, and let $\alpha\leq\lambda$.
For $i<\alpha$, let $\bar{a}_{i}$ be such that $\bar{a}_{0}=\bar{c}_{B}\bar{d}_{B}$ (see Definition \ref{def:self-solvable}), and for $i>0$, $\bar{a}_{i}=\bar{c}_{B_{i}}\bar{d}_{B_{i}}$ where $B_{i}=B\cup\set{\bar{a}_{j}}{j<i}$. Then $\sequence{\bar{a}_{i}}{i<\alpha}\frown\left\langle \bar{c}_{\timesx}\bar{d}_{\timesx}\right\rangle $ is an indiscernible sequence over $B$.\end{cor} \begin{proof} Apply Lemma \ref{lem:finding indiscernibles} with $I=\alpha+1$ (so that $\bar{a}_{\alpha}=\bar{c}_{\timesx}\bar{d}_{\timesx}$). Let us check that the conditions there hold. (1) is obvious. (2) holds as $\operatorname{tp}\left(\bar{a}_{i}/B_{i}\right)=\operatorname{tp}\left(\bar{c}_{\timesx}\bar{d}_{\timesx}/B_{i}\right)\supseteq\operatorname{tp}\left(\bar{c}_{\timesx}\bar{d}_{\timesx}/B_{j}\right)$ when $\alpha+1>i\geq j$. (3) holds as $\operatorname{tp}\left(\bar{c}_{\timesx}/B_{i}\right)$ does not split over $B$, so the same is true for $\bar{c}_{i}$. (5) holds because for $i_{1}<i_{2}<\alpha+1$ and $j_{1}<j_{2}<\alpha+1$, \begin{align*} \operatorname{tp}\left(\bar{a}_{i_{2}}\bar{a}_{i_{1}}/\emptyset\right) & =\operatorname{tp}\left(\bar{c}_{\timesx}\bar{d}_{\timesx}\bar{c}_{i_{1}}\bar{d}_{i_{1}}/\emptyset\right)=r_{\timesx}\\ & =\operatorname{tp}\left(\bar{c}_{\timesx}\bar{d}_{\timesx}\bar{c}_{j_{1}}\bar{d}_{j_{1}}/\emptyset\right)=\operatorname{tp}\left(\bar{a}_{j_{2}}\bar{a}_{j_{1}}/\emptyset\right). \end{align*} Finally, (4) holds because $\operatorname{tp}\left(\bar{d}_{\timesx}/\bar{c}_{\timesx}+\bar{a}_{j}\right)\vdash\operatorname{tp}\left(\bar{d}_{\timesx}/\bar{c}_{\timesx}+\bar{a}_{j}+B_{j}\right)$, and as $\bar{c}_{\timesx}\bar{d}_{\timesx}\equiv_{B_{j+1}}\bar{c}_{B_{i}}\bar{d}_{B_{i}}=\bar{a}_{i}$, we can replace $\bar{c}_{\timesx}\bar{d}_{\timesx}$ by $\bar{c}_{B_{i}}\bar{d}_{B_{i}}$ in this implication by applying an automorphism of $\mathfrak{C}_{D}$.
\end{proof} \begin{lem} \label{lem:restriction is ok}Suppose that $\timesx_{1}\leq\timesx_{2}$ are two $\lambda$-decompositions from $\mathfrak{F}$. Then for every subset $A$ of $M$ of size $<\lambda$ containing $B_{\timesx_{1}}$, and for any choice of $\bar{c}_{A},\bar{d}_{A}$ which we get when we apply Definition \ref{def:self-solvable} to $\timesx_{2}$, their restrictions to $\lg\left(\bar{c}_{\timesx_{1}}\right),\lg\left(\bar{d}_{\timesx_{1}}\right)$ satisfy all the conditions in Definition \ref{def:self-solvable}.\end{lem} \begin{proof} Denote these restrictions by $\bar{c}_{A}',\bar{d}_{A}'$. As $r_{\timesx_{1}}\subseteq r_{\timesx_{2}}$, we get Clause (\ref{enu:realize r}) of Definition \ref{def:self-solvable} immediately. Clause (\ref{enu:same type over small set}) is also clear, so we are left with (\ref{enu:The-strong tree}). Since $\timesx_{1}\in\mathfrak{F}$, there are some $\bar{c}_{A}'',\bar{d}_{A}''$ in $M$ of the same length as $\lg\left(\bar{c}_{\timesx_{1}}\right),\lg\left(\bar{d}_{\timesx_{1}}\right)$, which we get when applying Definition \ref{def:self-solvable} to $\timesx_{1}$. It is enough to show that $\bar{d}_{\timesx_{1}}\bar{c}_{\timesx_{1}}\bar{c}_{A}''\bar{d}_{A}''\equiv_{A}\bar{d}_{\timesx_{1}}\bar{c}_{\timesx_{1}}\bar{c}_{A}'\bar{d}_{A}'$. Note first that $\bar{c}_{A}''\bar{d}_{A}''\equiv_{A}\bar{c}_{A}'\bar{d}_{A}'$ by (\ref{enu:same type over small set}), and as $\operatorname{tp}\left(\bar{c}_{\timesx_{1}}/M\right)$ does not split over $A$, we also get $\bar{c}_{\timesx_{1}}\bar{c}_{A}''\bar{d}_{A}''\equiv_{A}\bar{c}_{\timesx_{1}}\bar{c}_{A}'\bar{d}_{A}'$. So suppose that $\mathfrak{C}_{D}\models\varphi\left(\bar{d}_{\timesx_{1}},\bar{c}_{\timesx_{1}},\bar{c}_{A}'',\bar{d}_{A}'',\bar{a}\right)$ where $\bar{a}$ is a finite tuple from $A$.
By (\ref{enu:The-strong tree}) and (\ref{enu:realize r}), $r_{\timesx_{1}}\left(\bar{c}_{\timesx_{1}},\bar{x}_{\bar{d}_{\timesx_{1}}},\bar{c}_{A}'',\bar{d}_{A}''\right)\vdash\varphi\left(\bar{x}_{\bar{d}_{\timesx_{1}}},\bar{c}_{\timesx_{1}},\bar{c}_{A}'',\bar{d}_{A}'',\bar{a}\right)$, and applying the last equivalence, we get that $r_{\timesx_{1}}\left(\bar{c}_{\timesx_{1}},\bar{x}_{\bar{d}_{\timesx_{1}}},\bar{c}_{A}',\bar{d}_{A}'\right)\vdash\varphi\left(\bar{x}_{\bar{d}_{\timesx_{1}}},\bar{c}_{\timesx_{1}},\bar{c}_{A}',\bar{d}_{A}',\bar{a}\right)$, but as $\bar{d}_{\timesx_{1}}$ satisfies the left hand side (because $r_{\timesx_{1}}\subseteq r_{\timesx_{2}}$), we are done. \end{proof} \begin{thm} \label{thm:union} Suppose $\delta<\lambda$ is a limit ordinal. Let $\sequence{\timesx_{j}}{j<\delta}$ be an increasing sequence of decompositions from $\mathfrak{F}$. Then $\timesx=\sup_{j<\delta}\timesx_{j}\in\mathfrak{F}$. Hence point (\ref{enu:union}) of Definition \ref{def:good family} is satisfied by $\mathfrak{F}$. \end{thm} \begin{proof} Clearly $\timesx$ is a $\lambda$-decomposition (i.e., $\left|B_{\timesx}\right|<\lambda$ and $r_{\timesx}$ is well defined). Also, $\operatorname{tp}\left(\bar{c}_{\timesx}/M\right)$ does not split over $B_{\timesx}=\bigcup B_{\timesx_{i}}$, where we let $M=M_{\timesx}$. Let $A\subseteq M$ be of cardinality $<\lambda$ and without loss of generality suppose $B_{\timesx}\subseteq A$. In order to prove the theorem, we need to find some $\bar{c},\bar{d}\in M^{<\lambda}$ of the same length as $\bar{c}_{\timesx},\bar{d}_{\timesx}$ such that $\operatorname{tp}\left(\bar{c}\bar{d}/A\right)=\operatorname{tp}\left(\bar{c}_{\timesx}\bar{d}_{\timesx}/A\right)$, $\operatorname{tp}\left(\bar{c}_{\timesx},\bar{d}_{\timesx},\bar{c},\bar{d}\right)=r_{\timesx}$, and $\operatorname{tp}\left(\bar{d}_{\timesx}/\bar{c}_{\timesx}+\bar{c}+\bar{d}\right)\vdash\operatorname{tp}\left(\bar{d}_{\timesx}/\bar{c}_{\timesx}+\bar{c}+\bar{d}+A\right)$.
Let us simplify the notation by letting $\beta_{j}=\lg\left(\bar{c}_{\timesx_{j}}\right),\gamma_{j}=\lg\left(\bar{d}_{\timesx_{j}}\right)$. Note that when $\beta_{j}$ and $\gamma_{j}$ are constant from some point onwards, finding such $\bar{c},\bar{d}$ is done by just applying Definition \ref{def:self-solvable} to some $\timesx_{j}$, so although the following argument works for this case as well, it is more interesting when $\beta_{j}$ and $\gamma_{j}$ are increasing. For every $i<\delta$, let $\bar{c}_{i},\bar{d}_{i}=\bar{c}_{A_{i}}\bar{d}_{A_{i}}$ be as in Definition \ref{def:self-solvable} applied to $\timesx_{i}$ (so their length is $\beta_{i},\gamma_{i}$), where $A_{i}=A\cup\set{\bar{c}_{j},\bar{d}_{j}}{j<i}$. Now repeat this process starting with $A_{\delta}$ to construct $\bar{c}_{i},\bar{d}_{i}$ for $\delta\leq i<\delta+\delta$. Now we repeat this process $\kappa+1$ times, for $\kappa=\mu\left(D\right)^{+}+\left|T\right|^{+}<\lambda$, to construct $\bar{c}_{i},\bar{d}_{i}$ and $A_{i}$ for $\delta+\delta\leq i<\delta\cdot\kappa+\delta$. For $j<\delta$, let $O_{j}\subseteq\delta\cdot\kappa+\delta$ be the set of all ordinals $i$ such that $i\modp{\delta}\geq j$. By Corollary \ref{cor:finding indiscernibles} and Lemma \ref{lem:restriction is ok}, for each $j<\delta$, the sequence $I_{j}=\sequence{\left(\bar{c}_{i}\upharpoonright\beta_{j},\bar{d}_{i}\upharpoonright\gamma_{j}\right)}{i\in O_{j}}\frown\left\langle \left(\bar{c}_{\timesx_{j}}\bar{d}_{\timesx_{j}}\right)\right\rangle $ is an indiscernible sequence over $A$. Let $O_{j}'=O_{j}\cap\delta\cdot\kappa$, $O_{j}''=O_{j}\cap\left[\delta\cdot\kappa,\delta\cdot\kappa+\delta\right)$, and let $I_{j}'=I_{j}\upharpoonright O_{j}'$, $I_{j}''=I_{j}\upharpoonright O_{j}''$. 
As $O_{j}'$ has cofinality $\kappa$ (suppose $X\subseteq O_{j}'$ is unbounded, then the set $\set{i<\kappa}{X\cap\left[\delta\cdot i,\delta\cdot i+\delta\right)\neq\emptyset}$ is unbounded, so has cardinality $\kappa$, so $\left|X\right|\geq\kappa$, but easily, the set $\set{\delta\cdot i+j}{i<\kappa}$ is cofinal in $O_{j}'$), we can apply Lemma \ref{lem:average type on indiscernible sequence}, and consider the type $q_{j}\left(\bar{x}\right)=\average{I_{j}'}{A_{\delta\cdot\kappa+\delta}}{}$, which is a complete $D$-type. So each $q_{j}$ is a type in $\beta_{j}+\gamma_{j}$ variables. \begin{claim*} For $j_{1}<j_{2}$, $q_{j_{1}}\subseteq q_{j_{2}}$.\end{claim*} \begin{proof} Suppose $\varphi\left(\bar{y},\bar{a}\right)\in q_{j_{1}}$, where $\bar{a}$ is a finite tuple from $A_{\delta\cdot\kappa+\delta}$ and $\bar{y}$ is a finite subtuple of variables of $\bar{x}$. By definition, it means that for large enough $i\in O_{j_{1}}'$, $\varphi\left(\bar{c}_{i}\upharpoonright\beta_{j_{1}},\bar{d}_{i}\upharpoonright\gamma_{j_{1}},\bar{a}\right)$ holds (where we restrict $\bar{c}_{i},\bar{d}_{i}$ to $\bar{y}$, of course). But $j_{2}>j_{1}$, so $O_{j_{2}}'\subseteq O_{j_{1}}'$, so the same is true for $O_{j_{2}}'$, and so $\varphi\left(\bar{y},\bar{a}\right)\in q_{j_{2}}$. \end{proof} Let $q=\bigcup_{j<\delta}q_{j}$. As $\delta$ is a limit ordinal, it follows that $q$ is also a $D$-type over $A_{\delta\cdot\kappa+\delta}$. Let $\bar{c}',\bar{d}'\models q$, and for each $j<\delta$, let $\bar{c}'_{j}=\bar{c}'\upharpoonright\beta_{j},\bar{d}_{j}'=\bar{d}'\upharpoonright\gamma_{j}$. It now follows that for each $j<\delta$, the sequence $I_{j}'\frown\left\langle \left(\bar{c}_{j}',\bar{d}_{j}'\right)\right\rangle \frown I_{j}''$ is indiscernible over $A$. Let us check that $\bar{c}',\bar{d}'$ are as required.
To show this it is enough to see that for every $j<\delta$, $\bar{c}_{\timesx_{j}}\bar{d}_{\timesx_{j}}\bar{c}_{j}'\bar{d}_{j}'\equiv_{A}\bar{c}_{\timesx_{j}}\bar{d}_{\timesx_{j}}\bar{c}_{j}\bar{d}_{j}$. Suppose $\varphi\left(\bar{c}_{\timesx_{j}},\bar{d}_{\timesx_{j}},\bar{c}_{j},\bar{d}_{j},\bar{a}\right)$ holds, where $\bar{a}$ is a finite tuple from $A$. By indiscernibility, $\varphi\left(\bar{c}_{\timesx_{j}},\bar{d}_{\timesx_{j}},\bar{c}_{\delta\cdot\kappa+j},\bar{d}_{\delta\cdot\kappa+j},\bar{a}\right)$ holds as well. By choice of $\bar{c}_{\delta\cdot\kappa+j+1},\bar{d}_{\delta\cdot\kappa+j+1}$, it follows that \begin{equation} r_{\timesx_{j}}\left(\bar{c}_{\timesx_{j}},\bar{x}_{\bar{d}_{\timesx_{j}}},\bar{c}_{\delta\cdot\kappa+j+1}\upharpoonright\beta_{j},\bar{d}_{\delta\cdot\kappa+j+1}\upharpoonright\gamma_{j}\right)\vdash\varphi\left(\bar{c}_{\timesx_{j}},\bar{x}_{\bar{d}_{\timesx_{j}}},\bar{c}_{\delta\cdot\kappa+j},\bar{d}_{\delta\cdot\kappa+j},\bar{a}\right).\tag{*}\label{eq:implication} \end{equation} By indiscernibility, \begin{align*} & \left(\bar{c}_{\delta\cdot\kappa+j+1}\upharpoonright\beta_{j}\right)\left(\bar{d}_{\delta\cdot\kappa+j+1}\upharpoonright\gamma_{j}\right)\bar{c}_{\delta\cdot\kappa+j}\bar{d}_{\delta\cdot\kappa+j}\equiv_{A}\\ & \left(\bar{c}_{\delta\cdot\kappa+j+1}\upharpoonright\beta_{j}\right)\left(\bar{d}_{\delta\cdot\kappa+j+1}\upharpoonright\gamma_{j}\right)\left(\bar{c}'\upharpoonright\beta_{j}\right)\left(\bar{d}'\upharpoonright\gamma_{j}\right), \end{align*} and as $\operatorname{tp}\left(\bar{c}_{\timesx}/M\right)$ does not split over $A$, \begin{align*} & \bar{c}_{\timesx_{j}}\left(\bar{c}_{\delta\cdot\kappa+j+1}\upharpoonright\beta_{j}\right)\left(\bar{d}_{\delta\cdot\kappa+j+1}\upharpoonright\gamma_{j}\right)\bar{c}_{\delta\cdot\kappa+j}\bar{d}_{\delta\cdot\kappa+j}\equiv_{A}\\ &
\bar{c}_{\timesx_{j}}\left(\bar{c}_{\delta\cdot\kappa+j+1}\upharpoonright\beta_{j}\right)\left(\bar{d}_{\delta\cdot\kappa+j+1}\upharpoonright\gamma_{j}\right)\bar{c}'_{j}\bar{d}'_{j}. \end{align*} Applying the last equivalence to (\ref{eq:implication}), we get that \begin{equation} r_{\timesx_{j}}\left(\bar{c}_{\timesx_{j}},\bar{x}_{\bar{d}_{\timesx_{j}}},\bar{c}_{\delta\cdot\kappa+j+1}\upharpoonright\beta_{j},\bar{d}_{\delta\cdot\kappa+j+1}\upharpoonright\gamma_{j}\right)\vdash\varphi\left(\bar{c}_{\timesx_{j}},\bar{x}_{\bar{d}_{\timesx_{j}}},\bar{c}'_{j},\bar{d}'_{j},\bar{a}\right).\tag{**}\label{eq:second implication} \end{equation} As $\bar{d}_{\timesx_{j}}$ satisfies the left hand side of (\ref{eq:second implication}), it also satisfies the right hand side, and we are done. \end{proof} \begin{rem} The proof of Theorem \ref{thm:union} as above can be simplified in the case where $D$ is trivial (i.e., the usual first order case). There, we would not need to introduce $\kappa$ (i.e., we can choose $\kappa=1$), and we would not have to use dependence (which we used in applying Lemma \ref{lem:average type on indiscernible sequence} which states that the average type of an indiscernible sequence exists and is a $D$-type). To make the proof work, we only needed to find $\bar{c}',\bar{d}'$ such that the sequence $I_{j}'\frown\left\langle \bar{c}'\upharpoonright\beta_{j},\bar{d}'\upharpoonright\gamma_{j}\right\rangle \frown I_{j}''$ is indiscernible over $A$, and this can easily be done by compactness. \end{rem} We now move on to points (\ref{enu:isohomo}) and (\ref{enu:count}) of Definition \ref{def:good family}. Suppose $\timesx$ is a $\lambda$-tree-type decomposition. Let $L_{\bar{c}_{\timesx}}$ be the set of formulas $\varphi\left(\bar{x}_{\bar{c}_{\timesx}},\bar{y}\right)$ where $\bar{x}_{\bar{c}_{\timesx}}$ is a tuple of variables of the length of $\bar{c}_{\timesx}$ (of course only finitely many of them appear in $\varphi$).
For $B\subseteq M_{\timesx}$ over which $\operatorname{tp}\left(\bar{c}_{\timesx}/M_{\timesx}\right)$ does not split, define $\sch{\timesx}B:L_{\bar{c}_{\timesx}}\to\mathcal{P}\left(S_{D}^{<\omega}\left(B\right)\right)$ by: \[ \sch{\timesx}B\left(\varphi\left(\bar{x}_{\bar{c}_{\timesx}},\bar{y}\right)\right)=\set{p\left(\bar{y}\right)\in S_{D}\left(B\right)}{\exists\bar{e}\in M_{\timesx}^{\lg\left(\bar{y}\right)}\left(\bar{e}\models p\land\mathfrak{C}_{D}\models\varphi\left(\bar{c}_{\timesx},\bar{e}\right)\right)}. \] As $\operatorname{tp}\left(\bar{c}_{\timesx}/M\right)$ does not split over $B$, we can also replace $\exists$ with $\forall$ in the definition of $\sch{\timesx}B$. This implies that for $B'\supseteq B$ and $p\in S_{D}\left(B'\right)$, \begin{equation} p\in\sch{\timesx}{B'}\left(\varphi\right)\Leftrightarrow p|_{B}\in\sch{\timesx}B\left(\varphi\right).\tag{\ensuremath{\dagger}}\label{eq:restriction} \end{equation} Suppose that ${\bf y}$ is another $\lambda$-tree-type decomposition. When $h$ is an elementary map from $B_{\timesx}$ to $B_{{\bf y}}$, then it induces a well defined map from $S_{D}\left(B_{\timesx}\right)$ to $S_{D}\left(B_{{\bf y}}\right)$ which we will also call $h$. So if $\bar{c}_{\timesx}$ has the same length as $\bar{c}_{{\bf y}}$, it makes sense to ask that $h\circ\sch{\timesx}{B_{\timesx}}=\sch{{\bf y}}{B_{{\bf y}}}$. When $r_{\timesx}=r_{{\bf y}}$, a partial elementary map $h$ whose domain is $B_{\timesx}\cup\bigcup\bar{c}_{\timesx}\cup\bigcup\bar{d}_{\timesx}$ which maps $\left(\bar{d}_{\timesx},\bar{c}_{\timesx},B_{\timesx}\right)$ onto $\left(\bar{d}_{{\bf y}},\bar{c}_{{\bf y}},B_{{\bf y}}\right)$ and satisfies $h\circ\sch{\timesx}{B_{\timesx}}=\sch{{\bf y}}{B_{{\bf y}}}$ is called a \emph{pseudo isomorphism} between $\timesx$ and ${\bf y}$. 
Note that if $h$ is a pseudo isomorphism, then for any two tuples $\bar{a}$, $\bar{b}$, from $M_{\timesx}$, $M_{{\bf y}}$ respectively, if $h\upharpoonright B_{\timesx}$ can be extended to witness that $B_{\timesx}\bar{a}\equiv B_{{\bf y}}\bar{b}$, then $\bar{c}_{\timesx}B_{\timesx}\bar{a}\equiv\bar{c}_{{\bf y}}B_{{\bf y}}\bar{b}$. \begin{prop} \label{prop:pseudo isomorphism implies weak isomorphism}Suppose $\timesx,{\bf y}\in\mathfrak{F}$ are such that $r_{\timesx}=r_{{\bf y}}$, and suppose that $h:\timesx\to{\bf y}$ is a pseudo isomorphism. Then $h$ is a weak isomorphism, i.e., it extends to an isomorphism $h^{+}:\timesx\to{\bf y}$. Conversely, if $h$ is a weak isomorphism, then it is a pseudo isomorphism. \end{prop} \begin{proof} We will do a back-and-forth argument. In each successor step we will add an element to either $B_{\timesx}$ or $B_{{\bf y}}$ and extend $h$. In doing so, the new $\timesx$ and ${\bf y}$'s will still remain in $\mathfrak{F}$ (by point (\ref{enu:enbase}) of Definition \ref{def:good family} which is easily true for $\mathfrak{F}$). In addition, the extended $h$'s will still be pseudo isomorphisms by (\ref{eq:restriction}). In order to do this, it is enough to do a single step, so assume that $h:\timesx\to{\bf y}$ is a pseudo isomorphism, and $a\in M_{\timesx}$. We want to find $b\in M_{{\bf y}}$ such that $h\cup\left\{ \left(a,b\right)\right\} $ is a pseudo isomorphism from $\timesx'=\left(M_{\timesx},B_{\timesx}\cup\left\{ a\right\} ,\bar{d}_{\timesx},\bar{c}_{\timesx},r_{\timesx}\right)$ to ${\bf y}'=\left(M_{{\bf y}},B_{{\bf y}}\cup\left\{ b\right\} ,\bar{d}_{{\bf y}},\bar{c}_{{\bf y}},r_{{\bf y}}\right)$. Let $A=B_{\timesx}\cup\left\{ a\right\} $, and let $\bar{c}_{A}^{\timesx},\bar{d}_{A}^{\timesx}$ be as in Definition \ref{def:self-solvable} for $\timesx$. Let $\bar{c}_{B_{{\bf y}}}^{{\bf y}},\bar{d}_{B_{{\bf y}}}^{{\bf y}}$ be the parallel tuples for ${\bf y}$ and $B_{{\bf y}}$.
By (\ref{enu:same type over small set}) of Definition \ref{def:self-solvable}, $B_{\timesx}\bar{c}_{A}^{\timesx}\bar{d}_{A}^{\timesx}\equiv B_{{\bf y}}\bar{c}_{B_{{\bf y}}}^{{\bf y}}\bar{d}_{B_{{\bf y}}}^{{\bf y}}$, as witnessed by expanding $h\upharpoonright B_{\timesx}$ to $B_{\timesx}\bar{c}_{A}^{\timesx}\bar{d}_{A}^{\timesx}$. Hence as $M_{{\bf y}}$ is $D$-saturated there is some $b\in M_{{\bf y}}$ such that $B_{\timesx}a\bar{c}_{A}^{\timesx}\bar{d}_{A}^{\timesx}\equiv B_{{\bf y}}b\bar{c}_{B_{{\bf y}}}^{{\bf y}}\bar{d}_{B_{{\bf y}}}^{{\bf y}}$. So we have found our $b$. As noted above, as $h$ is a pseudo isomorphism, we get that \begin{equation} B_{\timesx}a\bar{c}_{A}^{\timesx}\bar{d}_{A}^{\timesx}\bar{c}_{\timesx}\equiv B_{{\bf y}}b\bar{c}_{B_{{\bf y}}}^{{\bf y}}\bar{d}_{B_{{\bf y}}}^{{\bf y}}\bar{c}_{{\bf y}}.\tag{\ensuremath{\dagger\dagger}}\label{eq:adding c} \end{equation} Suppose now that $\varphi\left(\bar{d}_{\timesx},\bar{c}_{\timesx},a,\bar{e}\right)$ holds, where $\bar{e}$ is a finite tuple from $B_{\timesx}$. By the choice of $\bar{c}_{A}^{\timesx},\bar{d}_{A}^{\timesx}$, $r_{\timesx}\left(\bar{c}_{\timesx},\bar{x}_{\bar{d}_{\timesx}},\bar{c}_{A}^{\timesx},\bar{d}_{A}^{\timesx}\right)\vdash\varphi\left(\bar{x}_{\bar{d}_{\timesx}},\bar{c}_{\timesx},a,\bar{e}\right)$. Applying (\ref{eq:adding c}), we get that $r_{\timesx}\left(\bar{c}_{{\bf y}},\bar{x}_{\bar{d}_{{\bf y}}},\bar{c}_{B_{{\bf y}}}^{{\bf y}},\bar{d}_{B_{{\bf y}}}^{{\bf y}}\right)\vdash\varphi\left(\bar{x}_{\bar{d}_{{\bf y}}},\bar{c}_{{\bf y}},b,h\left(\bar{e}\right)\right)$. As $r_{\timesx}=r_{{\bf y}}$, $\bar{d}_{{\bf y}}$ realizes the left hand side, so also the right hand side and so $\mathfrak{C}_{D}\models\varphi\left(\bar{d}_{{\bf y}},\bar{c}_{{\bf y}},b,h\left(\bar{e}\right)\right)$.
For the limit stages, note that if $\sequence{h_{i}}{i<\delta}$ is an increasing sequence of pseudo isomorphisms $h_{i}:\timesx_{i}\to{\bf y}_{i}$ where $\timesx_{i}=\left(M_{\timesx},B_{\timesx_{i}},\bar{d}_{\timesx},\bar{c}_{\timesx},r_{\timesx}\right)$ and ${\bf y}_{i}=\left(M_{{\bf y}},B_{{\bf y}_{i}},\bar{d}_{{\bf y}},\bar{c}_{{\bf y}},r_{{\bf y}}\right)$ are increasing, and $\delta<\lambda$, then $\bigcup\set{h_{i}}{i<\delta}$ is a pseudo isomorphism from $\sup_{i<\delta}\timesx_{i}$ to $\sup_{i<\delta}{\bf y}_{i}$. The other direction is immediate. \end{proof} \begin{cor} Clause (\ref{enu:isohomo}) in Definition \ref{def:good family} holds for $\mathfrak{F}$. \end{cor} \begin{proof} We are given two increasing sequences of decompositions $\sequence{\timesx_{i}}{i<\delta}$ and $\sequence{{\bf y}_{i}}{i<\delta}$ in $\mathfrak{F}$, and we assume that for each $i<\delta$ there is a weak isomorphism $g_{i}:\timesx_{i}\to{\bf y}_{i}$ such that $g_{i}\subseteq g_{j}$ whenever $i<j$. We need to show that the union $g=\bigcup_{i<\delta}g_{i}$ is also a weak isomorphism from $\timesx=\sup_{i<\delta}\timesx_{i}$ to ${\bf y}=\sup_{i<\delta}{\bf y}_{i}$. We already know by Theorem \ref{thm:union} that $\timesx,{\bf y}\in\mathfrak{F}$, so by Proposition \ref{prop:pseudo isomorphism implies weak isomorphism}, we only need to show that $g$ is a pseudo isomorphism and that $r_{\timesx}=r_{{\bf y}}$. The latter is clear, as $r_{\timesx}=\bigcup_{i<\delta}r_{\timesx_{i}}=\bigcup_{i<\delta}r_{{\bf y}_{i}}=r_{{\bf y}}$. Also, it is clear that $g$ is an elementary map taking $\left(\bar{d}_{\timesx},\bar{c}_{\timesx},B_{\timesx}\right)$ to $\left(\bar{d}_{{\bf y}},\bar{c}_{{\bf y}},B_{{\bf y}}\right)$. Note that $L_{\bar{c}_{\timesx}}=\bigcup_{i<\delta}L_{\bar{c}_{\timesx_{i}}}$ and that for $\varphi\in L_{\bar{c}_{\timesx_{i}}}$, $\sch{\timesx}{B_{\timesx}}\left(\varphi\right)=\sch{\timesx_{i}}{B_{\timesx}}\left(\varphi\right)$. The same is true for ${\bf y}$.
Hence, for such $i<\delta$ and $\varphi$, and for any $p\in S_{D}\left(B_{{\bf y}}\right)$, \begin{align*} p\in g\left(\sch{\timesx}{B_{\timesx}}\left(\varphi\right)\right) & \Leftrightarrow p\in g\left(\sch{\timesx_{i}}{B_{\timesx}}\left(\varphi\right)\right)\\ & \Leftrightarrow p|_{B_{{\bf y}_{i}}}\in g\left(\sch{\timesx_{i}}{B_{\timesx_{i}}}\left(\varphi\right)\right)\\ & \Leftrightarrow p|_{B_{{\bf y}_{i}}}\in g_{i}\left(\sch{\timesx_{i}}{B_{\timesx_{i}}}\left(\varphi\right)\right)\\ & \Leftrightarrow p|_{B_{{\bf y}_{i}}}\in\sch{{\bf y}_{i}}{B_{{\bf y}_{i}}}\left(\varphi\right)\\ & \Leftrightarrow p\in\sch{{\bf y}_{i}}{B_{{\bf y}}}\left(\varphi\right). \end{align*} \end{proof} \begin{defn} \label{def:externally def set}For a model $M\prec\mathfrak{C}$ and $B\subseteq\mathfrak{C}$, we let $M_{\left[B\right]}$ be $M$ with predicates for all $B$-definable subsets. More precisely, for each formula $\varphi\left(x_{1},\ldots,x_{n},\bar{b}\right)$ over $B$, we add a predicate $R_{\varphi\left(\bar{x},\bar{b}\right)}\left(\bar{x}\right)$ and we interpret it as $\varphi\left(\mathfrak{C}^{n},\bar{b}\right)\cap M^{n}$. If $B\subseteq M$, then this is definably equivalent to adding names for elements of $B$. \end{defn} For a $\lambda$-decomposition $\timesx$, denote by $M_{\left[\timesx\right]}$ the structure $M_{\left[\bar{c}_{\timesx}+\bar{d}_{\timesx}+B_{\timesx}\right]}$. \begin{thm} \label{thm:externally definable sets, homogeneous}Suppose $\timesx\in\mathfrak{F}$. Then $M_{\left[\timesx\right]}$ is homogeneous. \end{thm} \begin{proof} We have to show that if $A\subseteq M$ is of cardinality $<\lambda$, and $f$ is a partial elementary map of $M_{\left[\timesx\right]}$ with domain $A$, then we can extend it to an automorphism. We may assume that $B_{\timesx}\subseteq A$ and that $f\upharpoonright B_{\timesx}=\operatorname{id}$, as $f$ preserves all $B_{\timesx}$-definable sets.
It follows that $\timesx'=\left(M_{\timesx},A,\bar{d}_{\timesx},\bar{c}_{\timesx},r_{\timesx}\right)$ and $\timesx''=\left(M_{\timesx},f\left(A\right),\bar{d}_{\timesx},\bar{c}_{\timesx},r_{\timesx}\right)$ are both in $\mathfrak{F}$. By definition, $f$ extends to an elementary map $f':\left(A,\bar{d}_{\timesx},\bar{c}_{\timesx}\right)\to\left(f\left(A\right),\bar{d}_{\timesx},\bar{c}_{\timesx}\right)$, but moreover $f'$ is a pseudo isomorphism. This follows easily by (\ref{eq:restriction}) above. Hence we are done by Proposition \ref{prop:pseudo isomorphism implies weak isomorphism}. \end{proof} \begin{cor} \label{cor:counting up to isomorphism}Clause (\ref{enu:count}) in Definition \ref{def:good family} holds for $\mathfrak{F}$. \end{cor} \begin{proof} Suppose $\set{\timesx_{i}}{i<\lambda^{+}}$ is a set of pairwise non-isomorphic elements of $\mathfrak{F}$ with $M_{\timesx_{i}}=M$ for all $i$. We may assume that for some $\beta,\gamma<\lambda$ and all $i<\lambda^{+}$, $\bar{c}_{\timesx_{i}}$ is of length $\beta$ and $\bar{d}_{\timesx_{i}}$ is of length $\gamma$. We may also assume, as $\lambda^{<\lambda}=\lambda$, that $B_{\timesx_{i}}=B$ for all $i<\lambda^{+}$. Let $L'$ be the common language of the structures $M_{\left[\timesx_{i}\right]}$ (which we may assume is constant as it only depends on the length of $\bar{c}_{\timesx_{i}}$, $\bar{d}_{\timesx_{i}}$ and $B_{\timesx_{i}}$). Let $D_{i}=D\left(M_{\left[\timesx_{i}\right]}\right)$ in the language $L'$ (recall that $D\left(A\right)$ consists of all types of finite tuples from $A$ over $\emptyset$). The language $L'$ has size $<\lambda$, so the number of possible $D$'s is $\leq2^{2^{\left|L'\right|}}<\lambda$, so we may assume that $D_{i}=D_{0}$ for all $i<\lambda^{+}$ (it follows that $M_{\timesx_{i}}\equiv M_{\timesx_{0}}$). Finally, we are done by Lemma \ref{lem:Grossberg}, Corollary \ref{cor:homo-isom} and Theorem \ref{thm:externally definable sets, homogeneous}.
\end{proof} \begin{rem} One can also prove Corollary \ref{cor:counting up to isomorphism} directly, showing that the number of $\lambda$-decompositions in $\mathfrak{F}$ up to pseudo isomorphism is $\leq\lambda$, and then use Proposition \ref{prop:pseudo isomorphism implies weak isomorphism}. \end{rem} Finally, we have proved that $\mathfrak{F}$ is a good family of $\lambda$-decompositions, so by Theorem \ref{thm:generic pair with good family} we get: \begin{cor} \label{cor:main}Conjecture \ref{conj:The generic pair conjecture}, and the conclusion of Theorem \ref{thm:clever counting of types} hold when $\lambda$ is measurable.\end{cor} \begin{problem} To what extent can we generalize \cite[Theorem 7.3]{Sh950} to dependent finite diagrams? For instance, is the generic pair conjecture for dependent finite diagrams also true when $\lambda$ is weakly compact? \end{problem} \end{document}
\begin{document} \title{On the $1/e$-strategy for the best-choice problem under no information.} \author{F. Thomas Bruss\\Universit\'e Libre de Bruxelles} \maketitle \noindent{\bf Abstract~} The main purpose of this paper is to correct an error in the previously submitted version [*] := arXiv:2004.13749v1. [*] had already been accepted for publication in a scientific journal, but was withdrawn by the author after the discovery of the error. Rather than a complete withdrawal from arXiv, we follow arXiv's preference to maintain what remains of interest. The background of the open problem, and the brief survey which comes with it, stay relevant. These keep their place in the present corrected version. The same is true for two new modified odds-theorems proved in [*], since they are applicable to several different stopping problems. Then, and in particular, we show where exactly the error occurred in [*], why it invalidates its main theorem and title, and what the conclusions are. The final discussion of optimal strategies {\it without value} in Section 4 is believed to be of general independent interest. \noindent{\bf Keywords} Optimal stopping, Secretary problem, Stopping times, Well-posed problem, Odds-theorem, Proportional increments, R\'enyi's theorem of relative ranks. \noindent{\bf MSC 2010 Subject Code}: 60G40 \section{Background of the problem} On the evening of Professor Larry Shepp's talk ``Reflecting Brownian Motion'' at Cornell University on July 11, 1983 (13th Conference on Stochastic Processes and Applications), Professor Shepp and the author ran into each other in front of the Ezra Cornell statue. I was honoured to meet him in person, and Larry asked ``What are you working on?'' And so Prof. Shepp was the very first person with whom I could discuss the {\it $1/e$-law of best choice} resulting from the {\it Unified Approach} (B. (1984)), which had been accepted for publication shortly before. I was glad to see his true interest in the $1/e$-law.
As many of us know, when Larry was interested in a problem, then he was deeply interested. This article deals with an open question concerning the optimality of the so-called $1/e$-strategy for the problem of best choice under no information on the number $N$ of options. I again drew attention to this open question in my own talk ``The $e^{-1}$-law in best choice problems'' at Cornell on July 14, 1983, and re-discussed it with Larry on several later occasions. An earlier related question appears already on page 885 of B. (1984), where the author conjectured that, for a {\it two-person game}, the $e^{-1}$-strategy is optimal for the decision maker who has to select. As far as the author is aware, the last written reference to the precise open question discussed with Prof. Shepp is in B. and Yor (2012). \section {The Unified Approach} We begin with a review of the {\it Unified Approach}-model and previously known results. \begin{quote}{\bf Unified Approach}: Suppose $N>0$ points are i.i.d. with a continuous distribution function $F$ on some interval $[0,T].$ Points are marked with qualities which are supposed to be uniquely rankable from $1$ (best) to $N$ (worst), and all rank arrival orders are supposed to be equally likely. The goal is to maximize the probability of stopping online and without recall on rank $1.$ (B. (1984)) \end{quote} \noindent This model was suggested for the best choice problem (secretary problem) for an unknown number $N$ of candidates. Recall that, by R\'enyi's theorem of relative ranks (R\'enyi (1962)), the $k$th candidate has relative rank $j$ with probability $1/k$ for all $1\le j\le k$ whenever all rank arrival orders are equally likely. Previous models for unknown $N$ had shown that the price for not knowing $N$ can be high.
The influential paper by Presman and Sonin (1972), which modelled the unknown $N$ via the hypothesis of a known distribution $\{P(N=n)\},$ displayed the intricacies arising from the possible appearance of so-called {\it stopping islands.} Moreover, Abdel-Hamid et al. (1982) showed that the $N$-unknown problem may have several solutions, and, much worse, that for any $\epsilon>0$ there exists a sufficiently unfavorable distribution $\{P(N=n)\}_{n=1,2, \cdots}$ to reduce the optimal success probability to a value smaller than $\epsilon.$ In other words, if $N$ is modelled via $\{P(N=n)\},$ optimality may in some cases mean almost nothing. This contrasts with the well-known lower bound $1/e$ which holds in the classical model for known $N=n\ge 1.$ These discouraging facts for unknown $N$ instigated efforts to find more tractable models, as e.g.\,the model of Stewart (1981), or the one of Cowan and Zabczyk (1978) and its generalisation studied in B. (1987), and also others. The unified approach of B. (1984) was different. The idea behind it was that it is typically easier to estimate - and this is where the time distribution $F$ comes in - {\it when} options are more likely to arrive, conditional on knowing that they do arrive, than to make hypotheses about the distribution of their {\it number.} No assumption at all was made about the distribution of $N.$ (The same approach was later taken by B. and Samuels (1987) for more general payoffs for different ranks.) The continuous arrival time distribution $F$ is the crucial part with respect to applications. For our open problem itself the form of $F$ is irrelevant, however. If we transform the unordered i.i.d. arrival times of the best, the second best ... , $T_1, T_2, ... $ say, by $T^*_k:=F(T_k)$, then the $T^*_k$ are i.i.d. $U[0,1]$ random variables and, since $F$ is continuous and increasing, the time transformation maintains the arrival order of the different relative ranks.
Thus, if we know the optimal strategy for dealing with i.i.d. $U[0,1]$ random arrivals on $[0,1]$, then we know it as well for i.i.d. $F$-distributed arrival times on the original horizon $[0,T].$ In all that follows we therefore confine our interest to uniformly distributed arrival times in $[0,1]$-time. \subsection{ Related problems} A related problem, to which we will return in Subsection 2.6, is the so-called last-arrival-problem under no information (l.a.p.)\,studied by B. and Yor (2012). In this model an unknown number $N$ of points are i.i.d. $U[0,1]$ random variables, and an observer, inspecting the interval $[0,1]$ sequentially from left to right, wants to maximize the probability of stopping on the very last point. No information about $N$ whatsoever is given. Only one stop is allowed, and this again without recall on preceding observations (online). Thus the only difference between the l.a.p. model of B. and Yor (2012) and the Unified Approach model of B. (1984) is that no ranks are attributed to the observations (points). Other related problems, now again with the objective to get rank 1 of uniquely ranked candidates, arise by combining the Unified Approach model and the model of Presman and Sonin (1972) for different types of distributions of $N$. If $(P(N=n))_{n=1,2, \cdots}$ is known, then one is in the setting of a model with a prior. The i.i.d. $U[0,1]$ arrival times can then be used as an additional means of statistical inference to update the posterior distribution of $N$. Stopping islands, as observed in the paper of Presman and Sonin (1972), carry over to corresponding islands in continuous time. The optimal strategy may thus become very complicated, and we would typically not like to compute it, but, in principle, it can be computed. For the latter class of problems, what would be a good alternative? Moreover, and in particular, what can one do if one has absolutely no information about $N$?
\subsection{The 1/e-law} The answer given by the unified approach (B.(1984)) was that, as far as applications are concerned, we need not care much. For ease of reference we recall these results summarised as the {\it $1/e$-law.} Here we follow the by now established tradition to call an observation of relative rank 1 a {\it record value}, or simply {\it record,} and the time when a record appears a {\it record time.} R\'enyi (1962) had called a record an {\it \'el\'ement saillant}. The $1/e$-law says: \begin{quote} 1. The strategy to wait (in $[0,1]$-time) up to time $1/e \approx 0.3678,$ and then to select the first record (if any) from time $1/e$ onward, called the $1/e$-strategy, succeeds for all $N$ with probability at least $1/e.$ 2. There exists no strategy which would be better for all $N.$ 3. The $1/e$-strategy selects no candidate with probability precisely $1/e.$\end{quote} \noindent Note also that 1. and 3. imply that a non-best option is selected with probability smaller than $1-2/e\approx 0.2642.$ This multiple role of the number $1/e$ gave rise to the name $1/e$-law, and Table 1 (B.(1984), p. 336) shows how good the lower bound $1/e$ for the success probability actually is. Taking also into account the minimax optimality stated in 2., we can conclude that the $1/e$-strategy is a convenient and convincing alternative for all practical purposes. See e.g. the comments of Samuels (Math. Reviews: 1985). But then, the following question arises: {\it Is the $1/e$-strategy optimal if we have no prior information at all on $N$?} \noindent As mentioned before, if the question is stated like this, the answer is No. We have to return to what is known. What is known? (I)~{\bf Optimal $x$-strategies given $N=n.$}~ First, suppose that $N$ were known, say $N=n$, and that we want to determine the optimal strategy in the class of so-called $x$-strategies, that is, to wait until time $x\in [0,1]$ and then to select, if any, the first record from time $x$ onward.
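Such $x$-strategies are easy to check by direct simulation for any fixed $n$ (this does not conflict with the reservations about simulation expressed further on, which concern randomizing the unknown $N$ itself; here no distribution for $N$ is assumed). A minimal sketch in Python, with function names of my own choosing:

```python
import math
import random

def x_strategy_outcome(n, x, rng):
    """Run the x-strategy once on n i.i.d. U[0,1] arrival times.

    Qualities are i.i.d. uniform values (larger = better), which makes all
    rank arrival orders equally likely.  Returns 'best' (rank 1 selected),
    'wrong' (some other record selected), or 'none' (no record after x).
    """
    points = sorted((rng.random(), rng.random()) for _ in range(n))  # (time, quality)
    overall_best = max(q for _, q in points)
    best_so_far = -1.0
    for t, q in points:
        if q > best_so_far:            # a record: best among all points seen so far
            best_so_far = q
            if t >= x:                 # first record from time x onward: stop here
                return 'best' if q == overall_best else 'wrong'
    return 'none'

def estimate(n, x=math.exp(-1), trials=100_000, seed=1):
    """Monte Carlo estimates of the outcome probabilities of the x-strategy."""
    rng = random.Random(seed)
    counts = {'best': 0, 'wrong': 0, 'none': 0}
    for _ in range(trials):
        counts[x_strategy_outcome(n, x, rng)] += 1
    return {k: v / trials for k, v in counts.items()}
```

For each fixed $n$, the estimated success probability of the $1/e$-strategy stays above $1/e\approx 0.368$ and the no-selection probability stays near $1/e$, in line with statements 1. and 3. above; the latter is exact for every $n$, since no record occurs after time $1/e$ precisely when the overall best option arrives before $1/e$, an event of probability $1/e$.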
It is not difficult to find, conditioned on $\{N=n\}$, the optimal waiting time $x_n$ and its performance in this class of $x$-strategies, namely (see B. (1984), p. 884, (2)-(7)), \begin{align} ~~x_1=0; ~x_n=\arg\left\{0 \le x \le 1: \sum_{k=1}^{n-1}\frac{(1-x)^k}{k} = 1\right\}, n=2,3, \cdots.\end{align} Note that the $x_n$-strategy is suboptimal since it does not fully use the knowledge $N=n,$ in contrast to the optimal strategy for the classical secretary problem for $n$ candidates. (II)~{\bf Monotonicity results.~} We can derive from (1) that \begin{align}p_n(x):= P(x{\rm -strategy~succeeds}\big |\,N=n)=\frac{(1-x)^n}{n}+x\,\sum_{k=1}^{n-1}\frac{(1-x)^k}{k},\end{align} and also that $p_n(x)\ge p_{n+1}(x)$ for all $x\in [0,1].$ This implies \begin{align} \forall x \in [0,1]: p_n(x) \downarrow \lim_{n\to \infty}p_n(x)=- x \log (x). \end{align}Moreover, it follows from (2) and (3) that the optimal waiting time $x_n$ and the corresponding optimal win probability $p_n(x_n)$ satisfy, respectively, \begin{align}~~x_n\uparrow \frac{1}{e} ~~{\rm and~~}p_n(x_n)\downarrow \frac{1}{e}, {~\rm as~}n \to \infty.\end{align} (III) {\bf Asymptotic optimality.}~The $1/e$-strategy is, as $n\to \infty$, asymptotically optimal with win probability $1/e.$ This follows from (3) and (4), showing that the limiting performance of the $1/e$-strategy is the same as that of the well-known optimal strategy for the classical secretary problem for known $n$ as $n\to \infty,$ namely $1/e.$ Clearly one cannot do better than in the case that one knows $N.$ (IV) {\bf Connection with Pascal processes} ~ Let $(\Pi_t)_{t\ge0}$ be a counting process on $\mathbb R^+$ with the distributional prescription that for all $T>0$ and $0< t\le T$ \begin{equation*}P(\Pi_T=n|{\cal F}_t)={n \choose \Pi_t}p(t,T)^{\Pi_t +1}(1-p(t,T))^{n-\Pi_t}, \end{equation*} where $\Pi_0=0$ and $({\cal F}_t)=\sigma(\{\Pi_u:u\le t\}).$ Then $(\Pi_t)$ is called a Pascal process with parameter function $p(t,T).$ These processes
are characterized in B. and Rogers (1991). Pascal processes have the remarkable property that if points are marked independently with ranks, then, concentrating on 1-records ($\equiv$ records) in such a process, optimality for stopping on the last record cannot depend on the number of points seen before. This property of stationarity was earlier observed in a weaker form (quasi-stationarity) in B. and Samuels (1990). Both papers thus add to the interest of knowing the answer to the open problem. {\bf Challenge and Intuition} The mathematical challenge to have a complete answer for the case of no-information remains because the unified approach model was created in order to deal with any $N.$ What attempts were made before, and why? Looking closely at (2), (3) and (4) in (II) of Subsection 2.2, the open problem comes up quite naturally. Things become intriguing. For any $N=n$ there is a better strategy, since the optimal $x_n$-waiting time strategy turns out to be strictly better than the $1/e$-strategy. Thus one gets the feeling that if there were a way of collecting information about $N$ sufficiently quickly, then this may be sufficient to prove that the $1/e$-strategy cannot be optimal. With a view to disproving optimality, it seems promising to assume certain types and amounts of weak information about $N$, still strong enough to imply that the $1/e$-strategy is {\it not} optimal, and then to weaken the information. Interestingly, as soon as one lets information about $N$ become weaker and weaker, and finally fade away towards no-information, the $1/e$-waiting time seems to become a miraculous ``fix-point'' of optimal thresholds. According to (III), this would surprise us much less if no-information on $N$ implied in any way that $N$ is likely to be large, but of course it does not! To understand this is a challenge. What about trying to find other types of counterexamples? The challenge remains. It is not easy to do this without leaving the framework of no-information.
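The monotonicity facts (2)--(4) behind this intuition are easy to verify numerically for fixed $n$: the sum in (1) is strictly decreasing in $x$, so $x_n$ can be computed by bisection and compared with $1/e$. A small sketch (the helper names are my own):

```python
import math

def waiting_sum(x, n):
    """The sum in (1): S_n(x) = sum_{k=1}^{n-1} (1-x)^k / k, decreasing in x."""
    return sum((1 - x) ** k / k for k in range(1, n))

def x_n(n, tol=1e-12):
    """Optimal waiting time among x-strategies given N = n >= 2,
    found by bisection as the root of S_n(x) = 1 on [0, 1]."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if waiting_sum(mid, n) > 1:    # still left of the root (S_n is decreasing)
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def p_n(x, n):
    """Success probability (2) of the x-strategy given N = n."""
    return (1 - x) ** n / n + x * waiting_sum(x, n)
```

For instance $x_3=2-\sqrt{3}\approx 0.268$ in closed form, and the computed waiting times increase towards $1/e\approx 0.3679$ while the corresponding win probabilities $p_n(x_n)$ decrease towards $1/e$ from above, as stated in (4).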
Arguing for example ``If we have no information on $\{P(N=n)\}_{n=1, 2, \cdots},$ then let us for instance suppose that this distribution turns out such and such, and that we have seen a history of points such and such, ...'' and then imply that the $1/e$-strategy is {\it not} optimal, would not be correct. Proofs by contradiction are only valid within the same logical framework, i.e.\,no-information. Arguments implying any initial information whatsoever on $N$ would not be rigorous. For the same reason, simulations are meaningless as they require parameters to randomize $N,$ and thus information on $N$ must be supplied. Looking for counterexamples cannot be expected to help. Knowing this increases the challenge. \subsection{Ill-posed or well-posed problem?} \label{sec26} Is the question possibly ill-posed? This question was asked repeatedly by several peers, and, during certain periods, the author also shared these doubts. Indeed, the notion of ``no-information'' requires clarification. Can one interpret no-information in the sense that all possible values of $N$ are in an {\it unknown} interval $\{1,2, \cdots, n\}$ with no value of $N$ being more likely than others, and then let $n$ tend to infinity? No. This is equivalent to the improper Laplace prior for $N$. It is true that this prior is the prime candidate for no-information, and very often used to express the lack of knowledge about a parameter. However, this prior implies that $N$ is likely to be very large, and this \textit{is} information. After all, ``no information'' on $N$ should mean that at time $0$ we know really nothing at all about $N.$ Now, more importantly, can we assure that the no-information hypothesis is an honest hypothesis in the sense that it is contradiction-free? If it is not contradiction-free, then of course we must declare the open problem ill-posed. \subsection {Formalising no-information} When B. and Yor (2012) studied the last-arrival problem (l.a.p.)
under the no-information hypothesis, they faced a similar difficulty of knowing whether their problem is well-posed. These authors found a simple argument to prove that it is {\it impossible to prove} that the no-information hypothesis may imply contradictions! Their argument was that, whatever a hypothetical information space $\cal H$ may look like for the unknown parameter or random variable $N$, no-information is bound to be a singleton in that space $\cal H$. This definition may at first sound like a formalism to prevent saying something wrong, but there is more to it. It implies that, as a singleton, the no-information hypothesis cannot lead to contradictory implications. A singleton in $\cal H$ has, by definition of a singleton, nothing in common with other points in $\cal H,$ whereas contradicting implications cannot come out of nothing. They would need different sources of information giving rise to (at least two) different implications. B. and Yor (2012) therefore concluded that they should, a priori, take a constructive attitude and try to find a solution. And so they did. But then the question was whether their solution is the solution of a well-posed problem. Hadamard's criteria (Hadamard (1902)) were the only criteria Bruss and Yor found for the solution of a well-posed problem, and they agreed with these criteria; they were therefore glad to see that their solution fully satisfied them. According to these criteria, the solution of Bruss and Yor (2012, subsection 5.3) is the solution of a well-posed problem. One part of the approach of B. and Yor (2012), described next, remains however very helpful for our problem: the notion of a stochastic process with proportional increments. \subsection{ Proportional increments} For $N$ i.i.d. $U([0,1])$ arrival points, let \begin{align*}N_t= \#~ {\rm arrivals ~up~ to~ time~} t,~ t\in[0,1]. \end{align*} B. and Yor (2012, subsection 1.1 and pp.
3242-46) showed that the counting process $(N_t)_{0\le t\le 1}$ of incoming points on $[0,1]$ with $N:= N_1$ can be seen as a history-driven process with, what they called, {\it proportional increments}. This means that the process $(N_t)$ must satisfy \begin{align*}\forall \,0<t<1 {\rm ~with~}N_t>0:~~~~~~~~~~~~~~\\ \mathrm E(N_{t+\Delta t}-N_t|{\cal F}_t)=\frac{\Delta t}{t} N_t ~a.s., ~ 0 < \Delta t \le 1-t,\end{align*} where the condition $N_t>0$ is crucial. Such a process $(N_t)$ will be said to have the property of proportional increments or, in short, the {\it p.i.-property}. Conditioned on $N=N_1>0,$ let $T_1$ be the first arrival time. The definition of the p.i.-property implies then that, given $N>0,$ the process $(N_t/t)$ is a martingale on $[T_1,1]$, as shown in B. and Yor (2012, p. 3245). This clearly holds also under the stronger assumption that $(N_t)$ is a Poisson process on $[0,1].$ However, B. and Yor (2012, see p. 3255) saw a true benefit in {\it not} imposing that $(N_u)$ be Poisson. To be complete on this, we should mention that in our open problem we could, from a purely decision-theoretic point of view, suppose right away that the process $(N_t)_{0\le t\le 1}$ is a Poisson point process with unknown rate. Indeed, this cannot make a difference for decisions because we cannot distinguish a counting process which leaves a pattern of arrival times of a homogeneous Poisson process with unknown rate from another counting process leaving, in distribution, the same pattern. Doing so would have the advantage of allowing us to use the same compensator on the whole interval $[0,1].$ However, we will not need the Poisson process assumption. \subsection {Towards suitable odds-theorems} Recall that our open problem is different from the l.a.p. of B. and Yor (2012) since in the Unified Approach model we would like to stop on the very last record, not on the very last point, and thus our approach must also be different.
The very first arrival time $T_1$ in the counting process $(N_t)$ is the time when $(N_t)$ makes its first jump, and the processes $(N_t)$ and $(N_t/t)$ have exactly the same jump times. $T_1$ is also the birth time of the record process $(R_t)$, say, defined by $$R_t=\# ~\rm {records~on~[0,t]}, ~ 0\le t \le 1.$$ Since in our open problem any strategy is equivalent for $N=0$ we may and do suppose that $N>0$ almost surely, and thus $T_1<1$ almost surely. $N$ is unknown at time $0$, but at time $1$ we know that, by definition, $N=N_1\ge 1$ almost surely. Since the first arrival is also the first record, we have $N_{T_1-}=R_{T_1-}= 0$ and $N_{T_1}=R_{T_1}= 1.$ Thus the two processes $(N_t)$ and $(R_t)$ have the same random birth time $T_1.$ On the interval $[T_1,1],$ the process $(N_t)$ has proportional increments, i.e. dependent increments, whereas $(R_t)$ has, as we shall see later, under certain conditions independent non-homogeneous increments. We recall here that, by the no-information hypothesis, we have no access to the posterior distribution $\{P(N=n\,|\,T_1=t_1)\}_{n=1, 2, \cdots}.$ To prepare for these properties of $(R_t),$ the idea is to first concentrate on its increments (after time $T_1$). For this purpose we prove two suitably extended versions of the Odds-Theorem of optimal stopping. We should also mention here that Ferguson (2016) gave several interesting extensions of the Odds-Theorem in other directions. Moreover, Matsui and Ano (2016) studied, in another extension, lower bounds of the optimal success probability for the case of multiple stops. Here, however, we will need new extensions which are specifically tailored to our open problem. We begin with an extension in discrete time.
\subsection{Odds-Theorems for delayed stopping} Let $n$ be a positive integer, and let $X_1, X_2, \cdots, X_n$ be independent Bernoulli random variables with success parameters $p_k=P(X_k=1)=1-P(X_k=0), k = 1, 2, \cdots, n.$ Suppose our goal is to maximize the probability of stopping online on the last success, i.e. on the last $X_k=1.$ The optimal strategy to achieve this goal is immediate from the Odds-Theorem (Bruss (2000)) which we recall for convenience of reference. Let \begin{align} q_k=1-p_k;~ r_k=\frac{p_k}{q_k};~ R(k,n) = \sum_{j=k}^n r_j, ~k=1,2,\cdots,n,\end{align} and let the integer $s\ge1$ (called threshold index) be defined by\begin{align} s = \begin{cases} 1 & \text{if~} R(1,n) <1, \\ \sup\{1\le k\le n: R(k,n) \ge 1\} & \text{otherwise.} \end{cases} \end{align}The strategy to stop on the first index $k$ with $k\ge s$ and $X_k=1$ (if such a $k$ exists) maximises the probability of stopping on the very last success (B. 2000). If no such $k$ exists, then it is understood that we have to stop at time $n$ and lose by definition. (For a different payoff function and a different approach see e.g. Grau Ribas (2020).) \noindent {\bf Delayed stopping in discrete time} \noindent Let us now consider the new case that there is a deterministic or a random delay imposed by a random time $W$ with values in $\{1, 2,\cdots,n\}$ in the sense that stopping is not allowed before time $W$. Our objective, as before, is to maximize the probability of stopping on the last success. Does it suffice simply to replace the threshold $s$ defined in (6) by $\tilde s:=\max\{W,s\}$ to obtain an optimal strategy? This seems trivial (and is true) if $W$ is deterministic. In general this is not true, of course, not even if $W$ is a stopping time on $X_1, X_2, \cdots, X_n,$ unless we can guarantee that the knowledge of $W$ has no effect on the {\it laws} of $X_{W+1}, X_{W+2}, \dots$ and their independence. The following is a more tractable formulation.
\begin{theor} {\it Let $X_1, X_2, \cdots, X_n$ be Bernoulli random variables defined on a filtered probability space $(\Omega,{\cal A}, ({\cal A}_k), P)$ where ${\cal A}_k = \sigma(\{X_j: 1\le j \le k\}).$ Suppose there exists a random time $W$ for $X_1, X_2, \cdots, X_n$ on the same probability space such that the $X_j$ with $j\ge W$ are independent random variables with success probabilities \begin{align*}p_j(w):=P(X_j=1|W\le w), ~1 \le w\le j\le n. \end{align*} Then, putting $r_j(w)=p_j(w)/(1-p_j(w)),$ it is optimal to stop at the random time \begin{align}\tau:=\inf\left\{k \in \{W, W+1,\cdots,n\}: \{X_k=1\} ~\& ~\sum_{j=k+1}^n r_j(W) \le1\right\},\end{align} with the understanding that we stop at time $n$ and lose by definition, if $\{...\}=\emptyset.$}\end{theor} \begin{rem} We note that no (initial) independence hypothesis is assumed for the $X_1, X_2, \cdots$ but only for those $X_j$'s with $j\ge W.$ \end{rem} \noindent {\bf Proof~of Theorem 2.1} Our proof will profit from the proof of the Odds-Theorem (B. (2000)) if we rewrite the threshold index (6) in an equivalent form. \noindent Recall the definition of $R(k,n)$ in (5). If we define, as usual, an empty sum as zero, then $s$ defined in (6) can be written as \begin{align}s'=\inf \left\{1\le k \le n: R(k+1,n):=\sum_{j=k+1}^n r_j\,\le 1\right\}.\end{align} This is straightforward: If $R(1,n)\le1$ then $R(2,n)\le1$ so that from (8) $s'=1$, and $s=1$, as stated in (6). Otherwise, if $R(1,n)> 1,$ then there exists a unique $k$ where $R(k+1,n)$ drops for the first time below the value $1$ since $R(k,n)$ decreases in $k$, and $R(n+1,n)=0.$ The first such $k$ is the $s'$ defined in (8). The definitions (6) and (8) are thus equivalent. (See also Stirzaker (2015, p.
50)) Let now $p_j(w)$ be as defined in Theorem 2.1, and let for $ j= w, w+1, \cdots, n$$$q_j(w) = 1-p_j(w)=P(X_j=0|W\le w).$$It follows from the assumptions concerning $W$ that $X_{w}, X_{w+1}, \cdots , X_n$ are independent random variables with laws only dependent on the event $\{W\le w\}.$ If we think of $w$ as being fixed, then we can and do define $p_j:=p_j(w)$ for all $w\le j\le n$ and use the same notation as defined before in (5) with the corresponding odds $r_j(w)=p_j(w)/q_j(w)=: r_j.$ Accordingly, we have for $k\ge w$ the same simple monotonicity property $R(k,n)\ge R(k+1,n).$ It is easy to check that this monotonicity property is equivalent to the uni-modality property proved in B. (2000, p.1386, lines 3-12). The latter implies that the optimal rule is a {\it monotone} rule in the sense that, once it is optimal to stop on a success at index $k,$ then it is also optimal to stop on a success after index $k.$ (For a convenient criterion for a stopping rule in the discrete setting being monotone, see Ferguson (2016, p. 49)). Note that, whatever $W=w\in \{1,2, \cdots, n\}$, the odds $r_j:=r_j(w)$ are deterministic functions of the $p_j:=p_j(w)$, and so the future odds $(r_j )_{j\ge W+1}$ are also known and will not change. The only restriction we have to keep in mind for the simplified notation is that $k\ge w$ on the set $\{W\le w\}.$ But then the monotonicity property of $R(\cdot,\cdot)$ is also not affected, that is $$\forall \ell \ge j: R(W+j,n)\le1 \implies R(W+\ell,n)\le1.$$ Since the latter implies the uni-modality property of the resulting win probability on $W\le j\le n$, the monotone rule property is again maintained for the optimal rule after the random time $W,$ exactly as in B. (2000). Therefore the optimal strategy is to stop on the first success (if it exists) from time $\tau$ onwards where $\tau$ satisfies \begin{align} \tau \ge W ~{\rm and ~}\sum_{j=\tau+1}^n r_j(W) \le1.
\end{align} This is the threshold index $\tau$ of Theorem 2.1, and hence the proof.\qed \begin{rem} Note that Theorem 2.1 is intuitive. Its applicability, nevertheless, can be delicate. It depends on the $p_j$'s being predictable for {\it all} $j\ge W.$ Often this is not the case. For instance, we may have (conditionally) independent random variables, but, if we collect information about the $p_j$ from observations then the distributions of the future values of $X_{j+1}, X_{j+2}, \cdots$ typically depend on $X_k,~ {1\le k\le j},$ on which the event $\{W=j\}$ may be allowed to depend! (For our purpose of settling the open question the implications of Theorem 2.1 will turn out to be strong, however.) \end{rem} \begin{rem} (Side-remark). Given that (8) is a one-line definition whereas (6) needs two lines, some readers ask why B. (2000) used definition (6). The answer is that it is (6) which points to the odds-algorithm (subsection 2.1, p.1386) which works backwards until the stopping time $s$ with $r_n, r_{n-1}, \cdots$ to give both optimal strategy and value at the same time. No other algorithm can be quicker since it computes exactly those $r_j$ which produce both answers. If we used instead the odds beginning with $r_1, r_2, \cdots$ and (8) we would first need $R(1,n)$, implying in general redundant calculations. For the preceding theorem, however, we clearly needed (8). \end{rem} \noindent{\bf Delayed stopping in continuous time} \noindent We now state and prove a continuous-time analogue of Theorem 2.1 which plays an important role in the proof of the open conjecture. We state and prove it in a slightly more general form than what we need for the conjecture, because it may also be of interest for other problems of optimal stopping.
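Before turning to continuous time, the discrete odds-rule of (5)-(8) can be sketched in code (a numerical illustration with hypothetical helper names, not part of the proofs). For the classical secretary problem with $n=4$ candidates, where $p_k=1/k$ is the probability that the $k$-th candidate is relatively best, it returns the threshold $s'=2$ and the known optimal win probability $11/24$:

```python
def odds_threshold(p):
    """Threshold s' of (8): smallest k with sum_{j=k+1}^n r_j <= 1, where r_j = p_j/q_j."""
    n = len(p)
    r = [pj / (1.0 - pj) if pj < 1.0 else float("inf") for pj in p]
    for k in range(1, n + 1):
        if sum(r[k:]) <= 1.0:      # r[k:] holds the odds r_{k+1}, ..., r_n
            return k
    return n

def odds_win_prob(p, s):
    """Win probability of the odds-rule: (prod_{j=s}^n q_j) * (sum_{j=s}^n r_j), cf. B. (2000)."""
    tail = p[s - 1:]
    prod_q = 1.0
    for pj in tail:
        prod_q *= 1.0 - pj
    return prod_q * sum(pj / (1.0 - pj) for pj in tail)

# Classical secretary problem with n = 4 candidates: p_k = 1/k.
p = [1.0, 1 / 2, 1 / 3, 1 / 4]
s = odds_threshold(p)
value = odds_win_prob(p, s)
```

For a delay $W$ as in Theorem 2.1, only the stopping would change: one stops on the first success from $\max\{W,s'\}$ onwards, with the odds recomputed from the laws conditional on $\{W\le w\}$.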
\begin{theor} Suppose $(C_t)$ is a counting process on $[0,1]$ for which there exists a random time $\cal T$ such that the confined process $(C_t)_{{\cal T}<t\le 1}$ has independent increments according to a predictable (non-random) intensity measure $\eta(t)$, ${\cal T}< t\le1.$ We suppose that $\eta(t)$ is Riemann integrable on $[0,1]$ with $\mathrm E(C_1)<\infty$. Then the optimal strategy to stop on the {\it last} jump-time of $(C_t)$ is to select, if it exists, the first arrival time $\tau \ge \cal T$ with $\tau$ satisfying \begin{align} \mathrm E(C_1-C_\tau)= \int_{\tau}^1 \eta(u)\,du \le 1.\end{align} \end{theor} \noindent We note that when the process $(C_t)$ is a Poisson process on $[0, 1]$ the conditions of Theorem 2.5 are clearly satisfied everywhere on $[0,1].$ This special case has been studied already in subsection 4.1 of B. (2000). \noindent{\bf Proof of Theorem 2.5} \noindent Consider a partition $\{u_0< u_1< \cdots <u_m\},~ m \in \{1,2, \cdots\},$ of the random sub-interval $[{\cal T},1]\subseteq [0,1]$ with $u_0={\cal T}$ and $u_m=1$.
Let the index $j$ be thought of as depending on $m,$ thus $j:=j(m)$ and $u_j:=u_{j(m)}.$ Put \begin{align}p_j:=p_{j(m)} = \int_{u_{j -1}}^{u_j}\eta(u) du, ~ j=1, 2, \cdots, m, \end{align} where $[u_{j-1}, u_j[$ is by definition the $j$th sub-interval of the partition, $j=1, 2, \cdots, m.$ It follows that $p_j$ is the expected number of points of the process $(C_u)$ in the $j$th sub-interval, and thus by additivity from (11)\begin{align} \sum_{j=1}^m p_j= \int_{{\cal T}}^1 \eta(u)du = \mathrm E(C_1-C_{\cal T})\le \mathrm E(C_1) <\infty.\end{align} Since all $p_j$ in (11) are non-negative, and $\mathrm E(C_1)$ is finite, we can interpret them all as probabilities of certain events as soon as we choose sufficiently fine partitions to have the $p_j$ less than or equal to $1.$ This is always possible since, as we see in (11), $p_j \to 0$ as $\Delta_j=u_j-u_{j-1}\to 0.$ For the following it is understood that we only speak of such sufficiently fine partitions. Since the counting process $(C_u)$ has independent increments, this allows us at the same time to see the $p_j$ as the success probabilities of independent Bernoulli random variables, namely of the indicators $$I_{j}:=I_{j(m)}~= ~{\bf 1}\,\Big\{ [u_{j-1}, u_j[ {~\rm contains~jump \,times~of~}(C_u)_{{\cal T}\le u\le 1}\Big\} $$ for $j=1, 2, \cdots , m.$ The success probability of the $j$th Bernoulli experiment is then given by $p_j=\mathrm E(I_j).$ Let us call this interpretation the ``Bernoulli model'' for increments of the process $(C_u)$ for the chosen partition of $[{\cal T},1].$ To be definite we now confine our interest to equidistant partitions, and in this class to those such that all $p_j <1.$ Let $$s(m)=\sup _{j \in \{1, 2, \cdots, m\}} \{p_{j(m)}\}.$$ From (11) we obtain $p_j\sim \Delta_j\eta(u_j)$ and thus, as $\Delta_j\to 0$, we have $\mathrm E(I_j)\to 0$ and also \begin{align}\mathrm E(I_{j(m)})\Big/P( \,[u_{j-1}, u_j[ {\rm ~contains~exactly~one~jump~time}) \to 1.\end{align} The idea is now the
following: First, if we can interpret any increment $C_{u_{k}}-C_{u_{j}}, j \le k\le m$ as a sum of odds in our Bernoulli models, then the optimal odds-rule for stopping on the last success identifies the optimal rule for stopping on the last sub-interval of the partition containing jump-times of $(C_u).$ Note that for any fixed $m,$ the last Bernoulli success may correspond to more than one point in the last sub-interval containing points (i.e. jump-times of $(C_u)$). Second, in a limiting Bernoulli model defined by letting $m\to \infty,$ the last success corresponds, according to (13), with probability $1$ to the very last jump in $(C_u).$ Hence, provided that the notion of limiting odds is meaningful for the limiting Bernoulli model, the optimal rule in the latter identifies the optimal rule for stopping on the last jump of $(C_u).$ We will combine both parts by showing that the continuous-time analogue of odds in the limiting Bernoulli model {\it is} an intensity measure of a counting process, and we will identify it with the intensity of the process $(C_u)$. Let $\rho$ be a real-valued non-negative Riemann integrable function $\rho: [0,1] \to \mathbb R^+,$ and let $$\Psi(x, \Delta x):=\int_{x}^{x+\Delta x} \rho(u) du.$$ We now choose a function $\rho$ in such a way that all $\Psi(u_{j},\Delta_j)$ satisfy the equation \begin{align} \Psi_j:=\Psi(u_j,\Delta_j)=\frac{p_j}{1-p_j}=r_j,~ j=1,2, \dots, m. \end{align} Note that the existence of such a function $\rho$ is evident for any {\it finite} partition since the class of Riemann integrable functions already contains infinitely many candidates.
If we choose $\rho$ in this class, then $\lim_{\Delta u \to 0}\Psi(u, \Delta u)/\Delta u$ exists almost everywhere on $[{\cal T},1],$ and this derivative coincides with $\rho(u)$ on $[{\cal T},1].$ Now we must check whether such a function $\rho$ exists if we let the mesh size of the partition tend to $0.$ We shall now prove that the function $\rho$ exists and is unique in the limiting Bernoulli model, and that $\eta$ and $\rho$ coincide almost everywhere on $[{\cal T},1].$ It will thus be justified to call the function $\rho$ the {\it odds-intensity} associated with the (identical) intensity $\eta$ of the process $(C_u)$ on $[{\cal T}, 1].$ Indeed, recalling $\Delta_j=(1-{\cal T})/m$, we will first show that $${\rm(i)}~~ \rho(u_j)=\lim_{\Delta_j\to 0}\,\frac{1}{\Delta_j}r_j=\eta(u_j),~j=1, 2, \cdots$$ $${\rm(ii)} \lim_{m\to \infty}\,\sum_{j=1}^m\Psi(u_j,\Delta_j)=\sum_{j=1}^\infty\lim_{m\to \infty}\, \Psi(u_j,\Delta_j).$$ \noindent The limiting equation (i) follows from the definition of odds in the Bernoulli models, and from (11), since $$\frac{{p_j}}{1-p_j}\frac{1}{\Delta_j}\sim\frac{1}{\Delta_j}\frac{\Delta_j\eta(u_j)}{(1- {\Delta_j\eta(u_j)})}=\frac{\eta(u_j)}{1- {\Delta_j\eta(u_j)}}\to \eta(u_j)~{\rm as}~\Delta_j\to 0.$$ To see (ii), we first recall that for all $j=1,2, \dots, m$ we have $p_j<1$ and thus from (14) $$p_j\le \Psi_j = p_j/(1-p_j).$$ For fixed $\epsilon$ with $0<\epsilon<1$ we now choose an integer $m:=m(\epsilon)$ large enough so that $s(m):=\sup\{p_k: 1\le k \le m\}<\epsilon.$ This is trivially always possible for a finite number $m$ of $p_k$, since, again seen as a function of $\Delta_k,$ we have from (11) that each $p_k\to 0$ as $\Delta_k \to 0+,$ that is, as $m\to \infty.$ Then we obtain $$p_j\le \Psi_j\le p_j/(1-s(m))\le p_j/(1-\epsilon),$$ or, according to (11) explicitly, \begin{align}\int_{u_{j-1}}^{u_j} \eta(u)\,du\le \Psi_j\le\frac{1}{1-\epsilon} \int_{u_{j-1}}^{u_j} \eta(u)\,du.\end{align} Since this inequality holds for all $j=1,2, \cdots, m(\epsilon)$, it must hold also for any sum of these terms (column-wise) taken over the same set of indices. In particular this includes tail sums beginning at an arbitrary time $x \ge {\cal T}.$ Hence, by bounded convergence, (ii) is true. But then the latter also holds for any random time $x:=\tau \ge {\cal T}$, since, by the hypothesis stated in Theorem 2.5, the intensity measure $\eta$ is supposed to be non-random from time ${\cal T}$ onwards. Thus for any set of sub-intervals of $[{\cal T},1]$, the limiting odds sum for the limiting Bernoulli model corresponds to the integral of $\rho$ over the same set of intervals. Therefore, in particular, $\rho$ satisfying (14) must satisfy for any $\epsilon$ and equidistant partition with mesh size $\Delta_j=(1-{\cal T})/m(\epsilon)$ \begin{align}\mathrm E\big(C_1-C_\tau\big)=\int_\tau^1 \eta(u)du\le \int_\tau^1 \rho(u)du\le \frac{1}{1-\epsilon}\mathrm E\big(C_1-C_\tau\big).\end{align} Since $\epsilon>0$ can be chosen arbitrarily close to $0$ in the inequality (16), it follows from the squeezing theorem that the inner integral is bound to coincide with $\mathrm E(C_1-C_{\tau}).$ According to (ii), this inner integral is however the limiting tail sum of odds for the limiting Bernoulli model, and (i) thus implies $\rho(u)=\eta(u).$ Finally, letting $\epsilon\to 0+$ in (16) shows that the inner integral, that is, the limiting tail sum of odds in the limiting Bernoulli model, drops below $1$ if and only if $\mathrm E(C_1-C_\tau)$ does so. Hence the proof.\qed \begin{rem} The preceding criterion is valid independently of whether $\tau$ is a jump-time of $(C_u)$ or not.
Indeed, if $\eta(u)>0$ on $[\tau, 1]$ then for all $0<\epsilon<1-\tau$ we have $\mathrm E(C_1-C_{\tau+\epsilon})<1.$ Therefore, if $\tau$ happens to be a jump-time of $(C_u)_{\,{\cal T}\le u \le 1}$ we must also stop on $\tau$.\end{rem} We are now ready to tackle our main problem. \section{The open question of optimality} \subsection{Preview and visualisation of our approach} If the optimal strategy exists, then it must solely be based on all the sequential information we can have, that is, on the information stemming from the history of arrivals (points) and their relative ranks. Clearly, any strategy is trivially optimal if there are no points, so that we can confine our interest to the case $N>0.$ Denote by $N_u$ the number of arrivals up to time $u$. If $N>0,$ there is at least one arrival on $[0,1]$, and the first one is a record by definition. Due to the i.i.d. structure of points on $[0,1],$ if the decision maker looks back at time $t\in [0,1]$, and if there are preceding arrivals, then he or she knows that their pattern is the outcome of i.i.d. uniformly distributed points on $[0,t].$ The same will hold by looking forward, that is, if there are arrivals then their unordered arrival times are i.i.d. on $[t,1].$ This is true since i.i.d. uniform random variables on a given interval $I$, say, stay i.i.d. conditioned on their location in sub-intervals of $I.$ This is illustrated in the figure below (Fig.1) where arrivals are denoted by *, and where the first * is meant to indicate the arrival time $T_1$. $$ |_0............................*...........*...*..\longleftarrow|_t~ .............................~ |_1$$ $$ |_0............................*...........*...*..... ~~...|_t\longrightarrow ......................~ |_1$$ \centerline{Fig. 1} \centerline{ Decision-maker's perception} \noindent From the first arrival time $T_1$ onwards ($0<T_1< 1$ a.s.)
the decision maker has the information that the counting process $(N_u)_{u\ge T_1}$ is a process with proportional increments as defined in Subsection 2.6. See Fig. 2. Accordingly, given $N_u$, the expected value of the number of points in $[u, u+\Delta u[$ equals $(N_u \Delta u)/u$ almost surely, and it is important to note that no $o(\Delta u)$ term is added here. $$~|_0...........................~_{T_1}~...........*...*..(N_u)_{u\ge T_1}......?........?...........|_1$$ \centerline{Fig. 2} \centerline{$(N_u)_{u\ge T_1}$ is a proportional-increments process} \noindent The relevant stochastic process for stopping on rank 1 is then the record process $(R_u)_{u\ge T_1}$ which is a sub-process of $(N_u)_{u\ge T_1}$ (see Fig. 3) $$~|_0...........................*...........~? \,...\,?~(R_u)_{u\ge T_1} .......?..........?............ |_1$$ \centerline {Fig. 3} \centerline{$(R_u)_{u\ge T_1}$ is obtained from $ (N_u)_{u\ge T_1}$ by inverse-proportional thinning.} \noindent By R\'enyi's Theorem, this thinning is such that if $T_J\ge T_1$ is a jump-time of the process $(N_u)$, then it is retained for the record process $(R_u)_{u\ge {T}_1}$ with probability $1/N_{T_J}$, independently of the retention of preceding points. We call this the {\it inverse-proportional thinning} property of R\'enyi's record theorem on the process $(N_u)_{u\ge T_1}.$ Note that if we have a predictable non-random intensity measure, the process $(R_u)$ can then play the role of $(C_u)$ in Theorem 2.5. Stopping online on the desired rank $1$ means stopping online on the very last record, i.e. on the last jump-time of $(R_u)_{u\ge T_1}.$ In the previous paper it was claimed (see Theorem 3.1) that the $1/e$-strategy is uniquely optimal, but its proof, based on Theorem 2.5, is {\it wrong}.
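R\'enyi's theorem itself, on which the thinning above rests, is not in question; it can be verified by brute force for small $n$ (an illustration only, with hypothetical helper names): among the $n!$ equally likely arrival orders of $n$ distinct ranks, position $j$ holds a record with probability exactly $1/j$, and the record indicators are independent:

```python
from itertools import permutations
from fractions import Fraction

def record_positions(perm):
    """1-based indices j where perm attains a running maximum, i.e. record times."""
    best = float("-inf")
    recs = set()
    for j, v in enumerate(perm, start=1):
        if v > best:
            best = v
            recs.add(j)
    return recs

n = 5
perms = list(permutations(range(1, n + 1)))
total = len(perms)                      # n! equally likely arrival orders

# P(record at position j) = 1/j   (R\'enyi's theorem)
marginal = {j: Fraction(sum(j in record_positions(q) for q in perms), total)
            for j in range(1, n + 1)}

# Independence: P(records at positions 2 and 4) = (1/2)(1/4) = 1/8
joint_2_4 = Fraction(sum({2, 4} <= record_positions(q) for q in perms), total)
```

The exact counts (e.g. $5!/j$ permutations with a record at position $j$) confirm both the marginal probabilities and the product rule for joint record events.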
We now point out where exactly the error occurred: \noindent{\bf ~~~The error in the proof~} We now recapitulate the proof, which is correct up to and including equation (22): Let ${\cal F}_t$ denote the filtration generated by $\{N_s: 0\le s\le t\},$ and denote by ${\cal G}_t$ the one generated by both $\{N_s: 0\le s \le t\}$ and $\{R_s: 0\le s \le t\}$ together. Since both fields are clearly increasing we have ${\cal G}_t\subseteq {\cal G}_u$ for $t\le u \le 1.$ Clearly $T_1$ is a $({\cal G}_t)$-measurable stopping time since ${\cal F}_t \subseteq{\cal G}_t.$ Given $T_1,$ choose $t \in [T_1,1]$ and define for fixed $m\in\{2,3, \cdots\}$ and $k=0, 1, 2, \cdots m-1,$ \begin{align*} u_k:=u_k(t)=t+\frac{k(1-t)}{m},\\\Delta_k:=\Delta_k(t)=u_{k+1}-u_k=\frac{1-t}{m}.\end{align*} \noindent It follows that for any ${\cal G}_u$-measurable random variable $X$ and $ 0\le t\le u \le 1,$ \begin{align} \mathrm E(\mathrm E(X\big|{\cal G}_u)\,|\,{\cal G}_t)=\mathrm E(X\big|{\cal G}_t). \end{align} Let now $X$ denote the number of records in $[t,1]$, that is $X=R_1-R_t.$ From the linearity of the expectation operator we obtain \begin{align} \mathrm E\big(R_1-R_t \big| {\cal G}_t\big)=\mathrm E\left(\sum_{k=0}^{m-1} (R_{u_{k+1}}-R_{u_k})\,\Big|\,{\cal G}_t\right)=\sum_{k=0}^{m-1} \,\mathrm E\left(R_{u_{k+1}}-R_{u_k}\,\Big|\,{\cal G}_t\right),\end{align} and then from (18) used in (17) \begin{align} \mathrm E\big(R_1-R_t \Big| {\cal G}_t\big)= \sum_{k=0}^{m-1} \,\mathrm E\left(\mathrm E\left(R_{u_{k+1}}-R_{u_k}\Big|{\cal G}_{u_k}\right)\,\Big|\,{\cal G}_t\right).\end{align} Let $\lambda(u)$ denote the rate of the point process $(N_t)$ at time $u,$ and $h(u)$ be the conditional probability of a point appearing at time $u$ being a record. The process $(N_u)$ inherits history-dependence from the p.i.-property so that $\lambda(u)$ is also history-dependent, namely a predictable intensity process for $(N_t)$ relative to the filtration $({\cal F}_t)$.
The function $h(u)$ acts like a thinning on the counting process $(N_t),$ retaining only its record-times as events. The resulting record process has an intensity, $\eta$ say, which may depend on both $\lambda$ and $h$, and which we write formally as \begin{align}\label{eq12} \eta(u):= g_{\lambda, h}(u) \,\,,T_1\le u \le1. \end{align} Note that this formal definition is a precaution, because $h(u)$ and $\lambda(u)$ are history-dependent random variables, and dependent on each other. Thus we do not assume so far that $\eta(u)= g_{\lambda, h}(u)$ factorises into $\lambda(u)h(u)$ over sub-intervals we will consider. Of course we know it does so point-wise because $h$ is defined as the conditional probability of a point being retained as a record. Now consider the inner conditional expectation on the r.h.s.\,of (19). Since $(N_u)_{T_1 \le u \le 1}$ is a p.i.-process, and $N_{u_k}$ is ${\cal G}_{u_k}$-measurable, we have correspondingly $$\mathrm E(N_{u_{k+1}}-N_{u_k}\Big|{\cal G}_{u_k})=\Delta_k N_{u_{k}}/u_k ~a.s.,$$ and thus $\lambda(u_k)=N_{u_k}/u_k ~{\rm a.s.}.$ Moreover, if $u_k$ were a jump-time for $(N_u)$ it would be, according to R\'enyi's Theorem, a record time with probability $1/N_{u_k}$ which shows that $h$ in (20) is also history-dependent. We now show the central fact that the increments of the record process $(R_u)$ on $[u,u+\Delta u[$ given ${\cal G}_u$ will never depend on the {\it locations} of jump-times in $[u,u+\Delta u[,$ but only on the number of jumps in there. Indeed, if we denote the $j$th jump-time in $[u_k, u_{k+1}[$ by $A_j:=T_{j+N_{u_k}}$, then \begin{align*}\mathrm E\left (R_{u_{k+1}}-R_{u_k}\Big|N_{u_{k+1}}-N_{u_k}=J;\,A_1, A_2, \cdots , A_J\right)\\=\mathrm E\left(\sum_{j=1}^J {\bf 1}\{ A_j {\rm ~is~a ~record~time}\}\right).~~~~~~~~~~ \end{align*} Since $J\le N_1<\infty $ we can exchange expectation and summation, and then use R\'enyi's Theorem.
Therefore, by the definition of the $A_j,$ the latter equals \begin{align} \sum_{j=1}^J P \left( A_j{\rm~is~a~record~time}\,\Big|\, {\cal G}_{u_k}\right) = \sum_{j=1}^J\ \frac{1}{N_{u_k}+j} ~{\rm a.s.},\end{align} which is understood as being zero if $J=0.$ Given ${\cal G} _{u_k}$, the value $N_{u_k}$ is a constant, and $J$ is ${\cal F}_{u_k}$-predictable. Hence the r.h.s.\,of (21) is ${\cal F}_{u_k}$-predictable and {\it does not} depend on the location of jumps. But then, given any interval $[u, u+du]$, we can imagine these jump times (if any) to be located where we want them to be within this interval, and we are entitled to think of the first one (if any) as being at $u.$ This implies from (21) that $g_{\lambda, h}$ in (20) must factorise on the sub-interval $[u, u+du]$ into the intensity of $(N_u),$ namely $\lambda(u),$ and the inverse proportional thinning $h(u)=1/N_u.$ Now recall that the p.i.-property of $(N_u)$ for $u\ge T_1$ implies \begin{align}\lambda(u)du:=\mathrm E(dN_u|{\cal F}_u)=\mathrm E(N_{u+du}-N_u\,|\,{\cal F}_u)=\frac{N_u}{u}\,du~ {\rm a.s.}. \end{align} Since the inverse-proportional thinning on $(N_u)$ is $({\cal F}_u)$-predictable and ${\cal F}_u\subseteq{\cal G}_u,$ we have correspondingly \centerline{\bf The error was in (23) of [*]. It should read} \begin{align} \mathrm E\left(dR_u\Big|{\cal G}_u\right):=\mathrm E\left(R_{u+du}-R_u\Big|{\cal G}_u\right)=\frac{N_u}{u}\frac {1}{N_u+1}\,du ~{\rm ~a.s.}\end{align} Indeed, with the intensity $N_u/u$ of the arrival process $(N_u)$ given in (22), we have a positive increment of the record process $(R_u)$ if and only if $[u, u+\Delta u]$ contains a jump-time and the latter {\it is} a record-time, which then occurs with probability $1/(N_u+1)$ (and not $1/N_u$). This means that the stopping time $T_1$ does {\it not} fulfill the condition that $(C_u):=(R_u)$ have independent increments for $u\ge T_1$, unless $N_u=\infty.$ Hence the claim is not proved.
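The size of the correction in (23) can be gauged by a small heuristic computation (ours, purely illustrative, and no substitute for the argument above): if the history-dependent factor $N_u/(N_u+1)$ were frozen at a constant $k/(k+1)$, the stopping condition $\int_t^1 (k/(k+1))\,u^{-1}du \le 1$ would first hold at $t_k=e^{-(k+1)/k}$, which is strictly earlier than $1/e$ for every finite $k$ and approaches $1/e$ only as $k\to\infty$:

```python
import math

def frozen_threshold(k):
    """First t with (k/(k+1)) * (-log t) <= 1, i.e. t_k = exp(-(k+1)/k).
    Heuristic only: it freezes the history-dependent factor N_u/(N_u+1) at k/(k+1)."""
    return math.exp(-(k + 1) / k)

ts = [frozen_threshold(k) for k in range(1, 101)]
```

For $k=1$ this gives $t_1=e^{-2}\approx 0.135$, and the sequence increases monotonically towards $1/e\approx 0.368$.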
\subsection {Implication of the correction} (I1)~~\noindent If $N_t=\infty$ for some $t>0$ then Theorem 2.5 of [*] can be applied, because then $N_u/(N_u+1)=1$~a.s. Otherwise it cannot be applied, since (23) stays history-dependent. However, the case $N_u=\infty$ adds nothing new to what was known before. Indeed, since $N=N_1\ge N_u=\infty$, we know already from III of Section 2.2 that the $1/e$-strategy is optimal for $N=\infty.$ \noindent (I2)~~ Integration of (23) on $[t,1]$ yields \begin{align*} \mathrm E(R_1-R_t|{\cal G}_t) =\int_t^1\mathrm E\left(d R_u\Big|{\cal G}_u\right)=\int_t^1 \frac{N_u}{N_u+1} u^{-1}du \le\int_t^1 u^{-1}du = -\log(t), \end{align*} where, unless $N_t=\infty$, we have the strict inequality $\mathrm E(R_1-R_t|{\cal G}_t)< - \log(t).$ Since $-\log(t)\le 1$ is equivalent to $t\ge 1/e$, this implies also that, if Theorem 2.5 applied, the $1/e$-strategy could {\it not} be optimal in the case $N<\infty$ a.s. \noindent (I3)~~\noindent As we have just seen, Theorem 2.5 cannot be applied in the case $N<\infty$ a.s. Also, we do not know whether the condition $\mathrm E(R_1-R_t|{\cal G}_t)\le 1$ for a record-time $t$ is at least a {\it necessary} condition for optimal stopping at time $t.$ Taking I1, I2 and I3 together we conclude that an answer to the open question will need a different approach. \qed \section{Optimal strategies without value} One must be careful in dealing with problems under the hypothesis of no information. Usually, if we speak of a problem of optimal stopping, we think of finding a non-anticipative rule maximizing a pre-determined objective function, and the solution we find constitutes the value (see e.g.\ Peskir and Shiryaev, R\"uschendorf (2016), or Stirzaker (2015)). However, as seen for instance in the l.a.p. of Bruss and Yor (2012), it may occur that a problem of optimal stopping and/or optimal control has no value. Moreover, as we will show below, it may resist any comparison of performance versus non-optimal strategies.
\noindent The following Lemma illustrates this in a simple form. \begin{Lem} In a model for problems of optimal stopping and/or optimal control in a no-information setting, the following features are possible: \begin{quote} {\rm(i)} An optimal strategy $\mathrm{Ess}$ solving the defined problem may exist independently of whether one can attribute a value to $\mathrm{Ess}$. {\rm (ii)} If an optimal strategy $\mathrm{Ess}$ exists, it need not be the limit of $\epsilon$-optimal strategies as $\epsilon\to 0+$. \end{quote} \end{Lem} \begin{rem} In the way Lemma 4.1 is formulated, the statements (i) and (ii) can be proven by examples having properties (i) and (ii). As said before, the no-information last-arrival problem is such an example. However, the following simple example suffices to make the point. We keep it in the form of an optimal control problem in order to concentrate on the essence, but by adding costs for observations we can change the example easily into a stopping problem.\end{rem} \noindent {\bf Proof} \noindent (i) Let $(I^{(1)}_j)_{j=1, 2, \cdots}$ and $(I^{(2)}_j)_{j=1, 2, \cdots}$ be two sequences of Bernoulli random variables, not necessarily independent of each other, and let $p_j^{(1)}=P(I_j^{(1)}=1)$ and $p_j^{(2)}=P(I_j^{(2)}=1).$ At each time $j$ the decision-maker (he, say) sees both $p_j^{(1)}$ and $p_j^{(2)}$ and decides on which Bernoulli experiment he wants to bet (see Fig. 4). If he bets on Line (1) he receives the random reward $I_j^{(1)},$ and, alternatively, if he bets on Line (2), he receives the random reward $I_j^{(2)}$. At time $j$ he sees only the two entries $p_j^{(1)}$ and $p_j^{(2)}$ but none of the future values for $j'>j.$ \begin{align*}{\rm Line~(1):}~~~ p_1^{(1)}~~~p_2^{(1)}~~~p_3^{(1)}~~~\cdots~~~p_j^{(1)}~~~\cdots ~~~p_n^{(1)}\\{\rm Line~(2):}~~~p_1^{(2)}~~~p_2^{(2)}~~~p_3^{(2)}~~~\cdots~~~p_j^{(2)}~~~\cdots~~~p_n^{(2)} \end{align*} \centerline {Fig.
4} \noindent Denoting by $\Pi:\mathbb N \to\{\text{Line}~(1),\text{Line}~(2)\}$ the decision policy at each step, his objective is to maximise for each $n$ the expected accumulated reward. The optimal strategy, if it exists, is defined by \begin{align}\Pi^*=\arg\max_{\Pi}\left\{\mathrm E\left(\sum_{k=1}^n I_k^{\Pi(k)}\right)\right\}.\end{align} But this implies that it does exist: in order to play optimally, it suffices to bet at each step $j$ on $\max\,\{p_j^{(1)}, p_j^{(2)}\}.$ Indeed, this strategy yields at each time $n$ the expected accumulated reward $$M(n)=\max\,\{p_1^{(1)}, p_1^{(2)}\}+\max\,\{p_2^{(1)}, p_2^{(2)}\}+\cdots+\max\,\{p_n^{(1)}, p_n^{(2)}\}$$ upon which one cannot possibly improve because the maximum of a sum never exceeds the sum of the maxima. And thus we have \begin{align}M(n)=\sum_{k=1}^n \max\left\{p_k^{(1)},p_k^{(2)}\right\}\ge \max_{\Pi}\mathrm E\left(\sum_{k=1}^n I_k^{\Pi(k)}\right).\end{align} (26) and (27) imply that the optimal strategy $\mathrm{Ess}_n$ maximizing the accumulated reward until time $n$ exists, but nevertheless, before time $n,$ no value can be attributed to the optimal $\mathrm{Ess}_n$ because $M(n)$ is still unknown. This proves (i). (We note that if the corresponding values in Line (1) and Line (2) never coincide, all $\mathrm{Ess}_n$ are moreover unique.) \qed (ii) To prove (ii), look at the following modification. Suppose that at some time $t\in \mathbb N$ a red light is switched on for Line (2), say, with probability $\delta.$ If the light is switched on, the decision maker is no longer entitled to bet on Line (2). No information is given on how often, or for how long, the red light may be switched on, given it is switched on at least once. It is straightforward to check, similarly as above, that now the unique optimal strategy is to bet, {\it whenever possible}, on the Line with the entry $\max\{p_j^{(1)}, p_j^{(2)}\}$.
If $\delta=0$ then we are in the case (i). Further we see easily that, if \begin{align}\ell=\lim_{n\to \infty}\,\sum_{j=1}^n \,\left |p_j^{(1)}-p_j^{(2)}\right| < \infty,\end{align} then, for any given $\epsilon>0,$ we can always choose $\delta$ sufficiently small so that the optimal strategy in this setting is $\epsilon$-optimal with respect to $\mathrm{Ess}.$ Indeed, for all $n$ the difference in the accumulated rewards is bounded above by $\delta \ell.$ In this case, the optimal strategy can be seen as the limit of $\epsilon$-optimal strategies. If the limit $\ell$ in (28) satisfies $\ell=\infty$, however, then this is not possible. In conclusion, we simply do not know whether the existing optimal strategy can be seen as a limit of $\epsilon$-optimal strategies, at least not in this class of $\epsilon$-optimal strategies. This does not, of course, exclude that one may still be able to find other $\epsilon$-optimal strategies. However, the point we want to make is that special circumstances in a given problem may {\it naturally} lead us to a certain class of $\epsilon$-optimal strategies with which we would like to study the given problem, because we understand them. Then we would like to be able to count on some form of {\it closedness} as we know it from other domains of mathematics. In {\it Analysis} for instance, we require for good reasons that a function $f: \mathbb R^n\to \mathbb R$ has a limit at $x\in \mathbb R^n$ if and only if for {\it all} sequences $(x_m)\to x$ we have $f(x_m)\to f(x)$. As we have just seen, without knowing that $\ell$ defined in (28) satisfies $\ell < \infty,$ we would not know whether all $\epsilon$-optimal strategies would approach the optimal one as $\epsilon \to 0+$.
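As a side remark, the core of the argument for (i) — that betting on $\max\{p_j^{(1)},p_j^{(2)}\}$ at each step attains $M(n)$, which no non-anticipative policy can exceed — is easy to confirm by brute force over all deterministic policies for small $n$; the probability values below are purely illustrative:

```python
from fractions import Fraction
from itertools import product

def policy_value(p1, p2, choice):
    """Expected accumulated reward of a deterministic policy: by linearity
    of expectation it equals the sum of the chosen success probabilities."""
    return sum(a if c == 1 else b for a, b, c in zip(p1, p2, choice))

# illustrative entries of Line (1) and Line (2)
p1 = [Fraction(1, 2), Fraction(1, 4), Fraction(3, 4)]
p2 = [Fraction(1, 3), Fraction(2, 3), Fraction(1, 2)]

greedy = sum(max(a, b) for a, b in zip(p1, p2))        # M(n), sum of maxima
best = max(policy_value(p1, p2, ch)                    # all 2^n line choices
           for ch in product((1, 2), repeat=len(p1)))
assert best == greedy == Fraction(23, 12)
```

Since at each step both current entries are seen, deterministic policies suffice here, and the brute-force maximum coincides with the greedy value $M(n)$, in line with (26) and (27).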
\qed \subsection {\bf Particularities of the no-information hypothesis} Lemma 4.1 tells us that we must keep, in more general cases, something important in mind: In a setting of an optimal stopping problem under no information, an optimal strategy need not have a neighbourhood in a classical sense in the set of possible strategies. An optimal expected payoff need not be a limit in an analytic sense of the corresponding expected payoffs of seemingly close strategies. But then, any argument based on a continuity assumption, or on the existence of a point of indifference for the optimal decision, etc., may become questionable. This implies that we may, in certain cases, be able to show the optimality of a certain strategy without being able to assess at the same time how a (slightly) sub-optimal strategy, or in fact any other strategy, would compare to the optimal strategy with respect to performance. The non-negligible content of what we point out here is that we have to be careful when speaking about indifference values, limiting performances or even any limit argument in the context of no information. \centerline{***} ~~~~Abdel-Hamid A., Bather J., and Trustrum G. (1982) {\it The secretary problem with an unknown number of candidates}, J. Appl. Probability, Vol. 19 (3): 619-630. Bruss F.T. (1984) {\it A unified approach to a class of best choice problems with an unknown number of options}, Annals of Probability, Vol. 12 (3): 882-889. Bruss F.T. (1987) {\it On an optimal selection problem of Cowan and Zabczyk}, J. Appl. Probability, Vol. 24: 918-928. Bruss F.T. (2000) {\it Sum the odds to one and stop}, Annals of Probability, Vol. 28 (3): 1384-1391. Bruss F.T. and Samuels S.M. (1987) {\it A unified approach to a class of optimal selection problems with an unknown number of options}, Annals of Probability, Vol. 15: 824-830. Bruss F.T. and Samuels S.M.
(1990) {\it Conditions for quasi-stationarity of the Bayes rule in selection problems with an unknown number of rankable options}, Annals of Probability, Vol. 18 (2): 877-886. Bruss F.T. and Rogers L.C.G. (1991) {\it Pascal processes and their characterization}, Stoch. Proc. and Their Applic., Vol. 37 (2): 331-338. Bruss F.T. and Yor M. (2012) {\it Stochastic processes with proportional increments and the last-arrival problem}, Stoch. Proc. and Their Applic., Vol. 122 (9): 3239-3261. Cowan R. and Zabczyk J. (1978) {\it An optimal selection problem associated with the Poisson process}, Theory of Prob. and Applic., Vol. 23: 548-592. Ferguson T.S. (2016) {\it The Sum-the-Odds Theorem with Application to a Stopping game of Sakaguchi}, Mathematica Applicanda, Vol. 44 (1): 45-61. Grau Ribas J.M. (2020) {\it An extension of the last-success-problem}, Statistics \& Probability Letters, Vol. 156, DOI: 10.1016/j.spl.2019.108591. Hadamard J. (1902) {\it Sur les probl\`emes aux d\'eriv\'ees partielles et leur signification physique}, Princeton University Bulletin, pp. 49-52. Matsui T. and Ano K. (2016) {\it Lower bounds for Bruss' odds problem with multiple stopping}, Math. of Oper. Research, Vol. 41 (2): 700-714. Presman E.L. and Sonin I.M. (1972) {\it The best choice problem for a random number of objects}, Theory of Prob. and Applic., Vol. 17 (4): 657-668. R\'enyi A. (1962) {\it Th\'eorie des \'el\'ements saillants d'une suite d'observations}, Annales scientifiques de l'Universit\'e de Clermont-Ferrand 2, S\'erie Math\'ematiques, 8 (2): 7-13. R\"uschendorf L. (2016) {\it Approximative solutions of optimal stopping and selection problems}, Mathematica Applicanda, Vol. 44 (1): 17-44. Samuels S.M. (1985) Math Reviews: MR0744243 (85m:62182). Stewart T.J. (1981) {\it The secretary problem with an unknown number of options}, Operations Research, Vol. 29 (1): 130-145. Stirzaker D.
(2015) {\it The Cambridge Dictionary of Probability and Its Applications}, Cambridge University Press, ISBN 978-1-107-07516-0. \noindent{\bf Author's address}:\\ F.\,Thomas Bruss, \\Universit\'e Libre de Bruxelles, CP 210, \\B-1050 Brussels, Belgium\\ ([email protected]) \end{document}
\begin{document} \maketitle \begin{section}{Introduction} \noindent Our goal in this paper is to contribute to the model theory of valued fields with a valuation preserving automorphism. The key ideas are due to Scanlon (\cite{scanlon1}, \cite{scanlon2}) and to B\'{e}lair, Macintyre, Scanlon \cite{BMS}. We obtain the main result in \cite{BMS} under weaker assumptions, and give a simpler proof. \noindent Throughout we consider valued fields as three-sorted structures $$\ca{K}=(K,\Gamma,{{\boldsymbol{k}}};\ v, \pi)$$ where $K$ is the underlying field, $\Gamma$ is an ordered abelian group\footnote{The ordering of an ordered abelian group is total, by convention.} (the {\em value group\/}), ${{\boldsymbol{k}}}$ is a field, the surjective map $v: K^\times \to \Gamma$ is the valuation, with valuation ring $$ \ca{O}=\ca{O}_v:=\{a \in K:\ v(a) \geq 0\}$$ and maximal ideal $\fr{m}_v:=\{a \in K: v(a)>0\}$ of $\ca{O}$, and $\pi: \ca{O}\to {{\boldsymbol{k}}}$ is a surjective ring morphism. Note that then $\pi$ induces an isomorphism of fields, $$ a+\fr{m} \mapsto \pi(a):\ \ca{O}/\fr{m} \to {{\boldsymbol{k}}} \qquad (\fr{m}:=\fr{m}_v) $$ and there will be no harm in identifying the residue field $\ca{O}/\fr{m}$ with ${{\boldsymbol{k}}}$ via this isomorphism. Accordingly, we refer to ${{\boldsymbol{k}}}$ as the {\em residue field}. To simplify notation we often write $\bar{a}$ instead of $\pi(a)$. We call $\ca{K}$ as above {\em unramified} if either \begin{enumerate} \item[(i)] $\operatorname{char}{K}=\operatorname{char}{{{\boldsymbol{k}}}}=0$, or \item[(ii)] $\operatorname{char}{K}=0$, $\operatorname{char}{{{\boldsymbol{k}}}}=p>0$ and $v(p)$ is the smallest positive element of $\Gamma$. \end{enumerate} \noindent Ax \& Kochen and Ershov proved the following classical result, which we shall refer to as the {\em {\rm AKE}-principle}. (See Kochen \cite{koch} for a complete account.)
\noindent {\em Let $\ca{K}$ and $\ca{K}'$ be unramified henselian valued fields with residue fields ${{\boldsymbol{k}}}$ and ${{\boldsymbol{k}}}'$ and value groups $\Gamma$ and $\Gamma'$ respectively. Then $\ca{K} \equiv \ca{K}'$ if and only if ${{\boldsymbol{k}}} \equiv {{\boldsymbol{k}}}'$, as fields, and $\Gamma \equiv \Gamma'$, as ordered abelian groups.} \noindent Thus the elementary theory of an unramified henselian valued field is determined by the elementary theories of its residue field and value group. Theorems~\ref{embed11} and~\ref{embed13} below are strong analogues of the AKE-principle in the presence of a valuation preserving automorphism. Compared to the standard way of proving the AKE-theorems as in Kochen~\cite{koch}, the two new tools we need are: replacing a pseudo-Cauchy sequence by an equivalent one, in order to rescue pseudocontinuity, and, for positive residue characteristic, the use of a certain polynomial transformation, the $D$-transform. These two devices were introduced in~\cite{BMS}, but we use them in combination with a simpler notion of {\em pc-sequence of $\sigma$-algebraic type\/}. The main improvement comes from a better understanding of purely residual extensions via Lemma~\ref{extresa} below. This allows us to drop a strong assumption, the {\em Genericity Axiom} of \cite{BMS}, about the automorphism induced on the residue field. Other differences with \cite{BMS} will be indicated at various places, but to make our treatment self-contained we include expositions of relevant parts of \cite{BMS}. We assume familiarity with valuation theory, including henselization and pseudo-convergence. \noindent This paper is partly based on a chapter of the first author's thesis \cite{salih1} with advice from the second author. \end{section} \begin{section}{Preliminaries} \noindent Throughout, $\mathbb{N}=\{0,1,2,\dots\}$, and $m,n$ range over $\mathbb{N}$. We let $K^\times=K\setminus \{0\}$ be the multiplicative group of a field $K$.
\noindent {\bf Difference fields.} A {\em difference field\/} is a field equipped with a distinguished automorphism of the field, the {\em difference operator}. A difference field is considered in the obvious way as a structure for the language $\{0,1, -, +, \cdot, \sigma\}$ of difference rings, with the unary function symbol $\sigma$ to be interpreted in a difference field as its difference operator, which is accordingly also denoted by $\sigma$ (unless specified otherwise). Let $K$ be a difference field. The {\em fixed field\/} of $K$ is its subfield $$\operatorname{Fix}(K):= \{a\in K:\ \sigma(a)=a\}.$$ We let $\sigma^n$ denote the $n^{\rm th}$ iterate of $\sigma$ and let $\sigma^{-n}$ denote the $n^{\rm th}$ iterate of $\sigma^{-1}$. Let $K \subseteq K'$ be an extension of difference fields and $a \in K'$. We define $K\langle a \rangle$ to be the smallest difference subfield of $K'$ containing $K$ and $a$. The underlying field of $K\langle a \rangle$ is $K(\sigma^{i}(a): i \in \mathbb{Z})$. \noindent We now introduce difference polynomials in one variable over $K$. Each polynomial $$f(x_0,\dots,x_n)\in K[x_0,\dots,x_n]$$ gives rise to a difference polynomial $F(x)= f(x, \sigma(x), \dots,\sigma^n(x))$ in the variable $x$ over $K$; we put $\deg F := \deg f\in \mathbb{N} \cup\{-\infty\}$ (where $\deg f$ is the total degree of $f$), and refer to $F$ as a {\em $\sigma$-polynomial\/} (over $K$). If $F$ is not constant (that is, $F\notin K$), let $f(x_0,\dots,x_n)$ be as above with least possible $n$ (which determines $f$ uniquely), and put $$\text{order}(F):= n, \quad \text{complexity}(F):= (n,\ \deg_{x_n} f,\ \deg f)\in \mathbb{N}^3.$$ If $F\in K$, $F\ne 0$, then $\text{order}(F):=-\infty$ and $\text{complexity}(F):=(-\infty, 0,0)$. Finally, $\text{order}(0):= -\infty$ and $\text{complexity}(0):=(-\infty, -\infty, -\infty)$. So in all cases we have $\text{complexity}(F)\in (\mathbb{N}\cup \{-\infty\})^3$, and we order complexities lexicographically. 
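To fix ideas, the bookkeeping of order and complexity can be encoded directly. The following Python sketch (our own encoding, not part of the paper) represents the polynomial $f$ underlying a $\sigma$-polynomial as a map from exponent tuples over $(x,\sigma(x),\dots,\sigma^m(x))$ to coefficients:

```python
NEG = float('-inf')

def complexity(f):
    """Complexity (order, degree in the last occurring variable, total degree)
    of a sigma-polynomial encoded as {exponent tuple over
    (x, s(x), ..., s^m(x)): coefficient}, with s the difference operator."""
    monomials = [e for e, c in f.items() if c != 0]
    if not monomials:
        return (NEG, NEG, NEG)          # the zero polynomial
    # order = largest index i such that the variable x_i actually occurs
    order = max(max((i for i, e in enumerate(exp) if e), default=-1)
                for exp in monomials)
    if order == -1:
        return (NEG, 0, 0)              # a nonzero constant
    deg_last = max((exp[order] if len(exp) > order else 0) for exp in monomials)
    total = max(sum(exp) for exp in monomials)
    return (order, deg_last, total)

# F(x) = s(x) * x^2 + x, encoded in the variables (x, s(x)):
F = {(2, 1): 1, (1, 0): 1}
print(complexity(F))  # (1, 1, 3): order 1, degree 1 in s(x), total degree 3
```

Complexities are then compared lexicographically on these tuples, exactly as in the definition above.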
\noindent Let $a$ be an element of a difference field extension of $K$. We say that $a$ is {\em $\sigma$-transcendental\/} over $K$ if there is no nonzero $F$ as above with $F(a)=0,$ and otherwise $a$ is said to be {\em $\sigma$-algebraic\/} over $K$. As an example, let $F(x):= \sigma(x)-x$. It has order $1$, and $F(a)=0$ for all $a$ in the prime subfield of $K$, in particular, $F(a)=0$ for infinitely many $a\in K$ if $K$ has characteristic $0$. If $b$ is also an element in a difference field extension of $K$ and $a$ and $b$ are $\sigma$-transcendental over $K$, then there is a unique difference field isomorphism $K\langle a \rangle \to K \langle b \rangle$ over $K$ sending $a$ to $b$. A {\em minimal $\sigma$-polynomial of $a$ over $K$ \/} is a nonzero $\sigma$-polynomial $F(x)$ over $K$ such that $F(a)=0$ and $G(a) \neq 0$ for all nonzero $\sigma$-polynomials $G(x)$ over $K$ of lower complexity than $F(x)$. So $a$ has a minimal $\sigma$-polynomial over $K$ iff $a$ is $\sigma$-algebraic over $K$. {\em Suppose $b$ is also an element in some difference field extension of $K$, and $a$ and $b$ have a common minimal $\sigma$-polynomial $F(x)$ over $K$. Is there a difference field isomorphism $K\langle a \rangle \to K\langle b \rangle$ over $K$ sending $a$ to $b$?} The answer is not always {\em yes}, but it is {\em yes\/} if $F$ is of degree $1$ in $\sigma^m(x)$ with $F$ of order $m$. Another case in which the answer is {\em yes\/} is treated in Lemma~\ref{extresa} below. A difference field extension $L$ of $K$ is said to be {\em $\sigma$-algebraic\/} over $K$ if each $c\in L$ is $\sigma$-algebraic over $K$. For example, if $a$ is $\sigma$-algebraic over $K$, then $K\langle a\rangle$ is $\sigma$-algebraic over $K$. \noindent Let $x_0,\dots,x_n,y_0,\dots,y_n$ be distinct indeterminates, and put $\mathbf{x}=(x_0,\dots,x_n)$, $\mathbf{y}=(y_0,\ldots,y_n)$. 
For a polynomial $f(\mathbf{x})$ over a field $K$ we have a unique Taylor expansion in $K[\mathbf{x},\mathbf{y}]$: $$f(\mathbf{x}+ \mathbf{y})=\sum_{\i}f_{(\i)}(\mathbf{x})\cdot \mathbf{y}^{\i} ,$$ where the sum is over all $\i = (i_0,\dots,i_n)\in{\mathbb N}^{n+1}$, each $f_{(\i)}(\mathbf{x})\in K[\mathbf{x}]$, with $f_{(\i)}=0$ for $|\i|:=i_0+\cdots+i_n > \deg f$, and $\mathbf{y}^{\i}:=y_0^{i_0}\cdots y_n^{i_n}$. (Also, for a tuple $a=(a_0,\dots,a_n)$ with components $a_i$ in any field we put $a^{\i}:=a_0^{i_0}\cdots a_n^{i_n}$.) Thus $\i! f_{(\i)}(\mathbf{x}) =\partial_{\i} f$ where $\partial_{\i}$ is the operator $({\partial}/\partial x_0)^{i_0} \cdots ({\partial}/\partial x_n)^{i_n}$ on $K[\mathbf{x}]$, and \linebreak $\i!=i_0!\cdots i_n!$. We construe ${\mathbb N}^{n+1}$ as a monoid under $+$ (componentwise addition), and let $\leq$ be the (partial) product ordering on ${\mathbb N}^{n+1}$ induced by the natural order on ${\mathbb N}$. Define $\left( \begin{array}{c} \i\\ \j \end{array} \right)$ as $\left( \begin{array}{c} i_0\\ j_0 \end{array} \right)\cdots \left( \begin{array}{c} i_n\\ j_n \end{array} \right) \in {\mathbb N}$, when $\j\leq \i$ in ${\mathbb N}^{n+1}$. Then: \begin{lemma} For $\i,\j \in {\mathbb N}^{n+1}$ we have $(f_{(\i)})_{(\j)}= \left( \begin{array}{c} \i+\j\\ \i \end{array} \right) f_{(\i+ \j)}$. \end{lemma} \noindent In particular, $f_{(\i)}=f$ for $|\i|=0$, and if $|\i|=1$ with $i_k=1$, then $f_{(\i)}= \frac{\partial f}{\partial x_k}.$ Also, $\deg f_{(\i)}< \deg f$ if $|\i|\geq 1$ and $f\ne 0$. \noindent Let $K$ be a difference field, and $x$ an indeterminate. When $n$ is clear from context we set $\boldsymbol{\sigma}(x)=(x, \sigma(x),\dots,\sigma^n(x))$, and also $\boldsymbol{\sigma}(a)=(a, \sigma(a),\dots,\sigma^n(a))$ for $a\in K$.
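For any concrete $f$ the lemma is a finite computation, so it can be checked mechanically. Here is a small pure-Python verification (our own encoding: a polynomial is a map from exponent tuples to coefficients, and $f_{(\i)}$ is read off from the binomial expansion of $f(\mathbf{x}+\mathbf{y})$):

```python
from math import comb

def taylor_coeff(f, i):
    """f_(i) for f encoded as {exponent tuple: coefficient}, via
    f(x + y) = sum_i f_(i)(x) y^i, i.e. f_(i) = (1/i!) * partial_i f."""
    g = {}
    for a, c in f.items():
        if all(ak >= ik for ak, ik in zip(a, i)):
            b = tuple(ak - ik for ak, ik in zip(a, i))
            w = 1
            for ak, ik in zip(a, i):
                w *= comb(ak, ik)   # binomial expansion of (x_k + y_k)^{a_k}
            g[b] = g.get(b, 0) + c * w
    return {e: c for e, c in g.items() if c != 0}

def binom_tuple(i, j):
    """The multi-index binomial coefficient C(i+j, i) = prod_k C(i_k+j_k, i_k)."""
    w = 1
    for ik, jk in zip(i, j):
        w *= comb(ik + jk, ik)
    return w

# f(x0, x1) = x0^3 x1 + 2 x0 x1^2, and two multi-indices i, j:
f = {(3, 1): 1, (1, 2): 2}
i, j = (1, 0), (2, 1)
ij = tuple(a + b for a, b in zip(i, j))
lhs = taylor_coeff(taylor_coeff(f, i), j)
rhs = {e: binom_tuple(i, j) * c for e, c in taylor_coeff(f, ij).items()}
assert lhs == rhs  # (f_(i))_(j) = C(i+j, i) f_(i+j)
```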
Then for $f\in K[x_0,\dots,x_n]$ as above and $F(x)= f(\bsigma(x))$ we have the following identity in the ring of difference polynomials in the distinct indeterminates $x$ and $y$ over $K$: \begin{eqnarray*} F(x+y)&=&f(\bsigma(x+y)) =f(\bsigma(x)+\bsigma(y))\\ &=&\sum_\i f_{(\i)}(\bsigma(x))\cdot \bsigma(y)^{\i} = \sum_\i F_{(\i)}(x)\cdot \bsigma(y)^{\i}, \end{eqnarray*} where $F_{(\i)}(x):=f_{(\i)}(\bsigma(x)).$ \noindent {\bf Valued fields.} We consider valued fields as three-sorted structures $$\ca{K}=(K,\Gamma,{{\boldsymbol{k}}};v, \pi)$$ as explained in the introduction. The three sorts are referred to as the {\em field sort\/} with variables ranging over $K$, the {\em value group sort\/} with variables ranging over $\Gamma$, and the {\em residue sort\/} with variables ranging over ${{\boldsymbol{k}}}$. We say that $\ca{K}$ is of {\em equal characteristic $0$} if $\operatorname{char}(K)=\operatorname{char}({{\boldsymbol{k}}})=0$. If $\operatorname{char}(K)=0$ and $\operatorname{char}({{\boldsymbol{k}}})=p>0$, we say that $\ca{K}$ is of {\em mixed characteristic}. In dealing with a valued field $\ca{K}$ as above we also let $v$ denote the valuation of any valued field extension of $\ca{K}$ that gets mentioned, unless we indicate otherwise, and any subfield $E$ of $K$ is construed as a {\em valued\/} subfield of $\ca{K}$ in the obvious way. \noindent A valued field extension $\ca{K}'$ of a valued field $\ca{K}$ is said to be {\em immediate} if the residue field and the value group of $\ca{K}'$ are the same as those of $\ca{K}$. A valued field is {\em maximal} if it has no proper immediate valued field extension and is {\em algebraically maximal} if it has no proper immediate algebraic valued field extension. \noindent A key notion in the study of immediate extensions of valued fields is that of pseudo-Cauchy sequence.
First, a {\em well-indexed sequence} is a sequence $\{a_\rho\}$ indexed by the elements $\rho$ of some nonempty well-ordered set without largest element; in this connection ``eventually'' means ``for all sufficiently large $\rho$''. Let $\ca{K}$ be a valued field. A {\em pseudo-Cauchy sequence} (henceforth {\em pc-sequence}) in $\ca{K}$ is a well-indexed sequence $\{a_\rho\}$ in $K$ such that for some index $\rho_0 $, $$\rho'' > \rho' > \rho \ge \rho_0\ \Longrightarrow\ v(a_{\rho''}-a_{\rho'}) > v(a_{\rho'}-a_\rho).$$ In particular, a pc-sequence in $\ca{K}$ cannot be eventually constant. For a well-indexed sequence $\{a_\rho\}$ in $\ca{K}$ and $a$ in some valued field extension of $\ca{K}$ we say that $\{a_\rho\}$ {\em pseudoconverges\/} to $a$, or $a$ is a {\em pseudolimit\/} of $\{a_\rho\}$ (notation: $a_\rho \leadsto a$) if the sequence $\{v(a-a_\rho)\}$ is eventually strictly increasing; note that then $\{a_\rho\}$ is a pc-sequence in $\ca{K}$. Let $\{a_\rho\}$ be a pc-sequence in $\ca{K}$, pick $\rho_0$ as above, and put $$\gamma_\rho :=v(a_{\rho'}-a_\rho)$$ for $\rho'>\rho \ge \rho_0$; this depends only on $\rho$ as the notation suggests. Then $\{\gamma_\rho\}_{\rho \ge \rho_0}$ is strictly increasing. The {\em width} of $\{a_\rho\}$ is the set $$\{\gamma \in \Gamma\cup\{\infty\}:\gamma>\gamma_\rho\ \mbox{for all}\ \rho \ge \rho_0\}.$$ Its significance is that if $a,b\in K$ and $a_\rho \leadsto a$, then $a_\rho \leadsto b$ if and only if $v(a-b)$ is in the width of $\{a_\rho\}$. An old and useful observation by Macintyre is that if $\{a_\rho\}$ is a pc-sequence in an expansion of a valued field (for example, in a valued difference field), then $\{a_\rho\}$ has a pseudolimit in an elementary extension of that expansion. \noindent The following easy lemma will be useful in dealing with pc-sequences. 
\begin{lemma}\label{kapla} Let $\Gamma$ be an ordered abelian group, $A$ a subset of $\Gamma$, and $\{\gamma_\rho\}$ a well-indexed strictly increasing sequence in $A$. Let $f_1,\dots,f_n: A \to \Gamma$ be such that for all distinct $i,j\in \{1,\dots,n\}$ the function $f_i - f_j$ is either strictly increasing or strictly decreasing. Then there is a unique enumeration $i_1,\dots,i_n$ of $\{1,\dots,n\}$ such that $$f_{i_1}(\gamma_\rho) < \dots < f_{i_n}(\gamma_\rho), \quad \text{eventually}.$$ For this enumeration and $\delta\in \Gamma$ such that $\{\gamma\in \Gamma:\ 0 < \gamma < \delta\}$ is finite, if $1\le \mu < \nu \le n$, then $f_{i_\nu}(\gamma_\rho) - f_{i_\mu}(\gamma_\rho) > \delta,$ eventually. \end{lemma} \noindent For linear functions on $\Gamma$ this was used by Kaplansky~\cite{kaplansky} in his work on immediate extensions of valued fields. The last part of the lemma is needed in dealing with finitely ramified valued fields of mixed characteristic. As in \cite{BMS} we call the valued field $\ca{K}$ {\em finitely ramified\/} if the following two conditions are satisfied: \begin{enumerate} \item[(i)] $K$ has characteristic $0$; \item[(ii)] $\{\gamma\in \Gamma:\ 0<\gamma < v(p)\}$ is finite if ${{\boldsymbol{k}}}$ has characteristic $p>0$. \end{enumerate} In particular, $\ca{K}$ is finitely ramified if $\ca{K}$ is unramified as defined in the introduction. \noindent Let $\ca{K}=(K, \Gamma, {{\boldsymbol{k}}}; v, \pi)$ be a valued field. A {\em cross-section\/} on $\ca{K}$ is a group morphism $s: \Gamma \to K^\times$ such that $v(s\gamma)=\gamma$ for all $\gamma\in \Gamma$. The following is well-known: \begin{lemma}\label{xs1} If $\ca{K}$ is $\aleph_1$-saturated, then there is a cross-section on $\ca{K}$. In particular, there is a cross-section on some elementary extension of $\ca{K}$. 
\end{lemma} \begin{proof} With $U(\mathcal{O})$ the multiplicative group of units of $\mathcal{O}$, the inclusion $U(\mathcal{O}) \to K^\times$ and $v: K^\times \to \Gamma$ yield the exact sequence of abelian groups $$ 1\ \to\ U(\mathcal{O})\ \to\ K^\times\ \to\ \Gamma\ \to\ 0.$$ Suppose $\ca{K}$ is $\aleph_1$-saturated. Then the group $U(\mathcal{O})$ is $\aleph_1$-saturated, hence pure-injective by \cite{cherlin}, p. 171. It is also pure in $K^\times$ since $\Gamma$ is torsion-free, and thus the above exact sequence splits. \end{proof} \noindent In the proof of Theorem~\ref{embed11a} we need the following variant: \begin{lemma}\label{xs2} Let $\ca{K}$ be $\aleph_1$-saturated, let $\ca{E}=(E, \Gamma_E, \dots)$ be an $\aleph_1$-saturated valued subfield of $\ca{K}$ such that $\Gamma_E$ is pure in $\Gamma$, and let $s_E$ be a cross-section on $\ca{E}$. Then $s_E$ extends to a cross-section on $\ca{K}$. \end{lemma} \begin{proof} By Lemma~\ref{xs1} we have a cross-section $s$ on $\ca{K}$. Now $\Gamma_E$ is pure-injective, and pure in $\Gamma$, so we have an internal direct sum decomposition $\Gamma=\Gamma_E \oplus \Delta$ with $\Delta$ a subgroup of $\Gamma$. This gives a cross-section on $\ca{K}$ that coincides with $s_E$ on $\Gamma_E$ and with $s$ on $\Delta$. \end{proof} \noindent An {\em angular component map\/} on $\ca{K}$ is a multiplicative group morphism $\operatorname{ac} \colon K^{\times} \to {{\boldsymbol{k}}}^{\times}$ such that $\operatorname{ac}(a)=\pi(a)$ whenever $v(a)=0$; we extend it to $\operatorname{ac} \colon K \to {{\boldsymbol{k}}}$ by setting $\operatorname{ac}(0)=0$ and also refer to this extension as an angular component map on $\ca{K}$. A cross-section $s$ on $\ca{K}$ yields an angular component map $\operatorname{ac}$ on $\ca{K}$ by setting $\operatorname{ac}(x)= \pi\big(x/s(v(x))\big)$ for $x\in K^\times$. Thus Lemma~\ref{xs1} goes through with angular component maps instead of cross-sections.
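For a concrete instance of the construction just described, take the valued field $(\mathbb Q, \mathbb Z, \mathbb F_3;\, v_3, \pi)$ with the $3$-adic valuation: $s(\gamma)=3^\gamma$ is a cross-section, and the induced map $\operatorname{ac}(x)=\pi\big(x/s(v(x))\big)$ is an angular component map. A quick check (the helper names are ours):

```python
from fractions import Fraction

def v3(q):
    """3-adic valuation on the nonzero rationals."""
    q = Fraction(q)
    n, d, v = q.numerator, q.denominator, 0
    while n % 3 == 0:
        n //= 3; v += 1
    while d % 3 == 0:
        d //= 3; v -= 1
    return v

def s(gamma):
    """Cross-section Z -> Q^x: a group morphism with v3(s(gamma)) = gamma."""
    return Fraction(3) ** gamma

def residue(q):
    """Reduction mod 3 of an element of the valuation ring (v3(q) >= 0)."""
    q = Fraction(q)
    return (q.numerator * pow(q.denominator, -1, 3)) % 3

def ac(q):
    """Angular component induced by s: ac(x) = residue(x / s(v3(x)))."""
    return 0 if q == 0 else residue(Fraction(q) / s(v3(q)))

x, y = Fraction(6), Fraction(5, 9)
assert v3(s(v3(x))) == v3(x)                     # s is a section of v3
assert ac(x * y) == (ac(x) * ac(y)) % 3          # ac is multiplicative
assert ac(Fraction(2)) == residue(Fraction(2))   # ac = residue on units
```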
\noindent {\bf Valued difference fields.} A {\em valued difference field\/} is a valued field $\ca{K}$ as above where $K$ is not just a field, but a difference field whose difference operator $\sigma$ satisfies $\sigma(\ca{O})= \ca{O}$. It follows that $\sigma$ induces an automorphism of the residue field: $$ \pi(a) \mapsto \pi(\sigma(a)):\ {{\boldsymbol{k}}} \to {{\boldsymbol{k}}}, \quad a \in \ca{O}.$$ We denote this automorphism by $\bar{\sigma}$, and ${{\boldsymbol{k}}}$ equipped with $\bar{\sigma}$ is called the {\em residue difference field of $\ca{K}$}. (Likewise, $\sigma$ induces an automorphism of the value group $\Gamma$; at a later stage we restrict attention to $\ca{K}$ where $\sigma$ induces the identity on $\Gamma$.) \noindent Let $\ca{K}$ be a valued difference field as above. The difference operator $\sigma$ of $K$ is also referred to as the {\em difference operator of $\ca{K}$}. By an {\em extension\/} of $\ca{K}$ we shall mean a valued difference field $\ca{K}'=(K',\dots)$ that extends $\ca{K}$ as a valued field and whose difference operator extends the difference operator of $\ca{K}$. In this situation we also say that $\ca{K}$ is a {\em valued difference subfield of\/} $\ca{K}'$, and we indicate this by $\ca{K} \le \ca{K}'$. Such an extension is called {\em immediate\/} if it is immediate as an extension of valued fields. In dealing with a valued difference field $\ca{K}$ as above $v$ also denotes the valuation of any extension of $\ca{K}$ that gets mentioned (unless specified otherwise), and any difference subfield $E$ of $K$ is construed as a valued difference subfield of $\ca{K}$ in the obvious way. The residue field of the valued subfield $\text{Fix}(K)$ of $\ca{K}$ is clearly a subfield of $\text{Fix}({{\boldsymbol{k}}})$. Let $\ca{K}^h= (K^h, \Gamma, {{\boldsymbol{k}}};\dots)$ be the henselization of the underlying valued field of $\ca{K}$.
By the universal property of ``henselization'' the operator $\sigma$ extends uniquely to an automorphism $\sigma^h$ of the field $K^h$ such that $\ca{K}^h$ with $\sigma^h$ is a valued difference field. Accordingly we shall view $\ca{K}^h$ as a valued difference field, making it thereby an immediate extension of the valued difference field $\ca{K}$. Given an extension $\ca{K} \leq \ca{K}'$ of valued difference fields and $a \in K'$, we define $\ca{K} \langle a \rangle$ to be the smallest valued difference subfield of $\ca{K}'$ extending $\ca{K}$ and containing $a$ in its underlying difference field; thus the underlying difference field of $\ca{K} \langle a \rangle$ is $K \langle a \rangle$. \noindent {\bf Two lemmas.} Suppose $\ca{K}=(K,\Gamma,{{\boldsymbol{k}}};v, \pi)$ and $\ca{K}'=(K', \Gamma', {{\boldsymbol{k}}}'; v', \pi')$ are valued difference fields, put $\ca{O}:= \ca{O}_v,\ \ca{O}':= \ca{O}_{v'}$, and let $\sigma$ denote both the difference operator of $\ca{K}$ and of $\ca{K}'$. Let $\ca{E}=(E, \Gamma_E, {{\boldsymbol{k}}}_E; \dots)$ be a valued difference subfield of both $\ca{K}$ and $\ca{K}'$, that is, $\ca{E} \le \ca{K}$ and $\ca{E}\le \ca{K}'$. The next lemma is rather obvious, but Lemma~\ref{extresa} is more subtle and our later use of it is one of the reasons that we never need the Genericity Axiom of \cite{BMS}. \begin{lemma}\label{extrest} Let $a \in \ca{O}$ and assume $\alpha=\bar{a}$ is $\bar{\sigma}$-transcendental over ${{\boldsymbol{k}}}_E$. 
Then \begin{enumerate} \item[(i)] $v(P(a))= \min\limits_{\l} \{v(b_{\l})\}$ for each $\sigma$-polynomial $P(x)=\sum b_{\l} \boldsymbol{\sigma}(x)^{\l}$ over $E$; \item[(ii)] $v(E \langle a \rangle^{\times})=v(E^{\times})=\Gamma_E$, and $\ca{E} \langle a \rangle$ has residue field ${{\boldsymbol{k}}}_E \langle \alpha \rangle$; \item[(iii)] if $b\in \ca{O}'$ is such that $\beta=\bar{b}$ is $\bar{\sigma}$-transcendental over ${{\boldsymbol{k}}}_E$, then there is a valued difference field isomorphism $\ca{E}\langle a \rangle \to \ca{E} \langle b \rangle$ over $\ca{E}$ sending $a$ to $b$. \end{enumerate} \end{lemma} \begin{proof} Let $P(x)=\sum b_{\l} \boldsymbol{\sigma}(x)^{\l}$ be a nonzero $\sigma$-polynomial over $E$. Then $P(x)=cQ(x)$ where $c \in E^{\times}$ and $v(c)=\min\limits_{\l} \{v(b_{\l})\}$, and $Q(x)$ is a $\sigma$-polynomial over the valuation ring of $E$ with some coefficient equal to $1$. Since $\alpha=\bar{a}$ is $\bar{\sigma}$-transcendental over ${{\boldsymbol{k}}}_E$, $\bar{Q}(\bar{a}) \neq 0$. Therefore $v(Q(a))=0$, and thus $$v(P(a))= v(c)=\min\limits_{\l} \{v(b_{\l})\}.$$ It follows that $v(E \langle a \rangle^{\times})=v(E^{\times})$. A similar argument shows that $E \langle a \rangle$ has residue field ${{\boldsymbol{k}}}_E \langle \alpha \rangle$. It also follows from (i) that $a$ is $\sigma$-transcendental over $E$, and (iii) is an easy consequence of this fact and of (i). \end{proof} \noindent Recall that at the beginning of this section we defined the complexity of a difference polynomial over a difference field. \begin{lemma}\label{extresa}\footnote{We thank Martin Hils for pointing out a serious error in the proof of a related lemma in \cite{salih1}.} Assume $\operatorname{char}({{\boldsymbol{k}}})=0$, and let $G(x)$ be a nonconstant $\sigma$-polynomial over the valuation ring of $E$ whose reduction $\bar{G}(x)$ has the same complexity as $G(x)$.
Let $a\in \ca{O}$, $b\in \ca{O}'$, and assume that $G(a)=0$, $G(b)=0$, and that $\bar{G}(x)$ is a minimal $\bar{\sigma}$-polynomial of $\alpha:=\bar{a}$ and of $\beta:= \bar{b}$ over ${{\boldsymbol{k}}}_E$. Then \begin{enumerate} \item[(i)] $\ca{E} \langle a \rangle$ has value group $v(E^{\times})=\Gamma_E$ and residue field ${{\boldsymbol{k}}}_E \langle \alpha \rangle$; \item[(ii)] if there is a difference field isomorphism ${{\boldsymbol{k}}}_E\langle \alpha \rangle \to {{\boldsymbol{k}}}_E \langle \beta \rangle$ over ${{\boldsymbol{k}}}_E$ sending $\alpha$ to $\beta$, then there is a valued difference field isomorphism $\ca{E}\langle a \rangle \to \ca{E} \langle b \rangle$ over $\ca{E}$ sending $a$ to $b$. \end{enumerate} \end{lemma} \begin{proof} To simplify notation we set, for $k\in \mathbb{Z}$, $$a_k:= \sigma^k(a),\quad \alpha_k:= \bar{\sigma}^k(\alpha),\quad b_k:= \sigma^k(b),\quad \beta_k:= \bar{\sigma}^k(\beta).$$ As in the proof of Lemma~\ref{extrest} one shows that if $P(x)=\sum b_{\l} \boldsymbol{\sigma}(x)^{\l}$ is a $\sigma$-polynomial over $E$ of lower complexity than $G(x)$, then $v(P(a))=\min\limits_{\l}\{v(b_{\l})\}$. It is also clear that $G$ is a minimal $\sigma$-polynomial of $a$ over $E$. Let $G$ have order $m$ and degree $d>0$ with respect to $\sigma^m(x)$, so $$G(x)= P_0(x) + P_1(x)\sigma^m(x) + \cdots + P_d(x)\sigma^m(x)^d $$ where $P_0, \dots, P_d$ are $\sigma$-polynomials over the valuation ring of $E$ of order less than $m$, with $P_d\ne 0$. Then the valued subfield $E_{m-1}:=E(a_0,\dots, a_{m-1})$ of $\ca{K}$ has transcendence basis $a_0,\dots, a_{m-1}$ over $E$, the residue field of $E_{m-1}$ is ${{\boldsymbol{k}}}_E(\alpha_0,\dots,\alpha_{m-1})$ with transcendence basis $\alpha_0,\dots,\alpha_{m-1}$ over ${{\boldsymbol{k}}}_E$, and the value group of $E_{m-1}$ is $\Gamma_E$.
Also, $v(P_d(a))=0$ and $v(P_i(a))\ge 0$ for $i=0,\dots, d-1$, and $$g(T):=T^d + p_{d-1}T^{d-1} + \dots + p_0, \quad \text{with } p_i:= P_i(a)/P_d(a) \text{ for }i=0,\dots, d-1,$$ is the minimum polynomial of $a_m$ over $E_{m-1}$ and has its coefficients in the valuation ring of $E_{m-1}$, and the reduction $\bar{g}(T)$ of $g(T)$ is the minimum polynomial of $\alpha_m$ over ${{\boldsymbol{k}}}_E(\alpha_0,\dots, \alpha_{m-1})$. For the rest of the proof we assume without loss that $\ca{K}$ and $\ca{K}'$ are henselian as valued fields. \noindent For $n\ge m$ we set $$E_n:= E(a_0, \dots, a_n), \quad \text{ a valued subfield of } \ca{K},$$ we let $E_n^h$ be the henselization of $E_n$ in $\ca{K}$, and let $E_{m-1}^h$ be the henselization of $E_{m-1}$ in $\ca{K}$. For $n\ge m$, let $g_n(T)$ be the minimum polynomial of $a_n$ over $E_{n-1}^h$. \noindent {\bf Claim 1}: for $n\ge m$ the polynomial $g_n$ has its coefficients in the valuation ring of $E_{n-1}^h$, the residue field of $E_n$ is ${{\boldsymbol{k}}}_E(\alpha_0,\dots, \alpha_n)$, the value group of $E_n$ is $\Gamma_E$, the reduction $\bar{g}_n$ of $g_n$ is the minimum polynomial of $\alpha_n$ over ${{\boldsymbol{k}}}_E(\alpha_0,\dots, \alpha_{n-1})$. \noindent Claim 1 holds for $n=m$: $\bar{g}(T)$ is the minimum polynomial of $\alpha_m$ over the residue field ${{\boldsymbol{k}}}_E(\alpha_0,\dots, \alpha_{m-1})$ of $E_{m-1}$, and so the monic polynomial $g(T)$ is necessarily the minimum polynomial of $a_m$ over $E_{m-1}^h$, that is, $g_m=g$. Assume inductively that the claim holds for a certain $n\ge m$. By applying $\sigma$ to the coefficients of $g_n(T)$ we obtain a monic polynomial $g_n^{\sigma}(T)$ over the valuation ring of $E_{n}^h$ with $a_{n+1}$ as a zero. Thus $g_{n+1}(T)$ is a monic irreducible factor of $g_n^{\sigma}(T)$ in $E_{n}^h[T]$, and has therefore coefficients in the valuation ring of $E_{n}^h$. 
Its reduction $\bar{g}_{n+1}$ divides the reduction of $g_n^{\sigma}$ (in the polynomial ring ${{\boldsymbol{k}}}_E( \alpha_0, \dots, \alpha_n)[T]$) and so $\alpha_{n+1}$, being a simple zero of this last reduction, is a simple zero of $\bar{g}_{n+1}$. It only remains to show that $\bar{g}_{n+1}$ is irreducible in ${{\boldsymbol{k}}}_E( \alpha_0, \dots, \alpha_n)[T]$. Suppose it is not. Then $\bar{g}_{n+1}(T)=\phi(T)\psi(T)$ where $\phi,\psi\in {{\boldsymbol{k}}}_E( \alpha_0, \dots, \alpha_n)[T]$ are monic of degree $\ge 1$, with $\phi$ irreducible in this polynomial ring and $\phi(\alpha_{n+1})=0$. Hence $\phi$ and $\psi$ are coprime. Then the factorization $\bar{g}_{n+1}=\phi\psi$ can be lifted to a nontrivial factorization of $g_{n+1}$ in $E_{n}^h[T]$, a contradiction. Claim 1 is established. It follows that $E(a_k:\ k\in \mathbb{N})$ has residue field ${{\boldsymbol{k}}}_E(\alpha_k:\ k\in \mathbb{N})$ and value group $\Gamma_E$. Applying the valued field automorphism $\sigma^{-n}$ yields that the valued subfield $E(a_{k-n}:\ k\in \mathbb{N})$ of $\ca{K}$ has residue field ${{\boldsymbol{k}}}_E(\alpha_{k-n}:\ k\in \mathbb{N})$ and value group $\Gamma_E$. Hence $E\langle a \rangle$ has residue field ${{\boldsymbol{k}}}_E\langle \alpha \rangle$ and value group $\Gamma_E$. We have proved (i). \noindent To prove (ii), let $\iota: {{\boldsymbol{k}}}_E\langle \alpha\rangle \to {{\boldsymbol{k}}}_E\langle \beta\rangle$ be a difference field isomorphism over ${{\boldsymbol{k}}}_E$ sending $\alpha$ to $\beta$. Let $F_{m-1}:= E(b_0,\dots, b_{m-1})$, a valued subfield of $\ca{K}'$. Then $$h(T):=T^d + q_{d-1}T^{d-1} + \dots + q_0, \quad \text{with } q_i:= P_i(b)/P_d(b) \text{ for }i=0,\dots, d-1,$$ is the minimum polynomial of $b_m$ over $F_{m-1}$ and has its coefficients in the valuation ring of $F_{m-1}$, and the reduction $\bar{h}(T)$ of $h(T)$ is the minimum polynomial of $\beta_m$ over ${{\boldsymbol{k}}}_E(\beta_0,\dots, \beta_{m-1})$.
Now $\bar{g}$ and $\bar{h}$ correspond under $\iota$. For $n\ge m$, let $$F_n:= E(b_0,\dots, b_n), \quad \text{ a valued subfield of }\ca{K}',$$ and let $F_n^h$ be the henselization of $F_n$ in $\ca{K}'$. For $n\ge m$, let $h_n(T)$ be the minimum polynomial of $b_n$ over $F_{n-1}^h$, so $h_n$ has its coefficients in the valuation ring of $F_{n-1}^h$, the residue field of $F_n$ is ${{\boldsymbol{k}}}_E(\beta_0,\dots, \beta_n)$, the value group of $F_n$ is $\Gamma_E$, the reduction $\bar{h}_n$ of $h_n$ is the minimum polynomial of $\beta_n$ over ${{\boldsymbol{k}}}_E(\beta_0,\dots, \beta_{n-1})$. It follows that $\bar{g}_n$ and $\bar{h}_n$ correspond under $\iota$, for each $n\ge m$. \noindent {\bf Claim 2}: for $n\ge m$ there is a (unique) valued field isomorphism $i_n\ :\ E_n \to F_n$ over $E$ sending $a_k$ to $b_k$ for $k=0,\dots,n$. \noindent From the remarks at the beginning of the proof it is clear that we have a unique valued field isomorphism $i_{m-1}\ :\ E_{m-1} \to F_{m-1}$ over $E$ sending $a_k$ to $b_k$ for $k=0,\dots,m-1$. It follows that the minimum polynomials $g$ and $h$ correspond under $i_{m-1}$, and so we have a field isomorphism $E_m \to F_m$ extending $i_{m-1}$ and sending $a_m$ to $b_m$. This is a {\em valued\/} field isomorphism since the residue field ${{\boldsymbol{k}}}_E(\alpha_0, \dots, \alpha_m)$ of $E_m$ has the same degree over the residue field ${{\boldsymbol{k}}}_E(\alpha_0, \dots,\alpha_{m-1})$ of $E_{m-1}$ as $E_m$ has over $E_{m-1}$, and likewise with $F_m$ and $F_{m-1}$. This proves Claim 2 for $n=m$. Assume the claim holds for a certain $n\ge m$. Then $g_n$ and $h_n$ correspond under $i_{n-1}$, and so $g_n^\sigma$ and $h_n^\sigma$ correspond under $i_n$. From the unique lifting properties of henselian local rings it follows that $g_{n+1}$ is the unique monic polynomial in $E_n^h[T]$ that divides $g_n^\sigma$, has its coefficients in the valuation ring of $E_n^h$, and whose reduction is $\bar{g}_{n+1}$; likewise with $h_{n+1}$. 
Therefore $g_{n+1}$ and $h_{n+1}$ correspond under $i_n$, and so we have a field isomorphism $E_n^h(a_{n+1}) \to F_n^h(b_{n+1})$ that extends $i_n$ and sends $a_{n+1}$ to $b_{n+1}$. Arguing as in the case $n=m$ we see that this field isomorphism is a valued field isomorphism; its restriction to $E_{n+1}$ is the desired $i_{n+1}$. This proves Claim 2, and then it is easy to finish the proof of (ii). \end{proof} \noindent Lemma~\ref{extresa} and its proof go through if we replace the assumption $\operatorname{char}({{\boldsymbol{k}}})=0$ by its consequence that $\alpha_m$ is a simple zero of its minimum polynomial over ${{\boldsymbol{k}}}_E(\alpha_0, \dots, \alpha_{m-1})$, where $m$ is the order of $G$ as in the proof. (Just add to Claim 1 in the proof that $\alpha_n$ is a simple zero of $\bar{g}_n$, for all $n\ge m$.) \noindent {\bf Hahn difference fields and Witt difference fields.} Let ${{\boldsymbol{k}}}$ be a field and $\Gamma$ an ordered abelian group. This gives the Hahn field ${{\boldsymbol{k}}}((t^{\Gamma}))$ whose elements are the formal sums $a=\sum_{\gamma \in \Gamma} a_\gamma t^{\gamma}$ with $a_\gamma \in {{\boldsymbol{k}}}$ for all $\gamma$, with well-ordered {\em support\/} $\{\gamma:\ a_\gamma \neq 0\} \subseteq \Gamma$. With $a$ as above, we define the valuation $v: {{\boldsymbol{k}}}((t^{\Gamma}))^{\times} \to \Gamma$ by $v(a):=\min \{\gamma: a_\gamma \neq 0\}$, and the surjective ring morphism $\pi:\ \ca{O}_v \to {{\boldsymbol{k}}}$ by $\pi(a):=a_0$. In this way we obtain the (maximal) valued field $\ca{K}=({{\boldsymbol{k}}}((t^\Gamma)), \Gamma, {{\boldsymbol{k}}};v,\pi)$ to which we also just refer as the {\em Hahn field} ${{\boldsymbol{k}}}((t^\Gamma))$. \noindent Let the field ${{\boldsymbol{k}}}$ also be equipped with an automorphism $\bar{\sigma}$.
Then $$\sum_{\gamma} a_\gamma t^{\gamma} \mapsto \sum_{\gamma} \bar{\sigma}(a_\gamma) t^\gamma $$ is an automorphism, to be denoted by $\sigma$, of the field ${{\boldsymbol{k}}}((t^\Gamma))$, with $\sigma(\ca{O}_v)=\ca{O}_v$. We consider the three-sorted structure $({{\boldsymbol{k}}}((t^\Gamma)), \Gamma, {{\boldsymbol{k}}};\ v,\pi)$, with the field ${{\boldsymbol{k}}}((t^\Gamma))$ equipped with the automorphism $\sigma$ as above, as a valued difference field, and also refer to it as the {\em Hahn difference field} ${{\boldsymbol{k}}}((t^\Gamma))$. Thus $\text{Fix}\big({{\boldsymbol{k}}}((t^\Gamma))\big)=\text{Fix}({{\boldsymbol{k}}})((t^\Gamma))$. \noindent Let now ${{\boldsymbol{k}}}$ be a perfect field of characteristic $p>0$. Then we have the ring $\operatorname{W}[{{\boldsymbol{k}}}]$ of Witt vectors over ${{\boldsymbol{k}}}$; it is a complete discrete valuation ring whose elements are the infinite sequences $(a_0, a_1, a_2,\dots)$ with all $a_n\in {{\boldsymbol{k}}}$; see for example \cite{serre} for how addition and multiplication are defined. The Frobenius automorphism $x \mapsto x^p$ of ${{\boldsymbol{k}}}$ induces the ring automorphism $$(a_0,a_1,a_2,\dots) \mapsto (a_0^p, a_1^p, a_2^p,\dots)$$ of $\operatorname{W}[{{\boldsymbol{k}}}]$. This automorphism of $\operatorname{W}[{{\boldsymbol{k}}}]$ extends to a field automorphism, the {\em Witt frobenius}, of the fraction field $\operatorname{W}({{\boldsymbol{k}}})$ of $\operatorname{W}[{{\boldsymbol{k}}}]$. 
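As a quick computational sanity check on the Witt vector arithmetic just cited, the following minimal Python sketch works with length-$2$ truncated Witt vectors over $\mathbb{Z}$; the truncation length and the prime $p=3$ are illustrative choices of ours, not from the text. It verifies that the addition law $S_0=x_0+y_0$, $S_1=x_1+y_1+\big(x_0^p+y_0^p-(x_0+y_0)^p\big)/p$ makes the ghost map $(x_0,x_1)\mapsto\big(W_0(x_0),W_1(x_0,x_1)\big)$ additive.

```python
# Minimal sketch of length-2 truncated Witt vector addition over Z, p = 3.
# (Truncation length and prime are illustrative choices; see Serre for the
# general definition via the ghost polynomials W_n.)
p = 3

def ghost(x):
    """Ghost components (W_0, W_1) of a length-2 Witt vector (x0, x1)."""
    x0, x1 = x
    return (x0, x0**p + p * x1)

def witt_add(x, y):
    """Witt vector sum (S_0, S_1); the correction term is integral since
    p divides the middle binomial coefficients of (x0 + y0)^p."""
    x0, x1 = x
    y0, y1 = y
    s0 = x0 + y0
    s1 = x1 + y1 + (x0**p + y0**p - (x0 + y0) ** p) // p  # exact division
    return (s0, s1)

# the defining property: the ghost map turns Witt addition into
# componentwise addition
for x in [(1, 2), (4, -1)]:
    for y in [(2, 3), (0, 5)]:
        gx, gy = ghost(x), ghost(y)
        assert ghost(witt_add(x, y)) == (gx[0] + gy[0], gx[1] + gy[1])
```

Over a perfect field of characteristic $p$ the same polynomials $S_n$ define the addition of $\operatorname{W}[{{\boldsymbol{k}}}]$; working over $\mathbb{Z}$ merely makes the ghost-map identity checkable by exact integer arithmetic.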
We consider $\operatorname{W}({{\boldsymbol{k}}})$ as a valued difference field by taking the Witt frobenius as its difference operator, by taking the valuation $v$ to be the unique one with valuation ring $\operatorname{W}[{{\boldsymbol{k}}}]$, value group $\mathbb{Z}$ and $v(p)=1$, and by letting $\pi: \operatorname{W}[{{\boldsymbol{k}}}] \to {{\boldsymbol{k}}}$ be the canonical map $$(a_0, a_1, a_2,\dots) \mapsto a_0.$$ We refer to this valued difference field as the {\em Witt difference field \/} $\operatorname{W}({{\boldsymbol{k}}})$. For any perfect subfield ${{\boldsymbol{k}}}'$ of ${{\boldsymbol{k}}}$ we consider $\operatorname{W}({{\boldsymbol{k}}}')$ as a valued difference subfield of $\operatorname{W}({{\boldsymbol{k}}})$ in the obvious way. In particular, with $\mathbb{F}_p$ the prime field of ${{\boldsymbol{k}}}$, we have $\text{Fix}\big(\operatorname{W}({{\boldsymbol{k}}})\big)= \operatorname{W}(\mathbb{F}_p)$, and the latter is identified with the valued field $\mathbb{Q}_p$ of $p$-adic numbers in the usual way. In the last section the functorial nature of $\operatorname{W}$ plays a role: any field embedding $\iota: {{\boldsymbol{k}}} \to {{\boldsymbol{k}}}'$ into a perfect field ${{\boldsymbol{k}}}'$ induces the ring embedding $$\operatorname{W}[\iota]\colon \operatorname{W}[{{\boldsymbol{k}}}] \to \operatorname{W}[{{\boldsymbol{k}}}'], \quad (a_0, a_1, a_2,\dots) \mapsto (\iota a_0, \iota a_1, \iota a_2,\dots).$$ \noindent {\bf Two axioms.} Let $\ca{K}$ be a valued difference field, and consider the following two conditions on $\ca{K}$. The first one says that $\sigma$ preserves the valuation $v$. \noindent {\bf Axiom 1.} For all $a \in K^{\times}$, $v(\sigma(a))=v(a)$. \noindent {\bf Axiom 2.} For all $\gamma \in \Gamma$ there is $a \in \operatorname{Fix}(K)$ such that $v(a)=\gamma$. \noindent It is easy to see that Axiom 2 implies Axiom 1. 
\noindent (Indeed, given $a\in K^{\times}$, Axiom 2 yields $b\in \operatorname{Fix}(K)$ with $v(b)=v(a)$; then $a=bu$ with $u\in \ca{O}^{\times}$, and $\sigma(u)\in \ca{O}^{\times}$ since $\sigma(\ca{O})=\ca{O}$, so $v(\sigma(a))=v\big(b\sigma(u)\big)=v(b)=v(a)$.)
If $\Gamma$ is an ordered abelian group and ${{\boldsymbol{k}}}$ a difference field, then the Hahn difference field ${{\boldsymbol{k}}}((t^\Gamma))$ satisfies Axiom 2. If ${{\boldsymbol{k}}}$ is a perfect field of characteristic $p>0$, then the Witt difference field $\operatorname{W}({{\boldsymbol{k}}})$ satisfies Axiom 2. If $\ca{K}$ satisfies Axiom 1, so does any valued difference subfield of $\ca{K}$, and any extension of $\ca{K}$ with the same value group. If $\ca{K}$ satisfies Axiom 2, so does any extension with the same value group. \noindent {\em From now on we assume that all our valued difference fields satisfy Axiom $1$}. By this convention, whenever we refer to an extension of a valued difference field, this extension is also assumed to satisfy Axiom 1. \end{section} \begin{section}{Pseudoconvergence and $\sigma$-polynomials} \noindent If $\{a_\rho\}$ is a pc-sequence in a valued field $K$ and $a_\rho \leadsto a$ with $a\in K$, then for an ordinary nonconstant polynomial $f(x) \in K[x]$ we have $f(a_\rho) \leadsto f(a)$, see \cite{kaplansky}. This fails in general for nonconstant $\sigma$-polynomials over valued difference fields. We do, however, have a variant of this pseudo-continuity for $\sigma$-polynomials using {\em equivalent pc-sequences}. This is a key device from \cite{BMS}, and we follow its treatment, but with some differences. \begin{definition}\label{equiv.pc} Two pc-sequences $\{a_\rho\},\{b_\rho\}$ in a valued field are equivalent if for all $a$ in all valued field extensions, $a_\rho \leadsto a \Leftrightarrow b_\rho \leadsto a.$ \end{definition} \noindent This is an equivalence relation on the set of pc-sequences with given index set and in a given valued field, and: \begin{lemma} Two pc-sequences $\{a_\rho\}$ and $\{b_\rho\}$ in a valued field are equivalent if and only if they have the same width and a common pseudolimit in some valued field extension. 
\end{lemma} \noindent {\bf The Basic Calculation.} Let $\ca{K}$ be a valued difference field satisfying Axiom 2, and let $\{a_\rho\}$ be a pc-sequence from $K$ such that $a_\rho \leadsto a$ with $a$ in an extension of $\ca{K}$. Put $\gamma_\rho:=v(a_\rho-a)$; then $\{\gamma_\rho\}$ is eventually strictly increasing. Let $G$ be a nonconstant $\sigma$-polynomial over $K$ of order $\le n$. Under an additional assumption on ${{\boldsymbol{k}}}$ we shall construct a pc-sequence $\{b_\rho\}$ from $K$ equivalent to $\{a_\rho\}$ such that $G(b_\rho) \leadsto G(a)$. We first choose $\theta_\rho\in \operatorname{Fix}(K)$ such that $v(\theta_\rho)= \gamma_\rho;$ this is possible by Axiom 2. We set $b_\rho:=a_\rho+\mu_\rho\theta_\rho$ and for now we demand only that $\mu_\rho\in K$ and $v(\mu_\rho)=0.$ Define $d_\rho$ by $a_\rho-a=\theta_\rho d_\rho,$ so $v(d_\rho)=0$. Then \begin{eqnarray*} b_\rho-a&=&b_\rho-a_\rho+a_\rho-a\\ &=&\theta_\rho(\mu_\rho+d_\rho). \end{eqnarray*} We now impose also $v(\mu_\rho+d_\rho)=0$. This ensures that $b_\rho\leadsto a$, and that $\{a_\rho\}$ and $\{b_\rho\}$ have the same width, so they are equivalent. Note that $d_\rho$ depends only on our choice of $\theta_\rho$ (not on $\mu_\rho$), and $d_\rho$ won't normally be in $K$.
Now \begin{eqnarray*} G(b_\rho)-G(a)&=& \sum\limits_{|\l|\geq 1}G_{(\l)}(a)\cdot\boldsymbol{\sigma} (b_\rho-a)^{\l}\\ &=&\sum\limits_{m\geq 1}\sum\limits_{|\l|=m}G_{(\l)}(a)\cdot \boldsymbol{\sigma}(b_\rho-a)^{\l}\\ &=&\sum\limits_{m\geq 1}\sum\limits_{|\l|=m}G_{(\l)}(a)\cdot \boldsymbol{\sigma}\big(\theta_\rho(\mu_\rho+d_\rho)\big)^{\l}\\ &=&\sum\limits_{m\geq 1}\sum\limits_{|\l|=m}G_{(\l)}(a)\cdot \boldsymbol{\sigma} (\theta_\rho)^{\l}\cdot\boldsymbol{\sigma}(\mu_\rho+d_\rho)^{\l}\\ &=&\sum\limits_{m\geq 1}\theta_\rho^m\cdot G_m(\mu_\rho+d_\rho) \end{eqnarray*} where $G_m$ is the $\sigma$-polynomial over $K\langle a\rangle$ given by $$G_m(x)=\sum\limits_{|\l|=m}G_{(\l)}(a)\cdot\boldsymbol{\sigma}(x)^{\l}.$$ Since $G\notin K$, there is an $m\ge 1$ such that $G_m\ne 0$. For such $m$, pick $\l=\l(m)$ with $|\l|=m$ for which $v\big(G_{(\l)}(a)\big)$ is minimal, so $G_m(x)= G_{(\l)}(a)\cdot g_m\big(\boldsymbol{\sigma}(x)\big)$ where $g_m(x_0,\dots,x_n)$ has its coefficients in the valuation ring of $K\langle a \rangle$, with one of its coefficients equal to $1$. Then $$v\big(\theta_\rho^mG_m(\mu_\rho+d_\rho)\big)= m\gamma_\rho + v(G_{(\l)}(a)) + v\big(g_m(\boldsymbol{\sigma}(\mu_\rho+d_\rho))\big).$$ This calculation suggests a new constraint on $\{\mu_\rho\}$, namely that for each $m\ge 1$ with $G_m\ne 0$, $$ v\big(g_m(\boldsymbol{\sigma}(\mu_\rho + d_\rho))\big)=0 \qquad \text{(eventually in $\rho$)}. $$ Assume this constraint is met. Then Lemma~\ref{kapla} yields a fixed $m\ge 1$ such that if $m'\ge 1$ and $m'\ne m$, then, eventually in $\rho$, $$v\big(\theta_\rho^mG_m(\mu_\rho+d_\rho)\big) < v\big(\theta_\rho^{m'}G_{m'}(\mu_\rho+d_\rho)\big).$$ For this $m$ we have, eventually in $\rho$, $$v\big(G(b_\rho) - G(a)\big)= m\gamma_\rho + v(G_{(\l)}(a)), \qquad \l=\l(m),$$ so $G(b_\rho) \leadsto G(a)$, as desired. 
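The expansion $G(b_\rho)-G(a)=\sum_{|\l|\ge 1}G_{(\l)}(a)\cdot\boldsymbol{\sigma}(b_\rho-a)^{\l}$ driving this calculation is a Taylor expansion with divided derivatives. In the degenerate case $\sigma=\operatorname{id}$ and order $0$ (characteristic $0$) it reduces to $G(b)-G(a)=\sum_{l\ge 1}\frac{G^{(l)}(a)}{l!}(b-a)^l$, which the following Python sketch checks for one sample polynomial; the polynomial and the sample points are our illustrative choices.

```python
from math import comb

# G(x) = x^3 + 2x, stored as a coefficient list c_k for x^k (illustrative choice)
coeffs = [0, 2, 0, 1]

def G(x):
    return sum(c * x**k for k, c in enumerate(coeffs))

def G_div(l, x):
    """Divided derivative G_(l)(x) = G^{(l)}(x) / l!; differentiating x^k
    l times and dividing by l! leaves the binomial coefficient comb(k, l)."""
    return sum(comb(k, l) * coeffs[k] * x ** (k - l) for k in range(l, len(coeffs)))

# G(b) - G(a) = sum over l >= 1 of G_(l)(a) * (b - a)^l
for a in [0, 3, -2]:
    for b in [1, 7]:
        assert G(b) - G(a) == sum(
            G_div(l, a) * (b - a) ** l for l in range(1, len(coeffs))
        )
```

In the general case the monomials $(b-a)^l$ are replaced by the mixed monomials $\boldsymbol{\sigma}(b_\rho-a)^{\l}$, and the basic calculation isolates the dominant homogeneous degree $m$ exactly as one would for this ordinary Taylor expansion.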
To have $\{\mu_\rho\}$ meet all constraints we introduce an axiom about $\ca{K}$ which involves only the residue difference field ${{\boldsymbol{k}}}$ of $\ca{K}$. (More precisely, it is an {\em axiom scheme}.) \vglue.3cm \noindent {\bf Axiom 3.}\ For each integer $d>0$ there is $y\in {{\boldsymbol{k}}}$ with $\bar{\sigma}^d(y)\ne y$. If $\text{char}({{\boldsymbol{k}}})=p>0$, then for any integers $d,e$ with $d\ne 0$ and $e> 0$ there is $y\in {{\boldsymbol{k}}}$ with $\bar{\sigma}^d(y)\ne y^{p^e}$. \noindent By \cite{cohn}, p. 201, this axiom implies that there are no residual $\sigma$-identities at all, that is, for every nonzero $f \in {{\boldsymbol{k}}}[x_0,\ldots,x_n]$, there is a $y$ in ${{\boldsymbol{k}}}$ with $f(\bar{\boldsymbol{\sigma}}(y))\not=0$ (and thus the set $\{y\in {{\boldsymbol{k}}}: f(\bar{\boldsymbol{\sigma}}(y))\not=0\}$ is infinite). Even so, it may not be obvious that Axiom 3 allows us to select $\{\mu_\rho\}$ as required, since the $g_m$'s are over $K\langle a\rangle,$ and we need $\bar{\mu}_\rho\in {{\boldsymbol{k}}}.$ Here is a well-known fact that will take care of this: \begin{lemma}\label{zariskitop} Let $k \subseteq k'$ be a field extension, and $g(x_0,\dots,x_n)$ a nonzero polynomial over $k'$. Then there is a nonzero polynomial $f(x_0,\dots,x_n)$ over $k$ such that whenever $y_0,\dots,y_n\in k$ and $f(y_0,\dots,y_n)\ne 0$, then $g(y_0,\dots,y_n) \ne 0$. \end{lemma} \begin{proof} Using a basis $b_1,\dots,b_m$ of the $k$-vector subspace of $k'$ generated by the coefficients of $g$, we have $g = b_1 f_1 + \dots + b_mf_m$ with $f_1,\dots,f_m\in k[x_0,\dots,x_n]$. Let $f$ be one of the nonzero $f_i$'s. Then $f$ has the required property. 
\end{proof} \noindent Consider an $m\ge 1$ with nonzero $G_m$, and define $$ g_{m,\rho}(x_0,\dots,x_n):= g_m(x_0+d_\rho,\dots,x_n+\sigma^n(d_\rho)).$$ Then the reduced polynomial $$\bar{g}_{m,\rho}(x_0,\dots,x_n)= \bar{g}_m(x_0+\bar{d}_\rho,\dots,x_n+\bar{\sigma}^n(\bar{d}_\rho))$$ is also nonzero for each $\rho$. By the lemma above we can pick a nonzero polynomial $f_{\rho}(x_0,\dots,x_n)\in {{\boldsymbol{k}}}[x_0,\dots,x_n]$ such that if $y\in \mathcal{O}$ and $f_{\rho}\big(\bar{\boldsymbol{\sigma}}(\bar{y})\big)\ne 0$, then $\bar{g}_{m,\rho}\big(\bar{\boldsymbol{\sigma}}(\bar{y})\big) \ne 0$ for each $m\ge 1$ with $G_m\ne 0$. \noindent {\bf Conclusion:} if for each $\rho$ the element $\mu_\rho\in \mathcal{O}$ satisfies $\bar{\mu}_\rho\ne 0$, $\bar{\mu}_\rho + \bar{d}_\rho \ne 0$, and $f_{\rho}\big(\bar{\boldsymbol{\sigma}}(\bar{\mu}_\rho)\big)\ne 0$, then all constraints on $\{\mu_\rho\}$ are met. \noindent Axiom 3 allows us to meet these constraints, even if instead of a single $G$ of order $\le n$ we have finitely many nonconstant $\sigma$-polynomials $G(x)$ of order $\le n$ and we have to meet simultaneously the constraints coming from each of those $G$'s. This leads to: \begin{theorem}\label{adjustment1} Suppose $\ca{K}$ satisfies Axioms $2$ and $3$. Suppose $\{a_\rho\}$ in $K$ is a pc-sequence and $a_\rho \leadsto a$ in an extension. Let $\Sigma$ be a finite set of $\sigma$-polynomials $G(x)$ over $K$. Then there is a pc-sequence $\{b_\rho\}$ from $K$, equivalent to $\{a_\rho\},$ such that $G(b_\rho) \leadsto G(a)$ for each nonconstant $G$ in $\Sigma.$ \end{theorem} \begin{corollary} The same result holds with $a$ removed, asking only that $\{G(b_\rho)\}$ is a pc-sequence. \end{corollary} \begin{proof} Put in an $a$ from an elementary extension. \end{proof} \noindent {\bf Refinement of the Basic Calculation.} The following improvement of the basic calculation will be needed later on.
Some minor differences with \cite{BMS} arise because we shall use this in combination with a simpler notion of ``pc-sequence of $\sigma$-algebraic type'' and we do not assume $\ca{K}$ is unramified. \begin{theorem} \label{crucial.result.nonwitt} Suppose $\ca{K}$ satisfies Axioms $2$ and $3$. Let $\{a_\rho\}$ be a pc-sequence from $K$ and let $a$ in some extension of $\ca{K}$ be such that $a_\rho \leadsto a$. Let $G(x)$ be a $\sigma$-polynomial over $K$ such that \begin{enumerate} \item[(i)] $G(a_\rho) \leadsto 0$, \item[(ii)] $ G_{(\l)}(b_\rho) \not\leadsto 0$ whenever $|\l|\geq 1$ and $\{b_\rho\}$ is a pc-sequence in $K$ equivalent to $\{a_\rho\}$. \end{enumerate} Let $\Sigma$ be a finite set of $\sigma$-polynomials $H(x)$ over $K$. Then there is a pc-sequence $\{b_\rho\}$ in $K$, equivalent to $\{a_\rho\}$, such that $G(b_\rho) \leadsto 0$, and $H(b_\rho) \leadsto H(a)$ for every nonconstant $H$ in $\Sigma$. \end{theorem} \begin{proof} By augmenting $\Sigma$ we can assume that $G_{(\l)}\in \Sigma$ for all $\l$. Take $n$ such that all $H\in \Sigma$ have order $\le n$. Let $\{\theta_\rho\}$ and $\{d_\rho\}$ be as before. Then the {\em basic calculation} constructs nonzero polynomials $f_\rho\in {{\boldsymbol{k}}}[x_0,\dots,x_n]$ such that if $\{\mu_\rho\}$ satisfies the constraints $$\mu_\rho\in \mathcal{O},\ \bar{\mu}_\rho\ne 0,\ \bar{\mu}_\rho + \bar{d}_\rho \ne 0,\ f_{\rho}\big(\bar{\boldsymbol{\sigma}}(\bar{\mu}_\rho)\big)\ne 0,$$ then, setting $b_\rho:= a_\rho + \theta_\rho \mu_\rho$, we have: $$H(b_\rho) \leadsto H(a)\text{ for each nonconstant } H\in \Sigma .$$ Let $\{\mu_\rho\}$ satisfy these constraints, and define $\{b_\rho\}$ accordingly. Using Axiom 3 we shall be able to constrain $\{\mu_\rho\}$ further to achieve also $G(b_\rho) \leadsto 0$.
We have \begin{eqnarray*} G(a_\rho)&=&G(b_\rho-\theta_\rho \mu_\rho)\\ &=&G(b_\rho)+\sum\limits_{m\geq 1}\sum\limits_{|\l|=m} G_{(\l)}(b_\rho)\cdot\boldsymbol{\sigma}(-\theta_\rho \mu_\rho)^{\l}\\ &=&G(b_\rho)+\sum\limits_{m\geq 1}(-\theta_\rho)^m\cdot\sum\limits_{|\l|=m} G_{(\l)}(b_\rho) \boldsymbol{\sigma}(\mu_\rho)^{\l}\\ &=&G(b_\rho)+\sum\limits_{m\geq 1}(-\theta_\rho)^m H_{m,\rho}(\mu_\rho) \end{eqnarray*} where $H_{m,\rho}$ is the $\sigma$-polynomial over $K$ defined by $$ H_{m,\rho}(x)= \sum\limits_{|\l|=m} G_{(\l)}(b_\rho)\cdot \boldsymbol{\sigma}(x)^{\l}.$$ For $|\l|\ge 1$ we have $G_{(\l)}(b_\rho) \not \leadsto 0$ and, provided $G_{(\l)}\notin K$, $G_{(\l)}(b_\rho) \leadsto G_{(\l)}(a)$. Hence, for $|\l|\ge 1$ we have $G_{(\l)}(b_\rho)=G_{(\l)}(a)(1+\epsilon_{\l,\rho})$, eventually in $\rho$, where $v(\epsilon_{\l,\rho})>0$, and for our purpose we may assume that this holds for {\em all\/} $\rho$. Thus $H_{m,\rho}(x)= G_{m}(x) + \epsilon_{m,\rho}(x)$ where $G_m$ is as in {\em the basic calculation\/}: $$G_{m}(x) = \sum\limits_{|\l|=m} G_{(\l)}(a)\cdot \boldsymbol{\sigma}(x)^{\l}, \quad \epsilon_{m,\rho}(x) = \sum\limits_{|\l|=m} G_{(\l)}(a)\epsilon_{\l,\rho}\cdot \boldsymbol{\sigma}(x)^{\l}. $$ We put $\gamma_{\l}:=v\big(G_{(\l)}(a)\big)$, and restrict $m$ in what follows to be $\ge 1$ with $G_m\ne 0$. For each $m$ we pick $\l=\l(m)$ with $|\l|=m$ for which $\gamma_{\l}$ is minimal, so $G_m(x)= G_{(\l)}(a)\cdot g_m\big(\boldsymbol{\sigma}(x)\big)$ where $g_m(x_0,\dots,x_n)$ has its coefficients in the valuation ring of $K\langle a \rangle$, with one of its coefficients equal to $1$. As in {\em the basic calculation\/} we now impose the constraint on $\{\mu_\rho\}$ that for each $m$, $$ v\big(g_m(\boldsymbol{\sigma}(\mu_\rho))\big)=0 \qquad \text{(all $\rho$)}. $$ Then $v\big((-\theta_\rho)^mH_{m,\rho}(\mu_\rho)\big)= m\gamma_\rho + \gamma_{\l(m)}$. 
Take the unique $m_0\ge 1$ with $G_{m_0}\ne 0$ such that if $m\ne m_0$, then $$m_0\gamma_\rho + \gamma_{\l(m_0)} < m\gamma_\rho + \gamma_{\l(m)}, \quad \text{(eventually in $\rho$)}.$$ Again we can assume this holds for all $\rho$. Then for all $\rho$: \begin{align*} v\big(&\sum\limits_{m}(-\theta_\rho)^m H_{m,\rho}(\mu_\rho)\big)= m_0\gamma_\rho + \gamma_{\l(m_0)},\\ G(b_\rho)= G(a_\rho) - &\sum\limits_{m}(-\theta_\rho)^m H_{m,\rho}(\mu_\rho). \end{align*} It follows that if $\rho$ is such that $v\big(G(a_\rho)\big) \ne m_0\gamma_\rho + \gamma_{\l(m_0)}$, then $$ v\big(G(b_\rho)\big)= \min\left\{v\big(G(a_\rho)\big), m_0\gamma_\rho + \gamma_{\l(m_0)}\right\}.$$ We shall make this true for {\em all\/} $\rho$. Suppose that $v\big(G(a_\rho)\big) = m_0\gamma_\rho + \gamma_{\l(m_0)}$. Then $$G(b_\rho)= G(a_\rho)\cdot\big(1- c_{\rho}g_{m_0}(\boldsymbol{\sigma}(\mu_\rho))+ \epsilon_\rho\big)$$ where $c_{\rho}=(-\theta_\rho)^{m_0} G_{(\l)}(a)/G(a_\rho)$, ($\l=\l(m_0)$), and $v(\epsilon_\rho) >0$. Note that $v(c_{\rho})=0$ and that $g_{m_0}$ is homogeneous of degree $m_0 >0$. This leads to our final constraint on $\{\mu_\rho\}$: for each $\rho$ such that $v\big(G(a_\rho)\big) = m_0\gamma_\rho + \gamma_{\l(m_0)}$ we impose $$ 1- \bar{c}_{\rho}\bar{g}_{m_0} \big(\bar{\boldsymbol{\sigma}}(\bar{\mu}_\rho)\big)\ne 0.$$ Then $v\big(G(b_\rho)\big)= \min\left\{v\big(G(a_\rho)\big), m_0\gamma_\rho + \gamma_{\l(m_0)}\right\}$ for all $\rho$, and thus $\left\{v\big(G(b_\rho)\big)\right\}$ is eventually strictly increasing. \end{proof} \end{section} \section{The Witt case}\label{witt} \noindent Let $\ca{K}=(K, \Gamma, {{\boldsymbol{k}}}; v, \pi)$ be a valued difference field, satisfying Axiom 1 as usual. Assume that $\text{char}(K)=0$, $\text{char}({{\boldsymbol{k}}})=p>0$, ${{\boldsymbol{k}}}$ is perfect, that $\Gamma$ has a least positive element $1$ with $v(p)=1$, and, finally, that $\bar{\sigma}(y)=y^p$ on ${{\boldsymbol{k}}}$. We call this the {\em Witt case\/} (for $p$).
These assumptions are satisfied by the Witt difference field $\operatorname{W}({{\boldsymbol{k}}})$. \noindent Axiom 3 fails in the Witt case, but we shall adjust the basic calculation and its refinement to deal with this. As in \cite{BMS} we use the formalism of $\partial$-rings from \cite{joyal}. \noindent {\bf $\partial$-rings.} Let $\partial_0: \ca{O} \to \ca{O}$ be the identity map, and define $$\partial_1: \ca{O} \to \ca{O}, \qquad \partial_1(x):=\displaystyle{\frac{\sigma(x)-x^p}{p}}.$$ Usually $\partial_1$ is written as $\partial$; it satisfies the axioms for a $p$-derivation on $\ca{O}$, namely \begin{eqnarray*} \partial(1)&=&0,\\ \partial(x+y)&=&\partial(x)+\partial(y)-\sum\limits_{i=1}^{p-1} a(p,i) x^i y^{p-i}, \quad a(p,i):={p\choose i}/p, \\ \partial(xy)&=&x^p\partial(y)+y^p\partial(x)+p\partial(x)\partial(y). \end{eqnarray*} A $\partial$-ring is a commutative ring with $1$ equipped with a unary operation $\partial$ satisfying the above identities. For the basic facts on $\partial$-rings used below, see \cite{joyal}. Because $\ca{O}$ is a $\partial$-ring, there is a unique sequence of unary operations $\partial_0,\partial_1,\partial_2,\ldots: \ca{O} \to \ca{O}$ with $\partial_0,\partial_1$ as above such that for all $a\in \mathcal{O}$ and all $n$, \begin{align*} \sigma^n(a)&= W_n(\partial_0(a),\dots, \partial_n(a)),\\ W_n(x_0,\dots, x_n) &:= x_0^{p^n} + px_1^{p^{n-1}}+ \cdots + p^nx_n\in \mathbb{Z}[x_0,\dots, x_n]. \end{align*} Recall that addition of Witt vectors \cite{serre} is given in terms of polynomials \begin{align*} S_n\in \mathbb{Z}[y_0,\dots,y_n, z_0, \dots,z_n]\ &\text{ such that}\\ W_n(y_0,\dots, y_n)+ W_n(z_0,\dots, z_n)&=W_n(S_0,\dots, S_n), \end{align*} and accordingly, $\partial_n(a+b)= S_n\big(\partial_0(a),\dots, \partial_n(a), \partial_0(b),\dots, \partial_n(b)\big)$ for all $a,b\in \ca{O}$. 
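For orientation, the simplest instance of a $\partial$-ring is $\mathbb{Z}$ with $\sigma=\operatorname{id}$, where $\partial(x)=(x-x^p)/p$ is integral by Fermat's little theorem. The following Python sketch (the prime $p=5$ and the sample range are our illustrative choices) checks the three defining $p$-derivation identities in this degenerate case.

```python
from math import comb

p = 5  # illustrative prime; sigma = id on Z gives partial(x) = (x - x**p) / p

def d(x):
    """The p-derivation on Z induced by sigma = id."""
    q, r = divmod(x - x**p, p)  # exact division, by Fermat's little theorem
    assert r == 0
    return q

# the three defining identities of a p-derivation, with a(p, i) = comb(p, i)/p
assert d(1) == 0
for x in range(-3, 4):
    for y in range(-3, 4):
        assert d(x + y) == d(x) + d(y) - sum(
            (comb(p, i) // p) * x**i * y ** (p - i) for i in range(1, p)
        )
        assert d(x * y) == x**p * d(y) + y**p * d(x) + p * d(x) * d(y)
```

Note that $a(p,i)=\binom{p}{i}/p$ is an integer for $0<i<p$, which is why `comb(p, i) // p` is exact; the same divisibility is what makes the identities well defined over $\ca{O}$.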
In $\operatorname{W}[{{\boldsymbol{k}}}]$, the ${\partial}_n$ yield the {\em components} of Witt vectors, namely, each $a\in \operatorname{W}[{{\boldsymbol{k}}}]$ equals $\big(\overline{\partial_0(a)},\overline{\partial_1(a)},\overline{\partial_2(a)},\ldots\big).$ In our Witt case, $\ca{O}/p^{n+1}\ca{O}\ \cong\ \operatorname{W}[{{\boldsymbol{k}}}]/(p^{n+1})$: \begin{lemma}\label{del.surjective} Identifying the vectors $(a_0,\dots,a_n)\in {{\boldsymbol{k}}}^{n+1}$ with the elements of $\operatorname{W}[{{\boldsymbol{k}}}]/(p^{n+1})$ in the usual way, we have a surjective ring morphism $$\ca{O} \to \operatorname{W}[{{\boldsymbol{k}}}]/(p^{n+1}),\quad a\mapsto \big(\overline{\partial_0(a)}, \overline{\partial_1(a)},\ldots,\overline{\partial_n(a)}\big) $$ with kernel $p^{n+1}\ca{O}$. \end{lemma} \noindent A difference with \cite{BMS} is our use of the following in proving Theorem~\ref{adjustment2}. \begin{lemma}\label{sum} Let $g\in \ca{O}[y_0,\dots,y_n]$ be such that its image $\bar{g}\in {{\boldsymbol{k}}}[y_0,\dots,y_n]$ is nonzero. Then there is a $g^*\in \ca{O}[y_0,\dots,y_n,z_0,\dots,z_n]$ such that for all $a,b\in \ca{O}$, $$g\big(\partial_0(a+b),\dots, \partial_n(a+b)\big)= g^*\big(\partial_0(a),\dots, \partial_n(a), \partial_0(b),\dots, \partial_n(b)\big),$$ and the image of $g^*\big(y_0,\dots,y_n,\partial_0(b),\dots, \partial_n(b)\big)$ in ${{\boldsymbol{k}}}[y_0,\dots,y_n]$ is nonzero. \end{lemma} \begin{proof} With the $S_n$ as above, put $g^*:= g(S_0,\dots, S_n)$. Then the displayed identity holds. Let $b\in \ca{O}$ and put $h:= g^*\big(y_0,\dots,y_n, \partial_0(b),\dots, \partial_n(b)\big)\in \ca{O}[y_0,\dots,y_n]$. In order to show that its image $\bar{h}$ in ${{\boldsymbol{k}}}[y_0,\dots,y_n]$ is nonzero, we can assume that ${{\boldsymbol{k}}}$ is infinite (passing to a suitable Witt extension of $K$ if necessary). Take $c_0,\dots,c_n\in {{\boldsymbol{k}}}$ such that $\bar{g}(c_0,\dots,c_n)\ne 0$.
By Lemma~\ref{del.surjective}, $(c_0,\dots,c_n)=(\overline{\partial_0(x)}, \overline{\partial_1(x)},\ldots,\overline{\partial_n(x)})$ for a suitable $x\in \ca{O}$. Let $a:= x-b$. Then by the above, $$g\big(\partial_0(x),\dots,\partial_n(x)\big)= h\big(\partial_0(a),\dots, \partial_n(a)\big),$$ with image $\bar{g}(c_0,\dots,c_n)\ne 0$ in ${{\boldsymbol{k}}}$. Thus $\bar{h}\ne 0$. \end{proof} \noindent {\bf The $D$-transform.} In analogy with $\boldsymbol{\sigma}$ and $\bar{\boldsymbol{\sigma}}$, we sometimes write $\boldsymbol{\partial}(a)$ for $(\partial_0(a),\ldots,\partial_n(a)),$ and $\bar{\boldsymbol{\partial}}(a)$ for $\big(\overline{\partial_0(a)},\ldots,\overline{\partial_n(a)}\big)$ for $a$ in the valuation ring of a Witt extension. Thus $\boldsymbol{\sigma}(a)=D(\boldsymbol{\partial}(a))$ for all such $a$, where $$D(y_0,\ldots,y_n)=(y_0,y_0^p+py_1,\ldots,y_0^{p^n}+py_1^{p^{n-1}}+ \dots+p^ny_n).$$ \noindent Let $F\in K[x_0,\dots,x_n]$ be homogeneous of degree $m>0$, and consider its $D$-transform $F\big(D(y_0,\dots,y_n)\big)\in K[y_0,\dots,y_n]$. This $D$-transform is not in general homogeneous, but its constant term is zero and it has total degree $\le mp^n$. Write \begin{align*} F(x_0,\dots,x_n)&= \sum_{|\l|=m}a_{\l}\boldsymbol{x}^{\l},\quad (\text{all }a_{\l}\in K),\\ F\big(D(y_0,\dots,y_n)\big)&= \sum_{1\le |\j|\le mp^n} b_{\j}\boldsymbol{y}^{\j}, \quad (\text{all }b_{\j}\in K). \end{align*} To express how the $b_{\j}$ depend on the $a_{\l}$ we introduce a tuple $(x_{\l})$ of new variables, indexed by the $\l$ with $|\l|=m$. \begin{lemma}\label{lin} $b_{\j}=\Lambda_{\j,m}\big((a_{\l})\big)$ where $\Lambda_{\j,m}\in \mathbb{Z}[(x_{\l})]$ is homogeneous of degree $1$ and depends only on $\j,m$ and $p$, not on $K$ or $F$. 
\end{lemma} \noindent {\bf Adjusting the Basic Calculation to the Witt Case.} We now revisit the {\em basic calculation\/} with the assumption that $\ca{K}=(K, \Gamma, {{\boldsymbol{k}}}; v, \pi)$ is a Witt case satisfying Axiom 2, with infinite ${{\boldsymbol{k}}}$, and that $a$ lies in a Witt case extension. As before $G(x)$ is a nonconstant $\sigma$-polynomial over $K$ of order $\le n$, and we have the constraint on the sequence $\{\mu_\rho\}$ in $\ca{O}$ that $$v(\mu_\rho)= v(\mu_\rho + d_\rho)=0 \quad \text{for all } \rho.$$ We now pick up the calculation at the point where $m\ge 1$, $G_m\ne 0$, and $$v\big(\theta_\rho^mG_m(\mu_\rho+d_\rho)\big)= m\gamma_\rho + v(G_{(\l)}(a)) + v\big(g_m(\boldsymbol{\sigma}(\mu_\rho+d_\rho))\big).$$ Now $g_m\big(\boldsymbol{\sigma}(\mu_\rho+d_\rho)\big)= g_m\big(D(\boldsymbol{\partial}(\mu_\rho+d_\rho))\big)$, and the polynomial \\ $g_m\big(D(y_0,\dots,y_n)\big)$ over $K\langle a \rangle$ is nonzero, since $D$ defines (in characteristic $0$) a generically surjective map from affine $(n+1)$-space to itself. We have $$g_{m}(D(y_0,\dots,y_n))= \lambda_{m}\cdot g_{m}^D(y_0,\dots,y_n), \quad 0\ne \lambda_{m}\in K\langle a \rangle, $$ where the polynomial $g_{m}^D(y_0,\dots,y_n)$ is over the valuation ring of $K\langle a\rangle$ and has a coefficient equal to 1. Thus $$ v\big(\theta_\rho^mG_m(\mu_\rho+d_\rho)\big)= m\gamma_\rho + v(G_{(\l)}(a)) + v(\lambda_m) + v\big(g_m^D(\boldsymbol{\partial}(\mu_\rho + d_\rho))\big).$$ This suggests that we constrain $\{\mu_\rho\}$ such that for each $m\ge 1$ with $G_m\ne 0$, $$ v\big(g_m^D(\boldsymbol{\partial}(\mu_\rho + d_\rho))\big)=0 \qquad \text{(eventually in $\rho$)}. $$ If this constraint is met, then $G(b_\rho) \leadsto G(a)$ as in the original calculation. 
We can meet the constraint as follows: Lemma~\ref{sum} applied to $g_m^D$ yields for each $\rho$ a polynomial $g_{m,\rho}(y_0,\dots,y_n)$ over the valuation ring of $K\langle a\rangle$, with nonzero reduction $\bar{g}_{m,\rho}$, such that $g_{m,\rho}(\boldsymbol{\partial}(c))= g_m^D\big(\boldsymbol{\partial}(c + d_\rho)\big)$ for all $c$ in the valuation ring of $K\langle a\rangle$. Then by Lemma~\ref{zariskitop} we can pick for each $\rho$ a nonzero polynomial $f_\rho(y_0,\dots,y_n)\in {{\boldsymbol{k}}}[y_0,\dots,y_n]$ such that if $c_0,\dots,c_n\in \ca{O}$ and $f_{\rho}\big(\bar{c}_0,\dots,\bar{c}_n\big)\ne 0$, then $v\big(g_{m,\rho}(c_0,\dots,c_n)\big) = 0$ for all $m\ge 1$ such that $G_m\ne 0$. \noindent {\bf Conclusion:} if for each $\rho$ the element $\mu_\rho\in \ca{O}$ satisfies $\bar{\mu}_\rho\ne 0$, $\bar{\mu}_\rho + \bar{d}_\rho\ne 0$, and $f_{\rho}\big(\bar{\boldsymbol{\partial}}(\mu_\rho)\big)\ne 0$, then all constraints on $\{\mu_\rho\}$ are met. \noindent Using Lemma~\ref{del.surjective}, it follows that all the constraints can be met. Thus: \begin{theorem}\label{adjustment2} Suppose $\ca{K}$ satisfies Axiom $2$ and is a Witt case with infinite ${{\boldsymbol{k}}}$. Suppose $\{a_\rho\}$ is a pc-sequence from $K$, and $a_\rho \leadsto a$ in a Witt case extension. Then the conclusion of Theorem~\ref{adjustment1} holds, as does the Corollary to Theorem~\ref{adjustment1}. \end{theorem} \noindent {\bf Adjusting the Refinement of the Basic Calculation to the Witt Case.} Let $\ca{K}$ be a Witt case with ${{\boldsymbol{k}}}$ of characteristic $p$. For a $\sigma$-polynomial $G(x)$ over $K$ of order $\le n$ and $a\in K$ we set $$G(m,x):=\big(G_{(\l)}(x)\big)_{|\l|=m}, \quad G(m,a):=\big(G_{(\l)}(a)\big)_{|\l|=m}.$$ Note that if $G$ is nonconstant, then $\Lambda_{\j,m}\big(G(m,x)\big)$ has lower complexity than $G$ for $1 \le m\le \deg G$, $\j\in \mathbb{N}^{n+1}$, $1\le |\j| \le mp^n$, where $\Lambda_{\j,m}$ is as in Lemma~\ref{lin}. 
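\noindent The content of Lemma~\ref{lin} can be observed in a tiny instance. Take $p=2$, $n=1$, $m=2$, so $D(y_0,y_1)=(y_0,\,y_0^2+2y_1)$. The following Python sketch (our illustration only; the sparse-polynomial helpers are ad hoc) expands $F\big(D(y_0,y_1)\big)$ for $F=a_{20}x_0^2+a_{11}x_0x_1+a_{02}x_1^2$ and checks the facts used above: zero constant term, total degree at most $mp^n=4$, and coefficients $b_{\j}$ that are fixed $\mathbb{Z}$-linear forms in $(a_{20},a_{11},a_{02})$:

```python
from collections import defaultdict
from itertools import product

p, n = 2, 1  # D(y0, y1) = (y0, y0^p + p*y1), so m*p^n = 4 for m = 2

def pmul(f, g):
    # multiply sparse polynomials given as {exponent-tuple: coefficient}
    h = defaultdict(int)
    for (e1, c1), (e2, c2) in product(f.items(), g.items()):
        h[tuple(i + j for i, j in zip(e1, e2))] += c1 * c2
    return {e: c for e, c in h.items() if c}

def ppow(f, k):
    r = {(0, 0): 1}
    for _ in range(k):
        r = pmul(r, f)
    return r

D = [{(1, 0): 1}, {(2, 0): 1, (0, 1): p}]  # components of the D-map

def transform(a20, a11, a02):
    # coefficients of F(D(y0, y1)) for F = a20*x0^2 + a11*x0*x1 + a02*x1^2
    h = defaultdict(int)
    for coeff, (i, j) in [(a20, (2, 0)), (a11, (1, 1)), (a02, (0, 2))]:
        for e, c in pmul(ppow(D[0], i), ppow(D[1], j)).items():
            h[e] += coeff * c
    return {e: c for e, c in h.items() if c}

b = transform(1, 1, 1)
assert (0, 0) not in b                       # zero constant term
assert max(i + j for i, j in b) <= 2 * p**n  # total degree <= m*p^n
# each coefficient b_j is a fixed Z-linear form in (a20, a11, a02)
units = [transform(1, 0, 0), transform(0, 1, 0), transform(0, 0, 1)]
for coeffs in [(3, -2, 5), (0, 7, 1)]:
    want = defaultdict(int)
    for c0, u in zip(coeffs, units):
        for e, c in u.items():
            want[e] += c0 * c
    assert transform(*coeffs) == {e: c for e, c in want.items() if c}
print("D-transform checks pass for p=2, n=1, m=2")
```

Here, for instance, $x_1^2\mapsto (y_0^2+2y_1)^2=y_0^4+4y_0^2y_1+4y_1^2$, and the integer matrix of these expansions is exactly the family $\Lambda_{\j,m}$, independent of $K$ and $F$.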
\begin{theorem} \label{crucial.result.witt} Suppose that $\ca{K}$ satisfies Axiom $2$, and is a Witt case with infinite ${{\boldsymbol{k}}}$ of characteristic $p$. Let $\{a_\rho\}$ be a pc-sequence from $K$ and $a_\rho \leadsto a$ with $a$ in a Witt case extension. Let $G(x)$ be a $\sigma$-polynomial over $K$ of order $\le n$ so that \begin{enumerate} \item[(i)] $G(a_\rho) \leadsto 0$; \item[(ii)] $\Lambda_{\j,m}(G(m,b_\rho)) \not\leadsto 0$ whenever $1 \le m\le \deg G$, $\j\in \mathbb{N}^{n+1}$, $1\le |\j| \le mp^n$, and $\{b_\rho\}$ is a pc-sequence in $K$ equivalent to $\{a_\rho\}$. \end{enumerate} Let $\Sigma$ be a finite set of $\sigma$-polynomials $H(x)$ over $K$. Then there is a pc-sequence $\{b_\rho\}$ from $K$, equivalent to $\{a_\rho\}$, such that $G(b_\rho) \leadsto 0$, and $H(b_\rho) \leadsto H(a)$ for each nonconstant $H\in \Sigma$. \end{theorem} \begin{proof} We can assume that $\Lambda_{\j,m}\big(G(m,x)\big)\in \Sigma$ whenever $1 \le m\le \deg G$ and $1\le |\j| \le mp^n$. By increasing $n$ if necessary we arrange that all $H(x)$ in $\Sigma$ have order $\le n$. Let $\{\theta_\rho\}$ and $\{d_\rho\}$ be as before. Then the {\em adjustment of the basic calculation to the Witt case\/} constructs nonzero polynomials $f_\rho\in {{\boldsymbol{k}}}[y_0,\dots,y_n]$ such that if $\{\mu_\rho\}$ satisfies the constraints $$\mu_\rho\in \ca{O},\ \bar{\mu}_\rho\ne 0,\ \bar{\mu}_\rho + \bar{d}_\rho \ne 0,\ f_{\rho}\big(\bar{\boldsymbol{\partial}}(\mu_\rho)\big)\ne 0,$$ then, setting $b_\rho:= a_\rho + \theta_\rho \mu_\rho$, we have: $$H(b_\rho) \leadsto H(a), \text{ for each nonconstant }H\in \Sigma .$$ Let $\{\mu_\rho\}$ satisfy these constraints, and define $\{b_\rho\}$ accordingly. 
Proceeding as in the {\em refinement of the basic calculation} we have $$ G(a_\rho)\ =\ G(b_\rho)+\sum\limits_{m\geq 1}(-\theta_\rho)^m H_{m,\rho}(\mu_\rho)$$ where $H_{m,\rho}$ is the $\sigma$-polynomial over $K$ defined by $$ H_{m,\rho}(x)= \sum\limits_{|\l|=m} G_{(\l)}(b_\rho)\cdot \boldsymbol{\sigma}(x)^{\l}.$$ Let $1 \le m\le \deg G$, and put $b_{\j,m,\rho}:= \Lambda_{\j,m}(G(m,b_\rho))$ for $1\le |\j|\le mp^n$. Then by Lemma~\ref{lin} and the facts stated just before it, \begin{align*} H_{m,\rho}(\mu_\rho)&= B_{m,\rho}\big(\boldsymbol{\partial}(\mu_\rho)\big), \text{ where}\\ B_{m,\rho}(\boldsymbol{y}):&= \sum_{1\le |\j|\le mp^n} b_{\j,m,\rho}\boldsymbol{y}^{\j} \in K[y_0,\dots,y_n]. \end{align*} For $1\le |\j|\le mp^n$ we put $b_{\j,m}:= \Lambda_{\j,m}(G(m,a))$, and we may assume by (ii) that $b_{\j,m,\rho}=b_{\j,m}(1+\epsilon_{\j,m,\rho})$ with $v(\epsilon_{\j,m,\rho})>0$, for all $\rho$. Thus \begin{align*}B_{m,\rho}(\boldsymbol{y})&= B_{m}(\boldsymbol{y}) + \epsilon_{m,\rho}(\boldsymbol{y}), \quad \text{ where}\\ B_{m}(\boldsymbol{y}):&=\sum_{1\le |\j|\le mp^n} b_{\j,m}\boldsymbol{y}^{\j}, \qquad \epsilon_{m,\rho}(\boldsymbol{y}) = \sum\limits_{1\le |\j|\le mp^n} b_{\j,m}\epsilon_{\j,m,\rho}\cdot \boldsymbol{y}^{\j}. \end{align*} In the rest of the argument, $m$ ranges over the natural numbers $1,\dots,\deg G$ such that $B_m\ne 0$; put $\gamma_{\j,m}:= v(b_{\j,m})$ for $1\le |\j|\le mp^n$. Pick $\j(m)\in \mathbb{N}^{n+1}$ with $1\le |\j(m)|\le mp^n$ such that $\gamma_{\j(m),m}= \min\{\gamma_{\j,m}:1\le |\j|\le mp^n\}$. Then $$H_{m,\rho}(\mu_\rho)= b_{\j(m),m}\cdot \left(h_{m}\big(\boldsymbol{\partial}(\mu_\rho)\big)+ \delta_{m,\rho}\right), $$ where $v(\delta_{m,\rho})>0$, and $h_{m}\in \ca{O}[y_0,\dots,y_n]$ has constant term zero and a coefficient equal to $1$. 
We now constrain $\{\mu_\rho\}$ further by demanding, for each $m$ and $\rho$, $$ \bar{h}_{m}\big(\bar{\boldsymbol{\partial}}(\mu_\rho)\big)\ne 0.$$ Then $v\big((-\theta_\rho)^m H_{m,\rho}(\mu_\rho)\big)= m\gamma_\rho + \gamma_{\j(m),m}$. Take the unique $m_0\ge 1$ with $B_{m_0}\ne 0$ such that if $m\ne m_0$, then $$m_0\gamma_\rho + \gamma_{\j(m_0),m_0} < m\gamma_\rho + \gamma_{\j(m),m}, \quad \text{(eventually in $\rho$)}.$$ Again we can assume this holds for all $\rho$. Then for all $\rho$: \begin{align*} v\big(&\sum\limits_{m}(-\theta_\rho)^m H_{m,\rho}(\mu_\rho)\big)= m_0\gamma_\rho + \gamma_{\j(m_0),m_0},\\ G(b_\rho)=G(a_\rho)-&\sum\limits_{m}(-\theta_\rho)^m H_{m,\rho}(\mu_\rho). \end{align*} Hence, if $\rho$ is such that $v\big(G(a_\rho)\big) \ne m_0\gamma_\rho + \gamma_{\j(m_0),m_0}$, then $$ v\big(G(b_\rho)\big)= \min\left\{v\big(G(a_\rho)\big), m_0\gamma_\rho + \gamma_{\j(m_0),m_0}\right\}.$$ We shall make this true for {\em all\/} $\rho$. Suppose that $v\big(G(a_\rho)\big) = m_0\gamma_\rho + \gamma_{\j(m_0),m_0}$. Then $$G(b_\rho)= G(a_\rho)\cdot\big(1- c_{\rho}h_{m_0}(\boldsymbol{\partial}(\mu_\rho))+ \epsilon_\rho\big)$$ where $c_{\rho}=(-\theta_\rho)^{m_0}\cdot b_{\j(m_0),m_0}/G(a_\rho)$, and $v(\epsilon_\rho) >0$. Note that $v(c_{\rho})=0$ and that $\bar{h}_{m_0}$ is nonzero but has constant term zero. This leads to our final constraint on $\{\mu_\rho\}$: for each $\rho$ such that $v\big(G(a_\rho)\big) = m_0\gamma_\rho + \gamma_{\j(m_0),m_0}$ we impose $$ 1- \bar{c}_{\rho}\bar{h}_{m_0} \big(\bar{\boldsymbol{\partial}}(\mu_\rho)\big)\ne 0.$$ If all constraints hold, then $G(b_\rho) \leadsto 0$. Using Lemma~\ref{del.surjective} we can meet all constraints. \end{proof} \section{Newton--Hensel Approximation}\label{newtonhensel1} \noindent Let $\ca{K}=(K, \Gamma, {{\boldsymbol{k}}}; v, \pi)$ be a valued difference field, satisfying Axiom 1 of course. 
Until Definition~\ref{shensel1} we fix a $\sigma$-polynomial $G$ over $\ca{O}$ of order $\le n$, and let $a \in \ca{O}$. \begin{definition} $G$ is {\em $\sigma$-henselian at $a$\/} if $v\big(G(a)\big) >0$ and $\min\limits_{|\i|=1}v\big(G_{(\i)}(a)\big)=0$. \end{definition} \noindent The coefficients of all $G_{(\i)}$ are in $\ca{O}$. Hence, if $G$ is $\sigma$-henselian at $a$, and $b\in \ca{O}$ satisfies $v(a-b)>0$, then $G$ is also $\sigma$-henselian at $b$. If $G$ is $\sigma$-henselian at $a$ and $G(a) \neq 0$, does there exist $b\in \ca{O}$ such that $v(a-b)>0$ and $v(G(b))>v(G(a))$? To get a positive answer we use an additional assumption on ${{\boldsymbol{k}}}$: \noindent {\bf Axiom $4_n$.} Each inhomogeneous linear $\bar{\sigma}$-equation $$1+\alpha_0x + \dots + \alpha_n\bar{\sigma}^n(x)=0 \qquad(\text{all }\alpha_i\in {{\boldsymbol{k}}}, \text{ some }\alpha_i\ne 0),$$ has a solution in ${{\boldsymbol{k}}}$. (And we say that $\ca{K}$ satisfies Axiom $4_n$ if ${{\boldsymbol{k}}}$ does.) \begin{lemma}\label{newton} Suppose $\ca{K}$ satisfies Axiom $4_n$ and $G$ is $\sigma$-henselian at $a$, with $G(a)\ne 0$. Then there is $b \in \ca{O}$ such that $v(a-b)\ge v\big(G(a)\big)$ and $v\big(G(b)\big)>v\big(G(a)\big)$. For any such $b$ we have $v(a-b)=v\big(G(a)\big)$ and $G$ is $\sigma$-henselian at $b$. \end{lemma} \begin{proof} Let $b=a + G(a)u$ where $u \in \ca{O}$ is to be determined later. Then $$G(b)=G(a)+\sum\limits_{|\i|\geq 1}G_{(\i)}(a)\cdot\boldsymbol{\sigma}(G(a)u)^{\i}.$$ Extracting a factor $G(a)$ and using Axiom 1 it follows that $$G(b)= G(a)\cdot\big( 1+\sum_{|\i|=1} c_{\i}\boldsymbol{\sigma}(u)^{\i} + \sum_{|\j|>1}c_{\j}\boldsymbol{\sigma}(u)^{\j}\big)$$ where $\min\limits_{|\i|=1} v(c_{\i})=0$ and $v(c_{\j})>0$ for $|\j|>1$. Using Axiom $4_n$, we can pick $u \in \ca{O}$ such that $\bar{u}$ is a solution of $$1+ \sum\limits_{|\i|=1}\overline{c_{\i}}\cdot \bar{\boldsymbol{\sigma}}(x)^{\i}=0.$$ Then $v(b - a)=v(G(a))$, and $v(G(b)) > v(G(a))$. 
It is clear that any $b \in \ca{O}$ with $v(a-b)\ge v\big(G(a)\big)$ and $v\big(G(b)\big)>v\big(G(a)\big)$ is obtained in this way. \end{proof} \begin{lemma}\label{hensel.imm} Suppose $\ca{K}$ satisfies Axiom $4_n$ and $G(x)$ is $\sigma$-henselian at $a$. Suppose also that there is no $b\in K$ with $G(b)=0$ and $v(a-b) = v(G(a))$. Then there is a pc-sequence $\{a_\rho\}$ in $K$ with the following properties: \begin{enumerate} \item $a_0=a$ and $\{a_\rho\}$ has no pseudolimit in $K$; \item $\{v(G(a_\rho))\}$ is strictly increasing, and thus $G(a_\rho) \leadsto 0$; \item $v(a_{\rho'} -a_\rho)=v\big(G(a_\rho)\big)$ whenever $\rho<\rho'$; \item for any extension $\ca{K}'=(K',\dots)$ of $\ca{K}$ and $b,c\in K'$ such that $a_\rho \leadsto b$, $G(c)=0$ and $v(b-c)\ge v(G(b))$, we have $a_\rho \leadsto c$. \end{enumerate} \end{lemma} \begin{proof} Let $\{a_\rho\}_{\rho<\lambda}$ be a sequence in $\ca{O}$ with $\lambda$ an ordinal $>0$, $a_0=a$, and \begin{enumerate} \item[(i)] $G$ is $\sigma$-henselian at $a_\rho$, for all $\rho < \lambda$, \item[(ii)] $v(a_{\rho'} -a_\rho)=v\big(G(a_\rho)\big)$ whenever $\rho<\rho'<\lambda$, \item[(iii)] $v(G(a_{\rho'}))>v(G(a_\rho))$ whenever $\rho<\rho'<\lambda$. \end{enumerate} (Note that for $\lambda=1$ we have such a sequence.) Suppose $\lambda= \mu +1$ is a successor ordinal. Then Lemma~\ref{newton} yields $a_\lambda\in K$ such that $v(a_\lambda - a_\mu)=v\big(G(a_\mu)\big)$ and $v\big(G(a_\lambda)\big)> v\big(G(a_\mu)\big)$. Then the extended sequence $\{a_\rho\}_{\rho<\lambda +1}$ has the above properties with $\lambda+1$ instead of $\lambda$. Suppose $\lambda$ is a limit ordinal. Then $\{a_\rho\}$ is a pc-sequence and $G(a_\rho) \leadsto 0$. If $\{a_\rho\}$ has no pseudolimit in $K$ we are done. Assume otherwise, and take a pseudolimit $a_\lambda\in K$ of $\{a_\rho\}$. The extended sequence $\{a_\rho\}_{\rho<\lambda +1}$ clearly satisfies the conditions (i) and (ii) with $\lambda+1$ instead of $\lambda$. 
Since $G$ is over $\ca{O}$ we have $$v(G(a_\lambda) - G(a_{\rho} )) \ge v(a_\lambda - a_{\rho})= v(a_{\rho +1}-a_\rho)= v (G(a_{\rho}))$$ for $\rho < \lambda$. Therefore $v(G(a_\lambda)) \ge v(G(a_\rho))$ for $\rho<\lambda$, and by (iii) this yields $v(G(a_\lambda))>v(G(a_\rho))$ for $\rho<\lambda$. So the extended sequence also satisfies (iii) with $\lambda+1$ instead of $\lambda$. For cardinality reasons this building process must come to an end and thus yield a pc-sequence $\{a_\rho\}$ satisfying (1), (2), (3). Let $b,c$ in an extension of $\ca{K}$ be such that $a_\rho \leadsto b$, $G(c)=0$ and $v(b-c)\ge v(G(b))$. Then $G$ is $\sigma$-henselian at $b$, and for $\rho < \rho'$, \begin{align*} \gamma_\rho:&= v(a_{\rho'}-a_\rho)=v(G(a_\rho))=v(b-a_\rho), \text{ so}\\ v(b-c)& \ge v(G(b))=v\big(G(b)-G(a_\rho) + G(a_\rho)\big)\ge \gamma_\rho, \end{align*} since $v(G(b)-G(a_\rho))\ge v(b-a_\rho)=\gamma_\rho$. Thus $a_\rho \leadsto c$, as claimed. \end{proof} \begin{definition}\label{shensel1} We say $\ca{K}$ is {\em $\sigma$-henselian\/} if for each $\sigma$-polynomial $G(x)$ over $\ca{O}$ and $a\in \ca{O}$ such that $G$ is $\sigma$-henselian at $a$, there exists $b \in \ca{O}$ such that $G(b)=0$ and $v(a-b)\ge v\big(G(a)\big)$. {\em(By the arguments above, any such $b$ will actually satisfy $v(a-b)= v\big(G(a)\big)$.)} \end{definition} \begin{corollary}\label{fixh} If $\ca{K}$ is $\sigma$-henselian, then the residue field of $\operatorname{Fix}(K)$ is $\operatorname{Fix}({{\boldsymbol{k}}})$. \end{corollary} \begin{proof} Suppose $\ca{K}$ is $\sigma$-henselian, and let $\alpha\in \operatorname{Fix}({{\boldsymbol{k}}})$; we shall find $b\in \operatorname{Fix}(K)$ such that $v(b)=0$ and $\bar{b}=\alpha$. Take $a\in K$ with $v(a)=0$ and $\bar{a}=\alpha$. Then $v(\sigma(a)-a)>0$, so $\sigma(x)-x$ is $\sigma$-henselian at $a$. So there is a $b$ as promised. \end{proof} \noindent By {\bf Axiom $4$} we mean the axiom scheme $\{\operatorname{Axiom} 4_n:\ n=0,1,2,\dots\}$. 
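\noindent To see the approximation scheme in its most classical instance, take $\sigma=\operatorname{id}$ on $\mathbb{Z}_7$: a $\sigma$-polynomial is then an ordinary polynomial, ``$\sigma$-henselian at $a$'' reads $v(G(a))>0$ and $v(G'(a))=0$, and the residue equation in the proof of Lemma~\ref{newton} becomes $1+\bar{c}x=0$ with $\bar{c}\ne 0$, which is solvable in any field, so Axiom $4_n$ is not needed here. The step $b=a+G(a)u$ is then the usual Newton update $b=a-G(a)/G'(a)$. The Python sketch below (our illustration, computing in $\mathbb{Z}/7^8$ as a stand-in for $\mathbb{Z}_7$; all names are ad hoc) lifts the root $3$ of $x^2-2$ modulo $7$ and exhibits the behaviour promised by Lemmas~\ref{newton} and~\ref{hensel.imm}: $v(a-b)=v(G(a))$ at each step, and $\{v(G(a_\rho))\}$ strictly increasing:

```python
# Newton step b = a + G(a)*u of Lemma "newton", specialized to sigma = id on
# the 7-adic integers and computed modulo 7^PREC.  G, the starting point 3,
# and all helper names are our choices for illustration.
p, PREC = 7, 8
M = p**PREC

def v(x):
    # truncated p-adic valuation: returns PREC when x = 0 mod p^PREC
    x %= M
    if x == 0:
        return PREC
    k = 0
    while x % p == 0:
        x //= p
        k += 1
    return k

G  = lambda x: x * x - 2   # v(G(3)) = v(7) = 1 > 0
dG = lambda x: 2 * x       # v(dG(3)) = v(6) = 0, so G is sigma-henselian at 3

def newton_step(a):
    # pick u with residue solving 1 + c*x = 0, c = dG(a); v(c) = 0 makes c
    # a unit, and b = a + G(a)*u is the classical update a - G(a)/dG(a)
    u = (-pow(dG(a) % M, -1, M)) % M
    return (a + G(a) * u) % M

a, vals = 3, []
for _ in range(4):
    vals.append(v(G(a)))
    b = newton_step(a)
    assert v(b - a) == v(G(a))  # v(a - b) = v(G(a)), as in the lemma
    a = b
assert vals == sorted(set(vals))  # v(G(a_rho)) is strictly increasing
assert v(G(a)) >= PREC            # a squares to 2 modulo 7^8
print("lift of sqrt(2) mod 7^8:", a)
```

For this quadratic the valuation $v(G(a_\rho))$ doubles at each step; Lemma~\ref{hensel.imm} only asserts strict increase, which is what the assertions check.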
\begin{remark}\label{scan} If $\Gamma = \{0\}$, then $\ca{K}$ is $\sigma$-henselian. Suppose $\Gamma \ne \{0\}$, $\ca{K}$ satisfies Axiom $2$ and is $\sigma$-henselian. Then $\ca{K}$ satisfies Axiom $4$ by \cite{scanlon1}, Proposition 5.3, so $\bar{\sigma}^n\ne \operatorname{id}_{{{\boldsymbol{k}}}}$ for all $n\ge 1$. Hence, if also $\operatorname{char}({{\boldsymbol{k}}})=0$, then $\ca{K}$ satisfies Axiom $3$. \end{remark} \noindent From part (1) of Lemma~\ref{hensel.imm} we obtain: \begin{corollary}\label{disc} If $\ca{K}$ is maximal as valued field and satisfies Axiom $4$, then $\ca{K}$ is $\sigma$-henselian. In particular, if $\ca{K}$ is complete with discrete valuation and satisfies Axiom $4$, then $\ca{K}$ is $\sigma$-henselian. \end{corollary} \noindent Thus if the difference field ${{\boldsymbol{k}}}$ satisfies Axiom 4, then the Hahn difference field ${{\boldsymbol{k}}}((t^\Gamma))$ is $\sigma$-henselian. Suppose ${{\boldsymbol{k}}}$ has characteristic $p>0$ and every equation $$1+ \alpha_0x + \alpha_{1}x^{p}+ \cdots + \alpha_nx^{p^n}=0 \qquad (\text{all }\alpha_i\in {{\boldsymbol{k}}}, \text{ some }\alpha_i\ne 0),$$ is solvable in ${{\boldsymbol{k}}}$. Then by Corollary~\ref{disc} the Witt difference field $\operatorname{W}({{\boldsymbol{k}}})$ is $\sigma$-henselian, where $\sigma$ is the Witt Frobenius. As noted in \cite{BMS}, this condition on the residue field ${{\boldsymbol{k}}}$ is {\em Hypothesis A} in Kaplansky~\cite{kaplansky}, where it is related to uniqueness of maximal immediate extensions of valued fields. It is equivalent to ${{\boldsymbol{k}}}$ not having any field extension of finite degree divisible by $p$; see \cite{whaples}. \noindent Note that if $\ca{K}$ is $\sigma$-henselian, then it is henselian as a valued field. We have the following analogue of an important result about henselian valued fields: \begin{theorem}\label{lift.res.field} Suppose that $\ca{K}$ is $\sigma$-henselian and $\operatorname{char}({{\boldsymbol{k}}})=0$. 
Let $K_0\subseteq \ca{O}$ be a $\sigma$-subfield of $K$. Then there is a $\sigma$-subfield $K_1$ of $K$ such that $K_0\subseteq K_1 \subseteq \ca{O}$ and $\bar{K}_1={{\boldsymbol{k}}}$. \end{theorem} \begin{proof} Suppose that $\bar{K_0}\ne {{\boldsymbol{k}}}$. Take $a\in \ca{O}$ such that $\bar{a}\notin \bar{K_0}$. If $v(G(a))=0$ for all nonzero $G(x)$ over $K_0$, then $K_0\langle a\rangle$ is a proper $\sigma$-field extension of $K_0$ contained in $\ca{O}$. Next, consider the case that $v\big(G(a)\big)>0$ for some nonzero $G(x)$ over $K_0$. Pick such $G$ of minimal complexity. So $v(H(a))=0$ for all nonzero $H(x)$ over $K_0$ of lower complexity. It follows that $G$ is $\sigma$-henselian at $a$. So there is $b\in \ca{O}$ with $G(b)=0$ and $v(a-b)=v(G(a))$, so $\bar{a}=\bar{b}$. We claim that $K_0\langle b \rangle$ is a proper $\sigma$-field extension of $K_0$ contained in $\ca{O}$. To prove the claim, let $G$ have order $m$. Then the $\sigma^k(b)$ with $k\in \mathbb{Z}$ are algebraic over $K_0\big(b,\dots, \sigma^{m-1}(b)\big)$ and thus $$K_0\langle b\rangle\ =\ K_0\big(\sigma^k(b): k\in \mathbb{Z}\big)\ =\ K_0\big(b,\dots, \sigma^{m-1}(b)\big)[\sigma^k(b): k\in \mathbb{Z}] \subseteq \ca{O},$$ which establishes the claim. We finish the proof by Zorn's Lemma. \end{proof} \noindent The notion ``$\sigma$-henselian at $a$'' applies only to $\sigma$-polynomials over $\ca{O}$ and $a\in \ca{O}$. It will be convenient to extend it a little. Let $G(x)$ be over $K$ of order $\le n$ and $a\in K$. \begin{definition} We say $(G,a)$ is in {\em $\sigma$-hensel configuration\/} if $G_{(\i)}(a)\ne 0$ for some $\i \in \mathbb{N}^{n+1}$ with $|\i|=1$, and either $G(a)=0$ or there is $\gamma\in \Gamma$ such that $$ v\big(G(a)\big)= \min_{|\i|=1}v\big(G_{(\i)}(a)\big) + \gamma <v\big(G_{(\j)}(a)\big) + |\j|\cdot\gamma$$ for all $\j$ with $|\j|>1$. 
For $(G,a)$ in $\sigma$-hensel configuration we put $$\gamma(G,a):= v\big(G(a)\big) - \min_{|\i|=1}v\big(G_{(\i)}(a)\big).$$ \end{definition} \noindent Let $(G,a)$ be in $\sigma$-hensel configuration, $G(a)\ne 0$, and take $c\in K$ with $v(c)=\gamma(G,a)$ and put $H(x):= G(cx)/G(a)$, $\alpha:= a/c$. Then $$H(\alpha)=1, \quad \min_{|\i|=1} v\big(H_{(\i)}(\alpha)\big)=0, \quad v\big(H_{(\j)}(\alpha)\big) >0 \text{ for }|\j|>1,$$ as is easily verified. In particular, $H(\alpha +x)$ is over $\ca{O}$. Now assume that $\ca{K}$ satisfies also Axiom 4. This gives a unit $u\in \ca{O}$ such that $v\big(H(\alpha+u)\big)>0$. We claim that $H(\alpha +x)$ is $\sigma$-henselian at $u$. This is because for $P(x):=H(\alpha +x)$ we have $v\big(P(u)\big)>0$, and for each $\i$, $$P_{(\i)}(u)= H_{(\i)}(\alpha+u)= H_{(\i)}(\alpha)+ \sum_{|\j|\ge 1}H_{(\i)(\j)}(\alpha)\boldsymbol{\sigma}(u)^{\j},$$ so $\min_{|\i|=1} v\big(P_{(\i)}(u)\big)=0$. If $\ca{K}$ is $\sigma$-henselian, we can take $u$ as above such that $H(\alpha+u)=0$, and then $b:= a+cu$ satisfies $G(b)=0$ and $v(a-b)=\gamma(G,a)$. Summarizing: \begin{lemma}\label{hensel-conf} Assume $\ca{K}$ satisfies Axiom $4$ and is $\sigma$-henselian. Let $(G,a)$ be in $\sigma$-hensel configuration. Then there is $b\in K$ such that $G(b)=0$ and $v(a-b)= \gamma(G,a)$. \end{lemma} \noindent Up to this point we treated the Witt and non-Witt case separately, but from now on it makes sense to handle both cases at once. We say that $\ca{K}$ is {\em workable} if either it satisfies Axioms 2, 3 (as in Theorem~\ref{crucial.result.nonwitt}), or it satisfies Axiom 2 and is a Witt case with infinite ${{\boldsymbol{k}}}$ (as in Theorem~\ref{crucial.result.witt}). An {\em extension\/} of a workable $\ca{K}$ is an extension as before (and does not have to be workable), but in the Witt case we also require the extension to be a Witt case. In the next definition we assume that $\ca{K}$ is workable, and that $\{a_\rho\}$ is a pc-sequence from $K$. 
\begin{definition} We say $\{a_\rho\}$ is of {\em $\sigma$-algebraic type over $K$\/} if $G(b_\rho) \leadsto 0$ for some $\sigma$-polynomial $G(x)$ over $K$ and an equivalent pc-sequence $\{b_\rho\}$ in $K$. If $\{a_\rho\}$ is of $\sigma$-algebraic type over $K$, then a {\em minimal $\sigma$-polynomial of $\{a_\rho\}$ over $K$\/} is a $\sigma$-polynomial $G(x)$ over $K$ with the following properties: \begin{enumerate} \item[(i)] $ G(b_\rho) \leadsto 0$ for some pc-sequence $\{b_\rho\}$ in $K$, equivalent to $\{a_\rho\}$; \item[(ii)] $ H(b_\rho) \not\leadsto 0$ whenever $H(x)$ is a $\sigma$-polynomial over $K$ of lower complexity than $G$ and $\{b_\rho\}$ is a pc-sequence in $K$ equivalent to $\{a_\rho\}$. \end{enumerate} \end{definition} \noindent If $\{a_\rho\}$ is of $\sigma$-algebraic type over $K$, then $\{a_\rho\}$ clearly has a minimal $\sigma$-polynomial over $K$. The next lemma is used to study immediate extensions in the next section. Its finite ramification hypothesis is satisfied by all Witt cases. (``Finitely ramified'' is defined right after Lemma~\ref{kapla}.) \begin{lemma}\label{henselconf} Suppose $\ca{K}$ is workable and finitely ramified. Let $\{a_\rho\}$ from $K$ be a pc-sequence of $\sigma$-algebraic type over $K$ with minimal $\sigma$-polynomial $G(x)$ over $K$, and with pseudolimit $a$ in some extension. Let $\Sigma$ be a finite set of $\sigma$-polynomials $H(x)$ over $K$. Then there is a pc-sequence $\{b_\rho\}$ in $K$, equivalent to $\{a_\rho\}$, such that, with $\gamma_\rho:=v(a-a_\rho)$: \begin{enumerate} \item[$(1)$] $v(a-b_\rho)=\gamma_\rho$, eventually, and $ G(b_\rho) \leadsto 0$; \item[$(2)$] if $H\in \Sigma$ and $H \notin K$, then $H(b_\rho) \leadsto H(a)$; \item[$(3)$] $(G,a)$ is in $\sigma$-hensel configuration, and $\gamma(G,a) > \gamma_\rho$, eventually. \end{enumerate} \end{lemma} \begin{proof} Let $G$ have order $n$. We can assume that $\Sigma$ includes all $G_{(\i)}$. 
In the rest of the proof $\i,\j,\l$ range over $\mathbb{N}^{n+1}$. Theorems~\ref{crucial.result.nonwitt} and~\ref{crucial.result.witt} and their proofs yield an equivalent pc-sequence $\{b_\rho\}$ in $K$ such that (1) and (2) hold. The proof of Theorem~\ref{adjustment1} shows that we can arrange that in addition there is a unique $m_0$ with $1\le m_0\le \deg G$ such that, eventually, $$v\big(G(b_\rho)-G(a)\big)= \min\limits_{|\i|=m_0}v\big(G_{(\i)}(a)\big) +m_0\gamma_\rho < v\big(G_{(\j)}(a)\big)+|\j|\cdot\gamma_\rho,$$ for each $\j$ with $|\j|\ge 1$ and $|\j|\not=m_0$. Now $\left\{v\big(G(b_\rho)\big)\right\}$ is strictly increasing, eventually, so $v\big(G(a)\big) > v\big(G(b_\rho)\big)$ eventually, and for $|\j|\ge 1$, $|\j|\not=m_0$: $$v\big(G(b_\rho)\big)=\min\limits_{|\i|=m_0}v\big(G_{(\i)}(a)\big) +m_0\cdot \gamma_\rho < v\big(G_{(\j)}(a)\big) + |\j|\cdot\gamma_\rho, \quad \text{eventually}.$$ We claim that $m_0=1$. Fix $\i$ with $|\i|=1$ and $G_{(\i)}\ne 0$, and let $\j>\i$; our claim will then follow by deriving $$v\big(G_{(\i)}(a)\big) + \gamma_\rho < v\big(G_{(\j)}(a)\big) + |\j|\gamma_\rho, \quad \text{eventually}.$$ The proof of Theorem~\ref{adjustment1} with $G_{(\i)}$ in the role of $G$ shows that we can arrange that our sequence $\{b_\rho\}$ also satisfies $$v\big(G_{(\i)}(b_\rho)-G_{(\i)}(a)\big) \le v\big(G_{(\i)(\l)}(a)\big) +|\l|\cdot\gamma_\rho, \quad\text{eventually}$$ for all $\l$ with $|\l|\ge 1$. 
Since $v\big(G_{(\i)}(b_\rho)\big)=v\big(G_{(\i)}(a)\big)$ eventually, this yields $$v\big(G_{(\i)}(b_\rho)\big)\leq v\big(G_{(\i)(\l)}(a)\big) +|\l|\cdot\gamma_\rho, \quad\text{eventually}$$ for all $\l$ with $|\l|\ge 1$, hence for all such $\l$, $$v\big(G_{(\i)}(b_\rho)\big)\leq v\Big({\i+ \l\choose \i}\Big) +v\big(G_{(\i+ \l)}(a)\big)+|\l|\cdot\gamma_\rho, \quad\text{eventually}.$$ For $\l$ with $\i+\l=\j$, this yields $$v\big(G_{(\i)}(a)\big)\leq v\Big({\j\choose \i}\Big) +v\big(G_{(\j)}(a)\big)+(|\j|-1)\cdot\gamma_\rho, \quad\text{eventually}.$$ Now $K$ is finitely ramified, so $$v\big(G_{(\i)}(a)\big)<v\big(G_{(\j)}(a)\big)+(|\j|-1)\cdot \gamma_\rho, \quad\text{eventually, hence} $$ $$v\big(G_{(\i)}(a)\big)+\gamma_\rho<v\big(G_{(\j)}(a)\big)+ |\j|\cdot\gamma_\rho, \quad\text{eventually}. $$ Thus $m_0=1,$ as claimed. The above inequalities then yield (3). \end{proof} \section{Immediate Extensions}\label{max.imm.ext} \noindent Throughout this section $\ca{K}=(K,\Gamma,{{\boldsymbol{k}}}; v,\pi)$ is a workable valued difference field. The immediate extensions of $\ca{K}$ are then workable as well, and we prove here the basic facts on these immediate extensions. To avoid heavy-handed notation we often let $K$ stand for $\ca{K}$ when the context permits. \begin{definition} A pc-sequence $\{a_\rho\}$ from $K$ is said to be of {\em $\sigma$-transcendental type over $K$\/} if it is not of $\sigma$-algebraic type over $K$, that is, $G(b_\rho) \not\leadsto 0$ for each $\sigma$-polynomial $G(x)$ over $K$ and each equivalent pc-sequence $\{b_\rho\}$ from $K$. \end{definition} \noindent In particular, such a pc-sequence cannot have a pseudolimit in $K$. The next lemma is a $\sigma$-analogue of a result familiar for valued fields. \begin{lemma}\label{ext.with.pseudolimit} Let $\{a_\rho\}$ from $K$ be a pc-sequence of $\sigma$-transcendental type over $K$. 
Then $\ca{K}$ has an immediate extension $(K\langle a \rangle, \Gamma,{{\boldsymbol{k}}}; v_a, \pi_a)$ such that: \begin{enumerate} \item[$(1)$] $a$ is $\sigma$-transcendental over $K$ and $a_\rho \leadsto a$; \item[$(2)$] for any extension $(K_1,\Gamma_1,{{\boldsymbol{k}}}_1;v_1,\pi_1)$ of $\ca{K}$ and any $b\in K_1$ with $a_\rho \leadsto b$ there is a unique embedding $$(K\langle a \rangle, \Gamma,{{\boldsymbol{k}}}; v_a, \pi_a)\ \longrightarrow\ (K_1,\Gamma_1,{{\boldsymbol{k}}}_1;v_1,\pi_1)$$ over $\ca{K}$ that sends $a$ to $b$. \end{enumerate} \end{lemma} \begin{proof} Let $\ca{K}'$ be an elementary extension of $\ca{K}$ containing a pseudolimit $a$ of $\{a_\rho\}$. Let $(K\langle a \rangle, \Gamma_a,{{\boldsymbol{k}}}_a; v_a, \pi_a)$ be the valued $\sigma$-field generated by $a$ over $\ca{K}$. To prove that $\Gamma_a=\Gamma$, consider a nonconstant $\sigma$-polynomial $G(x)$ over $K$. Use Theorem~\ref{adjustment1} to get an equivalent pc-sequence $\{b_\rho\}$ such that $G(b_\rho) \leadsto G(a)$. Now $G(b_\rho) \not\leadsto 0$, since $\{a_\rho\}$ is of $\sigma$-transcendental type. So $v_a(G(a))$ is the eventual value of $v(G(b_\rho))$, which lies in $\Gamma$. Thus $\Gamma_a=\Gamma$ and $a$ is $\sigma$-transcendental over $K$. A similar argument shows that ${{\boldsymbol{k}}}_a={{\boldsymbol{k}}}$. With $b$ as in (2), the proof of Theorem~\ref{adjustment1} shows that in the argument above we can arrange, in addition to $G(b_\rho) \leadsto G(a)$, that $G(b_\rho) \leadsto G(b)$; hence $v_a(G(a))= v_1(G(b))\in \Gamma$. \end{proof} \noindent The following consequence involves both (1) and (2): \begin{corollary}\label{ctra} Let $a$ from some extension of $\ca{K}$ be $\sigma$-algebraic over $K$ and let $\{a_\rho\}$ be a pc-sequence in $K$ such that $a_\rho \leadsto a$. Then $\{a_\rho\}$ is of $\sigma$-algebraic type over $K$. 
\end{corollary} \noindent The $\sigma$-algebraic analogue of Lemma~\ref{ext.with.pseudolimit} is trickier: \begin{lemma}\label{imm.alg.ext} Suppose $\ca{K}$ is finitely ramified. Let $\{a_\rho\}$ from $K$ be a pc-sequence of $\sigma$-algebraic type over $K$, with no pseudolimit in $K$. Let $G(x)$ be a minimal $\sigma$-polynomial of $\{a_\rho\}$ over $K$. Then $\ca{K}$ has an immediate extension $(K\langle a \rangle, \Gamma,{{\boldsymbol{k}}}; v_a, \pi_a)$ such that \begin{enumerate} \item[$(1)$] $G(a)=0$ and $a_\rho \leadsto a$; \item[$(2)$] for any extension $(K_1, \Gamma_1,{{\boldsymbol{k}}}_1; v_1, \pi_1)$ of $\ca{K}$ and any $b\in K_1$ with $G(b)=0$ and $a_\rho \leadsto b$ there is a unique embedding $$(K\langle a \rangle, \Gamma,{{\boldsymbol{k}}}; v_a, \pi_a) \longrightarrow\ (K_1, \Gamma_1,{{\boldsymbol{k}}}_1; v_1, \pi_1)$$ over $\ca{K}$ that sends $a$ to $b$. \end{enumerate} \end{lemma} \begin{proof} Let $G(x)=F(\boldsymbol{\sigma}(x))$ with $F(x_0, \dots, x_n) \in K[x_0, \dots, x_n]$, $n=\text{order}(G)$. {\em Claim}. $F$ is irreducible in $K[x_0,\dots,x_n]$. Suppose otherwise. Then $F=F_1F_2$ with nonconstant $F_1,F_2\in K[x_0,\dots,x_n]$, and thus $G=G_1G_2$ where $G_1(x),G_2(x)$ are $\sigma$-polynomials over $K$ of lower complexity than $G$. Take a pc-sequence $\{b_\rho\}$ in $K$ equivalent to $\{a_\rho\}$ such that $G(b_\rho) \leadsto 0$ and $\{G_1(b_\rho)\}$, $\{G_2(b_\rho)\}$ pseudoconverge. Then $\{v(G(b_\rho))\}$ is eventually strictly increasing, but $\{v(G_1(b_\rho))\}$, $\{v(G_2(b_\rho))\}$ are eventually constant, contradiction. Consider the domain $K[\xi_0,\dots,\xi_n]:=K[x_0,\dots,x_n]/(F)$ with $\xi_i:= x_i + (F)$ and let $L=K(\xi_0,\dots,\xi_n)$ be its field of fractions. We extend the valuation $v$ on $K$ to a valuation $v: L^\times \to \Gamma$ as follows. Pick a pseudolimit $e$ of $\{a_\rho\}$ in some extension of $\ca{K}$ whose valuation we also denote by $v$. 
Let $\phi\in L$, $\phi\ne 0$, so $\phi=f(\xi_0,\dots,\xi_n)/g(\xi_0,\dots,\xi_{n-1})$ with $f\in K[x_0,\ldots,x_n]$ of lower $x_n$-degree than $F$ and $g\in K[x_0,\ldots,x_{n-1}]$, $g\ne 0$. We claim: $v\big(f(\boldsymbol{\sigma}(e))\big), v\big(g(\boldsymbol{\sigma}(e))\big)\in \Gamma$, and $v\big(f(\boldsymbol{\sigma}(e))\big)- v\big(g(\boldsymbol{\sigma}(e))\big)$ depends only on $\phi$ and not on the choice of $(f,g)$. To see why this claim is true, suppose that also $\phi=f_1(\xi_0,\dots,\xi_n)/g_1(\xi_0,\dots,\xi_{n-1})$ with $f_1\in K[x_0,\ldots,x_n]$ of lower $x_n$-degree than $F$ and $g_1\in K[x_0,\ldots,x_{n-1}]$, $g_1\ne 0$. Then $fg_1\equiv f_1g \mod F$ in $K[x_0,\dots,x_n]$, and thus $fg_1=f_1g$ since $fg_1$ and $f_1g$ have lower degree in $x_n$ than $F$. To avoid some tedious case distinctions we assume that $f,g, f_1, g_1$ are all nonconstant. (In the other cases the arguments below need some trivial modifications.) Take a pc-sequence $\{b_\rho\}$ in $K$ equivalent to $\{a_\rho\}$ such that $\{f(\boldsymbol{\sigma}(b_\rho))\}\leadsto f(\boldsymbol{\sigma}(e))$, and likewise with $g, f_1, g_1$ instead of $f$. Also $\{f(\boldsymbol{\sigma}(b_\rho))\}\not\leadsto 0$ by the minimality of $G$, so $$v(f(\boldsymbol{\sigma}(e)))= \text{eventual value of } v(f(\boldsymbol{\sigma}(b_\rho))),$$ in particular, $v(f(\boldsymbol{\sigma}(e)))\in \Gamma$, and likewise with $g$, $f_1$ and $g_1$ instead of $f$. The identity $fg_1=f_1g$ now yields $$v\big(f(\boldsymbol{\sigma}(e))\big)-v\big(g(\boldsymbol{\sigma}(e))\big)= v\big(f_1(\boldsymbol{\sigma}(e))\big)- v\big(g_1(\boldsymbol{\sigma}(e))\big).$$ This proves the claim and allows us to define $v: L^\times \to \Gamma$ by $$v(\phi):=v\big(f(\boldsymbol{\sigma}(e))\big)- v\big(g(\boldsymbol{\sigma}(e))\big).$$ It is routine to check that this map $v$ is a valuation on the field $L$ that extends the valuation $v$ on $K$. 
(For the multiplicative law $v(\phi_1\phi_2)=v(\phi_1) + v(\phi_2)$, let $$\phi_1=\frac{f_1(\xi_0,\dots,\xi_n)}{g_1(\xi_0,\dots,\xi_{n-1})}, \quad \phi_2=\frac{f_2(\xi_0,\dots,\xi_n)}{g_2(\xi_0,\dots,\xi_{n-1})}, \quad \phi_1\phi_2=\frac{f_3(\xi_0,\dots,\xi_n)}{g_3(\xi_0,\dots,\xi_{n-1})} $$ where $f_1, f_2, f_3\in K[x_0,\ldots,x_n]$ have lower $x_n$-degree than $F$ and where $g_1,g_2, g_3$ are nonzero polynomials in $K[x_0,\ldots,x_{n-1}]$. In view of $$\frac{f_1}{g_1}\frac{f_2}{g_2}=\frac{QF}{g_1g_2g_3} + \frac{f_3}{g_3}$$ with $Q \in K[x_0, \dots ,x_n]$, we obtain $v(\phi_1\phi_2)=v(\phi_1) + v(\phi_2)$ as in the proof on pp. 308--309 of \cite{kaplansky} for ordinary valued fields, using suitable choices of pc-sequences equivalent to $\{a_\rho\}$ in the style above.) Likewise one shows that $(L,v)$ has the same residue field as $(K,v)$. It is clear that $K(\xi_0,\dots,\xi_{n-1})$ is purely transcendental over $K$ of transcendence degree $n$. The same is true for $K(\xi_1,\dots,\xi_n)$: by the minimality of $G$ the variable $x_0$ must occur in $F$, so $\xi_0$ is algebraic over $K(\xi_1,\dots,\xi_n)$. This yields an isomorphism $$ K(\xi_0,\dots,\xi_{n-1})\buildrel{\sigma}\over\longrightarrow K(\xi_1,\dots,\xi_n), \quad \sigma(\xi_i)=\xi_{i+1}\text{ for }0\leq i\leq n-1, $$ between subfields of $L$ that extends $\sigma$ on $K$. We consider these subfields as equipped with the valuation induced by that of $L$, and claim that $\sigma$ is an isomorphism of {\em valued\/} fields. To see why, let $c\in K[\xi_0,\dots,\xi_{n-1}]$, $c\ne 0$; the claim will follow by deriving $v(c)=v(\sigma(c))$. We have $$c=h(\xi_0,\dots,\xi_{n-1}), \quad h\in K[x_0,\dots,x_{n-1}].$$ Let $h^\sigma\in K[x_0,\dots,x_{n-1}]$ be obtained by applying $\sigma$ to the coefficients of $h$. Then $\sigma(c)=h^\sigma(\xi_1,\dots,\xi_n)$. 
Also $\sigma(c)=f(\xi_0,\dots,\xi_n)/g(\xi_0,\dots,\xi_{n-1})$ with $f\in K[x_0,\ldots,x_n]$ of lower $x_n$-degree than $F$ and $g\in K[x_0,\ldots,x_{n-1}]$, $g\ne 0$. Thus $$ g(x_0,\dots,x_{n-1})h^\sigma(x_1,\dots,x_n)-f(x_0,\dots,x_n)=qF$$ with $q\in K[x_0,\dots,x_n]$. Put $\alpha:= v(f(\xi_0,\dots,\xi_n))$, $\beta=v(g(\xi_0,\dots,\xi_{n-1}))$ and $\gamma=v(h(\xi_0,\dots,\xi_{n-1}))=v(c)$, so $\alpha,\beta, \gamma\in \Gamma$ and $v(\sigma(c))=\alpha - \beta$. Take a pc-sequence $\{b_\rho\}$ from $K$ equivalent to $\{a_\rho\}$ such that, eventually $$v\big(f(\boldsymbol{\sigma}(b_\rho))\big)=\alpha, \quad v\big(g(\boldsymbol{\sigma}(b_\rho))\big)=\beta, \quad v\big(h(\boldsymbol{\sigma}(b_\rho))\big)=\gamma$$ and also $G(b_\rho) \leadsto 0$ and $\{v\big(q(\boldsymbol{\sigma}(b_\rho))\big)\}$ is eventually constant. Now, to be explicit, $h(\boldsymbol{\sigma}(b_\rho))=h(b_\rho, \dots, \sigma^{n-1}(b_\rho))$, so $\sigma\big (h(\boldsymbol{\sigma}(b_\rho))\big)= h^\sigma(\sigma(b_\rho),\dots,\sigma^n(b_\rho))$, and thus $v\big(h^\sigma(\sigma(b_\rho),\dots,\sigma^n(b_\rho))\big)=\gamma$, eventually. Since $\{v\big((qF)(\boldsymbol{\sigma}(b_\rho))\big)\}$ is either eventually strictly increasing, or eventually equal to $\infty$, it follows that eventually $$\beta+ \gamma= v\left(g\big(\boldsymbol{\sigma}(b_\rho)\big)\cdot h^\sigma\big(\sigma(b_\rho),\dots,\sigma^n(b_\rho)\big)\right) = v\left(f\big(\boldsymbol{\sigma}(b_\rho)\big)\right)=\alpha,$$ so $\alpha - \beta=\gamma$, and thus $v(\sigma(c))=v(c)$. This proves our claim. 
Consider the inclusion diagram of valued fields (with $L^h$ the henselization of $L$): $$\begin{array}{ccccc} & & L^h & & \\ & & \uparrow & & \\ & & L & & \\ & \nearrow & & \nwarrow & \\ K(\xi_0,\ldots,\xi_{n-1}) & & & & K(\xi_1,\ldots,\xi_n)\\ & \nwarrow & & \nearrow & \\ & & K & & \end{array} $$ Note that $L$ is an algebraic immediate extension of both $K(\xi_0,\dots,\xi_{n-1})$ and $K(\xi_1,\dots,\xi_n)$, so the same is true for $L^h$ instead of $L$. Since $\ca{K}$ is finitely ramified---a hypothesis we use here for the first time in the proof---this gives $K(\xi_0,\dots,\xi_{n-1})^h=K(\xi_1,\dots,\xi_n)^h=L^h$ where we take the henselizations inside $L^h$. So $\sigma$ extends uniquely to an automorphism $\sigma$ of the {\em valued\/} field $L^h$. Put $a:= \xi_0$ and let $(K\langle a \rangle,\Gamma, {{\boldsymbol{k}}}; v_a, \pi_a)$ be the valued $\sigma$-subfield of $L^h$ generated by $a$ over $K$. Note that then $G(a)=0$ and $a_\rho \leadsto a$, since $v_a(a_\rho-a)=v(a_\rho - e)$. To verify (2), note that the first display in the proof shows that the valuation on $L$ defined above does not depend on the choice of the pseudolimit $e$. So we can take for $e$ an element $b$ as in the hypothesis of (2), from which the conclusion of (2) follows. \end{proof} \noindent We note the following consequence. \begin{corollary} \label{immediate.sigma-immediate} Suppose $\ca{K}$ is finitely ramified. Then $\ca{K}$ as a valued field has a proper immediate extension if and only if $\ca{K}$ as a valued difference field has a proper immediate extension. \end{corollary} \noindent We say that $\ca{K}$ is {\em $\sigma$-algebraically maximal\/} if it has no proper immediate $\sigma$-algebraic extension, and we say it is {\em maximal\/} if it has no proper immediate extension. Corollary~\ref{ctra} and Lemmas~\ref{imm.alg.ext} and~\ref{hensel.imm} yield: \begin{corollary} Suppose $\ca{K}$ is finitely ramified.
Then: \begin{enumerate} \item[$(1)$] $\ca{K}$ is $\sigma$-algebraically maximal if and only if each pc-sequence in $K$ of $\sigma$-algebraic type over $K$ has a pseudolimit in $K$; \item[$(2)$] if $\ca{K}$ satisfies Axiom $4$ and is $\sigma$-algebraically maximal, then $\ca{K}$ is $\sigma$-henselian. \end{enumerate} \end{corollary} \noindent It is clear that $\ca{K}$ has $\sigma$-algebraically maximal immediate $\sigma$-algebraic extensions, and also maximal immediate extensions. If $\ca{K}$ satisfies Axiom 4, then both kinds of extensions are unique up to isomorphism, but for this we need one more lemma: \begin{lemma} Suppose $\ca{K}$ is finitely ramified and $\ca{K}'$ is a workable finitely ramified $\sigma$-algebraically maximal extension of $\ca{K}$ satisfying Axiom $4$. Let $\{a_\rho\}$ from $K$ be a pc-sequence of $\sigma$-algebraic type over $K$, with no pseudolimit in $K$, and with minimal $\sigma$-polynomial $G(x)$ over $K$. Then there exists $b\in K'$ such that $a_\rho \leadsto b$ and $G(b)=0$. \end{lemma} \begin{proof} Lemma~\ref{imm.alg.ext} provides a pseudolimit $a\in K'$ of $\{a_\rho\}$. Take a pc-sequence $\{b_\rho\}$ in $K$ equivalent to $\{a_\rho\}$ with the properties listed in Lemma~\ref{henselconf}. Since $\ca{K}'$ is $\sigma$-henselian and satisfies Axiom 4, Lemma~\ref{hensel-conf} yields $b\in K'$ such that $$ v'(a-b)=\gamma(G,a)\ \text{ and }\ G(b)=0.$$ Note that $a_\rho \leadsto b$ since $\gamma(G,a) > v'(a-a_\rho)=\gamma_\rho$ eventually. \end{proof} \noindent Together with Lemmas~\ref{ext.with.pseudolimit} and~\ref{imm.alg.ext} this yields: \begin{theorem}\label{unique.max.imm.ext} Suppose $\ca{K}$ is finitely ramified and satisfies Axiom $4$. Then all its maximal immediate extensions are isomorphic over $\ca{K}$, and all its $\sigma$-algebraically maximal immediate $\sigma$-algebraic extensions are isomorphic over $\ca{K}$.
\end{theorem} \noindent We now state minor variants of these results using the notion of saturation from model theory, as needed in the proof of the embedding theorem in the next section. Let $|X|$ denote the cardinality of a set $X$, and let $\kappa$ be a cardinal. \begin{lemma} Suppose $\ca{E}=(E, \Gamma_E, \dots)\le \ca{K}$ is workable and $\ca{K}$ is finitely ramified, $\sigma$-henselian, and $\kappa$-saturated with $\kappa>|\Gamma_E|$. Let $\{a_\rho\}$ from $E$ be a pc-sequence of $\sigma$-algebraic type over $E$, with no pseudolimit in $E$, and with minimal $\sigma$-polynomial $G(x)$ over $E$. Then there exists $b\in K$ such that $a_\rho \leadsto b$ and $G(b)=0$. \end{lemma} \begin{proof} By the saturation assumption we have a pseudolimit $a\in K$ of $\{a_\rho\}$. Let $\gamma_\rho=v(a-a_\rho)$. By Lemma~\ref{henselconf}, $(G,a)$ is in $\sigma$-hensel configuration with $\gamma(G,a)>\gamma_\rho$, eventually. Since $\ca{K}$ is $\sigma$-henselian, it satisfies Axiom 4, so Lemma~\ref{hensel-conf} yields $b\in K$ such that $v(a-b)=\gamma(G,a)$ and $G(b)=0$. Note that $a_\rho \leadsto b$ since $\gamma(G,a) > \gamma_\rho$ eventually. \end{proof} \noindent In combination with Lemmas~\ref{ext.with.pseudolimit} and~\ref{imm.alg.ext} this yields: \begin{corollary}\label{immsat} If $\ca{E}=(E, \Gamma_E, \dots)\le \ca{K}$ is workable and satisfies Axiom $4$, and $\ca{K}$ is finitely ramified, $\sigma$-henselian, and $\kappa$-saturated with $\kappa>|\Gamma_E|$, then any maximal immediate extension of $\ca{E}$ can be embedded in $\ca{K}$ over $\ca{E}$. \end{corollary} \begin{section}{The Equivalence Theorem}\label{et} \noindent Theorem~\ref{embed11}, the main result of the paper, tells us when two workable $\sigma$-henselian valued difference fields of equal characteristic zero are elementarily equivalent over a common substructure.
In Section 8 we derive from it in the usual way some attractive consequences on the elementary theories of such valued difference fields and on the induced structure on value group and residue difference field. In Section 9 we use coarsening to obtain analogues in the mixed characteristic case. We begin with a short subsection on angular component maps. The presence of such maps simplifies the proof of the Equivalence Theorem, but in the aftermath we can often discard these maps again, by Corollary~\ref{acm2}. \noindent {\bf Angular components.} Let $\ca{K}=(K, \Gamma, {{\boldsymbol{k}}};\ v,\pi)$ be a valued difference field. An {\em angular component map\/} on $\ca{K}$ is an angular component map $\operatorname{ac}$ on $\ca{K}$ as valued field such that in addition $\bar{\sigma}(\operatorname{ac}(a))=\operatorname{ac}(\sigma(a))$ for all $a\in K$. Examples are the Hahn difference fields ${{\boldsymbol{k}}}((t^\Gamma))$ with angular component map given by $\operatorname{ac}(a)=a_{\gamma_0}$ for nonzero $a=\sum a_\gamma t^\gamma\in {{\boldsymbol{k}}}((t^\Gamma))$ and $\gamma_0=v(a)$, and also the Witt difference fields $\operatorname{W}({{\boldsymbol{k}}})$ with angular component map determined by $\operatorname{ac}(p)=1$. (To see this, use the next lemma and the fact that $\text{Fix}\big(\operatorname{W}({{\boldsymbol{k}}})\big)=\operatorname{W}(\mathbb{F}_p)=\mathbb{Q}_p$.) \begin{lemma}\label{acm1} Suppose $\ca{K}$ satisfies Axiom $2$. Then each angular component map on the valued subfield $\operatorname{Fix}(K)$ of $\ca{K}$ extends uniquely to an angular component map on $\ca{K}$. If in addition $\ca{K}$ is $\sigma$-henselian, then every angular component map on $\ca{K}$ is obtained in this way from an angular component map on $\operatorname{Fix}(K)$. 
\end{lemma} \begin{proof} Given an angular component map $\operatorname{ac}$ on $\operatorname{Fix}(K)$ the claimed extension to $\ca{K}$, also denoted by $\operatorname{ac}$, is obtained as follows: for $x\in K^\times$ we have $x=uy$ with $u,y\in K^\times,\ v(u)=0, \sigma(y)=y$; then $\operatorname{ac}(x)=\bar{u}\operatorname{ac}(y)$. The second claim of the lemma follows from Corollary~\ref{fixh}. \end{proof} \noindent Here is an immediate consequence of Lemmas~\ref{xs1} and \ref{acm1}: \begin{corollary}\label{acm2} Suppose $\ca{K}$ satisfies Axiom $2$. Then there is an angular component map on some elementary extension of $\ca{K}$. \end{corollary} \noindent {\bf The Main Result.} In this subsection we consider $3$-sorted structures $$ \ca{K}=\big(K, \Gamma, {{\boldsymbol{k}}}; v, \pi, \operatorname{ac} \big)$$ where $\big(K, \Gamma, {{\boldsymbol{k}}}; v, \pi \big)$ is a valued difference field (satisfying Axiom 1 of course) and where $\operatorname{ac}: K \to {{\boldsymbol{k}}}$ is an angular component map on $\big(K, \Gamma, {{\boldsymbol{k}}}; v, \pi \big)$. Such a structure will be called an {\em ac-valued difference field\/}. Any subfield $E$ of $K$ is viewed as a valued subfield of $\ca{K}$ with valuation ring $\ca{O}_E:=\ca{O}\cap E$. \noindent If $\operatorname{char}({{\boldsymbol{k}}})=0$ and $\ca{K}$ is $\sigma$-henselian, then by Theorem~\ref{lift.res.field} there is a difference ring morphism $i:{{\boldsymbol{k}}} \to \ca{O}$ such that $\pi(i(a))=a$ for all $a \in {{\boldsymbol{k}}}$; we call such $i$ a {\em $\sigma$-lifting\/} of ${{\boldsymbol{k}}}$ to $\ca{K}$. This will play a minor role in the proof of the Equivalence Theorem.
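\noindent To make the compatibility condition $\bar{\sigma}(\operatorname{ac}(a))=\operatorname{ac}(\sigma(a))$ concrete, here is a quick sanity check in a Hahn difference field ${{\boldsymbol{k}}}((t^\Gamma))$, where we take $\sigma$ to act on coefficients, fixing each $t^\gamma$: for $x=\alpha t^{\gamma}+\beta t^{\delta}$ with $\gamma<\delta$ and $\alpha\ne 0$ we have $$\operatorname{ac}(x)=\alpha, \qquad \sigma(x)=\bar{\sigma}(\alpha)\,t^{\gamma}+\bar{\sigma}(\beta)\,t^{\delta}, \qquad \operatorname{ac}\big(\sigma(x)\big)=\bar{\sigma}(\alpha)=\bar{\sigma}\big(\operatorname{ac}(x)\big),$$ as the definition requires.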
\noindent A {\em good substructure\/} of $\ca{K}=(K, \Gamma, {{\boldsymbol{k}}}; v,\pi,\operatorname{ac})$ is a triple $\ca{E}=(E, \Gamma_{\ca{E}}, {{\boldsymbol{k}}}_{\ca{E}})$ such that \begin{enumerate} \item $E$ is a difference subfield of $K$, \item $\Gamma_{\ca{E}}$ is an ordered abelian subgroup of $\Gamma$ with $v(E^\times)\subseteq \Gamma_{\ca{E}}$, \item ${{\boldsymbol{k}}}_{\ca{E}}$ is a difference subfield of ${{\boldsymbol{k}}}$ with $\operatorname{ac}(E)\subseteq {{\boldsymbol{k}}}_{\ca{E}}$ (hence $\pi(\ca{O}_E)\subseteq {{\boldsymbol{k}}}_{\ca{E}}$). \end{enumerate} For good substructures $\ca{E}_1=(E_1, \Gamma_1, {{\boldsymbol{k}}}_1)$ and $\ca{E}_2=(E_2, \Gamma_2, {{\boldsymbol{k}}}_2)$ of $\ca{K}$, we define $\ca{E}_1\subseteq \ca{E}_2$ to mean that $E_1 \subseteq E_2,\ \Gamma_1 \subseteq \Gamma_2,\ {{\boldsymbol{k}}}_1 \subseteq {{\boldsymbol{k}}}_2$. If $E$ is a difference subfield of $K$ with $\operatorname{ac}(E)=\pi(\ca{O}_E)$, then $\big(E, v(E^\times), \pi(\ca{O}_E)\big)$ is a good substructure of $\ca{K}$, and if in addition $F\supseteq E$ is a difference subfield of $K$ such that $v(F^\times)=v(E^\times)$, then $\operatorname{ac}(F)=\pi(\ca{O}_F)$. Throughout this subsection $$\ca{K}=(K, \Gamma, {{\boldsymbol{k}}}; v,\pi, \operatorname{ac}), \qquad \ca{K}'=(K', \Gamma', {{\boldsymbol{k}}}'; v', \pi', \operatorname{ac}')$$ are ac-valued difference fields, with valuation rings $\ca{O}$ and $\ca{O}'$, and $$\ca{E}=(E,\Gamma_{\ca{E}}, {{\boldsymbol{k}}}_{\ca{E}}), \qquad \ca{E'}=(E',\Gamma_{\ca{E}'}, {{\boldsymbol{k}}}_{\ca{E}'})$$ are good substructures of $\ca{K}$, $\ca{K'}$ respectively. To avoid too many accents we let $\sigma$ denote the difference operator of each of $K, K', E, E'$, and put $\ca{O}_{E'}:= \ca{O}'\cap E'$. 
A {\em good map\/} $\mathbf{f}: \ca{E} \to \ca{E'}$ is a triple $\mathbf{f}=(f, f_{\operatorname{v}}, f_{\operatorname{r}})$ consisting of a difference field isomorphism $f:E \to E'$, an ordered group isomorphism $f_{\operatorname{v}}:\Gamma_{\ca{E}} \to \Gamma_{\ca{E}'}$ and a difference field isomorphism $f_{\operatorname{r}}: {{\boldsymbol{k}}}_{\ca{E}} \to {{\boldsymbol{k}}}_{\ca{E}'}$ such that \begin{enumerate} \item[(i)] $f_{\operatorname{v}}(v(a))=v'(f(a))$ for all $a \in E^\times$, and $f_{\operatorname{v}}$ is elementary as a partial map between the ordered abelian groups $\Gamma$ and $\Gamma'$; \item[(ii)] $f_{\operatorname{r}}(\operatorname{ac}(a))=\operatorname{ac}'(f(a))$ for all $a \in E$, and $f_{\operatorname{r}}$ is elementary as a partial map between the difference fields ${{\boldsymbol{k}}}$ and ${{\boldsymbol{k}}}'$. \end{enumerate} Let $\mathbf{f}: \ca{E} \to \ca{E'}$ be a good map as above. Then the field part $f: E \to E'$ of $\mathbf{f}$ is a valued difference field isomorphism, and $f_{\operatorname{v}}$ and $f_{\operatorname{r}}$ agree on $v(E^\times)$ and $\pi(\ca{O}_E)$ with the maps $v(E^\times) \to v'(E'^\times)$ and $\pi(\ca{O}_E) \to \pi'(\ca{O}_{E'})$ induced by $f$. We say that a good map $\mathbf{g}= (g, g_{\operatorname{v}}, g_{\operatorname{r}}) : \ca{F} \to \ca{F'}$ {\em extends\/} $\mathbf{f}$ if $\ca{E}\subseteq \ca{F}$, $\ca{E'}\subseteq \ca{F'}$, and $g$, $g_{\operatorname{v}}$, $g_{\operatorname{r}}$ extend $f$, $f_{\operatorname{v}}$, $f_{\operatorname{r}}$, respectively. The {\em domain\/} of $\mathbf{f}$ is $\ca{E}$. The next two lemmas show that condition (ii) above is automatically satisfied by certain extensions of good maps. \begin{lemma}\label{ac1} Let $\mathbf{f}: \ca{E} \to \ca{E}'$ be a good map, and $F\supseteq E$ and $F'\supseteq E'$ subfields of $K$ and $K'$, respectively, such that $v(F^\times)=v(E^\times)$ and $\pi(\ca{O}_F)\subseteq {{\boldsymbol{k}}}_{\ca{E}}$. 
Let $g: F \to F'$ be a valued field isomorphism such that $g$ extends $f$ and $f_{\operatorname{r}}(\pi(u))=\pi'(g(u))$ for all $u\in \ca{O}_F$. Then $\operatorname{ac}(F)\subseteq {{\boldsymbol{k}}}_{\ca{E}}$ and $f_{\operatorname{r}}(\operatorname{ac}(a))= \operatorname{ac}'(g(a))$ for all $a \in F$. \end{lemma} \begin{proof} Let $a \in F$. Then $a=a_1u$ where $a_1 \in E$ and $u\in \ca{O}_F$, $v(u)=0$, so $\operatorname{ac}(a)=\operatorname{ac}(a_1)\pi(u)\in {{\boldsymbol{k}}}_{\ca{E}}$. It follows easily that $f_{\operatorname{r}}(\operatorname{ac}(a))= \operatorname{ac}'(g(a))$. \end{proof} \noindent In the same way we obtain: \begin{lemma}\label{ac2} Suppose $\pi(\ca{O}_E)={{\boldsymbol{k}}}_{\ca{E}}$, let $\mathbf{f}: \ca{E} \to \ca{E}'$ be a good map, and let $F\supseteq E$ and $F'\supseteq E'$ be subfields of $K$ and $K'$, respectively, such that $v(F^\times)=v(E^\times)$. Let $g: F \to F'$ be a valued field isomorphism extending $f$. Then $\operatorname{ac}(F)= \pi(\ca{O}_F)$ and $g_{\operatorname{r}}(\operatorname{ac}(a))= \operatorname{ac}'(g(a))$ for all $a \in F$, where the map $g_{\operatorname{r}}:\pi(\ca{O}_F)\to \pi'(\ca{O}_{F'})$ is induced by $g$ $($and thus extends $f_{\operatorname{r}})$. \end{lemma} \noindent The following is useful in connection with Axiom 2: \begin{lemma}\label{fixr} Let $b\in K^{\times}$. Then the following are equivalent: \begin{enumerate} \item[$(1)$] There is $c \in \Fix(K)$ such that $v(c)=v(b)$. \item[$(2)$] There is $d \in K$ such that $v(d)=0$ and $\sigma(d)=(b/ \sigma(b))\cdot d$. \end{enumerate} \end{lemma} \begin{proof} For $c$ as in (1), $d=cb^{-1}$ is as in (2). For $d$ as in (2), $c=bd$ is as in (1). \end{proof} \noindent We say that $\ca{E}$ satisfies Axiom 2 (respectively, Axiom 3, Axiom 4) if the valued difference subfield $(E, v(E^\times),\pi(\ca{O}_E);\dots)$ of $\ca{K}$ does. Likewise, we say that $\ca{E}$ is workable (respectively, $\sigma$-henselian) if this valued difference subfield of $\ca{K}$ is. 
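\noindent Before turning to the main result, we illustrate Lemma~\ref{fixr} in a Hahn difference field ${{\boldsymbol{k}}}((t^\Gamma))$, again taking $\sigma$ to act on coefficients, so that $\sigma\big(\sum a_\gamma t^\gamma\big)=\sum \bar{\sigma}(a_\gamma)\,t^\gamma$. For $b=\alpha t^{\gamma}$ with $\alpha\in {{\boldsymbol{k}}}^\times$, the element $c=t^{\gamma}\in \Fix(K)$ has $v(c)=v(b)$ as in $(1)$, and the corresponding witness for $(2)$ is $d=cb^{-1}=\alpha^{-1}$: indeed $v(d)=0$ and $$\sigma(d)=\bar{\sigma}(\alpha)^{-1}=\frac{\alpha}{\bar{\sigma}(\alpha)}\cdot \alpha^{-1}=\frac{b}{\sigma(b)}\cdot d.$$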
\begin{theorem}\label{embed11} Suppose $\operatorname{char}({{\boldsymbol{k}}})=0$, $\ca{K}$, $\ca{K}'$ satisfy Axiom $2$ and are $\sigma$-henselian. Then any good map $\ca{E} \to \ca{E}'$ is a partial elementary map between $\ca{K}$ and $\ca{K}'$. \end{theorem} \begin{proof} The theorem holds trivially for $\Gamma=\{0\}$, so assume that $\Gamma\ne \{0\}$. Then $\ca{K}$ and $\ca{K}'$ are workable. Let $\mathbf{f}=(f, f_{\operatorname{v}}, f_{\operatorname{r}}): \ca{E} \to \ca{E}'$ be a good map. By passing to suitable elementary extensions of $\ca{K}$ and $\ca{K}'$ we arrange that $\ca{K}$ and $\ca{K}'$ are $\kappa$-saturated, where $\kappa$ is an uncountable cardinal such that $|{{\boldsymbol{k}}}_{\ca{E}}|,\ |\Gamma_{\ca{E}}| < \kappa$. Call a good substructure $\ca{E}_1=(E_1, \Gamma_1, {{\boldsymbol{k}}}_1)$ of $\ca{K}$ {\em small\/} if $|{{\boldsymbol{k}}}_1|,\ |\Gamma_1|<\kappa$. We shall prove that the good maps with small domain form a back-and-forth system between $\ca{K}$ and $\ca{K}'$. (This clearly suffices to obtain the theorem.) In other words, we shall prove that under the present assumptions on $\ca{E}$, $\ca{E}'$ and $\mathbf{f}$, there is for each $a \in K$ a good map $\mathbf{g}$ extending $\mathbf{f}$ such that $\mathbf{g}$ has small domain $\ca{F}=(F,\dots)$ with $a\in F$. In addition to Corollary~\ref{immsat}, we have several basic extension procedures: \noindent (1) {\em Given $\alpha\in {{\boldsymbol{k}}}$, arranging that $\alpha\in {{\boldsymbol{k}}}_{\ca{E}}$}. By saturation and the definition of ``good map'' this can be achieved without changing $f$, $f_{\operatorname{v}}$, $E$, $\Gamma_{\ca{E}}$ by extending $f_{\operatorname{r}}$ to a partial elementary map between ${{\boldsymbol{k}}}$ and ${{\boldsymbol{k}}}'$ with $\alpha$ in its domain. \noindent (2) {\em Given $\gamma\in \Gamma$, arranging that $\gamma\in \Gamma_{\ca{E}}$}. This follows in the same way. \noindent (3) {\em Arranging ${{\boldsymbol{k}}}_{\ca{E}}=\pi(\ca{O}_E)$}.
Suppose $\alpha\in {{\boldsymbol{k}}}_{\ca{E}},\ \alpha\notin \pi(\ca{O}_E)$; set $\alpha':= f_{\operatorname{r}}(\alpha)$. If $\alpha$ is $\bar{\sigma}$-transcendental over $\pi(\ca{O}_E)$, we pick $a\in \ca{O}$ and $a'\in \ca{O}'$ such that $\bar{a}=\alpha$ and $\bar{a'}=\alpha'$, and then Lemmas~\ref{extrest} and ~\ref{ac1} yield a good map $\mathbf{g}=(g, f_{\operatorname{v}}, f_{\operatorname{r}})$ with small domain $(E \langle a \rangle, \Gamma_{\ca{E}}, {{\boldsymbol{k}}}_{\ca{E}})$ such that $\mathbf{g}$ extends $\mathbf{f}$ and $g(a)=a'$. Next, assume that $\alpha$ is $\bar{\sigma}$-algebraic over $\pi(\ca{O}_E)$. Let $G(x)$ be a $\sigma$-polynomial over $\ca{O}_E$ such that $\bar{G}(x)$ is a minimal $\bar{\sigma}$-polynomial of $\alpha$ over $\pi(\ca{O}_E)$ and has the same complexity as $G(x)$. Pick $a\in \ca{O}$ such that $\bar{a}=\alpha$. Then $G$ is $\sigma$-henselian at $a$. So we have $b \in \ca{O}$ such that $G(b)=0$ and $\bar{b}=\bar{a}=\alpha$. Likewise, we obtain $b'\in \ca{O}'$ such that $f(G)(b')=0$ and $\bar{b'}=\alpha'$, where $f(G)$ is the difference polynomial over $E'$ that corresponds to $G$ under $f$. By Lemmas~\ref{extresa} and \ref{ac1} we obtain a good map extending $\mathbf{f}$ with small domain $(E \langle b \rangle, \Gamma_{\ca{E}}, {{\boldsymbol{k}}}_{\ca{E}})$ and sending $b$ to $b'$. \noindent By iterating these steps we can arrange ${{\boldsymbol{k}}}_{\ca{E}}=\pi(\ca{O}_E)$; this condition is actually preserved in the extension procedures (4), (5), (6) below, as the reader may easily verify. We do assume in the rest of the proof that ${{\boldsymbol{k}}}_{\ca{E}}=\pi(\ca{O}_E)$, and so we can refer from now on to ${{\boldsymbol{k}}}_{\ca{E}}$ as the {\em residue difference field\/} of $E$. \noindent (4) {\em Extending $\mathbf{f}$ to a good map whose domain satisfies Axiom $2$}. Let $\delta \in v(E^\times)$. Pick $b \in E^{\times}$ such that $v(b)=\delta$. 
Since Axiom 2 holds in $\ca{K}$, we can use Lemma~\ref{fixr} to get $d \in K$ such that $v(d)=0$ and $G(d)=0$ where $$G(x)\ :=\ \sigma(x) - \frac{b}{\sigma(b)}\cdot x.$$ Note that $v(qd)=0$ and $G(qd)=0$ for all $q \in \mathbb{Q}^{\times} \subseteq E^{\times}$. Hence by saturation we can assume that $v(d)=0$, $G(d)=0$ and $\bar{d}$ is transcendental over ${{\boldsymbol{k}}}_{\ca{E}}$. We set $\alpha= \bar{d}$, so $\bar{G}(x)$ is a minimal $\bar{\sigma}$-polynomial of $\alpha$ over ${{\boldsymbol{k}}}_{\ca{E}}$. By Lemma~\ref{extresa}, $$E\langle d \rangle=E(d), \qquad v(E(d)^\times)=v(E^\times), \qquad \pi(\ca{O}_{E(d)})={{\boldsymbol{k}}}_{\ca{E}}(\alpha), \qquad \sigma (E(d))=E(d).$$ We shall find a good map extending $\mathbf{f}$ with domain $(E(d),\Gamma_{\ca{E}}, {{\boldsymbol{k}}}_{\ca{E}}(\alpha))$. Consider the $\sigma$-polynomial $H:= f(G)$, that is, $$H(x)\ =\ \sigma(x) - \frac{f(b)}{\sigma(f(b))}\cdot x.$$ By saturation we can find $\alpha' \in {{\boldsymbol{k}}}'$ with $\bar{H}(\alpha')=0$ and a difference field isomorphism $g_{\operatorname{r}}: {{\boldsymbol{k}}}_{\ca{E}}(\alpha) \to {{\boldsymbol{k}}}_{\ca{E}'}(\alpha')$ that extends $f_{\operatorname{r}}$, sends $\alpha$ to $\alpha'$ and is elementary as a partial map between the difference fields ${{\boldsymbol{k}}}$ and ${{\boldsymbol{k}}}'$. Using again Lemma~\ref{fixr} we find $d' \in K'$ such that $v'(d')=0$ and $H(d')=0$. Since $\bar{H}(\bar{d'})=\bar{H}(\alpha')=0$, we can multiply $d'$ by an element in $K'$ of valuation zero and fixed by $\sigma$ to assume further that $\bar{d'}=\alpha'$. Then Lemmas~\ref{extresa} and \ref{ac2} yield a good map $\mathbf{g}=(g, f_{\operatorname{v}}, g_{\operatorname{r}})$ where $g:E(d) \to E'(d')$ extends $f$ and sends $d$ to $d'$. The domain $(E(d), \Gamma_{\ca{E}}, {{\boldsymbol{k}}}_{\ca{E}}(\alpha))$ of $\mathbf{g}$ is small. 
\noindent In the extension procedures (3) and (4) the value group $v(E^\times)$ does not change, so if the domain $\ca{E}$ of $\mathbf{f}$ satisfies Axiom 2, then so does the domain of any extension of $\mathbf{f}$ constructed as in (3) or (4). Also $\Gamma_{\ca{E}}$ does not change in (3) and (4), but at this stage we can have $\Gamma_{\ca{E}}\ne v(E^\times)$. By repeated application of (1)--(4) we can arrange that $\ca{E}$ is workable and satisfies Axiom 4. Then by Corollary~\ref{immsat} we can arrange that in addition $\ca{E}$ is $\sigma$-henselian. (Any use of this in what follows will be explicitly indicated.) \noindent (5) {\em Towards arranging $\Gamma_{\ca{E}}=v(E^\times)$; the case of no torsion modulo $v(E^\times)$}. Suppose $\gamma\in \Gamma_{\ca{E}}$ has no torsion modulo $v(E^\times)$, that is, $n\gamma\notin v(E^\times)$ for all $n>0$. Take $a \in \Fix(K)$ such that $v(a)=\gamma$. Let $i$ be a $\sigma$-lifting of the residue difference field ${{\boldsymbol{k}}}$ to $\ca{K}$. Since $\operatorname{ac}(a)$ is fixed by $\bar{\sigma}$, $a/i(\operatorname{ac}(a))\in \Fix(K)$ and $v\big(a/i(\operatorname{ac}(a))\big)=\gamma$. So replacing $a$ by $a/i(\operatorname{ac}(a))$ we arrange that $v(a)=\gamma$ and $\operatorname{ac}(a)=1$. In the same way we obtain $a' \in \Fix(K')$ such that $v'(a')=\gamma':= f_{\operatorname{v}}(\gamma)$ and $\operatorname{ac}'(a')=1$. Then by a familiar fact from the valued field context we have an isomorphism of valued fields $g: E(a) \to E'(a')$ extending $f$ with $g(a)=a'$. Then $(g, f_{\operatorname{v}}, f_{\operatorname{r}})$ is a good map with small domain $(E(a),\Gamma_{\ca{E}},{{\boldsymbol{k}}}_{\ca{E}})$; this domain satisfies Axiom 2 if $\ca{E}$ does. \noindent (6) {\em Towards arranging $\Gamma_{\ca{E}}=v(E^\times)$; the case of prime torsion modulo $v(E^\times)$}. Here we assume that $\ca{E}$ satisfies Axiom 2 and is $\sigma$-henselian. 
Let $\gamma\in \Gamma_{\ca{E}}\setminus v(E^\times)$ with $\ell\gamma \in v(E^\times)$, where $\ell$ is a prime number. As $\ca{E}$ satisfies Axiom 2 we can pick $b \in \Fix(E)$ such that $v(b)=\ell\gamma$. Since $\ca{E}$ is $\sigma$-henselian we have a $\sigma$-lifting of its difference residue field ${{\boldsymbol{k}}}_{\ca{E}}$ to $\ca{E}$ and we can use this as in (5) to arrange that $\operatorname{ac}(b)=1$. We shall find $c \in \Fix(K)$ such that $c^\ell=b$ and $\operatorname{ac}(c)=1$. As in (5) we have $a \in \Fix(K)$ such that $v(a)=\gamma$ and $\operatorname{ac}(a)=1$. Then the polynomial $P(x):=x^\ell-b/a^\ell$ over $K$ is henselian at $1$. This gives $u \in K$ such that $P(u)=0$ and $\bar{u}=1$. Now let $c=au$. Clearly $c^\ell=b$ and $\operatorname{ac}(c)=1$. Note that $\sigma(c)^\ell=b$, hence $\sigma(c)=\omega c$ where $\omega$ is an $\ell^{th}$-root of unity. Using $\operatorname{ac}(c)=1$ we get $\operatorname{ac}(\omega)=1$, so $\omega=1$, that is, $c\in \Fix(K)$, as promised. Likewise we find $c' \in \Fix(K')$ such that $c'^\ell=f(b)$ and $\operatorname{ac}'(c')=1$. Then $\mathbf{f}$ extends easily to a good map with domain $(E(c), \Gamma_{\ca{E}},{{\boldsymbol{k}}}_{\ca{E}})$ sending $c$ to $c'$; this domain satisfies Axiom 2. \noindent By iterating (5) and (6) we can assume in the rest of the proof that $\Gamma_{\ca{E}}=v(E^\times)$, and we shall do so. This condition is actually preserved in the earlier extension procedures (3) and (4), as the reader may easily verify. Anyway, we can refer from now on to $\Gamma_{\ca{E}}$ as the {\em value group\/} of $E$. Note also that in the extension procedures (5) and (6) the residue difference field does not change. \noindent Now let $a \in K$ be given. We want to extend $\mathbf{f}$ to a good map whose domain is small and contains $a$. At this stage we can assume ${{\boldsymbol{k}}}_{\ca{E}}=\pi(\ca{O}_E)$, $\Gamma_{\ca{E}}=v(E^\times)$, and $\ca{E}$ is workable. 
Appropriately iterating and alternating the above extension procedures we arrange in addition that $\ca{E}$ satisfies Axiom 4 and $E\langle a \rangle$ is an immediate extension of $E$. Let $\ca{E} \langle a \rangle$ be the valued difference subfield of $\ca{K}$ that has $E\langle a \rangle$ as underlying difference field. By Corollary~\ref{immsat}, $\ca{E} \langle a \rangle$ has a maximal immediate valued difference field extension $\ca{E}_1\le \ca{K}$. Then $\ca{E}_1$ is a maximal immediate extension of $\ca{E}$ as well. Applying Corollary~\ref{immsat} to $\ca{E}'$ and using Theorem~\ref{unique.max.imm.ext}, we can extend $\mathbf{f}$ to a good map with domain $\ca{E}_1$, construed here as a good substructure of $\ca{K}$ in the obvious way. It remains to note that $a$ is in the underlying difference field of $\ca{E}_1$. \end{proof} \noindent {\bf A variant.} At the cost of a purity assumption we can eliminate angular component maps in the Equivalence Theorem. More precisely, let $\ca{K}, \ca{K}'$ be as before except that we do not require angular component maps as part of these structures. The notion of {\em good substructure\/} of $\ca{K}$ is similarly modified by changing clause (3) in its definition to: ${{\boldsymbol{k}}}_{\ca{E}}$ is a difference subfield of ${{\boldsymbol{k}}}$ with $\pi(\ca{O}_E)\subseteq {{\boldsymbol{k}}}_{\ca{E}}$. In defining good maps, condition (ii) on $f_{\operatorname{r}}$ is to be changed to: $f_{\operatorname{r}}(\pi(a))=\pi'(f(a))$ for all $a\in \ca{O}_E$, and $f_{\operatorname{r}}$ is elementary as a partial map between the difference fields ${{\boldsymbol{k}}}$ and ${{\boldsymbol{k}}}'$.
\end{theorem} \begin{proof} The case $\Gamma=\{0\}$ being trivial, let $\Gamma\ne \{0\}$, and let $\mathbf{f}: \ca{E} \to \ca{E}'$ be a good map; our task is to show that $\mathbf{f}$ is a partial elementary map between $\ca{K}$ and $\ca{K}'$. We first arrange that the valued difference subfield $(E, v(E^\times),\pi(\ca{O}_E);\dots)$ of $\ca{K}$ is $\aleph_1$-saturated by passing to an elementary extension of a suitable many-sorted structure with $\ca{K}$, $\ca{K}'$, $\ca{E}$, $\ca{E}'$ and $\mathbf{f}$ as ingredients. As in the beginning of the proof of Theorem~\ref{embed11} we arrange next that $\ca{K}$ and $\ca{K}'$ are $\kappa$-saturated, where $\kappa$ is an uncountable cardinal such that $|{{\boldsymbol{k}}}_{\ca{E}}|$, $|\Gamma_{\ca{E}}| < \kappa$. Then we apply the extension procedures (3) and (4) in the proof of Theorem~\ref{embed11} to arrange that ${{\boldsymbol{k}}}_{\ca{E}}=\pi(\ca{O}_E)$ and $\ca{E}$ satisfies Axiom 2, without changing $v(E^\times)$. To simplify notation we identify $\ca{E}$ and $\ca{E}'$ via $\mathbf{f}$; we have to show that then $\ca{K} \equiv_{\ca{E}} \ca{K}'$. Since $(E, v(E^\times),\pi(\ca{O}_E);\dots)$ is $\aleph_1$-saturated, Lemmas~\ref{xs1} and~\ref{xs2} yield cross-sections $$s_E: v(E^\times) \to \operatorname{Fix}(E)^\times,\quad s: \Gamma\to \operatorname{Fix}(K)^\times,\quad s': \Gamma' \to \operatorname{Fix}(K')^\times$$ such that $s$ and $s'$ extend $s_E$. These cross-sections induce angular component maps $\operatorname{ac}_E$ on $\operatorname{Fix}(E)$, $\operatorname{ac}$ on $\operatorname{Fix}(K)$, and $\operatorname{ac}'$ on $\operatorname{Fix}(K')$, which by Lemma~\ref{acm1} extend uniquely to angular component maps on $\ca{E}$, $\ca{K}$, and $\ca{K}'$. (Here we use that $\ca{E}$ satisfies Axiom 2.) This allows us to apply Theorem~\ref{embed11} to obtain the desired conclusion. \end{proof} \end{section} \begin{section}{Relative Quantifier Elimination}\label{cqe1} \noindent Here we derive various consequences of the Equivalence Theorem of Section~\ref{et}.
We use the symbols $\equiv$ and $\preceq$ for the relations of elementary equivalence and being an elementary submodel, in the setting of many-sorted structures, and ``definable'' means ``definable with parameters from the ambient structure''. Let $\mathcal{L}$ be the 3-sorted language of valued fields, with sorts $\f$ (the field sort), $\v$ (the value group sort), and $\re$ (the residue sort). We view a valued field $(K, \Gamma, {{\boldsymbol{k}}};\dots)$ as an $\mathcal{L}$-structure, with $\f$-variables ranging over $K$, $\v$-variables over $\Gamma$, and $\re$-variables over ${{\boldsymbol{k}}}$. Augmenting $\mathcal{L}$ with a function symbol $\sigma$ of sort $(\f,\f)$ gives the language $\mathcal{L}(\sigma)$ of valued difference fields, and augmenting it further with a function symbol $\operatorname{ac}$ of sort $(\f,\re)$ gives the language $\mathcal{L}(\sigma,\operatorname{ac})$ of $\operatorname{ac}$-valued difference fields. In this section $$\ca{K}=(K, \Gamma, {{\boldsymbol{k}}}; \dots), \qquad \ca{K}'=(K', \Gamma', {{\boldsymbol{k}}}'; \dots)$$ are ac-valued difference fields of equicharacteristic $0$ that satisfy Axiom $2$ and are $\sigma$-henselian; they are considered as $\ca{L}(\sigma, \operatorname{ac})$-structures. \begin{corollary}\label{comp00} $\ca{K} \equiv \ca{K}'$ if and only if ${{\boldsymbol{k}}} \equiv {{\boldsymbol{k}}}'$ as difference fields and $\Gamma \equiv \Gamma'$ as ordered abelian groups. \end{corollary} \begin{proof} The ``only if'' direction is obvious. Suppose ${{\boldsymbol{k}}} \equiv {{\boldsymbol{k}}}'$ as difference fields, and $\Gamma \equiv \Gamma'$ as ordered groups. This gives good substructures $\ca{E}:=(\mathbb{Q},\{0\}, \mathbb{Q})$ of $\ca{K}$, and $\ca{E}':=(\mathbb{Q},\{0\},\mathbb{Q})$ of $\ca{K}'$, and a trivial good map $\ca{E} \to \ca{E}'$. Now apply Theorem~\ref{embed11}. 
\end{proof} \noindent Thus $\ca{K}$ is elementarily equivalent to the Hahn difference field ${{\boldsymbol{k}}}((t^\Gamma))$ with angular component map defined in the beginning of Section~\ref{et}. \begin{corollary}\label{comp01} Let $\ca{E}=(E, \Gamma_E, {{\boldsymbol{k}}}_E;\dots)$ be a $\sigma$-henselian ac-valued difference subfield of $\ca{K}$ satisfying Axiom $2$ such that ${{\boldsymbol{k}}}_E\preceq {{\boldsymbol{k}}}$ as difference fields, and $\Gamma_E \preceq \Gamma$ as ordered abelian groups. Then $\ca{E} \preceq \ca{K}$. \end{corollary} \begin{proof} Take an elementary extension $\ca{K}'$ of $\ca{E}$. Then $\ca{K}'$ satisfies Axiom $2$, $(E,\Gamma_E, {{\boldsymbol{k}}}_E)$ is a good substructure of both $\ca{K}$ and $\ca{K}'$, and the identity on $(E, \Gamma, {{\boldsymbol{k}}}_E)$ is a good map. Hence by Theorem~\ref{embed11} we have $\ca{K} \equiv_{\ca{E}} \ca{K}'$. Since $\ca{E} \preceq \ca{K}'$, this gives $\ca{E}\preceq \ca{K}$. \end{proof} \noindent The proofs of these corollaries use only weak forms of the Equivalence Theorem, but now we turn to a result that uses its full strength: a relative elimination of quantifiers for the $\ca{L}(\sigma, \operatorname{ac})$-theory $T$ of $\sigma$-henselian ac-valued difference fields of equicharacteristic $0$ that satisfy Axiom 2. We specify that the function symbols $v$ and $\pi$ of $\ca{L}(\sigma, \operatorname{ac})$ are to be interpreted as {\em total\/} functions in any $\ca{K}$ as follows: extend $v: K^\times \to \Gamma$ to $v: K \to \Gamma$ by $v(0)=0$, and extend $\pi: \ca{O} \to {{\boldsymbol{k}}}$ to $\pi: K \to {{\boldsymbol{k}}}$ by $\pi(a)=0$ for $a\notin \ca{O}$. Let $\mathcal{L}_{\re}$ be the sublanguage of $\mathcal{L}(\sigma, \operatorname{ac})$ involving only the sort $\re$, that is, $\mathcal{L}_{\re}$ is a copy of the language of difference fields, with $\bar{\sigma}$ as the symbol for the difference operator. 
Let $\mathcal{L}_{\v}$ be the sublanguage of $\mathcal{L}(\sigma, \operatorname{ac})$ involving only the sort $\v$, that is, $\mathcal{L}_{\v}$ is the language of ordered abelian groups. Let $x=(x_1, \dots, x_l)$ be a tuple of distinct $\f$-variables, $y=(y_1, \dots, y_m)$ a tuple of distinct $\re$-variables, and $z=(z_1, \dots, z_n)$ a tuple of distinct $\v$-variables. Define a {\em special $\re$-formula in $(x,y)$\/} to be an $\ca{L}(\sigma, \operatorname{ac})$-formula $$\psi(x,y)\ :=\ \psi'\big(\operatorname{ac}(q_1(x)),\dots, \operatorname{ac}(q_k(x)),y\big)$$ where $k\in \mathbb{N}$, $\psi'(u_1,\dots, u_k,y)$ is an $\mathcal{L}_{\re}$-formula, and $q_1(x),\dots, q_k(x)\in \mathbb{Z}[x]$. Also, a {\em special $\v$-formula in $(x,z)$\/} is an $\ca{L}(\sigma, \operatorname{ac})$-formula $$\theta(x,z)\ :=\ \theta'\big(v(q_1(x)),\dots, v(q_k(x)),z\big)$$ where $k\in \mathbb{N}$, $\theta'(v_1,\dots, v_k,y)$ is an $\mathcal{L}_{\v}$-formula, and $q_1(x),\dots, q_k(x)\in \mathbb{Z}[x]$. Note that these special formulas do not have quantified $f$-variables. We can now state our relative quantifier elimination: \begin{corollary}\label{qe} Every $\ \ca{L}(\sigma, \operatorname{ac})$-formula $\ \phi(x,y,z)\ $ is $T$-equivalent to a boolean combination of special $\re$-formulas in $(x,y)$ and special $\v$-formulas in $(x,z)$. \end{corollary} \begin{proof} Let $\psi(x,y)$ and $\theta(x,z)$ range over special formulas as described above. For a model $\ca{K}=(K, \Gamma, {{\boldsymbol{k}}};\dots)$ of $T$ and $a\in K^l$, $r \in {{\boldsymbol{k}}}^m$, $\gamma \in \Gamma^n$, let \begin{align*} \tp_{\re}^{\ca{K}}(a,r) &:= \{\psi(x,y):\ \ca{K} \models \psi(a,r)\} \\ \tp_{\v}^{\ca{K}}(a,\gamma) &:= \{\theta(x,z):\ \ca{K} \models \theta(a,\gamma)\}. 
\end{align*} Let $\ca{K}$ and $\ca{K}'$ be any models of $T$, and let $$(a,r,\gamma)\in K^l\times {{\boldsymbol{k}}}^m\times \Gamma^n, \qquad (a',r',\gamma')\in K'^l\times {{\boldsymbol{k}}}'^m\times \Gamma'^n$$ be such that $\tp^{\ca{K}}_{\re}(a,r)=\tp^{\ca{K}'}_{\re}(a',r')$ and $\tp^{\ca{K}}_{\v}(a,\gamma)=\tp^{\ca{K}'}_{\v}(a',\gamma')$. It suffices to show that under these assumptions we have $$\tp^{\ca{K}}(a, r, \gamma)\ =\ \tp^{\ca{K}'}(a', r', \gamma').$$ Let $\ca{E}:= (E,\Gamma_{\ca{E}}, {{\boldsymbol{k}}}_{\ca{E}})$ where $E:= \mathbb{Q}\langle a \rangle$, $\Gamma_{\ca{E}}$ is the ordered subgroup of $\Gamma$ generated by $\gamma$ over $v(E^\times)$, and ${{\boldsymbol{k}}}_{\ca{E}}$ is the difference subfield of ${{\boldsymbol{k}}}$ generated by $\operatorname{ac}(E)$ and $r$, so $\ca{E}$ is a good substructure of $\ca{K}$. Likewise we define the good substructure $\ca{E}'$ of $\ca{K}'$. For each $q(x)\in \mathbb{Z}[x]$ we have $q(a)=0$ iff $\operatorname{ac}(q(a))=0$, and also $q(a')=0$ iff $\operatorname{ac}'(q(a'))=0$. In view of this fact, the assumptions give us a good map $\ca{E} \to \ca{E}'$ sending $a$ to $a'$, $\gamma$ to $\gamma'$ and $r$ to $r'$. It remains to apply Theorem~\ref{embed11}. \end{proof} \noindent In the proof above it is important that our notion of a good substructure $\ca{E}=(E, \Gamma_{\ca{E}}, {{\boldsymbol{k}}}_{\ca{E}})$ did not require $\Gamma_{\ca{E}}=v(E^\times)$ or $ {{\boldsymbol{k}}}_{\ca{E}}=\pi(\ca{O}_E)$. This is a difference with the treatment in \cite{BMS}. Related to it is that in Corollary~\ref{qe} we have a separation of $\re$- and $\v$-variables; this makes the next result almost obvious. \begin{corollary}\label{comp02} Each subset of ${{\boldsymbol{k}}}^m\times \Gamma^n$ definable in $\ca{K}$ is a finite union of rectangles $X\times Y$ with $X\subseteq {{\boldsymbol{k}}}^m$ definable in the difference field ${{\boldsymbol{k}}}$ and $Y\subseteq \Gamma^n$ definable in the ordered abelian group $\Gamma$. 
\end{corollary} \begin{proof} By Corollary~\ref{qe} and using its notations it is enough to observe that for $a\in K^l$, a special $\re$-formula $\psi(x,y)$ in $(x,y)$, and a special $\v$-formula $\theta(x,z)$in $(x,z)$, the set $\{r\in {{\boldsymbol{k}}}^m: \ca{K}\models \psi(a,r)\}$ is definable in the difference field ${{\boldsymbol{k}}}$, and the set $\{\gamma\in \Gamma^n: \ca{K}\models \theta(a,\gamma)\}$ is definable in the ordered abelian group $\Gamma$. \end{proof} \noindent Corollary~\ref{comp02} says in particular that the relations on ${{\boldsymbol{k}}}$ definable in $\ca{K}$ are definable in the difference field ${{\boldsymbol{k}}}$, and likewise, the relations on $\Gamma$ definable in $\ca{K}$ are definable in the ordered abelian group $\Gamma$. Thus ${{\boldsymbol{k}}}$ and $\Gamma$ are stably embedded in $\ca{K}$. The corollary says in addition that ${{\boldsymbol{k}}}$ and $\Gamma$ are orthogonal in $\ca{K}$. \noindent By Corollary~\ref{acm2} we can get rid of angular component maps in Corollaries~\ref{comp00} and \ref{comp02}: these go through if we replace ``ac-valued'' by ``valued''. Also Corollary~\ref{comp01} goes through with this change, but for this we need Theorem~\ref{embed11a}. In particular, any $\sigma$-henselian valued difference field satisfying Axiom $2$, with residue difference field ${{\boldsymbol{k}}}$ of characteristic $0$ and value group $\Gamma$, is elementarily equivalent to the Hahn difference field ${{\boldsymbol{k}}}((t^\Gamma))$. \end{section} \begin{section}{The unramified mixed characteristic case}\label{mcc} \noindent We now aim for mixed characteristic analogues of Sections~\ref{et} and ~\ref{cqe1}. Kochen~\cite{koch} has a clear account how a result like Corollary~\ref{comp00} for henselian valued fields can be obtained in mixed characteristic from the equicharacteristic zero case by coarsening. 
We follow here the same track, but to get the mixed characteristic Equivalence Theorem~\ref{embed13} we use an elementary fact (Lemma~\ref{lemm1}) in a way that may be new and yields a proof that differs from the rather complicated treatment in \cite{BMS}. \noindent {\bf A better equivalence theorem.} We first improve Theorem~\ref{embed11} by allowing extra structure on the residue difference field and on the value group. Let $\mathcal{L}$ be the 3-sorted language of valued fields and $\mathcal{L}(\sigma,\operatorname{ac})$ the language of $\operatorname{ac}$-valued difference fields, as introduced in Section~\ref{cqe1}. Consider now a language $\mathcal{L}^*\supseteq \mathcal{L}(\sigma,\operatorname{ac})$ such that every symbol of $\mathcal{L}^*\setminus \mathcal{L}(\sigma,\operatorname{ac})$ is a relation symbol of some sort $(\v,\dots, \v)$ or $(\re,\dots, \re)$. Let $\mathcal{L}_{\v}^*$ be the sublanguage of $\mathcal{L}^*$ involving only the sort $\v$, that is, the language of ordered abelian groups together with the new relation symbols of sort $(\v,\dots, \v)$. Also, let $\mathcal{L}_{\re}^*$ be the sublanguage of $\mathcal{L}^*$ involving only the sort $\re$, that is, (a copy of) the language of difference fields together with the new relation symbols of sort $(\re,\dots, \re)$. (The difference operator symbol of $\mathcal{L}_{\re}^*$ is $\bar{\sigma}$, to avoid confusion with the difference operator symbol $\sigma$ of sort $(\f,\f)$.) By a $*$-valued difference field we mean an $\mathcal{L}^*$-structure whose $\mathcal{L}(\sigma,\operatorname{ac})$-reduct is an $\operatorname{ac}$-valued difference field. \noindent Let $\ca{K}=(K,\Gamma, {{\boldsymbol{k}}};\cdots)$ be a $*$-valued difference field. Then we shall view $\Gamma$ as an $\mathcal{L}_{\v}^*$-structure and ${{\boldsymbol{k}}}$ as an $\mathcal{L}_{\re}^*$-structure, in the obvious way. Any subfield $E$ of $K$ is viewed as a valued subfield of $\ca{K}$ with valuation ring $\ca{O}_E:=\ca{O}\cap E$. 
\noindent A {\em good substructure\/} of $\ca{K}=(K, \Gamma, {{\boldsymbol{k}}};\cdots)$ is a triple $\ca{E}=(E, \Gamma_{\ca{E}}, {{\boldsymbol{k}}}_{\ca{E}})$ such that \begin{enumerate} \item $E$ is a difference subfield of $K$, \item $\Gamma_{\ca{E}} \subseteq \Gamma$ as $\mathcal{L}_{\v}^*$-structures with $v(E^\times)\subseteq \Gamma_{\ca{E}}$, \item ${{\boldsymbol{k}}}_{\ca{E}} \subseteq {{\boldsymbol{k}}}$ as $\mathcal{L}_{\re}^*$-structures with $\operatorname{ac}(E)\subseteq {{\boldsymbol{k}}}_{\ca{E}}$. \end{enumerate} In the rest of this subsection $\ca{K}=(K, \Gamma, {{\boldsymbol{k}}}; \cdots)$ and $\ca{K}'=(K',\Gamma', {{\boldsymbol{k}}}'; \cdots)$ are $*$-valued difference fields, and $\ca{E}=(E,\Gamma_{\ca{E}}, {{\boldsymbol{k}}}_{\ca{E}})$, $\ca{E'}=(E',\Gamma_{\ca{E}'}, {{\boldsymbol{k}}}_{\ca{E}'})$ are good substructures of $\ca{K}$, $\ca{K'}$ respectively. A {\em good map\/} $\mathbf{f}: \ca{E} \to \ca{E'}$ is a triple $\mathbf{f}=(f, f_{\operatorname{v}}, f_{\operatorname{r}})$ consisting of an isomorphism $f:E \to E'$ of difference fields, an isomorphism $f_{\operatorname{v}}:\Gamma_{\ca{E}} \to \Gamma_{\ca{E}'}$ of $\mathcal{L}_{\v}^*$-structures and an isomorphism $f_{\operatorname{r}}: {{\boldsymbol{k}}}_{\ca{E}} \to {{\boldsymbol{k}}}_{\ca{E}'}$ of $\mathcal{L}_{\re}^*$-structures such that \begin{enumerate} \item[(i)] $f_{\operatorname{v}}(v(a))=v'(f(a))$ for all $a \in E^\times$, and $f_{\operatorname{v}}$ is elementary as a partial map between the $\mathcal{L}_{\v}^*$-structures $\Gamma$ and $\Gamma'$; \item[(ii)] $f_{\operatorname{r}}(\operatorname{ac}(a))=\operatorname{ac}'(f(a))$ for all $a \in E$, and $f_{\operatorname{r}}$ is elementary as a partial map between the $\mathcal{L}_{\re}^*$-structures ${{\boldsymbol{k}}}$ and ${{\boldsymbol{k}}}'$. 
\end{enumerate} \noindent Theorem~\ref{embed11} goes through in this enriched setting, with the same proof except for obvious changes:\footnote{We were told that Theorem~\ref{embed12} and its corollaries are also formal consequences of Theorem~\ref{embed11} and the stable embeddedness and orthogonality coming from Corollary~\ref{comp02}.} \begin{theorem}\label{embed12} If $\operatorname{char}({{\boldsymbol{k}}})=0$ and $\ca{K}$, $\ca{K}'$ satisfy Axiom $2$ and are $\sigma$-henselian, then any good map $\ca{E} \to \ca{E}'$ is a partial elementary map between $\ca{K}$ and $\ca{K}'$. \end{theorem} \noindent The four corollaries of Section 8 also go through in this enriched setting, with residue difference fields and value groups replaced by their $\mathcal{L}_{\re}^*$-expansions and $\mathcal{L}_{\v}^*$-expansions, respectively. In the notions used in Corollary~\ref{qe} the roles of $\mathcal{L}_{\re}$ and $\mathcal{L}_{\v}$ are of course taken over by by $\mathcal{L}_{\re}^*$ and $\mathcal{L}_{\v}^*$, respectively. Except for obvious changes the proofs are the same as in Section 8, using Theorem~\ref{embed12} in place of Theorem~\ref{embed11}. \noindent {\bf A variant.} In dealing with the mixed characteristic case it is useful to eliminate angular component maps in Theorem~\ref{embed12}. So let $\ca{K}, \ca{K}'$ be as in the previous subsection except that we do not require angular component maps as part of these structures. The notion of {\em good substructure\/} of $\ca{K}$ is then modified by replacing in clause (3) of its definition the condition $\operatorname{ac}(E)\subseteq {{\boldsymbol{k}}}_{\ca{E}}$ by $\pi(\ca{O}_E)\subseteq {{\boldsymbol{k}}}_{\ca{E}}$. 
In defining the notion of a good map $\mathbf{f}=(f, f_{\text{v}}, f_{\text{r}}): \ca{E} \to \ca{E}'$ the condition on $f_{\text{r}}$ is to be changed to: $f_{\text{r}}(\pi(a))=\pi(f(a))$ for all $a\in \mathcal{O}_E$, and $f_{\text{r}}$ is elementary as a partial map between the $\mathcal{L}_{\re}^*$-structures ${{\boldsymbol{k}}}$ and ${{\boldsymbol{k}}}'$. Then the same arguments as we used in proving Theorem~\ref{embed11a} yield the following: \begin{theorem}\label{embed12a} If $\operatorname{char}({{\boldsymbol{k}}})=0$ and $\ca{K}$ and $\ca{K}'$ satisfy Axiom $2$ and are $\sigma$-henselian, and $\ca{E}$ and $\ca{E}'$ are good substructures of $\ca{K}$ and $\ca{K}'$, respectively, with $v(E^{\times})$ pure in $\Gamma$, then any good map $\ca{E} \to \ca{E}'$ is a partial elementary map between $\ca{K}$ and $\ca{K}'$. \end{theorem} \noindent {\bf Coarsening.} To reduce the mixed characteristic case to the equal characteristic zero case we use coarsening. In this subsection $\ca{K}=(K, \Gamma, {{\boldsymbol{k}}};\dots)$ is a valued difference field. Let $\Delta$ be a convex subgroup of $\Gamma$, let $\dot{\Gamma}:= \Gamma/\Delta$ be the ordered quotient group, and let $\dot{v}:K^\times\to \dot{\Gamma}$ be the composition $K^\times \to \Gamma \to \dot{\Gamma}$ of $v$ with the canonical map $\Gamma\to\dot{\Gamma}$, so $\dot{v}$ is again a valuation. Let $\dot{\ca{O}}$ be the valuation ring of $\dot{v}$, and $\dot{\fr{m}}$ its maximal ideal, so \begin{eqnarray*} \dot{\ca{O}}&=&\{x\in K:\ v(x)\geq\delta,\ \mbox{for some}\ \delta\in\Delta\}\ \supseteq\ \ca{O}:= \ca{O}_v,\\ \dot{\fr{m}}&=&\{x\in K:\ v(x)>\Delta\}\ \subseteq\ \fr{m}. \end{eqnarray*} Let $\dot{{{\boldsymbol{k}}}}=\dot{\ca{O}}/\dot{\fr{m}}$ be the residue field for $\dot{v}$ and let $\dot{\pi}: \dot{\ca{O}} \to \dot{{{\boldsymbol{k}}}}$ be the canonical map. This gives a valued difference field $\dot{\ca{K}}:=(K,\dot{\Gamma}, \dot{{{\boldsymbol{k}}}};\ \dot{v}, \dot{\pi})$ satisfying Axiom 1. 
Some other axioms are also preserved: \begin{lemma}\label{coarse1} If $\ca{K}$ satisfies Axiom $2$, so does $\dot{\ca{K}}$. If $\ca{K}$ satisfies Axiom $2$ and is $\sigma$-henselian, then $\dot{\ca{K}}$ is $\sigma$-henselian. \end{lemma} \begin{proof} The claim about Axiom 2 is obvious. Assume $\ca{K}$ satisfies Axiom 2 and is $\sigma$-henselian. Let $G(x)$ over $\dot{\ca{O}}$ (of order $\le n$) be $\sigma$-henselian at $a\in \dot{\ca{O}}$, with respect to $\dot{\ca{K}}$. It is easy to check that then $G,a$ is in $\sigma$-hensel configuration with respect to $\ca{K}$. Lemma~\ref{hensel-conf} and Remark~\ref{scan} then yield $b\in K$ such that $v(a-b)= v\big(G(a)\big)- \min_{|\i|=1}v\big(G_{(\i)}(a)\big)$, and thus $\dot{v}(a-b)=\dot{v}\big(G(a)\big)$, as desired. \end{proof} \noindent Let $\dot{\sigma}$ be the automorphism of the field $\dot{{{\boldsymbol{k}}}}$ induced by the difference operator $\sigma$ of $\dot{\ca{K}}$. The field $\dot{{{\boldsymbol{k}}}}$ carries the valuation $v_\Delta: \dot{{{\boldsymbol{k}}}}^\times \to \Delta$ given by $v_\Delta(x+\dot{\fr{m}})=v(x)$ for $x$ a unit of $\dot{\ca{O}}$. The valuation ring of $v_\Delta$ is $\dot{\pi}(\ca{O})$, and we have the surjective ring morphism $\pi_{\Delta}\ :\ \dot{\pi}(\ca{O}) \to {{\boldsymbol{k}}}$ given by $\pi_{\Delta}\big(\dot{\pi}(a)\big)= \pi(a)$ for all $a\in \ca{O}$. Note that $$\big((\dot{{{\boldsymbol{k}}}},\dot{\sigma}), \Delta, {{\boldsymbol{k}}};\ v_\Delta,\pi_\Delta\big)$$ is a valued difference field satisfying Axiom 1 with $\dot{\sigma}$ inducing on the residue field ${{\boldsymbol{k}}}$ the same automorphism $\bar{\sigma}$ as the difference operator $\sigma$ of $\ca{K}$ does. The following is now immediate: \begin{lemma}\label{coarse2} If $\ca{K}$ satisfies Axiom $3$, so does $\dot{\ca{K}}$. 
\end{lemma} \noindent Let $\dot{{{\boldsymbol{k}}}}(*)$ be the expansion $(\dot{{{\boldsymbol{k}}}},\dot{\sigma}, \dot{\pi}(\ca{O}))$ of the difference field $(\dot{{{\boldsymbol{k}}}},\dot{\sigma})$, and let $\dot{\ca{K}}(*)$ be the corresponding expansion $(K, \dot{\Gamma}, \dot{{{\boldsymbol{k}}}}(*);\ \dot{v}, \dot{\pi})$ of $\dot{\ca{K}}$. Note that $\ca{O}$ is definable in the structure $\dot{\ca{K}}(*)$ by a formula that does not depend on $\ca{K}$: $$\ca{O}=\{a\in \dot{\ca{O}}:\ \dot{\pi}(a) \in \dot{\pi}(\ca{O})\}.$$ In this way we reconstruct $\ca{K}$ from $\dot{\ca{K}}(*)$. The advantage of working with $\dot{\ca{K}}(*)$ is that it has equicharacteristic $0$ if $\ca{K}$ has mixed characteristic and $v(p)\in \Delta$. \noindent Let now $\ca{K}$ be unramified with $\operatorname{char}{K}=0,\ \operatorname{char}{{{\boldsymbol{k}}}}=p>0$. Then $\mathbb{Z}\cdot v(p)$ is a convex subgroup of $\Gamma$. We set $\Delta:=\mathbb{Z} \cdot v(p)$ and note that then $\operatorname{char}{\dot{{{\boldsymbol{k}}}}}=0$. With these assumptions we have: \begin{lemma}\label{coarse3} If $\ca{K}$ is workable, so is $\dot{\ca{K}}$. \end{lemma} \begin{proof} Suppose $\ca{K}$ is workable. Then either $\ca{K}$ satisfies Axioms 2 and 3, or it satisfies Axiom 2 and is a Witt case with infinite ${{\boldsymbol{k}}}$. In the first case, $\dot{\ca{K}}$ also satisfies Axioms 2 and 3 by Lemmas~\ref{coarse1} and ~\ref{coarse2}, and is thus workable. It remains to consider the case that $\ca{K}$ satisfies Axiom 2 and is a Witt case with infinite ${{\boldsymbol{k}}}$. Then $\dot{\ca{K}}$ satisfies Axiom 2 by Lemma~\ref{coarse1}, and because ${{\boldsymbol{k}}}$ is infinite and $\bar{\sigma}$ is the frobenius map, we have $\bar{\sigma}^d\ne \text{id}$ for all $d>0$, and thus $\dot{\sigma}^d\ne \text{id}$ for all $d>0$. So $\dot{\ca{K}}$ satisfies Axiom 3 as well. 
\end{proof} \noindent Keeping the assumptions preceding Lemma~\ref{coarse3}, assume also that $\ca{K}$ is $\aleph_1$-saturated and ${{\boldsymbol{k}}}$ is perfect. Then the saturation assumption guarantees that $\dot{\pi}(\ca{O})$ is a complete discrete valuation ring of $\dot{{{\boldsymbol{k}}}}$. Since ${{\boldsymbol{k}}}$ is perfect, this gives a unique ring isomorphism $\iota: \operatorname{W}[{{\boldsymbol{k}}}]\ \cong\ \dot{\pi}(\ca{O})$ such that $\pi_{\Delta}\circ \iota :\operatorname{W}[{{\boldsymbol{k}}}]\to {{\boldsymbol{k}}}$ is the projection map $(a_0, a_1, a_2,\dots) \mapsto a_0$. Denote the extension of $\iota$ to a field isomorphism $\operatorname{W}({{\boldsymbol{k}}})\ \cong\ \dot{{{\boldsymbol{k}}}}$ also by $\iota$. If $\ca{K}$ is a Witt case, this gives an isomorphism $(\iota,\dots)$ of the Witt difference field $\operatorname{W}({{\boldsymbol{k}}})$ onto $(\dot{{{\boldsymbol{k}}}}, \Delta, {{\boldsymbol{k}}};\ v_\Delta,\pi_\Delta)$. \noindent {\bf Two lemmas.} The proof of the next lemma uses mainly the functoriality of $\operatorname{W}$. \begin{lemma}\label{lemm1} Let ${{\boldsymbol{k}}}_0$ be a perfect field with $\operatorname{char}({{\boldsymbol{k}}}_0)=p>0$, let ${{\boldsymbol{k}}}$ and ${{\boldsymbol{k}}}'$ be perfect extension fields of ${{\boldsymbol{k}}}_0$, and let $\sigma$ and $\sigma'$ be automorphisms of ${{\boldsymbol{k}}}$ and ${{\boldsymbol{k}}}'$, respectively. Let $\kappa$ be an uncountable cardinal such that the difference fields $({{\boldsymbol{k}}},\sigma)$ and $({{\boldsymbol{k}}}', \sigma')$ are $\kappa$-saturated, $|{{\boldsymbol{k}}}_0| < \kappa$, and $({{\boldsymbol{k}}}, \sigma) \equiv_{{{\boldsymbol{k}}}_0} ({{\boldsymbol{k}}}', \sigma')$. 
Then, as difference rings, $$ \big(\operatorname{W}[{{\boldsymbol{k}}}], \operatorname{W}[\sigma]\big)\ \equiv_{\operatorname{W}[{{\boldsymbol{k}}}_0]}\ \big(\operatorname{W}[{{\boldsymbol{k}}}'], \operatorname{W}[\sigma']\big).$$ \end{lemma} \begin{proof} We just apply the functor $\operatorname{W}$ to a suitable back-and-forth system between $({{\boldsymbol{k}}}, \sigma)$ and $({{\boldsymbol{k}}}', \sigma')$. In detail, let $({{\boldsymbol{k}}}_1, \sigma_1)$ range over the difference subfields of $({{\boldsymbol{k}}},\sigma)$ such that ${{\boldsymbol{k}}}_0 \subseteq {{\boldsymbol{k}}}_1$ and $|{{\boldsymbol{k}}}_1| < \kappa$, and let $({{\boldsymbol{k}}}_2, \sigma_2)$ range over the difference subfields of $({{\boldsymbol{k}}}',\sigma')$ such that ${{\boldsymbol{k}}}_0 \subseteq {{\boldsymbol{k}}}_2$ and $|{{\boldsymbol{k}}}_2| < \kappa$. Let $\Phi$ be the set of all difference field isomorphisms $\phi: ({{\boldsymbol{k}}}_1, \sigma_1) \to ({{\boldsymbol{k}}}_2, \sigma_2)$ that are the identity on ${{\boldsymbol{k}}}_0$ and are partial elementary maps between $({{\boldsymbol{k}}},\sigma)$ and $({{\boldsymbol{k}}}', \sigma')$. Note that some $\phi\in \Phi$ maps the definable closure of ${{\boldsymbol{k}}}_0$ in $({{\boldsymbol{k}}}, \sigma)$ onto the definable closure of ${{\boldsymbol{k}}}_0$ in $({{\boldsymbol{k}}}', \sigma')$, so $\Phi\ne \emptyset$ and $\Phi$ is a back-and-forth system between $({{\boldsymbol{k}}}, \sigma)$ and $({{\boldsymbol{k}}}', \sigma')$. 
The functorial properties of $\operatorname{W}$ and $\kappa$-saturation yield a back-and-forth system $\operatorname{W}[\Phi]$ between $\big(\operatorname{W}[{{\boldsymbol{k}}}], \operatorname{W}[\sigma]\big)$ and $\big(\operatorname{W}[{{\boldsymbol{k}}}'], \operatorname{W}[\sigma']\big)$ consisting of the $$\operatorname{W}[\phi]\ :\ \big(\operatorname{W}[{{\boldsymbol{k}}}_1], \operatorname{W}[\sigma_1]\big) \to \big(\operatorname{W}[{{\boldsymbol{k}}}_2], \operatorname{W}[\sigma_2]\big)$$ with $\phi: ({{\boldsymbol{k}}}_1, \sigma_1) \to ({{\boldsymbol{k}}}_2, \sigma_2)$ an element of $\Phi$. \end{proof} \noindent A similar use of functoriality gives: \begin{lemma}\label{lemm2} Let $\Gamma_0$ be an ordered abelian group with smallest positive element $1$, let $\Gamma$ and $\Gamma'$ be ordered abelian extension groups of $\Gamma_0$ with the same smallest positive element $1$. Let $\kappa$ be an uncountable cardinal such that $\Gamma$ and $\Gamma'$ are $\kappa$-saturated, $|\Gamma_0| < \kappa$, and $\Gamma \equiv_{\Gamma_0} \Gamma'$. Let $\Delta$ be the common convex subgroup $\mathbb{Z}\cdot 1$ of $\Gamma_0, \Gamma$ and $\Gamma'$. Then the ordered quotient groups $\dot{\Gamma}:=\Gamma/\Delta $ and $\dot{\Gamma}':=\Gamma'/\Delta$ are elementarily equivalent over their common ordered subgroup $\dot{\Gamma}_0:=\Gamma_0/\Delta$. \end{lemma} \noindent {\bf Equivalence in mixed characteristic.} In this final subsection we fix a prime number $p$, and $\ca{K}=(K, \Gamma, {{\boldsymbol{k}}};\ v, \pi)$ is a $\sigma$-henselian valued difference field such that $\operatorname{char}(K)=0$, ${{\boldsymbol{k}}}$ is perfect with $\operatorname{char}({{\boldsymbol{k}}})=p$, and $v(p)$ is the smallest positive element of $\Gamma$. Moreover, assume either that ${{\boldsymbol{k}}}$ is infinite and $\bar{\sigma}(x)=x^p$ for all $x\in {{\boldsymbol{k}}}$ (the Witt case), or that ${{\boldsymbol{k}}}$ satisfies Axiom 2. 
In particular, $\ca{K}$ is workable and $\ca{K}$ is not equipped here with an angular component map. We make the corresponding assumptions about $\ca{K}'=(K', \Gamma', {{\boldsymbol{k}}}';\ v', \pi')$. Also, assume that $\ca{E}=(E,\Gamma_{\ca{E}}, {{\boldsymbol{k}}}_{\ca{E}})$ and $\ca{E}'=(E',\Gamma_{\ca{E'}}, {{\boldsymbol{k}}}_{\ca{E'}})$ are good substructures of $\ca{K}$ and $\ca{K}'$, respectively, in the $\operatorname{ac}$-free sense specified at the end of Section~\ref{et}, where we defined the corresponding $\operatorname{ac}$-free notion of a {\em good map\/} $\ca{E} \to \ca{E}'$. Theorem~\ref{embed11a} goes through in the present setting: \begin{theorem}\label{embed13} Suppose that $v(E^{\times})$ is pure in $\Gamma$ and $\mathbf{f}: \ca{E} \to \ca{E}'$ is a good map. Then $\mathbf{f}$ is a partial elementary map between $\ca{K}$ and $\ca{K}'$. \end{theorem} \begin{proof} We first arrange that $\ca{K}$ and $\ca{K}'$ are $\kappa$-saturated, where $\kappa$ is an uncountable cardinal such that $|{{\boldsymbol{k}}}_{\ca{E}}|$, $|\Gamma_{\ca{E}}| < \kappa$. To simplify notation we identify $\ca{E}$ and $\ca{E}'$ via $\mathbf{f}$, so $\mathbf{f}$ becomes the identity on $\ca{E}$. We have to show that then $\ca{K}\equiv_{\ca{E}}\ca{K}'$. With $v(p)=1$ as the smallest positive element of $\Gamma_0:= \Gamma_{\ca{E}}$ and of $\Gamma$ and $\Gamma'$ and using the notations of Lemma~\ref{lemm2} we have $\dot{\Gamma} \equiv_{\dot{\Gamma}_0} \dot{\Gamma}'$ by that same lemma. From the purity of $v(E^{\times})$ in $\Gamma$ it follows that $\dot{v}(E^{\times})$ is pure in $\dot{\Gamma}$. Since $\ca{K}$ and $\ca{K}'$ are $\aleph_1$-saturated, it is harmless to identify $\dot{{{\boldsymbol{k}}}}$ and $\dot{{{\boldsymbol{k}}}}'$ with the fields $\operatorname{W}({{\boldsymbol{k}}})$ and $\operatorname{W}({{\boldsymbol{k}}}')$, respectively. 
Then the respective valuation rings $\dot{\pi}(\ca{O})$ and $\dot{\pi}'(\ca{O}')$ of $\dot{{{\boldsymbol{k}}}}$ and $\dot{{{\boldsymbol{k}}}}'$ are $\operatorname{W}[{{\boldsymbol{k}}}]$ and $\operatorname{W}[{{\boldsymbol{k}}}']$, and we have the common subfield $\dot{{{\boldsymbol{k}}}}_{\ca{E}}:=\operatorname{W}({{\boldsymbol{k}}}_{\ca{E}})$ of $\dot{{{\boldsymbol{k}}}}$ and $\dot{{{\boldsymbol{k}}}}'$. It now follows from Lemma~\ref{lemm1} that $\dot{{{\boldsymbol{k}}}}(*)\ \equiv_{\dot{{{\boldsymbol{k}}}}_{\ca{E}}}\ \dot{{{\boldsymbol{k}}}}'(*)$. Hence the assumptions of Theorem~\ref{embed12a} are satisfied with $\dot{\ca{K}}(*)$, and $\dot{\ca{K}}'(*)$ in the role of $\ca{K}$ and $\ca{K}'$, and $\dot{\ca{E}}(*):=(E, \dot{\Gamma}_0,\dot{{{\boldsymbol{k}}}}_{\ca{E}})$ in the role of both $\ca{E}$ and $\ca{E}'$. and with the identity on $\dot{\ca{E}}(*)$ as a good map. This theorem therefore gives $$\dot{\ca{K}}(*)\ \equiv_{\dot{\ca{E}}(*)} \dot{\ca{K}}'(*) .$$ This yields $\ca{K}\equiv_{\ca{E}}\ca{K}'$ by what we observed just after Lemma~\ref{coarse2}. \end{proof} \begin{corollary}\label{comp06} $\ca{K} \equiv \ca{K}'$ if and only if ${{\boldsymbol{k}}} \equiv {{\boldsymbol{k}}}'$ as difference fields and $\Gamma \equiv \Gamma'$ as ordered abelian groups. \end{corollary} \begin{proof} The ``only if'' direction is obvious. Suppose ${{\boldsymbol{k}}} \equiv {{\boldsymbol{k}}}'$ as difference fields, and $\Gamma \equiv \Gamma'$ as ordered groups. Then we have good substructures $\ca{E}:=(\mathbb{Q},\mathbb{Z}, \mathbb{F}_p)$ of $\ca{K}$, and $\ca{E}':=(\mathbb{Q},\mathbb{Z},\mathbb{F}_p)$ of $\ca{K}'$, and an obviously good map $\ca{E} \to \ca{E}'$. Now apply Theorem~\ref{embed13}. 
\end{proof} \noindent In particular, any $\sigma$-henselian Witt case valued difference field satisfying Axiom 2, with infinite residue field ${{\boldsymbol{k}}}$ and value group $\Gamma\equiv \mathbb{Z}$ as ordered abelian groups, is elementarily equivalent to the Witt difference field $\operatorname{W}({{\boldsymbol{k}}})$. The next result follows from Theorem~\ref{embed13} in the same way as Corollary~\ref{comp01} from Theorem~\ref{embed11}. \begin{corollary}\label{comp07} Let $\ca{E}=(E, \Gamma_E, {{\boldsymbol{k}}}_E;\dots)$ be a $\sigma$-henselian valued difference subfield of $\ca{K}$ satisfying Axiom $2$ such that ${{\boldsymbol{k}}}_E\preceq {{\boldsymbol{k}}}$ as difference fields, and $\Gamma_E \preceq \Gamma$ as ordered abelian groups. Then $\ca{E} \preceq \ca{K}$. \end{corollary} \noindent Theorem~\ref{embed13} does not seem to give a nice relative quantifier elimination such as Corollary~\ref{qe} but it does yield the analogue of Corollary~\ref{comp02}: \begin{corollary}\label{comp08} Each subset of ${{\boldsymbol{k}}}^m\times \Gamma^n$ that is definable in $\ca{K}$ is a finite union of rectangles $X\times Y$ with $X\subseteq {{\boldsymbol{k}}}^m$ definable in the difference field ${{\boldsymbol{k}}}$ and $Y\subseteq \Gamma^n$ definable in the ordered abelian group $\Gamma$. \end{corollary} \begin{proof} By standard arguments we can reduce to the following situation: $\ca{K}$ is $\aleph_1$-saturated, $\ca{E}=(E, \Gamma_E, {{\boldsymbol{k}}}_E;\dots)\preceq \ca{K}$ is countable, $r, r'\in {{\boldsymbol{k}}}^m$ have the same type over ${{\boldsymbol{k}}}_E$, and $\gamma, \gamma'\in \Gamma^n$ have the same type over $\Gamma_E$: \begin{align*} \tp(r|{{\boldsymbol{k}}}_E)\ &=\ \tp(r'|{{\boldsymbol{k}}}_E) \qquad(\text{in the difference field }{{\boldsymbol{k}}}),\\ \tp(\gamma|\Gamma_E)\ &=\ \tp(\gamma'|\Gamma_E) \qquad(\text{in the ordered abelian group }\Gamma). 
\end{align*} It suffices to show that then $(r,\gamma)$ and $(r',\gamma')$ have the same type over $\ca{E}$ in $\ca{K}$. Let ${{\boldsymbol{k}}}_1$ and ${{\boldsymbol{k}}}_1'$ be the definable closures of ${{\boldsymbol{k}}}_E(r)$ and ${{\boldsymbol{k}}}_E(r')$ in the difference field ${{\boldsymbol{k}}}$, and let $\Gamma_1$ and $\Gamma_1'$ be the ordered subgroups of $\Gamma$ generated over $\Gamma_E$ by $\gamma$ and $\gamma'$. Then $(E, \Gamma_1, {{\boldsymbol{k}}}_1)$ and $(E, \Gamma_1', {{\boldsymbol{k}}}_1')$ are good substructures of $\ca{K}$, and the assumption on types yields a good map $(E, \Gamma_1, {{\boldsymbol{k}}}_1)\to (E, \Gamma_1', {{\boldsymbol{k}}}')$ that is the identity on $(E, \Gamma_E, {{\boldsymbol{k}}}_E)$, sends $\gamma$ to $\gamma'$ and $r$ to $r'$. Note also that $v(E^\times)=\Gamma_E$ is pure in $\Gamma$. It remains to apply Theorem~\ref{embed13}. \end{proof} \end{section} \end{document}
\begin{document} \title{Projections over Quantum Homogeneous Odd-dimensional Spheres\thanks{This work was partially supported by the grant H2020-MSCA-RISE-2015-691246-QUANTUM DYNAMICS and the Polish government grant 3542/H2020/2016/2.}} \date{} \author{Albert Jeu-Liang Sheu\\{\small Department of Mathematics, University of Kansas, Lawrence, KS 66045, U. S. A.}\\{\small e-mail: [email protected]}} \maketitle \begin{abstract} We give a complete classification of isomorphism classes of finitely generated projective modules, or equivalently, unitary equivalence classes of projections, over the C*-algebra $C\left( \mathbb{S}_{q}^{2n+1}\right) $ of the quantum homogeneous sphere $\mathbb{S}_{q}^{2n+1}$. Then we explicitly identify as concrete elementary projections the quantum line bundles $L_{k}$ over the quantum complex projective space $\mathbb{C}P_{q}^{n}$ associated with the quantum Hopf principal $U\left( 1\right) $-bundle $\mathbb{S} _{q}^{2n+1}\rightarrow\mathbb{C}P_{q}^{n}$. \end{abstract} \pagebreak \section{Introduction} In the theory of quantum/noncommutative geometry popularized by Connes \cite{Conn}, C*-algebras are often viewed as the algebra $C\left( X_{q}\right) $ of continuous functions on a virtual quantum space $X_{q}$, and finitely generated projective (left) $C\left( X_{q}\right) $-module $\Gamma\left( E_{q}\right) $ are viewed as virtual vector bundles over the quantum space $X_{q}$. The former viewpoint is motivated by Gelfand's Theorem identifying all commutative C*-algebras as exactly function algebras $C_{0}\left( X\right) $ for locally compact Hausdorff spaces $X$, while the latter is motivated by Swan's Theorem \cite{Swan} characterizing all finitely generated projective $C\left( X\right) $-modules for a compact Hausdorff space $X$ as exactly the spaces $\Gamma\left( E\right) $ of continuous cross-sections of vector bundles $E$ over $X$. 
As spheres and projective spaces provide fundamentally important examples for the classical study of topology and geometry, quantum versions of spheres and projective spaces have been developed and provide important examples for the study of quantum geometry. In particular, from the quantum group viewpoint \cite{Dr,Woro,Wo:cm}, Soibelman, Vaksman, Meyer and others \cite{VaSo,Me,Na,Sh:qcp} introduced and studied the homogeneous odd-dimensional quantum sphere $\mathbb{S}_{q}^{2n+1}$ and the associated quantum complex projective space $\mathbb{C}P_{q}^{n}$, and from the multipullback viewpoint, Hajac and his collaborators including Baum, Kaygun, Matthes, Nest, Pask, Sims, Szyma\'{n}ski, Zieli\'{n}ski, and others \cite{HaMaSz,HaKaZi,HaNePaSiZi} developed and studied the multipullback odd-dimensional quantum sphere $\mathbb{S}_{H}^{2n+1}$ and the associated quantum complex projective space $\mathbb{P}^{n}\left( \mathcal{T}\right) $. As in the classical situation, the above mentioned quantum odd-dimensional spheres and their associated quantum complex projective spaces provide a quantum Hopf principal $U\left( 1\right) $-bundle, from which some associated quantum line bundles $L_{k}$, or rank-one projective modules over the quantum algebra of the complex projective space, for $k\in\mathbb{Z}$ are constructed \cite{Me,AriBrLa,HaMaSz,HaNePaSiZi}. It is well known that classifying up to isomorphism all vector bundles over a space $X$ in the classical case, or finitely generated (left) projective modules over a C*-algebra $C\left( X_{q}\right) $ in the quantum case, is an interesting but difficult task. A major challenge in such classification is the so-called cancellation problem \cite{Ri:dsr,Ri:ct}, which deals with determining whether the stable isomorphism between such objects determined by $K$-theoretic analysis implies their isomorphism.
In this paper, we use the powerful groupoid approach to C*-algebras initiated by Renault \cite{Rena} and popularized by Curto, Muhly, and Renault \cite{CuMu,MuRe} to study the C*-algebra structures of $C\left( \mathbb{S}_{q}^{2n+1}\right) $ and $C\left( \mathbb{C}P_{q}^{n}\right) $. In this framework, we obtain a complete classification of projections over $C\left( \mathbb{S}_{q}^{2n+1}\right) $ up to equivalence, extending the result of Bach \cite{Ba}, and determine the canonical monoid structure on the collection of all equivalence classes of projections over $C\left( \mathbb{S}_{q}^{2n+1}\right) $ with the diagonal sum $\boxplus$ as its binary operation. In particular, we get infinitely many inequivalent projections over $C\left( \mathbb{S}_{q}^{2n+1}\right) $ which are stably equivalent over $C\left( \mathbb{S}_{q}^{2n+1}\right) $, showing that the cancellation property does not hold for projections over $C\left( \mathbb{S}_{q}^{2n+1}\right) $, as elaborated in Corollary 1. Then we proceed to present a set of elementary projections that freely generate $K_{0}\left( C\left( \mathbb{C}P_{q}^{n}\right) \right) $ and represent the line bundles $L_{k}$ over $\mathbb{C}P_{q}^{n}$ by concrete $\boxplus$-sums of elementary projections. We mention that a similar study has been carried out for the multipullback quantum spheres $\mathbb{S}_{H}^{2n+1}$ and the associated projective space $\mathbb{P}^{n}\left( \mathcal{T}\right) $ in the papers \cite{Sh:qps,Sh:pmqpl}, and an interesting geometric study via the Milnor construction is presented by Farsi, Hajac, Maszczyk, and Zieli\'{n}ski in \cite{FHMZ} for $C\left( \mathbb{P}^{2}\left( \mathcal{T}\right) \right) $.
Among works in the literature related to our topic here, we mention that the graph C*-algebra of any row-finite graph, including $C\left( \mathbb{S}_{q}^{2n+1}\right) $, satisfies the so-called stable weak cancellation property \cite{AraMoPa}, and that a \textquotedblleft geometric\textquotedblright\ realization of generators of $K_{0}\left( C\left( \mathbb{C}P_{q}^{n}\right) \right) $ using the Milnor connecting homomorphism is found in \cite{ADHT}, besides the geometric study of quantum line bundles over $\mathbb{C}P_{q}^{n}$ in \cite{AriBrLa}. It would be of interest to take a close look at potential underlying connections between these works and ours. (The author thanks the referee for relevant references to the literature.) The author would like to thank Prof. Dabrowski for hosting his visit to SISSA, Trieste, Italy in the spring of 2018, and also thank him and Prof. Landi for useful discussions and questions about quantum odd-dimensional spheres and quantum complex projective spaces. \section{Preliminary notations} In this paper, we use freely the basic techniques and manipulations for $K$-theory of C*-algebras, or more generally, Banach algebras, found in \cite{Blac,Tay}. Commonly used notations like $M_{\infty}\left( \mathcal{A}\right) $, $GL_{\infty}\left( \mathcal{A}\right) $, unitization $\mathcal{A}^{+}$, diagonal sum $P\boxplus Q$ of elements $P,Q\in M_{\infty}\left( \mathcal{A}\right) $, the identity component $G^{0}$ of a topological group $G$, the positive cone $K_{0}\left( \mathcal{A}\right) _{+}$ of $K_{0}\left( \mathcal{A}\right) $, $\mathcal{B}\left( \mathcal{H}\right) $, $\mathcal{K}\left( \mathcal{H}\right) $, and others will not be explained in detail here, and we refer to the notations section in \cite{Sh:qps} for further clarification.
By a projection (or an idempotent) over a C*-algebra $\mathcal{A}$, we mean a projection (or an idempotent) in the algebra $M_{\infty}\left( \mathcal{A} \right) $ of all finite matrices with entries in $\mathcal{A}$. Two projections (or idempotents) $P,Q\in M_{\infty}\left( \mathcal{A}\right) $ are called equivalent over $\mathcal{A}$, denoted as $P\sim_{\mathcal{A}}Q$, if there is an invertible $U\in GL_{\infty}\left( \mathcal{A}\right) $ such that $UPU^{-1}=Q$. We recall that the mapping $P\mapsto\mathcal{A}^{n}P$ induces a bijective correspondence between the equivalence (respectively, the stable equivalence) classes of idempotents over $\mathcal{A}$ and the isomorphism (respectively, the stable isomorphism) classes of finitely generated projective modules over $\mathcal{A}$ \cite{Blac}, where by a module over $\mathcal{A}$, we mean a left $\mathcal{A}$-module, unless otherwise specified. We also recall that the $K_{0}$-group $K_{0}\left( \mathcal{A}\right) $ classifies idempotents over $\mathcal{A}$ up to stable equivalence. The classification of idempotents up to equivalence, appearing as the so-called cancellation problem, was popularized by Rieffel's pioneering work \cite{Ri:dsr,Ri:ct} and is in general an interesting but difficult question. For a C*-algebra homomorphism $h:\mathcal{A}\rightarrow\mathcal{B}$, we use the same symbol $h$, instead of the more formal symbol $M_{\infty}\left( h\right) $, to denote the algebra homomorphism $M_{\infty}\left( \mathcal{A}\right) \rightarrow M_{\infty}\left( \mathcal{B}\right) $ that applies $h$ to each entry of any matrix in $M_{\infty}\left( \mathcal{A} \right) $. The set of all equivalence classes of idempotents, or equivalently, all unitary equivalence classes of projections, over a C*-algebra $\mathcal{A}$ is an abelian monoid $\mathfrak{P}\left( \mathcal{A}\right) $ with its binary operation provided by the diagonal sum $\boxplus$. 
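For orientation, we recall a standard classical example (not part of the present quantum setting) showing that stable equivalence is genuinely weaker than equivalence: over $\mathcal{A}=C\left( \mathbb{S}^{2}\right) $, the idempotent corresponding to the tangent bundle $T\mathbb{S}^{2}$ is stably equivalent, but not equivalent, to the idempotent of the trivial rank-$2$ bundle, since
\[
T\mathbb{S}^{2}\oplus\varepsilon^{1}\cong\varepsilon^{3}\qquad\text{while}\qquad T\mathbb{S}^{2}\ncong\varepsilon^{2},
\]
where $\varepsilon^{k}$ denotes the trivial bundle of rank $k$; the first isomorphism comes from the trivial normal bundle of $\mathbb{S}^{2}\subset\mathbb{R}^{3}$, and the second non-isomorphism follows from the hairy ball theorem. The results below exhibit the same failure of cancellation over $C\left( \mathbb{S}_{q}^{2n+1}\right) $.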
In the following, we use the notations $\mathbb{Z}_{\geq k}:=\left\{ n\in\mathbb{Z}|n\geq k\right\} $ and $\mathbb{Z}_{\geq}:=\mathbb{Z}_{\geq0}$. In particular, $\mathbb{N}=\mathbb{Z}_{\geq1}$. We use $I$ to denote the identity operator canonically contained in $\mathcal{K}^{+}\subset \mathcal{B}\left( \ell^{2}\left( \mathbb{Z}_{\geq}\right) \right) $, and \[ P_{m}:=\sum_{i=1}^{m}e_{ii}\in M_{m}\left( \mathbb{C}\right) \subset \mathcal{K} \] to denote the standard $m\times m$ identity matrix in $M_{m}\left( \mathbb{C}\right) \subset\mathcal{K}$ for any integer $m\geq0$ (with $M_{0}\left( \mathbb{C}\right) =0$ and $P_{0}=0$ understood). We also use the notation \[ P_{-m}:=I-P_{m}\in\mathcal{K}^{+} \] for integers $m>0$, and take symbolically $P_{-0}\equiv I-P_{0}=I\neq P_{0}$. This abuse of notation should cause no trouble, since we will never formally add the subscripts of these $P$-projections without explicit clarification. \section{Quantum spaces as groupoid C*-algebras} In the following, we work with some concrete $r$-discrete (or \'{e}tale) groupoids and use them to analyze and encode important structures of quantum $\mathbb{S}_{q}^{2n+1}$ and quantum $\mathbb{C}P_{q}^{n}$ in the context of groupoid C*-algebras. This groupoid approach to C*-algebras was popularized by the work of Curto, Muhly, and Renault \cite{CuMu,MuRe,SSU} and shown to be useful in the study of quantum homogeneous spaces \cite{Sh:cqg,Sh:qpsu,Sh:qs,Sh:qcp}. We refer readers to Renault's pioneering book \cite{Rena} for the fundamental theory of groupoid C*-algebras. We denote by $\overline{\mathbb{Z}}:=\mathbb{Z}\cup\left\{ +\infty\right\} $ the discrete space $\mathbb{Z}$ with a point $+\infty\equiv\infty$ canonically adjoined as a limit point at the positive end, and regard $\mathbb{Z}_{\geq}\equiv\left\{ n\in\mathbb{Z}|n\geq0\right\} $ as a subset of $\overline{\mathbb{Z}}$.
(We could also take $\overline{\mathbb{Z}}$ to be the one-point compactification of the discrete space $\mathbb{Z}$ in this paper since essentially we work only with groupoids restricted to a positive cone of their unit spaces.) The group $\mathbb{Z}$ acts by homeomorphisms on $\overline{\mathbb{Z}}$ in the canonical way, namely, by translations on $\mathbb{Z}$ while fixing the point $\infty$. More generally, the group $\mathbb{Z}^{n}$ acts on $\overline{\mathbb{Z}}^{n}$ componentwise in such a way. Let $\mathcal{F}^{n}:={\mathbb{Z}}\times\left( {\mathbb{Z}}^{n}\ltimes \overline{{\mathbb{Z}}}^{n}\right) |_{\overline{{\mathbb{Z}}}_{\geq}^{n}}$ with $n\geq1$ be the direct product of the group $\mathbb{Z}$ and the transformation groupoid $\mathbb{Z}^{n}\ltimes\overline{\mathbb{Z}}^{n}$ restricted to the positive \textquotedblleft cone\textquotedblright\ $\overline{\mathbb{Z}_{\geq}}^{n}$, where $\overline{\mathbb{Z}_{\geq}}$ is the closure $\mathbb{Z}_{\geq}\cup\left\{ \infty\right\} $ of $\mathbb{Z}_{\geq}$ in $\overline{\mathbb{Z}}$. (Later we also use $\overline{\mathbb{Z}}_{\geq}$ to denote this positive part $\overline{\mathbb{Z}_{\geq}}$ of $\overline{\mathbb{Z}}$.) As shown in \cite{Sh:qs}, $C\left( \mathbb{S}_{q}^{2n+1}\right) \simeq C^{\ast}\left( \mathfrak{F}_{n}\right) $, where $\mathfrak{F}_{n}$ is a subquotient groupoid of $\mathcal{F}^{n}$, namely, $\mathfrak{F}_{n}:=\widetilde{\mathfrak{F}_{n}}/\sim$ for the subgroupoid \[ \widetilde{\mathfrak{F}_{n}}:=\{\left( z,x,w\right) \in\mathcal{F}^{n}|\;w_{i}=\infty\text{ with }1\leq i\leq n\ \ \text{implies} \] \[ x_{i}=-z-x_{1}-x_{2}-...-x_{i-1}\text{ and }x_{i+1}=...=x_{n}=0\} \] of $\mathcal{F}^{n}$, where $\sim$ is the equivalence relation generated by \[ \left( z,x,w\right) \sim\left( z,x,w_{1},...,w_{i}=\infty,\infty ,...,\infty\right) \] for all $\left( z,x,w\right) $ with $w_{i}=\infty$ for some $1\leq i\leq n$.
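For orientation, we recall the well-known one-dimensional prototypes of such restricted transformation groupoids (cf. \cite{MuRe,Rena}):
\[
C^{\ast}\left( \left( \mathbb{Z}\ltimes\overline{\mathbb{Z}}\right) |_{\overline{\mathbb{Z}}_{\geq}}\right) \cong\mathcal{T}\qquad\text{and}\qquad C^{\ast}\left( \left( \mathbb{Z}\ltimes\mathbb{Z}\right) |_{\mathbb{Z}_{\geq}}\right) \cong\mathcal{K}\left( \ell^{2}\left( \mathbb{Z}_{\geq}\right) \right) ,
\]
where $\mathcal{T}$ is the Toeplitz algebra generated by the unilateral shift. Thus $\mathfrak{F}_{n}$ is a subquotient of the direct product of the group $\mathbb{Z}$ with $n$ copies of the groupoid underlying the Toeplitz algebra.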
The unit space of $\mathfrak{F}_{n}$ is $Z:=\overline{{\mathbb{Z}}}_{\geq} ^{n}/\sim$ where $\overline{{\mathbb{Z}}}_{\geq}^{n}$ is the unit space of $\widetilde{\mathfrak{F}_{n}}\subset\mathcal{F}^{n}$ embedded in $\widetilde{\mathfrak{F}_{n}}$ as the $\sim$-invariant subset $\left\{ 0\right\} \times\left\{ 0\right\} \times\overline{{\mathbb{Z}}}_{\geq}^{n}$. Let $\pi_{n}$ denote the faithful *-representation of the groupoid C*-algebra $C^{\ast}\left( \mathfrak{F}_{n}\right) $ canonically constructed on the Hilbert space $\ell^{2}\left( \mathbb{Z}\times\mathbb{Z}_{\geq}^{n}\right) =\ell^{2}\left( \mathbb{Z}\right) \otimes\ell^{2}\left( \mathbb{Z}_{\geq }^{n}\right) $ built from the open dense orbit $\mathbb{Z}_{\geq}^{n}$ in the unit space $Z$ of $\mathfrak{F}_{n}$. For practical purposes, we often identify $C^{\ast}\left( \mathfrak{F}_{n}\right) $ with the concrete operator algebra $\pi_{n}\left( C^{\ast}\left( \mathfrak{F}_{n}\right) \right) $ without making explicit distinction. Note that by restricting $\mathfrak{F}_{n}$ to the open subset $\mathbb{Z}_{\geq}^{n}$, we get the groupoid $\mathfrak{F}_{n}|_{\mathbb{Z}_{\geq}^{n}}\cong\mathbb{Z} \times\left( \left( \mathbb{Z}^{n}\ltimes\mathbb{Z}^{n}\right) |_{\mathbb{Z}_{\geq}^{n}}\right) $ with \[ C^{\ast}\left( \mathfrak{F}_{n}|_{\mathbb{Z}_{\geq}^{n}}\right) \cong C^{\ast}\left( \mathbb{Z}\right) \otimes C^{\ast}\left( \left( \mathbb{Z}^{n}\ltimes\mathbb{Z}^{n}\right) |_{\mathbb{Z}_{\geq}^{n}}\right) \cong C\left( \mathbb{T}\right) \otimes\mathcal{K}\left( \ell^{2}\left( \mathbb{Z}_{\geq}^{n}\right) \right) \] under the representation $\pi_{n}$, where $C^{\ast}\left( \mathbb{Z}\right) \cong C\left( \mathbb{T}\right) $ acts on $\ell^{2}\left( \mathbb{Z} \right) \cong L^{2}\left( \mathbb{T}\right) $ by multiplication operators, and hence $C\left( \mathbb{T}\right) \otimes\mathcal{K}\left( \ell ^{2}\left( \mathbb{Z}_{\geq}^{n}\right) \right) $ can be viewed as a closed ideal of $\pi_{n}\left( C^{\ast}\left( 
\mathfrak{F}_{n}\right) \right) \equiv C^{\ast}\left( \mathfrak{F}_{n}\right) $. Note that the $\mathbb{Z}$-component of $\mathcal{F}^{n}\equiv{\mathbb{Z}}\times\left( {\mathbb{Z}}^{n}\ltimes\overline{{\mathbb{Z}}}^{n}\right) |_{\overline{{\mathbb{Z}}}_{\geq}^{n}}$ gives a grading on $C^{\ast}\left( \mathfrak{F}_{n}\right) $, decomposing it into (a completion of) a direct sum of subspaces indexed by $\mathbb{Z}$. More precisely, $\widetilde{\mathfrak{F}_{n}}$ is the union of the pairwise disjoint closed and open sets \[ \left( \widetilde{\mathfrak{F}_{n}}\right) _{k}:=\left\{ \left( k,x,w\right) \in\widetilde{\mathfrak{F}_{n}}|\ \left( x,w\right) \in{\mathbb{Z}}^{n}\times\overline{{\mathbb{Z}}}_{\geq}^{n}\right\} \] with $k\in\mathbb{Z}$ which are invariant under the equivalence relation $\sim$, so $\mathfrak{F}_{n}\equiv\widetilde{\mathfrak{F}_{n}}/\sim$ is the union of the pairwise disjoint closed and open sets \[ \left( \mathfrak{F}_{n}\right) _{k}:=\left( \widetilde{\mathfrak{F}_{n}}\right) _{k}/\sim \] and hence $C^{\ast}\left( \mathfrak{F}_{n}\right) $ is the closure of the (algebraic) direct sum $\oplus_{k\in\mathbb{Z}}C_{c}\left( \mathfrak{F}_{n}\right) _{k}$ where $C_{c}\left( \mathfrak{F}_{n}\right) _{k}:=C_{c}\left( \left( \mathfrak{F}_{n}\right) _{k}\right) $. In fact, the groupoid character $\left[ \left( k,x,w\right) \right] \in\mathfrak{F}_{n}\mapsto t^{k}\in\mathbb{T}$ for any fixed $t\in\mathbb{T}\equiv U\left( 1\right) $ defines an isometric *-automorphism of $L^{1}\left( \mathfrak{F}_{n}\right) $ and hence a C*-algebra automorphism $\rho\left( t\right) $ of $C^{\ast}\left( \mathfrak{F}_{n}\right) $, sending $\delta_{\left[ \left( k,x,w\right) \right] }$ to $t^{k}\delta_{\left[ \left( k,x,w\right) \right] }$. Clearly $\rho:t\mapsto\rho\left( t\right) $ defines a $U\left( 1\right) $-action on $C^{\ast}\left( \mathfrak{F}_{n}\right) $.
The degree-$k$ spectral subspace $C^{\ast}\left( \mathfrak{F}_{n}\right) _{k}$ of $C^{\ast}\left( \mathfrak{F}_{n}\right) $ under the action $\rho$, i.e. the set consisting of all elements $a\in C^{\ast}\left( \mathfrak{F}_{n}\right) $ with $\left( \rho\left( t\right) \right) \left( a\right) =t^{k}a$ for all $t\in\mathbb{T}$, is a closed linear subspace of $C^{\ast}\left( \mathfrak{F}_{n}\right) $ containing $C_{c}\left( \mathfrak{F}_{n}\right) _{k}$. Clearly $C^{\ast}\left( \mathfrak{F}_{n}\right) _{k}\cap C^{\ast}\left( \mathfrak{F}_{n}\right) _{k^{\prime}}=0$ for any $k\neq k^{\prime}$. The integration operator \[ \Lambda_{k}:a\in C^{\ast}\left( \mathfrak{F}_{n}\right) \mapsto a_{k} :=\int_{\mathbb{T}}t^{-k}\left( \rho\left( t\right) \right) \left( a\right) dt\in C^{\ast}\left( \mathfrak{F}_{n}\right) _{k}\subset C^{\ast }\left( \mathfrak{F}_{n}\right) \] is a well-defined continuous projection onto $C^{\ast}\left( \mathfrak{F} _{n}\right) _{k}$ and eliminates $C^{\ast}\left( \mathfrak{F}_{n}\right) _{l}$ for all $l\neq k$, where $\mathbb{T}$ is endowed with the standard Haar measure. Indeed for any $s\in\mathbb{T}$, \[ \left( \rho\left( s\right) \right) \left( a_{k}\right) =\int _{\mathbb{T}}t^{-k}\left( \rho\left( t\right) \right) \left( \rho\left( s\right) a\right) dt=s^{k}\int_{\mathbb{T}}\left( st\right) ^{-k}\left( \rho\left( st\right) \right) \left( a\right) dt \] \[ =s^{k}\int_{\mathbb{T}}t^{-k}\left( \rho\left( t\right) \right) \left( a\right) dt=s^{k}a_{k}, \] and for any $b\in C^{\ast}\left( \mathfrak{F}_{n}\right) _{l}$, \[ \Lambda_{k}\left( b\right) =\int_{\mathbb{T}}t^{-k}\left( \rho\left( t\right) \right) \left( b\right) dt=\int_{\mathbb{T}}t^{-k}t^{l} bdt=\left( \int_{\mathbb{T}}t^{l-k}dt\right) b=\delta_{kl}b. \] So $\Lambda_{k}$'s are mutually orthogonal projections in the sense that $\Lambda_{k}\circ\Lambda_{l}=\delta_{kl}\Lambda_{k}$. 
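In particular, $\Lambda_{0}$ is nothing but the canonical conditional expectation of $C^{\ast}\left( \mathfrak{F}_{n}\right) $ onto the fixed-point algebra
\[
C^{\ast}\left( \mathfrak{F}_{n}\right) _{0}=C^{\ast}\left( \mathfrak{F}_{n}\right) ^{U\left( 1\right) }
\]
obtained by averaging the action $\rho$ over the compact group $\mathbb{T}$; as for any strongly continuous action of a compact group on a C*-algebra, this averaging expectation is positive and faithful.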
With the (algebraic) sum $\sum_{k\in\mathbb{Z}}C_{c}\left( \mathfrak{F}_{n}\right) _{k}$ clearly dense in $C^{\ast}\left( \mathfrak{F}_{n}\right) $ and $C_{c}\left( \mathfrak{F}_{n}\right) _{k}\subset C^{\ast}\left( \mathfrak{F}_{n}\right) _{k}$, we see that $\overline{C_{c}\left( \mathfrak{F}_{n}\right) _{k} }=C^{\ast}\left( \mathfrak{F}_{n}\right) _{k}$ by applying the projection operator $\Lambda_{k}$ to any sequence in $\sum_{l\in\mathbb{Z}}C_{c}\left( \mathfrak{F}_{n}\right) _{l}$ converging to an element of $C^{\ast}\left( \mathfrak{F}_{n}\right) _{k}$. Furthermore we note that clearly $C_{c}\left( \mathfrak{F}_{n}\right) _{k}C_{c}\left( \mathfrak{F}_{n}\right) _{l}\subset C_{c}\left( \mathfrak{F}_{n}\right) _{k+l}$ and $C^{\ast}\left( \mathfrak{F}_{n}\right) _{k}C^{\ast}\left( \mathfrak{F}_{n}\right) _{l}\subset C^{\ast}\left( \mathfrak{F}_{n}\right) _{k+l}$ for all $k,l\in\mathbb{Z}$, i.e. $C_{c}\left( \mathfrak{F}_{n}\right) $ and $C^{\ast}\left( \mathfrak{F}_{n}\right) $ are graded algebras (up to completion). Recall that the group $U\left( 1\right) \equiv\mathbb{T}$ acts on $C\left( \mathbb{S}_{q}^{2n+1}\right) $ by sending the standard generators $u_{n+1,m}\in C\left( SU_{q}\left( n+1\right) \right) $, $1\leq m\leq n+1$, of $C\left( \mathbb{S}_{q}^{2n+1}\right) $ to $tu_{n+1,m}$ for each group element $t\in\mathbb{T}\subset\mathbb{C}$. This $U\left( 1\right) $-action, denoted as $\tau_{t}$ for $t\in\mathbb{T}$, decomposes $C\left( \mathbb{S}_{q}^{2n+1}\right) $ into spectral subspaces $C\left( \mathbb{S}_{q}^{2n+1}\right) _{k}$ of degree $k\in\mathbb{Z}$ consisting of elements $a\in C\left( \mathbb{S}_{q}^{2n+1}\right) $ satisfying $\tau _{t}\left( a\right) =t^{k}a$ for all $t\in\mathbb{T}$. Each $u_{n+1,m}$ is in the degree-$1$ spectral subspace $C\left( \mathbb{S}_{q}^{2n+1}\right) _{1}$. 
On the other hand, under the identification of $C\left( \mathbb{S} _{q}^{2n+1}\right) $ with $C^{\ast}\left( \mathfrak{F}_{n}\right) $ established in the work of \cite{Sh:cqg,Sh:qs}, each $u_{n+1,m}$ faithfully represented as $t_{n+1}\otimes\gamma^{\otimes n+1-m}\otimes\alpha^{\ast }\otimes1^{\otimes m-2}$ is identified with an element in $\overline {C_{c}\left( \mathfrak{F}_{n}\right) _{1}}=C^{\ast}\left( \mathfrak{F} _{n}\right) _{1}$. So the grading on $C^{\ast}\left( \mathfrak{F} _{n}\right) $ by $C^{\ast}\left( \mathfrak{F}_{n}\right) _{k}$ coincides with the grading on $C\left( \mathbb{S}_{q}^{2n+1}\right) $ by $C\left( \mathbb{S}_{q}^{2n+1}\right) _{k}$, i.e. $C\left( \mathbb{S}_{q} ^{2n+1}\right) _{k}=C^{\ast}\left( \mathfrak{F}_{n}\right) _{k}$. The degree-$0$ spectral subspace $C\left( \mathbb{S}_{q}^{2n+1}\right) _{0} $, or equivalently, the $U\left( 1\right) $-invariant subalgebra $\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) ^{U\left( 1\right) }$ of $C\left( \mathbb{S}_{q}^{2n+1}\right) $ can be naturally called the algebra of quantum $\mathbb{C}P^{n}$, denoted as $C\left( \mathbb{C}P_{q}^{n}\right) $. The embedding $C\left( \mathbb{C}P_{q}^{n}\right) \subset C\left( \mathbb{S}_{q}^{2n+1}\right) \equiv C^{\ast}\left( \mathfrak{F}_{n}\right) $, or virtually the quantum quotient map $\mathbb{S}_{q}^{2n+1}\rightarrow \mathbb{C}P_{q}^{n}$, is a quantum analogue of the Hopf principal $U\left( 1\right) $-bundle $\mathbb{S}^{2n+1}\rightarrow\mathbb{C}P^{n}$. Furthermore the degree-$k$ spectral subspaces $C\left( \mathbb{S}_{q}^{2n+1}\right) _{k}\equiv C^{\ast}\left( \mathfrak{F}_{n}\right) _{k}$ become the quantum line bundles, denoted $L_{k}$, over $\mathbb{C}P_{q}^{n}$ associated with the quantum principal $U\left( 1\right) $-bundle $\mathbb{S}_{q}^{2n+1} \rightarrow\mathbb{C}P_{q}^{n}$. 
Note that in the context of groupoid C*-algebras, $C\left( \mathbb{C}P_{q}^{n}\right) \equiv C\left( \mathbb{S}_{q}^{2n+1}\right) _{0}$ is canonically identified with the groupoid C*-algebra $C^{\ast}\left( \left( \mathfrak{F}_{n}\right) _{0}\right) $ where $\left( \mathfrak{F}_{n}\right) _{0}$ is clearly an open and closed subgroupoid of $\mathfrak{F}_{n}$. It is easy to see that the unit space of $\left( \mathfrak{F}_{n}\right) _{0}\subset\mathfrak{F}_{n}$ is the same unit space $Z\equiv\overline{{\mathbb{Z}}}_{\geq}^{n}/\sim$ that $\mathfrak{F}_{n}$ has. On the other hand, the quantum complex projective space $U\left( n\right) _{q}\backslash SU\left( n+1\right) _{q}$ has been formulated and studied by researchers from the viewpoint of quantum homogeneous spaces \cite{Na}. The author showed in \cite{Sh:qcp} that such a quantum space can be concretely realized by the C*-subalgebra generated by $u_{n+1,i}^{\ast}u_{n+1,j}$ with $1\leq i,j\leq n+1$ in $C\left( \mathbb{S}_{q}^{2n+1}\right) $, and then identified this C*-algebra with the groupoid C*-algebra $C^{\ast}(\mathfrak{T}_{n})$ of the subquotient groupoid $\mathfrak{T}_{n}:=\widetilde{\mathfrak{T}_{n}}/\sim$ of ${\mathbb{Z}}^{n}\ltimes\overline{{\mathbb{Z}}}^{n}|_{\overline{{\mathbb{Z}}}_{\geq}^{n}}$, where \[ \widetilde{\mathfrak{T}_{n}}:=\{\left( x,w\right) \in{\mathbb{Z}}^{n}\ltimes\overline{{\mathbb{Z}}}^{n}|_{\overline{{\mathbb{Z}}}_{\geq}^{n}}:\;w_{i}=\infty\text{ with }1\leq i\leq n\ \ \text{implies} \] \[ x_{i}=-x_{1}-x_{2}-...-x_{i-1}\text{ and }x_{i+1}=...=x_{n}=0\} \] is a subgroupoid of ${\mathbb{Z}}^{n}\ltimes\overline{{\mathbb{Z}}}^{n}|_{\overline{{\mathbb{Z}}}_{\geq}^{n}}$ and $\sim$ is the equivalence relation generated by \[ \left( x,w\right) \sim\left( x,w_{1},...,w_{i}=\infty,\infty,...,\infty \right) \] for all $(x,w)$ with $w_{i}=\infty$ for some $1\leq i\leq n$.
It is easy to see that $\left[ \left( 0,x,w\right) \right] \in\left( \mathfrak{F} _{n}\right) _{0}\mapsto\left[ \left( x,w\right) \right] \in \mathfrak{T}_{n}$ is a well-defined homeomorphic groupoid isomorphism, and hence $C^{\ast}(\mathfrak{T}_{n})\cong C^{\ast}\left( \left( \mathfrak{F} _{n}\right) _{0}\right) $. So the quantum homogeneous space $U\left( n\right) _{q}\backslash SU\left( n+1\right) _{q}$ coincides with the quantum complex projective space $\mathbb{C}P_{q}^{n}$ defined above, and the results obtained in \cite{Sh:qcp} are valid for our study of the quantum complex projective space $\mathbb{C}P_{q}^{n}$. \section{Projections over $C\left( \mathbb{S}_{q}^{2n+1}\right) $} In \cite{Sh:qcp}, taking the groupoid C*-algebra approach, we established an inductive family of short exact sequences of C*-algebras \[ 0\rightarrow C\left( \mathbb{T}\right) \otimes\mathcal{K}\left( \ell ^{2}\left( \mathbb{Z}_{\geq}^{n}\right) \right) \rightarrow C\left( \mathbb{S}_{q}^{2n+1}\right) \rightarrow C\left( \mathbb{S}_{q} ^{2n-1}\right) \rightarrow0. \] However for the purpose of classification of projections over $C\left( \mathbb{S}_{q}^{2n+1}\right) $, it turns out that another inductive family of short exact sequences constructed below is more convenient. Under the groupoid monomorphism \[ \left( z,x,w\right) \in\widetilde{\mathfrak{F}_{n}}\mapsto\left( z+x_{1},x,w\right) \in\mathcal{F}^{n}, \] $\widetilde{\mathfrak{F}_{n}}$ is mapped homeomorphically onto the image $\widetilde{\mathfrak{F}_{n}}^{\prime}\subset\mathcal{F}^{n}$ consisting of $\left( z,x,w\right) \in\mathcal{F}^{n}$ satisfying \[ \left\{ \begin{array} [c]{lll} w_{1}=\infty\text{ }\ \Longrightarrow\ \text{\textquotedblleft}z=0\text{ and }x_{2}=...=x_{n}=0\text{\textquotedblright}, & & \\ w_{i}=\infty\text{ }\ \Longrightarrow\ \text{\textquotedblleft}x_{i} =-z-x_{2}-...-x_{i-1}\text{ and }x_{i+1}=...=x_{n}=0\text{\textquotedblright }, & \text{for } & 2\leq i\leq n, \end{array} \right. 
\] while the equivalence relation $\sim$ on $\widetilde{\mathfrak{F}_{n}}$ remains the same equivalence relation $\sim^{\prime}$ on $\widetilde{\mathfrak{F}_{n}}^{\prime}$ that is generated by \[ \left( z,x,w\right) \sim^{\prime}\left( z,x,w_{1},...,w_{i}=\infty ,\infty,...,\infty\right) \] for all $\left( z,x,w\right) $ with $w_{i}=\infty$ for some $1\leq i\leq n$. So we get a homeomorphic groupoid isomorphism \[ \gamma:\left[ \left( z,x,w\right) \right] \in\widetilde{\mathfrak{F}_{n} }/\sim\equiv\mathfrak{F}_{n}\mapsto\left[ \left( z+x_{1},x,w\right) \right] \in\widetilde{\mathfrak{F}_{n}}^{\prime}/\sim^{\prime}=:\mathfrak{F} _{n}^{\prime}. \] Note that the groupoid C*-algebra $C^{\ast}\left( \mathfrak{F}_{n}^{\prime }\right) $ also has a faithful *-representation $\pi_{n}^{\prime}$ canonically constructed on the Hilbert space $\ell^{2}\left( \mathbb{Z} \times\mathbb{Z}_{\geq}^{n}\right) =\ell^{2}\left( \mathbb{Z}\right) \otimes\ell^{2}\left( \mathbb{Z}_{\geq}^{n}\right) $ built from the open dense orbit $\mathbb{Z}_{\geq}^{n}$ in the unit space of $\mathfrak{F} _{n}^{\prime}$. Let $m^{\left( k\right) }$ denote $\left( m,...,m\right) \in \overline{{\mathbb{Z}}}^{k}$. Note that $\left( \left\{ \infty\right\} \times\overline{{\mathbb{Z}}}_{\geq}^{n-1}\right) /\sim=\left\{ \left[ \infty^{\left( n\right) }\right] \right\} $ is a closed invariant subset of the unit space $Z\equiv\overline{{\mathbb{Z}}}_{\geq}^{n}/\sim$ of $\mathfrak{F}_{n}$ such that with a singleton unit space, \[ \mathfrak{F}_{n}|_{\left\{ \left[ \infty^{\left( n\right) }\right] \right\} }\equiv\left\{ \left[ \left( z,-z,0^{\left( n-1\right) } ,\infty^{\left( n\right) }\right) \right] :z\in\mathbb{Z}\right\} \cong\mathbb{Z} \] as a group. 
On the other hand, the complement of $\left\{ \left[ \infty^{\left( n\right) }\right] \right\} $ in $Z$ is the open invariant subset $O:=\left( \mathbb{Z}_{\geq}\times\overline{{\mathbb{Z}}}_{\geq}^{n-1}\right) /\sim$ such that $w_{1}\neq\infty$ for all $\left[ \left( z,x,w\right) \right] \in\mathfrak{F}_{n}|_{O}$ and hence in $\gamma\left( \left[ \left( z,x,w\right) \right] \right) =\left[ \left( z+x_{1},x,w\right) \right] $, there is no non-trivial condition from the definition of $\widetilde{\mathfrak{F}_{n}}^{\prime}/\sim^{\prime}$ imposed on $\left( x_{1},w_{1}\right) $, while the non-trivial conditions from the definition of $\widetilde{\mathfrak{F}_{n}}^{\prime}/\sim^{\prime}$ imposed on the other components of $\gamma\left( \left[ \left( z,x,w\right) \right] \right) $ match those in defining $\mathfrak{F}_{n-1}$. That is to say, by rewriting $x_{1},w_{1}$ as the first two components of $\gamma\left( \left[ \left( z,x,w\right) \right] \right) $, we have a homeomorphic groupoid isomorphism from $\mathfrak{F}_{n}|_{O}$ onto the groupoid $\left( \mathbb{Z}\ltimes\mathbb{Z}\right) |_{\mathbb{Z}_{\geq}}\times\mathfrak{F}_{n-1}$, namely, \[ \gamma:\left[ \left( z,x,w\right) \right] \in\mathfrak{F}_{n}|_{O}\mapsto\left( x_{1},w_{1},\left[ \left( z+x_{1},x_{2},..,x_{n},w_{2},..,w_{n}\right) \right] \right) \in\left( \mathbb{Z}\ltimes \mathbb{Z}\right) |_{\mathbb{Z}_{\geq}}\times\mathfrak{F}_{n-1}\subset\mathfrak{F}_{n}^{\prime}, \] which then induces a C*-algebra isomorphism \[ \gamma_{\ast}:C^{\ast}\left( \mathfrak{F}_{n}|_{O}\right) \rightarrow C^{\ast}\left( \left( \mathbb{Z}\ltimes\mathbb{Z}\right) |_{\mathbb{Z}_{\geq}}\right) \otimes C^{\ast}\left( \mathfrak{F}_{n-1}\right) \subset C^{\ast}\left( \mathfrak{F}_{n}^{\prime}\right) .
\] Note that $\pi_{n}^{\prime}=\pi_{0}\otimes\pi_{n-1}$ on $\gamma_{\ast}\left( C^{\ast}\left( \mathfrak{F}_{n}|_{O}\right) \right) $ where $\pi _{0}:C^{\ast}\left( \left( \mathbb{Z}\ltimes\mathbb{Z}\right) |_{\mathbb{Z}_{\geq}}\right) \rightarrow\mathcal{K}\left( \ell^{2}\left( \mathbb{Z}_{\geq}\right) \right) $ is the well-known canonical faithful representation, and the faithful representation \[ \pi_{n}^{\prime}\circ\gamma_{\ast}:C^{\ast}\left( \mathfrak{F}_{n}\right) \rightarrow\mathcal{B}\left( \ell^{2}\left( \mathbb{Z}\times\mathbb{Z} _{\geq}^{n}\right) \right) \] restricts to an isomorphism $C^{\ast}\left( \mathfrak{F}_{n}|_{O}\right) \cong\mathcal{K}\left( \ell^{2}\left( \mathbb{Z}_{\geq}\right) \right) \otimes C\left( \mathbb{S}_{q}^{2n-1}\right) $. So these invariant subsets $\left\{ \left[ \infty^{n}\right] \right\} $ and $O$ give rise to a short exact sequence \[ 0\rightarrow\mathcal{K}\left( \ell^{2}\left( \mathbb{Z}_{\geq}\right) \right) \otimes C\left( \mathbb{S}_{q}^{2n-1}\right) \cong C^{\ast}\left( \mathfrak{F}_{n}|_{O}\right) \rightarrow C\left( \mathbb{S}_{q} ^{2n+1}\right) \overset{\eta}{\rightarrow}C^{\ast}\left( \mathfrak{F} _{n}|_{\left\{ \left[ \infty^{n}\right] \right\} }\right) \cong C\left( \mathbb{T}\right) \rightarrow0. 
\] The set $T:=\left\{ \left[ \left( z,x,w\right) \right] \in\mathfrak{F} _{n}:\ x_{1}=1=-z,\ x_{2}=...=x_{n}=0\right\} $ is a compact open subset of $\mathfrak{F}_{n}$, corresponding to the set $\left\{ \left[ \left( 0,x,w\right) \right] \in\mathfrak{F}_{n}^{\prime}:x=\left( 1,0^{\left( n-1\right) }\right) \right\} \subset\mathfrak{F}_{n}^{\prime}$ under the isomorphism $\gamma$, and its characteristic function $\chi_{T}\in C_{c}\left( \mathfrak{F}_{n}\right) \subset C^{\ast}\left( \mathfrak{F} _{n}\right) $ determines the operator \[ \pi_{n}^{\prime}\left( \gamma_{\ast}\left( \chi_{T}\right) \right) =\mathcal{S}\otimes\mathrm{id}\in\mathcal{T}\otimes\pi_{n-1}\left( C\left( \mathbb{S}_{q}^{2n-1}\right) \right) \subset\mathcal{B}\left( \ell ^{2}\left( \mathbb{Z}_{\geq}\right) \right) \otimes\mathcal{B}\left( \ell^{2}\left( \mathbb{Z}\times\mathbb{Z}_{\geq}^{n-1}\right) \right) \] where $\mathcal{S}$ is the unilateral shift operator generating the Toeplitz algebra $\mathcal{T}$ with $\sigma\left( \mathcal{S}\right) =\mathrm{id} _{\mathbb{T}}$ for the symbol map $\sigma$ in the short exact sequence $0\rightarrow\mathcal{K}\rightarrow\mathcal{T}\overset{\sigma}{\rightarrow }C\left( \mathbb{T}\right) \rightarrow0$. 
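As an elementary reminder, expressed in the notation of the preliminary section: with $\mathcal{S}e_{i}=e_{i+1}$ on the standard orthonormal basis $\left( e_{i}\right) _{i\geq1}$ of $\ell^{2}\left( \mathbb{Z}_{\geq}\right) $, the shift satisfies
\[
\mathcal{S}^{\ast}\mathcal{S}=I\qquad\text{and}\qquad\mathcal{S}\mathcal{S}^{\ast}=I-P_{1}=P_{-1},
\]
so $I-\mathcal{S}\mathcal{S}^{\ast}=P_{1}$ is a rank-one projection in $\mathcal{K}\subset\mathcal{T}$, while $\sigma\left( \mathcal{S}\mathcal{S}^{\ast}\right) =\sigma\left( \mathcal{S}^{\ast}\mathcal{S}\right) =1_{\mathbb{T}}$ under the symbol map $\sigma$.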
Since the quotient map $\eta:C^{\ast}\left( \mathfrak{F}_{n}\right) \rightarrow C^{\ast}\left( \mathfrak{F}_{n}|_{\left\{ \left[ \infty^{n}\right] \right\} }\right) \equiv C^{\ast}\left( \mathbb{Z}\right) $ sends $\chi_{T}$ to $\delta_{1}\in C_{c}\left( \mathbb{Z}\right) \equiv C_{c}\left( \mathfrak{F}_{n}|_{\left\{ \left[ \infty^{n}\right] \right\} }\right) $, i.e. to the function $\mathrm{id}_{\mathbb{T}}\in C\left( \mathbb{T}\right) \equiv C^{\ast}\left( \mathbb{Z}\right) $, we get that \[ C\left( \mathbb{S}_{q}^{2n+1}\right) \subset\mathcal{T}\otimes\pi_{n-1}\left( C\left( \mathbb{S}_{q}^{2n-1}\right) \right) \equiv \mathcal{T}\otimes C\left( \mathbb{S}_{q}^{2n-1}\right) \] is the sum of $\mathcal{K}\otimes C\left( \mathbb{S}_{q}^{2n-1}\right) $ and $\mathcal{T}\otimes1_{C\left( \mathbb{S}_{q}^{2n-1}\right) }$, which coincides with a description of $C\left( \mathbb{S}_{q}^{2n+1}\right) $ given in \cite{VaSo}. The above surjective C*-algebra homomorphism $C\left( \mathbb{S}_{q}^{2n+1}\right) \overset{\eta}{\rightarrow}C\left( \mathbb{T}\right) $ provides a well-defined notion of rank for (the equivalence class of) an idempotent $P\in M_{\infty}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) $ over $C\left( \mathbb{S}_{q}^{2n+1}\right) $, namely, the classical rank of the vector bundle over $\mathbb{T}$ determined by the idempotent $\eta\left( P\right) $ over $C\left( \mathbb{T}\right) $.
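For example, as an immediate consequence of the short exact sequence above: since $\eta$ is unital, the identity element of $C\left( \mathbb{S}_{q}^{2n+1}\right) $ is sent to $1_{\mathbb{T}}$ and hence has rank $1$, while every idempotent over the kernel ideal
\[
\mathcal{K}\left( \ell^{2}\left( \mathbb{Z}_{\geq}\right) \right) \otimes C\left( \mathbb{S}_{q}^{2n-1}\right) =\ker\eta
\]
is annihilated by $\eta$ and hence has rank $0$.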
The set of equivalence classes of idempotents $P\in M_{\infty}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) $ equipped with the binary operation $\boxplus$ becomes an abelian graded monoid \[ \mathfrak{P}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) =\sqcup_{r=0}^{\infty}\mathfrak{P}_{r}\left( C\left( \mathbb{S}_{q} ^{2n+1}\right) \right) \] where $\mathfrak{P}_{r}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) $ is the set of all (equivalence classes of) idempotents over $C\left( \mathbb{S}_{q}^{2n+1}\right) $ of rank $r$, and \[ \mathfrak{P}_{r}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) \boxplus\mathfrak{P}_{l}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) \subset\mathfrak{P}_{r+l}\left( C\left( \mathbb{S}_{q} ^{2n+1}\right) \right) \] for $r,l\geq0$. Clearly $\mathfrak{P}_{0}\left( C\left( \mathbb{S} _{q}^{2n+1}\right) \right) $ is a submonoid of $\mathfrak{P}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) $. Now we can proceed to classify up to equivalence all projections over $C\left( \mathbb{S}_{q}^{2n+1}\right) $ by induction on $n$, extending the result obtained in \cite{Ba} for the case of $n=1$. First we define some standard basic projections \[ P_{j,k}:=\left\{ \begin{array} [c]{lll} 1_{\mathbb{T}}\otimes\left( \left( \otimes^{j-1}P_{1}\right) \otimes P_{k}\otimes\left( \otimes^{n-j}I\right) \right) \in C\left( \mathbb{T}\right) \otimes\left( \mathcal{K}^{+}\right) ^{\otimes n}, & \text{if } & k>0\text{ and }1\leq j\leq n\\ 1_{\mathbb{T}}\otimes\left( \boxplus^{k}I^{\otimes n}\right) \equiv 1_{\mathbb{T}}\otimes\left( \boxplus^{k}\left( \otimes^{n}I\right) \right) \in M_{k}\left( C\left( \mathbb{T}\right) \otimes\left( \mathcal{K} ^{+}\right) ^{\otimes n}\right) , & \text{if } & k\geq0\text{ and }j=0 \end{array} \right. \] where $I$ stands for the unit of $\mathcal{K}^{+}$. Note that $P_{0,0}=0$. 
(For convenience of the argument, we also use the symbol $P_{j,k}$ in the case $n=0$, by taking $\left( \mathcal{K}^{+}\right) ^{\otimes0}:=\mathbb{C}$ and noting that $P_{0,k}=1_{\mathbb{T}}\otimes\left( \boxplus^{k}1\right) \in M_{k}\left( C\left( \mathbb{T}\right) \right) $ for $k\geq0$ makes sense, while $P_{j,k}$ with $1\leq j\leq n$ does not exist when $n=0$.) We note that the basic projection $P_{j,k}$ with $j\geq1$ is implemented by the characteristic function $\chi_{A_{j,k}}$ of the compact open subset \[ A_{j,k}:=\left( \left\{ 0\right\} \times\left\{ 0\right\} ^{\times n}\times\left\{ 0\right\} ^{\times j-1}\times\left\{ 0,1,..,k-1\right\} \times\overline{{\mathbb{Z}}}_{\geq}^{n-j}\right) /\sim \] of $\mathfrak{F}_{n}$ under both representations $\pi_{n}$ and $\pi_{n}^{\prime}\circ\gamma_{\ast}$. So each $P_{j,k}$ with $j\geq1$ is a projection in $C\left( \mathbb{S}_{q}^{2n+1}\right) $. On the other hand, $P_{0,k}=\boxplus^{k}\tilde{I}$ is the identity projection in $M_{k}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) $, where $\tilde{I}$ is the identity element of $C\left( \mathbb{S}_{q}^{2n+1}\right) $. Recall that in the inductive family of short exact sequences \[ 0\rightarrow C\left( \mathbb{T}\right) \otimes\mathcal{K}\left( \ell^{2}\left( \mathbb{Z}_{\geq}^{n}\right) \right) \rightarrow C\left( \mathbb{S}_{q}^{2n+1}\right) \overset{\mu_{n}}{\rightarrow}C\left( \mathbb{S}_{q}^{2n-1}\right) \rightarrow0 \] for $C\left( \mathbb{S}_{q}^{2n+1}\right) $ found in \cite{Sh:qcp}, the quotient map $\mu_{n}:C\left( \mathbb{S}_{q}^{2n+1}\right) \rightarrow C\left( \mathbb{S}_{q}^{2n-1}\right) $ is implemented by the restriction map \[ C^{\ast}\left( \mathfrak{F}_{n}\right) \rightarrow C^{\ast}\left( \mathfrak{F}_{n}|_{\left( \overline{{\mathbb{Z}}}_{\geq}^{n-1}\times\left\{ \infty\right\} \right) /\sim}\right) \cong C^{\ast}\left( \mathfrak{F}_{n-1}\right) .
\] For any $n\in\mathbb{N}$, a projection $P$ over $C\left( \mathbb{S} _{q}^{2n+1}\right) $ annihilated by $M_{\infty}\left( \mu_{n}\right) $ is a projection in $M_{\infty}\left( C\left( \mathbb{T}\right) \otimes \mathcal{K}\left( \ell^{2}\left( \mathbb{Z}_{\geq}^{n}\right) \right) \right) $ and hence has a well-defined finite operator-rank $d_{n}\left( P\right) \in\mathbb{Z}_{\geq}$, namely, the rank of the projection operator $P\left( t\right) \in M_{\infty}\left( \mathcal{K}\left( \ell^{2}\left( \mathbb{Z}_{\geq}^{n}\right) \right) \right) $, independent of $t\in\mathbb{T}$. If $P$ is not annihilated by $\mu_{n}$, then we assign $d_{n}\left( P\right) :=\infty$. Note that $d_{n}\left( P\right) $ depends only on the equivalence class of $P$ over $C\left( \mathbb{S}_{q} ^{2n+1}\right) $. In the degenerate case of $n=0$, for a projection $P$ over $C\left( \mathbb{S}_{q}^{1}\right) \equiv C\left( \mathbb{T}\right) $, we define $d_{0}\left( P\right) $ to be the finite rank of projection $P\left( t\right) \in M_{\infty}\left( \mathbb{C}\right) $, independent of $t\in\mathbb{T}$. Now for a projection $P$ over $C\left( \mathbb{S}_{q}^{2n+1}\right) $, we define for $0\leq l\leq n$, \[ \rho_{l}\left( P\right) :=d_{l}\left( \left( \mu_{l+1}\circ\cdots\circ \mu_{n-1}\circ\mu_{n}\right) \left( P\right) \right) \overset{\text{if }l=n}{\equiv}d_{n}\left( P\right) \] which depends only on the equivalence class of $P$ over $C\left( \mathbb{S}_{q}^{2n+1}\right) $ and gives us a well-defined monoid homomorphism \[ \rho_{l}:\left( \mathfrak{P}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) ,\boxplus\right) \rightarrow\left( \mathbb{Z}_{\geq}\cup\left\{ \infty\right\} ,+\right) . \] It is easy to verify that \[ \rho_{l}\left( P_{j,k}\right) =\left\{ \begin{array} [c]{lll} \infty, & \text{if } & n-j>n-l\text{, i.e. }j<l\\ k, & \text{if } & j=l\\ 0, & \text{if } & n-j<n-l\text{, i.e. }j>l \end{array} \right. 
\] which shows that the projections $P_{j,k}$ are mutually inequivalent over $C\left( \mathbb{S}_{q}^{2n+1}\right) $, because $P_{j,k}$'s with different indices $\left( j,k\right) $ are distinguished by the collection of homomorphisms $\rho_{0},\rho_{1},...,\rho_{n}$. \textbf{Theorem 1}. $\mathfrak{P}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) $ for $n\geq0$ is the disjoint union of \[ \mathfrak{P}_{0}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) =\left\{ \left[ P_{0,0}\right] \right\} \cup\left\{ \left[ P_{j,k}\right] :k>0\text{ and }1\leq j\leq n\right\} , \] containing pairwise distinct $\left[ P_{j,k}\right] $'s indexed by $\left( j,k\right) $, and \[ \mathfrak{P}_{k}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) =\left\{ \left[ P_{0,k}\right] \right\} \ \text{ a singleton for all }k>0, \] and its monoid structure is explicitly given by \[ \left[ P_{j,k}\right] \boxplus\left[ P_{j^{\prime},k^{\prime}}\right] =\left\{ \begin{array} [c]{lll} \left[ P_{j,k}\right] , & \text{if } & 0\leq j<j^{\prime}\text{ and }k,k^{\prime}>0,\\ \left[ P_{j,k+k^{\prime}}\right] , & \text{if } & j=j^{\prime}\geq0. \end{array} \right. \] So $\left[ P_{j,k}\right] =0$ in $K_{0}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) $ if and only if $1\leq j\leq n$ or $j=k=0$. \textbf{Proof}. Knowing that the $P_{j,k}$ are mutually inequivalent, we only need to show that any projection over $C\left( \mathbb{S}_{q}^{2n+1}\right) $ is equivalent to one of these $P_{j,k}$'s and to verify the stated monoid structure. We proceed by induction on $n\geq0$. When $n=0$, $C\left( \mathbb{S}_{q}^{2n+1}\right) =C\left( \mathbb{T}\right) $, and it is well known from the algebraic topology of vector bundles that every (complex) vector bundle over $\mathbb{T}$ is trivial, so that isomorphism classes of vector bundles over $\mathbb{T}$ are faithfully represented by trivial vector bundles, i.e.
$\mathfrak{P}_{0}\left( C\left( \mathbb{T}\right) \right) =\left\{ 0\right\} \equiv\left\{ \left[ P_{0,0}\right] \right\} $ while $\mathfrak{P}_{k}\left( C\left( \mathbb{T}\right) \right) =\left\{ \left[ P_{0,k}\right] \right\} $\ for $k>0$. Then the statements of this theorem for $n=0$ are clearly verified. Now assume that the statements hold for $C\left( \mathbb{S}_{q} ^{2n-1}\right) $. We need to show that they also hold for $C\left( \mathbb{S}_{q}^{2n+1}\right) $. Since any complex vector bundle over $\mathbb{T}$ is trivial, any idempotent over $C\left( \mathbb{T}\right) $ is equivalent to the standard projection $1\otimes P_{m}\in C\left( \mathbb{T}\right) \otimes M_{\infty}\left( \mathbb{C}\right) $ for some $m\in\mathbb{Z}_{\geq}$. So for any nonzero idempotent $P\in M_{\infty}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) $ over $C\left( \mathbb{S}_{q}^{2n+1}\right) $, there is some $U\in GL_{\infty}\left( C\left( \mathbb{T}\right) \right) $ such that \[ U\eta\left( P\right) U^{-1}=1\otimes P_{m}=\eta\left( \boxplus^{m}\tilde {I}\right) \] for some $m\in\mathbb{Z}_{\geq}$ where $\tilde{I}$ is the identity element of $C\left( \mathbb{S}_{q}^{2n+1}\right) $ viewed as the identity element in $\left( \mathcal{K}\otimes C\left( \mathbb{S}_{q}^{2n-1}\right) \right) ^{+}\subset C\left( \mathbb{S}_{q}^{2n+1}\right) $, and hence $VPV^{-1} -\boxplus^{m}\tilde{I}\in M_{\infty}\left( \mathcal{K}\otimes C\left( \mathbb{S}_{q}^{2n-1}\right) \right) $ for any lift $V\in GL_{\infty}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) $ (which exists) of $U\boxplus U^{-1}\in GL_{\infty}^{0}\left( C\left( \mathbb{T}\right) \right) $ along $\eta$. Replacing $P$ by the equivalent $VPV^{-1}$, we may assume that $P\in\left( \boxplus^{m}\tilde{I}\right) +M_{r-1}\left( \mathcal{K}\otimes C\left( \mathbb{S}_{q}^{2n-1}\right) \right) $ for some large $r\geq m+1$. 
Now since $M_{\infty}\left( \mathbb{C}\right) $ is dense in $\mathcal{K}$, there is an idempotent $Q\in\left( \boxplus^{m}\tilde{I}\right) +M_{r-1}\left( M_{N-1}\left( \mathbb{C}\right) \otimes C\left( \mathbb{S}_{q}^{2n-1}\right) \right) $ sufficiently close to and hence equivalent to $P$ over $C\left( \mathbb{S}_{q}^{2n+1}\right) $ for some large $N$. So replacing $P$ by $Q$, we may assume that \[ K:=P-\boxplus^{m}\tilde{I}\in M_{r-1}\left( M_{N-1}\left( \mathbb{C}\right) \otimes C\left( \mathbb{S}_{q}^{2n-1}\right) \right) . \] Rearranging the entries of $P\equiv K+\boxplus^{m}\tilde{I}\in M_{r-1}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) \subset M_{r}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) $ via conjugation by the unitary \[ U_{r,N}:=\sum_{j=1}^{r-1}\left[ e_{jj}\otimes\left( \left( \mathcal{S} \otimes\mathrm{id}\right) ^{\ast}\right) ^{N}+e_{rj}\otimes\left( \left( \mathcal{S}\otimes\mathrm{id}\right) ^{\left( j-1\right) N}P_{N}\right) \right] +e_{rr}\otimes\left( \mathcal{S}\otimes\mathrm{id}\right) ^{\left( r-1\right) N} \] \[ \in M_{r}\left( \mathbb{C}\right) \otimes C\left( \mathbb{S}_{q} ^{2n+1}\right) \equiv M_{r}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) \] we get the idempotent \[ U_{r,N}PU_{r,N}^{-1}\equiv U_{r,N}\left( P\boxplus0\right) U_{r,N} ^{-1}=\left( \left( \boxplus^{m}\tilde{I}\right) \boxplus\left( \boxplus^{r-1-m}0\right) \right) \boxplus R \] for some idempotent \[ R\in M_{\left( r-1\right) N}\left( C\left( \mathbb{S}_{q}^{2n-1}\right) \right) \subset\mathcal{K}\otimes C\left( \mathbb{S}_{q}^{2n-1}\right) \subset C\left( \mathbb{S}_{q}^{2n+1}\right) \] which has rank at least $m$ as an idempotent over $C\left( \mathbb{S} _{q}^{2n-1}\right) $ since it contains $m$ copies of the identity element $\tilde{I}^{\prime}$ of $C\left( \mathbb{S}_{q}^{2n-1}\right) $ as diagonal $\boxplus$-summands, relocated from the $N$-th diagonal entry of each of the $m$ copies of $\tilde{I}$ in $P$. 
Now by the induction hypothesis, the idempotent $R\in M_{\left( r-1\right) N}\left( C\left( \mathbb{S}_{q}^{2n-1}\right) \right) $ is equivalent over $C\left( \mathbb{S}_{q}^{2n-1}\right) $ to $P_{j,k}^{\prime}$ (denoting a standard projection $P_{j,k}$ over $C\left( \mathbb{S}_{q}^{2n-1}\right) $) with $0\leq j\leq n-1$ and $k>0$, which is identified with \[ P_{j+1,k}\equiv\left\{ \begin{array} [c]{lll} P_{1}\otimes P_{j,k}^{\prime}\in P_{1}\otimes C\left( \mathbb{S}_{q} ^{2n-1}\right) \subset\mathcal{K}\otimes C\left( \mathbb{S}_{q} ^{2n-1}\right) \subset C\left( \mathbb{S}_{q}^{2n+1}\right) , & \text{if } & j>0\\ P_{k}\otimes\tilde{I}^{\prime}\in P_{k}\otimes C\left( \mathbb{S}_{q} ^{2n-1}\right) \subset\mathcal{K}\otimes C\left( \mathbb{S}_{q} ^{2n-1}\right) \subset C\left( \mathbb{S}_{q}^{2n+1}\right) , & \text{if } & j=0 \end{array} \right. , \] i.e. $WRW^{-1}=P_{j,k}^{\prime}\equiv P_{j+1,k}$ for some invertible $W\in M_{N^{\prime}}\left( C\left( \mathbb{S}_{q}^{2n-1}\right) \right) $ with $N^{\prime}\geq\left( r-1\right) N$. Note that if $m>0$, then $R$ has a positive rank as an idempotent over $C\left( \mathbb{S}_{q}^{2n-1}\right) $ and hence $j=0$. Since \[ W+\left( \tilde{I}-P_{N^{\prime}}\otimes\tilde{I}^{\prime}\right) \in\left( \mathcal{K}\otimes C\left( \mathbb{S}_{q}^{2n-1}\right) \right) ^{+}\subset C\left( \mathbb{S}_{q}^{2n+1}\right) , \] we get $\left( \left( \boxplus^{m}\tilde{I}\right) \boxplus\left( \boxplus^{r-1-m}0\right) \right) \boxplus R$ equivalent over $C\left( \mathbb{S}_{q}^{2n+1}\right) $ to the projection \[ \left( \left( \boxplus^{m}\tilde{I}\right) \boxplus\left( \boxplus ^{r-1-m}0\right) \right) \boxplus P_{j+1,k} \] where $j=0$ if $m>0$. If $m=0$, then clearly $P$ is equivalent over $C\left( \mathbb{S}_{q} ^{2n+1}\right) $ to $P_{j+1,k}\in\mathcal{K}\otimes C\left( \mathbb{S} _{q}^{2n-1}\right) $ with $j+1>0$ and hence is of rank $0$. (We assumed $P\ $nonzero, so $k>0$.) 
If $m\in\mathbb{N}$ and hence $j=0$, then $P_{j+1,k}=P_{1,k}\equiv P_{k}\otimes\tilde{I}^{\prime}$ and we can rearrange entries of $\left( \left( \boxplus^{m}\tilde{I}\right) \boxplus\left( \boxplus^{r-1-m} 0\right) \right) \boxplus P_{1,k}$ via conjugation by the unitary \[ U_{l}:=e_{11}\otimes\left( \mathcal{S}^{k}\otimes\mathrm{id}\right) +e_{1r}\otimes P_{k}+\sum_{j=2}^{r-1}e_{jj}\otimes\tilde{I}+e_{rr} \otimes\left( \mathcal{S}^{k}\otimes\mathrm{id}\right) ^{\ast} \] \[ \in M_{r}\left( \mathbb{C}\right) \otimes C\left( \mathbb{S}_{q} ^{2n+1}\right) \equiv M_{r}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) \] to get $P$ equivalent over $C\left( \mathbb{S}_{q}^{2n+1}\right) $ to \[ U_{l}\left( \left( \left( \boxplus^{m}\tilde{I}\right) \boxplus\left( \boxplus^{r-1-m}0\right) \right) \boxplus P_{1,k}\right) U_{l}^{-1}=\left( \boxplus^{m}\tilde{I}\right) \boxplus\left( \boxplus^{r-m}0\right) \equiv\boxplus^{m}\tilde{I}\equiv P_{0,m} \] which is of rank $m\in\mathbb{N}$. So we have proved the description of the sets $\mathfrak{P}_{k}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) $ in the theorem. It remains to verify the monoid structure of $\mathfrak{P}\left( C\left( \mathbb{S} _{q}^{2n+1}\right) \right) $. By specializing the above analysis for $P\equiv K+\boxplus^{m}\tilde{I}$ to the case of $K=\left( \boxplus^{m}0\right) \boxplus P_{j+1,k}$, we have already established that \[ P_{0,m}\boxplus P_{j+1,k}\equiv\left( \boxplus^{m}\tilde{I}\right) \boxplus P_{j+1,k}\sim_{C\left( \mathbb{S}_{q}^{2n+1}\right) }P_{0,m} \] for all $m\in\mathbb{N}$ and $j+1>0$, while $\left[ P_{0,k}\right] \boxplus\left[ P_{0,k^{\prime}}\right] =\left[ P_{0,k+k^{\prime}}\right] $ is obvious. 
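Before completing the verification, it may help to see the stated monoid law and the homomorphisms $\rho_{l}$ in action on a toy model. The following Python sketch (not part of the proof; the choice $n=4$ and the rank bound $5$ are illustrative assumptions) encodes the class $\left[ P_{j,k}\right] $ as the pair $(j,k)$, implements the rule stated in Theorem 1, and checks that each $\rho_{l}$ is additive and that the combined tuple $\left( \rho_{0},...,\rho_{n}\right) $ separates the classes, as asserted in Corollary 2 below:

```python
from itertools import product

INF = float("inf")
n = 4  # ambient dimension parameter; illustrative choice

# Classes [P_{j,k}] modelled as pairs (j, k): j in 0..n, k > 0, plus the zero class (0, 0).
classes = [(0, 0)] + [(j, k) for j in range(n + 1) for k in range(1, 6)]

def boxplus(a, b):
    """Monoid rule of Theorem 1: the smaller j absorbs; equal j's add their ranks."""
    (j, k), (jp, kp) = a, b
    if k == 0:          # (0,0) is the neutral element P_{0,0} = 0
        return b
    if kp == 0:
        return a
    if j < jp:
        return (j, k)
    if jp < j:
        return (jp, kp)
    return (j, k + kp)

def rho(l, a):
    """rho_l on [P_{j,k}]: infinity if j < l, k if j = l, 0 if j > l; all zero on (0,0)."""
    j, k = a
    if (j, k) == (0, 0):
        return 0
    if j < l:
        return INF
    return k if j == l else 0

# Each rho_l is a monoid homomorphism into (Z_>= union {oo}, +).
for l in range(n + 1):
    for a, b in product(classes, classes):
        assert rho(l, boxplus(a, b)) == rho(l, a) + rho(l, b)

# The combined map rho separates the classes (injectivity, cf. Corollary 2).
signatures = {tuple(rho(l, a) for l in range(n + 1)) for a in classes}
assert len(signatures) == len(classes)
print("rho_l additivity and injectivity verified")
```

The special-casing of $(0,0)$ reflects that $P_{0,0}=0$ is the neutral element, on which every $\rho_{l}$ vanishes.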
On the other hand, by the induction hypothesis, \[ P_{j,k}^{\prime}\boxplus P_{j^{\prime},k^{\prime}}^{\prime}\sim_{C\left( \mathbb{S}_{q}^{2n-1}\right) }\left\{ \begin{array} [c]{lll} P_{j,k}^{\prime}, & \text{if } & 0\leq j<j^{\prime}\text{ and }k,k^{\prime}>0\\ P_{j,k+k^{\prime}}^{\prime}, & \text{if } & j=j^{\prime}\geq0 \end{array} \right. . \] Now by applying $P_{1}\otimes\cdot$ to both sides of this equivalence, we get \[ P_{j+1,k}\boxplus P_{j^{\prime}+1,k^{\prime}}\sim_{C\left( \mathbb{S}_{q}^{2n+1}\right) }\left\{ \begin{array} [c]{lll} P_{j+1,k}, & \text{if } & 1\leq j+1<j^{\prime}+1\text{ and }k,k^{\prime}>0\\ P_{j+1,k+k^{\prime}}, & \text{if } & j+1=j^{\prime}+1\geq1 \end{array} \right. \] since if an invertible $U\in M_{N}\left( C\left( \mathbb{S}_{q}^{2n-1}\right) \right) $ with $N$ sufficiently large conjugates an idempotent $P$ over $C\left( \mathbb{S}_{q}^{2n-1}\right) $ to an idempotent $Q$, then \[ \left( P_{1}\otimes U_{ij}\right) _{i,j=1}^{N}+\boxplus^{N}\left( \tilde{I}-P_{1}\otimes\tilde{I}^{\prime}\right) \in M_{N}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) \] is an invertible element conjugating $P_{1}\otimes P$ to $P_{1}\otimes Q$. Now we have established all the monoid structure rules for $\mathfrak{P}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) $. $\square$ \textbf{Remark}. The last part of the above proof, concerning the monoid structure of $\mathfrak{P}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) $, can be avoided by applying the injective monoid homomorphism $\rho$ of the following Corollary 2 to both sides of the equivalence relations describing the monoid structure. \textbf{Corollary 1}. All projections over $C\left( \mathbb{S}_{q}^{2n+1}\right) $ of strictly positive rank are trivial. The cancellation law holds for projections of rank $\geq1$, but fails for projections of rank $0$ when $n>0$. \textbf{Proof}.
The only equivalence class of projections of a fixed rank $k>0$ is that of the trivial projection $P_{0,k}\equiv\boxplus^{k}\tilde{I}$ classified above. By counting the rank, it is clear that if $\boxplus^{k}\tilde{I}$ and $\boxplus^{k^{\prime}}\tilde{I}$ are stably equivalent, then $k=k^{\prime}$. So the cancellation law holds for projections of rank $\geq1$. On the other hand, since for any two distinct pairs $\left( j,k\right) $ and $\left( j^{\prime},k^{\prime}\right) $ with $1\leq j,j^{\prime}\leq n$ and $k,k^{\prime}>0$, we have $\left[ P_{j,k}\right] \neq\left[ P_{j^{\prime},k^{\prime}}\right] $ but \[ \left[ P_{j,k}\right] \boxplus\left[ P_{0,1}\right] =\left[ P_{0,1}\right] =\left[ P_{j^{\prime},k^{\prime}}\right] \boxplus\left[ P_{0,1}\right] , \] the cancellation law fails for such rank-$0$ projections $P_{j,k}$ and $P_{j^{\prime},k^{\prime}}$. $\square$ \textbf{Corollary 2}. The monoid $\mathfrak{P}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) $ is a submonoid of $\prod_{0\leq l\leq n}\overline{\mathbb{Z}_{\geq}}$ via the injective monoid homomorphism \[ \rho:P\in\mathfrak{P}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) \mapsto\prod_{0\leq l\leq n}\rho_{l}\left( P\right) \in\prod_{0\leq l\leq n}\overline{\mathbb{Z}_{\geq}}. \] \textbf{Proof}. $\rho$ is injective since the $\rho_{l}$'s distinguish the standard projections $P_{j,k}$, which have been shown to constitute the whole monoid $\mathfrak{P}\left( C\left( \mathbb{S}_{q}^{2n+1}\right) \right) $. $\square$ \section{Generating Projections of $K_{0}\left( C\left( \mathbb{C}P_{q}^{n}\right) \right) $} In this section, we present a set of elementary projections over $C\left( \mathbb{C}P_{q}^{n}\right) $ whose $K_{0}$-classes form a set of free generators of the abelian group $K_{0}\left( C\left( \mathbb{C}P_{q}^{n}\right) \right) $.
We remark that a fascinating geometric construction of free generators of $K_{0}\left( C\left( \mathbb{C}P_{q}^{n}\right) \right) $ has been found by D'Andrea and Landi in \cite{dAnLa}. As discussed before, by restricting to the degree-$0$ part of the groupoid $\mathfrak{F}_{n}$ consisting of exactly those $\left[ \left( z,x,w\right) \right] $ with $z=0$, we get a subgroupoid $\left( \mathfrak{F}_{n}\right) _{0}$ which realizes $C\left( \mathbb{C}P_{q}^{n}\right) $ as a groupoid C*-algebra $C^{\ast}\left( \left( \mathfrak{F}_{n}\right) _{0}\right) $. Roughly speaking, $\left( \mathfrak{F}_{n}\right) _{0}$ can be extracted from $\mathfrak{F}_{n}$ by simply ignoring or removing the $z$-component of the elements $\left[ \left( z,x,w\right) \right] $. Note that if $\left[ \left( 0,x,w\right) \right] \in\left( \mathfrak{F}_{n}\right) _{0}$ and $w_{1}=\infty$, then $x_{1}=0$ by the defining condition on $\mathfrak{F}_{n} $. Furthermore since clearly $\pi_{n}\left( C^{\ast}\left( \left( \mathfrak{F}_{n}\right) _{0}\right) \right) \subset\mathrm{id}_{\ell ^{2}\left( \mathbb{Z}\right) }\otimes\mathcal{B}\left( \ell^{2}\left( \mathbb{Z}_{\geq}^{n}\right) \right) $, we will ignore the factor $\mathrm{id}_{\ell^{2}\left( \mathbb{Z}\right) }$ and view $\pi _{n}|_{C^{\ast}\left( \left( \mathfrak{F}_{n}\right) _{0}\right) }$ as a faithful representation of $C^{\ast}\left( \left( \mathfrak{F}_{n}\right) _{0}\right) $ on $\ell^{2}\left( \mathbb{Z}_{\geq}^{n}\right) $ instead of on $\ell^{2}\left( \mathbb{Z}\times\mathbb{Z}_{\geq}^{n}\right) $. In \cite{Sh:qcp}, by considering the closed invariant subset $\left( \overline{\mathbb{Z}}_{\geq}^{n-1}\times\left\{ \infty\right\} \right) /\sim$ (i.e. 
$\left\{ \left[ w\right] :w\in\overline{\mathbb{Z}}_{\geq }^{n-1}\times\left\{ \infty\right\} \right\} $ even though $\overline {\mathbb{Z}}_{\geq}^{n-1}\times\left\{ \infty\right\} $ is not really $\sim $-invariant in the unit space $\overline{{\mathbb{Z}}}_{\geq}^{n}$ of $\widetilde{\mathfrak{F}_{n}}\subset\mathcal{F}^{n}$) and its complement $O_{0}$ in the unit space $Z$ of $\left( \mathfrak{F}_{n}\right) _{0}$ (and of $\mathfrak{F}_{n}$ as well), we get the following short exact sequence \[ 0\rightarrow\mathcal{K}\left( \ell^{2}\left( \mathbb{Z}_{\geq}^{n}\right) \right) \cong C^{\ast}\left( \left( \mathfrak{F}_{n}\right) _{0}|_{O_{0} }\right) \rightarrow C\left( \mathbb{C}P_{q}^{n}\right) \overset{\nu }{\rightarrow}C^{\ast}\left( \mathfrak{F}_{n}|_{\left( \overline{\mathbb{Z} }_{\geq}^{n-1}\times\left\{ \infty\right\} \right) /\sim}\right) \cong C\left( \mathbb{C}P_{q}^{n-1}\right) \rightarrow0 \] with $\left( \mathfrak{F}_{n}\right) _{0}|_{O_{0}}\cong\left( \mathbb{Z}^{n}\ltimes\mathbb{Z}^{n}\right) |_{\mathbb{Z}_{\geq}^{n}}$. Thus we get the following 6-term exact sequence \[ \begin{array} [c]{ccccccc} \mathbb{Z}= & K_{0}\left( \mathcal{K}\left( \ell^{2}\left( \mathbb{Z} _{\geq}^{n}\right) \right) \right) & \rightarrow & K_{0}\left( C\left( \mathbb{C}P_{q}^{n}\right) \right) & \overset{\nu_{\ast}}{\rightarrow} & K_{0}\left( C\left( \mathbb{C}P_{q}^{n-1}\right) \right) & \\ & \uparrow & & & & \downarrow & \\ & K_{1}\left( C\left( \mathbb{C}P_{q}^{n-1}\right) \right) & \leftarrow & K_{1}\left( C\left( \mathbb{C}P_{q}^{n}\right) \right) & \leftarrow & K_{1}\left( \mathcal{K}\left( \ell^{2}\left( \mathbb{Z}_{\geq}^{n}\right) \right) \right) & =0. \end{array} \] By an induction on $n\geq1$, we can establish $K_{0}\left( C\left( \mathbb{C}P_{q}^{n}\right) \right) \cong\mathbb{Z}^{n+1}$ and $K_{1}\left( C\left( \mathbb{C}P_{q}^{n}\right) \right) =0$. 
In fact, in the case of $n=1$, we have $K_{i}\left( C\left( \mathbb{C}P_{q}^{0}\right) \right) =K_{i}\left( \mathbb{C}\right) \cong\delta_{0i}\mathbb{Z}$ and hence $K_{0}\left( C\left( \mathbb{C}P_{q}^{1}\right) \right) \cong \mathbb{Z}\oplus\mathbb{Z}$ and $K_{1}\left( C\left( \mathbb{C}P_{q} ^{1}\right) \right) =0$. For $n>1$, the induction hypothesis $K_{1}\left( C\left( \mathbb{C}P_{q}^{n-1}\right) \right) =0$ and $K_{0}\left( C\left( \mathbb{C}P_{q}^{n-1}\right) \right) \cong\mathbb{Z}^{n}$ forces \[ K_{0}\left( C\left( \mathbb{C}P_{q}^{n}\right) \right) \cong K_{0}\left( \mathcal{K}\right) \oplus K_{0}\left( C\left( \mathbb{C}P_{q}^{n-1}\right) \right) \equiv\mathbb{Z}\oplus\mathbb{Z}^{n}=\mathbb{Z}^{n+1} \] and also $K_{1}\left( C\left( \mathbb{C}P_{q}^{n}\right) \right) =0$ in the above 6-term exact sequence. The above induction can be refined to get the following stronger result. First we note that \[ P_{j,k}\equiv1_{\mathbb{T}}\otimes\left( \left( \otimes^{j-1}P_{1}\right) \otimes P_{k}\otimes\left( \otimes^{n-j}I\right) \right) \in C\left( \mathbb{S}_{q}^{2n+1}\right) \subset C\left( \mathbb{T}\right) \otimes\mathcal{B}\left( \ell^{2}\left( \mathbb{Z}_{\geq}^{n}\right) \right) \] with $0<j\leq n$ is a projection in $C\left( \mathbb{C}P_{q}^{n}\right) \subset C\left( \mathbb{S}_{q}^{2n+1}\right) $, and can be identified with \[ \left( \left( \otimes^{j-1}P_{1}\right) \otimes P_{k}\otimes\left( \otimes^{n-j}I\right) \right) \in C\left( \mathbb{C}P_{q}^{n}\right) \subset\mathcal{B}\left( \ell^{2}\left( \mathbb{Z}_{\geq}^{n}\right) \right) . \] From now on, we view $P_{j,k}$ with $0<j\leq n$ as the latter elementary tensor product lying in $C\left( \mathbb{C}P_{q}^{n}\right) $. On the other hand, clearly the trivial projection $P_{0,k}$ of rank $k$ over $C\left( \mathbb{S}_{q}^{2n+1}\right) $ is also a trivial projection of rank $k$ over $C\left( \mathbb{C}P_{q}^{n}\right) $. \textbf{Theorem 2}. 
The standard projections $P_{j,1}\equiv\left( \otimes ^{j}P_{1}\right) \otimes\left( \otimes^{n-j}I\right) $ over $C\left( \mathbb{C}P_{q}^{n}\right) $ with $0\leq j\leq n$ are inequivalent over $C\left( \mathbb{C}P_{q}^{n}\right) $ and their equivalence classes (over $C\left( \mathbb{C}P_{q}^{n}\right) $, not over $C\left( \mathbb{S} _{q}^{2n+1}\right) $) form a set of free generators of $K_{0}\left( C\left( \mathbb{C}P_{q}^{n}\right) \right) \cong\mathbb{Z}^{n+1}$. \textbf{Proof}. Since $P_{j,1}$ are inequivalent over $C\left( \mathbb{S} _{q}^{2n+1}\right) $, they are clearly inequivalent over the subalgebra $C\left( \mathbb{C}P_{q}^{n}\right) $. Now we prove by induction on $n\geq1$ that $\left[ P_{j,1}\right] $ with $0\leq j\leq n$ form a set of free generators of $K_{0}\left( C\left( \mathbb{C}P_{q}^{n}\right) \right) $. For $n=1$, it is well-known that $\mathcal{K}\left( \ell^{2}\left( \mathbb{Z}_{\geq}\right) \right) ^{+}\cong C\left( \mathbb{C}P_{q} ^{1}\right) $ has $\left[ P_{1}\right] \equiv\left[ P_{1,1}\right] $ and $\left[ I\right] \equiv\left[ P_{0,1}\right] $ as free generators of its $K_{0}$-group $K_{0}\left( \mathcal{K}\left( \ell^{2}\left( \mathbb{Z} _{\geq}\right) \right) ^{+}\right) \cong\mathbb{Z}^{2}$. For $n>1$, $K_{0}\left( \mathcal{K}\left( \ell^{2}\left( \mathbb{Z}_{\geq }^{n}\right) \right) \right) \cong\mathbb{Z}$ has $\left[ \otimes^{n} P_{1}\right] \equiv\left[ P_{n,1}\right] $ as a free generator, while by induction hypothesis, $K_{0}\left( C\left( \mathbb{C}P_{q}^{n-1}\right) \right) \cong\mathbb{Z}^{n}$ has $\left[ P_{j,1}^{\prime}\right] \equiv\left[ \left( \otimes^{j}P_{1}\right) \otimes\left( \otimes ^{n-1-j}I\right) \right] $ with $0\leq j\leq n-1$ as free generators. 
Now with $\nu_{\ast}\left( \left[ P_{j,1}\right] \right) \equiv\nu_{\ast}\left( \left[ P_{j,1}^{\prime}\otimes I\right] \right) =\left[ P_{j,1}^{\prime}\right] $ for all $0\leq j\leq n-1$, it is easy to see from the above 6-term exact sequence that $\left[ P_{j,1}\right] $ for $0\leq j\leq n-1$ together with $\left[ P_{n,1}\right] $ form a set of free generators of $K_{0}\left( C\left( \mathbb{C}P_{q}^{n}\right) \right) \cong\mathbb{Z}^{n+1}$. $\square$ It is of interest to point out that these projections $P_{j,1}$ freely generating $K_{0}\left( C\left( \mathbb{C}P_{q}^{n}\right) \right) $ actually lie inside $C\left( \mathbb{C}P_{q}^{n}\right) \equiv M_{1}\left( C\left( \mathbb{C}P_{q}^{n}\right) \right) \subset M_{\infty}\left( C\left( \mathbb{C}P_{q}^{n}\right) \right) $, and they form an increasing finite sequence of projections. \section{Quantum line bundles over $C\left( \mathbb{C}P_{q}^{n}\right) $} In this section, we identify the quantum line bundles $L_{k}\equiv C\left( \mathbb{S}_{q}^{2n+1}\right) _{k}$ of degree $k$ over $C\left( \mathbb{C}P_{q}^{n}\right) $ with concrete (equivalence classes of) projections described in terms of the basic projections. We remark that an intriguing noncommutative geometric study of these line bundles, in comparison with Adams' classical results on $\mathbb{C}P^{n}$, has been carried out by Arici, Brain, and Landi in \cite{AriBrLa}. (Their degree convention differs from ours by a sign.) To distinguish between the ordinary function product and the convolution product, we denote the groupoid C*-algebraic (convolution) multiplication of elements in $C_{c}\left( \mathcal{G}\right) \subset C^{\ast}\left( \mathcal{G}\right) $ by $\ast$, while omitting $\ast$ when the elements are represented as operators or when they are multiplied together pointwise as functions on $\mathcal{G}$.
We also view $C_{c}\left( \mathfrak{F}_{n}\right) $ or $C_{c}\left( \left( \mathfrak{F}_{n}\right) _{k}\right) $ (also abbreviated as $C_{c}\left( \mathfrak{F}_{n}\right) _{k}$) as left $C_{c}\left( \mathfrak{F}_{n}\right) _{0}$-modules with $C_{c}\left( \mathfrak{F}_{n}\right) $ carrying the convolution algebra structure as a subalgebra of the groupoid C*-algebra $C^{\ast}\left( \mathfrak{F} _{n}\right) $. Similarly, for a closed subset $X$ of the unit space of $\mathfrak{F}_{n}$, the inverse image $\mathfrak{F}_{n}\upharpoonright_{X}$ of $X$ under the source map of $\mathfrak{F}_{n}$ or its grade-$k$ component $\left( \mathfrak{F}_{n}\upharpoonright_{X}\right) _{k}\equiv\left( \mathfrak{F}_{n}\right) _{k}\upharpoonright_{X}$ gives rise to a left $C_{c}\left( \mathfrak{F}_{n}\right) _{0}$-module $C_{c}\left( \mathfrak{F}_{n}\upharpoonright_{X}\right) $ or $C_{c}\left( \mathfrak{F} _{n}\upharpoonright_{X}\right) _{k}$. For $k\in\mathbb{Z}_{\geq}$, the characteristic function $\chi_{B_{k}}\in C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0}\right) $ of the compact open set \[ B_{k}:=\left\{ \left[ \left( 0,0^{\left( n\right) },w\right) \right] \in\left( \mathfrak{F}_{n}\right) _{0}:w_{1}\geq k\right\} \] is a projection over $C^{\ast}\left( \left( \mathfrak{F}_{n}\right) _{0}\right) \equiv C\left( \mathbb{C}P_{q}^{n}\right) $ which is represented under $\pi_{n}$ as $P_{-k}\otimes\left( \otimes^{n-1}I\right) $, and \[ C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0}\right) \ast\chi_{B_{k} }=C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0}\upharpoonright_{B_{k} }\right) \] where $B_{k}\subset\left( \mathfrak{F}_{n}\right) _{0}$ in the notation $\upharpoonright_{B_{k}}$ is canonically viewed as a subset of the unit space of $\left( \mathfrak{F}_{n}\right) _{0}$. 
For $k\leq0$, it is straightforward to check that \[ \left[ \left( k,x,w\right) \right] \in\left( \mathfrak{F}_{n}\right) _{k}\mapsto\left[ \left( 0,x_{1}+k,x_{2},..,x_{n},w_{1}-k,w_{2} ,..,w_{n}\right) \right] \in\left( \mathfrak{F}_{n}\right) _{0} \upharpoonright_{B_{\left\vert k\right\vert }} \] well defines a bijective homeomorphism. For example, for $w_{1}=\infty$, we have $x_{1}=-k$ on the domain side and $x_{1}+k=0$ on the range side of this map, matching the implicit constraints imposed on $\left( \mathfrak{F} _{n}\right) _{k}$ and $\left( \mathfrak{F}_{n}\right) _{0}$. Furthermore since any $\left[ \left( k,x,w\right) \right] \in\left( \mathfrak{F} _{n}\right) _{k}$ and its image $\left[ \left( 0,x_{1}+k,x_{2} ,..,x_{n},w_{1}-k,w_{2},..,w_{n}\right) \right] $ share the same target element $\left[ x+w\right] \in\overline{\mathbb{Z}}_{\geq}^{n}/\sim$, it induces a left $C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0}\right) $-module isomorphism \[ C_{c}\left( \left( \mathfrak{F}_{n}\right) _{k}\right) \rightarrow C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0}\upharpoonright _{B_{\left\vert k\right\vert }}\right) \equiv C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0}\right) \ast\chi_{B_{\left\vert k\right\vert }} \] which extends to a left $C\left( \mathbb{C}P_{q}^{n}\right) $-module isomorphism \[ L_{k}\equiv\overline{C_{c}\left( \left( \mathfrak{F}_{n}\right) _{k}\right) }\cong C\left( \mathbb{C}P_{q}^{n}\right) \left( P_{-\left\vert k\right\vert }\otimes\left( \otimes^{n-1}I\right) \right) , \] i.e. the quantum line bundle $L_{k}$ for $k\leq0$ is the finitely generated left projective module determined by the projection $P_{-\left\vert k\right\vert }\otimes\left( \otimes^{n-1}I\right) $ over $C\left( \mathbb{C}P_{q}^{n}\right) $. For $k>0$, the situation is much more complicated. 
We first define the closed open set \[ \left( \mathfrak{F}_{n}\right) _{k,j}:=\left( \mathfrak{F}_{n} \upharpoonright_{\left( \left\{ 0\right\} ^{j}\times\overline{\mathbb{Z} }_{\geq}^{n-j}\right) /\sim}\right) _{k}\equiv\left\{ \left[ \left( k,x,w\right) \right] \in\left( \mathfrak{F}_{n}\right) _{k}:w\in\left\{ 0\right\} ^{j}\times\overline{\mathbb{Z}}_{\geq}^{n-j}\right\} \] with each $C_{c}\left( \left( \mathfrak{F}_{n}\right) _{k,j}\right) $ a left $C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0}\right) $-module. Note that \[ C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,j}\right) =C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0}\right) \ast\chi_{\left( \left\{ 0\right\} ^{j}\times\overline{\mathbb{Z}}_{\geq}^{n-j}\right) /\sim} \] with $\chi_{\left( \left\{ 0\right\} ^{j}\times\overline{\mathbb{Z}}_{\geq }^{n-j}\right) /\sim}$ represented under $\pi_{n}$ as the projection $\left( \otimes^{j}P_{1}\right) \otimes\left( \otimes^{n-j}I\right) $ over $C\left( \mathbb{C}P_{q}^{n}\right) $. Now the left $C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0}\right) $-module $C_{c}\left( \left( \mathfrak{F}_{n}\right) _{k,j}\right) $ can be decomposed as \[ C_{c}\left( \left( \mathfrak{F}_{n}\right) _{k,j}\right) =C_{c}\left( \left( \mathfrak{F}_{n}\upharpoonright_{\left( \left\{ 0\right\} ^{j}\times\overline{\mathbb{Z}}_{\geq k}\times\overline{\mathbb{Z}}_{\geq }^{n-j-1}\right) /\sim}\right) _{k}\right) \oplus {\displaystyle\bigoplus\limits_{l=0}^{k-1}} C_{c}\left( \left( \mathfrak{F}_{n}\upharpoonright_{\left( \left\{ 0\right\} ^{j}\times\left\{ l\right\} \times\overline{\mathbb{Z}}_{\geq }^{n-j-1}\right) /\sim}\right) _{k}\right) . 
\] It is straightforward to check that \[ \left[ \left( k,x,w\right) \right] \in\left( \mathfrak{F}_{n} \upharpoonright_{\left( \left\{ 0\right\} ^{j}\times\overline{\mathbb{Z} }_{\geq k}\times\overline{\mathbb{Z}}_{\geq}^{n-j-1}\right) /\sim}\right) _{k}\mapsto \] \[ \left[ \left( 0,x_{1},..,x_{j},x_{j+1}+k,x_{j+2},..,x_{n},0^{\left( j\right) },w_{j+1}-k,w_{j+2},..,w_{n}\right) \right] \in\left( \mathfrak{F}_{n}\upharpoonright_{\left( \left\{ 0\right\} ^{j} \times\overline{\mathbb{Z}}_{\geq}^{n-j}\right) /\sim}\right) _{0} \equiv\left( \mathfrak{F}_{n}\right) _{0,j} \] well defines a bijective homeomorphism. For example, we are considering only $w$ with $w_{1}=...=w_{j}=0<\infty$, while for $w_{j+1}=\infty$, we have $x_{j+1}=-k-x_{1}-\cdots-x_{j}$ on the domain side and $-x_{1}-\cdots -x_{j}-\left( x_{j+1}+k\right) =0$ on the range side of this map, matching the implicit constraints imposed on $\left( \mathfrak{F}_{n}\right) _{k}$ and $\left( \mathfrak{F}_{n}\right) _{0}$. Furthermore since any $\left[ \left( k,x,w\right) \right] \ $ and its image under this bijection share the same target element $\left[ x+w\right] \in\overline{\mathbb{Z}}_{\geq }^{n}/\sim$, it induces a left $C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0}\right) $-module isomorphism \[ C_{c}\left( \left( \mathfrak{F}_{n}\upharpoonright_{\left( \left\{ 0\right\} ^{j}\times\overline{\mathbb{Z}}_{\geq k}\times\overline{\mathbb{Z} }_{\geq}^{n-j-1}\right) /\sim}\right) _{k}\right) \rightarrow C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,j}\right) =C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0}\right) \ast\chi_{\left( \left\{ 0\right\} ^{j}\times\overline{\mathbb{Z}}_{\geq}^{n-j}\right) /\sim} \] which extends to a left $C\left( \mathbb{C}P_{q}^{n}\right) $-module isomorphism \[ \overline{C_{c}\left( \left( \mathfrak{F}_{n}\upharpoonright_{\left( \left\{ 0\right\} ^{j}\times\overline{\mathbb{Z}}_{\geq k}\times \overline{\mathbb{Z}}_{\geq}^{n-j-1}\right) /\sim}\right) _{k}\right) }\cong C\left( 
\mathbb{C}P_{q}^{n}\right) \left( \left( \otimes^{j} P_{1}\right) \otimes\left( \otimes^{n-j}I\right) \right) . \] On the other hand, for any $0\leq l\leq k-1$, \[ \left[ \left( k,x,w\right) \right] \in\left( \mathfrak{F}_{n} \upharpoonright_{\left( \left\{ 0\right\} ^{j}\times\left\{ l\right\} \times\overline{\mathbb{Z}}_{\geq}^{n-j-1}\right) /\sim}\right) _{k}\mapsto \] \[ \left[ \left( k-l,x_{1},..,x_{j},x_{j+1}+l,x_{j+2},..,x_{n},0^{\left( j+1\right) },w_{j+2},..,w_{n}\right) \right] \in\left( \mathfrak{F} _{n}\upharpoonright_{\left( \left\{ 0\right\} ^{j+1}\times\overline {\mathbb{Z}}_{\geq}^{n-j-1}\right) /\sim}\right) _{k-l}\equiv\left( \mathfrak{F}_{n}\right) _{k-l,j+1} \] well defines a bijective homeomorphism which preserves the target element $\left[ x+w\right] $ and hence induces a left $C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0}\right) $-module isomorphism \[ C_{c}\left( \left( \mathfrak{F}_{n}\upharpoonright_{\left( \left\{ 0\right\} ^{j}\times\left\{ l\right\} \times\overline{\mathbb{Z}}_{\geq }^{n-j-1}\right) /\sim}\right) _{k}\right) \rightarrow C_{c}\left( \left( \mathfrak{F}_{n}\right) _{k-l,j+1}\right) . \] So summarizing, we get the isomorphism relation \[ \text{(*)\ \ \ }C_{c}\left( \left( \mathfrak{F}_{n}\right) _{k,j}\right) \cong C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,j}\right) \oplus {\displaystyle\bigoplus\limits_{l=0}^{k-1}} C_{c}\left( \left( \mathfrak{F}_{n}\right) _{k-l,j+1}\right) \equiv C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,j}\right) \oplus {\displaystyle\bigoplus\limits_{l=1}^{k}} C_{c}\left( \left( \mathfrak{F}_{n}\right) _{l,j+1}\right) \] which is recursive in the sense that the right hand side contains terms with either $k$ decreased or $j$ increased. 
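As an independent sanity check on this recursion (our own illustration, not part of the argument), one can unroll (*) mechanically down to terms $C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,j}\right) $, using that each $\left( \mathfrak{F}_{n}\right) _{l,n}$ contributes like $\left( \mathfrak{F}_{n}\right) _{0,n}$, and count multiplicities; they match the binomial coefficients $C_{j}^{k+j-1}$ appearing in Theorem 3 below. A minimal Python sketch:

```python
from math import comb

def multiplicities(n, k):
    """Unroll the recursion (*), namely
       C_c((F_n)_{k,j}) ~ C_c((F_n)_{0,j}) (+) sum_{l=1..k} C_c((F_n)_{l,j+1}),
    down to base terms: (F_n)_{0,j} directly, and (F_n)_{l,n} ~ (F_n)_{0,n}.
    Returns the multiplicity of C_c((F_n)_{0,j}) for each j = 0..n,
    starting the expansion from C_c((F_n)_{k,0})."""
    counts = [0] * (n + 1)

    def expand(k, j):
        if k == 0:            # already a base term C_c((F_n)_{0,j})
            counts[j] += 1
        elif j == n:          # C_c((F_n)_{l,n}) ~ C_c((F_n)_{0,n}) for any l > 0
            counts[n] += 1
        else:                 # one application of the recursion (*)
            counts[j] += 1
            for l in range(1, k + 1):
                expand(l, j + 1)

    expand(k, 0)
    return counts

# Theorem 3 predicts multiplicity C_j^{k+j-1} = comb(k+j-1, j) for the j-th projection.
n, k = 3, 4
assert multiplicities(n, k) == [comb(k + j - 1, j) for j in range(n + 1)]
```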
So repeated application of this recursive expansion can lead to a direct sum of terms of the form $C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,m}\right) $ or the form $C_{c}\left( \left( \mathfrak{F}_{n}\right) _{l,n}\right) $, where \[ \overline{C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,m}\right) }\cong C\left( \mathbb{C}P_{q}^{n}\right) \left( \left( \otimes^{m}P_{1}\right) \otimes\left( \otimes^{n-m}I\right) \right) \] while \[ \left[ \left( l,x,w\right) \right] \equiv\left[ \left( l,x,0^{\left( n\right) }\right) \right] \in\left( \mathfrak{F}_{n}\upharpoonright _{\left\{ 0\right\} ^{n}/\sim}\right) _{l}\equiv\left( \mathfrak{F} _{n}\right) _{l,n}\mapsto\left[ \left( 0,x,0^{\left( n\right) }\right) \right] \in\left( \mathfrak{F}_{n}\right) _{0,n} \] well defines a bijective homeomorphism which induces a left $C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0}\right) $-module isomorphism \[ C_{c}\left( \left( \mathfrak{F}_{n}\right) _{l,n}\right) \rightarrow C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,n}\right) \] extending to a left $C\left( \mathbb{C}P_{q}^{n}\right) $-module isomorphism \[ \overline{C_{c}\left( \left( \mathfrak{F}_{n}\right) _{l,n}\right) }\cong C\left( \mathbb{C}P_{q}^{n}\right) \left( \otimes^{n}P_{1}\right) . \] \textbf{Theorem 3}. For $n\geq1$, the quantum line bundle $L_{k}\equiv C\left( \mathbb{S}_{q}^{2n+1}\right) _{k}$ of degree $k\in\mathbb{Z}$ over $C\left( \mathbb{C}P_{q}^{n}\right) $ is isomorphic to the finitely generated projective left module over $C\left( \mathbb{C}P_{q}^{n}\right) $ determined by the projection $P_{-\left\vert k\right\vert }\otimes\left( \otimes^{n-1}I\right) $ if $k\leq0$ (with $P_{-0}:=I$ understood), and the projection \[ \boxplus_{j=0}^{n}\left( \boxplus^{C_{j}^{k+j-1}}\left( \left( \otimes ^{j}P_{1}\right) \otimes\left( \otimes^{n-j}I\right) \right) \right) \] if $k>0$, where $C_{j}^{k}$ denotes the combinatorial number $\left( k!\right) /\left( j!\left( k-j\right) !\right) $. Proof. 
Having already taken care of the case of $k\leq0$ in the above discussion, we only need to consider the case of $k>0$. First we establish by induction on $l$ that \[ \text{(**)\ \ }C_{c}\left( \left( \mathfrak{F}_{n}\right) _{k,0}\right) \cong\left( {\displaystyle\bigoplus\limits_{j=0}^{l-1}} \left( \oplus^{C_{j}^{k+j-1}}C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,j}\right) \right) \right) \oplus\left( {\displaystyle\bigoplus\limits_{m=1}^{k}} \left( \oplus^{C_{l-1}^{k-m+l-1}}C_{c}\left( \left( \mathfrak{F} _{n}\right) _{m,l}\right) \right) \right) . \] Indeed for $l=1$, (**) becomes \[ C_{c}\left( \left( \mathfrak{F}_{n}\right) _{k,0}\right) \cong C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,0}\right) \oplus\left( {\displaystyle\bigoplus\limits_{m=1}^{k}} C_{c}\left( \left( \mathfrak{F}_{n}\right) _{m,1}\right) \right) , \] which is the same as the established recursive relation (*) with $j=0$. For $n\geq l>1$, by the induction hypothesis for $l-1$ and the recursive relation (*), we get \[ C_{c}\left( \left( \mathfrak{F}_{n}\right) _{k,0}\right) \cong\left( {\displaystyle\bigoplus\limits_{j=0}^{l-2}} \left( \oplus^{C_{j}^{k+j-1}}C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,j}\right) \right) \right) \oplus\left( {\displaystyle\bigoplus\limits_{m=1}^{k}} \left( \oplus^{C_{l-2}^{k-m+l-2}}C_{c}\left( \left( \mathfrak{F} _{n}\right) _{m,l-1}\right) \right) \right) \] \[ \cong\left( {\displaystyle\bigoplus\limits_{j=0}^{l-2}} \left( \oplus^{C_{j}^{k+j-1}}C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,j}\right) \right) \right) \oplus\left( {\displaystyle\bigoplus\limits_{m=1}^{k}} \left( \oplus^{C_{l-2}^{k-m+l-2}}\left( C_{c}\left( \left( \mathfrak{F} _{n}\right) _{0,l-1}\right) \oplus {\displaystyle\bigoplus\limits_{i=1}^{m}} C_{c}\left( \left( \mathfrak{F}_{n}\right) _{i,l}\right) \right) \right) \right) \] \[ \cong\left( {\displaystyle\bigoplus\limits_{j=0}^{l-2}} \left( \oplus^{C_{j}^{k+j-1}}C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,j}\right) \right) 
\right) \oplus\left( \oplus^{ {\displaystyle\sum\limits_{m=1}^{k}} C_{l-2}^{k-m+l-2}}C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,l-1} \right) \right) \oplus\left( {\displaystyle\bigoplus\limits_{m=1}^{k}} {\displaystyle\bigoplus\limits_{i=1}^{m}} \left( \oplus^{C_{l-2}^{k-m+l-2}}C_{c}\left( \left( \mathfrak{F} _{n}\right) _{i,l}\right) \right) \right) \] \[ \cong\left( {\displaystyle\bigoplus\limits_{j=0}^{l-2}} \left( \oplus^{C_{j}^{k+j-1}}C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,j}\right) \right) \right) \oplus\left( \oplus^{C_{l-1}^{k+l-2}} C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,l-1}\right) \right) \oplus\left( {\displaystyle\bigoplus\limits_{i=1}^{k}} {\displaystyle\bigoplus\limits_{m=i}^{k}} \left( \oplus^{C_{l-2}^{k-m+l-2}}C_{c}\left( \left( \mathfrak{F} _{n}\right) _{i,l}\right) \right) \right) \] \[ \cong\left( {\displaystyle\bigoplus\limits_{j=0}^{l-1}} \left( \oplus^{C_{j}^{k+j-1}}C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,j}\right) \right) \right) \oplus\left( {\displaystyle\bigoplus\limits_{i=1}^{k}} \left( \oplus^{\sum_{m=i}^{k}C_{l-2}^{k-m+l-2}}C_{c}\left( \left( \mathfrak{F}_{n}\right) _{i,l}\right) \right) \right) \] \[ \cong\left( {\displaystyle\bigoplus\limits_{j=0}^{l-1}} \left( \oplus^{C_{j}^{k+j-1}}C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,j}\right) \right) \right) \oplus\left( {\displaystyle\bigoplus\limits_{i=1}^{k}} \left( \oplus^{C_{l-1}^{k-i+l-1}}C_{c}\left( \left( \mathfrak{F} _{n}\right) _{i,l}\right) \right) \right) \] \[ \equiv\left( {\displaystyle\bigoplus\limits_{j=0}^{l-1}} \left( \oplus^{C_{j}^{k+j-1}}C_{c}\left( \left( \mathfrak{F}_{n}\right) _{0,j}\right) \right) \right) \oplus\left( {\displaystyle\bigoplus\limits_{m=1}^{k}} \left( \oplus^{C_{l-1}^{k-m+l-1}}C_{c}\left( \left( \mathfrak{F} _{n}\right) _{m,l}\right) \right) \right) \] where \[ {\displaystyle\sum\limits_{m=1}^{k}} C_{l-2}^{k-m+l-2}=C_{l-2}^{l-2}+C_{l-2}^{l-1}+C_{l-2}^{l}+C_{l-2}^{l+1} +\cdots+C_{l-2}^{k+l-3} \] \[ 
=C_{l-1}^{l-1}+C_{l-2}^{l-1}+C_{l-2}^{l}+C_{l-2}^{l+1}+\cdots+C_{l-2}^{k+l-3} \] \[ =C_{l-1}^{l}+C_{l-2}^{l}+C_{l-2}^{l+1}+\cdots+C_{l-2}^{k+l-3}=C_{l-1} ^{l+1}+C_{l-2}^{l+1}+\cdots+C_{l-2}^{k+l-3} \] \[ =\cdots=C_{l-1}^{k+l-3}+C_{l-2}^{k+l-3}=C_{l-1}^{k+l-2} \] and similarly \[ \sum_{m=i}^{k}C_{l-2}^{k-m+l-2}=C_{l-2}^{l-2}+C_{l-2}^{l-1}+\cdots +C_{l-2}^{k-i+l-2}=C_{l-1}^{k-i+l-1}. \] Thus (**) holds for $n\geq l>1$, concluding the inductive proof of (**). Now by (**) for $l=n$, we get \[ L_{k}\equiv\overline{C_{c}\left( \left( \mathfrak{F}_{n}\right) _{k,0}\right) }\cong\left( {\displaystyle\bigoplus\limits_{j=0}^{n-1}} \left( \oplus^{C_{j}^{k+j-1}}\overline{C_{c}\left( \left( \mathfrak{F} _{n}\right) _{0,j}\right) }\right) \right) \oplus\left( {\displaystyle\bigoplus\limits_{m=1}^{k}} \left( \oplus^{C_{n-1}^{k-m+n-1}}\overline{C_{c}\left( \left( \mathfrak{F}_{n}\right) _{m,n}\right) }\right) \right) \] \[ =\left( {\displaystyle\bigoplus\limits_{j=0}^{n-1}} \left( \oplus^{C_{j}^{k+j-1}}C\left( \mathbb{C}P_{q}^{n}\right) \left( \left( \otimes^{j}P_{1}\right) \otimes\left( \otimes^{n-j}I\right) \right) \right) \right) \oplus\left( {\displaystyle\bigoplus\limits_{m=1}^{k}} \left( \oplus^{C_{n-1}^{k-m+n-1}}C\left( \mathbb{C}P_{q}^{n}\right) \left( \otimes^{n}P_{1}\right) \right) \right) \] \[ =\left( {\displaystyle\bigoplus\limits_{j=0}^{n-1}} \left( \oplus^{C_{j}^{k+j-1}}C\left( \mathbb{C}P_{q}^{n}\right) \left( \left( \otimes^{j}P_{1}\right) \otimes\left( \otimes^{n-j}I\right) \right) \right) \right) \oplus\left( \oplus^{\sum_{m=1}^{k}C_{n-1} ^{k-m+n-1}}C\left( \mathbb{C}P_{q}^{n}\right) \left( \otimes^{n} P_{1}\right) \right) \] \[ =\left( {\displaystyle\bigoplus\limits_{j=0}^{n-1}} \left( \oplus^{C_{j}^{k+j-1}}C\left( \mathbb{C}P_{q}^{n}\right) \left( \left( \otimes^{j}P_{1}\right) \otimes\left( \otimes^{n-j}I\right) \right) \right) \right) \oplus\left( \oplus^{C_{n}^{k+n-1}}C\left( \mathbb{C}P_{q}^{n}\right) \left( \otimes^{n}P_{1}\right) \right) \] \[ = 
{\displaystyle\bigoplus\limits_{j=0}^{n}} \left( \oplus^{C_{j}^{k+j-1}}C\left( \mathbb{C}P_{q}^{n}\right) \left( \left( \otimes^{j}P_{1}\right) \otimes\left( \otimes^{n-j}I\right) \right) \right) \] where again \[ \sum_{m=1}^{k}C_{n-1}^{k-m+n-1}=C_{n-1}^{n-1}+C_{n-1}^{n}+C_{n-1}^{n+1} +\cdots+C_{n-1}^{k+n-2}=C_{n}^{k+n-1}. \] Thus $L_{k}$ for $k>0$ is implemented by the projection \[ \boxplus_{j=0}^{n}\left( \boxplus^{C_{j}^{k+j-1}}\left( \left( \otimes ^{j}P_{1}\right) \otimes\left( \otimes^{n-j}I\right) \right) \right) . \] $\square$ \end{document}
\begin{document} \title{Sum of squares generalizations for conic sets\thanks{ The authors would like to thank the anonymous reviewers for their helpful comments and suggestions. This work has been partially funded by the National Science Foundation under grant OAC-1835443 and the Office of Naval Research under grant N00014-18-1-2079.}} \author{Lea Kapelevich \and Chris Coey \and Juan Pablo Vielma} \date{\today} \institute{Lea Kapelevich \at Operations Research Center, MIT \\ \email{[email protected]} \and Chris Coey \at Operations Research Center, MIT \\ \email{[email protected]} \and Juan Pablo Vielma \at Google Research and \\Sloan School of Management, MIT \\ \email{[email protected]}, \email{[email protected]} } \maketitle This preprint has not undergone peer review or any post-submission improvements or corrections. The Version of Record of this article is published in Mathematical Programming, and is available online at \url{https://doi.org/10.1007/s10107-022-01831-6}. \\ \begin{abstract} In polynomial optimization problems, nonnegativity constraints are typically handled using the \emph{sum of squares} condition. This can be efficiently enforced using semidefinite programming formulations, or as more recently proposed by \citet{papp2019sum}, using the sum of squares cone directly in a nonsymmetric interior point algorithm. Beyond nonnegativity, more complicated polynomial constraints (in particular, generalizations of the positive semidefinite, second order and $\ell_1$-norm cones) can also be modeled through structured sum of squares programs. We take a different approach and propose using more specialized polynomial cones instead. This can result in lower dimensional formulations, more efficient oracles for interior point methods, or self-concordant barriers with smaller parameters. In most cases, these algorithmic advantages also translate to faster solve times in practice. 
\end{abstract} \keywords{Polynomial optimization \and Sum of squares \and Interior point \and Non-symmetric conic optimization} \subclass{ 90-08 \and 90C25 \and 90C51 } \section{Introduction} \label{sec:introduction} The \emph{sum of squares} (\emph{SOS}) condition is commonly used as a tractable restriction of polynomial nonnegativity. While SOS programs have traditionally been formulated and solved using semidefinite programming (SDP), \citet{papp2019sum} recently demonstrated the effectiveness of a nonsymmetric interior point algorithm in solving SOS programs without SDP formulations. In this note, we focus on structured SOS constraints that can be modeled using more specialized cones. We describe and give barrier functions for three related cones useful for modeling functions of dense polynomials, which we hope will become useful modeling primitives. The first is the cone of \emph{SOS matrices}, which was described by \citet[Section 5.7]{coey2021solving} without derivation. We show that this cone can be computationally favorable to equally low-dimensional SOS formulations. Characterizations of univariate SOS matrix cones in the context of optimization algorithms have previously been given by \citet[Section 6]{genin2003optimization}. However, their use of monomial or Chebyshev bases complicates computations of oracles in an interior point algorithm \citep[Section 3.1]{papp2019sum} and prevents effective generalizations to the multivariate case. The second is an \emph{SOS $\ell_2$-norm} (\emph{SOS-L2}) cone, which can be used to certify pointwise membership in the second order cone for a vector with polynomial components. The third is an \emph{SOS $\ell_1$-norm} (\emph{SOS-L1}) cone, which can be used to certify pointwise membership in the epigraph set of the $\ell_1$-norm function. Although it is straightforward to use SOS representations to approximate these sets, such formulations introduce cones of higher dimension than the constrained polynomial vector. 
We believe we are first to describe how to handle these sets in an interior point algorithm without introducing auxiliary conic variables or constraints. We suggest new barriers, with lower barrier parameters than SOS formulations allow. In the remainder of this section we provide background on SOS polynomials and implementation details of interior point algorithms that are required for later sections. In \cref{sec:nonlinear} we describe the constraints we wish to model using each new cone, and suggest alternative SOS formulations for comparison. In \cref{sec:algebras} we outline how ideas introduced by \citet{papp2013semidefinite} can be used to characterize the cone of SOS matrices and the SOS-L2 cone. \cref{sec:barriers} is focused on improving the parameter of the barriers for the SOS-L2 and SOS-L1 cones. In \cref{sec:implementation} we outline implementation advantages of the new cones. In \cref{sec:experiments} we compare various formulations using a numerical example and conclude in \cref{sec:conclusions}. In what follows, we use $\mathbb{S}^m$, $\mathbb{S}_{+}^m$, and $\mathbb{S}_{++}^m$ to represent the symmetric, positive semidefinite and positive definite matrices respectively with side dimension $m$. For sets, $\cl$ denotes the closure and $\intr$ denotes the interior. $\iin{a..b}$ are the integers in the interval $[a, b]$. $\vert A \vert$ denotes the dimension of a set $A$, and $\sdim(m) = \vert \mathbb{S}^m \vert = \sfrac{m (m+1)}{2}$. We use $\langle \cdot, \cdot \rangle_{A}$ for the inner product on $A$. For a linear operator $M: A \to B$, the adjoint $M^\ast: B \to A$ is the unique operator satisfying $\langle x, M y \rangle_A = \langle y, M^\ast x \rangle_B$ for all $x \in A$ and $y \in B$. $\mathbf{I}_m$ is the identity in $\mathbb{R}^{m \times m}$. $\otimes_K: \mathbb{R}^{a_1 \times a_2} \times \mathbb{R}^{b_1 \times b_2} \to \mathbb{R}^{a_1 b_1 \times a_2 b_2}$ is the usual Kronecker product. 
$\diag$ returns the diagonal elements of a matrix and $\Diag$ maps a vector to a matrix with the vector on the diagonal. All vectors, matrices, and higher order tensors are written in bold font. $s_i$ is the $i$th element of a vector $\mathbf{s}$ and $\mathbf{s}_{i \in \iin{1..N}}$ is the set $\{ \mathbf{s}_1, \ldots, \mathbf{s}_N \}$. If $a, b, c, d$ are scalars, vectors, or matrices, then we use square brackets, e.g. $\begin{bsmallmatrix} a & b \\ c & d \end{bsmallmatrix}$, to denote concatenation into a matrix or vector, or round parentheses, e.g. $(a, b, c, d)$, for a general Cartesian product. If $A$ is a vector space then $A^n$ is the Cartesian product of $n$ spaces $A$. $\mathbb{R}[\mathbf{x}]_{n,d}$ is the ring of polynomials in the variables $\mathbf{x} = (x_1, \ldots, x_n)$ with maximum degree $d$. Following the notation of \citet{papp2019sum}, we use $L = \binom{n+d}{n}$ and $U = \binom{n+2d}{n}$ to denote the dimensions of $\mathbb{R}[\mathbf{x}]_{n,d}$ and $\mathbb{R}[\mathbf{x}]_{n,2d}$ respectively, when $n$ and $d$ are given in the surrounding context. \subsection{The SOS polynomials cone and generic interior point algorithms} \label{sec:introduction:sospolynomials} A polynomial $p(\mathbf{x}) \in \mathbb{R}[\mathbf{x}]_{n,2d}$ is SOS if it can be expressed in the form $p(\mathbf{x}) = \sum_{i \in \iin{1..N}} q_i(\mathbf{x})^2$ for some $N \in \mathbb{N}$ and $q_{i \in \iin{1..N}}(\mathbf{x}) \in \mathbb{R}[\mathbf{x}]_{n,d}$. We denote the set of SOS polynomials in $\mathbb{R}[\mathbf{x}]_{n,2d}$ by ${K}sos$, which is a proper cone in $\mathbb{R}[\mathbf{x}]_{n,2d}$ \citep{nesterov2000squared}. We also say that $\mathbf{s} \in {K}sos$ for $\mathbf{s} \in \mathbb{R}^U$ if $\mathbf{s}$ represents a vector of coefficients of an SOS polynomial under a given basis. We use such \emph{vectorized} definitions interchangeably with functional definitions of polynomial cones. 
To construct a vectorized definition for ${K}sos$, suppose we have a fixed basis for $\mathbb{R}[\mathbf{x}]_{n,2d}$, and let $p_{i \in \iin{1..L}}(\mathbf{x})$ be basis polynomials for $\mathbb{R}[\mathbf{x}]_{n,d}$. Let $\lambda: \iin{1..L}^2 \to \mathbb{R}^U$ be a function such that $\lambda(i,j)$ returns the vector of coefficients of the polynomial $p_i(\mathbf{x}) p_j(\mathbf{x})$ using the fixed basis for $\mathbb{R}[\mathbf{x}]_{n,2d}$. Define the \emph{lifting operator} $\tens{\Lambda}: \mathbb{R}^U \to \mathbb{S}^L$, introduced by \citet{nesterov2000squared}, as: \begin{align} \tens{\Lambda}(\mathbf{s})_{i,j} = \langle \lambda(i,j), \mathbf{s} \rangle_{\mathbb{R}^U} \quad \forall i,j \in \iin{1..L}, \label{eq:general lambda} \end{align} where $\tens{\Lambda}(\mathbf{s})_{i,j}$ is the component in row $i$ and column $j$. Now the cones ${K}sos$ and ${K}sos^\ast$ admit the characterization \citep[Theorem 7.1]{nesterov2000squared}: \begin{subequations} \begin{align} {K}sos &= \lbrace \mathbf{s} \in \mathbb{R}^U: \exists \mathbf{S} \in \mathbb{S}_{+}^L, \mathbf{s} = \tens{\Lambda}^\ast(\mathbf{S}) \rbrace, \\ {K}sos^\ast &= \lbrace \mathbf{s} \in \mathbb{R}^U: \tens{\Lambda}(\mathbf{s}) \in \mathbb{S}_{+}^L \rbrace. \end{align} \label{eq:characterization} \end{subequations} \cref{eq:characterization} shows that the dual cone ${K}sos^{\ast}$ is an inverse linear image of the positive semidefinite (PSD) cone, and therefore has an efficiently computable \emph{logarithmically homogeneous self-concordant barrier} (LHSCB) (see \citep[Definitions 2.3.1, 2.3.2]{nesterov1994interior}). In particular, by linearity of $\tens{\Lambda}$, the function $\mathbf{s} \mapsto -\logdet (\tens{\Lambda}(\mathbf{s}))$ is an LHSCB for ${K}sos^\ast$ \citep[Proposition 5.1.1]{nesterov1994interior} with parameter $L$ (an $L$-LHSCB for short). 
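For concreteness, the following toy computation (ours, not code from the paper) instantiates the lifting operator and the characterization above for $n=1$, $d=1$ (so $L=2$, $U=3$) in the monomial basis, checking dual-cone membership via positive semidefiniteness of $\tens{\Lambda}(\mathbf{s})$ and evaluating the barrier $-\logdet(\tens{\Lambda}(\mathbf{s}))$; all function names below are our own.

```python
import numpy as np

# Toy instance of the lifting operator: n = 1, d = 1, hence L = 2, U = 3.
# Basis p_1 = 1, p_2 = x for R[x]_{1,1}; monomial basis (1, x, x^2) for R[x]_{1,2}.
# lam[(i, j)] holds the coefficient vector of p_i * p_j in the degree-2 basis.
lam = {(1, 1): np.array([1.0, 0.0, 0.0]),   # 1 * 1 = 1
       (1, 2): np.array([0.0, 1.0, 0.0]),   # 1 * x = x
       (2, 2): np.array([0.0, 0.0, 1.0])}   # x * x = x^2

def Lambda(s):
    """Lifting operator: Lambda(s)_{i,j} = <lam(i,j), s>."""
    return np.array([[lam[(1, 1)] @ s, lam[(1, 2)] @ s],
                     [lam[(1, 2)] @ s, lam[(2, 2)] @ s]])

def in_dual_cone(s):
    """s is in Ksos* iff Lambda(s) is positive semidefinite."""
    return bool(np.all(np.linalg.eigvalsh(Lambda(s)) >= -1e-12))

def barrier(s):
    """The L-LHSCB s -> -logdet(Lambda(s)) on the interior of Ksos*."""
    sign, logabsdet = np.linalg.slogdet(Lambda(s))
    assert sign > 0, "s must be in the interior of Ksos*"
    return -logabsdet

assert in_dual_cone(np.array([1.0, 0.0, 1.0]))        # Lambda(s) = I, so s is interior
assert abs(barrier(np.array([1.0, 0.0, 1.0]))) < 1e-12
# Primal side of the characterization: the coefficients of (1 + x)^2 arise as
# Lambda^*(v v^T) with v = (1, 1), i.e. sum_{i,j} S_{ij} lam[(i,j)]:
coeffs = 1.0 * lam[(1, 1)] + 2.0 * lam[(1, 2)] + 1.0 * lam[(2, 2)]
assert np.allclose(coeffs, [1.0, 2.0, 1.0])
```

Note that the coefficient vector $(1,2,1)$ of the SOS polynomial $(1+x)^2$ is not itself in ${K}sos^\ast$: $\tens{\Lambda}((1,2,1))$ is indefinite, illustrating that ${K}sos$ and ${K}sos^\ast$ differ.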
This makes it possible to solve optimization problems over ${K}sos$ or ${K}sos^{\ast}$ with a \emph{generic} primal-dual interior point algorithm in polynomial time \citep{skajaa2015homogeneous}.\footnote{ We direct the interested reader to \citet{faybusovich2002self}, who obtained non-linear barriers for the cone of univariate polynomials generated by Chebyshev systems by computing the universal volume barrier of \citet{nesterov1994interior}, which is unrelated to the SDP representations of these polynomials. } In a generic primal-dual interior point algorithm, very few oracles are needed for each cone in the optimization problem. For example, the algorithm described by \citet{coey2021solving} only requires a membership check, an initial interior point, and evaluations of derivatives of an LHSCB for each cone \emph{or} its dual. Therefore, there is no particular advantage to favoring either ${K}sos$ or ${K}sos^{\ast}$ formulations. Optimizing over ${K}sos$ (or ${K}sos^\ast$) directly instead of building SDP formulations is appealing because the dimension of ${K}sos$ is generally much smaller than the cone dimension in SDP formulations that are amenable to more specialized algorithms \citep{papp2019sum,coey2021solving}. In later sections we describe efficient LHSCBs and membership checks for each cone we introduce. The output of the lifting operator depends on the polynomial basis chosen for $\mathbb{R}[\mathbf{x}]_{n,d}$ as well as the basis for $\mathbb{R}[\mathbf{x}]_{n,2d}$. Following \citet{papp2019sum}, we use a set of Lagrange polynomials that interpolate at some points $\mathbf{t}_{i \in \iin{1..U}}$ as the basis for $\mathbb{R}[\mathbf{x}]_{n,2d}$ and the multivariate Chebyshev polynomials \citep{hoffman1988generalized} as the basis for $\mathbb{R}[\mathbf{x}]_{n,d}$. 
These choices give the particular lifting operator we implement, $\tens{\Lambda}_{\SOS}(\mathbf{s})$: \begin{align} \tens{\Lambda}_{\SOS}(\mathbf{s})_{i,j} = \tsum{u \in \iin{1..U}} p_i(\mathbf{t}_u)p_j(\mathbf{t}_u)s_u & \quad \forall i, j \in \iin{1..L} . \label{eq:scalarlambda} \end{align} Equivalently, $\tens{\Lambda}_{\SOS}(\mathbf{s}) = \mathbf{P}^\top \Diag(\mathbf{s}) \mathbf{P}$, where $P_{u, \ell} = p_\ell(\mathbf{t}_u)$ for all $u \in \iin{1..U}, \ell \in \iin{1..L}$. The adjoint $\tens{\Lambda}_{\SOS}^\ast: \mathbb{S}^L \to \mathbb{R}^U$ is given by $\tens{\Lambda}_{\SOS}^\ast(\mathbf{S}) = \diag(\mathbf{P} \mathbf{S} \mathbf{P}^\top)$. \citet{papp2019sum} show that the Lagrange basis gives rise to expressions for the gradient and Hessian of the barrier for ${K}sos^\ast$ that are computable in $\mathcal{O}(LU^2)$ time for any $d, n \geq 1$. Although we assume for simplicity that $p$ is a dense basis for $\mathbb{R}[\mathbf{x}]_{n,d}$, this is without loss of generality. A modeler with access to a suitable sparse basis of $\bar{L} < L$ polynomials in $\mathbb{R}[\mathbf{x}]_{n,d}$ and $\bar{U} < U$ interpolation points could use \cref{eq:scalarlambda} and obtain a barrier with parameter $\bar{L}$. \section{Polynomial generalizations for three conic sets} \label{sec:nonlinear} The first set we consider are the polynomial matrices $\mathbf{Q}(\mathbf{x}) \in \mathbb{R}[\mathbf{x}]_{n,2d}^{m \times m}$ (i.e. $m \times m$ matrices with components that are polynomials in $n$ variables of maximum degree $2d$) \footnote{ We assume that polynomial components in vectors and matrices involve the same variables and have the same maximum degree, to avoid detracting from the key ideas in this paper. This assumption could be removed at the expense of more cumbersome notation. } satisfying the constraint: \begin{align} \mathbf{Q}(\mathbf{x}) \succeq 0 \quad \forall \mathbf{x}. 
\label{eq:polypsd} \end{align} One of the first applications of matrix SOS constraints was by \citet{henrion2006convergent}. The moment-SOS hierarchy was extended from the scalar case to the matrix case, using a suitable extension of Putinar's Positivstellensatz studied by \citet{hol2004sum} and \citet{kojima2003sums}. This constraint has various applications in statistics, control, and engineering \citep[][]{aylward2007explicit,aylward2008stability,doherty2004complete,hall2019engineering}. A tractable restriction for \cref{eq:polypsd} is given by the SOS formulation: \begin{align} \mathbf{y}^\top \mathbf{Q}(\mathbf{x}) \mathbf{y} \in {K}sos \quad \forall {\mathbf{y}} \in \mathbb{R}^m. \label{eq:scalarsospsd} \end{align} This formulation is sometimes implemented in practice (e.g. \citep{legat2017sum}) and requires an SOS cone of dimension $U\sdim(m)$ (by exploiting the fact that all terms are bilinear in the $\mathbf{y}$ variables). It is well known that \cref{eq:scalarsospsd} is equivalent to restricting $\mathbf{Q}(\mathbf{x})$ to be an \emph{SOS matrix} of the form $\mathbf{Q}(\mathbf{x}) = \mathbf{M}(\mathbf{x})^\top \mathbf{M}(\mathbf{x})$ for some $N \in \mathbb{N}$ and $\mathbf{M}(\mathbf{x}) \in \mathbb{R}[\mathbf{x}]_{n,d}^{N \times m}$ \citep[Definition 3.76]{blekherman2012semidefinite}. To be consistent in terminology with the other cones we introduce, we refer to SOS matrices as \emph{SOS-PSD} matrices, or belonging to ${K}sospsd$. We show how to characterize ${K}sospsd$ and use it directly in an interior point algorithm in \cref{sec:algebras}. The second set we consider are the polynomial vectors $\mathbf{q(x)} \in \mathbb{R}[\mathbf{x}]_{n,2d}^m$ satisfying: \begin{align} {q}_1(\mathbf{x}) \geq \sqrt{\tsum{i \in \iin{2..m}} ( q_i(\mathbf{x}) )^2 } \quad \forall \mathbf{x}, \label{eq:polysoc} \end{align} and hence requiring $\mathbf{q(x)}$ to be in the epigraph set of the $\ell_2$-norm function (second order cone) pointwise (cf. 
\cref{eq:polypsd} requiring the polynomial matrix to be in the PSD cone). A tractable restriction for this constraint is given by the SOS formulation: \begin{align} \label{eq:arrowpsd} \mathbf{y}^\top \arrow(\mathbf{q(\mathbf{x})}) \mathbf{y} \in {K}sos \quad \forall {\mathbf{y}} \in \mathbb{R}^m, \end{align} where $\arrow: \mathbb{R}[\mathbf{x}]_{n,2d}^m \to \mathbb{R}[\mathbf{x}]_{n,2d}^{m \times m}$ is defined by: \begin{align} \begin{split} \arrow(\mathbf{p}(\mathbf{x})) = \begin{bmatrix} p_1(\mathbf{x}) & \bar{\mathbf{p}}(\mathbf{x})^\top \\ \bar{\mathbf{p}}(\mathbf{x}) & p_1(\mathbf{x}) \mathbf{I}_{m-1} \end{bmatrix}, \\ \mathbf{p}(\mathbf{x}) = ( p_1(\mathbf{x}), \bar{\mathbf{p}}(\mathbf{x}) ) \in \mathbb{R}[\mathbf{x}]_{n,2d} \times \mathbb{R}[\mathbf{x}]_{n,2d}^{m-1} . \end{split} \end{align} Due to the equivalence between \cref{eq:scalarsospsd} and membership in ${K}sospsd$, \cref{eq:arrowpsd} is equivalent to requiring that $\mathbf{q}(\mathbf{x})$ belongs to the cone we denote ${K}_{\arrow \SOSpsd}$ defined by: \begin{align} \label{eq:arrowpsdpsd} {K}_{\arrow \SOSpsd} = \{ \mathbf{q}(\mathbf{x}) \in \mathbb{R}[\mathbf{x}]_{n,2d}^m: \arrow({\mathbf{q}(\mathbf{x})}) \in {K}sospsd \} . \end{align} Membership in ${K}_{\arrow \SOSpsd}$ ensures \cref{eq:polysoc} holds due to the SDP representation of the second order cone \citep{alizadeh2003second}, and the fact that the SOS-PSD condition certifies pointwise positive semidefiniteness. 
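The effect of $\arrow$ is easiest to see pointwise: for a numeric vector $\mathbf{q}$, the matrix $\arrow(\mathbf{q})$ is PSD exactly when $q_1 \geq \Vert \bar{\mathbf{q}} \Vert_2$, by the SDP representation of the second order cone. A minimal numeric sketch (our own, not code from the paper):

```python
import numpy as np

def arrow(q):
    """Arrow matrix of a numeric vector q = (q1, qbar); it is PSD exactly when
    q1 >= ||qbar||_2, i.e. when q lies in the second order cone."""
    q = np.asarray(q, dtype=float)
    m = len(q)
    A = q[0] * np.eye(m)     # q1 on the whole diagonal
    A[0, 1:] = q[1:]         # qbar in the first row ...
    A[1:, 0] = q[1:]         # ... and first column
    return A

# One member of the second order cone and one non-member: the PSD test agrees
# with the norm inequality in both cases.
for q in ([3.0, 1.0, 2.0], [1.0, 1.0, 1.0]):
    is_psd = bool(np.all(np.linalg.eigvalsh(arrow(q)) >= -1e-12))
    in_soc = q[0] >= np.linalg.norm(q[1:])
    assert is_psd == in_soc
```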
An alternative restriction of \cref{eq:polysoc} is described by the set we denote ${K}sosso$, which is not representable by the usual scalar polynomial SOS cone in general: \begin{align} {K}sosso = \left\{ \begin{aligned} \mathbf{q}(\mathbf{x}) \in \mathbb{R}[\mathbf{x}]_{n,2d}^m : \exists N \in \mathbb{N}, \mathbf{p}_{i \in \iin{1..N}}(\mathbf{x}) \in \mathbb{R}[\mathbf{x}]_{n,d}^m, \\ \mathbf{q}(\mathbf{x}) = \tsum{i \in \iin{1..N}} \mathbf{p}_i(\mathbf{x}) \circ \mathbf{p}_i(\mathbf{x}) \end{aligned} \right\}, \label{eq:functionalksosso} \end{align} where $\circ: \mathbb{R}^m \times \mathbb{R}^m \to \mathbb{R}^m$ is defined by: \begin{align} \mathbf{x} \circ \mathbf{y} = \begin{bmatrix} \mathbf{x}^\top \mathbf{y} \\ x_1 \bar{\mathbf{y}} + y_1 \bar{\mathbf{x}} \end{bmatrix} , \quad \mathbf{x} = (x_1, \bar{\mathbf{x}}) , \quad \mathbf{y} = (y_1, \bar{\mathbf{y}}) \in \mathbb{R} \times \mathbb{R}^{m-1} , \label{eq:circ} \end{align} and $\circ: \mathbb{R}[\mathbf{x}]^m_{n,d} \times \mathbb{R}[\mathbf{x}]^m_{n,d} \to \mathbb{R}[\mathbf{x}]^m_{n,2d}$ on polynomial vectors is defined analogously. This set was also studied by Kojima and Muramatsu with a focus on extending Positivstellensatz results \citep{kojima2007extension}. The validity of ${K}sosso$ as a restriction of \cref{eq:polysoc} follows from the characterization of the second order cone as a \emph{cone of squares} \citep[Section 4]{alizadeh2003second}. For this reason we will refer to the elements of ${K}sosso$ as the \emph{SOS-L$2$} polynomials. For a polynomial vector in $\mathbb{R}[\mathbf{x}]_{n,2d}^m$, the dimension of ${K}sosso$ is $U m$, which is favorable to the dimension $U \sdim(m)$ of ${K}sos$ required for \cref{eq:arrowpsd} or ${K}sospsd$ in \cref{eq:arrowpsdpsd}. In addition, we show in \cref{sec:barriers:so} that ${K}sosso$ admits an LHSCB with smaller parameter than ${K}_{\arrow \SOSpsd}$. 
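Numerically, the cone-of-squares fact behind ${K}sosso$ reduces, for a constant vector, to the elementary inequality $\Vert 2 x_1 \bar{\mathbf{x}} \Vert_2 \leq x_1^2 + \Vert \bar{\mathbf{x}} \Vert_2^2$: every square $\mathbf{x} \circ \mathbf{x}$ lies in the second order cone. A short sketch of \cref{eq:circ} (our own illustration):

```python
import numpy as np

def circ(x, y):
    """The bilinear product of eq. (circ) on R^m: the Jordan product
    associated with the second order cone."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

x = np.array([2.0, 1.0, -1.0, 0.5])
s = circ(x, x)                           # a "square": s = (||x||^2, 2 x_1 xbar)
assert np.isclose(s[0], np.dot(x, x))
assert s[0] >= np.linalg.norm(s[1:])     # every square lies in the second order cone
```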
However, we conjecture that for general $n$ and $d$, ${K}sosso \subsetneq {K}_{\arrow \SOSpsd}$ (for example, consider the vector $[1 + x^2, 1 - x^2, 2 x]$, which belongs to ${K}_{\arrow \SOSpsd}$ but not ${K}sosso$). Our experiments in \cref{sec:experiments} also include instances where using ${K}sosso$ and ${K}_{\arrow \SOSpsd}$ gives different objective values. A third formulation can be obtained by modifying the SDP formulation for ${K}_{\arrow \SOSpsd}$ to account for all sparsity in the $\mathbf{y}$ monomials (by introducing a specialized cone for the Gram matrix of $\mathbf{y}^\top \arrow(\mathbf{q}(\mathbf{x})) \mathbf{y}$). However, this approach suffers from requiring $\mathcal{O}(L^2)$ conic variables for each polynomial in $\mathbf{q}(\mathbf{x})$, so we choose to focus on ${K}sos$ and ${K}sospsd$ formulations for ${K}_{\arrow \SOSpsd}$ instead. The third and final set we consider is also described through a constraint on a polynomial vector $\mathbf{q}(\mathbf{x}) \in \mathbb{R}[\mathbf{x}]_{n,2d}^m$. This constraint is given by: \begin{equation} {q}_1(\mathbf{x}) \geq \tsum{i \in \iin{2..m}} \vert {q}_i(\mathbf{x}) \vert\quad \forall \mathbf{x}, \label{eq:polyl1} \end{equation} and hence requires the polynomial vector to be in the epigraph set of the $\ell_1$-norm function (\emph{$\ell_1$-norm cone}) pointwise. A tractable restriction for this constraint is given by the SOS formulation: \begin{subequations} \begin{align} q_1(\mathbf{x}) - \tsum{i \in \iin{2..m}} ( p_{i}(\mathbf{x})^{+} + p_{i}(\mathbf{x})^{-} ) & \in {K}sos, \\ q_i(\mathbf{x}) &= p_{i}(\mathbf{x})^{+} - p_{i}(\mathbf{x})^{-} & \forall i \in \iin{2..m}, \\ p_{i}(\mathbf{x})^{+}, p_{i}(\mathbf{x})^{-} & \in {K}sos & \forall i \in \iin{2..m}, \end{align} \label{eq:extl1} \end{subequations} which uses auxiliary polynomial variables $p_{i \in \iin{2..m}}^+(\mathbf{x}) \in \mathbb{R}[\mathbf{x}]_{n,2d}$ and $p_{i \in \iin{2..m}}^-(\mathbf{x}) \in \mathbb{R}[\mathbf{x}]_{n,2d}$. 
We refer to the projection of \cref{eq:extl1} onto $\mathbf{q(x)} \in \mathbb{R}[\mathbf{x}]_{n,2d}^m$ as ${K}soslo$ and to its elements as the \emph{SOS-L$1$} polynomials. Note that the dimension of ${K}soslo$ is $U m$, while \cref{eq:extl1} requires $2m-1$ SOS cones of dimension $U$ and $U(m - 1)$ additional equality constraints. In \cref{sec:barriers:l1} we derive an $L m$-LHSCB that allows us to optimize over ${K}soslo$ directly, while \cref{eq:extl1} would require an LHSCB with parameter $L (2 m - 1)$. We summarize some key properties of the new cones and SOS formulations in \cref{tab:sets}: the total dimension of cones involved, the parameter of an LHSCB for the conic sets, the time complexity to calculate the Hessian of the LHSCB (discussed in \cref{sec:implementation}), the level of conservatism of each new conic set compared to its alternative SOS formulation, and the number of auxiliary equality constraints and variables that need to be added in an optimization problem. \begin{table}[!htb] \centering \begin{tabular}{{l}*{6}{l}} \toprule & \multicolumn{2}{l}{SOS-PSD} & \multicolumn{2}{l}{SOS-L2} & \multicolumn{2}{l}{SOS-L1} \\ \cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7} & ${K}sospsd$ & \eqref{eq:scalarsospsd} & ${K}sosso$ & \eqref{eq:arrowpsd} & ${K}soslo$ & \eqref{eq:extl1} \\ \midrule cone dim. & $U \sdim(m)$ & $U \sdim(m)$ & $U m$ & $U \sdim(m)$ & $U m$ & $U (2m - 1)$ \\ parameter & $L m$ & $L m$ & $2 L$ & $L m$ & $L m$ & $L (2m - 1)$ \\ Hessian flops & $\mathcal{O}(L U^2 m^3)$ & $\mathcal{O}(L U^2 m^5)$ & $\mathcal{O}(L U^2 m^2)$ & $\mathcal{O}(L U^2 m^5)$ & $\mathcal{O}(L U^2 m)$ & $\mathcal{O}(L U^2 m)$ \\ conservatism & equal & - & greater & - & equal & - \\ equalities & 0 & 0 & 0 & 0 & 0 & $U (m-1)$ \\ variables & 0 & 0 & 0 & 0 & 0 & $2 U (m-1)$ \\ \bottomrule \end{tabular} \caption{Properties of new cones compared to SOS formulations. 
} \label{tab:sets} \end{table} \section{SOS-PSD and SOS-L2 cones from general algebras} \label{sec:algebras} The ideas introduced by \citet{papp2013semidefinite} relating to SOS cones in \emph{general algebras} allow us to characterize ${K}sospsd$ and ${K}sosso$ without auxiliary SOS polynomial constraints. As in \citet{papp2013semidefinite}, let us define $(A,B,\diamond)$ as a \emph{general algebra} if $A, B$ are vector spaces and $\diamond: A \times A \to B$ is a bilinear product that satisfies the distributive property. For a general algebra $(A,B,\diamond)$, \citet{papp2013semidefinite} define the SOS cone ${K}_\diamond$: \begin{align} {K}_\diamond = \{ b \in B: \exists N \in \mathbb{N}, a_{i \in \iin{1..N}} \in A, b = \tsum{i \in \iin{1..N}} a_i \diamond a_i \}. \end{align} For instance, $\mathbb{S}_+^m$ is equal to the SOS cone of $(\mathbb{R}^m, \mathbb{S}^m, \bar{\diamond})$ for $\bar{\diamond}$ given by $\mathbf{x} \bar{\diamond} \mathbf{y} = \tfrac{1}{2} (\mathbf{x} \mathbf{y}^\top + \mathbf{y} \mathbf{x}^\top)$. The second-order cone is equal to the SOS cone of $(\mathbb{R}^m, \mathbb{R}^m, \circ)$, where $\circ$ is the Jordan product. ${K}sos$ is equal to the SOS cone of $(\mathbb{R}[\mathbf{x}]_{n,d}, \mathbb{R}[\mathbf{x}]_{n,2d}, \cdot)$, where $\cdot$ is the product of polynomials. To obtain our vectorized representation of ${K}sos$ we can redefine the function $\lambda: \mathbb{R}^L \times \mathbb{R}^L \to \mathbb{R}^U$ so that for $\mathbf{p}_i, \mathbf{p}_j \in \mathbb{R}^L$ representing coefficients of any polynomials in $\mathbb{R}[\mathbf{x}]_{n,d}$, $\lambda(\mathbf{p}_i, \mathbf{p}_j)$ returns the vector of coefficients of the product of the polynomials. Then ${K}sos$ is equal to the SOS cone of $(\mathbb{R}^L, \mathbb{R}^U, \lambda)$. As we describe in \cref{sec:algebras:liftingoperators}, \citet{papp2013semidefinite} also show how to build lifting operators for general algebras.
This allows us to construct membership checks and easily computable LHSCBs for ${K}sospsd^\ast$ and ${K}sosso^\ast$ once we represent them as SOS cones of \emph{tensor products} of algebras. The tensor product of two algebras $(A_1, B_1, \diamond_1)$ and $(A_2, B_2, \diamond_2)$ is a new algebra $(A_1 \otimes A_2, B_1 \otimes B_2, \diamond_1 \otimes \diamond_2)$, where $\diamond_1 \otimes \diamond_2$ is defined via its action on \emph{elementary tensors}. For $\mathbf{u}_1, \mathbf{v}_1 \in A_1$ and $\mathbf{u}_2, \mathbf{v}_2 \in A_2$: \begin{align} (\mathbf{u}_1 \otimes \mathbf{u}_2) \, (\diamond_1 \otimes \diamond_2) \, (\mathbf{v}_1 \otimes \mathbf{v}_2) = (\mathbf{u}_1 \diamond_1 \mathbf{v}_1) \otimes (\mathbf{u}_2 \diamond_2 \mathbf{v}_2). \end{align} The algebra we are interested in for a functional representation of ${K}sospsd$ is the tensor product of $(\mathbb{R}[\mathbf{x}]_{n,d}, \mathbb{R}[\mathbf{x}]_{n,2d}, \cdot)$ with $(\mathbb{R}^m, \mathbb{S}^m, \bar{\diamond})$. We can think of elements in $\mathbb{R}[\mathbf{x}]_{n,d} \otimes \mathbb{R}^m$ as polynomial vectors in $\mathbb{R}[\mathbf{x}]_{n,d}^m$, and $\mathbb{R}[\mathbf{x}]_{n,2d} \otimes \mathbb{S}^m$ as the symmetric polynomial matrices in $\mathbb{R}[\mathbf{x}]_{n,2d}^{m \times m}$. The SOS cone of $(\mathbb{R}[\mathbf{x}]_{n,d} \otimes \mathbb{R}^m, \mathbb{R}[\mathbf{x}]_{n,2d} \otimes \mathbb{S}^m, \cdot \otimes \bar{\diamond})$ corresponds to the polynomial matrices that can be written as $\tsum{i \in \iin{1..N}} \mathbf{m}_i(\mathbf{x}) \mathbf{m}_i(\mathbf{x})^\top$ with $\mathbf{m}_i(\mathbf{x}) \in \mathbb{R}[\mathbf{x}]^m$ for all $i \in \iin{1..N}$ \citep[Section 4.3]{papp2013semidefinite}, which is exactly ${K}sospsd$. Equivalently, a vectorized representation of ${K}sospsd$ can be characterized as the SOS cone of $(\mathbb{R}^L \otimes \mathbb{R}^m, \mathbb{R}^U \otimes \mathbb{S}^m, \lambda \otimes \bar{\diamond})$.
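As a small numeric illustration of the characterization of ${K}sospsd$ as sums $\tsum{i} \mathbf{m}_i(\mathbf{x}) \mathbf{m}_i(\mathbf{x})^\top$, one can evaluate such a sum at sample points and confirm pointwise positive semidefiniteness (a necessary condition). The sketch below uses an assumed univariate monomial representation with arbitrary small dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
m, deg, N = 3, 2, 4  # vector length, polynomial degree, number of summands
# coeffs[i] is an m x (deg+1) matrix: row j holds the univariate monomial
# coefficients of the j-th entry of the polynomial vector m_i(x).
coeffs = rng.standard_normal((N, m, deg + 1))

def eval_vec(c, x):
    # Evaluate a polynomial vector with coefficient matrix c at scalar x.
    powers = x ** np.arange(c.shape[1])
    return c @ powers

for x in np.linspace(-2.0, 2.0, 9):
    # Sum of outer products m_i(x) m_i(x)^T must be PSD at every x.
    M = sum(np.outer(eval_vec(c, x), eval_vec(c, x)) for c in coeffs)
    evals = np.linalg.eigvalsh(M)
    assert evals.min() >= -1e-9  # pointwise PSD up to rounding
```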
We can think of $\mathbb{R}^L \otimes \mathbb{R}^m$ as $\mathbb{R}^{L \times m}$ and we can think of $\mathbb{R}^U \otimes \mathbb{S}^m$ as a subspace of $\mathbb{R}^{U \times m \times m}$ that represents the coefficients of symmetric polynomial matrices. Likewise, the algebra we are interested in for a functional representation of ${K}sosso$ is the tensor product of $(\mathbb{R}[\mathbf{x}]_{n,d}, \mathbb{R}[\mathbf{x}]_{n,2d}, \cdot)$ with $(\mathbb{R}^m, \mathbb{R}^m, \circ)$. We can think of $\mathbb{R}[\mathbf{x}]_{n,d} \otimes \mathbb{R}^m$ and $\mathbb{R}[\mathbf{x}]_{n,2d} \otimes \mathbb{R}^m$ as $\mathbb{R}[\mathbf{x}]_{n,d}^m$ and $\mathbb{R}[\mathbf{x}]_{n,2d}^m$ respectively. The SOS cone of the tensor product of these algebras then corresponds to ${K}sosso$ due to \cref{eq:functionalksosso}. A vectorized representation of ${K}sosso$ may be characterized as the SOS cone of $(\mathbb{R}^L \otimes \mathbb{R}^m, \mathbb{R}^U \otimes \mathbb{R}^m, \lambda \otimes \circ)$. We can think of $\mathbb{R}^U \otimes \mathbb{R}^m$ as the coefficients of polynomial vectors, represented in $\mathbb{R}^{U \times m}$. \subsection{Lifting operators for SOS-PSD and SOS-L2} \label{sec:algebras:liftingoperators} The lifting operator of $(A, B, \diamond)$, when $A$ and $B$ are finite dimensional, is defined by \citet{papp2013semidefinite} as the function $\tens{\Lambda}_\diamond: B \to \mathbb{S}^{\vert A \vert}$ satisfying $\langle a_1, \tens{\Lambda}_\diamond(b) a_2 \rangle_A = \langle b, a_1 \diamond a_2 \rangle_B$ for all $a_1, a_2 \in A$, $b \in B$. This leads to the following descriptions of ${K}_\diamond$ and ${K}_\diamond^\ast$ \citep[Theorem 3.2]{papp2013semidefinite}: \begin{subequations} \begin{align} {K}_\diamond &= \{ \mathbf{s} \in B: \exists \mathbf{S} \succeq 0, \mathbf{s} = \tens{\Lambda}_\diamond^\ast(\mathbf{S}) \}, \label{eq:algebras:primal} \\ {K}_\diamond^\ast &= \{ \mathbf{s} \in B : \tens{\Lambda}_\diamond(\mathbf{s}) \succeq 0 \}.
\label{eq:algebras:dual} \end{align} \label{eq:algebras} \end{subequations} Recall that in order to use either ${K}_\diamond$ or ${K}_\diamond^\ast$ in a generic interior point algorithm, we require efficient oracles for a membership check and derivatives of an LHSCB of ${K}_\diamond$ or ${K}_\diamond^\ast$. If $\tens{\Lambda}_\diamond(\mathbf{s})$ is efficiently computable, \cref{eq:algebras:dual} provides a membership check for ${K}_\diamond^\ast$. Furthermore, an LHSCB for ${K}_\diamond^\ast$ is given by $\mathbf{s} \mapsto -\logdet ({\tens{\Lambda}}_\diamond(\mathbf{s}))$ with barrier parameter $\vert A \vert$ due to the linearity of ${\tens{\Lambda}}_\diamond$ \citep[Proposition 5.1.1]{nesterov1994interior}. The following lemma describes how to compute $\tens{\Lambda}_\diamond(\mathbf{s})$ for a tensor product algebra. \begin{lemma} \label{lemma:tensorlift} \citep[Lemma 5.1]{papp2013semidefinite}: If $\mathbf{w}_1 \in B_1$ and $\mathbf{w}_2 \in B_2$, then: \begin{align} \tens{\Lambda}_{\diamond_1 \otimes \diamond_2}(\mathbf{w}_1 \otimes \mathbf{w}_2) = \tens{\Lambda}_{\diamond_1}(\mathbf{w}_1) \otimes_K \tens{\Lambda}_{\diamond_2}(\mathbf{w}_2). \end{align} \end{lemma} Let us define $\otimes: \mathbb{R}^U \times \mathbb{S}^m \to \mathbb{R}^{U \times m \times m}$ such that $(\mathbf{u} \otimes \mathbf{V})_{i,j,k} = u_i V_{j,k}$ and let us represent the coefficients of a polynomial matrix by a tensor $\mathbf{S} \in \mathbb{R}^{U \times m \times m}$. Then we may write $\mathbf{S} = \tsum{i \in \iin{1..m}, j \in \iin{1..i}} \mathbf{S}_{i,j} \otimes \mathbf{E}_{i,j}$, where $\mathbf{E}_{i,j} \in \mathbb{R}^{m \times m}$ is the matrix with ones in positions $(i,j)$ and $(j,i)$ and zeros elsewhere, and $\mathbf{S}_{i,j} \in \mathbb{R}^U$ are the coefficients of the polynomial in row $i$ and column $j$.
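The defining identity $\langle a_1, \tens{\Lambda}_\diamond(b) a_2 \rangle_A = \langle b, a_1 \diamond a_2 \rangle_B$ is easy to probe numerically. For the algebra $(\mathbb{R}^m, \mathbb{S}^m, \bar{\diamond})$, the lifting operator is simply the identity map $\tens{\Lambda}_{\bar{\diamond}}(\mathbf{B}) = \mathbf{B}$, since $a_1^\top \mathbf{B} a_2 = \langle \mathbf{B}, a_1 \bar{\diamond} a_2 \rangle$ for symmetric $\mathbf{B}$. A minimal sketch on random data:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 4
A = rng.standard_normal((m, m))
B = A + A.T                      # arbitrary symmetric element b of B = S^m
a1, a2 = rng.standard_normal(m), rng.standard_normal(m)

# a1 bar-diamond a2 = (a1 a2' + a2 a1') / 2, the symmetrized outer product.
sym_outer = 0.5 * (np.outer(a1, a2) + np.outer(a2, a1))
lhs = a1 @ B @ a2                 # <a1, Lambda(B) a2> with Lambda(B) = B
rhs = np.sum(B * sym_outer)       # Frobenius inner product <B, a1 bd a2>
assert np.isclose(lhs, rhs)
```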
Applying \cref{lemma:tensorlift}, the lifting operator for ${K}sospsd$, $\tens{\Lambda}_{\SOS}psd: \mathbb{R}^{U \times m \times m} \to \mathbb{S}^{L m}$, is: \begin{subequations} \begin{align} \tens{\Lambda}_{\SOS}psd(\mathbf{S}) = \tens{\Lambda}_{\lambda \otimes \bar{\diamond}}(\mathbf{S}) &= \tens{\Lambda}_{\lambda \otimes \bar{\diamond}}(\tsum{i \in \iin{1..m}, j \in \iin{1..i}} \mathbf{S}_{i,j} \otimes \mathbf{E}_{i,j}) \\ &= \tsum{i \in \iin{1..m}, j \in \iin{1..i}} \tens{\Lambda}_{\SOS}(\mathbf{S}_{i,j}) \otimes_K \mathbf{E}_{i,j} . \end{align} \label{eq:lampsd} \end{subequations} The output is a block matrix, where each $L \times L$ submatrix in the $i$th group of rows and $j$th group of columns is $\tens{\Lambda}_{\SOS}psd(\mathbf{S})_{i,j} = \tens{\Lambda}_{\SOS}(\mathbf{S}_{i,j})$ for all $i, j \in \iin{1..m}$. The adjoint operator $\tens{\Lambda}_{\SOS}psd^\ast: \mathbb{S}^{L m} \to \mathbb{R}^{U \times m \times m}$ may also be defined blockwise, $\tens{\Lambda}_{\SOS}psd^{\ast}(\mathbf{S})_{i,j} = \tens{\Lambda}_{\SOS}^\ast(\mathbf{S}_{i,j})$ for all $i, j \in \iin{1..m}$, where $\mathbf{S}_{i,j} \in \mathbb{R}^{L \times L}$ is the $(i,j)$th submatrix in $\mathbf{S}$. Likewise, we use a tensor $\mathbf{s} \in \mathbb{R}^{U \times m}$ to describe the coefficients of a polynomial vector, and write $\mathbf{s}_i \in \mathbb{R}^U$ to denote the vector of coefficients of the polynomial in component $i$.
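To make \cref{eq:lampsd} concrete, the following sketch assembles the lifted matrix both blockwise and as a sum of Kronecker products with the $\mathbf{E}_{i,j}$ matrices, assuming the representation $\tens{\Lambda}_{\SOS}(\mathbf{s}) = \mathbf{P}^\top \diag(\mathbf{s}) \mathbf{P}$; the dimensions are arbitrary and illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)
U, L, m = 6, 3, 2
P = rng.standard_normal((U, L))

def lam_sos(s):
    # Scalar SOS lifting operator: Lambda_SOS(s) = P' Diag(s) P.
    return P.T @ np.diag(s) @ P

S = rng.standard_normal((U, m, m))
S = 0.5 * (S + S.transpose(0, 2, 1))     # symmetric coefficient tensor

# Blockwise assembly: the (i, j) block of size L x L is Lambda_SOS(S_{ij}).
blockwise = np.block([[lam_sos(S[:, i, j]) for j in range(m)]
                      for i in range(m)])

# Kronecker assembly summing over the lower triangle with E_{ij} matrices.
# np.kron argument order is chosen so the E factor indexes the blocks.
kron = np.zeros((L * m, L * m))
for i in range(m):
    for j in range(i + 1):
        E = np.zeros((m, m))
        E[i, j] = E[j, i] = 1.0
        kron += np.kron(E, lam_sos(S[:, i, j]))
assert np.allclose(blockwise, kron)
```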
Applying \cref{lemma:tensorlift} again, we obtain the (blockwise) definition of the lifting operator for ${K}sosso$, $\tens{\Lambda}_{\SOS}so: \mathbb{R}^{U \times m} \to \mathbb{S}^{Lm}$: \begin{align} \tens{\Lambda}_{\SOS}so(\mathbf{s})_{i,j} = \begin{cases} \tens{\Lambda}_{\SOS}(\mathbf{s}_1) & i = j \\ \tens{\Lambda}_{\SOS}(\mathbf{s}_j) & i = 1, j \neq 1 \\ \tens{\Lambda}_{\SOS}(\mathbf{s}_i) & i \neq 1, j = 1 \\ 0 & \text{otherwise} \end{cases} & \quad \forall i, j \in \iin{1..m}, \end{align} where $\tens{\Lambda}_{\SOS}so(\mathbf{s})_{i,j} \in \mathbb{S}^L$ is the $(i,j)$th submatrix of $\tens{\Lambda}_{\SOS}so(\mathbf{s})$. Thus $\tens{\Lambda}_{\SOS}so(\mathbf{s})$ has a block arrowhead structure. The output of the adjoint operator $\tens{\Lambda}_{\SOS}so^\ast: \mathbb{S}^{L m} \to \mathbb{R}^{U \times m}$ may be defined as: \begin{align} \tens{\Lambda}_{\SOS}so^\ast(\mathbf{S})_i = \begin{cases} \tsum{j \in \iin{1..m}} \tens{\Lambda}_{\SOS}^{\ast}(\mathbf{S}_{j,j}) & i = 1 \\ \tens{\Lambda}_{\SOS}^{\ast}(\mathbf{S}_{1,i}) + \tens{\Lambda}_{\SOS}^{\ast}(\mathbf{S}_{i,1}) & i \neq 1 \end{cases} & \quad \forall i \in \iin{1..m}, \label{eq:ksossodef} \end{align} where $\tens{\Lambda}_{\SOS}so^\ast(\mathbf{S})_i \in \mathbb{R}^U$ is the $i$th slice of $\tens{\Lambda}_{\SOS}so^\ast(\mathbf{S})$ and $\mathbf{S}_{i,j} \in \mathbb{R}^{L \times L}$ is the $(i,j)$th block in $\mathbf{S}$ for all $i, j \in \iin{1..m}$. \section{Efficient barriers for SOS-L2 and SOS-L1} \label{sec:barriers} As for ${K}sospsd^\ast$ and ${K}sosso^\ast$, we show that a barrier for ${K}soslo^\ast$ can be obtained by composing a linear lifting operator with the $\logdet$ barrier. This is sufficient to optimize over ${K}sospsd$, ${K}sosso$ and ${K}soslo$ without high dimensional SDP formulations. However, for ${K}sosso^\ast$ and ${K}soslo^\ast$ we can derive improved barriers by composing nonlinear functions with the $\logdet$ barrier instead. 
We show that these compositions are indeed LHSCBs. \subsection{SOS-L2} \label{sec:barriers:so} Recall \cref{eq:algebras:dual} suggests that checking membership in ${K}sosso^\ast$ amounts to checking positive definiteness of $\tens{\Lambda}_{\SOS}so(\mathbf{s})$ with side dimension $L m$. This membership check corresponds to a \emph{straightforward} LHSCB with parameter $L m$ given by $\mathbf{s} \mapsto -\logdet(\tens{\Lambda}_{\SOS}so(\mathbf{s}))$. We now show that by working with a Schur complement of $\tens{\Lambda}_{\SOS}so(\mathbf{s})$, we obtain a membership check for ${K}sosso^\ast$ that requires factorizations of only two matrices with side dimension $L$ and implies an LHSCB with parameter $2 L$. Let $\tens{\Pi}: \mathbb{R}^{U \times m} \to \mathbb{S}^L$ return the Schur complement: \begin{align} \tens{\Pi}(\mathbf{s}) = {\tens{\Lambda}_{\SOS}}(\mathbf{s}_{1}) - \tsum{i \in \iin{2..m}} {\tens{\Lambda}_{\SOS}}(\mathbf{s}_{i}) {\tens{\Lambda}_{\SOS}}(\mathbf{s}_{1})^{-1} {\tens{\Lambda}_{\SOS}}(\mathbf{s}_{i}). \label{eq:pi} \end{align} By \cref{eq:algebras:dual,eq:pi}: \begin{subequations} \begin{align} {K}sosso^\ast & = \{ \mathbf{s} \in \mathbb{R}^{U \times m}: \tens{\Lambda}_{\SOS}so(\mathbf{s}) \succeq 0 \} \\ & = \cl \{ \mathbf{s} \in \mathbb{R}^{U \times m}: \tens{\Lambda}_{\SOS}so(\mathbf{s}) \succ 0 \} \\ & = \cl \{ \mathbf{s} \in \mathbb{R}^{U \times m}: {\tens{\Lambda}_{\SOS}}(\mathbf{s}_{1}) \succ 0, \tens{\Pi}(\mathbf{s}) \succ 0 \} . \label{eq:schur} \end{align} \end{subequations} \cref{eq:schur} describes a simple membership check. 
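The check in \cref{eq:schur} can be sketched directly, again assuming the representation $\tens{\Lambda}_{\SOS}(\mathbf{s}) = \mathbf{P}^\top \diag(\mathbf{s}) \mathbf{P}$ and toy dimensions: it factorizes only $L \times L$ matrices yet agrees with an eigenvalue test on the full $L m \times L m$ block arrowhead matrix.

```python
import numpy as np

rng = np.random.default_rng(4)
U, L, m = 6, 3, 3
P = rng.standard_normal((U, L))
lam = lambda s: P.T @ np.diag(s) @ P     # Lambda_SOS(s) = P' Diag(s) P

def arrowhead(s):
    """Full block arrowhead matrix for SOS-L2; s has shape (m, U)."""
    A = np.zeros((L * m, L * m))
    for i in range(m):
        A[i*L:(i+1)*L, i*L:(i+1)*L] = lam(s[0])   # diagonal blocks
        if i > 0:
            A[:L, i*L:(i+1)*L] = lam(s[i])        # first block row/column
            A[i*L:(i+1)*L, :L] = lam(s[i])
    return A

def schur_check(s):
    # Membership check from eq:schur: Lambda(s1) > 0 and Pi(s) > 0.
    L1 = lam(s[0])
    if np.linalg.eigvalsh(L1).min() <= 0:
        return False
    Pi = L1 - sum(lam(si) @ np.linalg.solve(L1, lam(si)) for si in s[1:])
    return np.linalg.eigvalsh(Pi).min() > 0

s = np.vstack([np.ones(U), 0.01 * rng.standard_normal((m - 1, U))])
assert schur_check(s) == (np.linalg.eigvalsh(arrowhead(s)).min() > 0)
s_bad = np.vstack([np.ones(U), 10.0 * rng.standard_normal((m - 1, U))])
assert schur_check(s_bad) == (np.linalg.eigvalsh(arrowhead(s_bad)).min() > 0)
```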
Furthermore, the function $F: \mathbb{R}^{U \times m} \to \mathbb{R}$ defined by: \begin{subequations} \begin{align} F(\mathbf{s}) &= -\logdet ( \tens{\Pi}(\mathbf{s}) ) - \logdet({\tens{\Lambda}_{\SOS}}(\mathbf{s}_{1})) \\ &= -\logdet(\tens{\Lambda}_{\SOS}so(\mathbf{s})) + (m - 2) \logdet (\tens{\Lambda}_{\SOS}(\mathbf{s}_1)) , \end{align} \label{eq:sosbarr} \end{subequations} is a $2 L$-LHSCB for ${K}sosso^\ast$. \begin{theorem} The function $F$ defined by \cref{eq:sosbarr} is a $2 L$-LHSCB for ${K}sosso^\ast$. \label{thm:sobarr} \end{theorem} \begin{proof} It is easy to verify that $F$ is a logarithmically homogeneous barrier, so we show it is a $2L$-self-concordant barrier for ${K}sosso^\ast$. We first show that $\hat{F}: \mathbb{S}_{++}^L \times (\mathbb{R}^{L \times L})^{m-1} \to \mathbb{R}$ defined as $\hat{F}(\mathbf{X}_1, \ldots, \mathbf{X}_m) = -\logdet(\mathbf{X}_1 - \tsum{i \in \iin{2..m}} \mathbf{X}_i \mathbf{X}_{1}^{-1} \mathbf{X}_i^{\top}) - \logdet(\mathbf{X}_1)$, is a $2 L$-self-concordant barrier for the cone: \begin{align} {K}_{\ell_2}^m = \cl \bigl\{ (\mathbf{X}_1, \ldots, \mathbf{X}_m) \in \mathbb{S}^L_{++} \times (\mathbb{R}^{L \times L})^{m - 1}: \mathbf{X}_1 - \tsum{i \in \iin{2..m}} \mathbf{X}_i \mathbf{X}_{1}^{-1} \mathbf{X}_i^{\top} \succ 0 \bigr\}. \label{eq:blocksoc} \end{align} We then argue that $F$ is a composition of $\hat{F}$ with the linear map $(\mathbf{s}_1, \ldots, \mathbf{s}_m) \mapsto (\tens{\Lambda}_{\SOS}(\mathbf{s}_1), \ldots, \tens{\Lambda}_{\SOS}(\mathbf{s}_m))$ and ${K}sosso^\ast$ is an inverse image of ${K}_{\ell_2}^m$ under the same map. Then by \citet[Proposition 5.1.1]{nesterov1994interior} $F$ is self-concordant. Let $\Gamma = \mathbb{S}^L_{+} \times (\mathbb{R}^{L \times L})^{m-1}$ and $\mathbf{G}: \intr(\Gamma) \to \mathbb{S}^L$ be defined as: \begin{align} \mathbf{G}(\mathbf{X}_1, \ldots, \mathbf{X}_m) = \mathbf{X}_1 - \tsum{i \in \iin{2..m}} \mathbf{X}_i \mathbf{X}_{1}^{-1} \mathbf{X}_i^\top.
\end{align} Let us check that $\mathbf{G}$ is $(\mathbb{S}_{+}^L, 1)$-compatible with the domain $\Gamma$ in the sense of \cite[Definition 5.1.1]{nesterov1994interior}. This requires that $\mathbf{G}$ is $C^3$-smooth on $\intr(\Gamma)$, $\mathbf{G}$ is concave with respect to $\mathbb{S}_{+}^L$, and at each point $\mathbf{X} = (\mathbf{X}_1, \ldots, \mathbf{X}_m) \in \intr(\Gamma)$ and any direction $\mathbf{V} = (\mathbf{V}_1, \ldots, \mathbf{V}_m) \in \mathbb{S}^L \times (\mathbb{R}^{L \times L})^{m-1}$ such that $-\mathbf{X}_1 \preceq \mathbf{V}_1 \preceq \mathbf{X}_1$, the directional derivatives of $\mathbf{G}$ satisfy: \begin{align} \tfrac{d^3 \mathbf{G}}{d \mathbf{X}^3}[\mathbf{V}, \mathbf{V}, \mathbf{V}] \preceq -3 \tfrac{d^2 \mathbf{G}}{d \mathbf{X}^2}[\mathbf{V}, \mathbf{V}] . \label{eq:defcompat} \end{align} Let $\mathbf{V} \in \mathbb{S}^L \times (\mathbb{R}^{L \times L})^{m-1}$. It can be checked that $\tfrac{d^3 \mathbf{G}}{d \mathbf{X}^3}$ is continuous on the domain of $\mathbf{G}$ and we have the directional derivatives: \begin{align} \tfrac{d^2 \mathbf{G}}{d \mathbf{X}^2}[\mathbf{V}, \mathbf{V}] &= -2 \tsum{i \in \iin{2..m}} (\mathbf{X}_i \mathbf{X}_{1}^{-1} \mathbf{V}_1 - \mathbf{V}_i)\mathbf{X}_{1}^{-1}(\mathbf{X}_i \mathbf{X}_{1}^{-1} \mathbf{V}_1 - \mathbf{V}_i)^\top, \\ \tfrac{d^3 \mathbf{G}}{d \mathbf{X}^3}[\mathbf{V}, \mathbf{V}, \mathbf{V}] &= 6 \tsum{i \in \iin{2..m}} (\mathbf{X}_i \mathbf{X}_{1}^{-1} \mathbf{V}_1 - \mathbf{V}_i)\mathbf{X}_{1}^{-1} \mathbf{V}_1 \mathbf{X}_{1}^{-1}(\mathbf{X}_i \mathbf{X}_{1}^{-1} \mathbf{V}_1 - \mathbf{V}_i)^\top. \end{align} Since $\mathbf{X}_1 \succ 0$ in $\intr(\Gamma)$, $-\tfrac{d^2 \mathbf{G}}{d \mathbf{X}^2}[\mathbf{V}, \mathbf{V}] \succeq 0$ and so by \citet[Lemma 5.1.2]{nesterov1994interior}, $\mathbf{G}$ is concave with respect to $\mathbb{S}_{+}^L$. It remains to show that \eqref{eq:defcompat} is satisfied. 
Since the directional derivatives decouple by each index $i$ in the sum, it is sufficient to show that the inequality is satisfied for each $i \in \iin{2..m}$. For this, it is sufficient that: \begin{align} 6 \mathbf{X}_{1}^{-1} \mathbf{V}_1 \mathbf{X}_{1}^{-1} \preceq -3 \times -2 \mathbf{X}_{1}^{-1}, \label{eq:compatreq} \end{align} for all $-\mathbf{X}_1 \preceq \mathbf{V}_1 \preceq \mathbf{X}_1$, which follows from $\mathbf{V}_1 \preceq \mathbf{X}_1$: since $\mathbf{X}_1$ is positive definite on $\intr(\Gamma)$, congruence by $\mathbf{X}_1^{1/2}$ reduces \cref{eq:compatreq} to $\mathbf{X}_1^{-1/2} \mathbf{V}_1 \mathbf{X}_1^{-1/2} \preceq \mathbf{I}$. Now by \citet[Proposition 5.1.7]{nesterov1994interior}, $\hat{F}$ is a $2L$-LHSCB. The same is true for $F$ by composing $\hat{F}$ with a linear map. \qed \end{proof} \subsection{SOS-L1} \label{sec:barriers:l1} By combining \cref{eq:characterization} and \cref{eq:extl1}, the ${K}soslo$ cone admits the semidefinite representation: \begin{align} {K}soslo = \left\{ \begin{aligned} \mathbf{s} \in \mathbb{R}^{U \times m}: \exists \mathbf{S}_1, \mathbf{S}_{2,+}, \mathbf{S}_{2,-}, \ldots, \mathbf{S}_{m,+}, \mathbf{S}_{m,-} \in \mathbb{S}_{+}^L, \\ \mathbf{s}_1 = \tens{\Lambda}_{\SOS}^{\ast}(\mathbf{S}_{1}) + \tsum{i \in \iin{2..m}} \tens{\Lambda}_{\SOS}^{\ast}(\mathbf{S}_{i,+} + \mathbf{S}_{i,-}), \\ \mathbf{s}_i = \tens{\Lambda}_{\SOS}^{\ast}(\mathbf{S}_{i,+}) - \tens{\Lambda}_{\SOS}^{\ast}(\mathbf{S}_{i,-}) \ \forall i \in \iin{2..m} \end{aligned} \right\}. \end{align} Its dual cone is: \begin{align} {K}soslo^\ast = \bigl\{ \mathbf{s} \in \mathbb{R}^{U \times m}: \tens{\Lambda}_{\SOS}(\mathbf{s}_{1} + \mathbf{s}_{i}) \succeq 0, \tens{\Lambda}_{\SOS}(\mathbf{s}_{1} - \mathbf{s}_{i}) \succeq 0 \ \forall i \in \iin{2..m} \bigr\}. \label{eq:l1dual} \end{align} \cref{eq:l1dual} suggests that checking membership in ${K}soslo^\ast$ amounts to checking positive definiteness of $2 (m - 1)$ matrices of side dimension $L$.
This membership check corresponds to a \emph{straightforward} LHSCB with parameter $2 L (m - 1)$ that is given by $\mathbf{s} \mapsto -\tsum{i \in \iin{2..m}} \logdet(\tens{\Lambda}_{\SOS}(\mathbf{s}_{1} + \mathbf{s}_{i}) \tens{\Lambda}_{\SOS}(\mathbf{s}_{1} - \mathbf{s}_{i}))$. We now describe a membership check for ${K}soslo^\ast$ that requires factorizations of only $m$ matrices, and corresponds to an LHSCB with parameter $L m$. \begin{lemma} \label{lemma:psdequiv} The set $\{\mathbf{X} \in \mathbb{S}_{+}^L, \mathbf{Y} \in \mathbb{S}^L: -\mathbf{X} \preceq \mathbf{Y} \preceq \mathbf{X} \}$ is equal to ${K}_{\ell_2}^2 = \cl \{\mathbf{X} \in \mathbb{S}_{++}^L, \mathbf{Y} \in \mathbb{S}^L: \mathbf{X} - \mathbf{Y} \mathbf{X}^{-1} \mathbf{Y} \succ 0\}$. \end{lemma} \begin{proof} For inclusion in one direction: \begin{subequations} \begin{align} & \cl \{ \mathbf{X} \in \mathbb{S}_{++}^L, \mathbf{Y} \in \mathbb{S}^L: \mathbf{X} - \mathbf{Y} \mathbf{X}^{-1} \mathbf{Y} \succ 0 \} \\ & = \bigl\{ \mathbf{X} \in \mathbb{S}_{+}^L, \mathbf{Y} \in \mathbb{S}^L: \begin{psmallmatrix} \mathbf{X} & \mathbf{Y} \\ \mathbf{Y} & \mathbf{X} \end{psmallmatrix} \succeq 0, \ \begin{psmallmatrix} \mathbf{X} & -\mathbf{Y} \\ -\mathbf{Y} & \mathbf{X} \end{psmallmatrix} \succeq 0 \bigr\} \\ & \subseteq \bigl\{ \mathbf{X} \in \mathbb{S}_{+}^L, \mathbf{Y} \in \mathbb{S}^L: 2 \mathbf{v}^\top \mathbf{X} \mathbf{v} \pm 2 \mathbf{v}^\top \mathbf{Y} \mathbf{v} \geq 0, \ \forall \mathbf{v} \in \mathbb{R}^L \bigr\} \\ & = \{ \mathbf{X} \in \mathbb{S}_{+}^L, \mathbf{Y} \in \mathbb{S}^L: \mathbf{X} + \mathbf{Y} \succeq 0, \mathbf{X} - \mathbf{Y} \succeq 0 \} . \end{align} \end{subequations} For the other direction, suppose $-\mathbf{X} \prec \mathbf{Y} \prec \mathbf{X}$. Then $\mathbf{X} \succ 0$, $\mathbf{Y} + \mathbf{X} \succ 0$, $\mathbf{X} - \mathbf{Y} \succ 0$. 
Note that $(\mathbf{Y} + \mathbf{X}) \mathbf{X}^{-1} (\mathbf{X} - \mathbf{Y}) = \mathbf{X} - \mathbf{Y} \mathbf{X}^{-1} \mathbf{Y}$ is symmetric. Due to \citet[Corollary 1]{subramanian1979theorem}, this product of three matrices also has nonnegative eigenvalues. We conclude that $-\mathbf{X} \prec \mathbf{Y} \prec \mathbf{X}$ implies $\mathbf{X} \succ 0$ and $\mathbf{X} - \mathbf{Y} \mathbf{X}^{-1} \mathbf{Y} \succeq 0$. Since the set defined by $-\mathbf{X} \preceq \mathbf{Y} \preceq \mathbf{X}$ is the closure of the set defined by $-\mathbf{X} \prec \mathbf{Y} \prec \mathbf{X}$, taking closures gives the result. \qed \end{proof} By \cref{lemma:psdequiv} we can write the dual cone as: \begin{align} {K}soslo^\ast = \cl \left\{ \begin{aligned} \mathbf{s} \in \mathbb{R}^{U \times m}: \tens{\Lambda}_{\SOS}(\mathbf{s}_{1}) \succ 0, \\ \tens{\Lambda}_{\SOS}(\mathbf{s}_{1}) - \tens{\Lambda}_{\SOS}(\mathbf{s}_{i}) \tens{\Lambda}_{\SOS}(\mathbf{s}_{1})^{-1} \tens{\Lambda}_{\SOS}(\mathbf{s}_{i}) \succ 0, \ \forall i \in \iin{2..m} \end{aligned} \right\} . \label{eq:soslodual} \end{align} \begin{theorem} The function $F: \mathbb{R}^{U \times m} \to \mathbb{R}$ given by: \begin{align} \begin{split} F(\mathbf{s}) &= -\tsum{i \in \iin{2..m}} \logdet(\tens{\Lambda}_{\SOS}(\mathbf{s}_1) - \tens{\Lambda}_{\SOS}(\mathbf{s}_i) \tens{\Lambda}_{\SOS}(\mathbf{s}_1)^{-1} \tens{\Lambda}_{\SOS}(\mathbf{s}_i)) - {} \\ & \hphantom{{}={}} \logdet(\tens{\Lambda}_{\SOS}(\mathbf{s}_1)) \end{split} \end{align} is an $L m$-LHSCB for ${K}soslo^\ast$. \end{theorem} \begin{proof} It is easy to verify that $F$ is a logarithmically homogeneous barrier, and we show it is an $L m$-self-concordant barrier. As in \cref{thm:sobarr}, we define an auxiliary cone: \begin{align} {K}_{\ell_\infty}^m &= \{ (\mathbf{X}_1, \ldots, \mathbf{X}_m) \in \mathbb{S}_{+}^L \times (\mathbb{R}^{L \times L})^{m-1} : (\mathbf{X}_1, \mathbf{X}_i) \in {K}_{\ell_2}^2 \ \forall i \in \iin{2..m} \}.
\end{align} Let $\hat{F}: \mathbb{S}_{++}^L \times (\mathbb{R}^{L \times L})^{m-1} \to \mathbb{R}$ be defined as $\hat{F}(\mathbf{X}_1, \ldots, \mathbf{X}_m) = -\tsum{i \in \iin{2..m}} \logdet(\mathbf{X}_1 - \mathbf{X}_i \mathbf{X}_1^{-1} \mathbf{X}_i^\top) - \logdet(\mathbf{X}_1)$. We argue that $\hat{F}$ is an $L m$-self-concordant barrier for ${K}_{\ell_\infty}^m$. $F$ is a composition of $\hat{F}$ with the same linear map used in \cref{thm:sobarr}, and self-concordance of $F$ then follows by the same reasoning. Let $\Gamma = \mathbb{S}_{+}^L \times (\mathbb{R}^{L \times L})^{m-1}$ and $\mathbf{H}: \intr(\Gamma) \to (\mathbb{S}_{+}^L)^{m-1}$ be defined by: \begin{align} \mathbf{H}(\mathbf{X}_1, \ldots, \mathbf{X}_m) = \bigl( \mathbf{X}_1 - \mathbf{X}_2 \mathbf{X}_{1}^{-1} \mathbf{X}_2^\top, \ldots, \mathbf{X}_1 - \mathbf{X}_m \mathbf{X}_{1}^{-1} \mathbf{X}_m^\top \bigr). \end{align} We claim that $\mathbf{H}$ is $((\mathbb{S}_+^L)^{m-1}, 1)$-compatible with the domain $\Gamma$. This amounts to showing that for all $i \in \iin{2..m}$, the mapping $\mathbf{H}_i: \mathbb{S}_{++}^L \times \mathbb{R}^{L \times L} \to \mathbb{S}^L$, $\mathbf{H}_i(\mathbf{X}) = \mathbf{X}_1 - \mathbf{X}_i \mathbf{X}_{1}^{-1} \mathbf{X}_i^\top$, is $(\mathbb{S}_+^L, 1)$-compatible with the domain $\mathbb{S}_{+}^L \times \mathbb{R}^{L \times L}$ (the requirements for compatibility decouple for each $i$). The latter holds since $\mathbf{H}_i$ is equivalent to the function $\mathbf{G}$ from \cref{thm:sobarr} with $m=2$. Then by \citet[Lemma 5.1.7]{nesterov1994interior}, $\hat{F}$ is an $L m$-self-concordant barrier. \qed \end{proof} Note that we rely on an analogue of a representation for the $\ell_\infty$-norm cone (see \citep[Section 5.1]{coey2021solving}) in \cref{eq:soslodual}. From this we derive an LHSCB that is analogous to the $\ell_\infty$-norm cone LHSCB.
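\cref{lemma:psdequiv} can be spot-checked numerically (a sketch with arbitrary dimensions): sample a pair $(\mathbf{X}, \mathbf{Y})$ with $-\mathbf{X} \prec \mathbf{Y} \prec \mathbf{X}$ by scaling through the symmetric square root of $\mathbf{X}$, then evaluate the Schur-type expression.

```python
import numpy as np

rng = np.random.default_rng(6)
L = 4
A = rng.standard_normal((L, L))
X = A @ A.T + L * np.eye(L)          # X strictly positive definite

# Construct Y with -X < Y < X via Y = X^{1/2} W X^{1/2}, spectral norm
# of the symmetric matrix W strictly less than one.
w, V = np.linalg.eigh(X)
Xh = V @ np.diag(np.sqrt(w)) @ V.T   # symmetric square root of X
B = rng.standard_normal((L, L))
W = 0.9 * (B + B.T) / np.linalg.norm(B + B.T, 2)
Y = Xh @ W @ Xh

# Forward direction of the lemma: -X <= Y <= X ...
assert np.linalg.eigvalsh(X - Y).min() > 0
assert np.linalg.eigvalsh(X + Y).min() > 0
# ... matches the Schur-type condition X - Y X^{-1} Y >= 0.
M = X - Y @ np.linalg.solve(X, Y)
assert np.linalg.eigvalsh(M).min() > -1e-9
```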
On the other hand, we are not aware of an efficient LHSCB for its dual, the $\ell_1$-norm cone, so we cannot use the same technique to derive an LHSCB for the dual of a polynomial analogue of the $\ell_\infty$-norm cone. \section{Implementation details} \label{sec:implementation} In \crefrange{sec:implementation:sospsd}{sec:implementation:soslo} we describe the gradients and Hessians of the LHSCBs for ${K}sospsd$, ${K}sosso$, and ${K}soslo$, which are required as oracles in an algorithm like \citet{coey2021solving}. We give computational complexities of the Hessian oracles for each cone. All the oracles we describe are implemented in the open-source solver Hypatia \citep{coey2021solving}.\footnote{Available at \url{https://github.com/chriscoey/Hypatia.jl}.} \subsection{SOS-PSD} \label{sec:implementation:sospsd} To draw comparisons between ${K}sospsd$ and its SOS representation \cref{eq:scalarsospsd}, let us outline how we modify the representation of ${K}sos$ from \cref{sec:introduction:sospolynomials} to account for sparsity in a polynomial of the form $\mathbf{y}^\top \mathbf{Q}(\mathbf{x}) \mathbf{y}$. Suppose we have interpolation points $\mathbf{t}_{i \in \iin{1..U}}$ to represent ${K}sos$ in $\mathbb{R}[\mathbf{x}]_{n,2d}$. Let $\underline{\mathbf{t}}_{i \in \iin{1..\sdim(m)}}$ represent distinct points in $\mathbb{R}^m$, where at most two components in $\underline{\mathbf{t}}_{i}$ equal one and the rest equal zero, for all $i \in \iin{1..\sdim(m)}$. We can check that the Cartesian product of $\mathbf{t}_{i \in \iin{1..U}}$ and $\underline{\mathbf{t}}_{i \in \iin{1..\sdim(m)}}$, given by $\{(\mathbf{t}_1, \underline{\mathbf{t}}_1), \ldots, (\mathbf{t}_U, \underline{\mathbf{t}}_{\sdim(m)}) \}$, gives $U \sdim(m)$ unisolvent points. The polynomial $\mathbf{y}^\top \mathbf{Q}(\mathbf{x}) \mathbf{y}$ from \cref{eq:scalarsospsd} is then characterized by its evaluations at these points.
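The point construction can be verified in a few lines, assuming $\sdim(m) = m(m+1)/2$ (the dimension of $\mathbb{S}^m$): the points $\mathbf{e}_i$ and $\mathbf{e}_i + \mathbf{e}_j$ are unisolvent for quadratic forms in $\mathbf{y}$, since the associated Vandermonde matrix of the degree-two monomials is nonsingular.

```python
import numpy as np
from itertools import combinations

m = 4
# Points with at most two components equal to one and the rest zero:
# the unit vectors e_i, plus e_i + e_j for i < j.
pts = [np.eye(m)[i] for i in range(m)]
pts += [np.eye(m)[i] + np.eye(m)[j] for i, j in combinations(range(m), 2)]
sdim = m * (m + 1) // 2
assert len(pts) == sdim

# Vandermonde matrix of the monomials y_i * y_j (i <= j) at these points;
# a nonzero determinant shows unisolvence for quadratic forms in y.
monos = [(i, j) for i in range(m) for j in range(i, m)]
V = np.array([[t[i] * t[j] for (i, j) in monos] for t in pts])
assert abs(np.linalg.det(V)) > 1e-9
```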
Now let $\underline{p}_{i \in \iin{1..m}}$ be polynomials in $\mathbb{R}[\mathbf{y}]_{m,1}$ such that $\underline{p}_i(y_1, \ldots, y_m) = y_i$ for all $i \in \iin{1..m}$. Recall that for ${K}sos$ in $\mathbb{R}[\mathbf{x}]_{n,2d}$, $\mathbf{P}$ is defined by $P_{u, \ell} = p_\ell(\mathbf{t}_u)$ for all $u \in \iin{1..U}, \ell \in \iin{1..L}$. The new matrix $\underline{\mathbf{P}}$ for the $U \sdim(m)$-dimensional SOS cone is given by $\underline{\mathbf{P}} = \mathbf{Y} \otimes_K \mathbf{P}$, where $\mathbf{Y} \in \mathbb{R}^{\sdim(m) \times m}$ is a Vandermonde matrix of the polynomials $\underline{p}_{i \in \iin{1..m}}$ and points $\underline{\mathbf{t}}_{i \in \iin{1..\sdim{(m)}}}$. Finally, the lifting operator $\underline{\tens{\Lambda}_{\SOS}}(\mathbf{s}) = \underline{\mathbf{P}}^\top \diag(\mathbf{s}) \underline{\mathbf{P}}$ is of the same form as $\tens{\Lambda}_{\SOS}$. \begin{lemma} Computing the Hessian of the LHSCB of ${K}sospsd$ requires $\mathcal{O}(L U^2 m^3)$ time, while the Hessian of the LHSCB in the SOS formulation requires $\mathcal{O}(L U^2 m^5)$ time if $m < L < U$. \end{lemma} \begin{proof} Define $\mathbf{T}_{i,j}: \mathbb{R}^{U \times m \times m} \to \mathbb{R}^{U \times U}$ for all $i, j \in \iin{1..m}$: \begin{align} \mathbf{T}_{i,j}(\mathbf{S}) = \mathbf{P} (\tens{\Lambda}_{\SOS}psd(\mathbf{S})^{-1})_{i,j}\mathbf{P}^\top = \bigl( (\mathbf{I}_m \otimes_K \mathbf{P}) \tens{\Lambda}_{\SOS}psd(\mathbf{S})^{-1} (\mathbf{I}_m \otimes_K \mathbf{P})^\top \bigr)_{i,j}, \label{eq:defT} \end{align} where the indices $(i,j)$ reference a $U \times U$ submatrix.
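The blocks $\mathbf{T}_{i,j}$ in \cref{eq:defT} can be formed from a Cholesky factor rather than an explicit inverse; a sketch with toy dimensions, again assuming the representation $\tens{\Lambda}_{\SOS}(\mathbf{s}) = \mathbf{P}^\top \diag(\mathbf{s}) \mathbf{P}$ and a coefficient tensor chosen so the lifted matrix is safely positive definite:

```python
import numpy as np

rng = np.random.default_rng(7)
U, L, m = 6, 3, 2
P = rng.standard_normal((U, L))
lam = lambda s: P.T @ np.diag(s) @ P

# Build a positive definite lifted matrix: strong diagonal blocks and
# small symmetric off-diagonal blocks (a toy, diagonally dominant choice).
blocks = [[None] * m for _ in range(m)]
for i in range(m):
    blocks[i][i] = lam(10.0 * np.ones(U))
    for j in range(i):
        blk = lam(0.01 * rng.standard_normal(U))
        blocks[i][j] = blocks[j][i] = blk
Lam = np.block(blocks)

IkP = np.kron(np.eye(m), P)        # I_m Kronecker P
Lc = np.linalg.cholesky(Lam)
V = np.linalg.solve(Lc, IkP.T)     # V = Lc^{-1} (I_m kron P)'
T = V.T @ V                        # equals (I_m kron P) Lam^{-1} (I_m kron P)'
assert np.allclose(T, IkP @ np.linalg.inv(Lam) @ IkP.T)
```

The $(i,j)$th $U \times U$ submatrix of `T` is then the block $\mathbf{T}_{i,j}$, with no $Lm \times Lm$ inverse ever formed.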
For all $i, i', j, j' \in \iin{1..m}$, $u, u' \in \iin{1..U}$, the gradient and Hessian of the barrier are:\footnote{In practice we only store coefficients from the lower triangle of a polynomial matrix and account for this in the derivatives.} \begin{align} \frac{d F}{d S_{i,j,u}} & = -\mathbf{T}_{i,j}(\mathbf{S})_{u,u}, \\ \frac{d^2 F}{d S_{i,j,u} d S_{i', j',u'}} &= \mathbf{T}_{i,j'}(\mathbf{S})_{u,u'} \mathbf{T}_{j,i'}(\mathbf{S})_{u,u'} . \label{eq:sdpgradhess} \end{align} The lifting operator $\tens{\Lambda}_{\SOS}psd$ can be computed blockwise in $\mathcal{O}(L^2 U m^2)$ operations, while $\underline{\tens{\Lambda}_{\SOS}}$ requires $\mathcal{O}(L^2 U m^4)$ operations. To avoid computing the explicit inverse $\tens{\Lambda}_{\SOS}psd(\mathbf{S})^{-1}$, we use a Cholesky factorization $\tens{\Lambda}_{\SOS}psd(\mathbf{S}) = \mathbf{L} \mathbf{L}^\top$ to form a block triangular matrix $\mathbf{V} = \mathbf{L}^{-1}(\mathbf{I}_m \otimes_K \mathbf{P})^\top$ in $\mathcal{O}(L^2 U m^2)$ operations, while computing the larger $\underline{\mathbf{V}} = \underline{\mathbf{L}}^{-1} \underline{\mathbf{P}}^\top$, where $\underline{\tens{\Lambda}_{\SOS}}(\mathbf{s}) = \underline{\mathbf{L}} \underline{\mathbf{L}}^\top$, for the ${K}sos$ formulation requires $\mathcal{O}(L^2 U m^4)$ operations. We use the product $\mathbf{V}^\top \mathbf{V}$ to build $\mathbf{T}_{i,j}$ for all $i, j \in \iin{1..m}$ in $\mathcal{O}(L U^2 m^3)$ operations, while calculating $\underline{\mathbf{V}}^\top \underline{\mathbf{V}}$ requires $\mathcal{O}(L U^2 m^5)$ operations. Once the blocks $\mathbf{T}_{i,j}$ are built, the time complexity to compute the gradient and Hessian is the same for ${K}sospsd$ as for ${K}sos$. \qed \end{proof} \subsection{SOS-L2} \label{sec:implementation:sosso} \begin{lemma} The Hessian of the LHSCB of ${K}sosso$ requires $\mathcal{O}(L U^2 m^2)$ time, while the Hessian of the LHSCB in the SOS formulation requires $\mathcal{O}(L U^2 m^5)$ time if $m < L < U$.
\end{lemma} \begin{proof} Let $\mathbf{T}_{i,j}: \mathbb{R}^{U \times m} \to \mathbb{R}^{U \times U}$ be defined as in \cref{eq:defT} for all $i, j \in \iin{1..m}$, but replacing $\tens{\Lambda}_{\SOS}psd$ with $\tens{\Lambda}_{\SOS}so$. Let $\mathbf{R} = \mathbf{P} (\tens{\Lambda}_{\SOS}(\mathbf{s}_1))^{-1} \mathbf{P}^\top$. For all $i, i' \in \iin{1..m}$, $u, u' \in \iin{1..U}$, the gradient and Hessian of the barrier are: \begin{align} \frac{d F}{d s_{i,u}} &= \begin{cases} -\tsum{j \in \iin{1..m}} \mathbf{T}_{j,j}(\mathbf{s})_{u,u} + (m - 2)\mathbf{R}_{u,u} & i = 1 \\ -2 \mathbf{T}_{i,1}(\mathbf{s})_{u,u} & i \neq 1, \end{cases} \\ \begin{split} \frac{d^2 F}{d s_{i,u} d s_{i',u'}} & = \begin{cases} \tsum{j \in \iin{1..m}, k \in \iin{1..m}} ( \mathbf{T}_{j,k}(\mathbf{s})_{u,u'} )^{2} - (m - 2) ( \mathbf{R}_{u,u'} )^{2} & i = i' = 1 \\ 2 \tsum{j \in \iin{1..m}} \mathbf{T}_{j,1}(\mathbf{s})_{u,u'} \mathbf{T}_{j,i'}(\mathbf{s})_{u,u'} & i = 1, i'\neq 1 \\ 2 \tsum{j \in \iin{1..m}} \mathbf{T}_{1,j}(\mathbf{s})_{u,u'} \mathbf{T}_{i,j}(\mathbf{s})_{u,u'} & i \neq 1, i'= 1 \\ 2 ( \mathbf{T}_{1,1}(\mathbf{s})_{u,u'} \mathbf{T}_{i,i'}(\mathbf{s})_{u,u'} + \mathbf{T}_{i,1}(\mathbf{s})_{u,u'} \mathbf{T}_{1,i'}(\mathbf{s})_{u,u'} ) & i \neq 1, i' \neq 1 . \end{cases} \end{split} \label{eq:sogradhess} \end{align} To compute the blocks $\mathbf{T}_{i,j}(\mathbf{s})$ we require an inverse of the matrix $\tens{\Lambda}_{\SOS}so(\mathbf{s})$. 
It can be verified that: \begin{align} \tens{\Lambda}_{\SOS}so(\mathbf{s})^{-1} = \begin{bmatrix} \mathbf{0} & \\ & \mathbf{I}_{m-1} \otimes_K \tens{\Lambda}_{\SOS}(\mathbf{s}_1)^{-1} \end{bmatrix} + \mathbf{U} {\tens{\Pi}}(\mathbf{s})^{-1} \mathbf{U}^\top, \label{eq:blockarrowinv} \end{align} where: \begin{align*} \mathbf{U}^\top &= \begin{bmatrix} -\mathbf{I}_L & \tens{\Lambda}_{\SOS}(\mathbf{s}_2) \tens{\Lambda}_{\SOS}(\mathbf{s}_1)^{-1} & \tens{\Lambda}_{\SOS}(\mathbf{s}_3) \tens{\Lambda}_{\SOS}(\mathbf{s}_1)^{-1} & \ldots & \tens{\Lambda}_{\SOS}(\mathbf{s}_m) \tens{\Lambda}_{\SOS}(\mathbf{s}_1)^{-1} \end{bmatrix}. \end{align*} Computing $\mathbf{T}_{i,j}$ for all $i, j \in \iin{1..m}$ is the most expensive step in obtaining the Hessian and we do this in $\mathcal{O}(L U^2 m^2)$ operations. The complexity of computing the Hessian in the SOS formulation of ${K}_{\arrow \SOSpsd}$ is the same as in the SOS formulation of ${K}sospsd$ since the cones have the same dimension. \qed \end{proof} \subsection{SOS-L1} \label{sec:implementation:soslo} \begin{lemma} The Hessians of the LHSCBs of ${K}soslo$ and its SOS formulation require $\mathcal{O}(L U^2 m)$ time if $m < L < U$. \end{lemma} \begin{proof} Let $\mathbf{T}_{i,j}: \mathbb{R}^{U \times m} \to \mathbb{S}^U$ for all $i \in \iin{2..m}, j \in \{ 1, 2 \}$ be defined by: \begin{align} \mathbf{T}_{i,j}(\mathbf{s}) &= \mathbf{P} (\tens{\Lambda}_{\SOS}so((\mathbf{s}_1, \mathbf{s}_i))^{-1} )_{1, j} \mathbf{P}^\top \\ & = \bigl( (\mathbf{I}_2 \otimes_K \mathbf{P}) \tens{\Lambda}_{\SOS}so((\mathbf{s}_1, \mathbf{s}_i))^{-1} (\mathbf{I}_2 \otimes_K \mathbf{P})^\top \bigr)_{1,j} .
\label{eq:defl1T} \end{align} For all $i, i' \in \iin{1..m}$, $u, u' \in \iin{1..U}$, the gradient and Hessian of the barrier are: \begin{align} \frac{d F}{d s_{i,u}} &= \begin{cases} -2 \tsum{j \in \iin{2..m}} \mathbf{T}_{j,1}(\mathbf{s})_{u,u} + (m - 2) \mathbf{R}_{u,u} & i = 1 \\ - 2 \mathbf{T}_{i,2}(\mathbf{s})_{u,u} & i \neq 1, \end{cases} \\ \frac{d^2 F}{d s_{i,u} d s_{i',u'}} &= \begin{cases} 2 \tsum{j \in \iin{2..m}, k \in \{1, 2 \}} ( \mathbf{T}_{j,k}(\mathbf{s})_{u,u'} )^{2} - (m - 2) (\mathbf{R}_{u,u'})^{2} & i = i' = 1 \\ 4 \mathbf{T}_{i,1}(\mathbf{s})_{u,u'} \mathbf{T}_{i,2}(\mathbf{s})_{u,u'} & i \neq 1, i' = 1 \\ 4 \mathbf{T}_{i',1}(\mathbf{s})_{u,u'} \mathbf{T}_{i',2}(\mathbf{s})_{u,u'} & i = 1, i' \neq 1 \\ 2 \tsum{k \in \{1, 2 \}} (\mathbf{T}_{i,k}(\mathbf{s})_{u,u'})^{2} & i = i' \neq 1 \\ 0 & \text{otherwise}. \end{cases} \label{eq:l1gradhess} \end{align} Calculating $\mathbf{T}_{i,j}(\mathbf{s})$ for all $i \in \iin{2..m}, j \in \{ 1, 2 \}$ can be done in $\mathcal{O}(L U^2 m)$ operations. The Hessian of the SOS formulation requires computing $\mathcal{O}(m)$ Hessians of SOS cones that require $\mathcal{O}(LU^2)$ time. We use the block arrowhead structure of the Hessian when applying its inverse similarly to \cref{eq:blockarrowinv}. \qed \end{proof} \section{Numerical example} \label{sec:experiments} For each cone $({K}sospsd, {K}sosso, {K}soslo)$ we compare the computational time to solve a simple example with its SOS formulation from \cref{sec:nonlinear}. We use an example analogous to the polynomial envelope problem from \citep[Section 7.2]{papp2019sum}, but replace the nonnegativity constraint by a conic inequality. Let $q_{i \in \iin{2..m}}(\mathbf{x})$ be randomly generated polynomials in $\mathbb{R}_{n,2d_r}[\mathbf{x}]$. 
We seek a polynomial that gives the tightest approximation to the $\ell_1$ or $\ell_2$ norm of $( q_2(\mathbf{x}), \ldots, q_{m}(\mathbf{x}) )$ for all $\mathbf{x} \in [-1, 1]^n$: \begin{subequations} \begin{align} \min_{q_1(\mathbf{x}) \in \mathbb{R}_{n,2d}[\mathbf{x}]} \int_{[-1, 1]^n} q_1(\mathbf{x}) d \mathbf{x} & : \\ q_1(\mathbf{x}) & \geq \vert \vert (q_2(\mathbf{x}), \ldots, q_{m}(\mathbf{x})) \vert \vert_p & \forall \mathbf{x} \in [-1, 1]^n , \label{eq:polyepigraph:norm} \end{align} \label{eq:polyepigraph} \end{subequations} with $p \in \{1,2\}$ in \cref{eq:polyepigraph:norm}. To restrict \cref{eq:polyepigraph:norm} over $[-1, 1]^n$, we use \emph{weighted sum of squares} (WSOS) formulations. A polynomial $q(\mathbf{x})$ is WSOS with respect to weights $g_{i \in \iin{1..K}}(\mathbf{x})$ if it can be expressed in the form of $q(\mathbf{x}) = \tsum{i \in \iin{1..K}} g_i(\mathbf{x}) p_i(\mathbf{x})$, where $p_{i \in \iin{1..K}}(\mathbf{x})$ are SOS. \citet[Section 6]{papp2019sum} show that the dual WSOS cone (we will write ${K}_{\WSOS}^\ast$) may be represented by an intersection of ${K}_{\SOS}^\ast$ cones. We represent the dual \emph{weighted} cones ${K}_{\WSOSpsd}^\ast$, ${K}_{\WSOS \ell_2}^\ast$ and ${K}_{\WSOS \ell_1}^\ast$ analogously using intersections of ${K}_{\SOSpsd}^\ast$, ${K}_{\SOS \ell_2}^\ast$ and ${K}_{\SOS \ell_1}^\ast$ respectively. Let $\mathbf{f}_{i \in \iin{1..m}}$ denote the coefficients of $q_{i \in \iin{1..m}}(\mathbf{x})$ and let $\mathbf{w} \in \mathbb{R}^U$ be a vector of quadrature weights on $[-1, 1]^n$. A low dimensional representation of \cref{eq:polyepigraph} may be written as: \begin{align} \min_{\mathbf{f}_1 \in \mathbb{R}^U} \mathbf{w}^\top \mathbf{f}_1 & : \quad (\mathbf{f}_1, \ldots, \mathbf{f}_{m}) \in {K}, \label{eq:envelope} \end{align} where ${K}$ is ${K}_{\WSOS \ell_2}$ or ${K}_{\WSOS \ell_1}$. If $p=2$, we compare the ${K}_{\WSOS \ell_2}$ formulation with two alternative formulations involving ${K}_{\arrow \SOSpsd}$.
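In a Lagrange (interpolant) basis, the coefficient vector of a polynomial is its vector of values at the interpolation points, so the linear objective $\mathbf{w}^\top \mathbf{f}_1$ reproduces the integral exactly whenever $\mathbf{w}$ is a quadrature rule exact for the relevant degree. A one-dimensional sanity check with Gauss--Legendre weights (sizes hypothetical):

```python
import numpy as np

# A U-point Gauss-Legendre rule integrates polynomials of degree <= 2U - 1
# exactly on [-1, 1]; w^T f then equals the integral of the interpolated
# polynomial, which is the sense of the objective w^T f_1 in the text.
U = 6
nodes, w = np.polynomial.legendre.leggauss(U)

coeffs_odd = [0.0, 2.0, 0.0, 4.0]   # q(x) = 2x + 4x^3, odd -> integral 0
f_odd = np.polynomial.polynomial.polyval(nodes, coeffs_odd)
print(w @ f_odd)                    # ~0

coeffs_even = [1.0, 0.0, 3.0]       # q(x) = 1 + 3x^2 -> integral over [-1,1] is 4
f_even = np.polynomial.polynomial.polyval(nodes, coeffs_even)
print(w @ f_even)                   # ~4
```

On $[-1,1]^n$ the same idea applies with a tensor-product or sparse rule; only the weight vector $\mathbf{w}$ changes.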
We use either ${K}_{\WSOSpsd}$ to model ${K}_{\arrow \SOSpsd}$ as implied in \cref{eq:arrowpsdpsd}, or ${K}_{\WSOS}$ as in \cref{eq:arrowpsd}. For $p=1$, we build an SOS formulation by replacing \eqref{eq:envelope} with: \begin{subequations} \begin{align} \min_{\mathbf{f}_1, \mathbf{g}_2, \ldots, \mathbf{g}_{m}, \mathbf{h}_2, \ldots, \mathbf{h}_{m} \in \mathbb{R}^U} \mathbf{w}^\top \mathbf{f}_1 & : \\ \mathbf{f}_1 - \tsum{i \in \iin{2..m}} (\mathbf{g}_i + \mathbf{h}_i) & \in {K}_{\WSOS}, \\ \mathbf{f}_i - \mathbf{g}_i + \mathbf{h}_i &= 0, \quad \mathbf{g}_i, \mathbf{h}_i \in {K}_{\WSOS} & \forall i \in \iin{2..m}. \end{align} \label{eq:envelope:l1ext} \end{subequations} We select interpolation points using a heuristic adapted from \citep{papp2019sum,sommariva2009computing}. We uniformly sample $N$ interpolation points, where $N \gg U$. We form a Vandermonde matrix with the same structure as the matrix $\mathbf{P}$ used to construct the lifting operator, but whose rows correspond to the $N$ sampled points. We perform a QR factorization and use the first $U$ indices from the permutation vector of the factorization to select $U$ of the $N$ rows to keep. All experiments are performed on hardware with an AMD Ryzen 9 3950X 16-Core Processor (32 threads) and 128GB of RAM, running Ubuntu 20.10 and Julia 1.8 \citep{bezanson2017julia}. Optimization models are built using JuMP \citep{LubinDunningIJOC} and solved with Hypatia 0.5.3 \citep{coey2021solving} using our specialized, predefined cones. The scripts we use to run our experiments and the raw results are available in the Hypatia repository.\footnote{Instructions to repeat our experiments are at \url{https://github.com/chriscoey/Hypatia.jl/tree/master/benchmarks/natvsext}.} We use default settings in Hypatia and set relative optimality and feasibility tolerances to $10^{-7}$. In \cref{tab:envelope,tab:norml1:master}, we show Hypatia's termination status, number of iterations, and solve times for $n \in \{1,4\}$ and varying values of $d_r$ and $m$.
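The interpolation-point selection heuristic above can be sketched in a few lines. The following numpy-only version substitutes a greedy column-pivoted Gram--Schmidt for the pivoted QR factorization (same selection principle), using a one-dimensional monomial basis; the sizes and basis are illustrative, not the paper's multivariate setup.

```python
import numpy as np

def select_points(pts, U, degree):
    """Pick U of the N candidate points by pivoting on the columns of the
    transposed Vandermonde matrix (greedy stand-in for pivoted QR)."""
    V = np.vander(pts, N=degree + 1, increasing=True)  # N x (degree+1)
    A = V.T.copy()                                     # columns <-> candidate points
    chosen = []
    for _ in range(U):
        norms = np.linalg.norm(A, axis=0)
        norms[chosen] = -1.0                           # never pick a point twice
        j = int(np.argmax(norms))                      # pivot: largest residual column
        chosen.append(j)
        qcol = A[:, j] / np.linalg.norm(A[:, j])
        A -= np.outer(qcol, qcol @ A)                  # deflate chosen direction
    return np.array(chosen)

rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=200)   # N >> U uniformly sampled candidates
idx = select_points(pts, U=6, degree=5)
# The selected 6x6 Vandermonde submatrix should be well conditioned.
Vsub = np.vander(pts[idx], N=6, increasing=True)
print(np.linalg.cond(Vsub))
```

The pivoting keeps the retained rows as linearly independent as possible, which is what makes the resulting interpolation operator numerically stable.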
We use symbols to represent the termination status, which are described in \cref{sec:results}. If $p = 1$, we let $d = d_r$, where the maximum degree of $q_1(\mathbf{x})$ is $2 d$. If $p = 2$, we vary $d \in \{d_r, 2 d_r \}$ and add an additional column \emph{obj} in \cref{tab:envelope} to show the ratio of the objective value under the ${K}_{\WSOS}$ (or equivalently ${K}_{\WSOSpsd}$) formulation divided by the objective value under the ${K}_{\WSOS \ell_2}$ formulation. Note that in our setup, the dimension of ${K}_{\WSOS \ell_2}$ only depends on $d$. A more flexible implementation could allow polynomial components to have different degrees in ${K}_{\WSOS \ell_2}$ for the $d = 2 d_r$ case. For $p=2$ and $d = 2 d_r$, the difference in objective values between ${K}_{\WSOS \ell_2}$ and the alternative formulations is less than $1\%$ across all converged instances. For $p=2$ and $d = d_r$, the difference in the objective values is around $10$--$43\%$ across converged instances. However, the solve times for ${K}_{\WSOS \ell_2}$ with $d = 2 d_r$ are sometimes faster than the solve times of the alternative formulations with $d = d_r$ and equal values of $n$, $m$, and $d_r$. This suggests that it may be beneficial to use ${K}_{\WSOS \ell_2}$ in place of SOS formulations, but with a higher maximum degree in the ${K}_{\WSOS \ell_2}$ cone. The solve times using ${K}_{\WSOSpsd}$ are slightly faster than the solve times using ${K}_{\WSOS}$. For the case where $p=1$, the ${K}_{\WSOS \ell_1}$ formulation is faster than the ${K}_{\WSOS}$ formulation, particularly for larger values of $m$. We also observe that the number of iterations the algorithm takes for ${K}_{\WSOS \ell_2}$ compared to alternative formulations varies, but is larger for ${K}_{\WSOS \ell_1}$ compared to the alternative SOS formulation.
The characterizations of ${K}_{\SOSpsd}$ and ${K}_{\SOS \ell_2}$ rely on ideas from \citet{papp2013semidefinite} as well as the use of a Lagrange polynomial basis for efficient oracles in the multivariate case. For the ${K}_{\SOSpsd}$ barrier, the complexity of evaluating the Hessian is reduced by a factor of $\mathcal{O}(m^2)$ relative to the SOS formulation barrier. This does not result in significant speed improvements since Hessian evaluations are not the bottleneck in an interior point algorithm. In contrast, the dimension and barrier parameter of the ${K}_{\SOS \ell_2}$ and ${K}_{\SOS \ell_1}$ cones are lower compared to their SOS formulations, and the complexity of evaluating the Hessian of the ${K}_{\SOS \ell_2}$ barrier is reduced by a factor of $\mathcal{O}(m^3)$ relative to its SOS formulation. For both ${K}_{\SOS \ell_2}$ and ${K}_{\SOS \ell_1}$, the total solve time was generally lower compared to their SOS formulations. While there is no penalty in using lower dimensional representations of SOS-L1 constraints, the lower dimensional SOS-L2 cones give rise to more conservative restrictions than the higher dimensional SOS formulations, which is observable in practice. \appendix \section{Result tables} \label{sec:results} The termination status (\emph{st}) columns of \cref{tab:envelope,tab:norml1:master} use the following codes to classify solve runs: \begin{description}[font=\normalfont\itshape] \item[co] the solver claims the primal-dual certificate returned is optimal given its numerical tolerances, \item[tl] a limit of 1800 seconds is reached, \item[rl] a limit of approximately $120$GB of RAM is reached, \item[sp] the solver terminates due to slow progress during iterations, \item[er] the solver reports a different numerical error, \item[sk] we skip the instance because the solver reached a time or RAM limit on a smaller instance.
\end{description} \begin{table}[!htb] \sisetup{ table-text-alignment = right, table-auto-round, table-figures-integer = 4, table-figures-decimal = 1, table-format = 4.1, add-decimal-zero = false, add-integer-zero = false, } \footnotesize \begin{tabular}{rrrrrrSrrSrrSS[table-format = 2.2]} \toprule & & & & \multicolumn{3}{l}{${K}sosso$} & \multicolumn{3}{l}{${K}sos$} & \multicolumn{3}{l}{${K}sospsd$} & \\ \cmidrule(lr){5-7} \cmidrule(lr){8-10} \cmidrule(lr){11-13} $n$ & $d_r$ & $m$ & $d$ & {st} & {iter} & {time} & {st} & {iter} & {time} & {st} & {iter} & {time} & {obj} \\ \midrule \multirow{20}{*}{1} & \multirow{10}{*}{20} & \multirow{2}{*}{4} & 20 & co & 13 & .05 & co & 17 & .44 & co & 13 & .21 & .89 \\ & & & 40 & co & 16 & .21 & co & 19 & 1.8 & co & 15 & 1.1 & .99 \\ & & \multirow{2}{*}{8} & 20 & co & 13 & .12 & co & 17 & 2.9 & co & 14 & 2.1 & .85 \\ & & & 40 & co & 19 & .70 & co & 21 & 18. & co & 16 & 10. & 1.0 \\ & & \multirow{2}{*}{16} & 20 & co & 14 & .39 & co & 19 & 48. & co & 14 & 27. & .80 \\ & & & 40 & co & 21 & 2.4 & co & 20 & 264. & co & 17 & 188. & 1.0 \\ & & \multirow{2}{*}{32} & 20 & co & 15 & 1.6 & co & 22 & 1189. & co & 17 & 843. & .78 \\ & & & 40 & co & 23 & 13. & tl & 3 & 2033. & tl & 7 & 2075. & .03 \\ & & \multirow{2}{*}{64} & 20 & co & 17 & 8.5 & rl & $\ast$ & $\ast$ & rl & $\ast$ & $\ast$ & $\ast$ \\ & & & 40 & co & 20 & 59. & sk & $\ast$ & $\ast$ & sk & $\ast$ & $\ast$ & $\ast$ \\ \cmidrule(lr){2-14} & \multirow{10}{*}{40} & \multirow{2}{*}{4} & 40 & co & 14 & .17 & co & 17 & 1.4 & co & 14 & 1.0 & .89 \\ & & & 80 & co & 19 & 1.0 & co & 19 & 7.7 & co & 17 & 6.2 & .99 \\ & & \multirow{2}{*}{8} & 40 & co & 16 & .57 & co & 19 & 15. & co & 15 & 9.1 & .82 \\ & & & 80 & co & 21 & 3.1 & co & 21 & 93. & co & 17 & 62. & 1.0 \\ & & \multirow{2}{*}{16} & 40 & co & 17 & 2.0 & co & 20 & 246. & co & 16 & 152. & .79 \\ & & & 80 & co & 27 & 13. & co & 21 & 1737. & co & 18 & 1206. & 1.0 \\ & & \multirow{2}{*}{32} & 40 & co & 18 & 7.6 & tl & 3 & 2031. 
& tl & 8 & 1803. & .02 \\ & & & 80 & co & 27 & 53. & rl & $\ast$ & $\ast$ & rl & $\ast$ & $\ast$ & $\ast$ \\ & & \multirow{2}{*}{64} & 40 & co & 19 & 36. & sk & $\ast$ & $\ast$ & sk & $\ast$ & $\ast$ & $\ast$ \\ & & & 80 & co & 26 & 226. & sk & $\ast$ & $\ast$ & sk & $\ast$ & $\ast$ & $\ast$ \\ \cmidrule(lr){1-14} \multirow{20}{*}{4} & \multirow{8}{*}{2} & \multirow{2}{*}{4} & 2 & co & 13 & .15 & co & 18 & .91 & co & 15 & .59 & .75 \\ & & & 4 & co & 21 & 33. & co & 43 & 133. & co & 37 & 97. & 1.0 \\ & & \multirow{2}{*}{8} & 2 & co & 13 & .41 & co & 21 & 11. & co & 18 & 7.7 & .64 \\ & & & 4 & co & 21 & 102. & tl & 49 & 1816. & tl & 60 & 1811. & 1.0 \\ & & \multirow{2}{*}{16} & 2 & co & 15 & 2.3 & co & 30 & 242. & co & 25 & 203. & .59 \\ & & & 4 & co & 21 & 437. & sk & $\ast$ & $\ast$ & sk & $\ast$ & $\ast$ & $\ast$ \\ & & \multirow{2}{*}{32} & 2 & co & 15 & 10. & tl & 6 & 1848. & tl & 10 & 1972. & 15. \\ & & & 4 & co & 22 & 1707. & sk & $\ast$ & $\ast$ & sk & $\ast$ & $\ast$ & $\ast$ \\ & & \multirow{2}{*}{64} & 2 & co & 15 & 46. & sk & $\ast$ & $\ast$ & sk & $\ast$ & $\ast$ & $\ast$ \\ & & & 4 & tl & 10 & 1935. & sk & $\ast$ & $\ast$ & sk & $\ast$ & $\ast$ & $\ast$ \\ \cmidrule(lr){2-14} & \multirow{6}{*}{4} & \multirow{2}{*}{4} & 4 & co & 17 & 11. & co & 30 & 114. & co & 27 & 93. & .69 \\ & & & 8 & tl & 10 & 1840. & rl & $\ast$ & $\ast$ & tl & $\ast$ & $\ast$ & $\ast$ \\ & & \multirow{1}{*}{8} & 4 & co & 18 & 42. & co & 34 & 1494. & co & 29 & 1111. & .58 \\ & & \multirow{1}{*}{16} & 4 & co & 18 & 174. & rl & $\ast$ & $\ast$ & tl & $\ast$ & $\ast$ & $\ast$ \\ & & \multirow{1}{*}{32} & 4 & co & 16 & 580. & sk & $\ast$ & $\ast$ & sk & $\ast$ & $\ast$ & $\ast$ \\ & & \multirow{1}{*}{64} & 4 & tl & 10 & 1853. 
& sk & $\ast$ & $\ast$ & sk & $\ast$ & $\ast$ & $\ast$ \\ \bottomrule \end{tabular} \caption{Solve time in seconds and number of iterations (iter) for instances with $p = 2$.} \label{tab:envelope} \end{table} \begin{table}[!htb] \sisetup{ table-text-alignment = right, table-auto-round, table-figures-integer = 4, table-figures-decimal = 1, table-format = 4.1, add-decimal-zero = false, add-integer-zero = false, } \footnotesize \begin{tabular}{rrrrrSrrS} \toprule & & & \multicolumn{3}{l}{${K}soslo$} & \multicolumn{3}{l}{${K}sos$} \\ \cmidrule(lr){4-6} \cmidrule(lr){7-9} $n$ & $d$ & $m$ & {st} & {iter} & {time} & {st} & {iter} & {time}\\\midrule \multirow{10}{*}{1} & \multirow{5}{*}{40} & 8 & {co} & 17 & .53 & {co} & 15 & .48 \\ & & 16 & {co} & 21 & 1.3 & {co} & 15 & 1.9 \\ & & 32 & {co} & 25 & 3.2 & {co} & 15 & 11. \\ & & 64 & {co} & 29 & 7.6 & {co} & 17 & 87. \\ & & 128 & {co} & 32 & 17. & {co} & 18 & 610. \\ \cmidrule(lr){2-9} & \multirow{5}{*}{80} & 8 & {co} & 21 & 2.6 & {co} & 18 & 2.6 \\ & & 16 & {co} & 24 & 5.6 & {co} & 17 & 13. \\ & & 32 & {co} & 27 & 13. & {co} & 18 & 89. \\ & & 64 & {co} & 31 & 31. & {co} & 18 & 600. \\ & & 128 & {co} & 38 & 83. & tl & $\ast$ & $\ast$ \\ \cmidrule(lr){1-9} \multirow{10}{*}{4} & \multirow{5}{*}{2} & 8 & {co} & 17 & .49 & {co} & 17 & .37 \\ & & 16 & {co} & 18 & 1.0 & {co} & 16 & 1.3 \\ & & 32 & {co} & 24 & 2.8 & {co} & 17 & 7.8 \\ & & 64 & {co} & 27 & 6.4 & {co} & 17 & 57. \\ & & 128 & {co} & 30 & 14. & {co} & 17 & 400. \\ \cmidrule(lr){2-9} & \multirow{5}{*}{4} & 8 & {co} & 25 & 28. & {co} & 21 & 54. \\ & & 16 & {co} & 28 & 86. & {co} & 22 & 318. \\ & & 32 & {co} & 29 & 198. & tl & 9 & 1823. \\ & & 64 & {co} & 31 & 423. & sk & $\ast$ & $\ast$ \\ & & 128 & {co} & 42 & 1210. & sk & $\ast$ & $\ast$ \\ \bottomrule \end{tabular} \caption{Solve time in seconds and number of iterations (iter) for instances with $p = 1$.} \label{tab:norml1:master} \end{table} \end{document}
\begin{document} \title{\huge{Fair and Distributed Dynamic Optimal Transport for\\ Resource Allocation over Networks} } \author{Jason Hughes and Juntao Chen \thanks{The authors are with the Department of Computer and Information Science, Fordham University, New York, NY, 10023 USA. E-mail: \{jhughes50,jchen504\}@fordham.edu}} \maketitle \begin{abstract} Optimal transport is a framework that facilitates the most efficient allocation of a limited amount of resources. However, the most efficient allocation scheme does not necessarily preserve fairness. In this paper, we establish a framework which explicitly considers the fairness of dynamic resource allocation over a network with heterogeneous participants. As computing the transport strategy in a centralized fashion requires significant computational resources, it is imperative to develop a computationally light algorithm that can be applied to large-scale problems. To this end, we develop a fully distributed algorithm, with provable convergence, for fair and dynamic optimal transport using the alternating direction method of multipliers (ADMM). In the designed algorithm, each pair of resource supplier and receiver computes its own solution and iteratively updates the transport scheme through negotiation, without requiring a central planner. The distributed algorithm yields a fair and efficient resource allocation mechanism over a network. We corroborate the obtained results through case studies. \end{abstract} \section{Introduction} Optimal transport (OT) is a centralized framework that enables the design of efficient schemes for distributing resources by considering heterogeneous constraints between the resource suppliers and receivers \cite{galichon2018optimal}.
Efficiency in transporting and distributing resources has long been sought, for example in the optimal dispatch of raw materials in manufacturing, the backup of power units in disaster-affected neighborhoods, and the matching between employees and tasks in an enterprise network. Under the standard OT paradigm, the resource distribution scheme maximizes the aggregated utilities of all participants in a centralized way, regardless of whether that distribution is fair for the resource receivers \cite{you2016energy,zhang2019consensus}. This efficiency-maximization paradigm is not suitable for many societal problems. For example, in energy systems, resilience planning should take into account communities that are generally under-considered yet hit heavily by natural disasters, even though, from the central planner's perspective, resilience planning in these areas may not contribute as significantly as in other areas to the system's utility under a cost-benefit analysis. Therefore, it is necessary to incorporate fairness into the transport mechanism design for constrained resource allocation, especially in scenarios that promote social equity. The transport network over which the resources are distributed becomes more complex with a large number of suppliers and receivers. This large-scale feature of the OT problem gives rise to another concern about the centralized computation of the transport plan. The computation required for centralized planning grows exponentially with the number of participants in the framework. This concern is further intensified if the resource allocation is completed sequentially over a period of time. To this end, we aim to develop a distributed algorithm for fair and efficient dynamic resource allocation where a centralized planner is not necessary. The distributed algorithm is obtained by leveraging the alternating direction method of multipliers (ADMM) approach \cite{boyd2011distributed}.
To enable a fair resource allocation, we include a fairness measure in the objective function in the dynamic OT framework. Therefore, the resulting dynamic transport plan will have a balance between efficiency and fairness. In the designed ADMM-based distributed algorithm, each participant (resource supplier or receiver) only needs to solve its own problem and exchange the results with the corresponding connected agents, which enables parallel updates on the solution. The algorithm terminates when the solution computed at each pair of supplier and receiver coincides, at which point the dynamic transport strategy given by our developed distributed algorithm is the same as the one under centralized design. Our distributed algorithm offers insights for fair and efficient dynamic resource distribution over networks. First, the updates of transport strategies at both the supplier side and the receiver side can be seen as bargaining for the resources transfer. The bargaining process ends when both parties reach an agreement. Furthermore, during each update, each receiver node in the network proposes a solution that explicitly considers the fairness. In comparison, the supplier nodes solely focus on maximizing their payoff by selling their resources. At the next round of updates, each pair of supplier and receiver will propose a resource distribution scheme that is closer to the average of their previous solutions. It indicates that, as the bargaining progresses, the resource suppliers will also consider the fairness and the receivers will take into account the efficiency of the transport plan to have a consensus. The algorithm can also be implemented online conveniently by adapting to the changes in the resource allocation network and participants’ preferences. The contributions of this paper are summarized as follows. First, we establish a framework that can yield fair and efficient dynamic resource transportation over networks. 
Second, we develop a distributed algorithm based on ADMM to compute the dynamic transport strategy, in which the resource suppliers and receivers negotiate iteratively on the strategy. Third, we use case studies to corroborate the effectiveness of the algorithm and its applicability to changing environments. \textit{Related Works:} Optimal resource allocation/matching has been investigated extensively in various fields, including communication networks \cite{you2016energy}, energy systems \cite{awad2016optimal}, critical infrastructure \cite{huang2018distributed} and cyber systems \cite{chen2018security}. To compute the optimal transport strategy efficiently, a number of techniques have been developed, such as simultaneous approximation \cite{mirrokni2012simultaneous}, population-based optimization \cite{deb2017population}, and distributed algorithms \cite{zhang2019consensus,niu2013efficient}. Our work is also related to fair allocation of constrained resources \cite{coucheney2009fair,abdel2014utility}. In this work, we leverage ADMM to develop a fast computational mechanism for fair and efficient dynamic resource matching over large-scale networks. \section{Problem Formulation}\label{sec:problem} In this section, we first present a standard dynamic OT framework for limited resource allocation over a network. Then, we extend the framework to a fair OT setting by considering fairness in the dynamic transport design. In the network, we denote by $\mathcal{X}:=\{1, ..., |\mathcal{X}|\}$ the set of destinations or targets that receive the resources, and by $\mathcal{Y}:=\{1, ..., |\mathcal{Y}|\}$ the set of origins or sources that distribute resources to the targets. Specifically, each source node $y\in\mathcal{Y}$ is connected to a number of target nodes denoted by $\mathcal{X}_y$, representing that $y$ has choices in allocating its resources to a specific group of destinations $\mathcal{X}_y$ in the network.
Similarly, it is possible that each target node $x\in\mathcal{X}$ receives resources from multiple source nodes, and this set of suppliers to node $x$ is denoted by $\mathcal{Y}_x$. Note that $\mathcal{X}_y$, $\forall y$ and $\mathcal{Y}_x$, $\forall x$ are nonempty. Otherwise, the corresponding nodes are isolated in the network and do not play a role in the considered optimal transport strategy design. It is also straightforward to see that the resources are transported over a bipartite network, where one side of the network consists of all source nodes and the other includes all destination nodes. This bipartite graph may not be complete due to constrained matching policies between participants. Another reason for an incomplete bipartite graph in practice is that transporting resources between certain pairs of source and destination nodes may be infeasible due to long transport distances. For convenience, we denote by $\mathcal{E}$ the set including all feasible transport paths in the network, i.e., $\mathcal{E}:=\{\{x,y\}|x\in\mathcal{X}_y,y\in\mathcal{Y}\}$. Note that $\mathcal{E}$ also refers to the set of all edges in the established bipartite graph for resource transportation. We denote by $\pi_{xy}^t\in\mathbb{R}_+$ the amount of resources transported from the origin node $y\in\mathcal{Y}$ to the destination node $x\in\mathcal{X}$ at time $t$, where $\mathbb{R}_+$ is the set of nonnegative real numbers and $t\in\mathcal{T}:=\{1,2,...,T\}$. For convenience, let $\Pi:=\{\pi_{xy}^t\}_{x\in\mathcal{X}_y,y\in\mathcal{Y},t\in\mathcal{T}}$ be the transport plan designed for the considered network.
To this end, the centralized dynamic optimal transport problem can be formulated as follows: \begin{align} \max_{\Pi}\ \sum_{t\in\mathcal{T}}\sum_{x\in\mathcal{X}} \sum_{y\in\mathcal{Y}_x}& d_{xy}(\pi_{xy}^t) + \sum_{t\in\mathcal{T}} \sum_{y\in\mathcal{Y}} \sum_{x\in\mathcal{X}_y} \left(s_{xy}(\pi_{xy}^t) - c_{xy}(\pi_{xy}^t)\right)\notag\\ \mathrm{s.t.}\quad &\underline{p}_{x}\leq \sum_{t\in\mathcal{T}} \sum_{y\in\mathcal{Y}_x} \pi_{xy}^t\leq \bar{p}_{x},\ \forall x\in\mathcal{X},\notag\\ &\underline{q}_{y}\leq \sum_{t\in\mathcal{T}}\sum_{x\in\mathcal{X}_y} \pi_{xy}^t\leq \bar{q}_{y},\ \forall y\in\mathcal{Y},\label{OT1:eqn}\\ &\pi_{xy}^t\geq 0,\ \forall t\in\mathcal{T},\ \forall\{x,y\} \in\mathcal{E},\notag \end{align} where $d_{xy}:\mathbb{R}_+\rightarrow\mathbb{R}$ and $s_{xy}:\mathbb{R}_+\rightarrow\mathbb{R}$ are utility functions for destination/target node $x$ and source node $y$, respectively; $c_{xy}:\mathbb{R}_+\rightarrow\mathbb{R}$ is a cost function of source node $y$ for transporting resources to target node $x$. Furthermore, $\bar{p}_x\geq \underline{p}_{x}\geq 0$, $\forall x\in\mathcal{X}$ and $\bar{q}_y\geq \underline{q}_{y}\geq 0$, $\forall y\in\mathcal{Y}$. The constraints $\underline{p}_{x}\leq \sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x} \pi_{xy}^t\leq \bar{p}_{x}$ and $\underline{q}_{y}\leq \sum_{t\in\mathcal{T}}\sum_{x\in\mathcal{X}_y} \pi_{xy}^t\leq \bar{q}_{y}$ capture the limitations on the amount of requested and transferred resources at the target $x$ and source $y$, respectively. We have the following assumption on the utility functions $d_{xy}$ and $s_{xy}$ and the cost function $c_{xy}$. \begin{assumption}\label{assump:1} The utility functions $d_{xy}$ and $s_{xy}$ are concave and monotonically increasing, and the transport cost function $c_{xy}$ is convex and monotonically increasing in $\pi_{xy}^t$, $\forall x\in\mathcal{X},\forall y\in\mathcal{Y}$.
\end{assumption} There are a number of functions of interest that satisfy the properties in Assumption \ref{assump:1}. For example, the utility functions $d_{xy}$ and $s_{xy}$ can adopt a linear form, indicating a linear growth of payoff in the amount of transferred and consumed resources. $d_{xy}$ and $s_{xy}$ can also take a logarithmic form, representing a marginal utility that decreases with the amount of transported resources. The cost function $c_{xy}$ can admit linear and quadratic forms, capturing flat and increasing growth of transport costs in the resources, respectively. In the above formulation, there is no consideration of fairness in resource allocation. The central planner devises an optimal transport strategy by maximizing the social welfare. In practice, some target nodes may not contribute as significantly as other nodes to the social objective by receiving a certain amount of resources from the sources. This efficient resource allocation plan yields a larger objective value. However, it is not fair for some nodes if their requests for resources are ignored. Therefore, it is important to incorporate equity considerations into resource allocation. One possible way to achieve this goal is to introduce a fairness measure into the objective function in the OT framework as follows: \begin{equation}\label{obj_fair:eqn} \begin{aligned} \sum_{t\in\mathcal{T}}\sum_{x\in\mathcal{X}} \sum_{y\in\mathcal{Y}_x} d_{xy}(\pi_{xy}^t) +\sum_{t\in\mathcal{T}} \sum_{y\in\mathcal{Y}} \sum_{x\in\mathcal{X}_y} \left(s_{xy}(\pi_{xy}^t) - c_{xy}(\pi_{xy}^t)\right)\\ +\sum_{x\in\mathcal{X}} \omega_x f_{x}(\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x}\pi_{xy}^t), \end{aligned} \end{equation} where $\omega_x\geq 0$ is a weighting constant for fairness, and $f_x:\mathbb{R}_+\rightarrow \mathbb{R}$. Note that $\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x}\pi_{xy}^t$ is the total amount of resources received by the target node $x$ over $T$ periods of time.
Thus, $f_{x}(\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x}\pi_{xy}^t)$ quantifies the level of fairness achieved by allocating $\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x}\pi_{xy}^t$ resources to each target $x$. To facilitate a fair transport strategy, the central planner needs to devise $f_x$ strategically. One consideration is that the marginal utility of the fairness term $f_x$ should decrease. Otherwise, it will lead to an unfair distribution of resources, i.e., some target nodes receive most of the resources in the network as the central planner aims to maximize $\sum_{x\in\mathcal{X}} \omega_x f_{x}(\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x}\pi_{xy}^t)$. We have the following assumption on the property of the fairness function. \begin{assumption}\label{assump:2} The fairness function $f_x$, $\forall x\in\mathcal{X}$ is concave and monotonically increasing. \end{assumption} There can be various choices for the fairness function. One possible choice is the proportional fairness function \cite{abdel2014utility}: \begin{equation}\label{fairness:eqn} f_x\Big(\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x}\pi_{xy}^t\Big) = \log\Big(\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x}\pi_{xy}^t+1\Big), \ \forall x\in\mathcal{X}. \end{equation} To this end, the central planner's goal is to devise a fair and efficient transport strategy that maximizes the objective function \eqref{obj_fair:eqn} while taking into account the same set of constraints on resource capacities in \eqref{OT1:eqn}. \section{Distributed Algorithm for Fair and Efficient Dynamic Transport Strategy Design}\label{sec:algorithm} The planner can solve the optimization problem formulated in Section \ref{sec:problem} in a centralized manner. One primary concern is computational feasibility. It can be computationally expensive to calculate the fair and efficient resource distribution plan when the number of sources and targets becomes enormous, as in a large-scale network.
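Returning to the proportional fairness choice \eqref{fairness:eqn}, a quick numeric illustration of why a concave $f_x$ discourages concentrating resources on a few targets (all numbers are hypothetical):

```python
import numpy as np

# Proportional-fairness term from the text: f(r) = log(r + 1).
f = lambda r: np.log(r + 1.0)

# Diminishing marginal utility: the gain from one extra unit of resource
# shrinks as the target already holds more.
gain_at_0 = f(1.0) - f(0.0)
gain_at_9 = f(10.0) - f(9.0)
print(gain_at_0, gain_at_9)        # first gain is larger

# With equal weights, a balanced split of a fixed budget across two targets
# scores higher than a lopsided one (a consequence of concavity).
balanced = f(5.0) + f(5.0)         # 10 units split 5/5
lopsided = f(9.0) + f(1.0)         # 10 units split 9/1
print(balanced, lopsided)          # balanced > lopsided
```

A linear $f_x$ would make both splits score identically, so the planner would have no incentive to spread resources; concavity is what encodes the equity preference.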
Therefore, we shift our attention from finding a fair and efficient transport strategy in a centralized way to computing it in a fully distributed fashion. \subsection{Feasibility and Optimality} Before developing the distributed algorithm, we first analyze the feasibility of the formulated optimization problem. \begin{lemma}\label{lem:fea} It is feasible to find a fair transport plan $\Pi$ if the following conditions are satisfied: \begin{align} \sum_{y\in\mathcal{Y}_x}\bar{q}_y &\geq \underline{p}_x,\quad \forall x\in\mathcal{X},\label{fea:eqn1}\\ \sum_{y\in\mathcal{Y}} \bar{q}_y & \geq \sum_{x\in\mathcal{X}} \underline{p}_x.\label{fea:eqn2} \end{align} \end{lemma} The two inequalities in Lemma \ref{lem:fea} have natural interpretations. \eqref{fea:eqn1} ensures that all the target nodes' requests can be fulfilled. \eqref{fea:eqn2} indicates that the total demand for resources does not exceed the total supply that the source nodes can provide. We next characterize the existence of an optimal solution to the formulated problem. \begin{lemma} Under Assumptions \ref{assump:1} and \ref{assump:2}, and the inequalities \eqref{fea:eqn1} and \eqref{fea:eqn2}, there exists a fair and efficient transport strategy that maximizes the objective \eqref{obj_fair:eqn} while satisfying the constraints in \eqref{OT1:eqn}. \end{lemma} The existence of the optimal solution is guaranteed by the concavity of $d_{xy}$, $s_{xy}$ and $f_x$ and the convexity of $c_{xy}$, as well as the feasibility of the problem resulting from \eqref{fea:eqn1} and \eqref{fea:eqn2}. \subsection{Distributed Algorithm} In this subsection, we aim to develop a distributed algorithm to solve the formulated problem. Our first step is to rewrite the optimization problem in the ADMM form by introducing ancillary variables $\pi_{xy}^{t,d}$ and $\pi_{xy}^{t,s}$. The additional superscripts $d$ and $s$ indicate that the corresponding parameters belong to the destination/target node or the source node, respectively.
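The feasibility conditions of Lemma \ref{lem:fea} are simple to check programmatically before running any algorithm. A sketch (the function name and data layout are illustrative, not from the paper):

```python
def is_feasible(p_low, q_high, suppliers_of):
    """Check the sufficient conditions of the feasibility lemma:
    (i) for every target x, the sources connected to x can jointly cover
        its minimum demand, and
    (ii) total supply covers total minimum demand.
    p_low[x]: minimum demand of target x; q_high[y]: maximum supply of
    source y; suppliers_of[x]: the set Y_x of sources connected to x."""
    cond1 = all(sum(q_high[y] for y in suppliers_of[x]) >= p_low[x]
                for x in p_low)
    cond2 = sum(q_high.values()) >= sum(p_low.values())
    return cond1 and cond2

# Feasible instance: y1 alone covers x1, and y1 + y2 cover x2.
p_low = {"x1": 3.0, "x2": 2.0}
q_high = {"y1": 4.0, "y2": 2.0}
suppliers_of = {"x1": ["y1"], "x2": ["y1", "y2"]}
ok = is_feasible(p_low, q_high, suppliers_of)
print(ok)   # True

# Infeasible instance: the only supplier cannot meet the minimum demand.
bad = is_feasible({"x1": 5.0}, {"y1": 4.0}, {"x1": ["y1"]})
print(bad)  # False
```

Note the conditions are sufficient but also capture the intuitive bottlenecks: a single under-connected target, or an overall supply shortfall.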
We then impose $\pi_{xy}^t = \pi_{xy}^{t,d}$ and $\pi_{xy}^t = \pi_{xy}^{t,s}$, so that the solutions proposed by the targets and sources are consistent with the one proposed by the central planner. This reformulation facilitates the design of a distributed algorithm in which the fair and efficient transport plan is obtained iteratively. To this end, the reformulated optimal transport problem under fairness consideration is presented as follows: \begin{align} \min_{\Pi_d \in \mathcal{F}_d, \Pi_s \in \mathcal{F}_s,\Pi} & -\sum_{t\in\mathcal{T}}\sum_{x\in\mathcal{X}} \sum_{y\in\mathcal{Y}_x} d_{xy}(\pi_{xy}^{t,d}) -\sum_{t\in\mathcal{T}} \sum_{y\in\mathcal{Y}} \sum_{x\in\mathcal{X}_y} (s_{xy}(\pi_{xy}^{t,s}) \notag\\ &- c_{xy}(\pi_{xy}^{t,s})) -\sum_{x\in\mathcal{X}}\omega_x f_{x}(\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x}\pi_{xy}^{t,d})\notag\\ \mathrm{s.t.}\quad & \pi_{xy}^{t,d} = \pi_{xy}^t,\ \forall t\in\mathcal{T},\ \forall \{x,y\}\in\mathcal{E},\label{OT2:eqn}\\ & \pi_{xy}^t=\pi_{xy}^{t,s},\ \forall t\in\mathcal{T},\ \forall \{x,y\}\in\mathcal{E},\notag \end{align} where $\Pi_d:=\{\pi_{xy}^{t,d}\}_{x\in\mathcal{X}_y,y\in\mathcal{Y},t\in\mathcal{T}}$, $\Pi_s:=\{\pi_{xy}^{t,s}\}_{x\in\mathcal{X},y\in\mathcal{Y}_x,t\in\mathcal{T}}$, $\mathcal{F}_d := \{ \Pi_d | \pi_{xy}^{t,d} \geq 0, \underline{p}_x \leq \sum_{t\in\mathcal{T}} \sum_{y \in \mathcal{Y}_x} \pi_{xy}^{t,d} \leq \bar{p}_x,\{x,y\} \in \mathcal{E},t\in\mathcal{T}\}$, and $\mathcal{F}_s := \{ \Pi_s | \pi_{xy}^{t,s} \geq 0, \underline{q}_y \leq \sum_{t\in\mathcal{T}}\sum_{x \in \mathcal{X}_y} \pi_{xy}^{t,s} \leq \bar{q}_y,\{x,y\} \in \mathcal{E},t\in\mathcal{T} \}$. Note that we transform the original maximization of the social utility into an equivalent program of minimizing the aggregated cost. Furthermore, due to the constraints, the optimal solutions $\Pi_d$, $\Pi_s$, and $\Pi$ to \eqref{OT2:eqn} coincide.
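Before running any solution method on a concrete instance, the feasibility conditions \eqref{fea:eqn1} and \eqref{fea:eqn2} of Lemma \ref{lem:fea} are inexpensive to verify directly. The following is a minimal sketch (Python; the helper name, node labels and numbers are illustrative assumptions, not taken from the paper):

```python
def feasible(p_min, q_max, neighbours):
    """Check the two feasibility conditions of Lemma 1 for an instance.

    p_min[x]: minimal demand of target x; q_max[y]: capacity of source y;
    neighbours[x]: the sources Y_x connected to target x.
    """
    # Per-target condition: each target's minimal demand is coverable
    # by the combined capacity of the sources it can reach.
    per_target = all(sum(q_max[y] for y in neighbours[x]) >= p_min[x]
                     for x in p_min)
    # Aggregate condition: total capacity covers total minimal demand.
    total = sum(q_max.values()) >= sum(p_min.values())
    return per_target and total

# Illustrative instance: two targets, two sources, x2 reachable from y2 only.
p_min = {'x1': 2.0, 'x2': 3.0}
q_max = {'y1': 2.0, 'y2': 4.0}
neighbours = {'x1': ['y1', 'y2'], 'x2': ['y2']}
print(feasible(p_min, q_max, neighbours))
```

Note that the per-target condition does not imply the aggregate one (sources are shared among targets), so both must be checked.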
Our next focus is to develop a distributed algorithm to solve the problem \eqref{OT2:eqn}. We let $\alpha_{xy}^{t,s}$ and $\alpha_{xy}^{t,d}$ be the Lagrange multipliers associated with the constraints $\pi_{xy}^{t,s} = \pi_{xy}^t$ and $\pi_{xy}^t=\pi_{xy}^{t,d}$, respectively. The Lagrangian then facilitates the application of ADMM in the distributed algorithm design. Specifically, the augmented Lagrangian associated with the optimization problem \eqref{OT2:eqn} can be written as follows: \begin{align} L &\left(\Pi_{d}, \Pi_{s}, \Pi, \alpha_{xy}^{t,s}, \alpha_{xy}^{t,d} \right) = \notag\\ &- \sum_{t\in\mathcal{T}}\sum_{x\in\mathcal{X}} \sum_{y\in\mathcal{Y}_x} d_{xy}(\pi_{xy}^{t,d}) - \sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}} \sum_{x\in\mathcal{X}_y} \left(s_{xy}(\pi_{xy}^{t,s}) - c_{xy}(\pi_{xy}^{t,s})\right) \notag\\ & -\sum_{x\in\mathcal{X}}\omega_x f_{x}(\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x}\pi_{xy}^{t,d})+ \sum_{t\in\mathcal{T}}\sum_{x\in\mathcal{X}} \sum_{y\in\mathcal{Y}_x} \alpha_{xy}^{t,d} (\pi_{xy}^{t,d} - \pi_{xy}^t) \notag\\ &+\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}} \sum_{x\in\mathcal{X}_y} \alpha_{xy}^{t,s} (\pi_{xy}^t - \pi_{xy}^{t,s}) + \frac{\eta}{2} \sum_{t\in\mathcal{T}}\sum_{x\in\mathcal{X}} \sum_{y\in\mathcal{Y}_x} (\pi_{xy}^{t,d} - \pi_{xy}^t)^2 \notag\\ &+ \frac{\eta}{2} \sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}} \sum_{x\in\mathcal{X}_y} (\pi_{xy}^t - \pi_{xy}^{t,s})^2,\label{Lag:eqn} \end{align} where $\eta > 0$ is a positive scalar constant controlling the convergence rate of the algorithm designed below. In \eqref{Lag:eqn}, the last two terms $\frac{\eta}{2}\sum_{t\in\mathcal{T}} \sum_{x\in\mathcal{X}} \sum_{y\in\mathcal{Y}_x} (\pi_{xy}^{t,d} - \pi_{xy}^t)^2$ and $\frac{\eta}{2} \sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}} \sum_{x\in\mathcal{X}_y} (\pi_{xy}^t - \pi_{xy}^{t,s})^2$, acting as penalization, are quadratic. Hence, the Lagrangian $L$ is strictly convex in each block of primal variables, ensuring that each of the subproblems below admits a unique minimizer.
We can apply ADMM to the minimization problem in \eqref{OT2:eqn}. The resulting distributed algorithm is presented in the following proposition. \begin{proposition} The iterative steps of ADMM applied to \eqref{OT2:eqn} are summarized as follows: \begin{equation}\label{ADMM1_eqn1} \begin{split} \Pi_{x}^d(k+1) \in \arg \min_{\Pi_{x}^d\in\mathcal{F}_{x}^d} - \sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x} d_{xy}(\pi_{xy}^{t,d}) - \omega_x f_{x}(\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x}\pi_{xy}^{t,d}) \\ + \sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x} \alpha_{xy}^{t,d}(k) \pi_{xy}^{t,d} + \frac{\eta}{2}\sum_{t\in\mathcal{T}} \sum_{y\in\mathcal{Y}_x} (\pi_{xy}^{t,d} - \pi_{xy}^t(k))^2, \end{split} \end{equation} \begin{equation}\label{ADMM1_eqn2} \begin{aligned} \Pi_{y}^s(k+1) \in \arg \min_{\Pi_{y}^s\in\mathcal{F}_{y}^s} - \sum_{t\in\mathcal{T}}\sum_{x\in\mathcal{X}_y} \left(s_{xy}(\pi_{xy}^{t,s}) - c_{xy}(\pi_{xy}^{t,s})\right) \\ -\sum_{t\in\mathcal{T}}\sum_{x\in\mathcal{X}_y} \alpha_{xy}^{t,s}(k)\pi_{xy}^{t,s} + \frac{\eta}{2} \sum_{t\in\mathcal{T}}\sum_{x\in\mathcal{X}_y} (\pi_{xy}^t(k) - \pi_{xy}^{t,s})^2, \end{aligned} \end{equation} \begin{equation}\label{ADMM1_eqn3} \begin{split} \Pi_{xy}(&k+1)= \arg \min_{\Pi_{xy}} - \sum_{t\in\mathcal{T}} \alpha_{xy}^{t,d}(k)\pi_{xy}^t + \sum_{t\in\mathcal{T}}\alpha_{xy}^{t,s}(k)\pi_{xy}^t \\ &+\frac{\eta}{2}\sum_{t\in\mathcal{T}}(\pi_{xy}^{t,d}(k+1) - \pi_{xy}^t)^2 + \frac{\eta}{2}\sum_{t\in\mathcal{T}}(\pi_{xy}^t - \pi_{xy}^{t,s}(k+1))^2, \end{split} \end{equation} \begin{equation}\label{ADMM1_eqn4} \begin{split} \alpha_{xy}^{t,d}(k+1) = \alpha_{xy}^{t,d}(k) + \eta(\pi_{xy}^{t,d}(k+1)-\pi_{xy}^t(k+1)), \end{split} \end{equation} \begin{equation}\label{ADMM1_eqn5} \begin{split} \alpha_{xy}^{t,s}(k+1) = \alpha_{xy}^{t,s}(k) + \eta(\pi_{xy}^t(k+1)-\pi_{xy}^{t,s}(k+1)), \end{split} \end{equation} where $\Pi_{\tilde{x}}^d:=\{\pi_{xy}^{t,d}\}_{y\in\mathcal{Y}_x,x=\tilde{x},t\in\mathcal{T}}$ represents the solution at target node
$\tilde{x}\in\mathcal{X}$, $\Pi_{\tilde{y}}^s:=\{\pi_{xy}^{t,s}\}_{x\in\mathcal{X}_y,y=\tilde{y},t\in\mathcal{T}}$ represents the proposed solution at source node $\tilde{y}\in\mathcal{Y}$, and $\Pi_{\tilde{x}\tilde{y}}:=\{\pi_{xy}^{t}\}_{y=\tilde{y},x=\tilde{x},t\in\mathcal{T}}$ includes the solution between $\tilde{x}$ and $\tilde{y}$. In addition, $\mathcal{F}_{x}^d := \{ \Pi_{x}^d | \pi_{xy}^{t,d} \geq 0, y\in\mathcal{Y}_x, t\in\mathcal{T}, \underline{p}_x \leq \sum_{t\in\mathcal{T}}\sum_{y \in \mathcal{Y}_x} \pi_{xy}^{t,d} \leq \bar{p}_x\}$, and $\mathcal{F}_{y}^s := \{ \Pi_{y}^s | \pi_{xy}^{t,s} \geq 0, x\in\mathcal{X}_y, t\in\mathcal{T}, \underline{q}_y \leq \sum_{t\in\mathcal{T}}\sum_{x \in \mathcal{X}_y} \pi_{xy}^{t,s} \leq \bar{q}_y\}$. \end{proposition} \begin{proof} Let $\Vec{x} = [\Vec{\Pi}^{d\mathsf{T}}, \Vec{\Pi}^\mathsf{T}]^\mathsf{T}$, $\Vec{y} = [\Vec{\Pi}^\mathsf{T}, \Vec{\Pi}^{s\mathsf{T}}]^\mathsf{T}$, and $\alpha = [\{\Vec{\alpha_{xy}^{t,s}}\}^\mathsf{T}, \{\Vec{\alpha_{xy}^{t,d}}\}^\mathsf{T}]^\mathsf{T}$, where $\mathsf{T}$ and $\Vec{}$ denote the transpose and the vectorization operator, respectively. Note that these three vectors are all of dimension $2T|\mathcal{E}| \times 1$. Now we can write the constraints in \eqref{OT2:eqn} in matrix form as $\mathbf{A}\Vec{x} = \vec{y}$, where $\mathbf{A} = [\textbf{I},\textbf{0};\textbf{I},\textbf{0}]$ with $\textbf{I}$ and $\textbf{0}$ denoting the $T|\mathcal{E}|$-dimensional identity and zero matrices, respectively. Next, we note that $\Vec{x} \in \mathcal{F}_{\Vec{x}}^d$ and $\Vec{y} \in \mathcal{F}_{\Vec{y}}^s$, where $ \mathcal{F}_{\Vec{x}}^d = \{ \Vec{x} | \pi_{xy}^{t,d} \geq 0, \underline{p}_x \leq \sum_{t\in\mathcal{T}}\sum_{y \in \mathcal{Y}_x} \pi_{xy}^{t,d} \leq \bar{p}_x, \{x,y\} \in \mathcal{E} \},\ \mathcal{F}_{\Vec{y}}^s := \{ \Vec{y} | \pi_{xy}^{t,s} \geq 0, \underline{q}_y \leq \sum_{t\in\mathcal{T}}\sum_{x \in \mathcal{X}_y} \pi_{xy}^{t,s} \leq \bar{q}_y, \{x,y\} \in \mathcal{E} \}.
$ Then, we can solve \eqref{OT2:eqn} using the iterations: 1) $ \Vec{x}(k+1) \in \arg \min_{\Vec{x} \in \mathcal{F}_{\Vec{x}}^d} L(\Vec{x},\Vec{y}(k),\alpha(k)); $ 2) $ \Vec{y}(k+1) \in \arg \min_{\Vec{y} \in \mathcal{F}_{\Vec{y}}^s} L(\Vec{x}(k+1),\Vec{y},\alpha(k)); $ 3) $ \alpha(k+1) = \alpha(k) + \eta(A\Vec{x}(k+1) - \Vec{y}(k+1)), $ whose convergence is proved in \cite{boyd2011distributed}. Because there is no coupling among $\Pi_{x}^d, \Pi_{y}^s, \Pi_{xy}, \alpha_{xy}^{t,d},$ and $\alpha_{xy}^{t,s}$, the above iterations can be equivalently decomposed into \eqref{ADMM1_eqn1}-\eqref{ADMM1_eqn5}. \end{proof} We can further simplify the iterations \eqref{ADMM1_eqn1}-\eqref{ADMM1_eqn5} to four updates, summarized below. \begin{proposition}\label{prop:2} The iterations \eqref{ADMM1_eqn1}-\eqref{ADMM1_eqn5} can be simplified as follows: \begin{equation}\label{ADMM2_eqn1} \begin{split} \Pi_{x}^d(k+1) \in \arg \min_{\Pi_{x}^d\in\mathcal{F}_{x}^d} - \sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x} d_{xy}(\pi_{xy}^{t,d}) - \omega_x f_{x}(\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x}\pi_{xy}^{t,d}) \\ + \sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x} \alpha_{xy}^{t}(k) \pi_{xy}^{t,d} + \frac{\eta}{2}\sum_{t\in\mathcal{T}} \sum_{y\in\mathcal{Y}_x} (\pi_{xy}^{t,d} - \pi_{xy}^t(k))^2, \end{split} \end{equation} \begin{equation}\label{ADMM2_eqn2} \begin{split} \Pi_{y}^s(k+1) \in \arg \min_{\Pi_{y}^s\in\mathcal{F}_{y}^s} - \sum_{t\in\mathcal{T}}\sum_{x\in\mathcal{X}_y} \left(s_{xy}(\pi_{xy}^{t,s}) - c_{xy}(\pi_{xy}^{t,s})\right) \\ -\sum_{t\in\mathcal{T}}\sum_{x\in\mathcal{X}_y} \alpha_{xy}^{t}(k)\pi_{xy}^{t,s} + \frac{\eta}{2} \sum_{t\in\mathcal{T}}\sum_{x\in\mathcal{X}_y} (\pi_{xy}^t(k) - \pi_{xy}^{t,s})^2, \end{split} \end{equation} \begin{equation}\label{ADMM2_eqn3} \begin{split} \Pi_{xy}(k+1) = \frac{1}{2} \left(\Pi_{xy}^{d}(k+1) + \Pi_{xy}^{s}(k+1)\right), \end{split} \end{equation} \begin{equation}\label{ADMM2_eqn4} \begin{split} \alpha_{xy}^t(k+1) =
\alpha_{xy}^t(k) + \frac{\eta}{2}\left(\pi_{xy}^{t,d}(k+1) - \pi_{xy}^{t,s}(k+1)\right), \end{split} \end{equation} where $\Pi_{xy}^{d}$ and $\Pi_{xy}^{s}$ in \eqref{ADMM2_eqn3} are obtained from \eqref{ADMM2_eqn1} for fixed $y$ and \eqref{ADMM2_eqn2} for fixed $x$, respectively. \end{proposition} \begin{proof} As the objective in \eqref{ADMM1_eqn3} is strictly convex, we can solve it through the first-order optimality condition: $ \pi_{xy}^t(k+1) = \frac{1}{2\eta}(\alpha_{xy}^{t,d}(k) - \alpha_{xy}^{t,s}(k)) + \frac{1}{2}(\pi_{xy}^{t,d}(k+1) + \pi_{xy}^{t,s}(k+1)). $ By substituting the above equation into \eqref{ADMM1_eqn4} and \eqref{ADMM1_eqn5}, we get: $ \alpha_{xy}^{t,d}(k+1) = \frac{1}{2}(\alpha_{xy}^{t,d}(k) + \alpha_{xy}^{t,s}(k)) + \frac{\eta}{2}(\pi_{xy}^{t,d}(k+1) - \pi_{xy}^{t,s}(k+1)), $ $ \alpha_{xy}^{t,s}(k+1) = \frac{1}{2}(\alpha_{xy}^{t,d}(k) + \alpha_{xy}^{t,s}(k)) + \frac{\eta}{2}(\pi_{xy}^{t,d}(k+1) - \pi_{xy}^{t,s}(k+1)). $ We can see that $\alpha_{xy}^{t,d} = \alpha_{xy}^{t,s}$ after each update. Hence, $\pi_{xy}^t(k+1)$ can be further simplified to $\pi_{xy}^t(k+1) = \frac{1}{2} (\pi_{xy}^{t,d}(k+1) + \pi_{xy}^{t,s}(k+1))$, as shown in \eqref{ADMM2_eqn3}. In addition, writing $\alpha_{xy}^{t,d} = \alpha_{xy}^{t,s} = \alpha_{xy}^t$, the updates \eqref{ADMM1_eqn4} and \eqref{ADMM1_eqn5} reduce to \eqref{ADMM2_eqn4}. \end{proof} We iterate equations \eqref{ADMM2_eqn1}-\eqref{ADMM2_eqn4} until convergence to obtain a fair and efficient resource transport strategy. Note that fairness is explicitly taken into account during the solution updates, as can be seen from the ADMM iteration step \eqref{ADMM2_eqn1}. For convenience, we summarize the iterations in Proposition \ref{prop:2} in Algorithm \ref{Alg:1}.
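To make the iterations concrete, the following minimal sketch (Python) runs the simplified updates \eqref{ADMM2_eqn1}--\eqref{ADMM2_eqn4} on a toy instance with one target, one source, $T=1$, linear utilities and logarithmic fairness; all coefficients are illustrative, and each scalar subproblem is solved by brute-force grid search rather than in closed form:

```python
import math

# Toy single-target, single-source instance (all coefficients illustrative).
delta, sigma, zeta = 3.0, 2.0, 1.0   # slopes of the linear d, s, c functions
omega, eta = 3.0, 1.0                # fairness weight and ADMM parameter
p_max = q_max = 4.0                  # capacity upper bounds (lower bounds 0)

def argmin_grid(f, lo, hi, steps=4000):
    """Brute-force 1-D minimizer on [lo, hi]; a stand-in for the exact solve."""
    best_x, best_v = lo, f(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

pi = pi_d = pi_s = alpha = 0.0
for _ in range(100):
    # target update: utility + fairness + dual and penalty terms
    pi_d = argmin_grid(lambda z: -delta*z - omega*math.log(z + 1)
                       + alpha*z + eta/2*(z - pi)**2, 0.0, p_max)
    # source update: profit (s - c) + dual and penalty terms
    pi_s = argmin_grid(lambda z: -(sigma - zeta)*z
                       - alpha*z + eta/2*(pi - z)**2, 0.0, q_max)
    # central update: average of the two proposals; dual ascent on the gap
    pi = 0.5 * (pi_d + pi_s)
    alpha += eta/2 * (pi_d - pi_s)

print(round(pi, 3), round(abs(pi_d - pi_s), 3))
```

On this toy instance both parties benefit from transporting as much as possible, so the iterates quickly reach consensus at the capacity bound.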
\begin{algorithm}[!t] \caption{Distributed Algorithm}\label{Alg:1} \begin{algorithmic}[1] \While {$\Pi_{x}^d$ and $\Pi_{y}^s$ not converged} \State Compute $\Pi_{x}^d(k+1)$ using \eqref{ADMM2_eqn1}, $\forall x\in\mathcal{X}$ \State Compute $\Pi_{y}^s(k+1)$ using \eqref{ADMM2_eqn2}, $\forall y\in\mathcal{Y}$ \State Compute $\Pi_{xy}(k+1)$ using \eqref{ADMM2_eqn3}, $\forall \{x,y\}\in \mathcal{E}$ \State Compute $\alpha_{xy}^t(k+1)$ using \eqref{ADMM2_eqn4}, $\forall t\in\mathcal{T}$, $\forall \{x,y\}\in \mathcal{E}$ \EndWhile \State \textbf{return} $\pi_{xy}^t(k+1)$, $\forall t\in\mathcal{T}$, $\forall \{x,y\}\in \mathcal{E}$ \end{algorithmic} \end{algorithm} \section{Discussions on the Distributed Algorithm}\label{sec:discussion} In this section, we discuss several crucial aspects of the proposed distributed algorithm for fair and efficient resource allocation. \subsection{Fairness and Efficiency Trade-off} The fairness of the transport scheme is ensured during the solution updates. As shown in \eqref{ADMM2_eqn1}, the level of fairness is regulated by the parameter $\omega_x$, $x\in\mathcal{X}$. Specifically, $\omega_x$ trades off the efficiency and the fairness of the transport strategy. With a larger $\omega_x$, the fairness term has a more significant impact on the solution, yielding a fairer but less efficient resource allocation plan. Each target $x$ maximizes $f_{x}(\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x}\pi_{xy}^{t,d})$ at each step. The concavity of $f_x$ guarantees that no single target in the network can receive all the resources. Together with the penalization terms $\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x} \alpha_{xy}^{t}(k) \pi_{xy}^{t,d} + \frac{\eta}{2}\sum_{t\in\mathcal{T}} \sum_{y\in\mathcal{Y}_x} (\pi_{xy}^{t,d} - \pi_{xy}^t(k))^2$, this also ensures that the request for resources from each target $x$ cannot be arbitrarily large.
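The effect of the fairness weight can be seen even in a stripped-down two-target example. The following sketch (Python; all numbers illustrative, not from the paper) splits a fixed budget between two targets with different linear utilities: with no fairness term the whole budget goes to the higher-utility target, while a concave logarithmic fairness term pulls the split toward the interior.

```python
import math

def best_split(delta1, delta2, omega, B, steps=10000):
    """Split budget B between two targets, maximizing linear utility plus
    a weighted proportional-fairness term (grid search for simplicity)."""
    def val(a):
        return (delta1*a + delta2*(B - a)
                + omega*(math.log(a + 1) + math.log(B - a + 1)))
    # return the allocation a* to target 1 achieving the largest value
    return max((val(B*i/steps), B*i/steps) for i in range(steps + 1))[1]

B = 4.0
print(best_split(3.0, 1.0, 0.0, B))   # omega = 0: all budget to target 1
print(best_split(3.0, 1.0, 5.0, B))   # omega = 5: interior, fairer split
```

The first call returns the corner allocation $a=B$; the second returns an interior point, illustrating how the concavity of the fairness term prevents one target from monopolizing the resources.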
\subsection{Implementation of Fairness} In the reformulated problem \eqref{OT2:eqn}, we associated the fairness function $f_x$, $\forall x\in\mathcal{X}$, with the corresponding target node. This admits a natural interpretation: when proposing the transport strategy, each target needs to be aware of the fairness of the resource allocation over the network. In a resource distribution market, the supplier (source node) may not care where its resources end up. However, a target cares whether it receives more or fewer resources than another target. For example, if a large company distributes resources to customers, the company does not care where its product goes, while consumers care if only a few of them are able to obtain the product. This observation is consistent with the iteration steps \eqref{ADMM2_eqn1} and \eqref{ADMM2_eqn2}, where each target $x$ aims to maximize the fairness term $\omega_x f_{x}(\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_x}\pi_{xy}^{t,d})$, while each source $y$ merely maximizes its own utility. Note that during the problem reformulation, the fairness term could instead be associated with the sources; a distributed algorithm for that reformulation can be designed using techniques similar to those in Section \ref{sec:algorithm}. \subsection{Continuous and Distributed Resource Allocation}\label{subsec:online} In Algorithm \ref{Alg:1}, all the participants (sources and targets) update their decisions on the transferred/requested resources iteratively in a distributed fashion. In a resource distribution market, the number of participants and their preferences can vary over time. For example, some suppliers will leave the market when they finish the allocation of their resources. Similarly, new target nodes may join the market when they need to purchase resources. Hence, it is necessary to devise a continuous resource allocation mechanism that adapts to changes in the market.
We can extend Algorithm \ref{Alg:1} in this regard and implement it in an online form. Specifically, when there are changes in the market, we continue to solve the optimal transport problem by running Algorithm \ref{Alg:1} with the necessary updates resulting from the market changes. In this way, we do not need to recompute the fair and efficient transport strategy for the new scenario from scratch. The algorithm takes these changes into account inherently and continuously updates the solution in a distributed way. We illustrate the continuous resource allocation with a case study in Section \ref{sec:case}. \section{Case Studies}\label{sec:case} In this section, we corroborate our algorithm for distributed optimal transport with fairness consideration. We consider a scenario with two target nodes and five source nodes, and a transport network connecting every source node to both target nodes. The upper bounds are $\bar{q}_1 = 2$, $\bar{q}_2 = 3$, $\bar{q}_3 = 4$, $\bar{q}_4 = 3$, $\bar{q}_5 = 2$, $\bar{p}_1 = 4$, and $\bar{p}_2 = 4$. The lower bounds $\underline{q}_y$ and $\underline{p}_x$ are 0 for all nodes. For simplicity of illustration, we consider the resource allocation over a single period, i.e., $T=1$, and thus omit the time index $t$ in the notation of the case studies. Furthermore, we adopt the following linear utility and cost functions: $d_{xy}(\pi_{xy}) = \delta_{xy}\pi_{xy}$, $s_{xy}(\pi_{xy}) = \sigma_{xy}\pi_{xy}$ and $c_{xy}(\pi_{xy}) = \zeta_{xy}\pi_{xy}$, $\forall \{x,y\} \in \mathcal{E}$. The corresponding parameters are selected as: $ [\zeta_{xy}]_{x\in\mathcal{X},y\in\mathcal{Y}} = \begin{bmatrix} 1 & 2 & 1 & 2 & 1 \\ 2 & 1 & 3 & 1 & 2\end{bmatrix}. $ We consider proportional fairness in the resource allocation, i.e., the fairness function takes the form shown in \eqref{fairness:eqn}. \subsection{Fair and Distributed Resource Allocation} We first show the effectiveness of the designed Algorithm \ref{Alg:1}.
Specifically, we compare the optimal transport strategies with and without fairness consideration, both obtained using Algorithm \ref{Alg:1}. For the algorithm with fairness, we set the weighting factor $\omega_x = 3$. We focus on comparing the induced social utility, i.e., the aggregate of the payoffs of the sources and targets and the benefit of fairness in the resource allocation. The results are shown in Fig. \ref{fig:f1}. Fig. \ref{fig:f1_1} indicates that the distributed algorithm (both with and without fairness consideration) converges to the corresponding centralized optimal solution $\pi_{xy}^o$ (i.e., the solution obtained by solving problem \eqref{OT2:eqn} directly). We also observe in Fig. \ref{fig:f1_1} that the algorithm with fairness converges to a higher social utility; the increase is due to the inclusion of fairness in the design of the resource transport scheme. We also note that fairness has little effect on the convergence of the algorithm. Fig. \ref{fig:f1_2} shows the residual of the transport strategy, which measures the difference between the strategy at the current iteration and the centralized optimal solution. We can observe that the residual goes to 0 around $k=50$, which demonstrates the effectiveness of the designed distributed algorithm. After verifying that the algorithm works and increases the social utility, we further verify that it is efficient in computing the transport strategy. In a case with 20 source nodes and 20 target nodes, the designed distributed algorithm takes just over two minutes, which is reasonable given that there are $20\times 20$ connections in the transport scheme, where every source node is connected to every target node.
\begin{figure} \caption{Impact of fairness consideration on the transport strategy design using Algorithm \ref{Alg:1}.} \label{fig:f1_1} \label{fig:f1_2} \label{fig:f1} \end{figure} \subsection{Online Distributed Resource Allocation} Next, we investigate a case study using the continuous, or online, distributed algorithm discussed in Section \ref{subsec:online}. We adopt the same utility, cost and fairness functions as in the previous scenario. In this case, the resource allocation network changes over time, as shown in Fig. \ref{fig:online_net}. Specifically, there are three sources and two targets at $k=0$, and not all of them are connected, i.e., the bipartite graph is incomplete. At step $k=250$, one target node joins the network, and hence the network has three source nodes and three target nodes. At step $k=500$, one source node leaves the network, and hence two source nodes need to satisfy the requests from three target nodes. When the resource allocation network changes, the upper bounds on the amount of transferable resources at the sources and on the amount of resources requested at the targets are also updated, as shown in Fig. \ref{fig:online_net}. The weighting constant on the fairness is chosen as $\omega_x = 3$, $\forall x\in\mathcal{X}$, throughout the case study. Other parameters are summarized in Table \ref{tab:parameters}. The online algorithm addresses the problem continuously without resetting. The results are shown in Fig. \ref{fig:f2}. When the resource transport network changes (at $k=250,\ 500$), the online algorithm responds to these changes quickly by proposing new allocation schemes. The solutions obtained from the online distributed algorithm are consistent with the centralized optimal solutions. Thus, this online distributed algorithm is applicable to resource distribution markets with frequent changes. \begin{figure} \caption{Network structures for the continuous/online resource allocation.
} \label{fig:net_on1} \label{fig:net_on2} \label{fig:net_on3} \label{fig:online_net} \end{figure} \begin{table} \centering \caption{Parameters in the Online Distributed Resource Allocation} \label{tab:parameters} \begin{tabular}{ |c|c|c|c|c| } \hline \multirow{5}{4em}{$k=0$} & $\delta_{11}=2$ & $\delta_{21}=3$ & $\delta_{22}=4$ & $\delta_{32}=4$ \\ & $\sigma_{11}=7$ & $\sigma_{21}=8$ & $\sigma_{22}=6$ & $\sigma_{32}=5$ \\ & $\zeta_{11}=2$ & $\zeta_{21}=4$ & $\zeta_{22}=2$ & $\zeta_{32}=2$ \\ & $\delta_{13}=0$ & $\sigma_{13} = 0$ & $\zeta_{13} = 0 $ & $\delta_{31} = 0$ \\ & $\sigma_{31} = 0$ & $\zeta_{31} = 0$ & &\\ \hline \multirow{7}{4em}{$k=250$} & $\delta_{11}=4$ & $\delta_{12}=2$ & $\delta_{13}=0$ & $\delta_{21}=3$ \\ & $\delta_{22}=5$ & $\delta_{23}=4$ & $\delta_{31}=0$ & $\delta_{32}=6$ \\ & $\delta_{33}=2$ & $\sigma_{11}=8$ & $\sigma_{12}=14$ & $\sigma_{13}=0$ \\ & $\sigma_{21}=7$ & $\sigma_{22}=10$ & $\sigma_{23}=9$ & $\sigma_{31}=0$ \\ & $\sigma_{32}=12$ & $\sigma_{33}=4$ & $\zeta_{11}=2$ & $\zeta_{12}=10$ \\ & $\zeta_{13}=0$ & $\zeta_{21}=4$ & $\zeta_{22}=5$ & $\zeta_{23}=5$ \\ & $\zeta_{31}=0$ & $\zeta_{32}=6$ & $\zeta_{33}=2$ & \\ \hline \multirow{5}{4em}{$k=500$} & $\delta_{11}=3$ & $\delta_{12}=2$ & $\delta_{13}=5$ & $\delta_{22}=3$ \\ & $\delta_{23}=3$ & $\sigma_{11}=5$ & $\sigma_{12}=7$ & $\sigma_{13}=5$ \\ & $\sigma_{22}=7$ & $\sigma_{23}=4$ & $\zeta_{11}=2$ & $\zeta_{12}=2$ \\ & $\zeta_{13}=1$ & $\zeta_{22}=1$ & $\zeta_{23}=2$ & $\delta_{21} = 0$\\ & $\sigma_{21} = 0$ & $\zeta_{21} = 0$ & & \\ \hline \end{tabular} \end{table} \begin{figure} \caption{Adaptive fair and efficient transport strategy design using the online algorithm. The transport network structure and participants' preferences change over time at $k=250,\ 500$.
(a) and (b) depict the trajectories of the social utility and the residual of the transport strategy, respectively.} \label{fig:f2_1} \label{fig:f2_2} \label{fig:f2} \end{figure} \section{Conclusion}\label{sec:conclusion} In this paper, we have investigated the fair and efficient dynamic transport of a limited amount of resources in a network of participants with various preferences. The designed distributed algorithm yields a transport plan identical to the one obtained in a centralized manner, making it applicable to large-scale networks. Fairness is explicitly promoted in the algorithm through bargaining and negotiation between each pair of resource supplier (source) and resource receiver (target). Throughout the negotiation, the sources maximize their revenue, while the targets optimize fairness but also take into account the efficiency of the resource allocation. The algorithm terminates when the two parties reach a consensus. Future work includes the investigation of fair resource transport under incomplete information between the two parties. \end{document}
\begin{document} \title{A lower bound for the sum of the two largest signless Laplacian eigenvalues} \author{Leonardo de Lima\thanks{Department of Production Engineering, Federal Center of Technological Education, Rio de Janeiro, Brazil; \textit{email: [email protected], [email protected]}} \thanks{Research supported by CNPq Grant 305867/2012--1 and FAPERJ 102.218/2013.} \ and$\;$Carla Oliveira\thanks{Department of Mathematics and Statistics, National School of Statistics, Rio de Janeiro RJ, Brazil; \textit{email: [email protected]}} \thanks{Research supported by CNPq Grant 305454/2012--9.} } \date{} \maketitle \begin{abstract} Let $G$ be a graph of order $n \geq 3$ with degree sequence $d_{1}\left( G\right) \geq \ldots \geq d_{n}\left( G\right)$ and let $\mu_1(G),\ldots, \mu_n(G)$ and $q_1(G), \ldots, q_{n}(G)$ be the Laplacian and signless Laplacian eigenvalues of $G$ arranged in non-increasing order, respectively. Here, we consider Grone's inequality [R. Grone, Eigenvalues and degree sequences of graphs, Lin. Multilin. Alg. 39 (1995) 133--136] $$ \sum_{i=1}^{k} \mu_{i}(G) \geq \sum_{i=1}^{k} d_{i}(G)+1$$ and prove that for $k=2$ equality holds if and only if $G$ is the star graph $S_{n}.$ The signless Laplacian version of Grone's inequality is known to be true when $k=1.$ In this paper, we prove that it is also true for $k=2,$ that is, $$q_{1}(G)+q_{2}(G) \geq d_1(G)+d_2(G)+1$$ with equality if and only if $G$ is the star $S_{n}$ or the complete graph $K_{3}.$ For $k \geq 3$, we exhibit a counterexample. \textbf{Keywords: }\emph{signless Laplacian; Laplacian; two largest eigenvalues; degree sequence; lower bound.} \textbf{AMS classification: }\emph{05C50, 05C35} \end{abstract} \section{Introduction and main results} Let $G=(V,E)$ be a finite simple graph on $n$ vertices.
The degree sequence of $G$ is denoted by $d(G) = \left(d_1(G),d_2(G),\ldots,d_n(G) \right)$ with $ d_1(G) \geq d_2 (G)\geq \ldots \geq d_n(G).$ Write $A$ for the adjacency matrix of $G$ and let $D$ be the diagonal matrix of the row-sums of $A,$ i.e., the degrees of $G.$ The matrix $L\left( G\right) =D-A$ is called the \emph{Laplacian} or the $L$-matrix of $G$ and the matrix $Q\left( G\right) =D+A$ is called the \emph{signless Laplacian} or the $Q$-matrix of $G$. As usual, we shall index the eigenvalues of $L\left( G\right) $ and $Q\left( G\right) $ in non-increasing order and denote them by $\mu_{1}(G) \geq \mu_{2}(G) \geq \ldots \geq \mu_{n}(G)$ and $q_{1}(G) \geq q_{2}(G) \geq \ldots \geq q_{n}(G),$ respectively. We denote the following graphs on $n$ vertices: the complete graph $K_{n}$; the star $S_{n}$; and the complete bipartite graph $K_{n_1,n_2}$, where $n_1 \geq n_2$ and $n=n_1+n_2.$ The main result of this paper concerns the sum of the two largest $Q$-eigenvalues of a graph. There are quite a few papers on this subject, and contributions to this area have been made very recently by Ashraf \textit{et al.} \cite{AOT13}, Oliveira \textit{et al.} \cite{OLRC13}, and Li and Tian \cite{LT14}. A useful result for this area was obtained by Schur \cite{Sc23}, as follows: \begin{theorem}[Schur's inequality, \cite{Sc23}] Let $A$ be a real symmetric matrix with eigenvalues $\lambda_{1} \geq \ldots \geq \lambda_{n}$ and diagonal elements $t_{1} \geq \ldots \geq t_{n}.$ Then $ \sum_{i=1}^{k} \lambda_{i} \geq \sum_{i=1}^{k} t_{i} $ for every $k \in \{1,\ldots,n\}.$ \end{theorem} In 1994, Grone and Merris \cite{GM94} proved that $ \mu_{1}(G) \geq d_1(G)+1$ with equality if and only if there exists a vertex of $G$ with degree $n-1.$ Based on Schur's inequality, Grone \cite{G95} proved a more general bound on the Laplacian eigenvalues related to the sum of the vertex degrees: \begin{equation} \sum_{i=1}^{k} \mu_{i} \geq 1 + \sum_{i=1}^{k} d_{i} .
\label{in1} \end{equation} Considering Grone's inequality \cite{G95}, we prove that the extremal graph for the case $k=2$ is the star graph $S_{n}.$ Hence, we state the equality condition for Grone's inequality when $k=2$ in the following theorem. \begin{theorem}\label{th4} Let $G$ be a simple connected graph on $n \geq 3$ vertices. Then $$ \mu_{1}(G)+\mu_{2}(G) \geq d_1(G)+ d_{2}(G) + 1 $$ with equality if and only if $G$ is a star $S_{n}.$ \end{theorem} The signless Laplacian version of Grone's inequality can be stated as follows: \begin{equation} \sum_{i=1}^{k} q_{i}(G) \geq 1+ \sum_{i=1}^{k} d_{i}(G). \label{in2} \end{equation} Motivated by Grone's inequality, we study whether the signless Laplacian version (\ref{in2}) of inequality (\ref{in1}) is true. The case $k=1$ has been proved in the literature (see Lemma \ref{lemmaq1} in Section 2). For $k \geq 3,$ we exhibit a counterexample showing that inequality (\ref{in2}) fails when $G$ is the star graph plus one edge. The main result of this paper proves that inequality (\ref{in2}) is true for $k=2,$ as stated in the next theorem. \begin{theorem}\label{th3} Let $G$ be a simple connected graph on $n \geq 3$ vertices. Then \[ q_{1}\left( G\right) + q_{2}\left( G\right) \geq d_{1}\left( G\right) + d_{2}\left( G\right) + 1. \] Equality holds if and only if $G$ is one of the following graphs: the complete graph $K_{3}$ or the star $S_{n}.$ \end{theorem} The paper is organized as follows: preliminary results are presented in the next section and the main proofs are given in Section \ref{prova}. \section{Preliminary results} Define the graph $S_{n}^{+}$ as the graph obtained from a star $S_{n}$ plus an edge.
We begin by showing that inequality (\ref{in2}) does not hold for $k \geq 3.$ \begin{proposition}\label{pr1} Let $G$ be isomorphic to $S_{n}^{+}.$ For $k \geq 3,$ $$\sum_{i=1}^{k} q_{i}(G) < 1+ \sum_{i=1}^{k} d_{i}(G).$$ \end{proposition} \begin{proof} Let $G$ be isomorphic to $S_{n}^{+}.$ In this case, $d_1(G)=n-1,$ $d_2(G)=d_3(G)=2$ and $d_4(G)=\ldots=d_{n}(G)=1.$ From Oliveira \emph{et al.} \cite{OLRC13}, we have $q_3(G)=\ldots=q_{n-1}(G) = 1$ and also \begin{eqnarray} n <& q_1(G) <& n + \frac{1}{n}, \nonumber \\ 3 - \frac{2.5}{n} <& q_2(G) <& 3 - \frac{1}{n}. \nonumber \end{eqnarray} From Das \cite{Das10}, $q_{n}(G) < d_{n}(G),$ and then we get \begin{eqnarray} 0 \leq& q_{n}(G) <& 1. \nonumber \end{eqnarray} Since, for $k \geq 3,$ \begin{eqnarray} 1+\sum_{i=1}^{k} d_{i}(G) = n+k+1 \nonumber \end{eqnarray} and \begin{eqnarray} \sum_{i=1}^{k} q_{i}(G) < n+k+1, \nonumber \end{eqnarray} the result follows. \end{proof} The following two results are important for our purposes here. \begin{lemma}[\cite{CRS07}]\label{lemmaq1} Let $G$ be a connected graph on $n \geq 4$ vertices. Then $$q_{1}(G) \geq d_{1}(G)+1$$ with equality if and only if $G$ is the star $S_{n}.$ \end{lemma} \begin{lemma}[\cite{Das10}]\label{lemmaq2} Let $G$ be a graph. Then $$q_{2}(G) \geq d_{2}(G) -1.$$ \end{lemma} From Lemmas \ref{lemmaq1} and \ref{lemmaq2}, it is straightforward that $q_{1}(G)+ q_2(G) \geq d_1(G) + d_2(G).$ Here, we improve this lower bound to $d_1(G)+d_2(G)+1,$ and in order to prove it we need to define the class of graphs $\mathcal{H}(p,r,s),$ obtained from $2K_1 \vee \overline{K_{p}}$ by attaching $r$ and $s$ pendant vertices to the vertices $u$ and $v$ of largest and second largest degree, respectively (see Figure 1).
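Both the counterexample of Proposition \ref{pr1} and the family $\mathcal{H}(p,r,s)$ just defined are easy to explore numerically. The following sketch (Python with NumPy; the parameter choices are only illustrative) builds the signless Laplacians of $S_n^+$ and of a member of $\mathcal{H}(p,r,s)$ and checks the claimed inequalities for one instance each:

```python
import numpy as np

def q_eigs(A):
    """Signless Laplacian eigenvalues of the graph with adjacency A, largest first."""
    Q = A + np.diag(A.sum(axis=1))
    return np.sort(np.linalg.eigvalsh(Q))[::-1]

def star_plus_edge(n):
    """S_n^+: star on n vertices (center 0) plus one edge between two leaves."""
    A = np.zeros((n, n))
    A[0, 1:] = A[1:, 0] = 1
    A[1, 2] = A[2, 1] = 1
    return A

def H(p, r, s):
    """H(p, r, s): u = 0 and v = 1 joined to p common neighbours,
    with r pendant vertices at u and s pendant vertices at v."""
    n = 2 + p + r + s
    A = np.zeros((n, n))
    for j in range(2, 2 + p):              # common neighbours of u and v
        A[0, j] = A[j, 0] = A[1, j] = A[j, 1] = 1
    for j in range(2 + p, 2 + p + r):      # pendants at u
        A[0, j] = A[j, 0] = 1
    for j in range(2 + p + r, n):          # pendants at v
        A[1, j] = A[j, 1] = 1
    return A

# Proposition 1 (instance n = 8, k = 3): the sum of the three largest
# q-eigenvalues of S_n^+ falls below 1 + d_1 + d_2 + d_3 = n + 4.
A = star_plus_edge(8)
q, d = q_eigs(A), np.sort(A.sum(axis=1))[::-1]
print(q[:3].sum() < 1 + d[:3].sum())

# Proposition 4 (instance p = 2, r = 3, s = 1): q_2 > d_2 on H(p, r, s).
A = H(2, 3, 1)
q, d = q_eigs(A), np.sort(A.sum(axis=1))[::-1]
print(q[1] > d[1])
```

By Schur's inequality the partial sum for $S_8^+$ is at least $d_1+d_2+d_3 = 11$, so the first check confirms it lies strictly between $11$ and $12$.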
Propositions \ref{pr4} and \ref{pr5} will be useful in the proof of Theorem \ref{th3}; both present a lower bound on $q_2(G)$ within the family $\mathcal{H}(p,r,s).$ \begin{figure} \caption{Graph $\mathcal{H}(p,r,s)$.} \label{fig1} \end{figure} \begin{proposition}\label{pr4} For $p \geq 1$ and $r \geq s \geq 1,$ let $G$ be a graph on $n \geq 3$ vertices isomorphic to $\mathcal{H}(p,r,s).$ Then $$q_2(G) > d_2(G).$$ \end{proposition} \begin{proof} For $p \geq 1$, $r \geq s \geq 1$, consider $G$ as the graph isomorphic to $\mathcal{H}(p,r,s)$. Labeling the vertices in a convenient way, we get \[ Q(G) = \left( \begin{array}{c|c|c|c|c} p+r & 0 & \mathbf{1}_{1 \times p} & \mathbf{1}_{1 \times r} & \mathbf{0}_{1 \times s} \\ \hline 0 & p+s & \mathbf{1}_{1 \times p} & \mathbf{0}_{1 \times r} & \mathbf{1}_{1 \times s} \\ \hline \mathbf{1}_{p \times 1} & \mathbf{1}_{p \times 1} & 2\mathbf{I}_{p \times p} & \mathbf{0}_{p \times r} & \mathbf{0}_{p \times s} \\ \hline \mathbf{1}_{r \times 1} & \mathbf{0}_{r \times 1} & \mathbf{0}_{r \times p} & \mathbf{I}_{r \times r} & \mathbf{0}_{r \times s} \\ \hline \mathbf{0}_{s \times 1} & \mathbf{1}_{s \times 1} & \mathbf{0}_{s \times p} & \mathbf{0}_{s \times r} & \mathbf{I}_{s \times s} \\ \end{array}\right ).
\] Observe that $\mathbf{x}_{j} = e_{3}-e_{j}$, for $j=4,\ldots,p+2,$ are eigenvectors associated with the eigenvalue $2,$ which therefore has multiplicity at least $p-1.$ Also, let us define $\mathbf{y}_{j} = e_{p+3}-e_{j}$ for each $j=p+4,\ldots,p+r+2$ and $\mathbf{z}_{j} = e_{p+r+3}-e_{j}$ for each $j=p+r+4,\ldots,p+r+s+2.$ Observe that $\mathbf{y}_{j}$ and $\mathbf{z}_{j}$ are eigenvectors associated with the eigenvalue $1,$ which therefore has multiplicity at least $r+s-2.$ In all cases, the remaining $5$ eigenvalues are those of the reduced matrix $$M = \left( \begin{array}{ccccc} p+r & 0 & p & r & 0 \\ 0 & p+s & p & 0 & s \\ 1 & 1 & 2 & 0 & 0\\ 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 \\ \end{array} \right) .$$ The characteristic polynomial of $M$ is given by $f(x,p,r,s) = x^{5}-(s+r+2p+4)x^4+((r+p+3)s+(p+3)r+p^2+6p+5)x^{3}+((-2r-2p-2)s+(-2p-2)r-2p^2-6p-2)x^{2}+((s+r)p+p^{2}+2p)x.$ Considering $r=s+k$, where $k \geq 0,$ note that $f(d_2(G),p,s+k,s)=f(p+s,p,s+k,s) = (s+p) (ks^2 + s^2 + 2kps + 2ps - 2ks - 2s + kp^2 - kp) > 0.$ Since $f$ is monic and all of its roots are real and nonnegative, $f(y,p,r,s)<0$ whenever $q_2(G) < y < q_1(G).$ Moreover, by Lemma \ref{lemmaq1}, $d_2(G) \leq d_1(G) < q_1(G).$ So, since $f(d_2(G),p,r,s) > 0,$ we get $q_2(G) > d_{2}(G).$ \end{proof} \begin{proposition}\label{pr5} For $p \geq 1$ and $r \geq 1,$ let $G$ be a graph on $n \geq 3$ vertices isomorphic to $\mathcal{H}(p,r,0).$ Then $$q_2(G) \geq d_2(G).$$ Equality holds if and only if $G=P_4$. \end{proposition} \begin{proof} For $p,r \geq 1,$ consider $G$ as the graph isomorphic to $\mathcal{H}(p,r,0)$. Labeling the vertices in a convenient way, we get \[ Q(G) = \left( \begin{array}{c|c|c|c} p+r & 0 & \mathbf{1}_{1 \times p} & \mathbf{1}_{1 \times r} \\ \hline 0 & p & \mathbf{1}_{1 \times p} & \mathbf{0}_{1 \times r} \\ \hline \mathbf{1}_{p \times 1} & \mathbf{1}_{p \times 1} & 2\mathbf{I}_{p \times p} & \mathbf{0}_{p \times r} \\ \hline \mathbf{1}_{r \times 1} & \mathbf{0}_{r \times 1} & \mathbf{0}_{r \times p} & \mathbf{I}_{r \times r} \\ \end{array}\right ).
\] If $p=r=1$, then $q_2(G) = d_2(G)=2$. If $p \geq 2$, observe that $\mathbf{x}_{j} = e_{3}-e_{j}$, for $j=4,\ldots,p+2,$ are eigenvectors associated with the eigenvalue $2,$ which has multiplicity at least $p-1.$ If $r \geq 2$, the vectors $\mathbf{y}_{j} = e_{p+3}-e_{j}$, for $j=p+4,\ldots,p+r+2,$ are eigenvectors associated with the eigenvalue $1,$ which has multiplicity at least $r-1.$ The remaining $4$ eigenvalues are those of the reduced matrix $$M = \left( \begin{array}{cccc} p+r & 0 & p & r \\ 0 & p & p & 0 \\ 1 & 1 & 2 & 0 \\ 1 & 0 & 0 & 1 \\ \end{array} \right) .$$ The characteristic polynomial of $M$ is given by $f(x,p,r) = x^{4}+(-r-2p-3)x^3+((p+2)r+p^2+4p+2)x^{2}+(-pr-p^2-2p)x.$ Since $f$ is monic and all of its roots are real and nonnegative, $f(y,p,r)<0$ whenever $q_2(G) < y < q_1(G).$ For $p \geq 2$ we have $d_2(G)=p$ and $f(p,p,r) = rp^2 > 0,$ while for $p=1$ and $r \geq 2$ we have $d_2(G)=2$ and $f(2,1,r) = 2(r-1) > 0.$ In either case, $f(d_2(G),p,r)>0$ and, by Lemma \ref{lemmaq1}, $d_2(G) < q_1(G),$ so we have $q_2(G) > d_{2}(G)$. So $q_2(G) \geq d_2(G)$ with equality if and only if $G=P_4$. \end{proof} Let $\mathcal{G}(p,r,s)$ be the graph isomorphic to $\mathcal{H}(p,r,s)$ plus the edge $(u,v),$ see Figure 2. The next proposition shows that Theorem \ref{th3} is true for the family $\mathcal{G}(0,r,s).$ \begin{figure} \caption{Graph $\mathcal{G}(p,r,s)$} \label{fig2} \end{figure} \begin{proposition}\label{pr3} For $r,s \geq 1,$ let $G$ be isomorphic to $\mathcal{G}(0,r,s).$ Then, $$q_1(G) + q_{2}(G) > d_1(G) + d_{2}(G)+1.$$ \end{proposition} \begin{proof} For $r,s \geq 1,$ let $G$ be the graph isomorphic to $\mathcal{G}(0,r,s).$ Labeling the vertices of $G$ in a convenient way, we get \[ Q(G) = \left( \begin{array}{c|c|c|c} r+1 & 1 & \mathbf{1}_{1 \times r} & \mathbf{0}_{1 \times s} \\ \hline 1 & s+1 & \mathbf{0}_{1 \times r} & \mathbf{1}_{1 \times s} \\ \hline \mathbf{1}_{r \times 1} & \mathbf{0}_{r \times 1} & \mathbf{I}_{r \times r} & \mathbf{0}_{r \times s} \\ \hline \mathbf{0}_{s \times 1} & \mathbf{1}_{s \times 1} & \mathbf{0}_{s \times r} & \mathbf{I}_{s \times s} \\ \end{array}\right ).
\] Let us define $\mathbf{y}_{j} = e_{3}-e_{j}$ for each $j=4,\ldots,r+2,$ and $\mathbf{z}_{j} = e_{r+3}-e_{j}$ for each $j=r+4,\ldots,r+s+2.$ Observe that $\mathbf{y}_{j}$ and $\mathbf{z}_{j}$ are eigenvectors associated with the eigenvalue $1,$ which has multiplicity at least $r+s-2.$ The remaining $4$ eigenvalues are those of the reduced matrix $$M = \left( \begin{array}{cccc} r+1 & 1 & r & 0 \\ 1 & s+1 & 0 & s \\ 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1 \\ \end{array} \right) .$$ The characteristic polynomial of $M$ is given by $f(x,r,s) = x^4+(-r-s-4)x^3+((r+2)s +2r+5)x^2 + (-r-s-2)x.$ Since $f$ is monic and all of its roots are real and nonnegative, $f(y,r,s)<0$ whenever $q_2(G) < y < q_1(G).$ Assuming, without loss of generality, that $r \geq s,$ we have $d_2(G)=s+1,$ and so $f(d_2(G),r,s)=s(s+1)(r-s) \geq 0.$ Since, by Lemma \ref{lemmaq1}, $d_2(G) < q_1(G),$ this implies $q_{2}(G) \geq d_{2}(G).$ Moreover, since $G$ is not a star, the equality condition of Lemma \ref{lemmaq1} gives $q_1(G) > d_1(G) + 1,$ and the result follows. \end{proof} Next, in Proposition \ref{pr2}, we present bounds on $q_1(G)$ and $q_2(G)$ when $G$ is isomorphic to $\mathcal{G}(p,r,s)$ for $p \geq 1$, $r \geq s \geq 1.$ \begin{proposition}\label{pr2} For $p \geq 1$, $r \geq s \geq 1,$ let $G$ be a graph on $n \geq 3$ vertices and isomorphic to $\mathcal{G}(p,r,s).$ Then \begin{itemize} \item[(i)] if $p = 1$ and $r=s$, then $q_1(G) > d_1(G) + \frac{3}{2}$ and $q_2(G) > d_2(G) -\frac{1}{2};$ \item[(ii)] if $p \geq 2$ and $r=s$, then $q_1(G) > d_1(G) + 2;$ \item[(iii)] if $p \geq 1$ and $r \geq s+3$, then $q_2(G) > d_2(G);$ \item[(iv)] if $p \geq 1$ and $r \in \{ s+1,s+2 \},$ then $q_1(G) > d_1(G)+1+\frac{p}{n}$ and $q_2(G) > d_2(G)-\frac{p}{n}.$ \end{itemize} \end{proposition} \begin{proof} For $p \geq 1$, $r \geq s \geq 1$, let $G$ be isomorphic to the graph $\mathcal{G}(p,r,s).$ Labeling the vertices in a convenient way, we get \[ Q(G) = \left( \begin{array}{c|c|c|c|c} p+r+1 & 1 & \mathbf{1}_{1 \times p} & \mathbf{1}_{1 \times r} & \mathbf{0}_{1 \times s} \\ \hline 1 & p+s+1 & \mathbf{1}_{1 \times p} & \mathbf{0}_{1 \times r} & \mathbf{1}_{1 \times s} \\ \hline
\mathbf{1}_{p \times 1} & \mathbf{1}_{p \times 1} & 2\mathbf{I}_{p \times p} & \mathbf{0}_{p \times r} & \mathbf{0}_{p \times s} \\ \hline \mathbf{1}_{r \times 1} & \mathbf{0}_{r \times 1} & \mathbf{0}_{r \times p} & \mathbf{I}_{r \times r} & \mathbf{0}_{r \times s} \\ \hline \mathbf{0}_{s \times 1} & \mathbf{1}_{s \times 1} & \mathbf{0}_{s \times p} & \mathbf{0}_{s \times r} & \mathbf{I}_{s \times s} \\ \end{array}\right ). \] Observe that $\mathbf{x}_{j} = e_{3}-e_{j}$, for $j=4,\ldots,p+2,$ are eigenvectors associated with the eigenvalue $2,$ which has multiplicity at least $p-1.$ Also, let us define $\mathbf{y}_{j} = e_{p+3}-e_{j}$ for each $j=p+4,\ldots,p+r+2,$ and $\mathbf{z}_{j} = e_{p+r+3}-e_{j}$ for each $j=p+r+4,\ldots,p+r+s+2.$ Observe that $\mathbf{y}_{j}$ and $\mathbf{z}_{j}$ are eigenvectors associated with the eigenvalue $1,$ which has multiplicity at least $r+s-2.$ The remaining $5$ eigenvalues are those of the reduced matrix $$M = \left( \begin{array}{ccccc} p+r+1 & 1 & p & r & 0 \\ 1 & p+s+1 & p & 0 & s \\ 1 & 1 & 2 & 0 & 0\\ 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 \\ \end{array} \right) .$$ The characteristic polynomial of $M$ is given by $f(x,p,r,s) = x^{5}+(-s-r-2p-6)x^4+((r+p+4)s+(p+4)r+p^2+8p+13)x^{3}+((-2r-2p-5)s+(-2p-5)r-2p^2-14p-12)x^{2}+((p+2)s+(p+2)r+p^{2}+12p+4)x-4p.$ Since all eigenvalues of $Q$ are nonnegative, the roots of $f(x,p,r,s)$ are also nonnegative. As $f(0,p,r,s)=-4p<0$, if we take $q_2(G) < y < q_1(G),$ then $f(y,p,r,s)<0.$ This fact will be used repeatedly in the cases below. The largest and second largest degrees of $G$ are given by $d_{1}(G) = p+r+1$ and $d_2(G) = p+s+1,$ respectively.
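Before treating the cases, we remark that the closed-form evaluations of these characteristic polynomials (here and in Propositions \ref{pr4}, \ref{pr5} and \ref{pr3}) can be double-checked with exact rational arithmetic, by evaluating $\det(xI-M)$ directly for the corresponding quotient matrices. A small Python sketch of such a check (a verification aid only, over sample parameter values):

```python
from fractions import Fraction as F

def det(M):
    # Laplace expansion along the first row; fine for these small matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def char_at(x, M):
    # Evaluates f(x) = det(x*I - M) exactly.
    n = len(M)
    return det([[F(x) * (i == j) - M[i][j] for j in range(n)] for i in range(n)])

# Quotient matrices appearing in Propositions pr4, pr5, pr3 and pr2.
M_H = lambda p, r, s: [[p + r, 0, p, r, 0], [0, p + s, p, 0, s],
                       [1, 1, 2, 0, 0], [1, 0, 0, 1, 0], [0, 1, 0, 0, 1]]
M_H0 = lambda p, r: [[p + r, 0, p, r], [0, p, p, 0], [1, 1, 2, 0], [1, 0, 0, 1]]
M_G0 = lambda r, s: [[r + 1, 1, r, 0], [1, s + 1, 0, s], [1, 0, 1, 0], [0, 1, 0, 1]]
M_G = lambda p, r, s: [[p + r + 1, 1, p, r, 0], [1, p + s + 1, p, 0, s],
                       [1, 1, 2, 0, 0], [1, 0, 0, 1, 0], [0, 1, 0, 0, 1]]

for p in range(1, 4):
    for s in range(1, 4):
        for k in range(4):
            # pr4: f(d_2) = (s+p)(ks^2+s^2+2kps+2ps-2ks-2s+kp^2-kp) > 0.
            rhs = (s + p) * (k*s*s + s*s + 2*k*p*s + 2*p*s - 2*k*s - 2*s + k*p*p - k*p)
            assert char_at(p + s, M_H(p, s + k, s)) == rhs and rhs > 0
        # pr2: the constant term of f is -4p, i.e. det(M) = 4p, for every r, s.
        assert det(M_G(p, s + 1, s)) == 4 * p

for p in range(1, 4):
    for r in range(1, 5):
        # pr5: f(p, p, r) = r p^2.
        assert char_at(p, M_H0(p, r)) == r * p * p

for r in range(1, 5):
    for s in range(1, r + 1):
        # pr3: f(s+1, r, s) = s(s+1)(r-s).
        assert char_at(s + 1, M_G0(r, s)) == s * (s + 1) * (r - s)
```

Evaluating the determinant directly avoids any transcription of the polynomial coefficients, so the check is independent of the displayed formulas.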
Using the characteristic polynomial $f(x,p,r,s),$ we prove the following cases: (i) $p=1$ and $r=s$: observe that $f(d_1(G)+3/2,1,r,r) = -\frac{(6r+25)(4r^2+12r+25)}{32}$ and $f(d_2(G)-1/2,1,r,r) = \frac{(2r-1)(20r^2+12r+5)}{32}.$ For $r = s\geq 1$, we get $f(d_1(G)+3/2,1,r,r) <0 $ and $f(d_2(G)-1/2,1,r,r)>0.$ Since $f(y,1,r,r)<0$ implies $y<q_1(G),$ we obtain $q_1(G) > d_1(G)+3/2.$ Also, since $d_2(G)-1/2 < q_1(G)$ and $f(d_2(G)-1/2,1,r,r)>0,$ the value $d_2(G)-1/2$ cannot lie in $(q_2(G),q_1(G)),$ and hence $q_2(G)>d_2(G)-1/2.$ (ii) $p \geq 2$ and $r=s:$ note that $f(d_1(G)+2,p,r,r)= -(2r+3p+6)(pr-2r+p^{2}+p-2) <0$ for $p \geq 2,$ and $f(y,p,r,r)<0$ implies $y<q_1(G).$ So, we can conclude that $q_1(G) > d_1(G)+2.$ (iii) $p \geq 1$ and $r \geq s+3:$ note that $f(d_2(G),p,s+k,s)=ks^3+(3k-2)ps^2+((3k-5)p^2+(k+2)p-k)s+(k-3)p^3+(k+1)p^2 > 0 $ for $k \geq 3.$ Since, by Lemma \ref{lemmaq1}, $d_2(G)<q_1(G),$ we get $q_2(G) > d_2(G).$ (iv) $p \geq 1$ and $r \in \{ s+1,s+2 \}:$ note that $n=p+r+s+2$.
Considering first $r=s+1$ (so that $n=p+2s+3$), a direct computation shows that $f(d_1(G)+1+p/n,p,s+1,s) <0$ and $f(d_2(G)-p/n,p,s+1,s) >0.$ Using Lemmas \ref{lemmaq1} and \ref{lemmaq2} analogously to the previous cases, we get $q_1(G)> d_1(G)+1+p/n$ and $q_2(G) > d_2(G)-p/n.$ Now, setting $r=s+2$ (so that $n=p+2s+4$), we get $f(d_1(G)+1+p/(p+2s+4),p,s+2,s) <0$ and $f(d_2(G)-p/(p+2s+4),p,s+2,s)>0$ since \begin{eqnarray} & &f(d_1(G)+1+p/(p+2s+4),p,s+2,s) = -(32s^8+(208p+512)s^7+(568p^2+2992p+3456)s^6 + \nonumber \\ & &+ (836p^3+7120p^2+18048p+12800)s^5+(720p^4+8764p^3+36536p^2+59136p+28160)s^4+ \nonumber \\ & &+ (370p^5+5976p^4+36036p^3+98144p^2+113408p+36864)s^3+ \nonumber \\ & &+ (110p^6+2240p^5+18104p^4+72468p^3+145280p^2+126720p+26624)s^2+ \nonumber \\ & &+ (17p^7+420p^6+4326p^5+23576p^4+71008p^3+112000p^2+75776p+8192)s+ \nonumber \\ & &+ p^8+29p^7+368p^6+2616p^5+11024p^4+26960p^3+34944p^2+18432p)/(32s^5+\nonumber \\ & &+ (80p+320)s^4+(80p^2+640p+1280)s^3+\nonumber \\ & &+ (40p^3+480p^2+1920p+2560)s^2+(10p^4+160p^3+960p^2+2560p+2560)s+ \nonumber \\ & &+p^5+20p^4+160p^3+640p^2+1280p+1024) \nonumber \end{eqnarray} and \begin{eqnarray} & & f(d_2(G)-p/(p+2s+4),p,s+2,s) = (64s^8+(352p+640)s^7+(840p^2+3136p+2496)s^6+ \nonumber \\ & &+(1156p^3+6360p^2+11040p+4480)s^5+(1014p^4+7104p^3+18712p^2+19200p+2560)s^4+ \nonumber \\ & &+(581p^5+4814p^4+16220p^3+27048p^2+16640p-3072)s^3 +(211p^6+1998p^5+7832p^4+ \nonumber \\ & &+17128p^3+20400p^2+6144p-5120)s^2+(44p^7+470p^6+2044p^5+5120p^4+8720p^3+ \nonumber \\ & &+8288p^2+512p-2048)s+4p^8+48p^7+228p^6+600p^5 \nonumber \\ & & +1200p^4+2016p^3+1728p^2)/(p+2s+4)^5. \nonumber \end{eqnarray} Using Lemmas \ref{lemmaq1} and \ref{lemmaq2} analogously to the previous cases, we get $q_1(G)> d_1(G)+1+p/n$ and $q_2(G) > d_2(G)-p/n.$ \end{proof} \section{Proofs}\label{prova} In this section, we prove the main results of the paper. \begin{proof} \noindent \textbf{of Theorem \ref{th3}} Let $G$ be a simple connected graph on $n \geq 3$ vertices.
Assume that $u$ and $v$ are the vertices with largest and second largest degrees of $G$, i.e., $d(u) = d_{1}(G)$ and $d(v) = d_{2}(G).$ Take $H$ as a subgraph of $G$ containing $u$ and $v$ isomorphic to $\mathcal{H}(p,r,s)$ or $\mathcal{G}(p,r,s).$ Note that $d_1(G)+d_2(G)=d_1(H)+d_2(H)$ and, from the interlacing theorem (see \cite{HoJo85}), $q_1(G)+q_2(G) \geq q_1(H)+q_2(H).$ Firstly, suppose that $H$ is isomorphic to $\mathcal{H}(p,r,s).$ Since $G$ is connected, the cases $p=0$ with any $r$ and $s$ are not possible. If $p=1$ and $r=s=0,$ then $H = \mathcal{H}(1,0,0)= S_{3}$ and $q_1(H)+q_2(H)=4= d_1(H)+ d_2(H)+1.$ If $p \geq 2$ and $r=s=0,$ then $H = \mathcal{H}(p,0,0)= K_{2,p}$ and $q_1(H) +q_2(H) = 2p+2 > d_1(H) + d_2(H)+1 = 2p+1.$ If $p,r \geq 1$ and $s=0,$ from Proposition \ref{pr5} and Lemma \ref{lemmaq1}, we get $q_1(H)+q_2(H)>d_1(H)+d_2(H)+1.$ Now, if $p \geq 1$ and $r \geq s \geq 1,$ it follows from Proposition \ref{pr4} and Lemma \ref{lemmaq1} that $q_1(H)+q_2(H)>d_1(H)+d_2(H)+1.$ Now, suppose that $H$ is isomorphic to $\mathcal{G}(p,r,s).$ If $p=s=0$ and $r \geq 1,$ then $H = \mathcal{G}(0,r,0) = S_{r+2}$ and $q_1(H)+q_2(H)=r+3 = d_1(H)+d_2(H)+1.$ If $p=0$ and $r,s \geq 1,$ the result follows from Proposition \ref{pr3}. If $p=1$ and $r=s=0,$ then $H$ is the complete graph $K_3$ and $q_1(H)+q_2(H)=5 = d_1(H)+d_2(H)+1.$ If $p \geq 2$ and $r=s=0$, then $H = \mathcal{G}(p,0,0) = K_2 \vee \overline{K_{p}}$, i.e., the complete split graph, and it is well-known that $q_1(H)=(n+2+\sqrt{n^2+4n-12})/2$ and $q_{2}(H)=n-2.$ It is easy to check that for $p \geq 2$, we have $q_1(H)+ q_{2}(H) > d_{1}(H) + d_{2}(H)+1.$ If $p \geq 1, r \geq 1$ and $s \geq 0$, from the interlacing theorem (see \cite{HoJo85}) and the argument for the graph $\mathcal{H}(p,r,s)$, we get $q_1(H)+q_2(H) \geq q_1(\mathcal{H}(p,r,s))+q_2(\mathcal{H}(p,r,s)) > d_1(\mathcal{H}(p,r,s))+d_2(\mathcal{H}(p,r,s))+1$ and the result of the theorem follows.
From the cases above, the equality conditions are restricted to the graphs $K_3$ and $S_n,$ and the result follows. \end{proof} \begin{proof} \noindent \textbf{of Theorem \ref{th4}} Let $G$ be a simple connected graph on $ n\geq 3$ vertices. The inequality $\mu_1(G)+\mu_2(G) \geq d_1(G)+d_2(G)+1$ follows from Grone \cite{G95}. Now, we need to prove the equality case. Assume that $u$ and $v$ are the vertices with largest and second largest degrees of $G$, i.e., $d(u) = d_{1}(G)$ and $d(v) = d_{2}(G).$ Take $H$ as a subgraph of $G$ containing $u$ and $v$ isomorphic to $\mathcal{H}(p,r,s)$ or $\mathcal{G}(p,r,s).$ Note that $d_1(G)+d_2(G)=d_1(H)+d_2(H)$ and, from the interlacing theorem (see \cite{HoJo85}), $\mu_1(G)+\mu_2(G) \geq \mu_1(H)+\mu_2(H).$ Firstly, suppose that $H$ is isomorphic to $\mathcal{H}(p,r,s).$ In this case, $H$ is bipartite and $\mu_{i}(H) = q_{i}(H)$ for $i=1,\ldots,n$ (see \cite{CRS07}). The proof is analogous to that of Theorem \ref{th3} and the equality cases are similar, i.e., when $H = S_{3}.$ Now, suppose that $H$ is isomorphic to $\mathcal{G}(p,r,s).$ If $p=1, r=s=0,$ then $H = \mathcal{G}(1,0,0)=K_3$ and $6 = \mu_1(H)+\mu_2(H) > d_1(H)+d_2(H)+1 = 5.$ If $p \geq 2, r=s=0,$ then $2p+4 = \mu_1(H)+\mu_2(H) > d_1(H)+d_2(H)+1 = 2p+3.$ The remaining cases are similar to those of Theorem \ref{th3}, and equality holds when $G = S_{n}.$ \end{proof} \noindent \textbf{Acknowledgements:} Leonardo de Lima has been supported by CNPq Grant 305867/2012-1 and FAPERJ Grant 102.218/2013 and Carla Oliveira has also been supported by CNPq Grant 305454/2012-9. \end{document}
\begin{document} \title{On Gale's Contribution in Revealed Preference Theory} \begin{abstract} We investigate Gale's important paper published in 1960. This paper contains an example of a candidate of a demand function that satisfies the weak axiom of revealed preference but for which it is doubtful that it is the demand function of some weak order. We examine this paper and first scrutinize what Gale proved. Then we identify a gap in Gale's proof and show that he failed to show that this candidate of the demand function is not a demand function. Next, we present three complete proofs of Gale's claim. First, we construct a proof that was constructible in 1960, using a fact that Gale himself demonstrated. Second, we construct a modern and simple proof using Shephard's lemma. Third, we construct a proof that follows the direction that Gale originally conceived. Our conclusion is as follows: although, in 1960, Gale was not able to prove that the candidate of the demand function that he constructed is not a demand function, he substantially proved it, and therefore it is fair to say that the credit for finding a candidate of the demand function that satisfies the weak axiom but is not a demand function is attributed to Gale. \noindent \textbf{Keywords}: Consumer Theory, Revealed Preference, Weak Axiom, Strong Axiom, Shephard's Lemma, Integrability Theory. \noindent \textbf{JEL codes}: C61, C65, D11. \noindent \textbf{MSC2020 codes}: 91B08, 91B16 \end{abstract} \section{Introduction} Since the weak axiom of revealed preference was introduced by Samuelson (1938), there has been an active debate in consumer theory regarding what type of choice behavior can be expressed as utility maximizing behavior. When a candidate of a demand function is given, what are the conditions for it to behave as if it were the consequence of some weak order maximization problem?
In the 1950s, several researchers argued that the weak axiom of revealed preference is a necessary and sufficient condition for this problem. Although Houthakker (1950) presented the strong axiom of revealed preference and argued that it was the answer to the above question, Houthakker himself did not present any example of a function that obeys the weak axiom but violates the strong axiom, and some economists seemed to believe that the weak axiom of revealed preference is sufficient. Gale (1960) is a landmark paper that offered an answer to this controversy. He presented a candidate of the demand function and showed that, although it satisfies the weak axiom of revealed preference, it is not a consequence of any weak order maximizing behavior. This made it clear that the weak axiom of revealed preference is not sufficient to answer the question in the previous paragraph. Eventually, Richter (1966) rigorously showed that Houthakker's claim is correct, and that settled this issue. However, some economists argue that the argument presented by Gale is incomplete. According to them, Gale only showed that the candidate of the demand function he constructed satisfies the weak axiom of revealed preference, but he did not rigorously show that it does not correspond to any weak order. The primary objective of the present paper is to investigate this claim; that is, to investigate what Gale's contribution is and whether the example Gale constructed really fails to be a demand function. We first scrutinize Gale's constructed candidate of the demand function and his discussion of this function. Gale showed that this candidate satisfies the weak axiom of revealed preference and that the corresponding inverse demand function does not satisfy Jacobi's integrability condition. However, Gale's paper contains no complete proof that this function cannot be represented by weak order maximizing behavior. Thus, the assertion in the previous paragraph is correct.
Next, we present three complete proofs of Gale's claim. The first is a proof using another fact that Gale himself presented in 1960. In addition to the argument above, Gale showed that the constructed candidate of the demand function does not satisfy the strong axiom of revealed preference. As mentioned above, the strong axiom of revealed preference is a necessary and sufficient condition for a function to be a demand function that corresponds to a weak order. This was proved by Richter in 1966; thus, it was not yet proved in 1960. So can Gale's claim be proved without using Richter's result, relying only on what was known in 1960? We answer this question affirmatively. From this point of view, we think that it is fair to say that while the assertions in the two previous paragraphs are indeed correct, the proof of Gale's claim can be easily reconstructed from what Gale himself showed, and thus it is Gale who showed that the weak axiom of revealed preference is insufficient for rationalizability. Second, we provide a modern and sophisticated proof of Gale's claim. The above proof is correct, but Gale's proof that the function does not satisfy the strong axiom of revealed preference is too ingenious to be easily conceived. In this section, we provide a more mechanical, easy, and simple proof using Shephard's lemma. This proof is probably the most standard one at the present time. Third, we prove the above result using Jacobi's integrability condition, as addressed by Gale. Although both of the above proofs are simple and rigorous, they are fundamentally different from the proof scheme that Gale himself would have had in mind. Therefore, we start from the inverse demand function, construct a method for calculating a binary relation, and prove that Jacobi's integrability condition is equivalent to the transitivity of this binary relation.
Because Gale's function can be seen as a consequence of the maximizing behavior of this binary relation, the fact that this binary relation is not a weak order means that this function cannot be rationalized. Although this proof is heavily involved, it is probably the closest to the proof that Gale had in mind. We prove our main claims in the main text, but we have chosen to place some of the proofs in the appendix. In particular, the result of Richter (1966) (Theorem 1), Shephard's lemma (Theorem 3), and the method for constructing a binary relation from an inverse demand function (Theorem 4) are all results whose proofs are too long to present in situ, and thus we place the proofs of these results in the appendix. This paper is organized as follows. In Section 2, we introduce the notation and terminology necessary for this paper. In Section 3, we scrutinize Gale's paper and the result that Gale showed. Section 4 contains the three proofs of Gale's claim listed above. Section 5 is the conclusion. \section{Preliminaries} In this section, we review basic notions of consumer theory. First, we define $\mathbb{R}^M_+=\{x\in \mathbb{R}^M|x_i\ge 0\mbox{ for all }i\}$ and $\mathbb{R}^M_{++}=\{x\in \mathbb{R}^M|x_i>0\mbox{ for all }i\}$. The former is called the {\bf nonnegative orthant} and the latter the {\bf positive orthant} of $\mathbb{R}^M$. If $M=1$, then this symbol is abbreviated, and these sets are simply written as $\mathbb{R}_+$ and $\mathbb{R}_{++}$. For $x,y\in\mathbb{R}^M$, we write $x\ge y$ if and only if $x-y\in \mathbb{R}^M_+$, and $x\gg y$ if and only if $x-y\in\mathbb{R}^M_{++}$. In this paper, $\Omega$ denotes the consumption space, and we assume that $\Omega$ is a subset of $\mathbb{R}^n$. An element $x\in \Omega$ is called a {\bf consumption plan}, and $x_i$ denotes the amount of consumption of the $i$-th commodity.
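In code, these componentwise orders are straightforward to express; a minimal illustration (the helper names `geq` and `gg` are ours, chosen only for this sketch):

```python
def geq(x, y):
    # x >= y: x - y lies in the nonnegative orthant.
    return all(a >= b for a, b in zip(x, y))

def gg(x, y):
    # x >> y: x - y lies in the positive orthant.
    return all(a > b for a, b in zip(x, y))

assert geq((2, 3), (2, 1)) and not gg((2, 3), (2, 1))
assert gg((3, 4), (2, 1)) and geq((3, 4), (2, 1))
```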
Choose any binary relation $R$ on $\Omega$.\footnote{That is, $R\subset \Omega^2$.} We write $xRy$ instead of $(x,y)\in R$. We say that $R$ is\footnote{Note that, reflexiveness, completeness, transitivity, symmetry, asymmetry, and antisymmetry can be defined even when $R$ is a binary relation for some abstract set $X$.} \begin{itemize} \item {\bf reflexive} if $xRx$ for all $x\in \Omega$, \item {\bf complete} if for all $x,y\in \Omega$, either $xRy$ or $yRx$, \item {\bf transitive} if $xRy$ and $yRz$ imply $xRz$, \item {\bf p-transitive} if $\dim(\mbox{span}\{x,y,z\})\le 2$, $xRy$ and $yRz$ imply $xRz$, \item {\bf continuous} if $R$ is closed in $\Omega^2$, \item {\bf symmetric} if $xRy$ implies $yRx$, \item {\bf asymmetric} if $xRy$ implies $\neg (yRx)$, \item {\bf antisymmetric} if $xRy$ and $yRx$ imply $x=y$, \item {\bf monotone} if $x\gg y$ implies $xRy$ and $\neg (yRx)$, \item {\bf convex} if $xRy$, $yRx$ and $t\in [0,1]$ imply $[(1-t)x+ty]Rx$, and \item {\bf strictly convex} if $xRy$, $yRx$, $x\neq y$, and $t\in ]0,1[$ imply $[(1-t)x+ty]Rx$ and $\neg (xR[(1-t)x+ty])$. \end{itemize} We call a complete and transitive binary relation a {\bf weak order}, a transitive and asymmetric binary relation a {\bf strong order}, and a reflexive, transitive, and symmetric binary relation an {\bf equivalence relation}. Suppose that $\succsim$ is a binary relation, and define \[x\succ y\Leftrightarrow x\succsim y\mbox{ and }y\not\succsim x,\] \[x\sim y\Leftrightarrow x\succsim y\mbox{ and }y\succsim x.\] It is known that if $\succsim$ is a weak order, then $\succ$ is a strong order and $\sim$ is an equivalence relation.\footnote{Only transitivity of $\succ$ is not trivial, and thus we show this. Suppose not. Then, there exist $x,y,z\in \Omega$ such that $x\succ y$ and $y\succ z$ but $x\not\succ z$. Because $x\succ y$ and $y\succ z$, $x\succsim y$ and $y\succsim z$, and thus $x\succsim z$. Hence, we must have $z\succsim x$. 
By the transitivity of $\succsim$, we have that $z\succsim y$, which contradicts $y\succ z$.} Choose any binary relation $\succsim$ on $\Omega$. A function $u:\Omega\to \mathbb{R}$ is said to {\bf represent} $\succsim$, or to be a {\bf utility function} of $\succsim$, if \[x\succsim y\Leftrightarrow u(x)\ge u(y).\] It is easy to show that if $\succsim$ is represented by a function, then it is a weak order. Debreu (1954) showed that if $\succsim$ is a continuous weak order and $\Omega$ is connected, then there exists a continuous function that represents $\succsim$. Choose $p\in \mathbb{R}^n_{++}$ and $m>0$. Define \[\Delta(p,m)=\{x\in \Omega|p\cdot x\le m\},\] and for a binary relation $\succsim$ defined on $\Omega$, define \[f^{\succsim}(p,m)=\{x\in \Delta(p,m)|y\not\succ x\mbox{ for all }y\in \Delta(p,m)\}.\] We call this multi-valued function $f^{\succsim}$ a {\bf semi-demand function} corresponding to $\succsim$, and if $\succsim$ is a weak order, then we call $f^{\succsim}$ a {\bf demand function} corresponding to $\succsim$. If $\succsim$ is represented by $u$, then $f^{\succsim}$ is also written as $f^u$. Note that, for given $(p,m)\in \mathbb{R}^n_{++}\times \mathbb{R}_{++}$, $f^u(p,m)$ is the set of solutions to the following maximization problem: \begin{align} \max~~~~~&~u(x)\nonumber \\ \mbox{subject to }&~x\in \Omega,\label{UMP}\\ &~p\cdot x\le m.\nonumber \end{align} We call this problem (\ref{UMP}) the {\bf utility maximization problem} for the utility function $u$. Therefore, the demand function $f^u$ is explained as the (possibly multi-valued) solution function to the utility maximization problem (\ref{UMP}). Suppose that $f:P\to \Omega$, where $P$ is a nonempty cone that is included in the following set: \[\{(p,m)\in \mathbb{R}^n_{++}\times \mathbb{R}_{++}|\Delta(p,m)\neq \emptyset\}.\] We call this function $f$ a {\bf candidate of demand} (CoD) if $f(p,m)\in \Delta(p,m)$ for all $(p,m)\in P$.
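As a concrete illustration of (\ref{UMP}) (a standard textbook example, not taken from Gale's paper), consider the Cobb--Douglas utility $u(x)=x_1^{a}x_2^{1-a}$ on $\Omega=\mathbb{R}^2_+$ with $0<a<1$. The associated demand function is $f(p,m)=(am/p_1,(1-a)m/p_2)$: the consumer spends the fraction $a$ of income on good 1. It is single-valued, homogeneous of degree zero, and satisfies Walras' law, as the sketch below checks with exact arithmetic:

```python
from fractions import Fraction as F

def cobb_douglas_demand(a, p, m):
    # Demand for u(x) = x1^a * x2^(1-a): spend the share a of income on good 1.
    return (a * m / p[0], (1 - a) * m / p[1])

a = F(1, 3)
p, m = (F(2), F(5)), F(30)
x = cobb_douglas_demand(a, p, m)

assert p[0] * x[0] + p[1] * x[1] == m                              # Walras' law
assert cobb_douglas_demand(a, (2 * p[0], 2 * p[1]), 2 * m) == x    # homogeneity of degree 0
```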
If $f=f^{\succsim}$, then we say that $f$ corresponds to $\succsim$ or $\succsim$ corresponds to $f$. Moreover, if $\succsim$ is represented by $u$, then we say that $f$ corresponds to $u$ or $u$ corresponds to $f$. We call a CoD $f$ a {\bf semi-demand function} if it is a semi-demand function corresponding to some binary relation $\succsim$, and a {\bf demand function} if it is a demand function corresponding to some weak order $\succsim$. For a CoD $f$, let $R(f)$ be the range of $f$: that is, \[R(f)=\{x\in \Omega|x=f(p,m)\mbox{ for some }(p,m)\in P\}.\] We say that a CoD $f$ is {\bf homogeneous of degree zero} if \[f(ap,am)=f(p,m)\] for any $a>0$. Note that, because $P$ is a cone, this definition is well-posed. Next, we say that a CoD $f$ satisfies {\bf Walras' law} if \[p\cdot f(p,m)=m\] for any $(p,m)\in P$. Note that, if $f=f^{\succsim}$ for some weak order $\succsim$, then $f$ is automatically homogeneous of degree zero. Moreover, if $\succsim$ is monotone, then $f$ automatically satisfies Walras' law. For a CoD $f$, define a binary relation $\succ_r$ on $\Omega$ such that \[x\succ_ry\Leftrightarrow \exists (p,m)\in P\mbox{ s.t. }f(p,m)=x,\ y\in \Delta(p,m)\setminus\{x\}.\] We call this relation $\succ_r$ the {\bf direct revealed preference relation}, and we say that $f$ satisfies the {\bf weak axiom of revealed preference} (or simply, the {\bf weak axiom}) if $\succ_r$ is asymmetric. Choose any binary relation $R$ on $\Omega$, and define \[R^*=\cap\{\bar{R}\subset \Omega^2|R\subset \bar{R}\mbox{ and }\bar{R}\mbox{ is transitive}\}.\] This binary relation $R^*$ is called the {\bf transitive closure} of $R$. Because the intersection of any family of transitive binary relations is also transitive, $R^*$ is transitive. In the appendix, we show that $xR^*y$ if and only if there exists a finite sequence $x_1,...,x_k$ such that $x_1=x,x_k=y$ and $x_iRx_{i+1}$ for all $i\in \{1,...,k-1\}$.
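For a finite relation, the chain characterization of the transitive closure can be illustrated computationally; a minimal sketch (illustrative only, not tied to any construction in the paper):

```python
def transitive_closure(relation):
    # Iteratively add (x, z) whenever (x, y) and (y, z) are both present.
    closure = set(relation)
    changed = True
    while changed:
        changed = False
        for x, y in list(closure):
            for y2, z in list(closure):
                if y == y2 and (x, z) not in closure:
                    closure.add((x, z))
                    changed = True
    return closure

# The chain x_1 = 1, x_2 = 2, x_3 = 3 witnesses 1 R* 3.
R = {(1, 2), (2, 3)}
R_star = transitive_closure(R)
assert R_star == {(1, 2), (2, 3), (1, 3)}
```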
Specifically, let $\succ_{ir}$ be the transitive closure of $\succ_r$. We call this relation $\succ_{ir}$ the {\bf indirect revealed preference relation}, and we say that $f$ satisfies the {\bf strong axiom of revealed preference} (or simply, the {\bf strong axiom}) if $\succ_{ir}$ is asymmetric. Clearly, the strong axiom implies the weak axiom. Finally, suppose that $f:P\to \Omega$ is a CoD that is differentiable at $(p,m)$. Define \[s_{ij}(p,m)=\frac{\partial f_i}{\partial p_j}(p,m)+\frac{\partial f_i}{\partial m}(p,m)f_j(p,m).\] The $n\times n$ matrix $S_f(p,m)$ whose $(i,j)$-th component is $s_{ij}(p,m)$ is called the {\bf Slutsky matrix}. We sometimes call the matrix-valued function $S_f$ itself the Slutsky matrix. This matrix has the following alternative notation: \[S_f(p,m)=D_pf(p,m)+D_mf(p,m)f^T(p,m),\] where $D_p$ and $D_m$ denote the partial differential operators, and $f^T$ denotes the transpose of $f$.\footnote{Throughout this paper, $A^T$ denotes the transpose of the matrix $A$.} \section{Gale's Example} Gale (1960) presented a CoD that satisfies the weak axiom of revealed preference but for which it is doubtful that it is a demand function. In this section, we introduce Gale's example. Throughout this section, we consider $\Omega=\mathbb{R}^n_+$ and $P=\mathbb{R}^n_{++}\times\mathbb{R}_{++}$, and usually assume that $n=3$. Let \[A=\begin{pmatrix} -3 & 4 & 0\\ 0 & -3 & 4\\ 4 & 0 & -3 \end{pmatrix},\] and define \[h_A(p)=\frac{1}{p^TAp}Ap\] on the set \[C=\{p\in\mathbb{R}^3_{++}|Ap\ge 0\}.\] Note that, if $p\in C$, then $Ap\neq 0$ and $p^TAp>0$, and thus $h_A(p)$ is well-defined. Indeed, if $p\in \mathbb{R}^3_{++}$ and $Ap\le 0$, then \[4p_2\le 3p_1,\ 4p_3\le 3p_2,\ 4p_1\le 3p_3,\] and thus, $64p_1\le 27p_1$, which is a contradiction. Hence, if $p\in C$, then $Ap\ge 0$ and $Ap\neq 0$, which implies that $p^TAp>0$. Now, choose any $p\in \mathbb{R}^3_{++}$, and define $\bar{p}$ as follows. \begin{enumerate}[I)] \item If $p\in C$, then we define $\bar{p}=p$.
\item Suppose that for some $(i,j,k)\in \{(1,2,3),(2,3,1),(3,1,2)\}$, $-3p_i+4p_j\le 0$, and $-3p_j+4p_k\le 0$. By our previous argument, we have that $-3p_k+4p_i>0$. In this case, define $\bar{p}_i=\frac{16}{9}p_k$, $\bar{p}_j=\frac{4}{3}p_k$, and $\bar{p}_k=p_k$. \item Suppose that for some $(i,j,k)\in \{(1,2,3),(2,3,1),(3,1,2)\}$, $-3p_i+4p_j\le 0$, $-3p_j+4p_k\ge 0$, and $-3p_k+4p_i\ge 0$. We separate this case into two subcases. \begin{enumerate}[i)] \item If $16p_j-9p_k\ge 0$, then define $\bar{p}_i=\frac{4}{3}p_j$, $\bar{p}_j=p_j$, and $\bar{p}_k=p_k$. \item If $16p_j-9p_k\le 0$, then define $\bar{p}_i=\frac{4}{3}p_j$, $\bar{p}_j=p_j$, and $\bar{p}_k=\frac{16}{9}p_j$. \end{enumerate} \end{enumerate} In the appendix, we check that $\bar{p}$ is well-defined and the mapping $p\mapsto \bar{p}$ is continuous. Note that, in any case, $\bar{p}\le p$ and $\bar{p}\in C$. Moreover, $\bar{p}$ is homogeneous of degree one with respect to $p$: that is, $\overline{ap}=a\bar{p}$ for all $a>0$. Define \begin{equation}\label{GALE} f^G(p,m)=h_A(\bar{p})m. \end{equation} We call this CoD $f^G$ {\bf Gale's example}. It is easy to show that Gale's example $f^G$ is a continuous CoD that is homogeneous of degree zero and satisfies Walras' law. We show that $f^G$ satisfies the weak axiom. Suppose not. Then, there exist $x,y\in \Omega$ such that $x\succ_ry$ and $y\succ_rx$. Therefore, $x\neq y$, and there exist $(p,m),(q,w)\in P$ such that $x=f^G(p,m),\ p\cdot y\le m$ and $y=f^G(q,w),\ q\cdot x\le w$. Because Gale's example is homogeneous of degree zero, replacing $(p,m),(q,w)$ with $\frac{1}{m}(p,m),\frac{1}{w}(q,w)$, we can assume without loss of generality that $m=w=1$. Moreover, because $\bar{p}\le p$ and $\bar{q}\le q$, we can assume without loss of generality that $p=\bar{p},\ q=\bar{q}$. Then, \[q^TAp\le p^TAp,\ p^TAq\le q^TAq.\] Define \[\lambda=\frac{p^TAp}{q^TAp}.\] By definition, $\lambda\ge 1$. Define $r=\lambda q$.
Then, $(r-p)^TAp=0$, and \[p^TAr=\lambda p^TAq\le \lambda^2p^TAq\le \lambda^2q^TAq=r^TAr,\] which implies that \[0\le (r-p)^TAr=(r-p)^TA(r-p).\] Let $z=r-p$. By the above arguments, \[z^TAz\ge 0,\ z^TAp=0.\] Suppose that $z\neq 0$. Since $Ap\ge 0$, $Ap\neq 0$ and $z^TAp=0$, the coordinates of $z$ can be neither all positive nor all negative; hence, for some $(i,j,k)\in \{(1,2,3), (2,3,1), (3,1,2)\}$, $z_i\ge 0, z_k\le 0$ and $z_i-z_k>0$. If $z_j\ge 0$, then \begin{align*} 0\le z^TAz=&~-3z_i^2-3z_j^2-3z_k^2+4z_iz_j+4z_jz_k+4z_kz_i\\ =&~-(3z_i^2-4z_iz_j+3z_j^2)-3z_k^2+4z_k(z_i+z_j)<0, \end{align*} which is a contradiction. If $z_j\le 0$, then \[0\le z^TAz=-(3z_j^2-4z_jz_k+3z_k^2)-3z_i^2+4z_i(z_j+z_k)<0,\] which is a contradiction. Therefore, we have that $z=0$. This implies that $q$ is proportional to $p$, and thus $x=y$, which is a contradiction. Hence, $f^G$ satisfies the weak axiom. Gale claimed that $f^G$ is not a demand function. The reason he explained is as follows. First, suppose that $f:P\to \Omega$ is a CoD, and $g:\mathbb{R}^n_{++}\to \mathbb{R}^n_{++}$ satisfies \[x=f(g(x),g(x)\cdot x)\] for every $x\in \mathbb{R}^n_{++}$. We call this function $g$ an {\bf inverse demand function} of $f$. Gale claimed that if $f$ is a demand function, then $g(x)$ must satisfy the following {\bf Jacobi's integrability condition}:\footnote{Here, we abbreviate variables to avoid the expression becoming too long. Note that, we say that $g(x)$ satisfies (\ref{JACOBI}) if and only if $g$ satisfies (\ref{JACOBI}) for all $x$ and $i,j,k\in \{1,...,n\}$, and $g(x)$ violates (\ref{JACOBI}) if there exist $x$ and $i,j,k\in \{1,...,n\}$ such that (\ref{JACOBI}) is not satisfied.} \begin{equation}\label{JACOBI} g_i\left(\frac{\partial g_j}{\partial x_k}-\frac{\partial g_k}{\partial x_j}\right)+g_j\left(\frac{\partial g_k}{\partial x_i}-\frac{\partial g_i}{\partial x_k}\right)+g_k\left(\frac{\partial g_i}{\partial x_j}-\frac{\partial g_j}{\partial x_i}\right)=0.
\end{equation} We define \[B\equiv A^{-1}=\frac{1}{37}\begin{pmatrix} 9 & 12 & 16\\ 16 & 9 & 12\\ 12 & 16 & 9 \end{pmatrix}.\] Then, $Bx\ge 0$ for all $x\ge 0$, and $Bx\gg 0,\ x^TBx>0$ if $x\neq 0$. Define \begin{equation}\label{GALE2} g(x)=\frac{1}{x^TBx}Bx. \end{equation} Then, for all $x\in \mathbb{R}^3_{++}$, \[h_A(g(x))=\frac{(x^TBx)^2}{x^TB^TABx}\frac{1}{x^TBx}ABx=x,\] which implies that $g(x)\in C$, and thus \[f^G(g(x),g(x)\cdot x)=x.\] However, this $g(x)$ violates (\ref{JACOBI}), and thus, if Gale's claim is correct, then $f^G$ is not a demand function.\footnote{We verify the fact that $g(x)$ violates (\ref{JACOBI}) in the appendix. See Subsection A.5.} We point out the problem included in Gale's explanation. First, Gale's explanation is deeply related to the following theorem. \noindent {\bf Frobenius' Theorem}. Suppose that $U\subset \mathbb{R}^n$ and $g:U\to \mathbb{R}^n\setminus\{0\}$ is $C^1$. Then, the following two statements are equivalent. \begin{enumerate}[i.] \item $g(x)$ satisfies (\ref{JACOBI}). \item For every $x\in U$, there exist a neighborhood $V\subset U$ of $x$ and a pair of a $C^1$ function $u:V\to \mathbb{R}$ and a positive continuous function $\lambda:V\to \mathbb{R}$ such that \begin{equation}\label{FROBENIUS} \nabla u(y)=\lambda(y)g(y) \end{equation} for all $y\in V$. \end{enumerate} For a proof of this theorem, see Theorem 2 of Hosoya (2021a) and Theorem 10.9.4 of Dieudonne (1969). If $f^G=f^u$ for some $C^1$ nondegenerate function $u$,\footnote{A $C^1$ function $u$ is said to be {\bf nondegenerate} if $\nabla u(x)\neq 0$ for all $x$.} then by Lagrange's multiplier rule, (\ref{FROBENIUS}) must hold, which implies that $g(x)$ satisfies (\ref{JACOBI}). As we have checked, $g(x)$ actually violates (\ref{JACOBI}). By the contrapositive of the above argument, we have that there is no $C^1$ nondegenerate function $u$ such that $f^G=f^u$. 
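These algebraic facts are easy to check numerically. The following sketch is ours, not Gale's: the explicit matrix $A$ is an assumption recovered by inverting the matrix $B$ displayed above, and the partial derivatives in (\ref{JACOBI}) are approximated by central differences. It confirms that $h_A(g(x))=x$ and that the cyclic sum in (\ref{JACOBI}) is nonzero at $x=(1,1,1)$.

```python
import numpy as np

# A is an assumption recovered by inverting the matrix B given in the text;
# one can check that A B = I for these entries.
A = np.array([[-3.0,  4.0,  0.0],
              [ 0.0, -3.0,  4.0],
              [ 4.0,  0.0, -3.0]])
B = (1 / 37) * np.array([[ 9.0, 12.0, 16.0],
                         [16.0,  9.0, 12.0],
                         [12.0, 16.0,  9.0]])
assert np.allclose(A @ B, np.eye(3))

def h_A(p):
    # h_A(p) = Ap / (p^T A p)
    return (A @ p) / (p @ A @ p)

def g(x):
    # inverse demand candidate (GALE2): g(x) = Bx / (x^T B x)
    return (B @ x) / (x @ B @ x)

# h_A(g(x)) = x for all x >> 0, hence f^G(g(x), g(x).x) = x
x = np.array([1.3, 0.7, 2.1])
assert np.allclose(h_A(g(x)), x)

def jacobi(gfun, x, i, j, k, h=1e-6):
    # cyclic sum g_i (d g_j/d x_k - d g_k/d x_j) + ... from (JACOBI),
    # with partial derivatives approximated by central differences
    def d(a, b):  # d g_a / d x_b
        e = np.zeros(3)
        e[b] = h
        return (gfun(x + e)[a] - gfun(x - e)[a]) / (2 * h)
    gx = gfun(x)
    return (gx[i] * (d(j, k) - d(k, j))
            + gx[j] * (d(k, i) - d(i, k))
            + gx[k] * (d(i, j) - d(j, i)))

# at x = (1,1,1) the cyclic sum equals -4/111, so (JACOBI) fails
val = jacobi(g, np.ones(3), 0, 1, 2)
print(val)  # approximately -0.036
```

Since the exact value $-4/111$ is bounded away from zero, the discretization error of the central differences cannot explain it away.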
From this fact, Gale stated that $f^G$ is not a demand function.\footnote{Note that Gale's construction of this example is not guesswork. Actually, Gale defined $h_A(p)=\frac{1}{p^TAp}Ap$ for an arbitrary $3\times 3$ matrix $A$, and proved that $g(x)$ defined by (\ref{GALE2}) satisfies (\ref{JACOBI}) if and only if $A$ is symmetric. See his Lemma. He then tried to construct a matrix $A$ that is asymmetric but such that $f(p,m)=h_A(p)m$ satisfies the weak axiom, and obtained the above matrix $A$.} However, this does not mean that there is no weak order $\succsim$ such that $f^G=f^{\succsim}$, because there exists a weak order that cannot be represented by a utility function.\footnote{In fact, Gale stated that ``$f^G$ is not a demand function corresponds to some utility function $u$.'' However, even this claim is not obvious if $u$ is not differentiable.} Therefore, we cannot determine whether $f^G$ is a demand function by Gale's explanation alone. With this consideration, we believe that Gale only showed that $f^G$ is not a demand function corresponding to some $C^1$ nondegenerate utility function $u$. In the next section, we consider this problem and treat three approaches to it. First, we use another of Gale's contributions to show that $f^G$ is not a demand function. Second, we construct a proof that $f^G$ is not a demand function using Shephard's lemma. Third, we use Hosoya's (2013) integrability result to show that the restriction of $f^G$ to $(f^G)^{-1}(\mathbb{R}^3_{++})$ is actually not a demand function but a semi-demand function corresponding to a complete, p-transitive, and continuous preference relation, and construct an alternative proof that $f^G$ is not a demand function. \section{Three Proofs of Gale's Claim} \subsection{First Proof: Direct Method} In this subsection, we prove that $f^G$ is not a demand function using a fact that Gale found. Gale considered the following four price vectors and consumption vectors.
\[p^1=(9,16,12),\ x^1=f^G(p^1,9)=(1,0,0),\] \[p^2=(340,440,330),\ x^2=f^G(p^2,303)=(0.6,0,0.3),\] \[p^3=(410,400,300),\ x^3=f^G(p^3,303)=(0.3,0,0.6),\] \[p^4=(16,12,9),\ x^4=f^G(p^4,9)=(0,0,1).\] We can easily check that \[p^1\cdot x^2=9,\ p^2\cdot x^3=300,\ p^3\cdot x^4=300,\] and thus \[x^1\succ_rx^2,\ x^2\succ_rx^3,\ x^3\succ_rx^4.\] Because $\succ_r\subset \succ_{ir}$, \[x^1\succ_{ir}x^2,\ x^2\succ_{ir}x^3,\ x^3\succ_{ir}x^4,\] and because $\succ_{ir}$ is transitive, we conclude that \[(1,0,0)\succ_{ir}(0,0,1).\] Gale stated that, by almost the same arguments, one can show that $(0,0,1)\succ_{ir}(0,1,0)$ and $(0,1,0)\succ_{ir}(1,0,0)$, which means that $\succ_{ir}$ is not asymmetric and $f^G$ violates the strong axiom. Actually, his claim is correct. Define \[p^5=(440,330,340),\ x^5=f^G(p^5,303)=(0,0.3,0.6),\] \[p^6=(400,300,410),\ x^6=f^G(p^6,303)=(0,0.6,0.3),\] \[p^7=(12,9,16),\ x^7=f^G(p^7,9)=(0,1,0),\] \[p^8=(330,340,440),\ x^8=f^G(p^8,303)=(0.3,0.6,0),\] \[p^9=(300,410,400),\ x^9=f^G(p^9,303)=(0.6,0.3,0),\] \[x^{10}=x^1=(1,0,0).\] Then, we can easily check that $x^i\succ_rx^{i+1}$ for $i\in \{1,...,9\}$, and thus \[(0,0,1)\succ_{ir}(1,0,0),\] which implies that the strong axiom does not hold. Gale claimed that because $f^G$ is not a demand function, it must violate the strong axiom by Houthakker's (1950) theorem, and Gale verified this result by constructing the above sequence. However, we claim that the converse relationship is more important. Suppose that $f^G=f^{\succsim}$ for a weak order $\succsim$, and $x\succ_ry$. By the definition of $f^{\succsim}$, we have that $x\succ y$. Because $\succ$ is transitive, \[x^1\succ x^4,\ x^4\succ x^{10}.\] However, because $x^{10}=x^1$, this contradicts the asymmetry of $\succ$. This implies that such a weak order $\succsim$ does not exist, and thus $f^G$ is not a demand function. This is one approach to proving that Gale's example is not a demand function.
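Gale's cycle can also be verified mechanically. In the sketch below, the explicit matrix $A$ is again our reconstruction from $B=A^{-1}$; each price vector listed above lies in the cone $C$ (so $\bar p=p$), and hence $f^G(p,m)=\frac{m}{p^TAp}Ap$ on these prices.

```python
import numpy as np

# reconstruction of A from B = A^{-1} in the text (an assumption)
A = np.array([[-3.0,  4.0,  0.0],
              [ 0.0, -3.0,  4.0],
              [ 4.0,  0.0, -3.0]])

def f(p, m):
    # valid on the cone C = {p : Ap >= 0}, where p-bar = p
    p = np.asarray(p, dtype=float)
    assert np.all(A @ p >= -1e-9), "p must lie in C"
    return m * (A @ p) / (p @ A @ p)

data = [((9, 16, 12), 9), ((340, 440, 330), 303), ((410, 400, 300), 303),
        ((16, 12, 9), 9), ((440, 330, 340), 303), ((400, 300, 410), 303),
        ((12, 9, 16), 9), ((330, 340, 440), 303), ((300, 410, 400), 303)]
xs = [f(p, m) for p, m in data]
xs.append(xs[0])  # x^10 = x^1

assert np.allclose(xs[0], (1, 0, 0)) and np.allclose(xs[3], (0, 0, 1))
for i in range(9):
    p, m = data[i]
    # x^{i+1} was affordable at (p^i, m^i) yet x^i was chosen: x^i >_r x^{i+1}
    assert np.dot(p, xs[i + 1]) <= m + 1e-9
    assert not np.allclose(xs[i], xs[i + 1])
print("revealed-preference cycle x^1 >_r ... >_r x^10 = x^1 confirmed")
```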
In this context, Richter (1966) presented the following elegant theorem. \noindent {\bf Theorem 1}. Let $P=\{(p,m)\in \mathbb{R}^n_{++}\times \mathbb{R}_{++}|\Delta(p,m)\neq \emptyset\}$, and suppose that $f:P\to \Omega$ is a CoD. Then, $f$ is a demand function if and only if $f$ satisfies the strong axiom of revealed preference. We present the proof of this theorem in the appendix. We make several remarks on the above arguments. First, some readers might think that using Houthakker's result, Gale actually proved that $f^G$ is not a demand function. However, we believe this consideration is wrong. Houthakker's paper itself contains ambiguities characteristic of older economics papers, and it is debatable as to what his theorem is. As far as we can see, Houthakker's paper seems to calculate a so-called indifference hypersurface, and in this argument, he treated some kind of differential equation. Therefore, we believe that Houthakker's paper, like Gale's, implicitly assumed the differentiability of $u$. If this consideration is correct, then Houthakker's theorem cannot be used to show $f^G$ is not a demand function. However, even if our consideration is wrong, it is reasonable to assume that Gale could not show his result using Houthakker's theorem. Gale stated that ``Since the demand function in our example does not come from a preference relation it follows from Houthakker's theorem that it must fail to satisfy the Strong Axiom.'' A straightforward understanding of this statement suggests that Gale considered Houthakker's theorem to be the assertion that ``if a CoD is not a demand function, then it does not satisfy the strong axiom.'' And, as we saw above, the claim needed to complete the proof of Gale's result is the opposite of this, namely, that ``if the CoD does not satisfy the strong axiom, then it is not a demand function.'' Theorem 1 resolves this problem perfectly, because there is no vague claim. However, Richter's paper was published in 1966. 
Therefore, in 1960, this result was not known, and thus Gale could not apply it to $f^G$. By the way, Gale mentioned Uzawa's (1959) result. His result is as follows. \noindent {\bf Theorem 2}. Suppose that $\Omega=\mathbb{R}^n_+$, $P=\mathbb{R}^n_{++}\times\mathbb{R}_{++}$, and a CoD $f$ satisfies the following assumptions. \begin{enumerate}[I)] \item $R(f)=\Omega\setminus\{0\}$. \item $f$ satisfies Walras' law. \item For all $p\in \mathbb{R}^n_{++}$, there exist $\varepsilon>0$ and $L>0$ such that if $\|q-p\|<\varepsilon$ and $m,w>0$, then $\|f(q,m)-f(q,w)\|\le L|m-w|$.\footnote{Uzawa originally stated this condition as follows: ``for all $(p,m)\in P$, there exist $\varepsilon>0$ and $L>0$ such that if $\|q-p\|<\varepsilon$ and $|w_i-m|<\varepsilon$ for $i\in \{1,2\}$, then $\|f(q,w_1)-f(q,w_2)\|\le L|w_1-w_2|$''. However, to the best of our understanding, Uzawa's condition is too weak to prove this theorem and there is a gap, unless our strengthened condition is used.} \item $f$ satisfies the strong axiom of revealed preference. \end{enumerate} Define \[\succsim=\{(x,y)\in \Omega^2|y\not\succ_{ir}x\}.\] Then, $\succsim$ is a continuous weak order and $f=f^{\succsim}$. We omit the proof of this theorem. Note that Gale's example $f^G$ satisfies I)-III) of this theorem. Therefore, in 1960, Gale knew that if $f^G$ satisfies the strong axiom, then $f^G$ is a demand function. However, as we noted above, the converse relationship is important for proving the absence of a weak order $\succsim$ such that $f^G=f^{\succsim}$. Therefore, this result also cannot be used to prove that $f^G$ is not a demand function. In the 21st century, we can easily show that Gale's example is not a demand function using Theorem 1. In 1960, proving this result was not easy. However, at least this result can be shown using Gale's sequence $x^1,...,x^{10}$, and the fact that $\succ_r\subset \succ$. We think that Gale could have constructed this logic even in 1960.
In conclusion, we can say the following. First, there is a gap in Gale's proof that Gale's example is not a demand function. Second, Gale would have been able to prove this in a different manner, even in 1960. Therefore, it is fair to say that the credit for finding a CoD that satisfies the weak axiom but is not a demand function belongs to Gale. \subsection{Second Proof: Using Shephard's Lemma} In the previous subsection, the sequence $x^1,...,x^{10}$ was given. The construction of such a sequence is not easy in general, and it could probably be found only because Gale was a genius.\footnote{Gale himself stated that finding this sequence was a ``considerable labour''.} However, we and almost all readers are not geniuses, and thus finding such a sequence is an incredibly hard task. Hence, we want to find another proof that can be constructed by ordinary economists. One way to find such a proof is to use Shephard's lemma and the Slutsky matrix. To introduce Shephard's lemma, we first consider the following minimization problem. \begin{align} \min~~~~~&~p\cdot y,\nonumber \\ \mbox{subject to }&~y\in \Omega,\label{EMP}\\ &~u(y)\ge u(x).\nonumber \end{align} This problem (\ref{EMP}) is the dual problem for the utility maximization problem (\ref{UMP}). The problem (\ref{EMP}) is called the {\bf expenditure minimization problem}. We define the value of this problem as $E^x(p)$, and call the function $E^x$ the {\bf expenditure function}. It is useful to consider another definition of the expenditure function. Suppose that a weak order $\succsim$ is given. Then, the expenditure function $E^x$ is defined as follows. \begin{equation} E^x(p)=\inf\{p\cdot y|y\succsim x\}.\label{EX} \end{equation} Note that, if $\succsim$ is represented by $u$, then $E^x(p)$ becomes the value of the problem (\ref{EMP}), and thus these two definitions of the function $E^x$ coincide. The following theorem was shown in Lemma 1 of Hosoya (2020). \noindent {\bf Theorem 3}.
Suppose that either $\Omega=\mathbb{R}^n_+$ or $\Omega=\mathbb{R}^n_{++}$, $P=\mathbb{R}^n_{++}\times \mathbb{R}_{++}$, and $f:P\to \Omega$ is a continuous demand function. Suppose also that $\succsim$ is a weak order such that $f=f^{\succsim}$, and define $E^x$ by (\ref{EX}). Then, $E^x:\mathbb{R}^n_{++}\to \mathbb{R}_+$ is concave and continuous. Moreover, if $f$ satisfies Walras' law and $x\in R(f)$, then the following facts hold. \begin{enumerate}[1)] \item $E^x(p)>0$ for all $p\in \mathbb{R}^n_{++}$. \item If $x=f(p^*,m^*)$, then $E^x(p^*)=m^*$. \item For every $p\in \mathbb{R}^n_{++}$, the following equality holds. \begin{equation}\label{Shephard} \nabla E^x(p)=f(p,E^x(p)). \end{equation} \end{enumerate} We provide the proof of this result in the appendix. The equation (\ref{Shephard}) is famous and called {\bf Shephard's lemma}. Note that, because $f$ is continuous, we have that $E^x$ is $C^1$. If $f$ is continuously differentiable at $(p^*,m^*)$, then by 2) and 3), we can easily verify that $E^x$ is twice continuously differentiable at $p^*$, and \[D^2E^x(p^*)=S_f(p^*,m^*),\] where $D^2E^x(p^*)$ denotes the Hessian matrix of $E^x$ at $p^*$. By Young's theorem, $D^2E^x(p^*)$ is symmetric, and thus we conclude that $S_f(p^*,m^*)$ is also symmetric. Consider applying this result to Gale's example $f^G$. It is easy to show that $R(f^G)$ includes $\mathbb{R}^3_{++}$, and if $(p,m)\in (f^G)^{-1}(\mathbb{R}^3_{++})$, then $f^G$ is continuously differentiable at this point. Hence, choose $p^*=(1,1,1),\ m^*=3$, and $x=(1,1,1)$. Then, $x=f^G(p^*,m^*)$. If $f^G$ is a demand function, then by the above consideration, \[S_f(p^*,m^*)=D^2E^x(p^*)\] is symmetric. However, \[s_{12}(p^*,m^*)=\frac{11}{3}\neq -\frac{1}{3}=s_{21}(p^*,m^*),\] and thus $f^G$ is not a demand function. This method of proof does not require the inspiration of a genius, and shows in the shortest way that Gale's example is not a demand function.
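The two Slutsky terms can be recovered by finite differences as an independent check. In the sketch below, the explicit $A$ is our reconstruction from $B=A^{-1}$ in the text, and near $p^*=(1,1,1)$ we have $\bar p=p$, so $f^G(p,m)=\frac{m}{p^TAp}Ap$.

```python
import numpy as np

# reconstruction of A from B = A^{-1} in the text (an assumption)
A = np.array([[-3.0,  4.0,  0.0],
              [ 0.0, -3.0,  4.0],
              [ 4.0,  0.0, -3.0]])

def f(p, m):
    return m * (A @ p) / (p @ A @ p)

def slutsky(p, m, h=1e-6):
    # s_ij = df_i/dp_j + f_j * df_i/dm, via central differences
    p = np.asarray(p, dtype=float)
    fm = (f(p, m + h) - f(p, m - h)) / (2 * h)   # income derivatives
    x = f(p, m)
    S = np.empty((3, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        S[:, j] = (f(p + e, m) - f(p - e, m)) / (2 * h) + x[j] * fm
    return S

S = slutsky((1.0, 1.0, 1.0), 3.0)
print(S[0, 1], S[1, 0])  # 11/3 and -1/3: the matrix is not symmetric
```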
To construct this proof, we need only Theorem 3 and a simple calculation for $s_{ij}$. \subsection{Third Proof: On the ``Open Cycle'' Theoretic Approach} In this subsection, we investigate another aspect of Gale's example. In previous subsections, we proved that Gale's example is not a demand function. Consider the restriction $\tilde{f}$ of Gale's example to the inverse image of the positive orthant. Then, we can construct a complete, p-transitive, and continuous binary relation $\succsim$ such that $\tilde{f}=f^{\succsim}$. Thus, the restriction of Gale's example is in fact a semi-demand function. Moreover, our construction method for $\succsim$ has two features. First, for our construction method, $\tilde{f}$ is a demand function if and only if our $\succsim$ is transitive. Second, (\ref{JACOBI}) for an inverse demand function is equivalent to the transitivity of $\succsim$. Hence, we can conclude that Gale's example is not a demand function because $g(x)$ defined in (\ref{GALE2}) does not satisfy (\ref{JACOBI}). This explanation seems to be very close to Gale's original explanation, and thus we think that this is the form of proof that Gale originally intended to present. Furthermore, this proof is deeply related to the mysterious ``open cycle theory'' due to Pareto in 1906. We start from recalling the notion of the {\bf inverse demand function}. Choose any function $g:\mathbb{R}^n_{++}\to \mathbb{R}^n_{++}$. This function $g$ is called an inverse demand function of a CoD $f:P\to \Omega$ if and only if $x=f(g(x),g(x)\cdot x)$ for all $x\in \mathbb{R}^n_{++}$. If $f=f^u$ for some continuous utility function $u:\Omega\to \mathbb{R}$ that is increasing, $C^1$, and nondegenerate on $\mathbb{R}^n_{++}$,\footnote{A function $u$ is said to be {\bf increasing} if $x\gg y$ implies $u(x)>u(y)$. 
Note that, if $f=f^u$ for some increasing function $u$, then $f$ must satisfy Walras' law.} then by Lagrange's multiplier rule, $g$ is an inverse demand function of $f$ if and only if \[\nabla u(x)=\lambda(x)g(x)\] for some $\lambda(x)>0$. Define $\tilde{\Omega}=\mathbb{R}^n_{++}$. Let $g:\tilde{\Omega}\to \mathbb{R}^n_{++}$ be a given $C^1$ function. Choose any $(x,v)\in \tilde{\Omega}^2$, and consider the following ordinary differential equation (ODE): \begin{equation}\label{IND} \dot{y}(t)=(g(y(t))\cdot x)v-(g(y(t))\cdot v)x,\ y(0)=x. \end{equation} Let $y(t;x,v)$ denote the nonextendable solution for the above parametrized ODE. Define \[w^*=(v\cdot x)v-(v\cdot v)x,\] \[t(x,v)=\inf\{t\ge 0|y(t)\cdot w^*\ge 0\}.\] In the appendix, we show that if $x$ is not proportional to $v$, then $t(x,v)$ is well-defined and positive, and $y(t;x,v)$ is proportional to $v$ if and only if $t=t(x,v)$. Define \[u^g(x,v)=\frac{\|y(t(x,v);x,v)\|}{\|v\|},\] \[\succsim^g=(u^g)^{-1}([1,+\infty[).\] The following result is a slight modification of Theorem 1 of Hosoya (2013). \noindent {\bf Theorem 4}. Suppose that $k\ge 1$ and $g:\tilde{\Omega}\to \mathbb{R}^n_{++}$ is a $C^k$ function. Then, the following results hold.\footnote{Throughout this paper, $Dg(x)$ denotes the Jacobian matrix of $g$ at $x$.} \begin{enumerate}[I)] \item $u^g(x,v)$ is a well-defined continuous function on $\tilde{\Omega}^2$ and $\succsim^g$ is a complete, p-transitive, continuous, and monotone binary relation on $\tilde{\Omega}$. \item $x\in f^{\succsim^g}(g(x),g(x)\cdot x)$ for all $x\in \tilde{\Omega}$ if and only if $w^TDg(x)w\le 0$ for all $x\in \tilde{\Omega}$ and $w\in \mathbb{R}^n$ such that $w\cdot g(x)=0$. Moreover, if $w^TDg(x)w<0$ for all $x\in \tilde{\Omega}$ and $w\in \mathbb{R}^n$ such that $w\neq 0$ and $w\cdot g(x)=0$, then $f^{\succsim^g}(p,m)$ contains at most one element, and if $f^{\succsim^g}(p,m)\neq \emptyset$, then $p$ is proportional to $g(x)$ for $x=f^{\succsim^g}(p,m)$. 
\item Consider a CoD $f:P\to \tilde{\Omega}$ such that $g$ is an inverse demand function of $f$, and suppose that $f=f^{\succsim^g}$. Then, the following statements are equivalent. \begin{enumerate}[i)] \item $f$ is a demand function. \item $\succsim^g$ is transitive. \item For every $v\in \tilde{\Omega}$, $u^g_v:x\mapsto u^g(x,v)$ is $C^k$ and there exists $\lambda:\tilde{\Omega}\to \mathbb{R}$ such that $\nabla u^g_v(x)=\lambda(x)g(x)$ for all $x\in \tilde{\Omega}$. \item For every $v\in \tilde{\Omega}$, $u^g_v:x\mapsto u^g(x,v)$ is $C^k$ and represents $\succsim^g$. \item $g(x)$ satisfies (\ref{JACOBI}). \end{enumerate} \end{enumerate} We provide the proof of this theorem in the appendix. Applying this theorem to Gale's example, we can present the following arguments. First, recall that in Gale's example, the consumption set $\Omega$ is $\mathbb{R}^3_+$ and the domain $P=\mathbb{R}^3_{++}\times\mathbb{R}_{++}$. Let $\tilde{\Omega}=\mathbb{R}^3_{++}$ and $\tilde{P}=(f^G)^{-1}(\tilde{\Omega})$. In the appendix, we check that $\tilde{P}=\{(mg(x),m)|x\in \tilde{\Omega},m>0\}$, where $g(x)$ is given by (\ref{GALE2}).\footnote{See Subsection A.6.} Define $\tilde{f}:\tilde{P}\to \tilde{\Omega}$ as the restriction of $f^G$ to $\tilde{P}$. Then, $\tilde{f}$ is also a CoD, where the consumption set is $\tilde{\Omega}$ and the domain is $\tilde{P}$. Second, suppose that $f^G$ is a demand function. Then, $f^G=f^{\succsim}$ for some weak order $\succsim$ on $\Omega$. Let $\succsim'=\succsim\cap (\tilde{\Omega})^2$: that is, $\succsim'$ is the restriction of $\succsim$ to $\tilde{\Omega}$. Then, $\succsim'$ is a weak order on $\tilde{\Omega}$ and $\tilde{f}=f^{\succsim'}$, which implies that $\tilde{f}$ is a demand function. Third, recall the function $g$ defined in (\ref{GALE2}). We found that \[x=f^G(g(x),g(x)\cdot x)=\tilde{f}(g(x),g(x)\cdot x)\] for all $x\in \tilde{\Omega}$, which implies that $g(x)$ is an inverse demand function of $\tilde{f}$. 
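Before applying Theorem 4, the negative-definiteness condition in II) can be probed numerically. The following Monte-Carlo sketch is ours (with the Jacobian of $g$ approximated by central differences): it samples points $x\in\mathbb{R}^3_{++}$ and directions $w$ with $w\cdot g(x)=0$, and checks $w^TDg(x)w<0$; the appendix proof, of course, covers all such $(x,w)$.

```python
import numpy as np

B = (1 / 37) * np.array([[ 9.0, 12.0, 16.0],
                         [16.0,  9.0, 12.0],
                         [12.0, 16.0,  9.0]])

def g(x):
    # (GALE2): g(x) = Bx / (x^T B x)
    return (B @ x) / (x @ B @ x)

def Dg(x, h=1e-6):
    # Jacobian of g approximated by central differences
    J = np.empty((3, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        J[:, j] = (g(x + e) - g(x - e)) / (2 * h)
    return J

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.uniform(0.1, 10.0, size=3)     # a point of R^3_{++}
    w = rng.normal(size=3)
    gx = g(x)
    w -= (w @ gx) / (gx @ gx) * gx         # project w onto {w : w . g(x) = 0}
    w /= np.linalg.norm(w)
    assert w @ Dg(x) @ w < 0
print("w^T Dg(x) w < 0 on all sampled pairs (x, w)")
```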
In the appendix, we show that $w^TDg(x)w<0$ for all $x\in \mathbb{R}^3_{++}$ and $w\in\mathbb{R}^3$ such that $w\neq 0$ and $w\cdot g(x)=0$.\footnote{See Subsection A.5.} By Theorem 4, $\tilde{f}=f^{\succsim^g}$ and $g(x)$ satisfies (\ref{JACOBI}) if and only if $\tilde{f}$ is a demand function.\footnote{Note that, the claim $\tilde{f}=f^{\succsim^g}$ means that $\tilde{f}(p,m)=f^{\succsim^g}(p,m)$ for all $(p,m)\in \tilde{P}$ and $f^{\succsim^g}(p,m)=\emptyset$ if $(p,m)\notin \tilde{P}$. Therefore, we needed to verify that $\tilde{P}=\{(mg(x),m)|x\in \tilde{\Omega},m>0\}$ in the above argument.} Because $g(x)$ defined in (\ref{GALE2}) violates (\ref{JACOBI}), we have that $\tilde{f}$ is not a demand function. In conclusion, we obtain two results. 1) if $f^G$ is a demand function, then $\tilde{f}$ is also a demand function. 2) $\tilde{f}$ is not a demand function. Therefore, by the contrapositive of 1), $f^G$ is also not a demand function. This is the third proof that Gale's example is not a demand function. \noindent {\bf Notes on Theorem 4}. This result is deeply related to the ``open cycle theory'' in Pareto (1906b). In 1906, Pareto published his famous monograph called ``Manuale'' (Pareto, 1906a). It is said that Volterra wrote a review of this ``Manuale'' (Volterra, 1906), and pointed out a mathematical error in Pareto's discussion of the problem of consumer choice. 
In fact, there are various arguments about this story, and some say that Volterra criticized Pareto for not writing (\ref{JACOBI}), whereas others say that Volterra was not actually criticizing Pareto, but rather referring to another work of Pareto and recommending that it be incorporated into the book.\footnote{If the reader can read Japanese, we recommend Suda (2007), which includes a very detailed survey on this problem.} In any case, Pareto's response to Volterra's review was to try to construct the consumer theory for the case in which (\ref{JACOBI}) does not hold (Pareto, 1906b), and eventually, this paper was incorporated into the mathematical appendix of the French edition of his book (Pareto, 1909), known as ``Manuel''. We explain why this theory is called ``open cycle theory'' using ``three-sided tower'' arguments due to Samuelson (1950).\footnote{We note that Samuelson consistently argued that Pareto's argument was wrong. Although Pareto connects this problem with the order of consumption, Samuelson did not think that this explanation is correct.} Samuelson explained what occurs when the integrability condition (\ref{JACOBI}) is violated using three linearly independent vectors $x,y,z$. First, consider the plane spanned by $x,y$, and draw the indifference curve passing through $x$. This curve intersects the straight line passing through $0$ and $y$ only once, and this point can be written as $ay$ for some $a>0$. Similarly, consider the plane spanned by $y,z$, and draw the indifference curve passing through $ay$. Then, this curve intersects the straight line passing through $0$ and $z$ only once, and this point can be written as $bz$ for some $b>0$. Finally, consider the plane spanned by $z,x$, and draw the indifference curve passing through $bz$. Then, this curve intersects the straight line passing through $0$ and $x$ only once, and this point can be written as $cx$ for some $c>0$. 
Samuelson said that $c=1$ must hold if (\ref{JACOBI}) holds, but if (\ref{JACOBI}) does not hold, then $c\neq 1$ in some cases. In other words, if (\ref{JACOBI}) is violated, then the above cycle constructed from three indifference curves is not necessarily a closed curve. Pareto's theory tried to treat the case in which the cycle constructed using indifference curves does not become a closed curve, and thus this theory is called ``open cycle theory''. By the way, Samuelson called the consumer who violates (\ref{JACOBI}) ``a man that can be easily cheated.'' To illustrate why, consider the above example with $c<1$. In this case, when the consumer is offered the exchange of $x$ for $ay$, he/she accepts because the two bundles are indifferent. Next, when offered the exchange of $ay$ for $bz$, he/she again accepts because these are indifferent. Finally, when offered the exchange of $bz$ for $cx$, he/she accepts for the same reason. As a result of these exchanges, his/her initial consumption vector $x$ is reduced to $cx$. Samuelson described such an individual as ``easily cheated.'' Of course, a similar exchange can be done in the case $c>1$. This discussion can be further understood using Theorem 4. First, the ODE (\ref{IND}) is intentionally designed so that the right-hand side is orthogonal to $g(y(t))$. In microeconomics, the budget hyperplane is orthogonal to the price vector, and at the optimal consumption plan, the indifference hypersurface is tangent to the budget hyperplane. Therefore, the supporting hyperplane of the indifference hypersurface is orthogonal to the price vector at this point. Because $g(y(t))$ denotes the price vector under which $y(t)$ is optimal, the trajectory of the curve $y(t;x,v)$ can be seen as the indifference curve passing through $x$ in the plane spanned by $x,v$.
Hence, translating Samuelson's arguments into our symbols, \[ay=y(t(x,y);x,y),\ bz=y(t(ay,z);ay,z),\ cx=y(t(bz,x);bz,x).\] By an easy calculation, we obtain that \[a=u^g(x,y),\ b=u^g(ay,z),\ c=u^g(bz,x).\] It is easy to verify that \[x\sim^gay,\ ay\sim^gbz,\ bz\sim^gcx.\] Therefore, if $\succsim^g$ is transitive, then $c=1$. However, if $\succsim^g$ is not transitive, then there exist $x,y,z$ such that $c\neq 1$. Hence, Theorem 4 can be viewed as a mathematical expression of Samuelson's explanation. Gale probably had this argument of Samuelson's in mind, and thus focused on the fact that $g(x)$ in (\ref{GALE2}) does not satisfy (\ref{JACOBI}). In this sense, the proof in this subsection can be considered to be the closest to the argument that Gale originally had in mind.\footnote{However, it cannot be assumed that Gale himself knew or substantially proved Theorem 4. This can be seen in the proof of Theorem 4. As we see in the appendix, the proof of Theorem 4 is extremely long and difficult. If Gale had proved such a result, we cannot believe that he would not have left a record of it anywhere. In this regard, although it is possible to prove that $f^G$ is not a demand function using Gale's own idea, it is inconceivable that Gale actually did so.} Samuelson's discussion used three linearly independent vectors, and he saw no difficulty in the fact that an indifference curve can be drawn in each plane. In other words, he thought that such a problem would not occur for the case in which $n=2$. In this connection, we mention two facts. First, (\ref{JACOBI}) automatically holds if any two of $i,j,k$ coincide, and therefore it holds unconditionally when $n=2$. Second, Rose (1958) showed that the weak axiom implies the strong axiom when $n=2$. Therefore, in light of Theorem 1, when $n=2$ there is no CoD that satisfies the weak axiom but is not a demand function. Indeed, Gale's example treated the case in which $n=3$, which is an essential assumption.
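Samuelson's three-sided tower can even be simulated for Gale's $g$. The sketch below is a rough numerical illustration of ours, not part of the paper's proofs: it integrates the ODE (\ref{IND}) with a fixed-step RK4 scheme and stops once $y(t)\cdot w^*\ge 0$, i.e., once the trajectory meets the ray through $v$. The sample points $x,y,z$ are arbitrary interior choices.

```python
import numpy as np

B = (1 / 37) * np.array([[ 9.0, 12.0, 16.0],
                         [16.0,  9.0, 12.0],
                         [12.0, 16.0,  9.0]])

def g(y):
    # (GALE2): g(y) = By / (y^T B y)
    return (B @ y) / (y @ B @ y)

def indiff_point(x, v, dt=1e-3, max_steps=200_000):
    # follow y' = (g(y).x) v - (g(y).v) x from y(0) = x until y . w* >= 0,
    # i.e., until the trajectory reaches the ray through v (cf. (IND))
    x = np.asarray(x, dtype=float)
    v = np.asarray(v, dtype=float)
    w_star = (v @ x) * v - (v @ v) * x
    rhs = lambda y: (g(y) @ x) * v - (g(y) @ v) * x
    y = x.copy()
    for _ in range(max_steps):
        if y @ w_star >= 0:
            return y
        k1 = rhs(y)
        k2 = rhs(y + dt / 2 * k1)
        k3 = rhs(y + dt / 2 * k2)
        k4 = rhs(y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    raise RuntimeError("ray through v not reached")

x = np.array([2.0, 1.0, 1.0])
yv = np.array([1.0, 2.0, 1.0])
z = np.array([1.0, 1.0, 2.0])
ay = indiff_point(x, yv)   # ay is proportional to yv, a = u^g(x, yv)
bz = indiff_point(ay, z)   # bz is proportional to z,  b = u^g(ay, z)
cx = indiff_point(bz, x)   # cx is proportional to x,  c = u^g(bz, x)
for pt, ray in ((ay, yv), (bz, z), (cx, x)):
    assert np.linalg.norm(np.cross(pt, ray)) < 1e-2 * np.linalg.norm(pt)
c = np.linalg.norm(cx) / np.linalg.norm(x)
print("c =", c)  # c != 1 exhibits an open cycle
```

We deliberately assert only the structural facts (each leg ends on the intended ray); whether $c$ differs visibly from $1$ for this particular triple is left for the reader to inspect.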
We must mention Hurwicz and Richter (1979a, b). They considered an axiom called {\bf Ville's axiom of revealed preference} (or simply, {\bf Ville's axiom}) for a function $g:\mathbb{R}^n_{++}\to\mathbb{R}^n_{++}$. A function $g:\mathbb{R}^n_{++}\to \mathbb{R}^n_{++}$ is said to satisfy Ville's axiom if there is no piecewise $C^1$ closed curve $x:[0,T]\to \mathbb{R}^n_{++}$ such that \begin{equation}\label{VILLE} g(x(t))\cdot \dot{x}(t)>0 \end{equation} for almost all $t\in [0,T]$. They showed that when $g$ is $C^1$, $g$ satisfies this axiom if and only if $g$ satisfies (\ref{JACOBI}). For a modern proof of this result, see Hosoya (2019). Because $g(x)$ defined by (\ref{GALE2}) violates (\ref{JACOBI}), this $g(x)$ also violates Ville's axiom. Therefore, there exists a closed curve $x(t)$ that satisfies (\ref{VILLE}) for almost all $t$. If $f^G=f^u$ for some $C^1$ utility function $u$, then by Lagrange's multiplier rule, $\nabla u(x)=\lambda(x)g(x)$ for all $x$. Therefore, \[\frac{d}{dt}u(x(t))>0\] for almost all $t\in [0,T]$, which contradicts $x(0)=x(T)$. Ville's axiom is built on this idea, and it is worth noting that this idea and the ``open cycle'' argument are very similar. In fact, we can easily show the following result. \noindent {\bf Theorem 5}. Suppose that $g:\tilde{\Omega}\to \mathbb{R}^n_{++}$ is a $C^1$ function. Then, $g$ satisfies Ville's axiom if and only if $\succsim^g$ is transitive. We present a proof in the appendix. Finally, we mention one more fact. We have already shown that the restriction $\tilde{f}$ of Gale's example is a semi-demand function. Actually, Hosoya (2021b) showed that any CoD that satisfies Walras' law and the weak axiom is a semi-demand function corresponding to some complete binary relation. Hence, Gale's example itself is also a semi-demand function.
We think that Gale's example is actually a semi-demand function corresponding to a complete, p-transitive, and continuous binary relation, and that such a binary relation can be constructed in the same manner as in Theorem 4. However, there are several technical difficulties, and thus it is an open problem. \section{Conclusion} We have scrutinized Gale's paper written in 1960 to see how much Gale actually proved. As a result, we found that Gale showed that his constructed CoD satisfies the weak axiom of revealed preference, and that the corresponding inverse demand function does not satisfy Jacobi's integrability condition, but he did not show that this CoD is not a demand function. Next, we proved that this CoD is not a demand function in three ways. The first is a proof that Gale could have constructed in 1960, the second is the simplest modern proof, and the third is the proof constructed in the direction Gale originally intended. We paid particular attention to the first proof, and found that, although what Gale showed in his paper was insufficient, we could easily confirm Gale's claim from what he showed. Thus, it is fair to say that the credit for discovering a CoD that is not a demand function while satisfying the weak axiom of revealed preference belongs to Gale. \appendix \section{Proofs of Results} \subsection{Proof of the Well-Definedness of $\bar{p}$ in Gale's Example} In this subsection, we prove the well-definedness of $\bar{p}$. Recall the definition of $\bar{p}$. For $p\in \mathbb{R}^3_{++}$, the definition of $\bar{p}$ is as follows. \begin{enumerate}[I)] \item If $p\in C$, define $\bar{p}=p$. \item Suppose that for some $(i,j,k)\in \{(1,2,3),(2,3,1),(3,1,2)\}$, $-3p_i+4p_j\le 0$, and $-3p_j+4p_k\le 0$. By our previous argument, we have that $-3p_k+4p_i>0$. In this case, define $\bar{p}_i=\frac{16}{9}p_k$, $\bar{p}_j=\frac{4}{3}p_k$, and $\bar{p}_k=p_k$.
\item Suppose that for some $(i,j,k)\in \{(1,2,3),(2,3,1),(3,1,2)\}$, $-3p_i+4p_j\le 0$, $-3p_j+4p_k\ge 0$, and $-3p_k+4p_i\ge 0$. We separate this case into two subcases. \begin{enumerate}[i)] \item If $16p_j-9p_k\ge 0$, then define $\bar{p}_i=\frac{4}{3}p_j$, $\bar{p}_j=p_j$, and $\bar{p}_k=p_k$. \item If $16p_j-9p_k\le 0$, then define $\bar{p}_i=\frac{4}{3}p_j$, $\bar{p}_j=p_j$, and $\bar{p}_k=\frac{16}{9}p_j$. \end{enumerate} \end{enumerate} The problem is that there may exist $p$ for which multiple definitions are applicable. For example, if $p=(4,3,4)$, then $-3p_1+4p_2=0$, $-3p_2+4p_3>0$ and $-3p_3+4p_1>0$, which implies that I) and III)-i) are applicable for this $p$. We need to show that, in such a case, both definitions I) and III)-i) lead to the same $\bar{p}$. We separate the proof into seven cases. Let $I=\{(1,2,3),(2,3,1),(3,1,2)\}$. Recall that if $-3p_i+4p_j\le 0$ and $-3p_j+4p_k\le 0$, then $-3p_k+4p_i>0$. Define $C_1=C$ and \begin{align*} C_2=\{p\in \mathbb{R}^3_{++}|&-3p_i+4p_j\le 0,\ -3p_j+4p_k\le 0\mbox{ for some }(i,j,k)\in I\},\\ C_3=\{p\in \mathbb{R}^3_{++}|&-3p_i+4p_j\le 0,\ -3p_j+4p_k\ge 0,\\ &~-3p_k+4p_i\ge 0,\ 16p_j-9p_k\ge 0\mbox{ for some }(i,j,k)\in I\},\\ C_4=\{p\in \mathbb{R}^3_{++}|&-3p_i+4p_j\le 0,\ -3p_j+4p_k\ge 0,\\ &~-3p_k+4p_i\ge 0,\ 16p_j-9p_k\le 0\mbox{ for some }(i,j,k)\in I\}. \end{align*} Then, $p\in C_1$ if and only if I) is applicable, $p\in C_2$ if and only if II) is applicable, $p\in C_3$ if and only if III)-i) is applicable, and $p\in C_4$ if and only if III)-ii) is applicable. \noindent {\bf Case 1}. $-3p_i+4p_j>0,\ -3p_j+4p_k>0,\ -3p_k+4p_i>0$. In this case, $p\in C_1$ but $p\notin C_2\cup C_3\cup C_4$. Thus, only definition I) is applicable, and thus $\bar{p}=p$. \noindent {\bf Case 2}. $-3p_i+4p_j=0,\ -3p_j+4p_k>0,\ -3p_k+4p_i>0$. In this case, $-9p_k+16p_j>0$, and thus $p\in C_1\cap C_3$ but $p\notin C_2\cup C_4$. Hence, definitions I) and III)-i) are applicable, and in both definitions, $\bar{p}=p$.
\noindent {\bf Case 3}. $-3p_i+4p_j<0,\ -3p_j+4p_k>0,\ -3p_k+4p_i>0$. In this case, if $-9p_k+16p_j>0$, then $p\in C_3$ but $p\notin C_1\cup C_2\cup C_4$. If $-9p_k+16p_j<0$, then $p\in C_4$ but $p\notin C_1\cup C_2\cup C_3$. In both cases, $\bar{p}$ is well-defined. If $-9p_k+16p_j=0$, then $p\in C_3\cap C_4$ and thus definitions III)-i) and III)-ii) are applicable, and both definitions yield the same $\bar{p}$. \noindent {\bf Case 4}. $-3p_i+4p_j=0,\ -3p_j+4p_k=0,\ -3p_k+4p_i>0$. Define $i^*=j, j^*=k, k^*=i$. In this case, $-9p_k+16p_j>0$ and $-9p_{k^*}+16p_{j^*}=0$. This implies that $p\in C_1\cap C_2\cap C_3\cap C_4$, and thus definitions I), II), III)-i), and III)-ii) are applicable. In all cases, $\bar{p}=p$. \noindent {\bf Case 5}. $-3p_i+4p_j=0,\ -3p_j+4p_k<0,\ -3p_k+4p_i>0$. Define $i^*=j, j^*=k, k^*=i$. Then, $-9p_{k^*}+16p_{j^*}<0$, and thus $p\in C_2\cap C_4$ but $p\notin C_1\cup C_3$. Hence, definitions II) and III)-ii) are applicable, and both definitions yield the same $\bar{p}$. \noindent {\bf Case 6}. $-3p_i+4p_j<0,\ -3p_j+4p_k=0,\ -3p_k+4p_i>0$. In this case, $-9p_k+16p_j>0$, and thus $p\in C_2\cap C_3$ but $p\notin C_1\cup C_4$. Hence, definitions II) and III)-i) are applicable, and both definitions yield the same $\bar{p}$. \noindent {\bf Case 7}. $-3p_i+4p_j<0,\ -3p_j+4p_k<0,\ -3p_k+4p_i>0$. In this case, $p\in C_2$ but $p\notin C_1\cup C_3\cup C_4$, and thus only definition II) is applicable, so $\bar{p}$ is well-defined. Hence, in all cases $\bar{p}$ is well-defined. Using this argument, we can easily show that the mapping $p\mapsto \bar{p}$ is continuous. Indeed, the restriction of this mapping to each $C_i$ is trivially continuous, and if $p\in C_i\cap C_j$, then the values of these restrictions coincide. The continuity of this mapping immediately follows from this fact. \subsection{Proof of the Basic Property on the Transitive Closure} In this subsection, we prove the following. \noindent {\bf Lemma 1}.
Let $X\neq \emptyset$ and suppose that $R$ is a binary relation on $X$. Let $R^*$ be the transitive closure of $R$. Then, $xR^*y$ if and only if there exists a finite sequence $x_1,...,x_k$ such that $x_1=x,\ x_k=y$ and $x_iRx_{i+1}$ for all $i\in \{1,...,k-1\}$. \noindent {\bf Proof}. Define $R^+$ as the set of all $(x,y)\in X^2$ such that there exists a finite sequence $x_1,...,x_k$ such that $x_1=x,\ x_k=y$ and $x_iRx_{i+1}$ for all $i\in \{1,...,k-1\}$. It suffices to show that $R^+=R^*$. It is obvious that $R^+$ is transitive and includes $R$, and thus $R^*\subset R^+$. Conversely, suppose that $xR^+y$. Then, there exists a finite sequence $x_1,...,x_k$ such that $x_1=x,\ x_k=y$ and $x_iRx_{i+1}$ for all $i\in \{1,...,k-1\}$. Because $R\subset R^*$, we have that $x_iR^*x_{i+1}$ for all $i\in \{1,...,k-1\}$. Since $R^*$ is transitive, we have that $xR^*y$, which implies that $R^+\subset R^*$. This completes the proof. $\blacksquare$ \subsection{Proof of Theorem 1} Suppose that $f=f^{\succsim}$ for some weak order $\succsim$. We have already shown that $x\succ_ry$ implies that $x\succ y$. Hence, Lemma 1 and the transitivity of $\succ$ imply that $x\succ_{ir}y$ implies $x\succ y$. Hence, \[x\succ_{ir}y\Rightarrow x\succ y\Rightarrow y\not\succ x\Rightarrow y\not\succ_{ir}x,\] which implies that $\succ_{ir}$ is asymmetric, and thus $f$ satisfies the strong axiom. To prove the converse relationship, we need the following lemma. \noindent {\bf Lemma 2} (Szpilrajn's extension theorem). Suppose that $X$ is a nonempty set and $\succ$ is a strong order on $X$. Then, there exists an antisymmetric weak order $\succsim^*$ that includes $\succ$.\footnote{This theorem was shown by Szpilrajn (1930). For another modern proof, see Chambers and Echenique (2016).} \noindent {\bf Proof}. Let $L$ be the set of all strong orders on $X$ that include $\succ$.
Define \[\succeq=\{(\succ_1,\succ_2)\in L^2|\succ_2\subset \succ_1\}.\] Then, $\succeq$ is a reflexive and transitive binary relation on $L$, and clearly every chain $C$ in $(L,\succeq)$ has an upper bound. By Zorn's lemma, there exists a maximal element $\succ^*\in L$ with respect to $\succeq$. Define \[\succsim^*=\succ^*\cup \{(x,x)|x\in X\}.\] Clearly $\succsim^*$ is antisymmetric, and it is easy to show that $\succsim^*$ is transitive. It suffices to show that $\succsim^*$ is complete. Suppose not. Then, there exist $x,y\in X$ such that $x\not\succsim^*y$ and $y\not\succsim^*x$. By the definition of $\succsim^*$, we have that $x\neq y$. Define \[\succ^+=\succ^*\cup\{(z,w)\in X^2|z\succsim^*x,\ y\succsim^*w\}.\] We first show that $\succ^+$ is asymmetric. Suppose not. Then, there exist $z,w\in X$ such that $z\succ^+w$ and $w\succ^+z$. If $z\not\succ^*w$ and $w\not\succ^*z$, then $z\succsim^*x$ and $y\succsim^*z$. This implies that $y\succsim^*x$, which is a contradiction. Therefore, either $z\succ^*w$ or $w\succ^*z$. Without loss of generality, we assume that $w\succ^*z$. Then, $z\not\succ^*w$, and thus $z\succsim^*x$ and $y\succsim^*w$. Because $w\succ^*z$, we have that $w\succsim^*z$, and by the transitivity of $\succsim^*$, $y\succsim^*x$, which is a contradiction. Second, we show that $\succ^+$ is transitive. Suppose that $z\succ^+w$ and $w\succ^+v$. If $z\succ^*w$ and $w\succ^*v$, then $z\succ^*v$, and thus $z\succ^+v$. Suppose that $z\not\succ^*w$. Then, $z\succsim^*x$ and $y\succsim^*w$. If $w\not\succ^*v$, then $w\succsim^*x$. This implies that $y\succsim^*x$, which is a contradiction. Hence, we have that $w\succ^*v$. This implies that $y\succsim^*v$, and thus $z\succ^+v$. Next, suppose that $z\succ^*w$ and $w\not\succ^*v$. Then, $w\succsim^*x$ and $y\succsim^*v$. Because $z\succ^*w$, we have that $z\succsim^*x$, which implies that $z\succ^+v$. Thus, in any case, we have that $z\succ^+v$, which implies that $\succ^+$ is transitive.
Therefore, $\succ^+$ is a strong order that includes $\succ^*$. Because $x\succ^+y$ and $x\not\succ^*y$, we have that $\succ^*\subsetneq \succ^+$. This implies that $\succ^*$ is not maximal with respect to $\succeq$, which is a contradiction. This completes the proof. $\blacksquare$ Now, suppose that $f$ satisfies the strong axiom. Then, $\succ_{ir}$ is a strong order on $\Omega$, and thus there exists an antisymmetric weak order $\succsim^*$ on $\Omega$ that includes $\succ_{ir}$. Suppose that $x=f(p,m)$ and $y\in \Delta(p,m)$. If $x=y$, then $x\succsim^*y$. If $x\neq y$, then $x\succ_{ir}y$, and thus $x\succsim^*y$. Moreover, because $\succsim^*$ is antisymmetric, $y\not\succsim^*x$. This implies that $x=f^{\succsim^*}(p,m)$, and thus $f=f^{\succsim^*}$. Hence, $f$ is a demand function. This completes the proof of Theorem 1. $\blacksquare$ \subsection{Proof of Theorem 3} Choose any $p,q$ and $t\in [0,1]$, and define $r=(1-t)p+tq$. Choose any $\varepsilon>0$. Then, there exists $y\in \Omega$ such that $y\succsim x$ and $r\cdot y\le E^x(r)+\varepsilon$. Thus, \[E^x(r)+\varepsilon\ge r\cdot y=(1-t)p\cdot y+tq\cdot y\ge (1-t)E^x(p)+tE^x(q),\] which implies that $E^x$ is concave. Because any concave function defined on an open set is continuous, $E^x$ is continuous. In the remainder of the proof, we assume that $f$ satisfies Walras' law and $x\in R(f)$. Suppose that $x=f(p^*,m^*)$. Choose any $p\in \mathbb{R}^n_{++}$. Then, there exists $\varepsilon>0$ such that $p^*\cdot y<m^*$ for all $y\in \Delta(p,\varepsilon)$. In this case, $x\succ y$ for all $y\in \Delta(p,\varepsilon)$, and thus $E^x(p)\ge \varepsilon$. Hence, we have that $E^x(p)>0$, and thus 1) holds. Next, suppose that $y\in \Delta(p^*,m^*)$ and $x\neq y$. Then, $x\succ y$. By the contrapositive of this, we have that if $y\succsim x$, then either $y=x$ or $p^*\cdot y>m^*$, which implies that $E^x(p^*)=m^*$. Hence, 2) holds. Third, choose any $p\in\mathbb{R}^n_{++}$.
Define \[D_{i,+}E^x(p)=\lim_{h\downarrow 0}\frac{E^x(p+he_i)-E^x(p)}{h},\ D_{i,-}E^x(p)=\lim_{h\uparrow 0}\frac{E^x(p+he_i)-E^x(p)}{h},\] where $e_i$ denotes the $i$-th unit vector. Because $E^x$ is concave, both limits exist, and \[D_{i,+}E^x(p)\le D_{i,-}E^x(p).\] For any $q\in \mathbb{R}^n_{++}$ and $\varepsilon>0$, define $X(q)=f(q,E^x(q))$ and $X^{\varepsilon}(q)=f(q,E^x(q)+\varepsilon)$. By definition of $E^x(q)$, there exists $y\in \Omega$ such that $y\succsim x$ and $q\cdot y\le E^x(q)+\varepsilon$. Because $X^{\varepsilon}(q)\succsim y$, we have that $X^{\varepsilon}(q)\succsim x$, and thus $p\cdot X^{\varepsilon}(q)\ge E^x(p)$. Because $f$ is continuous, letting $\varepsilon\to 0$, we obtain \[p\cdot X(q)\ge E^x(p)=p\cdot X(p).\] Therefore, if we set $q=p+he_i$, then \begin{align*} E^x(q)-E^x(p)=&~q\cdot X(q)-p\cdot X(p)\\ =&~p\cdot (X(q)-X(p))+hX_i(q)\\ \ge&~hX_i(q). \end{align*} By the above inequality, \[D_{i,+}E^x(p)\ge f_i(p,E^x(p))\ge D_{i,-}E^x(p),\] and thus, \[\frac{\partial E^x}{\partial p_i}(p)=f_i(p,E^x(p)),\] which implies that (\ref{Shephard}) holds. This completes the proof. $\blacksquare$ \subsection{Property of the Inverse Demand Function} In the main text, we claimed several properties of the inverse demand function $g(x)$ defined in (\ref{GALE2}). In this subsection, we present proofs of these facts. First, we present a lemma. \noindent {\bf Lemma 3}. Suppose that $U\subset \mathbb{R}^n$ is open, $g:U\to \mathbb{R}^n\setminus\{0\}$ is $C^1$, and $\lambda:U\to \mathbb{R}_{++}$ is also $C^1$. Define $h(x)=\lambda(x)g(x)$. Then, the following holds. \begin{enumerate}[1)] \item $g(x)$ satisfies (\ref{JACOBI}) if and only if $h(x)$ satisfies (\ref{JACOBI}). \item $w^TDg(x)w\le 0$ for all $x\in U$ and $w\in \mathbb{R}^n$ such that $w\cdot g(x)=0$ if and only if $w^TDh(x)w\le 0$ for all $x\in U$ and $w\in \mathbb{R}^n$ such that $w\cdot h(x)=0$.
\item $w^TDg(x)w<0$ for all $x\in U$ and $w\in\mathbb{R}^n$ such that $w\neq 0$ and $w\cdot g(x)=0$ if and only if $w^TDh(x)w<0$ for all $x\in U$ and $w\in \mathbb{R}^n$ such that $w\neq 0$ and $w\cdot h(x)=0$. \end{enumerate} \noindent {\bf Proof}. By Leibniz's rule, \begin{align*} &~h_i\left(\frac{\partial h_j}{\partial x_k}-\frac{\partial h_k}{\partial x_j}\right)+h_j\left(\frac{\partial h_k}{\partial x_i}-\frac{\partial h_i}{\partial x_k}\right)+h_k\left(\frac{\partial h_i}{\partial x_j}-\frac{\partial h_j}{\partial x_i}\right)\\ =&~\lambda^2\left[g_i\left(\frac{\partial g_j}{\partial x_k}-\frac{\partial g_k}{\partial x_j}\right)+g_j\left(\frac{\partial g_k}{\partial x_i}-\frac{\partial g_i}{\partial x_k}\right)+g_k\left(\frac{\partial g_i}{\partial x_j}-\frac{\partial g_j}{\partial x_i}\right)\right]\\ &~+\lambda\left[g_i\left(g_j\frac{\partial \lambda}{\partial x_k}-g_k\frac{\partial \lambda}{\partial x_j}\right)+g_j\left(g_k\frac{\partial \lambda}{\partial x_i}-g_i\frac{\partial \lambda}{\partial x_k}\right)+g_k\left(g_i\frac{\partial \lambda}{\partial x_j}-g_j\frac{\partial \lambda}{\partial x_i}\right)\right]\\ =&~\lambda^2\left[g_i\left(\frac{\partial g_j}{\partial x_k}-\frac{\partial g_k}{\partial x_j}\right)+g_j\left(\frac{\partial g_k}{\partial x_i}-\frac{\partial g_i}{\partial x_k}\right)+g_k\left(\frac{\partial g_i}{\partial x_j}-\frac{\partial g_j}{\partial x_i}\right)\right], \end{align*} and thus, $h(x)$ satisfies (\ref{JACOBI}) if and only if $g(x)$ satisfies (\ref{JACOBI}). Hence, 1) holds. Next, we note that $w\cdot g(x)=0$ if and only if $w\cdot h(x)=0$. If $w\cdot g(x)=0$, then \[w^TDh(x)w=w^Tg(x)D\lambda(x)w+\lambda(x)w^TDg(x)w=\lambda(x)w^TDg(x)w,\] and thus, 2) and 3) hold. This completes the proof. $\blacksquare$ We show that $g(x)$ defined in (\ref{GALE2}) violates (\ref{JACOBI}). 
Choosing $\lambda(x)=37(x^TBx)$ and applying Lemma 3, we have that $g(x)$ satisfies (\ref{JACOBI}) if and only if \begin{equation}\label{GALE3} h(x)=\begin{pmatrix} 9 & 12 & 16\\ 16 & 9 & 12\\ 12 & 16 & 9 \end{pmatrix}\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix} \end{equation} satisfies (\ref{JACOBI}). However, if $x_1=x_2=x_3=1$, then \begin{align*} &~h_1\left(\frac{\partial h_2}{\partial x_3}-\frac{\partial h_3}{\partial x_2}\right)+h_2\left(\frac{\partial h_3}{\partial x_1}-\frac{\partial h_1}{\partial x_3}\right)+h_3\left(\frac{\partial h_1}{\partial x_2}-\frac{\partial h_2}{\partial x_1}\right)\\ =&~37(12-16)+37(12-16)+37(12-16)=-444\neq 0, \end{align*} and thus $h(x)$ violates (\ref{JACOBI}). Define \[C=\begin{pmatrix} 9 & 14 & 14\\ 14 & 9 & 14\\ 14 & 14 & 9 \end{pmatrix}.\] Then, $w^TDh(x)w=w^TCw$, where $h(x)$ is defined in (\ref{GALE3}). Let $t=\frac{h_1(x)}{h_2(x)}$. Then, $\frac{9}{16}\le t\le \frac{4}{3}$, and thus \[\begin{vmatrix} 9 & 14 & h_1(x)\\ 14 & 9 & h_2(x)\\ h_1(x) & h_2(x) & 0 \end{vmatrix}=(h_2(x))^2(-9t^2+28t-9)>0. \] Next, let $s_1=\frac{h_1(x)}{h_3(x)}$ and $s_2=\frac{h_2(x)}{h_3(x)}$. Then, $\frac{3}{4}\le s_1\le \frac{16}{9}$, $\frac{9}{16}\le s_2\le \frac{4}{3}$, and $\frac{9}{16}\le \frac{s_1}{s_2}\le \frac{4}{3}$, and thus \begin{align*} &~\begin{vmatrix} 9 & 14 & 14 & h_1(x)\\ 14 & 9 & 14 & h_2(x)\\ 14 & 14 & 9 & h_3(x)\\ h_1(x) & h_2(x) & h_3(x) & 0 \end{vmatrix}\\ =&~(h_3(x))^2[115s_1^2-140s_1s_2+115s_2^2-140s_1-140s_2+115]<0. \end{align*} By Theorem 5 of Debreu (1952), we have that $w^TCw<0$ for all $w\in \mathbb{R}^3$ such that $w\neq 0$ and $w\cdot h(x)=0$. Therefore, by Lemma 3, $w^TDg(x)w<0$ for all $x\in \mathbb{R}^3_{++}$ and $w\in \mathbb{R}^3$ such that $w\neq 0$ and $w\cdot g(x)=0$. \subsection{Proof of a Fact on the Domain of Gale's Example} In Subsection 4.3, we first introduce Theorem 4 and then provide a proof that $f^G$ is not a demand function.
In this proof, we defined $\tilde{\Omega}=\mathbb{R}^3_{++}$ and $\tilde{P}=(f^G)^{-1}(\tilde{\Omega})$, and claimed that $\tilde{P}=\{(mg(x),m)|x\in \tilde{\Omega},\ m>0\}$, where $g(x)$ is defined by (\ref{GALE2}). In this subsection, we prove this claim rigorously. First, we present three lemmas. \noindent {\bf Lemma 4}. Suppose that $U\subset \mathbb{R}^n$ is open, and $g:U\to \mathbb{R}^n\setminus \{0\}$ is $C^1$. Suppose also that $g_n(x)\neq 0$ for all $x\in U$. Then, \begin{equation}\label{JACOBI2} g_i\left(\frac{\partial g_j}{\partial x_n}-\frac{\partial g_n}{\partial x_j}\right)+g_j\left(\frac{\partial g_n}{\partial x_i}-\frac{\partial g_i}{\partial x_n}\right)+g_n\left(\frac{\partial g_i}{\partial x_j}-\frac{\partial g_j}{\partial x_i}\right)=0 \end{equation} for all $i,j\in \{1,...,n-1\}$ if and only if $g$ satisfies (\ref{JACOBI}) for all $i,j,k\in \{1,...,n\}$. \noindent {\bf Proof}. Clearly, (\ref{JACOBI}) implies (\ref{JACOBI2}). Conversely, suppose that (\ref{JACOBI2}) holds for all $i,j\in \{1,...,n-1\}$. Choose any $i,j,k\in \{1,...,n\}$. If two or three of them are $n$, then trivially (\ref{JACOBI}) holds. If one of them is $n$, then by (\ref{JACOBI2}), (\ref{JACOBI}) holds. Therefore, without loss of generality, we can assume that $i,j,k\in \{1,...,n-1\}$.
Then, \[g_i\left(\frac{\partial g_j}{\partial x_n}-\frac{\partial g_n}{\partial x_j}\right)+g_j\left(\frac{\partial g_n}{\partial x_i}-\frac{\partial g_i}{\partial x_n}\right)+g_n\left(\frac{\partial g_i}{\partial x_j}-\frac{\partial g_j}{\partial x_i}\right)=0,\] \[g_j\left(\frac{\partial g_k}{\partial x_n}-\frac{\partial g_n}{\partial x_k}\right)+g_k\left(\frac{\partial g_n}{\partial x_j}-\frac{\partial g_j}{\partial x_n}\right)+g_n\left(\frac{\partial g_j}{\partial x_k}-\frac{\partial g_k}{\partial x_j}\right)=0,\] \[g_k\left(\frac{\partial g_i}{\partial x_n}-\frac{\partial g_n}{\partial x_i}\right)+g_i\left(\frac{\partial g_n}{\partial x_k}-\frac{\partial g_k}{\partial x_n}\right)+g_n\left(\frac{\partial g_k}{\partial x_i}-\frac{\partial g_i}{\partial x_k}\right)=0.\] Multiplying the first equation by $g_k$, the second by $g_i$, and the third by $g_j$, and summing them, we obtain \[g_n\left[g_i\left(\frac{\partial g_j}{\partial x_k}-\frac{\partial g_k}{\partial x_j}\right)+g_j\left(\frac{\partial g_k}{\partial x_i}-\frac{\partial g_i}{\partial x_k}\right)+g_k\left(\frac{\partial g_i}{\partial x_j}-\frac{\partial g_j}{\partial x_i}\right)\right]=0,\] and because $g_n(x)\neq 0$, (\ref{JACOBI}) holds. This completes the proof. $\blacksquare$ \noindent {\bf Lemma 5}. Suppose that $U\subset \mathbb{R}^n$ is open, and $g:U\to \mathbb{R}^n\setminus \{0\}$ is $C^1$. Moreover, suppose that $g_n\equiv 1$. Define \[a_{ij}(x)=\frac{\partial g_i}{\partial x_j}(x)-\frac{\partial g_i}{\partial x_n}(x)g_j(x),\] and let $A_g(x)$ be the $(n-1)\times (n-1)$ matrix whose $(i,j)$-th component is $a_{ij}(x)$. Then, the following holds.\footnote{The matrix-valued function $A_g$ is sometimes called the {\bf Antonelli matrix} of $g$.} \begin{enumerate}[1)] \item $g(x)$ satisfies (\ref{JACOBI}) if and only if $A_g(x)$ is symmetric for all $x\in U$.
\item $w^TDg(x)w\le 0$ for all $x\in U$ and $w\in \mathbb{R}^n$ such that $w\cdot g(x)=0$ if and only if $A_g(x)$ is negative semi-definite for all $x\in U$. \item $w^TDg(x)w<0$ for all $x\in U$ and $w\in \mathbb{R}^n$ such that $w\neq 0$ and $w\cdot g(x)=0$ if and only if $A_g(x)$ is negative definite for all $x\in U$. \end{enumerate} \noindent {\bf Proof}. By Lemma 4, $g(x)$ satisfies (\ref{JACOBI}) if and only if $g(x)$ satisfies (\ref{JACOBI2}). Therefore, $g(x)$ satisfies (\ref{JACOBI}) if and only if for every $x\in U$ and $i,j\in \{1,...,n-1\}$, \[g_i\left(\frac{\partial g_j}{\partial x_n}-\frac{\partial g_n}{\partial x_j}\right)+g_j\left(\frac{\partial g_n}{\partial x_i}-\frac{\partial g_i}{\partial x_n}\right)+g_n\left(\frac{\partial g_i}{\partial x_j}-\frac{\partial g_j}{\partial x_i}\right)=0.\] Because $g_n\equiv 1$, we have that $\frac{\partial g_n}{\partial x_i}=\frac{\partial g_n}{\partial x_j}=0$, and thus the above formula can be transformed as follows: \[g_i\frac{\partial g_j}{\partial x_n}-g_j\frac{\partial g_i}{\partial x_n}+\frac{\partial g_i}{\partial x_j}-\frac{\partial g_j}{\partial x_i}=0,\] where the left-hand side is $a_{ij}-a_{ji}$. Therefore, (\ref{JACOBI}) is equivalent to the symmetry of $A_g$, and 1) holds. Next, define $\hat{g}=(g_1,...,g_{n-1})$. By an easy calculation, \[Dg(x)=\left( \begin{array}{c|c} A_g(x) & \frac{\partial \hat{g}}{\partial x_n}(x)\\ \hline 0^T & 0 \end{array} \right)+\left( \begin{array}{c|c} \frac{\partial \hat{g}}{\partial x_n}(x)\hat{g}^T(x) & 0\\ \hline 0^T & 0 \end{array} \right).\] Choose any $w\in \mathbb{R}^n$ such that $w\cdot g(x)=0$, and define $\hat{w}=(w_1,...,w_{n-1})$. Then, $\hat{w}\cdot \hat{g}(x)=-w_n$, and thus \begin{align*} w^TDg(x)w=&~\hat{w}^TA_g(x)\hat{w}+\hat{w}^T\frac{\partial \hat{g}}{\partial x_n}(x)w_n+\hat{w}^T\frac{\partial \hat{g}}{\partial x_n}(x)\hat{g}^T(x)\hat{w}\\ =&~\hat{w}^TA_g(x)\hat{w}, \end{align*} which implies that 2) holds. 
Moreover, if $w\neq 0$ and $w\cdot g(x)=0$, then $\hat{w}\neq 0$, which implies that 3) holds. This completes the proof. $\blacksquare$ \noindent {\bf Lemma 6}. Suppose that $f:P\to \Omega$ is a CoD that satisfies Walras' law, and $g:\mathbb{R}^n_{++}\to \mathbb{R}^n_{++}$ is an inverse demand function of $f$ such that $g_n\equiv 1$. Let $x=f(p,m)$, and suppose that $f$ is differentiable at $(p,m)$ and $g$ is differentiable at $x$. Define $\tilde{S}_f(p,m)$ as the $(n-1)\times (n-1)$ matrix whose $(i,j)$-th component is $s_{ij}(p,m)$. Then, the matrix $A_g(x)$ is regular and $(A_g(x))^{-1}=\tilde{S}_f(p,m)$. \noindent {\bf Proof}. Define\footnote{In this proof, we frequently abbreviate variables. Note that we use the symbol $s_{i,j}$ instead of $s_{ij}$ because if $j=n-1$, then the latter expression is confusing.} \[\hat{S}=\begin{pmatrix} s_{1,1} & ... & s_{1,n-1} & \frac{\partial f_1}{\partial m}\\ \vdots & \ddots & \vdots & \vdots\\ s_{n,1} & ... & s_{n,n-1} & \frac{\partial f_n}{\partial m} \end{pmatrix},\ F=\begin{pmatrix} \frac{\partial f_1}{\partial p_1} & ... & \frac{\partial f_1}{\partial p_{n-1}} & \frac{\partial f_1}{\partial m}\\ \vdots & \ddots & \vdots & \vdots\\ \frac{\partial f_n}{\partial p_1} & ... & \frac{\partial f_n}{\partial p_{n-1}} & \frac{\partial f_n}{\partial m} \end{pmatrix}.\] In other words, $\hat{S}$ is the matrix whose $n$-th column vector is $D_mf(p,m)$ and $j$-th column vector is the same as that of $S_f(p,m)$ for $j\neq n$, and $F$ is the matrix whose $n$-th column vector is $D_mf(p,m)$ and $j$-th column vector is the same as that of $D_pf(p,m)$ for $j\neq n$. 
Define \begin{align*} H=&~\begin{pmatrix} \frac{\partial g_1}{\partial x_1}&\cdots&\frac{\partial g_1}{\partial x_{n-1}}\\ \vdots &\ddots & \vdots\\ \frac{\partial g_{n-1}}{\partial x_1}&\cdots &\frac{\partial g_{n-1}}{\partial x_{n-1}}\\ \end{pmatrix},\\ b=&~\left(\frac{\partial g_1}{\partial x_n},...,\frac{\partial g_{n-1}}{\partial x_n}\right)^T,\\ c=&~\left(\frac{\partial}{\partial x_1}[g(x)\cdot x],...,\frac{\partial}{\partial x_{n-1}}[g(x)\cdot x]\right),\\ \hat{f}=&~(f_1,...,f_{n-1})^T,\ \hat{g}=(g_1,...,g_{n-1})^T. \end{align*} Because $g_n\equiv 1$, differentiating $y=f(g(y),g(y)\cdot y)$ with respect to $y$ at $y=x$, we have that\footnote{$I_n$ denotes the identity matrix.} \[I_n=D_pf(p,m)Dg(x)+D_mf(p,m)D[g(x)\cdot x]=F\times \begin{pmatrix} H & b\\ c & \frac{\partial}{\partial x_n}[g(x)\cdot x] \end{pmatrix}.\] Therefore, $F$ is regular and \begin{align*} F^{-1}=&~\begin{pmatrix} H & b\\ c & \frac{\partial}{\partial x_n}[g(x)\cdot x] \end{pmatrix}\\ =&~\begin{pmatrix} I_{n-1} & 0\\ \hat{f}^T & 1 \end{pmatrix}\times \begin{pmatrix} H & b\\ \hat{g}^T & 1 \end{pmatrix}\\ =&~\begin{pmatrix} I_{n-1} & 0\\ \hat{f}^T & 1 \end{pmatrix}\times \begin{pmatrix} H-b\hat{g}^T & b\\ 0 & 1 \end{pmatrix}\times \begin{pmatrix} I_{n-1} & 0\\ \hat{g}^T & 1 \end{pmatrix}. \end{align*} Because $H-b\hat{g}^T=A_g(x)$ by definition, we have that $A_g(x)$ is regular, and \begin{align*} \hat{S}=&~F\times \begin{pmatrix} I_{n-1} & 0\\ \hat{f}^T & 1 \end{pmatrix}\\ =&~\begin{pmatrix} I_{n-1} & 0\\ \hat{g}^T & 1 \end{pmatrix}^{-1}\times \begin{pmatrix} A_g(x) & b\\ 0 & 1 \end{pmatrix}^{-1}\\ =&~\begin{pmatrix} I_{n-1} & 0\\ -\hat{g}^T & 1 \end{pmatrix}\times \begin{pmatrix} (A_g(x))^{-1} & -(A_g(x))^{-1}b\\ 0 & 1 \end{pmatrix} \end{align*} as desired. This completes the proof. $\blacksquare$ Now, let $g(x)$ be defined by (\ref{GALE2}) and choose any $x\in \tilde{\Omega}$. Let $k(x)=\frac{1}{g_3(x)}g(x)$, $p=k(x)$ and $m=k(x)\cdot x$. Then, $f^G(p,m)=x$.
Clearly $f^G$ is differentiable at $(p,m)$, and $k$ is differentiable at $x$. Because $f^G$ is homogeneous of degree zero, we have that \[D_pf^G(p,m)p+D_mf^G(p,m)m=0.\] On the other hand, because $f^G$ satisfies Walras' law, \[m=(f^G)^T(p,m)p.\] Combining these equations, we obtain \[S_{f^G}(p,m)p=0.\] By Lemma 6, we have that $\tilde{S}_{f^G}(p,m)$ is regular, and thus the rank of $S_{f^G}(p,m)$ is $2$. Suppose that there exists $(q,w)\in \mathbb{R}^3_{++}\times \mathbb{R}_{++}$ such that $f^G(q,w)=x$ and $q$ is not proportional to $p$. Choose any $t\in [0,1]$, and define $(r,c)=(1-t)(p,m)+t(q,w)$. Suppose that $f^G(r,c)=y\neq x$. By Walras' law, $r\cdot x=c$. Because $r\cdot y\le c$, either $p\cdot y\le m$ or $q\cdot y\le w$, which contradicts the weak axiom. Therefore, we have that $f^G(r,c)=x$ for all $t\in [0,1]$. Hence, \[S_{f^G}(p,m)(q-p)=\lim_{t\downarrow 0}\frac{f^G((1-t)p+tq,(1-t)m+tw)-f^G(p,m)}{t}=0.\] This implies that the dimension of the kernel of $S_{f^G}(p,m)$ is greater than or equal to $2$, which contradicts the rank-nullity theorem. Therefore, no such $(q,w)$ exists, and we conclude that $\tilde{P}=\{(mg(x),m)|x\in\tilde{\Omega},\ m>0\}$. Incidentally, suppose that $f$ is a CoD that satisfies Walras' law, $g$ is an inverse demand function of $f$, $x=f(p,m)$, $f$ is differentiable at $(p,m)$, and $g$ is differentiable at $x$. Using Lemmas 3, 5, and 6, we can easily show that $g$ violates (\ref{JACOBI}) at $x$ if and only if $S_f(p,m)$ is not symmetric. Let $x=(1,1,1)$ and $(p,m)=(1,1,1,3)$. Because $g(x)$ defined by (\ref{GALE2}) violates (\ref{JACOBI}) at $x$, $S_{f^G}(p,m)$ is not symmetric, and by Theorem 3, we can conclude that $f^G$ is not a demand function. This is another proof that Gale's example is not a demand function. However, this argument is available only because $f$ is differentiable. To prove III) of Theorem 4, we need a more complicated argument because $f^{\succsim^g}$ is not necessarily differentiable.
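The sign computations for Gale's matrix in the previous subsections are easy to check numerically. The following Python sketch is our own verification aid, not part of the paper's argument: it takes $B$ to be the coefficient matrix of (\ref{GALE3}), recomputes the cyclic expression of Jacobi's integrability condition for $h(x)=Bx$ at $x=(1,1,1)$, and checks the signs of the two bordered determinants at a few sample points of $\mathbb{R}^3_{++}$.

```python
# Numerical sanity check (not part of the proofs) for the claims about
# h(x) = Bx, where B is the coefficient matrix in (GALE3).
# Pure Python, no external dependencies.

B = [[9.0, 12.0, 16.0],
     [16.0, 9.0, 12.0],
     [12.0, 16.0, 9.0]]

def h(x):
    """h(x) = Bx; since h is linear, dh_i/dx_j = B[i][j]."""
    return [sum(B[i][j] * x[j] for j in range(3)) for i in range(3)]

def jacobi_expr(x):
    """Cyclic expression of Jacobi's integrability condition for h at x."""
    hx = h(x)
    return sum(hx[i] * (B[j][k] - B[k][j])
               for (i, j, k) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)])

def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# h violates Jacobi's condition at x = (1,1,1): the expression is -444.
assert jacobi_expr([1.0, 1.0, 1.0]) == -444.0

# The symmetric part of B is the matrix C from the text, so the quadratic
# form w^T Dh(x) w equals w^T C w.
C = [[(B[i][j] + B[j][i]) / 2 for j in range(3)] for i in range(3)]
assert C == [[9.0, 14.0, 14.0], [14.0, 9.0, 14.0], [14.0, 14.0, 9.0]]

# Bordered determinant signs at sample points of the positive orthant.
for x in [(1.0, 1.0, 1.0), (2.0, 1.0, 3.0), (0.5, 4.0, 1.2)]:
    h1, h2, h3 = h(list(x))
    assert det([[9.0, 14.0, h1],
                [14.0, 9.0, h2],
                [h1, h2, 0.0]]) > 0
    assert det([[9.0, 14.0, 14.0, h1],
                [14.0, 9.0, 14.0, h2],
                [14.0, 14.0, 9.0, h3],
                [h1, h2, h3, 0.0]]) < 0
```

Of course, such checks only sample finitely many points; the determinant computations in the text establish the sign claims for every $x\in\mathbb{R}^3_{++}$.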
\subsection{Proof of I) of Theorem 4} First, we must define several symbols. In this definition, we abbreviate variables, but all symbols are actually functions on $\tilde{\Omega}^2$. For any $(x,v)\in \tilde{\Omega}^2$, we define\footnote{We restrict the operator $R$ to $\mbox{span}\{x,v\}$.} \[a_1=\frac{1}{\|x\|}x,\] \[a_2=\begin{cases} \frac{1}{\|v-(v\cdot a_1)a_1\|}(v-(v\cdot a_1)a_1) & \mbox{if }v\neq (v\cdot a_1)a_1,\\ 0 & \mbox{otherwise}, \end{cases}\] \[Py=(y\cdot a_1)a_1+(y\cdot a_2)a_2,\] \[Rw=(w\cdot a_1)a_2-(w\cdot a_2)a_1,\] \[v_1=\arg\min\{w\cdot a_1|w\in P\mathbb{R}^n_+,\ \|w\|=1,\ w\cdot a_2\ge 0\},\] \[v_2=\arg\min\{w\cdot a_1|w\in P\mathbb{R}^n_+,\ \|w\|=1,\ w\cdot a_2\le 0\}.\] \[\Delta=\{w\in \mbox{span}\{x,v\}|w\cdot Rv\le 0,\ w\cdot v_1\ge x\cdot v_1,\ w\cdot v_2\le x\cdot v_2\}.\] \[C=\|x\|\|v-(v\cdot a_1)a_1\|.\] Moreover, we define $y_1=y_2=x$ if $x$ is proportional to $v$, and otherwise, $y_i$ is the unique intersection of $\{s_1v|s_1\in \mathbb{R}\}\cap \{x+s_2Rv_i|s_2\in \mathbb{R}\}$. \noindent {\bf Lemma 7}. All the above symbols are well-defined. Moreover, the following results hold. \begin{enumerate}[i)] \item If $x$ is not proportional to $v$, then $\{a_1,a_2\}$ is the orthonormal basis of $\mbox{span}\{x,v\}$ derived from $x,v$ by the Gram-Schmidt method, and $P$ is the orthogonal projection from $\mathbb{R}^n$ onto $\mbox{span}\{x,v\}$.\footnote{As a result, $y\cdot w=Py\cdot w$ for any $y\in \mathbb{R}^n$ and $w\in \mbox{span}\{x,v\}$.} \item If $x$ is not proportional to $v$, then $R$ is the unique orthogonal transformation on $\mbox{span}\{x,v\}$ such that $Ra_1=a_2$ and $Ra_2=-a_1$. Moreover, if $T$ is an orthogonal transformation on $\mbox{span}\{x,v\}$ such that $w\cdot Tw=0$ for any $w\in \mbox{span}\{x,v\}$, then we must have either $T=R$ or $T=-R=R^{-1}=R^3$.\footnote{In particular, if $z\in \mbox{span}\{x,v\}\cap \tilde{\Omega}$ and $[x,z]\cap \{cv|c\in \mathbb{R}\}=\emptyset$, then $R(x,v)=R(z,v)$. 
Note that $[x,z]$ represents $\{(1-t)x+tz|t\in [0,1]\}$.} \item If $x$ is not proportional to $v$, then both $v_1$ and $v_2$ are continuous and single-valued at $(x,v)$. Moreover, $P\mathbb{R}^n_+=\{c_1v_1+c_2v_2|c_1,c_2\ge 0\}$. \item Both $y_1$ and $y_2$ are continuous and single-valued. Moreover, for any $(x,v)\in \tilde{\Omega}^2$, $y_1(x,v),\ y_2(x,v)\in \tilde{\Omega}$. \item $\Delta=(x+RP\mathbb{R}^n_+)\cap \{w\in \mbox{span}\{x,v\}|w\cdot Rv\le 0\}=\mbox{co}\{x,y_1,y_2\}$.\footnote{By this result, we have that $\Delta$ is a compact subset of $\tilde{\Omega}$.} \item $(y\cdot x)v-(y\cdot v)x=C\cdot RPy$ for any $y\in \mathbb{R}^n$. \end{enumerate} We omit the proof of Lemma 7. See the proof of Theorem 1 in Hosoya (2013) for more detailed arguments. We now prove I) of Theorem 4. For any $(x,v)\in \tilde{\Omega}^2$, let $y(\cdot;x,v)$ be the nonextendable solution to the following initial value problem: \begin{equation}\label{INTEG} \dot{y}(t)=(g(y(t))\cdot x)v-(g(y(t))\cdot v)x,\ y(0)=x. \end{equation} Note that, by Lemma 7, this equation can be rewritten as \begin{equation}\label{INTEG2} \dot{y}(t)=CRPg(y(t)),\ y(0)=x. \end{equation} Recall the following definition: \[w^*=(v\cdot x)v-(v\cdot v)x=CRv,\] \[t(x,v)=\inf\{t\ge 0|y(t;x,v)\cdot w^*\ge 0\}.\] We first show the well-definedness of $t(x,v)$. Actually, $t(x,v)=0$ if $x$ is proportional to $v$, and thus we can assume that $x$ is not proportional to $v$. By the Cauchy-Schwarz inequality, we have that \[x\cdot w^*<0.\] Moreover, \[\dot{y}(t;x,v)\cdot w^*=(CRPg(y(t;x,v)))\cdot (CRv)=C^2g(y(t;x,v))\cdot v>0.\] Furthermore, because $\dot{y}(t;x,v)\in RP\mathbb{R}^n_+$, by v) of Lemma 7, we have that $y(t;x,v)\in \Delta$ if and only if $t\ge 0$ and $y(t;x,v)\cdot w^*\le 0$. Because $\Delta$ is compact and $y(t;x,v)$ is nonextendable, we have that there exists $t^*>0$ such that $y(t^*;x,v)\notin\Delta$, which implies that $y(t^*;x,v)\cdot w^*>0$. This indicates that $t(x,v)$ is well-defined. 
Moreover, clearly $y(t;x,v)\cdot w^*=0$ if and only if $t=t(x,v)$, which implies that $y(t;x,v)$ is proportional to $v$ if and only if $t=t(x,v)$. Because $t(x,v)$ is the unique solution to the following equation: \[y(t;x,v)\cdot w^*=0,\] by the implicit function theorem, the function $t:(y,z)\mapsto t(y,z)$ is $C^k$ around $(x,v)$, and thus $u^g$ is also $C^k$ around $(x,v)$. By iv) of Lemma 7, we can easily verify that $u^g$ is continuous on $\tilde{\Omega}^2$,\footnote{Recall that $u^g(x,v)v\in [y_1(x,v),y_2(x,v)]$ for any $(x,v)\in\tilde{\Omega}^2$.} and thus $\succsim^g$ is a continuous binary relation on $\tilde{\Omega}$. It suffices to show that $\succsim^g$ is complete, p-transitive, and monotone. We introduce a lemma. \noindent {\bf Lemma 8}. Suppose that $x,v,z,w\in \Omega$, $x$ is not proportional to $v$, $z$ is not proportional to $w$, and $\mbox{span}\{x,v\}=\mbox{span}\{z,w\}$. If there exist $t_1,t_2\in \mathbb{R}$ such that $y(t_1;x,v)=y(t_2;z,w)$, then the trajectory of $y(t;x,v)$ is the same as that of $y(t;z,w)$. \noindent {\bf Proof}. Since $P(x,v)=P(z,w)$, we abbreviate both $P(x,v)$ and $P(z,w)$ as $P$. We abbreviate $R(x,v)$ as $R$. By ii) of Lemma 7, $R(z,w)$ is equal to either $R$ or $-R$. We define $s=1$ if $R(z,w)=R$ and $s=-1$ otherwise. Define \[z_1(t)=y((C(x,v))^{-1}t;x,v), z_2(t)=y(s(C(z,w))^{-1}t;z,w).\] By vi) of Lemma 7, \[\dot{z}_1(t)=RPg(z_1(t)),\ \dot{z}_2(t)=RPg(z_2(t)).\] Hence, both $z_1$ and $z_2$ are solutions to the following autonomous differential equation: \[\dot{z}=RPg(z).\] Moreover, we can easily check that both $z_1$ and $z_2$ are nonextendable. By assumption, there exist $t_3,t_4$ such that $z_1(t_3)=z_2(t_4)$, and thus the trajectory of $z_1$ must be equal to that of $z_2$. This completes the proof.
$\blacksquare$ Now, we prove that \begin{equation}\label{KEY} u^g(y,v)\ge u^g(z,v)\Leftrightarrow u^g(y,z)\ge 1(\Leftrightarrow y\succsim^gz) \end{equation} for all $x,v\in \tilde{\Omega}$ and $y,z\in\mbox{span}\{x,v\}\cap \tilde{\Omega}$.\footnote{That is, the function $y\mapsto u^g(y,v)$ represents $\succsim^g$ on $\tilde{\Omega}\cap \mbox{span}\{x,v\}$.} If $x$ is proportional to $v$, then (\ref{KEY}) holds trivially. Hence, we assume that $x$ is not proportional to $v$. If $z$ is proportional to $v$, then $z=av$, and by Lemma 8, $y(t(y,z);y,z)=y(t(y,v);y,v)$. Therefore, $u^g(z,v)=a$ and $u^g(y,v)=au^g(y,z)$, which implies that \[u^g(y,v)\ge u^g(z,v)\Leftrightarrow au^g(y,z)\ge a\Leftrightarrow u^g(y,z)\ge 1,\] as desired. Hence, we assume that $z$ is not proportional to $v$. Suppose that $y$ is proportional to $z$. If $u^g(y,z)=1$, then $y=z$, and thus $u^g(y,v)=u^g(z,v)$. Suppose that $u^g(y,z)>1$. Define \[c(s)=\frac{\|y(t(y,(1-s)y+sv);y,(1-s)y+sv)\|}{\|y(t(z,(1-s)z+sv);z,(1-s)z+sv)\|}.\] By assumption, $c(0)=u^g(y,z)>1$. If $c(1)\le 1$, then $c(s)=1$ for some $s\in [0,1]$ by the intermediate value theorem. Thus, there exist $t',t''$ such that $y(t';y,v)=y(t'';z,v)$, and by Lemma 8, $c(s)\equiv 1$. This implies that $u^g(y,z)=c(0)=1$, which is a contradiction. Hence, $c(1)>1$, and thus, $u^g(y,v)>u^g(z,v)$. Finally, if $u^g(y,z)<1$, then by a symmetric argument, we can show that $u^g(y,v)<u^g(z,v)$, and thus (\ref{KEY}) holds when $y$ is proportional to $z$. Next, suppose that $y$ is not proportional to $z$.
By Lemma 8, the trajectory of $y(t;y,v)$ is the same as that of $y(t;u^g(y,z)z,v)$, and thus \begin{align} u^g(u^g(y,z)z,v)=&~\frac{\|y(t(u^g(y,z)z,v);u^g(y,z)z,v)\|}{\|v\|}\nonumber \\ =&~\frac{\|y(t(y,v);y,v)\|}{\|v\|}\label{KEY2}\\ =&~u^g(y,v).\nonumber \end{align} Because we have already shown that $u^g(u^g(y,z)z,v)\ge u^g(z,v)$ if and only if $u^g(y,z)\ge 1$, we have that (\ref{KEY}) holds.\footnote{We mention that (\ref{KEY2}) holds unconditionally: that is, if $\dim(\mbox{span}\{y,z,v\})\le 2$, then \[u^g(u^g(y,z)z,v)=u^g(y,v)\] even when $y$ is proportional to $z$ or $z$ is proportional to $v$. The proof is easy and thus omitted.} Completeness and p-transitivity of $\succsim^g$ directly follow from (\ref{KEY}).\footnote{For completeness, note that by (\ref{KEY}), \[u^g(x,v)\ge 1\Leftrightarrow u^g(v,x)\le 1.\]} It suffices to show that $\succsim^g$ is monotone. Actually, we prove that \[v\gneq x\Rightarrow v\succ^gx.\] Let $(x,v)\in \tilde{\Omega}^2$ and $v\gneq x$. If $x$ is proportional to $v$, then clearly $v\succ^gx$. Hence, we assume that $x$ is not proportional to $v$. By the completeness of $\succsim^g$, it suffices to show that $u^g(x,v)<1$. Define $z=v-x$. Then, $z\in \mbox{span}\{x,v\}$ and $z\gneq 0$. By i), ii), and vi) of Lemma 7, \[\dot{y}(t;x,v)\cdot Rz>0,\] and thus, \begin{align*} u^g(x,v)v\cdot Rz=&~y(t(x,v);x,v)\cdot Rz\\ >&~y(0;x,v)\cdot Rz\\ =&~x\cdot Rz\\ =&~v\cdot Rz. \end{align*} Therefore, to prove that $u^g(x,v)<1$, it suffices to show that $v\cdot Rz<0$. Because $x\cdot a_2=0$, \begin{align*} v\cdot Rz=&~v\cdot [(z\cdot a_1)a_2-(z\cdot a_2)a_1]\\ =&~(z\cdot a_1)(v\cdot a_2)-(v\cdot a_2)(v\cdot a_1)\\ =&~-(v\cdot a_2)(x\cdot a_1). \end{align*} Since $x\cdot a_1>0$ by definition and $v\cdot a_2\ge 0$ by the Cauchy-Schwarz inequality, we have that $v\cdot Rz\le 0$. If $v\cdot Rz=0$, then $v\cdot a_2=0$, and thus $v$ is proportional to $x$, which contradicts our initial assumption.
Hence, we have that $v\cdot Rz<0$, which completes the proof of I). $\blacksquare$ \subsection{Proof of II) of Theorem 4} Let $D_xu^g$ denote the partial derivative of $u^g$ with respect to the first variable or its transpose.\footnote{We think that this abbreviation of the transpose symbol does not cause any confusion.} We first present a lemma. \noindent {\bf Lemma 9}. Suppose that $(x,v)\in \tilde{\Omega}^2$ and $x$ is not proportional to $v$. Then, the following results hold. \begin{enumerate}[1)] \item $D_xu^g(x,v)\neq 0$ and $D_xu^g(x,v)x>0$. \item $PD_xu^g(x,v)=\lambda(x)Pg(x)$, where $\lambda(x)=\frac{D_xu^g(x,v)x}{g(x)\cdot x}>0$. \end{enumerate} \noindent {\bf Proof}. If $y,z,w\in \tilde{\Omega}$ are linearly dependent, then by (\ref{KEY2}), $u^g(y,w)=u^g(u^g(y,z)z,w)$. Therefore, for any $t>0$, \[t=u^g(tv,v)=u^g(u^g(tv,x)x,v),\ 1=u^g(x,x)=u^g(u^g(x,v)v,x).\] Taking derivatives with respect to $t$ on both sides of the former equation at $t=u^g(x,v)$, we have that, \begin{align*} 1=&~D_xu^g(u^g(u^g(x,v)v,x)x,v)x\times [D_xu^g(u^g(x,v)v,x)v]\\ =&~D_xu^g(x,v)x\times [D_xu^g(u^g(x,v)v,x)v], \end{align*} which implies that $D_xu^g(x,v)\neq 0$ and $D_xu^g(x,v)x\neq 0$. Since $u^g(tx,v)>u^g(x,v)$ for any $t>1$, $D_xu^g(x,v)x>0$ and 1) holds. To prove 2), define $z=\dot{y}(0;x,v)$. By (\ref{KEY2}), $u^g(y(t;x,v),v)=u^g(x,v)$. Taking derivatives with respect to $t$ at $t=0$, we have that $D_xu^g(x,v)z=0$. Note that because $z=CRPg(x)$, $g(x)\cdot z=0$, and thus both $Pg(x)$ and $PD_xu^g(x,v)$ belong to $\{w\in \mbox{span}\{x,v\}|w\cdot z=0\}$, whose dimension is exactly $1$. Hence, these are linearly dependent, and thus 2) holds. This completes the proof. $\blacksquare$ Suppose that $x\in f^{\succsim^g}(g(x),g(x)\cdot x)$ for all $x\in \tilde{\Omega}$. Choose any $x\in \tilde{\Omega}$ and $w\in \mathbb{R}^n$ such that $w\cdot g(x)=0$. If $w=0$, then clearly $w^TDg(x)w=0$. Otherwise, we can assume without loss of generality that $x+w,x-w\in \tilde{\Omega}$. 
Let $v=x+w$, $x(t)=x+tw$ and $c(t)=u^g(x(t),v)$. Since $x\in f^{\succsim^g}(g(x),g(x)\cdot x)$, we have that $c(0)\ge c(t)$ for any $t\in [-1,1]$. Note that $c'(t)=D_xu^g(x(t),v)w$ for all $t\in ]-1,1[$. By the mean value theorem, there exists a positive sequence $(t_m)$ such that $t_m\downarrow 0$ as $m\to \infty$ and $c'(t_m)\le 0$ for all $m$. Then, \begin{align*} 0\ge&~\limsup_{m\to \infty}\frac{D_xu^g(x(t_m),v)w}{t_m}\\ =&~\limsup_{m\rightarrow \infty}\frac{\lambda(x(t_m))g(x(t_m))\cdot w}{t_m}\\ \ge&~M\limsup_{m\rightarrow\infty}\frac{g(x(t_m))\cdot w}{t_m}\\ =&~M\limsup_{m\rightarrow\infty}\frac{g(x(t_m))-g(x)}{t_m}\cdot w\\ =&~M[w^TDg(x)w], \end{align*} where $M=\max_{t\in [0,\frac{1}{2}]}\lambda(x(t))>0$. Hence, $w^TDg(x)w\le 0$ for all $x\in \tilde{\Omega}$ and $w\in \mathbb{R}^n$ such that $w\cdot g(x)=0$. Second, suppose that there exists $x\in \tilde{\Omega}$ such that $x\notin f^{\succsim^g}(g(x),g(x)\cdot x)$. By p-transitivity, continuity, and monotonicity of $\succsim^g$, there exists $v\in\tilde{\Omega}$ such that $g(x)\cdot v<g(x)\cdot x$ and $v\sim^g x$. Clearly, $v$ is not proportional to $x$. Define $x(t)=(1-t)x+tv$. Then, \begin{align*} \left.\frac{d}{dt}[u^g(x(t),v)]\right|_{t=0}=&~D_xu^g(x,v)(v-x)\\ =&~\lambda(x)g(x)\cdot (v-x)\\ =&~\lambda(x)[g(x)\cdot v-g(x)\cdot x]<0, \end{align*} and thus, $u^g(x(t),v)<u^g(x,v)$ for a sufficiently small $t>0$, which implies that $x(t)\not\succsim^gx$, and thus $\succsim^g$ is not convex. Hence, for proving the first claim of II) of Theorem 4, it suffices to show that if $w^TDg(x)w\le 0$ for all $x\in \tilde{\Omega}$ and $w\in \mathbb{R}^n$ such that $w\cdot g(x)=0$, then $\succsim^g$ is convex. Suppose not. Then, there exist $x,v\in \tilde{\Omega}$ and $t\in ]0,1[$ such that $v\sim^gx$ and $(1-t)x+tv\not\succsim^gx$. Clearly, $v$ is not proportional to $x$. Let $w=v-x$, $x(s)=x+sw$, and \[s^*=\max[\arg\min\{u^g(x(s),v)|s\in [0,1]\}].\] Since $u^g(x(t),v)<1$, we have $s^*\in ]0,1[$. 
Define $c(s)=u^g(x(s),v)$. Then, $c(s^*)\le c(s)$ for any $s\in [0,1]$, and thus, $c'(s^*)=0$. Hence, $D_xu^g(x(s^*),v)w=0$. Let $p=PD_xu^g(x(s^*),v)$. Then, $p\cdot w=0$. By 1) of Lemma 9, $p\cdot x(s^*)>0$, and thus $p\neq 0$. Consider the following equation: \[f(a,b)=u^g(bp+x(s^*+a),v)=u^g(x(s^*),v).\] Note that $\frac{\partial f}{\partial b}(0,0)=\|p\|^2\neq 0$. Applying the implicit function theorem, we can prove the existence of a $C^k$ function $b:]-\varepsilon,\varepsilon[\to \mathbb{R}$ such that $b(0)=0$ and $f(a,b(a))=u^g(x(s^*),v)$ for any $a\in ]-\varepsilon,\varepsilon[$. Differentiating both sides with respect to $a$, we have \[D_xu^g(b(a)p+x(s^*+a),v)[b'(a)p+w]=0.\] Hence, \[0=D_xu^g(x(s^*),v)[b'(0)p+w]=b'(0)\|p\|^2,\] which implies that $b'(0)=0$. By Lemma 9, if $a>0$ is so small that $D_xu^g(b(a)p+x(s^*+a),v)p>0$, then \begin{align*} 0=&~\limsup_{a'\downarrow a}\frac{1}{a'-a}[-D_xu^g(b(a)p+x(s^*+a),v)[b'(a')p+w]\\ &~+D_xu^g(b(a)p+x(s^*+a),v)[b'(a')p+w]]\\ =&~\limsup_{a'\downarrow a}\frac{1}{a'-a}[-\lambda(b(a)p+x(s^*+a))g(b(a)p+x(s^*+a))\cdot[b'(a')p+w]\\ &~+D_xu^g(b(a)p+x(s^*+a),v)[b'(a')p+w]]\\ =&~\limsup_{a'\downarrow a}\frac{1}{a'-a}[\lambda(b(a)p+x(s^*+a))\\ &~\times [g(b(a')p+x(s^*+a'))-g(b(a)p+x(s^*+a))]\cdot[b'(a')p+w]\\ &~+D_xu^g(b(a)p+x(s^*+a),v)[(b'(a')p+w)-(b'(a)p+w)]]\\ =&~\lambda(b(a)p+x(s^*+a))[b'(a)p+w]^TDg(b(a)p+x(s^*+a))[b'(a)p+w]\\ &~+D_xu^g(b(a)p+x(s^*+a),v)p\times \limsup_{a'\downarrow a}\frac{b'(a')-b'(a)}{a'-a}, \end{align*} and thus, $\limsup_{a'\downarrow a}\frac{b'(a')-b'(a)}{a'-a}\ge 0$. By the same arguments, we can show that $\limsup_{a'\uparrow a}\frac{b'(a')-b'(a)}{a'-a}\ge 0$. Now, choose any such $a>0$. Define $h(\theta)=b'(\theta)a-b'(a)\theta$. Then, $h(0)=h(a)=0$, and thus, there exists $\theta^*\in ]0,a[$ such that either $h(\theta^*)\ge h(\theta)$ for any $\theta\in [0,a]$ or $h(\theta^*)\le h(\theta)$ for any $\theta\in [0,a]$.
If the former holds, \begin{align*} 0\ge&~\limsup_{\theta\downarrow \theta^*}\frac{h(\theta)-h(\theta^*)}{\theta-\theta^*}\\ =&~a\limsup_{\theta\downarrow \theta^*}\frac{b'(\theta)-b'(\theta^*)}{\theta-\theta^*}-b'(a), \end{align*} and thus we have that $b'(a)\ge 0$. By the symmetrical arguments, we can show that $b'(a)\ge 0$ in the latter case. Hence, we have that $b'(a)\ge 0$ for a sufficiently small $a>0$, and thus, $b(a)\ge 0$ for a sufficiently small $a>0$. On the other hand, $D_xu^g(x(s^*),v)p=\|p\|^2>0$, and thus, \[D_xu^g(b'p+x(s^*+a),v)p>0\mbox{ for any }b'\in [0,b(a)],\] for a sufficiently small $a>0$. Therefore, \[u^g(x(s^*+a),v)\le u^g(b(a)p+x(s^*+a),v)=u^g(x(s^*),v),\] which contradicts the definition of $s^*$. Hence, the first claim of II) is correct. Next, suppose that $w^TDg(x)w<0$ for all $x\in \mathbb{R}^n_{++}$ and $w\in \mathbb{R}^n$ such that $w\neq 0$ and $w\cdot g(x)=0$. Choose any $(p,m)\in \mathbb{R}^n_{++}\times\mathbb{R}_{++}$. For any $x\in \tilde{\Omega}$ such that $p\cdot x=m$, if $g(x)$ is not proportional to $p$, then there exists $v\in \tilde{\Omega}$ such that $p\cdot v\le p\cdot x$ and $v\succ^gx$, and thus, $x\notin f^{\succsim^g}(p,m)$. Therefore, if there is no $x$ such that $p\cdot x=m$ and $g(x)$ is proportional to $p$, then $f^{\succsim^g}(p,m)=\emptyset$. Next, suppose that $p=ag(x)$ for some $a>0$ and $m=p\cdot x$. We have already shown that $x\in f^{\succsim^g}(p,m)$. Suppose that $\succsim^g$ is strictly convex. Then, $x\succ^gv$ for any $v\in \tilde{\Omega}$ such that $p\cdot v\le m$ and $v\neq x$, which implies that $f^{\succsim^g}(p,m)=\{x\}$. Hence, it suffices to show that $\succsim^g$ is strictly convex. Suppose not. Then, there exist $y,z\in\tilde{\Omega}$ and $t\in ]0,1[$ such that $y\sim^gz$, $y\neq z$ and $(1-t)y+tz\not\succ^gy$. Define $y(s)=(1-s)y+sz$. We have already shown that $\succsim^g$ is convex, and thus $y(s)\succsim^gy$ for any $s\in [0,1]$. 
This implies that $u^g(y(t),y)\le u^g(y(s),y)$ for any $s\in [0,1]$. By the first-order condition and Lemma 9, \[g(y(t))\cdot (z-y)=0,\] and by the mean value theorem, there exists a sequence $(s_m)$ such that $s_m\downarrow t$ as $m\to \infty$ and \[g(y(s_m))\cdot (z-y)\ge 0\] for all $m$. Therefore, \[(z-y)^TDg(y(t))(z-y)=\lim_{m\to \infty}\frac{[g(y(s_m))-g(y(t))]\cdot (z-y)}{s_m-t}\ge 0,\] which contradicts our initial assumption. This completes the proof of II). $\blacksquare$ \subsection{Proof of III) of Theorem 4} We separate the proof into two steps. \noindent {\bf Step 1}. Suppose that $g:\tilde{\Omega}\to \mathbb{R}^n_{++}$ is $C^k$. Then, ii)-v) of III) of Theorem 4 are equivalent. \noindent {\bf Proof of Step 1}. First, we show that iii) is equivalent to iv). Suppose that iii) holds. Then, for any $x,z\in\tilde{\Omega}$, \[\frac{d}{dt}[u^g_v(y(t;x,z))]=0,\] and thus $u^g_v(x)=u^g_v(u^g(x,z)z)$. Hence, \[u^g_v(x)\ge u^g_v(z)\Leftrightarrow u^g_v(u^g(x,z)z)\ge u^g_v(z)\Leftrightarrow u^g(x,z)\ge 1\Leftrightarrow x\succsim^gz,\] which implies that iv) holds. Conversely, suppose that iv) holds. Choose any $x\in \tilde{\Omega}$ and any linearly independent family $v_1,...,v_{n-1}\in\mathbb{R}^n$ such that $v_i\cdot g(x)=0$ and $x+v_i\in\tilde{\Omega}$ for all $i\in \{1,...,n-1\}$. Since both $v_i$ and $\dot{y}(0;x,x+v_i)$ are orthogonal to $P(x,x+v_i)g(x)$, $\dot{y}(0;x,x+v_i)$ is proportional to $v_i$. Since $y(t;x,x+v_i)\sim^gx$ for any $t$, we have that $\nabla u^g_v(x)$ is orthogonal to $\dot{y}(0;x,x+v_i)$, and thus $\nabla u^g_v(x)\cdot v_i=0$ for all $i\in \{1,...,n-1\}$. Therefore, $\nabla u^g_v(x)=\lambda(x)g(x)$ for some $\lambda(x)\in\mathbb{R}$, which implies that iii) holds. Second, we show that iv) is equivalent to ii). It is obvious that iv) implies ii). Conversely, suppose that ii) holds. First, choose any $x,z,v\in \tilde{\Omega}$.
Then, \[u^g(x,v)v\sim^gx\sim^gu^g(x,z)z\sim^gu^g(u^g(x,z)z,v)v,\] which implies that $u^g(x,v)=u^g(u^g(x,z)z,v)$. Next, choose any $x,z,v\in \tilde{\Omega}$ and suppose that $u_v^g(x)\ge u_v^g(z)$. Then, \[u^g(u^g(x,z)z,v)=u^g(x,v)\ge u^g(z,v),\] which implies that $u^g(x,z)\ge 1$. Hence, $x\succsim^gz$. Conversely, suppose that $x\succsim^gz$. Then, $u^g(x,z)\ge 1$, and thus, \[u_v^g(x)=u^g(u^g(x,z)z,v)\ge u_v^g(z).\] Hence, $u_v^g$ represents $\succsim^g$. Fix any $v\in \tilde{\Omega}$ and choose any $z\in \tilde{\Omega}$ such that $z$ is not proportional to $v$. Then, any $x\in \tilde{\Omega}$ fails to be proportional to at least one of $v$ and $z$. If $x$ is not proportional to $v$, clearly $u^g_v$ is $C^k$ around $x$. Otherwise, \[u_v^g(y)=u^g(u^g(y,z)z,v)\] for any $y\in \tilde{\Omega}$ and the right-hand side is $C^k$ in $y$ around $x$. Therefore, $u_v^g$ is $C^k$ on $\tilde{\Omega}$, and thus iv) holds. It remains to show that iii) is equivalent to v). By Frobenius' theorem, iii) implies v). Conversely, suppose that v) holds. We introduce a lemma. \noindent {\bf Lemma 10}. For any $v\in \tilde{\Omega}$, there exist open neighborhoods $W^v,W_1^v$ of $v$, $\hat{u}^v:W_1^v\to \mathbb{R}$, and $\lambda^v:W_1^v\to \mathbb{R}_{++}$ such that \begin{enumerate}[1)] \item $u^g(x,z)=u^g(u^g(x,y)y,z)$ for any $x,y,z\in W^v$, \item $y([0,t(x,y)];x,y)\subset W_1^v$ for any $x,y\in W^v$, \item $\hat{u}^v$ is $C^k$ on $W_1^v$ and $\nabla\hat{u}^v(x)=\lambda^v(x)g(x)$ for any $x\in W_1^v$. \end{enumerate} \noindent {\bf Proof}. Fix any $v\in \tilde{\Omega}$. By Frobenius' theorem, there exists an open and convex neighborhood $W_1^v\subset \tilde{\Omega}$ of $v$, $\hat{u}^v:W_1^v\to \mathbb{R}$, and $\lambda^v:W_1^v\to \mathbb{R}_{++}$ such that $\hat{u}^v$ is $C^k$ and $\nabla\hat{u}^v(x)=\lambda^v(x)g(x)$ for all $x\in W_1^v$.
Since $y_1,y_2$ are continuous on $\tilde{\Omega}^2$ and $y_1(v,v)=v=y_2(v,v)$, $U_1^v=y_1^{-1}(W_1^v)\cap y_2^{-1}(W_1^v)$ is open and includes $(v,v)$.\footnote{See Lemma 7.} Let $W_2^v\subset W_1^v$ be an open, convex neighborhood of $v$ such that $W_2^v\times W_2^v\subset U_1^v$, $U_2^v=y_1^{-1}(W_2^v)\cap y_2^{-1}(W_2^v)$, and $W^v\subset W_2^v$ be an open neighborhood of $v$ such that $W^v\times W^v\subset U_2^v$. Clearly, 3) holds. To show 2), choose any $x,y\in W^v$. If $x$ is proportional to $y$, then $t(x,y)=0$, and thus 2) holds. Otherwise, since $x,y\in W^v$, we have that $(x,y)\in U_2^v$, and thus, $y_1(x,y),y_2(x,y)\in W_2^v$. Since $W_2^v$ is convex, we have that $\Delta(x,y)\subset W_2^v\subset W_1^v$, and thus, $y([0,t(x,y)];x,y)\subset \Delta(x,y)\subset W_1^v$. Hence, 2) holds. To show 1), suppose that $x,y,z\in W^v$. If these vectors are linearly dependent, then we have shown that $u^g(x,z)=u^g(u^g(x,y)y,z)$ holds in (\ref{KEY2}). Otherwise, we have already shown that $y([0,t(x,y)];x,y)\subset W_2^v$, and thus, $u^g(x,y)y\in W_2^v$. Hence, $y_1(u^g(x,y)y,z),y_2(u^g(x,y)y,z)\in W_1^v$, and thus, $\Delta(u^g(x,y)y,z)\subset W_1^v$. Therefore, $y([0,t(u^g(x,y)y,z)];u^g(x,y)y,z)\subset W_1^v$. Moreover, we have already proved that $y([0,t(x,z)];x,z)\subset W_1^v$. By easy computations, \[\frac{d}{dt}\hat{u}^v(y(t;x,y))=0\mbox{ for any }t\in [0,t(x,y)],\] \[\frac{d}{dt}\hat{u}^v(y(t;u^g(x,y)y,z))=0\mbox{ for any }t\in [0,t(u^g(x,y)y,z)],\] and \[\frac{d}{dt}\hat{u}^v(y(t;x,z))=0\mbox{ for any }t\in [0,t(x,z)].\] Hence, we obtain \[\hat{u}^v(u^g(x,z)z)=\hat{u}^v(x)=\hat{u}^v(u^g(x,y)y)=\hat{u}^v(u^g(u^g(x,y)y,z)z).\] Since $\nabla\hat{u}^v(w)\gg 0$ for all $w\in W_1^v$, we have that $u^g(x,z)=u^g(u^g(x,y)y,z)$. Hence, 1) holds. This completes the proof. $\blacksquare$ Choose any $(x,v)\in \tilde{\Omega}^2$. If $x$ is not proportional to $v$, then $u^g$ is $C^k$ around $(x,v)$. Otherwise, there exists $s>0$ such that $x=sv$. 
Let $W^x$ be the set defined in Lemma 10, and choose any $z\in W^x$ that is not proportional to $x$. Let $W'=W^x\setminus \{tz|t\in\mathbb{R}\}$ and $W''=s^{-1}W'$. Then, $W'$ is an open neighborhood of $x$, $W''$ is an open neighborhood of $v$, and for any $(y,w)\in W'\times W''$, \[u^g(y,w)=su^g(y,sw)=su^g(u^g(y,z)z,sw)=u^g(u^g(y,z)z,w),\] where the right-hand side is $C^k$ around $(x,v)$. Hence, $u^g$ is $C^k$ on $\tilde{\Omega}^2$. Define \[X=\{x\in \tilde{\Omega}|\exists \mu(x)\in\mathbb{R}\mbox{ such that }D_xu^g(x,v)=\mu(x)g(x)\}.\] To prove iii), it suffices to show that $X=\tilde{\Omega}$. First, we show that $X$ includes some open neighborhood of $L=\{sv|s>0\}$. Fix any $s>0$. Let $x=sv$ and choose $W^x, W_1^x, \hat{u}^x, \lambda^x$ as in Lemma 10. Fix any $y\in W^x$. Choose any $z\in W^x$ such that $\hat{u}^x(z)=\hat{u}^x(y)$. Since \[\frac{d}{dt}\hat{u}^x(y(t;y,z))\equiv 0\] for all $t\in [0,t(y,z)]$, we have that $\hat{u}^x(y)=\hat{u}^x(u^g(y,z)z)$, and thus, $u^g(y,z)=1$. Hence, \[u^g(y,v)=su^g(y,x)=su^g(u^g(y,z)z,x)=su^g(z,x)=u^g(z,v).\] Conversely, choose any $z\in W^x$ such that $u^g(y,v)=u^g(z,v)$. Then, \[u^g(z,v)=u^g(y,v)=su^g(y,x)=su^g(u^g(y,z)z,x)=u^g(u^g(y,z)z,v),\] which implies that $u^g(y,z)=1$. Hence, \[\hat{u}^x(y)=\hat{u}^x(u^g(y,z)z)=\hat{u}^x(z).\] Therefore, $\hat{u}^x(z)=\hat{u}^x(y)$ if and only if $u^g(z,v)=u^g(y,v)$. By the preimage theorem, the set $(\hat{u}^x)^{-1}(\hat{u}^x(y))$ is an $(n-1)$-dimensional $C^k$ manifold and both $\nabla\hat{u}^x(y)$ and $D_xu^g(y,v)$ are in the orthogonal complement of the tangent space $T_y((\hat{u}^x)^{-1}(\hat{u}^x(y)))$.\footnote{See Section 1.4 of Guillemin and Pollack (1974).} Hence, there exists $c\in \mathbb{R}$ such that \[D_xu^g(y,v)=c\nabla\hat{u}^x(y)=c\lambda^x(y)g(y).\] Therefore, we have proved that $\cup_{s>0}W^{sv}\subset X$. Second, fix any $x\in \tilde{\Omega}$. It suffices to show that $x\in X$. If $x$ is proportional to $v$, then it is clear.
Otherwise, define $A_x$ as the trajectory of $y(\cdot;x,v)$ and $B_x=A_x\cap Y$, where $Y$ is the interior of $X$. It suffices to show that $x\in B_x$. To show this, we prove that $B_x=A_x$. Since $A_x$ is connected, it suffices to show that $B_x$ is nonempty, open, and closed in $A_x$. Clearly, $B_x$ is open in $A_x$. Since $u^g(x,v)v\in B_x$, $B_x$ is nonempty. Hence, it suffices to show that $B_x$ is closed in $A_x$. Suppose that $(z^m)$ is a sequence on $B_x$ that converges to $z^*\in A_x$. It suffices to show that $z^*\in B_x$. Without loss of generality, we assume that $z^*\neq u^g(x,v)v$. Choose $W^{z^*},W_1^{z^*},\hat{u}^{z^*},\lambda^{z^*}$ as in Lemma 10. Since $W^{z^*}$ is open in $\tilde{\Omega}$, there exists $m$ such that $z^m\in W^{z^*}$. By the definition of $A_x$ and Lemma 8, there exists $t^m$ such that $y(t^m;z^*,v)=z^m$. Since $z^m\in B_x$, there exists an open set $W_{z^m}\subset W^{z^*}$ and $\mu:W_{z^m}\to\mathbb{R}$ such that $D_xu^g(z,v)=\mu(z)g(z)$ for any $z\in W_{z^m}$. Let $W_3$ be an open neighborhood of $z^*$ such that $W_3\subset W^{z^*}$ and $y(t^m;z,v)\in W_{z^m}$ for any $z\in W_3$. By the local submersion theorem,\footnote{Again, see Section 1.4 of Guillemin and Pollack (1974).} there exist open $W_4,V\subset \mathbb{R}^n$, and $\phi:V\rightarrow W_4$ such that $z^*\in W_4\subset W_3$, $V$ is convex, $\phi$ is a bijection from $V$ onto $W_4$, both $\phi$ and $\phi^{-1}$ are $C^k$, and $(\hat{u}^{z^*}\circ \phi)(w)=w^1$ for any $w\in V$. Now, suppose that $y,z\in W_4$, and $\hat{u}^{z^*}(y)=\hat{u}^{z^*}(z)$. Define $z_1(s)$ and $z_2(s)$ such that \[z_1(s)=\phi((1-s)\phi^{-1}(y)+s\phi^{-1}(z)),\ z_2(s)=y(t^m;z_1(s),v).\] Then, $\hat{u}^{z^*}(z_2(s))=\hat{u}^{z^*}(z_1(s))=\hat{u}^{z^*}(y)$ for any $s\in [0,1]$. 
On the other hand, $z_2(s)\in W_{z^m}$ for any $s\in [0,1]$, and thus, \[\frac{d}{ds}[u^g(z_2(s),v)]=0,\] which implies that \[u^g(y,v)=u^g(z_1(0),v)=u^g(z_2(0),v)=u^g(z_2(1),v)=u^g(z_1(1),v)=u^g(z,v).\] Symmetrically, we can show that if $u^g(y,v)=u^g(z,v)$, then $\hat{u}^{z^*}(y)=\hat{u}^{z^*}(z)$. Hence, $u^g(y,v)=u^g(z,v)$ if and only if $\hat{u}^{z^*}(y)=\hat{u}^{z^*}(z)$, and thus, we can prove $W_4\subset X$ using the preimage theorem. Therefore, v) implies iii), as desired. This completes the proof of Step 1. $\blacksquare$ \noindent {\bf Step 2}. Suppose that $f:P\to \tilde{\Omega}$ is a CoD such that $g$ is an inverse demand function of $f$, and $f=f^{\succsim^g}$. Then, i) and ii) in III) of Theorem 4 are equivalent. \noindent {\bf Proof of Step 2}. Clearly ii) implies i). Conversely, suppose that i) holds. Then, there exists a weak order $\succsim$ such that $f^{\succsim^g}=f^{\succsim}$. First, we note the following fact. Choose any $x,v\in \tilde{\Omega}$, and define \[x_0^m=x,\] \[x_{i+1}^m=x_i^m+\frac{t(x,v)}{m}CRPg(x_i^m).\] The finite sequence $x_0^m,...,x_m^m$ is the explicit Euler approximation of the equation (\ref{INTEG}), and thus $x_m^m\to y(t(x,v);x,v)$ as $m\to \infty$. Moreover, because $g(x_i^m)\cdot x_i^m=g(x_i^m)\cdot x_{i+1}^m$, we have that $x_i^m\succ x_{i+1}^m$, and thus $x\succ x_m^m$. This implies that if $0<c<u^g(x,v)$, then $x\succ cv$. Now, suppose that ii) is violated. Then, there exist $x,y,z\in \tilde{\Omega}$ such that $x\succsim^gy$, $y\succsim^gz$, but $z\succ^gx$. This implies that $u^g(z,x)>1$. Choose any $a\in ]1,u^g(z,x)[$. Then, $z\succ ax$. Because $ax\gg x$, we have that $u^g(ax,y)>1$, and thus there exists $b\in ]1,u^g(ax,y)[$. Then, $ax\succ by$, and thus $z\succ by$. Because $by\gg y$, we have that $u^g(by,z)>1$, and thus $by\succ z$, which is a contradiction. This completes the proof of Step 2. $\blacksquare$ Clearly, Steps 1-2 imply III) of Theorem 4. This completes the proof. 
$\blacksquare$ \subsection{Proof of Theorem 5} Suppose that $\succsim^g$ is transitive. By Step 1 in the proof of III) of Theorem 4, it is represented by $u^g_v$, and $u^g_v$ is $C^1$. By Lagrange's multiplier rule, we have that \[\nabla u^g_v(x)=\lambda(x)g(x)\] for some $\lambda(x)>0$. Now, choose any piecewise $C^1$ closed curve $x:[0,T]\to \tilde{\Omega}$. Then, \[u^g_v(x(0))=u^g_v(x(T)),\] and thus, there exists a non-null set $S\subset [0,T]$ such that for all $t\in S$, \[\frac{d}{dt}u^g_v(x(t))\le 0.\] This implies that for all $t\in S$, \[g(x(t))\cdot \dot{x}(t)\le 0,\] and thus $g$ satisfies Ville's axiom. Conversely, suppose that $\succsim^g$ is not transitive. Then, there exist $x,y,z\in \tilde{\Omega}$ such that $x\succsim^gy$, $y\succsim^gz$, and $z\succ^gx$. Because $\succsim^g$ is p-transitive, we have that $x,y,z$ are linearly independent. Because $z\succ^gx$, we have that $u^g(x,z)<1$. Consider the following differential equation \[\dot{z}^1(t;a)=(g(z^1(t;a))\cdot x)z-(g(z^1(t;a))\cdot z)x+az,\ z^1(0;a)=x.\] We have that $z^1(t;0)=y(t;x,z)$, and thus for sufficiently small $a>0$, there exists $t_1>0$ such that $z^1(t_1;a)=\alpha z$ for some $\alpha\in ]0,1[$. Note that, \[g(z^1(t;a))\cdot \dot{z}^1(t;a)=az\cdot g(z^1(t;a))>0\] for all $t\in [0,t_1]$. Because $u^g(y,z)\ge 1$, by (\ref{KEY}), we have that $u^g(z,y)\le 1$, and thus $u^g(\alpha z,y)<1$. Consider the following differential equation \[\dot{z}^2(t;b)=(g(z^2(t;b))\cdot \alpha z)y-(g(z^2(t;b))\cdot y)\alpha z+by,\ z^2(0;b)=\alpha z.\] We have that $z^2(t;0)=y(t;\alpha z,y)$, and thus for sufficiently small $b>0$, there exists $t_2>0$ such that $z^2(t_2;b)=\beta y$ for some $\beta\in ]0,1[$. Again note that, \[g(z^2(t;b))\cdot \dot{z}^2(t;b)=by\cdot g(z^2(t;b))>0\] for all $t\in [0,t_2]$. Because $u^g(x,y)\ge 1$, we have that $u^g(y,x)\le 1$, and thus $u^g(\beta y,x)<1$.
Consider the following differential equation \[\dot{z}^3(t;c)=(g(z^3(t;c))\cdot \beta y)x-(g(z^3(t;c))\cdot x)\beta y+cx,\ z^3(0;c)=\beta y.\] We have that $z^3(t;0)=y(t;\beta y,x)$, and thus for sufficiently small $c>0$, there exists $t_3>0$ such that $z^3(t_3;c)=\gamma x$ for some $\gamma\in ]0,1[$. Note that \[g(z^3(t;c))\cdot \dot{z}^3(t;c)=cx\cdot g(z^3(t;c))>0\] for all $t\in [0,t_3]$. Finally, define $z^4(t)=[(1-t)\gamma+t]x$, and \[x(t)=\begin{cases} z^1(t;a) & \mbox{if }0\le t\le t_1,\\ z^2(t-t_1;b) & \mbox{if }t_1\le t\le t_1+t_2,\\ z^3(t-t_1-t_2;c) & \mbox{if }t_1+t_2\le t\le t_1+t_2+t_3,\\ z^4(t-t_1-t_2-t_3) & \mbox{if }t_1+t_2+t_3\le t\le t_1+t_2+t_3+1. \end{cases}\] Then, for $T=t_1+t_2+t_3+1$, $x:[0,T]\to \tilde{\Omega}$ is a piecewise $C^1$ closed curve, and for almost all $t\in [0,T]$, \[g(x(t))\cdot \dot{x}(t)>0,\] which implies that $g$ violates Ville's axiom. This completes the proof. $\blacksquare$ \section*{References} \begin{description} \item{[1]} Chambers, C. P. and Echenique, F. (2016) \textit{Revealed Preference Theory}. Cambridge University Press, Cambridge. \item{[2]} Debreu, G. (1952) ``Definite and Semi-Definite Quadratic Forms.'' Econometrica 20, 295-300. \item{[3]} Debreu, G. (1954) ``Representation of a Preference Ordering by a Numerical Function.'' In: Thrall, R. M., Coombs, C. H., Davis, R. L. (Eds.) \textit{Decision Processes.} Wiley, New York, 159-165. \item{[4]} Dieudonne, J. (1969) \textit{Foundations of Modern Analysis}. Academic Press, London. \item{[5]} Gale, D. (1960) ``A Note on Revealed Preference.'' Economica 27, 348-354. \item{[6]} Guillemin, V. and Pollack, A. (1974) \textit{Differential Topology}. Prentice Hall, New Jersey. \item{[7]} Hosoya, Y. (2013) ``Measuring Utility from Demand.'' Journal of Mathematical Economics 49, 82-96. \item{[8]} Hosoya, Y. (2019) ``Revealed Preference Theory.'' Applied Analysis and Optimization 3, 179-204. \item{[9]} Hosoya, Y.
(2020) ``Recoverability Revisited.'' Journal of Mathematical Economics 90, 31-41. \item{[10]} Hosoya, Y. (2021a) ``Equivalence between Nikliborc's Theorem and Frobenius' Theorem.'' Pure and Applied Functional Analysis 6, 719-741. \item{[11]} Hosoya, Y. (2021b) ``The Weak Axiom of Revealed Preference and Inverse Problems in Consumer Theory.'' Linear and Nonlinear Analysis 7, 9-31. \item{[12]} Houthakker, H. S. (1950) ``Revealed Preference and the Utility Function.'' Economica 17, 159-174. \item{[13]} Hurwicz, L. and Richter, M. K. (1979a) ``Ville Axioms and Consumer Theory.'' Econometrica 47, 603-619. \item{[14]} Hurwicz, L. and Richter, M. K. (1979b) ``An Integrability Condition with Applications to Utility Theory and Thermodynamics.'' Journal of Mathematical Economics 6, 7-14. \item{[15]} Pareto, V. (1906a) \textit{Manuale di Economia Politica con una Introduzione alla Scienza Sociale}. Societa Editrice Libraria, Milano. \item{[16]} Pareto, V. (1906b) ``L'ofelimit\`a Nei Cicli Non Chuisi.'' Giornale degli Economisti 33, 15-30. English translated by Chipman, J. S. ``Ophelimity in Nonclosed Cycle.'' in: Chipman, J. S., Hurwicz, L., Richter, M. K., Sonnenschein, H. F. (Eds.) \textit{Preferences, Utility, and Demand}. Harcourt Brace Jovanovich, New York, pp.370-385 (1971). \item{[17]} Pareto, V. (1909) \textit{Manuel d'Economie Politique}. Giard et E. Briere, Paris. \item{[18]} Richter, M. K. (1966) ``Revealed Preference Theory.'' Econometrica 34, 635-645. \item{[19]} Rose, H. (1958) ``Consistency of Preference: The Two-Commodity Case.'' Review of Economic Studies 25, 124-125. \item{[20]} Samuelson, P. A. (1938) ``A Note on the Pure Theory of Consumer's Behaviour''. Economica 5, 61-71. \item{[21]} Samuelson, P. A. (1950) ``The Problem of Integrability in Utility Theory.'' Economica 17, 355-385. \item{[22]} Suda, S. (2007) ``Vilfredo Pareto and the Integrability Problem of Demand Function.'' Keio Journal of Economics 99, 637-655. \item{[23]} Szpilrajn, E. 
(1930) ``Sur l'Extension de l'Ordre Partiel.'' Fundamenta Mathematicae 16, 386-389. \item{[24]} Uzawa, H. (1959) ``Preferences and Rational Choice in the Theory of Consumption.'' In: Arrow, K. J., Karlin, S., Suppes, P. (Eds.) \textit{Mathematical Models in the Social Science, 1959: Proceedings of the First Stanford Symposium}, Stanford University Press, Stanford, pp.129-149. Reprinted in: Chipman, J. S., Hurwicz, L., Richter, M. K., Sonnenschein, H. F. (Eds.) \textit{Preferences, Utility, and Demand}. Harcourt Brace Jovanovich, New York, pp.7-28 (1971). \item{[25]} Volterra, V. (1906) ``L'economia Matematica ed il Nuovo Manuale del Prof. Pareto.'' Giornale degli Economisti 33, 296-301. English translated by Kirman, A. P. ``Mathematical Economics and Professor Pareto's New Manual.'' in: Chipman, J. S., Hurwicz, L., Richter, M. K., Sonnenschein, H. F. (Eds.) \textit{Preferences, Utility, and Demand}. Harcourt Brace Jovanovich, New York, pp.365-369 (1971). \end{description} \end{document}
\begin{document} \title{Multifidelity Approximate Bayesian Computation with Sequential Monte Carlo Parameter Sampling\thanks{Originally submitted to the editors January 2020; resubmitted October 2020. \funding{This work is supported by BBSRC through grant BB/R000816/1.}}} \begin{abstract} Multifidelity approximate Bayesian computation (MF-ABC) is a likelihood-free technique for parameter inference that exploits model approximations to significantly increase the speed of ABC algorithms~(Prescott and Baker, JUQ, 2020). Previous work has considered MF-ABC only in the context of rejection sampling, which does not explore parameter space particularly efficiently. In this work, we integrate the multifidelity approach with the ABC sequential Monte Carlo (ABC-SMC) algorithm into a new MF-ABC-SMC algorithm. We show that the improvements generated by each of ABC-SMC and MF-ABC to the efficiency of generating Monte Carlo samples and estimates from the ABC posterior are amplified when the two techniques are used together. \end{abstract} \begin{keywords} Bayesian inference, likelihood-free methods, stochastic simulation, multifidelity methods, sequential Monte Carlo \end{keywords} \begin{AMS} 62F15, 65C20, 65C60, 93B30 \end{AMS} \doublespacing \section{Introduction} \label{s:Intro} An important goal of the mathematical modelling of a physical system is to be able to make quantitative predictions about its behaviour. In order to make accurate predictions, the parameters of the mathematical model need to be calibrated against experimental data. Bayesian inference is a widely-used approach to model calibration that seeks to unify the information from observations with prior knowledge about the parameters to form a posterior distribution on parameter space~\cite{Beaumont2010,Hines2015,Schnoerr2017}. This approach is based on Bayes' Theorem, whereby the posterior distribution is proportional to the product of the prior distribution and the likelihood of the data under the model.
However, in many practical settings, the model is too complicated for the likelihood of the data to be calculated, making the posterior distribution unavailable. In this case, likelihood-free methods for Bayesian parameter inference become a useful option. Specifically, approximate Bayesian computation (ABC) is a class of such likelihood-free methods~\cite{Sisson2018,Sunnaker2013}. Rather than being calculated exactly, the likelihood of the observed data for a given parameter value is estimated. The model is simulated under a particular parameter value, and the simulated output is then compared with the observed data. If the simulation and observations are suitably close (according to a predetermined metric and threshold~\cite{Fearnhead2012,Harrison2017a}) then, in the classical rejection sampling approach, the likelihood is approximated as $1$. Otherwise, the likelihood is approximated as $0$. This binary approximation is usually interpreted as an acceptance or a rejection of the parameter value input into the simulation. Using this approach, a weighted Monte Carlo sample from an approximated posterior can be built by repeatedly proposing parameters from the prior distribution, simulating the model with each parameter, and calculating the binary weight. One widely-acknowledged weakness of the ABC approach is its heavy reliance on repeated simulation. There has been a significant amount of work dedicated to overcoming this reliance by exploiting methods for intelligent exploration of parameter space in order to reduce the number of simulations in areas of low likelihood~\cite{Sisson2018}. For example, the parameters to be input into the simulation can instead be proposed from an importance distribution, rather than the prior, constructed with the aim of improving the algorithm's performance.
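The classical rejection sampler described above is short to state in code. The following is a minimal sketch; the uniform prior, Gaussian simulator, and absolute-distance metric are purely illustrative assumptions, not part of the algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(prior_sample, simulate, y_obs, dist, eps, n):
    # Classical rejection ABC: propose from the prior, simulate the model,
    # and assign the binary weight 1 iff the simulation is within eps of the data.
    sample = []
    for _ in range(n):
        theta = prior_sample()
        y = simulate(theta)
        w = 1.0 if dist(y, y_obs) < eps else 0.0
        sample.append((theta, w))
    return sample

# Illustrative model (an assumption): theta ~ Uniform(-5, 5), y ~ Normal(theta, 1).
y_obs = 1.0
sample = abc_rejection(prior_sample=lambda: rng.uniform(-5.0, 5.0),
                       simulate=lambda th: rng.normal(th, 1.0),
                       y_obs=y_obs,
                       dist=lambda y, yo: abs(y - yo),
                       eps=0.5, n=20_000)
accepted = [th for th, w in sample if w > 0.0]
posterior_mean = float(np.mean(accepted))  # concentrates near y_obs
```

Because every proposal is drawn from the prior, most simulations land in regions of low likelihood and are rejected, which is precisely the inefficiency that the importance-sampling and SMC ideas discussed next aim to remove.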
One successful approach to importance sampling is known as Sequential Monte Carlo ABC (ABC-SMC), which aims to build consecutive samples using parameter proposals taken from progressively closer approximations to the posterior, parameterised by decreasing ABC thresholds~\cite{Toni2009}. Research in this area has considered how to choose the sequence of decreasing thresholds and distance metrics~\cite{DelMoral2012,Prangle2017}, and how best to evolve the parameters from one approximation to produce parameter proposals for the next~\cite{Alsing2018,Beaumont2009,Filippi2013}. Another proposed strategy for overcoming the simulation bottleneck is the multifidelity ABC (MF-ABC) approach~\cite{Prescott2020}. This approach assumes that, in addition to the model under investigation (termed the \emph{high-fidelity} model), there also exists a \emph{low-fidelity} model, depending on the same parameters, that is significantly faster to simulate, usually at the cost of being less accurate~\cite{Peherstorfer2018}. The multifidelity approach to parameter inference uses simulations from both the low-fidelity model and high-fidelity model to approximate the likelihood. The high-fidelity model is simulated as little as possible, to reduce the simulation bottleneck, but just enough to ensure that the resulting posterior estimate is suitably accurate. This technique is related to multilevel approaches to ABC~\cite{Jasra2017,Tran2015,Warne2018}, which use a decreasing sequence of ABC-SMC thresholds to produce coupled estimates at different levels. Other approaches that can be interpreted in the multifidelity framework include lazy ABC~\cite{Prangle2016}, delayed acceptance ABC~\cite{Everitt2017} and early rejection~\cite{Picchini2016}. In each of these, low-fidelity simulations are sometimes used to reject parameters before completing a high-fidelity simulation. Importantly, the more general MF-ABC framework in \cite{Prescott2020} allows for early acceptance as well as early rejection. 
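To illustrate the idea behind such multifidelity weights, the following sketch (our own simplification in the spirit of~\cite{Prescott2020}, not the exact estimator from that paper) corrects a cheap low-fidelity accept/reject indicator with an occasional high-fidelity simulation, run with a fixed continuation probability, so that the resulting weight remains unbiased for the high-fidelity acceptance probability:

```python
import numpy as np

rng = np.random.default_rng(1)

def mf_abc_weight(lo_accept, run_hi, eta):
    # Multifidelity-style weight: start from the low-fidelity indicator and,
    # with continuation probability eta, run the expensive high-fidelity
    # simulation to correct it. The expected weight equals the high-fidelity
    # acceptance probability, allowing both early acceptance and early rejection.
    w = float(lo_accept)
    if rng.random() < eta:
        w += (float(run_hi()) - float(lo_accept)) / eta
    return w

# Toy check of unbiasedness: the high-fidelity model accepts with probability
# 0.3, and the low-fidelity indicator agrees with it only 90% of the time.
p_hi = 0.3
weights = []
for _ in range(200_000):
    hi = rng.random() < p_hi
    lo = hi if rng.random() < 0.8 else (rng.random() < 0.5)  # imperfect surrogate
    weights.append(mf_abc_weight(lo, lambda hi=hi: hi, eta=0.25))
mean_w = float(np.mean(weights))  # close to p_hi despite few high-fidelity runs
```

Note that the weights are no longer binary: an occasional continuation produces a large positive or negative correction, so the variance of the estimator depends on how well the low-fidelity model tracks the high-fidelity one and on the choice of continuation probability.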
A key observation that can be made about these two techniques for improving ABC performance is that they are orthogonal, in the sense that they improve different aspects of the ABC approach. ABC-SMC considers only improving the method for proposing parameters to use in simulations and does not directly affect the binary estimate of the likelihood. In contrast, MF-ABC makes no change to the parameter proposals, but instead directly alters the method used to estimate the likelihood by using a combination of both low-fidelity and high-fidelity model simulations. The complementarity of these two approaches has previously been shown in the specific context of combining delayed acceptance with SMC~\cite{Everitt2017}. Thus, combining the general multifidelity framework of~\cite{Prescott2020} with SMC should yield significant speed-up over existing methods. \subsection{Outline} \label{s:Outline} In this paper we bring together these two orthogonal approaches to speeding up ABC algorithms. We will introduce a combined multifidelity sequential Monte Carlo ABC algorithm (MF-ABC-SMC). \Cref{s:Background} formulates the existing ABC algorithms briefly described above, and the techniques we can use to quantify their performance. We then show how these ABC approaches can be combined in \Cref{s:MF-ABC-SMC}, by incorporating the multifidelity technique with the sequential importance sampling approach to ABC-SMC. In \Cref{MFABCSMC} we then fully exploit the SMC framework to optimise the multifidelity approach, producing the MF-ABC-SMC algorithm in \Cref{MFABC:SMC}. This new algorithm is applied in \Cref{s:Example} to a heterogeneous network of Kuramoto oscillators in a hierarchical Bayes parameter estimation task, to produce low-variance ABC posterior estimates significantly faster than the classical ABC-SMC approach. Finally, in \Cref{s:Discussion}, we discuss some important open questions for further optimising the MF-ABC-SMC algorithm.
\section{Theoretical background} \label{s:Background} Assume that the model we are seeking to calibrate is a map (usually stochastic) from parameters $\theta$, taking values in a parameter space $\Theta$, to an output $y$, taking values in data space $\mathcal Y$. We denote this map as a conditional density $f(\cdot~|~\theta)$ on $\mathcal Y$, and term the drawing of $y \sim f(\cdot~|~\theta)$ as simulating the model, with $y$ termed a simulation. For Bayesian inference, we furthermore assume the existence of a prior distribution $\pi(\cdot)$ on $\Theta$, and of the observed data $\obs y \in \mathcal Y$ that will be used to calibrate the model. The model induces the likelihood of the observed data, written $L(\theta) = f(\obs y~|~\theta)$, which is a function of $\theta$. As described previously, the goal of Bayesian inference is to infer the posterior distribution $p(\theta~|~\obs y)$ on $\Theta$, given $\obs y$ and $\pi(\cdot)$. Bayes' Theorem equates the posterior to the product of likelihood and prior, \[ p( \theta ~|~ \obs y) = \frac{1}{\zeta} L(\theta) \pi(\theta), \] where the normalisation constant, $\zeta$, ensures that $p(\theta~|~\obs y)$ is a probability density that integrates to unity. \subsection{Approximate Bayesian computation} \label{s:ABC} Often, the model under consideration is sufficiently complicated that the likelihood cannot be calculated, necessitating a likelihood-free approach. We assume that, while the value of any $f(y ~|~ \theta)$ is not available, we are still able to simulate $y \sim f(\cdot ~|~ \theta)$. Let $d(y, \obs y)$ denote a metric that quantifies how close any simulation, $y$, is to the observed data, $\obs y$. For a positive threshold value $\epsilon > 0$, we can then define a neighbourhood $\Omega_\epsilon(d, \obs y) = \{ y \in \mathcal Y ~|~ d(y, \obs y)<\epsilon \}$ of model simulations that are `close' to the data. 
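As a minimal illustration of this construction, the neighbourhood $\Omega_\epsilon(d, \obs y)$ can be encoded as an indicator function. The Euclidean distance used below is an assumption of the sketch; in practice the metric $d$ often acts on summary statistics of the simulated output.

```python
import numpy as np

def make_neighbourhood(y_obs, eps, distance):
    """Return the indicator of the ABC neighbourhood Omega_eps(d, y_obs)."""
    return lambda y: distance(y, y_obs) < eps

# Assumed metric for this sketch: Euclidean distance on raw outputs.
def euclid(y, y_obs):
    return float(np.linalg.norm(np.asarray(y) - np.asarray(y_obs)))
```

For example, `make_neighbourhood([1.0, 2.0], 0.5, euclid)` returns a predicate accepting simulations within Euclidean distance $0.5$ of the data.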
Typically the dataset, $\obs y$, is constant for the parameter estimation task, and the distance metric, $d$, is pre-determined. Hence we will often drop the $(d, \obs y)$ dependence from our notation and simply write $\Omega_\epsilon$ for the $\epsilon$-neighbourhood of $\obs y$ under the distance function $d$. For a given positive distance threshold $\epsilon > 0$, ABC replaces the exact likelihood, $L(\theta)$, with the ABC approximation to the likelihood, \begin{subequations} \label{eq:ABC} \begin{equation} \label{eq:ABClikelihood} L_\epsilon(\theta) = \mathbb P(y \in \Omega_\epsilon ~|~ \theta) = \int \mathbb I(y \in \Omega_\epsilon) f(y~|~\theta) ~\mathrm dy, \end{equation} which is, to leading order for small $\epsilon$, approximately proportional to $L(\theta)$. The ABC approximation to the likelihood then induces the ABC posterior, \begin{equation} \label{eq:ABCposterior} p_{\epsilon}(\theta~|~\obs y) = \frac{1}{Z} ~ L_\epsilon(\theta) \pi(\theta), \end{equation} \end{subequations} where (similarly to $\zeta$ above) the constant $Z$ ensures that the ABC posterior is a probability distribution with unit integral. \begin{algorithm} \caption{Importance sampling ABC (ABC-IS)} \label{ABC:Importance} \begin{algorithmic}[1] \REQUIRE{ Data $\obs y$ and neighbourhood $\Omega_\epsilon$; model $f(\cdot~|~\theta)$; prior $\pi$; importance distribution $\hat q$ proportional to $q$; sample index $n=0$; stopping criterion $S$. 
} \ENSURE{Weighted sample $\left\{ \theta_n, w_n \right\}_{n=1}^{N}$.} \REPEAT{} \STATE{Increment $n \leftarrow n+1$.} \STATE{Generate $\theta_n \sim \hat q(\cdot)$.} \STATE{Simulate $y_n \sim f(\cdot~|~\theta_n)$.} \STATE{Set $w_n = \left[ \pi(\theta_n) \big/ q(\theta_n) \right] \cdot \mathbb I \left( y_n \in \Omega_\epsilon \right) $.} \UNTIL{$S =$ \texttt{true}.} \end{algorithmic} \end{algorithm} The importance sampling ABC algorithm (ABC-IS)~\cite{Owen2013,Sisson2018}, presented in \Cref{ABC:Importance}, is a simple method for drawing samples from the ABC posterior, $p_\epsilon(\theta~|~\obs y)$. In addition to the data, $\obs y$, and the prior, $\pi(\theta)$, we assume an importance probability distribution $\hat q(\theta) = q(\theta) / Z_q$ defined by the function $q(\theta)$, where $q(\theta)>0$ for all $\theta$ in the support of $\pi$. Note that we do not assume that the normalisation constant $Z_q$ is known. For each parameter proposal $\theta_n \sim \hat q(\cdot)$ from the importance distribution, the model $y_n \sim f(\cdot~|~\theta_n)$ is simulated and a weight $w_n = w(\theta_n, y_n)$ is assigned, using the weighting function, \begin{equation} \label{eq:ImportanceWeight} w(\theta, y) = \left[ \pi(\theta) \big/ q(\theta) \right] \cdot \mathbb I(y \in \Omega_\epsilon) \geq 0, \end{equation} to produce a weighted sample $\{ w_n, \theta_n \}$. Note that the general stopping criterion used in \Cref{ABC:Importance} allows the algorithm to terminate, for example, after a fixed number of non-zero weights, after a fixed number of parameter proposals, $N$, when a fixed budget of total computational time or memory is reached, or under any more complicated combination of such conditions. Furthermore, we note that ABC-IS is easily parallelised, although care must be taken to ensure that the chosen stopping condition is correctly applied in this case~\cite{Jagiella2017}.
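A minimal Python sketch of this loop might look as follows. The callables `q_sample`, `q_pdf`, `prior_pdf`, `simulate` and `distance` are hypothetical stand-ins for $\hat q$, $q$, $\pi$, $f$ and $d$, and the general stopping condition $S$ is simplified to a fixed number of proposals.

```python
import numpy as np

def abc_is(y_obs, simulate, prior_pdf, q_pdf, q_sample, eps, distance,
           n_props, rng):
    """Sketch of importance sampling ABC (ABC-IS).

    q_pdf need only be proportional to the importance density; the
    stopping condition is simplified to a fixed number of proposals.
    """
    thetas, weights = [], []
    for _ in range(n_props):
        theta = q_sample(rng)                      # theta_n ~ q_hat
        y = simulate(theta, rng)                   # y_n ~ f(.|theta_n)
        w = (prior_pdf(theta) / q_pdf(theta)) * (distance(y, y_obs) < eps)
        thetas.append(theta)
        weights.append(w)
    return np.array(thetas), np.array(weights)
```

The weighted sample returned by this routine can then be used directly in self-normalised posterior estimates.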
If we choose the importance distribution $\hat q = \pi$ to be the prior, then \Cref{ABC:Importance} (ABC-IS) is known as rejection sampling, which we refer to as ABC-RS. For an arbitrary function $F(\cdot)$ defined on the parameter space $\Theta$, we can estimate the expected value of $F(\theta)$ under $p_\epsilon(\theta~|~\obs y)$, such that \begin{equation} \label{eq:estimate} \mathbb E_{p_\epsilon}(F(\theta)) \approx \bar F = \frac{\sum_{n=1}^N w_n F(\theta_n)}{\sum_{n=1}^N w_n}. \end{equation} Although this estimate is biased (except in the ABC-RS case), it is consistent, such that the mean squared error (MSE) of $\bar F$ is dominated by the variance and decays to $0$ at rate $1/N$. \subsection{Sequential Monte Carlo (SMC)} \label{s:SMC} Sequential Monte Carlo (SMC) is commonly used to efficiently explore parameter space in ABC. The goal is to propagate a sample from the prior through a sequence of intermediate distributions towards the target distribution, $p_\epsilon(\theta~|~\obs y)$. The intermediate distributions are typically the sequence of ABC approximations $p_{\epsilon_t}$, for $t=1,\dots,T$, defined by a sequence of decreasing thresholds, $\epsilon_1 > \dots > \epsilon_T = \epsilon$. \Cref{ABC:SMC} presents the sequential importance sampling approach to ABC-SMC~\cite{Beaumont2009,DelMoral2006,Sisson2007,Toni2009}, also known as population Monte Carlo~\cite{Cappe2004}. Each Monte Carlo sample $\{ \theta_n^{(t)}, w_n^{(t)} \}$ built at generation $t$ is used to construct an importance distribution $\hat q_{t+1}$, defined in \Cref{eq:importance}, that is used to generate the next generation's Monte Carlo sample. The final sample, $\{ \theta_n^{(T)}, w_n^{(T)} \}$, is produced by \Cref{ABC:Importance} using importance distribution $\hat q_T$ and threshold $\epsilon_T = \epsilon$.
Note that \Cref{ABC:SMC} requires the specification of a threshold sequence $\epsilon_t$, the stopping conditions $S_t$, and the perturbation kernels $K_t(\cdot~|~\theta)$ for $t=1,\dots,T$~\cite{Sisson2018}. We will not re-examine these aspects of ABC-SMC in detail in this paper, and will implement ABC-SMC using established techniques to select $\epsilon_t$, $S_t$ and $K_t$~\cite{Beaumont2009,DelMoral2012,Filippi2013,Toni2009}. \begin{algorithm} \caption{Sequential Monte Carlo ABC (ABC-SMC)} \label{ABC:SMC} \begin{algorithmic}[1] \REQUIRE{ Data $\obs y$; sequence of nested neighbourhoods $\Omega_{\epsilon_T} \subseteq \Omega_{\epsilon_{T-1}} \subseteq \cdots \subseteq \Omega_{\epsilon_1}$ for $0 < \epsilon = \epsilon_T < \epsilon_{T-1} < \dots < \epsilon_1$; prior $\pi$; perturbation kernels $K_t(\cdot~|~\theta)$; initial importance distribution $\hat q_1$ (often set to $\pi$); model $f(\cdot~|~\theta)$; stopping conditions $S_1, S_2, \dots, S_T$.} \ENSURE{Weighted sample $\left\{ \theta_n^{(T)}, w_n^{(T)} \right\}_{n=1}^{N_T}$.} \FOR{$t=1, \dots, T-1$} \STATE{Produce $\{ \theta_n^{(t)}, w_n^{(t)}\}_{n=1}^{N_t}$ using \Cref{ABC:Importance} (ABC-IS) with importance distribution $\hat q_t$, neighbourhood $\Omega_{\epsilon_t}$, and stopping condition $S_t$.} \STATE{Define the next importance distribution, $\hat q_{t+1}$, proportional to \begin{equation} \label{eq:importance} q_{t+1}( \theta ) = \begin{cases} \sum_{n=1}^{N_t} w_n^{(t)} K_t( \theta ~|~\theta_n^{(t)}) \bigg/ \sum_{m=1}^{N_t} w_m^{(t)} & \pi(\theta)>0 , \\ 0 &\text{else.} \end{cases} \end{equation}} \ENDFOR{} \STATE{Produce $\left\{ \theta_n^{(T)}, w_n^{(T)} \right\}_{n=1}^{N_T}$ using \Cref{ABC:Importance} (ABC-IS) with importance distribution $\hat q_T$, neighbourhood $\Omega_{\epsilon_T} = \Omega_\epsilon$, and stopping condition $S_T$.} \end{algorithmic} \end{algorithm} One weakness of the population Monte Carlo approach taken in \Cref{ABC:SMC} is the $O(N^2)$ cost for each run of 
\Cref{ABC:Importance} (ABC-IS), where $N_t \sim N$ is the scale of each generation's sample size. To overcome this problem, the SMC sampler was adapted in \cite{DelMoral2012} to reduce this cost to $O(N)$. When the calculation of each $w_n^{(t)}$ is dominated by $q_{t}(\theta_n^{(t)})$, then a significant computational burden can be alleviated through using the $O(N)$ sampling approach. However, in many practical problems, the calculation of each weight, $w_n^{(t)}$, is instead dominated by sampling $y_n^{(t)} \sim f(\cdot~|~\theta^{(t)}_n)$. We will focus on the latter setting and aim to reduce the cost of ABC-SMC through reducing the cost of calculating each $w_n^{(t)}$ using the multifidelity approach described in \Cref{s:MFABC}. \subsection{Multifidelity ABC} \label{s:MFABC} \Cref{s:SMC} describes how the SMC strategy provides parameter proposals such that simulation time is not wasted in regions of low likelihood. An orthogonal approach to improving ABC efficiency is to avoid computationally expensive simulations, where possible, by relying on the \emph{multifidelity} framework~\cite{Prescott2020}. We term the model of interest, $f(\cdot~|~\theta)$, which maps parameter space $\Theta$ to an output space $\mathcal Y$, as the high-fidelity model. We now assume that, in addition, there is a low-fidelity (i.e. approximate) model, $\tilde f(\cdot~|~\theta)$, of the same physical system with the same parameter space $\Theta$. The simulations of this model are denoted $\tilde y \sim \tilde f(\cdot~|~\theta)$, taking values in the output space $\tilde{\mathcal Y}$, which may differ from $\mathcal Y$. Importantly, we assume that the low-fidelity model is computationally cheaper, in the sense that simulations $\tilde y \sim \tilde f(\cdot~|~\theta)$ of the low-fidelity model incur less computational burden than simulations $y \sim f(\cdot~|~\theta)$ of the high-fidelity model. 
We can also assume that the experimental observations $\obs y \in \mathcal Y$ can be mapped to the new data space, giving $\obs{\tilde y} \in \tilde{\mathcal Y}$. Similarly, we define an associated region, \[ \tilde \Omega_{\epsilon} = \tilde \Omega_{\epsilon} (\obs{\tilde y}, \tilde d) = \{ \tilde y \in \tilde{\mathcal Y} ~|~ \tilde d(\tilde y,\obs{\tilde y}) < \tilde \epsilon \}, \] that is the $\tilde \epsilon$-neighbourhood of the observed data, $\obs{\tilde y}$, as a subset of the output space $\tilde{\mathcal Y}$, defined under the distance metric $\tilde d$. However, in the interests of clarity, we will assume for the remainder of this article that the output spaces of each model fidelity are such that $\tilde{\mathcal Y} = \mathcal Y$. Similarly, we assume that the observed data are such that $\obs{\tilde y} = \obs y$, the distance metrics are such that $\tilde d = d$, and the ABC thresholds are such that $\tilde \epsilon = \epsilon$, so that $\tilde \Omega_{\tilde \epsilon} = \Omega_\epsilon$. In general, the models $f(\cdot~|~\theta)$ and $\tilde f(\cdot~|~\theta)$ can be simulated independently for a given $\theta$. Then, if the low-fidelity model is a good approximation to the high-fidelity model, the outputs $y$ and $\tilde y$ will be near, in some sense, and the distances from data $d(y, \obs y)$ and $d(\tilde y, \obs y)$ will be correlated. However, to improve this correlation we will also allow for coupling between the two models, writing $\check f(y, \tilde y~|~\theta)$ as the coupled density for $(y, \tilde y)$ with marginals that coincide with the independent models $f(y~|~\theta)$ and $\tilde f(\tilde y~|~\theta)$. The benefit of this approach is that, with a judicious choice of coupling, we can produce a high-fidelity simulation $y \sim f(\cdot~|~\tilde y, \theta)$ conditionally on a previously-simulated low-fidelity simulation, $\tilde y \sim \tilde f(\cdot~|~\theta)$, where the distances are more closely correlated. 
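As an illustration of such a coupling, the following sketch correlates the two fidelities by driving both with the same Brownian increments. The toy scalar SDE, its Euler--Maruyama discretisation, and the two step sizes are assumptions of the sketch only, not a model from this paper.

```python
import numpy as np

def simulate_coupled(theta, n_fine=64, n_coarse=8, rng=None):
    """Toy coupling of a low- and high-fidelity model via shared noise.

    Both fidelities discretise the same scalar SDE dY = -theta*Y dt + dW
    on [0, 1]; the low-fidelity model uses a coarse time step, and its
    driving noise aggregates the fine-scale Brownian increments, so the
    two outputs are positively correlated.
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = 1.0 / n_fine
    dW = rng.normal(0.0, np.sqrt(dt), size=n_fine)    # shared increments
    y = 1.0                                           # high fidelity: fine Euler-Maruyama
    for w in dW:
        y += -theta * y * dt + w
    y_lo = 1.0                                        # low fidelity: coarse Euler-Maruyama
    k = n_fine // n_coarse
    for j in range(n_coarse):
        y_lo += -theta * y_lo * (k * dt) + dW[j * k:(j + 1) * k].sum()
    return y_lo, y
```

Because the coarse path reuses the aggregated fine-scale noise, the two outputs, and hence their distances from the data, are strongly correlated.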
Furthermore, simulations of $y \sim f(\cdot~|~\tilde y, \theta)$ may be less computationally burdensome than independent simulations of $y \sim f(\cdot~|~\theta)$. For example, suppose the high-fidelity model is a Markovian stochastic dynamical system on the time horizon $t \in [0,T]$, and that the low-fidelity model is the same system on $t \in [0,\tau]$ for $\tau<T$~\cite{Prangle2016}. The low-fidelity and high-fidelity models can clearly be simulated independently on $[0,\tau]$ and $[0,T]$, respectively, and the low-fidelity simulation will, on average, be less computationally expensive than the high-fidelity simulation. However, the natural coupling of the models fixes $y(t) = \tilde y(t)$ over $t \in [0, \tau]$, allowing the high-fidelity model to be simulated conditional on a low-fidelity simulation. Many other possible couplings exist for different multifidelity models, often involving shared random noise processes, and methods for coupling are currently an area of active research~\cite{Croci2018,Lester2018,Prescott2020}. In order to apply the multifidelity framework to ABC parameter inference, recall that each weight, $w_n$, generated by \Cref{ABC:Importance} requires a simulation $y_n \sim f(\cdot~|~\theta_n)$ from the high-fidelity model. 
The multifidelity approach in \cite{Prescott2020} calculates the weight $w_n = w(\theta_n, \tilde y_n, u_n, y_n)$ by replacing the weighting function in \Cref{eq:ImportanceWeight} with \begin{equation} w(\theta, \tilde y, u, y) = \frac{\pi(\theta)}{q(\theta)} \left( \mathbb I(\tilde y \in \Omega_{\epsilon}) + \frac{\mathbb I(u < \alpha(\theta, \tilde y))}{\alpha(\theta, \tilde y)} \left[ \mathbb I(y \in \Omega_{\epsilon}) - \mathbb I(\tilde y \in \Omega_{\epsilon}) \right] \right), \label{eq:w_mf} \end{equation} where $(\tilde y, y) \sim \check f(\cdot, \cdot~|~\theta)$ are coupled multifidelity simulations, where $u$ is a unit uniform random variable, and where $\alpha(\theta, \tilde y) \in (0,1]$ is a positive \emph{continuation probability}. Note that, in general, we could allow the continuation probability, $\alpha(\theta, \tilde y, u, y)$, to depend on all of the stochastic variables, but we will assume $\alpha(\theta, \tilde y) \in (0,1]$ to be independent of $u$ and $y$. The important consequence of the multifidelity weight in \Cref{eq:w_mf} is that, due to the specific order in which we simulate the variables, the weight $w(\theta, \tilde y, u, y)$ may be calculated without the computational cost of simulating $y$. Given $\theta$, we first simulate $\tilde y \sim \tilde f(\cdot~|~\theta)$ from the low-fidelity model. This defines the continuation probability $\alpha(\theta, \tilde y) \in (0,1]$. Second, we generate the unit uniform random variable, $u$. If $u \geq \alpha(\theta, \tilde y)$, then we can return $w(\theta, \tilde y, u, y)$ without simulating $y \sim f(\cdot ~|~\theta, \tilde y)$ from the coupling, thus incurring the lower computational expense of only simulating from $\tilde f$. \Cref{MFABC:Importance} is an adaptation of \Cref{ABC:Importance} that returns a weighted sample $\{ \theta_n, w_n \}_{n=1}^N$ from the ABC posterior $p_\epsilon(\theta~|~\obs y)$, with weights calculated using $w$ in \Cref{eq:w_mf}.
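The simulation ordering described above can be sketched as follows. The helper callables (`simulate_lo`, `simulate_hi_coupled`, `in_omega`, and so on) are hypothetical stand-ins for the two models, their coupling, and the neighbourhood indicator; the early-stopping structure mirrors \Cref{eq:w_mf}.

```python
import numpy as np

def mf_weight(theta, prior_pdf, q_pdf, simulate_lo, simulate_hi_coupled,
              alpha_fn, in_omega, rng):
    """Sketch of the multifidelity weight: the high-fidelity model is
    only simulated with probability alpha(theta, y_lo)."""
    y_lo = simulate_lo(theta, rng)                # tilde y ~ f_lo(.|theta)
    w = float(in_omega(y_lo))                     # I(tilde y in Omega_eps)
    alpha = alpha_fn(theta, y_lo)                 # continuation probability
    if rng.uniform() < alpha:                     # u < alpha: continue
        y = simulate_hi_coupled(theta, y_lo, rng) # y ~ f(.|tilde y, theta)
        w += (float(in_omega(y)) - w) / alpha
    return (prior_pdf(theta) / q_pdf(theta)) * w
```

In expectation over $u$ and the coupling, this weight matches the standard importance weight $\left[ \pi(\theta)/q(\theta) \right] \mathbb I(y \in \Omega_\epsilon)$, while only paying the high-fidelity simulation cost a fraction of the time.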
As with \Cref{ABC:Importance}, in the rejection sampling case where $\hat q(\theta) = \pi(\theta)$, then we refer to this algorithm as MF-ABC-RS. \begin{algorithm} \caption{Multifidelity ABC importance sampling (MF-ABC-IS)} \label{MFABC:Importance} \begin{algorithmic}[1] \REQUIRE{ Data $\obs y$ and neighbourhood $\Omega_\epsilon$; prior $\pi$; coupling $\check f(\cdot, \cdot~|~\theta)$ of models $f(\cdot~|~\theta)$ and $\tilde f(\cdot~|~\theta)$; continuation probability function $\alpha = \alpha(\theta, \tilde y)$; sample index $n=0$; importance distribution $\hat q$ proportional to $q(\theta)$; stopping condition $S$.} \ENSURE{Weighted sample $\{ \theta_n, w_n \}_{n=1}^{N}$.} \REPEAT{} \STATE{Increment $n \leftarrow n+1$.} \STATE{Generate $\theta_n \sim \hat q(\cdot)$.} \STATE{Simulate $\tilde y_n \sim \tilde f(\cdot~|~\theta_n)$.} \STATE{Set $w_n = \mathbb I \left( \tilde y_n \in \Omega_{\epsilon} \right)$.} \STATE{Generate $u_n \sim \mathrm{Uniform}(0,1)$.} \IF{$u_n < \alpha(\theta_n, \tilde y_n)$} \STATE{Simulate $y_n \sim f(\cdot ~|~ \tilde y_n, \theta_n)$.} \STATE{Update $w_n \leftarrow w_n + \left[ \mathbb I(y_n \in \Omega_\epsilon) - w_n \right] \big/ \alpha(\theta_n, \tilde y_n)$.} \ENDIF{} \STATE{Update $w_n \leftarrow \left[ \pi(\theta_n) \big/ q(\theta_n) \right] w_n$.} \UNTIL{$S = \texttt{true}$.} \end{algorithmic} \end{algorithm} \begin{proposition} \label{MFABCValidity} The weighted Monte Carlo sample $\{ \theta_n, w_n\}$ returned by \Cref{MFABC:Importance} is from the ABC posterior $p_\epsilon(\theta~|~\obs y)$. \end{proposition} \begin{proof} We note that the density of each $z = (\theta, \tilde y, u, y)$ sampled by \Cref{MFABC:Importance} is $$ g(z) = \check f(\tilde y, y~|~\theta) \hat q(\theta), $$ on $\mathcal Z = \Theta \times \mathcal Y \times [0,1] \times \mathcal Y$. 
Furthermore, the multifidelity weight in \Cref{eq:w_mf} integrates such that $$ \int_0^1 w(z) ~\mathrm du = \frac{\pi(\theta)}{q(\theta)} \mathbb I(y \in \Omega_\epsilon), $$ which is independent of $\tilde y$. Therefore, for any integrable $F:\Theta \rightarrow \mathbb R$, we have the identity \begin{align*} \int_{\Theta} F(\theta) p_\epsilon(\theta~|~\obs y) ~\mathrm d\theta &= \frac{Z_q}{Z} \int_{\Theta \times \mathcal Y} F(\theta) \frac{\pi(\theta)}{q(\theta)} \mathbb I(y \in \Omega_\epsilon) f(y~|~\theta) \hat q(\theta) ~\mathrm d\theta ~\mathrm dy \\ &= \frac{Z_q}{Z} \int_{\Theta \times \mathcal Y^2} F(\theta) \frac{\pi(\theta)}{q(\theta)} \mathbb I(y \in \Omega_\epsilon) \check f(\tilde y, y~|~\theta) \hat q(\theta) ~\mathrm d\theta ~\mathrm dy ~\mathrm d\tilde y \\ &= \frac{Z_q}{Z} \int_{\mathcal Z} F(\theta) w(z) g(z) ~\mathrm dz. \end{align*} Thus the normalised Monte Carlo estimate in \Cref{eq:estimate} is consistent for a weighted sample from \Cref{MFABC:Importance}. \end{proof} The key issue for \Cref{MFABC:Importance} (MF-ABC-IS) is the choice of continuation probability, $\alpha(\theta, \tilde y) \in (0,1]$. Smaller values of $\alpha$ lead to greater computational savings, since high-fidelity simulations are generated less often. However, it is possible to show that this comes at the cost of an increase in the mean squared error (MSE) of the estimators, $\bar F$, of integrable functions $F:\Theta \rightarrow \mathbb R$. In \Cref{s:Performance}, we quantify this tradeoff through the definition of the algorithm's efficiency. \begin{note} The multifidelity weight in \Cref{eq:w_mf} compares a uniform $u \in [0,1]$ with $\alpha(\theta, \tilde y)$. More generally, we may consider any multifidelity weight $w(\theta, \tilde y, u, y)$ that depends on $u \in \mathcal U$ with any distribution, $p(u~|~\theta, \tilde y)$, that does not rely on the expensive high-fidelity model.
If $w$ is designed such that \[ \int_{u \in \mathcal U} w(\theta, \tilde y, u, y) p(u~|~\theta, \tilde y) ~\mathrm du = \frac{\pi(\theta)}{q(\theta)} \mathbb I(y \in \Omega_\epsilon), \] then, by an extension of \Cref{MFABCValidity}, the multifidelity weight remains valid. \end{note} \subsection{ABC performance} \label{s:Performance} In \Cref{s:Background} so far, we have summarised previous developments of ABC algorithms that aim to improve performance, but have not yet defined how to quantify this improvement. For any weighted sample $\{\theta_n, w_n\}$ built using \Cref{ABC:Importance} (ABC-IS) or \Cref{MFABC:Importance} (MF-ABC-IS), each sampled pair $(\theta_n, w_n)$ also incurs a computational cost, which we will denote $T_n$, producing a total computational cost of $T_{\mathrm{total}} = \sum_n T_n$ for the sample. We define the observed efficiency of such a sample as follows. \begin{definition} \label{def:efficiency:obs} The \emph{effective sample size}~\cite{Elvira2018,Liu2004} of a weighted Monte Carlo sample $\{ \theta_n, w_n \}$ is \[ \mathrm{ESS} = \frac{\left(\sum_n w_n\right)^2}{\sum_n w_n^2}. \] Given a computational cost of $T_n$ for each sampled pair $(w_n, \theta_n)$, the \emph{observed efficiency} of a weighted Monte Carlo sample is \[ \frac{\mathrm{ESS}}{T_{\mathrm{total}}} = \frac{\left(\sum_n w_n\right)^2}{\left( \sum_n w_n^2 \right) \left( \sum_n T_n \right)}, \] which is expressed in units of effective samples per time unit. \end{definition} Larger values of ESS typically correspond to smaller values of MSE in estimates of the form in \Cref{eq:estimate}. Since ESS and $T_{\text{total}}$ both scale linearly with $N$, taking limits as $N \rightarrow \infty$ in the observed efficiency motivates the following definition of the theoretical efficiency of an ABC algorithm.
\begin{definition} \label{def:efficiency:th} The \emph{theoretical efficiency} of an ABC algorithm generating a weighted Monte Carlo sample $\{ \theta_n, w_n \}$ with simulation times $\{ T_n \}$ is \[ \psi = \frac{\mathbb E(w)^2}{\mathbb E(w^2) \mathbb E(T)}, \] which is expressed in units of effective samples per time unit. \end{definition} Note that the theoretical efficiency is not just a characteristic of the ABC algorithm and of the models, but also of the numerical implementation and of the hardware generating the simulations. For example, the theoretical efficiency of \Cref{ABC:Importance} (ABC-IS) is \[ \psi_{\text{ABC-IS}} = \frac{Z}{\mathbb E_{p_\epsilon}(\pi / \hat q) \mathbb E_{\hat q}(T)}, \] where $Z = \mathbb P(y \in \Omega_\epsilon)$ is the normalisation constant in \Cref{eq:ABCposterior}, and where $\mathbb E_\nu$ denotes expectations with respect to the probability density $\nu$. Setting $\hat q = \pi$, the theoretical efficiency of ABC-RS is \[ \psi_{\text{ABC-RS}} = \frac{Z}{\mathbb E_{\pi}(T)}. \] Hence, the theoretical efficiency of ABC-IS is improved over ABC-RS by choosing an importance distribution, $\hat q$, that is more likely than $\pi$ to propose $\theta$ with small simulation times and high posterior likelihood. In the case of \Cref{MFABC:Importance} with $\hat q(\theta) = \pi(\theta)$ (MF-ABC-RS), this performance measure has been used to determine a good choice of continuation probability, $\alpha(\theta, \tilde y)$~\cite{Prescott2020}. By using the theoretical efficiency, $\psi$, as a performance metric in the remainder of this paper, we will quantify the improvement in performance over \Cref{ABC:Importance,ABC:SMC,MFABC:Importance} that can be achieved by combining multifidelity and SMC techniques. \section{Multifidelity ABC-SMC} \label{s:MF-ABC-SMC} There are two distinct approaches to improving the performance of ABC parameter inference specified in \Cref{s:Background}.
ABC-SMC proposes parameters $\theta_n$ from a sequence of importance distributions that progressively approximate the target ABC approximation to the posterior. In contrast, MF-ABC enables sampled parameters to be weighted without necessarily having to produce a simulation, $y_n$, from the high-fidelity model. In this section we present the main contribution of this paper, which is to combine these orthogonal approaches into an MF-ABC-SMC algorithm. We will replicate the procedure of extending \Cref{ABC:Importance} (ABC-IS) into \Cref{ABC:SMC} (ABC-SMC) in the multifidelity context, focusing first on the $O(N^2)$ SMC sampler based on sequential importance sampling. Suppose that, as in \Cref{s:SMC}, we have a decreasing sequence of thresholds, $\epsilon_1 > \dots > \epsilon_T = \epsilon$, inducing the neighbourhoods, $\Omega_{\epsilon_1} \supseteq \cdots \supseteq \Omega_{\epsilon_T}$, and a sequence of ABC posteriors, $p_{\epsilon_t}(\theta~|~\obs y)$. In principle, we can replace the call of \Cref{ABC:Importance} (ABC-IS) in step 2 of \Cref{ABC:SMC} with a call of \Cref{MFABC:Importance} (MF-ABC-IS) instead, and the SMC algorithm would proceed in much the same way. However, the key difficulty with implementing multifidelity ABC-SMC lies in the definition of the importance distribution, $\hat q_{t+1}$, for generation $t+1$, given the Monte Carlo sample, $\{ \theta_n^{(t)}, w_n^{(t)} \}_{n=1}^{N_t}$, returned in generation $t$. Since the weights, $w_n$, are now calculated using the multifidelity weight in \Cref{eq:w_mf}, there is a positive probability that there exist $w_n < 0$. Negative values of $w_n$ are generated whenever $\tilde y_n \in \Omega_\epsilon$, but where we also simulate the high-fidelity model such that $y_n \notin \Omega_\epsilon$. This leads to two problems with the existing definition of the importance distribution in \Cref{eq:importance}. 
Primarily, there may exist $\theta$ in the prior support with $q_{t+1}(\theta) < 0$, unless the perturbation kernels, $K_t$, are carefully designed to avoid this. Secondly, even if the $K_t$ could be chosen to guarantee $q_{t+1}(\theta)>0$ on the prior support, it is not clear that we can easily sample from the importance distribution $\hat q_{t+1} \propto q_{t+1}$ when some $\theta_n^{(t)}$ have negative weights. \begin{algorithm} \caption{Multifidelity ABC-SMC with pre-determined $\alpha_t$ (MF-ABC-SMC-$\alpha$)} \label{MFABC:SMCalpha} \begin{algorithmic}[1] \REQUIRE{ Data $\obs y$; sequence of nested neighbourhoods $\Omega_{\epsilon_T} \subseteq \Omega_{\epsilon_{T-1}} \subseteq \cdots \subseteq \Omega_{\epsilon_1}$ for $0 < \epsilon = \epsilon_T < \epsilon_{T-1} < \dots < \epsilon_1$; prior $\pi$; coupling $\check f(\cdot, \cdot~|~\theta)$ of models $f(\cdot~|~\theta)$ and $\tilde f(\cdot~|~\theta)$; initial importance distribution $\hat r_1$ (often set to $\pi$); perturbation kernels $K_t(\cdot~|~\theta)$; continuation probabilities $\alpha_t(\theta, \tilde y)$; stopping conditions $S_t$; where $t=1,\dots,T$. } \ENSURE{ Weighted sample $\{\theta_n^{(T)}, w_n^{(T)} \}_{n=1}^{N_T}$. } \FOR{$t = 1, \dots, T-1$} \STATE{Produce $\{ \theta_n^{(t)}, w_n^{(t)} \}_{n=1}^{N_t}$ from \Cref{MFABC:Importance} (MF-ABC-IS), using the neighbourhood $\Omega_{\epsilon_t}$, continuation probability $\alpha_t$, importance distribution $\hat r_t$, and stopping condition $S_t$. 
} \STATE{Set $\hat w_n^{(t)} = | w_n^{(t)} |$ for $n=1,\dots,N_t$.} \STATE{Define $\hat r_{t+1}$ proportional to $r_{t+1}$ given in \Cref{eq:new_importance} } \ENDFOR{} \STATE{Produce $\{ \theta_n^{(T)}, w_n^{(T)} \}_{n=1}^{N_T}$ from \Cref{MFABC:Importance}, using neighbourhood $\Omega_{\epsilon}$, continuation probability $\alpha_T$, importance distribution $\hat r_T$ and stopping condition $S_T$.} \end{algorithmic} \end{algorithm} In \Cref{MFABC:SMCalpha} (MF-ABC-SMC-$\alpha$) we have adapted the SMC approach to counter the possibility of negative weights, by considering a sampling algorithm that produces non-negative weights $\hat w_n^{(t)} \geq 0$ in parallel with $w_n^{(t)}$. This method closely parallels the approach to the sign problem taken in \cite{Lyne2015}. At each generation we produce two weighted samples at once, $\{ \hat w_n^{(t)}, \theta_n^{(t)} \}$ and $\{ w_n^{(t)}, \theta_n^{(t)} \}$. We replace the importance distribution $\hat q_{t+1}$ defined by $q_{t+1}$ in \Cref{eq:importance} with the importance distribution $\hat r_{t+1}$ proportional to the non-negative function \begin{equation} r_{t+1}(\theta) = \begin{cases} \sum_{n=1}^N \hat w_n^{(t)} K_t(\theta~|~\theta_n^{(t)}) \bigg/ \sum_{m=1}^N \hat w_m^{(t)} & \pi(\theta)>0, \\ 0 & \text{else,} \end{cases} \label{eq:new_importance} \end{equation} which is defined by the weights $\hat w_n^{(t)} = | w_n^{(t)} |$. 
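For a one-dimensional parameter with Gaussian perturbation kernels (assumptions made for this sketch only; the support check against $\pi$ is left to the caller), the importance distribution $\hat r_{t+1}$ of \Cref{eq:new_importance} can be sampled and evaluated as a mixture over the particles, weighted by $\hat w_n^{(t)} = |w_n^{(t)}|$:

```python
import numpy as np

def make_importance(thetas, weights, kernel_sd, rng):
    """Sketch of r_{t+1}: a mixture of Gaussian perturbation kernels
    centred on the particles, weighted by the absolute weights |w_n|."""
    thetas = np.asarray(thetas, dtype=float)
    w_hat = np.abs(np.asarray(weights, dtype=float))
    p = w_hat / w_hat.sum()                       # normalised |w_n|

    def sample():
        n = rng.choice(len(thetas), p=p)          # pick a particle by |w_n|
        return rng.normal(thetas[n], kernel_sd)   # perturb with the kernel

    def pdf(theta):                               # mixture density
        k = np.exp(-0.5 * ((theta - thetas) / kernel_sd) ** 2)
        k /= kernel_sd * np.sqrt(2.0 * np.pi)
        return float(np.dot(p, k))

    return sample, pdf
```

Note that negative weights pose no difficulty here, since only their absolute values enter the mixture.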
With this choice of importance distribution, the weighted samples $\{ \hat w_n^{(t)}, \theta_n^{(t)} \}$ can be shown to be drawn from the alternative target distribution, \begin{equation} \label{eq:new_target} \rho_t(\theta) \propto p_{\epsilon_t}(\theta~|~\obs y) + \delta_t(\theta), \end{equation} where the difference between the new target distribution, $\rho_t$, and the ABC posterior, $p_{\epsilon_t}$, is given by \[ \delta_t(\theta) = 2\pi(\theta) \int_{\mathcal Y^2} (1-\alpha_t(\theta, \tilde y)) \mathbb I(\tilde y \in \Omega_{\epsilon_t}) \mathbb I(y \notin \Omega_{\epsilon_t}) \check f(\tilde y, y~|~\theta) ~\mathrm d\tilde y ~\mathrm dy, \] for $t=1,\dots,T$. The function $\delta_t$ implies that the new target distribution (compared to $p_{\epsilon_t}$) contains additional density in regions of parameter space where it is more likely that $\tilde y \in \Omega_{\epsilon_t}$ but $y \notin \Omega_{\epsilon_t}$; in other words, where $w_n^{(t)} < 0$ is more likely. The importance distribution in \Cref{eq:new_importance} effectively makes the SMC algorithm target $\rho_t$ at each generation instead of $p_{\epsilon_t}$. However, the weighted samples $\{ w_n^{(t)}, \theta_n^{(t)} \}$, based on the multifidelity weights $w_n^{(t)}$ from each generation's weighting function, \begin{equation} w_t(\theta, \tilde y, u, y) = \frac{\pi(\theta)}{r_t(\theta)} \left( \mathbb I(\tilde y \in \Omega_{\epsilon_t}) + \frac{\mathbb I(u < \alpha_t(\theta, \tilde y))}{\alpha_t(\theta, \tilde y)} \left[ \mathbb I(y \in \Omega_{\epsilon_t}) - \mathbb I(\tilde y \in \Omega_{\epsilon_t}) \right] \right), \label{eq:w_mf_t} \end{equation} remain weighted samples from the ABC posteriors $p_{\epsilon_t}$.
Hence, at any generation (and in particular at $t=T$), we can produce an estimate of a $p_{\epsilon_t}$-integrable function $F:\Theta \rightarrow \mathbb R$, such that \[ \mathbb E_{p_{\epsilon_t}} (F) \approx \frac{\sum_n w_n^{(t)} F(\theta_n^{(t)})}{ \sum_m w_m^{(t)}}, \] is a consistent Monte Carlo estimate of $F$ under the ABC posterior. \begin{note} \label{Note:ON2} Each calculation of the weight in \Cref{eq:w_mf_t} relies on the $O(N)$ calculation of the importance weight in \Cref{eq:new_importance}, making the SMC sampler $O(N^2)$. An alternative sampling method that is linear in $N$ is proposed in \cite{DelMoral2012}, which replaces the sequential importance sampling approach described above. However, as noted in \cite{DelMoral2012}, both the $O(N)$ and $O(N^2)$ sampling algorithms require $O(N)$ simulations of $(\tilde y, y) \sim \check f(\cdot, \cdot~|~\theta)$ or $\tilde y \sim \tilde f(\cdot~|~\theta)$. In this multifidelity setting, we are assuming that simulation time comprises the vast majority of the computational burden of each calculation of \Cref{eq:w_mf_t}. The benefit of the multifidelity weight is that it reduces the computational burden of generating $(\tilde y, y)$ by sometimes requiring $\tilde y$ alone. In \Cref{LinearSMCSampler} we show that the $O(N)$ SMC sampling algorithm dilutes this benefit. Therefore, in this work we will focus on the sequential importance sampling SMC sampler described in \Cref{MFABC:SMCalpha}. \end{note} \section{Adaptive MF-ABC-SMC} \label{MFABCSMC} In \Cref{MFABC:SMCalpha} (MF-ABC-SMC-$\alpha$), in addition to an assumed sequence of ABC thresholds, $\epsilon_t$, perturbation kernels, $K_t$, and stopping conditions $S_t$, for $t=1,\dots,T$, we also assume a given sequence of continuation probabilities, $\alpha_t$. In this algorithm, each importance distribution, $\hat r_{t+1}$, is determined by the output at generation $t$. 
Methodologies for adaptively choosing the perturbation kernels, $K_{t+1}$, and the ABC thresholds, $\epsilon_{t+1}$, based on the preceding generations' samples, have been explored in previous work~\cite{DelMoral2012,Filippi2013}. In this section, we will consider the adaptive approach to choosing each generation's continuation probability, $\alpha_{t+1}$, based on the simulation output of generation $t$. \subsection{Optimal continuation probabilities} \label{s:eta} For each generation, $t$, the continuation probability, $\alpha_t$, is an input into that generation's call of \Cref{MFABC:Importance}. Dropping the generational indexing $t$ temporarily, in this subsection we first consider how to choose a continuation probability function, $\alpha(\theta, \tilde y)$, to maximise the theoretical efficiency, $\psi$, of any run of \Cref{MFABC:Importance} (MF-ABC-IS), as specified in \Cref{def:efficiency:th}. For simplicity, we will constrain the search for optimal $\alpha(\theta, \tilde y)$ to the piecewise constant form \begin{equation} \label{eq:constantrates} \alpha(\theta, \tilde y) = \eta_1 \mathbb I(\tilde y \in \Omega_{\epsilon}) + \eta_2 \mathbb I(\tilde y \notin \Omega_{\epsilon}), \end{equation} for the constants $\eta_1, \eta_2 \in (0,1]$. Here, $\eta_1$ is the probability of generating $y$ after a `positive' low-fidelity simulation (where $\tilde y \in \Omega_\epsilon$) and $\eta_2$ is the probability of generating $y$ after a `negative' low-fidelity simulation (where $\tilde y \notin \Omega_\epsilon$). The goal of this section is to specify the values of the two parameters, $\eta_1$ and $\eta_2$, that will give the largest theoretical efficiency, $\psi$. In previous work, we have derived the optimal values of $\eta_1$ and $\eta_2$ to use in the special case of \Cref{MFABC:Importance} (MF-ABC-RS) corresponding to rejection sampling, where the importance distribution $\hat q = \pi$ is equal to the prior distribution~\cite{Prescott2020}. 
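In code, the piecewise constant form of \Cref{eq:constantrates} is simply the following sketch; `expected_hifi_fraction` is an illustrative helper of ours, not a quantity used by the algorithm:

```python
def continuation_probability(lo_in_omega, eta1, eta2):
    """Piecewise constant alpha of eq:constantrates: continue to the
    high-fidelity simulation w.p. eta1 after a positive low-fidelity
    simulation, and w.p. eta2 after a negative one."""
    return eta1 if lo_in_omega else eta2

def expected_hifi_fraction(p_positive, eta1, eta2):
    """Expected fraction of proposals that also run the high-fidelity model,
    if a fraction p_positive of low-fidelity simulations land in Omega_eps."""
    return p_positive * eta1 + (1.0 - p_positive) * eta2
```

For example, with $\eta_1 = 0.8$, $\eta_2 = 0.1$ and a quarter of low-fidelity simulations positive, only $27.5\%$ of proposals would incur the high-fidelity simulation cost.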
We can now extend this analysis by finding optimal values of $\eta_1$ and $\eta_2$ to use in the more general case of \Cref{MFABC:Importance} (MF-ABC-IS). The key to this optimisation is the following lemma, which describes how the efficiency of \Cref{MFABC:Importance} varies with the continuation probabilities used. The lemma assumes a given importance distribution, $\hat q(\theta)$, defined as the normalisation of the known non-negative function $q(\theta)$. \begin{lemma} \label{lemma:phi} The theoretical efficiency, given in \Cref{def:efficiency:th}, of \Cref{MFABC:Importance} (MF-ABC-IS) varies with the continuation probabilities $\eta_1$ and $\eta_2$ according to \[ \psi(\eta_1,\eta_2) = \frac{\mathbb E(w)^2}{\mathbb E(w^2) \mathbb E(T)} = \frac{Z^2}{\phi(\eta_1, \eta_2)}, \] where the denominator is expressed as a function of $(\eta_1, \eta_2)$ such that \begin{equation} \label{eq:Phi} \phi(\eta_1, \eta_2) = \left( W + \left( \frac{1}{\eta_1} - 1 \right) \fp W + \left( \frac{1}{\eta_2} - 1 \right) \fn W \right) \left( \bar{T}_{\mathrm{lo}} + \eta_1 \bar T_{\mathrm{hi}, \mathrm p} + \eta_2 \bar T_{\mathrm{hi}, \mathrm n} \right). 
\end{equation} The coefficients in $\psi$ are given by the integrals \begin{subequations} \label{eq:PhiComponents} \begin{align} \label{eq:Z} Z &= \int L_\epsilon(\theta) \pi(\theta) ~\mathrm d\theta ,& &L_\epsilon(\theta) = \int_{\mathcal Y^2} \mathbb I(y \in \Omega_\epsilon) \check f(\tilde y, y~|~\theta) ~\mathrm d\tilde y ~\mathrm dy ,\\ W &= \int \frac{\pi(\theta)}{q(\theta)} L_\epsilon(\theta) \pi(\theta) ~\mathrm d\theta ,\\ \fp W &= \int \frac{\pi(\theta)^2}{q(\theta)} \fp p(\theta) ~\mathrm d\theta ,& &\fp p(\theta) = \int_{\mathcal Y^2} \mathbb I(\tilde y \in \Omega_\epsilon) \mathbb I(y \notin \Omega_\epsilon) \check f(\tilde y, y~|~\theta) ~\mathrm d\tilde y ~\mathrm dy ,\\ \fn W &= \int \frac{\pi(\theta)^2}{q(\theta)} \fn p(\theta) ~\mathrm d\theta ,& &\fn p(\theta) = \int_{\mathcal Y^2} \mathbb I(\tilde y \notin \Omega_\epsilon) \mathbb I(y \in \Omega_\epsilon) \check f(\tilde y, y~|~\theta) ~\mathrm d\tilde y ~\mathrm dy ,\\ \bar T_{\mathrm{lo}} &= \int T_{\mathrm{lo}}(\theta) q(\theta) ~\mathrm d\theta ,& &T_{\mathrm{lo}}(\theta) = \int_{\mathcal Y^2} T(\tilde y) \check f(\tilde y, y~|~\theta) ~\mathrm d\tilde y ~\mathrm dy ,\\ \bar T_{\mathrm{hi,p}} &= \int T_{\mathrm{hi,p}}(\theta) q(\theta) ~\mathrm d\theta ,& &T_{\mathrm{hi,p}}(\theta) = \int_{\mathcal Y^2} T(y) \mathbb I(\tilde y \in \Omega_\epsilon) \check f(\tilde y, y~|~\theta) ~\mathrm d\tilde y ~\mathrm dy ,\\ \bar T_{\mathrm{hi,n}} &= \int T_{\mathrm{hi,n}}(\theta) q(\theta) ~\mathrm d\theta ,& &T_{\mathrm{hi,n}}(\theta) = \int_{\mathcal Y^2} T(y) \mathbb I(\tilde y \notin \Omega_\epsilon) \check f(\tilde y, y~|~\theta) ~\mathrm d\tilde y ~\mathrm dy , \end{align} \end{subequations} where $T(\tilde y)$ is the computational cost of simulating $\tilde y \sim \tilde f(\cdot~|~\theta)$ and where $T(y)$ is the cost of simulating $y \sim f(\cdot~|~\theta, \tilde y)$, such that $T(\tilde y)+T(y)$ is the cost of simulating $(\tilde y, y) \sim \check f(\cdot, \cdot~|~\theta)$. 
\end{lemma} \subsubsection{Optimising efficiency} We can conclude from \Cref{lemma:phi} that the optimal continuation probabilities $(\eta_1^\star, \eta_2^\star)$ in any closed, bounded domain $\mathcal H \subseteq (0,1]^2$ are those that minimise the function $\phi(\eta_1, \eta_2)$ given in \Cref{eq:Phi}. \Cref{etastar:unbounded,etastar:boundary} below explicitly find the global minimiser of $\phi$ over $[0,\infty)^2$ (if it exists), and then over the boundary $\partial \mathcal H$ of a rectangular domain $\mathcal H = [\rho_1, 1] \times [\rho_2, 1]$, where the user-specified lower bounds $\rho_1$ and $\rho_2$ are chosen to ensure $\mathcal H$ is closed. These results combine in \Cref{etastar} to give the minimiser of $\phi$ over $\mathcal H$, and hence the optimal continuation probabilities for use in \Cref{MFABC:Importance}. \begin{lemma} \label{etastar:unbounded} We first consider all non-negative values of $\eta_1, \eta_2 \geq 0$. If $W > \fp{W} + \fn{W}$, then the minimum value of $\phi(\eta_1,\eta_2)$ in \Cref{eq:Phi}, and the optimal value of $(\eta_1, \eta_2)$ in the entire positive quadrant, are given by \begin{subequations} \label{eq:etastar:unbounded} \begin{align} \bar \phi &= \left( \sqrt{(W - \fp{W} - \fn{W}) \bar T_{\mathrm{lo}}} + \sqrt{\fp{W} \bar T_{\mathrm{hi, p}}} + \sqrt{\fn{W} \bar T_{\mathrm{hi, n}}} \right)^2, \\ \left( \bar \eta_1, \bar \eta_2 \right) &= \left( \sqrt{ \frac{\bar T_{\mathrm{lo}}}{W - \fp{W} - \fn{W}} \cdot \frac{\fp{W}}{\bar T_{\mathrm{hi, p}} }}, \sqrt{ \frac{\bar T_{\mathrm{lo}}}{W - \fp{W} - \fn{W}} \cdot \frac{\fn{W}}{\bar T_{\mathrm{hi, n}} }} \right), \end{align} \end{subequations} respectively. If $W \leq \fp{W} + \fn{W}$, then there is no minimum of $\phi(\eta_1,\eta_2)$ in $\eta_1, \eta_2 \geq 0$. 
\end{lemma} \begin{lemma} \label{etastar:boundary} Under the same conditions as \Cref{etastar:unbounded}, fix the closed region $\mathcal H = [\rho_1, 1] \times [\rho_2, 1]$ of positive continuation probabilities with user-defined lower bounds $\rho_1, \rho_2 \in (0,1)$. Define the two functions, for $x>0$, \begin{subequations} \label{eq:etastar:boundary} \begin{align} \eta_1(x) &= \max \left\{ \rho_1, ~\min \left[ 1, \sqrt{\frac{\bar T_{\mathrm{lo}} + \bar T_{\mathrm{hi}, \mathrm{n}} x }{W - \fp{W} - (1-x^{-1})\fn{W}} \cdot \frac{\fp{W}}{\bar T_{\mathrm{hi}, \mathrm p}}} ~ \right] \right\}, \\ \eta_2(x) &= \max \left\{ \rho_2, ~\min \left[ 1, \sqrt{\frac{\bar T_{\mathrm{lo}} + \bar T_{\mathrm{hi}, \mathrm{p}} x }{W - (1-x^{-1}) \fp{W} - \fn{W}} \cdot \frac{\fn{W}}{\bar T_{\mathrm{hi}, \mathrm n}}} ~ \right] \right\}. \end{align} \end{subequations} Then the minimum value of $\phi$ on the boundary, $\partial \mathcal H$, of $\mathcal H$ is attained at the minimum of $\phi(1, \eta_2(1))$, $\phi(\eta_1(1), 1)$, $\phi(\rho_1, \eta_2(\rho_1))$ or $\phi(\eta_1(\rho_2), \rho_2)$. \end{lemma} \begin{proposition} \label{etastar} Assume the same conditions as \Cref{etastar:unbounded,etastar:boundary}. Compute the minimiser, $(\bar \eta_1, \bar \eta_2)$, and minimal value, $\bar \phi$, of $\phi$ in $(0,\infty)^2$ using \Cref{etastar:unbounded}, if they exist. If $(\bar \eta_1, \bar \eta_2) \in \mathcal H$ then set $(\eta_1^\star, \eta_2^\star) = (\bar \eta_1, \bar \eta_2)$ and $\phi^\star = \bar \phi$. Otherwise, set $\phi^\star$ equal to the minimum of the four values of $\phi$ listed in \Cref{etastar:boundary}, and $(\eta_1^\star, \eta_2^\star)$ to the associated argument. Then $\phi^\star$ is the minimum value of $\phi$ over $(\eta_1, \eta_2) \in \mathcal H$, and $(\eta_1^\star, \eta_2^\star)$ are the minimising continuation probabilities. 
\end{proposition} \subsubsection{Interpreting efficiency} The optimised efficiency of \Cref{MFABC:Importance} is determined by the values of the various coefficients defined in \Cref{eq:PhiComponents}. The normalisation constant, $Z$, and the ABC approximation to the likelihood, $L_\epsilon$, are properties of the ABC approach as defined in \Cref{eq:ABC}. The coefficient $W$ can be written $W = Z \mathbb E_{\pi}(p_\epsilon/q)$, as a scaling of the prior expectation of the ratio $p_\epsilon / q$. Smaller values of $W$ correspond to $q$ being more concentrated (relative to $\pi$) in regions of high posterior density, which is a well-known characteristic of good importance distributions~\cite{Owen2013}. Thus, neither $Z$ nor $W$ relates specifically to the multifidelity approach; both are properties of ABC importance sampling. The coefficient $\bar T_{\mathrm{lo}}$ represents the average time taken to simulate $\tilde y$ from the low-fidelity model. The other two time-based coefficients, $\bar T_{\mathrm{hi,p}}$ and $\bar T_{\mathrm{hi,n}}$, represent the average time taken to complete the simulation of $(\tilde y, y) \sim \check f$ from the coupling, conditional on whether or not $\tilde y \in \Omega_\epsilon$. These coefficients therefore determine how much time can be saved by avoiding expensive simulations and stopping after generating $\tilde y \sim \tilde f(\cdot~|~\theta)$. The key determinants of the success of the multifidelity technique are $\fp W$ and $\fn W$ and their tradeoff against the high-fidelity simulation costs, $\bar T_{\mathrm{hi,p}}$ and $\bar T_{\mathrm{hi,n}}$. \Cref{eq:Phi} implies that the marginal cost to the efficiency of decreasing $\eta_1$ and $\eta_2$ is smaller for smaller values of $\fp W$ and $\fn W$. These coefficients can be written as the two prior expectations, \[ \fp W = \mathbb E_\pi \left( \frac{\pi}{q} ~\fp p \right), \quad \fn W = \mathbb E_\pi \left( \frac{\pi}{q} ~\fn p \right). 
\] Thus, they are scalings of the probabilities of a false positive (where $\tilde y \in \Omega_\epsilon$ but $y \notin \Omega_\epsilon$) and a false negative (where $\tilde y \notin \Omega_\epsilon$ but $y \in \Omega_\epsilon$), respectively. Hence, small values of $\fp W$ and $\fn W$ correspond to at least one of the following cases. First, if the low-fidelity and high-fidelity models are closely correlated, then the probability of a false positive or false negative is small. This demonstrates the value of coupling the models, as described in \Cref{s:MFABC}. Second, for small values of $\fp W$ and $\fn W$ we require $q$ to be larger than $\pi$ in regions of parameter space where false positives or false negatives are relatively likely. We can interpret this as a requirement that (in addition to $q$ being concentrated in regions of high posterior density) the region of parameter space where simulations of the low-fidelity and high-fidelity model are less often in agreement (in terms of membership of $\Omega_\epsilon$) should be explored more thoroughly by $q$ than by $\pi$. \subsection{Constructing continuation probabilities} \label{s:eta:MC} There is an important barrier to implementing \Cref{etastar} as a method for choosing optimal continuation probabilities. Before running any ABC iterations, and in the absence of extensive analysis of the models being simulated, the quantities in \Cref{eq:PhiComponents} are unknown. We therefore cannot directly construct the optimisers in \Cref{eq:etastar:unbounded,eq:etastar:boundary}. However, recall that we are aiming to use this method in the context of sequential Monte Carlo to adaptively produce continuation probabilities. In \Cref{MFABC:SMCalpha}, at each generation $t \geq 1$, we have a sample $\{ \theta_n^{(t)}, w_n^{(t)} \}$ that is used to produce $\hat r_{t+1}$. 
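When the coefficients of \Cref{eq:PhiComponents} are known, the closed-form minimiser of \Cref{etastar:unbounded} can be checked numerically. The coefficient values in the usage below are purely illustrative assumptions, not estimates from any model, and the function names are ours:

```python
import math

def phi(eta1, eta2, W, Wfp, Wfn, T_lo, T_hi_p, T_hi_n):
    """Objective of eq:Phi, whose minimiser gives optimal continuation probabilities."""
    a = W + (1.0 / eta1 - 1.0) * Wfp + (1.0 / eta2 - 1.0) * Wfn
    b = T_lo + eta1 * T_hi_p + eta2 * T_hi_n
    return a * b

def interior_optimum(W, Wfp, Wfn, T_lo, T_hi_p, T_hi_n):
    """Closed-form minimiser of phi over (0, inf)^2 (eq:etastar:unbounded);
    requires W > Wfp + Wfn, otherwise no interior minimum exists."""
    c = W - Wfp - Wfn
    assert c > 0, "no interior minimum when W <= Wfp + Wfn"
    eta1 = math.sqrt((T_lo / c) * (Wfp / T_hi_p))
    eta2 = math.sqrt((T_lo / c) * (Wfn / T_hi_n))
    phi_bar = (math.sqrt(c * T_lo) + math.sqrt(Wfp * T_hi_p)
               + math.sqrt(Wfn * T_hi_n)) ** 2
    return eta1, eta2, phi_bar
```

For example, with $W=1$, $\fp W=0.05$, $\fn W=0.02$, $\bar T_{\mathrm{lo}}=1$ and $\bar T_{\mathrm{hi,p}}=\bar T_{\mathrm{hi,n}}=10$, the optimum continues to the high-fidelity simulation only around $7\%$ of the time after a positive low-fidelity simulation.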
The proposed approach to constructing continuation probabilities is similar: we will use the same Monte Carlo sample to also produce approximately optimal values of $\eta_1$ and $\eta_2$ defining the continuation probability $\alpha_{t+1}$ to use at generation $t+1$. The following definition specifies how to calculate approximations of the quantities in \Cref{eq:PhiComponents} using an existing Monte Carlo sample. These approximations can then be substituted into \Cref{eq:etastar:unbounded,eq:etastar:boundary}. Hence, we can estimate the optimal continuation probabilities as given by \Cref{etastar}. \begin{definition} \label{def:MonteCarlo} Consider a Monte Carlo sample $\left\{ \theta_n, w_n \right\}_{n=1}^{N}$ constructed from a run of \Cref{MFABC:Importance}, which used the importance distribution $\hat q(\theta)$ proportional to $q(\theta)$ and continuation probability $\alpha(\theta, \tilde y)$. For the set of low-fidelity simulations $\{ \tilde y_n \}_{n=1}^{N}$ and high-fidelity simulations $\{ y_n \}_{n \in M}$, where $M = \{ n~:~ y_n \text{ exists} \}$, store: the simulation times, $\tilde t_n = T(\tilde y_n)$ and $t_n = T(y_n)$; the distances from data, $\tilde d_n = d(\tilde y_n, \obs y)$ and $d_n = d(y_n, \obs y)$; the importance densities $q_n = q(\theta_n)$; and the continuation probabilities $\alpha_n = \alpha(\theta_n, \tilde y_n)$. We now consider finding the optimal continuation probability, $\alpha^\star$, to be used in a new run of \Cref{MFABC:Importance}, with the new importance distribution, $q^\star(\theta)$, and the new ABC threshold, $\epsilon$. 
We define the Monte Carlo estimates, \begin{subequations} \label{eq:MonteCarlo} \begin{align} \hat Z &= \frac{1}{N} \left[ \sum_{n=1}^N \frac{\pi(\theta_n)}{q_n} \mathbb I(\tilde d_n < \epsilon) + \sum_{n \in M} \frac{\pi(\theta_n)}{q_n \alpha_n} \left( \mathbb I(d_n < \epsilon) - \mathbb I(\tilde d_n < \epsilon)\right) \right] \label{eq:Zhat} , \\ \hat{W} &= \frac{1}{N} \left[ \sum_{n=1}^N \frac{\pi(\theta_n)^2}{q^\star(\theta_n) q_n} \mathbb I(\tilde d_n < \epsilon) + \sum_{n \in M} \frac{\pi(\theta_n)^2}{q^\star(\theta_n) q_n \alpha_n} \left( \mathbb I(d_n < \epsilon) - \mathbb I(\tilde d_n < \epsilon)\right) \right] \label{eq:MC_tpfn} , \\ \fp{\hat{W}} &= \frac{1}{N} \sum_{n \in M} \frac{ \pi(\theta_n)^2}{q^\star(\theta_n) q_n \alpha_n} \mathbb I (\tilde d_n < \epsilon) \mathbb I (d_n \geq \epsilon) \label{eq:MC_fp} , \\ \fn{\hat{W}} &= \frac{1}{N} \sum_{n \in M} \frac{ \pi(\theta_n)^2}{q^\star(\theta_n) q_n \alpha _n} \mathbb I (\tilde d_n \geq \epsilon) \mathbb I (d_n < \epsilon) \label{eq:MC_fn} , \\ \hat T_{\mathrm{lo}} &= \frac{1}{N} \sum_{n=1}^{N} \frac{q^\star(\theta_n)}{q_n} \tilde t_n \label{eq:T_lo} , \\ \hat T_{\mathrm{hi}, \mathrm p} &= \frac{1}{N} \sum_{n \in M} \frac{q^\star(\theta_n)}{q_n \alpha_n} \mathbb I(\tilde d_n < \epsilon) t_n \label{eq:T_hi_p} , \\ \hat T_{\mathrm{hi}, \mathrm n} &= \frac{1}{N} \sum_{n \in M} \frac{q^\star(\theta_n)}{q_n \alpha_n} \mathbb I(\tilde d_n \geq \epsilon) t_n \label{eq:T_hi_n} , \end{align} \end{subequations} corresponding to the quantities in \Cref{eq:PhiComponents}. 
\end{definition} The estimates in \Cref{eq:MonteCarlo} are scaled Monte Carlo estimates of the quantities in \Cref{eq:PhiComponents}, such that the approximation \[ \psi(\eta_1,\eta_2) = \frac{Z^2}{\phi(\eta_1,\eta_2)} \approx \frac{\hat Z^2}{\hat \phi(\eta_1,\eta_2)}, \] holds, with \begin{equation} \label{eq:PhiHat} \hat \phi(\eta_1, \eta_2) = \left( \hat W + \left( \frac{1}{\eta_1} - 1 \right) \fp{\hat W} + \left( \frac{1}{\eta_2} - 1 \right) \fn{\hat W} \right) \left( \hat{T}_{\mathrm{lo}} + \eta_1 \hat T_{\mathrm{hi}, \mathrm p} + \eta_2 \hat T_{\mathrm{hi}, \mathrm n} \right). \end{equation} Thus, we can substitute the estimates in \Cref{eq:MonteCarlo} into \Cref{eq:etastar:unbounded,eq:etastar:boundary}. Applying \Cref{etastar} with these estimates then provides near-optimal continuation probabilities for the new run of \Cref{MFABC:Importance}, constructed from the existing Monte Carlo sample. \begin{note} The Monte Carlo estimates in \Cref{eq:MonteCarlo} are not independent of each other, so there is a bias in the approximation $\hat Z^2 / \hat \phi$. Hence, the continuation probabilities $(\eta_1^\star, \eta_2^\star)$ returned by \Cref{etastar}, if using the estimates in \Cref{eq:MonteCarlo}, can only be near-optimal. \end{note} \subsection{Adaptive MF-ABC-SMC algorithm} \label{s:MFABC:SMC} \Cref{MFABC:SMCalpha} presents a multifidelity ABC-SMC algorithm that relies on a predetermined sequence of continuation probability functions, $\alpha_t(\theta, \tilde y)$. In \Cref{s:eta,s:eta:MC} we have shown how to use an existing Monte Carlo sample to produce continuation probabilities of the form \[ \alpha_t(\theta, \tilde y) = \eta_1 \mathbb I(\tilde y \in \Omega_{\epsilon_t}) + \eta_2 \mathbb I(\tilde y \notin \Omega_{\epsilon_t}), \] where the values of $\eta_1$ and $\eta_2$ are chosen according to \Cref{etastar}, in order to (approximately) optimise the efficiency of generating a sample from \Cref{MFABC:Importance}. 
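The estimates in \Cref{eq:MonteCarlo} can be sketched as follows, assuming the stored quantities of \Cref{def:MonteCarlo} are held in plain length-$N$ arrays (names illustrative); entries of the high-fidelity arrays are read only where the boolean mask indicates $n \in M$:

```python
def mc_estimates(prior, q, qstar, alpha, d_lo, d_hi, t_lo, t_hi, hifi, eps):
    """Monte Carlo estimates of eq:MonteCarlo from a stored MF-ABC sample.

    All arguments are length-N sequences except eps; d_hi[n] and t_hi[n]
    are read only where hifi[n] is True (i.e. where n is in the set M).
    """
    N = len(prior)
    Z = W = Wfp = Wfn = Tlo = Thp = Thn = 0.0
    for n in range(N):
        lo = 1.0 if d_lo[n] < eps else 0.0
        r = prior[n] / q[n]               # pi / q
        rr = r * prior[n] / qstar[n]      # pi^2 / (q* q)
        Tlo += (qstar[n] / q[n]) * t_lo[n]
        base = lo
        if hifi[n]:
            hi = 1.0 if d_hi[n] < eps else 0.0
            base = lo + (hi - lo) / alpha[n]
            Wfp += rr * lo * (1.0 - hi) / alpha[n]   # false positive term
            Wfn += rr * (1.0 - lo) * hi / alpha[n]   # false negative term
            s = (qstar[n] / (q[n] * alpha[n])) * t_hi[n]
            Thp += lo * s
            Thn += (1.0 - lo) * s
        Z += r * base
        W += rr * base
    return tuple(x / N for x in (Z, W, Wfp, Wfn, Tlo, Thp, Thn))
```

In \Cref{MFABC:SMC}, these would be evaluated with $q$ set to the generation-$t$ importance density $r_t$ and $q^\star$ to the new density $r_{t+1}$, before substitution into \Cref{eq:etastar:unbounded,eq:etastar:boundary}.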
This result allows us to write an adaptive MF-ABC-SMC algorithm that uses the Monte Carlo output of \Cref{MFABC:Importance} at generation $t$ to construct not just an importance distribution, $\hat r_{t+1}(\theta)$, but also a continuation probability, $\alpha_{t+1}(\theta, \tilde y)$, for use in the next generation. \Cref{MFABC:SMC} (MF-ABC-SMC) is an adaptive multifidelity sequential Monte Carlo algorithm for ABC parameter inference. In place of the pre-defined continuation probabilities $\alpha_t$ used in \Cref{MFABC:SMCalpha} (MF-ABC-SMC-$\alpha$), we instead only require an initial continuation probability, most sensibly set to $\alpha_1 \equiv 1$, and the two lower bounds, $\rho_1$ and $\rho_2$, on the allowed values of the continuation probabilities. Since the final sample is generated by a run of \Cref{MFABC:Importance}, by \Cref{MFABCValidity} it follows that the weighted sample is from the ABC posterior, $p_\epsilon(\theta~|~\obs y)$. \begin{algorithm} \caption{Multifidelity ABC-SMC (MF-ABC-SMC)} \label{MFABC:SMC} \begin{algorithmic}[1] \REQUIRE{ Data $\obs y$; sequence of nested neighbourhoods $\Omega_{\epsilon_T} \subseteq \Omega_{\epsilon_{T-1}} \subseteq \cdots \subseteq \Omega_{\epsilon_1}$ for $0 < \epsilon = \epsilon_T < \epsilon_{T-1} < \dots < \epsilon_1$; prior $\pi$; coupling $\check f(\cdot, \cdot~|~\theta)$ of models $f(\cdot~|~\theta)$ and $\tilde f(\cdot~|~\theta)$; initial importance distribution $\hat r_1$ (often set to $\pi$); initial continuation probability $\alpha_1 \equiv 1$; lower bounds on continuation probabilities, $\rho_1,\rho_2 \in (0,1)$; perturbation kernels $K_t(\cdot~|~\theta)$; stopping conditions $S_t$; where $t=1,\dots,T$. } \ENSURE{ Weighted sample $\{\theta_n^{(T)}, w_n^{(T)} \}_{n=1}^{N_T}$. 
} \FOR{$t = 1, \dots, T-1$} \STATE{Produce $\{ \theta_n^{(t)}, w_n^{(t)} \}_{n=1}^{N_t}$ from \Cref{MFABC:Importance} (MF-ABC-IS), using the neighbourhood $\Omega_{\epsilon_t}$, continuation probability $\alpha_t$, importance distribution $\hat r_t$, and stopping condition $S_t$. Store simulation times, distances, importance densities and continuation probabilities as specified in \Cref{def:MonteCarlo}.} \STATE{Set $\hat w_n^{(t)} = | w_n^{(t)} |$ for $n=1,\dots,N_t$.} \STATE{Define $\hat r_{t+1}$ proportional to $r_{t+1}$ given in \Cref{eq:new_importance}.} \STATE{Compute the estimates in \Cref{eq:MonteCarlo} from the values stored at step 2, using the importance distribution $r_{t+1}(\theta)$ and ABC threshold $\epsilon_{t+1}$.} \STATE{Calculate $(\eta_1^\star, \eta_2^\star)$ using \Cref{etastar} with lower bounds $\rho_1, \rho_2$.} \STATE{Set $\alpha_{t+1}(\theta, \tilde y) = \eta_1^\star \mathbb I(\tilde y \in \Omega_{\epsilon_{t+1}}) + \eta_2^\star \mathbb I(\tilde y \notin \Omega_{\epsilon_{t+1}})$.} \ENDFOR{} \STATE{Produce $\{ \theta_n^{(T)}, w_n^{(T)} \}_{n=1}^{N_T}$ from \Cref{MFABC:Importance}, using neighbourhood $\Omega_{\epsilon_T}$, continuation probability $\alpha_T$, importance distribution $\hat r_T$ and stopping condition $S_T$.} \end{algorithmic} \end{algorithm} \begin{note} In common with the non-adaptive \Cref{MFABC:SMCalpha} (MF-ABC-SMC-$\alpha$), the importance densities $r_t(\theta_n^{(t)})$ are required to construct the weights $w_n^{(t)}$ in step 2, each of which is an $O(N)$ calculation. However, each calculation of $r_{t+1}(\theta_n^{(t)})$ required in step 5 is also $O(N)$. Thus, there is extra cost at each generation of order $O(N^2)$. However, in common with \Cref{Note:ON2}, we will assume that this cost is justified by our aim to reduce the large simulation burden that dominates the algorithm's run time. 
\end{note} \section{Example: Kuramoto Oscillator Network} \label{s:Example} To demonstrate the multifidelity and SMC approaches to parameter inference, we will infer the parameters of a Kuramoto oscillator model on a complete network, with stochastic heterogeneity in each node's intrinsic frequency. In \Cref{s:existing} we consider the performance of the previously developed ABC algorithms introduced in \Cref{s:Background} (ABC-RS, ABC-SMC and MF-ABC-RS) and demonstrate the orthogonal ways in which the SMC and multifidelity techniques improve performance. In \Cref{s:Results} we apply the adaptive algorithm, \Cref{MFABC:SMC} (MF-ABC-SMC), to demonstrate that the efficiency of parameter estimation is significantly improved by combining the multifidelity and SMC approaches. The algorithms have been implemented in Julia~\cite{Julia} and the source code can be found at \url{github.com/tpprescott/mf-abc-smc}. The Kuramoto oscillator model is defined on a complete network of $M$ nodes, where each node, $i$, has a dynamically evolving phase value, $\phi_i$, determined by the ordinary differential equation \begin{equation} \label{eq:Kuramoto_hi} \dot \phi_i = \omega_i + \frac{K}{M} \sum_{j=1}^{M} \sin \left( \phi_j - \phi_i \right), \end{equation} for $i=1,\dots,M$. Each constant $\omega_i$, the intrinsic angular velocity, is an independent draw from a Cauchy distribution with median $\omega_0$ and dispersion parameter $\gamma$. In addition to these two parameters, we have an interconnection strength $K$. Simulations of the ODE system are run over a fixed time interval $t \in [0,T]$, and we will assume fixed initial conditions $\phi_i(0)=0$ for all $i$. The multifidelity approach makes use of a low-dimensional approximation of the coupled oscillator dynamics, as described in~\cite{Hannay2018,Ott2008,Ott2009}. 
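To make the dynamics concrete, \Cref{eq:Kuramoto_hi} can be integrated with a simple forward-Euler scheme. The integrator, step size and function names below are illustrative choices, not necessarily those of our Julia implementation:

```python
import math

def simulate_kuramoto(K, omega, t_end=30.0, dt=0.01):
    """Forward-Euler integration of the coupled phase ODEs in eq:Kuramoto_hi,
    from the fixed initial condition phi_i(0) = 0."""
    M = len(omega)
    phi = [0.0] * M
    for _ in range(int(round(t_end / dt))):
        new = []
        for i in range(M):
            coupling = (K / M) * sum(math.sin(phi[j] - phi[i]) for j in range(M))
            new.append(phi[i] + dt * (omega[i] + coupling))
        phi = new
    return phi

def kuramoto_parameter(phi):
    """Z_1 = (1/M) sum_j exp(i phi_j); returns (R, Phi) = (|Z_1|, arg Z_1)."""
    z = sum(complex(math.cos(p), math.sin(p)) for p in phi) / len(phi)
    return abs(z), math.atan2(z.imag, z.real)
```

Two sanity checks: with $K=0$ the oscillators decouple and each phase grows linearly at rate $\omega_i$; with identical intrinsic frequencies the network starts, and remains, fully synchronised ($R \equiv 1$).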
The approximation is based on tracking the Daido order parameters, which are a set of complex-valued representations of the high-dimensional vector $(\phi_i)_{i=1}^M$, defined as \[ Z_n(t) = \frac{1}{M} \sum_{j=1}^M \exp( i n \phi_j), \] for positive integers $n$ and the imaginary unit $i$. A system of coupled ODEs can be generated for the set of $Z_n$. Under the assumption that $Z_n(t) = Z_1(t)^n$, known as the Ott--Antonsen (OA) ansatz~\cite{Hannay2018,Ott2008}, the system can be reduced to a single ODE for $Z_1$, which is known as the Kuramoto parameter. This complex-valued trajectory is usually represented by two real trajectories, corresponding to its magnitude $R(t) = \| Z_1(t) \|$ and phase $\Phi(t) = \arg(Z_1(t))$. The approximation of the $M$-dimensional ODE system in \Cref{eq:Kuramoto_hi} under the OA ansatz is thus given by the two-dimensional ODE system \begin{subequations} \label{eq:Kuramoto_lo} \begin{align} \dot{\tilde R} &= \left( \frac{K}{2} - \gamma \right) \tilde R - \frac{K}{2} \tilde R^3, \\ \dot{\tilde \Phi} &= \omega_0, \end{align} \end{subequations} with initial conditions $(\tilde R(0), \tilde \Phi(0)) = (1, 0)$, which directly simulates the low-dimensional representation of the $M$-dimensional state vector. The goal of this example is to infer the parameters $(K, \omega_0, \gamma)$ based on synthetic data, generated by simulating a system of $M=256$ oscillators with random angular velocities $\omega_i$ over $t \in (0, 30]$. We record the trajectories $\obs R(t)$ and $\obs \Phi(t)$ of the magnitude and phase of the Kuramoto parameter. The parameter values used to generate these data are $(K=2,~\omega_0 = \pi/3,~\gamma=0.1)$. The likelihood of the observed data under the model in \Cref{eq:Kuramoto_hi} with stochastic parameters $\omega_i \sim \mathrm{Cauchy}(\omega_0, \gamma)$ is unavailable, and we must therefore resort to ABC inference, requiring repeated simulation. 
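The reduced system in \Cref{eq:Kuramoto_lo} is very cheap to integrate; a forward-Euler sketch (again illustrative, not our Julia implementation):

```python
import math

def simulate_oa(K, omega0, gamma, t_end=30.0, dt=0.01):
    """Forward-Euler integration of the reduced OA system in eq:Kuramoto_lo,
    from the initial condition (R(0), Phi(0)) = (1, 0)."""
    R, Phi = 1.0, 0.0
    for _ in range(int(round(t_end / dt))):
        R += dt * ((K / 2.0 - gamma) * R - (K / 2.0) * R ** 3)
        Phi += dt * omega0
    return R, Phi
```

For $K > 2\gamma$, the magnitude equation has the stable fixed point $\tilde R^\star = \sqrt{1 - 2\gamma/K}$, which for the data-generating values $(K, \gamma) = (2, 0.1)$ gives $\sqrt{0.9} \approx 0.949$, while the phase grows linearly at rate $\omega_0$.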
\begin{figure} \caption{In colour are five trajectories for $R(t)$ and $\Phi(t)$ for the high-fidelity model in \Cref{eq:Kuramoto_hi}.} \label{fig:eg_dynamics} \end{figure} Example trajectories of the high-fidelity and low-fidelity models in \Cref{eq:Kuramoto_hi,eq:Kuramoto_lo} are given in \Cref{fig:eg_dynamics}. The trajectories $\obs R(t)$, $\obs \Phi(t)$, $R(t)$, $\Phi(t)$, $\tilde R(t)$ and $\tilde \Phi(t)$ on $t \in [0,30]$ are infinite-dimensional. In order to easily compare trajectories, we will select a finite number of informative \emph{summary statistics} from the trajectories, guided by the approximated system in \Cref{eq:Kuramoto_lo}. We take \begin{align*} S_1(R, \Phi) &= \left( \frac{1}{30} \int_0^{30} R(t) ~\mathrm dt \right)^2, \\ S_2(R, \Phi) &= \frac{1}{30} \left( \Phi(30) - \Phi(0) \right), \\ S_3(R, \Phi) &= R \left( T_{1/2} \right), \end{align*} where $T_{1/2}$ is the first value of $t \in [0,30]$ for which $\obs R(t)$ is halfway between $\obs R(0)=1$ and its average value $S_1(\obs R, \obs \Phi)^{1/2}$. Justification for the choice of these summary statistics and distances is provided in \Cref{appendix:summary_statistics}. Simulation of the high-fidelity model produces $y \sim f(\cdot~|~(K, \omega_0, \gamma))$ by: (a) generating $\omega_i$, $i=1,\dots,256$, from $\mathrm{Cauchy}(\omega_0, \gamma)$; then (b) simulating the ODE system in \Cref{eq:Kuramoto_hi}; then (c) computing $y = (S_1(R, \Phi), S_2(R, \Phi), S_3(R, \Phi))$. Simulation of the low-fidelity model produces $\tilde y \sim \tilde f(\cdot~|~(K, \omega_0, \gamma))$ by: (a) simulating the ODE system in \Cref{eq:Kuramoto_lo}; then (b) computing $\tilde y = (S_1(\tilde R, \tilde \Phi), S_2(\tilde R, \tilde \Phi), S_3(\tilde R, \tilde \Phi))$. The distances $d(y, \obs y)$ and $d(\tilde y, \obs y)$ are defined according to the weighted Euclidean norm, $d(a,b)^2 = 4(a_1 - b_1)^2 + (a_2 - b_2)^2 + (a_3 - b_3)^2$. Note that the low-fidelity model is deterministic. 
Therefore there is no meaningful definition of a coupling between the models at each fidelity: any simulation $y \sim f(\cdot~|~\tilde y,~(K, \omega_0, \gamma)) = f(\cdot~|~(K, \omega_0, \gamma))$ from the high-fidelity model will be independent of $\tilde y$. \subsection{Existing ABC algorithms} \label{s:existing} We set independent uniform priors on $[1, 3]$, $[-2\pi, 2\pi]$ and $[0, 1]$ for $K$, $\omega_0$ and $\gamma$, respectively. Using the uniform prior as importance distributions, samples from the ABC posterior, $p_{0.5}((K, \omega_0, \gamma) ~|~\obs y)$, are produced using \Cref{ABC:Importance} (ABC-RS), \Cref{ABC:SMC} (ABC-SMC), and \Cref{MFABC:Importance} (MF-ABC-RS) with $\epsilon = 0.5$. The continuation probability used in \Cref{MFABC:Importance} (MF-ABC-RS) is the constant $\alpha \equiv 0.5$. The resulting samples are depicted in \Cref{post:ABC:Rejection,post:ABC:SMC,post:MFABC:Rejection}. \Cref{table:ABCFlavours} shows the observed values for the performance of each of these algorithms, quantified in terms of ESS, total simulation time, and observed efficiency (i.e. the ratio of the first two). Note that the ESS of the sample from \Cref{ABC:SMC} (ABC-SMC) depends on only $N_4 = 1500$ weights, $w_n^{(4)}$, corresponding to the final generation. However, we will measure the observed efficiency by using the total time to simulate, which includes the total simulation time of the preceding generations. \begin{table} \center \begin{tabularx}{\textwidth}{l *4{Y}} \toprule Algorithm & ESS & Sim. time (\si{\minute}) & \multicolumn{2}{c}{Efficiency (ESS \si{\per\minute})} \\ \midrule \Cref{ABC:Importance} (ABC-RS) & 148.0 & 43.6 & 3.39 & ~ \\ \Cref{ABC:SMC} (ABC-SMC) & 255.1 & 48.8 & 5.23 & $\times 1.54$ \\ \Cref{MFABC:Importance} (MF-ABC-RS) & 126.4 & 22.9 & 5.52 & $\times 1.63$ \\ \bottomrule \end{tabularx} \caption{Comparing existing ABC algorithms for sampling $p_{0.5}((K, \omega_0, \gamma)~|~\obs y)$ based on a uniform prior, using $\epsilon=0.5$. 
The stopping condition for \Cref{ABC:Importance,MFABC:Importance} is $N=6000$. The threshold schedule for \Cref{ABC:SMC} is $(2, 1.5, 1, 0.5)$ with stopping conditions $S_t$ of $N_t = 1500$ parameter proposals for $t=1,2,3,4$, leading to the same total number of parameter proposals as \Cref{ABC:Importance,MFABC:Importance}. The perturbation kernels $K_t$ in \Cref{ABC:SMC} are Gaussian with diagonal covariance equal to twice the empirical variance of the sample at generation $t$~\cite{Beaumont2009}. The continuation probability used in \Cref{MFABC:Importance} is fixed at the constant $\alpha \equiv 0.5$. The multipliers in the final column refer to the increase in efficiency over the base efficiency of ABC-RS.} \label{table:ABCFlavours} \end{table} Even with minimal tuning of \Cref{MFABC:Importance,ABC:SMC}, the samples built using these algorithms both show significant improvements in efficiency. We have chosen stopping conditions to ensure an equal number of parameter proposals for each algorithm, in order to demonstrate the distinct effects of each. \Cref{ABC:SMC} (ABC-SMC) produces a larger ESS for a similar simulation time. This is characteristic of ABC-SMC, whereby parameters with low likelihood are less likely to be proposed. However, \Cref{MFABC:Importance} (MF-ABC-RS) instead speeds up the simulation time of the fixed number of parameter proposals, albeit with some damage to the ESS. This result illustrates the orthogonal effects of the SMC and multifidelity ABC algorithms, and thus the potential for combining the techniques in \Cref{MFABC:SMC} to produce further gains in efficiency. The key to the success of MF-ABC-RS and the multifidelity approach in general is the assumption that the low-fidelity model is cheaper to simulate than the high-fidelity model. 
In this case, the high-fidelity model in \Cref{eq:Kuramoto_hi} has a mean (respectively, standard deviation) simulation time of approximately 520 (638) \si{\micro\second}, while the low-fidelity model in \Cref{eq:Kuramoto_lo} has a mean (standard deviation) simulation time of approximately 10 (12) \si{\micro\second}. Note that these averages and standard deviations are observed across the uniform distribution of parameter values on the intervals $[1, 3]$, $[-2\pi, 2\pi]$ and $[0, 1]$ for $K$, $\omega_0$ and $\gamma$, respectively. The relatively large standard deviations imply that parameter proposals in this domain can produce very different simulation times. Different importance distributions will therefore vastly alter the relative simulation costs of the high-fidelity and low-fidelity models. \subsection{Multifidelity ABC-SMC} \label{s:Results} In order to demonstrate the increased efficiency of combining multifidelity approaches with SMC, we produced $100$ samples from the ABC posterior, $p_{0.1}((K, \omega_0, \gamma)~|~\obs y)$, consisting of $50$ replicates from each of \Cref{ABC:SMC} (ABC-SMC) and \Cref{MFABC:SMC} (MF-ABC-SMC). Common to both algorithms is the number of generations, $T=8$, which corresponds to the nested sequence of ABC neighbourhoods $\Omega_{\epsilon_t}$ with the sequence of thresholds 2.0, 1.5, 1.0, 0.8, 0.6, 0.4, 0.2 and 0.1. Each generation has a stopping condition of $\mathrm{ESS} \geq 400$, evaluated after every $100$ parameter proposals (to allow for parallelisation). This condition reflects a specification that we need each generation's sample to be, in some sense, `good enough' to produce a reliable importance distribution that can be used in the next generation. Finally, we specified the perturbation kernels $K_t(\cdot~|~(K, \omega_0, \gamma)^{(t)}_n)$ at each generation to be Gaussians centred on the parameter value $(K, \omega_0, \gamma)^{(t)}_n$. 
The covariance matrices are diagonal matrices $\mathrm{diag}\left((\sigma_K^{(t)})^2, (\sigma_{\omega_0}^{(t)})^2, (\sigma_\gamma^{(t)})^2\right)$, where \begin{align*} (\sigma_K^{(t)})^2 &= 2 \frac{\sum |w_n^{(t)}| (K_n^{(t)} - \mu_K^{(t)})^2}{\sum |w_n^{(t)}|}, \\ \mu_K^{(t)} &= \frac{\sum |w_n^{(t)}| K_n^{(t)} }{\sum |w_n^{(t)}|}, \end{align*} and similarly for $\sigma_{\omega_0}^{(t)}$ and $\sigma_\gamma^{(t)}$. These perturbation kernels implement a typical choice for the covariance of using twice the empirical variance of the observed parameter values~\cite{Beaumont2009,Filippi2013}. Note that we use this definition for the multifidelity case also, where weights $w_n^{(t)}$ may be negative, since we are using the $\rho_t$ (as defined in \Cref{eq:new_target}) and not the $p_{\epsilon_t}$ as the target distributions. Further to these common inputs, the parameters $\rho_1$ and $\rho_2$ are the only additional algorithm parameters we need to specify to implement \Cref{MFABC:SMC} (MF-ABC-SMC). We set lower bounds of $\rho_1 = \rho_2 = 0.01$ on the allowed continuation probabilities, with the aim of limiting the variability of $w_n^{(t)}$ and preventing the collapse of the ESS. \subsubsection{Multifidelity ABC-SMC increases observed efficiency} \label{s:Results:Efficiency} \Cref{ABC:SMC,MFABC:SMC} were implemented and run using Julia 1.5.1 on a 36 core CPU (2 $\times$ 18 core with hyperthreading), 2.3/3.7 GHz, 768 GB RAM. \Cref{tab:results} quantifies the average performance for each of \Cref{ABC:SMC,MFABC:SMC}, separated out for each generation from $t=1,\dots,8$, and also across the entire SMC algorithm. The final row of this table demonstrates that \Cref{MFABC:SMC} (MF-ABC-SMC) results in a 60\% saving in the total simulation time required to produce a final sample from the ABC posterior $p_{0.1}$ with an ESS of approximately 400. This corresponds to an efficiency 2.48 times that of \Cref{ABC:SMC} (ABC-SMC).
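The twice-the-weighted-empirical-variance rule for the perturbation kernel covariances can be made concrete as follows. The paper's implementation is in Julia; this Python fragment is purely illustrative, and the function and array names are assumptions:

```python
import numpy as np

def perturbation_variances(params, weights):
    """Diagonal of the Gaussian perturbation kernel's covariance:
    twice the |w|-weighted empirical variance of each parameter
    component. `params` is an (N, d) array of proposals (here the
    columns would be K, omega_0, gamma); `weights` may contain
    negative multifidelity weights, hence the absolute values."""
    absw = np.abs(np.asarray(weights, dtype=float))
    mu = absw @ params / absw.sum()                     # weighted mean per component
    return 2.0 * (absw @ (params - mu) ** 2) / absw.sum()
```

Each generation $t$ would apply this to the sample $\{ \theta_n^{(t)}, w_n^{(t)} \}$ before drawing perturbed proposals from $K_{t+1}$.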
This performance improvement is derived from an 85\% saving in the simulation time required for each parameter proposal when averaged across the entire SMC algorithm. \begin{table}[] \centering \begin{tabularx}{\textwidth}{c *9{Y}} \toprule ~ & \multicolumn{3}{c}{a. Sim. time / proposal} & \multicolumn{3}{c}{b. Simulation time} & \multicolumn{3}{c}{c. Efficiency} \\ ~ & \multicolumn{3}{c}{\si{\micro\second}} & \multicolumn{3}{c}{\si{\minute}} & \multicolumn{3}{c}{ESS \si{\per\second}} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(l){8-10} $t$ & ABC & \multicolumn{2}{c}{MF-ABC} & ABC & \multicolumn{2}{c}{MF-ABC} & ABC & \multicolumn{2}{c}{MF-ABC} \\ \midrule 1 & 537 & 530 & $-1\%$ & 13.6 & 13.4 & $-2\%$ & 0.51 & 0.52 & $\times 1.01$ \\ 2 & 675 & 31 & $-95\%$ & 11.2 & 1.3 & $-88\%$ & 0.63 & 8.35 & $\times 13.4$ \\ 3 & 665 & 33 & $-95\%$ & 15.2 & 2.0 & $-87\%$ & 0.46 & 4.41 & $\times 9.59$ \\ 4 & 599 & 35 & $-94\%$ & 11.6 & 1.9 & $-84\%$ & 0.60 & 4.99 & $\times 8.26$ \\ 5 & 525 & 35 & $-93\%$ & 11.3 & 2.2 & $-81\%$ & 0.62 & 3.99 & $\times 6.47$ \\ 6 & 424 & 36 & $-92\%$ & 10.8 & 2.6 & $-76\%$ & 0.64 & 2.90 & $\times 4.53$ \\ 7 & 330 & 46 & $-86\%$ & 13.6 & 6.0 & $-56\%$ & 0.50 & 1.16 & $\times 2.31$ \\ 8 & 268 & 80 & $-70\%$ & 15.5 & 11.8 & $-24\%$ & 0.43 & 0.57 & $\times 1.31$ \\ \midrule SMC & 449 & 65 & $-85\%$ & 102.9 & 41.2 & $-60\%$ & 0.066 & 0.163 & $\times 2.48$ \\ \bottomrule \end{tabularx} \caption{ Row $t$ shows empirical mean values for 50 ABC-SMC samples and 50 MF-ABC-SMC samples of: (a) the simulation time divided by the number, $N_t$, of parameter proposals in generation $t$; (b) the simulation time in generation $t$; (c) ESS divided by simulation time in generation $t$. The final row shows the empirical mean values, for each sample, of (a) the total simulation time divided by the total number, $\sum_t N_t$, of parameter proposals; (b) the total simulation time; (c) ESS from generation $8$ divided by total simulation time (all generations).
The third column under each of (a), (b) and (c) quantifies the improvement in performance of MF-ABC-SMC over ABC-SMC. } \label{tab:results} \end{table} \Cref{fig:empirical_means} depicts 100 estimates of the posterior mean of each of the parameters, $\mathbb E_{p_{0.1}}(K)$, $\mathbb E_{p_{0.1}}(\omega_0)$ and $\mathbb E_{p_{0.1}}(\gamma)$, constructed from the 50 samples generated by each of \Cref{ABC:SMC,MFABC:SMC}. This figure demonstrates that the estimates generated by the multifidelity algorithm, MF-ABC-SMC, are of a similar quality to those produced by ABC-SMC.\footnote{ In addition to the observation of the posterior means in \Cref{fig:empirical_means}, we have also depicted representative posterior samples from each algorithm in \Cref{post:ABC:SMC:ESS400,post:MFABC:SMC:ESS400}. } Taking $K$, $\omega_0$ and $\gamma$ in turn, the means of the 50 estimates produced by each algorithm are, to three significant figures, indistinguishable at 2.17, 1.06 and 0.125. Similarly, the variability of these estimates is also broadly similar: \Cref{ABC:SMC} (respectively, \Cref{MFABC:SMC}) produces estimates with standard deviations 0.0268 (0.0277), 0.0034 (0.0027), and 0.00271 (0.00265). Importantly, the distribution of total simulation times for each of these 100 estimates demonstrates that MF-ABC-SMC is reliably and significantly less computationally expensive than ABC-SMC in producing comparable samples, with the average simulation times reflecting the 60\% saving identified in \Cref{tab:results}. \begin{figure*} \caption{ The empirical posterior means of each of $K$, $\omega_0$ and $\gamma$ for each of the 100 samples, plotted against the total simulation time required to generate each mean. } \label{fig:empirical_means} \end{figure*} Recall our initial observation that the average simulation time of the low-fidelity model, 10 \si{\micro\second}, is approximately 2\% of that of the high-fidelity model, 520 \si{\micro\second}, with expectations taken over the prior.
Given this initial difference in each model's average simulation time, the observed 148\% increase in efficiency from \Cref{ABC:SMC} to \Cref{MFABC:SMC} is determined by a number of other factors specific to the characteristics of SMC sampling, which we now explore. \subsubsection{MF-ABC-SMC is more effective in early generations} \label{s:Results:Generations} In producing the samples summarised in \Cref{tab:results,fig:empirical_means}, we enforced the same decreasing schedule of $\epsilon_t$ and the same stopping criteria ($\mathrm{ESS} \geq 400$) for all runs of both \Cref{ABC:SMC} (ABC-SMC) and \Cref{MFABC:SMC} (MF-ABC-SMC). This allows a direct comparison of the two algorithms' efficiencies at each generation, in addition to their overall performance. \Cref{fig:efficiencies_generation} depicts the distributions of each of the measures for which the means are given in \Cref{tab:results}. As the generation index varies, the performance improvement of MF-ABC-SMC over ABC-SMC differs significantly. By all three of the measures, the benefit of MF-ABC-SMC appears to accrue most significantly in the earlier generations. There are a number of factors that explain these observed differences. \begin{figure} \caption{ Observed distributions of simulation time per proposal, total simulation time, and efficiency at each generation, for the 50 samples from each of \Cref{ABC:SMC} (ABC-SMC) and \Cref{MFABC:SMC} (MF-ABC-SMC).} \label{fig:efficiencies_generation} \end{figure} First, we observe that the average simulation time per proposal of \Cref{ABC:SMC} (ABC-SMC) decreases as $t$ increases. As the importance distribution evolves towards the posterior through the SMC algorithm, the most expensive high-fidelity simulations are required much less often. This means that the relative saving of using the low-fidelity model also evolves with the generation, $t$.
The continuation probabilities used in \Cref{MFABC:SMC} (MF-ABC-SMC) aim to balance the saving in simulation cost against the probability of false positives and false negatives, according to the optima in \Cref{eq:etastar:unbounded}. Since there is less computational cost available to save in later generations, we find larger continuation probabilities there. \Cref{fig:etas} shows how the values of $\eta_1$ and $\eta_2$ vary for each of the 50 instances of \Cref{MFABC:SMC} (MF-ABC-SMC) across generations $t=2,4,6,8$, reflecting the adaptation of the algorithm to the changing distribution of simulation costs. Values in the bottom-left of the figure correspond to smaller continuation probabilities, and thus smaller total simulation times. At $t=8$, the optimal continuation probabilities are no longer clustered to the bottom-left of the figure, but instead have begun to migrate to the $(1,1)$ corner, which corresponds to the classical ABC-SMC approach. This leads to the increase in simulation time per proposal for MF-ABC-SMC at generation $t=8$, as more high-fidelity simulations are required. \begin{figure} \caption{ Estimated values of the optimal continuation probabilities $\alpha = \eta_1 \mathbb I(\tilde y \in \Omega_{\epsilon_t}) + \eta_2 \mathbb I(\tilde y \notin \Omega_{\epsilon_t})$ for each of the 50 instances of \Cref{MFABC:SMC} (MF-ABC-SMC) across generations $t=2,4,6,8$.} \label{fig:etas} \end{figure} Second, we observe that for generation $t=8$ we have an average of 70\% improvement in simulation time per proposal but only a 24\% average improvement in the generation's total simulation time. It follows that MF-ABC-SMC requires approximately 2.5 times as many parameter proposals, on average, in the final generation to produce an ESS of at least 400 when compared with ABC-SMC, meaning that only a 30\% improvement in efficiency can be found. Thus, the final generation incurs over a quarter of the total simulation cost of MF-ABC-SMC, on average.
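The per-generation stopping rule used throughout is phrased in terms of the effective sample size of the signed multifidelity weights. Assuming the standard Kish formula (the paper's precise definition of ESS appears earlier in the text), a minimal Python sketch:

```python
import numpy as np

def effective_sample_size(weights):
    """Kish effective sample size, (sum w)^2 / sum(w^2). Negative
    multifidelity weights shrink the numerator, so false positives
    and false negatives directly reduce the ESS."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()
```

A generation would keep proposing parameters (here, in batches of 100) until this value reaches the target of 400.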
The improved efficiency at the final generation is relatively small because at smaller ABC thresholds the inaccuracy of the low-fidelity model becomes more significant. In this example, the underlying stochasticity of the high-fidelity model means that the probability $\mathbb P(y \in \Omega_{0.1}~|~ \tilde y \in \Omega_{0.1})$ is sufficiently small to produce a high false positive rate, corresponding to a relatively large value of $\fp W$ in \Cref{eq:etastar:unbounded}. This reduces the optimal available efficiency towards that of classical ABC-SMC. It should be noted that a strength of the adaptive method of selecting continuation probabilities is that this case can be detected, so that the efficiency of \Cref{MFABC:SMC} (MF-ABC-SMC) is bounded below by the efficiency of \Cref{ABC:SMC} (ABC-SMC). Finally, note that the sequence of $\epsilon_t$ was chosen to be equal across ABC-SMC and MF-ABC-SMC to allow for a direct comparison between the efficiencies at each generation. This constraint produced an overall 60\% saving in simulation time, and a 148\% increase in efficiency. However, in practice, we should also aim to adaptively choose the sequence $\epsilon_t$ (for either algorithm) to optimise performance. In the following subsection, we describe how incorporating the adaptive selection of $\epsilon_t$ based on the preceding generations' output allows for a better comparison between the two algorithms. \subsubsection{MF-ABC-SMC reduces ABC approximation bias} \label{s:Adaptive} Implementing \Cref{MFABC:SMC} (MF-ABC-SMC) requires a decreasing sequence of $\epsilon_t$ values to be pre-specified. Often, appropriate values of $\epsilon_t$ cannot be determined before any simulations have been generated. If the sequence of $\epsilon_t$ decreases too slowly then the algorithm will take a long time to reach the target posterior; too quickly, and acceptance rates will be too low.
As a result, rather than specifying a sequence of thresholds \emph{a priori}, previous work in the SMC context~\cite{DelMoral2012} has explored choosing $\epsilon_{t+1}$ by predicting its effect on the ESS of that generation. In the spirit of this approach, we adaptively choose the sequence $\epsilon_t$ by predicting its effect on the efficiency of that generation, as defined in \Cref{lemma:phi}. \begin{figure} \caption{ Adaptively selecting $\epsilon_t$ to achieve a predicted target efficiency equal to the efficiency of the first generation. Curves plot the estimated efficiency as a function of $\epsilon$ for (a) importance distributions $\hat q_t$, and (b) importance distributions $\hat r_t$ and associated optimised continuation probabilities. Stars plot observed efficiencies, $\mathrm{ESS}$ divided by simulation time, at each adaptively selected threshold.} \label{fig:adaptive_epsilon} \label{fig:adaptive_epsilon_eta} \label{fig:adaptive} \end{figure} \Cref{fig:adaptive} depicts a possible strategy for choosing $\epsilon_{t+1}$ conditionally on the output from generation $t$ in ABC-SMC and MF-ABC-SMC. The key to this strategy is the observation that the efficiency of \Cref{MFABC:Importance}, defined by \Cref{eq:Phi,eq:PhiComponents}, depends on the ABC threshold, $\epsilon$, and the importance distribution, $\hat q$, used in \Cref{MFABC:Importance}. By writing the efficiency as $\psi(\eta_1, \eta_2;~\epsilon, \hat q)$ and fixing $\eta_1$, $\eta_2$ and $\hat q$, we can consider the efficiency of \Cref{MFABC:Importance} as a function of $\epsilon$. In particular, by setting $\eta_1 = \eta_2 = 1$, the efficiency of \Cref{ABC:Importance}, $\psi(1, 1; \epsilon, \hat q)$, also varies with $\epsilon$. We assume that at each generation we have a target efficiency, $\psi_t^\star$, that enables a given ESS to be generated with a known computational budget.
In generation $t$ of \Cref{MFABC:SMC} (MF-ABC-SMC), we produce the sample $\{ \theta_n^{(t)}, w_n^{(t)} \}$ from \Cref{MFABC:Importance} (MF-ABC-IS), which can be used to define an importance distribution, $\hat r_{t+1}$. Steps 5 and 6 of \Cref{MFABC:SMC} then use the next ABC threshold, $\epsilon_{t+1}$, to calculate optimal continuation probabilities $(\eta_1^\star, \eta_2^\star)$ by maximising the efficiency function, $\psi(\eta_1, \eta_2; \epsilon_{t+1}, \hat r_{t+1})$. In the case where $\epsilon_{t+1}$ is unknown, an adaptive approach to finding an appropriate value is to replace steps 5 and 6 of \Cref{MFABC:SMC} with the following subroutine: \begin{enumerate}[label=\alph*.] \item find $(\eta_1^\star, \eta_2^\star)$ to maximise the efficiency, $\psi(\eta_1, \eta_2; \epsilon_t, \hat r_{t+1})$, using \Cref{etastar}; \item set $\epsilon^{\star} \leftarrow \max \left\{ 0 < \epsilon \leq \epsilon_t ~:~ \psi(\eta_1^\star, \eta_2^\star; \epsilon, \hat r_{t+1}) \leq \psi^\star_{t+1} \right\} $, or $\epsilon^{\star} \leftarrow \epsilon_t$ if this set is empty; \item set $\epsilon_{t+1} \leftarrow \epsilon^\star$ and continue to step 7 of \Cref{MFABC:SMC}. \end{enumerate} This procedure produces a sequence of ABC thresholds designed such that each generation's efficiency is maintained at a target level. It alternates between finding continuation probabilities that maximise the efficiency at the preceding threshold, $\epsilon_t$, and then finding a value of $\epsilon_{t+1} < \epsilon_t$ such that the predicted efficiency matches the target. If $\epsilon_{t+1}=\epsilon_t$ then the target efficiency, $\psi_{t+1}^\star$, needs to be reviewed as it is not achievable. Note that for the case of ABC-SMC, which is equivalent to MF-ABC-SMC with fixed continuation probabilities $\eta_1 = \eta_2 = 1$, we skip the optimisation in step a and use $\eta_1^\star = \eta_2^\star = 1$ in step b.
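Step b of this subroutine can be sketched as follows. Here `psi` stands in for the predicted efficiency function $\psi(\eta_1^\star, \eta_2^\star; \cdot, \hat r_{t+1})$ with the continuation probabilities already fixed by step a, and the grid search is an illustrative assumption rather than the paper's method:

```python
import numpy as np

def select_next_threshold(psi, eps_t, psi_target, n_grid=200):
    """Return the largest eps in (0, eps_t] whose predicted
    efficiency psi(eps) has dropped to at most the target, or
    eps_t itself if no such eps exists (in which case the target
    should be reviewed, as it is not achievable)."""
    grid = np.linspace(eps_t, eps_t / n_grid, n_grid)  # decreasing grid
    feasible = [eps for eps in grid if psi(eps) <= psi_target]
    return max(feasible) if feasible else eps_t
```

Since the predicted efficiency typically decreases as the threshold shrinks, this pushes $\epsilon_{t+1}$ as low as the target efficiency allows.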
The strength of the multifidelity approach in this context is that the additional degrees of freedom afforded by the continuation probabilities allow the ABC thresholds to decrease more quickly, while maintaining a target efficiency at each generation. This benefit is depicted in \Cref{fig:adaptive}. Each curve in the left-hand plot is the predicted efficiency $\psi(1, 1; \epsilon, \hat q_{t+1})$ as a function of $\epsilon$. Each curve in the right-hand plot is the predicted efficiency $\psi(\eta_1^\star, \eta_2^\star; \epsilon, \hat r_{t+1})$, again as a function of $\epsilon$, where we have found optimal continuation probabilities, $(\eta_1^\star, \eta_2^\star)$. For each algorithm, we choose the target efficiency at each generation $t>1$ to equal the efficiency observed in generation $1$. \Cref{fig:adaptive_epsilon} demonstrates the adaptive threshold selection implemented for four generations of \Cref{ABC:SMC} (ABC-SMC), producing a decreasing sequence for $\epsilon_t$ of $2 > 1.21 > 0.83 > 0.51$. In comparison, \Cref{fig:adaptive_epsilon_eta} shows how the adaptive selection of thresholds in \Cref{MFABC:SMC} (MF-ABC-SMC) produces a sequence for $\epsilon_t$ of $2 > 0.62 > 0.23 > 0.11$. Clearly, the adaptive sequence of $\epsilon_t$ enabled by MF-ABC-SMC decreases much more quickly than the equivalent sequence for ABC-SMC, while the efficiency of each generation remains broadly constant and predictable. In our example, each generation has a stopping condition of $\mathrm{ESS} \geq 400$. As a result, by choosing to specify a constant target efficiency equal to the observed efficiency of generation $t=1$, we effectively impose a constant target simulation budget for each generation. Since (for the example in \Cref{fig:adaptive}) we have specified four generations for each run of the adaptive versions of \Cref{ABC:SMC,MFABC:SMC}, we have thus specified a fixed, equal total simulation time for each algorithm.
In this setting, the results in \Cref{fig:adaptive} show that the bias incurred by using an ABC approximation to the posterior with threshold $\epsilon>0$ is vastly reduced by implementing the multifidelity approach to SMC. Using MF-ABC-SMC, the sample produced in generation $4$ is from $p_{0.11}(\theta~|~\obs y)$, while ABC-SMC can only produce a sample from $p_{0.51}(\theta~|~\obs y)$ for a similar computational cost. Thus, by incorporating the multifidelity approach into a method for adaptively selecting ABC thresholds, we can allow the sequence $\epsilon_t$ of MF-ABC-SMC thresholds to decrease much more rapidly, at no cost to the efficiency of the algorithm. \section{Discussion and conclusions} \label{s:Discussion} In this work we have examined how to integrate two approaches to overcoming the computational bottleneck of repeated simulation within ABC: the SMC technique for producing parameter proposals, and the multifidelity technique for using low-fidelity models to reduce overall simulation time. By combining these approaches, we have produced the MF-ABC-SMC algorithm in \Cref{MFABC:SMC}. The results in \Cref{s:Example} demonstrate that the efficiency of sampling from the ABC posterior (measured as the ratio of the ESS to simulation time) can be significantly improved by using \Cref{MFABC:SMC} (MF-ABC-SMC) in place of \Cref{ABC:SMC} (ABC-SMC). This improvement was demonstrated by using a common schedule of decreasing ABC thresholds for both algorithms. In this case, the increase in efficiency was most significant during the early SMC generations, where both the average simulation time of the high-fidelity model and overall acceptance rates are relatively large. By also implementing an adaptive ABC thresholding scheme into both algorithms, and thus allowing different sequences of ABC thresholds, MF-ABC-SMC is shown to greatly reduce the bias incurred by using an ABC approximation to the posterior, in comparison to ABC-SMC.
Having introduced the MF-ABC-SMC algorithm, a number of open questions emerge. Some of these questions are specific to the implementation of multifidelity approaches. However, others arise from a re-evaluation of SMC implementation strategies in this new context. Below, we consider these two classes of question in turn. \subsection{Multifidelity implementation} The key to maximising the benefit of MF-ABC-SMC is the ability to set a continuation probability based on the simulations generated during preceding generations. The estimates in \Cref{eq:MonteCarlo} of the quantities in \Cref{eq:PhiComponents} are natural Monte Carlo approximations to the required integrals. In the SMC context, when generating the continuation probability for generation $t+1$, each of these estimates could actually be constructed using the parameter proposals, importance weights, simulations, distances, and continuation probabilities of any (or all) generations $1 \leq s \leq t$, not just generation $t$. Future work should clarify how best to combine many generations' samples into estimates of \Cref{eq:PhiComponents}, and the potential for improvement that might arise from this. Another question arises when we relax the assumption, made at the start of \Cref{s:MFABC}, that the output spaces of the two model fidelities satisfy $\tilde{\mathcal Y} = \mathcal Y$. In general, the observed data, $\obs{\tilde y} \neq \obs y$, the distance metrics, $\tilde d(\tilde y, \obs{\tilde y}) \neq d(y,\obs y)$, and the thresholds, $\tilde \epsilon \neq \epsilon$, may all be distinct. In this case, any estimate, $\tilde w$, of $\mathbb I(y \in \Omega_\epsilon)$ can be used in place of $\mathbb I(\tilde y \in \Omega_\epsilon)$ to give a multifidelity acceptance weight of the form \[ w(\theta, \tilde y, u, y) = \tilde w(\theta, \tilde y) + \frac{\mathbb I(u<\alpha(\theta, \tilde y))}{\alpha(\theta, \tilde y)} \left( \mathbb I(y \in \Omega_\epsilon) - \tilde w(\theta, \tilde y) \right).
\] Note that, in the case of equal output spaces, we might consider using $\tilde w(\theta, \tilde y) = \mathbb I(\tilde y \in \Omega_{\tilde \epsilon})$ for distinct thresholds, $\tilde \epsilon \neq \epsilon$. However, in general, $\tilde w(\theta, \tilde y)$ may also encompass completely different output spaces based on distinct modelling frameworks for the same system (albeit with the same parameter space). In a similar way to evolving continuation probabilities across generations, we could also evolve the estimate $\tilde w$ across generations, using the information gathered from repeated simulation of both high-fidelity and low-fidelity models to better approximate $\mathbb I(y \in \Omega_\epsilon)$ and thus reduce our reliance on the high-fidelity model. The form of continuation probability, $\alpha(\theta, \tilde y)$, defined in \Cref{eq:constantrates} implements a multifidelity algorithm that provides a single continuation probability for $\tilde y \in \Omega_\epsilon$ and another for $\tilde y \notin \Omega_\epsilon$, independently of the parameter value. There may be significant improvements to the multifidelity approach available through making $\alpha(\theta, \tilde y)$ depend more generally on $\theta$ and $\tilde y$. For example, there should be less need to simulate $y$ after generating $\tilde y$ such that $d(\tilde y, \obs y) \gg \epsilon$ than if $d(\tilde y, \obs y) = 1.01 \epsilon$. However, with $\alpha$ as defined in \Cref{eq:constantrates}, these two cases are treated equally. There is likely to be significant potential for improved performance from exploring less simplistic forms for the continuation probability. Finally, as has been noted in previous work on multifidelity ABC~\cite{Prescott2020}, there is much potential in being able to use multiple low-fidelity models, beyond a single low-fidelity approximation.
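The generalised multifidelity weight displayed above lends itself to a direct implementation. A minimal sketch, in which `w_tilde`, `alpha` and the `high_fidelity_accept` callback are illustrative names:

```python
def multifidelity_weight(w_tilde, alpha, high_fidelity_accept, u):
    """Generalised multifidelity weight
        w = w_tilde + I(u < alpha)/alpha * (I(y in Omega_eps) - w_tilde),
    where w_tilde is any estimate of I(y in Omega_eps) built from the
    low-fidelity simulation. The expensive callback is only invoked
    (the high-fidelity model only simulated) when u < alpha."""
    if u < alpha:
        y_in_omega = 1.0 if high_fidelity_accept() else 0.0
        return w_tilde + (y_in_omega - w_tilde) / alpha
    return w_tilde
```

Taking expectations over $u \sim \mathrm U(0,1)$ and $y$ recovers $\mathbb P(y \in \Omega_\epsilon)$ regardless of the choice of $\tilde w$, which is what keeps the resulting estimate unbiased.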
If there exist multiple low-fidelity models, different generations of MF-ABC-SMC may allow us to progressively focus on using the most efficient model, and identify the specific regions of parameter space for which one model or another may bring most benefit for parameter estimation. \subsection{SMC implementation} Previous research into the implementation of ABC-SMC has ensured that the importance distribution formed from the preceding generation, as given in \Cref{eq:importance}, is optimal, by choice of the perturbation kernels, $K_t$~\cite{Filippi2013}. This has typically been treated as a requirement to trade off a wide exploration of parameter space against a high acceptance rate. We have replaced the acceptance rate by the theoretical efficiency as the quantification of an ABC algorithm's performance. Therefore, since we now explicitly include the simulation time in the definition of the algorithm's performance, the trade-off that the perturbation kernels optimise may be reformulated, and hence different optimal kernels may result. In \Cref{s:Results} we applied a widely-used strategy for determining the perturbation kernels \cite{Beaumont2009}. This strategy has been justified only in the context of positive weights and an importance distribution that approximates the ABC posterior. However, the importance distribution used in MF-ABC-SMC potentially makes this choice of perturbation kernels suboptimal. It remains to extend existing results on the optimality of perturbation kernels, such as those in \cite{Filippi2013}, to apply to importance distributions of the form in \Cref{eq:new_importance} that approximate the alternative target distribution in \Cref{eq:new_target}. There is therefore justification for reopening the question of specifying optimal perturbation kernels for SMC, in the context of both including simulation time in the performance trade-off and dealing with multifidelity samples with negative weights.
We have restricted the choice of SMC sampler to the $O(N^2)$ PMC sampler, due to the effect of the $O(N)$ sampler in diluting the benefits of the multifidelity approach, as discussed in \Cref{s:MF-ABC-SMC,LinearSMCSampler}. Although this choice can be justified when simulation times dominate the algorithm, future work in this area should seek to optimise a multifidelity approach in the context of the more efficient sampling technique. Given that the observed effect of multifidelity ABC is to significantly reduce the simulation time per parameter proposal, and thereby make the simulation cost at each iteration much \emph{less} dominant, this extension will be necessary to ensure optimal performance of MF-ABC-SMC for larger sample sizes. In \Cref{s:Adaptive} we described how to adapt \Cref{ABC:SMC,MFABC:SMC} to implement an adaptive sequence of $\epsilon_t$ as an approach to minimising bias for a fixed computational budget. The strategy we used was to choose $\epsilon_t$ to maintain an efficiency as close as possible to a target, set equal to the observed efficiency of the first generation. Further work in this area should investigate the use of more sophisticated strategies for choosing each $\epsilon_t$. This question relates closely to the sequence of stopping criteria. In \Cref{s:Results}, we constrained the effective sample size at each generation to be at least $400$ to ensure a relatively low variance in each generation's sample, but this choice was made arbitrarily. Future work should therefore consider how best to achieve the ultimate goal of the SMC algorithm: a sample from a final generation with minimal bias relative to the true posterior, with a small variance, and constructed quickly. This goal should be achieved by optimising the complex, interdependent choices of stopping criteria, ABC thresholds, continuation probabilities and perturbation kernels.
\appendix \section{Linear SMC Sampler} \label{LinearSMCSampler} The following procedure briefly describes the linear SMC sampling method of Del Moral et al.~\cite{DelMoral2012}, adapted to the multifidelity context. For each $z_n^{(t)} = (\theta_n^{(t)}, \tilde y_n^{(t)}, u_n^{(t)}, y_n^{(t)})$ produced in generation $t$, with weight $W_n^{(t)}$, a perturbed value $z_{\star}$ is proposed with density \[ g(z_\star) = K_t(\theta_\star~|~\theta_n^{(t)}) \check f(\tilde y_\star, y_\star~|~\theta_\star) \] on $\mathcal Z = \Theta \times \mathcal Y \times [0,1] \times \mathcal Y$. The proposal is accepted with a Metropolis--Hastings acceptance probability based on the ratio \[ \frac{\hat w_t(z_\star)}{\hat w_t(z_n^{(t)})} \frac{K_t(\theta_n^{(t)}~|~\theta_\star) \pi(\theta_\star)}{K_t(\theta_\star~|~\theta_n^{(t)}) \pi(\theta_n^{(t)})}, \] for the non-negative multifidelity weight \[ \hat w_t(z) = \left| \mathbb I(\tilde y \in \Omega_{\epsilon_t}) + \frac{\mathbb I(u < \alpha_t)}{\alpha_t} \left( \mathbb I(y \in \Omega_{\epsilon_t}) - \mathbb I(\tilde y \in \Omega_{\epsilon_t}) \right) \right|. \] This Metropolis--Hastings accept--reject step produces the sample point $z_n^{(t+1)}$ for the new generation. In the subsequent generation, the weight $W_n^{(t)}$ is then updated such that \begin{equation} \label{eq:w_SMC} W_n^{(t+1)} \propto W_n^{(t)} \frac{\hat w_{t+1}(z_n^{(t+1)})}{\hat w_t(z_n^{(t+1)})}. \end{equation} Thus, for each $z_n^{(t+1)}$ we need to calculate both $\hat w_{t}(z_n^{(t+1)})$ and $\hat w_{t+1}(z_n^{(t+1)})$. The benefit of the multifidelity weight is that, for $u \geq \alpha_t$, we only need to generate $\tilde y$ rather than $(\tilde y, y)$, avoiding a significant computational cost. However, the weight update step given in \Cref{eq:w_SMC} requires two weight calculations dependent on $z_n^{(t+1)}$, each of which may require the expensive computation of $(\tilde y, y) \sim \check f(\cdot,\cdot~|~\theta)$.
Indeed, the cost saving of needing to generate $\tilde y \sim \tilde f(\cdot~|~\theta)$ alone for the weight update step exists only if \[ u_n^{(t+1)} > \max \left\{ \alpha_t \left( \theta_n^{(t+1)}, \tilde y_n^{(t+1)} \right), \alpha_{t+1} \left( \theta_n^{(t+1)}, \tilde y_n^{(t+1)} \right) \right\}. \] Using the linear SMC sampling procedure thereby increases the simulation cost of updating the weights, relative to the $O(N^2)$ sequential importance sampling approach. We assume that simulation costs dominate the cost of calculating importance weights, and thus we focus on sequential importance sampling. \section{Choice of summary statistics} \label{appendix:summary_statistics} Recall the summary statistics \begin{align*} S_1(R, \Phi) &= \left( \frac{1}{30} \int_0^{30} R(t) ~\mathrm dt \right)^2, \\ S_2(R, \Phi) &= \frac{1}{30} \left( \Phi(30) - \Phi(0) \right), \\ S_3(R, \Phi) &= R \left( T_{1/2} \right), \end{align*} where $T_{1/2}$ is the first value of $t \in [0,30]$ for which $\obs R(t)$ is halfway between $\obs R(0)=1$ and its average value $S_1(\obs R, \obs \Phi)^{1/2}$. These statistics are connected to the trajectories, $\phi_j(t)$ for $j = 1,\dots,256$, of the high-fidelity model, \Cref{eq:Kuramoto_hi}, through the definition \[ R(t) \exp (i \Phi(t)) = \frac{1}{256} \sum_{j=1}^{256} \exp(i \phi_j(t)). \] The low-fidelity model, in \Cref{eq:Kuramoto_lo}, directly models the evolution of $R$ and $\Phi$. Example trajectories of the low-fidelity and high-fidelity models are shown in \Cref{fig:eg_dynamics}. We can use the model in \Cref{eq:Kuramoto_lo} to justify the choice of summary statistics. In particular, the steady-state value of $\tilde R$ is equal to \[ \tilde R^\star = \left( 1 - 2\frac{\gamma}{K} \right)^{1/2}, \] while we can write the solution $\tilde \Phi(t) = \omega_0 t$. Then we use $S_1$ to approximate $(\tilde R^\star)^2 = 1 - 2\gamma/K$ and thus identify the ratio $\gamma/K$.
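The three statistics just defined can be sketched as follows for a trajectory sampled on a uniform time grid over $[0,30]$. Note that, unlike the text (where $T_{1/2}$ is fixed by the observed data $\obs R$), this illustrative version computes the crossing time from the trajectory itself, and approximates the time integral in $S_1$ by the sample mean:

```python
import numpy as np

def summary_statistics(R, Phi):
    """Summary statistics for a trajectory (R(t), Phi(t)) sampled on
    a uniform grid over [0, 30]. T_half is the first grid point at
    which R has dropped halfway from R(0) towards the average value
    S1**0.5 (R decays from R(0) = 1 towards its steady state)."""
    S1 = R.mean() ** 2                      # squared time-average of R
    S2 = (Phi[-1] - Phi[0]) / 30.0          # mean phase velocity, ~ omega_0
    halfway = 0.5 * (R[0] + np.sqrt(S1))
    idx = int(np.argmax(R <= halfway))      # first downward crossing
    S3 = float(R[idx])
    return S1, S2, S3
```

With $\tilde\Phi(t) = \omega_0 t$, this gives $S_2 = \omega_0$ exactly for the low-fidelity model, matching the identification argument in the text.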
Similarly, $S_2 = \tilde \Phi(30)/30 = \omega_0$ allows us to directly identify $\omega_0$. Finally, $S_3$ is a measure of the time-scale of the dynamics. Trajectories with equal values for $S_1$ (i.e. equal steady states, and thus equal values for $\gamma/K$) can be distinguished by the speed at which they reach their steady state, which we will infer through $S_3$. Note that the sampling point, $T_{1/2}$, used in $S_3$ is chosen to be relevant to the observed data specifically, and aims to distinguish any simulated trajectories from $\obs R$ and $\obs \Phi$ in particular. Thus we select $S_3$ to identify the scale of $\gamma$ and $K$. Hence, we assume that these three summary statistics will be sufficient to identify the parameters. \section{Posterior samples} \Cref{post:ABC:Rejection,post:ABC:SMC,post:ABC:SMC:ESS400,post:ABC:SMC:adaptive,post:MFABC:Rejection,post:MFABC:SMC:ESS400,post:MFABC:SMC:adaptive} show samples from the posterior distributions $p_\epsilon((K, \omega_0, \gamma)~|~\obs y)$ approximating the Bayesian posteriors of the parameters for the Kuramoto oscillator network in \Cref{s:Example}. The samples have been generated using \Cref{ABC:Importance} (ABC-RS), \Cref{ABC:SMC} (ABC-SMC), \Cref{MFABC:Importance} (MF-ABC-RS), and \Cref{MFABC:SMC} (MF-ABC-SMC). The plots on the diagonal are one-dimensional empirical marginals for each parameter (i.e. weighted histograms). The plots above the diagonal are the two-dimensional empirical marginals for each parameter pair, represented as heat maps. The axes are discretised for this visualisation by partitioning each parameter's prior support into $B$ bins, where $B$ is the integer nearest to $(2 \times \mathrm{ESS})^{1/2}$. For example, when $\mathrm{ESS} \approx 400$, each axis is partitioned into $28 \approx \sqrt{800}$ bins across its prior support.
The plots below the diagonal are all of the two-dimensional projections of the Monte Carlo set $\{ \theta_n, w_n \}$, where the weights $w_n$ are represented by colour. In particular, negative weights are coloured orange and positive weights are purple. Note that, for simplicity of visualisation, the weights are rescaled (without any loss of generality) to take values between $-1$ and $+1$. \subsection{Existing ABC algorithms} The posterior samples in \Cref{post:ABC:Rejection,post:ABC:SMC,post:MFABC:Rejection} from $p_{0.5}((K, \omega_0, \gamma)~|~\obs y)$ are generated by running \Cref{ABC:Importance,ABC:SMC,MFABC:Importance} for a fixed total of $N=6000$ parameter proposals and with threshold $\epsilon = 0.5$. For \Cref{ABC:SMC}, these proposals are split equally across four generations with decreasing thresholds $\epsilon_t = 2, 1.5, 1, 0.5$. The efficiency of generating these posterior samples is discussed in \Cref{s:existing}. \subsection{Sequential Monte Carlo} The posterior samples in \Cref{post:ABC:SMC:ESS400,post:MFABC:SMC:ESS400} from $p_{0.1}((K, \omega_0, \gamma)~|~\obs y)$ are generated by running the two SMC algorithms, \Cref{ABC:SMC,MFABC:SMC}, for eight generations with a common schedule of thresholds \[\epsilon_t = 2, 1.5, 1, 0.8, 0.6, 0.4, 0.2, 0.1,\] and with stopping condition of $\mathrm{ESS}=400$ at each generation. Each figure is one representative output of the 50 runs of each of \Cref{ABC:SMC,MFABC:SMC} used in \Cref{s:Results}, where the efficiency of generating these posterior samples is discussed. \subsection{Adaptive epsilon} The posterior samples in \Cref{post:ABC:SMC:adaptive,post:MFABC:SMC:adaptive} are generated by the extension of the two SMC algorithms, \Cref{ABC:SMC,MFABC:SMC}, to allow for adaptive selection of thresholds $\epsilon$ as discussed in \Cref{s:Adaptive}. 
Running the adaptive extensions of each of \Cref{ABC:SMC,MFABC:SMC} for four generations, with a fixed efficiency in each generation and stopping condition $\mathrm{ESS}=400$, produces the posterior samples in \Cref{post:ABC:SMC:adaptive,post:MFABC:SMC:adaptive}, from $p_{0.74}((K, \omega_0, \gamma)~|~\obs y)$ and $p_{0.1}((K, \omega_0, \gamma)~|~\obs y)$, respectively. \begin{figure*} \caption{Sample from ABC posterior generated by \Cref{ABC:Importance}.\label{post:ABC:Rejection}} \end{figure*} \begin{figure*} \caption{Sample from ABC posterior produced using the final generation of \Cref{ABC:SMC}.\label{post:ABC:SMC}} \end{figure*} \begin{figure*} \caption{Sample from ABC posterior generated by \Cref{MFABC:Importance}.\label{post:MFABC:Rejection}} \end{figure*} \begin{figure*} \caption{Sample from ABC posterior produced using the final generation of \Cref{ABC:SMC}.\label{post:ABC:SMC:ESS400}} \end{figure*} \begin{figure*} \caption{Sample from ABC posterior produced using the final generation of \Cref{MFABC:SMC}.\label{post:MFABC:SMC:ESS400}} \end{figure*} \begin{figure*} \caption{Sample from ABC posterior produced by the final generation of the adaptive modification of \Cref{ABC:SMC}.\label{post:ABC:SMC:adaptive}} \end{figure*} \begin{figure*} \caption{Sample from ABC posterior produced by the final generation of the adaptive modification of \Cref{MFABC:SMC}.\label{post:MFABC:SMC:adaptive}} \end{figure*} \end{document}
\begin{document} {\LARGE \bf Group Invariant Entanglements in \\ \\ Generalized Tensor Products} \\ \\ {\bf Elem\'{e}r E ~Rosinger} \\ \\ {\small \it Department of Mathematics \\ and Applied Mathematics} \\ {\small \it University of Pretoria} \\ {\small \it Pretoria} \\ {\small \it 0002 South Africa} \\ {\small \it [email protected]} \\ \\ {\bf Abstract} \\ The group invariance of entanglement is obtained within a very general and simple setup of the latter, given by a recently introduced considerably extended concept of tensor products. This general approach to entanglement - unlike the usual one given in the particular setup of tensor products of vector spaces - turns out not to need any specific algebraic structure. The resulting advantage is that, entanglement being in fact defined by a negation, its presence in a general setup increases the chances of its manifestations, thus also its availability as a resource. \\ \\ \\ {\bf 0. Preliminaries} \\ The interest in generalized tensor products, [3-5], is natural and also quite fundamental. Let us indeed divide physical systems into two classes : \\ 1) Cartesian : those for which the state space of the composite is the Cartesian product of the state spaces of the components. Therefore obviously, Cartesian systems cannot exhibit entanglement. \\ 2) Non-Cartesian : those for which the state space of the composite is not the Cartesian product of the state spaces of the components, and instead, it is some larger set. Therefore, such non-Cartesian systems do inevitably exhibit entanglement. \\ Regarding the above division, what is not yet understood clearly enough is that the usual quantum systems are but a particular instance of non-Cartesian systems. \\ The generalized tensor products provide a considerably large family of non-Cartesian ways of composition of systems. Indeed, so far, it was believed that only vector spaces could be composed by tensor products.
Thus by far most of the non-quantum systems automatically fell out of the possibility of composition by tensor products, since their state spaces were not vector spaces. \\ However, generalized tensor products can now be defined for absolutely arbitrary sets, and all one needs is some very mild structures on them, far milder than any algebraic ones, and in particular, far milder than vector spaces. \\ And then the only problem is to find some really existing systems which compose at least in some of these many non-Cartesian ways. \\ A main interest in this regard comes, of course, from quantum computation. \\ Here however, we should better use the term non-Cartesian computation. Indeed, the electronic computers are of course Cartesian systems, and our great interest in quantum computers is not in the quanta themselves, but in their non-Cartesian composition, which among others, opens the possibility to such extraordinary computational resources as entanglement. \\ However, the immense trouble with the quantum type non-Cartesian computers is decoherence. \\ And then, the idea is to use non-Cartesian systems which do not decohere, yet have available entanglement. \\ Not to mention that, by studying lots of non-Cartesian systems other than the quantum ones, one may discover new and yet unknown resources, beyond entanglement. \\ In conclusion : 1) Non-Cartesian systems can be many more than quantum ones, and they all have entanglement. \\ 2) Let us call Classical systems those which do NOT exhibit decoherence, or in which decoherence can easily be avoided. \\ And then, we are interested in : \\ Non-Cartesian Classical systems which, therefore, have entanglement and do not have decoherence. \\ The fact that there may be many, many non-Cartesian systems is suggested by the immense generality and variety of generalized tensor products. All that remains to do, therefore, is to identify as many as possible such really existing non-Cartesian Classical systems ...
\\ And of course, also to identify important new features of them beyond entanglement ... \\ \\ {\bf 1. Group Actions on General Tensor Products} \\ We shall consider entanglement within the very general and rather simple underlying tensor product structure, briefly presented for convenience in Appendix 2. \\ Let $X,~ Y$ be arbitrary nonvoid sets and let ${\cal A}$ be any nonvoid set of binary operations on $X$, while correspondingly, ${\cal B}$ is any nonvoid set of binary operations on $Y$. Then, as seen in Appendix 2, one can define the tensor product \\ (1.1)~~~ $ X \bigotimes_{{\cal A}, {\cal B}} Y = Z / \approx_{{\cal A}, {\cal B}} $ \\ with the canonical quotient embedding \\ (1.2)~~~ $ X \times Y \ni ( x, y ) \longmapsto x \bigotimes_{{\cal A}, {\cal B}} y \in X \bigotimes_{{\cal A}, {\cal B}} Y $ \\ and consequently $X \bigotimes_{{\cal A}, {\cal B}} Y$ will be the set of all elements \\ (1.3)~~~ $ x_1 \bigotimes_{{\cal A}, {\cal B}} y_1 ~\gamma~ x_2 \bigotimes_{{\cal A}, {\cal B}} y_2 ~\gamma~ \ldots ~\gamma~ x_n \bigotimes_{{\cal A}, {\cal B}} y_n $ \\ with $n \geq 1$ and $x_i \in X,~ y_i \in Y$, for $1 \leq i \leq n$. \\ Let now $( G, . ),~ ( H, . )$ be two groups which act on $X$ and $Y$, respectively, according to \\ (1.4)~~~ $ G \times X \ni ( g , x ) \longmapsto g x \in X,~~~ H \times Y \ni ( h, y ) \longmapsto h y \in Y $ \\ It is important to note that, as a general property of such group actions, for every given $g \in G,~ h \in H$, the mappings \\ (1.5)~~~ $ X \ni x \longmapsto g x \in X,~~~ Y \ni y \longmapsto h y \in Y $ \\ are bijective. \\ Our aim is to define an action of the product group $( G, . ) \times ( H, . )$, see (A2.2.4), on the tensor product $X \bigotimes_{{\cal A}, {\cal B}} Y$.
\\ In this regard we note that, as a consequence of (1.4), we have a natural group action \\ (1.6)~~~ $ ( G \times H ) \times ( X \times Y ) \ni ( ( g, h ), ( x, y ) ) \longmapsto ( g x, h y ) \in X \times Y $ \\ thus we obtain a natural group action, see (A2.1.2) \\ (1.7)~~~ $ \begin{array}{l} ( G \times H ) \times Z \ni ( ( g, h ), ( ( x_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n ) ) ) \longmapsto \\ \\ ~~~~~~~ \longmapsto ( g x_1, h y_1 ) ~\gamma~ ( g x_2, h y_2 ) ~\gamma~ \ldots ~\gamma~ ( g x_n, h y_n ) \in Z \end{array} $ \\ Now we shall assume that the families of binary operations ${\cal A},~ {\cal B}$ satisfy the following compatibility relations with the respective group actions $( G, . ),~ ( H, . )$, namely \\ (1.8)~~~ $ \begin{array}{l} \forall~~~ \alpha \in {\cal A},~~ \beta \in {\cal B} ~: \\ \\ \forall~~~ g \in G,~ h \in H,~ x, x\,' \in X,~ y, y\,' \in Y ~: \\ \\ ~~~~~ \alpha ( g x , g x\,' ) = g \alpha ( x, x\,' ),~~~ \beta ( h y, h y\,' ) = h \beta ( y, y\,' ) \end{array} $ \\ \\ In this case, the equivalence relation, see (A2.2.1) - (A2.2.3), $\approx_{{\cal A}, {\cal B}}$ on $Z$ has, for $z, z\,' \in Z$, and $g \in G,~ h \in H$, the following property \\ (1.9)~~~ $ z ~\approx_{{\cal A}, {\cal B}}~ z\,' ~~~\Longleftrightarrow~~~ ( g, h ) z ~\approx_{{\cal A}, {\cal B}}~ ( g, h ) z\,' $ \\ Indeed, let us assume the left hand side of the above relation. Then in view of (A2.2.1) - (A2.2.3), we have three possibilities, namely \\ (1.10)~~~ $ z = z\,' $ \\ (1.11)~~~ for some $\alpha \in {\cal A}$, the transformation (A2.2.1) applied to $z$ \\ \hspace*{1.5cm} gives $z\,'$, or vice-versa \\ (1.12)~~~ for some $\beta \in {\cal B}$, the transformation (A2.2.2) applied to $z$ \\ \hspace*{1.5cm} gives $z\,'$, or vice-versa \\ Now if (1.10) holds, then the right hand term of (1.9) is obviously valid. 
\\ Further, in the case of (1.11), we have \\ $~~~~~~ z = ( x_1, y_1 ) ~\gamma~ ( x\,'_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n ) $ \\ $~~~~~~ z\,' = ( \alpha ( x_1, x\,'_1), y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n ) $ \\ thus (1.7) gives for $g \in G,~ h \in H$, \\ $~~~~~~ ( g, h ) z = ( g x_1, h y_1 ) ~\gamma~ ( g x\,'_1, h y_1 ) ~\gamma~ ( g x_2, h y_2 ) ~\gamma~ \ldots ~\gamma~ ( g x_n, h y_n ) $ \\ $~~~~~~ ( g, h ) z\,' = ( g \alpha ( x_1, x\,'_1), h y_1 ) ~\gamma~ ( g x_2, h y_2 ) ~\gamma~ \ldots ~\gamma~ ( g x_n, h y_n ) $ \\ with the second one, in view of (1.8), being \\ $~~~~~~ ( g, h ) z\,' = ( \alpha ( g x_1, g x\,'_1), h y_1 ) ~\gamma~ ( g x_2, h y_2 ) ~\gamma~ \ldots ~\gamma~ ( g x_n, h y_n ) $ \\ Thus applying (1.11) once again, the right hand term of (1.9) again holds. \\ In case of (1.12), the argument is similar. \\ Conversely, let us assume the validity of the right hand term in (1.9). Then applying to it the group action $( g^{-1}, h^{-1} )$ and the above argument, we obviously obtain the left hand term in (1.9). \\ Now the validity of (1.9) means that the group action (1.6) can naturally be extended, see (1.2), to a group action \\ (1.13)~~~ $ ( G \times H ) \times ( X \bigotimes_{{\cal A}, {\cal B}} Y ) \longrightarrow ( X \bigotimes_{{\cal A}, {\cal B}} Y ) $ \\ by \\ (1.14)~~~ $ ( g, h ) ( x \bigotimes_{{\cal A}, {\cal B}} y ) = ( g x ) \bigotimes_{{\cal A}, {\cal B}} ( h y ) $ \\ with the consequence \\ (1.15)~~~ $ \begin{array}{l} ( g, h ) ( x_1 \bigotimes_{{\cal A}, {\cal B}} y_1 ~\gamma~ x_2 \bigotimes_{{\cal A}, {\cal B}} y_2 ~\gamma~ \ldots ~\gamma~ x_n \bigotimes_{{\cal A}, {\cal B}} y_n ) = \\ \\ ~~~~ = ( g x_1 ) \bigotimes_{{\cal A}, {\cal B}} ( h y_1 ) ~\gamma~ ( g x_2 ) \bigotimes_{{\cal A}, {\cal B}} ( h y_2 ) ~\gamma~ \ldots ~\gamma~ ( g x_n ) \bigotimes_{{\cal A}, {\cal B}} ( h y_n ) \end{array} $ \\ for $g \in G,~ h \in H$.
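The compatibility condition (1.8) can be checked mechanically on finite examples. As an illustration (the concrete choice of $X$, $\alpha$ and the action is ours, not from the paper), take $X = \mathbb{Z}_7$ with $\alpha(x, x') = x + x' \bmod 7$, and let the group of units $G = \mathbb{Z}_7^\times$ act by multiplication:

```python
from itertools import product

N = 7
X = range(N)
G = range(1, N)               # units modulo the prime 7

def alpha(x, xp):
    return (x + xp) % N       # a binary operation on X

def act(g, x):
    return (g * x) % N        # group action of G on X

# Compatibility (1.8): alpha(g x, g x') = g alpha(x, x') for all g, x, x'.
compatible = all(
    alpha(act(g, x), act(g, xp)) == act(g, alpha(x, xp))
    for g, x, xp in product(G, X, X)
)
```

Here `compatible` is true because multiplication distributes over addition; a translation action $g \cdot x = g + x$, by contrast, would fail (1.8) for this $\alpha$.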
\\ \\ {\bf 2. Group Actions on Generalized Entanglements} \\ We recall the general definition of entanglement within the above extended concept of tensor products. Namely, an element \\ (2.1)~~~ $ w \in X \bigotimes_{{\cal A}, {\cal B}} Y $ \\ is called {\it entangled}, if and only if it is {\it not} of the form \\ (2.2)~~~ $ w = x \bigotimes_{{\cal A}, {\cal B}} y $ \\ for some $x \in X$ and $y \in Y$. \\ {\bf Theorem 2.1.} \\ Given any group action, see (1.13) - (1.15) \\ (2.3)~~~ $ ( G \times H ) \times ( X \bigotimes_{{\cal A}, {\cal B}} Y ) \longrightarrow ( X \bigotimes_{{\cal A}, {\cal B}} Y ) $ \\ and element \\ (2.4)~~~ $ w \in X \bigotimes_{{\cal A}, {\cal B}} Y $ \\ then, for every $( g, h ) \in G \times H$, we have \\ (2.5)~~~ $ w $ is entangled ~~~$\Longleftrightarrow~~~ ( g, h ) w $ is entangled \\ therefore \\ (2.6)~~~ $ w $ is not entangled ~~~$\Longleftrightarrow~~~ ( g, h ) w $ is not entangled \\ {\bf Proof.} \\ Let us assume the situation in the left hand term of (2.6). Then (2.2) gives \\ $~~~~~~ w = x \bigotimes_{{\cal A}, {\cal B}} y $ \\ for some $x \in X$ and $y \in Y$, hence in view of (1.14), we have \\ $~~~~~~ ( g, h ) w = ( g x ) \bigotimes_{{\cal A}, {\cal B}}( h y ) $ \\ which according to (2.2), implies that the right hand term of (2.6) is valid. \\ Obviously, the converse implication in (2.6) follows from the above line of argument, by applying the group action $( g^{-1}, h^{-1} )$ to the right hand term in it. \\ As for (2.5), it is but an immediate consequence of (2.6). \\ {\bf Remark 2.1.} \\ The relevance of Theorem 2.1.
is in the following three facts : \\ 1) The arbitrary group actions (1.13) - (1.15) \\ (2.7)~~~ $ ( G \times H ) \times ( X \bigotimes_{{\cal A}, {\cal B}} Y ) \longrightarrow ( X \bigotimes_{{\cal A}, {\cal B}} Y ) $ \\ on the general tensor products, see (1.1) \\ (2.8)~~~ $ X \bigotimes_{{\cal A}, {\cal B}} Y $ \\ can be defined by only requiring the natural compatibility conditions, see (1.8) \\ (2.9)~~~ $ \begin{array}{l} \forall~~~ \alpha \in {\cal A},~~ \beta \in {\cal B} ~: \\ \\ \forall~~~ g \in G,~ h \in H,~ x, x\,' \in X,~ y, y\,' \in Y ~: \\ \\ ~~~~~ \alpha ( g x , g x\,' ) = g \alpha ( x, x\,' ),~~~ \beta ( h y, h y\,' ) = h \beta ( y, y\,' ) \end{array} $ \\ \\ 2) The property of being, or alternatively, not being entangled within these general tensor products (2.8), is {\it invariant} under the general group actions (2.7). \\ 3) Since being entangled is defined by the {\it negation} of a relation, namely, (2.2), it follows that, quite likely, there are far more entangled elements in a given tensor product, than there are non-entangled ones, such indeed being the case with the usual tensor products of vector spaces. Consequently, when generalizing the concept of tensor products to the extent done in (2.8), while at the same time, keeping the corresponding generalization of the concept of entanglement still as the negation of the same kind of relation, it is quite likely that the amount of entangled elements in such generalized tensor products may significantly {\it increase}. And such an increase may be convenient, if we recall the extent to which usual entanglement is a fundamental resource in quantum information theory. \\ \\ {\bf 3. Examples} \\ We present here, starting with section 3.2.
below, several examples with the following two aims, namely, to show : \\ 1) how much more general is the context in which tensor products and entanglement can be defined, than the usual one based on vector spaces, \\ 2) the variety and novelty of both tensor products and entanglement even in the simplest cases which go beyond the usual concepts. \\ First however, several preliminary constructions are needed. \\ {\bf 3.1. Preliminaries} \\ {\bf 3.1.1. Binary Operations} \\ Given any nonvoid set $X$ and any binary operation $\alpha : X \times X \longrightarrow X$, we call a subset $A \subseteq X$ $\alpha$-{\it stable}, if and only if \\ (3.1.1.1)~~~ $ x, y \in A ~~\Longrightarrow~~ \alpha ( x, y ) \in A $ \\ Obviously, $X$ is $\alpha$-stable, and the intersection of any family of $\alpha$-stable subsets is $\alpha$-stable. Consequently, for every subset $A \subseteq X$, we can define the smallest $\alpha$-stable subset which contains it, namely \\ (3.1.1.2)~~~ $ [ A ]_\alpha = \bigcap_{A \subseteq B,~ B ~\alpha\mbox{-stable}}~ B $ \\ Therefore, we can associate with $\alpha$ the mapping $\psi_\alpha : {\cal P} ( X ) \longrightarrow {\cal P} ( X )$ defined by \\ (3.1.1.3)~~~ $ \psi_\alpha ( A ) = [ A ]_\alpha,~~~ A \subseteq X $ \\ In view of (3.1.1.2), we have \\ (3.1.1.4)~~~ $ \psi_\alpha ( \psi_\alpha ( A ) ) = \psi_\alpha ( A ) \supseteq A,~~~ A \subseteq X $ \\ since, as mentioned, $[ A ]_\alpha$ is $\alpha$-stable, hence the smallest $\alpha$-stable subset containing $[ A ]_\alpha$ is $[ A ]_\alpha$ itself. \\ A particular case of the above is the following. Let $( S,\ast )$ be a semigroup with the neutral element $e$. Then $[ \{ e \} ]_\ast = \{ e \}$, while for $a \in S,~ a \neq e$, we have $[ \{ a \} ]_\ast = \{ a, a \ast a, a \ast a \ast a, \dots \}$. \\ For instance, if $( S, \ast ) = ( \mathbb{N}, + )$, then $[ \{ 0 \} ]_+ = \{ 0 \}$, while $[ \{ 1 \} ]_+ = \mathbb{N} \setminus \{ 0 \} = \mathbb{N}_1$.
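For finite $X$, the generated $\alpha$-stable set $[A]_\alpha$ of (3.1.1.2) can be computed by saturating $A$ under $\alpha$ until a fixed point is reached; a minimal sketch (the function names and the concrete choice of $\alpha$ are ours):

```python
def alpha_closure(A, alpha):
    # Smallest alpha-stable subset containing A, computed by saturation;
    # terminates whenever the generated set is finite.
    S = set(A)
    while True:
        new = {alpha(x, y) for x in S for y in S} - S
        if not new:
            return S
        S |= new

# Example: addition modulo 12 on X = {0, ..., 11}.
add12 = lambda x, y: (x + y) % 12
```

For instance `alpha_closure({2}, add12)` gives the even residues $\{0, 2, 4, 6, 8, 10\}$, mirroring the $[\{1\}]_+ = \mathbb{N}_1$ example above.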
\\ We note that the mapping $\psi_\alpha : {\cal P} ( X ) \longrightarrow {\cal P} ( X )$ satisfies three of the four Kuratowski closure axioms, except for $\psi_\alpha ( A \cup B ) = \psi_\alpha ( A ) \cup \psi_\alpha ( B )$, with $A, B \subseteq X$. Indeed, this condition is obviously not satisfied, as can be seen in the following simple example, when $X = \mathbb{R}^2$ and $\alpha$ is the usual addition $+$. Let $A = \mathbb{R} \times \{ 0 \},~ B = \{ 0 \} \times \mathbb{R} \subsetneqq \mathbb{R}^2$. Then $\psi_\alpha ( A ) = A,~ \psi_\alpha ( B ) = B$, while $\psi_\alpha ( A \cup B ) = \mathbb{R}^2$. \\ We shall denote by \\ (3.1.1.5)~~~ $ {\cal B}_X $ \\ the set of all binary operations on $X$. Obviously \\ (3.1.1.6)~~~ $ card\, X = n ~~\Longrightarrow~~ card\, {\cal B}_X = n^{(n^2)} $ \\ \\ {\bf 3.1.2. Generators} \\ Given any nonvoid set $X$, a {\it generator} on $X$ is any mapping $\psi : {\cal P} ( X ) \longrightarrow {\cal P} ( X )$, such that \\ (3.1.2.1)~~~ $ A \subseteq \psi ( A ),~~~ A \subseteq X $ \\ (3.1.2.2)~~~ $ \psi ( A ) \subseteq \psi ( A\,' ),~~~ A \subseteq A\,' \subseteq X $ \\ It follows that every $\psi_\alpha$ associated in (3.1.1.3) to a binary operation $\alpha$ on $X$ is a generator. \\ \\ {\bf 3.1.3. From Generators to Binary Operations} \\ Let us consider the inverse problem, namely, to associate binary operations to given generators. \\ Let $\psi$ be a generator on a set $X$. A binary operation $\alpha$ on $X$ is {\it compatible} with $\psi$, if and only if \\ (3.1.3.1)~~~ $ \psi ( A ) $ is $\alpha$-stable,~~~ $ A \subseteq X $ \\ In view of (3.1.1.2), this condition is equivalent with \\ (3.1.3.2)~~~ $ [ A ]_\alpha \subseteq \psi ( A ),~~~ A \subseteq X $ \\ which is further equivalent with \\ (3.1.3.3)~~~ $ \psi_\alpha ( A ) \subseteq \psi ( A ),~~~ A \subseteq X $ \\ Let us now denote by \\ (3.1.3.4)~~~ $ {\cal B}_\psi $ \\ the set of all binary operations $\alpha$ on $X$ which are compatible with $\psi$. \\ \\ {\bf 3.1.4.
Open Problems} \\ 1) Given a generator $\psi$ on a set $X$, find the binary operations $\alpha$ on $X$ compatible with $\psi$ and with the largest $\psi_\alpha$ in the sense of (3.1.3.3). \\ 2) Characterize those generators $\psi$ on a set $X$ which coincide with such a largest $\psi_\alpha$. \\ \\ {\bf 3.1.5. Special Classes of Generators} \\ \\ {\bf 3.1.5.1. The Identity Generator} \\ We note that the identity mapping $id_{{\cal P} ( X )} : {\cal P} ( X ) \ni A \longmapsto A \in {\cal P} ( X )$ is a generator on $X$. Therefore, let us start by presenting binary operations on $X$ which belong to ${\cal B}_{id_{{\cal P} ( X )} }$. \\ Further, let us denote by $\lambda_X,~ \rho_X$ the binary operations on $X$ defined by \\ (3.1.5.1.1)~~~ $ \lambda_X ( x, x\,' ) = x,~~~ \rho_X ( x, x\,' ) = x\,',~~~ x, x\,' \in X $ \\ then it is obvious that \\ (3.1.5.1.2)~~~ $ \psi_{\lambda_X} = \psi_{\rho_X} = id_{{\cal P} ( X )} $ \\ Furthermore, let ${\cal Q} \subseteq X \times X$ and define $\alpha_{\cal Q} : X \times X \longrightarrow X$ by \\ (3.1.5.1.3)~~~ $ \alpha_{\cal Q} ( x, x\,' ) = \begin{array}{|l} x ~\mbox{if}~ ( x, x\,' ) \in {\cal Q} \\ \\ x\,' ~\mbox{if}~ ( x, x\,' ) \notin {\cal Q} \end{array} $ \\ then again, we obtain \\ (3.1.5.1.4)~~~ $ \psi_{\alpha_{\cal Q}} = id_{{\cal P} ( X )} $ \\ Let now $\leq$ be a total order on $X$ and denote by $\min_X,~ \max_X$ the binary operations on $X$ defined by \\ (3.1.5.1.5)~~~ $ \min_X ( x, x\,' ) = x \wedge x\,',~~~ \max_X ( x, x\,' ) = x \vee x\,',~~~ x, x\,' \in X $ \\ Obviously once more we have \\ (3.1.5.1.6)~~~ $ \psi_{\min_X} = \psi_{\max_X} = id_{{\cal P} ( X )} $ \\ {\bf Remark 3.1.5.1.1.} \\ We note that, see (3.1.5.1.3) \\ (3.1.5.1.7)~~~ $\lambda_X = \alpha_{X \times X},~~~ \rho_X = \alpha_\phi $ \\ (3.1.5.1.8)~~~ $ \min_X = \alpha_{X^2_+},~~~ \max_X = \alpha_{X^2_-} $ \\ where $X^2_+ = \{ ( x, x\,' ) \in X \times X ~|~ x \leq x\,' \}$, while $X^2_- = \{ ( x, x\,' ) \in X \times X ~|~ x\,' \leq x \}$.
\\ We also note that both binary operations $\lambda_X$ and $\rho_X$ are {\it associative}, while in case $X$ has at least two elements, they are {\it not} commutative. Similarly, for a total order $( X, \leq )$, both binary operations $\min_X$ and $\max_X$ are {\it associative}, and also {\it commutative}. \\ {\bf Theorem 3.1.5.1.1.} \\ Let $\alpha$ be a binary operation on $X$. Then the following three properties are equivalent \\ (3.1.5.1.9)~~~ $ \psi_\alpha = id_{{\cal P} ( X )} $ \\ (3.1.5.1.10)~~~ $ \forall~ x, x\,' \in X ~: ~~ \alpha ( x, x\,' ) \in \{ x, x\,'\} $ \\ (3.1.5.1.11)~~~ $ \exists~ {\cal Q} \subseteq X \times X ~: ~~ \alpha = \alpha_{\cal Q} $ \\ {\bf Proof.} \\ In view of (3.1.5.1.4), we have (3.1.5.1.11) $\Longrightarrow$ (3.1.5.1.9). \\ Assume $\psi_\alpha = id_{{\cal P} ( X )}$, then (3.1.1.3) yields $[ A ]_\alpha = A$, for $A \subseteq X$. Given now $A = \{ x, x\,' \} \subseteq X$, it follows that $\alpha ( x, x\,' ) \in \{ x, x\,'\}$. Thus (3.1.5.1.9) $\Longrightarrow$ (3.1.5.1.10). \\ We show now that (3.1.5.1.10) $\Longrightarrow$ (3.1.5.1.11). Let \\ $~~~~~~ {\cal Q} = \{~ ( x, x\,' ) \in X \times X ~|~ \alpha ( x, x\,' ) = x ~\} $ \\ then clearly $\alpha = \alpha_{\cal Q}$. $\Box$ \\ As seen next, there are plenty of binary operations $\alpha$ on a set $X$, such that $\psi_\alpha = id_{{\cal P} ( X )}$. \\ {\bf Theorem 3.1.5.1.2.} \\ Given ${\cal Q},~ {\cal Q}\,' \subseteq X \times X$, then \\ (3.1.5.1.12)~~~ $ \alpha_{\cal Q} = \alpha_{{\cal Q}\,'} ~~~\Longleftrightarrow~~~ {\cal Q} \setminus \Delta^2 X = {\cal Q}\,' \setminus \Delta^2 X $ \\ Consequently \\ (3.1.5.1.13)~~~ $ card\, X = n ~~\Longrightarrow~~ card\, {\cal B}_{id_{{\cal P} ( X )}} = 2^{( n^2 - n)} $ \\ {\bf Proof.} \\ The equivalence in (3.1.5.1.12) is immediate. Further, one notes that (3.1.5.1.12) $\Longrightarrow$ (3.1.5.1.13).
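The count (3.1.5.1.13) can be confirmed by brute force for small $n$. The sketch below (function name ours) enumerates all $n^{(n^2)}$ binary operations on an $n$-element set and keeps those with $\alpha(x, x') \in \{x, x'\}$ for every pair, which by Theorem 3.1.5.1.1 is exactly the condition $\psi_\alpha = id_{{\cal P}(X)}$:

```python
from itertools import product

def count_identity_generator_ops(n):
    # Count binary operations alpha on {0, ..., n-1} with psi_alpha = id,
    # i.e. alpha(x, x') in {x, x'} for every pair (Theorem 3.1.5.1.1);
    # (3.1.5.1.13) predicts 2**(n*n - n) of them.
    X = list(range(n))
    pairs = [(x, xp) for x in X for xp in X]
    total = 0
    for values in product(X, repeat=len(pairs)):
        alpha = dict(zip(pairs, values))
        if all(alpha[x, xp] in (x, xp) for x, xp in pairs):
            total += 1
    return total
```

Indeed `count_identity_generator_ops(2)` gives $4 = 2^{4-2}$ and `count_identity_generator_ops(3)` gives $64 = 2^{9-3}$: the diagonal values $\alpha(x, x) = x$ are forced, while each of the $n^2 - n$ off-diagonal pairs admits two choices.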
$\Box$ \\ Let us find now all associative binary operations in ${\cal B}_{id_{{\cal P} ( X )} }$. The answer is presented in \\ {\bf Theorem 3.1.5.1.3.} \\ Given ${\cal Q} \subseteq X \times X$, then \\ (3.1.5.1.14)~~~ $ \alpha_{\cal Q} $ is associative ~~~$\Longleftrightarrow~~~ {\cal Q} \circ {\cal Q} \subseteq {\cal Q} $ \\ {\bf Proof.} \\ Let us start with the implication $\Longrightarrow$. Given $x, x\,', x\,'' \in X$, assume that $( x, x\,' ), ( x\,', x\,'' ) \in {\cal Q}$. Then $\alpha_{\cal Q} ( \alpha_{\cal Q} ( x, x\,' ), x\,'' ) = \alpha_{\cal Q} ( x, x\,'' )$, while $\alpha_{\cal Q} ( x, \alpha_{\cal Q} ( x\,', x\,'' ) ) = \alpha_{\cal Q} ( x, x\,' )= x$. Thus the associativity of $\alpha_{\cal Q}$ implies $\alpha_{\cal Q} ( x, x\,'' ) = x$, which means that $( x, x\,'' ) \in {\cal Q}$. \\ For the converse implication $\Longleftarrow$, let $x, x\,', x\,'' \in X$. Then we have the following four possible cases : \\ 1) $( x, x\,' ), ( x\,', x\,'' ) \in {\cal Q}$ \\ This means that $( x, x\,'' ) \in {\cal Q}$, thus $\alpha_{\cal Q} ( \alpha_{\cal Q} ( x, x\,' ), x\,'' ) = \alpha_{\cal Q} ( x, x\,'' ) = x$. \\ On the other hand, $\alpha_{\cal Q} ( x, \alpha_{\cal Q} ( x\,', x\,'' ) ) = \alpha_{\cal Q} ( x, x\,' ) = x$. Hence associativity holds. \\ 2) $( x, x\,' ) \in {\cal Q},~ ( x\,', x\,'' ) \notin {\cal Q}$ \\ Then $\alpha_{\cal Q} ( \alpha_{\cal Q} ( x, x\,' ), x\,'' ) = \alpha_{\cal Q} ( x, x\,'' )$, while $\alpha_{\cal Q} ( x, \alpha_{\cal Q} ( x\,', x\,'' ) ) = \alpha_{\cal Q} ( x, x\,'' )$, thus associativity holds. \\ 3) $( x, x\,' ), ( x\,', x\,'' ) \notin {\cal Q}$ \\ Then $\alpha_{\cal Q} ( \alpha_{\cal Q} ( x, x\,' ), x\,'' ) = \alpha_{\cal Q} ( x\,', x\,'' ) = x\,''$, while $\alpha_{\cal Q} ( x, \alpha_{\cal Q} ( x\,', x\,'' ) ) = \alpha_{\cal Q} ( x, x\,'' )$. 
Thus associativity holds, if and only if \\ (3.1.5.1.15)~~~ $ \alpha_{\cal Q} ( x, x\,'' ) = x\,'' $ \\ which in view of (3.1.5.1.3) is equivalent with \\ (3.1.5.1.16)~~~ $ ( x, x\,'' ) \notin {\cal Q} $ \\ However, we note that, for $u, v \in X,~ u \neq v$, we have \\ (3.1.5.1.17)~~~ $ ( u, v ) \notin {\cal Q} ~~~\Longleftrightarrow~~~ ( v, u ) \in {\cal Q} $ \\ and in view of (3.1.5.1.12), we can assume that \\ (3.1.5.1.18)~~~ $ \Delta^2 X \subseteq {\cal Q} $ \\ But then, the assumption $( x, x\,' ), ( x\,', x\,'' ) \notin {\cal Q}$ implies $x \neq x\,',~ x\,' \neq x\,''$, hence (3.1.5.1.17) gives $( x\,'', x\,' ), ( x\,', x ) \in {\cal Q}$, which in view of the assumed right hand in (3.1.5.1.14), means \\ (3.1.5.1.19)~~~ $ ( x\,'', x ) \in {\cal Q} $ \\ Here, there are another two subcases, namely \\ 3.1) $x\,'' \neq x$ \\ And this, together with (3.1.5.1.19), (3.1.5.1.17), gives (3.1.5.1.16), thus (3.1.5.1.15), which yields associativity. \\ 3.2) $x\,'' = x$ \\ In which case (3.1.5.1.15) follows trivially, thus also associativity. \\ 4) $( x, x\,' ) \notin {\cal Q},~ ( x\,', x\,'' ) \in {\cal Q}$ \\ Then $\alpha_{\cal Q} ( \alpha_{\cal Q} ( x, x\,' ), x\,'' ) = \alpha_{\cal Q} ( x\,', x\,'' ) = x\,'$, while $\alpha_{\cal Q} ( x, \alpha_{\cal Q} ( x\,', x\,'' ) ) = \alpha_{\cal Q} ( x, x\,' )= x\,'$, thus associativity holds. $\Box$ \\ As for the commutative binary operations in ${\cal B}_{id_{{\cal P} ( X )} }$, we have the following result in \\ {\bf Theorem 3.1.5.1.4.} \\ Given ${\cal Q} \subseteq X \times X$, then \\ (3.1.5.1.20)~~~ $ \alpha_{\cal Q} $ is commutative ~~~$\Longleftrightarrow~~~$ for every $x, x\,' \in X$ with $x \neq x\,'$, exactly one of $( x, x\,' )$ and $( x\,', x )$ belongs to ${\cal Q}$ \\ Indeed, for $x \neq x\,'$, $\alpha_{\cal Q} ( x, x\,' ) = \alpha_{\cal Q} ( x\,', x )$ holds if and only if exactly one of the two pairs lies in ${\cal Q}$; this is consistent with $\min_X = \alpha_{X^2_+}$ being commutative, see (3.1.5.1.8). \\ {\bf Remark 3.1.5.1.2.} \\ In the particular case of the identity generators $id_{{\cal P} ( X )}$ on arbitrary sets $X$, the two problems in section 3.1.4 above obtain simple solutions. Namely, every binary operation $\alpha \in {\cal B}_{id_{{\cal P} ( X )}}$ is a solution of both mentioned problems.
\\ Indeed, according to (3.1.5.1.11), each such binary operation is of the form $\alpha = \alpha_{\cal Q}$, for some ${\cal Q} \subseteq X \times X$. And then (3.1.5.1.9) concludes the proof. \\ \\ {\bf 3.1.5.2. A Few Steps beyond the Identity Generator} \\ Generators $\psi$ on $X$ which are more complex than the identity generator can obviously be obtained in many ways. In the sequel, we shall consider several such classes. \\ A first possible class of generators $\psi$ on $X$ which are more complex than the identity generator is given by those for which \\ (3.1.5.2.1)~~~ $ \exists~~~ \alpha \in {\cal B}_\psi ~:~~~ \exists~~~ x, x\,' \in X ~:~~~ \alpha ( x, x\,' ) \notin \{ x, x\,' \} $ \\ A simple example can be obtained as follows. Let $a, b, c \in X$ be three different elements, and let $\alpha : X \times X \longrightarrow X$ be given by \\ (3.1.5.2.2)~~~ $ \alpha ( x, x\,' ) = \begin{array}{|l} x ~\mbox{if}~ ( x, x\,' ) \neq ( a, b ) \\ \\ c ~\mbox{if}~ ( x, x\,' ) = ( a, b ) \end{array} $ \\ then $\alpha$ is neither associative nor commutative. \\ \\ {\bf 3.1.6. Open Problems} \\ 1) Characterize the generators $\psi$ on $X$ for which (3.1.5.2.1) holds with $\alpha$ associative. \\ 2) Characterize the generators $\psi$ on $X$ for which (3.1.5.2.1) holds with $\alpha$ commutative. \\ \\ {\bf 3.2. A First Example : Tensor Products of Totally Ordered \\ \hspace*{1cm}Sets, and Their Entanglement} \\ Let $( X, \leq )$ and $( Y, \leq)$ be two totally ordered sets, and let us consider on them the respective binary operations, see (3.1.5.1.5), $\min_X$ and $\min_Y$. We shall now construct the tensor product, see (A2.1.8) \\ (3.2.1)~~~ $ X \bigotimes_{\min_X, \min_Y} Y = Z / \approx_{\min_X, \min_Y} $ \\ For that purpose, we have to particularize the conditions (A2.1.6), (A2.1.7) which are involved in the definition of the equivalence relation $\approx_{\min_X, \min_Y}$.
In this respect, obviously (A2.1.6) takes the form \\ (3.2.2)~~~ replace $( x_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$ \\ \hspace*{1.85cm} with $( x_1, y_1 ) ~\gamma~ ( x\,'_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$, \\ \hspace*{1.9cm}or vice-versa, where $x\,'_1 \in X,~ x\,'_1 \geq x_1$ \\ while (A2.1.7) becomes \\ (3.2.3)~~~ replace $( x_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$ \\ \hspace*{1.85cm} with $( x_1, y_1 ) ~\gamma~ ( x_1, y\,'_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$, \\ \hspace*{1.9cm}or vice-versa, where $y\,'_1 \in Y,~ y\,'_1 \geq y_1$ \\ It follows in particular that, for $x \in X,~ y \in Y$, we have \\ (3.2.4)~~~ $ \begin{array}{l} x \bigotimes_{\min_X, \min_Y} y ~=~ ( x \bigotimes_{\min_X, \min_Y} y ) ~\gamma~ ( x\,' \bigotimes_{\min_X, \min_Y} y ) ~=~ \\ \\ ~=~ ( x \bigotimes_{\min_X, \min_Y} y ) ~\gamma~ ( x \bigotimes_{\min_X, \min_Y} y\,' ) \end{array} $ \\ \\ for $x\,' \in X,~ x\,' \geq x,~ y\,' \in Y,~ y\,' \geq y$. Indeed, (3.2.2) gives \\ $~~~~~~ ( x, y ) ~\approx_{\min_X, \min_Y}~ ( x, y ) ~\gamma~ ( x\,', y ) $ \\ while (3.2.3) implies \\ $~~~~~~ ( x, y ) ~\approx_{\min_X, \min_Y}~ ( x, y ) ~\gamma~ ( x, y\,' ) $ \\ which proves (3.2.4). \\ We note that (3.2.4) is a powerful {\it simplification} rule for elements of the tensor product $X \bigotimes_{\min_X, \min_Y} Y$, which according to (A2.1.10), are of the form \\ (3.2.5)~~~ $ x_1 \bigotimes_{\min_X, \min_Y} y_1 ~\gamma~ x_2 \bigotimes_{\min_X, \min_Y} y_2 ~\gamma~ \ldots ~\gamma~ x_n \bigotimes_{\min_X, \min_Y} y_n $ \\ The interpretation of (3.2.4) in terms of certain {\it paths} in $( X, \leq ) \times ( Y, \leq )$ further clarifies its meaning.
Indeed, let us consider on $X \times Y$ the {\it reflexive, antisymmetric} binary relation $\dashv$, defined by \\ (3.2.6)~~~ $ ( x, y ) \dashv ( x \,', y\,' ) ~~~\Longleftrightarrow~~~ \left ( ~ \begin{array}{l} ~~1)~~ x = x\,',~~~ y \leq y\,',~~ \mbox{or} \\ \\ ~~2)~~ x \leq x\,',~~~ y = y\,' \end{array} ~ \right ) $ \\ \\ which in general is {\it not} transitive. Then a finite sequence in $X \times Y$ \\ (3.2.7)~~~ $ ( x_1, y_1 ), ( x_2, y_2 ), ( x_3, y_3 ), \ldots , ( x_n, y_n ) $ \\ is called {\it path-free} in $( X, \leq ) \times ( Y, \leq )$, if and only if none of the relations \\ (3.2.8)~~~ $ ( x_i, y_i ) \dashv ( x_j, y_j ) $ \\ holds, where $1 \leq i, j \leq n,~ i \neq j$. It is, therefore, convenient to introduce on $X \times Y$ the related binary relation $\bowtie$ defined by \\ (3.2.9)~~~ $ ( x, y ) \bowtie ( x \,', y\,' ) ~~~\Longleftrightarrow~~~ \left (~ \begin{array}{l} \mbox{neither}~~ ( x, y ) \dashv ( x\,', y\,' ) \\ \\ \mbox{nor}~~ ( x\,', y\,' ) \dashv ( x, y ) \end{array} ~ \right ) $ \\ It follows that the path-free condition (3.2.8) can be written in the equivalent form \\ (3.2.10)~~~ $ ( x_i, y_i ) \bowtie ( x_j, y_j ),~~~ 1 \leq i < j \leq n $ \\ Now, in view of (3.2.4), we have for $x, x\,' \in X,~ y, y\,' \in Y$, the implication \\ (3.2.11)~~~ $ ( x, y ) \dashv ( x \,', y\,' ) ~~~\Longrightarrow~~~ \left ( ~ \begin{array}{l} x \bigotimes_{\min_X, \min_Y} y ~=~ \\ \\ ~=~ ( x \bigotimes_{\min_X, \min_Y} y ) ~\gamma~ ( x\,' \bigotimes_{\min_X, \min_Y} y\,' ) \end{array} ~ \right ) $ \\ and therefore \\ (3.2.12)~~~ $ \begin{array}{l} \left ( ~ \begin{array}{l} x \bigotimes_{\min_X, \min_Y} y ~\neq~ \\ \\ ~\neq~ ( x \bigotimes_{\min_X, \min_Y} y ) ~\gamma~ ( x\,' \bigotimes_{\min_X, \min_Y} y\,' ) \end{array} ~ \right ) ~~~\Longrightarrow~~~ \\ \\ ~~~~~~~~~~~~ \Longrightarrow~~~ ( x, y ) \bowtie ( x \,', y\,' ) \end{array} $ \\ And then (3.2.5) results in \\ {\bf Theorem 3.2.1.} \\ The elements of the tensor product $X \bigotimes_{\min_X,
\min_Y} Y$ are of the form \\ (3.2.13)~~~ $ x_1 \bigotimes_{\min_X, \min_Y} y_1 ~\gamma~ x_2 \bigotimes_{\min_X, \min_Y} y_2 ~\gamma~ \ldots ~\gamma~ x_n \bigotimes_{\min_X, \min_Y} y_n $ \\ where \\ (3.2.14)~~~ $ ( x_1, y_1 ), ( x_2, y_2 ), ( x_3, y_3 ), \ldots , ( x_n, y_n ) $ \\ are path-free in $( X, \leq ) \times ( Y, \leq )$. \\ {\bf Corollary 3.2.1.} \\ If an element $w \in X \bigotimes_{\min_X, \min_Y} Y$ is entangled, then it is of form (3.2.13), and the respective path-free sequence (3.2.14) has length at least two. \\ {\bf Remark 3.2.1.} \\ Obviously, the tensor products \\ $~~~~~~ X \bigotimes_{\min_X, \max_Y} Y,~ X \bigotimes_{\max_X, \min_Y} Y $ and $ X \bigotimes_{\max_X, \max_Y} Y $ \\ can be obtained in a similar manner, simply by replacing $( X, \leq )$ with $( X, \geq )$, and $( Y, \leq )$ with $( Y, \geq )$. \\ \\ {\bf 3.3. A Second Example} \\ We can consider the more general situation when $( X, \leq )$ and $( Y, \leq)$ are two partially ordered sets which are lattices. In this case we can still define the binary operations $\min_X : X \times X \longrightarrow X$ and $\min_Y : Y \times Y \longrightarrow Y$ in the usual manner. \\ Let us now construct the tensor product, see (A2.1.8) \\ (3.3.1)~~~ $ X \bigotimes_{\min_X, \min_Y} Y = Z / \approx_{\min_X, \min_Y} $ \\ For that purpose, we have to particularize the conditions (A2.1.6), (A2.1.7) which are involved in the definition of the equivalence relation $\approx_{\min_X, \min_Y}$.
In this respect, obviously (A2.1.6) takes the form \\ (3.3.2)~~~ replace $( x_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$ \\ \hspace*{1.85cm} with $( x\,'_1, y_1 ) ~\gamma~ ( x\,''_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$, \\ \hspace*{1.9cm}or vice-versa, where $x\,'_1, x\,''_1 \in X,~ x_1 = \min_X ( x\,'_1, x\,''_1 )$ \\ while (A2.1.7) becomes \\ (3.3.3)~~~ replace $( x_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$ \\ \hspace*{1.85cm} with $( x_1, y\,'_1 ) ~\gamma~ ( x_1, y\,''_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$, \\ \hspace*{1.9cm}or vice-versa, where $y\,'_1, y\,''_1 \in Y,~ y_1 = \min_Y ( y\,'_1, y\,''_1 )$ \\ It follows in particular that, for $x \in X,~ y \in Y$, we have \\ (3.3.4)~~~ $ \begin{array}{l} x \bigotimes_{\min_X, \min_Y} y ~=~ ( x\,' \bigotimes_{\min_X, \min_Y} y ) ~\gamma~ ( x\,'' \bigotimes_{\min_X, \min_Y} y ) ~=~ \\ \\ ~=~ ( x \bigotimes_{\min_X, \min_Y} y\,' ) ~\gamma~ ( x \bigotimes_{\min_X, \min_Y} y\,'' ) \end{array} $ \\ \\ for $x\,', x\,'' \in X,~ x = \min_X ( x\,', x\,'' )$ and $y\,', y\,'' \in Y,~ y = \min_Y ( y\,', y\,'' )$. \\ Let us see the effect on the elements \\ (3.3.5)~~~ $ x_1 \bigotimes_{\min_X, \min_Y} y_1 ~\gamma~ x_2 \bigotimes_{\min_X, \min_Y} y_2 ~\gamma~ \ldots ~\gamma~ x_n \bigotimes_{\min_X, \min_Y} y_n $ \\ which constitute the tensor product $X \bigotimes_{\min_X, \min_Y} Y$, of the simplifications introduced by (3.3.4). For that purpose, the following notions are useful.
Given $( x, y ) \in X \times Y$, we denote \\ (3.3.6)~~~ $ V_X ( x, y ) = \{~ ( ( x\,', y ), ( x\,'', y ) ) ~~|~~ x\,', x\,'' \in X,~ x = \min_X ( x\,', x\,'' ) ~\} $ \\ and call it the $X$-{\it wedge} at $( x, y)$, while the $Y$-{\it wedge} at $( x, y)$ is given by \\ (3.3.7)~~~ $ V_Y ( x, y ) = \{~ ( ( x, y\,' ), ( x, y\,'' ) ) ~~|~~ y\,', y\,'' \in Y,~ y = \min_Y ( y\,', y\,'' ) ~\} $ \\ Further, three pairs $( x_1, y_1 ), ( x_2, y_2 ), ( x_3, y_3 ) \in X \times Y$ are called {\it wedge-related}, if and only if, for some permutation $i, j, k$ of their indices, we have \\ (3.3.8)~~~ $ ( ( x_i, y_i ), ( x_j, y_j ) ) \in V_X ( x_k, y_k ) \cup V_Y ( x_k, y_k ) $ \\ Now, a finite sequence in $X \times Y$ \\ (3.3.9)~~~ $ ( x_1, y_1 ), ( x_2, y_2 ), ( x_3, y_3 ), \ldots , ( x_n, y_n ) $ \\ is called {\it wedge-free} in $( X, \leq ) \times ( Y, \leq )$, if and only if no three of them are wedge-related. \\ It follows that (3.3.5) results in \\ {\bf Theorem 3.3.1.} \\ The elements of the tensor product $X \bigotimes_{\min_X, \min_Y} Y$ are of the form \\ (3.3.10)~~~ $ x_1 \bigotimes_{\min_X, \min_Y} y_1 ~\gamma~ x_2 \bigotimes_{\min_X, \min_Y} y_2 ~\gamma~ \ldots ~\gamma~ x_n \bigotimes_{\min_X, \min_Y} y_n $ \\ where \\ (3.3.11)~~~ $ ( x_1, y_1 ), ( x_2, y_2 ), ( x_3, y_3 ), \ldots , ( x_n, y_n ) $ \\ are wedge-free in $( X, \leq ) \times ( Y, \leq )$. \\ {\bf Corollary 3.3.1.} \\ If an element $w \in X \bigotimes_{\min_X, \min_Y} Y$ is entangled, then it is of form (3.3.10), and the respective wedge-free sequence (3.3.11) has length at least two. \\ {\bf Remark 3.3.1.} \\ Obviously, the tensor products \\ $~~~~~~ X \bigotimes_{\min_X, \max_Y} Y,~ X \bigotimes_{\max_X, \min_Y} Y $ and $ X \bigotimes_{\max_X, \max_Y} Y $ \\ can be obtained in a similar manner, simply by replacing $( X, \leq )$ with $( X, \geq )$, and $( Y, \leq )$ with $( Y, \geq )$. \\ \\ {\bf 3.4.
A Third Example} \\ Let $X$ and $Y$ be two nonvoid sets and let us consider on them the binary operations, see (3.1.5.1.1), $\lambda_X,~ \lambda_Y$, respectively. Let us now construct the tensor product, see (A2.1.8) \\ (3.4.1)~~~ $ X \bigotimes_{\lambda_X, \lambda_Y} Y = Z / \approx_{\lambda_X, \lambda_Y} $ \\ For that purpose, we have to particularize the conditions (A2.1.6), (A2.1.7) which are involved in the definition of the equivalence relation $\approx_{\lambda_X, \lambda_Y}$. In this respect, obviously (A2.1.6) takes the form \\ (3.4.2)~~~ replace $( x_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$ \\ \hspace*{1.85cm} with $( x_1, y_1 ) ~\gamma~ ( x\,'_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$, \\ \hspace*{1.9cm}or vice-versa \\ while (A2.1.7) becomes \\ (3.4.3)~~~ replace $( x_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$ \\ \hspace*{1.85cm} with $( x_1, y_1 ) ~\gamma~ ( x_1, y\,'_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$, \\ \hspace*{1.9cm}or vice-versa \\ In particular, it follows that, for $x \in X,~ y \in Y$, we have \\ (3.4.4)~~~ $ \begin{array}{l} x \bigotimes_{\lambda_X, \lambda_Y} y ~=~ ( x \bigotimes_{\lambda_X, \lambda_Y} y ) ~\gamma~ ( x\,' \bigotimes_{\lambda_X, \lambda_Y} y ) ~=~ \\ \\ ~=~ ( x \bigotimes_{\lambda_X, \lambda_Y} y ) ~\gamma~ ( x \bigotimes_{\lambda_X, \lambda_Y} y\,' ) \end{array} $ \\ \\ for $x\,' \in X,~ y\,' \in Y$. Indeed, (3.4.2) gives \\ $~~~~~~ ( x, y ) ~\approx_{\lambda_X, \lambda_Y}~ ( x, y ) ~\gamma~ ( x\,', y ) $ \\ while (3.4.3) implies \\ $~~~~~~ ( x, y ) ~\approx_{\lambda_X, \lambda_Y}~ ( x, y ) ~\gamma~ ( x, y\,' ) $ \\ which proves (3.4.4). \\ Here we note that the above simplification rule (3.4.4) regarding the terms of the tensor product $X \bigotimes_{\lambda_X, \lambda_Y} Y$ is considerably more powerful than that in (3.2.4).
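Indeed, since (3.4.4) imposes no condition at all on $x\,'$ and $y\,'$, any pair sharing a first or a second coordinate with another pair of the sequence can be absorbed outright. A small Python sketch (illustrative only; the function name and the finite test data are our own) makes the reduction explicit:

```python
def simplify_lambda(seq):
    """Illustrative reduction via (3.4.4): any pair sharing a first or
    second coordinate with another pair may be absorbed by it, with no
    order structure required.  The surviving sequence has pairwise
    distinct first coordinates and pairwise distinct second coordinates;
    the representative obtained is in general not unique."""
    seq = list(seq)
    changed = True
    while changed:
        changed = False
        for j in range(len(seq)):
            if any(i != j and (seq[i][0] == seq[j][0] or seq[i][1] == seq[j][1])
                   for i in range(len(seq))):
                del seq[j]
                changed = True
                break
    return seq

out = simplify_lambda([(1, 1), (1, 2), (3, 2), (4, 5)])
# all first coordinates distinct, all second coordinates distinct:
assert len({x for x, _ in out}) == len(out)
assert len({y for _, y in out}) == len(out)
```

The surviving pairs are precisely repetition free in the sense of (3.4.6), which is why the reduction here is stronger than the one available for (3.2.4).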
Consequently, the tensor product $X \bigotimes_{\lambda_X, \lambda_Y} Y$ will have {\it fewer} elements than the tensor product $X \bigotimes_{\min_X, \min_Y} Y$. \\ Let us make the situation more precise. A finite sequence in $X \times Y$ \\ (3.4.5)~~~ $ ( x_1, y_1 ), ( x_2, y_2 ), ( x_3, y_3 ), \ldots , ( x_n, y_n ) $ \\ is called {\it repetition free}, if and only if \\ (3.4.6)~~~ $ x_i \neq x_j,~~ y_i \neq y_j,~~~ 1 \leq i < j \leq n $ \\ Now, in view of (A2.1.10), we obtain \\ {\bf Theorem 3.4.1.} \\ The elements of the tensor product $X \bigotimes_{\lambda_X, \lambda_Y} Y$ are of the form \\ (3.4.7)~~~ $ x_1 \bigotimes_{\lambda_X, \lambda_Y} y_1 ~\gamma~ x_2 \bigotimes_{\lambda_X, \lambda_Y} y_2 ~\gamma~ \ldots ~\gamma~ x_n \bigotimes_{\lambda_X, \lambda_Y} y_n $ \\ where \\ (3.4.8)~~~ $ ( x_1, y_1 ), ( x_2, y_2 ), ( x_3, y_3 ), \ldots , ( x_n, y_n ) $ \\ are repetition free in $X \times Y$. \\ {\bf Corollary 3.4.1.} \\ If an element $w \in X \bigotimes_{\lambda_X, \lambda_Y} Y$ is entangled, then it is of form (3.4.7), and the respective repetition-free sequence (3.4.8) has length at least two. \\ {\bf Remark 3.4.1.} \\ Obviously, the tensor products \\ $~~~~~~ X \bigotimes_{\lambda_X, \rho_Y} Y,~ X \bigotimes_{\rho_X, \lambda_Y} Y $ and $ X \bigotimes_{\rho_X, \rho_Y} Y $ \\ can be obtained in a similar manner. \\ \\ {\bf 3.5.
A Fourth Example} \\ Let $X$ be a vector space over $\mathbb{R}$ or $\mathbb{C}$, and define on it the binary operation \\ (3.5.1)~~~ $ \mu ( x, x\,' ) = ( x + x\,' ) / 2,~~~ x, x\,' \in X $ \\ Similarly, on a vector space $Y$ over $\mathbb{R}$ or $\mathbb{C}$, we define the binary operation \\ (3.5.2)~~~ $ \nu ( y, y\,' ) = ( y + y\,' ) / 2,~~~ y, y\,' \in Y $ \\ Now we construct the tensor product \\ (3.5.3)~~~ $ X \bigotimes_{\mu, \nu} Y = Z / \approx_{\mu, \nu} $ \\ Particularizing the conditions (A2.1.6), (A2.1.7) which are involved in the definition of the equivalence relation $\approx_{\mu, \nu}$, we obtain that (A2.1.6) takes the form \\ (3.5.4)~~~ replace $( x_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$ \\ \hspace*{1.85cm} with $( x\,'_1, y_1 ) ~\gamma~ ( x\,''_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$, \\ \hspace*{1.9cm}or vice-versa, where $x\,'_1, x\,''_1 \in X,~ x\,'_1 + x\,''_1 = 2 x_1$ \\ while (A2.1.7) becomes \\ (3.5.5)~~~ replace $( x_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$ \\ \hspace*{1.85cm} with $( x_1, y\,'_1 ) ~\gamma~ ( x_1, y\,''_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$, \\ \hspace*{1.9cm}or vice-versa, where $y\,'_1, y\,''_1 \in Y,~ y\,'_1 + y\,''_1 = 2 y_1$ \\ It follows that, for $x \in X,~ y \in Y$, we have \\ (3.5.6)~~~ $ \begin{array}{l} x \bigotimes_{\mu, \nu} y ~=~ ( x\,' \bigotimes_{\mu, \nu} y ) ~\gamma~ ( x\,'' \bigotimes_{\mu, \nu} y ) ~=~ \\ \\ ~=~ ( x \bigotimes_{\mu, \nu} y\,' ) ~\gamma~ ( x \bigotimes_{\mu, \nu} y\,'' ) \end{array} $ \\ \\ for $x\,', x\,'' \in X,~ x\,' + x\,'' = 2 x,~ y\,', y\,'' \in Y,~ y\,' + y\,'' = 2 y$. \\ Obviously, (3.5.6) is again a powerful simplification rule regarding the terms of the tensor product $X \bigotimes_{\mu, \nu} Y$. However, the situation can be made more precise.
Namely, any given finite sequence in $X \times Y$ \\ (3.5.7)~~~ $ ( x_1, y_1 ), ( x_2, y_2 ), ( x_3, y_3 ), \ldots , ( x_n, y_n ) $ \\ is called {\it median free}, if and only if \\ (3.5.8)~~~ $ 2 x_i \neq x_j + x_k,~~ 2 y_i \neq y_j + y_k $ \\ for $1 \leq i, j, k \leq n$, with $i, j, k$ pair-wise different. \\ Now, in view of (A2.1.10), we obtain \\ {\bf Theorem 3.5.1.} \\ The elements of the tensor product $X \bigotimes_{\mu, \nu} Y$ are of the form \\ (3.5.9)~~~ $ x_1 \bigotimes_{\mu, \nu} y_1 ~\gamma~ x_2 \bigotimes_{\mu, \nu} y_2 ~\gamma~ \ldots ~\gamma~ x_n \bigotimes_{\mu, \nu} y_n $ \\ where \\ (3.5.10)~~~ $ ( x_1, y_1 ), ( x_2, y_2 ), ( x_3, y_3 ), \ldots , ( x_n, y_n ) $ \\ are median free in $X \times Y$. \\ {\bf Corollary 3.5.1.} \\ If an element $w \in X \bigotimes_{\mu, \nu} Y$ is entangled, then it is of form (3.5.9), and the respective median-free sequence (3.5.10) has length at least two. \\ \\ {\bf 4. Further Generalizations of Tensor Products} \\ The above example in subsection 3.5. indicates both the possibility of, and the interest in, further generalizing tensor products beyond the already rather general setup in Appendix 2. \\ As a start, one can in a natural manner extend the concept of tensor products defined in (A2.1.8), which is based on binary operations. Instead, and as suggested by (3.5.4), (3.5.5), one can use relations of arbitrary arity. Indeed, in (3.5.4), the ternary ($1 + 2$-ary) relation on $X$ of the form \\ $~~~~~~ 2 x = x\,' + x\,'' $ \\ is used, and obviously this relation cannot be obtained as \\ $~~~~~~ x = \alpha ( x\,', x\,'' ) $ \\ from a binary operation $\alpha$ on $X$, in case $( X, \alpha )$ is, for instance, an arbitrary semigroup. The situation with (3.5.5) is similar. \\ Let therefore $X$ and $Y$ be two nonvoid sets, and let $A \subseteq X^n \times X^m$, while $B \subseteq Y^k \times Y^l$, for some $n, m, k, l \geq 1$.
Then, in extending (A2.1.5) - (A2.1.8), we define an equivalence relation $\approx_{A, B}$ on $Z$, as follows. Two sequences in $Z$ are equivalent, if and only if they are identical, or each can be obtained from the other by a finite number of applications of the following operations \\ (4.1)~~~ permute pairs $( x_i, y_i )$ within the sequence \\ (4.2)~~~ replace $( x_1, y ) ~\gamma~ \ldots ~\gamma~ ( x_n, y ) ~\gamma~ ( u_1, v_1 ) ~\gamma~ \ldots ~\gamma~ ( u_p, v_p )$ \\ \hspace*{1.85cm} with $( x\,'_1, y ) ~\gamma~ \ldots ~\gamma~ ( x\,'_m, y ) ~\gamma~ ( u_1, v_1 ) ~\gamma~ \ldots ~\gamma~ ( u_p, v_p )$, \\ \hspace*{1.9cm}or vice-versa, where $( x_1, \ldots , x_n, x\,'_1, \ldots , x\,'_m ) \in A$ \\ (4.3)~~~ replace $( x, y_1 ) ~\gamma~ \ldots ~\gamma~ ( x, y_k ) ~\gamma~ ( u_1, v_1 ) ~\gamma~ \ldots ~\gamma~ ( u_p, v_p )$ \\ \hspace*{1.85cm} with $( x, y\,'_1 ) ~\gamma~ \ldots ~\gamma~ ( x, y\,'_l ) ~\gamma~ ( u_1, v_1 ) ~\gamma~ \ldots ~\gamma~ ( u_p, v_p )$, \\ \hspace*{1.9cm}or vice-versa, where $( y_1, \ldots , y_k, y\,'_1, \ldots , y\,'_l ) \in B$ \\ Further details related to the resulting tensor products \\ (4.4)~~~ $ X \bigotimes_{A, B} Y = Z / \approx_{A, B} $ \\ will be presented elsewhere. \\ Here we note that in the example in subsection 3.5., we had \\ (4.5)~~~ $ A = \{~ ( x, x\,'_1, x\,'_2 ) \in X \times X^2 ~~|~~ 2 x = x\,'_1 + x\,'_2 ~\} $ \\ (4.6)~~~ $ B = \{~ ( y, y\,'_1, y\,'_2 ) \in Y \times Y^2 ~~|~~ 2 y = y\,'_1 + y\,'_2 ~\} $ \\ and obviously, we obtain $\approx_{A, B} ~=~ \approx_{\mu, \nu}$ on $Z$. \\ This example already shows the interest in the above generalization. Indeed, with $A, B$ as above, one can define the tensor product in subsection 3.5. not only when $X$ and $Y$ are vector spaces, but also in the more general case when they are merely semigroups. And in such a case, the respective equivalence $\approx_{A, B}$ is in general no longer of the form in (A2.2.1) - (A2.2.3), but of the more general form in (4.1) - (4.3) above.
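For a finite carrier, the admissible replacements of type (4.2) determined by the ternary relation (4.5) can simply be enumerated. The Python sketch below is a toy illustration (the function name and the carrier $X = \{0, 1, 2\}$ are our choices, not part of the text):

```python
def median_splits(x, X):
    """All pairs (x1, x2) in X with 2*x == x1 + x2: the ternary
    relation A of (4.5), read as admissible replacements of type
    (4.2) with n = 1, m = 2.  (Illustrative only.)"""
    return [(x1, x2) for x1 in X for x2 in X if 2 * x == x1 + x2]

# In X = {0, 1, 2}, the pair (1, y) may be replaced by (0, y) γ (2, y),
# since (1, 0, 2) ∈ A:
assert (0, 2) in median_splits(1, [0, 1, 2])
# x = 1 also splits trivially as (1, 1):
assert (1, 1) in median_splits(1, [0, 1, 2])
```

In a general semigroup the midpoint $x$ cannot be computed from $x\,'_1, x\,'_2$ by a binary operation, which is exactly why the relational formulation above is the natural datum.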
\\ Finally, the more general tensor products (A2.2.4) can also be further extended by considering families ${\cal A}$ of subsets $A \subseteq X^n \times X^m$, respectively, families ${\cal B}$ of subsets $B \subseteq Y^k \times Y^l$, and accordingly, modifying the operations (A2.2.2), (A2.2.3). \\ \\ {\bf Appendix 1. (Definition of Usual Tensor Products of \\ \hspace*{3cm} Vector Spaces)} \\ For convenience, we recall here certain main features of the usual tensor product of vector spaces, and relate them to certain properties of Cartesian products. \\ Let $\mathbb{K}$ be a field and $E,F, G$ vector spaces over $\mathbb{K}$. \\ {\bf A1.1. Cartesian Product of Vector Spaces} \\ Then $E \times F$ is the vector space over $\mathbb{K}$ where the operations are given by \\ $~~~~~~ \lambda ( x, y ) + \mu ( u, v ) ~=~ ( \lambda x + \mu u, \lambda y + \mu v ) $ \\ for any $x, u \in E,~ y, v \in F,~ \lambda, \mu \in \mathbb{K}$. \\ \\ {\bf A1.2. Linear Mappings} \\ Let ${\cal L} ( E, F )$ be the set of all mappings \\ $~~~~~~ f : E ~\longrightarrow~ F $ \\ such that \\ $~~~~~~ f ( \lambda x + \mu u ) ~=~ \lambda f ( x ) + \mu f ( u ) $ \\ for $x, u \in E,~ \lambda, \mu \in \mathbb{K}$. \\ \\ {\bf A1.3. Bilinear Mappings} \\ Let ${\cal L} ( E, F; G )$ be the set of all mappings \\ $~~~~~~ g : E \times F ~\longrightarrow~ G $ \\ such that for $x \in E$ fixed, the mapping $F \ni y \longmapsto g ( x, y ) \in G$ is linear in $y$, and similarly, for $y \in F$ fixed, the mapping $E \ni x \longmapsto g ( x, y ) \in G$ is linear in $x \in E$. \\ It is easy to see that \\ $~~~~~~ {\cal L} ( E, F; G ) ~=~ {\cal L} ( E, {\cal L} ( F, G ) ) $ \\ \\ {\bf A1.4. Tensor Products} \\ The aim of the tensor product $E \bigotimes F$ is to establish a close connection between the {\it bilinear} mappings in ${\cal L} ( E, F; G )$ and the {\it linear} mappings in ${\cal L} ( E \bigotimes F , G )$.
\\ Namely, the {\it tensor product} $E \bigotimes F$ is : \\ (A1.4.1)~~~ a vector space over $\mathbb{K}$, together with \\ (A1.4.2)~~~ a bilinear mapping $t : E \times F ~\longrightarrow~ E \bigotimes F$, such that we \\ \hspace*{1.6cm} have the following : \\ \\ {\bf UNIVERSALITY PROPERTY} \\ $\begin{array}{l} ~~~~~~~~~~ \forall~~~ V ~\mbox{vector space over}~ \mathbb{K},~~ g \in {\cal L} ( E, F; V ) ~\mbox{bilinear mapping}~ : \\ \\ ~~~~~~~~~~ \exists~ !~~ h \in {\cal L} ( E \bigotimes F, V ) ~\mbox{linear mapping}~ : \\ \\ ~~~~~~~~~~~~~~~~ h \circ t ~=~ g \end{array} $ \\ \\ or in other words : \\ (A1.4.3)~~~ the diagram commutes \begin{math} \setlength{\unitlength}{1cm} \thicklines \begin{picture}(13,7) \put(0.9,5){$E \times F$} \put(2.5,5.1){\vector(1,0){6.2}} \put(9.2,5){$E \bigotimes F$} \put(5,5.4){$t$} \put(1.7,4.5){\vector(1,-1){3.5}} \put(9.5,4.5){\vector(-1,-1){3.5}} \put(5.5,0.3){$V$} \put(3,2.5){$g$} \put(8.1,2.5){$\exists~!~~ h$} \end{picture} \end{math} and \\ (A1.4.4)~~~ the tensor product $E \bigotimes F$ is {\it unique} up to vector \\ \hspace*{1.6cm} space isomorphism. \\ Therefore we have the {\it injective} mapping \\ $~~~~~~ {\cal L} ( E, F; V ) \ni g ~\longmapsto~ h \in {\cal L} ( E \bigotimes F, V ) ~~~~\mbox{with}~~~~ h \circ t ~=~ g $ \\ The converse mapping \\ $~~~~~~ {\cal L} ( E \bigotimes F, V ) \ni h ~\longmapsto~ g ~=~ h \circ t \in {\cal L} ( E, F; V ) $ \\ obviously exists. Thus we have the {\it bijective} mapping \\ $~~~~~~ {\cal L} ( E \bigotimes F, V ) \ni h ~\longmapsto~ g ~=~ h \circ t \in {\cal L} ( E, F; V ) $ \\ \\ {\bf A1.5. Lack of Interest in ${\cal L} ( E \times F, G )$} \\ Let $f \in {\cal L} ( E \times F, G )$ and $( x, y ) \in E \times F$, then $( x, y ) = ( x, 0 ) + ( 0, y )$, hence \\ $~~~~~~ f ( x, y ) ~=~ f ( ( x, 0 ) + ( 0, y ) ) ~=~ f ( x, 0 ) ~+~ f ( 0, y ) $ \\ thus $f ( x, y )$ depends on $x$ and $y$ in a {\it particular} manner, that is, separately on $x$, and separately on $y$. \\ \\ {\bf A1.6. 
Universality Property of Cartesian Products} \\ Let $X, Y$ be two nonvoid sets. Their cartesian product is : \\ (A1.6.1)~~~ a set $X \times Y$, together with \\ (A1.6.2)~~~ two projection mappings~ $p_X : X \times Y ~\longrightarrow~ X, \\ \hspace*{1.6cm} p_Y : X \times Y ~\longrightarrow~ Y$, such that we have the following : \\ \\ {\bf UNIVERSALITY PROPERTY} \\ $\begin{array}{l} ~~~~~~~~~~ \forall~~~ Z ~\mbox{nonvoid set},~~ f : Z ~\longrightarrow~ X,~~ g : Z ~\longrightarrow~ Y~ : \\ \\ ~~~~~~~~~~ \exists~ !~~ h : Z ~\longrightarrow~ X \times Y~ : \\ \\ ~~~~~~~~~~~~~~~~ f ~=~ p_X \circ h,~~~ g ~=~ p_Y \circ h \end{array} $ \\ \\ or in other words : \\ (A1.6.3)~~~ the diagram commutes \begin{math} \setlength{\unitlength}{1cm} \thicklines \begin{picture}(20,9) \put(6.5,3.6){$\exists~ ! ~~ h$} \put(6.1,6.5){\vector(0,-1){5.5}} \put(6,7){$Z$} \put(3.5,5.5){$f$} \put(5.7,7){\vector(-1,-1){3}} \put(8.5,5.5){$g$} \put(6.55,7){\vector(1,-1){3}} \put(2.3,3.6){$X$} \put(9.8,3.6){$Y$} \put(3.5,1.8){$p_X$} \put(5.5,0.7){\vector(-1,1){2.7}} \put(8.5,1.8){$p_Y$} \put(6.9,0.7){\vector(1,1){2.7}} \put(5.7,0.3){$X \times Y$} \end{picture} \end{math} \\ \\ {\bf A1.7.
Cartesian and Tensor Products seen together} \\ \begin{math} \setlength{\unitlength}{1cm} \thicklines \begin{picture}(30,10) \put(-0.2,6){$\forall~G$} \put(0.5,6.5){\vector(1,1){2}} \put(0.8,7.6){$\forall~f$} \put(2.8,8.8){$\underline{\underline{E}}$} \put(0.5,5.5){\vector(1,-1){2}} \put(0.8,4.2){$\forall~g$} \put(2.8,3.1){$\underline{\underline{F}}$} \put(5.8,6){$\underline{\underline{E \times F}}$} \put(5.5,6.5){\vector(-1,1){2}} \put(4.8,7.6){$\underline{\underline{pr_E}}$} \put(5.5,5.5){\vector(-1,-1){2}} \put(4.8,4.2){$\underline{\underline{pr_F}}$} \put(0.5,6.1){\vector(1,0){4.9}} \put(2.6,6.3){$\exists~ !~~ h$} \put(7,6.5){\vector(1,1){2}} \put(7.3,7.6){$\underline{\underline{~t~}}$} \put(7,5.5){\vector(1,-1){2}} \put(7.2,4.2){$\forall~k$} \put(9.3,8.8){$\underline{\underline{E \bigotimes F}}$} \put(9.1,3.1){$\forall~V$} \put(9.5,8.3){\vector(0,-1){4.5}} \put(9.8,6){$\exists~ ! ~~ l$} \end{picture} \end{math} \\ \\ {\bf Appendix 2. ( Tensor Products beyond Vector Spaces)} \\ {\bf A2.1. The Case of Arbitrary Binary Operations} \\ Let us present the {\it first extension} of the standard definition of {\it tensor product} $X \bigotimes Y$, see Appendix 1, to the case of two structures $( X, \alpha )$ and $( Y, \beta )$, where $\alpha : X \times X \longrightarrow X,~ \beta : Y \times Y \longrightarrow Y$ are arbitrary binary operations on two arbitrary given sets $X$ and $Y$, respectively. The way to proceed is as follows. Let us denote by $Z$ the set of all finite sequences of pairs \\ (A2.1.1)~~~ $ ( x_1, y_1 ), \dots , ( x_n, y_n ) $ \\ where $n \geq 1$, while $x_i \in X,~ y_i \in Y$, with $1 \leq i \leq n$. We define on $Z$ the binary operation $\gamma$ simply by the concatenation of the sequences (A2.1.1). 
It follows that $\gamma$ is associative; therefore, each sequence (A2.1.1) can be written as \\ (A2.1.2)~~~ $ ( x_1, y_1 ), \dots , ( x_n, y_n ) = ( x_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n ) $ \\ where for $n = 1$, the right-hand term is understood to be simply $( x_1, y_1 )$. Obviously, if $X$ or $Y$ has at least two elements, then $\gamma$ is not commutative. \\ Thus we have \\ (A2.1.3)~~~ $ Z = \left \{ ( x_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n ) ~~ \begin{array}{|l} ~ n \geq 1 \\ \\ ~ x_i \in X,~ y_i \in Y,~ 1 \leq i \leq n \end{array} \right \} $ \\ \\ which clearly gives \\ (A2.1.4)~~~ $ X \times Y \subseteq Z $ \\ Now we define on $Z$ an equivalence relation $\approx_{\alpha, \beta}$ as follows. Two sequences in (A2.1.1) are equivalent, if and only if they are identical, or each can be obtained from the other by a finite number of applications of the following operations \\ (A2.1.5)~~~ permute pairs $( x_i, y_i )$ within the sequence \\ (A2.1.6)~~~ replace $( \alpha ( x_1, x\,'_1 ) , y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$ \\ \hspace*{1.9cm} with $( x_1, y_1 ) ~\gamma~ ( x\,'_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$, \\ \hspace*{1.9cm}or vice-versa \\ (A2.1.7)~~~ replace $( x_1, \beta ( y_1, y\,'_1 ) ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$ \\ \hspace*{1.9cm} with $( x_1, y_1 ) ~\gamma~ ( x_1, y\,'_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$, \\ \hspace*{1.9cm}or vice-versa \\ Finally, the {\it tensor product} of $( X, \alpha )$ and $( Y, \beta )$ is defined to be the quotient space \\ (A2.1.8)~~~ $ X \bigotimes_{\alpha, \beta} Y = Z / \approx_{\alpha, \beta} $ \\ with the canonical quotient embedding, see (A2.1.4) \\ (A2.1.9)~~~ $ X \times Y \ni ( x, y ) \longmapsto x \bigotimes_{\alpha, \beta} y \in X \bigotimes_{\alpha, \beta} Y $ \\ where as in the usual case of tensor products, we denote by $x
\bigotimes_{\alpha, \beta} y$, or simply $x \bigotimes y$, the equivalence class of $( x, y ) \in X \times Y \subseteq Z$. \\ Obviously, the binary operation $\gamma$ on $Z$ will canonically lead by this quotient operation to a {\it commutative} and {\it associative} binary operation on $X \bigotimes_{\alpha, \beta} Y$, which for convenience is denoted by the same $\gamma$, although in view of (A2.1.8), this time it depends on $\alpha$ and $\beta$, thus it should rigorously be written $\gamma_{\alpha, \beta}$. \\ In this way, the elements of $X \bigotimes_{\alpha, \beta} Y$ are all the expressions \\ (A2.1.10)~~~ $ x_1 \bigotimes_{\alpha, \beta} y_1 ~\gamma~ x_2 \bigotimes_{\alpha, \beta} y_2 ~\gamma~ \ldots ~\gamma~ x_n \bigotimes_{\alpha, \beta} y_n $ \\ with $n \geq 1$ and $x_i \in X,~ y_i \in Y$, for $1 \leq i \leq n$. \\ The customary particular situation is when $X$ and $Y$ are commutative semigroups, groups, or even vector spaces over some field $\mathbb{K}$. In this case $\alpha, \beta$ and $\gamma$ are as usual denoted by +, that is, the sign of addition. \\ It is easy to note that in the construction of tensor products above, it is {\it not} necessary for $( X, \alpha )$ or $( Y, \beta )$ to be semigroups, let alone groups, or for that matter, vector spaces. Indeed, it is sufficient that $\alpha$ and $\beta$ are arbitrary binary operations on $X$ and $Y$, respectively, while $X$ and $Y$ can be arbitrary sets. \\ Also, as seen above, $\alpha$ or $\beta$ need {\it not} be commutative either. However, the resulting tensor product $X \bigotimes_{\alpha, \beta} Y$, with the respective binary operation $\gamma$, will nevertheless be commutative and associative. \\ \\ {\bf A2.2. A Second Generalization of Tensor Products} \\ The first generalization of tensor products presented in section A2.1. above can easily be further extended. 
Indeed, let $X,~ Y$ be arbitrary sets and let ${\cal A}$ be any set of binary operations on $X$, while correspondingly, ${\cal B}$ is any set of binary operations on $Y$. \\ The constructions in (A2.1.1) - (A2.1.4) can again be implemented, since they only depend on the sets $X,~ Y$. \\ Now, we can define on $Z$ the equivalence relation $\approx_{{\cal A}, {\cal B}}$ as follows. Two sequences in (A2.1.1) are equivalent, if and only if they are identical, or each can be obtained from the other by a finite number of applications of the following operations \\ (A2.2.1)~~~ permute pairs $( x_i, y_i )$ within the sequence \\ \\ (A2.2.2)~~~ replace $( x_1, y_1 ) ~\gamma~ ( x\,'_1, y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$ \\ \hspace*{1.9cm} with $( \alpha ( x_1, x\,'_1), y_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$, \\ \hspace*{1.9cm} or vice-versa, where $\alpha \in {\cal A}$ \\ \\ (A2.2.3)~~~ replace $( x_1, y_1 ) ~\gamma~ ( x_1, y\,'_1 ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$ \\ \hspace*{1.9cm} with $( x_1, \beta ( y_1, y\,'_1 ) ) ~\gamma~ ( x_2, y_2 ) ~\gamma~ \ldots ~\gamma~ ( x_n, y_n )$, \\ \hspace*{1.9cm} or vice-versa, where $\beta \in {\cal B}$ \\ \\ Thus one obtains the tensor product \\ (A2.2.4)~~~ $ X \bigotimes_{{\cal A}, {\cal B}} Y = Z / \approx_{{\cal A}, {\cal B}} $ \\ with the canonical quotient embedding \\ (A2.2.5)~~~ $ X \times Y \ni ( x, y ) \longmapsto x \bigotimes_{{\cal A}, {\cal B}} y \in X \bigotimes_{{\cal A}, {\cal B}} Y $ \\ and $X \bigotimes_{{\cal A}, {\cal B}} Y$ being the set of all elements \\ (A2.2.6)~~~ $ x_1 \bigotimes_{{\cal A}, {\cal B}} y_1 ~\gamma~ x_2 \bigotimes_{{\cal A}, {\cal B}} y_2 ~\gamma~ \ldots ~\gamma~ x_n \bigotimes_{{\cal A}, {\cal B}} y_n $ \\ with $n \geq 1$ and $x_i \in X,~ y_i \in Y$, for $1 \leq i \leq n$.
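For small finite carriers, the equivalence classes generated by the moves (A2.2.1) - (A2.2.3) can be explored exhaustively, at least up to a bound on the sequence length. The following Python sketch is an illustration under our own modelling choices: singleton families ${\cal A} = \{ \min \}$, ${\cal B} = \{ \min \}$; sequences stored as sorted tuples, so that the permutation move (A2.2.1) is built in; and, since classes are in general infinite, only a length-bounded fragment is returned.

```python
from itertools import combinations

def equivalence_class(seed, alpha, beta, X, Y, max_len=3):
    """Breadth-first search over the rewriting moves (A2.2.1)-(A2.2.3)
    for singleton families {alpha}, {beta}.  A merge replaces two pairs
    sharing a coordinate by one pair, as in (A2.2.2)/(A2.2.3); a split
    is the reverse move.  Splits beyond max_len are discarded, so only
    a finite fragment of the equivalence class is computed."""
    def canon(seq):
        return tuple(sorted(seq))   # permutation move built in

    seen = {canon(seed)}
    frontier = [canon(seed)]
    while frontier:
        seq = frontier.pop()
        nxt = []
        # merge moves: combine two pairs sharing a coordinate
        for i, j in combinations(range(len(seq)), 2):
            (x1, y1), (x2, y2) = seq[i], seq[j]
            rest = [p for k, p in enumerate(seq) if k not in (i, j)]
            if y1 == y2:
                nxt.append(rest + [(alpha(x1, x2), y1)])
            if x1 == x2:
                nxt.append(rest + [(x1, beta(y1, y2))])
        # split moves: replace (alpha(a,b), y) by (a, y) γ (b, y), etc.
        if len(seq) < max_len:
            for i, (x, y) in enumerate(seq):
                rest = [p for k, p in enumerate(seq) if k != i]
                for a in X:
                    for b in X:
                        if alpha(a, b) == x:
                            nxt.append(rest + [(a, y), (b, y)])
                for a in Y:
                    for b in Y:
                        if beta(a, b) == y:
                            nxt.append(rest + [(x, a), (x, b)])
        for cand in nxt:
            c = canon(cand)
            if c not in seen:
                seen.add(c)
                frontier.append(c)
    return seen

# With X = Y = {0, 1} and alpha = beta = min, the class of (1, 1)
# contains (1,1) γ (1,1), since min(1, 1) = 1 permits a split:
cls = equivalence_class([(1, 1)], min, min, [0, 1], [0, 1])
assert ((1, 1), (1, 1)) in cls
```

The sorted-tuple representation is a design shortcut: it quotients out (A2.2.1) for free, so only the merge/split moves need to be searched.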
\\ Let us further note that, given sets ${\cal A} \subseteq {\cal A}\,'$ of binary operations on $X$, and correspondingly, sets ${\cal B} \subseteq {\cal B}\,'$ of binary operations on $Y$, we have the surjective mapping \\ (A2.2.7)~~~ $ X \bigotimes_{{\cal A}, {\cal B}} Y \ni ( z )_{{\cal A}, {\cal B}} \longmapsto ( z )_{{\cal A}\,', {\cal B}\,'} \in X \bigotimes_{{\cal A}\,', {\cal B}\,'} Y $ \\ where $( z )_{{\cal A}, {\cal B}}$ denotes the $\approx_{{\cal A}, {\cal B}}$ equivalence class of $z \in Z$, and similarly with $( z )_{{\cal A}\,', {\cal B}\,'}$. \\ \\ \end{document}
\begin{document} \setcounter{tocdepth}{2} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \setcounter{footnote}{1} \hspace*{-2ex}\parbox{\textwidth}{ \begin{center} {\Large \bf Spontaneous Decay in the Presence of Absorbing Media } \\[4ex] Ho Trung Dung\footnotemark, Ludwig Kn\"{o}ll, and Dirk-Gunnar Welsch\\[2ex] {\it Theoretisch-Physikalisches Institut, Friedrich-Schiller-Universit\"{a}t Jena,\\ Max-Wien-Platz 1, D-07743 Jena, Germany} \\[1ex] (April 11, 2001) \end{center} } \footnotetext{On leave from the Institute of Physics, National Center for Sciences and Technology, 1 Mac Dinh Chi Street, District 1, Ho Chi Minh City, Vietnam.} \setcounter{footnote}{0} \vspace*{-1cm} \section*{ABSTRACT} After giving a summary of the basic-theoretical concept of quantization of the electromagnetic field in the presence of dispersing and absorbing (macroscopic) bodies, their effect on spontaneous decay of an excited atom is studied. Various configurations such as bulk material, planar half space media, spherical cavities, and microspheres are considered. In particular, the influence of material absorption on the local-field correction, the decay rate, the line shift, and the emission pattern is examined. Further, the interplay between radiative losses and losses due to material absorption is analyzed. Finally, the possibility of generating entangled states of two atoms coupled by a microsphere-assisted field is discussed. \renewcommand{\thefootnote}{\arabic{footnote}} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \setcounter{equation}{0} \section{INTRODUCTION} \label{sec:Intro} Spontaneous emission of an excited atom is not an immutable property of the atom, but it sensitively depends on the photonic spectral density of states that are involved in the atomic transition at the chosen location of the atom.
Already \citet{Purcell46} pointed out that spontaneous emission can be enhanced when the atom is inside a cavity and its transition is in resonance with a cavity mode. The opposite case of inhibition of spontaneous emission is also possible [\citet{Kleppner81}]. It is further well known that the decay process can even be reversed by strongly coupling the atom to a sufficiently sharp cavity-field mode, so that the emitted photon can be reabsorbed and reemitted. Obviously, the photonic density of states can be modified by the presence of macroscopic bodies, which, in the simplest case, change the boundary conditions for the electromagnetic field. In recent years, the engineering of periodic dielectric structures (photonic crystals) has attracted increasing interest [\citet{John87,Yablonovich87,John94,Kofman94,Joannopoulous95, Soukoulis96, Woldeyohannes99,Nikolopoulos00,Zhu00,Schriemer01}]. It is worth noting that spontaneous emission may be regarded as being a basic process in the rapidly growing field of cavity quantum electrodynamics (QED), where strong (resonant) interactions of a single or a few atoms with a single or a few radiation-field modes formed by material bodies are studied. Cavity QED itself has offered novel possibilities of testing fundamental aspects of quantum physics, such as quantum nondemolition measurement, complementarity, and entanglement [for reviews, see, e.g., \citet{Hinds91,Haroche92,Meschede92, Meystre92, Berman94,Haroche98,Kimble98,Walther98}]. Spontaneous emission in the presence of material bodies is not only interesting from the point of view of fundamental research, but it has also offered a number of interesting applications. It can provide a reliable and efficient single-photon source to be used in quantum information processing [\citet{DeMartini96}, \citet{Kitson98}].
The sensitivity of resonance fluorescence to the ambient medium is crucial in scanning near-field optical microscopy [\citet{Betzig93,Kopelman93,Bian95,Henkel98,Gersen00}]. Another important potential application is the so-called thresholdless laser [\citet{DeMartini88,Yamamoto93,Protsenko99}]. In a conventional laser, only a small portion of the spontaneous emission is channeled into the lasing mode formed by the cavity mirrors, the rest being lost to the free-space modes. In a microcavity, due to the strongly modified emission pattern and the enhanced emission rate, a large portion of spontaneously emitted photons is stored in the cavity resonance mode. Losses due to excitation of free-space modes are thus drastically reduced and ultralow-threshold lasing can be achieved. Control of the spontaneous decay also plays an important role in solid-state systems, where instead of atoms, quantum well or quantum dot excitons play the role of the emitters [\citet{Yokoyama95,Khitrova99,Yamamoto00}]. Thus, the improved directionality of the spontaneously emitted light could have a dramatic impact on the manufacture of highly efficient light-emitting diodes and displays [\citet{Yamamoto93}], and the spectral narrowing could help to increase the transmission capacity of optical fiber systems where chromatic dispersion is the limiting factor [\citet{Hunt93}]. Although certain properties of spontaneous emission such as the decay rate can be described classically, using the model of a classically oscillating dipole interacting with its own radiation field [\citet{Chance78,Wylie85,Haroche92}], spontaneous emission is an intrinsically quantum mechanical process. Its proper description requires quantization of both the atom and the radiation field. Obviously, in the presence of material bodies the medium-assisted electromagnetic field must be quantized.
As long as the medium can be regarded as nondispersing and nonabsorbing, with a (real) permittivity that in general varies with space, electromagnetic-field quantization can be performed using, e.g., a generalized orthogonal-mode expansion [\citet{Knoell92}]. However, the concept fails when material absorption is included and the (spatially varying) permittivity becomes a complex function of frequency. The systematic study of the problem in recent years has generalized earlier results and offered powerful methods to deal with the special requirements of quantizing the electromagnetic field in absorbing media [for a review, see \citet{Knoell01}]. There are many reasons why the inclusion of material absorption in the study of spontaneous emission is desired. One might ask what happens when the atomic transition frequency becomes close to a medium resonance, where absorption is strong. In particular, the question arises of how absorption affects spontaneous emission in the presence of band-gap material. Obviously, if the atom is near an absorbing body, spontaneous decay need not be accompanied by an actually observable photon, and the question arises of what the (average) fraction of emitted light is. Another question is how absorption can modify the local field felt by the atom. A rigorous approach to the problem has acquired even more significance with the recent progress in designing certain types of microcavities (e.g., microspheres), where the ultimate quality level determined by intrinsic material losses has been achieved [\citet{Gorodetsky96}]. Although there has been a large body of theoretical work on medium-assisted spontaneous emission, material absorption has usually been ignored. Roughly speaking, there have been two concepts to treat absorption, namely the microscopic and the macroscopic approach.
The microscopic approach starts from a microscopic model of the medium [\citet{Lee95,Yeung96}; \citet{Juzeliunas97,Fleischhauer99}; \citet{Crenshaw00a,Crenshaw00b,Wubs01}]. Accordingly, the underlying total Hamiltonian typically consists of the Hamiltonians of the free atom, the free radiation field, the atomic systems of the medium, and all the mutual interactions. One then attempts to rewrite the resulting equations of motion for the coupled system so as to eliminate, on applying various approximation schemes, the medium variables and to obtain closed equations of motion for the atom--field system only. In this way, the lifetime of an excited atom in absorbing bulk material [\citet{Lee95}; \citet{Juzeliunas97}; \citet{Fleischhauer99,Crenshaw00a,Crenshaw00b,Wubs01}], the initial transient regime [\citet{Wubs01}], and the problem of local-field corrections [\citet{Juzeliunas97,Fleischhauer99,Crenshaw00a, Crenshaw00b,Wubs01}] have been studied, and the problem of spontaneous emission of an excited atom near an absorbing interface has been considered [\citet{Yeung96}]. The concepts typically borrow, at some stage of the calculation, from macroscopic electrodynamics, e.g., when a (model-specific) permittivity is introduced, boundary conditions at surfaces of discontinuity are set, or local-field corrections within the framework of cavity models are considered. Apart from the fact that the (simplified) microscopic models do not yield, in general, the exact permittivities, the calculations can become rather involved, particularly when surfaces of discontinuity are taken into account [\citet{Yeung96}]. Further, the elimination of the medium variables must be done very carefully in order to ensure that the equal-time commutation relations are preserved. If this is not the case [\citet{Crenshaw00a,Crenshaw00b}], the results are questionable.
In the macroscopic approach, the medium is described, from the very beginning, in terms of a spatially varying permittivity, which is a complex function of frequency, $\varepsilon({\bf r},\omega)$, that satisfies the Kramers--Kronig relations. This approach has -- similarly to classical optics -- the benefit of being universally valid, because it uses only general physical properties, without the need for involved {\em ab initio} calculations. Clearly, this concept is valid only down to a length scale that exceeds the average interatomic distance. With regard to the calculation of lifetimes and line shifts, the macroscopic approach is simple. It is well known that, according to Fermi's golden rule, the rate of spontaneous decay $\Gamma$ of an excited atom [position ${\bf r}_A$, (real) transition dipole moment ${\bf d}$, transition frequency $\omega_A$] can be expressed in terms of an electric-field correlation function as follows [see, e.g., \citet{Loudon83}]: \begin{equation} \label{1.1} \Gamma = \frac{2\pi}{\hbar^2}\!\! \int_0^\infty\!\!{\rm d}\omega\, \langle 0|{\bf d}\hat{\bf E}({\bf r}_{\rm A},\omega)\!\otimes\! \hat{\bf E}^\dagger({\bf r}_{\rm A},\omega_{\rm A}){\bf d}|0 \rangle \end{equation} [cf. Eqs.~(\ref{e2.1}) -- (\ref{e2.2})]. It is also well known [see, e.g., \citet{Abrikosov75}] that, in agreement with the dissipation-fluctuation theorem, the relation \begin{eqnarray} \label{1.2} \lefteqn{ \langle 0|\hat{\mbb{E}}({\bf r},\omega)\otimes \hat{\mbb{E}}{^\dagger}({\bf r}',\omega') | 0 \rangle } \nonumber\\&& = \frac{\hbar\omega^2}{\pi\epsilon_0c^2} \,{\rm Im}\,\mbb{G}({\bf r},{\bf r}',\omega) \,\delta(\omega-\omega') \qquad \end{eqnarray} ($c$, vacuum velocity of light) is valid, where $\mbb{G}({\bf r},{\bf r}',\omega)$ is the Green tensor of the classical, macroscopic Maxwell equations.
Combining Eqs.~(\ref{1.1}) and (\ref{1.2}) yields \begin{equation} \label{e3.12} \Gamma = \frac{2\omega_{\rm A}^2}{\hbar\varepsilon_0 c^2} \,{\bf d}\,{\rm Im}\, \mbb{G}({\bf r}_{\rm A},{\bf r}_{\rm A},\omega_{\rm A})\,{\bf d} \end{equation} [see also \citet{Agarwal75,Wylie84,Wylie85}]. The line shift can be calculated in a similar way to obtain \begin{equation} \label{e3.13} \delta\omega_A = {{\cal P} \over \pi\hbar\varepsilon_0} \!\int_0^\infty \!\!{\rm d}\omega\, {\omega^2\over c^2} \frac{{\bf d}\,{\rm Im}\,\mbb{G}({\bf r}_A,{\bf r}_A,\omega)\,{\bf d}} {\omega-\omega_A} \end{equation} [${\cal P}$, principal value; for a more rigorous derivation of Eqs.~(\ref{e3.12}) and (\ref{e3.13}), see Section \ref{sec3.1.1}]. Hence, knowing the Green tensor of the classical problem for given complex permittivity, the decay rate and the line shift are known as well. Equations (\ref{e3.12}) and (\ref{e3.13}) were used in order to calculate decay rates and line shifts for an excited atom near a realistic (i.e., absorbing) metallic sphere [\citet{Ruppin82,Agarwal83}], near an absorbing interface [\citet{Agarwal75,Agarwal77, Wylie84,Wylie85}], and in a planar cavity filled with an absorbing medium [\citet{Tomas99}]. Based on Eq.~(\ref{e3.12}), spontaneous emission of an excited atom in absorbing bulk material was studied [\citet{Barnett92}], including local field corrections [\citet{Barnett96}]. The associated line shift (without local-field correction) was considered in Welton's interpretation [\citet{Matloob00a}]. Spontaneous decay of an atom at the center of an absorbing sphere has been calculated in \citet{Tomas01} (with local field correction). 
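The principal-value integral in Eq.~(\ref{e3.13}) is easily evaluated numerically once a model for the weight function $(\omega^2/c^2)\,{\bf d}\,{\rm Im}\,\mbb{G}\,{\bf d}$ is chosen. The Python sketch below assumes, purely for illustration, a single Lorentzian resonance for this weight (all parameter values are hypothetical) and checks a paired-point principal-value quadrature, taken over the whole frequency axis, against the closed form obtained by contour integration.

```python
import math

# Hypothetical Lorentzian model for the weight function in Eq. (e3.13):
#   g(w) = D / ((w - wC)^2 + D^2)   (arbitrary units; illustration only)
wC, D, wA = 3.0, 1.0, 0.0   # model resonance centre, half width, atomic frequency

def g(w):
    return D / ((w - wC)**2 + D**2)

def pv_over_pole(g, wA, S=500.0, n=200_000):
    """PV int dw g(w)/(w - wA) over the whole axis via symmetric pairing:
       PV = int_0^S ds [g(wA + s) - g(wA - s)]/s   (midpoint rule)."""
    ds = S / n
    acc = 0.0
    for k in range(n):
        s = (k + 0.5) * ds   # midpoints avoid the pole at s = 0
        acc += (g(wA + s) - g(wA - s)) / s
    return acc * ds

numeric = pv_over_pole(g, wA)
# Contour integration gives  pi (wC - wA) / ((wC - wA)^2 + D^2)
exact = math.pi * (wC - wA) / ((wC - wA)**2 + D**2)
print(numeric, exact)
```

The pairing about the pole makes the quadrature regular, since the integrand tends to $2g'(\omega_A)$ as $s\to 0$.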
In what follows we restrict our attention to the macroscopic approach that is based on QED in dispersing and absorbing media, within the framework of a source-quantity representation of the electromagnetic field in terms of the (classical) Green tensor of the macroscopic Maxwell equations and appropriately chosen fundamental bosonic fields [\citet{Huttner92,Ho93,Matloob95,Gruner96,Ho98,Scheel98,Knoell01}]. The quantization scheme is outlined in Section \ref{sec:quan}. In Section \ref{sec:Gen_Theor}, the basic formulas for studying the spontaneous decay of an excited atom are given, which cover both the strong- and the weak-coupling regime. Section \ref{sec:realcavity} is devoted to spontaneous decay in bulk material, with special emphasis on local-field effects. The problem of spontaneous decay near a planar interface is considered in Section \ref{sec:planar}, and Sections \ref{sec:beyond} and \ref{sec:microsphere}, respectively, present results for an atom in a spherical cavity and near a microsphere. In Section \ref{sec:2atoms} a system of two atoms coupled to a microsphere is analyzed, with special emphasis on entangled-state preparation. Finally, a summary is given in Section \ref{sec:concl}. \setcounter{equation}{0} \section{QUANTIZATION SCHEME} \label{sec:quan} Following \citet{Ho98,Scheel98,Knoell01}, we first consider the electromagnetic field in the presence of dispersing and absorbing macroscopic bodies in the case where no additional atomic sources are present.
The electric-field operator $\hat{\bf E}$ can be represented in the form of \begin{eqnarray} \label{e2.1} &\displaystyle \hat{\bf E}({\bf r}) = \hat{\bf E}^{(+)}({\bf r}) + \hat{\bf E}^{(-)}({\bf r}), \\[.5ex] \label{e2.1a} &\displaystyle \hat{\bf E}^{(-)}({\bf r}) = \big[\hat{\bf E}^{(+)}({\bf r})\big]^\dagger ,& \\[.5ex] \label{e2.2} &\displaystyle \hat{\bf E}^{(+)}({\bf r}) = \int_{0}^{\infty} {\rm d}\omega\, \underline{\hat{\bf E}}({\bf r},\omega) ,& \end{eqnarray} and the induction-field operator $\hat{\bf B}$ accordingly. The fields $\underline{\hat{\bf E}}$ and $\underline{\hat{\bf B}}$ then satisfy the macroscopic Maxwell equations \begin{equation} \label{e2.3} \mbb{\nabla} \cdot \underline{\hat{\bf B}}({\bf r},\omega) = 0, \end{equation} \begin{equation} \label{e2.4} \mbb{\nabla} \cdot \left[ \varepsilon_0 \varepsilon({\bf r},\omega) \underline{\hat{\bf E}}({\bf r},\omega)\right] = \underline{\hat{\rho}}_{\rm N}({\bf r},\omega) , \end{equation} \begin{equation} \label{e2.5} \mbb{\nabla} \times \underline{\hat{\bf E}}({\bf r},\omega) = i\omega \underline{\hat{\bf B}}({\bf r},\omega) , \end{equation} \begin{equation} \label{e2.6} \mbb{\nabla} \!\times\! \underline{\hat{\bf B}}({\bf r},\omega) \!=\! \mu_0 \underline{\hat{\bf j}}_{\rm N}({\bf r},\omega) \!-\!\frac{i\omega}{c^2}\,\varepsilon({\bf r},\omega) \underline{\hat{\bf E}}({\bf r},\omega). \end{equation} As already mentioned, the real part $\varepsilon_{R}$ and the imaginary part $\varepsilon_{I}$ of the complex (relative) permittivity $\varepsilon({\bf r},\omega)$ satisfy (for any ${\bf r}$) the Kramers--Kronig relations.
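Since $\varepsilon({\bf r},\omega)$ is required to satisfy the Kramers--Kronig relations, any model permittivity used in explicit calculations should be consistent with them. The Python sketch below checks this numerically for a single-resonance Lorentz-oscillator permittivity (a standard textbook model, chosen here as an assumption rather than taken from the text); the midpoint grid is chosen symmetric about the evaluation frequency so that the principal-value pole cancels between paired grid points.

```python
import math

# Single-resonance Lorentz-oscillator permittivity (illustrative model):
#   eps(w) = 1 + wp2 / (wT^2 - w^2 - i*gam*w),   arbitrary units
wp2, wT, gam = 1.0, 2.0, 0.1

def eps_im(w):
    return wp2 * gam * w / ((wT**2 - w**2)**2 + (gam * w)**2)

def eps_re(w):
    return 1.0 + wp2 * (wT**2 - w**2) / ((wT**2 - w**2)**2 + (gam * w)**2)

def kk_re(w, wmax=100.0, n=50_000):
    """Kramers-Kronig reconstruction
         Re eps(w) = 1 + (2/pi) PV int_0^inf dw' w' Im eps(w')/(w'^2 - w^2).
    The midpoint grid is symmetric about w' = w (w/dw must be an integer),
    so the principal-value pole cancels pairwise."""
    dw = wmax / n
    acc = 0.0
    for k in range(n):
        x = (k + 0.5) * dw
        acc += x * eps_im(x) / (x * x - w * w)
    return 1.0 + (2.0 / math.pi) * acc * dw

w0 = 1.0   # here w0/dw = 500, an integer, as required for the symmetric grid
reconstructed, direct = kk_re(w0), eps_re(w0)
print(reconstructed, direct)
```

The reconstructed real part agrees with the direct evaluation to well below a percent, which would not be the case for a non-causal (ad hoc) frequency dependence.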
The operator noise charge and current densities $\underline{\hat{\rho}}_{\rm N}({\bf r},\omega)$ and $\underline{\hat{\bf j}}_{\rm N}({\bf r},\omega)$, respectively, which are associated with absorption, are related to the operator noise polarization $\underline{\hat{\bf P}}_{\rm N}({\bf r},\omega)$ as \begin{eqnarray} \label{e2.7} &&\underline{\hat{\rho}}_{\rm N}({\bf r},\omega) = - \mbb{\nabla} \cdot \underline{\hat{\bf P}}_{\rm N}({\bf r},\omega) , \qquad \\[.5ex] \label{e2.8} &&\underline{\hat{\bf j}}_{\rm N}({\bf r},\omega) = -i\omega \underline{\hat{\bf P}}_{\rm N}({\bf r},\omega) , \qquad \end{eqnarray} where \begin{equation} \label{e2.9} \underline{\hat{\bf P}}_{\rm N}({\bf r},\omega) = i \sqrt{\frac{\hbar \varepsilon_0}{\pi} \varepsilon_{I}({\bf r},\omega)} \,\hat{\bf f}({\bf r},\omega). \end{equation} Here, $\hat{\bf f}({\bf r},\omega)$ and $\hat{\bf f}^\dagger({\bf r},\omega)$ are bosonic fields which play the role of the fundamental variables of the composed system (electromagnetic field and medium including a dissipative system), \begin{equation} \label{e2.10} \big[ \hat{f}_i({\bf r},\omega), \hat{f}^{\dagger}_j({\bf r}',\omega')\big] = \delta_{ij} \delta({\bf r}\!-\!{\bf r}') \delta(\omega\!-\!\omega') , \end{equation} \begin{equation} \label{e2.11} \big[ \hat{f}_i({\bf r},\omega), \hat{f}_j({\bf r}',\omega')\big] = 0 . \end{equation} {F}rom Eqs.~(\ref{e2.3}) -- (\ref{e2.9}) it follows that $\underline{\hat{\bf E}}$ can be written in the form \begin{eqnarray} \label{e2.12} \lefteqn{ \hat{\underline{\bf E}}({\bf r},\omega) = i \sqrt{\frac{\hbar}{\pi\varepsilon_0}}\,\frac{\omega^2}{c^2} } \nonumber\\[.5ex]&&\hspace{-2.5ex}\times\!
\!\!\int \!\!{\rm d}^3{\bf r}' \sqrt{\varepsilon_{I}({\bf r}',\omega)} \,\mbb{G}({\bf r},{\bf r}',\omega) {}\hat{\bf f}({\bf r}',\omega), \qquad \end{eqnarray} and $\hat{\underline{\bf B}}$ $\!=$ $\!(i\omega)^{-1} \mbb{\nabla}\times\hat{\underline{\bf E}}$ accordingly, where $\mbb{G}({\bf r},{\bf r}',\omega)$ is the classical Green tensor satisfying the equation \begin{eqnarray} \label{e2.13} \left[ \frac{\omega^2}{c^2}\,\varepsilon({\bf r},\omega) \!-\!\mbb{\nabla}\!\times\!\mbb{\nabla}\!\times\! \right] \mbb{G}({\bf r},{\bf r}',\omega) \!=\! -\mbb{\delta}({\bf r}\!-\!{\bf r}') \ \ \end{eqnarray} together with the boundary condition at infinity [$\mbb{\delta}({\bf r})$, dyadic $\delta$-function]. In this way, the electric field and the induction field are expressed in terms of a continuum set of the bosonic fields $\hat{\bf f}({\bf r},\omega)$ [and $\hat{\bf f}^\dagger ({\bf r},\omega)$], and the Hamiltonian of the composed system reads (without the infinite ground-state energy) \begin{equation} \label{e2.14} \hat{H} = \int\! {\rm d}^3{\bf r} \! \int_0^\infty \!{\rm d}\omega \,\hbar\omega \,\hat{\bf f}^\dagger({\bf r},\omega){}\hat{\bf f}({\bf r},\omega). \end{equation} Using Eq.~(\ref{e2.12}) [together with Eqs.~(\ref{e2.1}) and (\ref{e2.2})], one can introduce scalar and vector potentials $\hat{\varphi}$ and $\hat{\bf A}$, respectively, and express them in terms of the fundamental bosonic fields. In particular, in the Coulomb gauge one obtains \begin{eqnarray} \label{e2.15} &\displaystyle -\mbb{\nabla} \hat{\varphi}({\bf r}) = \hat{\bf E}^\parallel({\bf r}) , & \\[.5ex] \label{e2.16} &\displaystyle \hspace{-5ex} \hat{\bf A}({\bf r}) \!=\! \!\int_0^\infty\!\! 
{\rm d}\omega \, (i\omega)^{-1} \underline{\hat{\bf E}}^\perp({\bf r},\omega) + {\rm H.c.}, & \end{eqnarray} where \begin{equation} \label{e2.17} \hat{\bf E}^{\perp(\parallel)}({\bf r}) = \int \!{\rm d}^3{\bf r}' \, \mbox{\boldmath $\delta$}^{\perp(\parallel)} ({\bf r}\!-\!{\bf r}') {} \, \hat{\bf E}({\bf r}'), \end{equation} with $\mbox{\boldmath $\delta$}^\perp({\bf r})$ and $\mbox{\boldmath $\delta$}^\parallel({\bf r})$ being the dyadic transverse and longitudinal $\delta$-functions, respectively. We now consider the interaction of the medium-assisted electromagnetic field with additional point charges $q_\alpha$. Applying the minimal-coupling scheme, we may write the complete Hamiltonian in the form of \begin{eqnarray} \label{e2.18} \lefteqn{ \hat{H} = \!\int\! {\rm d}^3{\bf r} \!\int_0^\infty\! {\rm d}\omega \,\hbar\omega \,\hat{\bf f}^\dagger({\bf r},\omega){}\hat{\bf f}({\bf r},\omega) } \nonumber\\&&\hspace{2ex} + \sum_\alpha {1\over 2m_\alpha} \left[ \hat{\bf p}_\alpha - q_\alpha \hat{\bf A}(\hat{\bf r}_\alpha) \right]^2 \nonumber\\&&\hspace{-10ex} +\,{\textstyle{1\over 2}} \int {\rm d}^3{\bf r}\, \hat{\rho}_A({\bf r}) \hat{\varphi}_A({\bf r}) + \int {\rm d}^3{\bf r}\, \hat{\rho}_A({\bf r}) \hat{\varphi}({\bf r}) , \end{eqnarray} where $\hat{\bf r}_\alpha$ is the position operator and $\hat{\bf p}_\alpha$ is the canonical momentum operator of the $\alpha$th charged particle of mass $m_\alpha$. The Hamiltonian (\ref{e2.18}) consists of four terms. The first term is the energy observed when the particles are absent [cf. Eq.~(\ref{e2.14})]. 
The second term is the kinetic energy of the particles, and the third and fourth terms are their Coulomb energies, where the potential $\hat{\varphi}_A$ can be given by \begin{equation} \label{e2.19} \hat{\varphi}_A({\bf r}) = \int {\rm d}^3{\bf r}' \,\frac {\hat{\rho}_A({\bf r}')} {4\pi\varepsilon_0|{\bf r}-{\bf r}'|} \,, \end{equation} with \begin{equation} \label{e2.20} \hat{\rho}_A({\bf r}) = \sum_\alpha q_\alpha \delta({\bf r}-\hat{\bf r}_\alpha) \end{equation} being the charge density. Obviously, the last term in Eq.~(\ref{e2.18}) is the Coulomb energy of interaction of the particles with the medium. Note that all terms are expressed in terms of the dynamical variables $\hat{\bf f}({\bf r},\omega)$, $\hat{\bf f}^\dagger({\bf r},\omega)$, $\hat{\bf r}_\alpha$, and $\hat{\bf p}_\alpha$. It is worth noting that the quantization scheme is fully equivalent to the so-called auxiliary-field scheme introduced by \citet{Tip98} [for details, see \citet{Tip01}]. \setcounter{equation}{0} \section{SPONTANEOUS DECAY:\\ GENERAL FORMALISM} \label{sec:Gen_Theor} \subsection{Basic equations} \label{subsec:basic} Let us consider $N$ two-level atoms [positions ${\bf r}_A$, transition frequencies $\omega_A$ ($A$ $\!=$ $\!1,2,...,N$)] that resonantly interact with radiation via electric-dipole transitions (dipole moments ${\bf d}_A$). Let us further assume that the atoms are sufficiently far from each other, so that the interatomic Coulomb interaction can be ignored. In this case, the electric-dipole approximation and the rotating wave approximation apply, and the minimal-coupling Hamiltonian takes the form of [\citet{Ho00,Knoell01}] \begin{eqnarray} \label{e3.1} \lefteqn{ \hat{H} = \int \!{\rm d}^3{\bf r} \!
\int_0^\infty \!\!{\rm d}\omega \,\hbar\omega \,\hat{\bf f}^\dagger({\bf r},\omega){}\hat{\bf f}({\bf r},\omega) } \nonumber\\[.5ex]&& \hspace{-5ex} + \sum_{A} {\textstyle{1\over 2}}\hbar\omega_A \hat{\sigma}_{Az} - \sum_{A} \big[ \hat{\sigma}_A^\dagger \hat{\bf E}^{(+)}({\bf r}_A){}{\bf d}_A + {\rm H.c.}\big]. \qquad \end{eqnarray} Here and in the following, the two-level atoms are described in terms of the Pauli operators $\hat{\sigma}_A$, $\hat{\sigma}_A^\dagger$, and $\hat{\sigma}_{Az}$. For a single-quantum excitation of the system, the system wave function at time $t$ can be written as \begin{eqnarray} \label{e3.2} \lefteqn{ |\psi(t)\rangle = \sum_{A} C_{U_A}(t) e^{-i(\omega_A-\bar{\omega})t} |U_A\rangle |\{0\}\rangle } \nonumber\\[.5ex]&&\hspace{0ex} +\!\int\! {\rm d}^3{\bf r} \int_0^\infty\!\! {\rm d}\omega\, \Bigl[ C_{Li}({\bf r},\omega,t) \nonumber\\[.5ex]&&\hspace{2ex} \times \, e^{-i (\omega-\bar{\omega})t} |L\rangle |\{1_i({\bf r},\omega)\} \rangle \Bigr] \qquad \end{eqnarray} ($\bar{\omega}$ $\!=$ $\!$ $\!\frac{1}{2}\sum_A\omega_A$). Here, $|U_A\rangle$ is the excited atomic state, where the $A$th atom is in the upper state and all the other atoms are in the lower state, and $|L\rangle$ is the atomic state, where all atoms are in the lower state. Accordingly, $|\{0\}\rangle$ is the vacuum state of the rest of the system, and $|\{1_i({\bf r},\omega)\}\rangle$ is the state, where it is excited in a single-quantum Fock state. 
The Schr\"odinger equation yields \begin{eqnarray} \label{e3.3} \lefteqn{\hspace{-1ex} \dot{C}_{U_A}(t) = \frac{-1}{\sqrt{\pi\varepsilon_0\hbar}} \!\int_0^\infty \!\!{\rm d}\omega \!\int\!{\rm d}^3{\bf r} \bigg\{ \frac{\omega^2}{c^2}\,e^{-i(\omega-\omega_A)t} } \nonumber \\[0ex]&& \hspace{-5ex}\times \Bigl[ \sqrt{\varepsilon_{I}({\bf r},\omega)} \,{\bf d}_A\mbb{G}({\bf r}_A,{\bf r},\omega) {\bf C}_{L}({\bf r},\omega,t) \Bigr] \bigg\}, \end{eqnarray} \begin{eqnarray} \label{e3.4} \lefteqn{ \dot{\bf C}_{L}({\bf r},\omega,t) = \sum_{A} \frac{1}{\sqrt{\pi\varepsilon_0\hbar}} \,\frac{\omega^2}{c^2} \,e^{i(\omega-\omega_A)t} } \nonumber \\[.5ex]&&\hspace{-2ex}\times\, \sqrt{\varepsilon_{I}({\bf r},\omega)} {\bf d}_A\mbb{G}^\ast({\bf r}_A,{\bf r},\omega)\, C_{U_A}(t). \quad \end{eqnarray} We now substitute the result of formal integration of Eq.~(\ref{e3.4}) into Eq.~(\ref{e3.3}). Making use of the relationship \begin{eqnarray} \label{e3.5} \lefteqn{ {\rm Im}\,G_{ij}({\bf r},{\bf r'},\omega) = \int {\rm d}^3{\bf s}\, \bigg[ \frac{\omega^2}{c^2}\, \varepsilon_{I}({\bf s},\omega) }\nonumber\\[.5ex]&& \hspace{4ex}\times\, G_{ik}({\bf r},{\bf s},\omega) G^\ast_{jk}({\bf r'},{\bf s},\omega) \bigg], \qquad \end{eqnarray} we obtain the following (closed) system of integro-differential equations: \begin{eqnarray} \label{e8.5} \lefteqn{\hspace{0ex} \dot{C}_{U_A}(t) = \sum_{A'} \int_0^t {\rm d}t'\, K_{AA'}(t,t')\, C_{U_{A'}}(t') } \nonumber\\[.5ex]&&\hspace{-3ex} -\frac{1}{\sqrt{\pi\varepsilon_0\hbar}} \!\int_0^\infty \!\!{\rm d}\omega \!\int\!{\rm d}^3{\bf r} \bigg\{ \frac{\omega^2}{c^2}\,e^{-i(\omega-\omega_A)t} \nonumber\\[.5ex]&& \hspace{-5ex}\times \Big[ \sqrt{\varepsilon_{I}({\bf r},\omega)} \,{\bf d}_A\mbb{G}({\bf r}_A,{\bf r},\omega) {\bf C}_{L}({\bf r},\omega,0) \Big] \bigg\}, \end{eqnarray} \begin{eqnarray} \label{e8.6} \lefteqn{\hspace{0ex} K_{AA'}(t,t') = \frac{-1 } {\hbar\pi\varepsilon_0} \int_0^\infty {\rm d}\omega \biggl[ {\omega^2\over c^2} e^{-i(\omega-\omega_A)t} 
} \nonumber\\[.5ex]&&\hspace{-1ex}\times\, e^{i(\omega-\omega_{A'})t'} {\bf d}_A {\rm Im}\,\mbb{G}({\bf r}_A,{\bf r}_{A'},\omega) {\bf d}_{A'} \biggr]. \qquad \end{eqnarray} Note that \begin{equation} \label{e8.6a} K_{AA'}(t,t')=K_{A'A}^\ast(t',t), \end{equation} because of the reciprocity theorem. The excitation can initially reside in either an atom or the medium-assisted electromagnetic field. The latter case, i.e., \mbox{${\bf C}_L({\bf r},\omega,0)$ $\!\neq$ $\!0$} in Eq.~(\ref{e8.5}), could be realized, for example, by coupling the field first to an excited atom {$D$} in a time interval $\Delta t$ such that, according to Eq.~(\ref{e3.4}), ${\bf C}_L({\bf r},\omega,0)$ reads \begin{eqnarray} \label{e3.35} \lefteqn{ {\bf C}_{L}({\bf r},\omega,0) = \int_{-\Delta t}^0 {\rm d}t' \frac{1}{\sqrt{\pi\varepsilon_0\hbar}} \,\frac{\omega^2}{c^2} \,e^{i(\omega-\omega_D)t'} } \nonumber \\[.5ex]&&\hspace{-2ex}\times\, \sqrt{\varepsilon_{I}({\bf r},\omega)} {\bf d}_D\mbb{G}^\ast({\bf r}_D,{\bf r},\omega)\, C_{U_D}(t'), \qquad \end{eqnarray} where $C_{U_D}(t)$ describes the single-atom decay according to Eq.~(\ref{e3.8}) given below. Substitution of the expression (\ref{e3.35}) into Eq.~(\ref{e8.5}) then yields \begin{eqnarray} \label{e8.6b} \lefteqn{ \dot{C}_{U_A}(t) = \sum_{A'} \int_0^t {\rm d}t'\, K_{AA'}(t,t')\,C_{U_{A'}}(t') } \nonumber\\[.5ex]&&\hspace{2ex} + \int_{-\Delta t}^0 {\rm d}t'\, K_{AD}(t,t')\,C_{U_D}(t'). \qquad \end{eqnarray} \subsection{Single-Atom Decay} \label{sec3.1} \subsubsection{Atomic Dynamics} \label{sec3.1.1} Let us restrict our attention to the spontaneous decay of a single atom. For the initial condition ${\bf C}_{L}({\bf r},\omega,0)$ $\!=$ $\!0$, Eq.~(\ref{e8.5}) becomes ($C_{U}$ $\!\equiv$ $\!C_{U_A}$, \mbox{${\bf d}$ $\!\equiv$ $\!{\bf d}_A$}) \begin{equation} \label{e3.6} \dot{C}_{U}(t) =\int_0^t {\rm d}t'\, K(t-t') C_{U}(t'), \end{equation} where \begin{equation} \label{e3.7} K(t-t') = K_{AA}(t,t'). 
\end{equation} Integrating both sides of Eq.~(\ref{e3.6}) with respect to time, we easily derive, on changing the order of integrations on the right-hand side, a Volterra integral equation of the second kind, \begin{equation} \label{e3.8} C_{U}(t) =\int_0^t {\rm d}t'\, \bar{K}(t-t') C_{U}(t') + 1 \end{equation} [$C_{U}(0)$ $=$ $\!1$], where, according to Eqs.~(\ref{e3.7}) and (\ref{e8.6}), \begin{eqnarray} \label{e3.9} \lefteqn{\hspace{-2ex} \bar{K}(t\!-\!t') = \frac{-1}{\hbar\pi\varepsilon_0} \!\int_0^\infty \!\!{\rm d}\omega \bigg\{\!\Big[1\! -\!e^{-i(\omega-\omega_A)(t-t')}\Big] } \nonumber\\[.5ex]&&\hspace{4ex} \times\,{\omega^2\over c^2}\, \frac{{\bf d}{\rm Im}\,\mbb{G}({\bf r}_A,{\bf r}_A,\omega) {\bf d}}{i(\omega-\omega_A)}\bigg\}. \qquad \end{eqnarray} It is worth noting that Eqs.~(\ref{e3.6}) and (\ref{e3.8}) apply to the spontaneous decay of an atom in the presence of an arbitrary configuration of dispersing and absorbing macroscopic bodies. All the matter parameters that are relevant for the atomic evolution are contained, via the Green tensor, in the kernel functions (\ref{e3.7}) and (\ref{e3.9}). In particular, when absorption is disregarded and the permittivity is regarded as a real, frequency-independent quantity (which, of course, can change with space), the formalism yields the results of standard mode decomposition, obtained by Laplace transform techniques [\citet{Lewenstein88a,Lewenstein88b,John94}] and delay-differential-equation techniques [\citet{Cook87,Feng89,Ho99}]. It should be pointed out that the Green tensor is available for a large variety of configurations, such as planarly, spherically, and cylindrically multilayered media [\citet{Tai94,Chew95}].
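Numerically, a Volterra equation of the type (\ref{e3.8}) can simply be marched forward in time by quadrature. The Python sketch below uses the composite trapezoidal rule and, as a correctness check, the memoryless Markovian kernel $\bar K=-\Gamma/2$ (line shift dropped), for which the exact solution is $C_U(t)=e^{-\Gamma t/2}$; the kernel choice and parameter values are illustrative assumptions.

```python
import math

def solve_volterra(kbar, T=1.0, n=1000):
    """March  C(t) = 1 + int_0^t dt' kbar(t - t') C(t')  [cf. Eq. (e3.8)]
    with the composite trapezoidal rule; kbar may be complex-valued."""
    h = T / n
    C = [1.0 + 0.0j]                      # C(0) = 1
    for m in range(1, n + 1):
        tm = m * h
        s = 0.5 * kbar(tm) * C[0]
        for j in range(1, m):
            s += kbar(tm - j * h) * C[j]
        # solve the implicit trapezoidal step for C(tm)
        C.append((1.0 + h * s) / (1.0 - 0.5 * h * kbar(0.0)))
    return C

# Markovian check: constant kernel kbar = -Gamma/2, exact C(t) = exp(-Gamma t/2)
Gamma = 2.0
C = solve_volterra(lambda tau: -0.5 * Gamma)
numeric, exact = abs(C[-1]), math.exp(-0.5 * Gamma * 1.0)
print(numeric, exact)
```

For a non-Markovian kernel the same marching scheme applies unchanged; only the function passed as `kbar` is replaced.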
In order to study the case where the atom is surrounded by matter, the atom should be assumed to be localized in some small free-space region, so that the Green tensor at the position of the atom reads \begin{equation} \label{e3.16} \mbb{G}({\bf r}_A,{\bf r}_A,\omega) \!=\!\mbb{G}^V({\bf r}_A,{\bf r}_A,\omega) \!+\! \mbb{G}^R({\bf r}_A,{\bf r}_A,\omega), \end{equation} where $\mbb{G}^V$ is the vacuum Green tensor with \begin{equation} \label{e3.17} {\rm Im}\,\mbb{G}^V({\bf r}_A,{\bf r}_A,\omega) = {\omega \over 6\pi c} \, \mbb{I} \end{equation} (Appendix \ref{app:bulk}), and $\mbb{G}^R$ describes the effects of reflections at the (surface of discontinuity of the) surrounding medium. The contribution of $\mbb{G}^V$ to $\bar{K}$ in Eq.~(\ref{e3.9}) can be treated in the Markov approximation (see below), thus \begin{eqnarray} \label{e3.19} \lefteqn{ \bar{K}(t-t') = - {\textstyle\frac{1}{2}}\Gamma_0 + \frac{1}{\hbar\pi\varepsilon_0} \int_0^\infty \!\!{\rm d}\omega\, {\omega^2\over c^2} } \nonumber\\&&\hspace{-2ex}\times\, \frac{{\bf d}{\rm Im}\,\mbb{G}^R({\bf r}_A, {\bf r}_A,\omega){\bf d}} {i(\omega-\omega_A)} \Big[e^{-i(\omega-\omega_A)(t-t')}\!-\!1 \Big], \nonumber\\ \end{eqnarray} where $\Gamma_0$ is the well-known decay rate in free space, \begin{equation} \label{e3.18} \Gamma_0= {\omega_A^3d^2 \over 3\hbar\pi\varepsilon_0c^3}\,. \end{equation} The integro-differential equation (\ref{e3.6}) [or the integral equation (\ref{e3.8})] together with the kernel function (\ref{e3.7}) [or (\ref{e3.19})] can be regarded as the basic equation for studying the influence of an arbitrary configuration of dispersing and absorbing matter on the spontaneous decay of an excited atom. 
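As an elementary consistency check of Eq.~(\ref{e3.12}), inserting the vacuum Green tensor (\ref{e3.17}) must reproduce the free-space rate (\ref{e3.18}). The short Python sketch below confirms this numerically; the transition frequency and dipole moment are arbitrary illustrative values, not data from the text.

```python
import math

# SI constants and arbitrary illustrative atomic parameters
hbar = 1.054571817e-34     # J s
eps0 = 8.8541878128e-12    # F/m
c = 2.99792458e8           # m/s
wA = 2 * math.pi * 3.8e14  # rad/s, assumed optical transition frequency
d = 1.0e-29                # C m, assumed transition dipole moment

# Eq. (e3.12) with the vacuum Green tensor, Im G^V(rA, rA, wA) = wA/(6 pi c) I:
Gamma = 2 * wA**2 / (hbar * eps0 * c**2) * d * (wA / (6 * math.pi * c)) * d

# Eq. (e3.18), the free-space decay rate
Gamma0 = wA**3 * d**2 / (3 * hbar * math.pi * eps0 * c**3)

print(Gamma, Gamma0)
```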
\paragraph{Weak Coupling} When the Markov approximation applies, i.e., when in a coarse-grained description of the atomic motion memory effects are disregarded, then we may let \begin{equation} \label{e3.10} \frac{e^{i(\omega_A-\omega)(t-t')}-1} {i(\omega_A-\omega)} \quad\to\quad \zeta(\omega_A-\omega) \end{equation} in Eq.~(\ref{e3.9}) [$\zeta(x)$ $\!=$ $\!\pi\delta(x)$ $\!+$ $\!i{\cal P}/x$], and thus \begin{equation} \label{e3.11} \bar{K}(t-t') = - {\textstyle\frac{1}{2}} \Gamma + i \delta\omega_A, \end{equation} where $\Gamma$ and $\delta\omega_A$ are respectively given by Eqs.~(\ref{e3.12}) and (\ref{e3.13}). Substitution of the expression (\ref{e3.11}) into Eq.~(\ref{e3.8}) for the kernel function yields the familiar (weak-coupling) result that \begin{equation} \label{e3.15} C_{U}(t) = \exp\!\left[\left(-{\textstyle\frac{1}{2}} \Gamma+i\delta\omega_A\right)t\right]. \end{equation} Application of Eqs.~(\ref{e3.12}), (\ref{e3.13}), (\ref{e3.16}), and (\ref{e3.17}) yields \begin{equation} \label{e3.17a} \Gamma \!=\! \Gamma_0 \!+\! {2\omega_A^2\over \hbar\varepsilon_0c^2}\, {\bf d}{\rm Im}\,\mbb{G}^R({\bf r}_A,{\bf r}_A,\omega_A){\bf d}, \end{equation} \begin{eqnarray} \label{e3.14} \lefteqn{\hspace{-5ex} \delta\omega_A = {\omega_A^2\over \hbar\varepsilon_0c^2}\, \biggl[ {\bf d}{\rm Re}\,\mbb{G}^R({\bf r}_A,{\bf r}_A,\omega_A){\bf d} } \nonumber\\[.5ex]&& \hspace{-8ex} - {1\over \pi}\!\int_0^\infty \!\!{\rm d}\omega\, {\omega^2\over \omega_A^2} \frac{{\bf d}{\rm Im}\,\mbb{G}^R({\bf r}_A,{\bf r}_A,\omega){\bf d}} {\omega+\omega_A} \biggr]. \end{eqnarray} In Eq.~(\ref{e3.14}), the Kramers--Kronig relations have been used and the divergent contribution of the vacuum to the line shift is thought of as being included in the atomic transition frequency $\omega_A$. 
It is not difficult to see that in Eq.~(\ref{e3.14}) the second term, which is only weakly sensitive to the atomic transition frequency, is small compared to the first one and can therefore be neglected in general. \paragraph{Strong Coupling} When the atomic transition frequency approaches a resonance frequency of a resonator-like arrangement of macroscopic bodies, then the strength of the coupling between the atom and the electromagnetic field can increase to such an extent that the weak-coupling approximation fails and the integro-differential equation (\ref{e3.6}) must be considered. Let us assume, for simplicity, that only a single (field-)resonance line of Lorentzian shape is involved in the atom--field interaction. In this case, the kernel function (\ref{e3.7}) may be approximated by \begin{eqnarray} \label{e6.2} \lefteqn{\hspace{-.5ex} K(t-t') \simeq -\frac{\Gamma_C(\Delta\omega_C)^2}{2\pi} {\rm e}^{-i(\omega_C-\omega_A)(t-t')} } \nonumber\\[.5ex]&&\hspace{0ex} \times \int_{-\infty}^\infty \!{\rm d}\omega \, \frac{{\rm e}^{-i(\omega-\omega_C)(t-t')}}{(\omega-\omega_C)^2 +\left(\Delta\omega_C \right)^2} \nonumber \\[.5ex]&&\hspace{-4ex} =\!{\textstyle\frac{1}{2}}\Gamma_C \Delta\omega_C {\rm e}^{-i(\omega_C-\omega_A)(t-t')} {\rm e}^{-\Delta\omega_C|t-t'|}, \qquad \end{eqnarray} and thus the integro-differential equation (\ref{e3.6}) corresponds to the differential equation [\citet{Ho00}] \begin{eqnarray} \label{e6.3} \lefteqn{\hspace{-2ex} \ddot{C}_{U}(t) +\left[i(\omega_C-\omega_A) +\Delta\omega_C \right] \dot{C}_{U}(t) } \nonumber\\ && \hspace{8ex} +\,{\textstyle\frac{1}{2}}\Gamma_C \Delta\omega_C C_{U}(t) = 0. \quad \end{eqnarray} Here, $\omega_C$ and $\Delta\omega_C$ are respectively the mid-frequency and the line width of the field resonance associated with the bodies, and $\Gamma_C$ is the (weak-coupling) decay rate at $\omega_C$.
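Since Eq.~(\ref{e6.3}) is linear with constant coefficients, it can be solved exactly through its characteristic roots. The Python sketch below does so at resonance and confirms that, for $\Gamma_C\gg\Delta\omega_C$, the solution approaches damped Rabi oscillations $e^{-\Delta\omega_C t/2}\cos(\Omega t/2)$ with $\Omega=\sqrt{2\Gamma_C\Delta\omega_C}$ [cf. Eqs.~(\ref{e6.5}) and (\ref{e6.6})]; the parameter values are illustrative.

```python
import cmath, math

# Illustrative parameters (arbitrary units): strong coupling, on resonance
GammaC, dwC = 200.0, 1.0   # Gamma_C >> Delta omega_C
delta = 0.0                # omega_C - omega_A = 0

# Characteristic roots of  C'' + [i*delta + dwC] C' + (GammaC*dwC/2) C = 0
b = 1j * delta + dwC
disc = cmath.sqrt(b * b - 2.0 * GammaC * dwC)
lam1, lam2 = (-b + disc) / 2, (-b - disc) / 2

def C_exact(t):
    # exact solution with C(0) = 1, C'(0) = 0
    return (lam2 * cmath.exp(lam1 * t) - lam1 * cmath.exp(lam2 * t)) / (lam2 - lam1)

Omega = math.sqrt(2.0 * GammaC * dwC)   # Rabi frequency, Eq. (e6.6)

def C_rabi(t):                          # damped Rabi form, Eq. (e6.5)
    return math.exp(-0.5 * dwC * t) * math.cos(0.5 * Omega * t)

# maximum deviation over the first few Rabi periods
err = max(abs(C_exact(t) - C_rabi(t)) for t in [k * 0.01 for k in range(201)])
print(err)
```

The residual deviation is of order $\Delta\omega_C/\Omega$, consistent with the approximation made in deriving the damped Rabi form.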
Equation (\ref{e6.3}) typically applies to the strong-coupling regime for an arbitrary resonator configuration, provided that the field that effectively interacts with the atom can be regarded as being a single-resonance field of Lorentzian shape.\footnote{Equations of the type (\ref{e6.3}) can also be obtained within the framework of standard (Markovian) quantum noise theory, where an appropriately chosen undamped mode is coupled to a two-level atom and some reservoir [\citet{Sachdev84}].\label{fnote1}} In particular, when material absorption is disregarded, then the line broadening solely results from the radiative losses due to the input-output coupling [\citet{Cook87,Lai88,Feng89}]. Equation (\ref{e6.3}) reveals that the upper-state probability amplitude of the atom obeys the equation of motion for a damped harmonic oscillator. In the strong-coupling regime, where \mbox{$\omega_A$ $\!=$ $\!\omega_C$} and \mbox{$\Omega$ $\!\gg$ $\!\Delta\omega_C$}, damped Rabi oscillations are observed: \begin{equation} \label{e6.5} C_{U}(t) = {\rm e}^{-\Delta\omega_C t/2} \cos(\Omega t/2), \end{equation} where the Rabi frequency $\Omega$ reads \begin{equation} \label{e6.6} \Omega = \sqrt{2\Gamma_C \Delta\omega_C}\,. \end{equation} \subsubsection{Emitted-Light Intensity} \label{sec3.1.2} It is well known that the intensity of light registered by a point-like photodetector at position ${\bf r}$ and time $t$ is given by \begin{equation} \label{e3.20} I({\bf r},t) \equiv \langle \psi(t) | \hat{\bf E}^{(-)}({\bf r}){}\hat{\bf E}^{(+)}({\bf r}) | \psi(t) \rangle. \end{equation} The emitted-light intensity associated with the spontaneous decay of an excited atom in the presence of dispersing and absorbing matter can be obtained by combining Eqs. (\ref{e2.1}) -- (\ref{e2.2}), (\ref{e2.12}), (\ref{e3.2}), and (\ref{e3.5}). The result is \begin{eqnarray} \label{e3.21} \lefteqn{ I({\bf r},t) = \biggl| {\omega_A^2 \over \pi\varepsilon_0c^2} \int_0^t {\rm d}t' \int_0^\infty\!\! 
{\rm d}\omega\, \Big[ C_{U}(t') } \nonumber \\ && \times\, e^{-i(\omega-\omega_A)(t-t')} {\rm Im}\, \mbb{G}({\bf r},{\bf r}_A,\omega)\,{\bf d} \Big]\biggr|^2\!, \qquad\quad \end{eqnarray} where, in the spirit of the rotating wave approximation used, \mbox{$\omega^2\!=\!\omega_A^2$} has been set in the frequency integral. Again, all relevant matter parameters are contained in the Green tensor. In contrast to Eq.~(\ref{e3.8}) [together with the kernel function (\ref{e3.19})], Eq.~(\ref{e3.21}) requires information about the Green tensor at different space points. In particular, its dependence on space and frequency essentially determines the retardation effects. In the simplest case of free space we have \begin{eqnarray} \label{e3.22} \lefteqn{ {\rm Im}\,\mbb{G}^V({\bf r},{\bf r}_A,\omega) = {1\over 8i\pi\rho} \left( \mbb{I} - \frac{\mbb{\rho}\otimes\mbb{\rho}} {\rho^2} \right) } \nonumber\\[.5ex]&&\hspace{0ex} \times \left(e^{i\omega\rho/c} - e^{-i\omega\rho/c} \right) + {\cal O}\!\left(\rho^{-2}\right) \qquad \end{eqnarray} ($\mbb{\rho}$ $\!=$ $\!{\bf r}$ $\!-$ $\!{\bf r}_A$; Appendix \ref{app:bulk}). We substitute Eqs.~(\ref{e3.15}) ($\Gamma$ $\!=$ $\!\Gamma_0$) and (\ref{e3.22}) into Eq.~(\ref{e3.21}), calculate the time integral, and extend the lower limit in the frequency integral to $-\infty$, \begin{eqnarray} \label{e3.23} \lefteqn{\hspace{-5ex} \int_{-\infty}^\infty \!\!{\rm d}\omega \left(e^{i\omega\rho/c}\! -\! e^{-i\omega\rho/c} \right) \frac {e^{-(\Gamma_0/2+i\omega_A')t} \!-\! e^{-i\omega t} } {i[\omega \!-\! (\omega_A'\!-\! i\Gamma_0/2)]} } \nonumber\\[.5ex] && \hspace{-8ex} =\! - 2\pi \exp\!\left[ \left(-{\textstyle\frac{1}{2}}\Gamma_0 \!-\!i\omega_A'\right)\!\left(\!t\! -\!{\rho\over c}\right)\!\right] \Theta\!\left(\!t\!-\!{\rho\over c}\right) \end{eqnarray} [$\Theta(x)$, unit step function], where \begin{eqnarray} \label{e3.24} \omega_A' = \omega_A - \delta\omega_A\,. 
\end{eqnarray} Thus, the well-known (far-field) result \begin{equation} \label{e3.25} I({\bf r},t) = \left( {\omega_A^2d\sin\theta \over 4\pi\varepsilon_0c^2\rho} \right)^2 \!e^{-\Gamma_0(t-\rho/c)} \, \Theta(t\!-\!\rho/c) \end{equation} is recognized ($\theta$, angle between $\mbb{\rho}$ and $\mbb{d}$). It should be noted that the general expression (\ref{e3.21}) is valid for an arbitrary coupling regime. In particular, in the weak-coupling regime the Markov approximation applies, and $C_{U}(t')$ can be taken at \mbox{$t'$ $\!=$ $\!t$} and put in front of the time integral in Eq.~(\ref{e3.21}), with $C_{U}(t)$ being simply the exponential (\ref{e3.15}). Equation (\ref{e3.21}) thus simplifies to \begin{equation} \label{e3.26} I({\bf r},t) \simeq |{\bf F}({\bf r},{\bf r}_A,\omega_A)|^2 e^{-\Gamma t} , \end{equation} where \begin{eqnarray} \label{e3.27} \lefteqn{ {\bf F}({\bf r},{\bf r}_A,\omega_A) = -{i\omega_A^2 \over \varepsilon_0c^2} \biggl[ \mbb{G}({\bf r},{\bf r}_A,\omega_A){\bf d} } \nonumber\\[.5ex]&& -\, {1\over\pi} \int_0^\infty {\rm d}\omega\, \frac{{\rm Im}\, \mbb{G}({\bf r},{\bf r}_A,\omega){\bf d}} {\omega+\omega_A} \biggr]. \qquad \end{eqnarray} Since the second term on the right-hand side of Eq.~(\ref{e3.27}) is small compared to the first one, it can be omitted, and the spatial distribution of the emitted-light intensity (emission pattern) can be given by, on disregarding transit time delay, \begin{equation} \label{e3.28} |{\bf F}({\bf r},{\bf r}_A,\omega_A)|^2 \simeq \biggl| {\omega_A^2 \over \varepsilon_0c^2}\, \mbb{G}({\bf r},{\bf r}_A,\omega_A){\bf d} \biggr|^2\!. \end{equation} Material absorption gives rise to nonradiative decay. 
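As a numerical illustration, the retarded exponential law (\ref{e3.25}) can be sketched as follows; the time-independent prefactor is lumped into a constant $A$ (an assumption for illustration only), in units with $c=1$ and arbitrary parameter values:

```python
import math

def intensity(t, rho, Gamma0, A=1.0, c=1.0):
    """Far-field free-space intensity, Eq. (e3.25):
    I(r, t) = A**2 * exp(-Gamma0*(t - rho/c)) * Theta(t - rho/c),
    with the time-independent prefactor lumped into A."""
    tau = t - rho / c  # retarded time
    return 0.0 if tau < 0 else A**2 * math.exp(-Gamma0 * tau)

rho, Gamma0 = 5.0, 0.8  # arbitrary units (c = 1)
print(intensity(1.0, rho, Gamma0))  # 0.0: no light before t = rho/c
print(intensity(6.0, rho, Gamma0))  # exp(-0.8), about 0.449
```

The unit step function makes the causality of the signal explicit: the detector registers nothing before the transit time $\rho/c$ has elapsed.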
The fraction of the radiation energy that is actually emitted can be obtained by integration of the Poynting-vector expectation value with respect to time and over the surface of a sphere whose radius $r$ is much larger than the extension of the system consisting of the macroscopic bodies and the atom, \begin{equation} \label{e3.29} W = 2c\varepsilon_0\!\int_0^\infty \!\!{\rm d}t \!\int_0^{2\pi} \!\!d\phi \!\int_0^\pi \!\!d\theta\, r^2 \sin\theta \,I({\bf r},t). \end{equation} The ratio $W/W_0$ (\mbox{$W_0$ $\!=$ $\!\hbar\omega_A$}) then gives us a measure of the emitted radiation energy, and accordingly, \mbox{$1$ $\!-$ $\!W/W_0$} measures the energy absorbed by the bodies. \subsubsection{Emitted-Light Spectrum} \label{sec3.1.3} Next, let us consider the time-dependent power spectrum of the emitted light, which for sufficiently small passband width of the spectral apparatus can be given by [see, e.g., \citet{Vogel94}] \begin{eqnarray} \label{e3.30} \lefteqn{ S({\bf r},\omega_{S},T) = \int_0^T \!\!{\rm d}t_2 \!\int_0^T \!\!{\rm d}t_1 \Big[e^{-i\omega_S(t_2-t_1)} } \nonumber\\&& \hspace{4ex} \times\, \big\langle \hat{\bf E}^{(-)}({\bf r},t_2){} \hat{\bf E}^{(+)}({\bf r},t_1) \big\rangle\Big], \qquad \end{eqnarray} where $\omega_{S}$ is the setting frequency of the spectral apparatus and $T$ is the operating-time interval of the detector. In close analogy to the derivation of Eq.~(\ref{e3.21}), combination of Eqs.~(\ref{e2.1}) -- (\ref{e2.2}), (\ref{e2.12}), (\ref{e3.2}), and (\ref{e3.5}) leads to \begin{eqnarray} \label{e3.31} \lefteqn{ S({\bf r},\omega_{S},T) = \biggl| {\omega_A^2 \over \pi\varepsilon_0c^2} \int_0^T {\rm d}t_1 \Big[e^{i(\omega_{S}-\omega_A) t_1} } \nonumber\\[.5ex]&&\times \int_0^{t_1}{\rm d}t'\,C_{U}(t') \int_0^\infty{\rm d}\omega\, e^{-i(\omega-\omega_A) (t_1-t')} \nonumber\\[.5ex]&&\hspace{6ex}\times\, {\rm Im}\, \mbb{G}({\bf r},{\bf r}_A,\omega){\bf d} \Big]\biggr|^2 .
\qquad \end{eqnarray} Further calculation again requires knowledge of the Green tensor of the problem. Let us use Eq.~(\ref{e3.31}) to recover the free-space result. Following the line that has led from Eq.~(\ref{e3.21}) to Eq.~(\ref{e3.25}), we find that \begin{eqnarray} \label{e3.32} \lefteqn{ S({\bf r},\omega_{S},T) = \left( {\omega_A^2 d \sin\theta \over 4\pi \varepsilon_0c^2\rho} \right)^2 \Theta\!\left(T\!-\!\rho/c\right) } \nonumber\\&&\hspace{2ex}\times\, \left| \frac {e^{[-\Gamma_0/2+i(\omega_{S}-\omega_A')](T-\rho/c)} -1} {\omega_{S}- \omega_A'+i\Gamma_0/2} \right|^2\!. \qquad\quad \end{eqnarray} In particular for $T$ $\!\rightarrow$ $\!\infty$, we recognize the well-known Lorentzian: \begin{eqnarray} \label{e3.33} \lefteqn{ \lim_{T\to\infty} S({\bf r},\omega_{S},T) = \left( {\omega_A^2 d \sin\theta \over 4\pi \varepsilon_0c^2\rho} \right)^2 } \nonumber\\[.5ex]&&\hspace{2ex}\times\, {1\over (\omega_{S} - \omega_A')^2 +\Gamma_0^2/4}\,. \qquad \end{eqnarray} If retardation is ignored and the Markov approximation applies, Eq.~(\ref{e3.31}) can be simplified in a similar way as Eq.~(\ref{e3.21}). In close analogy to the derivation of Eq.~(\ref{e3.26}) we may write \begin{eqnarray} \label{e3.34} \lefteqn{ S({\bf r},\omega_{S},T) = |{\bf F}({\bf r},{\bf r}_A,\omega_A)|^2 } \nonumber\\ &&\hspace{2ex}\times\, \left| \frac {e^{[-\Gamma/2+i(\omega_S-\omega_A')] T} -1} {\omega_{S}-\omega_A' + i\Gamma/2} \right|^2\!, \qquad \end{eqnarray} with ${\bf F}({\bf r},{\bf r}_A,\omega_A)$ from Eq.~(\ref{e3.27}). \subsection{Two-Atom Coupling} \label{sec3.2} We now turn to the problem of two atoms (denoted by A and B) coupled through a medium-assisted electromagnetic field in the case of single-quantum excitation. 
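The approach of the finite-operating-time spectrum (\ref{e3.32}) to the Lorentzian limit (\ref{e3.33}) for $T\to\infty$ is easily checked numerically; in the sketch below the geometric prefactor is set to unity and all parameter values are arbitrary:

```python
import cmath

def spectrum_factor(delta, Gamma, T):
    """|{exp[(-Gamma/2 + i*delta)*T] - 1}/(delta + i*Gamma/2)|^2,
    with delta = omega_S - omega_A' the detuning of the spectral
    apparatus [cf. Eq. (e3.32), geometric prefactor set to unity]."""
    num = cmath.exp((-Gamma / 2 + 1j * delta) * T) - 1.0
    return abs(num / (delta + 1j * Gamma / 2)) ** 2

def lorentzian(delta, Gamma):
    """Long-time limit, Eq. (e3.33), same normalization."""
    return 1.0 / (delta ** 2 + Gamma ** 2 / 4)

Gamma = 0.5  # arbitrary units
for delta in (0.0, 0.3, 1.0):
    print(spectrum_factor(delta, Gamma, T=200.0), lorentzian(delta, Gamma))
```

For $T\Gamma_0\gg 1$ the exponential in the numerator dies out and only the Lorentzian denominator survives, in agreement with Eq.~(\ref{e3.33}).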
For simplicity, let us consider atoms with equal transition frequencies, so that \begin{equation} \label{e8.20b} K_{AA'}(t,t') \equiv K_{AA'}(t-t') \end{equation} ($A'$ $\!=$ $\!B,D$) and \begin{equation} \label{e8.20c} K_{AB}(t-t') = K_{BA}(t-t'), \end{equation} and assume that the isolated atoms undergo the same decay law, \begin{equation} \label{e8.20a} K_{AA}(t,t') = K_{BB}(t,t') \equiv K(t-t'). \end{equation} Introducing the new variables \begin{equation} \label{e8.12} C_\pm (t) = 2^{-1/2} \Bigl[ C_{U_A}(t) \pm C_{U_B}(t) \Bigr], \end{equation} it is not difficult to prove that the integro-differential equations (\ref{e8.6b}) decouple as follows: \begin{eqnarray} \label{e8.17} \lefteqn{ \dot{C}_\pm(t) = \int_{0}^t {\rm d}t'\, K_\pm(t-t')\,C_\pm(t') } \nonumber\\[.5ex]&& \hspace{-0ex} +\,2^{-1/2} \int_{-\Delta t}^0 {\rm d}t'\, \left[K_{AD}(t-t') \right. \nonumber\\[.5ex]&&\hspace{6ex} \left. \pm\,K_{BD}(t-t')\right]C_{U_D}(t') , \qquad \end{eqnarray} where \begin{equation} \label{e8.18} K_\pm(t-t') = K(t-t') \pm K_{AB}(t-t'). \end{equation} Obviously, the $C_{\pm}$ are the expansion coefficients of the wave function with respect to the (atomic) basis \begin{equation} \label{e8.14} |\pm \rangle = 2^{-1/2} \left( |U_A\rangle \pm |U_B\rangle \right), \end{equation} and $|L\rangle$ (instead of the basis $|U_A\rangle$, $|U_B\rangle$, and $|L\rangle$). Thus, they are the probability amplitudes of finding the total system in the states $|+\rangle |\{0\}\rangle$ and $|-\rangle |\{0\}\rangle$, respectively. In the further treatment of Eq.~(\ref{e8.17}) one can again distinguish between the weak- and the strong-coupling regime. \paragraph{Weak Coupling} In the weak-coupling regime, the Markov approximation applies, and in Eq.~(\ref{e8.17}) $C_\pm(t')$ can be replaced with $C_\pm(t)$, the kernel functions effectively acting as $\delta$-functions under the time integrals.
In particular, when the field is initially not excited, the second term on the right-hand side of Eq.~(\ref{e8.17}) vanishes and we are left with a homogeneous first-order differential equation, whose solution is, in analogy to Eq.~(\ref{e3.15}), \begin{equation} \label{e8.13} C_\pm (t) = e^{(-\Gamma_\pm/2 +i \delta_\pm) t} C_\pm(0), \end{equation} where ($\Gamma$ $\!\equiv$ $\Gamma_{AA}$, $\delta$ $\!\equiv$ $\delta_{AA}$) \begin{equation} \label{e8.9} \Gamma_\pm = \Gamma \pm \Gamma_{AB}\,, \end{equation} \begin{equation} \label{e8.9a} \delta_\pm = \delta \pm \delta_{AB}\,, \end{equation} \begin{equation} \label{e8.10} \Gamma_{AB} = {2\omega_A^2\over \hbar\varepsilon_0c^2}\, {\bf d}_A {\rm Im}\,\mbb{G}({\bf r}_A,{\bf r}_B,\omega_A) {\bf d}_B \,, \end{equation} \begin{equation} \label{e8.11} \delta_{AB} \!=\! {{\cal P}\over \pi\hbar\varepsilon_0}\, \!\int_0^\infty \!\!{\rm d}\omega\, {\omega^2\over c^2} \frac{{\bf d}_A {\rm Im}\,\mbb{G}({\bf r}_A,{\bf r}_B,\omega) {\bf d}_B} {\omega-\omega_A}\,. \end{equation} Clearly, $\Gamma_\pm$ are the decay rates of the states $|\pm\rangle$, and the assumption (\ref{e8.20a}) means that the two atoms are positioned in such a way that they have equal single-atom decay rates and line shifts. Note that the values of $\Gamma_+$ and $\Gamma_-$ can substantially differ from each other, because of the interference term $\Gamma_{AB}$ (of positive or negative sign). \paragraph{Strong Coupling} In the strong-coupling regime, the atoms are predominantly coupled (in a resonator-like setup) to a sharp field resonance, whose mid-frequency approximately equals the atomic transition frequency. As a result, the atomic probability amplitudes in Eq.~(\ref{e8.17}) are not necessarily slowly varying compared with the kernel functions, and the Markov approximation thus fails in general.
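The interference structure of Eqs.~(\ref{e8.13}) and (\ref{e8.9}) is easy to visualize numerically: for $\Gamma_{AB}$ close to $\Gamma$, the symmetric state decays superradiantly while the antisymmetric state is nearly trapped. A minimal sketch (arbitrary units, line shifts set to zero for simplicity):

```python
import cmath

def C_pm(t, Gamma_pm, delta_pm, C0=1.0):
    """Markovian solution, Eq. (e8.13), for the amplitudes of |+>, |->."""
    return cmath.exp((-Gamma_pm / 2 + 1j * delta_pm) * t) * C0

Gamma, Gamma_AB = 1.0, 0.9                   # arbitrary units, strong interference
Gp, Gm = Gamma + Gamma_AB, Gamma - Gamma_AB  # Eq. (e8.9)
t = 5.0
print(abs(C_pm(t, Gp, 0.0)) ** 2)  # superradiant state: fast decay
print(abs(C_pm(t, Gm, 0.0)) ** 2)  # subradiant state: almost trapped
```

In the ideal limit $\Gamma_{AB}\to\Gamma$ one finds $\Gamma_+\to 2\Gamma$ and $\Gamma_-\to 0$, i.e., the antisymmetric state does not decay at all.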
Regarding the line shape of the field resonance as being a Lorentzian, one can of course approximate the kernels $K(t-t')$, $K_{AB}(t-t')$ [and $K_{AD}(t-t')$ and $K_{BD}(t-t')$] in a similar way as done in Eq.~(\ref{e6.2}) for a single atom. Equation (\ref{e8.17}) reveals that the motion of the states $|\pm\rangle$ defined by Eq.~(\ref{e8.14}) is governed by the kernel functions $K_\pm(t-t')$, and it may happen that one of them becomes very small, because of destructive interference [cf. Eq.~(\ref{e8.18})]. In that case, either $|+\rangle$ or $|-\rangle$ is weakly coupled to the field, and thus the strong-coupling regime cannot be realized for both of these states simultaneously. \setcounter{equation}{0} \section{BULK MEDIUM} \label{sec:realcavity} The formalism outlined in Section \ref{sec:Gen_Theor} requires knowledge of the permittivity as a function of space and frequency. The spatial variation is typically determined by some arrangement of macroscopic bodies, each of which is characterized by a permittivity that is a function of frequency only. The frequency response of a dielectric body depends on its atomic structure and can be measured with high precision. For theoretical studies it may be useful to have some analytical expression at hand. \subsection{Drude--Lorentz Model} \label{subsec:DLmodel} In the Drude--Lorentz model, which is widely used in practice, the permittivity is given by \begin{equation} \label{e4.5} \varepsilon(\omega) = 1 + \sum_\alpha {\omega_{P\alpha}^2 \over \omega_{T\alpha}^2 - \omega^2 - i\omega \gamma_\alpha}\,, \end{equation} where $\omega_{T\alpha}$ and $\gamma_\alpha$ are the medium oscillation frequencies and linewidths, respectively, and $\omega_{P\alpha}$ correspond to the coupling constants. It is worth noting that the Drude--Lorentz model covers both dielectric (\mbox{$\omega_{T\alpha}$ $\!\neq$ $\! 0$}) and metallic (\mbox{$\omega_{T\alpha}$ $\!=$ $\!0$}) matter.
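A minimal numerical sketch of the permittivity (\ref{e4.5}) for a single-resonance dielectric (all parameter values arbitrary) reproduces the qualitative behavior discussed in this section: $\varepsilon_R>1$ below the resonance, $\varepsilon_R<0$ inside the band gap between $\omega_T$ and $\omega_L$, and $\varepsilon_I>0$ throughout:

```python
def eps_drude_lorentz(w, resonances):
    """Drude--Lorentz permittivity, Eq. (e4.5);
    resonances: iterable of (omega_P, omega_T, gamma) triples."""
    return 1 + sum(wP ** 2 / (wT ** 2 - w ** 2 - 1j * w * g)
                   for (wP, wT, g) in resonances)

wP, wT, g = 0.5, 1.0, 0.01       # arbitrary units, single resonance
wL = (wT ** 2 + wP ** 2) ** 0.5  # longitudinal frequency
for w in (0.5, 1.05, 1.2):       # below, inside, and above the band gap
    print(w, eps_drude_lorentz(w, [(wP, wT, g)]))
```

Passing several `(omega_P, omega_T, gamma)` triples covers the multi-resonance case, and setting `omega_T = 0` gives the metallic (Drude) limit.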
An example of the permittivity for a (single-resonance) dielectric as a function of frequency is shown in Fig.~\ref{f1a}. \begin{figure} \caption{Permittivity of a (single-resonance) dielectric as a function of frequency.\label{f1a}} \end{figure} {F}rom the permittivity, the refractive index can be obtained according to the relations \begin{equation} \label{e4.5.1} n(\omega)=\sqrt{\varepsilon(\omega)}=n_R(\omega) + in_I(\omega), \end{equation} \begin{eqnarray} \label{e4.5.1a} \lefteqn{\hspace{-4ex} n_{R(I)}(\omega) = } \nonumber\\[.5ex]&&\hspace{-6ex} \sqrt{{1\over2}\left[ \sqrt{\varepsilon_{R}^2(\omega)+\varepsilon_{I}^2(\omega)} +(-)\, \varepsilon_{R} (\omega)\right]} . \end{eqnarray} The Drude--Lorentz model features band gaps between the transverse frequencies $\omega_{T\alpha}$ and the longitudinal frequencies \mbox{$\omega_{L\alpha}$ $\!=$ $\!\sqrt{\omega_{T\alpha}^2\!+\!\omega_{P\alpha}^2}$}. Far from a medium resonance, we typically observe that \begin{equation} \label{e4.5.2} \varepsilon_{I}(\omega) \ll |\varepsilon_{R}(\omega)|. \end{equation} For $\omega$ $\!<$ $\!\omega_{T\alpha}$ (outside a band gap) we have \begin{eqnarray} \label{e4.5.3} &\varepsilon_{R}(\omega)>1, \\&\displaystyle \hspace{-2ex} n_R(\omega)\simeq\sqrt{\varepsilon_{R}(\omega)}\gg n_I(\omega)\simeq {\varepsilon_{I}(\omega)\over 2\sqrt{\varepsilon_{R}(\omega)}}\,, \nonumber\\ \label{e4.5.3a} \end{eqnarray} and for $\omega_{T\alpha}$ $\!<$ $\!\omega$ $\!<\omega_{L\alpha}$ (inside a band gap) \begin{eqnarray} \label{e4.5.4} &\varepsilon_{R}(\omega)<0, \\&\displaystyle \hspace{-1ex} n_R(\omega)\simeq {\varepsilon_{I}(\omega)\over 2\sqrt{|\varepsilon_{R}(\omega)|}}\ll n_I(\omega)\simeq \sqrt{|\varepsilon_{R}(\omega)|}\,. \nonumber\\ \label{e4.5.5} \end{eqnarray} When (inside a band gap) \begin{equation} \label{e4.5.6} \varepsilon_{R}(\omega)<-1 \end{equation} is valid, which, in view of Eq.
(\ref{e4.5}), leads to \begin{equation} \label{e4.5.7} \omega< \sqrt{\omega_{T\alpha}^2 +{\textstyle\frac{1}{2}}\,\omega_{P\alpha}^2} \,, \end{equation} then the Drude--Lorentz model also incorporates surface-guided waves [see, e.g., \citet{Raether88,Ho01}], which are observed in the presence of an interface. These waves are bound to the interface, with the amplitudes being damped into either of the neighboring media. Typical examples are surface phonon polaritons for dielectrics and surface plasmon polaritons for metals. Note that in any case \mbox{$\varepsilon_{I}(\omega)$ $\!>$ $\!0$} is valid. \subsection{Local-Field Correction} \label{subsec:LFcorrection} {F}rom simple arguments based on the change of the mode density, it was suggested that the spontaneous emission rate of an atom inside a nonabsorbing medium should be modified according to \mbox{$\Gamma$ $\!=$ $\!n\Gamma_0$}, where $n$ is the (real) refractive index of the medium and $\Gamma_0$ is given by Eq.~(\ref{e3.18}) [\citet{Dexter56,diBartolo68,Yariv75,Nienhuis76}]. In this formula, it is assumed that the local field the atom interacts with is the medium-assisted electromagnetic field obtained by averaging over a region which contains a great number of medium constituents. In reality, the atom is located in a small region of free space, and the field therein differs from the averaged field. This effect is usually taken into consideration by introduction of a local-field correction factor $\xi$, thus \begin{equation} \label{e4.1b} \Gamma = n \xi \Gamma_0\,. \end{equation} In the (Clausius--Mossotti) virtual-cavity model it is given by [\citet{Knoester89,Milonni95}] \begin{equation} \label{e4.1a} \xi = \left( \frac{n^2+2}{3} \right)^2, \end{equation} and in the (Onsager) real-cavity model by [\citet{Glauber91}] \begin{equation} \label{e4.1} \xi = \left( \frac{3n^2}{2n^2+1} \right)^2.
\end{equation} For absorbing media it was suggested that the index of refraction should be replaced by its real part and the square of the correction factor in Eqs.~(\ref{e4.1a}) and (\ref{e4.1}) by the absolute square [\citet{Barnett92,Barnett96,Juzeliunas97}]. Later it was found, on the basis of the quantization scheme outlined in Section \ref{sec:quan}, that a proper inclusion of the (quantum) noise polarization leads to a more complicated form of the local-field correction [\citet{Scheel99a,Scheel99b}], which (for the virtual-cavity model) was confirmed by an alternative, microscopic approach [\citet{Fleischhauer99}]. In contrast to the virtual-cavity model, where the modification of the field outside the cavity is disregarded, in the real-cavity model the mutual modification of the field outside and inside the cavity is taken into account in a consistent way. Experiments suggested that the real-cavity model is suitable for describing the decay of substitutional guest atoms different from the constituents of the medium [\citet{Rikken95,deVries98,Schuurmans98}]. Let us consider an excited atom placed at the center of an empty, spherical cavity (embedded in an otherwise homogeneous medium). According to Eq.~(\ref{e3.17a}) and the Green tensor given in Appendix \ref{app:spherical}, the decay rate can be given in the form [\citet{Scheel99b}] \begin{equation} \label{e4.2} \Gamma = \Gamma_0 \left[ 1+{\rm Re}\, C_1^N(\omega_A) \right], \end{equation} where the generalized reflection coefficient $C_1^N(\omega)$ reads [$\tilde{k}$ $\!=$ $\!\tilde{k}(\omega)$ $\!=$ $R\omega/c$] \begin{eqnarray} \label{e4.3} \lefteqn{\hspace{-1ex} C_1^N(\omega) = e^{i\tilde{k}} \bigl\{i+\tilde{k}[n(\omega)\!+\!1]-i\tilde{k}^2n(\omega) } \nonumber \\ &&\hspace{-2ex} -\,\tilde{k}^3n^2(\omega)/[n(\omega)\!+\!1] \bigr\} \bigl\{ \sin \tilde{k} \!-\!
\tilde{k}[\cos \tilde{k} \nonumber \\ &&\hspace{-2ex} +\,in(\omega)\sin \tilde{k}] \!+\!i\tilde{k}^2n(\omega)\cos \tilde{k} \!-\!\tilde{k}^3[\cos \tilde{k} \nonumber \\ &&\hspace{-2ex} -\,in(\omega)\sin \tilde{k}] n^2(\omega)/[n^2(\omega)\!-\!1] \bigr\}^{-1}\!\!. \qquad \end{eqnarray} As long as the surrounding medium can be treated as a continuum, Eq.~(\ref{e4.2}) [together with Eq.~(\ref{e4.3})] is exact. It is valid for arbitrary cavity radius and arbitrary complex refractive index, without restriction to transition frequencies far from medium resonances. When the cavity radius is much smaller than the wavelength of the atomic transition, i.e., $R\omega_A/c$ $\!=$ $\!\tilde{k}(\omega_A)$ $\!\ll$ $\!1$, then the real-cavity model of local-field correction is realized. In this case, $C_1^N(\omega_A)$ can be expanded in powers of $\tilde{k}(\omega_A)$ to obtain [\citet{Scheel99b,Tomas01}] \begin{eqnarray} \label{e4.4} \lefteqn{ \Gamma = \Gamma_0 \left|\frac{3\varepsilon} {2\varepsilon\!+\!1}\right|^2 \Bigg\{ n_R } \nonumber \\[.5ex] && \hspace{-3ex} \!+\frac{\varepsilon_{I}} {|\varepsilon|^2} \biggl[ \left(\!\frac{c}{\omega_A R}\!\right)^3 \!+\!\frac{28|\varepsilon|^2 \!+\!16\varepsilon_{R}\!+\!1} {5|2\varepsilon+1|^2} \!\left(\!\frac{c}{\omega_A R}\!\right) \nonumber \\[.5ex] && \hspace{-3ex} -\frac{2}{|2\varepsilon\!+\!1|^2} \bigl( 2n_I|\varepsilon|^2 +\,n_I\varepsilon_{R} \!+\!n_R \varepsilon_{I}\bigr) \biggr]\Biggr\} \nonumber \\[.5ex] && +\,O(\omega_AR/c), \end{eqnarray} where the dependence of the permittivity $\varepsilon$ and the refractive index $n$ on $\omega_A$ has been suppressed. For $\varepsilon_{I}(\omega_A)$ $\!=$ $\!0$, i.e., when material absorption is fully disregarded, Eq. (\ref{e4.4}) exactly reproduces the local-field correction factor (\ref{e4.1}). The second term in the curly brackets essentially results from absorption. It is seen that material absorption gives rise to a strong dependence of the decay rate on the cavity radius.
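The ingredients of the nonabsorbing-limit correction factors can be sketched as follows: the complex refractive index is obtained from the permittivity via Eqs.~(\ref{e4.5.1}) and (\ref{e4.5.1a}), and the factors (\ref{e4.1a}) and (\ref{e4.1}) are evaluated at that index. Note that applying these simple formulas to complex $n$ is illustrative only; for absorbing media the proper result is Eq.~(\ref{e4.4}). All parameter values are arbitrary:

```python
import cmath

def refractive_index(eps):
    """n = sqrt(eps) with n_R, n_I >= 0, Eqs. (e4.5.1), (e4.5.1a)."""
    n = cmath.sqrt(eps)
    return n if n.real >= 0 else -n

def xi_virtual_cavity(n):
    """Clausius--Mossotti (virtual-cavity) factor, Eq. (e4.1a)."""
    return ((n ** 2 + 2) / 3) ** 2

def xi_real_cavity(n):
    """Onsager (real-cavity) factor, Eq. (e4.1)."""
    return (3 * n ** 2 / (2 * n ** 2 + 1)) ** 2

eps = 2.25 + 0.01j  # arbitrary weakly absorbing dielectric
n = refractive_index(eps)
print(n)  # about 1.5 plus a small imaginary part
print(xi_virtual_cavity(n), xi_real_cavity(n))
```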
In particular, the leading term proportional to $R^{-3}$ can be regarded as corresponding to nonradiative energy transfer from the atom to the medium. \begin{figure} \caption{Rate of spontaneous decay as a function of the atomic transition frequency for a Drude--Lorentz-type dielectric medium.\label{fig:se}} \end{figure} Examples of the dependence of the rate of spontaneous decay on the atomic transition frequency are plotted in Fig.~\ref{fig:se} for a Drude--Lorentz-type dielectric medium. It is seen that in the band-gap region (where for a nonabsorbing medium spontaneous emission would be inhibited) the decay rate can drastically increase, because of the nonradiative decay channel associated with absorption. Note that the strongest enhancement of spontaneous decay is observed at $\omega_A$ $\!\simeq$ $\!\sqrt{\omega_T^2\!+\! \frac{2}{3}\omega_P^2}$, which [for small values of $\varepsilon_I(\omega_A)$] corresponds to $2\varepsilon(\omega_A)$ $\!+$ $\!1$ $\!\simeq$ $\!0$. \setcounter{equation}{0} \section{PLANAR SURFACE} \label{sec:planar} Let us turn to the problem of spontaneous decay of an excited atom located near the surface of a half-space medium. For real permittivity, configurations of that type have been studied extensively in connection with Casimir and van der Waals forces [see, e.g., \citet{Meschede90,Fichet95} and references therein] and with regard to scanning near-field optical microscopy [see, e.g., \citet{Henkel98}]. To be more specific, let us consider two infinite half-spaces such that \begin{equation} \label{e5.1a} \varepsilon({\bf r},\omega) = \left\{ \begin{array}{l@{\quad \rm if \quad }l} \varepsilon(\omega) & z \leq 0\\ 1 & z > 0 \end{array} \right..
\end{equation} For $z$ $\!>$ $\!0$, the reflection part of the Green tensor reads [\citet{Maradudin75,Mills75,Tomas95,Ho98}] \begin{eqnarray} \label{e5.1} \lefteqn{\hspace{-1ex} G^R_{xx}(z,z,\omega) = -\frac{i}{8\pi k^2}\!\int_0^\infty \!\!{\rm d}k_\| \,k_\|\beta\, {\rm e}^{2i\beta z} r^p(k_\|) } \nonumber\\&& \hspace{6ex} +\,\frac{i}{8\pi} \!\int_0^\infty \!\!{\rm d}k_\| \, \frac{k_\|{\rm e}^{2i\beta z}}{\beta} \,r^s(k_\|), \qquad \end{eqnarray} \begin{equation} \label{e5.2} G^R_{yy}(z,z,\omega)= G^R_{xx}(z,z,\omega), \end{equation} \begin{equation} \label{e5.3} G^R_{zz}(z,z,\omega) = \frac{i}{4\pi k^2} \int_0^\infty \!{\rm d}k_\| \, k_\|^3 \frac{{\rm e}^{2i\beta z}}{\beta} \,r^p(k_\|) \end{equation} [$k$ $\!=$ $\!\omega/c$, $\beta$ $\!=$ $\!(k^2$ $\!-$ $\!k_\|^2)^{1/2}$], where $r^p(k_\|)$ and $r^s(k_\|)$ are respectively the familiar Fresnel reflection coefficients for $p$- (TM) and $s$- (TE) polarized waves. Substitution of these expressions ($\omega$ $\!=$ $\!\omega_A$) into Eqs.~(\ref{e3.17a}) and (\ref{e3.14}) yields the decay rate and the line shift of an atom at a distance $z$ from the surface [\citet{Agarwal75,Agarwal77, Scheel99c}], which are in agreement with classical results [see, e.g., \citet{Chance78} and references therein]. When the distance of the atom from the surface is small compared to the wavelength, $kz$ $\!\ll$ $\!1$, then the integrals in Eqs. 
(\ref{e5.1}) -- (\ref{e5.3}) can be evaluated asymptotically to give [\citet{Scheel99c}] \begin{eqnarray} \label{e5.4} \lefteqn{\hspace{-3ex} G^R_{zz}(z,z,\omega) =\frac{1}{16\pi k^2z^3}\,\frac{n^2(\omega)-1}{n^2(\omega)+1} } \nonumber\\[.5ex]&& \hspace{-6ex} +\,\frac{1}{8\pi z}\,\frac{[n(\omega)-1]^2}{n(\omega)[n(\omega)+1]} \nonumber\\[.5ex]&& \hspace{-6ex} +\,\frac{ik}{12\pi}\frac{[n(\omega)-1][2n(\omega)-1]} {n(\omega)[n(\omega)+1]} + O(kz), \hspace{2ex} \end{eqnarray} \begin{eqnarray} \label{e5.5} \lefteqn{ \hspace{-8ex} G^R_{xx}(z,z,\omega)\!=\!{\textstyle\frac{1}{2}}G^R_{zz}(z,z,\omega) \!-\!\frac{1}{16\pi z}\,\frac{n^2(\omega)\!-\!1}{n^2(\omega)\!+\!1} } \nonumber\\[.5ex]&& -\frac{ik}{3\pi}\frac{n(\omega)-1}{n(\omega)+1} + O(kz), \end{eqnarray} \begin{equation} \label{e5.6} G^R_{yy}(z,z,\omega)= G^R_{xx}(z,z,\omega). \end{equation} Inserting Eqs.~(\ref{e5.4}) -- (\ref{e5.6}) into Eq.~(\ref{e3.17a}) yields \begin{eqnarray} \label{e5.7} \lefteqn{ \Gamma = {3\Gamma_0\over 8} \left(1+{d_z^2\over d^2} \right) \left({c\over \omega_Az}\right)^3 } \nonumber\\[.5ex]&&\times\, {\varepsilon_{I}(\omega_A) \over |\varepsilon(\omega_A)+1|^2} + O\!\left({c \over \omega_Az }\right)\!. \end{eqnarray} The leading (\mbox{$\sim$ $\!z^{-3}$}) term is the same as in the microscopic approach by \citet{Yeung96}. This term, which is proportional to $\varepsilon_{I}(\omega_A)$, is closely related to nonradiative decay, i.e., energy transfer from the atom to the medium. Obviously, a change of $\varepsilon_{I}(\omega_A)$ mostly affects the near-surface behavior of the decay rate. Note that the distance of the atom from the surface must not be smaller than interatomic distances in the medium (otherwise an interface cannot be defined). \begin{figure} \caption{Spontaneous decay rate of an atom near a planar dielectric half-space of Drude--Lorentz type.\label{fig:pl}} \end{figure} Examples of the spontaneous decay rate are shown in Fig.~\ref{fig:pl} for dielectric matter of Drude--Lorentz type.
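The $z^{-3}$ scaling of the leading (nonradiative) term of Eq.~(\ref{e5.7}) can be checked directly; the parameter values below are arbitrary, in units with $c=\omega_A=1$:

```python
def gamma_near_surface(z, eps, Gamma0=1.0, dz2_over_d2=1.0,
                       omegaA=1.0, c=1.0):
    """Leading z**-3 (nonradiative) term of Eq. (e5.7) for an atom at
    distance z from an absorbing half-space; higher orders dropped."""
    return (3 * Gamma0 / 8) * (1 + dz2_over_d2) \
        * (c / (omegaA * z)) ** 3 * eps.imag / abs(eps + 1) ** 2

eps = -0.5 + 0.3j  # arbitrary permittivity inside a band gap
r1 = gamma_near_surface(0.05, eps)
r2 = gamma_near_surface(0.10, eps)
print(r1 / r2)  # 8.0: halving the distance raises the rate eightfold
```

The rate diverges as $z\to 0$ only formally; as noted above, the expression loses its meaning once $z$ becomes comparable to the interatomic distances in the medium.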
Note the strong absorption-assisted enhancement of spontaneous decay that is observed inside the band gap at \mbox{$\omega_A$ $\!\simeq$ $\!\sqrt{\omega_T^2\!+\!\frac{1}{2}\omega_P^2}$}. It corresponds [for small values of $\varepsilon_I(\omega_A)$] to the condition \mbox{$\varepsilon(\omega_A)$ $\!+$ $\!1$ $\!\simeq$ $\!0$}, which marks the position of the highest density of the surface-guided waves [cf. Eq.~(\ref{e4.5.7})]. Similarly, from Eq.~(\ref{e3.14}) together with Eqs.~(\ref{e5.4}) -- (\ref{e5.6}), the line shift due to the presence of the macroscopic body reads \begin{eqnarray} \label{e5.9} \lefteqn{ \delta\omega_A = {3\Gamma_0\over 32} \left(1+{d_z^2\over d^2} \right) \left({ c \over \omega_Az}\right)^3 } \nonumber\\[.5ex]&&\times\, {|\varepsilon(\omega_A)|^2-1 \over |\varepsilon(\omega_A)+1|^2} + O\!\left({c \over \omega_Az }\right)\!. \end{eqnarray} In contrast to the decay rate, here the leading (\mbox{$\sim$ $\!z^{-3}$}) term even appears when absorption is disregarded [\mbox{$\varepsilon_{I}(\omega_A)$ $\!=$ $\!0$}]. \setcounter{equation}{0} \section{SPHERICAL MICRO- \\ RESONATOR} \label{sec:beyond} In Section \ref{subsec:LFcorrection}, an atom in a microsphere whose radius is much smaller than the wavelength of the atomic transition was considered. If the radius is not small compared with the wavelength, the cavity can act as a resonator. It is well known that the spontaneous decay of an excited atom can be strongly modified when it is placed in a microresonator [\citet{Hinds91,Haroche92,Meschede92,Meystre92,Berman94,Kimble98}]. There are typically two qualitatively different regimes: the weak-coupling regime and the strong-coupling regime. In the weak-coupling regime the Markov approximation applies and a monotonic exponential decay is observed, the decay rate being enhanced or reduced compared to the free-space value depending on whether the atomic transition frequency fits a cavity resonance or not.
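For the single-resonance Drude--Lorentz model, the surface-wave condition $\varepsilon(\omega_A)+1\simeq 0$ quoted above can be verified directly: in the limit $\gamma\to 0$ it is met exactly at $\omega=\sqrt{\omega_T^2+\omega_P^2/2}$, the upper edge of the region (\ref{e4.5.7}). A minimal check (arbitrary units):

```python
def epsR_single(w, wP, wT):
    """Real part of the single-resonance Drude--Lorentz permittivity,
    Eq. (e4.5), in the limit gamma -> 0."""
    return 1 + wP ** 2 / (wT ** 2 - w ** 2)

wP, wT = 0.5, 1.0                      # arbitrary units
w_sp = (wT ** 2 + wP ** 2 / 2) ** 0.5  # predicted surface-wave frequency
print(epsR_single(w_sp, wP, wT))       # -1.0 exactly
```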
The strong-coupling regime, in contrast, is characterized by reversible Rabi oscillations where the energy of the initially excited atom is periodically exchanged between the atom and the radiation field. This usually requires that the emission is in resonance with a high-quality cavity mode. \begin{figure} \caption{An excited atom at the center of a spherical three-layer structure.\label{fig:slabho}} \end{figure} Let us consider an excited atom placed at the center of a spherical three-layer structure (Fig.~\ref{fig:slabho}). The outer layer ($r$ $\!>$ $\!R_1$) and the inner layer \mbox{($0$ $\!\le$ $\!r$ $\!<$ $\!R_2$)} are vacuum, whereas the middle layer ($R_2$ $\!\le$ $\!r$ $\!\le$ $\!R_1$), which plays the role of the resonator wall, is matter. In particular for a Drude--Lorentz-type dielectric, the wall would be perfectly reflecting in the band-gap zone, provided that absorption could be disregarded. Restricting our attention to a true resonator, we may assume that the condition $R_2\omega_A/c\!\gg\! 1$ is satisfied. \subsection{Weak Coupling} {F}rom Eq.~(\ref{e3.12}) together with the Green tensor for a spherical three-layer structure as given in Appendix \ref{app:spherical}, the decay rate becomes [\citet{Ho00}] \begin{eqnarray} \label{e6.1} \Gamma \hspace{-1ex}&\simeq&\hspace{-1ex} \Gamma_0 \,{\rm Re}\! \left[ \frac{n(\omega_A)-i\tan(\omega_AR_2/c)} {1-in(\omega_A)\tan(\omega_AR_2/c)} \right] \nonumber \\[.5ex] \hspace{-1ex}&=&\hspace{-1ex} \Gamma_0 \, n_R(\omega_A) [1+\tan^2(\omega_AR_2/c)] \nonumber \\[.5ex] \hspace{-1ex}&&\hspace{-1ex}\times\, \Bigl\{ [1+n_I(\omega_A) \tan(\omega_AR_2/c)]^2 \nonumber \\[.5ex] \hspace{-1ex}&&\hspace{1ex} +\,n_R^2(\omega_A) \tan^2(\omega_AR_2/c) \Bigr\}^{-1}\!\!. \end{eqnarray} Note that in Eq.~(\ref{e6.1}) it is assumed that \mbox{$\exp[-n_I(\omega_A)(R_1-R_2)\omega_A/c]$ $\!\ll$ $\!1$} (thick cavity wall).
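The two forms of Eq.~(\ref{e6.1}) are algebraically equivalent, which can be checked numerically for arbitrary complex refractive index (the sample values below are arbitrary):

```python
import math

def gamma_ratio_complex(n, x):
    """First line of Eq. (e6.1): Re[(n - i tan x)/(1 - i n tan x)],
    with x = omega_A R_2 / c and n the complex refractive index."""
    t = math.tan(x)
    return ((n - 1j * t) / (1 - 1j * n * t)).real

def gamma_ratio_explicit(n, x):
    """Second form of Eq. (e6.1), written out in n_R and n_I."""
    t = math.tan(x)
    nR, nI = n.real, n.imag
    return nR * (1 + t ** 2) / ((1 + nI * t) ** 2 + nR ** 2 * t ** 2)

n, x = 1.4 + 0.02j, 2.3  # arbitrary values
print(gamma_ratio_complex(n, x), gamma_ratio_explicit(n, x))  # equal
```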
\begin{figure} \caption{Dependence of the decay rate on the atomic transition frequency.\label{fig:ho1}} \end{figure} The dependence of the decay rate on the transition frequency is illustrated in Fig.~\ref{fig:ho1}. It is seen that the decay rate depends very sensitively on the transition frequency. Narrow-band enhancement of spontaneous decay ($\Gamma/\Gamma_0$ $\!>$ $\!1$) alternates with broadband inhibition ($\Gamma/\Gamma_0$ $\!<$ $\!1$). The frequencies at which the maxima of enhancement are observed correspond to the resonance frequencies of the cavity. Within the band gap the heights and widths of the frequency intervals in which spontaneous decay is feasible are essentially determined by material absorption. Outside the band-gap zone the change of the decay rate is less pronounced, because of the relatively large input-output coupling, the (small) material absorption being of secondary importance. \begin{figure} \caption{Dependence of $W/W_0$ on the atomic transition frequency.\label{fig3-1}} \end{figure} The widths of the resonance lines are a measure of the damping of the corresponding intracavity field (mode). There are two damping mechanisms: photon leakage to the outside of the cavity and photon absorption by the cavity-wall material. The first mechanism is the dominant one outside the band gaps in regions where normal dispersion \mbox{(${\rm d}n_R/{\rm d}\omega$ $\!>$ $\!0$)} is observed, while the latter dominates inside band gaps where anomalous dispersion \mbox{(${\rm d}n_R/{\rm d}\omega$ $\!<$ $\!0$)} is observed. To illustrate this in more detail, let us consider the total amount of radiation energy observed outside the cavity and compare it with the energy $W_0$ $\!=$ $\!\hbar\omega_{\rm A}$ emitted by an atom in free space.
Application of Eq.~(\ref{e3.29}) [together with Eqs.~(\ref{e3.26}) and (\ref{e3.28})] yields [\citet{Ho00}] \begin{equation} \label{E41C} {W\over W_0} \simeq \frac{ |{\cal A}^N_l(\omega_{\rm A})|^2} {1+{\rm Re}\, {\cal C}^N_l(\omega_{\rm A}) }\,, \end{equation} with ${\cal A}^N_l(\omega_{\rm A})$ and ${\cal C}^N_l(\omega_{\rm A})$ being given according to Eqs.~(\ref{A.25}) and (\ref{A.26}) respectively. Examples of the dependence of $W/W_0$ on the atomic transition frequency are plotted in Fig.~\ref{fig3-1}. It is seen that inside the band gap most of the energy emitted by the atom is absorbed by the cavity wall in the course of time, while outside the band gap the absorption is (for the chosen values of $\gamma$) much less significant. Note that with increasing value of $\gamma$ the band gap is smoothed a little bit, and thus the fraction of light that escapes from the cavity can increase. \subsection{Strong Coupling} \label{subsec:st_coupling} When the coupling between the atom and a cavity resonance (mid-frequency $\omega_C$, line width $\Delta \omega_C$) is so strong that the (weak-coupling) decay rate becomes comparable to the cavity line width, \mbox{$\Gamma_C$ $\!\gsim$ $\!\Delta \omega_C$}, the Markov approximation is no longer adequate. In this case, the integral equation (\ref{e3.8}) or approximate equations of the type (\ref{e6.3}) and (\ref{e6.5}) should be used in order to describe the temporal evolution of the (upper) atomic state. For the configuration under investigation, the cavity line width can be given by \begin{equation} \label{e6.4} \Delta\omega_C = \frac{c\Gamma_0}{R_2\Gamma_C} \,. \end{equation} Inside a band gap, $\Gamma_C$ is essentially determined by material absorption. 
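Combining the damped Rabi solution (\ref{e6.5}), the Rabi frequency (\ref{e6.6}), and the cavity line width (\ref{e6.4}) gives a simple numerical picture of the strong-coupling regime; all parameter values below are arbitrary, in units with $c=1$:

```python
import math

def cavity_linewidth(Gamma_C, Gamma0, R2, c=1.0):
    """Cavity line width, Eq. (e6.4): dw_C = c*Gamma0/(R2*Gamma_C)."""
    return c * Gamma0 / (R2 * Gamma_C)

def rabi_amplitude(t, Gamma_C, dw_C):
    """Damped Rabi oscillations, Eqs. (e6.5) and (e6.6):
    C_U(t) = exp(-dw_C*t/2)*cos(Omega*t/2), Omega = sqrt(2*Gamma_C*dw_C)."""
    Omega = math.sqrt(2 * Gamma_C * dw_C)
    return math.exp(-dw_C * t / 2) * math.cos(Omega * t / 2)

Gamma_C, Gamma0, R2 = 50.0, 1.0, 2.0  # arbitrary units (c = 1)
dw_C = cavity_linewidth(Gamma_C, Gamma0, R2)
Omega = math.sqrt(2 * Gamma_C * dw_C)
print(Omega / dw_C)                        # 100: Omega >> dw_C
print(rabi_amplitude(0.0, Gamma_C, dw_C))  # 1.0
```

With $\Omega\gg\Delta\omega_C$ the amplitude completes many oscillations before the envelope $e^{-\Delta\omega_C t/2}$ has decayed appreciably, i.e., the atom-field energy exchange is nearly reversible.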
In particular, the single-resonance Drude--Lorentz model reveals that \begin{eqnarray} \label{e6.4a} \Gamma_C \hspace{-1ex}&\simeq&\hspace{-1ex} \Gamma_0 \,\frac{n_I^2(\omega_C)+1}{n_R(\omega_C)} \nonumber\\[.5ex] \hspace{-1ex}&\simeq&\hspace{-1ex} \Gamma_0 \,\frac{2\sqrt{(\omega_L^2-\omega_C^2)(\omega_C^2-\omega_T^2)}} {\gamma\omega_C} \qquad \end{eqnarray} ($\gamma$ $\!\ll$ $\!\omega_T$, $\!\omega_P$, $\!\omega_P^2/\omega_T$). Below the band gap radiative losses dominate and $\Gamma_C$ reads \begin{equation} \label{e6.4b} \Gamma_C \simeq \Gamma_0 \,n_R(\omega_C) \simeq \Gamma_0 \,\sqrt{\frac{\omega_L^2-\omega_C^2} {\omega_T^2-\omega_C^2}} \end{equation} ($n_R$ $\!\gg$ $\!n_I$). \begin{figure} \caption{Time evolution of the upper-state occupation probability.\label{fig:ho2}} \end{figure} Typical examples of the time evolution of the upper-state occupation probability are shown in Fig.~\ref{fig:ho2}. The curves are the exact (numerical) solutions of the integral equation (\ref{e3.8}) [together with the kernel function (\ref{e3.19})] for a (single-resonance) dielectric wall of Drude--Lorentz type. The figure shows that with increasing value of the intrinsic absorption constant $\gamma$ of the wall material the Rabi oscillations become less pronounced. Clearly, larger values of $\gamma$ mean enlarged absorption probability of the emitted photon by the cavity wall and thus reduced probability of atom-field energy interchange. \setcounter{equation}{0} \section{MICROSPHERE} \label{sec:microsphere} Light propagating in a dielectric sphere can be trapped by repeated total internal reflections.
When the round-trip optical path fits an integer number of wavelengths, whispering gallery (WG) waves are formed, which combine extreme photonic confinement with very high quality factors [\citet{Collot93,Chang96,Gorodetsky96, Vernooy98b,Uetake99}] -- properties that are crucial for cavity QED experiments [\citet{Lin92,Barnes96, Lermer98,Vernooy98a,Fujiwara99,Yukawa99}] and certain optoelectronic applications [\citet{Chang96}]. WG waves are commonly classified by means of three numbers [\citet{Collot93,Chang96}]: the angular-momentum number $l$, the azimuthal number $m$, and the number $i$ of radial maxima of the field inside the sphere. In the case of a uniform sphere, the WG waves are \mbox{($2l$ $\!+$ $\!1$)}-fold degenerate, i.e., the \mbox{$2l$ $\!+$ $\!1$} azimuthal resonances belong to the same frequency $\omega_{l,i}$. A dielectric microsphere whose permittivity is of Drude--Lorentz type not only gives rise to WG waves but can also feature surface-guided (SG) waves inside band-gap regions.\footnote{For the dependence on frequency of the quality factors of WG and SG waves, see \citet{Ho01}.} In contrast to WG waves, each angular-momentum number $l$ is associated with only one SG wave. If an excited atom is situated near a dielectric microsphere, spontaneous decay sensitively depends on whether or not the transition is tuned to a WG or an SG resonance. Moreover, whereas WG waves typically suffer from material absorption, the effect of material absorption on SG waves is weak in general.
\subsection{Decay Rate} \label{sec7.1} Applying Eq.~(\ref{e3.17a}) together with the Green tensor of a microsphere (Appendix \ref{app:spherical}), the spontaneous decay rate for a (with respect to the sphere) radially oriented dipole moment can be given by \begin{eqnarray} \label{e7.1} \lefteqn{ \Gamma^\perp =\Gamma_0 \biggl\{ 1 +{\textstyle{3\over 2}} \sum_{l=1}^\infty \biggl[ l(l+1)(2l+1) } \nonumber\\&&\times\, {\rm Re}\biggl(\!{\cal B}^N_l\!(\omega_A) \biggl[ {h^{(1)}_l({k_Ar_A}) \over {k_Ar_A}} \biggr]^2 \biggr) \biggr] \biggr\}, \qquad \end{eqnarray} and for a tangential dipole it reads \begin{eqnarray} \label{e7.2} \lefteqn{ \Gamma^\parallel = \Gamma_0 \biggl\{ 1 + {\textstyle{3\over 4}} \sum_{l=1}^\infty \biggl[ (2l+1) } \nonumber\\&&\hspace{-1.5ex}\times\, {\rm Re}\biggl(\!{\cal B}^M_l\!(\omega_A) \left[ h^{(1)}_l({k_Ar_A}) \right]^2 \nonumber\\&&\hspace{-1.5ex} +\,{\cal B}^N_l\!(\omega_A) \biggl[ { \bigl[{k_Ar_A} h^{(1)}_l({k_Ar_A})\bigr]^\prime \over {k_Ar_A} } \biggr]^2 \biggr) \biggr] \biggr\} \qquad \end{eqnarray} \begin{figure} \caption{\label{f3} Spontaneous decay rate of an excited atom near a dielectric microsphere as a function of the atomic transition frequency, for a radially oriented transition dipole moment and two values of the atom-surface distance.} \end{figure} ($k_A$ $\!=$ $\omega_A/c$), where ${\cal B}^N_l\!(\omega_A)$ is defined according to Eq.~(\ref{eA.21}), and the prime indicates \label{page1} the derivative with respect to $k_Ar_A$ [\citet{Ho01}].\footnote{Equations (\ref{e7.1}) and (\ref{e7.2}) can be regarded as being the natural extension of the (classical) theory for nonabsorbing matter as given by \citet{Chew87} and \citet{Klimov96}. Note that when the formulas for nonabsorbing matter are given in terms of the Green tensor, without an explicit decomposition in real and imaginary parts, then for the real permittivity the complex one can be substituted.\label{fnote2}} Note that a radially oriented transition dipole moment only couples to TM waves, whereas a tangentially oriented dipole moment couples to both TM and TE waves.
It is worth noting that when the atom is very close to the microsphere, the decay rates, Eqs.~(\ref{e7.1}) and (\ref{e7.2}), reduce to exactly the same form as in Eq.~(\ref{e5.7}) for a planar interface, with $z$ being now the distance between the atom and the surface of the microsphere. Obviously, nonradiative decay, which dominates in this case, does not respond sensitively to the actual radiation-field structure. The dependence on the transition frequency of the decay rate, as typically observed for not too small (large) values of the atom-surface distance (material absorption), is illustrated in Fig.~\ref{f3} for a radially oriented transition dipole moment. Since it mimics the single-quantum excitation spectrum of the sphere-assisted (TM) radiation field, the figure reveals that both the WG and SG field resonances can strongly enhance the spontaneous decay. Material absorption broadens the resonance lines at the expense of their heights, and the enhancement is accordingly reduced [see the inset in Fig.~\ref{f3}(a)]. Clearly, the sphere-assisted enhancement of spontaneous decay decreases with increasing distance between the atom and the sphere [compare Figs.~\ref{f3}(a) and (b)]. Figure \ref{f3} also reveals that SG waves can give rise to a much stronger enhancement of the spontaneous decay than WG waves. In particular, with increasing angular-momentum number the SG field resonance lines strongly overlap, and huge enhancement [e.g., of the order of magnitude of $10^4$ for the parameters chosen in Fig.~\ref{f3}(a)] can be observed for transition frequencies inside a band gap. When the distance between the atom and the sphere increases, the atom rapidly decouples from that part of the field. Thus, the huge enhancement of spontaneous decay rapidly reduces, and the interval in which inhibition of spontaneous decay is typically observed extends accordingly [see Fig.~\ref{f3}(b)].
\subsection{Frequency Shift} \label{sec7.2} The sphere-assisted frequency shift calculated from Eq.~(\ref{e3.14}) together with the Green tensor given in Appendix \ref{app:spherical} reads \begin{eqnarray} \label{e7.3} \lefteqn{ \delta\omega_A^\perp = - {3\Gamma_0\over 4} \sum_{l=1}^\infty \biggl[ l(l+1)(2l+1) } \nonumber\\&&\times\, {\rm Im}\biggl( {\cal B}^N_l\!(\omega_A) \biggl[ {h^{(1)}_l({k_Ar_A}) \over {k_Ar_A}} \biggr]^2 \biggr) \biggr] \qquad \end{eqnarray} for a radially oriented transition dipole moment, and \begin{eqnarray} \label{e7.4} \lefteqn{ \delta\omega_A^\parallel = - {3\Gamma_0\over 8} \sum_{l=1}^\infty \biggl[ (2l+1) } \nonumber\\&&\hspace{-2ex}\times\, {\rm Im}\biggl( {\cal B}^M_l\!(\omega_A) \left[ h^{(1)}_l({k_Ar_A}) \right]^2 \nonumber\\&&\hspace{-2ex} +\,{\cal B}^N_l\!(\omega_A) \biggl[ {\bigl[{k_Ar_A} h^{(1)}_l({k_Ar_A})\bigr]^\prime \over {k_Ar_A}} \biggr]^2 \biggr) \biggr] \qquad \end{eqnarray} \begin{figure} \caption{\label{f5} The frequency shift in spontaneous decay of an excited atom near a microsphere is shown as a function of the transition frequency for a radially oriented transition dipole moment and a single-resonance Drude--Lorentz-type dielectric (\mbox{$R$ $\!=$ $\!2\,\lambda_T$}).} \end{figure} for a tangentially oscillating dipole.\footnote{Again, Eqs.~(\ref{e7.3}) and (\ref{e7.4}) could be obtained from the classical theory for nonabsorbing matter [\citet{Klimov96}]; see footnote \ref{fnote2} on page \pageref{page1}.} Note that the small quantum corrections that arise from the second term in Eq.~(\ref{e3.14}) have been omitted. For very small distance between the atom and the sphere, Eqs.~(\ref{e7.3}) and (\ref{e7.4}) acquire the same form as for a planar interface, Eq.~(\ref{e5.9}). In Fig.~\ref{f5}, an example of the dependence on the transition frequency of the frequency shift for a radially oriented dipole is shown.
It is seen that the field resonances can give rise to noticeable frequency shifts in the very vicinities of the corresponding resonance frequencies. Transition frequencies that are lower (higher) than a resonance frequency are shifted to lower (higher) frequencies. In close analogy to the behavior of the decay rate, the frequency shift is more pronounced for SG resonances than for WG resonances and can be huge for large angular-momentum numbers when the lines of the SG field resonances strongly overlap. The behavior of the frequency shift as shown in Fig.~\ref{f5}(b) can already be seen in the single-resonance approximation [\citet{Ching87}]. Let the atomic transition frequency $\omega_A$ be close to a resonance frequency $\omega_C$ of the microsphere and assume that, in a first approximation, the effect from the other resonances may be ignored. For a Lorentzian resonance line of width $\Delta\omega_C$, from Eq.~(\ref{e3.13}) it then follows that \begin{eqnarray} \label{e7.5} \delta\omega_A \hspace{-1ex}&\simeq&\hspace{-1ex} -{{\cal P} \over 4\pi}\, \!\! \int_{-\infty}^\infty \!\!{\rm d}\omega\, {1\over \omega\!-\!\omega_A} {\Gamma_C \Delta\omega_C^2\over (\omega\!-\!\omega_C)^2\!+\!\Delta\omega_C^2} \nonumber\\[.5ex] \hspace{-1ex}&=&\hspace{-1ex} -{\Gamma_C \Delta\omega_C \over 2} {\omega_A-\omega_C \over (\omega_A-\omega_C)^2+\Delta\omega_C^2} \,, \end{eqnarray} where $\Gamma_C$ (which corresponds to the height of the line) is the decay rate for \mbox{$\omega_A$ $\!=$ $\omega_C$}. In particular, Eq.~(\ref{e7.5}) indicates that the frequency shift peaks at half maximum on both sides of the resonance line. With increasing material absorption, the linewidth $\Delta\omega_C$ increases while $\Gamma_C$ decreases, and thus the absolute value of the frequency shift is reduced, the distance between the maximum and the minimum being somewhat increased. 
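As a quick numerical illustration (not from the original text, with arbitrarily chosen values of $\Gamma_C$ and $\Delta\omega_C$), the dispersion-shaped profile (\ref{e7.5}) can be evaluated directly; its extrema, $\mp\Gamma_C/4$, indeed occur at detunings $\pm\Delta\omega_C$, i.e., at the half-maximum points of the Lorentzian line:

```python
import numpy as np

# Sketch (illustrative parameters): the single-resonance frequency shift,
# Eq. (e7.5), has its extrema at the half-maximum points of the Lorentzian
# resonance line, i.e., at detunings +/- Delta, with values -/+ Gamma_C/4.
Gamma_C, Delta = 1.0, 0.05  # line height and half-width (assumed units)

def delta_omega(x):
    """Frequency shift (e7.5) versus the detuning x = omega_A - omega_C."""
    return -0.5 * Gamma_C * Delta * x / (x**2 + Delta**2)

x = np.linspace(-10 * Delta, 10 * Delta, 200001)
shift = delta_omega(x)
x_min = x[np.argmin(shift)]  # most negative shift (above resonance)
x_max = x[np.argmax(shift)]  # most positive shift (below resonance)
```

The extremal positions scale with $\Delta\omega_C$ and the extremal values with $\Gamma_C$, in agreement with the broadening argument in the text.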
With decreasing distance between the atom and the microsphere near-field effects become important and Eq.~(\ref{e7.5}) fails, as can be seen from a comparison of Figs.~\ref{f5}(a) and (b). \subsection{Emitted-Light Intensity} \label{sec7.3} \subsubsection{Spatial distribution} \label{sec7.3.1} Substitution into Eq.~(\ref{e3.27}) of the expression for the Green tensor (Appendix \ref{app:spherical}) yields (\mbox{$\theta_A$ $\!=$ $\!\phi_A$ $\!=$ $\!0$}, \mbox{$r_A$ $\!\le$ $\!r$}) \begin{eqnarray} \label{e7.6} \lefteqn{ {\bf F}^\perp({\bf r},{\bf r}_A,\omega_A) = {k_A^3d \over 4\pi \varepsilon_0} \sum_{l=1}^\infty \biggl\{ (2l+1) } \nonumber\\[.5ex]&&\hspace{-1ex} \times\, {1\over {k_Ar_A}} \left[j_l({k_Ar_A}) + {\cal B}_l^N\!(\omega_A) h_l^{(1)}({k_Ar_A}) \right] \nonumber\\[.5ex]&&\hspace{-1ex} \times\, \biggl[ {\bf e}_r \,l(l\!+\!1)\, {h_l^{(1)}({k_Ar})\over {k_Ar}} \,P_l(\cos\theta) \nonumber\\[.5ex]&&\hspace{-1ex} -\, {\bf e}_\theta \,{[{k_Ar}\,h_l^{(1)}({k_Ar})]'\over {k_Ar}}\, \sin\theta P_l'(\cos\theta) \biggr] \!\biggr\} \qquad \end{eqnarray} for a radially oriented transition dipole moment, and \begin{eqnarray} \label{e7.7} \lefteqn{\hspace{-2ex} {\bf F}^\parallel({\bf r},{\bf r}_A,\omega_A) = {k_A^3d \over 4\pi \varepsilon_0} \sum_{l=1}^\infty \biggl\{ {(2l+1)\over l(l+1)} \biggl[ {\bf e}_r \cos \phi\, } \nonumber\\[.5ex]&&\hspace{-1ex}\times\, \tilde{\cal B}_l^N l(l\!+\!1)\,{h_l^{(1)}({k_Ar})\over {k_Ar}}\, \sin\theta P_l'(\cos\theta) \nonumber\\[.5ex]&&\hspace{-1ex} + \,{\bf e}_\theta \cos \phi \biggl(\tilde{\cal B}_l^M h_l^{(1)}({k_Ar}) P_l'(\cos\theta) \nonumber\\&&\hspace{-1ex} + \,\tilde{\cal B}_l^N {[{k_Ar}\,h_l^{(1)}({k_Ar})]'\over {k_Ar}}\, \tilde{P}_l(\cos\theta) \biggr) \nonumber\\[.5ex]&&\hspace{-1ex} -\,{\bf e}_\phi \sin \phi \biggl(\tilde{\cal B}_l^M h_l^{(1)}({k_Ar}) \tilde{P}_l(\cos\theta) \nonumber\\&&\hspace{-1ex} +\,\tilde{\cal B}_l^N {[{k_Ar}\,h_l^{(1)}({k_Ar})]'\over {k_Ar}}\,P_l'(\cos\theta) \biggr)\! \biggr]\!
\bigg\} \quad \end{eqnarray} for a tangentially oriented dipole in the $xz$-plane. Here the abbreviating notations \begin{eqnarray} \label{e7.8} \lefteqn{ \tilde{\cal B}_l^N = {1\over {k_Ar_A}} \Big\{ [{k_Ar_A}j_l({k_Ar_A})]' } \nonumber\\&& +\, {\cal B}_l^N\!(\omega_A) [{k_Ar_A}h_l^{(1)}({k_Ar_A})]' \Big\} , \end{eqnarray} \begin{equation} \label{e7.9} \tilde{\cal B}_l^M = j_l({k_Ar_A}) + {\cal B}_l^M\!(\omega_A) h_l^{(1)}({k_Ar_A}) , \end{equation} \begin{equation} \label{e7.10} \tilde{P}_l (\cos\theta)= l(l+1)P_l(\cos\theta) - \cos\theta P'_l(\cos\theta) \end{equation} have been introduced. $|{\bf F}^{\perp(\|)} ({\bf r},{\bf r}_A,\omega_A)|^2$ determines, according to Eq.~(\ref{e3.26}), the spatial distribution of the light emitted by a radially (tangentially) oriented dipole. \begin{figure} \caption{\label{f6} Spatial distribution of the emitted-light intensity, $|{\bf F}^\perp({\bf r},{\bf r}_A,\omega_A)|^2$, for an atom with radially oriented transition dipole moment near a dielectric microsphere, for various atomic transition frequencies.} \end{figure} Let us restrict our attention to a radially oriented transition dipole moment. Examples of $|{\bf F}^\perp({\bf r},{\bf r}_A,\omega_A)|^2$ are plotted in Fig.~\ref{f6}. In this case, the far field is essentially determined by $F^\perp_\theta$, as an inspection of Eq.~(\ref{e7.6}) reveals. When the atomic transition frequency coincides with the frequency of a WG wave of angular-momentum number $l$ far from the band gap [Fig.~\ref{f6}(a)], then the corresponding $l$-term in the series (\ref{e7.6}) obviously yields the leading contribution to the emitted radiation, whose angular distribution is significantly determined by the term \mbox{$\sim$ $\!\sin\theta\, P'_l(\cos\theta)$}. Thus, the emission pattern has $l$ lobes in, say, the $yz$-plane, i.e., $l$ cone-shaped peaks around the $z$-axis, for symmetry reasons. The lobes near \mbox{$\theta$ $\!=$ $\!0$} and \mbox{$\theta$ $\!=$ $\!\pi$} are the most dominant ones in general, because of \begin{equation} \label{e7.11} -\sin\theta P'_l(\cos\theta) \sim (\sin\theta)^{-1/2} + O\bigl(l^{-1}\bigr) \end{equation} ($0$ $\!<$ $\!\theta$ $\!\le$ $\!\pi$).
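The $l$-lobe structure can also be made plausible numerically. The following sketch (an illustration, not from the source) counts the interior zeros of the angular factor $-\sin\theta\,P'_l(\cos\theta)$ on $(0,\pi)$; since $P'_l$ has $l-1$ simple zeros in $(-1,1)$, these zeros separate exactly $l$ lobes:

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch: the angular factor -sin(theta) * P_l'(cos(theta)) of the leading
# term in the series (e7.6) has l - 1 interior zeros on (0, pi), which
# separate l lobes of the emission pattern.
def angular_factor(l, theta):
    c = np.zeros(l + 1)
    c[l] = 1.0  # Legendre coefficient vector selecting P_l
    dP = legendre.legval(np.cos(theta), legendre.legder(c))  # P_l'(cos theta)
    return -np.sin(theta) * dP

def lobe_count(l):
    theta = np.linspace(1e-3, np.pi - 1e-3, 20001)
    f = angular_factor(l, theta)
    sign_changes = np.count_nonzero(np.diff(np.sign(f)) != 0)
    return sign_changes + 1  # l - 1 interior zeros => l lobes
```

Inspecting `np.abs(angular_factor(l, theta))` on a fine grid also shows the endpoint lobes dominating, consistent with the $(\sin\theta)^{-1/2}$ asymptotics of Eq.~(\ref{e7.11}).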
Note that the superposition of the leading term with the remaining terms in series (\ref{e7.6}) gives rise to some asymmetry with respect to the plane \mbox{$\theta$ $\!=$ $\!\pi/2$}. When the atomic transition frequency approaches (from below) a band gap (but is still outside it), a strikingly different behavior is observed [Fig.~\ref{f6}(b)]. The emission pattern changes to a two-lobe structure similar to that observed in free space, but bent away from the microsphere surface, the emission intensity being very small. Since near a band gap absorption losses dominate, a photon that is resonantly emitted is almost certainly absorbed and does not contribute to the far field in general. If the photon is emitted in a lower-order WG wave where radiative losses dominate, it has a better chance of escaping. The superposition of all these weak (off-resonant) contributions just forms the two-lobe emission pattern observed, as can also be seen from careful inspection of the series (\ref{e7.6}). When the atomic transition frequency is inside a band gap and coincides with the frequency of an SG wave of low order such that the radiative losses dominate, then the emission pattern resembles that observed for resonant interaction with a low-order WG wave [compare Figs.~\ref{f6}(a) and (c)]. With increasing transition frequency the absorption losses become substantial and eventually change the emission pattern in a quite similar way as they do below the band gap [compare Figs.~\ref{f6}(b) and (d)]. Obviously, the respective explanations are similar in the two cases. \subsubsection{Radiative versus nonradiative decay} \label{sec7.3.2} Since the imaginary part of both the vacuum Green tensor $\mbb{G}^V$ and the scattering term $\mbb{G}^{R}$ is transverse, the decay rate (\ref{e3.12}) results from the coupling of the atom to the transverse part of the electromagnetic field.
Nevertheless, the decay of the excited atomic state need not be accompanied by the emission of a real photon; instead, a matter quantum can be created, because of material absorption. To compare the two decay channels, let us consider, according to Eq.~(\ref{e3.29}), the fraction $W/W_0$ of the atomic (transition) energy that is radiated by an atom with a radially oriented transition dipole moment. Using Eqs.~(\ref{e3.26}) and (\ref{e7.6}), one derives [\citet{Ho01}] \begin{figure} \caption{\label{f8} Fraction $W/W_0$ of the emitted-light energy as a function of the atomic transition frequency for an atom with radially oriented transition dipole moment near a dielectric microsphere, for two values of the material absorption.} \end{figure} \begin{eqnarray} \label{e7.12} \lefteqn{ {W\over W_0} = {3\Gamma_0\over 2 \Gamma^\perp} \sum_{l=1}^\infty { l(l+1)(2l+1) \over ({k_Ar_A})^2} } \nonumber\\&&\times \left|j_l({k_Ar_A}) \!+\! {\cal B}_l^N\!(\omega_A) h_l^{(1)}({k_Ar_A}) \right|^2\!\!. \qquad\quad \end{eqnarray} Recall that \mbox{$W/W_0$ $\!=$ $\!1$} implies fully radiative decay, while \mbox{$W/W_0$ $\!=$ $\!0$} implies fully nonradiative decay. The dependence of the ratio $W/W_0$ on the atomic transition frequency is illustrated in Fig.~\ref{f8}. The minima at the WG field resonance frequencies indicate that the nonradiative decay is enhanced relative to the radiative one. Obviously, photons at these frequencies are captured inside the microsphere for some time, and hence the probability of photon absorption is increased. For transition frequencies inside a band gap, two regions can be distinguished. In the low-frequency region, where low-order SG waves are typically excited, radiative decay dominates. Here, the light penetration depth into the sphere is small and the probability of a photon being absorbed is small as well. With increasing atomic transition frequency the penetration depth increases and the chance of a photon escaping drastically diminishes. As a result, nonradiative decay dominates. Clearly, the strength of the effect decreases with decreasing material absorption [compare Fig.~\ref{f8}(a) with (b)].
{F}rom the figure, two well-pronounced minima of the totally emitted light energy, i.e., noticeable maxima of the energy transfer to the matter, are seen for transition frequencies inside the band gap. The first minimum results from the overlapping high-order SG waves, which mainly suffer absorption losses. The second one is observed at the longitudinal resonance frequency of the medium. It can be attributed to the atomic near-field interaction with the longitudinal component of the medium-assisted electromagnetic field, the strength of the longitudinal field resonance being proportional to $\varepsilon_{I}$. Hence, the dip in the emitted radiation energy at the longitudinal frequency diminishes with decreasing material absorption and may disappear when the atom is moved sufficiently away from the surface. \subsubsection{Temporal evolution} \label{sec7.3.3} Throughout this section we have restricted our attention to the weak-coupling regime, where the excited atomic state decays exponentially, Eq.~(\ref{e3.15}). When retardation is disregarded, the intensity of the emitted light (at some chosen space point) simply decreases exponentially, Eq.~(\ref{e3.26}). To study the effect of retardation, the frequency integral in the exact equation (\ref{e3.21}) must in general be performed numerically. Typical examples of the temporal evolution of the far-field intensity are shown in Fig.~\ref{f10} for a radially oriented transition dipole moment in the case when the atomic transition frequency coincides with the frequency of a WG wave. \begin{figure} \caption{\label{f10} Temporal evolution of the far-field intensity of the light emitted by an atom near a microsphere, for a radially oriented transition dipole moment tuned to a WG resonance and two values of the quality factor.} \end{figure} Whereas the long-time behavior of the intensity of the emitted light is, with little error, exponential, the short-time behavior (on a time scale given by the atomic decay time) sensitively depends on the quality factor [$Q$ $\!\sim$ $\!10^3$ in Fig.~\ref{f10}(a), $Q$ $\!\sim$ $\!10^4$ in Fig.~\ref{f10}(b)].
The observed delay between the upper-state atomic population and the intensity of the emitted light can be quite large for a high-$Q$ microsphere, because the time that a photon spends in the sphere increases with the $Q$\,value. Further, in the short-time domain some kink-like fine structure is observed, which obviously reflects the different arrival times associated with multiple reflections. \subsection{Metallic Microsphere} \label{sec7.4} The permittivity of a metal (on the basis of the Drude model) can be obtained by setting in Eq.~(\ref{e4.5}) the lowest resonance frequency $\omega_{T\alpha}$ equal to zero. Thus, the results derived for the band gap of a dielectric microsphere also apply, for appropriately chosen values of the corresponding \mbox{$\omega_{P\alpha}$ $\!\equiv$ $\!\omega_P$} and \mbox{$\gamma_\alpha$ $\!\equiv$ $\!\gamma$}, to a metallic sphere. In particular, the results obtained in the nonretardation limit (\mbox{$c$ $\!\to$ $\!\infty$}) and for small sphere sizes (\mbox{$R$ $\!\ll$ $\!\lambda_P$}) [\citet{Ruppin82,Agarwal83}] are recovered. \begin{figure} \caption{\label{f15} The decay rate of an atom near a metallic microsphere is shown as a function of the transition frequency for a radially oriented transition dipole moment and a single-resonance metal of Drude type (\mbox{$R$ $\!=$ $\!5\lambda_P$}).} \end{figure} The dependence of the decay rate on the transition frequency of an excited atom near a metallic microsphere is illustrated in Fig.~\ref{f15} for \mbox{$R$ $\!>$ $\!\lambda_P$}. The inset shows the emission pattern for the case when the atomic transition frequency coincides with the frequency of an SG wave. Note that the SG field resonances seen in Fig.~\ref{f15} obey, according to condition (\ref{e4.5.7}), the relation \mbox{$\omega/\omega_P$ $\!<$ $\!1/\sqrt{2}$ $\!\simeq$ $\!0.71$}.
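The bound $\omega_P/\sqrt{2}$ can be made plausible from the Drude permittivity alone. The sketch below is an assumption-laden illustration: it uses the standard large-$l$ surface-mode condition $\varepsilon_R(\omega)<-1$ as a stand-in for the full condition (\ref{e4.5.7}) of the text, and solves for the limiting frequency by bisection:

```python
import numpy as np

# Sketch: for a lossless Drude metal, eps(w) = 1 - (omega_P/w)^2.
# Assuming surface-guided modes require eps < -1 (the usual large-l
# surface-mode condition, standing in for the full condition of the
# text), the SG resonance frequencies are bounded by omega_P/sqrt(2).
omega_P = 1.0

def eps_drude(w):
    return 1.0 - (omega_P / w) ** 2

# Bisect for the frequency at which eps = -1 (upper edge of the SG band);
# eps_drude is monotonically increasing in w on (0, omega_P).
lo, hi = 0.1 * omega_P, omega_P
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if eps_drude(mid) < -1.0:
        lo = mid
    else:
        hi = mid
omega_SG_max = 0.5 * (lo + hi)
```

Solving $1-(\omega_P/\omega)^2=-1$ analytically gives the same value, $\omega=\omega_P/\sqrt{2}\simeq 0.71\,\omega_P$.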
When the radius of the microsphere becomes substantially smaller than the wavelength $\lambda_P$, then distinct peaks are only seen for a few lowest-order resonances [\citet{Ruppin82,Agarwal83}]. It is worth noting that, in contrast to dielectric matter, a large absorption in metals can substantially enhance the near-surface divergence of the decay rate, Eq.~(\ref{e5.7}), which is in agreement with experimental observations of the fluorescence from dye molecules near a planar metal surface [\citet{Drexhage74}]. \setcounter{equation}{0} \section{\hspace{0ex}QUANTUM CORRELATIONS} \label{sec:2atoms} Let us finally address the problem of spontaneous decay of an atom in the case when there is a second atom that is resonantly dipole-dipole coupled to the first one. Similarly to single-atom spontaneous decay, the dipole-dipole interaction can be controlled by the presence of macroscopic bodies. Various aspects of the problem have been discussed for bulk material [\citet{Knoester89}], photonic crystals [\citet{Kurizki88,Kurizki90,John95,Bay97,Rupasov97}], optical lattices [\citet{Goldstein97b,Guzman98}], planar cavities [\citet{Kobayashi95a,Agarwal98}] (and unspecified cavities [\citet{Kurizki96,Goldstein97a}]), and microspheres [\citet{Agarwal00}]. In particular, resonant energy transfer realized through dipole-dipole interaction has been studied theoretically for bulk material [\citet{Juzeliunas94a,Juzeliunas94b}], microspheres [\citet{Gersten84,Druger87,Leung88}], and planar microcavities [\citet{Kobayashi95a,Kobayashi95b}], and experimentally for droplets [\citet{Folan85}] and planar microstructures [\citet{Hopmeier99,Andrew00}], with potential for enhanced photon-harvesting systems and optical networks. 
Interatomic interaction can give rise to nonclassical correlations and may be used for entangled-state preparation, which has been of increasing interest in the study of fundamental issues of quantum mechanics and with regard to applications in quantum information processing. The entanglement, which is very weak in free space, may be expected to be enhanced significantly in resonator-like equipment. Proposals have been made for entangling spatially separated atoms in Jaynes-Cummings systems through sequential or simultaneous strong atom-field coupling [\citet{Kudryavtsev93,Phoenix93,Cirac94,Freyberger96, Gerry96,Plenio99,Beige00}]. Unfortunately, the Jaynes-Cummings model does not give any indication of the influence on entanglement of the actual properties (such as form and intrinsic dispersion and absorption) of the bodies in a real scheme. {F}rom Section \ref{sec3.2} we know that not only the evolution of a single atom but also the mutual evolution of two atoms is fully determined by the Green tensor according to the chosen configuration of macroscopic bodies. The formalism thus renders it possible to examine interatomic quantum correlations established in the presence of arbitrary macroscopic bodies. \subsection{Entangled-state preparation} \label{subsec:ent_gen} \paragraph{Weak Coupling} Let us consider, e.g., two atoms near a microsphere of the type studied in Section \ref{sec:microsphere}. To be more specific, let us assume that the (two-level) atoms are of the same kind, that they are located at diametrically opposite positions (\mbox{${\bf r}_A$ $\!=$ $\!-{\bf r}_B$}), and that their transition dipole moments are radially oriented.
Obviously, the conditions (\ref{e8.20c}) and (\ref{e8.20a}) are fulfilled for such a system, so that from Eqs.~(\ref{e8.9}) and (\ref{e8.10}) together with the Green tensor for a microsphere (Appendix \ref{app:spherical}) one then finds that \begin{eqnarray} \label{e8.16} \lefteqn{ \Gamma_\pm^\perp = {\textstyle{3\over 2}} \Gamma_0 \sum_{l=1}^\infty {\rm Re}\biggl\{ {l(l+1)(2l+1) \over (k_Ar_A)^2} } \nonumber\\&& \times\, \left[ j_l(k_Ar_A)+ {\cal B}^N_l\!(\omega_A) h^{(1)}_l(k_Ar_A) \right] \nonumber\\&& \times\, h^{(1)}_l(k_Ar_A) \left[ 1\mp(-1)^l\right] \biggr\}. \end{eqnarray} When atom $A$ is initially in the upper state and atom $B$ is accordingly in the lower state, then the two superposition states $|+\rangle$ and $|-\rangle$, Eq.~(\ref{e8.14}), are initially equally excited [\mbox{$C_+(0)$ $\!=$ $\!C_-(0)$ $\!=$ $\!2^{-\frac{1}{2}}$}]. If the atomic transition frequency coincides with a microsphere resonance, the most significant contribution to the single-atom decay rate $\Gamma^\perp$, Eq.~(\ref{e7.1}), comes (for sufficiently small atom-surface distance) from the corresponding term in the $l$-sum, i.e., \begin{eqnarray} \label{e8.16b} \lefteqn{ \Gamma^\perp \simeq {\textstyle{3\over 2}} \Gamma_0\, l(l+1)(2l+1) } \nonumber\\&& \times\, {\rm Re} \biggl\{ \biggl[{h^{(1)}_l(k_Ar_A) \over k_Ar_A}\biggr]^2 {\cal B}^N_l\!(\omega_A) \biggr\}, \quad \end{eqnarray} and Eq.~(\ref{e8.16}) can be approximated as follows: \begin{equation} \label{e8.16c} \Gamma_\pm^\perp \simeq \Gamma^\perp \left[1 \mp (-1)^l\right]. \end{equation} Hence \mbox{$\Gamma_-$ $\!\gg$ $\!\Gamma_+$} (\mbox{$\Gamma_+$ $\!\gg$ $\!\Gamma_-$}) if $l$ is even (odd), i.e., the state $|-\rangle$ ($|+\rangle$) decays much faster than the state $|+\rangle$ ($|-\rangle$). 
Consequently, there exists a time window during which the overall system is prepared in an entangled state that is a superposition of the state with the atoms being in the state $|+\rangle$ ($|-\rangle$) and the medium-assisted field being in the ground state, and all the states with the atoms being in the lower state $|L\rangle$ and the medium-assisted field being in a single-quantum Fock state. The window is opened when the state $|-\rangle$ ($|+\rangle$) has already decayed while the state $|L\rangle$ emerges, and it is closed roughly after the lifetime of the state $|+\rangle$ ($|-\rangle$). \begin{figure} \caption{\label{fig:ent} Decay rates $\Gamma_\pm$ of the superposition states $|\pm\rangle$ for two atoms at diametrically opposite positions near a microsphere: (a) dependence on the transition frequency inside a band gap; (b) dependence on the atom-surface distance.} \end{figure} As a result, the two atoms are also entangled to each other. The state is a statistical mixture, the density operator of which is obtained from the density operator of the overall system by taking the trace with respect to the medium-assisted field. Within the approximation (\ref{e8.16c}) it takes the form \begin{eqnarray} \label{e8.16a} \hat{\rho}_A \simeq |C_\pm(t)|^2 |\pm\rangle\langle\pm| + \left[1-|C_\pm(t)|^2 \right] |L\rangle\langle L|, \end{eqnarray} where \begin{equation} \label{e8.16d} C_\pm(t) \simeq 2^{-1/2} e^{-\Gamma_\pm t} . \end{equation} Applying the criterion suggested by \citet{Peres96}, it is not difficult to prove that the state (\ref{e8.16a}) is indeed inseparable. It is worth noting that the atoms become entangled within the weak-coupling regime, starting from the state $|U_A\rangle$ (or $|U_B\rangle$) and the vacuum field. In the language of (Markovian) damping theory one would probably say that the two atoms are coupled to the same dissipative system, which gives rise to the quantum coherence. The frequency dependence of $\Gamma_\pm$ as given by Eq.~(\ref{e8.16}) is illustrated in Fig.~\ref{fig:ent}(a) for a frequency interval inside a band gap, and the dependence on the atom-surface distance is illustrated in Fig.~\ref{fig:ent}(b).
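The inseparability of the state (\ref{e8.16a}) via the Peres criterion can be checked in a few lines. The following sketch (an illustration, with an arbitrarily chosen weight $p=|C_\pm(t)|^2=0.3$) builds the state for $|+\rangle = 2^{-1/2}(|UL\rangle+|LU\rangle)$ and verifies that its partial transpose has a negative eigenvalue:

```python
import numpy as np

# Sketch: Peres partial-transpose test of the two-atom state (e8.16a),
# rho = p |+><+| + (1 - p) |L><L|, with |+> = (|UL> + |LU>)/sqrt(2).
# A negative eigenvalue of the partial transpose proves inseparability.
# The weight p = |C_pm(t)|^2 is an arbitrary illustrative value in (0, 1/2].
p = 0.3
plus = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2.0)  # basis |UU>,|UL>,|LU>,|LL>
LL = np.array([0.0, 0.0, 0.0, 1.0])
rho = p * np.outer(plus, plus) + (1.0 - p) * np.outer(LL, LL)

# Partial transpose with respect to atom B (the second tensor factor):
# swap the two column-side indices of atom B.
rho_TB = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
min_eig = np.linalg.eigvalsh(rho_TB).min()  # negative => inseparable
```

The negative eigenvalue persists for any $p>0$, i.e., the atoms are entangled throughout the time window, though only weakly so for small $|C_\pm(t)|^2$.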
We see that the values of $\Gamma_+$ and $\Gamma_-$ can be substantially different from each other before they tend to the free-space rate $\Gamma_0$ as the distance from the sphere becomes sufficiently large. In particular, the decay of one of the states $|+\rangle$ or $|-\rangle$ can be strongly suppressed [see the minimum value of $\Gamma_+$ in Fig.~\ref{fig:ent}(b)] at the expense of the other one, which rapidly decays. Note that $\Gamma_+$ also differs from $\Gamma_-$ for two atoms in free space [\citet{DeVoe96}]. However, the difference that occurs by mediation of the microsphere is much larger. For example, at the distance for which $\Gamma_+$ attains the minimum in Fig.~\ref{fig:ent}(b), the ratio \mbox{$\Gamma_-/\Gamma_+$ $\!\simeq$ $\!67000$} is observed, which is to be compared with the free-space ratio \mbox{$\Gamma_-/\Gamma_+$ $\!\simeq$ $\!1.0005$}. The effect may become even more pronounced for larger microsphere sizes and lower material absorption, i.e., sharper microsphere resonances. Needless to say, the effect is observed not only for the SG waves considered in Fig.~\ref{fig:ent}, but also for WG waves. \paragraph{Strong Coupling} Entangled-state preparation in the weak-coupling regime has the advantage that it could routinely be achieved experimentally. However, the value of $|C_+(t)|^2$ in Eq.~(\ref{e8.16a}) (or the value of $|C_-(t)|^2$ in the corresponding equation for the state $|-\rangle$) is always less than $1/2$. In order to achieve a higher degree of entanglement, the strong-coupling regime is required. Let us assume that the two atoms are initially in the ground state and the medium-assisted field is excited. The field excitation can be achieved, for example, by coupling an excited atom $D$ to the microsphere and then making sure that the atomic excitation is transferred to the field (cf. Section \ref{subsec:basic}). If the atom $D$ strongly interacts with the field, the excitation transfer can be controlled by adjusting the interaction time.
Another possibility would be measuring the state populations and discarding the events where the atom is found in the upper state. Here we restrict our attention to the first method and assume that all three atoms $D$, $A$, and $B$ strongly interact with the same microsphere resonance (of mid-frequency $\omega_C$ and line width $\Delta\omega_C$). According to Eq.~(\ref{e6.5}), the upper-state probability amplitude $C_{U_D}(t)$ of atom $D$ reads \begin{equation} \label{e8.19} C_{U_D}(t) = e^{-\Delta\omega_C (t+\Delta t)/2} \cos [\Omega_D (t+\Delta t)/2], \end{equation} with $\Omega_D$ being given according to Eq.~(\ref{e6.6}). For \begin{equation} \label{e8.22} \Delta t = \pi/\Omega_D , \end{equation} the initially (i.e., at time \mbox{$t$ $\!=$ $\!-\Delta t$}) excited atom $D$ is at time $t$ $\!=$ $\!0$ in the lower state [\mbox{$C_{U_D}(0)$ $\!=$ $\!0$}]. {F}rom the preceding subsection we know that when the resonance angular-momentum number $l$ is odd (even), then the state $|+\rangle$ ($|-\rangle$) ``feels'' a sharply peaked high density of medium-assisted field states, so that a strong-coupling approximation of the type (\ref{e6.2}) applies. The state $|-\rangle$ ($|+\rangle$), in contrast, ``feels'' a flat one and the (weak-coupling) Markov approximation applies. Assuming atom $A$ (or $B$) is at the same position as was atom $D$, from Eqs.~(\ref{e8.17}), (\ref{e8.19}), and (\ref{e8.22}) we then find that \begin{equation} \label{e8.21} C_\pm(t) \simeq - e^{-\Delta\omega_C( t +\pi/\Omega_D)/2} \sin(\Omega_\pm t/2) \end{equation} [with $\Omega_\pm$ according to Eq.~(\ref{e6.6})] and \begin{equation} \label{e8.21a} C_\mp(t) \simeq 0 \end{equation} (the sign of $C_-(t)$ in Eq. (\ref{e8.21}) is reversed if atom $B$ is at the same position as was atom $D$). Note that $\Omega_\pm$ $\!=$ $\!2^{\frac{1}{2}}\Omega$, because of Eq.~(\ref{e8.16c}). 
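A one-line consistency check of this relation (hedged: it assumes, in line with the usual cavity-QED scaling, that the squared vacuum Rabi frequency entering Eq.~(\ref{e6.6}) is proportional to the corresponding weak-coupling decay rate): for the strongly coupled superposition state,
\[
\frac{\Omega_\pm^2}{\Omega^2} = \frac{\Gamma_\pm^\perp}{\Gamma^\perp}
= 1 \mp (-1)^l = 2
\quad\Longrightarrow\quad
\Omega_\pm = 2^{1/2}\,\Omega ,
\]
where the value $2$ refers to the state selected by the parity of $l$, cf. Eq.~(\ref{e8.16c}).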
The two-atom entangled state is again of the form given in Eq.~(\ref{e8.16a}), but now the weight of the state $|+\rangle$ ($|-\rangle$) can reach values larger than $1/2$, provided that the resonance linewidth $\Delta\omega_C$ is small enough. \subsection{Violation of Bell's inequality} \label{subsec:Bell_ineq} Violations of Bell's inequalities provide support for quantum mechanics against local (hidden-variables) theories [\citet{Bell65,Clauser69}]. Despite outstanding progress\footnote{For recent experiments using photons, see \citet{Weihs98,Kuzmich00}, and using trapped ions, see \citet{Rowe01}.} in the test of Bell's inequalities, a decisive experiment to rule out any local realistic theory is yet to be performed [\citet{Vaidman01}], and the problem continues to attract much attention. Though entangled states of spatially separated atoms in a cavity have been observed [\citet{Hagley97}], a test of Bell's inequalities for such a system has yet to be realized. Bell's inequality for spin systems can be written in the form [\citet{Bell65,Clauser69}] \begin{eqnarray} \label{e8.23} \lefteqn{ B_S=|E(\theta_1,\theta_2)-E(\theta_1,\theta'_2) } \nonumber\\[.5ex] && +\,E(\theta'_1,\theta_2)+E(\theta'_1,\theta'_2)| \le 2, \qquad \end{eqnarray} where \begin{equation} \label{e8.24} E(\theta_1,\theta_2) = \bigl\langle \hat{\sigma}_A^{\theta_1} \hat{\sigma}_B^{\theta_2} \bigr\rangle, \end{equation} \begin{equation} \label{e8.25} \qquad\quad \hat{\sigma}_A^{\theta} = \cos\theta \,\hat{\sigma}_A^x + \sin\theta \,\hat{\sigma}_A^y . \end{equation} When the atomic state $|u_A,u_B\rangle$ is not populated, as is the case for a state of the type (\ref{e8.16a}), it is not difficult to prove that \begin{equation} \label{e8.25a} E(\theta_1,\theta_2) = E(\theta_1-\theta_2,0).
\end{equation} \begin{figure} \caption{\label{fig:Bell_ineq} (a) Temporal evolution of the Bell parameter $B_S$ for two atoms near a dielectric microsphere. (b) Dependence of the ratio $\Omega/\Delta\omega_C$ on the distance of the atoms from the sphere; the inset shows the maximum value of $B_S$.} \end{figure} Let us choose \begin{equation} \label{e8.25b} \theta = \theta_1 - \theta_2 = \theta_2 - \theta'_1 = \theta'_1 - \theta'_2\,. \end{equation} The inequality (\ref{e8.23}) thus simplifies to \begin{equation} \label{e8.26} B_S=|3E(\theta,0) - E(3\theta,0)| \le 2 . \end{equation} An entangled state of the type (\ref{e8.16a}) can only give rise to a violation of Bell's inequality if \mbox{$|C_+(t)|^2$ $\!\ge$ $\!2^{-\frac{1}{2}}$ $\!\simeq$ $\!0.71$} [\citet{Beige00}], which cannot be achieved in the weak-coupling regime, Eq.~(\ref{e8.16d}). It can be achieved, in contrast, in the strong-coupling regime, Eq.~(\ref{e8.21}), where \begin{eqnarray} \label{e8.27} \lefteqn{ E(\theta,0) = \cos\theta\,|C_\pm(t)|^2 } \nonumber\\[.5ex]&&\hspace{-1ex} = \cos\theta\, e^{-\Delta\omega_C(t+\pi/\Omega_D)} \sin^2\bigl(\Omega t/\sqrt{2}\bigr). \qquad \end{eqnarray} Substitution of this expression into Eq.~(\ref{e8.26}) yields, on choosing \mbox{$\theta$ $\!=$ $\!\pi/4$}, \begin{equation} \label{e8.28} B_S = 2\sqrt{2}\,e^{-\Delta\omega_C(t+\pi/\Omega_D)} \sin^2\bigl(\Omega t/\sqrt{2}\bigr), \end{equation} which clearly shows that $B_S$ $\!>$ $\!2$ becomes possible as long as \mbox{$\Delta\omega_C(t$ $\!+$ $\!\pi/\Omega_D)$ $\!\ll$ $\!1$}. Examples of the temporal evolution of $B_S$ for two atoms near a dielectric microsphere are shown in Fig.~\ref{fig:Bell_ineq}(a). In Fig.~\ref{fig:Bell_ineq}(b) the dependence of the ratio $\Omega/\Delta\omega_C$ on the distance of the atoms from the sphere is plotted. The strong-coupling regime can be observed for distances for which \mbox{$\Omega/\Delta\omega_C$ $\!\gg$ $\!1$} is valid. The inset reveals that the maximum value of $B_S$ decreases with increasing atom-surface distance and drops below the threshold value of $2$ while still in the strong-coupling regime.
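To make the threshold quantitative, the following Python sketch scans Eq.~(\ref{e8.28}) for its first maximum and confirms that $B_S$ $\!>$ $\!2$ in the strong-coupling regime \mbox{$\Omega$ $\!\gg$ $\!\Delta\omega_C$}. The parameter values are assumptions (arbitrary units), and $\Omega_D$ $\!=$ $\!\Omega$ is taken for simplicity:

```python
import math

# Assumed parameters (arbitrary units); Omega_D = Omega is taken for
# simplicity, and dw_C << Omega places us in the strong-coupling regime.
Omega, dw_C = 10.0, 0.1

def B_S(t):
    """Bell parameter of Eq. (8.28) for the choice theta = pi/4."""
    return (2 * math.sqrt(2) * math.exp(-dw_C * (t + math.pi / Omega))
            * math.sin(Omega * t / math.sqrt(2)) ** 2)

# The first maximum of sin^2 lies near t = pi/(sqrt(2) Omega); scan past it.
bmax = max(B_S(i * 1e-4) for i in range(10000))
print(bmax > 2)   # True: Bell's inequality (8.26) is violated
```

With these numbers the maximum is only slightly below the ideal value $2\sqrt{2}$, since the damping factor at the first Rabi maximum is close to unity.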
\section{SUMMARY} \label{sec:concl} We have studied spontaneous decay in the presence of dispersing and absorbing macroscopic bodies, based on a quantization of the (macroscopic) electromagnetic field in arbitrary linear, causal media. The formalism covers both weak and strong couplings and enables one to include material absorption and dispersion in a consistent way, without restriction to a particular frequency domain. It replaces the standard concept of orthogonal-mode decomposition, which requires real permittivities and thus does not allow for material absorption, with a source-quantity representation in terms of the classical Green tensor and appropriately chosen bosonic-field variables. All relevant information about the bodies, such as their shape and intrinsic dispersion and absorption properties, is contained in the Green tensor. The formalism has been applied to study the spontaneous decay of a single atom in the presence of various absorbing and dispersing macroscopic bodies, including open configurations such as bulk material and planar half-spaces, and closed configurations such as a spherical cavity or a microsphere. Absorption can noticeably influence spontaneous decay. For instance, the decay rate in absorbing bulk material takes a much more complicated form than one would expect from the simple product form that is commonly used for nonabsorbing matter. For an atom located very near a planar surface, material absorption causes the decay rate to rise drastically as the atom approaches the surface of the body, because of near-field-assisted (nonradiative) energy transfer from the atom to the medium. In fact, this holds for an atom sufficiently near an arbitrary body, because for short enough atom--surface distances any curved surface can be approximated by a planar one.
Spontaneous decay can be strongly influenced by field resonances that can appear due to the presence of macroscopic bodies, depending on whether or not the atomic transition frequency is tuned to a field resonance. In particular, the decay process can be mainly radiative or mainly nonradiative, depending on whether the radiative losses due to input-output coupling or the losses due to material absorption dominate. Moreover, to understand what happens when the atomic transition frequency lies inside a band-gap zone of a body, it is necessary to include material absorption in the analysis. Finally, spontaneous decay of (two) dipole-dipole coupled atoms in the presence of macroscopic bodies offers the possibility of entangled-state preparation and of verifying the violation of Bell's inequalities. Whereas entangled states can already be prepared in the weak-coupling regime, violation of Bell's inequalities requires the strong-coupling regime, the ultimate limits being set by material absorption. \section*{ACKNOWLEDGMENTS} We thank S. Scheel, E. Schmidt, and A. Tip for discussions. H.T.D. gratefully acknowledges support from the Alexander von Humboldt Stiftung. This work was supported by the Deutsche Forschungsgemeinschaft. \begin{appendix} \renewcommand{\theequation}{\Alph{section}.\arabic{equation}} \setcounter{equation}{0} \section{THE GREEN TENSOR} \label{appA} \subsection{Bulk Medium} \label{app:bulk} For bulk material the Green tensor reads \mbox{($\mbb{\rho}$ $\!=$ ${\bf r}$ $\!-$ $\!{\bf r}'$)} \begin{eqnarray} \label{se2.1} \mbb{G}({\bf r},{\bf r}',\omega) =\left[\mbb{\nabla}^r\otimes\mbb{\nabla}^r \!+\! \mbb{I}q^2(\omega) \right] \frac{{\rm e}^{iq(\omega)\rho}} {4\pi q^2(\omega)\rho}\,, \end{eqnarray} where \begin{eqnarray} \label{se2.1a} q(\omega) \hspace{-1ex}&=&\hspace{-1ex} \sqrt{\varepsilon(\omega)} \,\omega/c.
\end{eqnarray} It can be decomposed into a longitudinal and a transverse part, \begin{equation} \label{se2.1b} \mbb{G}({\bf r},{\bf r}',\omega) = \mbb{G}^\|({\bf r},{\bf r}',\omega) + \mbb{G}^\perp({\bf r},{\bf r}',\omega), \end{equation} where \begin{eqnarray} \label{se2.2a} \lefteqn{ \mbb{G}^\|({\bf r},{\bf r}',\omega) = -\frac{1}{4\pi q^2} \bigg[ \frac{4\pi}{3} \delta(\mbb{\rho}) \mbb{I} } \nonumber\\[.5ex]&&\hspace{8ex} +\left( \mbb{I} -\frac{3\mbb{\rho}\otimes\mbb{\rho}}{\rho^2} \right) \frac{1}{\rho^3} \bigg] \qquad \end{eqnarray} and \begin{eqnarray} \label{se2.2b} \lefteqn{ \mbb{G}^\perp({\bf r},{\bf r}',\omega) = \frac{1}{4\pi q^2} \bigg\{ \left( \mbb{I} -\frac{3\mbb{\rho} \otimes\mbb{\rho}}{\rho^2} \right) \frac{1}{\rho^3} } \nonumber \\ &&\hspace{-1ex} +\,q^3 \bigg[\! \left(\! \frac{1}{q\rho}\! +\!\frac{i}{(q\rho)^2} \!-\!\frac{1}{(q\rho)^3}\! \right) \mbb{I} \nonumber \\ &&\hspace{-1ex} \!-\!\left(\! \frac{1}{q\rho} \!+\!\frac{3i}{(q\rho)^2}\! -\!\frac{3}{(q\rho)^3}\! \right) \frac{\mbb{\rho}\otimes\mbb{\rho}}{\rho^2}\! \bigg] {\rm e}^{iq\rho}\! \bigg\}. \qquad \end{eqnarray} In particular, from Eq.~(\ref{se2.2b}) it follows that \begin{eqnarray} \label{se2.2c} {\rm Im}\,\mbb{G}^\perp({\bf r},{\bf r},\omega) \hspace{-1ex}&=&\hspace{-1ex} \lim_{{\bf r}'\to{\bf r}} {\rm Im}\,\mbb{G}^\perp({\bf r},{\bf r}',\omega) \nonumber \\ \hspace{-1ex}&=&\hspace{-1ex} \frac{\omega}{6\pi c}\,n_R(\omega) \mbb{I}. 
\end{eqnarray} \subsection{Spherical Multilayers} \label{app:spherical} The Green tensor of a spherical structure, consisting of ${\cal N}$ concentric layers, enumerated from the outside in (the outermost layer labeled as layer 1, the innermost layer as layer ${\cal N}$), can be decomposed into two parts, \begin{equation} \label{A2.1} \mbb{G}({\bf r},{\bf r'},\omega) = \mbb{G}^{(s)}({\bf r},{\bf r'},\omega) \delta_{fs} + \mbb{G}^{(fs)}({\bf r},{\bf r'},\omega), \end{equation} where $\mbb{G}^{(s)}({\bf r},{\bf r'},\omega)$ represents the contribution of the direct waves from the source in an unbounded space, and $\mbb{G}^{(fs)}({\bf r},{\bf r'},\omega)$ is the scattering part that describes the contribution of the multiple reflections \mbox{($f$ $\!=$ $\!s$)} and transmissions \mbox{($f$ $\!\neq$ $\!s$)} due to the presence of the surfaces of discontinuity ($f$ and $s$, respectively, refer to the regions containing the field and source points ${\bf r}$ and ${\bf r}'$). In Eq.~(\ref{A2.1}), $\mbb{G}^{(s)}$ is nothing but the bulk-material Green tensor (\ref{se2.1}). In the (local) spherical coordinate system it reads [see, e.g., \citet{Li94}] \begin{eqnarray} \label{A2.2} \lefteqn{ \mbb{G}^{(s)}({\bf r},{\bf r}',\omega) = \frac{{\bf e}_r\otimes{\bf e}_r}{k_s^2}\, \delta(r-r') } \nonumber\\[.5ex]&&\hspace{-2ex} +\,{ik_s\over 4\pi} \sum_{e\atop o} \sum_{l=1}^\infty \sum_{m=0}^l \biggl\{ {2l\!+\!1\over l(l\!+\!1)} {(l\!-\!m)!\over(l\!+\!m)!}\,(2\!-\!\delta_{0m}) \nonumber\\[.5ex]&&\hspace{-2ex} \times \left[ {\bf M}^{(1)}_{{e \atop o}lm} ({\bf r},k_s) \otimes {\bf M}_{{e \atop o}lm} ({\bf r}',k_s) \right. \nonumber\\[.5ex]&&\hspace{2ex} \left. +\,{\bf N}^{(1)}_{{e \atop o}lm} ({\bf r},k_s) \otimes {\bf N}_{{e \atop o}lm} ({\bf r}',k_s) \right] \biggr\} \qquad \end{eqnarray} if \mbox{$r$ $\!\ge$ $\!r'$}, and \mbox{$\mbb{G}^{(s)}({\bf r},{\bf r}',\omega)$ $\!=$ $\!\mbb{G}^{(s)}({\bf r}',{\bf r},\omega)$} if \mbox{$r$ $\!<$ $\!r'$}.
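The coincidence limit (\ref{se2.2c}) of the bulk Green tensor can be verified numerically. The Python sketch below (with an arbitrarily assumed complex permittivity, in units where $\omega/c$ $\!=$ $\!1$) evaluates the $xx$ component of Eq.~(\ref{se2.2b}) for $\mbb{\rho}$ along the $z$ axis at a small but finite separation and compares its imaginary part with $\omega n_R/(6\pi c)$:

```python
import cmath, math

w_c = 1.0                       # omega/c in arbitrary units
eps = 2.0 + 0.5j                # assumed complex permittivity
q = cmath.sqrt(eps) * w_c       # Eq. (se2.1a)
n_R = cmath.sqrt(eps).real      # real part of the refractive index

def G_perp_xx(rho):
    """xx component of Eq. (se2.2b) with rho along z (rho_x = rho_y = 0)."""
    x = q * rho
    near = 1.0 / rho**3                                # quasi-static term
    rad = q**3 * (1/x + 1j/x**2 - 1/x**3) * cmath.exp(1j * x)
    return (near + rad) / (4 * math.pi * q**2)

# Coincidence limit, Eq. (se2.2c): the imaginary part stays finite and tends
# to (omega/6 pi c) n_R, although the real part diverges as rho -> 0.
lhs = G_perp_xx(1e-3).imag
rhs = w_c * n_R / (6 * math.pi)
print(abs(lhs - rhs) < 1e-4)    # True
```

The residual deviation is linear in the separation $\rho$ and proportional to ${\rm Im}\,\varepsilon$, so the agreement improves as $\rho\to 0$ until floating-point cancellation in the $1/\rho^3$ terms sets in.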
The scattering part $\!\mbb{G}^{(fs)}({\bf r},{\bf r}',\omega)$ in Eq.~(\ref{A2.1}) reads [\citet{Li94}] \begin{eqnarray} \label{A2.3} \lefteqn{ \mbb{G}^{(fs)}({\bf r},{\bf r'},\omega) } \nonumber\\&&\hspace{-3ex} = {ik_s\over 4\pi} \sum_{e\atop o} \sum_{l=1}^\infty \sum_{m=0}^l \biggl( {2l\!+\!1\over l(l\!+\!1)} {(l\!-\!m)!\over(l\!\!+m)!}\,(2\!-\!\delta_{0m}) \nonumber\\[.5ex]&&\hspace{-2ex} \biggl\{ (1-\delta_{f{\cal N}}) {\bf M}^{(1)}_{{e \atop o}lm} ({\bf r},k_f) \otimes \bigl[ {\bf M}_{{e \atop o}lm} ({\bf r}',k_s) \nonumber\\[.5ex]&&\hspace{-4ex}\times\, (1-\delta_{s1}) {\cal A}^M_l (\omega) +\,{\bf M}^{(1)}_{{e \atop o}lm} ({\bf r}',k_s) (1-\delta_{s{\cal N}}) {\cal B}^M_l (\omega) \bigr] \nonumber\\[.5ex]&&\hspace{-2ex} +\,(1-\delta_{f{\cal N}}) {\bf N}^{(1)}_{{e \atop o}lm} ({\bf r},k_f) \otimes \bigl[ {\bf N}_{{e \atop o}lm} ({\bf r}',k_s) \nonumber\\[.5ex]&&\hspace{-4ex}\times\, (1-\delta_{s1}) {\cal A}^N_l (\omega) +\,{\bf N}^{(1)}_{{e \atop o}lm} ({\bf r}',k_s) (1-\delta_{s{\cal N}}) {\cal B}^N_l (\omega) \bigr] \nonumber\\[.5ex]&&\hspace{-2ex} +\,(1-\delta_{f1}) {\bf M}_{{e \atop o}lm} ({\bf r},k_f) \otimes \bigl[ {\bf M}_{{e \atop o}lm} ({\bf r}',k_s) \nonumber\\[.5ex]&&\hspace{-4ex}\times\, (1-\delta_{s1}) {\cal C}^M_l (\omega) +\,{\bf M}^{(1)}_{{e \atop o}lm} ({\bf r}',k_s) (1-\delta_{s{\cal N}}) {\cal D}^M_l (\omega) \bigr] \nonumber\\[.5ex]&&\hspace{-2ex} +\,(1-\delta_{f1}) {\bf N}_{{e \atop o}lm} ({\bf r},k_f) \otimes \bigl[ {\bf N}_{{e \atop o}lm} ({\bf r}',k_s) \nonumber\\[.5ex]&&\hspace{-4ex}\times\, (1-\delta_{s1}) {\cal C}^N_l (\omega) \nonumber\\&&\hspace{-0ex} +\,{\bf N}^{(1)}_{{e \atop o}lm} ({\bf r}',k_s) (1-\delta_{s{\cal N}}) {\cal D}^N_l (\omega) \bigr] \biggr\} \!\biggr), \end{eqnarray} where \begin{equation} \label{A2.3a} k_{f(s)}=\sqrt{\varepsilon_{f(s)}(\omega)}\,{\omega\over c}\,, \end{equation} and ${\bf M}$ and ${\bf N}$ represent TM- and TE-waves, respectively, \begin{eqnarray} \label{A2.4} \lefteqn{\hspace{-7ex} {\bf M}_{{e 
\atop o}nm}({\bf r},k) \!=\! \mp {m \over \sin\theta} j_n(kr) P_n^m(\cos\theta) {\sin\choose\cos} (m\phi) {\bf e}_{\theta} } \nonumber\\&& \hspace{-2ex} - j_n(kr) \frac{dP_n^m(\cos\theta)}{d\theta} {\cos\choose\sin} (m\phi) {\bf e}_{\phi}\,, \end{eqnarray} \begin{eqnarray} \label{A2.5} \lefteqn{\hspace{-4ex} {\bf N}_{{e \atop o}nm}({\bf r},k) \!=\! {n(n\!+\!1)\over kr} j_n(kr) P_n^m(\cos\theta) {\cos\choose\sin} (m\phi) {\bf e}_r } \nonumber\\&&\hspace{-2ex} + {1\over kr} \frac{d[rj_n(kr)]}{dr} \Biggl[ \frac{dP_n^m(\cos\theta)}{d\theta} {\cos\choose\sin} (m\phi) {\bf e}_{\theta} \nonumber \\&&\hspace{-2ex} \mp {m \over \sin\theta} P_n^m(\cos\theta) {\sin\choose\cos} (m\phi) {\bf e}_{\phi} \Biggr], \end{eqnarray} with $j_n(x)$ being the spherical Bessel function of the first kind and $P_n^m(x)$ being the associated Legendre function. The superscript ${(1)}$ in Eq.~(\ref{A2.3}) indicates that in Eqs.~(\ref{A2.4}) and (\ref{A2.5}) the spherical Bessel function $j_n(x)$ has to be replaced by the first-type spherical Hankel function $h^{(1)}_n(x)$. The coefficients ${\cal A}_l^{M,N}$, ${\cal B}_l^{M,N}$, ${\cal C}_l^{M,N}$, and ${\cal D}_l^{M,N}$ are to be found from the coupled recurrence equations \begin{eqnarray} \label{A2.6} \lefteqn{ \left( \begin{array}{cc} {\cal A}_{(f+1)s}^{M,N}+\delta_{(f+1)s} & {\cal B}_{(f+1)s}^{M,N}\\ {\cal C}_{(f+1)s}^{M,N} & {\cal D}_{(f+1)s}^{M,N} \end{array} \right) }\nonumber\\[.5ex]&& = \left( \begin{array}{cc} 1/T_{Ff}^{M,N} & R_{Ff}^{M,N} /T_{Ff}^{M,N} \\ R_{Pf}^{M,N} /T_{Pf}^{M,N} & 1/T_{Pf}^{M,N} \end{array} \right) \nonumber\\[.5ex]&&\hspace{2ex} \times \left( \begin{array}{cc} {\cal A}_{fs}^{M,N} & {\cal B}_{fs}^{M,N} \\ {\cal C}_{fs}^{M,N} & {\cal D}_{fs}^{M,N}+\delta_{fs} \end{array} \right) , \end{eqnarray} \begin{equation} \label{A2.6a} {\cal A}_{{\cal N}s}^{M,N} ={\cal B}_{{\cal N}s}^{M,N} ={\cal C}_{1s}^{M,N} ={\cal D}_{1s}^{M,N} = 0 \end{equation} [with $f$ and $s$ being taken according to Eq.~(\ref{A2.3})]. 
Here, the coefficients are redefined as \begin{eqnarray} \label{A2.6b} && {\cal A}_l^{M,N} \equiv {\cal A}_{fs}^{M,N}, \hspace{1ex} {\cal B}_l^{M,N} \equiv {\cal B}_{fs}^{M,N}, \nonumber\\ && {\cal C}_l^{M,N} \equiv {\cal C}_{fs}^{M,N}, \hspace{1ex} {\cal D}_l^{M,N} \equiv {\cal D}_{fs}^{M,N}, \qquad \end{eqnarray} and \begin{eqnarray} \label{A2.7} \lefteqn{\hspace{-6ex} R^M_{Pf} = \frac { k_{f+1} H'_{(f+1)f}H_{ff} \!-\! k_f H'_{ff}H_{(f+1)f} } { k_{f+1} J_{ff}H'_{(f+1)f} \!-\! k_f J'_{ff}H_{(f+1)f} }\,, } \\[.5ex] && \hspace{-10ex} \label{A2.8} R^M_{Ff} = \frac { k_{f+1} J'_{(f+1)f} J_{ff} \!-\! k_f J'_{ff}J_{(f+1)f} } { k_{f+1} J'_{(f+1)f} H_{ff} \!-\! k_f J_{(f+1)f} H'_{ff} }\,, \\[.5ex] &&\hspace{-10ex} \label{A2.9} R^N_{Pf} = \frac { k_{f+1} H_{(f+1)f} H'_{ff} \!-\! k_f H_{ff}H'_{(f+1)f} } { k_{f+1} J'_{ff} H_{(f+1)f} \!-\! k_f J_{ff}H'_{(f+1)f} }\,, \\[.5ex] && \hspace{-10ex} \label{A2.10} R^N_{Ff} = \frac { k_{f+1} J_{(f+1)f} J'_{ff} \!-\! k_f J_{ff}J'_{(f+1)f} } { k_{f+1} J_{(f+1)f} H'_{ff} \!-\! k_f J'_{(f+1)f}H_{ff} }\,, \end{eqnarray} \begin{eqnarray} \lefteqn{\hspace{-3ex} \label{A2.11} T^M_{Pf} = \frac { k_{f+1} [ J_{(f+1)f}H'_{(f+1)f} \!-\! J'_{(f+1)f}H_{(f+1)f} ]} { k_{f+1} J_{ff}H'_{(f+1)f} \!-\! k_f J'_{ff}H_{(f+1)f} }\,, } \nonumber\\[.5ex]&& \\&& \hspace{-7ex} \label{A2.12} T^M_{Ff} = \frac { k_{f+1}[J'_{(f+1)f} H_{(f+1)f} \!-\! J_{(f+1)f} H'_{(f+1)f} ] } { k_{f+1} J'_{(f+1)f} H_{ff} \!-\! k_f J_{(f+1)f} H'_{ff} }\,, \nonumber\\[.5ex]&& \\&& \hspace{-7ex} \label{A2.13} T^N_{Pf} = \frac { k_{f+1} [J'_{(f+1)f} H_{(f+1)f} \!-\! J_{(f+1)f}H'_{(f+1)f} ] } { k_{f+1} J'_{ff} H_{(f+1)f} \!-\! k_f J_{ff}H'_{(f+1)f} }\,, \nonumber\\[.5ex]&& \\&& \hspace{-7ex} \label{A2.14} T^N_{Ff} = \frac { k_{f+1}[ J_{(f+1)f} H'_{(f+1)f} \!-\! J'_{(f+1)f}H_{(f+1)f} ] } { k_{f+1} J_{(f+1)f} H'_{ff} \!-\! 
k_f J'_{(f+1)f}H_{ff} }\,, \nonumber\\[.5ex]&& \end{eqnarray} with \begin{eqnarray} \label{A2.15} J_{il} \hspace{-1ex}&=&\hspace{-1ex} j_n(k_iR_l) , \\[.5ex] \label{A2.16} H_{il} \hspace{-1ex}&=&\hspace{-1ex} h^{(1)}_n(k_iR_l) , \\[.5ex] \label{A2.17} J'_{il} \hspace{-1ex}&=&\hspace{-1ex} {1\over \rho} { d[\rho j_n(\rho)] \over d\rho } \bigg|_{\rho=k_iR_l} , \\[.5ex] \label{A2.18} H'_{il}\hspace{-1ex}&=&\hspace{-1ex} {1\over \rho} { d[\rho h^{(1)}_n(\rho)] \over d\rho } \bigg|_{\rho=k_iR_l}. \end{eqnarray} The coefficients in the first and the last layers can be found immediately from Eqs. (\ref{A2.6}) and (\ref{A2.6a}). The rest can be obtained by again using recurrence equations (\ref{A2.6}). \subsubsection{Two-layered medium} \label{app:2l} For a sphere of radius $R$ (including the special cases of an empty sphere in an otherwise homogeneous medium and a material sphere in vacuum) we have [\citet{Li94}] \begin{eqnarray} \label{A2.19} \lefteqn{ \mbb{G}^{(11)}({\bf r},{\bf r'},\omega) } \nonumber\\[.5ex]&&\hspace{-2ex} = {ik_1\over 4\pi} \sum_{e\atop o} \sum_{l=1}^\infty \sum_{m=0}^l {2l+1\over l(l+1)} {(l-m)!\over(l+m)!}\,(2\!-\!\delta_{0m}) \nonumber\\[.5ex]&&\hspace{-2ex}\times \Bigl[ {\cal B}^M_l (\omega) {\bf M}^{(1)}_{{e \atop o}lm} ({\bf r},k_1) \otimes {\bf M}^{(1)}_{{e \atop o}lm} ({\bf r}',k_1) \nonumber\\[.5ex]&&\hspace{-2ex} +\; {\cal B}^N_l (\omega) {\bf N}^{(1)}_{{e \atop o}lm} ({\bf r},k_1) \otimes {\bf N}^{(1)}_{{e \atop o}lm} ({\bf r}',k_1) \Bigr] \label{A2.20} \end{eqnarray} \mbox{($r,r'$ $\!>$ $\!R$)}, \begin{eqnarray} \lefteqn{ \mbb{G}^{(22)}({\bf r},{\bf r'},\omega) } \nonumber\\[.5ex]&&\hspace{-2ex} ={ik_2\over 4\pi} \sum_{e\atop o} \sum_{l=1}^\infty \sum_{m=0}^l {2l+1\over l(l+1)} {(l-m)!\over(l+m)!}\,(2\!-\!\delta_{0m}) \nonumber\\[.5ex]&&\hspace{-2ex}\times \Bigl[ {\cal C}^M_l (\omega) {\bf M}_{{e \atop o}lm} ({\bf r},k_2) \otimes {\bf M}_{{e \atop o}lm} ({\bf r}',k_2) \nonumber\\[.5ex]&&\hspace{-2ex} +\; {\cal C}^N_l (\omega) {\bf N}_{{e 
\atop o}lm} ({\bf r},k_2) \otimes {\bf N}_{{e \atop o}lm} ({\bf r}',k_2) \Bigr] \end{eqnarray} \mbox{($r,r'$ $\!<$ $\!R$)}, where \begin{eqnarray} \label{eA.21} {\cal B}^{M,N}_l(\omega) \hspace{-1ex}&=&\hspace{-1ex} - R^{M,N}_{F1} , \\[.5ex] \label{eA.22} {\cal C}^{M,N}_l(\omega) \hspace{-1ex}&=&\hspace{-1ex} - R^{M,N}_{F1} \frac{T^{M,N}_{F1} R^{M,N}_{P1}}{T^{M,N}_{P1}} \,. \qquad \end{eqnarray} \subsubsection{Three-layered medium} \label{app:3l} For three-layered media of radii $R_1$ and $R_2$ ($R_1>R_2$), in particular the spherical cavity shown in Fig.~\ref{fig:slabho}, we have [\citet{Li94}] \begin{eqnarray} \label{A.23} \lefteqn{\hspace{-4ex} \mbb{G}^{(13)}({\bf r},{\bf r'},\omega) = {ik_3\over 4\pi} \sum_{e\atop o} \sum_{n=1}^\infty \sum_{l=0}^n } \nonumber\\[.5ex] &&\hspace{-7ex}\times \Biggl\{ \frac{2n\!+\!1}{n(n\!+\!1)} \frac{(n\!-\!l)!}{(n\!+\!l)!} (2\!-\!\delta_{0l}) \nonumber \\[.5ex] &&\hspace{-7ex}\times \biggl[ {\cal A}^M_l (\omega) {\bf M}^{(1)}_{{e \atop o}nl} ({\bf r},k_1) \otimes {\bf M}_{{e \atop o}nl} ({\bf r}',k_3) \nonumber \\[.5ex] &&\hspace{-7ex} +\,{\cal A}^N_l (\omega) {\bf N}^{(1)}_{{e \atop o}nl} ({\bf r},k_1) \otimes {\bf N}_{{e \atop o}nl} ({\bf r}',k_3) \biggr]\Biggr\} \quad \end{eqnarray} ($r>R_1,\ r'<R_2$), \begin{eqnarray} \label{A.24} \lefteqn{\hspace{-4ex} \mbb{G}^{(33)}({\bf r},{\bf r'},\omega) = {ik_3\over 4\pi} \sum_{e\atop o} \sum_{n=1}^\infty \sum_{l=0}^n } \nonumber\\[.5ex] &&\hspace{-7ex}\times \Biggl\{ \frac{2n\!+\!1}{n(n\!+\!1)} \frac{(n\!-\!l)!}{(n\!+\!l)!} (2\!-\!\delta_{0l}) \nonumber \\[.5ex] &&\hspace{-7ex}\times \biggl[ {\cal C}^M_l (\omega) {\bf M}_{{e \atop o}nl} ({\bf r},k_3) \otimes {\bf M}_{{e \atop o}nl} ({\bf r}',k_3) \nonumber \\[.5ex] &&\hspace{-7ex} +\,{\cal C}^N_l (\omega) {\bf N}_{{e \atop o}nl} ({\bf r},k_3) \otimes {\bf N}_{{e \atop o}nl} ({\bf r}',k_3) \biggr]\Biggr\} \quad \end{eqnarray} ($r,\ r'<R_2$), where \begin{eqnarray} \label{A.25} \lefteqn{\hspace{-4ex} {\cal A}^{M,N}_l (\omega)
\!=\! \frac { T_{F1}^{M,N} T_{F2}^{M,N} T_{P1}^{M,N} } { T_{P1}^{M,N} \!+\! T_{F1}^{M,N} R_{P1}^{M,N} R_{F2}^{M,N} }\,, } \\[.5ex] &&\hspace{-7ex} \label{A.26} {\cal C}^{M,N}_l (\omega) \!=\! { {\cal A}^{M,N}_l \over T_{P2}^{M,N}} \left[ {R_{P2}^{M,N} \over T_{F1}^{M,N}} +{R_{P1}^{M,N} \over T_{P1}^{M,N}}\right]. \quad \end{eqnarray} \end{appendix} \end{document}
\begin{document} \title[Frobenius amplitude]{ Frobenius amplitude and strong vanishing theorems for vector bundles } \author{ Donu Arapura } \address{Arapura: Department of Mathematics\\ Purdue University\\ West Lafayette, IN 47907} \email{[email protected]} \thanks{Arapura partially supported by the NSF} \address{Keeler: Department of Mathematics \\ MIT \\ Cambridge, MA 02139-4307 } \thanks{ Keeler partially supported by an NSF Postdoctoral Research Fellowship.} \email{[email protected]} \urladdr{http://www.mit.edu/\~{}dskeeler} \subjclass{Primary 14F17} \date{} \maketitle \begin{center} With Appendices by Dennis S. Keeler \end{center} \begin{abstract} The primary goal of this paper is to systematically exploit the method of Deligne-Illusie to obtain Kodaira type vanishing theorems for vector bundles and more generally coherent sheaves on algebraic varieties. The key idea is to introduce a number, called the Frobenius or F-amplitude, which provides a cohomological measure of the positivity of a coherent sheaf. The F-amplitude enters into the statement of the basic vanishing theorem, and this leads to the problem of calculating, or at least estimating, this number. Most of the work in this paper is devoted to doing this in various situations. \end{abstract} \tableofcontents In \cite{deligne-ill}, Deligne, Illusie and Raynaud gave a beautiful proof of the Kodaira-Akizuki-Nakano vanishing theorem for ample line bundles using characteristic $p$ methods. The goal of this paper is to apply these methods to obtain vanishing theorems for vector bundles and, more generally, sheaves in a systematic fashion. In order to facilitate this, we introduce a cohomological measure of the positivity of a coherent sheaf on an algebraic variety that we call the Frobenius or $F$-amplitude. We also introduce some variations on this idea, such as the $F$-amplitude relative to a normal crossing divisor.
The smaller the amplitude, the more positive it is; when it is zero, we say that the sheaf is $F$-ample. ($F$-ample vector bundles have been called ``cohomologically $p$-ample'' in \cite{gieseker}, \cite{mi} and possibly elsewhere, but we prefer the shorter term.) $F$-ampleness for bundles of rank greater than one turns out to be an unreasonably restrictive notion, and it appears more useful to consider the class of bundles with small $F$-amplitude relative to the rank. As the terminology suggests, the definition of $F$-amplitude makes use of the Frobenius map in an essential way. However, it can be extended to characteristic zero by the usual reduction modulo $p$ tricks. While this leads to a definition, it is one that is not particularly convenient to use in practice. For curves and projective spaces, we can give a reformulation of $F$-amplitude in characteristic-free terms. In general, it seems that the best we can hope for are some reasonable bounds on $F$-amplitude, and much of this paper is devoted to finding such bounds. The key result in this direction is theorem~\ref{thm:keyestimate}, which shows that in characteristic zero the $F$-amplitude of an ample vector bundle is bounded above by its rank. The proof relies on some work of Carter and Lusztig in modular representation theory. The penultimate section contains the main theorem. It gives the vanishing of the cohomology groups of a sheaf on a smooth projective variety tensored with the differentials with logarithmic singularities along a divisor, in a range determined by the $F$-amplitude relative to the divisor. A special case of this for $F$-ample bundles had been considered by Migliorini \cite{mi}. The vanishing theorem is nominally a characteristic $p$ result; the interesting consequences are in characteristic zero. From this we are able to recover some old results, such as Le Potier's vanishing theorem, and to discover some new ones as well.
One corollary that we want to call attention to is the following Kawamata-Viehweg type theorem (cor.~\ref{cor:kvvan1}): Let $\E$ be a vector bundle on a smooth projective variety $X$. Suppose there is an effective fractional $\Q$-divisor $\Delta$ with normal crossing support $D$ such that $\E(-\Delta)$ is ample, which means that some symmetric power $S^m(\E)(-m\Delta)$ is ample in the usual sense. Then $H^i(\Omega_X^j(log D)(-D)\otimes \E) = 0$ for $i+j\ge dim\,X+ rank(\E)$; in particular, $H^i(\omega_X \otimes \E) = 0$ for $i\ge rank(\E)$. This result is put to use in the final section to obtain a refinement of the Lefschetz hyperplane theorem and a Le Potier theorem for noncompact varieties. The notion of $F$-semipositivity is obtained by relaxing the condition for $F$-ampleness. We show that $F$-semipositive vector bundles are nef. In characteristic $0$, more is true, namely $F$-semipositive bundles are ``arithmetically nef'', which means roughly that they specialize to nef bundles in positive characteristic. The converse fails in general. However, for line bundles the equivalence of these notions has been established by Dennis Keeler, and his proof is included as an appendix. This can be used to slightly extend the aforementioned vanishing theorem.
When $k$ is perfect, $F_k:spec\,k\to spec\,k$ and its base change $X'\to X$ are isomorphisms of $\Z/p\Z$-schemes. In view of this, the relative Frobenius can be replaced by the absolute Frobenius and $X'$ by $X$ in the statements of \cite[2.1, 4.2]{deligne-ill}. Given a coherent sheaf $\E$, denote $F^{n*}\E$ by $\E^{(p^n)}$. For a vector bundle $\E$ given by a $1$-cocycle $g_{ij}$, $\E^{(p^n)}$ is given by $g_{ij}^{p^n}$. If $\mathcal{I}$ is an ideal sheaf on $\PP^n$ generated by polynomials $f_i$, then $\mathcal{I}^{(p^n)}$ is the ideal sheaf generated by $f_i^{p^n}$. Define the {\em $F$-amplitude} $\phi(\E)$ of a coherent sheaf $\E$ to be the smallest integer $l$ such that for any locally free sheaf $\F$, there exists an $N$ such that $H^i(X, \E^{(p^m)}\otimes \F) = 0$ for all $i > l$ and $m > N$. A few words of caution should be added here. We are purposely using the naive definition, but this has reasonable properties only when $X$ is smooth (which implies that $F$ is flat) or $\E$ is locally free. In more general situations, $F^{n*}\E$ should be replaced by the derived pullback $LF^{n*}\E$, at which point $\E$ may as well be replaced by an object in $D_{coh}(X)$ (one day, perhaps). We have that $\phi(\E)$ is less than or equal to the coherent cohomological dimension of $X$, which is in turn less than or equal to $dim\, X$. Now suppose that $k$ is a field of characteristic $0$. By a diagram over a scheme $S$, we will mean a collection of $S$-schemes $X_i$, $S$-scheme morphisms $f_{ij}:X_{i}\to X_{j}$, $O_{X_{i}}$-modules $\E_{i,l}$ and morphisms between the pullbacks and pushforwards of these modules. Given a morphism $S'\to S$ and a diagram $D$ over $S$, we can define its fiber product $D\times_{S}S'$ in the obvious way.
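These descriptions of $\E^{(p^n)}$ are consistent because in characteristic $p$ the $p$-th power map is a ring homomorphism (every binomial coefficient $\binom{p}{k}$ with $0<k<p$ vanishes mod $p$), so it may be applied to cocycles and ideal generators term by term. A minimal Python sketch illustrates this for $p=5$; the encoding of two-variable polynomials over $\Z/p\Z$ as dictionaries from exponent pairs to coefficients, and all function names, are our own illustrative choices:

```python
from itertools import product

p = 5  # a prime; we work with polynomials over Z/pZ in two variables x, y

def padd(f, g):
    """Sum of two polynomials, coefficients reduced mod p."""
    h = dict(f)
    for e, c in g.items():
        h[e] = (h.get(e, 0) + c) % p
    return {e: c for e, c in h.items() if c}

def pmul(f, g):
    """Product of two polynomials, coefficients reduced mod p."""
    h = {}
    for (e1, c1), (e2, c2) in product(f.items(), g.items()):
        e = (e1[0] + e2[0], e1[1] + e2[1])
        h[e] = (h.get(e, 0) + c1 * c2) % p
    return {e: c for e, c in h.items() if c}

def ppow(f, n):
    """n-th power by repeated multiplication."""
    r = {(0, 0): 1}
    for _ in range(n):
        r = pmul(r, f)
    return r

x, y = {(1, 0): 1}, {(0, 1): 1}
f = padd(x, y)                       # f = x + y
lhs = ppow(f, p)                     # (x + y)^p
rhs = padd(ppow(x, p), ppow(y, p))   # x^p + y^p
print(lhs == rhs)                    # True: Frobenius is additive mod p
```

For exponents below $p$ the identity fails, which is why the construction is special to characteristic $p$.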
Given a diagram $D$ over $Spec\,k$, an {\em arithmetic thickening} (or simply just thickening) of it is a choice of a finitely generated $\Z$-subalgebra $A\subset k$, and a diagram $\tilde D$ over $Spec\, A$, so that $D$ is isomorphic to the fiber product over $Spec\, k$. Given two thickenings $\tilde D_{i}\to Spec\, A_{i}$, we will say the second refines the first if there is a homomorphism $A_1\to A_2$, and an isomorphism between $D_2$ and $D_1\times_{Spec A_1}Spec A_2$. By standard arguments (e. g. \cite[sect. 6]{illusie2}): \begin{lemma} Any finite diagram of $k$-schemes of finite type and coherent sheaves has an arithmetic thickening. Any two thickenings have a common refinement. \end{lemma} Suppose that $X$ is a quasiprojective $k$-variety with a coherent sheaf $\E$. Given a thickening $(\tilde X, \tilde \E)$ over $A$, we will write $p(q) = char(A/q)$, $X_q$ for the fiber and $\E_q = \tilde\E|_{X_{q}}$ for each closed point $q\in Spec A$. We will say that a property holds for {\em almost all $q$} if it holds for all $q$ in a nonempty open subset of $Spec\, A$. For each closed point $q\in Spec\, A$, the fiber $X_q$ is defined over the finite field $A/q$, so that the $F$-amplitude of the restriction $\E_q$ can be defined as before. We say that $i\ge\phi(\E)$ if and only if $i\ge\phi(\E_q)$ holds for almost all $q$. Equivalently, the $F$-amplitude $\phi(\E)$ is obtained by minimizing $\max_q\,\phi(\E_q)$ over all thickenings. Note that there is no (obvious) semicontinuity property for $\phi(\E_q)$. So it is not clear if this is the optimal definition, but it is sufficient for the present purposes. Any alternative definition should satisfy the following: for any arithmetic thickening of a finite collection of coherent sheaves $\E_1,\ldots \E_N$, there is a sequence of closed points $q_j$ with $char\, (A/q_j)\to \infty$ such that $\phi(\E_i)\ge \phi((\E_i)_{q_j})$. Let $X$ be a smooth projective variety over $k$. 
We have an ordering on divisors defined in the usual way: $D\le D'$ if and only if the coefficients of $D$ are less than or equal to the coefficients of $D'$. Fix a reduced divisor $D\subset X$ with normal crossings. Assume that $char\, k = p > 0$; then we define the $F$-amplitude of a coherent sheaf $\E$ relative to $D$ as follows: $$\phi(\E,D) = \min\{\phi(\E^{(p^n)}(-D'))\, |\,n\in \N,\, 0\le D'\le (p^n-1)D\}. $$ If $D'\le (p^n-1)D$ is a divisor for which this minimum is achieved, we will refer to the $\Q$-divisor $\frac{1}{p^n}D'$ as a critical divisor for $\E$ relative to $D$. It will be convenient to introduce the following relation on divisors: $A<_{strict} B$ if the multiplicity of $A$ along any irreducible component $C$ of the union of their supports is less than the multiplicity of $B$ along $C$. The upper inequality above then says precisely that $D'<_{strict} p^nD$. When $char\, k = 0$, we proceed as above: $\phi(\E,D)$ is the minimum of $\max_q\, \phi(\E_q,D_q)$ over all thickenings of $(X,D, \E)$. We define the generic $F$-amplitude $\phi_{gen}(\E)$ of a locally free sheaf $\E$ to be the infimum of $\phi(f^*\E, D)$, where $f:Y\to X$ varies over all birational maps $f$ with exceptional divisor $D$ such that $Y$ is smooth and $D$ has normal crossings. In any characteristic, we will define $\E$ to be \cp if and only if $\phi(\E) = 0$. We will see below that a line bundle is ample if and only if it is $F$-ample. However, for bundles of higher rank, $F$-ampleness is a stronger condition. In positive characteristic, $F$-ample vector bundles are the same as the cohomologically $p$-ample vector bundles defined in \cite{gieseker}. Most of the work below will be in positive characteristic; the proofs in characteristic zero are handled by standard semicontinuity arguments on a thickening. Throughout the rest of this paper, unless stated otherwise, $X$ will denote a projective variety over a field $k$, and the symbols $\E, \F,\ldots$ will denote coherent sheaves on $X$.
\section{Elementary bounds on $F$-amplitude} \begin{lemma}\label{lemma:one} If a sheaf $\F$ on a topological space is quasi-isomorphic to a bounded complex $\F^\dt$, then $H^i(\F) = 0$ provided that $H^a(\F^b) = 0$ for all $a+b=i$. \end{lemma} \begin{proof} This follows from the spectral sequence $$E_1^{ab}=H^b(\F^a)\Rightarrow \mathbb{H}^{a+b}(\F^\dt)\cong H^{a+b}(\F).$$ \end{proof} \begin{lemma}\label{lemma:locfree} Suppose $char\, k = p > 0$ and that $\E$ is a locally free sheaf on $X$. Then for any coherent sheaf $\F$, $$H^i(X, \E^{(p^m)}\otimes \F) = 0$$ for $i>\phi(\E)$ and $m>>0$. \end{lemma} \begin{proof} This will be proved by descending induction starting from $i=dim\,X+1$. Choose an ample line bundle $O(1)$. We can find an exact sequence $$0\to \F''\to \F'\to \F\to 0$$ where $\F'$ is a sum of twists of $O(1)$ (by Serre's theorems, we can take $\F'= H^0(\F(n))\otimes O(-n)$ for $n>>0$). Tensor this with $\E^{(p^m)}$ and apply the long exact sequence for cohomology: $H^i(X, \E^{(p^m)}\otimes \F')$ vanishes for $i>\phi(\E)$ and $m>>0$ because $\F'$ is locally free, and $H^{i+1}(X, \E^{(p^m)}\otimes \F'')$ vanishes by the induction hypothesis, so $H^i(X, \E^{(p^m)}\otimes \F) = 0$ for $i>\phi(\E)$ and $m>>0$. \end{proof} The proof gives something slightly stronger: \begin{cor}\label{cor:FamplViaLineb} Fix an ample line bundle $O_X(1)$. Then $\phi(\E)\le A$ if and only if for any $b$ there exists $n_0$ such that $$H^i(\E^{(p^n)}(b)) = 0$$ for all $i> A$, $n\ge n_0$. \end{cor} \begin{lemma}\label{lemma:ampleline} A line bundle is $F$-ample if and only if it is ample. \end{lemma} \begin{proof} First assume that we are over a field of characteristic $p>0$. Then we have $L^{(p^{n})} = L^{p^n}$. Therefore if $L$ is ample, it is $F$-ample by Serre's vanishing theorem. Suppose $L$ is $F$-ample. Choose $x_0\in X$; then by lemma \ref{lemma:locfree}, $H^1(X,m_{x_0}\otimes L^{n_0}) = 0$ for some power $n_0$ of $p$. Therefore $L^{n_0}$ has a global section $s_0$ which is nonzero at $x_0$. Let $U_0$ be the complement of the zero set of $s_0$.
If $U_0\not= X$, we can choose $x_1$ in the complement and arrange that $H^1(X,m_{x_1}\otimes L^{n_1}) = 0$ for some power $n_1$ of $p$. Therefore $L^{n_1}$ has a section $s_1$ not vanishing at $x_1$. If $U_0\cup U_1\not= X$, then we can choose $x_2$ in the complement and proceed as above. Eventually this process has to stop, because $X$ is noetherian. Therefore $L^{n_0n_1\ldots}$ is generated by the sections $s_0^{n_1n_2\ldots}, s_1^{n_0n_2\ldots}, \ldots$. Repeating the same line of reasoning with the sheaves $ m_xm_y\otimes L^{n}$ shows that some power of $L$ is very ample. Now suppose that $char\, k = 0$. Choose a thickening $(\tilde X, \tilde L)$ of $(X, L)$ over $A$. If $L$ is ample, then we can assume $\tilde L$ is ample by shrinking $Spec\, A$ if necessary. Consequently $\tilde L_{q}$ is $F$-ample for each closed point $q\in Spec\, A$ by the previous paragraph. Therefore $L$ is $F$-ample. Now suppose that $L$ is $F$-ample. As above, it suffices to show that for any ideal sheaf $I$, $H^1(I\otimes L^{n}) = 0$ for some $n >0$. But this is easily seen by choosing a thickening of $(X,L,I)$, applying the previous case on a closed fiber, using semicontinuity to deduce the vanishing for the generic fiber, and then flat base change to deduce it for $X$. \end{proof} \begin{thm}\label{thm:first} Let $\E, \E_0\ldots $ be coherent sheaves on a projective variety $X$. Assume either that $X$ is smooth or that these sheaves are locally free. Then the following statements hold. \begin{enumerate} \item\label{thm:first1} Given an exact sequence $0\to\E_1\to \E_2\to \E_3\to 0$, $\phi(\E_2) \le max(\phi (\E_1), \phi (\E_3))$. \item\label{thm:first2} Let $$0\to \E_n\to \E_{n-1}\to \ldots \to \E_0\to \E\to 0$$ be an exact sequence such that $\phi (\E_i)\le i +l$ for each $i$; then $\phi (\E) \le l$. \item\label{thm:first3} Let $0\to \E\to \E^0\to \E^1\to \ldots \to \E^n\to 0$ be an exact sequence such that $\phi (\E^i) \le l-i$ for each $i$; then $\phi (\E) \le l$.
\item\label{thm:first4} Let $f:Y\to X$ be a proper morphism of projective varieties such that $d$ is the maximum dimension of the closed fibers. If $\E$ is locally free then $\phi(f^*\E) \le \phi(\E) + d$. In particular, if $f$ is a closed immersion, then $\phi(f^*\E)\le \phi(\E)$. \item\label{thm:first5} If $f:Y\to X$ is an \'etale morphism of smooth projective varieties, then $\phi(f_*\E) = \phi(\E)$. \end{enumerate} \end{thm} \begin{proof} The first statement is obvious, and the second and third follow from lemma \ref{lemma:one}. For the remaining statements, we will assume that $char\, k = p > 0$; the characteristic $0$ case is a straightforward semicontinuity argument. There is a commutative diagram \begin{equation}\label{eq:frobenius-diagram} \xymatrix{ Y\ar[d]\ar[r]^{F^m_Y} & Y\ar[d] \\ X\ar[r]^{F^m_X} & X } \end{equation} Suppose that $f$ is proper with fibers of dimension $\le d$. If $\E$ is a locally free $O_X$-module and $\F$ a coherent $O_Y$-module, then $ H^i(\E^{(p^m)}\otimes R^jf_*\F) = 0$ for $i>\phi(\E)$ and $m >>0$. Therefore the Leray spectral sequence implies $$ H^{i}((f^*\E)^{(p^m)}\otimes \F) =0$$ for $i>\phi(\E)+d$ and $m >> 0$. If $f:Y\to X$ is \'etale, then the above diagram is cartesian. Furthermore, $F_X$ and $F_Y$ are both flat when $X$ and $Y$ are smooth. Cohomology commutes with flat base change; therefore for any coherent $O_Y$-module $\E$ and locally free $O_X$-module $\F$, $$H^i(X, (f_*\E)^{(p^m)}\otimes \F) \cong H^i(Y, \E^{(p^m)}\otimes f^*\F),$$ and this implies the equality of amplitudes. \end{proof} These results easily extend to the case of $F$-amplitude relative to a divisor. Here we just treat one case that will be needed later. \begin{lemma}\label{lemma:genfinitepullback} Let $f:Y\to X$ be a morphism of varieties with $Y$ smooth. Suppose that $D= \sum D_i$ is a divisor with normal crossings on $Y$, such that there exist $a_i\ge 0$ for which $L=O_Y(-\sum a_iD_i)$ is relatively ample.
Then for any locally free sheaf $\E$ on $X$, $\phi(f^*\E, D)\le \phi(\E)$. \end{lemma} \begin{proof} The proof is very similar to case (4) of the previous theorem. Assume $char\, k = p$, and choose $n_0$ such that $p^{n_0}> a_i$ for all $i$. Set $\E' = f^*\E^{(p^{n_0})}\otimes L$ and let $\F$ be another coherent sheaf on $Y$. Since $L$ is relatively ample, the higher direct images of $\F\otimes L^N$ vanish for $N>>0$. Therefore, the spectral sequence $$H^a(\E^{(p^{n+n_0})}\otimes R^bf_*(\F\otimes L^{p^n})) \Rightarrow H^{a+b}((\E')^{(p^n)}\otimes \F)$$ yields the vanishing of the abutment for $a+b>\phi(\E)$ and $n>>0$. Since $\E' = (f^*\E)^{(p^{n_0})}(-\sum a_iD_i)$ with $0\le a_i\le p^{n_0}-1$, this shows that $\phi(f^*\E, D)\le \phi(\E')\le \phi(\E)$. \end{proof} \begin{cor}\label{cor:biratphi} Suppose that $k$ has characteristic $0$. If $f:Y\to X$ is a resolution of singularities such that the exceptional divisor $D$ has normal crossings, then $\phi(f^*\E, D)\le \phi(\E)$. If $g:Z\to X$ is any resolution of singularities, then $\phi_{gen}(g^*\E)\le \phi(\E)$. \end{cor} \begin{proof} Since $f$ can be realized as the blow up of $X$ along an ideal, it follows that we can find a relatively ample divisor of the form $-\sum\, a_iD_i$ with $a_i\ge 0$, where the $D_i$ are the irreducible components of $D_{red}$. The first assertion clearly implies the second, since $Z$ can be blown up further. \end{proof} \begin{cor}\label{cor:genphi} Suppose that $k$ has characteristic $0$. If $g:Z\to X$ is a surjective morphism with $Z$ smooth, then $\phi_{gen}(g^*\E)\le \phi(\E)+d$, where $d$ is the dimension of the generic fiber. \end{cor} \begin{proof} Construct the following commutative diagram: $$ \xymatrix{ & Y\ar[r]^{\alpha}\ar[d]^{\beta}\ar[ld]^{\epsilon} & Z\ar[d]^{g}\ar@{-->}[ld]^{\gamma} \\ Y'\ar[r]_{\kappa} & \PP^d\times X\ar[r]_{\delta} & X } $$ where $\gamma$ is a generically finite rational map (which exists by Noether's normalization lemma), $\delta$ is the projection, $\alpha$ is a resolution of the indeterminacy locus of $\gamma$, and $Y\to Y'\to \PP^d\times X$ is the Stein factorization of $\beta$.
By theorem \ref{thm:first} (4) and the previous corollary (applied to $\delta\circ \kappa$ and $\epsilon$ respectively), $$\phi_{gen}(g^*\E)\le \phi_{gen}(\alpha^*g^*\E) \le \phi((\delta\circ\kappa)^*\E) \le \phi(\E) + d.$$ \end{proof} \section{Asymptotic regularity}\label{sec:asymptotic-regularity} Fix a very ample line bundle $O_X(1)$ on a projective variety $X$. Recall that a coherent sheaf $\F$ on $X$ is $m$-regular \cite{mumford} provided that $H^i(\F(m-i)) = 0$ for $i>0$. The regularity $reg(\F)$ of a sheaf $\F$ is the least $m$ such that $\F$ is $m$-regular. Let $Reg(X)=max(1,reg(O_X))$. Although we will not need this, it is worth remarking that $Reg(X) = reg(O_X)$ unless $(X, O_X(1))$ is a projective space with an ample line bundle of degree $1$. \begin{lemma} Let $\F$ be a $0$-regular coherent sheaf; then it is globally generated and $$ ker[H^0(\F)\otimes O_X\to \F]$$ is $Reg(X)$-regular. \end{lemma} \begin{proof} The global generation of $0$-regular sheaves is due to \cite[p. 100]{mumford}. Let $R= Reg(X)$ and $K= ker[H^0(\F)\otimes O_X\to \F]$. By definition $R \ge 1$. We have an exact sequence $$0\to K(R-i)\to H^0(\F)\otimes O_X(R-i)\to \F(R-i)\to 0.$$ From the long exact sequence of cohomology groups, the $R$-regularity of $O_X$, and the $(R-1)$-regularity of $\F$ [loc. cit.], we can conclude that $H^i(K(R-i))= 0$ for $i > 1$. As the multiplications $$H^0(\F)\otimes H^0(O_X(i))\to H^0(\F(i))$$ are surjective for $i \ge 0$ [loc. cit.], $H^1(K(R-1))$ injects into $H^0(\F)\otimes H^1(O_X(R-1)) = 0$. Therefore $K$ is $R$-regular. \end{proof} \begin{cor}\label{cor:syz} Let $\F$ be $m$-regular. Then for any $N \ge 0$, there exist vector spaces $V_i$ and a resolution $$V_N\otimes O_X(-m-NR)\to\ldots \to V_1\otimes O_X(-m-R) \to V_0\otimes O_X(-m)\to \F\to 0$$ where $R= Reg(X)$. \end{cor} \begin{proof} After replacing $\F$ by $\F(m)$, we may assume that $m=0$. Therefore $\F$ is generated by its global sections $V_0 = H^0(\F)$.
Let $K_0= \F(-R)$ and let $K_1$ be the kernel of the surjection $V_0\otimes O_X\to K_0(R)$. Then $K_1(R)$ is $0$-regular, so we can continue the above process indefinitely and define vector spaces $V_i$ and sheaves $K_i$ which fit into exact sequences $$0\to K_{i+1}\to V_i\otimes O_X\to K_i(R)\to 0.$$ After tensoring these with $O_X(-iR)$, we can splice these sequences together to obtain the desired resolution. \end{proof} \begin{lemma}\label{lemma:cpestim1} Let $\E$ be a coherent sheaf on $X$, and let $n$ be the greatest integer strictly less than $-reg(\E)/Reg(X)$. Then $$\phi(\E) \le max(dim X-n-1, 0).$$ In particular, $\E$ is \cp if $reg(\E) < -Reg(X)(dim X -1)$. \end{lemma} \begin{proof} Let $m = reg(\E)$, $R= Reg(X)$ and $d = dim X$. We may assume $d-n-1 > 0$, since otherwise the lemma is trivially true. By corollary \ref{cor:syz}, there exists a resolution $$0\to \E_{n+1}\to \E_n\to \ldots \E_0\to \E\to 0$$ where $\E_i = V_i\otimes O_X(-m-iR)$ for $i \le n$, and $$\E_{n+1} = ker[V_n\otimes O_X(-m-nR)\to V_{n-1}\otimes O_X(-m-(n-1)R)].$$ When $i \le n$, we have $-m-iR> 0$; therefore $\phi(\E_i) = 0 \le i + d -n-1$ by lemma \ref{lemma:ampleline}. Also $\phi(\E_{n+1}) \le d = (n+1) + d-n-1$. Consequently the lemma follows from theorem \ref{thm:first} (2). \end{proof} Suppose that $char\, k = p > 0$. Let $$ minreg(\E) = {\inf}_n \,\{reg\,\E^{(p^n)}\}.$$ \begin{cor}\label{cor:cpestim} For any coherent sheaf $\E$, $$\phi(\E) \le max(dim X-n-1, 0),$$ where $n$ is the greatest integer strictly less than $-minreg(\E)/Reg(X)$. \end{cor} \begin{proof} Apply the lemma to all powers $\E^{(p^n)}$. \end{proof} When $char\, k = p > 0$, we define the {\em asymptotic regularity } $$ areg(\E) = {\lim \sup}_n \, reg\, \E^{(p^n)}.$$ Of course $minreg(\E)\le areg(\E)$, but equality will usually fail. For example, $minreg(O_X(-1)) < areg(O_X(-1)) = \infty$. When $char\, k = 0$, define $areg(\E)$ to be the infimum of $\sup_q[areg(\E_q)]$ over all thickenings of $(X,\E, O_X(1))$.
In other words, $areg(\E) \le m$ if and only if $areg (\E_q) \le m$ for almost all $q$ for a given thickening. \begin{lemma}\label{lemma:cpcrit} ($char\, k = p$) Let $\E$ be a coherent sheaf. The following statements are equivalent: \begin{enumerate} \item $\E$ is $F$-ample. \item $areg(\E) = -\infty$. \item $minreg(\E) < -Reg(X)(dim\, X -1)$. \end{enumerate} \end{lemma} \begin{proof} If $\E$ is $F$-ample, then clearly $reg(\E^{(p^n)})\to -\infty$, which is the content of 2. The implication $2\Rightarrow 3$ follows from the inequality $minreg(\E)\le areg(\E)$. The implication $3\Rightarrow 1$ follows from corollary \ref{cor:cpestim}. \end{proof} \begin{cor} Conditions (1) and (2) are equivalent in characteristic $0$. \end{cor} In any characteristic, call $\E$ {\em F-semipositive }(with respect to $O_X(1)$) if and only if $areg(\E) < \infty$. We will see shortly that this notion is independent of the choice of $O_X(1)$. The previous lemma shows that an $F$-ample sheaf is $F$-semipositive. \begin{lemma}\label{lemma:regularityestimate} Let $N+1\ge d=\dim X$. If $$\E_N\to \E_{N-1}\to\ldots \to \E_0\to \E\to 0$$ is an exact sequence of coherent sheaves on $X$, then $$reg(\E)\le \max\{reg(\E_0),reg(\E_1)-1,\ldots, reg(\E_{d-1})-(d-1)\}.$$ \end{lemma} \begin{proof} Extend this to a sequence $$0\to \E_{N+1}\to \E_N\to\ldots \to \E_0\to \E\to 0.$$ The regularity estimate follows from lemma~\ref{lemma:one} and the fact that $m$-regular sheaves are $m'$-regular for all $m'\ge m$ \cite[p. 100]{mumford}. \end{proof} \begin{prop}\label{prop:semipos} Let $f:X\to Y$ be a morphism of projective varieties. Assume that $Y$ is equipped with a very ample line bundle $O_Y(1)$. Let $\E$ be a coherent sheaf on $Y$ which is F-semipositive with respect to $O_Y(1)$. If ${\mathcal T}or_i^{f^{-1}O_Y}(O_X, \E) = 0$ for all $i > 0$ (e.g. if $f$ is flat, or $\E$ is locally free), then $f^*\E$ is F-semipositive with respect to $O_X(1)$. \end{prop} \begin{proof} We give the proof in positive characteristic.
By hypothesis and corollary \ref{cor:syz}, there exists a resolution $$V_N\otimes O_Y(-m-NR)\to\ldots \to V_1\otimes O_Y(-m-R) \to V_0\otimes O_Y(-m)\to \E^{(p^n)}\to 0$$ where the constants $m, R, N>>0$ can be chosen independently of $n$. This stays exact after applying $f^*$ by our assumptions. Therefore the regularity of $f^*(\E^{(p^n)}) = (f^*\E)^{(p^n)}$ stays bounded as $n\to \infty$ by lemma~\ref{lemma:regularityestimate}. \end{proof} \begin{cor} Let $O_X(1)'$ be another very ample line bundle on $X$; then a sheaf $\E$ is F-semipositive with respect to $O_X(1)$ if and only if it is F-semipositive with respect to $O_X(1)'$. \end{cor} \begin{proof} Apply the proposition to the identity map. \end{proof} Recall that a locally free sheaf $\E$ on $X$ is nef (or numerically semipositive) if for any curve $f:C\to X$, any quotient of $f^*\E$ has nonnegative degree. In characteristic $0$, it is convenient to introduce an ostensibly stronger property: $\E$ is arithmetically nef if there is a thickening $(\tilde X,\tilde \E)$ over $Spec\, A$ such that the restrictions of $\tilde \E$ to the fibers are nef. To simplify the statements, we define arithmetically nef to be synonymous with nef in positive characteristic. Further discussion of these matters can be found in the appendix. The name $F$-semipositive stems from the following: \begin{lemma}\label{lemma:Fsemiposisnef} If $\E$ is an $F$-semipositive locally free sheaf, then it is arithmetically nef. \end{lemma} \begin{proof} By definition, we may work over a field of characteristic $p>0$. Suppose that $\F$ is a quotient of $f^*\E$ of negative degree, for some curve $f:C\to X$. This implies that $deg(\F^{(p^n)})\to -\infty$ as $n\to \infty$. By proposition \ref{prop:semipos}, $f^*\E$ is F-semipositive, which implies that there is a fixed line bundle $L$ such that $(f^*\E)^{(p^n)}\otimes L$ is globally generated for all $n$. Therefore $\F^{(p^n)}\otimes L$ is globally generated for all $n$, which implies that $deg(\F^{(p^n)})$ is bounded below.
This is a contradiction. \end{proof} For line bundles, the converse is given by proposition~\ref{prop:nefline}. However, it fails for higher rank (example~\ref{ex:amplenotFample}). \section{Tensor products} \begin{thm}\label{thm:tensor1} Let $\E$ and $\F$ be two vector bundles on a smooth projective variety $X$. Then $$\phi(\E\otimes \F) \le \phi(\E)+\phi(\F).$$ \end{thm} \begin{proof} Assume that $k$ is a field of characteristic $p>0$. Let $Y = X\times X$ and let $p_i:Y\to X$ denote the projections. Given two coherent sheaves $\E_i$ on $X$, let $\E_1\boxtimes \E_2 = p_1^*\E_1\otimes p_2^*\E_2$. Choose a very ample line bundle $O_X(1)$ on $X$; then $L = O(1)\boxtimes O(1)$ is again very ample. Let $\Delta\subset Y$ be the diagonal. Choose $\nu >>0$. By corollary \ref{cor:syz}, we can construct a resolution \begin{equation}\label{eq:Diagres} 0\to \G_{\nu+1}\to \G_\nu\to \ldots \G_0\to O_\Delta\to 0 \end{equation} where $\G_i = V_i\otimes L^{\otimes a_i}$ for $i\le \nu$. The Frobenius map on $Y$ is $F_Y = F_X\times F_X$. Thus using K\"unneth's formula \cite[III, 6.7.8]{ega}, for any $b$ we get \begin{eqnarray*} H^i(\G_j\otimes L^{b}\otimes F_Y^{N*}(\E\boxtimes \F)) &=& V_j\otimes H^i(\E^{(p^N)}(b+a_j)\boxtimes \F^{(p^N)}(b+a_j))\\ &=& V_j\otimes \bigoplus_{c+d=i} \, H^c(\E^{(p^N)}(b+a_j))\otimes H^d(\F^{(p^N)}(b+a_j))\\ &=& 0 \end{eqnarray*} for $i>\phi(\E) + \phi(\F)$, $j\le \nu$ and $N>>0$. Tensoring (\ref{eq:Diagres}) by $L^{b}\otimes F_Y^{N*}(\E\boxtimes \F)$ and applying lemma \ref{lemma:one} shows that \begin{eqnarray*} H^i((\E\otimes \F)^{(p^N)}(2b)) &=& H^i(O_\Delta\otimes L^{b}\otimes F_Y^{N*}(\E\boxtimes \F))\\ & =& 0 \end{eqnarray*} for $i>\phi(\E) + \phi(\F)$ and $N>>0$. Thus corollary \ref{cor:FamplViaLineb} gives the desired bound on $\phi(\E\otimes \F)$. If $char\, k = 0$, then we can carry out the above argument on the fiber of some thickening. \end{proof} \begin{cor}\label{cor:tensorample} The tensor product of two $F$-ample bundles is $F$-ample.
\end{cor} \begin{cor}\label{cor:logtensor} Let $D$ and $E$ be reduced effective divisors such that $D+E$ has normal crossings. Suppose that $\E$ and $\F$ are a pair of vector bundles with critical divisors $\Delta$ and $\Xi$ along $D$ and $E$ respectively. If $\Delta+\Xi$ is strictly fractional, i.e. has all its multiplicities less than $1$, then $$\phi(\E\otimes \F, D+E)\le \phi(\E,D) +\phi(\F,E).$$ \end{cor} \begin{proof} We can find $n,m$ such that $\phi(\E,D) = \phi(\E^{(p^n)}(-D'))$ and $\phi(\F,E) = \phi(\F^{(p^m)}(-E'))$ where $D' = p^n\Delta$ and $E'= p^m\Xi$. After replacing $E'$ by $p^{n-m}E'$, or the other way around, we can assume that $m=n$. Therefore $$(\E\otimes \F)^{(p^n)}(-D'-E') = \E^{(p^n)}(-D')\otimes \F^{(p^n)}(-E') $$ has $F$-amplitude bounded by the sum. \end{proof} \begin{remark} If $D$ and $E$ are disjoint, then the conditions on the critical divisors are automatic. \end{remark} \begin{thm}\label{thm:tensor} Let $\E$ and $\F$ be two coherent sheaves on $X$ such that one of them is locally free and $\E$ is F-semipositive. Then $$\phi(\E\otimes \F) \le \phi(\F).$$ \end{thm} \begin{proof} Assume that $char\, k = p >0$. Let $m=areg(\E)$, $N>>0$ and $R= Reg(X)$. Then $reg(\E^{(p^\mu)}) \le m$ for all but finitely many $\mu$. Given a locally free sheaf ${\mathcal G}$, choose $\mu_{0}$ so that $$H^i(\F^{(p^\mu)}\otimes {\mathcal G}(-m-jR))=0$$ for all $\mu> \mu_{0}$, $i> \phi(\F)$, and $j = 0,1,\ldots, N$. By increasing $\mu_0$, if necessary, we can assume that $\E^{(p^\mu)}$ is $m$-regular when $\mu> \mu_0$. From corollary \ref{cor:syz}, we obtain a resolution $$0\to \E_{N+1}\to \E_N\to \ldots \E_0\to \E^{(p^\mu)}\to 0$$ where $\E_i= V_i\otimes O_X(-m-iR)$ for $i \le N$. Tensoring this by $\F^{(p^\mu)}\otimes {\mathcal G}$ and applying lemma \ref{lemma:one} shows that $$H^i((\E\otimes\F)^{(p^\mu)}\otimes {\mathcal G}) = 0$$ when $\mu> \mu_{0}$ and $i > \phi(\F)$. If $char\, k = 0$, then we can carry out the above argument on the fiber of some thickening.
\end{proof} We can refine corollary \ref{cor:tensorample}. \begin{cor}\label{cor:tensorcp} The tensor product of an \cp vector bundle and an F-semipositive vector bundle is \cp. \end{cor} \section{Characterization of $F$-ample sheaves on special varieties} It is possible to give an elementary characterization of $F$-ampleness for curves and projective spaces. Recall that a vector bundle $\E$ over a variety $X$ defined over a field $k$ of characteristic $p$ is $p$-ample \cite{hartshorne} if for any coherent sheaf $\F$ there exists $n_0$ such that $\E^{(p^n)}\otimes \F$ is globally generated for all $n\ge n_0$. \begin{lemma}\label{lemma:Fample2pample} An $F$-ample vector bundle $\E$ is $p$-ample. \end{lemma} \begin{proof} Suppose $\F$ is a coherent sheaf. Since $reg(\E^{(p^n)}\otimes \F)\to -\infty$, these sheaves are globally generated for $n>>0$. \end{proof} \begin{cor} An $F$-ample vector bundle $\E$ is ample. \end{cor} \begin{proof} \cite[6.3]{hartshorne} \end{proof} As we will see, the converses to both statements fail in general. \begin{lemma}\label{lemma:phipample} If $\E$ is a $p$-ample vector bundle on a projective variety $X$, then $\phi(\E) < dim\, X$. \end{lemma} \begin{proof} Choose a coherent sheaf $\F$. Let $L$ be the $N$th power of an ample line bundle, chosen large enough so that $H^i(\F\otimes L) = 0$ for $i>0$. Then for all $n>>0$, $\E_n=\E^{(p^n)}\otimes L^{-1}$ is globally generated. Therefore $$H^0(\E_n)\otimes \F\otimes L\to \E^{(p^n)}\otimes \F$$ is surjective. It follows that the top degree cohomology of the right hand side vanishes for $n>>0$. \end{proof} This leads to a complete characterization for curves. \begin{prop}\label{prop:curveFample-ample} Let $\E$ be a coherent sheaf over a smooth projective curve $X$ defined over a field $k$. Then the following are equivalent: \begin{enumerate} \item $\E$ is $F$-ample. \item $\E/torsion$ is $p$-ample when $char\, k= p$. \item $\E/torsion$ is ample.
\end{enumerate} \end{prop} \begin{proof} Since $\E$ is a direct sum of $\E/torsion$ with the torsion part, we may assume that $\E$ is a vector bundle. Suppose $char\, k=p$. Then the equivalence of the first two statements follows from lemmas \ref{lemma:Fample2pample} and \ref{lemma:phipample}. The equivalence of the last two follows from \cite[6.3, 7.3]{hartshorne}. If $char\, k = 0$, we can deduce the equivalence of (1) and (3) from the previous cases, because ampleness is an open condition. \end{proof} Now, we turn to projective space. For integers $a\le b$, let $[a,b] = \{a, a+1, \ldots , b\}$. \begin{thm}\label{thm:proj} Let $\E$ be a coherent sheaf on the projective space $\PP^n_k$. Then $$\phi(\E) = min\, \{i_0\,|\, H^i(\PP_k^n, \E(j)) = 0,\, \forall i> i_0, \forall j\in [-n-1, 0]\}.$$ In particular, $\E$ is $F$-ample if and only if $H^i(\E(j)) = 0$ for all $j\in [-n-1, 0]$ and $i > 0$. \end{thm} This leads to a characterization of $F$-ample bundles on $\PP^n= \PP_k^n$. For a slightly different characterization, see \cite[sect. 4]{mi}. The key step in the proof of theorem \ref{thm:proj} is the following proposition. \begin{prop}\label{prop:split} Let $\pi:\PP^n\to \PP^n$ be a finite morphism, and let $d$ be the degree of $\pi^*O(1)$. \begin{enumerate} \item For each $i$, $\pi_* O(i)$ is a direct sum of line bundles. \item If $-d -n -1 < i < d$ then $\pi_* O(i)$ is a sum of line bundles of the form $O(l)$ with $l\in [-n-1, 0]$. \item There exists a constant $C$ depending only on $n$, such that if $d > C$, then for each $O(l)$ with $l\in [-n-1, 0]$, $O(l)$ occurs as a component of $\pi_*O(i)$ for some $i\in [-n-1, 0]$. \end{enumerate} \end{prop} \begin{proof} Since $\pi$ is finite, $$H^a((\pi_*O(i))\otimes O(j))= H^a(O(i+ dj))$$ for all $a,i$ and $j$. In particular, these groups vanish for all $0< a < n$ and all $j$. Therefore $\pi_*O(i)$ splits into a sum of line bundles by a theorem of Horrocks \cite{horrocks}.
If $-d -n -1 < i < d$, then $$H^n((\pi_*O(i))\otimes O(1))= H^n(O(i+ d))= 0$$ and $$H^0((\pi_*O(i))\otimes O(-1))= H^0(O(i- d))= 0.$$ This implies the second statement. Let $p_m(x) = \left( \begin{array}{c}x+m\\ m\\ \end{array} \right)$ and $(\Delta_x p)(x,y,\ldots) = p(x,y,\ldots) -p(x-1,y,\ldots)$. Note that $\Delta_x p_m(x) = p_{m-1}(x)$. Choose $-n-1 \le i \le 0$. Let us write $$\pi_*O(i) = \bigoplus_{l} O(l)^{\oplus f(l,i)}.$$ By comparing cohomology of $\pi_*O(i)$ and $O(i)$, we see that $S=\{l\,|\, f(l,i)\not= 0\}$ is contained in $[-n, 0]$ if $i=0$, $S$ is contained in $[-n, -1]$ if $-n-1 < i < 0$, and $S$ is contained in $[-n-1,-1]$ if $i = -n-1$. Furthermore $f(0,0) = f(-n-1, -n-1)=1$, and this shows that the proposition holds true for $l= 0,-n-1$. We now assume that $-n\le l \le 0$. Tensoring $\pi_*O(i)$ by $O(x)$ and computing Euler characteristics yields: \begin{equation}\label{eq:fp} \sum_l f(l,i)p_n(x+l) = p_n(dx+i). \end{equation} Setting $x=0$ yields $$f(0,i) = p_n(i).$$ Applying $\Delta_x$ to equation (\ref{eq:fp}) and setting $x=0$ yields $$(-1)^{n-1}f(-n,i) = (p_n(i) - f(0,i)) - p_n(i-d)$$ hence $$ f(-n,i)=(-1)^n p_n(i-d).$$ Applying $\Delta_x^2$ to equation (\ref{eq:fp}) and setting $x=0$ yields $$(-1)^{n-2}(n-1)f(-n, i) + (-1)^{n-2}f(-n+1, i) = (p_n(i) - f(0,i)) - 2p_n(i-d) + p_n(i-2d)$$ hence $$f(-n+1, i) = (-1)^{n-1}(n+1)p_n(i-d) + (-1)^{n-2}p_n(i-2d).$$ Doing this repeatedly gives a formula for each $f(l,i)$, with $-n-1< l < 0$, which is a nonzero polynomial of degree at most $n$ in $i$ and $d$. Choosing a specific $d >> 0$ (for $n$ fixed) forces $f(l,i)$ to be a nonzero polynomial in $i$ of degree at most $n$. Therefore $f(l,i)\not= 0$ for some $i$ in the range $-n\le i \le 0$. \end{proof} We need the following (presumably well known) version of the projection formula. \begin{lemma}\label{lemma:projform} If $\pi:X\to Y$ is a finite map of quasiprojective schemes, then $\pi_*(\pi^*\E\otimes \F) \cong \E\otimes \pi_*\F$. 
\end{lemma} \begin{proof} Choose a resolution $\E_1\to \E_0\to \E\to 0$ by vector bundles $\E_i$. There is a diagram $$ \begin{array}{ccccccc} \E_1\otimes \pi_*\F& \to& \E_0\otimes \pi_*\F& \to& \E\otimes \pi_*\F& \to& 0\\ \downarrow&&\downarrow&&\downarrow&&\\ \pi_*(\pi^*\E_1\otimes \F)& \to& \pi_*(\pi^*\E_0\otimes \F)& \to& \pi_*(\pi^*\E\otimes \F)& \to& 0\\ \end{array} $$ where the first two vertical arrows are isomorphisms by the usual projection formula. This implies that the third arrow is also an isomorphism. \end{proof} \begin{proof}[Proof of theorem \ref{thm:proj}] As usual, we prove the result in positive characteristic; the characteristic zero case is a formal consequence. To begin with, we show $H^i(\E(j)) = 0$ for $i>\phi(\E)$ and $j\in [-n-1, 0]$. Choose $m>>0$; then $O(j)$ is a direct summand of some $F^m_*O(l)$ with $l\in [-n-1, 0]$ by the previous proposition. Therefore by the projection formula (lemma \ref{lemma:projform}), $$H^i(\E(j)) \subseteq H^i(\E\otimes F^m_*O(l)) = H^i(\E^{(p^m)}\otimes O(l)) = 0.$$ Conversely, suppose that $H^i(\E(j)) = 0$ for all $i > i_0$ and $j\in [-n-1, 0]$. For any integer $l$, we can choose $m >> 0$ so that $F^m_*O(l)$ is a direct sum of line bundles $O(j)$ with $j \in [-n-1, 0]$. Therefore $$ H^i(\E^{(p^m)}\otimes O(l))=H^i(\E\otimes F^m_*O(l)) = 0$$ for $i > i_0$. Since any coherent sheaf $\F$ on $\PP^n$ has a finite resolution by direct sums of line bundles, this shows that $H^i(\E^{(p^m)}\otimes \F) = 0$ for $m>>0$ and $i>i_0$ (by the same argument as in the proof of corollary~\ref{cor:FamplViaLineb}). \end{proof} The theorem yields improvements on the regularity estimates of the previous section. \begin{cor} An $F$-ample sheaf $\E$ on $\PP^n$ is $(-1)$-regular; in particular $\E(-1)$ is globally generated.
\end{cor} \begin{ex}\label{ex:amplenotFample} The tangent bundle $T$ of $\PP^n$ is ample and in fact $p$-ample in positive characteristic, but $T$ is not $F$-ample if $n\ge 2$, because $H^{n-1}(T(-n-1)) = H^1(\Omega^1)^*\not=0$. The bundle $T(-1)$ is globally generated, and therefore nef. However, it cannot be $F$-semipositive, since otherwise $T$ would be $F$-ample by corollary~\ref{cor:tensorcp}. \end{ex} \begin{ex}\label{ex:restr} Let $X$ be a projective variety with an ample line bundle $L$. Embed $i:X\hookrightarrow \PP^n$ using a large multiple of $L$. Then $L$ is \cp but $i_*L$ is not, because the conclusion of the above corollary fails. Therefore theorem~\ref{thm:first} (5) fails for non-\'etale finite maps. \end{ex} \begin{cor} A vector bundle on $\PP^2$ is $F$-ample if and only if it is isomorphic to a sum of the form $E\oplus O(1)^{\oplus N}$ where $E$ is $(-2)$-regular. \end{cor} \begin{proof} A direct sum of a $(-2)$-regular sheaf and a bunch of $O(1)$'s satisfies the conditions of the theorem by \cite[p. 100]{mumford}. Now suppose that $V$ is a \cp vector bundle. It is $(-1)$-regular by the previous corollary, and therefore $V(-1)$ is generated by global sections. Suppose that $H^2(V(-4))\not= 0$. Then by Serre duality, there is a nonzero morphism $V(-1)\to O$; let $V'$ be the kernel twisted by $O(1)$. Since the map $H^0(V(-1))\otimes O\to O$ must split, it follows that the map $V(-1)\to O$ also splits. Therefore $V'$ is again $(-1)$-regular, so we can continue splitting off copies of $O(1)$ from $V$ until we arrive at a summand $E$ with $H^2(E(-4)) = 0$. Since we also have $H^1(E(-3)) = 0$, it follows that $E$ is $(-2)$-regular. \end{proof} \section{$F$-amplitude of ample bundles} As we have seen, ample vector bundles need not be $F$-ample. However, we do have an estimate on their amplitude, at least in characteristic $0$.
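In fact, example \ref{ex:amplenotFample} combined with theorem \ref{thm:proj} already shows how large the amplitude of an ample bundle can be: for the tangent bundle $T$ of $\PP^n$, the nonvanishing of $H^{n-1}(T(-n-1))$ forces $$\phi(T)\ge n-1,$$ so the bound $\phi(\E)< r$ of the next theorem is sharp: applied with $r = rank(T) = n$, it gives exactly $\phi(T) = n-1$ in characteristic $0$.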
\begin{thm}\label{thm:keyestimate} Let $X$ be a projective variety over a field of characteristic $0$ and let $\E$ be an ample vector bundle of rank $r$ on $X$. Then $\phi(\E)< r$. \end{thm} Keeler has found that the inequality $\phi(\E)< dim\, X$ also holds for ample vector bundles (proposition~\ref{prop:ample-dimension-bound}). Before giving the proof, we need to review some results from (modular) representation theory. We will choose our notations consistently with those of \cite{arapura}. Let $A$ be a commutative ring and $E=A^r$. Fix a partition $\lambda = (\lambda_1\ge\lambda_2\ge\ldots)$ of weight $|\lambda| = \sum_i\lambda_i$. The Schur power $\BS^\lambda(E)$ can be constructed as the space of global sections of a line bundle associated to $\lambda$ over the scheme $Flag(E)$ of flags on $E$. A more elementary construction is possible; $\BS^\lambda(E)$ can be defined as a quotient of $E^{\otimes|\lambda|}$ by an explicit set of relations involving $\lambda$ \cite[8.1]{fulton}. This quotient map can be split using a Young symmetrizer when $A$ contains $\Q$, but not in general. The paper \cite{carter-lustig} gives essentially a dual construction; their Weyl module $E_\lambda$ coincides with our $\BS^{\lambda'}(E)^*$ where $\lambda'$ is the conjugate partition (see \cite[p 251]{jantzen} for the comparison with the first construction). We will need to make the initial description more explicit. Let $\pi_k:Flag(E)\to Grass_k(E)$ be the canonical map to the Grassmannian of $k$-dimensional {\em quotients} of $E$, and $i_k$ its Pl\"ucker embedding. Then $$\BS^\lambda(E) = H^0(Flag(E), L_\lambda)$$ where $a_i = \lambda_i-\lambda_{i+1}$ and $$L_\lambda =\bigotimes_k\, \pi_k^*i_k^*O_{\PP(\wedge^kE)}(a_k).$$ At the two extremes, $\BS^{(n)}(E) = S^n(E)$ and $\BS^{(1,1,\ldots 1)}(E) = \wedge^i(E)$ where $i$ is the length of the string. These Schur powers turn out to be free $A$-modules, and since the constructions are functorial, they carry $GL_r(A)$ actions.
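For instance, when $r = 2$ the flag scheme $Flag(E)$ coincides with $\PP(E)\cong \PP^1_A$, and the description above reduces to the familiar formula $$\BS^{(\lambda_1,\lambda_2)}(E) = S^{\lambda_1-\lambda_2}(E)\otimes (\wedge^2 E)^{\otimes \lambda_2},$$ valid over any $A$, which interpolates between the two extremes just mentioned.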
When $A=k$ is a field of characteristic $0$, the $GL_r(k)$-modules $\BS^\lambda(E)$ are all irreducible. This is no longer true when $A=k$ is a field of characteristic $p>0$. For example, the symmetric power $S^p(E)$ contains a nontrivial submodule $E^{(p)}$ which is the representation associated to the $p$th power map $GL_r(k)\to GL_r(k)$. This inclusion can be extended to a resolution: \begin{thm}(Carter-Lusztig \cite[pg 235]{carter-lustig}) Let $k$ be a field of characteristic $p>0$. Then there exists an exact sequence of $GL_r(k)$-modules $$0\to E^{(p)}\to \BS^{(p)}(E)\to \BS^{(p-1, 1)}(E)\to \BS^{(p-2, 1, 1)}(E)\to\ldots \BS^\lambda(E)\to 0$$ where $\lambda=(p-\min(p-1,r-1), 1,1\ldots)$. \end{thm} These constructions are easy to globalize to the case where $E$ is replaced by a vector bundle $\E$. In this case, the two meanings of $\E^{(p)}$ agree. \begin{cor} If $\E$ is a vector bundle over a scheme $X$ defined over a field $k$ of characteristic $p>0$, then there exists a resolution of $\E^{(p)}$ as above. \end{cor} Suppose we are in the situation of theorem \ref{thm:keyestimate}. Choose a line bundle $M$ on $X$. Let $(\tilde X, \tilde \E, \tilde M)$ be an arithmetic thickening over $ Spec\, A$. After shrinking $Spec\, A$, we can assume that $\tilde \E$ is locally free. Then we have vector bundles $\BS^\lambda(\tilde \E)$ over $\tilde X$. Fix a partition $\lambda$. Then for any natural number $N$, we get a new partition $(N)+\lambda = (N+\lambda_1, \lambda_2,\ldots)$. \begin{lemma}\label{lemma:HiXqBS} With notation and assumptions as above (specifically that $\E$ is ample), there exists an integer $N_0$ such that $$H^i(\tilde X, \BS^{(N)+\lambda}(\tilde \E)\otimes \tilde M) = 0$$ for all $i>0$ and $N\ge N_0$. \end{lemma} \begin{proof} Let $\pi:Flag(\E)\to X$ be the bundle of flags on $\E$. To simplify notation, we will write $\tilde M$ instead of $\pi^*\tilde M$. The fibers of $\pi$ are partial flag varieties.
The higher cohomology groups of $L_{(N)+\lambda}$ along these fibers are zero by Kempf's vanishing theorem (see for example \cite[II, 4.5]{jantzen}). Therefore the higher direct images vanish, and consequently the Leray spectral sequence yields isomorphisms $$H^i(\tilde X, \BS^{(N)+\lambda}(\tilde \E)\otimes \tilde M) \cong H^i(Flag(\E), L_{(N)+\lambda}\otimes \tilde M).$$ Let $\pi_1:Flag(\E)\to \PP(\E)$ be the canonical projection. For reasons similar to those above, there are isomorphisms $$H^i(Flag(\E), L_{(N)+\lambda}\otimes \tilde M) \cong H^i(\PP(\E), \pi_{1*}L_{(N)+\lambda}\otimes\tilde M).$$ By the projection formula, the right hand side is the cohomology of $O_{\PP(\E)}(N)\otimes \pi_{1*}(L_\lambda)\otimes \tilde M$. Since $O_{\PP(\E)}(1)$ is ample, these groups vanish for $N>>0$ and $i>0$. \end{proof} \begin{cor}\label{cor:HiXqBS} With the notation of section 1, given $a > 0$, there exists $N_0$ such that $$H^i( X_q, \BS^{(N)+\lambda}(\E_q)\otimes M_q^{\otimes n}) = 0$$ for all $i>0$, $0\le n\le a$, $N\ge N_0$ and closed points $q\in Spec\, A$. \end{cor} \begin{proof} \cite[III. 12.9]{hartshorne2} \end{proof} \begin{proof}[Proof of theorem \ref{thm:keyestimate}] Choose $M = O_X(-1)$ with $O_X(1)$ very ample. Let $C << 0$ be a constant. By corollary \ref{cor:HiXqBS}, there exists an $N_0$ such that the sheaves $ \BS^{(N-i, 1,\dots 1)}(\E_q)$ ($0\le i< r$) have regularity less than $C$ for all $N\ge N_0$ and all closed points $q\in Spec\, A$. In particular, there exists a nonempty open set $U\subset Spec\, A$ such that $$reg( \BS^{(p(q)-i, 1,\dots 1)}(\E_q)) < C$$ for all closed $q\in U$ ($p(q) = char\, A/q$). By lemma \ref{lemma:cpestim1}, these sheaves are $F$-ample. Then the Carter-Lusztig resolution (which has length bounded by $r=rank(\E)$) together with lemma \ref{lemma:one} shows that $\phi(\E_q)< r$ for all closed $q\in U$, and hence that $\phi(\E)< r$. \end{proof} \begin{cor}\label{cor:keyestimate} Let $\E_i$ be ample vector bundles.
Then $$\phi(\E_1\otimes\E_2\otimes\ldots\otimes \E_m) < rank(\E_1)+rank(\E_2)+\ldots+rank(\E_m).$$ \end{cor} The analogue for a pair is the following: \begin{thm}\label{thm:logkeyestimate} Let $X$ be a smooth projective variety defined over a field of characteristic $0$, and let $\E$ be a vector bundle of rank $r$ on $X$. Suppose there exists a reduced normal crossing divisor $D$, a positive integer $n$, and a divisor $0\le D'<_{strict} nD$ such that $S^n(\E)(-D')$ is ample. Then $\phi(\E, D) < r$. \end{thm} \begin{remark} The hypothesis amounts to the condition that the ``vector bundle'' $\E(-\Delta)$ is ample for some fractional effective $\Q$-divisor $\Delta =\frac{1}{n}D'$ supported on $D$. \end{remark} \begin{proof} The proof is very similar to the previous one, so we will just summarize the main points. Let $M = O_X(-1)$ with $O_X(1)$ very ample. Choose a thickening of $(X,\E,D,M)$. A small modification of corollary \ref{cor:HiXqBS} shows that the regularity of the sheaves $$ \BS^{(Nn+j-i, 1,\dots, 1)}(\E_q)(-ND'_q),\> 0\le i< r,\, 0<j<n$$ can be made less than a given $C$ for all $N$ greater than some $N_0$ depending on $C$. All but finitely many primes are of the form $Nn+j$ for $N$ and $j$ as above. Thus the above sheaves will be $F$-ample for almost all $q$. The Carter-Lusztig resolution shows that $\phi(\E^{(p(q))}(-ND')) < r$ which implies the theorem. \end{proof} \section{An $F$-ampleness criterion} The notion of geometric positivity was introduced in \cite{arapura}. Although the methods are very different, there appear to be some parallels between $F$-ampleness and geometric positivity. The following result is an analogue of [loc. cit., cor. 3.10]. \begin{thm}\label{thm:strongsemistable} Let $\E$ be a rank $r$ vector bundle on a smooth projective variety $X$ such that $det(\E)$ is ample and $S^{rN}(\E)\otimes det(\E)^{-N}$ is globally generated for some $N>0$ prime to $char\, k$. Then $\E$ is $F$-ample.
\end{thm} \begin{remark} The hypothesis that $S^{rN}(\E)\otimes det(\E)^{-N}$ is globally generated implies that $\E$ is strongly semistable in the sense of \cite[p. 247]{arapura}. We leave it as an open problem to determine whether $F$-ampleness follows only assuming strong semistability of $\E$ and ampleness of $det(\E)$. \end{remark} As usual all the work will be in characteristic $p>0$. Let $q= p^n$ for some $n>0$. In this section we will modify our previous conventions, and write $F_X:X\to X$ for the absolute $q$th power Frobenius. Let $\PP= \PP(\E)$ and $\PP' = \PP(\E^{(q)})$ with canonical projections denoted by $\pi$ and $\pi'$. Consider the commutative diagram $$ \xymatrix{ \PP\ar[r]_{\Phi}\ar[rd]_{\pi}\ar@/^/[rr]^{F_\PP} & \PP'\ar[r]_{\phi}\ar[d]^{\pi'} & \PP\ar[d]^{\pi} \\ & X\ar[r]_{F_X} & X } $$ where the right hand square is cartesian. $\Phi$ is the relative $q$th power Frobenius associated to $\pi$. Let $\Sigma_N = S^{rN}(\E)\otimes det(\E)^{-N}$. \begin{prop}\label{prop:relFsplit} If $\Sigma_{q-1}$ is globally generated, then $O_{\PP'}\to \Phi_*O_\PP$ splits. \end{prop} \begin{proof} By Grothendieck duality for finite flat maps \cite[ex.III 6.10, 7.2]{hartshorne2}, the proposition is equivalent to the splitting of the trace map \begin{equation}\label{eq:Grothtrace} tr: \Phi_*\omega_{\PP/\PP'} \to O_{\PP'} \end{equation} We have $$F^*_\PP O_\PP(1) = O_\PP(q)$$ $$\phi^* O_\PP(1) = O_{\PP'}(1),$$ therefore $$ \Phi^*O_{\PP'}(1) = O_{\PP}(q).$$ Also $$\omega_{\PP'/X} = O_{\PP'}(-r) \otimes(\pi')^* det(\E^{(q)}) = O_{\PP'}(-r) \otimes (\pi')^*det(\E)^q$$ $$\omega_{\PP/X} = O_{\PP}(-r) \otimes \pi^* det(\E)$$ \cite[ex. III 8.4]{hartshorne2}. Therefore $$\Phi^*\omega_{\PP'/X} = O_{\PP}(-qr) \otimes \pi^*det(\E)^q$$ $$\omega_{\PP/\PP'} = O_{\PP}((q-1)r) \otimes \pi^*det(\E)^{1-q}.$$ Observe that \begin{equation}\label{eq:piomegaPP} \pi_*\omega_{\PP/\PP'} =\Sigma_{q-1}. 
\end{equation} Suppose that $0<i<r$. Then, using the projection formula and the previous computations, \begin{eqnarray*} R^i\pi'_*[(\Phi_*\omega_{\PP/\PP'})(-i)] &=& R^i\pi_*\omega_{\PP/\PP'}(-qi)\\ &=& R^i\pi_*O_{\PP}(q(r-i)-r) \otimes \pi^*det(\E)^{1-q}\\ &=& 0 \end{eqnarray*} Thus $\Phi_*\omega_{\PP/\PP'}$ is regular relative to $\pi'$, and it follows that the canonical map $$(\pi')^*\pi'_* \Phi_*\omega_{\PP/\PP'}\to \Phi_*\omega_{\PP/\PP'}$$ is surjective \cite[V, 2.2]{fulton-lang}. By (\ref{eq:piomegaPP}), this gives a surjection \begin{equation}\label{eq:Sigma2omega} (\pi')^*\Sigma_{q-1}\to \Phi_*\omega_{\PP/\PP'} \end{equation} Composing this with the Grothendieck trace (\ref{eq:Grothtrace}) gives a surjection $$(\pi')^*\Sigma_{q-1} \to O_{\PP'}$$ Since $\Sigma_{q-1}$ is globally generated, there exists a morphism $s:O_{\PP'}\to (\pi')^*\Sigma_{q-1}$ such that the composite $O_{\PP'}\to O_{\PP'}$ is nonzero. This corresponds to an element $a\in k^*=H^0(O_{\PP'})-\{0\}$. The composite of $\frac{1}{a}s$ with (\ref{eq:Sigma2omega}) gives a splitting of (\ref{eq:Grothtrace}). \end{proof} \begin{cor}\label{cor:relFsplit} With the same assumptions as in the proposition, $\E^{(q)}$ is a direct summand of $S^{q}(\E)$. \end{cor} \begin{proof} By the projection formula, the canonical map \begin{equation}\label{eq:OPP12PhiOPPq} O_{\PP'}(1)\to \Phi_*\Phi^*O_{\PP'}(1) = \Phi_*O_{\PP}(q) \end{equation} can be identified with $$O_{\PP'}(1)\to O_{\PP'}(1)\otimes \Phi_*O_{\PP}$$ This splits by proposition \ref{prop:relFsplit}. Applying $\pi'_*$ to (\ref{eq:OPP12PhiOPPq}) yields a split injection $\E^{(q)}\to S^{q}(\E)$. \end{proof} \begin{proof}[Proof of theorem \ref{thm:strongsemistable}] Choose $q=p^n\equiv 1 \> (mod\, N)$; $q$ can be chosen arbitrarily large. Then $\Sigma_{q-1}$ is globally generated since it is a quotient of $S^{(q-1)/N}(\Sigma_N)$. It follows that $\E^{(q)}$ is a direct summand of $S^q(\E)$. Since $S^N(\E) = \Sigma_N\otimes det(\E)^{N}$ is ample, the same is true for $\E$.
Thus $reg(\E^{(q)}) \le reg(S^q(\E))\to -\infty$ as $q\to \infty$. This finishes the proof in characteristic $p$; the characteristic $0$ case is handled as usual. \end{proof} \section{The main vanishing theorem} As a warm up to the main theorem, we will extend some of the conclusions of theorem~\ref{thm:proj} to a more general class of spaces called Frobenius split varieties \cite{mehta}. This means that the map $O_{X}\to F_{*}O_{X}$ splits (actually, we only need the ostensibly weaker property that this map splits in the derived category). Proposition \ref{prop:relFsplit} implies that projective spaces are Frobenius split. Other examples of Frobenius split varieties include quotients of semisimple groups by parabolic subgroups [loc. cit], and most mod $p$ reductions of a smooth Fano variety \cite[4.11]{smith}. \begin{prop} Suppose that $X$ is a smooth projective variety such that $O_X\to F_*O_X$ splits in the derived category of $O_X$-modules. Then $H^{i}(X,\E) = 0$ for $i > \phi(\E)$. If $\E$ is locally free, then $H^i(X,\omega_{X}\otimes \E) = 0$ for $i > \phi(\E)$. \end{prop} \begin{proof} For the first statement, use the fact that there is an injection $$H^i(X,\E)\hookrightarrow H^i(X,\E\otimes F_*O_X)$$ because $O_X\to F_*O_X$ splits by hypothesis. On the other hand the projection formula gives $$H^i(X,\E\otimes F_*O_X)\cong H^i(X,\E^{(p)})$$ By iterating we get a sequence of injections $$H^{i}(\E)\hookrightarrow H^i(\E^{(p)}) \hookrightarrow \ldots \hookrightarrow H^i(\E^{(p^n)}) = 0$$ for $i > \phi(\E)$ and $n >> 0$. We can replace $\E$ by $\E^{*}$ and $i$ by $dim X - i$ in the above sequence of injections. This together with Serre duality yields the result. \end{proof} When $\E$ is a vector bundle on $\PP^n$, the proposition yields the vanishing $$H^i(\E(j-n-1)) = 0$$ for $j\ge 0$, $i>\phi(\E)\ge\phi(\E(j))$, obtained earlier. When $k$ is a perfect field, let $W(k)$ be the ring of Witt vectors over $k$, and $W_2(k)=W(k)/p^2W(k)$.
It is helpful to keep the following example in mind: if $k=\Z/p\Z$, then $W(k)$ is the ring of $p$-adic integers, so that $W_2(k) \cong \Z/(p^2)$. \begin{thm}\label{thm:van} Let $k$ be a perfect field of characteristic $p>n$, and let $X$ be a smooth $n$ dimensional projective $k$-variety with a reduced normal crossing divisor $D$. Suppose that $\E$ is a coherent sheaf on $X$ and that $(X,D)$ can be lifted to a pair over $Spec\, W_2(k)$. Then $$H^i(X, \Omega_X^j(\log D)(-D)\otimes \E) = 0$$ for $i+j > n+\phi(\E, D)$. \end{thm} \begin{remark} Note that $\E$ is not required to lift. It is possible to obtain a weaker statement when $p\le n$, but we won't need it. \end{remark} The proof is based on the following lemmas. \begin{lemma}\label{lemma:bootstrap1} Suppose that $0 \le D'\le pD$ is a divisor such that $$H^i(\Omega_X^j(log D)(-D')\otimes \E^{(p)}) = 0 $$ for all $i+j>N$. Then $$H^i(\Omega_X^j(log D)(-D'_{red})\otimes \E) = 0 $$ for all $i+j>N$. \end{lemma} \begin{proof} Set $D_1 = D'_{red}$ and $B = pD_1-D'$. To avoid confusion, we will say a few words about our conventions. The differentials on $\Omega_X^\dt(log D)(B)$ and $\Omega_X^\dt(log D)$ are inherited from the complex of meromorphic forms. All other differentials are induced from these using tensor products and pushforwards. There is a quasi-isomorphism $$\Omega_X^\dt(log D)\cong \Omega_X^\dt(log D)(B)$$ by \cite[3.3]{hara} (see also \cite[4.1]{ms}).
Tensoring both sides with $\E^{(p)}(-D')$ yields $$\Omega_X^\dt(log D)\otimes \E^{(p)}(-D')\cong \Omega_X^\dt(log D)\otimes [\E(-D_1)]^{(p)}.$$ This implies that $$F_*(\Omega_X^\dt(log D)(-D')\otimes \E^{(p)})\cong F_*(\Omega_X^\dt(log D)\otimes F^*(\E(-D_1))).$$ Lemma~\ref{lemma:projform} shows that the right side is quasi-isomorphic to $$[F_*\Omega_X^\dt(log D)]\otimes \E(-D_1).$$ By \cite[4.2]{deligne-ill} (and the remarks of section 1), this is quasi-isomorphic to $$\left(\bigoplus_j \Omega_X^j(log D)[-j]\right)\otimes \E(-D_1).$$ The spectral sequence $$H^i(\Omega_X^j(log D)(-D')\otimes \E^{(p)})\Rightarrow \mathbb{H}^{i+j}(\Omega_X^\dt(log D)(-D')\otimes \E^{(p)})$$ together with the hypothesis shows that the abutment vanishes for $i+j>N$. Therefore $$\mathbb{H}^{i}(F_*(\Omega_X^\dt(log D)(-D')\otimes \E^{(p)})) \cong \bigoplus_j H^{i-j}(\Omega_X^j(log D)\otimes \E(-D_1))$$ vanishes for $i>N$. \end{proof} \begin{lemma}\label{lemma:bootstrap2} Suppose that $0 \le D'\le p^aD$ is a divisor such that $$H^i(\Omega_X^j(log D)(-D')\otimes \E^{(p^a)}) = 0 $$ for all $i+j>N$. Then $$H^i(\Omega_X^j(log D)(-D'_{red})\otimes \E) = 0 $$ for all $i+j>N$. \end{lemma} \begin{proof} We prove this by induction on $a$. The case where $D'=0$ is straightforward, so we assume that $D'\not= 0$. The initial case $a=1$ is the previous lemma. Suppose that the lemma holds for $a$, and suppose that $(\E, D')$ satisfies the hypothesis of the lemma with $a$ replaced by $a+1$. Since $\{1,\ldots, p\}$ forms a set of representatives of $\Z/p\Z$, we can decompose $D' = pD_1 + D_2$ such that $D_{2,red} = D'_{red}$, $0< D_2\le pD$ and $0\le D_1<_{strict} p^aD$. These assumptions guarantee that the divisor $D'_{red}+D_1$ is less than or equal to $p^aD$ and has the same support as $D'$. Then $$\E^{(p^{a+1})}(-D') = \E_1^{(p)}(-D_2)$$ where $\E_1 = \E^{(p^a)}(-D_1)$.
By assumption, $$H^i(\Omega_X^j(log D)(-D')\otimes \E^{(p^{a+1})}) = H^i(\Omega_X^j(log D)(-D_2)\otimes \E_1^{(p)}) = 0$$ for $i+j > N$. Lemma~\ref{lemma:bootstrap1} implies $$H^i(\Omega_X^j(log D)(-D'_{red})\otimes \E_1) = H^i(\Omega_X^j(log D)(-D'_{red}-D_1)\otimes \E^{(p^{a})})=0$$ for $i+j > N$. Induction yields the desired conclusion. \end{proof} \begin{proof} [Proof of theorem \ref{thm:van}] By definition, $\phi(\E, D) = \phi(\E^{(p^a)}(-D'))$ for some $0\le D' <_{strict} p^aD$. We can assume $a>>0$ since we can replace $\E^{(p^a)}(-D')$ by a Frobenius power. Therefore $$H^i(\Omega_X^j(log D)(-p^bD'- D)\otimes \E^{(p^{a+b})}) = H^i(\Omega_X^j(log D)(- D)\otimes (\E^{(p^{a})}(-D'))^{(p^b)}) = 0 $$ for $b>>0$, all $i>\phi(\E, D)$ and all $j$. Since the support of $p^bD'+D$ is $D$ and the coefficients of this divisor are less than or equal to $p^{a+b}$, lemma~\ref{lemma:bootstrap2} implies the theorem. \end{proof} \begin{cor}\label{cor:van1} Suppose that $(X,D,\E)$ is as above and $char\, k = 0$. Then $$H^i(X, \Omega_X^j(log D)(-D)\otimes \E) = 0$$ for $i+j > n + \phi(\E, D)$. In particular, $$H^i(X, \Omega_X^j\otimes \E) = 0$$ for $i+j > n+\phi(\E)$. \end{cor} \begin{proof} Choose an arithmetic thickening of $(X,D,\E)$. Almost all of the closed fibers satisfy the conditions of the theorem. Therefore the corollary follows by semicontinuity. \end{proof} \begin{cor}\label{cor:van2} Suppose that $(X,D,\E, k)$ is as in theorem~\ref{thm:van} or corollary~\ref{cor:van1}, and that $L$ is an arithmetically nef line bundle on $X$. Then $$H^i(X, \Omega_X^j(log D)(-D)\otimes \E\otimes L) = 0$$ for $i+j > n + \phi(\E, D)$. \end{cor} \begin{proof} By proposition~\ref{prop:nefline}, $L$ is $F$-semipositive, hence $\phi(\E\otimes L, D) = \phi(\E, D)$. So the corollary follows from theorem~\ref{thm:van} in positive characteristic, or the previous corollary in characteristic $0$.
\end{proof} \begin{cor}\label{cor:vandual} Suppose that $(X,D,\E,k)$ satisfies the conditions of theorem~\ref{thm:van} or corollary~\ref{cor:van1}. Then $$Ext^i(\E, \Omega_X^j(log D)) = 0$$ for $i+j < n - \phi(\E, D)$. If $\E$ is locally free, then $$H^i(X,\Omega_X^j(log D)\otimes \E^*) = 0$$ for $i+j < n - \phi(\E, D)$. \end{cor} \begin{proof} This is a consequence of Serre duality. \end{proof} \begin{cor} (Le Potier) Suppose that $char \, k = 0$ and $\E_i$ are ample locally free sheaves on a smooth variety $X$. Then $$H^i(X, \Omega_X^j\otimes \E_1\otimes \ldots\otimes \E_m) = 0$$ for $i+j \ge n+rank(\E_1)+\ldots+rank(\E_m)$. \end{cor} \begin{proof} Follows from corollary~\ref{cor:van1} and corollary~\ref{cor:keyestimate}. \end{proof} The next result is a generalization of the Kawamata-Viehweg vanishing theorem \cite{kawamata, viehweg}. (To obtain the statement given in the introduction, set $\Delta = \frac{1}{m}D'$.) \begin{cor}\label{cor:kvvan1} Let $(X,D,\E,k)$ be as in corollary~\ref{cor:van1}. Suppose that there is a positive integer $m$, and a divisor $0\le D'\le (m-1)D$ such that $S^{m}(\E)(-D')$ is ample. Then $$H^i(X, \Omega_X^j(log D)(-D)\otimes \E) = 0$$ for $i+j\ge n+rank(\E)$. In particular, $$H^i(X, \omega_X\otimes \E) = 0$$ for $i \ge rank(\E)$. \end{cor} \begin{proof} This follows from theorem~\ref{thm:logkeyestimate} and corollary~\ref{cor:van1}. \end{proof} \begin{cor}\label{cor:genphivan} Suppose that $char \, k = 0$ and $\E$ is a locally free sheaf on a projective $k$-variety $Z$ with rational singularities. Then $$H^i(Z, \omega_Z\otimes \E) = 0$$ for $i> \phi_{gen}(\E)$. \end{cor} \begin{proof} By the previous corollary, $H^i(Y,\omega_Y\otimes f^*\E)$ vanishes for $i> \phi_{gen}(\E)$, for some resolution of singularities $f:Y\to Z$. We have $\R f_*\omega_Y = \omega_Z$ because $Z$ has rational singularities, and so the corollary follows.
\end{proof} \begin{cor}\label{cor:kvvanishing} Suppose that $char \, k = 0$ and $\E$ is the pull back of an ample vector bundle under a surjective morphism $f:X\to Y$ with $X$ smooth. Then $$H^i(X, \omega_X\otimes \E) = 0$$ for $i\ge rank(\E)+d$ where $d$ is the dimension of the generic fiber of $f$. \end{cor} \begin{proof} Follows from corollary~\ref{cor:genphi} and corollary~\ref{cor:genphivan}. \end{proof} \section{Some applications} In this section, we work over $\C$. We start with a refinement of the Lefschetz hyperplane theorem (which corresponds to case B with $\E= O_X$ and $r=m=1$). \begin{prop} Let $D\subset X$ be a smooth divisor on a smooth $n$ dimensional projective variety. Suppose that $m>0$. \begin{enumerate} \item[A.] If $S^m(\E)(rD)$ is ample for some $-m< r\le 0$, then $$H^i(X,\Omega_X^j\otimes \E)\to H^i(D,\Omega_D^j\otimes \E)$$ is bijective if $i+j\ge n + rank(\E)$ and surjective if $i+j= n +rank(\E)-1$. \item[B.] If $S^m(\E)(rD)$ is ample for some $0< r \le m$, then $$H^i(X,\Omega_X^j\otimes \E^*)\to H^i(D,\Omega_D^j\otimes \E^*)$$ is bijective if $i+j\le n - rank(\E)-1$ and injective if $i+j= n -rank(\E)$. \end{enumerate} \end{prop} \begin{proof} We have an exact sequence $$0\to \Omega_X^j(log D)(-D)\to \Omega_X^j\to \Omega_D^j\to 0.$$ Tensoring this with $\E$ and applying corollary \ref{cor:kvvan1} proves A. For B, tensor this with $\E^*$ and observe that by Serre duality and corollary \ref{cor:kvvan1} $$H^i(X,\Omega_X^j(log D)(-D)\otimes \E^*)\cong H^{n-i}(X,\Omega_X^{n-j}(log D)(-D)\otimes \E(D))^*=0 $$ when $i+j \le n-rank(\E)$. \end{proof} Given an algebraic variety $Y$ and an algebraic coherent sheaf $\F$ over $Y$, we denote the corresponding analytic objects by $Y^{an}$ and $\F^{an}$. Given a closed subvariety $Z\subset Y$, let $$codim(Z,\,Y) = \left\{ \begin{array}{l} \mbox{the codimension of $Z$ if $Z\not=\emptyset$} \\ dim\, Y + 1 \mbox{ otherwise} \end{array} \right.
$$ Our goal is to prove a vanishing theorem for ample vector bundles over quasiprojective varieties. This generalizes some results for line bundles due to Bauer and Kosarew \cite{bk2} and the author \cite{arapura1}. \begin{thm}\label{thm:noncompact} Suppose that $U$ is a smooth quasiprojective variety with a possibly singular projective compactification $Y$. Let $\E$ be the restriction to $U$ of an ample vector bundle on some compactification of $U$ (possibly other than $Y$). Then $$H^i(U^{an},\, (\Omega_U^j\otimes \E^*)^{an})= H^i(U,\, \Omega_U^j\otimes \E^*) = 0$$ for $i+j\le codim(Y-U,\, Y)- rank(\E)-1$. \end{thm} As a first step, we need a generalization of Steenbrink's vanishing theorem \cite{steenbrink}. \begin{prop}\label{prop:steen} Suppose that $f:X\to Y$ is a desingularization of an $n$-dimensional projective variety such that $X$ possesses a reduced normal crossing divisor $D$ containing the exceptional divisor. If $\E$ is a nef (e.g.\ globally generated) vector bundle, then $$R^if_*[\Omega_X^j(log D)(-D)\otimes \E] = 0$$ for $i+j \ge n + rank(\E)$. \end{prop} \begin{proof} As in the proof of corollary~\ref{cor:biratphi}, we can find a relatively ample divisor $-\sum a_iD_i$ supported on $D$ with $a_i\ge 0$. Let $L$ be a large power of an ample line bundle on $Y$. Then $f^*L^{\otimes N}(-\sum a_iD_i)$ is ample for all $N>0$. Therefore $S^m(\E\otimes f^*L^{\otimes N})(-\sum a_iD_i)$ is ample for all $m> 0$. Then corollary~\ref{cor:kvvan1} shows that $$H^i(\Omega_X^j(log D)(-D)\otimes \E\otimes f^*L^{\otimes N}) = 0$$ for $i+j \ge n + rank(\E)$. For $N>>0$, the Leray spectral sequence and Serre vanishing yield $$H^0(R^if_*[\Omega_X^j(log D)(-D)\otimes \E]\otimes L^{\otimes N}) = 0$$ for $i+j \ge n + rank(\E)$. We can assume that these sheaves are all globally generated by increasing $N$ if necessary. Thus they must vanish.
\end{proof} \begin{cor}\label{cor:localvan} Let $Y$ be a projective variety, $Z\subset Y$ a closed subvariety containing the singular locus, and $f:X\to Y$ a resolution of singularities which is an isomorphism over $Y-Z$ such that $D= f^{-1}(Z)_{red}$ is a divisor with normal crossings. Then for any nef vector bundle $\E$ on $X$, $$H_D^i(X,\Omega_X^j(log D)\otimes \E^*) = 0$$ for $i+j\le codim(Z,Y) -rank(\E)$. \end{cor} \begin{proof} This is a generalization of \cite[thm 1]{arapura1}. A proof of this corollary can be obtained by simply replacing Steenbrink's theorem with proposition~\ref{prop:steen} in the proof given there. \end{proof} \begin{proof}[Proof of theorem~\ref{thm:noncompact}] The groups $H^i(U, \F)$ and $H^i(U^{an}, \F^{an})$ are isomorphic for any coherent sheaf $\F$ on $Y$ and $i< codim(Y-U,Y)-1$ by \cite[IV 2.1]{hartshorne-ampsub}. So it suffices to prove the vanishing in the algebraic category. Let $Z = Y-U$ and let $f:X\to Y$ be a desingularization satisfying the assumptions of corollary~\ref{cor:localvan}. We can assume that $X$ also dominates the compactification where $\E$ extends to an ample bundle. We use the same symbol for this extension, and its pullback to $X$. Corollary~\ref{cor:biratphi} and theorem~\ref{thm:keyestimate} imply that $\phi(\E, D) < rank(\E)$. Corollary~\ref{cor:localvan} shows that $$H^{i+1}_D(X,\Omega_X^j(log D)\otimes \E^*)=0.$$ Then the exact sequence for local cohomology yields a surjection $$H^i(X,\Omega_X^j(log D)\otimes \E^*)\to H^i(U,\Omega_X^j(log D)\otimes \E^*) = H^i(U,\Omega_U^j\otimes \E^*).$$ The cohomology group on the left vanishes as a consequence of corollary~\ref{cor:vandual}. \end{proof} \end{appendix} \end{document}
\begin{document} \title{Floating point numbers are real numbers} \author[1]{Walter F. Mascarenhas \thanks{[email protected]}} \affil[1]{ IME, Universidade de S\~{a}o Paulo} \maketitle \begin{abstract} Floating point arithmetic allows us to use a finite machine, the digital computer, to reach conclusions about models based on continuous mathematics. In this article we work in the other direction, that is, we present examples in which continuous mathematics leads to sharp, simple and new results about the evaluation of sums, square roots and dot products in floating point arithmetic. \end{abstract} \section{Introduction} \label{secIntro} According to Knuth \cite{Knuth}, floating point arithmetic has been used since Babylonia (1800 B.C.). It played an important role in the beginning of modern computing, as in the work of Zuse in the late 1930s. Today we have several models for floating point arithmetic. Some of these models are based on algebraic structures, like Kulisch's Ringoids \cite{Kulisch}. Other models validate numerical software and lead to automated proofs of results about floating point arithmetic \cite{Boldo}. There are also models based on continuous mathematics, which are used intuitively. For example, when analysing algorithms based on the floating point operations $\wrm{op} \in \mathds{S}et{+,-,*,/}$, executed with machine precision $\epsilon$, one usually argues that \pbDef{defEpsArg} \wfl{x \, \rm{op} \, y} = \wlr{\, x \,\wrm{op} \, y \, } \wlr{1 + \delta} \hspace{1cm} \wrm{with} \hspace{1cm} \wabs{\delta} \leq \epsilon, \peDef{defEpsArg} where $\wfl{z}$ is the rounded value of $z$. Equation \pRef{defEpsArg} is called ``the $\wlr{1 + \epsilon}$ argument.'' It may not apply in the presence of underflow, but it led to many results in the hands of Wilkinson \cite{WilkinsonA,WilkinsonB}.
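As an informal illustration of Equation \pRef{defEpsArg} (our own sketch, not part of the formal development that follows), the $\wlr{1+\epsilon}$ argument can be checked numerically for IEEE binary64, whose unit roundoff is $2^{-53}$, by computing the exact values of the operands with Python's \texttt{fractions} module:

```python
# Spot check of the (1 + eps) argument for IEEE binary64 doubles,
# where the unit roundoff is u = 2**-53 (round to nearest).
# Exact reference values are represented with Fraction.
from fractions import Fraction

u = Fraction(1, 2**53)  # unit roundoff of binary64

def rel_err(exact, computed):
    """|computed - exact| / |exact|, evaluated exactly as a Fraction."""
    return abs(Fraction(computed) - exact) / abs(exact)

for x, y in [(0.1, 0.3), (1e16, 1.0), (2.5, 1.0 / 3.0)]:
    assert rel_err(Fraction(x) + Fraction(y), x + y) <= u  # fl(x + y)
    assert rel_err(Fraction(x) * Fraction(y), x * y) <= u  # fl(x * y)
```

The assertions hold because binary64 additions and multiplications are correctly rounded and the sample data causes no underflow or overflow.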
The effectiveness of the $(1 + \epsilon)$ argument is illustrated by Equation 3.4 in Higham \cite{Higham}, which expresses the dot product $\hat{d}_n$ of the vectors $\wvec{x},\wvec{y} \in \wrn{n}$ as \pbDef{highamsDot} \hat{d}_n = x_1 y_1 \wlr{1 + \theta_n} + x_2 y_2 \wlr{1 + {\theta_n}'} + x_3 y_3 \wlr{1 + \theta_{n-1}} + \dots + x_n y_n \wlr{1 + \theta_2}. \peDef{highamsDot} The $\theta_k$ above are bounded in terms of the unit roundoff $u$ as \pbDef{highamsTheta} \wabs{\theta_k} \leq \frac{k u}{1 - k u} =: \gamma_k, \peDef{highamsTheta} and Equations \pRef{highamsDot} and \pRef{highamsTheta} are a good example of the use of continuous mathematics to analyze floating point operations. They express well the effects of rounding errors on dot products, and will suffice for most people interested in their numerical evaluation. The purpose of this article is to simplify and extend the results about floating point arithmetic obtained using the $(1 + \epsilon)$ argument. We argue that by thinking of the set of floating point numbers as a subset of the real numbers we can use techniques from continuous mathematics to derive and prove nontrivial results about floating point arithmetic. For instance, we show that in many circumstances we can replace Higham's $\gamma_k$ by its linearized counterpart $k u$ and still obtain rigorous bounds. For us, the replacement of $\gamma_k$ by $k u$ is interesting because it leads to simpler versions of our articles \cite{Masc,MascCam,arxiv}, and arguments by other people could be simplified as well. We could, for example, replace some of Wilkinson's $1.06$ factors by $1$. In fact, we can even replace $\gamma_k$ by $k u / \wlr{1 + k u}$ when estimating the effects of rounding errors in the evaluation of the sum $\wfl{\sum_{k=0}^n x_k}$ of $n + 1$ numbers.
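As a quick sanity check, the three quantities $k u/\wlr{1 + k u}$, $k u$ and $\gamma_k$ agree to first order in $u$ but are strictly ordered for $0 < ku < 1$; the snippet below verifies the ordering in exact rational arithmetic, assuming the binary64 unit roundoff $u = 2^{-53}$ (an illustration of ours, not taken from \cite{Higham}):

```python
# Exact-rational comparison of Higham's gamma_k = k*u/(1 - k*u),
# its linearization k*u, and the concave bound k*u/(1 + k*u).
# Assumes the binary64 unit roundoff u = 2**-53; illustration only.
from fractions import Fraction

u = Fraction(1, 2**53)

for k in (1, 10, 1000, 10**6):
    ku = k * u
    gamma_k = ku / (1 - ku)
    # The three bounds agree to first order in u, but are strictly
    # ordered whenever 0 < k*u < 1.
    assert ku / (1 + ku) < ku < gamma_k
```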
Instead of \pbTClaim{naiveSum} \wabs{\wfl{\sum_{k=0}^n x_k} - \sum_{k=0}^n x_k} \leq \frac{n u}{1 - n u} \sum_{k=0}^n \wabs{x_k}, \peTClaim{naiveSum} we prove the sharper bound \pbTClaim{sharpSumB} \wabs{\wfl{\sum_{k = 0}^n x_k} - \sum_{k=0}^n x_k} \leq \frac{n u}{1 + n u} \sum_{k = 0}^n \wabs{x_k}, \peTClaim{sharpSumB} for arithmetics with subnormal numbers, when we round to nearest with unit roundoff $u$ and $20 n u \leq 1$. This bound grows slightly less than linearly with $n u$, that is, the right hand side is a strictly concave function of $n u$. Due to this concavity, we can rigorously conclude from Equation \pRef{sharpSumB} that \pbTClaim{sharpSumS} \wabs{ \wfl{\sum_{k = 0}^n x_k} - \sum_{k = 0}^n x_k} \leq n u \sum_{k = 0}^n \wabs{x_k}, \peTClaim{sharpSumS} and Equation \pRef{sharpSumS} is simpler than Equation \pRef{naiveSum}. When $x_k \geq 0$, Equation \pRef{sharpSumS} can be improved to \pbTClaim{sharpSumSB} x_k \geq 0 \Rightarrow \wabs{ \wfl{\sum_{k = 0}^n x_k} - \sum_{k = 0}^n x_k} \leq u \sum_{k = 1}^n \sum_{i = 0}^k x_i, \peTClaim{sharpSumSB} which is also simple and does not have higher order terms in $u$. We also analyze dot products, and derive simple and rigorous bounds like \pbTClaim{sharpDotIntro} \wabs{ \wfl{\sum_{k = 0}^n x_k y_k} - \sum_{k = 0}^n x_k y_k} \leq \wlr{n + 1} u \sum_{k = 0}^n \wabs{x_k y_k} \leq \wlr{n + 1} u \sqrt{\sum_{k = 0}^n x_k^2} \sqrt{\sum_{k = 0}^n y_k^2}, \peTClaim{sharpDotIntro} provided that $\sum_{k = 0}^n \wabs{x_k y_k}$ is not too small, in the formal sense presented in the statements of our results. The bound in Equation \pRef{sharpSumB} is new, and was derived with the theory introduced in the present article, but Equations \pRef{sharpSumS} and \pRef{sharpDotIntro} are not new. Actually, stronger versions of them were already proved by C.-P. Jeannerod and S. Rump in \cite{Rump1} and \cite{Rump2}, and we present them here only as an improvement of the older results in \cite{Higham}.
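The bound in Equation \pRef{sharpSumS} can be spot checked by comparing a left-to-right binary64 summation against the exact sum computed with rationals. This is a sketch under the assumptions of round to nearest, $u = 2^{-53}$ and no underflow or overflow in the samples; the helper name \texttt{float\_sum\_error} is our own:

```python
# Testing |fl(sum) - sum| <= n*u*sum|x_k| for the recursive binary64
# summation of n + 1 numbers, with the exact sum held as a Fraction.
from fractions import Fraction

u = Fraction(1, 2**53)

def float_sum_error(xs):
    """Return (|fl(sum xs) - sum xs|, n*u*sum|x_k|) as Fractions."""
    fl = 0.0
    for x in xs:            # left-to-right recursive summation
        fl = fl + x
    exact = sum(Fraction(x) for x in xs)
    n = len(xs) - 1         # the paper sums n + 1 numbers
    bound = n * u * sum(abs(Fraction(x)) for x in xs)
    return abs(Fraction(fl) - exact), bound

err, bound = float_sum_error([0.1, 0.2, 0.3, 0.4, -0.7, 1e8, -1e8, 3.14])
assert err <= bound
```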
When comparing the content of these references with the present work, please note that there is a slight difference between our bound \pRef{sharpDotIntro} and similar bounds in them: our dot products involve $n + 1$ pairs of numbers, whereas the sums and dot products in \cite{Higham,Rump1,Rump3} are defined for $n$ numbers, or pairs of numbers. Therefore, to leading order in $u$, Equation \pRef{sharpDotIntro} states exactly the same as the analogous equation (3.7) in \cite{Higham}: \pbDef{HighamFoo} \wabs{ \wvec{x}^{\mathrm{T}{}} \wvec{y} - \wfl{\wvec{x}^{\mathrm{T}{}} \wvec{y}}} \leq n u \wabs{\wvec{x}}^{\mathrm{T}{}} \wabs{\wvec{y}} + \wfc{O}{u^2}, \peDef{HighamFoo} because $n$ in \cite{Higham} is the same as $n + 1$ for us. Our contribution is to show that there is no need for the $\wfc{O}{u^2}$ term in Equation \pRef{HighamFoo} or the 1.06 factor in Wilkinson's Equation 6.11 \cite{WilkinsonB}, implicit in his exponent $t_2$. We also extend it to situations in which we may have underflow because, as one may expect after reading \cite{DemmelA,DemmelB,HighamSum,Higham,Neumaier}, the bounds above must be corrected in order to handle underflow or arithmetics without subnormal numbers. In the rest of the article we describe such corrections. Equations \pRef{sharpSumS} and \pRef{sharpSumSB} are simpler than Equation \pRef{naiveSum}, but the proofs we present for them are definitely not. However, we hope that after our bounds are validated via the usual peer review process or by automated tools, people will be able to use them without reading their complicated proofs. For this reason we divided the article into three parts (besides this introduction). In Section \ref{secDef} we define the terms which allow us to treat floating point numbers as particular cases of real numbers.
In Section \ref{secSharp} we illustrate the use of the definitions in Section \ref{secDef} to derive sharper and simpler bounds for the effects of rounding errors on fundamental operations in floating point arithmetic, like sums, products, square roots and dot products. Readers should focus on Section \ref{secSharp}. It would be nice if they could find better proofs for the results stated in that section. In fact, we are glad that after we posted the first version of this manuscript M. Lange and S. Rump \cite{Rump3} derived stronger versions of some results presented here, using more direct arguments. This does not contradict the effectiveness of the use of continuous mathematics to analyze floating point arithmetic. Our point is that we can deduce the results thinking in continuous terms, and their formal proofs are just the last step in the discovery process. In the last part of the article we prove our results. We try to handle all details in our proofs, and this makes them long and tedious. For this reason, we wrote two versions of the article. We plan to publish the long one, and the very long one will be available at arxiv.org. While reading any of these versions, we ask the reader not to underestimate how easily ``short and intuitive'' arguments about floating point arithmetic can be wrong. For example, in the appendix of our article \cite{Extended} we argue that we can gain intuition about what would happen if we were to round upward instead of to nearest by replacing $u$ by $2u$. This argument is correct in that context, because we verified each and every floating point operation in our computations. However, this intuitive argument is not rigorous in general. In fact, by replacing $u$ by $2u$ in the bounds for rounding to nearest in the present article one will not obtain rigorous bounds for arithmetics which round upward or downward.
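In the same illustrative spirit, the dot product bound in Equation \pRef{sharpDotIntro} can be spot checked on concrete data (binary64, round to nearest, $u = 2^{-53}$, no underflow in the samples; the helper \texttt{float\_dot\_error} is our own name, not from the references):

```python
# Spot check of |fl(x.y) - x.y| <= (n+1)*u*sum|x_k*y_k| on n + 1
# pairs, where fl is computed as s = fl(s + fl(x_k * y_k)).
from fractions import Fraction

u = Fraction(1, 2**53)

def float_dot_error(xs, ys):
    """Return (|fl(dot) - dot|, (n+1)*u*sum|x_k*y_k|) as Fractions."""
    s = 0.0
    for x, y in zip(xs, ys):
        s = s + x * y       # one rounded product, one rounded sum
    exact = sum(Fraction(x) * Fraction(y) for x, y in zip(xs, ys))
    n = len(xs) - 1         # n + 1 pairs, as in the paper
    bound = (n + 1) * u * sum(abs(Fraction(x) * Fraction(y))
                              for x, y in zip(xs, ys))
    return abs(Fraction(s) - exact), bound

err, bound = float_dot_error([0.1, -2.5, 3.0, 0.5], [7.0, 0.3, -0.25, 8.0])
assert err <= bound
```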
\iftoggle{LatexFull} { Finally, this extended version of the article is meant to be read using software such as Adobe Acrobat Reader, so that you can click on the hyperlinks (anything in blue) and follow them. For example, the statements of our lemmas end with a blue triangle. By clicking on this triangle you will access the proof of the corresponding result, and by clicking on the ``back button'' you will return to the statement. Please do use this feature of your reader in order to select which arguments to follow in more detail. Otherwise, you will find this article to be unbearably long. } \section{Definitions} \label{secDef} This section presents models of floating point arithmetic which extend the floating point operations to all real numbers. In the same way that one can use complex analysis to study integer arithmetic, and Sobolev spaces and distributions to learn about regular solutions of differential equations, by thinking of the set of floating point numbers as a subset of the set of real numbers we can use abstract arguments from optimization theory, point set topology and convex analysis to reason about the floating point arithmetics implemented in real computers. Most of our floating point numbers have the form $x = \pm \beta^{e} \wlr{\beta^{\mu} + r}$ where $\beta \in \mathds{S}et{2, 3, 4, \dots}$ is the base, $e$ is an integer exponent, the exponent $\mu$ is a positive integer, and the remainder $r$ is an integer in $[0,\wlr{\beta - 1} \beta^{\mu} )$. We also define zero as a floating point number and, finally, our models account for subnormal numbers $s = \pm \beta^{e}r $, for an integer $r \in [1, \beta^{\mu})$. We now define floating point numbers more formally. \pbLongDefBT{longDefBase}{Base} A base is an integer greater than one.
\peLongDef{longDefBase} \pbLongDefBT{longDefEps}{Unit roundoff} The unit roundoff associated to the base $\beta$ and the positive integer exponent $\mu$ is $u := u_{\beta,\mu} := 1/\wlr{2 \beta^\mu}$. (We omit the subscript from $u_{\beta,\mu}$ when $\beta$ and $\mu$ are evident given the context.) \peLongDef{longDefEps} The unit roundoff is our measure of rounding errors. It is equal to half the distance from $1$ to the next floating point number. Some authors express their results in terms of ulps (units in the last place) or the machine precision, and our $u$ corresponds to half of the ulp or the machine epsilon used by them. However, the reader must be aware that there are conflicting definitions of these terms in the literature, and there is no universally accepted convention. A choice must be made, and we prefer to follow Higham \cite{Higham} and use the unit roundoff $u$ in Definition \ref{longDefEps}. Our models are based on {\it floating point systems}, which are subsets of $\mathds R{}$ to which we round real numbers. The simplest floating point systems are the perfect ones, which are defined below. \pbLongDefBT{longDefMinusSet}{Minus set} For $\wcal{A} \subset \mathds R{}$, we define $-\wcal{A} := \mathds{S}et{-x, \ \wrm{for} \ x \in \wcal{A}}$. \peLongDef{longDefMinusSet} \pbLongDefBT{longDefSign}{Sign function} The function $\wrm{sign}: \mathds R{} \rightarrow \mathds R{}$ is given by $\mathds{S}ign{0} := 1$ and $\mathds{S}ign{x} = \wabs{x}/x$ for $x \neq 0$, that is, we define the sign of $0$ as one. \peLongDef{longDefSign} \pbLongDefBT{longDefBinade}{Equally spaced range (E)} The equally spaced range associated to the integer exponent $e$, the base $\beta$ and the positive exponent $\mu$ is \[ \wcal{E}_{e,\beta,\mu} := \mathds{S}et{\beta^e \wlr{\beta^{\mu} + r} \ \ \wrm{for} \ \ r = 0,1,2,3,\dots, \wlr{\beta -1 } \beta^{\mu} - 1} \] (We write simply $\wcal{E}_{e}$ when $\beta$ and $\mu$ are evident given the context.)
\peLongDef{longDefBinade} \pbLongDefBT{longDefPerfect}{Perfect system} The perfect floating point system associated to the base $\beta$ and the positive exponent $\mu$ is \[ \wcal{P} := \wcal{P}_{\beta,\mu} := \mathds{S}et{0} \, \bigcup \, \wlr{\bigcup_{e = -\infty}^{\infty} \wcal{E}_{e,\beta,\mu}} \bigcup \, \wlr{\bigcup_{e = -\infty}^{\infty} - \wcal{E}_{e,\beta,\mu}}. \eqno \eod \] \peLongDefX{longDefPerfect} Perfect floating point systems are convenient for proofs, but they ignore underflow and overflow and are not practical. It is our opinion that the best compromise to handle overflow is to assume that it does not happen, that is, to formulate models which do not take overflow into account and shift the burden of handling overflow to the users of the model. This opinion is not due to laziness, but to the fact that verifying the absence of overflow in particular cases is simpler than dealing with floating point systems in which there is a maximum element. Underflow is more subtle than overflow, and it may be difficult to avoid it even in simple cases. Therefore, handling underflow in each particular case would be too complicated, and it is a better compromise to have models that take underflow into account. Such models are formulated by limiting the range of the exponents $e$ in Definition \ref{longDefPerfect}. \pbLongDefBT{longDefMPFR}{MPFR system} The MPFR system associated to the base $\beta$, the positive integer $\mu$ and the integer exponent $e_{\alpha} < -\mu$ is \[ \wcal{M} := \wcal{M}_{e_{\alpha},\beta,\mu} := \mathds{S}et{0} \, \bigcup \, \wlr{\bigcup_{e = e_{\alpha}}^{\infty} \wcal{E}_{e,\beta,\mu}} \bigcup \, \wlr{\bigcup_{e = e_{\alpha}}^{\infty} - \wcal{E}_{e,\beta,\mu}}. \eqno \eod \] \peLongDefX{longDefMPFR} The name MPFR is a tribute to the MPFR library \cite{MPFR}, which has been very helpful in our studies of floating point arithmetic.
This library does not use subnormal numbers, but allows for very wide exponent ranges (the minimal exponent is $1- 2^{30}$ in the default configuration.) As a result, underflow is very unlikely and when it does happen its consequences are minimal. \pbLongDefBT{longDefSubNormal}{Subnormal numbers} \index{subnormal number} The set of positive subnormal numbers associated to the base $\beta$, the positive integer exponent $\mu$ and the integer exponent $e_{\alpha}$ is $\wcal{S}_{e_{\alpha}} := \wcal{S}_{e_{\alpha},\beta,\mu} := \mathds{S}et{\beta^{e_{\alpha}} r, \ \ \wrm{with} \ \ r = 1,2,\dots, \beta^{\mu} - 1}$. \peLongDef{longDefSubNormal} \pbLongDefBT{longDefIEEE}{IEEE system} The IEEE system associated to the base $\beta$, the positive exponent $\mu$ and the integer exponent $e_{\alpha}$, with $e_{\alpha} < -\mu$, is \[ \wcal{I} := \wcal{I}_{e_{\alpha}, \beta, \mu} := \mathds{S}et{0} \, \bigcup \, \wcal{S}_{e_{\alpha},\beta,\mu} \, \bigcup \, -\wcal{S}_{e_{\alpha},\beta,\mu} \, \bigcup \, \wlr{\bigcup_{e = e_{\alpha}}^{\infty} \wcal{E}_{e,\beta,\mu}} \, \bigcup \, \wlr{\bigcup_{e = e_{\alpha}}^{\infty} -\wcal{E}_{e,\beta,\mu}}. \] The elements of $\wcal{S}_{e_{\alpha},\beta,\mu} \cup - \wcal{S}_{e_{\alpha},\beta,\mu}$ are the subnormal numbers for $\wcal{I}$. \peLongDef{longDefIEEE} The name IEEE is due to the IEEE 754 Standard for floating point arithmetic \cite{IEEE}, which contemplates subnormal numbers. \pbLongDefBT{longDefFloatSys}{Floating point system} \index{floating point system} There are three kinds of floating point systems: \begin{itemize} \item The perfect ones in Definition \ref{longDefPerfect}, which do not contain subnormal numbers. \item The unperfect ones, which can be either \begin{itemize} \item The IEEE systems in Definition \ref{longDefIEEE}, which have subnormal numbers, or \item The MPFR systems in Definition \ref{longDefMPFR}, which do not have subnormal numbers.
\end{itemize} \end{itemize} For brevity, we refer to ``the floating point system $\wcal{F}$'' as ``the system $\wcal{F}$,'' and throughout the article the letter $\wcal{F}$ will always refer to a floating point system. \peLongDef{longDefFloatSys} Please pay attention to the technical detail that, in order to avoid pathological cases, our definitions require that $\beta \geq 2$ and $\mu > 0$, so that the mantissas of our floating point numbers have at least two bits and $u \leq 1/4$. Additionally, the minimum exponent $e_{\alpha}$ for unperfect systems is smaller than $-\mu$, so that $1$ and $1/\beta$ are floating point numbers. By limiting the exponent range, we also limit the size of the smallest positive floating point numbers, which are quantified by the numbers $\alpha$ and $\nu$ below. \pbLongDefBT{longDefAlpha}{Alpha} For a perfect system we define $\alpha := 0$; the IEEE system $\wcal{I}_{e_{\alpha},\beta,\mu}$ has $\alpha := \beta^{e_{\alpha}}$, and $\alpha := \beta^{e_{\alpha} + \mu}$ for the MPFR system $\wcal{M}_{e_{\alpha},\beta,\mu}$. (Informally, the set of nonnegative elements of a system begins at $\alpha$.) \peLongDef{longDefAlpha} \pbLongDefBT{longDefNu}{Nu} For a perfect system we define $\nu := 0$ and the unperfect system $\wcal{F}_{e_{\alpha},\beta,\mu}$ has $\nu := \beta^{e_{\alpha} + \mu}$. (Informally, the {\bf N}ormalized range for a system is formed by the numbers $z$ with $\wabs{z} \geq \nu$, and $\nu$ is the Greek {\bf N}.) \peLongDef{longDefNu} \pbLongDefBT{longDefERange}{Exponent for F} Any integer $e$ is an exponent for a perfect system, and $e \in \mathds Z{}$ is an exponent for the unperfect system $\wcal{F}_{e_{\alpha}}$ if $e \geq e_{\alpha}$. \peLongDef{longDefERange} This article is about rounding to nearest, as we now formalize.
\pbLongDefBT{longDefRound}{Rounding to nearest} A function $\wrm{fl}: \mathds R{} \rightarrow \mathds R{}$ rounds to nearest in the floating point system $\wcal{F}$ if, for all $z \in \mathds R{}$, we have $\wfl{z} \in \wcal{F}$ and $\wabs{\wfl{z} - z} \leq \wabs{x - z}$ for all $x \in \wcal{F}$. \peLongDef{longDefRound} \pbLongDefBT{longDefTies}{Breaking ties} When $\wrm{fl}$ rounds to nearest in $\wcal{F}$, we say that $\wrm{fl}$ breaks ties downward if, for $x \in \wcal{F}$ and $z \in \mathds R{}$, $\wabs{x - z} = \wabs{\wfl{z} - z} \Rightarrow x \geq \wfl{z}$. Similarly, $\wrm{fl}$ breaks ties upward if $\wabs{x - z} = \wabs{\wfl{z} - z} \Rightarrow x \leq \wfl{z}$. \peLongDef{longDefTies} We now model the numerical sum $\wfl{\sum_{k = 0}^n y_k}$ of $n+1$ real numbers. For technical reasons, it is important to allow for the use of different rounding functions in the evaluation of the partial sums $s_k = \wlr{\sum_{i = 0}^{k-1} y_i} + y_{k}$. With this motivation, we state the last definitions in this section. \pbLongDefBT{longDefTuple}{Rounding tuples} A tuple of functions $\wrm{fl}t = \mathds{S}et{\wrm{fl}k{1},\dots,\wrm{fl}k{n}}$ rounds to nearest in $\wcal{F}$ if its elements round to nearest in $\wcal{F}$. In this case we say that $\wrm{fl}t$ is a rounding $n$-tuple, $n$ is $\wrm{fl}t$'s dimension and $\wcal{F}$ is $\wrm{fl}t$'s range. \peLongDef{longDefTuple} \pbLongDefBT{longDefProj}{Projection} Let $\wcal{A}$ be a set and $\wcal{A}^n$ the Cartesian product $\wcal{A} \times \dots \times \wcal{A}$ with $n$ factors. For $k = 1,\dots,n$, we define $\wrm{P}_k: \wcal{A}^n \rightarrow \wcal{A}^k$ as the projection on the first $k$ coordinates, that is $\wfc{\wrm{P}_k}{x_1,\dots,x_n} := \wlr{x_1,\dots,x_k}$. When $\wcal{A}$ is a vector space with zero element $\wvec{0}$, we define $\wrm{P}_0: \wcal{A}^n \rightarrow \mathds{S}et{\wvec{0}}$ as $\wfc{\wrm{P}_0}{x_1,\dots,x_n} := \wvec{0}$.
\peLongDef{longDefProj} \pbLongDefBT{longDefPSum}{Floating point sum} Let $\wcal{R}$ be the set of all functions from $\mathds R{}$ to $\mathds R{}$, and $f_0$ its zero element. We define $S_0: \mathds{S}et{0} \times \mathds{S}et{f_0} \rightarrow \mathds R{}$ as $\wcal{S}umkf{0}{0,f_0} := 0$. For $n > 0$ we define $S_n: \wrn{n} \times \wcal{R}^n \rightarrow \mathds R{}$ recursively as $\wcal{S}umkf{n}{\wvec{z}, \, \wrm{fl}t} \, := \, \wflk{n}{\wfc{S_{n-1}}{\wrm{P}_{n-1} \wvec{z}, \wrm{P}_{n-1} \wrm{fl}t} + z_{n}}. $ \peLongDef{longDefPSum} As a convenient notation, given a rounding $n$-tuple $\wrm{fl}t$ we write \[ \wflt{\sum_{k = 0}^n x_k} := \wfc{S_n}{\wlr{x_0 + x_1,x_2,x_3,x_4,\dots,x_n},\wrm{fl}t}, \] and when $\wrm{fl}t = \mathds{S}et{\wrm{fl},\wrm{fl},\dots,\wrm{fl}}$ has all its elements equal to $\wrm{fl}$ we write \[ \wfl{\sum_{k = 0}^n x_k} := \wflt{\sum_{k = 0}^n x_k}. \] We ask the reader to forgive us for the inconsistency in these expressions: neither $\wflt{\sum_{k = 0}^n x_k}$ nor $\wfl{\sum_{k = 0}^n x_k}$ is the value of a function $\wfc{\wrm{fl}t}{s}$ at $s = \sum_{k = 0}^n x_k$, but rather the value obtained by rounding the partial sums using the elements of $\wrm{fl}t$. Note also that $\wflt{\sum_{k = 0}^n x_k}$ is defined in terms of $x_0 + x_1$, that is, the first term in the sum is treated differently from the others. The same detail is present in Equation \pRef{highamsDot}, in which $x_1 y_1$ and $x_2 y_2$ are treated differently from the other terms. We emphasize that we define ``the floating point sum of $n + 1$ real numbers'', and not the ``the sum of $n+1$ floating point numbers.'' As a result, our rounded sums apply to all real numbers, not only to the ones in the system $\wcal{F}$, in the spirit of the first paragraph of this section.
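The recursion $S_n$ above can be sketched in a few lines of Python (an illustration only, assuming IEEE-754 binary64 and CPython's correctly rounded \texttt{Fraction}-to-\texttt{float} conversion; the helper names \texttt{fl} and \texttt{rounded\_sum} are ours, and a list of rounding functions plays the role of the tuple $\wrm{fl}t$):

```python
from fractions import Fraction

def fl(z):
    """Round an exact rational to the nearest binary64 value (ties to even).
    CPython's Fraction-to-float conversion is correctly rounded."""
    return Fraction(float(z))

def rounded_sum(xs, flt):
    """The recursion of Definition longDefPSum:
    s_1 = fl_1(x_0 + x_1), then s_k = fl_k(s_{k-1} + x_k) for k >= 2."""
    s = flt[0](Fraction(xs[0]) + Fraction(xs[1]))
    for x, f in zip(xs[2:], flt[1:]):
        s = f(s + Fraction(x))
    return s
```

For instance, with $u = 2^{-53}$ the inputs $x_0 = 1$ and $x_1 = x_2 = u$ make every partial sum hit the tie at $1 + u$; ties to even keep $1$, so \texttt{rounded\_sum} returns exactly $1$ even though the exact sum is $1 + 2u$.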
Dot products are similar to sums: \pbLongDefBT{longDefFpDot}{Dot product} The dot product of the vectors $\wvec{x},\wvec{y} \in \wrn{n + 1}$ evaluated with the rounding tuples $\wrm{fl}t = \mathds{S}et{\wrm{fl}_1,\dots,\wrm{fl}_n}$ and $\wrm{R} = \mathds{S}et{\wrm{r}_0,\dots,\wrm{r}_n}$ is \[ \wfc{\wrm{dot}_{\wrm{fl}t,\wrm{R}}}{\sum_{k = 0}^n x_k y_k} := \wflt{\sum_{k=0}^n \wflrk{r}{k}{x_k y_k}} \eqno \eod \] \peLongDefX{longDefFpDot} We also analyze dot products evaluated with the fused multiply add operations available in modern hardware and programming languages: \pbLongDefBT{longDefFmaDot}{Fma dot product} The fma dot product of the vectors $\wvec{x}, \wvec{y} \in \wrn{n+1}$ evaluated with the rounding tuple $\wrm{fl}t = \mathds{S}et{\wrm{fl}_0,\dots,\wrm{fl}_n}$ is \[ \wfc{\wrm{fma}_{\wrm{fl}t}}{\sum_{k=0}^n x_k y_k} := \wcal{S}umkf{n+1}{\wlr{x_0 y_0, x_1 y_1, x_2 y_2, \dots, x_n y_n},\wrm{fl}t}. \eqno \eod \] \peLongDefX{longDefFmaDot} \section{Sharp error bounds} \label{secSharp} This section presents sharper versions of the $(1 + \epsilon)$ argument. In summary, we argue that when rounding to nearest with unit roundoff $u$, in many situations we can use \pbTClaim{bestEps} \epsilon = \frac{u}{1 + u} \peTClaim{bestEps} in the $(1 + \epsilon)$ argument, and this value is better than $u$ or $u/(1 - u)$. The section has four parts. The first part describes the advantages of the $\epsilon$ in Equation \pRef{bestEps} when dealing with a few floating point operations. The next one generalizes our results to sums of many numbers, by proving the bound \pRef{sharpSumB}. Section \ref{secCumSum} presents bounds on the errors in sums which are expressed in terms of $\sum_{k = 1}^n \wabs{\sum_{i = 0}^k x_i}$. Section \ref{secDot} is about dot products. It shows that by working with real numbers from the start it is easy to adapt results derived for sums in order to obtain bounds for the errors in dot products. 
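Before turning to the bounds, the difference between the dot products of Definitions \ref{longDefFpDot} and \ref{longDefFmaDot} can be made concrete with a short sketch (an illustration only, assuming binary64; \texttt{fl}, \texttt{dot\_rounded} and \texttt{dot\_fma} are our names, and a single rounding function stands in for the tuples $\wrm{fl}t$ and $\wrm{R}$):

```python
from fractions import Fraction

def fl(z):
    """Round an exact rational to the nearest binary64 value (ties to even)."""
    return Fraction(float(z))

def dot_rounded(xs, ys):
    """Definition longDefFpDot: each product is rounded before the rounded sum."""
    ps = [fl(Fraction(x) * Fraction(y)) for x, y in zip(xs, ys)]
    s = ps[0]
    for p in ps[1:]:
        s = fl(s + p)
    return s

def dot_fma(xs, ys):
    """Definition longDefFmaDot: exact products enter the rounded partial sums,
    as with a fused multiply add."""
    s = Fraction(0)
    for x, y in zip(xs, ys):
        s = fl(s + Fraction(x) * Fraction(y))
    return s

# With x = 1 + 2^-52 the rounded products cancel exactly, while the fma
# version keeps the low-order bits of x^2 in the running sum.
x = 1 + 2**-52
print(dot_rounded([x, x], [x, -x]))  # exactly zero
print(dot_fma([x, x], [x, -x]))      # the tiny residual -1/2**104
```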
\subsection{Basics} \label{secBasics} This section is about the $(1 + \epsilon)$ argument for a few floating point operations. When rounding a floating point number, our first lemma states that the $\epsilon$ in Equation \pRef{bestEps} can be used when the real number $z$ is in the normal range, i.e., when the absolute value of $z$ is at least the number $\nu$ in Definition \ref{longDefNu}. \pbLemmaBT{lemUNear}{A better epsilon} If $\wrm{fl}$ rounds to nearest in $\wcal{F}$ and $\wabs{z} \geq \nu_{\wcal{F}}$ then \pbTClaim{rhoNear} \wabs{\wfl{z} - z} \leq \frac{\wabs{z} u}{1 + u}. \peTClaim{rhoNear} In particular, if $\wcal{F}$ is perfect then Equation \pRef{rhoNear} holds for all $z \in \mathds R{}$. \peLemma{lemUNear} Lemma \ref{lemUNear} is sharp in the sense that for any $\epsilon$ smaller than $u/\wlr{1+u}$ there exists a real number $z$ near $1 + u$ for which the bound $\wabs{\wfl{z} - z} \leq \epsilon \wabs{z}$ does not hold. This bound has been known for a long time \cite{Knuth}, but it leads to bounds slightly stronger than the ones in \cite{Higham}, for instance, because \[ \frac{u}{1 + u} < u < \frac{u}{1 - u} \] and when the result of the operation $x \, \wrm{op} \, y \neq 0$ is in the normal range we have the bound \pbDef{nearRound} \frac{1}{1 + u} \leq \frac{\wfl{x \, \wrm{op} \, y}}{x \, \wrm{op} \, y} \leq \frac{1 + 2 u}{1 + u}, \peDef{nearRound} instead of the usual bound \pbDef{usualRound} 1 - u \leq \frac{ \wfl{x \, \wrm{op} \, y}}{x \, \wrm{op} \, y} \leq \frac{1}{1 - u}.
\peDef{usualRound} As a result, we could use the same argument as Higham to conclude that in a perfect floating point system \pbDef{sharpSum} \wflt{\sum_{k=0}^n x_k} = \wlr{x_0 + x_1} \xi_0^n + \sum_{k = 2}^n x_k \xi_k^{n - k + 1} \hspace{0.5cm} \wrm{with} \hspace{0.5cm} \frac{1}{1 + u} \leq \xi_k \leq \frac{1 + 2u}{1 + u} \peDef{sharpSum} and \[ \wflt{\sum_{k=0}^n x_k y_k} = x_0 y_0 \xi_0^{n + 1} + \sum_{k = 1}^n x_k y_k \xi_k^{n - k + 2} \hspace{0.5cm} \wrm{with} \hspace{0.5cm} \frac{1}{1 + u} \leq \xi_k \leq \frac{1 + 2u}{1 + u}. \] The underlying reason why \pbDef{concF} \wfc{f}{u} := \frac{1 + 2 u}{1 + u} \hspace{0.5cm} \wrm{is \ a \ better \ upper\ bound \ than} \hspace{0.5cm} \wfc{h}{u} := \frac{1}{1 - u} \peDef{concF} is the difference between concavity and convexity. The function $\wfc{f_\tau}{u} := \wfc{f}{u}^{\tau}$ has second derivative \[ \wdsf{f_\tau}{u} = \frac{\tau \wfc{f_{\tau-2}}{u}}{\wlr{1 + u}^4} \wlr{\tau - 3 - 4 u} \] and is concave for $\tau \leq 3 + 4 u$. On the other hand, $\wfc{h_{\tau}}{u} := \wfc{h}{u}^{\tau}$ has second derivative \[ \wdsf{h_{\tau}}{u} = \tau \wlr{\tau + 1} \wfc{h_{\tau+2}}{u} \] and is convex for all $\tau > 0$ and $0 < u < 1$. As a result, we can rigorously linearize an upper bound based on $f_{\tau}$, with $0 \leq \tau \leq 3 + 4 u$, whereas linearizing an upper bound based on $h_{\tau}$ is correct only to leading order. For instance, using the bound \pRef{nearRound} we can prove the next corollary, and similar results combining multiplications and divisions, but we could not prove such results based only on the usual bound \pRef{usualRound}. \pbCorolBT{corFourProd}{Three products} Let $x,y,z$ and $w$ be real numbers.
If $\hat{p}_1 := \wfl{x * y}$, $\hat{p}_2 := \wfl{\hat{p}_1 * z}$ and $\hat{p}_3 := \wfl{\hat{p}_2 * w}$, and $p_k \neq 0$ and $\wabs{p_k}$ satisfy Equation \pRef{rhoNear} for $k = 1$, $2$ and $3$, then \[ 1 - k u \leq \frac{\hat{p}_k}{p_k} \leq 1 + k u, \] for $p_1 := x * y$, $p_2 := x * y * z$ and $p_3 := x * y * z * w$. \peFullCorol{corFourProd} Lemma \ref{lemUNear} also yields a simple proof of a well known result about square roots when $\beta = 2$ \cite{Cody,Kahan}, and solves an open problem for arbitrary bases $\beta$ \cite{BoldoB}: \pbCorolBT{corSqrt}{Square roots} For the base $\beta = 2$, if $x \in \wcal{F}$ is such that $x^2 \geq \nu$ and $\wfl{x^2}$ and $\wfl{\sqrt{\wfl{x^2}}}$ are evaluated rounding to nearest then $\wfl{\sqrt{\wfl{x^2}}} = \wabs{x}$. Moreover, \pbDef{thSqrt} \wfl{ \frac{\wabs{x}}{ \wfl{\sqrt{\wfl{x^2}}} }} \leq 1 \peDef{thSqrt} for a general base $\beta$, under the same hypotheses on $\wrm{fl}$ and $x$. \peCorol{corSqrt} The next two lemmas show that there are other conditions besides $\wabs{z} \geq \nu$ under which we can use the bound in Equation \pRef{rhoNear}: \pbLemmaBT{lemSmallSum}{Exact sums} If $x, y \in \wcal{F}$ are such that $\alpha \leq \wabs{x + y} \leq \beta \nu$ then $z := x + y \in \wcal{F}$, that is, the sum $x + y$ is exact. In particular, $z$ satisfies Equation \pRef{rhoNear}. \peLemma{lemSmallSum} \pbLemmaBT{lemIEEESum}{IEEE sums} Let $\wcal{I}$ be an IEEE system and $x,y \in \wcal{I}$. If $0 < \wabs{x + y} \leq \beta \nu$ then $\wabs{x + y} \geq \alpha$, and $z := x + y \in \wcal{I}$ and satisfies Equation \pRef{rhoNear}. \peLemma{lemIEEESum} The last two lemmas combined with Lemma \ref{lemUNear} imply that we can use the bound \pRef{rhoNear} for every real number $z$ which is the sum of two floating point numbers in an IEEE system.
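Lemmas \ref{lemSmallSum} and \ref{lemIEEESum} are easy to probe empirically. The sketch below (an illustration only, assuming IEEE-754 binary64, with exact reference arithmetic via Python's \texttt{Fraction}; the variable names are ours) checks that sums of doubles whose exact value is at most $\beta \nu$ in absolute value are computed without any rounding error:

```python
from fractions import Fraction
import random
import sys

nu = sys.float_info.min      # 2**-1022, the smallest positive normal double
beta_nu = 2.0 * nu           # beta * nu for beta = 2

# Lemma lemIEEESum, checked empirically in binary64: if x, y are doubles
# and |x + y| <= beta * nu, then x + y is itself a double, so the
# floating point addition commits no rounding error at all.
random.seed(2)
for _ in range(10_000):
    x = random.uniform(-beta_nu, beta_nu)
    y = random.uniform(-beta_nu, beta_nu)
    exact = Fraction(x) + Fraction(y)
    if abs(exact) <= Fraction(beta_nu):
        assert Fraction(x + y) == exact
```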
This is yet another instance in which subnormal numbers lead to simpler results, and it corroborates Demmel's arguments \cite{DemmelA,DemmelB} and the soundness of the decision to include subnormal numbers in the IEEE standard for floating point arithmetic \cite{IEEE}. Another instance is the fundamental Sterbenz's Lemma, which must be modified by the inclusion of the term $\alpha$ in its hypothesis in order to hold for MPFR systems: \pbLemmaBT{lemSterbenz}{Sterbenz's Lemma} If $a,b \in \wcal{F}$ and $\alpha \leq b - a \leq a$ then $b - a \in \wcal{F}$. \peFullLemma{lemSterbenz} \subsection{Norm one bounds} \label{secOneNorm} This subsection extends Lemma \ref{lemUNear} to sums with many terms. Our results are described by the next lemma and its corollaries. In particular, we show that underflow does not affect sums of positive numbers. Therefore, there is no need for terms involving the smallest positive floating point number when bounding the errors in such sums. \pbLemmaBT{lemNormOneBound}{Norm one bound} If $\wrm{fl}t = \mathds{S}et{\wrm{fl}k{1},\dots,\wrm{fl}k{n}}$ rounds to nearest in a perfect system, $20 n u \leq 1$ and $y_0,\dots,y_n \in \mathds R{}$ then \pbTClaim{normOneBound} \wabs{ \, \wflt{\sum_{k = 0}^n y_k} - \sum_{k = 0}^n y_k \, } \leq \frac{n u}{1 + n u} \sum_{k = 0}^n \wabs{y_k}. \peTClaim{normOneBound} \peLemma{lemNormOneBound} \pbCorolBT{corNormOneIEEE}{IEEE norm one bound} If $\wrm{fl}t = \mathds{S}et{\wrm{fl}k{1},\dots,\wrm{fl}k{n}}$ rounds to nearest in an IEEE system $\wcal{I}$, $20 n u \leq 1$ and $y_0,\dots, y_n \in \wcal{I}$ then Equation \pRef{normOneBound} is satisfied. \peFullCorol{corNormOneIEEE} \pbCorolBT{corNormOneMPFR}{MPFR norm one bound} If $\wrm{fl}t$ rounds to nearest in an MPFR system $\wcal{M}$, $20 n u \leq 1$, $\wvec{y} \in \wcal{M}^{n+1}$ and $y_k \geq 0$ for all $k$ then Equation \pRef{normOneBound} holds.
\peFullCorol{corNormOneMPFR} \pbCorolBT{corNormOneUnperfect}{Unperfect norm one bound} If $\wrm{fl}t = \mathds{S}et{\wrm{fl}k{1},\dots,\wrm{fl}k{n}}$ rounds to nearest in an unperfect system, $y_0,\dots,y_n \in \mathds R{}$ and $20 n u \leq 1$ then \pbTClaim{thSharpSumUnperfect} \wabs{ \, \wflt{\sum_{k = 0}^n y_k} - \sum_{k = 0}^n y_k \, } \leq \frac{n \alpha}{2} + \frac{n u}{1 + n u} \wlr{\frac{n \alpha}{2} + \sum_{k = 0}^n \wabs{y_k}}. \peTClaim{thSharpSumUnperfect} If, additionally, $u \sum_{k = 0}^n \wabs{y_k} \geq \alpha$ then Equation \pRef{normOneBound} is satisfied. \peCorol{corNormOneUnperfect} Note that Corollaries \ref{corNormOneIEEE} and \ref{corNormOneMPFR} have different hypotheses regarding the floating point numbers $y_0,\dots,y_n$: in the IEEE case, in which we have subnormal numbers, Equation \pRef{normOneBound} holds for all such $y_k$. In the MPFR case, due to the absence of subnormal numbers, we must assume that $y_k \geq 0$, because Equation \pRef{normOneBound} does not hold, for instance, when $\beta = 2$, $y_0 = 3 \alpha/2$, $y_1 = -\alpha$, $n = 1$ and we break ties upward. Note also that the number $\alpha$ in Equation \pRef{thSharpSumUnperfect} for an IEEE system is much smaller than the $\alpha$ for the corresponding MPFR system. The next example shows that the bound \pRef{normOneBound} is sharp: \pbExampleBT{exSharpSum}{The norm one bound is sharp} If $\wrm{fl}$ rounds to nearest in the perfect system $\wcal{P}_{2,\mu}$, breaking ties downward, and $x_0 := 1$ and $x_k := u$ for $k = 1, \dots,n$ then \[ \wfl{\sum_{k = 0}^n x_k} = 1 = \sum_{k=0}^n x_k - n u = \sum_{k=0}^n x_k - \frac{n u}{1 + n u} \sum_{k = 0}^n x_k. \] If $\wrm{fl}$ breaks ties upward for the same $x_k$ and $2 n u < 1$ then \[ \wfl{\sum_{k = 0}^n x_k} = 1 + 2 n u = \sum_{k=0}^n x_k + n u = \sum_{k=0}^n x_k + \frac{n u}{1 + n u} \sum_{k = 0}^n x_k.
\eqno \eod \] \pePlainExampleX{exSharpSum} As in Lemma \ref{lemUNear}, the bound \pRef{normOneBound} has concavity properties which allow us to linearize rigorously bounds resulting from a couple of its applications: \pbLemmaBT{lemConvexity}{Convexity} For $k \in \mathds N{}$ and $i = 1,\dots,k$, let $n_i$ be a positive number and define functions $f_k, g_k: (0,\infty) \rightarrow \mathds R{}$ by \[ \wfc{f_k}{u} = \prod_{i = 1}^k \frac{1 + 2 n_i u}{1 + n_i u} \hspace{1cm} \wrm{and} \hspace{1cm} \wfc{g_k}{u} = \prod_{i = 1}^k \frac{1}{1 + n_i u}. \] The functions $f_k$ are strictly concave for $k = 1$, $2$ and $3$ and the functions $g_k$ are convex for all $k$. In particular, for $k \leq 3$, \[ 1 - \wlr{\sum_{i = 1}^k n_i} u \leq \wfc{g_k}{u} \leq \wfc{f_k}{u} \leq 1 + \wlr{\sum_{i = 1}^k n_i} u. \pFullLink{lemConvexity} \] \peLemmaX{lemConvexity} As a final point for this section, we note that Lemma \ref{lemNormOneBound} implies that \pbTClaim{sharpSumNearMax} \wabs{ \, \wflt{\sum_{k =0}^n y_k} - \sum_{k = 0}^n y_k \, } \leq \frac{n \wlr{n + 1} u}{1 + n u} \max_{k = 0,\dots,n} \wabs{y_k}, \peTClaim{sharpSumNearMax} and it is natural to ask whether the quadratic term in $n$ in the right hand side of Equation \pRef{sharpSumNearMax} is necessary. The next example shows that bounds in terms of $\max \wabs{y_k}$ do need a quadratic term in $n$ (or large constant factors): \pbExampleBT{exQuadGrowth}{Quadratic growth} If $\wrm{fl}$ rounds to nearest in the perfect system $\wcal{P}_{2,\mu}$, breaking ties downward, $y_0 := 1 + u$ and $y_k := 1 + 2^{\wfloor{\wfc{\log_2}{k + 1}}} u$ for $k = 1,\dots, n := 2^m - 1$, where $m \in \mathds N{}$ is such that $2^m u < 1$, then \[ \wfl{\sum_{k = 0}^n y_k} = \sum_{k = 0}^n y_k - \frac{n^2 + 2 n + 3}{3} u \leq \sum_{k = 0}^n y_k - \frac{n^2 + 2 n + 3}{6} u \max_{k = 0,\dots, n} \wabs{y_k}. 
\pFullLink{exQuadGrowth} \] \peExampleX{exQuadGrowth} \subsection{Cumulative bounds} \label{secCumSum} Although Lemma \ref{lemNormOneBound} leads to the simple bound \pRef{sharpSumS} on the error in the evaluation of sums, it is not as good from a qualitative point of view as the result one would obtain from the version of Higham's Equation 3.4 for sums, or from our Equation \pRef{highamsDot}. We believe that Higham and Wilkinson would write this equation as \pbDef{wilkSum} \wfl{\sum_{k = 1}^n x_k} = \wlr{x_1 + x_2} \wlr{1 + \theta_{n-1}} + x_3 \wlr{1 + \theta_{n-2}} + \dots + x_n \wlr{1 + \theta_1}, \peDef{wilkSum} for $\theta_n$ in Equation \pRef{highamsTheta}. In fact, Wilkinson presents an expression similar to Equation \pRef{wilkSum} for sums using a double precision accumulator on page 117 of \cite{WilkinsonB}. Equation \pRef{wilkSum} gives a better intuition regarding the effects of rounding errors in the corresponding sum than the bound in Lemma \ref{lemNormOneBound}. Therefore, it is natural to look for bounds that take into account the stronger relative influence of the first terms in Equation \pRef{wilkSum}. The next examples are relevant in this context. \pbExampleBT{exSharpSumNearB}{Minimum cumulative bound} If $\wrm{fl}$ rounds to nearest in the perfect system $\wcal{P}_{2,\mu}$, breaking ties downward, and $x_k := u^{-k}$ for $k = 0,\dots,n$ then \[ \wfl{\sum_{k = 0}^n x_k} = u^{-n} = \sum_{k = 0}^n x_k - \kappa_n u \sum_{k = 1}^n \sum_{i = 0}^k x_i \] for \[ 1 - u < \kappa_n := \frac{\wlr{1 - u}\wlr{{1 - u^{n}}}}{1 - u^{n} - n u^{n+1} \wlr{1 - u}} < \wlr{1 - u} \wlr{1 + u^n} < 1.
\pFullLink{exSharpSumNearB} \] \peExampleX{exSharpSumNearB} \pbExampleBT{exSharpSumNearC}{Maximum cumulative bound} If $\wrm{fl}$ rounds to nearest in the perfect system $\wcal{P}_{\beta,\mu}$, breaking ties upward, $1 = e_1 < e_2 < \dots < e_n$ are integer exponents, $x_0 := u$, $x_1 := 1$, and $x_k := \beta^{e_k} \wlr{1 + u} - \beta^{e_{k-1}} \wlr{1 + 2 u}$ for $k = 2,\dots, n$ then \pbDef{ssBnC} \wfl{\sum_{k=0}^n x_k} \leq \sum_{i = 0}^n x_i + \tau_n u \sum_{k=1}^n \sum_{i = 0}^k x_i, \peDef{ssBnC} for \[ \tau_n := \frac{1}{1 + u \wlr{\frac{\beta - 2}{\beta - 1} + \frac{n}{\beta^{n} - 1}}}. \] Additionally, if $e_k = k - 1$ for $k \geq 1$ then we have equality in Equation \pRef{ssBnC}. \peFullExample{exSharpSumNearC} These examples indicate that there is an asymmetry between the upper and lower bounds on the errors $\delta := \wfl{\sum_{k=0}^n x_k} - \sum_{i = 0}^n x_i$ in terms of $\sum_{k = 1}^n \sum_{i = 0}^k x_i$: the constants $\kappa_n$ in Example \ref{exSharpSumNearB} and $\tau_n$ in Example \ref{exSharpSumNearC} are equal to $1/\wlr{1 + u}$ for $n = 1$ but as $n$ increases, $\kappa_n$ decreases toward $1 - u$ whereas $\tau_n$ increases toward $1$. Therefore, the worst lower and upper values for $\delta$ are reached in different situations, and are due to distinct causes. In fact, the lower bound for $\delta$ in the next lemma is a straightforward consequence of the convexity of the functions $\wlr{1 + u}^{-k}$ and Equation \pRef{sharpSum}, whereas the upper bound is a nontrivial consequence of the concavity of $\wlr{1 + 2u}/\wlr{1 + u}$.
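To make the role of the cumulative quantity $\sum_{k = 1}^n \wabs{\sum_{i = 0}^k y_i}$ concrete, the following sketch (an illustration only, assuming binary64, with exact reference arithmetic via Python's \texttt{Fraction}; the names are ours) checks on a random positive sequence that the recursive-sum error is at most $u$ times this quantity, a weakened form of the bounds below, whose constants are all smaller than one:

```python
from fractions import Fraction
import random

def fl(z):
    """Round an exact rational to the nearest binary64 value (ties to even)."""
    return Fraction(float(z))

u = Fraction(1, 2**53)  # binary64 unit roundoff

random.seed(1)
ys = [Fraction(random.uniform(0.0, 1.0)) for _ in range(50)]

# Recursive rounded sum s_1 = fl(y_0 + y_1), s_k = fl(s_{k-1} + y_k),
# while accumulating the exact partial sums for the cumulative bound.
partial = ys[0] + ys[1]
s = fl(partial)
cum = abs(partial)
for y in ys[2:]:
    partial += y
    cum += abs(partial)
    s = fl(s + y)

err = abs(s - sum(ys))
assert err <= u * cum  # u times the sum of the absolute exact partial sums
```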
\pbLemmaBT{lemPositiveBound}{Positive cumulative bound} If $\wrm{fl}t = \mathds{S}et{\wrm{fl}k{1},\dots,\wrm{fl}k{n}}$ rounds to nearest in a perfect system, $y_0,y_1,\dots,y_n \in \mathds R{}$, with $y_k \geq 0$ for $k = 0,\dots,n$, and $20 n u \leq 1$ then \pbTClaim{thPositiveBound} - \frac{u}{1 + u} \sum_{k = 1}^n \sum_{i = 0}^k y_i \leq \wflt{\sum_{k=0}^n y_k} - \sum_{k = 0}^n y_k \, \leq \tau_n u \sum_{k = 1}^n \sum_{i = 0}^k y_i, \peTClaim{thPositiveBound} for $\tau_n$ in Example \ref{exSharpSumNearC}. \peLemma{lemPositiveBound} \pbCorolBT{corPositiveUnperfect}{Unperfect cumulative bound} If $\wrm{fl}t$ rounds to nearest in an unperfect system $\wcal{F}$, $20 n u \leq 1$, $\wvec{y} \in \wcal{F}^{n+1}$ and $y_k \geq 0$, then Equation \pRef{thPositiveBound} holds. \peCorol{corPositiveUnperfect} The next example shows that Lemma \ref{lemPositiveBound} does not apply to sums of numbers with mixed signs, and Lemma \ref{lemSignedSum} and its corollary show that the example is nearly worst possible. \pbExampleBT{exSignedSumB}{Mixed signs} If $\wrm{fl}$ rounds to nearest in a perfect system $\wcal{P}_{2,\mu}$, breaking ties upward, $x_0 := u$, $x_1 := 1$, $x_k := - 2^{1 - k} \wlr{1 + 3 u}$ for $k > 1$ and $2^n u \leq 1$ then \pbDef{thXSSA} \wfl{\sum_{k = 0}^n x_k} - \sum_{k = 0}^n x_k = 2 \wlr{1 - 2^{- n}} u = \frac{\kappa_n u}{1 - \wlr{n - 2} u} \sum_{k = 1}^{n} \wabs{\sum_{i = 0}^k x_i}, \peDef{thXSSA} for \[ 1 - u \leq \kappa_n := \frac{\wlr{1 - 2^{-n}}\wlr{1 - \wlr{n - 2} u}}{\wlr{1 - 2^{-n}} \wlr{1 + 3u} - n u} \leq 1. \pFullLink{exSignedSumB} \] \peExampleX{exSignedSumB} \pbLemmaBT{lemSignedSum}{Signed cumulative bound} If $\wrm{fl}t = \mathds{S}et{\wrm{fl}k{1},\dots,\wrm{fl}k{n}}$ rounds to nearest in a perfect system, $y_0,y_1,\dots,y_n \in \mathds R{}$ and $20 n u < 1$ then \pbTClaim{thSignedSum} \wabs{ \, \wflt{\sum_{k=0}^n y_k} - \sum_{k = 0}^n y_k \, } \leq \frac{u}{1 - \wlr{n-2}u} \sum_{k = 1}^n \wabs{\sum_{i = 0}^k y_i}.
\peTClaim{thSignedSum} \peLemma{lemSignedSum} \pbCorolBT{corSignedSum}{Unperfect signed cumulative bound} If $\wrm{fl}t = \mathds{S}et{\wrm{fl}k{1},\dots,\wrm{fl}k{n}}$ rounds to nearest in an unperfect system, $y_0,y_1,\dots,y_n \in \mathds R{}$ and $20 n u \leq 1$ then \pbTClaim{thCorSignedSum} \wabs{ \, \wflt{\sum_{k = 0}^n y_k} - \sum_{k = 0}^n y_k \, } \leq \wlr{1 + 2 n u} \frac{n \alpha}{2} + \frac{u}{1 - \wlr{n - 2} u} \sum_{k = 1}^n \wabs{\sum_{i = 0}^k y_i}. \peTClaim{thCorSignedSum} If, additionally, $u \sum_{k = 1}^n \wabs{\sum_{i = 0}^k y_i} \geq n \alpha$ then \pbTClaim{thCorSignedSumL} \wabs{ \, \wflt{\sum_{k = 0}^n y_k} - \sum_{k = 0}^n y_k \, } \leq \frac{3}{2}\wlr{1 + \frac{n u}{2}} u \sum_{k = 1}^n \wabs{\sum_{i = 0}^k y_i}. \peTClaim{thCorSignedSumL} \peCorol{corSignedSum} \subsection{Dot products} \label{secDot} This section presents bounds on the errors in the numerical evaluation of dot products. These bounds are derived from the ones for sums presented in Section \ref{secOneNorm}. This derivation is possible because some of our previous bounds apply to general real numbers, and a numerical dot product is simply a numerical sum of real numbers, which may or may not be floating point numbers. If our analysis of sums were restricted to floating point numbers then the extensions presented here would be harder to derive. For example, the next corollaries follow directly from Lemma \ref{lemNormOneBound} and Definition \ref{longDefFmaDot} of numerical dot products using fused multiply adds (these corollaries are proved in the extended version of the article): \pbCorolBT{corFmaPerfect}{Dot prod.
with fma} If $\wrm{fl}t = \mathds{S}et{\wrm{fl}k{0},\dots,\wrm{fl}k{n}}$ rounds to nearest in a perfect system, $20 n u \leq 1$ and $\wvec{x},\wvec{y} \in \wrn{n+1}$ then \pbTClaim{thSharpFmaDotNear} \wabs{ \, \wfc{\wrm{fma}_{\wrm{fl}t}}{\sum_{k = 0}^n x_k y_k} - \sum_{k = 0}^n x_k y_k \, } \leq \frac{\wlr{n + 1} u}{1 + \wlr{n + 1} u} \sum_{k = 0}^n \wabs{x_k y_k}. \peTClaim{thSharpFmaDotNear} \peFullCorol{corFmaPerfect} \pbCorolBT{corFmaUnperfect}{Unperfect dot prod. with fma} If $\wrm{fl}t = \mathds{S}et{\wrm{fl}k{0},\dots,\wrm{fl}k{n}}$ rounds to nearest in an unperfect system, $20 n u \leq 1$ and $\wvec{x},\wvec{y} \in \wrn{n+1}$ then \pbTClaim{thFmaUnperfect} \wabs{ \, \wfc{\wrm{fma}_{\wrm{fl}t}}{\sum_{k = 0}^n x_k y_k} - \sum_{k = 0}^n x_k y_k \, } \leq \wlr{n + 1} \frac{\alpha}{2} + \frac{\wlr{n + 1} u}{1 + \wlr{n + 1} u} \wlr{\frac{\wlr{n + 1} \alpha}{2} + \sum_{k = 0}^n \wabs{x_k y_k}}. \peTClaim{thFmaUnperfect} If, additionally, $ u \sum_{k = 0}^n \wabs{x_k y_k} \geq \alpha$ then Equation \pRef{thSharpFmaDotNear} holds. \peFullCorol{corFmaUnperfect} When we evaluate dot products rounding each product $x_k y_k$, the bounds are slightly worse, but they can still be obtained with the theory in Section \ref{secOneNorm}: \pbCorolBT{corDotPerfect}{Dot prod.} If $\wrm{fl}t = \mathds{S}et{\wrm{fl}k{1},\dots,\wrm{fl}k{n}}$ and $\wrm{R} = \mathds{S}et{\wrm{r}_0,\dots,\wrm{r}_n}$ round to nearest in a perfect system, $20 n u \leq 1$ and $\wvec{x},\wvec{y} \in \wrn{n+1}$ then \pbTClaim{thSharpDotNear} \wabs{ \, \wfc{\wrm{dot}_{\wrm{fl}t,\wrm{R}}}{\sum_{k = 0}^n x_k y_k} - \sum_{k = 0}^n x_k y_k \, } \leq \beta_n u \sum_{k = 0}^n \wabs{x_k y_k} \leq \wlr{n + 1} u \sum_{k = 0}^n \wabs{x_k y_k}, \peTClaim{thSharpDotNear} where \[ \beta_n := \frac{n + 1 + 3 n u}{1 + \wlr{n + 1} u + n u^2} \leq \frac{n + 1}{1 + n u/2} \hspace{1cm} \wrm{and} \hspace{1cm} \beta_n \leq \frac{n + 1}{1 + \wlr{n - 3} u}.
\pFullLink{corDotPerfect} \] \peCorolX{corDotPerfect} \pbCorolBT{corDotIEEE}{IEEE dot prod.} If $\wrm{fl}t = \mathds{S}et{\wrm{fl}k{1},\dots,\wrm{fl}k{n}}$ and $\wrm{R} = \mathds{S}et{\wrm{r}_0,\dots,\wrm{r}_n}$ round to nearest in an IEEE system, $20 n u \leq 1$ and $\wvec{x},\wvec{y} \in \wrn{n+1}$ then \pbTClaim{thSharpDotNearIEEEB} \wabs{ \, \wfc{\wrm{dot}_{\wrm{fl}t,\wrm{R}}}{\sum_{k = 0}^n x_k y_k} - \sum_{k = 0}^n x_k y_k \, } \leq 1.05 \wlr{n + 1} \frac{\alpha}{2} + \beta_n u \sum_{k = 0}^n \wabs{x_k y_k}, \peTClaim{thSharpDotNearIEEEB} for $\beta_n$ in Corollary \ref{corDotPerfect}. If, additionally, $u \sum_{k = 0}^n \wabs{x_k y_k} \geq \alpha$ then \[ \wabs{\, \wfc{\wrm{dot}_{\wrm{fl}t,\wrm{R}}}{\sum_{k = 0}^n x_k y_k} - \sum_{k = 0}^n x_k y_k \, } \leq \frac{3}{2} \wlr{n + 1} u \sum_{k = 0}^n \wabs{x_k y_k}. \pFullLink{corDotIEEE} \] \peCorolX{corDotIEEE} \pbCorolBT{corDotMPFR}{MPFR dot prod.} If $\wrm{fl}t = \mathds{S}et{\wrm{fl}k{1},\dots,\wrm{fl}k{n}}$ and $\wrm{R} = \mathds{S}et{\wrm{r}_0,\dots,\wrm{r}_n}$ round to nearest in an MPFR system, $20 n u \leq 1$ and $\wvec{x},\wvec{y} \in \wrn{n+1}$ then \pbTClaim{thSharpDotNearMPFRB} \wabs{ \, \wfc{\wrm{dot}_{\wrm{fl}t,\wrm{R}}}{\sum_{k = 0}^n x_k y_k} - \sum_{k = 0}^n x_k y_k \, } \leq \frac{\wlr{2.05 n + 1.05}\alpha}{2} + \beta_n u \sum_{k = 0}^n \wabs{x_k y_k}, \peTClaim{thSharpDotNearMPFRB} for $\beta_n$ in Corollary \ref{corDotPerfect}. If, additionally, $u \sum_{k = 0}^n \wabs{x_k y_k} \geq \alpha$ then the last equation in Corollary \ref{corDotIEEE} is satisfied. \peFullCorol{corDotMPFR} Finally, in all bounds above we can use the Cauchy--Schwarz inequality and replace the terms $\sum_{k = 0}^n \wabs{x_k y_k}$ by $\wvnorm{x}{}_2 \wvnorm{y}{}_2$. With this replacement, we can compare our bounds to the ones in \cite{DemmelA} and \cite{Neumaier}. \section{Proofs} \label{secProofs} In this section we prove our main results.
Section \ref{secAuxDefs} contains more definitions and Section \ref{secAuxLemmas} presents more lemmas. In Section \ref{secProp} we state basic results about floating point systems and rounding to nearest. We call such results ``Propositions'' because they are obvious and readers should be able to deduce them with little effort. Section \ref{secLemmas} begins with the proofs of the main lemmas, and after that we prove some of the corollaries. The extended version of the article contains the proofs of the remaining lemmas, corollaries and propositions, and the verification of the examples. \subsection{More definitions} \label{secAuxDefs} The proofs of our bounds on the errors in sums use the following definitions: \pbLongDefBT{longDefTightFunction}{Tight function} Let $\wcal{A}$ and $\wcal{B}$ be topological spaces and $\wcal{R}$ a set. A function $f: \wcal{A} \times \wcal{R} \rightarrow \wcal{B}$ is tight if for every sequence $\mathds{S}et{\wlr{a_k,r_k}, k \in \mathds N{}} \subset \wcal{A} \times \wcal{R}$ such that $\lim_{k \rightarrow \infty} a_k = a$ for some $a \in \wcal{A}$, there exist $r \in \wcal{R}$ and a subsequence $\mathds{S}et{\wlr{a_{n_k},r_{n_k}}, k \in \mathds N{}}$ with $\lim_{k \rightarrow \infty} \wfc{f}{a_{n_k},r_{n_k}} = \wfc{f}{a,r}$. \peLongDef{longDefTightFunction} \pbLongDefBT{longDefTightSet}{Tight set of functions} Let $\wcal{A}$ and $\wcal{B}$ be topological spaces and let $\wcal{R}$ be a set of functions from $\wcal{A}$ to $\wcal{B}$. We say that $\wcal{R}$ is tight if the function $f: \wcal{A} \times \wcal{R} \rightarrow \wcal{B}$ given by $\wfc{f}{a,r} := \wfc{r}{a}$ is tight. \peLongDef{longDefTightSet} \subsection{More lemmas} \label{secAuxLemmas} \pbLemmaBT{lemUNearS}{Sharp epsilons} Suppose $\wrm{fl}$ rounds to nearest in $\wcal{F}$ and $e$ is an exponent for $\wcal{F}$.
If $\wabs{z} = \beta^{e} \wlr{\beta^{\mu} + w}$ with $w \in [0, \wlr{\beta - 1}\beta^{\mu}]$ then \pbTClaim{rHat} \wfl{z} = \mathds{S}ign{z} \beta^{e}\wlr{\beta^{\mu} + r} \hspace{0.3cm} \wrm{for} \hspace{0.3cm} r \in [0, \wlr{\beta - 1} \beta^{\mu}] \cap \mathds Z{} \hspace{0.3cm} \wrm{and} \hspace{0.3cm} \wabs{r - w} \leq 1/2. \peTClaim{rHat} Moreover, \pbTClaim{uNearSA} \wabs{\frac{\wfl{z} - z}{z}} \leq \frac{u}{1 + \max \mathds{S}et{1, 2 w} u} \leq \frac{u}{1 + u} \peTClaim{uNearSA} and \[ \wabs{\frac{\wfl{z} - z}{z}} \leq \frac{u}{1 + \wlr{2 r - 1} u} \hspace{0.5cm} \wrm{and} \hspace{1cm} \wabs{\frac{\wfl{z} - z}{\wfl{z}}} \leq \frac{u}{1 + 2 r u}. \pLink{lemUNearS} \] \peLemmaX{lemUNearS} \pbLemmaBT{lemOpt}{Compactness} Let $\wcal{R}$ be a set, $\wcal{L} \subset \mathds R{} \setminus \mathds{S}et{0}$, $\wcal{A},\wcal{B} \subset \wrn{n}$, $\wcal{X} \subset \wrn{m}$ and $\wcal{K} \subset \wcal{A}$. Define $\wcal{Z} := \wcal{A} \cup \wcal{B}$. If the functions $f: \wcal{Z} \times \wcal{X} \rightarrow \mathds R{}$ and $h: \wcal{Z} \times \wcal{R} \rightarrow \wcal{X}$, the function $\wfc{g}{\wvec{z},r} := \wfc{f}{\wvec{z}, \wfc{h}{\wvec{z},r}}$ and the number $\varphi \in \mathds R{}$ are such that \begin{itemize} \item $\wcal{K}$ is compact and for $\wvec{z} \in \wcal{A}$ there exists $\lambda \in \wcal{L}$ such that $\lambda \wvec{z} \in \wcal{K}$. \item If $\lambda \in \wcal{L}$, $\wvec{z} \in \wcal{A}$, $\lambda \wvec{z} \in \wcal{K}$ and $r \in \wcal{R}$ then $\wfc{h}{\lambda \wvec{z},r'} = \lambda \wfc{h}{\wvec{z},r}$ for some $r' \in \wcal{R}$. \item $f$ is upper semi-continuous and $\wfc{f}{\lambda \wvec{z}, \lambda \wvec{x}} \geq \wfc{f}{\wvec{z},\wvec{x}}$ for $\wvec{z} \in \wcal{A}$ and $\lambda \in \wcal{L}$. \item $h$ is tight, in the sense of Definition \ref{longDefTightFunction}. \item $\wfc{g}{\wvec{z},r} \leq \varphi$ for $\wlr{\wvec{z},r} \in \wcal{B} \times \wcal{R}$.
\end{itemize} then either $\wfc{g}{\wvec{z},r} \leq \varphi$ for all $\wlr{\wvec{z},r} \in \wcal{Z} \times \wcal{R}$ or there exists $\wlr{\wvec{z}^*,r^*} \in \wcal{K} \times \wcal{R}$ such that $\wfc{g}{\wvec{z}^*,r^*} \geq \wfc{g}{\wvec{z},r}$ for all $\wlr{\wvec{z},r} \in \wcal{Z} \times \wcal{R}$. \peLemma{lemOpt} Lemma \ref{lemOpt} is a compactness argument. Its purpose is to show that either there exist examples for which the relative effects of rounding errors are the worst possible or these errors are small. It is necessary because floating point systems are infinite and we cannot take this existence for granted. The intuition behind Lemma \ref{lemOpt} is simple. The vector $\wvec{z}$ represents the input to the computation. The vector $\wfc{h}{\wvec{z},r}$ is obtained by rounding functions of $\wvec{z}$ using the rounding functions $r \in \wcal{R}$. The bad set $\wcal{B}$ represents situations like underflow or very poor scaling, and its elements are handled separately. For $\wvec{z}$ outside of the bad set, we can use scaling by powers of $\beta$ (represented by $\lambda \in \wcal{L}$) to reduce the analysis of $\wfc{f}{\wvec{z},\wvec{x}}$ to vectors $\wvec{z}$ in the compact set $\wcal{K}$. We can then deal with the discontinuity in rounding by analyzing all functions which round to nearest (represented by $\wcal{R}$) instead of a single function. In the end, as in the applications of the classic Banach-Alaoglu Theorem, by using compactness and continuity in their full generality, we can analyze the existence of maximizers for the relative effects of rounding errors. We can then exploit the implications of maximality in order to describe such maximizers precisely. \subsection{Propositions} \label{secProp} In this section we present auxiliary results about floating point systems. We believe readers will find most of them to be trivial, and they are presented only to make our arguments more precise.
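Although the propositions that follow concern abstract systems $\wcal{F}$, their flavor can be checked by brute force on a small example. The Python sketch below is ours and purely illustrative (it enumerates only a finite slice of a system, while the systems in this article are infinite): it builds the numbers $\pm\beta^{e}\wlr{\beta^{\mu} + w}$ for $\beta = 2$, $\mu = 3$ and a few exponents, rounds to nearest by exhaustive search, and checks the identity and monotonicity properties stated below, together with the relative error bound $u/\wlr{1+u}$ from Lemma \ref{lemUNearS}:

```python
import random

# Finite slice of a floating point system with base BETA and MU + 1 digits:
# values +-BETA**e * m for BETA**MU <= m < BETA**(MU + 1) and a few exponents e.
# Illustrative only; the systems in the article are infinite.
BETA, MU = 2, 3
EMIN, EMAX = -6, 3
F = sorted({s * (BETA ** e) * m
            for s in (-1, 1)
            for e in range(EMIN, EMAX + 1)
            for m in range(BETA ** MU, BETA ** (MU + 1))} | {0.0})

def fl(z):
    """Round z to a nearest element of F (ties broken towards the smaller)."""
    return min(F, key=lambda v: (abs(v - z), v))

u = BETA ** (-MU) / 2  # unit roundoff of the toy system

# Rounding to nearest is the identity on F ("Identity").
assert all(fl(x) == x for x in F)

# Monotonicity: z <= w implies fl(z) <= fl(w) ("Monotonicity").
random.seed(0)
pts = sorted(random.uniform(-2.0, 2.0) for _ in range(200))
assert all(fl(z) <= fl(w) for z, w in zip(pts, pts[1:]))

# Relative error of round to nearest in the covered range: at most u/(1 + u).
for _ in range(1000):
    z = random.uniform(BETA ** (EMIN + MU), BETA ** (EMAX + MU))
    assert abs(fl(z) - z) <= (u / (1 + u)) * abs(z) * (1 + 1e-12)
```

The tie-breaking rule in `fl` (towards the smaller candidate) is just one of the many functions which round to nearest; the propositions below hold for all of them.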
In all propositions $\beta$ is a base, $\mu$ is a positive integer, $\wcal{F}$ is a floating point system associated to $\beta$ and $\mu$, $z \in \mathds R{}$, $x \in \wcal{F}$, and $u$, $e_{\alpha}{}$, $\alpha$ and $\nu$ are the numbers related to this system in Definitions \ref{longDefEps}, \ref{longDefPerfect}, \ref{longDefMPFR}, \ref{longDefIEEE}, \ref{longDefAlpha} and \ref{longDefNu}. Finally, the function $\wrm{fl}$ rounds to nearest in $\wcal{F}$. \pbPropBT{propOrder}{Order by the exponent} Let $d$ and $e$ be integers and $v,w \in \mathds R{}$, with $v < \wlr{\beta - 1}\beta^{\mu}$ and $w \geq 0$. If $d < e$ then $\beta^{d} \wlr{\beta^{\mu} + v} < \beta^{e} \wlr{\beta^{\mu} + w}$. \peFullProp{propOrder} \pbPropBT{propNormalForm}{Normal form} If $z \in \mathds R{}$ is different from zero then there exist unique $e \in \mathds Z{}$ and $w \in [0, \wlr{\beta - 1}\beta^{\mu})$ such that $z = \mathds{S}ign{z} \beta^{e} \wlr{\beta^{\mu} + w}$. \peFullProp{propNormalForm} \pbPropBT{propIntegerForm}{Integer form} If $\wcal{F}$ is unperfect and $x \in \wcal{F}$ then there exist $e,r \in \mathds Z{}$ with $e \geq e_{\alpha}$ such that $x = \beta^e r$ and \begin{itemize} \item $r = 0$ if and only if $x = 0$. \item $0 < \wabs{r} < \beta^{\mu}$ if and only if $x$ is subnormal and $\wcal{F}$ is an IEEE system. \item $\beta^{\mu} \leq \wabs{r} < \beta^{1 + \mu}$ if and only if $\wabs{x} \in \wcal{E}_e$. \end{itemize} \peFullProp{propIntegerForm} \pbPropBT{propSymmetry}{Symmetry} $\wcal{F}$ is symmetric, that is, $x \in \wcal{F} \Leftrightarrow -x \in \wcal{F} \Leftrightarrow \wabs{x} \in \wcal{F}$. \peFullProp{propSymmetry} \pbPropBT{propNu}{The minimality of nu} Let $x$ be a floating point number. If $\wabs{x} \geq \nu$ and $x \neq 0$ then $x$ is normal, that is, there exists an exponent $e$ for $\wcal{F}$ such that $\wabs{x} \in \wcal{E}_{e}$.
If $0 < \wabs{x} < \nu$ then $\wcal{F}$ is an IEEE system $\wcal{I}_{e_{\alpha}}$ and $x$ is subnormal, that is, $\wabs{x} \in \wcal{S}_{e_{\alpha}}$. Conversely, if $e,r \in \mathds Z{}$ and $z = \beta^{e} r$ with $\wabs{r} \leq \beta^{1 + \mu}$ and $\wabs{z} \geq \nu$ then $z \in \wcal{F}$. \peFullProp{propNu} \pbPropBT{propSubnormalSum}{Subnormal sum} Let $\wcal{I}$ be an IEEE system. If $x,y \in \wcal{I}$ are subnormal then $x + y \in \wcal{I}$. \peFullProp{propSubnormalSum} \pbPropBT{propCriticalSum}{Critical sum} If $e$ is an exponent for $\wcal{F}$ and $x \in \wcal{F}$ and $z \in \mathds R{}$ are such that $\wabs{x + z} = \beta^{e} \wlr{\beta^{\mu} + r + 1/2}$ with $r \in [0,\wlr{\beta - 1} \beta^{\mu}) \cap \mathds Z{}$ then $\wabs{z} \geq \beta^{e}/2$. \peFullProp{propCriticalSum} \pbPropBT{propRoundIdentity}{Identity} If $x \in \wcal{F}$ then $\wfl{x} = x$. \peFullProp{propRoundIdentity} \pbPropBT{propRoundMonotone}{Monotonicity} If $z \leq w$ then $\wfl{z} \leq \wfl{w}$, and if $x \in \wcal{F}$ then \begin{itemize} \item $x > \wfl{z} \Rightarrow x > z$, \item $x < \wfl{z} \Rightarrow x < z$, \item $\wabs{x} > \wabs{\wfl{z}} \Rightarrow \wabs{x} > \wabs{z}$, \item $\wabs{x} < \wabs{\wfl{z}} \Rightarrow \wabs{x} < \wabs{z}$. \end{itemize} \peFullProp{propRoundMonotone} \pbPropBT{propRoundMinus}{Symmetric rounding} $\wflr{m}{z} := - \wfl{-z}$ rounds to nearest in $\wcal{F}$. \peFullProp{propRoundMinus} \pbPropBT{propRoundNormal}{Normal rounding} Let $e$ be an exponent for $\wcal{F}$. If $\wabs{z} = \beta^{e} \wlr{\beta^{\mu} + w}$, with $w \in [0, \wlr{\beta - 1} \beta^{\mu})$, then $\wfl{z} \in \mathds{S}et{a,b}$ for \[ a := \mathds{S}ign{z} \beta^{e} \wlr{\beta^{\mu} + \wfloor{w}} \in \wcal{F} \hspace{0.7cm} \wrm{and} \hspace{0.7cm} b := \mathds{S}ign{z} \beta^{e} \wlr{\beta^{\mu} + \wceil{w}} \in \wcal{F}, \] and \pbDef{thRoundNormal} \wabs{\wfl{z} - z} = \min \mathds{S}et{\wabs{z - a}, \, \wabs{z - b}} \leq \frac{\wabs{b - a}}{2} \leq \beta^e/2. 
\peDef{thRoundNormal} If $z < m := \wlr{a + b}/2$ then $\wfl{z} = \min \mathds{S}et{a,b}$, and if $z > m$ then $\wfl{z} = \max \mathds{S}et{a,b}$. In particular, if $r \in \mathds Z{}$ and $\wabs{r - w} < 1/2$ then $\wfl{z} = \mathds{S}ign{z} \beta^e \wlr{\beta^{\mu} + r}$. \peFullProp{propRoundNormal} \pbPropBT{propRoundSubnormal}{Subnormal rounding} Let $\wcal{I} = \wcal{I}_{e_{\alpha}}$ be an IEEE system. If $\wabs{z} \leq \nu$ then $\wfl{z} \in \mathds{S}et{a,b}$ for \[ a := \beta^{e_{\alpha}} \wfloor{\beta^{-e_{\alpha}} z} \in \wcal{I} \hspace{0.7cm} \wrm{and} \hspace{0.7cm} b := \beta^{e_{\alpha}} \wceil{ \beta^{-e_{\alpha}} z} \in \wcal{I} \] and \[ \wabs{\wfl{z} - z} = \min \mathds{S}et{z - a, b - z} \leq \frac{b - a}{2} \leq \alpha/2. \] If $z < m:= \wlr{a + b}/2$ then $\wfl{z} = a$, and if $z > m$ then $\wfl{z} = b$. If $r \in [-\beta^{\mu}, \beta^{\mu}) \cap \mathds Z{}$ then $\wfl{z} = \beta^{e_{\alpha}}{r}$ for $\beta^{e_{\alpha}} \wlr{r - 1/2} < z < \beta^{e_{\alpha}} \wlr{r + 1/2}$ and $\wfl{\beta^{e_{\alpha}}\wlr{ r + 1/2}} \in \mathds{S}et{ \beta^{e_{\alpha}} r, \beta^{e_{\alpha}}\wlr{r+1}}$. \peFullProp{propRoundSubnormal} \pbPropBT{propRoundBelowAlpha}{Rounding below alpha} If $\wabs{z} < \alpha/2$ then $\wfl{z} = 0$. If $\wabs{z} = \alpha/2$ then $\wfl{z} \in \mathds{S}et{0, \mathds{S}ign{z} \alpha}$ and if $\alpha/2 < \wabs{z} \leq \alpha$ then $\wfl{z} = \mathds{S}ign{z} \alpha$. In particular, if $\wabs{z} \leq \alpha$ then $\wabs{\wfl{z} - z} \leq \alpha / 2$. \peFullProp{propRoundBelowAlpha} \pbPropBT{propRoundAdapt}{Perfect adapter} Let $\wcal{P}_{\beta,\mu}$ be a perfect system and $\wcal{F}_{e_{\alpha},\beta,\mu}$ an unperfect one. If $\wrm{fl}$ rounds to nearest in $\wcal{F}$ then there exists $\wrm{fl}x$ which rounds to nearest in $\wcal{P}$ and is such that $\wflx{z} = \wfl{z}$ for $z$ with $\wabs{z} \geq \nu_{\wcal{F}}$. 
\peFullProp{propRoundAdapt} \pbPropBT{propRoundIEEEX}{IEEE adapter} If $\wrm{fl}t = \mathds{S}et{\wrm{fl}k{1},\dots,\wrm{fl}k{n}}$ rounds to nearest in the IEEE system $\wcal{I}_{e_{\alpha},\beta,\mu}$ and $\wcal{P}_{\beta,\mu}$ is a perfect system then there exists $\wrm{fl}tx = \mathds{S}et{\wrm{fl}xk{1},\dots,\wrm{fl}xk{n}}$ which rounds to nearest in $\wcal{P}$ and is such that $\wflt{\sum_{k=0}^n x_k} = \wrm{fl}txf{\sum_{k=0}^n x_k}$ for all $\wvec{x} \in \wcal{I}^{n+1}$. In particular, $\wfltx{\sum_{k=0}^n x_k} \in \wcal{I}$. \peFullProp{propRoundIEEEX} \pbPropBT{propRoundMPFRX}{MPFR adapter} If $\wrm{fl}t = \mathds{S}et{\wrm{fl}k{1},\dots,\wrm{fl}k{n}}$ rounds to nearest in the MPFR system $\wcal{M}_{e_{\alpha},\beta,\mu}$ and $\wcal{P}_{\beta,\mu}$ is a perfect system then there exists $\wrm{fl}tx = \mathds{S}et{\wrm{fl}xk{1},\dots,\wrm{fl}xk{n}}$ which rounds to nearest in $\wcal{P}$ and is such that $\wflt{\sum_{k=0}^n x_k} = \wrm{fl}txf{\sum_{k=0}^n x_k}$ for $\wvec{x} \in \wcal{M}^{n+1}$ with $\wflt{\sum_{i = 0}^{k-1} x_i} + x_k \geq 0$ for $k = 0,\dots,n$. \peFullProp{propRoundMPFRX} \pbPropBT{propFlat}{Flatness} Let $e$ be an exponent for $\wcal{F}$ and $z$ with $\wabs{z} = \beta^{e} \wlr{\beta^{\mu} + w}$ for $w \in [0,\wlr{\beta - 1} \beta^{\mu})$. On the one hand, if $w = \wfloor{w} + 1/2$ then \[ \wabs{z - y} < \beta^{e}/2 \ \Rightarrow \ \wfl{y} \in \mathds{S}et{ \mathds{S}ign{z} \beta^{e} \wlr{\beta^{\mu} + \wfloor{w}}, \, \mathds{S}ign{z} \beta^{e} \wlr{\beta^{\mu} + \wceil{w}}}. \] On the other hand, if $w - \wfloor{w} \neq 1/2$ then there exists $\delta > 0$ such that if $\wrm{fl}k{1}$ and $\wrm{fl}k{2}$ round to nearest in $\wcal{F}$ and $\wabs{y - z} < \delta$ then $\wflk{1}{y} = \wflk{2}{z}$. \peFullProp{propFlat} \pbPropBT{propSumScale}{Scaled sums} Suppose $\wrm{fl}t = \mathds{S}et{\wrm{fl}k{1},\dots,\wrm{fl}k{n}}$ rounds to nearest in a perfect system $\wcal{P}$ and $\wcal{S}umk{k}$ is the sum in Definition \ref{longDefPSum}.
If $\wvec{z} \in \wrn{n}$, $\sigma \in \mathds{S}et{-1,1}$ and $m \in \mathds Z{}$ then there exist $\wrm{fl}tx = \mathds{S}et{\wrm{fl}xk{1},\dots,\wrm{fl}xk{n}}$ which round to nearest in $\wcal{P}$ such that $\wcal{S}umkf{k}{\sigma \beta^m \, \wvec{z}, \, \wrm{fl}t} \, = \, \sigma \beta^m \, \wcal{S}umkf{k}{\wvec{z}, \, \wrm{fl}tx}$ for $k = 1,\dots,n$. \peFullProp{propSumScale} \pbPropBT{propWholeIsTight}{Whole is tight} The set of all functions which round to nearest in $\wcal{F}$ is tight. \peFullProp{propWholeIsTight} \pbPropBT{propSumsAreTight}{Sums are tight} Let $\wcal{R}$ be a tight set of functions which round to nearest in $\wcal{F}$ and $\wcal{S}umk{k}$ the sum in Definition \ref{longDefPSum}. The function $T_n: \mathds R{}^{n} \times \wcal{R}^n \rightarrow \wrn{n+1}$ given by \[ \wfc{T_n}{\wvec{z},\wrm{fl}t} := \wlr{\wcal{S}umkf{0}{\wvec{z},\wrm{fl}t}, \, \wcal{S}umkf{1}{\wvec{z},\wrm{fl}t}, \, \wcal{S}umkf{2}{\wvec{z},\wrm{fl}t}, \, \dots, \wcal{S}umkf{n}{\wvec{z},\wrm{fl}t}} \] is tight. \peFullProp{propSumsAreTight} \subsection{Lemmas} \label{secLemmas} This section presents the proofs of the lemmas other than Lemmas \ref{lemSterbenz} and \ref{lemConvexity}, which are proved in the extended version of the article.\\ \pbProofB{Lemma}{lemUNear} If $z = 0$ then $\wfl{z} = z = 0$ by Prop. \ref{propRoundIdentity} and Equation \pRef{rhoNear} holds. If $z \neq 0$ then, by Prop. \ref{propNormalForm}, $z = \mathds{S}ign{z} \beta^{e} \wlr{\beta^{\mu{}} + w}$ with $e \in \mathds Z{}$ and $w \in [0, \wlr{\beta - 1} \beta^{\mu{}})$. When $\wcal{F}$ is unperfect $\nu = \beta^{e_{\alpha} + \mu}$ and, by Prop. \ref{propOrder}, $e \geq e_{\alpha}$ because $\wabs{z} \geq \nu$. Therefore, $e$ is an exponent for $\wcal{F}$ and Lemma \ref{lemUNear} follows from Lemma \ref{lemUNearS}. Finally, Lemma \ref{lemUNear} applies to all $z$ when $\wcal{F}$ is perfect because $\nu = 0$ in this case.
\peProof{Lemma}{lemUNear}\\ \pbProofB{Lemma}{lemSmallSum} If $\wcal{F}$ is perfect then $\alpha = \nu = 0$ and Lemma \ref{lemSmallSum} holds because $0 \in \wcal{F}$. It is clear that the Lemma also holds when $x = 0$ or $y =0$, and from now on we suppose that $x,y \neq 0$ and $\wcal{F}$ is unperfect. In this case $\nu = \beta^{e_{\alpha} + \mu}$, and we can assume that $\wabs{y} \geq \wabs{x}$ because $x + y = y + x$. Moreover, $x + y \in \wcal{F} \Leftrightarrow -(x + y) \in \wcal{F}$ by Prop. \ref{propSymmetry} and we can also assume that \pbDef{SNSA} \alpha \leq x + y \leq \beta \nu = \beta^{1 + e_{\alpha} + \mu} \hspace{1cm} \wrm{and} \hspace{1cm} y > 0. \peDef{SNSA} If $y$ is subnormal then $0 < \wabs{x} \leq y < \nu$, $x$ is also subnormal by Prop. \ref{propNu} and Lemma \ref{lemSmallSum} follows from Prop. \ref{propSubnormalSum}. Therefore, we can assume that $y$ is normal, that is, \pbDef{SNSY} y = \beta^{e_{\alpha} + e} \wlr{\beta^{\mu} + r_y} \hspace{0.5cm} \wrm{with} \hspace{0.5cm} e \geq 0 \hspace{0.5cm} \wrm{and} \hspace{0.5cm} r_y \in [0,\wlr{\beta - 1} \beta^{\mu}) \cap \mathds Z{}. \peDef{SNSY} On the one hand, if $x > 0$ then Equation \pRef{SNSA} leads to $y < \beta^{1 + e_{\alpha}{} + \mu}$ and Equation \pRef{SNSY} yields $e = 0$. Prop. \ref{propIntegerForm} and the assumption $0 < \wabs{x} \leq y$ lead to \[ x = \beta^{e_{\alpha}} r_x \hspace{1cm} \wrm{with} \hspace{1cm} r_x \in \mathds Z{} \hspace{0.5cm} \wrm{and} \hspace{0.5cm} 1 \leq r_x \leq \beta^{\mu} + r_y, \] and Equations \pRef{SNSA} and \pRef{SNSY} imply that \[ x + y = \beta^{e_{\alpha}} \wlr{\beta^{\mu} + r_x + r_y} \leq \beta^{1 + e_{\alpha} + \mu} \ \ \Rightarrow x + y \geq \nu \ \ \wrm{and} \ \ r_x + r_y \leq \wlr{\beta - 1} \beta^{\mu}, \] and Prop. \ref{propNu} with $r = \beta^{\mu} + r_x + r_y$ shows that $x + y \in \wcal{F}$. On the other hand, if $x < 0$ then Prop. 
\ref{propIntegerForm} and the assumption $0 < \wabs{x} \leq y$ lead to \[ x = -\beta^{e_{\alpha} + d} r_x \hspace{0.5cm} \wrm{with} \hspace{0.5cm} 0 \leq d \leq e, \hspace{0.5cm} r_x \in \mathds Z{} \hspace{0.5cm} \wrm{and} \hspace{0.5cm} 1 \leq r_x < \beta^{1 + \mu}. \] It follows that $ x + y = \beta^{e_{\alpha} + d} \wlr{\beta^{e - d} \wlr{\beta^{\mu} + r_y} - r_x} = \beta^{e_{\alpha}} r $ for \[ r := \beta^d \wlr{\beta^{e - d} \wlr{\beta^{\mu} + r_y} - r_x} \in \mathds Z{}. \] Since $x < 0$, using \pRef{SNSA} and the identity $\nu = \beta^{e_{\alpha} + \mu}$ we deduce that \[ 0 < r = \beta^{-e_{\alpha}} \wlr{x + y} < \beta^{-e_{\alpha}} \beta \nu = \beta^{1 + \mu}. \] When $\wcal{F}$ is a MPFR system we have that $\alpha = \nu$, Equation \pRef{SNSA} implies that $x + y \geq \nu$ and the equation above and Prop. \ref{propNu} show that $x + y \in \wcal{F}$. Finally, when $\wcal{F}$ is an IEEE system we either have (i) $r \geq \beta^{\mu}$, in which case $x + y \geq \beta^{e_{\alpha} + \mu} = \nu$ and $x + y \in \wcal{F}$ by Prop. \ref{propNu}, or (ii) $r < \beta^{\mu}$, and $x + y \in \wcal{S}_{e_{\alpha}}$ is a subnormal number, which belongs to $\wcal{F}$. Therefore, $x + y \in \wcal{F}$ in all cases and we are done. \peProof{Lemma}{lemSmallSum}\\ \pbProofB{Lemma}{lemIEEESum} By Prop. \ref{propIntegerForm}, $x = \beta^{d} r$ and $y = \beta^{e} s$ for $d,e,r,s \in \mathds Z{}$ such that $d, e \geq e_{\alpha}$. It follows that $z = \beta^{e_{\alpha}} t$ for $t := \beta^{d - e_{\alpha}} r + \beta^{e - e_{\alpha}} s \in \mathds Z{}$. We have that $\wabs{t} \geq 1$ because $t \in \mathds Z{} \setminus \mathds{S}et{0}$ and $\wabs{z} = \beta^{e_{\alpha}} \wabs{t} \geq \beta^{e_{\alpha}} = \alpha$, and $z \in \wcal{F}$ by Lemma \ref{lemSmallSum}. \peProof{Lemma}{lemIEEESum}\\ \pbProofB{Lemma}{lemNormOneBound} This proof illustrates the use of optimization to bound rounding errors. 
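Before the formal argument, the inequality established in this proof, namely that the error of recursive summation of $n + 1$ terms is at most $\theta_n u \sum_{k=0}^n \wabs{y_k}$ with $\theta_n = n/\wlr{1 + n u}$, can be spot-checked in ordinary binary64 arithmetic, where each addition rounds to nearest with $u = 2^{-53}$. The sketch below is ours and purely illustrative; the exact sum and the bound are computed with Python's `fractions.Fraction` so that the comparison is exact:

```python
import random
from fractions import Fraction

u = Fraction(2) ** -53            # unit roundoff of binary64
random.seed(1)
for n in (1, 5, 50):
    y = [random.uniform(-1.0, 1.0) for _ in range(n + 1)]
    s = 0.0
    for yk in y:                  # recursive summation: n rounded additions
        s += yk                   # each += rounds to nearest in binary64
    exact = sum(Fraction(t) for t in y)
    err = abs(Fraction(s) - exact)
    theta_u = Fraction(n) * u / (1 + Fraction(n) * u)   # theta_n * u
    bound = theta_u * sum(abs(Fraction(t)) for t in y)
    assert err <= bound
```

Random summands are typically far from the worst case described in Example \ref{exSharpSum}, so the observed error is usually much smaller than the bound.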
We define $z_1 := y_0 + y_1$, $z_{k} := y_k$ for $k > 1$ and use the sums $\wcal{S}umk{k}$ in Definition \ref{longDefPSum}, the set $\wcal{R}$ of all $n$-tuples of functions which round to nearest and the function \pbDef{defEta} \wfc{\eta}{\wvec{z},\wrm{fl}t} := \sum_{k = 1}^n \wabs{\wcal{S}umkf{k}{\wvec{z},\wrm{fl}t} - \wlr{\wcal{S}umkf{k-1}{\wvec{z},\wrm{fl}t} + z_k}}, \peDef{defEta} from $\wrn{n} \times \wcal{R}$ to $\mathds R{}$. We show that Example \ref{exSharpSum} is the worst case for the ratio \pbDef{ssNQ} \wfc{q_n}{\wvec{z},\wrm{fl}t} := \frac{\wfc{\eta}{\wvec{z},\wrm{fl}t}}{\sum_{k = 1}^n \wabs{z_k}}. \peDef{ssNQ} This ratio is related to Equation \pRef{normOneBound} because \[ \wabs{\wflt{\sum_{k = 0}^n y_k} - \sum_{k = 0}^n y_k} = \wabs{\sum_{k = 1}^n \wlr{\wcal{S}umkf{k}{\wvec{z},\wrm{fl}t} - \wcal{S}umkf{k-1}{\wvec{z},\wrm{fl}t} - z_k}} \leq \wfc{\eta}{\wvec{z},\wrm{fl}t} \] and \[ \wabs{\wflt{\sum_{k = 0}^n y_k} - \sum_{k = 0}^n y_k} \leq \wfc{q_n}{\wvec{z},\wrm{fl}t} \sum_{k=1}^n \wabs{z_k} \leq \wfc{q_n}{\wvec{z},\wrm{fl}t} \sum_{k=0}^n \wabs{y_k}. \] Therefore, to prove Lemma \ref{lemNormOneBound} it suffices to show that \pbTClaim{ssNT} \sup_{\wlr{\wvec{z},\wrm{fl}t} \in \wlr{\wrn{n} \setminus \mathds{S}et{0}} \times \wcal{R}} \wfc{q_n}{\wvec{z},\wrm{fl}t} = \theta_n u \hspace{1cm} \wrm{for} \hspace{1cm} \theta_n := \frac{n}{1 + n u}. \peTClaim{ssNT} The ratio $q_n$ can be written as $\wfc{q_n}{\wvec{z},\wrm{fl}t} = \wfc{f}{\wvec{z},\wfc{h}{\wvec{z},\wrm{fl}t}}$ for \[ \wfc{h}{\wvec{z},\wrm{fl}t} := \wlr{\wcal{S}umkf{0}{\wvec{z},\wrm{fl}t}, \wcal{S}umkf{1}{\wvec{z},\wrm{fl}t},\dots,\wcal{S}umkf{n}{\wvec{z},\wrm{fl}t}} \in \wrn{n+1} \] and \[ \wfc{f}{\wvec{z},\wvec{x}} := \frac{\sum_{k = 1}^n \wabs{x_k - \wlr{x_{k-1} + z_k}}}{\sum_{k = 1}^n \wabs{z_k}}. \] The function $f$ is continuous for $\wvec{z} \neq 0$ and satisfies $\wfc{f}{\lambda \wvec{z},\lambda \wvec{x}} = \wfc{f}{\wvec{z},\wvec{x}}$ for $\lambda \neq 0$, and Prop.
\ref{propSumScale}, \ref{propWholeIsTight} and \ref{propSumsAreTight} show that $h$ satisfies the requirements of Lemma \ref{lemOpt}. We can then apply this Lemma to prove that either (i) $\sup q_n \leq \theta_n u$ or (ii) $q_n$ has a maximizer $\wlr{\wvec{z}^*,\wrm{fl}t^*}$. In case (ii) we use the properties of this maximizer to prove that it is no worse than what is described in Example \ref{exSharpSum}. For instance, this example tells us that $\wfc{q_n}{\wvec{z}^*,\wrm{fl}t^*} \geq \theta_n u$ and if $q_n$ has a partial derivative with respect to $z_k$ at $\wvec{z}^*$ then this derivative is zero. We prove Equation \pRef{ssNT} by induction. For $n = 1$, this equation follows from Lemma \ref{lemUNear}. Let us then assume that $n > 1$ and Equation \pRef{ssNT} is valid for $\wvec{z} \in \wrn{m}$ and rounding tuples $\wrm{fl}t = \mathds{S}et{\wrm{fl}k{1},\dots,\wrm{fl}k{m}}$ when $m < n$ and show that it also holds for $n$. To apply Lemma \ref{lemOpt}, let us define the numbers \pbDef{ssA} a := \frac{1 + \wlr{1 + 2u} \theta_{n-1} - \wlr{1 + u} \theta_n}{1 + u} = \frac{\wlr{n - 1} \wlr{3 + 2 n u} u }{\wlr{1 + n u} \wlr{1 + \wlr{n-1} u} \wlr{1 + u}} > 0 \peDef{ssA} (recall that $n \geq 2$) and \pbDef{ssB} b := \theta_n - \theta_{n-1} = \frac{1}{\wlr{1 + n u} \wlr{1 + \wlr{n-1} u}} > 0, \peDef{ssB} and split $\wrn{n} \setminus \mathds{S}et{\wvec{0}}$ as the union of the set \pbDef{ssDefD} \wcal{B} := \mathds{S}et{ \wvec{z} \in \wrn{n} \ \ \wrm{with} \ \ b \sum_{k = 2}^n \wabs{z_k} > a \wabs{z_1} \ } \peDef{ssDefD} and the cone $\wcal{A} := \mathds{S}et{\lambda \wvec{z}, \ \wrm{with} \ \wvec{z} \in \wcal{K}, \ \lambda \in \mathds R{} \setminus \mathds{S}et{0}}$, for \pbDef{ssDefK} \wcal{K} := \mathds{S}et{ \wvec{z} \in \wrn{n} \ \ \wrm{with} \ \ 2/3 \leq z_1 \leq 2 \beta / 3 \ \ \wrm{and} \ \ b \sum_{k = 2}^n \wabs{z_k} \leq a z_1 \ }. \peDef{ssDefK} We claim that $\wfc{q_n}{\wvec{z},\wrm{fl}t} \leq \theta_n u$ for $\wvec{z} \in \wcal{B}$ and $\wrm{fl}t \in \wcal{R}$.
In fact, writing $\hat{s}_k := \wfc{S_k}{\wvec{z},\wrm{fl}t}$ and $s_k := \sum_{i = 1}^k z_i$ for $k = 0,\dots,n$ and using Equation \pRef{ssNT} with $\wrm{fl}tx := \mathds{S}et{\wrm{fl}k{2}, \dots, \wrm{fl}k{n}}$ and $\tilde{\wvec{z}} := \wlr{\hat{s}_1 + z_2, z_3, \dots,z_n}$ we obtain by induction that \pbDef{ssTemp} \sum_{k = 2}^n \wabs{\hat{s}_k - \hat{s}_{k-1} - z_k} \leq \theta_{n-1} u \wlr{ \wabs{\hat{s}_1 + z_2} + \sum_{k = 3}^n \wabs{z_k}}. \peDef{ssTemp} Keeping in mind that $z_1 = s_1$, we have that \[ \wabs{\hat{s}_1 - s_1} + \sum_{k = 2}^n \wabs{\hat{s}_k - \hat{s}_{k-1} - z_k} \leq \wlr{\wabs{\hat{s}_1 - s_1} + \theta_{n-1} u \wabs{\hat{s}_1}} + \theta_{n-1} u \sum_{k = 2}^n \wabs{z_k}, \] and Lemma \ref{lemUNear}, the definitions \pRef{defEta}, \pRef{ssA} and \pRef{ssB} of $\eta$, $a$ and $b$ and $\hat{s}_0 = 0$ yield \[ \wfc{\eta}{\wvec{z},\wrm{fl}t} = \sum_{k = 1}^n \wabs{\hat{s}_k - \hat{s}_{k-1} - z_k} \leq \wlr{1 + \theta_{n-1} \wlr{1 + 2 u} } \frac{u \wabs{s_1}}{1+ u} + \theta_{n-1} u \sum_{k=2}^n \wabs{z_k} \] \[ = u \wlr{\wlr{1 + \wlr{1 + 2 u} \theta_{n-1} - \wlr{1 + u} \theta_n} \frac{\wabs{z_1}}{1 + u} - \wlr{\theta_n - \theta_{n-1}} \sum_{k = 2}^n \wabs{z_k}} + \theta_n u \sum_{k = 1}^n \wabs{z_k} \] \[ = u \wlr{a \wabs{z_1} - b \sum_{k = 2}^n \wabs{z_k}} + \theta_n u \sum_{k = 1}^n \wabs{z_k}. \] The definitions \pRef{ssNQ} and \pRef{ssDefD} of $q_n$ and $\wcal{B}$ and this equation imply that $\wfc{q_n}{\wvec{z},\wrm{fl}t} \leq \theta_n u$ for $\wvec{z} \in \wcal{B}$ and $\wrm{fl}t \in \wcal{R}$, as claimed. As a result, Lemma \ref{lemOpt} shows that either (i) the supremum of $q_n$ is at most $\theta_n u$ or (ii) there exist $\wvec{z}^* \in \wcal{K}$ and $\wrm{fl}t{}^* \in \wcal{R}$ with \[ \wfc{q_n}{\wvec{z}^*,\wrm{fl}t{}^*} = \sup_{\wlr{\wvec{z},\wrm{fl}t{}} \in \wlr{\wrn{n} \setminus \mathds{S}et{\wvec{0}}} \times \wcal{R}} \wfc{q_n}{\wvec{z},\wrm{fl}t{}}.
\] In case (i) we are done and we now analyze case (ii). Let us define $\hat{s}^*_k := \wcal{S}umkf{k}{\wvec{z}^*,\wrm{fl}t^*}$, and $s^*_k := \sum_{i = 1}^k z^*_i$, for $k = 0,\dots, n$. Since $\wvec{z}^* \in \wcal{K}$, the definitions of $a$ and $b$ lead to \[ \sum_{k=2}^n \wabs{z_k^*} \leq \frac{\wlr{n - 1} \wlr{3 + 2 n u}}{1 + u} u z^*_1. \] Using Lemma \ref{lemUNear}, the hypothesis $20 n u \leq 1$ and induction we deduce that \[ \wabs{\hat{s}^*_k - z^*_1} \leq \wabs{\hat{s}^*_k - \wlr{ \wlr{\hat{s}^*_1 + z^*_2} + \sum_{i=3}^k z^*_i}} + \wabs{\hat{s}^*_1 - s^*_1} + \sum_{i = 2}^k \wabs{z^*_i} \] \[ \leq \frac{\wlr{n - 1} u}{1 + \wlr{n-1} u} \wlr{\wabs{\hat{s}_1^* + z_2^*} + \sum_{i=3}^n \wabs{z_i^*}} + \frac{u}{1 + u} z^*_1 + \frac{\wlr{n - 1} \wlr{3 + 2 n u}}{1 + u} u z^*_1 \] \[ \leq \wlr{\frac{n - 1}{1 + \wlr{n-1} u} \wlr{1 + 2 u + \wlr{n - 1} \wlr{3 + 2 n u} u} + 1 + \wlr{n - 1} \wlr{3 + 2 n u}} \frac{u}{1 + u} z^*_1 \] and, since $s^*_1 = z^*_1$ and $20 n u \leq 1$, \pbDef{ssSk} \wabs{\hat{s}_k^* - z_1^*} \leq \kappa n u z_1^* \leq \kappa z_1^*/20, \peDef{ssSk} for \[ \kappa := \wlr{\frac{1}{1 + n u} \wlr{1 + \wlr{3 + 2 n u} n u} + 3 + 2 n u} \frac{1}{1 + u} \] \pbDef{ssTheta} \leq \frac{1}{1 + \frac{1}{20}} \wlr{1 + \frac{1}{20} \wlr{3 + \frac{1}{10}}}+ 3 + \frac{1}{10} = \frac{21}{5}. \peDef{ssTheta} Since $2/3 \leq z^*_1 \leq 2 \beta /3$, Equations \pRef{ssSk} and \pRef{ssTheta} lead to \[ \frac{1}{\beta} \leq \frac{1}{2} < \frac{158}{300} \leq \frac{79}{100} z^*_1 \leq \hat{s}^*_k \leq \frac{121}{100} z^*_1 \leq \frac{121}{150}\beta < \beta \] for $1 < k \leq n$, and since $\hat{s}_1^* = \wflk{1}{z^*_1}$ and $2/3 \leq z^*_1 \leq 2 \beta / 3$ this equation also holds for $k = 1$. Monotonicity (Prop. \ref{propRoundMonotone}) and the fact that $\hat{s}^*_k = \wflk{k}{\hat{s}^*_{k-1} + z^*_k}$ lead to \pbDef{ssNZS} 1/\beta < \hat{s}^*_{k-1} + z^*_k < \beta \ \ \wrm{for} \ \ 1 \leq k \leq n.
\peDef{ssNZS} We now explore the implications of $\wlr{\wvec{z}^*,\wrm{fl}t^*}$ being a maximizer of $q_n$. Example \ref{exSharpSum} shows that $\wfc{q_n}{\wvec{z}^*,\wrm{fl}t{}^*} \geq \theta_n u$ and this implies that $z^*_k \neq 0$ for all $k$, because if $z^*_k = 0$ for some $k$ then we would have $\wfc{q_n}{\wvec{z}^*,\wrm{fl}t{}^*} = \wfc{q_{n-1}}{\tilde{\wvec{z}},\wrm{fl}tx}$ for $\tilde{\wvec{z}} \in \wrn{n-1}$ and $\wrm{fl}tx$ obtained by removing the $k$th coordinate of $\wvec{z}^*$ and $\wrm{fl}k{k}$ from $\wrm{fl}t^*$, and $\wfc{q_{n-1}}{\tilde{\wvec{z}},\wrm{fl}tx} \leq \theta_{n-1} u < \theta_n u$, contradicting the maximality of $\wlr{\wvec{z}^*,\wrm{fl}t^*}$. Therefore, $z^*_k \neq 0$ for $k = 1,\dots, n$, and the denominator of $q_n$ has nonzero partial derivatives at $\wvec{z}^*$. Equation \pRef{ssNZS} shows that $\hat{s}^*_{k-1} + z^*_k \neq 0$, and Prop. \ref{propFlat} implies that the numerator of $q_n$ will have a zero partial derivative with respect to $z_k$ if $\hat{s}^*_{k-1} + z^*_k$ is not of the form \pbDef{ssMid} \hat{s}^*_{k-1} + z^*_k = \beta^{e_k} \wlr{\beta^{\mu} + r_k + 1/2} \hspace{0.7cm} \wrm{with} \hspace{0.7cm} e_k \in \mathds Z{} \ \ \wrm{and} \ \ r_k \in [0, \wlr{\beta - 1} \beta^{\mu}) \cap \mathds Z{}, \peDef{ssMid} and this would imply that the derivative of $q_n$ is well defined and different from zero. By the maximality of $\wlr{\wvec{z}^*,\wrm{fl}t^*}$, we conclude that Equation \pRef{ssMid} is valid. Combining this equation with Equation \pRef{ssNZS} we conclude that we can write $\mathds{S}et{1,2,\dots,n} = \wcal{L} \cup \wcal{H}$ (for low and high) so that the exponents $e_k$ in Equation \pRef{ssMid} are $e_k = - \mu -1$ for $k \in \wcal{L}$ and $e_k = -\mu$ for $k \in \wcal{H}$.
Since $\beta^{-\mu}/2 = u$, this leads to \begin{eqnarray} \nonumber k \in \wcal{L}\ & \Rightarrow & \ \frac{1 + u}{\beta} \leq \hat{s}^*_{k-1} + z^*_k \leq \frac{\beta - u}{\beta},\\ \nonumber k \in \wcal{H}\ & \Rightarrow & \ 1 + u \leq \hat{s}^*_{k-1} + z^*_k \leq \beta - u. \end{eqnarray} As a result, Prop. \ref{propCriticalSum} implies that \pbDef{ssZlh} k \in \wcal{L} \Rightarrow \wabs{z^*_k}\geq u/\beta \hspace{1cm} \wrm{and} \hspace{1cm} k \in \wcal{H} \Rightarrow \wabs{z^*_k}\geq u, \peDef{ssZlh} and Prop. \ref{propRoundNormal} yields \pbDef{ssEk} k \in \wcal{L} \Rightarrow \wabs{\hat{s}_k^* - \wlr{\hat{s}_{k-1}^* + z^*_k}} = u/\beta \hspace{0.5cm} \wrm{and} \hspace{0.5cm} k \in \wcal{H} \Rightarrow \wabs{\hat{s}_k^* - \wlr{\hat{s}_{k-1}^* + z^*_k}} = u. \peDef{ssEk} We now show that if $1 \in \wcal{L}$ then we obtain a contradiction to the maximality of $\wlr{\wvec{z}^*,\wrm{fl}t^*}$. Indeed, let $m \in [1,n]$ be the largest index such that $k \in \wcal{L}$ for all $1 \leq k \leq m$. If $m = n$ then $k \in \wcal{L}$ for all $k \in [1,n]$ and the inequality $z_1^* \geq 2 / 3$ and Equations \pRef{ssZlh} and \pRef{ssEk} and the fact that $2 \beta / 3 - u > 1$ imply that \[ \wfc{q_n}{\wvec{z}^*,\wrm{fl}t^*} / \wlr{\theta_n u} \leq \frac{\frac{n u / \beta}{2/3 + \wlr{n - 1} u / \beta}}{\frac{n u}{1 + n u}} = \frac{1 + n u}{\wlr{\frac{2 \beta}{3} - u} + n u} < 1, \] and this contradicts the maximality of $\wlr{\wvec{z}^*,\wrm{fl}t^*}$. For $m < n$ we have \[ \sum_{k = 1}^{m} \wabs{z^*_k} \geq \sum_{k = 1}^{m} z^*_k = \wlr{\hat{s}^*_{m} + z^*_{m+1}} - \wlr{\sum_{k=1}^{m} \wlr{\hat{s}^*_{k} - \wlr{\hat{s}^*_{k-1} + z^*_k}}} - z^*_{m+1} \] \[ \geq \wlr{1 + u} - \wlr{m u / \beta} - \wabs{z^*_{m+1}}. \] Let $\ell$ be the size of $\wcal{L}$ and $h$ the size of $\wcal{H}$.
Equations \pRef{ssZlh} and \pRef{ssEk}, the identity $n = \ell + h$ and the hypothesis $20 n u \leq 1$ lead to \[ \wfc{q_n}{\wvec{z}^*,\wrm{fl}t^*} - \theta_n u \leq \frac{\ell u / \beta + h u}{1 + u - m u / \beta - \wabs{z^*_{m+1}} + \wlr{\ell - m} u / \beta + \wabs{z^*_{m+1}} + \wlr{h-1} u} - \frac{n u}{1 + n u} \] \[ = - u \frac{\xi} {\wlr{1 + n u}\wlr{\beta - 2 m u + \ell u + \beta h u}}, \] for \[ \xi := \wlr{\beta - 1} \ell - 2 h m u - 2 \ell m u = \ell \wlr{\wlr{\beta - 1} - \wlr{\frac{m}{\ell}} \wlr{2 h u} - 2 m u} \geq 0.8 \ell > 0, \] and, again, $\wfc{q_n}{\wvec{z}^*,\wrm{fl}t^*} < \theta_n u$. Therefore, by the maximality of $\wlr{\wvec{z}^*,\wrm{fl}t^*}$ we must have $z_1^* \geq 1$, and Equation \pRef{ssMid} shows that $z_1^* \geq 1 + u$ and Equations \pRef{ssZlh} and \pRef{ssEk} lead to \pbDef{nobL} \wfc{q_n}{\wvec{z}^*,\wrm{fl}t^*} \leq \frac{\ell u / \beta + h u}{1 + u + \ell u / \beta + \wlr{h - 1} u} = \frac{\ell u / \beta + h u}{1 + \ell u / \beta + h u}. \peDef{nobL} Since $n = \ell + h$, $\theta_n = \wlr{\ell + h}/\wlr{1 + \wlr{\ell + h} u}$ and \[ \frac{\ell + h}{1 + \wlr{\ell + h} u} - \frac{\ell / \beta + h}{1 + \ell u / \beta + h u} = \frac{\wlr{\beta - 1} \ell}{\wlr{1 + \wlr{\ell + h} u}\wlr{\beta + \ell u + \beta h u}} \geq 0, \] Equation \pRef{nobL} implies that $\wfc{q_n}{\wvec{z}^*,\wrm{fl}t^*} \leq \theta_n u$ and we are done. \peProof{Lemma}{lemNormOneBound}\\ \pbProofB{Lemma}{lemPositiveBound} Let us define $z_1 := y_0 + y_1$ and $z_{k} := y_k$ for $k > 1$. Using Lemma \ref{lemUNear} and induction on $n$ we can show that \[ \wcal{S}umkf{n}{\wvec{z},\wrm{fl}t} \geq \sum_{k=1}^n \wlr{1 + u}^{-\wlr{n - k + 1}} z_k = \frac{1}{1 + u} \sum_{k = 1}^{n} \wlr{1 + u}^{-\wlr{n - k}} z_k.
\] The convexity of the functions $\wlr{1 + u}^{-\wlr{n - k}}$, which have value $1$ and derivative $- \wlr{n - k}$ at $u = 0$, leads to \[ \wcal{S}umkf{n}{\wvec{z},\wrm{fl}t} \geq \frac{1}{1 + u} \wlr{\sum_{k=1}^n z_k - u \sum_{k = 1}^{n} \wlr{n - k} z_k} \] \[ = \frac{1}{1 + u} \wlr{\wlr{1 + u} \sum_{k=1}^n z_k - u \sum_{k = 1}^{n} \wlr{n - k + 1} z_k} = \sum_{k=1}^n z_k - \frac{u}{1+u} \sum_{k=1}^n \wlr{n - k + 1} z_k, \] and the lower bound in Equation \pRef{thPositiveBound} follows from the identities \[ \sum_{i = 0}^k y_i = \sum_{i=1}^k z_i, \hspace{0.6cm} \wflt{\sum_{k=0}^{n} y_k} = \wcal{S}umkf{n}{\wvec{z},\wrm{fl}t} \hspace{0.6cm} \wrm{and} \hspace{0.6cm} \sum_{k = 1}^n \sum_{i = 0}^k y_i = \sum_{k = 1}^n \wlr{n - k + 1} z_k. \] In order to prove the second inequality in Equation \pRef{thPositiveBound}, we proceed as in the proof of Lemma \ref{lemNormOneBound} (see the first two paragraphs of that proof). This time we consider only the rounding tuple $\wrm{fl}t := \mathds{S}et{\wrm{fl},\dots,\wrm{fl}}$ where $\wrm{fl}$ rounds to nearest and breaks all ties upward, because our function \pbDef{snbQ} \wfc{q_n}{\wvec{z}} := \frac{\wfc{\eta}{\wvec{z}}}{\sum_{k = 1}^n \wlr{n - k + 1} z_k} \peDef{snbQ} for \pbDef{ssnbNum} \wfc{\eta}{\wvec{z}} := \wcal{S}umkf{n}{\wvec{z},\wrm{fl}t} - \sum_{k=1}^n z_k = \sum_{k=1}^n \wlr{\wcal{S}umkf{k}{\wvec{z},\wrm{fl}t} - \wcal{S}umkf{k-1}{\wvec{z},\wrm{fl}t} - z_k} \peDef{ssnbNum} is clearly maximized by the rounding tuple $\wrm{fl}t$ for which all ties are broken upward. We prove by induction that \pbDef{snbUpper} \wfc{q_n}{\wvec{z}} \leq \tau_n u := \frac{u}{1 + u \wlr{\frac{\beta - 2}{\beta - 1} + \frac{n}{\beta^{n} - 1}}}. \peDef{snbUpper} For $n = 1$ Equation \pRef{snbUpper} follows from Lemma \ref{lemUNear}.
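As an aside, both bounds above can be exercised numerically. The sketch below is an independent sanity check, not part of the proof: it assumes a toy perfect system with $\beta = 2$ and $\mu = 8$, so that $u = \beta^{-\mu}/2$, and a hypothetical helper \texttt{fl} modelling rounding to nearest with ties broken upward; it checks the lower bound in Equation \pRef{thPositiveBound} and the upper bound \pRef{snbUpper} on random nonnegative inputs with exact rational arithmetic.

```python
# A minimal numerical sanity check (not part of the proof): toy "perfect"
# binary system with beta = 2, mu = 8, so u = beta^-mu / 2, and a
# hypothetical fl() that rounds to nearest breaking ties upward.
from fractions import Fraction
import math
import random

beta, mu = 2, 8
B = Fraction(beta)
u = Fraction(1, 2 * beta**mu)

def fl(x):
    """Round x >= 0 to nearest (ties up): beta^e * r with r an integer."""
    if x == 0:
        return x
    e = 0
    while x >= B**(e + mu + 1):
        e += 1
    while x < B**(e + mu):
        e -= 1
    return B**e * math.floor(x / B**e + Fraction(1, 2))

random.seed(0)
n = 10                     # 20 * n * u = 200/512 <= 1, as the lemma requires
tau_n_u = u / (1 + u * (Fraction(beta - 2, beta - 1) + Fraction(n, beta**n - 1)))
for _ in range(100):
    z = [Fraction(random.randint(1, 10**4)) for _ in range(n)]
    s = Fraction(0)
    for zk in z:           # recursive summation: S_k = fl(S_{k-1} + z_k)
        s = fl(s + zk)
    D = sum((n - i) * z[i] for i in range(n))    # sum of (n - k + 1) z_k
    assert s - sum(z) <= tau_n_u * D             # upper bound (snbUpper)
    assert sum(z) - s <= u / (1 + u) * D         # lower bound (thPositiveBound)
print("both bounds hold on all samples")
```

Exact rationals are used so that no rounding other than the modelled one occurs; the names \texttt{fl} and \texttt{tau\_n\_u} are illustrative, not from the article.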
Let us then assume it holds for $n - 1$ and prove it for $n$ using Lemma \ref{lemOpt} to show that either Equation \pRef{snbUpper} holds or there exists a maximizer for $q_n$, which we then analyze. With this purpose, define \pbDef{snbCAB} a := \frac{1 + \wlr{n - 1} \wlr{1 + 2 u} \tau_{n - 1} - n \wlr{1 + u} \tau_{n}}{1 + u} \hspace{0.5cm} \wrm{and} \hspace{0.5cm} b := \tau_{n} - \tau_{n-1}. \peDef{snbCAB} In order to prove that $a$ and $b$ are positive, note that \[ \tau_n = \frac{1}{1 + u \phi_n} \hspace{0.5cm} \wrm{and} \hspace{0.5cm} \tau_{n-1} := \frac{1}{1 + u \wlr{\phi_n + \delta_n}} \hspace{0.5cm} \wrm{for} \hspace{0.5cm} \phi_n := \frac{\beta - 2}{\beta - 1} + \frac{n}{\beta^{n} - 1} \] and \[ \delta_n := \frac{n-1}{\beta^{n-1} - 1} - \frac{n}{\beta^n - 1} = \frac{n \wlr{\beta - 1} - \wlr{\beta - \beta^{1 - n}}}{\beta^{n} \wlr{1 - \beta^{1 - n}}\wlr{1 - \beta^{-n}}} > 0. \] For $\beta,n \geq 2$ we have that $\delta_n > 0$, and the positivity of $\delta_n$ implies that \[ b = \tau_n - \tau_{n-1} = u \delta_n \tau_{n-1} \tau_n > 0. \] For $n = 2$ the software Mathematica shows that \[ a = u \frac{\beta - 1 + u \wlr{\beta - 2}}{\wlr{1 + u}^2 \wlr{\beta + 1 + \beta u}} > 0. \] Mathematica also shows that when $\beta = 2$ \[ a = u \frac{\wlr{2^{n} \wlr{n - 2} + 2}\wlr{2^n - n - 1}}{\wlr{1 + u} \wlr{2^n - 1 + n u} \wlr{2^n - 2 + 2 \wlr{n - 1} u}}, \] which is positive for $n \geq 3$. For $\beta = 3$ we have \[ a = \frac{u \wlr{u + 2} \wlr{3^n - 2 n - 1} \wlr{\wlr{2 n - 3} 3^{n} + 3}} {\wlr{1 + u} \wlr{2 \wlr{3^n - 1} + u \wlr{3^n + 2 n -1}}\wlr{2 \times 3^{n} - 6 + u \wlr{3^n + 6 n - 9 } }}, \] which is also positive for $n \geq 3$. 
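The positivity of $a$ and $b$ can also be double-checked numerically, independently of the Mathematica computations quoted here. The sketch below is a sanity check under the hypothesis $20 n u \leq 1$, taking $u$ at its extreme value $1/(20n)$, and evaluates Equation \pRef{snbCAB} with exact rationals:

```python
# Independent numerical check of a > 0 and b > 0 in Equation (snbCAB),
# complementing the Mathematica computations quoted in the text.
from fractions import Fraction

def tau(n, beta, u):
    # tau_n from Equation (snbUpper): tau_n u = u / (1 + u phi_n)
    phi = Fraction(beta - 2, beta - 1) + Fraction(n, beta**n - 1)
    return 1 / (1 + u * phi)

checked = 0
for beta in (2, 3, 4, 5, 10, 16):
    for n in range(2, 40):
        u = Fraction(1, 20 * n)          # extreme case of 20 n u <= 1
        tn, tn1 = tau(n, beta, u), tau(n - 1, beta, u)
        a = (1 + (n - 1) * (1 + 2 * u) * tn1 - n * (1 + u) * tn) / (1 + u)
        b = tn - tn1
        assert a > 0 and b > 0, (beta, n)
        checked += 1
print(checked, "cases verified")
```

This samples only a grid of $(\beta, n)$ pairs, so it is evidence rather than a proof of positivity.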
Finally, for $\beta \geq 4$ and $n \geq 3$ \[ n \delta_n \leq n \frac{\wlr{n - 1} \wlr{\beta - 1}}{\wlr{1 - \beta^{1 - n}} \wlr{1 - \beta^{-n}}} \beta^{-n} \leq \frac{3 \times 2 \times 3 \times 4^{-3}}{\wlr{1 - 4^{-2}}\wlr{1 - 4^{-3}}} = \frac{32}{105} < 0.4, \] and the software Mathematica also shows that \[ a = \frac{\wlr{n - 3} + \wlr{1 - \wlr{n - 1 + n u} \delta_n} + \phi_n \wlr{1 + \wlr{\delta_n + \phi_n + n - 2}u} } {\wlr{1 + u} \wlr{1 + u \phi_n} \wlr{1 + u \wlr{\phi_n + \delta_n}}} \] and this number is positive for $n \geq 3$ because $\wlr{n - 1 + n u} \delta_n \leq n \delta_n \leq 0.4 < 1$. Therefore, $a$ and $b$ are positive and the set \[ \wcal{K} := \mathds{S}et{\wvec{z} \in \wrn{n}\setminus \mathds{S}et{0} \ \wrm{with} \ 2 / 3 \leq z_1 \leq 2 \beta / 3, \ \ z_k \geq 0 \ \ \wrm{and} \ b \sum_{k = 2}^{n} \wlr{n - k + 1} z_k \leq a \, z_1} \] is compact. We now split $\mathds{S}et{\wvec{x} \in \wrn{n} \setminus \mathds{S}et{\wvec{0}} \ \wrm{with} \ x_k \geq 0 }$ as the union of the set \pbDef{snbD} \wcal{B} := \mathds{S}et{\wvec{z} \in \wrn{n} \ \wrm{with} \ \ z_k \geq 0 \ \ \ \wrm{and} \ \ b \sum_{k = 2}^{n} \wlr{n - k + 1} z_k > a\, z_1} \peDef{snbD} and the cone \[ \wcal{A} := \mathds{S}et{\lambda \wvec{x} \ \wrm{with} \ \wvec{x} \in \wcal{K} \ \wrm{and} \ \lambda \in \mathds R{}, \ \lambda > 0} \] and show that $\wfc{q_n}{\wvec{z}} \leq \tau_n u $ for $\wvec{z} \in \wcal{B}$. In fact, for such $\wvec{z}$, let us write $\hat{s}_k := \wfc{S_k}{\wvec{z},\wrm{fl}t}$ for $k = 0,\dots,n$.
Using induction, Lemma \ref{lemUNear}, the definitions of $a$ and $b$, and keeping in mind that $s_1 = z_1$, we deduce that \[ \sum_{k = 1}^n \wlr{\hat{s}_k - \wlr{\hat{s}_{k-1} + z_k}} = \wlr{\hat{s}_{1} - s_{1}} + \wlr{\wlr{\hat{s}_2 - \wlr{\hat{s}_1 + z_2}} + \sum_{k = 3}^n \wlr{\hat{s}_k - \wlr{\hat{s}_{k-1} + z_k}}} \] \[ \leq \frac{u}{1 + u} z_1 + \tau_{n-1} u \wlr{ \wlr{n-1} \wlr{\hat{s}_1 + z_2} + \sum_{k = 3}^n \wlr{n - k + 1} z_k} \] \[ = \frac{u}{1 + u} z_1 + \tau_{n-1} u \wlr{ \wlr{n-1} \hat{s}_1 + \sum_{k = 2}^n \wlr{n - k + 1} z_k} \] \[ \leq \wlr{1 + \wlr{1 + 2 u} \wlr{n-1} \tau_{n-1}} \frac{u z_1}{1 + u} + \tau_{n-1} u \sum_{k = 2}^n \wlr{n - k + 1} z_k \] \[ = \wlr{1 + \wlr{1 + 2 u} \wlr{n-1} \tau_{n-1} - n \wlr{1 + u} \tau_{n}} \frac{u z_1}{1 + u} \] \[ - \wlr{\tau_{n} - \tau_{n-1}} u \sum_{k = 2}^n \wlr{n - k + 1 } z_k + \tau_{n} u \wlr{n z_1 + \sum_{k = 2}^n \wlr{n - k + 1 } z_k}, \] and it follows that \[ \wfc{\eta}{\wvec{z}} \leq \wlr{a z_1 - b \sum_{k = 2}^n \wlr{n - k + 1} z_k} u + \tau_{n} u \sum_{k = 1}^n \wlr{n - k + 1 } z_k. \] By the definition of $\wcal{B}$, the term in parentheses above is negative, and this equation shows that $\wfc{q_n}{\wvec{z}} \leq \tau_{n} u$ for $\wvec{z} \in \wcal{B}$. According to Lemma \ref{lemOpt} we have that either (i) Equation \pRef{snbUpper} holds or (ii) $q_n$ has a maximizer $\wvec{z}^* \in \wcal{K}$. In case (i) we are done, and we now suppose that there exists such $\wvec{z}^{*}$. Define $\hat{s}^*_k := \wfc{S_k}{\wvec{z}^*,\wrm{fl}t}$ for $k = 0,\dots,n$. The same argument used in the proof of Lemma \ref{lemNormOneBound} to deduce that $z_k^* \neq 0$, combined with Equation \pRef{ssMid}, shows that $z^*_k \neq 0$ for $k = 1,\dots, n$, and \pbDef{snbDecomp} \hat{s}^*_{k-1} + z^*_k = \beta^{d_k} \wlr{\beta^{\mu} + r_k + 1/2} \hspace{0.3cm} \wrm{with} \hspace{0.3cm} d_k \in \mathds Z{} \hspace{0.3cm} \wrm{and} \hspace{0.3cm} r_k \in [0,\wlr{\beta - 1} \beta^{\mu}) \cap \mathds Z{}.
\peDef{snbDecomp} Since $\wrm{fl}$ breaks ties upward, we have that \pbDef{snbRUp} \hat{s}^*_{k} = \beta^{d_k} \wlr{\beta^{\mu} + r_k + 1}. \peDef{snbRUp} If the $r_k$ in Equation \pRef{snbDecomp} were all zero then, since $\hat{s}_{0}^* = 0$ and \[ \frac{1}{\beta} < 2/3 \leq z_1^* \leq \frac{2 \beta}{3} < \beta, \] Equation \pRef{snbDecomp} would yield $z_1^* = \beta^{-\mu}\wlr{\beta^\mu + 1/2} = 1 + u$ and, for $k > 1$, Equations \pRef{snbDecomp} and \pRef{snbRUp} would lead to $z^*_k = \beta^{d_k + \mu}\wlr{1 + u} - \beta^{d_{k-1} + \mu} \wlr{1 + 2 u}$ and the $z_k^*$ would correspond to the $x_k$ in Example \ref{exSharpSumNearC} with $e_k = d_k + \mu$ (take $x_0 = 0$ and $x_1 = z^*_1$). Therefore, by the last line in the statement of Example \ref{exSharpSumNearC}, in order to complete this proof it suffices to show that $r_k = 0$ for all $k$, and this is what we do next. We start with $k < n$ and after that we handle the case $k = n$. Let us define $r_{0} := 0$, assume that $r_{i} = 0$ for $i < k < n$ and show that $r_k = 0$. Take $\delta_k := \min\mathds{S}et{1,r_k}$ and $\wvec{z}' \in \wrn{n}$ given by $z_i' := z_i^*$ for $i < k$ or $i > k + 1$ and \[ z_k' := z_k^* - \beta^{d_k} \delta_k \hspace{1cm} \wrm{and} \hspace{1cm} z_{k+1}' := z_{k+1}^* + \beta^{d_k} \delta_k. \] We now prove that $\delta_k = 0$, by showing that $\wvec{z}'$ is in the domain of $q_n$ and then comparing $\wfc{q_n}{\wvec{z}'}$ with the maximum $\wfc{q_n}{\wvec{z}^*}$. If $\delta_k = 0$ then $\wvec{z}' = \wvec{z}^*$ and $\wvec{z}'$ is in the domain of $q_n$. If $\delta_k = 1$ then $z'_{k+1} > 0$ and showing that $z_k' \geq 0$ suffices to prove that $\wvec{z}'$ is in the domain of $q_n$. Indeed, Equations \pRef{snbDecomp} and \pRef{snbRUp} and $r_{k-1} = 0$ lead to \[ z_k' := \beta^{d_k} \wlr{\beta^\mu + r_k + 1/2} - \beta^{d_{k-1}} \wlr{\beta^\mu + 1} - \beta^{d_k} \delta_k \] \[ = \beta^{d_k} \wlr{\beta^\mu + \wlr{r_k - \delta_k} + 1/2} - \beta^{d_{k-1}} \wlr{\beta^\mu + 1}.
\] Equations \pRef{snbDecomp}, \pRef{snbRUp} and $z_k^* \geq 0$ imply that \[ \hat{s}_{k}^* = \wfl{\hat{s}_{k-1}^* + z_k^*} \geq \hat{s}_{k-1}^* + \beta^{d_k}/2 > \hat{s}_{k-1}^*, \] and Prop. \ref{propOrder} leads to $d_k \geq d_{k-1}$. Moreover, $\delta_k \leq r_k$ by definition and it follows that if $d_k > d_{k-1}$ then $\beta^{d_k}/2 \geq \beta^{d_{k-1}}$ and $z_k' \geq 0$. If $d_k = d_{k-1}$ then $\hat{s}^*_{k} > \hat{s}^*_{k-1}$ implies that $\beta^{\mu} + r_k + 1 > \beta^{\mu} + 1$, $r_k > 1$, and $r_k - \delta_k \geq 1$ and $z_k' \geq 0$. Therefore, $\wvec{z}'$ is in the domain of $q_n$. We now analyze $\eta$ defined in Equation \pRef{ssnbNum} and show that all terms in $\wfc{\eta}{\wvec{z}^*}$ and $\wfc{\eta}{\wvec{z}'}$ are equal. Since we break ties upward, Equation \pRef{snbRUp} shows that \[ \wfl{\hat{s}^*_{k-1} + z_k'} = \wfl{\beta^{d_k} \wlr{\beta^{\mu} + \wlr{r_k - \delta_k} + 1/2}} = \beta^{d_k} \wlr{\beta^{\mu} + \wlr{r_k - \delta_k} + 1} \] \pbDef{ssNBA} = \beta^{d_k} \wlr{\beta^{\mu} + r_k + 1} - \beta^{d_k} \delta_k = \wfl{\hat{s}^*_{k-1} + z^*_k} - \beta^{d_k} \delta_k = \hat{s}^*_k - \beta^{d_k} \delta_k. \peDef{ssNBA} It follows that \[ \wcal{S}umkf{k}{\wvec{z}',\wrm{fl}t} + z_{k+1}' = \wfl{\hat{s}^*_{k-1} + z_k'} + z_{k+1}' = \] \[ \wlr{\hat{s}^*_k - \beta^{d_k} \delta_k} + \wlr{z_{k+1}^* + \beta^{d_k} \delta_k} = \hat{s}^*_k + z_{k+1}^* = \wcal{S}umkf{k}{\wvec{z}^*,\wrm{fl}t} + z_{k+1}^*. \] This equation leads to \[ \wcal{S}umkf{k+1}{\wvec{z}',\wrm{fl}t} = \wfl{ \wcal{S}umkf{k}{\wvec{z}',\wrm{fl}t} + z_{k+1}'} = \wfl{\hat{s}^*_k + z_{k+1}^*} = \wcal{S}umkf{k+1}{\wvec{z}^*,\wrm{fl}t}, \] and \[ \wcal{S}umkf{k+1}{\wvec{z}',\wrm{fl}t} - \wlr{\wfc{S_{k}}{\wvec{z}',\wrm{fl}t} + z_{k+1}'} = \wcal{S}umkf{k+1}{\wvec{z}^*,\wrm{fl}t} - \wlr{\wfc{S_{k}}{\wvec{z}^*,\wrm{fl}t} + z_{k+1}^*}. \] Therefore, $\wcal{S}umkf{i}{\wvec{z}',\wrm{fl}t} = \wcal{S}umkf{i}{\wvec{z}^*,\wrm{fl}t}$ for $i < k$ and $i \geq k + 1$.
It follows that \[ \wcal{S}umkf{i}{\wvec{z}',\wrm{fl}t} - \wlr{\wfc{S_{i-1}}{\wvec{z}',\wrm{fl}t} + z_i'} = \wcal{S}umkf{i}{\wvec{z}^*,\wrm{fl}t} - \wlr{\wfc{S_{i-1}}{\wvec{z}^*,\wrm{fl}t} + z^*_i} \] for $i < k$ and $i \geq k + 1$. For $i = k$, the definition $z_k' := z_k^* - \beta^{d_k} \delta_k$ and Equation \pRef{ssNBA} yield \[ \wcal{S}umkf{k}{\wvec{z}',\wrm{fl}t} - \wlr{\wcal{S}umkf{k-1}{\wvec{z}',\wrm{fl}t} + z_k'} = \wfl{\hat{s}^*_{k-1} + z_k'} - \wlr{\hat{s}^*_{k-1} + z_k'} = \] \[ = \wlr{\wfl{\hat{s}^*_{k-1} + z_k^*} - \beta^{d_k} \delta_k} - \wlr{\hat{s}^*_{k-1} + z_k^* - \beta^{d_k} \delta_k} \] \[ = \wcal{S}umkf{k}{\wvec{z}^*,\wrm{fl}t} - \wlr{\wcal{S}umkf{k-1}{\wvec{z}^*,\wrm{fl}t} + z^*_k}. \] Therefore, the numerators $\wfc{\eta}{\wvec{z}^*}$ and $\wfc{\eta}{\wvec{z}'}$ in Equation \pRef{ssnbNum} are equal term by term. Let us now analyze the denominator $D_n$ of $q_n$. Note that \[ \wlr{n - k + 1} z_k' + \wlr{n - k} z_{k+1}' = \] \[ \wlr{n - k + 1} \wlr{z_k^* - \beta^{d_k} \delta_k} + \wlr{n - k} \wlr{z_{k+1}^* + \beta^{d_k} \delta_k} \] \[ = \wlr{n - k + 1} z_k^* + \wlr{n - k} z^*_{k+1} - \beta^{d_k} \delta_k. \] Moreover, $z_i' = z_i^*$ for $i \not \in \mathds{S}et{k,k+1}$ and \[ \wfc{D_n}{\wvec{z}'} - \wfc{D_n}{\wvec{z}^*} = \wlr{\sum_{i = 1}^n \wlr{n - i + 1} z_i'} - \wlr{\sum_{i = 1}^n \wlr{n - i + 1} z_i^*} = \] \[ \wlr{\wlr{n - k + 1} z_k' + \wlr{n - k} z_{k+1}'} - \wlr{\wlr{n - k + 1} z_k^* + \wlr{n - k} z_{k+1}^*} = -\beta^{d_k} \delta_k. \] Since the numerators of $\wfc{q_n}{\wvec{z}'}$ and $\wfc{q_n}{\wvec{z}^*}$ are equal and $\wvec{z}^*$ is maximal, this equation implies that $\beta^{d_k} \delta_k \leq 0$. Therefore, $\delta_k = \min \mathds{S}et{1,r_k} = 0$, and $r_k = 0$. Finally, for $k = n$, define $\wvec{z}'$ with $z_k' = z_k^*$ for $k < n$ and $z_n' = z_n^* - \beta^{d_n} r_n$. As before, $\wvec{z}'$ is in the domain of $q_n$ and $\wcal{S}umkf{k}{\wvec{z}',\wrm{fl}t} = \wcal{S}umkf{k}{\wvec{z}^*,\wrm{fl}t}$ for $k < n$.
For $k = n$, Equation \pRef{snbDecomp} leads to \[ \wcal{S}umkf{n-1}{\wvec{z}',\wrm{fl}t} + z_n' = \hat{s}^*_{n-1} + z_n^* - \beta^{d_n} r_n = \beta^{d_n} \wlr{\beta^\mu + 1/2}. \] Since we break ties upward, $\wcal{S}umkf{n}{\wvec{z}',\wrm{fl}t} = \wfl{\wcal{S}umkf{n-1}{\wvec{z}',\wrm{fl}t} + z_n'} = \beta^{d_n} \wlr{\beta^\mu + 1}$ and \[ \wcal{S}umkf{n}{\wvec{z}',\wrm{fl}t} - \wlr{\wcal{S}umkf{n- 1}{\wvec{z}',\wrm{fl}t} + z_n'} = \beta^{d_n} \wlr{\beta^\mu + 1} - \beta^{d_n} \wlr{\beta^\mu + 1/2} = \] \[ \beta^{d_n} / 2 = \beta^{d_n} \wlr{\beta^\mu + r_n + 1} - \beta^{d_n} \wlr{\beta^\mu + r_n + 1/2} \] \[ = \wcal{S}umkf{n}{\wvec{z}^*,\wrm{fl}t} - \wlr{\wcal{S}umkf{n- 1}{\wvec{z}^*,\wrm{fl}t} + z_n^*}, \] and the numerator of $q_n$ in \pRef{ssnbNum} would not change if we were to replace $\wvec{z}^*$ by $\wvec{z}'$. However, the denominator would be reduced by $\beta^{d_n} r_n$, and this would contradict the maximality of $\wvec{z}^*$. Therefore $r_n = 0$. In summary, $r_k = 0$ for all $k$, the $z_k^*$ correspond to the $x_k$ in Example \ref{exSharpSumNearC} and we are done. \peProof{Lemma}{lemPositiveBound} \\ \pbProofB{Lemma}{lemSignedSum} Let us write $z_1 := y_0 + y_1$, $z_k := y_k$ for $k > 1$, $s_k := \sum_{i = 1}^k z_i$ and $\hat{s}_k := \wcal{S}umkf{k}{\wvec{z},\wrm{fl}t}$ for $k = 0,\dots,n$. We prove by induction that \pbDef{sspA} \wabs{\hat{s}_n - s_n} \leq \frac{u}{1 - \wlr{n - 2} u} \sum_{k = 1}^n \wabs{\sum_{i =1}^k z_i}, \peDef{sspA} which is equivalent to Equation \pRef{thSignedSum}. For $n = 1$, Equation \pRef{sspA} follows from Lemma \ref{lemUNear}. We now prove Equation \pRef{sspA} for $n \geq 2$, assuming that it holds for $n - 1$.
For $\wvec{w} \in \wrn{n-1}$ with $w_1 = \hat{s}_1 + z_2$ and $w_k = y_{k+1}$ for $k > 1$, we obtain by induction that $\wcal{S}umkf{k}{\wvec{w},\wrm{fl}tx} = \hat{s}_{k+1}$ for $\wrm{fl}tx = \mathds{S}et{\wrm{fl}k{2},\dots,\wrm{fl}k{n}}$, \[ \wabs{\hat{s}_n - \wlr{\hat{s}_1 + z_2} - \sum_{k=3}^n z_k} \leq \frac{u}{1 - \wlr{n - 3} u} \wlr{\sum_{k = 2}^n \wabs{ \wlr{\hat{s}_1 + z_2} + \sum_{i = 3}^k z_i}} \] and \[ \wabs{\hat{s}_n - s_n} - \wabs{\hat{s}_1 - z_1} \leq \frac{u}{1 - \wlr{n - 3} u} \wlr{ \wlr{n - 1} \wabs{\hat{s}_1 - z_1} + \sum_{k = 2}^n \wabs{\sum_{i = 1}^k z_i}}. \] Since $\hat{s}_1 = \wflk{1}{z_1}$, Lemma \ref{lemUNear} leads to \[ \wabs{\hat{s}_n - s_n} \leq \frac{u}{1 + u} \wabs{z_1} + \frac{u}{1 - \wlr{n - 3} u} \wlr{ \wlr{n - 1} \frac{u}{1 + u} \wabs{z_1} + \sum_{k = 2}^n \wabs{\sum_{i = 1}^k z_i}} \] \[ = \frac{u}{1 + u} \wlr{1 + \frac{\wlr{n - 1}u}{1 - \wlr{n - 3} u}} \wabs{z_1} + \frac{u}{1 - \wlr{n - 3} u} \sum_{k = 2}^n \wabs{\sum_{i = 1}^k z_i} \] \[ \leq \frac{u}{1 - \wlr{n - 2} u} \sum_{k = 1}^n \wabs{\sum_{i = 1}^k z_i} \] \pbDef{lssB} + \wlr{\frac{1}{1 + u} \wlr{1 + \frac{\wlr{n - 1} u}{1 - \wlr{n - 3}u}} - \frac{1}{1 - \wlr{n - 2} u}} u \wabs{z_1}. \peDef{lssB} The software Mathematica shows that \[ \frac{1}{1 + u} \wlr{ 1 + \frac{\wlr{n - 1} u}{1 - \wlr{n - 3} u}} - \frac{1}{1 - \wlr{n - 2} u} = - \frac{\wlr{n -1} u^2}{\wlr{1 + u}\wlr{1 - \wlr{n - 2} u} \wlr{1 - \wlr{n - 3} u}}, \] and this number is negative for $n \geq 2$ because $n u < 1$. As a result, Equation \pRef{lssB} implies Equation \pRef{sspA} and we are done. \peProof{Lemma}{lemSignedSum}\\ \pbProofB{Lemma}{lemUNearS} Let us start with $z > 0$ and define $m := \wlr{\wfloor{w} + \wceil{w}}/2$. By Prop. \ref{propRoundNormal}, there are three possibilities: \begin{itemize} \item If $w < m$ then $r = \wfloor{w}$ satisfies Equation \pRef{rHat}. \item If $w > m$ then $r = \wceil{w}$ satisfies Equation \pRef{rHat}.
\item If $w = m$ then $r_1 := \wfloor{w}$ and $r_2 := \wceil{w}$ satisfy $r_i \in [0,\wlr{\beta - 1} \beta^{\mu{}})$, $\wabs{r_i - w} \leq 1/2$ and $\wfl{z} = \beta^{e} \wlr{\beta^{\mu{}} + r}$ for some $r \in \mathds{S}et{r_1,r_2}$. Therefore, Equation \pRef{rHat} is also satisfied. \end{itemize} According to Definition \ref{longDefEps}, $2 u \times \beta^{\mu{}} = 1$, and Equation \pRef{rHat} yields \pbDef{unsA} \wabs{\frac{\wfl{z} - z}{z}} = \frac{\wabs{r - w}}{\beta^{\mu{}} + w} = \frac{2 u \wabs{r - w}}{1 + 2 w u} \leq \frac{u}{1 + 2 w u}. \peDef{unsA} When $w \geq 1/2$, this equation implies that \[ \wabs{\frac{\wfl{z} - z}{z}} \leq \frac{u}{1 + \max \mathds{S}et{1,2 w} u}, \] and when $w < 1/2$, Equation \pRef{rHat} and the fact that $r$ is an integer imply that $r = 0$ and \[ \wabs{\frac{\wfl{z} - z}{z}} = \frac{w}{\beta^{\mu{}} + w} = \frac{2 w u}{1 + 2 w u} < \frac{u}{1 + u} = \frac{u}{1 + \max \mathds{S}et{1, 2 w} u}, \] and we have verified Equation \pRef{uNearSA}. Equation \pRef{unsA} also leads to \[ \wabs{\frac{\wfl{z} - z}{z}} \leq \frac{u}{1 + 2 w u} \leq \frac{u}{1 + 2 \wlr{r - 1/2} u} = \frac{u}{1 + \wlr{2 r - 1} u} \] and \[ \wabs{\frac{\wfl{z} - z}{\wfl{z}}} = \frac{\wabs{r - w}}{\beta^{\mu} + r} = \frac{2 u \wabs{r - w}}{1 + 2 r u} \leq \frac{u}{1 + 2 r u}. \] This proves the last equation in Lemma \ref{lemUNearS} and we are done with $z > 0$. To prove Lemma \ref{lemUNearS} for $z < 0$, use the argument above for $z' = -z$ and the function $\wrm{m}$ in Prop. \ref{propRoundMinus}. \peProof{Lemma}{lemUNearS}\\ \pbProofB{Lemma}{lemOpt} Let us define $\psi := \sup_{\wlr{\wvec{z},r} \in \wcal{Z} \times \wcal{R}} \wfc{g}{\wvec{z},r}$. If $\psi \leq \varphi$ then $\wfc{g}{\wvec{z},r} \leq \varphi$ for all $\wlr{\wvec{z},r} \in \wcal{Z} \times \wcal{R}$ and we are done.
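The relative error bounds proved for Lemma \ref{lemUNearS} can be exercised numerically as well. The sketch below is an independent sanity check assuming a toy system with $\beta = 2$ and $\mu = 6$ and a hypothetical helper \texttt{fl} that rounds to nearest and breaks ties upward:

```python
# Sanity check of |fl(z) - z| / z <= u / (1 + max{1, 2w} u) and
# |fl(z) - z| / fl(z) <= u / (1 + 2 r u), where z = beta^e (beta^mu + w)
# and fl(z) = beta^e (beta^mu + r); toy system, exact rationals.
from fractions import Fraction
import math

beta, mu = 2, 6
B = Fraction(beta)
u = Fraction(1, 2 * beta**mu)          # 2 u beta^mu = 1

def decompose(x):
    """Return (e, f) with x = beta^e * f and beta^mu <= f < beta^(mu+1)."""
    e = 0
    while x >= B**(e + mu + 1):
        e += 1
    while x < B**(e + mu):
        e -= 1
    return e, x / B**e

def fl(x):
    e, f = decompose(x)
    return B**e * math.floor(f + Fraction(1, 2))   # nearest, ties up

for num in range(1, 4000):
    z = Fraction(num, 7)               # sample positive rationals
    e, f = decompose(z)
    w = f - beta**mu
    r = fl(z) / B**e - beta**mu
    err = abs(fl(z) - z)
    assert err / z <= u / (1 + max(1, 2 * w) * u)
    assert err / fl(z) <= u / (1 + 2 * r * u)
print("relative error bounds verified")
```

The decomposition mirrors the normal form $\beta^{e}\wlr{\beta^{\mu} + w}$ used in the proof; the helper names are illustrative, not from the article.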
Let us then assume that $\varphi < \psi$ and let $\mathds{S}et{\wlr{\wvec{z}_k, r_k}, k \in \mathds N{} } \subset \wcal{Z} \times \wcal{R}$ be a sequence such that $\lim_{k \rightarrow \infty} \wfc{g}{\wvec{z}_k, r_k} = \psi$ and $\wfc{g}{\wvec{z}_k,r_k} > \varphi$. It follows that $\wvec{z}_k \in \wcal{A}$ for each $k$ and there exist $\lambda_k \in \wcal{L}$ and $r_k' \in \wcal{R}$ for which $\wvec{z}_k' := \lambda_k \wvec{z}_k \in \wcal{K}$ satisfies $\wfc{h}{\wvec{z}_k', r_k'} = \lambda_k \wfc{h}{\wvec{z}_k, r_k}$. Since the sequence $\wvec{z}_k'$ is contained in the compact set $\wcal{K}$, it has a subsequence which converges to $\wvec{z}^* \in \wcal{K}$, and we may assume that this subsequence is $\wvec{z}_k'$ itself. The scaling properties of $f$ lead to \[ \wfc{f}{\wvec{z}_k',\wfc{h}{\wvec{z}_k',r_k'}} = \wfc{f}{\lambda_k \wvec{z}_k, \lambda_k \wfc{h}{\wvec{z}_k,r_k}} \geq \wfc{f}{\wvec{z}_k,\wfc{h}{\wvec{z}_k,r_k}} = \wfc{g}{\wvec{z}_k, r_k} \] and \[ \liminf_{k \rightarrow \infty} \wfc{f}{ \wvec{z}_{k}', \ \wfc{h}{\wvec{z}_{k}', r_{k}'}} \geq \liminf_{k \rightarrow \infty } \wfc{g}{\wvec{z}_k, r_k} = \lim_{k \rightarrow \infty } \wfc{g}{\wvec{z}_k, r_k} = \psi. \] Since $h$ is tight, there exist $r^* \in \wcal{R}$ and a subsequence $\wvec{z}_{n_k}'$ such that $\lim_{k \rightarrow \infty} \wfc{h}{\wvec{z}_{n_k}',r_{n_k}'} = \wfc{h}{\wvec{z}^*,r^*}$. By the upper semi-continuity of $f$ and the maximality of $\psi$ we have \[ \psi \geq \wfc{g}{\wvec{z}^*,r^*} = \wfc{f}{\wvec{z}^*, \wfc{h}{\wvec{z}^*, r^*}} \] \[ \geq \limsup_{k \rightarrow \infty} \wfc{f}{ \wvec{z}_{n_k}', \ \wfc{h}{\wvec{z}_{n_k}', r_{n_k}'}} \geq \liminf_{k \rightarrow \infty} \wfc{f}{ \wvec{z}_{k}', \ \wfc{h}{\wvec{z}_{k}', r_{k}'}} \geq \psi. \] Therefore, $\wfc{g}{\wvec{z}^*, r^*} = \psi$ and we are done. \peProof{Lemma}{lemOpt}\\ \subsection{Corollaries} \label{secCorol} In this section we prove some of the corollaries stated in the article.
The remaining corollaries are proved in the extended version.\\ \pbProofB{Corollary}{corSqrt} We have $\wfl{x^2} \geq \nu$ by monotonicity (Prop. \ref{propRoundMonotone}), and Lemma \ref{lemUNear} yields \pbDef{sqrtA} \wabs{z - \wabs{x}} \, \wlr{z + \wabs{x}} = \wabs{z^2 - x^2} = \wabs{\wfl{x^2} - x^2} \leq \frac{\wabs{x}^2 u}{1 + u} \peDef{sqrtA} for $z := \sqrt{\wfl{x^2}} > 0$. It follows that $\delta := \wabs{z - \wabs{x}}/\wabs{x}$ satisfies \[ \delta \leq \frac{u}{1 + u} \frac{\wabs{x}}{\wabs{x} + z} \leq \frac{u}{1 + u} < u = \beta^{-\mu}/2 \leq \frac{1}{4} \ \ \ \Rightarrow \ \ 1 - \delta > 0. \] Equation \pRef{sqrtA} leads to \[ \frac{u}{1 + u} \geq \delta \frac{z + \wabs{x}}{\wabs{x}} \geq \delta \frac{2 \wabs{x} - \wabs{z - \wabs{x}}}{\wabs{x}} = \delta \wlr{2 - \delta} > 0, \] and \[ 1 - \delta = \sqrt{\wlr{1 - \delta}^2} = \sqrt{1 - \delta \wlr{2 - \delta}} \geq \sqrt{1 - \frac{u}{1 + u}} = \frac{1}{\sqrt{1 + u}}, \] and \pbDef{sqrtPsi} \delta \leq 1 - \frac{1}{\sqrt{1 + u}} = \frac{u}{2} \psi \hspace{0.5cm} \wrm{for} \hspace{0.5cm} \psi := \frac{2}{u} \frac{\sqrt{1 + u} - 1}{\sqrt{1 + u}} = \frac{2}{1 + u + \sqrt{1 + u}} < 1. \peDef{sqrtPsi} Let $\wcal{P}$ be the complete system with the same $\beta$ and $\mu$ as $\wcal{F}$. By Prop. \ref{propRoundAdapt} there exists $\wrm{fl}x$ which rounds to nearest in $\wcal{P}$ and is such that $\wflx{w} = \wfl{w}$ for $w$ with $\wabs{w} \geq \nu_{\wcal{F}}$. In particular, $\wfl{x^2} = \wflx{x^2}$. Since $\nu < 1$ and $x^2 \geq \nu$ we have that $\wabs{x} \geq \nu$ and by Prop. \ref{propNu} there exist an exponent $e$ for $\wcal{F}$ and $r \in [0,\wlr{\beta - 1} \beta^{\mu}) \cap \mathds Z{}$ such that $\wabs{x} = \beta^{e} \wlr{\beta^{\mu} + r}$. This implies that $\beta^{e + \mu} \leq \wabs{x} < \beta^{e + \mu + 1}$.
The numbers $\beta^{2 e + 2 \mu}$ and $\beta^{2 e + 2 \mu + 2}$ are in $\wcal{P}$ (although $\beta^{2 e + 2 \mu}$ may not be in $\wcal{F}$) and, by the monotonicity of $\wrm{fl}x$, \[ \beta^{2 e + 2 \mu} \leq \wfl{x^2} = \wflx{x^2} \leq \beta^{2 e + 2 \mu + 2}, \] and $\beta^{e + \mu} \leq z = \sqrt{\wfl{x^2}} \leq \beta^{e + \mu + 1}$. By Prop. \ref{propOrder} and Prop. \ref{propNormalForm}, $z = \beta^{e} \wlr{\beta^{\mu} + w}$ with $0 \leq w \leq \wlr{\beta - 1} \beta^{\mu}$. As a result, \[ \delta = \frac{\wabs{\beta^{e} \wlr{\beta^{\mu} + w} - \beta^{e} \wlr{\beta^{\mu} + r}}} {\beta^{e} \wlr{\beta^{\mu} + r}} = \frac{\wabs{w - r}}{\beta^{\mu} + r}, \] and recalling that $2 u \beta^{\mu} = 1$ and using Equation \pRef{sqrtPsi} we obtain \pbDef{sqrtDR} \wabs{w - r} \leq \frac{1}{4} \psi \wlr{1 + \beta^{-\mu} r} = \frac{1}{4} \psi \wlr{1 + 2 r u}. \peDef{sqrtDR} There are two possibilities: either \pbDef{sqrtCRA} \frac{1}{4} \psi \wlr{1 + 2 r u} < \frac{1}{2} \peDef{sqrtCRA} or \pbDef{sqrtCRB} \frac{1}{4} \psi \wlr{1 + 2 r u} \geq \frac{1}{2}. \peDef{sqrtCRB} In case \pRef{sqrtCRA}, Equation \pRef{sqrtDR} yields $\wabs{w - r} < 1/2$, Prop. \ref{propRoundNormal} shows that $\wfl{z} = \wabs{x}$, and Corollary \ref{corSqrt} holds for $x$. For instance, if $\beta = 2$ then $2 r u < 2 \wlr{2 - 1} 2^{\mu} u = 1$ and $r$ satisfies Equation \pRef{sqrtCRA} because $\psi < 1$. Therefore, we have proved Corollary \ref{corSqrt} for $\beta = 2$. In order to complete the proof for the cases in which Equation \pRef{sqrtCRB} is valid, it suffices to show that \pbDef{sqrtCond} \frac{\wabs{x}}{\wfl{z}} = \frac{\wabs{x}}{\wfl{\sqrt{\wfl{x^2}}}} < 1 + u = \beta^{-\mu} \wlr{\beta^{\mu} + 1/2}, \peDef{sqrtCond} because this equation implies that $\wfl{\wabs{x}/\wfl{z}} \leq 1$ by Prop. \ref{propRoundNormal} and monotonicity. We first show that Equation \pRef{sqrtCond} is valid when \pbDef{sqrtH} \zeta := 1 + 2 r u > \wlr{1 + u}^{3/2} + 1 + u.
\peDef{sqrtH} In fact, for $\psi$ in Equation \pRef{sqrtPsi}, Equation \pRef{sqrtH} is equivalent to \[ \zeta > \frac{1 + u}{1 - \frac{\psi}{2} \wlr{1 + u}}, \hspace{0.7cm} \frac{\zeta}{1 + u} - \frac{\psi}{2} \zeta - 1 > 0 \hspace{0.7cm} \wrm{and} \hspace{0.7cm} \zeta - \frac{\psi}{2} \zeta u - u > \frac{\zeta}{1+u}, \] and can also be written as \pbDef{sqrtHB} 1 + u > \frac{\zeta}{\zeta - \frac{\psi}{2} \zeta u - u} \hspace{0.7cm} \wrm{or} \hspace{0.7cm} \frac{1 + 2 r u}{1 + 2 r u - \frac{\psi}{2}\wlr{1 + 2 r u} u - u} < 1 + u. \peDef{sqrtHB} Since $w \in [0,\wlr{\beta - 1} \beta^{\mu}]$, Prop. \ref{propRoundNormal} implies that $\wfl{z} \geq \beta^{e} \wlr{\beta^{\mu} + w - 1/2}$ and \[ \frac{\wabs{x}}{\wfl{z}} \leq \frac{\beta^{e} \wlr{\beta^{\mu} + r}}{\beta^{e} \wlr{\beta^{\mu} + w - 1/2}} = \frac{1 + 2 r u}{1 + 2 w u - u}, \] because $2 u \beta^{\mu} = 1$. Equation \pRef{sqrtDR} shows that $w \geq r - \psi \wlr{1 + 2 r u} / 4$ and \pbDef{sqrtHC} \frac{\wabs{x}}{\wfl{z}} \leq \frac{1 + 2 r u}{1 + 2 r u - \frac{\psi}{2} \wlr{1 + 2 r u} u - u}. \peDef{sqrtHC} Equations \pRef{sqrtHB} and \pRef{sqrtHC} lead to Equation \pRef{sqrtCond}. Therefore, Equation \pRef{sqrtH} implies Equation \pRef{sqrtCond} and Corollary \ref{corSqrt} is valid when Equation \pRef{sqrtH} is satisfied. In the case opposite to Equation \pRef{sqrtH} we have that \pbDef{sqrtTRPA} 2 r u \leq \wlr{1 + u}^{3/2} + u = 1 + \frac{5}{2} u + \frac{3}{8 \sqrt{1 + \xi_1}} u^2 \peDef{sqrtTRPA} for some $\xi_1 \in [0,u]$. Since $r$ is an integer and $2 u = \beta^{-\mu}$, Equation \pRef{sqrtTRPA} implies that \pbDef{sqrtTRA} r < \beta^{\mu} + \frac{5}{4} + \frac{3}{16} u < \beta^{\mu} + 2 \Rightarrow r \leq \beta^{\mu} + 1.
\peDef{sqrtTRA} Moreover, Equation \pRef{sqrtCRB} leads to \[ r \geq \beta^{\mu} \frac{2 - \psi}{\psi} = \beta^{\mu} \wlr{u + \sqrt{1 + u}} = \beta^{\mu} \wlr{1 + \frac{3}{2} u - \frac{1}{8 \wlr{1 + \xi_2}^{3/2}} u^2} \] for some $\xi_2 \in [0,u]$, and since $r$ is integer and $2 u = \beta^{-\mu}$, we have that \pbDef{sqrtTR} r \geq \beta^{\mu} + \frac{3}{4} - \frac{1}{16 \wlr{1 + \xi_2}^{3/2}} u \Rightarrow r \geq \beta^{\mu} + 1. \peDef{sqrtTR} Equations \pRef{sqrtTRA} and \pRef{sqrtTR} show that there is just one $r$ left: $r = \beta^{\mu} + 1$, which corresponds to $\wabs{x} = \beta^{e} \wlr{2 \beta^{\mu} + 1}$. It follows that \[ x^2 = \beta^{2 e}\wlr{4 \beta^{2 \mu} + 4 \beta^{\mu} + 1} = \beta^{2 e + \mu}\wlr{\beta^{\mu} + \wlr{3 \beta^{\mu} + 4 + \beta^{-\mu}}}. \] If $\beta \geq 5$ then $3 \beta^{\mu} + 4 + \beta^{-\mu} < \wlr{\beta - 1} \beta^{\mu}$ and Prop. \ref{propRoundNormal} implies that \[ \wfl{x^2} = 4 \beta^{2 e + \mu}\wlr{\beta^{\mu} + 1} \Rightarrow z = \sqrt{\wfl{x^2}} = 2 \beta^{e + \mu} \sqrt{1 + \beta^{-\mu}} \] \[ = 2 \beta^{e + \mu} \wlr{1 + \frac{1}{2} \beta^{-\mu} - \frac{\theta_5}{2} \beta^{-\mu}}, \] where, for some $\xi_5 \in [0,\beta^{-\mu}]$, \[ 0 \leq \theta_5 := \frac{1}{4 \wlr{1 + \xi_5}^{3/2}} \beta^{-\mu} \leq \frac{1}{4} \times \frac{1}{5} = \frac{1}{20}. \] Therefore, $z := \sqrt{\wfl{x^2}} = \beta^{e} \wlr{2 \beta^{\mu} + 1 - \theta_5}$ and the bound $\wabs{\theta_5} \leq 1/20$ and Prop. \ref{propRoundNormal} imply that $\wfl{z} = \beta^{e} \wlr{2 \beta^{\mu} + 1} = \wabs{x}$ and we are done with the case $\beta \geq 5$. For $\beta = 3$, the critical $x$ is $3^{e} \wlr{2 \times 3^{\mu} + 1}$ and \[ x^2 = 3^{2 e}\wlr{4 \times 3^{2 \mu} + 4 \times 3^{\mu} + 1} = 3^{2 e + \mu + 1}\wlr{3^{\mu} + 3^{\mu - 1} + 1 + \wlr{\frac{1}{3} + 3^{-\mu-1}}} \] The bound \[ \frac{1}{3} + 3^{-\mu-1} \leq \frac{1}{3} + \frac{1}{9} = \frac{4}{9} < 1/2 \] and Prop. 
\ref{propRoundNormal} lead to \[ \wfl{x^2} = 3^{2 e + \mu + 1}\wlr{3^{\mu} + 3^{\mu - 1} + 1} = 4 \times 3^{2 e + 2 \mu}\wlr{1 + \frac{3}{4} \times 3^{-\mu}} \] and \[ z := \sqrt{\wfl{x^2}} = 2 \times 3^{e + \mu}\wlr{1 + \frac{3}{8} \times 3^{-\mu} - \frac{\theta_3}{2} \times 3^{-\mu}} = 3^{e}\wlr{2 \times 3^{\mu} + \frac{3}{4} - \theta_3} \] where, for some $\xi_3 \in [0,1/3]$, \[ 0 \leq \theta_3 := \frac{1}{4 \wlr{1 + \xi_3}^{3/2}} \times \frac{9}{16} \times 3^{-\mu} \leq \frac{3}{64}. \] Since $3/4 - 3 / 64 = 45 / 64 > 1/2$, Prop. \ref{propRoundNormal} shows that $\wfl{z} = \wabs{x}$ when $\beta = 3$. Finally, for $\beta = 4$, we care about $x = 4^{e} \wlr{2 \times 4^{\mu} + 1}$ and \[ x^2 = 4^{2 e}\wlr{4 \times 4^{2 \mu} + 4 \times 4^{\mu} + 1} = 4^{2 e + 1 + \mu}\wlr{4^{\mu} + 1 + 4^{-\mu - 1}}, \] $4^{-\mu - 1} < 1/2$ and Prop. \ref{propRoundNormal} yields \[ \wfl{x^2} = 4^{2 e + 1 + \mu}\wlr{4^{\mu} + 1} = 4^{2 e + 1 + 2 \mu}\wlr{1 + 4^{-\mu}}. \] It follows that \[ z := \sqrt{\wfl{x^2}} = 2 \times 4^{e + \mu}\sqrt{1 + 4^{-\mu}} = 2 \times 4^{e + \mu} \wlr{1 + \frac{1}{2} \times 4^{-\mu} - \frac{\theta_4}{2} \times 4^{-\mu}} \] where, for some $\xi_4 \in [0,1/4]$, \[ 0 < \theta_4 := \frac{1}{4 \wlr{1 + \xi_4}^{3/2}} \, 4^{-\mu} < \frac{1}{16}. \] Therefore, $z = 4^{e} \wlr{2 \times 4^{\mu} + 1 - \theta_4}$, $\wfl{z} = \wabs{x}$ and we are done. \peProof{Corollary}{corSqrt} \\ \pbProofB{Corollary}{corNormOneUnperfect} Let $\wcal{P}$ be the perfect system corresponding to $\beta$ and $\mu$ and $\wrm{fl}tx$ the rounding tuple in Prop. \ref{propRoundIEEEX} or \ref{propRoundMPFRX}, depending on whether $\wcal{F}$ is an IEEE system or an MPFR system. As in the proof of Lemma \ref{lemNormOneBound}, we define $z_1 := y_0 + y_1$, $z_k := y_k$ for $2 \leq k \leq n$, $s_k := \sum_{i = 1}^k z_i$ and $\hat{s}_k := \wcal{S}umkf{k}{\wvec{z},\wrm{fl}t}$ for $k = 0,\dots,n$.
We also use the set $\wcal{T}$ of indices $k$ in $[1,n]$ such that $\wabs{\wcal{S}umkf{k-1}{\wvec{z},\wrm{fl}t} + z_k} < \tau$ for \[ \tau := \beta^{e_{\alpha}} \wlr{\beta^{\mu} + r} \hspace{1cm} \wrm{and} \hspace{1cm} r := \beta^{\mu} \frac{\beta - 1}{2}. \] Note that $\tau \in \wcal{E}_{e_{\alpha}} \subset \wcal{F}$ because $r$ is an integer and $r < \wlr{\beta - 1} \beta^{\mu}$. The threshold $\tau$ was chosen because $\nu = \beta^{e_{\alpha} + \mu}$, \pbDef{nouTau} \tau = \frac{\beta + 1}{2} \nu < \beta \nu \peDef{nouTau} and Prop. \ref{propRoundBelowAlpha} shows that \pbDef{sseeeA} \wabs{z} \leq \beta \nu \Rightarrow \wabs{\wfl{z} - z} \leq \alpha / 2, \peDef{sseeeA} where $\alpha = \beta^{e_{\alpha}}$ for IEEE systems and $\alpha = \nu = \beta^{e_{\alpha} + \mu}$ for MPFR systems. Let $m \in [0,n]$ be the size of $\wcal{T}$. We prove by induction that \[ \wfc{\eta}{\wvec{z},\wrm{fl}t} := \sum_{k=1}^n \wabs{\wcal{S}umkf{k}{\wvec{z},\wrm{fl}t} - \wlr{\wcal{S}umkf{k-1}{\wvec{z},\wrm{fl}t} + z_k}} \] satisfies \pbTClaim{thSharpSumUnperfectPV} \wfc{\eta}{\wvec{z},\wrm{fl}t} \leq \frac{m \alpha}{2} + \frac{\wlr{n - m} u}{1 + \wlr{n - m} u} \wlr{\frac{m \alpha}{2} + \sum_{k = 1}^n \wabs{z_k}}. \peTClaim{thSharpSumUnperfectPV} If $m = 0$ then $\wcal{S}umkf{n}{\wvec{z},\wrm{fl}t} = \wcal{S}umkf{n}{\wvec{z},\wrm{fl}tx}$ and Equation \pRef{thSharpSumUnperfectPV} follows from Lemma \ref{lemNormOneBound}. Assuming that Equation \pRef{thSharpSumUnperfectPV} holds for $m - 1$, let us show that it holds for $m$. If $\wabs{s_1} < \tau$ then the sum $\wlr{\hat{s}_1 + z_2} + \sum_{k = 3}^n z_k$ has $n - 1$ terms and there are $m - 1$ indices in $[2,n] \cap \wcal{T}$.
As a result, since $(n-1) - (m-1) = n - m$, Equation \pRef{sseeeA}, the identity $s_1 = z_1$ and induction yield \[ \wfc{\eta}{\wvec{z},\wrm{fl}t} = \wabs{\hat{s}_1 - s_1} + \wlr{\sum_{k = 2}^n \wabs{\hat{s}_k - \wlr{\hat{s}_{k-1} + z_k}}} \] \[ \leq \frac{\alpha}{2} + \wlr{\frac{\wlr{m - 1}\alpha}{2} + \frac{\wlr{n - m} u}{1 + \wlr{n - m} u} \wlr{ \frac{\wlr{m - 1}\alpha}{2} + \wabs{\hat{s}_1 + z_2} + \sum_{k = 3}^n \wabs{z_k}}} \] \[ \leq \frac{m \alpha}{2} + \frac{\wlr{n - m} u}{1 + \wlr{n - m} u} \wlr{ \wlr{ \wabs{\hat{s}_1 - s_1} - \frac{\alpha}{2}} + \frac{m \alpha}{2} + \wabs{s_1} + \sum_{k = 2}^n \wabs{z_k}} \] \[ \leq \frac{m \alpha}{2} + \frac{\wlr{n - m} u}{1 + \wlr{n - m} u} \wlr{ \frac{m \alpha}{2} + \sum_{k = 1}^n \wabs{z_k}}. \] Therefore, Equation \pRef{thSharpSumUnperfectPV} holds when $\wabs{s_1} < \tau$. Let us then assume that $\wabs{s_1} \geq \tau$ and define $\ell \in [2,n]$ as the first index such that $\wabs{\hat{s}_{\ell - 1} + z_{\ell}} < \tau$, \pbDef{IEEESpq} S := \sum_{k = 1}^{\ell - 1} \wabs{z_k}, \hspace{1cm} p := \ell - 1 \hspace{1cm} \wrm{and} \hspace{1cm} q := n - m - \ell + 1. \peDef{IEEESpq} Monotonicity and $\tau \in \wcal{F}$ imply that $\wabs{\hat{s}_\ell} = \wabs{\wflk{\ell}{\hat{s}_{\ell - 1} + z_{\ell}}} \leq \tau$ and the proof of Lemma \ref{lemNormOneBound}, Equation \pRef{sseeeA} and induction yield \[ \wfc{\eta}{\wvec{z},\wrm{fl}t} = \sum_{k = 1}^{\ell - 1} \wabs{\hat{s}_{k} - \wlr{\hat{s}_{k - 1} + z_k}} + \wabs{\hat{s}_{\ell} - \hat{s}_{\ell - 1} - z_{\ell}} + \sum_{k = \ell + 1}^{n} \wabs{\hat{s}_{k} - \wlr{\hat{s}_{k-1} + z_k}} \] \[ \leq \frac{p u}{1 + p u} S + \frac{\alpha}{2} + \wlr{ \frac{\wlr{m - 1}\alpha}{2} + \frac{q u}{1 + q u} \wlr{ \frac{ \wlr{m - 1} \alpha }{2} + \wabs{\hat{s}_{\ell} + z_{\ell + 1}} + \sum_{k = \ell + 2}^n \wabs{z_k}}} \] \pbDef{ssIEEEA} \leq \frac{p u}{1 + p u} S + \frac{m \alpha}{2} + \frac{q u}{1 + q u} \wlr{\frac{\wlr{m - 1}\alpha}{2} + \tau + \sum_{k = \ell + 1}^n \wabs{z_k}}.
\peDef{ssIEEEA} If $S \geq 7 \tau / 6$ then \[ \frac{p}{1 + p u} S + \frac{q}{1 + q u} \tau \leq \wlr{\frac{p}{1 + p u} + \frac{6}{7} \frac{q}{1 + q u}} S \leq \frac{p + q}{1 + \wlr{p + q} u} S - \Delta S \] for \[ \Delta := \frac{p + q}{1 + \wlr{p + q} u} - \wlr{\frac{p}{1 + p u} + \frac{6}{7} \frac{q}{1 + q u}}. \] The software Mathematica shows that \[ \Delta = q \frac{1 + q u - 6 \wlr{2 + q u + p u} p u }{\wlr{1 + p u} \wlr{1 + q u} \wlr{1 + \wlr{p + q} u}} \] and the hypothesis $20 n u \leq 1$ implies that $\Delta \geq 0$. Therefore, if $S \geq 7 \tau / 6$ then Equation \pRef{ssIEEEA} leads to \[ \wfc{\eta}{\wvec{z},\wrm{fl}t} \leq \frac{m \alpha}{2} + \frac{\wlr{p + q} u}{1 + \wlr{p + q} u} \wlr{\frac{m \alpha}{2} + S +\sum_{k = \ell + 1}^n \wabs{z_k}}, \] and Equation \pRef{thSharpSumUnperfectPV} follows from Equation \pRef{IEEESpq}. We can then assume that $S < 7 \tau / 6$ and, for $1 \leq k < \ell$, Lemma \ref{lemNormOneBound} leads to \[ \wabs{\hat{s}_k} \leq \wabs{s_k} + \wabs{\hat{s}_k - s_k} \leq \wlr{1 + \frac{k u}{1 + k u}} S < 1.05 \times \frac{7}{6} \frac{\beta + 1}{2} \nu < \frac{2}{3} \wlr{\beta + 1} \nu \leq \beta \nu, \] and Equation \pRef{sseeeA} implies that $\wabs{\hat{s}_{k} - \wlr{\hat{s}_{k - 1} + z_k}} \leq \alpha/2$ for $1 \leq k < \ell$. It follows that \pbDef{ssIEEEBA} \sum_{k=1}^{\ell - 1} \wabs{\hat{s}_{k} - \wlr{\hat{s}_{k - 1} + z_k}} \leq \wlr{\ell - 1} \alpha / 2 = p \alpha / 2.
\peDef{ssIEEEBA} The identity $u \nu = \alpha/2$ for IEEE systems and the inequality $u \nu = u \alpha \leq \alpha/4$ for MPFR systems, the hypothesis $20 n u \leq 1$ and the fact that \[ \sum_{k = 1}^{\ell - 1} \wabs{z_k} \geq \wabs{z_1} = \wabs{s_1} \geq \tau = \wlr{\beta + 1} \nu / 2 \geq \frac{3}{2} \nu \] imply that \[ \frac{p \alpha}{2} \leq p \nu u \leq \frac{2}{3} \wlr{1 + p u} \frac{p u}{1 + p u} \sum_{k = 1}^{\ell -1} \wabs{z_k} \] \[ \leq \frac{2}{3} \times \frac{21}{20} \times \frac{p u}{1 + p u} \sum_{k = 1}^{\ell - 1} \wabs{z_k} = \frac{7}{10} \frac{p u}{1 + p u} \sum_{k = 1}^{\ell - 1} \wabs{z_k}. \] Using induction as in Equation \pRef{ssIEEEA} and the bounds in the previous equation and in Equation \pRef{ssIEEEBA}, and recalling that $\wabs{z_1} \geq \tau$, we obtain \[ \wfc{\eta}{\wvec{z},\wrm{fl}t} \leq \frac{p \alpha}{2} + \frac{\alpha}{2} + \frac{\wlr{m - 1} \alpha}{2} + \frac{qu}{1 + q u} \wlr{\frac{\wlr{m - 1} \alpha}{2} + \tau + \sum_{k = \ell + 1}^n \wabs{z_k}} \] \pbDef{IEEEUfa} \leq \frac{7}{10} \frac{p u}{1 + p u}\sum_{k = 1}^n \wabs{z_k} + \frac{m \alpha}{2} + \frac{q u}{1 + q u} \wlr{\frac{m \alpha}{2} + \sum_{k = 1}^n \wabs{z_k}}. \peDef{IEEEUfa} According to the software Mathematica, \[ \frac{p + q}{1 + \wlr{p + q} u} - \wlr{\frac{q}{1 + q u} + \frac{7}{10} \frac{p}{1 + p u}} = p \frac{3 + 3 p u - 7 \wlr{2 + q u + p u} q u}{10 \wlr{1 + p u} \wlr{1 + q u} \wlr{1 + \wlr{p + q} u}}, \] and this number is positive due to the hypothesis $20 n u \leq 1$. As a result, Equation \pRef{IEEEUfa} implies Equation \pRef{thSharpSumUnperfectPV} and this concludes the inductive proof of Equation \pRef{thSharpSumUnperfectPV}. This equation leads to \pbDef{nouF} \wabs{ \, \wflt{\sum_{k = 0}^n y_k} - \sum_{k = 0}^n y_k \,} \leq \frac{m \alpha}{2} + \frac{\wlr{n - m} u}{1 + \wlr{n - m} u} \wlr{\frac{m \alpha}{2} + \sum_{k = 0}^n \wabs{y_k}}, \peDef{nouF} and implies Equation \pRef{thSharpSumUnperfect} because $0 \leq m \leq n$.
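As an informal numerical cross-check (not part of the proof), the case $m = 0$ of Equation \pRef{thSharpSumUnperfect}, namely $\wabs{\wflt{\sum_{k=0}^n y_k} - \sum_{k=0}^n y_k} \leq \frac{n u}{1 + n u} \sum_{k=0}^n \wabs{y_k}$, can be observed in IEEE binary64 arithmetic, where $\beta = 2$ and $u = 2^{-53}$. The Python sketch below draws summands in $[-1,1]$, far from the underflow range, and computes the exact sum with rational arithmetic:

```python
import random
from fractions import Fraction

u = 2.0 ** -53  # unit roundoff of IEEE binary64

def error_and_bound(ys):
    # Recursive (left to right) floating point summation.
    s = 0.0
    for y in ys:
        s += y
    # Every binary64 number is an exact rational, so the exact sum is computable.
    exact = sum(map(Fraction, ys), Fraction(0))
    err = abs(float(Fraction(s) - exact))
    n = len(ys) - 1  # n additions for the n + 1 summands y_0, ..., y_n
    bound = n * u / (1.0 + n * u) * float(sum(abs(Fraction(y)) for y in ys))
    return err, bound

random.seed(0)
trials = [[random.uniform(-1.0, 1.0) for _ in range(101)] for _ in range(200)]
print(all(err <= bnd for err, bnd in map(error_and_bound, trials)))
```

In every trial the observed error stays below the bound, as the lemma predicts for data that never enter the subnormal range.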
Finally, when the additional condition in Corollary \ref{corNormOneUnperfect} holds we have that \[ \sum_{k = 0}^n \wabs{y_k} \geq \theta \frac{\wlr{1 + n u}^2}{u} \alpha \] for $\theta := \frac{1}{\wlr{1 + n u}^2}$, which satisfies $0.9 < \wlr{20/21}^2 \leq \theta < 1$, and the software Mathematica shows that \[ \frac{n u}{1 + n u} \theta \frac{\wlr{1 + n u}^2}{u} \alpha - \wlr{\frac{m \alpha}{2} + \frac{\wlr{ n - m} u}{1 + \wlr{n - m} u} \wlr{\frac{m \alpha}{2} + \theta \frac{\wlr{1 + n u}^2}{u} \alpha}} \] \[ = \alpha m \frac{2 \theta - 1 + 2 u \wlr{ \wlr{\theta - 1} n + m}}{1 + \wlr{n - m} u} \geq \alpha m \frac{0.8 - 2 u \wlr{0.1 n - m}}{1 + \wlr{n - m} u} > 0, \] and Equation \pRef{normOneBound} follows from Equation \pRef{nouF}. \peProof{Corollary}{corNormOneUnperfect}\\ \pbProofB{Corollary}{corSignedSum} Define $z_1 := y_0 + y_1$ and $z_k := y_k$ for $2 \leq k \leq n$, $s_k := \sum_{i = 1}^k z_i$ and $\hat{s}_{k} := \wcal{S}umkf{k}{\wvec{z},\wrm{fl}t}$ for $k = 0, \dots, n$. Let $\wcal{P}$ be the perfect system corresponding to $\beta$ and $\mu$ and $\wrm{fl}tx$ the rounding tuple in Props. \ref{propRoundIEEEX} or \ref{propRoundMPFRX}, depending on whether $\wcal{F}$ is an IEEE system or an MPFR system. By definition of $\wrm{fl}tx$, we have that $\wflk{k}{s_{k-1} + z_k} = \wflxkf{k}{s_{k-1} + z_k}$ when $\wabs{s_{k-1} + z_k} \geq \nu$. Let $\wcal{T}$ be the set of indices $k$ in $[1,n]$ such that $\wabs{\wcal{S}umkf{k-1}{\wvec{z},\wrm{fl}t} + z_k} < \nu$ and $m \in [0,n]$ its size. We prove by induction that \pbTClaim{csi} \wabs{ \, \wflt{\sum_{k = 1}^n z_k} - \sum_{k = 1}^n z_k \, } \leq \wlr{1 + 2 \wlr{n - m} u} m \frac{\alpha}{2} + \frac{1 - m u / 2}{1 - \wlr{n - 2} u} u \sum_{k = 1}^n \wabs{\sum_{i = 1}^k z_i}. \peTClaim{csi} When $m = 0$ we have that $\hat{s}_n = \wcal{S}umkf{n}{\wvec{z},\wrm{fl}tx}$ and Equation \pRef{csi} follows from Lemma \ref{lemSignedSum}. Assuming that Equation \pRef{csi} holds for $m - 1$, let us prove it for $m$.
Let $\ell$ be the last element of $\wcal{T}$ (note that $\ell \geq m$). It follows that $\wabs{\hat{s}_{k - 1} + z_k} \geq \nu$ for $k > \ell$ and $\hat{s}_k = \wflxkf{k}{\hat{s}_{\ell} + \sum_{i = \ell + 1}^k z_i}$ for $k > \ell$. The proof of Lemma \ref{lemSignedSum} shows that \[ \wabs{\hat{s}_{n} - \wlr{\hat{s}_{\ell} + \sum_{k = \ell + 1}^n z_k}} \leq \frac{u}{1 - \wlr{\wlr{n - \ell} - 2} u} \sum_{k = \ell + 1}^n \wabs{\hat{s}_{\ell} + \sum_{i = \ell + 1}^k z_i} \] \[ \leq \frac{u}{1 - \wlr{n - \ell - 2} u} \sum_{k = \ell + 1}^n \wlr{\wabs{\hat{s}_{\ell} - s_{\ell}} + \wabs{\sum_{i = 1}^k z_i}} \] \pbTClaim{csib} = A u \wlr{\wlr{n - \ell} \wabs{\hat{s}_{\ell} - s_{\ell}} + \sum_{k= \ell + 1}^n \wabs{\sum_{i = 1}^k z_i}}, \peTClaim{csib} for \[ A := \frac{1}{1 - \wlr{n - \ell - 2} u}. \] Moreover, $\wabs{\hat{s}_{\ell - 1} + z_\ell} < \nu$ and, by induction and Prop. \ref{propRoundBelowAlpha}, \[ \wabs{\hat{s}_{\ell} - s_{\ell}} \leq \wabs{\hat{s}_{\ell} - \hat{s}_{\ell - 1} - z_{\ell}} + \wabs{\hat{s}_{\ell - 1} - s_{\ell - 1}} \] \[ \leq \frac{\alpha}{2} + \wlr{1 + 2 \wlr{\ell - m} u} \wlr{m - 1 } \frac{\alpha}{2} + \frac{\wlr{1 - \wlr{m - 1} u/2} u}{1 - \wlr{\ell - 3} u} \sum_{k = 1}^{\ell - 1} \wabs{\sum_{i = 1}^k z_i} \] \pbTClaim{cisc} = \wlr{m + 2 \wlr{\ell - m} \wlr{m - 1} u} \frac{\alpha}{2} + C u \sum_{k = 1}^{\ell - 1} \wabs{\sum_{i = 1}^k z_i}, \peTClaim{cisc} for \[ C := \frac{1 - \wlr{m - 1} u / 2}{1 - \wlr{\ell - 3} u}.
\] Combining Equations \pRef{csib} and \pRef{cisc} we obtain \[ \wabs{\hat{s}_n - s_n} \leq \wabs{\hat{s}_n - \wlr{\hat{s}_{\ell} + \sum_{k=\ell + 1}^n z_k}} + \wabs{\hat{s}_\ell - s_{\ell}} \] \[ \leq \wlr{1 + A \wlr{n - \ell} u} \wabs{\hat{s}_\ell - s_{\ell}} + A u \sum_{k=\ell + 1}^n \wabs{\sum_{i =1}^k z_i} \] \pbTClaim{cisd} \leq D \wlr{m + 2 \wlr{\ell - m} \wlr{m - 1} u} \frac{\alpha}{2} + D C u \sum_{k = 1}^{\ell - 1} \wabs{\sum_{i =1}^k z_i} + A u \sum_{k=\ell + 1}^n \wabs{\sum_{i =1}^k z_i}, \peTClaim{cisd} for \[ D := 1 + A \wlr{n - \ell} u = \frac{1 + 2 u}{1 - \wlr{n - \ell - 2} u}. \] We now show that $Q < 1$ for \[ Q := \frac{D \wlr{m + 2 \wlr{\ell - m} \wlr{m - 1} u}} { \wlr{1 + 2 \wlr{n - m} u} m } = \frac{\wlr{1 + 2 u}\wlr{1 + 2 \wlr{\ell - m}\wlr{1 - 1/m}u}} {\wlr{1 - \wlr{n - \ell - 2} u}{\wlr{1 + 2 \wlr{n - m} u}}} . \] It is easy to see that $Q < 1$ when $\ell = n$. Since $20 n u \leq 1$, when $\ell < n$ we have \[ Q < \frac{\wlr{1 + 2 u}\wlr{1 + 2 \wlr{\ell - m} u} } {\wlr{1 - \wlr{n - \ell - 2} u}{\wlr{1 + 2 \wlr{n - m} u}}} \] \[ = \frac{1 + \wlr{2 \ell - 2 m + 2} u + 4 \wlr{\ell - m} u^2} { 1 + \wlr{n + \ell - 2 m + 2} u - 2 \wlr{n - \ell - 2} \wlr{n - m} u^2 } \] \[ = \frac{1 + \wlr{2 \ell - 2 m + 2} u + 4 \wlr{\ell - m} u^2} {1 + \wlr{2 \ell - 2 m + 2} u + \wlr{n - \ell} \wlr{1 - 2 \frac{\wlr{n - \ell - 2}}{n - \ell} \wlr{n - m} u} u} \] \[ \leq \frac{1 + \wlr{2 \ell - 2 m + 2} u + 0.2 u} { 1 + \wlr{2 \ell - 2 m + 2} u + \wlr{1 - 0.1} u } < 1. \] Therefore, $Q < 1$ and, equivalently, \pbDef{csad} D \wlr{m + 2 \wlr{\ell - m} \wlr{m - 1} u} \leq \wlr{1 + 2 \wlr{n - m} u} m. \peDef{csad} Moreover, \[ D C = \frac{1 + 2 u}{1 - \wlr{n - \ell - 2} u} \frac{1 - \wlr{m - 1} u / 2}{1 - \wlr{\ell - 3} u} = \frac{\wlr{1 + 2 u} \wlr{1 - \wlr{m - 1} u / 2}}{1 - \wlr{n - 5} u + \wlr{\ell - 3}\wlr{n - \ell - 2} u^2 }. \] Note that the function $\wf{h}{\ell} := \wlr{\ell - 3}\wlr{n - \ell - 2}$ is concave.
Therefore, its minimum in the interval $[1,n]$ is attained at the endpoints. Since $\wfc{h}{1} = \wf{h}{n} = - 2 \wlr{n - 3}$, we have \[ D C \leq \frac{\wlr{1 + 2 u} \wlr{1 - \wlr{m - 1} u / 2}}{1 - \wlr{n - 5} u - 2 \wlr{n - 3} u^2 }, \] and the software Mathematica shows that \[ \frac{\wlr{1 + 2 u} \wlr{1 - \wlr{m - 1} u / 2}}{1 - \wlr{n - 5} u - 2 \wlr{n - 3} u^2 } - \frac{1 - m u / 2}{1 - \wlr{n - 2} u } = - u \frac{1 - 2 u - m u + n u}{2 \wlr{1 + 3 u - n u}\wlr{1 + 2 u - n u}} < 0, \] where the last inequality follows from the hypothesis $20 n u \leq 1$. Therefore, \[ D C \leq \frac{1 - m u / 2}{1 - \wlr{n - 2} u }. \] Note also that, since $\ell \geq m$ and $20 n u \leq 1$, \[ A - \frac{1 - m u/2}{1 - \wlr{n - 2} u} = \frac{1 - \wlr{n - 2} u - \wlr{1 - m u /2} \wlr{1 - \wlr{ n - \ell - 2} u}}{\wlr{1 - \wlr{n - 2} u} \wlr{1 - \wlr{n - \ell - 2} u}} \] \[ = - \frac{\ell - \wlr{m / 2} \wlr{1 - \wlr{ n - \ell - 2} u}}{\wlr{1 - \wlr{n - 2} u} \wlr{1 - \wlr{n - \ell - 2} u}} u < 0, \] and \[ A \leq \frac{1 - m u / 2}{1 - \wlr{n - 2} u }. \] The bounds on $DC$ and $A$ above, combined with Equations \pRef{cisd} and \pRef{csad}, imply Equation \pRef{csi}, and this completes the inductive proof of this equation. Finally, when $u \sum_{k = 1}^n \wabs{ \sum_{i = 0}^k y_i} \geq n \alpha$, Equation \pRef{csi} leads to \[ \wabs{ \, \wflt{\sum_{k = 0}^n y_k} - \sum_{k = 0}^n y_k \, } \leq \theta_{m} u \sum_{k = 1}^n \wabs{\sum_{i = 0}^k y_i}, \] for \[ \theta_{m} := \wlr{1 + 2 \wlr{n - m} u} \frac{m}{2n } + \frac{1 - m u / 2}{1 - \wlr{n - 2} u}. \] The derivative of $\theta_m$ with respect to $m$ is \[ \frac{1 - u \wlr{4 m - 2} - 2 u^2 \wlr{n^2 - 2 m n + 4 m - 2 n}}{2 n \wlr{1 + 2 u - n u}}, \] and it is positive because $20 m u \leq 20 n u \leq 1$.
Thus, $\theta_m$ is maximized for $m = n$ and \[ \theta_m \leq \frac{1}{2} + \frac{1 - n u / 2}{1 - \wlr{n - 2} u} = \frac{3 - 2 \wlr{n - 1} u}{2 \wlr{1 - \wlr{n - 2} u}}, \] and Equation \pRef{thCorSignedSumL} holds because \[ \frac{3 - 2 \wlr{n - 1} u}{2 \wlr{1 - \wlr{n - 2} u}} - \frac{3}{2} \wlr{1 + \frac{n u}{2}} = - u \frac{8 + n \wlr{1 - 3 \wlr{n - 2} u}}{4 \wlr{1 - \wlr{n - 2} u}} < 0 \] when $20 n u \leq 1$. \peProof{Corollary}{corSignedSum}\\ \iftoggle{LatexFull}{ \section{Extended version} \label{secExtend} In this part of the article we prove Lemmas \ref{lemSterbenz} and \ref{lemConvexity}, the corollaries which were not proved in the previous sections, and the propositions. We try to prove every assertion we make, no matter how trivial it may sound. In all propositions $\wcal{F}$ is a floating point system, $z \in \mathds R{}$, $x \in \wcal{F}$, $\wrm{fl}$ rounds to nearest in $\wcal{F}$, and $u$, $e_{\alpha}{}$, $\mu$, $\alpha$ and $\nu$ are the numbers related to this system in Definitions \ref{longDefEps}, \ref{longDefPerfect}, \ref{longDefMPFR}, \ref{longDefIEEE}, \ref{longDefAlpha} and \ref{longDefNu}. \subsection{Proofs of Lemmas \ref{lemSterbenz} and \ref{lemConvexity}} \label{secExtLem} In this section we prove Lemmas \ref{lemSterbenz} and \ref{lemConvexity}.\\ \pbProofB{Lemma}{lemSterbenz} If $b - a < \beta \nu$ then Lemma \ref{lemSterbenz} follows from Lemma \ref{lemSmallSum}. Therefore, we can assume that $b - a \geq \beta \nu$. Prop. \ref{propNormalForm} implies that $a = \beta^{d}\wlr{\beta^{\mu} + r}$ and $b = \beta^{e} \wlr{\beta^{\mu} + s}$ with $d,e \in \mathds Z{}$ and $r,s \in [0,\wlr{\beta - 1} \beta^{\mu})$. Since $a \leq b \leq 2 a$ and $\beta \geq 2$, \[ \beta^{d}\wlr{\beta^{\mu} + r} \leq \beta^{e} \wlr{\beta^{\mu} + s} \leq 2 \beta^{d}\wlr{\beta^{\mu} + r} \leq \beta^{d + 1}\wlr{\beta^{\mu} + r}. \] Prop. \ref{propOrder} shows that $d \leq e \leq d + 1$ and either (i) $e = d$ or (ii) $e = d + 1$.
In case (i) $b - a = \beta^{e} \wlr{s - r} \geq \beta \nu$. Since $0 \leq s - r < \wlr{\beta - 1} \beta^{\mu}$ and $b - a \geq \nu$, Prop. \ref{propNu} implies that $b - a \in \wcal{F}$. In case (ii) $0 < b - a = \beta^{d} t$ for $t := \wlr{ \wlr{\beta - 1 } \beta^{\mu} + \beta s - r} > 0$ and \[ b - a \leq a \Rightarrow t \leq \beta^{\mu} + r < \beta^{1 + \mu}. \] This bound, the assumption $b - a \geq \beta \nu$ and Prop. \ref{propNu} imply that $b - a \in \wcal{F}$. \peProof{Lemma}{lemSterbenz}\\ \pbProofB{Lemma}{lemConvexity} The function $g_k$ has first derivative \[ \wdf{g_k}{u} = -\wfc{g_k}{u} \sum_{i = 1}^k \frac{n_i}{1 + n_i u} \] and second derivative \[ \wdsf{g_k}{u} = \wfc{g_k}{u} \wlr{ \wlr{\sum_{i = 1}^k \frac{n_i}{1 + n_i u}}^2 + \sum_{i = 1}^k \frac{n_i^2}{\wlr{1 + n_i u}^2}} > 0, \] and, therefore, it is convex. Similarly, the function $f_k$ has first derivative \[ \wdf{f_k}{u} = \wfc{f_k}{u} \sum_{i = 1}^k \frac{n_i}{\wlr{1 + n_i u} \wlr{1 + 2 n_i u}} \] and second derivative \[ \wdsf{f_k}{u} = \wfc{f_k}{u} \wlr{ \wlr{\sum_{i = 1}^k \frac{n_i}{\wlr{1 + n_i u} \wlr{1 + 2 n_i u}}}^2 - \sum_{i=1}^k \frac{n_i^2 \wlr{3 + 4 n_i u}}{\wlr{\wlr{1 + n_i u} \wlr{1 + 2 n_i u}}^2}}. \] It follows that \pbDef{dsf} \wdsf{f_k}{u} = -\wfc{f_k}{u} \wvec{v}^\mathrm{T}{} \wlr{3 \wvec{I} - \mathds{1} \mathds{1}^{\mathrm{T}{}}} \wvec{v} - 4 \wfc{f_k}{u} u \sum_{i=1}^k \frac{n_i^3}{\wlr{\wlr{1 + n_i u} \wlr{1 + 2 n_i u}}^2}, \peDef{dsf} where $\wvec{I}$ is the $k \times k$ identity matrix, $\mathds{1} \in \wrn{k}$ is the vector with all entries equal to $1$ and $\wvec{v} \in \wrn{k}$ has entries \[ v_i := \frac{n_i}{\wlr{1 + n_i u} \wlr{1 + 2 n_i u}}. \] The $k \times k$ symmetric matrix $\wvec{M} = 3 \wvec{I} - \mathds{1} \mathds{1}^{\mathrm{T}{}}$ has a $(k-1)$-dimensional eigenspace associated to the eigenvalue $3$ which is orthogonal to $\mathds{1}$, and $\mathds{1}$ is an eigenvector with eigenvalue $3 - k$.
Therefore, $\wvec{M}$ is positive semidefinite for $k \leq 3$, so Equation \pRef{dsf} implies that \[ \wdsf{f_k}{u} \leq - 4 \wfc{f_k}{u} u \sum_{i=1}^k \frac{n_i^3}{\wlr{\wlr{1 + n_i u} \wlr{1 + 2 n_i u}}^2} < 0 \] and we are done. \peProof{Lemma}{lemConvexity}\\ \subsection{Proofs of the remaining corollaries} \label{secExtCor} In this section we prove the corollaries which were not proved in the previous sections.\\ \pbProofB{Corollary}{corFourProd} Corollary \ref{corFourProd} is a consequence of the convexity of $\wlr{1 + u}^{-k}$ and the concavity of $f^k$ for $k \leq 3$ and $f$ in \pRef{concF}, which yield \[ 1 - k u \leq \frac{1}{\wlr{1 + u}^k} \leq \wlr{\frac{1 + 2 u}{1 + u}}^k \leq 1 + k u \] for $k = 1$, $2$ and $3$. \peProof{Corollary}{corFourProd}\\ \pbProofB{Corollary}{corNormOneIEEE} Let $\wrm{fl}tx$ be the rounding tuple in Prop. \ref{propRoundIEEEX}. If the $y_k$ are floating point numbers then $\wcal{S}umkf{k}{\wvec{y},\wrm{fl}t} = \wcal{S}umkf{k}{\wvec{y},\wrm{fl}tx}$ for all $k$ and Corollary \ref{corNormOneIEEE} follows from Lemma \ref{lemNormOneBound}. \peProof{Corollary}{corNormOneIEEE}\\ \pbProofB{Corollary}{corNormOneMPFR} Let $\wrm{fl}tx$ be the rounding tuple in Prop. \ref{propRoundMPFRX}. If all $y_k$ are nonnegative floating point numbers then $\wcal{S}umkf{k}{\wvec{y},\wrm{fl}t} = \wcal{S}umkf{k}{\wvec{y},\wrm{fl}tx}$ for all $k$ and Corollary \ref{corNormOneMPFR} follows from Lemma \ref{lemNormOneBound}. \peProof{Corollary}{corNormOneMPFR} \\ \pbProofB{Corollary}{corPositiveUnperfect} If $\wcal{F}$ is an MPFR system, let $\wrm{fl}tx$ be the rounding tuple in Prop. \ref{propRoundMPFRX}. Since all $y_k$ belong to $\wcal{M}$ and are nonnegative we have that $\wcal{S}umkf{k}{\wvec{y},\wrm{fl}t} = \wcal{S}umkf{k}{\wvec{y},\wrm{fl}tx}$ for all $k$ and Corollary \ref{corPositiveUnperfect} follows from Lemma \ref{lemPositiveBound}. If $\wcal{F}$ is an IEEE system, let $\wrm{fl}tx$ be the rounding tuple in Prop. \ref{propRoundIEEEX}.
Since all $y_k$ are floating point numbers, $\wcal{S}umkf{k}{\wvec{y},\wrm{fl}t} = \wcal{S}umkf{k}{\wvec{y},\wrm{fl}tx}$ for all $k$ and Corollary \ref{corPositiveUnperfect} follows from Lemma \ref{lemPositiveBound}. \peProof{Corollary}{corPositiveUnperfect}\\ \pbProofB{Corollary}{corFmaPerfect} In a perfect system, the dot product of $n + 1$ numbers evaluated using an fma, as in Definition \ref{longDefFmaDot}, is the floating point sum of the $(n + 2)$ real numbers $p_0 := 0$ and $p_k := x_{k-1} y_{k-1}$ for $k > 0$, and Equation \pRef{thSharpFmaDotNear} follows from Lemma \ref{lemNormOneBound} applied to the $p_k$. \peProof{Corollary}{corFmaPerfect}\\ \pbProofB{Corollary}{corFmaUnperfect} In an unperfect system, the dot product of $n + 1$ numbers evaluated using an fma, as in Definition \ref{longDefFmaDot}, is the floating point sum of the $(n + 2)$ real numbers $p_0 := 0$ and $p_k := x_{k-1} y_{k-1}$ for $k > 0$, and Corollary \ref{corFmaUnperfect} follows from Corollary \ref{corNormOneUnperfect} applied to the $p_k$. \peProof{Corollary}{corFmaUnperfect}\\ \pbProofB{Corollary}{corDotPerfect} The dot product is the floating point sum of the floating point numbers $p_k := \wflrk{r}{k}{x_k y_k}$. In a perfect system, Lemma \ref{lemUNear} shows that \[ p_k = x_k y_k + \theta_k \frac{u}{1 + u} x_k y_k \hspace{1cm} \wrm{with} \hspace{1cm} \wabs{\theta_k} \leq 1, \] and Lemma \ref{lemNormOneBound} implies that \[ \wabs{\wflt{\sum_{k = 0}^n p_k} - \sum_{k = 0}^n p_k} \leq \frac{n u}{1 + n u} \sum_{k = 0}^n \wabs{p_k}. \] It follows that \[ \wabs{\wflt{\sum_{k = 0}^n x_k y_k} - \sum_{k = 0}^n x_k y_k } \leq \sum_{k = 0}^n \wabs{p_k - x_k y_k} + \wabs{\wflt{\sum_{k = 0}^n x_k y_k} - \sum_{k = 0}^n p_k} \leq \beta_n u \sum_{k = 0}^n \wabs{x_k y_k} \] for \[ \beta_n := \frac{1}{1 + u} \wlr{1 + \frac{n}{1 + n u} \wlr{1 + 2 u}} = \frac{n + 1 + 3 n u}{1 + \wlr{n + 1} u + n u^2}.
\] Finally, note that for $n \geq 1$ and $20 n u \leq 1$, \[ \beta_n - \frac{n + 1}{1 + n u/2} = - u \frac{\wlr{n - 2} \wlr{n - 1 - n u}}{\wlr{1 + n u/2} \wlr{1 + \wlr{n+1}u + n u^2}} \leq 0, \] and \[ \beta_n - \frac{n + 1}{1 + \wlr{n - 3} u} = - u \frac{n + 4 + 10 n u - 2 n^2 u}{\wlr{1 + \wlr{n + 1} u + n u^2}\wlr{1 + \wlr{n - 3} u}} < 0. \] \peProof{Corollary}{corDotPerfect}\\ \pbProofB{Corollary}{corDotIEEE} The dot product is the sum of the $n + 1$ floating point numbers $p_k := \wflrk{r}{k}{x_k y_k}$, and Corollary \ref{corNormOneIEEE} shows that \[ \wabs{\wflt{\sum_{k = 0}^n p_k} - \sum_{k = 0}^n p_k} \leq \frac{n u}{1 + n u} \sum_{k = 0}^n \wabs{p_k}. \] We also have \[ \wabs{p_k - x_k y_k} \leq \frac{u}{1 + u} \wabs{x_k y_k} + \frac{\alpha}{2} \] and \[ \wabs{\wflt{\sum_{k = 0}^n x_k y_k} - \sum_{k = 0}^n x_k y_k} \leq \wabs{\wflt{\sum_{k = 0}^n p_k} - \sum_{k = 0}^n p_k} + \sum_{k = 0}^n \wabs{p_k - x_k y_k} \] \[ \leq \frac{n u}{1 + n u} \sum_{k = 0}^n \wabs{p_k} + \sum_{k = 0}^n \wabs{p_k - x_k y_k} \] \[ \leq \frac{n u}{1 + n u} \wlr{ \frac{1 + 2 u}{1 + u} \sum_{k = 0}^n \wabs{x_k y_k} + \wlr{n + 1} \frac{\alpha}{2}} + \frac{\wlr{n+1} \alpha}{2} + \frac{u}{1 + u} \sum_{k = 0}^n \wabs{x_k y_k} \] \[ = \beta_n u \sum_{k = 0}^n \wabs{x_k y_k} + b \frac{\alpha}{2} \] for $\beta_n$ in Corollary \ref{corDotPerfect} and \[ b := \wlr{n + 1} \wlr{1 + \frac{n u}{1 + n u}} = \wlr{n + 1} \frac{1 + 2 n u}{1 + n u} < 1.05 \wlr{n + 1}, \] because $20 n u \leq 1$. Finally, if $ u \sum_{k = 0}^n \wabs{x_k y_k} \geq \alpha$ then \[ \beta_n u \sum_{k = 0}^n \wabs{x_k y_k} + b \frac{\alpha}{2} \leq \theta_n u \sum_{k = 0}^n \wabs{x_k y_k} \hspace{1cm} \wrm{for} \hspace{1cm} \theta_n := \beta_n + \frac{n + 1}{2} \frac{1 + 2 n u}{1 + n u}, \] and the software Mathematica shows that \[ \theta_n - 3 \frac{n +1}{2} = - u \frac{n^2 - 3 n + 2 + n u \wlr{1 + n}}{2 \wlr{1 + u} \wlr{1 + n u}} \] which is negative for $n \geq 1$. 
This proves the last equation in Corollary \ref{corDotIEEE}. \peProof{Corollary}{corDotIEEE}\\ \pbProofB{Corollary}{corDotMPFR} The dot product is the sum of the $n + 1$ floating point numbers $p_k := \wflrk{r}{k}{x_k y_k}$, and the proof of Corollary \ref{corNormOneUnperfect} shows that \[ \wabs{\wflt{\sum_{k = 0}^n p_k} - \sum_{k = 0}^n p_k} \leq \frac{m \alpha}{2} + \frac{\wlr{n - m} u}{1 + \wlr{n - m} u} \wlr{\frac{m \alpha}{2} + \sum_{k = 0}^n \wabs{p_k}}, \] for some $m \in [0,n]$. We also have that \[ \wabs{p_k - x_k y_k} \leq \frac{u}{1 + u} \wabs{x_k y_k} + \frac{\alpha}{2} \] and \[ \wabs{\wflt{\sum_{k = 0}^n x_k y_k} - \sum_{k = 0}^n x_k y_k} \leq \wabs{\wflt{\sum_{k = 0}^n p_k} - \sum_{k = 0}^n p_k} + \sum_{k = 0}^n \wabs{p_k - x_k y_k} \] \[ \leq \frac{m \alpha}{2} + \frac{\wlr{n - m} u}{1 + \wlr{n - m} u} \wlr{\frac{m \alpha}{2} + \sum_{k = 0}^n \wabs{p_k}} + \sum_{k = 0}^n \wabs{p_k - x_k y_k} \] \pbDef{MPRFBA} \leq \frac{\wlr{n - m} u}{1 + \wlr{n - m} u} \wlr{ \frac{1 + 2 u}{1 + u} \sum_{k = 0}^n \wabs{x_k y_k} + \wlr{m + n + 1} \frac{\alpha}{2}} + \peDef{MPRFBA} \[ \frac{\wlr{m + n+1} \alpha}{2} + \frac{u}{1 + u} \sum_{k = 0}^n \wabs{x_k y_k} \leq \beta_n u \sum_{k = 0}^n \wabs{x_k y_k} + b \frac{\alpha}{2} \] for $\beta_n$ in Corollary \ref{corDotPerfect} and \[ b := \frac{n^2 + n - m^2 -m}{1 + \wlr{n - m} u} u + \wlr{m + n + 1} \leq \frac{n \wlr{n + 1} u}{1 + n u} + 2 n + 1 \leq 2.05 n + 1.05. \] Finally, if $ u \sum_{k = 0}^n \wabs{x_k y_k} \geq \alpha$, then \[ \wabs{\wflt{\sum_{k = 0}^n x_k y_k} - \sum_{k = 0}^n x_k y_k} \leq \gamma_n u \sum_{k = 0}^n \wabs{x_k y_k} \] for \[ \gamma_n := \beta_n + \frac{1}{2} \wlr{\frac{n^2 + n - m^2 -m}{1 + \wlr{n - m} u} u + \wlr{m + n + 1}}. \] The derivative of $\gamma_n$ with respect to $m$ is \[ - \frac{1 + 2 n u + u}{\wlr{1 + \wlr{n - m} u}^2} < 0 \] and $\gamma_n$ is maximized for $m = 0$, in which case it is equal to the $\theta_n$ in the proof of Corollary \ref{corDotIEEE}.
This proves the last statement in Corollary \ref{corDotMPFR}. \peProof{Corollary}{corDotMPFR}\\ \subsection{Numbers} \label{secNumbers} This section contains new propositions about real and integer numbers, and the proofs of propositions related to these numbers stated in the main part of the article. \subsubsection{Propositions} \label{secNumbersProps} This section presents more propositions regarding real and integer numbers. \pbPropBT{propNormalFormCont}{Continuity of the normal form} If $e$ is an integer, $\wabs{z} = \beta^{e} \wlr{\beta^{\mu} + w}$ with $0 < w < \wlr{\beta - 1} \beta^{\mu}$ and \[ \wabs{y - z} < \beta^{e} \min \mathds{S}et{w, \wlr{\beta - 1} \beta^{\mu} - w} \] then $y = \mathds{S}ign{z} \beta^{e} \wlr{\beta^{\mu} + v}$ with $0 < v < \wlr{\beta - 1} \beta^{\mu}$ and $\wabs{v - w} = \beta^{-e} \wabs{y - z}$. \peFullProp{propNormalFormCont} \pbPropBT{propNormalFormDis}{Discontinuity of the normal form} If $e$ is an integer and $\wabs{z} = \beta^{e + \mu}$ with $\wabs{y - z} < \beta^{e + \mu - 1} \wlr{\beta - 1}$ then we have three possibilities: \begin{itemize} \item[(i)] $\wabs{y} < \wabs{z}$ and $y = \mathds{S}ign{z} \beta^{e - 1} \wlr{\beta^{\mu} + v}$ with \[ 0 < v = \wlr{\beta - 1}\beta^{\mu} - \beta^{1 - e} \wabs{y - z} < \wlr{\beta - 1}\beta^{\mu}. \] \item[(ii)] $\wabs{y} = \wabs{z}$ and $y = z$. \item[(iii)] $\wabs{y} > \wabs{z}$ and $y = \mathds{S}ign{z} \beta^{e} \wlr{\beta^{\mu} + w}$ with $0 < w = \beta^{-e} \wabs{y - z} < \beta^{\mu - 1} \wlr{\beta - 1}$.
\end{itemize} \peProp{propNormalFormDis} \subsubsection{Proofs} \label{secNumbersProof} In this section we prove the propositions regarding integer and real numbers.\\ \pbProofB{Proposition}{propOrder} Since $d,e\in \mathds Z{}$ and $d < e$ we have that $e - d \geq 1$ and \[ \beta^{e} \wlr{\beta^{\mu} + w} - \beta^{d} \wlr{\beta^{\mu} + v} \geq \beta^{d} \wlr{\wlr{\beta^{e - d} - 1} \beta^{\mu} - v} \geq \beta^{d} \wlr{\wlr{\beta - 1} \beta^{\mu} - v} > 0, \] and this shows that $\beta^{e} \wlr{\beta^{\mu} + w} > \beta^{d} \wlr{\beta^{\mu} + v}$. \peProof{Proposition}{propOrder}\\ \pbProofB{Proposition}{propNormalForm} The integer exponent $e := \wfloor{\wfc{\log_{\beta}}{\wabs{z}}} - \mu$ satisfies \[ \wfc{\log_{\beta}}{\wabs{z}} - \mu - 1 < e \leq \wfc{\log_{\beta}}{\wabs{z}} - \mu \hspace{1.0cm} \wrm{and} \hspace{1.0cm} {\beta}^{- \mu - 1} \wabs{z} < {\beta}^{e} \leq \wabs{z} {\beta}^{-\mu}. \] The equation above shows that $w := {\beta}^{-e} \wabs{z} - {\beta}^{\mu}$ satisfies $0 \leq w < \wlr{\beta - 1}{\beta}^{\mu}$ and $z = \mathds{S}ign{z} {\beta}^{e}\wlr{{\beta}^{\mu} + w}$. If $z = \mathds{S}ign{z} {\beta}^{d} \wlr{{\beta}^{\mu} + v}$ with $d \in \mathds Z{}$ and $0 \leq v < \wlr{\beta - 1} {\beta}^{\mu}$ then \[ {\beta}^{e} \wlr{\beta^{\mu} + w} = \wabs{z} = {\beta}^{d} \wlr{{\beta}^{\mu} + v}, \] and Prop. \ref{propOrder} implies that $d = e$, and the equation above then implies that $v = w$. \peProof{Proposition}{propNormalForm}\\ \pbProofB{Proposition}{propNormalFormCont} We have that \[ \wabs{1 - \frac{y}{z}} < \frac{\beta^{e} w}{\wabs{z}} = \frac{w}{\beta^{\mu} + w} < 1 \ \ \Rightarrow \frac{y}{z} > 0 \Rightarrow y \neq 0, \] and Prop. \ref{propNormalForm} yields $d$ and $v \in [0,\wlr{\beta - 1} \beta^{\mu})$ such that $y = \mathds{S}ign{y} \beta^{d} \wlr{\beta^{\mu} + v}$.
The inequality \[ \frac{\mathds{S}ign{y} \beta^{d}\wlr{\beta^{\mu} + v}}{\mathds{S}ign{z} \beta^{e}\wlr{\beta^{\mu} + w}} = \frac{y}{z} > 0 \] implies that $\mathds{S}ign{y} = \mathds{S}ign{z}$. Moreover, \[ \beta^{d} \wlr{\beta^{\mu} + v} = \wabs{y} \leq \wabs{z} + \wabs{y-z} < \wabs{z} + \beta^{e} \wlr{\wlr{\beta - 1}\beta^{\mu} - w} = \beta^{e + 1 +\mu} \] and Prop. \ref{propOrder} implies that $d \leq e$. Similarly, \[ \beta^{d} \wlr{\beta^{\mu} + v} = \wabs{y}\geq \wabs{z} - \wabs{y-z} > \wabs{z} - \beta^{e} w = \beta^{e + \mu}, \] and $d \geq e$. Therefore $d = e$, $y = \mathds{S}ign{y} \beta^{e} \wlr{\beta^{\mu} + v}$ and $\wabs{y - z} = \beta^{e} \wabs{v - w}$. \peProof{Proposition}{propNormalFormCont}\\ \pbProofB{Proposition}{propNormalFormDis} We have that \[ \wabs{1 - \frac{y}{z}} < \frac{\beta - 1}{\beta} < 1 \ \ \Rightarrow \frac{y}{z} > 0 \Rightarrow y \neq 0, \] and Prop. \ref{propNormalForm} yields $d \in \mathds Z{}$ and $w \in [0,\wlr{\beta - 1} \beta^{\mu})$ such that $y = \mathds{S}ign{y} \beta^{d} \wlr{\beta^{\mu} + w}$. The inequality \[ \frac{\mathds{S}ign{y} \beta^{d}\wlr{\beta^{\mu} + w}}{\mathds{S}ign{z} \beta^{e + \mu}} = \frac{y}{z} > 0 \] implies that $\mathds{S}ign{y} = \mathds{S}ign{z}$. We also have that \[ \beta^{d} \wlr{\beta^{\mu} + w} = \wabs{y} \leq \wabs{z} + \wabs{y-z} < \wabs{z} + \beta^{e + \mu - 1} \wlr{\beta - 1} = \beta^{e} \wlr{\beta^\mu + \beta^{\mu - 1} \wlr{\beta - 1} } \] and Prop. \ref{propOrder} implies that $d \leq e$. If $\wabs{y} \geq \wabs{z}$ then Prop. \ref{propOrder} implies that $d \geq e$. It follows that $d = e$ and the conditions in items (ii) and (iii) in Prop. \ref{propNormalFormDis} are satisfied. If $\wabs{y} < \wabs{z}$ then Prop. \ref{propOrder} implies that $d < e$ and \[ \beta^{d} \wlr{\beta^{\mu} + w} = \wabs{y} \geq \wabs{z} - \beta^{e + \mu - 1} \wlr{\beta - 1} = \beta^{e - 1} \wlr{\beta^{\mu + 1} - \wlr{\beta^{\mu + 1} - \beta^{\mu}}} = \beta^{e - 1 + \mu} \] and Prop.
\ref{propOrder} imply that $d \geq e - 1$. Therefore $d = e - 1$ and the conditions in item (i) in Prop. \ref{propNormalFormDis} are satisfied. \peProof{Proposition}{propNormalFormDis}\\ \subsection{Floating point systems} \label{secfloatSys} In this section we present more definitions related to floating point systems and more propositions about them. We prove the propositions regarding floating point systems stated in the previous sections and the propositions stated here. In most definitions, propositions and proofs in this section $\wcal{F}$ is a floating point system, $\wrm{fl}$ rounds to nearest in $\wcal{F}$, $z,w \in \mathds R{}$ and $x,y \in \wcal{F}$, and the numbers $\alpha$ and $\nu$ are as in Definitions \ref{longDefAlpha} and \ref{longDefNu}, and the exceptions are stated explicitly. \subsubsection{Propositions} \label{floatSysProps} This section presents more propositions regarding floating point systems. \pbPropBT{propAlpha}{Minimality of alpha} $\alpha \in \wcal{F}$ and if $x \in \wcal{F} \setminus \mathds{S}et{0}$ then $\wabs{x} \geq \alpha$. \peProp{propAlpha} \pbPropBT{propEmptyNormalRange}{Empty normal range} If $e$ is an exponent for $\wcal{F}$ and $r$ is an integer with $r \in [0,\wlr{\beta - 1} \beta^{\mu})$ then $\wcal{F} \cap \wlr{\beta^{e} \wlr{\beta^{\mu} + r}, \, \beta^{e} \wlr{\beta^{\mu} + r + 1}} = \emptyset$. \peFullProp{propEmptyNormalRange} \pbPropBT{propEmptySubnormalRange}{Empty subnormal range} Let $\wcal{I}_{e_{\alpha}}$ be an IEEE system. If $r \in \mathds Z{}$ and $-\beta^{\mu} \leq r < \beta^{\mu}$ then $\wlr{\beta^{e_{\alpha}} r, \, \beta^{e_{\alpha}} \wlr{r+ 1}} \cap \wcal{I}_{e_{\alpha}} = \emptyset$. \peFullProp{propEmptySubnormalRange} \pbPropBT{propSysScale}{Scale invariance} If $\wcal{F}$ is perfect then $x \in \wcal{F}$ if and only if $\beta x \in \wcal{F}$. If $\wcal{F}$ is unperfect and $x \in \wcal{F}$ then $\beta x \in \wcal{F}$. 
\peProp{propSysScale} \subsubsection{Proofs} \label{floatSysProofs} In this section we prove the propositions regarding floating point systems.\\ \pbProofB{Proposition}{propIntegerForm} According to Definitions \ref{longDefMPFR} and \ref{longDefIEEE} of MPFR system and IEEE system, we have three possibilities: (i) $x = 0$, in which case $x = \beta^{e_{\alpha}} r$ for $r = 0$, (ii) $x$ is subnormal, and $\wcal{F}$ is an IEEE system and $x = \beta^{e_{\alpha}} r$ with $\wabs{r} \in [1,\beta^{\mu}) \cap \mathds Z{}$ and (iii) $x \in \wcal{E}_{e}$ for some $e \geq e_{\alpha}$, and $x = \beta^{e_{\alpha} + e} r$ with $\wabs{r} \in [\beta^{\mu},\beta^{1 + \mu}) \cap \mathds Z{}$. \peProof{Proposition}{propIntegerForm}\\ \pbProofB{Proposition}{propSymmetry} In the three possible cases, Definitions \ref{longDefPerfect}, \ref{longDefMPFR} and \ref{longDefIEEE}, the floating point systems are clearly symmetric. \peProof{Proposition}{propSymmetry}\\ \pbProofB{Proposition}{propNu} If $\wcal{F}$ is a perfect system or a MPFR system and $x \in \wcal{F} \setminus \mathds{S}et{0}$ then $\wabs{x} \in \wcal{E}_{e}$ for some exponent $e$ for $\wcal{F}$ by Definitions \ref{longDefPerfect} and \ref{longDefMPFR}. If $\wcal{F}$ is an IEEE system $\wcal{I}_{e_{\alpha}}$ then $\nu = \beta^{e_{\alpha} + \mu}$ and $x$ with $\wabs{x} \geq \nu$ is not subnormal. As a result, by definition of IEEE system, $\wabs{x} \in \wcal{E}_{e}$ for some exponent $e$ for $\wcal{F}$. If $0 < \wabs{x} < \nu$ then $\wcal{F}$ is not perfect, because $\nu = 0$ for perfect systems. Moreover, $\wabs{x} \not \in \wcal{E}_{e}$ for $e \geq e_{\alpha}$ and, by Definition \ref{longDefMPFR}, $\wcal{F}$ is not a MPFR system. Therefore, $\wcal{F}$ is an IEEE system and $x$ is subnormal. Regarding the converse part, if $r$ is a multiple of $\beta$ then we can replace $e$ by $e + 1$ and $r$ by $r/\beta$ and $z$ stays the same. Therefore, we can assume that $r$ is not a multiple of $\beta$. 
In particular, $\wabs{r} < \beta^{1 + \mu}$. By symmetry (Prop. \ref{propSymmetry}), it suffices to show that $\wabs{z} \in \wcal{F}$ when $\wabs{z} \geq \nu$. If $\wcal{F}$ is perfect then $\wabs{z} \in \wcal{E}_{e}$ and Prop. \ref{propNu} holds. Therefore, we can assume that $\wcal{F}$ is unperfect. In this case $\nu = \beta^{e_{\alpha} + \mu}$ by Definition \ref{longDefNu} and $\beta^{e} \wabs{r} \geq \beta^{e_{\alpha} + \mu}$; actually $\beta^{e} \wabs{r} > \beta^{e_{\alpha} + \mu}$ because $r$ is not a multiple of $\beta$. Since $0 < \wabs{r} < \beta^{1 + \mu}$, there exists a first integer $d \geq 1$ such that $\beta^d \wabs{r} \geq \beta^{1 + \mu}$. Dividing $\beta^{d-1} \wabs{r}$ by $\beta^{\mu}$ we obtain that $\beta^{d-1} \wabs{r} = \beta^{\mu} q + p$ for $p,q \in \mathds Z{}$ with $q \geq 0$ and $0 \leq p < \beta^{\mu}$. The definition of $d$ yields $\beta^{1 + \mu} > \beta^{d - 1} \wabs{r} = \beta^{\mu} q + p$ and \[ s := \wlr{q - 1} \beta^\mu + p < \wlr{\beta - 1} \beta^{\mu}. \] Moreover, \[ \beta^{1 + \mu} q + \beta p = \beta^{d} \wabs{r} \geq \beta^{1 + \mu} \Rightarrow q \geq 1 - p/\beta^\mu > 0 \ \ \Rightarrow q \geq 1 \ \ \Rightarrow s \geq 0. \] As a result, $\wabs{r} = \beta^{1 - d} \wlr{\beta^{\mu} + s}$ with $s \in [0, \wlr{\beta - 1} \beta^{\mu})$ and Prop. \ref{propOrder} leads to \[ \wabs{z} \geq \nu \Rightarrow \beta^{e + 1 - d} \wlr{\beta^{\mu} + s} \geq \beta^{e_{\alpha}} \wlr{\beta^\mu + 0} \] \[ \Rightarrow e + 1 - d \geq e_{\alpha} \Rightarrow \wabs{z} = \beta^{e + 1 - d} \wlr{\beta^{\mu} + s} \in \wcal{E}_{e + 1 - d} \subset \wcal{F}. \] \peProof{Proposition}{propNu}\\ \pbProofB{Proposition}{propSubnormalSum} Prop. \ref{propSymmetry} states that $x + y \in \wcal{I} \Leftrightarrow -\wlr{x + y} \in \wcal{I}$, and it suffices to show that $\wabs{x + y} \in \wcal{I}$.
Since $x$ and $y$ are subnormal, $x = \mathds{S}ign{x} \beta^{e_{\alpha}} r_x$ with $r_x \in [1,\beta^{\mu}) \cap \mathds Z{}$ and $y = \mathds{S}ign{y} \beta^{e_{\alpha}} r_y$ with $r_y \in [1,\beta^{\mu}) \cap \mathds Z{}$. If $\mathds{S}ign{x} = - \mathds{S}ign{y}$ then $\wabs{x + y} = \beta^{e_{\alpha}} \wabs{r_x - r_y}$ and $\wabs{x + y}$ is either $0$ or subnormal, because \[ \wabs{r_x - r_y} < \max \mathds{S}et{r_x,r_y} < \beta^{\mu}. \] If $\mathds{S}ign{x} = \mathds{S}ign{y}$ then $\wabs{x + y} = \beta^{e_{\alpha}} \wlr{r_x + r_y}$ with $1 < r_x + r_y < 2 \beta^{\mu} \leq \beta^{1 + \mu}$. If $r_x + r_y < \beta^{\mu}$ then $x + y$ is subnormal, otherwise $\wabs{x + y} \geq \beta^{e_{\alpha} + \mu} = \nu$ and Prop. \ref{propNu} implies that $\wabs{x + y} \in \wcal{I}$. \peProof{Proposition}{propSubnormalSum}\\ \pbProofB{Proposition}{propCriticalSum} Let us start with $s := x + z > 0$. If $x \leq \beta^{e + \mu}$ then Prop. \ref{propCriticalSum} holds because $z = s - x \geq \beta^{e} \wlr{r + 1/2} \geq \beta^{e} / 2$. If $x \geq \beta^{e + \mu + 1}$ then \[ z = s - x \leq \beta^e \wlr{r + 1/2 + \beta^{\mu} - \beta^{\mu + 1}} = - \beta^{e} / 2 - \beta^{e} \wlr{ \wlr{\beta - 1} \beta^{\mu} - \wlr{r + 1}} \leq -\beta^{e}/2, \] because $r \in [0,\wlr{\beta - 1} \beta^{\mu}) \cap \mathds Z{}$, and again $\wabs{z} \geq \beta^e/2$. Therefore, we only need to analyze the case $\beta^{e + \mu} < x < \beta^{e + \mu + 1}$. In this case, by Prop. \ref{propNormalForm}, $x = \beta^{d} \wlr{\beta^\mu + t}$ for $d \in \mathds Z{}$ and $t \in [0,\wlr{\beta - 1}\beta^{\mu}) \cap \mathds Z{}$. Prop. \ref{propOrder} implies that $d = e$ and \[ z = s - x = \beta^{e} \wlr{r - t + 1/2} \ \ \Rightarrow \ \ \wabs{z} \geq \beta^{e} \wabs{r - t + 1/2} \geq \beta^{e}/2, \] because $r - t \in \mathds Z{}$, and we are done with the case $x + z > 0$. Finally, when $x + z < 0$ the argument above applied to $-x$ and $-z$ leads to $\wabs{z} = \wabs{-z} \geq \beta^e/2$.
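As a side remark (an illustration only, not part of the formal development), the conclusion of Prop. \ref{propSubnormalSum} can be checked numerically in IEEE binary64 arithmetic, which is an IEEE system with $\beta = 2$, $\mu = 52$ and $e_{\alpha} = -1074$ in our notation. The Python sketch below compares the hardware sum of two subnormals with the exact rational sum:

```python
from fractions import Fraction
import random

random.seed(1)
TINY = 2.0 ** -1074                      # beta**e_alpha: smallest positive subnormal
for _ in range(1000):
    rx = random.randint(-(2**52 - 1), 2**52 - 1)
    ry = random.randint(-(2**52 - 1), 2**52 - 1)
    x, y = rx * TINY, ry * TINY          # two subnormal (or zero) numbers
    # the hardware sum incurs no rounding error at all:
    assert Fraction(x) + Fraction(y) == Fraction(x + y)
```

The assertion never fails because, as in the proof above, $\wabs{x + y}$ is zero, subnormal, or at least $\nu$, so the exact sum already belongs to the system.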
\peProof{Proposition}{propCriticalSum}\\ \pbProofB{Proposition}{propAlpha} If $\wcal{F}$ is perfect then $\alpha = 0 \in \wcal{F}$ and Prop. \ref{propAlpha} is trivial. If $\wcal{F}$ is the MPFR system $\wcal{M}_{e_{\alpha},\beta,\mu}$ then $\alpha = \beta^{e_{\alpha} + \mu} \in \wcal{E}_{e_{\alpha}} \subset \wcal{F}$ and if $x \in \wcal{F} \setminus \mathds{S}et{0}$ then $\wabs{x} \in \wcal{E}_{e}$ for some $e \geq e_{\alpha}$. By definition of $\wcal{E}_{e}$, $\wabs{x} = \beta^{e} \wlr{\beta^{\mu} + r}$ with $r \geq 0$ and $\wabs{x} \geq \alpha$. Finally, if $\wcal{F}$ is the IEEE system $\wcal{I}_{e_{\alpha},\beta,\mu}$ then $\alpha = \beta^{e_{\alpha}} \in \wcal{S}_{e_{\alpha}} \subset \wcal{F}$ and if $x \in \wcal{F} \setminus \mathds{S}et{0}$ then $\wabs{x} \in \wcal{F} \setminus \mathds{S}et{0}$ and either (i) $\wabs{x} \in \wcal{S}_{e_{\alpha}}$ or (ii) $\wabs{x} \in \wcal{E}_{e}$ with $e \geq e_{\alpha}$. In case (i), $\wabs{x} = \beta^{e_{\alpha}} r$ for $r \in \mathds Z{} \setminus \mathds{S}et{0}$ and $\wabs{x} \geq \alpha$. As for the MPFR system, in case (ii) $\wabs{x} \geq \alpha$. \peProof{Proposition}{propAlpha}\\ \pbProofB{Proposition}{propEmptyNormalRange} We show that if $z \in \wlr{\beta^{e}\wlr{\beta^{\mu} + r}, \ \beta^{e}\wlr{\beta^{\mu} + r + 1}}$ then $z \not \in \wcal{F}$. By Props. \ref{propOrder} and \ref{propNormalForm}, there exists $w$ with $r < w < r + 1$ such that $z = \beta^{e}\wlr{\beta^{\mu} + w}$. We have $z \not \in - \wcal{E}_{d}$ because $z > 0$. Prop. \ref{propOrder} implies that if $d > e$ then $z < y$ for $y \in \wcal{E}_{d}$ and if $d < e$ then $z > y$ for $y \in \wcal{E}_{d}$. Therefore, $z \not \in \bigcup_{d \neq e} \wcal{E}_{d}$. Moreover, if $y \in \wcal{E}_{e}$ then $y = \beta^{e} \wlr{\beta^{\mu} + s}$ with $s \in \mathds Z{}$ and $y \neq z$ because $w \not \in \mathds Z{}$. As a result, $z \not \in \bigcup_{d \in \mathds Z{}} \wlr{\wcal{E}_{d} \cup -\wcal{E}_{d}}$.
This proves that $z \not \in \wcal{F}$ when $\wcal{F}$ is a perfect or MPFR system. Finally, if $\wcal{F}$ is an IEEE system $\wcal{I}_{e_{\alpha}}$ then $e \geq e_{\alpha}$ because $e$ is an exponent for $\wcal{F}$, and $z > y$ for all $y \in \wcal{S}_{e_{\alpha}} \cup -\wcal{S}_{e_{\alpha}}$. This shows that $z \not \in \wcal{S}_{e_{\alpha}} \cup -\wcal{S}_{e_{\alpha}}$ and $z \not \in \wcal{F}$. \peProof{Proposition}{propEmptyNormalRange}\\ \pbProofB{Proposition}{propEmptySubnormalRange} We show that if $z \in \wlr{\beta^{e_{\alpha}}r, \ \beta^{e_{\alpha}}\wlr{r + 1}}$ then $z \not \in \wcal{I}$. We have that $\wabs{z} < \beta^{e_{\alpha}} \max \mathds{S}et{\wabs{r},\wabs{r + 1}} \leq \beta^{e_{\alpha} + \mu}$ and $\wabs{z} \not \in \bigcup_{e = e_{\alpha}}^{\infty} \wcal{E}_{e}$. Moreover, $w := \beta^{-e_{\alpha}} z$ is such that $r < w < r + 1$ and $z = \beta^{e_{\alpha}} w$. It follows that $w \not \in \mathds Z{}$ and $\wabs{z} \not \in \wcal{S}_{e_{\alpha}}$, and combining the arguments above and symmetry (Prop. \ref{propSymmetry}) we conclude that $z \not \in \wcal{I}$. \peProof{Proposition}{propEmptySubnormalRange}\\ \pbProofB{Proposition}{propSysScale} Since $\wcal{A}_e := \mathds{S}et{\beta x, x \in \wcal{E}_{e}} = \wcal{E}_{e + 1}$, the set $\wcal{P}$ in Definition \ref{longDefPerfect} is such that $x \in \wcal{P}$ if and only if $\beta x \in \wcal{P}$. For the MPFR system $\wcal{M}_{e_{\alpha},\beta,\mu}$, if $x \in \wcal{M}$ then $\wabs{x} \in \wcal{E}_{e}$ for some $e \geq e_{\alpha}$, $\wabs{\beta x} \in \wcal{E}_{e +1} \subset \wcal{M}$ and $\beta x \in \wcal{M}$ by symmetry (Prop. \ref{propSymmetry}). For the IEEE system $\wcal{I}_{e_{\alpha},\beta,\mu}$, if $x \in \wcal{I}$ then either $\wabs{x} \in \wcal{E}_{e}$ for some $e \geq e_{\alpha}$, and the argument used in the MPFR case applies to $x$, or $x = \mathds{S}ign{x} \beta^{e_{\alpha}} r$ with $r \in [0,\beta^{\mu}) \cap \mathds Z{}$.
If $\beta r < \beta^{\mu}$ then $\wabs{\beta x} = \beta^{e_{\alpha}} \wlr{\beta r} \in \wcal{S}_{e_{\alpha}}$ and $\beta x \in \wcal{I}$ by symmetry. If $\beta r \geq \beta^\mu$ then $s = \beta r - \beta^{\mu} \in [0, \wlr{\beta - 1} \beta^{\mu}) \cap \mathds Z{}$ and $\wabs{\beta x} = \beta^{e_{\alpha}} \wlr{\beta^{\mu} + s} \in \wcal{E}_{e_{\alpha}} \subset \wcal{I}$ and $\beta x \in \wcal{I}$ by symmetry. \peProof{Proposition}{propSysScale}\\ \subsection{Rounding} \label{secRounding} This section proves the propositions about rounding to nearest stated previously, and states and proves more propositions about rounding. \subsubsection{Propositions} \label{secRoundingProps} In this section we state more propositions regarding rounding to nearest.\\ \pbPropBT{propRoundSign}{Propagation of the sign} If $\wfl{z} \neq 0$ then $\mathds{S}ign{\wfl{z}} = \mathds{S}ign{z}$. For a general $z \in \mathds R{}$, $\wfl{z} = \mathds{S}ign{z} \wabs{\wfl{z}}$. \peFullProp{propRoundSign} \pbPropBT{propRoundScale}{Rounding after scaling} Let $m$ be an integer. If $\wcal{F}$ is perfect then the function $\wflr{s}{z} := \beta^{-m} \wfl{\beta^m z}$ rounds to nearest in $\wcal{F}$. \peProp{propRoundScale} \pbPropBT{propRoundOnInterval}{Rounding in an interval} If $a,b \in \wcal{F}$ and $a \leq z \leq b$ then $\wfl{z} \in [a,b]$ and $\wabs{\wfl{z}- z} \leq (b - a)/2$. Moreover, if $z < m := (a + b)/2$ then $\wfl{z} < b$ and if $z > m$ then $\wfl{z} > a$. \peFullProp{propRoundOnInterval} \pbPropBT{propRoundCombination}{Combination} For $\wcal{A}_1, \wcal{A}_2 \subset \mathds R{}$ with $\wcal{A}_1 \cup \wcal{A}_2 = \mathds R{}$, let $f_i: \wcal{A}_i \rightarrow \mathds R{}$ be such that, for $z_i \in \wcal{A}_i$ and $x \in \wcal{F}{}$, $\wfc{f_i}{z_i} \in \wcal{F}$ and $\wabs{z_i - \wfc{f_i}{z_i}} \leq \wabs{z_i - x}$.
The function $\wrm{fl}: \mathds R{} \rightarrow \mathds R{}$ given by $\wfl{z} = \wfc{f_1}{z}$ for $z \in \wcal{A}_1$ and $\wfl{z} = \wfc{f_2}{z}$ for $z \in \wcal{A}_2 \setminus \wcal{A}_1$ rounds to nearest in $\wcal{F}$. \peProp{propRoundCombination} \pbPropBT{propRoundExtension}{Extension} If $\wcal{A} \subset \mathds R{}$ and $f: \wcal{A} \rightarrow \mathds R{}$ is such that, for $z \in \wcal{A}$ and $x \in \wcal{F}$, $\wfc{f}{z} \in \wcal{F}$ and $\wabs{z - \wfc{f}{z}} \leq \wabs{z - x}$ then there exists a function $\wrm{fl}$ which rounds to nearest in $\wcal{F}$ and is such that $\wfl{z} = \wfc{f}{z}$ for $z \in \wcal{A}$. \peProp{propRoundExtension} \subsubsection{Proofs} \label{secRoundingProofs} In this section we prove the propositions regarding rounding to nearest.\\ \pbProofB{Proposition}{propRoundIdentity} By definition of rounding to nearest, $0 = \wabs{x - x} \geq \wabs{\wfl{x} - x}$. Therefore, $\wfl{x} = x$. \peProof{Proposition}{propRoundIdentity}\\ \pbProofB{Proposition}{propRoundMonotone} Let us show that if $\wfl{z} > \wfl{w}$ then $z > w$. Indeed, in this case we have that \[ \wabs{\wfl{w} - z} \geq \wabs{\wfl{z} - z} \geq \wfl{z} - z > \wfl{w} - z. \] Therefore, $\wabs{\wfl{w} - z} > \wfl{w} - z$ and this implies that $z > \wfl{w}$. It follows that \[ z - \wfl{w} = \wabs{\wfl{w} - z} \geq \wabs{\wfl{z} - z} \geq \wfl{z} - z \ \ \Rightarrow \ \ z \geq \frac{\wfl{z} + \wfl{w}}{2}. \] Similarly, \[ \wabs{w - \wfl{z}} \geq \wabs{w - \wfl{w}} \geq w - \wfl{w} > w - \wfl{z} \ \ \Rightarrow \ \ w \leq \wfl{z}, \] and \[ \wfl{z} - w = \wabs{\wfl{z} - w} \geq \wabs{\wfl{w} - w} \geq w - \wfl{w} \ \ \Rightarrow \ \ w \leq \frac{\wfl{z} + \wfl{w}}{2}. \] As a result, $w \leq \wlr{\wfl{z} + \wfl{w}}/2 \leq z$. Moreover, $w \neq z$ because $\wfl{z} \neq \wfl{w}$. Therefore, $z > w$ as we have claimed. Logically, we have proved that $z \leq w \Rightarrow \wfl{z} \leq \wfl{w}$. When $x \in \wcal{F}$ we have that $\wfl{x} = x$ (Prop. 
\ref{propRoundIdentity}) and the argument above shows that $\wfl{z} > x \Rightarrow z > x$ and $x > \wfl{z} \Rightarrow x > z$. Moreover, \[ \wabs{x} > \wabs{\wfl{z}} \Rightarrow \wabs{x} > \wfl{z} \Rightarrow \wabs{x} > z \] and using the function $\wrm{m}$ in Prop. \ref{propRoundMinus} we obtain \[ \wabs{x} > \wabs{\wfl{z}} \Rightarrow \wabs{x} > -\wfl{z} \Rightarrow \wabs{x} > \wflr{m}{-z} \Rightarrow \wabs{x} > -z. \] Therefore, $\wabs{x} > \wabs{\wfl{z}} \Rightarrow \wabs{x} > \max \mathds{S}et{z,-z} = \wabs{z}$. Finally, if $\wabs{x} < \wabs{\wfl{z}}$ then either (i) $\wfl{z} > 0$ or (ii) $\wfl{z} < 0$. In both cases Prop. \ref{propRoundSign} shows that $\mathds{S}ign{z} = \mathds{S}ign{\wfl{z}}$. In case (i) $z$ is positive and \[ \wabs{x} < \wabs{\wfl{z}} \Rightarrow \wabs{x} < \wfl{z} \Rightarrow \wabs{x} < z = \wabs{z} \] and in case (ii) $z$ is negative and \[ \wabs{x} < \wabs{\wfl{z}} \Rightarrow \wabs{x} < -\wfl{z} \Rightarrow \wabs{x} < \wflr{m}{-z} \Rightarrow \wabs{x} < -z = \wabs{z}. \] Therefore, $\wabs{x} < \wabs{\wfl{z}} \Rightarrow \wabs{x} < \wabs{z}$ in both cases and we are done. \peProof{Proposition}{propRoundMonotone}\\ \pbProofB{Proposition}{propRoundMinus} If $x \in \wcal{F}$ and $z \in \mathds R{}$ then $-x \in \wcal{F}$ by symmetry and \[ \wabs{\wflr{m}{z} - z} = \wabs{ \wlr{-\wfl{-z}} - z} = \wabs{\wfl{-z} - \wlr{-z}} \leq \wabs{\wlr{-x} - \wlr{-z}} = \wabs{x - z}. \] Therefore, $\wabs{\wflr{m}{z} - z} \leq \wabs{x -z}$ and $\wrm{m}$ rounds to nearest. \peProof{Proposition}{propRoundMinus}\\ \pbProofB{Proposition}{propRoundNormal} Let us start with $z > 0$. We have $w \leq \wlr{\beta - 1} \beta^{\mu}$, and if $\wfloor{w} = \wlr{\beta - 1} \beta^{\mu}$ then $a = b = \beta^{e + 1 + \mu} \in \wcal{E}_{e + 1}$, and this implies that $a,b \in \wcal{F}$ because $e + 1$ is also an exponent for $\wcal{F}$. Similarly, if $\wceil{w} = \wlr{\beta - 1} \beta^{\mu}$ then $b \in \wcal{E}_{e+1} \subset \wcal{F}$.
If $\wceil{w} < \wlr{\beta - 1} \beta^{\mu}$ then $0 \leq \wfloor{w} \leq \wceil{w} < \wlr{\beta - 1}\beta^{\mu}$ and $a,b \in \wcal{E}_{e} \subset \wcal{F}$. Therefore, in all cases, $a,b \in \wcal{F}$. If $w \in \mathds Z{}$ then $\wfloor{w} = \wceil{w}$ and $z = a = b \in \wcal{F}$ and $\wfl{z} = a = b = m$ because $\wfl{x} = x$ when $x \in \wcal{F}$ by Prop. \ref{propRoundIdentity}. If $w \not \in \mathds Z{}$ then $\wceil{w} = \wfloor{w} + 1$, Prop. \ref{propEmptyNormalRange} shows that $(a,b) \cap \wcal{F} = \emptyset$, Equation \pRef{thRoundNormal} follows from Prop. \ref{propRoundOnInterval}, and we also have that $(b - a) / 2 = \beta^{e}/2$. For the last paragraph in Prop. \ref{propRoundNormal}, we either have (i) $r \leq w$ or (ii) $r > w$. In case (i) \[ r \leq w \leq r + \wabs{w - r} < r + 1/2 \Rightarrow \wfloor{w} = r, \ \ \wceil{w} = r + 1 \] and \[ \frac{1}{2} \wlr{ \wfloor{w} + \wceil{w} } = r + 1/2 > w. \] This implies that $a = \beta^{e} \wlr{\beta^{\mu} + r}$, $b = \beta^{e} \wlr{\beta^{\mu} + r + 1}$ and $z < \wlr{a + b}/2$, and the results in the previous paragraph show that $\beta^{e} \wlr{\beta^{\mu} + r} = a = \wfl{z}$. In case (ii), $r > w \geq 0 \Rightarrow r \geq 1$ and \[ r - 1/2 \leq w < r \Rightarrow \wfloor{w} = r - 1, \ \ \wceil{w} = r \hspace{0.2cm} \wrm{and} \hspace{0.2cm} \frac{1}{2} \wlr{ \wfloor{w} + \wceil{w} } = r - 1/2 < w. \] This implies that $a = \beta^{e} \wlr{\beta^{\mu} + r - 1}$, $b = \beta^{e} \wlr{\beta^{\mu} + r}$ and $z > \wlr{a + b}/2$, and the results in the first paragraph of this proof show that $\beta^{e} \wlr{\beta^{\mu} + r} = b = \wfl{z}$. Finally, for $z < 0$, applying the arguments above to $\tilde{z} = -z$ and to the function $\wrm{m}$ in Prop. \ref{propRoundMinus}, together with symmetry (Prop. \ref{propSymmetry}), proves Prop. \ref{propRoundNormal} for $z$.
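As an aside, the behaviour around the midpoint described in Prop. \ref{propRoundNormal} can be observed directly in binary64. The Python sketch below (an illustration with arbitrarily chosen sample values; it assumes Python 3.9+ for \texttt{math.nextafter} and relies on CPython converting \texttt{Fraction} to \texttt{float} with correct rounding) takes two consecutive normal numbers and checks that reals on either side of their midpoint round to the closer endpoint, with error at most half the gap:

```python
import math
from fractions import Fraction

a = 1.5
b = math.nextafter(a, math.inf)     # the binary64 successor of a
gap = Fraction(b) - Fraction(a)     # one unit in the last place (beta**e here)
mid = Fraction(a) + gap / 2         # exact midpoint; not representable
below = mid - gap / 8               # a real number strictly between a and the midpoint
above = mid + gap / 8               # a real number strictly between the midpoint and b
# float() rounds to nearest, so each point goes to the closer endpoint,
# and the rounding error is bounded by (b - a)/2:
assert float(below) == a and float(above) == b
assert abs(below - Fraction(float(below))) <= gap / 2
assert abs(above - Fraction(float(above))) <= gap / 2
```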
\peProof{Proposition}{propRoundNormal}\\ \pbProofB{Proposition}{propRoundSubnormal} Recall that $\nu = \beta^{e_{\alpha}+ \mu} \in \wcal{E}_{e_{\alpha}} \subset \wcal{I}$, and by symmetry $-\nu \in \wcal{I}$. Let us write $w := \beta^{-e_{\alpha}} z$ and $r :=\wfloor{w}$. We have that $a = \beta^{e_{\alpha}} r$ and if $r = w$ then $a = b = z$ and $\wfl{z} = z$ by Prop. \ref{propRoundIdentity} and Prop. \ref{propRoundSubnormal} holds. Let us then assume that $r \neq w$. This implies that $w \not \in \mathds Z{}$, $r < \beta^{\mu}$, $r + 1 = \wceil{w}$ and $b = \beta^{e_{\alpha}} \wlr{r + 1}$. We have that $a \in \wcal{F}$ because \begin{eqnarray} \nonumber w < 1 - \beta^{\mu} & \Rightarrow & r = -\beta^{\mu} \Rightarrow a = -\nu \in - \wcal{E}_{e_{\alpha}} \subset \wcal{F}, \\ \nonumber 1 - \beta^{\mu} < w < 0 & \Rightarrow & 1 - \beta^{\mu} \leq r < 0 \Rightarrow a \in -\wcal{S}_{e_{\alpha}} \subset \wcal{F}, \\ \nonumber 0 < w < 1 & \Rightarrow & r = 0 \Rightarrow a = 0 \in \wcal{F}, \\ \nonumber 1 < w < \beta^{\mu} & \Rightarrow & 1 \leq r < \beta^{\mu} \Rightarrow a \in \wcal{S}_{e_{\alpha}} \subset \wcal{F}, \end{eqnarray} and $b \in \wcal{F}$ because \begin{eqnarray} \nonumber -\beta^{\mu} < w < -1 & \Rightarrow & 1 - \beta^{\mu} < r + 1 \leq -1 \Rightarrow b \in -\wcal{S}_{e_{\alpha}} \subset \wcal{F}, \\ \nonumber -1 < w < 0 & \Rightarrow & r + 1 = 0 \Rightarrow b = 0 \in \wcal{F}, \\ \nonumber 0 < w < \beta^{\mu } - 1 & \Rightarrow & 1 \leq r + 1 < \beta^{\mu} \Rightarrow b \in \wcal{S}_{e_{\alpha}} \subset \wcal{F}, \\ \nonumber \beta^{\mu} - 1 < w < \beta^{\mu} & \Rightarrow & r + 1 = \beta^{\mu} \Rightarrow b = \nu \in \wcal{E}_{e_{\alpha}} \subset \wcal{F}. \end{eqnarray} Therefore, by monotonicity $\wfl{z} \in [a,b] \cap \wcal{F}$ and Prop. \ref{propEmptySubnormalRange} implies that $\wfl{z} \in \mathds{S}et{a,b}$.
It follows that \[ \wabs{\wfl{z} - z} = \min \mathds{S}et{z - a, b - z} \leq \frac{b - a}{2} = \frac{\beta^{e_{\alpha}} \wlr{r + 1} - \beta^{e_{\alpha}} \wlr{r}}{2} = \alpha /2. \] Finally, if $z < m$ then $\wabs{b - z} > \wabs{a - z} \Rightarrow \wfl{z} = a$ and if $z > m$ then $\wabs{a - z} > \wabs{b - z} \Rightarrow \wfl{z} = b$. \peProof{Proposition}{propRoundSubnormal}\\ \pbProofB{Proposition}{propRoundBelowAlpha} Note that, by Prop. \ref{propAlpha}, if $x \in \wcal{F} \setminus \mathds{S}et{0,\pm \alpha}$ then $\wabs{x} > \alpha$. When $\wabs{z} < \alpha /2$, if $x \in \wcal{F} \setminus \mathds{S}et{0}$ then Prop. \ref{propAlpha} implies that $\wabs{x} \geq \alpha$ and \[ \wabs{x - z} \geq \wabs{x} - \wabs{z} \geq \alpha - \wabs{z} > \alpha / 2 > \wabs{z - 0}, \] and $\wfl{z} = 0$ because $0 \in \wcal{F}$. When $\wabs{z} = \alpha/2$, $\wabs{z - 0} = \wabs{z - \mathds{S}ign{z} \alpha} = \alpha / 2$ and $\wabs{z - \wlr{-\mathds{S}ign{z}} \alpha} = 3 \alpha/2$. As a result, if $x \in \wcal{F} \setminus \mathds{S}et{0,\pm \alpha}$ then \[ \wabs{x - z} \geq \wabs{x} - \wabs{z} > \alpha - \alpha / 2 = \alpha/2 = \wabs{z - 0}, \] and the bounds above imply that $\wfl{z} \in \mathds{S}et{0, \mathds{S}ign{z} \alpha}$. When $\alpha/2 < \wabs{z} < \alpha$, $\wabs{z - \mathds{S}ign{z} \alpha} = \alpha - \wabs{z} < \alpha/2$, $\wabs{z - 0} = \wabs{z} > \alpha/2$ and \[ \wabs{z - \wlr{-\mathds{S}ign{z}} \alpha} = \wabs{z} + \alpha > \alpha / 2. \] Moreover, if $x \in \wcal{F} \setminus \mathds{S}et{0,\pm \alpha}$ has the same sign as $z$ then $\wabs{x} > \alpha$ and \[ \wabs{x - z} = \wabs{x} - \wabs{z} = \wlr{\wabs{x} - \alpha} + \wlr{\alpha - \wabs{z}} > \wabs{\mathds{S}ign{z} \alpha - z} \] and if $x$ has the opposite sign of $z$ then $\wabs{x - z} \geq \wabs{x} > \alpha > \wabs{\mathds{S}ign{z} \alpha - z}$, and the bounds in this paragraph imply that $\wfl{z} = \mathds{S}ign{z} \alpha$. Finally, if $\wabs{z} = \alpha$ then $\wfl{z} = z = \mathds{S}ign{z} \alpha$ by Prop. \ref{propRoundIdentity}.
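The trichotomy just established can be observed in binary64, where $\alpha = 2^{-1074}$. In the illustrative Python check below, note that the tie at $\wabs{z} = \alpha/2$ lands on $0$ only because the default IEEE tie-breaking rule (ties-to-even) happens to pick one of the two choices that the proposition allows:

```python
alpha = 2.0 ** -1074           # smallest positive binary64: the alpha of an IEEE system
assert 0.25 * alpha == 0.0     # |z| < alpha/2 rounds to 0
assert 0.75 * alpha == alpha   # alpha/2 < |z| < alpha rounds to sign(z)*alpha
assert -0.75 * alpha == -alpha
assert 0.5 * alpha == 0.0      # the tie |z| = alpha/2: both 0 and alpha are nearest;
                               # the default mode (ties-to-even) picks 0
```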
\peProof{Proposition}{propRoundBelowAlpha}\\ \pbProofB{Proposition}{propRoundAdapt} Let $\wcal{A}$ be the set $\mathds{S}et{z \in \mathds R{} \ \wrm{with} \ \wabs{z} \geq \nu_{\wcal{F}}}$ and $f: \wcal{A} \rightarrow \mathds R{}$ the function $\wfc{f}{z} = \wfl{z}$. We claim that if $x \in \wcal{P}$ and $z \in \wcal{A}$ then $\wabs{z - \wfc{f}{z}} \leq \wabs{z - x}$. In fact, if $x \in \wcal{F}$ then $\wabs{x - z} \geq \wabs{\wfl{z} - z} = \wabs{\wfc{f}{z} - z}$, because $\wrm{fl}$ rounds to nearest in $\wcal{F}$. If $x \not \in \wcal{F}$ then \[ x \in \wcal{P} \setminus \wcal{F} \subset \wlr{\bigcup_{e = -\infty}^{+\infty} \wlr{ \wcal{E}_{e} \bigcup -\wcal{E}_e} \setminus \bigcup_{e = e_{\alpha}}^{+\infty} \wlr{\wcal{E}_{e} \bigcup - \wcal{E}_e}} = \bigcup_{e = -\infty}^{e_{\alpha}- 1} \wlr{\wcal{E}_{e} \bigcup - \wcal{E}_e} \] and \[ \wabs{x} < \beta^{e_{\alpha} - 1} \wlr{\beta^{\mu} + \wlr{\beta - 1} \beta^{\mu}} = \beta^{e_{\alpha} + \mu} = \nu_{\wcal{F}}. \] Since $\nu_{\wcal{F}} \in \wcal{F}$, if $z \geq \nu_{\wcal{F}}$ then $z \geq \wabs{x} \geq x$ and \[ \wabs{x - z} = z - x = \wabs{\nu_{\wcal{F}} - z} + \wabs{\nu_{\wcal{F}} - x} \geq \wabs{\wfl{z} - z} + \wabs{\nu_{\wcal{F}} - x} > \wabs{\wfc{f}{z} - z}. \] Similarly, $-\nu_{\wcal{F}} \in \wcal{F}$ by symmetry and if $z \leq - \nu_{\wcal{F}}$ then $z \leq - \wabs{x} \leq x$ and \[ \wabs{x - z} = x - z = \wabs{-\nu_{\wcal{F}} - z} + \wabs{-\nu_{\wcal{F}} - x} \geq \wabs{\wfl{z} - z} + \wabs{-\nu_{\wcal{F}} - x} > \wabs{\wfc{f}{z} - z}. \] Therefore, $\wabs{x - z} \geq \wabs{\wfc{f}{z} - z}$ in all cases. To complete the proof it suffices to take the extension of $f$ to $\mathds R{}$ given by Prop. \ref{propRoundExtension}. \peProof{Proposition}{propRoundAdapt}\\ \pbProofB{Proposition}{propRoundIEEEX} For $k = 1,\dots,n$ let $\wrm{fl}x_{k}$ be the adapter of $\wrm{fl}_k$ in Prop. \ref{propRoundAdapt}.
On the one hand, by the definition of $\wrm{fl}x_k$, we have that if $x,y \in \wcal{I}$ and $\wabs{x + y} \geq \nu_{\wcal{I}}$ then \pbDef{priee} \wflk{k}{x + y } = \wflxkf{k}{x + y}. \peDef{priee} On the other hand, Lemma \ref{lemIEEESum} shows that Equation \pRef{priee} holds when $\wabs{x + y} \leq \nu_{\wcal{I}}$. Therefore, Equation \pRef{priee} holds for all $x,y \in \wcal{I}$. For $\wvec{x} \in \wcal{I}^{n+1}$ define $\wvec{z} \in \wrn{n}$ as $z_1 := x_0 + x_1$ and $z_k := x_k$ for $2 \leq k \leq n$. We now prove by induction that $\wcal{S}umkf{k}{\wvec{z},\wrm{fl}t} = \wcal{S}umkf{k}{\wvec{z},\wrm{fl}tx}$. By definition, $\wcal{S}umkf{0}{\wvec{z},\wrm{fl}t} = 0 = \wcal{S}umkf{0}{\wvec{z},\wrm{fl}tx} $. Let us then analyze $k > 0$ assuming that $\wcal{S}umkf{k - 1}{\wvec{z},\wrm{fl}tx} = \wcal{S}umkf{k - 1}{\wvec{z},\wrm{fl}t} \in \wcal{I}$. Using Equation \pRef{priee} we deduce that \[ \wcal{S}umkf{k}{\wvec{z},\wrm{fl}tx} = \wflxkf{k}{\wcal{S}umkf{k-1}{\wvec{z},\wrm{fl}t} + z_{k}} = \wflk{k}{\wcal{S}umkf{k-1}{\wvec{z},\wrm{fl}t} + z_{k}} = \wcal{S}umkf{k}{\wvec{z},\wrm{fl}t} \in \wcal{I} \] and we are done. \peProof{Proposition}{propRoundIEEEX}\\ \pbProofB{Proposition}{propRoundMPFRX} For $k = 1,\dots,n$, let $\wrm{fl}xk{k}$ be the adapter of $\wrm{fl}k{k}$ in Prop. \ref{propRoundAdapt}. By the definition of $\wrm{fl}xk{k}$ we have that if $x,y \in \wcal{M}$ and $\wabs{x + y} \geq \alpha_{\wcal{M}} = \nu_{\wcal{M}}$ then \pbDef{prmp} \wflk{k}{x + y } = \wflxkf{k}{x + y}, \peDef{prmp} and, of course, this equation is also satisfied when $x + y = 0$. For $\wvec{x} \in \wcal{M}^{n+1}$ define $\wvec{z} \in \wrn{n}$ as $z_1 := x_0 + x_1$ and $z_k := x_k$ for $2 \leq k \leq n$. We now prove by induction that if $y_k := \wcal{S}umkf{k - 1}{\wvec{z},\wrm{fl}t} + z_k \geq 0$ for $k = 1,\dots,n$ then $\wcal{S}umkf{k}{\wvec{z},\wrm{fl}t} = \wcal{S}umkf{k}{\wvec{z},\wrm{fl}tx}$. By definition, $\wcal{S}umkf{0}{\wvec{z},\wrm{fl}t} = 0 = \wcal{S}umkf{0}{\wvec{z},\wrm{fl}tx} $.
Let us then analyze $k > 0$ assuming that $\wcal{S}umkf{k - 1}{\wvec{z},\wrm{fl}tx} = \wcal{S}umkf{k - 1}{\wvec{z},\wrm{fl}t} \in \wcal{M}$. The assumption that $y_k \geq 0$ and Prop. \ref{propAlpha} imply that either $y_k = 0$ or $y_k \geq \alpha_{\wcal{M}} = \nu_{\wcal{M}}$, and in both cases Equation \pRef{prmp} holds for $x + y = y_k$. It follows that \[ \wcal{S}umkf{k}{\wvec{z},\wrm{fl}tx} = \wflxkf{k}{\wcal{S}umkf{k-1}{\wvec{z},\wrm{fl}t} + z_{k}} = \wflxkf{k}{y_k} = \wflk{k}{y_k} = \wcal{S}umkf{k}{\wvec{z},\wrm{fl}t} \in \wcal{M}, \] and we are done. \peProof{Proposition}{propRoundMPFRX}\\ \pbProofB{Proposition}{propFlat} If $w = 0$ then we can take $\delta = \beta^{e-1}/2$, because in this case $z \in \wcal{E}_{e} \subset \wcal{F}$ and $\wflk{1}{z} = z$ by Prop. \ref{propRoundIdentity} and, according to Prop. \ref{propNormalFormDis}, if $\wabs{y - z} < \delta$ then either \begin{itemize} \item[(i)] $y = \mathds{S}ign{z} \beta^{e} \wlr{\beta^{\mu} + v}$ with \[ 0 \leq v = \beta^{-e} \wabs{y - z} < \beta^{-e} \delta < 1/2 \Rightarrow \wfloor{v} = 0 \] and $\wflk{2}{y} = \wflk{1}{z} = z$ by Prop. \ref{propRoundNormal}, or \item[(ii)] $y = \mathds{S}ign{z} \beta^{e - 1} \wlr{\beta^{\mu} + v}$ for \[ \wlr{\beta - 1} \beta^{\mu} - \beta^{1 - e} \wabs{y - z} = v < \wlr{\beta - 1} \beta^{\mu} \Rightarrow \] \[ \wlr{\beta - 1} \beta^{\mu} - 1/2 < v < \wlr{\beta - 1} \beta^{\mu} \Rightarrow \wceil{v} = \wlr{\beta - 1} \beta^{\mu} \] and, by Prop. \ref{propRoundNormal}, \[ \wflk{2}{y} = \mathds{S}ign{z} \beta^{e - 1} \wlr{\beta^{\mu} + \wlr{\beta - 1} \beta^{\mu}} = \mathds{S}ign{z} \beta^{e + \mu} = z = \wflk{1}{z}. \] \end{itemize} Let us then assume that $w > 0$ and write $m := \wfloor{w} + 1/2$ and show that \[ \delta = \beta^{e} \min \mathds{S}et{w, \, \wlr{\beta - 1} \beta^{\mu} - w, \, 1/2 - \wabs{m - w}, \, \wabs{m - w}} \] is a valid choice.
Note that $\delta > 0$, because $\wabs{m - w} \leq 1/2$ for a general $w$ and $w \neq 1/2$ for the particular $w$ we discuss here. If $\wabs{y - z} < \delta$ then Prop. \ref{propNormalFormCont} implies that $y = \mathds{S}ign{z} \beta^{e} \wlr{\beta^{\mu} + v}$ with \[ \wabs{v - w} = \beta^{-e} \wabs{y - z} < \beta^{-e} \delta \leq \min \mathds{S}et{ 1/2 - \wabs{m - w}, \wabs{m - w}}. \] On the one hand, if $w < m$ then $\wabs{m - w} = m - w$, \[ \wfloor{w} = m - 1/2 < m - \wlr{\wabs{w - m} + \wabs{v - w}} \leq v \leq w + \wabs{w - v} < w + \wabs{m - w} = m, \] $\wfloor{v} = \wfloor{w}$ and Prop. \ref{propRoundNormal} implies that $\wflk{2}{y} = \wflk{1}{z} = \mathds{S}ign{z} \beta^{e} \wlr{\beta^{\mu} + \wfloor{w}}$. On the other hand, if $w > m$ then $\wabs{m - w} = w - m$, \[ m = w - \wabs{w - m} < w - \wabs{w - v} \leq v \leq m + \wlr{\wabs{w - m} + \wabs{v - w}} < m + 1/2 = \wceil{w}, \] $\wceil{v} = \wceil{w}$ and Prop. \ref{propRoundNormal} implies that $\wflk{2}{y} = \wflk{1}{z} = \mathds{S}ign{z} \beta^{e} \wlr{\beta^{\mu} + \wceil{w}}$. \peProof{Proposition}{propFlat}\\ \pbProofB{Proposition}{propSumScale} For $k = 1,\dots,n$ Props. \ref{propRoundMinus} and \ref{propRoundScale} show that the function $\wflxkf{k}{z} := \sigma \beta^{-m} \wflk{k}{\sigma \beta^m z}$ rounds to nearest in $\wcal{P}$, and we define $\wrm{fl}tx := \mathds{S}et{\wrm{fl}xk{1},\dots,\wrm{fl}xk{n}}$. We now prove by induction on $k = 0, \dots, n$ that \pbDef{sumSPFoo} \wcal{S}umkf{k}{\sigma \beta^m \wvec{z},\wrm{fl}t} = \sigma \beta^m \wcal{S}umkf{k}{\wvec{z},\wrm{fl}tx}. \peDef{sumSPFoo} For $k = 0$, $\wcal{S}umkf{0}{\sigma \beta^m \wvec{z},\wrm{fl}t} = 0 = \sigma \beta^m \wcal{S}umkf{0}{\wvec{z},\wrm{fl}tx}$ by definition.
Assuming that \pRef{sumSPFoo} holds for $k \geq 0$ we have that \[ \wcal{S}umkf{k+1}{\sigma \beta^m \wvec{z},\wrm{fl}t} = \wflk{k+1}{\wcal{S}umkf{k}{\sigma \beta^m \wvec{z},\wrm{fl}t} + \sigma \beta^m z_{k+1}} \] \[ = \wflk{k+1}{\sigma \beta^m \wlr{\wcal{S}umkf{k}{\wvec{z},\wrm{fl}tx} + z_{k+1}}} = \sigma \beta^m \wflxkf{k+1}{\wcal{S}umkf{k}{\wvec{z},\wrm{fl}tx} + z_{k+1}} = \sigma \beta^m \wcal{S}umkf{k+1}{\wvec{z},\wrm{fl}tx}, \] and we are done. \peProof{Proposition}{propSumScale}\\ \pbProofB{Proposition}{propRoundSign} Prop. \ref{propRoundIdentity} shows that $\wfl{0} = 0$. Therefore, if $\wfl{z} \neq 0$ then either (i) $z > 0$ or (ii) $z < 0$. In case (i) \[ \wabs{\wfl{z} - z} \leq \wabs{0 - z} \Rightarrow z - \wfl{z} \leq z \Rightarrow \wfl{z} \geq 0 \Rightarrow \mathds{S}ign{\wfl{z}} = 1 = \mathds{S}ign{z}. \] In case (ii) \[ \wabs{\wfl{z} - z} \leq \wabs{0 - z} \Rightarrow \wfl{z} - z \leq -z \Rightarrow \wfl{z} \leq 0. \] Since $\wfl{z} \neq 0$ this implies that $\mathds{S}ign{\wfl{z}} = -1 = \mathds{S}ign{z}$. It follows that if $\wfl{z} \neq 0$ then $\wfl{z} = \mathds{S}ign{\wfl{z}} \wabs{\wfl{z}} = \mathds{S}ign{z} \wabs{\wfl{z}}$ and it is clear that this equality also holds when $\wfl{z} = 0$. \peProof{Proposition}{propRoundSign}\\ \pbProofB{Proposition}{propRoundScale} Suppose $x \in \wcal{F}{}$ and $z \in \mathds R{}$. When $\wcal{F}$ is perfect we have that $\beta^{m} x \in \wcal{F}$ by Prop. \ref{propSysScale} and since $\wrm{fl}$ rounds to nearest we have \[ \wabs{\wflr{s}{z} - z} \, = \, \wabs{ \wlr{\beta^{-m} \wfl{\beta^m z}} - z} \, = \, \beta^{-m} \wabs{\wfl{\beta^m z} - \wlr{\beta^{m} z}} \] \[ \leq \, \beta^{-m} \wabs{\wfl{\beta^m z} - \wlr{\beta^{m} x}} \, = \, \wabs{ \wlr{\beta^{-m} \wfl{\beta^m z}} - x} \, = \, \wabs{\wflr{s}{z} - x}. \] Therefore, $\wrm{s}$ rounds to nearest in $\wcal{F}$.
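The scale invariance behind Prop. \ref{propRoundScale} can be checked numerically. Binary64 is an IEEE system rather than a perfect one, but the identity $\beta^{-m} \wfl{\beta^m z} = \wfl{z}$ still holds as long as both values stay in the normal range, because scaling by a power of $\beta = 2$ maps the grid of representable numbers onto itself. A randomized Python sketch (illustrative; it assumes CPython converts \texttt{Fraction} to \texttt{float} with correct rounding):

```python
from fractions import Fraction
import random

random.seed(0)
for _ in range(200):
    # a "real" input, represented exactly as a rational number
    z = Fraction(random.getrandbits(70), random.getrandbits(35) + 1)
    m = random.randint(-30, 30)
    # fl(2**m * z) == 2**m * fl(z), well inside the normal range
    assert Fraction(float(z * Fraction(2) ** m)) == Fraction(2) ** m * Fraction(float(z))
```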
\peProof{Proposition}{propRoundScale}\\ \pbProofB{Proposition}{propRoundOnInterval} Since $a,b \in \wcal{F}$, the definition of rounding to nearest applied with $x = a$ and $x = b$ yields $\wabs{z - a} \geq \wabs{z - \wfl{z}}$ and $\wabs{z - b} \geq \wabs{z - \wfl{z}}$. If $y < a$ then $y < z$ and \[ \wabs{z - y} = z - y > z - a = \wabs{z - a} \geq \wabs{z - \wfl{z}} \ \ \Rightarrow \ \ \wabs{z - y} > \wabs{z - \wfl{z}}. \] Therefore, $\wfl{z} \neq y$. Similarly, if $y > b$ then $y > z$ and \[ \wabs{z - y} = y - z > b - z = \wabs{z - b} \geq \wabs{z - \wfl{z}} \ \ \Rightarrow \ \ \wabs{z - y} > \wabs{z - \wfl{z}}. \] As a result, $\wfl{z} \neq y$ and $\wfl{z} \in \mathds R{} \setminus \wlr{\mathds{S}et{y < a} \cup \mathds{S}et{y > b}} = [a,b]$. If $z \leq m$ then \[ \wabs{\wfl{z} - z} \leq \wabs{a - z} = z - a \leq m - a = \delta := \wlr{b - a}/2, \] and if $z \geq m$ then \[ \wabs{\wfl{z} - z} \leq \wabs{b - z} = b - z \leq b - m = \delta. \] Therefore, $\wabs{\wfl{z} - z} \leq \delta$. If $z < m$ then \[ \wfl{z} \leq \wabs{\wfl{z} - z} + \wlr{z - a} + a \leq \delta + z < \delta + m = b, \] and $\wfl{z} < b$. If $z > m$ then \[ \wfl{z} \geq b - \wlr{b - z} - \wabs{z - \wfl{z}} \geq z - \delta > m - \delta = a, \] and $\wfl{z} > a$. \peProof{Proposition}{propRoundOnInterval}\\ \pbProofB{Proposition}{propRoundCombination} If $z \in \mathds R{}$ then either (i) $z \in \wcal{A}_1$ or (ii) $z \in \wcal{A}_2 \setminus \wcal{A}_1$. In case (i), for $x \in \wcal{F}$ we have that $\wabs{x - z} \geq \wabs{\wflk{1}{z} - z}$ by hypothesis. Therefore, $\wabs{x - z} \geq \wabs{\wflk{1}{z} - z} = \wabs{\wfl{z} - z}$ in case (i). In case (ii), for $x \in \wcal{F}$ we have that $\wabs{x - z} \geq \wabs{\wflk{2}{z} - z} = \wabs{\wfl{z} - z}$. As a result, $\wabs{x - z} \geq \wabs{\wfl{z} - z}$ in both cases and $\wrm{fl}$ rounds to nearest in $\wcal{F}$.
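Prop. \ref{propRoundOnInterval} can also be checked by brute force on a small toy system. The sketch below (parameter values chosen arbitrarily for illustration) enumerates an IEEE-style system with $\beta = 2$, $\mu = 2$, implements round to nearest by exhaustive search, and verifies the conclusion on a grid of sample points:

```python
from fractions import Fraction

beta, mu, e_alpha, e_max = 2, 2, -2, 3           # a tiny IEEE-style toy system
F = {Fraction(0)}
for e in range(e_alpha, e_max + 1):              # normal numbers beta**e * r
    for r in range(beta**mu, beta**(mu + 1)):
        v = Fraction(r) * Fraction(beta) ** e
        F |= {v, -v}
for r in range(1, beta**mu):                     # subnormals beta**e_alpha * r
    v = Fraction(r) * Fraction(beta) ** e_alpha
    F |= {v, -v}

def fl(z):
    # brute-force round to nearest (ties broken toward the smaller value)
    return min(F, key=lambda x: (abs(x - z), x))

lo, hi = min(F), max(F)
for k in range(101):
    z = lo + Fraction(k, 100) * (hi - lo)        # sample reals in [min F, max F]
    a = max(x for x in F if x <= z)              # floor and ceiling of z in F
    b = min(x for x in F if x >= z)
    assert a <= fl(z) <= b and abs(fl(z) - z) <= (b - a) / 2
```

Any tie-breaking rule in `fl` would satisfy the same assertions, matching the fact that the proposition holds for every function that rounds to nearest.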
\peProof{Proposition}{propRoundCombination}\\ \pbProofB{Proposition}{propRoundExtension} We assume that there exists $f_2: \mathds R{} \rightarrow \mathds R{}$ which rounds to nearest in $\wcal{F}$. Take $\wcal{A}_1 = \wcal{A}$ and $\wcal{A}_2 = \mathds R \setminus \wcal{A}$. Prop. \ref{propRoundCombination} with $f_1 = f$ implies that there exists $\wrm{fl}$ which rounds to nearest in $\wcal{F}$ and is such that $\wfl{z} = \wfc{f}{z}$ for $z \in \wcal{A}$. \peProof{Proposition}{propRoundExtension} \\ \subsection{Tightness} \label{secTight} In this section we prove the propositions regarding tightness, and present and prove additional propositions about this subject. \subsubsection{Propositions} \label{secTightProps} In this section we present additional propositions regarding tightness. \pbPropBT{propTightContinuous}{Tightness and continuity} Let $\wcal{A}$, $\wcal{B}$ and $\wcal{C}$ be topological spaces and $\wcal{R}$ a set. If $g: \wcal{A} \times \wcal{B} \rightarrow \wcal{C}$ is continuous and $h: \wcal{A} \times \wcal{R} \rightarrow \wcal{B}$ is tight then $f: \wcal{A} \times \wcal{R} \rightarrow \wcal{C}$ given by $\wfc{f}{a, r} = \wfc{g}{a,\wfc{h}{a,r}}$ is tight. In particular, if $\wcal{R}$ is a tight set of functions from $\wcal{A}$ to $\wcal{B}$ then the function $f: \wcal{A} \times \wcal{R} \rightarrow \wcal{B}$ given by $\wfc{f}{a,r} = \wfc{g}{a,\wfc{r}{a}}$ is tight. \peProp{propTightContinuous} \pbPropBT{propTightChain}{Tight chain rule} Let $\wcal{A}$, $\wcal{B}$ and $\wcal{C}$ be topological spaces and let $\wcal{T}$ and $\wcal{U}$ be sets. If the functions $h: \wcal{A} \times \wcal{T} \rightarrow \wcal{B}$ and $g: \wcal{B} \times \wcal{U} \rightarrow \wcal{C}$ are tight then the function $f: \wcal{A} \times \wlr{\wcal{T} \times \wcal{U}} \rightarrow \wcal{C}$ given by $\wfc{f}{a, \wlr{t,u}} := \wfc{g}{\wfc{h}{a,t}, u}$ is tight. 
\peProp{propTightChain} \subsubsection{Proofs} \label{secTightProofs} This section contains the proofs of the propositions regarding tightness.\\ \pbProofB{Proposition}{propWholeIsTight} Let $\wcal{R}$ be the set of all functions which round to nearest in $\wcal{F}$ and let $\wcal{S} = \mathds{S}et{\wlr{z_k, \wrm{fl}_k}, k \in \mathds N{}} \subset \mathds R{} \times \wcal{R}$ be a sequence with $\lim_{k \rightarrow \infty} z_k = z$. Props. \ref{propRoundSubnormal}, \ref{propRoundBelowAlpha} and \ref{propFlat} imply that there exist $a,b \in \wcal{F}$ and $\delta > 0$ such that if $\wabs{y - z} < \delta$ then $\wfl{y} \in \mathds{S}et{a,b}$ for $\wrm{fl} \in \wcal{R}$. Let $m \in \mathds N{}$ be such that $k > m \Rightarrow \wabs{z_k - z} < \delta$ and define $\wcal{A} := \mathds{S}et{k > m \ \wrm{with} \ \wflk{k}{z_k} = a}$ and $\wcal{B} := \mathds{S}et{k > m \ \wrm{with} \ \wflk{k}{z_k} = b}$. Since $\wcal{A} \bigcup \wcal{B} = \mathds{S}et{k > m, k \in \mathds N{}}$ is infinite, $\wcal{A}$ or $\wcal{B}$ is infinite. By exchanging $a$ and $b$ if necessary, we may assume that $\wcal{A}$ is infinite, and $\mathds{S}et{\wlr{z_{n_k}, \wrm{fl}_{n_k}}, n_k \in \wcal{A}}$ is a subsequence of $\wcal{S}$. We claim that the function $\wrm{fl}: \mathds R{} \rightarrow \mathds R{}$ given by $\wfl{w} = \wflk{m}{w}$ for $w \neq z$ and $\wfl{z} = a$ rounds to nearest in $\wcal{F}$. Indeed, if $z' \in \mathds R{} \setminus \mathds{S}et{z}$ and $w \in \wcal{F}$ then \[ \wabs{w - z'} \geq \wabs{\wflk{m}{z'} - z'} = \wabs{\wfl{z'} - z'} \] because $\wrm{fl}k{m}$ rounds to nearest in $\wcal{F}$, and \[ \wabs{w - z_{n_k}} \geq \wabs{\wflk{n_k}{z_{n_k}} - z_{n_k}} = \wabs{a - z_{n_k}} = \wabs{\wfl{z} - z_{n_k}}, \] because the $\wrm{fl}k{n_k}$ round to nearest in $\wcal{F}$.
Taking the limit $k \rightarrow \infty$ in the inequality above we obtain $\wabs{w - \wfl{z}} \geq \wabs{z - \wfl{z}}$, and $\wrm{fl}$ rounds to nearest in $\wcal{F}$. Finally, \[ \lim_{k \rightarrow \infty} \wfc{\varphi}{z_{n_k},\wrm{fl}_{n_k}} = \lim_{k \rightarrow \infty} \wflk{n_k}{z_{n_k}} = a = \wfl{z} \] and $\wcal{R}$ is tight. \peProof{Proposition}{propWholeIsTight}\\ \pbProofB{Proposition}{propSumsAreTight} For $n = 0$, $\wfc{T_0}{\wvec{z},\wrm{fl}t} = 0$ and Prop. \ref{propSumsAreTight} follows from Prop. \ref{propTightContinuous}, because constant functions are continuous. Assuming that Prop. \ref{propSumsAreTight} holds for $n \geq 0$, let us show that it holds for $n + 1$. By induction and Prop. \ref{propTightContinuous} the function $h: \wrn{n+1} \times \wcal{R}^n \rightarrow \wrn{n+1} \times \mathds R{}$ given by $\wfc{h}{\wvec{w},\wrm{fl}t} := \wlr{\wfc{T_n}{\wrm{P}_n \wvec{w}, \wrm{fl}t},w_{n+1}}$ is tight. The function $g: \wlr{\wrn{n+1} \times \mathds R} \times \wcal{R} \rightarrow \wrn{n+2}$ given by $\wfc{g}{\wlr{\wvec{w},z}, \wrm{fl}} := \wlr{\wvec{w}, \wfl{w_{n+1} + z}}$ is also tight by Prop. \ref{propTightContinuous} because $\wcal{R}$ is tight. Finally, Prop. \ref{propSumsAreTight} follows from Prop. \ref{propTightChain} applied to $g$ and $h$, with $f = T_{n+1}$, because $\wfc{T_{n+1}}{\wvec{w},\wrm{fl}t} = \wfc{g}{\wfc{h}{\wvec{w},\wrm{P}_n \wrm{fl}t},\wrm{fl}_{n+1}}$. \peProof{Proposition}{propSumsAreTight}\\ \pbProofB{Proposition}{propTightContinuous} Let $\mathds{S}et{\wlr{a_k, r_k}, k \in \mathds N{}} \subset \wcal{A} \times \wcal{R}$ be a sequence with $\lim_{k \rightarrow \infty}a_k = a$. Since $h$ is tight, there exist $r \in \wcal{R}$ and a subsequence $\mathds{S}et{\wlr{a_{n_k}, r_{n_k}}, k \in \mathds N{}}$ such that $\lim_{k \rightarrow \infty} \wfc{h}{a_{n_k}, r_{n_k}} = \wfc{h}{a,r}$.
By continuity of $g$, \[ \lim_{k \rightarrow \infty} \wfc{f}{a_{n_k},r_{n_k}} = \lim_{k \rightarrow \infty} \wfc{g}{a_{n_k}, \wfc{h}{a_{n_k},r_{n_k}}} = \wfc{g}{a,\wfc{h}{a,r}} = \wfc{f}{a,r}, \] and $f$ is tight. To handle the particular case, note that when $\wcal{R}$ is a tight set of functions as in the hypothesis the function $h: \wcal{A} \times \wcal{R} \rightarrow \wcal{B}$ given by $\wfc{h}{a,r} = \wfc{r}{a}$ is tight. \peProof{Proposition}{propTightContinuous}\\ \pbProofB{Proposition}{propTightChain} Let $\mathds{S}et{\wlr{a_k,\wlr{t_k,u_k}}, k \in \mathds N{}} \subset \wcal{A} \times \wlr{\wcal{T} \times \wcal{U}}$ be a sequence such that $\lim_{k \rightarrow \infty} a_k = a$. Since $h$ is tight, there exist $t \in \wcal{T}$ and a subsequence $\mathds{S}et{\wlr{a_{n_k}, t_{n_k}}, k \in \mathds N{}}$ of $\mathds{S}et{\wlr{a_k,t_k}, k \in \mathds N{}}$ such that $b_{n_k} := \wfc{h}{a_{n_k}, t_{n_k}}$ satisfies $\lim_{k \rightarrow \infty} b_{n_k} = \wfc{h}{a,t} =: b$. Since $b_{n_k}$ converges to $b$ and $g$ is tight, there exist $u \in \wcal{U}$ and a subsequence $\mathds{S}et{\wlr{b_{m_k},u_{m_k}}, k \in \mathds N{}}$ of $\mathds{S}et{\wlr{b_{n_k},u_{n_k}}, k \in \mathds N{}}$ such that \[ \wfc{g}{b,u} = \lim_{k \rightarrow \infty} \wfc{g}{b_{m_k},u_{m_k}} = \lim_{k \rightarrow \infty} \wfc{g}{\wfc{h}{a_{m_k}, t_{m_k}},u_{m_k}}. \] This leads to \[ \wfc{f}{a,\, \wlr{t,u} \, } = \wfc{g}{\wfc{h}{a,t}, \, u} = \wfc{g}{b, \, u} \] \[ = \lim_{k \rightarrow \infty} \wfc{g}{\wfc{h}{a_{m_k}, \, t_{m_k}}, \, u_{m_k}} = \lim_{k \rightarrow \infty} \wfc{f}{a_{m_k}, \, \wlr{t_{m_k}, u_{m_k}}}, \] and $f$ is tight. \peProof{Proposition}{propTightChain}\\ \subsection{Examples} \label{secExample} In this section we verify Examples 2 to 5. Example 1 needs no verification.\\ \pbVerifyB{exQuadGrowth} Our parcels are $y_0 := 1$ and $y_k := 1 + 2^{\wfloor{\wfc{\log_2}{k+1}}} u$ for $k = 1, \dots, n := 2^m - 1$ and we break ties downward.
If $1 \leq 2^{\ell} - 1 \leq k < 2^{\ell + 1} - 1$ then $y_k = 1 + 2^{\ell} u$ and we now show by induction that, for $k \geq 1$, \pbDef{exqgI} \sum_{i = 0}^k y_i = k + 1 + \frac{4^{\ell} + 2}{3} u + \wlr{k + 1 - 2^{\ell}} 2^{\ell} u \hspace{1cm} \wrm{and} \hspace{1cm} \wfl{\sum_{i = 0}^k y_i} = k + 1. \peDef{exqgI} Indeed, for $k = 1$ we have $\ell = 1$ and $y_0 + y_1 = 2 + 2 u$, the first equality in Equation \pRef{exqgI} is clearly correct and the second holds because we break ties downward. If $\wfl{\sum_{i = 0}^k y_i} = k + 1$ and $2^{\ell} - 2 \leq k < 2^{\ell + 1} - 2$ then $y_{k+1} = 1 + 2^{\ell} u$ and \[ \wfl{\sum_{i = 0}^{k+1} y_i} = \wfl{k + 1 + 1 + 2^{\ell} u} = \wfl{k + 2 + 2^{\ell} u} = k + 2 = \wlr{k+1} + 1, \] because $k + 2 \geq 2^{\ell}$ and we break ties downward. Therefore, the second equality in Equation \pRef{exqgI} holds for all $k \geq 1$. Let us now assume that the first equality in Equation \pRef{exqgI} holds for $k$ and show that it holds for $k+1$. When $2^{\ell} - 1 \leq k < 2^{\ell + 1} - 2$ we have that $2^{\ell} - 1 \leq k + 1 < 2^{\ell + 1} - 1$ and \pbDef{exqgIA} \sum_{i = 0}^{k+1} y_i = \wlr{\sum_{i = 0}^{k} y_i} + y_{k+1} = k + 1 + \frac{4^{\ell} + 2}{3} u + \wlr{k + 1 - 2^{\ell}} 2^{\ell} u + 1 + 2^{\ell} u \peDef{exqgIA} \[ = \wlr{k + 1} + 1 + \frac{4^{\ell} + 2}{3} u + \wlr{\wlr{k + 1} + 1 - 2^{\ell}} 2^{\ell} u \] and the first equality in Equation \pRef{exqgI} holds for $k + 1$. For $k = 2^{\ell + 1} - 2$, we have that \[ 2^{\ell + 1} - 1 = k + 1 < 2^{\wlr{\ell + 1} +1} - 1 \] and Equation \pRef{exqgIA} leads to \[ \sum_{i = 0}^{k+1} y_i = k + 2 + \frac{4^{\ell} + 2}{3} u + 4^{\ell} u = \wlr{k + 1} + 1 + \frac{4^{\ell + 1} + 2}{3} u + \wlr{\wlr{k + 1} + 1 - 2^{\ell + 1}} 2^{\ell + 1} u \] because $k + 2 - 2^{\ell + 1} = 0$, and the first equality in Equation \pRef{exqgI} is satisfied for $k + 1$.
Finally, for $n = 2^m - 1$ we have that $\ell = m$ and \[ \frac{1}{u} \wlr{\sum_{k = 0}^n y_k - \wfl{\sum_{k = 0}^n y_k}} = \frac{4^{m} + 2}{3} = \frac{\wlr{n + 1}^2 + 2}{3} = \frac{n^2 + 2n + 3}{3}. \] The last equation in Example \ref{exQuadGrowth} follows from the equation above and the fact that $y_k < 2$ when $2^m u < 1$. \peVerify{exQuadGrowth}\\ \pbVerifyB{exSharpSumNearB} Let us define $\rho := u^{-1}$. Since $x_k = \rho^k$ and we break ties downward, we have $\wfl{\sum_{i = 0}^k x_i} = \rho^k$ and \[ \sum_{i = 0}^k x_i = \frac{\rho^{k+1} - 1}{\rho -1} \hspace{0.2cm} \wrm{and} \hspace{0.2cm} \sum_{k=1}^n \sum_{i = 0}^k x_i = \frac{\rho^{n+2} - \rho^2 - n \wlr{\rho - 1}}{\wlr{\rho -1}^2} = \frac{1}{u^n} \frac{1 - u^n - n u^{n+1} \wlr{1 - u}}{\wlr{1 - u}^2}. \] It follows that \[ \sum_{k = 0}^n x_k - \wfl{\sum_{k = 0}^n x_k} = \frac{\rho^{n+1} - 1}{\rho - 1} - \rho^n = \frac{\rho^n - 1}{\rho - 1} = \frac{1}{u^{n-1}} \frac{1 - u^n}{1 - u} = \kappa_n u \sum_{k=1}^n \sum_{i = 0}^k x_i \] for \[ \kappa_n := \frac{\wlr{1 - u}\wlr{{1 - u^{n}}}}{1 - u^{n} - n u^{n+1} \wlr{1 - u}}. \] If $2 n u < 1$ then $1 - u \leq \kappa_n \leq \wlr{1 - u} \wlr{1 + u^n}$ because \[ 0 < \frac{\kappa_n}{1 - u} - 1 = \frac{n u^{n+1} \wlr{1 - u}}{1 - u^{n} - n u^{n+1} \wlr{1 - u}} = u^n \frac{n u \wlr{1 - u}}{1 - u^n - n u^{n+1} \wlr{1 - u}} < u^n. \] \peVerify{exSharpSumNearB} \\ \pbVerifyB{exSharpSumNearC} Recall that $x_0 := u$, $x_1 := 1$ and \[ x_k := \beta^{e_k} \wlr{1 + u} - \beta^{e_{k-1}} \wlr{1 + 2 u} \] for $k \geq 2$, with $0 = e_1 < e_2 < \dots < e_n \in \mathds Z{}$. Induction using the basic properties of rounding to nearest in Prop.
\ref{propRoundNormal} shows that \[ s_k := \sum_{i = 0}^k x_i = \beta^{e_k} \wlr{1 + u} - u \sum_{i=1}^{k-1} \beta^{e_i}, \hspace{1.5cm} \hat{s}_{k} := \wfl{\sum_{i = 0}^k x_i} = \beta^{e_k} \wlr{1 + 2u} \] for $k \geq 1$ and \[ \sum_{k = 1}^n s_k = \wlr{1 + u} \sum_{k = 1}^n \beta^{e_k} - u \sum_{k =1}^n \sum_{i = 1}^{k - 1} \beta^{e_i} = \wlr{1 + u} \sigma_{n} - u \sum_{k = 1}^n \sigma_{k - 1}, \] for $\sigma_k := \sum_{i = 1}^{k} \beta^{e_i}$ (we use the convention that $\sum_{j = i}^{k} a_j = 0$ when $k < i$). Therefore \[ \hat{s}_{n} - s_{n} = \wlr{\beta^{e_n} + \sum_{k = 1}^{n-1} \beta^{e_k}} u = u \sum_{k = 1}^{n} \beta^{e_k} = u \sigma_n, \] and \pbDef{exncA} \frac{\hat{s}_{n} - s_{n}}{\sum_{k=1}^n s_k} = \frac{\sigma_n u}{\sigma_n + u \wlr{\sigma_n - \sum_{k=0}^{n-1} \sigma_k}} = \frac{u}{1 + u \wlr{1 - \sum_{k = 1}^{n - 1} v_k }} \hspace{0.5cm} \wrm{for} \hspace{0.5cm} v_k := \frac{\sigma_k}{\sigma_n}. \peDef{exncA} Note that \[ \sigma_{k+1} - 1 = \sum_{i = 1}^{k + 1} \beta^{e_i} - 1 = \sum_{i = 2}^{k + 1} \beta^{e_i} \geq \beta \sum_{i = 2}^{k + 1} \beta^{e_{i-1}} = \beta \sum_{i = 1}^{k} \beta^{e_{i}} = \beta \sigma_k. \] Since $\sigma_0 = 0$ and $1/\sigma_n = v_1 = \sigma_1 / \sigma_n$, dividing the last inequality by $\sigma_n$ we obtain \pbDef{excnLPA} v_1 + \beta v_k - v_{k+1} \leq 0 \hspace{1cm} \wrm{for} \ \ k = 1,\dots, n - 2, \hspace{0.5cm} \wrm{and} \hspace{0.5cm} v_1 + \beta v_{n-1} \leq 1. \peDef{excnLPA} We end the verification of Example \ref{exSharpSumNearC} using a duality argument to prove that \pbDef{lpineq} \sum_{k = 1}^{n - 1} v_k \leq \frac{1}{\beta - 1} - \frac{n}{\beta^n - 1}. \peDef{lpineq} This equation combined with Equation \pRef{exncA} shows that the value of $\tau_n$ mentioned in Example \ref{exSharpSumNearC} is appropriate.
We use basic facts about duality in linear programming \cite{Chvatal} applied to the problem with variables $v_k$, objective function $\sum_{k = 1}^{n-1} v_k$ and constraints given by $v_k \geq 0$ and Equation \pRef{excnLPA}. This problem can be written as \pbDef{excnLP} \left\{ \begin{array}{cccccccc} \wrm{maximize} & \mathds{1}{}^{\mathrm{T}} \wvec{v} & = & \sum_{k = 1}^{n-1} v_k & & & & \\ \wrm{subject \ to} & \wvec{A} \wvec{v} & \leq & \wvec{e}, & & v_k & \geq & 0, \end{array} \right. \peDef{excnLP} where the matrix $A$ has $a_{11} := \beta + 1$, $a_{i1} = 1$ for $1 < i < n$, $a_{ii} = \beta$ for $2 \leq i < n$, $a_{i,i+1} = -1$ for $1 \leq i < n-1$ and the remaining $a_{ij}$ are $0$. The vector $\mathds{1}$ has all its entries equal to $1$, and the vector $\wvec{e}$ has entries $e_i = 0$ for $1 \leq i < n - 1$ and $e_{n-1} = 1$ (these $e_i$ are not to be confused with the exponents $e_k$ above). This problem has a feasible solution \[ v_k = \frac{\beta^{k} - 1}{\beta^{n}- 1}, \hspace{1cm} \wrm{for} \hspace{1cm} k = 1,\dots,n-1 \] and \pbDef{excnVal} \sum_{k = 1}^{n-1} v_k = \frac{1}{\beta^n - 1} \wlr{\frac{\beta^{n} - 1}{\beta - 1} - n} = \frac{1}{\beta - 1} - \frac{n}{\beta^n - 1}. \peDef{excnVal} Its dual has $n - 1$ variables, which we call $y_1, \dots, y_{n-1}$, and is \pbDef{excnDual} \left\{ \begin{array}{cccccccc} \wrm{minimize} & \wvec{e}^{\mathrm{T}} \wvec{y} & = & y_{n-1} & & & & \\ \wrm{subject \ to} & \wvec{A}^{\mathrm{T}} \wvec{y} & \geq & \mathds{1}{},& \ \ & y_k & \geq & 0. \end{array} \right. \peDef{excnDual} We claim that the vector $\wvec{y} \in \wrn{n-1}$ with entries \[ y_{n-1} = \frac{1}{\beta - 1} - \frac{n}{\beta^n - 1} \hspace{1cm} \wrm{and} \hspace{1cm} y_{k} = \beta^{n - k - 1} y_{n-1} + \frac{1}{\beta - 1} \ \ \ \wrm{for} \ k = 1, \dots, n - 2, \] is a feasible solution of the dual problem. Indeed, $y_{n-1} \geq 0$ because \[ \frac{\beta^n - 1}{\beta - 1} = \sum_{k = 0}^{n-1} \beta^k \geq n, \] and the other entries of $\wvec{y}$ are clearly nonnegative because $y_{n-1} \geq 0$.
The first inequality in the system $\wvec{A}^{\mathrm{T}} \wvec{y} \geq \mathds{1}{}$ is satisfied because \[ \wlr{\beta + 1} y_1 + \sum_{k = 2}^{n-1} y_k = \wlr{\wlr{\beta + 1} \beta^{n-2} + \sum_{k=2}^{n-2} \beta^{n-k-1} + 1} y_{n-1} + \frac{\beta + n-2}{\beta - 1} \] \[ = \wlr{\sum_{k=0}^{n-1} \beta^k} y_{n-1} + \frac{\beta + n-2}{\beta - 1} \] \[ = \frac{\beta^{n} - 1}{\beta - 1} \wlr{\frac{1}{\beta - 1} - \frac{n}{\beta^n - 1}} + \frac{\beta + n-2}{\beta - 1} = \frac{\beta^{n} - \beta}{\wlr{\beta - 1}^2} + 1 \geq 1, \] and the remaining inequalities are satisfied as equalities, because \[ - y_{k-1} + \beta y_k = -\beta^{n - k} y_{n-1} - \frac{1}{\beta - 1} + \beta \beta^{n - k - 1} y_{n-1} + \frac{\beta}{\beta - 1} = 1. \] The value $y_{n-1}$ of the objective function of the dual problem at $\wvec{y}$ is equal to the value of the objective function of the primal problem in \pRef{excnVal}. Therefore, this is the optimal value of both problems and Equation \pRef{lpineq} holds. The linear programming problem above also shows that the worst case in Equation \pRef{ssBnC} is achieved for $e_k = k - 1$, because these exponents lead to the $v_k$ in the solution of the primal problem. \peVerify{exSharpSumNearC}\\ \pbVerifyB{exSignedSumB} Recall that $x_0 := u$, $x_1 := 1$ and $x_k := - 2^{1 - k} \wlr{1 + 3 u}$ for $k > 1$. It follows by induction that \[ \sum_{i = 0}^k x_i = 2^{1 - k} \wlr{1 + 3 u} - 2 u \hspace{1.0cm} \wrm{and} \hspace{1.0cm} \wfl{\sum_{i=0}^k x_i} = 2^{1 - k} \wlr{1 + 2 u}. \] Since $2^{n} u \leq 1$, we have \[ \sum_{k = 1}^n \wabs{\sum_{i = 0}^k x_i} = 2 \wlr{1 - 2^{-n}} \wlr{1 + 3u} - 2 n u > 0, \] and Equation \pRef{thXSSA} follows from the expressions above. Finally, since $2^{-n} \geq u$, we have that $n u < 1$ and \[ \kappa_n - \wlr{1 - u} = u \frac{\wlr{2^{-n} - u} n + 3 \wlr{1 - 2^{-n}}u }{\wlr{1 - 2^{-n}} \wlr{1 + 3u} - n u} > 0 \] and \[ 1 - \kappa_n = u \frac{1 - 2^{-n} \wlr{n + 1}}{\wlr{1 - 2^{-n}} \wlr{1 + 3u} - n u} \geq 0.
\] \peVerify{exSignedSumB}\\ } \end{document}
\begin{document} \title[Holomorphic horospherical transform]{Holomorphic horospherical transform on non-compactly causal spaces} \author{Simon Gindikin, Bernhard Kr\"otz and Gestur \'{O}lafsson} \address{Department of Mathematics, Rutgers University, New Brunswick, NJ 08903} \email{[email protected]} \address{Max-Planck-Institut f\"ur Mathematik, Vivatsgasse 7, D-53111 Bonn, Germany} \email{[email protected]} \address{Department of Mathematics, Louisiana State University, Baton Rouge, LA 70803, USA} \email{[email protected]} \subjclass{} \keywords{Radon transform, horospheres, Hardy spaces} \thanks{SG was supported in part by NSF-grant DMS-0070816} \thanks{BK was supported by the RiP-program in Oberwolfach, and a Heisenberg fellowship of the DFG} \thanks{G\'O was supported by the RiP-program in Oberwolfach, NSF grants DMS-0139783 and DMS-0402068} \begin{abstract}We develop integral geometry for non-compactly causal symmetric spaces. We define a complex horospherical transform and, for some cases, identify it with a Cauchy type integral. \end{abstract} \maketitle \section*{Introduction} Within the class of semisimple symmetric spaces $Y=G/H$ we focus on those which can be realized as Shilov boundaries of Stein tubes $D=D(Y)$ in the affine complexification $Y_\mathbb{C}=G_\mathbb{C}/H_\mathbb{C}$. These $Y$ come in two different flavors: compactly causal (CC) and non-compactly causal (NCC) symmetric spaces. It is important to mention that one can realize different series of representations occurring in $L^2(Y)$ by holomorphic functions on $D$: the holomorphic discrete series for CC and a multiplicity one subspace of the most continuous spectrum in the NCC-case. \par In \cite{GKO} we developed integral geometry for $D$ in the CC-case. If we consider the usual (real) horospherical transform on $Y$, then holomorphic discrete series lie in its kernel.
So we considered a complex version of such a transform, the horospherical Cauchy transform, using a kernel of Cauchy type with singularities on complex horospheres (on $Y_\mathbb{C}$) which do not intersect $Y$. As a result we constructed a dual domain $\Xi_+$ in the manifold $\Xi$ of complex horospheres on $Y_\mathbb{C}$, and our horospherical transform is an intertwining operator from holomorphic functions on $D$ to holomorphic functions on $\Xi_+$ which admits an explicit inversion. \par In this paper we develop holomorphic integral geometry for NCC-spaces. In \cite{GKO} we constructed for any NCC-space a $G$-invariant tube type domain $D(Y)$ in $Y_\mathbb{C}$ which has $Y$ as Shilov boundary. The crucial point is that $D$ contains the associated Riemannian symmetric space $X=G/K$ as a totally real submanifold. We have $X_\mathbb{C}\simeq Y_\mathbb{C}$. Let us remark that all spherical functions on $X$ extend holomorphically to $D$ and admit boundary values on $Y$. Furthermore, in some cases, $D$ coincides with the maximal $G$-domain of holomorphy of spherical functions on $X$ -- the complex crown of $X$. \par There are substantial differences in our constructions of the holomorphic horospherical transform for CC and NCC-spaces. In the CC-case we used a Cauchy type integral operator on $Y$ with singularities on complex horospheres which do not intersect $Y$ (horospherical Cauchy transform). For NCC-spaces we are able to construct similar operators in some cases, including the important subclass of Cayley type spaces. In general, we give another construction. We use the fact that complex horospheres which do not intersect $Y$ have real forms in $D$, and we construct a complex horospherical transform extending the real one. Let us mention that this construction does not work in the CC-case. If both constructions are available, the second one can be regarded as a residue of the first. \par Let us describe our results in more detail.
For a fixed choice of horospherical coordinates $X\simeq N\times A$ one parameterizes the set $\Xi_\mathbb{R}$ of real horospheres on $X$ by $G/MN$ (with $M=Z_K(A)$ as usual). For Schwartz-functions $f$ on $X$ one defines the real Radon transform $$\mathcal{R}_\mathbb{R} : \mathcal{S}(X)\to C^\infty(\Xi_\mathbb{R}), \ \ \mathcal{R}_\mathbb{R}(f)(\xi)=\int_\xi f\,.$$ Our first goal is to give a holomorphic version of $\mathcal{R}_\mathbb{R}$. For that we construct a certain $G$-invariant submanifold $\Xi_+$ of the manifold $\Xi=G_\mathbb{C}/ M_\mathbb{C} N_\mathbb{C}$ of complex horospheres on $X_\mathbb{C} =Y_\mathbb{C}$. The complex horospheres of $\Xi_+$ have the property that they do not intersect $Y$. Let us point out that, in contrast to the CC-case, $\Xi_+$ is no longer open in $\Xi$; it is a $CR$-submanifold of complex dimension equal to the rank of $Y$ (i.e. $\dim A$). Next we recall the most-continuous Hardy-space $\mathcal{H}^2(D)$ on $D$ from \cite{GKO}. This is a Hilbert space of holomorphic functions on $D$ with an equivariant isometric boundary map $b: \mathcal{H}^2(D)\to L_{\rm mc}^2(Y)$ into a multiplicity-one subspace of the most-continuous spectrum of $Y$. We show that there is a natural dense $G$-subspace $\mathcal{H}^2(D)_0$ of $\mathcal{H}^2(D)$ whose elements restrict to Schwartz-functions on $X$ and for which $\mathcal{R}_\mathbb{R}$ allows a holomorphic $G$-equivariant extension $$\mathcal{R}: \mathcal{H}^2(D)_0\to CR^\infty(\Xi_+)\, .$$ We call $\mathcal{R}$ the holomorphic horospherical Radon transform. \par The main result of this paper is an interpretation of a (slightly twisted) Radon transform $\mathcal{R}_\chi$ as a Cauchy-type integral.
For the highly symmetric subclass of Cayley-type spaces CT=CC$\cap$NCC we show in Theorem 5.3 that there is a Cauchy-type kernel $\mathcal{K}_\chi(\cdot,\cdot)\in CR_\chi^\infty(\Xi_+)\hat \otimes \mathcal{S}(Y)$ such that \begin{equation} \label{II1} \mathcal{R}_\chi(f)(\cdot)={1\over (2\pi)^n} \int_Y b(f)(y)\ \mathcal{K}_\chi (\cdot , y) \ dy \in CR_\chi^\infty(\Xi_+)\, . \end{equation} Note that (\ref{II1}) is analogous to our definition of the holomorphic horospherical Radon transform for the CC-case \cite{gko2}. For the general case of an NCC-space we believe that there is an integral representation of $\mathcal{R}_{\chi}$ as in (\ref{II1}), but we are not aware of one. We consider the definition of a holomorphic horospherical Radon transform in terms of a Cauchy type integral to be most robust for further generalization to other symmetric spaces or different series of representations. \par For the CC-case, following \cite{G3}, we found an inversion formula for the holomorphic horospherical transform of compelling beauty: $$f = (\mathcal{L} \mathcal{R} f)^\vee\, .$$ Here $\mathcal{L}$ is a differential operator and $\phi\mapsto \phi^\vee$ is a dual transform \cite{gko2} with regard to a double fibration which brings $D$ and $\Xi_+$ into duality. Let us remark that the situation is different for the NCC-spaces -- for the basic example $Y=\mathrm{Sl}(2,\mathbb{R})/ \mathrm{SO}(1,1)$ we show that $\mathcal{L}$ is a pseudo-differential operator but not a differential operator (cf. Section 6). \par The paper is concluded with a geometric definition of the most continuous Hardy space $\mathcal{H}^2(D)$ which is of independent interest. \par {\it Acknowledgement}: We thank the referee for his patient insistence on more precision. It is due to him that the paper is now more readable.
\section{Horospheres on symmetric spaces of triangular type } \label{s-one} \noindent In \cite{GKO} we associated to every NCC-symmetric space $Y=G/H$ a $G$-Stein manifold $D$ with the following properties: \begin{enumerate} \item The complex manifold $D$ has a natural $G$-realization in the complexification $Y_\mathbb{C}$ of $Y$; \item The symmetric space $Y=G/H$ is $G$-isomorphic to the distinguished (Shilov) boundary of $D$. \end{enumerate} The objective of this section is to study the space $\Xi=G_\mathbb{C}/M_\mathbb{C} N_\mathbb{C}$ of horospheres in $Y_\mathbb{C}$ in relation to $D$. In particular we will introduce a natural $G$-invariant $CR$-manifold $\Xi_+\subset \Xi$ whose elements have the property that they do not intersect the real space $Y$, i.e. have no \textit{real points}. \subsection{Notation}\label{ss-one} We begin with the construction of NCC-symmetric spaces from the complex geometric point of view. \par Let us denote by $\mathfrak{g}$ a simple non-compact real Lie algebra. We fix a Cartan involution $\theta:\mathfrak{g}\to \mathfrak{g}$, write $\mathfrak{g}=\mathfrak{k}+\mathfrak{p}$ for the associated eigenspace decomposition and select a maximal abelian subspace $\mathfrak{a}\subset\mathfrak{p}$. Write $\Pi=\{\alpha_1,\ldots,\alpha_n\}$ for a basis of the restricted root system $\Sigma=\Sigma(\mathfrak{g},\mathfrak{a})$ and expand the highest root $\beta$ in terms of $\Pi$ $$\beta=k_1 \alpha_1 +\ldots + k_n \alpha_n \qquad (k_i\in \mathbb{Z}_{>0})\, .$$ One calls $\alpha_i$ {\it minuscule} if $k_i=1$. Let us recall that minuscule roots exist if and only if $\Sigma$ is not of the type $E_8$, $F_4$ or $G_2$. For the remainder we fix a minuscule root $\alpha$ and define $Z\in \mathfrak{a}$ by the requirement $\alpha_i(Z)=\delta_{\alpha_i \alpha}$.
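\par For orientation, let us record two standard instances, which are easily read off from the tables of root systems: for $\Sigma$ of type $A_n$ the highest root is $\beta=\alpha_1+\ldots+\alpha_n$, so that every simple root is minuscule, while for $\Sigma$ of type $C_n$ one has $\beta=2\alpha_1+\ldots+2\alpha_{n-1}+\alpha_n$, so that $\alpha_n$ is the only minuscule root.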
Observe that $Z$ gives rise to a triangular decomposition \begin{equation}\label{3g} \mathfrak{g}=\mathfrak{g}_{-1} + \mathfrak{g}_0 + \mathfrak{g}_1 \end{equation} with $\mathfrak{g}_j=\{ X\in \mathfrak{g}\mid [Z,X]=j X\}$. Further, the $3$-grading (\ref{3g}) defines an involution $\sigma$ of $\mathfrak{g}$ by $\sigma(X)=(-1)^j X $ for $X\in \mathfrak{g}_j$, with eigenspace decomposition $$\mathfrak{g}=\mathfrak{g}_0 + ( \mathfrak{g}_{-1}+\mathfrak{g}_{1})\, .$$ By construction $\sigma$ and $\theta$ commute and hence $\tau=\sigma\circ \theta$ defines an involution whose eigenspace decomposition shall be denoted $\mathfrak{g}=\mathfrak{h}+\mathfrak{q}$. The so-obtained symmetric pairs $(\mathfrak{g}, \mathfrak{h})$ we call non-compactly causal (NCC). It is an elementary exercise to classify all NCC-symmetric pairs. \par We denote by $\mathcal{W}$ the Weyl group of $\Sigma$, set $Z_H={\pi\over 2} Z$ and define $\Omega_H\subseteq \mathfrak{a}$ by \begin{equation}\label{de-OH} \Omega_H=\mathop{\textrm{int}} \{\hbox{convex hull of }\ \mathcal{W}(Z_H)\}\, . \end{equation} Observe that $\overline \Omega_H$ is a compact convex subset of $\mathfrak{a}$ with extreme points $\mathcal{W} (Z_H)$. On the global level we fix a linear connected Lie group $G$ with Lie algebra $\mathfrak{g}$. We denote by $G_\mathbb{C}$ the corresponding linear connected complexification of $G$. Further we request that $\tau$ as well as its complex linear extension $\tau_\mathbb{C}$ exponentiate to involutions on $G$, resp. $G_\mathbb{C}$, and we denote by $H$, resp. $H_\mathbb{C}$, their corresponding fixed point groups. In this way we obtain a totally real embedding $$Y\hookrightarrow Y_\mathbb{C}= G_\mathbb{C}/H_\mathbb{C}$$ of $Y$ in the Stein symmetric space $Y_\mathbb{C}$. We refer to $Y$ as a non-compactly causal symmetric space (NCC). \par Attached to $Y$ and $\Omega_H$ comes a Stein manifold $D$ which we will now describe.
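\par The smallest case may serve as an illustration (cf. the basic example $Y=\mathrm{Sl}(2,\mathbb{R})/\mathrm{SO}(1,1)$ from the introduction): let $\mathfrak{g}=\mathfrak{sl}(2,\mathbb{R})$ with $\theta(X)=-X^{\mathrm{T}}$ and $\mathfrak{a}=\mathbb{R} Z$ for $Z={1\over 2}\mathrm{diag}(1,-1)$. Here $\Pi=\{\alpha\}$ with $\alpha(Z)=1$ and $\beta=\alpha$ is minuscule. The grading (\ref{3g}) is by the strictly lower triangular, diagonal and strictly upper triangular matrices, and one computes that $\mathfrak{h}=\mathbb{R}(E_{12}+E_{21})\simeq \mathfrak{so}(1,1)$, where $E_{ij}$ denotes the elementary matrix. Moreover $\mathcal{W}=\{\pm 1\}$ acts on $\mathfrak{a}$ by sign changes, so that $\Omega_H=\{tZ\mid |t|<{\pi\over 2}\}$.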
Write $K<G$ for the compact group of $\theta$-fixed elements and $X=G/K$ for the corresponding Riemannian symmetric space. As before we obtain a totally real embedding $$X\hookrightarrow X_\mathbb{C}= G_\mathbb{C}/K_\mathbb{C}\, .$$ Recall that $H_\mathbb{C}$ and $K_\mathbb{C}$ are conjugate, i.e. with $z_H=\exp (iZ_H)$ we have (cf. \cite{GKO}): \begin{equation}\label{eq=conj} e^{i\mathrm{ad} (Z_H)}\mathfrak{k}_\mathbb{C}=\mathfrak{h}_\mathbb{C}\qquad \mathrm{and}\qquad \mathrm{Ad} (z_H)K_\mathbb{C}= H_\mathbb{C}\, . \end{equation} Hence $X_\mathbb{C}$ and $Y_\mathbb{C}$ are canonically $G_\mathbb{C}$-isomorphic via the map $$X_\mathbb{C}\ni gK_\mathbb{C} \mapsto gz_H^{-1} H_\mathbb{C}\in Y_\mathbb{C}\, . $$ In the sequel we identify $X_\mathbb{C}$ with $Y_\mathbb{C}$. \par We write $x_o=K_\mathbb{C}\in X_\mathbb{C}$ for the base point in $X_\mathbb{C}$ and set $$D=G\exp(i\Omega_H)\cdot x_o\, .$$ Note that $D$ was denoted by $\Xi_H$ in our previous article \cite{GKO}. According to \cite{GKO}, $D$ is an open $G$-invariant Stein neighborhood of $X$ in $X_\mathbb{C}=Y_\mathbb{C}$. Moreover, the map $Y=G/H\ni gH\mapsto gz_H\cdot x_o\in X_\mathbb{C} $ identifies $Y$ with the distinguished boundary $ \partial_dD$ of $D$ (see \cite{GKO}, Section 1, for more details). \par In summary, the symmetric Stein manifold $X_\mathbb{C}=Y_\mathbb{C}$ admits two real forms $X$ and $Y$ and a Stein neighborhood $D$ of $X$ with $Y$ as its Shilov boundary. \subsection{Complex horospheres} \label{ss-two} From now on we let $Y=G/H$ be an NCC symmetric space and $X=G/K$ its Riemannian counterpart. \par In this section we introduce the $G$-space of \textit{horospheres} in the complex manifold $X_\mathbb{C}=Y_\mathbb{C}$. This was done for CC-spaces in \cite{gko2}. \par We begin with some general remarks on convexity which we will use frequently.
Let $G=NAK$ be an Iwasawa decomposition of $G$ and $N_\mathbb{C} A_\mathbb{C} K_\mathbb{C}\subsetneq G_\mathbb{C}$ its Zariski-open complexification. In particular, $N_\mathbb{C} A_\mathbb{C}\cdot x_o$ is a Zariski-open subset in the affine variety $X_\mathbb{C}$. Define the finite $2$-group $F=A_\mathbb{C} \cap K_\mathbb{C}$ and note that there are well-defined holomorphic maps $$n: N_\mathbb{C} A_\mathbb{C} \cdot x_o\to N_\mathbb{C}, \qquad a: N_\mathbb{C} A_\mathbb{C}\cdot x_o \to A_\mathbb{C}/F$$ such that $z=n(z) a(z)\cdot x_o$ for all $z\in N_\mathbb{C} A_\mathbb{C}\cdot x_o$. Now the fact that $D$ is contractible and $D\subset N_\mathbb{C} A_\mathbb{C}\cdot x_o$ implies that $a|_{D}$ admits a well defined holomorphic logarithm $$\log a: D \to \mathfrak{a}_\mathbb{C}\, .$$ For $Z\in \Omega_H$, the complex convexity theorem (cf.\ \cite{GK02,ko}) then implies that \begin{equation} \label{cc} \mathop{\textrm{Im}} \log a(G\exp (iZ)\cdot x_o)={\rm conv} (\mathcal{W}\cdot Z) \end{equation} where ${\rm conv} (\cdot)$ denotes the convex hull of $(\cdot)$. \par Submanifolds of $X_\mathbb{C}$ of the type $$gN_\mathbb{C} \cdot x_o \qquad (g\in G_\mathbb{C})$$ will be referred to as {\it horospheres}. We denote by $\mathbf{\mathrm{Hor}}(X_\mathbb{C})$ the set of all horospheres on $X_\mathbb{C}$ and note that $\mathbf{\mathrm{Hor}} (X_\mathbb{C})$ carries a natural $G_\mathbb{C}$-action $(g,hN_\mathbb{C}\cdot x_o)\mapsto gh N_\mathbb{C}\cdot x_o$. \par To understand the space of horospheres and the related harmonic analysis it is useful to place them in the context of a double fibration.
Set $M=Z_K(A)\subset M_\mathbb{C}=Z_{K_\mathbb{C}} (A)$, define $$\Xi=G_\mathbb{C}/M_\mathbb{C} N_\mathbb{C}$$ and consider: \begin{equation}\label{eq=df} \xymatrix { & G_\mathbb{C}/ M_\mathbb{C} \ar[dl]_{\pi_1} \ar[dr]^{\pi_2} &\\ \Xi & & X_\mathbb{C}\,.}\end{equation} Then horospheres in $X_\mathbb{C}$ are exactly the subsets of $X_\mathbb{C}$ of the form \begin{equation}\label{eq=E} E(\xi)=\pi_2(\pi_1^{-1}(\xi)) \qquad (\xi\in \Xi)\, . \end{equation} If $\xi_o=M_\mathbb{C} N_\mathbb{C}\in \Xi$ denotes the base point and $\xi=g\cdot \xi_o\in \Xi$ then, using that $M_\mathbb{C} \subset K_\mathbb{C}$, we have: $$E(\xi)=gM_\mathbb{C} N_\mathbb{C}\cdot x_o=gN_\mathbb{C} \cdot x_o\subset X_\mathbb{C}\, . $$ \par Similarly, for $z\in X_\mathbb{C}$ we set \begin{equation}\label{eq=S} S(z)=\pi_1(\pi_2^{-1}(z))\ .\end{equation} If $z=g\cdot x_o$ for $g\in G_\mathbb{C}$, then $S(z)=gK_\mathbb{C}\cdot \xi_o$. Moreover, for $z\in X_\mathbb{C}$ and $\xi\in \Xi$ one has the incidence relations \begin{equation}\label{eq=indi} z\in E(\xi)\iff \pi_1^{-1}(\xi)\cap \pi_2^{-1}(z)\neq \emptyset \iff \xi\in S(z)\, .\end{equation} \begin{proposition}\label{gen-parametrization} The map $$\Xi\to \mathbf{\mathrm{Hor}}(X_\mathbb{C}), \ \ \xi\mapsto E(\xi)$$ is a $G_\mathbb{C}$-equivariant bijection. \end{proposition} \begin{proof} $G_\mathbb{C}$-equivariance and surjectivity are clear. The injectivity follows in the same way as in the proof of Proposition 2.1 in \cite{gko2} by replacing $H_\mathbb{C}$ by $K_\mathbb{C}$.\end{proof} One of the important features of $\Xi$ is that there exists a right $A_\mathbb{C}$-action on $\Xi$ that commutes with the left $G_\mathbb{C}$-action. For $\xi=g\cdot\xi_o$ and $a\in A_\mathbb{C}$ we set \begin{equation}\label{eq=ra} \xi\cdot a= ga\cdot \xi_o\, .
\end{equation} Since $A_\mathbb{C}$ normalizes $M_\mathbb{C} N_\mathbb{C}$ it is clear that (\ref{eq=ra}) is well defined. From the definition it is also clear that the left $G_\mathbb{C}$-action and the right $A_\mathbb{C}$-action commute. In this way we obtain an action of $G_\mathbb{C}\times A_\mathbb{C}$ on $\Xi$ by $$(G_\mathbb{C}\times A_\mathbb{C}) \times \Xi \to \Xi, \ \ ((g,a), \xi)\mapsto g\cdot \xi\cdot a\, .$$ \par We conclude this subsection with an alternative characterization of horospheres as level sets of holomorphic functions. For that let $\{ \omega_1, \ldots, \omega_n\}\subset \mathfrak{a}^*$ be the set of fundamental $K$-spherical lowest weights. For each $1\leq j\leq n$ we write $(\pi_j, V_j)$ for the corresponding finite dimensional representation of $G$ with lowest weight $\omega_j$. We extend this representation to a holomorphic representation of $G_\mathbb{C}$ which we denote by the same symbol. Endow $V_j$ with a complex bilinear pairing $ \langle \hbox to 0.5em{}, \hbox to 0.5em{} \rangle$ such that $\langle \pi_j(g)v, w\rangle =\langle v, \pi_j(\theta(g)^{-1})w\rangle$ for all $v,w\in V_j$ and $g\in G_\mathbb{C}$. Such a form $\langle \hbox to 0.5em{} ,\hbox to 0.5em{} \rangle$ exists as $\pi_j\circ \theta$ is isomorphic to the representation contragredient to $\pi_j$. We write $v_j\in V_j$ for a lowest weight vector and $\eta_j\in V_j$ for a $K_\mathbb{C}$-fixed vector subject to the normalization $\langle \eta_j, v_j\rangle=1$. Finally, define holomorphic functions $f_j: G_\mathbb{C} \to \mathbb{C}$ by \begin{equation} \label{ef} f_j(g)=\langle \pi_j(g)\eta_j, v_j\rangle \qquad (g\in G_\mathbb{C})\,. \end{equation} Note, that we have \begin{equation}\label{eq-NAKcov} f_j(nak)=a^{\omega_j} \end{equation} for all $n\in N_\mathbb{C}$, $k\in K_\mathbb{C}$ and $a\in A_\mathbb{C}$. Here, as elsewhere in this article, we use the notation $a^\mu =e^{\mu (X)}$ if $a=\exp X\in A_\mathbb{C}$.
We recall that \begin{equation}\label{eq-zero} G_\mathbb{C}\setminus N_\mathbb{C} A_\mathbb{C} K_\mathbb{C}=\{g\in G_\mathbb{C}\mid \prod_{j=1}^n f_j(g)=0\}\, . \end{equation} (see \cite{vdB}, Lemma 3.4). \begin{lemma}\label{le-NK} $N_\mathbb{C} K_\mathbb{C}=\{g\in G_\mathbb{C} \mid f_j (g)=1 \quad \hbox{for all}\quad 1\leq j\leq n\}$. \end{lemma} \begin{proof} This follows from (\ref{eq-NAKcov}) and (\ref{eq-zero}).\end{proof} We will often view $f_j$, or more generally its left translates, as a function on $X_\mathbb{C}$. We will also, without further comment, view the function $g\mapsto f_j(g^{-1}z)$, $z\in X_\mathbb{C}$, as a function on $\Xi$. With that in mind we have: \begin{lemma} Let $\xi \in \Xi$ and $x\in X_\mathbb{C}$. Then $$E(\xi)=\{ z\in X_\mathbb{C} \mid f_j(\xi^{-1} z )=1 \quad \hbox{for all} \quad 1\leq j\leq n\}$$ and $$S(x)=\{ \varrho \in \Xi \mid f_j(\varrho^{-1} x )=1 \quad \hbox{for all} \quad 1\leq j\leq n\}\,.$$ \end{lemma} \begin{proof} Notice that $E(g\cdot \xi_o)=gE(\xi_o)$ and $S(g\cdot x_o)=gS(x_o)$. We can therefore assume that $\xi =\xi_o$ and $x=x_o$. Now, the claim is a reformulation of Lemma \ref{le-NK}. \end{proof} \subsection{Some $G$-submanifolds of $\Xi$} We define the $G$-space of {\it real horospheres} in $X$ as $$\Xi_\mathbb{R} = G/MN\, .$$ Then $\Xi_\mathbb{R}\subset \Xi=G_\mathbb{C} / M_\mathbb{C} N_\mathbb{C}$ is obviously a totally real $G$-submanifold of $\Xi$ and the right $A$-action leaves $\Xi_\mathbb{R}$ invariant. \par Let $T=\exp(i\mathfrak{a})\subset G_\mathbb{C}$ and note that $A_\mathbb{C}=A\times T$, and that $F=K\cap T$. We contrast $\Xi_\mathbb{R}$ with the $G\times A_\mathbb{C}$-invariant subset of $\Xi$ \begin{equation} \Xi_0=G\cdot \xi_o\cdot A_\mathbb{C} =GA_\mathbb{C}\cdot \xi_o\, .
\end{equation} \begin{proposition} The following assertions hold: \begin{enumerate} \item The map \begin{equation} \label{eq=decomp} K/M\times_F A_\mathbb{C} \to \Xi_0, \ \ [kM,a]\mapsto ka\cdot \xi_o \end{equation} is a real analytic isomorphism. \item The map $$\Xi_\mathbb{R} \times_F T\to \Xi_0, \ \ [gMN, t]\mapsto gt\cdot\xi_o$$ is a $G$-equivariant real analytic diffeomorphism. \end{enumerate} \end{proposition} \begin{proof} (i) follows from the fact that $G=KAN$ and $N A_\mathbb{C} \subset A_\mathbb{C} N_\mathbb{C}$. Finally, (ii) is a consequence of (i). \end{proof} Note that (\ref{eq=decomp}) describes a natural $CR$-structure on $\Xi_0$ of $CR$-dimension $\dim A$ and $CR$-codimension $\dim K/M$. Define a tube domain in $A_\mathbb{C}$ by $$T (\Omega_H)=\exp (\mathfrak{a} + i\Omega_H) =A\exp (i\Omega_H)\simeq \mathfrak{a} +i\Omega_H\, $$ and set \begin{equation}\label{defYH} \Xi_+=G\exp(i\Omega_H)\cdot \xi_o=K T(\Omega_H)\cdot \xi_o\, . \end{equation} Then $\Xi_+$ is a real analytic, $G$-invariant open submanifold of $\Xi_0$. In particular $\Xi_+$ is a $CR$-manifold. The coordinate decomposition of $\Xi_0$ simplifies slightly for $\Xi_+$. \begin{proposition} For $\Xi_+$ the following assertions hold: \begin{enumerate} \item The map $$K/M\times T(\Omega_H) \to \Xi_+, \ \ (kM,a)\mapsto ka\cdot \xi_o$$ is a real analytic isomorphism. \item The map $$\Xi_\mathbb{R} \times \Omega_H \to \Xi_+, \ \ (gMN, Z)\mapsto g\exp(iZ)\cdot\xi_o$$ is a $G$-equivariant real analytic diffeomorphism. \end{enumerate} \end{proposition} We conclude this section with a remark on the structure of $\Xi_+$. \begin{remark} (Shilov boundary of $\Xi_+$) The map $$\Xi_\mathbb{R}\to \partial\Xi_+, \ \ gMN\mapsto gz_H\cdot \xi_o$$ identifies $\Xi_\mathbb{R}$ as the Shilov boundary $\partial_S \Xi_+$ of $\Xi_+$. In this sense $\Xi_\mathbb{R}$ parameterizes the real horospheres on $Y$ (see also Lemma \ref{nc} below).
\end{remark} \subsection{Horospheres without real points}\label{ss=three} The aim of this subsection is to show that horospheres corresponding to $\Xi_+$ do not contain real points, i.e., are disjoint from $Y$. Recall from Subsection \ref{ss-one} that we identify $Y=G/H$ with the (Shilov) boundary orbit $G\cdot y_o\subset X_\mathbb{C}$ of $y_o=z_H\cdot x_o$ in $G_\mathbb{C}/K_\mathbb{C}$. Define the parameter set of \textit{horospheres without real points} by \begin{equation}\label{de-Ur} \Xi_{\rm nr}=\{ \xi \in \Xi\mid E(\xi) \cap Y=\emptyset\}\, . \end{equation} The following statement should be compared to the complex convexity theorem (\ref{cc}); it means that convexity breaks down at the extreme points of $\Omega_H$. \begin{lemma}\label{nc} Let ${\mathcal U}=\bigcup_{w\in \mathcal{W}} NAwH$. Then ${\mathcal U}\cdot y_o$ is open and dense in $G\cdot y_o$ and $$G\cdot y_o\cap N_\mathbb{C} A_\mathbb{C} \cdot x_o = {\mathcal U}\cdot y_o= N A \mathcal{W} z_H\cdot x_o\, .$$ \end{lemma} \begin{proof} It is a special case of a theorem of Rossmann and Matsuki (cf. \cite{Ma79}) that ${\mathcal U}$ is dense in $G$. Hence ${\mathcal U}\cdot y_o$ is dense in $G\cdot y_o$. As $z_H^{-1}H_\mathbb{C} z_H =K_\mathbb{C}$ (cf.\ (\ref{eq=conj})), it follows that ${\mathcal U}\cdot y_o = NA\mathcal{W} z_H\cdot x_o$. It remains to show the inclusion ``$\supseteq$'' for the first asserted equality. But this follows from (\ref{eq-zero}). \end{proof} We can now prove the main result of this subsection. \begin{theorem} \label{ti}$\Xi_+\subseteq \Xi_{\rm nr}$. \end{theorem} \begin{proof} We argue by contradiction.
Note that $E(\xi)\cap Y\neq \emptyset$ for some $\xi\in\Xi_+$ means that there exists $Z\in \Omega_H$ such that $$Gz_H\cdot x_o \cap \exp(iZ)N_\mathbb{C}\cdot x_o\not= \emptyset\, .$$ Now $\exp(iZ)N_\mathbb{C}\cdot x_o=N_\mathbb{C} \exp(iZ)\cdot x_o\subset N_\mathbb{C} \exp(i\Omega_H)\cdot x_o$ and the assertion follows from Lemma \ref{nc}. \end{proof} \subsection{Real forms of $E(\xi)$ and $S(z)$} In this last part of this section we introduce certain $G$-invariant real forms of the complex manifolds $E(\xi)$ and $S(z)$. \par We begin with the horospheres. For $\xi=ga\cdot \xi_o\in \Xi_+$, with $g\in G$ and $a\in \exp (i\Omega_H)$, define \begin{equation}\label{real1} E_\mathbb{R}(\xi)=gNa\cdot x_o\subset E(\xi )\, . \end{equation} Then $E_\mathbb{R}(\xi)$ is well defined and a totally real submanifold of $E(\xi)$. Further, the assignment $\Xi_+\ni \xi\mapsto E_\mathbb{R}(\xi)$ is $G$-equivariant. \par Next we consider $S(z)\simeq K_\mathbb{C}/ M_\mathbb{C}$. Because of the relation $K_\mathbb{C} =z_H^{-1} H_\mathbb{C} z_H$ there are two natural real forms. Accordingly we define for $z=ga\cdot x_o\in D$: \begin{equation}\label{real2} S^K_\mathbb{R}(z)=gaK\cdot \xi_o \qquad \text{and}\qquad S^H_\mathbb{R} (z)=ga z_H^{-1} Hz_H \cdot \xi_o\, . \end{equation} Obviously $S^K_\mathbb{R}(z)$ and $S^H_\mathbb{R} (z)$ are well defined totally real submanifolds of $S(z)=gaK_\mathbb{C}\cdot x_o$ and the maps $D\ni z\mapsto S^K_\mathbb{R}(z)$ and $D\ni z\mapsto S^H_\mathbb{R}(z)$ are $G$-equivariant. Note that $S_\mathbb{R}^H(z)\simeq H/M$ as manifolds. \subsection{Invariant measure on $Y$}\label{ss-measure} Lemma \ref{nc} allows for a natural normalization of the invariant measure on $Y$. Assume that invariant measures on $G$, $A$ and $N$ have been fixed and let $\mathcal{W}_H=N_{K\cap H} (\mathfrak{a})/ Z_{K\cap H}(\mathfrak{a})$ be the small Weyl group.
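To illustrate the small Weyl group just defined, here is a sketch for the hyperboloid $Y=\mathrm{SO}_e(1,n)/\mathrm{SO}_e(1,n-1)$ treated in the final section, with the matrix realizations used there (assumed here):

```latex
% K = SO(n) fixes the 0-th coordinate and H = SO_e(1,n-1) fixes the last one, so
\[
  K\cap H=\left\{\begin{pmatrix} 1&0&0\\ 0&R&0\\ 0&0&1\end{pmatrix}\ \middle|\ R\in\mathrm{SO}(n-1)\right\}.
\]
% Every such matrix commutes with Z = E_{1,n+1} + E_{n+1,1}, hence centralizes a = RZ. Therefore
\[
  \mathcal{W}_H=N_{K\cap H}(\mathfrak{a})/Z_{K\cap H}(\mathfrak{a})=\{1\},\qquad
  \mathcal{W}=\{\pm 1\},\qquad |\mathcal{W}/\mathcal{W}_H|=2\,.
\]
```

In this case the normalizing sum for the invariant measure on $Y$ therefore runs over exactly two terms.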
By Lemma \ref{nc} the union $${\mathcal U}=\bigcup_{w\in \mathcal{W}/\mathcal{W}_H} AN w\cdot y_o$$ is disjoint, and ${\mathcal U}$ is open and dense in $Y$. As the complement is an analytic set, it follows that $Y\setminus {\mathcal U}$ has measure zero. We can normalize the invariant measure on $Y$ such that for all $f\in L^1 (Y)$: $$\int_Y f (y) \, dy=\sum_{w\in \mathcal{W}/ \mathcal{W}_H} \int_A \int_N f (anw\cdot y_o)\ da \ dn\, .$$ \section{The Fr\'echet module $CR^\infty(\Xi_+)$} \noindent In this section we use the right $A$-action on $\Xi_\mathbb{R}$ and $\Xi_+$ to define $G$-submodules of the smooth $A$-covariant functions on $\Xi_\mathbb{R}$, respectively of the $CR$-functions on $\Xi_+$. These modules are the standard realization, respectively a $CR$-realization, of the space of smooth vectors in the principal series representations induced from the right. Note that later we will use induction from the left. Recall that $A$ acts on the space of horospheres from the right. This action induces a right regular representation of $A$ on any function space on $\Xi_\mathbb{R}$, $\Xi_+$ or any other right invariant set of horospheres, given by $$(R(a)f)(\xi)=f(\xi\cdot a)\, .$$ Let $\rho=\frac{1}{2} \sum_{\alpha\in\Sigma^+} (\dim \mathfrak{g}^\alpha)\cdot \alpha$ and let $\lambda \in \mathfrak{a}_\mathbb{C}^*$. The index ${}_\lambda$ will denote the subspace of $(\lambda-\rho)$-covariant functions. In particular $$C^\infty(\Xi_\mathbb{R})_\lambda=\{ f\in C^\infty (\Xi_\mathbb{R})\mid (\forall a\in A) \ R(a)f = a^{\lambda-\rho} f\}\, .$$ We recall that $G$ acts on $C^\infty(\Xi_\mathbb{R})$ by left translation in the argument, $$(L(g)f)(\xi)=f(g^{-1}\cdot \xi) $$ for $g\in G$, $f\in C^\infty(\Xi_\mathbb{R})$ and $\xi\in \Xi_\mathbb{R}$. The representation $(L, C^\infty(\Xi_\mathbb{R})_\lambda)$ so obtained is the smooth model of the spherical principal series with parameter $\lambda$.
\par Write $CR^\infty(\Xi_+)$ for the space of smooth $CR$-functions on $\Xi_+$ and set $$CR^\infty(\Xi_+)_\lambda=\{ f\in CR^\infty (\Xi_+)\mid (\forall a\in A)\ R(a)f = a^{\lambda-\rho} f\}\, .$$ As characters on $A$ extend to holomorphic functions on $T(\Omega_H)$, it follows that the restriction map \begin{equation} \label{rest}{\rm Res}_\lambda: CR^\infty(\Xi_+)_\lambda \to C^\infty(\Xi_\mathbb{R})_\lambda, \ \ f\mapsto f|_{\Xi_\mathbb{R}}\end{equation} is a topological isomorphism of $G$-modules. \subsection{$CR$-realization of the $H$-spherical holomorphic vector} For each $\lambda\in\mathfrak{a}_\mathbb{C}^*$ we define a certain $H$-invariant element $f_\lambda\in CR^{-\infty}(\Xi_+)_\lambda$ which was called the $H$-spherical holomorphic distribution vector in \cite{GKO}. The generalized function $f_\lambda$ is defined by $$f_\lambda(\xi) = a(\xi^{-1}z_H^{-1})^{\rho-\lambda} \qquad (\xi\in \Xi_+)\, .$$ We notice that on the dense subset $$\Xi_+'=\bigcup_{w\in \mathcal{W}} HwT(\Omega_H)\cdot\xi_o$$ of $\Xi_+$, the function belongs to $CR^\infty(\Xi_+')_\lambda$ and is given by $$f_\lambda(hwa\cdot\xi_o) = (w^{-1} z_H w)^{\lambda-\rho} a^{\lambda-\rho} \, .$$ For $\mathrm{Re}\,\lambda \ll 0$, this function is actually continuous on $\Xi_+$, and the meromorphic continuation in $\lambda$ as a distribution is achieved with Bernstein's theorem \cite{B}. There are no singularities on the imaginary axis $i\mathfrak{a}^*$ and we arrive at a well defined analytic assignment $$i\mathfrak{a}^*\to CR^{-\infty}(\Xi_+)_\lambda^H, \ \lambda\mapsto f_\lambda\,$$ (cf. \cite{GKO}, Th. 2.4.1).
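For orientation, the asserted $A$-covariance of $f_\lambda$ follows by a one-line computation from the definition; the following sketch assumes that the (holomorphically extended) Iwasawa-type projection $a(\cdot)$ satisfies $a(bg)=b\,a(g)$ for $b\in A$.

```latex
% For a' in A and a horosphere \xi in \Xi_+ (viewed as a coset in G_C / M_C N_C):
\[
  (R(a')f_\lambda)(\xi)
   = a\big((\xi\cdot a')^{-1}z_H^{-1}\big)^{\rho-\lambda}
   = a\big(a'^{-1}\,\xi^{-1}z_H^{-1}\big)^{\rho-\lambda}
   = \big(a'^{-1}\,a(\xi^{-1}z_H^{-1})\big)^{\rho-\lambda}
   = a'^{\lambda-\rho}\, f_\lambda(\xi)\,,
\]
% i.e. f_\lambda satisfies R(a')f_\lambda = a'^{\lambda-\rho} f_\lambda,
% the covariance required of elements of the index-\lambda subspace.
```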
\section{The holomorphic horospherical Radon transform}\label{s=two} \noindent The real Radon transform on $X$ is the $G$-equivariant injective map \begin{equation}\label{rera} \mathcal{R}_\mathbb{R}: \mathcal{S}(X)\to C^\infty(\Xi_\mathbb{R}), \ \ \mathcal{R}_\mathbb{R}(f)(g\cdot \xi_o)= \int_N f(gn\cdot x_o)\ dn \qquad (g\in G)\, .\end{equation} The purpose of this section is to show that $\mathcal{R}_\mathbb{R}$ has a natural extension to a $G$-equivariant map $$\mathcal{R} : \mathcal{H}^2(D)_0\to CR^\infty(\Xi_+)$$ which we call the holomorphic horospherical Radon transform. Here $\mathcal{H}^2(D)_0\hookrightarrow L^2 (X)$ is a dense subspace of the most-continuous Hardy space $\mathcal{H}^2(D)\subset \mathcal{O}(D)$ of $Y$ (cf.\ \cite{GKO}). On the infinitesimal level this extension is related to the previously established fact (\ref{rest}), i.e.\ $C^\infty(\Xi_\mathbb{R})_\lambda$ is canonically $G$-isomorphic to $CR^\infty(\Xi_+)_\lambda$ via restriction. \par This section is organized as follows: First we recall some facts about Fourier analysis on $X$, in particular Arthur's spectral characterization of the Schwartz space $\mathcal{C}(X)$. Subsequently we give a brief summary of the most-continuous Hardy space $\mathcal{H}^2(D)$ of \cite{GKO}. Finally we define the holomorphic horospherical Radon transform $\mathcal{R}$ and discuss some of its properties. \subsection{Fourier analysis on $X$} We recall the compact realization of the principal series representations. Let $B=M\backslash K$. For $\lambda\in \mathfrak{a}^*_\mathbb{C}$ define a representation $\pi_\lambda$ of $G$ on $L^2(B)$ by \begin{equation}\label{eq-pi} \pi_\lambda (g)f(Mk)=a(kg)^{\rho -\lambda}f(Mk(kg))\, . \end{equation} Then $(\pi_\lambda ,L^2(B ))$ is unitary if $\lambda\in i\mathfrak{a}^*$. We write $\mathcal{H}_\lambda =L^2(B)$ to indicate the dependence of the $G$-action on $\lambda$.
Let $\mathfrak{a}^+=\{H\in \mathfrak{a}\mid (\forall \alpha \in \Delta^+ )\, \alpha (H)>0\}$ and $$\mathfrak{a}_+^*=\{\lambda \in \mathfrak{a}^* \mid (\forall H\in\mathfrak{a}^+)\, \lambda (H)>0\}\, .$$ Denote by $\hat{G}_{\mathrm{r}}$ the reduced dual of $G$ and by $\hat{G}_{\mathrm{rsp}}$ the spherical reduced dual. Then $i\mathfrak{a}_+^*\ni \lambda\mapsto [\pi_\lambda ]\in \hat{G}_{\mathrm{rsp}}$ is an isomorphism of measure spaces. Here $[\pi_\lambda ]$ denotes the equivalence class of $\pi_\lambda$. We have \begin{equation}\label{eq-Fourier} L^2(X )\simeq \int^\oplus_{i\mathfrak{a}^*_+}\mathcal{H}_\lambda \, \frac{d\lambda }{|\mathbf{c} (\lambda )|^2} \end{equation} where $\mathbf{c} (\lambda)$ is the Harish-Chandra $c$-function. To explain the above isomorphism, we need some basic facts about the Fourier transform on $X$. For that, recall that $a:X\to A$ denotes the $A$-projection with regard to the horospherical coordinates $X=NA\cdot x_o\simeq N\times A$. Set ${\mathcal X}=B\times i\mathfrak{a}_+^*$ and define a Radon measure $d\mu_{{\mathcal X}}$ on ${\mathcal X}$ by $$d\mu_{{\mathcal X}}(b,\lambda ):=db\, \frac{d\lambda}{|\mathbf{c}(\lambda)|^2}\, .$$ For $f\in L^1(X)\cap L^2(X)$ define its \textit{spherical Fourier transform} $\hat{f}:{\mathcal X}\to \mathbb{C}$ by $$\hat{f} (b,\lambda)=\int_X f(x) a(bx)^{\rho-\lambda} \ dx\, .$$ We will also write $\mathcal{F}_X(f)$ for $\hat{f}$. We can normalize the invariant measure $dx$ on $X$ such that the Fourier transform extends to a unitary isomorphism $\hat{\hbox to 1em{}} : L^2(X)\to L^2({\mathcal X}, d\mu_{{\mathcal X}})$.
If $f$ is rapidly decreasing (the precise definition is given below), then the Fourier inversion formula holds pointwise: $$f(x)=\int_{{\mathcal X}} \hat f(b,\lambda) a(bx)^{\rho+\lambda} \ d\mu_{{\mathcal X}}(b,\lambda )\qquad (x\in X)\, .$$ For $\lambda \in i\mathfrak{a}^*$ define $\hat{f}_\lambda \in L^2(B)$ by $b\mapsto \hat{f}_\lambda (b)=\hat{f}(b,\lambda)$. Then the isomorphism in (\ref{eq-Fourier}) is given by $$L^2(X)\ni f\mapsto (\hat{f}_\lambda )_\lambda \in \int^\oplus_{i\mathfrak{a}^*_+}\mathcal{H}_\lambda \, \frac{d\lambda }{|\mathbf{c} (\lambda )|^2}\, .$$ In the following we will also need the operator valued Fourier transform. If $\mathcal{H}$ is a Hilbert space, then $B_2(\mathcal{H})\simeq \mathcal{H} \hat \otimes \mathcal{H}^*$ denotes the Hilbert space of Hilbert-Schmidt operators on $\mathcal{H}$. Write $$L^2(G)_{\rm sph}=\int_{i\mathfrak{a}_+^*}^\oplus B_2(\mathcal{H}_\lambda) \ \frac{d\lambda}{|\mathbf{c}(\lambda)|^2}$$ for the $K$-spherical spectrum in $L^2(G)$. Recall that the isomorphism is given by the operator valued Fourier transform $\mathcal{F} (f)(\lambda )= \int_G f(x)\pi_\lambda (x)\, dx$, $f\in L^1(G)\cap L^2(G)$. The inverse map is $$f(g)=\int_{i\mathfrak{a}_+^*}\mathop{\mathrm{Tr}} (\pi_\lambda (g^{-1})\mathcal{F} (f)(\lambda ))\, \frac{d\lambda }{|\mathbf{c} (\lambda )|^2}\, .$$ The constant function $v_{K,\lambda}=\mathbf{1}_{B}$ defines a normalized $K$-fixed vector in $\mathcal{H}_\lambda$. Assume that $f\in L^1(G)\cap L^2(G)_{\rm sph}$. Then, because $\mathcal{F} (f)(\lambda)=\mathcal{F} (R_k f)(\lambda)=\mathcal{F} (f)(\lambda)\pi_\lambda (k)$, it follows that \begin{equation}\label{eq-FT} \mathcal{F} (f)(\lambda)v= \langle v,v_{K,\lambda}\rangle \mathcal{F}(f)(\lambda)v_{K,\lambda}= \langle v,v_{K,\lambda} \rangle \hat{f}_\lambda \, .
\end{equation} For $x=k_1\exp (Z) k_2\in G$, with $Z\in \mathfrak{a}$ and $k_1,k_2\in K$, let $\sigma (x)=-B (Z,d\theta({\bf 1}) (Z))$, where $B$ is the Killing form on $\mathfrak{g}$. Denote by $U(\mathfrak{g} )$ the universal enveloping algebra of $\mathfrak{g}$ and by $\varphi_0$ the basic spherical function. For $D,E\in U(\mathfrak{g})$, $s\in \mathbb{R}$ and $f\in C^\infty (G)$, let $$p_{D,E,s}(f):=\sup_{x\in G}|L_D R_Ef(x)|\varphi_0(x)^{-1}(1+\sigma (x))^s\, .$$ Then $\mathcal{C} (G)$ is the space of smooth functions on $G$ such that $p_{D,E,s}(f)<\infty $ for all such $E,D$ and $s$, cf. \cite{HC66}, \S 9. We set $$\mathcal{C} (G)_{\rm sph}= \mathcal{C} (G)\cap L^2(G)_{\rm sph}\, .$$ For $\tau\in \hat K$ we write $|\tau|$ for the norm of the corresponding highest weight, and $L^2(B)_\tau$ for the subspace of $L^2(B)$ which transforms according to $\tau$. We denote by $\mathbb{D}(i\mathfrak{a}^*)$ the algebra of all constant coefficient differential operators on $i\mathfrak{a}^*$ and set $\mathcal{S}(i\mathfrak{a}_+^*)=\{f|_{i\mathfrak{a}_+^*}\mid f\in \mathcal{S} (i\mathfrak{a}^*)\}$. We cite a theorem of Arthur \cite{Arthur}, p.
4719, specialized to the spherical case: \begin{theorem}\label{th-arthur} The operator valued Fourier transform $\mathcal{F}$ is a topological linear isomorphism from $\mathcal{C}(G)_{\rm sph}$ onto \begin{align*} \{ &A(\, \cdot \, )\in \int_{i\mathfrak{a}_+^*}^\oplus B_2(\mathcal{H}_\lambda) \, \frac{d\lambda}{|\mathbf{c}(\lambda)|^2} \mid (\forall v,w\in L^2(B )_{\rm K-fin}) \, \langle A(\cdot )v,w\rangle \in \mathcal{S}(i\mathfrak{a}_+^*)\, , \\ &\forall D\in \mathbb{D}(i\mathfrak{a}^*), \forall n\in \mathbb{N}_0 \quad \sup_{\lambda\in i\mathfrak{a}_+^*, \sigma,\tau \in \hat K\atop v\in L^2(B)_\sigma, w\in L^2(B)_\tau} {|D \langle A(\lambda) v, w\rangle|\over \|v\|\cdot \|w\|} (1+|\lambda|)^n (1+ |\tau|)^n (1+|\sigma|)^n<\infty \}\, .\end{align*} \end{theorem} \subsection{The most-continuous Hardy space} We recall now the spectral definition of the Hardy space $\mathcal{H}^2(D)$ from \cite{GKO}. For $v\in \mathcal{H}_\lambda$ define an analytic function $f_{v, \lambda}$ on $X$ by $$f_{v,\lambda}(x)=\langle \pi_\lambda(x^{-1}) v, v_{K, \lambda}\rangle= \langle v, \pi_\lambda (x) v_{K, \lambda}\rangle\, .$$ Let us denote by $z\mapsto \bar{z}$ the complex conjugation in $G_\mathbb{C}$ with respect to $G$ and recall that $f_{v,\lambda}$ extends to a holomorphic function $\tilde f_{v,\lambda} $ on $D$ via $$\tilde f_{v,\lambda}(x)=\langle v, \pi_\lambda(\bar{x})v_{K,\lambda} \rangle$$ for $x\in D$, cf. \cite{GKO} and \cite{GKO2}, Proposition 2.2.3.
In particular $$f_{v,\lambda}(ga \cdot x_o)= \langle \pi_\lambda(g^{-1}) v, \pi_\lambda (a^{-1}) v_{K,\lambda }\rangle $$ for $g\in G$ and $a\in \exp (i\Omega_H)$. Define a generalized hyperbolic cosine function on $i\mathfrak{a}^*$ by \begin{equation}\label{eq-cosh} \mathrm{COSH} (\lambda)=\sum_{w\in \mathcal{W}/\mathcal{W}_H} z_H^{2w^{-1}\lambda} \end{equation} for $\lambda\in i\mathfrak{a}^*$. Define a measure $\mu$ on $i\mathfrak{a}_+^*$ by \begin{equation}\label{eq-mu} d\mu(\lambda) =\frac{d\lambda} {\mathrm{COSH}(\lambda) \cdot |\mathbf{c}(\lambda)|^2}\, . \end{equation} With this preparation we can define the unitary representation $(L ,\mathcal{H}^2(D))$ of $G$ by $$(L ,\mathcal{H}^2(D))=\int^\oplus_{i\mathfrak{a}_+^*}(\pi_\lambda ,\mathcal{H}_\lambda )\, d\mu (\lambda)\, .$$ Thus $\mathcal{H}^2(D)$ is the Hilbert space of all measurable functions $s :i\mathfrak{a}_+^*\to L^2(M\backslash K)$ such that $\|s\|^2=\int_{i\mathfrak{a}_+^*}\|s (\lambda ) \|^2 \, d\mu (\lambda )<\infty$. In the sequel we often write $s_\lambda$ for $s (\lambda )$. Let us denote by $\|\cdot\|_H$ the norm on $\mathcal{H}^2(D)$. Recall from \cite{GKO} that the map $$\Phi: \mathcal{H}^2(D)\hookrightarrow \mathcal{O}(D), \ \ s=(s_\lambda)\mapsto \left( x\mapsto \int_{i\mathfrak{a}_+^*} \tilde f_{s_\lambda, \lambda}(x) \, d\mu(\lambda)\right)$$ is a $G$-equivariant continuous injection. In the sequel we often view $\mathcal{H}^2(D)$ as a subspace of $\mathcal{O}(D)$; we call $\mathcal{H}^2(D)$ the {\it most-continuous Hardy space of $Y$}. This notion is motivated by the main result of \cite{GKO} which states that there exists a $G$-equivariant boundary value mapping $$b: \mathcal{H}^2(D)\to L_{\rm mc}^2 (Y), \ \ f\mapsto b(f)\, $$ which is an isometry onto a multiplicity free subspace of $L_{\rm mc}^2(Y)$.
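To explain the name of the generalized hyperbolic cosine, consider as a sketch the rank-one hyperboloid case of the final section, where $z_H=\exp(i\frac{\pi}{2}Z)$ with $\alpha(Z)=1$, and where (as we assume here) $\mathcal{W}/\mathcal{W}_H=\{\pm 1\}$. Writing $\lambda=is\alpha\in i\mathfrak{a}^*$:

```latex
\[
  \mathrm{COSH}(\lambda)=z_H^{2\lambda}+z_H^{-2\lambda}
   =e^{i\pi\lambda(Z)}+e^{-i\pi\lambda(Z)}
   =e^{-\pi s}+e^{\pi s}
   =2\cosh(\pi s)\,.
\]
% In this case COSH >= 2 on the imaginary axis, and the weight 1/COSH in the measure d\mu
% damps the Plancherel density exponentially at infinity.
```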
\subsection{The Fourier transform on $X$ and the Hardy space} The definition of $\mathcal{H}^2 (D)$ in the previous subsection does not use the Fourier transform on $X$. But the following lemma shows that the space $\mathcal{H}^2 (D)$ has a natural description in terms of the Fourier transform. \begin{lemma}\label{eq=hn} Let $f\in \mathcal{H}^2(D)$. Then the following assertions hold: \begin{enumerate} \item $f|_X\in L^2(X)$ and \begin{eqnarray*} f (z) &=&\int_{{\mathcal X}} \widehat{f|_X}(b,\lambda )\, \mathrm{COSH} (\lambda) \, a(bz)^{\lambda +\rho}\, db\, d\mu (\lambda )\qquad (z\in D)\\ \|f\|_H^2&=&\int_{\mathcal X} |\widehat{f|_X}(b,\lambda)|^2\cdot \mathrm{COSH}(\lambda)\, d\mu_{\mathcal X}(b,\lambda)\ge \|f|_X\|_{L^2(X)}^2 \, . \end{eqnarray*} \item If $f=\Phi^{-1}(s_\lambda )$, then $$(\widehat{f|_X})_\lambda = \frac{s_\lambda}{\mathrm{COSH} (\lambda )}\, .$$ \item For $a\in \exp (i\Omega_H)$ let $f_a:G\to \mathbb{C}$ be defined by $f_a(g)=f(ga\cdot x_o)$. Let $Q\subset \exp (i\Omega_H)$ be compact with $a\in Q$. Then there exists a constant $C_Q>0$ such that $$ \|f_a\|_{L^2(G)}\le C_Q\| f\|_H\, . $$ \end{enumerate} \end{lemma} \begin{proof} (1) and (2). Let $f\in \mathcal{H}^2(D)$ and $f=\int_{i\mathfrak{a}_+^*} \tilde f_{s_\lambda,\lambda} \, d\mu (\lambda )$.
Then obviously we have (2), i.e., \begin{equation}\label{eq-decomposition} f=\int_{i\mathfrak{a}_+^*}\frac{\tilde f_{s_\lambda,\lambda} }{\mathrm{COSH} (\lambda )}\, \frac{d\lambda}{|\mathbf{c} (\lambda )|^2} \end{equation} and \begin{eqnarray*} \|f\|_H^2&=&\int_{i\mathfrak{a}_+^*}\|s_\lambda \|^2_{L^2(B)}\, d\mu (\lambda )\\ &=& \int_{i\mathfrak{a}_+^*}\left\|\frac{s_\lambda}{\mathrm{COSH} (\lambda )} \right\|^2_{L^2(B)} \, \mathrm{COSH} (\lambda )\, \frac{d\lambda }{|\mathbf{c} (\lambda )|^2}\\ &\ge &\int_{i\mathfrak{a}_+^*}\left\|\frac{s_\lambda}{\mathrm{COSH} (\lambda )} \right\|^2_{L^2(B)} \, \frac{d\lambda }{|\mathbf{c} (\lambda )|^2}\\ &=& \|f|_X\|^2_{L^2 (X)}\, . \end{eqnarray*} Thus $f|_X\in L^2(X)$ and we can write $f|_X=\int_{\mathcal X} \widehat{f|_X}(b,\lambda) a(b\, \cdot \, )^{\rho+\lambda}\, db\, \frac{d\lambda}{ |\mathbf{c}(\lambda)|^2}$. Equation (\ref{eq-decomposition}) implies that $\widehat{f|_X} (b,\lambda )=s_\lambda (b)/\mathrm{COSH} (\lambda )$, i.e.\ $s_\lambda (b)=\widehat{f|_X}(b,\lambda )\,\mathrm{COSH} (\lambda )$, for almost all $\lambda$. This finishes the proof of (1) and (2). \par (3) We recall Faraut's version of the Gutzmer identity \cite{Faraut}: \begin{equation}\label{eq=Gutzmer} \int_G |f(ga\cdot x_o)|^2 \ dg =\int_{\mathcal X} |\widehat{ f|_X} (b,\lambda)|^2 \varphi_\lambda(a^2) \ db\, \frac{d\lambda}{|\mathbf{c}(\lambda)|^2} \end{equation} where $\varphi_\lambda(a^2)$ is the analytically continued spherical function given by $$\varphi_\lambda(a^2)=\int_K \left|a(ka)^{\rho+\lambda}\right|^2 \ dk \,$$ (cf. \cite{KS}, Sect. 4).
Now for a compact subset $Q\subset \exp (i\Omega_H)$ there exists a constant $C_Q>0$ such that \begin{equation}\label{EST} (\forall \lambda\in i\mathfrak{a}^*)\qquad \varphi_\lambda(a^2)\leq C_Q\,\mathrm{COSH}(\lambda) \end{equation} for all $a\in Q$ (cf.\ \cite{kos05}, Lemma 2.1), and the assertion of the lemma follows. \end{proof} In order to define the Radon transform for functions in the Hardy space we first need a technical fact which is of independent interest. Let $ \left(\int^\oplus_{i\mathfrak{a}^*_+}\mathcal{H}_\lambda \, d\mu (\lambda )\right)_0$ denote the space of all sections $(s_\lambda)$ such that for all $v\in L^2(B)_{\hbox{$K$-finite}} $ $$i\mathfrak{a}_+^*\ni \lambda \mapsto \langle s_\lambda ,v \rangle \in \mathcal{S}(i\mathfrak{a}_+^*)\, $$ and \begin{equation}\label{rd} (\forall D\in \mathbb{D}(i\mathfrak{a}^*), \forall n\in \mathbb{N}_0) \sup_{\lambda\in i\mathfrak{a}_+^*, \tau\in \hat K \atop v\in L^2(B)_\tau} {| D \langle s_\lambda, v\rangle |\over \|v\|} (1+|\lambda|)^n (1+|\tau|)^n<\infty \, .\end{equation} Then we set \begin{equation}\label{eq-Hc0}\mathcal{H}^2(D)_0=\Phi^{-1} \left( \left(\int^\oplus_{i\mathfrak{a}^*_+}\mathcal{H}_\lambda \, d\mu (\lambda )\right)_0 \right)\, . \end{equation} \begin{theorem}\label{th=Schwartz} Let $f\in \mathcal{H}^2(D)_0$. Fix $z\in T(\Omega_H)$. Then the function $$G\ni g\mapsto f(gz\cdot x_o)\in\mathbb{C}$$ belongs to $\mathcal{C} (G)$. Moreover, the following functions are locally bounded on $T(\Omega_H)$: \begin{enumerate} \item $z\mapsto \int_G |f(gz\cdot x_o)|^2 \ dg$ \item $z\mapsto \int_N |f(nz\cdot x_o)| \ dn$ \end{enumerate} \end{theorem} \begin{proof} Without loss of generality we may assume that $z=a\in \exp(i\Omega_H)$. In the sequel we often identify $f$ with $f|_X$, a right $K$-invariant function on $G$. Let $v,w\in L^2(B)_{K-{\rm finite}}$.
Then, by (\ref{eq-FT}) and Lemma \ref{eq=hn}: \begin{eqnarray*} \langle \mathcal{F} (R_a f)(\lambda )v,w \rangle & =&\langle \mathcal{F} (f)(\lambda )\pi_\lambda (a)v, w\rangle\\ &=&\langle \pi_\lambda (a)v,v_{K,\lambda } \rangle \langle \hat{f}_\lambda ,w\rangle\\ &=&\frac{\langle v, \pi_\lambda (a)v_{K,\lambda } \rangle}{\mathrm{COSH} (\lambda )}\langle s_\lambda ,w\rangle \, . \end{eqnarray*} Let us write $F(\lambda, v,w):=\langle \mathcal{F} (R_a f)(\lambda )v,w \rangle $. It is clear that $F$ is smooth in the $\lambda$-variable. In order to show that $g\mapsto f(ga\cdot x_0)$ belongs to $\mathcal{C}(G)$ we use Arthur's Theorem \ref{th-arthur}: we have to show for all $n\in \mathbb{N}_0$ and $D\in \mathbb{D}(i\mathfrak{a}^*)$ that \begin{equation}\label{p1}\sup_{\lambda\in i\mathfrak{a}_+^*, \sigma,\tau\in \hat K\atop v\in L^2(B)_\tau, w\in L^2(B)_\sigma} {\left|D F(\lambda, v, w) \right|\over \|v\|\cdot\|w\|} (1+|\lambda|)^n (1+|\tau|)^n (1+|\sigma|)^n <\infty\, .\end{equation} We use the expression for $F(\lambda, v,w)$ derived above to compute its derivatives: for fixed $D$, the Leibniz rule yields finitely many $D_1, D_1',\ldots, D_N, D_N'$ in $\mathbb{D}(i\mathfrak{a}^*)$ such that \begin{equation}\label{p2} D F(\lambda, v, w)= \sum_{j=1}^N \left[D_j \frac{\langle v,\pi_\lambda (a)v_{K,\lambda } \rangle}{\mathrm{COSH} (\lambda )}\right] \cdot \left[D_j' \langle s_\lambda, w\rangle\right]\, .
\end{equation} According to (\ref{rd}), for all $D\in \mathbb{D}(i\mathfrak{a}^*)$, $n\in \mathbb{N}_0$ the following estimate holds: \begin{equation}\label{p3}\sup_{\lambda\in i\mathfrak{a}_+^*, \sigma\in \hat K\atop w\in L^2(B)_\sigma} {\left|D \langle s_\lambda, w\rangle\right |\over \|w\|} (1+|\lambda|)^n (1+|\sigma|)^n <\infty\, .\end{equation} Now, after combining (\ref{p1})-(\ref{p3}), it remains to show for all $D\in \mathbb{D}(i\mathfrak{a}^*)$, $n\in\mathbb{N}_0$: \begin{equation}\label{p4}\sup_{\lambda\in i\mathfrak{a}_+^*, \tau\in \hat K\atop v\in L^2(B)_\tau} {\left|D \frac{\langle v, \pi_\lambda (a)v_{K,\lambda } \rangle}{\mathrm{COSH} (\lambda )} \right|\over \|v\|} (1+|\tau|)^n <\infty\, .\end{equation} To establish (\ref{p4}), we first note that $\pi_\lambda(a)v_{K,\lambda }$ is an analytic vector for the representation. \par Now if $w\in \mathcal{H}_\lambda^\omega= C^\omega(M\backslash K)$ and $w=\sum_{\tau \in \hat K} w_\tau$ is its expansion in $K$-types, then we recall from \cite{KV}, Th. 2.2 (1), that there exists a $\delta>0$ such that $$\|w_\tau\| \ll e^{-\delta|\tau|}\, .$$ To be more precise, given $\delta>0$ sufficiently small, there exists a ball $U$ around ${\bf 1}$ in $K_\mathbb{C}$ such that $\pi_\lambda(t)w$ exists for all $t\in U$ and $$\|w_\tau\| \leq (\sup_{t\in U} \|\pi_\lambda (t) w\|) \cdot e^{-(\delta/2) |\tau|}\, .$$ \par Coming back to our original situation where our analytic vector is $w=\pi_\lambda(a) v_{K,\lambda}$, we obtain that $\pi_\lambda(t)\pi_\lambda(a) v_{K,\lambda}$ exists for all $t\in U$. It follows that $\pi_\lambda(G U aK_\mathbb{C}) v_{K,\lambda}$ exists. As the crown is a domain of holomorphy for principal series representations (cf. \cite{K}), we obtain, by continuity, a compact neighborhood $Q$ of $a$ in $\exp(i\Omega_H)$ such that $UaK_\mathbb{C} \subset G Q K_\mathbb{C}$.
By the very nature of the principal series, it is easy to see that $U$ can be chosen independently of $\lambda$. Hence we get a $C>0$ such that for all $\lambda\in i\mathfrak{a}^*$, $\tau\in \hat K$ and $v\in L^2(B)_\tau$ \begin{equation}\label{p5} {|\langle v, \pi_\lambda (a)v_{K,\lambda }\rangle| \over \|v\|} \leq C (1+|\tau|)^{-n}\cdot \sup_{b\in Q} \|\pi_\lambda (b)v_{K,\lambda }\|\, . \end{equation} Moreover, $\sup_{b\in Q}{\|\pi_\lambda (b)v_{K,\lambda }\|\over \sqrt{\mathrm{COSH} (\lambda )}}$ is uniformly bounded in $\lambda$ by (\ref{EST}). This, in combination with (\ref{p5}), proves (\ref{p4}) for the case of $D={\bf 1}$. Now, if $D$ is of order $m$, we first observe that $$(\forall k\in K)(\forall b\in T(\Omega_H))\quad \left[D \pi_\lambda (b)v_{K,\lambda }\right](Mk)= p_k \big((\lambda +\rho)(\log a(kb))\big) \left(\pi_\lambda (b)v_{K,\lambda }\right)(Mk)$$ for a polynomial $p_k$ of degree $m$ which is independent of $\lambda$. It follows from the complex convexity theorem (\ref{cc}) that $$\sup_{b\in Q} \sup_{k\in K} |p_k \big((\lambda +\rho)(\log a(kb))\big)| \leq C \cdot \sqrt{\mathrm{COSH} (\lambda )}\, .$$ Furthermore the decay of ${1\over \mathrm{COSH} (\lambda )}$ is not affected by differentiation. Therefore we obtain (\ref{p4}) for all $D$. \par Moving on to (1) and (2), we observe that statement (1) is Lemma \ref{eq=hn}, part 3; part (2) follows from the fact just established in conjunction with Lemma 22 in \cite{HC66}. \end{proof} \subsection{The definition of the Radon transform} Denote by $CR(\Xi_+)$ the vector space of continuous $CR$-functions on $\Xi_+$, i.e.\ the space of continuous functions on $\Xi_+\simeq K/M\times T(\Omega_H)$ (cf. (\ref{eq=decomp})) which are holomorphic in the second variable. \begin{lemma}\label{prele} Let $f\in \mathcal{H}^2(D)_0$.
Then the assignment $$\Xi_+\ni \xi=ga\cdot\xi_o\mapsto a^{-2\rho}\int_N f(gna\cdot x_o)\ dn\in \mathbb{C} \qquad (g\in G, a\in \exp(i\Omega_H))$$ defines a $CR$-function on $\Xi_+$. \end{lemma} \begin{proof} It follows from Theorem \ref{th=Schwartz} that the right hand side is a continuous function. It remains to show that it is a $CR$-function. For that let $g=kb$ for $k\in K$ and $b\in A$. The right hand side becomes $$a^{-2\rho}\int_N f(kbna\cdot x_o)\ dn=(ab)^{-2\rho}\int_N f(knba\cdot x_o)\ dn$$ and the holomorphicity in $ab$ follows from Theorem \ref{th=Schwartz}. \end{proof} In view of this lemma, the prescription $$\mathcal{R}: \mathcal{H}^2(D)_0\to CR(\Xi_+), \ \ f\mapsto \left( \xi=ga\cdot\xi_o\mapsto a^{-2\rho}\int_N f(gna\cdot x_o)\ dn\right)$$ is a well defined and continuous $G$-equivariant map. We call $\mathcal{R}$ the \textit{holomorphic horospherical Radon transform}. \begin{remark} (a) We recall the real Radon transform on $X$ from (\ref{rera}). Now if $f\in \mathcal{H}^2(D)_0$, then $f|_X\in L^2(X)$ by Lemma \ref{eq=hn}. Thus, for $g\in G$, we get $$\mathcal{R}(f)(g\cdot \xi_o)=\mathcal{R}_\mathbb{R}(f|_X)(g\cdot \xi_o)\, . $$ In other words, the holomorphic Radon transform restricted to $\Xi_\mathbb{R}$ agrees with the real Radon transform of the restricted function on $X$. \par (b) Notice that $E(\xi)\cap D$ for $\xi\in \Xi_+$ contains the real horosphere $E_\mathbb{R}(\xi)$. The holomorphic Radon transform $\mathcal{R}$ can then be written as $$\mathcal{R}(f)(\xi)=\int_{E_\mathbb{R}(\xi)} f(\xi') \ d\nu_\xi(\xi')$$ where $d\nu_\xi$ equals $a^{-2\rho}$ times the measure on $E_\mathbb{R}(\xi)$ obtained by the natural identification of the real horosphere $E_\mathbb{R}(\xi)$ with $N$. It is clear that any other $N$-orbit in $E(\xi)\cap D$ would yield the same result.
\end{remark} \par If the function $f\in \mathcal{H}^2(D)_0$ is left $K$-invariant, then we can define the \textit{holomorphic Abel transform} by $$\mathcal {A}(f)(z)=z^{-\rho} \int_N f(nz\cdot x_o)\ dn \qquad (z\in T(\Omega_H) )\, .$$ Note that $\mathcal {A}$ is just the restriction of the holomorphic Radon transform to $K$-invariant functions (modulo the $z^{-\rho}$-factor). Further let us remark that $\mathcal {A}$ gives a continuous mapping $$\mathcal {A}:\mathcal{H}^2(D)_0^K\to \mathcal{O}(T(\Omega_H))^\mathcal{W}, \ \ \mathcal {A}(f)(z)=z^{-\rho} \int_N f(nz\cdot x_o)\ dn\ .$$ \section{The holomorphic Radon transform as Cauchy integral I: The hyperboloid} \noindent In this and the next section we will show (for an appropriate class of $Y$'s) that the holomorphic Radon transform on the NCC-space $Y$ can be expressed as a Cauchy type integral. For that purpose it is instructive to explain the example of the hyperboloid first. For earlier treatments of the hyperboloid with alternative methods we refer to \cite{G1}, \cite{G2}. We start by recalling some standard function spaces on $Y$. \subsection{Function spaces} Let $Y$ be an NCC-space. For $g\in G$ let $\Theta (g)=\varphi_0(g\tau (g)^{-1})^{1/2}$. Then $\Theta$ is left $K$-invariant and right $H$-invariant. For $g=k\exp (Z) h$ with $Z\in \mathfrak{a}$ define $\|g\cdot y_0\|:=\|Z\|$, where $\|Z\|=\sqrt{\mathop{\mathrm{Tr}} \mathrm{ad} (Z)^2}$. Let $D\in U(\mathfrak{g} )$, where $U(\mathfrak{g} )$ is the enveloping algebra of $\mathfrak{g}_\mathbb{C}$, and view $L_D$ as a differential operator on $Y$. For $n\in \mathbb{N}$ and $f\in C^\infty (Y)$ define $$p_{n,D}(f)=\sup_{y\in Y}\Theta(y)^{-1}(1+\|y \|)^n |L_Df(y)|\, .$$ Then the Schwartz space $\mathcal{C} (Y)$ is defined as the space of smooth functions on $Y$ such that $p_{n,D}(f)<\infty $ for all $n$ and $D$. It is well known that $\mathcal{C} (Y)\subset L^2(Y)$, but $\mathcal{C} (Y)$ is not contained in $L^1(Y)$.
We will therefore need a smaller space to make sure that the Cauchy integral exists. For that, define for $r>0$ the space $$\mathcal{S}_r(Y):=\{ f\in C^\infty (Y)\mid (\forall D\in U(\mathfrak{g} ))\, \sup_{y\in Y}e^{r\|y\|}|L_Df(y)| <\infty\}$$ and $$\mathcal{S} (Y):=\bigcap_{r>0}\mathcal{S}_r(Y)\, .$$ Then $\mathcal{S} (Y)\subset L^1(Y)\cap L^2(Y)$ and $C_c^\infty (Y)\subset \mathcal{S} (Y)\subset \mathcal{C} (Y)$. The space $\mathcal{S} (Y)$ is called the (zero) \textit{Schwartz space}, cf. \cite{FD91}. It follows from Theorem 3 in \cite{FD91} and our spectral definition of the space $\mathcal{H}^2(D)$ that $$\mathcal{H}^2 (D)_{00}:=\{ f\in \mathcal{H}^2(D)_0\mid f\ \hbox{$K$-finite}, \ b(f)\in \mathcal{S}(Y)\}$$ is dense in $\mathcal{H}^2(D)$. Here we used the fact that elements in $\mathcal{H}^2(D)_0$ have boundary values on $Y$ (cf.\ \cite{GKO}, Sect.\ 3 for a similar argument). In particular, every element $f\in \mathcal{H}^2(D)_{00}$ is integrable on $Y$, has a holomorphic extension to $D$, and the integral $\int_N f(anw\cdot y_0)\, dn$ is well defined for all $a\in A$ and $w\in \mathcal{W}$. We will use this without comment in the following. \subsection{The Radon transform and Cauchy integral on the hyperboloid} Assume that $n\ge 2$ and let $G=\mathrm{SO}_e(1,n)$ be the Lorentz group. Let us fix our choices for the groups $A$, $N$ and $K$. For the maximal compact subgroup we take $$K=\left\{k_R=\begin{pmatrix} 1 & 0\\ 0 &R\end{pmatrix}\mid R\in \mathrm{SO}(n)\right\}\simeq \mathrm{SO} (n)\, .$$ Next, for $z\in \mathbb{C}$ we set $$a_z=\begin{pmatrix} \cosh z & 0 & \sinh z\\ 0 & \mathbf{1}_{n-1}& 0\\ \sinh z & 0 & \cosh z \end{pmatrix}$$ and $$A=\{ a_t\mid t\in \mathbb{R}\}\qquad \hbox{and}\qquad A_\mathbb{C}=\{ a_z\mid z\in \mathbb{C}\}\, .$$ Note that $\mathfrak{a}=\mathbb{R} Z$ and $a_z=\exp (z Z )$ with $Z=E_{1\, n+1}+E_{n+1\, 1}$.
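As a quick sanity check of these choices, one can verify numerically that the matrices $a_z$ form a one-parameter group generated by $Z$ and preserve the Lorentz form; a small sketch (the size $n=3$ is an arbitrary choice for illustration):

```python
import numpy as np

n = 3  # illustration only; any n >= 2 works

def a(z):
    # the matrix a_z from the text, acting on C^{n+1}
    m = np.eye(n + 1, dtype=complex)
    m[0, 0] = m[n, n] = np.cosh(z)
    m[0, n] = m[n, 0] = np.sinh(z)
    return m

J = np.diag([1.0] + [-1.0] * n)           # matrix of the form x0^2 - x1^2 - ... - xn^2
t, s = 0.7, -1.3
assert np.allclose(a(t).T @ J @ a(t), J)   # a_t preserves the Lorentz form
assert np.allclose(a(t) @ a(s), a(t + s))  # one-parameter group law a_t a_s = a_{t+s}

# generator: d/dz a_z |_{z=0} = Z = E_{1,n+1} + E_{n+1,1} (matrix indices 0 and n here)
Z = np.zeros((n + 1, n + 1)); Z[0, n] = Z[n, 0] = 1.0
h = 1e-6
assert np.allclose((a(h) - a(-h)) / (2 * h), Z, atol=1e-8)
```

In particular, for real $t$ the matrices $a_t$ lie in $\mathrm{SO}_e(1,n)$, while complex $z$ gives the complexification $A_\mathbb{C}$.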
The only positive root is $\alpha$, determined by $\alpha (Z)=1$, and hence $Z_H=\frac{\pi }{2}Z$. We also have that $\rho = \frac{n-1}{2} \alpha $. Further, for $v\in \mathbb{C}^{n-1}$ and $(v, v) =\sum_{j=1}^{n-1} v_j v_j$ we define a unipotent matrix $$n_v=\begin{pmatrix} 1+ \frac{1}{2} (v, v) & v^T & -\frac{1}{2} (v, v)\\ v & \mathbf{1}_{n-1}& -v \\ \frac{1}{2} (v, v) & v^T & 1 -\frac{1}{2} (v, v) \end{pmatrix}\, . $$ Then $N$ and $N_\mathbb{C}$ are given by $$N=\{ n_v\mid v\in \mathbb{R}^{n-1}\}\qquad \hbox{and}\qquad N_\mathbb{C}=\{ n_v\mid v\in \mathbb{C}^{n-1}\}\, .$$ \par Define a bilinear form $$z,w\mapsto z\cdot w=z_0w_0-\sum_{j=1}^nz_jw_j$$ on $\mathbb{C}^{n+1}$ and let $$\square({\bf z})=z_0^2-z_1^2-\ldots -z_n^2 \qquad ({\bf z}=(z_0, \ldots, z_n)^T\in \mathbb{C}^{n+1})$$ be the corresponding quadratic form. We define the real and complex hyperboloids by $$X=\{ {\bf x}\in\mathbb{R}^{n+1}\mid \square({\bf x})=1, x_0>0\}$$ and $$X_\mathbb{C}=\{ {\bf z}\in\mathbb{C}^{n+1}\mid \square({\bf z})=1\}\, . $$ As a common base point for $X$ and $X_\mathbb{C}$ we take ${\bf x}_o=(1,0,\ldots, 0)^T$ and note that the map $$G_\mathbb{C}/ K_\mathbb{C} \to X_\mathbb{C}, \ \ gK_\mathbb{C}\mapsto g({\bf x}_o)$$ is a $G_\mathbb{C}$-isomorphism which identifies $G/K$ with $X$. We have that $a_z\cdot {\bf x}_o=(\cosh (z),0,\ldots ,0,\sinh(z))^T$ and hence ${\bf y}_0=z_H\cdot {\bf x}_o=(0,\ldots ,0,i)^T$. It is clear that the stabilizer of ${\bf y}_0$ is $$H=\left\{\begin{pmatrix} h & 0\\ 0 & 1\end{pmatrix}\mid h\in \mathrm{SO}_e(1,n-1)\right\}\simeq \mathrm{SO}_e(1,n-1)\, .$$ We have therefore with this identification: $$D=\{ {\bf z}={\bf x}+i{\bf y}\in X_\mathbb{C}\mid \square({\bf x})>0, x_0>0\}$$ and $$Y=G ({\bf y}_0 )=\{ i{\bf y}\in i\mathbb{R}^{n+1}\mid \square({\bf y})=-1\}\, .
$$ Set $$\Xi=\{ \zeta\in \mathbb{C}^{n+1}\mid \zeta\neq 0, \square(\zeta)=0\}\, .$$ If $\xi_o=(1,0, \ldots, 0, 1)^T\in \Xi$, then the stabilizer of $\xi_o$ is $M_\mathbb{C} N_\mathbb{C}$ and the map $$G_\mathbb{C}/ M_\mathbb{C} N_\mathbb{C} \to \Xi, \ \ gM_\mathbb{C} N_\mathbb{C}\mapsto g\cdot\xi_o$$ is a $G_\mathbb{C}$-isomorphism. Now the $CR$-submanifold $\Xi_+\subset\Xi$ is described as \begin{eqnarray*} \Xi_+&=&G\left\{ (e^{it}, 0, \ldots, 0, e^{it})^T\mid |t|<\frac{\pi}{2}, t\in \mathbb{R}\right\}\\ &=&\{\zeta=\xi+i\eta\in\Xi: \square(\xi)=\square(\eta)=0; \ \xi\neq 0\}_0\, . \end{eqnarray*} We will also use certain $G$-subdomains of $\Xi_+$: for $0<c\leq \frac{\pi}{ 2}$ set $$\Xi_c=G\left\{ (e^{it}, 0, \ldots, 0, e^{it})^T\mid |t|<c, t\in \mathbb{R} \right\}\, .$$ \par In order to define the Cauchy-transform we need to establish a simple, but important, technical fact. \begin{lemma}\label{lem=hyp} For all $\xi\in \Xi_+$ and $y\in Y$ one has $$\xi\cdot y\not\in \mathbb{R}^\times \, .$$ More precisely, for all $0<c<\frac{\pi}{2}$ there exists a $C>0$ such that $$(\forall y\in Y) (\forall \xi\in \Xi_c) \qquad |1-\xi\cdot y|>C\, .$$ \end{lemma} \begin{proof} By $G$-equivariance of the form we may assume that $\xi=e^{it}\xi_o$. Let $y=i{\bf y}$ for ${\bf y}\in \mathbb{R}^{n+1}$. Thus $\xi\cdot y= ie^{it} \xi_o\cdot {\bf y}$ with $\xi_o\cdot{\bf y}\in \mathbb{R}$. As $|t|<\frac{\pi }{2}$, the assertions follow. \end{proof} We now define the Cauchy-kernel function $$\mathcal{K}: \Xi_+\times Y\to \mathbb{C} , \ \ (\xi, y)\mapsto \frac{1}{1- \xi\cdot y}\, .$$ In view of the previous lemma, this function is defined, continuous and bounded on all subsets $\Xi_c\times Y$ for $c<\frac{\pi}{2}$.
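In the coordinates of the proof, the lemma reduces to the elementary estimate $|1-ie^{it}s|^2=1+2s\sin t+s^2\ge\cos^2 t$ for $s=\xi_o\cdot{\bf y}\in\mathbb{R}$, so that $C=\cos c$ works for $\Xi_c$. A quick numerical confirmation of this bound (the sampling ranges are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
c = 1.2                  # any 0 < c < pi/2
lower = np.cos(c)        # claimed uniform lower bound for |1 - xi . y| on Xi_c x Y

t = rng.uniform(-c, c, 100_000)    # xi = e^{it} xi_o with |t| <= c
s = rng.uniform(-50, 50, 100_000)  # s = xi_o . y, an arbitrary real number
vals = np.abs(1 - 1j * np.exp(1j * t) * s)
assert vals.min() >= lower - 1e-12

# the bound is sharp: at s = -sin(t) the modulus equals exactly |cos t|
t0 = 0.9
assert np.isclose(abs(1 - 1j * np.exp(1j * t0) * (-np.sin(t0))), np.cos(t0))
```

This also makes visible why the bound degenerates as $c\to\frac{\pi}{2}$: the constant $\cos c$ tends to $0$.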
Moreover, $\mathcal{K}$ is a $CR$-function in the first variable and $G$-invariant, i.e., $\mathcal{K}(g\cdot \xi ,g\cdot y)=\mathcal{K}(\xi,y)$. In particular, the function $$G\ni g\mapsto \mathcal{K} (g)=\mathcal{K} (\xi_o,g\cdot {\bf y}_0)\in \mathbb{C}$$ is left $N$-invariant and right $H$-invariant, a fact that we will use in a moment. We will therefore identify $\mathcal{K}$ with a function on $Y$ without further comment. A simple calculation shows that \begin{equation}\label{eq-KSO} \mathcal{K}(g)=\frac{1}{1-i(g_{0\, n}-g_{n\, n})}\, . \end{equation} We have $\mathcal{W}_H=\{\mathbf{1} \}$, and $\mathcal{W}=\{\mathbf{1} , \varepsilon\}$ where $\varepsilon=-1$ on $\mathfrak{a}$. As $\varepsilon$ corresponds to the matrix $w=\begin{pmatrix} I_{n-1} & 0\cr 0&-I_2\end{pmatrix}$ it follows that \begin{equation}\label{eq-K2} \mathcal{K} (a_z)=\frac{1}{1-ie^{-z}}\qquad\text{and}\qquad \mathcal{K} (a_zw)=\frac{1}{1+ie^{-z}}\, . \end{equation} Write $\mathcal{S}(Y)$ for the Schwartz space on $Y$. Henceforth we will make the assumption that $n$ is even and define the {\it Cauchy transform} by $$\mathcal{C}: \mathcal{S}(Y)\to CR(\Xi_+), \ \ \mathcal{C}(f)(\xi)=\int_Y \frac{f(y)}{1- \xi\cdot y}\, dy= \int_Y f(y)\mathcal{K}(\xi,y)\, dy\, $$ where $dy$ denotes the $G$-invariant measure on $Y$ from Subsection 1.6. \begin{remark} In the following we have to raise elements $z\in A_\mathbb{C}$ which lie in the boundary of $\exp(i\Omega_H)$ to complex powers. This is not a problem as the map $$\exp: 2\Omega_H\to A_\mathbb{C} , \ \ Z\mapsto \exp(iZ)$$ is injective; a fact which holds in full generality. \end{remark} \begin{theorem} Let $G=\mathrm{SO}_e(1,n)$ with $n=2k$ even. Let $f\in \mathcal{H}^2(D)_{00}$.
Then, up to normalization of measures on $A$, one has $$\mathcal{C}(f)(\xi)=(-1)^{k-1} 2\pi\cdot \mathcal{R}(f)(\xi) \qquad (\xi\in \Xi_+)\, .$$ \end{theorem} \begin{proof} Since both $\mathcal{C}(f)$ and $\mathcal{R}(f)$ are $CR$-functions, it is sufficient to show that both coincide on $G/MN\subset \Xi$. Moreover, by $G$-equivariance of both maps, it is in fact sufficient to show that \begin{equation}\label{e1} \mathcal{C}(f)(\xi_o)=(-1)^{k-1} 2\pi \cdot \mathcal{R}(f)(\xi_o)\, .\end{equation} Using (\ref{eq-K2}) and that $z_H^{2\rho}=(-1)^{k-1} i$ and $z_H^{-2\rho}=\overline{z_H^{2\rho}}=(-1)^k i$ we get: \begin{eqnarray*} \mathcal{C}(f)(\xi_o) &=& \int_Y f(y) \cdot \mathcal{K}( y)\ dy \\ &=& \sum_{w\in \mathcal{W}} \int_A \int_N f(anw\cdot {\bf y}_0)\cdot \mathcal{K}( anw)\ dn \ da \\ &=& \sum_{w\in \mathcal{W}} \int_A \int_N f(anw\cdot {\bf y}_0) \cdot \mathcal{K}( aw)\ dn \ da \\ &=& \sum_{w\in \mathcal{W}} \int_A \int_N f(anwz_H \cdot {\bf x}_o) \cdot \mathcal{K}(aw)\ dn \ da \\ &=& \sum_{w\in \mathcal{W}} \int_A \int_N f(anz_H^w \cdot {\bf x}_o)\cdot \mathcal{K}( aw)\ dn \ da \\ &=& \sum_{w\in \mathcal{W}} \int_A \int_N f(anz_H^w \cdot {\bf x}_o)\cdot (z_H^w)^{-2\rho}\cdot (z_H^w)^{2\rho}\cdot \mathcal{K}(aw)\ dn \ da \\ &=& \sum_{w\in \mathcal{W}} \int_A \mathcal{R}(f)(az_H^w \cdot \xi_o)\cdot (z_H^w)^{2\rho}\cdot \mathcal{K}( aw ) \ da \\ &=& (-1)^{k-1}i\left( \int_\mathbb{R} \frac{\mathcal{R}(f)(a_{t+i\frac{\pi }{2}}\cdot \xi_o)} {1- e^{-(t+ i\frac{\pi }{2})}}\, dt- \int_\mathbb{R} \frac{\mathcal{R}(f)(a_{t-i\frac{\pi }{2}}\cdot \xi_o)} {1- e^{-(t- i\frac{\pi }{2})}}\, dt\right) \, . \end{eqnarray*} Consider the strip domain $S=\{ z\in \mathbb{C}\mid |\mathop{\textrm{Im}} z|\le\frac{\pi }{2}\}$.
By our assumption on $f$, the function $$S\ni z\mapsto F(z)= \frac{i\mathcal{R}(f)(a_z\cdot \xi_o) }{ 1- e^{-z}}\in \mathbb{C}$$ defines a meromorphic function $F$ on $\mathop{\textrm{int}} S$ which extends continuously up to the boundary and which has at most a simple pole at $z=0$. Thus the Residue theorem yields that $$\mathcal{C}(f)(\xi_o) = (-1)^k 2\pi i \cdot {\rm Res}(F,0)= (-1)^{k-1} 2\pi \cdot \mathcal{R}(f)(\xi_o) $$ and this concludes the proof of our theorem. \end{proof} \begin{remark} (a) We mention that the geometric pairing $\xi\cdot y$ can be expressed using the previously defined power functions $f_j$ (cf.\ \ref{ef}): $$\xi\cdot y= f_1(\xi^{-1}y)\, .$$ \par\noindent (b) The assumption that $n$ is even is not a real restriction, as one can slightly modify $\mathcal{C}$ so that it works for all parities (see our discussion in the next section). \end{remark} \section{The holomorphic Radon transform as Cauchy integral II: Cayley type spaces} \noindent The technique used for the hyperboloid in the previous section can be used for NCC-spaces of Cayley type as well. Let us recall that Cayley type spaces are those which are associated to Euclidean Jordan algebras $V$: i.e.\ $X$ is a tube domain associated to $V$ and $H$ is the structure group of the cone of squares in $V$. In terms of the set of restricted roots $\Sigma$, this means that $\Sigma$ is of type $C_n$, say $$\Sigma=\left\{ \frac{1}{2}(\pm \gamma_i \pm \gamma_j)\mid 1\leq i, j\leq n \right\} \backslash \{ 0\}\, . $$ We assume now that $Y=G/H$ is of Cayley type. Define $T_j\in \mathfrak{a}$ by $\gamma_i(T_j)=\delta_{ij}$; then \begin{equation} \label{eq=om} \Omega_H=\bigoplus_{j=1}^n \left ]-\frac{\pi }{2}, \frac{\pi }{2}\right[ T_j\, .
\end{equation} As a basis of $\Sigma$ we shall choose $$\Pi=\left\{ \frac{1}{2}(\gamma_1-\gamma_2), \ldots, \frac{1}{2}(\gamma_{n-1}-\gamma_n),\gamma_n\right\}\, .$$ Obviously $\omega_1=-\gamma_1$ is a fundamental spherical lowest weight and accordingly $f_1(g)=\langle \pi_1(g)\eta_1, v_1\rangle$ defines a holomorphic function on $G_\mathbb{C}$ with $f_1(g) =a(g)^{-\gamma_1}$ for $g\in N_\mathbb{C} A_\mathbb{C} K_\mathbb{C}$. The analogue of Lemma \ref{lem=hyp} now reads as follows. \begin{lemma} \label{lem=tra} Let the notation be as above. Then the following holds: \begin{enumerate} \item $f_1(\exp(i\Omega_H)Gz_H)\subseteq \mathbb{C} \backslash \mathbb{R}^\times$. \item For all $0\leq c < 1$ there exists $C>0$ such that $$ (\forall g\in G)(\forall Z\in \Omega_H)\quad |1 -f_1(\exp(ic Z)gz_H)| > C\, .$$ \end{enumerate} \end{lemma} \begin{proof} First it is clear from (\ref{eq=om}) that \begin{equation}\label{e11} \exp(i\Omega_H)^{\omega_1}=\{ z\in \mathbb{C} \mid \mathrm{Re}\, z>0\}\, . \end{equation} Next recall that $\bigcup_{w\in \mathcal{W}} NA wH$ is dense in $G$ and that \begin{equation} \label{e22} f_1(nawh)=a^{\omega_1} z_H^{w\omega_1}\in \mathbb{R}^+\{ -i, i\}\, . \end{equation} Combining (\ref{e11}) and (\ref{e22}), the assertions follow. \end{proof} For $0<c<1$ we define a $G$-subdomain of $\Xi_+$ by \begin{equation}\label{eq-Xc} \Xi_c=G\exp (ic\Omega_H)\cdot\xi_o\, . \end{equation} Note that $-\gamma_j$, $j\not= 1$, is not a fundamental spherical lowest weight.
Therefore, for $1\leq j\leq n$ we define a meromorphic function on $G_\mathbb{C}$ directly by $$h_j(g)=a(g)^{-\gamma_j} \quad\hbox{for $g\in N_\mathbb{C} A_\mathbb{C} K_\mathbb{C}$}\, .$$ Note that $h_1=f_1$ and, in the same manner as in Lemma \ref{lem=tra}, one establishes that \begin{equation} h_j(\exp(i\Omega_H) Gz_H)\subseteq \left(\mathbb{C} \backslash \mathbb{R}^\times\right) \cup\{\infty\}\, . \end{equation} In particular we see that the prescription $$\mathcal{K}: \Xi_+ \times Y\to \mathbb{C} , \ \ (\xi, y)\mapsto \prod_{j=1}^n \frac{1}{ 1-h_j(\xi^{-1}y)}$$ defines an analytic function, $CR$ in the first variable and bounded on all subsets $\Xi_c\times Y$. \begin{remark} Alternatively, the kernel $\mathcal{K}$ can be expressed in Jordan algebra terms. Let $V$ be the Euclidean Jordan algebra associated to $X$ and $W\subset V$ its cone of squares. Form the tube domains $\mathcal{T}^\pm = V \pm iW \subset V_\mathbb{C}$. Then $X=\mathcal{T}^+$ and $D\simeq \mathcal{T}^+\times \mathcal{T}^-$ with $X$ realized in $D$ via the map $x\mapsto (x,\overline x)$. Write $\Delta_j$ for the power functions on $V_\mathbb{C}$ (i.e.\ generalized principal minors). Then, on $D$, one has $$h_j(z,w)=\frac{\Delta_j(z-w)}{\Delta_{j-1}(z-w)} \qquad\hbox{for $(z,w)\in D$}$$ with the understanding that $\Delta_0 \equiv 1$. Thus $\mathcal{K}$, when considered as a function on $D$, is given by $$\mathcal{K}(z,w)=\frac{\Delta_1 (z-w)\cdot \ldots\cdot \Delta_{n-1}(z-w)}{ \prod_{j=1}^n (\Delta_{j-1}(z-w) -\Delta_j(z-w))} \, .$$ \end{remark} However, $\mathcal{K}$ is not our Cauchy-kernel yet and some small modification is needed. For that recall the decomposition $\Xi_+\simeq G/MN \times \exp(i\Omega_H)$.
For a unitary character $\chi\in \widehat{ T/F}$, where $F=K\cap T$ is the canonical finite 2-group as usual, define the space of {\it $\chi$-twisted $CR$-functions} by \begin{equation}\label{eq-twisted} CR_\chi(\Xi_+)=\{ f(ga\cdot \xi_o)=h(ga\cdot\xi_o) \chi^{-1}(a)\mid h\in CR(\Xi_+)\}\, . \end{equation} Let $\gamma_0=\gamma_1+\ldots + \gamma_n$. In the sequel we will fix $\chi$ to be $$\chi= -2\rho +\gamma_0$$ and notice that $\chi={\bf 1}$ for $G=\mathrm{Sl}(2,\mathbb{R})$. For $g\in N_\mathbb{C} AT K_\mathbb{C}$ write $t(g)\in T/F$ for the compact middle part of $g$ and define the Cauchy-kernel $\mathcal{K}_\chi$ by \begin{equation}\label{eq-CKer} \mathcal{K}_\chi (\xi, y) =\mathcal{K}(\xi,y) \chi(t(\xi^{-1}y))\, , \end{equation} and notice that $\mathcal{K}_\chi$ is defined whenever $\xi^{-1}y\in N_\mathbb{C} A_\mathbb{C} K_\mathbb{C}$ and, when defined, is in $CR_\chi$ as a function of the first variable. We note that $\mathcal{K}_\chi$ is $G$-invariant and hence corresponds to a function of one variable $\mathcal{K}_\chi (g)=\mathcal{K}_\chi (\xi_o,g\cdot y_0)$. As before, $\mathcal{K}_\chi$ is $N\times H$-invariant, and will be identified with a left $N$-invariant function on $Y$. We define the \textit{twisted Cauchy transform} by $$\mathcal{C}_\chi: \mathcal{S}(Y)\to CR_\chi(\Xi_+), \ \ \mathcal{C}_\chi(f)(\xi)=\int_Y f(y) \mathcal{K}_\chi(\xi, y) \ dy$$ and the {\it twisted holomorphic Radon transform} by $$\mathcal{R}_\chi : \mathcal{H}^2(D)_{00}\to CR_\chi (\Xi_+), \ \ f\mapsto \left( \xi=ga\cdot\xi_o\mapsto a^{-\gamma_0} \int_N f(gna\cdot x_o)\ dn\right)\, .$$ We come to the main result of this section. \begin{theorem} Suppose that $Y=G/H$ is of Cayley type. Let $f\in \mathcal{H}^2(D)_{00}$.
Then $$\mathcal{C}_\chi (f)(\xi)=(-2\pi i )^n \cdot \mathcal{R}_\chi (f)(\xi) \qquad (\xi\in \Xi_+)\, .$$ \end{theorem} \begin{proof} Both $\mathcal{C}_\chi (f)$ and $\mathcal{R}_\chi (f)$ are $CR$-functions, and so it is sufficient to show that both coincide on $G/MN\subset \Xi$. Next, by $G$-equivariance of both maps, it is enough to show that \begin{equation}\label{e111} \mathcal{C}_\chi (f)(\xi_o)=(-2\pi i)^n \cdot \mathcal{R}_\chi(f)(\xi_o)\, .\end{equation} We now get, as in the proof of Theorem 4.2, for the left hand side: \begin{eqnarray*} \mathcal{C}_\chi(f)(\xi_o) &=& \int_Y f(y) \cdot \mathcal{K}_\chi( y)\ dy \\ &=& \ldots \\ &=& \sum_{w\in \mathcal{W}/ \mathcal{W}_H} \int_A \mathcal{R}_\chi(f)(az_H^w \cdot \xi_o)\cdot (z_H^w)^{\gamma_0}\cdot \mathcal{K}_\chi( az_H^w\cdot x_o) \ da \\ &=& \sum_{w\in \mathcal{W}/ \mathcal{W}_H} \int_A \mathcal{R}(f)(az_H^w \cdot \xi_o)\cdot (z_H^w)^{\gamma_0}\cdot \mathcal{K}(az_H^w\cdot x_o) \ da \, . \end{eqnarray*} Specifically we have $\mathcal{W}/ \mathcal{W}_H\simeq (\mathbb{Z}_2)^n$ and $z_H^{w\gamma_0}=i^n {\rm sgn}(w)$, so that \begin{eqnarray*} \mathcal{C}_\chi(f)(\xi_o) &=& i^n \sum_{\varepsilon\in (\mathbb{Z}_2)^n} {\rm sgn}(\varepsilon) \int_{\mathbb{R}^n} \mathcal{R}(f)\left(\exp\left(\sum_{j=1}^n (t_j+i\varepsilon_j \frac{\pi }{2})T_j\right)\cdot \xi_o\right)\cdot\\ & & \quad \cdot \mathcal{K}\left( \exp\left(\sum_{j=1}^n (t_j+i\varepsilon_j \frac{\pi }{2})T_j\right)\cdot x_o\right) \ dt\, . \end{eqnarray*} Next observe that $$\mathcal{K}\left( \exp\left(\sum_{j=1}^n (t_j+i\varepsilon_j \frac{\pi }{2})T_j\right)\cdot x_o\right)=\prod_{j=1}^n \frac{1}{ 1- e^{-(t_j+ i \varepsilon_j \frac{\pi }{2})}}\, .$$ Let us consider the multi-strip domain $S=\{ z\in \mathbb{C}^n \mid |\mathop{\textrm{Im}} z_j|\le \frac{\pi }{2}\}$.
By our assumption on $f$, the prescription $$S\ni z\mapsto \mathcal{R}(f)\left(\exp\left(\sum_{j=1}^n z_j T_j\right)\cdot \xi_o\right) \cdot \prod_{j=1}^n \frac{1}{ 1- e^{-z_j}}\in\mathbb{C}$$ defines a meromorphic function $F$ on $\mathop{\textrm{int}} S$ which extends continuously up to the boundary and which has at most a simple multi-pole at $z=0$. Thus, iteratively applying the Residue theorem yields that $$\mathcal{C}_\chi (f)(\xi_o) = (-2\pi i)^n \cdot {\rm Res}(F,0)= (-2\pi i )^n \cdot \mathcal{R}(f)(\xi_o) =(-2\pi i)^n \mathcal{R}_\chi(f)(\xi_o) $$ and this concludes the proof of our theorem. \end{proof} \section{Some remarks on the inversion of the holomorphic Radon transform} \noindent The inversion of the real horospherical transform on $X$ can be analytically continued to give the inversion of the holomorphic horospherical transform. The dual transform is given by integration over the real form $S_\mathbb{R}^K(z)$ of $S(z)$. However, there is a second non-compact real form $S_\mathbb{R}^H(z)$ of $S(z)$ which gives rise to a different dual transform and inversion. This is the topic of this section. Mainly we will focus on $G=\mathrm{Sl}(2,\mathbb{R})$. \par We begin with the definition of an appropriate function space. Let $\mathcal{F}(\Xi_+)$ denote the space of $CR$-functions $f$ on $\Xi_+$ which extend continuously to $G\exp(i\overline \Omega_H)\cdot\xi_o$ and are such that $H\ni h \mapsto f(ghz_H\cdot \xi_o)\in\mathbb{C}$ is integrable for all $g\in G$. For those functions we define the dual Radon transform by $$\mathcal{F}(\Xi_+)\to C(Y), \ \ \phi\mapsto \phi^\vee$$ with $$\phi^\vee(y) =\int_H \phi(ghz_H\cdot\xi_o)\ dh =\int_{S_\mathbb{R}(y)} \phi \qquad (y=g\cdot y_o\in Y)\, .$$ Clearly, this is a $G$-equivariant mapping. We would like to understand the relation between $\mathcal{R}$ and $\phi\mapsto \phi^\vee$.
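Before turning to the inversion, let us record the elementary contour identity behind the residue arguments of the last two sections: for a function with a single simple pole at $z=0$ in the strip $|\mathop{\textrm{Im}} z|\le\frac{\pi}{2}$ and sufficient decay, the difference of the two boundary integrals recovers $2\pi i$ times the residue. A numerical sketch with the model integrand $e^{-z^2}/(1-e^{-z})$ (this test function is our own choice, not taken from the text):

```python
from mpmath import mp, mpc, quad, exp, pi

mp.dps = 30
h = lambda z: exp(-z**2) / (1 - exp(-z))  # simple pole at z = 0 with residue 1

R = 8  # the Gaussian factor makes the truncation and side contributions negligible
top    = quad(lambda t: h(mpc(t,  pi / 2)), [-R, R])  # edge Im z = +pi/2
bottom = quad(lambda t: h(mpc(t, -pi / 2)), [-R, R])  # edge Im z = -pi/2

# counterclockwise boundary of the strip: bottom edge minus top edge
assert abs((bottom - top) - 2j * pi) < 1e-15
```

Note that on the edges $\mathop{\textrm{Im}} z=\pm\frac{\pi}{2}$ one has $e^{-z}=\mp ie^{-t}\neq 1$, so the integrand is regular there, exactly as for the kernels $\frac{1}{1-e^{-z}}$ appearing above.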
In this context we would like to mention the result in \cite{gko2} for the holomorphic discrete series: there exists a differential operator ${\mathcal L}$ such that $({\mathcal L}\mathcal{R} (f))^\vee =f $. Hence it is natural to ask whether a similar statement holds true for the most continuous spectrum considered in this paper. It will turn out that the situation is different for the most continuous series, in the sense that the inverting operator ${\mathcal L}$ is not a differential operator. We give a detailed discussion for the basic example. \subsection{The example of $G=\mathrm{Sl}(2,\mathbb{R})$} For this paragraph we let $G=\mathrm{Sl}(2,\mathbb{R})$ with the usual choices $$A=\left\{\begin{pmatrix} t & 0 \\ 0 & \frac{1}{ t}\end{pmatrix}\mid t>0\right \}, \quad N=\left\{\begin{pmatrix} 1 & x \\ 0 & 1\end{pmatrix}\mid x\in \mathbb{R}\right \}, $$ and $K=\mathrm{SO}(2,\mathbb{R})$. Let $f\in \mathcal{H}^2(D)_{00}$ be a $K$-invariant function. In the sequel we will identify $\mathfrak{a}_\mathbb{C}^*$ with $\mathbb{C}$ via the assignment $$\mathbb{C}\ni \lambda\mapsto \lambda\cdot\rho\in \mathfrak{a}_\mathbb{C}^*\,.
$$ In these coordinates one has $$\mathbf{c}(\lambda)= \pi^{-{1/ 2}} \frac{\Gamma (\lambda/2)}{ \Gamma((\lambda+1)/2)} \quad \hbox{and} \qquad |\mathbf{c}(\lambda)|^{-2}= \frac{i\pi\lambda}{ 2} \tanh \left(\frac{i\pi\lambda}{2}\right)\, .$$ We know that $f|_X\in L^2(X)$ and, as $f$ is $K$-invariant, we can write $$f(x)=\frac{1}{2} \int_\mathbb{R} \hat f(i\lambda)\phi_{i\lambda}(x)\, \frac{d\lambda}{|\mathbf{c}(i\lambda)|^2} \qquad (x\in X)\, .$$ Applying $\mathcal{R}$ yields that $$\mathcal{R}(f)(\xi)=\frac{1}{2} \int_\mathbb{R} \hat f(i\lambda) a(\xi^{-1})^{\rho(1+i\lambda)} \ d\lambda\, , $$ and thus $$\mathcal{R}(f)^\vee (y_o)=\frac{1}{2} \int_H \int_\mathbb{R} \hat f(i\lambda) a(z_H^{-1}h)^{\rho(1+i\lambda)} \ d\lambda \ dh \, .$$ Now, for $h=\begin{pmatrix} \cosh t & \sinh t \\ \sinh t & \cosh t\end{pmatrix}\in H=\mathrm{SO}_e(1,1)$ one has $z_H^{-1} h = \begin{pmatrix} e^{-i\frac{\pi}{ 4}} \cosh t & e^{-i\frac{\pi}{ 4}}\sinh t \\ e^{i\frac{\pi }{4}}\sinh t & e^{i\frac{\pi }{4}}\cosh t\end{pmatrix}$ and so $$a(z_H^{-1} h)^\rho=\left(\frac{1}{i(\sinh ^2 t + \cosh ^2 t)}\right)^\frac{1}{2} =e^{-i\frac{\pi }{4}} \cdot \frac{1}{ (\cosh 2t)^\frac{1}{2}}\, .$$ Therefore we obtain that \begin{equation}\label{I1} \mathcal{R}(f)^\vee (y_o)=\frac{1}{2} \int_\mathbb{R} \int_\mathbb{R} \hat f(i\lambda) \cdot \frac{e^{\frac{\pi}{4}(\lambda -i)} }{ (\cosh 2t)^{\frac{1}{2}(1+i\lambda)}} \ d\lambda\ dt \, .\end{equation} \begin{lemma} $$\int_\mathbb{R} \frac{1}{ (\cosh 2t)^{\frac{1}{2}(1+i\lambda)}}\ dt =\frac{1}{2}B(1/2, (1+i\lambda)/4)=\frac{1}{2} \frac{\Gamma((1+i\lambda)/4) \Gamma(1/2)}{ \Gamma((3 + i\lambda)/4)}\, .$$ \end{lemma} \begin{proof} Let us denote the integral on the left hand side by $I(\lambda)$.
With the substitution $u=\cosh 2 t$ we obtain \begin{eqnarray*} I(\lambda) & =& \int_1^\infty \frac{1}{ u^{\frac{1}{2}(1+i\lambda)}} \frac{1}{ (u^2 -1)^{1/2}} \ du \\ &=&\frac{1}{2} \int_1^\infty v^{-\frac{1+i\lambda}{ 4}} (v -1)^{-{1/2}} v ^{-1/2}\ dv\qquad(v=u^2)\\ &=&\frac{1}{2}B(1/2, (1+i\lambda)/4) \end{eqnarray*} as $B(p,q)=\int_1^\infty u^{-(p+q)}(u-1)^{p-1}\, du$. \end{proof} Using this, we get: \begin{equation}\label{I2} \mathcal{R}(f)^\vee (y_o)=\frac{1}{4} \int_\mathbb{R} \hat f(i\lambda) \cdot e^{\frac{\pi}{4}(\lambda -i)} B(1/2, (1+i\lambda)/4) \ d\lambda\, .\end{equation} Define $$C_1(\lambda)= e^{\frac{\pi}{4}(\lambda -i)} B(1/2, (1+i\lambda)/4) + e^{-\frac{\pi}{4}(\lambda +i)}B(1/2, (1-i\lambda)/4) $$ and note that $\lambda\mapsto \hat f(i\lambda)$ is an even function. Thus (\ref{I2}) yields that \begin{equation}\label{I3} \mathcal{R}(f)^\vee (y_o)=\frac{1}{2} \int_\mathbb{R} \hat f(i\lambda) \cdot C_1(\lambda) \ d\lambda\, .\end{equation} By the Fourier inversion formula, we have \begin{equation} \label{I4} f(y_o)=\frac{1}{2} \int_\mathbb{R} \hat f(i\lambda) \phi_{i\lambda}(y_o) {d\lambda\over |\mathbf{c}(i\lambda)|^2}\, . \end{equation} Now, from the special values of the Gau\ss{} hypergeometric function we get $$\phi_{i\lambda}(y_o)=F(1/4+i\lambda/4, 1/4 - i\lambda/4, 1; 1)= \frac{1}{2B((3-i\lambda )/4,(3+i\lambda )/4)}\, .$$ We therefore define $$C_2(\lambda) =\frac{1/2}{B((3-i\lambda )/4,(3+i\lambda )/4)\, |\mathbf{c}(i\lambda)|^2}$$ and note that (\ref{I4}) transforms into \begin{equation} \label{I5} f(y_o)=\frac{1}{2} \int_\mathbb{R} \hat f(i\lambda) \cdot C_2(\lambda) \ d\lambda\, . \end{equation} In the next step we want to compare the expressions $C_1(\lambda)$ and $C_2(\lambda)$.
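Both special-function evaluations entering $C_1$ and $C_2$ can be confirmed numerically; a short sketch using mpmath (the sample value of $\lambda$ is arbitrary):

```python
from mpmath import mp, mpf, quad, beta, gamma, cosh, tanh, sqrt, pi, inf

mp.dps = 25
lam = mpf('1.7')  # arbitrary real spectral parameter

# the lemma: int_R (cosh 2t)^(-(1+i*lam)/2) dt = (1/2) B(1/2, (1+i*lam)/4)
s = (1 + 1j * lam) / 2
I = quad(lambda t: cosh(2 * t)**(-s), [-inf, inf])
assert abs(I - beta(mpf(1) / 2, (1 + 1j * lam) / 4) / 2) < 1e-15

# Plancherel density: with c(lam) = pi^(-1/2) Gamma(lam/2) / Gamma((lam+1)/2),
# one has |c(i*lam)|^(-2) = (pi*lam/2) * tanh(pi*lam/2)
c = gamma(1j * lam / 2) / (sqrt(pi) * gamma((1j * lam + 1) / 2))
assert abs(1 / abs(c)**2 - (pi * lam / 2) * tanh(pi * lam / 2)) < 1e-15
```

The second assertion is the classical consequence of the reflection formulas $\Gamma(iy)\Gamma(-iy)=\frac{\pi}{y\sinh \pi y}$ and $\Gamma(\frac12+iy)\Gamma(\frac12-iy)=\frac{\pi}{\cosh \pi y}$.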
If there were a differential operator ${\mathcal L}$, then there would be a polynomial $g(\lambda) $ such that $g \cdot C_1=C_2$. We first consider $$c_1(\lambda):=C_1(\lambda) \cdot \left({\Gamma(1/2)\over \Gamma(3/4 - i\lambda/4) \Gamma(3/4 + i \lambda/4)}\right)^{-1}=C_1^+(\lambda) + C_1^-(\lambda)$$ with $$C_1^\pm (\lambda) =e^{\pm {\pi\lambda \over 4}-i\frac{\pi }{4}}\cdot {\Gamma((1\pm i\lambda)/4) \Gamma(1/2)\over \Gamma(3/4 \pm i\lambda/4)}\cdot \left( {\Gamma(1/2)\over \Gamma(3/4 - i\lambda/4)\Gamma(3/4 + i \lambda/4)}\right)^{-1}\, . $$ We focus on $C_1^+$ and obtain \begin{eqnarray*} C_1^+ (\lambda) & = & e^{\frac{\pi}{4}(\lambda -i)}\cdot {\Gamma((1+ i\lambda)/4) \Gamma(1/2)\over \Gamma(3/4 + i\lambda/4)} \cdot\left( {\Gamma(1/2)\over \Gamma(3/4 - i\lambda/4)\Gamma(3/4 + i \lambda/4)}\right)^{-1} \\ & =& e^{\frac{\pi}{4}(\lambda -i)}\cdot \Gamma((1+ i\lambda)/4) \Gamma(3/4 - i\lambda/4)\\ & =& e^{\frac{\pi}{4}(\lambda -i)}\cdot \Gamma((1+ i\lambda)/4) \Gamma(1 + (-1/4 - i\lambda/4))\\ & =& e^{\frac{\pi}{4}(\lambda -i)}\cdot (-1/4 - i\lambda/4)\Gamma((1+ i\lambda)/4) \Gamma(-(1 +i\lambda)/4)\\ & =& {e^{\frac{\pi}{4}(\lambda -i)}\pi \over \sin \pi (1/4 +i\lambda/4)} \end{eqnarray*} Likewise we obtain $$C_1^-(\lambda)=C_1^+(-\lambda) = {e^{-\frac{\pi}{4}(\lambda +i)}\pi \over \sin \pi (1/4 -i\lambda/4)}$$ and so \begin{eqnarray*} c_1(\lambda) &=& \frac{e^{\frac{\pi}{4}(\lambda -i)}\pi }{\sin \pi (1/4 +i\lambda/4)} + \frac{e^{-\frac{\pi}{4}(\lambda +i)}\pi }{ \sin \pi (1/4 -i\lambda/4)}\\ &=& \frac{\pi}{ i} \frac{e^{\frac{\pi}{4}(\lambda -i)}}{ \sinh \pi (\lambda/4- i/4)} + \frac{\pi}{i} \frac{e^{-\frac{\pi}{4}(\lambda +i)} }{ \sinh \pi (-\lambda/4- i/4)}\\ &=& \frac{\pi}{i} \frac{\cosh ({\pi\lambda \over 4}-i\frac{\pi }{4}) + \sinh({\pi\lambda \over 4}-i\frac{\pi }{4})}{
\sinh \pi (\lambda/4- i/4)} + {\pi\over i}{\cosh (-{\pi\lambda \over 4}-i\frac{\pi }{4}) + \sinh(-{\pi\lambda \over 4}-i\frac{\pi }{4}) \over \sinh \pi (-\lambda/4- i/4)}\\ &=& {\pi\over i} \cdot\left( 2 + \coth (\frac{\pi}{4}(\lambda - i)) + \coth (-\frac{\pi}{4}(\lambda+i))\right) \end{eqnarray*} Now define $g(\lambda)$ by the requirement $$g(\lambda) c_1(\lambda)= {1\over |\mathbf{c}(i\lambda)|^2}= \frac{\pi\lambda}{ 2} \tanh\left(\frac{\pi\lambda}{ 2}\right)\, .$$ Now note that \begin{eqnarray*}c_1(\lambda + i) & =& {\pi\over i} \cdot\left( 2 + \coth ({\pi\lambda \over 4}) + \coth (-{\pi\lambda \over 4}-i\frac{\pi }{2})\right)\\ &=& {\pi\over i} \cdot\left( 2 + \coth ({\pi\lambda \over 4}) - \tanh ({\pi\lambda \over 4})\right)\\ &=& {\pi\over i} \cdot\left( 2 + {2\over \sinh ({\pi\lambda \over 2})}\right) \end{eqnarray*} and thus we get $$g(\lambda+i ) {\pi\over i} \cdot\left( 2 + {2\over \sinh ({\pi\lambda \over 2})}\right) = {\pi(\lambda+i)\over 2} \coth\left({\pi\lambda\over 2}\right)\, .$$ Further manipulation then yields that $$g(\lambda)= {i\lambda\over 4} \cdot {\sinh({\pi\lambda\over 2})\over 1- \cosh ({\pi\lambda\over 2})}$$ and it is obvious that $g$ is not a polynomial function. Since $$g(\lambda) C_1(\lambda)= C_2(\lambda)\, , $$ it is now clear that there exists {\it no} differential operator ${\mathcal L}$ which inverts the holomorphic Radon transform. However, the function $g(\lambda)$ defines a spectral multiplier, hence a pseudo-differential operator, which we again call ${\mathcal L}$. We summarize our discussion. \begin{theorem} Let $G=\mathrm{Sl}(2,\mathbb{R})$ and let ${\mathcal L}$ be the spectral multiplier defined by the function $g(\lambda)= {i\lambda\over 4} \cdot {\sinh({\pi\lambda\over 2})\over 1- \cosh ({\pi\lambda\over 2})}$.
Then for $f\in \mathcal{H}^2(D)_0$ such that $\mathcal{R}(f)\in \mathcal{F}(\Xi_+)$ one has $$f = ({\mathcal L} \mathcal{R} (f))^\vee\, .$$ \end{theorem} \section{Geometric definition of the Hardy space} \noindent This final section deals with the structure of the Hardy space $\mathcal{H}^2(D)$. It allows independent reading and is of independent interest. \par Initially, the Hardy space was defined spectrally (see \cite{GKO}). Below we will show how to define the Hardy space geometrically, i.e.\ we give a geometric definition of the norm $\|\cdot\|_H$ on $\mathcal{H}^2(D)$ through $G$-orbit integrals on $D$. For that we start by recalling the orbital integral ${\bf{O}}_h$ and the pseudo-differential operator $\mathcal{D}$ also used in \cite{kos05}. \par In this section $Y=G/H$ can be an arbitrary NCC-space. \subsection{$G$-orbit integrals on the domain $D$} For a sufficiently decaying function $h$ on $D$ we define its {\it $G$-orbit integral} on $D$ as the following function on $i2\Omega_H$: $${\bf{O}}_{h}(iX)=\int_G h(g\exp(i\frac{1}{ 2}X)\cdot x_o) \ dg\qquad (X\in 2\Omega_H)\ .$$ For $f\in \mathcal{H}^2(D)$ we notice that $|f|^2$ is a sufficiently decaying function on $D$, i.e.\ ${\bf{O}}_{|f|^2}(iX)$ is finite for all $X\in 2\Omega_H$.
Moreover, in view of (\ref{eq=Gutzmer}) we see that ${\bf{O}}_{|f|^2}$ has a natural extension to a holomorphic function on the abelian tube domain $\mathcal{T}(2\Omega_H)=\mathfrak{a}+i2\Omega_H$, namely \begin{equation}\label{eq=exgutz} {\bf{O}}_{|f|^2}(Z)=\int_{\mathcal X} |\hat f(b,\lambda)|^2 \varphi_\lambda(\exp(Z)) \ d\mu_{\mathcal X}(b,\lambda) \qquad (Z\in \mathcal{T}(2\Omega_H))\, . \end{equation} \subsection{A certain pseudo-differential operator} Define a space $\mathcal{F}(\mathcal{T}(2\Omega_H))$ of $\mathcal{W}$-invariant holomorphic functions on the tube domain $\mathcal{T}(2\Omega_H)$ by the following property: $f\in\mathcal{F}(\mathcal{T}(2\Omega_H))$ if $f$ can be written as $$f(Z)=\int_{i\mathfrak{a}_+^*} h(\lambda)\varphi_\lambda(\exp(Z)) \frac{d\lambda}{ |\mathbf{c}(\lambda)|^2}\qquad (Z\in \mathcal{T}(2\Omega_H))$$ where $h\in L^1(i\mathfrak{a}_+^*, \frac{\mathrm{COSH}(\lambda)}{ |\mathbf{c}(\lambda)|^2}d\lambda)$. If $Q\subset \mathcal{T}(2\Omega_H)$ is compact, then there exists a constant $C_Q>0$ such that $$(\forall \lambda\in i\mathfrak{a}^*)\qquad \sup_{X\in Q} |\varphi_\lambda(\exp(i2X))| \le C_Q \mathrm{COSH}(\lambda)\, .$$ As $\frac{1}{|\mathbf{c}(\lambda)|^2}$ is at most of polynomial growth, it follows that $f$ is indeed holomorphic and $\mathcal{W}$-invariant. Moreover, $f$ is uniquely determined by $h$. It follows from our discussion that the prescription $$\mathcal{D}: \mathcal{F}(\mathcal{T}(2\Omega_H))\to \mathcal{O} (\mathcal{T}(2\Omega_H))^\mathcal{W};\ (\mathcal{D} F)(Z)=\int_{i\mathfrak{a}_+^*} h(\lambda) \sum_{w\in \mathcal{W}} e^{\lambda(wZ)} \frac{d\lambda}{ |\mathbf{c}(\lambda)|^2}$$ is a well defined linear mapping. \begin{remark} The operator $\mathcal{D}$ is a pseudo-differential operator, and a differential operator if all root multiplicities are even.
The operator $\mathcal{D}$ is related to the Abel transform as explained in \cite{kos05}, Remark 3.2. \end{remark} \begin{example} In this example we discuss the operator $\mathcal{D}$ when the underlying group $G$ is complex. Then $\mathcal{D}$ is a differential operator of a particularly nice form. \par If $G$ is complex, then there is an explicit formula for the spherical functions, due to Harish-Chandra: $$\varphi_\lambda(\exp(Z))={\mathbf{c}}(\lambda) \frac{\sum_{w\in \mathcal{W}} \varepsilon(w) e^{\lambda(wZ)}}{\prod_{\alpha\in \Sigma^+} 2\sinh\alpha(Z)}$$ for all $Z\in \mathcal{T}(2\Omega_H)$. The $\mathbf{c}$-function has the familiar form $$\mathbf{c}(\lambda)=\frac{\prod_{\alpha\in \Sigma^+} \langle \rho, \alpha\rangle}{ \prod_{\alpha\in \Sigma^+} \langle \lambda,\alpha\rangle}\ .$$ For each $\alpha\in \Sigma$ let $A_\alpha\in \mathfrak{a}$ be such that $\alpha=\langle\cdot, A_\alpha\rangle$. Furthermore let $ \partial_\alpha$ be the partial derivative on $\mathcal{T}(2\Omega_H)$ in direction $A_\alpha$. Define a partial differential operator on $\mathcal{T}(2\Omega_H)$ by $ \partial_{\Sigma^+}=\prod_{\alpha\in \Sigma^+} \partial_\alpha$. Finally, with $J(Z)=\prod_{\alpha\in \Sigma^+} 2\sinh\alpha(Z)$ we declare a differential operator on $\mathcal{T}(2\Omega_H)$ by $$\mathcal{D}=\mathrm{const}\cdot \partial_{\Sigma^+}\circ J $$ with $\mathrm{const}= \prod_{\alpha\in \Sigma^+} \langle \rho, \alpha\rangle^{-1} $. The relation $$\mathcal{D}(\varphi_\lambda\circ \exp)(Z)=\sum_{w\in \mathcal{W}} e^{\lambda(wZ)}$$ is now obvious. \end{example} \subsection{The geometric norm} For a function $f\in \mathcal{H}^2(D)$ let us write $\|f\|_H$ for its norm as before.
By Lemma \ref{eq=hn} this norm is given by \begin{equation}\label{eq=spectral} \|f\|_H^2 = \int_{{\mathcal X}} |\hat f(b,\lambda)|^2 \,\mathrm{COSH}(\lambda) \ d\mu_{\mathcal X}(b,\lambda)\, . \end{equation} The objective of this section is to express $\|f\|_H$ in terms of the much more geometric orbital integrals ${\bf{O}}_{|f|^2}$. Our result is as follows. \begin{theorem}\label{th=Hardy} Let $f\in \mathcal{H}^2(D)$. Then ${\bf{O}}_{|f|^2}\in \mathcal{F}(\mathcal{T}(2\Omega_H))$ and the Hardy space norm $\|f\|_H$ of $f$ is given by \begin{equation} \label{eq=norm} \|f\|_H^2 =\sup_{X\in 2\Omega_H} \frac{\mathcal{D}{\bf{O}}_{|f|^2}(iX)}{ |\mathcal{W}_H|}\, . \end{equation} In particular, the Hardy space $\mathcal{H}^2(D)$ can be characterized as \begin{equation}\label{eq=char} \mathcal{H}^2(D)=\{ f\in \mathcal{O}(D)\mid {\bf{O}}_{|f|^2}\in \mathcal{F}(\mathcal{T}(2\Omega_H)),\ \sup_{X\in 2\Omega_H} \frac{\mathcal{D}{\bf{O}}_{|f|^2}(iX)}{ |\mathcal{W}_H|}<\infty\}\ . \end{equation} \end{theorem} \begin{proof} Fix $f\in \mathcal{H}^2(D)$. By equation (\ref{eq=exgutz}) we have $${\bf{O}}_{|f|^2}(Z)=\int_{{\mathcal X}} |\hat f(b,\lambda)|^2 \varphi_\lambda(\exp(Z))\ d\mu_{{\mathcal X}}(b,\lambda)$$ for all $Z\in \mathcal{T}(2\Omega_H)$. By the spectral definition of $\mathcal{H}^2(D)$ it follows that $$h(\lambda):=\int_B |\hat f(b,\lambda)|^2 \ db$$ defines a function $h\in L^1(i\mathfrak{a}_+^*, \frac{\mathrm{COSH}(\lambda)}{ |\mathbf{c}(\lambda)|^2}d\lambda)$.
Thus ${\bf{O}}_{|f|^2}\in \mathcal{F}(\mathcal{T}(2\Omega_H))$, and applying $\mathcal{D}$ to ${\bf{O}}_{|f|^2}$ yields $$ (\mathcal{D}{\bf{O}}_{|f|^2})(Z)=\int_{\mathcal X} |\hat f(b,\lambda)|^2 \sum_{w\in \mathcal{W}}e^{\lambda(wZ)}\ d\mu_{{\mathcal X}}(b,\lambda)\, .$$ Now notice that $$\frac{1}{ |\mathcal{W}_H|} \sum_{w\in \mathcal{W}}e^{\lambda(iwX)}\leq \mathrm{COSH}(\lambda)\qquad (X\in 2\Omega_H)$$ and $$\sup_{X\in 2\Omega_H} \sum_{w\in \mathcal{W}}e^{\lambda(iwX)} =\lim_{X\to Z_H}\sum_{w\in \mathcal{W}}e^{\lambda(i2wX)} =|\mathcal{W}_H|\cdot \mathrm{COSH}(\lambda)\ .$$ The claim now follows from the spectral definition of the norm in $\mathcal{H}^2(D)$. Finally, backtracking the steps of the proof readily yields (\ref{eq=char}). \end{proof} \begin{remark} Some comments on the geometric Hardy norm $$\|f\|_H^2 =\sup_{X\in 2\Omega_H} \frac{1}{ |\mathcal{W}_H|}\cdot(\mathcal{D}{\bf{O}}_{|f|^2})(iX)$$ seem to be appropriate. Usually in the theory of Hardy spaces (e.g. the Hardy space on the upper half plane) one takes the supremum over a family of $L^2$-integrals over totally real submanifolds. In our case one takes a supremum over $G$-orbits which, with the exception of the orbit through the origin, are never totally real. Secondly, we find the appearance of the pseudo-differential operator $\mathcal{D}$ interesting. In the context of Hardy spaces it might be novel. \end{remark} \subsection{The $K$-invariant case} In this subsection we give another description of the subspace $\mathcal{H}^2(D)^K$ using the Abel transform and the results of Appendix A. We start by noting the following simple connection between the Abel transform and the Fourier transform of a $K$-invariant function. First note that $\hat{f}(b,\lambda)$ is independent of $b\in B$ if $f$ is $K$-invariant. We then simply write $\hat{f}(\lambda)$ and note that $\hat{f}$ is $\mathcal{W}$-invariant.
Furthermore \begin{eqnarray*} \hat{f}(\lambda )&=&\int_X f(x)a(x)^{\rho - \lambda }\, dx\\ &=&\int_A\int_N f(na \cdot x_o)a^{-\rho -\lambda}\, dn\, da\\ &=&\mathcal{F}_A(\mathcal{A} (f))(\lambda ) \end{eqnarray*} where $\mathcal{F}_A$ stands for the Fourier transform on the abelian group $A$. Recall that $\mathcal{F}_A : L^2(A)\to L^2(i\mathfrak{a}^*,|\mathcal{W} |^{-1} d\lambda )$ is a unitary isomorphism. Define a multiplication operator $D_\mathfrak{a}$ on $i\mathfrak{a}^*$ by $D_\mathfrak{a} (F)(\lambda) = \mathbf{c} (-\lambda )^{-1}F(\lambda)$ and denote the corresponding multiplier operator on $A$ by $D_A$. Let $\Lambda := D_A\circ \mathcal{A}$. Finally, we define a multiplier $m$ on $\mathcal{W}\times i\mathfrak{a}^*$ by $m(s,\lambda )=\mathbf{c} (-s\lambda )/\mathbf{c} (-\lambda )$ and denote by $\tau$ the corresponding representation $\tau (s)f(\lambda )=m(s^{-1},\lambda )f(s^{-1}\lambda )$. Then, cf. \cite{os05}, Section 1, in particular Lemma 1.4, we have a commutative diagram in which each map is a unitary isomorphism: \begin{equation}\label{eq=isometries} \xymatrix{ L^2(A,|\mathcal{W}|^{-1}da)^{\mathcal{W}} \ar[r]^\Lambda\ar[d]^{\mathcal{F}_X} & L^2(A,|\mathcal{W}|^{-1}da)^{\tau (\mathcal{W} )}\ar[d]^{\mathcal{F}_A}\\ L^2(i\mathfrak{a}^*,\frac{d\lambda }{|\mathcal{W}||\mathbf{c} (\lambda )|^2})\ar[r]_{D_{\mathfrak{a}}} & L^2(i\mathfrak{a}^* ,|\mathcal{W}|^{-1} d\lambda )^{\tau (\mathcal{W})}\,. } \end{equation} Recall the Hardy space $\mathcal{H}^2(T(\Omega_H))$ from Appendix A and its spectral description in Corollary \ref{co=A1}. It then follows from Lemma \ref{eq=hn}, after the obvious renormalization of measures (as we have not included the factor $2\pi$ in the exponential function), that $\Lambda (\mathcal{H}^2(D)^K)\subseteq \mathcal{H}^2(T(\Omega_H))^{\tau (\mathcal{W} )}$.
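In the complex case the multiplier takes a particularly simple form; the following is a direct computation from the Harish-Chandra formula for $\mathbf{c}$ recalled in the example above: $$m(s,\lambda)=\frac{\mathbf{c}(-s\lambda)}{\mathbf{c}(-\lambda)} =\frac{\prod_{\alpha\in\Sigma^+}\langle -\lambda,\alpha\rangle}{\prod_{\alpha\in\Sigma^+}\langle -s\lambda,\alpha\rangle} =\varepsilon(s)\, ,$$ since $\prod_{\alpha\in\Sigma^+}\langle s\lambda,\alpha\rangle=\varepsilon(s)\prod_{\alpha\in\Sigma^+}\langle \lambda,\alpha\rangle$. Thus, for complex $G$, $\tau$ is twisted by the sign character of $\mathcal{W}$, and the $\tau(\mathcal{W})$-invariant functions are exactly the $\mathcal{W}$-skew-invariant ones.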
As $\Lambda :L^2(A,|\mathcal{W}|^{-1}da)^{\mathcal{W}} \rightarrow L^2(A,|\mathcal{W}|^{-1}da)^{\tau (\mathcal{W} )}$ is a unitary isomorphism, we get: \begin{theorem}\label{th-isohardy} The map $\Lambda :\mathcal{H}^2(D)^K\to \mathcal{H}^2(T(\Omega_H))^{\tau (\mathcal{W})}$ is a unitary isomorphism. \end{theorem} \begin{example} If $G$ has complex structure, then the map $\Lambda$ is the multiplication operator given by $$\Lambda (f)(a)=\left(\prod_{\alpha\in\Sigma^+}\sinh \langle \alpha , \log (a)\rangle \right)f(a)\, .$$ \end{example} We now determine the reproducing kernel of $\mathcal{H}^2 (D)^K$. One could easily deduce this from Theorem \ref{th-repkern}, but we give another proof which follows similar arguments. \begin{theorem} The reproducing kernel $\mathcal{K} (z,w)$ of $\mathcal{H}^2 (D)^K$ is given by $$\mathcal{K} (z,w)=\int_{i\mathfrak{a}_+^*}\varphi_\lambda (z)\varphi_{-\lambda }(\overline{w})\, d\mu (\lambda )\, .$$ \end{theorem} \begin{proof} Let $f\in \mathcal{H}^2 (D)^K$ and $w\in D$. Recall that by Lemma \ref{eq=hn} we have $\Phi (g)(\lambda )=\hat{g}(\lambda )\,\mathrm{COSH} (\lambda )$ for all $g\in \mathcal{H}^2 (D)^K$. Writing $\mathcal{K}_w=\mathcal{K}(\cdot,w)$, we therefore obtain \begin{eqnarray*} f(w)&=&\langle f,\mathcal{K}_w\rangle\\ &=&\int_{i\mathfrak{a}_+^*}\hat{f}(\lambda )\,\mathrm{COSH} (\lambda )\overline{\widehat{\mathcal{K}_w}(\lambda )\,\mathrm{COSH} (\lambda )} \, d\mu (\lambda )\\ &=& \int_{i\mathfrak{a}_+^*}\hat{f}(\lambda )\overline{\widehat{\mathcal{K}_w}(\lambda )\,\mathrm{COSH} (\lambda )} \, \frac{d\lambda}{|\mathbf{c} (\lambda )|^2}\\ &=&\int_{i\mathfrak{a}_+^*}\hat{f}(\lambda )\varphi_\lambda (w)\, \frac{d\lambda}{|\mathbf{c} (\lambda )|^2}\, .
\end{eqnarray*} Thus $$\widehat{\mathcal{K}_w}(\lambda )= \frac{\varphi_{-\lambda}(\overline{w})}{\mathrm{COSH} (\lambda )}\, .$$ From this we now get \begin{eqnarray*} \mathcal{K}(z,w)&=&\langle \mathcal{K}_w , \mathcal{K}_z\rangle\\ &=& \int_{i\mathfrak{a}_+^*} \widehat{\mathcal{K}_w}(\lambda )\,\mathrm{COSH} (\lambda ) \overline{\widehat{\mathcal{K}_z}(\lambda )\,\mathrm{COSH} (\lambda )}\, d\mu (\lambda)\\ &=&\int_{i\mathfrak{a}_+^*} \varphi_{-\lambda}(\overline{w}) \varphi_{\lambda}(z) \, d\mu (\lambda ) \end{eqnarray*} and the claim follows. \end{proof} \section*{Appendix: Hardy spaces on tube domains} \noindent We let $V$ be a Euclidean vector space, e.g. $V=\mathbb{R}^n$ endowed with the standard inner product. Denote by ${\rm O}(V)$ the orthogonal group of $V$ and let $\mathcal{W}\subset {\rm O}(V)$ be a finite subgroup which acts irreducibly on $V$. We fix $y_o\in V$, $y_o\neq 0$, and set $$\Omega={\rm int}\left( \hbox{convex hull of }\ \mathcal{W}(y_o)\right)\ .$$ Notice that $\Omega$ is the interior of a compact polyhedron and that $0\in \Omega$. Write $V_\mathbb{C} :=V\otimes_\mathbb{R} \mathbb{C}\simeq V+iV$ for the complexification of $V$ and define a tube domain in $V_\mathbb{C}$ by $$T(\Omega)=V+i\Omega\ .$$ Let us denote by $dx$ the measure $(2\pi )^{-\dim V /2}$ times the Lebesgue measure on $V$. Then the Fourier transform $$f\mapsto \mathcal{F} f(\lambda )= \int_V f(x)e^{-i\langle \lambda , x\rangle}\, dx$$ is a unitary isomorphism of $L^2$-spaces. Here $V^*$ is the dual of $V$, and $\langle \lambda , x \rangle=\lambda (x)$ denotes the standard duality between $V^*$ and $V$. Denote by $\mathcal{O} (T(\Omega ))$ the space of holomorphic functions on $T(\Omega )$.
The Hardy space $\mathcal{H}^2(T(\Omega ))$ is defined by $$\mathcal{H}^2(T(\Omega)):=\{f\in \mathcal{O}(T(\Omega))\mid \|f\|^2_\mathcal{H}:=\sup_{y\in \Omega} \int_V |f(x+iy)|^2 \ dx<\infty\}\ .$$ As the Hardy norm locally dominates the Bergman norm on $T(\Omega)$, it follows that $\mathcal{H}^2(T(\Omega))$ is complete, i.e. a Banach space. In fact, $\mathcal{H}^2(T(\Omega))$ is a Hilbert space, as we will show in a moment. For $f\in \mathcal{H}^2(T(\Omega))$ and $y\in \Omega$ one has $$\int_V |f(x+iy)|^2 \ dx =\int_{V^*} |\mathcal{F}(f|_V)(\xi)|^2 e^{-2 \langle y, \xi\rangle} \ d\xi\, ,$$ which is immediate from \cite{SW}, Ch. III, \S 2. It follows that \begin{equation} \label{eq=A1} \|f\|^2_\mathcal{H} =\sup_{y\in \Omega}\int_{V^*} |\mathcal{F}(f|_V)(\xi)|^2 e^{- 2 \langle y, \xi\rangle} \ d\xi\ . \end{equation} For $y \in V $ define $\mathrm{COSH},\mathrm{COSH}_y : V^*\to \mathbb{C}$ by $$\mathrm{COSH}_y (\lambda )=\frac{1}{|\mathcal{W}|}\sum_{s\in \mathcal{W}} e^{-2 \langle y,s \lambda \rangle}=\frac{1}{|\mathcal{W}|}\sum_{s\in \mathcal{W}} e^{- 2 \langle sy,\lambda \rangle}$$ and $$\mathrm{COSH} (\lambda )=\mathrm{COSH}_{y_o}(\lambda )\, .$$ As $\|\cdot\|_{L^2(V)}$ is $\mathcal{W}$-invariant and $\mathcal{F}$ is $\mathcal{W}$-equivariant, it follows from (\ref{eq=A1}) that \begin{equation} \label{eq=A2} \|f\|^2_\mathcal{H}=\sup_{y\in \Omega}\int_{V^*} |\mathcal{F}(f|_V)(\xi)|^2 \,\mathrm{COSH}_y(\xi) \ d\xi\ .
\end{equation} Now, for every $\lambda \in V^*$, the function $y\mapsto \mathrm{COSH}_{y} ( \lambda )$ is convex on $\overline \Omega$, so its maximum is attained at an extreme point $sy_o$ of $\overline\Omega$; since $\mathrm{COSH}_{sy_o}=\mathrm{COSH}_{y_o}$ for every $s\in\mathcal{W}$, we obtain the inequality $$(\forall y\in \Omega)(\forall \lambda \in V^*) \qquad \mathrm{COSH}_y(\lambda )\leq \mathrm{COSH} (\lambda )\, ,$$ and so it follows that \begin{equation} \label{eq=A3} \|f\|^2_\mathcal{H} =\int_{V^*} |\mathcal{F}(f|_V)(\lambda )|^2 \,\mathrm{COSH}(\lambda ) \ d\lambda \ . \end{equation} Define a $\mathcal{W}$-invariant measure $\mu_0$ on $V^*$ by $$d\mu_0 (\lambda )=\mathrm{COSH} (\lambda )\,d\lambda\, .$$ \begin{theorem}\label{th=A1} The mapping $$\mathcal{H}^2(T(\Omega))\to L^2(V^*, d\mu_0), \ \ f\mapsto \mathcal{F}(f|_V)$$ is an isometric isomorphism. In particular, $\mathcal{H}^2(T(\Omega))$ is a Hilbert space. \end{theorem} A (continuous) multiplier on $V^*$ is a continuous map $m :\mathcal{W}\times V^*\to \mathbb{C}$ such that for all $s,w\in \mathcal{W}$ and $\lambda\in V^*$ we have $$m(sw,\lambda )=m(s,w\lambda )m(w,\lambda )\, .$$ Assume from now on that $|m(s,\lambda )|=1$ for all $s\in\mathcal{W}$ and $\lambda \in V^*$. Then, because of the $\mathcal{W}$-invariance of $d\mu_0$, we can define a unitary representation $\tau$ of $\mathcal{W}$ on $L^2(V^*, d\mu_0)$ by \begin{equation}\label{eq-tau} \tau (s) f(\lambda )=m(s^{-1},\lambda )f(s^{-1}\lambda )\, . \end{equation} As the Fourier transform is a unitary isomorphism, we obtain a unitary representation, also denoted by $\tau$, of $\mathcal{W}$ on $\mathcal{H}^2 (T(\Omega ))$ such that the Fourier transform is an intertwining operator. We denote the space of $\tau (\mathcal{W})$-invariant elements by the superscript $\tau (\mathcal{W})$.
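For later reference we illustrate these definitions in the simplest case; this is a direct computation from the formulas above. Take $V=\mathbb{R}$, $\mathcal{W}=\{\mathbf{1},-\mathbf{1}\}$ and $y_o=1$, so that $\Omega=]-1,1[$ and $T(\Omega)$ is a strip. Then for $y\in\mathbb{R}$ and $\lambda\in V^*\simeq\mathbb{R}$ $$\mathrm{COSH}_y(\lambda)=\frac{1}{2}\left(e^{-2y\lambda}+e^{2y\lambda}\right)=\cosh(2y\lambda)\, ,\qquad \mathrm{COSH}(\lambda)=\cosh(2\lambda)\, ,$$ the inequality $\mathrm{COSH}_y\leq\mathrm{COSH}$ for $y\in\Omega$ reduces to the elementary estimate $\cosh(2y\lambda)\leq\cosh(2\lambda)$ for $|y|\leq 1$, and $d\mu_0(\lambda)=\cosh(2\lambda)\,d\lambda$. This is the situation of the example at the end of this appendix.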
Then: \begin{corollary}\label{co=A1} The Fourier transform is a unitary isomorphism $$\mathcal{F} :\mathcal{H}^2(T (\Omega ))^{\tau (\mathcal{W})}\to L^2(V^*,d\mu_0 )^{\tau (\mathcal{W})}\, .$$ \end{corollary} Being a Hilbert space of holomorphic functions, the Hardy space $\mathcal{H}^2(T(\Omega))^{\tau (\mathcal{W})}$ admits a reproducing kernel $K(z,w)$, often called the Cauchy-Szeg\"o kernel. Notice that for fixed $w\in T(\Omega)$ the function $K_w=K(\cdot,w)$ belongs to $\mathcal{H}^2(T(\Omega))^{\tau (\mathcal{W})}$ and that $$\langle f, K_w\rangle =f(w) \qquad \hbox{for all $f\in \mathcal{H}^2(T(\Omega))^{\tau (\mathcal{W})}$}\ .$$ Here $\langle\, \cdot\, ,\, \cdot\, \rangle$ denotes the inner product on $\mathcal{H}^2(T(\Omega))^{\tau (\mathcal{W})}$. We now determine $K(z,w)$ explicitly. To this end, define for $w\in V$ the function $\mathrm{COS}_w^m:V^*\to \mathbb{C}$ by $$\mathrm{COS}_{w}^m (\lambda ):=\frac{1}{ |\mathcal{W}|} \sum_{s\in \mathcal{W}} m(s,\lambda )^{-1} e^{ i \langle w,s\lambda \rangle}$$ (for $\mathcal{W}=\{\pm\mathbf{1}\}$ acting on $V=\mathbb{R}$ and $m\equiv 1$ this is $\mathrm{COS}_w^m(\lambda)=\cos(w\lambda)$, which explains the notation) and note that $$\frac{\mathrm{COS}_w^m}{\mathrm{COSH}}\in L^2(V^*,d\mu_0)^{\tau (\mathcal{W})}\, .$$ Write $(\cdot|\cdot)$ for the inner product on $L^2(V^*,d\mu_0)^{\tau (\mathcal{W})}$. For $f\in \mathcal{H}^2(T(\Omega))^{\tau (\mathcal{W})}$ let $F=\mathcal{F} (f|_V)$. It follows from Corollary \ref{co=A1} that $\langle f, K_w\rangle =(F|\mathcal{F}(K_w|_V))$.
On the other hand, using the $\tau(\mathcal{W})$-invariance of $F$, the substitution $\lambda\mapsto s\lambda$ and, in the last step, averaging over $s\in\mathcal{W}$, we have \begin{eqnarray*} f(w)&=&\int_{V^*} F(\lambda )e^{- i \langle w,\lambda \rangle}\ d\lambda\\ &=&\int_{V^*} F(\lambda ) \frac{e^{- i \langle w,\lambda \rangle}}{ \mathrm{COSH}(\lambda )}\, d\mu_0(\lambda )\\ &=&\int_{V^*} m (s^{-1},\lambda ) F(s^{-1}\lambda ) \frac{e^{- i \langle w,\lambda \rangle}}{ \mathrm{COSH}(\lambda )}\, d\mu_0(\lambda )\\ &=&\int_{V^*} F(\lambda ) m (s^{-1},s\lambda ) \frac{e^{- i \langle w,s \lambda \rangle}}{ \mathrm{COSH}(\lambda )}\, d\mu_0(\lambda )\\ &=&\int_{V^*} F(\lambda )\overline{ \frac{\mathrm{COS}_{w}^m(\lambda )}{ \mathrm{COSH}(\lambda )} }\ d\mu_0 (\lambda ) \end{eqnarray*} and thus $$\mathcal{F} (K_w|_V)(\lambda ) = \frac{\mathrm{COS}_{w}^m(\lambda )}{ \mathrm{COSH}(\lambda )}\, .$$ \begin{theorem}\label{th-repkern} The reproducing kernel of the Hardy space $\mathcal{H}^2(T(\Omega))^{\tau (\mathcal{W})}$ is given by \begin{eqnarray} K(z,w) &=&\int_{V^*} \frac{\left(\frac{1}{|\mathcal{W}|}\sum_{s\in \mathcal{W}} \frac{ e^{ i \langle s(z),\lambda \rangle}}{m(s,\lambda )} \right)\cdot \left( \overline{\frac{1}{ |\mathcal{W}|}\sum_{s\in \mathcal{W}} \frac{ e^{i \langle s(w),\lambda \rangle}}{m(s,\lambda )}}\right) }{\mathrm{COSH}(\lambda )} \ d\lambda\label{eq=A4}\\ &=& \int_{V^*} \frac{\mathrm{COS}^m_z(\lambda)}{\mathrm{COSH} (\lambda )}\,\mathrm{COS}_{-\overline w}^m(\lambda )\, d\lambda\, . \nonumber \end{eqnarray} \end{theorem}\label{ap=K(z,w)} \begin{example} Equation (\ref{eq=A4}) can be evaluated in the relevant special cases. Let us for example consider the case of $V=\mathbb{R}$, $\Omega=]-1,1[$, $\mathcal{W}={\rm O}(\mathbb{R})\simeq\{ \mathbf{1}, -\mathbf{1}\}$ and $m(s,\lambda)=1$. Using the standard measure on $\mathbb{R}$ we get the following from (\ref{eq=A4}) and \cite{Bate}, Sect.
1.9, formula (12): \begin{eqnarray*} K(z,w)&=& \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \frac{\cos(z\xi) \cos( \overline w\xi ) }{\cosh( 2 \xi )} \ d\xi\\ &=&\sqrt{\frac{\pi }{ 2}}\, \frac{\cosh (\frac{\pi}{ 4} z)\cdot\cosh ( \frac{\pi}{ 4} \overline w)}{ \cosh (\frac{\pi}{2} z) + \cosh (\frac{\pi}{ 2} \overline w)} \, . \end{eqnarray*} \end{example} \end{document}
\begin{document} \begin{abstract} This article presents a geometric approach to some similarity problems, based on metric arguments in the non-positively curved space of positive invertible operators of an operator algebra and the canonical isometric action of the invertible elements on this cone given by $g\cdot a=gag^*$.\par Through this approach, we extend and put into a geometric framework results by G. Pisier, and partially answer a question by Andruchow, Corach and Stojanoff about minimality properties of canonical unitarizers. \end{abstract} \title{Geometric aspects of similarity problems} \tableofcontents \section{Introduction} In this article, we address similarity problems as presented, for example, by G. Pisier in \cite{pisier4} or by N. Ozawa in \cite{osawa}. Generally, those problems ask when certain continuous homomorphisms (e.g. from a group or a $C^*$-algebra) into the algebra of bounded operators on a Hilbert space are conjugate to special homomorphisms.\par In the case of groups, this is Dixmier's unitarizability question: ``For which groups is every uniformly bounded linear representation on a Hilbert space similar to a unitary representation?'' Such groups are called unitarizable. Even though this question is still open in full generality, some partial answers are known: independently, Day \cite{day}, Dixmier \cite{dix} and Nakamura and Takeda \cite{naka} proved that amenable groups are unitarizable, and examples of non-unitarizable groups were found (e.g. \cite{ehrenp}, \cite{epm}, \cite{pytls}).
This led to asking whether every unitarizable group is amenable.\par The similarity problem in the case of $C^*$-algebras is known as Kadison's problem; partial answers were given by Haagerup, who proved in \cite{haag} that uniformly bounded cyclic representations as well as completely bounded homomorphisms are similar to $*$-homomorphisms.\par In the present article, we connect those similarity problems, through induced actions on the cone of positive invertible operators, with a fixed point property for actions by isometries.\par After introducing some terminology and background, such as the metric and geodesic structure on the cone $P$ of positive invertible elements of a $C^*$-algebra in Chapter 2, we will connect analytic data coming from a uniformly bounded map (such as its uniform bound) with geometric quantities like the diameter of orbits and the distance of orbits to fixed point sets, and analyze their properties in Chapter 3. \par Chapter 4 is then devoted to putting some interpolation results from G. Pisier's studies of similarity problems into this geometric set-up. At the end of that chapter, we will restrict the focus to certain subalgebras of $\B(\h)$ whose cones of positive operators carry a CAT(0) metric inducing the same geodesic structure, and prove unitarizability results in those cases.\par In Chapter 5, we give a partial answer to a problem formulated by Andruchow, Corach and Stojanoff in \cites{andcorstoj1,andcorstoj2} about the minimality properties of the canonical unitarizers of some representations.\par \section{Preliminaries}\label{prel} \subsection{Geometry on the cone of positive invertibles}\label{geopos} In this subsection, we recall some geometric facts about the cone $P$ of positive invertible operators in a unital $C^*$-algebra $A$.
Throughout this article, we will mainly use the metric and geodesic structure of $P$, but we mention the differential geometric background for the sake of completeness.\par As an open subset of the real space $A_s$ of self-adjoint elements in $A$, $P$ naturally inherits the structure of a Banach manifold. It also carries a canonical symmetric space structure with Cartan symmetries given by $\sigma_a(b)=ab^{-1}a$ for $a,b\in P$, see \cite{neeb}*{Example 3.9}. The corresponding exponential map $\exp_{\id}:T_{\id}P\simeq A_s\to P$ at the identity element $\id\in P$ is given by the ordinary exponential with inverse $\log:P\to A_s$, and the corresponding geodesic between any two points $a,b\in P$ is given by \begin{align} \gamma_{a,b}(t)=a^{\frac12}(a^{-\frac12}ba^{-\frac12})^ta^{\frac12}.\label{geodesic} \end{align}\par Note that this space is isomorphic to the quotient $G/U$, where $G$ and $U$ are the groups of invertible and unitary elements in $A$.\par By defining \begin{align*} d(a,b)=\left\Vert\log\left(a^{-\tfrac12}ba^{-\tfrac12}\right)\right\Vert, \end{align*} $P$ is turned into a metric space. This metric structure comes from a Finslerian length structure on $P$ (defined by identifying $A_s$, together with the restriction of the norm on $A$ to $A_s$, with the tangent space of $P$ at $\id$ and using the Cartan symmetries). With respect to this Finsler structure the geodesics between any two points $a,b\in P$ are length minimizing, but they are not unique as shortest continuous paths between $a$ and $b$ (since the operator norm is not uniformly convex in general). See \cite{neeb}*{Example 6.7} and \cite{correcht} for details, or \cite{schlicht}*{Chapter 4} for explicit calculations avoiding the use of Finslerian length structures.\par We can also regard a positive invertible $a\in P$ as a Hilbertian norm $\|\cdot\|_a$ compatible with the original norm $\|\cdot\|$, defined as $\|\xi\|_a=\|a\xi\|=\langle a^2\xi,\xi\rangle^{\frac12}$ for $\xi\in\h$.
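For commuting elements these formulas simplify considerably; the following observation is our own illustration, immediate from the definitions and not taken from the cited references: if $a,b\in P$ commute, then $$\gamma_{a,b}(t)=a^{1-t}b^{t}\qquad\text{and}\qquad d(a,b)=\left\Vert\log b-\log a\right\Vert\, ,$$ so for scalars $\alpha,\beta>0$ one has $d(\alpha\id,\beta\id)=\left|\log(\beta/\alpha)\right|$; in particular, $d(\id,a)=\Vert\log a\Vert$ for every $a\in P$.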
A Banach-Mazur type distance $\delta$ on the set of norms compatible with the Hilbertian norm of $\h$ can be defined as \begin{align*}\delta(||\cdot||,|||\cdot|||)=\sup_{\xi\neq 0}\left|{\log\frac{||\xi||}{ |||\xi|||}}\right|.\end{align*} It was proved in \cite{acms}*{Proposition 2.2} that $d(a,b)=\delta(\|\cdot\|_a,\|\cdot\|_b)$. In \cite{cpr}*{Theorem 1} the ``exponential metric increasing property'', which states that for $a\in P$ and $X,Y\in T_aP$ the inequality $\|X-Y\|_a\leq d(\exp_a(X),\exp_a(Y))$ holds, was shown to be equivalent to Segal's inequality $\|e^{X+Y}\|\leq\|e^{\frac{X}{2}}e^Ye^{\frac{X}{2}}\|$ for self-adjoint operators $X$ and $Y$. \par Using the exponential metric increasing property, the convexity of the distance along geodesics can be derived, see \cite{correcht}. \begin{prop}\label{geoconv} For two geodesics $\alpha$ and $\beta$ in $P$ the map $[0,1] \to \mathbb{R}$, $t\mapsto d(\alpha(t),\beta(t))$ is convex, i.e. \begin{align*} d(\alpha(t),\beta(t))\leq t\cdot d(\alpha(1),\beta(1))+(1-t)d(\alpha(0),\beta(0)). \end{align*} \end{prop} In \cite{cpr3}*{Proposition 1} or \cite{schlicht}*{Chapter 4}, the following was proved: \begin{prop}\label{invmetricap} The action $I$ of $G$ on $(P,d)$ given by $g\cdot a=gag^*$ is isometric. \end{prop} \subsection{Basic properties of the restricted canonical action}\label{basicprop} In this subsection, we prove the basic non-metric properties of the canonical action restricted to subgroups $H<G$. \begin{defn} Let $A$ be a $C^*$-algebra, $G\subseteq A$ the group of invertible elements and $P\subseteq G$ the set of positive invertibles. For a subgroup $H< G$ we define the action $I$ of $H$ on $P$ as $h\cdot a=I_h(a)=hah^*$. If clarification is needed, we write $I_H$ to indicate which group is acting.\par The \textit{\textbf{fixed point set}} of this action is denoted by $P^H$. The \textit{\textbf{orbit}} of $a\in P$ is denoted by $\oo_H(a)$.
A group $H$ is said to be \textit{\textbf{unitarizable}} if there is an invertible operator $s$ such that $s^{-1}Hs$ is a group of unitaries. \end{defn} \begin{rem}\label{posunit} Note that if $s$ is a unitarizer of $H$ and $s=bu$ is its polar decomposition, then $b$ is a positive unitarizer of $H$, as $u^{-1}b^{-1}Hbu=s^{-1}Hs$ is a group of unitaries and conjugation by the unitary $u$ preserves unitarity. In this case $\|s\|=\|b\|$. \end{rem} The next proposition relates positive unitarizers to fixed points of $I_H$. \begin{prop}\label{fixed} A positive invertible operator $s$ unitarizes $H$ if and only if $s^2$ is a fixed point of the action $I_H$. \end{prop} \begin{proof} Observe that \begin{align*} s^{-1}Hs\subseteq U &\Leftrightarrow s^{-1}hs(s^{-1}hs)^*=\id\quad\forall h\in H\\ &\Leftrightarrow s^{-1}hs^2h^*s^{-1}=\id\quad\forall h\in H \\ &\Leftrightarrow I_h(s^2)=hs^2h^*=s^2\quad\forall h\in H. \end{align*} \end{proof} We next show how orbits and fixed point sets behave under translations. This is well known, but we include a proof for the sake of completeness. \begin{prop}\label{tranorb} Let a group $G$ act on a set $X$. If $H$ is a subgroup of $G$, then for $f\in G$ and $x\in X$ we have $f^{-1}\cdot\oo_H(x)=\oo_{f^{-1}Hf}(f^{-1}\cdot x)$ and $f^{-1}\cdot X^H=X^{f^{-1}Hf}$. \end{prop} \begin{proof} The first identity follows from \begin{align*} \oo_{f^{-1}Hf}(x)&=\{(f^{-1}hf)\cdot x:h\in H\}=\{f^{-1}\cdot (h\cdot (f\cdot x)):h\in H\} \\ &=f^{-1}\cdot \{h\cdot (f\cdot x):h\in H\} =f^{-1}\cdot\oo_H(f\cdot x) \end{align*} by substituting $f^{-1}\cdot x$ for $x$. For the second identity, we see \begin{align*} x\in X^{f^{-1}Hf}&\Leftrightarrow f^{-1}hf\cdot x=x\; \forall h\in H \Leftrightarrow f^{-1}\cdot (h\cdot (f\cdot x))=x\; \forall h\in H \\ &\Leftrightarrow h\cdot f\cdot x=f\cdot x\; \forall h\in H \Leftrightarrow f\cdot x\in X^H \Leftrightarrow x\in f^{-1}\cdot X^H.
\end{align*} \end{proof} \begin{rem}\label{tranorb2} If $A$ is a $C^*$-algebra, $P$ is the set of positive invertible elements and $G$ the group of invertible elements of $A$, then Proposition \ref{tranorb} says that for a subgroup $H$ of $G$ and for $f\in G$ and $a\in P$ \begin{align*}I_{f^{-1}}(\oo_H(a))=f^{-1}\oo_H(a)f^{-1*}=\oo_{f^{-1}Hf}(f^{-1}af^{-1*})\end{align*} and \begin{align*}I_{f^{-1}}(P^H)=f^{-1}P^Hf^{-1*}=P^{f^{-1}Hf}.\end{align*} \end{rem} \begin{rem}\label{fixunit} If $H$ is a group of unitaries in a $C^*$-algebra $A\subseteq\B(\h)$, then the commutant $H'\cap A$ of $H$ in $A$ is a $C^*$-algebra, hence \begin{align*} P^H&=\{a\in P: I_h(a)=hah^{-1}=a\; \forall h\in H\} \\ &= \{a\in P: ha=ah\; \forall h\in H\} \\ &=P\cap H'=\exp(H'\cap A_s). \end{align*} \end{rem} \begin{defn}\label{lietrip} A closed real subspace $S\subseteq A_s\simeq T_{\id} P$ is called a \textit{\textbf{Lie triple system}} if $[[X,Y],Z]\in S$ for every $X,Y,Z\in S$, where the bracket is given by the commutator $[X,Y]=XY-YX$. A closed submanifold $C\subseteq P$ is \textit{\textbf{totally geodesic}} if $\exp_a(T_aC)=C$ for all $a\in C$. \end{defn} \begin{prop}\label{totalgeod} Let $H$ be a group of invertible elements. Then the fixed point set $P^H$ of the action $I$ is a totally geodesic submanifold of $P$. \end{prop} \begin{proof} If $H$ is not unitarizable, then $P^H$ is empty by Proposition \ref{fixed}. If $H$ is unitarizable and $f$ is a positive unitarizer, then by Remark \ref{tranorb2} $P^H=fP^{f^{-1}Hf}f$, so the fixed point set is a translation of the fixed point set of the unitary group $f^{-1}Hf$.\par By Remark \ref{fixunit}, $P^{f^{-1}Hf}=P\cap (f^{-1}Hf)'=\exp((f^{-1}Hf)'\cap A_s)$. Since $(f^{-1}Hf)'$ is a $*$-subalgebra of $A$, it is a Lie triple system.
From the identity $[X,Y]^*=-[X^*,Y^*]$ one easily verifies that $A_s$ is also a Lie triple system.\par Therefore, the intersection $(f^{-1}Hf)'\cap A_s$ is a Lie triple system and, by Corollary 4.17 in \cite{condelarotonda2}, $P^{f^{-1}Hf}=P\cap (f^{-1}Hf)'=\exp((f^{-1}Hf)'\cap A_s)$, being the exponential of a Lie triple system, is a totally geodesic submanifold. Since $P^H$ is a translation of the totally geodesic manifold $P^{f^{-1}Hf}$, it is also totally geodesic. \end{proof} \section{Metric characterization of the similarity number and size of a group}\label{simnum} In this chapter, we define the size and similarity number of a group of invertible elements in a $C^*$-algebra and show how these quantities are related to the norm and completely bounded norm of unital homomorphisms. We then show how the similarity number and size of a group depend on the diameter of orbits and the distance of orbits to fixed point sets of the canonically associated action on positive invertible elements. \begin{defn}The \textbf{diameter} of $D\subseteq P$ is given by $\diam(D)=\sup_{x,y\in D}d(x,y)$. The \textbf{distance between two subsets} $C$ and $D\subseteq P$ is defined as $\dist(C,D)=\inf_{x\in C,y\in D}d(x,y)$. \end{defn} \begin{defn}\label{sizesim} We define the \textit{\textbf{size}} $\vert H\vert$ of a subgroup $H< G$ by $|H|=\sup_{h\in H}\|h\|$. \par The \textit{\textbf{similarity number}} of $H$ is $\Sim(H)=\inf\{\|s\|\|s^{-1}\|:s \mbox{ is a unitarizer of }H\}$. From Remark \ref{posunit} one easily gets $\Sim(H)=\inf\{\|s\|\|s^{-1}\|:s \mbox{ is a positive unitarizer of }H\}$. \end{defn} In Pisier's approach to similarity problems (see \cites{pisier2,pisier3}) the similarity number and size of a group are used. Note that the similarity number defined above is not the same as the similarity degree defined by Pisier.
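A small example illustrating these quantities (our own illustration, not taken from the references): in $A=M_2(\mathbb{C})$ let $s=\begin{pmatrix}2&0\\0&1\end{pmatrix}$ and $H=sUs^{-1}$, where $U$ denotes the unitary group of $M_2(\mathbb{C})$. Then $s$ is a unitarizer of $H$, so $\Sim(H)\leq\|s\|\|s^{-1}\|=2$, and $\|sus^{-1}\|\leq\|s\|\|s^{-1}\|=2$ for every $u\in U$, so $|H|\leq 2$. For the flip $u=\begin{pmatrix}0&1\\1&0\end{pmatrix}$ one computes $sus^{-1}=\begin{pmatrix}0&2\\ \frac12&0\end{pmatrix}$, which has norm $2$, whence $|H|=2$. Together with the inequality $|H|\leq\Sim(H)$, proved in the remark following Theorem \ref{distdiam} below, this gives $\Sim(H)=|H|=2$.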
The norm and completely bounded norm of a unital homomorphism $\pi:A\to \B(\h)$ have known interpretations in terms of the size and similarity number of the group of invertibles $\pi(U_A)\subseteq G_{\B(\h)}$, which we recall.\par The following result is due to Haagerup, see Theorem 1.10 in \cite{haag} or Theorem 9.1 and Corollary 9.2 in \cite{paulsen}. Here we use the notation $\Ad_s(a)=sas^{-1}$ and note that a bounded unital homomorphism is not necessarily $*$-preserving. \begin{thm}\label{haag} Let $A$ be a $C^*$-algebra with unit and let $\pi:A\to \B(\h)$ be a bounded unital homomorphism. Then $\pi$ is similar to a $*$-homomorphism (i.e. there is an invertible $s\in \B(\h)$ such that $\Ad_s\circ \pi$ is a $*$-homomorphism) if and only if $\pi$ is completely bounded. If $\pi$ is completely bounded, then \begin{align*}\|\pi\|_{c.b.}=\inf\{\|s^{-1}\|\|s\|:\Ad_s\circ \pi \mbox{ is a $*$-homomorphism }\}.\end{align*} \end{thm} The following is Lemma 9.6 in \cite{paulsen} and follows from the fact that a $C^*$-algebra is the linear span of its unitaries. \begin{prop}\label{orthog} If $A$ and $B$ are unital $C^*$-algebras and $\pi:A\to B$ is a unital homomorphism, then $\pi$ is a $*$-homomorphism if and only if $\pi$ sends unitaries to unitaries, i.e. $\pi(U_A)\subseteq U_B$, where $U_A$ and $U_B$ are the groups of unitaries of $A$ and $B$ respectively.\par Therefore, for an invertible $s\in B$, $\Ad_{s^{-1}}\circ \pi$ is a $*$-homomorphism if and only if $\Ad_{s^{-1}}(\pi(U_A))=s^{-1}\pi(U_A)s$ is a group of unitaries, i.e. if and only if $s$ is a unitarizer of $\pi(U_A)$. \end{prop} Combining the previous two results we get: \begin{prop}\label{simcb} Let $A$ be a $C^*$-algebra with unit and let $\pi:A\to \B(\h)$ be a completely bounded unital homomorphism.
Then \begin{align*}\|\pi\|_{c.b.}=\Sim({\pi(U_A)}).\end{align*} \end{prop} \begin{proof} Note that $\|s\|\|s^{-1}\|$ is invariant under $s\mapsto s^{-1}$, so the infimum in Theorem \ref{haag} may equivalently be taken over $s$ with $\Ad_{s^{-1}}\circ\pi$ a $*$-homomorphism. Hence \begin{align*} \|\pi\|_{c.b.}&=\inf\{\|s\|\|s^{-1}\|:\Ad_{s^{-1}} \circ \pi \mbox{ is a $*$-homomorphism } \}&\mbox{by Theorem \ref{haag} } \\ &=\inf\{\|s\|\|s^{-1}\|:s \mbox{ is a unitarizer of } \pi(U_A) \}&\mbox{by Proposition \ref{orthog}} \\ &=\Sim({\pi(U_A)})&\mbox{by Definition \ref{sizesim} } \end{align*} \end{proof} \begin{rem} A corollary of the Russo-Dye theorem, which states that the closed unit ball in a unital $C^*$-algebra is the closed convex hull of its unitaries (see \cite{davidson}*{Theorem I.8.4}), is the fact that if $A$ and $B$ are unital $C^*$-algebras and $\pi:A\to B$ is a unital homomorphism, then $\|\pi\|=|\pi(U_A)|$. \end{rem} The next theorem characterizes metrically the similarity number and size of a group $H$ of invertible operators in a $C^*$-algebra in terms of the canonical isometric action of $H$ on $P$. \begin{thm}\label{distdiam} For a group $H$ of invertible elements in a $C^*$-algebra the identities \begin{align*}\dist\left(\oo_H(\id),P^H\right)=\dist\left(\id,P^H\right)=\log(\Sim(H))\end{align*} and \begin{align*}\diam(\oo_H(\id))=2\log(|H|)\end{align*} hold. \end{thm} \begin{proof} We denote by $\lambda_{\max}(a)$ and $\lambda_{\min}(a)$ the maximum and the minimum of the spectrum of $a\in P$. Then, using the characterization of unitarizers, \begin{align}\label{ig1} \Sim(H)&=\inf\{\|s\|\|s^{-1}\|:s \mbox{ is a positive unitarizer of }H\} \nonumber\\ &\stackrel{\textrm{Prop. \ref{fixed}}}=\inf\left\{\left\|a^{\frac12}\right\|\left\|a^{-\frac12}\right\|:a \in P^H\right\}=\inf_{a \in P^H}\left(\frac{\lambda_{\max}(a)}{\lambda_{\min}(a)}\right)^{\frac12}.
\end{align} Also, using the fact that for $a\in P^H$ and $\alpha >0$ we have $\alpha a \in P^H$, \begin{align}\label{ig2} \dist(\id,P^H)&=\inf_{a \in P^H}d(\id,a)=\inf_{a \in P^H}\|\log(a)\| \nonumber\\ &=\inf_{a \in P^H}\max\{\log(\lambda_{\max}(a)),-\log(\lambda_{\min}(a))\} \nonumber\\ &=\inf_{a \in P^H, \alpha > 0}\max\{\log(\lambda_{\max}(\alpha a)),-\log(\lambda_{\min}(\alpha a))\} \\ &=\inf_{a \in P^H, c \in \mathbb{R}}\max\{\log(\lambda_{\max}(a)) + c,-\log(\lambda_{\min}(a)) -c\} \nonumber\\ &=\inf_{a \in P^H}\frac12(\log(\lambda_{\max}(a))-\log(\lambda_{\min}(a))) \nonumber\\ &=\log\left(\inf_{a \in P^H}\left(\frac{\lambda_{\max}(a)}{\lambda_{\min}(a)}\right)^{\frac12}\right). \nonumber \end{align} Combining (\ref{ig1}) and (\ref{ig2}) we get $\dist(\id,P^H)=\log(\Sim(H))$. Also \begin{align*} \dist(\oo_H(\id),P^H)&= \inf_{h\in H}\dist(I_h(\id),P^H) =\inf_{h\in H}\dist(\id,I_{h^{-1}}(P^H)) \\ &=\inf_{h\in H}\dist(\id,P^H) =\dist(\id,P^H), \end{align*} where the second equality follows from the fact that $I$ is isometric, and the third equality follows from the fact that $P^H$ is $I_H$-invariant. Since \begin{align*} d(\id,hh^*)&=\|\log(hh^*)\|=\max\{\log\|hh^*\|,\log\|(hh^*)^{-1}\|\} \\ &=\max\{\log(\|h\|^2),\log(\|h^{-1}\|^2)\} \end{align*} it follows that \begin{align*} \diam(\oo_H(\id))&= \sup_{h\in H} d(\id,hh^*)=\sup_{h\in H} \max\{\log(\|h\|^2),\log(\|h^{-1}\|^2)\} \\ &=\sup_{h\in H} \log(\|h\|^2)=\sup_{h\in H} 2\log(\|h\|) =2\log(|H|). \end{align*} \end{proof} \begin{rem} For $H$ a unitarizable group, $h\in H$ and a positive unitarizer $s$ of $H$ one has $\|h\|=\|s(s^{-1}hs)s^{-1}\|\leq \|s\|\|s^{-1}hs\|\|s^{-1}\|= \|s\|\|s^{-1}\|$, as $s^{-1}hs$ is unitary.
Taking the supremum over $h\in H$ and the infimum over positive unitarizers $s$ we obtain $|H|\leq \Sim(H).$\par After taking logarithms, this inequality turns into $D_H(\id)\leq 2\dist(\id,P^H)$, which holds for any isometric action on a metric space.\par Therefore, the fact that $|H|\leq \Sim(H)$ (and in particular the fact that $\|\pi\|\leq\|\pi\|_{c.b}$ for a unital homomorphism $\pi:A \to \B(\h)$) corresponds to the geometric fact that the diameter of the orbit of the identity element is less than or equal to twice the distance between the identity element and the fixed point set of the action. \end{rem} \begin{defn} A positive invertible $a\in P$ with spectrum $\sigma(a)$ is said to have \textit{\textbf{symmetric spectrum}} if $\log(\max(\sigma(a)))=-\log(\min(\sigma(a)))$.\par Note that this is equivalent to $\|a\|=\|a^{-1}\|$. \end{defn} \begin{rem}\label{simdist} Observe in the proof of $\dist(\id,P^H)=\log(\Sim(H))$ that an $a\in P^H$ minimizing the distance to $\id$ corresponds to a unitarizer $a^{\frac12}$, which minimizes the quantity $\|s\|\|s^{-1}\|$ among all unitarizers. Also, a unitarizer $s$ such that $\|s\|\|s^{-1}\|=\Sim(H)$ can be scaled to have symmetric spectrum, and the resulting scaled fixed point $s^2$ minimizes the distance to $\id$. \end{rem} The next lemma shows that closest points in $P^H$ to a point $b\in P$ exist and hence, by the previous remark, that unitarizers realizing the similarity number exist. \begin{lem}\label{minunit} Let $A\subseteq \B(\h)$ be a von Neumann algebra acting on a separable Hilbert space $\h$ and let $H$ be a group of invertible operators in $A$. Then for every $b\in P$ there is an $a\in P^H$ such that $\dist(b,P^H)=d(b,a)$. \end{lem} \begin{proof} Using the isometric translation $c\mapsto b^{-\frac12}cb^{-\frac12}$ we can assume by Proposition \ref{tranorb2} that $b=\id$.
For $a\in P$ \begin{align*}d(\id,a)=\|\log(a)\|=\max\{\log(\lambda_{\max}(a)),-\log(\lambda_{\min}(a))\},\end{align*} where $\lambda_{\max}(a)$ and $\lambda_{\min}(a)$ denote the maximum and minimum of the spectrum of $a\in P\subseteq A_s$. Hence the closed metric balls around $\id$ are operator intervals, i.e. \begin{align*}B[\id,r]=\{b\in P:d(\id,b)\leq r\}=[e^{-r}\id,e^{r}\id]\subseteq P.\end{align*} There is a sequence $(a_n)_n\subseteq P^H$ such that $d(\id,a_n)\to \dist(\id,P^H)=\inf_{b \in P^H}d(\id,b)$. Since the set $$\{a\in A:hah^*=a\; \forall h\in H\}=\bigcap_{h\in H}\{a\in A:hah^*=a\}$$ is weak operator closed, and for every $r>0$ the set $[e^{-r}\id,e^{r}\id]$ is also weak operator closed, we conclude that $P^H\cap [e^{-r}\id,e^{r}\id]$ is weak operator closed. Also, since the weak operator topology on closed balls is metrizable and compact, it follows that there is a subsequence of $(a_n)_n$ converging weakly to an $a\in P^H$.\par This subsequence, which we still denote by $(a_n)_n$, also satisfies $d(\id,a_n)\to \dist(\id,P^H)$. Now, for every $\epsilon >0$ there is an $n_{\epsilon}\in \N$ such that for $n\geq n_{\epsilon}$ \begin{align*}a_n\in B[\id,\dist(\id,P^H)+\epsilon]=[e^{-\dist(\id,P^H)-\epsilon}\id,e^{\dist(\id,P^H)+\epsilon}\id].\end{align*} Since operator intervals are weak operator closed, it follows that the weak operator limit $a$ of $(a_n)_n$ is in $[e^{-\dist(\id,P^H)-\epsilon}\id,e^{\dist(\id,P^H)+\epsilon}\id]$. Therefore, $d(\id,a)\leq \dist(\id,P^H)+\epsilon$ for every $\epsilon>0$, so that $d(\id,a)\leq \dist(\id,P^H)$.\par Since $d(\id,a)\geq \dist(\id,P^H)=\inf_{b \in P^H}d(\id,b)$, the conclusion follows. \end{proof} \section{Geometric interpolation of the similarity number and size of a group}\label{interpolation} This chapter begins with a geometric interpolation result relating the similarity number and size of one-parameter families of groups of invertible operators.
Then a translation of some of Pisier's results about the similarity degree of groups and operator algebras into our geometric setup is given, and we prove a geometric interpolation result about the similarity constants of certain group extensions. We end with a subsection studying the orbit structure of isometric actions on totally geodesic sub-manifolds of positive invertible operators with stronger convexity properties. \begin{defn} A \textit{\textbf{geodesically convex set}} is a subset $C\subseteq P$ such that $\gamma_{a,b}(t)\in C$ for all $a,b\in C$ and $t\in [0,1]$. A map $f:P\to \R$ is called a \textit{\textbf{geodesically convex function}} if $f(\gamma_{a,b}(t))\leq (1-t)f(a)+tf(b)$ holds for all $a,b\in P$ and $t\in [0,1]$. \end{defn} \begin{nota} For a uniformly bounded group $H$ of invertible operators in a $C^*$-algebra we denote by $D_H$ the orbit diameter function $D_H: P \to \mathbb{R}^+,~D_H(a)=\diam(\oo_H(a))$. \end{nota} \begin{lem}\label{lemdh} The map $D_H: P \to \mathbb{R}^+$ is invariant for the action of $I_H$, geodesically convex and $2$-Lipschitz. \end{lem} \begin{proof} The invariance of $D_H$ follows from the fact that $\oo_H(h\cdot a)=\oo_H(a)$ for $a\in P$ and $h\in H$. To prove that $D_H$ is geodesically convex, we see that for a geodesic $\gamma_{a,b}:[0,1]\to P$ the following holds \begin{align*} D_H(\gamma_{a,b}(t))&=\sup_{h\in H}d(\gamma_{a,b}(t),h\cdot \gamma_{a,b}(t)) = \sup_{h\in H}d(\gamma_{a,b}(t), \gamma_{h\cdot a,h\cdot b}(t)) \\ &\stackrel{\textrm{Prop \ref{geoconv}}}\leq \sup_{h\in H}((1-t)d(a,h\cdot a) + td(b,h\cdot b)) \\ &\leq (1-t)\sup_{h\in H}d(a,h\cdot a) + t\sup_{h\in H}d(b,h\cdot b) \\ &= (1-t)D_H(a) +tD_H(b). \end{align*} The $2$-Lipschitz property of $D_H$ follows from \begin{align*} D_H(a)&=\sup_{h\in H}d(a,h\cdot a) \leq \sup_{h\in H}(d(a,b)+d(b,h\cdot b)+d(h\cdot b,h\cdot a)) \\ &=\sup_{h\in H}(2d(a,b)+ d(b,h\cdot b)) =2d(a,b)+\sup_{h\in H}d(b,h\cdot b) =2d(a,b)+D_H(b).
\end{align*} By symmetry, we get $D_H(b)-D_H(a)\leq 2d(b,a)$, so that $|D_H(a)-D_H(b)|\leq 2d(a,b)$. \end{proof} \begin{rem} For a geodesic $\gamma$ in $P$, the difference quotient \begin{align*}f_{\gamma}(t)=\frac{D_H(\gamma(t))-D_H(\gamma(0))}{d(\gamma(t),\gamma(0))}\end{align*} is a nondecreasing function of $t$ because $D_H$ is geodesically convex. It is bounded above by $2$ and bounded below by $-2$ because $D_H$ is $2$-Lipschitz.\par Therefore $\lim\limits_{t\to\infty}f_{\gamma}(t)$ exists and we can interpret this quantity as a slope of $D_H$ at infinity. \end{rem} \begin{lem}\label{lemdist} For a geodesically convex subset $C\subseteq P$, the map \begin{align*}P\to \R^+,\quad a\mapsto \dist(a,C)\end{align*} is geodesically convex and 1-Lipschitz. \end{lem} \begin{proof} Let $\epsilon >0$ and let $e,f \in C$ be such that $d(a,e)<\dist(a,C)+\frac{\epsilon}{2}$ and $d(b,f)<\dist(b,C)+\frac{\epsilon}{2}$. Since $\gamma_{e,f}$ lies in $C$ and the distance along geodesics is convex (Proposition \ref{geoconv}), we have for $t\in[0,1]$ \begin{align*} \dist(\gamma_{a,b}(t),C)&\leq d(\gamma_{a,b}(t),\gamma_{e,f}(t)) \leq (1-t)d(a,e)+td(b,f)\\ &\leq (1-t)\dist(a,C)+t\dist(b,C)+\epsilon . \end{align*} Taking $\epsilon >0$ arbitrarily small we get the inequality. Observe also that \begin{align*}\dist(a,C)\leq \inf_{c\in C}(d(a,b)+d(b,c))=d(a,b)+\dist(b,C),\end{align*} so that by symmetry we get the Lipschitz bound. \end{proof} \begin{rem} The last two lemmas are valid for more general GCB-spaces (see \cite{schlicht} for details and definitions), and the Lipschitz bound is valid for arbitrary isometric actions on metric spaces. \end{rem} We next use the fact that the diameter of the orbit of a point and the distance of a point to a geodesically convex subset are geodesically convex functions to prove a geometric interpolation theorem. This extends results proved by Pisier using complex interpolation techniques \cite{pisier3}*{Lemmas 2.2 and 2.3}.
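Before doing so, the statements of Lemma \ref{lemdh} can be sanity-checked numerically in a toy example. The following sketch is our own illustration (not taken from the cited works): it takes the hypothetical order-two group $H=\{\id,h\}$ with $h=sus^{-1}$, where $u$ is the $2\times 2$ swap unitary and $s=\mathrm{diag}(2,1)$, and restricts to diagonal positive matrices, on which the geodesic and the metric reduce to entrywise formulas.

```python
import math

# Hypothetical order-2 group H = {id, h} with h = [[0, 2], [1/2, 0]]; h is the
# conjugate s u s^{-1} of the swap unitary u by s = diag(2, 1).  For a diagonal
# positive a = diag(x, y) one computes h a h^* = diag(4y, x/4), and on
# commuting (diagonal) positive matrices the geodesic and the metric reduce to
#   gamma_{a,b}(t)_i = a_i^(1-t) * b_i^t,   d(a, b) = max_i |log(b_i / a_i)|.

def d(a, b):                      # distance between diagonal positive matrices
    return max(abs(math.log(bi / ai)) for ai, bi in zip(a, b))

def gamma(a, b, t):               # geodesic between diagonal positive matrices
    return tuple(ai ** (1 - t) * bi ** t for ai, bi in zip(a, b))

def orbit_diameter(a):            # D_H(a) = d(a, h a h^*) for H = {id, h}
    x, y = a
    return d((x, y), (4 * y, x / 4))

a, b = (1.0, 1.0), (9.0, 0.5)
for k in range(11):
    t = k / 10
    lhs = orbit_diameter(gamma(a, b, t))
    rhs = (1 - t) * orbit_diameter(a) + t * orbit_diameter(b)
    assert lhs <= rhs + 1e-9      # geodesic convexity of D_H (Lemma lemdh)

# 2-Lipschitz bound |D_H(a) - D_H(b)| <= 2 d(a, b)
assert abs(orbit_diameter(a) - orbit_diameter(b)) <= 2 * d(a, b) + 1e-9
print("convexity and Lipschitz checks passed")
```

Restricting to diagonal matrices keeps the computation elementary: the conjugates $hah^*$ stay diagonal, so both sides of the convexity inequality are explicit.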
\begin{thm}\label{geomint} If $H$ is a uniformly bounded group, $\gamma_t=\gamma_{r^2,s^2}(t)$ is the geodesic connecting positive invertible elements $r^2$ and $s^2$ and $H_t=\gamma_t^{-1/2}H\gamma_t^{1/2}$ is the one-parameter family of groups between the group $r^{-1}Hr$ and the group $s^{-1}Hs$, then \begin{align*}|H_t|\leq |r^{-1}Hr|^{1-t}|s^{-1}Hs|^{t}.\end{align*} If $H$ is also unitarizable, then \begin{align*}\Sim(H_t)\leq \Sim(r^{-1}Hr)^{1-t}\Sim(s^{-1}Hs)^{t}.\end{align*} In particular, if $s$ is a positive unitarizer such that $\dist\left(\id,P^H\right)=d(\id,s^2)$, then the one-parameter family of groups $H_t=s^{-t}Hs^t$ satisfies \begin{align*}|H_t|\leq |H|^{1-t}\textrm{ and }\Sim(H_t)=\Sim(H)^{1-t}.\end{align*} \end{thm} \begin{proof} By Proposition \ref{invmetricap}, the action $I$ is isometric, so that by Proposition \ref{tranorb} for $f\in G$ and $b\in P$ \begin{align*} D_{f^{-1}Hf}(b)&=\diam(\oo_{f^{-1}Hf}(b))=\diam(f^{-1}\oo_H(fbf^*)f^{-1*}) \\ &=\diam(\oo_H(fbf^*))=D_H(fbf^*). \end{align*} Now, using the fact that $\gamma_t=\gamma_{r^2,s^2}(t)$ is a geodesic and the geodesic convexity of $D_H$ proved in Lemma \ref{lemdh}, \begin{align*} D_{H_t}(\id)&=D_{\gamma_t^{-1/2}H\gamma_t^{1/2}}(\id)=D_H(\gamma_t) \\ &=D_H(\gamma_{r^2,s^2}(t))\leq (1-t)D_H\left(r^2\right) + tD_{H}\left(s^2\right) \\ &=(1-t)D_{r^{-1}Hr}(\id) + tD_{s^{-1}Hs}(\id). \end{align*} Exponentiating this inequality and using Theorem \ref{distdiam}, we get \begin{align*}|H_t|^2\leq |r^{-1}Hr|^{2(1-t)}|s^{-1}Hs|^{2t}\end{align*} and therefore $|H_t|\leq |r^{-1}Hr|^{1-t}|s^{-1}Hs|^{t}$.
By Proposition \ref{tranorb} and Proposition \ref{invmetricap}, we get for $f\in G$ and $b\in P$ \begin{align*}\dist\left(b,P^{f^{-1}Hf}\right)=\dist\left(b,f^{-1}P^Hf^{-1*}\right)=\dist\left(fbf^*,P^H\right).\end{align*} Since $P^H$ is geodesically convex, we can use Lemma \ref{lemdist} to get \begin{align*} \dist\left(\id,P^{H_t}\right)&=\dist\left(\id,P^{\gamma_t^{-1/2}H\gamma_t^{1/2}}\right)\\ &=\dist\left(\gamma_t,P^H\right)=\dist\left(\gamma_{r^2,s^2}(t),P^H\right)\\ &\leq (1-t)\dist\left(r^2,P^H\right) + t\dist\left(s^2,P^H\right) \\ &= (1-t)\dist\left(\id,P^{r^{-1}Hr}\right)+t\dist\left(\id,P^{s^{-1}Hs}\right). \end{align*} Exponentiating this inequality we obtain \begin{align*}\Sim(H_t)\leq \Sim\left(r^{-1}Hr\right)^{1-t}\Sim\left(s^{-1}Hs\right)^{t}.\end{align*} Now, if the geodesic is $\gamma_{\id,s^2}(t)=s^{2t}$, then, since $H_1=s^{-1}Hs$ is a group of unitaries and hence $|H_1|=1$, we get $|H_t|\leq |H|^{1-t}$. In the inequality for the similarity number we get instead an equality: since $s^2$ is a point in $P^H$ minimizing the distance from $\id$ to $P^H$ and geodesics are length-minimizing, $s^2$ is also a closest point in $P^H$ to every point of the geodesic $\gamma_{\id,s^2}(t)=s^{2t}$. Therefore \begin{align*}\dist\left(\id,P^{H_t}\right)=\dist\left(\id,P^{\gamma_t^{-1/2}H\gamma_t^{1/2}}\right)=\dist\left(\gamma_t,P^H\right)=(1-t)\dist\left(\id,P^H\right).\end{align*} Exponentiating this equation and using Theorem \ref{distdiam} we get $\Sim(H_t)=\Sim(H)^{1-t}$. \end{proof} \begin{cor} In the case of a completely bounded unital homomorphism $\pi:A\to \B(\h)$, taking $s$ as above for the group $\pi(U_A)$, we can define a family of homomorphisms $\pi_t=\Ad_{s^{-t}}\circ \pi$ such that \begin{align*}\|\pi_t\|\leq \|\pi\|^{1-t}\mbox{ and } \|\pi_t\|_{c.b.}=\|\pi\|_{c.b.}^{1-t}.\end{align*} \end{cor} Pisier used bounds that relate the similarity number and size of groups to characterize classes of groups and algebras, see Theorem 1 in \cite{pisier1} and the discussion following that theorem.
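The interpolation in Theorem \ref{geomint} can be made completely explicit in a small example. The following numerical sketch is our own illustration (the group $H$, the swap unitary $u$ and the unitarizer $s$ are hypothetical choices, not taken from the cited references): for $H=\{\id,sus^{-1}\}$ with $s=\mathrm{diag}(\sqrt2,1/\sqrt2)$ one has $\Sim(H)=|H|=2$, and the family $H_t=s^{-t}Hs^t$ satisfies $|H_t|=\Sim(H_t)=2^{1-t}$.

```python
import math

# Illustration of Theorem geomint for the order-2 group H = {id, h} with
# h = s u s^{-1}, where u = [[0,1],[1,0]] is the swap unitary and
# s = diag(sqrt(2), 1/sqrt(2)) is a positive unitarizer with symmetric
# spectrum.  Then H_t = s^{-t} H s^t is generated by
#   h_t = s^(1-t) u s^-(1-t) = [[0, 2^(1-t)], [2^(t-1), 0]],
# whose operator norm is 2^(1-t), so |H_t| = |H|^(1-t); since s^(1-t)
# unitarizes H_t and |H_t| <= Sim(H_t) <= ||s^(1-t)|| * ||s^-(1-t)||,
# we also get Sim(H_t) = Sim(H)^(1-t) = 2^(1-t), as the theorem predicts.

def size_Ht(t):
    # |H_t| = max(||id||, ||h_t||); an antidiagonal matrix [[0,a],[b,0]] has
    # singular values |a| and |b|, so ||h_t|| = max(2^(1-t), 2^(t-1)).
    return max(1.0, 2.0 ** (1 - t), 2.0 ** (t - 1))

def unitarizer_bound(t):
    # ||s^(1-t)|| * ||s^-(1-t)|| for s = diag(sqrt(2), 1/sqrt(2))
    p = 2.0 ** ((1 - t) / 2)      # largest eigenvalue of s^(1-t)
    return p * p                  # = 2^(1-t), an upper bound for Sim(H_t)

for k in range(11):
    t = k / 10
    assert math.isclose(size_Ht(t), 2.0 ** (1 - t))       # |H_t| = |H|^(1-t)
    assert math.isclose(unitarizer_bound(t), size_Ht(t))  # so Sim(H_t) = 2^(1-t)
print("interpolation example checks passed")
```

The squeeze $|H_t|\leq \Sim(H_t)\leq \|s^{1-t}\|\|s^{-(1-t)}\|$ is what pins down $\Sim(H_t)$ exactly here, mirroring the equality case of the theorem.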
If we take the logarithm in inequalities of the form \begin{align*}\Sim(H)\leq K|H|^{\alpha}\end{align*} for constants $K>1$ and $\alpha>0$, we get by Theorem \ref{distdiam} \begin{align*}\dist(\id,P^H)\leq \log(K) + \frac{\alpha}{2}D_H(\id).\end{align*} Therefore, composing group representations and algebra representations restricted to unitary groups with the isometric action on $P$, \begin{align*}\Gamma\xrightarrow{\rho} G\to \Isom(P),\qquad U_A\xrightarrow{\pi|_{U_A}} G\to \Isom(P),\end{align*} Theorem 1 in \cite{pisier1} translates into geometric terms as follows: \begin{thm}\label{pisiersim} The following holds: \begin{itemize} \item A discrete group $\Gamma$ is amenable if and only if for every uniformly bounded representation $\rho:\Gamma\to \B(\h)$ the inequality $\dist\left(\id,P^{\rho(\Gamma)}\right)\leq D_{\rho(\Gamma)}(\id)$ holds. \item A discrete group $\Gamma$ is finite if and only if there is a $c>0$ such that for every uniformly bounded representation $\rho:\Gamma\to \B(\h)$ the inequality $\dist(\id,P^{\rho(\Gamma)})\leq c+ \frac{1}{2}D_{\rho(\Gamma)}(\id)$ holds. \item A $C^*$-algebra $A$ is nuclear if and only if for every unital completely bounded homomorphism $\psi:A\to \B(\h)$ the inequality $\dist(\id,P^{\psi(U_A)})\leq D_{\psi(U_A)}(\id)$ holds, where $U_A$ is the group of unitaries of $A$. \item A $C^*$-algebra $A$ is finite dimensional if and only if there is a $c>0$ such that for every unital completely bounded homomorphism $\psi:A\to \B(\h)$ the inequality $\dist(\id,P^{\psi(U_A)})\leq c+ \frac{1}{2}D_{\psi(U_A)}(\id)$ holds. \end{itemize} \end{thm} For the composed action $\Gamma\xrightarrow{\pi} G\to \Isom(P)$, if $\Gamma$ is amenable then the distance of each point $a\in P$ to the fixed point set is less than or equal to the diameter of the orbit of that point.
This proposition is well known; we state and prove it in our geometric setting: \begin{prop}\label{distamenable} If $\pi:\Gamma\to G$ is a strongly continuous uniformly bounded representation of an amenable locally compact group $\Gamma$ and if we denote $H=\pi(\Gamma)$, then \begin{align*}\dist\left(a,P^{\pi(\Gamma)}\right)\leq D_{\pi(\Gamma)}(a)\end{align*} for every $a\in P$. \end{prop} \begin{proof} Since the translation $P\to P$, $b\mapsto a^{-\frac12}ba^{-\frac12}$, is isometric and $g\mapsto a^{-\frac12}ga^{\frac12}$ is continuous if we endow $G$ with the strong operator topology, by Proposition \ref{invmetricap} and Proposition \ref{tranorb} \begin{align*}\dist\left(\id,P^{a^{-\frac12}Ha^{\frac12}}\right)=\dist\left(\id,a^{-\frac12}P^Ha^{-\frac12}\right)=\dist\left(a,P^H\right)\end{align*} and \begin{align*} D_{a^{-\frac12}Ha^{\frac12}}(\id)&=\diam(\oo_{a^{-\frac12}Ha^{\frac12}}(\id))=\diam(a^{-\frac12}\oo_H(a^{\frac12}\id a^{\frac12})a^{-\frac12}) \\ &=\diam(\oo_H(a^{\frac12}a^{\frac12}))=D_H(a), \end{align*} so the problem is reduced to proving that $\dist(\id,P^H)\leq D_H(\id)$. Note that \begin{align*}\oo_H(\id)=\{hh^*:h\in H\}\subseteq [|H|^{-2}\id,|H|^2\id]\end{align*} and that there is a fixed point $a\in P^{\pi(\Gamma)}$ in the weak closure of the linear convex hull of the orbit of $\id$, see \cite{fack}*{Proposition 1. (ii)}. Alternatively, we could have used the fixed point property characterization of amenable groups, noting that $g\cdot b=\pi(g)b\pi(g)^*$ is a continuous action on the weakly compact convex set $\overline{conv}^w(\oo_H(\id))$.
Hence, \begin{align*}a \in \overline{conv}^w(\oo_H(\id))\subseteq [|H|^{-2}\id,|H|^2\id]=B[\id,2\log(|H|)],\end{align*} so that \begin{align*}\dist(\id,P^H)\leq d(\id,a)\leq 2\log(|H|)=D_H(\id).\end{align*} \end{proof} \begin{prop}\label{diagsim} Consider a Hilbert space direct sum $\h=\oplus_{n\in \N}\h_n$, the algebra $\B(\h)$ and its diagonal subalgebra $A=\oplus_{n\in \N}\B(\h_n)$.\par If $H=\oplus_{n\in \N}H_n$ is a unitarizable group of invertibles in $A$, then \begin{align*} \Sim_A(H)&=\min\{\|r\|\|r^{-1}\|:r\in A, r \mbox{ unitarizes }H\}\\ &=\sup_{n\in \N}\Sim_{\B(\h_n)}(H_n)=\sup_{n\in \N}\min\{\|r_n\|\|r_n^{-1}\|:r_n\in \B(\h_n), r_n \mbox{ unitarizes }H_n\}\\ &=\Sim_{\B(\h)}(H)=\inf\{\|s\|\|s^{-1}\|:s\in \B(\h), s \mbox{ unitarizes }H\} \end{align*} \end{prop} \begin{proof} If $s$ is an invertible in $\B(\h)$ unitarizing $H$, then $shs^{-1}|_{s(\h_n)}:s(\h_n)\to s(\h_n)$ is unitary for each $n\in \N$ and $h\in H$. Since $s(\h_n)$ and $\h_n$ have the same Hilbert space dimension, there is a unitary $u_n:s(\h_n)\to \h_n$. Hence, \begin{align*}r_n=u_n\circ s|_{\h_n}:\h_n\to s(\h_n) \to \h_n\end{align*} is a unitarizer of $H_n$ with $\|r_n\|\|r_n^{-1}\|\leq\|s\|\|s^{-1}\|$.\par By scaling the $r_n$ so that they have symmetric spectrum we get that $r=\oplus_{n\in \N}r_n\in A$ is a unitarizer of $H$ with $\|r\|\|r^{-1}\|= \sup_{n\in \N}\|r_n\|\|r_n^{-1}\| \leq\|s\|\|s^{-1}\|$.\par Hence, we get $\Sim_A(H)\leq \Sim_{\B(\h)}(H)$. The reverse inequality $\Sim_{\B(\h)}(H)\leq \Sim_A(H)$ is trivial, and it is easy to verify that $\Sim_A(H)=\sup_{n\in \N}\Sim_{\B(\h_n)}(H_n)$. \end{proof} The following theorem has also been proven algebraically in \cite{pisier4}. Here, we give a more geometric and intuitive proof. \begin{thm}\label{unitconst} Let $\Gamma$ be a locally compact unitarizable group (i.e. every strongly continuous uniformly bounded representation is conjugate to a unitary representation).
Then, there are constants $K\geq 1$ and $\alpha>0$ such that for every strongly continuous uniformly bounded representation $\pi:\Gamma\to \B(\h)$ we have \begin{align*}\dist(\id,P^{\pi(\Gamma)})\leq \log(K) + \frac{\alpha}{2}D_{\pi(\Gamma)}(\id)\end{align*} or equivalently \begin{align*}\Sim(\pi(\Gamma))\leq K|\pi(\Gamma)|^\alpha.\end{align*} \end{thm} \begin{proof} Assume for contradiction that the conclusion does not hold. Then, there are strongly continuous uniformly bounded representations $\pi_n:\Gamma\to \B(\h_n)$ with similarity number $\Sim(\pi_n(\Gamma))>n|\pi_n(\Gamma)|^n>n$ for all $n\in\N$.\par Now for $n\in \N$, if $|\pi_n(\Gamma)|\leq 2$, we set $\rho_n=\pi_n$. Otherwise, by Theorem \ref{geomint} we know that there is some positive invertible $s_n\in \B(\h_n)$ and a parameter $t_n\in [0,1]$ such that $|\Ad_{s_n^{-t_n}}(\pi_n(\Gamma))|\leq|\pi_n(\Gamma)|^{1-t_n}=2$ and $\Sim(\Ad_{s_n^{-t_n}}(\pi_n(\Gamma)))=\Sim(\pi_n(\Gamma))^{1-t_n}$. Then \begin{align*}\Sim(\Ad_{s_n^{-t_n}}(\pi_n(\Gamma)))=\Sim(\pi_n(\Gamma))^{1-t_n}>n^{1-t_n}|\pi_n(\Gamma)|^{(1-t_n)n}=n^{1-t_n}2^n>n.\end{align*} We conclude that $\rho_n=\Ad_{s_n^{-t_n}}\circ\pi_n:\Gamma \to \B(\h_n)$ is a strongly continuous uniformly bounded representation with $\Sim(\rho_n(\Gamma))>n$ and $|\rho_n(\Gamma)|\leq 2$.\par Now, $\rho=\oplus_{n\in \N}\rho_n:\Gamma \to \oplus_{n\in \N} \B(\h_n)\subseteq \B(\oplus_{n\in \N} \h_n)$ is a strongly continuous representation with $|\rho(\Gamma)|\leq 2$ and (by Proposition \ref{diagsim}) $\Sim_{\B(\oplus_{n\in \N}\h_n)}(\rho(\Gamma))\geq\Sim_{\B(\h_n)}(\rho_n(\Gamma))>n$ for any $n\in\N$. But this contradicts the unitarizability of $\Gamma$. \end{proof} \begin{defn} We call a locally compact unitarizable group $\Gamma$ \textit{\textbf{unitarizable with constants}} $(K,\alpha)$, where $K$ and $\alpha$ are the minimal constants such that the previous theorem holds.
\end{defn} Now, we provide bounds on the constants $(K,\alpha)$ for extensions of unitarizable groups by amenable groups, using metric properties of the canonically associated action: \begin{thm} Let $1\to \Sigma \to \Gamma \to \Lambda \to 1$ be an extension of locally compact groups such that $\Lambda$ is amenable and $\Sigma$ is unitarizable with constants $(K,\alpha)$. Then $\Gamma$ is unitarizable with constants $(K^3,3\alpha +2)$. \end{thm} \begin{proof} Let $\pi:\Gamma\to G$ be a strongly continuous uniformly bounded representation. Since $\Sigma$ is a normal subgroup of $\Gamma$, the set $P^{\pi(\Sigma)}$ is $\pi(\Gamma)$-invariant and the action \begin{align*}\Gamma\xrightarrow{\pi} G\to \Isom(P^{\pi(\Sigma)})\end{align*} factors through an action $\Lambda\simeq\Gamma/\Sigma\xrightarrow{\overline{\pi}} G\to \Isom(P^{\pi(\Sigma)})$. By Lemma \ref{minunit} there is an $a\in P^{\pi(\Sigma)}$ minimizing the distance to $\id$, i.e. an $a$ such that $d(\id,a)=\dist(\id,P^{\pi(\Sigma)})$. Then, by an argument as in Proposition \ref{distamenable}, we get a fixed point $b\in P^{\pi(\Gamma)}$ and a bound \begin{align*}d(a,b)\leq \diam(\oo_{\overline{\pi}(\Lambda)}(a))=\diam(\oo_{\pi(\Gamma)}(a))=D_{\pi(\Gamma)}(a).\end{align*} Since the orbit diameter function is $2$-Lipschitz (see Lemma \ref{lemdh}), $\Sigma$ is unitarizable with constants $(K,\alpha)$ and $\pi(\Sigma)\subseteq \pi(\Gamma)$, we obtain \begin{align*} \diam(\oo_{\pi(\Gamma)}(a))&\leq2d(\id,a)+\diam(\oo_{\pi(\Gamma)}(\id)) \\ &=2\dist(\id,P^{\pi(\Sigma)})+\diam(\oo_{\pi(\Gamma)}(\id)) \\ &\leq2(\log(K)+\frac{\alpha}{2}\diam(\oo_{\pi(\Sigma)}(\id)))+\diam(\oo_{\pi(\Gamma)}(\id)) \\ &\leq2(\log(K)+\frac{\alpha}{2}\diam(\oo_{\pi(\Gamma)}(\id)))+\diam(\oo_{\pi(\Gamma)}(\id)).
\end{align*} Therefore \begin{align*} \dist(\id,P^{\pi(\Gamma)})&\leq d(\id,b)\leq d(\id,a)+d(a,b)\\ &\leq\log(K)+\frac{\alpha}{2}\diam(\oo_{\pi(\Gamma)}(\id))\\ &+2(\log(K)+\frac{\alpha}{2}\diam(\oo_{\pi(\Gamma)}(\id)))+\diam(\oo_{\pi(\Gamma)}(\id))\\ &=3\log(K)+\frac{3\alpha + 2}{2}\diam(\oo_{\pi(\Gamma)}(\id)), \end{align*} which proves the claimed constants for the group $\Gamma$. \end{proof} \subsection{Algebras with trace}\label{algtrace} In this subsection, we consider the case of particular subalgebras $A\subseteq \B(\h)$ of $\B(\h)$ for which the family of geodesics as defined in (\ref{geodesic}) has a stronger convexity property: there will be metrics having the same geodesics but defining a CAT(0)-structure on the corresponding subcone $P\cap A$ of positive invertible elements in $A$.\par One instance where this happens is the algebra of Hilbert-Schmidt perturbations of scalar multiples of the identity operator $A=\B_2(\h)+\C\id \subseteq \B(\h)$. Recall that $\B_2(\h)$ is a Banach algebra consisting of all operators such that the \textit{Hilbert-Schmidt norm} defined by $\|a\|_{HS}=\tr(a^*a)^{\frac12}$ is finite. Moreover, $\|a\|\leq \|a\|_{HS}$ for $a\in \B_2(\h)$.\par There is a natural Hilbert space structure on $A$, such that the scalar operators are orthogonal to Hilbert-Schmidt operators: $\langle a+ \alpha\id,b+\beta\id\rangle_2=\tr(ab^*)+\alpha\Bar{\beta}.$ The geometry of this space was studied in \cite{larotonda}. Note that $A=\B_2(\h)+\C \id$ is complete with the norm corresponding to the inner product, since the Hilbert-Schmidt inner product induces a complete norm on the ideal of Hilbert-Schmidt operators. The real space of $\B_2(\h)+\C \id$, which is $\B_2(\h)_s+\R \id$, inherits the structure of a real Banach space, and with the same inner product it becomes a real Hilbert space.\par Inside $\B_2(\h)_s+\R \id$, we consider the open subset $P$ of positive invertible operators.
For $a\in P$, one can identify the tangent space $T_aP$ of $P$ at $a$ with $\B_2(\h)_s+\R \id$ and endow this manifold with a real Riemannian metric by means of the following formula: $\langle X,Y\rangle_a=\langle a^{-1}X,a^{-1}Y \rangle_2$.\par The unique geodesic $\gamma_{a,b}:[0,1]\to P$ joining $a$ and $b$ is given as in (\ref{geodesic}) by $\gamma_{a,b}(t)=a^{\frac12}(a^{-\frac12}ba^{-\frac12})^ta^{\frac12}$ and realizes the distance, i.e. $d(a,b)=\mathrm{Length}(\gamma_{a,b})=\|\log(a^{-\frac12}ba^{-\frac12})\|_2$ (see Theorem 3.8 and Remark 3.9 in \cite{larotonda}).\par With this metric, $P$ is a complete metric space, and if $g$ is an invertible operator in $\B_2(\h)+\C \id$ then $I_g:P\to P$ is an isometry, as shown in Lemma 2.5 in \cite{larotonda}.\par By Lemma 3.11 in \cite{larotonda}, $P$ is a complete $CAT(0)$ space. We next use the Bruhat-Tits fixed point theorem (see \cite{bruhattits}) to prove that a group $H$ which is close, in a suitable sense, to the unitary group is unitarizable. \begin{thm}\label{hsunit} If $H$ is a group of invertible elements in $\B_2(\h)+\C \id$ such that we have $\sup_{h\in H}\|hh^*-\id\|_2=C < \infty$ then $H$ is unitarizable. \end{thm} \begin{proof} First of all, $\sup_{h\in H}\|hh^*-\id\|_2=C < \infty$ implies \begin{align*}\|hh^*\|-\|\id\|\leq \|hh^*-\id\|\leq \|hh^*-\id\|_2\leq C~\forall h\in H,\end{align*} so that $hh^*\leq (C+1)\id$; applying the same estimate to $h^{-1}\in H$ and noting that $h^{-1}(h^{-1})^*=(h^*h)^{-1}$ has the same spectrum as $(hh^*)^{-1}$, we also get $(C+1)^{-1}\id \leq hh^*$.\par We look at the induced action on $P$ and want to prove the finiteness of $\diam(\oo_H(\id))=\sup_{h\in H}\|\log(hh^*)\|_2$ to derive the existence of a fixed point.\par For $h\in H$, since $hh^*-\id$ is compact, $hh^*$ is diagonalizable and has eigenvalues $(s_j)_j\subseteq [(C+1)^{-1},(C+1)]$, hence $\|hh^*-\id\|_2^2=\sum_j(s_j-1)^2\leq C^2.$ Therefore, $\log(hh^*)$ is diagonalizable and has eigenvalues $(\log(s_j))_j$.\par Now, let $D$ be a real number such that $|\log(x)|\leq D|x-1|$ for all $x\in [(C+1)^{-1},(C+1)]$.
Then \begin{align*}\|\log(hh^*)\|_2^2=\sum_j\log(s_j)^2\leq \sum_j D^2(s_j-1)^2 \leq D^2C^2.\end{align*} Since the last inequality holds for all $h\in H$, we see that $\diam(\oo_H(\id))\leq DC$. Since $\oo_H(\id)$ is bounded, by \cite{lang}*{Chapter XI, Lemma 3.1 and Theorem 3.2} the circumcenter $a\in P$ of this set is a fixed point for the action of $H$. \par Finally, by Proposition \ref{fixed} $s=a^{\frac12}$ is a unitarizer of $H$. \end{proof} \begin{cor} If $H$ is a group of invertible elements with $\sup_{h\in H}\|h-\id\|_2=M < \infty$, then $H$ is unitarizable. \end{cor} \begin{proof} That $H\subseteq \B_2(\h)+\C \id$ is clear. Since $\|h\|-\|\id\|\leq \|h-\id\|\leq \|h-\id\|_2\leq M$ for all $h\in H$ we see that $\|h\|\leq M+1$ for all $h\in H$. Since \begin{align*}hh^*-\id=hh^*-h+h-\id=h(h^*-\id)+h-\id \end{align*} for all $h\in H$ it follows that \begin{align*}\|hh^*-\id\|_2\leq \|h\|\|h^*-\id\|_2+\|h-\id\|_2\leq (M+1)M + M\end{align*} for all $h\in H$. By Theorem \ref{hsunit}, $H$ is unitarizable. \end{proof} \begin{rem} An extension of the previous propositions to $p$-Schatten perturbations of scalar operators ($1<p<\infty$) is possible, as in that case we are dealing with uniformly convex non-positively curved Busemann spaces, and the existence of circumcenters of bounded sets implies that the Bruhat-Tits fixed point theorem still holds. We should also remark that the aforementioned fixed point theorem holds for actions of semigroups. \end{rem} We want to further analyze the orbit structure of the action $I_H$. Using Proposition \ref{tranorb} we can assume that $H$ is a group of unitaries. Let $\M=(\B_2(\h)_s+\R \id)\cap H'$ denote the set of elements in $\B_2(\h)_s+\R\id$ commuting with the $H$-action. Then \begin{align*}\B_2(\h)_s+\R \id=\M \oplus \M^{\perp}\end{align*} with respect to the inner product $\langle x,y\rangle_2$.
Note that, since $\R\id\subseteq H'$ and hence $\R\id\subseteq \M$, it follows that $\M^{\perp}\subseteq \B_2(\h)$.\par The next result is Proposition 5.12 in \cite{larotonda}. It shows that the orthogonal splitting of $\B_2(\h)_s+\R \id$ also makes sense in the metric language. \begin{prop}\label{minprop} If $\M\subseteq \B_2(\h)_s+\R \id$ is a Lie triple system, then the map \begin{align*}\M^{\perp}\times \M \xrightarrow{\sim} P,\qquad (X,Y)\mapsto e^Ye^Xe^Y\end{align*} is a diffeomorphism. If $a=e^Ye^Xe^Y$ is the decomposition of an $a\in P$, then the closest point in $\exp(\M)$ to $a$ is $e^{2Y}$, and this point is unique with this property. \end{prop} The following gives a metric description of the orbit structure on $P$: \begin{prop}\label{inv} The sets $e^Ye^{\M^{\perp}}e^Y$ are invariant for the action $I_H$. The circumcenter of any orbit in $e^Ye^{\M^{\perp}}e^Y$ is $e^{2Y}$. \end{prop} \begin{proof} We have $\M^{\perp}=\{X\in \B_2(\h)_s+\R \id: \langle X,Y\rangle_2=0 \mbox{ for all } Y\in \M\}$. Note that $\M^{\perp}$ is $\Ad_H$-invariant because if $Y+\alpha\id\in \M\subseteq \B_2(\h)_s+\R\id$, $X\in \M^{\perp}\subseteq \B_2(\h)_s$ and $h\in H$, then \begin{align*} \langle hXh^{-1},Y+\alpha\id \rangle_2 &=\langle hXh^{-1}+0\id,Y+\alpha\id \rangle_2=\tr((hXh^{-1})Y^*)+0\bar{\alpha}\\ &=\tr(hXY^*h^{-1})= \tr(h^{-1}hXY^*)=\tr(XY^*)= \langle X,Y+\alpha\id\rangle_2=0, \end{align*} where we used that $Y\in H'$ and the cyclicity property of the Hilbert-Schmidt trace.\par Because $\M=(\B_2(\h)_s+\R\id)\cap H'$ is the self-adjoint part of the algebra $(\B_2(\h)+\C\id)\cap H'\subseteq A$, it is a Lie triple system, see Definition \ref{lietrip} and the proof of Proposition \ref{totalgeod}. Therefore we can apply the decomposition in Proposition \ref{minprop} and write $a\in P$ as $a=e^Ye^Xe^Y$. Then, \begin{align*}I_h(a)=hah^{-1}=he^Ye^Xe^Yh^{-1}=e^{\Ad_h(Y)}e^{\Ad_h(X)}e^{\Ad_h(Y)}=e^Ye^{\Ad_h(X)}e^Y\end{align*} so that the sets $e^Ye^{\M^\perp}e^Y$ are invariant for the action $I_H$.
An orbit in $e^Ye^{\M^\perp}e^Y$ is of the form $\{e^Ye^{\Ad_h(X)}e^Y:h\in H\}$ for some $X\in \M^{\perp}$, and by \cite{lang}*{Chapter XI, Lemma 3.1 and Theorem 3.2} its circumcenter is a fixed point of the action. By Proposition \ref{minprop}, the fixed point closest to each element of the orbit is $e^{2Y}$, so the circumcenter is $e^{2Y}$. \end{proof} Another case of a CAT(0)-space structure on positive operators is given by finite von Neumann algebras $A$, where the metric on $P$ is defined through the trace $\tau:A\to \C$. This case was fully treated in \cite{miglio2}. \begin{thm} If $H$ is a group of invertible operators in a finite von Neumann algebra $A$ such that $\sup_{h\in H}\|h\| = |H| < \infty$ then there is an $s \in \{a\in A:|H|^{-1}\id\leq a \leq |H|\id\}$ such that $s^{-1}Hs$ is a group of unitary operators in $A$. \end{thm} \begin{defn} If $B\subseteq A$ is a unital inclusion of $C^*$-algebras then a map $E:A\to B$ such that $E(b)=b$ for all $b\in B$ and $\|E\|=1$ is called a \textit{\textbf{conditional expectation}}. By a result of Tomiyama \cite{tomi}, these conditions imply that $E(b_1ab_2)=b_1E(a)b_2$ and $E(a^*)=E(a)^*$ for $b_1,b_2\in B$ and $a\in A$. \end{defn} Assume that the group $H$ is a group of unitaries in the finite von Neumann algebra $A$. By a theorem of Takesaki \cite{takesaki}, there is a conditional expectation $E:A\to H' \cap A$ compatible with the trace, i.e. $\tau(E(x))=\tau(x)$ for $x\in A$. This conditional expectation provides an orthogonal splitting $A=(A\cap H')\oplus_{\tau} \Ker(E)$ with respect to the inner product $\langle x,y\rangle= \tau(y^*x)$. Theorem 5.4 and Corollary 5.5 in \cite{andruchowlarotonda} in this case imply the following result.
\begin{prop}\label{minvn} Assuming the notation of the previous paragraph, let \begin{align*}(A_s\cap \Ker(E))\times (A_s\cap H') \xrightarrow{\sim} P,\qquad (X,Y)\mapsto e^Ye^Xe^Y\end{align*} be the bijection given by the Porta-Recht splitting (\cite{portarecht}).\par Then, for $a=e^Ye^Xe^Y$, the closest point in $\exp(A_s\cap H')$ to $a$ is $e^{2Y}$, and this point is unique with this property. \end{prop} The invariance of normal leaves as in Proposition \ref{inv} also holds in this case, and the proof uses Proposition \ref{minvn} instead of Proposition \ref{minprop}. \section{Minimality properties of canonical unitarizers}\label{acsproblem} In \cite{andcorstoj1} and \cite{andcorstoj2} Andruchow, Corach and Stojanoff studied the differential geometry of spaces of representations of some classes of $C^*$-algebras and von Neumann algebras. Using the Porta-Recht splitting \cite{portarecht} and given a certain $*$-representation $\pi_0:A\to B$, for every representation $\pi_1:A\to B$ in the $G_B$-conjugacy orbit of $\pi_0$ they constructed a canonical positive invertible $e^{-X_0}$ such that $\Ad_{e^{-X_0}}\circ \pi_1$ is a $*$-representation. In Remark 5.9 in \cite{andcorstoj1} and in \cite{andcorstoj2}*{Section 1.5} it is asked whether $e^{-X_0}$ satisfies $\|e^{X_0}\|\|e^{-X_0}\|=\|\pi_1\|_{c.b.}$.\par We give a partial answer and a geometric insight into this question using results from the previous chapters.\par We briefly recall some definitions and constructions from \cite{andcorstoj1}. Let $A$ be a unital $C^*$-algebra and $B\subseteq \B(\h)$ be a $C^{*}$-algebra of bounded linear operators on a separable Hilbert space $\h$. Denote by $R(A,B)$ the set of bounded unital homomorphisms from $A$ to $B$ and by $R_0(A,B)$ the subset of $*$-representations. The group $G_B$ of invertible operators in $B$ acts on $R(A,B)$ by inner automorphisms by the formula \begin{align*}(g\cdot \pi) (a)=(\Ad_g\circ \pi) (a)=g\pi(a) g^{-1}\end{align*} for $a\in A$ and $g\in G_B$.
The group of unitary operators $U_B$ acts on $R_0(A,B)$ in the same way. Thus $R(A,B)$ and $R_0(A,B)$ are homogeneous spaces.\par There is also an action of $U_B$ on conditional expectations defined on $B$, given by $u\cdot E=\Ad_u\circ E \circ \Ad_{u^{-1}}$.\par Now, for a given $\pi_0 \in R_0(A,B)$ and a fixed conditional expectation $E_{\pi_0}:B\to \pi_0(A)'\cap B$ one obtains, by the splitting theorem of Porta and Recht (\cite{portarecht}), that for every $\pi$ in the $G_B$-orbit of $\pi_0$ in $R(A,B)$ there is a natural way of choosing a unique positive operator $s\in G_B$ such that $\Ad_s\circ \pi$ is a $*$-representation: if $g\in G_B$ is such that $\pi_1=\Ad_g\circ \pi_0$, the Porta-Recht splitting asserts that there are $u\in U_B$, ${Y_0}={Y_0}^*\in \pi_0(A)'\cap B$ and ${Z_0}={Z_0}^*\in \Ker(E_{\pi_0})$ such that $g=ue^{Z_0}e^{Y_0}$. Then for $a\in A$ \begin{align*} \pi_1(a)&=ue^{Z_0}e^{Y_0}\pi_0(a)e^{-{Y_0}}e^{-{Z_0}}u^*=ue^{Z_0}\pi_0(a)e^{-{Z_0}}u^* \\ &=e^{u{Z_0}u^*}u\pi_0(a)u^*e^{-u{Z_0}u^*}=e^{\Ad_u {Z_0}}(u\cdot \pi_0)(a)e^{-\Ad_u {Z_0}}. \end{align*} If we define $\rho=u\cdot \pi_0=\Ad_u\circ \pi_0$, $X_0=\Ad_u({Z_0})$ and $E_{\rho}=u\cdot E_{\pi_0}= \Ad_u\circ E_{\pi_0}\circ \Ad_{u^{-1}}$, then $\Ad_{e^{-X_0}}\circ \pi_1=\rho\in R_0(A,B)$ and $X_0\in \Ker(E_{\rho})$. This is the $X_0$ mentioned in the question by Andruchow, Corach and Stojanoff in the introduction to this chapter. For the next theorem, which partially answers the question from the beginning of this chapter, we need the following result by Conde and Larotonda from \cite{condelarotonda2}*{Corollary 4.39}, which we state here in the case of operator algebras and conditional expectations. \begin{thm}\label{minexp} Let $A$ be a $C^*$-algebra and $B$ a $C^*$-subalgebra of $A$.
Let $E:A\to B$ be a conditional expectation and let \begin{align*}(A_s\cap \Ker(E))\times B_s \xrightarrow{\sim} P_A,\qquad (X,Y)\mapsto e^Ye^Xe^Y \\ U_A\times (A_s\cap \Ker(E))\times B_s \xrightarrow{\sim}G_A,\qquad (u,X,Y)\mapsto ue^Xe^Y \end{align*} be the Porta-Recht splitting of $P_A$ and of $G_A$.\par Then $\|(\I-E)|_{A_s}\|=1$ if and only if for every $X\in A_s\cap \Ker(E)$ and $Y\in B_s$ a closest point in $\exp(B_s)$ to $e^Ye^Xe^Y$ is $e^{2Y}$, i.e. \begin{align*}\dist(\exp(B_s),e^Ye^Xe^Y)=d(e^{2Y},e^Ye^Xe^Y)=\|\log(e^X)\|=\|X\|.\end{align*} \end{thm} \begin{rem}\label{normleaves} The Porta-Recht splitting provides a global tubular neighborhood to the submanifold $\exp(B_s)=P\cap B_s=P_B$ and the normal leaves to this submanifold are the sets $e^Y\exp(A_s\cap \Ker(E))e^Y$ for $Y\in B_s$. \end{rem} \begin{thm}\label{thmacs} Assuming the notation and construction of canonical unitarizers of the previous paragraph and assuming that for every unitarizable group $H<G_B\subseteq \B(\h)$ \begin{align*}\label{propsimsubalg} \Sim_B(H)=\Sim_{\B(\h)}(H), \end{align*} then \begin{align*}\|\pi_1\|_{c.b.}=\exp\left(\dist\left(e^{-2{X_0}},P^{\rho(U_A)}\right)\right)=\exp\left(\dist\left(e^{-2{X_0}},\exp(\rho(U_A)'\cap \B(\h)_s)\right)\right).\end{align*} If $\|\I-E_{\pi_0}\|=1$ the equality $\|e^{X_0}\|\|e^{-{X_0}}\|=\|\pi_1\|_{c.b.}$ holds. 
\end{thm} \begin{proof} Note that \begin{align*} \|\pi_1\|_{c.b.}&=\Sim_{\B(\h)}({\pi_1(U_A)}) &\mbox{ by Proposition \ref{simcb}}\\ &=\Sim_B({\pi_1(U_A)}) &\mbox{ by assumption }\\ &=\exp(\dist(\id,P_B^{\pi_1(U_A)}))&\mbox{ by Theorem \ref{distdiam} } \\ &=\exp(\dist(\id,P_B^{e^{X_0}\rho(U_A)e^{-{X_0}}})) &\mbox{ since $\Ad_{e^{-X_0}}\circ \pi_1=\rho$ }\\ &=\exp(\dist(\id,e^{X_0}P_B^{\rho(U_A)}e^{X_0})) &\mbox{ by Proposition \ref{tranorb} } \\ &=\exp(\dist(\id,I_{e^{X_0}}(P_B^{\rho(U_A)})))\\&=\exp(\dist(I_{e^{-X_0}}(\id),P_B^{\rho(U_A)})) \\ &=\exp(\dist(e^{-2{X_0}},P_B^{\rho(U_A)})) \\ &=\exp(\dist(e^{-2{X_0}},\exp(\rho(U_A)'\cap B_s))) &\mbox{ by Remark \ref{fixunit} }. \end{align*} This proves the first equality. Further, \begin{align*} E_{\rho}= \Ad_u\circ E_{\pi_0}\circ \Ad_{u^{-1}}: B\to \Ad_u(\pi_0(U_A)'\cap B)=\Ad_u(\pi_0(U_A))'\cap B=\rho(U_A)'\cap B. \end{align*} If $\|\I-E_{\pi_0}\|=1$, then because $\|\Ad_u\|=1$ \begin{align*} \|\I-E_{\rho}\|&=\|\I-\Ad_u\circ E_{\pi_0}\circ \Ad_{u^{-1}}\|=\|\Ad_u\circ (\I-E_{\pi_0})\circ \Ad_{u^{-1}}\|\\ &\leq \|\Ad_u\|\|I-E_{\pi_0}\|\|\Ad_{u^{-1}}\|=1, \end{align*} so that $\|\I-E_{\rho}\|=1$. Therefore by Theorem \ref{minexp}, \begin{align*}\dist(\exp(\rho(U_A)'\cap B_s),e^X)=d(\id,e^X)=\|X\|\end{align*} for all $X\in \Ker(E_{\rho})$. Hence, since $X_0\in \Ker(E_{\rho})$ \begin{align*}\|\pi_1\|_{c.b.}=\exp(\dist(e^{-2{X_0}},\exp(\rho(U_A)'\cap B_s)))=e^{\|2X_0\|}.\end{align*} We get $\|e^{X_0}\|\|e^{-{X_0}}\|\leq e^{\|{X_0}\|}e^{\|{-X_0}\|}=e^{\|2X_0\|}=\|\pi_1\|_{c.b.}$. Since $\Ad_{e^{-X_0}}\circ \pi_1$ is a $*$-homomorphism we also get $\|\pi_1\|_{c.b.}=\Sim_{\B(\h)}({\pi_1(U_A)})\leq \|e^{X_0}\|\|e^{-{X_0}}\|$. 
\end{proof} Note in the proof of the last theorem that \begin{align*}\|\pi_1\|_{c.b.}= \Sim_B({\pi_1(U_A)})= \exp\left(\dist\left(\id,P_B^{\pi_1(U_A)}\right)\right)\end{align*} and that a positive invertible $s\in B$ such that $\Ad_{s^{-1}}\circ \pi_1$ is a $*$-representation corresponds to an $s^2\in P_B^{\pi_1(U_A)}$. After applying the isometric translation $I_{e^{-X_0}}$ to the point $\id$ and the set $P_B^{\pi_1(U_A)}$ we get the point $e^{-2{X_0}}$ and the set $P_B^{\rho(U_A)}=\exp(\rho(U_A)'\cap B_s)$. The point $e^{-2{X_0}}$ is in the leaf $\exp(\Ker(E_{\rho})\cap B_s)$ normal to the manifold $\exp(\rho(U_A)'\cap B_s)$, which contains $\id$. Therefore $\id$ is the result of projecting $e^{-2{X_0}}$ to the manifold $\exp(\rho(U_A)'\cap B_s)$, and if we apply $I_{e^{X_0}}$ we see that $e^{2X_0}$ is the result of projecting $\id$ to $P_B^{\pi_1(U_A)}$.\par Hence, we have translated the question by Andruchow, Corach and Stojanoff into the following: Under what conditions does the point $e^{2X_0}$ minimize the distance to $\id$ among the points in $P_B^{\pi_1(U_A)}$? \begin{rem} Note that if $B=\oplus_{n\in \N}\B(\h_n)\subseteq \B(\h)=\B(\oplus_{n\in \N}\h_n)$ then by Proposition \ref{diagsim}, $\Sim_B(H)=\Sim_{\B(\h)}(H)$, so that $B\subseteq \B(\h)$ satisfies the condition in Theorem \ref{thmacs}. \end{rem} We next give an example of a conditional expectation $E$ satisfying $\|\I-E\|=1$. \begin{ex} Consider the Hilbert space direct sum $\h=\oplus_{n\in \N}\h_n$, the algebra $\B(\h)$ and its diagonal subalgebra $A=\oplus_{n\in \N}\B(\h_n)$. For each $n\in \N$ let $p_n$ be an orthogonal projection in $\B(\h_n)$. Let $A_n=\{X\in \B(\h_n):p_nX=Xp_n\}$ and $E_n:\B(\h_n)\to A_n$, $X\mapsto p_nXp_n+(\id-p_n)X(\id-p_n)$ be a conditional expectation as defined above.\par Then, $E=\oplus_{n\in \N}E_n:\oplus_{n\in \N}\B(\h_n)\to \oplus_{n\in \N}A_n$ is a conditional expectation such that $\|\I-E\|=1$.
This follows from the fact that each conditional expectation $E_n$ satisfies $\|\I_n-E_n\|=1$. To see this note that $q_n=2p_n-\id$ is a self-adjoint unitary in $\B(\h_n)$ so that for every $X\in \B(\h_n)$ \begin{align*}\|(\I_n-E_n)(X)\|=\|X-E_n(X)\|=\|X-\tfrac12(X+q_nXq_n)\|=\|\tfrac12(X-q_nXq_n)\|\leq \|X\|.\end{align*} \end{ex} \begin{bibdiv} \begin{biblist} \bib{andcorstoj1}{article}{author={E. Andruchow},author={G. Corach},author={D. Stojanoff}, title={A geometric characterization of nuclearity and injectivity},journal={J. Funct. Anal.},volume={133},date={1995},pages={no. 2, 474--494}} \bib{andcorstoj2}{article}{ author={E. Andruchow}, author={G. Corach}, author={D. Stojanoff}, title={The homogeneous space of representations of a nuclear $C\sp *$-algebra}, journal={Harmonic analysis and operator theory (Caracas, 1994), Contemp. Math.}, number={189}, publisher={Amer. Math. Soc., Providence, RI}, year={1995}, pages={37--53} } \bib{acms}{article}{author={E. Andruchow}, author={G. Corach},author={M. Milman}, author={D. Stojanoff}, title={Geodesics and interpolation}, journal={Rev. Un. Mat. Argentina}, volume={40},date={1997}, pages={no. 3-4, 83--91}} \bib{andruchowlarotonda}{article}{author={E. Andruchow}, author={G. Larotonda}, title={Nonpositively Curved Metric in the Positive Cone of a Finite von Neumann Algebra}, journal={J. London Math. Soc.}, volume={74}, date={2006}, pages={no. 1, 205-218}} \bib{bruhattits}{article}{author={F. Bruhat}, author={J. Tits}, title={Groupes r\'eductifs sur un corps local, I. Donn\'ees radicielles valu\'ees}, journal={Inst. Hautes \'Etudes Sci. Publ. Math.}, volume={41}, date={1972}, pages={5-252}} \bib{condelarotonda2}{article}{author={C. Conde}, author={G. Larotonda}, title={Manifolds of semi-negative curvature}, journal={Proc. Lond. Math. Soc.}, number={3}, date={2010}, pages={no. 3, 670--704}} \bib{correcht}{article}{ author={G. Corach}, author={H. Porta}, author={L. 
Recht}, title={Convexity of the geodesic distance on spaces of positive operators}, journal={Illinois journal of mathematics}, volume={38}, date={1994}, pages={87-94}, } \bib{cpr}{article}{author={G. Corach},author={H. Porta},author={L. Recht},title={A geometric interpretation of Segal's inequality $\Vert e\sp {X+Y}\Vert \leq\Vert e\sp {X/2}e\sp Ye\sp {X/2}\Vert $},journal={ Proc. Amer. Math. Soc.},volume={115},date={1992},pages={no. 1, 229--231}} \bib{cpr3}{article}{author={G. Corach},author={H. Porta},author={L. Recht},title={Geodesics and operator means in the space of positive operators}, journal={Internat. J. Math.}, number={4}, date={1993}, pages={no. 2, 193--202}} \bib{davidson}{book}{author={K. R. Davidson}, title={$C\sp *$-algebras by example.}, series={Fields Institute Monographs}, publisher={American Mathematical Society, Providence, RI}, date={1996}} \bib{day}{article}{ AUTHOR = {Day, M.}, TITLE = {Means for the bounded functions and ergodicity of the bounded representations of semi-groups}, JOURNAL = {Trans. Amer. Math. Soc.}, VOLUME = {69}, YEAR = {1950}, PAGES = {276--291}, } \bib{dix}{article}{ author={Dixmier, J.}, title={Les moyennes invariants dans les semi-groups et leurs applications}, journal={Acta Sci. Math. Szeged}, date={1950}, number={12}, pages={213-227},} \bib{ehrenp}{article}{ AUTHOR = {Ehrenpreis, L.}, AUTHOR = {Mautner, F.}, TITLE = {Uniformly bounded representations of groups}, JOURNAL = {Proc. Nat. Acad. Sci. U. S. A.}, VOLUME = {41}, YEAR = {1955}, PAGES = {231--233}} \bib{epm}{article}{ AUTHOR = {Epstein, I.},author={Monod, N.}, TITLE = {Nonunitarizable representations and random forests}, JOURNAL = {Int. Math. Res. Not. IMRN}, YEAR = {2009}, NUMBER = {22}, PAGES = {4336--4353}, } \bib{fack}{article}{author={T. Fack}, title={A Dixmier's theorem for finite type representations of amenable semigroups},journal={Math. Scand.},number={93}, date={2003}, pages={no. 1, 136--160}} \bib{haag}{article}{author={U. 
Haagerup}, title={Solution of the similarity problem for cyclic representations of $C\sp{\ast} $-algebras}, journal={Ann. of Math.}, number={118}, date={1983}, pages={no. 2, 215--240}} \bib{lang}{book}{author={S. Lang}, title={Fundamentals of Differential Geometry}, series={Graduate Texts in Mathematics}, publisher={Springer-Verlag, New York}, date={1999}} \bib{larotonda}{article}{author={G. Larotonda}, title={Non-positive curvature: a geometrical approach to Hilbert-Schmidt operators}, journal={Differential Geom. Appl.}, number={25}, date={2007}, pages={no. 6, 679--700}} \bib{miglio2}{article}{author={M. Miglioli}, title={Unitarization of uniformly bounded subgroups in finite von Neumann algebras}, journal={Bull. London Math. Soc.}, date={2014}, number={46}, pages={1264-1266}} \bib{naka}{article}{ AUTHOR = {Nakamura, M.},author={Takeda, Z.}, TITLE = {Group representation and {B}anach limit}, JOURNAL = {T\^ohoku Math. J. (2)}, VOLUME = {3}, YEAR = {1951}, PAGES = {132--135}, } \bib{neeb}{article}{author={K.-H. Neeb}, title={A Cartan-Hadamard Theorem for Banach-Finsler manifolds}, journal={Geom. Dedicata}, number={95}, date={2002}, pages={115-156}} \bib{osawa}{article}{author={N. Ozawa}, title={An invitation to the similarity problems}, note={available from \href{http://www.kurims.kyoto-u.ac.jp/~narutaka/notes/similarity.pdf}{http://www.kurims.kyoto-u.ac.jp/~narutaka/notes/similarity.pdf}}} \bib{paulsen}{book}{author={V. Paulsen}, title={Completely bounded maps and operator algebras}, series={Cambridge Studies in Advanced Mathematics}, number={78}, publisher={Cambridge University Press, Cambridge}, date={2002}} \bib{pisier1}{article}{author={G. Pisier}, title={A similarity degree characterization of nuclear $C\sp *$-algebras}, journal={Publ. Res. Inst. Math. Sci.}, number={42}, date={2006}, pages={no. 3, 691--704}} \bib{pisier4}{book}{author={G. Pisier}, title={Similarity problems and completely bounded maps}, note={Second, expanded edition.
Includes the solution to "The Halmos problem''}, series={Lecture Notes in Mathematics}, number={1618}, publisher={Springer-Verlag, Berlin}, date={2001}} \bib{pisier3}{article}{author={G. Pisier}, title={The similarity degree of an operator algebra}, journal={Algebra i Analiz}, number={10}, date={1998}, pages={no. 1, 132--186}, note={translation in St. Petersburg Math. J. 10 (1999), no. 1, 103--146}} \bib{pisier2}{article}{author={G. Pisier}, title={The similarity degree of an operator algebra}, journal={II. Math. Z.}, number={234}, date={2000}, pages={no. 1, 53--81}} \bib{portarecht}{article}{author={H. Porta}, author={L. Recht}, title={Conditional expectations and operator decompositions}, journal={Ann. Global Anal. Geom.}, number={12}, date={1994}, pages={335--339}} \bib{pytls}{article}{ AUTHOR = {Pytlik, T.}, Author={Szwarc, R.}, TITLE = {An analytic family of uniformly bounded representations of free groups}, JOURNAL = {Acta Math.}, VOLUME = {157}, YEAR = {1986}, NUMBER = {3-4}, PAGES = {287--309}, } \bib{schlicht}{thesis}{ AUTHOR = {Schlicht, P.}, TITLE = {Amenable groups and a geometric view on unitarisability}, SCHOOL={Universit\"at Leipzig}, YEAR={2014}, NOTE={available online: \href{http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-132865}{http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-132865}} } \bib{takesaki}{article}{author={M. Takesaki}, title={Conditional expectations in von Neumann algebras}, journal={J. Functional Analysis}, number={9}, date={1972}, pages={306--321}} \bib{tomi}{article}{author={J. Tomiyama}, title={On the projection of norm one in $W^*$-algebras},journal={Proc. Japan Acad.}, number={33}, date={1957}, pages={608--612}} \end{biblist} \end{bibdiv} \noindent \end{document}
\begin{document} \title{Improved quantum hypergraph-product LDPC codes} \date\today \author{\IEEEauthorblockN{Alexey A.\ Kovalev\IEEEauthorrefmark{1} and Leonid P.\ Pryadko\IEEEauthorrefmark{2}} \IEEEauthorblockA{Department of Physics \& Astronomy, University of California, Riverside, California 92521} \IEEEauthorblockA{\IEEEauthorrefmark{1} Email:{[email protected]} \qquad \IEEEauthorrefmark{2} Email:{[email protected]} }} \maketitle \begin{abstract} We suggest several techniques to improve the toric codes and the finite-rate generalized toric codes (quantum hypergraph-product codes) recently introduced by Tillich and Z\'emor. For the usual toric codes, we introduce the rotated lattices specified by two integer-valued periodicity vectors. These codes include the checkerboard codes, and the family of minimal single-qubit-encoding toric codes with block length $n=t^2+(t+1)^2$ and distance $d=2t+1$, $t=1,2,\ldots$. We also suggest several related algebraic constructions which increase the rate of the existing hypergraph-product codes by up to four times. \end{abstract} \IEEEpeerreviewmaketitle \section{Introduction} Quantum error correction\cite{shor-error-correct,Knill-Laflamme-1997,Bennett-1996} made quantum computing (QC) theoretically possible. However, the high precision required for error correction \cite{Knill-error-bound, Dennis-Kitaev-Landahl-Preskill-2002, Steane-2003, Fowler-QEC-2004,Fowler-2005, Raussendorf-Harrington-2007}, combined with the large number of auxiliary qubits necessary to implement it, has so far inhibited any practical realization beyond proof-of-the-principle demonstrations\cite{chuang-2000,chuang-2001,Gulde-2003, Chiaverini-2004,Friedenauer-2008, Kim-2010}. For stabilizer codes, the error syndrome is obtained by measuring the generators of the stabilizer group.
The corresponding quantum measurements can be greatly simplified (and also done in parallel) in low-density parity-check (LDPC) codes, which are specially designed to have stabilizer generators of small weight. Among LDPC codes, the toric (and related surface) codes \cite{kitaev-anyons,Dennis-Kitaev-Landahl-Preskill-2002, Raussendorf-Harrington-2007,Bombin-2007} have the stabilizer generators of smallest weight, $w=4$, with the support on neighboring sites of a two-dimensional lattice. These codes have other nice properties which make them suitable for quantum computations with relatively high error threshold. Unfortunately, these code families have very low code rates that scale as the inverse square of the code distance. Recently, Tillich and Z\'emor proposed a finite-rate generalization of toric codes\cite{Tillich2009}. The construction relates a quantum code to a direct product of hypergraphs corresponding to two classical binary codes. Generally, the LDPC codes thus obtained have finite rates and distances that scale as the square root of the block length. Unfortunately, despite their finite asymptotic rates, the quantum codes obtained from the construction\cite{Tillich2009} have small rates at smaller block lengths. In this work, we present a construction aimed to improve the rates of both regular toric\cite{kitaev-anyons} and generalized toric codes\cite{Tillich2009}. For the toric codes, we introduce the rotated tori specified by two integer-valued periodicity vectors. Such codes include the checkerboard codes \cite{Bombin-2007} ($\pi/4$-rotation), and the family \cite{Kovalev-Dumer-Pryadko-2011} of minimal single-qubit-encoding toric codes with block length $n=t^2+(t+1)^2$ and distance $d=2t+1$, $t=1,2,\ldots$. For the generalized toric codes\cite{Tillich2009}, we suggest an algebraic construction equivalent to the $\pi/4$ rotation of the regular toric codes.
The resulting factor of up to four improvement of the code rate makes such codes competitive even at relatively small block sizes. \section{Definitions.} We consider binary quantum error correcting codes (QECCs) defined on the complex Hilbert space $\mathcal{H}_2^{\otimes n}$ where $\mathcal{H}_{2}$ is the complex Hilbert space of a single qubit $\alpha\left|0\right\rangle +\beta\left|1\right\rangle $ with $\alpha,\beta\in\mathbb{C}$ and $\left|\alpha\right|^{2}+\left|\beta\right|^{2}=1$. Any operator acting on such an $n$-qubit state can be represented as a combination of Pauli operators which form the Pauli group $\mathscr{P}_{n}$ of size $2^{2n+2}$ with the phase multiplier $i^{m}$: \begin{equation} \mathscr{P}_{n}=i^{m}\{I,X,Y,Z\}^{\otimes n},\; m=0,\ldots,3\:, \label{eq:PauliGroup} \end{equation} where $X$, $Y$, and $Z$ are the usual Pauli matrices and $I$ is the identity matrix. It is customary to map the Pauli operators, up to a phase, to two binary strings, $\mathbf{v},\mathbf{u}\in\{0,1\}^{\otimes n}$ \cite{Calderbank-1997}, \begin{equation} U\equiv i^{m'}X^{\mathbf{v}}Z^{\mathbf{u}}\: \rightarrow(\mathbf{v},\mathbf{u}), \label{eq:mapping} \end{equation} where $X^{\mathbf{v}}=X_{1}^{v_{1}}X_{2}^{v_{2}}\ldots X_{n}^{v_{n}}$ and $Z^{\mathbf{u}}=Z_{1}^{u_{1}}Z_{2}^{u_{2}}\ldots Z_{n}^{u_{n}}$. A product of two quantum operators corresponds to a sum ($\mathop{\rm mod} 2$) of the corresponding pairs $(\mathbf{v}_i,\mathbf{u}_i)$. An $[[n,k,d]]$ stabilizer code $\mathcal{Q}$ is a $2^k$-dimensional subspace of the Hilbert space $\mathcal{H}_2^{\otimes n}$ stabilized by an Abelian stabilizer group $\mathscr{S}=\left\langle G_{1},\ldots ,G_{n-k}\right\rangle $, $-{\mathbb 1}\not\in\mathscr{S}$ \cite{gottesman-thesis}. Explicitly, \begin{equation} \label{eq:stabilizer-code} \mathcal{Q}=\{\ket\psi: S\ket\psi=\ket\psi,\forall S\in \mathscr{S}\}. 
\end{equation} Each generator $G_i\in \mathscr{S}$ is mapped according to Eq.~(\ref{eq:mapping}) in order to obtain the binary check matrix $H=(A_{X}|A_{Z})$ in which each row corresponds to a generator, with rows of $A_{X}$ formed by $\mathbf{v}$ and rows of $A_{Z}$ formed by $\mathbf{u}$ vectors. For generality, we allow the matrix $H$ to also contain unimportant linearly dependent rows, added after the mapping has been done. The commutativity of stabilizer generators corresponds to the following condition on the binary matrices $A_{X}$ and $A_{Z}$: \begin{equation} A_{X}A_{Z}^{T}+A_{Z}A_{X}^{T}=0 \;(\mathop{\rm mod} 2).\label{eq:product} \end{equation} A narrower class of Calderbank-Shor-Steane (CSS) codes \cite{Calderbank-Shor-1996} contains those codes whose stabilizer generators can be chosen to contain products of only Pauli $X$ or Pauli $Z$ operators. For these codes the parity-check matrix can be chosen in the form: \begin{equation} H=\left(\begin{array}{c|c} G_{X} & 0\\ 0 & G_{Z} \end{array}\right),\label{eq:CSS} \end{equation} where the commutativity condition simplifies to $G_{X}G_{Z}^{T}=0$. The dimension of a quantum code is $k=n-\mathop{\rm rank} H$; for a CSS code this simplifies to $k=n-\mathop{\rm rank} G_{X}-\mathop{\rm rank} G_{Z}$. The distance $d$ of the quantum code is given by the minimum weight of an operator $U$ which commutes with all operators from the stabilizer $\mathscr{S}$, but is not a part of the stabilizer, $U\not\in \mathscr{S}$. In terms of the binary vector pairs $(\mathbf{a},\mathbf{b})$, this is equivalent to the minimum weight of the bitwise OR $(\mathbf{a}|\mathbf{b})$ of all pairs satisfying the symplectic orthogonality condition, \begin{equation} A_X \mathbf{b}+A_Z \mathbf{a}=0,\label{eq:symplectic} \end{equation} which are not linear combinations of the rows of $H$.
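As a concrete sanity check of these formulas (our own illustration, not part of the original text), the following sketch verifies the CSS commutativity condition $G_XG_Z^T=0\;(\mathop{\rm mod} 2)$ and the dimension formula $k=n-\mathop{\rm rank} G_X-\mathop{\rm rank} G_Z$ for the $[[7,1,3]]$ Steane code, which takes both $G_X$ and $G_Z$ equal to the parity-check matrix of the classical $[7,4,3]$ Hamming code; the helper name \texttt{gf2\_rank} is ours.

```python
import numpy as np

def gf2_rank(m):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    m = m.copy() % 2
    rank = 0
    for c in range(m.shape[1]):
        pivots = [r for r in range(rank, m.shape[0]) if m[r, c]]
        if not pivots:
            continue
        m[[rank, pivots[0]]] = m[[pivots[0], rank]]  # move pivot row up
        for r in range(m.shape[0]):
            if r != rank and m[r, c]:
                m[r] = (m[r] + m[rank]) % 2          # eliminate column c
        rank += 1
    return rank

# Parity-check matrix of the [7,4,3] Hamming code
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
GX = GZ = H                           # CSS code with G_X = G_Z
assert not ((GX @ GZ.T) % 2).any()    # commutativity: G_X G_Z^T = 0 (mod 2)
n = H.shape[1]
k = n - gf2_rank(GX) - gf2_rank(GZ)
print(n, k)  # 7 1 -- the [[7,1,3]] Steane code
```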
\section{Toric codes and rotated toric codes} \subsection{Canonical construction} We consider the toric codes\cite{kitaev-anyons} in the restricted sense, with qubits located on the bonds of a square lattice $L_\xi\times L_\eta$, with periodic boundary conditions along the directions $\xi$ and $\eta$. The stabilizer generators $A_i\equiv \prod_{j\in\square_i}X_j $ and $B_i\equiv \prod_{j\in +_i}Z_j$ are formed as the products of $X_j$ around each plaquette, and $Z_j$ around each vertex (this defines a CSS code). The corresponding block length is $n=2L_\xi L_\eta$, and there are $r_A=r_B=L_\xi L_\eta-1$ independent generators of each kind, which leaves us with a code of dimension $k=n-r_A-r_B=2$. This code is degenerate: the degeneracy group is formed by products of the generators $A_i$, $B_i$; its elements can be visualized as (topologically trivial) loops drawn on the original lattice (in the case of products of $A_i$), or on the dual lattice in the case of products of $B_i$. The two sets of logical operators are formed as the products of $X$ ($Z$) operators along the topologically non-trivial lines formed by the bonds of the original (dual) lattice (see Fig.~\ref{fig:toric}). The code distance $d=\min(L_\xi,L_\eta)$ is given by the minimal weight of such operators. \begin{figure} \caption{(Color online) Lattice representing the canonical toric code $[[50,2,5]]$. The generators $A_i$ are formed by Pauli $X$ operators around a plaquette (blue square) while the generators $B_i$ are formed by Pauli $Z$ operators around a vertex (red square). The dashed horizontal blue line and the vertical red line represent a pair of mutually conjugate logical operators formed by the products of $X$ and $Z$, respectively.
Shading corresponds to an alternative checkerboard representation of the underlying lattice.} \label{fig:toric} \end{figure} \subsection{Checkerboard codes \protect\cite{Bombin-2007}} \label{sec:checkerboard} In the following, it will be convenient to consider a lattice with qubits placed on the vertices. Then, if we color every other plaquette to form a checkerboard pattern, we can define the operators $A_i$ as products of $X$ operators around the colored plaquettes, and the operators $B_i$ as products of $Z$ operators around the white plaquettes (see Fig.~\ref{fig:checker}, Left). Now, the checkerboard code with $n=L_x L_y$, where both $L_x$ and $L_y$ are even, can be defined by taking periodic boundary conditions on the sides of a rectangle of size $L_x\times L_y$. This condition ensures that we can maintain a consistent checkerboard pattern. Then, the product of all $A_i$ (or of all $B_i$) gives the identity. Thus, the stabilizer is formed by $n-2$ independent generators, which again gives $k=2$ as in the regular toric codes. The two sets of logical operators are formed by the products of $X$ operators along the topologically non-trivial paths drawn through the colored areas, and the products of $Z$ operators along the topologically non-trivial paths through the white areas (see Fig.~\ref{fig:checker}, Left). The distance of the code, $d=\min(L_x,L_y)$, corresponds to the shortest topologically non-trivial chain of qubits, graphically, a horizontal or a vertical straight line. \begin{figure} \caption{Left: Lattice representation of the checkerboard code $[[16,2,4]]$. Qubits are placed on the lattice vertices; dashed blue and red lines represent a pair of logical operators as in Fig.~\ref{fig:toric}. Right: the smallest odd-distance rotated checkerboard code $[[10,2,3]]$, with periodicity vectors $\mathbf{L}_1=(3,1)$ and $\mathbf{L}_2=(-1,3)$.} \label{fig:checker} \end{figure} \subsection{Checkerboard codes with arbitrary rotation} Compared to the regular toric codes, the checkerboard codes use half as many qubits with the same $k$ and distance. The disadvantage is that the distance is always even.
This latter restriction can be lifted by introducing periodicity vectors which are not necessarily parallel to the bonds of the lattice. (Note that a similar trick was used in early small-cluster exact diagonalization studies of the Hubbard model\cite{Dagotto-Joynt-etal-1990}). Let us define two integer-valued periodicity vectors $\mathbf{L}_i=(a_i,b_i)$, $i=1,2$, and identify all points on the lattice which can be connected by a vector of the form $m_1 \mathbf{L}_1+m_2 \mathbf{L}_2$, with integer $m_i$. The checkerboard pattern is preserved iff both $\|\mathbf{L}_i\|_1\equiv |a_i|+|b_i|$ are even, $i=1,2$. Such a cluster contains \begin{equation} n=|\mathbf{L}_1\times \mathbf{L}_2|=|a_1 b_2 -b_1 a_2 | \label{eq:cluster-size} \end{equation} vertices, and, again, we have $k=2$ as for the standard checkerboard codes. Since the qubits in the positions shifted by $\mathbf{L}_i$ are the same, it is easy to see that our code is identical to that on a cluster with periodicity vectors, e.g., $\mathbf{L}_1$, $\mathbf{L}_1+\mathbf{L}_2$, and, generally, a cluster with periodicity vectors $\mathbf{L}_i'=g_{ij}\mathbf{L}_j$, where the integer-valued matrix $g_{ij}$ has the determinant $\det g=\pm 1$. For a periodicity vector $\mathbf{L}=(a,b)$ with $a+b$ even, the shortest topologically non-trivial qubit chain has $\|\mathbf{L}\|_\infty\equiv \max(|a|,|b|)$ operators which leads to the code distance: \begin{equation} d(\mathbf{L}_1,\mathbf{L}_2)=\min_{m_1,m_2} \|m_1\mathbf{L}_1+m_2\mathbf{L}_2\|_\infty. \label{eq:dist-checker} \end{equation} \begin{example} A family of near-optimal odd-distance checkerboard codes can be introduced by taking $\mathbf{L}_1=(2t+1,1)$, $\mathbf{L}_2=(-1,2t+1)$, $t=1,2,\ldots$. Such codes have the parameters $[[1+(2t+1)^2, 2,2t+1]]$; explicitly: $[[10,2,3]]$ (illustrated in Fig.~\ref{fig:checker}, Right), $[[26,2,5]]$, $[[50,2,7]]$, \ldots. 
\end{example} \begin{example} \label{ex:toric} The original toric codes are recovered by taking $\mathbf{L}_i$ along the diagonals, $\mathbf{L}_1=(L_\xi,L_\xi)$, $\mathbf{L}_2=(-L_\eta,L_\eta)$, so that $\|\mathbf{L}_i\|_1$ are always even, thus $n=2L_\xi L_\eta$, $k=2$, and $d=\min(L_\xi,L_\eta)$. For odd distances, taking $L_\xi=L_\eta=d$, we have the codes $[[18,2,3]]$, $[[50,2,5]]$, $[[98,2,7]]$, \ldots. \end{example} \subsection{Non-bipartite rotated toric codes} We now construct a version of rotated toric codes on clusters with at least one of the periodicity vectors $\mathbf{L}_i$ violating the checkerboard pattern, e.g., $\|\mathbf{L}_1\|_1$ odd. Since the checkerboard pattern cannot be maintained, we define all stabilizer generators identically, in a non-CSS form: on each plaquette, the generator $G_i=ZXXZ$ is given by the product of $Z$ operators along one diagonal and $X$ operators along the other diagonal. With periodic boundary conditions, the product of all $G_i$ is the identity, and this is the only relation between these operators on a non-bipartite cluster. Thus, here we have only one encoded qubit, $k=1$. The operators $G_i$ can be viewed as local Clifford (LC) transformed $A_i$ or $B_i$ operators of the toric code. It is easy to see that the logical operators have to correspond to topologically non-trivial closed chains of qubits, as for the bipartite case. However, in order to close the loop, we have to take only the translation vectors with $\|\mathbf{L}\|_1$ even. For example, if $\|\mathbf{L}_1\|_1$ is odd and particularly small, the minimal chain could wrap twice around the direction given by $\mathbf{L}_1$. Since the two turns could share some of the qubits, it is difficult to come up with a general expression for the distance. \begin{example} \label{ex:non-bipartite} Checkerboard-like codes can be obtained by taking $L_x$ or $L_y$ odd.
Smallest codes in this family correspond to $L_x=L_y=d$; they have parameters $[[d^2,1,d]]$, where $d=2t+1$. Explicitly, $[[9,1,3]]$, $[[25,1,5]]$, $[[49,1,7]]$, \ldots. \end{example} \begin{example} \label{ex:toric-rotated} A family of smallest odd-distance rotated toric codes \cite{Kovalev-Dumer-Pryadko-2011} is obtained for $\mathbf{L}_1=(t+1,t)$, $\mathbf{L}_2=(-t,t+1)$, $t=1,2,\ldots$. These codes have the parameters $[[t^2+(t+1)^2,1,2t+1]]$. Explicitly, $[[5,1,3]]$, $[[13,1,5]]$, $[[25,1,7]]$, $[[41,1,9]]$, \ldots. \end{example} \section{Generalized toric and checkerboard codes} \subsection{Algebraic representation of hypergraph-product codes} \label{sec:algebraic} The finite-rate generalization\cite{Tillich2009} of the toric code relies on hypergraph theory, with the square lattice generalized to a product of hypergraphs (each corresponding to a parity check matrix of a classical binary code). We first recast the original construction into an algebraic language. Let $\mat{H}_1$ (dimensions $r_{1}\times n_{1}$) and $\mat{H}_2$ (dimensions $r_{2}\times n_{2}$) be two binary matrices. The associated (hypergraph-product) quantum code\cite{Tillich2009} is a CSS code with the stabilizer generators \begin{equation} \begin{array}{c} \displaystyle G_{X}=(E_{2}\otimes\mathcal{H}_{1},\mathcal{H}_{2}\otimes E_{1}),\\ \displaystyle G_{Z}=(\mathcal{H}_{2}^{T}\otimes\widetilde{E}_{1},\widetilde{E}_{2} \otimes\mathcal{H}_{1}^{T}). \end{array}\label{eq:Till} \end{equation} Here each matrix is composed of two blocks constructed as Kronecker products (denoted with ``$\otimes$''), and $E_i$ and $\widetilde{E}_i$, $i=1,2$, are unit matrices of dimensions given by $r_i$ and $n_i$, respectively. The matrices $G_X$ and $G_Z$, respectively, have $r_1r_2$ and $n_1n_2$ rows (not all of the rows are linearly independent), and they both have $n\equiv r_2 n_1+r_1 n_2$ columns, which gives the block length of the quantum code. 
The commutativity condition $G_X G_Z^T=0$ is obviously satisfied by Eq.~(\ref{eq:Till}) since the Kronecker product obeys $(A\otimes B)(C\otimes D)=AC \otimes BD$. Note that the construction~(\ref{eq:Till}) is somewhat similar to the product codes introduced by Grassl and R\"otteler\cite{Grassl-Rotteler-2005}. The main difference is that here the check matrix and not the generator matrix is written in terms of direct products. The parameters $[[n,k,d]]$ of the quantum code thus constructed are determined by those of the four classical codes which use the matrices $\mat{H}_1$, $\mat{H}_2$, $\mat{H}_1^T$, and $\mat{H}_2^T$ as the parity-check matrices. The corresponding parameters are introduced as \begin{equation} \label{eq:params} \mat{C}_{\mat{H}_i}=[n_i,k_i,d_i],\quad \mat{C}_{\mat{H}_i^T}=[{\widetilde n}_i,\widetilde{k}_i,\widetilde{d}_i],\quad i=1,2, \end{equation} where we use the convention \cite{Tillich2009} that the distance $d_i(\widetilde{d}_i)= \infty $ if $k_i(\widetilde{k}_i)=0$. The matrices $\mat{H}_i$ are arbitrary, and are allowed to have linearly-dependent rows and/or columns. As a result, both $k_i=n_i-\mathop{\rm rank}\mat{H}_i$ and $\widetilde k_i=\widetilde n_i-\mathop{\rm rank}\mat{H}_i$ can be non-zero at the same time; the block length of the ``transposed'' code $\mat{C}_{\mat{H}_i^T}$ is given by the number of rows of $\mat{H}_i$, i.e., $\widetilde n_i=r_i$. Specifically, for the hypergraph-product code (\ref{eq:Till}), we have $n=r_2n_1+r_1n_2$, $k=2k_1k_2-k_1s_2-k_2s_1$ with $s_i=n_i-r_i,\, i=1,2 \,$ (Theorem 7 from Ref.~\cite{Tillich2009}), while the distance $d$ satisfies the conditions $d\ge \min(d_1,d_2,\widetilde d_1, \widetilde d_2)$ (Theorem 9 from Ref.~\cite{Tillich2009}), and two upper bounds (Lemma 10 from Ref.~\cite{Tillich2009}): if $k_1>0$ and $\widetilde k_2>0$, then $d\le d_1$; if $k_2>0$ and $\widetilde k_1>0$, then $d\le d_2$.
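As an illustration (our own sketch, not from Ref.~\cite{Tillich2009}), the check matrices of Eq.~(\ref{eq:Till}) can be assembled directly from Kronecker products. Taking $\mat{H}_1$ to be the full-rank parity-check matrix of the $[3,1,3]$ repetition code and $\mat{H}_2=\mat{H}_1^T$ yields a $[[13,1,3]]$ code, whose block length and dimension we verify numerically; the helper name \texttt{gf2\_rank} is ours.

```python
import numpy as np

def gf2_rank(m):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    m = m.copy() % 2
    rank = 0
    for c in range(m.shape[1]):
        pivots = [r for r in range(rank, m.shape[0]) if m[r, c]]
        if not pivots:
            continue
        m[[rank, pivots[0]]] = m[[pivots[0], rank]]
        for r in range(m.shape[0]):
            if r != rank and m[r, c]:
                m[r] = (m[r] + m[rank]) % 2
        rank += 1
    return rank

# H1: full-rank parity-check matrix of the [3,1,3] repetition code
H1 = np.array([[1, 1, 0],
               [0, 1, 1]])
H2 = H1.T
(r1, n1), (r2, n2) = H1.shape, H2.shape
# G_X = (E_2 (x) H_1, H_2 (x) E_1),  G_Z = (H_2^T (x) E~_1, E~_2 (x) H_1^T)
GX = np.hstack([np.kron(np.eye(r2, dtype=int), H1),
                np.kron(H2, np.eye(r1, dtype=int))])
GZ = np.hstack([np.kron(H2.T, np.eye(n1, dtype=int)),
                np.kron(np.eye(n2, dtype=int), H1.T)])
assert not ((GX @ GZ.T) % 2).any()   # commutativity G_X G_Z^T = 0 (mod 2)
n = r2 * n1 + r1 * n2                # block length: 3*3 + 2*2 = 13
k = n - gf2_rank(GX) - gf2_rank(GZ)
print(n, k)  # 13 1 -- a [[13,1,3]] hypergraph-product code
```

The ranks over GF(2) found here agree with Proposition~\ref{prop:full}: $\mathop{\rm rank} G_X=r_1r_2-\widetilde k_1\widetilde k_2=6$ and $\mathop{\rm rank} G_Z=n_1n_2-k_1k_2=6$.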
These parameters can also be readily established from the stabilizer generators in the form of Eq.~(\ref{eq:Till}). For example, the dimension of the quantum code follows from \begin{statement}\label{prop:full} The number of linearly independent rows in the matrices $G_X$ and $G_Z$ given by Eq.~(\ref{eq:Till}) is $\mathop{\rm rank} G_X=r_{1}r_{2}-\widetilde{k}_{1}\widetilde{k}_{2}$ and $\mathop{\rm rank} G_Z =n_{1}n_{2}-k_{1}k_{2}$. \end{statement} \begin{proof} The matrices $G_X$ and $G_Z$ have $r_1r_2$ and $n_1n_2$ rows, respectively. To count the number of linearly-dependent rows in $G_X$, we notice that the equations $(a^T\otimes b^T)\cdot (E_2\otimes \mat H_1)=0$ and $(a^T\otimes b^T)\cdot (\mat H_2\otimes E_1)=0$ are both satisfied iff $a\in \mat C_{\mat H_2^T}$ and $b\in \mat C_{\mat H_1^T}$; thus there are $\widetilde k_1 \widetilde k_2$ linear relations between the rows of $G_X$, and we are left with $r_1r_2-\widetilde k_1 \widetilde k_2$ linearly-independent rows. Similarly, there are $n_1n_2-k_1k_2$ linearly independent rows in $G_Z$. \end{proof} To prove the lower bound on the distance, consider a vector $\mathbf{u}$ such that $G_X \cdot \mathbf{u}=0$ and $\mathop{\rm wgt}(\mathbf{u})<d$. We construct a quantum code in the form (\ref{eq:Till}) from the matrices $\mat{H}_1'$, $\mat{H}_2'$ formed only by the columns of the respective $\mat{H}_i$, $i=1, 2$, that are involved in the product $G_X \cdot \mathbf{u}$. According to Proposition~\ref{prop:full}, the reduced code has $k=0$, so that the reduced vector $\mathbf{u}'$, which satisfies $G_X'\cdot \mathbf{u}'=0$, has to be a linear combination of the rows of $G_Z'$. The rows of $G_Z'$ are a subset of those of $G_Z$, with some all-zero columns removed; thus the full vector $\mathbf{u}$ is also a linear combination of the rows of $G_Z$. Similarly, a vector $\mathbf{v}$ such that $G_Z \cdot \mathbf{v}=0$ and $\mathop{\rm wgt}(\mathbf{v})<d$ is a linear combination of the rows of $G_X$.
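Proposition~\ref{prop:full} can also be confirmed numerically. The sketch below (ours, with a self-written GF(2) rank routine) checks it for the toric code obtained from the full $3\times3$ circulant of the repetition code, where $k_i=\widetilde k_i=1$.

```python
# Sketch (ours): numerical check of rank G_X = r1*r2 - k~1*k~2 and
# rank G_Z = n1*n2 - k1*k2 for the distance-3 toric code.

def kron(A, B):
    return [[a & b for a in ra for b in rb] for ra in A for rb in B]

def eye(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def hstack(A, B):
    return [ra + rb for ra, rb in zip(A, B)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def rank2(M):
    """GF(2) rank via elimination on rows packed into integers."""
    pivots = {}                      # leading-bit position -> basis row
    for row in M:
        v = int("".join(map(str, row)), 2)
        while v:
            h = v.bit_length() - 1
            if h not in pivots:
                pivots[h] = v
                break
            v ^= pivots[h]
    return len(pivots)

# Full circulant of the [3,1,3] repetition code: n = r = 3, k = k~ = 1.
H = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
GX = hstack(kron(eye(3), H), kron(H, eye(3)))
GZ = hstack(kron(transpose(H), eye(3)), kron(eye(3), transpose(H)))
assert rank2(GX) == 3 * 3 - 1 * 1            # r1*r2 - k~1*k~2 = 8
assert rank2(GZ) == 3 * 3 - 1 * 1            # n1*n2 - k1*k2 = 8
n = len(GX[0])                               # block length, 18
assert n - rank2(GX) - rank2(GZ) == 2        # the toric code encodes 2 qubits
```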
The upper bound is established by considering vectors $\mathbf{u}\equiv (\mathbf{e}\otimes \mathbf{c},0)$ with $\mathbf{c}\in\mathcal{C}_{\mat{H}_1}$, which requires $k_{1}>0$. A vector $\mathbf{e}$ with $\mathop{\rm wgt}(\mathbf{e})=1$, for which $\mathbf{u}$ is not a linear combination of rows of $G_{Z}$, exists only when $\widetilde{k}_{2}>0$. The other upper bound is established by considering vectors $(0,\mathbf{c}\otimes \mathbf{e})$ with $\mathbf{c}\in\mathcal{C}_{\mat{H}_2}$. \subsection{Original code family from full-rank matrices} \label{sec:orig} In Ref.~\cite{Tillich2009}, only one large family of quantum codes based on the hypergraph-product ansatz~(\ref{eq:Till}) is given. Namely, the matrix $\mathcal{H}_{1}$ is taken as a full-rank parity-check matrix of a binary LDPC code with parameters $\mathcal{C}_{\mat{H}_1}=[n_1,k_1,d_1]$ ($r_1=n_1-k_1$), so that the transposed code has dimension zero, $\widetilde k_1=0$. The second matrix is taken as $\mathcal{H}_{2}=\mathcal{H}_{1}^{T}$, so that $\mathcal{C}_{\mat{H}_2^T}=\mathcal{C}_{\mat{H}_1}$. Then Eq.~(\ref{eq:Till}) defines a quantum LDPC code with parameters \begin{equation} \mathcal{Q}^\mathrm{orig}=[[(n_1-k_1)^{2}+n_1^{2},k_1^{2},d_1]], \label{eq:par-orig} \end{equation} where the weight of each row of $G_X$, $G_Z$ equals the sum of the row weight and the column weight of $\mathcal{H}_1$. \begin{example} Let $\mathcal{H}_1$ be a parity-check matrix of the repetition code $[d,1,d]$. Then the quantum code has the parameters $[[2d^2-2d+1,1,d]]$. Explicitly, $[[13,1,3]]$, $[[25,1,4]]$, $[[41,1,5]]$, \ldots; these parameters are inferior to those of the toric code families, cf.\ Examples \ref{ex:non-bipartite}, \ref{ex:toric-rotated}.
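The comparison in the last example can be checked mechanically. The following sketch (ours) compares the block lengths of the two single-qubit families at equal odd distance $d=2t+1$: $2d^2-2d+1$ for Eq.~(\ref{eq:par-orig}) versus $t^2+(t+1)^2=(d^2+1)/2$ for the rotated toric codes of Example~\ref{ex:toric-rotated}.

```python
# Sketch (ours): at equal odd distance d = 2t+1, compare block lengths of
# the full-rank family (n = 2d^2 - 2d + 1) and the rotated toric family
# (n = t^2 + (t+1)^2 = (d^2 + 1)/2).
for t in range(1, 5):
    d = 2 * t + 1
    n_orig = 2 * d * d - 2 * d + 1
    n_rot = t * t + (t + 1) * (t + 1)
    assert n_rot == (d * d + 1) // 2
    assert n_orig > n_rot          # full-rank family needs ~4x more qubits
```

For $t=1$ ($d=3$) this reproduces the explicit pair $[[13,1,3]]$ versus $[[5,1,3]]$ quoted in the text.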
\end{example} \subsection{Code family from square matrices} \label{sec:squar} Instead of using full-rank parity-check matrices\cite{Tillich2009}, let us start with a pair of binary codes with square parity-check matrices $\mat{H}_i$, such that $\widetilde d_1=d_1$, $\widetilde d_2=d_2$. Then, automatically, $\widetilde k_i=k_i=n_i-\mathop{\rm rank} \mat{H}_i$. The hypergraph-product ansatz~(\ref{eq:Till}) gives the code with the parameters \begin{equation} \mathcal{Q}^{\rm square}=[[2n_1 n_2, 2k_1k_2, \min(d_1,d_2)]].\label{eq:par-squar} \end{equation} Note that the rate $R=k/n$ of this family is up to twice that of the family originally suggested in Ref.~\cite{Tillich2009}, see Sec.~\ref{sec:orig}. \begin{example} The standard toric codes are recovered by taking $\mat{H}_2=\mat{H}_1$ to be the full circulant matrix of a repetition code. The code parameters are $[[2d_1^2,2,d_1]]$, cf.\ Example \ref{ex:toric}. \end{example} We suggest two general ways to obtain suitable square parity-check matrices. First, if we start from an $[n_1,k_1,d_1]$ LDPC code with the full-rank parity-check matrix $P$, we can construct the following symmetric matrix, \begin{equation} \label{eq:symmetrization} \mat{H}_1^{\rm sym}=\left( \begin{array}[c]{cc} \mathbb1&P\\P^T&0 \end{array}\right), \end{equation} where $\mathbb1$ is the identity matrix of size $r_1=n_1-k_1$, so that the code $\mathcal{C}_{\mat{H}_1^{\rm sym}}$ is a $[2n_1-k_1,k_1,d_1]$ LDPC code. The second construction assumes that $\mathcal{C}_{\mat{H}_i}$ are cyclic LDPC codes. The full circulant matrices $\mat{H}_i$ are constructed from the coefficients of the check polynomials $h_i(x)$. The check polynomials of the transposed codes, $\widetilde h_i(x)=h_i(x^{n_i-1})\mathop{\rm mod} (x^{n_i}-1)$, are just the original check polynomials reversed, and the original and transposed codes have the same parameters.
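The symmetrization~(\ref{eq:symmetrization}) is straightforward to test numerically. The sketch below (ours; \texttt{symmetrize} is a hypothetical helper) applies it to the full-rank parity-check matrix of the $[3,1,3]$ repetition code and verifies that the result is square and symmetric, of size $2n_1-k_1=5$, with code dimension $k_1=1$.

```python
# Sketch (ours): build H_sym = [[1, P], [P^T, 0]] over GF(2) and check
# its size, symmetry, and the dimension of the resulting classical code.

def transpose(A):
    return [list(col) for col in zip(*A)]

def rank2(M):
    """GF(2) rank via elimination on rows packed into integers."""
    pivots = {}
    for row in M:
        v = int("".join(map(str, row)), 2)
        while v:
            h = v.bit_length() - 1
            if h not in pivots:
                pivots[h] = v
                break
            v ^= pivots[h]
    return len(pivots)

def symmetrize(P):
    """Square symmetric matrix [[I_r, P], [P^T, 0]] from an r x n matrix P."""
    r, n = len(P), len(P[0])
    top = [[int(i == j) for j in range(r)] + P[i] for i in range(r)]
    Pt = transpose(P)
    bottom = [Pt[i] + [0] * n for i in range(n)]
    return top + bottom

P = [[1, 1, 0], [0, 1, 1]]        # [3,1,3] repetition code, r1 = 2, n1 = 3
Hsym = symmetrize(P)
assert len(Hsym) == len(Hsym[0]) == 5     # square, size 2*n1 - k1 = 5
assert Hsym == transpose(Hsym)            # symmetric, so k~ = k, d~ = d
assert 5 - rank2(Hsym) == 1               # code dimension k1 = 1
```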
\subsection{Code family from symmetric matrices} If we have two symmetric parity-check matrices, $\mat{H}_i=\mat{H}_i^T$, $i=1,2$ [e.g., from Eq.~(\ref{eq:symmetrization})], the full hypergraph-product code~(\ref{eq:Till}) can be transformed into a direct sum of two independent codes, each with the following non-CSS check matrix \begin{equation} \label{eq:check-symm} H=(E_2\otimes\mat{H}_1|\mat{H}_2\otimes E_1), \quad \mat{H}_i^T=\mat{H}_i,\quad i=1,2. \end{equation} This gives the following \begin{theorem}\label{th:symmetrized}A quantum code in Eq.~(\ref{eq:check-symm}) has parameters \begin{equation} \mathcal{Q}^{\rm sym}=[[n_1 n_2, k_1k_2, \min(d_1,d_2)]].\label{eq:par-symm} \end{equation} \end{theorem} Thus, both the block length and the number of encoded qubits are reduced by half: the rate of Eq.~(\ref{eq:par-squar}) is kept while the relative distance is doubled. For a cyclic LDPC code $\mathcal{C}_{\mat{H}}$ with a \emph{palindromic} check polynomial, $x^{\deg h(x)}h(1/x)=h(x)$, such that $n-\deg h(x)$ is even, we can always construct a symmetric circulant matrix $\mat{H}$ from the polynomial $x^{[n-\deg h(x)]/2}h(x)$. \begin{example} If $\mathcal{H}_{1}=\mathcal{H}_{2}$ are symmetric check matrices of a cyclic $[n_1,k_1,d_1]$ code corresponding to a palindromic polynomial $h(x)$, then the quantum code has parameters $[[n_1^{2},k_1^{2},d_1]]$. In particular, for $n_1=17$ and $h(x)=1+x^3+x^4+x^5+x^6+x^9$ we obtain the $[[289,81,5,w=12]]$ code, and for $h(x)=1+x$ we recover the non-bipartite checkerboard codes from Example \ref{ex:non-bipartite}. \end{example} \subsection{Code family from two-tile codes} Finally, let us construct a generalization of the regular {}``bipartite'' checkerboard codes.
We start with a pair of binary codes with the parity check matrices of even size \begin{equation} \mathcal{H}_{1}=\tbinom{1 0}{0 1} \otimes a_{1}+\tbinom{0 1}{1 0}\otimes b_{1}, \quad\mathcal{H}_{2}^{p}=a_{2}\otimes\tbinom{1 0}{0 1}+b_{2}\otimes\tbinom{0 1}{1 0}, \label{eq:check-tiled} \end{equation} constructed from the half-size matrices ({}``tiles'') $a_{i}$, $b_{i}$ with the distances of the classical codes $\mathcal{C}_{\mathcal{H}_{i}}$ and $\mathcal{C}_{\mathcal{H}_{i}^{T}}$ given by $d_{i}$ and $\widetilde{d}_{i}$, $i=1,2$, where the check matrix $\mathcal{H}_{2}=\tbinom{1 0}{0 1} \otimes a_{2}+\tbinom{0 1}{1 0}\otimes b_{2}$ is equivalent to $\mathcal{H}_{2}^{p}$ and can be rendered to the latter form by row and column permutations. It is convenient to introduce notation for the dimensionality of symmetric subspaces of $\mathcal{C}_{\mathcal{H}_{1}}$ and $\mathcal{C}_{\mathcal{H}_{2}^p}$ containing only words of type ${1 \choose 1}\otimes\alpha_{1}$ and $\alpha_{2}\otimes{1 \choose 1}$ as $k_{i}^{s}\equiv n_{i}/2-\mathop{\rm rank} (a_{i}+b_{i})$, and for asymmetric subspaces as $k_{i}^{a}\equiv k_{i}-k_{i}^{s}$, $i=1,2$ (analogously we define $\widetilde{k}_{i}^{s}$ and $\widetilde{k}_{i}^{a}$). We define half-size CSS matrices {[}cf.~Eq.~(\ref{eq:Till}){]} \begin{equation} \begin{array}{c} G_{X}=(E_{2}^{({1/2})}\otimes\mathcal{H}_{1},\mathcal{H}_{2}^{p} \otimes E_{1}^{({1/2})}),\\ G_{Z}=(\mathcal{H}_{2}^{pT}\otimes\widetilde{E}_{1}^{({1/2})}, \widetilde{E}_{2}^{({1/2})}\otimes\mathcal{H}_{1}^{T}), \end{array}\label{eq:Tor1} \end{equation} where the identity matrices $E_{i}^{(1/2)}$, $\widetilde{E}_{i}^{(1/2)}$ have dimensions $r_{i}/2$, $n_{i}/2$, half-size compared to those in Eq.~(\ref{eq:Till}). 
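The equivalence of $\mathcal{H}_2$ and $\mathcal{H}_2^p$ stated above can be verified directly: the row and column permutations in question are perfect shuffles exchanging the block index with the intra-tile index. The sketch below (ours) checks this for a pair of arbitrary $2\times2$ binary tiles.

```python
# Sketch (ours): for tiles a, b, the matrices H = I (x) a + X (x) b and
# H^p = a (x) I + b (x) X are related by perfect-shuffle row/column
# permutations, where I = [[1,0],[0,1]] and X = [[0,1],[1,0]].

def kron(A, B):
    return [[x & y for x in ra for y in rb] for ra in A for rb in B]

def add2(A, B):
    """Entrywise sum over GF(2)."""
    return [[x ^ y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def shuffle(M, m):
    """Swap (block, intra-tile) index order to (intra-tile, block) on both
    rows and columns of a 2m x 2m matrix M."""
    return [[M[s * m + i][t * m + j] for j in range(m) for t in range(2)]
            for i in range(m) for s in range(2)]

I2 = [[1, 0], [0, 1]]
X2 = [[0, 1], [1, 0]]
a = [[1, 1], [0, 1]]               # arbitrary binary tiles
b = [[1, 0], [1, 1]]

H  = add2(kron(I2, a), kron(X2, b))   # block form [[a, b], [b, a]]
Hp = add2(kron(a, I2), kron(b, X2))
assert shuffle(H, 2) == Hp            # same matrix up to permutations
```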
\begin{statement}\label{th:lemma} The numbers of linearly independent rows in the matrices~(\ref{eq:Tor1}) are $\mathop{\rm rank} G_X=r_{1}r_{2}/2-\widetilde{k}_{1}^{s}\widetilde{k}_{2}^{s}- \widetilde{k}_{1}^{a}\widetilde{k}_{2}^{a}$ and $\mathop{\rm rank} G_Z =n_{1}n_{2}/2-k_{1}^{s}k_{2}^{s}-k_{1}^{a}k_{2}^{a}$. \end{statement} \begin{proof} To count the number of linearly-dependent rows in $G_X$, we notice that the equations $\upsilon^{T}\cdot(E_{2}^{({1/2})}\otimes\mathcal{H}_{1})=0$ and $\upsilon^{T}\cdot(\mathcal{H}_{2}^{p}\otimes E_{1}^{({1/2})})=0$ are both satisfied for the ansatz \begin{equation} \upsilon=\alpha_{1}\otimes{\alpha_3 \choose \alpha_4}+\alpha_{2}\otimes{\alpha_4 \choose \alpha_3}, \end{equation} if and only if either (i) $\alpha_{1}\neq\alpha_{2}$, $\alpha_{3}\neq\alpha_{4}$ and ${\alpha_1 \choose \alpha_2}\in\mathcal{C}_{\mathcal{H}_{2}^{T}}$, ${\alpha_3 \choose \alpha_4}\in\mathcal{C}_{\mathcal{H}_{1}^{T}}$, or (ii) $\upsilon=\alpha_{1}'\otimes{1 \choose 1}\otimes\alpha_{3}'$ and $\alpha_{1}'\in\mathcal{C}_{a_{2}^{T}+b_{2}^{T}}$, $\alpha_{3}'\in\mathcal{C}_{a_{1}^{T}+b_{1}^{T}}$; thus there are $\widetilde{k}_{1}^{s} \widetilde{k}_{2}^{s}+\widetilde{k}_{1}^{a}\widetilde{k}_{2}^{a}$ linear relations between the rows of $G_X$, and we are left with $\mathop{\rm rank} G_{X}=r_{1}r_{2}/2-\widetilde{k}_{1}^{s} \widetilde{k}_{2}^{s}-\widetilde{k}_{1}^{a}\widetilde{k}_{2}^{a}$ linearly-independent rows. Similarly, we prove that $\mathop{\rm rank} G_{Z}=n_{1}n_{2}/2-k_{1}^{s}k_{2}^{s}-k_{1}^{a}k_{2}^{a}$. \end{proof} \begin{theorem}\label{th:tiled} A quantum CSS code in Eqs.~(\ref{eq:check-tiled}) and (\ref{eq:Tor1}) has the parameters: \begin{equation} \begin{array}{c} n=(n_{1}r_{2}+n_{2}r_{1})/2,\\ k=2 k_{1}^s k_{2}^s+2 k_{1}^{a}k_{2}^{a}-k_{1}s_{2}/2-k_{2}s_{1}/2,\\ d\geq\min(d_{1}/2,d_{2}/2,\widetilde{d}_{1}/2,\widetilde{d}_{2}/2), \end{array}\label{eq:parameters} \end{equation} where $s_{i}=n_{i}-r_{i}$, $i=1,2$.
In addition, for $k_{1}>0$ and $\widetilde{k}_{2}>0$ the upper bound $d\le d_{1}$ holds, and for $k_{2}>0$ and $\widetilde{k}_{1}>0$ the upper bound $d\le d_{2}$ holds. \end{theorem} \begin{proof} The number of encoded qubits $k$ follows from Proposition \ref{th:lemma}. The lower bound on the distance can be established as for the original hypergraph-product codes in Sec.~\ref{sec:algebraic}, except that now the reduced binary check matrices $\mat{H}_1'$, $\mat{H}_2'$ should preserve the tiled form (\ref{eq:check-tiled}). Hence, for every column involved in the product $G_X\cdot \mathbf{u}$, we may need to insert two columns into the reduced matrices; thus we need $\mathop{\rm wgt}(\mathbf{u})<d/2$, which reduces the lower bound on the distance. The two upper bounds can be established by considering vectors $(\mathbf{e}\otimes \mathbf{c},0)$ with $\mathbf{c}\in\mathcal{C}_{\mathcal{H}_{1}}$ and $(0,\mathbf{c}\otimes \mathbf{e})$ with $\mathbf{c}\in\mathcal{C}_{\mathcal{H}_{2}}$, exactly as for the hypergraph-product codes in Sec.~\ref{sec:algebraic}. \end{proof} \begin{theorem} \label{th:tiled1} Suppose $a_{i}$ and $b_{i}$, $i=1,2$, in Eq.~(\ref{eq:check-tiled}) are such that $k_{i}^{a}=0$, $k_{i}^{s}\neq 0$, $r_{i}=n_{i}$, and the binary codes with generator matrices $a_{i}+b_{i}$ and $a_{i}^T+b_{i}^T$ are not distance-$1$ codes. Then the quantum code in Eq.~(\ref{eq:Tor1}) has parameters $[[n_{1}n_{2},2k_{1}k_{2},\min(d_{1},d_{2},\widetilde{d}_{1},\widetilde{d}_{2})]]$, cf.\ Eq.~(\ref{eq:par-squar}). \end{theorem} The proof is similar to that of Theorem \ref{th:tiled}. The additional restrictions on the binary codes guarantee that a vector $\mathbf{u}$ of weight less than $d$ can only overlap with columns of $\mat{H}_i$ in fewer than $d$ positions, even after the symmetric counterparts are added. If we start from distance-$d$ LDPC codes with half-size square parity-check matrices $\mat{H}_i^{(1/2)}$ [e.g., from Eq.
(\ref{eq:symmetrization})], then $a_i=\mat{H}_i^{(1/2)}+E^{(1/2)}$ and $b_i=E^{(1/2)}$ in Eq.~(\ref{eq:check-tiled}) lead to a distance-$2d$ code satisfying Theorem \ref{th:tiled1}. Alternatively, one can start with two cyclic LDPC codes with even block sizes $n_{i}$, $i=1,2$, and check polynomials $h_{i}(x)$ that divide $x^{n_{i}/2}-1$. The corresponding square circulant parity-check matrices $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ (and $\mathcal{H}_{2}^p$) satisfy~(\ref{eq:check-tiled}). The generator polynomials, \begin{equation} g_{i}(x)=(x^{n_{i}}-1)/h_{i}(x)=(x^{n_{i}/2}+1)\,(x^{n_{i}/2}-1)/h_{i}(x),\label{eq:generator-tiled} \end{equation} and their reversed counterparts show that $k_i^a=0$. \begin{example}If $\mathcal{H}_{1}$ is the square parity-check matrix of a cyclic $[n_1,k_1,d_1]$ code corresponding to a polynomial $h(x)$ that divides $1-x^{n_1/2}$, and $\mathcal{H}_{2}=\mathcal{H}_{1}$, then the quantum code has parameters $[[n_1^{2},2k_1^{2},d_1]]$. For $n_1=30$ and $h(x)=1+x+x^3+x^5$ we obtain the $[[900,50,14,w=8]]$ code. For $h(x)=1+x$, we recover the bipartite checkerboard codes from Sec.~\ref{sec:checkerboard}.\end{example} \section{Conclusions} We suggested several simple techniques to improve existing quantum LDPC codes, toric codes, and generalized toric codes with asymptotically finite rate (quantum hypergraph-product codes\cite{Tillich2009}). In the latter case we increased the rate of the code family originally proposed in Ref.~\cite{Tillich2009} by up to four times. \section*{Acknowledgment} We are grateful to I. Dumer and M. Grassl for multiple helpful discussions. This work was supported in part by the U.S. Army Research Office under Grant No.\ W911NF-11-1-0027, and by the NSF under Grant No.\ 1018935. \end{document}